[
  {
    "path": "README.md",
    "content": "Feature Pyramid Network on Caffe\n\nThis is an unofficial Caffe implementation of the paper Feature Pyramid Networks for Object Detection: https://arxiv.org/abs/1612.03144\n\n# results\n`The FPN (ResNet-50) end2end model is implemented without OHEM, trained on PASCAL VOC 2007+2012 trainval, and tested on VOC 2007 test.`\n\nmerged rcnn\n\n|mAP@0.5|aeroplane|bicycle|bird|boat|bottle|bus|car|cat|chair|cow|\n|:--:|:-------:| -----:| --:| --:|-----:|--:|--:|--:|----:|--:|\n|0.788|0.8079| 0.8036| 0.8010| 0.7293|0.6743|0.8680|0.8766|0.8967|0.6122|0.8646|\n\n|diningtable|dog |horse|motorbike|person |pottedplant|sheep|sofa|train|tv|\n|----------:|:--:|:---:| -------:| -----:| -------:|----:|---:|----:|--:|\n|0.7330|0.8855|0.8760| 0.8063| 0.7999| 0.5138|0.7905|0.7755|0.8637|0.7736|\n\nshared rcnn\n\n|mAP@0.5|aeroplane|bicycle|bird|boat|bottle|bus|car|cat|chair|cow|\n|:--:|:-------:| -----:| --:| --:|-----:|--:|--:|--:|----:|--:|\n|0.7833|0.8585| 0.8001| 0.7970| 0.7174|0.6522|0.8668|0.8768|0.8929|0.5842|0.8658|\n\n|diningtable|dog |horse|motorbike|person |pottedplant|sheep|sofa|train|tv|\n|----------:|:--:|:---:| -------:| -----:| -------:|----:|---:|----:|--:|\n|0.7022|0.8891|0.8680| 0.7991| 0.7944| 0.5065|0.7896|0.7707|0.8697|0.7653|\n\n# framework\nmerged rcnn framework\n\nNetwork overview: [link](http://ethereon.github.io/netscope/#/gist/c5334efdd667ce41d540e3697de2936c)\n\n![](merge_rcnn_framework.png)\n\nshared rcnn framework\n\nNetwork overview: [link](http://ethereon.github.io/netscope/#/gist/63c0281751afd1b2d50f4c2764b31a4e)\n\n![](framework.png)\n\n`The red and yellow layers share parameters.`\n\n# about the anchor size setting\nIn the paper the anchor setting is `ratios: [0.5, 1, 2], scales: [8]`.\n\nWith that setting and pyramid levels P2~P6 (strides 4, 8, 16, 32, 64), the anchor sizes are `[32, 64, 128, 256, 512]`. This suits the COCO dataset, which contains many small objects, but most PASCAL VOC objects fall in the `[128, 512]` size range.\n\nSo we use the anchor setting `ratios: [0.5, 1, 2], scales: [8, 16]`, which is very important for good results on the VOC dataset.\n\n# usage\nDownload the VOC 2007 and 2012 datasets and `ResNet50.caffemodel`, rename the model to `ResNet50.v2.caffemodel`, and copy it into place:\n\n```bash\ncp ResNet50.v2.caffemodel data/pretrained_model/\n```\n- OneDrive download: [link](https://onedrive.live.com/?authkey=%21AAFW2-FVoxeVRck&id=4006CBB8476FF777%2117887&cid=4006CBB8476FF777)\n\n`In my experiments the code requires about 10 GB of GPU memory for training and about 6 GB for testing. You can adjust the image size, minibatch size, and RCNN batch size to fit your GPUs.`\n\n### compile caffe & lib\n```bash\ncd caffe-fpn\nmkdir build\ncd build\ncmake ..\nmake -j16 all\ncd lib\nmake\n```\n### train & test\nshared rcnn\n```bash\n./experiments/scripts/FP_Net_end2end.sh 1 FPN pascal_voc\n./test.sh 1 FPN pascal_voc\n```\nmerged rcnn\n```bash\n./experiments/scripts/FP_Net_end2end_merge_rcnn.sh 0 FPN pascal_voc\n./test_mergercnn.sh 0 FPN pascal_voc\n```\nThe first argument (0 or 1) is the GPU id.\n\n### TODO List\n - [x] all tests passed\n - [x] evaluate object detection performance on voc\n - [x] evaluate merged rcnn version performance on voc\n\n### feature pyramid networks for object detection\n\nLin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2016). Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144.\n"
  },
  {
    "path": "caffe-fpn/.Doxyfile",
    "content": "# Doxyfile 1.8.8\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the config file\n# that follow. The default is UTF-8 which is also the encoding used for all text\n# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv\n# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv\n# for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = \"Caffe\"\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give viewer a\n# quick idea about the purpose of the project. 
Keep the description short.\n\nPROJECT_BRIEF          =\n\n# With the PROJECT_LOGO tag one can specify an logo or icon that is included in\n# the documentation. The maximum height of the logo should not exceed 55 pixels\n# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo\n# to the output directory.\n\nPROJECT_LOGO           =\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       = ./doxygen/\n\n# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise causes\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. 
Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = English\n\n# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. 
If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity):The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       =\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path\n# before files name in the file list and in the header files. If set to NO the\n# shortest path that makes the file name unique will be used\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. 
Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful is your file systems doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = NO\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. 
Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that rational rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = NO\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a\n# new page for each member. If set to NO, the documentation of a member will be\n# part of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 8\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines.\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. 
The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:\n# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:\n# Fortran. In the later case the parser tries to guess whether the code is fixed\n# or free formatted code, this is the default for Fortran type files), VHDL. 
For\n# instance to make doxygen treat .inc files as Fortran files (default is PHP),\n# and .f files as C (default is Fortran), use: inc=Fortran f=C.\n#\n# Note For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See http://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in\n# case of backward compatibilities issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by by putting a % sign in front of the word\n# or globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match functions declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). 
This also make the inheritance and collaboration\n# diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen to replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES, then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. 
Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = NO\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. 
If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = NO\n\n# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO\n# only classes defined in header files are included. 
Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. When set to YES local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. If set to NO only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. By default anonymous namespace\n# are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO these classes will be included in the various overviews. This option has\n# no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. 
If set to NO these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = YES\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. 
If set to YES the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. 
If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the\n# todo list. This list is created by putting \\todo commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the\n# test list. 
This list is created by putting \\test commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... \\endcond blocks.\n\nENABLED_SECTIONS       =\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES the list\n# will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. 
This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = YES\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. 
See also http://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. See also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = YES\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = NO\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. 
If set to NO doxygen will only warn about wrong or incomplete parameter\n# documentation, but not about the absence of documentation.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. Separate the files or directories with\n# spaces.\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  = ./include/caffe \\\n                         ./src/caffe\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. 
See the libiconv\n# documentation (see: http://www.gnu.org/software/libiconv) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories. If left blank the\n# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii,\n# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp,\n# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown,\n# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf,\n# *.qsf, *.as and *.js.\n\nFILE_PATTERNS          =\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = YES\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. 
This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                = ./src/caffe/test/ \\\n                         ./include/caffe/test/\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        =\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. 
If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       =\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             =\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. If lines are added\n# or removed, the anchors will not be placed correctly.\n\nINPUT_FILTER           =\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER ) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. 
when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). This can be useful if you have a project on for instance GitHub\n# and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE =\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. 
Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = YES\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# function all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES, then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see http://www.gnu.org/software/global/global.html). 
You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the config file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. Set to NO to disable this.\n# See also: Section \class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = YES\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. 
The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES doxygen will generate HTML output\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FILE_EXTENSION    = .html\n\n# The HTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML you need a header file that includes any scripts and style\n# sheets that doxygen needs, which is dependent on the configuration options\n# used (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start\n# with a default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. 
See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            =\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            =\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# cascading style sheets that are included after the standard style sheets\n# created by doxygen. 
Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet files to the output directory.\n# Note: The order of the extra stylesheet files is of importance (e.g. the last\n# stylesheet in the list overrules the setting of the previous ones in the\n# list). For an example see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  =\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the stylesheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# http://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. 
A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = YES\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries 1 will produce a fully collapsed tree by default. 
0 is a special value\n# representing an infinite number of entries and will result in a fully expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: http://developer.apple.com/tools/xcode/), introduced with\n# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html\n# for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. This should be a reverse domain-name style\n# string, e.g. 
com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler ( hhc.exe). 
If non-empty\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls if a separate .chi index file is generated (\n# YES) or that it should be included in the master .chm file ( NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING is used to encode HtmlHelp index ( hhk), content ( hhc)\n# and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated (\n# YES) or a normal table of contents ( NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. 
The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. 
Qt Help Project / Filter Attributes (see:\n# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated together with the HTML files; together they form an Eclipse help\n# plugin. To install this plugin and make it available under the help contents\n# menu in Eclipse, the contents of the directory containing the HTML and XML\n# files needs to be copied into the plugins directory of Eclipse. The name of\n# the directory within the plugins directory should be the same as the\n# ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. 
Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = NO\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = NO\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANSPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. 
Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes have effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# http://www.mathjax.org) which uses client-side JavaScript for the rendering\n# instead of using prerendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want the formulas to look prettier in the HTML output.\n# When enabled you may also need to install MathJax separately and configure the\n# path to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. 
However, it is strongly recommended to install a local copy of\n# MathJax from http://www.mathjax.org before deployment.\n# The default value is: http://cdn.mathjax.org/mathjax/latest.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = http://www.mathjax.org/mathjax\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. 
The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. 
Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. 
Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of a\n# project to a relative location where the documentation can be found. The\n# format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = YES\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when enabling USE_PDFLATEX this option is only used for generating\n# bitmaps for formulas in the HTML output, but not in the Makefile that is\n# written to the output directory.\n# The default file is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         = latex\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# the index for LaTeX.\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# If the COMPACT_LATEX tag is set to YES doxygen generates more compact LaTeX\n# documents. 
This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. To get the times font for\n# instance you can specify\n# EXTRA_PACKAGES=times\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         = amsmath \\\n                         amsfonts \\\n                         xr\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber,\n# $projectbrief, $projectlogo. Doxygen will replace $title with the empty\n# string; for the replacement values of the other commands the user is referred\n# to HTML_HEADER.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. 
If it is left blank doxygen will generate a standard footer. See\n# LATEX_HEADER for more information on how to generate a default footer and what\n# special commands can be used inside the footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. Set this option to YES to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. 
This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. See\n# http://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES doxygen generates more compact RTF\n# documents. 
This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. The RTF file will contain links (just like the HTML\n# output) instead of page references. This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's config\n# file, i.e. a series of assignments. You only have to provide replacements,\n# missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's config file. A template extensions file can be generated\n# using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. 
A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n# If the DOCBOOK_PROGRAMLISTING tag is set to YES doxygen will include the\n# program listings (including syntax highlighting and cross-referencing\n# information) to the DOCBOOK output. 
Note that enabling this will significantly\n# increase the size of the DOCBOOK output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_PROGRAMLISTING = NO\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES doxygen will generate an AutoGen\n# Definitions (see http://autogen.sf.net) file that captures the structure of\n# the code including all documentation. Note that this feature is still\n# experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. 
On the other hand, if this tag is set to NO the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES doxygen will expand all macro names\n# in the source code. If set to NO only conditional compilation will be\n# performed. 
Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = NO\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = NO\n\n# If the SEARCH_INCLUDES tag is set to YES the include files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. 
To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             =\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      =\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). 
If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES all external class will be listed in the\n# class index. If set to NO only the inherited external classes will be listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed in\n# the modules index. If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. 
Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES, the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO.\n# The default value is: NO.\n\nHAVE_DOT               = NO\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. 
You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a different font in the dot files that doxygen generates you can\n# specify the font name using DOT_FONTNAME. You need to make sure dot is able to\n# find the font, which can be done by putting it in a standard location or by\n# setting the DOTFONTPATH environment variable or by setting DOT_FONTPATH to the\n# directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. 
If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES doxygen will generate inheritance and\n# collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. 
So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase 
the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a\n# graphical hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot.\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, jpg, gif and svg.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. 
If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the\n# path where java can find the plantuml.jar file. If left blank, it is assumed\n# PlantUML is not used or called during a preprocessing step. Doxygen will\n# generate a warning when it encounters a \\startuml command in this case and\n# will not generate output for the diagram.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nPLANTUML_JAR_PATH      =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES, doxygen will not show the graph at all. 
Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lie\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). 
This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = YES\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "caffe-fpn/.travis.yml",
    "content": "# Use a build matrix to do two builds in parallel:\n# one using CMake, and one using make.\nenv:\n  matrix:\n    - WITH_CUDA=false WITH_CMAKE=false WITH_IO=true\n    - WITH_CUDA=false WITH_CMAKE=true WITH_IO=true PYTHON_VERSION=3\n    - WITH_CUDA=true WITH_CMAKE=false WITH_IO=true\n    - WITH_CUDA=true WITH_CMAKE=true WITH_IO=true\n    - WITH_CUDA=false WITH_CMAKE=false WITH_IO=false\n    - WITH_CUDA=false WITH_CMAKE=true WITH_IO=false PYTHON_VERSION=3\n\nlanguage: cpp\n\n# Cache Ubuntu apt packages.\ncache:\n  apt: true\n  directories:\n  - /home/travis/miniconda\n  - /home/travis/miniconda2\n  - /home/travis/miniconda3\n\ncompiler: gcc\n\nbefore_install:\n  - export NUM_THREADS=4\n  - export SCRIPTS=./scripts/travis\n  - export CONDA_DIR=\"/home/travis/miniconda$PYTHON_VERSION\"\n\ninstall:\n  - sudo -E $SCRIPTS/travis_install.sh\n\nbefore_script:\n  - export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/cuda/lib64:$CONDA_DIR/lib\n  - export PATH=$CONDA_DIR/bin:$PATH\n  - if ! $WITH_CMAKE; then $SCRIPTS/travis_setup_makefile_config.sh; fi\n\nscript: $SCRIPTS/travis_build_and_test.sh\n\nnotifications:\n# Emails are sent to the committer's git-configured email address by default,\n# but only if they have access to the repository.  To enable Travis on your\n# public fork of Caffe, just go to travis-ci.org and flip the switch on for\n# your Caffe fork.  To configure your git email address, use:\n#     git config --global user.email me@example.com\n  email:\n    on_success: always\n    on_failure: always\n\n# IRC notifications disabled by default.\n# Uncomment next 5 lines to send notifications to chat.freenode.net#caffe\n#   irc:\n#     channels:\n#       - \"chat.freenode.net#caffe\"\n#     template:\n#       - \"%{repository}/%{branch} (%{commit} - %{author}): %{message}\"\n"
  },
  {
    "path": "caffe-fpn/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8.7)\nif(POLICY CMP0046)\n  cmake_policy(SET CMP0046 NEW)\nendif()\nif(POLICY CMP0054)\n  cmake_policy(SET CMP0054 NEW)\nendif()\n\n# ---[ Caffe project\nproject(Caffe C CXX)\n\n# ---[ Caffe version\nset(CAFFE_TARGET_VERSION \"1.0.0-rc3\")\nset(CAFFE_TARGET_SOVERSION \"1.0.0-rc3\")\nadd_definitions(-DCAFFE_VERSION=${CAFFE_TARGET_VERSION})\n\n# ---[ Using cmake scripts and modules\nlist(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules)\n\ninclude(ExternalProject)\n\ninclude(cmake/Utils.cmake)\ninclude(cmake/Targets.cmake)\ninclude(cmake/Misc.cmake)\ninclude(cmake/Summary.cmake)\ninclude(cmake/ConfigGen.cmake)\n\n# ---[ Options\ncaffe_option(CPU_ONLY  \"Build Caffe without CUDA support\" OFF) # TODO: rename to USE_CUDA\ncaffe_option(USE_CUDNN \"Build Caffe with cuDNN library support\" ON IF NOT CPU_ONLY)\ncaffe_option(BUILD_SHARED_LIBS \"Build shared libraries\" ON)\ncaffe_option(BUILD_python \"Build Python wrapper\" ON)\nset(python_version \"2\" CACHE STRING \"Specify which Python version to use\")\ncaffe_option(BUILD_matlab \"Build Matlab wrapper\" OFF IF UNIX OR APPLE)\ncaffe_option(BUILD_docs   \"Build documentation\" ON IF UNIX OR APPLE)\ncaffe_option(BUILD_python_layer \"Build the Caffe Python layer\" ON)\ncaffe_option(USE_OPENCV \"Build with OpenCV support\" ON)\ncaffe_option(USE_LEVELDB \"Build with levelDB\" ON)\ncaffe_option(USE_LMDB \"Build with lmdb\" ON)\ncaffe_option(ALLOW_LMDB_NOLOCK \"Allow MDB_NOLOCK when reading LMDB files (only if necessary)\" OFF)\n\n# ---[ Dependencies\ninclude(cmake/Dependencies.cmake)\n\n# ---[ Flags\nif(UNIX OR APPLE)\n  set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fPIC -Wall\")\nendif()\n\nif(USE_libstdcpp)\n  set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -stdlib=libstdc++\")\n  message(\"-- Warning: forcing libstdc++ (controlled by USE_libstdcpp option in cmake)\")\nendif()\n\nadd_definitions(-DGTEST_USE_OWN_TR1_TUPLE)\n\n# ---[ 
Warnings\ncaffe_warnings_disable(CMAKE_CXX_FLAGS -Wno-sign-compare -Wno-uninitialized)\n\n# ---[ Config generation\nconfigure_file(cmake/Templates/caffe_config.h.in \"${PROJECT_BINARY_DIR}/caffe_config.h\")\n\n# ---[ Includes\nset(Caffe_INCLUDE_DIR ${PROJECT_SOURCE_DIR}/include)\ninclude_directories(${Caffe_INCLUDE_DIR} ${PROJECT_BINARY_DIR})\ninclude_directories(BEFORE src) # This is needed for gtest.\n\n# ---[ Subdirectories\nadd_subdirectory(src/gtest)\nadd_subdirectory(src/caffe)\nadd_subdirectory(tools)\nadd_subdirectory(examples)\nadd_subdirectory(python)\nadd_subdirectory(matlab)\nadd_subdirectory(docs)\n\n# ---[ Linter target\nadd_custom_target(lint COMMAND ${CMAKE_COMMAND} -P ${PROJECT_SOURCE_DIR}/cmake/lint.cmake)\n\n# ---[ pytest target\nif(BUILD_python)\n  add_custom_target(pytest COMMAND python${python_version} -m unittest discover -s caffe/test WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/python )\n  add_dependencies(pytest pycaffe)\nendif()\n\n# ---[ Configuration summary\ncaffe_print_configuration_summary()\n\n# ---[ Export configs generation\ncaffe_generate_export_configs()\n"
  },
  {
    "path": "caffe-fpn/CONTRIBUTING.md",
    "content": "# Contributing\n\n## Issues\n\nSpecific Caffe design and development issues, bugs, and feature requests are maintained by GitHub Issues.\n\n_Please do not post usage, installation, or modeling questions, or other requests for help to Issues._\nUse the [caffe-users list](https://groups.google.com/forum/#!forum/caffe-users) instead. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe.\n\nWhen reporting a bug, it's most helpful to provide the following information, where applicable:\n\n* What steps reproduce the bug?\n* Can you reproduce the bug using the latest [master](https://github.com/BVLC/caffe/tree/master), compiled with the `DEBUG` make option?\n* What hardware and operating system/distribution are you running?\n* If the bug is a crash, provide the backtrace (usually printed by Caffe; always obtainable with `gdb`).\n\nTry to give your issue a title that is succinct and specific. The devs will rename issues as needed to keep track of them.\n\n## Pull Requests\n\nCaffe welcomes all contributions.\n\nSee the [contributing guide](http://caffe.berkeleyvision.org/development.html) for details.\n\nBriefly: read commit by commit, a PR should tell a clean, compelling story of _one_ improvement to Caffe. In particular:\n\n* A PR should do one clear thing that obviously improves Caffe, and nothing more. Making many smaller PRs is better than making one large PR; review effort is superlinear in the amount of code involved.\n* Similarly, each commit should be a small, atomic change representing one step in development. PRs should be made of many commits where appropriate.\n* Please do rewrite PR history to be clean rather than chronological. Within-PR bugfixes, style cleanups, reversions, etc. should be squashed and should not appear in merged PR history.\n* Anything nonobvious from the code should be explained in comments, commit messages, or the PR description, as appropriate.\n"
  },
  {
    "path": "caffe-fpn/CONTRIBUTORS.md",
    "content": "# Contributors\n\nCaffe is developed by a core set of BVLC members and the open-source community.\n\nWe thank all of our [contributors](https://github.com/BVLC/caffe/graphs/contributors)!\n\n**For the detailed history of contributions** of a given file, try\n\n    git blame file\n\nto see line-by-line credits and\n\n    git log --follow file\n\nto see the change log even across renames and rewrites.\n\nPlease refer to the [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements) on the Caffe site for further details.\n\n**Copyright** is held by the original contributor according to the versioning history; see LICENSE.\n"
  },
  {
    "path": "caffe-fpn/INSTALL.md",
    "content": "# Installation\n\nSee http://caffe.berkeleyvision.org/installation.html for the latest\ninstallation instructions.\n\nCheck the users group in case you need help:\nhttps://groups.google.com/forum/#!forum/caffe-users\n"
  },
  {
    "path": "caffe-fpn/LICENSE",
    "content": "--------------------------START OF THIRD PARTY NOTICE--------------------------\n\nMicrosoft licenses this Third Party IP to you under the licensing\nterms for the Microsoft product. Microsoft reserves all other rights\nnot expressly granted under this agreement, whether by implication,\nestoppel or otherwise.\n\nCaffe\n\nCopyrights can be found here: https://github.com/BVLC/caffe/blob/master/LICENSE\n\nProvided for Informational Purposes Only\n\nBSD License\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n\nRedistributions of source code must retain the above copyright notice,\nthis list of conditions and the following disclaimer.\n\nRedistributions in binary form must reproduce the above copyright notice,\nthis list of conditions and the following disclaimer in the documentation\nand/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n*AS IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\nHOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED\nTO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n---------------------------END OF THIRD PARTY NOTICE---------------------------\n\nFast R-CNN\n\nCopyright (c) Microsoft Corporation\n\nAll rights reserved.\n\nMIT License\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included\nin all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\nTHE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\nOTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,\nARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\nOTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "caffe-fpn/Makefile",
    "content": "PROJECT := caffe\n\nCONFIG_FILE := Makefile.config\n# Explicitly check for the config file, otherwise make -k will proceed anyway.\nifeq ($(wildcard $(CONFIG_FILE)),)\n$(error $(CONFIG_FILE) not found. See $(CONFIG_FILE).example.)\nendif\ninclude $(CONFIG_FILE)\n\nBUILD_DIR_LINK := $(BUILD_DIR)\nifeq ($(RELEASE_BUILD_DIR),)\n\tRELEASE_BUILD_DIR := .$(BUILD_DIR)_release\nendif\nifeq ($(DEBUG_BUILD_DIR),)\n\tDEBUG_BUILD_DIR := .$(BUILD_DIR)_debug\nendif\n\nDEBUG ?= 0\nifeq ($(DEBUG), 1)\n\tBUILD_DIR := $(DEBUG_BUILD_DIR)\n\tOTHER_BUILD_DIR := $(RELEASE_BUILD_DIR)\nelse\n\tBUILD_DIR := $(RELEASE_BUILD_DIR)\n\tOTHER_BUILD_DIR := $(DEBUG_BUILD_DIR)\nendif\n\n# All of the directories containing code.\nSRC_DIRS := $(shell find * -type d -exec bash -c \"find {} -maxdepth 1 \\\n\t\\( -name '*.cpp' -o -name '*.proto' \\) | grep -q .\" \\; -print)\n\n# The target shared library name\nLIBRARY_NAME := $(PROJECT)\nLIB_BUILD_DIR := $(BUILD_DIR)/lib\nSTATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a\nDYNAMIC_VERSION_MAJOR \t\t:= 1\nDYNAMIC_VERSION_MINOR \t\t:= 0\nDYNAMIC_VERSION_REVISION \t:= 0-rc3\nDYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so\n#DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)\nDYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)\nDYNAMIC_NAME := $(LIB_BUILD_DIR)/$(DYNAMIC_VERSIONED_NAME_SHORT)\nCOMMON_FLAGS += -DCAFFE_VERSION=$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)\n\n##############################\n# Get all source files\n##############################\n# CXX_SRCS are the source files excluding the test ones.\nCXX_SRCS := $(shell find src/$(PROJECT) ! -name \"test_*.cpp\" -name \"*.cpp\")\n# CU_SRCS are the cuda source files\nCU_SRCS := $(shell find src/$(PROJECT) ! 
-name \"test_*.cu\" -name \"*.cu\")\n# TEST_SRCS are the test source files\nTEST_MAIN_SRC := src/$(PROJECT)/test/test_caffe_main.cpp\nTEST_SRCS := $(shell find src/$(PROJECT) -name \"test_*.cpp\")\nTEST_SRCS := $(filter-out $(TEST_MAIN_SRC), $(TEST_SRCS))\nTEST_CU_SRCS := $(shell find src/$(PROJECT) -name \"test_*.cu\")\nGTEST_SRC := src/gtest/gtest-all.cpp\n# TOOL_SRCS are the source files for the tool binaries\nTOOL_SRCS := $(shell find tools -name \"*.cpp\")\n# EXAMPLE_SRCS are the source files for the example binaries\nEXAMPLE_SRCS := $(shell find examples -name \"*.cpp\")\n# BUILD_INCLUDE_DIR contains any generated header files we want to include.\nBUILD_INCLUDE_DIR := $(BUILD_DIR)/src\n# PROTO_SRCS are the protocol buffer definitions\nPROTO_SRC_DIR := src/$(PROJECT)/proto\nPROTO_SRCS := $(wildcard $(PROTO_SRC_DIR)/*.proto)\n# PROTO_BUILD_DIR will contain the .cc and obj files generated from\n# PROTO_SRCS; PROTO_BUILD_INCLUDE_DIR will contain the .h header files\nPROTO_BUILD_DIR := $(BUILD_DIR)/$(PROTO_SRC_DIR)\nPROTO_BUILD_INCLUDE_DIR := $(BUILD_INCLUDE_DIR)/$(PROJECT)/proto\n# NONGEN_CXX_SRCS includes all source/header files except those generated\n# automatically (e.g., by proto).\nNONGEN_CXX_SRCS := $(shell find \\\n\tsrc/$(PROJECT) \\\n\tinclude/$(PROJECT) \\\n\tpython/$(PROJECT) \\\n\tmatlab/+$(PROJECT)/private \\\n\texamples \\\n\ttools \\\n\t-name \"*.cpp\" -or -name \"*.hpp\" -or -name \"*.cu\" -or -name \"*.cuh\")\nLINT_SCRIPT := scripts/cpp_lint.py\nLINT_OUTPUT_DIR := $(BUILD_DIR)/.lint\nLINT_EXT := lint.txt\nLINT_OUTPUTS := $(addsuffix .$(LINT_EXT), $(addprefix $(LINT_OUTPUT_DIR)/, $(NONGEN_CXX_SRCS)))\nEMPTY_LINT_REPORT := $(BUILD_DIR)/.$(LINT_EXT)\nNONEMPTY_LINT_REPORT := $(BUILD_DIR)/$(LINT_EXT)\n# PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)\nPY$(PROJECT)_SRC := python/$(PROJECT)/_$(PROJECT).cpp\nPY$(PROJECT)_SO := python/$(PROJECT)/_$(PROJECT).so\nPY$(PROJECT)_HXX := include/$(PROJECT)/layers/python_layer.hpp\n# MAT$(PROJECT)_SRC is 
the mex entrance point of matlab package for $(PROJECT)\nMAT$(PROJECT)_SRC := matlab/+$(PROJECT)/private/$(PROJECT)_.cpp\nifneq ($(MATLAB_DIR),)\n\tMAT_SO_EXT := $(shell $(MATLAB_DIR)/bin/mexext)\nendif\nMAT$(PROJECT)_SO := matlab/+$(PROJECT)/private/$(PROJECT)_.$(MAT_SO_EXT)\n\n##############################\n# Derive generated files\n##############################\n# The generated files for protocol buffers\nPROTO_GEN_HEADER_SRCS := $(addprefix $(PROTO_BUILD_DIR)/, \\\n\t\t$(notdir ${PROTO_SRCS:.proto=.pb.h}))\nPROTO_GEN_HEADER := $(addprefix $(PROTO_BUILD_INCLUDE_DIR)/, \\\n\t\t$(notdir ${PROTO_SRCS:.proto=.pb.h}))\nPROTO_GEN_CC := $(addprefix $(BUILD_DIR)/, ${PROTO_SRCS:.proto=.pb.cc})\nPY_PROTO_BUILD_DIR := python/$(PROJECT)/proto\nPY_PROTO_INIT := python/$(PROJECT)/proto/__init__.py\nPROTO_GEN_PY := $(foreach file,${PROTO_SRCS:.proto=_pb2.py}, \\\n\t\t$(PY_PROTO_BUILD_DIR)/$(notdir $(file)))\n# The objects corresponding to the source files\n# These objects will be linked into the final shared library, so we\n# exclude the tool, example, and test objects.\nCXX_OBJS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o})\nCU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o})\nPROTO_OBJS := ${PROTO_GEN_CC:.cc=.o}\nOBJS := $(PROTO_OBJS) $(CXX_OBJS) $(CU_OBJS)\n# tool, example, and test objects\nTOOL_OBJS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o})\nTOOL_BUILD_DIR := $(BUILD_DIR)/tools\nTEST_CXX_BUILD_DIR := $(BUILD_DIR)/src/$(PROJECT)/test\nTEST_CU_BUILD_DIR := $(BUILD_DIR)/cuda/src/$(PROJECT)/test\nTEST_CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o})\nTEST_CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o})\nTEST_OBJS := $(TEST_CXX_OBJS) $(TEST_CU_OBJS)\nGTEST_OBJ := $(addprefix $(BUILD_DIR)/, ${GTEST_SRC:.cpp=.o})\nEXAMPLE_OBJS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o})\n# Output files for automatic dependency generation\nDEPS := ${CXX_OBJS:.o=.d} ${CU_OBJS:.o=.d} ${TEST_CXX_OBJS:.o=.d} 
\\\n\t${TEST_CU_OBJS:.o=.d} $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}\n# tool, example, and test bins\nTOOL_BINS := ${TOOL_OBJS:.o=.bin}\nEXAMPLE_BINS := ${EXAMPLE_OBJS:.o=.bin}\n# symlinks to tool bins without the \".bin\" extension\nTOOL_BIN_LINKS := ${TOOL_BINS:.bin=}\n# Put the test binaries in build/test for convenience.\nTEST_BIN_DIR := $(BUILD_DIR)/test\nTEST_CU_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \\\n\t\t$(foreach obj,$(TEST_CU_OBJS),$(basename $(notdir $(obj))))))\nTEST_CXX_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \\\n\t\t$(foreach obj,$(TEST_CXX_OBJS),$(basename $(notdir $(obj))))))\nTEST_BINS := $(TEST_CXX_BINS) $(TEST_CU_BINS)\n# TEST_ALL_BIN is the test binary that links caffe dynamically.\nTEST_ALL_BIN := $(TEST_BIN_DIR)/test_all.testbin\n\n##############################\n# Derive compiler warning dump locations\n##############################\nWARNS_EXT := warnings.txt\nCXX_WARNS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o.$(WARNS_EXT)})\nCU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o.$(WARNS_EXT)})\nTOOL_WARNS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o.$(WARNS_EXT)})\nEXAMPLE_WARNS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o.$(WARNS_EXT)})\nTEST_WARNS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o.$(WARNS_EXT)})\nTEST_CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o.$(WARNS_EXT)})\nALL_CXX_WARNS := $(CXX_WARNS) $(TOOL_WARNS) $(EXAMPLE_WARNS) $(TEST_WARNS)\nALL_CU_WARNS := $(CU_WARNS) $(TEST_CU_WARNS)\nALL_WARNS := $(ALL_CXX_WARNS) $(ALL_CU_WARNS)\n\nEMPTY_WARN_REPORT := $(BUILD_DIR)/.$(WARNS_EXT)\nNONEMPTY_WARN_REPORT := $(BUILD_DIR)/$(WARNS_EXT)\n\n##############################\n# Derive include and lib directories\n##############################\nCUDA_INCLUDE_DIR := $(CUDA_DIR)/include\n\nCUDA_LIB_DIR :=\n# add <cuda>/lib64 only if it exists\nifneq (\"$(wildcard $(CUDA_DIR)/lib64)\",\"\")\n\tCUDA_LIB_DIR += 
$(CUDA_DIR)/lib64\nendif\nCUDA_LIB_DIR += $(CUDA_DIR)/lib\n\nINCLUDE_DIRS += $(BUILD_INCLUDE_DIR) ./src ./include\nifneq ($(CPU_ONLY), 1)\n\tINCLUDE_DIRS += $(CUDA_INCLUDE_DIR)\n\tLIBRARY_DIRS += $(CUDA_LIB_DIR)\n\tLIBRARIES := cudart cublas curand\nendif\n\nLIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5\n\n# handle IO dependencies\nUSE_LEVELDB ?= 1\nUSE_LMDB ?= 1\nUSE_OPENCV ?= 1\n\nifeq ($(USE_LEVELDB), 1)\n\tLIBRARIES += leveldb snappy\nendif\nifeq ($(USE_LMDB), 1)\n\tLIBRARIES += lmdb\nendif\nifeq ($(USE_OPENCV), 1)\n\tLIBRARIES += opencv_core opencv_highgui opencv_imgproc \n\n\tifeq ($(OPENCV_VERSION), 3)\n\t\tLIBRARIES += opencv_imgcodecs\n\tendif\n\t\t\nendif\nPYTHON_LIBRARIES ?= boost_python python2.7\nWARNINGS := -Wall -Wno-sign-compare\n\n##############################\n# Set build directories\n##############################\n\nDISTRIBUTE_DIR ?= distribute\nDISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib\nDIST_ALIASES := dist\nifneq ($(strip $(DISTRIBUTE_DIR)),distribute)\n\t\tDIST_ALIASES += distribute\nendif\n\nALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)) \\\n\t$(addprefix $(BUILD_DIR)/cuda/, $(SRC_DIRS)) \\\n\t$(LIB_BUILD_DIR) $(TEST_BIN_DIR) $(PY_PROTO_BUILD_DIR) $(LINT_OUTPUT_DIR) \\\n\t$(DISTRIBUTE_SUBDIRS) $(PROTO_BUILD_INCLUDE_DIR))\n\n##############################\n# Set directory for Doxygen-generated documentation\n##############################\nDOXYGEN_CONFIG_FILE ?= ./.Doxyfile\n# should be the same as OUTPUT_DIRECTORY in the .Doxyfile\nDOXYGEN_OUTPUT_DIR ?= ./doxygen\nDOXYGEN_COMMAND ?= doxygen\n# All the files that might have Doxygen documentation.\nDOXYGEN_SOURCES := $(shell find \\\n\tsrc/$(PROJECT) \\\n\tinclude/$(PROJECT) \\\n\tpython/ \\\n\tmatlab/ \\\n\texamples \\\n\ttools \\\n\t-name \"*.cpp\" -or -name \"*.hpp\" -or -name \"*.cu\" -or -name \"*.cuh\" -or \\\n        -name \"*.py\" -or -name \"*.m\")\nDOXYGEN_SOURCES += 
$(DOXYGEN_CONFIG_FILE)\n\n\n##############################\n# Configure build\n##############################\n\n# Determine platform\nUNAME := $(shell uname -s)\nifeq ($(UNAME), Linux)\n\tLINUX := 1\nelse ifeq ($(UNAME), Darwin)\n\tOSX := 1\nendif\n\n# Linux\nifeq ($(LINUX), 1)\n\tCXX ?= /usr/bin/g++\n\tGCCVERSION := $(shell $(CXX) -dumpversion | cut -f1,2 -d.)\n\t# older versions of gcc are too dumb to build boost with -Wuninitalized\n\tifeq ($(shell echo | awk '{exit $(GCCVERSION) < 4.6;}'), 1)\n\t\tWARNINGS += -Wno-uninitialized\n\tendif\n\t# boost::thread is reasonably called boost_thread (compare OS X)\n\t# We will also explicitly add stdc++ to the link target.\n\tLIBRARIES += boost_thread stdc++\n\tVERSIONFLAGS += -Wl,-soname,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../lib\nendif\n\n# OS X:\n# clang++ instead of g++\n# libstdc++ for NVCC compatibility on OS X >= 10.9 with CUDA < 7.0\nifeq ($(OSX), 1)\n\tCXX := /usr/bin/clang++\n\tifneq ($(CPU_ONLY), 1)\n\t\tCUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \\d' | grep -o '\\d')\n\t\tifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)\n\t\t\tCXXFLAGS += -stdlib=libstdc++\n\t\t\tLINKFLAGS += -stdlib=libstdc++\n\t\tendif\n\t\t# clang throws this warning for cuda headers\n\t\tWARNINGS += -Wno-unneeded-internal-declaration\n\tendif\n\t# gtest needs to use its own tuple to not conflict with clang\n\tCOMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1\n\t# boost::thread is called boost_thread-mt to mark multithreading on OS X\n\tLIBRARIES += boost_thread-mt\n\t# we need to explicitly ask for the rpath to be obeyed\n\tDYNAMIC_FLAGS := -install_name @rpath/libcaffe.so\n\tORIGIN := @loader_path\n\tVERSIONFLAGS += -Wl,-install_name,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../../build/lib\nelse\n\tORIGIN := \\$$ORIGIN\nendif\n\n# Custom compiler\nifdef CUSTOM_CXX\n\tCXX := $(CUSTOM_CXX)\nendif\n\n# Static linking\nifneq (,$(findstring clang++,$(CXX)))\n\tSTATIC_LINK_COMMAND := 
-Wl,-force_load $(STATIC_NAME)\nelse ifneq (,$(findstring g++,$(CXX)))\n\tSTATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive\nelse\n  # The following line must not be indented with a tab, since we are not inside a target\n  $(error Cannot static link with the $(CXX) compiler)\nendif\n\n# Debugging\nifeq ($(DEBUG), 1)\n\tCOMMON_FLAGS += -DDEBUG -g -O0\n\tNVCCFLAGS += -G\nelse\n\tCOMMON_FLAGS += -DNDEBUG -O2\nendif\n\n# cuDNN acceleration configuration.\nifeq ($(USE_CUDNN), 1)\n\tLIBRARIES += cudnn\n\tCOMMON_FLAGS += -DUSE_CUDNN\nendif\n\n# configure IO libraries\nifeq ($(USE_OPENCV), 1)\n\tCOMMON_FLAGS += -DUSE_OPENCV\nendif\nifeq ($(USE_LEVELDB), 1)\n\tCOMMON_FLAGS += -DUSE_LEVELDB\nendif\nifeq ($(USE_LMDB), 1)\n\tCOMMON_FLAGS += -DUSE_LMDB\nifeq ($(ALLOW_LMDB_NOLOCK), 1)\n\tCOMMON_FLAGS += -DALLOW_LMDB_NOLOCK\nendif\nendif\n\n# CPU-only configuration\nifeq ($(CPU_ONLY), 1)\n\tOBJS := $(PROTO_OBJS) $(CXX_OBJS)\n\tTEST_OBJS := $(TEST_CXX_OBJS)\n\tTEST_BINS := $(TEST_CXX_BINS)\n\tALL_WARNS := $(ALL_CXX_WARNS)\n\tTEST_FILTER := --gtest_filter=\"-*GPU*\"\n\tCOMMON_FLAGS += -DCPU_ONLY\nendif\n\n# Python layer support\nifeq ($(WITH_PYTHON_LAYER), 1)\n\tCOMMON_FLAGS += -DWITH_PYTHON_LAYER\n\tLIBRARIES += $(PYTHON_LIBRARIES)\nendif\n\n# BLAS configuration (default = ATLAS)\nBLAS ?= atlas\nifeq ($(BLAS), mkl)\n\t# MKL\n\tLIBRARIES += mkl_rt\n\tCOMMON_FLAGS += -DUSE_MKL\n\tMKL_DIR ?= /opt/intel/mkl\n\tBLAS_INCLUDE ?= $(MKL_DIR)/include\n\tBLAS_LIB ?= $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64\nelse ifeq ($(BLAS), open)\n\t# OpenBLAS\n\tLIBRARIES += openblas\nelse\n\t# ATLAS\n\tifeq ($(LINUX), 1)\n\t\tifeq ($(BLAS), atlas)\n\t\t\t# Linux simply has cblas and atlas\n\t\t\tLIBRARIES += cblas atlas\n\t\tendif\n\telse ifeq ($(OSX), 1)\n\t\t# OS X packages atlas as the vecLib framework\n\t\tLIBRARIES += cblas\n\t\t# 10.10 has accelerate while 10.9 has veclib\n\t\tXCODE_CLT_VER := $(shell pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep 'version' 
| sed 's/[^0-9]*\\([0-9]\\).*/\\1/')\n\t\tXCODE_CLT_GEQ_6 := $(shell [ $(XCODE_CLT_VER) -gt 5 ] && echo 1)\n\t\tifeq ($(XCODE_CLT_GEQ_6), 1)\n\t\t\tBLAS_INCLUDE ?= /System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/\n\t\t\tLDFLAGS += -framework Accelerate\n\t\telse\n\t\t\tBLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/\n\t\t\tLDFLAGS += -framework vecLib\n\t\tendif\n\tendif\nendif\nINCLUDE_DIRS += $(BLAS_INCLUDE)\nLIBRARY_DIRS += $(BLAS_LIB)\n\nLIBRARY_DIRS += $(LIB_BUILD_DIR)\n\n# Automatic dependency generation (nvcc is handled separately)\nCXXFLAGS += -MMD -MP\n\n# Complete build flags.\nCOMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))\nCXXFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)\nNVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)\n# mex may invoke an older gcc that is too liberal with -Wuninitialized\nMATLAB_CXXFLAGS := $(CXXFLAGS) -Wno-uninitialized\nLINKFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)\n\nUSE_PKG_CONFIG ?= 0\nifeq ($(USE_PKG_CONFIG), 1)\n\tPKG_CONFIG := $(shell pkg-config opencv --libs)\nelse\n\tPKG_CONFIG :=\nendif\nLDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \\\n\t\t$(foreach library,$(LIBRARIES),-l$(library))\nPYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))\n\n# 'superclean' target recursively* deletes all files ending with an extension\n# in $(SUPERCLEAN_EXTS) below.  
This may be useful if you've built older\n# versions of Caffe that do not place all generated files in a location known\n# to the 'clean' target.\n#\n# 'supercleanlist' will list the files to be deleted by make superclean.\n#\n# * Recursive with the exception that symbolic links are never followed, per the\n# default behavior of 'find'.\nSUPERCLEAN_EXTS := .so .a .o .bin .testbin .pb.cc .pb.h _pb2.py .cuo\n\n# Set the sub-targets of the 'everything' target.\nEVERYTHING_TARGETS := all py$(PROJECT) test warn lint\n# Only build matcaffe as part of \"everything\" if MATLAB_DIR is specified.\nifneq ($(MATLAB_DIR),)\n\tEVERYTHING_TARGETS += mat$(PROJECT)\nendif\n\n##############################\n# Define build targets\n##############################\n.PHONY: all lib test clean docs linecount lint lintclean tools examples $(DIST_ALIASES) \\\n\tpy mat py$(PROJECT) mat$(PROJECT) proto runtest \\\n\tsuperclean supercleanlist supercleanfiles warn everything\n\nall: lib tools examples\n\nlib: $(STATIC_NAME) $(DYNAMIC_NAME)\n\neverything: $(EVERYTHING_TARGETS)\n\nlinecount:\n\tcloc --read-lang-def=$(PROJECT).cloc \\\n\t\tsrc/$(PROJECT) include/$(PROJECT) tools examples \\\n\t\tpython matlab\n\nlint: $(EMPTY_LINT_REPORT)\n\nlintclean:\n\t@ $(RM) -r $(LINT_OUTPUT_DIR) $(EMPTY_LINT_REPORT) $(NONEMPTY_LINT_REPORT)\n\ndocs: $(DOXYGEN_OUTPUT_DIR)\n\t@ cd ./docs ; ln -sfn ../$(DOXYGEN_OUTPUT_DIR)/html doxygen\n\n$(DOXYGEN_OUTPUT_DIR): $(DOXYGEN_CONFIG_FILE) $(DOXYGEN_SOURCES)\n\t$(DOXYGEN_COMMAND) $(DOXYGEN_CONFIG_FILE)\n\n$(EMPTY_LINT_REPORT): $(LINT_OUTPUTS) | $(BUILD_DIR)\n\t@ cat $(LINT_OUTPUTS) > $@\n\t@ if [ -s \"$@\" ]; then \\\n\t\tcat $@; \\\n\t\tmv $@ $(NONEMPTY_LINT_REPORT); \\\n\t\techo \"Found one or more lint errors.\"; \\\n\t\texit 1; \\\n\t  fi; \\\n\t  $(RM) $(NONEMPTY_LINT_REPORT); \\\n\t  echo \"No lint errors!\";\n\n$(LINT_OUTPUTS): $(LINT_OUTPUT_DIR)/%.lint.txt : % $(LINT_SCRIPT) | $(LINT_OUTPUT_DIR)\n\t@ mkdir -p $(dir $@)\n\t@ python $(LINT_SCRIPT) $< 2>&1 
\\\n\t\t| grep -v \"^Done processing \" \\\n\t\t| grep -v \"^Total errors found: 0\" \\\n\t\t> $@ \\\n\t\t|| true\n\ntest: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN) $(TEST_BINS)\n\ntools: $(TOOL_BINS) $(TOOL_BIN_LINKS)\n\nexamples: $(EXAMPLE_BINS)\n\npy$(PROJECT): py\n\npy: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)\n\n$(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(PY$(PROJECT)_HXX) | $(DYNAMIC_NAME)\n\t@ echo CXX/LD -o $@ $<\n\t$(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \\\n\t\t-o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(PYTHON_LDFLAGS) \\\n\t\t-Wl,-rpath,$(ORIGIN)/../../build/lib\n\nmat$(PROJECT): mat\n\nmat: $(MAT$(PROJECT)_SO)\n\n$(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)\n\t@ if [ -z \"$(MATLAB_DIR)\" ]; then \\\n\t\techo \"MATLAB_DIR must be specified in $(CONFIG_FILE)\" \\\n\t\t\t\"to build mat$(PROJECT).\"; \\\n\t\texit 1; \\\n\tfi\n\t@ echo MEX $<\n\t$(Q)$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) \\\n\t\t\tCXX=\"$(CXX)\" \\\n\t\t\tCXXFLAGS=\"\\$$CXXFLAGS $(MATLAB_CXXFLAGS)\" \\\n\t\t\tCXXLIBS=\"\\$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)\" -output $@\n\t@ if [ -f \"$(PROJECT)_.d\" ]; then \\\n\t\tmv -f $(PROJECT)_.d $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}; \\\n\tfi\n\nruntest: $(TEST_ALL_BIN)\n\t$(TOOL_BUILD_DIR)/caffe\n\t$(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)\n\npytest: py\n\tcd python; python -m unittest discover -s caffe/test\n\nmattest: mat\n\tcd matlab; $(MATLAB_DIR)/bin/matlab -nodisplay -r 'caffe.run_tests(), exit()'\n\nwarn: $(EMPTY_WARN_REPORT)\n\n$(EMPTY_WARN_REPORT): $(ALL_WARNS) | $(BUILD_DIR)\n\t@ cat $(ALL_WARNS) > $@\n\t@ if [ -s \"$@\" ]; then \\\n\t\tcat $@; \\\n\t\tmv $@ $(NONEMPTY_WARN_REPORT); \\\n\t\techo \"Compiler produced one or more warnings.\"; \\\n\t\texit 1; \\\n\t  fi; \\\n\t  $(RM) $(NONEMPTY_WARN_REPORT); \\\n\t  echo \"No compiler warnings!\";\n\n$(ALL_WARNS): %.o.$(WARNS_EXT) : %.o\n\n$(BUILD_DIR_LINK): $(BUILD_DIR)/.linked\n\n# Create a target \".linked\" in this BUILD_DIR to tell Make that 
the \"build\" link\n# is currently correct, then delete the one in the OTHER_BUILD_DIR in case it\n# exists and $(DEBUG) is toggled later.\n$(BUILD_DIR)/.linked:\n\t@ mkdir -p $(BUILD_DIR)\n\t@ $(RM) $(OTHER_BUILD_DIR)/.linked\n\t@ $(RM) -r $(BUILD_DIR_LINK)\n\t@ ln -s $(BUILD_DIR) $(BUILD_DIR_LINK)\n\t@ touch $@\n\n$(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)\n\t@ mkdir -p $@\n\n$(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)\n\t@ echo LD -o $@\n\t$(Q)$(CXX) -shared -o $@ $(OBJS) $(VERSIONFLAGS) $(LINKFLAGS) $(LDFLAGS) $(DYNAMIC_FLAGS)\n\t@ cd $(BUILD_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT);   ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)\n\n$(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)\n\t@ echo AR -o $@\n\t$(Q)ar rcs $@ $(OBJS)\n\n$(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)\n\t@ echo CXX $<\n\t$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \\\n\t\t|| (cat $@.$(WARNS_EXT); exit 1)\n\t@ cat $@.$(WARNS_EXT)\n\n$(PROTO_BUILD_DIR)/%.pb.o: $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_GEN_HEADER) \\\n\t\t| $(PROTO_BUILD_DIR)\n\t@ echo CXX $<\n\t$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \\\n\t\t|| (cat $@.$(WARNS_EXT); exit 1)\n\t@ cat $@.$(WARNS_EXT)\n\n$(BUILD_DIR)/cuda/%.o: %.cu | $(ALL_BUILD_DIRS)\n\t@ echo NVCC $<\n\t$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -M $< -o ${@:.o=.d} \\\n\t\t-odir $(@D)\n\t$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@ 2> $@.$(WARNS_EXT) \\\n\t\t|| (cat $@.$(WARNS_EXT); exit 1)\n\t@ cat $@.$(WARNS_EXT)\n\n$(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \\\n\t\t| $(DYNAMIC_NAME) $(TEST_BIN_DIR)\n\t@ echo CXX/LD -o $@ $<\n\t$(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \\\n\t\t-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib\n\n$(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \\\n\t$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)\n\t@ echo LD $<\n\t$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \\\n\t\t-o $@ $(LINKFLAGS) $(LDFLAGS) 
-l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib\n\n$(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \\\n\t$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)\n\t@ echo LD $<\n\t$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \\\n\t\t-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib\n\n# Target for extension-less symlinks to tool binaries with extension '*.bin'.\n$(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)\n\t@ $(RM) $@\n\t@ ln -s $(notdir $<) $@\n\n$(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)\n\t@ echo CXX/LD -o $@\n\t$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \\\n\t\t-Wl,-rpath,$(ORIGIN)/../lib\n\n$(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)\n\t@ echo CXX/LD -o $@\n\t$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \\\n\t\t-Wl,-rpath,$(ORIGIN)/../../lib\n\nproto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)\n\n$(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_BUILD_DIR)/%.pb.h : \\\n\t\t$(PROTO_SRC_DIR)/%.proto | $(PROTO_BUILD_DIR)\n\t@ echo PROTOC $<\n\t$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --cpp_out=$(PROTO_BUILD_DIR) $<\n\n$(PY_PROTO_BUILD_DIR)/%_pb2.py : $(PROTO_SRC_DIR)/%.proto \\\n\t\t$(PY_PROTO_INIT) | $(PY_PROTO_BUILD_DIR)\n\t@ echo PROTOC \\(python\\) $<\n\t$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --python_out=$(PY_PROTO_BUILD_DIR) $<\n\n$(PY_PROTO_INIT): | $(PY_PROTO_BUILD_DIR)\n\ttouch $(PY_PROTO_INIT)\n\nclean:\n\t@- $(RM) -rf $(ALL_BUILD_DIRS)\n\t@- $(RM) -rf $(OTHER_BUILD_DIR)\n\t@- $(RM) -rf $(BUILD_DIR_LINK)\n\t@- $(RM) -rf $(DISTRIBUTE_DIR)\n\t@- $(RM) $(PY$(PROJECT)_SO)\n\t@- $(RM) $(MAT$(PROJECT)_SO)\n\nsupercleanfiles:\n\t$(eval SUPERCLEAN_FILES := $(strip \\\n\t\t\t$(foreach ext,$(SUPERCLEAN_EXTS), $(shell find . 
-name '*$(ext)' \\\n\t\t\t-not -path './data/*'))))\n\nsupercleanlist: supercleanfiles\n\t@ \\\n\tif [ -z \"$(SUPERCLEAN_FILES)\" ]; then \\\n\t\techo \"No generated files found.\"; \\\n\telse \\\n\t\techo $(SUPERCLEAN_FILES) | tr ' ' '\\n'; \\\n\tfi\n\nsuperclean: clean supercleanfiles\n\t@ \\\n\tif [ -z \"$(SUPERCLEAN_FILES)\" ]; then \\\n\t\techo \"No generated files found.\"; \\\n\telse \\\n\t\techo \"Deleting the following generated files:\"; \\\n\t\techo $(SUPERCLEAN_FILES) | tr ' ' '\\n'; \\\n\t\t$(RM) $(SUPERCLEAN_FILES); \\\n\tfi\n\n$(DIST_ALIASES): $(DISTRIBUTE_DIR)\n\n$(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)\n\t# add proto\n\tcp -r src/caffe/proto $(DISTRIBUTE_DIR)/\n\t# add include\n\tcp -r include $(DISTRIBUTE_DIR)/\n\tmkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto\n\tcp $(PROTO_GEN_HEADER_SRCS) $(DISTRIBUTE_DIR)/include/caffe/proto\n\t# add tool and example binaries\n\tcp $(TOOL_BINS) $(DISTRIBUTE_DIR)/bin\n\tcp $(EXAMPLE_BINS) $(DISTRIBUTE_DIR)/bin\n\t# add libraries\n\tcp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib\n\tinstall -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib\n\tcd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT);   ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)\n\t# add python - it's not the standard way, indeed...\n\tcp -r python $(DISTRIBUTE_DIR)/python\n\n-include $(DEPS)\n"
  },
  {
    "path": "caffe-fpn/Makefile.config",
    "content": "## Refer to http://caffe.berkeleyvision.org/installation.html\n# Contributions simplifying and improving our build system are welcome!\n\n# cuDNN acceleration switch (uncomment to build with cuDNN).\nUSE_CUDNN := 1\n\n# CPU-only switch (uncomment to build without GPU support).\n# CPU_ONLY := 1\n\n# uncomment to disable IO dependencies and corresponding data layers\n# USE_OPENCV := 0\n# USE_LEVELDB := 0\n# USE_LMDB := 0\n\n# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)\n#       You should not set this flag if you will be reading LMDBs with any\n#       possibility of simultaneous read and write\n# ALLOW_LMDB_NOLOCK := 1\n\n# Uncomment if you're using OpenCV 3\nOPENCV_VERSION := 3\n\n# To customize your choice of compiler, uncomment and set the following.\n# N.B. the default for Linux is g++ and the default for OSX is clang++\n# CUSTOM_CXX := g++\n\n# CUDA directory contains bin/ and lib/ directories that we need.\nCUDA_DIR := /usr/local/cuda\n# On Ubuntu 14.04, if cuda tools are installed via\n# \"sudo apt-get install nvidia-cuda-toolkit\" then use this instead:\n# CUDA_DIR := /usr\n\n# CUDA architecture setting: going with all of them.\n# For CUDA < 6.0, comment the *_50 lines for compatibility.\nCUDA_ARCH := -gencode arch=compute_20,code=sm_20 \\\n                -gencode arch=compute_20,code=sm_21 \\\n                -gencode arch=compute_30,code=sm_30 \\\n                -gencode arch=compute_35,code=sm_35 \\\n                -gencode arch=compute_50,code=sm_50 \\\n                -gencode arch=compute_50,code=compute_50\n\n# BLAS choice:\n# atlas for ATLAS (default)\n# mkl for MKL\n# open for OpenBlas\nBLAS := atlas\n# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.\n# Leave commented to accept the defaults for your choice of BLAS\n# (which should work)!\n# BLAS_INCLUDE := /path/to/your/blas\n# BLAS_LIB := /path/to/your/blas\n\n# Homebrew puts openblas in a directory that is not on the standard search 
path\n# BLAS_INCLUDE := $(shell brew --prefix openblas)/include\n# BLAS_LIB := $(shell brew --prefix openblas)/lib\n\n# This is required only if you will compile the matlab interface.\n# MATLAB directory should contain the mex binary in /bin.\n# MATLAB_DIR := /usr/local\n# MATLAB_DIR := /Applications/MATLAB_R2012b.app\n\n# NOTE: this is required only if you will compile the python interface.\n# We need to be able to find Python.h and numpy/arrayobject.h.\n#PYTHON_INCLUDE := /usr/include/python2.7 \\\n                /usr/lib/python2.7/dist-packages/numpy/core/include\n# Anaconda Python distribution is quite popular. Include path:\n# Verify anaconda location, sometimes it's in root.\nANACONDA_HOME := /usr/local/anaconda2\nPYTHON_INCLUDE := $(ANACONDA_HOME)/include \\\n                $(ANACONDA_HOME)/include/python2.7 \\\n                $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \\\n\n# Uncomment to use Python 3 (default is Python 2)\n# PYTHON_LIBRARIES := boost_python3 python3.5m\n# PYTHON_INCLUDE := /usr/include/python3.5m \\\n#                 /usr/lib/python3.5/dist-packages/numpy/core/include\n\n# We need to be able to find libpythonX.X.so or .dylib.\n# PYTHON_LIB := /usr/lib\nPYTHON_LIB := $(ANACONDA_HOME)/lib\n\n# Homebrew installs numpy in a non standard path (keg only)\n# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include\n# PYTHON_LIB += $(shell brew --prefix numpy)/lib\n\n# Uncomment to support layers written in Python (will link against Python libs)\nWITH_PYTHON_LAYER := 1\n\n# Whatever else you find you need goes here.\nINCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include\nLIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib\n\n# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies\n# INCLUDE_DIRS += $(shell brew --prefix)/include\n# LIBRARY_DIRS += $(shell brew --prefix)/lib\n\n# Uncomment to use `pkg-config` 
to specify OpenCV library paths.\n# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)\n# USE_PKG_CONFIG := 1\n\n# N.B. both build and distribute dirs are cleared on `make clean`\nBUILD_DIR := build\nDISTRIBUTE_DIR := distribute\n\n# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171\n# DEBUG := 1\n\n# The ID of the GPU that 'make runtest' will use to run unit tests.\nTEST_GPUID := 0\n\n# enable pretty build (comment to see full commands)\nQ ?= @\n"
  },
  {
    "path": "caffe-fpn/Makefile.config.example",
    "content": "## Refer to http://caffe.berkeleyvision.org/installation.html\n# Contributions simplifying and improving our build system are welcome!\n\n# cuDNN acceleration switch (uncomment to build with cuDNN).\n# USE_CUDNN := 1\n\n# CPU-only switch (uncomment to build without GPU support).\n# CPU_ONLY := 1\n\n# uncomment to disable IO dependencies and corresponding data layers\n# USE_OPENCV := 0\n# USE_LEVELDB := 0\n# USE_LMDB := 0\n\n# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)\n#\tYou should not set this flag if you will be reading LMDBs with any\n#\tpossibility of simultaneous read and write\n# ALLOW_LMDB_NOLOCK := 1\n\n# Uncomment if you're using OpenCV 3\n# OPENCV_VERSION := 3\n\n# To customize your choice of compiler, uncomment and set the following.\n# N.B. the default for Linux is g++ and the default for OSX is clang++\n# CUSTOM_CXX := g++\n\n# CUDA directory contains bin/ and lib/ directories that we need.\nCUDA_DIR := /usr/local/cuda\n# On Ubuntu 14.04, if cuda tools are installed via\n# \"sudo apt-get install nvidia-cuda-toolkit\" then use this instead:\n# CUDA_DIR := /usr\n\n# CUDA architecture setting: going with all of them.\n# For CUDA < 6.0, comment the *_50 lines for compatibility.\nCUDA_ARCH := -gencode arch=compute_20,code=sm_20 \\\n\t\t-gencode arch=compute_20,code=sm_21 \\\n\t\t-gencode arch=compute_30,code=sm_30 \\\n\t\t-gencode arch=compute_35,code=sm_35 \\\n\t\t-gencode arch=compute_50,code=sm_50 \\\n\t\t-gencode arch=compute_50,code=compute_50\n\n# BLAS choice:\n# atlas for ATLAS (default)\n# mkl for MKL\n# open for OpenBlas\nBLAS := atlas\n# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.\n# Leave commented to accept the defaults for your choice of BLAS\n# (which should work)!\n# BLAS_INCLUDE := /path/to/your/blas\n# BLAS_LIB := /path/to/your/blas\n\n# Homebrew puts openblas in a directory that is not on the standard search path\n# BLAS_INCLUDE := $(shell brew --prefix openblas)/include\n# 
BLAS_LIB := $(shell brew --prefix openblas)/lib\n\n# This is required only if you will compile the matlab interface.\n# MATLAB directory should contain the mex binary in /bin.\n# MATLAB_DIR := /usr/local\n# MATLAB_DIR := /Applications/MATLAB_R2012b.app\n\n# NOTE: this is required only if you will compile the python interface.\n# We need to be able to find Python.h and numpy/arrayobject.h.\nPYTHON_INCLUDE := /usr/include/python2.7 \\\n\t\t/usr/lib/python2.7/dist-packages/numpy/core/include\n# Anaconda Python distribution is quite popular. Include path:\n# Verify anaconda location, sometimes it's in root.\n# ANACONDA_HOME := $(HOME)/anaconda\n# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \\\n\t\t# $(ANACONDA_HOME)/include/python2.7 \\\n\t\t# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \\\n\n# Uncomment to use Python 3 (default is Python 2)\n# PYTHON_LIBRARIES := boost_python3 python3.5m\n# PYTHON_INCLUDE := /usr/include/python3.5m \\\n#                 /usr/lib/python3.5/dist-packages/numpy/core/include\n\n# We need to be able to find libpythonX.X.so or .dylib.\nPYTHON_LIB := /usr/lib\n# PYTHON_LIB := $(ANACONDA_HOME)/lib\n\n# Homebrew installs numpy in a non standard path (keg only)\n# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include\n# PYTHON_LIB += $(shell brew --prefix numpy)/lib\n\n# Uncomment to support layers written in Python (will link against Python libs)\n# WITH_PYTHON_LAYER := 1\n\n# Whatever else you find you need goes here.\nINCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include\nLIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib\n\n# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies\n# INCLUDE_DIRS += $(shell brew --prefix)/include\n# LIBRARY_DIRS += $(shell brew --prefix)/lib\n\n# Uncomment to use `pkg-config` to specify OpenCV library paths.\n# (Usually not necessary -- OpenCV libraries are normally 
installed in one of the above $LIBRARY_DIRS.)\n# USE_PKG_CONFIG := 1\n\nBUILD_DIR := build\nDISTRIBUTE_DIR := distribute\n\n# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171\n# DEBUG := 1\n\n# The ID of the GPU that 'make runtest' will use to run unit tests.\nTEST_GPUID := 0\n\n# enable pretty build (comment to see full commands)\nQ ?= @\n"
  },
  {
    "path": "caffe-fpn/README.md",
    "content": "deformable convolution net on caffe\n"
  },
  {
    "path": "caffe-fpn/caffe.cloc",
    "content": "Bourne Shell\n    filter remove_matches ^\\s*#\n    filter remove_inline #.*$\n    extension sh\n    script_exe sh\nC\n    filter remove_matches ^\\s*//\n    filter call_regexp_common C\n    filter remove_inline //.*$\n    extension c\n    extension ec\n    extension pgc\nC++\n    filter remove_matches ^\\s*//\n    filter remove_inline //.*$\n    filter call_regexp_common C\n    extension C\n    extension cc\n    extension cpp\n    extension cxx\n    extension pcc\nC/C++ Header\n    filter remove_matches ^\\s*//\n    filter call_regexp_common C\n    filter remove_inline //.*$\n    extension H\n    extension h\n    extension hh\n    extension hpp\nCUDA\n    filter remove_matches ^\\s*//\n    filter remove_inline //.*$\n    filter call_regexp_common C\n    extension cu\nPython\n    filter remove_matches ^\\s*#\n    filter docstring_to_C\n    filter call_regexp_common C\n    filter remove_inline #.*$\n    extension py\nmake\n    filter remove_matches ^\\s*#\n    filter remove_inline #.*$\n    extension Gnumakefile\n    extension Makefile\n    extension am\n    extension gnumakefile\n    extension makefile\n    filename Gnumakefile\n    filename Makefile\n    filename gnumakefile\n    filename makefile\n    script_exe make\n"
  },
  {
    "path": "caffe-fpn/cmake/ConfigGen.cmake",
    "content": "\n################################################################################################\n# Helper function to fetch caffe includes which will be passed to dependent projects\n# Usage:\n#   caffe_get_current_includes(<includes_list_variable>)\nfunction(caffe_get_current_includes includes_variable)\n  get_property(current_includes DIRECTORY PROPERTY INCLUDE_DIRECTORIES)\n  caffe_convert_absolute_paths(current_includes)\n\n  # remove at most one ${PROJECT_BINARY_DIR} include added for caffe_config.h\n  list(FIND current_includes ${PROJECT_BINARY_DIR} __index)\n  list(REMOVE_AT current_includes ${__index})\n\n  # removing numpy includes (since not required for client libs)\n  set(__toremove \"\")\n  foreach(__i ${current_includes})\n    if(${__i} MATCHES \"python\")\n      list(APPEND __toremove ${__i})\n    endif()\n  endforeach()\n  if(__toremove)\n    list(REMOVE_ITEM current_includes ${__toremove})\n  endif()\n\n  caffe_list_unique(current_includes)\n  set(${includes_variable} ${current_includes} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Helper function to get all list items that begin with given prefix\n# Usage:\n#   caffe_get_items_with_prefix(<prefix> <list_variable> <output_variable>)\nfunction(caffe_get_items_with_prefix prefix list_variable output_variable)\n  set(__result \"\")\n  foreach(__e ${${list_variable}})\n    if(__e MATCHES \"^${prefix}.*\")\n      list(APPEND __result ${__e})\n    endif()\n  endforeach()\n  set(${output_variable} ${__result} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Function for generation Caffe build- and install- tree export config files\n# Usage:\n#  caffe_generate_export_configs()\nfunction(caffe_generate_export_configs)\n  set(install_cmake_suffix \"share/Caffe\")\n\n  # ---[ Configure build-tree CaffeConfig.cmake file 
]---\n  caffe_get_current_includes(Caffe_INCLUDE_DIRS)\n\n  set(Caffe_DEFINITIONS \"\")\n  if(NOT HAVE_CUDA)\n    set(HAVE_CUDA FALSE)\n    list(APPEND Caffe_DEFINITIONS -DCPU_ONLY)\n  endif()\n\n  if(USE_OPENCV)\n    list(APPEND Caffe_DEFINITIONS -DUSE_OPENCV)\n  endif()\n\n  if(USE_LMDB)\n    list(APPEND Caffe_DEFINITIONS -DUSE_LMDB)\n    if (ALLOW_LMDB_NOLOCK)\n        list(APPEND Caffe_DEFINITIONS -DALLOW_LMDB_NOLOCK)\n    endif()\n  endif()\n\n  if(USE_LEVELDB)\n    list(APPEND Caffe_DEFINITIONS -DUSE_LEVELDB)\n  endif()\n\n  if(NOT HAVE_CUDNN)\n    set(HAVE_CUDNN FALSE)\n  else()\n    list(APPEND Caffe_DEFINITIONS -DUSE_CUDNN)\n  endif()\n\n  if(BLAS STREQUAL \"MKL\" OR BLAS STREQUAL \"mkl\")\n    list(APPEND Caffe_DEFINITIONS -DUSE_MKL)\n  endif()\n\n  configure_file(\"cmake/Templates/CaffeConfig.cmake.in\" \"${PROJECT_BINARY_DIR}/CaffeConfig.cmake\" @ONLY)\n\n  # Add targets to the build-tree export set\n  export(TARGETS caffe proto FILE \"${PROJECT_BINARY_DIR}/CaffeTargets.cmake\")\n  export(PACKAGE Caffe)\n\n  # ---[ Configure install-tree CaffeConfig.cmake file ]---\n\n  # remove source and build dir includes\n  caffe_get_items_with_prefix(${PROJECT_SOURCE_DIR} Caffe_INCLUDE_DIRS __insource)\n  caffe_get_items_with_prefix(${PROJECT_BINARY_DIR} Caffe_INCLUDE_DIRS __inbinary)\n  list(REMOVE_ITEM Caffe_INCLUDE_DIRS ${__insource} ${__inbinary})\n\n  # add `install` include folder\n  set(lines\n     \"get_filename_component(__caffe_include \\\"\\${Caffe_CMAKE_DIR}/../../include\\\" ABSOLUTE)\\n\"\n     \"list(APPEND Caffe_INCLUDE_DIRS \\${__caffe_include})\\n\"\n     \"unset(__caffe_include)\\n\")\n  string(REPLACE \";\" \"\" Caffe_INSTALL_INCLUDE_DIR_APPEND_COMMAND ${lines})\n\n  configure_file(\"cmake/Templates/CaffeConfig.cmake.in\" \"${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake\" @ONLY)\n\n  # Install the CaffeConfig.cmake and export set to use with install-tree\n  install(FILES \"${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake\" DESTINATION 
${install_cmake_suffix})\n  install(EXPORT CaffeTargets DESTINATION ${install_cmake_suffix})\n\n  # ---[ Configure and install version file ]---\n\n  # TODO: Lines below are commented because Caffe doesn't declare its version in headers.\n  # When the declarations are added, modify `caffe_extract_caffe_version()` macro and uncomment\n\n  # configure_file(cmake/Templates/CaffeConfigVersion.cmake.in \"${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake\" @ONLY)\n  # install(FILES \"${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake\" DESTINATION ${install_cmake_suffix})\nendfunction()\n\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Cuda.cmake",
"content": "if(CPU_ONLY)\n  return()\nendif()\n\n# Known NVIDIA GPU architectures Caffe can be compiled for.\n# This list will be used for CUDA_ARCH_NAME = All option\nset(Caffe_known_gpu_archs \"20 21(20) 30 35 50\")\n\n################################################################################################\n# A function for automatic detection of GPUs installed  (if autodetection is enabled)\n# Usage:\n#   caffe_detect_installed_gpus(out_variable)\nfunction(caffe_detect_installed_gpus out_variable)\n  if(NOT CUDA_gpu_detect_output)\n    set(__cufile ${PROJECT_BINARY_DIR}/detect_cuda_archs.cu)\n\n    file(WRITE ${__cufile} \"\"\n      \"#include <cstdio>\\n\"\n      \"int main()\\n\"\n      \"{\\n\"\n      \"  int count = 0;\\n\"\n      \"  if (cudaSuccess != cudaGetDeviceCount(&count)) return -1;\\n\"\n      \"  if (count == 0) return -1;\\n\"\n      \"  for (int device = 0; device < count; ++device)\\n\"\n      \"  {\\n\"\n      \"    cudaDeviceProp prop;\\n\"\n      \"    if (cudaSuccess == cudaGetDeviceProperties(&prop, device))\\n\"\n      \"      std::printf(\\\"%d.%d \\\", prop.major, prop.minor);\\n\"\n      \"  }\\n\"\n      \"  return 0;\\n\"\n      \"}\\n\")\n\n    execute_process(COMMAND \"${CUDA_NVCC_EXECUTABLE}\" \"--run\" \"${__cufile}\"\n                    WORKING_DIRECTORY \"${PROJECT_BINARY_DIR}/CMakeFiles/\"\n                    RESULT_VARIABLE __nvcc_res OUTPUT_VARIABLE __nvcc_out\n                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)\n\n    if(__nvcc_res EQUAL 0)\n      string(REPLACE \"2.1\" \"2.1(2.0)\" __nvcc_out \"${__nvcc_out}\")\n      set(CUDA_gpu_detect_output ${__nvcc_out} CACHE INTERNAL \"Returned GPU architectures from caffe_detect_gpus tool\" FORCE)\n    endif()\n  endif()\n\n  if(NOT CUDA_gpu_detect_output)\n    message(STATUS \"Automatic GPU detection failed. 
Building for all known architectures.\")\n    set(${out_variable} ${Caffe_known_gpu_archs} PARENT_SCOPE)\n  else()\n    set(${out_variable} ${CUDA_gpu_detect_output} PARENT_SCOPE)\n  endif()\nendfunction()\n\n\n################################################################################################\n# Function for selecting GPU arch flags for nvcc based on CUDA_ARCH_NAME\n# Usage:\n#   caffe_select_nvcc_arch_flags(out_variable)\nfunction(caffe_select_nvcc_arch_flags out_variable)\n  # List of arch names\n  set(__archs_names \"Fermi\" \"Kepler\" \"Maxwell\" \"All\" \"Manual\")\n  set(__archs_name_default \"All\")\n  if(NOT CMAKE_CROSSCOMPILING)\n    list(APPEND __archs_names \"Auto\")\n    set(__archs_name_default \"Auto\")\n  endif()\n\n  # set CUDA_ARCH_NAME strings (so it will be seen as a drop-down list in the CMake GUI)\n  set(CUDA_ARCH_NAME ${__archs_name_default} CACHE STRING \"Select target NVIDIA GPU architecture.\")\n  set_property( CACHE CUDA_ARCH_NAME PROPERTY STRINGS \"\" ${__archs_names} )\n  mark_as_advanced(CUDA_ARCH_NAME)\n\n  # verify CUDA_ARCH_NAME value\n  if(NOT \";${__archs_names};\" MATCHES \";${CUDA_ARCH_NAME};\")\n    string(REPLACE \";\" \", \" __archs_names \"${__archs_names}\")\n    message(FATAL_ERROR \"Only ${__archs_names} architecture names are supported.\")\n  endif()\n\n  if(${CUDA_ARCH_NAME} STREQUAL \"Manual\")\n    set(CUDA_ARCH_BIN ${Caffe_known_gpu_archs} CACHE STRING \"Specify 'real' GPU architectures to build binaries for, BIN(PTX) format is supported\")\n    set(CUDA_ARCH_PTX \"50\"                     CACHE STRING \"Specify 'virtual' PTX architectures to build PTX intermediate code for\")\n    mark_as_advanced(CUDA_ARCH_BIN CUDA_ARCH_PTX)\n  else()\n    unset(CUDA_ARCH_BIN CACHE)\n    unset(CUDA_ARCH_PTX CACHE)\n  endif()\n\n  if(${CUDA_ARCH_NAME} STREQUAL \"Fermi\")\n    set(__cuda_arch_bin \"20 21(20)\")\n  elseif(${CUDA_ARCH_NAME} STREQUAL \"Kepler\")\n    set(__cuda_arch_bin \"30 35\")\n  elseif(${CUDA_ARCH_NAME} STREQUAL 
\"Maxwell\")\n    set(__cuda_arch_bin \"50\")\n  elseif(${CUDA_ARCH_NAME} STREQUAL \"All\")\n    set(__cuda_arch_bin ${Caffe_known_gpu_archs})\n  elseif(${CUDA_ARCH_NAME} STREQUAL \"Auto\")\n    caffe_detect_installed_gpus(__cuda_arch_bin)\n  else()  # (${CUDA_ARCH_NAME} STREQUAL \"Manual\")\n    set(__cuda_arch_bin ${CUDA_ARCH_BIN})\n  endif()\n\n  # remove dots and convert to lists\n  string(REGEX REPLACE \"\\\\.\" \"\" __cuda_arch_bin \"${__cuda_arch_bin}\")\n  string(REGEX REPLACE \"\\\\.\" \"\" __cuda_arch_ptx \"${CUDA_ARCH_PTX}\")\n  string(REGEX MATCHALL \"[0-9()]+\" __cuda_arch_bin \"${__cuda_arch_bin}\")\n  string(REGEX MATCHALL \"[0-9]+\"   __cuda_arch_ptx \"${__cuda_arch_ptx}\")\n  caffe_list_unique(__cuda_arch_bin __cuda_arch_ptx)\n\n  set(__nvcc_flags \"\")\n  set(__nvcc_archs_readable \"\")\n\n  # Tell NVCC to add binaries for the specified GPUs\n  foreach(__arch ${__cuda_arch_bin})\n    if(__arch MATCHES \"([0-9]+)\\\\(([0-9]+)\\\\)\")\n      # User explicitly specified PTX for the concrete BIN\n      list(APPEND __nvcc_flags -gencode arch=compute_${CMAKE_MATCH_2},code=sm_${CMAKE_MATCH_1})\n      list(APPEND __nvcc_archs_readable sm_${CMAKE_MATCH_1})\n    else()\n      # User didn't explicitly specify PTX for the concrete BIN, we assume PTX=BIN\n      list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=sm_${__arch})\n      list(APPEND __nvcc_archs_readable sm_${__arch})\n    endif()\n  endforeach()\n\n  # Tell NVCC to add PTX intermediate code for the specified architectures\n  foreach(__arch ${__cuda_arch_ptx})\n    list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=compute_${__arch})\n    list(APPEND __nvcc_archs_readable compute_${__arch})\n  endforeach()\n\n  string(REPLACE \";\" \" \" __nvcc_archs_readable \"${__nvcc_archs_readable}\")\n  set(${out_variable}          ${__nvcc_flags}          PARENT_SCOPE)\n  set(${out_variable}_readable ${__nvcc_archs_readable} 
PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Short command for cuda compilation\n# Usage:\n#   caffe_cuda_compile(<objlist_variable> <cuda_files>)\nmacro(caffe_cuda_compile objlist_variable)\n  foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)\n    set(${var}_backup_in_cuda_compile_ \"${${var}}\")\n\n    # we remove /EHa as it generates warnings under windows\n    string(REPLACE \"/EHa\" \"\" ${var} \"${${var}}\")\n\n  endforeach()\n\n  if(UNIX OR APPLE)\n    list(APPEND CUDA_NVCC_FLAGS -Xcompiler -fPIC)\n  endif()\n\n  if(APPLE)\n    list(APPEND CUDA_NVCC_FLAGS -Xcompiler -Wno-unused-function)\n  endif()\n\n  cuda_compile(cuda_objcs ${ARGN})\n\n  foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)\n    set(${var} \"${${var}_backup_in_cuda_compile_}\")\n    unset(${var}_backup_in_cuda_compile_)\n  endforeach()\n\n  set(${objlist_variable} ${cuda_objcs})\nendmacro()\n\n################################################################################################\n# Short command for cuDNN detection. 
We believe it will soon be part of the CUDA toolkit distribution.\n# That's why this is just a macro here rather than a FindcuDNN.cmake module.\n# Usage:\n#   detect_cuDNN()\nfunction(detect_cuDNN)\n  set(CUDNN_ROOT \"\" CACHE PATH \"CUDNN root folder\")\n\n  find_path(CUDNN_INCLUDE cudnn.h\n            PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDA_TOOLKIT_INCLUDE}\n            DOC \"Path to cuDNN include directory.\" )\n\n  get_filename_component(__libpath_hist ${CUDA_CUDART_LIBRARY} PATH)\n  find_library(CUDNN_LIBRARY NAMES libcudnn.so # libcudnn_static.a\n                             PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDNN_INCLUDE} ${__libpath_hist}\n                             DOC \"Path to cuDNN library.\")\n\n  if(CUDNN_INCLUDE AND CUDNN_LIBRARY)\n    set(HAVE_CUDNN  TRUE PARENT_SCOPE)\n    set(CUDNN_FOUND TRUE PARENT_SCOPE)\n\n    file(READ ${CUDNN_INCLUDE}/cudnn.h CUDNN_VERSION_FILE_CONTENTS)\n\n    # cuDNN v3 and beyond\n    string(REGEX MATCH \"define CUDNN_MAJOR * +([0-9]+)\"\n           CUDNN_VERSION_MAJOR \"${CUDNN_VERSION_FILE_CONTENTS}\")\n    string(REGEX REPLACE \"define CUDNN_MAJOR * +([0-9]+)\" \"\\\\1\"\n           CUDNN_VERSION_MAJOR \"${CUDNN_VERSION_MAJOR}\")\n    string(REGEX MATCH \"define CUDNN_MINOR * +([0-9]+)\"\n           CUDNN_VERSION_MINOR \"${CUDNN_VERSION_FILE_CONTENTS}\")\n    string(REGEX REPLACE \"define CUDNN_MINOR * +([0-9]+)\" \"\\\\1\"\n           CUDNN_VERSION_MINOR \"${CUDNN_VERSION_MINOR}\")\n    string(REGEX MATCH \"define CUDNN_PATCHLEVEL * +([0-9]+)\"\n           CUDNN_VERSION_PATCH \"${CUDNN_VERSION_FILE_CONTENTS}\")\n    string(REGEX REPLACE \"define CUDNN_PATCHLEVEL * +([0-9]+)\" \"\\\\1\"\n           CUDNN_VERSION_PATCH \"${CUDNN_VERSION_PATCH}\")\n\n    if(NOT CUDNN_VERSION_MAJOR)\n      set(CUDNN_VERSION \"???\")\n    else()\n      set(CUDNN_VERSION \"${CUDNN_VERSION_MAJOR}.${CUDNN_VERSION_MINOR}.${CUDNN_VERSION_PATCH}\")\n    endif()\n\n    message(STATUS \"Found cuDNN: ver. 
${CUDNN_VERSION} (include: ${CUDNN_INCLUDE}, library: ${CUDNN_LIBRARY})\")\n\n    string(COMPARE LESS \"${CUDNN_VERSION_MAJOR}\" 3 cuDNNVersionIncompatible)\n    if(cuDNNVersionIncompatible)\n      message(FATAL_ERROR \"cuDNN version 3 or higher is required.\")\n    endif()\n\n    set(CUDNN_VERSION \"${CUDNN_VERSION}\" PARENT_SCOPE)\n    mark_as_advanced(CUDNN_INCLUDE CUDNN_LIBRARY CUDNN_ROOT)\n\n  endif()\nendfunction()\n\n################################################################################################\n###  Non macro section\n################################################################################################\n\nfind_package(CUDA 5.5 QUIET)\nfind_cuda_helper_libs(curand)  # cmake 2.8.7 compatibility (it doesn't search for curand)\n\nif(NOT CUDA_FOUND)\n  return()\nendif()\n\nset(HAVE_CUDA TRUE)\nmessage(STATUS \"CUDA detected: \" ${CUDA_VERSION})\ninclude_directories(SYSTEM ${CUDA_INCLUDE_DIRS})\nlist(APPEND Caffe_LINKER_LIBS ${CUDA_CUDART_LIBRARY}\n                              ${CUDA_curand_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})\n\n# cudnn detection\nif(USE_CUDNN)\n  detect_cuDNN()\n  if(HAVE_CUDNN)\n    add_definitions(-DUSE_CUDNN)\n    include_directories(SYSTEM ${CUDNN_INCLUDE})\n    list(APPEND Caffe_LINKER_LIBS ${CUDNN_LIBRARY})\n  endif()\nendif()\n\n# setting nvcc arch flags\ncaffe_select_nvcc_arch_flags(NVCC_FLAGS_EXTRA)\nlist(APPEND CUDA_NVCC_FLAGS ${NVCC_FLAGS_EXTRA})\nmessage(STATUS \"Added CUDA NVCC flags for: ${NVCC_FLAGS_EXTRA_readable}\")\n\n# Boost 1.55 workaround, see https://svn.boost.org/trac/boost/ticket/9392 or\n# https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/src/picongpu/CMakeLists.txt\nif(Boost_VERSION EQUAL 105500)\n  message(STATUS \"Cuda + Boost 1.55: Applying noinline workaround\")\n  # avoid warning for CMake >= 2.8.12\n  set(CUDA_NVCC_FLAGS \"${CUDA_NVCC_FLAGS} \\\"-DBOOST_NOINLINE=__attribute__((noinline))\\\" \")\nendif()\n\n# disable some nvcc diagnostics that appear in boost, glog, 
gflags, opencv, etc.\nforeach(diag cc_clobber_ignored integer_sign_change useless_using_declaration set_but_not_used)\n  list(APPEND CUDA_NVCC_FLAGS -Xcudafe --diag_suppress=${diag})\nendforeach()\n\n# setting default testing device\nif(NOT CUDA_TEST_DEVICE)\n  set(CUDA_TEST_DEVICE -1)\nendif()\n\nmark_as_advanced(CUDA_BUILD_CUBIN CUDA_BUILD_EMULATION CUDA_VERBOSE_BUILD)\nmark_as_advanced(CUDA_SDK_ROOT_DIR CUDA_SEPARABLE_COMPILATION)\n\n# Handle clang/libc++ issue\nif(APPLE)\n  caffe_detect_darwin_version(OSX_VERSION)\n\n  # OSX 10.9 and higher uses clang/libc++ by default, which is incompatible with old CUDA toolkits\n  if(OSX_VERSION VERSION_GREATER 10.8)\n    # enabled by default if and only if CUDA version is less than 7.0\n    caffe_option(USE_libstdcpp \"Use libstdc++ instead of libc++\" (CUDA_VERSION VERSION_LESS 7.0))\n  endif()\nendif()\n
  },
  {
    "path": "caffe-fpn/cmake/Dependencies.cmake",
    "content": "# This list is required for static linking and exported to CaffeConfig.cmake\nset(Caffe_LINKER_LIBS \"\")\n\n# ---[ Boost\nfind_package(Boost 1.46 REQUIRED COMPONENTS system thread filesystem)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\nlist(APPEND Caffe_LINKER_LIBS ${Boost_LIBRARIES})\n\n# ---[ Threads\nfind_package(Threads REQUIRED)\nlist(APPEND Caffe_LINKER_LIBS ${CMAKE_THREAD_LIBS_INIT})\n\n# ---[ Google-glog\ninclude(\"cmake/External/glog.cmake\")\ninclude_directories(SYSTEM ${GLOG_INCLUDE_DIRS})\nlist(APPEND Caffe_LINKER_LIBS ${GLOG_LIBRARIES})\n\n# ---[ Google-gflags\ninclude(\"cmake/External/gflags.cmake\")\ninclude_directories(SYSTEM ${GFLAGS_INCLUDE_DIRS})\nlist(APPEND Caffe_LINKER_LIBS ${GFLAGS_LIBRARIES})\n\n# ---[ Google-protobuf\ninclude(cmake/ProtoBuf.cmake)\n\n# ---[ HDF5\nfind_package(HDF5 COMPONENTS HL REQUIRED)\ninclude_directories(SYSTEM ${HDF5_INCLUDE_DIRS} ${HDF5_HL_INCLUDE_DIR})\nlist(APPEND Caffe_LINKER_LIBS ${HDF5_LIBRARIES})\n\n# ---[ LMDB\nif(USE_LMDB)\n  find_package(LMDB REQUIRED)\n  include_directories(SYSTEM ${LMDB_INCLUDE_DIR})\n  list(APPEND Caffe_LINKER_LIBS ${LMDB_LIBRARIES})\n  add_definitions(-DUSE_LMDB)\n  if(ALLOW_LMDB_NOLOCK)\n    add_definitions(-DALLOW_LMDB_NOLOCK)\n  endif()\nendif()\n\n# ---[ LevelDB\nif(USE_LEVELDB)\n  find_package(LevelDB REQUIRED)\n  include_directories(SYSTEM ${LevelDB_INCLUDE})\n  list(APPEND Caffe_LINKER_LIBS ${LevelDB_LIBRARIES})\n  add_definitions(-DUSE_LEVELDB)\nendif()\n\n# ---[ Snappy\nif(USE_LEVELDB)\n  find_package(Snappy REQUIRED)\n  include_directories(SYSTEM ${Snappy_INCLUDE_DIR})\n  list(APPEND Caffe_LINKER_LIBS ${Snappy_LIBRARIES})\nendif()\n\n# ---[ CUDA\ninclude(cmake/Cuda.cmake)\nif(NOT HAVE_CUDA)\n  if(CPU_ONLY)\n    message(STATUS \"-- CUDA is disabled. Building without it...\")\n  else()\n    message(WARNING \"-- CUDA is not detected by cmake. Building without it...\")\n  endif()\n\n  # TODO: remove this not cross platform define in future. 
Use caffe_config.h instead.\n  add_definitions(-DCPU_ONLY)\nendif()\n\n# ---[ OpenCV\nif(USE_OPENCV)\n  find_package(OpenCV QUIET COMPONENTS core highgui imgproc imgcodecs)\n  if(NOT OpenCV_FOUND) # if not OpenCV 3.x, then imgcodecs are not found\n    find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc)\n  endif()\n  include_directories(SYSTEM ${OpenCV_INCLUDE_DIRS})\n  list(APPEND Caffe_LINKER_LIBS ${OpenCV_LIBS})\n  message(STATUS \"OpenCV found (${OpenCV_CONFIG_PATH})\")\n  add_definitions(-DUSE_OPENCV)\nendif()\n\n# ---[ BLAS\nif(NOT APPLE)\n  set(BLAS \"Atlas\" CACHE STRING \"Selected BLAS library\")\n  set_property(CACHE BLAS PROPERTY STRINGS \"Atlas;Open;MKL\")\n\n  if(BLAS STREQUAL \"Atlas\" OR BLAS STREQUAL \"atlas\")\n    find_package(Atlas REQUIRED)\n    include_directories(SYSTEM ${Atlas_INCLUDE_DIR})\n    list(APPEND Caffe_LINKER_LIBS ${Atlas_LIBRARIES})\n  elseif(BLAS STREQUAL \"Open\" OR BLAS STREQUAL \"open\")\n    find_package(OpenBLAS REQUIRED)\n    include_directories(SYSTEM ${OpenBLAS_INCLUDE_DIR})\n    list(APPEND Caffe_LINKER_LIBS ${OpenBLAS_LIB})\n  elseif(BLAS STREQUAL \"MKL\" OR BLAS STREQUAL \"mkl\")\n    find_package(MKL REQUIRED)\n    include_directories(SYSTEM ${MKL_INCLUDE_DIR})\n    list(APPEND Caffe_LINKER_LIBS ${MKL_LIBRARIES})\n    add_definitions(-DUSE_MKL)\n  endif()\nelseif(APPLE)\n  find_package(vecLib REQUIRED)\n  include_directories(SYSTEM ${vecLib_INCLUDE_DIR})\n  list(APPEND Caffe_LINKER_LIBS ${vecLib_LINKER_LIBS})\nendif()\n\n# ---[ Python\nif(BUILD_python)\n  if(NOT \"${python_version}\" VERSION_LESS \"3.0.0\")\n    # use python3\n    find_package(PythonInterp 3.0)\n    find_package(PythonLibs 3.0)\n    find_package(NumPy 1.7.1)\n    # Find the matching boost python implementation\n    set(version ${PYTHONLIBS_VERSION_STRING})\n    \n    STRING( REGEX REPLACE \"[^0-9]\" \"\" boost_py_version ${version} )\n    find_package(Boost 1.46 COMPONENTS \"python-py${boost_py_version}\")\n    set(Boost_PYTHON_FOUND 
${Boost_PYTHON-PY${boost_py_version}_FOUND})\n    \n    while(NOT \"${version}\" STREQUAL \"\" AND NOT Boost_PYTHON_FOUND)\n      STRING( REGEX REPLACE \"([0-9.]+).[0-9]+\" \"\\\\1\" version ${version} )\n      \n      STRING( REGEX REPLACE \"[^0-9]\" \"\" boost_py_version ${version} )\n      find_package(Boost 1.46 COMPONENTS \"python-py${boost_py_version}\")\n      set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})\n      \n      STRING( REGEX MATCHALL \"([0-9.]+).[0-9]+\" has_more_version ${version} )\n      if(\"${has_more_version}\" STREQUAL \"\")\n        break()\n      endif()\n    endwhile()\n    if(NOT Boost_PYTHON_FOUND)\n      find_package(Boost 1.46 COMPONENTS python)\n    endif()\n  else()\n    # disable Python 3 search\n    find_package(PythonInterp 2.7)\n    find_package(PythonLibs 2.7)\n    find_package(NumPy 1.7.1)\n    find_package(Boost 1.46 COMPONENTS python)\n  endif()\n  if(PYTHONLIBS_FOUND AND NUMPY_FOUND AND Boost_PYTHON_FOUND)\n    set(HAVE_PYTHON TRUE)\n    if(BUILD_python_layer)\n      add_definitions(-DWITH_PYTHON_LAYER)\n      include_directories(SYSTEM ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} ${Boost_INCLUDE_DIRS})\n      list(APPEND Caffe_LINKER_LIBS ${PYTHON_LIBRARIES} ${Boost_LIBRARIES})\n    endif()\n  endif()\nendif()\n\n# ---[ Matlab\nif(BUILD_matlab)\n  find_package(MatlabMex)\n  if(MATLABMEX_FOUND)\n    set(HAVE_MATLAB TRUE)\n  endif()\n\n  # sudo apt-get install liboctave-dev\n  find_program(Octave_compiler NAMES mkoctfile DOC \"Octave C++ compiler\")\n\n  if(HAVE_MATLAB AND Octave_compiler)\n    set(Matlab_build_mex_using \"Matlab\" CACHE STRING \"Select Matlab or Octave if both detected\")\n    set_property(CACHE Matlab_build_mex_using PROPERTY STRINGS \"Matlab;Octave\")\n  endif()\nendif()\n\n# ---[ Doxygen\nif(BUILD_docs)\n  find_package(Doxygen)\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/External/gflags.cmake",
    "content": "if (NOT __GFLAGS_INCLUDED) # guard against multiple includes\n  set(__GFLAGS_INCLUDED TRUE)\n\n  # use the system-wide gflags if present\n  find_package(GFlags)\n  if (GFLAGS_FOUND)\n    set(GFLAGS_EXTERNAL FALSE)\n  else()\n    # gflags will use pthreads if it's available in the system, so we must link with it\n    find_package(Threads)\n\n    # build directory\n    set(gflags_PREFIX ${CMAKE_BINARY_DIR}/external/gflags-prefix)\n    # install directory\n    set(gflags_INSTALL ${CMAKE_BINARY_DIR}/external/gflags-install)\n\n    # we build gflags statically, but want to link it into the caffe shared library\n    # this requires position-independent code\n    if (UNIX)\n        set(GFLAGS_EXTRA_COMPILER_FLAGS \"-fPIC\")\n    endif()\n\n    set(GFLAGS_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})\n    set(GFLAGS_C_FLAGS ${CMAKE_C_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})\n\n    ExternalProject_Add(gflags\n      PREFIX ${gflags_PREFIX}\n      GIT_REPOSITORY \"https://github.com/gflags/gflags.git\"\n      GIT_TAG \"v2.1.2\"\n      UPDATE_COMMAND \"\"\n      INSTALL_DIR ${gflags_INSTALL}\n      CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}\n                 -DCMAKE_INSTALL_PREFIX=${gflags_INSTALL}\n                 -DBUILD_SHARED_LIBS=OFF\n                 -DBUILD_STATIC_LIBS=ON\n                 -DBUILD_PACKAGING=OFF\n                 -DBUILD_TESTING=OFF\n                 -DBUILD_NC_TESTS=OFF\n                 -BUILD_CONFIG_TESTS=OFF\n                 -DINSTALL_HEADERS=ON\n                 -DCMAKE_C_FLAGS=${GFLAGS_C_FLAGS}\n                 -DCMAKE_CXX_FLAGS=${GFLAGS_CXX_FLAGS}\n      LOG_DOWNLOAD 1\n      LOG_INSTALL 1\n      )\n\n    set(GFLAGS_FOUND TRUE)\n    set(GFLAGS_INCLUDE_DIRS ${gflags_INSTALL}/include)\n    set(GFLAGS_LIBRARIES ${gflags_INSTALL}/lib/libgflags.a ${CMAKE_THREAD_LIBS_INIT})\n    set(GFLAGS_LIBRARY_DIRS ${gflags_INSTALL}/lib)\n    set(GFLAGS_EXTERNAL TRUE)\n\n    list(APPEND external_project_dependencies gflags)\n  
endif()\n\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/External/glog.cmake",
    "content": "# glog depends on gflags\ninclude(\"cmake/External/gflags.cmake\")\n\nif (NOT __GLOG_INCLUDED)\n  set(__GLOG_INCLUDED TRUE)\n\n  # try the system-wide glog first\n  find_package(Glog)\n  if (GLOG_FOUND)\n      set(GLOG_EXTERNAL FALSE)\n  else()\n    # fetch and build glog from github\n\n    # build directory\n    set(glog_PREFIX ${CMAKE_BINARY_DIR}/external/glog-prefix)\n    # install directory\n    set(glog_INSTALL ${CMAKE_BINARY_DIR}/external/glog-install)\n\n    # we build glog statically, but want to link it into the caffe shared library\n    # this requires position-independent code\n    if (UNIX)\n      set(GLOG_EXTRA_COMPILER_FLAGS \"-fPIC\")\n    endif()\n\n    set(GLOG_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})\n    set(GLOG_C_FLAGS ${CMAKE_C_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})\n\n    # depend on gflags if we're also building it\n    if (GFLAGS_EXTERNAL)\n      set(GLOG_DEPENDS gflags)\n    endif()\n\n    ExternalProject_Add(glog\n      DEPENDS ${GLOG_DEPENDS}\n      PREFIX ${glog_PREFIX}\n      GIT_REPOSITORY \"https://github.com/google/glog\"\n      GIT_TAG \"v0.3.4\"\n      UPDATE_COMMAND \"\"\n      INSTALL_DIR ${gflags_INSTALL}\n      CONFIGURE_COMMAND env \"CFLAGS=${GLOG_C_FLAGS}\" \"CXXFLAGS=${GLOG_CXX_FLAGS}\" ${glog_PREFIX}/src/glog/configure --prefix=${glog_INSTALL} --enable-shared=no --enable-static=yes --with-gflags=${GFLAGS_LIBRARY_DIRS}/..\n      LOG_DOWNLOAD 1\n      LOG_CONFIGURE 1\n      LOG_INSTALL 1\n      )\n\n    set(GLOG_FOUND TRUE)\n    set(GLOG_INCLUDE_DIRS ${glog_INSTALL}/include)\n    set(GLOG_LIBRARIES ${GFLAGS_LIBRARIES} ${glog_INSTALL}/lib/libglog.a)\n    set(GLOG_LIBRARY_DIRS ${glog_INSTALL}/lib)\n    set(GLOG_EXTERNAL TRUE)\n\n    list(APPEND external_project_dependencies glog)\n  endif()\n\nendif()\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Misc.cmake",
    "content": "# ---[ Configuration types\nset(CMAKE_CONFIGURATION_TYPES \"Debug;Release\" CACHE STRING \"Possible configurations\" FORCE)\nmark_as_advanced(CMAKE_CONFIGURATION_TYPES)\n\nif(DEFINED CMAKE_BUILD_TYPE)\n  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${CMAKE_CONFIGURATION_TYPES})\nendif()\n\n# --[ If user doesn't specify build type then assume release\nif(\"${CMAKE_BUILD_TYPE}\" STREQUAL \"\")\n  set(CMAKE_BUILD_TYPE Release)\nendif()\n\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"Clang\")\n  set(CMAKE_COMPILER_IS_CLANGXX TRUE)\nendif()\n\n# ---[ Solution folders\ncaffe_option(USE_PROJECT_FOLDERS \"IDE Solution folders\" (MSVC_IDE OR CMAKE_GENERATOR MATCHES Xcode) )\n\nif(USE_PROJECT_FOLDERS)\n  set_property(GLOBAL PROPERTY USE_FOLDERS ON)\n  set_property(GLOBAL PROPERTY PREDEFINED_TARGETS_FOLDER \"CMakeTargets\")\nendif()\n\n# ---[ Install options\nif(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)\n  set(CMAKE_INSTALL_PREFIX \"${PROJECT_BINARY_DIR}/install\" CACHE PATH \"Default install path\" FORCE)\nendif()\n\n# ---[ RPATH settings\nset(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE CACHE BOOLEAN \"Use link paths for shared library rpath\")\nset(CMAKE_MACOSX_RPATH TRUE)\n\nlist(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES ${CMAKE_INSTALL_PREFIX}/lib __is_systtem_dir)\nif(${__is_systtem_dir} STREQUAL -1)\n  set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/lib)\nendif()\n\n# ---[ Funny target\nif(UNIX OR APPLE)\n  add_custom_target(symlink_to_build COMMAND \"ln\" \"-sf\" \"${PROJECT_BINARY_DIR}\" \"${PROJECT_SOURCE_DIR}/build\"\n                                     COMMENT \"Adding symlink: <caffe_root>/build -> ${PROJECT_BINARY_DIR}\" )\nendif()\n\n# ---[ Set debug postfix\nset(Caffe_DEBUG_POSTFIX \"-d\")\n\nset(Caffe_POSTFIX \"\")\nif(CMAKE_BUILD_TYPE MATCHES \"Debug\")\n  set(Caffe_POSTFIX ${Caffe_DEBUG_POSTFIX})\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindAtlas.cmake",
    "content": "# Find the Atlas (and Lapack) libraries\n#\n# The following variables are optionally searched for defaults\n#  Atlas_ROOT_DIR:            Base directory where all Atlas components are found\n#\n# The following are set after configuration is done:\n#  Atlas_FOUND\n#  Atlas_INCLUDE_DIRS\n#  Atlas_LIBRARIES\n#  Atlas_LIBRARYRARY_DIRS\n\nset(Atlas_INCLUDE_SEARCH_PATHS\n  /usr/include/atlas\n  /usr/include/atlas-base\n  $ENV{Atlas_ROOT_DIR}\n  $ENV{Atlas_ROOT_DIR}/include\n)\n\nset(Atlas_LIB_SEARCH_PATHS\n  /usr/lib/atlas\n  /usr/lib/atlas-base\n  $ENV{Atlas_ROOT_DIR}\n  $ENV{Atlas_ROOT_DIR}/lib\n)\n\nfind_path(Atlas_CBLAS_INCLUDE_DIR   NAMES cblas.h   PATHS ${Atlas_INCLUDE_SEARCH_PATHS})\nfind_path(Atlas_CLAPACK_INCLUDE_DIR NAMES clapack.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})\n\nfind_library(Atlas_CBLAS_LIBRARY NAMES  ptcblas_r ptcblas cblas_r cblas PATHS ${Atlas_LIB_SEARCH_PATHS})\nfind_library(Atlas_BLAS_LIBRARY NAMES   atlas_r   atlas                 PATHS ${Atlas_LIB_SEARCH_PATHS})\nfind_library(Atlas_LAPACK_LIBRARY NAMES alapack_r alapack lapack_atlas  PATHS ${Atlas_LIB_SEARCH_PATHS})\n\nset(LOOKED_FOR\n  Atlas_CBLAS_INCLUDE_DIR\n  Atlas_CLAPACK_INCLUDE_DIR\n\n  Atlas_CBLAS_LIBRARY\n  Atlas_BLAS_LIBRARY\n  Atlas_LAPACK_LIBRARY\n)\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(Atlas DEFAULT_MSG ${LOOKED_FOR})\n\nif(ATLAS_FOUND)\n  set(Atlas_INCLUDE_DIR ${Atlas_CBLAS_INCLUDE_DIR} ${Atlas_CLAPACK_INCLUDE_DIR})\n  set(Atlas_LIBRARIES ${Atlas_LAPACK_LIBRARY} ${Atlas_CBLAS_LIBRARY} ${Atlas_BLAS_LIBRARY})\n  mark_as_advanced(${LOOKED_FOR})\n\n  message(STATUS \"Found Atlas (include: ${Atlas_CBLAS_INCLUDE_DIR}, library: ${Atlas_BLAS_LIBRARY})\")\nendif(ATLAS_FOUND)\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindGFlags.cmake",
    "content": "# - Try to find GFLAGS\n#\n# The following variables are optionally searched for defaults\n#  GFLAGS_ROOT_DIR:            Base directory where all GFLAGS components are found\n#\n# The following are set after configuration is done:\n#  GFLAGS_FOUND\n#  GFLAGS_INCLUDE_DIRS\n#  GFLAGS_LIBRARIES\n#  GFLAGS_LIBRARYRARY_DIRS\n\ninclude(FindPackageHandleStandardArgs)\n\nset(GFLAGS_ROOT_DIR \"\" CACHE PATH \"Folder contains Gflags\")\n\n# We are testing only a couple of files in the include directories\nif(WIN32)\n    find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h\n        PATHS ${GFLAGS_ROOT_DIR}/src/windows)\nelse()\n    find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h\n        PATHS ${GFLAGS_ROOT_DIR})\nendif()\n\nif(MSVC)\n    find_library(GFLAGS_LIBRARY_RELEASE\n        NAMES libgflags\n        PATHS ${GFLAGS_ROOT_DIR}\n        PATH_SUFFIXES Release)\n\n    find_library(GFLAGS_LIBRARY_DEBUG\n        NAMES libgflags-debug\n        PATHS ${GFLAGS_ROOT_DIR}\n        PATH_SUFFIXES Debug)\n\n    set(GFLAGS_LIBRARY optimized ${GFLAGS_LIBRARY_RELEASE} debug ${GFLAGS_LIBRARY_DEBUG})\nelse()\n    find_library(GFLAGS_LIBRARY gflags)\nendif()\n\nfind_package_handle_standard_args(GFlags DEFAULT_MSG GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)\n\n\nif(GFLAGS_FOUND)\n    set(GFLAGS_INCLUDE_DIRS ${GFLAGS_INCLUDE_DIR})\n    set(GFLAGS_LIBRARIES ${GFLAGS_LIBRARY})\n    message(STATUS \"Found gflags  (include: ${GFLAGS_INCLUDE_DIR}, library: ${GFLAGS_LIBRARY})\")\n    mark_as_advanced(GFLAGS_LIBRARY_DEBUG GFLAGS_LIBRARY_RELEASE\n                     GFLAGS_LIBRARY GFLAGS_INCLUDE_DIR GFLAGS_ROOT_DIR)\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindGlog.cmake",
    "content": "# - Try to find Glog\n#\n# The following variables are optionally searched for defaults\n#  GLOG_ROOT_DIR:            Base directory where all GLOG components are found\n#\n# The following are set after configuration is done:\n#  GLOG_FOUND\n#  GLOG_INCLUDE_DIRS\n#  GLOG_LIBRARIES\n#  GLOG_LIBRARYRARY_DIRS\n\ninclude(FindPackageHandleStandardArgs)\n\nset(GLOG_ROOT_DIR \"\" CACHE PATH \"Folder contains Google glog\")\n\nif(WIN32)\n    find_path(GLOG_INCLUDE_DIR glog/logging.h\n        PATHS ${GLOG_ROOT_DIR}/src/windows)\nelse()\n    find_path(GLOG_INCLUDE_DIR glog/logging.h\n        PATHS ${GLOG_ROOT_DIR})\nendif()\n\nif(MSVC)\n    find_library(GLOG_LIBRARY_RELEASE libglog_static\n        PATHS ${GLOG_ROOT_DIR}\n        PATH_SUFFIXES Release)\n\n    find_library(GLOG_LIBRARY_DEBUG libglog_static\n        PATHS ${GLOG_ROOT_DIR}\n        PATH_SUFFIXES Debug)\n\n    set(GLOG_LIBRARY optimized ${GLOG_LIBRARY_RELEASE} debug ${GLOG_LIBRARY_DEBUG})\nelse()\n    find_library(GLOG_LIBRARY glog\n        PATHS ${GLOG_ROOT_DIR}\n        PATH_SUFFIXES lib lib64)\nendif()\n\nfind_package_handle_standard_args(Glog DEFAULT_MSG GLOG_INCLUDE_DIR GLOG_LIBRARY)\n\nif(GLOG_FOUND)\n  set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})\n  set(GLOG_LIBRARIES ${GLOG_LIBRARY})\n  message(STATUS \"Found glog    (include: ${GLOG_INCLUDE_DIR}, library: ${GLOG_LIBRARY})\")\n  mark_as_advanced(GLOG_ROOT_DIR GLOG_LIBRARY_RELEASE GLOG_LIBRARY_DEBUG\n                                 GLOG_LIBRARY GLOG_INCLUDE_DIR)\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindLAPACK.cmake",
    "content": "# - Find LAPACK library\n# This module finds an installed fortran library that implements the LAPACK\n# linear-algebra interface (see http://www.netlib.org/lapack/).\n#\n# The approach follows that taken for the autoconf macro file, acx_lapack.m4\n# (distributed at http://ac-archive.sourceforge.net/ac-archive/acx_lapack.html).\n#\n# This module sets the following variables:\n#  LAPACK_FOUND - set to true if a library implementing the LAPACK interface is found\n#  LAPACK_LIBRARIES - list of libraries (using full path name) for LAPACK\n\n# Note: I do not think it is a good idea to mixup different BLAS/LAPACK versions\n# Hence, this script wants to find a Lapack library matching your Blas library\n\n# Do nothing if LAPACK was found before\nIF(NOT LAPACK_FOUND)\n\nSET(LAPACK_LIBRARIES)\nSET(LAPACK_INFO)\n\nIF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)\n  FIND_PACKAGE(BLAS)\nELSE(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)\n  FIND_PACKAGE(BLAS REQUIRED)\nENDIF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)\n\n# Old search lapack script\ninclude(CheckFortranFunctionExists)\n\nmacro(Check_Lapack_Libraries LIBRARIES _prefix _name _flags _list _blas)\n  # This macro checks for the existence of the combination of fortran libraries\n  # given by _list.  If the combination is found, this macro checks (using the\n  # Check_Fortran_Function_Exists macro) whether can link against that library\n  # combination using the name of a routine given by _name using the linker\n  # flags given by _flags.  If the combination of libraries is found and passes\n  # the link test, LIBRARIES is set to the list of complete library paths that\n  # have been found.  Otherwise, LIBRARIES is set to FALSE.\n  # N.B. 
_prefix is the prefix applied to the names of all cached variables that\n  # are generated internally and marked advanced by this macro.\n  set(_libraries_work TRUE)\n  set(${LIBRARIES})\n  set(_combined_name)\n  foreach(_library ${_list})\n    set(_combined_name ${_combined_name}_${_library})\n    if(_libraries_work)\n      if (WIN32)\n        find_library(${_prefix}_${_library}_LIBRARY\n          NAMES ${_library} PATHS ENV LIB PATHS ENV PATH)\n      else (WIN32)\n        if(APPLE)\n          find_library(${_prefix}_${_library}_LIBRARY\n            NAMES ${_library}\n            PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64\n            ENV DYLD_LIBRARY_PATH)\n        else(APPLE)\n          find_library(${_prefix}_${_library}_LIBRARY\n            NAMES ${_library}\n            PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64\n            ENV LD_LIBRARY_PATH)\n        endif(APPLE)\n      endif(WIN32)\n      mark_as_advanced(${_prefix}_${_library}_LIBRARY)\n      set(${LIBRARIES} ${${LIBRARIES}} ${${_prefix}_${_library}_LIBRARY})\n      set(_libraries_work ${${_prefix}_${_library}_LIBRARY})\n    endif(_libraries_work)\n  endforeach(_library ${_list})\n  if(_libraries_work)\n    # Test this combination of libraries.\n    set(CMAKE_REQUIRED_LIBRARIES ${_flags} ${${LIBRARIES}} ${_blas})\n    if (CMAKE_Fortran_COMPILER_WORKS)\n      check_fortran_function_exists(${_name} ${_prefix}${_combined_name}_WORKS)\n    else (CMAKE_Fortran_COMPILER_WORKS)\n      check_function_exists(\"${_name}_\" ${_prefix}${_combined_name}_WORKS)\n    endif (CMAKE_Fortran_COMPILER_WORKS)\n    set(CMAKE_REQUIRED_LIBRARIES)\n    mark_as_advanced(${_prefix}${_combined_name}_WORKS)\n    set(_libraries_work ${${_prefix}${_combined_name}_WORKS})\n  endif(_libraries_work)\n  if(NOT _libraries_work)\n    set(${LIBRARIES} FALSE)\n  endif(NOT _libraries_work)\nendmacro(Check_Lapack_Libraries)\n\n\nif(BLAS_FOUND)\n\n  # Intel MKL\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL 
\"mkl\"))\n    IF(MKL_LAPACK_LIBRARIES)\n      SET(LAPACK_LIBRARIES ${MKL_LAPACK_LIBRARIES} ${MKL_LIBRARIES})\n    ELSE(MKL_LAPACK_LIBRARIES)\n      SET(LAPACK_LIBRARIES ${MKL_LIBRARIES})\n    ENDIF(MKL_LAPACK_LIBRARIES)\n    SET(LAPACK_INCLUDE_DIR ${MKL_INCLUDE_DIR})\n    SET(LAPACK_INFO \"mkl\")\n  ENDIF()\n\n  # OpenBlas\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"open\"))\n    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})\n    check_function_exists(\"cheev_\" OPEN_LAPACK_WORKS)\n    if(OPEN_LAPACK_WORKS)\n      SET(LAPACK_INFO \"open\")\n    else()\n      message(STATUS \"It seems OpenBlas has not been compiled with Lapack support\")\n    endif()\n  endif()\n\n  # GotoBlas\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"goto\"))\n    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})\n    check_function_exists(\"cheev_\" GOTO_LAPACK_WORKS)\n    if(GOTO_LAPACK_WORKS)\n      SET(LAPACK_INFO \"goto\")\n    else()\n      message(STATUS \"It seems GotoBlas has not been compiled with Lapack support\")\n    endif()\n  endif()\n\n  # ACML\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"acml\"))\n    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})\n    check_function_exists(\"cheev_\" ACML_LAPACK_WORKS)\n    if(ACML_LAPACK_WORKS)\n      SET(LAPACK_INFO \"acml\")\n    else()\n      message(STATUS \"Strangely, this ACML library does not support Lapack?!\")\n    endif()\n  endif()\n\n  # Accelerate\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"accelerate\"))\n    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})\n    check_function_exists(\"cheev_\" ACCELERATE_LAPACK_WORKS)\n    if(ACCELERATE_LAPACK_WORKS)\n      SET(LAPACK_INFO \"accelerate\")\n    else()\n      message(STATUS \"Strangely, this Accelerate library does not support Lapack?!\")\n    endif()\n  endif()\n\n  # vecLib\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"veclib\"))\n    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})\n    check_function_exists(\"cheev_\" VECLIB_LAPACK_WORKS)\n    
if(VECLIB_LAPACK_WORKS)\n      SET(LAPACK_INFO \"veclib\")\n    else()\n      message(STATUS \"Strangely, this vecLib library does not support Lapack?!\")\n    endif()\n  endif()\n\n  # Generic LAPACK library?\n  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL \"generic\"))\n    check_lapack_libraries(\n      LAPACK_LIBRARIES\n      LAPACK\n      cheev\n      \"\"\n      \"lapack\"\n      \"${BLAS_LIBRARIES}\"\n      )\n    if(LAPACK_LIBRARIES)\n      SET(LAPACK_INFO \"generic\")\n    endif(LAPACK_LIBRARIES)\n  endif()\n\nelse(BLAS_FOUND)\n  message(STATUS \"LAPACK requires BLAS\")\nendif(BLAS_FOUND)\n\nif(LAPACK_INFO)\n  set(LAPACK_FOUND TRUE)\nelse(LAPACK_INFO)\n  set(LAPACK_FOUND FALSE)\nendif(LAPACK_INFO)\n\nIF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)\n  message(FATAL_ERROR \"Cannot find a library with LAPACK API. Please specify library location.\")\nENDIF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)\nIF(NOT LAPACK_FIND_QUIETLY)\n  IF(LAPACK_FOUND)\n    MESSAGE(STATUS \"Found a library with LAPACK API. (${LAPACK_INFO})\")\n  ELSE(LAPACK_FOUND)\n    MESSAGE(STATUS \"Cannot find a library with LAPACK API. Not using LAPACK.\")\n  ENDIF(LAPACK_FOUND)\nENDIF(NOT LAPACK_FIND_QUIETLY)\n\n# Do nothing if LAPACK was found before\nENDIF(NOT LAPACK_FOUND)\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindLMDB.cmake",
    "content": "# Try to find the LMBD libraries and headers\n#  LMDB_FOUND - system has LMDB lib\n#  LMDB_INCLUDE_DIR - the LMDB include directory\n#  LMDB_LIBRARIES - Libraries needed to use LMDB\n\n# FindCWD based on FindGMP by:\n# Copyright (c) 2006, Laurent Montel, <montel@kde.org>\n#\n# Redistribution and use is allowed according to the terms of the BSD license.\n\n# Adapted from FindCWD by:\n# Copyright 2013 Conrad Steenberg <conrad.steenberg@gmail.com>\n# Aug 31, 2013\n\nfind_path(LMDB_INCLUDE_DIR NAMES  lmdb.h PATHS \"$ENV{LMDB_DIR}/include\")\nfind_library(LMDB_LIBRARIES NAMES lmdb   PATHS \"$ENV{LMDB_DIR}/lib\" )\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(LMDB DEFAULT_MSG LMDB_INCLUDE_DIR LMDB_LIBRARIES)\n\nif(LMDB_FOUND)\n  message(STATUS \"Found lmdb    (include: ${LMDB_INCLUDE_DIR}, library: ${LMDB_LIBRARIES})\")\n  mark_as_advanced(LMDB_INCLUDE_DIR LMDB_LIBRARIES)\n\n  caffe_parse_header(${LMDB_INCLUDE_DIR}/lmdb.h\n                     LMDB_VERSION_LINES MDB_VERSION_MAJOR MDB_VERSION_MINOR MDB_VERSION_PATCH)\n  set(LMDB_VERSION \"${MDB_VERSION_MAJOR}.${MDB_VERSION_MINOR}.${MDB_VERSION_PATCH}\")\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindLevelDB.cmake",
    "content": "# - Find LevelDB\n#\n#  LevelDB_INCLUDES  - List of LevelDB includes\n#  LevelDB_LIBRARIES - List of libraries when using LevelDB.\n#  LevelDB_FOUND     - True if LevelDB found.\n\n# Look for the header file.\nfind_path(LevelDB_INCLUDE NAMES leveldb/db.h\n                          PATHS $ENV{LEVELDB_ROOT}/include /opt/local/include /usr/local/include /usr/include\n                          DOC \"Path in which the file leveldb/db.h is located.\" )\n\n# Look for the library.\nfind_library(LevelDB_LIBRARY NAMES leveldb\n                             PATHS /usr/lib $ENV{LEVELDB_ROOT}/lib\n                             DOC \"Path to leveldb library.\" )\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(LevelDB DEFAULT_MSG LevelDB_INCLUDE LevelDB_LIBRARY)\n\nif(LEVELDB_FOUND)\n  message(STATUS \"Found LevelDB (include: ${LevelDB_INCLUDE}, library: ${LevelDB_LIBRARY})\")\n  set(LevelDB_INCLUDES ${LevelDB_INCLUDE})\n  set(LevelDB_LIBRARIES ${LevelDB_LIBRARY})\n  mark_as_advanced(LevelDB_INCLUDE LevelDB_LIBRARY)\n\n  if(EXISTS \"${LevelDB_INCLUDE}/leveldb/db.h\")\n    file(STRINGS \"${LevelDB_INCLUDE}/leveldb/db.h\" __version_lines\n           REGEX \"static const int k[^V]+Version[ \\t]+=[ \\t]+[0-9]+;\")\n\n    foreach(__line ${__version_lines})\n      if(__line MATCHES \"[^k]+kMajorVersion[ \\t]+=[ \\t]+([0-9]+);\")\n        set(LEVELDB_VERSION_MAJOR ${CMAKE_MATCH_1})\n      elseif(__line MATCHES \"[^k]+kMinorVersion[ \\t]+=[ \\t]+([0-9]+);\")\n        set(LEVELDB_VERSION_MINOR ${CMAKE_MATCH_1})\n      endif()\n    endforeach()\n\n    if(LEVELDB_VERSION_MAJOR AND LEVELDB_VERSION_MINOR)\n      set(LEVELDB_VERSION \"${LEVELDB_VERSION_MAJOR}.${LEVELDB_VERSION_MINOR}\")\n    endif()\n\n    caffe_clear_vars(__line __version_lines)\n  endif()\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindMKL.cmake",
    "content": "# Find the MKL libraries\r\n#\r\n# Options:\r\n#\r\n#   MKL_USE_SINGLE_DYNAMIC_LIBRARY  : use single dynamic library interface\r\n#   MKL_USE_STATIC_LIBS             : use static libraries\r\n#   MKL_MULTI_THREADED              : use multi-threading\r\n#\r\n# This module defines the following variables:\r\n#\r\n#   MKL_FOUND            : True mkl is found\r\n#   MKL_INCLUDE_DIR      : unclude directory\r\n#   MKL_LIBRARIES        : the libraries to link against.\r\n\r\n\r\n# ---[ Options\r\ncaffe_option(MKL_USE_SINGLE_DYNAMIC_LIBRARY \"Use single dynamic library interface\" ON)\r\ncaffe_option(MKL_USE_STATIC_LIBS \"Use static libraries\" OFF IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)\r\ncaffe_option(MKL_MULTI_THREADED  \"Use multi-threading\"   ON IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)\r\n\r\n# ---[ Root folders\r\nset(INTEL_ROOT \"/opt/intel\" CACHE PATH \"Folder contains intel libs\")\r\nfind_path(MKL_ROOT include/mkl.h PATHS $ENV{MKL_ROOT} ${INTEL_ROOT}/mkl\r\n                                   DOC \"Folder contains MKL\")\r\n\r\n# ---[ Find include dir\r\nfind_path(MKL_INCLUDE_DIR mkl.h PATHS ${MKL_ROOT} PATH_SUFFIXES include)\r\nset(__looked_for MKL_INCLUDE_DIR)\r\n\r\n# ---[ Find libraries\r\nif(CMAKE_SIZEOF_VOID_P EQUAL 4)\r\n  set(__path_suffixes lib lib/ia32)\r\nelse()\r\n  set(__path_suffixes lib lib/intel64)\r\nendif()\r\n\r\nset(__mkl_libs \"\")\r\nif(MKL_USE_SINGLE_DYNAMIC_LIBRARY)\r\n  list(APPEND __mkl_libs rt)\r\nelse()\r\n  if(CMAKE_SIZEOF_VOID_P EQUAL 4)\r\n    if(WIN32)\r\n      list(APPEND __mkl_libs intel_c)\r\n    else()\r\n      list(APPEND __mkl_libs intel gf)\r\n    endif()\r\n  else()\r\n    list(APPEND __mkl_libs intel_lp64 gf_lp64)\r\n  endif()\r\n\r\n  if(MKL_MULTI_THREADED)\r\n    list(APPEND __mkl_libs intel_thread)\r\n  else()\r\n     list(APPEND __mkl_libs sequential)\r\n  endif()\r\n\r\n  list(APPEND __mkl_libs core cdft_core)\r\nendif()\r\n\r\n\r\nforeach (__lib ${__mkl_libs})\r\n  set(__mkl_lib \"mkl_${__lib}\")\r\n  
string(TOUPPER ${__mkl_lib} __mkl_lib_upper)\r\n\r\n  if(MKL_USE_STATIC_LIBS)\r\n    set(__mkl_lib \"lib${__mkl_lib}.a\")\r\n  endif()\r\n\r\n  find_library(${__mkl_lib_upper}_LIBRARY\r\n        NAMES ${__mkl_lib}\r\n        PATHS ${MKL_ROOT} \"${MKL_INCLUDE_DIR}/..\"\r\n        PATH_SUFFIXES ${__path_suffixes}\r\n        DOC \"The path to Intel(R) MKL ${__mkl_lib} library\")\r\n  mark_as_advanced(${__mkl_lib_upper}_LIBRARY)\r\n\r\n  list(APPEND __looked_for ${__mkl_lib_upper}_LIBRARY)\r\n  list(APPEND MKL_LIBRARIES ${${__mkl_lib_upper}_LIBRARY})\r\nendforeach()\r\n\r\n\r\nif(NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)\r\n  if (MKL_USE_STATIC_LIBS)\r\n    set(__iomp5_libs iomp5 libiomp5mt.lib)\r\n  else()\r\n    set(__iomp5_libs iomp5 libiomp5md.lib)\r\n  endif()\r\n\r\n  if(WIN32)\r\n    find_path(INTEL_INCLUDE_DIR omp.h PATHS ${INTEL_ROOT} PATH_SUFFIXES include)\r\n    list(APPEND __looked_for INTEL_INCLUDE_DIR)\r\n  endif()\r\n\r\n  find_library(MKL_RTL_LIBRARY ${__iomp5_libs}\r\n     PATHS ${INTEL_RTL_ROOT} ${INTEL_ROOT}/compiler ${MKL_ROOT}/.. ${MKL_ROOT}/../compiler\r\n     PATH_SUFFIXES ${__path_suffixes}\r\n     DOC \"Path to OpenMP runtime library\")\r\n\r\n  list(APPEND __looked_for MKL_RTL_LIBRARY)\r\n  list(APPEND MKL_LIBRARIES ${MKL_RTL_LIBRARY})\r\nendif()\r\n\r\n\r\ninclude(FindPackageHandleStandardArgs)\r\nfind_package_handle_standard_args(MKL DEFAULT_MSG ${__looked_for})\r\n\r\nif(MKL_FOUND)\r\n  message(STATUS \"Found MKL (include: ${MKL_INCLUDE_DIR}, lib: ${MKL_LIBRARIES})\")\r\nendif()\r\n\r\ncaffe_clear_vars(__looked_for __mkl_libs __path_suffixes __lib_suffix __iomp5_libs)\r\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindMatlabMex.cmake",
    "content": "# This module looks for MatlabMex compiler\n# Defines variables:\n#    Matlab_DIR    - Matlab root dir\n#    Matlab_mex    - path to mex compiler\n#    Matlab_mexext - path to mexext\n\nif(MSVC)\n  foreach(__ver \"9.30\" \"7.14\" \"7.11\" \"7.10\" \"7.9\" \"7.8\" \"7.7\")\n    get_filename_component(__matlab_root \"[HKEY_LOCAL_MACHINE\\\\SOFTWARE\\\\MathWorks\\\\MATLAB\\\\${__ver};MATLABROOT]\" ABSOLUTE)\n    if(__matlab_root)\n      break()\n    endif()\n  endforeach()\nendif()\n\nif(APPLE)\n  foreach(__ver \"R2014b\" \"R2014a\" \"R2013b\" \"R2013a\" \"R2012b\" \"R2012a\" \"R2011b\" \"R2011a\" \"R2010b\" \"R2010a\")\n    if(EXISTS /Applications/MATLAB_${__ver}.app)\n      set(__matlab_root /Applications/MATLAB_${__ver}.app)\n      break()\n    endif()\n  endforeach()\nendif()\n\nif(UNIX)\n   execute_process(COMMAND which matlab OUTPUT_STRIP_TRAILING_WHITESPACE\n                   OUTPUT_VARIABLE __out RESULT_VARIABLE __res)\n\n   if(__res MATCHES 0) # Suppress `readlink` warning if `which` returned nothing\n     execute_process(COMMAND which matlab  COMMAND xargs readlink\n                     COMMAND xargs dirname COMMAND xargs dirname COMMAND xargs echo -n\n                     OUTPUT_VARIABLE __matlab_root OUTPUT_STRIP_TRAILING_WHITESPACE)\n   endif()\nendif()\n\n\nfind_path(Matlab_DIR NAMES bin/mex bin/mexext PATHS ${__matlab_root}\n                     DOC \"Matlab directory\" NO_DEFAULT_PATH)\n\nfind_program(Matlab_mex    NAMES mex    mex.bat    HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)\nfind_program(Matlab_mexext NAMES mexext mexext.bat HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(MatlabMex DEFAULT_MSG Matlab_mex Matlab_mexext)\n\nif(MATLABMEX_FOUND)\n  mark_as_advanced(Matlab_mex Matlab_mexext)\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindNumPy.cmake",
    "content": "# - Find the NumPy libraries\n# This module finds if NumPy is installed, and sets the following variables\n# indicating where it is.\n#\n# TODO: Update to provide the libraries and paths for linking npymath lib.\n#\n#  NUMPY_FOUND               - was NumPy found\n#  NUMPY_VERSION             - the version of NumPy found as a string\n#  NUMPY_VERSION_MAJOR       - the major version number of NumPy\n#  NUMPY_VERSION_MINOR       - the minor version number of NumPy\n#  NUMPY_VERSION_PATCH       - the patch version number of NumPy\n#  NUMPY_VERSION_DECIMAL     - e.g. version 1.6.1 is 10601\n#  NUMPY_INCLUDE_DIR         - path to the NumPy include files\n\nunset(NUMPY_VERSION)\nunset(NUMPY_INCLUDE_DIR)\n\nif(PYTHONINTERP_FOUND)\n  execute_process(COMMAND \"${PYTHON_EXECUTABLE}\" \"-c\"\n    \"import numpy as n; print(n.__version__); print(n.get_include());\"\n    RESULT_VARIABLE __result\n    OUTPUT_VARIABLE __output\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n\n  if(__result MATCHES 0)\n    string(REGEX REPLACE \";\" \"\\\\\\\\;\" __values ${__output})\n    string(REGEX REPLACE \"\\r?\\n\" \";\"    __values ${__values})\n    list(GET __values 0 NUMPY_VERSION)\n    list(GET __values 1 NUMPY_INCLUDE_DIR)\n\n    string(REGEX MATCH \"^([0-9])+\\\\.([0-9])+\\\\.([0-9])+\" __ver_check \"${NUMPY_VERSION}\")\n    if(NOT \"${__ver_check}\" STREQUAL \"\")\n      set(NUMPY_VERSION_MAJOR ${CMAKE_MATCH_1})\n      set(NUMPY_VERSION_MINOR ${CMAKE_MATCH_2})\n      set(NUMPY_VERSION_PATCH ${CMAKE_MATCH_3})\n      math(EXPR NUMPY_VERSION_DECIMAL\n        \"(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}\")\n      string(REGEX REPLACE \"\\\\\\\\\" \"/\"  NUMPY_INCLUDE_DIR ${NUMPY_INCLUDE_DIR})\n    else()\n     unset(NUMPY_VERSION)\n     unset(NUMPY_INCLUDE_DIR)\n     message(STATUS \"Requested NumPy version and include path, but got instead:\\n${__output}\\n\")\n    endif()\n  endif()\nelse()\n  message(STATUS \"To find NumPy Python 
interpreter is required to be found.\")\nendif()\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(NumPy REQUIRED_VARS NUMPY_INCLUDE_DIR NUMPY_VERSION\n                                        VERSION_VAR   NUMPY_VERSION)\n\nif(NUMPY_FOUND)\n  message(STATUS \"NumPy ver. ${NUMPY_VERSION} found (include: ${NUMPY_INCLUDE_DIR})\")\nendif()\n\ncaffe_clear_vars(__result __output __error_value __values __ver_check)\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindOpenBLAS.cmake",
    "content": "\n\nSET(Open_BLAS_INCLUDE_SEARCH_PATHS\n  /usr/include\n  /usr/include/openblas\n  /usr/include/openblas-base\n  /usr/local/include\n  /usr/local/include/openblas\n  /usr/local/include/openblas-base\n  /opt/OpenBLAS/include\n  $ENV{OpenBLAS_HOME}\n  $ENV{OpenBLAS_HOME}/include\n)\n\nSET(Open_BLAS_LIB_SEARCH_PATHS\n        /lib/\n        /lib/openblas-base\n        /lib64/\n        /usr/lib\n        /usr/lib/openblas-base\n        /usr/lib64\n        /usr/local/lib\n        /usr/local/lib64\n        /opt/OpenBLAS/lib\n        $ENV{OpenBLAS}cd\n        $ENV{OpenBLAS}/lib\n        $ENV{OpenBLAS_HOME}\n        $ENV{OpenBLAS_HOME}/lib\n )\n\nFIND_PATH(OpenBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Open_BLAS_INCLUDE_SEARCH_PATHS})\nFIND_LIBRARY(OpenBLAS_LIB NAMES openblas PATHS ${Open_BLAS_LIB_SEARCH_PATHS})\n\nSET(OpenBLAS_FOUND ON)\n\n#    Check include files\nIF(NOT OpenBLAS_INCLUDE_DIR)\n    SET(OpenBLAS_FOUND OFF)\n    MESSAGE(STATUS \"Could not find OpenBLAS include. Turning OpenBLAS_FOUND off\")\nENDIF()\n\n#    Check libraries\nIF(NOT OpenBLAS_LIB)\n    SET(OpenBLAS_FOUND OFF)\n    MESSAGE(STATUS \"Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off\")\nENDIF()\n\nIF (OpenBLAS_FOUND)\n  IF (NOT OpenBLAS_FIND_QUIETLY)\n    MESSAGE(STATUS \"Found OpenBLAS libraries: ${OpenBLAS_LIB}\")\n    MESSAGE(STATUS \"Found OpenBLAS include: ${OpenBLAS_INCLUDE_DIR}\")\n  ENDIF (NOT OpenBLAS_FIND_QUIETLY)\nELSE (OpenBLAS_FOUND)\n  IF (OpenBLAS_FIND_REQUIRED)\n    MESSAGE(FATAL_ERROR \"Could not find OpenBLAS\")\n  ENDIF (OpenBLAS_FIND_REQUIRED)\nENDIF (OpenBLAS_FOUND)\n\nMARK_AS_ADVANCED(\n    OpenBLAS_INCLUDE_DIR\n    OpenBLAS_LIB\n    OpenBLAS\n)\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindSnappy.cmake",
    "content": "# Find the Snappy libraries\n#\n# The following variables are optionally searched for defaults\n#  Snappy_ROOT_DIR:    Base directory where all Snappy components are found\n#\n# The following are set after configuration is done:\n#  SNAPPY_FOUND\n#  Snappy_INCLUDE_DIR\n#  Snappy_LIBRARIES\n\nfind_path(Snappy_INCLUDE_DIR NAMES snappy.h\n                             PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/include)\n\nfind_library(Snappy_LIBRARIES NAMES snappy\n                              PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/lib)\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(Snappy DEFAULT_MSG Snappy_INCLUDE_DIR Snappy_LIBRARIES)\n\nif(SNAPPY_FOUND)\n  message(STATUS \"Found Snappy  (include: ${Snappy_INCLUDE_DIR}, library: ${Snappy_LIBRARIES})\")\n  mark_as_advanced(Snappy_INCLUDE_DIR Snappy_LIBRARIES)\n\n  caffe_parse_header(${Snappy_INCLUDE_DIR}/snappy-stubs-public.h\n                     SNAPPY_VERION_LINES SNAPPY_MAJOR SNAPPY_MINOR SNAPPY_PATCHLEVEL)\n  set(Snappy_VERSION \"${SNAPPY_MAJOR}.${SNAPPY_MINOR}.${SNAPPY_PATCHLEVEL}\")\nendif()\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Modules/FindvecLib.cmake",
    "content": "# Find the vecLib libraries as part of Accelerate.framework or as standalon framework\n#\n# The following are set after configuration is done:\n#  VECLIB_FOUND\n#  vecLib_INCLUDE_DIR\n#  vecLib_LINKER_LIBS\n\n\nif(NOT APPLE)\n  return()\nendif()\n\nset(__veclib_include_suffix \"Frameworks/vecLib.framework/Versions/Current/Headers\")\n\nfind_path(vecLib_INCLUDE_DIR vecLib.h\n          DOC \"vecLib include directory\"\n          PATHS /System/Library/${__veclib_include_suffix}\n                /System/Library/Frameworks/Accelerate.framework/Versions/Current/${__veclib_include_suffix}\n                /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/)\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(vecLib DEFAULT_MSG vecLib_INCLUDE_DIR)\n\nif(VECLIB_FOUND)\n  if(vecLib_INCLUDE_DIR MATCHES \"^/System/Library/Frameworks/vecLib.framework.*\")\n    set(vecLib_LINKER_LIBS -lcblas \"-framework vecLib\")\n    message(STATUS \"Found standalone vecLib.framework\")\n  else()\n    set(vecLib_LINKER_LIBS -lcblas \"-framework Accelerate\")\n    message(STATUS \"Found vecLib as part of Accelerate.framework\")\n  endif()\n\n  mark_as_advanced(vecLib_INCLUDE_DIR)\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/ProtoBuf.cmake",
    "content": "# Finds Google Protocol Buffers library and compilers and extends\n# the standard cmake script with version and python generation support\n\nfind_package( Protobuf REQUIRED )\ninclude_directories(SYSTEM ${PROTOBUF_INCLUDE_DIR})\nlist(APPEND Caffe_LINKER_LIBS ${PROTOBUF_LIBRARIES})\n\n# As of Ubuntu 14.04 protoc is no longer a part of libprotobuf-dev package\n# and should be installed separately as in: sudo apt-get install protobuf-compiler\nif(EXISTS ${PROTOBUF_PROTOC_EXECUTABLE})\n  message(STATUS \"Found PROTOBUF Compiler: ${PROTOBUF_PROTOC_EXECUTABLE}\")\nelse()\n  message(FATAL_ERROR \"Could not find PROTOBUF Compiler\")\nendif()\n\nif(PROTOBUF_FOUND)\n  # fetches protobuf version\n  caffe_parse_header(${PROTOBUF_INCLUDE_DIR}/google/protobuf/stubs/common.h VERION_LINE GOOGLE_PROTOBUF_VERSION)\n  string(REGEX MATCH \"([0-9])00([0-9])00([0-9])\" PROTOBUF_VERSION ${GOOGLE_PROTOBUF_VERSION})\n  set(PROTOBUF_VERSION \"${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}\")\n  unset(GOOGLE_PROTOBUF_VERSION)\nendif()\n\n# place where to generate protobuf sources\nset(proto_gen_folder \"${PROJECT_BINARY_DIR}/include/caffe/proto\")\ninclude_directories(SYSTEM \"${PROJECT_BINARY_DIR}/include\")\n\nset(PROTOBUF_GENERATE_CPP_APPEND_PATH TRUE)\n\n################################################################################################\n# Modification of standard 'protobuf_generate_cpp()' with output dir parameter and python support\n# Usage:\n#   caffe_protobuf_generate_cpp_py(<output_dir> <srcs_var> <hdrs_var> <python_var> <proto_files>)\nfunction(caffe_protobuf_generate_cpp_py output_dir srcs_var hdrs_var python_var)\n  if(NOT ARGN)\n    message(SEND_ERROR \"Error: caffe_protobuf_generate_cpp_py() called without any proto files\")\n    return()\n  endif()\n\n  if(PROTOBUF_GENERATE_CPP_APPEND_PATH)\n    # Create an include path for each file specified\n    foreach(fil ${ARGN})\n      get_filename_component(abs_fil ${fil} ABSOLUTE)\n      
get_filename_component(abs_path ${abs_fil} PATH)\n      list(FIND _protoc_include ${abs_path} _contains_already)\n      if(${_contains_already} EQUAL -1)\n        list(APPEND _protoc_include -I ${abs_path})\n      endif()\n    endforeach()\n  else()\n    set(_protoc_include -I ${CMAKE_CURRENT_SOURCE_DIR})\n  endif()\n\n  if(DEFINED PROTOBUF_IMPORT_DIRS)\n    foreach(dir ${PROTOBUF_IMPORT_DIRS})\n      get_filename_component(abs_path ${dir} ABSOLUTE)\n      list(FIND _protoc_include ${abs_path} _contains_already)\n      if(${_contains_already} EQUAL -1)\n        list(APPEND _protoc_include -I ${abs_path})\n      endif()\n    endforeach()\n  endif()\n\n  set(${srcs_var})\n  set(${hdrs_var})\n  set(${python_var})\n  foreach(fil ${ARGN})\n    get_filename_component(abs_fil ${fil} ABSOLUTE)\n    get_filename_component(fil_we ${fil} NAME_WE)\n\n    list(APPEND ${srcs_var} \"${output_dir}/${fil_we}.pb.cc\")\n    list(APPEND ${hdrs_var} \"${output_dir}/${fil_we}.pb.h\")\n    list(APPEND ${python_var} \"${output_dir}/${fil_we}_pb2.py\")\n\n    add_custom_command(\n      OUTPUT \"${output_dir}/${fil_we}.pb.cc\"\n             \"${output_dir}/${fil_we}.pb.h\"\n             \"${output_dir}/${fil_we}_pb2.py\"\n      COMMAND ${CMAKE_COMMAND} -E make_directory \"${output_dir}\"\n      COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --cpp_out    ${output_dir} ${_protoc_include} ${abs_fil}\n      COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --python_out ${output_dir} ${_protoc_include} ${abs_fil}\n      DEPENDS ${abs_fil}\n      COMMENT \"Running C++/Python protocol buffer compiler on ${fil}\" VERBATIM )\n  endforeach()\n\n  set_source_files_properties(${${srcs_var}} ${${hdrs_var}} ${${python_var}} PROPERTIES GENERATED TRUE)\n  set(${srcs_var} ${${srcs_var}} PARENT_SCOPE)\n  set(${hdrs_var} ${${hdrs_var}} PARENT_SCOPE)\n  set(${python_var} ${${python_var}} PARENT_SCOPE)\nendfunction()\n"
  },
  {
    "path": "caffe-fpn/cmake/Summary.cmake",
    "content": "################################################################################################\n# Caffe status report function.\n# Automatically align right column and selects text based on condition.\n# Usage:\n#   caffe_status(<text>)\n#   caffe_status(<heading> <value1> [<value2> ...])\n#   caffe_status(<heading> <condition> THEN <text for TRUE> ELSE <text for FALSE> )\nfunction(caffe_status text)\n  set(status_cond)\n  set(status_then)\n  set(status_else)\n\n  set(status_current_name \"cond\")\n  foreach(arg ${ARGN})\n    if(arg STREQUAL \"THEN\")\n      set(status_current_name \"then\")\n    elseif(arg STREQUAL \"ELSE\")\n      set(status_current_name \"else\")\n    else()\n      list(APPEND status_${status_current_name} ${arg})\n    endif()\n  endforeach()\n\n  if(DEFINED status_cond)\n    set(status_placeholder_length 23)\n    string(RANDOM LENGTH ${status_placeholder_length} ALPHABET \" \" status_placeholder)\n    string(LENGTH \"${text}\" status_text_length)\n    if(status_text_length LESS status_placeholder_length)\n      string(SUBSTRING \"${text}${status_placeholder}\" 0 ${status_placeholder_length} status_text)\n    elseif(DEFINED status_then OR DEFINED status_else)\n      message(STATUS \"${text}\")\n      set(status_text \"${status_placeholder}\")\n    else()\n      set(status_text \"${text}\")\n    endif()\n\n    if(DEFINED status_then OR DEFINED status_else)\n      if(${status_cond})\n        string(REPLACE \";\" \" \" status_then \"${status_then}\")\n        string(REGEX REPLACE \"^[ \\t]+\" \"\" status_then \"${status_then}\")\n        message(STATUS \"${status_text} ${status_then}\")\n      else()\n        string(REPLACE \";\" \" \" status_else \"${status_else}\")\n        string(REGEX REPLACE \"^[ \\t]+\" \"\" status_else \"${status_else}\")\n        message(STATUS \"${status_text} ${status_else}\")\n      endif()\n    else()\n      string(REPLACE \";\" \" \" status_cond \"${status_cond}\")\n      string(REGEX REPLACE \"^[ 
\\t]+\" \"\" status_cond \"${status_cond}\")\n      message(STATUS \"${status_text} ${status_cond}\")\n    endif()\n  else()\n    message(STATUS \"${text}\")\n  endif()\nendfunction()\n\n\n################################################################################################\n# Function for fetching Caffe version from git and headers\n# Usage:\n#   caffe_extract_caffe_version()\nfunction(caffe_extract_caffe_version)\n  set(Caffe_GIT_VERSION \"unknown\")\n  find_package(Git)\n  if(GIT_FOUND)\n    execute_process(COMMAND ${GIT_EXECUTABLE} describe --tags --always --dirty\n                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE\n                    WORKING_DIRECTORY \"${PROJECT_SOURCE_DIR}\"\n                    OUTPUT_VARIABLE Caffe_GIT_VERSION\n                    RESULT_VARIABLE __git_result)\n    if(NOT ${__git_result} EQUAL 0)\n      set(Caffe_GIT_VERSION \"unknown\")\n    endif()\n  endif()\n\n  set(Caffe_GIT_VERSION ${Caffe_GIT_VERSION} PARENT_SCOPE)\n  set(Caffe_VERSION \"<TODO> (Caffe doesn't declare its version in headers)\" PARENT_SCOPE)\n\n  # caffe_parse_header(${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION_LINES CAFFE_MAJOR CAFFE_MINOR CAFFE_PATCH)\n  # set(Caffe_VERSION \"${CAFFE_MAJOR}.${CAFFE_MINOR}.${CAFFE_PATCH}\" PARENT_SCOPE)\n\n  # or for #define Caffe_VERSION \"x.x.x\"\n  # caffe_parse_header_single_define(Caffe ${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION)\n  # set(Caffe_VERSION ${Caffe_VERSION_STRING} PARENT_SCOPE)\n\nendfunction()\n\n\n################################################################################################\n# Prints accumulated caffe configuration summary\n# Usage:\n#   caffe_print_configuration_summary()\n\nfunction(caffe_print_configuration_summary)\n  caffe_extract_caffe_version()\n  set(Caffe_VERSION ${Caffe_VERSION} PARENT_SCOPE)\n\n  caffe_merge_flag_lists(__flags_rel CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS)\n  caffe_merge_flag_lists(__flags_deb CMAKE_CXX_FLAGS_DEBUG   
CMAKE_CXX_FLAGS)\n\n  caffe_status(\"\")\n  caffe_status(\"******************* Caffe Configuration Summary *******************\")\n  caffe_status(\"General:\")\n  caffe_status(\"  Version           :   ${CAFFE_TARGET_VERSION}\")\n  caffe_status(\"  Git               :   ${Caffe_GIT_VERSION}\")\n  caffe_status(\"  System            :   ${CMAKE_SYSTEM_NAME}\")\n  caffe_status(\"  C++ compiler      :   ${CMAKE_CXX_COMPILER}\")\n  caffe_status(\"  Release CXX flags :   ${__flags_rel}\")\n  caffe_status(\"  Debug CXX flags   :   ${__flags_deb}\")\n  caffe_status(\"  Build type        :   ${CMAKE_BUILD_TYPE}\")\n  caffe_status(\"\")\n  caffe_status(\"  BUILD_SHARED_LIBS :   ${BUILD_SHARED_LIBS}\")\n  caffe_status(\"  BUILD_python      :   ${BUILD_python}\")\n  caffe_status(\"  BUILD_matlab      :   ${BUILD_matlab}\")\n  caffe_status(\"  BUILD_docs        :   ${BUILD_docs}\")\n  caffe_status(\"  CPU_ONLY          :   ${CPU_ONLY}\")\n  caffe_status(\"  USE_OPENCV        :   ${USE_OPENCV}\")\n  caffe_status(\"  USE_LEVELDB       :   ${USE_LEVELDB}\")\n  caffe_status(\"  USE_LMDB          :   ${USE_LMDB}\")\n  caffe_status(\"  ALLOW_LMDB_NOLOCK :   ${ALLOW_LMDB_NOLOCK}\")\n  caffe_status(\"\")\n  caffe_status(\"Dependencies:\")\n  caffe_status(\"  BLAS              : \" APPLE THEN \"Yes (vecLib)\" ELSE \"Yes (${BLAS})\")\n  caffe_status(\"  Boost             :   Yes (ver. ${Boost_MAJOR_VERSION}.${Boost_MINOR_VERSION})\")\n  caffe_status(\"  glog              :   Yes\")\n  caffe_status(\"  gflags            :   Yes\")\n  caffe_status(\"  protobuf          : \" PROTOBUF_FOUND THEN \"Yes (ver. ${PROTOBUF_VERSION})\" ELSE \"No\" )\n  if(USE_LMDB)\n    caffe_status(\"  lmdb              : \" LMDB_FOUND THEN \"Yes (ver. ${LMDB_VERSION})\" ELSE \"No\")\n  endif()\n  if(USE_LEVELDB)\n    caffe_status(\"  LevelDB           : \" LEVELDB_FOUND THEN  \"Yes (ver. ${LEVELDB_VERSION})\" ELSE \"No\")\n    caffe_status(\"  Snappy            : \" SNAPPY_FOUND THEN \"Yes (ver. 
${Snappy_VERSION})\" ELSE \"No\" )\n  endif()\n  if(USE_OPENCV)\n    caffe_status(\"  OpenCV            :   Yes (ver. ${OpenCV_VERSION})\")\n  endif()\n  caffe_status(\"  CUDA              : \" HAVE_CUDA THEN \"Yes (ver. ${CUDA_VERSION})\" ELSE \"No\" )\n  caffe_status(\"\")\n  if(HAVE_CUDA)\n    caffe_status(\"NVIDIA CUDA:\")\n    caffe_status(\"  Target GPU(s)     :   ${CUDA_ARCH_NAME}\" )\n    caffe_status(\"  GPU arch(s)       :   ${NVCC_FLAGS_EXTRA_readable}\")\n    if(USE_CUDNN)\n      caffe_status(\"  cuDNN             : \" HAVE_CUDNN THEN \"Yes (ver. ${CUDNN_VERSION})\" ELSE \"Not found\")\n    else()\n      caffe_status(\"  cuDNN             :   Disabled\")\n    endif()\n    caffe_status(\"\")\n  endif()\n  if(HAVE_PYTHON)\n    caffe_status(\"Python:\")\n    caffe_status(\"  Interpreter       :\" PYTHON_EXECUTABLE THEN \"${PYTHON_EXECUTABLE} (ver. ${PYTHON_VERSION_STRING})\" ELSE \"No\")\n    caffe_status(\"  Libraries         :\" PYTHONLIBS_FOUND  THEN \"${PYTHON_LIBRARIES} (ver ${PYTHONLIBS_VERSION_STRING})\" ELSE \"No\")\n    caffe_status(\"  NumPy             :\" NUMPY_FOUND  THEN \"${NUMPY_INCLUDE_DIR} (ver ${NUMPY_VERSION})\" ELSE \"No\")\n    caffe_status(\"\")\n  endif()\n  if(BUILD_matlab)\n    caffe_status(\"Matlab:\")\n    caffe_status(\"  Matlab            :\" HAVE_MATLAB THEN \"Yes (${Matlab_mex}, ${Matlab_mexext}\" ELSE \"No\")\n    caffe_status(\"  Octave            :\" Octave_compiler THEN  \"Yes (${Octave_compiler})\" ELSE \"No\")\n    if(HAVE_MATLAB AND Octave_compiler)\n      caffe_status(\"  Build mex using   :   ${Matlab_build_mex_using}\")\n    endif()\n    caffe_status(\"\")\n  endif()\n  if(BUILD_docs)\n    caffe_status(\"Documentaion:\")\n    caffe_status(\"  Doxygen           :\" DOXYGEN_FOUND THEN \"${DOXYGEN_EXECUTABLE} (${DOXYGEN_VERSION})\" ELSE \"No\")\n    caffe_status(\"  config_file       :   ${DOXYGEN_config_file}\")\n\n    caffe_status(\"\")\n  endif()\n  caffe_status(\"Install:\")\n  caffe_status(\"  Install path      : 
  ${CMAKE_INSTALL_PREFIX}\")\n  caffe_status(\"\")\nendfunction()\n"
  },
  {
    "path": "caffe-fpn/cmake/Targets.cmake",
    "content": "################################################################################################\n# Defines global Caffe_LINK flag, This flag is required to prevent linker from excluding\n# some objects which are not addressed directly but are registered via static constructors\nif(BUILD_SHARED_LIBS)\n  set(Caffe_LINK caffe)\nelse()\n  if(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"Clang\")\n    set(Caffe_LINK -Wl,-force_load caffe)\n  elseif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)\n  endif()\nendif()\n\n################################################################################################\n# Convenient command to setup source group for IDEs that support this feature (VS, XCode)\n# Usage:\n#   caffe_source_group(<group> GLOB[_RECURSE] <globbing_expression>)\nfunction(caffe_source_group group)\n  cmake_parse_arguments(CAFFE_SOURCE_GROUP \"\" \"\" \"GLOB;GLOB_RECURSE\" ${ARGN})\n  if(CAFFE_SOURCE_GROUP_GLOB)\n    file(GLOB srcs1 ${CAFFE_SOURCE_GROUP_GLOB})\n    source_group(${group} FILES ${srcs1})\n  endif()\n\n  if(CAFFE_SOURCE_GROUP_GLOB_RECURSE)\n    file(GLOB_RECURSE srcs2 ${CAFFE_SOURCE_GROUP_GLOB_RECURSE})\n    source_group(${group} FILES ${srcs2})\n  endif()\nendfunction()\n\n################################################################################################\n# Collecting sources from globbing and appending to output list variable\n# Usage:\n#   caffe_collect_sources(<output_variable> GLOB[_RECURSE] <globbing_expression>)\nfunction(caffe_collect_sources variable)\n  cmake_parse_arguments(CAFFE_COLLECT_SOURCES \"\" \"\" \"GLOB;GLOB_RECURSE\" ${ARGN})\n  if(CAFFE_COLLECT_SOURCES_GLOB)\n    file(GLOB srcs1 ${CAFFE_COLLECT_SOURCES_GLOB})\n    set(${variable} ${variable} ${srcs1})\n  endif()\n\n  if(CAFFE_COLLECT_SOURCES_GLOB_RECURSE)\n    file(GLOB_RECURSE srcs2 ${CAFFE_COLLECT_SOURCES_GLOB_RECURSE})\n    set(${variable} ${variable} ${srcs2})\n  
endif()\nendfunction()\n\n################################################################################################\n# Short command getting caffe sources (assuming standard Caffe code tree)\n# Usage:\n#   caffe_pickup_caffe_sources(<root>)\nfunction(caffe_pickup_caffe_sources root)\n  # put all files in source groups (visible as subfolder in many IDEs)\n  caffe_source_group(\"Include\"        GLOB \"${root}/include/caffe/*.h*\")\n  caffe_source_group(\"Include\\\\Util\"  GLOB \"${root}/include/caffe/util/*.h*\")\n  caffe_source_group(\"Include\"        GLOB \"${PROJECT_BINARY_DIR}/caffe_config.h*\")\n  caffe_source_group(\"Source\"         GLOB \"${root}/src/caffe/*.cpp\")\n  caffe_source_group(\"Source\\\\Util\"   GLOB \"${root}/src/caffe/util/*.cpp\")\n  caffe_source_group(\"Source\\\\Layers\" GLOB \"${root}/src/caffe/layers/*.cpp\")\n  caffe_source_group(\"Source\\\\Cuda\"   GLOB \"${root}/src/caffe/layers/*.cu\")\n  caffe_source_group(\"Source\\\\Cuda\"   GLOB \"${root}/src/caffe/util/*.cu\")\n  caffe_source_group(\"Source\\\\Proto\"  GLOB \"${root}/src/caffe/proto/*.proto\")\n\n  # source groups for test target\n  caffe_source_group(\"Include\"      GLOB \"${root}/include/caffe/test/test_*.h*\")\n  caffe_source_group(\"Source\"       GLOB \"${root}/src/caffe/test/test_*.cpp\")\n  caffe_source_group(\"Source\\\\Cuda\" GLOB \"${root}/src/caffe/test/test_*.cu\")\n\n  # collect files\n  file(GLOB test_hdrs    ${root}/include/caffe/test/test_*.h*)\n  file(GLOB test_srcs    ${root}/src/caffe/test/test_*.cpp)\n  file(GLOB_RECURSE hdrs ${root}/include/caffe/*.h*)\n  file(GLOB_RECURSE srcs ${root}/src/caffe/*.cpp)\n  list(REMOVE_ITEM  hdrs ${test_hdrs})\n  list(REMOVE_ITEM  srcs ${test_srcs})\n\n  # add headers to make them visible in some IDEs (Qt, VS, Xcode)\n  list(APPEND srcs ${hdrs} ${PROJECT_BINARY_DIR}/caffe_config.h)\n  list(APPEND test_srcs ${test_hdrs})\n\n  # collect cuda files\n  file(GLOB    test_cuda ${root}/src/caffe/test/test_*.cu)\n  
file(GLOB_RECURSE cuda ${root}/src/caffe/*.cu)\n  list(REMOVE_ITEM  cuda ${test_cuda})\n\n  # add proto to make them editable in IDEs too\n  file(GLOB_RECURSE proto_files ${root}/src/caffe/*.proto)\n  list(APPEND srcs ${proto_files})\n\n  # convert to absolute paths\n  caffe_convert_absolute_paths(srcs)\n  caffe_convert_absolute_paths(cuda)\n  caffe_convert_absolute_paths(test_srcs)\n  caffe_convert_absolute_paths(test_cuda)\n\n  # propagate to parent scope\n  set(srcs ${srcs} PARENT_SCOPE)\n  set(cuda ${cuda} PARENT_SCOPE)\n  set(test_srcs ${test_srcs} PARENT_SCOPE)\n  set(test_cuda ${test_cuda} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Short command for setting default target properties\n# Usage:\n#   caffe_default_properties(<target>)\nfunction(caffe_default_properties target)\n  set_target_properties(${target} PROPERTIES\n    DEBUG_POSTFIX ${Caffe_DEBUG_POSTFIX}\n    ARCHIVE_OUTPUT_DIRECTORY \"${PROJECT_BINARY_DIR}/lib\"\n    LIBRARY_OUTPUT_DIRECTORY \"${PROJECT_BINARY_DIR}/lib\"\n    RUNTIME_OUTPUT_DIRECTORY \"${PROJECT_BINARY_DIR}/bin\")\n  # make sure we build all external dependencies first\n  if (DEFINED external_project_dependencies)\n    add_dependencies(${target} ${external_project_dependencies})\n  endif()\nendfunction()\n\n################################################################################################\n# Short command for setting runtime directory for build target\n# Usage:\n#   caffe_set_runtime_directory(<target> <dir>)\nfunction(caffe_set_runtime_directory target dir)\n  set_target_properties(${target} PROPERTIES\n    RUNTIME_OUTPUT_DIRECTORY \"${dir}\")\nendfunction()\n\n################################################################################################\n# Short command for setting solution folder property for target\n# Usage:\n#   caffe_set_solution_folder(<target> <folder>)\nfunction(caffe_set_solution_folder target folder)\n  
if(USE_PROJECT_FOLDERS)\n    set_target_properties(${target} PROPERTIES FOLDER \"${folder}\")\n  endif()\nendfunction()\n\n################################################################################################\n# Reads lines from input file, prepends source directory to each line and writes to output file\n# Usage:\n#   caffe_configure_testdatafile(<testdatafile>)\nfunction(caffe_configure_testdatafile file)\n  file(STRINGS ${file} __lines)\n  set(result \"\")\n  foreach(line ${__lines})\n    set(result \"${result}${PROJECT_SOURCE_DIR}/${line}\\n\")\n  endforeach()\n  file(WRITE ${file}.gen.cmake ${result})\nendfunction()\n\n################################################################################################\n# Filter out all files that are not included in selected list\n# Usage:\n#   caffe_leave_only_selected_tests(<filelist_variable> <selected_list>)\nfunction(caffe_leave_only_selected_tests file_list)\n  if(NOT ARGN)\n    return() # blank list means leave all\n  endif()\n  string(REPLACE \",\" \";\" __selected ${ARGN})\n  list(APPEND __selected caffe_main)\n\n  set(result \"\")\n  foreach(f ${${file_list}})\n    get_filename_component(name ${f} NAME_WE)\n    string(REGEX REPLACE \"^test_\" \"\" name ${name})\n    list(FIND __selected ${name} __index)\n    if(NOT __index EQUAL -1)\n      list(APPEND result ${f})\n    endif()\n  endforeach()\n  set(${file_list} ${result} PARENT_SCOPE)\nendfunction()\n\n"
  },
  {
    "path": "caffe-fpn/cmake/Templates/CaffeConfig.cmake.in",
    "content": "# Config file for the Caffe package.\n#\n# Note:\n#   Caffe and this config file depends on opencv,\n#   so put `find_package(OpenCV)` before searching Caffe\n#   via `find_package(Caffe)`. All other lib/includes\n#   dependencies are hard coded in the file\n#\n# After successful configuration the following variables\n# will be defined:\n#\n#   Caffe_INCLUDE_DIRS - Caffe include directories\n#   Caffe_LIBRARIES    - libraries to link against\n#   Caffe_DEFINITIONS  - a list of definitions to pass to compiler\n#\n#   Caffe_HAVE_CUDA    - signals about CUDA support\n#   Caffe_HAVE_CUDNN   - signals about cuDNN support\n\n\n# OpenCV dependency (optional)\n\nif(@USE_OPENCV@)\n  if(NOT OpenCV_FOUND)\n    set(Caffe_OpenCV_CONFIG_PATH \"@OpenCV_CONFIG_PATH@\")\n    if(Caffe_OpenCV_CONFIG_PATH)\n      get_filename_component(Caffe_OpenCV_CONFIG_PATH ${Caffe_OpenCV_CONFIG_PATH} ABSOLUTE)\n\n      if(EXISTS ${Caffe_OpenCV_CONFIG_PATH} AND NOT TARGET opencv_core)\n        message(STATUS \"Caffe: using OpenCV config from ${Caffe_OpenCV_CONFIG_PATH}\")\n        include(${Caffe_OpenCV_CONFIG_PATH}/OpenCVModules.cmake)\n      endif()\n\n    else()\n      find_package(OpenCV REQUIRED)\n    endif()\n    unset(Caffe_OpenCV_CONFIG_PATH)\n  endif()\nendif()\n\n# Compute paths\nget_filename_component(Caffe_CMAKE_DIR \"${CMAKE_CURRENT_LIST_FILE}\" PATH)\nset(Caffe_INCLUDE_DIRS \"@Caffe_INCLUDE_DIRS@\")\n\n@Caffe_INSTALL_INCLUDE_DIR_APPEND_COMMAND@\n\n# Our library dependencies\nif(NOT TARGET caffe AND NOT caffe_BINARY_DIR)\n  include(\"${Caffe_CMAKE_DIR}/CaffeTargets.cmake\")\nendif()\n\n# List of IMPORTED libs created by CaffeTargets.cmake\nset(Caffe_LIBRARIES caffe)\n\n# Definitions\nset(Caffe_DEFINITIONS \"@Caffe_DEFINITIONS@\")\n\n# Cuda support variables\nset(Caffe_CPU_ONLY @CPU_ONLY@)\nset(Caffe_HAVE_CUDA @HAVE_CUDA@)\nset(Caffe_HAVE_CUDNN @HAVE_CUDNN@)\n"
  },
  {
    "path": "caffe-fpn/cmake/Templates/CaffeConfigVersion.cmake.in",
    "content": "set(PACKAGE_VERSION \"@Caffe_VERSION@\")\n\n# Check whether the requested PACKAGE_FIND_VERSION is compatible\nif(\"${PACKAGE_VERSION}\" VERSION_LESS \"${PACKAGE_FIND_VERSION}\")\n  set(PACKAGE_VERSION_COMPATIBLE FALSE)\nelse()\n  set(PACKAGE_VERSION_COMPATIBLE TRUE)\n  if (\"${PACKAGE_VERSION}\" VERSION_EQUAL \"${PACKAGE_FIND_VERSION}\")\n    set(PACKAGE_VERSION_EXACT TRUE)\n  endif()\nendif()\n"
  },
  {
    "path": "caffe-fpn/cmake/Templates/caffe_config.h.in",
    "content": "/* Sources directory */\n#define SOURCE_FOLDER \"${PROJECT_SOURCE_DIR}\"\n\n/* Binaries directory */\n#define BINARY_FOLDER \"${PROJECT_BINARY_DIR}\"\n\n/* NVIDIA CUDA */\n#cmakedefine HAVE_CUDA\n\n/* NVIDIA cuDNN */\n#cmakedefine HAVE_CUDNN\n#cmakedefine USE_CUDNN\n\n/* CPU-only build */\n#cmakedefine CPU_ONLY\n\n/* Test device */\n#define CUDA_TEST_DEVICE ${CUDA_TEST_DEVICE}\n\n/* Temporary (TODO: remove) */\n#if 1\n  #define CMAKE_SOURCE_DIR SOURCE_FOLDER \"/src/\"\n  #define EXAMPLES_SOURCE_DIR BINARY_FOLDER \"/examples/\"\n  #define CMAKE_EXT \".gen.cmake\"\n#else\n  #define CMAKE_SOURCE_DIR \"src/\"\n  #define EXAMPLES_SOURCE_DIR \"examples/\"\n  #define CMAKE_EXT \"\"\n#endif\n\n/* Matlab */\n#cmakedefine HAVE_MATLAB\n\n/* IO libraries */\n#cmakedefine USE_OPENCV\n#cmakedefine USE_LEVELDB\n#cmakedefine USE_LMDB\n#cmakedefine ALLOW_LMDB_NOLOCK\n"
  },
  {
    "path": "caffe-fpn/cmake/Utils.cmake",
    "content": "################################################################################################\n# Command alias for debugging messages\n# Usage:\n#   dmsg(<message>)\nfunction(dmsg)\n  message(STATUS ${ARGN})\nendfunction()\n\n################################################################################################\n# Removes duplicates from list(s)\n# Usage:\n#   caffe_list_unique(<list_variable> [<list_variable>] [...])\nmacro(caffe_list_unique)\n  foreach(__lst ${ARGN})\n    if(${__lst})\n      list(REMOVE_DUPLICATES ${__lst})\n    endif()\n  endforeach()\nendmacro()\n\n################################################################################################\n# Clears variables from list\n# Usage:\n#   caffe_clear_vars(<variables_list>)\nmacro(caffe_clear_vars)\n  foreach(_var ${ARGN})\n    unset(${_var})\n  endforeach()\nendmacro()\n\n################################################################################################\n# Removes duplicates from string\n# Usage:\n#   caffe_string_unique(<string_variable>)\nfunction(caffe_string_unique __string)\n  if(${__string})\n    set(__list ${${__string}})\n    separate_arguments(__list)\n    list(REMOVE_DUPLICATES __list)\n    foreach(__e ${__list})\n      set(__str \"${__str} ${__e}\")\n    endforeach()\n    set(${__string} ${__str} PARENT_SCOPE)\n  endif()\nendfunction()\n\n################################################################################################\n# Prints list element per line\n# Usage:\n#   caffe_print_list(<list>)\nfunction(caffe_print_list)\n  foreach(e ${ARGN})\n    message(STATUS ${e})\n  endforeach()\nendfunction()\n\n################################################################################################\n# Function merging lists of compiler flags to single string.\n# Usage:\n#   caffe_merge_flag_lists(out_variable <list1> [<list2>] [<list3>] ...)\nfunction(caffe_merge_flag_lists out_var)\n  set(__result \"\")\n  foreach(__list ${ARGN})\n  
  foreach(__flag ${${__list}})\n      string(STRIP ${__flag} __flag)\n      set(__result \"${__result} ${__flag}\")\n    endforeach()\n  endforeach()\n  string(STRIP ${__result} __result)\n  set(${out_var} ${__result} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Converts all paths in list to absolute\n# Usage:\n#   caffe_convert_absolute_paths(<list_variable>)\nfunction(caffe_convert_absolute_paths variable)\n  set(__dlist \"\")\n  foreach(__s ${${variable}})\n    get_filename_component(__abspath ${__s} ABSOLUTE)\n    list(APPEND __list ${__abspath})\n  endforeach()\n  set(${variable} ${__list} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Reads set of version defines from the header file\n# Usage:\n#   caffe_parse_header(<file> <define1> <define2> <define3> ..)\nmacro(caffe_parse_header FILENAME FILE_VAR)\n  set(vars_regex \"\")\n  set(__parnet_scope OFF)\n  set(__add_cache OFF)\n  foreach(name ${ARGN})\n    if(\"${name}\" STREQUAL \"PARENT_SCOPE\")\n      set(__parnet_scope ON)\n    elseif(\"${name}\" STREQUAL \"CACHE\")\n      set(__add_cache ON)\n    elseif(vars_regex)\n      set(vars_regex \"${vars_regex}|${name}\")\n    else()\n      set(vars_regex \"${name}\")\n    endif()\n  endforeach()\n  if(EXISTS \"${FILENAME}\")\n    file(STRINGS \"${FILENAME}\" ${FILE_VAR} REGEX \"#define[ \\t]+(${vars_regex})[ \\t]+[0-9]+\" )\n  else()\n    unset(${FILE_VAR})\n  endif()\n  foreach(name ${ARGN})\n    if(NOT \"${name}\" STREQUAL \"PARENT_SCOPE\" AND NOT \"${name}\" STREQUAL \"CACHE\")\n      if(${FILE_VAR})\n        if(${FILE_VAR} MATCHES \".+[ \\t]${name}[ \\t]+([0-9]+).*\")\n          string(REGEX REPLACE \".+[ \\t]${name}[ \\t]+([0-9]+).*\" \"\\\\1\" ${name} \"${${FILE_VAR}}\")\n        else()\n          set(${name} \"\")\n        endif()\n        if(__add_cache)\n          set(${name} ${${name}} 
CACHE INTERNAL \"${name} parsed from ${FILENAME}\" FORCE)\n        elseif(__parnet_scope)\n          set(${name} \"${${name}}\" PARENT_SCOPE)\n        endif()\n      else()\n        unset(${name} CACHE)\n      endif()\n    endif()\n  endforeach()\nendmacro()\n\n################################################################################################\n# Reads single version define from the header file and parses it\n# Usage:\n#   caffe_parse_header_single_define(<library_name> <file> <define_name>)\nfunction(caffe_parse_header_single_define LIBNAME HDR_PATH VARNAME)\n  set(${LIBNAME}_H \"\")\n  if(EXISTS \"${HDR_PATH}\")\n    file(STRINGS \"${HDR_PATH}\" ${LIBNAME}_H REGEX \"^#define[ \\t]+${VARNAME}[ \\t]+\\\"[^\\\"]*\\\".*$\" LIMIT_COUNT 1)\n  endif()\n\n  if(${LIBNAME}_H)\n    string(REGEX REPLACE \"^.*[ \\t]${VARNAME}[ \\t]+\\\"([0-9]+).*$\" \"\\\\1\" ${LIBNAME}_VERSION_MAJOR \"${${LIBNAME}_H}\")\n    string(REGEX REPLACE \"^.*[ \\t]${VARNAME}[ \\t]+\\\"[0-9]+\\\\.([0-9]+).*$\" \"\\\\1\" ${LIBNAME}_VERSION_MINOR  \"${${LIBNAME}_H}\")\n    string(REGEX REPLACE \"^.*[ \\t]${VARNAME}[ \\t]+\\\"[0-9]+\\\\.[0-9]+\\\\.([0-9]+).*$\" \"\\\\1\" ${LIBNAME}_VERSION_PATCH \"${${LIBNAME}_H}\")\n    set(${LIBNAME}_VERSION_MAJOR ${${LIBNAME}_VERSION_MAJOR} ${ARGN} PARENT_SCOPE)\n    set(${LIBNAME}_VERSION_MINOR ${${LIBNAME}_VERSION_MINOR} ${ARGN} PARENT_SCOPE)\n    set(${LIBNAME}_VERSION_PATCH ${${LIBNAME}_VERSION_PATCH} ${ARGN} PARENT_SCOPE)\n    set(${LIBNAME}_VERSION_STRING \"${${LIBNAME}_VERSION_MAJOR}.${${LIBNAME}_VERSION_MINOR}.${${LIBNAME}_VERSION_PATCH}\" PARENT_SCOPE)\n\n    # append a TWEAK version if it exists:\n    set(${LIBNAME}_VERSION_TWEAK \"\")\n    if(\"${${LIBNAME}_H}\" MATCHES \"^.*[ \\t]${VARNAME}[ \\t]+\\\"[0-9]+\\\\.[0-9]+\\\\.[0-9]+\\\\.([0-9]+).*$\")\n      set(${LIBNAME}_VERSION_TWEAK \"${CMAKE_MATCH_1}\" ${ARGN} PARENT_SCOPE)\n    endif()\n    if(${LIBNAME}_VERSION_TWEAK)\n      set(${LIBNAME}_VERSION_STRING 
\"${${LIBNAME}_VERSION_STRING}.${${LIBNAME}_VERSION_TWEAK}\" ${ARGN} PARENT_SCOPE)\n    else()\n      set(${LIBNAME}_VERSION_STRING \"${${LIBNAME}_VERSION_STRING}\" ${ARGN} PARENT_SCOPE)\n    endif()\n  endif()\nendfunction()\n\n########################################################################################################\n# An option that the user can select. Can accept condition to control when option is available for user.\n# Usage:\n#   caffe_option(<option_variable> \"doc string\" <initial value or boolean expression> [IF <condition>])\nfunction(caffe_option variable description value)\n  set(__value ${value})\n  set(__condition \"\")\n  set(__varname \"__value\")\n  foreach(arg ${ARGN})\n    if(arg STREQUAL \"IF\" OR arg STREQUAL \"if\")\n      set(__varname \"__condition\")\n    else()\n      list(APPEND ${__varname} ${arg})\n    endif()\n  endforeach()\n  unset(__varname)\n  if(\"${__condition}\" STREQUAL \"\")\n    set(__condition 2 GREATER 1)\n  endif()\n\n  if(${__condition})\n    if(\"${__value}\" MATCHES \";\")\n      if(${__value})\n        option(${variable} \"${description}\" ON)\n      else()\n        option(${variable} \"${description}\" OFF)\n      endif()\n    elseif(DEFINED ${__value})\n      if(${__value})\n        option(${variable} \"${description}\" ON)\n      else()\n        option(${variable} \"${description}\" OFF)\n      endif()\n    else()\n      option(${variable} \"${description}\" ${__value})\n    endif()\n  else()\n    unset(${variable} CACHE)\n  endif()\nendfunction()\n\n################################################################################################\n# Utility macro for comparing two lists. 
Used for CMake debugging purposes\n# Usage:\n#   caffe_compare_lists(<list_variable> <list2_variable> [description])\nfunction(caffe_compare_lists list1 list2 desc)\n  set(__list1 ${${list1}})\n  set(__list2 ${${list2}})\n  list(SORT __list1)\n  list(SORT __list2)\n  list(LENGTH __list1 __len1)\n  list(LENGTH __list2 __len2)\n\n  if(NOT ${__len1} EQUAL ${__len2})\n    message(FATAL_ERROR \"Lists are not equal. ${__len1} != ${__len2}. ${desc}\")\n  endif()\n\n  foreach(__i RANGE 1 ${__len1})\n    math(EXPR __index \"${__i}- 1\")\n    list(GET __list1 ${__index} __item1)\n    list(GET __list2 ${__index} __item2)\n    if(NOT ${__item1} STREQUAL ${__item2})\n      message(FATAL_ERROR \"Lists are not equal. Differ at element ${__index}. ${desc}\")\n    endif()\n  endforeach()\nendfunction()\n\n################################################################################################\n# Command for disabling warnings for different platforms (see below for gcc and VisualStudio)\n# Usage:\n#   caffe_warnings_disable(<CMAKE_[C|CXX]_FLAGS[_CONFIGURATION]> -Wshadow /wd4996 ..,)\nmacro(caffe_warnings_disable)\n  set(_flag_vars \"\")\n  set(_msvc_warnings \"\")\n  set(_gxx_warnings \"\")\n\n  foreach(arg ${ARGN})\n    if(arg MATCHES \"^CMAKE_\")\n      list(APPEND _flag_vars ${arg})\n    elseif(arg MATCHES \"^/wd\")\n      list(APPEND _msvc_warnings ${arg})\n    elseif(arg MATCHES \"^-W\")\n      list(APPEND _gxx_warnings ${arg})\n    endif()\n  endforeach()\n\n  if(NOT _flag_vars)\n    set(_flag_vars CMAKE_C_FLAGS CMAKE_CXX_FLAGS)\n  endif()\n\n  if(MSVC AND _msvc_warnings)\n    foreach(var ${_flag_vars})\n      foreach(warning ${_msvc_warnings})\n        set(${var} \"${${var}} ${warning}\")\n      endforeach()\n    endforeach()\n  elseif((CMAKE_COMPILER_IS_GNUCXX OR CMAKE_COMPILER_IS_CLANGXX) AND _gxx_warnings)\n    foreach(var ${_flag_vars})\n      foreach(warning ${_gxx_warnings})\n        if(NOT warning MATCHES \"^-Wno-\")\n          string(REPLACE \"${warning}\" 
\"\" ${var} \"${${var}}\")\n          string(REPLACE \"-W\" \"-Wno-\" warning \"${warning}\")\n        endif()\n        set(${var} \"${${var}} ${warning}\")\n      endforeach()\n    endforeach()\n  endif()\n  caffe_clear_vars(_flag_vars _msvc_warnings _gxx_warnings)\nendmacro()\n\n################################################################################################\n# Helper function get current definitions\n# Usage:\n#   caffe_get_current_definitions(<definitions_variable>)\nfunction(caffe_get_current_definitions definitions_var)\n  get_property(current_definitions DIRECTORY PROPERTY COMPILE_DEFINITIONS)\n  set(result \"\")\n\n  foreach(d ${current_definitions})\n    list(APPEND result -D${d})\n  endforeach()\n\n  caffe_list_unique(result)\n  set(${definitions_var} ${result} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Helper function get current includes/definitions\n# Usage:\n#   caffe_get_current_cflags(<cflagslist_variable>)\nfunction(caffe_get_current_cflags cflags_var)\n  get_property(current_includes DIRECTORY PROPERTY INCLUDE_DIRECTORIES)\n  caffe_convert_absolute_paths(current_includes)\n  caffe_get_current_definitions(cflags)\n\n  foreach(i ${current_includes})\n    list(APPEND cflags \"-I${i}\")\n  endforeach()\n\n  caffe_list_unique(cflags)\n  set(${cflags_var} ${cflags} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Helper function to parse current linker libs into link directories, libflags and osx frameworks\n# Usage:\n#   caffe_parse_linker_libs(<Caffe_LINKER_LIBS_var> <directories_var> <libflags_var> <frameworks_var>)\nfunction(caffe_parse_linker_libs Caffe_LINKER_LIBS_variable folders_var flags_var frameworks_var)\n\n  set(__unspec \"\")\n  set(__debug \"\")\n  set(__optimized \"\")\n  set(__framework \"\")\n  set(__varname \"__unspec\")\n\n  # split libs into debug, 
optimized, unspecified and frameworks\n  foreach(list_elem ${${Caffe_LINKER_LIBS_variable}})\n    if(list_elem STREQUAL \"debug\")\n      set(__varname \"__debug\")\n    elseif(list_elem STREQUAL \"optimized\")\n      set(__varname \"__optimized\")\n    elseif(list_elem MATCHES \"^-framework[ \\t]+([^ \\t].*)\")\n      list(APPEND __framework -framework ${CMAKE_MATCH_1})\n    else()\n      list(APPEND ${__varname} ${list_elem})\n      set(__varname \"__unspec\")\n    endif()\n  endforeach()\n\n  # attach debug or optimized libs to unspecified according to current configuration\n  if(CMAKE_BUILD_TYPE MATCHES \"Debug\")\n    set(__libs ${__unspec} ${__debug})\n  else()\n    set(__libs ${__unspec} ${__optimized})\n  endif()\n\n  set(libflags \"\")\n  set(folders \"\")\n\n  # convert linker libraries list to link flags\n  foreach(lib ${__libs})\n    if(TARGET ${lib})\n      list(APPEND folders $<TARGET_LINKER_FILE_DIR:${lib}>)\n      list(APPEND libflags -l${lib})\n    elseif(lib MATCHES \"^-l.*\")\n      list(APPEND libflags ${lib})\n    elseif(IS_ABSOLUTE ${lib})\n      get_filename_component(folder  ${lib} PATH)\n      get_filename_component(filename ${lib} NAME)\n      string(REGEX REPLACE \"\\\\.[^.]*$\" \"\" filename_without_shortest_ext ${filename})\n\n      string(REGEX MATCH \"^lib(.*)\" __match ${filename_without_shortest_ext})\n      list(APPEND libflags -l${CMAKE_MATCH_1})\n      list(APPEND folders    ${folder})\n    else()\n      message(FATAL_ERROR \"Logic error. Need to update cmake script\")\n    endif()\n  endforeach()\n\n  caffe_list_unique(libflags folders)\n\n  set(${folders_var} ${folders} PARENT_SCOPE)\n  set(${flags_var} ${libflags} PARENT_SCOPE)\n  set(${frameworks_var} ${__framework} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n# Helper function to detect Darwin version, i.e. 
10.8, 10.9, 10.10, ....\n# Usage:\n#   caffe_detect_darwin_version(<version_variable>)\nfunction(caffe_detect_darwin_version output_var)\n  if(APPLE)\n    execute_process(COMMAND /usr/bin/sw_vers -productVersion\n                    RESULT_VARIABLE __sw_vers OUTPUT_VARIABLE __sw_vers_out\n                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)\n\n    set(${output_var} ${__sw_vers_out} PARENT_SCOPE)\n  else()\n    set(${output_var} \"\" PARENT_SCOPE)\n  endif()\nendfunction()\n"
  },
  {
    "path": "caffe-fpn/cmake/lint.cmake",
    "content": "\nset(CMAKE_SOURCE_DIR ..)\nset(LINT_COMMAND ${CMAKE_SOURCE_DIR}/scripts/cpp_lint.py)\nset(SRC_FILE_EXTENSIONS h hpp hu c cpp cu cc)\nset(EXCLUDE_FILE_EXTENSIONS pb.h pb.cc)\nset(LINT_DIRS include src/caffe examples tools python matlab)\n\ncmake_policy(SET CMP0009 NEW)  # suppress cmake warning\n\n# find all files of interest\nforeach(ext ${SRC_FILE_EXTENSIONS})\n    foreach(dir ${LINT_DIRS})\n        file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/${dir}/*.${ext})\n        set(LINT_SOURCES ${LINT_SOURCES} ${FOUND_FILES})\n    endforeach()\nendforeach()\n\n# find all files that should be excluded\nforeach(ext ${EXCLUDE_FILE_EXTENSIONS})\n    file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/*.${ext})\n    set(EXCLUDED_FILES ${EXCLUDED_FILES} ${FOUND_FILES})\nendforeach()\n\n# exclude generated pb files\nlist(REMOVE_ITEM LINT_SOURCES ${EXCLUDED_FILES})\n\nexecute_process(\n    COMMAND ${LINT_COMMAND} ${LINT_SOURCES}\n    ERROR_VARIABLE LINT_OUTPUT\n    ERROR_STRIP_TRAILING_WHITESPACE\n)\n\nstring(REPLACE \"\\n\" \";\" LINT_OUTPUT ${LINT_OUTPUT})\n\nlist(GET LINT_OUTPUT -1 LINT_RESULT)\nlist(REMOVE_AT LINT_OUTPUT -1)\nstring(REPLACE \" \" \";\" LINT_RESULT ${LINT_RESULT})\nlist(GET LINT_RESULT -1 NUM_ERRORS)\nif(NUM_ERRORS GREATER 0)\n    foreach(msg ${LINT_OUTPUT})\n        string(FIND ${msg} \"Done\" result)\n        if(result LESS 0)\n            message(STATUS ${msg})\n        endif()\n    endforeach()\n    message(FATAL_ERROR \"Lint found ${NUM_ERRORS} errors!\")\nelse()\n    message(STATUS \"Lint did not find any errors!\")\nendif()\n\n"
  },
  {
    "path": "caffe-fpn/data/cifar10/get_cifar10.sh",
    "content": "#!/usr/bin/env sh\n# This script downloads the CIFAR10 (binary version) data and unzips it.\n\nDIR=\"$( cd \"$(dirname \"$0\")\" ; pwd -P )\"\ncd $DIR\n\necho \"Downloading...\"\n\nwget --no-check-certificate http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz\n\necho \"Unzipping...\"\n\ntar -xf cifar-10-binary.tar.gz && rm -f cifar-10-binary.tar.gz\nmv cifar-10-batches-bin/* . && rm -rf cifar-10-batches-bin\n\n# Creation is split out because leveldb sometimes causes segfault\n# and needs to be re-created.\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/data/ilsvrc12/get_ilsvrc_aux.sh",
    "content": "#!/usr/bin/env sh\n#\n# N.B. This does not download the ILSVRC12 data set, as it is gargantuan.\n# This script downloads the ImageNet example auxiliary files including:\n# - the ilsvrc12 image mean, binaryproto\n# - synset ids and words\n# - Python pickle-format data of ImageNet graph structure and relative infogain\n# - the training splits with labels\n\nDIR=\"$( cd \"$(dirname \"$0\")\" ; pwd -P )\"\ncd $DIR\n\necho \"Downloading...\"\n\nwget -c http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz\n\necho \"Unzipping...\"\n\ntar -xf caffe_ilsvrc12.tar.gz && rm -f caffe_ilsvrc12.tar.gz\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/docs/CMakeLists.txt",
    "content": "# Building docs script\n# Requirements:\n#   sudo apt-get install doxygen texlive ruby-dev\n#   sudo gem install jekyll execjs therubyracer\n\nif(NOT BUILD_docs OR NOT DOXYGEN_FOUND)\n  return()\nendif()\n\n#################################################################################################\n# Gather docs from <root>/examples/**/readme.md\nfunction(gather_readmes_as_prebuild_cmd target gathered_dir root)\n  set(full_gathered_dir ${root}/${gathered_dir})\n\n  file(GLOB_RECURSE readmes ${root}/examples/readme.md ${root}/examples/README.md)\n  foreach(file ${readmes})\n    # Only use file if it is to be included in docs.\n    file(STRINGS ${file} file_lines REGEX \"include_in_docs: true\")\n\n    if(file_lines)\n      # Since everything is called readme.md, rename it by its dirname.\n      file(RELATIVE_PATH file ${root} ${file})\n      get_filename_component(folder ${file} PATH)\n      set(new_filename ${full_gathered_dir}/${folder}.md)\n\n      # folder value might be like <subfolder>/readme.md. That's why make directory.\n      get_filename_component(new_folder ${new_filename} PATH)\n      add_custom_command(TARGET ${target} PRE_BUILD\n        COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}\n        COMMAND ln -sf ${root}/${file} ${new_filename}\n        COMMENT \"Creating symlink ${new_filename} -> ${root}/${file}\"\n        WORKING_DIRECTORY ${root} VERBATIM)\n    endif()\n  endforeach()\nendfunction()\n\n################################################################################################\n# Gather docs from examples/*.ipynb and add YAML front-matter.\nfunction(gather_notebooks_as_prebuild_cmd target gathered_dir root)\n  set(full_gathered_dir ${root}/${gathered_dir})\n\n  if(NOT PYTHON_EXECUTABLE)\n    message(STATUS \"Python interpreter is not found. Can't include *.ipynb files in docs. 
Skipping...\")\n    return()\n  endif()\n\n  file(GLOB_RECURSE notebooks ${root}/examples/*.ipynb)\n  foreach(file ${notebooks})\n    file(RELATIVE_PATH file ${root} ${file})\n    set(new_filename ${full_gathered_dir}/${file})\n\n    get_filename_component(new_folder ${new_filename} PATH)\n    add_custom_command(TARGET ${target} PRE_BUILD\n      COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}\n      COMMAND ${PYTHON_EXECUTABLE} scripts/copy_notebook.py ${file} ${new_filename}\n      COMMENT \"Copying notebook ${file} to ${new_filename}\"\n      WORKING_DIRECTORY ${root} VERBATIM)\n  endforeach()\n\n  set(${outputs_var} ${outputs} PARENT_SCOPE)\nendfunction()\n\n################################################################################################\n########################## [ Non macro part ] ##################################################\n\n# Gathering is done at each 'make doc'\nfile(REMOVE_RECURSE ${PROJECT_SOURCE_DIR}/docs/gathered)\n\n# Doxygen config file path\nset(DOXYGEN_config_file ${PROJECT_SOURCE_DIR}/.Doxyfile CACHE FILEPATH \"Doxygen config file\")\n\n# Adding docs target\nadd_custom_target(docs COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYGEN_config_file}\n                       WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}\n                       COMMENT \"Launching doxygen...\" VERBATIM)\n\n# Gathering examples into docs subfolder\ngather_notebooks_as_prebuild_cmd(docs docs/gathered ${PROJECT_SOURCE_DIR})\ngather_readmes_as_prebuild_cmd(docs docs/gathered  ${PROJECT_SOURCE_DIR})\n\n# Auto detect output directory\nfile(STRINGS ${DOXYGEN_config_file} config_line REGEX \"OUTPUT_DIRECTORY[ \\t]+=[^=].*\")\nif(config_line)\n  string(REGEX MATCH \"OUTPUT_DIRECTORY[ \\t]+=([^=].*)\" __ver_check \"${config_line}\")\n  string(STRIP ${CMAKE_MATCH_1} output_dir)\n  message(STATUS \"Detected Doxygen OUTPUT_DIRECTORY: ${output_dir}\")\nelse()\n  set(output_dir ./doxygen/)\n  message(STATUS \"Can't find OUTPUT_DIRECTORY in doxygen config file. 
Try to use default: ${output_dir}\")\nendif()\n\nif(NOT IS_ABSOLUTE ${output_dir})\n  set(output_dir ${PROJECT_SOURCE_DIR}/${output_dir})\n  get_filename_component(output_dir ${output_dir} ABSOLUTE)\nendif()\n\n# creates symlink in docs subfolder to code documentation built by doxygen\nadd_custom_command(TARGET docs POST_BUILD VERBATIM\n                   COMMAND ln -sfn \"${output_dir}/html\" doxygen\n                   WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs\n                   COMMENT \"Creating symlink ${PROJECT_SOURCE_DIR}/docs/doxygen -> ${output_dir}/html\")\n\n# for quick launch of jekyll\nadd_custom_target(jekyll COMMAND jekyll serve -w -s . -d _site --port=4000\n                         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs\n                         COMMENT \"Launching jekyll...\" VERBATIM)\n"
  },
  {
    "path": "caffe-fpn/docs/CNAME",
    "content": "caffe.berkeleyvision.org\n"
  },
  {
    "path": "caffe-fpn/docs/README.md",
    "content": "# Caffe Documentation\n\nTo generate the documentation, run `$CAFFE_ROOT/scripts/build_docs.sh`.\n\nTo push your changes to the documentation to the gh-pages branch of your or the BVLC repo, run `$CAFFE_ROOT/scripts/deploy_docs.sh <repo_name>`.\n"
  },
  {
    "path": "caffe-fpn/docs/_config.yml",
    "content": "defaults:\n  -\n    scope:\n      path: \"\" # an empty string here means all files in the project\n    values:\n      layout: \"default\"\n\n"
  },
  {
    "path": "caffe-fpn/docs/_layouts/default.html",
    "content": "<!doctype html>\n<html>\n  <head>\n    <!-- MathJax -->\n    <script type=\"text/javascript\"\n      src=\"http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML\">\n    </script>\n    <meta charset=\"utf-8\">\n    <meta http-equiv=\"X-UA-Compatible\" content=\"chrome=1\">\n    <title>\n      Caffe {% if page contains 'title' %}| {{ page.title }}{% endif %}\n    </title>\n\n    <link rel=\"icon\" type=\"image/png\" href=\"/images/caffeine-icon.png\">\n\n    <link rel=\"stylesheet\" href=\"/stylesheets/reset.css\">\n    <link rel=\"stylesheet\" href=\"/stylesheets/styles.css\">\n    <link rel=\"stylesheet\" href=\"/stylesheets/pygment_trac.css\">\n\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1, user-scalable=no\">\n    <!--[if lt IE 9]>\n    <script src=\"//html5shiv.googlecode.com/svn/trunk/html5.js\"></script>\n    <![endif]-->\n  </head>\n  <body>\n  <script>\n    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){\n    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),\n    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)\n    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');\n\n    ga('create', 'UA-46255508-1', 'daggerfs.com');\n    ga('send', 'pageview');\n  </script>\n    <div class=\"wrapper\">\n      <header>\n        <h1 class=\"header\"><a href=\"/\">Caffe</a></h1>\n        <p class=\"header\">\n          Deep learning framework by the <a class=\"header name\" href=\"http://bvlc.eecs.berkeley.edu/\">BVLC</a>\n        </p>\n        <p class=\"header\">\n          Created by\n          <br>\n          <a class=\"header name\" href=\"http://daggerfs.com/\">Yangqing Jia</a>\n          <br>\n          Lead Developer\n          <br>\n          <a class=\"header name\" href=\"http://imaginarynumber.net/\">Evan Shelhamer</a>\n        <ul>\n          <li>\n            <a class=\"buttons 
github\" href=\"https://github.com/BVLC/caffe\">View On GitHub</a>\n          </li>\n        </ul>\n      </header>\n      <section>\n\n      {{ content }}\n\n      </section>\n  </div>\n  </body>\n</html>\n"
  },
  {
    "path": "caffe-fpn/docs/development.md",
    "content": "---\ntitle: Developing and Contributing\n---\n# Development and Contributing\n\nCaffe is developed with active participation of the community.<br>\nThe [BVLC](http://bvlc.eecs.berkeley.edu/) brewers welcome all contributions!\n\nThe exact details of contributions are recorded by versioning and cited in our [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements).\nThis method is impartial and always up-to-date.\n\n## License\n\nCaffe is licensed under the terms in [LICENSE](https://github.com/BVLC/caffe/blob/master/LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.\n\n## Copyright\n\nCaffe uses a shared copyright model: each contributor holds copyright over their contributions to Caffe. The project versioning records all such contribution and copyright details.\n\nIf a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. 
Do not include copyright notices in files for this purpose.\n\n### Documentation\n\nThis website, written with [Jekyll](http://jekyllrb.com/), acts as the official Caffe documentation -- simply run `scripts/build_docs.sh` and view the website at `http://0.0.0.0:4000`.\n\nWe prefer tutorials and examples to be documented close to where they live, in `readme.md` files.\nThe `build_docs.sh` script gathers all `examples/**/readme.md` and `examples/*.ipynb` files, and makes a table of contents.\nTo be included in the docs, the readme files must be annotated with [YAML front-matter](http://jekyllrb.com/docs/frontmatter/), including the flag `include_in_docs: true`.\nSimilarly for IPython notebooks: simply include `\"include_in_docs\": true` in the `\"metadata\"` JSON field.\n\nOther docs, such as installation guides, are written in the `docs` directory and manually linked to from the `index.md` page.\n\nWe strive to provide lots of usage examples, and to document all code in docstrings.\nWe absolutely appreciate any contribution to this effort!\n\n### Versioning\n\nThe `master` branch receives all new development including community contributions.\nWe try to keep it in a reliable state, but it is the bleeding edge, and things do get broken every now and then.\nBVLC maintainers will periodically make releases by marking stable checkpoints as tags and maintenance branches. 
[Past releases](https://github.com/BVLC/caffe/releases) are catalogued online.\n\n#### Issues & Pull Request Protocol\n\nPost [Issues](https://github.com/BVLC/caffe/issues) to propose features, report [bugs], and discuss framework code.\nLarge-scale development work is guided by [milestones], which are sets of Issues selected for bundling as releases.\n\nPlease note that since the core developers are largely researchers, we may work on a feature in isolation for some time before releasing it to the community, so as to claim honest academic contribution.\nWe do release things as soon as a reasonable technical report may be written, and we still aim to inform the community of ongoing development through GitHub Issues.\n\n**When you are ready to develop a feature or fix a bug, follow this protocol**:\n\n- Develop in [feature branches] with descriptive names. Branch off of the latest `master`.\n- Bring your work up-to-date by [rebasing] onto the latest `master` when done.\n(Groom your changes by [interactive rebase], if you'd like.)\n- [Pull request] your contribution to `BVLC/caffe`'s `master` branch for discussion and review.\n  - Make PRs *as soon as development begins*, to let discussion guide development.\n  - A PR is only ready for merge review when it is a fast-forward merge, and all code is documented, linted, and tested -- that means your PR must include tests!\n- When the PR satisfies the above properties, use comments to request maintainer review.\n\nThe following is a poetic presentation of the protocol in code form.\n\n#### [Shelhamer's](https://github.com/shelhamer) “life of a branch in four acts”\n\nMake the `feature` branch off of the latest `bvlc/master`\n\n    git checkout master\n    git pull upstream master\n    git checkout -b feature\n    # do your work, make commits\n\nPrepare to merge by rebasing your branch on the latest `bvlc/master`\n\n    # make sure master is fresh\n    git checkout master\n    git pull upstream master\n    # rebase your 
branch on the tip of master\n    git checkout feature\n    git rebase master\n\nPush your branch to pull request it into `BVLC/caffe:master`\n\n    git push origin feature\n    # ...make pull request to master...\n\nNow make a pull request! You can do this from the command line (`git pull-request -b master`) if you install [hub](https://github.com/github/hub). Hub has many other magical uses.\n\nThe pull request of `feature` into `master` will be a clean merge. Applause.\n\n[bugs]: https://github.com/BVLC/caffe/issues?labels=bug&page=1&state=open\n[milestones]: https://github.com/BVLC/caffe/issues?milestone=1\n[Pull request]: https://help.github.com/articles/using-pull-requests\n[interactive rebase]: https://help.github.com/articles/interactive-rebase\n[rebasing]: http://git-scm.com/book/en/Git-Branching-Rebasing\n[feature branches]: https://www.atlassian.com/git/workflows#!workflow-feature-branch\n\n**Historical note**: Caffe once relied on a two branch `master` and `dev` workflow.\nPRs from this time are still open but these will be merged into `master` or closed.\n\n### Testing\n\nRun `make runtest` to check the project tests. New code requires new tests. Pull requests that fail tests will not be accepted.\n\nThe `gtest` framework we use provides many additional options, which you can access by running the test binaries directly. 
One of the more useful options is `--gtest_filter`, which allows you to filter tests by name:\n\n    # run all tests with CPU in the name\n    build/test/test_all.testbin --gtest_filter='*CPU*'\n\n    # run all tests without GPU in the name (note the leading minus sign)\n    build/test/test_all.testbin --gtest_filter=-'*GPU*'\n\nTo get a list of all options `googletest` provides, simply pass the `--help` flag:\n\n    build/test/test_all.testbin --help\n\n### Style\n\n- **Run `make lint` to check C++ code.**\n- Wrap lines at 80 chars.\n- Follow [Google C++ style](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml) and [Google python style](http://google-styleguide.googlecode.com/svn/trunk/pyguide.html) + [PEP 8](http://legacy.python.org/dev/peps/pep-0008/).\n- Remember that “a foolish consistency is the hobgoblin of little minds,” so use your best judgement to write the clearest code for your particular case.\n"
  },
  {
    "path": "caffe-fpn/docs/index.md",
"content": "---\ntitle: Deep Learning Framework\n---\n\n# Caffe\n\nCaffe is a deep learning framework made with expression, speed, and modularity in mind.\nIt is developed by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and by community contributors.\n[Yangqing Jia](http://daggerfs.com) created the project during his PhD at UC Berkeley.\nCaffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).\n\nCheck out our web image classification [demo](http://demo.caffe.berkeleyvision.org)!\n\n## Why Caffe?\n\n**Expressive architecture** encourages application and innovation.\nModels and optimization are defined by configuration without hard-coding.\nSwitch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.\n\n**Extensible code** fosters active development.\nIn Caffe's first year, it was forked by over 1,000 developers and had many significant changes contributed back.\nThanks to these contributors the framework tracks the state-of-the-art in both code and models.\n\n**Speed** makes Caffe perfect for research experiments and industry deployment.\nCaffe can process **over 60M images per day** with a single NVIDIA K40 GPU\\*.\nThat's 1 ms/image for inference and 4 ms/image for learning.\nWe believe that Caffe is the fastest convnet implementation available.\n\n**Community**: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.\nJoin our community of brewers on the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) and [Github](https://github.com/BVLC/caffe/).\n\n<p class=\"footnote\" markdown=\"1\">\n\\* With the ILSVRC2012-winning [SuperVision](http://www.image-net.org/challenges/LSVRC/2012/supervision.pdf) model and caching IO.\nConsult performance [details](/performance_hardware.html).\n</p>\n\n## 
Documentation\n\n- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)<br>\nTutorial presentation.\n- [Tutorial Documentation](/tutorial)<br>\nPractical guide and framework reference.\n- [arXiv / ACM MM '14 paper](http://arxiv.org/abs/1408.5093)<br>\nA 4-page report for the ACM Multimedia Open Source competition (arXiv:1408.5093v1).\n- [Installation instructions](/installation.html)<br>\nTested on Ubuntu, Red Hat, OS X.\n* [Model Zoo](/model_zoo.html)<br>\nBVLC suggests a standard distribution format for Caffe models, and provides trained models.\n* [Developing & Contributing](/development.html)<br>\nGuidelines for development and contributing to Caffe.\n* [API Documentation](/doxygen/annotated.html)<br>\nDeveloper documentation automagically generated from code comments.\n\n### Examples\n\n{% assign examples = site.pages | where:'category','example' | sort: 'priority' %}\n{% for page in examples %}\n- <div><a href=\"{{page.url}}\">{{page.title}}</a><br>{{page.description}}</div>\n{% endfor %}\n\n### Notebook Examples\n\n{% assign notebooks = site.pages | where:'category','notebook' | sort: 'priority' %}\n{% for page in notebooks %}\n- <div><a href=\"http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/{{page.original_path}}\">{{page.title}}</a><br>{{page.description}}</div>\n{% endfor %}\n\n## Citing Caffe\n\nPlease cite Caffe in your publications if it helps your research:\n\n    @article{jia2014caffe,\n      Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},\n      Journal = {arXiv preprint arXiv:1408.5093},\n      Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},\n      Year = {2014}\n    }\n\nIf you do publish a paper where Caffe helped your research, we encourage you to update the [publications 
wiki](https://github.com/BVLC/caffe/wiki/Publications).\nCitations are also tracked automatically by [Google Scholar](http://scholar.google.com/scholar?oi=bibs&hl=en&cites=17333247995453974016).\n\n## Contacting Us\n\nJoin the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) to ask questions and discuss methods and models. This is where we talk about usage, installation, and applications.\n\nFramework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).\n\nContact [caffe-dev](mailto:caffe-dev@googlegroups.com) if you have a confidential proposal for the framework *and the ability to act on it*.\nRequests for features, explanations, or personal help will be ignored; post to [caffe-users](https://groups.google.com/forum/#!forum/caffe-users) instead.\n\nThe core Caffe developers offer [consulting services](mailto:caffe-coldpress@googlegroups.com) for appropriate projects.\n\n## Acknowledgements\n\nThe BVLC Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BVLC PI [Trevor Darrell](http://www.eecs.berkeley.edu/~trevor/) for guidance.\n\nThe BVLC members who have contributed to Caffe are (alphabetical by first name):\n[Eric Tzeng](https://github.com/erictzeng), [Evan Shelhamer](http://imaginarynumber.net/), [Jeff Donahue](http://jeffdonahue.com/), [Jon Long](https://github.com/longjon), [Ross Girshick](http://www.cs.berkeley.edu/~rbg/), [Sergey Karayev](http://sergeykarayev.com/), [Sergio Guadarrama](http://www.eecs.berkeley.edu/~sguada/), and [Yangqing Jia](http://daggerfs.com/).\n\nThe open-source community plays an important and growing role in Caffe's development.\nCheck out the Github [project pulse](https://github.com/BVLC/caffe/pulse) for recent activity and the [contributors](https://github.com/BVLC/caffe/graphs/contributors) for the full 
list.\n\nWe sincerely appreciate your interest and contributions!\nIf you'd like to contribute, please read the [developing & contributing](development.html) guide.\n\nYangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, [Oriol Vinyals](http://www1.icsi.berkeley.edu/~vinyals/) for discussions along the journey, and BVLC PI [Trevor Darrell](http://www.eecs.berkeley.edu/~trevor/) for advice.\n"
  },
  {
    "path": "caffe-fpn/docs/install_apt.md",
"content": "---\ntitle: Installation: Ubuntu\n---\n\n# Ubuntu Installation\n\n**General dependencies**\n\n    sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler\n    sudo apt-get install --no-install-recommends libboost-all-dev\n\n**CUDA**: Install via the NVIDIA package instead of `apt-get` to be certain of the library and driver versions.\nInstall the library and latest driver separately; the driver bundled with the library is usually out-of-date.\nThis can be skipped for CPU-only installation.\n\n**BLAS**: install ATLAS by `sudo apt-get install libatlas-base-dev` or install OpenBLAS or MKL for better CPU performance.\n\n**Python** (optional): if you use the default Python you will need to `sudo apt-get install` the `python-dev` package to have the Python headers for building the pycaffe interface.\n\n**Remaining dependencies, 14.04**\n\nEverything is packaged in 14.04.\n\n    sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev\n\n**Remaining dependencies, 12.04**\n\nThese dependencies need manual installation in 12.04.\n\n    # glog\n    wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz\n    tar zxvf glog-0.3.3.tar.gz\n    cd glog-0.3.3\n    ./configure\n    make && make install\n    # gflags\n    wget https://github.com/schuhschuh/gflags/archive/master.zip\n    unzip master.zip\n    cd gflags-master\n    mkdir build && cd build\n    export CXXFLAGS=\"-fPIC\" && cmake .. && make VERBOSE=1\n    make && make install\n    # lmdb\n    git clone https://github.com/LMDB/lmdb\n    cd lmdb/libraries/liblmdb\n    make && make install\n\nNote that glog does not compile with the most recent gflags version (2.1), so until that is resolved you will need to build glog before gflags.\n\nContinue with [compilation](installation.html#compilation).\n"
  },
  {
    "path": "caffe-fpn/docs/install_osx.md",
"content": "---\ntitle: Installation: OS X\n---\n\n# OS X Installation\n\nWe highly recommend using the [Homebrew](http://brew.sh/) package manager.\nIdeally you could start from a clean `/usr/local` to avoid conflicts.\nIn the following, we assume that you're using Anaconda Python and Homebrew.\n\n**CUDA**: Install via the NVIDIA package that includes both CUDA and the bundled driver. **CUDA 7 is strongly suggested.** Older CUDA versions require `libstdc++`, while clang++ is the default compiler and `libc++` the default standard library on OS X 10.9+. This disagreement makes it necessary to change the compilation settings for each of the dependencies. This is prone to error.\n\n**Library Path**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).\nIn other `ENV` settings, things may not work as expected.\n\n**General dependencies**\n\n    brew install -vd snappy leveldb gflags glog szip lmdb\n    # need the homebrew science source for OpenCV and hdf5\n    brew tap homebrew/science\n    brew install hdf5 opencv\n\nIf using Anaconda Python, a modification to the OpenCV formula might be needed.\nDo `brew edit opencv` and change the lines that look like the two lines below to exactly the two lines below.\n\n      -DPYTHON_LIBRARY=#{py_prefix}/lib/libpython2.7.dylib\n      -DPYTHON_INCLUDE_DIR=#{py_prefix}/include/python2.7\n\nIf using Anaconda Python, HDF5 is bundled and the `hdf5` formula can be skipped.\n\n**Remaining dependencies, with / without Python**\n\n    # with Python pycaffe needs dependencies built from source\n    brew install --build-from-source --with-python -vd protobuf\n    brew install --build-from-source -vd boost boost-python\n    # without Python the usual installation suffices\n    brew install protobuf boost\n\n**BLAS**: already installed as the 
[Accelerate / vecLib Framework](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man7/Accelerate.7.html). OpenBLAS and MKL are alternatives for faster CPU computation.\n\n**Python** (optional): Anaconda is the preferred Python.\nIf you decide against it, please use Homebrew.\nCheck that Caffe and dependencies are linking against the same, desired Python.\n\nContinue with [compilation](installation.html#compilation).\n\n## libstdc++ installation\n\nThis route is not for the faint of heart.\nFor OS X 10.10 and 10.9 you should install CUDA 7 and follow the instructions above.\nIf that is not an option, take a deep breath and carry on.\n\nIn OS X 10.9+, clang++ is the default C++ compiler and uses `libc++` as the standard library.\nHowever, NVIDIA CUDA (even version 6.0) currently links only with `libstdc++`.\nThis makes it necessary to change the compilation settings for each of the dependencies.\n\nWe do this by modifying the Homebrew formulae before installing any packages.\nMake sure that Homebrew doesn't install any software dependencies in the background; all packages must be linked to `libstdc++`.\n\nThe prerequisite Homebrew formulae are\n\n    boost snappy leveldb protobuf gflags glog szip lmdb homebrew/science/opencv\n\nFor each of these formulas, `brew edit FORMULA`, and add the ENV definitions as shown:\n\n      def install\n          # ADD THE FOLLOWING:\n          ENV.append \"CXXFLAGS\", \"-stdlib=libstdc++\"\n          ENV.append \"CFLAGS\", \"-stdlib=libstdc++\"\n          ENV.append \"LDFLAGS\", \"-stdlib=libstdc++ -lstdc++\"\n          # The following is necessary because libtool likes to strip LDFLAGS:\n          ENV[\"CXX\"] = \"/usr/bin/clang++ -stdlib=libstdc++\"\n          ...\n\nTo edit the formulae in turn, run\n\n    for x in snappy leveldb protobuf gflags glog szip boost boost-python lmdb homebrew/science/opencv; do brew edit $x; done\n\nAfter this, run\n\n    for x in snappy leveldb gflags glog szip lmdb 
homebrew/science/opencv; do brew uninstall $x; brew install --build-from-source -vd $x; done\n    brew uninstall protobuf; brew install --build-from-source --with-python -vd protobuf\n    brew install --build-from-source -vd boost boost-python\n\nIf this is not done exactly right then linking errors will trouble you.\n\n**Homebrew versioning**: note that Homebrew maintains itself as a separate git repository, and making the above `brew edit FORMULA` changes will change files in your local copy of Homebrew's master branch. By default, this will prevent you from updating Homebrew using `brew update`, as you will get an error message like the following:\n\n    $ brew update\n    error: Your local changes to the following files would be overwritten by merge:\n      Library/Formula/lmdb.rb\n    Please, commit your changes or stash them before you can merge.\n    Aborting\n    Error: Failure while executing: git pull -q origin refs/heads/master:refs/remotes/origin/master\n\nOne solution is to commit your changes to a separate Homebrew branch, run `brew update`, and rebase your changes onto the updated master. 
You'll have to do this both for the main Homebrew repository in `/usr/local/` and the Homebrew science repository that contains OpenCV in `/usr/local/Library/Taps/homebrew/homebrew-science`, as follows:\n\n    cd /usr/local\n    git checkout -b caffe\n    git add .\n    git commit -m \"Update Caffe dependencies to use libstdc++\"\n    cd /usr/local/Library/Taps/homebrew/homebrew-science\n    git checkout -b caffe\n    git add .\n    git commit -m \"Update Caffe dependencies\"\n\nThen, whenever you want to update Homebrew, switch back to the master branches, do the update, rebase the caffe branches onto master and fix any conflicts:\n\n    # Switch back to the homebrew master branches\n    cd /usr/local\n    git checkout master\n    cd /usr/local/Library/Taps/homebrew/homebrew-science\n    git checkout master\n\n    # Update homebrew; hopefully this works without errors!\n    brew update\n\n    # Switch back to the caffe branches with the formulae that you modified earlier\n    cd /usr/local\n    git rebase master caffe\n    # Fix any merge conflicts and commit to caffe branch\n    cd /usr/local/Library/Taps/homebrew/homebrew-science\n    git rebase master caffe\n    # Fix any merge conflicts and commit to caffe branch\n\n    # Done!\n\nAt this point, you should be running the latest Homebrew packages and your Caffe-related modifications will remain in place.\n"
  },
  {
    "path": "caffe-fpn/docs/install_yum.md",
"content": "---\ntitle: Installation: RHEL / Fedora / CentOS\n---\n\n# RHEL / Fedora / CentOS Installation\n\n**General dependencies**\n\n    sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel\n\n**Remaining dependencies, recent OS**\n\n    sudo yum install gflags-devel glog-devel lmdb-devel\n\n**Remaining dependencies, if not found**\n\n    # glog\n    wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz\n    tar zxvf glog-0.3.3.tar.gz\n    cd glog-0.3.3\n    ./configure\n    make && make install\n    # gflags\n    wget https://github.com/schuhschuh/gflags/archive/master.zip\n    unzip master.zip\n    cd gflags-master\n    mkdir build && cd build\n    export CXXFLAGS=\"-fPIC\" && cmake .. && make VERBOSE=1\n    make && make install\n    # lmdb\n    git clone https://github.com/LMDB/lmdb\n    cd lmdb/libraries/liblmdb\n    make && make install\n\nNote that glog does not compile with the most recent gflags version (2.1), so until that is resolved you will need to build glog before gflags.\n\n**CUDA**: Install via the NVIDIA package instead of `yum` to be certain of the library and driver versions.\nInstall the library and latest driver separately; the driver bundled with the library is usually out-of-date.\n\n**BLAS**: install ATLAS by `sudo yum install atlas-devel` or install OpenBLAS or MKL for better CPU performance. For the Makefile build, uncomment and set `BLAS_LIB` accordingly (ATLAS is usually installed under `/usr/lib[64]/atlas`).\n\n**Python** (optional): if you use the default Python you will need to `sudo yum install` the `python-devel` package to have the Python headers for building the pycaffe wrapper.\n\nContinue with [compilation](installation.html#compilation).\n"
  },
  {
    "path": "caffe-fpn/docs/installation.md",
"content": "---\ntitle: Installation\n---\n\n# Installation\n\nPrior to installing, have a glance through this guide and take note of the details for your platform.\nWe install and run Caffe on Ubuntu 14.04 and 12.04, OS X 10.10 / 10.9 / 10.8, and AWS.\nThe official Makefile and `Makefile.config` build are complemented by an automatic CMake build from the community.\n\n- [Prerequisites](#prerequisites)\n- [Compilation](#compilation)\n- [Hardware](#hardware)\n- Platforms: [Ubuntu guide](install_apt.html), [OS X guide](install_osx.html), and [RHEL / CentOS / Fedora guide](install_yum.html)\n\nWhen updating Caffe, it's best to `make clean` before re-compiling.\n\n## Prerequisites\n\nCaffe has several dependencies:\n\n* [CUDA](https://developer.nvidia.com/cuda-zone) is required for GPU mode.\n    * library version 7.0 and the latest driver version are recommended, but 6.* is fine too\n    * 5.5 and 5.0 are compatible but considered legacy\n* [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) via ATLAS, MKL, or OpenBLAS.\n* [Boost](http://www.boost.org/) >= 1.55\n* `protobuf`, `glog`, `gflags`, `hdf5`\n\nOptional dependencies:\n\n* [OpenCV](http://opencv.org/) >= 2.4 including 3.0\n* IO libraries: `lmdb`, `leveldb` (note: leveldb requires `snappy`)\n* cuDNN for GPU acceleration (v3)\n\nPycaffe and Matcaffe interfaces have their own natural needs.\n\n* For Python Caffe: `Python 2.7` or `Python 3.3+`, `numpy (>= 1.7)`, boost-provided `boost.python`\n* For MATLAB Caffe: MATLAB with the `mex` compiler.\n\n**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. 
The current version is cuDNN v3; older versions are supported in older Caffe.\n\n**CPU-only Caffe**: for cold-brewed CPU-only Caffe uncomment the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA. This is helpful for cloud or cluster deployment.\n\n### CUDA and BLAS\n\nCaffe requires the CUDA `nvcc` compiler to compile its GPU code and the CUDA driver for GPU operation.\nTo install CUDA, go to the [NVIDIA CUDA website](https://developer.nvidia.com/cuda-downloads) and follow installation instructions there. Install the library and the latest standalone driver separately; the driver bundled with the library is usually out-of-date. **Warning!** The 331.* CUDA driver series has a critical performance issue: do not use it.\n\nFor best performance, Caffe can be accelerated by [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). Register for free at the cuDNN site, install it, then continue with these installation instructions. To compile with cuDNN, set the `USE_CUDNN := 1` flag in your `Makefile.config`.\n\nCaffe requires BLAS as the backend of its matrix and vector computations.\nThere are several implementations of this library. The choice is yours:\n\n* [ATLAS](http://math-atlas.sourceforge.net/): free, open source, and so the default for Caffe.\n* [Intel MKL](http://software.intel.com/en-us/intel-mkl): commercial and optimized for Intel CPUs, with a free trial and [student](http://software.intel.com/en-us/intel-education-offerings) licenses.\n    1. Install MKL.\n    2. Set `BLAS := mkl` in `Makefile.config`\n* [OpenBLAS](http://www.openblas.net/): free and open source; this optimized and parallel BLAS could require more effort to install, although it might offer a speedup.\n    1. Install OpenBLAS\n    2. Set `BLAS := open` in `Makefile.config`\n\n### Python and/or MATLAB Caffe (optional)\n\n#### Python\n\nThe main requirements are `numpy` and `boost.python` (provided by boost). 
`pandas` is useful too and needed for some examples.\n\nYou can install the dependencies with\n\n    for req in $(cat requirements.txt); do pip install $req; done\n\nbut we suggest first installing the [Anaconda](https://store.continuum.io/cshop/anaconda/) Python distribution, which provides most of the necessary packages, as well as the `hdf5` library dependency.\n\nTo import the `caffe` Python module after completing the installation, add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like. You should not import the module in the `caffe/python/caffe` directory!\n\n*Caffe's Python interface works with Python 2.7. Python 3.3+ should work out of the box without protobuf support. For protobuf support please install protobuf 3.0 alpha (https://developers.google.com/protocol-buffers/). Earlier Pythons are your own adventure.*\n\n#### MATLAB\n\nInstall MATLAB, and make sure that its `mex` is in your `$PATH`.\n\n*Caffe's MATLAB interface works with versions 2015a, 2014a/b, 2013a/b, and 2012b.*\n\n#### Windows\n\nThere is an unofficial Windows port of Caffe at [niuzhiheng/caffe:windows](https://github.com/niuzhiheng/caffe). Thanks [@niuzhiheng](https://github.com/niuzhiheng)!\n\n## Compilation\n\nCaffe can be compiled with either Make or CMake. Make is officially supported while CMake is supported by the community.\n\n### Compilation with Make\n\nConfigure the build by copying and modifying the example `Makefile.config` for your setup. The defaults should work, but uncomment the relevant lines if using Anaconda Python.\n\n    cp Makefile.config.example Makefile.config\n    # Adjust Makefile.config (for example, if using Anaconda Python, or if cuDNN is desired)\n    make all\n    make test\n    make runtest\n\n- For CPU & GPU accelerated Caffe, no changes are needed.\n- For cuDNN acceleration using NVIDIA's proprietary cuDNN software, uncomment the `USE_CUDNN := 1` switch in `Makefile.config`. 
cuDNN is sometimes but not always faster than Caffe's GPU acceleration.\n- For CPU-only Caffe, uncomment `CPU_ONLY := 1` in `Makefile.config`.\n\nTo compile the Python and MATLAB wrappers do `make pycaffe` and `make matcaffe` respectively.\nBe sure to set your MATLAB and Python paths in `Makefile.config` first!\n\n**Distribution**: run `make distribute` to create a `distribute` directory with all the Caffe headers, compiled libraries, binaries, etc. needed for distribution to other machines.\n\n**Speed**: for a faster build, compile in parallel by doing `make all -j8` where 8 is the number of parallel threads for compilation (a good choice for the number of threads is the number of cores in your machine).\n\nNow that you have installed Caffe, check out the [MNIST tutorial](gathered/examples/mnist.html) and the [reference ImageNet model tutorial](gathered/examples/imagenet.html).\n\n### Compilation with CMake\n\nIn lieu of manually editing `Makefile.config` to configure the build, Caffe offers an unofficial CMake build thanks to @Nerei, @akosiorek, and other members of the community. It requires CMake version >= 2.8.7.\nThe basic steps are as follows:\n\n    mkdir build\n    cd build\n    cmake ..\n    make all\n    make install\n    make runtest\n\nSee [PR #1667](https://github.com/BVLC/caffe/pull/1667) for options and details.\n\n## Hardware\n\n**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with K40s, K20s, and Titans including models at ImageNet/ILSVRC scale. We also run on GTX series cards (980s and 770s) and GPU-equipped MacBook Pros. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus-far have been due to GPU configuration, overheating, and the like.\n\n**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. 
Your mileage may vary.\n\nOnce installed, check your times against our [reference performance numbers](performance_hardware.html) to make sure everything is configured properly.\n\nAsk hardware questions on the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users).\n"
  },
  {
    "path": "caffe-fpn/docs/model_zoo.md",
    "content": "---\ntitle: Model Zoo\n---\n# Caffe Model Zoo\n\nLots of researchers and engineers have made Caffe models for different tasks with all kinds of architectures and data.\nThese models are learned and applied for problems ranging from simple regression, to large-scale visual classification, to Siamese networks for image similarity, to speech and robotics applications.\n\nTo help share these models, we introduce the model zoo framework:\n\n- A standard format for packaging Caffe model info.\n- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.\n- A central wiki page for sharing model info Gists.\n\n## Where to get trained models\n\nFirst of all, we bundle BVLC-trained models for unrestricted, out of the box use.\n<br>\nSee the [BVLC model license](#bvlc-model-license) for details.\nEach one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:\n\n- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Jeff Donahue @jeffdonahue)\n- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)\n- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn) as described by Girshick et al. in CVPR 2014. 
(Trained by Ross Girshick @rbgirshick)\n- **BVLC GoogLeNet** in `models/bvlc_googlenet`: GoogLeNet trained on ILSVRC 2012, almost exactly as described in [Going Deeper with Convolutions](http://arxiv.org/abs/1409.4842) by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)\n\n**Community models** made by Caffe users are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).\nThese models are subject to conditions of their respective authors such as citation and license.\nThank you for sharing your models!\n\n## Model info format\n\nA caffe model is distributed as a directory containing:\n\n- Solver/model prototxt(s)\n- `readme.md` containing\n    - YAML frontmatter\n        - Caffe version used to train this model (tagged release or commit hash).\n        - [optional] file URL and SHA1 of the trained `.caffemodel`.\n        - [optional] github gist id.\n    - Information about what data the model was trained on, modeling choices, etc.\n    - License information.\n- [optional] Other helpful scripts.\n\n### Hosting model info\n\nGithub Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.\n\n`scripts/upload_model_to_gist.sh <dirname>` uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. 
If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then the existing Gist is updated.\n\nTry doing `scripts/upload_model_to_gist.sh models/bvlc_alexnet` to test the uploading (don't forget to delete the uploaded gist afterward).\n\nDownloading model info is done just as easily with `scripts/download_model_from_gist.sh <gist_id> <dirname>`.\n\n### Hosting trained models\n\nIt is up to the user where to host the `.caffemodel` file.\nWe host our BVLC-provided models on our own server.\nDropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).\n\n`scripts/download_model_binary.py <dirname>` downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.\n\n## BVLC model license\n\nThe Caffe models bundled by the BVLC are released for unrestricted use.\n\nThese models are trained on data from the [ImageNet project](http://www.image-net.org/) and training data includes internet photos that may be subject to copyright.\n\nOur present understanding as researchers is that there is no restriction placed on the open release of these learned model weights, since none of the original images are distributed in whole or in part.\nTo the extent that the interpretation arises that weights are derivative works of the original copyright holder and they assert such a copyright, UC Berkeley makes no representations as to what use is allowed other than to consider our present release in the spirit of fair use in the academic mission of the university to disseminate knowledge and tools as broadly as possible without restriction.\n"
  },
  {
    "path": "caffe-fpn/docs/multigpu.md",
"content": "---\ntitle: Multi-GPU Usage, Hardware Configuration Assumptions, and Performance\n---\n\n# Multi-GPU Usage\n\nCurrently Multi-GPU is only supported via the C/C++ paths and only for training.\n\nThe GPUs to be used for training can be set with the \"-gpu\" flag on the command line to the 'caffe' tool.  e.g. \"build/tools/caffe train --solver=models/bvlc_alexnet/solver.prototxt --gpu=0,1\" will train on GPUs 0 and 1.\n\n**NOTE**: each GPU runs the batchsize specified in your train_val.prototxt.  So if you go from 1 GPU to 2 GPUs, your effective batchsize will double.  e.g. if your train_val.prototxt specifies a batchsize of 256 and you run 2 GPUs, your effective batch size is now 512.  So you need to adjust the batchsize when running multiple GPUs and/or adjust your solver params, specifically learning rate.\n\n# Hardware Configuration Assumptions\n\nThe current implementation uses a tree reduction strategy.  e.g. if there are 4 GPUs in the system, 0:1, 2:3 will exchange gradients, then 0:2 (top of the tree) will exchange gradients, 0 will calculate\nthe updated model, 0\\-\\>2, and then 0\\-\\>1, 2\\-\\>3. \n\nFor best performance, P2P DMA access between devices is needed. Without P2P access, for example crossing the PCIe root complex, data is copied through host and effective exchange bandwidth is greatly reduced.\n\nThe current implementation has a \"soft\" assumption that the devices being used are homogeneous.  In practice, any devices of the same general class should work together, but performance and total size are limited by the smallest device being used.  e.g. if you combine a TitanX and a GTX980, performance will be limited by the 980.  Mixing vastly different levels of boards, e.g. Kepler and Fermi, is not supported.\n\n\"nvidia-smi topo -m\" will show you the connectivity matrix.  You can do P2P through PCIe bridges, but not across socket level links at this time, e.g. 
across CPU sockets on a multi-socket motherboard.\n\n# Scaling Performance\n\nPerformance is **heavily** dependent on the PCIe topology of the system, the configuration of the neural network you are training, and the speed of each of the layers.  Systems like the DIGITS DevBox have an optimized PCIe topology (X99-E WS chipset).  In general, scaling on 2 GPUs tends to be ~1.8X on average for networks like AlexNet, CaffeNet, VGG, GoogleNet.  4 GPUs begins to have falloff in scaling.  Generally with \"weak scaling\" where the batchsize increases with the number of GPUs you will see 3.5x scaling or so.  With \"strong scaling\", the system can become communication bound, especially with layer performance optimizations like those in [cuDNNv3](http://nvidia.com/cudnn), and you will likely see closer to mid 2.x scaling in performance.  Networks that have heavy computation compared to the number of parameters tend to have the best scaling performance."
  },
  {
    "path": "caffe-fpn/docs/performance_hardware.md",
    "content": "---\ntitle: Performance and Hardware Configuration\n---\n\n# Performance and Hardware Configuration\n\nTo measure performance on different NVIDIA GPUs we use CaffeNet, the Caffe reference ImageNet model.\n\nFor training, each time point is 20 iterations/minibatches of 256 images for 5,120 images total. For testing, a 50,000 image validation set is classified.\n\n**Acknowledgements**: BVLC members are very grateful to NVIDIA for providing several GPUs to conduct this research.\n\n## NVIDIA K40\n\nPerformance is best with ECC off and boost clock enabled. While ECC makes a negligible difference in speed, disabling it frees ~1 GB of GPU memory.\n\nBest settings with ECC off and maximum clock speed in standard Caffe:\n\n* Training is 26.5 secs / 20 iterations (5,120 images)\n* Testing is 100 secs / validation set (50,000 images)\n\nBest settings with Caffe + [cuDNN acceleration](http://nvidia.com/cudnn):\n\n* Training is 19.2 secs / 20 iterations (5,120 images)\n* Testing is 60.7 secs / validation set (50,000 images)\n\nOther settings:\n\n* ECC on, max speed: training 26.7 secs / 20 iterations, test 101 secs / validation set\n* ECC on, default speed: training 31 secs / 20 iterations, test 117 secs / validation set\n* ECC off, default speed: training 31 secs / 20 iterations, test 118 secs / validation set\n\n### K40 configuration tips\n\nFor maximum K40 performance, turn off ECC and boost the clock speed (at your own risk).\n\nTo turn off ECC, do\n\n    sudo nvidia-smi -i 0 --ecc-config=0    # repeat with -i x for each GPU ID\n\nthen reboot.\n\nSet the \"persistence\" mode of the GPU settings by\n\n    sudo nvidia-smi -pm 1\n\nand then set the clock speed with\n\n    sudo nvidia-smi -i 0 -ac 3004,875    # repeat with -i x for each GPU ID\n\nbut note that this configuration resets across driver reloading / rebooting. Include these commands in a boot script to initialize these settings. 
For a simple fix, add these commands to `/etc/rc.local` (on Ubuntu).\n\n## NVIDIA Titan\n\nTraining: 26.26 secs / 20 iterations (5,120 images).\nTesting: 100 secs / validation set (50,000 images).\n\ncuDNN Training: 20.25 secs / 20 iterations (5,120 images).\ncuDNN Testing: 66.3 secs / validation set (50,000 images).\n\n\n## NVIDIA K20\n\nTraining: 36.0 secs / 20 iterations (5,120 images).\nTesting: 133 secs / validation set (50,000 images).\n\n## NVIDIA GTX 770\n\nTraining: 33.0 secs / 20 iterations (5,120 images).\nTesting: 129 secs / validation set (50,000 images).\n\ncuDNN Training: 24.3 secs / 20 iterations (5,120 images).\ncuDNN Testing: 104 secs / validation set (50,000 images).\n"
  },
  {
    "path": "caffe-fpn/docs/stylesheets/pygment_trac.css",
    "content": ".highlight  { background: #ffffff; }\n.highlight .c { color: #999988; font-style: italic } /* Comment */\n.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */\n.highlight .k { font-weight: bold } /* Keyword */\n.highlight .o { font-weight: bold } /* Operator */\n.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */\n.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */\n.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */\n.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */\n.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */\n.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */\n.highlight .ge { font-style: italic } /* Generic.Emph */\n.highlight .gr { color: #aa0000 } /* Generic.Error */\n.highlight .gh { color: #999999 } /* Generic.Heading */\n.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */\n.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */\n.highlight .go { color: #888888 } /* Generic.Output */\n.highlight .gp { color: #555555 } /* Generic.Prompt */\n.highlight .gs { font-weight: bold } /* Generic.Strong */\n.highlight .gu { color: #800080; font-weight: bold; } /* Generic.Subheading */\n.highlight .gt { color: #aa0000 } /* Generic.Traceback */\n.highlight .kc { font-weight: bold } /* Keyword.Constant */\n.highlight .kd { font-weight: bold } /* Keyword.Declaration */\n.highlight .kn { font-weight: bold } /* Keyword.Namespace */\n.highlight .kp { font-weight: bold } /* Keyword.Pseudo */\n.highlight .kr { font-weight: bold } /* Keyword.Reserved */\n.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */\n.highlight .m { color: #009999 } /* Literal.Number */\n.highlight .s { color: #d14 } /* Literal.String */\n.highlight .na { color: #008080 } /* 
Name.Attribute */\n.highlight .nb { color: #0086B3 } /* Name.Builtin */\n.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */\n.highlight .no { color: #008080 } /* Name.Constant */\n.highlight .ni { color: #800080 } /* Name.Entity */\n.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */\n.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */\n.highlight .nn { color: #555555 } /* Name.Namespace */\n.highlight .nt { color: #000080 } /* Name.Tag */\n.highlight .nv { color: #008080 } /* Name.Variable */\n.highlight .ow { font-weight: bold } /* Operator.Word */\n.highlight .w { color: #bbbbbb } /* Text.Whitespace */\n.highlight .mf { color: #009999 } /* Literal.Number.Float */\n.highlight .mh { color: #009999 } /* Literal.Number.Hex */\n.highlight .mi { color: #009999 } /* Literal.Number.Integer */\n.highlight .mo { color: #009999 } /* Literal.Number.Oct */\n.highlight .sb { color: #d14 } /* Literal.String.Backtick */\n.highlight .sc { color: #d14 } /* Literal.String.Char */\n.highlight .sd { color: #d14 } /* Literal.String.Doc */\n.highlight .s2 { color: #d14 } /* Literal.String.Double */\n.highlight .se { color: #d14 } /* Literal.String.Escape */\n.highlight .sh { color: #d14 } /* Literal.String.Heredoc */\n.highlight .si { color: #d14 } /* Literal.String.Interpol */\n.highlight .sx { color: #d14 } /* Literal.String.Other */\n.highlight .sr { color: #009926 } /* Literal.String.Regex */\n.highlight .s1 { color: #d14 } /* Literal.String.Single */\n.highlight .ss { color: #990073 } /* Literal.String.Symbol */\n.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */\n.highlight .vc { color: #008080 } /* Name.Variable.Class */\n.highlight .vg { color: #008080 } /* Name.Variable.Global */\n.highlight .vi { color: #008080 } /* Name.Variable.Instance */\n.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */\n\n.type-csharp .highlight .k { color: #0000FF }\n.type-csharp .highlight .kt { color: #0000FF 
}\n.type-csharp .highlight .nf { color: #000000; font-weight: normal }\n.type-csharp .highlight .nc { color: #2B91AF }\n.type-csharp .highlight .nn { color: #000000 }\n.type-csharp .highlight .s { color: #A31515 }\n.type-csharp .highlight .sc { color: #A31515 }\n"
  },
  {
    "path": "caffe-fpn/docs/stylesheets/reset.css",
    "content": "/* MeyerWeb Reset */\n\nhtml, body, div, span, applet, object, iframe,\nh1, h2, h3, h4, h5, h6, p, blockquote, pre,\na, abbr, acronym, address, big, cite, code,\ndel, dfn, em, img, ins, kbd, q, s, samp,\nsmall, strike, strong, sub, sup, tt, var,\nb, u, i, center,\ndl, dt, dd, ol, ul, li,\nfieldset, form, label, legend,\ntable, caption, tbody, tfoot, thead, tr, th, td,\narticle, aside, canvas, details, embed,\nfigure, figcaption, footer, header, hgroup,\nmenu, nav, output, ruby, section, summary,\ntime, mark, audio, video {\n  margin: 0;\n  padding: 0;\n  border: 0;\n  font: inherit;\n  vertical-align: baseline;\n}\n"
  },
  {
    "path": "caffe-fpn/docs/stylesheets/styles.css",
    "content": "@import url(http://fonts.googleapis.com/css?family=PT+Serif|Open+Sans:600,400);\n\nbody {\n  padding:10px 50px 0 0;\n  font-family: 'Open Sans', sans-serif;\n  font-size: 14px;\n  color: #232323;\n  background-color: #FBFAF7;\n  margin: 0;\n  line-height: 1.5rem;\n  -webkit-font-smoothing: antialiased;\n}\n\nh1, h2, h3, h4, h5, h6 {\n  color:#232323;\n  margin:36px 0 10px;\n}\n\np, ul, ol, table, dl {\n  margin:0 0 22px;\n}\n\nh1, h2, h3 {\n  font-family: 'PT Serif', serif;\n  line-height:1.3;\n  font-weight: normal;\n  display: block;\n  border-bottom: 1px solid #ccc;\n  padding-bottom: 5px;\n}\n\nh1 {\n  font-size: 30px;\n}\n\nh2 {\n  font-size: 24px;\n}\n\nh3 {\n  font-size: 18px;\n}\n\nh4, h5, h6 {\n  font-family: 'PT Serif', serif;\n  font-weight: 700;\n}\n\na {\n  color:#C30000;\n  text-decoration:none;\n}\n\na:hover {\n  text-decoration: underline;\n}\n\na small {\n  font-size: 12px;\n}\n\nem {\n  font-style: italic;\n}\n\nstrong {\n  font-weight:700;\n}\n\nul {\n  padding-left: 25px;\n}\n\nol {\n  list-style: decimal;\n  padding-left: 20px;\n}\n\nblockquote {\n  margin: 0;\n  padding: 0 0 0 20px;\n  font-style: italic;\n}\n\ndl, dt, dd, dl p {\n  color: #444;\n}\n\ndl dt {\n  font-weight: bold;\n}\n\ndl dd {\n  padding-left: 20px;\n  font-style: italic;\n}\n\ndl p {\n  padding-left: 20px;\n  font-style: italic;\n}\n\nhr {\n  border:0;\n  background:#ccc;\n  height:1px;\n  margin:0 0 24px;\n}\n\n/* Images */\n\nimg {\n  position: relative;\n  margin: 0 auto;\n  max-width: 650px;\n  padding: 5px;\n  margin: 10px 0 32px 0;\n  border: 1px solid #ccc;\n}\n\np img {\n  display: inline;\n  margin: 0;\n  padding: 0;\n  vertical-align: middle;\n  text-align: center;\n  border: none;\n}\n\n/* Code blocks */\ncode, pre {\n  font-family: monospace;\n  color:#000;\n  font-size:12px;\n  line-height: 14px;\n}\n\npre {\n  padding: 6px 12px;\n  background: #FDFEFB;\n  border-radius:4px;\n  border:1px solid #D7D8C8;\n  overflow: auto;\n  white-space: 
pre-wrap;\n  margin-bottom: 16px;\n}\n\n\n/* Tables */\ntable {\n  width:100%;\n}\n\ntable {\n  border: 1px solid #ccc;\n  margin-bottom: 32px;\n  text-align: left;\n }\n\nth {\n  font-family: 'Open Sans', sans-serif;\n  font-size: 18px;\n  font-weight: normal;\n  padding: 10px;\n  background: #232323;\n  color: #FDFEFB;\n }\n\ntd {\n  padding: 10px;\n  background: #ccc;\n }\n\n\n/* Wrapper */\n.wrapper {\n  width:960px;\n}\n\n\n/* Header */\n\nheader {\n  width:170px;\n  float:left;\n  position:fixed;\n  padding: 12px 25px 22px 50px;\n  margin: 24px 25px 0 0;\n}\n\np.header {\n  font-size: 14px;\n}\n\nh1.header {\n  font-size: 30px;\n  font-weight: 300;\n  line-height: 1.3em;\n  margin-top: 0;\n}\n\na.name {\n  white-space: nowrap;\n}\n\nheader ul {\n  list-style:none;\n  padding:0;\n}\n\nheader li {\n  list-style-type: none;\n  width:132px;\n  height:15px;\n  margin-bottom: 12px;\n  line-height: 1em;\n  padding: 6px 6px 6px 7px;\n  background: #c30000;\n  border-radius:4px;\n  border:1px solid #555;\n}\n\nheader li:hover {\n  background: #dd0000;\n}\n\na.buttons {\n  color: #fff;\n  text-decoration: none;\n  font-weight: normal;\n  padding: 2px 2px 2px 22px;\n  height: 30px;\n}\n\na.github {\n  background: url(/images/GitHub-Mark-64px.png) no-repeat center left;\n  background-size: 15%;\n}\n\n/* Section - for main page content */\n\nsection {\n  width:650px;\n  float:right;\n  padding-bottom:50px;\n}\n\np.footnote {\n  font-size: 12px;\n}\n\n\n/* Footer */\n\nfooter {\n  width:170px;\n  float:left;\n  position:fixed;\n  bottom:10px;\n  padding-left: 50px;\n}\n\n@media print, screen and (max-width: 960px) {\n\n  div.wrapper {\n    width:auto;\n    margin:0;\n  }\n\n  header, section, footer {\n    float:none;\n    position:static;\n    width:auto;\n  }\n\n  footer {\n    border-top: 1px solid #ccc;\n    margin:0 84px 0 50px;\n    padding:0;\n  }\n\n  header {\n    padding-right:320px;\n  }\n\n  section {\n    padding:20px 84px 20px 50px;\n    margin:0 0 20px;\n  
}\n\n  header a small {\n    display:inline;\n  }\n\n  header ul {\n    position:absolute;\n    right:130px;\n    top:84px;\n  }\n}\n\n@media print, screen and (max-width: 720px) {\n  body {\n    word-wrap:break-word;\n  }\n\n  header {\n    padding:10px 20px 0;\n    margin-right: 0;\n  }\n\n  section {\n    padding:10px 0 10px 20px;\n    margin:0 0 30px;\n  }\n\n  footer {\n    margin: 0 0 0 30px;\n  }\n\n  header ul, header p.view {\n    position:static;\n  }\n}\n\n@media print, screen and (max-width: 480px) {\n\n  header ul li.download {\n    display:none;\n  }\n\n  footer {\n    margin: 0 0 0 20px;\n  }\n\n  footer a{\n    display:block;\n  }\n\n}\n\n@media print {\n  body {\n    padding:0.4in;\n    font-size:12pt;\n    color:#444;\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/convolution.md",
    "content": "---\ntitle: Convolution\n---\n# Caffeinated Convolution\n\nThe Caffe strategy for convolution is to reduce the problem to matrix-matrix multiplication.\nThis linear algebra computation is highly-tuned in BLAS libraries and efficiently computed on GPU devices.\n\nFor more details read Yangqing's [Convolution in Caffe: a memo](https://github.com/Yangqing/caffe/wiki/Convolution-in-Caffe:-a-memo).\n\nAs it turns out, this same reduction was independently explored in the context of conv. nets by\n\n> K. Chellapilla, S. Puri, P. Simard, et al. High performance convolutional neural networks for document processing. In Tenth International Workshop on Frontiers in Handwriting Recognition, 2006.\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/data.md",
    "content": "---\ntitle: Data\n---\n# Data: Ins and Outs\n\nData flows through Caffe as [Blobs](net_layer_blob.html#blob-storage-and-communication).\nData layers load input and save output by converting Blobs to and from other formats.\nCommon transformations like mean-subtraction and feature-scaling are done by data layer configuration.\nNew input types are supported by developing a new data layer -- the rest of the Net follows by the modularity of the Caffe layer catalogue.\n\nThis data layer definition\n\n    layer {\n      name: \"mnist\"\n      # Data layer loads leveldb or lmdb storage DBs for high throughput.\n      type: \"Data\"\n      # the 1st top is the data itself: the name is only convention\n      top: \"data\"\n      # the 2nd top is the ground truth: the name is only convention\n      top: \"label\"\n      # the Data layer configuration\n      data_param {\n        # path to the DB\n        source: \"examples/mnist/mnist_train_lmdb\"\n        # type of DB: LEVELDB or LMDB (LMDB supports concurrent reads)\n        backend: LMDB\n        # batch processing improves efficiency.\n        batch_size: 64\n      }\n      # common data transformations\n      transform_param {\n        # feature scaling coefficient: this maps the [0, 255] MNIST data to [0, 1]\n        scale: 0.00390625\n      }\n    }\n\nloads the MNIST digits.\n\n**Tops and Bottoms**: A data layer makes **top** blobs to output data to the model.\nIt does not have **bottom** blobs since it takes no input.\n\n**Data and Label**: a data layer has at least one top canonically named **data**.\nFor ground truth a second top can be defined that is canonically named **label**.\nBoth tops simply produce blobs and there is nothing inherently special about these names.\nThe (data, label) pairing is a convenience for classification models.\n\n**Transformations**: data preprocessing is parametrized by transformation messages within the data layer definition.\n\n    layer {\n      name: \"data\"\n  
    type: \"Data\"\n      [...]\n      transform_param {\n        scale: 0.1\n        mean_file: \"mean.binaryproto\"\n        # for images in particular horizontal mirroring and random cropping\n        # can be done as simple data augmentations.\n        mirror: 1  # 1 = on, 0 = off\n        # crop a `crop_size` x `crop_size` patch:\n        # - at random during training\n        # - from the center during testing\n        crop_size: 227\n      }\n    }\n\n**Prefetching**: for throughput, data layers fetch the next batch of data and prepare it in the background while the Net computes the current batch.\n\n**Multiple Inputs**: a Net can have multiple inputs of any number and type. Define as many data layers as needed, giving each a unique name and top. Multiple inputs are useful for non-trivial ground truth: one data layer loads the actual data and the other data layer loads the ground truth in lock-step. In this arrangement both data and label can be any 4D array. Further applications of multiple inputs are found in multi-modal and sequence models. In these cases you may need to implement your own data preparation routines or a special data layer.\n\n*Improvements to data processing to add formats, generality, or helper utilities are welcome!*\n\n## Formats\n\nRefer to the layer catalogue of [data layers](layers.html#data-layers) for close-ups on each type of data Caffe understands.\n\n## Deployment Input\n\nFor on-the-fly computation, deployment Nets define their inputs by `input` fields: these Nets then accept direct assignment of data for online or interactive computation.\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/fig/.gitignore",
    "content": ""
  },
  {
    "path": "caffe-fpn/docs/tutorial/forward_backward.md",
    "content": "---\ntitle: Forward and Backward for Inference and Learning\n---\n# Forward and Backward\n\nThe forward and backward passes are the essential computations of a [Net](net_layer_blob.html).\n\n<img src=\"fig/forward_backward.png\" alt=\"Forward and Backward\" width=\"480\">\n\nLet's consider a simple logistic regression classifier.\n\nThe **forward** pass computes the output given the input for inference.\nIn forward Caffe composes the computation of each layer to compute the \"function\" represented by the model.\nThis pass goes from bottom to top.\n\n<img src=\"fig/forward.jpg\" alt=\"Forward pass\" width=\"320\">\n\nThe data $$x$$ is passed through an inner product layer for $$g(x)$$ then through a softmax for $$h(g(x))$$ and softmax loss to give $$f_W(x)$$.\n\nThe **backward** pass computes the gradient given the loss for learning.\nIn backward Caffe reverse-composes the gradient of each layer to compute the gradient of the whole model by automatic differentiation.\nThis is back-propagation.\nThis pass goes from top to bottom.\n\n<img src=\"fig/backward.jpg\" alt=\"Backward pass\" width=\"320\">\n\nThe backward pass begins with the loss and computes the gradient with respect to the output $$\\frac{\\partial f_W}{\\partial h}$$. The gradient with respect to the rest of the model is computed layer-by-layer through the chain rule. Layers with parameters, like the `INNER_PRODUCT` layer, compute the gradient with respect to their parameters $$\\frac{\\partial f_W}{\\partial W_{\\text{ip}}}$$ during the backward step.\n\nThese computations follow immediately from defining the model: Caffe plans and carries out the forward and backward passes for you.\n\n- The `Net::Forward()` and `Net::Backward()` methods carry out the respective passes while `Layer::Forward()` and `Layer::Backward()` compute each step.\n- Every layer type has `forward_{cpu,gpu}()` and `backward_{cpu,gpu}()` methods to compute its steps according to the mode of computation. 
A layer may only implement CPU or GPU mode due to constraints or convenience.\n\nThe [Solver](solver.html) optimizes a model by first calling forward to yield the output and loss, then calling backward to generate the gradient of the model, and then incorporating the gradient into a weight update that attempts to minimize the loss. Division of labor between the Solver, Net, and Layer keep Caffe modular and open to development.\n\nFor the details of the forward and backward steps of Caffe's layer types, refer to the [layer catalogue](layers.html).\n\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/index.md",
    "content": "---\ntitle: Caffe Tutorial\n---\n# Caffe Tutorial\n\nCaffe is a deep learning framework and this tutorial explains its philosophy, architecture, and usage.\nThis is a practical guide and framework introduction, so the full frontier, context, and history of deep learning cannot be covered here.\nWhile explanations will be given where possible, a background in machine learning and neural networks is helpful.\n\n## Philosophy\n\nIn one sip, Caffe is brewed for\n\n- Expression: models and optimizations are defined as plaintext schemas instead of code.\n- Speed: for research and industry alike speed is crucial for state-of-the-art models and massive data.\n- Modularity: new tasks and settings require flexibility and extension.\n- Openness: scientific and applied progress call for common code, reference models, and reproducibility.\n- Community: academic research, startup prototypes, and industrial applications all share strength by joint discussion and development in a BSD-2 project.\n\nand these principles direct the project.\n\n## Tour\n\n- [Nets, Layers, and Blobs](net_layer_blob.html): the anatomy of a Caffe model.\n- [Forward / Backward](forward_backward.html): the essential computations of layered compositional models.\n- [Loss](loss.html): the task to be learned is defined by the loss.\n- [Solver](solver.html): the solver coordinates model optimization.\n- [Layer Catalogue](layers.html): the layer is the fundamental unit of modeling and computation -- Caffe's catalogue includes layers for state-of-the-art models.\n- [Interfaces](interfaces.html): command line, Python, and MATLAB Caffe.\n- [Data](data.html): how to caffeinate data for model input.\n\nFor a closer look at a few details:\n\n- [Caffeinated Convolution](convolution.html): how Caffe computes convolutions.\n\n## Deeper Learning\n\nThere are helpful references freely online for deep learning that complement our hands-on tutorial.\nThese cover introductory and advanced material, background 
and history, and the latest advances.\n\nThe [Tutorial on Deep Learning for Vision](https://sites.google.com/site/deeplearningcvpr2014/) from CVPR '14 is a good companion tutorial for researchers.\nOnce you have the framework and practice foundations from the Caffe tutorial, explore the fundamental ideas and advanced research directions in the CVPR '14 tutorial.\n\nA broad introduction is given in the free online draft of [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) by Michael Nielsen. In particular the chapters on using neural nets and how backpropagation works are helpful if you are new to the subject.\n\nThese recent academic tutorials cover deep learning for researchers in machine learning and vision:\n\n- [Deep Learning Tutorial](http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf) by Yann LeCun (NYU, Facebook) and Marc'Aurelio Ranzato (Facebook). ICML 2013 tutorial.\n- [LISA Deep Learning Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf) by the LISA Lab directed by Yoshua Bengio (U. Montréal).\n\nFor an exposition of neural networks in circuits and code, check out [Understanding Neural Networks from a Programmer's Perspective](http://karpathy.github.io/neuralnets/) by Andrej Karpathy (Stanford).\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/interfaces.md",
    "content": "---\ntitle: Interfaces\n---\n# Interfaces\n\nCaffe has command line, Python, and MATLAB interfaces for day-to-day usage, interfacing with research code, and rapid prototyping. While Caffe is a C++ library at heart and it exposes a modular interface for development, not every occasion calls for custom compilation. The cmdcaffe, pycaffe, and matcaffe interfaces are here for you.\n\n## Command Line\n\nThe command line interface -- cmdcaffe -- is the `caffe` tool for model training, scoring, and diagnostics. Run `caffe` without any arguments for help. This tool and others are found in caffe/build/tools. (The following example calls require completing the LeNet / MNIST example first.)\n\n**Training**: `caffe train` learns models from scratch, resumes learning from saved snapshots, and fine-tunes models to new data and tasks:\n\n* All training requires a solver configuration through the `-solver solver.prototxt` argument.\n* Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot.\n* Fine-tuning requires the `-weights model.caffemodel` argument for the model initialization.\n\nFor example, you can run:\n\n    # train LeNet\n    caffe train -solver examples/mnist/lenet_solver.prototxt\n    # train on GPU 2\n    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2\n    # resume training from the half-way point snapshot\n    caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate\n\nFor a full example of fine-tuning, see examples/finetuning_on_flickr_style, but the training call alone is\n\n    # fine-tune CaffeNet model weights for style recognition\n    caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel\n\n**Testing**: `caffe test` scores models by running them in the test phase and reports the net output as its score. 
The net architecture must be properly defined to output an accuracy measure or a loss. The per-batch score is reported and then the grand average is reported last.\n\n    # score the learned LeNet model on the validation set as defined in the\n    # model architecture lenet_train_test.prototxt\n    caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100\n\n**Benchmarking**: `caffe time` benchmarks model execution layer-by-layer through timing and synchronization. This is useful to check system performance and measure relative execution times for models.\n\n    # (These example calls require you complete the LeNet / MNIST example first.)\n    # time LeNet training on CPU for 10 iterations\n    caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10\n    # time LeNet training on GPU for the default 50 iterations\n    caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0\n    # time a model architecture with the given weights on the first GPU for 10 iterations\n    caffe time -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 10\n\n**Diagnostics**: `caffe device_query` reports GPU details for reference and checking device ordinals for running on a given device in multi-GPU machines.\n\n    # query the first device\n    caffe device_query -gpu 0\n\n**Parallelism**: the `-gpu` flag to the `caffe` tool can take a comma-separated list of IDs to run on multiple GPUs. A solver and net will be instantiated for each GPU, so the batch size is effectively multiplied by the number of GPUs. 
To reproduce single GPU training, reduce the batch size in the network definition accordingly.\n\n    # train on GPUs 0 & 1 (doubling the batch size)\n    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 0,1\n    # train on all GPUs (multiplying batch size by number of devices)\n    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu all\n\n## Python\n\nThe Python interface -- pycaffe -- is the `caffe` module and its scripts in caffe/python. `import caffe` to load models, do forward and backward, handle IO, visualize networks, and even instrument model solving. All model data, derivatives, and parameters are exposed for reading and writing.\n\n- `caffe.Net` is the central interface for loading, configuring, and running models. `caffe.Classifier` and `caffe.Detector` provide convenience interfaces for common tasks.\n- `caffe.SGDSolver` exposes the solving interface.\n- `caffe.io` handles input / output with preprocessing and protocol buffers.\n- `caffe.draw` visualizes network architectures.\n- Caffe blobs are exposed as numpy ndarrays for ease-of-use and efficiency.\n\nTutorial IPython notebooks are found in caffe/examples: do `ipython notebook caffe/examples` to try them. 
For developer reference, docstrings can be found throughout the code.\n\nCompile pycaffe by `make pycaffe`.\nAdd the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like for `import caffe`.\n\n## MATLAB\n\nThe MATLAB interface -- matcaffe -- is the `caffe` package in caffe/matlab that lets you integrate Caffe into your Matlab code.\n\nIn MatCaffe, you can\n\n* Create multiple Nets in Matlab\n* Do forward and backward computation\n* Access any layer within a network, and any parameter blob in a layer\n* Get and set data or diff to any blob within a network, not restricted to input or output blobs\n* Save a network's parameters to file, and load parameters from file\n* Reshape a blob and reshape a network\n* Edit network parameters and do network surgery\n* Create multiple Solvers in Matlab for training\n* Resume training from solver snapshots\n* Access the train net and test nets in a solver\n* Run for a certain number of iterations and give back control to Matlab\n* Intermingle arbitrary Matlab code with gradient steps\n\nAn ILSVRC image classification demo is in caffe/matlab/demo/classification_demo.m (you need to download BVLC CaffeNet from [Model Zoo](http://caffe.berkeleyvision.org/model_zoo.html) to run it).\n\n### Build MatCaffe\n\nBuild MatCaffe with `make all matcaffe`. After that, you may test it using `make mattest`.\n\nCommon issue: if you run into error messages like `libstdc++.so.6:version 'GLIBCXX_3.4.15' not found` during `make mattest`, then it usually means that your Matlab's runtime libraries do not match your compile-time libraries. You may need to do the following before you start Matlab:\n\n    export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:/usr/local/cuda/lib64\n    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6\n\nOr the equivalent based on where things are installed on your system, and do `make mattest` again to see if the issue is fixed. 
Note: this issue is sometimes more complicated since during its startup Matlab may overwrite your `LD_LIBRARY_PATH` environment variable. You can run `!ldd ./matlab/+caffe/private/caffe_.mexa64` (the mex extension may differ on your system) in Matlab to see its runtime libraries, and preload your compile-time libraries by exporting them to your `LD_PRELOAD` environment variable.\n\nAfter successful building and testing, add this package to the Matlab search PATH by starting `matlab` from the caffe root folder and running the following command in the Matlab command window.\n\n    addpath ./matlab\n\nYou can save your Matlab search PATH by running `savepath` so that you don't have to run the command above again every time you use MatCaffe.\n\n### Use MatCaffe\n\nMatCaffe is very similar to PyCaffe in usage.\n\nThe examples below show detailed usage and assume you have downloaded BVLC CaffeNet from [Model Zoo](http://caffe.berkeleyvision.org/model_zoo.html) and started `matlab` from the caffe root folder.\n\n    model = './models/bvlc_reference_caffenet/deploy.prototxt';\n    weights = './models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel';\n\n#### Set mode and device\n\n**Mode and device should always be set BEFORE you create a net or a solver.**\n\nUse CPU:\n\n    caffe.set_mode_cpu();\n\nUse GPU and specify its gpu_id:\n\n    caffe.set_mode_gpu();\n    caffe.set_device(gpu_id);\n\n#### Create a network and access its layers and blobs\n\nCreate a network:\n\n    net = caffe.Net(model, weights, 'test'); % create net and load weights\n\nOr\n\n    net = caffe.Net(model, 'test'); % create net without loading weights\n    net.copy_from(weights); % load weights\n\nwhich creates the `net` object as\n\n      Net with properties:\n\n               layer_vec: [1x23 caffe.Layer]\n                blob_vec: [1x15 caffe.Blob]\n                  inputs: {'data'}\n                 outputs: {'prob'}\n        name2layer_index: [23x1 containers.Map]\n         name2blob_index: [15x1 
containers.Map]\n             layer_names: {23x1 cell}\n              blob_names: {15x1 cell}\n\nThe two `containers.Map` objects are useful to find the index of a layer or a blob by its name.\n\nYou have access to every blob in this network. To fill blob 'data' with all ones:\n\n    net.blobs('data').set_data(ones(net.blobs('data').shape));\n\nTo multiply all values in blob 'data' by 10:\n\n    net.blobs('data').set_data(net.blobs('data').get_data() * 10);\n\n**Be aware that since Matlab is 1-indexed and column-major, the usual 4 blob dimensions in Matlab are `[width, height, channels, num]`, and `width` is the fastest dimension. Also be aware that images are in BGR channels.** Also, Caffe uses single-precision float data. If your data is not single, `set_data` will automatically convert it to single.\n\nYou also have access to every layer, so you can do network surgery. For example, to multiply conv1 parameters by 10:\n\n    net.params('conv1', 1).set_data(net.params('conv1', 1).get_data() * 10); % set weights\n    net.params('conv1', 2).set_data(net.params('conv1', 2).get_data() * 10); % set bias\n\nAlternatively, you can use\n\n    net.layers('conv1').params(1).set_data(net.layers('conv1').params(1).get_data() * 10);\n    net.layers('conv1').params(2).set_data(net.layers('conv1').params(2).get_data() * 10);\n\nTo save the network you just modified:\n\n    net.save('my_net.caffemodel');\n\nTo get a layer's type (string):\n\n    layer_type = net.layers('conv1').type;\n\n#### Forward and backward\n\nForward pass can be done using `net.forward` or `net.forward_prefilled`. Function `net.forward` takes in a cell array of N-D arrays containing data of input blob(s) and outputs a cell array containing data from output blob(s). Function `net.forward_prefilled` uses existing data in input blob(s) during forward pass, takes no input and produces no output. 
After creating some data for input blobs like `data = rand(net.blobs('data').shape);` you can run\n\n    res = net.forward({data});\n    prob = res{1};\n\nOr\n\n    net.blobs('data').set_data(data);\n    net.forward_prefilled();\n    prob = net.blobs('prob').get_data();\n\nBackward is similar using `net.backward` or `net.backward_prefilled` and replacing `get_data` and `set_data` with `get_diff` and `set_diff`. After creating some gradients for output blobs like `prob_diff = rand(net.blobs('prob').shape);` you can run\n\n    res = net.backward({prob_diff});\n    data_diff = res{1};\n\nOr\n\n    net.blobs('prob').set_diff(prob_diff);\n    net.backward_prefilled();\n    data_diff = net.blobs('data').get_diff();\n\n**However, the backward computation above doesn't get correct results, because Caffe decides that the network does not need backward computation. To get correct backward results, you need to set `'force_backward: true'` in your network prototxt.**\n\nAfter performing forward or backward pass, you can also get the data or diff in internal blobs. 
For example, to extract pool5 features after forward pass:\n\n    pool5_feat = net.blobs('pool5').get_data();\n\n#### Reshape\n\nAssume you want to run 1 image at a time instead of 10:\n\n    net.blobs('data').reshape([227 227 3 1]); % reshape blob 'data'\n    net.reshape();\n\nThen the whole network is reshaped, and now `net.blobs('prob').shape` should be `[1000 1]`.\n\n#### Training\n\nAssuming you have created training and validation lmdbs following our [ImageNet Tutorial](http://caffe.berkeleyvision.org/gathered/examples/imagenet.html), create a solver and train on the ILSVRC 2012 classification dataset:\n\n    solver = caffe.Solver('./models/bvlc_reference_caffenet/solver.prototxt');\n\nwhich creates the `solver` object as\n\n      Solver with properties:\n\n              net: [1x1 caffe.Net]\n        test_nets: [1x1 caffe.Net]\n\nTo train:\n\n    solver.solve();\n\nOr train for only 1000 iterations (so that you can do something to its net before training more iterations):\n\n    solver.step(1000);\n\nTo get the iteration number:\n\n    iter = solver.iter();\n\nTo get its network:\n\n    train_net = solver.net;\n    test_net = solver.test_nets(1);\n\nTo resume from a snapshot \"your_snapshot.solverstate\":\n\n    solver.restore('your_snapshot.solverstate');\n\n#### Input and output\n\nThe `caffe.io` class provides basic input functions `load_image` and `read_mean`. 
For example, to read ILSVRC 2012 mean file (assume you have downloaded imagenet example auxiliary files by running `./data/ilsvrc12/get_ilsvrc_aux.sh`):\n\n    mean_data = caffe.io.read_mean('./data/ilsvrc12/imagenet_mean.binaryproto');\n\nTo read Caffe's example image and resize to `[width, height]` and suppose we want `width = 256; height = 256;`\n\n    im_data = caffe.io.load_image('./examples/images/cat.jpg');\n    im_data = imresize(im_data, [width, height]); % resize using Matlab's imresize\n\n**Keep in mind that `width` is the fastest dimension and channels are BGR, which is different from the usual way that Matlab stores an image.** If you don't want to use `caffe.io.load_image` and prefer to load an image by yourself, you can do\n\n    im_data = imread('./examples/images/cat.jpg'); % read image\n    im_data = im_data(:, :, [3, 2, 1]); % convert from RGB to BGR\n    im_data = permute(im_data, [2, 1, 3]); % permute width and height\n    im_data = single(im_data); % convert to single precision\n\nAlso, you may take a look at caffe/matlab/demo/classification_demo.m to see how to prepare input by taking crops from an image.\n\nWe show in caffe/matlab/hdf5creation how to read and write HDF5 data with Matlab. We do not provide extra functions for data output as Matlab itself is already quite powerful in output.\n\n#### Clear nets and solvers\n\nCall `caffe.reset_all()` to clear all solvers and stand-alone nets you have created.\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/layers.md",
    "content": "---\ntitle: Layer Catalogue\n---\n# Layers\n\nTo create a Caffe model you need to define the model architecture in a protocol buffer definition file (prototxt).\n\nCaffe layers and their parameters are defined in the protocol buffer definitions for the project in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto).\n\n### Vision Layers\n\n* Header: `./include/caffe/vision_layers.hpp`\n\nVision layers usually take *images* as input and produce other *images* as output.\nA typical \"image\" in the real-world may have one color channel ($$c = 1$$), as in a grayscale image, or three color channels ($$c = 3$$) as in an RGB (red, green, blue) image.\nBut in this context, the distinguishing characteristic of an image is its spatial structure: usually an image has some non-trivial height $$h > 1$$ and width $$w > 1$$.\nThis 2D geometry naturally lends itself to certain decisions about how to process the input.\nIn particular, most of the vision layers work by applying a particular operation to some region of the input to produce a corresponding region of the output.\nIn contrast, other layers (with few exceptions) ignore the spatial structure of the input, effectively treating it as \"one big vector\" with dimension $$chw$$.\n\n\n#### Convolution\n\n* Layer type: `Convolution`\n* CPU implementation: `./src/caffe/layers/convolution_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/convolution_layer.cu`\n* Parameters (`ConvolutionParameter convolution_param`)\n    - Required\n        - `num_output` (`c_o`): the number of filters\n        - `kernel_size` (or `kernel_h` and `kernel_w`): specifies height and width of each filter\n    - Strongly Recommended\n        - `weight_filler` [default `type: 'constant' value: 0`]\n    - Optional\n        - `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs\n        - `pad` (or `pad_h` and `pad_w`) [default 0]: 
specifies the number of pixels to (implicitly) add to each side of the input\n        - `stride` (or `stride_h` and `stride_w`) [default 1]: specifies the intervals at which to apply the filters to the input\n        - `group` (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the $$i$$th output group channels will be only connected to the $$i$$th input group channels.\n* Input\n    - `n * c_i * h_i * w_i`\n* Output\n    - `n * c_o * h_o * w_o`, where `h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1` and `w_o` likewise.\n* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)\n\n      layer {\n        name: \"conv1\"\n        type: \"Convolution\"\n        bottom: \"data\"\n        top: \"conv1\"\n        # learning rate and decay multipliers for the filters\n        param { lr_mult: 1 decay_mult: 1 }\n        # learning rate and decay multipliers for the biases\n        param { lr_mult: 2 decay_mult: 0 }\n        convolution_param {\n          num_output: 96     # learn 96 filters\n          kernel_size: 11    # each filter is 11x11\n          stride: 4          # step 4 pixels between each filter application\n          weight_filler {\n            type: \"gaussian\" # initialize the filters from a Gaussian\n            std: 0.01        # distribution with stdev 0.01 (default mean: 0)\n          }\n          bias_filler {\n            type: \"constant\" # initialize the biases to zero (0)\n            value: 0\n          }\n        }\n      }\n\nThe `Convolution` layer convolves the input image with a set of learnable filters, each producing one feature map in the output image.\n\n#### Pooling\n\n* Layer type: `Pooling`\n* CPU implementation: `./src/caffe/layers/pooling_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/pooling_layer.cu`\n* Parameters (`PoolingParameter pooling_param`)\n    - Required\n        - 
`kernel_size` (or `kernel_h` and `kernel_w`): specifies height and width of each filter\n    - Optional\n        - `pool` [default MAX]: the pooling method. Currently MAX, AVE, or STOCHASTIC\n        - `pad` (or `pad_h` and `pad_w`) [default 0]: specifies the number of pixels to (implicitly) add to each side of the input\n        - `stride` (or `stride_h` and `stride_w`) [default 1]: specifies the intervals at which to apply the filters to the input\n* Input\n    - `n * c * h_i * w_i`\n* Output\n    - `n * c * h_o * w_o`, where h_o and w_o are computed in the same way as convolution.\n* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)\n\n      layer {\n        name: \"pool1\"\n        type: \"Pooling\"\n        bottom: \"conv1\"\n        top: \"pool1\"\n        pooling_param {\n          pool: MAX\n          kernel_size: 3 # pool over a 3x3 region\n          stride: 2      # step two pixels (in the bottom blob) between pooling regions\n        }\n      }\n\n#### Local Response Normalization (LRN)\n\n* Layer type: `LRN`\n* CPU Implementation: `./src/caffe/layers/lrn_layer.cpp`\n* CUDA GPU Implementation: `./src/caffe/layers/lrn_layer.cu`\n* Parameters (`LRNParameter lrn_param`)\n    - Optional\n        - `local_size` [default 5]: the number of channels to sum over (for cross channel LRN) or the side length of the square region to sum over (for within channel LRN)\n        - `alpha` [default 1]: the scaling parameter (see below)\n        - `beta` [default 5]: the exponent (see below)\n        - `norm_region` [default `ACROSS_CHANNELS`]: whether to sum over adjacent channels (`ACROSS_CHANNELS`) or nearby spatial locations (`WITHIN_CHANNEL`)\n\nThe local response normalization layer performs a kind of \"lateral inhibition\" by normalizing over local input regions. In `ACROSS_CHANNELS` mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape `local_size x 1 x 1`). 
In `WITHIN_CHANNEL` mode, the local regions extend spatially, but are in separate channels (i.e., they have shape `1 x local_size x local_size`). Each input value is divided by $$(1 + (\\alpha/n) \\sum_i x_i^2)^\\beta$$, where $$n$$ is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).\n\n#### im2col\n\n`Im2col` is a helper for doing the image-to-column transformation that you most likely do not need to know about. This is used in Caffe's original convolution to do matrix multiplication by laying out all patches into a matrix.\n\n### Loss Layers\n\nLoss drives learning by comparing an output to a target and assigning cost to minimize. The loss itself is computed by the forward pass and the gradient w.r.t. the loss is computed by the backward pass.\n\n#### Softmax\n\n* Layer type: `SoftmaxWithLoss`\n\nThe softmax loss layer computes the multinomial logistic loss of the softmax of its inputs. It's conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.\n\n#### Sum-of-Squares / Euclidean\n\n* Layer type: `EuclideanLoss`\n\nThe Euclidean loss layer computes the sum of squares of differences of its two inputs, $$\\frac 1 {2N} \\sum_{i=1}^N \\| x^1_i - x^2_i \\|_2^2$$.\n\n#### Hinge / Margin\n\n* Layer type: `HingeLoss`\n* CPU implementation: `./src/caffe/layers/hinge_loss_layer.cpp`\n* CUDA GPU implementation: none yet\n* Parameters (`HingeLossParameter hinge_loss_param`)\n    - Optional\n        - `norm` [default L1]: the norm used. 
Currently L1, L2\n* Inputs\n    - `n * c * h * w` Predictions\n    - `n * 1 * 1 * 1` Labels\n* Output\n    - `1 * 1 * 1 * 1` Computed Loss\n* Samples\n\n      # L1 Norm\n      layer {\n        name: \"loss\"\n        type: \"HingeLoss\"\n        bottom: \"pred\"\n        bottom: \"label\"\n      }\n\n      # L2 Norm\n      layer {\n        name: \"loss\"\n        type: \"HingeLoss\"\n        bottom: \"pred\"\n        bottom: \"label\"\n        top: \"loss\"\n        hinge_loss_param {\n          norm: L2\n        }\n      }\n\nThe hinge loss layer computes a one-vs-all hinge or squared hinge loss.\n\n#### Sigmoid Cross-Entropy\n\n`SigmoidCrossEntropyLoss`\n\n#### Infogain\n\n`InfogainLoss`\n\n#### Accuracy and Top-k\n\n`Accuracy` scores the output as the accuracy of output with respect to target -- it is not actually a loss and has no backward step.\n\n### Activation / Neuron Layers\n\nIn general, activation / Neuron layers are element-wise operators, taking one bottom blob and producing one top blob of the same size. In the layers below, we will ignore the input and output sizes as they are identical:\n\n* Input\n    - n * c * h * w\n* Output\n    - n * c * h * w\n\n#### ReLU / Rectified-Linear and Leaky-ReLU\n\n* Layer type: `ReLU`\n* CPU implementation: `./src/caffe/layers/relu_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/relu_layer.cu`\n* Parameters (`ReLUParameter relu_param`)\n    - Optional\n        - `negative_slope` [default 0]: specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.\n* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)\n\n      layer {\n        name: \"relu1\"\n        type: \"ReLU\"\n        bottom: \"conv1\"\n        top: \"conv1\"\n      }\n\nGiven an input value x, the `ReLU` layer computes the output as x if x > 0 and negative_slope * x if x <= 0. 
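As a sketch of this piecewise definition (plain Python for illustration only; Caffe's actual implementation operates elementwise on whole blobs in C++/CUDA):

```python
def relu(x, negative_slope=0.0):
    # x if x > 0, otherwise negative_slope * x;
    # negative_slope=0 clips negatives (standard ReLU), nonzero leaks them
    return x if x > 0 else negative_slope * x

relu(3.0)          # positive inputs pass through unchanged
relu(-2.0, 0.1)    # leaky ReLU scales negatives: -0.2
```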
When the negative slope parameter is not set, it is equivalent to the standard ReLU function of taking max(x, 0). It also supports in-place computation, meaning that the bottom and the top blob could be the same to preserve memory consumption.\n\n#### Sigmoid\n\n* Layer type: `Sigmoid`\n* CPU implementation: `./src/caffe/layers/sigmoid_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/sigmoid_layer.cu`\n* Sample (as seen in `./examples/mnist/mnist_autoencoder.prototxt`)\n\n      layer {\n        name: \"encode1neuron\"\n        bottom: \"encode1\"\n        top: \"encode1neuron\"\n        type: \"Sigmoid\"\n      }\n\nThe `Sigmoid` layer computes the output as sigmoid(x) for each input element x.\n\n#### TanH / Hyperbolic Tangent\n\n* Layer type: `TanH`\n* CPU implementation: `./src/caffe/layers/tanh_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/tanh_layer.cu`\n* Sample\n\n      layer {\n        name: \"layer\"\n        bottom: \"in\"\n        top: \"out\"\n        type: \"TanH\"\n      }\n\nThe `TanH` layer computes the output as tanh(x) for each input element x.\n\n#### Absolute Value\n\n* Layer type: `AbsVal`\n* CPU implementation: `./src/caffe/layers/absval_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/absval_layer.cu`\n* Sample\n\n      layer {\n        name: \"layer\"\n        bottom: \"in\"\n        top: \"out\"\n        type: \"AbsVal\"\n      }\n\nThe `AbsVal` layer computes the output as abs(x) for each input element x.\n\n#### Power\n\n* Layer type: `Power`\n* CPU implementation: `./src/caffe/layers/power_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/power_layer.cu`\n* Parameters (`PowerParameter power_param`)\n    - Optional\n        - `power` [default 1]\n        - `scale` [default 1]\n        - `shift` [default 0]\n* Sample\n\n      layer {\n        name: \"layer\"\n        bottom: \"in\"\n        top: \"out\"\n        type: \"Power\"\n        power_param {\n          power: 1\n          scale: 1\n        
  shift: 0\n        }\n      }\n\nThe `Power` layer computes the output as (shift + scale * x) ^ power for each input element x.\n\n#### BNLL\n\n* Layer type: `BNLL`\n* CPU implementation: `./src/caffe/layers/bnll_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/bnll_layer.cu`\n* Sample\n\n      layer {\n        name: \"layer\"\n        bottom: \"in\"\n        top: \"out\"\n        type: \"BNLL\"\n      }\n\nThe `BNLL` (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.\n\n\n### Data Layers\n\nData enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats.\n\nCommon input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying `TransformationParameter`s.\n\n#### Database\n\n* Layer type: `Data`\n* Parameters\n    - Required\n        - `source`: the name of the directory containing the database\n        - `batch_size`: the number of inputs to process at one time\n    - Optional\n        - `rand_skip`: skip up to this number of inputs at the beginning; useful for asynchronous sgd\n        - `backend` [default `LEVELDB`]: choose whether to use a `LEVELDB` or `LMDB`\n\n\n\n#### In-Memory\n\n* Layer type: `MemoryData`\n* Parameters\n    - Required\n        - `batch_size`, `channels`, `height`, `width`: specify the size of input chunks to read from memory\n\nThe memory data layer reads data directly from memory, without copying it. 
In order to use it, one must call `MemoryDataLayer::Reset` (from C++) or `Net.set_input_arrays` (from Python) in order to specify a source of contiguous data (as 4D row major array), which is read one batch-sized chunk at a time.\n\n#### HDF5 Input\n\n* Layer type: `HDF5Data`\n* Parameters\n    - Required\n        - `source`: the name of the file to read from\n        - `batch_size`\n\n#### HDF5 Output\n\n* Layer type: `HDF5Output`\n* Parameters\n    - Required\n        - `file_name`: name of file to write to\n\nThe HDF5 output layer performs the opposite function of the other layers in this section: it writes its input blobs to disk.\n\n#### Images\n\n* Layer type: `ImageData`\n* Parameters\n    - Required\n        - `source`: name of a text file, with each line giving an image filename and label\n        - `batch_size`: number of images to batch together\n    - Optional\n        - `rand_skip`\n        - `shuffle` [default false]\n        - `new_height`, `new_width`: if provided, resize all images to this size\n\n#### Windows\n\n`WindowData`\n\n#### Dummy\n\n`DummyData` is for development and debugging. 
See `DummyDataParameter`.\n\n### Common Layers\n\n#### Inner Product\n\n* Layer type: `InnerProduct`\n* CPU implementation: `./src/caffe/layers/inner_product_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/inner_product_layer.cu`\n* Parameters (`InnerProductParameter inner_product_param`)\n    - Required\n        - `num_output` (`c_o`): the number of filters\n    - Strongly recommended\n        - `weight_filler` [default `type: 'constant' value: 0`]\n    - Optional\n        - `bias_filler` [default `type: 'constant' value: 0`]\n        - `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs\n* Input\n    - `n * c_i * h_i * w_i`\n* Output\n    - `n * c_o * 1 * 1`\n* Sample\n\n      layer {\n        name: \"fc8\"\n        type: \"InnerProduct\"\n        # learning rate and decay multipliers for the weights\n        param { lr_mult: 1 decay_mult: 1 }\n        # learning rate and decay multipliers for the biases\n        param { lr_mult: 2 decay_mult: 0 }\n        inner_product_param {\n          num_output: 1000\n          weight_filler {\n            type: \"gaussian\"\n            std: 0.01\n          }\n          bias_filler {\n            type: \"constant\"\n            value: 0\n          }\n        }\n        bottom: \"fc7\"\n        top: \"fc8\"\n      }\n\nThe `InnerProduct` layer (also usually referred to as the fully connected layer) treats the input as a simple vector and produces an output in the form of a single vector (with the blob's height and width set to 1).\n\n#### Splitting\n\nThe `Split` layer is a utility layer that splits an input blob to multiple output blobs. 
This is used when a blob is fed into multiple output layers.\n\n#### Flattening\n\nThe `Flatten` layer is a utility layer that flattens an input of shape `n * c * h * w` to a simple vector output of shape `n * (c*h*w)`\n\n#### Reshape\n\n* Layer type: `Reshape`\n* Implementation: `./src/caffe/layers/reshape_layer.cpp`\n* Parameters (`ReshapeParameter reshape_param`)\n    - Optional: (also see detailed description below)\n        - `shape`\n\n* Input\n    - a single blob with arbitrary dimensions\n* Output\n    - the same blob, with modified dimensions, as specified by `reshape_param`\n\n* Sample\n\n        layer {\n          name: \"reshape\"\n          type: \"Reshape\"\n          bottom: \"input\"\n          top: \"output\"\n          reshape_param {\n            shape {\n              dim: 0  # copy the dimension from below\n              dim: 2\n              dim: 3\n              dim: -1 # infer it from the other dimensions\n            }\n          }\n        }\n\nThe `Reshape` layer can be used to change the dimensions of its input, without changing its data. Just like the `Flatten` layer, only the dimensions are changed; no data is copied in the process.\n\nOutput dimensions are specified by the `ReshapeParam` proto. Positive numbers are used directly, setting the corresponding dimension of the output blob. In addition, two special values are accepted for any of the target dimension values:\n\n* **0** means \"copy the respective dimension of the bottom layer\". That is, if the bottom has 2 as its 1st dimension, the top will have 2 as its 1st dimension as well, given `dim: 0` as the 1st target dimension.\n* **-1** stands for \"infer this from the other dimensions\". This behavior is similar to that of -1 in *numpy*'s or `[]` for *MATLAB*'s reshape: this dimension is calculated to keep the overall element count the same as in the bottom layer. 
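The `0` / `-1` rules can be sketched in Python (an illustrative re-implementation of the dimension arithmetic only, not Caffe's code; `infer_reshape` is a hypothetical helper name):

```python
from functools import reduce

def infer_reshape(bottom_shape, target_dims):
    # 0 copies the corresponding bottom dimension; a single -1 is inferred
    # so that the total element count stays the same.
    assert target_dims.count(-1) <= 1, 'at most one -1 is allowed'
    out = [bottom_shape[i] if d == 0 else d for i, d in enumerate(target_dims)]
    total = reduce(lambda a, b: a * b, bottom_shape, 1)
    if -1 in out:
        known = reduce(lambda a, b: a * b, (d for d in out if d != -1), 1)
        out[out.index(-1)] = total // known
    return out

# the sample above: dim: 0, dim: 2, dim: 3, dim: -1
infer_reshape([64, 3, 28, 28], [0, 2, 3, -1])   # [64, 2, 3, 392]
```

With target `[0, -1]` the result is the fully flattened shape, matching the `Flatten` layer's behavior.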
At most one -1 can be used in a reshape operation.\n\nAs another example, specifying `reshape_param { shape { dim: 0 dim: -1 } }` makes the layer behave in exactly the same way as the `Flatten` layer.\n\n#### Concatenation\n\n* Layer type: `Concat`\n* CPU implementation: `./src/caffe/layers/concat_layer.cpp`\n* CUDA GPU implementation: `./src/caffe/layers/concat_layer.cu`\n* Parameters (`ConcatParameter concat_param`)\n    - Optional\n        - `axis` [default 1]: 0 for concatenation along num and 1 for channels.\n* Input\n    - `n_i * c_i * h * w` for each input blob i from 1 to K.\n* Output\n    - if `axis = 0`: `(n_1 + n_2 + ... + n_K) * c_1 * h * w`, and all input `c_i` should be the same.\n    - if `axis = 1`: `n_1 * (c_1 + c_2 + ... + c_K) * h * w`, and all input `n_i` should be the same.\n* Sample\n\n      layer {\n        name: \"concat\"\n        bottom: \"in1\"\n        bottom: \"in2\"\n        top: \"out\"\n        type: \"Concat\"\n        concat_param {\n          axis: 1\n        }\n      }\n\nThe `Concat` layer is a utility layer that concatenates its multiple input blobs into one single output blob.\n\n#### Slicing\n\nThe `Slice` layer is a utility layer that slices an input blob into multiple output blobs along a given dimension (currently num or channel only) with given slice indices.\n\n* Sample\n\n      layer {\n        name: \"slicer_label\"\n        type: \"Slice\"\n        bottom: \"label\"\n        ## Example of label with a shape N x 3 x 1 x 1\n        top: \"label1\"\n        top: \"label2\"\n        top: \"label3\"\n        slice_param {\n          axis: 1\n          slice_point: 1\n          slice_point: 2\n        }\n      }\n\n`axis` indicates the target axis; `slice_point` indicates indexes in the selected dimension (the number of indices must be equal to the number of top blobs minus one).\n\n\n#### Elementwise Operations\n\n`Eltwise`\n\n#### Argmax\n\n`ArgMax`\n\n#### Softmax\n\n`Softmax`\n\n#### Mean-Variance Normalization\n\n`MVN`\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/loss.md",
    "content": "---\ntitle: Loss\n---\n# Loss\n\nIn Caffe, as in most of machine learning, learning is driven by a **loss** function (also known as an **error**, **cost**, or **objective** function).\nA loss function specifies the goal of learning by mapping parameter settings (i.e., the current network weights) to a scalar value specifying the \"badness\" of these parameter settings.\nHence, the goal of learning is to find a setting of the weights that *minimizes* the loss function.\n\nThe loss in Caffe is computed by the Forward pass of the network.\nEach layer takes a set of input (`bottom`) blobs and produces a set of output (`top`) blobs.\nSome of these layers' outputs may be used in the loss function.\nA typical choice of loss function for one-versus-all classification tasks is the `SoftmaxWithLoss` function, used in a network definition as follows, for example:\n\n    layer {\n      name: \"loss\"\n      type: \"SoftmaxWithLoss\"\n      bottom: \"pred\"\n      bottom: \"label\"\n      top: \"loss\"\n    }\n\nIn a `SoftmaxWithLoss` function, the `top` blob is a scalar (empty shape) which averages the loss (computed from predicted labels `pred` and actual labels `label`) over the entire mini-batch.\n\n### Loss weights\n\nFor nets with multiple layers producing a loss (e.g., a network that both classifies the input using a `SoftmaxWithLoss` layer and reconstructs it using an `EuclideanLoss` layer), *loss weights* can be used to specify their relative importance.\n\nBy convention, Caffe layer types with the suffix `Loss` contribute to the loss function, but other layers are assumed to be purely used for intermediate computations.\nHowever, any layer can be used as a loss by adding a field `loss_weight: <float>` to a layer definition for each `top` blob produced by the layer.\nLayers with the suffix `Loss` have an implicit `loss_weight: 1` for the first `top` blob (and `loss_weight: 0` for any additional `top`s); other layers have an implicit `loss_weight: 0` 
for all `top`s.\nSo, the above `SoftmaxWithLoss` layer could be equivalently written as:\n\n    layer {\n      name: \"loss\"\n      type: \"SoftmaxWithLoss\"\n      bottom: \"pred\"\n      bottom: \"label\"\n      top: \"loss\"\n      loss_weight: 1\n    }\n\nHowever, *any* layer able to backpropagate may be given a non-zero `loss_weight`, allowing one to, for example, regularize the activations produced by some intermediate layer(s) of the network if desired.\nFor non-singleton outputs with an associated non-zero loss, the loss is computed simply by summing over all entries of the blob.\n\nThe final loss in Caffe, then, is computed by summing the total weighted loss over the network, as in the following pseudo-code:\n\n    loss := 0\n    for layer in layers:\n      for top, loss_weight in layer.tops, layer.loss_weights:\n        loss += loss_weight * sum(top)\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/net_layer_blob.md",
    "content": "---\ntitle: Blobs, Layers, and Nets\n---\n# Blobs, Layers, and Nets: anatomy of a Caffe model\n\nDeep networks are compositional models that are naturally represented as a collection of inter-connected layers that work on chunks of data. Caffe defines a net layer-by-layer in its own model schema. The network defines the entire model bottom-to-top from input data to loss. As data and derivatives flow through the network in the [forward and backward passes](forward_backward.html) Caffe stores, communicates, and manipulates the information as *blobs*: the blob is the standard array and unified memory interface for the framework. The layer comes next as the foundation of both model and computation. The net follows as the collection and connection of layers. The details of the blob describe how information is stored and communicated in and across layers and nets.\n\n[Solving](solver.html) is configured separately to decouple modeling and optimization.\n\nWe will go over each of these components in more detail.\n\n## Blob storage and communication\n\nA Blob is a wrapper over the actual data being processed and passed along by Caffe, and also under the hood provides synchronization capability between the CPU and the GPU. Mathematically, a blob is an N-dimensional array stored in a C-contiguous fashion.\n\nCaffe stores and communicates data using blobs. Blobs provide a unified memory interface holding data; e.g., batches of images, model parameters, and derivatives for optimization.\n\nBlobs conceal the computational and mental overhead of mixed CPU/GPU operation by synchronizing from the CPU host to the GPU device as needed. Memory on the host and device is allocated on demand (lazily) for efficient memory usage.\n\nThe conventional blob dimensions for batches of image data are number N x channel K x height H x width W. Blob memory is row-major in layout, so the last / rightmost dimension changes fastest. 
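As a quick illustrative check of the row-major rule (plain Python, not Caffe code; `blob_offset` is a hypothetical helper):

```python
def blob_offset(shape, index):
    # Flat offset of a multi-dimensional index in a row-major array:
    # the rightmost dimension varies fastest.
    off = 0
    for dim, i in zip(shape, index):
        off = off * dim + i
    return off

# For a 4D blob of shape (N, K, H, W) this reduces to ((n * K + k) * H + h) * W + w:
blob_offset((2, 3, 4, 5), (0, 0, 0, 1))   # 1: adjacent w values are adjacent in memory
blob_offset((2, 3, 4, 5), (1, 2, 3, 4))   # 119: the last element of 2*3*4*5 = 120
```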
For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.\n\n- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.\n- Channel / K is the feature dimension, e.g., for RGB images K = 3.\n\nNote that although many blobs in Caffe examples are 4D with axes for image applications, it is totally valid to use blobs for non-image applications. For example, if you simply need fully-connected networks like the conventional multi-layer perceptron, use 2D blobs (shape (N, D)) and call the InnerProductLayer (which we will cover soon).\n\nParameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1000 x 1024.\n\nFor custom data it may be necessary to hack your own input preparation tool or data layer. However, once your data is in, your job is done. The modularity of layers accomplishes the rest of the work for you.\n\n### Implementation Details\n\nAs we are often interested in the values as well as the gradients of the blob, a Blob stores two chunks of memory, *data* and *diff*. 
The former is the normal data that we pass along, and the latter is the gradient computed by the network.\n\nFurther, as the actual values could be stored either on the CPU or on the GPU, there are two different ways to access them: the const way, which does not change the values, and the mutable way, which changes the values:\n\n    const Dtype* cpu_data() const;\n    Dtype* mutable_cpu_data();\n\n(similarly for gpu and diff).\n\nThe reason for this design is that a Blob uses a SyncedMem class to synchronize values between the CPU and GPU in order to hide the synchronization details and to minimize data transfer. A rule of thumb is: always use the const call if you do not want to change the values, and never store the pointers in your own object. Every time you work on a blob, call the functions to get the pointers, as the SyncedMem will need this to figure out when to copy data.\n\nIn practice when GPUs are present, one loads data from the disk to a blob in CPU code, calls a device kernel to do GPU computation, and ferries the blob off to the next layer, ignoring low-level details while maintaining a high level of performance. As long as all layers have GPU implementations, all the intermediate data and gradients will remain in the GPU.\n\nIf you want to check out when a Blob will copy data, here is an illustrative example:\n\n    // Assuming that data are on the CPU initially, and we have a blob.\n    const Dtype* foo;\n    Dtype* bar;\n    foo = blob.gpu_data(); // data copied cpu->gpu.\n    foo = blob.cpu_data(); // no data copied since both have up-to-date contents.\n    bar = blob.mutable_gpu_data(); // no data copied.\n    // ... 
some operations ...\n    bar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.\n    foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data\n    foo = blob.gpu_data(); // no data copied since both have up-to-date contents\n    bar = blob.mutable_cpu_data(); // still no data copied.\n    bar = blob.mutable_gpu_data(); // data copied cpu->gpu.\n    bar = blob.mutable_cpu_data(); // data copied gpu->cpu.\n\n## Layer computation and connections\n\nThe layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities like rectified-linear and sigmoid and other elementwise transformations, normalize, load data, and compute losses like softmax and hinge. [See the layer catalogue](layers.html) for all operations. Most of the types needed for state-of-the-art deep learning tasks are there.\n\n<img src=\"fig/layer.jpg\" alt=\"A layer with bottom and top blob.\" width=\"256\">\n\nA layer takes input through *bottom* connections and makes output through *top* connections.\n\nEach layer type defines three critical computations: *setup*, *forward*, and *backward*.\n\n- Setup: initialize the layer and its connections once at model initialization.\n- Forward: given input from bottom compute the output and send to the top.\n- Backward: given the gradient w.r.t. the top output compute the gradient w.r.t. the input and send to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.\n\nMore specifically, there will be two Forward and Backward functions implemented, one for CPU and one for GPU. If you do not implement a GPU version, the layer will fall back to the CPU functions as a backup option. 
This may come in handy if you would like to do quick experiments, although it may come with an additional data transfer cost (its inputs will be copied from GPU to CPU, and its outputs will be copied back from CPU to GPU).\n\nLayers have two key responsibilities for the operation of the network as a whole: a *forward pass* that takes the inputs and produces the outputs, and a *backward pass* that takes the gradient with respect to the output and computes the gradients with respect to the parameters and to the inputs, which are in turn back-propagated to earlier layers. These passes are simply the composition of each layer's forward and backward.\n\nDeveloping custom layers requires minimal effort thanks to the compositionality of the network and the modularity of the code. Define the setup, forward, and backward for the layer and it is ready for inclusion in a net.\n\n## Net definition and operation\n\nThe net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function to do a given task, and the composition of every layer's backward computes the gradient from the loss to learn the task. Caffe models are end-to-end machine learning engines.\n\nThe net is a set of layers connected in a computation graph -- a directed acyclic graph (DAG) to be exact. Caffe does all the bookkeeping for any DAG of layers to ensure correctness of the forward and backward passes. 
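The way these passes compose layer by layer can be sketched with a toy two-layer chain in pure NumPy (illustrative only -- the classes below are hypothetical stand-ins, not Caffe's C++ layer API):

```python
import numpy as np

class ScaleLayer:
    """Toy layer: top = w * bottom, with one learnable parameter w."""
    def __init__(self, w):
        self.w = w

    def forward(self, bottom):
        self.bottom = bottom            # cached for the backward pass
        return self.w * bottom

    def backward(self, top_diff):
        # gradient w.r.t. the parameter is stored internally ...
        self.w_diff = float(np.dot(top_diff, self.bottom))
        # ... and the gradient w.r.t. the input is sent to the bottom
        return self.w * top_diff

class SquaredLoss:
    """Toy loss layer: loss = 0.5 * sum((bottom - label)^2)."""
    def forward(self, bottom, label):
        self.diff = bottom - label
        return 0.5 * float(np.sum(self.diff ** 2))

    def backward(self):
        return self.diff                # d(loss)/d(bottom)

scale, loss_layer = ScaleLayer(w=3.0), SquaredLoss()
x = np.array([1.0, 2.0])
label = np.array([2.0, 2.0])

# forward pass: compose each layer's forward
loss = loss_layer.forward(scale.forward(x), label)

# backward pass: compose each layer's backward, starting from the loss
x_diff = scale.backward(loss_layer.backward())
```

A finite-difference check of `x_diff` against `(L(x + h) - L(x - h)) / 2h` is a common way to validate such backward implementations.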
A typical net begins with a data layer that loads from disk and ends with a loss layer that computes the objective for a task such as classification or reconstruction.\n\nThe net is defined as a set of layers and their connections in a plaintext modeling language.\nA simple logistic regression classifier\n\n<img src=\"fig/logreg.jpg\" alt=\"Softmax Regression\" width=\"256\">\n\nis defined by\n\n    name: \"LogReg\"\n    layer {\n      name: \"mnist\"\n      type: \"Data\"\n      top: \"data\"\n      top: \"label\"\n      data_param {\n        source: \"input_leveldb\"\n        batch_size: 64\n      }\n    }\n    layer {\n      name: \"ip\"\n      type: \"InnerProduct\"\n      bottom: \"data\"\n      top: \"ip\"\n      inner_product_param {\n        num_output: 2\n      }\n    }\n    layer {\n      name: \"loss\"\n      type: \"SoftmaxWithLoss\"\n      bottom: \"ip\"\n      bottom: \"label\"\n      top: \"loss\"\n    }\n\nModel initialization is handled by `Net::Init()`. The initialization mainly does two things: scaffolding the overall DAG by creating the blobs and layers (for C++ geeks: the network will retain ownership of the blobs and layers during its lifetime), and calling the layers' `SetUp()` function. It also handles a number of other bookkeeping tasks, such as validating the correctness of the overall network architecture. 
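At a very high level, this scaffolding step can be sketched in plain Python (hypothetical names and structures for illustration; the real logic lives in `net.cpp`):

```python
# Sketch of Net::Init()-style scaffolding: walk the layers in order,
# check that every bottom was produced by an earlier layer's top,
# and create the top blobs (a real net would then call each layer's SetUp()).
layer_specs = [
    {"name": "mnist", "bottoms": [],              "tops": ["data", "label"]},
    {"name": "ip",    "bottoms": ["data"],        "tops": ["ip"]},
    {"name": "loss",  "bottoms": ["ip", "label"], "tops": ["loss"]},
]

def init_net(specs):
    blobs = {}  # the net retains ownership of all blobs it creates
    for spec in specs:
        for b in spec["bottoms"]:
            if b not in blobs:
                raise ValueError("bottom blob %r of layer %r is never produced"
                                 % (b, spec["name"]))
        for t in spec["tops"]:
            blobs[t] = {"producer": spec["name"]}  # stand-in for a real Blob
    return blobs

blobs = init_net(layer_specs)
```

Validating that every bottom has an earlier producer is exactly the kind of architecture check referred to above.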
Also, during initialization the Net explains its initialization by logging to INFO as it goes:\n\n    I0902 22:52:17.931977 2079114000 net.cpp:39] Initializing net from parameters:\n    name: \"LogReg\"\n    [...model prototxt printout...]\n    # construct the network layer-by-layer\n    I0902 22:52:17.932152 2079114000 net.cpp:67] Creating Layer mnist\n    I0902 22:52:17.932165 2079114000 net.cpp:356] mnist -> data\n    I0902 22:52:17.932188 2079114000 net.cpp:356] mnist -> label\n    I0902 22:52:17.932200 2079114000 net.cpp:96] Setting up mnist\n    I0902 22:52:17.935807 2079114000 data_layer.cpp:135] Opening leveldb input_leveldb\n    I0902 22:52:17.937155 2079114000 data_layer.cpp:195] output data size: 64,1,28,28\n    I0902 22:52:17.938570 2079114000 net.cpp:103] Top shape: 64 1 28 28 (50176)\n    I0902 22:52:17.938593 2079114000 net.cpp:103] Top shape: 64 (64)\n    I0902 22:52:17.938611 2079114000 net.cpp:67] Creating Layer ip\n    I0902 22:52:17.938617 2079114000 net.cpp:394] ip <- data\n    I0902 22:52:17.939177 2079114000 net.cpp:356] ip -> ip\n    I0902 22:52:17.939196 2079114000 net.cpp:96] Setting up ip\n    I0902 22:52:17.940289 2079114000 net.cpp:103] Top shape: 64 2 (128)\n    I0902 22:52:17.941270 2079114000 net.cpp:67] Creating Layer loss\n    I0902 22:52:17.941305 2079114000 net.cpp:394] loss <- ip\n    I0902 22:52:17.941314 2079114000 net.cpp:394] loss <- label\n    I0902 22:52:17.941323 2079114000 net.cpp:356] loss -> loss\n    # set up the loss and configure the backward pass\n    I0902 22:52:17.941328 2079114000 net.cpp:96] Setting up loss\n    I0902 22:52:17.941328 2079114000 net.cpp:103] Top shape: (1)\n    I0902 22:52:17.941329 2079114000 net.cpp:109]     with loss weight 1\n    I0902 22:52:17.941779 2079114000 net.cpp:170] loss needs backward computation.\n    I0902 22:52:17.941787 2079114000 net.cpp:170] ip needs backward computation.\n    I0902 22:52:17.941794 2079114000 net.cpp:172] mnist does not need backward computation.\n    # 
determine outputs\n    I0902 22:52:17.941800 2079114000 net.cpp:208] This network produces output loss\n    # finish initialization and report memory usage\n    I0902 22:52:17.941810 2079114000 net.cpp:467] Collecting Learning Rate and Weight Decay.\n    I0902 22:52:17.941818 2079114000 net.cpp:219] Network initialization done.\n    I0902 22:52:17.941824 2079114000 net.cpp:220] Memory required for data: 201476\n\nNote that the construction of the network is device-agnostic -- recall our earlier explanation that blobs and layers hide implementation details from the model definition. After construction, the network is run on either CPU or GPU by setting a single switch defined in `Caffe::mode()` and set by `Caffe::set_mode()`. Layers come with corresponding CPU and GPU routines that produce identical results (up to numerical errors, and with tests to guard it). The CPU / GPU switch is seamless and independent of the model definition. For research and deployment alike it is best to divide model and implementation.\n\n### Model format\n\nThe models are defined in plaintext protocol buffer schema (prototxt) while the learned models are serialized as binary protocol buffer (binaryproto) .caffemodel files.\n\nThe model format is defined by the protobuf schema in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto). The source file is mostly self-explanatory so one is encouraged to check it out.\n\nCaffe speaks [Google Protocol Buffer](https://code.google.com/p/protobuf/) for the following strengths: minimal-size binary strings when serialized, efficient serialization, a human-readable text format compatible with the binary version, and efficient interface implementations in multiple languages, most notably C++ and Python. This all contributes to the flexibility and extensibility of modeling in Caffe.\n"
  },
  {
    "path": "caffe-fpn/docs/tutorial/solver.md",
    "content": "---\ntitle: Solver / Model Optimization\n---\n# Solver\n\nThe solver orchestrates model optimization by coordinating the network's forward inference and backward gradients to form parameter updates that attempt to improve the loss.\nThe responsibilities of learning are divided between the Solver for overseeing the optimization and generating parameter updates and the Net for yielding loss and gradients.\n\nThe Caffe solvers are:\n\n- Stochastic Gradient Descent (`type: \"SGD\"`),\n- AdaDelta (`type: \"AdaDelta\"`),\n- Adaptive Gradient (`type: \"AdaGrad\"`),\n- Adam (`type: \"Adam\"`),\n- Nesterov's Accelerated Gradient (`type: \"Nesterov\"`) and\n- RMSprop (`type: \"RMSProp\"`)\n\nThe solver\n\n1. scaffolds the optimization bookkeeping and creates the training network for learning and test network(s) for evaluation.\n2. iteratively optimizes by calling forward / backward and updating parameters\n3. (periodically) evaluates the test networks\n4. snapshots the model and solver state throughout the optimization\n\nwhere each iteration\n\n1. calls network forward to compute the output and loss\n2. calls network backward to compute the gradients\n3. incorporates the gradients into parameter updates according to the solver method\n4. 
updates the solver state according to learning rate, history, and method\n\nto take the weights all the way from initialization to learned model.\n\nLike Caffe models, Caffe solvers run in CPU / GPU modes.\n\n## Methods\n\nThe solver methods address the general optimization problem of loss minimization.\nFor dataset $$D$$, the optimization objective is the average loss over all $$|D|$$ data instances throughout the dataset\n\n$$L(W) = \\frac{1}{|D|} \\sum_i^{|D|} f_W\\left(X^{(i)}\\right) + \\lambda r(W)$$\n\nwhere $$f_W\\left(X^{(i)}\\right)$$ is the loss on data instance $$X^{(i)}$$ and $$r(W)$$ is a regularization term with weight $$\\lambda$$.\n$$|D|$$ can be very large, so in practice, in each solver iteration we use a stochastic approximation of this objective, drawing a mini-batch of $$N \\ll |D|$$ instances:\n\n$$L(W) \\approx \\frac{1}{N} \\sum_i^N f_W\\left(X^{(i)}\\right) + \\lambda r(W)$$\n\nThe model computes $$f_W$$ in the forward pass and the gradient $$\\nabla f_W$$ in the backward pass.\n\nThe parameter update $$\\Delta W$$ is formed by the solver from the error gradient $$\\nabla f_W$$, the regularization gradient $$\\nabla r(W)$$, and other particulars of each method.\n\n### SGD\n\n**Stochastic gradient descent** (`type: \"SGD\"`) updates the weights $$ W $$ by a linear combination of the negative gradient $$ \\nabla L(W) $$ and the previous weight update $$ V_t $$.\nThe **learning rate** $$ \\alpha $$ is the weight of the negative gradient.\nThe **momentum** $$ \\mu $$ is the weight of the previous update.\n\nFormally, we have the following formulas to compute the update value $$ V_{t+1} $$ and the updated weights $$ W_{t+1} $$ at iteration $$ t+1 $$, given the previous weight update $$ V_t $$ and current weights $$ W_t $$:\n\n$$\nV_{t+1} = \\mu V_t - \\alpha \\nabla L(W_t)\n$$\n\n$$\nW_{t+1} = W_t + V_{t+1}\n$$\n\nThe learning \"hyperparameters\" ($$\\alpha$$ and $$\\mu$$) might require a bit of tuning for best results.\nIf you're not sure where 
to start, take a look at the \"Rules of thumb\" below, and for further information you might refer to Leon Bottou's [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) [1].\n\n[1] L. Bottou.\n    [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf).\n    *Neural Networks: Tricks of the Trade*: Springer, 2012.\n\n#### Rules of thumb for setting the learning rate $$ \\alpha $$ and momentum $$ \\mu $$\n\nA good strategy for deep learning with SGD is to initialize the learning rate $$ \\alpha $$ to a value around $$ \\alpha \\approx 0.01 = 10^{-2} $$, and to drop it by a constant factor (e.g., 10) throughout training when the loss begins to reach an apparent \"plateau\", repeating this several times.\nGenerally, you probably want to use a momentum $$ \\mu = 0.9 $$ or similar value.\nBy smoothing the weight updates across iterations, momentum tends to make deep learning with SGD both stabler and faster.\n\nThis was the strategy used by Krizhevsky et al. 
[1] in their famously winning CNN entry to the ILSVRC-2012 competition, and Caffe makes this strategy easy to implement in a `SolverParameter`, as in our reproduction of [1] at `./examples/imagenet/alexnet_solver.prototxt`.\n\nTo use a learning rate policy like this, you can put the following lines somewhere in your solver prototxt file:\n\n    base_lr: 0.01     # begin training at a learning rate of 0.01 = 1e-2\n\n    lr_policy: \"step\" # learning rate policy: drop the learning rate in \"steps\"\n                      # by a factor of gamma every stepsize iterations\n\n    gamma: 0.1        # drop the learning rate by a factor of 10\n                      # (i.e., multiply it by a factor of gamma = 0.1)\n\n    stepsize: 100000  # drop the learning rate every 100K iterations\n\n    max_iter: 350000  # train for 350K iterations total\n\n    momentum: 0.9\n\nUnder the above settings, we'll always use `momentum` $$ \\mu = 0.9 $$.\nWe'll begin training at a `base_lr` of $$ \\alpha = 0.01 = 10^{-2} $$ for the first 100,000 iterations, then multiply the learning rate by `gamma` ($$ \\gamma $$) and train at $$ \\alpha' = \\alpha \\gamma = (0.01) (0.1) = 0.001 = 10^{-3} $$ for iterations 100K-200K, then at $$ \\alpha'' = 10^{-4} $$ for iterations 200K-300K, and finally train until iteration 350K (since we have `max_iter: 350000`) at $$ \\alpha''' = 10^{-5} $$.\n\nNote that the momentum setting $$ \\mu $$ effectively multiplies the size of your updates by a factor of $$ \\frac{1}{1 - \\mu} $$ after many iterations of training, so if you increase $$ \\mu $$, it may be a good idea to **decrease** $$ \\alpha $$ accordingly (and vice versa).\n\nFor example, with $$ \\mu = 0.9 $$, we have an effective update size multiplier of $$ \\frac{1}{1 - 0.9} = 10 $$.\nIf we increased the momentum to $$ \\mu = 0.99 $$, we've increased our update size multiplier to 100, so we should drop $$ \\alpha $$ (`base_lr`) by a factor of 10.\n\nNote also that the above settings are merely 
guidelines, and they're definitely not guaranteed to be optimal (or even work at all!) in every situation.\nIf learning diverges (e.g., you start to see very large or `NaN` or `inf` loss values or outputs), try dropping the `base_lr` (e.g., `base_lr: 0.001`) and re-training, repeating this until you find a `base_lr` value that works.\n\n[1] A. Krizhevsky, I. Sutskever, and G. Hinton.\n    [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).\n    *Advances in Neural Information Processing Systems*, 2012.\n\n### AdaDelta\n\nThe **AdaDelta** (`type: \"AdaDelta\"`) method (M. Zeiler [1]) is a \"robust learning rate method\". It is a gradient-based optimization method (like SGD). The update formulas are\n\n$$\n\\begin{align}\n(v_t)_i &= \\frac{\\operatorname{RMS}((v_{t-1})_i)}{\\operatorname{RMS}\\left( \\nabla L(W_t) \\right)_{i}} \\left( \\nabla L(W_t) \\right)_i\n\\\\\n\\operatorname{RMS}\\left( \\nabla L(W_t) \\right)_{i} &= \\sqrt{E[g^2]_t + \\varepsilon}\n\\\\\nE[g^2]_t &= \\delta E[g^2]_{t-1} + (1-\\delta)g_{t}^2\n\\end{align}\n$$\n\nand\n\n$$\n(W_{t+1})_i =\n(W_t)_i - \\alpha\n(v_t)_i.\n$$\n\n[1] M. Zeiler.\n    [ADADELTA: An Adaptive Learning Rate Method](http://arxiv.org/pdf/1212.5701.pdf).\n    *arXiv preprint*, 2012.\n\n### AdaGrad\n\nThe **adaptive gradient** (`type: \"AdaGrad\"`) method (Duchi et al. 
[1]) is a gradient-based optimization method (like SGD) that attempts to \"find needles in haystacks in the form of very predictive but rarely seen features,\" in Duchi et al.'s words.\nGiven the update information from all previous iterations $$ \\left( \\nabla L(W) \\right)_{t'} $$ for $$ t' \\in \\{1, 2, ..., t\\} $$,\nthe update formulas proposed by [1] are as follows, specified for each component $$i$$ of the weights $$W$$:\n\n$$\n(W_{t+1})_i =\n(W_t)_i - \\alpha\n\\frac{\\left( \\nabla L(W_t) \\right)_{i}}{\n    \\sqrt{\\sum_{t'=1}^{t} \\left( \\nabla L(W_{t'}) \\right)_i^2}\n}\n$$\n\nNote that in practice, for weights $$ W \\in \\mathcal{R}^d $$, AdaGrad implementations (including the one in Caffe) use only $$ \\mathcal{O}(d) $$ extra storage for the historical gradient information (rather than the $$ \\mathcal{O}(dt) $$ storage that would be necessary to store each historical gradient individually).\n\n[1] J. Duchi, E. Hazan, and Y. Singer.\n    [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf).\n    *The Journal of Machine Learning Research*, 2011.\n\n### Adam\n\nThe **Adam** method (`type: \"Adam\"`), proposed by Kingma et al. [1], is a gradient-based optimization method (like SGD). It includes \"adaptive moment estimation\" ($$m_t, v_t$$) and can be regarded as a generalization of AdaGrad. The update formulas are\n\n$$\n(m_t)_i = \\beta_1 (m_{t-1})_i + (1-\\beta_1)(\\nabla L(W_t))_i,\\\\\n(v_t)_i = \\beta_2 (v_{t-1})_i + (1-\\beta_2)(\\nabla L(W_t))_i^2\n$$\n\nand\n\n$$\n(W_{t+1})_i =\n(W_t)_i - \\alpha \\frac{\\sqrt{1-\\beta_2^t}}{1-\\beta_1^t}\\frac{(m_t)_i}{\\sqrt{(v_t)_i}+\\varepsilon}.\n$$\n\nKingma et al. [1] proposed to use $$\\beta_1 = 0.9, \\beta_2 = 0.999, \\varepsilon = 10^{-8}$$ as default values. Caffe uses the values of `momentum, momentum2, delta` for $$\\beta_1, \\beta_2, \\varepsilon$$, respectively.\n\n[1] D. Kingma, J. 
Ba.\n    [Adam: A Method for Stochastic Optimization](http://arxiv.org/abs/1412.6980).\n    *International Conference for Learning Representations*, 2015.\n\n### NAG\n\n**Nesterov's accelerated gradient** (`type: \"Nesterov\"`) was proposed by Nesterov [1] as an \"optimal\" method of convex optimization, achieving a convergence rate of $$ \\mathcal{O}(1/t^2) $$ rather than $$ \\mathcal{O}(1/t) $$.\nThough the required assumptions to achieve the $$ \\mathcal{O}(1/t^2) $$ convergence typically will not hold for deep networks trained with Caffe (e.g., due to non-smoothness and non-convexity), in practice NAG can be a very effective method for optimizing certain types of deep learning architectures, as demonstrated for deep MNIST autoencoders by Sutskever et al. [2].\n\nThe weight update formulas look very similar to the SGD updates given above:\n\n$$\nV_{t+1} = \\mu V_t - \\alpha \\nabla L(W_t + \\mu V_t)\n$$\n\n$$\nW_{t+1} = W_t + V_{t+1}\n$$\n\nWhat distinguishes the method from SGD is the weight setting $$ W $$ on which we compute the error gradient $$ \\nabla L(W) $$ -- in NAG we take the gradient on weights with added momentum $$ \\nabla L(W_t + \\mu V_t) $$; in SGD we simply take the gradient $$ \\nabla L(W_t) $$ on the current weights themselves.\n\n[1] Y. Nesterov.\n    A Method of Solving a Convex Programming Problem with Convergence Rate $$\\mathcal{O}(1/k^2)$$.\n    *Soviet Mathematics Doklady*, 1983.\n\n[2] I. Sutskever, J. Martens, G. Dahl, and G. Hinton.\n    [On the Importance of Initialization and Momentum in Deep Learning](http://www.cs.toronto.edu/~fritz/absps/momentum.pdf).\n    *Proceedings of the 30th International Conference on Machine Learning*, 2013.\n\n### RMSprop\n\nThe **RMSprop** method (`type: \"RMSProp\"`), suggested by Tieleman in a Coursera course lecture, is a gradient-based optimization method (like SGD). 
The update formulas are\n\n$$\n(v_t)_i =\n\\begin{cases}\n(v_{t-1})_i + \\delta, &(\\nabla L(W_t))_i(\\nabla L(W_{t-1}))_i > 0\\\\\n(v_{t-1})_i \\cdot (1-\\delta), & \\text{else}\n\\end{cases}\n$$\n\n$$\n(W_{t+1})_i = (W_t)_i - \\alpha (v_t)_i.\n$$\n\nIf the gradient oscillates (i.e., its sign changes between consecutive iterations), the step size $$(v_t)_i$$ is scaled down by a factor of $$1-\\delta$$; otherwise it is increased by $$\\delta$$. The default value of $$\\delta$$ (`rms_decay`) is set to $$\\delta = 0.02$$.\n\n[1] T. Tieleman, and G. Hinton.\n    [RMSProp: Divide the gradient by a running average of its recent magnitude](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).\n    *COURSERA: Neural Networks for Machine Learning. Technical report*, 2012.\n\n## Scaffolding\n\nThe solver scaffolding prepares the optimization method and initializes the model to be learned in `Solver::Presolve()`.\n\n    > caffe train -solver examples/mnist/lenet_solver.prototxt\n    I0902 13:35:56.474978 16020 caffe.cpp:90] Starting Optimization\n    I0902 13:35:56.475190 16020 solver.cpp:32] Initializing solver from parameters:\n    test_iter: 100\n    test_interval: 500\n    base_lr: 0.01\n    display: 100\n    max_iter: 10000\n    lr_policy: \"inv\"\n    gamma: 0.0001\n    power: 0.75\n    momentum: 0.9\n    weight_decay: 0.0005\n    snapshot: 5000\n    snapshot_prefix: \"examples/mnist/lenet\"\n    solver_mode: GPU\n    net: \"examples/mnist/lenet_train_test.prototxt\"\n\nNet initialization\n\n    I0902 13:35:56.655681 16020 solver.cpp:72] Creating training net from net file: examples/mnist/lenet_train_test.prototxt\n    [...]\n    I0902 13:35:56.656740 16020 net.cpp:56] Memory required for data: 0\n    I0902 13:35:56.656791 16020 net.cpp:67] Creating Layer mnist\n    I0902 13:35:56.656811 16020 net.cpp:356] mnist -> data\n    I0902 13:35:56.656846 16020 net.cpp:356] mnist -> label\n    I0902 13:35:56.656874 16020 net.cpp:96] Setting up mnist\n    I0902 13:35:56.694052 16020 data_layer.cpp:135] Opening 
lmdb examples/mnist/mnist_train_lmdb\n    I0902 13:35:56.701062 16020 data_layer.cpp:195] output data size: 64,1,28,28\n    I0902 13:35:56.701146 16020 data_layer.cpp:236] Initializing prefetch\n    I0902 13:35:56.701196 16020 data_layer.cpp:238] Prefetch initialized.\n    I0902 13:35:56.701212 16020 net.cpp:103] Top shape: 64 1 28 28 (50176)\n    I0902 13:35:56.701230 16020 net.cpp:103] Top shape: 64 1 1 1 (64)\n    [...]\n    I0902 13:35:56.703737 16020 net.cpp:67] Creating Layer ip1\n    I0902 13:35:56.703753 16020 net.cpp:394] ip1 <- pool2\n    I0902 13:35:56.703778 16020 net.cpp:356] ip1 -> ip1\n    I0902 13:35:56.703797 16020 net.cpp:96] Setting up ip1\n    I0902 13:35:56.728127 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)\n    I0902 13:35:56.728142 16020 net.cpp:113] Memory required for data: 5039360\n    I0902 13:35:56.728175 16020 net.cpp:67] Creating Layer relu1\n    I0902 13:35:56.728194 16020 net.cpp:394] relu1 <- ip1\n    I0902 13:35:56.728219 16020 net.cpp:345] relu1 -> ip1 (in-place)\n    I0902 13:35:56.728240 16020 net.cpp:96] Setting up relu1\n    I0902 13:35:56.728256 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)\n    I0902 13:35:56.728270 16020 net.cpp:113] Memory required for data: 5167360\n    I0902 13:35:56.728287 16020 net.cpp:67] Creating Layer ip2\n    I0902 13:35:56.728304 16020 net.cpp:394] ip2 <- ip1\n    I0902 13:35:56.728333 16020 net.cpp:356] ip2 -> ip2\n    I0902 13:35:56.728356 16020 net.cpp:96] Setting up ip2\n    I0902 13:35:56.728690 16020 net.cpp:103] Top shape: 64 10 1 1 (640)\n    I0902 13:35:56.728705 16020 net.cpp:113] Memory required for data: 5169920\n    I0902 13:35:56.728734 16020 net.cpp:67] Creating Layer loss\n    I0902 13:35:56.728747 16020 net.cpp:394] loss <- ip2\n    I0902 13:35:56.728767 16020 net.cpp:394] loss <- label\n    I0902 13:35:56.728786 16020 net.cpp:356] loss -> loss\n    I0902 13:35:56.728811 16020 net.cpp:96] Setting up loss\n    I0902 13:35:56.728837 16020 net.cpp:103] Top shape: 1 1 1 1 
(1)\n    I0902 13:35:56.728849 16020 net.cpp:109]     with loss weight 1\n    I0902 13:35:56.728878 16020 net.cpp:113] Memory required for data: 5169924\n\nLoss\n\n    I0902 13:35:56.728893 16020 net.cpp:170] loss needs backward computation.\n    I0902 13:35:56.728909 16020 net.cpp:170] ip2 needs backward computation.\n    I0902 13:35:56.728924 16020 net.cpp:170] relu1 needs backward computation.\n    I0902 13:35:56.728938 16020 net.cpp:170] ip1 needs backward computation.\n    I0902 13:35:56.728953 16020 net.cpp:170] pool2 needs backward computation.\n    I0902 13:35:56.728970 16020 net.cpp:170] conv2 needs backward computation.\n    I0902 13:35:56.728984 16020 net.cpp:170] pool1 needs backward computation.\n    I0902 13:35:56.728998 16020 net.cpp:170] conv1 needs backward computation.\n    I0902 13:35:56.729014 16020 net.cpp:172] mnist does not need backward computation.\n    I0902 13:35:56.729027 16020 net.cpp:208] This network produces output loss\n    I0902 13:35:56.729053 16020 net.cpp:467] Collecting Learning Rate and Weight Decay.\n    I0902 13:35:56.729071 16020 net.cpp:219] Network initialization done.\n    I0902 13:35:56.729085 16020 net.cpp:220] Memory required for data: 5169924\n    I0902 13:35:56.729277 16020 solver.cpp:156] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt\n\nCompletion\n\n    I0902 13:35:56.806970 16020 solver.cpp:46] Solver scaffolding done.\n    I0902 13:35:56.806984 16020 solver.cpp:165] Solving LeNet\n\n\n## Updating Parameters\n\nThe actual weight update is made by the solver then applied to the net parameters in `Solver::ComputeUpdateValue()`.\nThe `ComputeUpdateValue` method incorporates any weight decay $$ r(W) $$ into the weight gradients (which currently just contain the error gradients) to get the final gradient with respect to each network weight.\nThen these gradients are scaled by the learning rate $$ \\alpha $$ and the update to subtract is stored in each parameter Blob's `diff` 
field.\nFinally, the `Blob::Update` method is called on each parameter blob, which performs the final update (subtracting the Blob's `diff` from its `data`).\n\n## Snapshotting and Resuming\n\nThe solver snapshots the weights and its own state during training in `Solver::Snapshot()` and `Solver::SnapshotSolverState()`.\nThe weight snapshots export the learned model while the solver snapshots allow training to be resumed from a given point.\nTraining is resumed by `Solver::Restore()` and `Solver::RestoreSolverState()`.\n\nWeights are saved without extension while solver states are saved with `.solverstate` extension.\nBoth files will have an `_iter_N` suffix for the snapshot iteration number.\n\nSnapshotting is configured by:\n\n    # The snapshot interval in iterations.\n    snapshot: 5000\n    # File path prefix for snapshotting model weights and solver state.\n    # Note: this is relative to the invocation of the `caffe` utility, not the\n    # solver definition file.\n    snapshot_prefix: \"/path/to/model\"\n    # Snapshot the diff along with the weights. This can help debugging training\n    # but takes more storage.\n    snapshot_diff: false\n    # A final snapshot is saved at the end of training unless\n    # this flag is set to false. The default is true.\n    snapshot_after_train: true\n\nin the solver definition prototxt.\n"
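The SGD update rule and the `step` learning-rate policy described in the Methods section can be sketched in a few lines of NumPy (an illustration of the formulas, not Caffe's actual solver code):

```python
import numpy as np

def step_lr(base_lr, gamma, stepsize, it):
    """lr_policy "step": multiply base_lr by gamma every stepsize iterations."""
    return base_lr * gamma ** (it // stepsize)

def sgd_update(W, V, grad, lr, momentum):
    """V_{t+1} = mu * V_t - alpha * grad;  W_{t+1} = W_t + V_{t+1}."""
    V_next = momentum * V - lr * grad
    return W + V_next, V_next

# learning-rate schedule from the example solver settings
# (base_lr: 0.01, gamma: 0.1, stepsize: 100000)
lrs = [step_lr(0.01, 0.1, 100000, it) for it in (0, 100000, 200000, 300000)]

# one SGD step on toy weights and a toy gradient
W = np.array([1.0, -1.0])
V = np.zeros_like(W)
W, V = sgd_update(W, V, grad=np.array([0.5, -0.5]), lr=0.01, momentum=0.9)
```

The schedule reproduces the $$10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}$$ progression worked through in the SGD section.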
  },
  {
    "path": "caffe-fpn/examples/00-classification.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Instant Recognition with Caffe\\n\",\n    \"\\n\",\n    \"In this example we'll classify an image with the bundled CaffeNet model based on the network architecture of Krizhevsky et al. for ImageNet. We'll compare CPU and GPU operation then reach into the model to inspect features and the output.\\n\",\n    \"\\n\",\n    \"(These feature visualizations follow the DeCAF visualizations originally by Yangqing Jia.)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"First, import required modules, set plotting parameters, and run `./scripts/download_model_binary.py models/bvlc_reference_caffenet` to get the pretrained CaffeNet model if it hasn't already been fetched.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"# Make sure that caffe is on the python path:\\n\",\n    \"caffe_root = '../'  # this file is expected to be in {caffe_root}/examples\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, caffe_root + 'python')\\n\",\n    \"\\n\",\n    \"import caffe\\n\",\n    \"\\n\",\n    \"plt.rcParams['figure.figsize'] = (10, 10)\\n\",\n    \"plt.rcParams['image.interpolation'] = 'nearest'\\n\",\n    \"plt.rcParams['image.cmap'] = 'gray'\\n\",\n    \"\\n\",\n    \"import os\\n\",\n    \"if not os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):\\n\",\n    \"    print(\\\"Downloading pre-trained CaffeNet model...\\\")\\n\",\n    \"    !../scripts/download_model_binary.py ../models/bvlc_reference_caffenet\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Set Caffe to 
CPU mode, load the net in the test phase for inference, and configure input preprocessing.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"caffe.set_mode_cpu()\\n\",\n    \"net = caffe.Net(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',\\n\",\n    \"                caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',\\n\",\n    \"                caffe.TEST)\\n\",\n    \"\\n\",\n    \"# input preprocessing: 'data' is the name of the input blob == net.inputs[0]\\n\",\n    \"transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})\\n\",\n    \"transformer.set_transpose('data', (2,0,1))\\n\",\n    \"transformer.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1)) # mean pixel\\n\",\n    \"transformer.set_raw_scale('data', 255)  # the reference model operates on images in [0,255] range instead of [0,1]\\n\",\n    \"transformer.set_channel_swap('data', (2,1,0))  # the reference model has channels in BGR order instead of RGB\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's start with a simple classification. We'll set a batch of 50 to demonstrate batch processing, even though we'll only be classifying one image. 
(Note that the batch size can also be changed on-the-fly.)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# set net to batch size of 50\\n\",\n    \"net.blobs['data'].reshape(50,3,227,227)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Feed in the image (with some preprocessing) and classify with a forward pass.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Predicted class is #281.\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/cat.jpg'))\\n\",\n    \"out = net.forward()\\n\",\n    \"print(\\\"Predicted class is #{}.\\\".format(out['prob'][0].argmax()))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"What did the input look like?\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.image.AxesImage at 0x7f665b02ae90>\"\n      ]\n     },\n     \"execution_count\": 7,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvVmsbdt2HdRnvaq99ynuue/ed1/pV9i4iJPYDk5M4gQ7\\n\",\n       \"sp0gSIICMiYSIQLxgfiAD0Dwg5QIIb744yeKkq/ABxLIwrISMDikwibGicv4xe/Z7913i3NPufde\\n\",\n       \"xSz5GK2N3uady8/KVm5OYkb/OHudseaac4wxxxhzjtZbbz2bpsmSJUuWLFmyZMmS/eNb/qorkCxZ\\n\",\n       
\"smTJkiVL9s+rpRepZMmSJUuWLFmyO1p6kUqWLFmyZMmSJbujpRepZMmSJUuWLFmyO1p6kUqWLFmy\\n\",\n       \"ZMmSJbujpRepZMmSJUuWLFmyO9pH8iKVZdkPZ1n2K1mW/VqWZf/pR3GNZMmSJUuWLFmyV23ZP2kd\\n\",\n       \"qSzLCjP7VTP7QTN728x+xsx+dJqmX/4neqFkyZIlS5YsWbJXbB8FIvX7zOxL0zR9ZZqmzsz+qpn9\\n\",\n       \"ax/BdZIlS5YsWbJkyV6pfRQvUm+Z2Vfl/19DWbJkyZIlS5Ys2e8oKz+Cc/62vsIsy1JemmTJkiVL\\n\",\n       \"lizZPzc2TVN2rvyjeJF628w+Kf//pAVUambVxc6ay62ZmdWXO9vcvzIzs1E4W3keALNzb11Z4e3J\\n\",\n       \"yiL8tvQjx2KanSMch98UZ06Yh++yTM6Lz7l5Wd/3oWwYYtnUn8Lxo1+/5PfT6Ocb8Vu8R1ZyS3q0\\n\",\n       \"ezI/Xn/r1USdpJ/GcWSFF8eZnC9Hn5EXF39nZtMU6tt1XnY8hPr2Qx/LBrRLR9NkHcq8r4sidHJZ\\n\",\n       \"1vby7YNdvrWO96JufNitVnU4DvekWa3id3VVhbJ17WV1+FyX1aKskHvdd6FO+9t9LHv+8mBmZi9v\\n\",\n       \"bmLZ6Rjadjodw7UaP2/G8aTjJQv9M2pZMeAwvyf1KpynkhnG9o+j9yeqacdTOMcwnfx43K8x87FW\\n\",\n       \"TKGtgw6NAfdz8ErlebhwkQ9SFv42lfcneyzPUNHe2zD04frsIzOztg3na09+XNdyPE12++7Btm+s\\n\",\n       \"rapD2Wrl/VmuwvnqrXdKHudp+Nt3PrKmAceNfjyHvc61yPOczgDsMlA5n3Xcx1uRsc0yr3E/y8Lb\\n\",\n       \"wGV0Kvy4gT+evMxPKKtXG647jX4+joVm42XNJtyf9S6Ube/5/dpdbMIxK++Tug7HDTIoiiL0RXmx\\n\",\n       \"jmVVGc6zu/Q51uD7Qep+i/lxc3NrZmZP3n0av3v/nWdmZvbe157ai1+7tasvbC0/oF8LvXdYz3qv\\n\",\n       \"Z9dyTZT7xMvKuJvQjhx9k2XaLq4rft4C85Rjycwsx1qfyzzNePMyeSZgTLRy/SwP/ZlV4bi89nat\\n\",\n       \"LsIJq42feLUNn7c7v0/NCt9PfhzHna6x/TFcd//S59j187AGjIdw/ZtnBz/HKfx2kHPwmVkIPjEa\\n\",\n       \"nydShjkmzbcC3ajPyd2j8Fy+euN+LPvgVx7b57/vM1ZjQeu4cJk/EyYZ6y8wdt771cexrLLQr/oq\\n\",\n       \"wnk/FfKcyrhOyvOM9UNbtxu5JyuM9dzLiqJF+5ZrzWTyPMN8P7Z+/WEM53v5uLPrx+Fe5HVlb/+8\\n\",\n       \"z4MP20fxIvWzZvaFLMs+Y2ZfN7N/08x+9MMHNVcXdvGJj30El0+WLFmyZMmSJbu7XTxq7OJRY2Zm\\n\",\n       \"1cX2n+6L1DRNfZZl/6GZ/aSFvfxfTBF7yZIlS5YsWbLfifZRIFI2TdNPmNlPfKNjVlc7U6bUBNhT\\n\",\n       \"1RgmHKBlWYQgBUbGAdnMtcUPctGc54gXWJxD3WNERSepaAWsWF0LPc6TictgwI+LM/ISuRwV25Wx\\n\",\n       \"3gJPG1022ikfqq+ZZWiQlhHS1eb48Us3Jv1XhcCjFaDtSd0o8VrugoqnkWqOcEsOw2jltrBhGCNU\\n\",\n       
\"m4mriuehS41uQjO/13krx9M9uhI3FqD/KRcYGz8Zte4o1Pa0La8H94S4bPPoCRB3L13Acq3YZ7P+\\n\",\n       \"BDwtJdNEyHoJ948Dz+vfZXAL5NqvqHsm49/d4YMcx8Huv6VLtarE3Yyx28KNK91vHdxsvSPh1sIF\\n\",\n       \"2fntj27GPM+tvqgty/LojlE3QhzbOp/jmEDBpD7TM+Max49yHKH6PNe5w75eusfVxnx+/aKS/s/j\\n\",\n       \"pPTz0s0jbqyCa5f4gOlaGUdxAVa4dzNaQCg7iaukexbOd7wJ7omX1+7aefh6aNfFlbvsNsHbZ6ut\\n\",\n       \"u+w2cNltH+ykLLhsVuI+r9Zh3rWDXz+vwzihq+7UtvG72za4bO5PF1blpe3eXFt7G+q7f+H1HA7s\\n\",\n       \"u1hkBdaTvtN7EsqyMwsV+78SNz77NctlDOMapXAlSMNVd1+BeZfJoIxjN9MxhjmJcV82MtYyXktc\\n\",\n       \"q/icK90E9csmnc90I/r971u6++RZEMduKMtl/E0Z56RPykgZMDHUJZPJxqVF3Z10AVe1uODR3lpo\\n\",\n       \"Dq9/7jWrm8oqnLeuffyxLuqyrA9hzNQy1iZQAMRTbgU9oLJQ5fQ36qMYt4zrZCv0CDYnr/zEOfp/\\n\",\n       \"VDduzu/kWuXyWj2pEkrLqL7xOvLKlM2bq4tXdelk/5StufhI3teT/TNo9a767Q9K9jvCdm+uf/uD\\n\",\n       \"kv2OsNc+ff+3P+j/x/bKnnBTXszIyUQCct3CkkOtiEBGwrQiB4UeHo7D30GIitylZni9ne2BCEgp\\n\",\n       \"+MOdcK67hfC5EmL7Gjum063v6kh2nu2+sWMmUXw0bdfASkYjEDXNCJPYkfWyIyNyN7vYEiXRnZiZ\\n\",\n       \"EzfNnDCoOzi+rI+jkyi5SxrNd6nTtNzpRcREkQOcWncu/HoiMpTJDu5Icr6SQ7Fbbr1dHU9cyHgC\\n\",\n       \"cqHk5XEiefoYy0jQjURkQT8qnk+6rQRikUk/EX2sZAfHxuruZ8S1FCTh9ePObNJzcFfr05QIiwa+\\n\",\n       \"xjkh03mcQJ6vhLy/CfesrBs/DghThWu0rZLISSyXHfSJbfFqxhs7Qx+4S9fxBNRzNvPQ7yC2T6Of\\n\",\n       \"hAEykxJ2LZNf4QwgByt03XM3r8cRiZVpUscAFB4p8wq777yUOoFkPiNW457klZJ9UQ8h5Y817p0g\\n\",\n       \"fPw8HmVO7DF2D+hsB3ps7MIP2vZeLOvuh3u8uhSU6j4QqavLWHZ5FdCpfOvnq0pwQHqFCYCSDOHC\\n\",\n       \"+1shwu/CeS+EnL1fh7m7anydeP5BQK7aG1knunCefNA1AX0iaxfnWJxDsv5xLSilr7OItORyHNCP\\n\",\n       \"2W8jduFlJGUL6t8BnStBzi91DWUdCyEx58vnCdfTIlc0KxxRCJpVVAzo8X6KgBWHhJCuR/R7rmu9\\n\",\n       \"Pyi9nlh3lNjtiJ2gqfhNJqjb1YMwth4+8DHGwAdOHkUQix4IltzXI1DUeiszsA5trXIJHomPiTNl\\n\",\n       \"57w5BQPQ/FoEx/NZYNcyUCfDdTN9TuA3xbqU32JM6uAp5fMZS7n2kiVLlixZsmTJ7mjpRSpZsmTJ\\n\",\n       \"kiVLluyO9urIK1k2c9nlJACrfy7qHsnP6II7h7QpZk+i8hm3IG0GHJ4jO55hUVOraBqFnInK3Bfe\\n\",\n       
\"V9sF18rpRtxIdMfB3VjMCMO9fhWuEV0b6jKAuyFTNxbPK7/leeRVeYrupuX7M91yucCuBdwyVeVt\\n\",\n       \"pe5NJ+4JQs+5wNjxEjNdMBTZsu6Ec4de9EkAwZ5Ooq2EIdvPXItwt8j1ed5eNYMwxlSDJmuPaGu4\\n\",\n       \"1mrlbq+6gXugFngeLt1BIOOSAQgyhiZqQJn6cRYf3ANry3sdv5wN64KN8WvB9aXjuQZUvxEqYgO9\\n\",\n       \"m0yDHODmjrJnMrG6nhpbSxL/jByKG1uKC4wM+VzhcZxmEDdKHgnlPJffG7pHZvPalm6UGKhiS5vk\\n\",\n       \"nrA9lZJSS7oA0P8yhopySdgl8VzdfdRsU69sZnT3SfvZbnGLZh1cSzMfZPhzgms1H9xldrgG2T97\\n\",\n       \"HsvGMtzkq0FuNq5bbb1SxRoaTCs/H3unkH7PDvO1s5TAjmaDOdlpAA7738uuqsCAv33sv715L7h7\\n\",\n       \"skFc0BkDBcQtBddiEdcVXX857sUVWHBM+Jwoz7iAC2oWyXFxiGnwALX6uNbL/RpicIgXVnAt5+ru\\n\",\n       \"4vnELce5Xei1MI5qcfe1iPiIwVYyYOIaKg07E08Ug4IKcYFGbS1Zfxswv+vtxstW4TP1+czMNuvg\\n\",\n       \"0s3i8uN9yPVZ164S2k7rra+nw4nadkvX2kzmklp9+iz0hoVzScCQgQ5waP05MaCPy1L6n8EDM7oL\\n\",\n       \"6iTrVB2DLfTZda6X3RIilSxZsmTJkiVLdkd7ZYhUkRc2DcsQTpUa4FtqMUNQ+L0QQFk2nnlrVJTm\\n\",\n       \"Q+S1GYJgy3BV7oQVOSKhrcxlVxvr7CjJGsrDTeZk3/0xqGwP7VIJPIskW1HxRf0GbQR3RDP5gyVy\\n\",\n       \"RmK17qZIUPad25KcOI+hRVv1rZ7H9U5snTqqnZ/ZpQuzlt8r6jCV87DSQSUkwLrsBf05TmFXq/ek\\n\",\n       \"sbBzGmXnShLrOBtj4Te7C9+5V5AEIPqhhNlmzbhqr9+EwINON0TcQSlywuop2RztySTUmCHZhBBV\\n\",\n       \"rsFjHASlwX0f9N5lYdzlog58CTXszT3fpY0gtmaDs415vQH36dj7eU8HhoErARzXnKkIA5HSsOqI\\n\",\n       \"puk9WaIE/nU2Oyb8J//wQd7qMzIlWuSSIF7G8OhKUMeeqvxn0D/ujGfZBmK7ZWIxglr6n6r4gxxX\\n\",\n       \"8t6KxITlod/HWa+gPxEUQIQoVDico2uPUrTBaWXusOtkTnS4x2XhaxIDIHqVZGD4N87XK4LZAEFa\\n\",\n       \"C/qQL8PFG6wZU+F9TUTw8J7PyRLrnpKyiYSXGVF6Cc2vziBSGDOFwPkRYZfxxEsosXygTIJX3TKi\\n\",\n       \"jUTJJdS+ZFCGQPId1vNxRizHeWXtivIvci2iU4WssVUTrt+d6BHw2h0PGK/5cvxrn0znsmIQOZZ6\\n\",\n       \"EglralG7LymT4mthhbK8Ivonaz3mfyuSOA2CGNaCfrZELidFxFnfJZ6sS0F83gFW0jkZEflB0Ofo\\n\",\n       \"EVk+YxWRc/q9lCErQ6UZDRIilSxZsmTJkiVL9tFYepFKlixZsmTJkiW7o706svk0RJeAmUXIfBCI\\n\",\n       \"mWDaXB8qm39p6qpSbRcqC8slPgTPKZwYtahUiwLnK4XYTS9jIZBtXVLtXMnOwd2iCrwNCMB9hiSj\\n\",\n       
\"onBLz8IoiUdJqJupmJ+h1E7xuxkWuvjtQGgZeiIKD0fkVFybVYP2C4x/CddXq2rXexDQW1Elpzrv\\n\",\n       \"oH0S/s4ctVEVewl7x2SYmbYB8GzpblQqoJeq4wR3QyXJXYuGV/arXEFUJ3aTuGeohD5KWQd9plzg\\n\",\n       \"/p59nXsZE2hr8IDBZaAK+BPJoOz3Tnsn9Ke6NqMnVFyiUdG3keTGV+H76lLaT9e3aEVdF4TK4bIQ\\n\",\n       \"xeSYMUDdWJFYL2OXlxACdtTlkhs6LKduVG0/N65ZNhbuHrA4r5cK9LP5H8nr/tOoWj6L6Mhm9Z15\\n\",\n       \"DOHGmeR+FXA3Fbm7MSboyOXinumobqNq1yTqqrYR+1NcEAPmZwFtqUF9HND02u7cZbaFtlNR+73u\\n\",\n       \"EbTRiSp5uQ/XOogLhuvd/saTex9vA5H9gITf3cmTfJuF8c9ADDOzHv2prt2hhrundBGszT2QnU9+\\n\",\n       \"YHsDV6HohxUlXXqYL5p4G31dDBrEQbe49xMT2Oq9Y+CBlnF5lsdOdNXnVPgW1xqfE7p202W1lfuf\\n\",\n       \"YT4NMp+oXzSKBtcIjkApdAdSKfoMpHN5JnGdVGI7PcazYCoU5qKjx7mQSwBCBmV7VYUfqIslYzfH\\n\",\n       \"ul9x7siYzKC3NYg+KwMaaiWbT2E8qQuOuuTq7mNWhtLO3Ke4Jp2hG6gGIeOvVFkel52tCeh2iT+K\\n\",\n       \"97iSAT2eCUZTS4hUsmTJkiVLlizZHe3Vkc2z+R405lDScGmWnfl9ppIAZ8imfGGeEctIwI4FZxCZ\\n\",\n       \"cY5/hXPJmzl3Xxrqz6NV7ZobEa3nwB0md9Wy06VcQ+a7Be4qZ+1HG8bpTD1tUTRrIwl7LVSkq1m4\\n\",\n       \"OpXNvahhuPTaz7G9wg5G3tZ7tOvmpSNSpxMUe0/aJxPKBPUYuOtcEosZRKBq11QdbmULUXOXakJK\\n\",\n       \"xfdKlCcCqqGu/My/OoZuDmEnvu9uY1lO5GDQXSLIoZOwiKlirArYRD00nJpdwUAAIQdnZ8ihzEW4\\n\",\n       \"2wrZH9u1R49cxXqoHpuZSx6Yma3Rjy8mRx8+1nyTmZl99cX74VyT7r5JRNeRhZ2u9CvVy2c7PaLN\\n\",\n       \"M7ItCbi60+NvsdOW8ecq5hIuz4COmdr0ktjOwIpS5QfQt1nm96kE2sVfDoOiVUvCLnP95ZMQ1pHF\\n\",\n       \"oBfpjojO6RQrl+HvzHGXraXfhwrtQgCMIt1ExHd+PNPyKHLLOXncC3KL3flw60T1AyQWDi8ddTqd\\n\",\n       \"noW/bUCT2tGPzyM52vtmxZtW+b2mdIPGklCdoZE1oQd5uGpFfgF/i2yJVsSnxizbBMaJoiT4q+sk\\n\",\n       \"0WwFWGfBDbEM4x6k9EJy/VUrrDUbQe6xdqisRkQ6ZQAQnBokr97pGD4fj94nx+Nc7XyuiLKsrwcR\\n\",\n       \"zbRuwrX0GQMkaNLkmRnQXjltUVDZWxA+QDcFkSut08DsCOJ9ATrarCWLAjwWvRD14/2ZIUfnnvh4\\n\",\n       \"dg5nvjJWSYMNeD4ZgHF+yNypiciZlBEBlwCM3/qyHzpjsmTJkiVLlixZsn8sSy9SyZIlS5YsWbJk\\n\",\n       \"d7RX5trLimyGnEd1lnOaMTOb5N/5b+dkU5Lyzp1lCdTR7TXTswAsqrIvVEVVN55BvVp/OzFBrWCR\\n\",\n       
\"PSDVHgTQbKb7AXK0uqyg7bHfO7TO883hXhLlpYVwQSq0S2IrSZm5/ICeGJEOsRJk882FJEO9CBjo\\n\",\n       \"Skjcfpxf6/oamjHXXvcWLr1WIOPjAf0DUrreLaqoqwv0HOwb3cLqgsX9r0QxeL0O/an3yV16gL1V\\n\",\n       \"sRgkUw02IMdUx64r2yhRG24JdVXDzaWkWLrAMtyvTPqfSUgVzd6AHFoI2biGts7rD69i2Yr6PRei\\n\",\n       \"twXXw+34Fa/7kW4+uGzEPdVFIra4sXG6QtzCMbez3hpqMM36Gvdipu1Tzn48Q+55b4blfFUlbLrj\\n\",\n       \"dT7F02pARfRKLF2VXk11D9LdrEmboeIs7mby3tXdGbWtJLm5lUzMLi6TD9MNzCyD66fnyBJXTF6H\\n\",\n       \"41f3fKAMOK8Gj3Q4x2HvxPLjMdxbzbbw8jrc/8PLZ34NJGYukGS2KySh7hTc3FkjbaXLtJRgB7jZ\\n\",\n       \"JU7D6ns4Tu5/B7fl9KSX3zJ4B641dW0zA4Sf1jXjZs+OKCTkde94/aUbaRgkowLV83NSAbyvV9uw\\n\",\n       \"hjRXvq6smKBbgk2cjK7XQrCRPhPwsW3FBd5zPFN3z9s65kt3c6R26PMn6mMJtQI8gmnv5yAZvGxc\\n\",\n       \"R6qKauhC1Gdy92b5ysCgqHwWFoZgFwkoG0BYHyVQxx9aMp/o7ZP2tHhmjnFK6FoLzTh5no1wM+tc\\n\",\n       \"Lxg0IOt5UfB55u2PSah17dA6n7GESCVLlixZsmTJkt3RXhkiNeRC3DWLW8JCURr8VaSBkaMzVXJK\\n\",\n       \"B0hINJEAfUvN45s7fydvq9x9z9iufDX23UqPd89OwlVdUVrVVpeE8pjrLubQO0eEc8tB6L6850rc\\n\",\n       \"p2PYYepOcxipdqs7NxdF8OYQ/Qj9dOplZ4JdpSrclqtQtlp7vzaQGNjthNiNn1S138+yQQ47c+Rk\\n\",\n       \"XzCcXInqoR0jN72y1awBjw2yWyPJuJNdHWUShkFy8gHZUzkH3otCwpktjh2oEws5ejqzzaAavyqQ\\n\",\n       \"t1Bb7zMncRdZaHcvCdgq7Eg1d1jcsjOvpOaGxHG17OpWl6FP1jJzqxVkKuS83/XF7zEzsw+OX41l\\n\",\n       \"77wIYe2vr1+LZY9vQp/16P+xW+7WM7kW0TmdJgTdIuJkJloXsiNlnkj58cDzZdwFahAHyNGqWM2w\\n\",\n       \"dpGWd4TjTADGbJfOmy3XYP6xidX2nScR1KyT6+NjJ+TgsSaCI1eHOrKqzUdlbZWpWFESQ/qpCGMx\\n\",\n       \"HzH+BRErkMMuV6UPEK8Hyf95QKBE13lZC2Lz/rlLEpxeXIfvOp87xQqSBJQYEamRsQIyICHsOQDr\\n\",\n       \"lcmuHujUai3XR469URDG8hbnuZWgoBNJ3pQrEPQBa9Ygj648qtgvA5Bq6X+CTqpKXqAjx8E7lE1b\\n\",\n       \"I9jGZuhr+LyWm70G0jIVKn8S2t+evO+4yK1WlRSBlK+kePzt6CWZEdZPs/aZCTquaH05f9aZeYBW\\n\",\n       \"KR6BU0+U3O/TZEESJtOceDHGZul9odr4IM/uzkiUlzytE8tEEoL5BOVZFNcOVdtnRg/8VfmJCS8F\\n\",\n       \"qljPvJYaFEVCu+ZaLRlsJOsOg7B6OV+Vf+NXpYRIJUuWLFmyZMmS3dHSi1SyZMmSJUuWLNkd7RUq\\n\",\n       
\"m48fSjJM2WMpG+ZuDzNx953R1sjOUNDPiBiLns2SHDoI7FhNhPiWsONJiY1UrBYYmYTB6ZwLCoTe\\n\",\n       \"GRF2YoLUpRZOLfBkcy/ArkqifgFV4q5TfRD25/JdmVBxVigRHnUU1d2MWkySZLeGa68WsvnuKrix\\n\",\n       \"RB4l6o0M4ioaoEfTi6uOaus9DquF7R61gGpJfNnBtSBQeAeYl9+FNsLdIrB8eSZpZXRRAIoflZzZ\\n\",\n       \"E7JWFXMoRg8KhaM9tbpqAwSvQQlZtom/iMYxgHrmKnsNkm/ZiO4V3KeTzNxPvv66mZndW/t9+vrT\\n\",\n       \"r5mZ2R/9jj8Zy37j7Z8xM7Mv11+PZc/e/lJoI7SFykZ0X0YQq1Vbim48GbvUFtLpxLE9zAIAqOLt\\n\",\n       \"x5HQHSF4JdZirmmfUCtmnrQ7XsE+bLP5NOUfOt5s6Ofn605yDiRrVtXxOVEd5yUBVtwjUVFaEj5H\\n\",\n       \"+SoZE3T9FVopunLhYstK1UfDIStVlg5/1T03Yf3rxd29h87b6bmXtfvg5pt6J6AXTThuBc24TCgT\\n\",\n       \"zNSQibstgws2r9UFhjaKZ4seyr73+1nvMO9ulgRsg7tt5tpCXepZ0Al1lNyGmNxdA0Wwxiq1AN93\\n\",\n       \"0sbqIsyBuNaJPhTX8Jk+Vc9k0H6tqmFgj2avCDU8nWSc4Ny16FJNQ7gX7YEkag3BoLtT3Zi4J0qi\\n\",\n       \"ZwYAeQCS2mJn5u4obvFxWrbx1IO2gLGoZG+6jwdxt3ONH4alGy87p98oz93o+lUGRMGAqrnuXDgM\\n\",\n       \"93qm7E7CvqydJV174tonsVwuFoOxNFPDGb0xtYRIJUuWLFmyZMmS3dFeGSI1WT4j1vJtWXcQGQhz\\n\",\n       \"MwVwEgtn8rTc1elOI7xhaqhpfKvl26q+GUeUyk/L3cRsRwo06ajhl0BfcqnTSMSil9BhKNXGCFbZ\\n\",\n       \"mlf5Uk2VobO57GArKMbW9c6PQl6j21snkR6O4Vqt7BJIxou7Snlb5wZyf/Cd6cUYlLJ1F14j/LUU\\n\",\n       \"JeYCiNWq3sayLZSXu+79WHaEsu2p905+sHsQroFdUC/E3n5kbiqv09QhJ9zB++lEdWiBxMa4Tfdr\\n\",\n       \"Maw3E9SrqEhKx/UFVDq2Yee+P7iy+R5lp8HrlJ0Ja+cOJp+pkgcriyUBMtKlBaUdOV5lu1OjvgI+\\n\",\n       \"2Q7t+eLFZ2LZw9feMjOz73jj22LZ937L95qZ2c//3N+JZT/3f/55MzP7rm/6ZjMz+3u/9Hb8rkA+\\n\",\n       \"tZPmUATqUAgRNYvjWHKisYmZI1xTzOEliATaU2H3r4RV7qqHVtGnbPbXzIR5uxzrMwV09JkikjGf\\n\",\n       \"JUPOhcRaor4z+Q0rFmWxfrJOjJizKhMQyeZCSiZneZIgB+62uQnW4IgMqKcGAFDZvpWMAadjGJ8H\\n\",\n       \"kU55+TysCYfHHhRRYT2tFTklSgF0upIcfkRnS3l01OU9MzPbNj7/DfnybjNfkwjscm0yMys2WLvl\\n\",\n       \"p1EK4BbBBrImEJ0aFBEDEXgGvuM/ipJnZ9AkSpbMwhTwHCkwJnMZr1Sb7+TElEmZLR450SQv4vNE\\n\",\n       \"nzFERHLN5wfUp0LAQq/yN/RcLBU8ZpIgOdraSw69EuhXId4MPgK60e9Ji+fUoZV1F9IZo4UBq4Rt\\n\",\n       
\"IjfjSYnlJMq7RTmReUJL/JG5Oy4lfjjfSqJEyivnM17nesYAjOXzvxHUvYwoleQfxHl6IcoPs4wn\\n\",\n       \"S0uIVLJkyZIlS5Ys2R0tvUglS5YsWbJkyZLd0V6dsnnXmb7HRU7mjJwa/qreUoRvFdm3M5pRxVKz\\n\",\n       \"h1oU0esgUCyhxWymRXWGsI4uG8W1dwSkWQg8Sqh0bB3aHgEjF2j3eiPJWKGxUUrSYqqcZ4JZT/is\\n\",\n       \"CTKvoM7aCCn7Bm6+VuDWA0iB9NRMM90rEPbEFbi/CQfef00I4FRAn0HGASrdiGsvh7L2G6/7ceyK\\n\",\n       \"XIZdA7dgBl/FUXRXujb019E9a2YZ1JZbL+zhKugEgieJMhcC+g3u91pg+dJIygTZWNp/6F6amdmp\\n\",\n       \"Vy2c8FlhZ2qQzDRbou6IuGViHk25n9SPwq0o1WWGc8zU+THs1luB1qGFs27ux7Lv/+IfDue71QTR\\n\",\n       \"wS35Xd/zp2LZf3ATLvxTf+Mnzczszaun8bv3X5Cc7dfq8yPqpNfH+cVVbGeSUNOVXcjYabbV7HAV\\n\",\n       \"bh64KKi0W0/YX+Y6fSUaaxH7VYMH4CoUN9rgcufhd0Iw7XlTpP/HSHYXtXHcz0ncyBNI5Jqg26By\\n\",\n       \"Xq1m+v2hiTMF/HHeLrlWkQetJnU3dXDFZKLt1B/COL1+4W684+Nw746ibD0ykfdKaBF0pTHpgLrR\\n\",\n       \"sD5tV04tuH/xyMzMNhtfJ/ImnLep3I3ybAhjay9jku5riUmxkgrgGAyjruHdkhw9UVldVawxd3Tu\\n\",\n       \"ZtEt6NfixOslyOYA5ffNLrRHPZY5zquuXdZFA2XoxtN4qpFJmM8kIVdvW8cEADhdrlpIpHvIEPLP\\n\",\n       \"Mv6wrjaitl9CfK6Ue02trmLmvqS225JgzcTcSqJnt7e9BzHEhMuzeC64NoWU74Fc8tzP6FL3cUJX\\n\",\n       \"ch910SSICJSNcdDAJupdLTUQVcV81VC9XvoJ90d1Fsfk2kuWLFmyZMmSJfto7NUhUlk2Q5pc7fsb\\n\",\n       \"K5tn8Tt51T0TmTjFUFdFs/Cmy7dgIZbm/kM5Ht+dkUnQPDwjZAfGXslp2DnLlmQAyZqXzzJ54waq\\n\",\n       \"lMtuge3W3QJzA03yqs88bbWEye8Qat/WjsiU2Op02CUfTr5bZbtG6cybmwAhXV87+rW7xI60F5SK\\n\",\n       \"irFK4s5fcv9LAAAgAElEQVSpgO753x49gpxDc+2XJRKI6za972A7oFPD9DKWEbDS+8rdz3iSewJ4\\n\",\n       \"opV+2uM3B9m5GNTbiURMRyGx4362svsi2VPJliQ2ZqLsXaL9paoiEyVTQjnGBBGUfNCxjvMJJMoc\\n\",\n       \"ig+2rk5+QP63N7Yf97a+GxC7r7z967Hs+btBEqHe+o7sh/7t/8LMzL7zO/9lMzP7r//Cv+91y4Fq\\n\",\n       \"SlDEe8/fC/XU6UfgRObziAM0bJik1Hrt46SsMXYwN4dBSde2KDu3JkTUS4ti+LPbAORghgd9aCs5\\n\",\n       \"Lad6RDzCSc6EcMdrynlj4VLiRYM3ZgrRsDwinAxrX651ubRsIJos69nhFMqOtyIJArRV5SQYnj7W\\n\",\n       \"kgEAY5DobDNDC8K9u9o+jGWP7gGR2m1iWVaF821WnpWhB8J4+9IDUKwJ9TvVIjES0cyl+4GoUyHS\\n\",\n       
\"7hlQnUJI8eUGSLcEtrQ3uIYih3HdE5QY7b+9gRL5PSUsM2BCgy3m+TrNXJR7praOQaHK6jH/6kyp\\n\",\n       \"H7IvKDopWgJSelYux99MQQP9P0OfkKexbJYyDSY5IZnxg+tqqDTmLnIoKkpM2Z22XZLiB0GEO0+i\\n\",\n       \"5z8+k+2AQWgqJk5kN84xBXD5WwHJKjyL1ht/nvD+NI2o0mMtKmy5nihy2ZqjbecsIVLJkiVLlixZ\\n\",\n       \"smR3tFeGSI39MM/MHHOjCfeBoc7yahp9nmdQIhW1GydKJ2g4NX47UixT0Cru+oSkURAtUfE/yjQI\\n\",\n       \"IhL5CrrTG4m0CJrB9GPgILRSt1MW3uZXInQ5NrzGMtZVdz/cHHWCiPFNX0Ptm3GeE6yU0PQjOR2t\\n\",\n       \"70IY1v3sifO8NhcBzbi8JzudCyIcvkvkznUjQnP374UwaeVjHA4QBEQfV4IWHZDPrDxobrSlcGmP\\n\",\n       \"HVHnGyLrICaZi587R/vbyneEK/LQjJnZdWdG9UMdk/CpS7gsRT8LCXUnr22GpuKjCsJ1uF6807K1\\n\",\n       \"4U53lHHaAUG4rJy4cYEcZ/ene7Hs5//e3zIzs9Pe0b810Mzb0/NY9hP//X9mZmZ/8E//R2Zm9mf+\\n\",\n       \"7H8Zv/sL/+1/HK6/c/Tr6c1j1FN25Phbag69nOHP0nfoss3ax10dQ5HJPRM+0jleQgyX9qIo0qeo\\n\",\n       \"Ar7X8GfyoQZFvcHXclkV5V4Vi3MQkZrtqVkn5c2wKqpmQtRdOZcRic8Wx3HsKPrplC4NNceapOK/\\n\",\n       \"4BcOwpHsWyBcg+/IOe9mSwfmfYVNeCHAxNVrQa7kYu2I1GYVyu5dOEePqEfJRHxmdroMJ7q5FkmY\\n\",\n       \"Q+BNVZeCXOwhRApBSuVDZZS60VB/IBi7S+Fobonm+/UPDyDd8MT7af8yNLIURKgASpND/mMQZIZ8\\n\",\n       \"wRmqyXVFZDqitI0cyDrPtUQjnuRFlERAuzQ16ASkrTiH6uhag3W/Wgua2BD9Vo4mryl9TMqhtNvp\\n\",\n       \"v+Tv6VybZr/T5uic4PNWxynnuziCYs7OYVg+4/hbXRPjOWRe0TtSC2+Y3Cgt2zQrnMN/y2tkCkll\\n\",\n       \"OkGWlhCpZMmSJUuWLFmyO1p6kUqWLFmyZMmSJbujvTLXXj5kMyXakXmixBXFcOrMNISVpFypOpSl\\n\",\n       \"B1WsjcLGCkECsiM8KSTeEsdpaGZBeHYW1U2fgcCocOn1pnni4CqT9pQjQjgBBZeCpx8PUD2/cf9U\\n\",\n       \"RRVxCZcuQfKeKlFW9qt6RY19okRpqvjSnSQh7HDtVbW0iyrut17P62cIoX4o13oNauNHcaPB21cL\\n\",\n       \"sW97CVdJKS4IunSoLJ+724fv+VnuIfl0d4wS636iOrXkdaKMtN7/1Toct9kKoZ9K0Tiv3v+xw5jQ\\n\",\n       \"XGe4d4O4Gwq4pypxGU7wYxUCmVOxt9dxym7kZcUV2SGceNVc+vHds3D9zu/J7/rE95iZ2bO3XZW8\\n\",\n       \"exYI+oVC2/Ulru/938IF9Pd//C+Zmdnv/3f/q/jdD//AnzAzsx//u/9bLCvXDL9XFyijMoQojnxq\\n\",\n       \"o/Q/UzbWjbjKK6jSwwXQCOk5EoElKIMwfjEjbFOSQP1oqIckJXRvm7rRQimj35VEH68wLuukfoSe\\n\",\n       
\"65NQEDg+SskrN5xQP+f6Wz7ALVGIqw4SAwxN12ADsgEkMtwGuMP7k6wJR5BzT7JOwt09mvjA0Svd\\n\",\n       \"3vuTAQA9pD4aCY5YgRewufIgkgevB9fvunE3GsnQvUg3HNvgArzcuLv5eBn6pJq8Tvk+BMG8ONyE\\n\",\n       \"6++1w+iL8vpWcHeVPk2s3sGNI1SJqgtlm5205zb0ycsX7sbKseDXaKvm2uQzppC8dvtujzJ1rcEt\\n\",\n       \"K6T4KF2T6TihW0zWThC/Yz5JlZqJ41WeXainkrN5ywrN0wkC+ihzt6CrUrKC8DkxyNiNBPQoEyHj\\n\",\n       \"la5vWbvKuP7K8wcuu7EV+QuMRZVYGeg+10gQ5l0lFUOCCIYJ7w7S1zn6bLPy58kWbrxS7h1J6Roo\\n\",\n       \"M8BtuZb15Hhcri1qCZFKlixZsmTJkiW7o70yRMrsNCOYxU2ivBlzV6MEdBLaRlPCGoS+5HTMBD1K\\n\",\n       \"RnbuYkoSTGeZ4SGMJsy+CCbpboF5mEqF08ZF2diHt+NWUKfiQxIPSqbrgebkLz3M8jr2hVyrCd8X\\n\",\n       \"ghyRqK8M2AiEtNJPH9o5aP9P2OKOQrbOgDAoD/awD+25ufYd5H4f6nRV+Y50IqFw9N3kxTp8P0g4\\n\",\n       \"PXe9LYQzCxEaZU4uomWhUuHPoGKqaPjuwq+fQ/ahEWL7FjvRWhC+HIgUpSlGgR9jrr9ROxZhwEKs\\n\",\n       \"jmH9tVyfIezSd7lR/E92PxQ9RBs0h1mOm9iPTvZ/HSTf6aWf483LsNP/ypd+MZbVGMer2vv/9hiQ\\n\",\n       \"gJ2EqW+aEJ7+wfN3zczsnb/xP8XvfuzH/nMzM3v7K/9vLPtaGxCxSna1lDpQbmYEMWSQrTbYEUro\\n\",\n       \"Ngm4bL+KenJeD73OdWqH+LUYLj3T/mQ9ZuoCmHdSkkWZBNz/mYAvjxRU+0ywx9n9KAmwEuqe9dyR\\n\",\n       \"C8JEkVDVXAQiwWVH0YooiSLrZEcSr8pqAEWZ7aOZ6kxVImGDrqdAycaWRGDNjRgI3Zcia3LvXhh/\\n\",\n       \"u42PqyMiP3pZfzfrLf7upAyIxOTzfo0cn0X3xMzMrkWRdwI6qONvtQvzrlr5OWrkH9VQ//VFqF8n\\n\",\n       \"BPxyE77fXMq6cwPkrmdwjqIfkFqZhLBPCYO5SnQok3xtFPMsZI5H9Ecgxg7rQ07Su+QazBqKNIsg\\n\",\n       \"LMZHJgRwik8Wpc4nRjvJ+geUSMn7bMY5hRFKkWS6hvcM9hFyOsZ9oQEjQKdUkDeLY8vLTh3z+qnE\\n\",\n       \"BOYEhXYVkebaLST2Ckj8SjwiF7sw/vQ5QY+FkufLkhNP8kkW3xhzSohUsmTJkiVLlizZHS29SCVL\\n\",\n       \"lixZsmTJkt3RXp2OVNfN4LQIJwq0P5CoqCqy/L1A0Tnh9kmPIyvTocWKej+AQpmjx8ysjGRjccVV\\n\",\n       \"1OdYwqNFpu5GqO2qawNwY945tEgwlrU8ii+iO4TPJ8lDZSDiKRLfbcL3662T6Kh8XuRLyFhzDU09\\n\",\n       \"NThIzlZ1cOYw82ux2Xqb9vvgZnrxwuH2p0/gMtq4jtFAXFhUeYs6nHzVeN3bTYBZ22sSYaVOqHsr\\n\",\n       \"+fJaEGYLuXfbXajoVjSrqKy83Tlk32zhKlRCM90ozO8lbozszD4j9nWphEUSNr1ddB+XQlQnQbmQ\\n\",\n       
\"MTZAn4T5zcpK4exwjmMvisHwVX3qY98dy67fCbpQjfy2BGm/EMViKqXnhY/Jrg998hD50r7+lS/F\\n\",\n       \"7y4/+6tmZvYjf+zPxbKf/iv/Sah3LgRg1PNGBioV2Eshm5P4qtC+pwmDe1C14EjO73ROwBWoWlAx\\n\",\n       \"UEF/u9S2YTaERnSsBmYlOKNZFdcn8c8ObI9mQKBmjhBWoy6Vuu8ZT7P2PqFUTy2q3MxJRgLujNrA\\n\",\n       \"YId+xjYP35meA/0v5x1Asp0k12TUL5u5auCC6aJCmLeroD6Pu/EauI83Wy+jEPSpc6rCugnfV7XP\\n\",\n       \"ic1qjTaKLhvcna+9HsbkePS5dv32C9RDiMWoXr2WoBDM8Wbj47TEulNfyjw9hrm1v/F65nDl0SuZ\\n\",\n       \"idspwxo2SmBFQde2rF0ci8OwdONmOk6Y61DmSQm3JbWSCqEi0I2VZbr+yBrD62N9ys+4pOaZQkgz\\n\",\n       \"8TqdjqEvNBcrLxdpJPId1fEHdWOfCQBiftpeRcsw7Gpx9w/VMp8mn/xknuTyXGHUmmYbYQ7XC9EW\\n\",\n       \"2yCjA92eZtJ36pXFRO3lfEpHOGcJkUqWLFmyZMmSJbujvTqy+WQ2yM6IJLJMEZSl6Gt8m9ZdQgwT\\n\",\n       \"lV1V3JHJbwucu+Hb+kx1GfnqBK3grkJJ6cxxpfnvPEu17BxaSAIIUbQzhsQvVZR5vnYvpEOESx9e\\n\",\n       \"ito4Qpy7gyBn2HXKJi0iG6rUzU1ETDWouQFJgJZw/bxEaGiuu+9wvufXN36t90PurDcevO6XgtRD\\n\",\n       \"lkvdKbtQaR9j143vTkIsbU8B6cpUagD92UhY7/Z+2HVMQthcI3N7vZF8hgjd7wYnb1tUj0YWes2N\\n\",\n       \"xb6Q3c+YEVVSqQvu/nQ8r3Gc7BzLUKYBEEOJfHbHUJZLWzmed5PvqjcgUTa1ZFBHuxWRYFiv7qOY\\n\",\n       \"6XwUhC9OMrQ/k93tV37hp83M7Lv/yJ+NZd/2v37ezMx+uf3AT4EBVYn8BeWwNdegg3OCpsTd8TLX\\n\",\n       \"XkeS7bn8m7nuqpWpze/D30GQpjWQqI8B6Qh1Cn323ntBsf3m5kX8biTCrXn1zqo4M9hDhUiwS9Zc\\n\",\n       \"g1S2FqV0qu1r4EkFZM9z7gllHAEQcxFpICh6HBa+ZiNjF/O5l5D04w3Gnfy0xbpTrEO7dltHmquG\\n\",\n       \"Ocx8TPJeKErIbA8qP9Lh3tal5DoDmjXM1liQjSHhcPXAj8/3gYje3QoBHUiIdKGtsf4pmtNskVdN\\n\",\n       \"5nPDnHwqcdIElKqDUvmk4fpRkkP7Onxfq0eAwROaqiCiKoJSAVkvL7w/md2hQfu7M8Epo5DYYwCC\\n\",\n       \"BMDEZ6J6aTBOFRGLY+cgkjS4nye5RpRRYLtniuXhWu1B+glS6Jke1y2RJt4JXXenmAJCEC4S9UcG\\n\",\n       \"kWkAGNZwGWvrXZjrm60TyysikrXk32OOVbk+JSHGg6LUs6iVhSVEKlmyZMmSJUuW7I6WXqSSJUuW\\n\",\n       \"LFmyZMnuaK9O2XwaZ2CZ670IPE4tFtUMIrFzRlR3CjqNhL1MIWNCmrhYKUlmK+gJVeILpDpqqWRD\\n\",\n       \"ktJn3LMlKXSCHk0mpGS627IskNkyUfFuS+hYSbuo2XQ6irIukmsOoi0ywn2mOlb8rKQ8uhZIzu2V\\n\",\n       
\"sEovirgHo1bJzN0KGPfoBOjnz4K20DtP3o9lb7z+ppmZ3Y5O4mQyVOq+hDqAbN2ecF4/vuuWiWwz\\n\",\n       \"3NeLKz/HFi6bXOD2ch1+W4oGzEQF/ExdUCQPU09kSSLOxWU8UuFX+qQmZC7QPl3Aeb6cYprcN+qd\\n\",\n       \"1HTFCukVg2wjeiZrENofbi5i2e07QcV8Iy6TOO7EfZtRFVjGaQNtLxJM1zJfSqhdP//yz8WyP/0n\\n\",\n       \"/j0zM/tv/sc/H8v2uGzWyljDPFoLsbiB21Dd8tEDgQ+daCzRtZLL/R9c7tzblS/heTajEmX3q/uh\\n\",\n       \"z3Y7d6PstsEtfIEx+f7jx/G7998PivrdsBwTMxpstkyuyyNydcLhNJNMiRH3fZRAgR5RK5FSkKnb\\n\",\n       \"gXpbkkjXGGzi11pvQrt1ngxd+Nzt/bcvq+Aiu3nulRq4v0bXFaLFtNlsZ20282Tpg6iY00V7Ovl5\\n\",\n       \"j3DbayL3NbSfWl2n0J7mAcaLuFg6JCMeWnftDS1cgSdZ6++F39aN32sS+nWdnOiClHX6sA+/3d9C\\n\",\n       \"C+skRGi0a+ZMpmtfE/nC9aQk/ujElr6rSuo4SZAVvq+3oR67Qsn+fNaouw9BRKO6xcO9oJs2lC2T\\n\",\n       \"BrPOev0eNAO9x0PGayzXEJLMlaoT1z8RcutYvzMBTTp3OLV16YxakeVyXvF5tpPnyvYirJONBCBU\\n\",\n       \"oO3kSi3BfR9Fq45eQ02M3EomiXOWEKlkyZIlS5YsWbI72quTPxjm4ZW5MYeQEszwnqchnHyblTD1\\n\",\n       \"KYIKuiPFzqFcolncEZeSr47hjY2EZhcIw8xm8BPfiL3r4g5bw6TRnkJIjAe8sVcgnedCYixx/Kjy\\n\",\n       \"C9iZ7oXEtwI5UjjMdntACO+tvuqH8zWCsJFIPIDsrMhID+SoNkfJCCYpiY/tn+QN/gRF2994/51Y\\n\",\n       \"9uZr983M7Cgqwhm2uJlvJq3H9rwDAfpw3Pv1+7nqtJkjhpu193+DfFpV4f3EMF0NWz1N4dwKZnL3\\n\",\n       \"EZuYKYKH3Z+oCJNsr2TPHmMhG2T3QxKjSmLwP6KTscI9KeulOjXVq7eVt3WbhV3Xg8aJ/UMWUJRR\\n\",\n       \"dk0ZULpJ5k6N8zy79UCBqcOOHMc9ffrE6/bG58zM7PGv/T+x7Pt/OCBSf/F/+e9i2ddbxok7iZ+7\\n\",\n       \"v5WEGhMI0PDrMc5jKMwLh5c7XF0nONlnsgb8WkFqIFE7IZteXAVF7VqI0muQaHebHeroJz4gdP/p\\n\",\n       \"Y8n1OFGJXa4Vd+cqvxJlyWMZZUeUp2wnrBO1oJ7YJUNg3ErZQTvqoBE4CKJZK/oX7v+9tSegW9dB\\n\",\n       \"jVymrr14NwR0fCV3JI6o6Po+dvUitXL/EormsiZQnkLRp7YN8/nmxsfaLcZd23muvTxfEsB7TFBK\\n\",\n       \"iFzd9zYcX+K4k5+jP4VrDXsNngl1yjPvE/eBiJeg5P33McGcdCQiHw/ergPyAA4SxEN0RGVqSJgu\\n\",\n       \"ZaEmclSvJAMCc8Yq6hnRMQY7ef9TVqQQqRW6R2oJFGE2gP3R5+TzZ1h4ZTpRnmIldeLzVCU2pjho\\n\",\n       \"AyKpaw2DfCYhrI+QdalFJiXHOirKDTFAR7FcIoaz4AnmyY3jzn9xsQ6yGlsZp1f3Q3tK6aZqReka\\n\",\n       
\"WZPQ17P8p1RKF0L7OE+RsLCESCVLlixZsmTJkt3R0otUsmTJkiVLlizZHe2VufaKfJjB+ETxNPFi\\n\",\n       \"JJjNMr8CFzyjhTEnoIJYpgRktLZkAkTlHBdUNhfCHKUzZu4ealDNlVw+fBzlqHppJJtBpfReEm+S\\n\",\n       \"AN21klAScOJrV67jUsNloUkmL6fgRru9dp/Zs2cBRj8I3F0QWqX7QnmwJFGLxk8OCLTvlezKxLsC\\n\",\n       \"RfdLDagbqHHXQuLkTR7sJEUg1B8CBH1qvb4kmythu4IbT3Vs6PlSYjFJ45m4TIx9O8mwH5kYGK41\\n\",\n       \"IQITzs7FPRg9zzJ2OwYDrMXdBx0xdQHnuN/qbibZkxCzZd6GAvdie+WK0Z8fvjU05dorShdgLm5p\\n\",\n       \"4z0TePo53CwPHn48lr18Flw6K7oHBOJ/cRu0ot543RPUvvi1nzUzs3/lu38olv3lv/sTZmZ2I/O0\\n\",\n       \"gktFvJKeeFTcQvwJYydOQo6f4FqvNUEv5tgkUHt0Ac80c0L/7C587lzCBbAuvVIlPq/gCtX1597z\\n\",\n       \"4O57+czdSCf0a6WJZ+HGKMU9NWByqVYcx7Fwgi0fSJSXxSjnWsRktLLWYf3R5OKcY3Xp4+TqXtDK\\n\",\n       \"iq44M9uUwUVW5T7GHu3CcRtJZP3BixA00uxCfe8/eM2vheMKCcpgMFB38ACU6xtkQHj6MpY9gdv4\\n\",\n       \"1Lm7b7MKAQCFEKoruJlbrhej3//dJtynw9rr28GlNLzw65dXCOhZP49lF8U91N3J9hN1uSQoIsoY\\n\",\n       \"IbCHf83MctBHZq6eM16fki51cRXTzaVUCbreNfCo5v1hMmJdr/KlayvqU6krCt9r0t6HD8Nvb26c\\n\",\n       \"PmFYsrv+jKtOyeYd3Jxw+46iYt4dWaa0BAQR1X7eCi7TtQQANAgAmK1d+NiqKjr7GwuwsFKiFtnF\\n\",\n       \"hbuAL/F5I4m0qWKuGRCoX1nIGDuC+qKuRQ1QOGcJkUqWLFmyZMmSJbujvTqy+TjNCIuMgxaQJubT\\n\",\n       \"U86ZIV+Uhl9SWXmGZmHbN2bLncM0Uh3dLYa4z1QVlirGM0VjvxiaoArYOF5zzWFn7S/fIlcwLeu0\\n\",\n       \"AurS1KLOChJprYQ5lLUPOvltQBree9vJw0R9MhB8B1VdBvG+VwI8diFKDhypziz92tTczfuOaN+G\\n\",\n       \"neB6e99/jJvbTpI7D4RGok99K7tFRn9LBSghkQvSVTTLe0dSZiZblxbjRNvN/GtDD9KvLcmhSliN\\n\",\n       \"QhtCxOyQME3DiiuQnFVOIZL7Z8gJdoxQAs4rR5qYk/Ak5PA3Ph/Qgewf+O6bIdS6I2X4tQ5/qnhP\\n\",\n       \"gjCOI4McgILI9voK5M133/cxtL0IUhd/7Pv/jVj2V//mj5uZ2U5Jsdi5a0h+zFkp/R9VsYH+FULE\\n\",\n       \"L+NEEUQAE7DXUGuSbKWtVEruB58T9YrKxpWUISgBfXO58u9eexTQnMdPhYjdH/DXr8Vw/VkESBwL\\n\",\n       \"S6mHOVGe6Phs8OIURDD1HMtxXaJfN2vf6V/dC0jPw/uvSdkDMzOrel/2X24CJJGvvPM2z8L3XR4a\\n\",\n       \"eXnpO30iLL1kEWhP4fPx6HP3+bMw/18+1/UHUgtHR/iYi3ADsn9oOOYpmj3IGDoSwdZlHc+E9uiF\\n\",\n       
\"Lx6jXZJrsG5C/VaSlaBeIQBG7kmFtXUAmjL03tcN+m466FzjXw3hx3ouwVMT0UR9djEoSpBorkkl\\n\",\n       \"M0tIW4uc43OZAWQmv4NKKXBGj4nmn+swaW5vJcgH407D/zlO24EyNYq+IyemJExYYe5W2rElpYP8\\n\",\n       \"nqxWoS6XFw+9jZCp6UTi43B6id+iLX5WW+H5txEvRdNQ/kDznzIDiAaKAU0V2ZkOY6ztNXhCXBVn\\n\",\n       \"LCFSyZIlS5YsWbJkd7T0IpUsWbJkyZIlS3ZHe3VJi3NPdmhmVlLFXI+ZbFkGSHNG2DxHwMtJivZf\\n\",\n       \"E46vaupEiO4FCJOq7B0TpM7ceSQbqrYQVaw1kSx1ZNQtEb6vAG0q6Y1uLyVdNqsAVdbi2lvHMoeC\\n\",\n       \"KxAVOyFFZoRRxbX4wTtBD2cPonI2g4xxXfUwsAmSDZSekkog880m1G+19bLDMVxrKpzsl8EtNwqJ\\n\",\n       \"sGfSVpK9VWF2WN5XEpVVM2wi8j6Tmw5/WnFVHqFzM4r/mDpGPd29ct5yrHCMtB/1076jBtrpqARw\\n\",\n       \"6rfI+bJlImNqlbiarihGgwC5k3t4/Ti056EQO9uRbjEhUcPdNXM3w31489IJuLvLAKnz9rcHJwe/\\n\",\n       \"eBkS+JbiRv7ab/6KmZl9xxe+PZZ97+d+t5mZ/R+//HdiWUF9IHFB0R2VmbrgMe4ZsJH7/aoxT0+m\\n\",\n       \"cw2/FXIoSaF5L4MXn0l6NvPEuKXA/esVyKZ054zuMt1CKfnNtzQZd9BK6zWzQPTjy73GmOlaCcqA\\n\",\n       \"W2aY6WKFz+qWqhokXK/O7HPp2hNaBNeRShJZc524f99dJruHwd3XTL52lOvwOZegjGyNtQNj4mrr\\n\",\n       \"c7iBu6eWwJ4BWQ6ePnEX6MubMI5ePn8Wy25uwrjbiyr56UAdOckUQIV+uDsHoQwMXEO0C9HX4+Dj\\n\",\n       \"//ZZqFMl7s7Lh1xj9Lf4K86igYVU/RYaAZvdy7zuua5oTBTdc3ILy5wJz5XuwePUV4lnVwxi0gCs\\n\",\n       \"8F2uCaLz5ThR8kK8Vgxo8b6+uIBrU/r4xQ1cdWeSmzPIaJBsIxkaXpvQTeJc8JqUeRhHTe1u3C10\\n\",\n       \"ztYS5NLw2aZ9cvkA1VgSwfOciex9XK8wrhmcZGZGr+gk6/9EVXwZFMOwLOvVb3nGEiKVLFmyZMmS\\n\",\n       \"JUt2R3t1iNSYz3Nu4Q12MN0ugFisqqMkKme6q6eKraidA+2ahMQ5MfwdJMJJEBwiUTPFWubrU5kA\\n\",\n       \"7ghnMBnI24K0zOlwOComCsQOUkhvkWwru1CSHndbf4Nn6Oh67W/wZUGZgtnFzMyRtvCZu/6wSzwc\\n\",\n       \"fLeeoS56TxhyOoiKPBs+IzFjh7XbOYlxAppw7H2HXyPUt5O3exJV2epBd7rY/SrRrwCJUXMf5SRb\\n\",\n       \"C4l9ysNvVKZhADo1CNkx7kSJiArSWcYdluy+zowrkpdn0g34PMurxZxoghz1JXOnIdejKBYf9qG+\\n\",\n       \"98z79ZOP3jIzs335ttQJBFiverzvikhyE3s8+X1vLgJK0e8hV7GS/I9tKFMEkWja4av/MJb9mT/+\\n\",\n       \"75iZ2f/9D/6mV+CSYe2CJmEY6QY6yp5A6sAk2wAHWalSEzxeBiD5vOMk6BekPpRE++ImkJzffOiS\\n\",\n       
\"CAVDoivsuDWlFqRQ1hu/X/chBXErof6UUdFdco/+LyT/4BhhUkUYGGQi5GHOhmmJnFMmQ4NtOD4V\\n\",\n       \"uaBC++q+79LvPYAqtaD0RIyz0RveAhXcARnYyhx6CCTw5pmkJ+B6JnkSW4TYH184+nk4hbJD57/N\\n\",\n       \"8jAWBfS1i3UIUCnGcP1WxiuBQA026UlKF/R5APr5/Ilf6+r10MZC8m9mZfhNmTlyRUCswNzsNDSf\\n\",\n       \"sgaKapSLIssmBm+oiwMIm4yT8Uye2IroEB5KrUgNUH4hE7SkrJnDUq6fzasbCrHGSUBHMVHZ3VHH\\n\",\n       \"Aev+rXe7HfbhP1T9NhnXnJSjidROC6Rr7RVomvDMutz5/Fs3yCggayKDi8pSvR7ILYq+OY0+r/lO\\n\",\n       \"oCryJVXMZf2ronSIjFM8NDXv7MBnizD1y+IbY04JkUqWLFmyZMmSJbujpRepZMmSJUuWLFmyO9qr\\n\",\n       \"c+2ZmUky2DGSYgV2jfC9wKjRtbR0nY2KyxPbVLHnjurhVKyW71gVIfEq8ZVGYqeqmEf9HtWbImSv\\n\",\n       \"6rAfOqwplxC/CezIBL21uACv7gd4vqnd3UMIdFB8nPo80p4T9V5A9iyFnEoXmLaL7ibtabrbVFul\\n\",\n       \"rgIsfLl1yLZcL92idMF2QmI8HkFshPtIXYbUKmFiTTN3jykB+kQ3Sy5aH1m4hnjbbAB5fNBKRa0s\\n\",\n       \"JIOW+0WPyqicy4GuGLmvcPeVZxKkqrYLlXpzVUqG2m9DfS5zl202BnfDpz7+RW/rLfSmhIg5oJ+o\\n\",\n       \"nWKm93Hp2rt/X11bYezcHMKXh5euWfX6a4GofCuJZ1vcry//0i/Gst/1nX/OzMy2evNWAbKfcnEB\\n\",\n       \"5XRtCLE5uu3hdhE3Cr8pxY3gxGMl5+L7M9tCHbvvPgmK3Z/95BteJbhZphPcE+JGy+ACKho/y/oS\\n\",\n       \"CRxpDcwAACAASURBVLK3rq1ELaBJ5m4HV1kvbpkeY+10EDcaya6Fkl1ZB7hslVh9Rm8uBiyIjhnH\\n\",\n       \"QqXEetABBhnQdHPclO6q/CTIwA9OYfx9YfvAz4GfHuU+tdCP+lVJrv4eEgl/sH8hdQ/fj0Ispzf2\\n\",\n       \"0Iq7uYZ6OlwsR0lBQd0x1aLrmZVBXLtji/l343399J1AadhcScJvBCM04haamIQ8uthEH45aROIy\\n\",\n       \"YpJp4WtHGscss8EZHSk+O9rOXVUZxsmYhXuiAVP0I1cy2Ks+rDsb1UwiZUCek6SeKImay0guz4LN\\n\",\n       \"mmR/Wbv2YW0lfWGScT1hfg6SyL0jUX8rLtMi1I9BN2ZmWQx8UL250J4YdGCu/ch3BqVMZMhMrM/J\\n\",\n       \"+J18pkt7rrcF+opELwwdE6irjlZy7SVLlixZsmTJkn0k9uoQqay3TJEm7uBnu6/qw7+Kb5CZENs8\\n\",\n       \"0FIlg0HUFBJn34YdQQcl7KKr5LtwPs3NVgB9mHIlBzL8VpAL1ECjNeOuQ3bpUb0cb+utvPEzNFw5\\n\",\n       \"pMzxVq0krBmhyLud59Aiya4VsifJrvuV73S2UNveHsKOc2/+HcRhZ2/r3P3oDpaKxpko8ZY5wvqF\\n\",\n       \"xL3F7reqRGrAwg6rk91ni50r89X1J9nVYKfbC9JIoKWT8PO4c1BECnjGSfqkQ/9rDqcCatwjSe8S\\n\",\n       
\"Vk9Zg15C07nHUUSIfT2JYnkRj9Mwdexcc0WOmllZPjrS+PB+IFi+K8TeL25qXFPCxYEIjjJ5mH8t\\n\",\n       \"k918D+TsZu/nW2Ecl8hdtjXfQb54AfkDkemgrEe9knDpX/wZMzP7k3/0X49lP/nLf8vMzNpGiPWo\\n\",\n       \"cyXq8S3uI5XoNa9hiR1kp8ReIqwzZi2DPSSHIcihpSAnR5Cdf+Px12PZW3lAlih7of3F/F+bK1Fs\\n\",\n       \"rxCwIHvQmjn0THewDMn3uh+wqz+eVBU8jO3bk8tODOiDyZgbU9aJCL4p0ohd9bRcT/uTz13+opS5\\n\",\n       \"ex9r3MPOUZpPYE5sMYbaJz5fKR1xr9O4/jB2hsKlFn7+5hfMzOyRyJ883oe6TBJQ0gOl6o6Oel5j\\n\",\n       \"PSsGyrrI+nuCXImgH0esSTOHBGREBkEYn3wtSLJcfczRxGKHPisk8GbAcwHPiXHSEyOIQObEkDNQ\\n\",\n       \"Reo5LccTg5IyGf9cuyfJp8cgnw7n0zyMJ9zPW1n/ikOoy+3Rz7HbhDY0guqQWD0JwlYjeEmDkqg+\\n\",\n       \"MPSS/w/3+DCFNSGSzqXdgwRRtAjUmG4d6dwDTT9ufEwSpaqEWB7XAM1oUjAnJZ7hsv41mH8a6xUJ\\n\",\n       \"5fLsIoql/cnnz0GY9YcDA5XOPGN+C0uIVLJkyZIlS5Ys2R3tFXKk5jKbjoRkv+UxOBAfFLqaywqY\\n\",\n       \"OTdJAvejENnYYgdZO4JBF2l3klxf5BkpSFMy+7SEWjN0flb1pX99AmITNynKRxqJYIivHm0tm2Wo\\n\",\n       \"fS1CY8wXVbRe0T14Q4WEszNMlvyJQXdQzM0mdYric5Kbq8RulTmSzMwm5p8aVlIWPsvmN/qoBw2/\\n\",\n       \"xi6iBW/keND8Rthpqk8/ylQovwznlxvFDYuimjFPU68IC+4xOTKSL4yilqPs/sjNmmV/B3KhaCr7\\n\",\n       \"kRnHzTxkO5dQ37rZ4G9An1a58wemU+jjB2vPl9a3YefeDYLWYtelyGlJQVZpf5FB6mDw3RflAVa7\\n\",\n       \"sEu/boWrgfMdRSbj9dceheMk/P/i7a+YmdmP/IE/Hsv++q/+VKhH4WNizCnm57/twQOJYJ7sNDkp\\n\",\n       \"O7lf5DKcZAzlMf+cH7daA9UQhLmCsO3zWxeJ3O0LHI/xKmKFNe7TWkQdydHsBWmuc2aVlxx+4Gso\\n\",\n       \"R2oENe148DF2fRPQwdXJ5/0B+fxOE3PSLfOqKfpIiQuVXxhx3U7m03MIse56n5SfeD/0yeaxXB/o\\n\",\n       \"0NchqjmNS6RrEAmTT33y02Zm9pk33opln34cGvvs+v1YxjkxziYK8y8Kcge+UAaIqd1LXstjqPtR\\n\",\n       \"BVEBGA2af5H5T2U48RLXgvBeYUx0M+kECKLyt4I0UzC478/xZwV9js8k+R6oSi7kXCJbhYwxCpEW\\n\",\n       \"uHcHGS8lEBlFickv62VOGhDu8p7kFWROVB2TEXVRkVDkn5Rx12DtqnG/VDonynlIXkF253Dj7XpW\\n\",\n       \"hPG8Kl3OpwQCXghyV2ZLNG/AOjblkJVRUeNhycft0MZB+KiUblGZohaekMOtSvJg3TvKcW0S5EyW\\n\",\n       \"LFmyZMmSJftILL1IJUuWLFmyZMmS3dFemWsvmyewk3L/PAIqVHIiox5zDT+PxG490VKBm6Rw8sYU\\n\",\n       
\"nmWeuqaQEHZ8Pwq0S9VVde0UsdIzHHdx/QkQfIyq1TBw5vCTJsQgaA3NHQAxCgGyKOla01xrS/V0\\n\",\n       \"yiTQfagK27xEKWTfCYWnVodJqOdu7fAsFeUHQT+HE1xm8qo+wgXYHjRPFfoYt+IkrrXudE6mIpSd\\n\",\n       \"BEau4LI0zUmFjhzUU8TgAVFq73HfhyOI9a3eMLRPHMR0AczyJOLjLP8T4PHpTE4sdQFQlX5VgZwv\\n\",\n       \"rjCqrl9diNTFBwiXlnPQjaMhvC3kHxrxAOZQNF5X7j7MQUbd7cL9PMi9ptr2cBIFflyrXnudruEe\\n\",\n       \"/IS4pT918XEzM/uqefj7yyPJs6I2PzHXINw+GjBB124r5PRb1k2DPcL5VuJGLSAroTnkCrR/EAL2\\n\",\n       \"fr9HPcJxlbiiGT1Cd7qZq/d3GihzJttBhbxfJPOa+X2/uBDFZBxXHbyN6y64YF7com90Uczogj/j\\n\",\n       \"sxJi9QnZAFp1dz0J/fnmE6EKvPuemZk9kQAQur5zEKALkWk5Ia9cIY+Or/3GV8zM7LONE8u//bVv\\n\",\n       \"NTOzX3jy6/7bnv2keRIZZCQrH2kOICxnpbox8UHqNIx8Tghhm659VYTB19cvnNh//00Q2pWp3lNR\\n\",\n       \"HucYlSCCdXpGPsa6muliRxK7rL/IT6pjYl0z2EQCn+C2rCFD0OzdPXsNKZL26Pdr3dDtpfoLDLby\\n\",\n       \"dfIS64gqwPdHStJoe0gz8RLeCyrrr2RhOWKdnubpPsJ5RZX+9mU47oXkxGNWjMn8ecI8fo1o17Av\\n\",\n       \"yhou21lgEcapzBPKqvTyAKB7TtfkAzI6nMR9eoCcjLqKh9PSlauWEKlkyZIlS5YsWbI72isV5JyF\\n\",\n       \"2sN081UwX5zm1WIWatmQcaenp+OudyaxgI8DoJNRRCUHvK0OsvvpT5A1cL525NPlsiPKYqirlkHq\\n\",\n       \"YCaJwLfvbFHfcwhdg52w9tIEpGHM/A16GMLb+iAoGQmduYiKsk9IcKwq2WnFXaC81TP/nmxNSAZf\\n\",\n       \"rXz3Wea4/gyRCn97eVdn2PfJI53tdAh1otDj2AqxFoTWQUmc2HX2s85jtnYvotJfd9L2AzmT3RcR\\n\",\n       \"rpHIlIiaUuiymImqAiWTMuYQzCRMv6D4nu60WV0NMpgouhpQoloyoxP1OsrO7HQNMriclx8VfSiA\\n\",\n       \"EvaySyWKkytPfWCmeRBsS98tt4cA/2hY8S0kBARUsBY3/vBlz//3g9/1w2Zm9pd++q/EsgqT4Sj1\\n\",\n       \"pNhgR0TwKHIVmH/DXscELywCfsxPKXkqJ6BJha5wEL0sJf/Zs+cBMdti/q+3siYMJJE7+tasAmG/\\n\",\n       \"lFxvnKDKP+aatVpptAXaKrjz5b1A8l9tvU43Ny9wOOaL9NehZzCAippinZIJUABFb5752PkE5til\\n\",\n       \"kI1PGM9rQUIzIAYt8wXKEG4ajGvJf8lccG9/5aux7OE3B0TycHJi94Rx3086Uc/s9IEi5AiyUUSK\\n\",\n       \"DPBMZFWYO66VORH1TWU9I0yla9ztbWjHWtBMX+NZICKtdkb8GO2RmBgrKWcjzSoR5LPeOiK8Joon\\n\",\n       \"p6vRjhz9dXHf19rdPvz22fOnsezmGMbEKCj9hkiXgmkI9ig33taSujfXkruuX6KeMXcfPBaFjH8S\\n\",\n       
\"wDPxHOUQBM0kdyYFTm9uHRHk+VpBRBncsRLZn9VFaHezwXop3PyoO6RBDOh5DUDpIPCqnijmcVWR\\n\",\n       \"6L6nJI+P8U6EXc9ZQqSSJUuWLFmyZMnuaOlFKlmyZMmSJUuW7I72anPt/Tbm3hsFSAH3ztx9Z/Jv\\n\",\n       \"wUZ5VyS0WODEmvOJPG3VfYnEdiFgR7kb8S1mkdC91KDKhbxOhHiMriqFh+kyknYRThWIdaIbTfwt\\n\",\n       \"VABXBeAe7pZW/G0dlMUHQMCqBJ/BB1Jq/j9qnIgrqiwA4xbibmP7hVnew0WjCszXyGu1v/a6H2/D\\n\",\n       \"+fa3wd/Xd0tXlJI9e5DBpZrWQ0hmUgI23ZzyW+pSidxSJJcPIJh23dLVMFMnxr2oayGRMsee5tAC\\n\",\n       \"jFyIYu+J1y+9AjVIrh38AlvBrDc4byHuLvrl2oP7R6mP0opvgYrlheS1mpBPaxS/3BFk68fvB72f\\n\",\n       \"hw9cnZrny4SIuwPErvpQjx6F3HW/+Y++HMu+40d+IBz/v//lWLaHAvutaBBRjZ+57o6i40a5qaMQ\\n\",\n       \"0JnrcJIAgAq58AbRvaGXdRQXDLMHHISBnHfhIiO0ikrR8crpipOAjRF6X6WMqwkXm41d5r8r/Z4w\\n\",\n       \"2EO9+HSVFeJS5TpFF+zTp+rGx7wWN06dhd9uRJ+ngLbPZ194PV9Hn51kTeD6lIm7rcM4XoMUPcla\\n\",\n       \"t0JftDIn6ipc94O9u2xes8+Gesg8KeAeGaTu5UCtPtGvg6tuD3droUroGdXR5bzoV9VWipkyhBTe\\n\",\n       \"QLJ7RkoekOtQPTcVidqhb8pMyPExUEjciFGKSQMgQhvXlauoVzUJ0+JaY/+o+xBrcYUccoM8V+od\\n\",\n       \"COiX3oabmzBmjyfXkWqgdzfJmkyX8k4Wzx7uzmYnOmLPEbwyaUYJ1oGabdJ+PB/V200NvlLcqFx2\\n\",\n       \"Blmnnj59hr+Pve7b0P6re5694xJk9A3ypa5E27DtkK9U3LMZXZBy/R7jSTOAHKC83u6970ZQKXrR\\n\",\n       \"kera5buFWkKkkiVLlixZsmTJ7mj/zCib8/9zwjhJ3LJbOaNizLfkeVbn8Hceps63fiqxyhkiYVBC\\n\",\n       \"Q0FezU4SrszcXbr7oQK5MFt5PlVFp/wBdzVFoe+xS3Kcn0yJxaF++lbN63dStj+8wF/fJe6PgfjZ\\n\",\n       \"YRc+yVt9US3zxZE8ryhZCYL6rSAiV/eCirEqFh9jeK4TJTtIIuz3BynDLhW3pmuV4EjFcGkrCavK\\n\",\n       \"gMX9F/AvkudHlUQAoVxDfXsgIMylNGoUA0yJlSRv6k6LJEbltbZAIspa0EQc14lK7vEQEKGbJvTn\\n\",\n       \"rpGw3hoSGr2GdVMuQHPoEcHwtt4w7FjGU808etJR94BAXVziHg6SwxIyAYVkhr9FffcHJ6f2uNYn\\n\",\n       \"PvlGLMuevGNmZt/3LX8olv0Pf/+vm5nZSe7xcQ/18gNlMJa5ARVB6GL+uVgU+2TWT7gZqoAfAwA0\\n\",\n       \"24BRdiG0+1R63zRABxWtGKIkh4RVo89OElbOe1EJsz8GwMzCyiHToGMcHxnE8fCBK9tPz8I5OlFi\\n\",\n       \"3jbh3lUSkv57hjDvPl85wshLaF6x3Trs+kdZDEmUpjSHon8GNFHV3ikhkcs5NlUg6BdyPxlsoznU\\n\",\n       
\"qEQyk5jB9xEImoXVc52S83J8zB4JWDsEOSmRDSKr5UAMGV2LKQvDIBs97XAGsfbfSQ69ikijBK9w\\n\",\n       \"kdM1Caz4WqU7KJ3D7Bz6TGBgi6KPQLUuBl9ruT5wXoV24HyC/jW78Jv9rc/nscTzQVDHBjkzux1z\\n\",\n       \"A4pcA66ljzO2v6p0sIc/naCEnO+6xg+YJ6fOUaLjgeeLV/XjmfdWxkQFNFFV+bnGz2QNBkoc+W9P\\n\",\n       \"6DNVlG9lvp2zhEglS5YsWbJkyZLd0dKLVLJkyZIlS5Ys2R3t1bn2sn5Guszi3yVhWF12hD1VHZnf\\n\",\n       \"K9xLt1CWqY4QyoozbkSQ7mbJcAHjZpoglt8J3FyUS2Ij0fBBYFlqS5F0qtcnLK5aVAMTZM4Ui5HQ\\n\",\n       \"U8St6KrQhL/H0zWOdxdci9/S7VaXDgVP05mhMC37PwdU3rWSeBbu0F4I8EU1LcroNuoFWiW5u+/p\\n\",\n       \"spqpZoV/1dvGXKCSyDYrzrgM4jUFAu7oxhG4F3080Qc7k32iYr64PUDUrDLvL8LSowppkYBciasM\\n\",\n       \"rr1SoPUeboTbl+E+7Qt3xdZrENsHcc9QbV/qdIsEoioL9tbHgzvo2QcffLhKVolm0X4f6vToDbjl\\n\",\n       \"cj9vfXON9nkfHm6Dy7gTYvFqdT+05eg34Mnbv2lmZn/oD/6pWPZTP/+3zczsvVsfO0OLOQGS92lw\\n\",\n       \"3aHS4PYVn0EZ3UxLJWz1oxO+14AKNq2TRKYjiPRTF+ZOJar3F5sL/EDuF7TFRnEZHXG+W9EsGyeQ\\n\",\n       \"2GWsNdDvoZ5QOADjTwYv5wBdMIXM9e0muPFyWTzvQXvsM8P9WPadD78V9fDrH7swTi6392IZAzA2\\n\",\n       \"u4tYxinDjAWTjAkqsWuC2lUd1pGVucvwsgznazKve8+MCpKqoEP7NXuD13mZjJlusUwzMDAZttzr\\n\",\n       \"KXrR/LwF3O2bjRD7SSyX/qyoS2hLo3tOXesV66TPM5SpAjoVuzuhj5Q4T1N6P9UgpfNRMEhN+LnJ\\n\",\n       \"fO0uQAEYJp1XYS2qG3F3877KmCzQ1rV5kMXNbZiD+UqoMtQbuwQRvhG6Q5XPjjEzq2oGW8WiqDZf\\n\",\n       \"SsLxE6gVpbneGt3t7cndjR3c9uMEAr4MCa6nmtmhYMaEQd146P9OgzfgKpeAGhLLewl86Y6/tUvX\\n\",\n       \"LCFSyZIlS5YsWbJkd7ZXm2vvTL66OfoR/o4S/jqdQUn8+OV74Vw9HGW4bqaEZby4DsoY7hDqWyix\\n\",\n       \"DaiWhJByB6W7VCegC3JiJBHOQ0lDXZbtZ5z+Xt7MM24IJYY/R6jz8eg7kg477FaQo3E6zX47jLKD\\n\",\n       \"6ahm6zsTqmPPqJ4DSfT+Bv/0RQid32xcATqqAyuxe4ICtpAIKTcxRvRNyJHsa7mvkYAut4mK7sOg\\n\",\n       \"Oyh+kPs0cKchKCVbF3O+nVEil3FKxHSQ3RfzE06NSEIUrJMEL2D3o7kjOchHhNO/K4hkfwio0u/7\\n\",\n       \"+Kf9+Jdh199JPUucoxN14A8+CMrHn/vsF2LZl7/0S2ZmtpK6ry4emJnZe+8EBOn+a2/G7z7+qc+E\\n\",\n       \"Sz73fHkl0KFjochpqMvjl44mfR4E6fWL61j2ez733WZm9rM/9T/HsssmjLcDyNubynfmHSalBgz0\\n\",\n       
\"H5rDZj7vZ8kB0D2qlH8AEnvsHLktqGyPcfd8csXohx8LyM12pbt6jCFhoLcYTy9f+rh+/PTdcI7X\\n\",\n       \"HP1Zb8Lk3V6KUjXIvjMFdAYUYGz2slsusVt/ffXA6wnU50c++bv9FAfsviXXWQ3UuZVsByvOWZlQ\\n\",\n       \"VG2PgToy/6q4KAuJGnnYVpkjPUS916LUb4cnZjYndtOLMM1C7edBJrPwm+i6kDkZ0WHJyTdyTIgk\\n\",\n       \"xgrBA5J/kU2cKdWT7A+UbBbWMlJ+RYIIxjPPIpt7P0IhEPlR1z+oyItSPEP2q6LB/0Vqg+uUtot9\\n\",\n       \"MXq/FmWYV/1JPDd8FsljkjIdAjraZrtZ1Ok4hPtJ2QOqmZuZbUiOlztVAnWcpSTkGt8vCejqTaDc\\n\",\n       \"jeZY9QUdCN7Kxxq9BNpPHiglmQpGPP9k7kaEXwJKYi7WQaWLEiKVLFmyZMmSJUv2kVh6kUqWLFmy\\n\",\n       \"ZMmSJbujvTLX3jhNs7e4LJKtvYyEciUCZpFsqoDvElqlh0zPR0K3qFgszqvkwInwoBJBocSbiQsu\\n\",\n       \"kuKFAEnPyyg89TKSlwETKmEZ3wnqbJMxaanAvkzkKgTMEuc5nfy4HsRuJfvVUNQ9gQA9CJkuz8Jv\\n\",\n       \"VfWaCYJVR4Z6G6q3tT9c41pe+T5bkmj7EQrsoqPUQatooPtCXbED74lJ4VJbrKUrRF17VMBWTjL/\\n\",\n       \"o2U8bbYcQ5bR3elFdCOqthETKDNRp/5WE4m2JybNVG0ZfBeT9npjN3AFrT/u4+R24v0XAjb0fjpx\\n\",\n       \"mZ4QePDe++/EsvuvgYD+9Hksm/aB5H51P7iKVM/lK78elMpz7TAQ6rPCXVEvkPD09U84Yfnd94K7\\n\",\n       \"962Vl/3ef+H3mpnZvf/rJ73uSNZ8ewx1qoUxX5XB7TSKe4peIZX7itpXqiyPITaob2GiLo8OKNzP\\n\",\n       \"koRV/+ZEF/BG3AO4lBKG6dohEd3M7P13QvvbgxCwL0N7Xjzze3d1P7j+7omKM9vRZmeU5UHs3orf\\n\",\n       \"44fuBWL5pegI3XbhvirdoapDX1caAIP7qATwOqrxox7t0o09CAG/XgUXVLZ3N+7zZ89wnGhLFetZ\\n\",\n       \"u8zMsgGaRUKfoNs6BgwJPYLrWSb6bHlJHTlVRwfZWTym5Rr6UPLUW69DnZSonMNt2Y98XuiijGAT\\n\",\n       \"0Ruzga44fXjhmqItWDHISAZvf4Iu3+T0DRK118UO7RKXHZXVMx8TQ0E3mmg7YSzWoqJOzaZWxhMp\\n\",\n       \"Cp2Qskl5KUVvq0Lfdbhfwo23qlpqNkViuzzlSSKfRAOqQfaGScZJDr27opJ5mvN5gswaEsRAl940\\n\",\n       \"eyajjXqzmSlDX3uYGFy0Ik/ou1vNwLHXNWNpCZFKlixZsmTJkiW7o70yRCrPM5ulMIuIgISLkhQ+\\n\",\n       \"+yXDmhUliZ/khCwUSIbXZpGQHj2cXAnLINZJBbhxUQLyCCRGQ/eLMwrlE3PcgTE+NUJgO0OY4xv2\\n\",\n       \"IEy4rqNisxImmVdJGXtAuCTXFcN+R5xjPzoyxLr3gn6ReJ0pERP9L1zLeNzN3pEO5poalWyP45TE\\n\",\n       \"GOUJIpdQJauxq9T+BxKghO1sojq5H8cQ90wlMYxojh+XVSRbLqUOOCZVHZ9IoOawylfcJsuujqH2\\n\",\n       
\"vaCEHX7TiiQC1OizE8ixrV/rNZCSd5lvq29xr3NBpHrIKmheuQK7yeONE8BXV4/MzKxsnAB972FA\\n\",\n       \"qVrsVl8KglWDCL6WecKwc5WwYFfsb5yUfoGdfibj6Yuf/WYzM/sDn/nWWPa3/9GvmJnZpgxt7CYf\\n\",\n       \"k0RGWtlVdkBRs5myPBTrT0IAB4pSV05KXQFZymWO9ZBboGKyIn2UDul6l5+IqsyTBqBw7MQiqzCO\\n\",\n       \"D89cfmQ8YDddiSQBVMZzkR9pauYEDNe6J+P6HsLEv+/hZ2PZJ5qQz+3FrV8rBxRTCiLF+UEUIJQB\\n\",\n       \"TVGEBX1AorSuSTSVK2gxnhVNHDEmL3eea25bADGV8HPKOEyFI3cROWYmiMGR1rwiAVu8BJweua7J\\n\",\n       \"yF154cEztkL/C0wfAxUE9eE0otehV7SEyJ2SqHMGwIhiOMZdI4gUn1mKSO1PgYDfjfrcCefZYJ6W\\n\",\n       \"EoCxwnNilDIiVp1kJThiXKt0TgfCuMKuHMbMDWpm1lm4P0poz7lOYpA3IqvAnJSjZBbogToOMv8n\\n\",\n       \"kvdFfiEGG2igWJQJ8jG53gH1RBerrMGEm5HLmjCi3b2s/1RgVzS77bieqZeAAUXqYTmTcUQsIVLJ\\n\",\n       \"kiVLlixZsmR3tPQilSxZsmTJkiVLdkd7hUmLRcPHbM4KhpHQq2Q/emBGmzGQF7+NCuniPyR5nOj9\\n\",\n       \"9KFfmJllqjtkdBkJ2TFWV8lx1CASuJ86QuKqqpH8tWnC8StJMrrekvTpNRojsVpcEVSFFdcCYW7t\\n\",\n       \"BSbeVLIhv2fS3k7dM+2SsD3QFTmDUflXieokoDo83EMXppsR+vlBIwAoDER1XIFn2dui4UGNj5nn\\n\",\n       \"lL4l8ZTStZDLEI+BBCqawhbR3SL3P8+pWaYaN9DgkgoUUW7d29qCAKsK7AaipPV+T1qQHLOqRfv8\\n\",\n       \"+huoSO/37jI7nujmUH0a1C3XOQR3r5DCn18H188bb30ylj19EdyxFxdBFfuttzxB7gG6ZOva3WPv\\n\",\n       \"Pwn6SGuB9ukCf/rU67l7MxCr9zfublqBNP4D3/uDseyX3v11M3MSdylulwFaNY3OIRDrb4/uHuq7\\n\",\n       \"ZeLTBq6CuvbzXcJVWq/d3bA/kagMwq64HcgZUNXjGFiiOkqY9+oWpwu81+Oo6C7j/4DggnXlfdcg\\n\",\n       \"IXAHV9jl5CTyP/KFbzczs0/kfk+e74MbRxNuF/BVKtk5Mq/FZUIysCZtbeg+wk1R9yBd8RpsUWGN\\n\",\n       \"GUcfazUWMs2ewLqsRcfq2Ib7c9J5l1FbCn0t9IwMa5KuyUyGrMkZIJRvzT1xmUKNuyjcLXbC+QoZ\\n\",\n       \"d0wcz0wYvQQWrNHvpST+pgusFL2z9Tr0BYn4Zr60dDpO4A5blaLsfWK2gyPO4fp8Gdxjtd7DfOla\\n\",\n       \"7jGeVRavHFGXWQYIuNF6r2cD+sJQS9Je3JOcQUQzbUecX561TELdtn7eFkFGfSvuNqyPjXBFSEuY\\n\",\n       \"0SywnlVoF+ermdmAyJJcAiYquplVFJ9zVxJpD0w0foYWosmlq+IbvyolRCpZsmTJkiVLluyO9uoQ\\n\",\n       \"qWmcvUHH/FKy++EOc0l1tg+9rVId+uyF/CN3OjheScQWldX1twzXlLdvgg9yFK+r+f9IypuF1ZIM\\n\",\n       
\"hwYxp5KZ2cUu7DoaQakOCJ2e56tbtiuq81Z+PhJE52qvIJaCdLnpfLdwHMIuSQReIym7E6TJX72X\\n\",\n       \"avNDITv3bImSxV/K7pf143EqEkzEMpPd50R0QlTpiSLOiOWsk4b1UqZgWu7cs0JbHox5BU1JjCDq\\n\",\n       \"ag6zAW0tNCSZQRGyqylA2s801x5+0gL92e99t7rCQNG8hhzrmleSytM6d0iK3sjOedWEcOqbaw+1\\n\",\n       \"/gyUz6m6fnPtyAiJuIeTX//yKoTp769dAXxVMazbx99+H65x/56TjW+fB2LtZz73LbHsX3ojkKb/\\n\",\n       \"GpCuQoj1zB2mO/KXQKJOmq8RshONhL9vEepfr/18G1RlvfZxv7Egz3DE/TrunZxvyARw6Dyv4NDh\\n\",\n       \"HgpySEJvL/eJaK4inJQaULL3/hiucfPYAzXyi1Cnp12QUPix3/+vxu8+kV/gWk7OpsRDKfti9oSu\\n\",\n       \"XRwnTeMEbEqmrFaCkmIslkRkZ2tYuNe5rD81QtEPk/cTUbV3Dl5WAJ1pNFAnkpK9Tm0fZBQ8oEcD\\n\",\n       \"cBj+LqrTmIsagLC6CH29uuf3vwQ6OYuIx2kUzeWadMBYK0v/QYH7WYuXYAWV7VrWX6J0kyCizPGm\\n\",\n       \"aFa5CX3CDA+h3aGNDMM/7oVsD9XxWbaFnEEBIonSUBJFPRf4vlKUnkFJMu96SFIo6I3+ySNa1CqA\\n\",\n       \"LgAAIABJREFUKe0amP9TPDLDcuwMLdEsQZ2BCHWCUuYIkNJnVw7EMOY6LfXZTSV+Vcyn/M1SkkH7\\n\",\n       \"joFaiuYy7+sg6/n6UvQezlhCpJIlS5YsWbJkye5o6UUqWbJkyZIlS5bsjvbKXHvTNKOaR7eTJo2N\\n\",\n       \"ySvP/F5dJq6pNDujmc3dHZG0fi7xMTWrZvDg8rwRFRSyJYtyKWNy01ySG+eAefNqCV1Sl6URcvgG\\n\",\n       \"bolrIey2gEK17iVcBqq74VURCBiq5dSWGmuBXUGE1ySvA9sgvlVCxcMsaSzuk6oNn9Fg4r0YM4Xq\\n\",\n       \"+WmcHWPm5HB1urncmLYLelPqMpyoQbPU1pKhE2WrCNnnlcDogOAnVYKGnorqzhjcDYXqgoGMqWLH\\n\",\n       \"jBPoVMiKXdEzQav3TQGuZyEywge4SjIhYPc9yJYzxV4QRgfVVgruoKLyfnr2JLjb9lBCH0Wz7Aa6\\n\",\n       \"UKpO/02f+SYzM/vggyex7NHDQFQvxS399GX47Wbrbjm6UV+TRMqvXwZF9W+Gj3OmY1YEHZ1Lca38\\n\",\n       \"Wv84tHXtHXubhXv2svd7N0Gp/zNvfSyWvX/zG2Zm9gXRsXp2G9rx7PgUv4tf2QR9pH7y87YgAufD\\n\",\n       \"kmysiaSjUrickLEOTAprZrZ/GlygjbpRngdV8B/5tu8yM7N/8WOuGcXsqv3JicBNA22lSd3tcLdK\\n\",\n       \"poIKa4yqvVOVWssYqMIIAF2nuD7PKBiMCRG6xcVr4b6uL5wU/2gdxsKToyugN3BLdbpOgYDNca1u\\n\",\n       \"fM7hQiZWDXLyJPpcq4rBBv7TaoM5OUsuznnvB1JvqMQ6XQmLfQW/4IW4R9fo/6rw+zrgXgzm9ykD\\n\",\n       \"obodJLMDXMD6NORzqkPgw3Hvx6+oxK46ZnBzq7YXx4QJLeIIV7VJIvk8ZmCQOg2kJcSi+DwjEX8U\\n\",\n       
\"faoRz6SDqIOzTpPo4g247mkvyuYg72se96hoL1pZhuAdJ5Qvnz8zDUrU93SUpPHoz151vG6QWWPQ\\n\",\n       \"9ZxuTFmLV0owWlpCpJIlS5YsWbJkye5orxCRymb50rJIAF6iSkqsjYCQvMIWgBgGQWl47kHenGNY\\n\",\n       \"K8+ir8EMIdVrRaKoEAEJZ8xI0ZRpkPaQKC2tqbDrIWGzmCFIVMeWdmGHsdvuYtltFlCFw0F232y3\\n\",\n       \"1IlkRyVbk/fYA+HoBJkwEHWnmeotSLRCdieY0s/QPNRXAJkCqFemEsAT26Why/wxCON+dNwZK7GV\\n\",\n       \"SrVDp8gh/g46nrir9p3WRKXe0nfJOUiORCKVsB+BMxF9jmoCoqzMIALhsMdggxnCihPNkEse14eL\\n\",\n       \"VXJPHj16HW0REjXQjBdPnezN8OxCd1BA7Mrad8mX9wJKwJxrZmYtAgkePAqyB7mQODfXgdj8/tu/\\n\",\n       \"Gct++Zd+yczMPvs5R0m+9A9/IdQXKISZGaOPj72o5+Nak6i9f/rTnzEzsw+Q86/Yem6+h/cD0vVc\\n\",\n       \"2nqzCuP/Ue5teL4O13hijtzeFOEaQ+bX+jjyCTYyxz/5emj36d2AAnUCYRwxnogCm5mNQKkOMk9q\\n\",\n       \"5gQTNI/jfxRV7KmldIDMe+yEG0E9HuK3/9b3/EBow7UTtm+BuimxuUNOwEnqXoHYXQlywjGeC5pL\\n\",\n       \"SYJJpFs8k8ESmSDqUQqJuiWJWkLYD20g7b/5+qdj2ftDQKKOgvBWGb0DgjAeSXbmeNF1nWH9Z8Lv\\n\",\n       \"V4pSoS8ESKB3QBHWEX3SCu5N4vcOMgmqTr5Dv6+ljJIcxQwQRp5YQZ97ymQImsSsBIoIDpAEYA7V\\n\",\n       \"bHBkZjzAIyB58KJMi1yrgjzJJMd17VK6xc7kGI1k7BkQiOcpxppK55yoVC9rcn8Mn0+3gnQdmadV\\n\",\n       \"rgkpoLIR5JQqHZK7dfYcMTvrzdJH0jmy+Qko7nEv95rIlaBUU8lcg7Lwi7flnCVEKlmyZMmSJUuW\\n\",\n       \"7I6WXqSSJUuWLFmyZMnuaK9W2fyMK+4M0jizzJ17UkolXC/JP6SYHa4RLxz+zIjYS30eJ5YLOTFe\\n\",\n       \"d6mjNL8+3I0Ct/K4Ciq2M3JwNvszKyxMCaMgYooWCV2bpQikEKnXOkUXESF+ZZEzeadokUxwX03K\\n\",\n       \"oWafZUt3q16LatNZ7pAoNUByucklXUlMBi3uzgH3opqxHukKE3ieirV6r+NfuU+E9oXDGBOz1kzQ\\n\",\n       \"7N/lPK4U3Sm2QWBnXlZJ9AUTnvY6dkhUFtfeOCdRbleqBN3gWnKfpsWHONYKdUFcBqL29sLdbRTt\\n\",\n       \"Pty6jtQa5FXqYr187m6kFdwjO3G3Udk6l349Au5/8dI1qBgoMYrL4gi31CDJjT/76M1w3fffNrM5\\n\",\n       \"wfSTF6EN1+La3lMd/NL75CUGe7t5IGXQ4hF33+WDkLT5U48+5ccdA3n9Afpp6p7F7xrMl6P5PXnZ\\n\",\n       \"B72nQdxDRyRLHmWcPtiF3zy/Fb0n0AxaSdr7YBvczJ8FYd/M7Ef/8A+bmVmH/hxaUZimpptoZlGX\\n\",\n       \"KBNlZyaB1eWUI7GSAJgSPthB3C3M6BB/oK5AELBPknic61otCvhruKCna0m4i4CWjbhAe7jxKuEF\\n\",\n       
\"1AWU3aHjo3Ooh3t01KwM0DTSdmVol7I3XG9KA5DgKhMXdNTemmrUTYKCSDcRyghd65msa3mxxChG\\n\",\n       \"LKStuLa7qK0k7mO65eHjurl2bbMyD/26lmCLsaAbX4N4wvebjY9dztm9eKn6I4jtku2h4LNlPLPu\\n\",\n       \"0outMnbMoiBzYoi0EEnaDvrCpFpxGKfbjWog4q88O5hU+1wieWrgFfJMHHoq4J8Rl5RAHb5wZLKe\\n\",\n       \"V5hbpSThLjUa5IwlRCpZsmTJkiVLluyO9krlD879Pzuzg5jhNMs0PI5RzZTSEaZ7htjMt1nN4Rfl\\n\",\n       \"FxTBOfMpolPT8g1V35L5tRLqbxFqeXV/WJz2w/UIdQEiJSTirANyJ7/lDi5XWYFs+SZO4l0JlCwf\\n\",\n       \"dBdKJVqtTPijCq++SdEbxR+JYu1Ite9z5H1R+2b4Me/TdAYRU2XzM+PEd4KKki2vRUQsE4SpoiRF\\n\",\n       \"g2vJTj/DLmTKdKdLZquEEDOIQULCGf5bmOy+CE5IqO3/x96bxNqSZddhJ/rbvfb32VZWVlbHIlUs\\n\",\n       \"UkXRNARDEiQBNmRZsOmZPTBgGzYEeGZ6JHhoTz2ybA80EAxZJmiIhA1IggSDIin2pWIWq5hVWdn/\\n\",\n       \"/N3r322j8+CsdfaKfy9TxisSnzbOHuR/GffeiBMnTpyIs9baa1OMOp36W/HOrZetbb1fYak4tRxR\\n\",\n       \"PCz1qnA6AkiExAttU4HafXdvH4RtV5cegTo/9chMLejHEqe6JxYGZ+fPnHPO/eDdd8K2r3zN13+7\\n\",\n       \"wD6cc26CFeZyZehXnt3y7RXx9v79V5xzzj1414vY88qONUL6fSEo1dde9d8/vbBV+iuoU9keWp88\\n\",\n       \"A2IyGVunrCFA328MObmVveScc25T+XYeHt0Jn5088s7i5d6tsO0TCIFXC0Oa5lh071W2X47rXtKv\\n\",\n       \"l7CHuFeZ2/sXXvXX+9/++s+EbS8BlWyA4M3XYlcQrCAkrRz/5qXUS4PIXJNNmLyiAlzWx0xkPksp\\n\",\n       \"8mZq/MAdGveJ3musGCAIRolzKPYMERmlfmyN5yaAXxW+8zYbm2NY/y9Y0sh+057zn9ynmDtStQ6h\\n\",\n       \"eFmQlhrz1ODZEfpHk2zwBQj2k0FdV6LfUv+zY2KT1MsjmqJzJxJfMkE/NrDuaNR+ANdkde4/u5Ja\\n\",\n       \"m8sa9RdTQ1+nM3/cXua1CpYRU7mf0inm2kbm6bVv+9VS+h9JA7kk5cyRDMKxMKjJim1aV5ZzUTES\\n\",\n       \"/wlHixeZk0P5x21LDkX9O4xJPtcHTA+/o+MUu2ukrl6NGn+btVRFQDUCOvw75xyB1VIE8KPZZ2NO\\n\",\n       \"EZGKESNGjBgxYsS4YbxQjdQQffH/Ds0Ptg05Qz07yTWnJYHqYYhSDG0/h8cYmO8FCEmOz1X9AHzh\\n\",\n       \"G64ei+mX8kZMgEVWOuslDdb8W3C9r/WN/NvvRrj/Cu+5ikgRdSoH1ai3Uz37IHOQdH6sMDfsO7Va\\n\",\n       \"oKZCuX+m8Mr7dpc8p59wzqX99vs4+6mT8+dve9EPBG4aK0ddLbLC/ErrKsJoL2m2V8ROEDmu0/NC\\n\",\n       \"ES6glLn0EzVSXHzJKohgTid1uELm+iA1F/uQbmg4JiUlnLYXy2uzrqCdwUv3fRr+XmYrSA6xLLFV\\n\",\n       
\"fVF4JIQp0s7Z+dcrW1VewmByKanzL7/q0ZdTqZ2YQwdFLcXx66Yfeu/dHzrnnHt6YvYDNKnLUxtX\\n\",\n       \"t+/7/Z6dSf097FcRkem+11qJb2KwfzhEbbZe0vo7pNA3YhL7yqHXTb10aJqiJTRHhdgKzIFIZaI5\\n\",\n       \"67Biz1JDbrj6XT/2+3jWWn+9CuRuXYjR6CefOuece+323bBtg3p5h1ITcIVVbyFo5jOkX781szb9\\n\",\n       \"zJveHPQrL78atq1XXod1ee1RsrFqxM49OqFmgSNcO0XuqOXTO6Jn3bUdpsOpIDdtQ5PGbUsamioO\\n\",\n       \"xz/OWXRTNCQd6Ot6b3+wlGtS4F5IE0P4EtoJAOFpOq0rCEREXHppnZMM5kQgbZlYB1C3qGaSNfW1\\n\",\n       \"gtzhGMvEX69RashMA1RNUZUeCHOb2n2d4/qsxWB23fjreb0xNLWtPdJTr8QmgQaXuHdSOYfFxu8j\\n\",\n       \"FauHLPPjQ68hx0It2q882O4II5BQtyq1C4PmVKE7zGdEnwS55jVZiSEnnyepWAhUKe0/pO0BYJVa\\n\",\n       \"eym1VHb4wEQlfE71W98n4uScabk2MiZpY6IezdS3lZWNnQKWEUmmzMVnvypFRCpGjBgxYsSIEeOG\\n\",\n       \"EV+kYsSIESNGjBgxbhgvjNpLkuQ5vfa2OynhPBWREe/bZROggkHCuKkKhfkHxda50iMQwg3oPv67\\n\",\n       \"S5Teb31PxeaGhssx8Pf5mYd2K4ETKZisJQ02nUFEKbstAGMmQjewTwbuuDjHxG23swXtptBphraV\\n\",\n       \"Qg+sWwrQxfWYWmtJCS7ZFrU/aFgTULwTgqBdUldBIxRMw1Z3csLDhaQG05xcoNYU7tHJwAEZf2ud\\n\",\n       \"rpzUolJ1TJP2O64KsZAg3Sj8SELKROjGLCEtazBy0ntBbSK3GIfxdGJUzcv3vZD5/i0vcp6kRoVU\\n\",\n       \"Fe4Jccfuum16hmnNqysTdneoXddJ/1+ceDH43tFh2DYB3ZaBglxLbb4anb0UIeZmDWpB7rX3fvgD\\n\",\n       \"55xzo7FRNkf7/hz3hZaioL1QF2P0dwlx9FLT2kFMsUagc869fM/XzlNLiBr0TCYXqkX7LsV+YDJC\\n\",\n       \"CrX8NkMqdnr4Oeecc+935uJ+NPZt/+7DT8O2v/zVrzvnnHv46aOw7ZUHnubbnxnd+PGJF+WXQpnc\\n\",\n       \"bXxbvnbfXOG/+YUv+3NYmk3Dcj2kjxaXRjfS6iATCpR/D4TlCWuY2dgpg/2ISCpgZ9E1eo/hPglW\\n\",\n       \"JzqH8B7ezkqppQJCOfE0p2ioXRFEzHZP0D26EFd2UlApEmtyoewaUPY6r9PFO6sVFyA9qXQP2i7J\\n\",\n       \"HjxtdQXv0T8LWCNUqZ1ECgqwEh6fdi5tbeff4J5Z1HZPLiFsbhO7Tn1L+wWRT2Ae472+WolgGnPB\\n\",\n       \"eCI0OsZ1JXN3R2G10P0ODGUhaf0jiMHnWk8R40hr0nGeprJDn9NMUNGkANKs+kxkLc5EEpDYllzl\\n\",\n       \"K5BeaKJYGix70E+aWIZr3MgYbmrSk/Y9jmeVe9DOo5GHV4kEFWHqXZaraH47IiIVI0aMGDFixIhx\\n\",\n       \"w3hhiFTXdQOxucUuo8tu61NdkXQhJV5FlKwTJWnCFO/xrVbeVrlKa0R0GGroqflYqImnKximf/Zb\\n\",\n       
\"3xuibv5/rq/8KiHNbLXS4rPDIxWHw8BuZKvaEdCfTFZEAB8GK9JdAuwaXwxC8MGqkoakYkIG9EvR\\n\",\n       \"J8pXC0HEiEjpSpNp0uvaBJhtx0rb2k6ahOIfuYYZEJlShNVNzRpW1qIEqbOy0LNrLasvB2QrkZUO\\n\",\n       \"0SnaL3RqYZFBsKrIJUSsKtgMK8hWTFJxaw2qiqNPbh9Z6vLLt70h5Qyi8/3c0KK9iV9C5gu7Tpsr\\n\",\n       \"fw4Hx/a9y/MztN3QF1pnlCrsxaovE5+EDz7wCMy9O/d9ExtDX770VS+E/vbT37fzR1OuFyaY3b86\\n\",\n       \"cc4597N/+a+Hbd//1r90zjl3/9jO9ekzv4puDs1+obvybe6AtOxNbBn46aPH/pwFEeM9uxGx63Ti\\n\",\n       \"7QQW13r+WKUursK2ycQL+jcy/vanGONI0x9398JnR0f++8m1iPNhtbAvyQ77WH0fSNun2DaVFPIN\\n\",\n       \"kKafeOXLYVvhfPsWa7tPaCxYQ1isCOLiFAJ0sRWoYR0wGtmxiDoXIlROsE2NBldIPOgETaKJYob7\\n\",\n       \"uRPomrU49fs5E2AKO/4aKGI5s2117bepUJ4GuJnUhNuH3Ubn/Lku5HrVTMBRRJa2DgJ+8xet1lDD\\n\",\n       \"fmpxpGzJMMgxmCjBCWW+tnk663x/Nop0Z/689NnRoQWb3sZuS1sJeZ5RID1AxELpRvSJnBfrnp6c\\n\",\n       \"Gkq5v+/vp7Hc6w1MYmuZu/mMy+Wxz2SoQoyQ+VrA8eqc1dYj+q2WNLTVUPbDhbFm+y0SWhIJIknz\\n\",\n       \"S3l2BNBJUf/0eduLbfudISFE5FSRQ9QulGyX8dRfz83AdJZJSdamLIuIVIwYMWLEiBEjxp9KxBep\\n\",\n       \"GDFixIgRI0aMG8YLFpurF8QOLyJ+LJAdnW1VHLZbgO73p0J1ije7IJgcNMhvy3Uv9KdSB3D8SPeb\\n\",\n       \"7Gi767a3sU4QPDkuL0WcDF+WUmDHMtQ8EsVmBc8YwTFJKdaN0QMOFN3A2Z1UHh3Txf8pw7Y81Wvi\\n\",\n       \"/9UaVvS0Uih4BMfsvBN4FnW1rjd2/OXSb2s6a2cfoFr0tfQl6c5EagJmBb2tpJ961jUUcSI9sAZC\\n\",\n       \"eVLA8j14hSTB7V4dlnFcoUApgG9a5WxxXTP1QAJVKgLIEoLF4yOjgA4PPS01gwD9cGy01+Xc02fT\\n\",\n       \"pfXr8S1PN6ln06SGE7aI3VPsT0fhCCLvfGLO0l+876lFWmVdnJ2Hz779rW/5Nt0yeq5eeJrjSNzR\\n\",\n       \"Xz7w5/CvvvWbYdsRkgE++vC9sO3gyIuxr6XtLeiew32/j01tLZ6TPpTbiy7zV1Kvb7n0570Semy1\\n\",\n       \"9H/rvXuNum5aYpKUGo+1t2di/9Gev04v3zW673LuReFffNX8tja4TvsTo7H2Zv58RpltOzv3v711\\n\",\n       \"ZM7miUObxUenBd0zgXh4fmbnOp749l1dSE3Emd+WaP0xeFsJs+cyUG+N1PrjeO4a5UVI7aA2nPpD\\n\",\n       \"QVJQiYs6JRCzPRPbpxPQk1firQRKmXUtnXNuuuf307fWT8sV607ivGSe4hTXLqU2HotIyoWtKe0Q\\n\",\n       \"WoyO5ZqAwaQErdMW6kNibtgIPX/d+b5QsXlH93Jx5+5BC2pNxrZnXT2ZT0A3aUk4iuKTjnOIfB+n\\n\",\n       
\"uLw2yvrs1FP7B/s2roJXney3gRg+6W0+2YAOX6/Fx4teTULpkvrkZ53QmEGOod6CucpB+D3Sczp3\\n\",\n       \"orKEjN1Qd1fm6SCz4VgQGQfn6UyfE+j/XNze+UKhvnQUu6fyMsAxniRK5+2SIWkbYsSIESNGjBgx\\n\",\n       \"Ytwo/sw4m+8KE5sPNvp/Br/FG+wO74TBT1m7iW+/amFARERRMhxDgA4Ttss7aBreVsUBONT1E/uF\\n\",\n       \"56wTEkE15ld+pXM9kXpVQIK6QQP8P2NxjKWgstb0eziaD1NYuYrjm7viFRAHysqgYJqoiJMLCAUH\\n\",\n       \"IkKI8iqnjtF+m7qYE8XpN7qcIEpmLQ8tCqif1LXDakZNjEdAP9pKUp2xmluL261ds200McUqTR1z\\n\",\n       \"6eKrAnResn6Q6kt3ZEU/+bk48KPNk6n10x7qZI2nHiXaiItzh/NZnpvY9fiOT7VPz61fK9gObB4/\\n\",\n       \"C9suTh+jnYawZTjv4/uGsDz6yKMpb7z1Feecc/fvW625Mzia37lnLt6LK4+OiA7TJVjhr5fW9vLQ\\n\",\n       \"72cjK+c5XNYXUtWdSQlHB15QPZ/L97Fanu6JsH5BZ3fbxzlW5I0KhuH2vRbB7OWV3/eBIIKPn6Ke\\n\",\n       \"HlDVvcquDVf9x8eGvhVI4V6LJcMT9onMSRR+P92chW20gigl/XoxB0qhOl0Ie2ugmZUgiBucdzkS\\n\",\n       \"F3MKimU+yZg8slb7EdzjYh1CtCmRpJQOCPgaqFYjlhhEBLSuJ1f1mSAYKzZJxh+tUHKxvyg3/u/x\\n\",\n       \"xM5nuvF9V8MBXCs7EAmXrzsCUr3YH7Q76up1TInXuTtlnb7BJO+/h+HUyvzDY9WaMMMkFhmTecHn\\n\",\n       \"lCAdBdE/OxSLDHTd9rzHhBmd/ynyVwTlGe77ydisRm7dIvpmx8pWOO9a51NYp6xsPPM5ocgRbQLc\\n\",\n       \"889QZ8+4TITdHJOKXPEqDhA5jlN5FFmfqaDd35/sm3Tgzr/9rOdY7/v6uS1uqErHOZbyPA1jQu+J\\n\",\n       \"drj35yMiUjFixIgRI0aMGDeM+CIVI0aMGDFixIhxw3ih1J5SIYQCBx5DyfN/mD+MCpCDy/jAlRzf\\n\",\n       \"21FcOICASuOAYlKIkRq6TGHXIIpWKJbFJUUct6vwI0V0AU62qFHkdL0wKHIByFih6M55uL0TdWKK\\n\",\n       \"4zetwviEp1Wwx76li7u6iKNN6s+Cfi+E2ivB6aiPCx11KylkS68Qk5A614798a/Vgfo5GFnd2ekV\\n\",\n       \"leVKrW3TwfSgqeT8WdS37WScBCH59vqBBTcLEZ0mOfpf90HxvAhm02KNthuNREfrpFQfF/S7WLaU\\n\",\n       \"oPbopiu1YN1i6Sm9eyMT8VIc/MpLL4Vtpyhke7ZnouTV0tMilycmSj7Y8xTR04efhG0FKK053LOv\\n\",\n       \"5/b9I1BqzdKoxTFoMZNkO3dy5mnEo6lRUPSFaXNxyofYtxG3Z/YOxfOnpyZ2X8M/6u4DE9GencEB\\n\",\n       \"PLNrPYNg/+rcaMGnJ34/B/tGd9AL5+JC3KZBR96FY/pc/LEurvyx9sdGBRagABpJiphh/LfqrLzw\\n\",\n       \"v23EA+sLX37TN10SOi5FyM1gUkoO+m4lRab3Jn4sqGM+i7amOtUweaQyb6EE99pKxcuYH5UWrXE8\\n\",\n       
\"/quedRQd91L4O/CSwrc/XvuxOF/ZNSFlQt8r58TTTm7JvPT7G8Hja6FeUCEBRdzOoUDPxAMvC/O+\\n\",\n       \"VnuAi7jM5ymLoMvxg0ccK2Doswbq9Uzo2YSjWAs/k1LqtkXZbafng39VbI5nB2UZvYjDWYFAx1Xb\\n\",\n       \"+TH79NPHYRuLZXcb8fvCPdkLCZaiL9SrjcJzpeX4PGWR4yHVBcpY9OUh2UolOAk9wLZfOwZ90rPa\\n\",\n       \"hSTvDDwPzSfNOfM7bOTZ0fR059c24XqKLsGee0pV2i/+30ZEpGLEiBEjRowYMW4YLwyRSvvnUKVu\\n\",\n       \"h8As2B8oqkJnXa3/xJ3a/pOtP2z1w5XBIA0eO9EX7dAktQTYtV/8q2iKivHsGMMGD5zQsW1+aSuI\\n\",\n       \"yQirn3R7VZV1AmskXOmLsJG1iQZ1tVhriCsT2UXCN3PbRnuEboB0+VBn8zEEuJrqHUAnLQmYMp3Y\\n\",\n       \"Vh8bLsV61uuylRHTlDOnqwWuNGSVDPWmuo1nSEVOG0WzKMAXUf7aN5DCwmZjvdJxfyIYpbC8kZUu\\n\",\n       \"b6Jgl+BccEUeIAdYYLXOhJ20lkhH+H4raeXnvi0P7pjYOwVytbwy5IRC4dmeITdPPv3YOefc9MDQ\\n\",\n       \"LAq1jw5uWTvR5LNnvg7e0V37jOhUKnYVX/myrzX3+DvfDdv2Dj1ic/XQ6s8tYDUwGRt2lbdDpMM5\\n\",\n       \"56Yzj5g8O/WIVK0raIzTp+dPwrb7t31fLKWuGftdHbiLsUfHFBCmncbezFCq+bVHjogIb9bWtnO0\\n\",\n       \"qbpj13AEi4PTT83CocK9sFiJTQDqOn7uDbNJmO37cz0T1I0p22utdcfPAF1OR4b00Qk7lzkhhcdB\\n\",\n       \"mkq1AYiys0pSuJHGnzaGyK1xvp0gAkQCa9gkZHJf82uKXNOVfirI3ROgmM+e2bUjcq1C/Q5i9LVc\\n\",\n       \"T05ZnKenUsNxhfv6orbahMwG0vkvQDyCUrX5NhIf7GG01h3d03n/y36JJqsTOl23swFzsC1Ybhuy\\n\",\n       \"HvLsAuqUynweBP3YRyt2DZwTc0H/e3z/6tL65GTkx+eg1h0ta9RrAU1RhGsNS5JWhNpMGght08oO\\n\",\n       \"/KzbRkkHZuehnu12ApZeOzoc9DJPVOH5RHG6oE/hfHaghPrugL5T5ojXX1FHWqYME9okQWpHREQq\\n\",\n       \"RowYMWLEiBHjhvEjIVJJkrzvnLt0/nWt7vv+m0mSHDvn/oFz7nXn3PvOuZ/v+/78j91JjBgxYsSI\\n\",\n       \"ESPG/0fjR6X2eufcv9X3/als+wXn3D/p+/6/T5Lkv8b//8LWD9NkIHrbEoI7gzY7UeLxr6GGm8Li\\n\",\n       \"bXHYwBfoOfuodEC/0XdEID6amO8oUKyur8GJVRx4g6Ntv+McuU3gdNISyyuDM5cTeLw0Ispfo51S\\n\",\n       \"tDWtUFxU4Gnafgytuvxv29Cf294lQ7pvex+kL7XIJOkJLQZcwqtm4FQOam/SCGSMvzc4r07OlUkE\\n\",\n       \"tfRTXtGdXKkN/28vAtCMXjUibE0a3yb1YElB7dkYEiEsqRKphpxVvP4qfsT5iwCX9N1Y/H4SUAvr\\n\",\n       \"1gS4LIxMpiBrDLIvW/ydG7R/eYYit9dCz6zgNyWi8A705WYlBXdLT9HsHRp9dz73Pkd7KJp869Bo\\n\",\n       
\"rym2rVdGI37vt37LOefcV3/up8K2R7/52zg/8fuBQPnBqy+HbR99z9OBeSUeaLjf5ktPSyjFRAh+\\n\",\n       \"uTRqkU7k9+4a3Xn11J9DK+N5DB+p46nRQh98/JFzzrmjO0Z3Hhx6Qf0nn3oBPv3PnHOuQfLA0S3r\\n\",\n       \"r8ePfVHn9craVMF7qha6ocO4m0lx2xUE6MPirhjjQm1VY3+dFkg8qSrxcQM9k4pgN4NjvtJIvI1y\\n\",\n       \"mX/yylOErfp44X4WpjDc25xiNAHI3KmFzIAoeFoZBXhy5vvp5PFJ2LYHik6a6Tag8tdCt5MqSgI9\\n\",\n       \"JsJ+OPbXM2vw+hLrdC1aDAooF20BfZ4ycSAvgo+cFhcfJr5ovVrOcb0aVGGbJhu00KVkA4MwfCby\\n\",\n       \"gUApqi8dHc0xNnIVZ1Mc3Sm1hn9Fl7JY+P6kd5r/qf/NRKjaJhSSt3mC414d7YNQG/25H/X0AAAg\\n\",\n       \"AElEQVSaSFYMi2CX8uzsSbfL85d9MSxa77dtlKqn27jQbSsMxhIFpXvZB8eJJnb1HX0B3VYMEsBU\\n\",\n       \"jhE+x/4GVZA/m7z7k6D2nhcD/Q3n3N/D33/POfc3/wSOESNGjBgxYsSI8Wcu/iQQqX+aJEnrnPsf\\n\",\n       \"+77/n5xz9/q+Zx7mY+fcvV0/9C/UIvrSPSIoDh0IxvCWmMgbOdGBwYsmYofmO7yRdoOaRxD9Dd48\\n\",\n       \"qcRTF2vf0j5R8RlWdfpy22+nzvbPNVCF4MFFW1JOL8+RVj8WlAyWCJuR/ZZmzHmh0BHTelUATbE9\\n\",\n       \"ji+C0eCAPoCktr/HlUmzw6Yi1eNDjFhK6vQUH68EJSlQO69myq195Gq4HvdSL6lK+Duta+X/7dRF\\n\",\n       \"nKctq+9wTWprZ7CMYAa32B+ElaMMIjqa5922YDJJJJUd6EzrDLmYYSXYSO3Ey41fMR6m3j27Pbfv\\n\",\n       \"v3X/J5xzzl3NjRl//BDiXTmt+dz/ZiMoJcdTWyv653/05KmlSb/5xTfQXj8VnJw9DZ+lbh+nIucP\\n\",\n       \"BOH844/t+FgJZ+JYvYLw+pP3Pgzb1p1fTU5yQWkgaJ7u+fP/RGrz5ejDrDIRPS/XRlb/G/RnImON\\n\",\n       \"QnW9m2dAp06eGUpysOfRpmXtUS1FGl9+xaNp13ND5Gq48l9embCXLv9jqbXH3Swu7NpVQDrWgkgs\\n\",\n       \"rv2+R5UJytdAaXk6/Y6KDd0OuwK1CSGa3+hKn+hLYceaAk26OLM+4T0wApq62Sha5rcN6qABRZzM\\n\",\n       \"TGz+zre/75xz7uSJ7bed+HYWgkjWtCTJBQl2rJOJenWS7NJgzi6l1t9k6o+7FPQ5/LaROqUphNq9\\n\",\n       \"IveE7uVrTEZy25YslrAkfY3mJeK2HtA0tz13ppK8VOYznJfc0A3rxPl+GjIyRMmFucH9WY1s/DFR\\n\",\n       \"aHlhwv5q5M/1tLY5ZjQicrRdAWP4fCQ9gXtdHisZLXEE/Ql2BgNdO/tCWIqGLImcPy0ZtMZjTouD\\n\",\n       \"Bm0T5qLbZpPcDrdzPsd21fXVdjJRTFHXbhe0JfGjvkj9XN/3nyZJcsc590+SJPmeftj3fZ8MKxTG\\n\",\n       \"iBEjRowYMWL8/yZ+pBepvu8/xb9PkyT5JefcN51zj5Mkud/3/aMkSR44557s+u3FwyvHZUC1V4Yq\\n\",\n       
\"4DFixIgRI0aMGC8ynn1w4Z59AJPjPy1EKkmSiXMu6/v+KkmSqXPurzrn/lvn3D9yzv3Hzrn/Dv/+\\n\",\n       \"H7t+f/TSLHhtOCeFfAdOqNu/I1Ta/muKCNp+d+yD0N3AdwKeSSpiDvCs+l0B2lR4MPnjXdGHqnhu\\n\",\n       \"er5Qr8Coso8FYNmskYKKfN/sxXcEfFghLuYm9hZvKXAFLdx5lXYktDyA7MMpKN8KcawUg+0n/da5\\n\",\n       \"pvh1pvAo8OBKPKhWJWkMii5FsA0KUIx93cZtU5AUPvYDcy96hdk55vQgqSUBIKVQEYcXwSqvk8Lu\\n\",\n       \"BeFjgedZmDkXWjCls7nQCITD80xg9I2H2Rdox/3+dvhsBJFxuxC1Kw57LT5SLAYsbFeAomvZSK+y\\n\",\n       \"2cwKE1/Ao+nuK68455w7mNnxnzzy4uxCxskY4+p7v/PbYds3/9Jfcs459we/9ZvWALiIn58YjfjG\\n\",\n       \"57/q25FKIWVA+tcoEDxfmjh2NvF0Xyo8JsWmg0Ku6JNSEiAW2N9s3woeHxx4inAjY+wa1NoYHkhP\\n\",\n       \"ICZ3zrm7D7wqoZeb8gIFipUeuMSxKqFRX/2cp0znl5KwjGtxeW7bCnhErYVuSSAeTygsz+36ty1d\\n\",\n       \"9MUzC99rpZ9CEXZRkdc8j6WNiRJ04HhqtFCNttQN/ZTsFFKMcZ0SDlAsNzkyz7C3v/O2c865xysr\\n\",\n       \"2rwY+/lsJEW7+xJtKfQaY9yD9lutbK5pWiR2SLWBkFAkU3JHXyK5JsWI1I59r8T4y0s1vMM/kEes\\n\",\n       \"nRy/oQBeJyBKBtRvDu2Q+ZfsVS5+Xy0mnMHzjA1sSfHZteHX9LrmUMOPCkusYLKHnmso9C2U3WrJ\\n\",\n       \"ShnyvY4JMNJ2No3Cek1AyOhLqFUk2ud+abRcMkhy2n52tpg7ckn8CMfbKdXBv516kZHak7neWiy/\\n\",\n       \"haRGE4rw8Z3Xxu7Oa/7+TJPEvfNrD7cPjvhREKl7zrlfQoNz59zf7/v+HydJ8jvOuf8tSZL/xMH+\\n\",\n       \"4Ec4RowYMWLEiBEjxp/ZuPGLVN/37znnvr5j+6lz7q/863cwfKsP6MeO+jZJv736/GN3uv3r7W8R\\n\",\n       \"/doh4tQtHZCObuBqipWeIn187VcwjXXVdH/dEHXT2nihLXqCFKfObUWUQeTcDZy92Rhxu+1Yr0jf\\n\",\n       \"voHwhEZJGnQQlufyfbZJzosojTjh1ljVBNW7M4QvlR7NsGIpClu5lBQUA65RcSYFtc1aUCr0maZ6\\n\",\n       \"01ag3yGeV9SNgsZOheJhJYhrMhBH0kVdVka8ZAqTcZUo38t2IHLMdFbrjBoOzRuIZ+9NrYbefO6R\\n\",\n       \"jgMIsZ1z7tnomXPOucW1iUivLvyqP+nFgZqgpyz1eP7nJ+ZW8uWXvuicc+7y1O9XUYWQhn9hNfx6\\n\",\n       \"ODAr0vXr/+yfOuecu3XHkK4Sq//5yn6b5rQzsWtSw0V5A1uDXATTl5fezuFQ3LmZGv1MziFntYPa\\n\",\n       \"xvMenM21dl4FMfj13MbuLaBUrLm3f2jWCHRPr2RlvMEx1Nl/Xfv9zaa2bQ6kay12HryeqaAJoa5Z\\n\",\n       \"IvW/OqZ/+/Neyr3WIaEgE5QqwT5WKxPA7+15lCgT9KPvsR/R5LKdI3GqHgElo8i+kPz/EhYHi6X1\\n\",\n       
\"6/0RjiFzx8W5b8uzx89sG75XzawB1T6Q28qu3XTPHyOtMNfJnBju694E8yXad6H2H0Qk5LfMT5Gc\\n\",\n       \"BFcACS4VYeZ4wny6kXm6bshIqLAZtgqCPqdAszRRJQdKnYpNTcMkJ3WKR5sLQtia7MPEKkEk+Vei\\n\",\n       \"julAkxq1zsE8OZxjt60bVkioUIF10JqzNqGM174G0yHnyv3q8zwkBQn8xYQyBfjo1N6rJQTGZ4MJ\\n\",\n       \"Wq2LBlYcYR/btf44d+RauzAjcrvtrK9tz7IdUJhEdDaPESNGjBgxYsS4YcQXqRgxYsSIESNGjBvG\\n\",\n       \"Cyta3Lsh7UWXhAHE2W7DfvTv0IKCz1NmfhshQIVMKaje9pOg2F1Fbz2EhYocko7shdsLJRMV7n2u\\n\",\n       \"QLL/Ec8LjrlC8RCKHNaTpI+JwJg1/ImEHtiQbhQvDtanVPbSXGm3PU5CJw/OFRCr7CQIBuX4TfB7\\n\",\n       \"EQo0eCslz28awOK5I6XnP8z0+zjsZm2wa9Ztw9M06u1ExEyaM/hjOedSUnUDqhK/xTn0iQjGKdgU\\n\",\n       \"LgB1V11RyjZAxpn4I4Wi1ap/B6VVCwVQw7OlWvgv3nvZXLTrZ16U3FV2XiXokVLEsezj9dIoYJ6P\\n\",\n       \"QtJ9oIzs+HRAhobbffLhB+Gzu0ee5tLzv4KwvRJ6dgN67lgcwB9+4IXq1cQomIbeNpdG943LYdHo\\n\",\n       \"XkTEvK9nMxOMr9ekQoQeRSHfXjyDSkcPJKHFSxxD7pNTFCZmkd/J1DyreDMsxPesxT27Xl2GbVMU\\n\",\n       \"QV4IjZiBqqU43jnnlmFOsGm3xm+ykQmFq4rH9f1ajY3a3WCCUI+f5dxTWrleE1zXZmG0IMW79IJy\\n\",\n       \"zrkp/m5r+x5pTrqtj2fWJw08rihwd865Atcwmdg5vP2eH0cXkpSS5f7v4trOf7YG3bJn93gPkfle\\n\",\n       \"yeOKsB7nkDk7hwyU6npi89QJ6OhWvO1KFK1VWjYrcG1lPi8hnua0Wwy8oDDWBtqObRopVENoVICN\\n\",\n       \"eV/pPsxTSouRZqxBd3YDF28fqWQApBm9oCSxCGNMn52sfNHJfE76brnQAsWUT9hvMwrKeyZqaVLS\\n\",\n       \"9nOaD55WtBJBvL7D8HFgIr7jWZiC5suyHdhP2J3SraRWbRt/K+oBV5SkT7UB/p9ENC3dZ2uKIiIV\\n\",\n       \"I0aMGDFixIhx03hhiJR/ed8WcDWDFHofg7Te4BhebG3bVX9vlytwQCEk5XRXkmS/QwBOEaOmlVKn\\n\",\n       \"OHB25X4GIrV+eAT5KKBqmodqMsKwpVnT2dUuXThsp2//w/06ZyhKRtGdgmVYrfUiGGUf6konCAFb\\n\",\n       \"RQ707Pg9pN9r/aOUqJOgSejIbEe/smP1GnY1Vp9LQd+Ifslo7nA+KpTf0Z1Bbx/E64W6Dvu/c3GR\\n\",\n       \"z4lIibVvWCWKAL5C5yZapwsDr5YV5qYmSuT/fyTJAZN9L/y+eGZWbCOIjMciwC6BBCxEbExRZq65\\n\",\n       \"6zn7SWuyYTwhYaAQpPHJs0fOOefefOONsO3dH/h6eSpApmD57e98J2x79WVvp3B8ZGjG4tqjOGtB\\n\",\n       \"iWiF0UIUOxcheAW3b0WfriGszwo7frPxiMyeOItXQEmePDb0q5r4izeuzIF7PvdtOjrwfa2WLHRn\\n\",\n       
\"v742Yfsax8ok1bpEDUMdavsHRzj+J2Hb4bFH7OZiXZESCZW7ZwN06M69B84551pJIrjqidzZOJm3\\n\",\n       \"i61tCZCOrrZjXUPsvr4SRAb3TiWu9Keo5zgG0laNDWk6O/Nj8ZWXHoRt4bgjO/7Jqb9OxWjbHzAV\\n\",\n       \"hL1HIklbiVM93O7rmsJ62wfrDhZSQ28FofBsbNf1GmjevLWxNp971OXwWIT6EOBnTlzB0RWsdpHp\\n\",\n       \"xIZkg0TOgU/RQV3T8JyyTQWdzQU5yXGwVuZz1kIkcrzRpCCMl1ST+WnrIvN08FoYtAnIlTrFB6sH\\n\",\n       \"eQISuRbopgjWLpjDB89J/6/O9WEuHCA52zY16a7nZHC4GcBUaLv/31zmMGZKJQPkEN8ThqEBOnq9\\n\",\n       \"MIuVIyTXqP2FXTNBAj9bax4RqRgxYsSIESNGjJvGC0OkfCjSg39lG1eirdb84S+FkCXAoeV6wsux\\n\",\n       \"fs/yGvEdNRyj9mqbC91Vw2+AfvHvdlsjM+BWn3tt1c8C5zzYL9/C5c0Yr8taQ6tlAwYpsTkPIm0G\\n\",\n       \"R0xKvxxAeNvfD9XXk61tsqhxNWpx1aJHYqp92umrPFAS1UiRt8a2UpCG0cj/vZrLCoppxVprjItO\\n\",\n       \"lUhhNaH119jFkjkuCAQQJLkjign0SCPh2cGpK6qW5jT1sx8XLAqYiyFmWPVr+rv/m6jCyTOrdbdX\\n\",\n       \"ekRmLDqjS5i06j1RICV9um/Hujjx31uuzOjx9h2PktSNVHpHPasGKfSF1iZEH37yidXLu3vvJbTz\\n\",\n       \"Udh2uOcRi1Fp7ZzAfuDRJ4amvfbq55xzzj1V3Ro0L6ydpytD6vFOTs0kkxYat+6+GrZR87EWnc/Z\\n\",\n       \"R6idJ2ezxop0IX3CVPsc4rfrhSFYBQbD6toMNDvUcxsJ+tEB/Rvt27YF+vXw+Dhse/rEX9vxyAbg\\n\",\n       \"eN+jPhuxhJhO/SqZ2sDVxlClBNqgQmrNVSOvocpkBd02/vh9qmaO/rwnI2l70MjYMQrU4huX/nuK\\n\",\n       \"YLDu3mhk12mJeezJD/4obGuSDfYhSA9Q2kysBlizs2jt3iGaOl/4e6KoZKwD9cpau9bUpuZiyJoD\\n\",\n       \"EWlE38ZpbHFtKNU+EFudpqj1JAiTyjzNsp+9IChJvm21wDp5qeqW8EwYqU0J0S9BpFhXjjUMVefT\\n\",\n       \"Y7LrW0WptrWcfD4oqtPgHlPkqoH5Zi/XmHqoPNXXA7IJufwfmofD1oNvA82Xa1LAzkPrtHJs9YLm\\n\",\n       \"Z8X2s9Pq5G0/J2k+m8k++KytxTrk4syjz/qIrwG2lqJ5Dc9A6U/VVe6KiEjFiBEjRowYMWLcMOKL\\n\",\n       \"VIwYMWLEiBEjxg3jxdkf9P1z7rDbKaQtIPtcxHmk3hRG5NvgTvuDHa6nu9Teu/7a5ZROOlBhRwp6\\n\",\n       \"u53fF/ruOUoxUXFoRzhT4Hmcg/SS0XzKLaIzWkmr3zDVt5I02XAw0INCBVDQ2arbPBFgNfHmcQdW\\n\",\n       \"D3R7FwqwprOufY3u5erAm/Ye7p2BbiiE4iCNclUYPLtBqrvSnW4DuFng2ZwuvnI+OVLRa6m/RR1r\\n\",\n       \"TypQRLcpqM+8VBqPPLLUi0P/aLmuCr9Jpf9JB6glAQW1y47CZnNnvv3AU3Fnz4xuovO3dn9GEXmv\\n\",\n       
\"Y9If/0DE3msIbzOhtGk/cI2af6u10V45VLaaAPIR7BFef+11awAE2IeHZlPw/e96mucV+d6jR154\\n\",\n       \"fXzbxOuLuae7qjHGQWfO4mtQcM1KUvORfl1dmf1ABSqkFpuMNSD96dRorDUcm5drE+XPQK1doYZe\\n\",\n       \"o7XZYENASs4550oMmD4xB3haFyzXkkIPEfP5pQnVD/b3cV52jB4U0N5M6TaME1zkO7esTza4167O\\n\",\n       \"RUR+Acd0oR9anH/bW9/1jadvG7vEbgMKspM6cUe3fY3BNcZEcynieA4FoT2Yiv8Pf+UXw7bRXd8/\\n\",\n       \"lVB2CZI3dOwGZbfOHZBIrNe+n5pMrgn+HOXS/yktQYTaY03AQQ07f4zlue1vg6SNzNnY4XV3qLGn\\n\",\n       \"UgAKqjvNLKJeWuwnOuxvJKL4IqVQXuxPeloiDHxn/Ge0BNJTgLQh1eoIuOxtoxYGFKDbjxeLbRqP\\n\",\n       \"SvpcMo9YDaOV+55SBmO95JlMqYJaLeD7uXYearyWkqhTw2JIXfFZNSEZlg/x/wV9pzVUaSekyWMZ\\n\",\n       \"KEW1S5igUoM6odOpP5M6qS3mYq0K4Fyk9mLEiBEjRowYMf5U4sWJzdNkIPDju+cu26tO0p8plBsU\\n\",\n       \"304p9pPdp9vviHzBbkMa6nZ65WdVl9YYvBETTBsU2+vc88EFI9OFhymnNHqT7wdVvKJKWBGpJQNW\\n\",\n       \"2q0K28MLtKAU2HeLlUEqb+HBCDGR1QoF+CJO5Opo0HcBahMRI1b2WpPJrAa2Vz9wAXDjsQmWmX5e\\n\",\n       \"lYbSbK627R/YZ72sPrKKNfzUEoImlSpKHFpiqKlhyrxnsQRIsE0F+KwrV01NRJqWOP9CV5o1jm/9\\n\",\n       \"Xi9hejilhYEhCKdMIZeuPofRoKb/clVVjMQ6oPbISS73TjX2f2/EJmGB1P4kWIhoxgbRAts0Qyq8\\n\",\n       \"iuJfe+AF6M+eWl21L3zB1/DTel19TuNAE/vm1QTH9d87EmPKDz70CJYKuzeoMUi0wjnn9g48EtYs\\n\",\n       \"DKUKSRmy+j4/98L3Oy+9HLalsDE4O/NjTE34WlyLtdRwu/Wyryeoaf2sTXcpgm0Kf0eVjYnFle/3\\n\",\n       \"RCwpVjD2zKVOZYbVOW+T+caulwPCkWZ2XpOJX/Uv5tZOIkaZoN70RqxFbL8GAjkeG+pFlJT1D9va\\n\",\n       \"vj/Z99enE6j54HWPYH3v/Xft8AAiio0Ii+l9uO1bHBJhnJM5niixGsiiLUcjTTbyf2dy7aqESJMg\\n\",\n       \"ImjzamW/vV75Y+yrmSnQvBHQx0TRKkyiKzn/AshFLpM32ZEiU6uJcnDOzpmQXdmMJHTZjgSkMO/I\\n\",\n       \"HIptRBeds1p3jTwT2s222JzIViZIPJ+nyUCojqQAID2K6oRrJywBr7HWlawkkcjOh6eo7BTarjgP\\n\",\n       \"a/eFon9qtYD2CtLPX+ZSJ3N8y88TWS7IFcaWolk5UecBCPXZmFNEpGLEiBEjRowYMW4Y8UUqRowY\\n\",\n       \"MWLEiBHjhvFCa+0NiTzAcwLZ7SxvQyfsRug+Ooar7wOdwtWBtWWtPQi7h1bog5btaq3+duC23VJE\\n\",\n       \"ueP4KoB3w9+qZxXd2wdtIgU4gFF3uH1TvL7Dnz0RyJIsRw6uqBF4ugQ8vcOcd3iVWMJIKCP+rTRK\\n\",\n       
\"1kHsKdQOu0eFjcHuCx8mgvuPJl5QOh4bZbEqPH2yqQV3Jd1WWpuykmJHO1bb7qBb6ZUVYGGBuAM9\\n\",\n       \"q9QylaVyDuwLQaeTioJh+V7CRAXxlgLNU9aAvYUKodh3sTTKiHXQOnWHZlKCHL8io9DLsdDbh4cm\\n\",\n       \"1KXwnjX/3GBMrrFJaITMw96V1JA7haD6tdfeDNuuQbNNJ7OwbTb1fkerzdKOAYq2AVW3d2x15fZu\\n\",\n       \"3fbHFHPsPdZBE8dqnngt/dpijaj1H9mPy6WorXtPVV2dwONp39pLD7RCKIkW41rrWhYj/5tM6sot\\n\",\n       \"4azdi2dWj/at5XuTA3++19dG3yXO/51Vvr2jyd3wGWstqtNyB5GxeuZ0dAeX+oM97ndNNrheLLFf\\n\",\n       \"63f6oa1qUGZCI0/2/LkmQlnlENE7pTtxT2alzp28x2VGCbsWr7yec6f/fynX5jrcQ5frs7Ct6vzx\\n\",\n       \"WzmvEeaCQtoODXEQszvn3NU1KCuh4OkLlSCxJRMRO0XmhVwACq+1dmr4vvzNeVwdwAMdKG3nvV1j\\n\",\n       \"PtXzCpU9ZLLpMBY1UamBZXmj1Ttwb7et0eIp6XaZJxxlKwNPQ4rNeSzFYLapQNKhKkFI0XdJsv1b\\n\",\n       \"fcZw3wMKFNcnSA9kTgrUnuyXPoqDyh6siaou6iF3SsTz/fDf5/ezKyIiFSNGjBgxYsSIccN4YYhU\\n\",\n       \"mqZDZ+9grP3Zb359QK623yoH6ActCZo/Pm0x2VFAZ9im7WPZNjl8cCBXZ3O8VTtd4beD7+t7bM+0\\n\",\n       \"Vi2XhP/RdFWK3IdYHrY1svrAv3Qdd84cgCn6dYLgpFi5JFJfibtTRILo06BPHEXE9j2uJvT4TEVN\\n\",\n       \"pP5R0vtVbIq1W6KV0SEK1GrtRQlbBb10GMWKCNL2QIWqKc5jYH/AlQiWvcmO1Uo7GEJcrYk7c0gd\\n\",\n       \"N0SgRJu11F2SePSjleNfL/zK+irxq+rLlVkdpBCn92sRcfZsrqE6RYHkAbFpmBQQBXdqHQERsYiH\\n\",\n       \"uYqkAL+QMTGuPHI1qgSlQf/kIuJvln6/H330Udh2fGsfxxKrBQjL50uxM4AVwQj9NBfB9Ouv+Xp9\\n\",\n       \"V9d2rmdz//2FuI2PM2+ncLGRtHacK8XR/mT9NTk8NmH1D3/wQ/9R7Y87duZEnkEAnmSGtMwgbD84\\n\",\n       \"MgTnI7i3FyIAZ0241craztV8ObL+XCHZIE1ElI1/RxNYgsggWq/gbC331ZOHj4Y/dIa+N4JSpHCx\\n\",\n       \"XnUqLC5xjjJPEYkFcj25JX3CeVWQ4yuI149eMuTs8ArIldw8tKTQuYNu5600nmgLHeP1Btzgvltk\\n\",\n       \"4vaebJ9DNfXH2F/ZPXGFZ4HOZ0xo2TTiwE3UeYMkEkEriHRXgmC1GONaL5Gu4GrT0hBNUZiq3bYp\\n\",\n       \"6HB8oqkbsTVgcowbuIMPmRbnzEU9EZSsBxLVZ/qc8OOoEYQ7h2VFOrAn8v8wGUPrr3JMJImgxDss\\n\",\n       \"IdKMNQTlWuNvTUpp2GaxRODn2XOsjj/HdGsb3wU0sYeIVa99QjujZvtdRK0buuhsHiNGjBgxYsSI\\n\",\n       \"8acT8UUqRowYMWLEiBHjhvHifKQSNxC2BmBvyJnhqwrFYdug4DB/qtTStlM6aZ6wRaFACtZ30H07\\n\",\n       
\"bqubkMZ79OD+uO3n//xfEhGRT33uZ8J+Szv+o4/viYjImuQPXn8rSBd8790/GrdtL+CJ9n2Dwvdm\\n\",\n       \"oS1HN+1aK2n+5k0jarcg8nuMnZuvvj5+9qWv/osiIvJH/+B/te+nIS1X7Ns+0mn47euUxkyRej19\\n\",\n       \"ZqnlYYkULDG1c6TeSvU6LIwwP2ipe04eYkgFDiSd8hDk+Tu33xi3nT4Lqb+MpA6qLsw350sjO8+R\\n\",\n       \"xlkTyV9lNA4ODkRE5IhSdm0d9jeh75+ehjTvdGZp2dlhSJF3e5xGAbH8gIp8ULLf9JdVrFuQwlka\\n\",\n       \"IElDex0pPavsTLaTssG8Q5PyBPfnorbx1KoDAKXWJj5ckxYpm5pI3KIkbtriktCmbufZgXlFyO0B\\n\",\n       \"aa6OfOWScT6jAgC0xfcqjcIuBinOmdTBIUkwmbFcALZNiOwMwnI5ORi3XYOH2/rMrkkLq44CJHZW\\n\",\n       \"J2/QdvYrHYuMBtoHaCYNzfG5qCcsFepoui23e2KAyjyr1/fgvBSQNXEkvzC6WPT8TILXHaXMRveA\\n\",\n       \"np0t4J1Ivx2gst4zzwYpxUyLA6jYq4aHonibu4Yx9Wv3bkFuGBoeacxux/8QUkgtp/N++KtSRKRi\\n\",\n       \"xIgRI0aMGDGuGC8NkUqSVHJa6bleUQXyHPJaGkmSCBBzTGil0aBMkt8KEyWIezvG6F03qL8QyxqE\\n\",\n       \"4zPSop54GRFbC6x0Ci4hxfpoyisSCCcyUdQ5JXtDQI/eqlu8ffOxHMr+EybMgwDb0wpOyeYNlek3\\n\",\n       \"44qRloQQ7tRVmicqvCIdzGvcoO1tT4RFrCrSjM2J1MOI5Bw6FVrjt/rQ5uXaStyrEqWrWJGw/2AJ\\n\",\n       \"IUpPsgIO184T2VUF5hRBCueo5Hka4krG32VbioiVS9e1IUgZSLwNrWD9uOqm1ffYECJAAmEbqE03\\n\",\n       \"TgJ5fHbHUJfF8wv8ey98llj5/dFRQF1KGhMtyMHtYON6sh/6vZwZIvXd9wNRe0PigzdvBLL3r/yl\\n\",\n       \"v2Lfe+97IiLyv/3WfyciIhWRzd8GSnVyfDJu+9rXf19EROZ7dvy3P/P5cCwa632Da0JjLEFPPXxo\\n\",\n       \"RG1FBOoViLWEiE72A0q3yKxNr732hoiIHN42pGt+hLmAZHzNnhEAACAASURBVCUU7W3WNnbXS6BT\\n\",\n       \"NJ9MQfJtm4Dq3GAPsWk4x5zkOvaOAtLz2Z/86XFbXuo9acc/uINr0dAqHWXsB6Xtr4dIZ0cSAyqT\\n\",\n       \"sgUBf7uxMakFKzmR4g/3A1F+uk/oXwZiLRH15fyxiIhcLKiUG2gGixmnuO/2ylfC8RdGtt9i9V+S\\n\",\n       \"+KUiViwTo8KJTEDvIfFQUIZhqqgDbVtI6J8MqPKqs3tonDMJ/c4SFdq1NinZOvXW1yq+6HsSc0bx\\n\",\n       \"wtCxTIIiLGFcETAjmcrkkF7AGlIj6q8nIjI/AHJDc02egRTO0glaoDK1+2mrkqRAVXryH1VRYyEC\\n\",\n       \"/Au0T8eMBXOk9Z7ICRFSgvwO+AOUyNEcrz6xYxEBPacTvEZMcpLpAPq37gxNnKnX3450DYqC2Lu0\\n\",\n       \"00wAzSe1kschNUT38GYTxmRVXPYa5GKjNFcxTypKUOkIusj63uGFCqpoHn9RREQqRowYMWLEiBHj\\n\",\n       
\"ihFfpGLEiBEjRowYMa4YLy215xK3o2zeoyk5QXwNYHFW4M6V7EipHedB2OsMikuQZkoJMk6UPQ1y\\n\",\n       \"YN9SegbHJWkn6ZE+yYgAPpLdSdm5g/ZSkhuMmaiHGqUb1P9K0e6eyIFNDdXfxKDoLlcdLWtTDki1\\n\",\n       \"JsjS0mf2xQawaNM/tzaBZF+MpGxWVse/LRMMoXFD+HAFkmvGOiq16rjwb8O/nkiJqjarqtsiIgdz\\n\",\n       \"kOdHxN7OqwTJPhWDzDukLCkDK71qGjmGjEGs7zlVrCkAgsW1nZoBovaqr1RJBOQkC9fYEbFddcw8\\n\",\n       \"4eObLqSKfuoz/+y4bXEWtj14YATwvekBjoEiBmpbjZR1tya/SBBfSdpsLID4+KMPx2237rwlIiLH\\n\",\n       \"1w1u//CD8Pn/8Bv/9bhtDj+vL37ll0N7jszz7bd/92siIrJ/YP2f5UFT6l/453913NY16gBAYxJp\\n\",\n       \"vlsnRgp/9ix43G3WBpOPpPwtriGluxsQQKeH1iZXhDTfd98zX79XVkEz6c5NO9flMii6z+d2n16D\\n\",\n       \"T+B0Rl6DiabqlZzMue1wrVvSB8qgi3XgTTPrFaRem+fk4Yc0Sss+bRifq6Wl1k6fhoKCemWk+Plc\\n\",\n       \"C2XCtd6SOn2GduY0r+0dhTE0O7BxKkiVJ6SsXqOPJ5SCUrL3xcLuSYe0TA4qRM4+gBobS3cPbfh7\\n\",\n       \"2PGQRJ9RCm4fGlgbmk/0srd038+RKlzh2rgNaXvhWKvG+lr6cN2nFRUFID3Dvp57VUhRb7ZUgAPS\\n\",\n       \"eJ5S4RHmk6bROdTm8BrzyWRCXn+Y/1Y8Tgp8b8qaeYL9Wp+oywErgFcVFPCbcD5lZv2/2oTzHkhv\\n\",\n       \"UBkyjuZ/TcHRNG1pRnqeKbWB96c6e0zoz5A/nOD+8509KPsGaT9OD6J4Y03FFucXGOsd6cKVSui3\\n\",\n       \"Phan2lJUKIRioBzuCTm5KOg7AfvqpakWJXGhWGg7SSWKcuGzjLQCMXexVthub1yOiEjFiBEjRowY\\n\",\n       \"MWJcMV4aItX6VDImgmn5Oa1Ix7dqJodhpVOQsnVegJS+tVVVD/Jml9qb7rjqBJrFSrBKYh8IQXIg\\n\",\n       \"m7HXnKJjXBKrpaBZSqW2WXgTz+kctaw1UbTEFovSt1AsJpSuVQAtp7J6HDbh448lnqSOq07bJDHR\\n\",\n       \"Dc+xD+yPESSsEltaLaqsAiuBK4g3EImy7i57GI3VpIS6qdp6QsjhJ1Av30OprycEKQM5sKDVmoDE\\n\",\n       \"vOFrDRiRvRNVeZmRI0W7OiJqlmjTWP5K/arjr6DrP98Pq/80N1SlaQMZN89s9Xnz1mdEROR7H/zu\\n\",\n       \"uG0PK+aKiixyKLqPvn7eVMcFq75pNqPvh3EyyY2A/RRq2299yvznnj0PhP7vvPddOx8QhV95xdTj\\n\",\n       \"f/rnfklERL7+e78nIiKr1detvYeB+J6IlWv/yi/9ORERefzk43Hbk6dhXN0gNCsBmvDgowfjNi1n\\n\",\n       \"XhIio6rcev8PdL+oTMTpx+bN1735qfA7UpZ/fh76rCOfxH0Ufjx+asc/OFBfQfvtrdtAc/bCtWPP\\n\",\n       \"u8l+6HdH8KeuVoc9u9ZHuMc+pkV1uglI69mZtd1jKUy3kxzBu/CUFKjd6MoQxt1yYeM6K8JBJuS1\\n\",\n       
\"psh+QdIVHbzOHDkQFJBT4CV5+xzq5VT+30FiQQs7pLV+rYCwOCpKGHp43VFRRqtoDheg4Po01Cbt\\n\",\n       \"2pyI4g2Q6xKK/vlg479uQr9yEUUroXihJL+2AVIvOfVT4gOamExoju3D/uixIy7RuaXGuZDUAea4\\n\",\n       \"gdBnB8S83tg8sVrAxSGjYqdr8Ekk1HPolbxORHmozGc6r5M36DR/U0REts7uv6EPiGXBdhsg+1cl\\n\",\n       \"FWrhOcXSMfo3E9X12bLjUyfa9nB/qAyGiEhSQjqkpUIN3MeziRXPrLpwL67WhrD2qG5KMnruonig\\n\",\n       \"oHlPx7bDK8vO88+H8ZftmNJC4odQUlWAT9nrr1E/WXrHwLUdyLs3cRGRihEjRowYMWLE+LHEj3yR\\n\",\n       \"cs79befcY+fc/0Xbjpxzv+mc+65z7h865w7ps3/XOfc959x3nHNfffFeY8SIESNGjBgx/uzH/5PU\\n\",\n       \"3n8pIv+ZiPxXtO1vishveu//Y+fcv4P//03n3OdF5K+LyOdF5BUR+S3n3Kc959AQTgrpWJ9ohDZZ\\n\",\n       \"CVyNJ4lYB6jWE7OsQKqkJFXsxTqkW1xiUH0BnYk0U3iW0j5IabF5pRqzDpQeEvzdkQT4qNjKBHho\\n\",\n       \"QGWFbdM2q8YS8eCkBxTaNHxeaAXhzi8QZ5Ucui9twwahAeacEClvDQh45KkSEpqoGSylVrR/UtIM\\n\",\n       \"aUHO3DX5hCp6Q+muGv1D7P0K+jmOyOMbXJ8ccG6ZGhF4NgmQbbOl1KaqnZO2TI0+SyuDh32iaUlq\\n\",\n       \"J1JlTGJU09YCJEpOd6oWWUmpvQwpg5JSBvNZgNvdYOPvvff+UEREJmQa6zvA06Se33ukQ5AWzMh4\\n\",\n       \"M9X7g/WZsPZZrqyI4O7d10RE5MMPf2DnivF5dGCGw5tNaN/n33lj3PZP/vE/EhGRHP20t2+Gvh4p\\n\",\n       \"q1/7tb8+bvv1/zZMA7/4i78ybnsT6bbHD8w0uOtUW4eI8qmmrGxMqAK/jpfz56RZBAI0E2HPzsIY\\n\",\n       \"PpkZ7D9orowY+D3mkRlpK11sQCItbd55+gRpwQ5p3CmbnId/PRd7ID3br0nZfhJ+e/z6G+O25w8D\\n\",\n       \"sf9aa+O5XiClQWmhCca4o5T2dh3SVjXSc8yidZgLONMwK0LKzlWWgs2grdWvqQAG6UPWdtqADLxd\\n\",\n       \"W6q8QJqjrmEondD4gwm3ozYlMCFPdsjBYR8FkbIHpAq3ZBqsnrYN67dh3E2GCu22se5w/2/Xdl8V\\n\",\n       \"0MfrKD2pOls81rTwhCkVSRL6P/Hk3uA1fRXOMSMXDVWd30kZ4Xy2NE8VVfjt4oJSq3or0ByjBVWO\\n\",\n       \"6A6p7OFfJX3bPTTNw3j2pKw+ZEpVIS0wzCNVab/tGi3AsmunrhRtexlTSXcI2KFPOugtzbgAB+T1\\n\",\n       \"hLbpw6WtSVuxD/dCQvNpP+A5zQ4YY7qRNbhAy8G48pQyzsbnJKujq7k0pzHD/jYbe3ZsN+Ge7PlY\\n\",\n       \"MMveYc+zwN0L4kciUt77/0NETv/E5n9JRP4O/v47IvJX8fe/LCK/7r1vvff3ROQ9Efn5H3WMGDFi\\n\",\n       \"xIgRI0aMP4txVbL5Te/9Y/z9WES0xvmOiPwOfe8jCcjUpRiaVhrymstBABuIiDeK2JLc9lQVqBN7\\n\",\n       
\"q8yB/kwm5DUHouqqNmKbigEXXomShL5ATXYHpQIpct2asmyGVXXHyBH8slS5WEQkwakVOz5Noc2K\\n\",\n       \"yDDBTVc/LaEPDQjoaU/vsfDf89QnCdRhB5JzUInyPLc+znyB4+f4186h1jLYHfAQpfYpE/u0rJmk\\n\",\n       \"FtTDipCz9VK9C2k1jT7m0u0KpbBblGnnU0KEQPzPCWlscF1TXiVuce1IMTcBQT/ntUKqisW0IgTx\\n\",\n       \"PIGXEysG68opJ7JnDrKvKrKH3Ybr//HZt+28qmtoE0kigCjriIDaN2ElXrqwv5yIsOpd2PRW6j2d\\n\",\n       \"BFmDQyp1v/d+IJQPjbVzbw5iLSGYE5Rxv/ud74/bDg4DAlUCOTg/t/vq1/7a3xARkf/8v/hPx23/\\n\",\n       \"+r/xb4mIyNd/+/fHbU+efkNERE5OjGz+9ttvi4jI44+t7R3Gybe+af2k6vUZ4NklIVITEEzf+onP\\n\",\n       \"j9sWQHX6h9avb0LtvG/s3nm6CXICN28awnbrbvCs29uz8aSoo5KTO0IL1KcyeUEBhidEugFBvV/a\\n\",\n       \"ue4fB5Lt3jVDpD74RpCTaE4NuTt7Hq7JvDKETTmzLRCpGdVaHB4F9PG1T/+EtRNIb7uwQoX+PIyr\\n\",\n       \"hFS01yAPp4TIdFvIJFDxyBbyJDNFM+geLlBEkFREdk8UkSevS/TTllDqtgn9VHpGfYHwEXKfYsra\\n\",\n       \"4Jow+m2q13YOWmyyXtM83YY+KQsrCtAMQ5qQn2kPBwpC80fIHsen6WqsCmG3Be/VxYJQ0jbcCwk9\\n\",\n       \"ky7OUIBEiGiDoqiBimcqoOMFPElTytK0Kk3hKNOg835GUhvqisFpB/WaJU9WD/SNXS56IGxMsFaS\\n\",\n       \"ucrZ1K21d38vjPV5anOiplsW9OzqgU5fkHWiIpsJ9WeiD0+xfurgbVnAa5OLiJSon6bsdbjd+UxE\\n\",\n       \"pK4VVaOx3qraPEknYU4g4E78j8Cc/l+TzX0YUT+M0v7D6e4xYsSIESNGjBh/RuOqiNRj59wt7/3H\\n\",\n       \"zrnbIqLmaQ9E5FX63l1suxS//VvvjiXqd986krc+O33R12LEiBEjRowYMf4/jQ/fXciH76rMxiWa\\n\",\n       \"905c9UXqfxSRvyEi/xH+/fu0/b9xzv0nElJ674jI771oBz/3l98aiXsiIksQLMvS4LkG0PbgDAus\\n\",\n       \"oJVRFIZ3K7Q8JbXxPRAvzzamGNwMAfpNQSxlgplC9jvaGYBxUyKsr9dId00ZxoXhMCnbOsCojlW0\\n\",\n       \"FYLE9/LO4NQJSH81p8wAhedERFRlc5dY2zPAshWZG6shs6fzcSBxKgGvo8Ex+vmK0PcvG/9qapPT\\n\",\n       \"km17mYDZKxmVIfhetT1s2wa/HUAKzVpLT5R5gMc9kVOVbF8RiX1ahmux7kxbRlMLA5sra5Mp3aH7\\n\",\n       \"bqDOmxLE3OBaTCk9kCZQUSYS7bOLYBCck+HwEikDNi3tkdJYN0ZsTZEOzqCs3DQ2riYwXJ2Ulh6a\\n\",\n       \"IKXy0YN747bZLIz1jFS8B6QMUkpLnj0PpN1qRkauVfjNEtpBX/rKXxg/+81/9A9EROQXfuHnxm2/\\n\",\n       \"8Rt/V0REfuoLPzVuOzoOabz33/9g3Pbhh4Fsvd1aHkmJ4puVXadr89D2Aor5BbsYOCWAG4k6B8m8\\n\",\n       
\"pVTIvR/cExGR23eOx23Tadjfxdr6v1ViK+VqyiJc/9ksjLVyYmkfTaMUpO2mIlCLhZ1XC+VxR/PJ\\n\",\n       \"2f2QPr0gZfHXvhCMjhePb4/bHt/7AxERef7M9KbKFBpIRThXylhLjmOsSe/q4GZYuyasmH2KuY5y\\n\",\n       \"YC0I5S3dz6rp05BSeYnz7XHzZo5TS6FtGY2rUdyOim2UbE7TlKRIm7IDwBrzc+uoeATHQ8ZeBiLH\\n\",\n       \"p5iTuK9VvXpFfT3TbaQKr+rhjgo62g7k9cbms0GNfDUVRgUrZaHnQCa2uE+5sELJ0GVqxQ6j9lJv\\n\",\n       \"165tw3XvOtKPw/5SKJz3pCKu89pOunGcs+wcVO8wyyv6IoqHMkrBgqDO7iEJqB/9wGbZSIGD7pLs\\n\",\n       \"6POFtqeFuRgoRSQtSSsSE39GyuJrDG7WgFJF+YxoMett0AqrlPpApu1KGeioX1WWLU0unz87hVRQ\\n\",\n       \"VleD6rC/0Ka7bx/J3beVruDln/xPn8ifFj/yRco59+si8s+IyLFz7r6I/Aci8h+KyN9zzv1rInJP\\n\",\n       \"RP6aiIj3/lvOub8nIt+SkJD+Nz0nk2PEiBEjRowYMf5/FD/yRcp7/6/8KR/95T/l+39LRP7Wj9pv\\n\",\n       \"KwtpabWSjWRrWhnghXTTGmGtgDnTdPLOuE095FJaOc0UkVqaAmyP8tguQVkv+UDp6pO39Xgzdc7e\\n\",\n       \"zBOUAjfkSZfqm35u3el1FbNDrAz/KpqUUHmllut6VnFX/jOXv8PPj7jW0qBMviS16w4rzJ5WKV5J\\n\",\n       \"hiP6RO0FMuJ2vg8CLpXGqoRDyYq542rCzjUHUbutraFKBmdC/xZq7w795cUIuylWRlMitmYgirrc\\n\",\n       \"VpUTXbk7Q3O8V7V1Wrmml0tie1zH8XsEyWnJ72pj53/rTkBfzs7MLy8HKXRLsgbq8dUnhr40HgRI\\n\",\n       \"UsDOIImwxfdKKnVe4BrefuPNcduj+wH12Z8b+iIg7H9CxG71uDs9JfI21Kh7kh3R2o47d4OEwXJh\\n\",\n       \"379Yh9XycW+I3DtvfVpERJ48sdXZxx+H1eKXv/zlS9uWS0M6dPzfuXNi24BiXDsM1y4jwvoFELTZ\\n\",\n       \"zMbaIXzfVgsbEwtIKHz00aNx2+tvBVK+bMgncqIohd3PEyByut7rd0jk4Z6YnRiCtIXK943rdg4P\\n\",\n       \"N2FF/sEP3h+37QNNOKlsQH39f/+fRUTkzc/+9Ljt2h3U4hApViDdUqAoQYnrIiIHx3dFROTw0K5/\\n\",\n       \"vwzHb3qSbpgEUu7Zqc1/MyAxm42RslVRPCls3NHdLiK796u6QvA8mY1uD/bLTpETQhUqsObVc1NE\\n\",\n       \"5AB+ogUhAgXmmBYPgKekhF2BZD1NWRIkHL+jAhxVqHcsp+PWaJNt2wCJVSKyiEgPAnZeKRF6/Gic\\n\",\n       \"QwaakxOQoiualEt4WG7Ia24CNDMj6ZQRuSFl77oNKJ1H9qPgYhtVYqdron28s00nMiKMZyNiRsgh\\n\",\n       \"kHC3U2QE1I8mqgF929cqYWJjrYZPbFPQ3K0+iTSf6uOH0bQU/rMdaVI0rRZl2Tw1wY/Ozu6LiEhV\\n\",\n       \"2bPOY95lErnrw/xH4Oco+8BFYcNYZEUK/HgHGSg7lDK0+oKIyuYxYsSIESNGjBhXjPgiFSNGjBgx\\n\",\n       
\"YsSIccV4aabFfdeOWkcilr5KSR9DTTtdx0S0AJV6Z/CcmlUyG0uNfjMyvG3bc3ymWiREBNeUGiN4\\n\",\n       \"IJszsbuFFk9LjDlVgB4IWveqCkvnqAaVg2oXJUzixtcLaoB6CxM83sE8MnMExePzZmNppLpTwiph\\n\",\n       \"m4AvFYHvO8NYizT0YUuwr+tV2ZdUZIHV9mTkqXvZMXYctUpIlRx9nO9A2+F8aqjNdnStExfSRwUR\\n\",\n       \"Jqea0vNMoofei+PUAoj1BC2rHotj11gcT/WufE9KyEC2pwfW3o8e/pGIiGSFpUeSIaS+BkotJmqW\\n\",\n       \"TddOSflMdkyc6qeFcX1+atfkZz7/F0VE5NF9UyyfTQLxvGZ1XpC3b90ysueDj4JWUU4GneUkpLvn\\n\",\n       \"ZLjbqGknUjsfvf+98bOv/PwviojIH/zBH9jxkR48ODAC+GuvBbLz979v+lTTabiPrx0a2XazUW0j\\n\",\n       \"GycHMIFW0n+9tnF1fD2Mydnc2rvdhLTLfGqprcOjPZyzpfYuzmGkvGfXROkDe3T+ExQN6D2U0MXR\\n\",\n       \"tN9iSelRpBafPbpn5wq9pQMqQGlOw9j9pLbzuXsztPnZh98ct+1fCym4azft2qUD1OhxbW7etNTi\\n\",\n       \"/hG+d2Tfl61q5tic1Ov88PTxuO3Jo1AAsCHyto4/1rtrkOaqKpCOOT0ykpiZAoAUGN1sJagKPc0d\\n\",\n       \"agyeEnm/UM0sSpWpgXKFx9MBURZ6pGcuNpTGQ1oqJ4P2FumhtKNCAVH3ClbWDsfqaT5p23D+KTT4\\n\",\n       \"WNlcU8COiPWDFgORtqEKuPWOjo9r0veWWtaMmn+BjtKANN6GnlO1D2m/ji6KA3k84aIAKNAXVBSQ\\n\",\n       \"qQkwFeoovYPTeD2KsvhecB5zcA66R09K7Gj7s3Mba0cHYU7oqVCshd4WaxV6pG8H4WtXXPpehuvu\\n\",\n       \"s5DG9r096zLVO/N2//WtPuOp/7Nwr3t6TrVdmHdZq6yFVqP0pCTgSMztBRERqRgxYsSIESNGjCvG\\n\",\n       \"S0Ok0kTEEcHL6Zu7Y6QhvCVOyMOnw5tmQ+WnOXzKBrHfFopIkU8cOHHSjorqLE2gXm/2BruBrIFn\\n\",\n       \"SQKsqraJkQh1VZUQAd1LWPWlORPasSJSDztCulQSoWeyOY7ld8jmeDNnVEXPj1a/SihmFdsEKzZd\\n\",\n       \"aOQJqf5qmX5Bpb4gduf55Td4ZvHpomcglKpD3yVCb/Uou3be0LRSDQVBJtxQWXtTh3O4EJOwqK6B\\n\",\n       \"HJkT2TNXtWn2NVTCIBPgQVTnfk/VayqssFhqoYLnlWttNeKS0JaUVnUKuuWMfqX6EcsvwCeNVqm6\\n\",\n       \"Oq/7sN8vvPkXx8++/V0ohlevjdu6bSB0tkSAP4Ky8CePjFh8fBSQkw1dEyWIbsh/7fpxIE0rWvTK\\n\",\n       \"K3asH9wLxM43iOzeQ6n93g8Mfbp+/TrOy041xyq+76w/FemYV4YwrlGePoWsgbZbRGQFpK2aGPqk\\n\",\n       \"91pP40/V2dlDbA0PxYzGs6JPRWEIo6JTGyBdAxFRJ0DV2hVJgUOJfn7DUKJPPgh+7q+8+qlx2xl0\\n\",\n       \"Ovq1lbU3dVjpz28bmrS4OEU7rU3Xj8PnHRSuc5IayYBOdaRznM+uXWp7g/lnduOufQ/E/4tHpCyP\\n\",\n       
\"OSGbGqFZLSDUkzJjSQogwo7L1WtFxKidkDNhV4qRtM7ICTIMPJvNgdR7/HtIWYWnQIsykslpobK9\\n\",\n       \"U5KPubDtGPUP466he0IclMKpUGjAnKmODsmOOwP+Jsa0umwMYnNXN6h0gvVrDeQwI/kBJe2z/ECO\\n\",\n       \"QV4vw70xkNehonrdYPew67X/KSMy/nUZVeLiAUURk9T6uBmW+CX73gKJhHr4dkP3FeZfJow/O7sX\\n\",\n       \"vk/PDn0mejE0S8dRS44CPdqXEHIomMcbJaB7KmJSXj0T21XihiR5hhbFBinNyZifeT4ZW0zPKUfH\\n\",\n       \"e1FERCpGjBgxYsSIEeOKEV+kYsSIESNGjBgxrhgvLbXnpduRvVXYVeE/ESMg5hnBjnWA2+qWGbuA\\n\",\n       \"R1kXSYnFlNoaQHJVJVRWfR6JbUS6TMEA3yEgI2XWEIypBsZDzTLe0AwhaLsfU3tI8RA5VNvCqUg/\\n\",\n       \"4HNKWfQQ3xrYIFRhSTZcRt/1ZGRcVlCFz0Jfty2R85AqzYnEnuVLtJOIiEgPMTlPYVaGgpW825Ni\\n\",\n       \"sAzhOrJmx2ggirRASer0GxCPMwKqV+vQzvm+HUth/oxMextVRaZr5xMllhsErjCvpgXXG/v+aydB\\n\",\n       \"4+fs3Mje80OoIxNhulPiKaUFR9dq2jYSmimRoWax1+afFRGRDx789vhZBYPOtrHUpir6TkgxWYnn\\n\",\n       \"16/ZNvVK5oKGAn07P7J06wIOoqdnIWV4+/Yb42enp6rfY2kEzUCcHJva+gV0nMrS9ruFPo+qJIuY\\n\",\n       \"8n1LhqeaZmvqJfZv56DK/puNkb2vwwR4uzayqWpR3SKDYqcS+KQ3dnAQPt/ft3SLpvnnc2hs0T3U\\n\",\n       \"4bNqn5TgoTFUZKYjNT0O5PDhk4fWzjfeEBGR03vfGrd5zGOZszF+gjZXpJWWoHimQHp0781PW5uW\\n\",\n       \"IFFPyY0a81m7tfu0BIlfKC3UQNF+OrG2C1JENZkLJ5iDskTn0Ms0AiZHlyD7cnpq/A0Vj2ixi6cC\\n\",\n       \"FJ1jyW96TK3pHHNO2l5qLjydsGI4NOsoPaRFLBvSEZtg/vNEAdnU4d6qiEucZVp4gPbSHJ7pM4um\\n\",\n       \"vwyk9JbUybNMn1NGAVGF+umEi2KQRiPD3d6pKjjSXi1pTG1RAOCs2KVM1bGCSORINzJVQRvN2l4e\\n\",\n       \"431gSgmKoXqi2aROje6R9iUCvGrFeep/fXbkPbcJzwlHY1c1w0g/UnB/5FTQNeDZqVplLHvVoPDA\\n\",\n       \"U3sLrR1jvTM9LPWJpmo9aVbp0PaUKu/dD8ecIiIVI0aMGDFixIhxxXhpiFTbLSWhlVmHt+RJbqs/\\n\",\n       \"feslrrWkWOlkVI6o6t2MEqg6qyeyb68lm3gj7sVWYYMP25hYmAhQMkLOPLpsS75WuZa9LkmBe66/\\n\",\n       \"IXVYoFROSz5ppaNyCT6jMmCgGh2R2FVVgRYE0uMNu8+4rB+rQyrTVWmJ3AeCLaueuzSsyBvygVO1\\n\",\n       \"9Z5Qwim8DhtCpLbwdVNUUURE+bFDclmBfEbFA/sgAG/asILJcpZ/CMenBZnUq7Dqm8xtRVqhFLsn\\n\",\n       \"9GEFMqRvrE19ourthDCMRQPh+ydHhrR8/CSU089mdF3VL4tQrcGBFEq+hlpq7wbyesKKLHVG8m/r\\n\",\n       
\"cHKHt4A+bZ6Pn01BAPXsYejDtumeoSrPnwXEpqEbpcEqfo++9wxK4W1jfTzfCyvnYh3O8ezsyfjZ\\n\",\n       \"rVsBudhuSWoBitHLlY0T9TrznlFSlTEmn0SMhaoioi72nWOV3lEfqqxAW9vxU4zxprNjnRzN8JmV\\n\",\n       \"yZdAX4qJXeschQ0FwQ8qhZFNwjXJCK0oVZWcSKceBNhhZcruRyB0PyHF9A5E+aM3vjBuu3gYlM+T\\n\",\n       \"jIotoNCcFJdL0kv1PLswRDA7DGR89p9brYCqeLtRVt8OCvjFnpH3S5S/z2Z2/NOnYeyw2rcqG3SY\\n\",\n       \"pwpuG4ojCkKOFZ3K6P5T4nndWZ8MOEbbENKgx6T9tUDHLurwW54v0lYJ2zZ3laNjg107lf1Y0zjd\\n\",\n       \"1iBbE5qmGYaWClXUg1XJ63Ra0o2+gkSiHi4jct0QrslQ2/E1I6HzJe+7Hwx1LiGd0AJpaqjYwUNq\\n\",\n       \"waecacHzhLzxnH6e2Db9uyBESNXAO5oTmgGFUoWNJ49nocMztG/J11NlgijToPIPCc1dXlXxCSXv\\n\",\n       \"0tD/BP6MTiWkZiE9fPTy/jLpe4uiJNbpyKfadiKxI3PUkEzFKH9BavvOX5Zf6OjeelFERCpGjBgx\\n\",\n       \"YsSIEeOK8dIQqWFoR2RIRCTxWgZJ5frg3jAfZxQw6xhpwfcI4Snwqp94ysciD6w5Wv5+lqpY2uXV\\n\",\n       \"Use5XyRaW3rT7gCZpIVtq7dOd0xnrV5vQNC4vHTQVT2hVFg5saxBiVx63TJKpuWnDN2BZ8BCbz7H\\n\",\n       \"fsOxysJW5h36k1G9NFOndSprxuqU+7/RFRzx1sYSXltMy6CCeNTMsgQSgLaw/IHDCkv9+ML56H5t\\n\",\n       \"VaGV40VF/K5NgfOi1V8f0Jme+iQHbyLHdV1vjXuQ66alyQAAIABJREFUgfOTkIO4A0qQZ8SvwwqO\\n\",\n       \"lyVlrt6N1J8S0IFJeWPcNr8eUILHENCc0jms4THJPIsZEIGWynUz9OuKfL0SfC+hdt66FVC8B/dN\\n\",\n       \"JuHps3CBPv/5z4mIyL17xgdTngsLWF5cKG+KPSR1RUyr2ibcE5MJiakCJepa4zeZ6GXYduOEUDWs\\n\",\n       \"vg/nhqotFmHlzvepiiOqgKSIyEzbTOevIqHMUdM2l2O/E/qiHEmScFDkihRRpFnBf+8N4zJtH0Ie\\n\",\n       \"ojCUbO9W8Adt1oY+zHFueUUiqZ0ivLhfCFWRGnzAmSGycwy8zcMPxm0V0J97f/iPx2237rwuIiLP\\n\",\n       \"n5knmnJk8tzGnc43JfiSzIdqwBtzhIiorAR/b7VeXdrmFLkivokKwV6s7b5THZGJhP2uCS05BHK1\\n\",\n       \"JKTzDG3KCCXScdVu7VjKh+LniSJCLWUY1E9U7yFWmhmQYcgZpkLzWqYjYa7vtrax60K/rje27fpx\\n\",\n       \"OMeUELYewpWiaD7xYWvMu0Nvz7URCGM6mj472VcO++12JCmAEnWX51gCPSVP4X8HBMcP9uxqavjf\\n\",\n       \"TgnpxTVmMVedID0b4Olzb2c+edHzfIrzAh+ZfPUa9EmzJakDZJtS0qRJVSSZRaIVzSMulyL2RU7n\\n\",\n       \"w2jbCyIiUjFixIgRI0aMGFeM+CIVI0aMGDFixIhxxXhpqb28KkbStYhIAii03dq2CmmkvqfSXECG\\n\",\n       
\"xKGWZhMwSC5dVui/qoyouG7D35oB6xkKBSycEenSJ+q5Qw3Hb6ZU6i1pgN47wnYbqHgLpbsU0c5T\\n\",\n       \"VRjn9kKugNJo6s1WEzlzq3WdDIWKKvAyZBtgTPauU3J5qZDxQKlVkGyXG8vFFRUUc4nE3um1oGNp\\n\",\n       \"qa/3lpbT9CnLOajasSdV8AaSEfN5KNdua9vvViUZCPaugTdPibDpoF6dCo+dcI4deS05kOy9t9SK\\n\",\n       \"oswH8FB78Nj82kqQ5zMiB49+fqTinEIBvWms/HlSBHXqnq6xpnRTKrLYA2l+swDplyD2Oe6JKn1l\\n\",\n       \"3HbnZkgPPXzfSu21GiOhvp7B9+3i3M41Qarq5k0rf//owT0REXn6NKR7VKVcRGSJdNuKSNSqjs4l\\n\",\n       \"5DoWWxr/mvpoqK69nOG8vV2TDvfsDMrmljqUUcOBvRn1GOz11oHEW5DadZqoJxyPcRR70LUrME7S\\n\",\n       \"UYmbTku9M0mJWtQLtKSctcqa0LXLZlAg31jKanJ8J5zPhcmOtFBKV9KriEiB9KHuLd0nuYLTUAww\\n\",\n       \"JE/HTR6l48XExmmzCL++ft3SyOcY2+w1pmk5TssqaXokT1N6TlXOd9J9TXNpm0NfF5RaVQJ6R7Iz\\n\",\n       \"PVIqh3NKFeEaDyhJH0jF+/kKnowdpbFx2zfk/1aAKjCf2TzdNOEe2zY2npVYzs+YHMUQVk5P9AgU\\n\",\n       \"kbREbB8wj/Y9p4z1X1bMRr8SfLFZq/+gXbsOKe0UOUVOuyotgot9Usy1Rcnfw31SXvYf7Acu/wdV\\n\",\n       \"gtNtSOmnQoRy/CZBWszzPjCfqROFiMhsBheBzMZ63eE6kkq4pk3ZbaCDxIijeUJ9Ui3dR/2Kvmjp\\n\",\n       \"mVCnmBP5vipRqMC+fl1oZ0cpYE2L9lx4lvDEcDkiIhUjRowYMWLEiHHFeGmIVFHuiaNSxqHRN077\\n\",\n       \"TtuFN8iC3si3KI1dEGGzQqkxC8fpinlg8nQaUI8OXkKeypp7p2/TLIyH0vCdl1EtCbbvaZmkS9nr\\n\",\n       \"Dt+mUnevIp6pks2J9OaVsEvtVZLzYKvaNUQqZ8dErFZRMXqD3oAA2BMpcQsy8kQF/xyjb7oKo5JP\\n\",\n       \"EDBzEuT0WIk58hA0HyIqAMCmoiBETEmchMi0NUjxVdhvURKJEwKbO+X/WK2uN+TJiLZX5cG4rczC\\n\",\n       \"qjMlRMI8/qjvgEgs67DCn5Ykl6BoSkbEeqz+JyWVy0JoNCVBuBxO5BURFpWUnYmt0pbPIDCKMnhf\\n\",\n       \"2y05A9m2W9g5NFuMXRaEVZtEOtctkMWT64ZmPHwc3Nlfff2NcdvJSUAsnj0L0gjsq5dloW2MNFmJ\\n\",\n       \"N8mPAE3d8Y5UnzIqddZig6Ky8+/gnaZoWkPEbnWu31BhgZL3J1NCtTDsG1p9TnNFfUm4FegIF29o\\n\",\n       \"nzXwH0xL+36PMumMpC60DFto7tICjIT8P/tpGIslzUlr+OqVJEngl2FsbZaE5oJ4nkGstt0aOb+4\\n\",\n       \"GaQW2ne/bvvYC9IZ9akVEQyrgD4lhbVzijHbNIycAGEgkUZF8bxTQUqSOlG5CkI6VEy0I/kN/ZvA\\n\",\n       \"P9luIbBLBTi664T6Kcd5T4B6TwgQnOdh7KwIfVoNYV7rB7uu6r86m1CpPYpW6o3dz2fLcN8XhR1k\\n\",\n       
\"swj73ttHyT8j/ZCaYYRCxy7vV4WGeT5VX09PWYcORUM9ZQ5URqTH82QYGJkJ+yX91FEImed/JXun\\n\",\n       \"lE3oW70m1Han52jf02cbeye2uMk87nW3kxHRIi5rUwHx2YrvJ8z/PRVF9Q5CvAnNcZgTevauxece\\n\",\n       \"mSbmfmvRFouPdhgfFT+8R6FN29RtUahFwtEe2SRF8EREplN7trwoIiIVI0aMGDFixIhxxYgvUjFi\\n\",\n       \"xIgRI0aMGFeMl5baS4ZMytTSXhn0ZhqCIgeo4iYErbfw1duQZs5qreq8nNpTZWnCAAHHj8rBhb1H\\n\",\n       \"qup5yqIhKdRUE9OCUdhPCApVMnJHcLMkl2FxN/rYqece+wVeViJXZJUVc/W3DK1mKnfO8DjOoyO9\\n\",\n       \"qUH1NpCym0xsvxXOYY80e7abcD5FQSlQ1UyhdFsL4idreygBcCBS5noTYNyDGXmddeG6a/omJ8Kk\\n\",\n       \"9pyj80+60P+LtaVC8jzso6R06/4kpKyWiY2TFL6DQ84KxEg3qC4Mp1Y1BcjGTprGJSJmnmtRBJMT\\n\",\n       \"Nd1n39uH9k+7NvLsmL7GveAojbhaoGCCSMTDNpzjtLIxOepikYp8jXxfR0Iumj5ZLCxVrKRwJR1z\\n\",\n       \"2kvTA0omFrFUEI9JTdXspLtVn4hSS9o7nlLKxUTTfEgj02czVTanMaTFC44KAAqkwgaazvQcj44s\\n\",\n       \"tann5ok8rD5mY3pma+dfgahcr8jXDymblIjtg6YnSYo5hcfdQPdpMcE92dj+elVRXpu20+L+/dD2\\n\",\n       \"d4IuVXphKQtVvnd3TLMqffCHYV+ZtWmDPll+8MfjtvlRSClu1nRPIN2Zk96XjhmH+zklasUcRQGe\\n\",\n       \"U3ZQ53c0/463EWlGVVCZ73ouSnDYB6vn43pryp6OX0KLadiyYnm41jx3pOiLnvShpuiTvaltew6n\\n\",\n       \"hO3axniK+aHRwicujkDb1aNTxHxK+dmR457oaY5XFfEs5QIYeJe25AmKVF7Xql/e+JE9szbsSRt+\\n\",\n       \"O9RMQFdfUSqKUW0nSje6F/yl2ot5xj6ZSIF1qlloY1i9bg8OrChmnAto7qygqdZS+r7eqscmUUXQ\\n\",\n       \"9po8Fh2eN5aCpJQhnutcvKXpy4zm0wxjrd+QthRcLjoa0KoVKDsK+FFHKkaMGDFixIgR48cSLw2R\\n\",\n       \"yiSTLDHSaa6Oz4Q+rfEG3xOJTN+0e0Juzi4CYZAWiZJl6qtk25TEqKvEg4qIxfDS2fZWfj3A9b4s\\n\",\n       \"SQEc5OieZA2ykZxHxGa8YQ9Ukqlll2500qZIdsl8IkY63CVMNzufidhKyBGJsADJr6dVikoLtFi4\\n\",\n       \"kTi5FJPQx0VFb+FtvnPMcOBwDh3Lk2OF33smz+NY5GGlFbbn5+YnV2AMtFgFduR/WEHFuCV59Ayr\\n\",\n       \"yvOV7bfuwwntU6FAhRLyxBapsjmHijUpul9sAxKgXVySEm6HlVlGpEslYDZ0Xrq/nZXmSGgn/0Gs\\n\",\n       \"yLakdqxFDjXGSYcSbRGR6RDO4aS0ld4pfPU8ESFbRcISKgkGyXvFaEquZGu7nioFUJH/nEZHK8dx\\n\",\n       \"H8mL1l66+rUtJdABlinQ3zpav42eaapOXlgfFriH2Ydtgnbm3N5MXQnoXsvD38uloW+qvJxS/fkG\\n\",\n       
\"pHxF2HgMt1j1d8TsLYCYt4ScpxjYrM7c4+ZKSAG7A4m5Xds1Lk+C2ni+TwR83ccjkMdL6q8nwePP\\n\",\n       \"k0xF26v8ApXkF4GAvmYRadx3N05MEuEMchMNIZF6nUr0ccJyBZhQWzZABXI0pXNNFSUmpX5VUeeB\\n\",\n       \"0gBZYtQl1eMB4d/SXHOxCUVGFyvrwxRjrG3sHLIKxG7GXIBmsUp1irHYDnaOOebpDoUljOoMQJC4\\n\",\n       \"1F7HGo+rflCUkqRTMMlwWf/Qhb8X53ah5nOgfiNyxKX5imqNm0bZn4QuyQgi03llZdiYESKbTxR9\\n\",\n       \"IUcPv8W/tr8BqJsizZx9qeCTyYVCFZ6JPNcMnUon0PkA/RmzKiJSoECKpjhZg+yujhLsg6eJEEbQ\\n\",\n       \"ypFkbp2izh5FeTxu28IzcKBrp/MeS7zsJIVeEBGRihEjRowYMWLEuGLEF6kYMWLEiBEjRowrxktL\\n\",\n       \"7fVtKhmZFqdQjE0dQfZA9C82puKr5PE8M7j/fBXg7gWpKB/vB2XpggjtCXSGHIiNCUG2RaFK5ETi\\n\",\n       \"g2lukhqxsURqifwRpQVk2VIKTM2FW9JgatXw1is8aXBuDqJmTrCnItwpexHjkmVEwFUSMYuvqrKz\\n\",\n       \"Jx2dehvaMp+F82l6Sjv1AU5mjZdMoVrW7HB6nQyydsniT35NHK5tSnhzBkIfE5p9BrIvdJcGUsxN\\n\",\n       \"kYrylJ9TpXpPxF7XqmYKQct5SKMVpKK+cYHEyxkrlypRH1pIhemFDP4ysTFNlDC7pu+Ff5PUSPR5\\n\",\n       \"FvZTkOFxgzxLwUrx6JMMcHrRmxJzAvK4I9Xp2QypBdICSjH+2Aw1LzQ9QaRY5E8aMma+ffuWiIg8\\n\",\n       \"g5HtrmYQUiGUitEUGGtLKRmdCegV0uYNpcU0fcipcj2ejjvVZAongetJJHavJGIa6/2YbrB+yjrV\\n\",\n       \"4iHNml5TcBZqvrtE8cJ0aiT+Fn2XEhF1W0OzqKO5A6bSAx0rhQL6lojVzQUI6KQ35s/D3MVGvprd\\n\",\n       \"zq+hDynd3GO/w7nNicU8pPnO3v3GuO3wOIy/+ralhS8eBCPl+b7Nia2/XDygqT1NleU0J+j0VFG6\\n\",\n       \"VYn3XNijiuYpFQ+0UOwe2MkWYysjB4oeWlUe6/w1zQmTSbg+R/7WuO1RE/pweWFp7DJHX9Mc22Ou\\n\",\n       \"6T2natF2SuOoHFOqNAZ2p9DUUmLFLkoV4cKaEvNTQlqBbtD72c613ipVwNqk7hpaCOAc0yjwDKH5\\n\",\n       \"T9NtPHcOGLM9p6fUtF5Ib8lDg5HU+1W1vO9IKb/9E9eONBN1LvbOKBhpEtJnrCOoZsVcbJAgBZfT\\n\",\n       \"w2vAeW/ofqpVNR1uA6w7plQZdnZIExSWsEF9EYp9ytRSe5oXTKlQR+eJ2lNRTvbDX5UiIhUjRowY\\n\",\n       \"MWLEiHHFeGmI1LK5kFlpK6OhhxIqEcYHrHDKglYrgBNyQrPm07AiO1+Tsi+8k3r2zhvCKmoK1KHt\\n\",\n       \"bFWhC7IdxdZUFcNNRV1AImcCeN1iVcOq4J2WBNubrqfVkcguEbvAiTNhcTqSk4lsDRJxRitI3e8O\\n\",\n       \"AV1JwcQYdKkSm8P+JhNbVSiJkrnETgmAhJIlTv2iqPwfq548J1I6muJJAVmlC1hNYKixigQ53hGx\\n\",\n       
\"VMt5p5M5/SCskq4dkIRCq8raNk5KKCCzd6F34XxXWyalhr5osQqZlIf2fW0nLQg9ToxXP6Nic0rs\\n\",\n       \"SFXJoH5SiY2eIDH1Eeuw4iu8oTXNJiAd6YS/r0rI1ok1EFb2euyUlEpK7Ur2Z0+uZ8+fo22QFSGv\\n\",\n       \"RSXZtoTSViAPs4r1wUG4n9iTr4E/JKNPWjOf53xPQO1+REGoXBmoLssKrFG6zwTgCcrq11TWr/dx\\n\",\n       \"RitJbd90xvMJ2gH5hZrQZy3XrwnpWp4G5G5+QnIB6PeOEPGigAL/0uaODB5uKakor6Aof/3VT9m2\\n\",\n       \"p98L5zUHqrNvKKmqjSeE9LUo4Z8R0nTxLGybElG9h+zIxamttFUyZMdrD9cieSHDFkURrE6N884c\\n\",\n       \"9WuvKtY2dnRuK3NSRYdMyIKuXYKiiR7ISUFzvToKdBckq7Je4hxs7lbFckbOtdR+IDQlxZxVTmxM\\n\",\n       \"Kr6ghOac9tGrJA0zu90av7L+coPO5zzWw7mqv6CIyIC/HfnPLZdQ1C+B1uQ81yMjQKh2gSKngSYq\\n\",\n       \"9aekW13S0bvQ5unNJtwTkwl54mGe6shRQOVu1MOvJKRpgr5jBXgBItRS6qZtkLkh9E0LtDyhT5qd\\n\",\n       \"qUj+QB0/GmRV2MVhQNv4+acergUVlOXITtHtLFN4kraJzVM636WelPLziEjFiBEjRowYMWL8WCK+\\n\",\n       \"SMWIESNGjBgxYlwxXlpqb1Ofy7owLRQlZzOJtAUpLU0NxlOSORs/7k9DOmbbWFpiC6JkQrBwg1RR\\n\",\n       \"rurdHcOu4V82VOwU2ssJdlQCeEkGlfo+WrNBI9qesRgHzhHYImf6FPZkddwSmjop5dtUlblhaafR\\n\",\n       \"LJgNgqFOTCQ+TT12mk7qLRXTAVpOmfQ6QsV2XhOQcXtSFt6HLsfQmgaXBwbOCLiy5xNKNyaq7YI2\\n\",\n       \"CRHGM02jpoZPz+bhWJzu09TWTlpSFO4mZW9o22wXZ+O2EiTLHMfqyXg2LwPs25BmzAZpVobW9Tr6\\n\",\n       \"wcbfegh9kc8NMk9HUialUaB3UysRlwxK97Lw282KtM1SjH8yKO2R5mHdp1wrFXbcPcNvKoK7Ly5C\\n\",\n       \"6ulgP9wT5+d2LNWCYsJmiVRhy6rDSDcdHlpa9DEMkq9fNxNkTR+yFpWm3saUJRGRVVHf03jRlCGr\\n\",\n       \"qGvxAhNQlQy/XLIq+WW9m65RXST0P+dxcT95Ni3GWNsuTYl8bxr6rqHUpuruFKSjNSrQ05RQNeE3\\n\",\n       \"Tx58OG47xJhoNyjiWFnaIUVBS0aUhQukSpOBJhTMnVnPhRLoMzJyVfIwFw9oSm8kMbPCM65FQfOa\\n\",\n       \"qoh7KmzQQgnW9tHCDt6faktV5J6gV9FB26iaUnoGbgtcAKGmtW1r/X+GvpsURgHQdPQgZBANY+6U\\n\",\n       \"0meqZF5Cn4xuNfGagqZUnKaeuWAg16KA3rY1DbTy6DnRjel+ooWgoKFQXbyU5kscIyPT+AwE9J7m\\n\",\n       \"/xKpr5wm4ARz4bKzOc4lSD2T3lONtNhqTYVCOO9MNeOIsuETnaetnWvo862WVGzQqrOGjbUS6euB\\n\",\n       \"rqfSJ9i/3oyJQzu4ACDR8yctLIfn6IRSdlrI5mibG+DKUJG2l+xqy4mIJNmL0twWEZGKESNGjBgx\\n\",\n       
\"YsS4Yrw8+YOukw15TuUgpfqUa9PhtdXYKjnLAhIxNEQUx+rgeGa+WuebB+E4BPto2eUGyInv7C28\\n\",\n       \"R+kq+0oNWpLfGdKgStRMtnYgG1a5bdt0WH0QYU2RKEXaGC1TwiADQl5Xn6yOjDLQhkh8Wv7NKyIt\\n\",\n       \"Sa9p9Vf83+y9S68mWZYldI697Xveh1/3cI/IzMjMyqKq6C41AnqMEEJixJAZ/4ARo57xC3rCHCQm\\n\",\n       \"/AQmzAC1oFvd0NQjs/IZERn+vM/vZW8zBntt2+umu7LQRSUX0tlSyG/Y9332OHbOMTtrrb022q6H\\n\",\n       \"EL9zdm4taidlj65Lr5kF61qnkKwTsOpeUQ2rtoHYl1b4bYWVS8S2F/JvhGtlIWAPYftQGfp1dYl6\\n\",\n       \"WYWtNFf4e1eZ27GiKFlkKEkEF/V1YdsegGYUuIaehOhpJu3kySaiw3V1NTm2T9IXi4X1P+/lt8ej\\n\",\n       \"iY3LEit3as8JF1xCvJkO5KIP9/Zpadsy1HyaCNXRldNIx3K4j2NE4wli3yUhUkek/SsSxOJsRXAS\\n\",\n       \"quEWqYv1wpC2w0H2cXbG9QexqqdyA9r/GTnU72n9LUbL9LpqFiJjRawu5c5ZX18tKSnhDz5zzq5b\\n\",\n       \"x5BzzrVItVekbaKaWjWE7+uckK5ckIaEhK0HOKCvqK5gV+8ena9zzvW4J+29oVnqij68/rkd98ci\\n\",\n       \"PPdHue5lave1PZd271/b/LdGu78jlHCFPnMglOzsTMbObk/O+ppkQ6v0dE67/9hWZK6nRwkAbpI5\\n\",\n       \"s8jJFR6MwETJI4kKhB+lqeMEeJ7G8UuM/+/vbVx/u3/tnHPu1Fl1hGi2FSDBMJ4P3WDos1ohxGx1\\n\",\n       \"kOh4IoYBSFSpVig0KaE4hPOUgOOBPvG1jqOKyAkRUQG+s+fJiP428m9Rd28AWupSQv/QxzIap7P/\\n\",\n       \"eUK1a7WuHKG5E+5jRkzMvgYim1p7am7J2JMA3WEcKUpHsOqE5KV+JIajUZRw3jSPf3ZM10SdBde4\\n\",\n       \"TaQtCEx1OWrmjRNYhZ7tD+TfKOKEAbXpsWvQ5+SSqiJojkVT0buA17qvtrcsC2LzECFChAgRIkSI\\n\",\n       \"f5AIL1IhQoQIESJEiBBPjM9H7bnONY35PnWpwG1jb/BkCnFYSxSYioHjhL01BArMiQJbTELftCRA\\n\",\n       \"HyHyvduJEFb9KpxzLgF8X5CLbAxh+0D03DgqBWLXMhcIJm8ddZRuCcaMMwha8bWUiux6wJ4J+WP0\\n\",\n       \"8Ptg19UWQsGeKDsPeLwjH4+hg3id/GHU0VZdrx/5eWhBZWpXpXamR9Te9NG56/5aLoaLc6+JAokA\\n\",\n       \"h3vCdgdAsCOg9dET7eC08LO1a6kC/E84MRfkCt6Bvi2JKiomoQCH2M6phNi8aaUvsrC5R/uoJ4xz\\n\",\n       \"zp1qFYUTPQD/qGgiZ1+0z+7wYd42dkI9JSmJLdHH4km9UKxjbdfSh2MWRwJifyTiRzuV5I8Ue6UR\\n\",\n       \"yW9mIWPrdDSqTCk9Fe+yiFfpUabbtF8PRHds4em133OBYDkXvk/696fE5nzcP/x+RX42KmhnHykt\\n\",\n       \"8sp7UKH8pwo0O6IRlitpExVbFzSwU6hdmbJZreX7Dc0rutu+Yx8t9P8Ho5Y0UWS9vbDv3chcFJHb\\n\",\n       
\"dLd7L8dSEfutCdHjG1C2P3g5bzu+Ebpre/Vi3vbw+hs5T/KgG+DAXtVUgSFTXzDyQFNvr9lcjwq6\\n\",\n       \"wjOK3eGX8Eo6kdxAfX48+ShFmjREZoETkiGYKoqRDKI04rOV+WhdI6Hj3Z314aYXWqqh+W+BRI3j\\n\",\n       \"YPKREZxVQi7emgwQkXg/g/A8AgUY0ZzcAXuIqRh5rFRUY/ud6zMTVazFiidyQB/xbOnJb8vhukfQ\\n\",\n       \"V1yMu9BKCER7deptl7I4HIk9lJSlonjKU3JjBwlKz+MeruhslYWPE52KqK09ntNVbW2tw6M6kSwF\\n\",\n       \"gnr2hRtnHzmq3rBa4Xv0LICQfLXEvSEJxBG+WzyHRJhHo97uU4R7Nw3W19V7bKTxp/eEn+dTwrTh\\n\",\n       \"xxEQqRAhQoQIESJEiCfGZ0OkpmFybWSrpWYQQWXe2upjXq1HJiI9nLBaKwy5mvDGHlFdpyKVFXFL\\n\",\n       \"K7K+fezs3VIa+kKdYCcWQstqbexM2BxNuiK39/oZkSLH2BSrrqHld1W8McdaG4/S4NWVl5248bau\\n\",\n       \"Kxk5J/n7UV0lqPL4nCYsJwZaaeghigwOu5RCOgKl6agOXgyB30Arg67X1RqtKnoVoJuIu2nEWZhT\\n\",\n       \"/bWuGQGBrgXadAQiEFMadAShIJtjnyAizckVfxbF0srx7g7i+cT2lzogQpOtZhOIQU+t9BfP7rij\\n\",\n       \"toWtYGqsyAdSUWalXFDLTvle24kEm0h/zjNCmBRNRSJAQuhPB5fxaTKkyQE5aHpe1UKUTkJ9r+nP\\n\",\n       \"JPZu4J4+EcKpyJEhOLQy/oPvOGfITfzIWR92HoSSliVSjQnNUIH4o1qL+M0Kq1C2P9Dj3nirK6eO\\n\",\n       \"8vp9PndGE/X8FksSwOOeFIRcj738ZgP7B7aQUCA2pWs9AM1LEk6/fvyv7Bh2Dh3NP/dvnHPOnRaG\\n\",\n       \"SJ3j3O+9jZ09UKwV7DcSQtWGUdCX6GT3td6JeH1BVSFqpP9fPbNjvX/9FtdobZxnisgwSgA0G/eO\\n\",\n       \"74kiR0Nn51THco1Jbn19itV+hFolU4SP7Bd0TqY+EWHuUkuEs7W1/8+Q0JNEZCsySA3NujaGowVK\\n\",\n       \"Gw02/jMItcfexn+GOp5JwnYethfnTBAux5Vr4LE+jHDbJ6RrUBR1JJuASQXTdD8HxofwEyCgZvtD\\n\",\n       \"KDnGPddkdZhjGdVLUOtzJJx2RKp/ElOty1H6ST+yeBtWA3TuWaG2C/7Rv86Z7UjdGPp6OqBSQM1J\\n\",\n       \"CVo9g6wusJuusXNK1jK2Pbnyt0BTtQ1zSmxQe5qWkkjUidwT+qVAYE/zudqdjFR/Ua1teur3LfWZ\\n\",\n       \"T0VApEKECBEiRIgQIZ4Y4UUqRIgQIUKECBHiifHZqL1oStxITqQVHIOLyCC2BNj67NLsnEsBCx4b\\n\",\n       \"85bKIMb1JBhLAAuvqOCxUhuzFQoJsRPAmOyiPqqIz5GhBCi1iWDUVl3UqfBipi7WBI/2cPFWgSVT\\n\",\n       \"kWpAzaJHN+i5sD+JnMswshAPkHFPIu5BKUiCZ3XfEBguC6NHRgilKxbnKmUUEfyMP8uU+DZQiyMZ\\n\",\n       \"f2TwGTrtSewPF+Fo5EK6cs4NLjEnx2R1Ox9JbP3ug4ho1wsT23ZIQOhJRFg36uNkcLNSGh35gqmp\\n\",\n       
\"yQQvHKZCFRUfHtFo+Ju8cNQ9PopYgKn0CLlNq/N0RwU/0U4pKDPPxYiRqUAa8tm9mKmYCH29dsTt\\n\",\n       \"oTDrRN+L4QDMVJ1SZVrwd722PqG0HBe0Ved/LsaqBVJZRJ7ChGXJAnhQZEzjqNhc/aSKwr6v7uVM\\n\",\n       \"Iy61vYieUzH8OLDYV9p6Q9eT6CAjClJ/q0JVdtgevdJYJMSGKLhvmdqV6z4eiNpFn0moTfaD9LvD\\n\",\n       \"b/6PeZt/9hWu1fr41gstXYOW3VAVAdUzrz5YEoPSkt3JxP6brVDfd/fmD/Ty65/JtncmXu9xb9kp\\n\",\n       \"Xp2ltZniRyUDkDDCczL6QkJeO/FM31ACTkPtqNt6Fa9Tv4eLtUf7TyRE1oLHJfXrDeYx9pZ78yDt\\n\",\n       \"M3JCEejzvCQaCXN2TxUNOsyBOhdnREWpO/04sgQD0gZKVJpQtHeg+X/Qou2k1FcajZNsZtE42r2n\\n\",\n       \"fr2Ay/uxtuefFv71NCd1uE8ZeRs6PAvYbynLpE2mk427CvKJgtzeF+ifC/jdJXRPdMrqaV6b0BbD\\n\",\n       \"xNQeKEi61iIX6nlB59kgaSEvjJYuMZ/sj5Kc4emZpDTfyNIGr8W4yQMO1UMmenZOqT7jOFUFtDA9\\n\",\n       \"Tz09Az4VAZEKESJEiBAhQoR4Ynw2RCqOIxcYpJVtAAAgAElEQVQ5cqLFS2JHKZx9Km/1Kb1pqwNx\\n\",\n       \"dbJUSx/LSmyiNPUFVt+eVphFj+NhZdpRym+sIm968ZzTdUlYWtfqNs5IE5AWEht6L6vPslzT9+DA\\n\",\n       \"fSfXVcfs2Axh6cCICATLsb2tqzi3obf/VN+q6e1bNbN9SysyoGQjBOOeUnhTOMYeBxNsejg783XV\\n\",\n       \"Xto6p3RQraE00spV60pxqmunteho9af17xTp4NWarla9NwRDV2JVbavvCKuJurX27HpxQ357+2be\\n\",\n       \"toTz+Imc8rMCAsxO25rgH5wMC+ZVu80r0g3EsIM3RGICchHT/pCT4HJCHVV4Wqmth7d7kkGIzKnG\\n\",\n       \"usI8VYT0YaUX0Tixc7b7rwL4aeJ+om2sSBd9BgSPESn9XFEQ56yu3mZjgmn9PKPxpwJQrutWFHDF\\n\",\n       \"BjLELura17dbS0BRJIoRFP0tIyd1rX2MBbiP0S/5W46vqDKnmsdY/Q+NTQo5LBEqQlqrWu573Fs/\\n\",\n       \"vb+WygprEsWfnX3hnHPuWNn33l4LcvInP/5y3nYAYppAvHtLCFLxDG1M1/rwQY51cWkr+LGW+WS5\\n\",\n       \"sLE2YHIryBV+Rgfpvqt1Q6djkp3IgRbFj1ANuRc1JfY4uHcnZA/tMSfHZDGRlTI+IkK4FLlqMZ7v\\n\",\n       \"BusvByCBntBfRanOCP08tNL/DjROEngGxNTH1DInJQF4W4l4v0XbeJrr+knr6tl19bjWuiU2BddY\\n\",\n       \"k9VEW2tNOhqT+LvlMaYO+Uh2aQkNaTHvnEhYXSBh4DH4LclbMbEeamcRE36yiGROHJ3Np1oNI6Ua\\n\",\n       \"q8oEKUiV55/oL/Ss0XHUkYdCguO3hAht1GqCIJ2+kf/JCaWKYVPRtsJIcK3BHNlIbUO1cyGsPxwa\\n\",\n       \"+p78Wyy5Tq4ijARnuY+ZMEaxPhUBkQoRIkSIECFChHhihBepECFChAgRIkSIJ8Zno/a89y4i744B\\n\",\n       
\"BQ+rwSDGfBR4uhtT/qFzzrmYKItTL79ZJAbtnho4O5Mv0wixXRqvsA8WwgHGrlkwB2qHoe1YCymT\\n\",\n       \"rwSEjx0VvNUCtTHDgxCPn18I7Hx7b/uoG/jTxESFJCrsM4izhGdR9cixWNouInhyHNQfJvtom0cx\\n\",\n       \"5mkiGgki7yQ2KrLqBOLOiB7p8LdRJ85tCjluVVHR0l6/R/cYSnUVZ8tJ4H7O4mWmLOX80pgKZKJA\\n\",\n       \"9fFkhUyLXKgfdgeuGhHZnpp3tm0SGqUbjRYexhWOIfBwEn88JLwnfzBQASysVBfnR4LFVvqkJxfj\\n\",\n       \"GALMiHx08l767MwoEWWh1Bqf0sNeruGRZxqg7YG2ObRnRYh0UUDs6li8Ksc/wpWYPYaWKALMVNjx\\n\",\n       \"KNd1e2siZvWMUj8n/puLFitVy7Sc/tYE8B/TcyyO1++zsF1pybK07ymNqKJz55yLIMC9u7O+8/y5\\n\",\n       \"0G0qaFc3feeci3WuSVmIDQH6wDS+XNft69/N25T6ePvtr+dtV1/8COdux3j51TPnnHP390Y3X17I\\n\",\n       \"tm/e/l5+RwWiT3t4FpGIebmW/v9wY8WQGwjUyUTcnR4wnoluy+EPFD2im0ABq2fWxHIDUCFEt+3g\\n\",\n       \"aF8sbD7JU5wz7TfJ1EeIKBh07oETfzDfqtb32dpo5DiVPnFfWXudGpn/Puzsvja99GefcwUIOe6q\\n\",\n       \"OLfrcZpkYA212sh92j2IZ+HtNUkwcrl3xZIKOSNB5Uie4eOo86l9rwa1Og3Wd7UwMTF1c6WIwctx\\n\",\n       \"mW4/VJAP0DOxBqXsiZZWoX5FSVnpTJWR2/cASQkXIdb5iSpaaMWNCRQoe9Z16nY/Wr/SwvTef3xO\\n\",\n       \"3NdOuJ6ifDZvy/XzntrJ6bPgEvviAu1yT1JKdmgrUIudzYke895APlI5+kdCc9ycIDLG9D0bg5+K\\n\",\n       \"gEiFCBEiRIgQIUI8MT6f/YFzzsW2MkvxVtkP9qZZ1bIizmJO11afAEJaFP2gFUEKG4XEsYvr47pe\\n\",\n       \"MR0/GtSuwPZR4VTSFaf6ApGiumpdp6JMsh9AqmtcEHKhbs9OVljr8ov5s/d3IopuOl5BPJdriAyl\\n\",\n       \"G1K0SWpv1YcTxK60cBydigNJPFzJCifdyoqMndh7rMwY1dvtdzhfW5HFEFseJkN1nIryyYG7PUGA\\n\",\n       \"2NGKJIMont7fta6VOtyyJUai+6OV1kKdvwk5cah151hE3cvxm8HsD9qDtCMjPInXFTlQleRjIfij\\n\",\n       \"wmKwFYgpJTpC29GixjUQlE+MHEK8OZDFxziv3IB0cQ1FrJy4htlqI4LipCB3dKzmq5Mda3884vtX\\n\",\n       \"87YU6eHqxO+cczcPWLkjKaDI7R6qyPsDpdorwrpYrGib7G+zNrFzDURkpHunDtFZZivizUb6ojqK\\n\",\n       \"lyXZDyhKsbLVoDq1M0qmx2f0S4XnLGzW4mEJib0HrL5v3r/HuVFduRo1HAlpOe2kD5UZ2Z8gUSAr\\n\",\n       \"bZl+/UbQkTU5QB+uxYG7vLCaeAe9T5Rq/nCS8yvUQuXycv4suhYk8Hi0eTLLZTUfE9I0oS/c3dn3\\n\",\n       \"FNmrCBEtcC9KajtNBlBUbyC3d62NFpeUVo8mjnhdrmh6QvMv7klE1QZm13SyLpnnZ9w7AnXdCv3+\\n\",\n       
\"bGXI+fpujc9s7rzeS9IMC8U95k5HKNFqKeL9ZWb9uchknLw8E7uI6gub/373/c+dc4/F2TOCl1q/\\n\",\n       \"ejhKf4qpKGaqtVvZKR4Iu6M2Hmpp4/xMvt9TEoOOJ0b1Na0/peeZx3Onb4kl8Vq7j5z9Md9OhHDN\\n\",\n       \"iQQxnRN+O+K3NUHdKjKvCVZTVCclm4qZ4SH0sW3grF5S9RAn80hDFg/tqFUZ9BnKbv9y3CSlyh5I\\n\",\n       \"PGvJwqBXJLCx/qfC94SE9VGhc7KNiSSy8/tUBEQqRIgQIUKECBHiifHZEKk8Kea3TOecm9RMkzQN\\n\",\n       \"DeoZVb2tqmZpREwaBaAkTWv765CKzvXsIiBSKxh99cTBRkgXZ6M95c+rPR1rJW+pZW6rxAELFkbO\\n\",\n       \"lCPvqZ7dYi2fp15WUHlsXP3ZVlap37/5BR0e10N5rRGMyLKMVhUHGCJ6Wn3BCiEnfYd3sF1AvcLM\\n\",\n       \"26pO9VicBq2oy6P6e7FWC7cV5O4oq+Q0tm0VahNNxDMPSGuNS1ulRYlcT+wF/VhSDT3X6eqLdEOw\\n\",\n       \"UJi8oS9qphmT1UKBWow7qnUYaZ+J2MwU5xnJvRnJQkPLxKWFHb9GzaeypLZWqwOCrjxQyo70ECl0\\n\",\n       \"aC2tMCMHRAhWEyWhCoutIJKb2GwFTkB6VrQiVwPBjvru+YX0zzgx5CBBWvXD3lbTqivS2nGbjVkN\\n\",\n       \"HI+EOup1oVHOz88/+oz1ZaqRYo1eDPPDPCczUays2XRTQxGxN2/2tBVaNkKa1CahpfGv55LQfmOg\\n\",\n       \"jVfPTI/hsZ8SNfmayuYaD33laW+oZgadxZt339t5vpCxu78hk0y1GCCNyIRVb0soweVzucfvX9v+\\n\",\n       \"FkCJNmoEzNojoNoPDzbW3nwPLdUzm5POzqR/7Mk4URHxlKb9CPf9WFH9OeiFRiAnMc0JM0r4SCOG\\n\",\n       \"zwgRUdQlydgQUzV6hCbjb09rerVC0fNgBEd1c5zCX2DM5Nz/gGw2LWk0gVKf9tavzzbS/uul9YnL\\n\",\n       \"i1fOOWMf7h9MexVDv/X65lfztg7Pp35kVAVaNmY9MqAqLev2Fvgt1TpErb0emtM0Z7sO7dekB0Rb\\n\",\n       \"NB1ZAmEu6Oj5N0DXlpKdjZvblrSRhdZOpfp7QJN1hJH3qGs6GH1STVhFczyzJEC/WIeZ4Rh3+/fz\\n\",\n       \"tjJRJoCskIAc5dAID9PHliz8TqDm2FnBNj0tfmv77WBdMtDcvU2EKVqUXM/3j78qBUQqRIgQIUKE\\n\",\n       \"CBHiiRFepEKECBEiRIgQIZ4Yn8/Z3OUuoVR3FbMNk8HebhCosiLH0igV+K4juFf5Pq7/pLXWusYg\\n\",\n       \"63Um8G2J1NwoNyFkDRpj35vArQEUWZAlwABX8CW5A2cbgTGblty2QSN0BC33oKrWqINVkOjSDQLf\\n\",\n       \"P7/88bzp7iAurhPXdQJknJI7dgrX14isHhKI/GbKzrn5bt/fvHbOOXeR/8n8kQoFi5LSgBdCrRwq\\n\",\n       \"a2utUxdRTbwBdFfLObyAWwcSm7tJa6IRtYfriJxCwWQ1AUHxNFGqK+jJcSL7CdhZRIQjPzv/M+ec\\n\",\n       \"cw8ng4y7XijIkWDsvkNKbCzX1VBbl6AHB0eQOYTiTFk50IcT1ZWKQRWfiMZzo5wzOysvMqHtFEYv\\n\",\n       
\"PIt+5d96MMpqvZFrbclqQ8XBnq4/VXqG6E431wS0a1yv5RqVTmDaSYOtLn76U+kzTMUp3VBXdp4X\\n\",\n       \"Fxc4N7vXmv5cUq0zpc9ZKK4xU0wkjtZzZ2pP6aaG6kROqdoUEGUBupGz+dNEqRLpawOJkxskcUyd\\n\",\n       \"OdbvjjI/jK31id01zm/g/o++SyJyn6gol2xPIDbnlPDFSmibBhYTOc0hWpO0JEuIBFUEWBR+AmVR\\n\",\n       \"kASgQL87EY2Xwkm/o7HboRZar21Hcgttd+5rWuOQHfPVHqKnc1LbDU8UjDpxeEqy+EOqdiQax892\\n\",\n       \"BXZdS03iofPUpKGetmUgpu73Np88azXJgZMsZExebOV5saRkh3aQ+788WH+9xfjLqV7iZiV077G2\\n\",\n       \"+SfFGE+JluuVxeL6o0hkUnkG06gZRP48T2R4ni1S228NEfdA86TWCeQKAPoXS0WUSvSUFDOgekWP\\n\",\n       \"e8EVADSxqaDr71CAryBa8gDx+ETzf4/6rAON8VgTFSZKPJnbALYm/Px3Wi+VaXRQwKQY0N9Mj54/\\n\",\n       \"mLu9PbuPlSR+5Zk995Mk2B+ECBEiRIgQIUL8g8RnQ6SWceFaesu7Q220jkwV51Ui1UbLR3lLnLiw\\n\",\n       \"kC6EeKUDsRmVn5tNzFT0u1o9nz9KkfZ/VxP60aJeES9hOxhtOl4RweCTjOY61EmaKJ3/BoZ5m4WI\\n\",\n       \"2aLYUm4VVZtIsJfAVqBPyawO+0sy22+MN/GSVxW4tSkBQmpIGGPF83BrAtfzCxEPj7mtjM5Xmhps\\n\",\n       \"+73ef4/TtRXRAJsAruodebRJyu/qmkJrW9pGTnBZyvfYmE8FqJry6pxzmRpNHk0AvFk+x36tO1+s\\n\",\n       \"5Hrai39v3vard/+T7HdiATxQGpy7JyGmApwpWUKMlSARbWXHTyEYHXoyGgT60lGdPi3k2JFSc1cJ\\n\",\n       \"6rByso+O6nWtIJQcqMFUqH5/ZyjJOYTFI1cr13MjO4cjkKVXr17Z/kZdzcn3GNVR+4vnzy1d3wH1\\n\",\n       \"ZEM+RSdevfpq3qbC1uETNarKglb4WqdrVESUqsAjyWG5tO8rYsm1sXogAquljSetk8Y2CYrc8W8T\\n\",\n       \"IKIJjn9zZ4Lxdn/tnHMup1Tz9iD3f6D6Z1onLiOksQXSwfUPFcQ9OzNReHWSe1IQwjYAgSqvpA+f\\n\",\n       \"yMLAAznU+pbOGfrpKIVbDQY9Hb9F//N08xT9OlDtSg+YRFHPhpJNFJFarm21nkMcnxAikaGGYk0J\\n\",\n       \"ABmSYXJCTtTgtG0I9UXnVZuMlOwyGswJe0KOs8WAYxJqcCPHOpBxpxqRpmS++eHDL51zzr26+Np+\\n\",\n       \"i+fE5bns78XWkj3aVsbru/1v520R8l44AUdRlSyxdho79ElChNSKJCNRetvATgHn2XlDlXIgiPFg\\n\",\n       \"ba39KotpXKE+aUMG16mHmS2ZVMZ4xjAinMGkkgXd0wk2CRBs91QTNkKiDgvgPVD/jGxiPI5xR/Vc\\n\",\n       \"R7ANBfUJjwQlKnE5z3vdAUgXPTqVnWHz3arV2qV2rWq705DYvVyCCSE2J4bYfQ9GyDnn0i3ZfXwi\\n\",\n       \"AiIVIkSIECFChAjxxAgvUiFChAgRIkSIEE+Mz1hrb3SkDXPjqJC94XntJPRFmhg8PELYnJI/jjpL\\n\",\n       
\"R+Qsbp5BBrcWpXyeZQIjbgmyVcFsfkfuwIDs29E8i1a5eIV05E+k3hYl+U4MJzl+TNTKAGrl92+/\\n\",\n       \"dc4598WVwaMFsMqIIN6uU2E9u/6CgiIvpAU8U3KiShP15+nstzWE2kqLHsgfZw16iL2oVIhaLK2t\\n\",\n       \"FWI9DVZrbaYPPEHb8Cxy5KM19Oqsa21cV6Ab4A69LK1LenjgsD+UwrfDZJD1rhbx/OXmp/O2AnWX\\n\",\n       \"LrdGY13v/tQ559z7/V/beUaPKa1FQWsLQNspUYsFPJ1aEtHG+HyivhvB0Tqi+1/VgMfJbX2De6b1\\n\",\n       \"xc5X5g7eAk7PyR1/B2ftojAo/N17qSd4fm79uULtvKm1MfHsxZe4VuvPKWhpFV0eDkYjKI3DItoE\\n\",\n       \"YmwWoCv1xsJu/S2L0nlsa+hxleJLs+Gj7+SUFKICdPZ702PsHozGUUF7T9RSg/Nrib7U3AYdE2sS\\n\",\n       \"wr95K9fYU20uBwpwJMdyHX+nnsTGW2nXU2WU1QThfX2w+6/0fUqygHk+Q3utlnavb3ciD2jJxV6p\\n\",\n       \"3QP5gxWg1lqiZ85Wck5Na9v2B5ljM5pPlV5X3y8W8ep97SmJIEZiS04u1h7SB3aqn/2QEkqeAUU5\\n\",\n       \"kdv4TPdBYNy2LBmQ77XkI/cA8X49MQUOCoru3R5U9Zrq5D0goeDtrc0JF/AZ2+0l2eZyY2PyFeol\\n\",\n       \"vrk3WcQHzOc9+SJqVYRHTvmgDB9JRYBlJJRsEClFqdc40hx+lP1uiMYce60rae1fgm7riFpvIOye\\n\",\n       \"qLKB+m1xAYAY1RscSSBUetKjrXtKgMigH+F6dF6TLEiqolUsYpKFHDpIRRIWisNbkGQW6gep35tI\\n\",\n       \"szN3cWpXlQA0jzwQ0RfZ2w2eUp7uk9YAjkcbT/f7v3V/LAIiFSJEiBAhQoQI8cT4bIiUc/1ct8k5\\n\",\n       \"q8NFjghugPB8pDf9NEP6ubM3bY+370c10Zw667LaWt4wM9RV4lT/CMhAuTRUqTqKAH4isd/gZAXT\\n\",\n       \"teYA3dRyrHzBLsqywvO0Ste09/f3gqB0ZPWwQoXz3FNaM+wR+s7EpgOqmke00j9HqrlrGaVDSvKj\\n\",\n       \"quoQEaJtWko5PaAy/MurH83bRqzwFrkp+zRNtmsp1RXfi3gBo3YW5KysYvCBROFznTisvnNSx8e6\\n\",\n       \"SiPBYAJlZ9PbivT+IALh5xc/o2tVEaW15+Xqz51zzu1rExRXSGQYNTmA2msWW07sDr/Bv4YqzJ/7\\n\",\n       \"j5MdltTH1kB/BtOJz7UgFekcyEIghZ1CTJYQmq7P6epffiWWGR+uDSXMkTSwvbDV9AkWH1FGtaaO\\n\",\n       \"SLWOPk4hTtBfl4SIqF3Fbkdu30D/Nhtzdu4hxuY0+fVaxsye0s/X6PeKZrEQXIXivKrU/ndo6o++\\n\",\n       \"x8P/AejU5TkhfLMA3a7//l76vW/kpqj7u3POba/kt/sbS2FXQTlbGJxquWfnz6x2ZgNhe8Ju/7Og\\n\",\n       \"mqxDUFu0L2xMrEsRmV+/l8+WW3ORXyBNP3NkiVCh1ibNCYoqLZd2T04Huf6InOU9EIOps3bPgdhr\\n\",\n       \"V9iSYF+tS8qFoQ96TiPbXatAn1zRByAGjDDrPFYuCaWax538E9McrnPNlqxj1qg8sUhfz9u2QElb\\n\",\n       
\"QmkaFSxzQhNE4b/+xipKbFFRYL2BsJsE+x6C/u3KEjCWC2nju5vv5m1xCrE1oe8F2nUgOwO1CtdE\\n\",\n       \"COfMnmAC+paxJcdezr3PqIZni8/JamMd6ZxNiUJ4jk2UlDBgHllR8oA+Fvi508OCZU58oXGaw4qh\\n\",\n       \"Ohn6ul5KQoX3LNKWHV9s7BmzwNx5qAzh6wediwj11MQfjP/FkpNIgD5SAogiVj2xOSo2VxsO59wM\\n\",\n       \"JRWE0nkwKwm3e8vVFT6OgEiFCBEiRIgQIUI8McKLVIgQIUKECBEixBPjs1F71Ti5iURnmVN42CDO\\n\",\n       \"HqI3hlb7DsUTiZYa+o89Y/TPnqi1/QG+UF+C4qJ9LDKhHdYk9r3vBKrNSID9UIu3zJZE0ff3AhVe\\n\",\n       \"RFd2PaAAmonEphAbKnJ9/fDr+bNoElF0lxiNZr5PdizVzp6fWZHNJBUYtTsSjdbI34fejl9hf0r7\\n\",\n       \"LLitsePjiQS7cN3mpIActGg7GLTsIGL03ui+AdQDU7BxArqDPGD6PsMxVAhMlM1CYHFPUOx8LkQB\\n\",\n       \"13Cj35FnzLoQmiVjv61IrvvF9h/P23739l/JeYCy7XsSIrcC2Y8xiUhBiw49CSvhmO2ouHKEv3Py\\n\",\n       \"RUomFI2NjW4Za2mLFM7Gm40Jxh0g7o7Mxj1oue3a9tHgnAdy7E1T+fxwMgqqWGHbziD75Vb6+y1c\\n\",\n       \"tNlj69WX4gvFLuL390J3Z+TFo4Lx49E4SxV7K03knHMlBPJv781HpkSFAIXbDwfrV2dn0hYPD0Yj\\n\",\n       \"qrC7JyrkBJqBz1OTPFhYH4FaaNob218hxz+ijd+/M9p3rtk62f0f4MGzurDx10D4XZHfnQNFP7GP\\n\",\n       \"DuaCBbmS7x+EMhgru56HVtr46koopsPJOkADevbZmUkLTqD2TkQ3qmxie2Zt0mmVgUfecvo/Nsa0\\n\",\n       \"WHWHOaEiwXxZfuxs7kDLlzR3asWAnHzk5mORA/yIk4liGyceFM087REVp4W5PXkxrUApr87MFzC5\\n\",\n       \"EX+okc5zgM8TF0F3mNsnb/f4F7/5N8455y4u0f9pH+el9MkNjeGilL9retY8BwUVl+TtpN5fPc1/\\n\",\n       \"mgzCam/4KMUqT/F2r6tS7vXt3mj8dSHz/5JF7KDbtgX1v17+PpAoP8o+duxepHKNPRk56bw4Vuoj\\n\",\n       \"Z/erw/joKAFA+9hm8/W8SannlKQFmf+BfEbJDsdK5ocDFRBXXz6d97kChlY+6Fp21sdnAyUxYEyk\\n\",\n       \"Obuyqy+WXatKGnp68KXJH8ecAiIVIkSIECFChAjxxPhsiNSpHh/ZBRRwj26drWq1vs5IK0IVHrcN\\n\",\n       \"r3TwPUoTTSFQ9FRXTe0Ujr2gSlltK0OUC3LLJaVaV0BLaFXbA/06kLN2hPfRQ2Vv2ioyVgQNJyj7\\n\",\n       \"w7+exMldK8cgvbZrRhXg2pvxxeYnzjnnnl+9tC9CvP1AKuYjVsc91zACwjOiXtSG0upVCH5/oNpQ\\n\",\n       \"l7JaOdR2rT1E2eyYrdYVKcFEFRCDiZa/iiwmib39q8h4mjRd1dq6LOHOTIJJFSN7xyid/ObUGCJQ\\n\",\n       \"tZpYQMgJFIsp1WS8PPuhc865d3d/JfuorA0XTq6fmtDp2iMjF19NP04zW6VGENmmlFab6IqRHHhH\\n\",\n       
\"9IEJyF1dUx22Wq4rm6hPqtUCpfC+/l6Ems+vbEWuSE9Hx6ohLGdEZER6tAqWf/RDs5BYr2Ql3JGF\\n\",\n       \"RgPH4O3GBNB2D+2cVPi5pOSNvmeX9/kM8FmHY1G6NsThObntK3IW0Yq0quS6rq4MEdb7mHkWYMO6\\n\",\n       \"gNDUE9A5de+PyMV8PgahH2qXkRH63TjZ3/HINimo60e2Bg4JECeyLlhBFF1Rv5swjmq09UhOzGqn\\n\",\n       \"caB6eXqvc0oKULF9S9/bvBSEZff2zbxNEdY0I+QK4uUBSBi7s+tfbH+h6GBLIuolENOUE0tiHc8k\\n\",\n       \"NldHbbbCQVKMhzVKS22jY5fnGnVi57k+UeTsRELxWtqupjkmUwdyEmrfPwgi+JtvJeX9cmvC8qqW\\n\",\n       \"c1qTJczPXv4j55xzb+++te81gqq8WBnCvMQ573d2PUMjc2tH6M9s7QMrHK41mKutB6XmVxBbn+U0\\n\",\n       \"nztFBO08tws40DdkKwDLAq6AMWE8ayKMc84NqDwyo2qDtZeK+NlWoHd4/qSGPm9XMj6zyc4pBQNz\\n\",\n       \"URrqpnVX++6X87Yax1AktK65YgISm2icaE/1NJ7VTmeifqLP4IHsFDKgecyEsD3FpyIgUiFChAgR\\n\",\n       \"IkSIEE+M8CIVIkSIECFChAjxxPhs1N5D/didWWmPiNxxYxBd9yejm1T315E7bwpBY1ORAB0fx+Qj\\n\",\n       \"0oIW+gaFKsvCfF9awOiTZ8dWiGjJCbkHLXDaG7S4hc/L/cOOfisCwJhcsU9H8TmJZ82n0R6T+9gJ\\n\",\n       \"uunU2dug0J9+LV5IZUneTqCZDsef27Eg6KzJFVjtW9KVnJPvCOIu5bg9FVk+gdKrGoM1k1kcSv5Y\\n\",\n       \"KorkgsvqVeUstEBsQ6LEHBD8CNE5WXe4fhQIPJpy2qb0oG1zcNkdSdj+UIlo+MXyJ/M2pWoigmnP\\n\",\n       \"FkKR1vAROlYmRN6PAk+XJI5X190iNbGxh2B1HIxGLCKhA7LUWkALWPedbdNunBVCAdQPRsVocVX1\\n\",\n       \"aZFjybm8fWOeNREg6/3B4P43b6WvrYmCe/5c/KbqgY6Ptnv5SlzPLy6soG4Ouuthx/5U0u5MmSq1\\n\",\n       \"xh5QRaEePB9D4jvykXr5SmkT+L6QF4zSR3ys+3s5F6Zn9VhMCyrLMD7yllEFqp3T/kFo/kIpSPKs\\n\",\n       \"c4mMk4zokQwTUEPUgu42y63z3t0IzXPx7AfzthiJBNWtCdojUJpcZaFuVAAr/y7XNtbvboXiYMfy\\n\",\n       \"S9CDNzd3dp4q6KX55P6tOOCnRGN5FepSRoPexxE+PhkVLR9wTsx2pxDsc9FodZHuiW5JcB/ZF7Db\\n\",\n       \"oVIFFaH3JNB3zrmcvL1ubmV8HsnbzWHuXhU2125XMj63J+u79U7bnatHYMxMdu6atPSb3/1r55xz\\n\",\n       \"P3plc8jy+b8rX6fDv7j62jnn3L/z5T+dt/3d638p+4pMgH95Lue0Lo3aez9JP9lH1/O2w0nOc4lC\\n\",\n       \"vhkJ0VUy4cj36IgEjZbm7kUsbZGQBCMFtffIVx1UYeRZ0K9UGX2zj/E92QdTux6Tdk6SgRqeacfO\\n\",\n       \"rmvAPrLCJBBlJvPTMicJBMT2XW1JDuP0Pc4dyWZUIHrAfNp2nOyDRCXP0pIEnzFVL/vjhCo3Ki1o\\n\",\n       
\"N3n0j/QdH0VApEKECBEiRIgQIZ4Ynw2R6qvBdQt7481TFfZyqjtSLkd7M5ywghpHe9Pu2wTfp3RN\\n\",\n       \"OLb62PbXIsXx9lZW68dnhj7oyoyFpSNWi+q06pxzLd7Se3qDP9Wywt4QwtbW8r0FbVvmstq/P/7e\\n\",\n       \"OedcSSmfDtYI/UB13QpNdTdUQUVvF1tb6SyQknv3YCudb19/I/tjoT5E0V6tCWgFN+tECWk6QbCe\\n\",\n       \"RHYNHWpXxZ21iTrvcq0pdQBWh3HnTFgb13bf6xau6CrOJtWjCgEnFmcD1fJ0/3OsznpOdccC81Sb\\n\",\n       \"AH2xkuvYv6cVCYS0l6uv5Xyp/WvYX3CttxTt3xJypSn5iSM0J5JVV5YSwoe0+8EZ6pTDpqDqta6a\\n\",\n       \"oZQZxLlpbELMHsLvlByjPdqkpL6mScmRNkoAACAASURBVBFnhHRUlay+jy2Jx3Efr56JUJ3T2gek\\n\",\n       \"tUd0rEKrAhTWd3V16knYre7hvE0dq0caT5pIoUJ1Rp9MbE71L/E9tjrQFWZFVg8ex+qpdp+22Z4E\\n\",\n       \"2Oo2f2whrKa1Zd/ie5RWXqHGXEaJGk4rFJAAN4Htw/2NOTavzy/xGSG36H8THUPd61XsyqjCEnYa\\n\",\n       \"VUX11ypUIFjYPRkH2e+eaidu0Lc6sh+YgBgNhOapuFstKSqyZlD7g4RE9NpnJpq7O6DdGQnVG1h9\\n\",\n       \"xJzqj/YcqI9NsSa0wEKA5skSVgPvrrk6AVAySmbIUSngbGVj5+5uj+sj1gF1JB+5YqOPTUgi+Otf\\n\",\n       \"/O/zZ+drQVBXnhgBpMt/9cIcu3vYKVRHe07EiZzT1XNLCsm93JPv3vxfdnzMzw3qyo5k1xCjxl5m\\n\",\n       \"t9W5XD6/bQ19W+F56ghV6jqtQMEiarm3LNNW5PyRxQVsURLcpySzzy420iZJTlVBgPDfnew+vQNK\\n\",\n       \"W5aE0i0ECR9IAK/O7svM5q79UedOMEeOr0vtjNjCQF3Mra8pm+Cp2sZc4pH6bov9LKh24N/3ohQQ\\n\",\n       \"qRAhQoQIESJEiCdGeJEKESJEiBAhQoR4Ynw2aq9tJldT8dbFmcCjw0QC9AhQXW/Q/qCnTMKywX3s\\n\",\n       \"T9MCqu5JAKtFdcdeDvzuzpzFVwulFhnGF1qoG0mIB7qPofi6FVHcy2dGAY2dnGc1sQBXoEoPIXRE\\n\",\n       \"8HieCSzqyWMjz9TjhNsE7tiJwagqfE3IgX2lHl0kLO4bFapmH12XFp4dBhZiCozak9Buwm/K1K5f\\n\",\n       \"oeJuNLw5yURYGU3s9yHv7ewe7+Ga3EPsGlOyQd+pOzq5GMOJNy9I7IxLrBsuJCz/sGePOlt3LXm2\\n\",\n       \"LGU/BWDc5MLg+ZtbOGHXVLR2o6JLO9QRlFlRkAvYiPMkUXSP2zOSsNSDxlDRq1JHzjk3RPDxooSB\\n\",\n       \"Bv5lLbk9fwXvp4YomCUokLa1+//710IpPyNqYYF+rzQKu6NrIzK1ppSa+jk559xqJTTH48oCoHsf\\n\",\n       \"ib2xj5QKyTZavSD+aB/aF3sSQi9BDx92JqxXOjAlymhEm+3IFX0Jv6WYlKUPEOifbWS/dUPzCpzC\\n\",\n       \"B0oOULqdhe0ZkhzY72kFT6fD3rY93Il/04vn1sfu70QWUC6tQ5UoIFsheYVpPI0LKsY8U6Cl9T+l\\n\",\n       
\"Jc7OTBagrvSHk028BcZbRPRhhvud4j51PYmdUXmgJAG4+rMNNCfrfRyon6pqIo2NFktWcp4T+YLp\\n\",\n       \"cFfK2vUkol7Iby+3lFjyXsbnWNuxUmTWJIUdawNn/9NAReiVKqakCPX50wLW33zzV/NnV/Cd+7M/\\n\",\n       \"+ffnbX0m39+ujAr6YhLK6riw+adDofU0oULmuNdntSWvjFo0t8H8Q8LqGgJsrnYR+48TNe4bjA+a\\n\",\n       \"k+JW28SOfzrJ9+Kc59MT9mdtPCfoIFPq/Oyr+bPNGmJ88kXU59P9kdz2MZ7fvrNEmZcX4i3V0bMr\\n\",\n       \"UXkHib2LQvpxD6/EE1XscHjG8LNDs9JYljBhGyfFqLdczn5/SJSI6blD+RmfjIBIhQgRIkSIECFC\\n\",\n       \"PDE+X629eud23lCVNRxgM3oz7rGqYHfk61t5q48GEmp7TRcm5ATIztDbikAd0kugEHd7c/j1EVYE\\n\",\n       \"iX2/HeRtXV3HnXNuGHQFY+iTSvXu94Y0PFtrOjVBF50cN4MQMKIk4gXeuBcLW2nWqLlVNbbS2sGJ\\n\",\n       \"ebuxc3q4l22nR26v8ta9opqALVZ4BWot9XT7kxQu7pSGvNvL23/TUL1ArOA6QuQ83vAbetNPBjmn\\n\",\n       \"wtmKUN179Y3fOediiAI1NXUcGZGAwzG5I6c455zcvhP0mQMhR2MvK6GI6gk62DMkCSk1UR8wBjrI\\n\",\n       \"yQ7brfSJHdkKjINcd56SsBL11/qBEyUgdhwIzZlE+Jon1naNl7YdsVwfSAh6AtJ5lpuIPYulfxSF\\n\",\n       \"tVMPDwXPqy8gotfvrU/GmkhA3yuRcKD1r9h+QxGhga7r7k7Q1O3WRLyKel1e2j3R/TByo/srCOHi\\n\",\n       \"Va9zn67hN1JtsARIByNtA5IROE1ff9PR9yasOnta4WtCQwWUpqRz20GU3jOqizpp7LbvCmkLRq52\\n\",\n       \"J7l3nhDZJYTN+72hZNtncj9HWs0P6IMpxiTlWszJNm1rc4KiZHVt/TT2sq2lc1L0o1wZwn39TvpH\\n\",\n       \"T3NMCwSsxH4nShhYlPJbnrs0dzxd8pyM33JeudekIK72AGH3SHYSuH6PcTIQgqzp79woBcZ/cqAk\\n\",\n       \"Esx/ZW6p9lcXMmfuPSdKyHEjQuJrMCFacy0ipOff/u3/6pxzLiNh/49fiiVNFjEjsMHxDaW6gUC+\\n\",\n       \"IZTGA0VNqOadB/qTAAbpSVitJA3ppd0ApMUTqngL5GogP5kN6hnGA83d+LfaW9/RPtPSdXuFZACr\\n\",\n       \"L5eGoJ2dy7O7IzTXL+VefPv+m3nbopR5jMqPurudoOQZ1d9TN/yRksdSJG+MmE+GRwkz0neXC3vW\\n\",\n       \"ePTPgd4JtLauJ/RxTmiiNlbEiuc95/845hQQqRAhQoQIESJEiCdGeJEKESJEiBAhQoR4Ynw2am9o\\n\",\n       \"BjckBic3KNa6Xpt3xAhaKNkY7FsfBYI7HUkIB5+jvjfH5B7QfrEwqkDpAIVsJ/LJUPjex0QFQETK\\n\",\n       \"QsQC1ELMbsfwOdmTA/vFhYjxppboRsCNKWjBjIr3JoDPM3L1jUqBT2/vrRjmm/F3cv0Md4Pm9OwO\\n\",\n       \"PKHgaWLtlKAY5AhRaEKu6yoijEkcPSwH7JfoNvWdcXTvALP7xN7LaxQNjjIqDA2fnZREuWmkjuoQ\\n\",\n       
\"Ak5Eu8CxfejYM0yu25fkNg4qICJaal9JscxFZGLbEe0TM2Tdorht/gznYe2l9Mh6ZQ74Dv0qL8jZ\\n\",\n       \"PJb9PVAh6xR+SwmJ8nW0MX1ZLIUC6A/w7iEoerURysjXlJSA/szO3h3U9vnC+s7ba4HMuT2rk+z7\\n\",\n       \"8oq8ouA3paLsc3I2ryG2riqjdo8nULbkI8WCZg0VvnIRaqXoH/nTAFL/FBWowQL0Fq7fS/JAa5FQ\\n\",\n       \"kMYsgFaxKRU3Bt2hfc4553IIqitQcUNNHjMqoqb+r07d6aMqr0hYIB+vSgvukrdainurlRicc24A\\n\",\n       \"tZmQK7pSC+psT6yfO53k+Eei20dUY4hjuydKd3ZEWaVIZEioQPFMC57sexX8mO7uJdlmuzHaSYXS\\n\",\n       \"JfEzKei+5dqK+w6j3JOBaLw8lXs2Uvt3EK9HrDVXUXClfcfO9wHPiWtKNnh9LfPu7mTzfwW5x5Rb\\n\",\n       \"460gBo+Jptnv5Hv1kbya8CwY4EUYJ3b9NSjQv/nbfzlvy7yM06/Yxb6Ua+g6Oj4SJb4nsXUM6UWx\\n\",\n       \"tP6cVzInDCf4s/XW13K0TUsVOCZ4Gk6x9SFN0Dm01k88GrlIqU8oBUsJLS0otRON3bQABQkadbU1\\n\",\n       \"ylSd94/OkhgaSCuY2l2BvitzSl7A/ax7Sh6BB9qn5DPKKHNlgdjr89SovRyUZkuJKrXS/N6oVb0/\\n\",\n       \"fqLniQ44Hqd/vGZxQKRChAgRIkSIECGeGp8NkVrk8VzLyznnqiPqar2kVP9E3gwPrb2tbvAi3FCa\\n\",\n       \"uOop40duu/J5kRoitUAqaI9Vb7aly8fKKadVba11eGgFFSX6BktvxLGuyGx37+9/hf2ZKFfd2zVL\\n\",\n       \"OyEhYAJUS11t5fvypl3RauH2XpySv5t+NW/blJLOPpI/rVoS8Iv0FMEpHiuTmBxzVcTJ6e8xbA8W\\n\",\n       \"BQnWVdhMgvEJLubH2mp9pUDs+sFWiWg6V9W2SlHLAI+VS0/i4GiCJQXZL1QQCpedrX4ioGkTrb4f\\n\",\n       \"DrJKnVIWJauI065HkTOH2n1sP9EDCYszW32t1hDbkmA1hTvw4f7tvO32VhIZTo3ZBKzXuE+EuuQZ\\n\",\n       \"rg3dlLXxKl7Pc+4nKfZv6OcCVhd3VGttVESCjvXqFWrtnWyVWi7ktylSva+v39Fn0v9ubkywXiPF\\n\",\n       \"/OUXlv6sIma2zlCEibfNqAPnX+D81Nagqig5BMhM39o+SiBhY2P9tIIT935vLvYb1JPMKXnl/Tu5\\n\",\n       \"tnNa/WsiyRLfa47UX9HWbKugqfGcQq1zQkP1xxaoO7Yuqf4YxOuMhGdolIRQEu3tarWSUbr6WSnz\\n\",\n       \"ybtr69cHuJdfnNl17Y/SJudnhvCfgHqORxLgYtzxPKFoYxnJvwdq1xRJBhEhommpx7X96uc93ae5\\n\",\n       \"QAW5nUdA4kdCDvS3WgdyoAQQ/WWZW7tq8kJ0sOPrvRhimk9V0LymSglqHTBYv6uwnwnzf09VHLSi\\n\",\n       \"wvv3lqj0b+P/Tc4j+0/mbWUrbZdSWv0ApCel8Xw4yJgtCXy5PHvlnHMuwRjSKgVynnKvi9jOqQM6\\n\",\n       \"WrP9A9p6IjRrAPqUeGs7TS7iuTOBKHuge+Jz2VZosgElzKjFQBTZsQ6K8NLzJAVimsY8nwFhmwzN\\n\",\n       
\"7/BAzwiJ1i4wYk6MKIlJ+9BQ03MayQAFueJ3rRxjHG1b2ygibNsi1DZtaYz76Y/7HwREKkSIECFC\\n\",\n       \"hAgR4onx2RCp1TJyzZF40UTedE9UL+vZlbymF95WVYuFfF5W9oY46ls3WQ1kQEy4cramWNYwPxzZ\\n\",\n       \"mBErB+WMnXMuj2T11XR2npo5Hsecpg+NAmmEfAKOnvhYTWfVNHWujO6gORqohtzUYX/E3+4rWR22\\n\",\n       \"vSEyh5O8aRdLO6eLpaxqho5QP9QE1NpdDa1W9I3bR6QfwgqTC18vwelzuqiDhiQhfZlW4s5y0oHB\\n\",\n       \"fsKPtvzqYToXJ7CVIESoxQqLTVJ1BdF7u/4FVtATnbtqNI5kUukdVtopp8nKqmO/B4LVmR7CYWXM\\n\",\n       \"5ncqr1qXVKEe6bqX56al+v7D75xzzh3IS7Gq1LjPjq+Z/cuFGo3aajFCu3akVdg9yAqWLTlKpNVr\\n\",\n       \"jTTZn1xrS/qyM9RnvN8b6rI9EzSpPsoKsuLUbDQ7Z7AvgbRwNrAaRm42htx9/70gp9utwbSqUVyv\\n\",\n       \"bNtsUwBkiu0X1OpjSXqcI1CdBVWa3wNF6sgSoGxhJ0Fp0iukot/dGepWYCwcoAeLSXuiyDUbKGqd\\n\",\n       \"uK63dsqw/M/IpHLo5VrT2PrT+lzMB6uKtJyYO7gkZgp0bNI6fDSHdRBrXL0wPdLdh/c4JzZklT5x\\n\",\n       \"f09WC4DMe0or73DvxvFjNDEFY5DRvKZyUU/oU4LHSEsocYwamxF1lBHzXkz6OjUljgayk4BGq0A9\\n\",\n       \"u93BENQGfSgnVDnHsQgkcRkmLZ1f5IvoE4Tce8w7ntH5P9Tr0TyptSFZU/N3v/5b55xzZWnPqb/8\\n\",\n       \"k/8A50v6XqBeA9U61FqDLddaBBJTwkx0vTINYgMLiZrQz7n+JF1/D4SJyxqq1pdrHQ419GBsnIrL\\n\",\n       \"zak/t7CniBLVL5FdwKSf2bG0TqMns+yHndg/ZIXpMPV5yrUbHe5FP9oc50ZFuFFrl2xlMhi89oTI\\n\",\n       \"6aMzelS7VWvy0W5bzJ09MRFogCGxQdnRc/RTERCpECFChAgRIkSIJ0Z4kQoRIkSIECFChHhifDZq\\n\",\n       \"r8iWrm0o1d0JjHbzwSDziwuIs53B4zFSIqPIYLfNVqDd/Z1BwMlcL8popMnJ5zlcZNkdWN10u8Fg\\n\",\n       \"fKc1tAhGVFF8WRC0nQvsmBAE3gEqLwhaVGHznHJM9MBDJdDhYrI6aJmKXckSQcXep8p+W6by+aIg\\n\",\n       \"F2MIal89/w/nbX/zS0nZ/f0NhPCOaDTA2GRi7MpMxeZGo6xADzAVoJB9OpiwPhuEUoiJqswUMu05\\n\",\n       \"UQCQLSDwaaRrBWTckOuyQvDHBxJxgw5cZEYZae24mtXbg9CifUtUDejA005g5KoxYa26Ig+RUXGa\\n\",\n       \"rp5THSgVHmdEC19uhHr55ZvfzNsa0Awjwd1nSzj6g8bItkRjwcW/I2ftDu20KI1GU+uOxcrOKQMt\\n\",\n       \"cE61xg5ID2+oJuFvfyH9eXUhdg7Prq7mz7Senh8p/Rqu2MeD9b/NWqgHFpaqAHy1tutRKj2l9HtN\\n\",\n       \"sjBrEra1gNUAiXPv7uTcsw2leuM3vre+dvegzvoWHqLwLLE5YcAY70HtJWQhohQL2y+oLNs7u9cd\\n\",\n       
\"7CFiSmwpS2kTvtcqKF8s7d5pfbqJ8v8biG1XuZzn4d6owPVa6NnFmupVghY53HyYt6mFwIZoVJ0z\\n\",\n       \"WIKwRCr6mNgYv7+R+96A7lmv7bOzjcxPzUg1GdH/UuprqlqIKaFoAgXHEgyv7ukx9WdYOyQ1EkCI\\n\",\n       \"xp4mFXtbAoRWfuB5JUW/SknY36kVBSX5TOBUE7JT6R9Ad/WwMCDKdMQji61WIpzTX//8X8/bVON8\\n\",\n       \"uTKbFBVWZ4Rf5Lg/TUM2NaqlaECtemsbtTYfPdG9rVJWNE+AAh6JgtViCBPP3XjuTSSm7tEmEdN3\\n\",\n       \"oPZ60Lf9aDRuB6q4H6kmKtz+rzb2PHtzJ5Ysp4YkPUiKykjs7UFztgMnQ4EqxAMqp+vXhIaJ+pVe\\n\",\n       \"f/koUURd8akCBKwW6obqBKr9RkKULvOBn4iASIUIESJEiBAhQjwxPhsilfjRrTN7Cz3sRQB6VZqw\\n\",\n       \"7vpGVmKrta0gStQdy0nEnCOvNl2TdcGotdbsmDFS3FXk7Z2tqlukmMYZpzzqcW2lecRKW4Xbzjm3\\n\",\n       \"PZMVWxETIpBqRXpbueqb9ghjypGsDq7vxKTt+TkhaBD7eVb2jurqyKsq+Xy9NEO4s6VU5F5khhL9\\n\",\n       \"kz//j51zzt39CzVBM/SlUwNFQroGmNktyehRV7rLla2SaqBZRWaNfWy11tWjXHc534FrF8q5t50i\\n\",\n       \"gmx+iNUXWU1EQC5HWq00sFNIva3INHU4IgG6CjBZFKtKejVLbcnora1E2L3orU+ooLgfDP0836BO\\n\",\n       \"IgkmC4iMLxa2Iru9u5FzS6zdlxBAHyFE31AdtLyU63l4sGOphcYDCcavLuQYWq/ROecOR00ssOu5\\n\",\n       \"+SBjjBGe7Vb6W4MkjwPV3NI7t1qSOD/62EJCEbnj0c5Ta/GxKFbT6tkkU0MFs/f3lGo/19Wza9B9\\n\",\n       \"MEqp3ztSosqylG27e0Mur86l7a4PH6+IN7D42JHRowdKyDYlFUwSS8pXVyBkYvQTl8ip3jUQ8JhX\\n\",\n       \"+kB9EkKzihRtBwR7Qe3fwEB2IPuDJRCui+dfztuOsCxgsa+KvLva+r+ijjmZJK7PZH8HWChUR2vr\\n\",\n       \"xRK1TlnFDETckyXCfImECMVqJ0NjVzXbE5sTr+X4I2piblMTJ4/3er5kIXAv53mkftIDRe3JfqA/\\n\",\n       \"SZvFSzu+1nYcSFC+WMrfN9cy/puGjJthCMuIvKqseduvf/UL2e8P7ZzOYbCr7eWccynYkWJl/elw\\n\",\n       \"knkiQt8Zj5TEgHqtWl/TOef6DlYzZP5Zoy0iyhSagKq0EwvV9VlEdgoQ47Nxa6LzdCv3RPuhc861\\n\",\n       \"PYxuiblQ54TF0vr1JZD9abTjqyk0Jy/oxNN3nAAhx4/Q1lybb7+D1QLbZEB4nkT2jqGCdp5PFHWc\\n\",\n       \"uo8TCiLqkz3f709EQKRChAgRIkSIECGeGOFFKkSIECFChAgR4onx2ai9fde4ZUo0Rg9/ls6EsD3c\\n\",\n       \"YYfeYLwkF3HecknUHiDNkqiVvcJ3tG21FAHw4SQUx76mmjut+m5Yk/hUXbzJxTVWKsrgwRNE8ynR\\n\",\n       \"WCrA7qnWXgTPGoUO2TOrhhB6Qy7uk1MvFruGFB5IE9EoHUS8EVGQ68VL7IRgdMDCP3z+U+ecc796\\n\",\n       
\"89fzZyWojboyGN/BnXwkum0EFMreOpNXyJS9nfQd3a4xhbCUa4epQ+3tvVz3QMJOrcnGIkKn/lzk\\n\",\n       \"OtvUcv2LzM6pBx2XpiT2hdi4I5hWIfKZCqD72rUqOiW6N1cnXDulPII/FFEwS1BGJa1VVOTZ0zWe\\n\",\n       \"arnvP3j5I7mWxr6vsHdCzuq31wL7v7ig+n+gMT9cm7N6D+FpRKLMBbyqFkujew/wcstwQUyFqYh5\\n\",\n       \"tTbBctMrZUDnCfEyexZdXJgYWkN9eZjaU/pWt0XExavLOYu9M4iXG+qn6n1Fw9RNEEMvye/r/QcR\\n\",\n       \"Y19cmgD4/t03zjnnbg+yv4y+nyFRpSOKYe6TZK5VlNr/WZwLB2aqirCEK/ThYPRlAc+o6kT05aUc\\n\",\n       \"b2zl3kVEBXW1tElB89QJiRfLrbX5FKtjOFFLz2X+2701V/zomZz7d7/9pZ3nUtpigfHaUrWB/U7u\\n\",\n       \"sTriO+fcYoP2JBG7UjAj1Y5UR/NH4xlUaUzHmCCyj+G3xHNCiXnn1TOj+3bw7Lrd2fcaHHdJEpBd\\n\",\n       \"r/591v5nJebTyebd5Ur28/Ag53sEJe+cc5Pek4nmP4y1ckP+WEhieHttdVKLXObdmJzdE/TTnJza\\n\",\n       \"Rwiv7w5yrz0/pie5J+Ng8+qIhJ6RE6WU0qPsIY+Jd4hp/muQ7ED9eZgnaPYZHPVinXPOdeRj+OFG\\n\",\n       \"ZClfPPvazmmSPlRQUtY5xlPdmbQj1vmRn1M495GTErwmecDHi+YfTTZhZ3WPOXlgSg5NEXlO3sL8\\n\",\n       \"R5DSXA0hsnl/cmT09okIiFSIECFChAgRIsQT47MhUnH6uOL3JUTm97UhLVn5h79yboAojNEsXQiN\\n\",\n       \"k70l93AqTZYmwC2A5viF/OCB6mqdOjmuJ9FZBrHfekFCwFK+14+2Ilan2gO5aPeDHGtsSGwO5+8Y\\n\",\n       \"dZJ8ys0v2+4OtvrZYkWYk4VDies+kji1RcX6N2+s/tNf/FTOryc0T1P9FWlhd9oEqFK5slXVLETs\\n\",\n       \"SFiMFUxP1eKnWOsfWdv1PVYJVGlcK20nVP9qqZXl4SK/e22V0RUl8uyYXsOdN6OVLsSrIyEHAxSL\\n\",\n       \"EaVpq2P0kFOqMawtGqTwMtISQ+TuyfVXU8i5NtotHHvPnSFHHivnlDp5CsRspJXOAMTuYffaOefc\\n\",\n       \"q/U/suNrunRpx7+8lJV4QX1iQP3B/cH6cwnn73Jp1iHzOpNWaQkqnKsVSUyCTUViuYbXGk7onlZ/\\n\",\n       \"6k6uAnPnrMbeguraKcLkOSVd616ibVJCXw7kaD0fa9Q0aNuWAmlsycW8A7KX0XnmhZzL7tpQtw5j\\n\",\n       \"dwtXdu5DisRwqnuK6gQTibj7QasC2DiZ0Wdy0dZdJylZQuDf1cbaSS0efvDlD51zzr1++/382QVQ\\n\",\n       \"J0WGnHPu4kLa/VjZthbC6oZqFyoCfnZuiFz70GGboVndqPOZ/JYNYTQNfCInbm3hnmwK0kT63eM5\\n\",\n       \"DnuiTQOSdybqYw72BzHEy9HaHMMzjJfdzlClczwo/uS5JXb8Bp/fO0IEce8eWkNzIqDIaqHjnHMj\\n\",\n       \"KhmsIbK/ubX7dcK8p0kKzpkT+hk5kFuJR/ve7kHuz7MLSwrIYW0zknVDBhSzgKC+JXd2rwJ0asMY\\n\",\n       
\"/S+LOClI+n+Zco04CLUJTk8Wco1sAK/g0Eh2JhopnjsdoUpaJ/XdrVm9LHNpV651WCKxJuG6dep2\\n\",\n       \"n9s8eToJShTzqWtNQMxdCaF6KeYwHpOKKkWPKnDI9Q+O7HQyZZ1o3k9h00JsxjjyyXwcAZEKESJE\\n\",\n       \"iBAhQoR4YoQXqRAhQoQIESJEiCfGZ6P2+nF0nkSkUSFQ3CX56NS9+HjEVI1xhMs5iyhVFNqSYE0B\\n\",\n       \"xQWJotV5OQPsPJAQsj0pxGeQcQx4siBs++VzoQDefTC6ScXDngrE1hCvpwsW7KnYElQUeXwkKLx4\\n\",\n       \"aqgYqheqJqWCmlpcOSPH4BoC1Ie9/faXvxGX3RdXP5u3TepQHcn31afIOec8HMZz8tEaWhXs2XV1\\n\",\n       \"8ODanUycmcz3keBu5VsnFsrL30VpNEIygYJ6JrD4/mhQ8G9f/518RkVO1avFj5wAIMfl4tJzsVg6\\n\",\n       \"vooXmRb0sdBBEzywfWyw7wbC45xg5wIU6KknKgLQ+u2dib0ziFjZFXkCHRwTBO/RJ2/w203xav5s\\n\",\n       \"CcoiItffE1yuh6PROBorEpH/6GsRtn73nYld61p+8+Lqx/O2M7iSf/+t0EdlbuemIt+IaJwB1M5i\\n\",\n       \"QR5woBSL3GhEpQDY70kFrSMlCqjIfKb7iLPL4DPXVDVtk/3GfE7w9uEC5ftbmTv2g9H95xDAH7hA\\n\",\n       \"q3+cPLIqqGg1DtF1LDSFO/nCvjc59Syj4tqgGZkqjiDyfeQLh+tlAX5Zwu/qWty715TEsIew/PLS\\n\",\n       \"HOh3e6HCSpoTRvVPy1iALMfa763vXD0Tmq+pKckFDvG908QOmld7pfZstzoXexqn+htiip1PlAK1\\n\",\n       \"H6co4D1V7EAtbdcrFUxUaAzBPvvzuQnFrWmcPgdVW+2I2kRSziK3+SebZMwUa6JlUV3hCOnHxQsa\\n\",\n       \"rxCWZ5GJ7Z9fiYi/LO3Z5SL4GNFp1o3s7/0Ho2pfnElSEPeJUWUmHkXbKWGnR99lejxGu2bkLO7R\\n\",\n       \"/hHJCGJQ4AXR5yO2+Z7mU8yTdW/UeoxXhcirsJ0TMKRNDjROe4wZPaZzzqUYnzn5LSbw0WJR+Bb9\\n\",\n       \"/0BzXNPJeI4w/phujxNpk47oUQ/vN3Vkx5XJ9+m5m+OcNLHCOeeyTKsscL93fzQCIhUiRIgQIUKE\\n\",\n       \"CPHE+HyIlPMupjfjCW/VGxKsnk6ymjw19mbcYoWZRibOTDxq/dCKsEZK7JKWRDFEfANqQ9W1rTSj\\n\",\n       \"FKmxnb16qmA2KW1bAWfrNLIV6eEob7jVwd50G6x+mkcCTHlL9hkcuztarTpZzVAGsStL1LUjREpt\\n\",\n       \"CthZelGIGPP4YKvKv/nlXznnnLsnF+dxgisxxO4xIUhpjmOk5OKL8+x7a6cHXNfaW/vvsSLOchIx\\n\",\n       \"Z7AJGGg1CYShiG1FmKtrMVJuv3r50/mzm52kaXNNRE2dHulep5m6bdO9Vkdp+p6H8DIhsWELoaLW\\n\",\n       \"euRabwvUU1vQSlctNhaU1q4lrtqIVr93gmw2KS/JdXVESCxEjnUvv31/fD1/duXF4qCYbPUbx6hN\\n\",\n       \"d7IxkWLF/tUrc7b//ltBoqqjCatffSFo10Tn/ptfSd3FF8+/cs459/aDJSz86Z/9Y+ecczc3d/O2\\n\",\n       
\"50tpk9XKUBJFTmhRNyMsbUcuyuhv7ACtyNWMVtFKtwAiWB2sXdWepKBkCwektSWU8OJc2uy3v/79\\n\",\n       \"vG2JFebZxsTLH64FJdCkhD63fq3JCW5k0ak64XPI/xWEyLSNzFOPTNxxjSkJWxugOSdCx/W6O7ht\\n\",\n       \"92S/kahNBdmf5Kj1eaLxr8cdKfOikwAAIABJREFUqD3ne0KI3PUHQUIvL0yo/fvXgpgsUDtPk2mw\\n\",\n       \"R7kUahMdYTHVupwGue+evE6m6WNh74hHkM/tnsxjUWu4nSyJwpXyfMif2fm2sOw4DFS7FbUwlzSe\\n\",\n       \"73DZqSebklwQoZLy39ONtLsiSA1df+HlvsajXeuL5zKfZZQApclIIyW7ZJgLPrz/OztPVHko6Lcx\\n\",\n       \"hNUdLAS4D2mfTOkeKsLLzI26mDtPzzMo1Advv9V6rmNOyRM63ZPbu6JeCRD+cfoYpVyQ1UKP/p/Q\\n\",\n       \"XNMpclxQUg6uNSZMJ0KhwiKzcT+/qaA9PdUfXWNcn8hCpBtlLPQ0rhRpyyJGxOS4RcEJJXLcOLO2\\n\",\n       \"41yIT0VApEKECBEiRIgQIZ4Y4UUqRIgQIUKECBHiifHZqL1piF1DPhULUGYdQXY5oPLfk2NzBHhy\\n\",\n       \"saKCioAgu4n8hgBpDgStanHfIxzNR6JYlB7oyFCjaeC2vaCCmoAdSyq47CHALgaiFiGyq8lHRgWg\\n\",\n       \"A/yUSPPmUlB765KKLIKCypKPRYwXS4PC9xD7xZ399mEvflTfdN/M27ICtNwWjtWFXf8AemJZGsQZ\\n\",\n       \"QVDbVHaiR/jTHCYSYKIdh5ooGEC/q8Rg9A7eUru9eWV9cSXQetfiXKig5hdn4rfyzdvfzdsKiPIn\\n\",\n       \"Elum8P0oS3I212SEgfzGFAJPycUXdKealvRc5BVdLKHi0hkoiGS0/ZbwVmoO5hh9Pwrd0JG3WAFf\\n\",\n       \"ppwEoA40l2on33749fzR2WKD4xONDRftZLJtC9AC768/zNt0HK235hn0m2+Fxru8NLG5+rF49M0N\\n\",\n       \"CdZvsL+CBNg5aMTX35tg9sc/+Qm+bxRgDtF6wkV7IUYdchIbg3pRCqQkYb0KwJcFeztJ/08Xdq+1\\n\",\n       \"gO3uaIkiE4qVXp6ZA/b9g9yf83Pr49utiO2VAhtZHT2o79nHhXfrilz8S7lPczKHcy6Z6YOPf3s6\\n\",\n       \"WR/XOa6mJJPVGoVcIahvSR6ggvKhI3oaXaxc233qkCgS0XhSWrQ/WX8+QD7REn11eQVRPpJXIvab\\n\",\n       \"gzg4ZQ1CrPeQBMhIAJrIAVvdsfuEDAI1AaMksbm2DwT7MbnzjxCAZ1SxYrOVe/yBJAD3O+mL1+QZ\\n\",\n       \"FWPuThz3HSSZ0G3PU+kTm6XM/9uTPX/yL+F2X9n496Xcdy5urVr/prEdq39fRgWidZ4+JSTBgFZA\\n\",\n       \"aW/a7SzKnqhN1GV8pP6nhYfZnypGX+gnm5OVvps8+yTJ2M1o7MYqUZi0kLzd60WBhImWFNl4Fp4o\\n\",\n       \"iSFXqrgxEblSlUn8cVJYTK70GSqfREjsOJ1srlEBPhtPabWL6JEoX64rjSl5CTd+onYatFgxvU/4\\n\",\n       \"v+dVKSBSIUKECBEiRIgQT4zPhkilUelaEodtvpBVHdcB0hTPprbvVQ9IdV2YYE1F6x2J41K8YR8b\\n\",\n       
\"SuvF2+epgnNpTGmgeKuNJnrTh4v1uKV0eawwPb2tL4DsTJRqXwIlqWk10SAVszvKm271aLUkK5xn\\n\",\n       \"Z+Z622HV6SNb1WWZnPMFraozuMdeEprVfwsH4MqO0aAZtV3p1NzqHKn+GYsDYRdB24ZRfnUgAWgC\\n\",\n       \"4T2X1cohRvfOVtObDVyZD3ZOZa6rE7l+FsfGOM8NuXNr7cKMPCniRI5B2e/Ox1rry7YpItmSdnbA\\n\",\n       \"yk0tFFiwXGOFk3qqYYj2LxNDBBPUbirXlpKuqOdxb9ef45zY2XectHYdrossQU6VoFpqw+GccwmE\\n\",\n       \"xRMhpw9wTF6syH4A9+zm5nrepsJLdsqvvIyP9+8FffrJ1yT2v4MQm2tNql0BIS2ptjWt6lWMvttZ\\n\",\n       \"P9G+y6CPoiS6cl8QMqEi2pr6RArRa92yhQHG5ES11rD695Fti7G073pDaVIgLB6iWBaC+z/4V64L\\n\",\n       \"afWU6q2O7RlZDajLOVs9OKS4J5ldY4V+sqJ7dw3n9R98KQkA795ZG7ZIStguqf7hUbZNnqw20D1i\\n\",\n       \"skS4vxfEbkVZAfVJvngk65DdTv5eIA2dEzDUMZ9yOOa6ehPXKUWrTSSi1hT/hFAC/dxPxBzMNfm0\\n\",\n       \"5qAhAyp8b0mAHzu5F2c0TxyA+sf3dp8ar5bdJIrWCWIgsTX62xJJPAUjaJmiH9ZeMSx2Rsc2LTo+\\n\",\n       \"bfxrvbaisP31rdyLmmxCOrXOSORaC6rhqo7y5P4z11r1nFiD5x8nCmldy4qYgxLX9ignAvcuYQd0\\n\",\n       \"fCFfyh9ck1QrP3D92REd8ETP7lMrNW6XbB2yR1svbD6dqxdQklUSyXwSI8nsfGP96vYWcxwhUiVq\\n\",\n       \"/HU02SdAotiVPwVj0lL/10oZzYkSxSJzrf9UBEQqRIgQIUKECBHiiRFepEKECBEiRIgQIZ4Yn4/a\\n\",\n       \"y5w7T1hYLRDkIjfKboBT6yo3yqRLpDBiSl4QWpgwJm5pAfFakpMAGdRfCe+ghqBQLShKtShdBhix\\n\",\n       \"P9GxEkDq5M6r+uSEIEP1anIJUXAQCO4hsKx78s6A23WWm9j36lyu++bBfEdG+GiV5CxdgDKJR3Jl\\n\",\n       \"xjvyz3/zt/OWXSPQftsoZWoXuxzg58JFe5P00fedM1fiksS+6g8yUdHWHu7IrTdq4QCh+kgivu/e\\n\",\n       \"irj6fC2eSU1jEL8KAQuiFgbQBxN5oeRw1OYil2Ov/iDWJnW7wzVyIWX4XQHFTkq7rxVovoicrVMI\\n\",\n       \"fxee6CGInfvW2rP0cn921CdUUBlTIVWlTUYI2nPq/7u9iFzLwjxzFugf0cbapOrkunYHE3GqULkj\\n\",\n       \"Z++vvhTarm5NqHk4gUb6gTjgv3tvgvVzdc8mIxulm78E7eScc2/fynlmlIChBXInssCOcYPYFVy9\\n\",\n       \"3Mxuxo6l++sSg9jj2TGfPdBSfGb971RJm3S10chr+EediNpIcX6rpVAFHVMsEL3mRE9NmJOKzK6h\\n\",\n       \"AX3TEgWl18pu51UtfSalQrJKo0zkSq0C/Q8f5F6wi7XDvFfROFng3Bui8YtS5q793u6/0rIfPlhS\\n\",\n       \"hPr89FTwWb2ialClGd2vCuLh7dWP7OuYJyK6rh7eX1Fu439ay3mOAyW0KI1HiToOdKR66zmiJ0+3\\n\",\n       
\"0scfSJx/C6+8lqQdCxQ85j55wG+YbVXqfb02GqsFzaz9ar39gq4LyQ65tWs3yPFHohuVbh7JWy4H\\n\",\n       \"3ZSmRIshKcp79qBC+4O+nugxrXQX+621M7VHVJz64xE9N2F+mnImq2XfbUXUqp47NdRqKfOSx8Nu\\n\",\n       \"QfSkJj7UlVHQyvKdattvjvvYkwdWin7fkbO6Jp4UhSWKTO5xpYqONDMrPIvYHf0OrugRiegHPONS\\n\",\n       \"eibr8HxU5eSoBceJ7vV/3EgqIFIhQoQIESJEiBBPjM+GSK0K78441R9p/SWJiO/2EOfRSjOHey4j\\n\",\n       \"Ryqi61jshtUkZU7PtcBaCNobWoWNKnIn0aPWc/OjvdX3SNMvqCbZAUhHTr/NsPrrp4/f9BM4ZWct\\n\",\n       \"iTghDo5Hqk0HMfyitDfz272sUpPejr9VkXNnbff8UmwFXpNTdZ+KG7IKEIeJ6hoekC46WFuXEKWS\\n\",\n       \"h7RTF+fG2epbHZ0H/tqgLvL29r/bCfoxRLaaHMfH7vWpJ3tsoEqcFKC1lhyly6pjvaP0X00hZmfh\\n\",\n       \"OdXfW79bxuoeLf1vvbKV0e5eExAIETtIe46EEiRwVu9GW7UosMeOvZk69lLtMA8rhEYtFmi1qoLV\\n\",\n       \"dqKVHoT9fE8GIGwD2YmcdtKuV89tNa3ox7v3Zl3w8uUPnXPO3d5Iv1qTYF5X1RmtPlUoentjFhbv\\n\",\n       \"bwTh+Mu//CfzNrVCONsQwjxpW9s1qmg1GoBMcNtgTE5cnQArTNK/zwbM48hoFlaptCLOkJRSUz1H\\n\",\n       \"rRCglgQZOVxrfxoJVdPaiSnlpGeYx1jY2k8q7CekCc7TXFFhuRCxa1VZ3ykw7rROIafVN0DYuP6i\\n\",\n       \"uoOPhNLvIEBfLmyM393JPVmSUP3uTu5jWZIA/UhO4s45T6n2GRILOhIx57kIcSdC36JM09UtJlgW\\n\",\n       \"TCzehkB+2BnCEwP11twByrR3KzjwD/eWRHGrVSQiTlSC/QYxEjGE+iOhXwdca7GyNjnqXAT0I6UK\\n\",\n       \"HIpwjbUdq4F7f8dWO4PWerMW0JqMbDSQ4lmQU6KCjpMJaBXXNVQhuic2I4F4vqH+F2mfoRswYYYe\\n\",\n       \"aDw1SHgaCE07IpEl8oZEXoAo0XqWPK+2YB86qgrSzLYK9IoRaQUKSsqATdCJUP94kjZOW+vjlxup\\n\",\n       \"ynC2kbqGx4P1UbXTOVbWJ9ZA7vvY2kQtRmKuv4ukhH6whjq1OOeIknyiP445BUQqRIgQIUKECBHi\\n\",\n       \"iRFepEKECBEiRIgQIZ4Yn43aO9uULifYLwEXQrV4XRyrOM+gWPUPariQLf49HWjbJDBeQqJQP3tU\\n\",\n       \"wZ+DnL1rQHt+YigWdBeBsX6S47edbcsh7Ly/Nrpjqd4qa4MR1b9iBH02TeZdUTcCu3dU5FXRTobR\\n\",\n       \"VbBaZASjo8jiNNi1LiHsfHFFvlQfIMoEdM2u243SZyNxJp38XRC1pp5VMYmtHZzdk5jE5iqiJbG1\\n\",\n       \"CiojFvGNcp/qWs5p8CQsxnt+7A12d7O3E9Gt6o8VPyIX5RJ69uCCiJCg7UKp4oVAwSU5Rg8D2ulI\\n\",\n       \"PjY97tnuPX0PbtMkovQ4z4monQV8bh4J+tEXBwjrYxY1gpZ4f2POysUziD5HEqDG6i1j0Pp6LdfV\\n\",\n       
\"k2PvEcWlLy5NvH5/L6LMtkVB3zMWooJuI27h999LEeCffG1iY6UA37wxyvDVS6Fg7u9M2K50G7ES\\n\",\n       \"s2dPAdqTvaCmXj226JzgxdZ01k86jInN2qD4mw9CI2/PzJ+mhch8opLD6pSeAO5nwbgmcTzyzAE9\\n\",\n       \"kRFllGEe8VSgVWm5/d7G+AIJIgX5OFWzQNyucb+XpJCylOvJSGy9WAjN/+6tUfbVSY5VkLC7Bc2T\\n\",\n       \"EN22RUH4a/rtei39aUdUiSZvRKCWIqI1VNA/kCxCPaPGrbW1mzDuKLFkQNvGK65uDaqIxl2sBZ+h\\n\",\n       \"36goiWKAt09K1QHOFkqF2m53nez31fOX87bjd3KNJ3KlHwZp/weiMwel2Qf0VxpDfQMaq7bxd4Qv\\n\",\n       \"XkTUoukcaE4ctT2NslqAUuw7u54sRRFezPue5BaaqDI11icLiMKZHuxVlE5Fi/V5l1Ki1IS5+0AJ\\n\",\n       \"GCd49W0Km3dP6KczPUvzfw8ZRUelOvpJPbOs/60XmkRg4ZGMFNNvc1Bqq/LVvO0nX/2Fc865+3s5\\n\",\n       \"jzyx6zrfyD4a8mWci6CTL+ACVTvYF039wzIaO6cO++tsWz/xvPhxBEQqRIgQIUKECBHiifHZEKnF\\n\",\n       \"KnVJRymPeDPe39sKVkWBcWJvq5phP9Aq+XiSVd/1rYnNFGwanK1mngHF0ZfLid7M40hXlVRzZ5K3\\n\",\n       \"2aphF1tBn1KqtZZnmqZ7mLftDoePzjPBiuGIt+BxspXJHVZdX35hq6VDJaiHuhk751yHFTS7iN/t\\n\",\n       \"pc1Kzw7g8na+XZGLdSMr0mOlKaR2vg1W9ez6u0nl+wkhKF7rVbX22xHp33w9Qwc0idDEMpfVtCeE\\n\",\n       \"qRtlP20tq/CIJOseyMFjBEtWjmNi1zpAFNpzCu+8ErZtKc49IpsGTTuPsNJeF2RJcSbbjrGtvqz+\\n\",\n       \"mKEfR4hde1LFTlh9j4QmTECpGCVI0bYtkJaRVpou/tid93CUe50lhiqNQFGznByDgfSw2F0V2nHM\\n\",\n       \"liCyvxw2EWVp/fqAPny1smu9OJPP90frp2cX4lifUFLI7IpMaJIiGy2n2mM1XQJN4nptkY7PT9Tf\\n\",\n       \"jKmKgE+1Tqe1U4pkED5WC5uOzcb6n9p5zLX2KGGhrlEHjhaj5fJjB/6+kr/zzMba3P4FW0JIe263\\n\",\n       \"ti2dhbdUlQHu9uqYfjzaWFsDwViROPrDeyRALOz4eh0VIS1LIKKrrf32Fi7qfN/1nsSKRBEipULl\\n\",\n       \"lhiBAucZkWDeY+7sef7TumYtu23DaoCc3UdN2gAkX1xYAsTuVuaJ+52h/xUSetLUEhtcrUijHStx\\n\",\n       \"GLuFIRf1IO2zPxFyDmQ7HaU9h5psHY5ynjXVK6wPcj9bQkmnUatSWJskmjyU2Xm2sKBJUpvj8gUS\\n\",\n       \"GtCdm44qYGDu4moTLa7fJyQ2V3SesrJm93hCzhsga5qc4Jxz66Wgzefrr+dtL57JGO9GeU7dPPxy\\n\",\n       \"/qxSNmVilgYME52nJlskZImj1h5UTtKV6J+rzPrpZikic9/L/Xrz3uafeyQxdb21U4T7PlH9P6/9\\n\",\n       \"jxLACoyZviIrILX7oGlqIuf7T0VApEKECBEiRIgQIZ4Ynw2RitPIpaSRgmzHPezJLE4rnff2vTXe\\n\",\n       
\"IEfadoK+Y3ewt8Yab+fHxhAuX4Cjj2UfSUpcMdCEyVFtHqTJ152tCCtUkN8ujXsfsWRdLE0jcNcp\\n\",\n       \"SmPnOVTy3noCH871ytQm4O2H387bVjkM5FJ709ZVd93wKlHezpv+u3lb26CuG6EvSyAWasJ3JF66\\n\",\n       \"7mQf/ciIHFJtKQ1U0bGYVppqOlg31k5qphp7WyUXmewnSew+TVj11Vj1dt3t/JmCU91oy5Uo1j5B\\n\",\n       \"WioseloykDtihRV5W5EugBxuz+0+nRq5xgmai4HOLYFxXTnYNejKqDrYvauRLtz0lNiMFflAtdsO\\n\",\n       \"aONtZqt/rZOl5m/9RNfqdKVp3785iUapKG21lqdq9GnXWiLFPSLdTgp0ck9avgkI5NUz6c+Kwjjn\\n\",\n       \"nAcykpCmJ0Vj14Q+rDaS/v7Dr0yP9+23v3POOXd5ZcjZw52sZguqJxij73RYkmaU1n/Yy+qzJFRH\\n\",\n       \"tTQ5ua92GLsN9T9F5w4HQ6kjINvH04OdO2rnlYUiTbxax7Hyj41Gl1TXTVfV3WRLWEXfGtK+rJYX\\n\",\n       \"2C9VlccYLMgKpoNx4nIl2+5u7Xzfw7ri6pm19cWFtPH+3u7rBrYTFWmfmiOQ+721k1pLkG+hy4Am\\n\",\n       \"5qoboQ9TReQZfVNd28n22wNZTS6sRtkI5MoTwppuBW0ab23cR2qPoLYDdPztS5lDDt8YS6BWH/uT\\n\",\n       \"ab/GGOafo30vL2U8PdTv7DyBiNcHns9USwT7gdb6624nfe3mgZBOPH9G0tdqkU/PDYt2WhP4lwB1\\n\",\n       \"nqgmnxp8Tk7OTZ9Xzjk31lrXjzRleIwPDJ1iHqHsfwefUdcRStWizy7Ki3nbD1/9uXPOuednP5m3\\n\",\n       \"bTdyDmdruZ9x+h/Nn/322//TOefcL17/i3lbqddD888Ea5+B2qRuoJuiZ5Eit18939A2ucc/+8Gf\\n\",\n       \"yveJufhw8zu5rprGXyHfTzPSXDp97tp9aju1RCDkFIadOekgHdWx/FQERCpEiBAhQoQIEeKJ8fe+\\n\",\n       \"SHnv/zvv/Tvv/f9N2/4b7/3vvff/Bv/9Z/TZP/Pe/9J7/3Pv/X/6D3XiIUKECBEiRIgQnzv+31B7\\n\",\n       \"/71z7r91zv0PtG1yzv3zaZr+OX/Re/8Xzrn/wjn3F865L51z/7P3/k8ntiZGtF3rnDPIsgZVdKQa\\n\",\n       \"UhlgwYigyAIC6IHccVclUsgnowVrpPY3PYmX38u+12uB1lkIpzWfGB31sERoO4PxD0jN9d4gc617\\n\",\n       \"l5ED8dlWUjd3OxLPJxAx9vL9/YFSw0Fz7h4Mnk8uZRun36qIt26ZgpK229OxOrUuIFdeFbYmqbTd\\n\",\n       \"o4xORUVJgD96+X5Ht2+AOHogm4QIIkJuu3ZUt22qiZZ87FQ+4RgpUpjvd2Q/EO0ffcc55zLQgwO1\\n\",\n       \"yQBhZ5J+4n5OLOwEfXi0PpHCSbtBbSafGzycgRbNC6t/GA1Ke7BNBuhWbzTSNKo7sB1LHbJPfO6A\\n\",\n       \"8QeIUhOy4h8H0B3kBH+sQeOmlhqcTEgXXtt5LkFZtRWlKddyfKXMnDNKT+FuFsxfbC9xDVSTElYA\\n\",\n       \"ZUnjby3j47vvjFpWaqmj+m8LCMAPZAlwdfXMOefcHhTUVWljLc/hjk196LiX+7QhAXwGurdvTIC6\\n\",\n       
\"3Mp+Dzz1oMOzSbHW1lPH5p4sJFY4Bp/vAhQci821JhmP0/VyhWNRTcRKa+3ZtKt9guuaKaVwAhUX\\n\",\n       \"c01Q/PtAzt7brbT1am19/R7nnLB8AML71dLu3cNOvpcSVaqVD5TaS4mKVYsHH9u2BGNoIvuTGHTw\\n\",\n       \"SDYdE+jWiZJHpk7lFiSKVjE62snT9U8QuX8BR37nnDsi86aqLbHoWhN1WqNFR1iraBKFc87VsEc5\\n\",\n       \"HFgoL+e5hiVKRPdaJSAt9ZMedeKiRyJq+TehSVFv8eHI1CrsXArru6OKx2GF0lKyT4a5dt9Swgrm\\n\",\n       \"eLZfqNHuEdkfqMwiYVd0tTOgepIjXMkXNBZfvBAB+gZ95+7G6NFVKWPt+cbqb3YnmQs4UahFgkhE\\n\",\n       \"NWFXa6GlM+o7N7dSZSGKje7Wuq+aeLNIbfyXeJ7dktXNhHnKx0ajT5A+DGS1MTuw03OvhyidyhQ+\\n\",\n       \"qu36qfh7Ealpmv4X59zdJz76lLHCf+6c+x+naeqmafqdc+5Xzrl/+vcdI0SIECFChAgR4v+P8f9F\\n\",\n       \"bP5fee//S+fcv3LO/dfT9P+w9x69tnRbltAMH9sed+1nnslHOtIoVVWNqhZ0aNGiyS9A4k/Qqx50\\n\",\n       \"aYBogAQSvQIhUOELCSqzgDRkvsx85vP3u/6Y7cJH0Jhjxhwn7+WldKXkSqU1O/fc2HuHWbHWilhj\\n\",\n       \"jjnGdCsin4jIP6fvfCeKTL0TvTQS0Wq9Nr8sQisGvOHHROI8Wys5sar8Pe5iDVJs+jP/LcjOA7lK\\n\",\n       \"Y4EnaQZUKfaVWYLy44RkBVZ4S5/IhyfGm/7u1lep50ACVltHBNIMZF9apbzcfy8iLrCXEqpm797x\\n\",\n       \"SKQ2rKCYbG9+gRJRCW8Hojy9Fp/MHZ3KxNdAffJEz62lFfSIN/Kejm9l8j0hAgmOz0JnI+4PCzda\\n\",\n       \"8DYj78YLklMYDSXQFUaWEREc8gyr1FcmMkLUkATU6h7HpzJ5E1gcfZOkJpJJYnoNxOdGQ9popSmZ\\n\",\n       \"eUP5thikdLaQSoC+ZSx/YE1GfbzHebLAatfrijmBCF/G0giJCfj5StvkJKraV9orlAmvlk5Anz2+\\n\",\n       \"aKV1uNXfbLck3QBiZ4QV3/mZ+zrOS2g6X/M1tFJmEZG60X5akP+k+S/evn49b1tDOmCigoZ+9kTU\\n\",\n       \"frei+9pAViChVX0D1Glc+PdMMDeh1bdVKjApNcOYbEi6owSyeAc0d7n01bKJasaE1kyzXyDJagCx\\n\",\n       \"Seh75mu3JK+72f+PEI4YHel48DkuzfTvBXkcWmCxLC1JspgSxYIQBLsOkzcQEVkAsT1UTmzOQRpn\\n\",\n       \"IWBXPYBYJh2/tz5Bfmmj/YA87CIgUlPkv44hcBrRY2cA8X4krzXjJ08oBBkJ1YuQYdhVjqqmyCxM\\n\",\n       \"5F1qBUKHEwvCapsU4sjdq9c6T77dk3BjoW1SAEEn8FkiYIJT7G2YYp6850kIT75IGLnUf3vytavh\\n\",\n       \"2dcO5PsKmYYRqHo/cL9G3yWyf4c+uaUMyzqF0DDxpW1O6EmmoGv1niQ0nk+VPqde3xA6DCS2iLXY\\n\",\n       \"htH/HgUa3ZGEa/FMooSIpEAxi4UXakWFzvc8dy8xB3zx7M/mbZ9/9rn+gfl3TbI+Dy51H9c7KsCy\\n\",\n       
\"PkO+qoMhx9TXBiBSFbV/ApkkFni+J7b6nvhQsvl/KCI/FpE/EJHnIvLv/4rvvvt0DREiRIgQIUKE\\n\",\n       \"+JcgPgiRmiYnI0VR9B+LyH+N/z4Tkc/pq59h2zvxf/zTF5KAZ/T5T87l6uHF+74WIkSIECFChAjx\\n\",\n       \"/2u8/KqWl1+ZBNCvxoM+6EUqiqKn0zSZaMe/JSJW0fdfich/HkXRfyCa0vt1Efmj9+3jH/4bj2Qg\\n\",\n       \"bziBN9LFpad2OsD+8eT45AIaPC1D0bgKJkxOgOWS0X9rartFAs+z0dMDGdSr05Y8f6Afkyz4e6q3\\n\",\n       \"seuc2H5zp8S7qzOHLE2B9uHZT+Ztu05h5gO0rfLMYccSnmx55GmXAfokixUdHwTMmNKClqoYibA3\\n\",\n       \"zuTheZNU2J8J65YdEQwj+FuR7pWlXWICLgcj++WextwdNWUU3YPAEZTa20G/59g4LH+21vaMkSox\\n\",\n       \"pXERkSI2jR9/yR5awPiUsixAmBxJNOVUaUqjHT0FZl5/I5EtTcn+iIZiT6Um198uFw6tL6A9diSf\\n\",\n       \"wg7nvl25AvMZFMDfvHVSsLVdXftvzafPdWGYxG+EdT/fDB5TA6UHMqQROlJx39/q8GzJgCxGiprt\\n\",\n       \"FBflFtdIYxHRIo12vvUxuYcuU1Z4X18iLdjUTva8AaEzJx0tqzkpC+pjGMemLXU4eCrKSN6fPnSN\\n\",\n       \"G0utjpRGbaBY3nZUWHLSPhYxUXr+17f5tUJFu/BxZUrtGXmY2fjrSQvJUpqswWW+im/e+DyxPdN+\\n\",\n       \"nNI8ZartTOi2sJGzWvk4tZaLY9/WVEhVkLKzXevjKx+nd1Dl78kn9NRqGy9JM6eYfQeRsl3QZzMB\\n\",\n       \"nBXz4Q7AKVsQwFNquxHpQOb/x0ij9BWlalE8I+bDRg+xCue7J8X26z0KdKgNKxQtHWtvky20pcrU\\n\",\n       \"55PD3XN8n9JiCx0TOQpLmAifYUySBKF0qaWR/VqTHIRl8gQ17b2YdJQmjPuOiodMDd7oBpLTZ6Cq\\n\",\n       \"5Kn39c8wPp+cecqyLPU+NXSeu5P2k2tKwd2CynCgdtpjPq8ofTp1uqPdzRciIkKXJc9e/pWIiNw0\\n\",\n       \"X83bViieSdgTFP+Og88TyYR0P5G5Y2g/ff3sT+Ztb97+joiIFGeYLzunO6TofwMplrejzis5OWBI\\n\",\n       \"b56onpY1j8OMnsUjKBif/9paPv81fR5PyUn+7H/xdPrfjL/1RSqKov9CRP41EXkQRdG3IvLvici/\\n\",\n       \"HkXRH4iO9S9F5N8REZmm6adRFP2XIvJTUW+Of3d6H3EmRIgQIUKECBHiX4L4W1+kpmn6t9+z+T/5\\n\",\n       \"Fd//xyLyj/+2/Tbt23ulwWmqb35MsCxQrtmf/G3VCJsFrWqOd5o9LNe+1E7gnTM0RBIDSjRgRZaR\\n\",\n       \"C3SEV+woYW8k/e1m6UjDAkrc6YnUWQdd1RxOjj5cnf1IRERWpED99PJfERGRn3/3R7hmP1aWAhnI\\n\",\n       \"yC8NJOeB/MdsOZeS1IItjtuayOP4Tdf76mNtq85OryGJiUTc62e80Kygup0SYTRFl0kIJVwvtJ7g\\n\",\n       \"eHLpBis1jsVXzqZEsdsPKmzgAAAgAElEQVTzigSq3EYEp+WqoXoZlVqbYve+csJwZGTj3FcVW6zi\\n\",\n       
\"x87f42+BdpwtCPURve8m02Dl/SJeBs7+g1Omq5KmpnJ13ONV4SvddaFIQP7A0ZSb6++xY1/p9UBC\\n\",\n       \"djvdttqQ6ndmbul+//MlVnqFr6qaSduiP5JiNtYvCyKgV5A9KEtfuW6xmu1wc4zgKyLy2WdKLP3+\\n\",\n       \"u+/8+oEObB55G7a4F9XRV4nbM1w3KQKbJMREFQDVXq/j8lJJ7q9evZg/W0JW5OaGVK/RQStSMbY7\\n\",\n       \"vKVijwHHyokAb+X8BSHMPcrpS8w7I3vooQ25PLk1BwQqSjgcgH7HhH6hP18+eDxve/lar21ByMnG\\n\",\n       \"+il5/CVAlid5V2pEbNyTrMtyhbFLXxuBkh6OpOxtauxcZAKCeH2i8nso8BeQuKgO/phYr7TvsLOB\\n\",\n       \"AScTKXAbGZm9E8XGJzkAmFJ9tnQkfq4QKfE9IiJnILEz+vgWMjJ96XPNeqUl+V1HzxioZ68KP9YE\\n\",\n       \"nYKRqLx2L6K5vxIWgIvNUkZabA4jSQigJDk5SyzMUYMQ0RYK4AkV9Nwe9Xk2ZBjrjd/YbaLj5AeU\\n\",\n       \"ufnNRyqFcn7h1x+hUOXlzucJk2zo6P7vUdBz2vvYNX/YnsbpRftL/e0OvoKEKn1//VPdR+vz2oj7\\n\",\n       \"lOd+/TFI9BJ5QcuDjT47Tieau1AotSh9jP/86z/Vc3uAa2n9WWvXlRU+1xk/v6N+YkVjy9zvfwJP\\n\",\n       \"1oiL3OyekSvGPQT2PRGUzUOECBEiRIgQIT4wwotUiBAhQoQIESLEB8ZHMy2OZSVN6wS3fIFU3EhK\\n\",\n       \"zEjzrJYO2Z9vn4iISLl2yO5nIKWVa4KAM6RqKiKgIZNmukgTEas7pALZoNTSQkzEXMYKH04EWZei\\n\",\n       \"eOPtzomlP3j0eyIiEhEp9tFWCxpvNwrd3oqbbKbQqUgzghMBhVeUbjDJXCYgT5NCxmzkG0FUygij\\n\",\n       \"IiLxWXfvnEoi4pkppGRE+rN9NH6fFhuFRSNKd2XYz2pJCswgLKaxp3ZaQPrjwLpA+nkGCPhYe8ru\\n\",\n       \"7HyBS/bvl9BH2YlDuyl0puLR4dntQmHuZe7XeHP3CxER6RO/nhbEwhpqyqaJJOKK+gmlNk0BnbXF\\n\",\n       \"GriBPti4yWcDU+OE9HZ+9MlviYjIixe/mLedkA5NzPiW+yuGwtUDVoJGG5Z+DdWoqa8yccjcNKVO\\n\",\n       \"ZFD76InC6BPpyLx+o+34+LGSxy8vPRVp5rqv3349b/ut3/5Hul8ilh9vFdJ/8MSPbwrMXUO6aLPh\\n\",\n       \"sZ/72Tl0ZNAnWYl7De2an/3U9WQeX2mqbGJlcfSnntJdphi+WHifMPL48+c+7s6RDpz7GI3rAXoz\\n\",\n       \"LZshg1qwpHlif9Dr2a49tWBGuynd/4cPde56+ewbPyfcivWazH0xP90i3bvd+H4t3behNOaXXygB\\n\",\n       \"+JNPnvjhQVuYWj/+4Y1qek1EdrZ0aEdFCbNGGTgD67X39VmBndJtAiP1iEjRI3SOJkpjjkjfMAE9\\n\",\n       \"zi21SarUoGPEMJ6XhadiMqS5P3vixeGvoRV1TYUilpV/TKn1qdXvNbXPHWZWvWv9ubNDCvwx7sm9\\n\",\n       \"tFNm5+bXtTS9Q0r/ZBlMm4mAbirfOX2vzHXMdCOnpTEX1Xq/Vpl//xM8E55QG1qmOqY0egNdsubk\\n\",\n       
\"89n1nZHI/V7XKOgYqADG+nhJziNVg4IiHKOh58pMraA05g5pvqTxc1+X2j/ZFaJqdT4tSKtPoAe3\\n\",\n       \"pXTvd880tZhWOp/3Gb07wO1kvXHJyhw6Yq/ffO/bcM9ypo9EZgLt597i/tgzQUSko+Ke90VApEKE\\n\",\n       \"CBEiRIgQIT4wPh4iNT26V8LYpVB9rt8tMVxufLX26ZPfFBGRqvPvPbrQ1XTXfDlvS6GsfCCp8hhk\\n\",\n       \"wxEr17b1N/PFGitXNtiJ3l1VLzb69r0glCqJ4CGVkCTAXlc9m/KR7w6r7hwq3iWRHuPRPLfImwnn\\n\",\n       \"Odx7GUYJO8kElChX3pPaeQQ0pSBSZAsS6RLecQtCpDKoR7OHVDr/y6X+irqVC/dVstLtJTyXREQy\\n\",\n       \"tN116/fJroeJ8j1U221l2NIqYLcH2T8njbFcz6+gFVmL1c92RarckyECvpp/VKoUxdudIwIy6Epr\\n\",\n       \"ATmFA68+gaqMtKqOQbZuW19Btwdtn5dvnJT98ELb58nGPfESeAI+euDFC/s7/U21A2Gbrt9Qr2bt\\n\",\n       \"fXgBD7esdFRhmIsCvO/mID6XF94m9Qmr1Nbv8ZPHivAsgeBldG++e6nI6dOnvvo3X8OavMkuUXZ9\\n\",\n       \"IMJyBHSACyW6g7a1jUMRkb419Wq0NaGqBoR2JDVgStkjlTobUX5HJfHbjSFdhJJiPz0VFJyg7G6E\\n\",\n       \"YvZGK3MjETv6NGEwttSGprKcE0rV4fxG8vWyEfvpZ+4Tt7u7xTX4/qxvGyLcEEpq6tnVnW/bALF6\\n\",\n       \"89IR8bKEhyX5+u3BwJ1IxTk2XzGSoL5An00wd/B4NVRvovEnKHGfuAAEHmoRlZXHkJ8R8nqcQMqO\\n\",\n       \"cpp3IaMwmer+5G3YQ8JlJF2X3/zJ74qIyJ988ct52w4If135/HOG+5ORU0IOkjH7fw4o0Hl1q+05\\n\",\n       \"crHHQsddR84OaYExS2RzMUmW2MdEg7k1SZ0UvkQ7Rr2f09VGx1sf6zmtEn9OlkAET5XPE68hsZLf\\n\",\n       \"+ZxwfdJzfnbwtn4OUv7xSOeO/Z1v/HmyWel+yg0h4TBj7TodlDX1//k5Vfs+zIGBvXNHEOuvLshR\\n\",\n       \"BE1Laj5mKCFtxtIRuu/vb3XuXq2dbG9yIlnsz98RKFkae6GKQM4lpUxUFmv715O3Z9uZd6+fVB4t\\n\",\n       \"RMQLYf5mBEQqRIgQIUKECBHiAyO8SIUIESJEiBAhQnxgfLTU3jQtJSXhk92two5F4ZBtV+nfT88d\\n\",\n       \"Yrs417RI3jrstyw1fcGmoRlSNevU4cnlWmG8HMq2t6TcOhrcFztk3QP2PjFhGyS2hFNmMFfNKFVm\\n\",\n       \"+i2pONzY1watQ7F38muYoLvUE7HZVMFZiqWCAm1MMLIRlbekNn7ozYyWCJgg8vcwGd5VpLFhXNue\\n\",\n       \"tTNAjiQtkHpEW8Rssqowa0747MVDhaebtw6HHk/6d0Kpghxk02g2/vQ2fAPdnYdnnkY0aHlFmiGm\\n\",\n       \"lFs1rmO1zD/BNRCMCx2Xy62TEt9+B0X7Se/XxT11fGh7UWoxgvZXTQR867HffOOkbIO7CzLhXaIv\\n\",\n       \"JrH/1kSbY/QFK3oQcfLqQGasZvKcxn6eKe7PQDpiFdLWfe33M0e6kw0/J+xvD9JpRErkiyW+R2nk\\n\",\n       
\"O2g6/e7v/9687Y//9P8WEZEffuppTOv/Lesyoc0ePfB0Y4P0TQ+y88NL17g6nnQfmzWTrfUauQBh\\n\",\n       \"udK2u772thsGI5t7yqQFodZUx0VE2pOeU7l4t7BhwDqzJG07y0qyzrBx3FsyQ02Q0hpJx62udexw\\n\",\n       \"quzyoc5nxwORrbHrDvfOVMJFRAbMDyVpppnGTUpzUnVAmi32Noky09GjMW7XS5p6Jptkavflyts/\\n\",\n       \"Rwo4JX2yWWOHiP3mFR8RLyEuzbSaHBhqTVslZJYenWv/mJbaF4ZnPq6OlaaHv3vjBQMTzoVJ1B2K\\n\",\n       \"MgbSgOpSK0rw61+skQI6+tgZcC7Hk/b1itqrqJF2pfYyw9+EH6c9TIOJKG7z6anz585qo20S02/L\\n\",\n       \"CQVNBZwViLA+9drGNzfeX57f6Pd6utf7k57TXe/zdJVo/4tIgX6NFPCWtPUuztE/SayxRgqwhRI4\\n\",\n       \"NeF8TjGl0Qekx+qR0rKj9slj7UbSpVEUaDyZ+XhDWnFDo21yRGFNT+4Ulyslsbe1zzX1nMYnWgTm\\n\",\n       \"sZGoAnYZNRVFjXgmxROlpanN3hcBkQoRIkSIECFChPjA+GiIVNM0syKsiIhgRd4Ssbs+mWIwrZJB\\n\",\n       \"/O2FVj8g8Y0E5sQgZRapr9xNvdw8nJ48ctJnjbdfcsGaFU4H8rBrKn2rb+lN19TDyyWdE/692bnX\\n\",\n       \"W4zjLrMrnLe/rR8hBdFMjn4lEd6wiRw3RPpWP4xOWJxifXM+WzkiVU66mrxuXvo54bddB/SLCJsT\\n\",\n       \"yHxJ6avlCehTRuXyHf5uSFnWiN1rIjZvNvr3jwsnBb4F6shed6YbHaE0uMj8+0miiNT1nZMto1Gv\\n\",\n       \"a7MiIiIYi9XJyd4DvKtWay9/TrByWueOSC0ea9s+++IP9fi0Wi6Wei7LpZPo+wRltTtvV1vBDqRA\\n\",\n       \"/OyFEl8LQg4fXOm55CQxYaZdM7GXeyC8A/sTrf67Fa7PV2t5pufU0n3qB+0L7D+5yKDeTaiL+fN9\\n\",\n       \"8kTRpBev/LoalP3HhH598vSHen3fO9L49KkWeySJj+fn3ysptCRl/0tIHfSE0tiYMTmPgSQMrG9c\\n\",\n       \"XXq/OsBDL2UEBSvNR48cEashz9ARed3QlGHv25ZbRT06kGdzQlUyjPueUDVDrnIiWzdAAlMq6+6B\\n\",\n       \"jrDXYY6iEFq4z7IjJSmwH251zmhAlM5i6utAlUYizBux/EiSBPEAUjSXtaNAZiQSr6Fu/CAw78AW\\n\",\n       \"CBqjeim8CFMih1vbUQ2BJIZOU0n+/CchgkmpjTGRArsATYu2Ou7ih+7rePpr7Z+v6ftff6lyIjeE\\n\",\n       \"YHx3rej0ckPXCqRrd3ICcgd/wjR3xMHU++0ecv+f8JDpO7+Jp6Op6NO1AokZJyroAQE+KvyZcARy\\n\",\n       \"tky9AMX6xBKFDzwiDHS+IVTlFvepp+dUigKJiLYtMD4b3iMuo1yQJABU0Seau8yLco+xUzWEfqKP\\n\",\n       \"sV9kCxQ1IbV7M/59/doRxuRK7+OqJoQVKFpEkgMdPDkjqI0PVKj2Gr6iLKvTDbrfvvUCiCX8B+OO\\n\",\n       \"SPTotDfkU9q3es4P135P4oLvwrsREKkQIUKECBEiRIgPjPAiFSJEiBAhQoQI8YHx0VJ7Q9/dgwLL\\n\",\n       
\"zAiDRIREamMiFe8aWkh15fD8Hkq1NRF1swVMIyndsCg1tTKOetxmdNivWCu0OBHpsUAqRDi1BUj3\\n\",\n       \"eHLC4B2UgNvB00gXSEckgzfxrKwK2HW9cEPTw1Gh6JHMIC210AkR9pACS3qHttdQcV+SufIWcPzu\\n\",\n       \"uROwjTT/tn6G/bvG1QS49+yM0kOAQGtKAZoHdEw6KlMCjRFqz7FXAuCS1LbPkUa5uyZz48Hgc4XA\\n\",\n       \"15SerBtNBb29dn2cNFaYvx38WElq5G2//9e3qinW0jk9uPgNEfFUnIjIdqlpvuHJPxARkV/81f81\\n\",\n       \"f5YN1q4Oz/eJqfh6Ox2g/cQkxgg6Mm/evp63DTHMVcXJlmacbZJia0ottkiBkbSRdL2lVp2wPpqR\\n\",\n       \"dUTmzmKmod4nOyMtU/qsR+rry6+/wkesZ4SCDSJ7JzCjvXnj6ZHPPtNigLHzMXkCifvs3NOi1otS\\n\",\n       \"GveWW1jBvDci0ml3gKFo5tss7WR6XiKell4sPd1t6faIlNJHjONy4d8rQQa2dEZP+lRmoM2UgWav\\n\",\n       \"fT0nFW/LlHIRhY0dVgw3M+IlEbVNM0moP5lZcAt3gKr2NGoOc/O8JG0rFDSUpffTznR0SBcvQhoj\\n\",\n       \"zT3dUqMoISYKhBkOx3ZhlIo1de6xIWI92r0ntf9UTLOPUtWj9omJ0veWeoq3rlXWv9G0eALV/5hS\\n\",\n       \"i49++++JiMiffO+aUSn0ob754q/mbTdQzL6lZ8JU63xXE3n8AJL/QKRoa6cCzTmNTDqGZhblZ20M\\n\",\n       \"R7Qtwhw/UAowRvowS31OGqAKf6R0Ww2mvtU9daQwvoeO1oHSqA369T2ye4r2pzRWijETp97HM6R7\\n\",\n       \"G9JbtCIw1k887nU/+1vt/5VnUaXHs2Ck/iexnjMbH8ct5i5ylri71c+zjfeJAdSPbGDyvh5wmvUO\\n\",\n       \"vQ1vdt/jPLhQR69xTYT5KUHxRk66iCi2YbNyc9mISG19syGhq/dEQKRChAgRIkSIECE+MD6e/MHQ\\n\",\n       \"yxj7KqCA2mxHC5hhAgGb1LbvoI68Jw+h49FW+P7jE6CTlo4R4S21AHGsIGmACqqnWeRNYnIBKcka\\n\",\n       \"9EY2JGLnCWXaEykgl62u0lZcJgvfLUMBeBVoK82epRagKG4luiIiO7z913R887PLy9+et60LRQIe\\n\",\n       \"kVL2L77SFd5gaE5GyupYhdY1kWMhRTFQaeiEFWlLy/S213vy0sEXWeWKSI3ka5ShaGC5YEq/Hs/a\\n\",\n       \"kFfQhk4Nk6N0dwe9hhOtiABmzNIIIiId6nNfvCSieqLX+/jc26kXI8r/SEREHnzuK5PXUNG9Svka\\n\",\n       \"9PyigZTF0WkbUvEtsDozJWYRkaHVbREp2tdAFtMlzoOIwLs7FEDkjqCkkd7XqSHCLPo4o5mpgJRO\\n\",\n       \"K63DUfvWduXH70BaT9fa1jn1yQH3uCRErgKxtWByNIi/9YGQNvT1DfnEZRhvLRPlC702azru/6Z2\\n\",\n       \"z75+I8rZ17xCxIq84zJ1oBQsidKBoL0k5Mq8KIultRf5xQGdyknCwjz09reEiOZYzZP/pBGws5RK\\n\",\n       \"12NDs72frkAUH2k82z0rQcruyX+0yIE05X5OTfsu2XeyIg6SmBmALGfk/2eXNhBRd0DftvteEIJn\\n\",\n       
\"8hsjQxKYu7MLR5MnIM3Rwoni/VGJ4nHjaHqUYd9UABFfaUFDBNV/eetSBwmKAX7z89+Yt/0P/+c/\\n\",\n       \"FxGRu9rHRDvgvpJ0y7fXisSPhHTUg6E0PiYilN+XC6jDT95eY2Pl+kRAN0Q4p6IokPHrg/cn846t\\n\",\n       \"ckaucfzJz70XPRcD02IqNmghO9PR/TL0832uFBNhJam5HZSEUpokBH3vhCKovvVrrA769xFuExUV\\n\",\n       \"MZiKeUzPmhJSQwRSSg9F/aH3flrheXNIKBMFNJ1lP0xGpgLqFNFzugI5n8Ds+dlumSkRkRHPsZGK\\n\",\n       \"QqxQpCeNoQzFQFHuWSdJA9k8RIgQIUKECBHi7yTCi1SIECFChAgRIsQHxkdL7e3313K+dcXoxhDQ\\n\",\n       \"yd/tBmB1rLd0u1Ni+YHI5gPg+XswOkiMnILaQ+17MIVXgvEroKJjTOkunEoUObRt5r4ppQVHCKRM\\n\",\n       \"mcPdPVI2Y0bHAEFzNggliHO1VFIwK1bHnebKBjINLgpoW5HJYmO6GympWINE+PDyJ/O2r7/7C5yw\\n\",\n       \"tudEBMcIBPyWIFuB7klSMj4LYnVCaSxohVSVp2C+fa3aLiOdZ4S/U4KqT42eezErNXu7JiBMbtau\\n\",\n       \"dn13q2kRNo2doCPEei9Gym4pZfLNsz8XEZFF6f0uh0HoBBg7Xzs5egktkpbI2UluqvSkgI9r6IhY\\n\",\n       \"Wx/1vm+IFDy2SBUQ2beJNC1qaST2PU2QUVuRGfMy0VQZE7YPvRJvp9FTJsOgv2lG0kyCBtC9VJkR\\n\",\n       \"6mNtE1bstrTcy5euLfXDH2ra5c3Rt11c6pj54udfzds+faoabQWlkay/lQtSFkc6wlIQDNmPpvJP\\n\",\n       \"WjgD2qlpfFyVgOon6msxNOBY78n0oBJKIyVo8ARzBx3KyfFkUN5CnyxekBn6aER1/+0sM0fpvsHG\\n\",\n       \"Px3/NJkrgo+xCWmGHqTn8wsi7CNXklG6zVKwI811gxHliWx8ArE6L8mVYFb+p9+aUj5SYCM5MMzD\\n\",\n       \"k1KGCf6OSFtvWlm/p9TSSue44db13iboTU0Ju0zotrhUAnL1wsn2/U7nxJHG+sWF7rdkCobdPWqT\\n\",\n       \"Ear94+Bz/PlKC34WC7/HMead5QJ9iPpwd4IW2Ojb8lK/39CY6EGGHiKipZzmh8y8LSpA81j4PTED\\n\",\n       \"8TIxjTOffxqooh+P3tcr+5zuyRLXUC642ED/XRApfLAULP22BwGeMoVi3aibNxI5He1q87XuV/fB\\n\",\n       \"emMN5skbcoVoTbcw8mtM4R7Rkir9gFSqFZlMlNq0KWukedrMsI8nek7g/DhJd7L0KeknZgWeSQue\\n\",\n       \"i4KyeYgQIUKECBEixN9JfDz5g26U08nJqal5rMWk2D1AYXfwt/XrO12R1PSWnmNVFRHSUAOl6anU\\n\",\n       \"PY70TfgAYrlEvjKPRqiNT1QajpfUfnBELMZKZCZJisjlpa4Yb09e1v/gUq+HuK4ygjRq6FdGROz1\\n\",\n       \"Us99aAj9KPF3QiReQ8noPAeooe+OrjZ+sVFEICNi3dVDvcZX11ouGt0jLKI0dPK2ttX6QKuVFCt3\\n\",\n       \"JjHn8BU70Crp1VtILFDp/mqFVdz07mrC/Pwm8jRKoET/4MpX5Buspr/5/st5m5XLRiR10XcD9k+k\\n\",\n       
\"ZKx6vvz+T+ZN8VOU3ydKjk8G7y8bEKHPNy4hYGTnNHWUbAT6SFZXcnOr287P/HsRiO1t5dc4Qqk6\\n\",\n       \"zrTtmt77pGBVxaslk0IY2cNs/oz6hJUd00pqANLB7gElELObG+1jFxdehvz6tfanH//YHQD+4i/+\\n\",\n       \"TEREfud3nbD/5S8VfTzfOtl4hRL/mojl5h6wImXrxYysYKVJfaMHsTslREBmKRRSMQZinRJR3han\\n\",\n       \"TOw24nnM6vGF/j3hWDmR6E35P2GkGwrLa7pWQ5qeffPVvO2Tz7U/NZUfvwTZtz7RPUZ5dsxFLhiL\\n\",\n       \"6Yym+RjOMYYYkTWvO0ak7O/9wVW0J4yJ3Y0Tm5dLvf/LBUkiRCanAakVmmtaIKH5uaOksjBUiVbw\\n\",\n       \"GMdJ79dv9fzxyufpfn+N/fl8KjYHAX2Pn7jX5qu/VF/Hiojdr251H4zSFigsici7dJrM/9LHZFnA\\n\",\n       \"AYAQrgyT7BIEZwIQJUV/TSJHmgcQ28/PXMLhZq/q3fudFyWYKvrYUt9Ngdyw7AYKcGyO7+n77Un7\\n\",\n       \"xoGKrczPktFU1FpIklNBE4jXlEyRDJIpXOzQ9u8iZwnQ0Rz7Y4AmRtYlK7lNgNKdSCYH/re7g5/7\\n\",\n       \"ONr483ZaA7Fmk4O5GAIyOfdU9HEszjANmGPrlqR7UFDVEUqfgHifk0xCucS8m/q5jySt8L4IiFSI\\n\",\n       \"ECFChAgRIsQHxkdDpKI4kdORBDEhCEZ6hJKkulpg3kRt5YqEEp1vdPXXib9V7yotex8Hz1uPgtUR\\n\",\n       \"lhhjTytdLILYU6c8A6oU00oLH6edr+CWEVY19PZ/Ourx15ckiZDrKr3pzFfM35YziH8Wha/W38Jj\\n\",\n       \"zkQQRUQmvDnTC/ksOni7/37e9vBSV0cji8lZDht+RXFM6BucrseOSuNrc5Vn3pr+m1ObFJH+Zlo6\\n\",\n       \"0lGBN5bQNRrC1RPCaJygKIXgGon/bTZof1pCPXygiMntna9gDwfc14R4JgKBVeoTZa77y8i78dmr\\n\",\n       \"vxYRkcutrlJK4j5Y6fTjS18RH6G70LLQIFCUmHkGWDK9JvFRUx0oFtTuB6w6sfqJSP4hQbsfar+v\\n\",\n       \"C3NpJ6E7Q04y4vJEWFUPVNZryMnQUel6aStsE+Hzzyas8CoqdX9wqejc1198MW8rwPnjY1WVyQr4\\n\",\n       \"ODEUsyOfuOXSOIcQySUH9w7XtWZBVCBMEy1JTUAv5lGBZmLuR47zjHg8Q85hwraE+Gsj2BQxLfVN\\n\",\n       \"zHAa+Vh6jJdvCBFGP+Vl9c0RvnrUT2MgYcNI5f8YFCUQ0ZrEL9MZxSakZTSvNx/rtzc6d/D97Gvd\\n\",\n       \"D89nhgCedlTqbWKmEGdltGy1An+FZCoKcF6i2O9TnGtn74/kNQqkIyLpinwLMU9CDhITjAQKUCxI\\n\",\n       \"mgBjeLd3VG2P4+8J6avBKy1JuqJBv+8n5xIar7InRKoZ9DedyX8whAMOb1EQRxbnOVK5fn3UdkoI\\n\",\n       \"EYvsmRX7/RSTPyDO6YDzO4K3Wbd+XwEqSt/791vIDqTUJ9tZJoC4X7jEhIR7O0hnNC3zm4CSUsYk\\n\",\n       \"wnXn6AsZeZ3WJudzTzpI99dWhAihH1cVcxT13zL373XzGKP53BApjJeYMk3rxHhT1P5AdRtC805A\\n\",\n       
\"wtgT82yj7bOg4y8Kk1PxdwzOIrwvAiIVIkSIECFChAjxgRFepEKECBEiRIgQIT4wPlpqr8xjaUjG\\n\",\n       \"3IioKZVGW4l/3zs8fXsHojSlm2KQjBeRw43xWmHJw/HZvM38tAymTOn7NVSh88SVmIcaEOfS4dG6\\n\",\n       \"U6g6I2JrhPRZGrMkACB7IsVngHFTIwVTyq4DyZgh3h3Sg0VO3khIX7ASawrCcpc7sfPN3de4HieF\\n\",\n       \"joC7sxxwJ0GXPWDUaXhXMXk4MtEO5desrDzp92K6nizB+RE8bMzyYWBoG+kmpAJjVpsHUXW1+LF/\\n\",\n       \"H5Du4ydO9r1o9J69eOGkfCPb9pRaLJAWWKXe7ZtW2/2b7/9SRESeLN1/8Gxzhd95KqJFiXdLatN9\\n\",\n       \"CRifSMwZjhuTsv7tnab5tiSnYYj1gP2mwmXVelzuV/tOS8dbuicZoGjqkpJlSNUwURqp7InSQlWl\\n\",\n       \"6fUMqbWKyf5IN7965V6HBVKGHaso48BnZz52zMPtePD0vctTUKpELLWH9ByprreWliKvTatxyKms\\n\",\n       \"e5Qe+yD/s8j+JaI+Ununyu+JXXcMdW6htG+av3u+lp7enTw9VEFWYGSvNaQgEpqnalOFJ2K3FbdM\\n\",\n       \"xGlYgWRthQ2UnfRtdJ4nEI9j8no0RwGbV0VElkjjtnz9INsWCann416YE8Mk76Yx64ZSe3MqkI6P\\n\",\n       \"1GKyonsHeZSE5sSp1HEcZ/7bAcVAKcZQ+4akEdD+d3siR486J2/Iw/CwU0rHiYodOjxPErpPluYe\\n\",\n       \"SHanAZE/ncwHkjUx3vWfXOD5k9L8F6EAKE2IKoL+VN2TnbGL8DbuBW4Hkx5joDFpqUWW8LAU6EBU\\n\",\n       \"DTOoqMnr0eaJgapiWozTuvPfWoZwYAeCSOfCHGnpKCNJEpP/oNMcB3MxoHOq9O+m9uMvOxSKTT5O\\n\",\n       \"28Z8Kr2PSYd+Wuo9LOmWrOA80cee2rXn2Q0py5tn7NmG0ugpxmTm6dO+PuEa/BhV9atflQIiFSJE\\n\",\n       \"iBAhQoQI8YHx0RCpRVlKS6slW3V1hFbEOL1Xr52wO8AZ20oURVz8MolJOK83US1ekaJ0H6tAFsab\\n\",\n       \"FibWRmgF3uZtZS7iK8Ju8FVSXqioW1X5sQZbJbDQHVZdJioYExHxCCJeRyTO3VGve0WoyhKIxRi9\\n\",\n       \"i/RERKzbVUrGLBJyuh7wpg2ickraDEaonZhYblILdA2jIRFcFQASZU7ExsiIf7RtxP3sCGHYV4oi\\n\",\n       \"9VjVFYzSHUBwJBJrAXFAW7WLiDzcqjxCmvn3vn2hPnlHWpEZ2ZnJq2ugVHfXSp5+0zixe7PVVVhL\\n\",\n       \"Hm4trrXv9vQ9rEdWq+AAACAASURBVKoW3v7FFuX01J+W8LNLiFDegVw9ogBBNr4ymnCt242jb/ud\\n\",\n       \"IqwHQhPTVq8niUmkFqvvKCExRUNOU+qnuJ4GK9IDXdfF1RPsg+4X0Jcteei5JIQfqwDx3bz5RFxY\\n\",\n       \"9XDwleODhw9FROSIJTT7gKUop+5IaiNBiTtVOkuMfjy+hwCe3OvjKGefvO0EbTElhup5f82w6q5q\\n\",\n       \"v/8T+jCvtPfwGlwQ0rTfKxK3pAKACefEKMkScgsJwU6VeQICQYgYfUDcv349p2+/+mre9uiR9t3V\\n\",\n       
\"2vtOD4+7h48f029RqED3zu5BHL2LiPVAH8rU77/Am68nqYMI1z/GLjUQW1n73ufOtNSxG9OaPoqt\\n\",\n       \"KEiPf3PniOjztzqvxSs/qSOQrloc/ewhSdA1LH+j7VSTJ+UCYsN9w/OkydToPmqaVw2lvD34/d+m\\n\",\n       \"iuZ15KFpRUsjo08ziX7eJHWFohwak9Nozx0UO1DbZMm7YzhPDdX3/dq9Y0QyRxahn7hQQf9tyLx0\\n\",\n       \"jBbYRrI/pR1L2zAlYnkD1JdqSOYiBhbJnqUeUn7tMDFZb5TOiOrNu96ROYo3FhtGn1FEQMLV5t3J\\n\",\n       \"52SemFHOvnr23CfkahYfpQKE7t0xyBEQqRAhQoQIESJEiA+M8CIVIkSIECFChAjxgfHRUntxvpJ8\\n\",\n       \"RXAeiIrxQJpJgNvy1NMdAhjx1bX7NR1bhXuXRBQ3r52EUkuzT97MIaVUADy0bnauBZMipdQSFrtA\\n\",\n       \"6imJyetsj9Tj4NB+BLLb8R5krFCxeeeNk0PRC2jGVA2prePudKT701qWgeF2kEFJsFX6BVI7qcPt\\n\",\n       \"lqookU5gFXUTUe4nUodGCm6KHOJN0Hg9aXaYplHUsoowFGs53ZKaZgqTN/V7h1tt4440PoZJv/fT\\n\",\n       \"v/7LedsnUDn/0eeuInx+rpo9qzNX5TbNmIhg5AHpq6xwYq2pu2+gtj1rUolIg7TX7uAk9kN9wDX4\\n\",\n       \"PVmb3HxCWjggG49Hb8/t2QOcB30P6bhhxD1piew8aFqmq70Pb0vVtDqSin21RyqAjj9stX+mQiq+\\n\",\n       \"UEwuM/baMqJog//TPnrzZOTUQoJ/2cNO98GK6ZdXeu6stm06MtwlJtPxAbOTj9UCl09I48e0wpqW\\n\",\n       \"tJVwLgn56pm2TkYkUsGYzVnbykj+IKLHTM6HttM0MtkchF1KBXQ4lumeiYjcQQOuIxX3zPy/jkTU\\n\",\n       \"xjmPlIJOQAYuTEWcFLuP0KJarz2NNPRGWPfx/PLZtyIi8slnP/TvTXouXGywWsG7kfTrYqSNLKOb\\n\",\n       \"URo9XekYKxak2I/+1FPK0pSwp7eu95Q80FRxSt6Zs5MBF/mAPJw0+tuzs4fzZzdf/ExERF7XPv+v\\n\",\n       \"kDK8rXxMJKm2Ewnbz04JKaW7jQ4y3fOJQ7oX97imwooCE29RsD6f7q+mjp3he+zrZlnmiDxWGxR7\\n\",\n       \"5FS8U5g/nc0rBHdE7yk2yCzdRlpIJaotMkrLm95gR1SRGHOsFUeIiLSNpQX9uDkKXiLM3ff8KvF8\\n\",\n       \"rqiIwdgb7Oea4pxyIpbbdfTkCWlzRtsSeR9/piiUSsnDtkMKtqG5q4In7ETtZBp8Wcp3Ra+noblr\\n\",\n       \"wPmNI7kcNIFsHiJEiBAhQoQI8XcSH89rL84kzn0Vkk1KQCwi39Yc9S34bEWrFdRYJnTqB7zVD2Ts\\n\",\n       \"VsCzL8n8t22jZbIxSJwxv60nIIeT/15T61tqHztyNGIlOJCDeFuZYjOV9YNsN5JPVQp1VAOzOlL4\\n\",\n       \"rktdOvVEIraF+EgKr1alGlGpfQSUboq41BQr7TW7yus1bnANNaEVZsDUEom8xsp4IHXmFCuniZCG\\n\",\n       \"ywv1zmqpJNxKsltaJaxB/M+IbH5RKEoTtUZwdXLshJXw4e0v5m0v3mhZ88W5yxR8+lTv9cXSV4nH\\n\",\n       
\"p0/138rv53KDEloiAPc1VKHRx0pmgoIcuj/4SreDN9Rm48c/nZQMm5MqvslfpNR3DfUi5QTpWxBv\\n\",\n       \"M6A6RGqcTlitUWl2ghL2Vexq6/te0Yfjye+JqdgvyEOqg4fZSP6DhtwMKGFfrchDDqTskVTcE6zm\\n\",\n       \"Y1a2x4qckaZrKLoz2XtR6HGXRPbvgQj2psRM43oC0tWRz5UhRxMdKzZldUbOQPwdCbpNcc4joa6Z\\n\",\n       \"eeuZX2fCsgpAxAl9OKGPZ4QSmfL4mpAuQ6JLKp1vgWy05L93dakIT137Niu46Aa7h36+Bc73QLIS\\n\",\n       \"NtYePfI+ef1Kiyaa2vt/Ac/IjMrfnaDsDbpZ6PUaWrC9cG88a4uS7mENonJ5RkgftsXkLDAc1Cc1\\n\",\n       \"/cRRsqnSOZms6ySNcY/HdwnLxcLmBEeJl0D1rpZP5m1jr+NqGFmdGh6CVNCSA2GPJkIfrMrGCkEm\\n\",\n       \"knVA07EnpBXUNIRS2rw3Ursa2jqS1EA06G+7I/Vd+M4OqY1/ul8jpCEIabEinrRkD01DjqnU36QT\\n\",\n       \"SkcErWphQbIrw6R9kbqOHHbws4QS/TjQsxOZjknIFWE070zfxxpI5O6O/Qfh7ECkcPPVGzo/RjFD\\n\",\n       \"UiiOIW/CFs+nW5I6qIA0JyWhdCsds4uC7ufwLko+z8G0cSJ07H0REKkQIUKECBEiRIgPjPAiFSJE\\n\",\n       \"iBAhQoQI8YHx0VJ703gfYp2QbmAALQV8XhARbwTc+/qG4DlAtlNKKQAjxSWk7XJUuDdByiAhsrkR\\n\",\n       \"YSOShzbTximiYyEF2JFp7BCZZgip4wKr7EjFtTWNHqT7EjrWAXciycmMFgT8LPHrN2J5RqrsdtiW\\n\",\n       \"oOW3t3qsltJy50ZQBcS7PXeCaXdUbH21eDBvSwDZ1q0bj+5PCs9fnnkKKAVhMxkcHm6QbkiJxGhQ\\n\",\n       \"aV4SUdPMYkGEzTJPT3QgND45c3j+LYxJp45J5HrcjEjs59BeWpMCtqWAEjbXRDrOpGvyFZl8ZqaF\\n\",\n       \"RdA6bs/FklJrd9o+U+Ztbcj7FHEfB3ma0kLrBciuMFfdUQHAza2SbVcN6U6tQewlBfr1Uq+1J30k\\n\",\n       \"657M6y9yM4hlbZ1ZLExEROrKYff2Tnfy4PGn87YEmjkxFYAcYEz7eOX6RG9vNLV0du5pIdOD2lCq\\n\",\n       \"zLJ2XW/aamT8jG0ppeBNvymjIgIjow+kN2WZFy5sMBPaOGMj1xzXA3I+keiPHXTXKD1TQ229o1yE\\n\",\n       \"pRs5jWy0gIjmGCPvswJ+a98jQnuPdGsH/aqY0sM9HBCYAjChnZrWt11cPcTxfb8liLox90mMyYhM\\n\",\n       \"4BsYAi83SI+SwvQAUamG9psuNWXIxsdLqIxz8YCZFk+kLWbm0+PNV74t0xRQglTp3Xffzp8lmONO\\n\",\n       \"ZFDcDtqfFunlvG271jRfMzgpvYUGExdATCjGychtIJ5N7ZHuZUPlWXfL+3CE8dxQG/aVGUR7CirH\\n\",\n       \"fMq6XDHSgmNPmAbmM8swRUJaeJhDMkpFDXCHKEufEzLToGKD7Pi+GbaIk8Yj0uWaTBX/6H38bq9/\\n\",\n       \"n63NZNvHlZlwn0iDsEehyEycF2/jy60fv+2gN0YaZCPakc3N89yexXqMmu6h1Qdd71kdXq91s6Z5\\n\",\n       
\"osRzN6aUHVKwDRUADaDSRFS8ERMN530REKkQIUKECBEiRIgPjI8nfyDDrD4sIpKkuvoY37OqHIhY\\n\",\n       \"VkEBuWEPIaw04slXkwkUaCciCmZ4Iz8cFK1ZlkTYxpKcXuAlKez8iLCLsvox8nOKQI5MC18RCBTF\\n\",\n       \"mWxnpMS+gmLugdRk4cmUl1SaaWxzWn7ERsAkryNjQK4XvnI9P//RvV2IuCfU2bm29URE0POHimo8\\n\",\n       \"+eR35m05Sse/f+Grup998ee4Ll9V9pBsmIiobWgfKzZPQMcGKnXNoRpvfNLVkkqDUSY8kor2cdLz\\n\",\n       \"zEhZOQU5c6Ky2gkNH5Pa7QBSfkzl3BFIwSsgWGnNCrZYBdHqJ8LaYyBOukC9vN575ykXQG4W/sUG\\n\",\n       \"/ncRldUu0ccWha6mB6rX3oPY/PotqY2jK6Q5raBxk5e5I4yGeo40JsSKFojsb6vv3kq9iWFqZOOB\\n\",\n       \"ybH47es3r+dtj5+oiva333k/SdFno8RX7gNWujGpMhtgY63OCs9GrOYS6uNez6+JGCWAdyZLXeCW\\n\",\n       \"5URUbkwVeUGIGLrHXK7PxN7epCHoZgPNJKDXPfaIgFxjjJ+OVKgCX8c1+Yk2uN8xoV4Fimb2O5VQ\\n\",\n       \"OCOpgwr7ZV+7ttJ5JKf5LAXqxmX1BqzGREA2cJZX34YEHDGGstKPH6EtlkuSS0j07y7yOcGGTLwk\\n\",\n       \"qYMa/fjoThXJ5a/hi545uPnFX4uIowp3pLD98rUWfhzQNiIiTayE/SXd6yTVcyqIbJzkup+2JvV8\\n\",\n       \"oChUzyNFcn9+3B/fHRMJIZdGjh5pXLWQ1o4J/TB0MqICDFMsSQmlHiABNKF4ybIgIiJDDHXwwvvk\\n\",\n       \"Bs+6YkHOEhkyLOT2YW4YC3IPKIGspSQT0gBN2+ckZ2Ek78ZcMUhWCOeb0Di1jM2C7r/A65MLNWL8\\n\",\n       \"5kjzXov5ZmQpJEMuJ3MH8P5/Aorftt4my4127GXhfX2NsZDQc6puraCKZH+A0i7YFYH9Jt8TAZEK\\n\",\n       \"ESJEiBAhQoT4wAgvUiFChAgRIkSIEB8YH49sHnX3NGN6YOXjSIISEBfZU7qhAfExJthtu9LUQkr6\\n\",\n       \"QDNkS6RwU1adoLFkxFERkRwqsnlKsDeg0JHyc/2oEKRBxyIiJfSRcoZRQQZOEocsOyjGDoDCMzI+\\n\",\n       \"NSJuOzlhc5jhS0/jzGrLlNrcbPRYv/3p78/b/v4f/EMREbl65GTft9eajjEl+IJ0t5peUxBPLyll\\n\",\n       \"Vujfx4OnliwFutt5ysLkqzpqzwXSbDFpcRyRMmhJWysBfG+K9sfaIft+NG0rMoi2FCxBy6aiHlEa\\n\",\n       \"yxSAS9I7erXX69gQAb0ApF0WmtobSB+nGVSr5tS5OnMEE9aI2s6KAUrxth4PuNeUgqkTbbOCtY2Q\\n\",\n       \"W8mwv9XGz22NftpRas2U1xcb0keBPtNI+SYjnk4Ej0tmitWUKkY6rEfKggsbjGTdUWrrWNm98H28\\n\",\n       \"Rpav6byvPziDjhcpxV9eaAqmbciYHNdvGmxVxUr8SMUfyQwX51kTYTlvdJxmIxeboE8QsTyC2vHp\\n\",\n       \"xKroMDI2Q21SsTbyek/tmiTv+R766e6WU0B6/D2Nk8VcvMCGu3ptTPaekHqxNvnmW9dRs1Qlf9/G\\n\",\n       
\"JKdMZhVtylRb2jImVwYjXsdEFC6Q+kzy+3pOIk7OP5G2V4l+XV65Anm7g7ZaTanFFdJ3nBc3E/DC\\n\",\n       \"NaA2T3R///S//e9EROS7u+fzZzuYitdETq5QPMP9agQBu6Xxl5UYf6RVF01QdqdCpaS3VK2eb03F\\n\",\n       \"HqYfxZSNfnYH8ONbejQiCkbSGVGdUpAodklpWyxmrqzzVU9zaAFaQpYT2bvEbwumtmBc05jo0Z8j\\n\",\n       \"Mqh+8EgdIh5c/GjedrZSna8k+n/8eppvcO7lO9d1anUs0qGkR6HEkBHZPjeyPaU20RcHclzu4O4Q\\n\",\n       \"kyvJ3zTubhsy+UZxRkqUjQL6eauMxjq6eEPzqYhpMPqWGFpyKdFSyjUVd70nAiIVIkSIECFChAjx\\n\",\n       \"gfHREKlxHO6ReMfRCGZE7IUP0KIkBXSQUbuJV7X4l94LaxgFFaVvy6GAu8RqnVfQp16/n9CKPJvL\\n\",\n       \"VWmla6u/iN909a17s/SVTpIoijSM3/t5JlhFmjp0Tl53uO6MSG8nkFNb9rASRbOa0d/0DX242FzN\\n\",\n       \"285xLldrLwmey+/3+tvbOycMN1j17A8kZwul7JsbR6QqKBYfaFVv6uAJrdLXkDiwkm8RkQbtNNzz\\n\",\n       \"dUKJOwjbXf3Kj1/Yufn1N1jBsxL0EcTTsnTkLo/1+Nu1r5Jf7n+p103eeQ/WT3G+2k7jSKRLkGJH\\n\",\n       \"VrbHJY50/8uVnmjJJcz4sxocObEq8pFKohN4C0ZYYZdEGL240v1G4oTN0x3OnfZrBPgkplWTIZeE\\n\",\n       \"Po0gdsb0PbsVJvsRdTwl9Lhmv9a3L7UtHj91BMGUirdn3nZjZOOElPJR6j+PAxHpTBIDkgiM9ORA\\n\",\n       \"RNgHrMPqd0HecDUKUAZCScyLbpp4rQjULeWiBFwpfstFLIZITQQ/nOCTN5BfWQXZgwMRy89Q4s3I\\n\",\n       \"WYTf5FTR0kNuosh8W4JS6+qE0vCF9+vtpyq7cdqRYj8IvR2RbTMUbbQtl5DnuC5CX4Dw3WsnoLgm\\n\",\n       \"TZIT0mG+ci15gg6t9t0tFYUUNmcTSj2hKCQiNEVwPyXzdkq3KsHyg5/8RERE/tk/+V/nz7pc56y7\\n\",\n       \"xsfwHZwHciq2sEdLQ4VCl0D68pKKh0ymgBw1UhTAjIDas8yJ8HP/ZJkO9BP2mquAyGSTX2uGtmbk\\n\",\n       \"eH4+kVL5gKzMCagWeyjG+H5EfpkFZCq4AGU06R7K5kQoaGJNlLNz7U8TSdcsIWfx+OqTedvrN69w\\n\",\n       \"Lu+qrRtgFpMkSwQ06UDZjDVQnWpkdMkmVJYkQNah4GIwIPxou6b2NhlafJ+ySWa7yUirOR9GhDQV\\n\",\n       \"QAKTkeYYawtG2Oh63xcBkQoRIkSIECFChPjACC9SIUKECBEiRIgQHxgfLbV3PO4lJXXuHOzIvnEI\\n\",\n       \"bYX0UEV6SzFIgVnKZF+9DDNgFRGpkUeZhDQzkDbZQCl42Dmc3IJEHjFkCbg/Ix2nFBBjzKlFaEaZ\\n\",\n       \"KaOISApC+f7kquB3B01B5VA2jijFYNxpku6Y1dFrSuPEgCUjegfe4Tq+I72nT2EMGpXeTi/fKgR+\\n\",\n       \"BJx+IpPbZ8+U0Pp89Wze1rfarl9+49tu76B6vHYYOcOfGUHWRlS+I6LwEQrxXGSQAW5vQfBckrZV\\n\",\n       
\"j+81RMpfT3qw06236+0CZqiFk72vVvq98zNXas+fK2FyJPJ0B6i6gLbP0Ht7GWF8JBXrEanKrvd7\\n\",\n       \"kpVISx59v3GONBJda2wkYjJy3mPfMUxAV6R7VoBEulyTZk2kaYaIlHat7xaUMhpwjJb6yYB+Okae\\n\",\n       \"qphwnyw9wWkEwVjoak9ZGQTe1X6eXWMkbiJnIlOSJJza07bbn7ztNmc6ZjZIC7298ZRNCny+qb1P\\n\",\n       \"2N8bGms7KGqzYr2l3rqGilfkPmFVxLWnhjk94+1qBNeY0i6Npdkp3X+qYZBLyvLHkx5/dc+VAVo8\\n\",\n       \"pEtkmjasgdYhfT1AvbzYOGXANHbihNMoZrLq13XCPmJqEyN59w0Z5GIOyih9V2y0bVcYQyOZoUdg\\n\",\n       \"5S4pjW50jOrVi3lbudb7GZdUUIQ0X0ZK7RPoE9FElAIQkE0+aXPplIVffvczERE50Fx/RHruEPm9\\n\",\n       \"22NO/OHnvzZvW6EYKF158cpdD9PkiTXYNLUVoY3znByVMYcN9JxokYJryLS+xbhuKAXbNXBxIKL2\\n\",\n       \"Gs+niDTARhDkm6ONV9+vOXWs15TGFN3W0LMzEksLkt6faJrtRGP3xSud9x9duFNDC43ArqdUNQyp\\n\",\n       \"TyB59zQnF9Cxak9E9obafEMFACeksUdKI6eJ6TeStt/8nPCw7LqJndcVz8mYazMuNsGvqSjN+OzF\\n\",\n       \"ggyXbdDQmLA5sKVnwTAxQf3dCIhUiBAhQoQIESLEB8ZHQ6T6rpvLIUVEsnlRRVIDIErGKb99wsOO\\n\",\n       \"VrUdVE9PtRPbaqzcGvLriUHUNgXyReEr83qvq5SOymXT3Ei8RE4FmnG8R0oFmvWIvdb0TTjLSe12\\n\",\n       \"gidbBCVwWoWOLc6NiMC2EE4TX/2lIKqXRKKzt+U//8ufzdu+faYlw+XaV1OGIm23WoYutNK83r3U\\n\",\n       \"c7qmVRWUut/ckfwA/BEjWv6ul2vszttuAnITs9edveGTKnxjnmFAAs7PSJoAqGNE9+nUgkRLvmbX\\n\",\n       \"13ru5cqv9dFG23NLyMXVRlear49eANCXUA8GwjNMvqodRiM4koo9uufIZGMQ31NCGHsgFjGhBAlW\\n\",\n       \"SQWtvgwAOgEtGCuqq55XVVQujnLphKQeRlsRU0l4gpVYSmTzQ6vjIyU0o5gRVpMf8WNZAUhDfpHL\\n\",\n       \"taIjd7eOHGW4P4c7WsFe6X52jfcdI9ve7n2cbkEKP2EbS5IYYbypqdgC18Xos7U7e6gZcsv+e/Y3\\n\",\n       \"+28aGdb2wSrmhlb1hEga8XxP12DkbZZkMdmHs8de7JBjTpioeMTmm+2SEB4UdJQgT0/s9Yh5jRXb\\n\",\n       \"D3eKdPd0rXOhDFs14HqSmBWb56POf1UoxlgusA9SjI9xvh39skffWTIhFwjrlLqciHkmjqSAPcGn\\n\",\n       \"MCY0Zzxq237/Quew08nb/3CA6jah6TWcEhg1KDNFvR6uHWlZAP3rR0ezk1T7bJJQQUmtnxvJnL0x\\n\",\n       \"M/QrllqY0HZtT/cV8xr7uR6PILaT/EBicgoNIVL2jENhQ0soveE0q5LkZ6yIhJRO2kE/H1hOR3Rb\\n\",\n       \"RwUQ377QftpT4ZU97+rW+/jsiZqahAO7kqD/01ka6p4MrPYOAj65HYy4ZzxPFmif5N48CT8/qNzz\\n\",\n       
\"nGSALF+/qZ6klE2aYvM6JekczLsDFQBV8FasGv/tsiTE8j0REKkQIUKECBEiRIgPjPAiFSJEiBAh\\n\",\n       \"QoQI8YHx0VJ7m2wjWepw2uwjyoTJRuHGlki8qyWIigQZHluQPUkxtzpp6mckCDZHms2A/YnUVGOQ\\n\",\n       \"joeWNEYABUfMvzW9IyL29YMe4+0bN+O8utDvjQOrHSv0OwGLnITJsWYyS4aaohBkTFB4itRaWjg8\\n\",\n       \"WsIssyai7F9/8YXu957elO77937/t/Ucz87nz/JMUzb7xpXFExh+lhUpccNlMyfIugGkXRakAJzr\\n\",\n       \"5ylphiygWDsRBJuYGjwg3ib2az2Hsnofs2IwUjZHv9YTIOiKtG36XlMqq9yJrZebRyIi8vL2q3lb\\n\",\n       \"VWka43hEepBMLuNe27gniDcGBp3TGiReKMl9TZpdHdIczcn7xCozXSS/79aN6pP24ZujQ/GuHszt\\n\",\n       \"r40X07YJbTeRBpPprNWkbZUj3co+xvabGCmGmNKzg+lH0Q0bUij7Z56KurvVIoYtEfsjpNn2B09B\\n\",\n       \"GGF0JEL7HkRxM0Pu6fgn/PbmxlMxZ2dn+B6b7Op5xpQKeJ8CuaX+Okr32+eWxuWUoX2f033v22bK\\n\",\n       \"4i0bqc8q8kSsRd/paDznSDNklJcYQLLPoMXD6VZBMQIbulpa8u7ujX8N939z5gUYg/VnSu0skA5f\\n\",\n       \"rzzdaRkqI73nRDfoMWfXpA5dgCrRE9l+lswb6BGzWdgJ+zakgCLSAHv2zbf6L/ShaLqQrsI8Tamd\\n\",\n       \"HGrbLE/0g6c/EBGRVUH6XGjPqiYjYyveiTy1JZHOJ9YnitLn2gjPE+5XlvrtqV+ZGnzNKUA8Y6KR\\n\",\n       \"2gnz/rRk/USbO0A7yDk9rfeu6sltYrBz9+81vY6rnCggtt+u8TF5vdffrrNP520b0CI2Wyf5n07Q\\n\",\n       \"+0ug47YkzbpY592c0mhdCx2pvbtCxOZ2Qdpa5mwh0XsoHUTKH0BBqGoUZ9C9XiAFvaHsW77QL3RU\\n\",\n       \"qGXUioq1zWz+bYiWgXuXRETp4ReT90RApEKECBEiRIgQIT4wPhoitUjjewuTbPZ18je/HUiXGZVL\\n\",\n       \"Xm4VVZjWj+dtP//6T0VE5HTy1bytTk0GQUTkBYh1l1t9de1ppWd8VvbhMbXjqSRICp+3NZXJgwB3\\n\",\n       \"uyACbqyrr6EjtVUgEWkB5dqTrww6rP6TiUqDeyVqJoQ+GBExWdDrN5pssfRtT57oCuPVS1cKtzLx\\n\",\n       \"AaX7cU8efiBPW8mtiMgCqNcZEcDt+gu6Jx1WdR3JH8SR7jsjZXnjWCakCmxKAEOC689JYRYrmJTI\\n\",\n       \"gXkJUnhENFmoQp9ab887rLoeJ96eZyslnue0xq1bRYwqrEiyxEvNF1jNbkpfrVnZb7r0c1qsFdmL\\n\",\n       \"SYF/D1X0W0LJYlxsRirWx04/byANcEvl/6ZOnK/8vn5yqdfAqM4R++1bX1UXQESWtJoW3J9TTT5t\\n\",\n       \"MxJgiBAhLebDRr6GtRV5ZIQcbrXNImKMvvxeUYWLSydb30CBPiPk6Hi0wgv9/3Lp53t9ravZntAK\\n\",\n       \"K6euW78GQ1oz9v+q0McJETJ06PbOES6TDoisiISkCQxV4n0cDkZOJgRpbjNfka+A8Ew0x0xAuIuS\\n\",\n       
\"ikxmknVC39O/DXRnIm4DL0ouwLD7k9HYaTAntSS/UsD3LKOJt0ZJekrnvlhoP+7Q7iUVu0hmKC0p\\n\",\n       \"m6NQ43zpxPJZ4iTi67+/DxGRBNfa3zrCcoLExdtXeu7ffvutXwPGWNvtaJve//NzR9g3kG6YxOez\\n\",\n       \"5+hPh95R9yUAuyIliZ1Mx9HYg/RMCHI6mK8nzXVArpKRS+iBPpEqvynvRwSdjfDfiwhhjVAgEsNt\\n\",\n       \"IMsYfURRCpHj68HI5uQrWKCvJT53xINeTx+xny18X4k83mMuXFHxzvocaDqkUBYrKsoCAb2pCEHq\\n\",\n       \"4KHX+fGPexDwY/otiqcmgpjsanuaJ1sgSzbt8/hbLvW3OT1rGoznmFTM6yPalZ4dnTk5MEoG8nyZ\\n\",\n       \"+8Arsl/9qhQQqRAhQoQIESJEiA+Mj4ZItXU7l5KKyLwk5dVfhDz/duUrHcuvjsIyAbr66HvPx67h\\n\",\n       \"NSXMuUDZsQlYslhXB6GxjDy8THSP/f+s1tLKQEVEYogUNpR7PVVYOdJKM0KJawzhuAWV1Q44/kAc\\n\",\n       \"DTtGQofvsFrrMl+RF+UC+/X2XAHF+MEPfzhve/1ay4mvX6qA5QWhWisI7B1oBTNNJhxKAn54m6fU\\n\",\n       \"s/RYdWUpuXon9i+/q6Pt6IIGrNxqIGIH8nWrQU6LEkKw0D964vmUkNFo6LcVRBIrKvU3EG29duTo\\n\",\n       \"tn+Nc7LVuver883FvXMUEUm3emHcJ7bn2j/jlL2+dDW9I+TOvA5l4nJ+vY+jQOhz8PtqonLx4Kv1\\n\",\n       \"Ev2JS3gbeHztWLgWvJVi6cjFZgnkrPVh38HXy1CKYuFtM5p0AZEEQQeUgjhyVn7/yQ9+Mm+7g6zA\\n\",\n       \"kby2BsC+y62jfoY6nYGvxyvTN28g/kr8neNRPz+/cO6P+cmtCBG5vdVz2m5d/qJ9D0fK5AzMY+90\\n\",\n       \"clRvsdDjpum7CBrPU5MRLWilm4MvyD6Zo+hvnzx6NG+rgZyNwsitjsUNiKPHO0dQIvRFEsmY78+a\\n\",\n       \"vD7tjBm5NDFjFpg0wdiI5BRaQ04xrzD6nUGmoSD0uW3AKaJ7N/MlafwJtk2EcE7ggb589Xze9tX3\\n\",\n       \"Kk9ihzh1ErRs3AAAIABJREFUJE0A6RhGH8xj7fLKESnzXb07OHL1+qUKho781MOYjVd+PUlux4P4\\n\",\n       \"Ld2bCXPY9B7h5mTkZwI+I6THxH5ZkDOHYCv3sdkTDvNuTPvogLB1hMjOnMvO2zrGnJySIG4JFL2v\\n\",\n       \"fTw1xquKfEJfrrQdR6GxsMJcBOHWY+33y+a9YaTnGZDInLlf4AGaN56IzNzYiJ5dBc75QOLE9j0z\\n\",\n       \"O01T2i+yWQO9T6Q2rmmetmTXROO0yIyHTM9p8x8lj800/dWYU0CkQoQIESJEiBAhPjDCi1SIECFC\\n\",\n       \"hAgRIsQHxkdL7VXHXiIqq05R4tmSD5kR9e52XkJ+ttGy1mGkknj4aa1IEmC70t9ylea0Q+kyoL1C\\n\",\n       \"iEy3Vcg6Kfyc1ufbd4+F0uEDnZOVnzeDE4V789Ob/BgpIO3WiLCEek+AfZvKocgs1eMOAwP5KLV+\\n\",\n       \"j7IzqTnMqrx57g3w9ClUfkFw7ghiXeL7Ty48Ffj9NSD2ltXZ9VzWpZfG5rParRNQy7Xek1IoVQSS\\n\",\n       
\"5V3rqYp4/r7+1YlDzHb/WYk6w/0ZDgR3I99EmT15eY3SaUqtWYZuw2rTnaUx8J3E72uMNEKSecpo\\n\",\n       \"JmVTqXcHuLmr/bpeIVVxIvKyZSMyKsnOc6j3w2Rxe+FplAjpvnVBREjISrCHVhKjPxGJtYXXW0Tp\\n\",\n       \"hgXStgkR5Xsj+aM/URZVBqQRUkqBjyBgjpTaWCMFZV5aIq4EfCSyc29q8BOnOyD7ca+Pa7zBPfyN\\n\",\n       \"y1/368J5FjTWl4ml1qnUHP20JlV0S8cxodxSKkYi5/R4hXHKaZcWKZX+PWr3/D0joEcjuTJYQu6e\\n\",\n       \"1xhU2Ym8PGcK8dnm3NOY5nGYcMoax1hQUUIOqY2ey7ZxDC7TN5Xxs0tPC55fqIxFDmJ3RuRwS6Oy\\n\",\n       \"rMMC93qiVMg8uRF9QZCyiSjdEiP1mFC6/X/+F38oIiIvbn+uuyjYRQCyGpSC/eFnmoq6f5+0/V+/\\n\",\n       \"dkmI2AqEKAVvU1EVed/NURRQ5iaNQeXymM/vqeijL5QJl9DbPEH+o0hHFUQet2KQkojVlRVIYeyO\\n\",\n       \"VGxUIgU79b6PCgVAZ+QiEU3of1TYYfdnRW4PG3zctz4mxljpDkXu3xvxrB6s7xLd4cVznTMToQII\\n\",\n       \"pPkWdPtz/Lanc2pB3i+pnyTWP2ic2IOin+V/qP+je/SUxu5xD0eq1LBiDKagLDCfDkSpOGHuTAeS\\n\",\n       \"/fCP3xsBkQoRIkSIECFChPjA+GiI1Klt771wlid7W/XVhwlMtiRhsFkqUfNEK82h1rfpIvffFlgx\\n\",\n       \"ZVRqnFf6hmk+QBkRoS9ArFye+eqrQFn7RESz/Q5u4USsOx0ViYqorLODxxEjBx1KVpPZ84p8gFDC\\n\",\n       \"ShaCkiRwcCdivZVhloQ0LbCq2ZGY436v57JZO5pi5eF5oWjSwAhGb35J/raejbrfQ0XebHApTxe+\\n\",\n       \"gu1A2G6I7F9O5mtGhD0UBaSRv97XzRvsF+dGKwjzE4tYfHKEI33rDXXs4FZOq6RX8OKLqPw5RTsl\\n\",\n       \"GZHy14pOZSXc0sXjdMT1TI40CSQZOhJa3R0hoEdl+kc4vWcssDqbJ/r3SojYXUZ6bjfUJzqgZX3s\\n\",\n       \"fbLDynBROKpUNvBfWzgisztq39mTJMIZdr0g0dF81P0kuPKOVtBLoARd5fuNQMqfiGycoj37zttk\\n\",\n       \"UepqtmdJDKAkDckvmBfiEWO9WJMwJFaYvA9DAu6hH+8Jk1Fh5MBQFJMBEfGSedtf17MkCsjx5INn\\n\",\n       \"QqCMCCezXAGVWgPNulpTUQyUUBvylevRTwsapxH203R6/OboaMmFkezp+o08nhGxOIVg6oK1L9E/\\n\",\n       \"CRCZkUAmql8+UrmPxZkKzCapX38LX8mJkM4M6GBOSONoJHZqu8kKJDpa/b/VuWN38G0Xl3rcn36t\\n\",\n       \"447L+nuMCS72KJF2YHLwCf2pP1J/nkvt/Xt7lOSPMVWvGKE90+MniQvttjPSQ0hHZpdFIs0G3BAg\\n\",\n       \"aDItY0pjB/cn7QjhwscxhDtzknDJgb63nc91pfnAHVnWRD+vqD83EJuelo4SZ4X2u5Q6RYWHEBdK\\n\",\n       \"Zch6RECd+tYzEoKx/ua5e5gWOQo1mICPuTMn9G3CfJeSEHNi3o00x6XIClWYT1tq2BHoY80+uQug\\n\",\n       
\"tAu/hgKZgFRITBXSHfXR+25z0JvC4twsY/S+CIhUiBAhQoQIESLEB0Z4kQoRIkSIECFChPjA+HjK\\n\",\n       \"5uVK2sEh65ujapEQh1FSkNL6yCG27579QkTu++pN0KyIB4JnRyNAkgYSCNURfpuSnkcKLZaSdiEg\\n\",\n       \"4uZEdh5KhUJ78p/rUlOM9pTBbqfXdk/FODJlVxBMKWUlSIENBFn2ILayh535OsXka2QKzGfknffs\\n\",\n       \"2Td6bpRavDhXONbIdhWRTmMoJUcNea3hXAZKrSQgzMf3CIPwQSIPJ+tZWekp2AypxDjmcwehGW2Y\\n\",\n       \"khJwBL2tgqBwg6oXCevz6HF3RFiuoXz8feMFAGukhS/OnGyamKbOYLCzt/XmQlMcX3351/O2vNDU\\n\",\n       \"Rk3pqR20ahpi+5uH4DZxYvsqW+HcyNcr1z62XCHt3Hm77vfQVmNhfeioTR2ltuA1tr4gEvtG2+nQ\\n\",\n       \"err1pkNaOnUSqRHJC8hos2JwnJj/JOkuYdgtSxoTjW7cnnsKooOOWknETksHndhPEORuI1Qfbj2N\\n\",\n       \"akLdTOw2AvhI4797jwZdi76dEom1QTqO032mKWXHYADf/NSYxG46UyMpUbfmNkDXaqm3nP0nYWJX\\n\",\n       \"kaNBhrFQ3Bvjetzr1+r/uKLP6oP255wI4BM8wSIiWycg47IqtnFxo/foza03ntqLckuB6z2+u/VU\\n\",\n       \"UAvidUE6ZqZZlC1IRd9oE6RZZX9PDal4o0/88U//wo+B1Od2o/PV82tXNq/32nafPHFfR1Pljigx\\n\",\n       \"f4B+WZF7yrTAHMspoAG0kXpPjg4rpO9t/i19vCaFmbOR12MKWgBRQBr0iY5TdujQE+EXGeaMlvwP\\n\",\n       \"Y/SdMjHdPZ9rlih8ySgVmYmmT3PyNTzuTAmc0l055l8aT5bGbEirbgvdsGbrc+w53EBS6OxdUXHA\\n\",\n       \"Fqnoc+r/b15pP12tPbVbnulcyBp4Juk0UTv18FG93vs5dUe9ninR628bmkNQlFWsqP+DbsBSlT1S\\n\",\n       \"rwOR0idoO1b+mBA5WJ/18xxYS/I9ERCpECFChAgRIkSID4yPhkg9vLqSl3smkcIbihzUzacqIr8m\\n\",\n       \"83ViVrD56dRE4rzBW22aMgEUzummcEyE9R5v02/35OEl+nme+dt3BLfqLGdfJbyJd6SifAB5nlCf\\n\",\n       \"DKTNMW7u/05cPTtb+PnOKsYDSR0AMSnp+CVWn0nmiNTpUt/Yb8h9+/T8OxEReXCh5MmCyHzXx1/i\\n\",\n       \"RHy/Dx+qn+FycvTh/Ey3PXr0+byt+VZXDt++crLx6xe6n/3K5QQerfS4I3kXDpm22QlkvpLK6hdQ\\n\",\n       \"7G3I1+046HUVTLaFO/0ipvuJVVdLxPa70w0+82OsoEb/8EKva7166te6ULmIz//e35+3/dnP/0cR\\n\",\n       \"Ebm9++m8rTpCWZeQ09jKdIlYnkdW6u5tfIIkQITvbUpHH8zj60hkY1OCrlsqv0f/zxa+bYE10kBq\\n\",\n       \"xxVWcQ2hGRnQ1ghoTkqIXH3a4xpopWdEaCo/L6HO3JCKdIxScFoQS5TruS+JAZ2ASLxeQDGZ+mth\\n\",\n       \"XmOkhP3qoN6RPSFCIxBT9tozGQMjfes2FGqQT50pms9zApHI7e+JS9htNUuohhWNjOTTuVqj3Wme\\n\",\n       
\"yjDeaDqTGMdl1M3QLiPHtxVJaACROHXsF6jjviRibZwaIkWEXbR7S3IuE64npeIV8z2tgJJ0VPt9\\n\",\n       \"rPT+FGfudTrLY8SEPs7HpUIRQ1NrLtRAnyQC+revv8CO9bem9C4iUgKJvCRfPQOCbgnNrE6Q84ip\\n\",\n       \"eAFtUg8+T0RAicaOfAorQxPxvCjYfxHnQRImNX46EfyRIcPREgHdkLuCUNIYt53V00d0kAX680jj\\n\",\n       \"b8T8nJIkSopjMKozVkBTSaZluYVMyZ7OHQUNJ/JYvcU8+snnTigfR507r/Ds2JCESmEekrEj/U8f\\n\",\n       \"KJq/Pnsyb8vXevyYJoUK3n1v33h/3mNOuCxJqR5zx3dvdU6ayFdyvYUnIjlm2KvA/kiaOB3OmQvF\\n\",\n       \"7jDWKBOwxRy7IJkMlrF4XwREKkSIECFChAgR4gMjvEiFCBEiRIgQIUJ8YHy01F6ZrCXJyeQXCsgR\\n\",\n       \"ayuBUHxPMBckxigiFVf8O5CR4xFpuzhyGK8GjB9Ds6ii9MjrnUKAce/vlgvA7cXS92taLT0R2ydA\\n\",\n       \"5RMr9qJpY07fAXrscUoxQ8HQ1lguCeIHFN4TOTPNYW7bOYxdLADBDn6tnz7W1Nvp4CmYA3Sevv3u\\n\",\n       \"KxERuTh3fZSqU8i0JNPmq05TWz/6/PfnbUuQ7S+3brz6dvkS1+wpoxPg8Rc7v8dtoZpR2wuH2/Mt\\n\",\n       \"9E5Oeq0dE7aRCiGBWWmndwmDqxxkSyIHxr22zxRzWlR/U9ekYgwy6t2Nblvkfg9/9Fiv8dMHbsb7\\n\",\n       \"2af/qoiI/Df/0380b9vt/oVeA8HjAui/yv1+rjHaoolMg0F2tBTwGY3I7UbTniONiRZaMc3Jj5Wj\\n\",\n       \"T56RYnEBsmVH2kInKDt3ZKQ7QasrR0pxIh0z6+sRkbj3O+1PZ2ek9o5xcqq9rxkZOSFS7AKFFz0N\\n\",\n       \"6C1IrCv0f9a4GjF2Ot4vDKrvyMg3Q6quJtNwazJOma1XmqLn4okeaXMr6Lh+66noCJLlnMackMaI\\n\",\n       \"ycjZCkCYgG+s/HtkV5hLn5NW1v5OUyZH0rE5O4N+V21kYzZIBgWAzLVtBhxIsyePIvpk/jF+S6rQ\\n\",\n       \"jSmVU7sjldejn+Y5zXXQTOOU6cbSbJRanUAyjnrSEcIpV42nyv7oL/9YRES+ev5X87brg96DBmT/\\n\",\n       \"hB5TT3+gc125eLew4PkLv3enBqbdOVElQK3g/md/ppQ+bpF6NCpIyk4EcG9I2W0htaIo72tZZFqF\\n\",\n       \"5B6B9GE58vGtHxGmkVi6H+lxUkwfoEWW0VSTwtKB6dArpMozKtQZsR/TZxQR2e/1V9WBDOcz9N3M\\n\",\n       \"0+yXmEejWK+/bz3dukn1OfL04Wd+AnCVL7b+TMgX9owjc/dJ/7668qKc5y80ff/Vl65LNSCV/vAh\\n\",\n       \"dKRICyqHufjQULHRXueH/S0VheCdYOq9PbuD7ueqpPPE+0FGKdWG5pb3RUCkQoQIESJEiBAhPjA+\\n\",\n       \"GiI1Tanksa9qu8yQJkd/RtG34C72FYxxEkvSKbAV21QTYa/XS6urnr6nb51tpW+pQ+RvsHcoFy32\\n\",\n       \"5MOzhayAW13JcoVSfy5/xaoiolLjFiXhCREwE/gjtfCLSgj9MDWBBZVw2nKpSahcGES9fOP7PdWq\\n\",\n       
\"DrwkbyTBar4gEikqh2XAKvH1q9fzZ8a/zogc/ZNPQeIbffX1+RNFZyJa1a6gbMvl30sQX5+98NX8\\n\",\n       \"Hqv/HxJv79McKutYuR2prDXGKmETe6lzDiSgP/hqaQDquFh6UcABdbUxefdlRsBOfFvfQyYDq49v\\n\",\n       \"vvlq/uzxwx+LiMiPf/Sb87Ztoqvvf/QP/s1526nWlfDPv/7lvM3Uuw/k03UB0mpBxQtxc8J16woq\\n\",\n       \"Kh1BO8f3stjvYYsxIROhBA3GDpV69yjUaKnYIQeKkRGhfAJR/AS/trRjErXek92tlyGbsjZ7g/WG\\n\",\n       \"cDGajL57tvVzGowoSxIXKxy/r/S+LwhBtIX76fBi3rYE8bgjAvZgEgaMyAARa6h4pUR/YvJ0OxPK\\n\",\n       \"7bekjg2ZAlZ9ntEfIrub711KqE4KNIcq4mcS8atX3/k54fpbUlHO0nPsV4+bE6oRRabYzlIbpixP\\n\",\n       \"ytYoiWdJCPvbfBBFRHp4nHbUdgX2Z8UOTJguiw3Ow49vxP6BJFFim/dSQnqOyDokvvo/AFl98ebl\\n\",\n       \"vG3APZuAQnz6xCfgYqGfRdT/3r7Q/b668bmjb7SPn5+7rEOawjuP8AMjiGcpqV1DNbyv9bhclJSu\\n\",\n       \"rJ28ra2gJCGkK0PbXZFTxniyZwKNE6CZ9CiSFN6lMZTiR/KkjSNtz5T68IQ+XJKx3YR5P4v8Wm9A\\n\",\n       \"vK4qmoDvgPpTAUIb6d8XhOYlqT4DGjw7jhXJKuB8X5GEwqLUdjfEVUQkOYw4T+8TSzw7SiooePJI\\n\",\n       \"/V5XC0epvvxOCxBevNW54JycNQr0ses3fvy4Rb++Tel7yFyQU0SE6WlD3n3m6JBSHyvoefe+CIhU\\n\",\n       \"iBAhQoQIESLEB0Z4kQoRIkSIECFChPjA+Gipvao6SEOkz6k3Q12H/UYjcRM511JbCb0DlrlC1Xni\\n\",\n       \"aQSDo02fSkTEfCZTKJZPox/rAPi+opRBDciUSZzjbIxL6RExxVqH/yIoj9eNQ8YFUnQl0kg1K0bP\\n\",\n       \"VEHaBig+LyllAG2VKfaU2anRFF0Wu7ZLV+n+jBwuIrJdKVT9plPS94mMQkcwuhsi8b18/UxERB6e\\n\",\n       \"ubZSAnPdgUi8p5NC6qwYblodt28ojXILzR7S+0qRUiqhQfK68XvSv9V7sfjs03nbCmmJ252Tja0f\\n\",\n       \"TbWf0zLX9EixcFK8pYP3x1fzNpP+qWDM25Ky+x//+f+ux889FfejH2pqk81IHz/8dRERuTu9mbe9\\n\",\n       \"eq5poYYKELpJj5FRf1pAR+sGaSRT0xYR2SEX2xKN1I47kUFsDF2ahk5qOtkXWccHqY2CjHmhtt2B\\n\",\n       \"vF+y6u9J79d26zC6EapjIRLrnNLxe22K2VnqMLqlAHjaSZCqGpDSX0TeNyxVxMUBloKKWAEfx2ju\\n\",\n       \"FXuQejyiOml/YmVzS/1Zqm4cvW1apMp4XJvGU89KxyiA6Oi3C+j8tK2nm1rc457SMhXmuMtzT2Mc\\n\",\n       \"oMe12ep4rXaexrYClZbU1kekwEgyaFbKTogwa/pQXBSzBW9hmHiOwzGg2VSxFpcp4ZOOj2U5E0oj\\n\",\n       \"RRhYw5L0jrDf5898/KVIy68S0oXqdd7ZbmHQTan4FGnuA1E29uinw0Sk7N7up/f/weZpSkumiaZ7\\n\",\n       
\"zCBeRCRHmmdE8U5DxT7xTLPg+29EbL9W49hHlDIsVlCMJ21Dmwu439mjze7ISIR9S5/XlIK+WOtY\\n\",\n       \"IwbArPZ+oJSxOWkMZELfoZ1WW0qLX+h8tzmj5xkKiZrazOj9uXLC3D2NPtZ3B22T650XAEx4Fl89\\n\",\n       \"cArK06dQSr/yZ4c5dawX/r0ff/ZbIiLyAPpU0tPzF4Vfj0iD8fNzpEd/OG+S1VLnroTmjgKG2wUV\\n\",\n       \"dOQomqC6J+mHTv7T/+wP5f8rAiIVIkSIECFChAjxgfHREKnD8VY6Klc/QEU5ppVJD3mCw9FXJBPI\\n\",\n       \"w2vxN+LRSNu8IsFqksl2PRh9qZV1E8NvXVi5MKloo1y5IWJdhe/l5EmW4jpS8pqqjj3O148xiPlq\\n\",\n       \"GVpG3ky1fjaxECvI20uSiYhLlISz/x0Ium9vv/ZzGrc4vrfJYqVv5OlOjzuSivkJZaBMdv/6G/Xr\\n\",\n       \"e/zoR/O2N7eKfkVErH2zUwJgQeXfOa7t4ZXfpxuUc3PpcIzV/AZoRk5+eYejlRD7PlalrqCb3FGS\\n\",\n       \"Qw2169HP3XwCDa0UEVmmKH/PvJErIJwNZAV6WoU8f/FcRET++//tn8zbfv2Zok+blV+DoaQmOSEi\\n\",\n       \"crfTtstIAdtQDJbzWC+VbH+Bz5KYSKQoYpionwzoz6s19WusEtvJV7XW/4VI0ZLpMWhBLBMQwQXa\\n\",\n       \"LiZ29AryCyTYLgWkBu6uHVXYQMKgIfmREn5dJXmtVbWiiAlJByRAzOyoERHhBxD2t2ufJ56/1P53\\n\",\n       \"Ufh+7RIbUqw/AtU5O/N+cn0DQj8hElbWPAB9qCpGsvRGdUTAN3I6exJGuHdcpm6K7exJN2UmseDk\\n\",\n       \"+RL7GTc+73VWqLKGYjn5mvWdzn+s4m7edKRSIEeUf7MqfQGZkIGKV6qT7u/80qVQUty7an/CdXl7\\n\",\n       \"5UALuKw/ASLISE8EyYCEUPcaY+yvvv1y3vbFs5+LiEgzeH968FjvWQwSd0RZheag9+uGJCk63KeU\\n\",\n       \"SMxxBKSbCNhzzUBLYwcIfL4g9wqkLjKgbi1BE/GAAgBSvS7RPwf2Wuyh4k2SCBXuSV/7/LM803uS\\n\",\n       \"FexyAUV19I2YCPv5UsfkYslzIvwqF07YzlF4VJPa/u2djonuEcmf4NmVL3zsREBTC8qEZJAdERDf\\n\",\n       \"MypAWhcoQCFJCkN9f/zUJzubkiZC82I8x6Le2/Mcsjyb3I9h3qYdELaBCPgZ1OvrE811KOzKcyKR\\n\",\n       \"4/lcUjulkfm/MqaEvxkl5EnwPREQqRAhQoQIESJEiA+M8CIVIkSIECFChAjxgfHRUnvN0MhIxOZx\\n\",\n       \"UiLo7Y1DwQNMGNuKzWhBFCdtk2Ewc2NKgQDl65nQDQ2OEe+PMaUCctPsIM2Y2lJFlEarDtD9IMKu\\n\",\n       \"GYMyOrg0qJggyBwpyHSp+7ugtNve9GwGvyU30B1pWkojACnlY5ka/FA72blA/maMHDJtOyMKArql\\n\",\n       \"tFeXK8QfC0P2+vfPv/6LedvWCHuUWjmeQHbPnKifJgrPr8igU6DoPJIC+w46Mudr6A7Ffk/uYDJ5\\n\",\n       \"tf1k3vboqarn9nvXJznt9fjVyXVE7B6vF56ysAKEInMSY5yADIxu0lC/WiBVcLP3VMw/+0NNRTy+\\n\",\n       
\"8pTR00/0GFnikPH5JTRobj1lESHdEFNubQTZdLvS62p7V/NNkGamLJ4YT5T7/26qcc2kAbUAKZvU\\n\",\n       \"+xu0bVVRuhnnVKTaJgOlEUdLRVN6yMZiRvf1LdTAWbNnAbXhOzKSLaCyzGrfBr2niaa9WEXdSLa3\\n\",\n       \"JydsG2G9Ovp+O4yjhooNeozZuiUdNaSqlgsyLUY60s63JdPiHmlZTuMZKZ1TASkMxB9eenquNOY3\\n\",\n       \"McBbkJYvL5xY3kNFuyVCfQmV+aG3/kLGw7gXCZmo1iCWcwFCj6KFA7W/mSDn525Gu4L57Ehm4Q3a\\n\",\n       \"wI51Ovm4trZIU+/rVtATZ6RYbbvb+/Fffa99+0tQBkREvvjuF/rbwsdulmM+A8/hVPl17a/1XBIy\\n\",\n       \"9y5jaGaxOjUoCo8fuGnuprT5Z94kC6Qxt1c+J9ipm6HzQMUJV0gVrxaeWisxdgpSm8+glXWiFOQt\\n\",\n       \"5oJTRc4KUNRflt6eR7R3jYIC1gJcQBV8s/FU5AJzcZpyAQa05Zg+grFbU0GNqdgnCVEgQDxfrihX\\n\",\n       \"DEV909hik2XTSiwojW/nxKbdI+az4V7xDGguRLMp0XbLzMfpBsVb5l7SUsoywfO3/H/Ze5NeWbY0\\n\",\n       \"S2hb772f9vavy/cyX2RkZKeqbKBKwKDEEBjBBAkhZgxAjKj6AylAAjFgyqCEBFJJSAUTJBKJSWYV\\n\",\n       \"JFCRVURGRrx4/bv9Ofc03rubmRuDvZZ9y697vigdCa4S7W9yzzV3N9t7297b7Fvf+tZ3bGPYxW8j\\n\",\n       \"fcNBk7XgOYnnusZJvYik8ZogcMgCIhUsWLBgwYIFC3ZHe2eIVN1sXS7KqRlq3C3XRiy/ufF/b5bm\\n\",\n       \"JUZ4M44q8RxBwFQ6GN8llSi5RXeZLuk29maa4820ye3d8qi/n+qdgDyYCiucKsqs2+Wcc1soTw96\\n\",\n       \"Uv8MRPF+H3WgBP0pUBsuWatcgO/XVNJVNxvWepK0drRJU5KXQIm2saF5TAWm5zIcC2EaBNSmlhRu\\n\",\n       \"pC43kXkwn337U38uId9VlUckclHlrknik1qDXZIHhRRdAm2bbtC2wryAU3iQ985M/uDRiSd0X3at\\n\",\n       \"NlcKT7wn40nViVxqIq5Bdh9Iv5tqibb5/s+lhp3DGI8H5hFWS4+EffP1Z+2xOPWI2cmR1Zo6GXiv\\n\",\n       \"6mqhqtggtq8snT0/8WO8rb2HWfRNxTmJZ2ijqJMD4ahqmydj1IbSWncrrJlKLh9DRmQ2k9qNQCK6\\n\",\n       \"Pd/H4755uumWhE3r/wryIJEgIkzJPz01qYlvvvkSn9n1j3pjtN1+m6EWX5aBCNqza5VYn6kgDeCc\\n\",\n       \"upupoSQLHFzObUzosCZC4l3OsRa21sflhmiOnzDbRlFyzEVpb6OZArCTYyQMDASRAaH56soQmS4Q\\n\",\n       \"2W5qaMZ8yiQDScmPeA7f3k7H9pAUSMN6bigdf6s1FOlpb0RtenLl50e/a+1kzcp+z7z56ymSbOrd\\n\",\n       \"5BznjOSeCEpGmDzqS/1FoPmLG0NYv3vmkzcSIeB/eM9XD1hFE/ktEm8w/2ZbQ3DGD0BiFxIx75PK\\n\",\n       \"2TB1/mRsKCFRlL5UQBgioWLQt/7H+N62rU0oexiSLRIh+xdd//dQCOCsU6mVBV5f+HlyO721dmLO\\n\",\n       
\"FEKs5hQjMpWL6vZoONhpo3PO1YisNILqJCCHpxKlYVGE2UoU4LFQcolOcCwKIe9HQH2rkpJA8qwD\\n\",\n       \"qiS5Bi1iXYicUQR9hlKWUMnziSTIFmOm1QMy1EIs8EzuClpEFLvTUakVPuvtObXB+ljLPrlFIoHW\\n\",\n       \"k6QEkwC8O2jvIQuIVLBgwYIFCxYs2B0tvEgFCxYsWLBgwYLd0d5d0eKqdt2BwalVA2KlcJPLEuEh\\n\",\n       \"USKtQCithChezaFiq+q8OF8m5PG8Sy0MFDQUYmkDslmvL3oWQ9+YRmDsLVV+JYxXQnumlhAYIfPe\\n\",\n       \"SEJ7GaF9aNZIMcwCuGgnMzjzfOivO3mjxUg9PFqJxsUWOlJKwGsgiFQ3BuM2CO31QWzs9G2wYxCl\\n\",\n       \"NYzZhe7GJ5/8yL4H8vwvvrPQ2nJKcqgoVnP8G1GgB6F3K9AqI0RzEHF7Q/v+ydFopx3OOVeh4PNm\\n\",\n       \"IVokG3/9SoK7p6e+0LFq2xz3QABvLNxB0myS+faej0S7BMkQ86UR29mW7cbmyQW0jbpCwM1R8LIn\\n\",\n       \"0HYJbbEis3myXjP0g4KaHSHHI2SdZhICx7g2tc2dfsffx6OhKdtfXvj7/nJi938UgcQshP4c86lA\\n\",\n       \"weM0Fi0eJDGUooXFFZbLmhgj3HB9aWGsGqH3gWggxQiRdEUrqkDbW8Xswj5LYySRCMR/79SHRzal\\n\",\n       \"kO1R8HgtxPoNkkyymc1xErAnlaktxywaXHI8tUAwiqELEZuhSi2azkLK05mS4vH9SCs1kDwuelMM\\n\",\n       \"6Uj41q6P0JLsaxHUubtSsYBJFlshtvcRAinXSkFAceO1rb8I86lay3zqgvhfcl0JPQIk/liIxTHX\\n\",\n       \"tTxwaO01AAAgAElEQVROYrB8GyHx8jenY5vjnNqb0kK1ozHDbX69LqZG91hC28/JHJ5M/bhr8IUK\\n\",\n       \"/KqjRTV41RbqIPSbyz7RQZgnxfe0yDevoiHeAkVwi0zDc1DWlgQQ7mdM2HHOuQTn7gh5ngkPQyb2\\n\",\n       \"SGiVYTQNe7HlmeiY5W0ClO2nJWJ7vdieSdxjdD6n0N7KJKGI4Xu3Qn82qphPwvZOLIy/tLaz4LzS\\n\",\n       \"V0Ae1/sUYZ9opAJECm0rhu92imazjxLaZOWNrWQWbFs6gpCAEiZl2fdqhPc1pK2q/YcsIFLBggUL\\n\",\n       \"FixYsGB3tHdHNt9UbqvyAyDMNoKqsK7ckRABF41/g14JKZoE9EpkqbcghyWqNou6Ti5ByrmksLP+\\n\",\n       \"XdGTd0v8tJF6RRVI0Y0w5qqY5ExRYAcZL3KSJtxn/iWkDsT7d3iDzqSGXn8AD/KlvEEDJcjF+ymQ\\n\",\n       \"Or4Sov5mScKkeC54c1/HOBabt1p0gQhk1q/xsfccf+sHv9cey/EGfzMzVOeLN548ma5srLfxvlcz\\n\",\n       \"7ACdEPLyxYWXFmjrLsqQ9OFBTacm6zADsf7py6fW1wnSr7s2nVdAvUZ9I5vWOF8tc8wBbWEa8hmQ\\n\",\n       \"LOecm8GDfX0l419C7Xkg9apwr2dzI5GPoPatJM4a9aE0JXe1xNhlGJvaUuNPx58455y7nPy0PZY4\\n\",\n       \"JkCYp9vr+THOUpsT1zGIqlKAiwrAx8fWpvUMqtxAfbXWHz0yJYePocDdiIo5PbzLq2ftkRGQ2LNT\\n\",\n       
\"Qx8WSz9n+qKAn6B9KVAA9b7pOKbiVROcHI0M6ZrCw72Z2PynrEEylxpq8NJLTTIBIpBgLcZChG7v\\n\",\n       \"8HafWN8VCQUHpOny4kV7aIQ0+UyIwkvUHytrVUwG6p1IncSYa9efdyUIbg/ny7U6QHvvhGyP9XRy\\n\",\n       \"ZgkA3B/yjrWJ9fyqRtcOkiJAxO0IgsYkA1Wndqh/2awNVXIgqpdSbWI88HNioXUCQQrvyLp7cM9L\\n\",\n       \"Fhwf+2O3F7b+iXTUghBM5kjKEJmCCAjDSlD/BeZCJQjr5MqjWdelyZQM0E7WmOz3Zb/AI0YTMHog\\n\",\n       \"6vcEabpuZSdk7kJRey37Lu+TyvRkKStv4FkjQE9NWYuZEfBToH5ZZihlWzuy1ucp5k4h1Qsaf+8U\\n\",\n       \"TYqxFjOpclDiWZ1w85JlmoJYHkm9umpD+REp1cFrSGJFgQWdyxzjGCsixXGs8JxUcjj7qihdDEkG\\n\",\n       \"hSlTIHbCSW/rDypTPto/tBPtOmQBkQoWLFiwYMGCBbujhRepYMGCBQsWLFiwO9o7C+1tN6WrRYm4\\n\",\n       \"ATxNrSPnnBtAs2MjWhB9hN7WohhLfZA0NvgtQyHNTkcIY4AeExYeTlUnw5uqvlYgNK8VigbZ2S2F\\n\",\n       \"nEaCuMCjEUIvN5WpYq8AfS9w3UJVv6Fn45TsXF2hvRYym0+hyi7Q+oDnE2Xd2wiaGUKUJfFuDWK3\\n\",\n       \"IOHO9UH6y+zd+vjch2Xef/SRnQNaHGdjCxn8ov4JzifaVlTWjTTch+K6ogG2WJLs6n+7TOyz7rH/\\n\",\n       \"++Lqu/bYZOn1ia7WFkZqoOzdqw3aZog4X1m4Z4FioYn0san8Pbl/5n877li4N0n8b19cWMhkgASE\\n\",\n       \"aGwh29USCRC1kK2hX9XpSmgNyQNupYU8CdX73/Y6clNAVB6P3m8PLfE9rTwcEfvf2rUYNi+FWJwh\\n\",\n       \"9NCTEGgH8esCRPlupOrMvq/DnhQeBqFa1clfIfQyEA0qhkWU7BojHJMKjJ9Cj4e6O7GceIv1mqtm\\n\",\n       \"DqoDFEKO7SPM1u9ZGGeO/WG5tPtP0nAthbFZeaAHDaRMCpoyjBTvkL29dSWJoC4RWpLvLWbQJ5OE\\n\",\n       \"jgKq1Bruo3q2Vk9gHkmE8603FsbJNqQsqDo/Q3C2d1AXKpYQ2JMHXo9tcGTh1qjv71Nkw9T2O6Lq\\n\",\n       \"tSTsdDISoKUYM9WhhbBL6kOa2dwxArbsXVgfXRn3Ldbu5NYneWwkFEqNIy08a4r6sv9jXun9n0x8\\n\",\n       \"GH02sxAki1bP5xKWdEzUQdhNNJMYMlIVcYZ5d7SIsLdXstfx+VNJYgG3542GjvA5x2srGzULaGsS\\n\",\n       \"FVW5qcnknHPRlu0UEjcI+qqinsb+c1Vv55qt5LlbMpSPcJuSvXn9WsjeJQsvyz1JMU8OEcV17GhV\\n\",\n       \"rXqDu9pSWoGAY63n5dqJVe2dFU1iGZOECSUaFuT35Lca5ztgAZEKFixYsGDBggW7o70zRGq5rF13\\n\",\n       \"YG+cNSSY9S00x5vuprL3vQgea6FkM3g4qZDjMhLrpIcNUCJ665W8LecgR68krf7mwnspV6Ki3INX\\n\",\n       \"2xNiXVLj70Lq/yGdeiFk15nzHlEX3uIgF88ACEM9lzd9eL29gaSLLkCK1Zp48BJTedOeIk04qYVs\\n\",\n       
\"j+5uQFgUBQXnEtQQFE/z/Pihb6cgDddQAq9F/qEHr1vTxDd8w1dSYt97p7OtjeeyA6J46ceiLM2D\\n\",\n       \"nd36c1yPDemZrvwYVpWpA2d9pGmLV0EF4norUgONP0+0tWtMVv4YidhFbIThCmrveUdSuJHY0Ahy\\n\",\n       \"0s982xu5JxVkJNYyyJQa2AiJslyhJt2tH5N7x+IZoa5VWhj6kFGeohIPDmieXquIvUzAg+HD9tgY\\n\",\n       \"yFIvtnnX9EDAjED6FHJ6gfl0e2Ooag8L6urWxv/0nkcnRZy49diXIkkQgaiqaEINr7dxqOEn3SKx\\n\",\n       \"tRFyMFXxM0mrZ0q0KlA7x5p4dmwNUnotSS4j7BmsYTYSuQZ6vQoWUEV8K4klrHU37KkCORABQQ6o\\n\",\n       \"kF7tJNQw1V72LiBmEdoeCYI2h5xFIQkDPIdmaBMJUKQnRX247tjU8wlmxolIx6AmWwM0ZTU3pK+D\\n\",\n       \"ey0lRF0EVLeJJAEDyvpEgZyzpAUdz37fj5nWOru+9muSiFAkP+A+qdRfylT0eopS+T6ORdmcn9e1\\n\",\n       \"QvHNzjWdc24JORsiSLFOK5giWK9f+/WhytpEVhVNqtv+a1WICt8XYneLOkFhWzrLPihAQrRkK/sK\\n\",\n       \"j2mdPrYvlQ4RHdMkEybeKMrC9H/usdoHon5Natdv3D5yRXXwtUhtsN+1kO0PYT8ke1MaYac2Xr0/\\n\",\n       \"hpRVcKV9j6T4HZTcUXbI7h1RKk2yqRudM/sWEKlgwYIFCxYsWLA72jtDpNZNvRM/ZXi9kVTjBG+4\\n\",\n       \"WxG65Bu8aBS6Gh7reqNvjUCpxMXd4q2b8eC6suuvIUxWmgPl1jf43tKGaY3K5KkIR6a4bCyezhp/\\n\",\n       \"bjfW0JqCZEjTrjsitAgvYW50CDe+B0RK+DDNEKn2EzkvxqSQWl95x3tkm6Wl5NOz7cCDrcXVWa/A\\n\",\n       \"6SnsvA/PPTcntURw92bqz7corYbWcOiv++qVpRBnjX/Dj/sisAiUsC/e9Fnk+RrNBkib1JDbADlZ\\n\",\n       \"SK3F1cKLKSZSk4+IZCOChA1rAYqrEAHNWC7FS177L3w9eeXPpQKG4NcVIr+RQRiucdYH1jrUeLyD\\n\",\n       \"mGzUmJwBkTAV6Tw69WNy2oGHO5d5hXvS3UpVc/y9ySWFnzF98dKOztD/RgQ2E9/ORJb9Lbg8ERFG\\n\",\n       \"uV+UK6iE+3Zz6z33oie1zsh9kHkaQ5LjaiKSAAe4JPTc10R1BBEiR6MulVMClFD4iEzP70v6eYY5\\n\",\n       \"vtRK91vU5BTkrtn6+UdUSdOwiSBE4sFzyShIwRT7k3NDPyjjMLk1pCMFIp4J+tRD+nch6DRR1BxX\\n\",\n       \"0fT3LXgwa+GIxODBJAITFUAfCrmfORGZSiQuwJchf9E54a1QHTkSBAH/Nl3hd2LeVTP7Hr+5WApM\\n\",\n       \"Ca9eRVdbgVtBbogccm4kgv4T/bi5UT6ivymKSBFpUqHLipCpcHk6EFM1npWgqUDmFP7inFgsDJGc\\n\",\n       \"Tv2mPR7bObSPNKIzmXBj2b56R6ZgV4hTJUHIeVJRUe6rVaXoL8SsBTlv0R+51i43DL9tOUIiponf\\n\",\n       \"8HQbQZBKXLdOJCIA2ZdMhVtZkzGRY477pF1/i/MpD8odEPOl1UC9F9IXPuMz4YhVnKcHxkm5ZOy3\\n\",\n       
\"1klcr1XuZd8CIhUsWLBgwYIFC3ZHCy9SwYIFCxYsWLBgd7R3FtrrndY7hDnWhuv0NQ3Ww2lVZLDa\\n\",\n       \"7cz/3REWOeG7UuQUZoCUNzOtteOhwhghiCw36I4k0q4ohpc5UvJXEkYBLJ8kAjGifVUt76UgJUc7\\n\",\n       \"dcoAY+NrU4HCY9bak3pJBSDt0UjrQPnPF7n9tsl8SKHXtdDCEmGhy4sL6SNDFSTnW4ilqvbVaZn2\\n\",\n       \"PRHF7m+/+9Y559zVrYX2GPqoJU14ATmH9VYgc7T9pLBwF5MLlrjXrbyEcy4rGIKUFPaUsgImdbCF\\n\",\n       \"YvliYdDuHGT0rFLFXBB7pZ7gce5DoCmlAQobQ9a6yiTVfsvfRjon/FzQsFDchkUs3JKBbJ7K3Jkh\\n\",\n       \"TX9bgLBby/2vEZ6pBdrHT+dbS2LgXOt1Nf0d0LrMpxVInktRyl46/B1TfsROu0XigfDF2zCbhhEj\\n\",\n       \"kPFjDaPi86WESk+PPQFeicKU5KhrJkBIGBdruKpsrkcY1ziyhiYJ0+rtnvRBrJ9L+nvVknil/h7I\\n\",\n       \"tiSAX0vIiIrRlGjwDfTX6net/8ORn4u3E6vJyBClDJNLWXdSkiJKhAx6Il1CuYEGVIRMQqEpQna1\\n\",\n       \"9JVK3cOhkd05xsul7T+nrGuYSbIBwqe3QgpfIZTOGn95VzZq3p+BrROuq/rCJEmeP/f7w42EwErU\\n\",\n       \"R6yl/wypNjuzAn/H+6RfhtRKkRBgaLORRKXnz33lAw0jbxFa1Lpy8xnbZ5N3Dn7F7a0PbWdy/yl1\\n\",\n       \"UBQiU4P1NJG6lgwP6VxjuE+J0kmrSq7hvt5O21VWg2GpdGev2Q/FrVaqZ+GN4dNSQlYrkLKV7J0i\\n\",\n       \"VN2VCgT8Da+v94tSI5Uk9qw6fp4cH5vUxhDP9qS2+TSd+jGeSViO/dXrdzA+DRJFlEbBubAjtcBk\\n\",\n       \"DwnZkSA/k5qYDBHrPeG4RzKe9c783LeASAULFixYsGDBgt3R3hkilR3XzgmZrAA5uRZEqE69hycv\\n\",\n       \"te7igh6MeR991iQSL6VaUsxMEBYQhfnTgdZLA/FcxRpZd28oIoWsQ6QCZluQnXMR2qMO31T4ciVE\\n\",\n       \"6iJ4nCqWyTftpXhrN7feM0gaI+eV+b7QH+v6dWIbuy6EC5cz80xy1vOCN7NN7Voku+dyjpfX3sNM\\n\",\n       \"MiMAv3rhPT0VumtV5RIb/xpIx+uJoVnnM+/FFqmdLwPZ8xJeQhJZX4/hOSqqkmz9PeynNimYztwp\\n\",\n       \"DLnsFh4l6OdWw4sIQy4Cp0nskZtB359voEgX7sV6s5LvI11e0B+SEksZ6xHOEwkpug9PM5FaVzmL\\n\",\n       \"qoNsn+fWNnrfcaoaHkj/VU+rIhFUPFLcnzK2MWGaeCNrgp4YESYd67ZOl3hmaeHHq981sn0JQvfR\\n\",\n       \"2BCRCe57I0xpyh9o/ceGhG78fy1oWQwvUBNQGsf2itBuTKSpPeRK1pUTT7MCOhbtOJdIHgEiVIiA\\n\",\n       \"aox5rYTdDpJXFnNDcDZLP3fzWEis2GSaUhAZrhMZ/wznLgV162GMXcF7LfOfySAqoIjR07pmp8P7\\n\",\n       \"vl9Cei7nQFjum5guE1D6QtQGcO+aGshE1+5rtK+b6KIE9foyQ/Oupr7fXz77pj3G2zgc2fkK9E0J\\n\",\n       
\"9Q1uEJEJVWnpAZGOBU3vIdkllZMslh5VUlJ+i3BIXUESpTVNnzIBRDAUmUiwT+r6J2F+KvXvtvV+\\n\",\n       \"TTiaErUJNqmcAJGzms/EWhE8PBNmEpEBYtYRBIfolJLiif42OwsAotOiXbJCQsNG6mkSla23zc7/\\n\",\n       \"nXNuCfRrvjCkpx768/YHUv8Pki3aplcvfZLP1Y2huZRfODk1mY4Cz9YGQqPbUuY/pR7k+UsUV+Uf\\n\",\n       \"iNJtpK9pG2ESiRuWE5S9uHvgPqoFRCpYsGDBggULFuyOFl6kggULFixYsGDB7mjvLLQXx7GLVHei\\n\",\n       \"4yFAErydc67aeKiue2ThnkePPGT68lvRv4A68VC0WBYAhDcaFkAIKEOIQ0lvJOqqOnoMsu22MHB5\\n\",\n       \"jdpoc4GME5C3Y1GWZe2wwTje++1q4dvUjQ2KXEPjSKSt3BI18W6nBgUP7wGqlBAkRYlLgSw7CP3c\\n\",\n       \"P37QHuuCoE/iXCpaJAXG/f3H9+28gHhfv/zK2rTw11hOpU4h6uOdj+xaG4RWh0JsXS8BWfcFMm08\\n\",\n       \"Gf3J0H+vX1jYj0TFnmiH1GhTJqEdDruKz5LQWpf2vRy/SRuJAYG0y1BgJiGDdjylhiPh80JCa8tb\\n\",\n       \"H47abGyuXcxRf24o2l45tbLsGhlgbGp6qYoxCajVjjo4+iW12QhLq+7KQmpRtm1nOEIuQrg/bUN6\\n\",\n       \"ou3GMGbPEgbqte/3WpSQT478PduUFp66vPShvb4kjzB82Ije0Xzhw01xRDLrvj5OLKrDjHJEKuON\\n\",\n       \"NmcyT1Lc67LcJ90qAZX3pIPEgmMJT5JEvFPrDOO+XFoYYzH1YYn3H77XHru+8v3vdSUkgDCa1gls\\n\",\n       \"a7JJUgpDn62atBDwK4Q0en2bVwwzqWL3lueTMGLe9UkekWoctUr2Nu557O83hbqTQu4hwlhNbCEb\\n\",\n       \"+uPZUAjoHYTbciEsr1BtQSowULQ9k/BlgRA4538s6vB96BJFggGQKtHIlOAxneusydgcUAA/NCeo\\n\",\n       \"jr6jIUR9JgkFbvCcUsV+3lc9L8Nyqt1EFXWdY1y7TLY5pI6eiv53D2vs5MRCYSPUU1xIXUFev5FE\\n\",\n       \"Ge4ZSkAnAbsslSjv29JHWFbrzy5Xc5zfxikv/OdTIeBvlnP02faJGppuI1l3GZ5LqsVFKg2Tt7TW\\n\",\n       \"IasH7JTDw/e3siY4FVTvq1X0lz1phXvSCM1BE4QOWUCkggULFixYsGDB7mjvDJGaT+OdNNhOzErb\\n\",\n       \"gkhF/q22K6nB7z3yb91Xr8zTnE/9W+VoqG/p8BLE03FADFhrqBFPnwrAkRCwWy+tFLVtvBGXQhgk\\n\",\n       \"764rXkIXnl7UERVlfg6ph9VCaJSRfav9C0TlUryqDVCqQd/eqsfwNI/Hp+2xM3iHP/rwV6Wdvj8k\\n\",\n       \"TG6V4AjS62hoHmRv6Du2klpfP3gfSsAd8SAy//mgMC85AqGzI7X7qFR/JLW+iHqwrlpHEEGihFkh\\n\",\n       \"iETJ6t+qTs0UdvOSSiCSC0n/Zs2uSMim9dZ7RxWItY3ca3qCmUht5F3fvh2vskWu2kMtIqKqxG17\\n\",\n       \"m30viV6oep9EmjT9e7lS9Whvw8Fg71hL2BWi7JpV5cUj55/lIWI5iJqVzD+q929WRraew9N8/dok\\n\",\n       
\"MQYgL8cHVMGlO26Cmn2DAVK9BWnietmpq4WxK6VeHpeOku3pQe94qZjjqSjLD0d+LhZEa6Vt24bX\\n\",\n       \"t+8vofa+Whvix3uspOAO+t80Nk9H8OYjUVZPWCdvp6KDPx8TQEpBqxKsxcVs2h6zJApbJ0QaM0l2\\n\",\n       \"4Jx0gpI1me9/p2tINLfAGMhFJOvatV66VFtANYJSVMw7QHUe37daj1OiIzuIKKQrZJ4QbYza2myq\\n\",\n       \"js7KFravR5AHKXTvAjqkc50yJaqAzevvoHlb1jr0/1epgUMSAryEVtEg6qGoClP8lai+Qt+WgiDf\\n\",\n       \"Tjwi9Ob6CtdX+YUe+irVFoBwKnJHxGwwtL2Ba6KU6hE15om2c0x0SEjpHANWEVmtbQzXSOxYCdKU\\n\",\n       \"pf7zTPaT+YH6e/wzlUyRXg+JHx1NsuEez7UjEYm2Tbb/8xpHR/ZMTLAWtCYnZRwUTWMygCa01UJu\\n\",\n       \"P2QBkQoWLFiwYMGCBbujhRepYMGCBQsWLFiwO9o7C+1Nrxu3iQ0ezbdeg4MFYJ1zbtJ4EufoyZP2\\n\",\n       \"2LDnm/zBxwbZffWZh0CrxuDJHsJ8GpboEzLEJSoRRVkCYs26ojECCDQqJdy2rPaOpdCnSiS0wMKo\\n\",\n       \"mWjQOBCVyxKwvIQn8nq/yGd27NuXSOQmh2L14/HH7bGH5x8455w7PzJ9mH7XE4A7WiAU4YY+YW8n\\n\",\n       \"WhxUYlc0Fd1Zie7HSebv05ORhRbbcwhRe4OxU6J4Do0e6in5H+EfQLG1hOcY5ewJYbnVlpHwyHIB\\n\",\n       \"xW7hVyeOSrhyT0hAl3FPc4ZgCPGqYjiLoVpoo0I16rXA2FTHViXeIcZa9Z4OETtJqCWZUtWRqeei\\n\",\n       \"3ycE3pMCvTXCXanevJj3U5TVM+rj2Nwterv91zBajTleJ6I2XvnxYQFk55y7vp2jTXafpghjnJ+Y\\n\",\n       \"3leGEP18YXpDJO9z7JSwn/agRL+jzs77IxQAEMC3ElprC3lLQkWDOZaItgyLqq4QWllKdYQSYYSe\\n\",\n       \"FOhttXVEs+js6D7aITp2mP+DrhRoxdhqOxOMiZLiN2w7lMiTrYS9ElZgUHV2//2+6OJFLG4rtIga\\n\",\n       \"VQtSiVJsJ9jj+qJBhL8jx7bvqzpHQlgu537sXr2wAtUbjMVaikbHMcNoNsYZ7sVwqGscbcOaWE+W\\n\",\n       \"8pn//subS+sXQjEffPhRe6yHvWYroT3urFuZ//yrrLRNUPQ/UCD5bYVv55zLcY+fvGfPqS5CaoUm\\n\",\n       \"xaA/jcyJAdauhi8TcEVYMUIVyxmC6/TtodAd+r1ew5O3V/7ZGUsh8TYELeFmUgA6fWtTf+T3Ow3B\\n\",\n       \"cX8uDxQUPgKxPYst3Mz1qY+/vNV2kvWE8VxLQkWJPbsR6ksmoV/nnKuWMv9W/hwL0TZsleJ1r8PY\\n\",\n       \"zUTvazHnPi6UBtBLdkKQst8dsoBIBQsWLFiwYMGC3dHeGSJVuK7rx+bpL6eedNrt2hvkoOMJ07Nr\\n\",\n       \"ITGeA5F6aJ4u30gnc6k/hzfRjXh/I7zpj0Esr5y95V7deA86FUmEFCTq1VZUh+HOdYdSpw5v9bOV\\n\",\n       \"eKRMq8/tTb8L8lz/1Kttb8b2/TmUgPsjSZeGOnl/bOd4/+wT/GuI1IMzj0Rlkmocw3PdSk2uEunp\\n\",\n       
\"9D71jZu1BjWtuIL3rYTBtOe9n5549fROaqm11AUSpQq4VE/Wt3d6DqyTVCoRHB6WEoZJeowqafvA\\n\",\n       \"/0ZrzdHDy2X8qYZbivczgxezRh06JXvTE1yJp0NyfCZeVYY04a6gRNEOy3n3fAcN81UV46eoHTgQ\\n\",\n       \"MjmJ5UpAbdWZpU4iZRVymRNFvn/fuU7YbyVMb5HsoenqlBhoBGmYr7w6cRXbnOB96ktSxApp0oXM\\n\",\n       \"CRJlicgpwZTef0/I4SRRbyVdOeLclX4Z6iFkY6TxD6RO3O10ht9C2V3myxUkDI6PrA8F1lUisM5k\\n\",\n       \"6r93Kujb/MrvZyeCtMxu/LHTY0u2WOEeKxIfI9Xa6qrZmFCdXZHOAiRzRQTbWm+CXMRIRom7dn0H\\n\",\n       \"tKtuBDlqPNmYazKJdb7wLxl/oL83t5aAcHmNGmYyn56++M45t1vrbjDwY1sLoZ7IBf9VRJj85+OR\\n\",\n       \"RSQ4T5JYUbr67Qa31GGta9dKAsj3OI5cE6uVos/+LFrrb1CM9trJ/ed2ZpUd2M6iI3UV012kxTnn\\n\",\n       \"hkCYmJyg7W0TYBTqQYLW9EbU9rEnF5KAkIEMLwCvi4DY6X7eov6CurX7BOekRF866EPv3KpIEAlU\\n\",\n       \"hDsD0qN1Ojk/tpLQdH3tka2JzCfuccdYO1Sfd865LvZiVefnNWq5r3Mom68E4W9iKrXrPgHUX+op\\n\",\n       \"fn+lvYBIBQsWLFiwYMGC3dnCi1SwYMGCBQsWLNgd7Z2F9jpp4XpSCLCE3s+gEMgaMF4tobVN6SG7\\n\",\n       \"Y4H73//wsXPOuc+/fdoei0EsbhZSyHLlYfx7Zx4C7DkNT4EIqUVTgUAOxgZF9hDSa7ZS8HgLyDSX\\n\",\n       \"EEyPYURT6j4a+5ACi4BWjYVxliBMrhcGOzoUQS0bC20+OfWhvdGRhSfIMS4TISyCKR6LLhdJmckc\\n\",\n       \"RZ5FnToFnNkprA+MlHRFRTkCQX2ztnfwDcIoXSGbt2E5AUVrhBa3kYZPWTSS993OkaYM7e2/72sY\\n\",\n       \"J231gfbVZyOBwDOEdKnE65zBvVdXPhTRFY2dASD2lRBBGbLaSr9IXtVC1tQ70gVGuDmJNSlhu/PZ\\n\",\n       \"kdzXwchfP5O5ziKcO4RJ9GEkCQAnJz70Meib3lcC7R0l1jKUF0dQAhZybouKa3gE4dGVaPFUgMo7\\n\",\n       \"Aq0zLNOV4sZ15dusyt41QoQ1tLh2wp/N7jWdMy201ULmOueOzgnq3sje0UWSQye1sAArHdcI6c9E\\n\",\n       \"Cfr62hN2VYvo7AhEXAmZs6j07e2b9tgI5N1MQmBLjHumoTqsic3G9oLVAkrNCDN2JGS8pt6QxBp4\\n\",\n       \"D0shAPcw7oVqjHX9/FC6QzZ65PuwEaIw9skYhbcbrQRALTYNtyNk/OCJKbv3oaP08ruv7af49+LC\\n\",\n       \"iOJPn3mCuoYlqemVtrp3drFjzPFEQjsMs7+5eint9P+Mx0dyCGGcRMPHKGQtHZpOfcIH14Im7Bxj\\n\",\n       \"faq2GxM7Gln/bxCWWkxE74skcgnBpwfI5qxaQHJ0vEN6RhWD1b5ivxYZNs02JUkjoeVGiivjWlrl\\n\",\n       \"gjpKpST+cH9qC6jLnnDvnqeWCNd9J0HGrlXhHNJ/xBlz2SdInxgNbe+ifh4pCwN5JvVwLEsO4EIy\\n\",\n       
\"d6xSg+iiRfzH+kOyf0fCnVHz/cG9gEgFCxYsWLBgwYLd0d4ZIpXGmYtFRZxeep7ZW+gIb52bjTVz\\n\",\n       \"tkRKuIAP3dy/LT86MrLn6wvvHTZCSqZq8py17vrmGZ8DaaoK8+pZzye+ryRO7wmp2nESQ4E726+1\\n\",\n       \"VAh5lWq/fRBAc0EaKiBolaR/3048UfH1zSvrK34Ti+psTSXejdQEw/mUqFojJXsLT0MJlt2tb1NP\\n\",\n       \"6pqxnfr2v954zyCR6xOl2vFg2hRm+94MKEJRmPcTQ6k8QdMVfaJXU5b7qaeFSA3QH1iLR04U6VYI\\n\",\n       \"izT1fgcDP9/GY+/hJYVdvz+AhISMCdN1b+biwaOPlXhh9OAq8RJXG/ZfJCFIrEVKdCTeWgnF4K2o\\n\",\n       \"XpNEqqR0eti9js3dQd+juX2p/2Wpy9YmemKcC7u1ufz3msquRVkBJwT0LpCOblfQVyAHVW33pEEh\\n\",\n       \"yfnMUF/KRJRIyVbHrwOy52KtkhT+HyXsLqf+81zqTw5A8r5d2PVHSEmfSFLIHKToDATvkdThYmq+\\n\",\n       \"evVbkLK1KkIf8gg6TzvjIZpra5IE2U2l54PsR2XIWR6BgM3vyaDkSKhIhIBMpEORm5bPmwphthCB\\n\",\n       \"THUAACAASURBVOPnRCKo77bx9zuWNm0gwRFD7T/pG4mYay0SpM0B6Xv88a/btUDG3qxsnVwuLpxz\\n\",\n       \"u7XW1qi12I2tnRvsXTfXfu0qgsR1tXxzYdfC9xXVyUGs/vAjq+wwX5HYb03PEXbodw25m898/1cr\\n\",\n       \"P3cU1eIErA6guqWgRERJC9njibYS8XLOUJodOQXMOyp2q6wL93Otq9rW5tM5QXX8SORMDpDnD/02\\n\",\n       \"SiixYvs+E16IoCuCx/1UE1W4d+sx9nE3oQfPWElAYf8fiio+ETPWQlWyeXsNeSbZXNCEKrxjyDO5\\n\",\n       \"wfgksu8ymqBq++5AVEQtIFLBggULFixYsGB3tPAiFSxYsGDBggULdkd7Z6G91Wq6A9nmIBY2G9HY\\n\",\n       \"gNp3LeGuxYQQqMGoPUDLqcDYPYToVnMjNvZzH8YZpR4yHPZFCRyw/1Zw3w4gwET1meJ9sjHhxkSK\\n\",\n       \"WxJmV70RHmPhycHI4GTCnZuVhSJYoHchoQiyPFVZltDmji4USYnyPdoG5HUlBEa4fiWQ6QZFMJex\\n\",\n       \"hPHQB4Ws+XcloY3yAIzLQr95roRyhCoBuyrEzZ820i/qPelYUzNGtWgO6cOwvx1RFmZYgOHGSODh\\n\",\n       \"BBCvoONug3vSCNmYsLjSETnuqibFthxqE8dkraFAwO5K7GVEqdpRYsaakbZ3oB+jxUA30M86pJV1\\n\",\n       \"aJ6wR9OphWcY0o1Fs2c8PH778m0RVlPHdq5A6E1R8jV0kdjesrJxzXOEbCQ5IM+wFoXZmkFlO5Z7\\n\",\n       \"EmHO9kUVn6Tty9emVH1x6feHxdyP+w9+8Gn7GdXWZ1MLD6fOz6uzIwlP4y5r/1lIOJIttshZhNdC\\n\",\n       \"Kxz/kyMLn5Ug4DPMu1waOThN9hMqSPKPZU32ELJ2QiKOEKqLhJYQgeR+/dxCZfOZH58BxmtgERYX\\n\",\n       \"D/2eqXM94izXRAGEwz/49EftoadPv3XOOTcaWvh0hIQODXdzus+RFKCUAa7rupI9CRNKaRQrhNnm\\n\",\n       
\"Uxs7rvWVhOCqjMkLEipHWIzX0lDk5aUfJ93D2vUvg8J512yl4DJ6JsuvvUZe7Os4LbHva5Fzhps0\\n\",\n       \"ZMb1qTQO7v86rn1QNDIJozG5ZWffBb6i56MxjHfoWaPHeF3VkeJepwXfOf81KagGfaSWsHiN36YD\\n\",\n       \"JErVGgr090L3tbTHNqlOF8KI8t4RoZ07sn8tAV1oQb9ESep7Eakoit6Louh/jaLoL6Io+kkURf8B\\n\",\n       \"jp9EUfTHURR9FkXR/xxF0ZH85u9FUfSLKIp+FkXRv/q9Vw8WLFiwYMGCBftrbL8MkSqdc/9R0zR/\\n\",\n       \"HkXRwDn3f0VR9MfOuX/XOffHTdP8Z1EU/cfOub/rnPu7URT90Dn3bznnfuice+yc+1+iKPq1Rl+p\\n\",\n       \"YdV246aSwj3KQRQXWYH5xL91L9eCvuBt9fl39lb74MED55xzvY6RXfsdIDyFvWl/8NDXYjo79XIJ\\n\",\n       \"vZ4R23spUC15gaU6eKcw0h0Bq9VmP70zz204G+roikwA1VtZh2+ztj5wiLQOVAd1/x6k4q1WVOA+\\n\",\n       \"RKwzo5ekqBMRI5LDt+LVxzjf6UjShfGmX0d2jgJeaiqIEE0zbROMXRSb50TCYrxDKAfqBmK1kqj5\\n\",\n       \"/UZe90mypUqvP4dHPypBM+jp7HhkA38fdTre3voUd3p4WgdqDk9vLR5xRSKi9LsCGTyRe7et91Pd\\n\",\n       \"6TlHcu9WmM+8h3ovSdSOBCWtW1K+1MEi0ik1DKloHGv6L1qtZFOOD+eJetW8RCbe9wboSJ7YeUv0\\n\",\n       \"tZHxp7K9ElZJWq9LJVuznh3UvGWsmw3rdUldS6Q469xtEeFYPfIIbbcFTW++kfFsEUZc4oXUixtD\\n\",\n       \"WXoja6iGnEgiqdZMIugIAT7v+Lm7nhtTOIqI0lr/CyrFi1I7EdMkJ0oqfcB1FdUkqqsLJR9DvfzU\\n\",\n       \"UPeoAbH/1XP7Xt/vu4va5sTTCz8GH5z5fbUrqE7SP8cfzozSGeLBx/Dgh1L/8/H5Y/zUfsz5rgko\\n\",\n       \"nFo51o6i/7cgag8GGiVADUMZV9afrCVRo9fz46o16drbGCvC0cH52EabkwvUfxsMbF4TuXnxwuQX\\n\",\n       \"SCifLQzNpZzJQOQsRkO/355LAszlm9fOOefe4F8lVh8dAf2VPZSoj6JkXONVtY8054JIdlBJYKfG\\n\",\n       \"JmqRbuQZx+nGfedQdYSdGp7YV7pSbYLX2FGqxz6tQu1VwmiCHeQdW6MWrUZu2BaVTuGYNbGgWjif\\n\",\n       \"1u2L1r7/ijhxbHN5xml9zEP2vYhU0zQvm6b5c/w9c879pfMvSP+ac+7v42t/3zn3b+Dvf9059981\\n\",\n       \"TVM2TfO1c+5z59zvf28LggULFixYsGDB/praPzfZPIqiD51zv+uc+9+dc/ebpmFO/ivn3H38/cg5\\n\",\n       \"91R+9tT5F69gwYIFCxYsWLD/39k/F9kcYb3/3jn3HzZNM30L1msiZeju28HPoqbYEfRoWKw2E2Ix\\n\",\n       \"NKMWQk5bQ1NGVbkvX/nwzIcfnrfHBgMP7XU7duz+mVfePR541edGmt2Ge7RAL8JHTSQkVjRPkNAW\\n\",\n       \"gtRxITysRSNbWPRA8dqWPCltGkGLJhfCMLVAlBx8C90XPcZrKYmQxPcOQgYK584hVrKU0NLo2MPO\\n\",\n       
\"SERu69q+LCKfo+9irA8pROSMiPyriPxARK6IyG9reaztx8SBH6REpAXAnwN4GcAEgF8UkY8edD0C\\n\",\n       \"TwwJwGdSSi+klF7Ust8H8J2U0tMA/kX/Hzh8+BvkdcvYdWxFZALAzyOv8ZcB/IVw4sHAhx27jXUC\\n\",\n       \"8Ke6tl9IKf0jEGN9BFAF8LsppWcA/BSA39Bncqztx0QzGv8igHdTSpMppSqAvwfwhSbUI/Dk8Ghk\\n\",\n       \"w88C+Kp+/iqAnzvY6gQ+CKSU/h3Ao7kS9hrbLwD4WkqpmlKaBPAu8toPHALsMdbAzrUNxFgfaqSU\\n\",\n       \"7qeU3tDPywDeAnAKsbYfG804SJ0CcIv+f1vLAkcDCcA/i8hrIvKrWjaQUrLkSFMABnb/08AhxF5j\\n\",\n       \"O4y8tg2xzo8GfktEvicir5CrJ8b6iEBERgC8AOBVxNp+bDTjIBV6C0cbn0wpvQDgc8gm4k/xlynr\\n\",\n       \"bcQcOIJ4jLGNcT/c+EsAowCeB3APwJ+8z29jrA8ZRKQC4B8A/E5K6SF/F2v7/dGMg9QdAGfo/2fQ\\n\",\n       \"eLoNHGKklO7pvzMAvoFs8p0SkUEAEJEhANN7XyFwyLDX2D66zk9rWeCQIqU0nRQA/gruzomxPuQQ\\n\",\n       \"kTbkQ9TfppS+qcWxth8TzThIvQZgXERGRKQdmbT2rSbUI/ABQ0TKItKlnzsBfBbA95HH98v6sy8D\\n\",\n       \"+ObuVwgcQuw1tt8C8Asi0i4iowDGAVxqQv0CHxD0YWr4IvLaBmKsDzUkJ+R8BcDVlNKf0Vexth8T\\n\",\n       \"B560OKW0JSK/CeCfALQAeCWl9NZB1yPwRDAA4BuaKLcVwN+llL4tIq8B+LqI/AqASQBfal4VA/uF\\n\",\n       \"iHwNwKcB9IvILQB/AOCPscvYppSuisjXAVwFsAXg11OkUTg02GWs/xDAZ0TkeWQ3znUAvwbEWB8B\\n\",\n       \"fBLALwF4U0Qua9lXEGv7sREpYgKBQCAQCAT2iR9r7YdAIBAIBAKB/w/iIBUIBAKBQCCwT8RBKhAI\\n\",\n       \"BAKBQGCfiINUIBAIBAKBwD4RB6lAIBAIBAKBfSIOUoFAIBAIBAL7RBykAoFAIBAIBPaJOEgFAoFA\\n\",\n       \"IBAI7BP/B0EEnTIvM42+AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f66a46f8a10>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"plt.imshow(transformer.deprocess('data', net.blobs['data'].data[0]))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Adorable, but was our classification correct?\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     
\"text\": [\n      \"['n02123045 tabby, tabby cat' 'n02123159 tiger cat'\\n\",\n      \" 'n02124075 Egyptian cat' 'n02119022 red fox, Vulpes vulpes'\\n\",\n      \" 'n02127052 lynx, catamount']\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# load labels\\n\",\n    \"imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'\\n\",\n    \"try:\\n\",\n    \"    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\\\\t')\\n\",\n    \"except IOError:\\n\",\n    \"    # fetch the ImageNet auxiliary data if the labels file is missing\\n\",\n    \"    !../data/ilsvrc12/get_ilsvrc_aux.sh\\n\",\n    \"    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\\\\t')\\n\",\n    \"\\n\",\n    \"# sort the softmax output and take the top-5 predictions\\n\",\n    \"top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]\\n\",\n    \"print labels[top_k]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Indeed! But how long did it take?\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"1 loops, best of 3: 7.14 s per loop\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# CPU mode\\n\",\n    \"net.forward()  # call once for allocation\\n\",\n    \"%timeit net.forward()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"That's a while, even for a batch size of 50 images. 
Let's switch to GPU mode.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"10 loops, best of 3: 90.9 ms per loop\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# GPU mode\\n\",\n    \"caffe.set_device(0)\\n\",\n    \"caffe.set_mode_gpu()\\n\",\n    \"net.forward()  # call once for allocation\\n\",\n    \"%timeit net.forward()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Much better. Now let's look at the net in more detail.\\n\",\n    \"\\n\",\n    \"First, the layer features and their shapes (the first dimension, 50, is the batch size used in this example).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[('data', (50, 3, 227, 227)),\\n\",\n       \" ('conv1', (50, 96, 55, 55)),\\n\",\n       \" ('pool1', (50, 96, 27, 27)),\\n\",\n       \" ('norm1', (50, 96, 27, 27)),\\n\",\n       \" ('conv2', (50, 256, 27, 27)),\\n\",\n       \" ('pool2', (50, 256, 13, 13)),\\n\",\n       \" ('norm2', (50, 256, 13, 13)),\\n\",\n       \" ('conv3', (50, 384, 13, 13)),\\n\",\n       \" ('conv4', (50, 384, 13, 13)),\\n\",\n       \" ('conv5', (50, 256, 13, 13)),\\n\",\n       \" ('pool5', (50, 256, 6, 6)),\\n\",\n       \" ('fc6', (50, 4096)),\\n\",\n       \" ('fc7', (50, 4096)),\\n\",\n       \" ('fc8', (50, 1000)),\\n\",\n       \" ('prob', (50, 1000))]\"\n      ]\n     },\n     \"execution_count\": 25,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"[(k, v.data.shape) for k, v in net.blobs.items()]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The 
parameters and their shapes. The weights are `net.params['name'][0]` and the biases are `net.params['name'][1]`; only the weight shapes are listed here.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[('conv1', (96, 3, 11, 11)),\\n\",\n       \" ('conv2', (256, 48, 5, 5)),\\n\",\n       \" ('conv3', (384, 256, 3, 3)),\\n\",\n       \" ('conv4', (384, 192, 3, 3)),\\n\",\n       \" ('conv5', (256, 192, 3, 3)),\\n\",\n       \" ('fc6', (4096, 9216)),\\n\",\n       \" ('fc7', (4096, 4096)),\\n\",\n       \" ('fc8', (1000, 4096))]\"\n      ]\n     },\n     \"execution_count\": 26,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"[(k, v[0].data.shape) for k, v in net.params.items()]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"A helper function for visualization\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# take an array of shape (n, height, width) or (n, height, width, channels)\\n\",\n    \"# and visualize each (height, width) thing in a grid of size approx. 
sqrt(n) by sqrt(n)\\n\",\n    \"def vis_square(data, padsize=1, padval=0):\\n\",\n    \"    # normalize to [0, 1] on a copy so the caller's array is not modified\\n\",\n    \"    data = (data - data.min()) / (data.max() - data.min())\\n\",\n    \"    \\n\",\n    \"    # force the number of filters to be square\\n\",\n    \"    n = int(np.ceil(np.sqrt(data.shape[0])))\\n\",\n    \"    padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)\\n\",\n    \"    data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))\\n\",\n    \"    \\n\",\n    \"    # tile the filters into an image\\n\",\n    \"    data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))\\n\",\n    \"    data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])\\n\",\n    \"    \\n\",\n    \"    plt.imshow(data)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The input image\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The first layer filters, `conv1`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 28,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlEAAAJNCAYAAAARaCA+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvXm0Ldld3/er6Qz33PneN8/9ut/PaqEJi5aEwBIoBBOS\\n\",
\"f6Vvoia6QXr3ve80M7PPfu737S1vvtuiRiv4/1GS4YbjBOuSsC7Ph9BmPCR1/UFYUWAb9hNdmob7\\n\",\n       \"+Q8/+g+hzfvuvxdf6I7KYLsHTTa7XdzPsgjKUYqnq9FoBuVmqwVt4m86dr//+d+3u990tzWycFuR\\n\",\n       \"Yf948MGHoO4n/86PBeXROIc2/R5+vigKtx+neI6jCI965vpG2mximwy3ZX5bpPv/3M/i53vgZ94X\\n\",\n       \"lKuyhDYVVllVhCe5yP/jufvs5z9nb3nTm21cFP5lNh6Pw+3kuPFxjsc4jsMP5I+vmdkv/JN/DHX3\\n\",\n       \"3XNfUE4z7FPkNFjDHffOwjy0mZ2fg7qhuyZXLpyHNhdPn4a67uZW+P4NPMe/9Mv/Kii/933v/w//\\n\",\n       \"/synP2Xf/dbvsarEY27uWGXNBjTJWtjPms2wL2YpGZPGO5/jIRmThn5MMrMPPBD2xY985EFok0S4\\n\",\n       \"D1nDXdsJGTfoNRMel4r0KSPj8Kg/MjOz3/nYx+y/+IEfsNFwBG3GJX413PeunwrKP/ezH4E2W64f\\n\",\n       \"mJldu7wSlLfJ2LmwvAB1swthXbPdhjZJg3y+UXj++tt9aFMWeN1++EMfDMrvfOc7oU27hX2vkYV1\\n\",\n       \"CTl/UfLSRfqJT/yevf3tb7OKXLS9Lo7Dve1tt994XhJyjqMo3Ief+yiOnffd926oi13/rHDTNtjC\\n\",\n       \"8zc9PRWU5w/vhTbPfPVRqGsU4Rvc/pbXQ5uHv/jvoe7IoUNB+YH3fgB39Dq80jdR583syDeVj9hL\\n\",\n       \"v0YFfPZzv29mZi+eOWMvvnjEjt906hXeDSGEEEKIP1le6Zuor5jZTVEUHTezC2b2l83sr/hGb3nz\\n\",\n       \"3WZm9tnPmR07dvQV3gUhhBBCiD95XtGbqKqqxlEU/V0z+7dmlpjZL9GZeS//9Hns8OGX/j0If8qO\\n\",\n       \"M/xpNOt0oG5qbjZsM4U/z47J4xj/k+bWygq0qcijQSOPcTyjEf6cv7W+GZQ3NzbwheTRQGcxfGSy\\n\",\n       \"sGcPtJlyP0mzx5DxN83AvP3O221pecnyUfiT+5g8hmAU4/C4xORn5CZ5PJK6n6iz1hS0YQc4cY92\\n\",\n       \"kwy3/Uc/bQdbcrNO43iyORT+8dZ4jPvEHoFV7uf8OPmPjwBO3njSslbDSvKYw9wjvrwYQxP/WNfM\\n\",\n       \"zJuFk84Q6blHEUmMj2z8o0Izs0HiHmGQ8z47j49Q5pfCn+FT8lhu0MfHI333yIs9IvY0vukc33ji\\n\",\n       \"BmvEsaVNPFdpI+xD/lGlGT8u/rFcd20V2nTX8doe5uF5L8hj3WiCMzgmj4zyCreVuzEvTvF1wxz7\\n\",\n       \"md+HKMExKSZ1+eilx83Hjp+w4Si30QgfP+cjfD/YNnnEP+gRHcNtv0kevbY706QuHCtL8ohx0Mf3\\n\",\n       \"g7Eyx+uYPc6DNmQmfEGu7ZF7fJ+QNsnLz8WOHTti4/HIEnLskhT7cOXckpK4CQlxHyqb4PuBdOHC\\n\",\n       \"HbtWE7/Hm8uLUPfUF74WlL/z8GFo87rvfyvU/dJPfDgo3/Lqu6DNba9/LdQ9+YWvQt2kvNK/RFlV\\n\",\n       \"Vb9jZr8zSdtjR4/s3Ei8opw8eXK3d+HPHMePH9/tXfgzx4kTJ3Z7F/7McfLGG3d7F/7MccMN6ue7\\n\",\n       
\"jRLLhRBCCCFqoJsoIYQQQogavOKP8yYhd898K/eM+5unh/8RZXEF6maXwmepC4cPQpuZfegRdeZm\\n\",\n       \"gvL0LD4/3yK+Q06ez8PrtrahbrsXOh9N53KZmS0fwn3ffzyU7hfJtPLCuSPba5vQZpt4GuXYPRsn\\n\",\n       \"7g/Duz9pE5/Fsy1V7jl7SVw15gf46dlEHbM4IV6Pi/SIiTfFaLmICOZ3pCl6WX4qfb9HogqId5Y7\\n\",\n       \"v2IwIk4GcalS93Eq4ncwEvfCgrgq3v0xQy+EXaPzi0tQN7dnOSgvTh2ANoM++k5dFwOyOroEbTz9\\n\",\n       \"LeznEfFQ+r1wSvVogMd8cwOvo7WVcEwYkKn1LFJhaja83qcX0QHpzM5AnSclsQQ56Z94reG2WKRJ\\n\",\n       \"5fyqmLUh098L9ga+TYH7ie+Pddtk+vtwGDq0Mws4nk7NoHOZeB+PeExEhbPRdrjv1H+a4PJjcQJE\\n\",\n       \"jzMfpZgSn7MClwm3nZJoBN+HRmPiGrKxhLh3nogMzqnrQ1fPYMTJq773jVC3dflaUP6N/+NXoM1H\\n\",\n       \"P/urUPeV7/9UUP4XP/OL0OYDH/tnUPfEVx+BuknRL1FCCCGEEDXQTZQQQgghRA10EyWEEEIIUQPd\\n\",\n       \"RAkhhBBC1GBXxPLMybtDJybmZB2p3ipKo1defDEot59+FtrsO3Ec6paPh/lUKZEQpzooeg4jFIo9\\n\",\n       \"TK7beyxcl+fQTZjVdPgk5n3MTIf7sHX5KrS5cPpCUL52Bdv0iOw+dhanX4fweozdWnkpCborWTic\\n\",\n       \"Oy4oRpqx5R19HQtrjInsmo9DETptTNbVp1xga0bCPVMSbDd24YWDAQnWJBROUh2PWJAfC+D0bSYL\\n\",\n       \"S81csKR/fzOzQQ/3YcMF0q5cwokebB23WTcZYt9xXKHg4DHs+37dr5ysNwevIYL6oIt9v7cVrsc2\\n\",\n       \"JEGeQ3L+EreW3NwCCuLTRHJeOrg/KM+Q101N7yyWM9+3rMjfwWW4n/6aNTOLYiJV+ykhTLwm/Swf\\n\",\n       \"uu0T0TwnEyY8ox6O+yN23pOd1z7sTONkodLJ1z7U0sysYp/Py/vEK29kk4yfJMCVTaYpXDti3Kdu\\n\",\n       \"PGWTAIxMpplqh9+9ZUkmkZAg1oJI8fA6ElQ6sz+cbNJdW4c2X/vsF6Duh/9muNDJw7/1KWjzz96H\\n\",\n       \"a9f+vf8rXNPvR/7lq6DNv//Vfwt1N9+J7SZFv0QJIYQQQtRAN1FCCCGEEDXQTZQQQgghRA10EyWE\\n\",\n       \"EEIIUYNdEcun5l2CrysPe7iqe9pAcW90Nmx3+cXT0Gb13Dmo23M6FL33kEUcW4uYDp40UUD37D+C\\n\",\n       \"iyrvP37ItUG5Ni5RDDz7h08G5Se/hCtNnz991m0I92mKJLK3ZsJjXrEocIZL041IxG9Ck3JD+ZOJ\\n\",\n       \"3mmEO5+mYbuEpfey3XQHwkvB18OvCJ9muPWKGPA+5Zu9W0JEz9gdTyaRshRlXxXRnHhkqhOKpf58\\n\",\n       \"mpkNh3j9FU64vXrhArTpb6PE3Z4OV22fmcfranZpAeoOuMXJuxtr0MbTJJNBGo0W1C0shynq7Q6u\\n\",\n       \"LJ+18VpvOfl7ag7fr0NWFeh0wm0xWbq/SZKjHTlZVaAk/cWvKsCc4JhM7ChcXWlkgkiM161P1I/I\\n\",\n       
\"FTkmEjduB48LSzpvOZF8ahrPX3MKz/vIrQ5QbOO2e+S7p3BGf0rGEnZteypyIbOwd3/cqdLt94Gs\\n\",\n       \"2lDmuPHES/kZnk+2YkFFBHTYJfJZrl4KJ6Tc+DoUuD/z6x+Duke+GH7X/eiD74Q27/+bWPeOv/NX\\n\",\n       \"g/Jfff+PQ5sv/j+/C3VvveFHoG5S9EuUEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRgV5yo4TB8vtru\\n\",\n       \"hAGHS4fQK5jfuwfqlg7sC8o+fNPMbIWsGr2xEgYFlmN89j93YD/UNWcxSM+zZz++rpGEz/DPfP1x\\n\",\n       \"aPPcI49hnWu3uYpBZZ3FcJ+WjxyGNjNLGO7XcO5PTgIdGbF79h8TjylKSDhcMwyobDQwpJOFbfpV\\n\",\n       \"x2PiRDEbKHb+AQvkZGRN51IM0JsYM8nEHYcsY6Gg+AFjF3KapMw5wXPjj0NEzgOj0XL7xVZeJ3Wl\\n\",\n       \"CwHduHYN2lw6cwbqHn34y0F5fnkJ2py86w6o67RD32hhaS+08TTabahLptBtarq6mSXcp4yENU7P\\n\",\n       \"zwXl9gy6ODH5u3TYDcM9r17EQNzVS5ehzsMCVVnfiFxfZ95UwdxCNwZExP0pSUjncBiGZCYsVNIH\\n\",\n       \"VhL6JKyxJNJQy3l9M8RDazRxfCl8qDMZ93skZNVrg+lEwZoIu0YrErZZVGFdQRyl0o3DLCuZ7efI\\n\",\n       \"vR/bp5j6ozuHbc7MzUHdxdOhk3zhLLqU3/X93w11X/zcw0H5wH/zQ9DmLX/+bqj7vV/6taB819ve\\n\",\n       \"CG3mjn0F6rqXV6BuUvRLlBBCCCFEDXQTJYQQQghRA91ECSGEEELUQDdRQgghhBA12BWxfNAP5b2R\\n\",\n       \"kw7ZitFMjt5/y6mgPH8A5dM9RLReOX8x3J8NDAk0tro1CRP0dK+tQt2L3wil0TNPPAltLjzzAtTF\\n\",\n       \"LqDy4CkMBd1/PAzubJNgTSZse2Gz2tn7NDOzlhM2vaBuZpaPmOwadrU4RfGaiawgaJM27C+BOHVi\\n\",\n       \"ORG2Gd5jZcJ9VaGwGbvP58NFX2pDhG3f1dnnY9aoEz0jEnDK8MGBbNudOTKBwu1nb7sLTTbJCu0X\\n\",\n       \"Xgxl86cf+Qa0abYxGHFqPhTLGyTk0dMmMrifCGFm1uyE7WLShysi0m6uhTL9yvmz0Gb9Igria5fD\\n\",\n       \"iSy9Lo4jSbTz5xuTsM0xkaP9oWJKcJnj67y4XhDpOSLhrOOhGzxIPy8mCNsc9DHoMiaTHNozYd/o\\n\",\n       \"zJAxj2y/ux0Gmm53B9BmTI5Lw0vcRMpnE2w8LISUjc2FC0vNiZSfuX2qyPjGxlg/cWZIzgvr+wXp\\n\",\n       \"Z55hjoGx+4+EE60unEGxvJngWHnyjpuC8gtk0sry4QNQt3Ex/P7dXsc+tXQMA7H7bvLHHwf9EiWE\\n\",\n       \"EEIIUQPdRAkhhBBC1EA3UUIIIYQQNdBNlBBCCCFEDXZFLE+d8OZXNV9f2YDX9HoowM0vh0m1HSJV\\n\",\n       \"zx1GiSxxYunWFUwrrXIiRzML0HH5zDmoW7sUiqV+lXUzs2O3nYK6fcdCaXx2P0ltTsP7YL+i+kt1\\n\",\n       \"mATsb58nTeFttMJU6IwI1K0It+VflzaYeI339KkXKEnqNxM9CyeEZw2ULBle4izISugsAdqvfJ6P\\n\",\n       
\"SdI5EXXNbYt8FPqZvcjKpHxG6fYrIvJpmqHoPb0YXmvLhw5Cm7WrmGK+cXUtKF++gNfHwnM4aWTB\\n\",\n       \"rRhAFqkH2iSdPKIvDM9pfx0ngwy2UZIddEOZvktE+u2NTXw3N1thahqTzv31wWDjBlOovQTsk7rN\\n\",\n       \"zCrWh3M/wYeI5fT9XF8nkxzynIxBjn4fj3m7g+d0xk0gYOPGNpH311bC87xFzlWrhX0/c9I2m6zg\\n\",\n       \"r2MGE7YrmFli5gdn1iR35yYl55iN6H48Tcm5ikjiPNt32KchivpJJ9z+3kP7oM3mFbyOiijsL/tu\\n\",\n       \"XYY26SH8Prz6pa8H5f5lvLanFjFZPd/EvjAp+iVKCCGEEKIGuokSQgghhKiBbqKEEEIIIWqwK06U\\n\",\n       \"f6bcdAGOLEBumzyzHA3CIK3pBXzW2Z7BZ+pxM3QSGjMkWLOHYYIRCV70sEC8pnMgOosL0GaBrG7f\\n\",\n       \"mg1D5YZkNe/eRuiPsZXeE+bUeI9gwtvpxIVfklxUGuCYOK/Ae3Fm11k13q1ozj6flbjzZRWeh5L5\\n\",\n       \"SISh88cqoqGwbY1d3/CO1EvbIt6COy4QLnodfNiedx2uT3jC2OuYY5KkobOzvH8/tOnfiB5KsxWG\\n\",\n       \"65XE61m9ehXqGu3w/VpTOztDF57HwNrtTfQtes6FYR7TaIB+jneNmBvXaGJw58Ke8NouC/Ru+hME\\n\",\n       \"+eZDDA4siTDjA1WZ28QcnsL1WX8NmZmRrE2rCv86bJRM4NQUhu8344I1zcxazpPynp8Z/77ogedG\\n\",\n       \"zh9xQ1PnnTKfjMd7hjBtsSKf2Z/SnIwlmQsh5cog7pNvl5FxuEF81X6MvpMnTbFfD7fD8TQh257b\\n\",\n       \"i9/b185dCsoXnsVg2xtvOYnbOho6V5dIiPU8cam6zZ0/3/XQL1FCCCGEEDXQTZQQQgghRA10EyWE\\n\",\n       \"EEIIUQPdRAkhhBBC1GBXxHIrQhEwdsZdo0l2i5hzpRMa+0QmZKtPZz5MMEUZNGqQMDqyyrgnaaKo\\n\",\n       \"12mHMl2zTUIBWyjcdZ3cPiJhZt58Tsmq5yyMEiXnycIaIydCF+R1BZHrR2519CjB40uDJp1lWTHT\\n\",\n       \"mwQqeoF5ONx5UoCZ2diFbY5J6GpFPrMP6cwnfJ0Xy6sm9gN6Tt01wyYPMMbDcD8rEiAbxXj9pW71\\n\",\n       \"9xYRfvcePgx13sHdWsMg3QEJiO07CTgjQqqHhV9WJOQxc8ducQnDPuMYJ3qYl/nJJAAW6po62Txi\\n\",\n       \"1+MEQb7FCK//iPwd7Cc1sIkQfsw1Myvz8DywPYqpWR7WJbSf7/z3eoMEXbbbGEzqXezeNo7LLGzT\\n\",\n       \"3HFot3DcZ6L1JEGak4yfKZlwM2RBqK6OTUjxAac+KNWMC/BV7IV03G8WoOzHKQbZTfjM7DssJqmg\\n\",\n       \"iwf2BOXLlzAQ+8LpC1C353gYArxBXre1imPQJOPL9dAvUUIIIYQQNdBNlBBCCCFEDXQTJYQQQghR\\n\",\n       \"A91ECSGEEELUYFfE8qrwyb9sdfKQmNzv+SRZluw6IIJh2fIyNsqESQMTkouKrYvttk1SYtPMCYxE\\n\",\n       \"hB4Qab1wIiRLl/bSuF8x3oynZ/stsZRqRuYlWXJMoozIik5Mjom8yNzMLEpdE5bwS14YOfGSTDBg\\n\",\n       
\"+LRllr7MFNLIid0s9ddIWnfuX5eRS7LE8+f3gcmnDC+kGhHSWQJ03AqvhzjFzze7hDL20EnjUYLX\\n\",\n       \"mheazcyGo1AIZ6K+58DxY1CXkYkeXh5utFEwTojMjzMf8NiNhiiy+y5UkM+bD/GYewqSWB4RK3fc\\n\",\n       \"D+XdcYHbjmhiuZvwQ/YhJmNXmrg+G5Fk/gkS9ZtE1J+bxQkMDXf+8h4ez5icGy/TswkbNFXcfZxJ\\n\",\n       \"JsAwWFp/RSbh+Mkt7Dslctc7O3YZuUbhu45d/2xlhQm+H6KYiOxu8ylJhB8NyIQpN9Fqcc88NBkM\\n\",\n       \"sF+vroSTS6bnZ6FNTl7HrslJ0S9RQgghhBA10E2UEEIIIUQNdBMlhBBCCFGDiK9I/Sf4hhFLaxNC\\n\",\n       \"CCGE+Pakuk4irn6JEkIIIYSogW6ihBBCCCFqoJsoIYQQQoga6CZKCCGEEKIGuxK2ec8D7wzK3jRn\\n\",\n       \"K5NT/AuJJB9HZFtOD6OrXRP9PXaBcR9634egzX333Ytv51Z277SnoM3Zp16Auj1H9gbl1iy+7ur5\\n\",\n       \"S0F5roVtIhImOnCrficRdoUPfuhBqPtffyr8fEWJ4YL5EIMtSx+SGWHoWtLA0EN/7FgwapMEuEVV\\n\",\n       \"uA9xhfv0wff/DNS9+6fvD7dDgu763S7UNVzYHVuRfsxWbHdhe2VBwlppKGDYicsCwygf+vCHoe59\\n\",\n       \"D3wgKOeDLWiTk5XWi1FYx8L3Gi0MqE18qGqCAYBRjK+DlD4Snvj+D9wXlH/yvndCm0aCxy5z13FG\\n\",\n       \"AkDZCFS6QMWCHPOSZLqO3TmNfTilmZUkNPNDDz4QlN9z37uhTUbSIQfjMDiwMTsNbTaubUJd7EJH\\n\",\n       \"j994EtpcevwpqFs9dyEoLx85DG1aczgu3f/enw7KP/1+HG+SGK8ZP6SzIN1yjONS6dqRnEmrYjw3\\n\",\n       \"ib/W8GWWkn72wAPh+HL/Pe+FNhura1A3vW85KO85dgDaXHjhuaBc9HrQZmZhAeoGLvw2KUmItQ+H\\n\",\n       \"NgxsfuhDH4Q27733Xfg6N5xFpO+PCjx/aRruQ4uEiSYkmDh13yvbm+vQpt/Fvt9sh2PQh37u56DN\\n\",\n       \"9dAvUUIIIYQQNdBNlBBCCCFEDXQTJYQQQghRA91ECSGEEELUYFfE8gjMblckVrcXaf9oS64RtGDx\\n\",\n       \"6H5VbiatMrxYyhgT2XRmai4ojza3oQ1b/X12756g3N1ACbF0K5gn8ygTDnMizrvPXE52CCxqhOJe\\n\",\n       \"M0FhtDOPUnVjKqxrz+Drsia+LnYna7yNK9mPiQjtVwbPe/g6xqgftpuewVXAp/btg7qta+G56W6g\\n\",\n       \"sD013YG6tBmK1nmO/Scf4grjjUb4uoTIoBQnUadTKB0zwb/fDf/eKke4T70eCvdNJ5tHEUqkcUZk\\n\",\n       \"+sz3hUlWkcdOHJOJCIlrlhApuCSyq7/8ic9MJ6TAeIYDkJXewCWwOTIFmTDht84mAfS2sX/uWQ7H\\n\",\n       \"qakOCv9rly5DXVWE77jgxGgzs60BjnmesZvsYmaWNPErquH6Z2sa2/h9MjMr8vBYlQW+35i8zner\\n\",\n       \"yHcgM/5F42i0yfiW4/utu8lCB04egTbTy/NB+fyjV6DNVGcG6tpT4bjbW8drNiKTfuJ05+uPXA5W\\n\",\n       
\"uMk0VYnHvCL9s9kKr9tWE8ekYoTbGjjBfrCN/a4ii6akZPuTol+ihBBCCCFqoJsoIYQQQoga6CZK\\n\",\n       \"CCGEEKIGu+JEwRNJ5zJRPYc6UX7D5ME086RcO3C0rrcT5Fmqh3lTHfcc+twz53DTJFBxel/43Pvi\\n\",\n       \"8y9Cm9S9XauNHsNWF10qS8NTn07oRM3Mhs/ZkxY6JxlxajIXJugD7MzMRhsYjDZc3wjK/bUNbJOj\\n\",\n       \"n+PDPSuWgkhouHDPq5evQZuZOXQNZpaWwv3cRtegIF5W5A5VShyQisT7DdxnbsQTSBlmVjlHISXn\\n\",\n       \"KiZums8ArTLi9fVxP4fOUYhj9NfSEj0wH3KagCNFIGGmEfGPvFxElAy2KfMj12S+pVns3oCON2zs\\n\",\n       \"gjZkLKMfL2yXE3mrIM7X4v4w3Dcivs610zh27T1yKChPL6GXeeU5vI48gy72qSLHcNb+IHRhWsTd\\n\",\n       \"ykgQY+nOexnj2OVDZc3M8irsn3GJ26bn1NHo4LXWIMHLF596NiiPujhu7DtxPCivvHgG2qyvXIW6\\n\",\n       \"xUNhEGqDfF/kxDXKqp1/b8nJGOsdMxbk2SYhvbMzoZ/Hju52j7iwzpkd9TGENJ0i308N7AuTol+i\\n\",\n       \"hBBCCCFqoJsoIYQQQoga6CZKCCGEEKIGuokSQgghhKjBrojlYGRCZiZqZCUxKCOWPgevmzS408Ec\\n\",\n       \"9QnkQbaydObC/K6SwLoDr7oR6qackL5+9gK02euCH2OyIj0LkGuCrzmZmFy41bwTdPtsvIki5MgF\\n\",\n       \"Yg43Mexvaw0F+MFW2I64tZZMoZzZngtDMhsdDLpk+DC6NgmjvHYRg+22XbjmzNIctDEiL44GYchq\\n\",\n       \"K8YJBi3y+WIXJjoiMihj7MI8E7JPWRNFz45b3b7XI6u/k+tjUIaC/XiEfYMFRlbuGs0muGZjEoLI\\n\",\n       \"Ajj9ZJOYjCMp25aLE2TXVeUNfMPJJuxM5UT0xvdnYxm2S5Pw4h6QsNYYBwBbXF4MyutPoqy8ehbH\\n\",\n       \"rjvf+Pqg3JzF/tongbgeFrbJRqXxdigLj8iEjfY0Xrc+UDEhk3nSBk5gGOfhNVqRyUPFBBMDygb2\\n\",\n       \"l30nj0Ld1z/9cFA+/+gL0OZ1t4ffF0tHDkOby48/BXWDrXBiTnsax6kix+MyybyHfIyfzwfZsolP\\n\",\n       \"LRIK2nACen8LJx1111ahbuj7BgkFbs1j38gUtimEEEII8Z8W3UQJIYQQQtRAN1FCCCGEEDXQTZQQ\\n\",\n       \"QgghRA12RSwHF9JZa0weplIlJAiTNjTE3L0BseaoRzeB3MrEudFmKLutE0nuO04dg7r++mZYvoZp\\n\",\n       \"3dM3nwrKOZF0C/JpYifcRhOIrWZmLXdyym0ikRPRs78R7ntvHSXyERFLIyc+T82jCDnthFgzTAKO\\n\",\n       \"DGVJxtWL4QrqC4u47ZvuvBXqNlbDc7VKPl9F1jnvuWO13cWk86U9e6Auc2IwS8pn+ATfrIFDQJGg\\n\",\n       \"ZBmloXA7NYPHc0SkeO9Zj0n6ekkStctxKISOJvh4bJ4Jm5ASp06mJ5HlMXld5T8fWZF+NMb09dIf\\n\",\n       \"BCKtFyx6HHaAjVNkWy5uPXditJnZ9Pws1DXSsE9deOY0eT9kr5OjBwUelz4ZJxByzMe4rdLJwv4a\\n\",\n       
\"MjPrb25CnZfGWx0U4JstFMsjJ0ez1Q/YBCbP1haO33e9+iaom10OE98f+9SXoc2r3vbmoDx38BC0\\n\",\n       \"WTlzFuoq1z8LMukgIdcDu448bIJG5r4PWySdPE3JKg3u+u+TY7dGEtn91ZCSc9yeRpG9YssWTIh+\\n\",\n       \"iRJCCCGEqIFuooQQQgghaqCbKCGEEEKIGuyOE+XcInieTJ8v73y/R40l5jH5KtaGhnTuuAvWmcLn\\n\",\n       \"rVfPnA+308SAw0M3HIe6Zz/zpaA8HqDbMLdvKXyvbVwJnS31HiXh8awKfDbO2LoaBk0O+ugj9LbQ\\n\",\n       \"R8jdittVgudzamke6jpLoZM0NUuC0shK6PkwPFYFCV1jLO0N/aPHH/lDaPPik09D3avf8MagvHgS\\n\",\n       \"w1N7FfodW93wWF0lHsPF8+ehbs/evUG50ZosLK4Yhccl7+Eq51mEw0KahQ5N0sBjbgnxzlz4a9HA\\n\",\n       \"/Rz2MHi1cl4P81A8jYQMZ0Tl8IGxFdlvFqg4Hod1BRlxKuKFlN5TJP5hMcHnY64ok5RK72qRwNG5\\n\",\n       \"ebzWxpuhj3ftLPa7hf3oCM4cCK+Zi1fRVTHivXnKAn2yiPgyFocfejzGcXHYx3DPsgpd1MYm+k/M\\n\",\n       \"2Wl1wrokwz48idN27TIel3gaQ09f+33fFZR/7aF/Cm1Of+nRoHz8e14NbbIZ9N4S1/dKw2NuJfbh\\n\",\n       \"JMNjBW1IcG/LhVg2SKhlQlzKfBh+j21cuwZtBtvoj07Nhf16eg4dWupETaAkXg/9EiWEEEIIUQPd\\n\",\n       \"RAkhhBBC1EA3UUIIIYQQNdBNlBBCCCFEDXZFLPcuJLiRE7jg13nlRPhtTbqVSfIMSY6eXb0SCoV7\\n\",\n       \"TxyBNlMkpPPs10OBeXqOiNf7QqnzmT+4gPtEgiZTZ8mPi8nCKDfXQzmTya4FEXxbbj9ZQOb08hLU\\n\",\n       \"zcw5OZI7QMveAAAgAElEQVSsFD4gEmkxDoV3JvwyFp2w/Y6/9hehzSf+31+Hut/45/93UL75ltug\\n\",\n       \"zeFbT0HdvlM3BOX9R3E19heeeAbqrp0LzzMLBWUU4/BYDbaIUVliXdv1/WgG5cwoQbE0Tl3oaYt0\\n\",\n       \"GHJqxi54tSJBrLAZkqxbks9SuTf0ormZWVng68Z5WEd8bcsyJqmHrytJHy6GO38+FhxcsOuvCj9f\\n\",\n       \"kpFJHG2coLF2Phyn+ls46WAvmQCTNMLPvHUNg2aTaufrb3YBReiYSP9+YlKR4/nrdXHCy3DoBfSd\\n\",\n       \"A5zNzAoXHpqwcMgJvhtyEgr64nPPQt3tb31tUD74rz8ObR77XDjp6OhrMQB4ahq/LwZrK0F5roNi\\n\",\n       \"+9YGjqdxsrN5ncR4XCL3BRGRb/JhHydD9a+F+7m2imK5kfMw675Xmh0cp7IMP3OfnJtJ0S9RQggh\\n\",\n       \"hBA10E2UEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRgV8RybyODbEYkvYqlbk+QIF5RedAnpjNpDjfO\\n\",\n       \"pDjPaISCaG8QJqve/udux9et4SrVl144F5RPvf5V0KZwJnt/HdOf9xCJu/Sro0+Y2NpxAnN7HgXV\\n\",\n       \"1mwH6pozoeSYtUlyLUm8HQ/C/dzcXIc2/S4m15ZOFo4TlAkZD3/sU0H5zX/xB6HN3//FB6Hu0//m\\n\",\n       
\"Y0H5q//fJ6HNlX/3Gaib+srXg/LJ19wBbY7dfBzqOm518rXzl6ANw0vAYyI097skQdzJ0GOSsJ1O\\n\",\n       \"YYq5/yutIpMO4hj7gk/U9zIxg0/8YGJyWOdFczOzggjp5icnsFUN2BjhdoxJ6+ZTzQkVXUUB9z1y\\n\",\n       \"nydtYN9nh/PahctBOWnh6xaP7Ie6vlu1YHMDx7JGc+fE6wZZySFm/cXP3iH72SCfuXBp3ew7hU1E\\n\",\n       \"8GNJRGYPRczwd7RJyv8Lj+LqBzffcWdQvuvtfw7aPPLbvx+ULzyOgvr8PhTLz6yEwnZJ+n7awGM+\\n\",\n       \"Knae+FCRa81PPBqPMF1+sIVj+upqOMlhlOOKE515TCNvTYffPVkHJ2yV5MsuH0osF0IIIYT4T0qt\\n\",\n       \"m6goio5EUfSpKIoei6Lo0SiK/v7L9YtRFH0iiqKnoyj6eBRFeCsshBBCCPGfAXV/icrN7Ceqqrrd\\n\",\n       \"zP6cmf1YFEW3mtm9ZvaJqqpOmdknXy4LIYQQQvxnRy0nqqqqS2Z26eV/d6MoesLMDpnZD5nZd7/c\\n\",\n       \"7JfN7NNGbqQq/6DUpchx94jc7zlHgPlP9OG/ex0LsaN6xQQOVn+AAXVZJ/QBllzwpJnZmcfwmbZf\\n\",\n       \"2f3QrTdAm7WrYQjZeIDPjttT6Jx0NzeDcmNCZ6izGAbixW10HSKyyvnIuTeDPh6nXhdD17or4fPy\\n\",\n       \"UQ+D4DLyDL/h/aqYrFZO2LsvDNv8Z+/5OWjzxi+9Heq+70d/JCifevWd0ObFRx6Duuf/4PGg/PiX\\n\",\n       \"vgxtLp07B3XH77g5KE8voB/AaLpQ13FCgiZz9HN6g/DcjEp0JKbGZNX4Zvh+SUquY+L1JP66jSf4\\n\",\n       \"e4+oRjkJtjR3XflAQDOzkoRDFs5tGo1YSCfWeZWJOjUTdM+KHAPmSfkBrZHhNZpvoke47VymhYMY\\n\",\n       \"fuuvfzOzTedzjnrovbSJ7+RhYY1s/PbjNfOYwJsys9SFLLJjR52oMnNl4qbFO0ul07MLUHf14nmo\\n\",\n       \"O/vsc0H54B0noM2Lj4dtVsgYcXDuJqhrt0KHdTjAjpexgFPbuYMm5Is0dj7Z2Lu4xh1M364zi/1u\\n\",\n       \"dhH759RcOA7GJAB4RL57RoNddKKiKDpuZt9hZg+b2b6qqv7ITrxsZvu+1e0LIYQQQnw78i3dREVR\\n\",\n       \"NG1m/9rMfryqquB2snrpNr/euixCCCGEEN/m1I44iKIos5duoH6lqqrfeLn6chRF+6uquhRF0QEz\\n\",\n       \"u8Je+/lPf+4//Pvo8aN29MTRurshhBBCCLEr1LqJil4KbfklM3u8qqpf+Kb/9Ztm9tfN7CMv//c3\\n\",\n       \"yMvtTW99s6vRD1ZCCCGE+NNF3V+i7jazv2ZmX4+i6JGX6+4zsw+b2a9FUfQ3zOy0mf0l/vJQ+oPM\\n\",\n       \"MyYTUqvb33wR4Y9K405Ip/dwWEklTgcTpuf3LAfljEirzz99FurmDoSS89QSJkZcfu502KaN4WI+\\n\",\n       \"fM/MzHzgX2OyJ7sj5xwXm/h5bQMFcR9YlxORj63GXpXheZ/KsMu2SMhj6U58FU8wK8DM7vr+u4Py\\n\",\n       \"oRuOQZvf+qe/CnVPPPxIUH6V285L2zoEdXf++e8KyidXUbJ88fGnoO78My8E5QXXV66HD9KLiURa\\n\",\n       
\"ZSiWj4ZuwgIJhxwN8bzHTuLOmtg/YaKJodgdk2vGkzDxmpx37wUzmZiGz7rrPyXychnh8SwsPAYJ\\n\",\n       \"EenLfOf+yQ4ByTy12I2fMRm32ASNOA1f15nGyQpjIlV318NJKikNcNx54krFDjqrcgJ6FaE8XBER\\n\",\n       \"uvQTkchxGdMg1LAYkWOQTDB8Zm0cpxpkEs6V0+F3wcJ+DDjdf+PBoDzcQpm/v4ljSeYmEJVj7AcF\\n\",\n       \"C5+eZPyk11H4OiaWj8m478f0rIXjRnuaTGRxn2/Yw++Zfg+/s8Zs3yek7uy8z9v1faq31d4bIYQQ\\n\",\n       \"Qog/JSixXAghhBCiBrqJEkIIIYSowa4sQOxD1fyz6pi4DUmMroF/YM6dJXzW6duxLTMm0d9L4orM\\n\",\n       \"L4ehYN2ruODixrWrUHfwptDH2SaLG29uhh5Kq4PP3QcDfF7ub5+jCZ2hyi20mZCFN5nbUHk3hpzP\\n\",\n       \"Btn3RjN0BqICj29JHDp/rqIJfDYzs8/8Xrhw8C2vwgWB/8cHfwrqTn/tiaB8+exlaPP0pRWoS1uh\\n\",\n       \"zzFLFovee/wI1PWc7zCacAFNCDQkUk2VEOcjdYv2EoegIIuUQtYl8Z/SDJ2WyPUP5k158gHzGrCf\\n\",\n       \"eaUlJx4MC830KhNbSNiPZWZmuXM+UhYcWk0SBsv+5iULs3tniISnjvvYX2I3BjSIX8k+Xzl0C/Sy\\n\",\n       \"0MwJ/l73x+kl8PzFLpm0YoHK1HFxIcv0e4YFmrptERfuOqtfh+9HXtZoYRDqcDscr4d9HL9bc+FY\\n\",\n       \"WeX4eYc94qa68MksQ1dtPMb3YwGxnph891VuvC5yHCPiGI9doxUuJBw38buhNY2OsF8EeTgk332E\\n\",\n       \"Cb/++Gvrv1QIIYQQ4s8uuokSQgghhKiBbqKEEEIIIWqgmyghhBBCiBpEkwRIvqJvGLH4SyGEEEKI\\n\",\n       \"b0+q68xu0S9RQgghhBA10E2UEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRgVxLL73nXPUHZrxA/NYvp\\n\",\n       \"pJ3paagrXRLvaIgrUufjEdT59PPRiCT6kpWsW60w3fWhBx+CNu95z3uhbuQSw6eXlqBNd+US1Pn0\\n\",\n       \"1X3Hj0GbZ//wq0H54PET0CZJMRF65cyZoDy7D1cK/8ADD0DdPfeG546veo51SRImj0+xdPIWrmg+\\n\",\n       \"HoXpxOx8jrYxmRf3Abv6R37+w1B33333BeW+W6HezGzxwD6om94fntPTTz0FbcZd7J97Dx4OyiVJ\\n\",\n       \"Wh6TFel98m9i6Dw+9BB+vvvvvTfcDptY4qO5zSxyccsxacP6ArwO3w0ToY3lcCMf+siHgvI9P/G/\\n\",\n       \"4rbJ68qxS1EmKfjFCPsZbo0kNJPjGaVh34tT7ItpE5Orf/6jHw3Kf//v/Ti+HzlQ/tywU5ykeCbS\\n\",\n       \"RljX7mBiuR8Dzcxil2Y9HmFKdLOBn/md7/zpoHzfu98NbSLyAf1RZ/2nSfYzc6sf+IR2M7MrF3Cl\\n\",\n       \"gY2V1aC8/+ABaDO7gCsNvMd9nod+/h9BmzH5zhp23WoEfRzf+tvdsA1ZsSAjaeid2YWwvIjfRUkT\\n\",\n       \"j50PI7//Xbhqw/vux+++Vjs85hnpB2x1gKG7/gYktX2bJLIXbgwqxrjttIHfM3ESftf+wkd/Edpc\\n\",\n       
\"D/0SJYQQQghRA91ECSGEEELUQDdRQgghhBA12BUnqtkOn7keOnE0KBd99BHOPHsa6rbcs+NmB58B\\n\",\n       \"t6bwuX7kjAv/3NbMbGF2DuoGvS7UeZIUn7MXzuNJG/jM2XsaZmaRcxvac7hP3Y3wuXBzCl0j5nyV\\n\",\n       \"btVv5mkwmu3w2XFEVg8f93Gl7rVLK0H56gD3aWYeV+VeOhi6WguLC9CmIsdlexC6BsNuD9owOjOz\\n\",\n       \"Qbns4zP1xz/7B1D3nT/0fUH5NW+9G9p8/rd+F+ouPH86KB86ehTaVBGuZJ+1Q89t4K6F6+H1mIq4\\n\",\n       \"TWy5+dKJNd51ut7r4P1Zkxg/n3dhqnLnjN4RWyGetBs734IHDmNd2giPeZLiuMFcLu/6xRk6ikmy\\n\",\n       \"8/VXEueMKHRWxeGnjoj7w857moXjUsbcrQRfV7ljRd6OemewHdIkIn0/cXnN+QjPe3+E13syHx73\\n\",\n       \"uf17oM2ew4eh7plvPBaUzz3/ArRZ7u783VCV+AGTBD+fuWOckO+LRh5+1zGHLyqwD/s+xC7ZhJxj\\n\",\n       \"I86lJx8TF9bVee/OzCwjPmDD+VyNJp7jrIHXUb8fumF5TpyoFI9nFNf/PUm/RAkhhBBC1EA3UUII\\n\",\n       \"IYQQNdBNlBBCCCFEDXQTJYQQQghRg10Ry2Pnuz36xTAw8tL5i/Caxf3LUHfi9puC8rSTgs3MogrF\\n\",\n       \"vdSFww1JONzFFy9AXVWg3OaJiQDrw0SbMx1s08XgsKQVSvEzCzPQZv1SGNLZIGL5cIjScTF0n2Vn\\n\",\n       \"b9fMzFrNcJ9as/hZ0iYe85m9ofx9hRzfs889C3Wnn3o6KC8sYz84SGTsqcWwL7TaKC8y+nkoJp54\\n\",\n       \"1Sloc+7xZ6Du3/2L3wzK/8OD/wDavPZ73wR1X/n4Z4Ly5voqtGFybWcpDPdjEwoYfltEa7WSOuNh\\n\",\n       \"ZUmCJpn7nTjLmG3bS+tmZrEXWXf2Wq1BRGgmwEO4H2mTEvk7c4J4cwrFci91m5mlsF941Asi5Xpy\\n\",\n       \"Ig/7QF4zM+8qJ2yYZ/a3Ow5pittukKDC0hnhETHEK2bAOxIWxEik8coJ0xUR7ocknLG7tuXaYNDl\\n\",\n       \"Ta++E+pe/71vCcpPzOM4/OzXH4c6gMj8jRb2s+EwrCtzPO/jPPx821s4xsdkskLkxu+IXFhNIrKP\\n\",\n       \"xxN8QbBr1H9mItJHJAzaf0fHTAaPcJ/8JJHYsB8YuUZjEj47KfolSgghhBCiBrqJEkIIIYSogW6i\\n\",\n       \"hBBCCCFqoJsoIYQQQoga7IpYfvny1aA8vSdMqn7H274LXrO0by/UrV6+FpSvnUEhff0yirpXz4Xt\\n\",\n       \"Nrub0GbvEXy//ccOQp0nivGQ+iTl6XmShr6JYnmnHcrCbSLOD5xQ2CTp62sbeAz8CuZRMoG5a2aD\\n\",\n       \"9VBybHTYaumYEr/vljAJ+ObvRIFz5TyuoP7sV0Nh89Kz56HN048+CnULe0MBfW4ZVytnbK2tB+Xk\\n\",\n       \"DhTL3/jD3wt1//y+XwjKX/z1T0Kbu//y90Hd/hPHgvJwC5OWxz0US7tXw3O6cBD7KyNx6eBjIvzS\\n\",\n       \"hGsnbLKU75gkHXuxu2Kx1Ay3C/EEaehRhm2YOBs1w2uEpRVnZCJC6la3j4gIzXczrCxIijITxD1j\\n\",\n       
\"lr5MUpv9TjAlmB1PGBNIm6piEemujvSfCQLnLSJyb6uJE1cKv9oCOX9Fjv1s9WI4vqycw++L7soG\\n\",\n       \"1L3qLd8ZlO94/euhDUuht3/5q2Ebcn3QNHKX4F0OUY72E0TG5PMaqfNp/f6cm5llZPJARD6ehx3z\\n\",\n       \"wn28MUs19zPNzKyC71Fsw75nWm7lD3b9Myo20WJC9EuUEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRg\\n\",\n       \"V5yoY7ffGpSX9oa+ysYldHi+9vF/A3Xrzq2KyErhwwo9gsM3Hw/Kb/zhu6HNzPw81J1/9izUeSKy\\n\",\n       \"jvvY7ddUB32LnIRt2sF9QdE7GWZm414YGJeSZ+wV80IK/+x4MjbXw9XK003sQmsX16Du6rmw7tAt\\n\",\n       \"R6DNoRtPYN1NJ4Py6kX0GE5/43mo27gS+nL9PnpvjKE7Dy8+9Ry0eesPvwPqbvvuu4LyFz/2e9Dm\\n\",\n       \"5jfcAXXT06EfV5G/azqzWLd+/kpQ3t5YhzaMyskpRYnXRxSjnzORE0U6kXdoIETT+DXjNz9JFmxM\\n\",\n       \"rg/my8xMh302aWBQaZyREEvnvYzH6KqVYzye3gdiuZMxOeawHSIWUbfJBRwyT4sdl4b7fD4o9aV9\\n\",\n       \"YOcqrCvGxI0hgZiefIjhl1lnGurm9iz4PYA2w+0+1OXu3DxPAjK/8bkvQN2m+555/Q++HdrcePOt\\n\",\n       \"UOehR4CFs2Zh/4xIQGXpPjMLMy2Jv+aPMbuOfT8349ctbJuc99R5fH1UPq0iCbxlFR6XlFyPLEw0\\n\",\n       \"dQHVzMVjn3niL0CCfokSQgghhKiBbqKEEEIIIWqgmyghhBBCiBroJkoIIYQQoga7IpZvXAil2Cc+\\n\",\n       \"+6WgvHYRQxdnyMrZh0+FAY6LJ/ZDm5tehzLv/gNhaOaTX3kM2nzm33wa6kbbKD4C5LZ0XDqJm4Su\\n\",\n       \"jchK3Ylb3ZqJgj50jQq/bNV4v60Jwgxf2li474MhCpzjHq68fuV0KISf/sbT0GbfDXj+Dt0ayub7\\n\",\n       \"jh+CNjfcdRvUrV8NZdCNyyvQhrEwF04oeOLzX4U2PnzPzOwH/vZfDcr/+G/dD22efvgRqDt8ZyjO\\n\",\n       \"r/Xx2LVJ3+8shfuZ9/E8MCo30aIqUQYt2YrtrnskRHaF0EXD/sgCHCOyun1UeQF+55DORgePEwsO\\n\",\n       \"TRIv7qJIW7AQUrefMVl9viQCder2oUqYYryzeE2hoZnhfmYkkDMj8nCSuuNCZXfsG/4zs0DFfIT9\\n\",\n       \"GiATg1YvXYK6YT8cK/cfPwxtDt94A9TdcGcof+85tA/afO2Tn4W6J7/6taCcFzh54HXf91ao87Cg\\n\",\n       \"UjbqVu6csjBYHyLJxHI2g6F0+16M8bwU5PNVE9wqDIf4HWbu+ykrsN/R/Fb3RcoE8Thi10y4n1WE\\n\",\n       \"+12Ray2aaOoKR79ECSGEEELUQDdRQgghhBA10E2UEEIIIUQNdBMlhBBCCFGD3RHLXQLsvkPhCvSv\\n\",\n       \"fdsb4DWLh1ECzGbCpOEkRmnt7Deehbpfee//GZQvPP0itDn12juh7pY33A51HiblNt2q2AVpkxOv\\n\",\n       \"LXGrxI9Jom+rFW6730PBmKYhO3mRpSEz5vcvh/s0IqnNAxQT2+3wXK25RHEzs3OPvAB1F544E5SX\\n\",\n       
\"Du+FNnuOH8D3mw/frzM7B20YSwfC7b/w2DPQ5pP/6jeh7r/7yR8Nyq//we+FNqcfw76498ZQio2J\\n\",\n       \"mDzs4nlPndRcNScTk0GqJvIpW22+9BMWiBHL0qy9jM32ksq1XvQkcq3Hi9FmPNG78FIuEZoLUucF\\n\",\n       \"31aG75dlO0vVPjX+pffDawZgvj+T8t1njskkgHSCYxUTKZ+NEpDITia3jEiatafRwH0akYTryy+E\\n\",\n       \"48TVC7iKweYtp6DujjeF3yvf85f/a2hz6OQxqHv4tz8ZlM8/fxrafJ0knXtK0qcsIhOB3PWXkFUo\\n\",\n       \"otSdU3auSBI4zCdiE5FIXTHBvCOWOO8T/POsCW3yHI9L7iYntNq4ykejid/3/nJnSfns2k4mGF+u\\n\",\n       \"h36JEkIIIYSogW6ihBBCCCFqoJsoIYQQQoga7IoTdey2G4NyZylcqbs/GMJrHiWBmCsvXAjK577x\\n\",\n       \"HLS5+NxZqLvx1bcE5R/94E9Cm+XDe6Du7Au4LYB4IZlzJ4ohfr6MOAqp81d63W1o05oNj11Ojl1K\\n\",\n       \"7pUT73NM6ESVVfjcu9HB5/XZPK68Pn8wPJ4HR0ehzdbKBtRduxQGs442utDm8lN4XtpuH6YXZ6EN\\n\",\n       \"ozcKj9+tr7sL2nzls+g/PPq5Lwfl7/jeu6HN6nkMkd1cWQ3KSUJWNC8wEK8sXPAjCYxkpFnoFiRE\\n\",\n       \"02BhdF6TYguhG/FzvPBE/Se6qrrzc8jrPN6jMMOgSzMW5Ef8pwL3KXJ9Pyf5kWMSODjsh69jwYhl\\n\",\n       \"NYF0QvyuhPhr3mVibVi2rg80zYmmRQ4nOCYF9Z92Hl9Yi9nFBahrOr/y4ot4/f/BpzA088XnQpfq\\n\",\n       \"DW97K7Q5cvutUNeeD/fh0c/g9X/hueehzuP9IDOzgnUidz2k3n8ys8x5UuBImVlMrja/LeYHjUgw\\n\",\n       \"atxgwasOMk75EOmcXR8DdKmyZjhOjYZT0IYFcCeuY4+Ib0U0NEtYx54Q/RIlhBBCCFED3UQJIYQQ\\n\",\n       \"QtRAN1FCCCGEEDXQTZQQQgghRA12RSzf3FwPymdeCMMue+soD+fbKKS1nST36rtfA23+2rv/FtQd\\n\",\n       \"vfNkUH76saegzWf/7aegrreBYjfsJ5FGUxe2OepiICYNVHOyW38T339qPhSmmbResNA1H8BXThD2\\n\",\n       \"Z2bDfrgPgx6m4flVyM3MYifJt9ooCrb3z0PdoYVOUO6TY9frbuKOOkm2u7rzuTMz294OP8/yMk4w\\n\",\n       \"OHbTSah79NOhbPodb38TtDlwM75u4KTjzgLKkiMiXvr+MmZBfoRGMzyeIxL8Oi7IauwQfomvY+qp\\n\",\n       \"D35kQjMLZ5wg2xNgIYElkZwTFz6bMKm7REk2d+dqOMLzMiTXAwR+RuRIJTsPxQkJzaTt3CQVHxL6\\n\",\n       \"UiU5Vi6YcJyzo47j28jJ0TkRqPlMhJCcnKsow88842TzqWkcSy667xQzs7NPPR2UP3H5CrS5/Y2v\\n\",\n       \"h7qjp8Lr9shtt0CbSeYFsF5ckAkMGM5KJiu5kOVWB4/BuI/fBf56LMjsASq7T9A/Gw2c3JKPwu2z\\n\",\n       \"a2Y8wDE9dWPeeIxjEpmDA8J96dNFX6qFGjYnZlL0S5QQQgghRA10EyWEEEIIUQPdRAkhhBBC1EA3\\n\",\n       
\"UUIIIYQQNdgVsXzQDYW3mU4ou+7bsw9eMz0/B3WdxVCmS6dxheizly9A3Sc++NtBeePyOrQ5cOgA\\n\",\n       \"1B08dBDqPBERblMn3G0TEZolanvZbbCF0qqXLIcksbyK8bhkLScBTmZGWgbyIBOMUQYdjkIxsLe1\\n\",\n       \"Bm1G/S2oi5Nw37M2fpZOA49dNZ4kRRlJ3MdZW8P9PHADpq1feC4UWc89ien5nXncz83RtaAcMQ+S\\n\",\n       \"OLmVS5iPJ0iENjNrNMIk4JhI61XBxODStZlsRyt3PTDPsyKSc+QkXCafw3Zy7PtsBYGBW22+JCJt\\n\",\n       \"nqPI6hOnx+Q4lUQUzlrumDMjdgKzNSWrGmREvPYrJLAUfC8YM8oSr5mSfGZ/bXlB3cwmEsvjGD/L\\n\",\n       \"mEjq/V44SaTdbkGbozffAHVTM+H3zOrlFWjzwh9+A+o2nIB+4PgxaDO7tAR1Hn/NmpkVNMXc9T1y\\n\",\n       \"0WRpOA5Pz+CElCHpU74vFETYHvVR9I5Jv/Z03PE1Mxu5iU7JAPtdbxvfr3KTWwryuj6ZaFG5CUtx\\n\",\n       \"So4BSXdnE1AmRb9ECSGEEELUQDdRQgghhBA10E2UEEIIIUQNdsWJgmen7jntuMLnxJeunIe6wfnQ\\n\",\n       \"ZRh1MVCxGuGz3IN79gflW267DdqQfErrb2MIqIdlHjZdUNlgC58BZyR8snJBZaMuvn/qfIvxCI9d\\n\",\n       \"M8PT7I9KRfwHhl9tvmQr0ke4D6n3NNiq2RXuQ1GE/kpVkSA4EmLnAwbLyXIKLXGugXe5zMy2ifKx\\n\",\n       \"5/DeoJyT1+XkuX7aCsPhRmQldBbE6l2KJJns76HEOW0x82wM369y1yTVXpgf585zQfoLe51XaCZZ\\n\",\n       \"ZH1jBcMTfeiqmdmg13dNsL+ygMPIOTut6Wlok6QkaNYd47SJDs8kcaIFu0aZquIcmjjCc8xWsodd\\n\",\n       \"IB5TTII7I+feMLepmsCJihIMa2Tnz7tF21s4Lran21B30LlMM3Po2W6trULdcDOsu3wa92luGR1a\\n\",\n       \"hIQeE0/KH6pihO+XO6+PuVWsX/sA52KE482AOFEZjdINmZllTlR4TtvexTX+XdB3obXjnIQsb+K+\\n\",\n       \"V3l4XCLy3ZeR689/P/1x0C9RQgghhBA10E2UEEIIIUQNdBMlhBBCCFED3UQJIYQQQtQgmkT4e0Xf\\n\",\n       \"MKJKoxBCCCHEtyVVxROp9UuUEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRAN1FCCCGEEDXYlcTyf/AT\\n\",\n       \"PxbuRNYMynOLy/CauaVFqJuZXQgrSIzy1voG1A1dGurGNUypHbg2Zpi6+w//t38EbX787/5dqFva\\n\",\n       \"vy8oz84vQJtRDxNZzz//fLif6+tk2weD8jRZTXxuGevWr14OyoP1a9DmZz/6i1B377vuDcosfTkl\\n\",\n       \"q2s3XDJ32sRE4STDpGyLw2NekEj4gq0sn4ft8v4A2rz/p98Ddffee19QzkiiN7MLI1cbTxKxbWbV\\n\",\n       \"BPMsSpLgX7q+WJKV3h968CNQ92P/8/8Svo4kHfdJYrH/NHOL89Bmfhmv0c7cbFBOyTkekZT9sVtV\\n\",\n       \"Pc+xzQP3vzco33vfe6FNSlZsj126e9bGfYoicv5c1/Ppz2Y8wd+fmiIfQpuiwH7w4M+8Pyh/+Oc/\\n\",\n       
\"DG26qzi+rZy7GJRjN76amS0fPoz7GYXHqtHBVRSG3TWos3HYX0iouTEn92dc/3zPu+6BNlGKG8tc\\n\",\n       \"6v5ggMdzRPpLw60KMdXBxPliiOPE1lp4jLM2pm43W3iM3//AB4LyfffdC20aUyw9O+yPTbKfLbcP\\n\",\n       \"UYLjVH+An2U8DM/VmBynkqy24PvszzzwIWhzz333Q13lxmv2q00cYa3fhzHZpxE5773t8HubTZxr\\n\",\n       \"tvCYT02Fx/zn/8n/TvaUo1+ihBBCCCFqoJsoIYQQQoga6CZKCCGEEKIGu+JEZWn4/LgzG/oVS/v2\\n\",\n       \"wms607jidum8idUrK9Bm9fJlqFu/GrYbbOMq4H7FdjOzzvws1HmGfXSphqPw2XTaJKuVE9fHP/Md\\n\",\n       \"9fAZd7MRPstNiMPDIsKGg3A/8wF6MIxiGO5TOcaVtEfEC+k7xyRrESeqgcel0XQuVQP9lUaGr6uy\\n\",\n       \"8Pyl5WRdfeievbNn6t5/MsO/RqhTY7it1J0v5oBU5c7uzZgcc0Z3M/Q7+r1taJMT/6A9Fa7QnhDX\\n\",\n       \"qD2D7kaz7V+H5y8f4zXD3KKdGGxvYSVz6MqwLiEOX0TOe1V4Dw23XRE3LYH+ieeTHU9PQY5J1ka/\\n\",\n       \"w3ezPMfzycQ+74qZ4fvlxBmqxuH2m8QxG5c7u38RuUSbTXSNet1wvN7oYh+eW0bvdHlP+L3CfLKr\\n\",\n       \"5xI7lkcAACAASURBVK/ifmXhvs8u4/dATrw+T498z1x68QzUbVwOv5+619BDG22G10yTeG/ze9FR\\n\",\n       \"nFoIv0cbM+i9Nefw82Ut4qv6Ngme47HvQxWOb8wt9P06I85Zcwq/Q/zr8hF+Pxnpi8MJzt/10C9R\\n\",\n       \"QgghhBA10E2UEEIIIUQNdBMlhBBCCFED3UQJIYQQQtRgV8TyaSeuzbmAyNkFDIfMhyh+rV0JAyIv\\n\",\n       \"v3gO2lw6cxbqet3NoNwg8uIcCQ5kAYOe0RAlTu+xseDQQYpSbG8rrGPBoXEaSqsNIts1SaCbl0+9\\n\",\n       \"pH89hi7AjQVdDono6YVU4t/amEjqXjlsEJGWCc3tTig076y1/tE+hJ+HSeRMRI592B0Rd5lsXrk9\\n\",\n       \"S4hkCds2szIPD2A04Sfc7m64Mp6rNukv7ZmWK3egTZNNFnDBgXGC11qSYR+K87AvFMXO4aUks8+6\\n\",\n       \"G3hddTfD678ggaM+rPXldwhKWYbnxQdBmpm1p8Nj1ewQITYj16h/d3IIGiTsNnGTYgYkdLEkwv2U\\n\",\n       \"C9dsTeN1tb16Beqqwl8zeI6rCcJnfSCvmVlvGye8bG6G57S9gOPy0VMnoW6wGm7r3JPPQ5txiWPQ\\n\",\n       \"oVuOB+UWGWMH2zg2e2689Raom92LwdKzS6H8HZPxptsNxfK1SzipamsFA5QH264vsKBLEu5r40km\\n\",\n       \"euAY5CcrsGxhnNBAJuqwSTlkXEzS8FrL88nE8l6XTEqZEP0SJYQQQghRA91ECSGEEELUQDdRQggh\\n\",\n       \"hBA10E2UEEIIIUQNdkcsnw/F6s50KJqPByi2XXUrk5uZnX7y6aB84XkUBTfWULhrO4FyahYl2ekl\\n\",\n       \"lBVZnaciqcJeRGbC5pisSD3YDqXf/hYm3qYuDTltoISYNvA0+5TYgoimjM5suO9Dst9lQSRElwCd\\n\",\n       
\"M9mVSPnDXiiDDpwUbGbW20Cps+Vk8zQlKfGEwq1qzlYYZ4J4EYWfmUqPTKBMwm2lRCJNMlLnks7H\\n\",\n       \"TAYleJG93UEJeIFMfFjauycod2ZmoA07xnEc7mdOhG0qcbuUdi56h5y4HcXdfIj9069GwJLyU7Ji\\n\",\n       \"AUsM92TsGLju4leaNzPL853PX0lmY7SYWO7HgG183aCHwnZ7NhyHM7LafUWOS+6u20YHx9Nogr/X\\n\",\n       \"x2SSyuraKtS1psPx+4Y78LynMfbrx7/4haB84TSmhd/6xjug7uCJQ0F5ewPH4aIgArPj2T98Civj\\n\",\n       \"p6EqaoV9aOHIAWiz/4YjQXnxxiPQZunkYajrrYfj5yZZ5aNHktyH3Z1XtGBp/Z6CrQRQ4rWdu++Q\\n\",\n       \"mBjp7Br1gehxxlaAwP2KyNg8KfolSgghhBCiBt/STVQURUkURY9EUfRbL5cXoyj6RBRFT0dR9PEo\\n\",\n       \"inb+6UYIIYQQ4k8h3+ovUT9uZo/bfwyIuNfMPlFV1Skz++TLZSGEEEKI/+yo7URFUXTYzH7AzD5o\\n\",\n       \"Zj/5cvUPmdl3v/zvXzazTxu5kcrS8Hn12K2gvHbpArzfc489CXXnng2fJ69fw3CxrInPTWfmQ59j\\n\",\n       \"z9GD0GbvUXwOPTs3B3WeEVk1One+UUxC+liYWN+tVj7o43PpyD0rZr6Od0DMzCrzzslkTs28C4dj\\n\",\n       \"q5ePR+g2jZwDNSSfZUCCH/suLHFMwtPYs/jEOUnsuDB8eGhS4Lkalfj5xlH4uhYJnowSslq5C6Ns\\n\",\n       \"tNCpYSvZe6dtOEb3h7F8YH9Qbjcx4LA1jb5T29XFZNX43gCPSzIK++eItBkQr867hUPiNnkysk+t\\n\",\n       \"KfQPp9z1z8JavcNnZtZz1+OIeEWjIfbPYT/s+yxAcpLu6ccRMzMW0emdRBasycYS7zaxUMk4xb+7\\n\",\n       \"++7ztcn7JSmOQZ41Eg4ZZ3hgvAO1uIiBlV/4zU9D3dd//0tB+eR3YCDnnW9+Le6Y24UzT6N72yWe\\n\",\n       \"FGyGeDfXiOt78fkwIPrqGQyR3tpYC8rekTQzm13G47L/5NGgvHz0ELSZW8YHSM02em4e/51iZlY4\\n\",\n       \"36lk/ZwmcIYNS3I9lsSlqrzDSvpdSjxX5hJPyrfyS9Q/MrN3mtk3X7H7qqq6/PK/L5vZvm9h+0II\\n\",\n       \"IYQQ37bUuomKougHzexKVVWPGF3gwqyqqsomX21DCCGEEOJPFXUf532Xmf1QFEU/YC/9ojwbRdGv\\n\",\n       \"mNnlKIr2V1V1KYqiA2aGCy2Z2cd+67f+w79vOnXKbrvjzpq7IYQQQgixO9S6iaqq6t1m9m4zsyiK\\n\",\n       \"vtvMfqqqqv8+iqKfNbO/bmYfefm/v8Fe/wN/4S/U21shhBBCiG8TXqmwzT96bPdhM/u1KIr+hpmd\\n\",\n       \"NrO/xFuH4mN3fT0oXzp7Hl5y+RwGo/W23Wre0yiWLu1Hue7k7TcH5UM3omA4NY8S+ZhI40CFTzf9\\n\",\n       \"qupMwCvJytJVEdax4MfMCYUVEfCYJOuDH9n7M1qw0jt+lpSI0D6osLeNEnnOAke9zEuCCgc9rIP3\\n\",\n       \"ZwGgDNc3x2MUodmxiiJ/jklIZ0yC39ymmiTgsNPGOi+WN0sUxBknTp0Kym0ijOZjnCwwdGGXUYRD\\n\",\n       
\"BwvE7PbD81eN8diNqDQetivJefCceeo5qIvJCNdoh/3Th0yamZXsGnWTDnIyHoyHbKJF7trg5419\\n\",\n       \"IiehIv0uJZNU/LYiEuAak8/nJ2202lPQhtX5z1eRay1tTxB2S+To4ydvhLo9+8MQyacefgLafPnj\\n\",\n       \"n4G6hf3hmP49P/L90ObEbTdB3cO/+7mgfO5ZFL337NlZ/73pdfjE5S1/8R1Qt3wonOg0NYXX/7YL\\n\",\n       \"v9y8gA991i9dhbprV0J5f5uMw8MhTjooJvh+YEHBHv99ZYZh1GYGAyP7XovJGFtOEIibEsGfTaKY\\n\",\n       \"lG/5Jqqqqs+Y2Wde/veqmb3tW92mEEIIIcS3O0osF0IIIYSogW6ihBBCCCFqoJsoIYQQQogavFJi\\n\",\n       \"+R+LgZPZNq+F6atrV1GSy4coD0/NhFLs4r4laHPsFpTGj58KZcXWLErkwz7Kn+sr61DnYYJomoXi\\n\",\n       \"XDlGIbUgKb9Zw6VuEyHOC9NMoGbKqk8QJj48JXKr1DeIRD69iMez4+TdJEPRtCBCs09b9inuZmaD\\n\",\n       \"LvaN3lY46WC4hQIlo3RieU7S15m7GMWFr4A2XuY3M2u2QiG8KEjifILHKrZwJzKyWjljYW8owCYp\\n\",\n       \"nr+BS5c3MytcyjZLHu9tbELd+moosno528z4xAcnoGZEoPZskRULxjlex/6cMvGaXWtJIzxXvk+b\\n\",\n       \"mXU6KOq3Gu78kc8yGuIx9zBpNmbSeLLzsYrIMfcTO3zyuRmfNOIHGD+JxIxPpvHMzC1A3fQM1p17\\n\",\n       \"/IWg/IXf+Ti0SRIcY9/y34azwu/6njdCm6e/iitjPPy7nw/KjRRF74UDe6HO86WPfQrqRkTi9mnd\\n\",\n       \"M4vYz1rzYap4ZxZT9ztNsmqCO+8FWy2ArAphxc5iOfsOKX1UJOl3LK3fr0LBVqWAbZtZ5cbFiIzD\\n\",\n       \"7A1jMsZOin6JEkIIIYSogW6ihBBCCCFqoJsoIYQQQoga7IoT5cM111ZWgnI+wufE3n8yM5tdCp8L\\n\",\n       \"Hzx+GNrsP4Z1rZlwFfduF32ZDedpmZmtXl2FOk/WxNDDLAvrBmQVd+aYtJxf0SSvg0fMLLSThUO6\\n\",\n       \"ANCEpRISVi6GvlqSoZOxtYVuzNyePUF5wZXNzKZm8Lm+98na7tyZmQ1JAGd7K9xW0d/ZOTEzy9wq\\n\",\n       \"9cUIn8Uzf827BqMcz2dCvKWxc4SY98I8t9LCfUgn8GDMzBpNF5YY4flrNnE/R+44bK1tQJvVFQz3\\n\",\n       \"21wPr5mYeAwp8eOmpsL9zKZ2DhMtSdhnMcK6rWvhvve7W9Cmt4XunffcpkhAX3uaOFGzYZ/tzM1D\\n\",\n       \"m0lWkWeOEqvzvuEkQZ5mZuU43BbzD9mmEud8FcSpYfvpabXx+t+8ip7bs197LHy/MY6Lb3jHm6Du\\n\",\n       \"9d//5qC8TsbzT/6r34a6K2cvB+U3/ZcYhZhM7ezUnLz1FNT1iau5djF8v80XV6DNylNngzJz+DIS\\n\",\n       \"0tlZcH1xAfticwoDVdO4Xv+ENiUZOwvctt9WRV4Xx8TZc85uSVyufIzbmiCj87rolyghhBBCiBro\\n\",\n       \"JkoIIYQQoga6iRJCCCGEqIFuooQQQggharArYvkQJN9Q/mLBYSxIc3YhDGKbJpJcRILRer0wYGxj\\n\",\n       
\"FSXZ9asolg+IwOxhkmzkwr22N1FkHfZ3DhPtkBBEHy5WkTDDMZGcYR+JmMi4cvFSUM7JivQFCRP1\\n\",\n       \"q3AzAXduGc/xzFwY3Nki0iMLsSzc+zXSycLUOu2wv4wLlFbHY7QQxy4olK06nuZ4uY2d5JgTKXdI\\n\",\n       \"AvG85xknk8nD49ztp00mzvsgxkEP++uA9OFxHvbZhg+eNB4U6kNIm82dz1+TiN7TczgRYW5hMSgz\\n\",\n       \"ETon4ZfFKDx2Q3IMctL3Y9cXyhG5Zsqdxd2I5B2yMNjE9QU2+SNtYF/0ryvGZNsxEZjdOfWBtWaT\\n\",\n       \"/bVeEuH32gqOw2O3Xze/9lZoc9dbXg91uZsc8elf/wS0efoRDNt8zd2vC8r7ThyANisrl6HOE8/g\\n\",\n       \"d9F+sq2bvuc7g3KrhQGnTfc9w67j3jp+z/jJUcM+iu0jMt7kQzw3HjqBoQzrKjrJYeew3djYhC0W\\n\",\n       \"Ph1eowWbVEWuozidMG2aoF+ihBBCCCFqoJsoIYQQQoga6CZKCCGEEKIGuokSQgghhKjBrojllYsH\\n\",\n       \"bbZDIbQVozycEbG04UTSOEP5LB+h6Nl1KbGrl65AGyaNxkTeg30iieXm0le7qyhLstTt1nQo2M+M\\n\",\n       \"URovncjKUobTFgqNkC5LhFFG4lag9+9vZjbq434O3H5tXMZ063PPPAd1DSdQduZx8sDMIq703nFC\\n\",\n       \"8dQ0CsaM1AmNjRQvEZ4c7SRcIi/6fm+GoveAJKvHhtJj6qRKlujL2NrwafJ43iuy89suhX44IKvP\\n\",\n       \"E2Oz7VKTmVjOrpmWe12S7ixezy4uQl17CqXcKTdZoTM7C20yIl775P+cJKQPtvH6y0fhOfXXgplZ\\n\",\n       \"f2vnSStGTnE+JJNG3ESWRgOPQUISqCPXz9g+DXooHadu+0lCtl3t3D9HfexTXhQ2M5teCMfFozcd\\n\",\n       \"hTYJmUjyB//uy0H5G//+a9Dmlrtuh7obX3NbUN7q4ooM7PvCc+0iyufnT78Adf1BOKaWJRv33QQR\\n\",\n       \"kszt5XMzs0Yn/G5tke+GlFxrabzzigFsDMr9JBUygSJNSBq5H3cjHANHZDUC31/I/B6LiNxeEgF9\\n\",\n       \"UvRLlBBCCCFEDXQTJYQQQghRA91ECSGEEELUYFecqNh5Nd6TiNgzdRIYl7XDwMY4wmfAvS4+q964\\n\",\n       \"turaEI+I3F42Jgj8yxoY+BdZuO/jAT7jzvvoNvgnt351djMMa0wb+Lw3iUhAXhZ6DN5ruB6dudAf\\n\",\n       \"iWYwGNWHoJqZmQvgYyGdPeKFDZ0nUZBAvq114pi57ff76G4xGs6ra7dIGCVzm1wAJ3mET0PevFs0\\n\",\n       \"bKB74D0tM7OqCs9pzDZO8EF6PiTUzMxIf4HATyIbtDvoMkZR2K+a5PNl5PP54L6SBIDCa8i2ezl+\\n\",\n       \"vq0V50CuoJ9XEQ8tSsLP3GygT5KQgSNN3OdjDhhxlOD9I2wzHhFfxp2btMn8LuL1OS+EqDiWZXiO\\n\",\n       \"m81wzCvGeG2XxG2CfTJsQ74KbO+B/UG51cEx6LnH0a983tUdOnoY2tz6mldB3cZm+P2wtrICbTrt\\n\",\n       \"nb8b2m08dtPT2IeiyoWlkvPuSSI8xwUZg6oqHAdzch2XpG94H5CRkdBj/6qS+Z3EwYrcK8cFG7/x\\n\",\n       
\"/fw1kpIQYjY2T5g1TdEvUUIIIYQQNdBNlBBCCCFEDXQTJYQQQghRA91ECSGEEELUIGKrzf+JvmE0\\n\",\n       \"oQErhBBCCPFtQFVVREnXL1FCCCGEELXQTZQQQgghRA10EyWEEEIIUQPdRAkhhBBC1GBXEsvf+573\\n\",\n       \"BGWfTmyGiand9S2oGw/DFOPFg4egTcut2G5mtr25EZSHPVyVu8FWjXe7+eCDD0GTBx64H+q8j7ax\\n\",\n       \"ge/HfPulPXuCcquNqeLXLoUrg29v40roUzO4Sr1fvbs/wNd95EMfhrqf/KmfCMosTbuVkVTqOLxf\\n\",\n       \"j2J2/07SbONw+wlJpY8qPFeJe117CpPk/9ZP/CjUvetd7wrK21skzZ585oPHjwflqZkZaLO5tg51\\n\",\n       \"w62wL2xewzTkfIjnpjUd9uvmNL7fQw89CHV/+3/6G0G5v70NbbY3sX/6hPmYnL+UJOq33HFnx6U5\\n\",\n       \"RVKw3evSJvb9hz4YXn/33f8BaBORsSR18cQlS22nq7+HrytIo5gkJI9ztw8kMjkhx/ODH3hfUH7g\\n\",\n       \"/vdBG5Y4n7jxlNmwBUmgrtyqAuzYxSlubZiHfWN9C8fqfh9XI/jlf/4vg/J999wDbSKSQm9u1YKE\\n\",\n       \"fMK4gcdzdSVcqeLwTTdCmyvPncNtZeH220v4nVKSz/zgh382KL/nvndBm4qkkftUb3b+fKp5FWGi\\n\",\n       \"f5yS8+cuo+YUjmVZG4954d7v3X8b++I973sP1BVuZYWogceuIuN35L6PWmM8vkY+c+mS20vyvRpV\\n\",\n       \"+Dqf4P+hj/w8vt910C9RQgghhBA10E2UEEIIIUQNdBMlhBBCCFGDXXGixu75Y8Mt1Z2k6FakDazr\\n\",\n       \"rYe+yiZZjb1JVpZvdkIfqLdNXJUhrhrdYM/nHUWBz6HXroUOVmceVx3fd+AA1EV5uA9Pfe0JaNN1\\n\",\n       \"rsFNr74d2rSnOlC3eilcyZ6t6s7wK8Sz5NRxic+cc7eKO1uZnK2uPY7C/SLKiVXk/awMjx1bPZzR\\n\",\n       \"aIb9bHN1hI3IOU7S8O+RtIl9hfky/vgVJToubAX1Zjt0htrT2Kco1EXzTejy726fSH+psK505z3P\\n\",\n       \"8bqKx1iXlk1X3rl/lmw75LOMi7C/VAXxiogTWbrzUNLPS1wjV45J358k9Ji1iGKyn/5csc+X4Ovi\\n\",\n       \"JOz7VUScGtKHE+eYxBnxbKoW1MG2iWvI3K04CvtwXuD1vzCFHuj5tefDfSKOaZLh9VGOwjEgJu5f\\n\",\n       \"TvbTE5HfLCrmc7kq3vXd+5FLlr0scn2DjYtj8sLBCK8tDxkWwXeKyTGIidcXD8LvzCRCL7RKsL9U\\n\",\n       \"7npgoz7zOcuo/u9J+iVKCCGEEKIGuokSQgghhKiBbqKEEEIIIWqgmyghhBBCiBrsilgO5pyTFX1A\\n\",\n       \"n5lBwJqZ2UYRiuTbaxvQZm4vhrxN71sOyvkAJcTtdZTNJ7nl7HYxnHF6aT4oHz68H9qsnb8CdU//\\n\",\n       \"4eNBuWzg6Xrd298clJttFOmf/erjUJcPBkF5bs8StKE4MTEjgqoRSa90wnRRoUzoxV0zszQObcUW\\n\",\n       \"OQZM6fQy9mA0JK3I65zdPiYidJniOzac6D01jTJ/l4RYei942B9Am9EA65acAN+c2lncNcPAyCgh\\n\",\n       
\"nZqIl/64UDGZCMVwrbN9YjMK3Fn1QZCMioTo0SHOvR1/f1Lngvti0vGYCO0bRkQsT+g+hJRkAgWT\\n\",\n       \"jmFTZNv0mvGuMr2Od5bpmw0UtjMiY+M+kQ9T4D5k7niuDTEw9ujSCajbunotKKdtvGYiMoFp6F63\\n\",\n       \"PHUDtOkReR8hkjzrZzATgWzK9QU2LyE2MnnHlfMBHvOcfNcyadxTxXitRXF4jLOEHPP+NaiLx+H3\\n\",\n       \"aMQuNtLP/DjFRgR2Hvzr/jjolyghhBBCiBroJkoIIYQQoga6iRJCCCGEqIFuooQQQggharArYnlV\\n\",\n       \"OvnLSWslkV1n9ixAXePcxaB89cxlaDM9h6vGTy+F25qam4c23S2UFSdxB30aupnZnuXw/c4+8Ty0\\n\",\n       \"Of/Ci1A3f3hPUD71htdAm9iJes/8wTegzXgTZffF/aFcX0wkRmK6fEYSk5mkF1VhVxsTCXhcEInb\\n\",\n       \"CZRZguJnTLpx5ITmkojsjCoP3y8fopDOLprY9VlfNjOLiThb5GEacpdMaGBiuRetqdRN8EncTZJm\\n\",\n       \"XxHxMkp8EjCe4wZJac+cqJu18PpIiHScujRiL8QzmCxdEBHaJ7IzKZdppn41Auaist0c+wGO9P04\\n\",\n       \"2TnxuqzwDZlsnvidYAnN5EP7yR8JEZOZ/O3T1jOSJM3ezxMTeTgfk0kcrXDyTPcKTsqZWZqDuuFa\\n\",\n       \"OA5W5Fy153GS0cWvPhqUb57H75RLJHUboKnY+PkK82MXtvErPsQRO76kvxRu20QitxGZwDCBeF1E\\n\",\n       \"5Pp3YnlMJl6U22u4sYEbB6dwRYaC9mu3n9S4J8L9BBNXrod+iRJCCCGEqIFuooQQQgghaqCbKCGE\\n\",\n       \"EEKIGuyKEzUehx5IFYW7kRM/Z3YRn0PvObo3KF94Dr2iKy+chbrFw4eCcnuR+FZNDPzMB+gWeaY7\\n\",\n       \"GHZ56YUzQXljFZ8B77/tJNTtOXksKBcxuiNXnwk/X9VFh6dFVivv9ULnKyMhlowEnicz/wlf5x+p\\n\",\n       \"R8TvyJjXA9siTgaLVHP+AQv3nISiwA+Tkefs/tN4F+h6dX4/WdhmQXyLrBme0ySd7Pw1XMCgDwk1\\n\",\n       \"M2uSsNv2MPTVIhYcSBwFr4EkKR6DhAV+QofBJh7mDCUk7NM7X+UEeYcv7cLOvgULIfUBnBXztNgx\\n\",\n       \"8G2Il1KQfS99UGlBvCni5/jtU0+SeEs+rBScrJc2hnWwHXxdTpyvLAv7ft5Ff7WxgG6TuT6ck/Db\\n\",\n       \"pROHoe5L58NQ5w4JNK5IAC/CnLadryMWBuvHWOoxModu7Jyokl2z5HUTOFFVjL5jloaeVJqTMOEu\\n\",\n       \"usxmYbuouQwtygS/1/ylFdEAXoRl5E6KfokSQgghhKiBbqKEEEIIIWqgmyghhBBCiBroJkoIIYQQ\\n\",\n       \"oga7IpbHzuSsXLjXmASAJWTF5oM3hit1X3wOJfJzT52GuvWLl4LyNBHLm018v6IcQZ2nv7EFdYN+\\n\",\n       \"PygvnzgCbVrLe6Aua4RBiEkXRc+NFy4E5Wvnz+G292Kg4sFTNwblhT1L0IbhRV0ftHe9Og8L5IuI\\n\",\n       \"4Vu6EDQWPBcRsTxxAmxRThi26bbPHFkmUHs5OiYrmjOp2q/iPhphH2NiqQ+2TCYM25zqhH2BhVhW\\n\",\n       
\"RPAduWs0IfvEwlL9avMx2TYLcGTHeCdY4KAP1jRD+TSOUAZn++nPQ1Hh5+X+rd8vbMTez8NkcBYc\\n\",\n       \"6MVyfw2Z8eMbJ+F+5Tm5rsjf3b6vR0ySn0DcZXmjBelTmdt+3sPJNEYmyjSnQ/F59fR5aHPna+6C\\n\",\n       \"uu5WLyhXQxxLWD/zsEDHiJw/mL9A5PrYvHyOm6EBp66Kyeds/J7kaowTFO4zL3ZvXYI2VW8V6lL3\\n\",\n       \"nVy1MWyzinA8rfw1STI0Kzo5ov7vSfolSgghhBCiBrqJEkIIIYSogW6ihBBCCCFqoJsoIYQQQoga\\n\",\n       \"7IpY7uNB+068jlsojA36KNwuLIcp5kdvvQHaXD6NsvmlF8ME8eUjmFL7/7P35kG7ZHd93++c7n6W\\n\",\n       \"933vOvtotJtBCyCHYjEklIJLJrjsAipOYZykQjmUnVRYIpBBMyONFrTMSEKKMEvFxAUFruBAERvj\\n\",\n       \"KiqWTBlMygKBUQAhDRJCMxotc2e5c+99l2fpPufkj3uhdL6/79XbtCTeK/n7qVLp9pnT3ae7T/fT\\n\",\n       \"7/N8zvfEzgt/zeb407XaeMnx1B23V8s7573EvbPj5e9uv5bkfvdfvMvV+eP/+DvV8pf/ja9xdb7m\\n\",\n       \"2/6GK9s7d75a/v1//x5XhwNpyESWLOTdHKVclopLJyKHvjKwdHIipCds18hE2gL7azs/MzlLZMbd\\n\",\n       \"ZZIyvl37vrHZ1P06k4R0EoLtkuPziERoM7MOhXQiu2MaupkZjvWIRIjFmQjMzFIPx0cGjfRbn2Ic\\n\",\n       \"UG4d8ece61OJpIOjTNuSQQ4tOek9SKuxkH6QyP0AbfCp/yQNncIk+eNnn6eSPJG/nYhMLfnj20n7\\n\",\n       \"cDj+2cmeG8zXLiDAx+TXOzhcubJzz6qfw08/QgbhkLT+FoT07b5PSG8aMhsBUNjsDuTB5Aa3UOkZ\\n\",\n       \"Bw+wGkygbqAO6z9kY8d3M2vZ8a3rgVbDvk8nj3NyfKdqsXxofBr6sCWNghPBBuWwk4XP/b8I+iZK\\n\",\n       \"CCGEEGICeokSQgghhJiAXqKEEEIIISZwIk5UA7PZry5drpbT0/WymdmpS2dc2e7pOtzrzJ23ujq3\\n\",\n       \"Pvt2V/b0hXr7V558wtXpzvj9FeJAIO2Onz381Pk6SDMO/vfXJ9//EVf2gd98b7X8qY991NX55u/5\\n\",\n       \"jmr5b37v/+DqPPbJx13Zu//pL1TLV0gdxphfjjHsz8xP/k5UKjqjOboi7DfugXkF+Ns4ayghgAvT\\n\",\n       \"zrwT1S387/MGrkFP3J/NhoRRwnqLXe/GtS0J9wSXifkkDPSd5gvvgCxJG4rV7WQhnZkEI243te+0\\n\",\n       \"OfL+E5PhSqqdsjHOAtM2WKgkButRn4R0mAh9j81sTx1B8KRYEGs3JlyUhiCyhsJqmUh1ZL1EQh1d\\n\",\n       \"E9j9h/4fqROJd+a2TcpYbmef6362u+uDGC8/6QMc9+6oXdRLf+KDH9fEdzp9583V8hGpMyYclvUp\\n\",\n       \"VuhcJhIO6frCSH8NHaxCHFPq+o0Io4zZ3/95famuE70rmvf8Z2Za1L7zwD4viAfauOMZ9+T/LLI2\\n\",\n       \"9U2UEEIIIcQU9BIlhBBCCDEBvUQJIYQQQkxAL1FCCCGEEBM4EbF8d6cWc9e7tSB+cFDPmm1mdvGT\\n\",\n       
\"XhRcQEDlTbedc3XuesHdrmyzen+1fLR/ydU5TULXupYJmjXnQCI3Mzu8ULf90sf97OGXHvOS4+nb\\n\",\n       \"a+Hub/3Aq1ydl/ztb6yW3/cb73V1/vU7ftqVNdtaAvzql73U1bF/+X+7IifOEpGWB7+hID7u/R1n\\n\",\n       \"R4+BhDwaERrRzaTi5Yj9sVBQFlQIIvSwJW0iIiSKnt2CiOyd31+/rYM753MiuxO6WS2WL5ZELCfB\\n\",\n       \"r01Xr9eSczBsfdjmCkT9QATRNPhz1W/r817C8dLzNnlptWNmMoi6fU9CQgcS/AidKpFA1UD09llb\\n\",\n       \"P2YXDQtwPf7ZEsi91jKhGUT9ofhzl4ipyyRjJJM2+JBTlpB5/P2eSEhnR8Jg16u67+/C54eZ2eHF\\n\",\n       \"fVfWna2fp3HmB9McPfaUKzt1Rx1MfLg6cHWaEeI8C9ZM5FzhJaUDbkaI7C6w1swJ6WxgAutnR5Wp\\n\",\n       \"WAAAIABJREFU1hw/sKMt/j4yKMudP09hxw8MSKF+nrEA18gGWsDnCgtrZedl7MAjhr6JEkIIIYSY\\n\",\n       \"gF6ihBBCCCEmoJcoIYQQQogJ6CVKCCGEEGICJyKWF6uFzOVeLbeu115QO7p0xZVdulBLgMtdL9fu\\n\",\n       \"nb/Jl90CM0Qnv79+7VNp24UXGJHLF7yseOWpup2LHX/an/81X+HK7rz7OdXy8pabXZ1f+fGfq5Z/\\n\",\n       \"+1/+mqtzmrT76//O366Wy/x4sfVazWpp/DzvsB7xFJn8jWIgT4QmwjaIujgz+vVAgZHNzo4p6mZm\\n\",\n       \"h/u1bJqJSLslad2pr4+naYlYPvP9JUH6+Tr4wRiMDOnZKfnzGYnkjAIzE8sT84nhmmIi/NX9sVTx\\n\",\n       \"gAV+48Bq689vP0JITyRdfsbalOFaEf92p/P9ZQ4i+Sz6OrEc/yhmEnAmKeM4ZiOwm42Ap5jdM4Xs\\n\",\n       \"D73nltwzTBD3+/fHx4R7fF7Pd/25Gw788zuA/D3f80Lz4eN+ANNpmL1iu/UDCljKt9s/OT42EAEv\\n\",\n       \"F90yBpYfu/dr24KNZ3aNmY094viiEbEcBiuEuf8sGqL/3C6Q5M7keprDj7MKZHJ+2Xo0Tn4c+iZK\\n\",\n       \"CCGEEGICeokSQgghhJiAXqKEEEIIISZwIk4UBn7NwfnY2fO/mx7te+fjyhN1SObOKT8b9N6eDxM8\\n\",\n       \"d/sd1fLmymVXZ+g3riyPCDgbBr/eLXfdWi3PSZDnQLyMC5+oAzgv/u4fuDoXH/1UtfzlX+ndqud+\\n\",\n       \"xYtc2QEc36c+8rCrw8AgNhpmRoI0ne9E6qRMwgvhp/CGrJeJZ+N+QR+nhbjsUBZGOV/4stmsDqhk\\n\",\n       \"LlUiv7tj5mELYZhmZi0JqNuCE9UEf+4YqU+fcdmMh2Y6dSOzRwcL7oNZ46mjwPpLXZaJO4IMW/+M\\n\",\n       \"GIioleAiD70P++yiP745eD27M+9ydJ3vi3Pw3BZzEmY6ImyTuWos7BZvNXbPMAWkQCgnu1bcLQRf\\n\",\n       \"jvTXdkRQcWJuY0ueE+ARWuc9wrz12yrgO3YkpPPgaf9ZsASnlN0zgQlyuH8SKsueE3mEn+OCO1nC\\n\",\n       \"Mfm8IgaULyGbYu4U0tjKlaVQH3NmfZGdg4why35/rEW+y5JnEmsD2dZY9E2UEEIIIcQE9BIlhBBC\\n\",\n       
\"CDEBvUQJIYQQQkxAL1FCCCGEEBMIY0MIP2c7HJv8JoQQQghxA1AwAfQa+iZKCCGEEGICk1+iQghn\\n\",\n       \"Qwi/FEL4YAjhAyGErw0hnA8hvDuE8KEQwrtCCGc/l40VQgghhLhR+Gy+ifpRM/vVUsoLzewrzOwh\\n\",\n       \"M7vHzN5dSrnbzH7t2rIQQgghxBcdk5yoEMIZM3tfKeV5UP6Qmb20lHIhhHC7mf16KeUFUEdOlBBC\\n\",\n       \"CCG+YLieEzU1sfy5ZvZECOFnzOwlZvYfzezlZnZbKeXCtToXzOw2tvI99766Wk65Tixtgk9/nWef\\n\",\n       \"ohxjXcaSq7eFJcLWh12yPzed+QTo3Vwnst73tre7Ove+7gddGUbANmxGepY4C2nEDYltLbBeIe+o\\n\",\n       \"kSTXDjC7NotsfdPr3+nKXnXvfdVySH7F1JLjg7azc5AGf41DV1+/YetTf/Pgr9WZc/UvyVcu+zTr\\n\",\n       \"t731QVf2X7/iF6rlsvD7e+7sIVf2V7oPVsu35qdcnac3/nb4k+GFdTuLT7M+2/gU5XleV8uXyC/n\\n\",\n       \"73zzq13Za++pr982+EdAJinROdXXJhSSvkz6Hs6qzr76xj5sZhZgjnYSMmxveNNbquU3/n1/PV18\\n\",\n       \"t/l7pGG3Y/RtyvB4weRzM7NINlZc2jqZkZ6cu/t//DXV8n2v8c+WkP2JaaAs9L5NkawXIDG8IWnP\\n\",\n       \"BZ8bZpZmddsH8qmSOn+u3vz6H6mWX3v/q1ydjvyRH2F2h46k2bN0cHwOZnIdMrkOqakPyG/ZbCDP\\n\",\n       \"5je+8W3V8ite+XpXJ0TfdjazAdLiTA7kHmKfYVg2j/56tsWvtyj1/f/db/tJV+e1P/I/urKAH8lk\\n\",\n       \"f7Ehsy3Euqwh/a6QEHw8DYVUyuS1J8OKb/jun/cbvw5Tf85rzewrzewnSylfaWaHBj/dlatfcelb\\n\",\n       \"JyGEEEJ8UTL1m6iPm9nHSym/c235l8zsXjN7LIRweynlsRDCHWb2OFv5N3/zN/7838961rPtrmd+\\n\",\n       \"ycRmCCGEEEKcDJNeoq69JD0aQri7lPIhM3uZmf3Rtf99p5m95dr//zJb/xu+4aXVcjp+7kYhhBBC\\n\",\n       \"iBuKqd9EmZl9r5n9nyGEmZl9xMz+vpk1ZvaLIYTvMrOHzezb6Zrwu3MLM2C3xH+aBz9D9KypvZCe\\n\",\n       \"uD9mS1dylPeq5RT8LOCFOANtJLPb43rkN/wAv5czB4ToB04E6YmD0cCs8TgTu5nZQByz4mYBJ/sf\\n\",\n       \"QSEuQNMQB2uod5CYx0A8lAjb7+is52Smd3C1MpkhnpGgby4b71Ld0T7iyp7X/mm1PN94c+JT/d2u\\n\",\n       \"7NHhOdVyRtnBzO7KH3dl59vauVoFf14YEVyRZu77fma/8kdwCzI5n3TMCG6LeVNktQn9MZB+EJg4\\n\",\n       \"gbtq2LEwByPBMvFniLuBp47d7PSZ4CqR60KeU/gIaAdyDla+v3RQj3RFy5H4K/BXcJmTc0f8HLf/\\n\",\n       \"7OsE9hc2OFEDu7fJczjjs4r018I+Q6CDBtKHyWPQ75/4cqzvD9A/2PMtQ50ZuS5dJp8X8FnQkc+L\\n\",\n       \"lvT9OGIA2jbtubIA17QhTtSQib8Gz/lCvKlCrnuBE1rYB2v2rz3kI3I0k1+iSim/b2ZfTf7Ty6Y3\\n\",\n       
\"RwghhBDiCwMllgshhBBCTEAvUUIIIYQQE9BLlBBCCCHEBD4bsXwyTsJLtVjWBS+aLcLGle3YQbVM\\n\",\n       \"MjOtLX49tPkOiJyZXEqY2TBC3mV+aAuCOAtYw1BCM+9GsoBKdA4zEQUDseYiypIjxEgzs9Cg9Egq\\n\",\n       \"kcIMInLX+fNbiJA+X9YDAw4OfPBkYcI9tKGN48TrM3G/XrYLrs5N4aIri9BlH1s/19X5g6Ov8mVD\\n\",\n       \"rRU+Y/Goq7Oc/Y4vg3ZGIoMyIvzdVEg/zzMvmyd4VOQtCfcjYnCLhjjZnxvkcLWhf2EiewCw4EIY\\n\",\n       \"wBCI7Jpm/lgyBK/SOuQ6BDjmmHyb2uH4GzAwuZccXxzqkxc3/py3RwtXNjuC0GPSztIS+XsXg1hJ\\n\",\n       \"aO4IczcmIg8PROIGWTiyJFYasorSMRvIQu4HfJ6xz5kRHXYg0jqbwGMLgyEyGRyBz/2WyNnsmR7h\\n\",\n       \"82FO+uuMCf5jzOstecbiMRNBPNBz10Md8ownn6N5hFheSEA0G6AxFn0TJYQQQggxAb1ECSGEEEJM\\n\",\n       \"QC9RQgghhBATOBEnyuB3bnQGtuZ/W2Vlc/j9ugv+t/hF8L/BbmCi26bxv62uzXsh63y8V0O9JQiH\\n\",\n       \"Yz+/ssmFsYRNJJzg+MgczDYkEmaGE7yOdKIyeBlstYG4DQa+U+iIy0F8BHQiLj/ufaTTt553Zd28\\n\",\n       \"vlb56PigVDOzvfbpavl8fNLVmZGgySe2d1bLf7j5Glfn99Zf78ouhGdXy8/rP+Lq7JDAT6cVjryA\\n\",\n       \"EXyHQHyy0pDw2ba+NszACv0h2V+9faqvkDBBN7E22Z/fEHMGSaAieBmZhEOmPR/uu9mBcN8FcXhI\\n\",\n       \"AGeAmYvbre/nzFvyG/fniWghblLiZvDPrfbIlzVX6smvw5aEEs78s6QrtZuayPPU2uOdPfYXvfOR\\n\",\n       \"zCxAJ8rEccGA46tgPeJusnBP2BZzeEZkUdK40cb8dU/oRJF+3cB9y/bPHLoZlM2Ia8QmdEanlVF6\\n\",\n       \"5kTVfWgY/Llr2OdTB/c/CQ7NxKEd9Rhk4aWkXWPRN1FCCCGEEBPQS5QQQgghxAT0EiWEEEIIMQG9\\n\",\n       \"RAkhhBBCTOBExPIC724JhO1DO+vWudKec2WnygKWD1ydmXmhuMXAMfOBnEMmkjObRR2gXltAGZuJ\\n\",\n       \"iV4CbJr68pCcQrOMgiELSjt+9vAxs5Cb+aAylmWYibC5WNTXOBH5vI1eaD66XF/T1aGXrJ//zC9z\\n\",\n       \"ZWlbX/eD9TixfN7UQvFO9H0jDf62eXqoxfKHt893dfbDKVd2e/OJavn59mFX56b4tCvbhwuWgj93\\n\",\n       \"jGZdH18kj4A433VlA8jmiYSXZtKvUXxmoXnsfnD1xtx7bKZ3dq+BSJ6Xa1enJ2J5f7ouWy3J/mZE\\n\",\n       \"TO7rc7w48DfyzOZ+vREwnzmCdIxyv5lZTOTRDyJ56EmfItchwL0WB3Jv5+PvP9bORI3tuqwhA27o\\n\",\n       \"Wq5w7ENvxGrsQQhksiKLsBzg3kpErm9ciK3fTkuCNPGzL5DPC/ZZZGQAiiORZwJsPjS+35XEBhTB\\n\",\n       \"NSXhzEZCcjHstrAPNhZeSoJlx6JvooQQQgghJqCXKCGEEEKICeglSgghhBBiAnqJEkIIIYSYwImI\\n\",\n       
\"5RGkuFWphbSn7Ca3zlODL5tDkvNd5WFX5674CVe2AJF8t3iJlJmJiZrdAEvdPt4BtMCkQ7DyiA9n\\n\",\n       \"2UYk17YkeRgrjhEHzbed+ZTdzEuyAeTPzQER/ve8yHrlqVqqPkPSye94zl2u7P/7d++p90/kzDGw\\n\",\n       \"8HU28OGJdHu1fBD3XJ1bF4+4she176+Wv6z9PVen7a64soP4jHp58DI4YwbSb+p93y9bUraot1+I\\n\",\n       \"fJ5IJnOG/oF92oxIsubvBzY4wu2f3Z5s9vmu7nt54ftiv/Cy+Xq3Llvv+aTlofX9LEJCeYQBMWZ8\\n\",\n       \"sALCnhFUnI+4TFLpZ0yKhyR3OrWCPz7Ue6nUPebxQi4xzx2HwS3kmZtI8niEhOvCUrhZ46Hv8XTy\\n\",\n       \"4/snu1YsB3zARzPZthsYROpEcjD42RtIHXqrjRl5RA7GDRrBB4KZFTKoIgeQ1Hu/8SaymTGgD5Nn\\n\",\n       \"C50Zg7RrLPomSgghhBBiAnqJEkIIIYSYgF6ihBBCCCEmcCJO1CzULlMfl9Xypng35uPlGa4sQ4hd\\n\",\n       \"2/rfSM8VH1S4LLUTNSveiRjI76Yb8y6DbxQrwt/U/e+7kfhWBXyOPPj13LaC/42bzoCNv4WPdKLw\\n\",\n       \"tTuQmbSZV7Dd1Occ/YTrrYftfPaLv8TV2X/isit78uHahXv2C+72Gyc0eDqDv0WOkg/NvJJOV8vL\\n\",\n       \"uO/qPHvugzS/qv1/6zrRe1OPN94HvJBvrZZX2beJEcFbalc+oLbtvKPQdHXfbxbeicrGAjjrfl2I\\n\",\n       \"U5PT8b7amNnZM5NVmCcV63oDXnQzS8QZ6ru6rJ/59XriRLWh7uvDmuyvOf7+IwYI9V5yU5+soSPb\\n\",\n       \"JgGjOdXtjHPvfDHHLC/BMSPP4RSPv8aRhQIzryei20S8F5axCP2DdTvmFvl+RbyeEcolc9pCYEGh\\n\",\n       \"GM48Yj3WAPKIRVUs0xPli8YQEnum1zcgUxQLCbYeMgS/EuFqIGGbAfoeLpv5QE4zszCif14PfRMl\\n\",\n       \"hBBCCDEBvUQJIYQQQkxAL1FCCCGEEBPQS5QQQgghxARORCy/eX6pWt7ta9F8n4Rvfarc4squpFq4\\n\",\n       \"ZeFwgQjbAdLoUH69Wua31Y+QWwPZFsqRLDyNZUFGw3ayUEIQd8n+mTsYYFtMqKSA/MnWSmw2djAa\\n\",\n       \"u85LyGnrRdbdvVpgnhOh+ZE/8sJ219bb7/Z8n2LMrRZuIxE2981L3Idhp1o+N/cDGp7XfdCV3dXV\\n\",\n       \"Inkhgv+F4Pv+xXxbXTCMu34oqcbBC8YNkc1ns/r42P0RSKijT2P160V2z8B5QNmWwa4Vm5EeyzCc\\n\",\n       \"8rqFkOZJvFYjnrBbr5CE2mGMmUxuZBbAm9DeJV0/Gwm7BcG+IcIve1BlkOmHBbn/iXCPsAE37M/8\\n\",\n       \"DH0D+8rVQnpRq6WWDG7JZGAObosOphlz+7Ftk7a3GDRLNhVBtGYC/kDW7Jv6mm7JOcf9m5mVMUHT\\n\",\n       \"LLASykryz/1h4wds5S10WtLRExmIEJu6X7fzjavTzI5cWdv5+2Es+iZKCCGEEGICeokSQgghhJiA\\n\",\n       \"XqKEEEIIISaglyghhBBCiAmcUGL5YbW819aJ05lInWyW8ydDndp8Pjzl6nQkInUD2ypEct6QpOpM\\n\",\n       
\"UsxdHRY8DGIiE73ZhOJYkQmpBunEhYifCacFv04bxhBA5kuJ5CgTqdOJ5ESkZTOvz3dqkXx75dDV\\n\",\n       \"ufzkRVd29tab692NTKRtIM12yD49/0o558qO4l61vBt9X2yClxcvhzrp/Enz2/5Yeb4re7KvB1Xs\\n\",\n       \"setAyPNaEC+Dl/kbFrvfg3Df+vNiTNR1CfBEWiV9NmD/GBNZTu9PdkPW9ZpEEqF7f/+3m/qYU+tv\\n\",\n       \"2tyTxHKQaZs1s6XJ+cQqRB5umFgOfR1nPjAzKy0RaRdwb5PnRiB/d+OMDMQdNiPnyhHHCdsZ+gt1\\n\",\n       \"wVl3wQEFpG/Q4O+2PsmJnAM2gAGZsdRt8kwPBQcLkec3bIuNg2CDP4ah3nZPXgESOS/tmMR5In9j\\n\",\n       \"GnlJpMOSew3F8rRdujpDIa8vDczIMHiJfEGS1UPy9caib6KEEEIIISaglyghhBBCiAnoJUoIIYQQ\\n\",\n       \"YgIn4kStmvr3/0WsfYtbhsf9Sq1/37t5qAMNaSBf8a7Iuqu9kC354X3T+oS6zJwkgKop8Fs/m7k7\\n\",\n       \"EElpgPDJMcGhkfx2XYhrlOE39RiZyODJXnJxdSI5lgAeA5+znmwLztXRlSu+DnFxds/XgZg5j3OG\\n\",\n       \"GvQWyCXvsz9XeNZn5vc3kHN8AUIzHy3PdnU+0T/LbyvVrtjp9AnfUMIW+n5e+jA6NrM7ujch+/UC\\n\",\n       \"8QgxAJMFI7KgVxpoeAxUR2RyDNwzZUv8p7W//+foXCXvkzEtK4IH0qz9/sJmxP1HgydJNaiXG+L+\\n\",\n       \"kNshdxDyyDZOpE987hZ2/zfHO1Fsd8zEwfudPfepezeiDobRmvl+xcKSR3w0WEt6KOvl6HhRFRYu\\n\",\n       \"ckkjwzbRByT3Iw1QHhF2ayzsGl0xFi7KnD1YbSAeUxm8R1ia+t7KgXhokQTNxnGfDwx9EyWEEEII\\n\",\n       \"MQG9RAkhhBBCTEAvUUIIIYQQE9BLlBBCCCHEBE5ELL9stRQbwKSbNX5m+Xnad2WnQaIezItmPSnb\\n\",\n       \"gqmXSWJdoTODj7AHSeBYUzCgksh1RJx34ZokVa4B4Y/OZE/EPdxWM+LQru7g+BnGmXiZBghBI2GN\\n\",\n       \"AxF18bxsN77OfMdvq1vWYvDB0QFpqWcGxm1DhPRTwcvt6F2eLpd8FRI0d6Wcr5bXtufqLAcvR54f\\n\",\n       \"HquW77RP+jYR+nk9Y3q2HV+JSfjQPZn3XVhwX64lTpbZx0T2gPfMCHM3kH7uxFYzCxAiOSNieSTZ\\n\",\n       \"e3GoD7qdsZnlScPgmRCJJNuSNiA5suMjO4TzUNhza8SzhCVP4kCWaxWhEtkdewZhHRK22Takndgu\\n\",\n       \"Uod50MS9Jm1gQjoOjvAbyjRlGRj88WG4r5lZLgmWfT/D5zwOwDGz69juOAjA12ADPUZkiXKxHEYL\\n\",\n       \"sEtFRzmEugwHQpmZFXLucPNNJoOxejIwqJ3+KqRvooQQQgghJqCXKCGEEEKICeglSgghhBBiAnqJ\\n\",\n       \"EkIIIYSYQBgz+/TndIdokQshhBBC3MAUN9LrKvomSgghhBBiAnqJEkIIIYSYgF6ihBBCCCEmcCJh\\n\",\n       \"m/e89Xvqgm0dlhhJ8FxY+YCsdrWslpuDpa9z6Gdjtw3sj+WkdX6m5zKrZ66/9xde4ercd/8PkY3V\\n\",\n       
\"BBKe1jb+mEuG0DUS0rfFUEA2jXxLQvMiBJWRk/CWNz/gyl7/8lfXbSROXUvezVsMoxv8z8stSSrE\\n\",\n       \"oLvE3vvZTPawTHLg7J5/8sO+7DX319sJLIiVBKNCqFwmddi2LNb9OpOgy1R8qFy0OnQ0mF/vrW96\\n\",\n       \"kyv7odfdUy1jEOzVbfuywYUJkoBMV2IWoJSF2DYkDDJjKCBRKR/84bdUy699x3e4OjHtujKD54Zt\\n\",\n       \"SeDoZuGKQqrvURrI2/jrUGZ1eHCY+WdLbH3A8P2v/qlq+TX3/KCrMw8+FbSzVbW8LCtfp/jQWrwK\\n\",\n       \"pfHPzk0hgaoQaDwE9rHir/EPvfknq+VXPuCPj+aE4v7JdcjkueQejaTvN9m3fbhSP/d32tO+TRt/\\n\",\n       \"v7/uba+rll9xzxtcHQx+NjNLVj8TjpL/7NvmuqwYCeR0JWYRnvsdee53LAAUQo//jze93NX5mX/4\\n\",\n       \"Pa7MhV8Gf39kcm/jR8FA0n23JCR709bnZdX6czeQUNdtqOs9+GZ/ra6HvokSQgghhJiAXqKEEEII\\n\",\n       \"ISaglyghhBBCiAnoJUoIIYQQYgInIpbjbOShq2UzlmkVBi+I5XVdjwnb1ntBNIJYymZCD9mLnoUa\\n\",\n       \"6Liib4OThbMX6QqZBRxFyGj+HKR1LY0uThORLniRte3qY85MkiWUBs4VkboHInUmEAO7xu8vDWSW\\n\",\n       \"ehSTmdDsPUg363ii2rMngswfiNRtROJsYBZ1djZx5vWr9er1YkOuMRHu3aTtadz1cw4n64vkVDUg\\n\",\n       \"Y6JIf7Xs+HYyaZ10fbdiCMcfXyhkEMlABPFtXZaPvFheVqQs1dvHQQ9mZmHmJe4mwjGTP10TDvQg\\n\",\n       \"bKOX5IvrCGYBylL22w4DG6wAx0OuMbvhM/R2HBQwFhxIY8bFcny+DOTZ1RLp2InziRxL78u6WH9e\\n\",\n       \"tNnfo+uV/7xAZtFL1WzESw+fY53584Ii+UDuvYj9zswCXNNCtm2kLzYjLik5de76saht9nmf4fNi\\n\",\n       \"IB2hJwMYNjBQZ0s+j5mQvqWDIcahb6KEEEIIISaglyghhBBCiAnoJUoIIYQQYgJ6iRJCCCGEmMCJ\\n\",\n       \"iOXWQQo12GalEAEvkYRdENIDEb8LEfdKX8tn3Fn18qCR9FPXJpawCzIdFf6IirxY1iLplU9tXJ1h\\n\",\n       \"U28rkmRXI+IlhraiwH09EkiAxGu1SI4ltccLjQ2RuAu852ffDawl1zhsMSl73PEZtAv3b2YWmFCc\\n\",\n       \"6vVaJtcSh7OBBOjUkf1FMqjCxS+PFOdxLZYWTu6HCJ2YDcagnR83Rlcj9yj22RFiayTJzmVDHnGQ\\n\",\n       \"Rh6Ozrgq/cEpVzaAkB7J4AibX/ZtAFE3Rj/Qw9qZLwPWwYvziTy8mlJvf0b6MEuJx0EV7Dkc2Xog\\n\",\n       \"OTtB/erWSNmYOmwWA6hBEqhb8qxOff3wSEQsn5N7bQb9sz8kifNENkca8vCKTGiGNPnIHrKY6E+u\\n\",\n       \"cU+OD+81dh9nco3LcPz1a0hfxIEkkQ1Iiayd8Pwm56DHQU5mtm3r87luvUTek1k9+ub4++966Jso\\n\",\n       \"IYQQQogJ6CVKCCGEEGICeokSQgghhJjAyYRtNhjOBuGbbFZn8ntymNcBZ2lBgjWXxD/Y1mWBeRMt\\n\",\n       
\"mRmc/Abr6rDQSihDv8TMrJ2R39SHul0HT11xVebL+rfchhwK/bUeXp9Z8BwDf6smOpL1JJmtLMCD\\n\",\n       \"WzBB6PgwurgmfoefyN7mELIWiEvFQFeLXXEWjOq8NxJm2LGAw77ui30hftDMuzADBhySUDlGhrZH\\n\",\n       \"sj8WiBegw7CwTRak6ToaC1Qk95VzImjwI2y6950/9Cyktw5P3OzvuTqbKze7su22dhRj4wMWF2d8\\n\",\n       \"O5seXMbs3UYXyEvoo3++MY9obXVnn5tfryFhwhF9R/Ioo0Ga4Hgyp2bcX+u+VibXHYNfmQdaiEPb\\n\",\n       \"b+p7bT477bdN+lC/Xz9gmuTPZxvYtamZBX/d2WedRXSEfZ0ergMLo2RPL/SWIvGImZ87Bvbx6AJU\\n\",\n       \"mW9FtpWgXk+8tw3x3o6cE0WuJwnbHBS2KYQQQgjxl4teooQQQgghJqCXKCGEEEKICeglSgghhBBi\\n\",\n       \"AicilhcQy0uopcpEHNmWWI7o96Z06OqEgQSxgagXV14KLGzG7Y5I6se0ycysAek3E1GwJWF7B0/U\\n\",\n       \"G1sf+Y2fvb1er93x2x42/lgKCMVM4GQEFK+JSF9av63VTn3u1uf9evsLcl6gXct9L4yeuug7DOSw\\n\",\n       \"WjuMDNuMKFAT8ZqJrCCIx0QC+Q59EGOEW3BG+oaZD4NEj3xLxVIPXvdCZn8PpA04YKIlf38xAZZE\\n\",\n       \"I/oaVK7FbZNNIyRsMxEJOPV12bD1YvnqYNeXrWohfb7w56Bb+mdQ7iGMkjyTxri820IejGHpijDM\\n\",\n       \"d0X6YhNXfj1oA/sLO5lvQwIpt5BBDmMeLzTAlfUX2BYbdNCTZ17b1gM0FrMdV2e4tO/LDupnTrdz\\n\",\n       \"3jdzOP6jdEZCVllAZYAA3ib4fo15zZEMDEqFtAn2NyNdKhr5vCBit6vDPvygnWxgwkDCp1Ek35AR\\n\",\n       \"UytStu7qc7WKvk5pyf7GffxR9E2UEEIIIcQE9BIlhBBCCDEBvUQJIYQQQkxAL1FCCCGEEBM4GbEc\\n\",\n       \"Z22GWc5pgikxLwOIyCX5d8JEhOKAKc0tSwsm60WSsg0QR85yqre/XHihsSGC6MHTIDkSCfH0TbVI\\n\",\n       \"19sB2T9JygY5Gmf3vh4tbou0qTR+f2lZn4Mnz/iu99hZkgTc19u/zbxYvjwis83D4IVuZAovzoYe\\n\",\n       \"yKzumaV8o6AZSUR68WXDwVP1aizxntwRcV6fq4YMTGBgyxsyoznVe0FcDeRWiERud0fDJHJijeNp\\n\",\n       \"wMR0RiDZ/CWRRxyUpZ4kGA+sDGakJ4+NlMg5QGGaSLqRXAekRJ9cvyb9JWP6cuP7XSQJ6aXUUvyM\\n\",\n       \"3NuJ9P0E5z2RwRjsuejrsMRydj/AxljCfuP7wqyt75l0QJLjD/x5yaV+NnfLW/0O9/2AAmRObppM\\n\",\n       \"Ljs+q4ZCZj+AUQANmSViTZ77mGzOZs+gdxqLrwfc57rx9HrXJjJQZwtJ4yuSPL6a+WceJpZvOiKW\\n\",\n       \"02koRg48IuibKCGEEEKICeglSgghhBBiAnqJEkIIIYSYwIk4URl+v8W8v8iCNclvxwP8nluMhJll\\n\",\n       \"EsCJgYONd3EicSKoCIKQ3/BbmG16PvP+0/5F3/ajg3qm9TO3+HbO9+qTd3SZuDgsMBJ+O87kd3cG\\n\",\n       
\"eguZ/A7ekrIEZVfI7+AXSXjaDK7x2Za0MxAnCjY//q8FnEGd7Y+4GxDq1nc+rDHsnHVlEfpLpt6d\\n\",\n       \"dzdKrveHAX3XA9UJFqzJHJMINylzVXjYJqxH3LTYsXsNm3S8k8EDY4lLCfdx0/p7pp35cx77up81\\n\",\n       \"cxKo2q5dWdtiiiVzR45P+xvIfWzE2VvD8wwDVs3M0kACKmPtmCyKPweB3A/BRoRtUtPN8/gjAAAg\\n\",\n       \"AElEQVQVoD4guR+gL0ZyDrrO+zL9lfra7Abvpq5IoLF1ddjtqb1bXJWDx57w6wEzcm8XFrYJx1zI\\n\",\n       \"cxHd1MDuK/PnYINhqcxZYm7jiMuXGhakW+8vkX6A/pOZ2RE8E47m/lgOmRMFrmhPZLzAPLA87vOP\\n\",\n       \"oW+ihBBCCCEmoJcoIYQQQogJ6CVKCCGEEGICeokSQgghhJjAiYjlKGiXBuVTIgoz2a2rBbGYvQjJ\\n\",\n       \"vMSSQHYjomfZelkRAwcZgUQVtk29v+3aS4BXLvm2tzBL/N553yYMKt2siVgevIBXtvV6XMr15BGi\\n\",\n       \"4JC8WBqH+piXa3+ezh75bTUgUC5WpJ0kLBFl2jGhb2ZEnCVSbs7+HGPAYSFhf2VxxpU1IMXSQRWd\\n\",\n       \"H1CQwZwfI15fXbGuR7MvSYhkBpmX9fNRkOtQiFDsHPUR4nUk9zELPc1dfa+18yuuznKX9MVZfa1m\\n\",\n       \"S7/t+a4fyBLnIJtHL6QXMjjC1WGCOAZrmlmyOdTx90ciM9mXVK83hJWr0yUi3OPlY64yexC7Ov6c\\n\",\n       \"u2BkM8swsCOSoMSyJue4R8Hfn5ejjS97/t0vrpabA38smycuuDKkJUHBLf28qNtZ3Anmgr+rQ+6Z\\n\",\n       \"ptR9GJ8jZiQc1syGEWHMA5G4E6w3kM2sSSDm0azui0wiPyQBw0cwiIt9trfk3LWfxfdJ+iZKCCGE\\n\",\n       \"EGICk1+iQgj3hhD+KITwhyGEnw8hzEMI50MI7w4hfCiE8K4Qgh/TLYQQQgjxRcCkl6gQwnPM7B+Y\\n\",\n       \"2VeWUr7czBoz+w4zu8fM3l1KudvMfu3ashBCCCHEFx1Tv4m6Yma9me2EEFoz2zGzT5rZt5jZz16r\\n\",\n       \"87Nm9m2fdQuFEEIIIW5AJonlpZSLIYS3m9nHzGxlZv+mlPLuEMJtpZQ/M+wumNltfAu1vIcCXGAS\\n\",\n       \"MBGfMwrpLZmBm0hrYV4LfpHJi0RIa/oRMi2R8kKpj3e9IoIhSW3tTtfbWp72296ua2l1WHtpjgSk\\n\",\n       \"u2a2JEGcMWRM9Cbnjpyn+WFd7xaSPL5cHbmykOr1zuz7bc82rA31OR8bSIv+ZKHX0+8Pe8tA1svt\\n\",\n       \"3JU10NcbMqiiRN83UK5lgigjgmSMffN6FDiBLOm8jEg/Zno4Czp34vqIWeQDEXet9YJ/nINYvrfv\\n\",\n       \"6zREGk8wCKDx8nK34xPLA4jluSUzK5DUdEfyfSOQPlXg3A1GZmRoWMI1lI0U2VsY3MIToY8/vkSs\\n\",\n       \"49CQQSpwfCzkf3tArl+3Vy1v/BgA2zv3TFd2dvfmavmj//5XXZ1dMlgAiVSc941vYKBFId91LODz\\n\",\n       \"aSh+/yzgHm+2hAnmZpaJIB5HfN8ykGdQD/L+hnzOHHW+D+9D4vzRzPfhNfnMHBoccEOeU+QZayNn\\n\",\n       
\"7GBM/Tnv+Wb2cjN7jpndaWZ7IYT/vmrT1Tkgxg35EkIIIYT4AmNqxMFXmdl/KKU8ZWYWQvgXZvZ1\\n\",\n       \"ZvZYCOH2UspjIYQ7zOxxtvJ7/tXv/fm/7/rSO+wZX3b7xGYIIYQQQpwMU1+iHjKz+0MISzNbm9nL\\n\",\n       \"zOy9ZnZoZt9pZm+59v+/zFb+um/9ymp51OSUQgghhBA3EFOdqN8PIfycmf2uXZ0m/ffM7KfM7JSZ\\n\",\n       \"/WII4bvM7GEz+3a+PngS6NCQ31ZZEBtmkLHfjqlJ0cBv+MR/IBNg20B+P0aYz5UgqDCRn1+bmW97\\n\",\n       \"09S/3TKP4eiwDsRjDkEgv0NjeGmgM6h7MLcvZxas6c/B4rAumyW/vzM4272ZGQScdgNxMg7J8aFf\\n\",\n       \"MTJsE72eSPyOTCQe1wKyHnOpcAb6wgJdyezoeIswr4gRCzpRxO9iG0PHi9Uh3mJpIBCXhW2S+xYd\\n\",\n       \"rFHZnkSOYfd2aWtHKS6I/0gcpQJ9PTbErZiRkMcG9teQdjKxB+jMB10mEmzrLhX1QkhILnwcNOQh\\n\",\n       \"GIlPgpcmZhKWPOKjpmR/kRviA/Y9eGfkedPEHVfWxd16O+ReO7d33pVd+tAfV8sHj33U1bn5y57h\\n\",\n       \"ypCBPGMDC4iFS8qCSlt4vnTkBulZkKdrgN92Js+EllwHZEs+e7ZNvd6K1FkRb3ELQZpbEvZZiLuF\\n\",\n       \"z6WGHF9L0oRnn4UTNTmxvJTyVjN7KxRftKvfSgkhhBBCfFGjxHIhhBBCiAnoJUoIIYQQYgJ6iRJC\\n\",\n       \"CCGEmMBkJ+qzIYD56N7kSEoYC3DzgX8kGJEIm7Gt5bYyI1JZTyRgJv0CTI5MKNxSwZgEcM5qAa6Q\\n\",\n       \"YMQ81G1vOl+nIWJyxhnNR4rJHTh5THpkcnQLoZm2JpIlaydK+cQ9j+RatdAG0n0oBYYiJCIhWvaN\\n\",\n       \"KCAwRjIIgQ98gEEObLZ7ImwbhB6WkWLksIagWXKv5Z4EfsK9VmibfFHG606ucSDnqpnVZYmNxsB1\\n\",\n       \"ohevY8fEeVjGESpmZr0XxDFwMLNBB0Q2bzrYVufvdTq4BSgDkbqDP2Y0ywORghM19WuZlweqHh9e\\n\",\n       \"2JLgxzEDVyIJ8uw3/rw0oW5n23oBHuuYmaVV3XYWCry9dNGV7T/6aLW8d5MPh9y781ZXhrBBKi0N\\n\",\n       \"xMRnM5GjoS8uyOChQp5duOmEI4XMrCWvBSWQ5yDQk/DLDYjl68Zfl03jgzS3VsvmmQxyYJ+H7t2C\\n\",\n       \"dDs2fmlElu910TdRQgghhBAT0EuUEEIIIcQE9BIlhBBCCDEBvUQJIYQQQkwg0HTiz+cOmSEuhBBC\\n\",\n       \"CHGDUnCqlWvomyghhBBCiAnoJUoIIYQQYgJ6iRJCCCGEmMCJhG3+t9/1r6rlnGEme/LLY9f6ULn5\\n\",\n       \"bL9e7q6Q9Y5cWdPV22Kzo7fBB8Z1UPa6N77T1fn+t3ynK8OAQRbSF0jaJU5Azeo0wU3Z7mEBlTAj\\n\",\n       \"fcg+zOzBV/8TV/Zjb39Vtbzu/Q6PNn6HmJm3WvvwtkRDLOvjG0iQJ9a5CqxHggr/6Y/56/eOBx+s\\n\",\n       \"t8JmWSfBgU0Ht1JLguCwjpmlWNfLJHBwIDfEBsJSBxKQ+frv/z5X9vKXf3+13G/Wfn/Zh/s1MNP6\\n\",\n       
\"YmfX1dnZ3XNlS6i3s/DBeom0PUG78uDvx3tfc3+1/H3/6D5Xh90PfvJ3EppJUl0LXHemSLCu2DQY\\n\",\n       \"/Oq33ZIV3/b2H6mW77v/J3ybkg+jjLh9MpM9m90+QlBoSv55enjlk67s4pOPVMur1WVXZ0au+z/7\\n\",\n       \"v95bLT/wpn/g6uTG30cJw0Qx0NXMSiLBxKXuw6n3oZn9xj8HB3w2wnbMzNLg2/C//dgrquW3v+EH\\n\",\n       \"fDtJIGYL129JEiO7FvsiqeM7ukE2syXyvEkkgBcDov/n+3/UVbnvvntc2WJWn6vNwaGr8+TK39t3\\n\",\n       \"f+kLquXDjz7i6lzp/bNrdv6majniAZuZsRBS+JB80wNv8etdB30TJYQQQggxAb1ECSGEEEJMQC9R\\n\",\n       \"QgghhBAT0EuUEEIIIcQETkQsTyDqDWlZLWc6e7jfTpdBUqOzZHvxcgazvTdMLI9E+Bsxk3VDZtNO\\n\",\n       \"IAY2dCZ78j4L1WLxdQIIcSzKNBA502AG86aM6woof2+3/pxkJibC8TWkTU3DZg+HWblbf+7oBNyh\\n\",\n       \"3l8gAwUY21Xdp6J5MTGQ61dy3fZAxNZAhNQC52Eg/QBFWjOz5MRyIlAS1oe1LLxdezmzJxI+SuNh\\n\",\n       \"sXR12uiPedbVx9x2XuZtgj/mHuR2NugACUTOZtm+zrdl8jnpZylhn/LtjkTmxXs0kmscR2QQFyLg\\n\",\n       \"l+SvOz4/6eAIMvABmx6i76/LxY4r29up+0YX/DN3vvTrIeTxRsfJuDueHR8+PM0swGAB9jnTk+eg\\n\",\n       \"E8szeVbSZ17Nwdq3KZJ7Zg6bYu1cGPZFv78++/VSggE3ZEX2KAnkfCIDuUe38CzJre9Ttz/rVld2\\n\",\n       \"9PhT1fKnHvUDGm7/6r/qyjKI5P3hU65OIQN8Qjv9+yR9EyWEEEIIMQG9RAkhhBBCTEAvUUIIIYQQ\\n\",\n       \"E9BLlBBCCCHEBE5ELMek3wTp2ZlIbLF4+buAdkgl8saLs/NwAOt56TgSWZFJxg4i86FsmpkkS+Th\\n\",\n       \"WEA6ZMI9booFbLNtgxzJZXBPP9Q7IF6rlUzkbzgvLZPIyfENKMkyeZiccywpI/9eGNa1WN6ywQpE\\n\",\n       \"OraCCfBE5ifSakGxlMjKmRi3Ge3P471rMzPbrusE/6PDA1eHXAbrZiDOkx02RM7EpOpu7sVyljiP\\n\",\n       \"wjQKo4xC+kFLBgGQNUfUIcfHJ3X360WQgMm56+Lx/ZP1c5SlzcwySPkl+Dr9iGcQ2Z3F4JPHT+3d\\n\",\n       \"XC2f3j3t6rhEf0IkKepMGm/hemVyHdilGUYMDMjmxec+18dcBiImHz/myEL0aeiBrli3i81YgM/F\\n\",\n       \"hqSaswEMW7jXevKcGsYb/hWb3n+O3vWM51XLD/3BB1ydM2SwQjmsn1Nnn3OXq9Ps+L546QMfqpZP\\n\",\n       \"n/IDYLbk0cwE/7HomyghhBBCiAnoJUoIIYQQYgJ6iRJCCCGEmMCJOFENOEj4WzELrGta7zt17QaW\\n\",\n       \"j1ydWbvvyhZNXS+SYM1AfgRmLoOrQ9yiCL9ps9DMSDywkrAOma3c5c6R2cvJLOex1OeczXrOSAmc\\n\",\n       \"KCJO9MRfKXDMGQ/uOuDmUyIOAXEbEmx/IKGgdH+bVb1MfJKh98fc9HB8JHQ1MC8EyoqRkE4y03oD\\n\",\n       
\"fYp5aIwC/YyFnrJcvRZCM5njwrwC9FUSdXhIGV73Ec4Xm8m+sPsB9kcDMkmfQmePuWPM2cP9dWR/\\n\",\n       \"Y/6abcl6ibShgQvIjoW6hW5/xF8zfx/NoFrbMI9whPNF/CByyL6dxBnK5PldQt0XWPxuIs/Boa+f\\n\",\n       \"lXnwblMkfQ9hj6AmkP45ok6BG2LW+TrsntnA59N2IP2c9MYxnw6333azK3v8I49Uy09f8Q7m1/5X\\n\",\n       \"L3Zl7/uVf1Mtd3fe4urkw0NX1ub6JMc94lv1/l1ijHN5PfRNlBBCCCHEBPQSJYQQQggxAb1ECSGE\\n\",\n       \"EEJMQC9RQgghhBATOBGxHIVwDNsrJGBt3nkhbTGrZ6RfNFdcnWXj5bNZU8vDRkLeWLZYJIIm0hQf\\n\",\n       \"1la2uH0msjJ5+PhGhYQSqZdBiQtqBcI2I2k3A2cUR1HZzCyQ4ECU6TPz+Iipi5Is86fZ7OH498GY\\n\",\n       \"WcjNzDIExvGgQn9CC4qzRCyPPZPN6/OeWehi60VWPH9l5K28hIC62cwro0w2X+6cqpvUkH5Orl+/\\n\",\n       \"hXudCJzD2gfipm0tf243PmzX7Z+UYX814zPeu22R9ZwUT54HNNsTn2+k8wdmUAORtIkeH4rlVJz3\\n\",\n       \"23f3CKnTNLukYS3UYYHDI9IaSR12PgPck4Gcg8gEf+ghzAXPJHwyZxDLM3tWHj9whQYak+NLMECj\\n\",\n       \"IYNwBmj8iljyhQ24gQcok8hZeGmiz9iaSAbcfPijtVj+X/zdv+PqPP7Hf+rKHvnjP6mWv/lv/nVX\\n\",\n       \"56O/9duubLasn2+p89cqHRAhndQbi76JEkIIIYSYgF6ihBBCCCEmoJcoIYQQQogJ6CVKCCGEEGIC\\n\",\n       \"JyKWtx2I3Q2kjEYv0qFEbma2012BZZJYjhK5mXVNLakGMgN2IhmtLFkZiSwB1vl2JLGYJCu7WdVJ\\n\",\n       \"BG0s9SUMpN0NkSUNxHIr47pChihpdk6osImyIvFM2fEhgb33s/2RNceQh3rNgQw6YCLrYHU9lkBP\\n\",\n       \"xyWker1ApM7AZrLHjbH4bMLe6TPVckMEeCZZtl09YCEysZz0hc1BPSBkQ65xv/bSOCbTY79jsMRy\\n\",\n       \"JsnibASRdEa2nj88cj+SCPEWTlVHBeoRYjkpY33RjUcZyA1CjO2AK5J2RrJeDJi6T64Dm2kAYNeP\\n\",\n       \"RWVHlLhJ12BtwANk57OQ2R1Sqo8v47NzJHTGC5Lyj7fIwIT0ppbd2cwRbBYDHDDRsH5AZf7j++cn\\n\",\n       \"H/2EK3vxN3xdtby9dNnVee+vvsuVff1/8y3V8ubIf7Z/4sMfcWUv+aZvrNv02BOuzozNKsJmbhiJ\\n\",\n       \"vokSQgghhJiAXqKEEEIIISaglyghhBBCiAmciBPVtHW4XgvzaTfRJ4fNO+9EYQBn2/jfTWPrZ2yO\\n\",\n       \"EcpIoKLzA8wsNMefrtiTYETM6GPqD/ktPoAn1TG/w/2+69uYqe9Ur8eC2Rgd/HZM8iPNyKzjmJJJ\\n\",\n       \"9QfiB7gwQaYVkLYX9MnGOlGwXmSOGekHLtCQhegRR6HB60fcAzpBPNRrWEgnYe/s6Wp5ufCznLcz\\n\",\n       \"0ofhgLZrf19tjrx/uNoHJ2rl6/Qbf78H6ENj7j3mZGVyPvFM0duROCYYOBjI/mZz385Ti7psOSP3\\n\",\n       
\"Oo0KrcmJJdQSLwuuFQsOJYKXf+aRkMeS/XUvARwl9oAb8UkTaeAoc33qZdb3SdNd4K/hspkV6sLW\\n\",\n       \"ZYmGbR4P68O0n0EPTSw0c6jbxJ9vxL3DTZGuEYv31zr2gQicvfUWVxYgBfRD73u/q/OSl369K1uc\\n\",\n       \"q93N9/4//9bV+Sv/2UtcWYIDPHrqoqtz7ll3ubLDlQ/gHIu+iRJCCCGEmIBeooQQQgghJqCXKCGE\\n\",\n       \"EEKICeglSgghhBBiAicilncgljcgJjKxfDE7cGVdrEXytvGhfSGSkDcI82RSJ84UbsbFVUc/d0Wt\\n\",\n       \"1aJu2RKxNHmZN0DIGwZrmhnxSlm4IJmVG2dCb0ccm3mBOhKpMxOBGk9xIOuxmdcbkGJzT2RXFrZ3\\n\",\n       \"vOdN2WIwIQmjDCSYDWVeds5pHiaeK3IOAhNg8byMDNtcLGuRfLl7ytfZ8bI5BqEemr8fMVjTzGx1\\n\",\n       \"uF8tX7n4tF9vtXZlXVffD3PSTgcx8NmgA8wXZKeOOc64KeKH2017Xjq+86a67TsL3382W//Mc20i\\n\",\n       \"+2NXvYBVzQIy2Zp4/9FzwMrgBLJnZzPmDiRBpYU+zyDck4rzZH8wiCOR0MVA+kuGE5ESu0ePP77t\\n\",\n       \"QJ4JpB6K5D2R3XtoOx17QspaOFctCZpuycCgng5qgP2R5+JjDz9cLT/vxXe7Oom09P3/4T3V8nO/\\n\",\n       \"9Hmuzt6tN7myh/6gFtfveMadrk5v4wKUx6JvooQQQgghJqCXKCGEEEKICeglSgghhBBiAnqJEkII\\n\",\n       \"IYSYwImI5QtIFo+hTsHtohfE59FLq7OmljEjEcbYDO1IIbbkwFYbMZO1EUG8pEVd0C9cnSbt+fUg\\n\",\n       \"/Tya37YzCtlM9uZThoNLbT9ebDUjqcljrG4zd+7YmWSJxbj5thsj13sZm8qnhC12IXZ4pJ0oY9JU\\n\",\n       \"enp8mKLMhFhfhDJmGtHPzczarhafZws/EKKbeTl62NYDNJh0zAZebNd1QvkRiOZmZv3a3+/DrL5H\\n\",\n       \"mrm/Z5CGpCo3TKA+dkv8WHCQw6kdfz/eetZL+c+8/dyx613a9zMyIIE938gxZ+gwTJZmj7IGBHRW\\n\",\n       \"hw3+wHurIyeYjM/wbWIfR0QQHyDlO9JAdt+vt5A0njKb3YEl3NfnsyWDlRpybZA+kJkOyHOpz595\\n\",\n       \"2cwsY6o52T27ftg3mLhPn85sFgpgfehnI7jpzjuq5YEc7yf/9BFXdiekis92/PV85MN/6vd3883V\\n\",\n       \"ciSfF0eX/b3WtdNfhfRNlBBCCCHEBPQSJYQQQggxAb1ECSGEEEJM4EScqLatHYjG6uUZCc3sGu/s\\n\",\n       \"NFb/Ns3cJh4hCYdNPCI2mzf//Rjq9GS9vn5XjdSb2vUbG2q/IpOwzQbayWZZt0BCSK0OOMwjwv6u\\n\",\n       \"rojhkCyodMyGSLAe+b08g//A1abjPY3rrOjAEDvmk7DZ3zFglP7GzjblQkFJCCnpi5i8ODYrrkBo\\n\",\n       \"Zr8h1514IakfPuOymVnJxBWBw+laf3zDmEDTEZ2qIaIWy5kMsK1C7n/mUrXQdur+MNcI6rWkUjPi\\n\",\n       \"+Mips03x59y5haQOCxjGIuZbsecLPoeZ5xPJPeO2TZ5vmQRbYv9k4ZeFOFFpqJ0oFl7ckE+MWaif\\n\",\n       
\"lYX4o2O+jRjMu4aFnWN4nhX2DMJnAhHDAilroKWBhG0Gcg7KCOeybf1zCre0/6QP2z17/pwri4v6\\n\",\n       \"M/JTH/+Eq7N7yn9mLqBs/+nLrs6889chjpH2roO+iRJCCCGEmIBeooQQQgghJqCXKCGEEEKICegl\\n\",\n       \"SgghhBBiAicjlsda1JtBuGYTiLwYiBwZ64SxwCQ9JitikB6ROpngO2qm5+KFxhiWUOLrMGk8BhDg\\n\",\n       \"ihf33AzYJBSNSfIxQtmIWcjNzBoQpiNJeSOTsVtCeZeG6PltoVieiUQ+kB1iaF6/JcI9YQ3NZCGW\\n\",\n       \"iSSxtgHr+G03WMnMGeE0hJSF7UETWIgdY7uCQQbkOvStP1cYsro+OvTbJpJ609aC6M6p065OgPBd\\n\",\n       \"M7MWwja7BQmaBSIR2yOT+eF8sgBJlMjNzLoGpVxfZ0XOwZMXa7n16NDf6+vN2pW5/c/9fTwk34YG\\n\",\n       \"2kWfZSQN1h0zk9ZpYCQ8h8l63fFZjRZJXyzsmZdxYIBvUxrIczBhSCdpp/lBOP70kXPABn8ARwP5\\n\",\n       \"uGWDmuBeHljYrhs44+uwQNwGrvuMDHJgA7ToDoBAHnoFHlRzEpo5kPv26MqlavkUkcjbuRfEDy7X\\n\",\n       \"Yb6zzj83Ahn002/JdR+JvokSQgghhJiAXqKEEEIIISaglyghhBBCiAnoJUoIIYQQYgJh7Oz2n7Md\\n\",\n       \"MvtbCCGEEOIGpRQ++krfRAkhhBBCTEAvUUIIIYQQE9BLlBBCCCHEBE4kbPPVP3R/tRy3dUBdIAFk\\n\",\n       \"Nvc/R3Z31MF9w8wHnh2ufJDfgPsjaWYxkTA6CLJ88HVvdXXue8O9rgyy6CyT085m+DaY+ZydFpzl\\n\",\n       \"vCEzfjfZBwDOoGw+rFyd+9/+Dlf2vT/x8rpNJISNzUxuUEarsLC9jMdDZisnM8TjlmLwQZ7v/L5/\\n\",\n       \"7Mp++JV139zO/HrbOUnSXEAoaPR1WNZmhEsTN2Rm+Q0JjFvX9ULvN/7af/w6V/Z9D76xXg87p/kg\\n\",\n       \"TzNzCaqBVGLHh/O4u6BbM+tIaGWBAD6SuWhvv/+Bavnlr/1BVyeQ+FIsYXWoKwrtbEiaYWQJhxBU\\n\",\n       \"mgZ/PwbzZW97w09Uy6965WtGtTNDG1jM7ED+fs5wj8wbv+bZhb9+C+jr28H31ycPfcjiO97y+mr5\\n\",\n       \"NT/kr19m4Z5t3RlI9i1dL6HSQkIlC4YQszKyXm78eu+8/x9Vy/fdex9pkyuygs998lnUww1xaeXb\\n\",\n       \"tOoXrgz77Om5v/9Pk+fboq3r/fADb3B1HnjFD7iy9qBuw+Kyb9Ns35cVCJHt574v9qf9PbPZq+tt\\n\",\n       \"F+RZPSOfIRDw/ZoH/Wff9dA3UUIIIYQQE9BLlBBCCCHEBPQSJYQQQggxAb1ECSGEEEJM4ETEcgNB\\n\",\n       \"GzOsyspLZKX3cl3Yr2XFfMYLalT0zMdLzuaEZhv3ykkkwABlhcjfTJj20jaRZLHx5GAiOb4GhN/G\\n\",\n       \"vGDIKIbHQiRLds4L7o8cCxHn8RREsp4xCRgldTozuWfb1edhvUPkxR0/4/dmD87fgsiLTMZe1/fC\\n\",\n       \"4ogMaDgg1xS21RUyMIGAM62H7K97S+TvCH2WDSjAfm5mFgIOfGDiNVkPu3V7/M0XiBQcyU3rbhEm\\n\",\n       
\"95pvU9OAzE/6Hc5ab+ZF4dj4axVH9E+aU8xubtiUE6rNLJHjC3ADzohYvkPK8KxvkxfL12vfTCQS\\n\",\n       \"Bb4hFydv6411DRmo07CLCn2YnBc6qAJOVWiIkE5WQ2Zk5EVPrPimhc+nlgz+wP2Tvp8P/TUecl1v\\n\",\n       \"Q565K2K7B9bPsA3kGVTSslpOh6dcnf7SaVdWtvW28q7vG7m74tebHdQFC79eKl42J2NbRqNvooQQ\\n\",\n       \"QgghJqCXKCGEEEKICeglSgghhBBiAnqJEkIIIYSYwImI5QHE4wDpq3njxa906M3EuFdLa3F36eq0\\n\",\n       \"TDpEi2wg4h4TREfYZ4W8lzpxnkiAQyBSHsi8VD0dUNz1VZrgRegOjq9hceiEAAplIEngiUiyeFaY\\n\",\n       \"RM7cxQjiZSDCP/PKUXhnrjujtLWIOCy9WL4968uOzkBq8w6T3X3RbFUfT75M0oLJel1f96E2jbuV\\n\",\n       \"A/TFlgxyYIMx8H5A0fxaIdkhLFJzl1xTuH6jeidJqeaiN6xGtOAY2fmEpOzMBFX2jKhbH1haOL+7\\n\",\n       \"Kwq7LmRASMLnKdsWkdRnoe7XZ8mgipt3/NY20IcvDr6d6/74/hkjGdBQ/LOrhYdHoqn7RKYPKLyT\\n\",\n       \"a0X6kEFfoCnjIz4byHgN68gDu23qisuOyOdwread338iDb28rs8BS67vyWsBS71HshtiQNLXqXzu\\n\",\n       \"ByKUXLchkwEwuG0zM3ws0Rk1XMl1ZtkYib6JEkIIIYSYgF6ihBBCCCEmoJcoIYQQQogJnIgTVWDG\\n\",\n       \"a5yV24jHlA6Jf7Bfe1LxrA/yijv+N9gWwu4ykXEK+X0+M/kG12MzwkPREEgwGvktPqMTRX5Ub8HL\\n\",\n       \"6Ei7u37l1xvqsqYcujoUPBjiHrCcO6wVSbYnyfGzGYRRxuTPHYsJLXhN25G/eXd1I/rOeyH9Kb/H\\n\",\n       \"w7P1ddhf+v4aMADUzPZm9TWeZe8HtEe+DbM59OvNuLBUn9Lnq5QRIZLUISDrOUeI9Be6P2go8zvc\\n\",\n       \"vjp/7nLvOxV6YUzTYp4UHjILzWSpfQXC/Ybs2xTJM8Fth5zzwJ436C2S8zsjN+BeV/tHtyx8vzvV\\n\",\n       \"+f2tDuq2r9f+Wbbpjz8+IyGIDXEug6vn9xeIY4r9MxGHh3s9sH0WMEz8HCQR75R9XnRwnWeNPwen\\n\",\n       \"5nUfOrNLPFTmmF2pt70aiKNE7tGePLsQosJZAL8rkj5le/uuqMl1u8rSO9FlceTL5uDQkQ8V5hYW\\n\",\n       \"9qE1En0TJYQQQggxgc/4EhVC+OkQwoUQwh9+Wtn5EMK7QwgfCiG8K4Rw9tP+270hhA+HEB4KIXzT\\n\",\n       \"57PhQgghhBAnyXHfRP2MmX0zlN1jZu8updxtZr92bdlCCC8ys79rZi+6ts5PhkAniRNCCCGE+ILn\\n\",\n       \"M77klFJ+08yehuJvMbOfvfbvnzWzb7v27281s39eSulLKQ+b2Z+Y2dd87poqhBBCCHHjMEUsv62U\\n\",\n       \"cuHavy+Y2W3X/n2nmf3Wp9X7uJk9g26hAWF6VjejmS/cKkzixFDOvPLSWpz5bQUncROjmUhyY2Za\\n\",\n       \"Z6F5TvQkX9AxLRjl1kgM2LbUbZ/3PpxuufXi3iLVoh4KgNejATmzDCx0kQiwqQC9tQwAABowSURB\\n\",\n       
\"VD4vs63vevMDL3XODmpZOG79+WVO7nZWH0/eGRMXZ5ZRMPTdx7at39YaBPR1R6ROJpEOEODYEPFy\\n\",\n       \"Rg4QhNvA0ksZqa7HpG4mIruZ7NmfX+T2wC7LhO1ChFvMox3zlXbPwlrJmsHdx8yIZYNNIDSXnAQa\\n\",\n       \"/OrCdcm2R1w/JibTHcL915L7cdn658RNIP2eJSmveeuP+fJRfY9eXnnBf5OI6A2wkNdAQhbbUD/3\\n\",\n       \"Wf9pyBMVz0Ji4axxTlpWXz8c9GDGA1SRRIKCB9ZfYPPE5bdT0MxzC3+8M5ZeDDfkk37MkW3I4B32\\n\",\n       \"nECGjgymgbDicJYMYEIZ3MwyiOzDzD9zh13/rByWEJY88+0ml8EPRPoL8Fn93FauntnPtPfpLRNC\\n\",\n       \"CCGEuIGZ8k3UhRDC7aWUx0IId5jZ49fKP2Fmz/y0enddK3P8xq//2z//97Of8zx7/h3PmtAMIYQQ\\n\",\n       \"QoiTY8pL1K+Y2Xea2Vuu/f8vf1r5z4cQ3mFXf8b7EjN7L9vAS//Ll9UFZK48IYQQQogbmc/4EhVC\\n\",\n       \"+Odm9lIzuzmE8KiZvcbMHjSzXwwhfJeZPWxm325mVkr5QAjhF83sA2Y2mNn/Usb8kCqEEEII8QXI\\n\",\n       \"Z3yJKqX8vev8p5exwlLKm83szcftFGdkjpBYHpdeTOxO7fntGCQBr3yqabv0h9jAjNdMEE1EHsSZ\\n\",\n       \"5RkNkRUTSI55lOzq04gjESg7EFJng09xnW0vubIWZPoS/TlnYNAxfU0evJjY9ZDMfehF0/nT3uKe\\n\",\n       \"P7VTb2dDpEeSNhv2IHn87AFpqCdm2H4i14WkEzdQL26YMerlzwbqBXLuCjMhoQ+N/Xtlu67PS0v6\\n\",\n       \"VMvES9h+ILPPh5bcH9AslvpPfGIfjD8i8JrZ7pkIxk4rZ0I8SyzHGQRow0kRpq2THfbp+MR5GmrO\\n\",\n       \"ZgzAZZJOvtv6b//PzOu+0BDh/um1F6+f2K/v20sbfx+XePyPHonMyNCReybCc78h96NPNSezJjjh\\n\",\n       \"/zrJ3JBCzwZesAB/JJNrlViCPzxLrqzYc78+vhm5HzsiS58FAX0zkJlASL/ux4xbmflr1cf6eZPJ\\n\",\n       \"gJvsJxqxgn2BdJ+BbAvHLyTyLMvkYo0ZNHY9lOMkhBBCCDEBvUQJIYQQQkxAL1FCCCGEEBOYMjrv\\n\",\n       \"sybD79wZwjdtRvwc4kkF+P3auQdmVjYsZBEOm87ifryjxMBj+7M1j4NmF4KvEonf0cBv/23xwWVN\\n\",\n       \"8L97Y0DdmLA/Mx+aacRHcF6RmTWber3ugIRtPr10ZTuPnam3ve/r5Dlp+811wGjo/HlhRJhtPh4S\\n\",\n       \"/+mKX2/Z1ccTSUgnBpWame2C47VY+f21XvWzsoW+wX78J2zWdUBdZu4feSoE8Edi469xO+ZvMjaD\\n\",\n       \"OhWJXAuO3TRzlAJpJws9deuRfo2rsZBHkofrfTUakHm89MUCVZmfU8DhaaJfb4bPXDMrcN9ePvJ1\\n\",\n       \"PnXZd+wnV/U9uUq+Ay3I/YDEhoRfEh+wuJNMnrnJBzE2DTi05GI1pA/j/gJ7xjOXCmCuUSH9rIfN\\n\",\n       \"D4P/7MuH9ba2yX/OnZqTwFHw+pg3Nae34/HSF7nsLsQyd/66oNts5h3hRJwl995g/rHBWl1YIDYL\\n\",\n       
\"sh2JvokSQgghhJiAXqKEEEIIISaglyghhBBCiAnoJUoIIYQQYgInI5bnWiTzIWQkzKwjgij6mg0J\\n\",\n       \"30JLj22LieUjZtdmFCIdojyIwvjVMtJOCBwL3ICvYBPoDDNvdcYE16A5fpb1qxurz10g4mcciIwN\\n\",\n       \"YZvNlsjnR16gbPbrsM146bRvU0uE+zmESp4b9/dCswLRu/PhgoWI5THUwuQOCXmNZEb6xbZu1/zQ\\n\",\n       \"768j56Xb1mXNMO76oSAaIgkqJKcqQgInDagkYYkoe7NQ2RHdmgYxunU6f87zQNqEy6RNVKSFevwZ\\n\",\n       \"wdoJ67UkAJjJ0ViHBU/SgFEMzfTkwR/zYa77UJ9IsOahH9hxBH1vRsI9lyQY0bWJBE+yYOIEqaOB\\n\",\n       \"yNGBDJhAoT8G386UvaCNgypYkGdiAxGAhrTTSMgqHt9ABo0cZHwmMMmaBeniZwobQMHCRI8X5xMJ\\n\",\n       \"28XrMLBzTtoZITw0kbRdOigGP1vJ5xM7ks8ia1PfRAkhhBBCTEEvUUIIIYQQE9BLlBBCCCHEBE7E\\n\",\n       \"iYro+mAYJPmdv5DfW8uAv92S3z9JoFpIGGLpIbmdY/L+rvN7a72xhv1eTyc8hd/wSbBegj0m4vCw\\n\",\n       \"36FzW28rjZgg1MwsQBJjYSFz5PdrDEY1MlEz640Zkt9KNzJ0DSeLHfn3QgMTrM7JbLwsdLUBVyx3\\n\",\n       \"zF8hE19u6r7Qbfz1a9beQ2k24ESQiYsZOzv1ejMSAMhyH/EssHDWhvhVfmJkEg5JHEEXTEh8R4R5\\n\",\n       \"U4V4aBHvP+aFMQcEyugjgrlicN0zkytHzGDLJobu3TPQ3AUckt/fmihYQ1P3vcON73f7G+/eoZO0\\n\",\n       \"23mv6Nzi+OtXiL2VyUMhGbSBTr7t24ldKpCAyjb69dC9wefy1SYwGxX2T647BqOambVwPGz+cXTF\\n\",\n       \"Nsyb2vp2LsAHpqeOFY6ZYJmcF8xmZo99NtE1TgDOvCkW9OyeVDTglKw2bv52ir6JEkIIIYSYgF6i\\n\",\n       \"hBBCCCEmoJcoIYQQQogJ6CVKCCGEEGICJyOWg0yHIX2BWN2RGZsw63cggXVUIsNZuUkVNjs6K3N1\\n\",\n       \"qHwOx8dE2uJnt8YQQDaLuwvbZFZw48MaExh+mcx2T0FBfEQIm5mXI1PrRcx+d+XKmrN1smVo/HqF\\n\",\n       \"9OJ07rDe/3IzopVmcVOfh5aJ0GhLmpltayG1RN/OriFhdNDXWQhpXJPgTgjpjL77cFDMJ1I+C9tE\\n\",\n       \"QZzNhD6MSKOlAz2I/IliKW0UbofJp+SewXosbBMDgc182G5gA2CobQ7PG9KmRII0XZtYoCIT2aGQ\\n\",\n       \"SfIDEag3EIi73/v9ZXKtFvP6eHZnvjPuzI4/vkIGpCT2EeXSUv1Jz4EI8HCdW3LO++Tb3uAzlQzU\\n\",\n       \"GfN9BOsbMZJAzAbr+G1voAkDOXdYx8ysaY6X5NkTnQaFAjxvFCR5snV2/7vMzBH3lZkf9MM+s0sg\\n\",\n       \"AxhGDOy4HvomSgghhBBiAnqJEkIIIYSYgF6ihBBCCCEmoJcoIYQQQogJnIhYbiCSRxT3iB/KUn6d\\n\",\n       \"NEbX83ZdADE4sjR0IlqnEeJs6ogYDCJpYXIt8boLzPA9kERmTIlORKhkHnSB9+c8MrEc5c9IjoUl\\n\",\n       
\"UDu3dU5S1E97mXeItRDenfPtpCnRu3Uacdk78pUI7eb4vthl34ZtX7e9kGjehvRFFLQjkXmb5K+p\\n\",\n       \"QUIx8dg5c7RWiVTNhE03GMO3k8nRES88TQI/vr9kNqgCII68DexY4J6hci0TUrGAybYszRo6Eb8/\\n\",\n       \"jj8+J9ubWaFCLKTnkyTwnmyrh9EfAxGaZ36MirWxvu7Ljp3zEWIykY7Z8wwvDRtQwNLP3cAAsl4M\\n\",\n       \"PsWcC9OwHut8rhIr9OcFJe5CEucb3Bgd38MGTNTbJh9XNiPtpAO7cG/N8ZI6HedFymJz/OAI/Ay7\\n\",\n       \"WoYXmQyOICn/Y8dVMfRNlBBCCCHEBPQSJYQQQggxAb1ECSGEEEJMIPhZ1j/PO2QCghBCCCHEDUop\\n\",\n       \"PFla30QJIYQQQkxAL1FCCCGEEBPQS5QQQgghxAT0EiWEEEIIMYETCdv87/7hg9XyTlOHte3O6oBF\\n\",\n       \"M7MzO2tXttip12NhdJnMbo3paWx29D6xELTaK3v96x5wdb77jW90ZZiDmHGabjNLJIwuQcBZIcFh\\n\",\n       \"PhSQBQe6IptBkCebKfx/f+V9ruy+V76yXo+ktSWyrb6vr+nte2dcnY/96Udc2S0venG1PDyx7+qs\\n\",\n       \"9w9cWXPzTrVcsg/Re+AND7qyV99XHzOb8Tuxvz3gWuFs4lcL/e0WIRyuST41E0NXr1asF3viPD74\\n\",\n       \"wBtc2f/01vr6YfCkGZ+kHiZ/t8gC60g/6zD7jmwbg0PNzN2jkaz4wJvr++/+V77K1WGhmREOJkUf\\n\",\n       \"95ew4aQsLX2fGpb+2VVmEJ5IrlUm3eXHv+unfKEQ4oZC30QJIYQQQkxAL1FCCCGEEBPQS5QQQggh\\n\",\n       \"xAT0EiWEEEIIMYETEcv3j+rlAhN17869ZbkgU0vPQAglrquticiKk2Kz2cOHRGaNzsfPtD4bvGwa\\n\",\n       \"QUhPA9lOJGZpW1+enrzzopAaSRvbQtqU6rKGma2E0tZtWCx3XJ2Pfezjruy5L3hetbz/6EVXZ9P6\\n\",\n       \"GdvP33Z7tfzQb33Q1Tn13FtdWbO3qJb7i35gAqOA4J+z70AFp4M3P4t7JlO/N8VL4xG3lbakUb4N\\n\",\n       \"GeTkMvJWxpnPA5ntPhPRGgV4mt1LZ1qvaZhwzzYG/bEh7UTC4M9B6NnAkrpVkVTJC3+tQqwfXCGQ\\n\",\n       \"+5hsC8X1ktkAEWbcCyFudPRNlBBCCCHEBPQSJYQQQggxAb1ECSGEEEJM4EScqNWmdhJ2unp5MfNy\\n\",\n       \"09zrMtZa7REk4vUU4qYUEBcyCeSjYYkj3jkb4iT58EkSAEhkij5DO0lIJzazkGRN5mBEEEECcX8Y\\n\",\n       \"HVyIp5/wbtPe+VOu7OxOXfaeh37b1XnRt36jKzt89PFq+dKFx1yd53/LX3NlD3/wQ9XyDrnGnOPr\\n\",\n       \"xeD7Rsj1eg3pKi3KeGbWpVW1nAcS1kh1tfr6xW7k30NwHgrxejI5vgLrkQxSM6b6QR+OJMQ2sLDN\\n\",\n       \"hE7U8bRr0s/3vbMX1vNquXSkTWcOfRmEACeWYtsQ7w3KUiQPM36RhRA3OPomSgghhBBiAnqJEkII\\n\",\n       \"IYSYgF6ihBBCCCEmoJcoIYQQQogJnIhYPq9zEG0HlmdE9CzEWi0BAg6JDE48Vhea2RALmOQpmrO4\\n\",\n       
\"CR0RywsGd1LJ2YvdLei0hYjJKJJHsu2WZQJCWcdOFOEIklKbxczVueXOW1zZR37v/dXy7p03uTq3\\n\",\n       \"PuMuV/brv/hz1fLz/vMvd3X66Nu+/7FaQD//wue6OozsZGFyzVkRBjgO/no22wO/3tGlug4R/BMJ\\n\",\n       \"NC1xWS3nOO5Wxp6O/cDMLLEwUTgtTBCfkb43gwTcbvBSddwSsRtEazy/jNmKnIOLe66oXD5XL89Z\\n\",\n       \"QC2R63fq69ec8W3KJFDVQ8JTR6wlhLjx0DdRQgghhBAT0EuUEEIIIcQE9BIlhBBCCDEBvUQJIYQQ\\n\",\n       \"QkzgRMTyU3u1WLm7Uy+3wcuZmJhsZtaD/DkQ9zSzFHMnbDNplbxfEmncrUWk3AbE8kjaFLNvQ9P0\\n\",\n       \"sB0iljewbSaWJyKtD/U5jiPV1nZWjwJY7sxdnace+aRfEdr5rC9/oavyJ+99nyub79Qi8jP+6gtc\\n\",\n       \"nff8xm+5sttvu71aLs2YzGuzgGndbCwBSfSOIIS3uXd12o0Xy9Ph5Xq59aK+7ez69aBebr18zsB+\\n\",\n       \"FlnKOEltb6GsI+stBn9eZnArd2sikW/8vYZ+f3Cp/564JdtZL13ZcFDL5pm1accnlrfbui8Ove9T\\n\",\n       \"qffifIJU+ECeLc3YQH0hxA2FvokSQgghhJiAXqKEEEIIISaglyghhBBCiAmciBN1Zln7IzsYdkdm\\n\",\n       \"R++JRxRK/Q5I1B8ampnAGyJZm5aI/zRCy3BtMnM6EG1US8IZcVtD8Z5NcrsjEs+IsM1IPB9G09b1\\n\",\n       \"ji5d8XXIemfvurNafvzRx12dbr1xZc99yZdUyx97+OOuzhzDTM3s/B114OdjT1wgrfIEuDaF/J1R\\n\",\n       \"CuloGKCYt65GZoGxc3CZdk+5OsPeGb+3pnbTcufdNAa6TMx/Yn0I15sTAbFd+y3NwHeaE28pk5DM\\n\",\n       \"AEG61h4fYhnIc4N0DZJ1O+4cuI1tfbvjxjttGKCaI7vXFLcpxBci+iZKCCGEEGICeokSQgghhJiA\\n\",\n       \"XqKEEEIIISaglyghhBBCiAmciFg+B0t1DpZzIhJ5Tl5XjhBiR4M1yWzsKHoHIpE6GdzMAhVQoU3E\\n\",\n       \"D8U2dOYl2Tb7sriB46PBj/XiEPx5GqK/zBlWDFR29RQI6UQR28xs9/w5V/bkE09Xy13n23TLbbe4\\n\",\n       \"sk9+/BPV8s7Ora7O2Zv9/vYvXaqWW3IOGAGsY+rbk2MOKAaTcM+MErmZpcXpus7OnqszzHxgZIbr\\n\",\n       \"nEhYKyNCsCwLZ+3IphZweDRY049ysG4DYbBrf1663kvxGdpVRhxfmvnrknaJ7T7Uoael9YMAbHnk\\n\",\n       \"24SHN5Bg1A0ZIALXigXbllZ/zwrxhYjuXCGEEEKICeglSgghhBBiAnqJEkIIIYSYgF6ihBBCCCEm\\n\",\n       \"cCJiuU8IB7GcyK7FWZ1mHUSIZ5JYXIgZjFtiad2ZRZ2PcHe73oulMxDgG5Jm3SWfRj5H2Zy1E8oS\\n\",\n       \"iVXfRD+zfN/Wl34gAj4Da3Vzv+2jlU8eR3n/9FkvUD9xwaeKL8/VCd4Rk6zN7GjfS8AL2H6M48Tr\\n\",\n       \"grH35M+MTM4VitCJtLN0C1c2REgeb7x8ngMRmCEFOyR/zhkdXIdIwteZWI4J9w1ZL/gubAEGhIQN\\n\",\n       
\"OS/E68bE8simFQDy0m+onD5wZWle31ex8w3PSy+kpxkkj5P7sRTySB3qk8efNywFXwhxo6NvooQQ\\n\",\n       \"QgghJqCXKCGEEEKICeglSgghhBBiAifjRGFIJjgYzH9igZgYPlmISxUzCbYDvyLTGdT9tlgbkHki\\n\",\n       \"+4M2dCzgcCBl0K7ANC0IdRzIsYTogzwDhi62I7sC+EDU5CCpoLOudqfWq5Wr08y8XxUXddnqindc\\n\",\n       \"ljs+rBHFu3RExBtCgFDOQtwmY2UNrEf6Sgks9BTKGu8/scDPCA5NGenUBLi30HUyM4uk7RFcsdL7\\n\",\n       \"c5ASCa2F/jKw/bHbDzI5wwhnL839ORjO+H4WUu07pcb3jey7onOi0OU0M4ssERfOS2b9Z6STKIS4\\n\",\n       \"sdA3UUIIIYQQE9BLlBBCCCHEBPQSJYQQQggxAb1ECSGEEEJM4ETEcidto1nOZHASIulmdichgYGs\\n\",\n       \"h+53Q4IYifs96o2ThWZiM1mYIfFYrUFRl4X0gSDekVBCd57Mn+IwjA37wzYQCZmEnvZwXpbdrquT\\n\",\n       \"Nv7c9ataAp4vfGBlIettt7VM344U59ELzlQQJ+cYqrFQ0MS2Bcth8McSiDReIIiVucoUN4jDV6HD\\n\",\n       \"LCBEkvWWEhtXtsE+OyMDNkgjcKBDGXF8idTpF/58RrgfGmLXJ1KWoQuxgSYl+XOAjQ9k24E884QQ\\n\",\n       \"Nz76JkoIIYQQYgJ6iRJCCCGEmIBeooQQQgghJqCXKCGEEEKICZyMWI5uKQipJXphk+X5BpBWA1Fi\\n\",\n       \"M5GqG7BwWzYbO5HNx6Qmz7cb34ZQH1+bfIJ4m31ZA3Z7IOZ8wXNA2sgk5whqMDteRoA2sWTuIXnt\\n\",\n       \"uF3WSdxp8CnR/eDP3fz0qXq9nijNgz93cV6nmBdmUDNAaM5E+S/sbw/oswNGbptZIXY0iuuRXOOG\\n\",\n       \"tD1ETMEed/1wkAEbeIEJ22ZmyQ3+YH2RtBNOH7m1rTR+W0OBa0pjzaGN7ClB1ku4v+D7VGZ+OFyb\\n\",\n       \"RGZWYINi/N+qZGBCHDuwQwhxI6FvooQQQgghJqCXKCGEEEKICeglSgghhBBiAqGwVMnP5w7DSHlD\\n\",\n       \"CCGEEOIGoBQuReubKCGEEEKICeglSgghhBBiAnqJEkIIIYSYgF6ihBBCCCEmoJcoIYQQQogJ/KWP\\n\",\n       \"zhNCCCGE+GJA30QJIYQQQkxAL1FCCCGEEBM4kZeoEMI3hxAeCiF8OITwypNowxc7IYRnhhD+XQjh\\n\",\n       \"j0II7w8hfN+18vMhhHeHED4UQnhXCOHsSbf1i40QQhNCeF8I4V9fW9Y5/zwSQjgbQvilEMIHQwgf\\n\",\n       \"CCF8rc7555cQwr3Xni1/GEL4+RDCXOf8c0sI4adDCBdCCH/4aWXXPcfXrsmHr322ftPJtPo/Pf7S\\n\",\n       \"X6JCCI2Z/biZfbOZvcjM/l4I4YV/2e34T4DezL6/lPJiM/trZvbd187zPWb27lLK3Wb2a9eWxeeW\\n\",\n       \"/9XMPmBmfyYc6px/fvlRM/vVUsoLzewrzOwh0zn//9u7nxCryjiM49+HbCA1BAmsdGKG0EUQ0RCi\\n\",\n       \"UgjhwiLGVuXCkKLWBZHQLNq2Cl25URNxMSImOq4iaFEQlGIR+GdhOOgUMxOV/VvN4NPiPTLXwQtx\\n\",\n       
\"uOceuD6f1T3vORd+PJd73t+9973nNEbSCPAOMGb7aeABYDfJvNeOUubJTvfMWNJTwOuUOXUncFBS\\n\",\n       \"fmnqgzZC3gxcsz1tewE4AexqoY6BZnvW9g/V43+AK8B6YBw4Vh12DHi1nQoHk6QNwMvAYeDObQKS\\n\",\n       \"eUMkrQFesP0pgO1F23+SzJv0F+VD2kpJK4CVwC8k856y/TXwx7LhbhnvAiZtL9ieBq5R5tpoWBtN\\n\",\n       \"1HrgZsf2TDUWDak+OT4LfAussz1X7ZoD1rVU1qDaD3wA3O4YS+bNGQV+lXRU0kVJhyStIpk3xvbv\\n\",\n       \"wCfADUrzdMv2FyTzfuiW8eOUufSOzKt90kYTlWsq9JGk1cBnwLu2/+7c53J9i7wePSLpFWDe9vcs\\n\",\n       \"fQt1l2TecyuAMeCg7THgX5b9jJTMe0vSk8B7wAhl8l4taU/nMcm8ef8j4+TfB200UT8Dwx3bw9zd\\n\",\n       \"QUePSHqQ0kAdt32mGp6T9Gi1/zFgvq36BtA2YFzSdWASeFHScZJ5k2aAGdvnq+1TlKZqNpk35jng\\n\",\n       \"G9u/2V4ETgNbSeb90O1csnxe3VCNRcPaaKIuABsljUgaoiyGm2qhjoEmScAR4LLtAx27poC91eO9\\n\",\n       \"wJnlz416bE/YHrY9Sllo+6XtN0jmjbE9C9yUtKka2gFcAs6RzJtyFdgi6aHqPLOD8keKZN68bueS\\n\",\n       \"KWC3pCFJo8BG4LsW6rvvtHLFckkvAQco/+o4Yvvjvhcx4CQ9D3wF/MjS17ofUt5YJ4EngGngNdu3\\n\",\n       \"2qhxkEnaDrxve1zSWpJ5YyQ9Q1nIPwT8BLxJObck84ZI2keZxG8DF4G3gYdJ5j0jaRLYDjxCWf/0\\n\",\n       \"EXCWLhlLmgDeAhYpyzc+b6Hs+05u+xIRERFRQ64jEREREVFDmqiIiIiIGtJERURERNSQJioiIiKi\\n\",\n       \"hjRRERERETWkiYqIiIioIU1URERERA1poiIiIiJq+A8y9O8S/Rd3RQAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffaf3877710>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# the parameters are a list of [weights, biases]\\n\",\n    \"filters = net.params['conv1'][0].data\\n\",\n    \"vis_square(filters.transpose(0, 2, 3, 1))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The first layer output, `conv1` (rectified responses of the filters above, first 36 only)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 29,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       
\"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvVusXed5HTrmuu/L2jdubpISSVEUTd0S3a0okpKYseoo\\n\",\n       \"TRs7TWvFaRPXSYAA7UFeCuPkoUBjFMhL0gYI0IeDUyNoXMM5QgPLRhJfEje2oxhSoosp2TIlUVdS\\n\",\n       \"JCVyk/u+122vdR6Wx7fH/Oe35lp7ywrd4B8vm1xrrjn/+/z/8X3f+JJer4eIiIiIiIiIiIido3C1\\n\",\n       \"CxARERERERER8X8q4kYqIiIiIiIiImKXiBupiIiIiIiIiIhdIm6kIiIiIiIiIiJ2ibiRioiIiIiI\\n\",\n       \"iIjYJeJGKiIiIiIiIiJil3hPNlJJkjycJMmpJEleTpLk/34vnhERERERERERcbWR/KB1pJIkKQJ4\\n\",\n       \"EcBDAN4C8PcAPtbr9b73A31QRERERERERMRVxnvBSN0L4HSv13u91+u1AfwJgA+/B8+JiIiIiIiI\\n\",\n       \"iLiqeC82UtcCOCP/P/v9zyIiIiIiIiIi/lGh9B7cc6itMEmSmJcmIiIiIiIi4v8Y9Hq9xPv8vdhI\\n\",\n       \"vQXgkPz/EPqs1K5Rr9exvr4OAOh2u/b52NgYAKBYLAIA2u02ms3mju599OhRAMCZM2fQbrcBAEmS\\n\",\n       \"2F99HlEoFDJlIcrlsl3jlSX8bZIk8PzUKpVK6rpOp5P7XKJYLGJra8v+z3uzTh4KhULm3nnP4HMA\\n\",\n       \"pJ5FjI+P278bjcZI9xuGUik9VDudTuaayclJzM/PA+j3J9CvG/uVKBaL1i5armKxiG63m+oPPlef\\n\",\n       \"V61WU7/VPuSzer2etSnbflAb8H58VrPZxN69ewFsj8+lpSVcvnwZwHabNhoNtFotAOl+YN/MzMwA\\n\",\n       \"AB566CG77rHHHrN2mZubAwD7bnV11erhjUned2JiAvV6HQDw1ltvZa7j2G21Wm4bXHvttan7vfHG\\n\",\n       \"G5l77N+/H9dddx0A4NSpUwCA5eXlzHUTExN2vxdffDF3nPO7iYkJrK2tAQCmpqYA9Oft4uJi6vof\\n\",\n       \"+7Efs7H8jW98w8pP3HrrrQCAy5cv48KFCwDS7cbn/emf/ikA4Pd///fxrW99K1N+rm1EuVzOjFkP\\n\",\n       \"Y2Nj2NzcBADcc889AICbbroJX/rSlwAgUx/Fj/zIj1j5v/3tbwPot1+IRx55xOZS+BdIry8cvxyL\\n\",\n       \"4+PjtgZ683V6ehoAsHfvXpw+fTr1XZIkOHz4MIDtNVWv0fVHxxtxww03AACuXLkCoD92dI6wvGxn\\n\",\n       \"jtPp6Wn7DTEzM4P9+/cD6M8RwB/32h+s2+bmZqpc+nzeGwD27NmTaQPFnj17rD/z1vRCoWCfe2uc\\n\",\n       \"Pj/PP5ptqm3G+/xD5OctlUrumAmRJIn1HctaqVQy63He74HhdfpP/+k/4VOf+tTg8g4t6c7xFID3\\n\",\n       \"JUlyBMA5AI8A+Nigi3XTAvQnHxc5YnV11RprcnISALC2tmaDlt/VajVb+DY2NjLPIHq9nk36V199\\n\",\n       \"FUB/4LMxV1ZW7Drvt3mbAnYmJ7qiVqvZAOUzyuWyvZR04QsnX6FQsPJxYfEGSbhYeGB7sYyDNn15\\n\",\n       
\"0OdwQ6sbDNYvD9PT09bX3oZMX8KcVF6dWI8kSfD6668PLCevG9SHvV4vNTY6nY47mXWzpH+9+4Xf\\n\",\n       \"exOX9eR9O52OvVy5aVpfX7dFPJwfikKhYJsDjqutrS2cP38+Uyb2FzFss6uLq/ci+dCHPgQAtql4\\n\",\n       \"7rnnrM0PHDiQeW74wgK2N5OTk5NWT28DRezbtw8LCwu55SZmZ2cBwNYNYHseHj9+PLPxePLJJ/Eb\\n\",\n       \"v/EbAICDBw8CAD7zmc/Y95cuXQIAvP322+7z2NZsg1/7tV/LbKT2799vbcm+rtVquS8Abq65dgHA\\n\",\n       \"xYsXAQD/7t/9O3zve/24HtZHX/BEq9WyNTLvWW+88Yb9Nm+TCiAzR9fX121sc1PUarVsfLBf9+7d\\n\",\n       \"ix/5kR8BAHznO98B0G87brC5KZqbm7PDhK6zHO98NwDAK6+8AgA4ceIEgH5f6juBcz08EDYaDdsE\\n\",\n       \"sXxLS0uZw4mHmZkZayv+9vjx4zh7ts8jhM8HtufA9PQ0JiYmrN1CeJ958OZwsVjM1FPXVO86fjds\\n\",\n       \"EzY5OYmpqSmsra1l1qU8EmLQdRw7WjbvMEv0er3Mu0PfnR75oGuwt3Z7a/Tv/M7v/MNupHq9XidJ\\n\",\n       \"kv8LwFcAFAF8OkbsRURERERERPxjxHvBSKHX630JwJdGvDb1d21tzU4WZF6uXLliO0rueguFQsYM\\n\",\n       \"tbGxYbtX3YGHJ8NqtWqnA54ClpeX7WTj/ZbQU5G3c+W/a7WanTCJRqNhn+npgydHMgidTsdONixL\\n\",\n       \"sVi0k+MotP8geO0Wfg/0TwFh/byd/szMjLVHeOodhuXlZfeUy35XCpunDu8EwT73KHQtq8d6KUYx\\n\",\n       \"a5ZKJfuez+12u0PvDfT7nIwpy7q5uWnl85gSMg0bGxuZE5/ej+zI7OysMUc8Pa+vr9upWOvKU7OW\\n\",\n       \"neNyz549APrsEsvKcipTy2fdeeedxhw899xzqecA2yY0YHsO6zhmn5O56nQ6VicPXCMOHjw4stm4\\n\",\n       \"VqsBABYWFsxcSCwtLbm/IdtExkRNLGQIh+E//+f/DAD4r//1vxpDQlPh9PS0sSfEDTfcgNdeew1A\\n\",\n       \"mo1j+b1TPU/rY2NjOHSo71nx9NNPA+iPK5qKOZ4uXbpkfeitqcRrr72WYkrz4J3++e8333wTQN9c\\n\",\n       \"ynnDMXn69Gl84hOfALDNSCnYPuPj49YGXEe3traM1eE45VgCgL/+678G0GfEWAZC60Pz8FtvvWUM\\n\",\n       \"u4J9fttttwHor0mhKW5lZcXM5VwTz507Z2VWRopQBsjrV6JcLmfeJ6Nia2vL2pzP6HQ69pm6TYTP\\n\",\n       \"mJyctHblX12DyUSNjY1ZXXR9DMfMsLnqre96j7CNht1vkOUhhFo/duOGEpXNIyIiIiIiIiJ2ifeE\\n\",\n       \"kXq3CE/eR48etdMfT1S6i1YmItwBqz2XJ29lrnhKKBaLxgLwlK02VP69fPmynfTJILRarcx1S0tL\\n\",\n       \"dsrhM9QnxLN5eydclnlra8t2zWQNNjY2Ug7PQP/UlufTMGy3zWd47I6yMfzLkxe/53f83julqs07\\n\",\n       \"ZAt7vZ7LuLEu/K1ek+ffNT4+bs/gdXqq8RzzQwd0vW5ra8t8fHZ6QtRAALZvkiTmyMqTtLIlLKs6\\n\",\n       
\"8GvZOR+I1dVVGx+sz+rqqo1Bsp6rq6vueCNzxJP1xYsX7UTNv51Ox3yGyELNz88bg0QfGJYRgPnt\\n\",\n       \"zM3NZcpcrVZtznHuN5tNd44cO3YMwDbbVq/XXRbDA+ff1NSU9SHXix/90R/FN7/5TQDpsUXfMpZr\\n\",\n       \"ZWXF+ivPV03x1FNPAej76fz0T/80gG3/Jm+O3nrrrdbWTzzxhH3O8eY5Jd90000AgM9//vPuuCTD\\n\",\n       \"yP5/9dVXbe6SgTl06JCVh2vhIP+vEOVyOePL4s3jJ5980v59++23A+izNgyCYP96dVRG58iRIwCA\\n\",\n       \"119/PeXfBPTX1o9+9KMAgEcffRSAP3+OHTtmz6Gf2tjYmI1FtkutVrOxQ4br/vvvt+eRWVtfX8dD\\n\",\n       \"Dz0EAPjCF74AIG1h8Xx9WPalpSXrG65TugYPW7dDX+PwN94ayXJ4/kghW6koFAo2f5RZHwWjBlIo\\n\",\n       \"+KxKpZJhxXq9npXf8zsN/2qZFe86GOpd/fpdYljkAKFOleoIGDrLetDoKaU12fgeneq9UNWJnWYA\\n\",\n       \"Drr9+/fboqQDlpPTu46dqYODKBQKNpn5Il9fX8+YN9V8yDIPckpUM1046XTDmEffe98Vi8VM1Ey3\\n\",\n       \"27VNgTdAvfvkOZt3u91db156vV5mY6n0rQYFqHmEzw/7Zmtryy0D2yCPnm+1WvZbvS9fmlxIi8Wi\\n\",\n       \"fbZv3z4AvsO1V45Wq2W/Yd0uXLhgY1bNHh44frgp2tzctOg5ls8zgz/99NO5UWLsX298jo2NpSIH\\n\",\n       \"AX+DwTYG+i9foB9BNurLnuPgpZdeyozLhx9+2MqmmxfWmRvBdrttddnpC+Gxxx7Dv/k3/wYAcN99\\n\",\n       \"9wEAXn755cx1TzzxhK1z3FgsLS1lTJ2VSsXWpX/yT/4JAOCLX/yi28bcnNPsB2Tb+MKFC1Yuzrev\\n\",\n       \"fvWrbl24kSG2trbsfhwbly9fzpjVdc06efKkfcbNKdd6XVfYb7pWMnJwenraPThyA6X1D10Vrly5\\n\",\n       \"YqY6mqM3NzftfjfeeCOA/hjgZpjQoIG77roLAPDMM89Y5CMPGK+88kpmM1IsFu37l156CUB/TWeb\\n\",\n       \"33HHHQD60ZSs7zBncy+wxTvE5kVbD9rkeBHdo2ycCoVC5nmjzhmNJNcNaLghTJIkNyBs1AAarvmt\\n\",\n       \"VsveA6O4a1h5R74yIiIiIiIiIiIihavKSClbNCq1pifzUUL2lX3wQiCHORZzd0paWU9F/O78+fN2\\n\",\n       \"MuQJttvt2m9I7V64cCGzy26328Y+KKMTsg379u0zVoE75U6nY6Hf/M5zCAd8B7udhLvzL+/JHbye\\n\",\n       \"4LRuHt3K+2lobUh3V6tVO8UolZ/HRPG55XI5Ez67ublpJz2Ol2GnorzTVrVatXZju0xNTVn58kLE\\n\",\n       \"t7a23DbnaZOn7K2tLfuMJoVQ1mEQtM/ZfhcvXsxowHgh8cA2I6RtRFOdmnHJbLANzp49O5LkxcLC\\n\",\n       \"gjn9skwzMzOuSTl0kK9UKlaWQWa1vFBpwusDzxEY2J7PygiwbXYqGfK3f/u3+MhHPgIA+PCH+1mz\\n\",\n       \"vJBqdT7n/D527FiGkWq1WiaFwL/Hjx/H3//93wOAG06vshUcC9dccw2Afj9zrSJDNAg0FRLdbjdj\\n\",\n       
\"dpmcnMwwjPv27cvohjUaDevPQeH7rK+ur0DfpMz2Ck1jAPDP/tk/AwD82Z/9mY1ZMpiLi4umpUXo\\n\",\n       \"+k52aWFhwcyQZNHUeV3nJhnG66+/3soeWji2traMiVLndDK6HF8f+tCH8PjjjwNISzCMCm3L0Cqj\\n\",\n       \"70XVXGI/6Xs57BN1q1EmLGTFvN+q3piuleH7Qn+bN5eHOYfrs1T6Aej3Q+hIv1tERioiIiIiIiIi\\n\",\n       \"Ype46s7m3IEOE84KWYVQwZvXh8yLCizyfu12231uKOY5OTmZy16oOGh4oq1UKnZvDa3W3TDLF7IZ\\n\",\n       \"Kr7Jv4P8QN555x17HtA/hbKNBu3Uwzby1HBrtZqdgHiqLBQK1kZeSGqeDIEKLCo8X7WwLaenp42J\\n\",\n       \"ZJlUfI/l8xgCb5yMCm88ec8YFDofwhMCrVardk+P0eHJdVSUSiUbEzztKis3TFSR7Kj6UpG50rpz\\n\",\n       \"HKgzfB4jpXIZIevV7XYzAoCDGEiWi0zNxYsXXb+QneI73/mOK7dAVozlq1QqO2aiFC+88AIA4Gd+\\n\",\n       \"5mcA9JkfT1GcYF/y7yCotAOZKG/+e5/R3+zGG29MBSMAfaZLfVSJPGFKzoepqSm7jp+tr69nBC/n\\n\",\n       \"5+etTfX5/IxzfWVlJRUwBPTXhvAZOqc0cINjin1Zr9fx/PPPA9j2+bpy5Qruv/9+AMDXv/51AP22\\n\",\n       \"D8ejSil4gTRkTnXtoc/VSy+9lApeCsF59Gd/9mfmn7ixsbFj1kTX9FF9hsLArPA+vMcoTNMg60j4\\n\",\n       \"TlVfKn5Xq9UyzvCVSiVlHRlUH5Xx8Ngslk8ZPn5WLpdzxUgH4YdmI0VoI2iaFE4M3VBplBjvFTq3\\n\",\n       \"FQoFaxguwjrRNLosfIF3Oh1TQ+YkVUdgLgT79u2zCZGn9VQqlUZSu9Z6EKoF4mkqKUU5qrnUm2iE\\n\",\n       \"Z+ro9Xq55rG8VAiDTDGqTcR7hH3oaemoUyDLruk2VNNktykNvA08sD0GPf2nPPpdy8J2WVhYsPuF\\n\",\n       \"Gjk7gaqYsww0ZRSLRSs/28fr3+PHj2dUwk+fPp1y9gWAD37wg1Z3mjKGOZ/qBpzl43y8ePFiKr0D\\n\",\n       \"4I+XtbU1+57mqBBh+47a92+++aZFItLE2mw2bRPB9k2SZGTndg+f/exnAWw7mz/00EMWxciX/sTE\\n\",\n       \"hL1Ah0Ukcs7RLHTttdeaZtSokVS8rtPp2KaFG0gv2g1IO/6H4HxcWlqyTRPb79KlS7Zp4bxeW1uz\\n\",\n       \"fvLMRhpZzX9z/J08eRJ33nkngO3oyG63a+VjVN6RI0cy5vHZ2VnbEN18880A+mY6boIYCXnq1Knc\\n\",\n       \"CE19N3CeeUEd3DA/+OCDZrLLQ6lUsrE47ADkwXvH6LrobYa8+RJ+NkibiX2igTdeijBNowb016zw\\n\",\n       \"fTIokGYUdLvdjIlS33H6ng0P/6OkpfEQTXsREREREREREbvEVWekCG8n7FHo/MxTTS0UChkzju5O\\n\",\n       \"lckJWQ/dMSszwNMJPyuVSvZvntYWFxfdUM0QehrPU0/XcnkOedzR6zMG6YfkIe+0rsygKo2PEh6r\\n\",\n       \"yGMGarVaRuMJyA879cKAVZWY8Ji/vL7RPFOagzB8npoKtcz8TZhcWTFICoJtmsdEVavVVJABQQaB\\n\",\n       
\"rNbm5qblNWOI9VtvvWVmK20PMlZkBprNpjGwrEej0TCmhure8/PzpqfD++blwwO2Gcxz587Z6d47\\n\",\n       \"YQ4LBKDZim0/yml5FKytrRnLRUbn7Nmz1v9kVnbDDChY5z/4gz8AAHzgAx+wfr3lllsAAO9///uN\\n\",\n       \"4c5jpAqFgrFjf/d3fwegL4PA+ZCH/fv3p/S++CyaHMmyeFpje/bsyWiBKbSNaKa6++67AfRZo5AZ\\n\",\n       \"0nGvDvKhk7beV7/jGCTrNTMzk0paDvSTOofPVfOcyhnQjKoJiDm+1arhZRrg3OT8mZqaypi8G42G\\n\",\n       \"q5cVrk9JkqTm9ajQ9QtIB7nkMS67Ye7ZX8pQsr5bW1tmdmX7DXqHEBrE5Mn0hO8az6yvDv76nvUY\\n\",\n       \"uN0yUCEiIxURERERERERsUv80DBSRK/Xyzg8euh0OrZ7VX8nj3XwwjL5W+52NzY2cp/LU1Oj0XBP\\n\",\n       \"3zypKJviMTX8ntd74pBadlX8zhM1260fEMF2U2dDnobUKdhDHhOl34XCnc1mM1NudUpniHWn03Hb\\n\",\n       \"nCHaykR54bteRvFQiqFaraaU6kNwbLRaLZcpZT1HdURmvRcXF3NPRXzu5OSk6wzN3+qYZbtx7Gj7\\n\",\n       \"sF3q9bqNaZ4gV1ZWjGkg65okifnrkP147rnnLBfcqCc6tkun08llHPPGsfpwaJaA8Jph9/Hwzjvv\\n\",\n       \"2Oma/ka1Wg3z8/NWbn42Cm699VZjPLz1hP5Qr732mrUN16SZmZmR2nVhYcFYJQqG3nPPPcYCsOze\\n\",\n       \"uBnUPiwzxwSV3RXFYjEjYeDl6dNncFzdc889GXFLBZm4gwcPmp8W71uv1zMMca/Xy+Q+vPXWW229\\n\",\n       \"YD3+5m/+JpPDs9PpuOOFZQhlaVj3QVhZWTHmknNrbGwsI0Px1FNP4Vd/9VcBbLfv+vp6ps+HzZVB\\n\",\n       \"CB2y3y3CNlKJCNap0WhknlcsFjNCwPruIvtdLpdTAsBE6DfX6XQy78okSTLvJx2Hur6H2Qw8i4LK\\n\",\n       \"OITZNPKQvNsX8G6QJEnuQ/PSgXgIX9BAvoKrF8nlUbBe4uGxsbFMmo9B4ET0zFeKYRGLQH8RGcUE\\n\",\n       \"FCIcFMPwbqIXlFoNtZb03lQ+9pzIVXVewe/ZBrrRG7Z5CVP6dLvdkUyUGj3pbQxVk4VQ8/G7NQMB\\n\",\n       \"2yalUqmUqxxOTExM4N577wWwPT7/9m//1r6nM/ni4uK7Wmhp1vacSfOgCWE9c0UYOTsKtG84Jnbz\\n\",\n       \"MlGzAvGzP/uzALZNbM1m09VsC/HII4+YnpMX9ZaHWq2W0pQD/AwIR44csQ0KX8yLi4vWdtyIfPvb\\n\",\n       \"3848Y2Jiwl7wGhHITTPnVGj+A3yduCRJRgpyueuuu2wTzo1Pp9PJRPKVSiX7TJO6sz/ZR3fffXdm\\n\",\n       \"Y1av161uWn4eMDQNEZ+hL9dw/RkUccwE1DRzA9nN69TUlPWH3pfPpd6U10eDXB92uqZ77ztd2xRe\\n\",\n       \"4JMHzhV+32g07NCnbakJjFkWL/lx6FZRLBYz60OpVMoklG42m+48zHv/7wZcz3u9ntvo0bQXERER\\n\",\n       \"EREREbFL/NCZ9oDsbniYFpCay5SxANKSA3pK5YmAu96VlZWMac9jLQaFuqtiNNCnPD118jB0Wuum\\n\",\n       
\"91P5Bi2TXqf5rfSUMopqe4jQJNZqtdxEmCHdWigUrJ08ClbB7/XUSabJO/Fpe7AtlXHwmKgwx9/s\\n\",\n       \"7GwuI+jl3yM8hpB11s9+UKceD15iUQ8czzMzM1ZfzxzK07HKJOym/JwHO2W01WFUT595iW4JNe0p\\n\",\n       \"+6GOrqGcxk5yinkMGRk8MjV/93d/N1KdZ2Zmch2EPRZdvwvXDu+6c+fOWV62X/mVXwEA/PN//s+N\\n\",\n       \"weF3HtbX1zOne2DbtKfzXPX3gMFj0VPADq/97ne/a8ELaqbTPI58Bp3+1XQX9q/qyREzMzOmjcXA\\n\",\n       \"i3Pnztl96NT/4osvZuaIZlbgepEkSSpJMkGml1IWTzzxhDFRqn1FBl4tHhwblA/xxmSr1Uqts7vV\\n\",\n       \"SNOMFMP0pEadz+HYLhQKGRO25owNAweA9PvCY5C9d5tnJg/fU91uN3dN88yMHkJ3lzxERioiIiIi\\n\",\n       \"IiIiYpf4oWSkCJ5mPAc/PZ0qIxX65uiJSGUDwnDWsbEx2+3qjpUOcWRMNIzS2/XqKSvMb/T222+7\\n\",\n       \"J4MwQ7pC/XU8BiRkiNT5blTHWIXu/vN24p70wjCfsVC9dmtrK5eJUlYu9D3Q8rHu4+PjdpLnSejy\\n\",\n       \"5cuZ05eOp7wTWrlctrGgPjteGK0XKLBb6MkzFGH1yhj+hm2kbcs+IgsxPj5u/kh0eN3a2hqp/KNm\\n\",\n       \"FQCyIqPNZtP8V7zwbK8fWOZyuZxRTy4Wi66Io34flrVUKtn48OQ0FHTUpy/LyZMnc4NgiJMnT5qy\\n\",\n       \"vOewrcEOYZ0HKcSrvxzQZ2r4DKpma32G+ZlxvtKHsNlsZpiGTqdjZR3VJ1GDO8K53Gw2za+L9bl4\\n\",\n       \"8aKxRfv27QPQXyvpkK/K5aGA4ne+8x0cP34cwHYWgDNnzti6TWZK2THOi1qtlpEy6fV6KSFooM+E\\n\",\n       \"eexdmJPv2LFjJmdAqZAnnngisya1Wi37rfrRegriRMi87hSjWCe8Oexl2QCyrM4gFf3wnaAWE11r\\n\",\n       \"Qn/iJEkyrOwgBilP5NoTINX75Al3qqL6MPxQb6QILxqv1+tlOnOYEzavUxrV6xw1VTE6SE1G6jgH\\n\",\n       \"+AuL5xwKZDdXnU7H7u1J6nsvGI1YyDP97OSlPorjoT5bdTy8lxfB68bGxjIvrUEvX/5GowZDrRh9\\n\",\n       \"mbGe3gvIcxQdtinROnJM6UbFi04ata3zzGkcV9VqNWPCGmQm4ljlON3Y2EilwAD6myaWmWaGer2e\\n\",\n       \"2pSG5aNjrur0EN6iuX//fnfDwD5kO6qG2zBqnWVQDbdw3tZqtVTfePprYd2GJTpV0FGcppqpqamR\\n\",\n       \"NlJPPPFEJiGugv05NTVlG0uNrqTGFx3BNcKV7bK4uGgveJZpz549dh86xc/MzGTSGGmUaqg7FILr\\n\",\n       \"0iCV8xA6p7125jqgaYM0vQuQzlKg44RloHnw1VdfddPVhFGdxWLR5gPNl2NjYym9NCA9xnh4O3/+\\n\",\n       \"vLWBt/5wfdF6c0OlQUyqjh+mhdI5zxf35uZm6p2wm4wHo0A3E6FbyKA1mm3E8acJgBXheqfvC90g\\n\",\n       \"e3VjX3NdXl1dzZ233hrtZQFRhGRMkiSZjfQoiurRtBcRERERERERsUv80DBSnkOzMiKeUqmeaAA/\\n\",\n       
\"OfDExIT91jvhEN5nyjLo96o8C6TpT5ZvUM67kLlIksR2vHp9qJexG+fgQSyJ55w3qv5OWE/9N8uv\\n\",\n       \"Ya88Tayvr2fMPNoupOKr1aqFLCvTF568PWmKvXv3ZhSX19fX7ZSrYbdhWypLMYwpCeu9tbWVcXL3\\n\",\n       \"UKvVMqZCdbjXBM7sD57KKpWKm+ONbIbqoKj2FD9jW5Gt8E6AGl5Mk5EX/l6r1TIM2fT0tMtIsS11\\n\",\n       \"bOed8NgWY2NjqfyBQP/0Pkg/igjV/3WucJ1YWVmxcoVh9yE4LpXNHhWqnA2kWWqWa2lpKXPPiYkJ\\n\",\n       \"a3+O3dOnT7tmFCpks5+0X6iHdPjw4QwjpYzkqON9N87OeY7ALNOBAwesXfjZddddZ+s1v0uSxL6n\\n\",\n       \"av91112XUgcPn8v+GxsbSzF+QH+shGNxY2PDWCo1GfO5NKG++OKLGWZVTalsWzrMA9trl2ZR0ByT\\n\",\n       \"YT7Zzc1NYx8bjYZrXvxBIC8wYxDYvjreuBbR+f7ChQuppNH8HX/LNp2cnLS+ViY+NE3X6/WMXp++\\n\",\n       \"4wblAORzw3dbpVLJ9KGadkPLUx4iIxURERERERERsUv80DBShObL82ztujtURXMg7S/BHaZmjNfT\\n\",\n       \"Z3giKJfL9m/uqDX/kpYvZC5UOE1P4KF6tj6P8Bzk9BmeKrpeH568C4WCMRE8zYTwnPNGdTLMU49V\\n\",\n       \"hDt9PXERBw4cMMbllVdeATDYQT5kQBqNhrUX21nZKLIZ9XrdHE4VXj/ksXHq9M0To+dboAgZyamp\\n\",\n       \"qYxfnX7GE1qlUrF+5d96vW4nNJ5sDx8+bCdeMjVbW1s4cOAAgGzOMMBnoqgQXygUMj4jIZPB6+hr\\n\",\n       \"xbbwoCHbesr2To7qBwWk/eJ4/SginWF4vILshAqbhrnAQrBfjx49CgB49tln7Tv1VfIQ+lL90i/9\\n\",\n       \"Ev74j/84c13Iot5444347ne/C2C7/9WPSFlC5sajYKjnO+ixaMOc7BVsN64rXoCIQp3S1UcS8H39\\n\",\n       \"lpaWTGaCdXvjjTcybKG2wdNPPw0AOH78uPUh+1yVsjnPb7rpppQfIdBvH/7m9ttvB9APEmC7kcFS\\n\",\n       \"9ltZ4TBIaO/evRnWSxlHDbLhnGN9q9Wq9Z2npL2+vj6yjxoxqpVB17FBAtYsNxHeU31gv/a1rwHo\\n\",\n       \"+7GF1+k6y3oq08S+8daIUXwTB8ETSG632y47Pmx8e7jqGykvUiEvzQs7c2xszCZlnuN4oVCwxuJf\\n\",\n       \"L5JncnLSJjM3UAsLCynVX95PTQTAYCdATUUQ1tejC0OToULNArpQhZ/1ej1bMIaZQfIwzBlXkzSH\\n\",\n       \"g7FarWY2Te12OzMhz58/b06jSpl7oPMtf3vp0iVrJ32WOibqX4UmtVTlXS8JdhjV0el0rL+HmVpD\\n\",\n       \"E8HU1JSVlb8dHx+3zzh+5+bm7DccY+vr65m61Ot1e8nwBaNOujTtANuOs7y+3W7bBvT9738/gH5/\\n\",\n       \"cCF74YWtMzCAAAAgAElEQVQXMvVhG2xsbFh0FTdAYUJYIK1pxesvXLhgdacpYGtrK6P/pg78fAl7\\n\",\n       \"8/a6665LLdYcJ2EKE8BPVzTMXELVbI7TxcVF20CFytsKnQNst49//OPuRorgZu1jH/sYPvnJTwLY\\n\",\n       
\"Htuql8SFfmZmxkx7n/70p61MYXBAuVy2hMx6qBglCq9SqVgS5DyT7Pz8vD3XW9uGqdizvz7xiU8A\\n\",\n       \"AP7oj/4os7nVTT3b46WXXrLDBDek7Ctg+5Dwta99zRT/meBZwY3or/7qr1ofcbzMzMzYhkfLxPcF\\n\",\n       \"N0+DNME4H3Vd5/24SV1eXs5srsrlsvX1DTfcYIfNUTGqJpTngqIBN6OY/BqNRkYv6+zZs27S4nDc\\n\",\n       \"eS4BGgmt6yfXQI2yHLWeO9W842E8mvYiIiIiIiIiIt5DXDVGijSt51genkQ3NzczzJXm9uFpxzMx\\n\",\n       \"dbtdl1oluNu9cuWK7XLvv/9+AGnTnub/40lfWSBv1877qUZGaI7UUwBZg8XFxcyuvdVqZaj1ubk5\\n\",\n       \"K4uq8A5zoM2D5/RPlMvllPI5/2p+Pi2LQh282U8zMzMpDSPAZx00+eWdd94JoH+y5v3ILiizEf5e\\n\",\n       \"n+H1lTd2VFPE+82wkxq/Z7/VajU7rbGNdGxzLKqyNft3YWEho6V17tw5axfOlampKWNRlZH0WM6H\\n\",\n       \"H34YAFInes+Ux/a97bbbAPRZGpaBZfJYWU0OTpZAmWSO8bNnz1q/0SwJZPtENZdUW0oZGLIOzDOn\\n\",\n       \"faTzgePYM5dQ+VpZOeZTazabVsY8U4POAZb51KlTufnofv3Xfx0A8MADD2S+88wN11xzjbEdZJrG\\n\",\n       \"xsYyz1haWjJdI7bB9ddfbwwOzVVeAEe73bZ65uV2u3LlSoZR4T2B9PjgZ14b/OVf/iWAvtk6dNZX\\n\",\n       \"sD0mJiaMEdSsEiwrGe477rjDVM7JSOn6dtNNNwEAPve5z9lnOg489jlkUhYXFzNrtNaRzzt06BDO\\n\",\n       \"nDkDIG1NYd94qvdnzpwx9tGDvh818CQsg16f52Q+bG0LE9rr2qbgeqLvCLJUaiYl68n1TtdyrgOa\\n\",\n       \"pWRU+RIt5yjSCXrNTkyJkZGKiIiIiIiIiNglrhojtbS0lDqdcqepvgUqWhf6QwHIKJF7fi76G+6K\\n\",\n       \"q9VqhmmoVCrmH0ImSv2SNOzfy7sUfqbMiuZ64wnYO8HryZrPVTaNpxzusi9fvpxhrnq9np1sPIVs\\n\",\n       \"lVbw8sflnUQGhUmHubg8FAqFDFPlsR+qVM3r9+/fb34I6uybl59JRU5HsfEnSeK2pZaL140qvukF\\n\",\n       \"TXAcsw82Nzct1J1/JycnbSzydDo3N2fty3Hi+cD1ej37rQr8sa1ZptnZWTutM3x8aWnJdcz/F//i\\n\",\n       \"XwDYbtOnnnrK+oh+Tgqeyvft22f15Xhvt9v2DDITepLlOFxZWck4jlcqlUwAR3hqZHlYBmWhNJCB\\n\",\n       \"5dF2JqPhKRnrSZWMn+f8nueQ/e///b/PqHArWL7QYRkA7rrrLjzzzDOpz1ZWVqzvVB6EzAvXorNn\\n\",\n       \"z1oADXMGXnfddRkGod1uu8LH9F8i4+SxxqqKr/dgO5NNOX36tH3msYFklzY3N20+sl08hl2d5rlm\\n\",\n       \"bm5u4hd+4RcAAJ///OcBAN/+9retfHxutVq1OUTn/gcffBCPP/44AFjo/pEjR4xl03qHDMf4+HhK\\n\",\n       \"0ZxlUV8wPjcU9tR3ocfstVqtXIZO15hR1ju9Xv1dw3t4efqArNVB5wLn4B133GG5BNl3rVYrxYAC\\n\",\n       
\"/bmojv1AfyyGVqhBWTBUHJr18fKrsh9YN2WpdpKX1sNVdTbf3NzMTNxms5kxxemE1+/CCL1CoeBS\\n\",\n       \"ySEGfRcmU221Whnp+vX1dZu86gzHRUtfxuGGoFgs2oDjS3NzczPjiKmbSX2Bc6Hgd56jN68F/KS7\\n\",\n       \"HtUM7DxxrWp35Q1C1fPiosGJ1ul0rI1oXvKclpVC1xcntWTUCZN1ymsXrbcmpgzHRblcHin9zaAI\\n\",\n       \"R0JNmmFbVavVjLbUpUuXbOPD+62srOSaajl2l5eX3Q1qiG63m3Hs5+KuOHr0qI3P559/3j7nhoFm\\n\",\n       \"ZC+aVdM8cPO0sbFh9/MckPliW1lZsQ0N/+r91OFfN/B0Gta+ZPt79ePLct++ffYC89qPG5VarZaJ\\n\",\n       \"slNwHu3ZsyeVgJfgxseL+Pvd3/1dAMBP/MRPZO7rbe4ajYZFr3FNWFpaMjOVziW2P++zsbGR2cgM\\n\",\n       \"WwM02bgXrRmae/VgyDl633332SZDX76cAxwTs7OzNhboxKyuDDQV04QLbPfR2toannvuOQBpk00Y\\n\",\n       \"tTc9PZ3Z+D7++OO4++67AWxHBhYKhZTyOZDuUzqMX7lyxYJmWMewnkB/DNBMzj7Sdxw3UKqKrvXb\\n\",\n       \"KXSz4Y0jdR/xoAE5gJ92TbWx+H5kUucQXFP5br3uuutcdwTP7Sdcw/PSqoXwDj6hudqLLveiKENE\\n\",\n       \"015ERERERERExC6R7DQk8Afy0CTpkR7O07rI0x5RM54654XyAoPMMGEeH1XA5o46PBEAfTaFJ2BV\\n\",\n       \"3A0dBAdpcoQhooPMkSGq1arVzWPUBuXFy5NUGBa+zxNDeNII4Z1UwvIkSWKnP03i6d2TJ1o+d3Fx\\n\",\n       \"0U4ZdEocxNCwD1kWr261Wi3zfalUyrBPyrJovdQpHEi3Cz9TRpKsjTrNaz4vPlcdLfV5QJ8BChlJ\\n\",\n       \"YHv8jpo4mvIAx44dM4pdnarZ53S0PX/+vBueTPMR+yo0O7G+O2U6OecLhcKOtI6A/lgjq6SOrOwH\\n\",\n       \"SjB4prNKpWJ14m+V/STrMDs7a/n38nDbbbeZWcNbvz760Y8CAB599NHMd7Ozs+Y0TzcDz/H18OHD\\n\",\n       \"ls+Pc+GJJ56wenIuvP766+Ygz/Fcr9dtftHxeZBOVzi/C4WCMchkT0qlUkqTj9eH6/rk5KS1c57E\\n\",\n       \"xqFDhzISFvV6PbP2as5AQiVA1KRIpo7WgG9961v2mSpwh1aNYrFoc1glRYgPfehDAICvfvWr9lme\\n\",\n       \"CRdAhm2tVqvWvuzLPXv2WJ9oHb3sEu/GNMX7aBny8sRqrtq8546NjZnlQusxSu66YdD2Y5mVCR9V\\n\",\n       \"rT3PMV/ZTLJhvV7PjbiIjFRERERERERExC5x1XykyEaF+Zu4+wO2d+HeyVbZD0+4U9VTPSdI7/QV\\n\",\n       \"MkMrKyvmy8Dr9ZSs/lChXVVPC1p2MlFqV/eYofCzQSxEuLvXtvJ24zvZrXvMlzoD8h5hu1UqFWNI\\n\",\n       \"lPkJT9XqUEi2anV1NSMoqGD7673oc+X1Q6lUyjgtNxqNjHK4d0rqdrvWx+o47gnAeiHLYRZxdeZV\\n\",\n       \"5opl9vqDZV5dXXVZRc/Bn3ViuywtLRkTypO1KocThULB/ENYvjBXFtAf92zTPBXgnbJRgM/eeBhV\\n\",\n       
\"tVmvpT+Sl3er0+kY0zQoXBzot6XX1971HNNend73vvcB8EV/9+3bh5/+6Z8G4GdXIOr1upWZ/l/3\\n\",\n       \"3HOPfcb5Ozs7a0woy/L222+b4zkZmkGMVNiPXvtsbW1lcp6qnIKud+H99u/fb8yWCqqSWVOGkN/z\\n\",\n       \"s83NzYyf0/T0tK0T9EE6evSojeV/+S//JYC+1EY4vo8ePZpR9d/Y2MgNpCETdeLECZPJUL+iUDJG\\n\",\n       \"Hcv5mfpPqv+c+tKG2AkLFUrFqL8h+0P735tfYZ47YLtf6/W6vRvoj6drA8tfqVSMNea9lT0OHfO1\\n\",\n       \"zNpGoci21nFQu6hMDv+GY1F9yHbCnF01017e9+FCVSqVMo7bXrk9M1mxWMy81BV82TQaDWvU0Pw2\\n\",\n       \"CKplw7IqHeyldPEc6DgAvUSQGlERpqEB0gtU+BvdlOZpwIS/YfnZDzoJWM9wUOq/BynHey9dLhSk\\n\",\n       \"zMfHx+0ZXFBarda7UmkPodQ04fWNmkbDa3kfwJ+QmvxSTYChftFu0hF40IUyfMbly5czY3tra8s1\\n\",\n       \"ndEJluaFM2fOZBamqakpe6HRLDRsrnjQdvQOBGw3NbVwLHppLTRhM68bHx/PbOAPHjxoi71uOlkH\\n\",\n       \"r89VCyxvo0/Mz8/bxsJLqsvULl/5ylcy31UqFXzqU58CAPzv//2/AWzrKynuuOMOM4/RhHXLLbdY\\n\",\n       \"+b75zW/ad+xrNWvS3Mc5+MILL7jrqvdyo2mPm8AkSUzpm5/V6/WMBtXU1JSNLa5dzWYzpXIPpE1o\\n\",\n       \"HLOrq6tWPr6MNcOBzmma1vi+WFtbs+d6kaaem8A999wDYFvdXn/rzR11kCf27dtn7wl1/g/Xhs3N\\n\",\n       \"zYxpdGZmxgIbvAjyQqGQ2Txo4MuoB5lRDgZarnK5nNl0qzmV0CToYRosxezsbCqJO9DvN85DTcnF\\n\",\n       \"z9R5Pjzwa2LkPG3DUV0P6IAeTXsREREREREREe8Brqr8gTqK6ak3ZD1arVbK0ZF/Q5ZKw+m9Xbln\\n\",\n       \"QuMu2tOYKZVKdsqiw22hUMiE6nqaJ7oDVrZHtayA/ikq3FF7bJCeFryQXi9RqLJQvJ+e2tSsGv5m\\n\",\n       \"a2srZVrlb0Oa2nPI3trasjZiPwwyG/AZPLlMTEzYtUr5atsA/VMqn8vTpCat9mhqNePxPuwvPU2p\\n\",\n       \"eThk8jS3k1LELL93oteysMy7YaK84AudD+F3Ot7DBM/e88kQANtO5KurqxkH26NHj9q11N8JnzcK\\n\",\n       \"8jTVNNGu6gMRg07drBf79fDhw6ky8jdhcuxqtWrMDNcarbeajUJ4J9tLly4NTMANbCule2i1WiZN\\n\",\n       \"QeZFQa2iRqNhbUOzZaFQsBB8hu9vbGy4ofNcO0KHW0W1WrU8fcqshYzu3NycMVFci/bu3WuMFO89\\n\",\n       \"MTFh81XZm1AWRnWzNEMA60tT4LFjx4xl0zLRyZsyCaurq5n14oMf/KCxTR7TxO+q1ap9r2tNOG6v\\n\",\n       \"XLmCBx98EABMi+rtt99OJQUH+msXmRld10PH916v58o8EJoFhBhk1uKzdX0n9H07KF+gXqfrCf9q\\n\",\n       \"UmVNCh2ywTpX+Nvl5WXXbEjk5YJsNBqZdUfXaM136b2n1NGe14Vmw1FMfJGRioiIiIiIiIjYJX5o\\n\",\n       
\"fKRGtdMS6ueSZ7cuFotuuGiek66q+hJkumq1mu1yeQ/1BVCEDnvertizLeuuPS83l5cXLzwpeT5S\\n\",\n       \"XngvoX5OO81rNAxhGPWgE9Bu4bUvkBY15fM9Z8pRpARqtZpd5/W5ym7wdL0b/65wPoyNjZnPGJm6\\n\",\n       \"drudUvUH+v3HkyH/qjOzx+TwvgcPHrTTM9vghRdeMEVl+sXcfPPN9pn6fZCtoa/RP/TaogwifaXu\\n\",\n       \"u+8+17+IZeUcaDab9hs6YVPcUzExMZFx0h02jtk3s7OzuWKeioceegjANhOmAo+f+MQnAPT9q0KV\\n\",\n       \"+BMnTuBHf/RHAWznlHv99ddTztl58KRMyHCFAq7Adh/ffPPNxqRork/vt3mBAuqf6v3WYz0pzEu5\\n\",\n       \"hGKxaGs055FKY9CvD9hmwFTpPpx7KpBMX7mNjY3cdZEs3rlz52weKnujATJAv73JimhwCttyYmLC\\n\",\n       \"+prtVi6XM9kTvEAKjzHV3/4g1nd976iwdfierVQq9jwdY/wNmVNVVFeHfO99GOYqVUZKVc9DaaQk\\n\",\n       \"SXKDCNTJnSzcIB+pq2ba40ZIVcSBtJaFbkD4mZqZCG5E1NncozDVIc9bMHhPVSzn73mdZxLRF6qa\\n\",\n       \"WsIye/pA6+vrmSTDW1tbmfJ5E0Tvw+sbjYarOkzoQCa0XHmO6kmSZEyA2l/D9LDCyTzqJqpUKrlS\\n\",\n       \"/lyMdLJwMnvOyIQujGy3Uqnkmpi8iD/PuZFtrt959wv7ptVqZfpVox6JZrNp45MLc7fbzaQSmpqa\\n\",\n       \"yqi7q5OmPosvLW4gOp2OvXBpelANKX72/PPPu5vDkCbPyy6wGwzasISRv8D2WDt37pyb7YBl1UME\\n\",\n       \"5zFflt6cW19fzwRIDBvH4TowCriJ03FAkx7vo33AuqkiPPWa1tbWXNNQXlkVYfYBbzOoa6A6bnNz\\n\",\n       \"o5sTtqmXmUDTBoUm5XK5bC9aDW7g7/md5yYCbDvaM53X+fPn3SCJsD/VtJSnX6Vg2VWLkMEHrAuw\\n\",\n       \"3b9ra2v2mb5j2JbeXPLeB95nnqld35+cr95a5MEL4Go2m5m10htLnouKaiR6ZIjOgTBpebfbTW1G\\n\",\n       \"9fkh+Ly895QexndyyI+mvYiIiIiIiIiIXeKqMVLtdjulp0F4u+Jut5tJqlsoFDIUrNKjeQ5qmrjX\\n\",\n       \"c8j1dJiUqclzqtWduncdT3d6HU8dqlnkhdOHzsYehem1qd7b06MCtk8o6mgdnjC8BJF6UvJoV0+X\\n\",\n       \"Q8uS15ZkBlQlXM1uPL14pgI1YYXfd7vdjPaMnlJUHT10jNT2ZX+0Wi33xOj1g5cEO6y7MrWEnsrU\\n\",\n       \"LEUmQMO8wzZQWlvbkffhZ+1229qU5kA9ufLfg0yVoUP7IEaKdQtzZep3vV4vw5x6NHy9XncdqXmf\\n\",\n       \"119/3dpB5w/7m6ycBgSwfrOzsxlWpFAomPSDp3JNeFIsXvDEIFDCwAvVZ/m89lhfXzezEseCJoAe\\n\",\n       \"Bs9NgvfJy/X21ltvuSrhHANkY1qtVq70B8fdrbfemgkS2LNnj7FhygqGTs4TExMZSwKwvS7xHgsL\\n\",\n       \"C5lsAsp6kh3TcirT7eGOO+4A0E+SzPpznpHR3bNnjwUHsL3n5uZScj8sO+fS+Ph45p2mztJsAx3b\\n\",\n       
\"RKFQyORa1bUy710JIOM+4K1r+i5ShOuYZyHa3NxMsVP8Lly3dU3QNTjMzQukrSi8X+guo/dm3fJc\\n\",\n       \"Q/IQGamIiIiIiIiIiF3iqsofqGyACm2qkxcR+uZ0u92Mc5veR/MCeacTIk9Ekp/r30G5eMJ6eKKP\\n\",\n       \"WgZlOHhvz3lZn8FduNrXQ/mIzc1Nl6FRnyDNAE+Ev/HYJ62PnppCqYN6vZ5y4g1/o5+FjI+WOU8i\\n\",\n       \"YHx83Mqvob1h+3rMWrvdzvSNsnNavlCdfHNzMyN/oRh28g9PbdVqNSMNMcjPge3BZ4SK2GGZlZUj\\n\",\n       \"m6BCgGRjdE6FbNIg2YoQ6k+Y1wYTExN2evZ8bbwxkscyN5tN15eBz7hy5UrKYZ/gnKPMA8Ppge0w\\n\",\n       \"f/oEKcbGxnKZGUKlXYidON+TVVLWi07SXp4x4vLly8ZyHD16FEA/dJ7sirJVnrAo66aMFP2cVB7D\\n\",\n       \"A5koOutrrjwtJ8tChk6VzYm33nrL7sPAhgsXLmRkSzxxyHa7nesHw344c+ZMSpCZ9+M4Juul6xDH\\n\",\n       \"kgoH61qiwRcsp1cW+iVyzS8UClYPKrpfuHDBnjE1NeWy1CyrvjvC4BplY/S34b8104g6brNttJ5c\\n\",\n       \"A3n9IOuC907lfFb/aPW1ykPYBroec+x2Op2MhUWFij2mPC/4LM8hnbiqGykgW0jVceBi2Ol07DNt\\n\",\n       \"yHATUSqVjELmIjGsY3RD4Dmgh2Y+jdAbFO0W1su7r6d3xIVjcXHRfs/yq9Oid+9hUX1Er9ezDQB/\\n\",\n       \"G5ZN66Mol8u55hiW1VMJHqQSruXKQxiNtb6+njHFDqJgw+8HmdNC6Fj0IiSJWq2WMdl5UH0wtuPE\\n\",\n       \"xEQmRUypVHKTbg9T9Qf6faTmVKA/NvjC0EWHCxrHoLbBqNFlhC5enLdeFOWBAwesLPoMlsuL3tV+\\n\",\n       \"Dc3HgxxCOf81cEM3vnxJUt1bI/TytL7GxsZG0gAbdQOqYD3f9773uWXg/P/e974HoD/uwudcvHjR\\n\",\n       \"zEZ0Nt/Y2LB2+83f/E0AwF/8xV+4G6k8c6Vq1nnrKhOKcwM1MzPjmpz5DDUFMvkyzXlLS0u2yeV4\\n\",\n       \"v/HGGy2CjxFs+/bty5hMNXm96ldxc+jpCGqGCI4XbuDm5uaszOpIPcqa22g0bIxx/K2vr9vLXDME\\n\",\n       \"cD4ywEPre+nSJXOS956na0KejpsSAmE6Nf2trgWcz4QeRPUdkvdcjYjnnOVaPsypW5/Bcaw6i6G5\\n\",\n       \"z8OwZ2ikodcGwxBNexERERERERERu8RVZ6Q0Zw6QPk3wtJUkScbRulwuZyj/TqdjpyylTkM6eGNj\\n\",\n       \"ww3RzGNMVDtIw0VZvlB5HcgyUUmSZOjPWq1m/+ZJTrWq+NyVlRXXxMF78/TmJQUFfPOI1pflIqPX\\n\",\n       \"brftBOyxbEShUBhJNmBraytXKyyPtSuXy9Z3Htul4e15uluayypkqWZmZuzeynR69G44TrT+enoK\\n\",\n       \"ofnceJ0X7qsBDXriDPMbang+y9ntdu3ebIv19fWM4nexWLT6khlaW1vbsaaMtrc6+/K78H5nzpxJ\\n\",\n       \"hYGz7Kwb67GwsGDl433Hx8etbsNy+3k6Wcpc0CzKsqhzM5/hsU+XL1+2vHY/aHAOz8/P4/nnn898\\n\",\n       
\"HyZzD9uR37E/qbINbM8HmiuPHDlia6W2C/uBZl81H5O1abfbGUbq+PHjGVPz4cOH8dxzzwHoq4gD\\n\",\n       \"wNe+9jX7nrIFBw4csLbneD906FBmbgzKe0nGRIOO2FYq2RDmUJ2ZmUklJiZ4HVkoj+lQU7aCrN3v\\n\",\n       \"/d7vAQA++clPprJisHz8t2paMYsGP3vxxRdTjFVoNvSYaTU7sy0bjYb1vxdAxd80Go2U5BCvz2NX\\n\",\n       \"2SflcjnjVuHJh6gcjb5ftAxh3fTd6jHRvF9e5odyueyyqJphRP/uFJGRioiIiIiIiIjYJa4aI8Ud\\n\",\n       \"fSgeCGR32uqrEjrpAWnZAO5e+b06JWu4vMdchP5GGjbKnWqSJBmHdnXS1RBW3a3zHh6bEQpyNhoN\\n\",\n       \"OxWpQz2fq/5C/EzDpT2Ww3OgVgFF/ob+FXqdij6Gqt8eg+E52qtjrMc+qWhhKOnQ7XatbdQJkicR\\n\",\n       \"T4KB12lePXVKD8ug9VaE9fBU7JMksbbK88lTFkXrTsaE49TLKadgu9RqtZQUBsunDqC8b8j4tFot\\n\",\n       \"q9swdodQPyxPIiQcVx6azWbmhK+OthqGHjKdg8a2+oSFYpmAf8pkH7788ssAgF/8xV80VoTXeyfx\\n\",\n       \"brdrfReG3b9bsN8uXrw40j09aQRgey3gunjgwAFjRX77t38bAPDII4/gx3/8xwEAjz32mP2Wa5HX\\n\",\n       \"h9dffz2AvvN6yNSdP3/exuU999wDoJ+rjn49ZKLuuOMOkwZgOy8sLFj5OI7feecd6//bb78dAHDy\\n\",\n       \"5EnzpWIuvQsXLmTELRcWFowdI+MEbI9zSp80Go2M3EO1WrWxQ/Z2c3Mzs160Wi13rBGf+tSnAPR9\\n\",\n       \"0f7pP/2nALbn7dTUlOXQU988MlE///M/DwD44he/aJ8VCoVMkEOpVMq8xzTXqsJzBOd1g1jxnaDd\\n\",\n       \"bmf8w9QPa1iZuCaooLX3HsiDxxwqg8V1Ii/Prb7zR80ZClzljZRqQXFi6gYk3GAA2wuganHoyyt0\\n\",\n       \"vtvY2HA3X2zgvE2O13H6YlMtqlBhut1up/R5+F34ktP6cdF5/fXXMy+3crls9+Zv9Rp1lgzbIrxP\\n\",\n       \"aFrzBuDBgwetTzRZpdcm6qin9QWQcq7mb2mSuHz5coZu9xTVBwUChFAq2aN+veg+BV+MbOfl5eXM\\n\",\n       \"C0XHkG40vQ1UuHhNTk5mXkCTk5O22OvEDU27pVIpk7xXx6Iquocbfb1WI43yFPAVGmHKsvB52s6c\\n\",\n       \"r2zHQQEG4djWtvMihFS139tgeOZgTfaatynhs5955plUWg/Af8EA25uvYYEsedDsCexj9v+rr76a\\n\",\n       \"a2IIFf1DsL1oOvuJn/gJPProo6nfPPbYY/jX//pfA0g7fdMsqFHSHDtcp7yoTJ0XnssAx9C3v/1t\\n\",\n       \"U2jnJuLkyZN2vZqyaJZTp3hudulIT4dwYHvM1Go1G9v60gz1q6rVamZ+N5vNzHvHU9RfX1/PdW7m\\n\",\n       \"Mz7zmc/gtttuAwAzcy4tLdn9mLZmeXnZ7vfFL34RAHDvvfdamp9ut5tJgq3EgSIkBOr1eqbvNGBA\\n\",\n       \"xxHflepwzX+z/ZrNZoYkaLVamfedPs8z93njd9hmKaybukHoehaawXu9npuEmG3qmQy97AeDEE17\\n\",\n       
\"ERERERERERG7xFVjpBqNBiqVip361GGYO0tlongK445UTTFkNVRhmqfnZrOZMvMBaSreC+P3Eh0S\\n\",\n       \"Xu42TbDI+3khwt1uNxMi3uv17DPS20eOHMk4FnrUqUId870To8cWab48thvpas3N5Tmq64k61FMp\\n\",\n       \"lUpGnyu1zueRCvecr5V90n7ICzXWMF5PLyU8AanaOVmlsbGxVFkBX/kd8HM2qcmR0DGoz1Ksr6/n\\n\",\n       \"SiaoZtAoEgt6SlaF9jCP5MTERO54UnDu8fpms5nLwI2aoyrPNOY57a+urrrh3jo2yF4w4a0+xwuK\\n\",\n       \"YD1OnjzpOpHTAZzPvXjxohvwEGJQAm2C43R+ft5YgDzFckWeMrgy1yov8fDDDwMAvvzlLwPoj5Nn\\n\",\n       \"nnkGwDYTfurUqUyZC4VCRn9HFbc96DzydN2YVJmM1F133WVlUedrMkP8LEkSM+3xtydOnMBf//Vf\\n\",\n       \"A9hmut98802rE/PrTU9PZ/pN2Vs1D/I6b3yy3qVSKXfcst6f+9znTHKCZVlcXLTfcs5sbm7ivvvu\\n\",\n       \"A7CdoPrZZ5+1+05OTmbcDwatAyFbtLy8bGsRn7e2tmZ9olYUT8uQUB27MMhFXQUU3ngK57Cu0eoc\\n\",\n       \"HsLLytFsNm0u7cbEnpcP1HNfGITISEVERERERERE7BLJTtR2f2APTZKBD52bm7OTGXf/Bw8eTGUI\\n\",\n       \"B/yccvV63T5TtoBCcTwReNATpDrwhUyD5jfK26nqLltPNqP4+pRKJQuFpb+GB8/nJmyX0N9obGws\\n\",\n       \"4zDe6XTckz7t9zwJ6XWeD0UelCVTZ+m8E4H3G4VX5lGR91vPN0/LFJbFy6umuRG9OqofUei/pIxO\\n\",\n       \"nqq3CgWqnxrLwhO6+s2RKVxYWLDTZ5hvTDE9PZ3yNwQGj136GHF+nDt3zr0ny0ofmEajkWnrYrGY\\n\",\n       \"qfvW1lbGsVgDVpRpZN0nJiaMmfGYMt6vXC5bP2n/sp+oEv6d73wn49f3bnDs2DFjvehDM+xkzbVh\\n\",\n       \"fX0948dVLpfxyCOPAADuvvtuAMCf//mfW1vT/2Z9fd38KikETCfwQaDidr1eN+aPY+vWW2/FCy+8\\n\",\n       \"MPC37A8NNvjYxz4GoM/a0E+L91NJBgZjKCND5nxjY8PmGX87Pj6eWadarVZmLKrVQNljj70P/Wx1\\n\",\n       \"7nnw3hH33nsvgD7TxHtr3VjmG264AcC2wj7QZy7Z5uGa/m7h5RFVRfBRmJlBmTx2Wgbtt1CkV32z\\n\",\n       \"lArWkDUAACAASURBVLkKLQ6VSsXWOc6P9fV1W09UFkLLz9+GKutcU7//rnMb/aqZ9thp4YDTwcmK\\n\",\n       \"vPLKKxnnws3NTVsI2FhKS7KhZ2ZmMhsopSa147xIvrDjVJY/T0K+1+u5CUA1Ok2v1ed2Oh3bQKmS\\n\",\n       \"N8Eyr62tZTStGo2G3dujRwc5SHobijBtg24YvIgVzzzLttH2VedhQl+aYWJQb6Onfaj30OeF0P4K\\n\",\n       \"qeSxsTH7rTpmhqmJPE0rz9k5rB8ROmRq5JC+6FUJmM8P69toNOx+el/dWITgAuOlFxkElnXY9WwX\\n\",\n       \"Xjdo4WWb5iXx7fV6Kd0ioP+y4aGI9zhz5kyqXfiyZ9tfvnw5s4HSccwX/LXXXutuJLipUVN32K+D\\n\",\n       
\"0kuNCvb/qC8i1tdzhq9Wq3a4Usd9qqHTCf/FF1+0NqBLgYJryE033WQO3uyv+++/P+UgDgAvvPAC\\n\",\n       \"jh8/DmA7ou6WW26xzRXbR9efz33ucwD6pkVNdAwA9913n5m4dF33DsV0Xn/qqacApFN2adYFL8It\\n\",\n       \"TOatStlqDtMNlNZnEE6cOAEA+Ku/+ivbLNFxXNdlnVPsV667GuEYuh0A/c1VONd0rWSdarVaZmyt\\n\",\n       \"ra1l0qjoBl6jnj3oWg+kXU90PoZtrq4n2oahyrq6MnjuN4TXh51Ox9pLs57oxiiEpyPl6VINQjTt\\n\",\n       \"RURERERERETsEleNkdrc3Ewlq+RJeW1tzahz3W2TieIusdVqpUwW/I47Wj2RqrotkGYrPKfVYQ6t\\n\",\n       \"XphnaLJTzRulLUOFWaWXeQ/VJeJJqFwuuznDeD+ewJQN0hOGsmNsQ5Zfdbp0Fx4ybnp6IhOlp3s9\\n\",\n       \"qYeaV0obe2ZBNV2FpxOPGVKGS52g86AO1zTf6UmZbaAUMH/D65rNpssc5JnxFDzJsewrKyvuiUed\\n\",\n       \"OFmW8CSlkh2esj5NT2NjY5kcVY1Gw56bd7puNBojMSVJkrgBBjsFWSgNwuC87fV6Nge4boQsHRlc\\n\",\n       \"moq+8Y1vZJ4xNTVl45f3rlarbmg1oXUK835ubW1lTuijYnV11RikvAwCwPYa6THNhErKkD255ZZb\\n\",\n       \"8KUvfcmeF4JMk5pndH3yysE12isf54Ka+vjcG2+80e5N05W6J9BJ/IknnsiEtc/OzmasC9dee20q\\n\",\n       \"OTKwHcjEfwP9vvRkcPiZrpUhE1Eqlax/WfZh7gRad5ok+R7SscT7/czP/Ay+8pWvANhuD3Xr6HQ6\\n\",\n       \"GR0pL1eih2EmaJV9CVn2SqWSSdLe7XatXTlnVPeN40DXSn0/hutNoVDIMGAeE6bWILUksKyjMrq8\\n\",\n       \"x6233mpmcuY0VFmLURl7IDJSERERERERERG7xlV1NvcEKj2ntUHZxnl65QlDd5D8rtPp2PcaDj7K\\n\",\n       \"rnhQCHNo9/Uc34Ft9oFlH9TWni8V4Tm20jm12+1mHB6HOZuPCq/unjCdh0H95fmghaxXtVrNdQb0\\n\",\n       \"ckYRpVIp5ROxEwxymqcPDR1yn3vuuUybTkxM2IlWJRvYdxx/+/bts3qwbiGrGtaNYyhJkswYG7U/\\n\",\n       \"rrnmmlSOSj5/FGfpUZ1NFxYWzCGbqtJvvPHGSL5Dc3NzmfYbVUJBoXIl/+E//AcAwH/5L/8lc522\\n\",\n       \"B9s/SZIMCwj4TsP0eeFnS0tLxmzs9DQ7NTVlTAWZi0GMHtc0Mghe8MzU1BQ+8YlPANh2bn7hhRfw\\n\",\n       \"v/7X/wKwffIGsnPv4MGDKV+wQTh69KjNZYqT6vqiwsIq9gmknaZDYU7Fddddl2GagO2AhgMHDgAA\\n\",\n       \"nn76absP20PnFNtMndzVRydk3FSixnNy528PHDjgtn84Xm666aaM/xew7VDOIKpyuWzK8ZwDp0+f\\n\",\n       \"xm/91m8BAP7wD//Qfpu3pqsCujKcnvjmTsG2mpiYsLbkXy0Ly+f5DhaLxZTQcghl7D3hYUKtEewn\\n\",\n       \"tcSwvrxHtVrNCKOOKpdQr9exsrLyw+lsDqSpOG/DoHQ5X2hcZIrFok0YLggaxcbvSqWSbTw4gZMk\\n\",\n       
\"yZjvut1uKr0LkO5odYDm99pZHrUf0vz6suYET5LENkNsg0KhYPVUnSUOHqV0dxq5pok6WY9qtZrR\\n\",\n       \"NdG6e8mGVeWW91Hzl+ofAX5CXI3aG2YaCTfX3sa30+nkvsD4jJmZGWsvtoVuvFi38fFxa2teNzc3\\n\",\n       \"Z2OM9VFzqm52wo2Pardw06z9pqmOiDxT0aibjWKxaPXjc/UloqZRVT4G+u3s6RWF0NQpoXloEDin\\n\",\n       \"19bWdrVxykOe1tO5c+ds08d1olqtWnk0yIJzUl/OfLFyk9BoNHL1d/KgLgqqneXN61FMGL1eD3fc\\n\",\n       \"cQeA7ei+//7f/7sb/cv7MDpXN1HeJoJ49dVXM5/ppp4aeMViMfOy0ugp3UDdcsstALZNYm+88Ybp\\n\",\n       \"erF8jz/+uNWDmye9Dw87Os/YtrVaLeMyAGzPe477ubk5m/Nadz3QAH0Hfa+PfvInfxIATNvq1KlT\\n\",\n       \"tp6wjk899VSmD9vtth1A+N309LRtoK655ppMH9ZqNWtzlq/dbg9U5A+hkcNAv93C90ipVMoEB3mH\\n\",\n       \"VO/9oyZvQt8BerD2Ds+ce/xudXU14+KxtbWVSZbtYWNjw4IXuNEcNdJwlLkdTXsREREREREREbvE\\n\",\n       \"VWWk1BRD5kVZG80fRkZAmYtQ76fdbttJirtmDfMkdPfsUfceE0VsbW3ZtXoiCZmDyclJ2z3zVNbr\\n\",\n       \"9ex5nklH2biQBUqSJHNqV3OEOk177A53/0tLSxl9Hi95sJ7glEZl+fWkGVK16hSorEzY1l7SUE+T\\n\",\n       \"KawzrwvbY8+ePXYfrTvbkuNgaWkp0180rwBpRejwHmpK1QAJlllNBWE9hp0URzVHagJi9o2emtim\\n\",\n       \"vN+VK1cyzECxWMywhr1eL6Pd4o1Tzww6NjZmbToqZZ7nlD5MGXwYSN/n3V/R6XTshK4ImS0dn2Qu\\n\",\n       \"HnjgAUvKu1M0Go2MuvYNN9xgpkKVZOBJOo/p63Q6phzO+pw8edLGB+utwTAcx0BWwmJUnbg777wT\\n\",\n       \"Tz75JIBtBunKlSsWJKQWh/vvvx8A8K1vfQtAnzkjE8V5WCwWzRmdfz/84Q/jC1/4AgDgq1/9KoC0\\n\",\n       \"CZBrTbvdNhkM9p8GauTleCuXy651hGOb961WqyYboe8XMlE0QbbbbVtP1MQXjivNHUvcdtttqfx8\\n\",\n       \"2k8sE8eMZrYImWaVhdHsCPp+BfpWEo4tdTr31OlDqJ6Trnt5jLrnaqOf5bkeaHJz7gP4t91uG3uX\\n\",\n       \"p7au+QuHud8MQ2SkIiIiIiIiIiJ2iavGSNHpUBkQIO2rRNRqNTttqDMad80qaMhdPU904+Pjdp36\\n\",\n       \"8qhcwCAME9rL26GrXwmfq74qeg+eKjQnWFiudrttLAF9Od555x1rK363tbWVCU0F0qqvLAPvs76+\\n\",\n       \"bqcwzfTO8niZ3dW+7al6e+3G9lJmKmRpBqmsE16eO17v+XMUi0U3XD2EslC838bGRkYFXqF97CmQ\\n\",\n       \"5wm2euCJ88iRI8YEcezqs+hfB/jjN8zgrj5r6gcY+sO12+2Ms77XB/oZ6zg9PW2//UEoLl9//fXm\\n\",\n       \"+6DjzxsbXog+/X3UuVmvD9mwTqdjY4v53N54442Mf1in0zFRSPbNqD4pgxAq2x8+fNhYHWWk+Lw8\\n\",\n       
\"5nJubs58Qv/mb/4GgJ8hQPMWKsvKUz3HULfbHclHRP2m6GN2/PhxE+ekc/jKyooxUcQ777yDQ4cO\\n\",\n       \"Adhm+TY3N/HhD38YAIyF+sIXvpBxXn/jjTdMZJTO3+pwzXmkcineGkJsbGzYOPaYfUpFFAoFk5fw\\n\",\n       \"ZAjIiOzdu9fyPtLRX/1U+e5aXFzM+PK++eabKauLx5h61pZwfIwqyaHjQOdUqP4OpP1Dw3Ko/6yX\\n\",\n       \"+5TQd0nIEiZJYu2vATKhGHa1WrXxyb4elgtS22dQ0JKWbxRctY0UB0uo96Hw1LN1AeSA9xYyNq6+\\n\",\n       \"MFQDJHxBFIvFVMSVXh8iTD9QLpdzo8lUK4mdRBPk22+/bQvoMOdcXscXzPz8vLWN96L3kuq2Wi0b\\n\",\n       \"hNqW3iaEbcgJdPfdd9vCrubN0BSr6VG0bt7ADDcbaj5UCpbjRDW5+JlSwOEk0JQ+XLQ0CTLrMWjT\\n\",\n       \"7OkUhe2i5hlF3gbK29Rx4VhcXMxVmyZWV1eN0teNEtvDc9xUnRZPBX6niT95j0uXLo20cRxVBVwV\\n\",\n       \"phVhxJLqPym8DALEgQMHXAd69iGdlgellAo1w5aWluzlzA3IKImNB2HPnj0ZM46WLw8//uM/buVn\\n\",\n       \"9N6lS5dsI8PyvfTSS9aWur6yXW6//XYA/fEwSlLYixcvZlwttP24iSgWi/jFX/xFAMCjjz5q3zMx\\n\",\n       \"Mftybm4uozRfr9ctSpC48cYb7d5cU1dWVszhndFxa2tr1n5c97wUW8vLy9b23DCfOXMmFTQD9Dec\\n\",\n       \"4Qbq0KFDqaTLbBeOVba9biboUH/y5ElrN24Mz549a4cmL0lyqVRKmcIIfsb1ZG1tzVTnH3zwQQD9\\n\",\n       \"jehf/dVfYRB0rHG8e/2vmoScA7yuVqtZu3kRePy3rheqqRjqNVYqFfu3uqV4EfhcO9jmvV7P2kVN\\n\",\n       \"gByjXjYTwltfQkTTXkRERERERETELnHVdKSo/8Bdvac6SxQKBQvpfeaZZwCkT7Ya+snPvDw5pBw7\\n\",\n       \"nU5GxVqpfT0x7CTEHNhmBkqlkp1Ylb0JHbM9LaBCoZBxGNZcRsouqaMw0N9ZK7Ommh78nr9R3Y1Q\\n\",\n       \"VXl8fNxYDI8d8cD7djqdDJugST6VPQnNQOVyOaNbVC6Xje3QBLaER0N7oLnkwoULGSdD1UvymDz9\\n\",\n       \"LM9EyH7TEGyFmmAH1cOrgzqHq+mRyJOPSJLEzLieA+eoeeI4f6rVakYq5ODBg1Zumj92IgkQJi3V\\n\",\n       \"gAVibGws035JkqQkHfh79vU777yTGYsHDhyw8ntMLB2eL126lOt8yrofO3bM+oJ/R1Wd9vCRj3zE\\n\",\n       \"2pKK5KPiT/7kT/CRj3wEwLapuNPp2L/p6E1n7UGg5k6pVLJTO9dSHUPKEHpzhXIA3/zmNwGkQ841\\n\",\n       \"aXG4/qv5K09vCkAmx5/+lkxcs9k0CwdZo0Kh4M6lPO0wlvONN95wdcf4Gc10unby+UtLSym2i/cP\\n\",\n       \"x6LO283NzYwW1Kgm9EEBPKFT/fj4eEYOQtkiSoYsLy+782ZUhHqCGjSjzvCj3ANIz3VgZyY59qea\\n\",\n       \"ez0LFzXqBulIRUYqIiIiIiIiImKXuGo+UtVqNXVq40lkfHzcTgea2Z5MFDE1NWXOZXoKD/PqaQZy\\n\",\n       
\"DREl1E/Ey0AdolQqZfItaQ49ZcC8nTefp4yZx56Fp/mxsTF7rseS8cS0vr5uz9B2UaYkzGdUrVat\\n\",\n       \"LjxRra+v286cTEmlUrFyqWxBeJqcnJzMyAWoH4aWP2TMPDar1+tlBEMVYQZ0IM3QsKzqtxJmAm82\\n\",\n       \"m/ZvPZGG+b4qlYorCUC2ZhiDybJ6/j/KSKnDJsviqedz7OQFT1QqlYyfmObu0px7eYyethXbl6fB\\n\",\n       \"er1uddupOGW1Wk3l2AP6jqOhOOj+/fszzJoqUQPpgA1+T5CVabVamYwACq4dR44cMV8bD6zvtdde\\n\",\n       \"i+9973sA0sK9YR8Pk/ag78bS0lJKgXwn+Lmf+zlzztY5Rd8ezmmVP/DAsPtKpWI+O2SpTp06lWHc\\n\",\n       \"tK3IXLz66qt49tlnU9cVi0Xrr8cffxxA2mH4gx/8IACk5CTYb8raPPzwwwCAL3/5y8ZEEfv27TNG\\n\",\n       \"ioxQpVKxMUNhTJXI8HKRcs2p1+v2XN5jYmLC9bPjGsTxpWs+14Zut2tMFNv2hhtuyOSF7PV6KWYt\\n\",\n       \"z++ODFeSJLY+sX91zKlkC+sSSugAvp+jBhTwPpqRILTo6P34LlemyWO99N3Mec/+WFlZyQQMtNvt\\n\",\n       \"VK5IlsXLmxnmqlRZJbbRzMyMfU8GcRSr3VXbSHGh4+LBBXxjYyNjpuh0OpkXwZUrV0yrgw1ZKpVs\\n\",\n       \"EaQTnyaPVLMA76cvPi8xbviy0UGpAyeMJtDBoWaz8H5zc3NWT9LBGsnH+7TbbRsIfJlo+ht17OT3\\n\",\n       \"vB+Qfrmx3KEyOJAeNCH122w2TUeFbZ4kSebFuba2ZtdpfUMqulgsupsg9hP7Q81anLhK7epv2da6\\n\",\n       \"oeBGj9FEN954Y8Zp1UvzUygUbFyGyuBAevEd1Uk7L+WQmky8AATtd6Dfptw4KNXONtDFxNuEhQvV\\n\",\n       \"IOdqItQVA7bbeX19PTc6iJFL8/PzqXEJ9PuH9eAzVldXrVxcD1QPiy+OhYUFW0O0Lp7pgS++drtt\\n\",\n       \"92S7eMrR3kZKgw30vvwtx8TevXttjhDDTA58UZ4+fXpkczpx9913A+jPHW5QqGyuDtBcH6+//npL\\n\",\n       \"T8KxfdNNN9mGUCMJ2W4s/6FDhzKJk/VwwRfufffdZ2rSRKvVykTe/dZv/ZYpeHMDdfjwYdOg4j2m\\n\",\n       \"p6fx8Y9/HADwP/7H/wCQdhhnn3obxHa7bXVSh/qTJ08CSJv2OBZ53ezsrNWd9w4VuwkeCDj+xsbG\\n\",\n       \"bA33Nl6MLj179qy9s9gv6prhKdPv378/E4A0KNCD/cUyXHPNNVYnbvq8Q9TU1JS9B7h5bjQambro\\n\",\n       \"hlEDL8L0bcVi0dYndUchlMTg55wXhULB2p1rm2r46VrpHWhHSYmlAVicv15UcIho2ouIiIiIiIiI\\n\",\n       \"2CWumrP5wsIClpaWck0hZCHGxsYytObdd9+Np59+OvOb0FHwyJEjxkTkae5sbGxk5A8An14MHbOD\\n\",\n       \"ugHoM008leRp30xMTOSaGd4t+Ew1k3ph74Q6ovP0NUxhmidfnooGhbjn6UN5jqp66g3bWvP0qYNy\\n\",\n       \"WNZBCaVHhZdomVCanGOCiUdffvnld6WnxFM7x/358+fNcZYnw3a7badnPVHTzKvhw5pfCkgr3LMe\\n\",\n       
\"PAkryuWyfc8+nZyctPryWRsbG8Z2KDvxwAMPANhOZKume/7d2tqy8rG9VbWZa8SFCxfsN6o/x+ue\\n\",\n       \"eOIJ0yMalnyXLBbHhppviA984AP4+te/nvpMy882GB8fz7BUx44dM0Ylb407ceKE9cOXv/zl3DLn\\n\",\n       \"4Zd+6ZcA9E1j/+2//TcA2yfqpaWlTHLZW265xU7fXH9mZ2ftOh1PnHusr7oZkJkYGxsz5kJzn5E5\\n\",\n       \"VLaTY5bXX7hwIeNQfscdd5j8QeiuAWxbHFZWVjK51iYnJ+0Z/M5bB+69915j4Nh/yqLwGadPnzZJ\\n\",\n       \"CbapBjZ5yEtE/26Rl7R4fn7exhPXrJWVlZHKMTs7a+3GMbG1tWXriAZKhea7QUxYHvtEqHxQXkJj\\n\",\n       \"BdeJ+fn5lLYkn8k1gd9NTEzYGCTbWywWrW68/vLlyxl5I5Y/OptHRERERERERLwHuGqMlPwbwPbO\\n\",\n       \"cWtry3bFaq/kDl/z1hF0CiuVSmYH5YlT/RT4mZffystlF5Q581xFmM9N8+rpLjsU99KdN31HNjc3\\n\",\n       \"3ROE55TuQa/zTi88CdLn5s0333SZjTAfWbFYNPaJ7To5OZkSuAP6tvZR1bzD5+opwWPO8iQCtO78\\n\",\n       \"reZQ4zhZWVnJOIBOTU1ZW3k54HTsECzD3NycsTAasBCeGIflj+MY37dvH26++WYAwPPPPw+gf2qn\\n\",\n       \"BAhPi1euXLHTOpmB8fFxG9OcC61WK5U5AOj3H5/HNiVzC2z7eoyPj2d8BWZmZqyvNacZnX51fJKR\\n\",\n       \"UHYp9DtTBXmWqdvtuj4leej1epZnjozU0tKSmzstxNTUlH1/+PBhAH0/ktDHR520OT9qtVrGR256\\n\",\n       \"etrqpNeH4/bEiRP4qZ/6KQDA7/zO79jnw8Z5iH/7b/8tgD6j89RTTw0s8zCwrTg/Bq01rLvKUChD\\n\",\n       \"C6T9l+666y4AwLPPPptZk375l38Zn/3sZwHAGMUzZ87Y2OKc27t3r+XdU+hvQtDi0Gw2bW1Q6wZV\\n\",\n       \"7L/73e9avUK2ZWFhIcN6AdssK/3oVOLFu45jUjNcsB0XFhasjzSgh58lSZLxv/LafBB4nWbZIDju\\n\",\n       \"R12zPezfv9/GCts3SZLMONH1XaVnlIkG+u3iWX7yRH/z3o/VajW1/gP9ecl1Ud/VXlsOY6SuatLi\\n\",\n       \"QqFgLzmNHGKD6CDhgOKg1Mpy8KpGkm6g2HFe1IOqq+ZtlvLSlWiaFL0u1LTSBJAKdqJGwnCTw9+e\\n\",\n       \"P38+5aBO8MXI9hjkWKwO/BxI/Ds7O2u/V0o6fOl3u92MQ7JuOngPTSTJZ0xOTqY2WkC/3/gC4gKu\\n\",\n       \"CsRehJ73gtF+COvumZI8rKysWH95m0rWo9frpVISsR7sLx2XnPSsx8zMjLUp+3BycjITAVcsFq09\\n\",\n       \"uEE7evSomfZoTrty5Uqq34G+KYP1UAV2Ljy64eNcCTfMWg915lRdonBBbjab7tjOc2BnX5bL5czm\\n\",\n       \"atAGIoys7Ha7qQ0r5xAPJUtLS/YC0ojJ0Myvmyz2jTqa837z8/P2IuN64plV1UxIExGAzEbgrbfe\\n\",\n       \"cteWUTdQVO5mJBodzRWDNlGe2TrcSDWbTXfzzzmgCMfE2tqatT2jru+77z6LlmO7cRMF+M6+xPLy\\n\",\n       
\"sns4zdM0Yp9rUl2Fl8IkfPkvLy/b/NbycWx75kt9d4UBC4VCwfqN5nQ9xLAvX375ZTu4bmxsuJs0\\n\",\n       \"VVrnvVlGrS+vY910k8t+27t3r208VQORz83baGn5VSeQbahtqnp0QH/usQw6JjnuuD5WKpVUgmV+\\n\",\n       \"p+9/fhY6uTebTTPp8fCpavG6gQv7mm2Sh2jai4iIiIiIiIjYJd6VaS9JktcBrADYAtDu9Xr3Jkky\\n\",\n       \"B+D/A3AdgNcBfLTX6y0Fv+sBaVVv7jTHx8ftBMVdZ6vVsut0VxwmYvWgitV6+hhmqgvBnXKv18s4\\n\",\n       \"wWq+Ob0+PMkdPHjQduE8vbRaraHO3ESeojYRatWENHqtVrMyDMsHRo0T1v3MmTOpxJVAWseHp1Q9\\n\",\n       \"3Su7E9KylUrFThv6G++k7OVdzHME1wSg4Qljamoq49y4ublpz+DppF6v27/JEIxqbur1elY3juOb\\n\",\n       \"b77Zxiw/q1arKRMX0Jfs8PI98dlkXQY5ZIZje//+/TZ29NRM9oTtqM68CpaZp+xCoWDMMPHcc8+N\\n\",\n       \"nAVgp+Bzp6amUqZ4oD+W2A4XLlxI5eAD+v3GEyXXlUqlYvfMUyBXcx9x7bXX2umfv1Wzj6rnEydO\\n\",\n       \"nADQH7sMt9frqfD9p3/6pwBgTuo7wac//WkAwNe//nV85jOf2fHvQ6hej7Z1CI4xNW8PA5XIuY5d\\n\",\n       \"unQpsxYN0twK10BlVkJTm2J6ejozl4FttpAMxttvv211JzunSeRpXtc8gPzs1KlT5jRP1k010tRc\\n\",\n       \"yrVI8+p58NZ8tvnCwoI7Z8m4kM0Ctk2Xu0GokdfpdGztUL2scP57795Rc20CyLgjjLq+qFI622p8\\n\",\n       \"fNx12eAz+F5bWloy5p/rXrFYxOnTp99TZ/MegA/0er07e73evd//7LcB/GWv1zsO4Gvf/39ERERE\\n\",\n       \"RERExD86vFtG6jUA9/R6vUX57BSAn+r1em8nSbIfwNd7vd5Nwe96Gr4eIo+R4Hd6kqR9dWtrK6OU\\n\",\n       \"XSwW3Z1s6JiWJIntvHkC89iHnSgWj+Iw6v12YmLCTt484Z46dSrDynjigEDaH4rtEfosAWlGQsNd\\n\",\n       \"88rKk7xmQ2efqJDdqAKVeWHCKiypAoFA2u8nLw+e+lmNehLKg5cxfnx83MYEx6IKSvLUqQ6ZmuOL\\n\",\n       \"p0r2ZaPRGIltVbAdkyRJ5WcE0n49bMc9e/ZknHQ9X5Px8XEbgyz79PS0/Zbii6+88oqb7y/MTq9O\\n\",\n       \"/ZoRnvXkuFefFhVZzQuL9hz8B4GMpSo0hw7FhULBys+yFgoFYyoIZczZ5zovKQHx5JNPWvnV1+OX\\n\",\n       \"f/mXAWwzpv/zf/7PkeowOztrLNAnP/lJAH1H+f/4H/8jAJ9xVh+jML/ZbsA+mpiYGHnOh3jggQeM\\n\",\n       \"KVU2yZM9CKHrcV6+SWBbUoR+ajqWVH6Bc4/3q9VqNjYefPBBAMC3vvUte66WMy/cX1nyUJS0VCrZ\\n\",\n       \"tWTQL1++bOu2ClVzDU+SxMYi1zZvHZ2cnMwEoCRJYmNV5+OoayTnDdne5eXllHh0CBWvDp3NAWTK\\n\",\n       \"on2oczoUk56ZmUn5wQH9NYZrGa9bXFzMZVaHYZiz+bvdSL0KYBl9097/0+v1/t8kSa70er3Z73+f\\n\",\n       
\"ALjM/8vvekB/UudpN6k+UOhEPD09nTGxjaJcyvuNUu9Dhw7ZQqUvT40EAdJO8yzLIBMQBwLTKKjz\\n\",\n       \"qW688qITFKH+0sTEhLVVqVSyMmpbhlFx4fdAegEYNVpQEepvTU5OGn2tuh/e4ptnwvQcwT3odWFa\\n\",\n       \"AU1QnaelpdFunNiTk5PWRrxueXnZruNz3377bTM1UNX70qVLmQ3r2tqavfDU3MhNrs6PcJHWF/io\\n\",\n       \"zsnEgw8+aGPmtddeA9B/+YftevfddxvVTUxPT9tvmVRX9dW81BCsx/z8vM2NUV+8o0bMNpvNkTdS\\n\",\n       \"bF++BC9fvuxGZrFfOR8bjUaqLnw++1VfZJ45muPjnnvuAdA3AX70ox8FsO14/LnPfW6kOvzkT/6k\\n\",\n       \"JQPmi/nnfu7nTI+KJkLd1GlbetptefCuZ58cOnTI5rem6vDWd69dvIMD56hmpAg1rWq1WirtDcvC\\n\",\n       \"DYqHUSPdiGPHjpmyOMfs9PR0ZqM6Pj6eSjUCpNuea4i2hZZF3VuA7KYoXJv1/RleEz6HGPZeUR1E\\n\",\n       \"lo9ruKqoh22n7xXeQ98/uiZw88eyDEuAPGoEK+9XKBQyUYC9Xs/mq268RiEQ+Pv3MmrvgV6vdz5J\\n\",\n       \"kr0A/vL7bJQ+vKdSBxERERERERER/5jwrjZSvV7v/Pf/XkyS5PMA7gXwdpIk+3u93oUkSQ4AyB7z\\n\",\n       \"sK1MnSRJiiEapkTN3WmhUHAdRcOQeQ2Z5K5TdV+4O9ZdMU+h58+ft92rJub1zGkhnTxIe4QnCDJR\\n\",\n       \"8/PzVl+eEg4dOpTRRPEU2r220lOMR9n3er1cp1HtB4LP0HxKRL1et880hFXz8wF9swVPjGxzzSem\\n\",\n       \"rBf7mH2kQQn8bHx8PGMiUiditsPhw4fNZMIyeWblqampVMJZoM9WhCHHg06xKhdAUAfHC4nX3F3h\\n\",\n       \"qXLv3r0Z81Gn07HxtFsTCrCtkTQ2NmaZAcLk1IpqtZphIi5fvmzmFmWBvVMdf0MThZcIHEgHn4Nq\\n\",\n       \"/gAAIABJREFUlgD9PuJcVtOOx07t1DQ1Pj5uY5HjwJurLAd/A/TrGJqS9DRO1Ot1M4nqGApV0aem\\n\",\n       \"pswZ3ZMu8EDn9MOHDxsjpSYinszDZOLAtvL+q6++mumbYcEneUExFy9edJPQEsoqsD10XLEtvdyn\\n\",\n       \"nqlOE8eHMiOnTp0ybS7KLqyurhozrAmhmYiZzuH1ej3D8qrVgLIFs7OzVg+at3Ve6njimnTvvX03\\n\",\n       \"4ieffNLWBq4X1WrV6sv5ODk5mTLTh2NMEwVzPLfb7Ux76e+8vuGcVIVxWmI8tigsB5B+r5DNunLl\\n\",\n       \"isuKhZYjTUZNJEmScTYfhmHWG2+/QEZaMz5wrpfLZWxubqLdbqc03jzs2tk8SZLxJEnq3//3BIAP\\n\",\n       \"AXgewBcBfPz7l30cwGPe7/niLBaLro5NRERERERERMTVwNjYGObm5lCv14dupN4NI7UPwOe/f0Is\\n\",\n       \"Afhsr9f7apIkTwF4NEmSX8f35Q+8H3PHy1MRd8WD2CgV/uPf0K5eKpUyDmXe/ZrNpm3e1CnNc3Qj\\n\",\n       \"PCc+9bMJ/UPeeecdKx/DUM+ePZuxz1+6dCnjr3PmzBnbjbO+XkivxwBce+21rggiN67VatUNA1VR\\n\",\n       
\"NgBu9vlisZgRcdzc3Mx1tGff1Ov1lKJsWC7tJ57seMryQms3NjYyJyMVjyM0lFxD0/lb2s01XJ3Q\\n\",\n       \"cG72mzpuq/M1fSK0bmxnsouvvPKK2/YE61soFKw8O1X3BtI+JfyrGdmBfjZ5jq08P6tqtWrlV2Yj\\n\",\n       \"FEsEtv102OZevrbNzc0MY3L06FHrD/pr9Xq9jFp8+O/dotfrWV28kH1lT/g8jt1Op2Pfa54xZTGB\\n\",\n       \"4ZkSyIrcfvvt5i81qnwA876FvmtAv1+4BupY01ybIdQHRsc0kGap2G/eOBkfH8+Uv1qtpoQ9eQ9+\\n\",\n       \"xvabn583tkDnaCjg2+127TfKtvHfynR4QpXKRBEco2SNtc/V75VrHP3Yjhw5YuOc87ZWq9naoEE0\\n\",\n       \"LMuTTz4JoP8+8OYPn8f+PX36NK655hoA/TntvQP4ztC5SYFQ9r8qeHNsTE9PZ/IleoLGlUrFzYPH\\n\",\n       \"9x3Zp9nZWVuzdA3musj28xza1RFcLSOcU+xDDepR3+W8vQPX97GxMesTluXSpUvGduuc55jIk0YJ\\n\",\n       \"cdVSxOzfv999eXlRcaq140VhDYtwCiPDjh49ags2v5udnXVTDOQ5JedhbGzMfqMLT6jgWi6XXafC\\n\",\n       \"0MlZlajZ6aurq6loCCC7ufJSxPD3XIgvXryYKYPqfXAy0+EyhOd8PyhyJkToHFwul62edNZmItNh\\n\",\n       \"KJfL1r6cVHnRoYPABa1UKllZQvXcYej1eraxYDt66S2A7TE97DARQueKLpChc2ipVEol9OQz8pw8\\n\",\n       \"+dL8sR/7MdPMYTsOck6mng7HyfLyso01tmmv18skOi0Wi+6miWDdisXiDyxqL8T4+HjGGVjnHNFq\\n\",\n       \"tczsyrotLi5ae/Eltrq6an2T186/8Ru/gZ//+Z8HAPyrf/WvAAx3qmXk2NGjR/HHf/zHqe/2799v\\n\",\n       \"LwLPyZzrim5yWY9er2dl5mfr6+tWN03cTrCt3v/+99t6rol9PfNi6PCsSWZ1roYm5UHR0YSu8xzn\\n\",\n       \"v/IrvwKgn2qJpmzP4foXfuEXAAB//ud/bv2rqU7Cd4u+kxRhmWdmZjKHJ89R3QuAOnLkiG2eSqWS\\n\",\n       \"bVrYvnv37rX2CjeswLZrwcbGhrW/FzDAsaHuDXkHPn03eO9HzvXDhw/bRobXPfvss1YW9lGhUMi4\\n\",\n       \"h8zMzNh1Ow2k8cbJIP0q1lcPQCwr5/LKygpefPHFmLQ4IiIiIiIiIuK9wFVNWqzObYTqPunuOWSd\\n\",\n       \"1DmPUIqd3ynzpWY176TE79///vcD6O/8X375ZSsD0D/10GmRJ4PFxUVjEbxQTT0BjSJr4DmRjyrZ\\n\",\n       \"oLnlgLQKLtDfXYc7/EqlYqc5MgOTk5MZBqJarVr5RzU5sd/q9XrmZK6MCtut3W67rI+qWwP9ccIT\\n\",\n       \"Mh2VSZ2H4JggzbyxsWF1U6Vx/pt0r56O1TRK8NQ2NzdnbaQ5+W6//fZU2Z9//nk3UGEUqFmVp6jJ\\n\",\n       \"ycmMvICe7tj3tVotE0RQrVbNIVaduXlK/OAHPwig3x9kb8laemOoVquZSYJh8Mpy8r7lcnnHDGEe\\n\",\n       \"NJiEwSu7geqDeQ7t+hlZHY4n1Q9SFfjQDKFznuP4xIkTNo6+8Y1v7KjMGpTCspRKJbufnspDNntt\\n\",\n       
\"bS2j0+StL5pTj99760uSJMZw0y1AzXgsn8pkaEYEjhVdl/PYE2XJhjnJA/05yjbguNd1huajAwcO\\n\",\n       \"pPI0Av22oplHncM5B7w8fBpgErovKCMVthmQtrB4OnuelUGTfo+yNg9j94jJyclUYmrWIxwrKomS\\n\",\n       \"lyB7ZmbG2m1U5p0M1+TkpL3/uY4qy0TLycbGRoZRq1QqFmjDvlSLDv9Wq9WMM/zevXvxzjvvREYq\\n\",\n       \"IiIiIiIiIuK9wFVjpK7GcyMiIiIiIiIidor3UpBz1wiVtIdtrEZx+p6amjJacTfpQHaayDgPk5OT\\n\",\n       \"FoFCh8FRFYSBbbOmOp3v5PdESAOrM52nmq1OhKG2lJbHU0Vn+QY57oZ1m5qaMiduTV2hGjFEmIRS\\n\",\n       \"ozrUjEvql/cdlFA6dM73dLM87N+/P2Ma8BwZPcdnLYvS997YpukidJ5X7Nmzx0wEOjZCk8ggZ95B\\n\",\n       \"17OsYT1YTq0rzapJklhZRp17w9qcz2WKHS8YRO+zsbGxa9PeTjDqOvGDXE9GxaguAD+I5wBImeZD\\n\",\n       \"M8kwlW29l85/3jccR6rX5T1DP+NYVidxLzIsfA8VCv8/e+/SY9mRXY3t+37kuzLrwSoWu5rsbrTU\\n\",\n       \"bGnQgiYaGEJPDPiDPbM9+wB75oHH9sTQ0L/AQxseffA3MjwSLDVktCFIsNSAukWqKYqkim9WsYqs\\n\",\n       \"ysrXfXuQXJHr7rMiTtybWUy2EGtSWeeeR7xOnIi11967WYlFyL/zeT5t1WQyqcy3rVarEvF7MplU\\n\",\n       \"pB3tdrviUcnif563YbZSY/3b6v9Vseq70Gq1Kml2VN1i9c15nro2JkqvXZ8kfy0oKCgoKCgoKIji\\n\",\n       \"2hgps/MVIXbmLNKEGI3jUeS4nR8eHlbcvFXckhhyVsutViuwMgCL6/D8n//850EkB/HtKowS6otr\\n\",\n       \"1LXdbncpzxiQErKrmDzKlTe20seuCcwGhxfgXZt3y2dAiKliqSjHgpgw0o+Jfr9fCanR7XbDteh/\\n\",\n       \"jvvF56WYKN4p++fmMjDcnnwPH3OLc4pBEDydTiv9+vTpU+k0ocqOiMxvv/125XcVEJd3dCg3JwcH\\n\",\n       \"sDseDAZBsIvfWQSsWDf0Qey9UGymgmIQcpESlvM7oAS+KfCYXTWn3TrwjImZZnRzw3fU7ehz5ko1\\n\",\n       \"1yCThS8z52fD+bGQLmaakVJzDsbGfD7P6jtmfji3pS+DYpoUixaLgaYYrlh5fLlSuAwbpUIP5Vxj\\n\",\n       \"dl535Wi1Litbxwqp8czH/PuqLBg8xnLnmhiubSGFQeiDAY7H46UM8GarZWvGRxiT+unp6ZV4CbFp\\n\",\n       \"CWXmlxkdgEXCw4cPQ8C2dZ6PeqSuvXXrVigDFp11HhtsOgPUyxOjOP257IWFMvPCUi304Kny6NEj\\n\",\n       \"uWjyg5kDSvLz2VPFTJt+Xn311RBEj+GfMZ/Pk+MMk0RdX7K5qs77C8ACTnn/oF9ff/11GVBQfax9\\n\",\n       \"v02n07DQQmwuDu6Xm1kAfcRjgz258M7hA8QLKbQLjw0fiyYGTsKtFiOYJ9ZBqm/UpJo70e7s7IR3\\n\",\n       \"cdU4OLlQC71Y+Vb9QK7zQVGLTr94SZlh/LUcZyh1DNfg/eX3mNPH+GerRY76MPNCSn3M8e7xOOTy\\n\",\n       
\"+TZQ5kN13iq/p5Br7ostSlJAnVVAVrP6/o6do87jJM3K3KfKGtu8enB911n8FdNeQUFBQUFBQcGa\\n\",\n       \"uDZGColo/apvNBpVdnC3bt2qiJwXi0XY3WIFeXh4GBiDq4pVo2j5GINjdhGThWPzrIOc8g+Hw9BW\\n\",\n       \"OUyUhxIPM03tBeiNRiO0P2IjsUlJUbtKQI1I2WZVtnFra6uys1ksFoF14lg/PknqbDarMFwoZ6zu\\n\",\n       \"bKpUOxDUSUWpV0AMmtgzWYSKMgOqz3Gs0+lU0guZaZG2Ys/A1v3sZz8zM51yiAEROcf+Qvs0m83K\\n\",\n       \"Lvzo6Ci8j2DWGPjtxo0bgWXj2Gsp9pn7XJ23aq7O2A7dswTMhPI7kNqpYp56+vRpiFuDschm58sI\\n\",\n       \"0VNMZ13096tCLtOUMjkyk5CqEwvM1TzmGQlO84Lzu91uOMZ9qeqB35n9UkJwtDM7nXA8N/6Xy6nG\\n\",\n       \"a8y8nmK2cpHL+CjzVx1UeimVhogdklYd8zifx7V3Poo9r+5Zisn1/ZTT7oWRKigoKCgoKChYE9cq\\n\",\n       \"Ns+Nrgp3c7Nl8R1yD4EFmEwmyaSL6+zUcoXEOee9+eabYZULhmA8HicjwaZwenq61DY54JU3CzH9\\n\",\n       \"74vFIjAbHMnY69fMLiLP8nkAMwdgOVBmJW5MCdwZrVYrjAXFEmFMMKPCuxSvz1FMSKvVkm7KHsxS\\n\",\n       \"8DhQmgy/k+J3IKXD+uqrr7J3pJ5ZYWeCv//7vzczsx/96Ef27rvvmpl+LxQjBXCbIJr5u+++GyIC\\n\",\n       \"4x1UOrutra1KmAR2UeddIH7n/kW5mLWMhW2IIfabaodUWAt1Hx7PnDDbbHm8q/eQWcrUfKKer3bt\\n\",\n       \"LxNKY+jHNjMqimnieykBeux8fz92IuH/m120R7fblWJjzz60Wq0kY8R1833J9eDrUqL5WJ3wDKUF\\n\",\n       \"uyqkxm9dOBo/Bnlso83Z0SuFWN1S9U0xZnVtVTdHsFUmF9e2kMKLB3MFBsx0OpUfZIAHLUK9w5Rw\\n\",\n       \"79690HFojLOzs2B+Sk0yMXG1/7jlLv4UptNpqO/9+/fN7Pwj4s2Rjx8/zurEVDJUs3OBMrCqqHYw\\n\",\n       \"GMiFAD5knBmb+y4FnwBWUfoqWebm5mbluPK849Q6eBanXsAiiF9u5fGFReJsNgv9gPtxHSGgPjs7\\n\",\n       \"C/dOLYp5IaUmJzgqqJQXk8mkYr6FedzDO0OosYQUJb5OABa7t27dSi7W+d4+FYZZVUB/eHhYGUNK\\n\",\n       \"RLqxsREWUFw+/M3PSi1e1wH6OragUR88HOO2R10wng4ODpbGI4Bn5G7aUqJg9RGp88bKNYNcxpMv\\n\",\n       \"ZdrLfV7M/KIWUAB7PaMflKccH1OCdtwHc8d8Pq/Eeot5TAJqUaQWdfyvX9hcFqoP2VPSz0+xMYnr\\n\",\n       \"8a7wnKrm2RRy31k1tvlYrmC8zvlnnTmkmPYKCgoKCgoKCtbEtTFSSDCMFS2v7j2lPxgMwu+cUBAR\\n\",\n       \"j+F2rZKkxlbFnh1RO/tOpxOei9XuzZs3w+46J1km4+HDh8EMxiJdPBvHcilFZu7AcG1vb4dVdkxo\\n\",\n       \"7aFW4Jubm5WYTGYXK3g2t6SE8RyLRcU6qoslpMplpsX1e3t7gQ0BWxeLD+XFqAymdr0onccJxtDp\\n\",\n       
\"6Wk4T40jRir0xKp0dWznhDEP0zebwVDmX//618H8yewigPcM49Xs4j3jcY9YYP1+X5o9fb++ePFC\\n\",\n       \"jkswZHi3jo+Pg3Cb74ExhHJtbGyslcXAQ7FKuVCmKe5L9Mfnn38e3lMO1eGZktzns3kJ7Xd4eBju\\n\",\n       \"g/vW3S/lop67U1dsDDMDdeY5P/ZjcZP8e7tYLEL7pkyz0+m0whzF2Arch8uMY5hLZrNZYEVxXxUW\\n\",\n       \"RNWdheNKElDnxr8quC35OcrUueq7hHbh8qVE4TGmLuebp8zqsTHkz1vHfLgKCiNVUFBQUFBQULAm\\n\",\n       \"ro2Rgv4AK0LsnjgSNVaz+/v74TzsPnZ2dkIuOzAiL168CLsDsDXPnj0Lq1PsBieTydI1MaicSB9+\\n\",\n       \"+GGINv2nf/qnZnauo/rzP//z2jqfnZ1VooDziniVwKMe2OE2m82gS0m54jNWWakrDZA61+fO4tAE\\n\",\n       \"jFT0Z5zPfcish9/F8P1xn42NjbCL5HJ6FsBMh2/AMaUBYHZURf1W8L+z5g5jUWmfRqNRZcyA1TXT\\n\",\n       \"u1iUvdvthjrx+/H7v//7Zmb2y1/+MlpeDuMBJop1UxgPzWZz6R02O2dCFBviGUwOIsqR2lPvA/p0\\n\",\n       \"c3PzUu8NwGODI83nilb9DllpKW/evBneUw7jkFP+WD5H9D+PxXWZNX/vy17LGh8OB+BZKm4/Pl8J\\n\",\n       \"89W1Xl+nMJlMpKZJMRy+/RaLRXh/eG5QuQC9A08sbEGKbeP/c31XDarKUGX1keDrxNdKl+aZZEau\\n\",\n       \"Ni/mRKO+CbnjUjkx5QYCXQfXmrSYqURMruPxuNLpSLFithz76NNPPzWzi06cTCZhElQf/JQQuNls\\n\",\n       \"Jj2VGPi4wAMqtRiLIbcTsRh69uxZZWJmYSSLIPGy15keYyY9PA9gbyKesM2WTbGpcpldmIbQfltb\\n\",\n       \"W6Ht1AKEI2njA+ufb2b24x//2MzM3nnnnUqcITYBcvthsckLBR+FvdlshnGpxJ4suMRLuupHfT6f\\n\",\n       \"BzMVTHDD4bDyETw9PZVmUD9h8OSlTJpcPowPtYAEZrNZZUJT9+UyYQx99dVXlXpwGg2+L8bBvXv3\\n\",\n       \"zOy8bVOx0dhL7ao91XIjkdctsnyssC+//NJ+8pOfmJlO1eNjoDFUrDezqpeVOq/uA8n1yTWJ1N0L\\n\",\n       \"91AmGG/uUSYZPs6LKxWfKQez2azihMEeelwW/9zRaCT7xNeNy8dIeSazyN1fe9kPfa7gOtcbj8/H\\n\",\n       \"tzL17sUWUX6c85jFOB4Oh9nfVe/1OplMpIl91fbMdcIwK6a9goKCgoKCgoK1ca2MFP/LYQuwQ+Zd\\n\",\n       \"ghcZxiKHr7o7xcp6c3Nz5WjkdStmv8OIrdDBokC8uLm5GcrF7uMwp6Ctms3mUmJnADt9ro9aXadE\\n\",\n       \"jaPRSO6Qvasp7zoBzmvFv3mmj5kNvq9ns3q9nqSOmYkyW44Zlqoj3xvY3NysMJbz+TxpjgS4vqvS\\n\",\n       \"7/P5PNDjqaTJ4/G4shtXO3klcmaGBc96+vSpvfXWW2Z2bqozWw4VgUjyx8fHUjAOcFnA3nFoBVVf\\n\",\n       \"lBFicjbxg3G8c+eOvffee9H78Phbx+SxagTnXKgIztyWnoniOYLfs1w2QYmI1XkKOSb9y8Yv8vOA\\n\",\n       
\"am82f6l5hb8VPjRBbjJiNs9x5HLOxQegjOiPOhOv6o/Uecqsxoixe7lx5NTz2OnH7LwNchwKYt8I\\n\",\n       \"zJVoIxa0K5Msvkntdls6J/lrV7HyqFANypkoJnTn5/I1q4z7wkgVFBQUFBQUFKyJa41srlaLzWYz\\n\",\n       \"uG3j31arFfQcn3322UrPqBO8sfaGg26amd2+fTvoVlJBQmPAThSr3n6/H3QfnCcQO3Oczzn04Jq+\\n\",\n       \"ublpb775ppldrNbff//9yjPb7XbYJcTqrVbrPqxBu92Weh/s4HyeKUaj0ahc+4Mf/KBSXmY60A+s\\n\",\n       \"R0hF+jar6qXU7vTmzZuSaUQbctgCVRf0jdJmYZyMRqOVNRsMsIrof1UOFi+ntDQM9FW/3w9tyG2O\\n\",\n       \"HRyez8/gZ6Wew+MB5cbYrruW74Hz0C+3bt1KhroA1mWUVr0ul8FSu1jOAqDu6xk/7i8+T4mDUwE5\\n\",\n       \"+Ty1y/bXqrky9n7nMDTqucq6EBNk8/P439TzUvAOS5PJZIlRMTtvAx+SIHZffx4z00BsXvA6rHV1\\n\",\n       \"aLlQlppVn8mOYf5+rHcGut1uJXyQYpqGw+FSAG2z5TyXl2GNVR2Vo0LdNXW4toVUjKpk4SnEyZub\\n\",\n       \"m6GT8LFRph6YxsyWB4BaBN2+fdvMLibps7OzIOzGfbrdbjCxYUAsFhepU7C4m8/ncjLEMSwI/+RP\\n\",\n       \"/iS8xH/3d39nZucLQ9QXH6Ber1dJn7G9vW0//OEPl57BE6ny0PLn5kCZJtR9UiaFyWRSufbRo0dB\\n\",\n       \"SMzOA1gUoA34o8OpBnydNjY2wqKazQG+H77//e/LhRTKDaH36emp/NioMeqFveualwD0MUcY9s84\\n\",\n       \"ODgIcb1YhJ+a7DGeub3xLjDdz4tnf4z7I/UxNrt4X3lhiHHJC1e0Fd4FJeRHfKo6rBNDKpbFIIV1\\n\",\n       \"UlKgPzFX8Xks8Ee7YtGuTLxKNKsWIDEzjSqfWjTlmDVy5xJl9ufrvTlPlclflxJur/MB9PdbZVx4\\n\",\n       \"cyV//FNzSKxdPPyieF0Ta2wxkWPGVddOJpNKLDi12VFeu8PhMLz3qW95nZeicpBJyU0YlxXxKxTT\\n\",\n       \"XkFBQUFBQUHBmrhW0x4zKrwTAqsAsxqbDTgSut/RcowhrJBTAjOzCxZoOBwGJorDKWCXyKJjPA8r\\n\",\n       \"6pOTk4rZ5eDgINwbzADXjV3AvSlO7UgPDw8rZs2Tk5PKNTHBcp2w0OfLi5kD0MYpYTT3KzCfzyti\\n\",\n       \"84ODg0pUbe5XFhF6M0+KsmWwgFzFZEnFqqljmlIuwnVQlDnYol6vV2H8WBjL4z61g8bY5bpx//r8\\n\",\n       \"gdPpNIxt/ywG9xHat9VqBeYVjNTx8bHdvXvXzC4YKc5RhnuoiOhHR0fhuGIsUmLdOtSxDnUCVQ8+\\n\",\n       \"hyOMe3Z0f38/zGm8k1bPUHkL1XNxHjvr+LFTx2aoeE5156WgHFqUeY5NNykmR92Px8TLYBhWQWp8\\n\",\n       \"ctwsZd6MhX7wx9YRm6eQ42gQK1ej0ahIBWJzkWfW2TrETJQPsRIrC5KkY95hWUXMDB2rh3JoUN+9\\n\",\n       \"nDyHhZEqKCgoKCgoKFgT16qRMlveNZkti/2gQTo7OwurQg7i5XNx5bpMDgaDigZjNBot5SQDsPJG\\n\",\n       
\"GIJOpxOuxY6+3W6HMrNOywvL//qv/3qJZTOL6xK89ok1V971NAepnR7vWBXLwrsK1Fnpznhn4K9t\\n\",\n       \"NpuV8AJ8D/RlDGhLlJ2vBcPBu3z019tvvx0i4IMRY6ZAue9yW3jbPwuoeRe1qssst7O/ZjQaVQTK\\n\",\n       \"z58/D3o9hMFQ/c+Ccbw/29vbQSfGz8K7x+J1ZsDwr9dh8Q4NTAhrBxlgp1jTgL5m0a/aVfoMB+wA\\n\",\n       \"kSswXgd1rIiCj3avWKCnT59WjrGwHOxTp9MJf6tAmww8jxksFQE9hVXHrJlmKVL6Gr/z52PM6KTa\\n\",\n       \"WwXrrMOq7OIq8BqvmNBfaalS7Ili766LdVN14vkd4HAKXG41blPvugLmp1arFbKKfPjhh+H3XD1U\\n\",\n       \"Thsq60KObu5aTXsMXlh5c8vZ2VnoHI6HgckLH5iTk5MlIa7Z+QSOa/FhSUU4j0EJ4vDh7na7lWTE\\n\",\n       \"p6enYdLHZHd6epqk6oHhcBgWAij7ixcv5EJvHeSIDPll4MmcqVwcA5T3D5cf8Gl8+LzT01PplQYh\\n\",\n       \"M65hkxP+ZpMTx99KxQdTC/hUNNwYFa8+CjmILaTZVGN2PtZyXmiV4DdGTWP84hm5XnbsmAHTrPI+\\n\",\n       \"w7lmFw4X7JWpYgGlYpeNx2NpcrxKk4dHzrtiVv0QxD6qHmdnZxWzNUdmRp/zO6VMScrBRJnO/HUx\\n\",\n       \"1HlM5ZrW1XW+PdiEzs/17acWUny/Oi/FqwRHRa9b+PhFf0xu4hdmvAi7TD2UeSvWr5ib67wJU+lb\\n\",\n       \"uN/UOMLf+G5PJpPwTebnKUE5solwfbwpm+fyVTcJ647rYtorKCgoKCgoKFgT3xlGipESwvEKGDs4\\n\",\n       \"/Ntutytiv1gMGrWiRqTsN954w8zOI2arWE0ezLao2DfYjff7fXv33XfNTJvGeEeCe+K8OjYCDMz+\\n\",\n       \"/n4ogxLxxtgnlbMNz+Q2AsMA84xi93j3i2u3t7cDM8f388yG2UVMKZjims2mdA0HO+Hd6c3MXn31\\n\",\n       \"VTMze/jwYaXter3eUj49My0sZ4aLQxNgp6SElrkmIG4DlUMPwJjY2dmpCPPVeWonXMfAQiDd6/WC\\n\",\n       \"2ZB3aGpXqZi63FhrzOSaLUeOZnOeNw9Pp9OsZK8vE8r8rnb3fIzLz0mZzc7bHHMFx+RKMcR8b7wX\\n\",\n       \"KtaaYgH4finGbB3niRSD5M/1z/NotVqyzACzx4o9yR0T6zJXahzG4mHlsB18LZ+vWPI6qPv4svB5\\n\",\n       \"/FvKxMbXKgbWs108d7CFCN8nzDWxsntWPtZHXh7E7QYoluoyISA8CiNVUFBQUFBQULAmvpOM1LpY\\n\",\n       \"Jc8eGAtonzhyOFbCsSjquBbggIeKAcNq/MmTJ8ldOzMIOavgRqMRRKb7+/tmdi46BXOhcncxoOvi\\n\",\n       \"MoExYXaH9U5oY1zry2N2znBAZMzu9EpXpQTK2LGAkdrd3a2ETjBLu8ryfTnXGdeHr1W7GMUqqABw\\n\",\n       \"3FZ1/eZDDphVo8UzE8Zs2yuvvGJmyznxfFlUvikuH4T3CO7p6wlwsDy0Jcp5dHRUYUBarVYyArna\\n\",\n       \"wSuWlfPw4RkYa4eHh5VxPBwOo5HvPVLi3Tphb4q1YfYEv3Mfol8Hg0FoS3bfxr1VqAmA24pZVNR9\\n\",\n       
\"XWcHRiyMh7+3cpDg85R4nP+fcnZhYC5i1kaxBVcd+ToXKe1Mnc5JsUFKA5f7vFXO4WfGwPov/55y\\n\",\n       \"P7BmEXMk5pjPPvtsSbcEeL2xsiT5v1N1UG3JZfX3Sr0r646H3+mFFD64MPesEpUWZiZ06rNnz+wv\\n\",\n       \"//Ivl44p3Lx50/7dv/t3ZnYRfXk2m8mPG/Dpp5/W3pexymSIRVCdCRIDnhdXMAvwQiplZtra2qp4\\n\",\n       \"CXHqFP+R4PPYIwzY3d0NbQO88cYbFXH0YDCoJGdutVrhhVBmMr6vvx+3Lybrs7Oz0EYo53w+ryyg\\n\",\n       \"1IeAoZIcrwrV9s+fP7cHDx6Y2cVCSsXcYqj3Qi2kMCa47Dzxoe737983M7P33nuvYi7c3t6uLKSU\\n\",\n       \"CTUXvGDlyOt+cj05OQmi1TqoflcmWTa/qvhGHiyMBnhRxLHXkCA6ZaY1W45HpZ7H90VZzfJjmilP\\n\",\n       \"3ZhI25vsFDg9Ct6ZWFwktZBSH1Kc5xdUfJ5KC6WwTqy32H1QFv+R5jmJ66iE9EDKu48F7bkLxnUW\\n\",\n       \"AiomV51J0SeWN7sYgyAgdnZ27A//8A/N7OL79PTp0zDfIMacmVUcgtRzX3311eAwBtN4nXlOLdbr\\n\",\n       \"FrQeOXNXMe0VFBQUFBQUFKyJ32lGKmVKSKHRaFQYDrM8xujNN98M5/3VX/1VuA7MC1a4JycnYdWe\\n\",\n       \"y0S9TKhVtXKZVuYsH1vI7ML0Y3bBdikTC8fG8VCxo95///0gyAVu375dYa7YdMcMDu4qJhMBAAAg\\n\",\n       \"AElEQVT5zjvvhOf6+GK8m+QdFe7J4wpmJdxjY2OjIg42s0qEaYVWqyXZJs73Z3a+K1Ju/l6c2Wq1\\n\",\n       \"ss1BAMq+tbUV6oRx0O/3KywAQ/UXzt/b26uYX3m3BwZrf38/nKfKp5w1ONaXGkd1IUVWjVuzikTA\\n\",\n       \"bHkXWxf3CfFvWCSO67nuaC/lnMDPVbHAVi0zQzHX/pqYWc+b9pilYnmAn4tiIm0F9QzFAnms4xKf\\n\",\n       \"WxYgxuj58qnwMLHzVmWk6kzUfB6banEsNZ/wfXPL8stf/jL6u5LO+PALZhffmufPn1eSzceSDqcE\\n\",\n       \"91y+VBul3j2PwkgVFBQUFBQUFKyJ32lGSgG7e46a7MGrUOwCe72eFDSDpbh3756Zmf3DP/yDDIyJ\\n\",\n       \"XTHvttbJTP+ywGwR2oj1K2plj90p/uV6Y8cc28VgR4D2Yx0W9B+s08GuYzQahV0HWJ633367cv9W\\n\",\n       \"qyUj2SNwJ8rHZePoz16D0m63Kwwdh4AA+P98bx/cEteb6fxgDLAJvItKsUn8fL/zqgvF8N5775mZ\\n\",\n       \"2SuvvFJpP6WlYhYs5UgRqxfGHfrjzp07lfeMtVRgQobDYRgzzJIopozHdkp/w/2BcqvAjrxTX1Vv\\n\",\n       \"pPqN83DiPBaJ4xjad3d3N7DYeFd5V8zsE8q8SpYDLrfZRX0572edeN33Q6PRWMpiYLbMSKVEv8wW\\n\",\n       \"1YXayGEaFOpYBRWdvC7PphLSKzG8Oo9/V2XA//G3CgcRK5dHnUs//uY2quv/VH/hGFtimKlV1/g8\\n\",\n       \"nSofrr+PqifKlnJoqGOalPa2Dv/mFlJKtAjwAEXH4QPe6/Uq8Yhms1n4HdfGooun4rnUwaeDiZkq\\n\",\n       
\"VqXvY/CRxXnQcvoOTzVzGSG039zcTH5gldlDCWhRpuPj48qCYzQaVQb/97//ffvNb35TuY/yJgS4\\n\",\n       \"3fwz1EszGAwqomq+B5vfFFXvP9aTyUR6KUI8jvQHX375pTR1YGypyYSP+Y+cqhsnFAbFzu8MFjvs\\n\",\n       \"PYdFFYvcMV6+/vrryuR1eHgYhOB4b5QnqUrZwyYbhh9rPllq6oPB9VMTceqjn5rAUx5s/Hu73Q7v\\n\",\n       \"nPpQQZj761//OhzjDYiK9Kw+GD7mmkooqz6uqXb08GOqLtJ3ykTF5r6UqUu16VWZ6fj9Td2TF1ne\\n\",\n       \"I40Xk0pEDijzZsxElbqGn51rwlLjjuMwKe9fIDebhWpLfoafKxeLRUVqw04EyoOQr/UyCH6GQt0C\\n\",\n       \"aS3B/spXFBQUFBQUFBQUmNnvCCO1vb0ddmOgC2OiOM8IdTqdEPUbu93T09PgRpnKw2Z2wVip8AJY\\n\",\n       \"oe/v79dGaY1d2+12Q3RvPOvRo0eS+UL5YZYyu6gvu/EDygzC9wFisTtUHq/ULhYYDofS9d7vdnin\\n\",\n       \"DBPr8+fPl9zFAew6wMopNgrXeygXdsXqecr5+Pi4wgLOZjPJtimRtBqjKXMvmLputysZBNyPTZSA\\n\",\n       \"ElJzvkGP0WgUGDDg5OQkuOdD3H9wcBDGFP7tdruVsAuj0UiOc5QV4/nRo0ehnTnZtN/tct3YNOb7\\n\",\n       \"rd1ur2VCV2J+gHfvPhxInbs1H1PZAvh3/7zf/va3lbLgmmazWTE9q919r9erMNo8ZlNmXz6mGHYV\\n\",\n       \"G4l/U6Jvz4Qwc8HsDtfTg++bem6KTaljmoDYOX4un81mYeyo8AE8R6SE9Fw3P17UmPL/XzXW0qpm\\n\",\n       \"Uu6v1HvG5cCcuVjkRUpXuSNVrLIYY5qKgK7GTkqQX8fexVAYqYKCgoKCgoKCNfE7wUj1+/3AwoC1\\n\",\n       \"YRdhDqDoV+gsZFXanBSazWYycB6eG2O1lCsxmDWIeReLRdj9gYkZDAaSkfJB0HhlrVgefi7vSNUu\\n\",\n       \"AbsCtKliRcyq9uXxeCx3u2rn6Hf/s9ks1B06ncFgUNlRDwaDEBKBmT//3N3dXdkXGBOsMVG7DJ/b\\n\",\n       \"bTabJYPpMXLchpvN5lLgRNzPB/3s9/uVIJnj8VgyUYASSONfxeSMRqPwPjBz5cfW7u5ueAdwP6UD\\n\",\n       \"jAm+0a8qij73S6r9WAyrBOHrINfNW/UXkBK3qnHCbKXSXymWVAU+VGJZL2L35atzhcd5eF5d7r7U\\n\",\n       \"ferOVwyxYkWU/suzT3zey9RSqesx77BWyj9Pab5i8GOM5walkVJCdTU+VXlic7TSCa7CzJjFvztm\\n\",\n       \"y98VLp//ruQ6G8Tg20ONNYa67ypj5ju9kELljo+PKyagg4ODIBiHMPbs7KzSIavEmoL49oc//KGZ\\n\",\n       \"nccOeuutt8zsYnCw0BaINTgWf1gUzWaz8LGEuK7RuIiNwyLwlChQfUhxD6Z72bTHiyI/EXe73crH\\n\",\n       \"dz6fL5kzUBZ/b7UoOz09rZhOYl4n/kPFdXvttdfMzOyjjz4K3ngACw+Bg4ODihkr5lGlYmh5cTiX\\n\",\n       \"zydI9lBtnlr4YEPw4sWLyj35I8a/YSHIC2XfD3w+C8L9R7rVaoUxyOmSMFZx31ikdi/qn06noY3Q\\n\",\n       
\"l5yMmJ+L8nP50FapJNgcQRxjcjabSYE/93WK8gdik7Xqw9Sky/3hP+zcNw8ePDAznVRbtdE6UMme\\n\",\n       \"UQaex7hNvYA6Nxq4ijCe+7FT1/DiVC1A6kTpVyVCT8HPmbw44Q2YlwKocaxiS/nFTu4CYNW+8/XJ\\n\",\n       \"QU6MpU6nE37H/D0ajZLOUrzI9s4VdRuCmGNJDi47Xoppr6CgoKCgoKBgTVw7IwXWhnOyYQWK1ex4\\n\",\n       \"PA47VbBPg8FgiYUxW46/oxgY7K7n87mkrvEMsBofffRRZfWcu0Pknbxa7eJZ29vbYcfCkaaxgsd9\\n\",\n       \"vv76a/lsmMbAcLRarWRsn06nU2FjZrNZpYwqz5iKq6Ncq82qkb4nk4kM38AJaT3YjIf2Urst3EOZ\\n\",\n       \"9TiaOO+e1H3QlooVUXVjtlNFf0+ZhXAetx/aZzKZLJmzAIxLDpOgWEOAd8qeqeEwCQhlofD111/L\\n\",\n       \"fsNz1ZjEs9rtdvgdJr2NjY1K+3pWyUyLXPf29gJjxjv6uthtyoRRFyMI9/bxwWKOI3gu95dPAM5l\\n\",\n       \"ffjwoZktm7KZZcN5uAe3fS7Tw2Xx4yQmslexz1S4gjqxeuy3XHZJsTbqeo7+zc/KNQGlGB2+X6q+\\n\",\n       \"aFsVsiOWx06ZAFNmwboyMK4ipyDDf0+m02lybgN4jLETUIq55Pv4hNyrIMUGAzEWUJ1Xh8JIFRQU\\n\",\n       \"FBQUFBSsiWtlpFh4Bkai1+tVtBbs+g2cnp7W5tgCFCOhtBTYKf/jP/6jmV1uZb+1tRV24eo+OHZ6\\n\",\n       \"ehqYEDArw+EwsAA41mg0QvlQj16vF9qIhcq4Rj13a2tL5hlk7YTZcpBJFjymGCnenfi+2draqujV\\n\",\n       \"er1eUsOGHcne3l6yLRH00efji0HtslkMr8S3vFNRZVYBOXG/27dvm9ky8wN2iRkJ1uEpRgp9o8IZ\\n\",\n       \"4Hx2rsC/XFfWNngWha/Fb0dHR0lWJCUYZkYKUGzffD6vnMfXpoTlHKzVLB3WgJ+XgpobVD0Vg8Tv\\n\",\n       \"jNITesZsPB5XtCDcD1wmxban6sd5y9TOOxVgMSc8hD/mHTNiYmc/TupYI3UtwAJ0Pj/X1V397c+r\\n\",\n       \"Kx9rpVKi7pQYPgdXoftKtWUMSgdVl1PSQ+WWBGJhLYBVv8NqjKl71913Fa3ZtS2kms2m9Xq9SpqK\\n\",\n       \"09PTZITcq4L35FosFtkRw1NCO5hdVN0YTN37Dx5PSmwawzXwYHv+/HnSG0t5TyivDrUYYih6VJkP\\n\",\n       \"UgLqFy9eVIS7SvCtoJwIGMosiDJPp1Mp9lTicbQXv/R8H9zXv5ztdjv8riaW1Edd1WsymQTTLlPi\\n\",\n       \"aD/Ul6OOA3UUO+4xnU4r9RiNRpV2iaVlUR/adSf62WwW3kPe9HjTWGzS5jHrY/twjKK6CdG/17km\\n\",\n       \"QOWAosra7XYrTgF1805dvDNfN+VcoUwnLAQG6kTuPIekzEwpMx6b3bgNUuYetZnha1GPXJNn7gIu\\n\",\n       \"FzyvKC8xBRXDSS3guF9zvoOxBVJOzCi1wIuZdlMLKN4Y4J7Km1TF/UqZ4vga9S0C1LXcLura1P1y\\n\",\n       \"UEx7BQUFBQUFBQVr4toYKezEFLvjV5TdbndJjG6WTykOh8OKmYR3W0okWnc/mM4QioHj9OA+z549\\n\",\n       
\"S+40wXpsbm4GAT2XAfXl+E5gAbDi7/V64T54VizXn0okiXZRyXd5J6LYJ6azwZ4wM+TDVRwdHVV2\\n\",\n       \"QcqE0el0lkwmqToBqaTJyq2dr+EdiYoOj2tYcI3o3xDDK8cGs4udNIe68G0wmUySZhRmx9C+aNNG\\n\",\n       \"o1Fpm8lkIqOsA3XjHGXg91IxFygXx4XyDEi32802v/txsFgsKu0xGo1kOALeQXqTA7OFDMXkpGLZ\\n\",\n       \"KCjhtsqHB3D+SoCZCG+mN7sw7Y/H41A+hH158uRJpYwxd3lfXy4zv2/KZJPj6h4z8cZYqdgxFWqF\\n\",\n       \"6+AZCzaXpvor14y3DurM3P5bFRO2qzhS/FvKrKmYsFTMKDUXqWtjcdMAHhucqQDwmQH4+pRjSMyJ\\n\",\n       \"KSWTARSLphwQlFRFHctBYaQKCgoKCgoKCtbEtTFS/X5/yR045ZbLub2wE55Op3LXh92aj9RtpsWj\\n\",\n       \"uUwUcHJyEq5Fjjyzqpty3U4cO3ne0TO8PqzdbodngMHqdruBfUhl6479zs9CnTzDhefgmNphQBsF\\n\",\n       \"PQ/vCNAOzWazwp6wfkllB2emxusbOOBhXRRmXJtifFS0cw4lwc/wfdbpdEJ7gZEwu2gXMEkxga+K\\n\",\n       \"Xo1nKH2SdwKIQeVGU+NAaRrYpdtfM5vNkuOJGRaEN+E8gl5ErrCxsVFh1E5PT5fCfKAsPBa91k+9\\n\",\n       \"3zGBstpxe7aw1+uFvlbl5zkJ17KGB/2Z0qpwm2Pczefz8B5ytoXcgIw5u2ylfVFMSAyqTn4e47EY\\n\",\n       \"Y8/4fF92dnzx166qqX0ZzFSK+eF//XzGY5LZoJTmKlek78uz7rVKqM56Qu+sw3kw8S6YVa0Bqt04\\n\",\n       \"JMqqfVMX6kC9HynWtW6eNbvGhdTOzo4dHx8nBdkAT6iYxHZ2dsLEgt9brdZSdHCz+gXNqmg2myHi\\n\",\n       \"Np71+PHjyuS6TiJVhlpg+U4+OzurJC3m+FqKlmUoQbMyb6Fu6qPa7XalF5k3zymwaFWl02HvTbXQ\\n\",\n       \"ZrG82TK9rBZN6gXCh284HFbG4mw2W5oAzM5TpqC+7BUFKFMXMBwOK/2q4uCYVYXlDNzj9u3bocy8\\n\",\n       \"2MX7gEXiZDJJmj+U2aPOI009F0CfjsfjsLFhB4gf//jHZmb2zjvvVMqCZ7HQnydwbIxQt1jKoxSU\\n\",\n       \"J5UysSlvQk6wzB85ZSpUqZxSZmg/0fMxNoOzeVN9ZFJxn9RCnkW/qs1z5rJms1k5T4212AdaLYL8\\n\",\n       \"MbXYUAuumHlL4aoE57iXX2Dw5smXiZ9fJ/5OPZPBpmxV91wRvFrkpNqVI9uruVxFelfjiufUnG9I\\n\",\n       \"rA6+nnw/JeBXzkSrPL+Y9goKCgoKCgoK1sS1MVKcEJbRaDSCe7/ajWO1qFiQ6XS6FA37KgCzB0Sf\\n\",\n       \"N27cCGVAQlY2H8ZMYh5Yeff7/XC/uojaKbDbuDK/KVdTNkNgh8/snxdVb21tBSaCV/dgSFgYraDo\\n\",\n       \"U880qThIKhq7Ss7L9wNUPB/uD5iKuO3Z/IK6oP2YUWLzlxIZ+/68devWUkRrs+W8hNx+nmlUrMve\\n\",\n       \"3l6ITYU6qVAMzDTwOPDJkhkY70qwzjS+2uEq93zG97//fTPTjBSzCn5ny+xgnQOCmltUH3H5fWLl\\n\",\n       
\"mCu536GyCaPOycHfj0MnpHbqXGY2l6fMFUCdOTU3hhIzwH48sgmQy6DMborNULGnUuwTn5NryrtK\\n\",\n       \"9kmBGXZ25MHfbKb37JkyAbL0Bf/PKUMqk4LqoxSU2a2O3eP5lcOtmMVDwfhvZV3oEYXUu6fqYVZl\\n\",\n       \"gWMhRepQGKmCgoKCgoKCgjVxbYzU4eGhbWxsBME2VutHR0dZzEyuW3MM2J0qd2Cg0WiEXTCYi0aj\\n\",\n       \"EdzZwc7EVtmplSxE4sfHx0GQe//+fTPTO3VGXR0Vi8DaIaWD8qv1wWBQYfdOTk4qq/7pdBraklkU\\n\",\n       \"9A3vNNQx1B19PhqNKvWLBWxTeg7fFyxeV6JBfkaKQUQ5nz17VtGWLRaLpbARMbAGi8cu7nPnzh0z\\n\",\n       \"M/viiy8q4nU11p88eVJhMXhMqGCZnCcQ9WUdoWpzP87q3i1mZxRz9Itf/MLMzP7gD/7AzMx+85vf\\n\",\n       \"VM7pdruV92c8Hof5ggXXCnU6FLXr9P0ec/P2z4gJ2nN38oByQEg5SjBSfcJZCrhdlMbP/6bc1lV9\\n\",\n       \"WSfGZVIsi2epVD66Otd+f34ML5uF8s9SWjTfd61Wa2UtbavVygoUyfMiwzth1LEsueXi++DeqPt4\\n\",\n       \"PM7SKLdarcq7t04IAi5Tzjsf0455Z6ccDea1LaRGo5GNRqMw0XKcKCwEMGlubGyEidOnSWFsbm6G\\n\",\n       \"a9Axjx8/Dg2S+jhA/M735g6uixgdq2MM/FLAC++VV14xM7M//uM/tn/5l38xMwtJWs0uPubodP4w\\n\",\n       \"wyzRbDZD+ylPCdTLbLkN2Yxmdi6qxkvAJg+fTFXFxmGBMoNfMLPzfoX5lhPxoq1TCxtVDyV47ff7\\n\",\n       \"S6YaD743Fi3qeSrGF8quKHQzq3isPX36tFInfumxoGJRemrCe/78eSWBsvJE4aTUPA7Q13gHeTJh\\n\",\n       \"8xGQKx5mU6p6B1B3jPsHDx4EkyfAnppsfsUmJmby9AtLjv6uki6zuU8lYlVek2yWNztve7W5UaZE\\n\",\n       \"fz/lPGFW7RuFuo+NMhWizGbaEcd7O6qo7b6seIZf/K0iqlbmOyUYXnVR/21Cvas8J3E7+jZaLBZy\\n\",\n       \"sasi1tdBLVRznE3UoqNuUcJlTS04riKCeCy2FFDnpejrzrIPhbrvO6OY9goKCgoKCgoK1sS1Ji02\\n\",\n       \"q8ZT6nQ6tr+/b2YXMaFOT0+DAL0u2jV2f9j57+/vV1zij4+Pw3m8a/c7ina7XTEHnZycJBPtArGd\\n\",\n       \"HKASB7/11ltmdu7WztGrcb4X3/Num0XQinFSMZkYaAc87/PPP6+Ib/l31K3T6YQy8A4Y7Z9KxMrl\\n\",\n       \"Y1G3Mi8BLNz0OyC1I5xMJrI9AG4LlWsP4LJD4K12fCqvHfebL+NisQjjHQzN/fv3Q5tz+Xx/LBaL\\n\",\n       \"SjgFbmcwYjzW2ATJzCvup6Iwe3f/mPkAQN9PJpPwN7C5uRneHySZvn37dqUN2CwEppiZutjuXJkD\\n\",\n       \"FXwU5tjOW5lg0G48DyjRqh/v3DfsYOCTqquYazEo1snPd41GI5wHlkw56yioOUzNbdxfHOvJm+fM\\n\",\n       \"quzUfD6viKE55InK8Zfjkv9tQ4UKaLfbFTd65fwRw2XqlGs+VOJ2FR6D+yF1T8UGs7xCsWNKpuHH\\n\",\n       
\"Tiy2lAqD47/bsXAfntnm8q2SmLkwUgUFBQUFBQUFa+JaGSkVeXsymYRdKf41u9BJYAXZ6/Uq9vyj\\n\",\n       \"o6Ok0JqDjGF3j5WtCrVwdnYWrsV9T05OJBOCnTfnw1s3KOijR48CI3X79u3wDDALSujNu1+1ggaj\\n\",\n       \"x5HZgXa7vZS7CoBu6cMPPzSzc4YD7cTsCHYEKXav3+9XdugqOzjvUtQuQoU8AGJRaTmgqIcS+MbE\\n\",\n       \"tLFjHHWaGRCM31SQUFWXjz/+OPzNjBMYLhybTqcVwTDrknhnpVz2lfhf1TGHpWLwe6aCyHo8evQo\\n\",\n       \"sGcMH/7g5OQklNUH4wWUhsbvYnu9XqUvOJI6n885+8x0SBF+bkpzocIunJ2dVcZl7B7+Heh0Oknt\\n\",\n       \"Jr+XOKbaX2ly1HOVzk6Vm9kWFcLAa3jUM9V7ycFBmQFkFit2v28LfhywO30qMKYPdQAoIX4KSkvF\\n\",\n       \"LD+PbdVeqfx2zCqpnJb4FuF7l8uqxnLjpebcVLBRHicAtws/y89PrPVbhQW8toXUcDi0/f39MLlg\\n\",\n       \"kRATrHlPvmazGSZTjuCs4BOxDofD0NCxFC0AOow/Xv63drsdyoKPXb/fz0pa3O12bW9vb+k3TsSJ\\n\",\n       \"wcjJbTlBcm5U5y+//NLMlhdDQMxUg2sAfjF4MGJRxX3kJwVeNPEAxQKZP4hq4eHp21gUZjVBqUi7\\n\",\n       \"AMevQh+jDxloZ6aruc04uWzsGYPBIEwyWNi+ePEi1F1FmmeoTQLGHcrHi3bc58aNGxVTspqAYp6w\\n\",\n       \"7FgAeC8gs6pZrdlsVjYR0+nUdnd3zezCvNTpdOR7iL7m89G+d+/eNbNqm6A9UpM4L2i4H3w9uK+5\\n\",\n       \"vVKUP8aO8oiNxT5jj1Cc5yd9vpYzF6Q+GGrhE1ssx8BerznJi7nMfC0/yy986tLQ1Amk+T7499tY\\n\",\n       \"TKXM22yOUsJyJa73c6YXSKv3S22C/AJVxelScRxzFw5q3Me+RXgum9KV2W1VE6aKD8Xewqm+UWPH\\n\",\n       \"l2dVFNNeQUFBQUFBQcGauDZGyu9A1WoWq9idnZ1K3Jj5fB52fWr3yeJKL1DlyLxq54Jd5WAwCPcB\\n\",\n       \"28K7CaYAPZ0+n8+TeQRx3mAwCLtirI4///zzsDtNCdY50rg/bqbFsF78CyiRIdoNrBGzC7xz8Kaz\\n\",\n       \"2WxWEfNzH7E7O8rFkb5Tu4QU1a12M7FQDB6xaNeePbl165Z99NFHS+ft7OwssQkATKOff/65meld\\n\",\n       \"o9myE4TZsiCb2TmVQ8u/R4PBQLK3Htwu7Dig4K9nsbESfaLPOSZYKn8dx/9iUzXGG9qRBdKxnasK\\n\",\n       \"t+DP5fKr90eJYfm9SLFAKTP+cDgM/Yrzu91uCHHCbIHfUTNr7MXLsboqti3lIh5jkFLxsvgeypFC\\n\",\n       \"nacE43Vxjfy1gJoPlCj920BdGAf/LjC4zAA7hCwWi8r7qQT+ufKBVqtVYa44Qr8yg9WFQvHncYJv\\n\",\n       \"duqoG7cp5AjAlfQgxj75erCJ1fdbCoWRKigoKCgoKChYE9cqNq/TJ2F1XRfFGGBRLQttoV/h4IVq\\n\",\n       \"dwXdEv5tNpthRa3E0Cyq9KtYFd6AAXfvO3fuhHpCk/TkyZPKCr3VagV2BEFH79+/H9oGuqc7d+6E\\n\",\n       
\"Z7/33nvJMgDdbreywue2VHXnnT/KD7aLd9Qqhx8YE9bGcKBAr+Ng7Q63S45mgwXPKq8ew0doVznF\\n\",\n       \"eKeJ83kXyG2EXIVgpDgsBPprY2MjlIcF1D7MBK5nKE3T8fHxUnBT3M9rKXg3y/2m2tSzrRwdn8/z\\n\",\n       \"u7tY/jXf/tymKhwBjx28w2AF9/b2loLW+h06u9b7gJtmFyxcTDOSE0hQ7aY7nU5lzKqo/ePxONvN\\n\",\n       \"2ge0rQMzUeoZXpcSYwWg4cS8wnVj+B08a3yUAJ2vyxVSqzJ7/U9u8ErFgq2q0fHP9VGzFTOlHCBU\\n\",\n       \"GXw4BT8+9/b2Qt3V9yb1jJiTjWKacqPse8aM2Vnl+KCekRqfijGNISWaBxSbqcZijmbq2hZSfgJE\\n\",\n       \"o3W73YpZiL2TMOkvFouKEHQwGIQPMhYYL168CB8WfEgPDw9D4+CY2cVAwMd9PB5niSCVFxBH1FbA\\n\",\n       \"Qur27dv29ttvm9lFEmSzi4GFj/Ht27dDe+GlQZodswux89nZWZjklGj6yZMnFU+/s7Oz0L5s4lML\\n\",\n       \"j1RiYpWskidzb2rggY37KTH3dDoNf/Nix0+EPIGyByH6GmMntpDy97tx40ZYjOBa1UcqdY7/218D\\n\",\n       \"8EJKxdoBOIYSwMJt9lz1/X52dpZMDYG2Ojk5SabRqaO4vbdLt9utePKw4B4YjUaVKPDsRccmPbyj\\n\",\n       \"uEesL/mD5idQZcaNTdAs7DZbTreizsM8oCLIpxK1+//7cVznWJJajClzSizhsRrH/iPNkfKBWNR7\\n\",\n       \"tUhLmb9yBNyx4+q+CjlmxDrkmiOV156aF2ILTL7WX6ekFmpTr8yGjLrFi/8tBv8N3Nvbq2QL4bh+\\n\",\n       \"LNfIMffVLXI57ZtPBM5IediWOFIFBQUFBQUFBd8yro2R2tzctF6vF1y0seNvt9uBreF8Uz42xWKx\\n\",\n       \"CAxHKo5Up9MJrAK784PB4fg1HGoAz0gB13Y6ncAWYQcZE50iWjvq9umnn4Yyg+3hcApYFX/44Yfh\\n\",\n       \"PI4nhGOIO6UoWzPtQs4rbuXiXhfXBPdQK3awTsx+eepXmZJiiZRTJifFnKFMm5uboSxKaM85Hjm6\\n\",\n       \"utnyeEHfMAvIz1NhHBC5m9kK31Y8TvD8/f39pRhqXB+z5RheYGsePHgQng9TIsOLUVlwnTIBdLvd\\n\",\n       \"Sr5JxY7x9Th/a2ursiPk+mLMHh0dZeegY8bKrMpIpUyO3A9KQO1Zz+l0WhGvc7vx/VJOIXUxppRJ\\n\",\n       \"kZ0vzJbzgqlwH/wuY15S+RfVM1WIhdRuPMaq1ZmpAM+OqvAHKkRATJDtn6VMgKpMV8FM8f2azWZo\\n\",\n       \"N2ZnPfPH5auLE5UKj6CsAvN5NR9hDL6NVMT6xWIR3geO25gy46LubHFi+DE1n89lPMTckAiYDzl+\\n\",\n       \"nnLIUPdT8QmV41UdCiNVUFBQUFBQULAmro2Revr06dIqlv/Fbgw7PkTW9vCRmRViO0XsmrHD7fV6\\n\",\n       \"Szot3D8VQA/MWbPZDLtrMGwx7QY0T9/73vfM7DyKNXadzJJhNwl9AjMAqk64B2tulIs9ymum2Sve\\n\",\n       \"5XjXcK9rA7w+hHc2zEigLihXp9OpsGMciZwBBoL1Muxm748BJycnS3nNPFRwTdxjPB6HfuUgpriG\\n\",\n       
\"6+1zqJldjAVm+1A3pSEDsDtjjEajJS2T2XnYBTyD2yVHw8N6GMU8cvgQ3x+z2SxLK9dutyU7BvC1\\n\",\n       \"nuFiBpOZM896eSgdkRLu+x01sx1cLp/fkN8Z3tmirEqXwno9FWzW91e32w3vs9qp4x5cV37PvK5L\\n\",\n       \"uXQrp5m6CNd18EwTu9grMa/SSPH//Xnq/BhbkcNmXDY0gi+f0oHxcWamUkJ3r/nCNSlrQF19U1Hf\\n\",\n       \"/XjxSGl9+fmexYxZK/x5nIOSx3uKoePfUuz4qsdWdWYK59ae8ZKAj5SPOn10dBQalTsW52HxcnR0\\n\",\n       \"FBYMKnZMCkwlY8Jqt9vhYw3zW7vdDpMqR2F+5ZVXlsp8eHhYmZhjiw6kfGH6HfVQpiIMnNPT0+SL\\n\",\n       \"j84+Pj6WHc8xiNTg9gsQM7PXX3/dzMw++OADM9NU7Ww2CyZWtBGL79l7z5vO+GOOMqlYRhwLzD/b\\n\",\n       \"TMcHY9MIyscfdTYrmZ2bobBY4jGJ+/EH1XuGsicKR6n3wmieWFRsLuDs7Kwivmj2V+EAACAASURB\\n\",\n       \"VOZ6AgcHB2EhVZciKOWJwgtg/3FQmxRlpsX1ZpZ0duDzeDGBscHehXhXPvnkk0odYqY/H+NNmTrU\\n\",\n       \"woIXjKlJmql/ZWZWYE9EZRb0Hxa+Fy/QeRGMcnoxv5lVzMKLRTXNj1n146rMSBxvjM/z40nFh1Im\\n\",\n       \"ttjmLmexxP2mTErqud82lAehL19sMQlwmynPcCC2qOeUOmZxc5+Pws/nqIjpKacJ/q7w2PHv+mAw\\n\",\n       \"CPfkxPa4ht8Vv8DnMcZm+NxUNFxW/BtzkjC7eL/Vt9GjmPYKCgoKCgoKCtZE7VKr0Wj8r2b2n5nZ\\n\",\n       \"48Vi8dNvjt0ws//DzL5nZg/N7L9cLBbPvvntfzSz/8bMZmb23y8Wi/9b3ffg4MDa7bZ98cUXZnax\\n\",\n       \"+uv1ekthCszO82q99tpr5wX+ZnX429/+VuYeAyOQSpyoVqHPnz8Pu0/shPf398P1YDV2dnYCW/Dw\\n\",\n       \"4UMzW3aJrwPYLlz75ZdfBraD3Tdh8sEKfWtrK5hTcEzlZONdMpuIWECvqFV1L4SQYLoVfYMyTKfT\\n\",\n       \"ym6czWkq/x6HI+B+N9Nmlel0WjHlMLujKFj00enpqd27d8/MLISZ4PYA+LlsluSwDDH0er3wbG7H\\n\",\n       \"lIs4t61nVL/66qtK/kXFJH755ZfZiVpTCYq5jjhPRfeugw8zETO/4V1SsW/4GvRDyozo4XeTZlWW\\n\",\n       \"gENiABxiRZnBWWiPuSUVY4qZK95Rp+KDpVgUdX4saXFdKAQzHRaCx2uM7VD/9+crUbWKh5QSoMcE\\n\",\n       \"46k4Umwa8wzXZc14uVAMXI7wncFmWuVYAHS73fCu4dul2CduI2a4cA3/5p1SWq3WUp48fz+WvigT\\n\",\n       \"Me7jM0T4uvvk6yxoT7Ufh8HhdlOJ1v3fuWMix7SXw0j9b2b2n7pj/4OZ/cVisfiRmf3im/9bo9H4\\n\",\n       \"fTP7r8zs97+55n9pNBqF9SooKCgoKCj4N4laRmqxWPy/jUbjgTv8n5vZf/LN3/+7mf0/dr6Y+i/M\\n\",\n       \"7D8sFouJmT1sNBrvmdkfm9nf+vuOx+OlwJhY4Q6Hw7CKxe70s88+C7tAFueCscCOlfO08e41d1eC\\n\",\n       
\"50KT8dlnnwVmAEE9Nzc3w71ZP5HzjFu3boXyoT4cxgHsjWLaTk9Pl4TxOWCmQwlJ0W4xfQfvQAC/\\n\",\n       \"44+5Pe/s7JjZclR6r+dg9kHtvNCm4/G4sgNi3ZTaMWCctNvtCpv0gx/8IER9T0WLbrfbcvcMsA4r\\n\",\n       \"pfvh37w2i+/Luijo0aDH+/zzzyuZ1Fk/9eqrr5qZ2fvvvy8ZIZQL/aLa/vT0NDCOHFLEg7U+PO5x\\n\",\n       \"DXSMHD6CoXIGAsymPHr0yMwuWObxeBzKF8sfh3syu5TSSLHeTeXQ89fyO6+ip/M9/LX8f8xdPGcp\\n\",\n       \"V23WpfjycRgHRioXYIrdiYmDc+EZiZhYOCVAV+VjxkkFFvXn898pQfs6SM3zrL1VjFRKy6fuydaD\\n\",\n       \"drtd+X08Hof3mJ/h9bp8DH3D+iVmnHzfcfBVfr7X1XEfcn95J6FYtoNUtH6+rwproFhoAGXh8CEq\\n\",\n       \"3AefvwoLD6wrNr+9WCweffP3IzO7/c3fd2150fSJmd2TD263l8xkKPyLFy8qAtzd3d3g7cYxYTDI\\n\",\n       \"MOGaVcW7N2/eDJM+04o5mM/nYXHzxhtvmNn5x/Nf//VfzWzZNJHzkh4cHNjf/u158yDG0DpYVVxv\\n\",\n       \"tvziqphN/jwW36LNX3vttZCaA1D0/GAwkGl90Df8oefFEqBMLH5ws9iZP2JYNOGDd//+ffvVr361\\n\",\n       \"dO17770XrucX0y9AWq1WxTNwY2OjshBkWluB2wgLE9yXr8PCiGM3ceR9tAFoch73/DeuVR5fapxy\\n\",\n       \"xHmMd3aG8ItYHkO4H8eWwgIzltrJxz7jRZH6kLOTg/9wxMxOKfF3THzNHy0z/Z61Wq3wDB5rqcWL\\n\",\n       \"ArefMon6j4NyvJlMJhVnHV+GWB1VdHKOv6M2T3VQEcZ9GVLiapSBy+nPU+Yjf5/YgtA/e52FVWrR\\n\",\n       \"3u12K9H9+TxuW286VwmBWWzOpi6+r1rksJeoP0+Vgd8VL0DnMqjYWMoRgNveb8SUGZwXL6lFDpvL\\n\",\n       \"MbfxN8SbIPl+dbHU1LFVzMKXNrstzp+SetK3Y5wuKCgoKCgoKPiWsS4j9ajRaNxZLBZfNBqNV8zs\\n\",\n       \"8TfHPzWz+3Teq98cq+Do6CgI41qtVtid3rhxI6yQeXcCQTfvXr076HA4DKtm7KybzebKTBQDK3O4\\n\",\n       \"zk+n02RsnBQ++eSTJXPMtwlFf7KI1+/WFFu1u7sbGCkOz4BdB5gS3h3zM9BfHGfIiyXNqmEDGNiJ\\n\",\n       \"PH/+XO4YwKSg/DChmF2YnJ4+fVoxxTWbzYq5qtlsVsxQMdOH2lkCuAe7sHMCXW8W4l2REmTz7z4p\\n\",\n       \"tIq8zQyHGn+oG8cgw66u2+1WxkIqvAZfW9dWKTNSjGLHvVfJg8X3NFs2Q/BvGFs8Fj1LqXbb3OfK\\n\",\n       \"8YHHKccoA5R52TMvnFicnSxwDTO0PjMDm1PqIj3nsjQpdkeZ7Pg6z9DMZrMlM2nsWSr8gQovECtf\\n\",\n       \"LsOQy1z5Np1MJvLdSNWpToCeastYuXwZYv2ryuXn3FgePCW18KwoM+sKii1KMY58nOcndU3KEYTb\\n\",\n       \"1PeDMs+amf3Zn/1ZtB5m6zNS/5eZ/ftv/v73ZvZ/0vH/utFodBuNxvfN7Idm9v+pG2xubtpwOLRe\\n\",\n       
\"r5cVp6GgoKCgoKCg4NtG3UIqJ/zBf7BzYflBo9H42Mz+JzP7n83sPzYajf/Wvgl/YGa2WCz+qdFo\\n\",\n       \"/Ecz+yczm5rZf7eILJsfPz4nsfwOo9VqhRAB0Eaw/oOREoICsWtzgXIhTENKe1GHlI5mFWD3dvfu\\n\",\n       \"3YoOhtklDqbIK2+1WldaC3/vBw8e2G9+85ulYzdu3Ag7cxZ443lgM5Ruod1uV/qMA3cyMCaU+Jnr\\n\",\n       \"4YcbhyNAoEJ2/QVYjMguwF7LxGUDG5QSGNdhPB6HfoIQnFkoNWa47CiPF7Hz3yqHFgP3uH37doWR\\n\",\n       \"2traSoYd4KjIGC+KsWXdBvqIA7Sy9tHsvN6pkBc8xlO7XqU94eO8K/ZjgstQl3uOtSy41mv9Go1G\\n\",\n       \"hS1oUFgLJYzm99LvnpWbuUJsTK4rvlb6L1X2GIuCtkf7sB5KMbp8P8/aKr2O/93fpw4xkXzsHDxL\\n\",\n       \"MYopcf9lwf3A49MzfrFxj/kLv08mk2zWy48ptjgo3Ss/14c6aLVaMrMG1zNWlti7rxxMUnMzBxNe\\n\",\n       \"p38aV9WpKz200fhO6qbUAsN/oNrtdvioo9NzzXW7u7sVT0Om5/GhZ/MlC2BxDUcxr4MXGTLYvKVM\\n\",\n       \"EjjGL5wf6CwyZvOWf8HZ0wtQ5pu9vb3wDKaZ2SzH1/NzzS7Ekii7asuYOcinAzGrmnZUmfljjmdM\\n\",\n       \"p9Ng6sRiTsWM4kUOFiKz2Sz0w8HBgZmdC7e9F1OsHj7hLR9jESmAttrf369sPLa3tyvjmydwvB+T\\n\",\n       \"ySSYVZW3Hp7BaYHQjhwZHtja2lpamJvp2FD8/jBVrz7saryo8+oSqObG7vKIeQnlfrwAHsd+TLfb\\n\",\n       \"7byUFu12xWN6Pq8mBVZQIufhcJj84HE7e5MOtylDtYt/Li+kVPmuCle9CEo9I7bhVot1IDUmuT14\\n\",\n       \"Y+jNzLGFr89iUCd8BzqdTph3eA7xbcmbZ37X/YJrPp8HSQnPY6rOfrHOi0T+JqVSSvH50+kUdZSd\\n\",\n       \"U2I8FRQUFBQUFBSsiX9z4iTv1t5oNLLDBaiVrRfu3rt3L6xylZmJ4dmMZ8+eBeE7wjlgR8/nv/ba\\n\",\n       \"axW2gXcF64jdsZJnFkslccVqfTAYBEaD64nfseI/Ojqq7KCUyZDZEdRZMWrPnz+vxH3q9XqV/GHK\\n\",\n       \"BNjpdAKboBwMVDJN3p2AieJdkaeO1Y40xuj5a3mXpXaVapwqsyXvLtWOSjFVKQG/FzEzzs7OKgwN\\n\",\n       \"twGL3THGMF5UfC0exzzmPCM1Ho8rOcMYbE5hNgNscZ1I35eBwfm+lPBUxdABeN5JxZHiccCyBrPz\\n\",\n       \"d8uPBdXXnBQYmE6nMqaZZ67YxFoXAd2zHarNYm2rGHH/3HXi9ii8TNbo27Tc1AnQGcp8lzJD8hhK\\n\",\n       \"idL5HjlyFlW+2WwmrTX+3LOzM5m9QM1t/pvB7z7PFxhbyvSXCn2iGK6c8VkYqYKCgoKCgoKCNXFt\\n\",\n       \"Gql1I4hCkLuzs1MJPDabzYKuQuWOS91vsVjI1TNWqhw4NPfeYJ1W1VIx6nZZPhhZv98P2pzhcGi/\\n\",\n       \"/e1vzWxZ7Kv0Rp51UIxPt9utuFarMvJOXtnuX3/9dTMz++CDD8Ixtn2noo2rnE3AK6+8UmHr2KVf\\n\",\n       
\"QdnLmUn0TF5usD8WIEMnwJGD1bWqj7ld/HN7vd6SUDQF6L/QprD583MHg4HcyXH5UTcAOTA/+ugj\\n\",\n       \"u3Pnjpld9NE777xTKQdHi0c7K4Epzw9czlQeP9ZIASqIo2JHYmPbg13wgdwQDMq1OsYqKlbJg/WJ\\n\",\n       \"V4GYriw1T3P7ASnmah3x91Ug5nTgocIzKG3WdYIdEXL0cGZV/RWzNkCsjZS2FPdhZjVHN2VW/dbw\\n\",\n       \"O5DSsfLcq8TrdW2hcm7mnA+WKqWRujbT3rp0LhYxuYsZswuhMjpwOp2GRQ4ilv/N3/yNXOigsfHb\\n\",\n       \"KnFrEDEazzo6OgrJihXQwXt7e2FAYcEwGAwqguHDw8MwiPDvyclJsh7dbrcy+Xa73SVzBj+Dj5lp\\n\",\n       \"8bp/WXgRpvpZiZExER8fH4eyYpHLfc0vixeH14XRYPrYU8mcsJejZ/u2ms1m2RHDPWIfJRXLBOdh\\n\",\n       \"AbexsRHKgvNGo1GWJyh7KaLNNjc3K2lgZrOZXKB4syCXj9sAz8BCSmE6nYaxzddydHUA/cnxx3x9\\n\",\n       \"fZumPJYA7isf24yfu1gsKoLX6XRaSZbN4L70Y4wX1+yUoDYbWEDhvWCxLEdUx9/syZcTp8uX1beL\\n\",\n       \"aj/+yPnFpIrnVLeQUmNXbcauCrlCelWWnO9Vqj4vAywVQF/XPc97JqvfGOzNrBaWvNBPSQCUXAJj\\n\",\n       \"dzQa1cZfQx19iqvT09NkahhvNufy+XriGamNVwzFtFdQUFBQUFBQsCb+TYnNm81m2MFh591ut8MK\\n\",\n       \"lBPFYpWJfGCIaxUD7tvpdLJMdLu7u0sxdnCPFHXJ4luUi1f+/rmDwSCsxvEvRwtXK/WNjQ2Z/8jv\\n\",\n       \"rvlappKxE4CpUzGDStDcarVCezDrcPfuXTPTpjrsWPgZyo1audjiWWzWw7Gzs7MlJspsuT+4/fyu\\n\",\n       \"hKOd51LKKneb2t2zm68fJzHzpDJl+SjrfA6zdzDFIYHz7u5u5T3Y2tqq7OA42jmLxzGuEP+t0+lI\\n\",\n       \"QbFiLLCbxdhgZoXrpsybPN7UvT2LulgsKu3LzBb3Z4rVSbGjZlWTZaPRqORi4x2/P9dMmzD5vj6G\\n\",\n       \"0fb2djKyvBKR413mnIaKHU2xSWwqVkiFRFhFXL0ucnOoKYF8blnq5Bfq/5c1Gabc9/kZPqZZzHTm\\n\",\n       \"y6PmJ3ZyUM41KeE4vwP4Xb3XXAbuB3wLVLJqFYPOfx89VCgZ73iVY4UqjFRBQUFBQUFBwZr4zjBS\\n\",\n       \"WMX2+/2wAmTmQgHaIxagYpWLXdnz58/lilJFPIdOAyvSo6Mju3nzppmZ/ehHPzIzLaBVaDab8lxl\\n\",\n       \"m/ZQGiIGi5ix0saOfm9vL5wXE7F6IasKdGhWDZlwdnYWjq2iUUOZVbgCCN65jxCEUvU/9DcvXryo\\n\",\n       \"1F25l/f7/XAfpb9RQmfY4b/88kupaQEw/urYTJTv7OwsyXDxzs9Hu+bAeCpsBe+s/A6S87QBKopx\\n\",\n       \"TE+khKooC+rG12O8xKIE+xAa0+k0SzTNO2u02Wg0WnqnUixhSuA9n88roVPG43FgmlC3s7Mz2Q8o\\n\",\n       \"v2K9ePeeKw7mMWO2zAypIK1AjC33QVq5DZgNRpmZvQV4fCihcm7d1P3WBYvDc4Xsub/lskS52iil\\n\",\n       
\"MboMhsNh6Ae0fUzjo5wW1Hj3UFkH1L2YWVVzKr5LX3/9dSgrviWj0WgpHynKjvthbdBoNML45bqh\\n\",\n       \"LVMZDliYr0KZ4H1jdmyVLCbX5rWHqKc+ZpDZctwlAIJxxCBqNBphwsYklhPpOwV07E9/+lMzM/ve\\n\",\n       \"974XzB9/8Rd/YWZm//RP/xTO/8lPfhKuw0SFznry5In98z//86XKY3axuNva2gr1xGQ5n8/Dxwjt\\n\",\n       \"wwNwOp2Gc/ll9yLuWFRan8Q3Nla88FTFkZpMJpIqRbthHLx48SIslmDe5DKz6Qf9xSJotD/MFbx4\\n\",\n       \"UuYKpoVT1DQjRfnyJIa/eWHIUYRxrxR1zMlcPQ0du3ZVkyM2IoPBYGnRDKAP2bsPGxV4YPKmIfX8\\n\",\n       \"Oi8w9pRRi1fltcftoj5que0BqJhrdUglSVWZEnDs5OSkkrCXI72zg4xavKSiOuO3nZ2dpcWtR64J\\n\",\n       \"i82SLG7Hb1cVDypWvsucFyubb/t1Fk/f9jdUOfwAKmVSq9WqyAvWKT+Pk1QMNT7m50oeJ8q0j7F7\\n\",\n       \"enqa/K4wfBovfi/qFkOp8vFcjjrHvPaKaa+goKCgoKCgYE18Z3Lt8eoTu06sNIfDYVglgmFJ0XhX\\n\",\n       \"BU7imopi3m63A3UJU8Cnn366MtWtgJU6R8/GscViURF2+3xZfvdy586dkICZd65+Zc6RzRVS+c94\\n\",\n       \"58Bu7SoHHHbouM98Pq/0La7z13qXed6NsUku5R7LZVVsB+6D35TrrGIG2AzFdHVOrCI2uzCjp6KE\\n\",\n       \"4z6qjqo+qajX/X4/i4Xh8t2/f9/MzD7++GN5rhK+poTeDO/CrKJ2M3PKO2UVOZxNBV4sX5dvEphO\\n\",\n       \"p0s5EWPnqzhddUjlS+M2YtZOxWvzDCKXkcM54LyYE4Q/ppgENpeuyuoo1LFPueyUP1/FglIxwVTd\\n\",\n       \"Yma6Vet52fAI3mTX6/XCu51iXbms/H541nM0GslYSxgn7DjiE06PRqPw7cO71ev1ZG5PD3YS4zyc\\n\",\n       \"KVOikjwoxxa2FHkLS25YCzynMFIFBQUFBQUFBS8B18ZIXcdzCwoKCgoKCgpWxXcysrn3BOLjOXRb\\n\",\n       \"u92uiG/ZYwk043Q6jaag4OezODgV18VM05SgOFPpTfg+ucJOFZZ/lUVoSpioyuWvi523alwYhkrL\\n\",\n       \"chVQdHsuBV8n+l4VbGZSqEsB4s3byquNY4alvN5iaUhygf5S8bAAdrhAWVSZYn2khOUAm7LVe6Xe\\n\",\n       \"4VQ9FotqYtLLIjX2Yb6uS3K+Kl555ZVgikWfjMfjrPG+sbERxg6cTlSbbG9vVxxWUtKBdZD73tY5\\n\",\n       \"KrB5a9WI3ylcZm5YZ06KObP47x2bJevqlxKHw7mC780SCtWWLOzmc8yW57RV3zOW86TiNabM4Hyf\\n\",\n       \"q/q+1LVvMe0VFBQUFBQUFKyJ74zYPHJe5W8WwXko1091P3ZxxK6MXadZsJzDUtWxN7mRbNUqeh1m\\n\",\n       \"ip+3Sr4g/zxfnthOypcnlwV68OBB2HV89dVXWeV7WVhFPJrj+lvHSAHNZrPCTvFYgxPDdDqVcbV8\\n\",\n       \"rjXOv4dj3W43xLz65JNPsuqowPGfFJMGRkrl4WP43ax3kIidz+93LCq1anMcA7M1Ho8vxUhxHDez\\n\",\n       
\"NBtothz3Sd3L16WOAUEokPl8HsYExlC73U46iQCbm5vh/VZZBXC/7e3t8AyUU8Uxi7W7ik6dOk+h\\n\",\n       \"jn3wZZ5MJtnXpMqSusdlWV5+nll9GAEuF7dp7tybig8G9Pv9cN46oYQwzj1LyojlxfRt3G63K/EB\\n\",\n       \"J5NJdiR3D2bv+O9UH7KTRV3S4sJIFRQUFBQUFBSsie9MZHNenXoWJrZj9atxxUb1+/1KPipehaqV\\n\",\n       \"ea6WIrULUJnh+X4+ujODtQAcKAyos9f7HdUq4Ijadbm6/PNi/4/hk08+CYxL3Q4yd4fpy6d0FasE\\n\",\n       \"G1X3VzvWXM2YPzafz8PODeEcxuNxOIb8Z/fu3Qvjl3d6HPk6Vt/xeByYqJ///OdmZvaLX/wiWU5V\\n\",\n       \"3zpWlBkwX052sfb3Y72jCg/BbXUZnSDnMvT1YHdwBurCuQBT840C7qGYJg4E68vrwXkNURaPfr+f\\n\",\n       \"xUipMCMMlYNsVQYmNUfEwCyA7zcul5oH0H/dbjf0K2cNwO+5OdSY6fDu+dwWOYEoY4gxqx4cSkCd\\n\",\n       \"y9YWjBO01fHx8ZKO2Ewzanx/sK3NZjO8x2zdUPUC46qihDNSrDH3ude3cXlZN4nzoItuNBoVFlW9\\n\",\n       \"861WK9lPq+jirt20pz72l6HdU1GJVRJXfuFyPtadTqfyknKMEoZKEZGDZrMZ7scLKPUhiF0P+FQj\\n\",\n       \"uagTdgK5JrF14q4wcsWDHGOH/02VK3VebiyTHNMeJwXlcqqPK37HJNZut8PYRllOT0+lIFfFB+Lf\\n\",\n       \"zcx+8IMf2LvvvlupDy8szOKLBR9NmNsHC8JutxtSAAE3b96UKZDwoUrF2VLvgv9Y55pT/Yam2+3K\\n\",\n       \"BQhSRKEdVk2NZHbRVgcHByGGG5ta/AdyHbMRMhuYXSy+LzOPssk4ZqLF72ZXLzY3WzbF4ll+jq5z\\n\",\n       \"AgJyBeOXMTPyM3JjguVuDNaRa2DcbW1thbGtMhakon/HxiLqx2lmVAqwnM0Jz2P8XmMxh/OeP3+e\\n\",\n       \"/Q3B+OUkzX7OMlv+hsfqWCKbFxQUFBQUFBS8RFy7aS/HnbXb7VZWjrPZbMkEB6REcnU5vlTkYM9w\\n\",\n       \"xJgJ7wbKO5tVo7BzOet2plhdc85ClYx4VVyWyUldo55RtzvykZljdfNMg0rim1vOxWKR3YY57cBj\\n\",\n       \"lqMTo3wQLas2mE6nFfd5NguxUwSuVzkDU6wmM6ugyTudjnynsPOGiP3p06ehrTjS/I0bN8zswpng\\n\",\n       \"5OREhgPAtTyOVdR+lCsnyXEMKvlunUmZy+LnnbpxperGEcY9eL7LBUzkjx8/Dm3O4va6/JFm53X1\\n\",\n       \"dYuxUZysGsiZExQztFgsJCOpWGWVQQBjEG2mHFcUG8UR7lMmHmZlkMPz2bNnFXMfX5saEzHHpZTI\\n\",\n       \"fR2RPsr89ddfh3futddeM7NzdsfPE6enp5VxFzNheocSxY5NJhM556O9fG5TXGN23pZ45/Dv9vZ2\\n\",\n       \"mIvq2EU1zn3dut1u5b1YLBYVWU7Od6MwUgUFBQUFBQUFa+LaGakUsIKM2WhTu0m1qsfquNPpLOV2\\n\",\n       \"i53Hgjd1b6xceffk7+uRI0Ks22nwzgDn5QhMV4HSI8TKkHs/zybxTq8uxAKQ2qmrXd1sNqswjXU7\\n\",\n       
\"DCWSrRPZ57aH7/fT09MwJrDbXSwWMhidv1aJv80u6pdiHz7++OOgq4GOiVlZuMTfvXs33Bu7QR53\\n\",\n       \"0Dvt7e2F3/Hc8XhcEUQfHx8HrRe/C3guxnG3263oHReLi9ySzGpdhc5zOBxK9sWHP2Ehcx2b5fH0\\n\",\n       \"6dPAFmEcq/c2l8FuNBoVYfHJyclSbkpfjxTm83lo87oQFhirl0FKY2im28FraufzeWA2mZ1VY8Kz\\n\",\n       \"3iqMgwIzhHgW61i5bCltZt0cktLwxsqn2kMB5ebyg1XE+1U3RtiZxH/vYt8L//4oRvLs7CwwQ5w/\\n\",\n       \"1Z93eHgYzsM4PTo6qrQb9yu+Of1+vxJsdDweL33DgXUsOd/phVQKdfQo/+4XL3UTVZ23mvcCVOh0\\n\",\n       \"OpVYVaPRqGJ6Ui8SJ2xEp3LsK5VQNNc7YhXkeCzGoCYAb567zP2UYJhjgbGHTuqDVycyzF0grfox\\n\",\n       \"5/uifBAyb2xsLHm5qXJ5xExEZsvjlJOMIu4L38OPp88++yyMRSy8xuNxWOjxwgvX8GSozJR+4m42\\n\",\n       \"m+Ea/ihyYmKz8z7F87BAu2w8H7TRxsZGEGkDg8Gg8mFkQX7quapcL168sDfeeMPMLsxPqt/qEs8C\\n\",\n       \"7HWE+0ynU7n4SQnGFVIbvXa7vdY7bLZcX9xjMplUyle3oeL+gJm3znuOPfj8eap//fxilk5ozcl8\\n\",\n       \"VR/G5nr/O4vJuU4pL2xeTOSI6ufzuVzEs/e8f27ddy9nflVzjNnF5gv/DofDpWTl+Nefx+8A/4vf\\n\",\n       \"eQGMOmERdnJyEsqM2GyDwSC8m6hnjvd7Me0VFBQUFBQUFKyJ3xlGKpfCBHhlHRMQmqV3lZ1OR8aw\\n\",\n       \"SAmjFQuQEqoyrc3P8Kv+GAOnXGH9jmod1LnoXoahUbs6ZdpLsUFcvpQpjlkqVTb1DDYFruqmnAvF\\n\",\n       \"rAHMxjBy4q/4+5gtuyHzu6CuUe8KroV4+datW6G/wAawkL4uxo5/Ppsymc3yu+N2u52MLXMZKMZG\\n\",\n       \"RQlX4SoUYmwwdsp14wkmQJhYY8yVj/5spuel3FAmOWwbIj2vg9w5hKUbdfcBeO5lpyS+Jx9TuerU\\n\",\n       \"+/jixYvKvL6xsRHMm59//nk4F2wHh8lQDIx/RrPZrJiM/XlKaK9YSn8+f8d4fvQMGLNFfA+cxyE7\\n\",\n       \"vMC+LlwOnERizJp3aKlzJmGWDOOdTbscV8/sfDyjHvwsHEN/PX/+PPShZ8SS5ak9o6CgoKCgoKCg\\n\",\n       \"QOI7yUhx/jtAsQ/ehViFK+BjWFFPJpNKoEq1w5lMJuEaiNxUROB2ux1+TwUCrWM9OMhZStOQy9Rc\\n\",\n       \"BquwT6uCdxPYqbC4Oof9Se3U/LUpRipXMH7VbQDM5/OKC26r1ZKswrrlio2Hzz77zMzM3nzzTTMz\\n\",\n       \"e+utt8JvzBT6HfXjx48rO1cGa0eUSzdHFsYzPIPAkZzrRMHrRPAH8C6fnp4Gdgc70OPj4worNpvN\\n\",\n       \"shgp5QjQaDTs008/NbPlfHkKdU4r/jmYL3q93tqBJznXGubgZrNZme/q5qfY88yW5+PUTj9Wh1R7\\n\",\n       \"MDuigtzCkgC2Q92r1+sFFpLHrG+D8XgstUJ+/s8NBMrlXwdcFyW+VlYU9DHGgXpvFbM6m82CQwOz\\n\",\n       
\"Qax5xHn+ve52u+Fv1gEjfAfnf0wFCuVy+nGsvrOrYJ08g9/JhVSu14qKRwKwectTmLGIqwA+bOPx\\n\",\n       \"eGliNzv/2OFvDMThcGiPHj2K3k/FrVG/1XnKKIGi/6DFPLnqoCJzr4pc8xcLZFNlrPPqyYVfqMbM\\n\",\n       \"h0oY6T+kV7FI9fDRxEejUdJ8XOd96Bc+PCawcOVowh9//LGZmd2+fXtpYYFnqfcxlVIB9en1epX0\\n\",\n       \"EmZVUS0LszGJ7e/vVzw6ladUqh1ywOMvFREev+V+7IbDYRDOq5Q5dR8JIbMcuQAAIABJREFU5bXp\\n\",\n       \"0W63K+LbjY2NpFkEY2w4HMpn4AOJPlflbLVaK39sLvNxW9XbVjnjcEwj5cgBnJ6eho86nA/UfGF2\\n\",\n       \"4fCQijsXS6uUmm9j9U1tXgD+3tV5QLIjk9lyX9dt1v3ffJ56R9DWynlmc3OzMo7r5nnlGZi6Jtcp\\n\",\n       \"hdtYOa7FUEx7BQUFBQUFBQVr4jvJSAGpuBRM3zJrkGJUeOXtd5jz+TzsxpRbKO/y/OpaJQ+N1Scn\\n\",\n       \"zkij0VhiwHC+Eigqk+c67M1VmK5yQxhwPXzushj7lFO+OrOmMinkiuZflmnP7GK8sfnYxyozuyg3\\n\",\n       \"xm6uwwWLQzGOeYeGPjg8PKyINA8ODgKzgnelTgiK37e3t8MuNxWyQf329OlT293dXSpfzOHiMmE+\\n\",\n       \"0L4cy0pBiXRT2N7eDu2mmI/UPXLjyKldNsc8UnHEWEzumUs2u6VyHk6n0yyzy1Uh992rSzzrWZle\\n\",\n       \"r7cUggHngIlKhSjY3NysJDDmb5JyZmLRdio232VZb9wH37PFYiG/af79Y0tNnazCZ1To9/sVh5Zu\\n\",\n       \"t5u0LuG5MfY1Jclg+Dmr1WpVhPucqzSVL3Nd82phpAoKCgoKCgoK1kTjZe60ow9tNFZ+6FW7nwPY\\n\",\n       \"lTUajdqo2f75dWXKLXNKh8O6Hn8/tofHkBINfttQup9UQFG1I2SbvKqbbw8l5o8FhfPIDcXgr8Ez\\n\",\n       \"1m1z5Qq9ublZybXHucJi90FZVNurY2on7TVcqv/U+9Pv98MuENHTFdsS6z9m1Mzi2kkWtyrNRgq8\\n\",\n       \"m/Uam2azGTRlaOeYZszj9ddftw8++GDpmNJjjcfjyjjnflVtw27o6JO7d++a2UWkebOL0AkxnYtn\\n\",\n       \"W1utVgg/gV17jDlX7vS5uEzuy3X1MLlMomKAVVneeOON8Ds0hrlsRqvVqrSf6o9YCIjcNkddODj0\\n\",\n       \"qk4CKK/ZcqDQFKOmHML43eJ8tGb1ekGg0+kkgysraxXQbDYDQwdrz+HhoRwngLeIfDOGZKN/p017\\n\",\n       \"qQ9kLngQKTOYijOCjuYJM+UxxS8BTxLqGnQi6jYejyvl4kWTii3C5fXPZbwMYfRloDwlmQbGOT5W\\n\",\n       \"kJlecKkXlz/sZsuTIU8EOV6bdabFq1jcqwleLcJUst/RaJQV2T7mXKGO4T48eaJ8Klo895+PtXN2\\n\",\n       \"dlah3WOLupQ5ty71x6rjnE0rKfMim/u9GacOdZ6k/L56L8bYWFTex4hOjwXQu+++G9rXywPqytdo\\n\",\n       \"NIKZJTWmL7uZVc/2i6tchxAGR7b28yenA+E+ZO9Es/O28qJuXvigbd97772wyE6NPzVmcz0SVzEz\\n\",\n       
\"qfeHY/Oh7mzKVht3jBllkucFkN9MDIfDyrvB/aXKxWX3GyD+bnNKH8yBOE/FhOKYdrzwguQA1zSb\\n\",\n       \"zcp3U8UWy+mHYtorKCgoKCgoKFgT32lGKraDM9O79sViUdnV1dGGvBPC3yo3mXKFVGyAWr3iPI5R\\n\",\n       \"okwY/P8UdakiSCvzV4677FWijqHJEdorRoIZJHU/7oec3cOqIvac+6x7v1xzhYq/pOqr2KKtra2k\\n\",\n       \"qaYuIrQqi2eVxuPx0jg3O2fMWHhupoWlzLakzD48B8TKl9v+uWyizw+ZG5qlLnxBig2MlQkMCMwz\\n\",\n       \"zCqwmBj39vHJPNCHgHJo+bagzFWeMVHsDpvuVbgPBszM6MPxeBz69/bt22a2HKVcWRfYNKacLpjx\\n\",\n       \"WQWXiSHFUFYcZab3fwOoH8do9O3KTA7a/uTkJLz3ADtwYKxxdgJOeOzrz+2skpvjd57veE7K+V6o\\n\",\n       \"8T6fzyumR5VX0qMwUgUFBQUFBQUFa+I7zUgBzNDU6aZyVvacvZx3a1gh847aa2lYCMp6Er+jajab\\n\",\n       \"FXdQ3sHwDtzvROsEnrxL9ozZugE5rwIxUauZ1oyxKJRt5Co6vQe3R12metYKxO63Cq7a8aFOf2d2\\n\",\n       \"3n4Is8H56Dxms1lgLjDejo6OknoZ7NCVeF2FlOA+8s82u3C7ju3eWWuF67w2S2E2m1VYFLPVx3mO\\n\",\n       \"kwaAsQPWri70A+AdAzx4F+3Zqfl8Hp6nhP0M9IMKkKn0XKwDzdGJfRvIdepQjJSyWsTqwWJ+/Ivv\\n\",\n       \"ADNRfj5pNBqhP3BtjHHMmWu47VMa2FWgLBz8XniHG/7uKMsKjx3frszacFvjfVd1xxwzn88r47LZ\\n\",\n       \"bFZCurDjCJfLv388D+Fd4TlLsU88t/pnLBbVoKo5uLaFlH8pUt4pSrSmJk8eHDxhoIGZKvYfDC5P\\n\",\n       \"Ku7L2dlZZaCoTp/NZuHj5alMlIH/VXXhNphOpxUPiOl0emVJW68anpZXMUX4gwHTRF3UeUB9zNk8\\n\",\n       \"i+euEjPKl53NbnwPtcjJSYJd552WElIPh8MwiagFFN8D56mEnQoweaj2WywWsm4+jsxisQh9iUWE\\n\",\n       \"inPG75ky2XDcF9WWVxG/aJUFcMpLKAXlZctxtVRcH75WbSKU4B1RuOE5xkglvs41g5tVHWTqFqKr\\n\",\n       \"Rt7n+irZAt/X33NnZycsIus8UvHeqIWDSjYPLBaLcBwmwOFwaF988YWsH66JYTqdJtuIx02Od7G/\\n\",\n       \"Vi1AvPmz3W5X5s/t7e1KyqFmsxnefywiT05Okt7CdXX3YC9qniM5ZRqXyV8LpMzC7An9MjYOxbRX\\n\",\n       \"UFBQUFBQULAmro2R8rt63oGpFTXALJWKq+R3T+zOmlrVLxYLe+2118zsYrf45ZdfVmJoLBaLIJzl\\n\",\n       \"iM9eVDefzysr6E6nE+7HcWkUXe1XzSpOj0Kn0wn3Y/HfVZujcoXlgNrpzefzigA0xjh4VmQ4HIY2\\n\",\n       \"5J1Fzi6jLqZMSoyuGBUuVwqLxULulP14V+UbjUZ27949M7OQ+JbZO443phLe4t5KgIwdvXIbXywW\\n\",\n       \"oY9wj6Ojo4o7uBJ4qojQypStwL+l2L5VksKuAzw7J/edvw7vP+aLmzdv2vvvv790ngoBcePGjRAD\\n\",\n       
\"iqFYLJhxEY2bocbTqm3VarWkY0HKnOrnTLNluYQypwCpMaHKzmXhcqKsYOzY8SF2H7Nzdhbvg4r7\\n\",\n       \"hbrt7OyEmF3rjD8/T62TbD5m6kwJrf31ZhflPzw8rEhFGo1GaDf8u7m5KR0f0K+QHmxvbwcWUI1n\\n\",\n       \"BvoL71sscwnA1h4VjV1ZiFIAI2m2HKfN7Px9rENhpAoKCgoKCgoK1sS1aqTMlhkXM51ZWu2sY1B2\\n\",\n       \"1FTUbAhjT05O7KOPPjIzC8xU7Fq4knOW8JworbxTAtitnVfUKg+WB7uSAszUqWBl6+x8FNbRG6n+\\n\",\n       \"VDtp3x5KN7W9vS2Fv/7aGPvky5KbHZy1QCpMRiowJjONzLJ4XR8zq7zjAyOBHd/Z2Zlka1IMTsqV\\n\",\n       \"lx0kGBjvyH3Hu3ZmAbzebTAYVFiU3N27Er7yc/k8BKV8GVj3XeF2RPnr6g7GcWdnJ0SCT2E4HIbx\\n\",\n       \"wVoQpUtbF6wPqgtQDKgxpOY2ZiG8Y06sLB7sEMT3xzE1v6Sg5kyzi7o/fPjQzM61Uv69Zf0kQ83v\\n\",\n       \"fg6J9ZHPCcvodrtJgXfKasCaW+VcBajxf3R0JL/X3oryxRdfVPTJXBYw3e12O8wxubpIjvzvtVTM\\n\",\n       \"yvG8jL9ZcM8sK8oCYO7KYaOvNUVM7ke9zlOPB5tKB5ILb/a4c+dOUlB48+ZNMztPf6Geg47Dffkl\\n\",\n       \"5Q7zH9dGo1FZEHJ0Wr7Ov8x8P+XtWGfW8s80y4/7VAffJyoGSG4f3rp1K0ySqQjZsdgzXsyrvPti\\n\",\n       \"9cupuxKqq8Vav98PbaAmDNUG7L2VEqgr4Nrf+73fs7feemvpt9jYUOZI/65sbGxUBKhq8r958+ZS\\n\",\n       \"GhOPlEei2cXGB2XxzgT4PSWGrUOd48CqaY0AtbBk/NEf/ZGZmT158iR8sAG1mdjb27P9/X0zO4+0\\n\",\n       \"vQp6vd5a6UI8eIOWSuXDH2t8QJWnIdDpdJbiDAGpxPLsSebjkqkybW9vJz+SdQskdb7v99x3iucG\\n\",\n       \"Hn+pxMlKQB1DasyiP1qtVtaiQc1jqu77+/uhvyBH4HmRY0txZHmz/AwCCuxFr9qFF1x+06EW/K1W\\n\",\n       \"y0ajEfpSrniLaa+goKCgoKCgYE18p5MWq7w3HN0Zv6+6s+r3+xVBLt8PuxfVNopBePXVV+3Ro0dL\\n\",\n       \"x1SyXGaUciMk1+2Oc0Xf69D7uaxTigVS91DR5DlMghIP+p1oLEyC2sH5Mqgdi2KhcneYsVhLYAu+\\n\",\n       \"+uqraFnMLnaEgGIuY/3vTQQcGyW1q9vZ2QnszyeffFL5/dVXXzWzc3rex/qKRR2HqBrCUtVHm5ub\\n\",\n       \"oe6op2JqdnZ2At2vxgvAbMFisQjmR1zLyAlREcNVO2sovPHGG2Zm9sEHH1Ses7W1VWHptre3w9jh\\n\",\n       \"OEgKngGpY2NykctIcR9yXk3+l7GxsRHGBI83zAO4Rj2LGROc3+/3A4MNofRsNrOf/vSnZnbBgH78\\n\",\n       \"8cdBMsAsfuqd4nG17jjJYcE9a6JCBXAoAYDbl3Pp1YVeSMGH52k0qvn36oC8eQcHB/bZZ5+Z2UVb\\n\",\n       \"tlqtpHMAcFXMah3ApBVGqqCgoKCgoKDginFtjBTsu3Xi3FWgmIZ13KPZBRMrbuyOer1etng0R6cV\\n\",\n       
\"04kpm3Yq32BMZHgVgtPUbilW/jomClg14ByzeqnIzapMnAdLhTUA6gSbOQwcB+lkEbZ35eVygk05\\n\",\n       \"Pj6uBILkZ/jI5Vy3mK5C1SOHoXnzzTft7bffXrp2c3NTRu5GuThw5N27d83Mwo7T7GJXjPuNx2Mp\\n\",\n       \"4Ff1BHD+dDpd0jmoiOBA3fuYCsqY+s2sGnW81+uF57BeS2nAUv0AlmcwGFQYpJ2dndCv7F6uNEh4\\n\",\n       \"Lpc/xbzngsf9qvM2a5CguWNmUjmEeE0LC9X5Oh+OZjqdhmdwNHj8jmM7OzuB4WBWE8wW+mgymWRl\\n\",\n       \"YKiDCr7J3zA1P+bO6ZdhUVlHtCqDy9oi1Ye52QFydc6pPJ4Mr59cpV3qGKlrW0jFRHh1Ymj2FsDf\\n\",\n       \"q35QY8CkDzFsXRRlPJ9pd0y4V0U3cn1TAjrALxyvYiHFZWEqv64sHv5jtLu7W4kvwiadWBnMzql6\\n\",\n       \"NvOZ6dhMXMa6aMJqcZXy+PP3N1uOzKxiMgGpj2ev16vEVeEPIItqU/FovOefB6h1XhSxJ2pOWQHe\\n\",\n       \"xGDCOj4+Du2BD/mzZ88qcYbYLJQSiSsan8e7EvjnYjgcysjhQN1CSt0v9cFQ7Qywqdd7FTFu3LgR\\n\",\n       \"noG2HI/H4XqOcI9FqYq9ltrEqI0SLxrZQWNVzzF+hlqMqHHnFy+9Xi/LsUDVg02APIfkbu44dlvs\\n\",\n       \"/Lo5hOuduzBbdU7n/lqVuKhzCMO7PhgMKlHpOZEx5urJZBLaGs/lse1TxcTKrJD7TVKbsDoU015B\\n\",\n       \"QUFBQUFBwUvCtYrNWcSXcuOvQ51Zw6PODRlg0x5277lxSWJxiXz8DY5Ezc/1IujYbsHTwXwvFf5g\\n\",\n       \"HdTtNFOmrrrdnY8YXLe7VGESUsyLMu3WsZ65O1LFhAGLxWLJrOCfq8ShsVgxZue7Nm/q6nQ6lRxa\\n\",\n       \"eA7KYLbM5Kj348033zQzWwqHgDJ1u135rqTc0PGMbrcb+lP1Ebuo5wjaY/cBLsNIbWxsVHIUzmaz\\n\",\n       \"yvuq3NAVVAJoBoTlPtK52bnbOO7NjgoenGeOQ6dgh4+xw+3C/ZUSJafA7BPH6/GC7Do3dIZqX3XM\\n\",\n       \"s1Scw7PuveVxaRaPw8b1NDtnzlPzPs/lqZhWCqrMsfk2JTbnsqhvB8sa/L3ryuNZIu/gkQOYVc2q\\n\",\n       \"czxn48i1LvH85OvE3wYgZnHAfILnqjUEvqOFkSooKCgoKCgoeAn4Toc/YHjmRQVJi7mrY9XJK2Gs\\n\",\n       \"kLGzXVAwvxRb1el0wuoezz87O1tbH4B7mqU1WezWrrJX41m8Qp9OpxUGJFeXtkrgTl8Gfg7rhHJY\\n\",\n       \"Ni4rzq9zj1e7qDqX5JQguy7IXZ3WCr95RorHp8p5V6dpwr15PHO/+2s58J3Kv+ehWJSNjY3AhKUC\\n\",\n       \"aeI5/hl+N650Tu12O5Rf6YoUa8BMnWKQ+Fq/y1WBYJlNwHhi7Qa77OcwN71eT4q50ZbQSCFQIWMw\\n\",\n       \"GITxgfbgNkP/dzqdoG/D+Z1OJzwX12xuboa/eY5BndBfMTZAsQ9+vK3DBLJWDvdmZwE1P+Swxe12\\n\",\n       \"O7RRKuhnLlSYATXXLBbVPHfM3tWxcql5kdlnFTg4tw7MDPF3wpevbk5V5VP59+p0xv7erKlaN6vA\\n\",\n       
\"xsaGzN3KInj8hrbmeUDpEb+zYnOz/MjmsfNSgr1VY2SoZ9y4cSMMCk7squhqH0nXm9jwXG8qUBMC\\n\",\n       \"e6esIixXdfKJH3NR1zephSMfV+fxS+rbTU1GCkrMG4stpcq86uIqtgAF/O/8DLVAwm+DwaBSj7p+\\n\",\n       \"rfNm8xNBo9GQXlEAxzlLxQDiBVxq4c2pGDxl3+12wzHVVyjneDyujD/lLchpktiLlRN2+489LzZY\\n\",\n       \"uO/Lz++wMilzGVKLNUbOZq3ZbIaFlkqdgYwKz549C22IBUi/3w/1xLXb29uh3bjsufGc/EeJExmj\\n\",\n       \"HipdFSO1AFKib94EKvC7ivqiL5XTiZrL1LtSt9lJIWbOzZ1XUnNNq9WqfBN4/kmNT4Vmsxnai9vc\\n\",\n       \"b3JizgapvsYY57LiWK/XC2MRz+KxzdlAfKwwRZ5wVHTuy9xNsYLqhyI2LygoKCgoKCh4SfidNe1x\\n\",\n       \"lGO16lTxTZgtWjV+EZvffKTyRqOaGy8XKVZjFcRcXFPCxFXjjFymrMwWcV+m3PdTUCaiWHyjup2e\\n\",\n       \"2bJZaB3zrAczUryz9ffjnSabYT1jqdhW1QZ1JtmU+ZBNSiyu9XGJ6pwJ+H1TZjJ1Ta7oNhXqgvsQ\\n\",\n       \"sWVOT08rzFer1ZKhU1LMBp+z6lhQovo6KOYKdb9165aZLZsF0X6DwSCUH4xlu90O9+H5wJtqWJSM\\n\",\n       \"Y71eL1zD0fNxHsYRZ4uIsU74zbcfMxyrRp+PzUk+PhSXNfeeyvSY6vtut1sxu6mo4qocitWMOSzx\\n\",\n       \"POHL0+l0sh13AK4v2p+v9e3B56no80CMuWIWOHbtKvBR1ieTSbbFwZdpsbiI54V7jEajwkgVFBQU\\n\",\n       \"FBQUFLwsfGcYqZSNV+3aE/cO15jV5z+qYyHWCTxpppmVmHBTIbUz44BiQEw07cNK1Om01mVgcuDb\\n\",\n       \"koOqqp1LXW43IDeKfV3dVh0nqd+VK/4qeaFS4QK4/dQYA9RYY9YlpSdjhog1TziWM46ZJVXM1KpO\\n\",\n       \"B3VsUIwFVFDvF+f5MrtcUF3F2ikGNnatch64fft2uI+Z2ePHjyvXsnMA6j6dTqUmS7GAKvCocswB\\n\",\n       \"mB3xTF5dEEzliJI733LZUWZcW8fEpDSGMfD3Kad8q+otzfKDvuL6wWBQqSu3uXeK4r9z25fZOIW6\\n\",\n       \"Onlm8NvIi9fr9SraYBWyIfZd4e+J2Xm/nJycJBmptjr4bYHNPalJPfaR9QNlsbgIZ88TZN3kC6iJ\\n\",\n       \"JUdwzROBum+u4I29nhTN6yn2WJn8QsmXJzWQLrOAqnupvEiShaIKbIbA36kFJn/0+UXynhmx9ssd\\n\",\n       \"J4B6SRk+cjiL67ntlbkKEw6beHCNSgECzOfzyiKM2wD/1pm0+XclRq7zMDQ7f5/QT2hzjtek2ozv\\n\",\n       \"q96plMkLz+Tncd2VswEfYxNXDGyu5P5SJtvcVFIebHbjxQHaI9XmjUaj0kYxbzzlYabmhJzF32Kx\\n\",\n       \"qMzhdfNLaiPHf3ObqjhNbHL09+DneS9FRt3chbrlOu2o8+okA778/tvk667iYHHdcs2j/P6gnrwB\\n\",\n       \"z5kXY/EfeVGNZ/nvYa6DkUKjUY1FORqNKnGz1HyrTO1KHpQVc3Kt0hcUFBQUFBQUFFy/aY9dks3y\\n\",\n       
\"c1mtAuxEAPUMtXM1q4rcG41qJHKz1c0QoMsnk0ltCAO+ry8Tjsfchn2dYuXKjSzsd0sxcXjKZKoY\\n\",\n       \"KcU+MZSzgUJqZ8lsh2czVxGb55g/WbSIvhmNRiu75bKpyu+MOKZZynQTo+eVGTwXPiZLLGwCcuyl\\n\",\n       \"8icqbGxsJGMAKUZssVgs7a7N4nVKzQksXvUsy+7ubnhmaqe6jigd6PV6lQSrZ2dnoS3xfNXmvV7P\\n\",\n       \"XnnlFTMze/ToUaWcbKb1dVPhQ+ocC9ScqUIT8FhU7vs5jHNdmzKzmxO6gEPZpCQGLwN+blXOOLGy\\n\",\n       \"qDZX+Te57b3pdBUWCO3Kpkc1TmLl5DpxGZgBTslN6rKd+Ij1zI6mvsfNZrOSQD02XorYvKCgoKCg\\n\",\n       \"oKDgJeHaGSkPDrCFsrFgXK1IlVCdd3S+jqxBYBturhBc1Cfrmtiq2K+o1U6XRZq55WPNQ274g5Tu\\n\",\n       \"C+U1q9eOqTZXWhWF3NxzKbaO3am9uJWfy7vYHMZM3Yd1aWz3922uIrTH8j4qnVPqHVAMDcY4t4fS\\n\",\n       \"m9Q5V6R0Z6k8Y2ZVPVe327UbN26YmdkXX3xROZ+REt9y3fi94MjYKWB+ULqjlCB/f3/fnj59mrz3\\n\",\n       \"ugCLMhwOQ1uCyVOZDVSbt1otu3//vpmZHR4emtl5vj5cy0zNqg40ap5V7AjAbDWzCoByEqkLqLwO\\n\",\n       \"e5oDtox4zRWH2qlrMz/+cufqWKgDxe5xm/r7x1hF3/7z+bzitDSbzSoRxheLxUqi/Muijn3Kxc7O\\n\",\n       \"jpnZUtiPVEYSZsRU7ltYLGKM1LUtpCDa9OLXuo8rT/o51KSipvk561C6LJwzq/dEUC9DrmmHkTKD\\n\",\n       \"xQZgaqJLLWhW8ZQEVD1TZY0t1NSC1qdbWSwWWYJnPJvvx5PWqnFrlFeHGjtK0MxRzHmRgJcev/FE\\n\",\n       \"iHhI+Cgytre3K8dzk2XzpKmSKq8aKZnBGyFOvWB23lf37t0zs4vF1bNnzyr3UPUYDofhIxeLr+M3\\n\",\n       \"YWb1sa7MdGRzZRLtdrsrfyRzgQXmfD4PDgqqbwAVSbvZbNrdu3fN7GJsP3z4MPyOiOmxJLyp90HF\\n\",\n       \"quN3atX5IsdEzqiLMcflg6MH6jGZTJKykVScq5xymS2nhVKpSXixmCIE1CaAr0H5eCzmAnPmeDyW\\n\",\n       \"30U1l/H7gOcCV+GFx45U65hWfVzHXM/gmPMCvDoxP0FCU0x7BQUFBQUFBQUvAdcW/oBFcR5Y8fIu\\n\",\n       \"Qa18VX47z3bwLjUVZbUuh54qd67ZRTFECiyWU3mVlDs47zD8/RWdrlzhzUy6i6YEyikROf+eCimh\\n\",\n       \"dmaxkAjK/TjFIrEJw9eD2Y7UDjdWPoCT3KZ2Unj+yclJYB2++uorMzs3fcF8c3BwYGZmT548CdeC\\n\",\n       \"cVImwMPDwxBbCMLi6XQqWQXldODbj9vJC1/N8mP88JjFM8CEjEajEJH7Zz/7mZmZ/epXv5L3wQ6d\\n\",\n       \"2w9MA9pC7SjNlusOkTYzX3589vv9pXya6t5m5+9ZiiUEciP0czgFzi2W6hs/73l4xpSBOYujU6tY\\n\",\n       \"ZQpqxw+o0CMM9GWr1QosWp3Z3I+3VqslQxx40/10Ok06N6hsBnXmbX8t15PPS8kh1Pn8rnrHIWab\\n\",\n       
\"Yt8VJar3An+23qCvO51OJaxJs9mU9/PjJBXbzsM7pajQPrmOBWzeVjEU8S1st9uVECv9fr8SB43H\\n\",\n       \"LFioFy9e2IsXL5bKnMNQFkaqoKCgoKCgoGBNXJtGChocZmHMzlfhuUzIusjdLeaiTmye2qnVsVl8\\n\",\n       \"HsDskb/Ga5uURirVlqmdmWq3OjYGWGWH7sscy1Gm6pESKPP5/lpVj5hQ3fenunaxWNju7q6Zre76\\n\",\n       \"v7e3F3ZUubs/sC7YTTFifeTbKqYZ8EztOq79GLNbW1sVfc6DBw+CgFuVnyNXY6estB6LxaISNsBM\\n\",\n       \"BzD1u/Z+v5+dp9EHWlWoG++ss/T6z7Ozs+Q7xYEllTAWGjQwGs+ePatoqfr9fmhDHp8pZw0VEZzf\\n\",\n       \"1ZQWiJmuy4QayNEbdbvdMKZRx+l0miVoV8gRgvOzzZbnH8/2pxxXzM4dGszOdWypoKaNxkXwVWZ8\\n\",\n       \"0Ne5oVXQRrm5+RTqslTkXG+W/tYwUtkCNjc3JWOdE2Vd9TWsQd9JsTnEib4R1CS9s7MTJlicHxNa\\n\",\n       \"KupPmQA9fXrZ9CLrIraQwuSPOsbKlorfEfPayxGCx7ywVkUqPhW3pVrwKbE50+OpCYqxqqBcTYY5\\n\",\n       \"3ntmy6Yln7piNBrJ9vDlOzg4COY91JsTz6YwHA7Dh44jXKc873ITeHPZ6zYHZuft4hcdyqvI7EJU\\n\",\n       \"j/PUJLqzsyMXpdzmfkzgmWbLphIf/TvmPanGwv/P3pvF2Jad52H/PnWGmm/Vne/tZrPZElsSSZG0\\n\",\n       \"RFmCJYAx4AcDAWLkQbHzIBhwAgRIEAd5i/wSIgH0liDIgwMEsYMkD0JsBDCch1gOCMWKRFoGJTAi\\n\",\n       \"uyVSIkX27eHe233nmusMeaj+Vn3n398a9qmqria5PqBxq88e1rz2+qfv90EOsc23RLgaDoehfljr\\n\",\n       \"Mado9CtHhql+un379tz79vf3Q1343TjoY74cHR215jSnNeJy/UdzMBi0HIZjfYDgCphGmQW+K/jQ\\n\",\n       \"BMTmWAnOS8hW+1jp3qrSM/HfXrjzf/v9VQmiqo/6/X4w7fH9KFclLefy/beIy43V1az8wBdbT547\\n\",\n       \"bm1tLXw/cW5gMzwLLv7bEYuOBXNAdTavqKioqKioqDhnfGx4pFiS9KrumCTpT95sdlHajlxINNWv\\n\",\n       \"9ay6nuq7RRi/WTOhTvyqvJS2hXlwfAirr0suTNQs3lepPsqZLdXpHyg1PQI5vhfWJKi5A5RqaLhO\\n\",\n       \"vq7cp6zVwG+Q/BTr9GQyCSHs7777brgOR3VIWawZ8LxoDCW1M7q2VyFXBmumbty4YWZm77//fus+\\n\",\n       \"Xt+K3gLvwW/cXl7/LP2ntBOKj8q3i6+xJjSlRRkMBkneN6bEwPtSzutmp+OEfmFNE98Dx36M55Mn\\n\",\n       \"T6RZDpohaK6UqYPntlp7KIMZ37H2eH4qDSzGemdnJ7nvqCAghrI4+P0sZnryDvmxDAeLIqat9P3C\\n\",\n       \"FpFSZvN+v59cs1grBwcHYayBR48eJekWWPvI2kmzk7ntTWfKqsGuJQC7nvj2lKCrWTbFJ4n6mLUd\\n\",\n       \"6s3mNdnYz6tGqqKioqKioqLinHFp9AcefGpPSXo4CbPGh53mUgRhfJotcTyM+cOUnKBjrNgqpJMl\\n\",\n       
\"OA/W0KRyXTFlBKROFSKq2qekTrC58rMxzYuvq5JYlaZJUTr46/4paxVhAAAgAElEQVR97IyI/lBh\\n\",\n       \"wmpcU+PAyDk5KsdyBdQfmpqNjY2gTVJzHA7jx8fHQRPF4cigTICUdXx8HMad533K4V6B/VxiZIBm\\n\",\n       \"6XBwlauO1wr3mWdKZn9H1tChTVw+fKggbfv6et9C5TuSojfxUOOkHLx93+SIHVl6LwqvJt8N5WOo\\n\",\n       \"6gIt1P7+fugvjNN0Ok2Sm/p68t+j0ajV3mfPnrV8X1gry3sXymP6GFWu3zvYoV3tbSmSS7P2XIk5\\n\",\n       \"kZdqSEr8Jkt9W1dWVsLcVv2h6prTIHMwhGfj51yW2E8Gg0HQivJ69pQIMc2l92NV/Zjrj9icxrWu\\n\",\n       \"/mu8n6B+KgCCNdNK251D1rTXNM0/NrN/28wezmazn//wt6+Y2X9oZtDN/4PZbPZ/fXjtN83s75nZ\\n\",\n       \"xMz+/mw2+5finbPl5WUZocfqSl5A/uAzHo/lwaLETMGDft5Jkrl8tXGXRMyNRqMwiNwOqOyxoFSE\\n\",\n       \"kwfK8czgvtyUaSiFXFRXSs2roCJ9YiZSLHBvUkC9zOZV9V3bqA5/sc3XH15ms1mrfr5eMcQS9qai\\n\",\n       \"xXCAGI/HcqMrYYFfWloKbU59XEsjNXEvQyUj5XHzTuBmp/xaL168mOPBMZt3wp/NZuEwinKUA7li\\n\",\n       \"p+/1TpP4LmLaWYQh3+xk3FBOKgqQU2ehLJW6o9/vBxMwm87QD9xXKagoUJS7vb0dIi+5z1S/4Rle\\n\",\n       \"e11NyMoEpUyVzDuGcegqVCwCXtOLlqfWlD+ge+G03++HcUJ5h4eHrWTpw+FQCiqqX1EGC2h+rqyt\\n\",\n       \"rYVnzqNflYDe7/dbplh23WCXDNzno/L4t4ODg1DnkoTWHudh2vufzexv+vea2X87m83+yof/4RD1\\n\",\n       \"GTP722b2mQ+f+YdN01TzYUVFRUVFRcWPJbKmvdls9v82TfOquKROZn/LzH57Npsdm9kPmqb5CzP7\\n\",\n       \"q2b2r/2NBwcHcyp9nCJzJ1zFM+FVinOVbJqW+WM2m4XTacrhLgelXVAM3ip3F0tq6ANc45MyaxIg\\n\",\n       \"BbJkCAmD26ZO3KmcXSzB5dqpVNfMWmymTW0p+gV+lqE0USz5o+0cnKBU/968cHx8LOkUPJTpMdZP\\n\",\n       \"at5Cakb5yuGRzX2o0+7ubmsMl5aWgsYC7WBzAGuwkLT23r174beS+c3jq8ASYsm6YU0y1uDa2low\\n\",\n       \"H2Asm6YJ8xdjurm5GfqFmd4Rsg+W8q2trTkpG/3lGdAZo9Eo/I7xVbkHY1BaNox1zmEcwDyI8eYB\\n\",\n       \"aq3k9ki0HVheXpaJlv27WRPKJjFv/mTzHKMk6CSmuUpB7eusifKaZh4fTkDs1zLnB1TO8GyiSmnH\\n\",\n       \"+TfMA9B58NwFVMLw6XQaNIiqXGY+ByaTyVxSa/zGDPlmJrXbZqf9hfYylxrW0XA4DG3Cmtvd3W3l\\n\",\n       \"0OSxVHk61dri76J/j5pfo9FIZvDA32pdKK0TnznA2YW28behy5ngLNqi/7Rpmv+vaZp/1DTN1oe/\\n\",\n       \"3TWzt+met83spTOUUVFRUVFRUVHxscWizub/g5n9Vx/+/V+b2X9jZv9B5N6o2MGnRT5t+xMhh1ar\\n\",\n       
\"cNqcgyznl8L7UbayD+OEmyOKU6GfzNSu/JwAvHc8HgetE4d2e2brlZWVUD9IGE+ePJF0DyxNAMpx\\n\",\n       \"W4UDK+R8vHz/K3JDDk1XfjIq/FRBadZ4HFIMxCxZl0gbsYABICdZQ4MIPwZ28AW4vawl9ZLUaDRq\\n\",\n       \"5cva2dmZ8x/CNWii2I/ES7Mxeg6/BpSzPtNqqGdZC+l9EQeDgfRv8b4lMc0OJEesmffffz/8bXY6\\n\",\n       \"P/g3D5UXbjgcSj8NvE8FqkCDcHh4mJ23HihL+aBweYqeIwX2kUJwQoxZH351GEtug9LkYb7HNBwK\\n\",\n       \"eDfPF8U0z9pJs3lthpqn7H/onYgnk0nLx0u9I7a34/dFaEE4GArt8WPH/rPcXqVZQb/w+mcop3ZF\\n\",\n       \"Ns3ZLnAPrqO9Ozs7YWxQh6Ojo7lgGbwX+xP73rKWFUj5XPJ+rDIq+N9iawy+wzzPsQYwx5guiR3L\\n\",\n       \"oanFb0xbUuJXCix0kJrNZg/xd9M0/5OZ/Z8f/u87ZvYJuvXlD39rwU8wrrTvQJ5YypExFSHBm6FP\\n\",\n       \"0uihzDilamgfoWU2n8oB71NQySDVPepj6J3wubzcYYGfVZFyXh2rnHQnk0lrwikTETsFcv3xvkU/\\n\",\n       \"RHiPWZ5bJJVuJcaHVVqHFFRkG//mDxEIxOB68qED6uhHjx4F0wFMXpwKQzkjA8qJlQ8JbA7164UP\\n\",\n       \"mABH1KggETZbAtgAnzx5kjRX5ZK4qrmTM89xklKU4Q8IXEaKE24wGHSev4CaQ6urq61DCwtAub7y\\n\",\n       \"3FfsxM4HEM9zxQE8XIYf19zBQu2ZLGDiI8wcP6gL6scRqSrBOLsy4PDC95eOhxdE19bWwpxIRfmy\\n\",\n       \"iZLv8eXGojJVXyozFAvc3sTKexavEb/vq/FiExb/hvmGA9XGxkY4kGM/4TmmonVL99TU94lN7Vir\\n\",\n       \"4/FY7mk+5ZRZ26TKCa/VfOI9y6eQaZrGvvKVr0Traragaa9pmjv0v/+umX3rw7//uZn9naZphk3T\\n\",\n       \"fMrMPm1m/0YWTARpFRUVFRUVFRUfF+CwvLS0lD1IZTVSTdP8tpl92cyuN01zz8z+SzP7t5qm+aKd\\n\",\n       \"mO3+0sz+IzOz2Wz2ZtM0/8TM3jSzsZn9x7PIkTSltks5qEGKHY1G9uDBAzObD4X26tHpdJp0HlM8\\n\",\n       \"MjHuoRhYWmSzYEloKPPDoL1sAlL5zQB2rlZO0zF+Ey9hxjQ5nPATbWL1ObeBcXR0JHO7eS1BzJk8\\n\",\n       \"pdJX0m5KAmI1booviTVmShJV13hsUppLmKPYRK1M05DyXrx4IWkAMB5QR9+8edMePnw4V8bq6mro\\n\",\n       \"59LQ5JhkFmvPdDptaSG571lDjP6H2ZqlR/zN46HajWvME8dmULWuIVmzRpqpDvz+w9ooNlf4fuB2\\n\",\n       \"crLnRaHm7MbGRqgz5zcr4dBZX1+XWkwvjbNmSEElBS41cal9R3ELKfMhOwL7pN8q8TDvZzzf/dxf\\n\",\n       \"WVlp7ZHswI16Md1DjnXcr5VY/le/3+aoYNR3j4NSgKWlpTBX0ZeDwaBFA3BwcDBnuvL14nHAHoR5\\n\",\n       \"x47l7KrC31z86/u3q4bfA+1nM38uk4fZyRpAHZhT0VtToHUzOx131lylApFaZedumM1m/774+R8n\\n\",\n       
\"7v8tM/utbMkVFRUVFRUVFT/iuLRcez43F5MmKrZi5eCYcgZM2XD7/X5SWu+az4exCNlXimixKzwt\\n\",\n       \"hNdsxca7pC9LQ7VVOUry7kLsmEKq7sp3jOt6lvmfegc717OG1TsyTqfT1nzf3NwM88eHHjOGw2GL\\n\",\n       \"xoPntSJVTNWZNT7KRyrVVyqkm+cfsLq6Kh2QFWnqrVu3zMyC5hnlmGkS3vF4HJ5n7ZNfiysrKy3K\\n\",\n       \"EbPTkHXURTlp8xiyRiKlRU9pxFWgyt27d8OYMW1BifPzF77whbCffP3rXzezeQ0iaynUeDKpJf5N\\n\",\n       \"aRjw2+bmZtijU0EM0+l0zmkZdVFA/3LWBv8diOV4TPV56hoHyqj9LDUGm5ubc0EEHuzjhj5KEXjG\\n\",\n       \"2ua15DmU7rPKKd1fN8uT13YlpzWbH2Mz7SutsLGxEb6bi/jNAqxhV75vsFjMIoScl3aQ8oPrk42a\\n\",\n       \"zUdhvPTSS3jWzOaTufL9fmA3NzfDgKooIcXMy6pYpZYFUoM1GAxaPFccocNRG/4DNJvNWtxMR0dH\\n\",\n       \"weyB33Z2dlpl+371iy7GzO3bwiYTFfGn1N4MFfHgN7BYXVJI8ZaoD3PMvNkVXcefnfW5f/x7VldX\\n\",\n       \"w7zkaDb/br6PgWSk+OizozpHpHkzVOkBttfrhXYowSfXlypCj1PcmMXV9Pido1lTDqqz2SzMO7RT\\n\",\n       \"mY82NzdlVCCeRRm5TRh9wOZ5ZQ4oZbvGAWNrays4+HaJkDMz+/KXv2z37983M7PvfOc74XcIeKhL\\n\",\n       \"ju8qV2e1VwKpOcb7hVq3KgosB2UORn1w7ejoSCYPVsKdilj172VH7dy4+qCo0WjUWsvcLxxl7E13\\n\",\n       \"/B6OiuX6qCTYuI7v6O7ubjiAqH7L9T3WCgvqXojhOcFpgUoOORzdy4d/vCeVTk0FsSnTo+LmYlcQ\\n\",\n       \"djHJHaQq63hFRUVFRUVFxYK4NI2UWdxRlO4zM23u62IWglMZTsql7MOljs9mi5uIclqZlITQ6/WC\\n\",\n       \"8z2kCq/N8BqpGH9QadmlKlOYSUodBXOSUFcNE0u7vtxYWG6JtqaUIZmd17nPSs24eAYS9e7ubitR\\n\",\n       \"L2v77tw5CaR97733gvZBOeGyJFnaB6l+UZxAikMnprVFe1LzCnP86OgojKUyu7E5VbHss8TsTS+s\\n\",\n       \"tVOJblNrYWVlpaVh4Pty8wrlYXx7vV4w5ZbucdBM/tIv/ZJ985vfNLNTsyDXRWkIzwJlZupqVslx\\n\",\n       \"mqlrKU1nLr8ia4pVMInneGLzJjAYDMK4Yr1xrj0EUsTapfaQlAM10x9gTqyvr7cY+n05qD/aqfbg\\n\",\n       \"VGAL7xPgrFNs7THtng/EWESrnYN/j3Iz4L2Bk3mrPsda4uCvx48fV41URUVFRUVFRcVF4FI1Uu43\\n\",\n       \"M9POnDknYfav8mRaCsxhVSp553CWZyERoO7Ly8stgk8O6YQE9OLFi6DxAUnj8fFxqMv169ftjTfe\\n\",\n       \"MLN5ibGELFP5Q3Fd2Vk35bzJ/VIiqTIhWte+ZDoIljRKnR9L7lOSrcrrx/56TBTox1A52l69ejX4\\n\",\n       \"yHCILofCA379fv7zn7c/+ZM/MbP0nOR1pvxhuM5+rSjpUznGjkajlr8RS4upPmDHd4WXX37ZzE40\\n\",\n       
\"Uxzi7EkGp9NpkWZke3s7aBHYMVtpNwDM083NTRkMUILhcNjyLTk8PCymrgBeffVVMzvZJ7797W8v\\n\",\n       \"VJdFwH4/GOvUN0VpY9j/T83FRQJ41HfCa15iztzKvzO1N3BQhNpH1TosddJm7Rie8VpS/nttbW2O\\n\",\n       \"rBTv8O1UfoKbm5tyviu/P/YlQlmp/Rr72HQ6bZXBvmCchcS/j7/bvI9Cw8T5/9BvvB/4/WRjYyO0\\n\",\n       \"AxaC9957T37HPrbO5p61FAMdWyyeJZwZaAEeaEze4XDYou2P8Rd5M0TMpIiFllKPLy0ttdKeDIfD\\n\",\n       \"QF2P9z59+vRcItdiSEV4LMKNdBYn7tJEt6UH6NLNKFU/5eDPrOIlJgd1sFCHU1WX4XAYPhS8sd2+\\n\",\n       \"fdvMLDgOMzgpKD4OzAmDuc9Orn4j5TqjTtvb2+EAB8Scg1MmInbwxAaKtvH6hilreXk5vIcd7/2h\\n\",\n       \"mBOy4t/RaBTqv7+/L51gAY5EQ5+zEz7GHdfG47FMEcN9w3VZBNeuXZtL+bMo4ET8+PHj5OHvvMFz\\n\",\n       \"SCVzTu25uP/u3buhDzBPlpeXW4fTWMBFCqVjlEsHwumAzBaL6AZ47S0SIY5nNzY2QvsgZMWEQJ8y\\n\",\n       \"6fDwMPB0oQ5+7XsgGfrbb7/d2u9WVlZaAR6xyEbvMM7pgHh/xB6Da10O0ueN6mxeUVFRUVFRUXFB\\n\",\n       \"+NiY9jiM00uTOf4iem8r9HJ9fT2cfHFSZu0T0FWVzvU7i3QSA7QJkIS89q0UXvLJaXRYgkupmlNt\\n\",\n       \"jnE3qfuUJKISti6qCVPcSDENnG8vO0vmnCT9b4pHKgfloM8JaJVjty+fuaWYqwZtS0n0w+GwJVVy\\n\",\n       \"2LBy5kyZPLjvsb6vXLkinVVZg8zlM3Jm1dls1jI98ztZ+5TSUKhxOA+ocm/cuBF+Y41AqZYX9yHY\\n\",\n       \"QNHCxJ7z7+b9k7WPPpBiOByGcpmJ3L9PBRMpTejNmzdDeW+//baZnYwBJ6b25WIsc/2D9rC5XHEl\\n\",\n       \"pSheuM6suSoJHGmaZo4brQs8ozrqkMqROhqNwnxHn7548SL0F5vBfMDG5uZm2GOYCR/rgC07ninf\\n\",\n       \"t9ksPjalLh4l7gMxYA2z2b8rmHcS/FJVI1VRUVFRUVFRcc64VI3UyspKOJUq2zROtrdu3Qon6Xfe\\n\",\n       \"ecfwLK4z4WZXfwUOiVS5jLzfVE7K4meVNiNni4/h1q1bQZLnOrHmzaytufIaKeX35duC57x/gfIZ\\n\",\n       \"8kzqZt0kLyWdKIdN9ZyqC4BrzFQMKMlGjWu/3285eKo68/vYvwth++zrUUK0ef369SAtqnLRz4oV\\n\",\n       \"neugNI05skFIs16L59/nMRwOW/5LXFc8yw7hHJLt/R/W1tbmSPxS4Dp7ot2PGkpTxuHxvl5XrlwJ\\n\",\n       \"z4CuIEcLw8C74VP3wx/+UO5FXgujtGOj0aiVVy+nTU35YHaBn7OsbUnRdHBof9dvWYwGJQWVa4+D\\n\",\n       \"Y3yeO567uL9p2nkTVQBUjBJBEXL6d5mdfmtWVlbCHsT1OQ8fP0BpGrlNi2iEYuWYdR/r5eVlSQcD\\n\",\n       \"DRzOF+PxuJXPczqdhqCbj52zOf7GxgeOir29vXAYKE35kTJB8ULLoSTVyHQ6lapfQJWF+rEjNVBq\\n\",\n       
\"tuRy8O/q6mpQK6PuGxsbYdEsLS21IiBj7/UO3moz56Sc3Fcc4cXv8GUoB8uUGr10oeeCCFAvddBL\\n\",\n       \"HeRKOa1UGUdHR+E+Ns8xSzfaBkZrVrWj7Zy41Sff5cgZddjg9pZyZPnoGfxulv+ooh3YlDi1Sw7o\\n\",\n       \"K5jm9vb2Wu0cDofh0MnzhTfrs37QY+CIWi/85TijwEuzs7PT6v+1tbUwJ2DCWsRV4LXXXjOzk4MU\\n\",\n       \"kNrHeO7gw7JIpgEOmvCHL3VQUYl9Y1DZLlKCAz6KzICdWsvXr18PeyVHlaYOsakIQrW/q+9P7ABX\\n\",\n       \"ut+d1+G1K/jwiv5axL3FR/yp5OA5QHDgdG8Yk+Pj41ZyeGXGVQfaGKqzeUVFRUVFRUXFBeFSNVKx\\n\",\n       \"ZMQ45Sp24q4n142NjXAvTDwq1xFDmQdi5iWzPOt56h1mbY1av98PUr3XAHE7FAOzr4PXminNELeB\\n\",\n       \"1bJeMlL5oJRWic1uXG/FC8ImLrN5TYSSJhSURlLx0bATsTeTcbJKlKfmp2LN5X7mkH0/B5TmivuE\\n\",\n       \"zbNKkvdSlll6DqZ4wI6Pj6X2QUncqTWX0sA8e/ZszgyJ+zA2nEPLawtj5q2Uc3uMciIFNT/Z4Rr1\\n\",\n       \"Rl1Yy8tQfcTcOWbagX44HAYOOGhHSukLeC5CI/Xw4cMwZ0sdsXMaEJ8rjpn8OTAIaxh9pRyDlQko\\n\",\n       \"Z7rHnOz1enOa3BiuXLkSHKS5LKX99vvA/v5+a+/g/JVYo8+ePZN7SAm4bYpuRtG/MFhj31V7ifK2\\n\",\n       \"trbCPOF6Yww5wMQ7lue0lphXzH3ILide08x7Jc8J1naala8LBsZmNBoFUyf44rj/ct+aqpGqqKio\\n\",\n       \"qKioqLggXJpGCtKUckYFcGLt9Xot/5vV1dVwaobkt7e3J6V2Ub5kwFbAaZylaK/Nun37djjdsw+H\\n\",\n       \"so37DPPD4TBI/5wDyte/S25B4Nq1a8FBXUnqKf8g1rJw2H1XwktAOSPH2gRfG0gxMZ8C5fOEuvD4\\n\",\n       \"KqJN71StnOs3NjZazvvr6+sy3Nn3C2tHcn4E/nrMyT2FUl8vtFtlOWctFBMQeuk553eI8RuNRpLq\\n\",\n       \"wEPRJKgciTl00Uix3yLK6wqsZdZSMVs8NKAPHz5M1uPGjRtmdsKqXAL4zR0fH4d9Bz4j+/v7Yc6q\\n\",\n       \"+cb0EH4NxAhoU+ubtQa+DweDQStk/7y+N0xzo96tNNKl7/XBM4qRnOcka11RLq8lr1nhOqk5ntMU\\n\",\n       \"4n2rq6utdTMej6XmH+Vw2XgGWqimacL7umhFU3UFoMnb399v3QuNI+pqdrK2oFXMMabjGezLi8wx\\n\",\n       \"vKPX64U2Ya1MJhN7/vz5x9PZfDAY2Gg0KmbzVZFS+Du10a6vrxexv167di1EzeTQdYNX4M2JD4xm\\n\",\n       \"8cUPp2VM/BcvXsjkrQzvuB37mKc2S440xCaOSV6alFOZEWOm3VKoReyZ8sfjsWThTh0I+f24zoe7\\n\",\n       \"1AGFWbgVd5cviz/+fPhTmy6nQPB1VylYUmDWccyhra2tFpu0StXS7/fnTCExrK6uhvvQ97EDiz88\\n\",\n       \"x+rs1xxHPZYepGIfFgWMAzCZTFqJkTkaD2PNaZ64j3yaktXVVXv1w/Qub775ZrbuZmY/9VM/ZWYn\\n\",\n       
\"44aDKqcSSiWyLU1arExOAJt7OMIsZR5JRfTyh3QR9mplPgYwzhyRWHpIUOscY/WDH/ygcz1LEeOn\\n\",\n       \"8nxUZ3U2L0m9w2uFDxveFMfmT/4mLRLM4++Dq8Du7q4cYwDzYGlpKbSNv/2Y+/j32bNnxfyM1bRX\\n\",\n       \"UVFRUVFRUXFBuHT6A5Ww0yNmTqH3mdnJqXFRlS6/B/9yaGWsvNg1Nj3iVF6aJ2ptba1Yeso5vHuK\\n\",\n       \"iJy2gqUUz6SttB3qN8WrxJo3Dk1WJlbFz+L7vJTPSznXc55GaAiOjo5k36j5xM7U3A/87+HhYfib\\n\",\n       \"zbleo8JaFqUlYw1Caaixl/5zpkJ2wleaoVS5LOWlzHjcNvyNf6fT6Zxjr9nJ/FPrTGlU2ESlpHSv\\n\",\n       \"8WuaJmnK42ATn2ONoXjiuMzU3opn79y5Y6+88oqZmX3ta1+L3s+ARur+/futRNYxZ/gUuJ9VoA9z\\n\",\n       \"7OBfzjqBd/isEpPJJLlH8tpX2uVFM0ewRhxrdW9vrzXmisNtNBq1XBBU4A23g53wvZlxNBq11reZ\\n\",\n       \"SZOc2rsAvnZR9AdLS0tzloNYvXIoHbeuPFaDwUA69p+FB8vv28fHx2HOoB37+/u2t7dXNVIVFRUV\\n\",\n       \"FRUVFReBS9dIeWxvb4cTJvxwWGrHKZx9F1Jhp+zPwZK6l9Cn02nRyXY4HLYc30upDpaXl+ecUVF3\\n\",\n       \"JdmqUz2egQ19NpsFrUguVLZUelFSgpKMUlrAGOmeChf2/gCxfEolzNwxpOg0GF4jpHy9mLGeCTL9\\n\",\n       \"O2P+Ol6KjUmd/r4cqzy3wT87Ho+TjPq4NplMgvYBmg72CfL0AGbz4eBcL9zng0h8/jB+r68TO0Gb\\n\",\n       \"zY85NFeHh4dzzNa+z72Ejfv8+lxZWWkRnjKp6nmwPzNAQHz9+vWwVlJ+N5zfDHQJ77///pnyfHoW\\n\",\n       \"bs6rx4BWDv2jaGG435WGK0X0G4PXXE2nUxn4oHLZ4Tr8Sp88edIaw+3t7RAKn6tHyneU/SIV1NpT\\n\",\n       \"WrmcJse3c3V1dU5rgvcqH8rU+1KBQzxeyrrABMToa9wXC7Lw9AelmQtms5mcO13pcjiADGPG5wvV\\n\",\n       \"/zkfqUs7SDVNY2tra6ETeCNWETXKWZGdFM3mVck5x201iTDhcW1zczNMDjjhIkUN3zcajcKHB87Y\\n\",\n       \"w+EwmDoW2ey8o513Ak4BG+2LFy9kqg9fhopYM9MOoiWO8U3ThLHhRJeeF+bg4CC5yaTMeHxYK40m\\n\",\n       \"w4Jk7ivUnz/c+MjFTFXMVA74TXA2m81Fk3ZBjGdGpexRUUK+vRyNAzMez08Gxhwfz9LEvbmPEo+p\\n\",\n       \"d7iOgQ9sZvFE2kCXqD023wPqg3FRwLwfDAbFATcpsOle8RCpj7RK0u2f5T6NfWDwLs+vFuu/EtNP\\n\",\n       \"LILwPDEcDlvfC7UPMZ9YzgWhBCr1GH/UmbdJHRK5zlhLHHmrgiHUs6nE4115ExU++clPhu8X9oYH\\n\",\n       \"Dx6EPffWrVtmdmLeRtnYE/b398OZYJGAJH/Qm0wmRe4+S0tLYd/kA9ru7m417VVUVFRUVFRUXAQu\\n\",\n       \"TSN1GeVWVFRUVFRUVHRF1UhVVFRUVFRUVFwA+vlbLgbnEbpZQoz5UdjafXlmcaJK77yqHBmZXbdU\\n\",\n       
\"cxfri5SzeVc27PF4LNvlc3HFHNU5lxzgSeEUXUFu/Dgk3jN352gDFL0BO7YrB3XlO+ZDvzmg4eMA\\n\",\n       \"lWfsPLXCZ11nJWOdq/MiufY+CpyHz9VHvY8BOYLXXDCLojwpoYU4q3N/1z6PBX0A8GnDPlFKJsl7\\n\",\n       \"iHICX2Ru5Bz8gUXpI3I4771DgQPMVDtK/WJT45pbU1xurr2XdpA6D6gDlF+IpZw7w+Gws1Obmryp\\n\",\n       \"D1Upu2tuE1Es4YuwrHN9/EbBUM6mfADxiR/N2gck5gABFD+Y6ktVF7zTbN6ZGzxIPJY+ujIW6QOw\\n\",\n       \"E6aK+vIfjaZpiiNQzgO5Td9DRUJ2+RCUbPZn2VhjkTKKUfs88FF8CLiM0nRUKcT4mFQUbaw+qeux\\n\",\n       \"+2N7UemBNbWXKfb0HF9TaYoqjiYrAQtvar9T7P5KcPTva5om2yaUm0rFxNF4qfrH6uDbxFGgXecl\\n\",\n       \"l3WWg28qGlxxBzJSSdq537h/U99jfof/npXM9Wraq6ioqKioqKhYEB9rjVTqRMoJG5kHpeT0b9Y2\\n\",\n       \"z0wmE/trf+2vmVk5w3CK44cleaWCXUTNmwpDVuDQboaSInyuMIbK7cSSng/BH41GrXD8Xq8X6AIQ\\n\",\n       \"sruzsyM5nrwqN9a/Xtrc2toKYbYoA/Xh9rKJUrWX6TeUhkn1f4qn6bygpNgStXZMkxh7b+rvGFIc\\n\",\n       \"WLn6ndd8L0UXbZTi/eraHzFtUkldSjWDF9H/KZQ8o7Ryk8kka07Hfb6sUvPmZz/7WXvjjTeidfG/\\n\",\n       \"cxmlc405rVIYDoetPUGZS/ldShOSM2EBOV46XlO+/sxzyNe8dof7Eu1QWu/ceOXmMb8n1h61zpqm\\n\",\n       \"aSXkZq1nqh9L7/O4dELOj4Kz5TyhBimGErUn868wuqrgYxu9V5k2TdOZ9ySVcPT27dt2//79ud9U\\n\",\n       \"cllOiOtTz5jNmwI9Nwr3s7KXM8Ebxob9sOBDhcOd2pRY/Z1LaOuTpPIzqOtF+OuUrJUuZqtF/Spy\\n\",\n       \"HyUgtvGpcs9jHzjvPudM8CxMlH7QzsOkV3qQWqRPz+N9OR8pBc8jxs+W7k2p+qnDS+6jXpqIHvVj\\n\",\n       \"k91Z0tsoN43cs137XB3ccvOz67zb2NgI+yITaXuuOLV+lOlRQX1XYu9L+UPl5rbax/D7rEbtVVRU\\n\",\n       \"VFRUVFScLy5dI1WCWOQIHIs5mWZp+gGFL33pS2Zm9qlPfcrMzP7pP/2nyftLIwdwUlen+6ZpJ1CN\\n\",\n       \"tXcRh1F/Co9JZrgOExWb5lKSHif+VFId+mg0Gs2xnJvFpQ+UB8l1b29PslwDYJN/+vRpK+nu2tpa\\n\",\n       \"aAtLkCr5cmmkpC9DSUpn0Y6ctzM0txd1Wl5ezjKLnwdKpFl1nfugVLov7fNY/6bSMqWcltWaGgwG\\n\",\n       \"YU6UjmVpFG1X013s3vPSAvqyUmWotafWT66e3FcpB3RVVwdhQ/IAACAASURBVH6HH7dYn6WsC7w3\\n\",\n       \"eXbymIk3Na9i7VEmtq5Q5apE4Apsfk2txRs3bpjZSQojNXal2r9SDbwPlCo1z5bu+fiOVo1URUVF\\n\",\n       \"RUVFRcUF4NI1Ul25LnKcRimocFu0PycVAVtbWy0NR85BOucrBY0KqABy9eAcQLnxK5VelBYIUFql\\n\",\n       
\"1Dg0zSkdQI6GwksnPA4sPSFnE+dkBFhD5MeVc+hxmb6P+/1+K+xVSSycFDil/fyofaTUNSV9pvzd\\n\",\n       \"cmXk7vcJqBepc9f9wEvqZ9FILerrwrkg+f7Us2dpOz+LZ1Kh7DkuqK5YRDuiND08F33fcxns31na\\n\",\n       \"l0rz78s3O10jKFeN/Ww2a2nEVT/nKAy4bqlcpLxu1T7bVSPVxS/J79tcnn8+dg14/fXX7c///M9b\\n\",\n       \"9/k+579TPrDT6TT41+byU5ZqclMciIycj9SlH6TOAs/x08XjnpMZ4lm8786dO2Z2MpHff//9TnXi\\n\",\n       \"SeIH5datWyGqLBUlZ5YmIVMTn5Pw8iLuwoVhNv8xwYcRTuFcV2/eYsQyvJeQQg6Hw3Afmxd9VBw/\\n\",\n       \"e/PmTTM7yTauPob+N054zO8via5ZWVlpmcTO27R3EUiRiCoC1dQHPmbK8mPexTH7PByoL+IgVQJu\\n\",\n       \"OwdNqKTAKQdlVb9Sc98nPvEJMzO7d++ebNeiB6mcM+8iZqac2aukzihvaWlJHmr8fqGcjblcHqsS\\n\",\n       \"c66ZJfdHRsn6Ufcxj91ZDlLq3fx8jnOr1MTmeQJjnIQlJtbScvk+Pg8o53o2L+NfX0c175DguZr2\\n\",\n       \"KioqKioqKiouAB9rHimAT5MsOahTbKmGDRIkpA+z0xM0wjdjJ3SYvyAJjUajoGpMSTHPnz+Xp3+l\\n\",\n       \"WfMqYlZrMz+UksYW4d1RPEhe6mQojVPKbMRO6SkTq5kVaYsGg4Ftbm6a2Ykmymw+JJnv8/VZXV2d\\n\",\n       \"S0mDugDKGVL99lFwRyl0DWvngAaus5L4ffh5jvMGGI/HrXIX0SR1DbdfRKMeeyb17lz9gVh6JLP5\\n\",\n       \"fitJu2J2Ol7Hx8dJ5/Xt7W0zO9FIee3DeZr1YuXnwP3n509OU5cKxR+Pxy0NF5uZmUdI1V+tf3wT\\n\",\n       \"eN9QbU5polR7lVaI90z/nrPQZnikvpX4FilzqjJNmp3uh6j//v5+69s3Go3sc5/7nJmZ/fEf//Fc\\n\",\n       \"mWZ5U7oqN/WdSq09NoOzGddrsyaTSWibytgRQ9VIVVRUVFRUVFQsiI+lj5SXOhRrqrovdq1UeoUj\\n\",\n       \"m3cmz2E4HIaTMqSUXB6fnGPpIk63Zu02ltjTm6ZNMhrr8xTFAcqISRgpTQ8wHA6D5ko5L2OMptNp\\n\",\n       \"eE+KLFH5Q7F2TJGDlmp8vMMot+cykxb7+ndJzlmKkmdKSfAYZ9F2sIbzLFjE4T6laV6E3NTXJUdX\\n\",\n       \"8TM/8zNmZvad73zn3BL/lqDUXydXp/OgYuD+9rk+Y3VSNChdqWe6+vLMZrOWU3fsO6X6bREfqRQW\\n\",\n       \"CXJgB3BfL6BL/6YoEUr7N9WO3F6k4LVyKR+pj6VpzzcwZi4rndSl5gCftHh7ezuY8RAtdnh4OBdh\\n\",\n       \"hvqVJMccDodzKWkANQEUfb9XxaacNbugadqpUNQHod/vy3amIoZefvllMzN76623wmGTs3rjsATW\\n\",\n       \"8/F4PJdCxpeBjfHq1ashEEBF9CmHdYwXRyTymPvFrBzLl5aWQjsU7wo4WS4Tfi7wIZZV2Jyc2UyP\\n\",\n       \"uTKXLsISnDKJddlwPUod2rug6/vYbABwFKhC6kOvnLlLOXf83yVY5BCzaBmqvJhJqRQ+cpGji9V9\\n\",\n       
\"gJpfuWS5XaEc2zlijqHKU1GMpeBUV135FVW9uf6q7zxj/ZUrV0JaMCDWv6lAgNR3nq/zO3wZ6sC3\\n\",\n       \"tLQkry80B4vvrKioqKioqKiomMPH0rQH4IR7cHCQzMGU48YpKYMlGJh7xuPxuToSdwnzLkGJ1H4W\\n\",\n       \"NbCX9FSiy1woOTROrC1Cn7MDLT/nNSXKiXA6ndq1a9fMzOzRo0et8pmtF/ODnXX9nMlpWWD6nM1m\\n\",\n       \"SWoHtPvo6OhSzEzqWf4bbWSTJ/ePCurwdYjl7rrovSQXOs0asK44C0VAjCE5RffBIfYpDQ2gAi4Y\\n\",\n       \"523aK31H1/2ladq5Ps+yL6py+/1+a40qV4XRaBTGAfs/OzmnTGylCZRzdU6NeayM8zDtnWW9dskC\\n\",\n       \"wXu92bxmlZnSlVUjZeosLZ/f5fd8tV+kAlEq/UFFRUVFRUVFxQXgY62ROgtyEpXSRJXCP8vSjjpF\\n\",\n       \"l+YWyqHEgdJjUemllKU35t/CmhkPhGqDnNRMh7Uq3zHOq6eI81KEnMoOXkoEp7SeyvlSOY9+lMj5\\n\",\n       \"m/Bvag0oUkBuUww5Cb0rZUPsntT1j1ojxaHT5+HYzXPIz7eYFhD3QYv+5MmThZxpU3VJYRGNVIn2\\n\",\n       \"SWltcvdhD5lMJi2mefbrSbW71+vJoJlUO3LvO0/NFf+d63NcZ58h1uR4tnZFOVDqy7W8vNwipR2N\\n\",\n       \"RsEHlff1Rb5jqWdLHfcBlRlE3e+JrVMaqR+Jg9RZJiOeNzudRKwmRwcOBgN5GCrlCkptPLmoCH/Q\\n\",\n       \"6tLe3EQ/zwgPZvBOHSJi0YnY7NHPx8fHyUMpoMwaMVOHP1x98YtftG9+85vZtuU+0mwqLFmkbBI7\\n\",\n       \"L3Q186X4utiRnscvVYY6uOYOa+fRjhy6Rkp2MWssGq3FayU1DrEDnP9gxD7c169fNzOzDz74oFUu\\n\",\n       \"o5QhveQa4zz3F19+yUEl1t7SdD+l7UylEjnLfM6lEVL9m3KqLkVsDZRknzjr4dCbt1UWENV2Ntme\\n\",\n       \"xx7SZR+opr2KioqKioqKigvCx5L+wCN2WlfmmVRoJcAOnvwOlRNJaT2UqU6dqFlSBpRU798XO+2r\\n\",\n       \"Ol+kRtH3ZYxfxdc3lrAV7YS6V4UpK0nv8PCwRXGg+shzUpmVJ3GN9SNMJ0x1wFonX/Z5shF7lIx1\\n\",\n       \"LKeYx/7+vjRhqLYBKeqLmDYrxYOTQ0rbxf/ftc9zqn+g1Mzjn/H1jJloAG/GW1tba1GxxOqpkm8r\\n\",\n       \"U2yJBoEl9PMMijkrUg7ZMY0Tfs9ZFFgz6N/Ha1rNsdKEtx68RlNrledpLKfgomMRe87PE3Uf7++K\\n\",\n       \"mZ2Z4Tm4xWzeJJeiDFJWlePjY3vppZfMzOydd94xs5O+xHVPb2FWnkFAaY3VfTFUjVRFRUVFRUVF\\n\",\n       \"xYL4kfCRYqiTI07A7JeSklJjttGUozI0IkwwlrLDo46qbDObO72nHNW7SoHcNnaC7eIM6stL2fG5\\n\",\n       \"/iktVKlmaGtrK5BzQtphzSBr5bw0NxgMWqSb169fD/4j3LZUm1h6KmHwVbhIZ/OzzAkg50Sey4eY\\n\",\n       \"kvTPS3PR9T1d+rzUuT0Vqp9arzntDmNjY8PMzF68eGFm2gcl5pfCvntm3Zzmff1zFCBqzZyHj5Sa\\n\",\n       
\"nzlnc3VN1S/n13cW6pxFc212IbRVWKTPF9kzUvej3zDX1NxcW1ubo5XwQP8dHR0VfY85b6qq33lY\\n\",\n       \"BWJng5yP1I+EaW9paUkekNDgrhOZHaQ5igEDxqpJDI5naDXTasPYhDI7+TD7gw0v8NgAeuSc15XJ\\n\",\n       \"k6E2j5TDOPeVv+/4+Lil5uXxQGJh7j9VPvqI+xQHmhcvXtjVq1fn3sMHVvTRaDRqRYLEnELVAQm/\\n\",\n       \"KQff3Af3oxRIcmYXv6EoU9Z4PG7dp9ob+zD7g+1sNpMfra7g912kKUkdLP0GzHtCyTvMbC5VTMmH\\n\",\n       \"rtfrtczbOVcGLhdRrDhIcX1TXFWx+qdwloAfhq9LbN/zZlJ2ZWCOO/zG9UMQC+6L7bOl+6vaH/0a\\n\",\n       \"4Pel9ujYQVT1R860V2L+ns3a0dZ8D+/fal0rlBw8vXkaZZR+t7mvfZncz7yn8f0MJRAq9yD1bVDv\\n\",\n       \"86imvYqKioqKioqKBfGx00jFnGWVdMds02YnWg2YhXJaGyVpejX5tWvX5KnaM36Px+NwimWmXM/W\\n\",\n       \"urKyEvLHlZrOFFL3lUiZqCtO+MpJm9lrGZ6zYzQahfewhAFzRU4TBckRGqd79+6FayyB4T3MI+VN\\n\",\n       \"u3t7e3NqYP+Orv2sJDmF0WgUJN5FpPZSM1OJAzBLrixFp7SdqWTSrJXl62osu2qkclqSlPlDMa93\\n\",\n       \"KS/VbymwxKq0t0pjDigncs4WkKKhiM3FVG7Hrg7+MXPaebJ6x+pSkm+UobRFwGAwkPyAfn8302ag\\n\",\n       \"lEuGgsq6kVurKnBIwWtbYtdzv/k+4rri78lk0jmBsao/+nlzc9Pu378/91zKChID3s1uJKlnc33J\\n\",\n       \"/88aP7zPW41KTOVVI1VRUVFRUVFRsSAuTSPFjmNm82HD6vTsT4X9fr8l+UIbZaYdKLls9h8xOzmR\\n\",\n       \"Qsq9ffu2mVnrNI334aSKPHJN0wRNk5KE4OvD+eZYOimV7rwWgPuwVBswGAxafkSKRmFpaUnawb0f\\n\",\n       \"VL/fb71vOBwGx1kG3nfz5k0zM3v48GG4hr5hJ2jWcN26dcvMtFZC2dp//ud/3szM/viP/zj8xlKl\\n\",\n       \"l/AUESj3Lzugq4CBRVh6gZwmKncPQ0maMdy4ccPMTuc5+wmh3aurq3MSPKDmRipvYk77lGqn+i2l\\n\",\n       \"+YlBzZ1U3sScDyL/f4kkv7m5GfpDzRdoZ7GX5NDr9eytt95q/b6o022sH73GJUceqZBzqvbvVPnc\\n\",\n       \"co7+gKrf1atXg1Y7pwnBunj//ffDb8qfMAXsx4eHh1Fqg1j9Y+9b1KE/FnTk84PGmM1T2jWlLcJ+\\n\",\n       \"sbe3Z1/60pfMzOwb3/hGKBPPpMirzdoa86OjoyQrei5ARt2n5p0PwlEWG49LO0j5DUwlLfQHKrO0\\n\",\n       \"KlGxTisVtWJA7vV69vz5czM7HSTlgMoHDD64wZTF5jLUkQ9QmAi8KaU+eDzonC7C7KSv/AFqaWlp\\n\",\n       \"jtreQ0U+TKfT5EbBY+X7gzd9xaHDgEM5DlD9fj+Y6t57771wHw6o/G6MCZtGfDnM1o1IPZTD7eDF\\n\",\n       \"rA6g+KDxbzxG7FBsZvKgUYrcByj3ESn9eClTHCJRcZBiMxn+5bapwyTX0ydBVs6hvl7qeleoj0NO\\n\",\n       
\"1Frko9h5+dQ8cKJgT4aCGbc8XLw41Oj1jooD3rztdDruvPcS9gIeqzQYDJQpZSYKbYS2z2IG0b5g\\n\",\n       \"MLNYG1smT2+qUqm4h1bsc5nJwZhgl4PnhsD8LRaLaoWzi8eyNXzIgZkpWycvyH1nZydgeprNprIr\\n\",\n       \"CBngdnz++edFZHiMH6wI6ra2tqbMH+rRbDbV/QXGySvLoUOH9HtvHPGYRT3RBhxGAnC90M9ZbD76\\n\",\n       \"7atfHR5sf/DBB+WRRx5J/ebq1avynve8R0TSjBT6hl2eeM7U1JT7TsFYfe6550QkrTeF+2xubqrb\\n\",\n       \"FeN9fX3dDePAmoHxdNttt2k4At8XaxDG29bWloZL4B3S7/d1rqBfd3d3g2e0Wq2J5WomCfx+7LHH\\n\",\n       \"5LOf/azWXUTkiSee0O9R9sXFxZQUgsiQBbRu/0nKFhmpiIiIiIiIiIgD4l3HSL1VMghePI+n+usp\\n\",\n       \"W3sBpTYexrtHFmPiiVzaoPRxMWKTsjHValUtAVgYXCewBSIjJorVpm1Mk6c0PS44D+3hWReT9q/H\\n\",\n       \"XHFcBbOKeRaFZ1myNW7bNUkSNx7KkySwwo3sk+cgTBsIymJ/rBJu47RYHJZZL5QFjNXm5qZ+j/46\\n\",\n       \"dOhQwJ5mKcTbODFuU25b1NcGsVtgjCFO5KabbgoCz7e2tnSs4b5ZecC8OCMeY5Z9LpfLATvIkgi4\\n\",\n       \"B8fc8Ry1EguNRkNZG2acvOPblj3Z29sLDnDwfAT4/8HUXLt2zZ0HHjtm61YsFnX+eyy/JxXBhw08\\n\",\n       \"dgTtwYHtYEB5rfzwhz8sIiKPPvqoiAzjUhDrhz7vdDrKovFBEDuXT506Jc8880zqMx4n4/Kloa34\\n\",\n       \"GrvWcruwEKgHxBaBAVlcXAyCyNvttitvgr8RG8jjr9/vB2OWc08CXA/OM4g+RFnW19e1rMxM4RnM\\n\",\n       \"2GL9B3POfYN5WywWdbx5c8XLX8ksPtYK750KMFM47n34hS98QUREfv7nf15E0jGXqGOr1dL5ynFp\\n\",\n       \"WX2bh3fdRuqthg1OFfGDy9h1571w7IvFc8mwNoqnl8PPejPJnK07Es9GWWzy26xy2DQVrGjNgdn7\\n\",\n       \"VWEflxg5D7xx8IL5gTx3VaFQ0EmJsjQaDa0nv4A4uSx+j8nnBd4zcK1VRRYZ9dH169f1fliAeKOG\\n\",\n       \"Ba3dbo89sYb/tydlKpWKLg68KGJh8cYL0Gg09BpPn4WBsk6a9ggvFii1M1ivLSsg24P9nE8OYmPJ\\n\",\n       \"Lx8soN5JPpH8YHTUt9lsBqlpRMKXEr8I89JocNA36s7uKPTb1NRUSgMMn1lXnKd9l9V+PN7QLry5\\n\",\n       \"sWVlQJcIAeEio5c02nZjY0NP8tnrRHxtKYbVp/N+v729LUePHhURcdXCgXq9HqzHWes7ME6Pzxpm\\n\",\n       \"L7zwgrrVGBhPPOftuOLx4qVH2tjYCIw1Hu/2sIbIyI1+7NgxddmhrURG7eW5/bhux44dS5WVdRNR\\n\",\n       \"j6w2wtjicAl7QIUPyPBmc7/vxW9/+9siMtxAYtMMA6JcLmtZ0Nbe5nQSRNdeRERERERERMQB8Z89\\n\",\n       \"I4Xds5fYUUSCXTHrEnnsiL0v38PL65UkSaCAvB/3pXe819NOYUaNtVBE0ok1QcW32+0gcS1b22y1\\n\",\n       
\"WwvKOx5fqVSCY8/2b1zrudNsMP+kWk4eOKcU96EXKOyxbR7bxTIPgOeKYp0ckaHVhrJ4ViBbT7g3\\n\",\n       \"yuQF8LM1xUyizdnFGll5OQH5CDMnh/UsQ095OQ/43h4Px3OtLITIaMyiHjZnmUW/31erH21TrVa1\\n\",\n       \"zrDaPT2iRqMR9GGSJKkgfsC6MVlN3DtkkDcHeI567DjmtzcOmbnydK4YNkC9XC6npD/wGxtmwJkX\\n\",\n       \"GPZYOcsQ3H777SIy7AO4dsGEMCPF9wVjwvWwffT6668Hh2HGsVpAtVoNdMdYK8+r4zhtIbi6zpw5\\n\",\n       \"IyIizzzzjNtPYJA4FAG/48THGN+Li4suU4LrmYGzzOqRI0d0bcN68tprr2k9wUKVy+VAXoLfd6x6\\n\",\n       \"D3YKbtzLly+nZD7we8voi4TvQx53AOtN4fmslJ/3jrzjjjuUeQOjfOnSpYCN5XUvT3ZhEkRGKiIi\\n\",\n       \"IiIiIiLigHhbGaks1XFg0iP9b9VzbVZ1T0CTfcGeKjowLj9gnqW5H/FRbydt83ll3ZOvhUXtsTJ5\\n\",\n       \"yua9Xi8l0SAytBwsm+U9n2OVWFjQswjHxSPtF+OYDABlQZwD+9VRt1qtpvcbdzwXfnpcy1Y29z/a\\n\",\n       \"l9kEm/eN25StRpQZMUhHjx5NsVMi6aD0vJgmLy4mS6AO1ul+gzVrtVowjkulklqz+HdjY0OPWGOO\\n\",\n       \"ZvUjB32j3F68E9iOra2tYP5xhgFc4yk4s6Akx0rZ9YulTnhNs8HeHOeCMk1NTekzuE/QNp6EAuej\\n\",\n       \"s7/f29sL2Ha+LwKR19fX9Xc41s4MEgNyFmCkeC0Bm8LrIpiQpaUlvYbXLNumImGuzcuXLyvbhdir\\n\",\n       \"rNyXFp5CO5eBBRkti58FFssF7NpXKBSCdfa+++6Txx9/XERG82h7e1vHN8uVMLD2Mhts59+lS5d0\\n\",\n       \"HDPj5HkDbHB+tVoNGF2R0ZgCE18oFII1htvNi09kxtnWbXt7W8clZ7+YxOvAh2tYfsFeWywW5dSp\\n\",\n       \"UyIyVKDH726++WYREXn11VfHPkvrMvEvfwwY1yhv9QYq77kckMkLoN1I8eaKKW9LF3rB654K9GAw\\n\",\n       \"CGjetwq8qHqnIVgLitNwiKQXQaa97YamUqmk0hiIDCcyrmFXoaW4a7WaLlZ5weGDwcDdQOVR8B5s\\n\",\n       \"ULdI2k1mXUkrKyvaLl7QKgI9NzY2XBVcm/yyUCjoNfwytm43kXDsM63Ni79tF35R4R6Li4sBJb65\\n\",\n       \"ubnvl413AgfgBXe/Gyl2+3LiW7zMefHF/LInHQG8KPgatAM2Q9yWrCCOerErCdfghVGv14Mg3s3N\\n\",\n       \"TXeT4QXLovz8DDt+vSTo3ulSdkfyy4Y1qoA8lWY+rMEB9KgD7oO6sauY4Z2AArBR4hOaGFd8+hDt\\n\",\n       \"zfXgjQg2DuxGxOlPbKQ4XVEeeIzxPLJ1GwwGgcGXBbyEUceFhYWg77zDRDwX0Rb8Ir9w4UKQDJz7\\n\",\n       \"H+3L9+ZxZV3U41T7MR93d3d1jqBuzWZTN1xYzzhIG/3VaDSCZMQMz0jjUBBcy6EFntFnDaBLly7p\\n\",\n       \"+ECZ9vb2dBxjjVlbW9Nxgg3V7u6ujrG7775bRIaK+uMQXXsREREREREREQfEuy7Y/Mfl7uN72iOx\\n\",\n       
\"DA58Zgvdo0lt4Ka3mx7n3nyrwKrNsIZRLrbePVcSB5mi/VE3lj9g5g3XsMvGPvcg+RNZtsBah5VK\\n\",\n       \"RZ/BOZ4sQ5MVWGj7h5XGvXEHiy5LeR0sBSxzZi5ZuZyTwaIe1rXDFjYsfy+IudPpuAcf7BF2du2g\\n\",\n       \"zbLyBU7CmE5PTweM7jjwcWSrMM1/o51LpZIG5GeNnTzXRF4CcP4Mc2Bubk4tftaegVzDSy+9JCJD\\n\",\n       \"FgxsjefC8FgMnme23biv8TdLHWCcevpUxWJR87wxo2EDkJmBQ/tw0mqA2Ts+VOK5mdD2zJjCFYv8\\n\",\n       \"ZdzO6MNyuayuJzArrE4O1ujatWvalvfee6+IiDz88MPuuLV9zMH/DM5OIOIn2hZJayjl4f777xcR\\n\",\n       \"ka985SsiMhwbVgJia2srkGeAMrlI2l3tjSfgyJEjgZQEy7PgeWtra4FMQrvd1ncC/r1y5Yo+G/IG\\n\",\n       \"jUZDFeixHt5zzz3KcKFuc3NzymZhXLXbbW1LL+SB1z2wbWBCOfsI2n4wGGSueSIjxuz8+fNBQml2\\n\",\n       \"HzNbhTphzB4/flzXCbQFxnAeIiMVEREREREREXFAvOsYqUmZqP3mEWOwAnJeUDr/3mOkOKccPrPl\\n\",\n       \"/0mwUSLpuBVYB3ysFBYLB32zdSAyvi0n/R0siKxj9B64Dfke/N1+xEHRN4jn6Ha72kZ52eEZ4+Il\\n\",\n       \"vBxvNl5iampKrUQb5Criq04zm2XH+WAwUEsU19y4cSNg5fBskVH7eeyo1xae1T49PR0wa1kHLmye\\n\",\n       \"Po5PA5gJAbrdrrJULGQIRs2WB9/b+D8en56QJrC1taW/veOOO0RkKLDITJRImkXjnHE2XpLbwwuG\\n\",\n       \"98YY7sGWOMaEN9Z7vV5gjaMuIqN24fp6defDDrYMLJ3BgIwF51y0/b+xsREofc/PzwfB11wmXkd5\\n\",\n       \"7QDAmLAUjJV+4FyazC7bNjxx4oSWy5uHHiPCAAttc43ydxybxbGXGBN4vsiIiZqeng7KurW15UpO\\n\",\n       \"oM5og5mZGW0vZsIwbtGX9XpdnwHpCS43WNcnn3wymHM8trntJz0cZNXT19bW9FrObQkWG2sNrxFg\\n\",\n       \"ME+ePKl9jXpwP+B+fHAMbXH+/HntG1xrY9M8vOs2Um9GUXu/4NNTXgA6L4AYjLwJsy5CXlwxELxT\\n\",\n       \"cpMii672wGW2i2Cv18vUmmHMzs7qAoe2HwwG+jLy3DPsNrDPYPqW28W2ES+aHOTunWxjbSyR9Okp\\n\",\n       \"dp3g3t5LZ1KwCq+3YHgBwugvm6aHwWk5uG52MykyeuHwS8S+kK9du6aLA55fr9eD9B3e5j8r0bKd\\n\",\n       \"e5yAmk+Neu3Cm3XAbuC8jRTfm09PYUMoErqw+D48VzAuUc92u633we+73a4u3C+88II+F33GJ/ns\\n\",\n       \"y2Zubi4I8C0UCu7pUy8o3X7HmjzcLp6uEcaHp5CNTTv3tafTg+d6a0zWy9GmnJmeng4C83u9XnDy\\n\",\n       \"7cqVK4FbaGdnR91LeKHdfvvt6sqCq1BktIZijG9vb2u/cV8CfHjGroXLy8u6JnCbTrrOwlWEecmn\\n\",\n       \"6dg95502xXxEfy0sLKSSPdv5wGr8QLPZ1DGBTdqRI0d0LfIOzaCs7XbbTbrundK0J0I3Nzd1XMAV\\n\",\n       
\"xocrWLPOm9eYh9iEFwoFbS9O1m43/TMzM9peGAfNZlM3P3CNMpHAz7dl4QMm41JTMaJrLyIiIiIi\\n\",\n       \"IiLigHjXMVKTHq1mS8PTfZoU2AHjfqVSKaDsWfuIv7M6PZ6r8M2wavsJ1mYL0ysru4ZEhkfmcawY\\n\",\n       \"ePHFF10F57zEyvh9pVLR53q5/oBarRa4Hj11bQ/T09NaBraavSBegHNF2TYol8tq0fBx4Lzyezo4\\n\",\n       \"DD4azPcVGWlVMSPFsMmIC4WCWot8nzz1X86fiDLwcXlYpHmU/GAwSB23x79gO2AhZmlqefPPSoSU\\n\",\n       \"y2VXV42TM6PMfD+7PrDF6bnx8Pt77rlHnnzySRFJuz/smPaCrIvFovYDM1M2gTIfBMhjgbgd2CVr\\n\",\n       \"rWc+wo5nVatVd43My7WHMu3u7gYHQvb29lymzDtQAHYE4/7YsWOB27pcLrttiPtxjkEofIOR4n7m\\n\",\n       \"gyC8tgFoK06gDOStm7Vazf1+90LShgAAIABJREFUXP44EZGPfvSj6t7E79bW1rQMvP5gfnMwvm17\\n\",\n       \"ZhK9+ciadmj7drut98GawDpSP/3TPy0iQzesZb1ZOsG6Xy280BmUEc89fPiwzjl8tra2FjC/rVZL\\n\",\n       \"mTeMl0ajoeOSc5biGWCfOp2OzlcwR61WS/sbrs+lpaXAC1Qul90gfpswPks3jxEZqYiIiIiIiIiI\\n\",\n       \"A+Jdx0gdBJ6acN7vPIuZg0RtPBQrfvO/HMSLfyfNwj4pbJmz5BTs8dcswPq8fv16rvo3KyrjGraY\\n\",\n       \"YYWhfllMos0v6Fl8kwpyZtXNsyi8HIVe2WwsQBabaSUxRPy+tUrUDFhvXtZ2kZBZY1aAy+e1lR2z\\n\",\n       \"m5ub2gZgRRuNhlqzEBT06ujVo1arucyAdw+vfFbFfDAYpARPLXAPKw4Jls07ds1B55YJfvLJJ4Pc\\n\",\n       \"Y3wfIEvdGWOP40nQnzwv7Pjh8vF3VoqFRWRxP76WWQyPcbb344BxPHd6ejpgx6anp4N5NS5rA9gK\\n\",\n       \"DljmgGZvbLG8hMhwHPzwhz9M/YalLXg9wThhBgesA1h1jx32sLu7q2sQM3Fot7zMBffff7/80R/9\\n\",\n       \"kYiMxiHHLHHbe4c/MO8xL3d2djTuK0uQFZ+zsDDWbRyk2djYULbmiSeeEJHRQQkRX0Qa9+OYQAZn\\n\",\n       \"/8DvbbaBTqcTxCoxG4kx3e12U3I6WfX14vWSJNH5inHAsWWIdyuXy6nYPZFh+3lrC4Dx5B1msXhX\\n\",\n       \"b6Qm1ZTiRVdkvMvBS0bMJ5K8k1J2s+bpTXGZx53+mBQoFyZroVBwFah5McTAAHXZ6XR0YWKVY9DB\\n\",\n       \"rD3ipQGxLodyuaz141M2aDe0Fbs/GTbg2dtcsUsR9SyXy7oI5QUUlkqlfSepHOcO9hJOZ514E/E3\\n\",\n       \"B1jseIxhweOgeHZ9oi/xOy+YVESCl2ur1dIXE+5Rq9Vy3d9YgDhYHy8MDiz2AlaBXq/n9qvd7Lbb\\n\",\n       \"7YDa39vbC+a8fcnj/7kf8lLrsPYM2pCTR9tysSGFlzTrNWGNKRQKQRJvPlzBYQG2Pfh0r2ek8Pyw\\n\",\n       \"Gm6FQsF9Sdt1zDu5yG2JFx8HDON+WZsorAnsrrIB79VqVccoXDI7Ozs6H5Du5eWXX9ZNKe7BL2G0\\n\",\n       
\"/YsvvphyTYqkN3/4lzcveUbUtWvXgnGSJElwws3D+vq6Pg9t0Gw2dVNgT5yJjOY164TlaS4xeJOG\\n\",\n       \"uh09elTbF2XudDqBltqJEyeCEIrbb79dXXloe2iS8f0ajYZ7otuelNzc3NTn4feLi4vB5o8TFKPt\\n\",\n       \"K5VKsJnKMrKtgen9rtPp6OdsHK+uroqInzgdmOQgVnTtRUREREREREQcEMlPSsco9dAk+ck/dB9g\\n\",\n       \"l4m1ZrM0VywjxYwJLPkkSYLguyRJApq0XC4HdDDnufMYs6zPcEwY1kKxWFTLJy/od3Z2Npf29IA2\\n\",\n       \"qtfrAePm5f3j578Z3S/PXcHfvR1j3EtkLeIfSQdYxRhWM6z2c+fO6WcILL18+bJaVGCXsvoM1jD6\\n\",\n       \"qNls6ljkoHOwYnmux3K5nGKiUC9YsWyRWhSLxSAY1t5bZMhu4HcIwt/a2kodBwd4jthkqpVKRdsE\\n\",\n       \"92NVd2B5eVnLj7E4OzubYqeA48ePi4jIxYsXRUTkAx/4gDz77LOpujN4vNuganYd5rmPOSjdy92H\\n\",\n       \"+jBbwGyFp/FjwcljWVGdpSREhtY9+h19vLKyou3Hbi0Lbmfgk5/8pHz1q18VEZEPfehDIjJkpLB2\\n\",\n       \"YYxfu3YtYL2uXLmSyrwgMmQ92K0oks4Fx0HdaH9m5zmRNepjXcUsb4HA7FqtpgHcYKZ4nIHN39nZ\\n\",\n       \"CdaprAMmnF3CModTU1O6FoANrlar+hmHZuAzTm59+vRpERE9ZMG5T7ncYAnB6GQl87WaVh5OnDih\\n\",\n       \"7kqwX5ubm26eS8hfYG17q7w4BwHW88Fg4EaeR0YqIiIiIiIiIuKAeFfHSE0aQG3Bwnjsd7dCgWyh\\n\",\n       \"sQ8Xu3W+1rsfxy0BLLCH6+zvut1uYH2OC1S39+Uyi/hMGizHYrEY+JnHsVEoM8fVWGG0LLDFzfFS\\n\",\n       \"tvx5Ss/8nRf34TFcsOgOHTqkFhrHGNl4lHa7rfdG/Eyv11MrnOMv0JbjAhNh1aF8xWJR+4bbHM/j\\n\",\n       \"eCOMdw48tkG69hqRIeuBPoEl7+VU9IT3qtWqtgEftWbGQiTNwHlMFPoNcXdcjxs3bgRxcTz3vPxg\\n\",\n       \"zKrynLN5CFutVsACFotFbSPU48qVK0EMpcdGlUoltbjBRHzve9/T7/EsFjfktmZZAZEhS4H24rFt\\n\",\n       \"2efd3V1XfNPOkWazqSwmWIper+fGWll2qlAoaN0x1ra2tnQsci49T7YFz8BYm5mZCQQxvTHGzCTK\\n\",\n       \"fPLkyYBN57hMfMfxUCi7xwTzGEGZ7rvvPnn88cdFxM9biPtsbW0FivBcFoyDJ554IvAQMEPIc9+u\\n\",\n       \"4Ts7O8GaZYP67bze3d3VcQK2q9Vq6TgGk3Tx4kUtL7NiYKL4fogZRF+XSqUgn18WPCbKxgReuHBB\\n\",\n       \"64xxz2s0j0+wipPID7zdeNdtpLzNCxqaX2I8UK3bjTdDXnA40595GynWh+END/61NH6SJDqZeLLa\\n\",\n       \"AF/+jJ8BYKL0+30tCwZguVwO9GsYXP43o6puT1lYWK0YViLn7zDBuY08CteqevPGkIP6UR5ejHAt\\n\",\n       \"XsJJkijtjft4SWQZPDZQVg4wt1plHjigFODFkVOYwJ3FyUzxPVP2nIQYwCbRa0fv4ACn9LD1XV1d\\n\",\n       
\"1RQcrBCPsYU2vX79ult367Lb3NwMDmGw2jG7wzE++aWFazgJM48F6zJhV7an8eapxPOGht2BuD82\\n\",\n       \"zXBNNBoN7Qduc08vCeBNp3cIxn42MzPjnny1m6GFhYXUqSn8Bu3Pcx7rgFWDFxmNk+np6eBQwmAw\\n\",\n       \"CAyz69evq8sGL1Q+MOC5LdHOzzzzjH6G/n3xxRf1M09Z2rpXGXzq1csGAFQqlSABdbVaDU4zTk1N\\n\",\n       \"BYHpPM7wPB5rHvDd1NRUsAHudrtBEL/NDOCttV6mCWxAMEduvvlm3QyNO73I+lxvFtVqNThI0W63\\n\",\n       \"tXx8eMX+jjeR3pjx0hB5bc/6amhzHrtQ1OfTvlxWLlMeomsvIiIiIiIiIuKAeEcyUthFMsvDx/Lx\\n\",\n       \"HXbcLC8AsASAVaL2ZA14x+qxRR6849usN2UZIWa9ePeM38GCLRQKgQuL78VUPK7hYEJ7P8Z+dKtg\\n\",\n       \"gSIImvNb4bNarRbksmu324HW0d7enpv3CpZI3hHTer2u7cYWlbVi+Gg9uzKs25V1dfC7Q4cOabt5\\n\",\n       \"SYE5eDQvf1wespLMosw40n3jxg1XG8tTd0d7oK/a7bY+h+uB75mRwP28cnl5E9HnrPEES+7y5csu\\n\",\n       \"c2D1oTjnIurNFj9+l2XZ2/lo5xiYPATusgaQN/bx2fz8fOAi4gBllLVSqWgb8jF5y9BMT0/nZmHg\\n\",\n       \"ethrWbuH549dsyqVSqBztrOzkzpWjvLZ+cXBzZ4OF9qfxx/GnXcgpNfr6bVgpDqdTkoXTGTIjoD1\\n\",\n       \"QNvzuPFYV3zPblBee8Gscs44ACyZF/5x/fr1ICFtqVQKGIi5uTl3PCIYOg/b29uBy471kJg554wK\\n\",\n       \"/B2Qx3Z57yn066SuuUnBivqecjzAc89bH9E3pVJJy4q24hypeSEsCwsLwd5ga2srODTDLn6EF6yv\\n\",\n       \"r+s4y9NMnASRkYqIiIiIiIiIOCDeMYwUH/f3YnysSJqX98mLfRgMBrpb5/taS5YD0Md95sFjjmwA\\n\",\n       \"d6VS0R03xxvYXTYzbJyvz1qaWdd6wnl5x+7r9bp+j3tzQDGsj1tvvVWZCPTD9vZ2wIA0m82JGBov\\n\",\n       \"ZogFDxnW4vGkDmZmZrR92codp66ehVKpFFjyxWIxpciL+1vmhf9G+8zPz2sAM9qbA3cxJiqVilrt\\n\",\n       \"3N6eKKDHEliGqVqtqvXHMSWwfL2+Qp+//vrrGmyMth0MBsqAgqXa3Nx0mS3O7Yd72ByUzHp5qu0A\\n\",\n       \"M4RApVJJlR8sC1udHOsiMhyn9qj25uZmKv+dSHq8sPK+ZXw9scwsmRQ7X3nsoP28++3t7QW/4/aG\\n\",\n       \"ijUfVLExXwzvMMn8/HxKPVwkHZuFNvAUpkX8mEvMFVbett8xQ4z+4xgZPnhjWWORUT+BjWTpA4xT\\n\",\n       \"Wy+RYd8juBr35bHoxZhybBae84u/+IsiMmR+rPQDx7EybJ94BxKyxGsBVvCeFBhDt956q9aVleYx\\n\",\n       \"jlAeb83ksZ0Xc8Uxxh7Q/1NTU8FYZbkPzOWZmZmUZAL+ZUVzPBdzGLkveR+Aeq+srOh7Ik+xfhK8\\n\",\n       \"YzZS3qkzgDcMTG/bwcUuMS8wll86XvJgPIM1aDCZ806EsMuOT2PZEwutVks7HROET4ZxG3jB67jG\\n\",\n       
\"bgz5GQz+jBcenNziBKH4LSbOxsbGvjceHhCMLDKaOKiTR6d6myhPgdirL5eXDwfYlCVc3zx4pwB5\\n\",\n       \"c41FM6vcABb4ubm54BBBr9dTtwMHluNFD+r80qVLEyepthuaQ4cOuZscLKp5tHaxWFQXBjZhjUZD\\n\",\n       \"Fze8MG7cuOEGANuN6OHDh93AZwDjdGZmJphTu7u7gZHALhGR9AZKJL0g872xgUIfbm1t6X24zKzt\\n\",\n       \"ZTEu4TiPGS6zSNrwsmOR/99LG8NAW2Kj0O12A3cLH5DAOG21WnLPPfeISFpHyJad55RXD6Ber7un\\n\",\n       \"tmwfs/I+yj43N6dhAxy4bcHrAM8FtAt/xnpkIsO+soH5N27cUHc6gs45yN1zLSHp75e//GX9jNOt\\n\",\n       \"YLx4Y5u1t/COAbL6nLW77DrW7XaDE5XjgDF79uxZPdXHOl1eAu1J0Gg0gsB3kbTaPMoMvPzyyyIy\\n\",\n       \"HDt2I8MnG1mvC+OD56h1yXNbeorl3K/4GxvuQqGg4wiB6JOcGoyuvYiIiIiIiIiIA+JtZ6SwI+Td\\n\",\n       \"tj1K7lHiTBmy5YrdJFuuVuqAdVoA3kWz28Wqk3tWaLVaVese9WAXBrNBNvC5VqsFbBcHNLPiq2Uk\\n\",\n       \"SqVSYPXy31nJYzlo/K0ErKPp6Wm1HmHJczJdr1ysgzIp8wKgDUqlkt4bltza2tpEuZIWFxf1WpSd\\n\",\n       \"DyCwxQ9MqrT7vve9T0SGFLpNiJolQQE2gVm8PCX6vEMR3W43YJ1OnTql1l0erb28vKxl8OQSXnjh\\n\",\n       \"Bf0bbJB3JB1gBilP78dT7eYsALiHZUfQnnk56jgAmH+Pe7EcgMdE2bbma9EG4xiCU6dOiciQGbDh\\n\",\n       \"CF4QuReAXi6Xte+YwcHYYVcg2o3V7MFEefnGPEYC7eK1yW233SZPP/10bp0BlB9MgueF8PqfgXnE\\n\",\n       \"WRTAiLErmxkwlJ/HBNZ67i/rSeA1H23F4OTKVi6HxwrYz52dnSCon+vIqvc8N207TZqIOQuTBqHz\\n\",\n       \"AS+R4TpgXbaLi4va5t///vdFJL2ueMH+eTI8vM7y2Eb7nz17NvN+s7OzKYZJZNjOGLfQ/Wo2m9r+\\n\",\n       \"mFscMgI34iSIjFRERERERERExAHxtjNSVjAtS/XaZoLnvFUsSmiPQvLfvLO2ViVbtrhfpVJRZoOD\\n\",\n       \"TS2yRDVtvBZbWZyhHRYIH8lndXWUhYU48SwryFir1dxcemATuJ7MenFsl4gfl1SpVNTqQLvs7Oxo\\n\",\n       \"HAKsuixrHJagZxF6wagcX2WP2zYajSDQem1tTa2bvBiFTqcTBDV7cULePaampgIGYWZmJlBKFxkF\\n\",\n       \"P7Iat2VSvNgqjvVDW83OzqpVx22Ux0R5sWiIY7r77rvlm9/8Zua1wMrKirIsPO7AajKLgbqgvt4Y\\n\",\n       \"2t3dDQ5hiEgQ69Hv97WvObjaikdy5ngRP37HCoAyQ8OwAeisms1BsNa6ZkaK89J5TCiCm9mituKb\\n\",\n       \"3Ea47/b2diDj0G63g2BklmxAu7HUgScRwAKWAPqQY8fygs1tzI9tF2Z5WDVdZMis5OUA9BTd+b6W\\n\",\n       \"sTp06JDGgTIjhXGM+lYqFWUGH3vsMf2dnVO8nnnitcizKCIBY8pgqRAW5xTxMxPweGf1/x8XmPlh\\n\",\n       
\"Rs2uVV6A+/Xr14P8oK+//nou24216LXXXstdx5gZxJrAzwBwj83NzYCtK5fLWn4vborHH8YJ5tYk\\n\",\n       \"OWff9o2UdWvxpskmDMb3IsNO9XSaLLyNGU8+T+0ck5lPYHnKxbZMXGZbJxHfHcmbMK6n3VyxO9JL\\n\",\n       \"2MoLKgdaA9gUzM/Pp2hxkWEfeIMFEx+Dt9Pp6KKSN0EqlYo+I2+CLC4u6n3QrrVaTV9ovEFDnVBO\\n\",\n       \"/mzSwEh+AeadhsFLvdFo6MKJ5/LEBXZ3d92XEU7AoK87nc5E7sB6vZ7SPxFJB0FOCq/tEUC5vb3t\\n\",\n       \"vvwsbr/99iCVQ61Wc90KGLO4b9ZGyo6dWq2mYwzjq1Ao6LxBv5VKJbdO3ssXZdjd3U0p0Iv4c67X\\n\",\n       \"6wU6WDxe7NrA4E0Z+ojT+OBluLe3p6cx84w6fh4r6nsniO2ptE6nk1LwFxkeVEC/Y1PNJ2Y5oBzj\\n\",\n       \"nY0IL+Densby1g8Ob+C254TtIr5SOuon4hsbrJTvbUS9VE328MzCwoIGmzPy1hPPRYV2LJfLQbLk\\n\",\n       \"LGCceGEOvG7zOLJj75577tHy4Lm7u7tBGEpWgmqbBLler+s13F+THMwRGc33kydPiojILbfc4rrg\\n\",\n       \"bN0WFhYCo7jX6wXl5jCC/SJr7UQZOH0Y0hjhPcRprbIQXXsREREREREREQfE285IAewusUloWcLA\\n\",\n       \"S1bLSYStG4+PUbK7zDI5rEGF75rNphsIaYMgmXJmNwRrMuH5lrJnTStm1LygdhvA6OlNeZaryIhV\\n\",\n       \"Wl9fzw3yAwvH+aDygoc9jDsWDrDLiV0neB73NdoQ/9br9eBI97jgSe5ryxzMz8/r37Bcs/JOWTaL\\n\",\n       \"8wh6rkl2++bJJOQdZZ4kYB6wwbIMVpWG5ZoXsL63t6dUPs+FPJkKz2pHmdrtdtBvtVotODrf7XaD\\n\",\n       \"8VkulwOWK0mSlNVudYY8LTi+hseBJ3GC36H9WYWdNYOs+4kD2tk1YXXYPEuZA8uBwWCgbcg57exY\\n\",\n       \"YXc//uXcklYewraLRaVScdkm2/9ePTyWaWpqSscHP8+uab1ez9XpAngOWvfh5uamrgXMxMG9yeA8\\n\",\n       \"f4BluNhbwYHlFlmsMdgx5PPjOvEctQm3+XdcBuDJJ5+UO++8U0QkJUfC65JI+vCSp9eIMbm9vT0x\\n\",\n       \"+5QHyLg88MAD8slPflJERlIHV65c0fUEn33kIx9Jvb9EhmsIxuyrr776pstUKBRSCZtF0sH8+Hdn\\n\",\n       \"Z0fZbDBTYGRz7/+mSxgRERERERER8Z8pkrwYlh/bQ5NEHwrLguN6PHVyazV76uQeu8MxCHyM2lpA\\n\",\n       \"HFvETI9lkDhgnJ9r78dxWBwgbwM3PVkALwZKJAxkbLfbau1wLAXHfSGWJUtIFM+FJe1ZifsF5wBk\\n\",\n       \"pWe0AytHTxLfVKlUJma5AI5LsYrxnU4nldNJJDtXHq4BW9RutzMlCxiDwUDuvfdeERnGCoiIPPLI\\n\",\n       \"I4FKNLOoNp7EwjKhHIPCshpZ0gB8D/b7e8GjYNNWVla0jTimCjFhzGyAKQFL4o0fZn4AT9qBP/OC\\n\",\n       \"jYFyuZwKzOecfoAVt+12u0GQLweHs4K7F5cIcOyQvR8HqgOHDx8OVMmZbQEzkCVuaVWzGXlSASKj\\n\",\n       
\"vkFfc8A4ArNZFgVl4Tp4jCnaZ2Zmxo2Js4eIvDHBZeHnemUAvDHBjDPaIe8eN998s9YF/dbtdgPG\\n\",\n       \"mccGYqpYDdx7Bt8jb6yNA+IsOQYTbXjs2DGN2WSW144FjrnEHGBx23cSwBrNzMxou+Z5GiqVinvQ\\n\",\n       \"y2MGsQ6gb2ZnZ7UfxklJYA8yGAzciP+3bSP1djw3IiIiIiIiImK/yNtIRddeRERERERERMQB8bYF\\n\",\n       \"m1tXEx/3z6McQcsdPnzYPYoOtxH0QaDea5+NZIYIpNzd3XVpW+vuy6INAZu/SmSUa+3SpUu5gcCe\\n\",\n       \"/g6eX6vVUi4HkfFJeOfm5pS69lx7Xs6hLLciP5ev5WtYi8NS7+NcTuOQ57rwjmcznW6Dm9ltZANL\\n\",\n       \"+VmFQkGvwXhBAOI48MECbjcvubHXj3BNYqxlBe5adw+7ivcrlyASqicXCgVtIz6qbYOVFxYWtCwY\\n\",\n       \"v3ywgWn3vJxx+F25XNY24jLZXGAcXN3v93V+cY4y3MdzPfHz4VbwNMWQl6zVak10+OLmm292Dyt4\\n\",\n       \"SWHtvO73+9oe7Lr67d/+bREZrU+PPPKI+2ybo7Df78sdd9whIqM28LR0Tp8+LQ888ICIiPzZn/2Z\\n\",\n       \"iPgusaWlJV2rEDDsSVN4h0VarVYQpsHrAVyQCwsL2g/sSkQf4XcXLlxwDz584AMfEBGR733ve/pZ\\n\",\n       \"nosY33W7XS03B6qjP6ADxjn5+F1i13eet3laWSKjfuMj+Xhuq9VKzXGRyXLA2Xvb+cg4SFgH60Ci\\n\",\n       \"XT3JoTfjgeJ1dJL7cGgJfn+QtZAx7rmRkYqIiIiIiIiIOCDeMfIHXm4dL7fTuGzX2BUzIwTrCbvS\\n\",\n       \"qakpZYk4wNIyXBwYa4N6RdK5sbAz5+eeOXNGRNLZ671jx7a+bDlzgDQslXFKq7hPVoD5OGsE8IRC\\n\",\n       \"mTEQSVtcnvIw/z+YI68NPBzEmgET5Ymf5qkO8zVW9VwkbUV67Fge08iwwcZTU1M6ptEug8FAA1OP\\n\",\n       \"HTsmIsNgSNvvbN1zwDXKgHF/5513KrPAAnle+1q2qN/vBxa/J3Z448YNN2jdYjAYaB9x+1q2qNvt\\n\",\n       \"BuKlWWPXlldkNNc9tqJUKinbgPnf6XRUSPCuu+4SkTTjg4DXU6dOBeKWXBcwJZ/+9Kflj//4j4Nn\\n\",\n       \"eyyMFQdGGUXS+dm+8Y1vpMo8Nzenz+Oj9fbQQq1W0zGWt3b0+3036NuiXq8HIoXMavPhDjA9Hutq\\n\",\n       \"A+D576WlJS0zsLa2pmsIWLms+YY+gpfh2rVrOp44Q4QnXgygf1dXV+VjH/uYiIj86Z/+afA7rIWL\\n\",\n       \"i4sqscDrKecCBbyDIygX5gIr108anM5ZEXjsW4mILJkc79CHFcZm1s5jenjs5M1Xr81ZiNYeviqV\\n\",\n       \"SloWjEvODIL68jj23td8mMw+lwVIvcNnWXjbNlJWF4MHFiaLN4nhWuHTTnkvgqWlJR2gSELIKS6A\\n\",\n       \"lZUV/czbJHinBfm0gH0RTE9Pa8diweANBJ4xPT2tHeclE8YzlpeXdZHh+uJlyYqwoL/HqexyW3mp\\n\",\n       \"HOzvZmZmUikwRHxdIHtvlNnbrNm0HCL+RiXvVBefuLAbENb48dxjfF/vGQBvrixNLjJyDfDLf5KN\\n\",\n       
\"YKvV0kUSaQ+KxaK6j/DvTTfdpBtkzIFOp+POEQBj8oknntDP7rvvPhER+dd//Vc34e0kbldvUS+V\\n\",\n       \"SjrOvZMwvLh66u6cCgnl4IVRxHc9cOoHlEMk7ca1i32329UXLMq6sbGhL+df+qVfEpFhu9mNx/PP\\n\",\n       \"P6/1BPhUKV7gpVJJ2/rxxx9PPZufy/VjoG8wP5IkCebzqVOn5KMf/aiIiPz+7/9+cA/eqGCjlTcW\\n\",\n       \"19bW5Ac/+EGqnB5WVlZcnStgUjcKK2FbraDnn39ePvGJT4jIaA6sra2lwi7yyvfwww+LiMhv/dZv\\n\",\n       \"iYjI1772NR133C52TSiXy+7a9ZWvfCXzeeij119/Pbj22LFjOkd5/be/W11d1fmCdmFtw3Fg9xeP\\n\",\n       \"fXwG8P289xgAY6dcLqd0l0SGcxXzDBvqubk5bcu8PmL9qjw3Y7/fD9aZrJPbdryx7uQ4gyUPnjZg\\n\",\n       \"5m8numNERERERERERESAt42Rwq4Q7AmshX6/n+siyMuRVi6X5fTp0yIi8tRTT4nI0OqwO9Dl5WW1\\n\",\n       \"cmDlMxuUpTaM8loXoMd03HTTTep+ZMscz+NgU8/qgGWO4NRms+k+B9YH2qVYLKqV4DFSY4PmSDfL\\n\",\n       \"slNZytZ51isD7YHgzE6n4+rlgEFk5sK657xE0Qy2ZsCeIViT2VAeG3aceAmemUpm4BlcZqakRdIB\\n\",\n       \"5mjLdrut4437ywamvvrqq3o9GFtWaEf5OEAV4441vDAv6vV64LZOkiQ34XAeut1ukKPqzJkzygKj\\n\",\n       \"vseOHdP5zfMc/ZuXi9CzJO1cRVujDTinHN/7+9//vn4P4HuM00984hPypS99KXX/wWAQBKMvLy9r\\n\",\n       \"3WEJP/fcc27+Rc+15iVgtQcG2NWBz9bW1gJ2jAH9shs3bkyUA+7y5cva73lW+OzsrKs27a1jNq/n\\n\",\n       \"LbfcooHaaO9XX31VTpw4ISIjr4GIaFJtzPnjx4+n2FgLjDt2PSNo/pd/+Zflu9/9roiM+oDXUzCU\\n\",\n       \"6+vr2odgQLiu3AdwBSOcY2FhIUiqvbS0pGX1QhoQvD49PT025CFPYdubG1gXa7WavhNQrq2trZTK\\n\",\n       \"uUh6PfHmDMD9jDHujXURX0fMjhNey1kHEnOTPQqT5Codxzixi8+WZWpqKuXdmRRva4xUqVTSF4U3\\n\",\n       \"MTAoRUYDgF9GNpXM3XffnaLR7e+sCCPfl0Xh+CQXBiAmJy92XhzB6uqqlgkLLm804HIal4ARExY+\\n\",\n       \"96zBgVgv1K3ZbKpEfxbyXE4ck2V/z5+hzjwY+VSMfQbHm+EFz/2LU0UvvPBCbhJX/n+vHpwtHbBj\\n\",\n       \"q16v66bK25jZeoukx0yej90D9z8mMcfA4GXIp5Q89xmnH7Hl8BY+jM9KpRK4ybz7c/3xktjc3HTj\\n\",\n       \"EfHyh5jj+vq6ngTD5vjq1avBhnB7ezs4VVYsFrWs2GT3+/3g5V+pVLStMOY2NjZclzIW4YWFBR1v\\n\",\n       \"3DaYXxy7g7/xss5zmzL6/b6e6sPL/OLFi/py4bHtAW0NtyCnLUEbLC8vy0MPPSQiozQlzz//vHz+\\n\",\n       \"85/PLBc2DA8++GBwwq9er6dS16AeNpbpzJkzOn8QJ/ajH/0oSLfivZQYGLNcN6Q3OX/+vBvWYA20\\n\",\n       
\"YrGoZYEBzkYqh2TYteHs2bM697x+xfhsNBpBCAK7lLC+/Pqv/7rOC2y0bty4Ie973/tEROTZZ58V\\n\",\n       \"kaHhcvz48eB5WCs5STfu7W1K5ubm9N77xe7urttfk24YvJPrFt77m9Mz5RnwpVIpeDf3+33tJ37X\\n\",\n       \"oN85ybEVzeUwEs99yEa0dTPu14DUuh7oqoiIiIiIiIiIiLeXkeKAUgasSezQeTeN3eny8nJgxfCJ\\n\",\n       \"JHZRwCq2aUv4Mz4Vx+z+hs6oAAAgAElEQVQD6FamVWE5ehYOLDrPqigUCgF9OzMzo8/jEzNWL0fE\\n\",\n       \"t8LQfvtJ7OglTrbpODiFALNKsOrZlchuWcBaINPT01pP1I1ZEXZRjGNL7GdsfdrElP1+fyIro1Kp\\n\",\n       \"uK5TD54FNwl7wWMdVu7Ozo5+jjF7++23q6XKOmB4bl6wdrVa1fbzUlYwI4XxjudXq1VtK/x7/Phx\\n\",\n       \"tagxxprNpjKzmIPz8/PKymC+bm1tBX2ZpauDz5n9sqzC3t5ewBqXy+VUf3BALK7Fc8B69ft9+Zmf\\n\",\n       \"+RkREfmHf/iHVBugfiLjk2ADhUJBmTk8d2NjQ8s9zq2GteI973mPiPiJdK9cuaJj+xd+4Re0fJ6W\\n\",\n       \"HoDn3n///UFbegmgvVRMMzMz2q9oZ09HyzvdNS5IF/c7efKktjUYvUajoQHyWKNZl8tL6YE6Xrp0\\n\",\n       \"KVjLz507Jz//8z8vIpJKwo13DYcYoJ3BJP7whz8Mxucrr7yiLkrWcoOuFsO2V7lc1ueCNWw2m3Lr\\n\",\n       \"rbeGjfQGDh8+nHvi0juI4Wn9AeNSYvG9cO24EA7L7mSx9PZASKfTCcZKuVwO0r212+2g3/kkHycJ\\n\",\n       \"tynPZmdnU9pyIsM2txqIe3t7+pldH3PrPvYXERERERERERERLt5WRspjGVgh1bNoEHzNbBRb42CE\\n\",\n       \"YG3NzMwoy4IdPccBwFLvdrvuzhNMCVv3sEA8deA89Pt9rRtrrmDny9Y2PoMFVK/Xg914tVoNAvO9\\n\",\n       \"ZKnjwIwUB1Lb/kGZRPzgapYIsNZI1tHV2267TUREHnvssdwy5kkYgLlsNpvahui3a9euBePIY7yK\\n\",\n       \"xWIQGJllUXl12a9yL7Oj6Ff8W61WlZ3CWBuXVBMW5s7Ojl6DsW3jiADbljs7O9oGsOQuXryocwmM\\n\",\n       \"yc7OjrK/aKPr168HQdilUkmf60lkoB8OHz6ssX5o2xdffNG1mj3VcQ+oW7PZVOsfbcosFdgHnkdg\\n\",\n       \"1MYdtQfDVa1WdbyhPZrNpsZX4t4sscLAHEds1kc+8hENhkcbtdttZW0QZzkumTfWgWazqfXEXFhd\\n\",\n       \"XVXmC33i3euxxx7T+yAg2+uDXq+nY4ZVpb1xh7KA0RkMBtp+CL5mVm7SoF+er946YfUERUZjmj0F\\n\",\n       \"mGtgl2ZmZrTMYK6+/vWv6+8RL9hsNidaezudjr6DWOke7xN4PG677Tb5l3/5FxEZMmqcWcBiXFyp\\n\",\n       \"ZXe8tY1Zb/u5SOjJ4O84kwPm2+7ubsAC8Xs2L1a30+nkZrPgoP88xoglcvLWUIyXarWq6+e4hPaM\\n\",\n       \"d4wgJ9Dv94NAPJERzeotbgjWZOoXi1y5XA42G6yHBHhZ6WdnZwN31NzcXFCGSqWiL3OU89q1a+4J\\n\",\n       
\"Q5QLE3dmZiag/ovFYuAa9Cbo4cOHA9rYO1Fm4emkWKEzD953WUGG9hnz8/PahjwwbRDkPffco6dg\\n\",\n       \"+OVrKWcObuXNHMZM3gkXPlgAZGWMz5vMvDhliZ/yv3yPvMnfbrcndjN6gGtqHLzDEl49MVahR1Wt\\n\",\n       \"VtUFg7Zvt9tBnfj/8zaa165d0/GOhZc3O1iYq9Wqlhmbjqw0SbyhQVtyADrclN7mAS+0cX3ABho2\\n\",\n       \"BR//+MdFROQ73/mOPg9lzgIHHIsMN1ww+rCe3Hbbbfo7BJuPSxFi1xqRkdG5vr6uaVRwmi0LeB7m\\n\",\n       \"4zhDAvNyenpa+4fHFTZN/GJDuTAOuO3RjjxO8ftbbrlFN5iem5NTRX31q18VkfQJbASFw0g8evSo\\n\",\n       \"loGN8gcffFBERiKt7FLCGsYnRIHZ2VmtJ4tJo41QPl5/8N2NGzdS75BJhCF5LWKCAXMR7ZokSSq9\\n\",\n       \"i4i/8eW0LHwK2Yp+FgqFIM1Tr9fT9kCZyuVyMI6SJNH1Gvfb3t7ONRKy3ju4H+rOp71xjZeODmXe\\n\",\n       \"3d2dWPw09ex9XxERERERERERESEi70BGqlAoKFUKq7FYLCoLxDt+6/bg3bSV22csLy+rtQEmiYMN\\n\",\n       \"8Vm9XlcrB64HtiaAhYUF+Y3f+A0REfnDP/xDERnueu2uudFoBElvWfMEFiRb2Z5LBGA2ytOiyYJl\\n\",\n       \"BwaDgT4bbAZbV7A02CL02Kk82nh1dVWtIW5rVjcWSQfps8WNdsJzp6enAwkD/juPjmX3J1gF7qu8\\n\",\n       \"IE1m2zi40esf7+ivZanq9bo+zwsih1U2zo3DyEvO6t0bbT/pgYV2u619yBYznuslCsV8TJJE//ZY\\n\",\n       \"DHYfeIcSPORZ6uxKQFlvueUWefrpp0XEZ7TA4HjMpUhae0pkOGdwDfp1ampK68kZCby2xnhi2QgA\\n\",\n       \"9zt16pT80z/9k4iMGKxxQbBYxzY3N9WFhHu/8sor6iL80Ic+JCLZ7nWsDfYgQhbQl7u7uy6ja9cO\\n\",\n       \"lEPE12zKc+1YRhvAu8ELQQCbura2pmMQKWAeffTRYEzU63V1NXIQO9of42FxcVF1sNDevJ7hQML0\\n\",\n       \"9LSu3Xivzc/Pa1nRZnzY4ciRI5lp0Rgeg5QV9D/pemLBHgJe49DGzCTbNa1YLOo1aDfODILfs4TB\\n\",\n       \"pGETXqaOrPIDaGsea+NSiXmIjFRERERERERExAHxjmOkisViYBmzrxVoNBq6c+SYEGuN864bx0v5\\n\",\n       \"iCpbf1bc8uLFi65KNIDgy/vuu0+ZKCCLobDBt2xReWKJ+P38/LxaSmxhgIny2Ax8NwlsAslyuRzI\\n\",\n       \"GjBDlBeU6O3kNzc33WBJWI6w9Fi0kOtp63f8+HENePZiffLiW7zcbPwstLMX6+UpoRcKhVx2wDua\\n\",\n       \"jH+bzabGjLACu23DLOsRbcpsYF7sDOYH54LE2P7Upz6lZYGl/PLLLweq3YPBIDjAgWfz7/gQA/fN\\n\",\n       \"JHIUi4uL2of4PTNEWfnDLDwmpN/va+wRmIFJVNMBy6j1+/2AaXrllVdUNgBjvNvt6ho0KfuHteCb\\n\",\n       \"3/ymtgPHGnqSLQDmzNbWVsB2iIwSHf/mb/6miAwDvD2GDvUcx4jYsd3pdHQsIAZmY2NDxxMYiddf\\n\",\n       
\"fz0z4DgLqAeX95577tFyslSHBR8SQh8hqJt/j7ioSqUif//3fy8iozgnbm9mUW3ANV+DentJv7vd\\n\",\n       \"rq5jHNML9mw/cZM2RiprbbJyQL1ebyIBTf4e448V3HGgwrvH3t7exEyYlbdhcAC8nf/FYjEIkOd3\\n\",\n       \"D48ZzHtuW9t+kzBTyX5PG70VSJJkood6L2YMylKp5KYXyUshk0dhHzlyRAc4TzSrzDzuGd7vQele\\n\",\n       \"vnxZJxhraHjAyQ1+6VgXx4kTJ3Tiexojx44d0xfFuMBUu5HyNgf1ej31UhNJB0ly0CInW0bZ8cLm\\n\",\n       \"jQ/agwc02gsLiud2u+OOO3Qj5bk/vczygHeazKPEJz0ByfcDspI0T4JKpaLtiw0LB1+yC41TiOA7\\n\",\n       \"tCmrBWN84DM+HZuHm2++WdsSbuP19fXcRR3BvEtLS0Hg7n6AeqLP9/b2UmroIum1odVqBW3OrgT8\\n\",\n       \"e/z4cTlz5oyIiGYB4IMqHKDswaauqFar8pnPfEZERhuzL3/5y/r7n/u5nxOR4Sk1lA+bEh53WDu8\\n\",\n       \"00XFYlHnEta9breb0jDKwvLysrYbXJA8rn/1V39VRIYv+u985zvB9XgGwM9izaK8TAkw6ra3tydy\\n\",\n       \"Ua2urganohuNho53XgtxOhKbtRMnTqgbFDhz5owe6slLQVYqlfTkIDZSX/ziF7WvMba9OhSLRU11\\n\",\n       \"g2fs7OxoG3BwNX6XdzJ0ZmYm6DeR/A1GFrxk9AeFlwUiC3Avox71ej11EERkOCY4TEJkWEdreO8n\\n\",\n       \"ibMH65Ifd+KPgfV8MBi4jR5dexERERERERERB8Q7zrXnJaNNkkR3ttjVZwWg5mlPgb5lsBoz3Gy8\\n\",\n       \"ewcDBoug1WrpMzhBJaxFG0wukqYNoRIN+rPX67lB5rBosCtn9gD16Xa7LhMF62lSzR2REQvDrgIb\\n\",\n       \"KOpJK5TL5dSRWpE0S4C6cVlgQe7s7Ghbcg5FWOawMDlgE5bt2bNn1TqEdc1jB25Xz/qcVIZgXPJL\\n\",\n       \"7x6TyE+MA9PfuDcHm+Nf1vViKQiPSfGCX22C4kKhoH0DVuT8+fMptkZkOD9sH7VaLb0G9+P281hD\\n\",\n       \"ZjBtEL6nTjwYDA4UIGst9wsXLuhnmN/b29s6RuGm40wJDBtUOz09rawdpAIYmB98AAXr1+rqqj6X\\n\",\n       \"rW3MG/yOFaZhWTebzYmkLryMCgzIH2SNXcwvdg96Af6W9WBdItTx5ptvnoiR6vf7KSYf9/eYUNQN\\n\",\n       \"LD6zUQhkf/bZZ936Wddot9vV0A+4Plk9n8sOl+hf/MVfaL0xvzAHqtVqwKIMBgN9nnf4gI/uTypl\\n\",\n       \"kgeut9XKs39bsJQAw7LenlwB6zoCWeMQY5tzvXpz3TJXnDOQy+xpXk2S8JjL4CnEZyEyUhERERER\\n\",\n       \"ERERB8Q7jpHK8rPDKrIxEvZ32NmCyTl06JCyNt5RXNxva2tLrWvsti9dupSKR7D3gxW6tbXlWkq4\\n\",\n       \"D+/CEWzKGentTpnF3jzGhEXL7LM4R944eEHQk/qMmSmx9+A+RPv1ej2NtWDW0IrQifhyFgAzkWg3\\n\",\n       \"r63wLBZf5YBsL2bAMmoH8cfnqQ97GBdvgHHljS/Ocu+1AWdK93I/2qDvfr8fMKFTU1OBGv9gMAik\\n\",\n       
\"Cbx4DR7X+L5Sqeh9uC+ZiRIZWrPjjm9Pgl6vF4jz9ft9jU1hZgr1BPPiKZHzAQTua7Sbx3ogtqxQ\\n\",\n       \"KOi4xH23t7cD9omBeZ0kiV6D53oxfNVqVeck2IxqtRrMTRYgRvmYaQR2d3e1T5ghtL8bDAaBhAWP\\n\",\n       \"a1x75cqVieK6zp8/r7FKgMdmLC4u6ueQHhARjUFCXBTHE3KMJtqA2Vl7AKVarepcYvFkj1njjBoi\\n\",\n       \"w3eDtyZYrwYD7bixsTERGzIOvCaMi6vyZA3sOlwoFCZiytrttpsHz1vXUee8Nddjn7IOdeXdJ+9Q\\n\",\n       \"FLOotux5eMdspLzK4bNut6sULCYkBr5Imp61SWu9gHSRUQA4Tz5MMNZispuSjY0NTWsCNeNCoeC6\\n\",\n       \"2OBOwaL9/ve/Xx5++OHgd7wZwf3ygnnx+/X19cA9g+sPirxNmJeUmMvJ3/NCAmBA8iT0NlJ2geLT\\n\",\n       \"WnARcEoX70WLPvz0pz8tf/7nf556fqfTCXSzvA1N1ibLngz13NF8DZCXeNnC2/TjBYTPWq2W/g4v\\n\",\n       \"tkqlom0+Lq0MgJd1u90ONi/ctnmL06SnfJiu58B3uxH0xj+r2e8HVr2YgfXh5ptvVrcmymiDU3Ev\\n\",\n       \"fM7GGhSy4V5iYDwvLy8Hm6WrV69qO3BQN9YvPIu179And955Z5DguN1u6+ljjO2rV6/q6S/eNFn0\\n\",\n       \"ej01DjkAHhsVPLdQKOSevGSDxY7jtbU1XRvyQgF2dnZ048gbL3u/QqEgq6urIjJaj7l+eevZ8vJy\\n\",\n       \"4IoTGbW5NwcYOMnH18F4wLq8tbUVbOA4VVjW+wnAGHszmQ4Y4+bpJPOLTwbzOGKVc5HhWuRls/AO\\n\",\n       \"0vA1IsNxx3pkIul+8PT/eL3N63dvjWZj6yDvz+jai4iIiIiIiIg4IN42RmqSI5QcaG2tSnYbYLde\\n\",\n       \"qVRcK9wyCEeOHFGLD1bewsKCS9Xane2ZM2cCK5Atas5BZHf3HkVYq9UCK9XbvZfLZd09w9Kcnp52\\n\",\n       \"LctxAbnWEuSdPH9n6z7O7ceyEGAJmZHiQEILWHD1ej3IPZilI2I/L5VKyszAOoW2jC2/dS/wWPTY\\n\",\n       \"oDymicF9Z8e3p0uVBe93KCvaqlQqaV9zO8PyRX37/b66rdH27PLifuOcWPid119gFdhqRH3ZqvSU\\n\",\n       \"jTHeJ3UjA4VCQYPDwR5NQrvnaeOgfGtra8pm456eO1QkncdPZKiUjmu5rFYWpNVqKcvCTDKHDaAs\\n\",\n       \"tnzePL/llluCtYivAThnJLMsHiPEechEhmw/2BpcOzU1ldt3PAesC50VsDFO+v1+cOiE3W5YS9gL\\n\",\n       \"gTF09epVl9WxOl0nT57UduXcd3aera6uButPu90O3g0sBYMxyfME5WP3K8ZIlhq7B3aXvhUHWZht\\n\",\n       \"sVpL/X7f1WSy7j7+nTcOPD1Ehsf02Wv4WtS7XC7r78ble7RMU6lUCtYBdveiHvtdk4DISEVERERE\\n\",\n       \"REREHBBve4wUx0mIpEX3OC7KWsWeomm5XE4pgYsMd7awHCDCx1YcYho2NjaC3ejdd98tTz31VOp+\\n\",\n       \"ngVYr9eDGKnFxcUgQHFjY0OZEFgpu7u7uUfD8dyrV69qWbMs5Ulhgyn5WLbHhHAguydW6AmdeYAV\\n\",\n       
\"4QV6428OzOTgdDAQiNfwghbn5+e1vaCQ/s///M/6PVvgeaySF1/BZbbMRrlc1rrlBZtnHaTwjtmi\\n\",\n       \"bt6BgHHK4BhbKHOj0dA+53GK8cQHDcAIwCLMsoLzBDZxv83NzYDd88ZGpVIJ2CyOh2JmDWM/Kyej\\n\",\n       \"RaPRCAJjPQHVtbU1vQ9ijGq1mltPMCMIMD979qzm2uSAcDwDbbm1taWxShAH/ru/+ztto7yYNo91\\n\",\n       \"yWL0PfaKYwFxLX4HBpOZf7BBu7u7QZ8Vi0WdjwyP4fKYaawXmDONRkPbEvO7Xq9ru3ntYuNURUbS\\n\",\n       \"MpyD9Fd+5VdEZNh+lqXi36HPX3nlFWVbOY7Sy6wAeQSwVTwObS5XkXwmqlKp6PWQ0mk2m7oGbmxs\\n\",\n       \"TMRmj/P0ePfg4Gr0F8csop+sEroFxjGkZzqdjvYrr2fWM9Hv911BTnsAhT9jNtPuA3hvMI5p8uqC\\n\",\n       \"NRzzfJL4tLdtI4XO5o0CgM70Tgbwi8pqgGxvb+tpDbxMtre3Nejb2wQheJlfGKdOnRKREdXOf3vu\\n\",\n       \"P+5IvJymp6cDHZoPf/jD8sUvflFEfHcaA53IulSTbKCKxWJK/d2DDdTzkrPyhORTOJgk2KhMTU3p\\n\",\n       \"5OSgedY4ArAwMi1rky1PTU1p3dHmFy5cUL0VLLSe9tDGxoaWD+CDA/xSz1ts8DtuA3Z12Wur1arW\\n\",\n       \"Ke+EGW8YvOTK3iKH3588eTJIsM0nPvFymp+f1xcE7u3No1tvvVVfBAxcgz6YmZnRlz4OTTz11FPa\\n\",\n       \"1/ZEl20DW6dSqaSfeTS9F1DKKUc85PXlysrKRBupJEn0hY0x9tBDD6lGEMMaTTs7Oxpsjrq0220N\\n\",\n       \"gmaFbtTp9OnTIjJsS8xT7g+rL9RsNoMDHP/4j//o1tmeqN3b23P11NB36Id6vR5c6+nicdJdIMtt\\n\",\n       \"7a1tVul7bW0tODSzvb3tbpawyUC7zM3N6ZrPqb8AzJWshMwoC05TX7lyRdvXrjkioicJT58+nUpn\\n\",\n       \"JTLcEGIM8Sk0L9uF7cvZ2Vld3/GOKRaLWn7PTeZtmt5MppJxyuHeZsQ7KOGlP+K115vH3meoCxMN\\n\",\n       \"3qEVrx6AZzzz+8zqV/FBgP1o1kXXXkRERERERETEAfG2uva8Y4bT09Pu7tsGxu7s7ASMy+rqqu6G\\n\",\n       \"J01q+uEPf1hERB599FH97KMf/aiIiPzJn/yJfublXANztbW1FSj4eqriTz/9dGDxszsFu/K5ublc\\n\",\n       \"1wnqPTU1FVjbS0tLustmKjkviNxzBdTrdTdJMixCWGOzs7NBAK29xpabd/q4H5ijBx54QK1Hvgcs\\n\",\n       \"OHbxgWkEQ7C3txe4AXgcsEVlxxhbd5yjDshTQ/dclF69WS2etYNs3j+RdNCtSNrKgxtCZMT8gZlq\\n\",\n       \"NpsBW8RHxDEmmP2ARX/hwgUtHzMDf/M3f5OqT71eD47nc5+iTEmS6Odov/0Ec1qLMCv34Z133hl8\\n\",\n       \"xgH5Njemt77U63VtQzDXp0+fVnaXmWjbx1NTU9quzPyAUWFGygbLHzt2zJ0rP/VTPyUiIu9973tF\\n\",\n       \"RORrX/uajhm0edbRecwBsOOcFNhzUfM8B3ti8+sxVlZWgnl27NixlJSMyHCttusTtzMwGAx0fKIt\\n\",\n       
\"XnjhhRQTJTLMr4n7YRwxG+wpV3sSFox7771XREYZLjjXYB6z8vjjj+tn6EuPkavX64EW2ezsbCoY\\n\",\n       \"XSTtbUB/eAeRfhxg7wHa0gvcZqbezmeeH5h7rVYrYJ+zgGv48IqVSWD3J2ezwGcYu8ViMQho58M1\\n\",\n       \"vO/IW5fw/ElcqrmMVJIktSRJvpskyZNJkjyXJMn/+sbnC0mSfD1JkheSJHk4SZI5uubfJUnywyRJ\\n\",\n       \"ziZJ8vGxJYiIiIiIiIiIeJcil5EaDAa7SZI8OBgMdpIkKYnIPyZJ8oCI/Bci8vXBYPC/J0nyP4rI\\n\",\n       \"74rI7yZJclpEfkNETovIcRH5RpIkdwwGA3dLV6/XA6uq1WoFfl4WOmOLFFbMuXPnRGS4m/biV7xg\\n\",\n       \"RVgRzER9+tOfFpERE8Xqv2xZIb4BljBnegfm5+d1l37XXXeJyNBPj90wLL7d3d3giHOz2Qys3hMn\\n\",\n       \"TqjFByvGq1ev13MtVQ4etJabx6JwoG2er5jjB5jZ8srgxUtYq3Nubs4NfAdLwCwRYhjYOsRzf/Zn\\n\",\n       \"f1ZExM1mX61Wg5gRL0A2S+QUFhLn57Lf8X04Nx7+xjg+fPhwEJBp/7bgnJA2houfB/T7fWWgMHbe\\n\",\n       \"//73yxNPPCEiI1HaJEnc5yJWDcwfC+jh39nZ2SBout/va7kw3+bm5lLsmciwHcHGgXlcX18P2Lad\\n\",\n       \"nZ0gm/zW1pY8//zzQZlRFrZOgaNHj6Zi5/B7a/2/9NJL2l4cUGzXmH6/r+wF7jE1NaXtyrE+GMdo\\n\",\n       \"+0ajEeQyFBkxJD/4wQ/0MzCwXlwpA+MS6xTX1YsfQZzia6+9pu2KGCgvfvKHP/xhICJcq9VUzBPj\\n\",\n       \"eXZ2NmDmt7e3dUxw3AzKjLWc2xl98MorrwRjO0kSN7cjgvofeeQRERmOO3gQEEt1+fJlueWWW0RE\\n\",\n       \"VLSXgcNCvM4yuwhgLmflhgMrhnbudDp6TxaCxDsBbcHr9E033RR4KbLiobygfythIBLKvHAMkvcc\\n\",\n       \"zqHIrDOA78cdhkF/ssq/d0gM643H2LLoKzDuufuVNthPRoWxrr3BYICdS0VEiiKyJsON1Iff+Pz/\\n\",\n       \"EZFvyXAz9WkR+Y+DwaAjIq8kSXJORH5aRNxIPzsZ33ieLrpoyHK5HLz8Dx06FAws1v9gxWcbHMoB\\n\",\n       \"r8CxY8d0EgPtdjug9mu1mnzwgx8UkXRyTMCqHouMqNrp6WkdRPwyzKMOvaSWPGBAieIeWYurN4g4\\n\",\n       \"6a89OZg16FAOViAGxgU6om1Qfs9V881vflNpbh4f3skJm16hWCxqX3suIGBpaUlfcvbAwkHAiwlv\\n\",\n       \"uPg0lH0G2urKlSu6UeHTbgDmAo8TjMVerxe48crlsr5wOQifKW6R4eKFIHKMGd7U4kV+7tw5bVO0\\n\",\n       \"d7Va1Rekp1wPir3f7+vGAmXmccVzGn+jrVZWVnQDwOPKzvlCoeCuI9ikeYvh9PS0vlRxIMRzxa+v\\n\",\n       \"rwdrB58mxPxpNBrBIZRWq6UbPLRzo9GQJ598UkSGrjqR4clgz42PjS8HdWP8YE0aB5RveXnZDRq3\\n\",\n       \"Cb5fe+214GVUKBQC1+jOzk6w6Tx37lwqibPIsP0wZlg7DPeBi3p3dzdYe0+fPq0bTIwrb05vbm4G\\n\",\n       
\"oQmrq6vBRnlra0seeOABERF57rnnRGTo+vzLv/zL4J6YN3B5v/766+4hI6tPuLKyomscNlzXrl3T\\n\",\n       \"zRrKxP1tE2DzZ4xms5kbuA94AdRZQeR57x12a3lJlyd1OaJeKPve3p72I6d9w9ieVGPL0z7jtSUv\\n\",\n       \"BRhrTLHu47jf52FssHmSJIUkSZ4Ukcsi8shgMHhWRFYGgwEcv5dFZOWNv4+JCDvKL8iQmYqIiIiI\\n\",\n       \"iIiI+DeHSRipvojckyTJIRH5+yRJHjTfD5IkyaMi3O9OnDjh7rxFRrth7Do9S6RSqejulbWNsMNk\\n\",\n       \"VXSLQqEQuMVee+01ZbRw7cmTJwNL6aGHHtIcS8yAQaMKx6BhlYmMGJgkSQIJA89SYBVbfgaCR2Ft\\n\",\n       \"93o9173k7dABtqhZn8MG/mfRmjgC7x1NZgvHWmteeXZ2dgK3EbcHrl1YWHCPb9u+qVarOlb4aLI9\\n\",\n       \"asxB3Wx1YLx5cgGsE2XZK7aouA1gxfLBAnzP1DTqjvyPhw8fVpbIS9TK/cv5/vBZ1rxiXLx4UdkO\\n\",\n       \"3G92dlbnBRiRBx98UO8HF+ru7m4gTdDtdoMg/aWlJZ0HuN/m5qaygB4biP5/7bXX1LWC+b25uan1\\n\",\n       \"tUxcFjyGlvNlos1tQLDIkC20bFe32w3kRdg16VnF7OIHI4UxtrW1pesYjuC/9NJL8rd/+7epZ+A5\\n\",\n       \"IvlaZRwCgHvceeedLiMFgJVbXV3VscgB6FYlutfruWwGWApmcjB20E9zc3M6pjHWjh49qp/BJffq\\n\",\n       \"q69qGbz8hVg3dnd3dc4fO3ZM72vXs263K5///OdTn6GcjI9//OPyqU99SkREfud3fkdE0sHYfNjF\\n\",\n       \"5mS8fPmyPPTQQ/o9/rUyOCIi9913n4ik55RdVzhp9vr6ustE5o3/cXPDBnP3+33trzx3WrVadTN5\\n\",\n       \"eEr+HnPluRTBXLGepPf+8tyWNsefx6J512a94zw1/nGYWP5gMBhsiMj/JyIfEJHLSZIceaNwR0UE\\n\",\n       \"s/SiiLCQz4k3PguwsbGR636JiIiIiIiIiHi78bnPfS73+yRvt5UkyWER6Q4Gg/UkSaZE5O9F5PdE\\n\",\n       \"5BMicn0wGPxvSZL8rojMDQYDBJv/BxnGRR0XkW+IyO0D85AkSQZZRztrtVoQz8Hw4jS83Gg2lxqu\\n\",\n       \"EUlbwAhYX1tb06Ow3/rWt4LnInDzxo0bgR/31KlTwWfMJCBmZVJF8qmpqaANWD07DzaQ1ubTy2p3\\n\",\n       \"K8Fw6NAhlw0BIFvAli6zSx4jBQuI/eUIUIUVwAwC+nVpaUktMy8AlH/vWaLWwshiuCz4kAO3n2V8\\n\",\n       \"mMnBGNvd3c31rXOwtmd5of1wv0KhoGOZA1TRl7hfqVRKBYWiDTzkMZceYI3//+19a4xkx3Xeqe6e\\n\",\n       \"fs70PHZmX7O7XJJLLbWk6CUpiYQlRqEt0RIMWwoMWBEQQBCMOICTKH8CJArsKL+cwIAC/csfOYDk\\n\",\n       \"JJSlBJYtBNaDBiWFMrQUJVJ8L2UuH7s7uzszO+/pme7p7psfvd/p71advtM7ojmmXR9AzLL79r1V\\n\",\n       \"p07VrfPVeWxubmY6Ylr3Rdbpubm5wMLjGonom++bNAyszMdsKQPValXbCD322U2Rnr5Crrxm+FUH\\n\",\n       
\"dnZ2UoEieC4Af3LSvGAAACAASURBVKx7771XHn300dQz7rzzTl0fINNz584FzyoUCtpWMCmDkkz6\\n\",\n       \"YP3MQrlc1nGA/JaWlrQv1jrG6wsHHoj0GGd/zTp58qQydNYYg31aXl4OMmAXCgVld62UB7v5O2Ic\\n\",\n       \"8Pcb3/hGcM2HPvQhrYbAbcd7B395vb/rrrtEpH8aIRIyTj5wH2ZWfXmMjo6m/KnQd3zGMud3oMUW\\n\",\n       \"+gzSzThS++DnAoPYLzCMYIDZHw5Eym7pBbKc5znJMVdj8DOlD2LYIH+LzfKTJd84tTAX9d2O9o6I\\n\",\n       \"yJedcznpsVd/kiTJXznnnhaRrznnfkdEXheR377R0Redc18TkRdFpC0iv+dvogDkifA7WCgUMh3Z\\n\",\n       \"0GF2DmXq2crmDGHxBMcEhFNgtVo1N1CYEHjxWs5w8/PzuunAxoBhHRWhzZwNF+3joz381tpE1Wo1\\n\",\n       \"VSjcgzdRFhVcKpVUvuxEinGwojH836PPPjAhuJguw1JmLB5wyFxYWEg5OvN9RdIbKX8D3e129Xte\\n\",\n       \"lPxJvrS0FCy6rDtAsVhUWfmT1eq3iD1OVsQMt9l35uUSB5y53HfW73Q6gQxarZa58OAzjorBb3/l\\n\",\n       \"V35Fn4+SSNC/S5cu6b15Y+2XT7AiDvm41M+EzfdoNBqBw7WVAV0kjCAcBC6w6m+kG42GyoM3GBgH\\n\",\n       \"Puq0jAm0B2vQwYMH9bc4yuT+YgO0tbUV6PbLL78sn/nMZ0TEPobEs06cOKEbmZ///OeZffextrYW\\n\",\n       \"ROhdvXo1eIHxMR5nO+doTZHBmeEhZ8iMj8SwLs7Pz6cilkXSGx/oQbfbTZVHwWfYgFj6hPuMj4/r\\n\",\n       \"b7Bpunz5sh6x8RzGnMLfy5cvm3M4a5PG0ZlwddjNadrXbUufNzY2UoSAFZU2bJ6m3Y6zBsE5F5SG\\n\",\n       \"abVaqbIyIj35YQ5AVhxBbuk2roNOikiqQDZXscCzfOOQN9IwAtrtdrBOc1AKl6vKCjLyC3hnYbf0\\n\",\n       \"B8+JyH3G50si8uEBv/lDEfnDXZ8cEREREREREfEOx75mNrcyV7NlytY2O5yKpOk77D6LxWIqmzPu\\n\",\n       \"i10n74r90Mbt7W3T6gANDWfZ6elppbbZORSA1bi0tBSEg1vh73z0hN04HykxYI2DpWo0GsExCecl\\n\",\n       \"4pBdqw2Q+fLyst4H1w3jsCzSsyr8+nHVanWoowSRPkvADCNkyPnB/OtFQouh3W6bDoJZjsBZ4GKa\\n\",\n       \"0FUrj5B1fCgS6vTIyEiQrmJtbS11dOG3E7DCn7vdbnDMxOHgbEmCRreONOGUbGFqakpZFKS+uHbt\\n\",\n       \"mhkmj/kD+bCugSnM5/OqlxyEgSMA6DZblOzgC8sdljoX2mXw2mIFNFisnX+cXigUNLM4nMRFJLBs\\n\",\n       \"FxcXdS3A/GHGBGH87XY7YCo4y3VWNvHt7W2dF3ykZ+kdwGwG+glZOucCVoedyDFenK0ccrR0KEkS\\n\",\n       \"fQaO53hOMysLNhtrej6fN6sJYLz4SJHzW4n0xshfq3Z2dtTxHAzRgQMHVLdwCnHPPffIs88+KyL9\\n\",\n       \"9W5paUnuu+8+fZ5Ib/wgS8s9g+vlAZDbbqknLMYT+nXw4MGU/K36pZYeW9dYcwQytLLd8zqa5XiO\\n\",\n       
\"65rNpt4H485sMFgndtLHWFu5uQb11WJCrdMC6BOe32w2bzpL/M3knYq19iIiIiIiIiIi9oh9Y6R8\\n\",\n       \"Rzpr14tdNCfkZH8S3y+h0WikUiGI9CxW7HxxHYeNw9LodDpBFOEv//IvaxZcKykcmKiRkRG1YthR\\n\",\n       \"FL4RVjI37Oh3dnZ0Vw+rfHNz07QwsaOGXNhxHLIY5NAOa4mtI8sPirP0Wgydz2w1m80gozGHZ1ss\\n\",\n       \"C1ffhjUBGY2OjgZV3Nl5nS16q05SluXF13GtJpG0jxS3z9dL9pviKvZ+Fm6RMHkkn/sDtVotlawQ\\n\",\n       \"7c2qhs5hyH7CS5G+lchMHsYfn7F1hvFbWVkJ5Le0tCRPPvlkIAMre7rPDLAvjeWLxPUEwRZDjrVa\\n\",\n       \"TX8LmY2OjgZZ0X34PkgifXmApbh48WKm3wMnPsWc9NcVtFGkp59oD9IpjI2NBQzIlStXggzOjUZD\\n\",\n       \"1xEkxrScw51zmg2dgftY896SUVYWbq7kYNVBhb6PjY2ZCUzRN4uRwvhWKhUzrQ3G3ZIz+sZzFNfl\\n\",\n       \"crlA7zhhKGqpWhUOeC4ys+PLeXR0VE6cOCEiNnvCrDqSjGKdmp+fV7mhj+yjC9RqtcAxutlsqo5t\\n\",\n       \"bm4G/jyDfIx9Vt5y0mY/TGbErd/47BAcr/m33W7XjMLHM8CsFYtFHTt8V61WdX1Cup9ms5kKGOG/\\n\",\n       \"DE7dg3cNr9vDpgXiIBXfp3WY05l920ihsX72Z5FwMu3s7KSiOfx7ZG3I2PkWi87ly5dTJVP4WSJ9\\n\",\n       \"Kvmpp57SdmEicTkH3O/QoUOpDRTAznk+OM+Vv2FhhUQ7x8fHA0p9e3s7taCI9AafnX0BnhiQOR+t\\n\",\n       \"WHm30D/evPgL7Pj4eKBofA2X6sDE4cy7/oax1WrpePLRGG9asmA5GQ5D0VqFLK3jP74/t8kvoC3S\\n\",\n       \"33TyYuNT8IPKGnABaDwL94G8rbYMip7JOmqFXk1OTuoRFeexwiIHfVpaWgqihfhYCG0YJHe/MDKD\\n\",\n       \"N+g+sgp5A77cRPpHRHgubxgA3vgA+XxegzesgALOMYVNA0db4nl4Ph9/wMiamZlRncDmz5Jbo9Ew\\n\",\n       \"Ny+cEXsYwJBiOXGhYoy7dXyHY7V6vW7qO8DrhV8ZYmtrS59n6S/rhB8lyCWRWJ/9zQsDmyJLn958\\n\",\n       \"803deKN9lowfeuihVKmeQWg0GvqewgZ8YWFB1zH00dpwbG9vB1ngFxYWdO7VarVgvgwytvwcb9bc\\n\",\n       \"tNYJS+9yuZwpO3+zwZF8WCcKhUKQObzVagVr+Pr6ujn3shzpef3hQBsAbeAM7X5xeG6fVX3iLc1s\\n\",\n       \"HhERERERERERYWPfGClYabDM2KnO37EyG2OFkDM4zFakt+vEjpKdK/3wcpH+7hW7Yw7BtQqLwkKw\\n\",\n       \"MtdOT0+rZYFn8ZEI/nJWbO6bb41xO2G58O6ZrRX0Y1D9JlgssPAHOSpa1q5vFVUqlSB0l9uFfnJG\\n\",\n       \"Y4bvLMljD/k550x61WcVrb7kcrngMz4Stahsv23+My2HdusoxGfHisVi6lhWpGedWmHo6NNuWXV9\\n\",\n       \"ZqBarQaZqDc3NwMLMpfLBSzf8vKy/OQnPwn6y+Mgsre6hJiXa2trQ/8erAkfzVppEnj84Txsgetl\\n\",\n       
\"+qjX64FV3Gq11EkWz+W2Q/aHDx9OHeWgregzB81g7uFZMzMzqaAPkZ6e+KyFxcZxmpRhc4JZKSAs\\n\",\n       \"BhvrnnVkY7EZR48e1bWB1wOr7pvl9OuH04uEx5XlctlkVjGeYL9WV1cz0wqwrHA/iyXCUfDPfvaz\\n\",\n       \"VIWJQahWq/qe4uMmvw3sIM2Mrn8dZxC3+s0nCRwosVtGc8A/cuQ8bFgTBumTVX9vWOdsPzM/r3+c\\n\",\n       \"OiFrnWBd9ddPfufz+8Rnx3ZjzIdJe6C/GfrKiIiIiIiIiIiIFPY1/cHU1FSm8x4sjdXV1ZQPkEiP\\n\",\n       \"CfEZg1tuuUVDtAHLp4adOdmZD+G9sP7a7bbpsGllh/X9nJIkCVgUqy3tdlvP+DkZnW+NVatVvR+u\\n\",\n       \"4wRwXGUbVi9bOPxsy6HUZz6cc+YZfJaPEjud+35ufs0ykbRVY2Wi383CgWWTlVTTQrVaDaxiZgbR\\n\",\n       \"VsuHp1qtahvZJ8CyXnz/kFarFbCB+D0jSRK1mtlvAuMBOa+vrwc6xvfi8RvWSvVhWbgjIyM6vlxj\\n\",\n       \"EM+zKsezzw365NdhE0n7R1ry91NKMPMjMlxdLMu/p9PpBHX3ms2mjjESOzJjBx06ePCgMhqcLBXy\\n\",\n       \"gE9is9nUNQbXXbx4UfXEZ6sZ4+PjQRg96xzSGuxWPYF90NAuyJzvj/nYbrcDtoYTAQNzc3Op+qIi\\n\",\n       \"PT1AP5mRQn/ZJ9R/BqcNwG+np6cDXdzc3NR3iBUybwHj3G639RmsN9DfLBbKcvTO5/MqN/6tny2e\\n\",\n       \"ZYr3T71e1xMTngNZrEySJIGuWHUY2UeKx4HZySz4Y81BQr5flEg6SSjGBH3nNRryYzn6AVWDwKc8\\n\",\n       \"kDnuMyjNgX9PdixnOd8MEwXs60aKX7gYiEajYeaUgtMdjofa7XagyHw8yA6NcACE0q2tranC8aYN\\n\",\n       \"ixAXxPQ3SOzcxm3zF4xut5s5CdBvLvZq0fdYdKw8S1wwGIvDG2+8YSoSTzjrGM2XZalUCl7SfqQl\\n\",\n       \"rgOy+rubkzhnevczJFvgMjrcFiwKTM/6L9esCFERO0IT+mKNUZIkpsz9o6Tt7W2VA+uV9fJHG3GP\\n\",\n       \"SqWiG0t+4fkRkzeTuRhy5ogVtIVLLPgLLWfC9x3q/X/je7zor169muk0Dp2bnJzUucnHGpyTC8/i\\n\",\n       \"TZqV+XzYYqXQXx4vzDvMOQbGfGVlRZ2L0WYuo8PRbljbsF40Gg39DcbaerENmj9oA9YTC7tFgeJF\\n\",\n       \"xGsMnOyTJAnKQS0uLpqFhPEMqzwUwMYb+rmbAYQ18MKFC5n3HvbIGMewnU4nCKSo1+tamD0L1vqz\\n\",\n       \"vr5ujgN00gomglHGBiR0id1JpqamhnJ65jZxFJuf94kd94F8Pq/rEjtwWwatX3ImSRJzDLOCRyxk\\n\",\n       \"HatZAUHsvI4+Tk9Pa5vZfcU/yt7tneSvj5nt3vWKiIiIiIiIiIgIE/vKSOXzebXQEHZ76NAh00HQ\\n\",\n       \"3xXv7OwENfSY1sZx2fz8fBDSWi6XdUdr5dNhC9ffSfNOGVbx1NSUmffFLx6bz+eDdAvlcjlVEBdt\\n\",\n       \"gmUAWbCjOu/MYRnykSYsDes4jfvEx2m+pddut4cq7DsoxwY+5wzUgGVZwHIolUqmVec7VSZJEowN\\n\",\n       
\"U9B+wVOGZR1xNnm2rPy0ApaDeS6Xy2SCLLYKbTh27Fhm1nGAw8aZwcRv2HrC95AjF7yGnIvFYpCb\\n\",\n       \"rdPppJzgRXZnuLhvFluA78ECfPCDH9Q2wDHcsj6t3GsMZj8s3bKyekMe1Wo10IHl5WX9nvsE2Vhj\\n\",\n       \"yEe3WIPgnH7p0qWUIzGuR1+5SPhuR8kig+eZzwxa+K3f+i356le/GnyOPmF95DlguTTwHPTHzAom\\n\",\n       \"sbJ6VyqVwHUjSZJUjiWRdE0+Xtcx1jgSK5fLZsoCf70Q6csc86NUKulneF+g1iSDTyF2c0S2WJn3\\n\",\n       \"v//9IiLy/e9/P3VP/ttut9W5HUzU5OSkzoOlpSUz8z36wvpk5VDywTLnzODWfMe9rVp7e3UZ4Lbz\\n\",\n       \"6UcW+9XpdPQ3vuM492O3422Ag3+4xqwViLYbIiMVEREREREREbFH7BsjlcvlZHp6OrBi4OjJuP/+\\n\",\n       \"+9XJk5OvYXfKO1BYcJYTe1Y9vyRJhmJgRPpn3vBLeuWVV/Q7WLXMqsGSsEJY2W/GqkoOy7nVaulz\\n\",\n       \"2crDPZmNYUc8C9i5z87OikgvfYPFSAFs7fifcVvAjm1ubgbh2Nw3zljvJ13b2toamHaA28CV6mFN\\n\",\n       \"MDPB12elWADY14urjaNd+KxararVDIuFgxestmP8p6am1MKEPNivjwH5giVdWFgIWJHJyUnVZYw5\\n\",\n       \"+4ll1ZZqt9tBMrp8Ph+wweVyWb8ftsI82Jlut6u/QZueeOIJvZ6d0v1+O+cC1ujUqVPKXKOd/lhC\\n\",\n       \"vnfffbeI9OY61gL2c+P6fSK9ecb1wAA/LJ/ZCQ4fx/hj3lYqFWVccA/2VYI8ODOzBTyrUCiYaTL8\\n\",\n       \"6g4Wrl27Jh//+MdFROTP//zP9XO/vman0wmscA5oYfhjMyhBJ+6NdZvX5TNnzoiIyPnz5/V+YJqc\\n\",\n       \"c4H+VioVlS/W6s3NTZ1zzAD6zEq5XA7W4W63q2PN7x32c+W/3J9Bfn6QHypd/PSnP5VvfvObItJ/\\n\",\n       \"/0xPT+t1YCELhUKwXi8vL+safeXKlYAl5NpzFpOD/nKgEvfdH+t8Pm8mPt7NlwhghhHP8NPL8GdW\\n\",\n       \"2zn9gcV6ZbFs0A1OG8HvVp9VbLfbqjuDEiOLDOcjtW8bqW63a26aJiYmtFNI7//YY4/p95y9lhco\\n\",\n       \"kd5gWY6afjSOSHhkc+zYsYEvNR9YNLCB4qMoP4OsiJ1jCKhUKsFm8s477wxyUxUKhWAiHT9+XCM9\\n\",\n       \"rMlubXxarVaqwK1IT37WcRdvjPBbwCoKy/3cbcER6cnIP07lzQHDipDxj075mmGUX6S/UWG9wmTe\\n\",\n       \"2NgIHDKPHDkSvFh4E2s913IoBYrFor7U8Xd5eVk3DGjXxMSEvli4LAxHnYqk8wNx2/EZxso5F2ya\\n\",\n       \"ssoScT/5+JC/t6JwAKvc0Hve8x5tu1+gulqtqtMt2sllX/hYzZI5XthnzpzRf2Oz0el0NIM7lwPB\\n\",\n       \"dfwihV5CzsePH9djdBhwp0+fVuds9G99fV3XHSBJEp1z/BLL2vByDi3ML2uucqFl/2Xz+OOPy+c+\\n\",\n       \"9zkREXn66adFJO3IjLbMzs4GxuTS0lKw6cRzGCsrK0H08fz8vF5nGbZAvV7XPvELDTLnjP7QCW4n\\n\",\n       
\"NhuIlOM8cdjUr62tBQXIOfoM4NIpjGEjAn0j5uzZs1rwmqOtYYCiv2NjY4GeNhqNVCF7/2VfLBZT\\n\",\n       \"2fpF0sEhWetOpVLRdwZkPygHlZ/hO5fLBYW7t7e3b7ooMMAG625O6X6hen7vcdReVl5A3zhmsKE0\\n\",\n       \"TEFo/d2uV0RERERERERERJhww+y23vKHOpeIpMPpwQysrq4qJckW0Ic//GER6bNT4+PjugO2drG4\\n\",\n       \"R7PZDKhJdoyENXP58mUtGorwV6s4K34v0rc6RkZGBu76RfpWkcXAHTp0KAhd5yMAn3EQ6TM/J06c\\n\",\n       \"CCy9SqWSyiPlF7oVkSCcmS04foZPcXc6nSDfk3MulXMGf7OOz3Bdp9PR/Dxg4Lh2H2RgWYkWI8W6\\n\",\n       \"nBUmzXrH1h/3Hf32mcu77rpLXnjhhYF94xwp1pGj76xq1X0T6R+FYNxZv/xxERF597vfLSI9OfrO\\n\",\n       \"1UxrM6z8VT5yuVxQh9G6rlwu67hauVkAyyl5YmJC28d5mADO1O/PMz5q5RBn4MiRI8oW4eiuXq/L\\n\",\n       \"+fPnRcTOX2YBDFi9XldGCkc1uVxOv4deXb9+XdcTzhWEOZfF0IyMjATH1t1u1zx+uP3220WkL+tB\\n\",\n       \"rDru87u/+7siIvKlL30pONqZnZ1VBhRrxJUrV8w5AvD6ghQRfjoHxuHDh+Vd73qXiIj84Ac/EJEe\\n\",\n       \"y+enHMjn86l8TyJ22o9CoaCfszsE2myx25DF6Oio6qLFxHAKENybZeAfZY2MjATP+/SnPy1f+cpX\\n\",\n       \"UteJiMoAaz+7p0BvRkZGtKA19xnO9cOy7uVyWfUT/XjjjTeCIu2sd1ynz2c4rTWLs7VDLq1WKzON\\n\",\n       \"QVZVCX7/WAw37lGpVFKMGu7nF0YfdHTnz7NB6WiwtiRJYgo9MlIREREREREREXvEvqY/4N0qdubl\\n\",\n       \"cjmwWD/60Y/Kt771LRHp+3gMSnjpJ+RjBz5YpFeuXNHruP6eb3G1Wi2TQfDrlq2vr2f6pfghwD78\\n\",\n       \"Hf/U1JRaDmy5cmZcfOeHg25tbWWGbVYqlcwMycC73vUuZYnYSd93iBwbGwuc6EdHR5W9QEjv3Nyc\\n\",\n       \"WiqcZsD301ldXQ3SWvD9fYZIJG0RMos1CJz9F2NeKpVSMuS+oF0ig5OEwqJhywdtBbt0/fr1wLpi\\n\",\n       \"HQY7OjMzo4wJxpqtdst6fvHFF/XfYEA46zjAejoMG83zjC1J3zeDZca12zCGYHws+a2srKi++ykD\\n\",\n       \"RPoW6ejoaDDP/Dnr+2KBjRKx5yHax8w11onV1VW9P9gV6AO3td1u6+fox/Xr15U5wHhUq9WBTtmM\\n\",\n       \"yclJ7R/kOoh98NngQYA8Hn/8cRERefDBB/XfwMbGRsDyWnX/BoF9xkRsNnhtbU2ZKMBKIpnP55VV\\n\",\n       \"Yt84yNwKDOL6mX5gUa1W07kJ2TabTXMu+ckmB8kWz8vyXfvud7+rbDHPUcxlnpsYS2bneL0bJhiq\\n\",\n       \"UCjoOstJK3ke+MB1PA7sJ4R3JeRmBU2xb95uvlJZQSvsj8VBSSI9XYSMOYG3VUsV7xCrrRgv9m3l\\n\",\n       \"Nt+MbxSwrxspnqDHjh0TkTQ1jRcQNlEifdrz+eefT22MRGyHvKmpKaWr+Tv/BT4xMRHkI7GymE9P\\n\",\n       
\"T+vz+DurxAoX7BXpLcb+xmxxcTFYDK0jQI5wZCXHv/FbzgWDl4RIWE6FwXmGrMWD/+1PEquQKCso\\n\",\n       \"57fBosXPgiw57wtT6iJp2VrKbWWgxT2YcgZWV1dT0Z8iPbmA/uZNgf9b3njz860gA9yH78dRXSLp\\n\",\n       \"jQXubT3j4sWLehwA+fEmG99Z0T2DMAzFzrA2ZJyJHL9Hn5Ik0XmGZx09elSPjdCPF198MdgwlEql\\n\",\n       \"IMpqeXk588hWxNZv/3h8dXXVNKTw8sUL69ChQ0GxcitnUaPRUD0+deqUfo55g+cXCoXgqH2QPvsF\\n\",\n       \"zy3kcjm9d1YVAMbzzz8vImLmJBoZGdE1A47bpVIpZQwNAgfrQBa83vJ8Q98RTPS9731PbrnlFhHp\\n\",\n       \"58Nrt9u6geIqEH7ZHc6oDZ0YGxtLBY+I2EERlgO5cy4w1nK5XOZmMmtjMDc3p3r10Y9+VER6myvL\\n\",\n       \"2EW7EORz/fp1bQMfYQO1Wi0wGBuNRmaAD2SZz+e1DVbJM4DnMMOPSO12u4ELwF7gZ2Dnf1vtsyIN\\n\",\n       \"2fEd39VqtSCjuXW/SqUSrIfDFGOOR3sREREREREREXvEvjJSIn3WhKlHWJ2+VSHSt6j834j02CdY\\n\",\n       \"mLDU+B5WQUeE0+5Wfw3Wy+Li4lBFDZMkCZzS/XuK9Kxga8fvF7zd3t7WnbmVDwuOmYVCQWXATBHu\\n\",\n       \"x8cbYG3YcrRYB8uaG9QfkfQOHhY8Z+kFxsbGtO98JIfncv/8TMVWfa5B2dN9dLvd4KiEnRsZsAJZ\\n\",\n       \"5v7YsBO+laMM1tv09LQef4D1yOVyeu9bb71VRNLpAJiB4XxlIj3rCb/l7+CUytmCOeSb5cDI5/NB\\n\",\n       \"PSp8jn4CHG4v0mN5/PsxG4j7zs3NBczG9PS0Xgd9aTabJruEduG51Wo10+k2l8upFc45nMAc8dzz\\n\",\n       \"mQ3rKJBlwPMLLCBnrAdDw06znN5BZLATrN8Wi5mq1+uqixaLnQVrfuzs7GhKih/+8IfanzvuuENE\\n\",\n       \"shmpJElSgTsi6TnKlQsgI+QGnJqaChz42+22rjsY662trVTuLpHeHDx58qSI9BlaZmStfkLXeK5i\\n\",\n       \"nnU6nYAlLBQK5vwBOEUFvucgGugYn6xYQH9xOsPs52233aYpUQBOdWJV3ECf+AiTmTCficrlcgHT\\n\",\n       \"xHOQs+xbwUnoOwdIWEw94OcaFOnL16p+4ZwzA198BozTs6Bv1hEfp55BP6138TCIjFRERERERERE\\n\",\n       \"xB6xb4wU6jDBwsDuOZ/Pq6XAPhw4N+YwUd8Zme/Hlh52sbzD91MdWNXM2aLKYqGsbMe8A89KMsY7\\n\",\n       \"YIQPVyqVwMI8ceKEOiuyw6pfC2zQjho7c2ZFgLGxscCvZnl5WZkNDhH2rf+skGhuD/trAdvb24HP\\n\",\n       \"C59lMyz2xA/f5VpMkDnXNwSmpqYCNnNmZiYzczD0Y2NjQ//NDBsYQb6vn9Xb97fBNdBfrvNlsY7Q\\n\",\n       \"WXbchoUJvyPnnMlO+EEJhUIh8HMSCf0BrNQT7NRvZSWG7FkPsnx9WB/ZSsVceu973ysiPYv4xz/+\\n\",\n       \"sYj0fT1arVamFTk7O6u6xekoICN+nu+34pzT5+A7nsuw3tmHBkzjyZMndezwt9vtZjIbALO+lvM9\\n\",\n       
\"UK/XU87vNwOet8ywok/MAEO+Vi1S4PLly8rCANxmyODo0aPKbPF8y0pDwTUhITfMvWKxmClLzNX1\\n\",\n       \"9fVAhhw0wffwWXfLz5LB10OWrJNZbCF0cmZmRucBO5uDOX3++eeDQB8rYITZs93qZPp15trtdrCe\\n\",\n       \"M0uN79gvidks32ndYopZlrx2+OzUoBQEPvs0MjISsI5bW1tBG6rVasD+7uzs6HvTYqyAQRVCGPu2\\n\",\n       \"kfLzxVgRRvxvfxJUq1XtPNOy/nUHDhwI8hKNj4+rssLp1HIi7XQ6OkhYsJaXl1NUrsjgFwYWpawC\\n\",\n       \"rKyA2LDkcrkggogjPjjT8L333isiIufOndPvT5w4ISKSooL5pQSKGc+1NhHXr1/XY08eB39TWC6X\\n\",\n       \"9d64R7lc1hc76HZr89dsNnWDxUcZ1hGsv2niYzjLMR5/i8ViMEmnp6eDl4FF6fMmgieh/8I9fPiw\\n\",\n       \"eewBB1vc99q1a9o39Js3k7jv9va2LlrYXC8tLanOoj+nT5+Wn//85yKSPoZC1BSubzabgWNtVkkE\\n\",\n       \"hmVADMpcnmUw3H///SLSe6EiIhE4efKkjg2Ow8rlssqNdRvAizmfz+sL3sLa2poeSXAEYZZDvn90\\n\",\n       \"K9LfRLRaLX02FmGuRAA9OHPmTJALbGNjw9RfHyzHLAOuUqmo3LKOshlok0jf7YE369ioYoxPnjyp\\n\",\n       \"G4HdnG45L51ITz6+LDkrNtY451wwHrOzs3pEiOeOj48H85sL6DL4nZAF/yh72Fx0FiYnJ3UcYFBZ\\n\",\n       \"hZtFRDPrQ1YXLlxQp37OHQUXlWHbYblm8IbG0qesecvvQJaHL1dLN3jN57Jbvszb7Xbw+3K5rM+1\\n\",\n       \"jPWsXHWlUknXO+wRNjc3VZZ4r73xxhuZGyhgGLnHo72IiIiIiIiIiD1iXxmp8fFxk5b3LYGJiYkU\\n\",\n       \"lSuS3mVbdLmfBVakT2EePnxYn2s9n8NPsfPFddVqNZWvRCTtaIffrqysKCOUZRV1u10tcIn+vvTS\\n\",\n       \"S/o9HxV97GMfExGRv/zLv9TPfAfkWq2mz2UWApYIMiEzWJbo78TERODcyLlCcO+dnZ2gf/V6XVkz\\n\",\n       \"bpefwV2kbzGALdjc3Mysa5XlPGqh1WoFR8AWVbu8vBwUfi0Wi8oSscXspycoFApmRmk/WIJz2bBe\\n\",\n       \"4jP0m1lKPvZivRTpFXsFw4BxWVxcVMaHHUetOl34HnJhtsA6IgeszPvVajVwLN7Z2dGxgWOxiARp\\n\",\n       \"SziNA8ag1WoNFXbs5yLzHZ6TJEk5OoukjxestCu4jvUdunv48GFlCdHul19+WfuEZ/G4gYXc2trS\\n\",\n       \"/mXl2hn2yGtmZkbHcNgs19BJ1nWMJRffBhYXF81KCFb6BMgI8jt06FAQin/58uVgvs7Ozgbz58qV\\n\",\n       \"K6rvGAMegPCbHgAAIABJREFU50ceeURERL7zne/obzEGCwsLwbgNgl+v9ciRI3oEz0yEz2rzmo+/\\n\",\n       \"o6OjwXtnfn5e74219+WXX1b5c/CUz65MT0+njvt9FItFnX9oa6vV0rmLtg5iNdEGdofAv9FmPhLN\\n\",\n       \"Oiq0UgXx9XwU6F9nyXK3XFRZ71QrUCWXy+lahr/OOZUBH4da7jm7ITJSERERERERERF7xL6mP2B2\\n\",\n       
\"BztIK/EY/z92kCsrK6kaXCK93bPlEOnXJkOoLX/X6XTMhGJc/06kt8v3d9Tlcll38rBEqtWqMiFg\\n\",\n       \"06zz3DvvvFMtILZi/fpWk5OTWmcQuOWWW4KwYeec7rhhObM8OEsw+s7sCHbhFhvAliSzRpaDInwj\\n\",\n       \"/GcxuMYf10vKOpNm6z7rfgyfxWIrmcfft0BqtVrgX8f1F/36eoPAzukYV+hpo9HQvu/mt4RxggPq\\n\",\n       \"pUuXAtaL2aKsbL2tVitIwsqOqtZv/cz63OZhfa5E+kwU7lcqlZRxsfwh2OfGh3POrMsHrK2tBb5K\\n\",\n       \"o6Oj+jwr0zjmYbvd1vtxOhX2MwKgU7wOoH+Q+eTkZMAWLC0tZVrAYH5GRkYCdmVjY0MZpN0ypqPN\\n\",\n       \"YJUsOVv32NjYUD2Hzi4sLKTWFpF0EAtkxU7W+O3a2pr2CeuixQY2m83Ab5Wz+3/ve98TkbS+Azxu\\n\",\n       \"w6SqEZHAj45hnZxwAkoOvPAZpEqlot/Df845l2Ki8AywqFg7eR20fBDb7Xam/y0DY4K57pzTtQ26\\n\",\n       \"4JwzmRx/7eYky1gbms1msM5ystSsDPL5fN50IvfHlZ/LTu4ICIHv09jYmPYNurW5uanP48zvWFOg\\n\",\n       \"J1ZwzTDY142UNXA8wUHVbm5uyj333CMi/YVgZWVFhYkFYJBjH7LmwonPohx5I2XR6Fz2AuAs5VjQ\\n\",\n       \"8P2JEyfUATULBw8e1M0QHN8tCrVWq+mkwXVvvPGGWZwTi5FfCFREUm1CW3lTakV8YHFxzgV5n6DE\\n\",\n       \"jNXV1WBcBx3hQpF5IvkvTI7Q4ygqv83WBmxkZCTYdPFREo9/FnWNiVYoFHTc0c6trS2zuDTkyy8E\\n\",\n       \"6Lf1IuPNDv6N9o2OjupYY/NULpd1k8aOuf5xlRV1xKUkeAwgU14UuaSGSDoDNmSbJEnKyMFneIHj\\n\",\n       \"78LCguoxZGDl/yqXy/o92lev17UNVmSgiB0ZiWs5rxaATWmr1dIXHX8PnYAD/7lz58z8dtAPdqqG\\n\",\n       \"PPD8mZkZXavQp2KxqPMZGyVeyHHfiYmJIAKWC/ZmlYUS6esMR0n786VarZpHYdBfPNfKfTc6Ohrk\\n\",\n       \"SOPNFRui1ssfwQhYs6x1fHJyUnUCbRobG1M3Ap6/WUedWUXOl5aWguoT3FcOFsH3WHsPHDgQRAPz\\n\",\n       \"sVWWY/Pq6moQSc7GuPVbK3cU/5tLRQ3jVG25SFgbUV7j+MjzZkurZG2yeCyx5nO5Ks4TlZUrioF2\\n\",\n       \"WWs0wP31S8FlIR7tRURERERERETsEfvGSOVyOZmbm9OdJf/FDh9WydmzZ+Wpp54K7uEXKGZLkr+D\\n\",\n       \"tcbsg/9bkb6FbB0lYHfPzA92yp1ORy0a3Pfll18OcgFZzEC9XtcdObfFzw/Du3arzQxY/3yECXCf\\n\",\n       \"uDYVZ24XSR/FQX4zMzOBJcryQL23CxcuBNQ11wKzHG5hBeZyOTOPC5B1hNHtdgPn3Fwup1YMO2vj\\n\",\n       \"nmyV+9bX6upqwHrNzMykss2jb8yeAhhr6ES5XNa24GiEWQVY8hzmjfutrKwMLPjLSJJEmSgcpzSb\\n\",\n       \"Tb0P11L087UxJQ5ZDEp/ABlAjpOTkypzDrsGE7JbhQGMK889yBSy9/Of4RmsE1Z78Rmcpq1UFe95\\n\",\n       
\"z3vMzzGvIT/LIZu/BzY2NnTcwTAcPnxY2RP8PXz4sMrLYoNw32KxqGsCh3GDmclKf8D1N7l2G9qA\\n\",\n       \"8Thw4IAZNOG3xUqX8vrrr+t6yPMC4GAhP7VLrVZTlwNeX/wjz2effTYIFHHOBTVIR0ZGVKZc4y+L\\n\",\n       \"NUab/AAbwM+RxiwzAhysnF6NRkNPU5577rlUO0XSuRAhUxxRrayspK7loy2RtCx5zeQM/gDWAmSB\\n\",\n       \"r1arOv9fe+01EUnn1cIYbm9vp+rz4TM/DQUzbwCnOrDqpgJ8fMjvcD42FOnJzaoFi/nPa5af3b3b\\n\",\n       \"7Zq1L/15wXkHs9oc9GHXKyIiIiIiIiIiIkzsGyPF9XNE+hba1atXU+yESC90GtYLdvC1Wk1++tOf\\n\",\n       \"pu7Ju3JYNidPnkz5xIj0dvb4nn1u/CzGpVJJrUrrzB5n2m+++abu4GHRTE1NaZ+w85+amlJLBqkM\\n\",\n       \"nnnmGdO3h2uJifQcdP3klWxZwYpZWFgwmSg/1JVx4MCBgGniMFDIldkndloGYPW88sorQVoDDsGH\\n\",\n       \"dceMFNdug8XAPi3QE26/5dPEgQciaUsdNcOuX79u1jrzE2Pu7OwEqRiazabJAlqWLPQDfhMsY4td\\n\",\n       \"go5zxnI8/8CBAyprK40D5HLixAl9HrfJZ9YYmIuDklTiOXhGqVRKJQ8VGRxmDoYQcubs3hx67OvY\\n\",\n       \"1NRUJhM1LIrFosrdcogH033o0CGTzYReIvHpgQMHVBezMqqzVQxcuHBBM+Dju2PHjul8tXxFmOWB\\n\",\n       \"pQwZdTod1UVLnwAr0GNzczNgsdbX1zVE/9VXXw1+w/Xa/LZa1RHa7Xbg9yPSXxMgbz+Fi0hPx3yW\\n\",\n       \"/9KlS0GAxOTkZOCrxL63nNjWT8Ug0l83rfUdsIJYONwfMvvBD36g38On7vz58+qXarGlzG5iDLk2\\n\",\n       \"JCfk9NfuQe8NK1AI83O3dBAWLP8735nf0l3rM85OzwySJZthfbb8YJhut2sydYD1GXSs2WwOlXbF\\n\",\n       \"x746mx86dEgVlKl/CAZUaavV0pcD/oJOFbGP4vxCliJ9p3N+wbCSWCUi/MgRq1iuRaG22+1UhlqR\\n\",\n       \"NB380EMPiUgvJ5S/2PCLip22/SOF48eP63EF5+mxgIVna2sroKk3NzcDB9CdnZ1UlA6ej+M7Xvyw\\n\",\n       \"MD7zzDP6GRYDtHlqairIZt/tdgP6vtls6rGXdUy2WxQOdAC6w5ORjy38l+qgI0PICH28evVqcO2g\\n\",\n       \"rN6Wsz82S+jHwsJCkA1ZJL0xFkkfR1nHkpgPzz77rH6GxUGk31+03crkLtKnvbGYNJtNfQ6XvEGb\\n\",\n       \"sTFYXV018834L1de6K0NKcALPmT/0EMP6TEEMv0nSZIZJMCO8VZuJ6wd58+fl9/4jd8QEZFvfvOb\\n\",\n       \"2l+8JDGX5+fn5cEHHxQRkb/+67/W63xHW86hhbmwsbGh/+YSRrtF3AFY09D2CxcuaPuzNpvb29vB\\n\",\n       \"GsnHJHiBz8/Pa5mXM2fOiEgviMHfMFobSNYlPmrhIuMiPb3jNUEkHRX3kY98REREvvvd76qMrJxV\\n\",\n       \"Z8+eFRFJGdNWeRY+qkbfcV273TblBrn45bdE+kfyExMTehzJ5cUwDzl7v7U24Bk4eux0OhrlyO8h\\n\",\n       
\"zqhvHbtCNrzhhsytDRAbRfge9+BcSxivVqs1dHWDYYzd3crWcDutahZ+GRqe//xe9seQA3jQ33q9\\n\",\n       \"ruPPOoP347vf/W4R6bsEZCEe7UVERERERERE7BFu2FDFt/ShziV+ngjQ/Pl8XneRsFyYavvgBz8o\\n\",\n       \"IiJPPPGE1itC6PeRI0eC2m7tdlt3mNhls0WNHS6H8eN5g3bPsEqwu5+bm8ss6MnsAhgdWPn5fF6t\\n\",\n       \"OS54iqMEMATOOf3+fe97n4iIPP3006mCniK93TMzesiSzuHHkAMXRPWtDi5qy8dlsFg5jcJdd90l\\n\",\n       \"IiIvvPCCPgssAFi4Wq0WWKeDQrazQn6tTM9W2C0fUfjHWkzV82/9o12GVX8N/SgUCoHVmSSJsmKw\\n\",\n       \"OtfW1gY6szIOHDgQMKF33nmnsiJ8vOGP2y233KLtt44r0N9SqZSZ7oOvH2aNYIfmXwSQKdffs46q\\n\",\n       \"H374YRHp1Y6Evvz+7/9+oLPb29vKzPkO/CLpoJRf/dVfFZG+UzCzT08//bSI9NgF/IZzRllHp1gn\\n\",\n       \"sDa8+uqryl5w/i3r+BifQfZHjhzRccf8f+WVV3T8reMxgOsWZtVVY/is9SDgvuVyOZVrTSRdzBlw\\n\",\n       \"zuncxLrdbDYzmWbMo3a7nSriK5JmODHmU1NTQeAAX4c1uFqtplh0wK8gYJ1CiIi5Fv4i8N9Jg8B5\\n\",\n       \"ybKOdAF2Use6Nz09reMEJnwQQ431C/rObBv0vlQqqXytHHR4v3e73dSJhEhvXP0alOy8zgWIsUZj\\n\",\n       \"PS6Xy/qeyDqe/UUBRitJEjOvRmSkIiIiIiIiIiL2iH1jpPL5vBSLRXXI9hOZederlYC/58+flxMn\\n\",\n       \"TohI36JeXl42a2f90i/9koiIZp1lCwMMB9fkwmetViuwyGZnZ9UihZ+GldzutttuUwaBWZQ/+IM/\\n\",\n       \"EBGRL37xiyKSdhj92wDGF8zWoGf5vkqcYf69732viPQccy3mzaoB6FvoxWIxqGvUaDTUYmErjLPv\\n\",\n       \"cpt8+BYch6ajkvrc3JwylxiP0dFR0/E0i5HyM81z+0RCay5JkiCjPt8H1tvExIR+xv588AFBmy12\\n\",\n       \"rlarKduJe0AnhwEsTPS7Wq2mslH74DH1AymY6YTsT58+rewJ5vfXv/51vQ4+CJy1m5OIQmfx3Gaz\\n\",\n       \"qXK22sl+EPiNc07nJmRVKpVMf5NPfvKTIiLyp3/6p/oZnIbhw7m9va1+YZz8E/Lwk3+K9NestbU1\\n\",\n       \"lQ1kz07d8Euam5sLWMIjR47ob2699VZ9Pp5jpW5goO+QD1vvrKe+HjMzAGaD+4bvmFnH33q9Hvhh\\n\",\n       \"jYyMKMPE7C7Wcsi0UCgEgQqtVkvlnMXEsM+iNW/x/Pvuuy/lII7rwahA3ry2cx1Y6IGli0gwyjVX\\n\",\n       \"Mc9GR0f13/CzGh8fV4YL92Pf4CRJ1FcHDO2gpKN+SiGeN1kYGxsL0hBY6ySn7MH4Wu/KmwEn3USb\\n\",\n       \"bxaY86dPn9bxRvuYgcf4ct84wSd0C33c3t7elZHaN2fzTqcjW1tbQY6nkZERHURQ9t1uV53C4DAu\\n\",\n       \"0i84+uSTT+pvoej47cjIiFKvfuFJkfSxBn6bddQxMjISvKysIq7saM4Lzxe+8AURyXa0FUmXVBAZ\\n\",\n       
\"HNngO5EOOmLhzyza3t+sjI6O6vOYGrYmFiY239cqTAml5s0V7g15WEUteYPEmyd/ceAoNnYe9J3+\\n\",\n       \"razoxWIxmLy8QeZFC7qFNrfbbTNHjTVe/rgPoqOtIwcfm5ubgdP8qVOntC1YhCuVio4NNgRra2s6\\n\",\n       \"D6Dv6+vr5vGCn+28Vqvpb1hv8G842V+8eDEoa1QsFvW5586d08+xkMJIuXr1amZU3G7HIP4xskjf\\n\",\n       \"cd85p07BaMu1a9fMscCLnecPFz0V6b3keJ7iuZAHH9NyWQwfHFmL3+Ll2el09MWNsVxdXTWdpSFD\\n\",\n       \"jNvi4qL2nYsVY77gCNLajNXr9dRmSaT/YmF0u11dsyAfLh7LxqSV2dwvGM5rNM9BfwM1MjKixzwY\\n\",\n       \"v2azqcYfjJNGoxG4BZw7dy4wICcmJlQOVlQcNnKLi4v6b2x8eMwxfw8fPqwbN9yXN154J42OjsqP\\n\",\n       \"f/xjEUkHBADOuUA/ue9ccsjK8QSdwDHdxMSE3h/9nJubUz3OcgrfLYM4b8w5pxz+8nGgSG+8uMwO\\n\",\n       \"gDWBoxj9dzgbNrjfG2+8ofqJz3gOQg+sEnV8b/y1gh18xKO9iIiIiIiIiIg9Yt8YqdHRUel0OrpT\\n\",\n       \"5eK8sBiw2x4bG9NdMML8b7vtNqVFeRfrsw+D4IdHcrFcy7HYYnFgyW1tbekOmB0f/XBwDkdn+Lvs\\n\",\n       \"Q4cOBXQxOyAzCwVLDlbP4uJiynEfYAvSt+LZksJ4MNPBcrBybGAc2ILjMHuRnpz94IFSqaTfo09Z\\n\",\n       \"eY5E+ozf+vq69omLTWNcrZxBWRmw2XEbMrCy7PJ9OA+PRbP7ubt2s+SgH5/61KfUCkPQAef9wbiu\\n\",\n       \"ra2pgz/mDBel/kXABYX9GlWc1wvXHThwQFkPyG18fFyZH8hia2sr0OPp6WllynDkNjs7q0dikAGz\\n\",\n       \"QsMWoy2VSgELmCSJPg9pSAqFQip1hC8HfrYfCGAVWD106JDKg7/LyhgP9qtUKim7g3WHAzjYoraA\\n\",\n       \"tQA6VCgUVPf4qMtPa2EFL6ysrATpaLi4NYOZKB9ZLP8999wT1EhkRhw6NDs7q2OJNBg7OztBtnOR\\n\",\n       \"fn4wnFpYbWg2m6qDOC61iroz6wpW6NixY2YlDP8oEQ71In3Zj4yMKNuKPlp5uxhJkpg1SH2WivOh\\n\",\n       \"YYxWV1eVyRvGOZ1RrVaDVDZcGQSy4bUNn5VKpUDH+Hv8HVSkHnMOf51z+lweS+uYHjphnaAwA++j\\n\",\n       \"XC6nsr/7bR+EyEhFREREREREROwR++ZsLtKzmOADAFZhbm7uLXe+9v1hJiYmdNfOSRNxfoxdO1sA\\n\",\n       \"7FiIpIpgGi5evKhWCTND/m+Xl5d1980MF9rHTpiWDwhndRfp7eQ5RBfXgEGq1+tq+WRVQ2fLwdKH\\n\",\n       \"LAfPQZl0/TqDhUJBZcRWgp9dmf1vhgVXOce/2b8G8udK9L51J9L3R7F8ZdgR3fdvse6XJEmgE6VS\\n\",\n       \"SbOrMxOK/g5TT1IkzTSyLoiI2S8LrA/svzDIsZ+f65zT3zAzgXkB3X7zzTeHZo6yANnffvvtQfLK\\n\",\n       \"iYkJbddjjz2mLBYzZhhPrubuMyqlUkkd4xHezakqrKS/AMsD/T1z5ozJ1IJdQT+s1A5333233gfj\\n\",\n       
\"Pjk5Kffdd5+I9JOuXrt2TecPMx8A1ripqSnVfbBG7EuJNq2srATsOGd6ZobLr66Qtb6I9HWiXq/L\\n\",\n       \"r/3ar6Xa/P3vf1/7izX46NGj6t+EQIRqtarrE2f+txgwBAkgLUSSJOZceuCBB0SkL1MrYbFIX3eQ\\n\",\n       \"DuOHP/yhfscpefx1T6TPjjATY7XFR7FY1P5yzVB2/vfrwg0C1nD0w2dVhwHWrFqtFgQeWP62zFxb\\n\",\n       \"64DF+DGgi3juoPQq6BvGbXp6Wk+VMM83NzdV1vi72wkB4++ss3mpVJL7779fnnjiiaGu52iJQajV\\n\",\n       \"ajoo2HTkcrnAgW5tbc2kprnApQ8+asMgsdO5Hykh0h9gPIvbzorHDozoB2hFLOZc6sZSAPSbj+F2\\n\",\n       \"UxQuAJlVNNLKGMsFQq2NlL8ZbrfbehRq0a3s+GjlhfIXNz6GgMw3NzfNPuNFgTEcdF9/g8TPyHLM\\n\",\n       \"t44WRPryw4sgSZLM6CpELnW7XaWrrYWWn+/3t1Kp6GaC84RBB9HfhYUF3RQMa0zttlhbkbfY6LGR\\n\",\n       \"go0tZ22GrBBZy8dlWED9QtgW8ByM+WuvvabPsaJ1gGazqUEpnLsNssnqe5IkgQzX19eDtapWq5kb\\n\",\n       \"Hh+FQiE4tu52u6pnOAZaX1+XRx55RETsjRQfoeMFj42PVdHBORccPXJ+IFxXKBSCucIGFb6bnJxU\\n\",\n       \"/eWNw7e//W0R6a+Hhw8f1jGxilwDloHNhcCBu+++29QVRC5yYIsVTGCtZ36VBwbcTbhcDcPXg0ql\\n\",\n       \"EsxrLr7Nzs6YKxsbG0FRZO43F+L1KwckSWIGh2CusIHrF/ZdXV3VscF8HPZ4sFgsBkfjSZIE0bjs\\n\",\n       \"ksFRgBzMw3Li9pXLZW0fnnH9+vU9lXnxn+HPwSzEo72IiIiIiIiIiD1i3xiper2eYqOs8HHAChXO\\n\",\n       \"5XKaHwg780uXLunuOivL6ejoaGDJP/jgg2YosU8/3n777erQDgvi9OnTmkGcrV2/CKVIyHodPHhQ\\n\",\n       \"28q0sF8bzS+87AO7Zt49s6Mly3c3ShX38QtTiqSPR/AXVgdnoh22PpPl+I77ceg05xISSY8hW3w+\\n\",\n       \"M3D27Nng+JCPIbhN/vEEU+d4hpUzjOXI8s86ZsOzKpWK/p4dbqELSPexvr6udDz6yPWoYHFubW2Z\\n\",\n       \"zpcAjx+YKzhXcr1G5LTZ2tpSXQSrkM/ng0zaYBvRLvQB17EzLfqB6yYnJ5UtxFiVy+WbPuL1n432\\n\",\n       \"81ED+m4B8x86xkXGh60RhrnOLgOf+MQnRKSnJ6jjl4VqtRro7Orqqq6XLBc/KzqDC0v782xyclLZ\\n\",\n       \"azCms7OzQQHg9fV1kynx5wozANCxxcXFVGCESPooG/NjdXU1YFsgB/xGpDdufh4uXuex9lvvi0ql\\n\",\n       \"ojLFszqdjsmi+uviyZMndd0GI2UxphMTE6rH7ArAuQpFerKFGwHkeO3aNb0fyzsrv6KVsZ5Tz7Ce\\n\",\n       \"+30ql8sm6wfg3VUqlfQ9YuVa4vc15M5BCdYR4l7mNcAy5L/cvkFsFNYoDmbz8+E1Go1UnkOR4eZ+\\n\",\n       \"ZKQiIiIiIiIiIvaIfWOkOFxaxGaiGNg5co0qWLm88/Yd0iYmJtQ6sBwzcQZ9/Phx+dGPfpT6LfsC\\n\",\n       
\"sOWPHTqcSRGCbvVRpG+B1+v1wPJaXV01nbDBCKCtrVZLa1TBumPmgdMvgDFh1uD48eMi0qsLhb5k\\n\",\n       \"7eCLxWIqySeAazk01HcoZGsaDMH29naQiG9iYiLIBM3tBzqdjt6Hq5cDPOZ++LGVPNBikHZ2dgIW\\n\",\n       \"gOtCAc45tWzAYHB72VL/wAc+ICJpXzrICPe1/HW4XZZucdZuv2bg+Pi49sOq+8j9wW+ZiYJ8oTvs\\n\",\n       \"A8eMra/bFpsr0vdL4WS3PpPI/oeQz8rKiv4GvmNs4YKZGB0dTaU8wXzmzMzoc1a2ZE7mC4sVfpY3\\n\",\n       \"A8shFmsX/u6GBx54QBluME7lctmsp8d110TETHa5tLQU6Fm1Wg0c2peWloL6izMzM7pOg9Wu1Wqm\\n\",\n       \"bgHsk8Y+jSI9Pcjy3UR/jx8/rn3BPVqtlsnMY40G8wM/O8bW1lYgv4cfflidzJk5wThhDeEAk6w0\\n\",\n       \"LVeuXAl8faanp3VNYsY5K6CF13I+pYBuA8MGlhQKhSCU31p3JicntZ9gnJrNZibDzYDucLoUrNMY\\n\",\n       \"G0s/x8bGVK4cNANdQGqKQqGg7cLYXL9+XX9jvccQAJPP55Xdw3pw9erVgJWzUoBwQupB2LeNVNZE\\n\",\n       \"BLCQnTx50szx4i8inP0bC8HKyorpHAelxQvo61//enDN0aNHg/wmt956q+YwAQZFrmHjw0WJsYhw\\n\",\n       \"Hg4oNSbf7OysKiDToFb5D79gsEh/M8EbA+uozd9QiaQLNvsRMnxPLp9gvaC4SDLgO4tacrOCCRqN\\n\",\n       \"hvYJE3JjY8MMQLjnnntERHRTfPHixSCbNG8mcN/l5eVgM2CVVmg2m0rLA/l8PlWAE0BkD3SNgyGs\\n\",\n       \"hQxtKRaL5oID4FmW8WHpei6X0/HiuYKXAW8+MNaQ6aDNB148uN8DDzygCxVeWFtbW5lFmtHfO+64\\n\",\n       \"Q+WC9i8uLuqLwsqNNei+KDsD+SZJEpSL4LlgRVkBNxPVkwXM2wcffFCjxDirO4C16ODBg0GxcXZG\\n\",\n       \"ZnDm7kHY2NgIdL/RaATHglxpAmBZQd/K5XLKuPKBl9PU1JSOsXXEg3vMzMzovWGkWGPORgr3B2sC\\n\",\n       \"6wTWRcxHLrSLkmHXrl0L2lWtVoM5dO3atSCv3+joqLYBG1F+CXNGbX9dueWWW/S9grbkcjl9Bozn\\n\",\n       \"XC6XWk/8Ntx5550qa2vew9jpdDrB9/l8Xo0crHGLi4umuwfeVX65MZF0FCh+axlVvJ5hY8KBPv7x\\n\",\n       \"+87Ojq491sYYfcvn83ofzn2HMbGORkEqiPTHDrrY6XS0DbtFojLi0V5ERERERERExB6xb4yUT21i\\n\",\n       \"lzo5Oak7UVhZFhs1PT0dZD5lqpNrGMHqhPWxtbWl12aFNjIb9eu//ut6P9+aHOTcBmsM7Fu5XA6o\\n\",\n       \"39XVVbWo0I+LFy+mnFVFejtvUM64b6lUUiaKnZct6xTWTLVaVfkyI+UzQa1WKzgmY2YDO/lB2ZXx\\n\",\n       \"Oeexsgr7ZlnSbHWifbCGSqVSUCsKfeG/nU7HzHVj5QXCv3dLtQFZgjHlmmLcx1OnTolI37pmax9t\\n\",\n       \"qdfrQeZzSybFYlF/k3VEdfDgQWXM8Lw333xTdcwqsLpbf++++24R6Y9pq9VK1boSEfnWt74V/O74\\n\",\n       
\"8eNqOcIi3d7eVusYY/nqq68GjFC5XNbfcqZmWOuwIMfGxlJrCeYN5+7yx79Sqei8Z1bPP66xMpZb\\n\",\n       \"sAIQGKgFOj8/r0eSfm47kb7unDt3Tq1sTqGC3zLzwvXlBiGXywVMQxbjybCYpJ2dncBav+uuu5Qd\\n\",\n       \"47pvFqBHGN/5+XnVo6z8RoOcpzFveE6jfzgW5moRmBd81Idj+IsXLyqLwcflPgPKRe6Bbrerusp9\\n\",\n       \"xxhinh85ckTzh1k1NSGXYrGobbECEJCuQ0RSNfdwipEly06nYzqbYy3gFCB+9nKLqeXPsF6Uy2U9\\n\",\n       \"veEC6dAPzOVh06/wPLP6hjkwyJkdeue/W/32Q/6Yo8Mc7UVGKiIi4m8Fw/pwROyOYTc+EbvjrU74\\n\",\n       \"HBGxb5nNYSFkZc3mM1RYY9iJcp0kDlfGjhWTxTrznZqaCqwlzpDLtaWQtRbtfPTRRzP7hqrjKysr\\n\",\n       \"ej9rBwxwnTuAz9qxO2ZLw/J34uzdOPve2dnR82qwJ+wcDotqbGzMlJfvANjtdgOWamxsLOUoDvjO\\n\",\n       \"yLVaLfDpOXHihLaPFzdOLieSTkrJ4e1oP7MBCOWHg2Sn0wksHs4YzIwA5M56B1mxLCAjrn3nMxfI\\n\",\n       \"hMuo1+s6dpDZbkkuLWAMqtWqjit0w/IdYmYF8jl16pRau2A1JycnU07cIj02eFBGYUaxWFS5sX/S\\n\",\n       \"2wFOl/HZz35WRESee+45ERF56aWXlLXhLNF+Og2R0IeOWVSM9djYWGY2aovpzPLDGgS0D+kvlpeX\\n\",\n       \"lXUYZl0R6c8f51wms8a+Y/g3pzfxMTExoes1vh/kT+LXfdve3g6CNW4G7Hwv0psDvh/M9PS03hvX\\n\",\n       \"3XbbbaaPKYBs6+fPnw+YKwsnT57U9Q760Gg0UnXmRHq6iXUvK8v/xMSEPg/XcQAM/xYyn5iYyNRF\\n\",\n       \"gPXYSnZ8s6jVaqkaudAzzJ9h1gsfSKALHV9YWNBxwBiyTy0/E+w05lmz2QxSGHQ6HZUh2skBF1zT\\n\",\n       \"EuOFPQLeIVmZzTUr79v5n4gkn//855OItwZRlm8doizfOkRZvnWIsnzrEGX51uEfkix72yV7TxOP\\n\",\n       \"9iIiIiIiIiIi9oh9LVocEREREREREfFOQDLgaG9fNlIREREREREREX8fEI/2IiIiIiIiIiL2iLiR\\n\",\n       \"ioihjjlwAAAEq0lEQVSIiIiIiIjYI972jZRz7qPOuZedcz93zv27t/v573Q45153zj3rnHvaOffk\\n\",\n       \"jc+mnHPfdc694pz7jnNuuKJe/8DgnPvvzrlrzrnn6LOBsnPOfe6Gnr7snHtkf1r9dxMDZPmfnHOX\\n\",\n       \"bujm0865j9F3UZYD4Jw77px73Dn3gnPueefcZ298HnXzJpEhy6ibNwnnXNk5d84594xz7kXn3H++\\n\",\n       \"8XnUSw9vq4+Ucy4vIudF5MMicllEfiwin0qS5KW3rRHvcDjnXhOR+5MkWaLP/khEFpMk+aMbm9PJ\\n\",\n       \"JEn+/b418u8onHMPiciGiHwlSZL33PjMlJ1z7oyI/C8ReZ+IzIrIYyLyriRJBieD+QeEAbL8vIis\\n\",\n       \"J0nyX71roywz4Jw7LCKHkyR5xjk3KiI/EZFPiMhnJOrmTSFDlr8tUTdvGs65apIkDedcQUSeEJF/\\n\",\n       
\"KyK/KVEvU3i7Gan3i8jfJEnyepIkOyLyVRH5+Nvchr8P8CMHflNEvnzj31+W3sIR4SFJkv8nIn6K\\n\",\n       \"6EGy+7iIPJokyU6SJK+LyN9IT38jZKAsRULdFImyzESSJFeTJHnmxr83ROQl6b2Iom7eJDJkKRJ1\\n\",\n       \"86aRJAkyJRdFJC+9OR/10sPbvZGaFRFOx3tJ+koeMRwSEXnMOfeUc+6f3/jsUJIkKKJ2TUQO7U/T\\n\",\n       \"3pEYJLuj0tNPIOrqcPjXzrmfOef+mCj/KMsh4Zw7KSL3isg5ibr5C4Fk+aMbH0XdvEk453LOuWek\\n\",\n       \"p3+PJ0nygkS9DPB2b6RiroVfHB9IkuReEfmYiPzLG0csCmRg3ZeWvcMxhOyiXLPx30TkVhE5KyJX\\n\",\n       \"ROQLGddGWXq4cRT1f0Tk3yRJkqq8GnXz5nBDlv9berLckKibe0KSJN0kSc6KyDER+UfOuYe976Ne\\n\",\n       \"ytu/kbosIsfp/49LegcbsQuSJLly4++CiPyZ9KjTazd8A8Q5d0RE5vevhe84DJKdr6vHbnwWMQBJ\\n\",\n       \"ksxTOYUvSZ/Wj7LcBc65Eeltov4kSZJv3Pg46uYeQLL8H5Bl1M1fDEmSrIrI/xWR+yXqZYC3eyP1\\n\",\n       \"lIjc4Zw76ZwrisgnReQv3uY2vGPhnKs658Zu/LsmIo+IyHPSk+Gnb1z2aRH5hn2HCAODZPcXIvJP\\n\",\n       \"nXNF59ytInKHiDy5D+17x+DGogr8E+nppkiUZSacc05E/lhEXkyS5Iv0VdTNm8QgWUbdvHk456Zx\\n\",\n       \"BOqcq4jIR0TkaYl6GaDwdj4sSZK2c+5fici3pee49scxYu+mcEhE/qy3VkhBRP5nkiTfcc49JSJf\\n\",\n       \"c879joi8Lr0IlQgPzrlHReRDIjLtnLsoIv9RRP6LGLJLkuRF59zXRORFEWmLyO8lsQyAwpDl50Xk\\n\",\n       \"HzvnzkqPzn9NRP6FSJTlEPiAiPwzEXnWOff0jc8+J1E39wJLlv9BRD4VdfOmcUREvuycy0mPdPmT\\n\",\n       \"JEn+6oZco14SYomYiIiIiIiIiIg9ImY2j4iIiIiIiIjYI+JGKiIiIiIiIiJij4gbqYiIiIiIiIiI\\n\",\n       \"PSJupCIiIiIiIiIi9oi4kYqIiIiIiIiI2CPiRioiIiIiIiIiYo+IG6mIiIiIiIiIiD0ibqQiIiIi\\n\",\n       \"IiIiIvaI/w/CAMCOMj3yxQAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01f952d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['conv1'].data[0, :36]\\n\",\n    \"vis_square(feat, padval=1)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The second layer filters, `conv2`\\n\",\n    \"\\n\",\n    \"There are 256 filters, each of which has dimension 5 x 5 x 48. 
We show only the first 48 filters, with each channel shown separately, so that each filter is a row.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 30,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvWmsbldxv1nvOb7zgCc8GwMGMxkIMyKgRCgk/+RDuiMl\\n\",\n       \"6UQBOg4eZQYj7GADRg42wthGcMEoHkCOlaB0R1GCWpGSNJmDiIAEOYAx4BHPxsb25frOZ+gPl2fv\\n\",\n       \"9T57132dg92n8+/6fTnnPWe9a9eqVWvtVbVqmCwvL0ehUCgUCoVC4b+OudUmoFAoFAqFQuG/K+og\\n\",\n       \"VSgUCoVCobBC1EGqUCgUCoVCYYWog1ShUCgUCoXCClEHqUKhUCgUCoUVog5ShUKhUCgUCivE03KQ\\n\",\n       \"mkwm/2MymXx3MpncOplM3v90PKNQKBQKhUJhtTF5qvNITSaT+Yj4XkT8QkTcFxFfj4jfXl5evuUp\\n\",\n       \"fVChUCgUCoXCKuPpsEi9NiJuW15evmt5eXl/RPwfEfG/PA3PKRQKhUKhUFhVPB0HqeMj4p7m870/\\n\",\n       \"+VuhUCgUCoXC/1Q45Gnoc+Zd4WQyqbo0hUKhUCgU/ttgeXl5Mvb3p+MgdV9EnNh8PjEOWKWmcNxx\\n\",\n       \"x8Wzn/3siIg44YQT4sQTT4w9e/ZERMTatWsjIuKTn/xkRESceeaZERGxtLQUERGLi4tx6KGHRkTE\\n\",\n       \"3NwBo9onPvGJiIj4tV/7tYiIePzxxyMi4phjjjlAxIknTvVx1VVXRUTE2972tql29Pvoo49GRMTC\\n\",\n       \"wkJERGzbti0iIt7+9rdHRMTOnTvjsMMOi4iIjRs3RkTE3r17IyLi2muvjYiIc845JyIidu/eHRER\\n\",\n       \"8/PzERGxadOmiIhYs2bNFO3vec97IiLiiSeeiIiIQw45ZKrdli1bIiLiYx/7WEREvP/974/FxcUp\\n\",\n       \"Ojdv3jz1TPqGFvqem5uLm266Kd785jd3PI2I+PSnPx0REe9973un+MX/4fdkMun4cu655079Db87\\n\",\n       \"xstP5vP888+f4hf/53v8hJbzzjsvIiKe8YxnTH1vx44dEXFAXuj7wgsvnOrjxz/+8dQz4OmnPvWp\\n\",\n       \"qXH62evWrZsaN3z84Ac/GBER+/fvn6Jl586d3Tz94R/+YUT084lcb9iwYeoZi4uL8dWvfjW+/vWv\\n\",\n       \"R0TE+973vqlnAvjKuvj4xz8eEQfmPyJi37593TPWr18/9V348s53vnPq2fQJGC/jvOSSS6ZofvDB\\n\",\n       \"B6fGsm/fvoiI+PznPx8RB+SLv/EdZGXnzp1TbZkjaGCOkDX4yPzDF8siaNfRb/3Wb0VExJFHHnnQ\\n\",\n       \"vm+44YaIiHjDG94QP/uzP9vRwrpBTuDL5ZdfHhER7373uyMi4u677556DutucXExPvOZz0RExLve\\n\",\n       
\"9a6pPpAV+MJP5ujss8+e4h80w1fmlvYXXXTRVDv6n0wmg32RNcT+xjj52a7niH6/QM4B36f9lVde\\n\",\n       \"OTXWI444oqPFvGTPZT9/+OGHI+KAbL3hDW+YWksR/Ryxjhj/rl27pvhH/4yV/WXz5s0d/ewV3ovY\\n\",\n       \"u7Zv3959J6Jfa/D26quvjoiICy64YGr89J+tiw9/+MPd++GRRx6JiOGexH7x1re+NSIijj766Ijo\\n\",\n       \"ZZW+eSa0wxf64znQTv9XXXVVt8/xDmG8zNHXvva1eMUrXtHxnPcizzSvvdedddZZU7T4nbW0tNS1\\n\",\n       \"/e3f/u2IiHjWs54VERGHH354RPTvbN5lyBbtGRd9Amhjv/jd3/3diOjXETS364L9n/mHx7Sdm5uL\\n\",\n       \"++67r1vnN910UxwMT8dB6t8j4vmTyeTZEXF/RPxvEfHbbnTCCSfEG9/4xogYboyFQqFQKBQKq4Xj\\n\",\n       \"jz++Uwz+Xz9ILS8vL0wmk3dGxN9GxHxEfH4sYm/NmjXdaR9twSdwwKmYE/bCwkKnlXAKB3wXbYf/\\n\",\n       \"8yxOnOCFL3xhREQcddRREdFrVN///vcjoj+5m5YdO3Z0fWJh4rTbjrGliZN2FikJrWg1jIFx0x+Y\\n\",\n       \"m5sbWAHcB0DLQeN65JFHpqwS7ptnts9qfzL2iJ4nT/ZADG3wDU0DTdTzby0RLWoM0A19nn+DcaNZ\\n\",\n       \"0o7PnlPkg++hgd15552DZ0AvYLytZaLlM+1/9KMfTbVDAzMtyNPi4mLXD234H2BubLHK+ILlgX6t\\n\",\n       \"BXodrVmzpqMfntvC7HFCKzRlFiePibnhOS1fjj322Ig4sAlG9JY0tF3L9cLCQuzZs6ebG8bFs8wf\\n\",\n       \"ZB2LNXKAjLb8pS+vd75jWmyx5lmeS/fD2NgvNm/e3NEDmD8s7+xrjGOW7PJ9yw+AZp47Pz+friHA\\n\",\n       \"emcO6Htsn2sBX5h/LH6A72/cuLF7Ribn8Jxns4/y2XsRc8qzmVu/J8Bhhx3WteG7ln/A3oKVk2cg\\n\",\n       \"wwb7oG8LPHdtXx4XP9esWdPNYUQ/j4899lhEDN/B5ift2z1prF1Eb4HinYRs8V7werFlljll3lu6\\n\",\n       \"I/o5NE1zc3MD2WJc8IW+bImdhafDIhXLy8t/HRF/fbA2mPUKqwc21cLq4YQTTlhtEv5/DQ5chdVD\\n\",\n       \"zcHqo+bgp8PTcpB6MnjmM5/ZnSyB774Bp8j2/tLaHMCPhtMrp1K0e34CrGGcYqEJ7Q6aQKtFc4rN\\n\",\n       \"NEZotL9N6yMzBtpzWoY2W2J27do1OGEzHluk0Cz4+9zcXBx//PHdydsWBvjE2BirfWsONo5MC7SG\\n\",\n       \"Zs3bWmCruUbEgOaWB8yXtRrzHvhZs7RotH9ohp9btmwZ9MVnW8VazQsLSksb47GFxnyG1t27d3da\\n\",\n       \"KPPk+beM0h5+WQO7//77I6KXPcZtnxqwuLg4sAJapgBWPICsQZNph0bGYN63z6ENz+Azbbxv+CCL\\n\",\n       \"dmzfQ8AzsR7YMtHuL/4uPLY/nvHDH/5wilbkIZNJW2aYsxb4RsIX1lRmBWCeobn1AWu/D/g7z9m1\\n\",\n       \"a9fABxIgJ/iYchuQ7SPw0b6mtn4A5GX9+vUdDfDGcp7tC/DF7aHdVlLae8978MEHB5ZD9wHgHVZC\\n\",\n       
\"5goaPE768TPH/m4/Kz4z7qOPPnpqrKwXLDT2MfR7Bz741mVsP0V2sLwzXs8v8DPpi74ti9DgvW1p\\n\",\n       \"aWlgvcz8LRlvds4wqkRMoVAoFAqFwgqxahapiP60hybKadCanC0X7anSp3q0MbQd+ylk1i5HjnGK\\n\",\n       \"zXyHNm7c2GmvmSZtH5FZFilHGNn/y6fp9s6XPjNfMGvB8IHP5jnP8r3+fwWmqaW7Bf93dBqANt/P\\n\",\n       \"ewwtrHllsGbJ50w7Zo5tkVlaWhr4j9hiZh8Gy4s1eL7HXGS+I/Pz84MoI/ME2ujblgjPyZg21/40\\n\",\n       \"lpaWOu3V85TJYuZ3Yd76/16zLe3217OlKKOFcTqy0nyxr1nWrgX/81odk6G2b8u3tWM08TZaj7FY\\n\",\n       \"fhk3ba1xZ35p0ML3+Z73ojELaOZPAw22Ym3dunW0Pc+2pS2zprf/p022br3XMEdjlvcWtuiY1rad\\n\",\n       \"LamZlYP/OyLQtyim1Xv42A1J5o+XjdfvrmxtAq8LaPB7JGJo3QSWIeBnel1ke7V9x+bn5wfz4/cb\\n\",\n       \"9DLe7LbJKItUoVAoFAqFwgrxlNfae1IPrYSchUKhUCgU/hshS8hZFqlCoVAoFAqFFWLVfKQ+9KEP\\n\",\n       \"dT5ARJrhvc89K1l2ycqK38ZkMulybXD/+ZGPfCQiIt7xjndERH+/jN+Gc7OQNZXswNwJO7sqtFx/\\n\",\n       \"/fUR0WeT3rx5c3eP6igbZ+QmOoH7WCKFuJ/+8Ic/HBF9ZmNHmhCJyJ0vGYLf//73D6Jr8LPiWTfe\\n\",\n       \"eGNEHOB324d9wPhM32SHhW/O3s5z/uiP/qjLVGwfJ99Hk5GX+YTn8IfvwR+ySZOp+L777ouIfk6e\\n\",\n       \"85zndM/5wAc+EBF9Bu82n01En+eFeb3mmmsios+yDa38RNaQTbLy0r+zNm/ZsqW742f+4TnPfuCB\\n\",\n       \"B7q2EX36CTLVI4sAueIZ5F+hPdmHN2/e3EWRsR6gz3LOeJx1HFkmszGySDZmfA6hAf6QxfvCCy8c\\n\",\n       \"+HYwB/TNGoUWaEAOoAU5gI+nn376VH/2lUEWP/vZzw6y7FvO+ens0I6wZJzg4osvjoiI3//93596\\n\",\n       \"piNs9+/fH5/97Gen2iKvrD32OYBskcEdvhA5zHid8R3aHdU5Pz/fPZP1TOZpeOx8aMgo+xxzxHph\\n\",\n       \"X0QW8Wciyz4Z37nh2LVrVzeP7F+sZ9Y/42Td2H/rc5/73FTf8JwoP8YI7bwvWNM7d+7s6EZ2yI/G\\n\",\n       \"GmJvAawb5+HyOJF/eM5zwF/+5V9GxAEZQAbJB8W42QfYi3i3ME7a2XcMWni/OIcXc8Bcf/SjH+3W\\n\",\n       \"M/PJHvvQQw9NPfMLX/hCRPTr375DznTOnFIJAX835qSVF+hGzu07BphX7//w2n6v0MK6g4/OU9n6\\n\",\n       \"UiHnZ5xxRkT0vIPXjJdxQHuGskgVCoVCoVAorBCrZpHasGFD3HrrrRER8R//8R8RcSC3VETE61//\\n\",\n       \"+qm2aMVoi29+85u7U+uXv/zlqbacQtFaOHmTR8l5oaxJOHO3o3E4uT744INx1113RUR/gnaSUUc+\\n\",\n       \"nHrqqVN98n3A6RdLBf3RjzXZH//4xx1PXLcMix3gNE7tIJ4Fz9HUADRDi62HrW+doy6xAmU5meA5\\n\",\n       
\"8wotxx133NT3getgkY0eviNHEb32S1vXUHIkJNo+tH73u9+NiF47ph4kQIuCvySye/TRR+POO++c\\n\",\n       \"aotF6Xvf+15E9Nraa17zmogYRuHB03vuuScieg32lFNOiYhepgFzs3Xr1q4v5NyZlpEdxsP/sXoY\\n\",\n       \"1uLIdwUfnbtlMpkM8sA4EhbAc2T2Bz/4wdSzXvCCF0y1t5WI70HDWMQcfTkSyNYs5APrQJvXK6LP\\n\",\n       \"p+X2aK7IINbGVrt2RFhb57IdD4BfyBF9QlO2pp3z7dBDDx1Yg+HVS1/60oiIeNOb3hQR/d77ne98\\n\",\n       \"Z6o9fGONsmaRwSyyEpnduXNnt54d4Yc8IGPOnu29iP+/+MUvjojoSov9+7//e0T0VhUAHx544IGO\\n\",\n       \"R6wVR2GxbphnngXtjsT1XsWcspf7HbBv376uDbKDNdDVAti7ac9e5mz7wFYz9puxrNzMD2uLNcV4\\n\",\n       \"kWfDlt0s4pCxcWuAHMCnlnbn+sOihjxYXoCty1n29Kwyytzc3CDCDx7yTFfwmBW9CcoiVSgUCoVC\\n\",\n       \"obBCrJpF6tFHH+1OoJzisY5Y2+VEjvZ/1113xb333hsRw9M3p31Owpx+OaU6O7jr9aCBoOXYgoHm\\n\",\n       \"8uCDD3anWVfrBs5Rg9YylgU5otc4TzrppIjoNRK0RWv2z3jGMzoN47bbbpv6rjUpZ5Hl/2h/bo+2\\n\",\n       \"7DqCrgLe/o5mAK/H2kb0c+D7aLRGWxiYA+bkm9/8ZkQMs9ZHDP2zrNXZAomG0mrSEb1PkHnOXCNn\\n\",\n       \"rXZtTRrZwwqCZQGaLLu0dwZk/k79R9Bm+KYN47flBWsoNPIMPtt64mziyD08t6w//vjjHe9cp89W\\n\",\n       \"IXiLNcy5ZmwlgzZrh6540LbN8mhZzpFBxvn1r389Ivq9xtqufUXYw+w70v6OjCE7WU1J1ia0QwP7\\n\",\n       \"ojV1+/3xnF27dnX7H2D+seZgicD6abBfYCVBnphbr2nkhbW9bt26bi6wlADTyzhd9wxwq4BF6qtf\\n\",\n       \"/WpE9NZ1rzto37lzZ7d3MA5bJFgH/B+as/qpz33ucyOi3x/vuOOOiOitxqxtsHbt2oF/LvPo2xF4\\n\",\n       \"Zz/XLCs/8uOM+Yy57R9LIvLKnLR1CVu4/iV9tT6hLbyv8v2x/cL532xp8jua/d2+hq6QAcyvtrqB\\n\",\n       \"9xDX7+MdlOVAy7BqB6mlpaVugbBIfegBDIrDwvr169MkhXx22nkmLStwyOTwss4KjjLpRx55ZPdC\\n\",\n       \"YBLGSri0z7ajtzdGFiFCycHLNILDDz+8WxD8z4n3AAKEoFgo3Tffh49sMGMlaOjbz8ySvTEeXmJO\\n\",\n       \"iulFyfUCm7+d9dtDALzFPA6PkSkvDMbDs5/3vOdFRM9HyyLj5tqF52zatGkwTjZ4Fx22TAJo43u8\\n\",\n       \"YPzyBtC2b9++bj6zvtmcmU/Ggcx5U7cDqw8zXnetczHP5kXvgxQ8Q+Y4/DN3zAmwjLImXQzcv7dt\\n\",\n       \"M0dVZAv+wPOs/JOTHzrpX8sXxu0ErIzfQE7YV2hvZ2zgQ2N2WIzoeftP//RPEdEfoHwt777ZL5BN\\n\",\n       \"aM/4grKwfv36tOQHMoQyYgUse3n94z/+49T/WReWL553/PHHD65sDNr66i9z7YBG3AnYy1wEGGza\\n\",\n       
\"tGlQlDdL5sn8unCyDwQAmfVhaCyZNIoQz0TGZpXOgib6npVMlL2LvT27Mozo+eGrfoNnZnuy4Su/\\n\",\n       \"1tXA77ksaTb7pPeiDHW1VygUCoVCobBCrJpFam5uLi2g6GsJl9ZotR2b9ThRurhqdqr3s50OwP1D\\n\",\n       \"y4YNGzotPiuzYLOmna99teNQdLQATu5jJSJsSXDRSGCHXVvPsoKY0IQmiobS0u4SFrMc9bB2ZM/M\\n\",\n       \"LFiYz32V2PKF+YTn1qwyjcRzlcmii5m2VibTDc9dMDQrnQMww9tpPyuF0Y4vuzbG8oK8YInKylYg\\n\",\n       \"J/x0WhBbMFrN01YbWwzoK7PUZg7hdl7PClRHDMvTZLKIxYLrM/aYTF4A/WI1aK2DAPqy8jSZBdNu\\n\",\n       \"CNl6gjb41RZ191UtFglft7sEELC1COtiZpGAj22BYej1uGnr8jqZFZA9iPGyf0B7doOxdu3agSya\\n\",\n       \"fqfoyMqxAFvkXJjeWFpa6tr6u7YcuiQUa4pxZ+VtzM+xIA/kEvcRWzuzEjLsEw4g8fpnn4B//GTM\\n\",\n       \"Y3tXViLK+wXjZL79vUy+PPaIoYWRPpgLLFDZushQFqlCoVAoFAqFFaJKxBQKhUKhUCjMQJWIKRQK\\n\",\n       \"hUKhUHiKsWo+UmeeeeYgxJK7YKfxJ+V/G2JsHxXSyV9wwQVTfTrqjL5J+f6ud70rIvp7a0f34edw\\n\",\n       \"9dVXR0TEOeecExEH7mG5XyeC4yUveUlE9CVfKBHhchzcO3N//KlPfSoioitvYV8Bvg8t1157bUQc\\n\",\n       \"KBHgKCrupvkOpTCgG18Z/HDok/EyTtLsO4WD/RQ+9rGPdSn/ocH+BtxRUwqD0gb4YcAH+zwxp8yR\\n\",\n       \"y7e04e/wkHFC98te9rKIiPjWt74VEb0/BaVQTjvttKlnE95MWDNzRVkO2uP/RNTk1q1bO3mgVAU8\\n\",\n       \"ZD6zsGbknDIeTjWBTxFpQmhPCZq7776786dAxvgMLZT8IVoHwA/8CuAjpTDsz+fIQ8p4XHrppZ28\\n\",\n       \"En7OPBK1dNlll0VEP0fwFjkhBQURVC4pBP/wx0BuWNvbtm3rSlt4XfCTaLU/+7M/i4i+XIXDvx1x\\n\",\n       \"5v0CfkF7u46QFdaz03y4VBBlXBinS3wgP9AILYyVtYif1ze+8Y3ud8pmME7WL3PlCEH6piQO4yKa\\n\",\n       \"GR8bxkJ79jrGtLCwMJB3yhW5/Aj0k8yRfcF7F7IKP7yH0T/vi4hhYlqXtqEUDmvMSV5JWUF7ymE5\\n\",\n       \"xYcjyqDl/PPPH/Dc/riM06W26NNRn8wp8sKeztzg5wNft23b1vXN+B3NyjOhBTkHPJt1wRxcd911\\n\",\n       \"U7QA7xebNm3q9i3aOsLc/lqME3nh76T2YL+AdpdmA/S/adOm7pm8Wz760Y9GxDBBMeNjHVHeKkNZ\\n\",\n       \"pAqFQqFQKBRWiFXNI4UVgHIFX/ziFyNiGIXhAsQPPPBAd7J0MjYnXiR5I9YiNAzAyZNncOLmp33I\\n\",\n       \"OBU/9thj8Ru/8RsRMcxB5bZE1WAVIQ8S+XMAGidag0vnOGpjeXm5Gy/J6WjjhGmOAEMThS/f+MY3\\n\",\n       \"ptqjHZDL6ld/9Vcjoi+h0ubCcR4pR7Jk0UYA2qHt+c9//igt8Ovmm2+OiD6v1Mknn9y1ZT5ddgLr\\n\",\n       
\"B9obgFZkDn5g/fmrv/qrqfbwFVnE4rlz585BCRfn/7n99tsjotccmVeA5kk/jPfP//zPI6LX2AFa\\n\",\n       \"1G233Rave93rIiLi53/+5yOizxcEGD/yT5JX+GOLDEAWKeOCNci0//CHP+wsA9DpPGGA+YeXv/AL\\n\",\n       \"vxAREZ///Ocjotc0ATIODZQIotzJP/zDP3RtnVvIuWpMN3sQcoPFClnEOmZa6A++MafIcsRwHZBj\\n\",\n       \"6ZZbbpl6JmCvQUZ5NhaGLOoPfqKpP+tZz4q/+Iu/mGoLvVhNWe/IEHsScB45xgWN5gtgX9i9e3e3\\n\",\n       \"d1BcHNjC8vKXv3yKfpf9cvke1naWM63dX2gDXS6z4wSV0ITsuiwX6x6a4At89b67sLDQjRdZyRKJ\\n\",\n       \"Opku42K8WckhSgq9+c1vjohepv/5n/+5a4v1i32Ad5VLpABHpTpSNuM58sR+ynpzlHjEMDrTUaiA\\n\",\n       \"z06qSs439gVga3R7Q+JoQ9Ya7zXvg7ZoZiiLVKFQKBQKhcIKsWoWqWOOOaYr8/K1r30tIvoTqi1S\\n\",\n       \"nIL5+cgjjwxKwQAsUZws0ZycgRg494Z9H3w65kS7a9euzo/mxBNPjIiIv/mbvxml2yVTOCH7dGy/\\n\",\n       \"LDQQ+OSCmHNzc92JH62FcbpvNA+sN84H5AKYaCLOcMxYvvKVr3RtXSDWmoUtUtbqsHJkpTPoD0sH\\n\",\n       \"c8VYWr8fjwc6XeIAOIvwt7/97Yjo5cdao3PXkNvq0UcfHVgvbZHhs/0NALxGrlwawbLIWJ7//Od3\\n\",\n       \"PMXHyxZGaGA8tGe9OIMvz8bfyXwbs3jC+za7dcSw5BNaMdYfKhagYdtqzHxjNbGvTGvZcZkMWz+t\\n\",\n       \"1TOfzmTsPDoG7fg/66nlCzTQNz/hqS1S9MEeBB9Y42jNoLWOR0T8/d//fUREvOIVrxhYgZwF25ZY\\n\",\n       \"58uxfxZ7j3NdZbQ88MADg4LhwNUleBbf9R7tvZi1jTXB/cPHnTt3dn3yXc8/8oJssSYp7n3TTTdN\\n\",\n       \"tYdP9u8as7hEHOAf8411IyuQDdgffFtiwHOqMTjfWCtf7H/QMusZyAlyjizCe68rZ1dHhnlPt3uA\\n\",\n       \"88aBsYzs7bgYLz6GliPgPFqtf6/n33LOO4XxshfPQlmkCoVCoVAoFFaIVbNIPfbYY919tX2LOMUC\\n\",\n       \"Tqicoo8//vju1Ok7bOBMrs68C1yDy1piVt/shBNOiC996UsR0fv0OCLKWXUpMMzf0RIBp2M0Fyw1\\n\",\n       \"PuWDXbt2dRoD1g744npeaFK0Q4PC38gaKZ/RbP/u7/5u6u8+2UfkWbINrCBomsw3Vi/PqS0x8GXM\\n\",\n       \"4mHt1RnKTZt9xqANP52sgPJYNntbXlwIFa0+swLRjvFhgcNnypoac3Hsscd2fTFPWe0s/Cno09E1\\n\",\n       \"wAWhXTMLTR+0RWp5Nhqj/dJsveEZaOi2SCGr/B1LFvxp1zTjccZiF3gF0GDLBOPzOHkWcuAo2Fa+\\n\",\n       \"nG2etnzXVj3a2RJtvx6AJQZ5Yd3ceuutAx4ybjKcY+Vir8kyekML6wN+en9hv4Wmo48+ehARDJhP\\n\",\n       \"5BnfUdrZT4W+WU+usejqDvS7adOmbv/yrYDb2i+JZ/pdBA3wiz2LNe2s3BFDvzqseplfIjxmblyI\\n\",\n       
\"HbD+uRHBCu0xtH3AM/t4el90NnHGB22uh0o7xub3bPse5Xffgrh+peHqJOYPcDSgq120wKLMeoB3\\n\",\n       \"yHdWo9Eoi1ShUCgUCoXCCrFqFqn9+/d3J0lOhZz+s9Ns+5NTq7V6W5TQ1lzXy/BpFeuPrQxgbm6u\\n\",\n       \"65NTvf21fJp1HqTMzwBtgfbOOwU2bNjQ/c0WGJ/SrZGbf9a80BrttwXGKsxnd9u2djmyks9ZDTrP\\n\",\n       \"HWMj0qbVppAR+4Bl1ct5tnOYOIcLcD3BVjv2fDIOeIcGBu8sH/DcPnS2DoG2Uj19YXHIfN5cnw65\\n\",\n       \"Mc95li2ytia0f4c3rBmeYUsqz6IvtNhZPiG2dNnvp/3dedUctQqYd+YOWlpfyDHYAkO7ll+WV9aY\\n\",\n       \"6xwCPmP9Q+Omb0ccMjb653t79+5NrZ2Mk33RtSMBc+GIXGBriq3Ra9euHVgDAfMOTbaWQJvB/DvP\\n\",\n       \"nq1ALV+9H5ovjJv/O/rM1jR4jAw776D5tLS01MkIa5LPttQ64tQ8NS2+8eDZyG5r8bT/FM/C4ur9\\n\",\n       \"HD5Aq2vXZvui55o12u51zovoCPzM34xn+jbAtMAP+oUPe/bsGcg5PITnzDv88W1IhrJIFQqFQqFQ\\n\",\n       \"KKwQVWuvUCgUCoVCYQaq1l6hUCgUCoXCU4xV85E677zzuntZ5yHi/vkzn/lMRPS1efj7IYccMogQ\\n\",\n       \"cq097mozX4as1pprqHEnTHZpahZF9He9jqagXh21tuyv42gzatDR3vfT3Am7Zh31rdrvOLcK2aKp\\n\",\n       \"+8Z48OuCf9xPUyOK2mzwGZoZA/1cddVVg/kBzjBL3+985zsjoueffSkYL3WfGCf3265Vt2fPnkFd\\n\",\n       \"Lvs4tXUaIyKuuOKKiBjWiALwhX6oh0YdJ9fDeuKJJzofCMZJ35lflcdJ+yzfGJ+hhbHOzc0N/AS4\\n\",\n       \"42ddwENkyD4jPDOrzZb5ClA/693vfvfAz8S526jjR+0s+3F5nNQ3o+6fc6TZT+vKK6/sZGtW5A9r\\n\",\n       \"lPls6zZG9H4a9MNe5Dpx7BftXnbppZdO0W2fF+cTY/7POOOMiOj5Bc/t++Jafq6Xtn79+u5vtEW2\\n\",\n       \"7FMJ3cgBewv7qPc22rlOHHzkua1fHH+jrefIvm88k/lnnPap9R7GHMH31lfK/ojQwjjhsWsnEq3o\\n\",\n       \"Om6OPIMvrll3xhlndDzOsoWzRqmd2ebBaj9bdlnTzg3GGPjeJz7xiXjPe94z9TfvF6w5anPCQ9aB\\n\",\n       \"91PkiPcL/UMLc4sf7KZNmzq6aeu9xX1DC+39LncEqWu5usrHrl27Bu9c1jM0OOIPOWZPz1AWqUKh\\n\",\n       \"UCgUCoUVYtUsUmvWrBlkHnX0Qds2otcq9u7d2510M83TGY6JBKG6M+AkzsmTzK9ktH3Zy1421b7N\\n\",\n       \"6eHoAEcsOFLC43W0SRbFheblfDLLy8tdX/CME3eW0woQrYEW6Gg2TujwxRp7G1npquaONvGzrUEx\\n\",\n       \"brS6rAaZs84jA632y3w6AhA5sBXDuUlsHTUt8Jv+0FxvvfXWrmaY+4YW5oa+HYUzxtuWJkeQte1p\\n\",\n       \"Y60UwAfmBt6Rk8lZ8/0MZDvLtzY3NzeoKM98OsLLVk3mz5ZXAM95ptvbEtr+z2swyzkDnGU7q2xg\\n\",\n       
\"6xIRde0eAA1oxI7CdR0v/o9c8Azy7Lm+HWMZqw/qvYi5ceRoFpUM7cwdWj/ts1p77X7hmnoZyHnl\\n\",\n       \"6gnAlllcuERSAAAgAElEQVRoIEKVPEpuPzc3N7Byem3ZWuh3kNeRI4ydtd7Rz/Pz84ObiyzPHnLu\\n\",\n       \"aDyQRdYZyFUr2/DB0ZuuomFa4I9vVbK8g9BIHj7qLbZ1ZRkffGHes70ri85EJh3N7uhOsGfPnvTW\\n\",\n       \"BHjeZ+VEBGWRKhQKhUKhUFghVs0itXv37oEvhevdgdYSFXFAm+ZE7ezArr+FZQYt0KddV4mn7ldW\\n\",\n       \"Dwva1q9f352UnXMF8NmnYLRYa17QwHjxd4I/9ilZu3btQGtBg3SNMGs7PMtZwo3MR6rNyu48X87F\\n\",\n       \"lNU35O+2aGW02EJnGlsa7KthGQLOfWQ+2YJlPz4yhU8mk4HF0ONmvqHXfcMHaIAmW0FAy1+P020N\\n\",\n       \"fACBawqaj1nOs7a9LUTOEwQY31juobG+x57VYiz7cNanrYDOq+baW27PZyxVWEP43FoN7WfnCgVe\\n\",\n       \"u/yd9s55Y+sZsK/Rli1bBpY0W+jQ4llLzlxPn7OyTQP7Q27cuLEbj7/rig0veclLpmggYz3w/LIv\\n\",\n       \"+tYBtNZo3xpk68I0znoXmY/szd6jJ5PJwOqVzWObF6796fckcBZ/W51bvnkunF8rq1vX+iW3/WTv\\n\",\n       \"OmjCosv32hsSrwvotD8r8Dvelk6/X72/tjcl5n3mz2l/z1koi1ShUCgUCoXCCrGqPlKZZuYTpk+L\\n\",\n       \"W7Zs6TRoa6v4ftAnmjR/t4+UI0XwdUA7smWnPXk7+7H9Dawx0B7asrpPnLT5jNZr7WjHjh2DZ8I7\\n\",\n       \"a7PWRNAQGLf9NezP5dN9a9mxJuTveo7sIzUry3br+xExtJK12pStW7agWcNwpXR47AzgwJYftKWj\\n\",\n       \"jz46zZoL+I6tgsA+UtCELHqdtJadzK8CeP7RnFlH1khZN55TPpv2dqzW2s0H1zWjT+Y5qwrvDPdj\\n\",\n       \"Y4aH9omxpdHtLUvMs2lBLpD/22+/fap9a5GyVY+fyGLm22H/Pme8Nu3wpY0gdAZvZ5t3VnjPP89m\\n\",\n       \"nOaD+/cYInoZ8h7Fd+mTWwC+69qrjmJjvPZHAu1eZR/PzCJlntvfD2SVE5AfW7z37NkziE73nupx\\n\",\n       \"ZhUhsooPIItmjRj6V9oqZIuz4UzumRWI57C3s7+07XkWz2a+4YtvHDKrkG80APLlDOl79+5NeWZL\\n\",\n       \"m31lZ6EsUoVCoVAoFAorxKpZpNauXTvw9M/u432KXLdu3WiF64jhSRsNgWfZd8B3x9zfZ9pUG73m\\n\",\n       \"O21rAozLp3nf2wPfdaNhMiZXop+fnx/UbXN0HcgiR+zbAKzt23+r1VBttXBeG2uBmZXHz/D/wcG0\\n\",\n       \"S7QQ123LIgjtj2B/t8w6yk+sRYuLiwNrJ3AElOXGcJ4sR86YluXl5YFFLYs24plYWjN/nMy6Zmsj\\n\",\n       \"mEwmA5653p37ck6btm7hWHueyRjHaIEnmb+IZWuWNdXr3/nroNWWijFaDPuOOc+YLbWZrxlo9wlb\\n\",\n       \"jBy1iCXVUasgq0mZ+dLYx2xpaamjL/P1crSuLY/AtNki7Tlq29On6zu6L++P/gm8p3lvGpNd95FZ\\n\",\n       
\"OVgnjpj2s/3ZkcR8v+WLI2JnWbt8W2S5Ny1ek1iixtad59+WOs+/93JbDc1fZNQ+Zu2ZA/ida95l\\n\",\n       \"a9coi1ShUCgUCoXCClG19gqFQqFQKBRmoGrtFQqFQqFQKDzFWDUfqQsuuGAQWeM6PVmtteXl5S7i\\n\",\n       \"hbva66+/PiL6mlK+q3YklPvmDth5ZLj7veSSSyKir0G2d+/eQf4S10K76KKLpsbD/x0h4dpJ3Nva\\n\",\n       \"Z4Q79Ouuuy4iIs4///w06sI14s4///xRftiPhZpS1FpynS9HWm3btm1Qa4+fzicFLdS3csZvR/Mx\\n\",\n       \"R/DcERV8b35+vqud9K53vWuKTt/DA9cgs38XNLjWGnxBLtrIK8bpGpH2jYM/RM5Rr44aUfbbsV8W\\n\",\n       \"Y21l3f5ljOeaa64ZpYXoGuQGXy/qW1EPzb4yfI+xtrRYlhxtC8+ZI/tU2QfKc+Q17bm9+uqrB7UT\\n\",\n       \"bXF3LTTLrnnOZ2hhHWX5Z9asWTOotQiyqCtooR4mPGZ/8ZxSmw0+2v/tkEMO6eQBWWQNAeSWZzH/\\n\",\n       \"1FqEFvvzOLoJ2WVNtzLuSN9PfOITU3xx3Uev/7aOY8TQ18654qCdfXRpaSmNvqPuI3yBd6aJ8cBH\\n\",\n       \"6r65Lly2js4777yBTxDPgDbXt2PeM38/9lGvC9ei4+/XXnttt7c4Yti57a699tqI6OU88+f1O/rC\\n\",\n       \"Cy+c+jv9t9GvXnOOlAT4KVFr8ayzzppqx/ecK8777liUZPZu8bsrq82YoSxShUKhUCgUCivEqlmk\\n\",\n       \"2kgOTqDOswSch2nXrl2jVcYj+tOqNe7jjjsuIoYnTbQcawvkWbHVqY3uIdqECJgs4zI0uCbWWBbc\\n\",\n       \"lgZ/39aBxcXFjifkgeLzUUcdNdUWPnHibq05Y307sy/zNRbNYsui8/xkmalp7/wpzmwOn+Abc8MY\\n\",\n       \"zMf22XzHtdGAc3c5CiuLGLSmu3///jQLPjKG9moagCOrbHGx5tZGJNra5UzlzAEySwQo7RwR5rUI\\n\",\n       \"7Xx2frW5ublBjpmsTlUmF878DSwnzhE2ljfHmmXmC4rsOO8cc5bVfUT+yU9HVvsxWbS12PPrcTrC\\n\",\n       \"yvXuADQ407MtNxHDdU+uHvYuj9OWKK9pYyx3XJb/x9o/PHzuc5879Xe3935oq4dpXVxcHERu2bLi\\n\",\n       \"vcefs+hF5oIKGM4/Btp3giPezEvm0Tm8svWUraM2Ss10wDvo57vekxg/4/H7I4sKBZn1qP2b5cM5\\n\",\n       \"+jweWws5N/jZvB+goa0nm2UEQP5ZF1nUZoZVO0hF9AxhU3fJCGAz5H333dcxnXQFwMm93vjGN0ZE\\n\",\n       \"v/HfcccdU+1dloFiiwiaC2Kyae/fv787vDgFAWBS2Cig//jjj4+IYep7h7MSTs/ifvaznz3Vfm5u\\n\",\n       \"rjswcmCg+KwFwJuXrxm9edkE7CRy7Vhtqs7m0e3ZKBgDLyF/j5IQ/OR78KN9ebnkCxh7wUUMzb++\\n\",\n       \"uvJLmmczRjbSrVu3Dg4vHh8/aec5Yj34oOjit6AN7WccyKIPRmx8yCIbJ0kNnRaE9vyEDxwY/OKd\\n\",\n       \"n58fJGvNSicBxofsskatvHhTZ90dbJP2S4k15PB39h7mkTV76qmnRkSfoBcwZ8wR/GPdtS8Yv8Sz\\n\",\n       
\"cG3g4sx8D9qy1B2spwceeKD7n4sKM+/8PPnkkyOilymnKHCiQkoKZUmUmcu21BKy4pQJVhgpEcM8\\n\",\n       \"/uAHP5hqbyWBOeLg5TlijHv37u3GlxUtdlkq9n2XdfE4ec8gm8y/193i4uJAkc5S8diVwwpDdoh1\\n\",\n       \"CL+TzEb0cs74eG8ee+yxo306zQNyzfizQ7KvFa2At33wN4wc8MMlgpw+Bn7ATx9ePQc+RLbwnmWX\\n\",\n       \"lzElbQx1tVcoFAqFQqGwQqyaRWphYWFQbsFlNwDaH1rCSSed1GkA1qT4OyVQjj766IjotZx77rln\\n\",\n       \"qj3aLadbTqRoO7ZkWAtsv+vrDtqiMXHy5rtomqYF7Q9+oD2OFVxFU0AD5ZTvNPs2YZqnPsU7QSUW\\n\",\n       \"jLGCu9awZ5lFram3pS0ihpqXr2fNR481op+3LPU/4HNbbDWi57UtL3aMR94OP/zwwdWLtV/mCM3L\\n\",\n       \"1hEnebTzdabZz83NdbKXXTPDcxfxzq6NXJ4Iywvt3H5paWlw3Z4lQfW1CesDWiw3WVkX2rcWLBc+\\n\",\n       \"dVFy98042Cfe9KY3Tf3/7rvvjjH4yuP++++PiOk17ethW9ZswYYf7IfWkrOSKFhgkJdnPvOZg32L\\n\",\n       \"/2ENRw4c2ALguUtHuT8AX7F0bNmyZXClDeAL48HSnCW09fdf+cpXTo3Fe3prlXWyZ9M963rV1g5o\\n\",\n       \"QeZ8A8BYwPz8/GAdQ1OWkNlO99m1qpMr2xrd9k8b76G+6gN2Q7F82Gps/tqy09LO3sP8I4u8k3xt\\n\",\n       \"6PeCE9b6NsVBL/ycn59P9xb64D2XlfzJUBapQqFQKBQKhRVi1SxSi4uLg/B4TrmZEyoa/datWzst\\n\",\n       \"LNMY+b99H3yq5xSMtoAlilOzrQatL4qdAG2B8P0z2h0nbp/yrTVAg0Ntwc6dOwdpGrKCjS6sjCY2\\n\",\n       \"ZmFifO3/bclpi2A6JYIdfLMSMTyDcaPNeJw4ziMH8BHtr/WHssO2NSLz3A6JmY8RQNOnPRrM9u3b\\n\",\n       \"B5Yx3+3bUdWFRF3M02U87PfV+gPZgdnzT5+2VECzrQFYFly+xU7nYHFxsXtG5kwKoBH5tpO6+WLf\\n\",\n       \"D3zMsOy1WqPLkthHw7KIxc19UUjXBXTZL6AZn0r7+bV9ISOmwQ7dfLYFAlnLtGOe2Zb98fzb6uuS\\n\",\n       \"KVmZFcZpfx7DKRh+/OMfD3wngZ8JT3mGZdS+VsgmBaPtU+Mxtr/bItXSG9GvNWiwtcPrnrGxjsZK\\n\",\n       \"SrmodJayJiuYzLxnKQjY4+CfgxUieqsPbZy2wWvVfIKnyJjbwz/7ZyHr7Z5uHye+ax9Y4LI2LomT\\n\",\n       \"WVNd/uyQQw4Z9M3/vM9xfsiKwBtlkSoUCoVCoVBYIapETKFQKBQKhcIMVImYQqFQKBQKhacYq+Yj\\n\",\n       \"9e53v3tQZoH7VZfCoFxFe4fusilXXHFF12/EMHrHfhaknyd1vvPwOAkg6efPPPPMiDhwb0ufPMul\\n\",\n       \"CignwDN9v4w18CMf+UhERJxzzjkR0d/9Qjt3yTynLSnh/ChO3kn5GVLhc79M39DG3ymFQXvutuEz\\n\",\n       \"cwAt27ZtG6TZd3kN7qEpbcB82mfI47zhhhsiok/5D1/wHcIf7IknnujKLFAegr4crWUeulyBozj4\\n\",\n       
\"PqUTaM+Y2nb8Ds/PPffcqTa+b3eZFeaf9i4VAVw6ZefOnYPcLMwTJWIo+YG/hv20mKvPfe5zEdGX\\n\",\n       \"/MAfhcg6/NPwCfnoRz8aEQfWHbyijRPnQQvy4vUPkEW3d84054y6/vrru7Ip/M/+eqxvyrggi14P\\n\",\n       \"9rtgjij7BCw3+/bt62SL+bevkMsXMU72IuB1RHvKm5x++ukRMfTzmJub635nn6OEBz4+jJP8Qqzv\\n\",\n       \"trRJxDD3EfsKc8pYzzjjjKl+22hXxuG9xYkp7RvLnk77LMIO2qC9XXfsuY4Mp1wNa860OvINnjP/\\n\",\n       \"+FCxB+Hfg+zeeOONEXFgHdkHGDDuD3/4wxHRyyJwzq62FFJEP0dt3qyInvfIxXXXXdfxEL9deE3u\\n\",\n       \"MXyJP//5z0fEcI0CeE3f8JH9gnUDH+Djxo0bu9JGvBfpI4uwZc3xzvX68ZqFL37vtu+ltrRZRP++\\n\",\n       \"wE/L7yDmuUrEFAqFQqFQKDxNWNXM5o5SsgUD2GK1YcOGQdZn4HxBttSMZWSOGGpFttQYS0tLacbm\\n\",\n       \"tk07niwqCTgrrMvdmJbl5eWB5cxRfMBlRmxZMhxR4/wgbWSNoylcZsV8gh/0kWXsBln2dTSx9u9o\\n\",\n       \"Zc6TBVwKwZYneO78U26PRaaNCrRsOTs6spXxBTifVBYFB807duzoeORM/cByArBQWNv3Z2d+H8vH\\n\",\n       \"ZasPNGSyiOWKvpHJLDrJkYNtNA7wGjKyYubAVpJsjpwR3xUDIvr5gWfeD7x3wSdbRW3BAuYXfN+3\\n\",\n       \"b98gas9ynln3gOfSz/a+MVbFwGVjQJap2xYm004/rHGX0HH7J1veI2LI46zYOTx3lm34wZ7Ujgke\\n\",\n       \"w/Msa3ZWCJtx20rqygnmZzsnzCP0Z/nzTEuGbI6c02psH6Ut69iWN/PFORB5RrYX+RaqvQGyLMJT\\n\",\n       \"fjrC0ntXhrJIFQqFQqFQKKwQq5pHyrlMQJbxl9P0pk2bUu3F96L2L3Df9suyBmpNq81DYeuXc1S4\\n\",\n       \"EKb/n/nMWJPPata19c3ch8fhHD8gK4hpWp1Ft6WdU76tP7bAACxQzvSbzalpcJ6ltn2WkdeFXYHr\\n\",\n       \"ODJnY74eLSxH69atG2gvHic/Le/Ac2FLlDW11qfA2lyWeZpnQIt9A0Fm0QVjVkbnjslkkb7wL+FZ\\n\",\n       \"tgYDa97OPt3CxVUzPyyPA75hocsKC7fjbWkY84NzcV37ujhTvS1Sbp/5ko1Z5b2GzBdbUixb/J2f\\n\",\n       \"zk2UFdBu62xm8wnsv0Y7r1HvD/TrtQpa/tmabR7al9Z9ex05t5vn0PtrW5PVz8jqf7rOITR6jY7d\\n\",\n       \"ULQ/27G6CLl9RrOqCUZWIYLvZ8Wh2znNLPKuNWnaLYuuFwts4fJZoIWrMdjS+GRRFqlCoVAoFAqF\\n\",\n       \"FWLVLFKtRmPLRVYPjZP2E088MajHA+xXAjLLgrU+a8kHw1il77HP9ttwBAngRG5tN4veWlhY6E76\\n\",\n       \"9GXt17TAY2d+zvKJQYsjJdoTu+mC7kyrt7bH953ZGbgdGogjCds2zsjcWjPHkPlG2MoEjfaNGcsm\\n\",\n       \"bWtnlnXbtFsjzWSxfbb9sTK6banL6iLiY2bNLPM9Wrt27aBWWGapHKsI38LyklkRnBE9opcJW3Ey\\n\",\n       
\"TRnYCphFkjkKNPOha+mjrWvtuW/Lg/3aMgs2aGnOasS5ooF9J92XrUaZ9d1+oAsLC4M+gK2d9pmy\\n\",\n       \"vNgf1n44RtvO2cJNi60j0OLoM0CUo3l/MFm3FQiYFvsAeY6ym4lZ/q4R/XvNPoCOygS25NmifTAr\\n\",\n       \"YEtzNqdtX943sz3G783s3WX5an31Mn89W2Zpl1mkjbJIFQqFQqFQKKwQqxq1Z23XJ3FgL/82EsIa\\n\",\n       \"oSvLO4IsixThxIxW4FOt+281rlmndEf0ZFWrfdJ2zSjTsry8PNDWZ0WqOKIh8wXIrErmr39vacj8\\n\",\n       \"0lwpfJYfiy10B7PUMD7aWFuzpu4oRtOcaeq2viwtLaWWF+h0lEnmx2Z/NFsFPNbNmzfP5Ll9Wxyt\\n\",\n       \"l/ml8T3XQxyruG6raBZtyWf7iGT5dmwddLReK/PZGsvWP33CD9plvmHeFw7mY8nvtpxkfknQ5tpq\\n\",\n       \"mUXKloqDVao3X8CsKEfLTRbl51uF9v/eH7JI2cxH0pbabD8BLQ2ez8xqY3+bzPLmmxBbycyXubm5\\n\",\n       \"Qc1U5z8D9lvye8P7nX3LHCHX8gWrnn0ksxsMz9GsSGPftnj9tXzJouzG6I4Y+kTNquXq90krf5kV\\n\",\n       \"MMtpVrX2CoVCoVAoFJ5mVK29QqFQKBQKhRmoWnuFQqFQKBQKTzFWzUfqvPPOG+SbcZ0r6ttQx4d7\\n\",\n       \"y8XFxe6uPqvLB7gD5Y64rREX0dcUIj+Ic9bwmXpYbW0m+w+5ptTZZ58dEb3/Dc+AFvp2jSDaZ/f2\\n\",\n       \"1PI7//zzB//DwuhaWIwzy+wMn6idRG02RwW2EXIREZdddllX3zCLjOEztFx88cUR0fsbOCrniCOO\\n\",\n       \"iIi+jpv5CN/h38LCQsdD6s/ZJ8IZmi+99NKI6OtVOSLEUV70Dy32c1laWur8h2hLHSfGad83/HKo\\n\",\n       \"h0bfzr/jGlTUw2KOtm/fPshrw+ePfexjEdHLFshy9bgGoX2vkDfXT7z44osHEa/OzUJNScbZ+ni1\\n\",\n       \"7aGNWmvUN8tq7PG9yy67rFufzvDO/LKmWEPIrv1QXEeTWpvUcbNctf5vzD99OzIIeJ+jb/sU8Zn2\\n\",\n       \"7HVnnXVWRAwj8fbs2dP9zW3x14EmZ+KGL661Ce/plzlz3Tf4sXbt2oHcuy00mIesXfZoZLHd/yOG\\n\",\n       \"PjLwsd0vkAPm3zzk3cL4aMez4Bdr9IMf/OBUO/sQMRbav/e97+14xfw40z1t3/GOd0z15dxdfJ+6\\n\",\n       \"n/CRPFWmmX6uuuqqrtYiyGrosf9bFu0zxE/4SI1L+0i1/lvUK2Q+Dz300Kk2REQiL55/Rzu7Lh51\\n\",\n       \"Qv0+an0oXWuR+SRK2f6byA97V4aySBUKhUKhUCisEKtmkdq7d+8gY63zKIExTdTReCCrnQccheNs\\n\",\n       \"uY4YyrJ1t9/NMhUzPvp0dEJW34p+OBU/9NBDETGMGJlMJoNcGo6AadtGDCMAMyuSa3I5oqatj5dF\\n\",\n       \"lQDPAc/EEoXWaz65f88FcjEWtWUN1HWd3N7Ra86ID+iP/7dWhiz/FePMtFfT4lp1tqaCdk7R5qDv\\n\",\n       \"sMMOm2rLOOCDLZdZXpzMgmm+7N27t5sPR2NlWYJdv4/vZ7mrGBvtof3II48c9J1FujoizBFy8N77\\n\",\n       
\"hL/vXEZjdb8c4ej9wjzPaoQ5ehPw/6OOOioi+v3i3nvvHdTzdB6gLALKtDMnyKCjmoErCBwsSstR\\n\",\n       \"zrYwm/fO3cUaziJI27Vrnhqu4+e9KKsE8aMf/Wjqs3OEgbYOJn1mtVady8r16rJ8fdCOdZF27bpw\\n\",\n       \"Xixb9by3zMp5lu3RvKPo15adlhbm1XnEkDXgd5XfWVmkutfdunXr0goetlg/2fxRoCxShUKhUCgU\\n\",\n       \"CivEqlmk1q9fP7BIoQ1YIxmrhp7VcXOeI7RWa8kAfxy0OvrjZO3TLpaYubm5QRX7TDNyllT7NAD7\\n\",\n       \"I23dujUiek3DfFm/fv3gbjfL+uuq39bUshxYzoQ9ptk5y7OzILtvzzcaiPMQgaw+Ihpeq2XYz8b+\\n\",\n       \"CKYfPrX+Vu1P59dyhnTkYffu3WnOKVsSs4zM7pNxwZ8sZ9jy8nI3TvtTAGve9jczTeYDc+CcXmD/\\n\",\n       \"/v2djDhbsi1MPNsyZf8rj5P/28LXartZrSyP330zfvsAes3hf8IzDz/88IjoZb+1Nlju4cusmmL2\\n\",\n       \"/ck0b1t0kJv169cPrB7O54N/SpZHzHPiNZplCIeWNtu+27KvwWusp5Yx4Lp/fM9+SoDvr1mzZuCf\\n\",\n       \"luXRynKfMb/AVnSA5WUsp5H97rJcXNlNRTueFq54Ae28L9p1YWux17H3C+fsY3z0bfmy/5qrTrRj\\n\",\n       \"QvbgLfIATb4VYl+0tQ9aLF+ZBXz9+vXpzQv8sdW4MpsXCoVCoVAoPM1YNYvU/v37B/eRWfVv+zns\\n\",\n       \"378/zbTKydLRepygfVrlrpsTcxbl0T6b9tby3Nan2cwq4r4fffTRKVqAad+7d+8g03KWkT3L0Jq1\\n\",\n       \"z3ynxiqLW3O2lSvzkbLPg2tNgawe3NjY3DcaUebbY18g+2OZFmvkbbb9zJ/C2lpWp83yg4UB/hzM\\n\",\n       \"d8QZer2Gsoz+2fyP1VRsafdczM/PpxnZ3daWF2C/DeDs8/DBUU3tOLy3sP6zvm01pr3lBetzFhXY\\n\",\n       \"ZsK2hcH+SFmtxazKQLYWsUqDhYWFVLYcAejahO7bGfwzKxNoo6mdFR1Ar32caO9s4rZAOoJ01n7a\\n\",\n       \"jmdWHUfLrOcCWrwOvP7AM57xjEEkoGkCrYV5bBymlX55Nj+Zq5aPzsTvLOheo/CWn8xZti9aXrJq\\n\",\n       \"Hm1ftv6ytuwj5Vqsma8c8K1Uu+7cFtiHMFtzGcoiVSgUCoVCobBCrJpFaqw211gkFG0jpk+92YnR\\n\",\n       \"J2JH5WR3+z6Bc4J3Zeq2f2ucWSRDVt/O4K6Y9vbbcf/79+8faDFZLSn7FWFJyWpE0c5jsL9C2weY\\n\",\n       \"VQsJWCviHt5WEGvDrho/VkuJvh3x8l/VAj02V2RvLZTW6lwjDFrsAwVcWxHNDTnI+LJ58+aBT1Pm\\n\",\n       \"82CfkSwiyHNjvz5rk4cccsjgGchQVmvOspdZDegPvjkyteWjfRtMv2XR1d5t6c72IvtSjlWib3OM\\n\",\n       \"tXRm68GatOfEVkNHuWJF27Vr10w5d/45gz7xT+HZY36J7ec2UmrWus+iMTPreFaz0uuitYzb2mWL\\n\",\n       \"hNcDfPI8A9o58hbZtH/P+vXrB7XzxuqVtp+zyLrMvxM/X+AcaGN08Tmr++h3rq2H2Z7vPWwsKhTe\\n\",\n       
\"OYIYS5Qt71kUq9+PwPsJaG9wgK1ajjT2ODOURapQKBQKhUJhhahae4VCoVAoFAozULX2CoVCoVAo\\n\",\n       \"FJ5irJqP1LnnnjvIveFcF9TxoX4W95aHH354d79KW+rbXXLJJRHR55jwnS13otRao29HPJApmpwX\\n\",\n       \"V155ZUREnHbaaRFx4G75hBNOmHoW96+uKQQcEcgzqYdGjSDX/eMOmM/w5UMf+lAXdQjd+Gw885nP\\n\",\n       \"jIi+Xhl1mRgXPCcvDnfm0PL2t789IoY5Ph555JGp53zxi1/s6M7yRvGZ+kau++YID+7tqZ1Fe9/D\\n\",\n       \"t/0zn8wPssWzXdfvT//0TyMi4vTTT4+Insfw3r4V119/fUT0c+rorpZHyKLrPjqiDN4jW65Bxhjs\\n\",\n       \"vwdfzj///Ig44EuFL4t9Glw7jT5ZP/zE5wnZfdvb3hYRQzmhf/hy3XXXRcSBuoL4dDkSiPlk/qGF\\n\",\n       \"eYRWfx85P/PMM6f+zpqEBqJcb7zxxsGacy6urKYgPIePrGnmirpf8JwII/teLi4udnU5qfkG71gf\\n\",\n       \"rFm+i2zRN2vU9S2hzfUT7b+2b9++Ad3IIuscv5q77757ihbaU9/Q+cmcbfvyyy+for39P/NC38g5\\n\",\n       \"dNM3MmnfF+obIi/OkUf/rrdKjcu5ubmOZ54v6KbuJ9UjkDF4jqzxXmG/4O+0Y85Y48jAaaedNvAr\\n\",\n       \"Y3ysvS984QsREYN6ePYJZZzeL+iHvEw8B/5u27Yt3vrWt07R7bqv7I833HBDRPQ1CB3FCB8ZE+8X\\n\",\n       \"1pH3Kt4fGzdu7OQWuhnX0UcfHRH9HLhv6viZf46cZE0z/+xVbS5B79G8/5Fv+GIfWOYzw6odpJaW\\n\",\n       \"lrqXMsLIguInsBPq7t27u4m3A56dhGE6G6PLuAA7OrKgsjQCrXMtzHcBUCbDyQ95lpO6OTmkFxxC\\n\",\n       \"OUaPk7I5/J3xOFTavAbHHnvs1P/N1/awZOdfF93Mwrb9osic8RF+Dto+DLZ8YX7tLM/fs2RtzImd\\n\",\n       \"Te1UDdhQeCnOz893B2uP047LLgAKXPSXeffLC7Q0Ogw/cwZlzfEzc8I+5phjpv7uUhGW9clkMggK\\n\",\n       \"yNJVOITaSUJ9UGZ92SEYPtmRNmKY3NOOzcDKG3LAC8Y8dyi2Q/zb9eR170NXVgrDJYGyBK5Z8sC1\\n\",\n       \"a9cO6LYzNPOXlR+ygzPfZ7yWXdZBGzDjg4D7dtJf+OU15wScVjAtZ6zN1tHbBW5NSxZCn6UkYT9x\\n\",\n       \"gkvvL4ceeuhACWG+2jJbLY0uFZY5PDvgwf22a5r93HLBMzK+OBAsKxnjPastyxIxvUZZY/TBgdhp\\n\",\n       \"MYADaVyOyPIFzT4MrV27NnU2d+odv8Nmoa72CoVCoVAoFFaIVbNIRQyvALKixZQQaMs4ZMne0LCc\\n\",\n       \"8j1LOZCVHchKrbzwhS/snnviiSdGRG+S9OnV1hG01bEkZRER99xzzxRNpn3MCgAPXWQzC62nL5/u\\n\",\n       \"XXVJDFoAACAASURBVOTWpTZc3Lblp0NcbZnKCkv7OjVLSMicuiSArxsi+ivNLMTe5Vp8lYO1i7my\\n\",\n       \"FfDhhx+OiF7rRc42btw4KJtgLQ/NlGd5juAb/4dWeO91gTa1du3agaXAllfoRPsDTlkAuPpxORM+\\n\",\n       
\"U1KpBTKTWSCBr9s9HgNaLFes1Tb829coY6lTWngembNsXTB+h+wzt23/8Ah+QBvj5JoFQCsyl1nb\\n\",\n       \"gUvwtPKXpW2Ah1wD2WoOXJaD/9uaAGx93rhxYxoqD22tpaCF5QVafQvh6xjQpmyxZS0r45UlHrVM\\n\",\n       \"MqfQ7ELEtjJu2bJlUJQZuG+seu2eEtHLltcocpJZWVvrKPui053YegOcgsEWt2y/sLV5rBwasmWr\\n\",\n       \"HuvBe5dvRXyz4zlFVp3iZ82aNQNZdOk0W3ezRNZGWaQKhUKhUCgUVohVs0ht3LhxUM6CkzRaIkBD\\n\",\n       \"54S+sLCQFn7E94OTJKf2tthwi8xygXZg/wto27t3b/cdrBTWDPi7HZ8Zt7WXrBSISwCAxx9/vKMX\\n\",\n       \"bSe7w+bkfdxxx0VExA9/+MOI6Hlufy0+owXYatjOkUs7uFRMpr167rKSD06W5oLUrWaHRYl557vM\\n\",\n       \"o3nupHaMK6PFpYfo9/DDDx9oUqYXDQo+efzW/v1sa41tQjuewU/kF1iz4hlOvAjggx3e4aetI61P\\n\",\n       \"nh1UrYnbf9HaoNvbggef0Ipb2YUuW+jg/ayEtU606DVnjdV+Xe0c4fPh8hPQ4mS/LpTdJrdsaQPM\\n\",\n       \"jf0h9+/fP/gb1m7otGXZvpMOoIHHWaJKj/GJJ54YOOxncCCQZeuuu+6aonWsVFaLNuloVjDdnzP/\\n\",\n       \"TrfHooulEv6wH5iWHTt2DIprZ+sCWfJ+1/p8jY2bPYg16iLfEf1+b7p5lvd/rx/TZCugE1lDG/tK\\n\",\n       \"+95F1ux/x3cefPDBqb691lxA2/titnfPz88PeOi+nER3lux2ND6pVoVCoVAoFAqFAVbNItWeUDlR\\n\",\n       \"cwc+q/zAZDLpTqe2MHGfbP+Z7G4XrdDagX0Kxujnu/bd8bgcnQRmlZ/ItGIwPz8/KKLLd/0saEVb\\n\",\n       \"wWJjjaMdX0sTn235aseBZmDfN2uv9sugL+bXViM0b7dzeYaWPmSJ/6Fx2Y/JPiPQhBaYzb+tIkce\\n\",\n       \"eeRAbl0ayOV1rO04gsbWlUwT37Bhw8Camfmd2FrksivAUWnWcu1r1tJm/wiP0xYIl3Exzxkna9tl\\n\",\n       \"ndr2thghI44iAviO8D1HSmX+XfhzOI1E2z/0mRZ4ajm3BROe07ct1QArA9975JFHBv6UAD5gLR+L\\n\",\n       \"wm0/Q2tWcgY4bcgznvGM1IcFuPwM1mS3px/2UxcKtjWFtbxv375B0XLPP+OynGQ+dfZj5FmZD+bu\\n\",\n       \"3bu7Puz7ajlHpkzLWCmkFt6jsfi07yOn/7B/kmmxrLqUkt91meXKJYkihkXtXQrH43ShaL9vvKeP\\n\",\n       \"FTOnfWYdd/murGh5hrJIFQqFQqFQKKwQVSKmUCgUCoVCYQaqREyhUCgUCoXCU4xV85G66KKLBj4E\\n\",\n       \"9iX5+Mc/HhF9ivj2jtRRe5QToBSC71/9mRTxpML3Hbr9fUhXTymEvXv3dvfM7ptU+JQ2sM+Qc1Vd\\n\",\n       \"ddVVETEsKcMYuZ+GNtqfddZZA58wR0JSIoS+M38baKGMx7nnnhsRQ38k+xRcfvnlXd/OTG6/qquv\\n\",\n       \"vnqKL/CDiBJoghZKPjCnzqeCP8JkMul44vIDziPk0inmOXC0Ylb2p40QQW6RRUphOMO5/dighVI7\\n\",\n       
\"9lvx3EIL7ZeWlgZZnl02g7bILFF90Abt0OLSSbTDNwTfk3aO7OtgvwrKLLhEkOcIIIvwnH4cWYXc\\n\",\n       \"ffrTn+7mH9g/z2WZaO8IU/vKILvIYrZPrF27ttu3sjIb9umgtAXrwj5CzhDu0kljY4UuZJH1jD8V\\n\",\n       \"ewv+NMiF5dw+Z/DH5bA++MEPTtG4uLg4kFvKz1A6xzmb7CNHe2QROLrPJaXg+2QyGUSCel343cL/\\n\",\n       \"7SNnvuBLwx6EnxZ+XND+oQ99qPPxYbzOgci7CL7AL/udMV76hnZnTHdOrE9+8pNd2RRHHcOXtm3L\\n\",\n       \"Q/vOOnobWlzGx/m71qxZ05Xloa19gpFJy+4ZZ5wx9Wz6zvY6yvi4okS7R0O3S9vYZ4q5gpYMP9VB\\n\",\n       \"ajKZ3BURP46IxYjYv7y8/NrJZHJ4RPyfEXFSRNwVEb+5vLw87vlYKBQKhUKh8N8YP61Fajkifn55\\n\",\n       \"eblNl3xhRHxpeXn5islk8v6ffL7QX9yzZ88gC3OW08hRaw8++GB32rRlhc9ozuSkOPnkkyOij9IB\\n\",\n       \"jk4jc60LooK2iKszrjpiw7WEnFvDuXscpdFmbG77A1u3bu1Ozvfdd9/Ud7J6hWhMzmRMxIyf5UgJ\\n\",\n       \"5qgda5apHTjCx9EljPdZz3pWRAyjLZgTLFf/+q//GhERz372syMi4sUvfnHX1lYL11xzNJu1OH7y\\n\",\n       \"PecoAfy/nUNbP3hWazmL6OteZTyHn0RWMceWXWjbvHlz14ZoQ88F833LLbdERM+Pl770pRExzGkE\\n\",\n       \"DXfccUdE9Nn7TznllIgYyua6deu6eXUxcucoamtmRvTz6jxahiNvWU+t1uxaWdCSRQQ5CzJ56OjT\\n\",\n       \"2cetBbO2xyJrbUkyvVm9Mv5OMVcsWM6v5+jXNk+V1xzjQKZ+8IMfRERfqYFnAVueiU7Lcv0xNiwd\\n\",\n       \"CwsLHU+dcwirGP93niCvf2eC59nw3nsdNN53333dfscekkXtIVOME/45Hxv8IC8XNFBn0/3v3Llz\\n\",\n       \"UBgY+ixbfHbU2QMPPBARwyoL3tPhM+utXdO0/da3vhUR/fvxda97XUQM15ytOV6zniPXJmQsY5HY\\n\",\n       \"fl/cdtttU8901QTndHrRi140RQtrFjjqEdnev3//4GzhiFqvY1uPMzwVPlJ2vvrViLjxJ7/fGBH/\\n\",\n       \"61PwjEKhUCgUCoX/z+GpsEj93WQyWYyIa5eXl6+PiKOXl5cf+sn/H4qIo7Mvc2olcy2fbWXiVMjP\\n\",\n       \"DRs2DHyBAFo+J8vTTjstIvpTq61Gvpf/5je/GRHDjNiAz+vXr++0OmiwtcM5N17ykpdMfTYtfP/I\\n\",\n       \"I4+MiP4EDu1jNYVuvfXWqe+gtWRVrp0N2TQA57Li9O98VBFD68esmnOMA83JleWdL+fb3/52RETc\\n\",\n       \"f//9ERFx4403TrX/27/9266t86SggcNzW3UYD3yjvesjAucLav2AbNVx1uAXvOAFEdHzPMsL5Pxa\\n\",\n       \"0Oz2bRZeLEbWsADr4nnPe15ERLzhDW+IiIjvfve7ETFcRzzrpJNOioiID3zgAxHRWz75XksLWify\\n\",\n       \"ah9CgBaPjPIsaldmObCYi3vvvTci+jXb1sNz/jgsCvDHfOH/jOtlL3tZRPQWN88ptGFFg2bn22oB\\n\",\n       
\"3exzyLvb2mcI66GtQ4B+4CM0LS8vD2oEuuLD7/3e70VExM033xwREd///vdHx4msss9keadox9o/\\n\",\n       \"5JBDOtmxnw1tnOkbmbKljv2C/zNe1qgt+3z/la98Zbd2kDlbgXk29DNHtlAD9gf6++Vf/uWI6Hlv\\n\",\n       \"S93u3bsHuffgi/dNV8iwVdR7EeNn72ddjFUrYK/Fiv/rv/7rERHxla98JSIi7r777gHdEcM8abbg\\n\",\n       \"AefM+853vhMR/Vy1a5q/sSZZQ1nOPsbBjcV//ud/RkS/dm3xhDb4c/vtt0fEgTVqqx5tXTsTerP6\\n\",\n       \"n8ZPe5D62eXl5Qcmk8kzI+JLk8lkanddXl5erlQHhUKhUCgU/mfFT3WQWl5efuAnPx+eTCZ/GRGv\\n\",\n       \"jYiHJpPJMcvLyw9OJpNjI+KHY9/9l3/5l+40uGXLljjmmGO6U621KVfPPuKII9K7y2OOOSYi+tMo\\n\",\n       \"mgOnX9fxcfSS7699X4vWvX379q6t620BNAX6oM+vf/3rETG0AljbdX0ka16PPfZYp6XwE60my+CN\\n\",\n       \"ZoL2wjOJiAP294H3Y7SghTjCh76tvXL6Z67gD1qRLW8vf/nLIyLiLW95S0T0mvq1114bEdPaEX37\\n\",\n       \"J+PH3wAwDuaOOcMiYcsefLalYt++fQPNGx6+5jWviYiIO++8MyJ6WWT8AO0HXsM3/JRskWrr3qFp\\n\",\n       \"2YcHMD604X/7t3+LiF5zxPcJUJMRfmB9xfrnzNl79+7ttNYsMzPweqCvrHYc/HANLeaulV3WMdZL\\n\",\n       \"1gE02drFPvHqV796arzsGzfddNNUe1ueGBtjH8vizfign2fYL4W1ydw5A7w1b/Yi9jDWwSmnnNLt\\n\",\n       \"AwB64TnWnS9/+csR0Wv7AMsNtPNs/E0ySx1je/zxxzs5Nxgn/LCvlK3GjJO5fdWrXhUR/frC4gDa\\n\",\n       \"CFPkwJFzANlybVHamRb+jm8R/PnGN74REb2vFDj66KPTzOT2v0Ie4GVbNSFieHuAbLNv2g+w3cOR\\n\",\n       \"c+bvr//6ryMi4qtf/WpE9PuCgdzwfcbn/QU+sD74Pz5nfqdHDK36fNd+TFj5eG/yf+8jAOurfU6P\\n\",\n       \"PPLItF7hWET1Pffc08nPLKz4IDWZTDZGxPzy8vKOyWSyKSJ+MSL+ICL+r4j43yPi4z/5+cWx77/6\\n\",\n       \"1a/uJsfOYoVCoVAoFAqrhRNPPLG7CkXxzPDTWKSOjoi//MkJ7pCI+MLy8vL/PZlM/j0i/mwymbwj\\n\",\n       \"fpL+YOzLW7Zs6U7qnCzRSK3toiXw/x07dnSnVDRnwAmTn2hFjiZo6YjoT8XPf/7zp56Z1RSbTCbd\\n\",\n       \"6ZwDoaOTsPo40gVLhCOfAFoA0V2u0QQWFha6NoBTvTUjtBf4AN/o2xYs57xijI6KjOhP8WgYWU0o\\n\",\n       \"9834GRc0mC9oBd/73vciotfk0KKOOuqo+NKXvhQRvYygEdk6YmsHWpw1VqJyDNfLa+t9Wdu19Yrx\\n\",\n       \"of27jhP/d/4d5/IC7RzRxrlYTDfaK+sCWkw7Fhl4T6Qkc4tfA1heXu7kGgsRMuL6Vq73Bu+xFtvy\\n\",\n       \"iuwxRteWayOCHMnE3pIpba5nB39oZ4u0rYKseZ7bat7QxV6CXEO3/Wmg0dGPzJ33F/5uK/z9998/\\n\",\n       
\"mH/2OeaC8f3Mz/zMVF+mhf2Sn46KBYwfK/ohhxwyiNxy39DEnst4bcnC2oplhrnCEmVrGvxtoxfh\\n\",\n       \"va3GtKVvorv5u99F0ML6YLzPec5zpsYGWnlj/ph3W2kcBQ7YF7wuWLOseaxnY7UcHSHHuE499dSI\\n\",\n       \"GO4t7OWOZkN+HM3O/xk/fISG1oJp32G/Y22ph+fwh8NNex5o4XyDbX5C7/8gy2X3ZCu/rPggtby8\\n\",\n       \"fGdE/MzI3x+NiF9Yab+FQqFQKBQK/11QtfYKhUKhUCgUZqBq7RUKhUKhUCg8xVi1Wntnn3324O7X\\n\",\n       \"vjXUt6JOEHfLGzduHNTEohbSRRddFBHDzOS0516emkLUNyJiwlnWoZE6PmeddVZETGcVx2+C+2Da\\n\",\n       \"Un+KPriHxX/B9aqoKUV/zirO/fOVV14ZEQdqbWV1ijxO+sY3Cr4RnQN/aE9NMWg2X7jvv/7667u6\\n\",\n       \"TFg36ds10aiHxxzZhwSa6YdxMv+urdVm0/U4HeGFv4br1b3tbW+LiP6+nTmlnWszUifK9dP27NnT\\n\",\n       \"/Q7d1HFyHTxAH1l9Q/t5MW74SK21Qw89dFAzi3llDb3//e+PiN43wrl4PP/UlHTeoDZ7dkRfs+o9\\n\",\n       \"73nPwOeL77he5Zlnnjk1fkevmefUCQSuRoBMXn311R3drm+J7CBb8Jz2jhTCf4VnffSjH42IXr4c\\n\",\n       \"GdVa9uE5e4t9/7I6fsiW/fBcAYC6X/ARtNFerlfKmsNnx/LrenXUIITH9jHk76wj6r6xR69bt67r\\n\",\n       \"E14xn7R1dQrzkjmCj/Z/s/xcdtllEXGgvl3EAd8ZvkPbrAYhfjjOos3cuTan66B6bcPHiy66qPtf\\n\",\n       \"67vVgnXBu8W54Jzxnr7ho/NL0Q7foW3btnXzSV/s545WZ47YW7zuac9+41qe7OGOQI/oeci7xXVw\\n\",\n       \"vVZZF7xf4Af/93vG+67fowsLCx1v6Bsees3Rjn0BOc9QFqlCoVAoFAqFFWLVLFKTyaQ71XL645Ts\\n\",\n       \"bKL8n9Pinj17BtYLgIZAtBEnYtpnEWX8nbwXRG05molT7lFHHTWI+DNcn4jvOpIMMH5oRvPKImUm\\n\",\n       \"k0lHN5om2pm1FGhxZB0/M2uC26OZtLQwDveBxuE58rOxGqKx20LRRl1E9FpCVpMtoucZPEd2zBfG\\n\",\n       \"yd/ps83Q3II5Ycz0v3v37sH80JYoJMuYI0h4lrPIO8s8wFJx7LHHdnRnqUSIZIF3yBh9mpdZBCZ/\\n\",\n       \"dyTW3NxcZ+3if0S8mS+ed75HjhvTAp+yNdzOUVZlIIvWYTzwmshB5tfygiwi61gy+F47R4wbyyvP\\n\",\n       \"wtplmpwFmv+7/qMBTccff3xEHJjbjOeuc+loPMO10tps+i0YG1aQHTt2dOvZ0WbQwE+itIjizCLr\\n\",\n       \"HAXo/EMAPm7fvr1bx8iiZcc1OZ37zHua30XIDVF/Y7U5PW/0nfEQsGYd3QngE892BZCWdujlO7ZI\\n\",\n       \"Z9HMttBlmc15Fv1AA/xq5zTLcZfVN6Uv18+kGoH3aD5DU/v+8fzbSshP9uhZEeigLFKFQqFQKBQK\\n\",\n       \"K8SqWaTm5ua6k6jzSPkO2Xkn9uzZ050yne2XkzPWLnIOcdr1SZoTpzUtcnJktejWrFnT5S9Bk/LJ\\n\",\n       
\"2Ll90ALR1KyBME6f6skJNWbBct6sNovrGOjD1eCtHcEvtCJnWW5pyTKbW4sBmSWGv1uDBfAHayPy\\n\",\n       \"4twtEf18Ih/QndXlQtuB12g7WZZtvocv0sLCQqqlOW8QspVZTewLMqsC+fbt2+O5z31uRPS8sWzR\\n\",\n       \"NzzEGgQt5ostj7b0jFkZGSfWGdqi3QHmG36Rb825aAA0wHvyDjHW1oLlvQL5RV6dkdsZ8LEe0o/X\\n\",\n       \"tH2vsNRAQysvtnY6o7l5aOsItDNOt3cNN56zZs2agWWS7/IMrIDwaSzzdETPB/ZRW5Hcrq09mVl1\\n\",\n       \"4BHj4/9Y1JzDDblivNSLZD153UHL/Px8tz75m2XRfq7kEUN2oQnY58r5loylpaWOD/Ccz5ZzWySd\\n\",\n       \"T8x7kbOxs5ZZT63s2s/KlibvRfYpazN+t/0Ayz8VJMYqZ/DMjIf2V6RPywlrObPC01+7f/o95zyC\\n\",\n       \"rDVyVVleMpRFqlAoFAqFQmGFWDWL1Pr16wenYVuXAKfGtl4UliKfXn1v7JpTPpE6OsORAL6vRzt4\\n\",\n       \"5JFHBpmK3TeneSLl+Mzp15oaz/SJ2/5K7VjpM8sSDjido1m2WW8jhpY3xsJp39XgWy3QEW7A4wPM\\n\",\n       \"J+NhrujHtLv2HFYjtM3WBwEZyrQeW/Vc/R0tF4udtR1bPFuro8fPOF1xnGd5Ppkj18zi7+YnVqSH\\n\",\n       \"H3544Itgvwrmm76cDdo8Z13ZeuoIMjA3Nzfww/IcANeOIzs/486y7MM35h0ZbMfKM5EJ16Gz7xDA\\n\",\n       \"Ku7aeWN+iRFDy5c1/oie12jUaO1YIpw1m76QLSwzWbZ69iLvO5s2bRrwkGdBH+P1ngRscbDPSzZH\\n\",\n       \"WEVammztYP55pmtyZlGctMfiwrqyLIK26kV2s+CoRWSw9fVqkWVpR97GrIzME20yCwrjbP0uI3Kr\\n\",\n       \"F3zCigptY+8AWwxd0WKWj5R9A81zt+d5lpu2L1u74WV2a8C6wOKd+QNn/pCLi4uDvQieIktY0jg3\\n\",\n       \"ZP7PRlmkCoVCoVAoFFaIVbNIzc/Pd6dBTuicZn3CHKs1xknRERHOp4PvyJglJWKYH6nNCxQx1ALb\\n\",\n       \"vCrf//73I6LXGGy9cmQE9cnsnwAcEUK/zivSjtWRDGNREtDb9o2m4Ht7YCuTtaSxOk7OAzNWhTxi\\n\",\n       \"Oq9HRM8f500BfIb3RHdh0WitksiKo+8YZxa1B80/+MEPIqK3xHiO7CPVWptcU445sOWN8WTRSfzf\\n\",\n       \"Fh1btlq5wPKS+QJBG/PvWotuD1/gn+fIluC5ubmOh1g77DsHnB8L65/9T4DXPxo4NJmW9hmMO+vb\\n\",\n       \"YI3CT1tm6Me54EAr6/zOMz1H7ts+hWjWjMFWI1v0sWBt3rw59dfk76whaLMF01YDLJisI9PiCMP5\\n\",\n       \"+fmB5Rk4mhd/RO97psVzyp7k/aWdY3xdeMdkEX68B7CoYXn1uvBe5+jWsfcR9LL3OOIX2PICrY5C\\n\",\n       \"A+xx8Ae/R+/Z7Xd9e8KasqXNkYVZ5HU7zoieH6xp+Nha9rznwkvWsd/RjAeamEvWf5afz1HPc3Nz\\n\",\n       \"A55DA9Z0zgvQnUXKGmWRKhQKhUKhUFghqtZeoVAoFAqFwgxktfZW7WqPcggRvUkSYBYkFT7p6gnR\\n\",\n       
\"feCBBzqTJD8phUDJB5flcLj6ddddFxF9ing7yblsDWU5KD8Q0Zuk/Z1rr702IvoyC05AyLUKZkNK\\n\",\n       \"IVAKxY7MgDGRlv/ss8/uzMW+/uJZn/vc56Z4SB/QigmTz9BCWn7ak8AR8zsh6B/4wAe6vl3ywVcU\\n\",\n       \"lB8ghT/AeZCQape3oPwAcsEVSVvmgPIALptx8803R0TEySefHBERr3zlKyOinxv6Rra4yoJ/0ALP\\n\",\n       \"KbUydmXk0iaU5WCe4TVXki5XAV983WZHUfpnDBs2bOjM3VzRYKK+/PLLp+gGvl7CyfKP//iPIyLi\\n\",\n       \"tNNOm6KVK89XvOIVEdFfU7De3ve+93Xy6mt0zObQQskPxsmapB1XoJdccklE9OsCXuMIynOQxcsv\\n\",\n       \"v7ybT5eRYJzIGH3Dc2SJOTHvKW9BSRE7J/P9Rx99tJMVysnAB3jI+ufnxRdf3PEwYrjukRvLImuU\\n\",\n       \"vzP3O3fu7J7JPscabVPIRPRrCl5SOoP2dqGgHWHh7NHwkf343nvvHQQquFwNvGbN8V2X2mJOs2SY\\n\",\n       \"zBHtoaVNf8C1OHJ76aWXRkTPc19lOSCEPd3t2bte8IIXRIuPfOQjERHxO7/zOx2P2VOQb4JmWJsu\\n\",\n       \"+cK7q02xEhFxzTXXTPERvrFHsz9yHXvFFVd08wlw2H/9618fEf212h/8wR9M8RC4NAzycMMNN0RE\\n\",\n       \"Xw7NqQq4ht+8eXMnW/QNb7kutpsKe7rf6XahgE/eR12abcOGDR1dyCJ7KH3DO/Y9rnopEZWhrvYK\\n\",\n       \"hUKhUCgUVohVTcjJSRsnyZ/7uZ+LiOGJk1MuWsLNN98cv/iLvxgRvWOr0Z5CI/qTZuaYaEc+TqR2\\n\",\n       \"CEZb2rNnT6cRvfzlL4+IodMrnzm9ozG87nWvi4ihI6OfzbMcHgomk0mqeXOSBg6JzUJNAXODNgwt\\n\",\n       \"v/IrvxIRvbNqxLDoJNoL47eDpwtKo9W95jWviYg+HB44KeDtt98eERFvectbImJ67vxsvvNLv/RL\\n\",\n       \"EdFbpoBDsWmPdcTyYkdwvr9hw4ZBqg14yN+Zkxe96EVTfAAOTSaY4aSTToqIoaM862RxcTFuu+22\\n\",\n       \"iIh49atfHRFDDZn5d+Fnxmc5cPoA5uo5z3lORAwdPJ944olBAWysV1lyWLRVtOPf/M3fjIjhWkRm\\n\",\n       \"oZ2xvupVr4qInp8Rw5B4eITFzikHXGbk1ltvjYjeSdkO/k6vYH62axRZwdoHLaeeeurUM4ETTCKD\\n\",\n       \"WHbGyvJE9HPI/rJ169YB3ew13kP47BQF5h8/sUS1678Fc3nvvfd247RDvh35oWEshUREv6aRf94D\\n\",\n       \"XougTbJ61113RUR074ssVN6O+21y0xbsE8g638f6Yyf8hYWFbl5uueWWiOjl/LWvfe1UWyxXLhU1\\n\",\n       \"5rDd/p+/w1cstG0AEXTRl63jlpe2AHZEPyesI88p/bHv0h+W/lbunAwUuWfNOf0Fcs4eBU3QYvck\\n\",\n       \"B2cxxr179w6CqtgP2mLbEf1ZJHs/GmWRKhQKhUKhUFghVjX9ASdLNC60AFuBXL5leXl5UJQW2DrS\\n\",\n       \"au0RQ42M0yynV7Q7F9Y19u/f3526ObWijZlun945tWdlPPieE/GZlrVr13bjsaZkK4DDXF04OUuC\\n\",\n       
\"Sb+MFQvgWFI5l4bx34ETTTJeLJPmi0vIwD+sJGNWSbR5LDP0ifULOESdZ6FhYv0CLoMDn8aSArpv\\n\",\n       \"+5t5nE6oZz89a7ttoVDkIEskagsUPIR+p9ZwcWY+U0JhrOyL01fwjKyYNZqn/VcyWTQtWMfa0H0n\\n\",\n       \"b2XcWSoOh4NDG7JmvtBvlqiw3bv4Gxox8wlNWORm0eLUJR6rSwqNlXthHC78jSXBFgmnd/Ca9hy5\\n\",\n       \"nMmWLVsGBY8B4/B8umAwsP+q90XPUWuR8D7n2w7kwalbgGlBZnnvwEcsmW0pFPp16o2sHA/w+8FF\\n\",\n       \"fU07NCFffIam9tlt+p6I3m/PNxi20LCv0LflwSka4DNy1bbPyrTYZ87jtMUeGmeVWmr79xpCprye\\n\",\n       \"oSFLnmqURapQKBQKhUJhhVg1i9Ti4mJ30iZ6BQtDlgQLze6UU07p7oUz7RX49Julwudk6qKe1kha\\n\",\n       \"KxraHIUu7SPFKd3aO9F+1qRc5JPTPCdvW+q2bNmSljbINEYnw4TnttQ5coSf3/rWtyJi3L/Lifbs\\n\",\n       \"RwN4li1s0I6WZNrR9vBzQtNtfZPMO2jBEmU/A2tqaIuZLMI3J0vdvn37wNoBDS4VkhUWduQlWqK1\\n\",\n       \"PMDcr127trPKOcLH8J0/cmKrKX1j2USG4YvX0WQy6dq4xEmWSBReM6933nnnQWlxZB1z2rbPtHbz\\n\",\n       \"Gnh/wD8v0469HlxgfIxuftLnHXfcERH9XAHWRRa1l5W3Yl9o/cPMw9aXr6Up2+dY/05umCUq5Hns\\n\",\n       \"0ccff3xnec1KRzGf7G9ZKSQnqnTJLPOljf7Cv5A91/LvAsG22Gb+N/Aamhirad+8efPAssT6z5Lm\\n\",\n       \"ukyZLTKmBX6wB4wlk2XevRdliapNiwtwe05p5zJvY8mnbb1in0MObDVinzdf4KvfM9DmW6m5ubnB\\n\",\n       \"OLkNgwbmm/fck00PVRapQqFQKBQKhRVi1SxSrQaEdpMVObU/1ObNm1PfJfrw/7P2Ps3yLFto/P+I\\n\",\n       \"YTFNn8ZdIsTakLV6nsn3XAjSaKMQ+A40+FTv6BR4aR8SYJ8KMFb2gT6z4sXuA03MpWQcrdGOs30O\\n\",\n       \"lg9oaPkI3Y6m4hlZyRdb9DJ/E9qN+U5k82QZsy9c1rdLyZiPrYbmKBtbYOyv5AKxlt3M/4R+bZGY\\n\",\n       \"m5vr5j/zWQFZCYis5Ic1c2Bfu/ZZLj9hSyzgmY70cV4h0zrmr2g47xHWHObT8p6VscEK4P3CI8QD\\n\",\n       \"tAAAIABJREFUss5YxvY61hzjsf9dNk5byT23YCwaGroy+efZ9pHJrED+O/zLyrjMzc0NrJjmbeZj\\n\",\n       \"i5x7XI5EZm6xqo4VubV/qiOLgS2MnsesmLPzjPGzXdMuWo6lmXHa2k07z53f1SArAzZWWBrewQfv\\n\",\n       \"e1mRa/7vtZ29b8b8X7MbqWwdPFmURapQKBQKhUJhhagSMYVCoVAoFAozkJWIKYtUoVAoFAqFwgqx\\n\",\n       \"aj5S73nPewY5WfjM3eYVV1wREREXXnhhREzfY/o+nbo81MLiPtnPIIqAGlRnnHHGVN++G+cem5p1\\n\",\n       \"bf0s+1/wmbp81IjyHTn3tK4RBe2OAPKd7/XXXx8RB2oK+X4Y8CzqG7mmoCMgoPFTn/rUVHtHWHgM\\n\",\n       
\"n/3sZ7u2hn0bqBHHOJ01F1pcmw8+Gm0+GuimzpJzN9nPhhph9O2oHcN1H+1DMzc319HN/LjWWpZX\\n\",\n       \"yjWifG/vqJ2PfexjU7S3NNvPcNu2baN8sZ8KtCAv1BTj/44YNV/e+c53dn9zbiX8KqhXBy2u4+Yo\\n\",\n       \"LOo+uu4XQPYZ/7Zt27oagYb9MP7kT/4kInoe2g/DPkXZfkH71jeG9UyNMACd9vVAdi0vzmXHHDCn\\n\",\n       \"XhdtdBJrg7bUK8z4gQ8N46Rvy67zbDGnrIsxfzXmF1k5/fTTI2Lof2O/Pmhv67JGDPcgxkA9PPg+\\n\",\n       \"Nzc3WGvmIXLrPFKuF+r9xWN0tm6vu3a8tGG+WHPQYl85Ry1mNQizfG2f+tSnUh4aXs/Oq+j3LrS7\\n\",\n       \"rixo3wG8F6nLh++f/TupBOB9zvLhqiS0px6uc+dF9HJO3U/WnP2ynBeOcWYoi1ShUCgUCoXCCrFq\\n\",\n       \"FqmI/rSHZsHJ0adDV2RvPe99Auak7XwpjuICsyKqslP+8vLyQEvL8gIRVYFG4dwlwNE3WKagyVF/\\n\",\n       \"Le2OpssqZPMM509ytEnGV743FqUEMqsO8HcdaWnrhzU4awstH62lmMdGFs2ZZWmHj8wRNM3Pzw+s\\n\",\n       \"g5ahWXyxVou8O9O5aVxcXBxE42W5eFybMaPN43cEqfk2Fq3kcZkWW5SyqDPLnn+2/TvaNrMCAPiB\\n\",\n       \"pZrPtgoZzsuU5fpq/2d/VO89WYRZVmUBWtkn4N/i4uIgw7a/O5b1eYxmR7061xmwZWtxcTGVRT5n\\n\",\n       \"eYAcWWULntd99v3JZJJGaQF/N8vZBMwH36Z472qjWaGF71pW/B6hT0c3eizZ2mzH5vVvHlo+bKGj\\n\",\n       \"vTPeu3+PxWu8pdO1WR3lbjjy3NYyP9tVTdratKbbcu334yyURapQKBQKhUJhhVg1i9TS0tIgH0T7\\n\",\n       \"vxa21CwtLQ2yBQNqAmV5oKzV+6TNCTXTSFtfG/vJWLux7xR9WksGZKQlU7dP1GMZ3631ZbmYfI9u\\n\",\n       \"jcNao2k+mAZijTt7hr+b5XaxFsAYfT8/ZjXMLAgZMh+QjC/OZN1qgabbflmWhyynFT+dTyzTntet\\n\",\n       \"Wzd4ljUvz4G13mx+7UtzMEtNJtem2zX5Mo0SOHu/LVHt970e7CeRWbvtI+l9ALC/2PI7NqeZf4ll\\n\",\n       \"zrR7/BmfsKIhi+1+lPHUWn1mYZ6VLTrbL1ofqWz9O9ed92D37e97b3dusINZoWflF/Te5bnILLeZ\\n\",\n       \"1ShiaAV2tnBgK2eWww84y35mPW7ptDWQNp7/7N2c5XSzLxrrYsy6hLXUay2z2Gf1Hr2ugC2Yrc9p\\n\",\n       \"ts+ZP9n6z1AWqUKhUCgUCoUVYtUsUm1V7Fm12cb8OTIrkCuNO7pmrEZY2871nty+vTs1XT7tOjLK\\n\",\n       \"2qDbt/42Eb3m5Wy7oNXE+J8zEnucPvVn7T03pr09qWc+IBns6+R+DGdItsWm1aYclWgtLatBZw3M\\n\",\n       \"c2faPUdjmc2tYVtm3T6jLfPXGovWcbQZwGphLTbzv3E2eWt5Wd23lp5ZWbOtaWaWF+bEmZ89t+2z\\n\",\n       \"M1oyS11mTTRcGcD8aL9nS4T7MJxl33Udx/yvWtrbvXBWvbLMN8a024pqeXD7MfnwvpBF3WX7h/1W\\n\",\n       
\"LSfmS/vZlrdMFjPfKNNka5KrEIyN1RGDWd/O7G9LbDb/wOuiXQveM6Epm//MZ87vU9NuC6etjhG9\\n\",\n       \"LGbvDY9zVpUBf5890JaupaWlwRzYYn0w366DoSxShUKhUCgUCivEqlmk5ufnB/fRsyKtWu3Ip3Xg\\n\",\n       \"yCcj89dxdEpmNQItDdbWQOY7lI0TbQg/DFvbrAUsLCwMooUyi4FzE7neUTZOWyZAe7K3tpdFOhrm\\n\",\n       \"W3Yv7/7Q0PnZapH2VXDfs/y1sjp3poUcKK3PXWYFtJxnWg59MTee/4NZD5z3JpN/WyoyHzjLhX0G\\n\",\n       \"xiw/Hl/G8yy3zyx4TWa+Ey2d1krNeyzQtkxnkUO26HhMLS2ZRdLjB7bqgMx3Ctqximb9tn+z5QV5\\n\",\n       \"sOXdUY72jXJ7+30tLS2l/lRYDDKfv1mWV8ti5lPZ+opluZYcpWnez7JMZjUcx9qa7iyCDHjODPfj\\n\",\n       \"uWn5m/l6zapBa8t15jtmC6Yj8trnZFZw+5ABrOmZX57lxRapNoox86fKfOUs5xnKIlUoFAqFQqGw\\n\",\n       \"QlStvUKhUCgUCoUZqFp7hUKhUCgUCk8xVs1H6pxzzunuT4844oiI6DOAc7/q+mb4BGzYsCEeeuih\\n\",\n       \"iOh9GaiFRE0h55l69NFHI6K/C6UuE3V/uGclx8W3v/3tiIg4/PDDI6Kv+0b/a9as6XxaHnzwwam+\\n\",\n       \"3da+DNDe1giL6OsbkUeE9tQe4u+0v+CCC7o7Xt8j85NaWNQUgufQbh8B6pu51h4+QY899thU/5/+\\n\",\n       \"9KcHNcWyyAfoZj59503kh+uEQTtzTR4a5nTNmjXd/FOXKYvC4zP1qugbwBciS5g7ZNHtkY+HH364\\n\",\n       \"o5u6TK6Fxf+p3wiYI2qQMReMc/v27VPfpzZfO1b+Z58W+GKewx94zjPdN7TDF3IXwce2vf1QLJOu\\n\",\n       \"y8bf6XvHjh1Tf4fn1PFyNBNz1I717LPPnmoDHFV0zTXXRMSw7uNhhx0WEf2apm/zJcv8vGbNmm79\\n\",\n       \"ew3h+0ieOPjl2nn2LWNOkUVoYb+gH/auhx9+ON0X4TXfuffee0f79hzBH+af/k0LYz3uuOPijjvu\\n\",\n       \"iIh+LswX+63ZNwZ5gXa3Z10wJuQFGdiwYUPHc/ZQ00ItPPuvIS+ef/hiWfS7iz3g3HPP7dqwn/td\\n\",\n       \"RFvohtdHHXVURETcd999EdHzHHmBj8gJezT7Sytfrm/JT+hmPPCFcUI7/PAcsY+yRgHPhpa5ubmu\\n\",\n       \"LbXw4PXWrVsjIuKWW26Z+sx8XnTRRVPt2VfaviP6d/oFF1wwNSZXL4iIuOGGGyJiuJ6ZI+/RzH+G\\n\",\n       \"skgVCoVCoVAorBCrmtnc2s5YXpiIYaTA/v3704igg2V3PVjfWY2dg2U2d1TKrCy5bp9lNnbEQJZl\\n\",\n       \"O4vMOhgtjnTKIiStac/i4xjGcuu0380ixbK+nQNlLEIzixDLPrtveJ7lOJmVX2usT0fVZPXqsizC\\n\",\n       \"Gc3tunDU4awswZlMZu3905FSbaWCLLqmbRuRr8WM545yGouUyiKCsohQa9po4AeLworII1Hb586S\\n\",\n       \"xVn/d2RYJsuW1X379g0i4jLZY7yOUnRUV1a1wGPhuWvXrk3Xc7YPZvtFts9l66P9e/YM052t91lr\\n\",\n       
\"78nsN96vsvxHWV61WbmMHP06FhXuCFJbAbP3BfKR5Q8DGY2O4mv7yLKGZzms3CfIcn2N7QHZecGf\\n\",\n       \"x6IND4aySBUKhUKhUCisEKtmkVq/fn13ssQXxP4XwJm9169fn2ojrvzMfWqWR8Y1x7hn5p7Wmhp+\\n\",\n       \"CBs3buzo9n0qMI3W6t03PlBoAXxmDGM5gA6WWbmF65tlOVxAlvNqjI/WIGbl4LLVwH1mmveTsRri\\n\",\n       \"L5HVObOGQXssE/bzMZ+YA/v3tLmcgPNcub6V4Zwt1pIPVnPMbU13lmdqVm4raLaG6rFu2LBhMK/4\\n\",\n       \"MGVrznOSWY2cJdmZ4cesBllfhvuEp/hreZxZtmmvzYh+zXn+s1xcXv+uj+dnsP+ANsu+22ZrK6sp\\n\",\n       \"Zn44q3y2jhjD+vXru98ti+yh0OI+szxSAFrtx2Ta27nPrMCWb2CLrftmbp25e6x/+vL8Wjad4Z19\\n\",\n       \"P6s+4b2O/nkftf07+322poxZllmAnMBPnm3fsfZ/8Aq68Wez7GYVILL8c5nFr827CPz+h5YnO25Q\\n\",\n       \"FqlCoVAoFAqFFWLVLFKLi4udpkUUS6bBcqpts0pnvjvWJG0d8YmUyA80S2ggasea2v333x8RB07T\\n\",\n       \"aCXQb7rpG5qwdpF51VlTf/SjH0XEMArDUR7g8ccfT7OCe5zO2G0tJ7vrt+Y9dlK3BmDfJfPFfkWu\\n\",\n       \"g5b5irgWlT9H9Lz1OLKsuVmmavrOwHN4NtrjWB+uczVGd8TQekY7vmcLHzTs379/4Ffnvl2DEmTr\\n\",\n       \"gr8zLtPuelhj/OI7psVzxOeML4zJtI6N1bJkX5as7p0zeWdaLWsUHCyrv3lOG1vJAfscNMIXaLCm\\n\",\n       \"Tnv7kO3cuXPAYyKFbZHAesEeZdr5CV8d/Qj4PjRF5DcM0AJsYcj4wjixIkCb99G2P78Hshpx5mG2\\n\",\n       \"LuAXc8g7wBFkoJUB+1V6P8ciYx+fLGM9NDgCd0y+HEHL/HsfA17vnne/d5lr7+FjljreZ4C+23fr\\n\",\n       \"2P/9OavZ6jls5zazAvo7Y/v5wVAWqUKhUCgUCoUVYtUsUmvWrBnUe8r8Nfz/+fn5mfWHsrpfs6I2\\n\",\n       \"7EtijbT1b5hVG9D+FLOidtwebSe7v27r/blOV2ZhyjQK89N8yKI6ngyy6Cz71GT30mig9v8Zk5es\\n\",\n       \"QnxGdxbdlNV3Mh+zGl0Rw4iXWf5I9suyP1bm3zKZTAbznq0hRwJllhfLIuPMKtC3a3KWv57nb1Zk\\n\",\n       \"jP07wJiPRGaZncUXI+OLZfFg6yiTrQzmg9dF9qxZ+1BE78NinygsEtl3vZ/OwlhNu2zNZdbwbM91\\n\",\n       \"hHU2B+0el+13bpvtm9m+6HEe7N3lPXRWxN+s9wSYtbe1tNh3KXtPZs/Mojcz2g8mL9BCn8hmZgUE\\n\",\n       \"/w97bx5seVXdfa9zu+/tgUZo5maeJ0XAeSQxvknqLSsmMZXksZJSKQUhUYSKCJQS8iAEUHiVRKKI\\n\",\n       \"hjwZXpKUScU8cdZgRA0OiYqRsRFknqG7aejuO5z3D/j8fvt8fr91z+WG1I3Pu79VXafvOfvs39pr\\n\",\n       \"r73PXmuvIZvnDONquJZ9LNSnMkMtEVNRUVFRUVFRMQa1RExFRUVFRUVFxbOMJbvaO+200zqhkaTE\\n\",\n       
\"51qFciWkfMcMvWLFisZsx/XXBRdc0PRbghBRgPmQ9pQ2yBydXSKAdPXT09Md51/TQukUnAv53E7W\\n\",\n       \"F110UUS0pRBsRrUjY1lqw6Zmrl4I44QW+qYP+sSsigMffVOuAPM7fKQ9/Lr44oubUgWZ461LfsBz\\n\",\n       \"zKjwpXRUjWhT/ruMg5Mmrl69Os4///yIaEt+4PzKdyhDBF8uvPDCkb6ROcYHTfAF2l2CCD5OTU01\\n\",\n       \"z3JpE6e78DUsZRmQc8rwMD7C3Pn+xRdf3OGL5x25+MAHPhARbSkE1hClUOBTJud2NvaVaVnGyaHy\\n\",\n       \"DipwaQuX2bBDM+2ZU8BYkclSvpBb0pfwDDtuM873vOc9IzQa8Bz5gufwy1fAq1atanhOOSHG43mE\\n\",\n       \"95SrgecOCAE8k/3ijDPOGOFH6azvvYL5R/ZwbHaJIGinRAhygizagd6lk2i30047NX3yHdqah048\\n\",\n       \"7DmClmyOWKsf/OAHR9o//vjjnbQ28JTSOdDi0jnsRdAGH5FFxslcwkeXzinLeHmvZq3+/u//fkS0\\n\",\n       \"c2RncadoQV74fWH+CW6ibBVzfdlll3X2OfgBbfCJ31yvOV/1A/jIXse6g8/M/apVq5rfImTFey40\\n\",\n       \"8Qz2OdrTJ/wozwMR3f3F7hebNm1q/k9ZHvhilw5ogS/QnqFapCoqKioqKioqFokls0htv/32TWK2\\n\",\n       \"733vexHRhsUecMABI23RDjgtLl++PHbZZZeI6Ia+OlyfEzGndDuAoiW6bMGPfvSjiIjYfffdR9qX\\n\",\n       \"JSXok5NxFjLJ51gasI5Y43QyUTQWaCiLLgK0jjJRaEknQJtjvOZplvSO7xGy2pfy34kkQVY+hXYu\\n\",\n       \"x8IY7NDsgrmmHS2o7Aseo1Ezn05iyDjh9a233jry/X322WekfZbQbdu2bU0fwONAc0ZuHnjggZHP\\n\",\n       \"bTVjbqFp1113jT6sXr26+c5PfvKTiIg48MADR9q4+LAtEw5/hy/IE7zPfCrn5uYa+R2X/NXWPIrb\\n\",\n       \"Mv8eJ5o5c8e8I5PlGmWc0ABPSVvgcQI+p2/kxg7tXld77rnnyFjKPQDeMf7bbrstIiL22muvkfcB\\n\",\n       \"PGb8fP+mm26KiHauAPJlB/K5ublO4AZrDTrhNbLocbkgNBq6AyBAqfVHRFx77bWNDJruhx56KCLa\\n\",\n       \"dcsag9deF053se+++0ZEyxfvO2VySMbBvDgJMs90YlX68DhdxBf+IGfm48zMTLP/O72N+VIm2I1o\\n\",\n       \"ixUzZzwDsL7gM4WZb7zxxoiIOOSQQzp9IxeMl/dtaXR6IV6ZM4+TsUErtDFXZToOeMpcQNMtt9wS\\n\",\n       \"EREHH3zwSN/+TUd2kRsKb7t/xsQY995773T9sy/OV35sPlSLVEVFRUVFRUXFIrFkFqkHH3wwDjvs\\n\",\n       \"sIiIeO973xsREV/4whcioj2ZAk7BnGofeOCBRsNAIwRZKH1WKBVNi354Fq9oywCt6OGHH25O39be\\n\",\n       \"TQtwuK8tGJzyrcFwqrc1ZevWrU2f0LX33ntHRDes2z5gz3ve8yIi4lvf+lZEdDU1NC20Y2i+8847\\n\",\n       \"I6LVCiK6ye4c3u65gOdo4FgW4HlWzBP+4gtw6KGHRkSr4cKTss0ee+wRERGvetWrIqLV1gDjfulL\\n\",\n       
\"XxoREb/xG78RERFXXnllRET88Ic/HGlP/06SOhgMOv54aJzMI3RmYf8kpKOfI488MiKisb7ecccd\\n\",\n       \"I+1LayxaHH2TJBbYJwLew/OsgCiWHb7PHDjh49zcXMdqiQyaL8jgEUccERERr33tayMi4p//+Z97\\n\",\n       \"aWecWCKR3euuuy4iRpNksoZYt1g39ttvv4jI0zcgJy7LYYs37yNf0IqVqVx3rAss7K985SsjIuKa\\n\",\n       \"a66JiK61C22ZfYXvYXGzxdv7D/LmuYlo55lnQANzsG7dupH2rEFkzr51WLIAcvKCF7wgIiJe8YpX\\n\",\n       \"NPNjKwByv379+ohoLS/0gbwDFxRn3Px+2IJVphuAJ/Y7BcgLsmVLHH67gD3Y+25WiHfZsmXNXsk8\\n\",\n       \"8gwnd2WdYEli/rG8lftcRHvDAQ2vfvWrI6Kdq9Ln1GXW4CG/Qb4dsaWKvvge+4Fpx3rO7dL1118f\\n\",\n       \"EREvfvGLm7YuS2NfMJdcsz/i97///Yho5cRzBO2sdWjYZZddYv/99x9pizwgo7bAl7cd86FapCoq\\n\",\n       \"KioqKioqFokls0itXr06Pv/5z0dEq4lnGiwnVjT0hx56aGxpC995ZonleDba0XOf+9yIaLUcWw3Q\\n\",\n       \"mg444IC4/fbbIyIvVcDfLgAKfM8MjY6o4nRsbWe77bZr3kMDwGJkixR9cY8OL7FgoWEDtECsg7Tz\\n\",\n       \"XXs5Po87Q+aH4lIqAA2Fzynf4yioiG70zXe/+92IaLVB08q8f+c734mIiN/6rd+KiIhjjz02IroW\\n\",\n       \"LPt3gRUrVnQsKWg7PButDk3Sssgc0Q/yheaF7AG0yQ0bNjT04zdgfwpreWhaWJHMc5f38GufBdPJ\\n\",\n       \"YR0JA9Cs0RRf//rXj9Buy5uLmmPpcCRWRLeYLLzmu7a82MKKZSor/YN80Y7++wrFsi/8+7//e0S0\\n\",\n       \"828fOIBVHEsFPlXQlhWthQ88e3p6uuPbwT5x+OGHR0Q7/9DtiFnkB8sTfMhKrSBHyOELX/jCTskT\\n\",\n       \"wDzj24M8sIdl1lRoMV+8juDf1q1bO1Zu3wLQh0u+uEwTYM6wgrBm2Q9s8dywYUOzt2K1ZN/zXsRe\\n\",\n       \"Q59YXqGJ7wFuC26++eaIiPjsZz8bEe0c89sW0cqik9tCW1Y6Basf7bK9C1qYc3w14avLAvXRhCxm\\n\",\n       \"ln1+g2644YaIaC1vLjnDXGCFZi947LHHOlZA+MC46JNnVh+pioqKioqKior/YiyZRWowGMTzn//8\\n\",\n       \"iGi1IbT9rBQE3vnbb799qtVxendhW7Qc+xmhNfM9tAJOybYCoanMzc01J3+0XWtejk6wNcPjdDFG\\n\",\n       \"tKnsfn9ubq6jIXOiNi1oc5zOrR17nNCMFsSpHw2u5DvPgueMq6+ERzlOvmeN2hqJIyrRaJmL0gcD\\n\",\n       \"LYf5RItx7iaAtQf+/MM//MPI+9ZgATSXRUBtvWAemROXjDEtjNsymkU/0u9znvOcZj6ZJ/MwK9vi\\n\",\n       \"yBngqB1HwtinYmJiopknZITv2toJb6Hxq1/9ajOOiK7lDXnhc2hwLrSI7tpCU2ZubGHAWsj4yz2m\\n\",\n       \"HDdgTGiwWCbgVzlWeMt4nB/M65+5oU9ode42j5X9B1kty1cBF8BG44anWbkNaPLcWNaxLrEPlbcG\\n\",\n       
\"jk6zdZQ5Yg3aqoMVARpo5/JFoLTcZAWCgYv4wmv6dKQ0e43zb2WFpXfaaadGDmz18PyzV+Ezxnza\\n\",\n       \"yg7gE5YrLFD81pXtXXSeeXQOL8B+8m//9m8R0e65jNfryHsU8oCclHPutcmz2FMz32LWFlblrMQQ\\n\",\n       \"oH0ZBWvZclQqYJxZ2TKjWqQqKioqKioqKhaJWmuvoqKioqKiomIMaq29ioqKioqKiopnGUvmI/Wu\\n\",\n       \"d72rU4vNNcmoKXXCCSdExGiUDnfV3Kd+4hOfiIi2/hCWNvp07h7q+LjuF3A9N2oKle3tE+Q6TtAC\\n\",\n       \"iE7hHplxXnHFFRHR1hTi2c62y105dX9OPPHEji+Xc25Rf4r6dozf9c24T/7oRz8aEW0NMu646d95\\n\",\n       \"Uy6//PKmXpGz/TpKkfmkphyw3wa0UJuL+ml8Dp/LaI1PfvKTEdHWfHIdPwMeUlPK2bN9/4680J57\\n\",\n       \"d2jdtGlTM25qYcEX+yUg764pR3va8WzXiaM2H3M0Ozvb8AS/CtYHNaWQLWQOWXJdPGptwXP6w8eB\\n\",\n       \"8SI3V1111QjtJf3OWQMPqSnnGpSAuTjvvPNG+oZfrlmJjF5xxRXNmoMfyBZ94r9F39ROZO7w12CO\\n\",\n       \"2HOQXe8Xziq+bdu2Th03YH8a1/FiXTBO2tmPDXlh/hkj35uYmGieBS1ve9vbIqJdk9CNbCEPzD98\\n\",\n       \"cV3QMjKw7N98f+yxx5r1mtVDdc05V6Ogvh08d8Qgz2KOoAW+LFu2rBNVxzPZ59gX7StlHynaU8fP\\n\",\n       \"Ubt8j/0D+TrrrLMaXnmemIuyXmVJi2mCL6x//y4C+IQsX3nllc16znx/8DujpiS5He0TiEyyjqjN\\n\",\n       \"6Jp19v/avHlzsxfBc1dCgCb45BqUrvdZ+itHtPsL68j+UMuWLWt4z3zye2E5hyZ+s/mNzlAtUhUV\\n\",\n       \"FRUVFRUVi8SSWaTKKB+0nKy+DRonJ8zBYNBkKM08/N0nmpejsND2nMk4q8DO9weDwQg9fbSgzfC+\\n\",\n       \"tbosagtaXQ27LzrFkX6c5q1x8Gw0JkcQOS+QLXnOz1NmT7ZWzt+2NLo9NDO/8NaRMozJ9aDov7R8\\n\",\n       \"0YejNWmbZcJ3dXNrg6aFCBk0s1WrVnV4Dl3WEKHRuXt4NrSjDdEveWhAWXGecbgKPUC24AeRUERZ\\n\",\n       \"WXZteSJzuuvclbTbAsX4srpvzL+zCjvXE/3aKmrLVER33SMzZU3EvnHyPUeEWXaRB+9J9OtoyYiu\\n\",\n       \"JQKZyaJ2gfckR+3ZIsfanp6eTtezx0s0lqP2oA0+OIrTa5E55Hs77rhjb1RtRDtvjJe9yPnHDGgn\\n\",\n       \"Kg/ruy0PZdUBxplZSvjblnxk1zyHr+QZZCxEzpHjCjz55JMNDaw5Xp1l2xZr+MKadTZ51ov5yffL\\n\",\n       \"mpVek694xSsios21ZLptyWUuswhy2nud9UW9OVeXf2u8hqCdZzuS2HPEnNKuvDHyenZ+MeiFl76p\\n\",\n       \"ylAtUhUVFRUVFRUVi8SSWaR23333Jvvp3/3d30VEW1OOmkGA0+ExxxwTERFvfvObm5o+f/ZnfzbS\\n\",\n       \"Fu2HEyZaGqdZ5zBBs+JU63t7W8do9/DDDzc5p6hXhXYH6AtthRpKn/rUpyKia5HgtIx2hI8B4ye/\\n\",\n       
\"CHjyyScbTYI2Rx99dER0LSmcvMkwi7ZDTh9nwrZ1ECsROW76qnmX1ecjujWyABoHn1MXzZYpgMZJ\\n\",\n       \"1vk3vOENERHx7W9/OyIirr766qYtGhH8IC9QFp1qbY68KeRBsQXLFeiZ04ceeqjJYA3gC7KKJkq+\\n\",\n       \"GMO+ZWiiWL9c96nMDUUb5rOshViODxmCbmghWzhAfn7hF34hItrs43/xF38REd3s41NTUw39yBiW\\n\",\n       \"A/vKIUPOyMz68fp3HirkBI279BFxjjLGYcsEYI3CB9YyGbrJfA9cHZ7++/KrwQ8y8VNVgBpqtjDa\\n\",\n       \"WsL+SK4rXgH7Bf0hX+vXr+/Uq7PVD58pxkH+NPcNkBtk0PsF/IMPt9xyS1MVwRm52UtcsYFx2IrB\\n\",\n       \"34yfum3Q+K//+q8j7dkDDj/88GZ+2COc/8o+QFiBkVGPk98L5pJcgtBIpnywww47NPsdn/3oRz+K\\n\",\n       \"iO56tk+sfeO8p7MX2ZLPnlX6HvJbw/wfd9xxEdHKGFUGAPsl42UNwg9bPP1b5byNzo0X0fVLQv6z\\n\",\n       \"+Wft0p4xZb/R0ALfZmdn03xi8AprJn/XWnsVFRUVFRUVFf/FWDKL1N13391oAWiBRx11VER0NRjq\\n\",\n       \"wnE6vPrqq5tK8a6IjXZq7S7zebE1xZqYI4rKmkzQizZmaxdaIZrCtddeGxGt5uiMv7aOMDZOyX01\\n\",\n       \"7ZwNltN7lpGVZ6Ld9fk8lX87AgstqLw7tv+E77D7KqKXz0ZLQgtyFAoaFv2gXX7605/u0OKaTq4R\\n\",\n       \"aI0EGYS3hx56aES0/gi2pqFx8z3m9IknnkijZ+AhmimvtrzBr+yZtqqVkWlYcxinrTiOtkGmsOq5\\n\",\n       \"gjrWHjRvomK/8pWvjDwbDAaDZpz4OEGLZQsLDX2g1TPP47RGtFv+to9MXx9ZW+YZzfsb3/hGRLQa\\n\",\n       \"ta3MgLmyD1G5RrECIRdYoOBt5jtIH/DRVnXAGseqwD6z7777diwpjAM+XHPNNRHRWmC8MChjAAAg\\n\",\n       \"AElEQVS9z/E3a4656bNIR7T7ImO+//77mz3GPm/0yeeOUrXPi+umYsmEBvOR34l77rmnkTXWg/2S\\n\",\n       \"4Cn7G3sSa9O+YKxpnsF+Q/+2BM7OzjZ7Bc9iLrz+s3qAptVAXvidRDaxxkd0s6MTbctvjC3YztTO\\n\",\n       \"HCLLniPG75udPn9A5t9ryL+fhqNZmQv/jtpPknUxNTXVsY77XMC6YRz2Bc1QLVIVFRUVFRUVFYvE\\n\",\n       \"klmkIlrLkzUWTvCA0y6REhMTE536TJ/5zGcioj2Nor2gKXEitrULbY+TKhqro+EAp9/BYNCpHO22\\n\",\n       \"nIQdPUCkhGF/LcYAjX110lwrzFFnwNou98t9FqaIbq1B5z7qs3i5D1umgCMG0dSznE+MCX8VNFAi\\n\",\n       \"yUqLl+saOoLQmjSaCK9uZ82L9x3tuf3223fGz3eROcue5cWRYvAFa0A2R6UFC63O1ivGh4XOddBs\\n\",\n       \"BWOe8fuCj7S3drx58+bmGWjraLvW6qGNvtDi+8ZT0g7NyLCjwSK6vlCOwrF11DXoXM/R42TeoYFX\\n\",\n       \"9plS1l0P1Lm7LFuOYqJv+GefSvvOQPPc3FxqScFPh2cjW5bNbA6w1Jl273WrVq3qWDlAWRMwop2D\\n\",\n       
\"bG/hWaw91o2tSaAcOzzHWuP5B+6TWwPvXXyf3x3XRzTtpWWY3xhqzGZWIObfFhhHdVo+XAeznEOv\\n\",\n       \"c76Lhc5WPdc3ZQ7YXywftIffyBPtSj7QR+Yj5ZsXPwP4twn4N5DvrVy5srOe4Sny699H+9RlqBap\\n\",\n       \"ioqKioqKiopFotbaq6ioqKioqKgYg1prr6KioqKioqLiWcaS+Ui9853v7Nxt24ufOnGu4zQ7O9u5\\n\",\n       \"/6TODjWCuNt0fS76pi6T61txf+u7bvLnUINqdna2uTd2Vmz6PuOMM0bed0QD9+uMk7o/wBnTGT+1\\n\",\n       \"mc4444zmnhm6HYVAW9fOcu0pXml/yimnjLS33wavH/rQh5q2vqt29AQ1wqi15M+B64S5phhzU96Z\\n\",\n       \"Qzd1uUwDtPE37akRxRzRt30p3B5aSt8C6PnjP/7jkbb0iR8Gcg8t1M6ivXluvtKesQ6Hww7d5fz0\\n\",\n       \"0ZLlWYLnZ5555ggt9i2DJub0lFNO6dRvc3RmxvPMp4F1QX1D+7t5bi+55JJOHTfLLeP2unC2ffjo\\n\",\n       \"NWo+ur7k5ORkMz+unebcU67jyPq3r5D98VzL0f2XfWe18Owr4zqO9O16Zt5fXN+s3Dc9Tuqy0ZY9\\n\",\n       \"mu84Wg3aPUf2Z4O2PvnKMlPTFr7Qp32lvBd5nOYjr9QsLOst+vbHexHy4vXQJ+cR7fq3P6DXyYc/\\n\",\n       \"/OFOfcs+uY1o59N1ImnndQLtZd3PiPb3CBlYvnx505a9hT6cNd2/F9neZZ9a+mcd0X9Zu5BnMJ+u\\n\",\n       \"tefoW++jGapFqqKioqKioqJikVgyi9RwOOxoFpm/ljXy0grgaANr885tZA3FGVldqds0lf1bS7Nl\\n\",\n       \"xXWKsvp32VgcpdCXR8aZlkGm/dg6No735ldfHSzaZNqf33c9xKx+EzD/0KqyqMAStnqYFsuJtaOs\\n\",\n       \"b1s8yrpehvseB8+Vo1v62jlaNeO5I4GycZrn42S3pHOcPGQ0+nPD/OibG2vOWV3LjGbXi/S6svZv\\n\",\n       \"q3TJl8zimtHv2qOe0yyPmOVqcnKy8x7Pcn1LRyGWfZTPthXY/Tv79NzcXFo7Fdiq68oIpt17crZH\\n\",\n       \"l9bHbB903xnc3pF149Z2yYfMYlS27XtW9rnf9z5R0u42XoOmxdawcTXnbC1kj3Y28vL/XksL/f13\\n\",\n       \"P6Yty7c1HA5TnmbPGCcfTbsFtaqoqKioqKioqOhgySxSpQ+Kc1ZYI7XmVeYwsZ+V87pwMuZ5bk8e\\n\",\n       \"JvokNw15JTJapqenO9qsNQZbLYwss3GmWfRpXoyPV8ZnWly3KbOeGc5O3mfBMl220I3TGMa1t1bM\\n\",\n       \"WG1VKf/vectykzAu5CWbQ0AuGPNxZmamk1tlnDacyYX7zvJrlXzzGsq0XWvUfRm5+2i15tZn8XAb\\n\",\n       \"W2Qz2E8vs6ZkudHK961B23qR9Z1ZRy1HjNE538gJVO5rlh37bGTWEftE8Wp58n5R1mhzjjKPi+/Q\\n\",\n       \"Lluj5oP5BMh1V/oxZmuRv9lrx+UPct1TeA/tmUWqfFZm9fb7C7Wu21cos0yVz7O107Q4lyGgz3Fz\\n\",\n       \"lPlU9dE3bk3aVzDL2WTYMuW1HdHNn8Xey3eyfZL34ZPzkZkGclp5z+vrO0NmHTeqRaqioqKioqKi\\n\",\n       
\"YpFYMotUqflnlghgi08ZtefTOydI32H3WS8iuloQ2v9CfCTsR5JltrbVKPPX4W80UmumzmxbWgHG\\n\",\n       \"jdOa1kLvvq019dFuLdfw+4zH/hnZfbXv2W2h6bsT9/gyPyNb2DLtH/Bs5rLUSD0/jrob5/OW+VLY\\n\",\n       \"d8ZjHA6Hnei0bF6h35aoTKuz/4p9yEpaMkuLYd7CN9fBcvvMalDSYln0dzOLhNcosEaard2+igKZ\\n\",\n       \"z0sm545yg+euZWna7ddpq3v5HtYcW/ky/8zMSmRaXAlgy5YtvXSU380s7lkGf1suPMegtGBnFnjD\\n\",\n       \"spRZQd3O+8s4i3bZZpxPpbPHu6/MEtdnNXIUIq/ZurAlMruZ8bP5HnPv6Nfy/7ZM8l2sm8ARyY40\\n\",\n       \"zqzGtqJNT093eJpF0vo3aRyW7CBVMtbm5cx82GeWzA4vFioY5bB2ihLa9En7+RhJW5tBTQubcxbO\\n\",\n       \"DnxgyBxEy+dnYcnZJgZf7BybbRjZQuvDuB884IVg8292VZiVa+ij29/JfhB8ePW4fTjy5r2QA+lC\\n\",\n       \"rxl9jbrQq8Bly5alfQJ4bqdhbyAZTVkB6pI2t83k3Ie9cT8QWQqLvgO8nz1OcaAPh2K7uLnbIy9e\\n\",\n       \"syUtnhPvJVmJGL8/jnYfep988sk0CAd4v8icx59pwuaSVh9K3YarGf9YW254n8Maa3LcflHCSorf\\n\",\n       \"t4zN57Bc9pPtH2B2djZVgAzez357Fur43HcgzRSjcQfp7Irb7X249Tosx8I8OgXPuIAg/774d9W0\\n\",\n       \"9KU+yZQbH6Ce6UGqXu1VVFRUVFRUVCwStURMRUVFRUVFRcUY1BIxFRUVFRUVFRXPMpbMR+q0005L\\n\",\n       \"HfSwkn3kIx+JiIi3v/3tI+2mp6c7d9OUTSjLZpSv3NXi0HrRRRdFRFuuhPtUEnTaZ4rU+ZQrWLly\\n\",\n       \"Zefe3SUf3va2t0VE6+DJ5zj84iPw0Y9+tOFJRNfJmDtlp/E//fTTO/4Uvtt2OYGsNIjLFZDy3/yD\\n\",\n       \"FnxKLr/88k45AWBrJ2n53/e+9430YVq4+6akxIknnjjyvuVlu+22a3jocdpPg2fRnhIBPNs8Z9yU\\n\",\n       \"zkBe5vN7Qm5dlsN+e4zDJYJo5ySx0E4JGkoQTU5ONvQ7jcdll10WEW3JH/sA+W9k6+STTx7ph7mC\\n\",\n       \"NtYJfHz3u9+d+oIg/9Dy1re+daSd/SpYH7SnvIUd3SlDAZ+uuOKKzvxkPGeOkC3TAu2EULvsk9d+\\n\",\n       \"GUDBfJ500kkjfWUJN7NSGDzbaSRYF7S3/9umTZua8XpfpA37IOuDvYi+2efcnj0IntP++OOPj4hR\\n\",\n       \"Px77JyIr9G05R8YYN3sX85+VzOF9+P6Wt7wlIkb3H/sElnJb0moHfuAyPsieAyTgD7SfdNJJneSt\\n\",\n       \"/i1Czl0Ky+W8GA9zSjk0rx/4yPf+6I/+qFnPTtsBX57znOeM0EJ7+wraT8tleex7VwbOXHjhhRHR\\n\",\n       \"8txlaux3xj7H3oWMel8BLs1F/9BS0g7dlPxxolmXI6PvDNUiVVFRUVFRUVGxSCyZRaqELRPjQm6H\\n\",\n       \"w2HHagNsHXLZAVso7PGPppFFFJWa/7iSGFki0Sy02FELLtPg9tu2beto3C4BAtDieN9WrixKaaEJ\\n\",\n       
\"yfraZnzJCiZDk/txigInWS01DSfpy6xBgD6sxfWV/CjhJHHbtm3r8NAWCD73s0x7loB1vmhW5p3x\\n\",\n       \"OEWE1wF/Z2H/9Mer56wPWYRnlvzVllo0zkwWLSd9/p22KHiexqU/MH/Grens+2Uf8NDf8Xr2XrRh\\n\",\n       \"w4aI6CYWBLaylRa9LFLWVqIsQaPfJ2IYuepLxVK2m5ubizVr1vSOE2QWCa9/+GgrGpgvEhPesP9Z\\n\",\n       \"tmxR5PMsgatpsrXQsr5ly5aOhanPQlL2yT6Q7af+m+9hPZ4vajGLCM32Fu+HIEthAh+coLVs78jB\\n\",\n       \"7PfNtLPukYMsujGLQF2+fHmazsTrN0sOmqFapCoqKioqKioqFokltUjZD4lToHPVOKna7Oxs8x1r\\n\",\n       \"3rbqcPeblROgRIxLJnBCzTT7bdu2Nd/Jck44X4a1n0wLsH/OfCUlOJ3z6vIJAA3BFoWsdA7aJLTw\\n\",\n       \"6u/NR+e40zx8g3bfSwPPibWFvhIxwPfupgltH+3m4YcfHvnccwQf+3KGZdZLl/7IcpMwPmv7Tp4H\\n\",\n       \"mKPhcJjmxwG29ti3Iyusm1ks5stbxWeWQZAV382sxtDukiP0U47F69UWyXFwbppx+djYX/qso7Y8\\n\",\n       \"Wzu3vEC7Ew1m7XmmLbkTExMdPjDv5mVmRcyS5Ga0+O9t27Y11ilbM5wU0hYG0wLNzAVjs49c+eyI\\n\",\n       \"p/jtvdbrmTVEXx5PNke2ttm6CrZu3drMj9eU16DnjByHmcXGZb+8T5R8t1+Wf2sXWrYLGr3f8Czm\\n\",\n       \"3Oun5Lt9hKGBvTcr5p3R4r0pS645MTHRmR9b6pGlvvxX82GsRWowGPzpYDC4fzAY/LB4b6fBYPCl\\n\",\n       \"wWBw82Aw+OJgMNix+OyswWBwy2AwuHEwGPzCgqioqKioqKioqPgpxELUtCsj4o8j4s+L986MiC8N\\n\",\n       \"h8MPDAaDM57++8zBYHBkRPxmRBwZEXtFxJcHg8Ghw+Gwc6ybnp7u+EaATPPqK8Ngy4DLD6CdoBXY\\n\",\n       \"IuFitT6Jcu8MykyvvtP3OHyP7qi9LC1/Vq7EmvrExEQzLkcy+ZS+cePGEVqz0h/Az7JPzXylMDKf\\n\",\n       \"BeAs0sCFpoE1L2tXpbzYhwPtJPN5srboAqkeC/Jgi2WfVcgWhbJ8Rvk54FmmPfN7KjPdOwLW2qst\\n\",\n       \"snzXvg0APtgfDQuMrQzl9y2/7tu+HbaWuG8X+53Pd2yctTfzkYC3Cy2g7bVpP8+yL/p2pKTnCKuG\\n\",\n       \"9yJrzdlYSsuefWHcxvuj58hW4Iz3gOcx1uXLl6dZwrOKD6xF88W+YrZ4er/ouyGALv9e2IJvv8bM\\n\",\n       \"Updl0zampqY6lnb2rcwC5QhDYFrY042+DOG2DGURpADabGHKstV7j+N3qM8HF567XFUWlQ1oh5xk\\n\",\n       \"vpT2Ey79nrLfOVvFs/nPMNYiNRwOr4mIR/X26yPifz39//8VEb/y9P9/OSKuGg6H08Ph8PaIWB8R\\n\",\n       \"L1kQJRUVFRUVFRUVP2VYrI/U7sPh8P6n/39/ROz+9P/3jIhri3Z3xVOWqQ6Gw2Fz+sv8nICLd05N\\n\",\n       \"TaXRZI4E4ETtIpMATcRRHVn9vLLWlttaK3F+JGu9WX07jzeLnCh5kGmzwCdx7vhddDNr76i/8vPs\\n\",\n       
\"Djs7zWdWE99pA9+/83ep/QJb+5zDx9oxfOP9HXbYYeT9jC99xUr7oiojujXFMr8U52AZZx0p5Q0t\\n\",\n       \"N4t8gpYsUsztmW/6hRasAl6zW7Zs6ayVrD6f8wDxeRZZ6XWVWY/K9+yvx/jH1S90/pssysnWpT7r\\n\",\n       \"i+tY2prR5/NYjsc+VpYX+7mUFs1Mq7e1I4tO6rP+z0eLv1f6/5jHlvNxtda8l3vvysY4NzfXKQBv\\n\",\n       \"WbRPDO2yKFXLxzh/0NWrV3d+S9gPvIaw4jA+rxPDFnr8N/ssdc7BNC4/lK2pyIktd8AFs73v9lmI\\n\",\n       \"oS/zQzS8HjJrqq3FpTW1z5evj95niv901N7wKWrns3/VcjAVFRUVFRUV/0diQbX2BoPB/hHxv4fD\\n\",\n       \"4VFP/31jRPzscDi8bzAYrIuIq4fD4eGDweDMiIjhcHjh0+0+HxHnDIfDb6m/eriqqKioqKio+KnB\\n\",\n       \"s11r7x8j4s1P///NEfEPxfv/YzAYTA0GgwMi4pCI+PYin1FRUVFRUVFR8d8aYy8EB4PBVRHxMxGx\\n\",\n       \"y2AwuDMifj8iLoyIvx0MBm+NiNsj4jciIobD4fWDweBvI+L6iJiJiN8ZJiav3/3d343HHnvsKSKe\\n\",\n       \"vpfcddddI6K946QeDvWt8NafnZ3t+J9QC42aT/SFT8ftt98eEe2d6F/91V81dDw9zojo+soQpQUt\\n\",\n       \"1FqKaKNJ8AeAFuoyUfeL8fE50RnQcuWVV0ZEW1PQeWQcFUidqHe84x0Nr/bbb7+IaH17eAa1k6Bl\\n\",\n       \"zz33jIiIvfZ6ynXthhtuiIjWL8N1vPBL4G7c9/eXXHJJ07ejK533h5pizKfbcc/OK+3huX2HGOP0\\n\",\n       \"9HTDQ9oyHubfuciohcX8E43GXN5zzz0R0foIMP/ULLPf27p16+Khhx6KiIjzzz9/pG/q0iEHDzzw\\n\",\n       \"wMg4XA8PueIZjgZl/t/1rnc1/djPgPFaFsmbxrzCQ+aOGnTU2rKPCZFC8PHyyy+PiKdqVjkCLquF\\n\",\n       \"Bt2O7GGc9EN9O+bUeamgCfm55JJLGp4zPuafvpEt+qa+mWuQ7bzzziPvs7+wLng2finsL6tWrWra\\n\",\n       \"uo6f/ct4FnPkvhkDexlz5fpm9vOYmZnp1KtjPnk2r8gsf1NTDDm3Hyu8Zl9EFum/lAGva/jC/O+2\\n\",\n       \"224jfSH3XnPU5oNWxskehhyxX1DjcNOmTc38sHfwG3LFFVeM0L3TTjtFRLsmkXOehbxQm82+NpZN\\n\",\n       \"aD/55JMbPyxn++b1E5/4RES0NQLZixz1yiu0s1/QD79Z9ne8+OKLGx46P1YZ+RvR8pD5hw+sA/ZH\\n\",\n       \"nuX6dvw+OAJ7zZo1ccEFF0REW2vPtRbtt8VvEeNELpAXxgm/kHXXTy3nEvqYT9OS+aGyRjOMPUgN\\n\",\n       \"h8M3Jh/9X0n7P4yIPxzXb0VFRUVFRUXFTzuWNLP5AQccEBGtFnPjjTdGRNdznhMlGtkRRxzRaG3O\\n\",\n       \"RI12++CDD0ZExJ133jnyPidSwCnYWWSzTOicUA866KDGsnDLLbdERKs5AufoGZeRFW3IkWRYm5x9\\n\",\n       \"d2Jiohk/dGL9yKJI4AsaA1pelhGcZz/66FMZMPrq4VnLycYFnHME7QbNAhoBGghzhdzA77vuuqsz\\n\",\n       
\"TuYbftCnc6+U1ckjWosN7yMXBnN7yCGHNO85UopxlxnII1rt17TQ5x577DHyfebd7cvcV+vWrYuI\\n\",\n       \"dj6d7RlgBd5ll10iImKfffaJiNaiApyNnzlCdk3LcDhs1uS4XDz8vf/++0dEO0eso4wvtoY6x1tE\\n\",\n       \"N8KT9QHPs9w80HzQQQdFRMSRRx4ZERFf/vKXR9oj294f+nIXWe4Zh61AwBo2c4V11LSzT8APrM0b\\n\",\n       \"NmzorAnovu+++0aejawZzmkGP7M6oY5ILfMneZ4cVYYlCjl3pCTtnLOL8XuvK6Me165dGxGtRfLu\\n\",\n       \"u+8eacs+AQ22qmdzBK2sC+TL+8Xc3FzTB1ZCeOr93FY/W/8sT4xz7733johWbtirS2sTawpZ2Xff\\n\",\n       \"fSOitZLdf//9UYJ27LHO8ZRlWXdOOOfpKwF9tCmj8kt4/hmLaxMCR3+W0fCeT3jKHgQP4UcWMWnU\\n\",\n       \"WnsVFRUVFRUVFYvEklmkNm7c2FgWOM2jeVmD5bSLtWDvvfeOe++9t+mnry3v0w7LASdP4Ltit7fv\\n\",\n       \"Ce3uvPPORqvHn8K+LJx+0ZzQRLOcJvDh6KOPHvk+mvjNN9880n44HDYnZuhGs/I4OZWj9btWoS11\\n\",\n       \"aAG2jjkrbdm34SzKBtYCtMTMamANDVqhsbRKou2hpWA1RNtB4wBoJLfddltEtNqc62ABxoocIGfL\\n\",\n       \"ly/vaNLQgB8a/HCNKY/z8MMPj4hW3pnb66+/vrf9gw8+2GitWDGyrL833XRTRLRrjWfYagCNzDf9\\n\",\n       \"zpdlm7bwEPpYHwANFBqQVTR285FnMrf06/pYEa2mbH+RzELHWkQDZf3wausoNOLfg8Xj6quvjoiu\\n\",\n       \"BSMiz4Ld17Zsj/WIceITBJyfDD6vWbOmMz/IM/KKRQKr4Pr160faw2O+x5wiJ55T+IKVZdOmTQ3P\\n\",\n       \"s3ql9AlssQfI8ote9KKIaNeV14NpmZ6ebsaL5SW7kWD+4aX9MQG/TawH5ggZznKmRbQ865Pbkjbm\\n\",\n       \"H9rZa5BpgKxz6wJfbEUux8FeybPtQwqQLW4kDjvssJHvsz6Ara2sC9eZjGh56L3atAL44nxc9G2e\\n\",\n       \"u9oHcz87O9t5Fm35XbT1yzzPUC1SFRUVFRUVFRWLxJJZpFavXt1o6sARNgANhVPwF7/4xeYzrBhu\\n\",\n       \"65N0FqXASfzAAw8c+Z4ztQI0j0cffbSxFOBnYGuHa6Q5A7EtOWj08MV1jayprVq1qvGLsPZnujm9\\n\",\n       \"H3PMMRHRamDwxXfeaKJoGPDN9/0Ro7WM+saXZQenD/tWWSNBo4CGa6+9duR5fA5PIlotxhmIzRf7\\n\",\n       \"3aBhog3asoemyjOvu+66iHhKg7GWznxbQ8pqyjHuz3/+802fEf08j2j5uttuuzXaPXRlVgCswKyl\\n\",\n       \"PgtjOW6PwdGQZf/MB7KUWfXo2+u5jIAs4WzaWAOY05IW85jPvO4NfEHQuFmD1khZk1iH4QvPK62v\\n\",\n       \"aOX2M8IKaJ7zXWQPa9G4LOWsm2uuuSYinhpzZpHGkoac49PjvcgWGfpztna3Zw1MTEw01ivv555P\\n\",\n       \"5AS+ZBaKW2+9NSLaOUDWM1q22267xkLGOmVOAOPmdoG+keWswgN+Scw/68TWke22265j9eiT2/JZ\\n\",\n       
\"PNvZxTO/JPY4+8yVfGHc7HeuR2d5gQbk3DVZbcGC5zwbHz3WT2kJZPzMH8/OKj4gc/CLvqAxy4XJ\\n\",\n       \"GMt6kZZF6Pa8e/8Yh2qRqqioqKioqKhYJJY0ao+TJSdn1/PJ2q9evTqNHrB1xDkqfHp1LS7fu2an\\n\",\n       \"3bVr13a+k1XGRuNynSZrmq5/BTiR99X94zu2dpkWV8xGU7AWBFy/ifZZ3TfoKfsEWW4O+MKp3/mz\\n\",\n       \"gPnM9/qsadDnaCr6tmxZPtA0XasNWANFk52bm+v4gtlHyJq4eQ7tWFldY9H9l1FRzFMWOen1Yi3Q\\n\",\n       \"PHcNSmA5K+F8an3+ERFd/wyswfZ9ANDG+2jXfRE1jAft3JGz5iFWE88n7bP9xfyerwYdfHENTa9/\\n\",\n       \"57JzHrbMgs37PGfFihWdvr13YOUwb02LLVPZHmdr5LZt2zr0u63rYmI9yiKlMkuOUVq22beyaGb7\\n\",\n       \"xNiylLV3Pq75fPFsUcqiWR3dTXtHwwL45b3LeadK0Icjx5El4HXj3Hbj9nT6p11JC7zyGsssr/Cc\\n\",\n       \"OXHtyqxOKPtQ+eysji/ywd5rGsehWqQqKioqKioqKhaJBdXae9YfWmvtVVRUVFRUVPwU4dmutVdR\\n\",\n       \"UVFRUVFR8f97LJmP1CmnnNLcoWfZRqlZRm228v7aUQcXXnhhRLQ1fwDtnD2VWnvU8eF+ljt1Z3Z2\\n\",\n       \"rb3ly5d37mzpg5pCZ555ZkR0895w7wrt1CCjPcAXhLtwxkDdn1NPPbXpyxmdXQuN+lO+2/adObTA\\n\",\n       \"cz5njPjO8HrppZc2beEVbR2FQS0k5sh33dDsenjun/bQMDc316mFRt+8OoM5PPT820cCvp533nkR\\n\",\n       \"EXHWWWf10rJ58+ZmnK5X5rxhrqGIbFEnDloc9cb7tKdO1MTERMfPkL+ZT2rK8bnroAFqbdG3a1oC\\n\",\n       \"5Ih1d8oppzTjRJaQV9Ygdfze+973jjzbfne8UmvL9c0cDQofL7nkkqa+Xeb7xRx8/OMfHxmnLfOO\\n\",\n       \"8qOmHPLlfEOMcfny5Z2akh5Xtv5d346+7X9Vrv+yPfvFE0880eHhOeecM9IHNDBO6Gec7BeOjLNP\\n\",\n       \"JWOlfl5ZH9KRUF6jjngEjBe+MP/Il2lwbVbW9LJlyzrrAl65pqQj5Ly+2YtY/8BRX/49Kn/ngP2o\\n\",\n       \"4PkJJ5zQ2zegH/iI7HrNA2T9Ax/4QLP+s76Zt/e///0REfHGN75xZDyO1qT9n//5n0dE97fLud9m\\n\",\n       \"ZmZG6g+WbZyzjlfqpyK7wL5izP8f/MEfREQri2WdP8bCvFJrD1n0GcRzBO0ZqkWqoqKioqKiomKR\\n\",\n       \"WDKL1GAwaE6StgYY1iqXL1/esTC4Lad057/xqd0REpxis+gUMDs724lM8HccITYuOsG1huBHlrul\\n\",\n       \"fLZ55LbOOcMzHOUIHFno2lvl2KyNOXLOyKwh2Ry5H2u6fVm8zRdrjsAyiNab5ShxZE35PL9na5hr\\n\",\n       \"QBmOthnHx/LZts5ktAD4kGWdtzXRubD6aOfZ9J3lBeNzZCjLZWZaLON90a/WmM3zLPLR7W0dzWjJ\\n\",\n       \"2kV0ZcWWNEcEZVGfHrdp74sKy7LsZxF0Ge9pPy6PECitRVlEsCPkWJsZjZ5DR1SZ9+We5ug089zR\\n\",\n       
\"2Z6bLCu9I2+zjP/D4bAjYyCrPpBZ9rN2fO6bkXKvMw+9njNZzObZY8n2HdNY0sVvCTnwsv0R2cwq\\n\",\n       \"W/TVoI3oRvlNTEykdRlN/7gcVUa1SFVUVFRUVFRULBJLapHyyTnTYDm5csJcvXp1pxI48J0/p1jy\\n\",\n       \"Q1hjcDVzsiZn9/fl89B2eJb7pg9rYNYcAPlReMU/IavNFJFre9mp3lpcponaR8R3yGX/mQaZzSdz\\n\",\n       \"YmthZqmyVpDRHNHlkbU688V+N8BzBZzDqqQt017sn2GfOZDxyxonKDVzy3FfLbyyj0wTN81em1lO\\n\",\n       \"s5UrV6aWFvPFlqvMFwRklgfGUra3D5hz0Jg253izFSnjo2tVYsks2+PrYUur8wkB59GyVuz5Z29i\\n\",\n       \"DGjqTzzxRIdu8mVlPm9ZpmrP1Ti+lJa6LHeXrdzQbb8aYCsjz2D/MB9Ly1YmO4B5Y3y+HcnWqMeW\\n\",\n       \"3WRs27ats/Y8LoAsYWGBD1keKeYwy51XtvfvZGblBbYSew6zmx3/zZhKvsA7xskr+fayCg70RQUR\\n\",\n       \"qm749wI4h9jmzZs7v2uZD232G51hSRNyesPw4ceACdPT0+mGzsBhNiVUfMByn4DND8ZmhUW3bNnS\\n\",\n       \"HLr6rtwiukkgvaF6M/NmR/9Ojgampqaavr05m4dOMGeTqxepTde0d9LF8rvZNUqW1M5/O3Eh8Oad\\n\",\n       \"HczK/3sDhU9ZYIOv/pyoE5CYjv4xS++www6p06wP2tlVlhOTcpBGLrKD2nA4bDngeHgAACAASURB\\n\",\n       \"VOigT29G3jiRezZpy4vlxD9+fbR4nNlh1H35oJkdJH1V5ECJ8rsuK+G+AOudUikONuhLghvRygn9\\n\",\n       \"912neLzzyW1E95AGP0sH7hIO/KC0zHA4TK92vSYzmbKS6OSJ2b5b7hsO+DDdmUtAluzRh53sWqpP\\n\",\n       \"prN9zjQ64W7m2Ew7uw54vygdnBlXti4crGFZy662nQyU/stkur6S9cHBsmgFAzB3mSsIhyL+Zn2U\\n\",\n       \"tNAn5diQaweEAYqbs2dZhk0j/fBaXv1lSbC99rLzRYZ6tVdRUVFRUVFRsUj8t3A2t1bkk7rbzczM\\n\",\n       \"pCY3p+4fVyoBuPSDrwiMycnJTtkV921tzpqGNSkKpmKBAJg83f+yZcsas6UtL9n4Mt5mViBfjWVa\\n\",\n       \"ZvmdLHU/sHOsw3fHaaS+Aivbm4bMsRu4/IpN95ZN/naR6ImJifR6JCtanV0bWDvKNK+ypJA16MzU\\n\",\n       \"bo3LIeXAzui+yuy7IrcVIwuSyKwlmTndDqFG2T67/rMJH1j+0cRd8BRYXhysUVqNsquWzBpk64eL\\n\",\n       \"oGelk/ocqG15tcUI7d7lfEy7i0D3XaeWNJQBMg4qAF7/dmi3nPuKk6tAW/pNS3nljZxnJcJ8tZtZ\\n\",\n       \"6qCxLIRbjqXPmurAHpCVNrFFd1yQCpjv+s0BD6Y3s9R4rxoXMGUa+2hHrj3f2RU2FiVkls+zGwxf\\n\",\n       \"R9Pv1NRU6mZgvmS/lxmqRaqioqKioqKiYpGoJWIqKioqKioqKsagloipqKioqKioqHiWsWQ+UpRP\\n\",\n       \"iGjvpbnz5l6S8gOUTuB9/IIiIh544IGIaNPskx6e+1TuY7kL5a6fFP5Z6RSega8EKeJpPz093dyn\\n\",\n       
\"upQHbU03tHDnzyvlB1xmweAumNIJ7373u5t7dcZLpB9+CPCF8iP27XEEDKUTKLPAnMA32nGvfcEF\\n\",\n       \"FzR0A9/Hl6UKItoSIXx+9913j4yB9n/5l385whdoZm743rJlyxpZcYkg5gjeQzfyR0kRQmWBI0k/\\n\",\n       \"9rGPRUTE2WefHRHd6LeZmZmmb+afchL4vBFVZX8T5uitb33rCM32qaM95Q3gy+bNmxue2UcwK1dE\\n\",\n       \"tBrzixzRN2Ucdt9994iI2GOPPSIi4v7774+INvSYcjhnnXVW0yeRjdAEb5Fzxrlu3bqR8Tqi1mWf\\n\",\n       \"4B8yS0oTIoPOPffcOP7440fG6Wgz/GuYI8qPOLkr+wo+k+U4S1rtW/Tkk082a+htb3tbRLRy7jB2\\n\",\n       \"ZPLcc88dGSc0Oz2E55/+7Xu1cuXKpi08L8umRLTRzIyT8TBOaEH22FecFJF1x75YRlTZb499C7lF\\n\",\n       \"9rxPuNTS7/3e70UJ5Al5cAki9rrly5d3EqYyn8gWexHj45W58b7IOmIPgs+sC6I/KW9y+umnd8p3\\n\",\n       \"sTbpm/JTrDlgnynauxSOfzfpn+dedNFFzXyy5vbZZ5+IiLjuuusiIuL222+PiIirrroqItryM/SF\\n\",\n       \"/65/P9i7XDrnwQcfjIhRP0HKlUGL5RtfKPpm/pGt0tep5I9LBMFH5oi9euvWrR3ZYv2zRzu6G5mk\\n\",\n       \"LE+GapGqqKioqKioqFgklswitXHjxuYUiIZOdFoWzcBperfddotbbrmlt1/nJMoKogLn7OF7nEzt\\n\",\n       \"Q4ZWsGHDhuYz6M+iTXbZZZeIaE+3aOymhffpD82Lk7qjX7Zs2dL0OS7/iaOvnPvJgG/33XffSDss\\n\",\n       \"E8xFRDcnif82X7KU/mjJ/hw+OKKiL/oJLRUNCp7R9/7779/bt/MAZdFJ5GuiXzTziG6OKmhxKZSs\\n\",\n       \"XA3v8z0+R34MZPaRRx6J9evXR0TEXnvtFRGtxgmwICFbTpiXRYwdcMABEfHUmovoat5gOBw269nW\\n\",\n       \"YFua4PWdd94ZEa1GCc2WF+YCixc0wJ9yTh3Jk0X6gVJbLV/Zc8r8NxHtOmPeoc2RtiW9jpDDypdZ\\n\",\n       \"np2okrlhfwC870jC4XCY5kti3dI3OXr6kv2W7zsCddy+MTk52clFB8wP5syJNk0DfGPe4XlfNDNj\\n\",\n       \"Zv6Y16wUij/P1j/Yd999I6KdY/YF9//44493ysjAQ99keF8oo3IjurKMJZZ15PVQysCuu+4aERHH\\n\",\n       \"HnvsSJ9ZLi7PmfPTWXaZUyehdhm1iJZnLt/jvHKAucHCxOf0k+V6w7pYRpJ6PmmLBQ1a9t57795x\\n\",\n       \"ZqgWqYqKioqKioqKRWLJLFJlZnBOgZx2ncGXE+XatWsj4qmTZZbnwflg6NvWL4BWb80T+ESKJr5l\\n\",\n       \"y5bGUpCV/HABXOfysdYILZy8XQLBGsyOO+7YjMeWKWsvmQUiK17Lad+nfudNKd9ziZDsNM84sBIB\\n\",\n       \"NM9bb7115H3ndnK5itISZN8mvoPGxHiMMit0RDdjNbCfC3K1Zs2aNJs82j9WHaxDWM0A68GZvhmv\\n\",\n       \"ZRQLxYYNG1LrBMBSleVJsnWMtUa7m2++OSJazc0WjPvvv795D8sycmwLI7KJNQQZhuelD2T5LLR+\\n\",\n       
\"W19LC5Z94ZjvrPyOtX3Pr9eorUSugFCuO2TOFkZo9F5kK0hWSgk41xsYDoed+cSCgtXvrrvuioiW\\n\",\n       \"T6VlNaKbPZ69yVnXgcuZTExMdKx7gPHTp318bJFk3dD+jjvuiIg8+zZrcmJiohlXX/HciHY/Z1zI\\n\",\n       \"PXNjS81RRx0VES0focXrr6Qls7CZltLfshyfy9B4nMgXY+2rbnHwwQePvAfdfNcy5H3fvM4KybOG\\n\",\n       \"nVOulBfnl4Ne5KKvgkdE6xPJuJ3JHTB3fcWi3Zbx05a9y7wch2qRqqioqKioqKhYJJbMIrXjjjs2\\n\",\n       \"p1r7wPhEysn13nvvjYinTr+cmH1K55TqgqGckH3/mtV7yjK3lgWFXdMpO6VnVh+fdjlpW8vNMjtv\\n\",\n       \"3ry5U2MtK0aMNuf6Tdl4aYcWwCmf15IWW6ScJTbzp2CuHAHn+UdrgH8uhFlqU3yGP4WjTMxDxoM1\\n\",\n       \"yBF15qNr7YHp6emO5cXZg9F+7ccHbIlBQ+P7fiZj2X333TvRJrYYuEAoWh7asmvSsU6wGjEH8Kkv\\n\",\n       \"gzj+c4wLut03NCJbrCPX6/I46df1HsuxZsXM+2QlopvZ2rQYjhy0rJfy4vnLiu8CxmFLXlZ9wJFF\\n\",\n       \"0G5fqojWr8xRetk+x/6JZm5/J69prEZlJF62LwLaumap90msoFhR4B9y5TktrSLO5O81hPzDc+9Z\\n\",\n       \"ph1a4SOv7N3+fVm1alVn3uE5fYG+7Pglzf69sG+hZbe0hME7ovP43cj8r/jblpmsULSLgmfFziPa\\n\",\n       \"dUEbV7bwfNr6a4tbNqegpCHLVM7tCL5kWfWRDNUiVVFRUVFRUVGxSCyZRWr58uXN6Q/NxJW0AZaK\\n\",\n       \"8jWrnWXtLKuxBKzB0h6NzLTQ/9zcXEeDyqw6rm6d1VpzDiCf8vtgTRoeuW/feZvX1qSwElqrxhpQ\\n\",\n       \"at6uoO36ShnvXVvOfhaAZ5a1Fsv+Sxlw5BwaFXPgvtEk8XeDD5kfm/NplX5htoLYYpnNO7CfhS20\\n\",\n       \"mR/fypUrO7ljrNX2WRLnowlLFMASY18isOOOO3YiIOGHeUi7TPO0BYsxoTVCA1aEcqzORWbfKNMC\\n\",\n       \"ra61lsmua9BlmnlEV5u3JS3zqbP/WhY5Bq22Lk5OTqbWS0ebZtHJ0Eo77w/jrOuPP/54IyO2pDmP\\n\",\n       \"nq3ptqbaQu+5tAUC6/uWLVs61g6vC88zNFiOgKOBfYtgPq5du7Zp69sP04I8m5bMv8+RcY7ILucE\\n\",\n       \"i6QtkRnd9r11/U/TkvkW9kWom+f8nVmNPc/2IfSehp+bo8effPLJzrrg9w3ruGnxfpGhWqQqKioq\\n\",\n       \"KioqKhaJWmuvoqKioqKiomIMaq29ioqKioqKiopnGUvmI0WdoIjuXbDrflGbp/RLcK4dauFQx8n3\\n\",\n       \"0dz1cid6/vnnR0Rbx4n29tehvWu5lfeyzmlFHR/GmOWu4T6WOk7U5jOgjXtr+HLWWWc1dNq3B7ov\\n\",\n       \"uuiiEbrdDnA/T222U089dYRG5zLh/UsvvbSpywScoZa/qYUEz11Dynfa1Amjvf0wSv8H+natPfvS\\n\",\n       \"MW54Tt/2y3COHuTLNcjKKEjz8D3vec8InfZ9wBeI9tACjfYJgy/QQt2vmZmZzt0+QG7Nc9OCnwX1\\n\",\n       
\"EJlT+wjYpwZZPP300zu+Cs4tQ9/IFrCvm2ttmuf2T+P10ksvjXe84x0jfbiaAN+l7pdpAcwBfKFO\\n\",\n       \"HHw0rdCwYsWKZpzUfLM/kSNr4SH7hdcY48Zf6eKLLx6h3f5tZR1Q9iLqjzmSjnGyt7gGoX1qXO/0\\n\",\n       \"/e9//wjtpazb/yzb5yxjrhNJe+8r8Ml1//pqljJf5jnjzKK36ZvafPDcvlOuvQgfTz311E5kp/ci\\n\",\n       \"1jNrzjnwAOuI+adeqH3E+uQLnjgyznuT5YW5w2cOX9ty/y/5wv5g37S1a9c2v0XQwvgcOcgzszp+\\n\",\n       \"fO49m/6pn+i8ZFu3bm3eo75luYeWfHCkLOs/Q7VIVVRUVFRUVFQsEktmkSp9s8b5afXlPsoi2WxZ\\n\",\n       \"chZVW4ey97PIqjL6xfX7MouJae3LrVH2nWUKN03btm1Ls+Rm+TKyvgxrUfOBZzkKI6PFVhHT5Gda\\n\",\n       \"a7Jmmo21D5ncOOu8K40DW6KYs+np6c4cWNMyX9zeUYx+RhYVWv4/yznjPrLKAIb7NX/K/rNxGlmG\\n\",\n       \"blsDjCxSroQju/iOM/MDNGzzPMsjZEs4n4+rzdbXVzaObPxZJJ75Pt/aXugc0c6Wmixa2nmDhsNh\\n\",\n       \"Zw9238DWQ69/r3fLolFGpFne++TWdJftszXblzW7D7Ozsx2ejVvPtoZnEcT+/rgovz6Mqz7BM7kl\\n\",\n       \"8jwDLFH+Perje/b7kK2hLIox2+uyWqyzs7OpdTj7jV4oqkWqoqKioqKiomKRWDKL1MTEROeEnllq\\n\",\n       \"XFG6PIlnOUrcV19ujfJ95/qwn4L7n56e7mglzn+TZQ9mHNYwnOnVuT6cCXt2djat0m1kJ/Ess7mz\\n\",\n       \"8WYaeglrUlktMJ/2rUln1b9tRbH/WzlOW7kyS5wrroNMc3d/Wa26so/MOpppu6WVq3ymczeVmr2f\\n\",\n       \"ZViTzCyzIMt4nFmy5ubmOhbVzHpln6hMK8xom8865hxsnt9sPs0/vuf9xfPOenMun/L/C80L5PHb\\n\",\n       \"GuT5p719ZPr6dg1OPnf+Hz97IdbQvu9t27YttVBnsuV1Dbz+yfHkSgegL5dexvPMIpXtd8515LVs\\n\",\n       \"Wqanp1PrsN8fZy0ZlwtxvlxPmcWtbw8tx4F88Ntmq7lpyG5hStoZp/NHZtYxW6LH3TL5N7/cA7Lb\\n\",\n       \"jiyze80jVVFRUVFRUVHxX4wls0hNTk52LBGZNm2ryWAw6GQwB5ykORGT5dbZUA2fdukny1y8ZcuW\\n\",\n       \"jjYyTsOwv4a1F2eR9Zg81sFg0HnmuFO9NY/MFyDzpenTijIasjt6a0W2YLg/a4G+ty8zBPvZtjSY\\n\",\n       \"/qw2XSaLmR/PqlWrOv5U1vrcp9tnY8j81co5Guf7ltVDzCyS5rmzrVuOBoNBWq8ys3Y6iinTTO2/\\n\",\n       \"BvosWNZWnWnZ819WKijboYET/QqQNUf12I+zpCXz7emLtusbV7aekF38vEqa+uanfGUczOu49W+Y\\n\",\n       \"RvorLTnO6O621BL0/u+16PUDzcyR+y/3esu515znyD6BmaWCfuFzVldw2bJlHdnKLHW2pLC/ZRUC\\n\",\n       \"+DvL8F3Ki+lGdtwHyCLqMgtOto/2VRRgvpgL2jBOyy6Aj765MV8yWku/Zvc5zpduHKpFqqKioqKi\\n\",\n       
\"oqJikVgyi9TKlSs7J8rMa5/Tbulb5BM1sJ+SP8/uUx0xktU3K+uJWcO0tgMt9O3IoXG19pwvx3yZ\\n\",\n       \"mppqTvfj6trZd8S5WLLoNDCu1tJ8yHxkMkue/3YEiev/lXNsHvFZVscJMK9oRZkfky0cpWxaVjKr\\n\",\n       \"gPsCWE/tEzLf/NPeFtfMeuEIIP62Vg/oj7FlfkyTk5Od9WyrDch8vcbVtwPzRaVZw/Z8ZeufWlvM\\n\",\n       \"QTb/lkX70pRjtcbNZ1kNPfYLtHrX/3NtNnIc2So2HA7T2nn2t+PV1nHvuSCzYNAe/s3NzXVqi5ru\\n\",\n       \"zEJj+bBMU0sts6YxlyXt9g0z3ZapLFIuu2XIbjqmpqY6bfr8KUu4RmG2TzJ+51djHZY0eR3wjOwG\\n\",\n       \"w7+HpsF8ZE5tyevzY0NG3Be+beOiE0FmBfT+WdKU+YZm67n6SFVUVFRUVFRU/Bej1tqrqKioqKio\\n\",\n       \"qBiDWmuvoqKioqKiouJZxpL5SJ188slNvaZHH300Ikbv1yPaWmuun7Vq1apOFBm1kN73vvdFRBvJ\\n\",\n       \"wt3tQw89FBERjz32WEREXHXVVRERcdJJJ0VEe4fL/Sp/c99MnagTTjih6RcfAOjFd4daSNROynJT\\n\",\n       \"uaYgNYjs10N0CzRRg+i0007rRC/y3QcffHCEbmpKbdy4MSK6fltlvTL6jojYY489Rj6nZhn+Guec\\n\",\n       \"c05ax80+DNRCguf2+WCOdt555xG+MP+O1kB+ItraVmefffYIH2655ZYR3uGHwjjf9KY3RUTrI+Uo\\n\",\n       \"Ne766d9zij/Lbrvt1tDFOOELtCCTzMHuu+8eEW0dpxNPPHGERvgJn/AhoDYX9c2mpqaaNQS9yCK1\\n\",\n       \"szxHjO+BBx4Y4SXjPOecc3pphj+sVWg/7bTTmvfok7XE+9BNDUJoYN6dL4n21P2CRngObfjMXHTR\\n\",\n       \"RfH2t789IlqZY+3ss88+I/RTa5O+4Tnt7UPiup/wEd8j+n3kkUea9UktNMa1adOmkXHzvseJrLK/\\n\",\n       \"2Ffqsssui4i2lh+0I38rV65seMR8soZY73vuuecILx9++OGRcSJbjBNe8wzG/fGPfzwiWtllHQ0G\\n\",\n       \"g04kFzUI2ecsqwDfl/POOy8i2r3L8gFf4Cv7LrRPTk42+xUyAk9da5NxIbOsp912222kvWvK+TeM\\n\",\n       \"McPHc889t5Ghu+++e+RZfJffOdcJhW+77rprRLTrn3Eef/zxI8+85557Rmjmt+9DH/pQM07k25GT\\n\",\n       \"zCeySL065t/1/5gjyxd7HXMCn1asWBGf/OQnI6Ldcz3v/s1lj+b3wvmnGANyxLpDXhhbWavWtRZZ\\n\",\n       \"o6xf5jGbowzVIlVRUVFRUVFRsUgsmUVqamoq7rzzzohoT+qvfvWrI6Ib5YMmUmYfP/zwwyMi4rvf\\n\",\n       \"/e5IW07CWFI4xf7kJz+JiG50CloNkR5oLJz+Od0DTvkrV65sTry85xO282McdthhEfGU1lrS5HGi\\n\",\n       \"1UHrzTffHBH9kVVoO5y6b7vttohorQAeJ5qZrVyOjIBvt99+e0REfOlLX4qIiKOOOioiIo488shO\\n\",\n       \"W/qib0738Ac4/w193nHHHRHRWigA34dfz3ve8yKi5cfXvva1MNCwr7nmmohotVsskoA5Q1O5/vrr\\n\",\n       
\"IyLida97XUS0lj2AbN5///0REXH00UdHxFNyg+wB/sZiwJwgs44goR1ziRXoRS96UUS0cwHKjL/X\\n\",\n       \"XnttRES84AUviIiIvfbaa6Sts+MfdNBBEdFq9/fdd99Ie2hlTl72speN0PaP//iPHVqcQ4b5LS2H\\n\",\n       \"Ea0myXh4xm/+5m9GRDsHwBGHn/rUpyIimj3gkEMOado6utB12rB2lnRHtPJy7LHHRkQ7F6YFONrn\\n\",\n       \"rrvuiojRdcezkVv2gRe/+MUR0ZUt5sYyyZ5kPtqKztzss88+HTmHXvqGtm9/+9sR0V2jrtaALGPh\\n\",\n       \"2XvvvUfaw0eeu2bNmmbvZB8A7DW77LJLRLS8dyZq0+7s/OvXr4+Idr8EyNfGjRs7UWrsTaYFq8+P\\n\",\n       \"f/zjkc/ZLwF84vfhwAMPjIiIz372syPvgxUrVjR0Q+9xxx0XEdHZL5x9nPXDGDxOaGGf4HcUGS7l\\n\",\n       \"xRb25z//+RER8aMf/Sgiotk/AONg3TBHrHHmDsBzXuEnVrK+PFLspcxJFrXv3FfQdOihh448AzBG\\n\",\n       \"+odvmzdvbngKmH/W2E033RQRrdx7/jMs2UFqxYoVzQJ7wxveEBHtxnLjjTeOtIWxbPaTk5OdEFqA\\n\",\n       \"UHEw+vrXv948LyLiiCOOGGkPYx1qilCycQAEaYcddmj6YiK9iGwmZnxZ8VWH3EIbguaNdHZ2thnX\\n\",\n       \"vffeGxHtxPMjA+iDBcW1EhuH++bZ8PyXf/mXI6I9oJaHQJt7WXS8Zj/q3jB+7ud+LiJaof7c5z4X\\n\",\n       \"Ee3CYeOBn6961asiYlRe4Dkb+kte8pKIaH/gvvjFL47QwsJnbjggIl/wFbDwfCB9+OGHO6VtfP3D\\n\",\n       \"nGRlaZCt8scoouU5V2bu/4knnmjGyasPo/Ca+eRHDtp8zcocwevvf//7EdHyp6/4L5sP68KbFmB+\\n\",\n       \"uXblMLrvvvtGRMS//Mu/9H6PA9crX/nKiIh4/etfP/J+H+AZSpvXv69ob7jhhohorwKRJwAfmSPk\\n\",\n       \"xlflEa1scOj+pV/6pYho5diHV/YPH16YS//AOIkk/T366KMNL4GTGbK/IceZ8sqPkFO3+Eed/lES\\n\",\n       \"V69eHT/84Q8jolvCxrQgY3arAFb62JMZrw+BtL/nnnuaK0z2yazYMn1zQITnWXkbgDKAMsj4v/CF\\n\",\n       \"L0TEU3KFnCOnrGd+tAHzD60/8zM/ExERn//850doAv7t+53f+Z2IaPfjco9mf4deFMyXvvSlEdHK\\n\",\n       \"8Te+8Y2I6O5ByAPtPEcuJYTCQvvSyOCUG9l+COwqgrGA8dnY4cLJHOoeeeSRznyyxqCPK0pkjH1j\\n\",\n       \"HOrVXkVFRUVFRUXFIrFkFqm5ubnm1Ip2c91110VE1+SJJsbJc/369Y22i2kVcEpHM+C7WAOyEiGc\\n\",\n       \"qDktc4rNkoNNT0832jyakU+7NkljOeH6xVYgaGD8XBdgLbDWODk52dDDd+jTWqDLcaBZoQ1aw4QP\\n\",\n       \"+++/f0REHHPMMRER8ZWvfKUzVjuZOxGpk6TCN6xEaFxYAQ4++OBe2vneD37wg4joWngiWm0MTRvL\\n\",\n       \"GxoYFkvTznjRLOnH1lH4x5jQ3LZt29YxA9MnmiTmf8aZJfHDmgptWElsHS0tUs997nMjor2K4hoB\\n\",\n       
\"MBeM65vf/GZEtBq4r6Xh+QEHHBARrUVqv/32i4iWT2jew+GwWZPwzM6yANnBWoCMcYVt2bVcIYuM\\n\",\n       \"9Tvf+U7T1skbnfwv4zn8wSrA1YWv3+zwDE1clZbXWDwT6xBtv/e97408Ezg4A/5gFfD+wr5g5+tH\\n\",\n       \"Hnmkc81qR314zP6ZJc1lX7Elx3PEWHjOLrvs0tBlS6qvmdjXsex7X4QW9ihowrpo2aX/1atXN+Nl\\n\",\n       \"j7YLA397ntknnByUv5GrL3/5yxER8ZrXvCYiuldBk5OTzXtYc7MCx3aepj0WGFswy9uRiHZPZ48u\\n\",\n       \"k+eynlmLWJ6wjrHnAAcCMCf0Y9cR2nMLwV6ABay0BLs0GDx1YlHgxJ3QgrXIFqksAer09HTndy7b\\n\",\n       \"F7Gmsq7HoVqkKioqKioqKioWiSWzSE1PTzcaGtqznSwBd958fuCBBza+K5nlhdMrDmnA97BoL75D\\n\",\n       \"t+Ov+5+ZmWksBLYgAE7Y69atGxkH1rKsXAlasUP27UA6NzfXCQV1GDxwKRw0B0739pFx0UosOryP\\n\",\n       \"w+yVV17ZKb+A5QieZmn50WqwMNA3tJkfWJfsrFzy3bTYvwJH9U9/+tMj44NWrD9ovXZOhn/IHa+r\\n\",\n       \"V69u5hVAJxoVc4XcZMlwkUkXsbZ2zBhXrlzZWC+hG4sRfmb21+JvBwgA+kGDxfqHX4LXxdq1a5u1\\n\",\n       \"gQwity4rYwdffEiQUdYLQPOGj1i86Bfn2i984QudArfIN5YX+zG6nAavWEfsKO3SEU43UvbPPKIF\\n\",\n       \"4yuFtSsrX2Ua7YQMXN4Jfi9btqzjy8M47NCblS1hjthzoInv24LjMjgbN25s6LLFwPQj11g7swK6\\n\",\n       \"rBvGiex6v4BPWFsiWrk3X3g28s56dpkj08yz8TFC3vx7tG3btma+aYMlNfNLxJrz13/91xHR8t58\\n\",\n       \"ceqbW2+9NSLadVXeBMA71gP+pb79AfDL+ye/dbZ4upSMfXbLmwDGaTlHRr3n2nKHnCBXmTUdfmV+\\n\",\n       \"XeV78BArF+cG+/dmqBapioqKioqKiopFopaIqaioqKioqKgYg1oipqKioqKioqLiWcaS+Ui9613v\\n\",\n       \"6kRvcC/P35RCILcDWLlyZSc654ILLoiINuU796/2r+B7pJMnzT7tuJ/lbpw7VKfCX7FiRScZJvRT\\n\",\n       \"ToBSBfjGOBeL0+xTOsP39M4XU5aI8N00fUI/qfApbWC+2f+GUjuUCKBfnsNdO/y9/PLLm7Ip0Gdf\\n\",\n       \"McZNiQhKYeA7wrOd/4P28LEsgeHnUE6A+Tev+S7+B5RCoMwG7fzKs5Avxmr5KHO8XHHFFSN0A8Zp\\n\",\n       \"Xwd4Tnt4zvhoD19cxieiGwnFeBknbekDup14j3VBGQ/zwb40lE44+eSTO9Fj9i+h7Tvf+c6RPi2L\\n\",\n       \"PIPyNpR8APiWQDPr5cMf/nAj546uc6QQskXfXmPwhWfBR2j33JR/Mz8ubdKXWDiiLbPC+ncpGfpm\\n\",\n       \"DbJfILv2tSr5zl4E3awx9w2Y/0zO+T7jplwJ8lXKR9Y344QPnhuX5aG8CbTgt4O8MP5PfOITEdHu\\n\",\n       \"0TMzMx1Z4Tsf+9jHRmjxOGkPbR6n9xfv2WV5M3hlWYRXyBZ0Q4vLd3n/P/PMM6MPzBFj/chHPtLM\\n\",\n       
\"J7R4vwDsc6x/R9TBe/6Gj9AC3+ADtM/NzTU8IfGuy07RFt8uEu+eddZZI7TbZ5BnkXTZpbnKOeK7\\n\",\n       \"7C3l73lE6xvm/HKs6QzVIlVRUVFRUVFRsUgsmUVqcnKyOVE6h4vzpXBqLk+kthwATsw+lbvMBODU\\n\",\n       \"yimYZ0OboxPoZ9myZc2pHs3KtPh0zncZryNI6MeZv615gampqWY80OLTfdm2HD99enzAmrY1tHKs\\n\",\n       \"1hDsd5flbvG4+NslImwt4Hl9tPB/lwgBWb4ca1wuqeLvu5jr1q1be0v4lLTwXVucAFpQxnvnHaLd\\n\",\n       \"zMxM0xYZspwDl0Sgb/Pcc+m5cv8TExNNX4yP73gObJmzhm45t1XBc9wHPrOseP6JtGTvcZZ+r1Hz\\n\",\n       \"wxG25RzZuuF5z9aJo7F43+OlX6/NmZmZDg+RTefkyXjoz12+xXNqS9VwOGzGZx6ahrKwbdmH4XVk\\n\",\n       \"fs03DvcB6Mu3BMxvn5yX3/NtSt868jrOeI/8+7fH5YiA58B7Utnetyfmx4eztQAAIABJREFUi3no\\n\",\n       \"0kqGx1lansrv25pe0sBrH7198HnB/DRtvJb7TJY93fNIO1vsMlSLVEVFRUVFRUXFIrGkFilgbcCa\\n\",\n       \"uk/ek5OTadtMkyA7cFacl5xGnIqxTDh3T5kp2UVDXZeLMdIOnyCfekGWs8p35mBqaqrjP0GbPutV\\n\",\n       \"2ZcLO86n1UV0NbGSL767Hgdo8x13dm9vq4LHVOZjYRzOm4V2l1k77FvkZwFrXHxv1apVnQzuztUD\\n\",\n       \"D23JNJw3JuNrOVZbu7KcU7bqIZvOsWKNMst5VI41y1js+bR/iXmajddaMO3L95lP+jANtnYgF7T3\\n\",\n       \"uLz3APtF0m9plYSu0oo9H+yPYouUrQPOylxq0Zm1wvNJO2eqNk3k6nH+NGCL3GAwSC2MnmdblrwX\\n\",\n       \"2VoAj1k/pp05XLVqVcdXJ7uR8HftG+j2zpuF3JmWZcuWdfLpZdbyzAqa3UiAzMJf8tEWM+8t5jnj\\n\",\n       \"Mf9s7QHwyVYj2pV1N+23DDLLm2ket3+6EgDy0pdfzfu/fYKrRaqioqKioqKi4r8YS5rZ3FFpmQbj\\n\",\n       \"7NNzc3NpXZ6sfg+nUmtSttDYV8YofYqgi/ey+2Q/K/Ml8hgyawDYunVr6ndhiwrjZpyc5unbvlJZ\\n\",\n       \"Nl2e56y0JbIM78AaJrQ4CgvYsmMLVml9gS7fozM+j9O0WH6seSEXtoiuXLkyzVTvyEH7MgCeZWsB\\n\",\n       \"33MmZPg0GAzG+oJkWqtrTQLWouUrk8UtW7Y0PFmoRc3I2ntO4ZOj4CJaXmW+UVl9O685eG2LhP1Y\\n\",\n       \"3G9pVYYunuHIscwi7TXK6zjfw1JuPE77GfIMW+JNC7Qj90RaeR3Bt9LnMLPyIfe2Emd7i/16Mt87\\n\",\n       \"UFo8Lbe21GY+Q7ZUur2tYq4YAaanpzv7lv3wgG8N7L9o2h3lDGyNL/vKLFGeT8ui5d28J5rb1uW+\\n\",\n       \"Pd1WPZ5FxLz3XOQFWh1Z6znC79E3HBs2bOjsc/alhjb/Xo5DtUhVVFRUVFRUVCwSS2qRstXAWjNw\\n\",\n       \"Po1t27almrRzDHGKt9YDHLXhOj1Zjpa5ubmOP5Y1gyw6yX0BfGwYL1oPWmCfH4tzrzg6A9jiYm0o\\n\",\n       
\"q80FTZzu+yw75qExzs/EvgGm3eO2z0lpkXAOHmsYvld3fhxbOjNfE/sKTE9Pd+i0NcTPyny+zEfk\\n\",\n       \"yLRYGy6/m0Wn2OKCDJovjnpz7qMs+qmkx1FlwHyxFcntsYZAsy2ZfVZikEWxArRgW8ft3wgsF1h0\\n\",\n       \"PLcRrSbNuJBBWyIBn9tXyDmKAM82n2dnZzsRpPaJZF07cs7tPe5sreOXUkacwSNbUoH9kDJLHTx1\\n\",\n       \"LdbMMuUal2WftupkUavAezpzYf/OzHdsxYoVndqZpgnY0mrfIM+pb3Ds31P2799Y1xg0LZlFN4vm\\n\",\n       \"y6zOfZY91/u0L6GRWcEzn0rzpbx1yG5qkEX/vmV+WEa1SFVUVFRUVFRULBK11l5FRUVFRUVFxRjU\\n\",\n       \"WnsVFRUVFRUVFc8ylsxH6pRTTkkz/AJqClGDrC9igLtN6jKV9cciuvktuC+ldpLrVWV+PtS3cv20\\n\",\n       \"8jvQ5bam25l9XTvLfHEURlmDyr5PjlL54Ac/GBHRqUGW5dOib+rhZaD9Rz7ykaa2kfkBeCY8p46T\\n\",\n       \"I0HsfwJfXCeuLxsxfVM7yTz339Qgy/jiZ7gelvk9MTHRzA81pTz/WZ4U2psWf898oX5aH/gONaKY\\n\",\n       \"o8znAbim5Lj+4cupp57aods+D66dmK1/5IC++9Zc+X3aX3rppR1ZdFvodt2/+WQroru/OFqvXE9l\\n\",\n       \"/cH5UK6hiHZd2NcFuE6ga7OV4LvUFEO2skg670WM0/43fqVOZN9YLbeMk76d68u5nuA5+6Kj3UwL\\n\",\n       \"Yy3lJdt7WRfUoMuiU13L1bILnHcN2t/xjnekPqIgm6NMJj3OLAq8XBfZvujfor46niUsD8gi9fCy\\n\",\n       \"fWViYiLOP//8iOjOZ8Yf12YErlLCXDBHZ599dtqe7/zP//k/IyL/jfb6Z01nqBapioqKioqKiopF\\n\",\n       \"YsksUiWsLWZWofLEbUtD1pejjxaaX8fPmQ8+xfbRWz4r6zt7lnNzlO9nOZscdZNZ/bK+x0U79fXh\\n\",\n       \"7NnZeMbVgRvHc2fn7kPG2ywKL/ue+WbLVak9OhJy3HfH0ZxpSW4/HA4785XJnl8Xstb8rPn6Lz9z\\n\",\n       \"7ppxfWfIahNm/ZVtF7qnjFsf2bOy9bYYLDTDO5hvzfqzcTUHx/nKZtagxSCr45hVMLDMgnGRteVn\\n\",\n       \"2Z6UzT80jdvT/XdfNGv2W2Vke1FGY2Z1XwjGtc36zvaNrJ5oX5Snxzlu73qmtGfR8WX+SZDxbtxe\\n\",\n       \"bVSLVEVFRUVFRUXFIrFkFqnBYDBW0+77znyfR3RzdVhrsXac1QjKMiGX+VTmq3A9H532HQDOPmwf\\n\",\n       \"kL7cQOOyaWe0mC+Zz0Q2htJyl2WXzzCupuI4LXAhfY/zkfIzs7//M/B4xllJrHna8pb5QJSymNGf\\n\",\n       \"9Q2e6fzPN+eLzSuWwXy0haqvv3H0Z8i05QyZb135Xc975gNFPh2v+0xe5ls/z9Sqk83nOBoWgqw6\\n\",\n       \"gn0/s3U+rsZcdotQzv24fW4hlpTyb1uest+X8vvj1mifv135d7an+9l96yLbazK+GOOswhlNfc93\\n\",\n       \"/UvXLfR3yXnmvGRZ3jl+0+mXvFUzMzNj13fG03GoFqmKioqKioqKikXiv4WP1H/GSjDurnahyHxI\\n\",\n       
\"5rtTX6i2aytApmFYgx3nz9N3/57d7Wb8WCif5puTcb5g4/pcqL/GuLnp63Ocv9ZC/brcfx9tmV+a\\n\",\n       \"6c58QUzTQp+9kDaZlSvzhRtnLXsmzxhnYR7H+3Fz1/feM7VIZjJsjdSyN59v1H/W2pnx0/31Wd8W\\n\",\n       \"6vs2zufNr5ml5pnUVfQebTl5pnt3xpeS7sxSks3fOMtThmfi/5dZGBe6n46z/sz37IXudxmy2xRb\\n\",\n       \"MP288v/2W86qiXgOx+1FtnSVz1mobC3ED7dEtUhVVFRUVFRUVCwSS2aRmpiYeMbWg/Lz7GTpE+S4\\n\",\n       \"E6U1Ep/+/XmfJYZnZHlfxuXu8bPdPhtDyQN/Z1w047j2z8RSs1ALFMgiihZqDZovWsVa+jONEHPf\\n\",\n       \"tkhkeXiGw2Gq3S40Imyc71iGZcuWdSK9Mo3Q2tq4qM2FanB9UXtZlXuPc6FzNJ/fRYaFytS4PQeY\\n\",\n       \"X+ZTSVPmG/RMrSPZ/mKa5nsve+a4KD5bUXm17+h81uFxz8wivtzPOB8x0LeOMlrGyV5WU/LZkNHF\\n\",\n       \"+jNm/VgGy2cv1hdoXLQjWKjvVPl/+0RlvHK9O/vBeo5cP6+0TC309gNka86oFqmKioqKioqKikWi\\n\",\n       \"1tqrqKioqKioqBiDYa21V1FRUVFRUVHx7GLJfKTK2lyAu1Dev/zyyyOirfvDners7GwTJeD6ZtT8\\n\",\n       \"efzxx0f6Wrt2bUREPPHEEyPtXVNs06ZNEdHmnthtt90ioq3NAy0zMzOdekzQT00pagQ5zwX3ydBO\\n\",\n       \"nbiy7xKMgedRx+nUU09t7pehe8cdd4yINtfG+9///oiIOOOMM0b4smXLloiIWLdu3cj7ZR2/klae\\n\",\n       \"7VxWl156aVMjDGT+OtRxoi4X+T640ya/SFb3y34ZJX9cUxB6oYEcPdBGjbDTTz89Ito58p0470M7\\n\",\n       \"Y/W9/OTkZKe+oes4wcvnPOc5EdHy3LW27M9k3zmvi/I7jI/xXnTRRRHRyjk08oqcMwfQfuaZZ470\\n\",\n       \"y7w7IqiscUhb1sEOO+wQEe2ao3YW9e3G+a1Qaw+eMyZocEbsD3/4w51aePAMeUfG4DnrwhE+0Mw6\\n\",\n       \"gi+sC579yCOPRES7jqanpzty6/VuvxRkkf2CZ8Nz8ugwJq9Ry/rWrVub91z3ExrMO56BvEA78s+c\\n\",\n       \"rlmzZoRf7KOso3K/4DvQgqxAN+NkXLvvvntERDz22GMR0a45ZNe087r99ttHRFv3rVwXfIf5Z89x\\n\",\n       \"W/piHSC7zAHyguwiF/YN4nnUfTv55JMbXjn/EWCOWHPeD5kDXvl9QXbND74H/y+99NKmb2QPfjA3\\n\",\n       \"lkXXffWaYy3CF/gI7Tx7jz32aJ7ndcF4XGvRfKE9NEMrfyM/n/zkJyMi4sQTT4yIrm/h448/3tB9\\n\",\n       \"5ZVXRkR3j2Z8yAnPYO/KsGQHqbm5uY5TGAzlFXgDmp2dbQTcP65MDj9Sdo7kmYBn8/7GjRsjohV6\\n\",\n       \"H/bKhQk9/DD6wORnjSvbwudOzMlY2TBKWmhLm8xp0I53CDrtzXNvzvyNIJabgQ9O48KYvXidmC9z\\n\",\n       \"rmes8LVvrE4tQFsODNCf9Qkf2dQtXz6wlD/Q5iH0+UDE+DPHR/oEmcNjWdaIcfiHzn3QjoM3sOza\\n\",\n       
\"wRO+2FEUrFixonPgYzPKnOrhOZ+j7GQlZXzYYS7L9llwgPsC0FAeQspxmveMiUOAE1f2JeT0OKE7\\n\",\n       \"O7S7ryy1ADT4B2ZycrKRd+D5skLkv3k2B4qHH3545BmWL/ZN+pmammr6YC81LaZ7nOM//EJ2s72O\\n\",\n       \"g9js7OxYx2TPAe1RSBk3cALnzZs3j7zv/koH53GpeZA9eMl+wiE324uAD4vlXpfti7Tx3oX80J59\\n\",\n       \"BdrMe9Yu/T366KMR0fKROYloZYTXhx56aGR8PAs4Ua1/w8wH+AUY2/Llyzvz798z9lT3MQ71aq+i\\n\",\n       \"oqKioqKiYpFYMovU1NRUc7LcsGFDRLSnZGsknII5cZfWoCw9wa677hoRXTNgViKGkymnXGiBNsCp\\n\",\n       \"eNOmTZ1rFFsSssKXaJ4+HXNCx5qG5pWZPkst0tdotjDYFM2VBOPMrGn0i+ZlzaUPDq33OF0gmnFk\\n\",\n       \"iUrhhzX8+axjaDe0gX5kCWDlY+7uvffeiGi1JDQtAK3ICZrWihUrOtYuxonlgL95VqYdMQaXUrAs\\n\",\n       \"lteR0AU940oEwdPMwgSfoMkWT1swZmZmOiHy0O91YVr4/I477uj93NeKtDefymejUZuHppv2jBf+\\n\",\n       \"+RoWMJe0v++++yKi3Ud4jWh5inbLeob3Xks8ExqhGZ6j5QO+/+CDD0ZEaz3ZcccdO3sFz2TNsR6A\\n\",\n       \"5cWyOy7hL1YE+n300UcbXtqSbqsocp3tc7Rj/La8WL522WWXhlZ4d//990dE19Jq0Bfj97qAj/Rr\\n\",\n       \"/vRZX31dnNHAd30z4z3L7Zk7aLPFr/w/34EGXxsC7/+WTf+O0p45Ryax7Jdg3pAZ1hhz5N901jK0\\n\",\n       \"2G3HFizG6n1j1apVHUuT9yy+k81nhmqRqqioqKioqKhYJJa0RAza7n777RcR7enPmjf39Gg2MzMz\\n\",\n       \"HQdsgOWFU68tUtbE6If2nFhtNQGcnrds2dKxHPmUvueee0ZEV9vj9G4fKk7WnMDRrIA1mbm5uY7/\\n\",\n       \"SEa3fWIYR5aQz4Uk7bRb9m/nWfuXjDvV2+/AFokDDjhghBZb0UoNBvrsyMgzrGEig0ceeWRERDz3\\n\",\n       \"uc+NiFZ7xEIFsFDBn1LOzENbqGx5s7zQ3hpb1l9pBcksR2CnnXaKiK7lkvVijRTfBvsjAP89HA47\\n\",\n       \"VgvWmuUfutFWkXf6xMpT9h3RdaZlrOWaps0DDzww8izWqjXSgw46aGS88Afrh9cofGb9eAwl35Eh\\n\",\n       \"3sNKbhkFzAVj8LhtHUM+jjrqqJF+N27c2FlzBM3wTFtJsxJBbs+a9jpir2IvX7t2bbN2bEnjWfCD\\n\",\n       \"vniGQTv4Yf+00goY0fJx+fLlzf+zBLP21+EV2r2O4HlWKNlYs2ZNQ0P5+xWRW8eZZzunl35GEe3c\\n\",\n       \"0A4+sT7KsVp2vOa8Rm3Voi9+m72OXCCYdcLNRwnodTFiW3CBb5PgE7TYgsXc2DK1devWTt/jCocv\\n\",\n       \"tEh3tUhVVFRUVFRUVCwSS2aR2rJlS8eXxqda4JP7ypUrO1FlAO2UkzCn1Uzzzsq6OGIGcHreeeed\\n\",\n       \"G0uZ75sBWnEWteFnO5rLPkD2qVizZk1zirfvik/pjggCWeRDFq23kGKOCy306fFDs9vffvvtEdG1\\n\",\n       
\"joHyb/sZWbv3+Jkj/EvQ3GyxAfDRvnXD4bBjpcmKb2bjRGb5HhqXo/9AGXnp+cn8kDyvmcZli4Wt\\n\",\n       \"i33zb23OkTAAixM8Z73beuB+kX/kwOHhEa3FINMsLefXX3/9CK3QkkXtATRbR572+es5kpg+LVuM\\n\",\n       \"h+9Be7Z22RdtfV+2bFm6b8EXLFSZBduau63Otkjh34YFb+XKlR1rP8CiwP7pNZdFrdqSYSs7uPvu\\n\",\n       \"u5v/Ww68LuC5LSnZunDEsX+rzPd77723s3/DuyzqjlfWYOZTy/fpl2fTvrRC2drHdx0RD5gjXh1R\\n\",\n       \"Z1rgF3IFmKvSasz4WA/IB/Ppvi0P9n817TzTty6PP/54Z14dvZ75GI/DWIvUYDD408FgcP9gMPhh\\n\",\n       \"8d4fDAaDuwaDwfee/vd/F5+dNRgMbhkMBjcOBoNfeEbUVFRUVFRUVFT8FGFsiZjBYPDqiHg8Iv58\\n\",\n       \"OBwe9fR750TEpuFw+P+o7ZER8f9GxIsjYq+I+HJEHDocDufUrpaIqaioqKioqPipwaJLxAyHw2si\\n\",\n       \"4tGej/o6/OWIuGo4HE4Ph8PbI2J9RLzkGdBZUVFRUVFRUfFTg/+Mj9Q7B4PBmyLiuxHxe8Ph8LGI\\n\",\n       \"2DMiri3a3BVPWaY6ePvb3978n0go8kgQQUOK+JNOOikiWp+Rgw8+uLnzdfmJd7/73RHR9TOwz8cV\\n\",\n       \"V1wx0jf39aS0546UfBgu+7Fq1arYa6+nhvbjH/84Itr74/POOy8i2lIoREzRHt8QchVRCoFU+Pa/\\n\",\n       \"wO+JO2RKBJxyyinNuI4++uiIaCNk8Lcpy8mU4+T1Jz/5SUS099L0De2OmLzrrrsior2fLkvEODOx\\n\",\n       \"/QUoEQEP+fzlL395RETceOONEdHKwWWXXTbSnjk57LDDRmh/4oknmrIplDbgrt5t8eWgb5er2Hvv\\n\",\n       \"vUf4yP08fHnLW94SEa2MErV18803N+Ok/MBb3/rWiGjl4uCDD46IiJtuuikiWj8FaIHnzAVy4yhW\\n\",\n       \"ZJ32y5cvb6INGSfzD19oi/wTpbh+/fqIaHn78Y9/PCJaPhIJRDQW7fHPKUtnILcHHnhgRLTrAr8Z\\n\",\n       \"5Nzzz7rAtwUa6RvaQd/8M1ZkEd8HaHLJD9boCSecMELLPvvsExFt1Ba0fOxjHxtpT7/MEWPsK4Xh\\n\",\n       \"HFOsf3xb2FugHT+cnXfeOSLaPcgliI4//vgRWpDFrVu3Nr5/zCflZ5AteH7nnXdGRLekEPsFa5kS\\n\",\n       \"ONAOz5kj5hRa1q1b18ggvjCsf0p4IFv2kWGO2LtcUoRxupQO5Woo47Jt27Ymctp5jphPl6ui/T33\\n\",\n       \"3DPyTNY/tCNPz3ve8yIi4j/+4z9GaCn3dHx6kC14CH8ohQTdLsPiihFeF/hp7b///hHR7kX089GP\\n\",\n       \"frShG5k65JBDIqL1EWSP4XeR32f2fVcSwZcIWeR3lM+J4kQOB4NBw0Pvuewt0ODSWS5vZL9H3qe8\\n\",\n       \"DWXinPl/+fLlzfjPPffciGjLDyFTzNF3v/vdkb5Z0xkWe5D6aESc+/T/3x8Rl0TEW5O2vdd4EBoR\\n\",\n       \"ceihhzbMrKioqKioqKhYStx1112dVCwZFnWQGg6HD/D/wWDwiYj430//eXdE7FM03fvp9zp4+ctf\\n\",\n       
\"3mjqWIE4kdpLH+y7774R8dQJ9tZbb42IvEaYi/iicTriCw0KjcRaVJYJfccdd2y+y6ndtHBiRrNw\\n\",\n       \"ThZHG9DeGc2zuoJzc3ONVkebLM8Hz4I/8C+LlETT5gSP1dC5OSLy+lWOmAPwEL6h/Tv3TznOiG5B\\n\",\n       \"5r5oRs/jZz7zmYhorUL0ARxtQqQH3/fY4IPzE61cuTLlId+BD87UC1ybjHZosM7KXcqiI3ucg4px\\n\",\n       \"8j5y4NxVAD4g/1hPHQULVqxY0bEsOMIRIJvwBRm21QAg98wdcmN+R3TXCtZNLC/Op8M4HBnJ36Yd\\n\",\n       \"mstcRSXt5ZqGPkdIZvPvyLos6zRwxnCshI8//njnM/Yc12LMqjIgi8yVc7ZldQJ5feSRR9KoOqw5\\n\",\n       \"rDEiuuCLaXfdP9Y76yLj4/bbb9+0dSFg4Ehp770GMohcsHfRv9fdxo0b070lq2fpSDnaO+IMfnnP\\n\",\n       \"o59S1tnH+Q5WcX4v4C3g2bzP95krLHZuz/j5u68ObRb5zD7vdc3cMN8827cCwHwoI/K8t/i3hL7X\\n\",\n       \"rFkThx9+eBxxxBEREXHttdfGfFhUHqnBYLCu+PNXI4KIvn+MiP8xGAymBoPBARFxSER8ezHPqKio\\n\",\n       \"qKioqKj4746xFqnBYHBVRPxMROwyGAzujIhzIuJnB4PBMfHUtd1tEfH2iIjhcHj9YDD424i4PiJm\\n\",\n       \"IuJ3hklY4Pbbb9/42/z93/99RET84i/+YkR0tV1Xhb/44oubU+hrX/ta09vbR5aLCY0En5hbbrkl\\n\",\n       \"ItrTLtoyYDjXXXdd3HDDDRERcdxxx83bN/fQaAHHHHNMRHRzbnAyRyvCCsDJ3XXfVqxY0Tzzc5/7\\n\",\n       \"XER078ABGgJ9wnvaWevlmV/84hcjIuL73/9+RET8+q//ekS01paIbm4Z193KLHVYoLjbfs1rXhMR\\n\",\n       \"baZmwH072t+f/MmfjLR/yUvaeAZrbd/5zncior37ftnLXjbyOfLC/P/N3/xNREQce+yxEdH6HQBr\\n\",\n       \"oiUfsyz73/jGN0ZoedOb3tRLK+OEb9dcc01EtH4bZHgHZZ4hfFvw8frZn/3ZkbZohtCNpe5XfuVX\\n\",\n       \"IqJrecVCgd/aV77ylYiIeN3rXhcRbUZwsGzZssYn6pvf/GZERLz61a+OiO4aQj7wK8FfCVrwJQPw\\n\",\n       \"EavK2WefHRFtFvqXvvSlI3REtDxj3qHJlQ2QY7Tdr371qxHxlLtBRFdTt9/Kt7/97ZEx4UsU0c4v\\n\",\n       \"a+e2226LiIjXv/71I7QC5IfrBF6x2LO2gXPIYXVbuXJlx/KKls/6/9KXvhQRES984QsjomvBZpxo\\n\",\n       \"8PAPWngFWFEY4y233NKsIdPiWwOsXOzpttTzN2vvBz/4QUS08mB5YZ+8++674+tf//pIG3zDgPN/\\n\",\n       \"sVYPP/zwiOj+jtiic9VVV0XEqL9iiZ133rlZc+wt7Ln4tQLvLa5w4Xxszj6O7yT7Ysl3+uK36J/+\\n\",\n       \"6Z8iovWz8zid04u+smzi/A6xp1999dUR0f7WlfLl2yH2Fp7BmgU8C5lDZtlvbE2FdvtgLVu2rFOp\\n\",\n       \"ALq/9a1vRUTE1772tYhof+f6agX2YexBajgcvrHn7T+dp/0fRsQfLujpFRUVFRUVFRU/xVjSzOZY\\n\",\n       
\"ZH7t134tIlqtGS0fcLLk1Hzcccc1bfFRAWg7nMA5zdLOd9iOiECjd/03UN7XY92w9ua2aBScbjmR\\n\",\n       \"+3RsvyxnabZlZ2ZmpvkOkUxZ1l++i+b4yle+cmT8aOQAzR3tCd7zar5DTzku+5sAngmPf/u3fzsi\\n\",\n       \"Wj45Cy9aEJomkXPMMVpQ+Uw+e/Ob3zzyvrPkQys0/vzP/3xEtNqRaYGP9tfZbrvt0szjyPkb3/jG\\n\",\n       \"kXG7Bhnyw+dEBPG35ausf4gG1VdnqxynaWGeLS/ML5YdonigxZGEc3NzjewRhYkGaR4ybiws0OLo\\n\",\n       \"XcCcwetf/dVfHfm89DXB+uXakMyXLS+Mm7XGmkYePEfwCz5DM5atco3ybKwhWLnG1U5jnIyF8Vnz\\n\",\n       \"/v/Ye9eYW8vq7nfM57DOLNfiqLgAAfEA1Wpt1HZr27Rvd/qlu2+/7N2DrSUK2MoqEFSgqIDCi0UM\\n\",\n       \"Ym0ED0H7ptmJyU7eNk2a7jbdtam6baLiCVQOcmYBLl3KOj6HNfcH+N3zmr/7HmvyPpvmSZPx//Ks\\n\",\n       \"Nec9r3tc4zrc1xj3GOPvuKWWo806pw3kZv2jp2xdYN3jgcz4M7kfHrkzzjij62cW82quPfrjucsY\\n\",\n       \"0Y75Dr2Potdt27Z13hnaznTIeOPldJVwwLxnHV199dVTffA+eujQoW6e//Iv/3JE9LMUgZk+zC1o\\n\",\n       \"T50zU//kT/5kqp12TNEJHnYy25g7jnmi31/96lcjYuKR5PloTw1jzfOBPcBct23b9I/YZ3ToZ4sr\\n\",\n       \"/LsauWFmEGKlRqNRb49Gp+y1yEL/vC4yFNdeoVAoFAqFwhqxbh6p8XjcnTwz67+9NmJyEj3hhBO6\\n\",\n       \"U7ctY07E/IbrsAp8inW2DZZpxinX1qTAGuXU6vfjnIQtkxm1AbLSDtYjVrDbH4/HnXxYsc7OA1xn\\n\",\n       \"Dwwy2PLiOmT/xV/8xYiYeH9aa5o2sBz4Lf93tqE55PB6oEdbasjOX+YL82fI4kU+rkGWjA/PmS4Z\\n\",\n       \"pxzX8T3zamlpqScH40acDVYOsS+ei+gR/TljKuPL2rt3b6dD2nTarrP6uI6xcHaa1yaWHNcPeVPp\\n\",\n       \"P1Yr97JV57mG54qxsc7N14U3gLFoGebRkdca/bNHCthSba3YFow7f22Zt/MLHSELe0vmoTFrvfkB\\n\",\n       \"vf7Rk70qo9GoN8+dbenfeo2Clls0YqIXZ5B6XbRt23vl2EB7KKxzx1w68zDjCV1ZWenF41huc7bS\\n\",\n       \"T8bAemRvYj4hA7W+LPvS0lJ3D9cotPeKtojj4h4Zlysy48HFm4Lsrd7N2+eMaq9/5oOzwulLNl/4\\n\",\n       \"nHXGPtLqhX6a3xJ4/zdPbsulGNGPTUYGxpI9/cCBA725gseZNti72FPsNc5QHqlCoVAoFAqFNWIm\\n\",\n       \"195/yE2La69QKBQKhcJ/IqyZa69QKBQKhUKhMIx1i5G69NJLu/emvJ90bAQ8PvDngYWFhV5sy003\\n\",\n       \"3RQRE34zxzL4ne6nP/3piJhwJzlTDtl4f3vttddGxISzauPGjd17YGcVXHPNNRER8e53v7snd8Tk\\n\",\n       \"HTHvqakBBNcW3ztzkPvQ18suu6wXb+B39PBbwVcFXD/FXFuui+J2+f0tt9zSGx9A28gPdxZcS7Tp\\n\",\n       
\"/gH4jcz75ViypaWl7lo44mjLWUb8peaKed8c+0AfqHXl69sq7swxxodracs6dD/h2nKFa8D8gser\\n\",\n       \"nS/c27GBGXcabTsWEL4y89tlQI/tejYXlsefMeJ7Zxrye4+p65UBPr/ppps6zi/AeHpvMY+X20I2\\n\",\n       \"dA4fGuuItevYqvn5+Y5TjPF0zKP15DHiOv4S30IfsnXBXrW8vNwbzyuvvHKqf8jkfeC6666b6qf1\\n\",\n       \"wV8+v+GGG6aub/vq+EvWBfMWeD/kd+Zmc90sxwzRfquXobjSiJyfCAkuAAAgAElEQVT3088iryP2\\n\",\n       \"f68zZ9xx/dvf/vZ0D+Va+DA9d73fsWYZf/SIXrK4pRtvvLHrJ/ds4+kiJrqE9/OKK66YasPVx4nD\\n\",\n       \"Yn55/Lm+nS88Q8m6NtMDQDbzvloWP099XvBcH41G3bVw7XnPddwq19PPDOWRKhQKhUKhUFgj1s0j\\n\",\n       \"NRqNprIr2r8G17UnSz6zF8OeKp/qbZnye3Nu+Xet3BHPnKazCrwg80S4TgjI/s/vs/aH+pVdy735\\n\",\n       \"nt85mwXY6rMFG9Eft6x//t7/d02v7HqAzO77sX47y4MH6JP1ktW6WVlZSbOq3K/MA+cxsVXsfrb/\\n\",\n       \"z9aDwb2dZZWtC3sghu7N/+1pyWSxzrN7z/r/0LqwxT1rLtprZsvb84XrmB+Wre1L5oH2uLpt7z3Z\\n\",\n       \"HM3m0YYNG3rz1tmazmLL9h7P3WxM0Vtr2Wfy+XP3M3sO8Lnr6h1rH/b4uu2hdRzR96YD6zUbm/b+\\n\",\n       \"1qEzpoHnrvkfvRf7TYjXQzsXn+s6AFm9pQz2Pnr+tPuH90Hg/lp2P8My2S1Ttpbbazz3sqz9DOWR\\n\",\n       \"KhQKhUKhUFgj1s0jFdF//5694/V1EXnNEWqN2OqzpQGwGh3HkNXLwBrasGFDalFm8Ik68yLxOfVD\\n\",\n       \"HM/SwlYMcNv2tDl2KDvVZ16GVhaf4p+rRyqzbm2RDFk1bfvHsrwcK5XV7jIyj6T1CNq4PbfheIrM\\n\",\n       \"yrEX1WOTxQYNeUeze7jNzPLm//6beSbG4/FMj2r2uXXs/9tLdCxPbmYJz9IHsDfQv7M+/LuhfnAP\\n\",\n       \"x7wZjgHiL97yzGtsS340GqWxHvbmgczzDDIvAuDeXgPHkjdbe57n9oo6zs+6zzxVEbk3I6tJlXmN\\n\",\n       \"XNvrWN6SdlyG2vS9Mg/trD39WM8Ar7lsXfu39tRkb0e8zo7lJXRNO9ewmuVVP9abmaHPs+8jco9k\\n\",\n       \"9n2G8kgVCoVCoVAorBHrWtk8swafy/v6Wad7W7HOyvI9qcCK1UcmQfbuuJU9i6fh/5m1YthadL+H\\n\",\n       \"LC9nsGQeE8fQzKomjFXoMRnqg709s/oJsliBzEOReSRa2T3uQ9ccSwZbXll8hysez83NpfM24yez\\n\",\n       \"BYk31Ndl3tRWRnuzMkva/8/G39WCQRYz1f7fc2aWxynrr6+3zEOxRh43626Wx8Trwte7+rrXXStL\\n\",\n       \"NnecKQUcr8j3mSfK86rltvS9vRdlPGUgiwWcNb+Q4fDhw+m+9z8bh+Jq3K6Y7jFqx2JWnKlje/wG\\n\",\n       \"w/A8mvVWYnFxsecxckao5bbHPnvrkHHvDcVLzvIkzfLMHCsucejez2XdZZX8s0rlhpk0QBY7O+S5\\n\",\n       
\"wgHplMPIKH9cYNVUbF6jzCvuSR+4T1uCgP6jK/rJwcjPBRMmM5bsC5bdhT3bMZx1tti1a1dERHzz\\n\",\n       \"m9+MiP7zP0O92isUCoVCoVBYI9bNI7Vly5bOO8Ap0GS9AKuJE/ndd9/d/ZYTNXApe9zEnER9qucE\\n\",\n       \"zSn2da97XURMrOghclba+cpXvhIRE+v83HPPnbqWEzdeM79mMmz9YeW4PARYXl7uTuXoI6OlwWKg\\n\",\n       \"qOlv//ZvR0TEa17zmojoW7tYAZzm6Ruft2OEXMiLzrMyD/YK2NPgYm8mwW1pWQwsKP7imckIVBl3\\n\",\n       \"vASvf/3rI2IyF50uS7tQFFHQbteuXb1CksjNHHMJAVv16JR7ohfmj+cuety+fXt3D+S3R4q2sd6R\\n\",\n       \"OytUaWJpPC4mSG2BpWkaFfoBXJDU3hDTbHBPdM/axLuApzeiT2wLJQb3siXNdfTXKeb2ApqQG1mG\\n\",\n       \"+sI16Jy55+K4wJQ3rDWKZmKJA/TIvMCjv3fv3t5e5FITyII3z/sF/eQefM9+kMnCfY477rie9xf4\\n\",\n       \"lZ9ly8ol8GrLJRjsfWvpofBEsz6ycjYAvTCPICO2LPyOfpuwG4xGo166vktnAOYFsrI3o8fMa0Sf\\n\",\n       \"mIO8bfiXf/mXMFx6g7VlHdpbjEfKBazbfrYy8jv+tmPqucLzHxns9WPN8qxnr+aZ5X3Rexa/e+yx\\n\",\n       \"x3p7DTqkLaixKLDqtwwZyiNVKBQKhUKhsEasm0dqfn6+Ox1S5IwTpC01Tuqcas8888zO+rd3B8uA\\n\",\n       \"eAusOdqwBcLJGwveNDU+qbcWKfIji9vGUqBNB0naOrKFhrfJhffA/Px8z9vjwnkA3fLOGm8aMtib\\n\",\n       \"gmWBNYB+7BWKmOgKixKZ+JvFa9E212VF3rgXlgz3QR8t7QvX0l/+0j97HEyUes8990REP1YO0H/a\\n\",\n       \"Ze4uLy+nRK5Y8XhqHANlMO72qjpwug2I5Z2+44+AKU9ML2G6GnseLatjRzZs2NDdAy8u/bVV56KX\\n\",\n       \"eI3oXzYPkBWvKmi9TMxFk2xnlrdj6RhXvAUZLQdrmDXN560H014+x51Yp/yfmCLmFnMxK3yL/tiP\\n\",\n       \"HnjggZ4OPe+Rm/55/B0zxr5BOx5TJ2ksLCx03gjLgh7QtYsIZwVcTd90LE99xDOeHbxY6CYr9sj4\\n\",\n       \"sVez9rJ+OqaW+WDZV1dXO3ldWNI653MIhfEau9+AdYQX+u/+7u8iIuLXf/3Xp2SMmIwn42fiaOvF\\n\",\n       \"xPB4xeln5h1DH+wP6L2NY7RnHj3wG7ft5CuSLpCd+Q/oN7LjZVxdXZ2So72WsSHeDLkzmiOjPFKF\\n\",\n       \"QqFQKBQKa8S6Zu1xMjWNi8Gpkfe0J5100iAlQ9sGf32Szt6/QyxJtlZWbr9NA8dybNPwW5guAWuY\\n\",\n       \"/mRxKVhUWEUZgWbExErl1D1E4dJ+jpcAq95Zbr4ey8Wn/iFgOfJb+uMxcr+xBo5FQhuR67l9X+/U\\n\",\n       \"YjwLpmkAWHX2QHEv65H2uR5y7I0bN6bZhlg39qxmsR1chyWKXjxGbao289fxWMDePPplSxX494y/\\n\",\n       \"U+1bOLPTZN2G1wOwHj3eeFPoa3s94+cM2Ixsl34zVsjO/60Xk3ejj6G4N+Ya/XQKebYXMe54bLDY\\n\",\n       
\"nUHq9cX3O3fu7MX2oUNk4N6Z19D7iNdNllnHfZaWlrp+ep47jpExYxyz2DHLzPUZddKBAwc6by77\\n\",\n       \"gNcQOnR/iOuzFzjzojumECwvL/eyb1taqRbIQCYw8VnsNd6L0DnzBL0QvzlUbNvxehk5N8Drj2eP\\n\",\n       \"scjmC2PO2jSdV8Rk3EwNlZEQM5fZk2fR1ADaZ+/yWm77Q0wc+7kzYmehPFKFQqFQKBQKa0RRxBQK\\n\",\n       \"hUKhUCjMQFHEFAqFQqFQKDzPWLcYqUsuuSR9D8n71ltvvTUiJtQJXDc3N9f9Fo8aVAVQfjgOhffL\\n\",\n       \"vMumLD90Aq7k/dBDD0XE5B3qX/3VX01dPx6Pu3e1vE/nt9BmuLS9s1LojyliXCeFd8f0hdL5l1xy\\n\",\n       \"SdcvX0vb0AlAEUA8gmt2IDsl/7keELfhOIabb7650wnI4gUYI/TimjyuJ9P2s72O9/r0ZWFhoUcn\\n\",\n       \"YE+ra1FxPXPL39O2r4f2g+/barxcy7y13K6zxTt7rs/oitA5Y2BajqNHj3Zz0VloUJswt9qsqrZt\\n\",\n       \"5gvXmzonq3wP7cPu3bu7+e3YF+JMrHPXA+N67ul+ohdn93H9TTfd1FG+oOtsjjHPadt7kGOJPBf5\\n\",\n       \"/tRTT42ISezJgQMH4tOf/vTUtcBxOKZOMRWO4zb5/JOf/GRE9Glc2ngmryHvoYwR8Zv8lv0CWdCb\\n\",\n       \"q4gz1nfccUf8f+2da7BmZ1Xn1z6nz+n0adOJhEBISNIhFy4BCh0GPljWTOkUhVWWjh+8UKUDFBpK\\n\",\n       \"IyGWaIxUzIUEAxijSCo3MsiMM4yXKS0vpQNWUeqgXE0EEgIJ5kYMwcSG9PX0OTl7PnT/9vu8v/2u\\n\",\n       \"Pl3HTp8R1r+q6/R5z36fvZ71XPaz1l5r/SMmVDttxi4ycC3z3DRLZGERU8ccvv322yPiEP1Q2w5x\\n\",\n       \"OMie7eltvxxnhSzsc45nI8YHPUEp4rnoKv7ch3VBX1s4htT7hTMDvZdBh8Vex/Xeu5DtPe95z+hZ\\n\",\n       \"5JpW7X4eMdnnHANIzBTfs16Qhb4xllu2bBnWHHRlvtaxUzxHkcV7lfck9EhfPSbz8/OD3Fxryifa\\n\",\n       \"dtY+6yhDeaQKhUKhUCgUNohN80itra2NrL4sg8Iny/3794+qxYKWuDNiYkmRjZNlkJmclYwJV0Jt\\n\",\n       \"5XcGR5Zd52y8LOPQdUI4DWMFHCmDwDx3mUeGv3O6RyZnp/h7jJE5m44GzjYxASpjaI4kf9+1jfi9\\n\",\n       \"zcZwdlLGd2ZZ6B+WqetR+Xr/3LNnT1qR3dZuxkGYeaJ8L1+/srIy+o4rD7uOmj1SHv+MODuzkvu+\\n\",\n       \"H3lSszpZXDerSn77fffTnGveF2bJ7zazmkPrZQC5nayO0Cx+M3uLM48t85jPyZSito1ltx5N6tzC\\n\",\n       \"njnLlPGE8pN+mgHBaOek94xMJjxv7HPeB51x6fa8j7aef3tF/V10Zv0wtzyHPVeR2VmioP2+97ts\\n\",\n       \"P3d9LHuR3R4y2evatu854jVk2Gtmz9R6Y8C9nYHbtuHahRnLAvdmfeB9917v6y1L3/cz10bEeG8F\\n\",\n       \"2fWj647qqkKhUCgUCoXCCJvqkbL1nFkNPi3u379/ONX6NMpp1pVWOdW7IjMeJ+pI+F26udlof3l5\\n\",\n       
\"eVTF1XLakkJW11Vx21xHH7O6U0tLSyNdoVNbTsjKaZ7TfVZNFthSm4VZjOet3LZ23Q9X1/b19kg5\\n\",\n       \"JqCVPfNuZv3jc9eosgcPMF88RgcOHBjVKWnfzbffzfqZVd+35xO0enTMm+eiq6nj7aCSczaGnru2\\n\",\n       \"ksHS0tJwD7x41H1z2/ZqueJ35tnNxqTdA5jfjktzTEjWz/Xmrmt70Vdz11mu9h6uJwSYH8gK39ed\\n\",\n       \"d9459blBO22cZ1ZHLqsG7jFCFj5nvhA76vGf5cHJPC6u2cbeaz6/rO3M2w7amFTz8Xk8kYUxgbct\\n\",\n       \"W0eeR8w3PJGeX33fj7zd3oMBv3OdPZjeg10h3/HA7Vg4RtifZ/US6S9to2vPH+a/5wt7QHtfrznv\\n\",\n       \"51lNO3tH8Qbam+x45zYG0223vIwRkzOIvWXroTxShUKhUCgUChvEpnmkWi4mWwuZFdCeHh1nA3ya\\n\",\n       \"x+LGyrE3CAuLkzMnUk7o5kNqecKwQjJL2R4Jv3c2sorOzsYA+/btG1kfbSZbC/rB3zmZm/cMZBXA\\n\",\n       \"Z1Ufzyr1Zlx7tvayGCCAF8DxXM6sa2UBGT+ZZXQ2Uza/iLXjPsj29NNPj+YKXi7uwd/5PHuX77iE\\n\",\n       \"LPaljTHMsnGAM7vchsc7iynKYip27949iktzrIvhWJFsXTh2zJZ6K1ObJdT+xCq1pcm9M2+R55M5\\n\",\n       \"PmGih4tyliU7q+p5xHi+swcxFnhqsor/fJ+xaj38WfV/V2S3xxEwzz0X6V/m2Wvni6vAA8aI9XD3\\n\",\n       \"3XdHxGSvzngfszls0PelpaVBLnsS3U/6he75fT3vF8j2rq7rUm+P178z0ZmL9sgAe1FdKbyVyc8g\\n\",\n       \"Z1R6/Tt21OwK1gPPV65jDHlGts9d2kZuczJaX4wFsrPvey4DZHTm3axYSmchOs7sSKwiLcojVSgU\\n\",\n       \"CoVCobBBbJpHqu/7kcci4/Fyhknf96OMP5B5f2x5At+bkzrvfLO4lzbGK8tksozrZb6ZS+tIvGZ8\\n\",\n       \"3+zVtlLaa9u2rA+fvJ1xxOl+VpwC1zgDZj3MshCO9Lm9JLM8WfbQefzXixFwPJLni9thniwvL4/a\\n\",\n       \"tpXuuXa0YJ5lcSlra2uj7CuPJ7Jgnbm/hq1Ge38syyy+Q8f4GfZkZpmCRsYO38KeFnt3/V1z65mz\\n\",\n       \"D5i7z7K0fV3PQ225s6w8Z3sCc3G288sehiwGzB4H4HF3/bgs/qudT5kH3rFwfCfjfbPOvc94TNv9\\n\",\n       \"FT1k3lG+i1cE2dD5rJin9qfnl9dTuxdmHHLA8VzOTssyKz0XZ2XWZfui408tt59d9CHjXnQWPD9b\\n\",\n       \"2R3HvN7+6H3kSGuu/bs9dV3XjdaQ+w2yjMAM5ZEqFAqFQqFQ2CCKa69QKBQKhUJhHRTXXqFQKBQK\\n\",\n       \"hcIxxqbFSF1yySUjnjjzY8EpBI9PGzPE+2Ui+s0R54wgZyfRdsbN5ffSXA9n2cGDB4e/OdMHviLa\\n\",\n       \"dkyEs4/e9773RcSE98f1eAAZJi1PGO98yT5AJr5rvjKyKNAb77CpzcH1yOL6IYD73XrrrXH55ZdH\\n\",\n       \"xDir0PEn73rXu2a27Tg0Mj5uuummKdndbvu+Gy4kdO4qya4tAkcYPF7MD67nOuIUzEHlLMYTTjhh\\n\",\n       
\"+A5zhWuB46toA34zrueevp4xgw8PPS4tLY3iDOg/cwseP2fd8D2ymhgjdM711Efy/IJX7qd/+qeH\\n\",\n       \"eDFnSO3atWtKL+bCYryZi8gEdxp6yeIdGbObb7556CfrPIsBYg3BKcj8cPwO33vPe94TEZP5hf64\\n\",\n       \"N33uum7o59vf/vapNlzpnn2B8YQPjzXqODXuwXxBdsaijT1hbr373e+OiDHvI2vPsWBwkKFzMwI4\\n\",\n       \"g87rDv1t2bJlGEf6iw6Ztx4j7kU/aZs16v2TOe7nBe0vLi4O37HcHv8sFsj8mchivbFmrZc3velN\\n\",\n       \"oyxdx76yhmbpsL0H96SfjKnHxrFjN9xww8DLyB7Cuia2lvnOfsH6dwwp16F79mjvi85A37Zt24hr\\n\",\n       \"j/0iq7bPfGG/sM79jIaD0s+j9nnr8bzooosiYlLDjExZxoxMWmTPUB6pQqFQKBQKhQ1i0zxSBw8e\\n\",\n       \"HPG+YZEeiVOO72Y1J1xriDY5ibeVhyPGnGzOJLEnps1ys5fLnhW+y+nfJ29nbeAVQnZXcPUpf9++\\n\",\n       \"fSPvjD1wwLWb8ALYogLIcNppp03J7nohbRtZhdksM4J7YAVkWV6MneuGoO/WM+X+813kNqccOmWs\\n\",\n       \"sGKc3QbMI4eX8KSTThrp0BmVVIvm8yeeeGLm9fbkeU4C9LBjx47hHugaSwrYinNtIreNV+RrX/ta\\n\",\n       \"REy8SngLzCuIDiIm89ZrC/A5Y8Jc5HO+D5hHeF5c463NyLJV7gwmy0J/0B99cD0c4Jp2rtnTzl1n\\n\",\n       \"SLlmjfcW5p5/IpvXNGPgMZmbmxvVs0KHrqljqx2YbcKZVt5H6RP62LZt26ALZ8x5fjvzLcvazGR2\\n\",\n       \"llfrXbLusorV6MUeLMviMUQvGSvH8vLysHZcsf3UU0+d+p05xDxnPLPMOtf4soe2BePltwbInVXN\\n\",\n       \"z6rJOxOTdtGP+WHbMXINQ9e2yuoR+meWKem12/IMup/ck+ccle0Zq6OtcF4eqUKhUCgUCoUNYtM8\\n\",\n       \"Uk8//fRw2lvPA8UJu62V5EqkwFYxnD9Y/3iHgBmn16s+3r6XtTXv75jXytaOLUysIn5yqkdme3ye\\n\",\n       \"9axnjaq8OhYAICundK7HOsCqBTCy2/OAFdF6Uxzr4FiGrP4RcP0UeyRc48vtttYO38WjQrVk5lBW\\n\",\n       \"/8h1oWxpWlbXYTpw4MDIaqM/WJi2ej3vGQs+Zy5n17c8b8jLuHiutN6K9rrMU4O+PF/Qh+fLCSec\\n\",\n       \"MIpx8XfALI7EiMk8t4fBHkjXQGrnU1ajLVujrJ9nP/vZETHRKf3H4wjomytgz6qVxXjTlmsx2TsA\\n\",\n       \"XFWa/rm+kr1CbZyWdYgstGGvRVYXiH1kvdpdrvHTzg979ZAFryn73CwvZ8R4r3I8kvc69Lq6ujry\\n\",\n       \"THqe0xbf8duC7C0D4HrHNYKnn3560LV5TR0Dyxz0swvZMo+kY8aQvZXFjB/Im+1zrMVWlxHjZ5Tb\\n\",\n       \"tzfVnsr2/7Rhz2T2RsJr2zF4IHsWzqpQ4Lci6N5vf9ZDeaQKhUKhUCgUNohN80ht3bp1dBKdVQU1\\n\",\n       \"YlyFteu6tAoyp1euJcYDy9KnVywOWx7OIACcVFdWVkbxOOtVngVYA7byMqvhSBXDfSq3987wu3xX\\n\",\n       
\"dAZYT86M4X2/rcyIsZXqatuA7zqbESuGewBXmUUmfm9jkxx/gVcnyxDEenGMFG1aj1iJfv/e9/2o\\n\",\n       \"bfpBm3zH/QV40dCX9ZB5EZjj7TVZ5WnmL9/FIvX1WMfAnhv00PbVXgx+2qtjDjlktscBIJs9MHyv\\n\",\n       \"1aM9C85Ksg7RB2PD/kE7vt7WMXq1tzRiXHnf8VpZXApwfInHyJ4rsLq6mvI+Ois1YzZgjTFP7HW2\\n\",\n       \"Z4ffmbP79+9P9y2udaYk88L98T5qfriMxWF1dXXkKcv4DumvM+sMZ1LSPrLQDti5c+fIG8i4eo2x\\n\",\n       \"TpiTeOyy2CHWg/cuV4pv5WU9oHMzgQA8itzbP+2xwZNjD50rx0eMs1az7E3LYu48YJ1nleEXFhZG\\n\",\n       \"4//P//zPU/1iDNiLHceaoTxShUKhUCgUChvEpnmk5ubmRrECnAIdC+CYm4WFhdTC8O+cTvFM2MJ0\\n\",\n       \"7Ig55TJs3bp1OPH6nazbNps3p3bHa3E65gTOSTt7F7y0tDR8RkaYLQ7A+3g+dyyQrWNO4nzPXqFZ\\n\",\n       \"cW3WfZZ949o0jvOyNY2euXf2fr5tC1nIwsCCcj8dC+F6U87acKZm65HwvLVlyHx3DTSDOeoYQs/J\\n\",\n       \"Nu4L+Z3J437SFnNzliclYpJR5Hi9jCdu3759w7jZumVdA3TGPHcM0Kw4k1ZGczuytttrncmUZWHx\\n\",\n       \"Oda996Qs+4372GPZts//PV+zbEbWmr0dGQedvYyt191tG7Piy1pkcxm4T8w/Z/lF5DFSs+ZQ2xaw\\n\",\n       \"t93jb7AXtnKjQ3s7uZfnh+PyADIzVlyHbNb70tLS4Hmyh7GdtxFjD7U9UZlHEplY0/aSRkz2+2z/\\n\",\n       \"y3j/HNebPR+9RzuzrpUle044yxVkbyQy2HvWjqn3JnTujErWfZZxbpRHqlAoFAqFQmGDKK69QqFQ\\n\",\n       \"KBQKhXWQce1t2qu9Sy+9dBQYituRn5TZp+R/GwiIW5Q2oEKg/DzuPIIfCYbDrUyZfUrE8zmvG+yO\\n\",\n       \"pnQ+lAILCwuDS93uYa6FrgZ3KNfj/uYnpfAp4+8AWBf/Qy8/93M/l7rJcYNef/31ETEp4Y+sLgeA\\n\",\n       \"bpGdMvvAKau426+66qqB2iB7BcN4ovM3v/nNETEZE9zOgH5bdu7tgMb9+/cP1CZQhLjEhOkIrrnm\\n\",\n       \"moiY0A/MSl9vsR5dRfsKCB0it93qDpr91V/91am2XfTTOmeuQynSvmaxS/3aa6+d6ifz25QY3Asq\\n\",\n       \"HOgn/CrQ9CxXXHHF0L5TwJGL8aSfjBHz3MkXzEXGCFomu+r5neuvuuqq0V7BKxhc98z7D37wgxEx\\n\",\n       \"2Vu89/h1QkZvQx/Q69zc3ED5gtxOADG11HXXXRcRYyokZPErn4zeyq8EI6b3iohxGQTPtXe84x0R\\n\",\n       \"EXHZZZdNXe+wAvTL/EKPzOlt27aNXosxt5i3flXj127Iznxh3L2OkIX9oqV9YS5a98xF1qhLDQDu\\n\",\n       \"Yeok9ML3kIF5dvXVV0fEoTFyCQD2Pfpx5ZVXRsRkz/W8dkFn+sl+4dfRfi37rne9a+hnW8S37Qfj\\n\",\n       \"iyw8R10M1K9TTW/jV+ptwgG0LMwV1hh7S7ufR0zmC88LJ0ggO7+bxs2JFfPz84M8zK1sX2Ss0As6\\n\",\n       
\"z1Cv9gqFQqFQKBQ2iE0tyIk15yAzn6htJfR9nwamucDakYo3RkwsKE7oBAauF+A3Pz8/CkTNing6\\n\",\n       \"QNmEyoATNNcRfM7J256bNujegcwOKnTRS/pHILCDR91/l0Fog2ptxa8H0zVkFCgGeuE+s7xxLlvh\\n\",\n       \"ueMATxf7BMjm+WIvYet9s1fLwcDAHgTgoHt+x6uSFTZdWVkZBct6PF2Q0N5UzxfadnLBLFqeiEP6\\n\",\n       \"9ppbbz64LIQD4Nu223vyO/Oh7avv7aD5tlRExHgssP6zJAzaQQaTY7fB5+jOBRWzhBD3AdibAkxL\\n\",\n       \"gyxzc3Oj8QcusJmVYPBe5SD+9fa61dXVdJ7Tb6512Q/vcw74PVLCS/v3/fv3j/Yr68XlD+w9ss49\\n\",\n       \"90xaPevNgMt08F3KPQAn4WSB4f49KwvRFvy0BzUj6QYO6Pfea9mc5JMRbrdtONEn2zccwO61l1HE\\n\",\n       \"zCo2nD3/1+vfeiiPVKFQKBQKhcIGsWkeqfbU6dN/RrXRkvdysswsb6warN3s/TGgbTxSyGLy19Zr\\n\",\n       \"ZovTqdAmnXWhQlsY9AlLxfE3s7wGfn8+q0hl2xbWC54ZEwEDk1zaG9BaO5bL3oqM8JJ7u/BcVtLC\\n\",\n       \"VDktZRBAXq61Nev54lIDjpXIUvFnUaZklhQWt+emvZ0uXcCczIilTccQMdGN5cYS5SffoTxERqDL\\n\",\n       \"9S5E6XW0srIy8qRlhKjZ/MhKVNhzndGWtPA4YRF7/F3s0mS0WZHZLAar9TIhN/11CQl7ARxbyL0z\\n\",\n       \"75L1Sx+feOKJ0fibsoPvEK/jvcjlPawXg+va+9j7C6w7+osOM6Jgl5HxegDtvmsvb+YFdJHfjJaH\\n\",\n       \"dthfkJl15DXdjoO9H96beFZ5f8uKQ7v4ruOdWj36LcB6FCieUy7BkxGo23NpKq32b455y2h87B2z\\n\",\n       \"7FlZEOACsC3QoT2R6xFoG0f0SHVdd9CkT/0AACAASURBVGbXdR/tuu7urus+33XdJYc/v6rruq90\\n\",\n       \"XXfn4X/f13zn8q7r7uu67t6u615zVFIUCoVCoVAo/BvEeh6plYj4ub7v7+q67tsi4jNd130kIvqI\\n\",\n       \"+PW+73+9vbjrupdExI9GxEsi4oyI+Muu6y7o+350rFtYWBgVrHRGEDCJ465du9J4KhcQw2oxdQjA\\n\",\n       \"QsGyuP/+++nLIQXodNwWMITY18W82j5GTE71Jmu17KYxwSJxITewd+/eUdaNs8kAv1Nok3s5RsDg\\n\",\n       \"e+sRLrfy2bNoWUxPgHWHdZAVA8TCpfDgrKKIpkKgTVMWGI7Xoh1nbzpDBn0sLy+PvBemyKCftsSA\\n\",\n       \"KQ3cTlbIbmFhYRRH5CKYWbxF5gXCW8Dn6Ad9zppfLk6YefUyQt2MtDYbM/clYmy9Q1rtuByAd9B0\\n\",\n       \"G7OKGkZM9ODioczNdu/K4g4z6ih7pJgvnjeGZX3Oc54z8l7Yw+Z1b++OPRCODbIe0W+7r2QxUqZC\\n\",\n       \"8T0d1+ixs1fI7TOPtmzZMvSbcTbZNvA9s33O1CnMG54Bnqtbt24dFZ41iTlwbCyFRbnOMvK5vWjo\\n\",\n       \"o6WYQifIxzxm/O2pM3UQHjd7pgHzx5Q5Jhhv5WafIG6R/mZxz1nRZO9dmad6fn4+jdcD6Bi9ZB5Y\\n\",\n       
\"44geqb7vv9r3/V2H/78nIr4Qhw5IERGz8sV/MCI+1Pf9St/3D0bE/RHxqqOSpFAoFAqFQuHfGI46\\n\",\n       \"Rqrrup0R8R0R8fGI+K6IeEvXdf8lIj4dET/f9/3XI+L0w38HX4nJwWv6xs2pMXsnCma9S3W8ETAh\\n\",\n       \"Yha35Htx6sWycMl7Y35+fvS+eD2L0V6dLHbIlseRstk4MTvLJvOOuC1nTPie9irNIrnN3idntZks\\n\",\n       \"C7rGQvWYesxc86O1bK07gLyWERmcKQRsqVt/rVWUyW2L2hkubst0LllWZJuJStt4bw1nz9g76jpB\\n\",\n       \"wDQ+eLpm1RWyhZwR4jqmw8TRtuqd1ekM1FkWqDN2sgxCZ7l6rNxPxz06O6mV3d5ct5F52kycm8XI\\n\",\n       \"AGRkHczNzaXE716/9iBksjj2JfOast8uLy+PdONrHX/lDEFjFlXYrOvbNe49N9tbPVZZViJt2ztC\\n\",\n       \"u7OywrzWsrnljFD3wR4sj6nj2Nq3DNaZPVMZpZDftthrCjymXqtHok7i3nio3M/s+eJaeCCj9Vlc\\n\",\n       \"XBxd6zVm4vWjjZE6qoPU4dd6fxARb+37fk/XdTdHxDWH//yOiLghIt6UfH3mDvaxj31sEPK0006L\\n\",\n       \"5z//+UclcKFQKBQKhcIziUcffTT+6Z/+6aiuXZcipuu6hYj404j4877vf2PG33dGxJ/0ff+yrut+\\n\",\n       \"KSKi7/vrD//tLyLiyr7vP6HvFEVMoVAoFAqFfzPIKGLWy9rrIuKOiLinPUR1Xfe85rIfiojPHf7/\\n\",\n       \"H0fEj3Vdt9h13TkRcX5EfPJfI3ihUCgUCoXC/69Y79Xed0XEj0fEZ7uuu/PwZ78cEa/ruu4Vcei1\\n\",\n       \"3QMR8eaIiL7v7+m67vci4p6IWI2In+kTl9cb3/jGUTyHq2nDWQa/UfsO1e9BzT/Ge9esPgS8P+a3\\n\",\n       \"4p0x7RN3cNttt0XEhD9v+/btQ1ZEm7kVEfE7v/M7ETHm8XI9DD6Haw+uJd6301+yPLjPb/zGoTPt\\n\",\n       \"JZdcMrxnJ5bHlcvhiHrDG94QEZM6Wa7Mzb3gIIJrCz0QG+NYgJtuumnEhWUORd5Rv/e9742ICUcU\\n\",\n       \"uiauAlnoC7xf6AV9O4uj67qBO+snfuInpv7mTDDzIZo7K4tfgj8t40NbXFwc6ZC54hgwzwPahiPM\\n\",\n       \"MUGu+WVutu3bt6d1ojxvgXnr0D1zC52TGYU+0QuyIctb3/rWUaYf+gCsUfPhZTFAN99889T1Hkv6\\n\",\n       \"yhq45ZZbhn46I5Q5hvz0E64tV4l2zJy51gB7Vcu3yfibI9I1uej37bffHhETfkNnnNI2GaToxfsF\\n\",\n       \"7Z5wwgkjTjG439AH65n+ck/2Reai9xPznXmut7ElrvNG2+ZORBbmFHqin+aUs+z0/wMf+EBETPPE\\n\",\n       \"IYOrhbNfwLXKeLvSN7KwRzNfzCSBDHy/5XJlHOkf2efMHfN4Iqtr4XkfhYOObDfu7Xp7733ve0c8\\n\",\n       \"roC2GWf6yXPUGaTOYmNMf/Inf3Kqj+izHSN04rbJfHQ8HnuXeQJZB+yLtAM3H2sUfbSZhrTN3OJa\\n\",\n       \"dOhnL/1h78pwxINU3/f/N2Z7rf78CN95Z0S884h3LRQKhUKhUPgmwKZVNt+7d+9wCiSTKKu66iyE\\n\",\n       
\"1dXVwTp1tkHGf8fJ2JkSzi7gJJpF67cVX10VNeNx4ie8X7Y4M7iuivs6Nzc3ygTJqj7zOfoxX1XG\\n\",\n       \"teYMI8amzU6xxZRlRgBnafATC8NWj3mtkOn000+faq/9G3PqzDPPjIhJPSHXhXJdIPd/vczDNqMk\\n\",\n       \"q/PkOZfx1T3++OMRMZkfzJcs47D1hLoCvbMN7QVD1ixri7WGvrgXOjfaNetsLK8LxuLss8+OiMkY\\n\",\n       \"0X/XNPMcdQ2kdoxc4d8ZXfZ6uRI+88ZeIMPVo2dl1qJrvMDI/dhjj0XEeK9Ch+gLmbLsP9cya7M9\\n\",\n       \"vVfQJllYzlbO6k4B1l6WQUZ9OuoN7dixY5ABj4Nl8d5Ff61Le2T90y89CBBeWloa5iv3chaaueTc\\n\",\n       \"pmXxc4f2Mv7Ehx56aPjswgsvjIiJt4Z6WsB117hHxvvp/RY9Z3ta+ze+C4uG939noSJTVkeM6/0m\\n\",\n       \"Z1bmprMLW29uxHjvymqgoT/XBrPXvK3bZh2iO9a559Z6NeyGex7VVYVCoVAoFAqFETbNIzU/Px/P\\n\",\n       \"fe5zI+JQ+YOIiC996UsRMea3o/LpAw88EBGHLJpXvvKVETHN+RYxOUm67gn3cB0RTrNYPZzQsZpt\\n\",\n       \"qXF6Pv3004eTLtXQsagst2tTveAFL4iIsaXGCRurB8ttVl2QiEMeC6rgYs3T3zPOmC7fhVVMf7kX\\n\",\n       \"FoYtTPSA5epKuK3es9pMs2rrRExqHTEW6JrfbZFgYX7xi1+MiIkn48UvfvHo/siFl2Pnzp0RMRkL\\n\",\n       \"65x7oR/G9OGHH57qt4GsjO0pp5wyiuFxHBL9xBPrGmdY9cjM+kCPfB8g88LCwuDl4BrrEL3Qn698\\n\",\n       \"5StTstjyxgPB3GbesI7MXH/aaacNujW3YGZJU/IES53v0xfAvfF2sD98+ctfDsMxPbZy7b1gHbCO\\n\",\n       \"6B+yuZ+AsbNnqp2L7Cl4FpHlwQcfnLo3YL543TPXHFNqfkjm39zc3KiyPfdmjhFvSD8dE0b/HH/F\\n\",\n       \"fmAvAOuAvp533nnDdz1O7KHolj0lqzuFHhhDxor2LUvLE+r9354UdIcHhbmXcbkyB3lW0TdYLrzv\\n\",\n       \"Pv7444Nu2ffPPffciIj45Cen87DQLbLSr4ceeigixnOXdpkXyMLabucL1953330RcSi1P2Kyh6IH\\n\",\n       \"y+LaVo5jAnzOWLFGPZfbtlwNP6uTxt7E9fZQZt5U72mLi4sjzxvznPmbMResh/JIFQqFQqFQKGwQ\\n\",\n       \"m+aR2rp162BZZDxPgBM6VvJjjz02eBh8ksby5vRuziRbO1ggGceWPRJYT7t27RpO7VzrmCfk5rTO\\n\",\n       \"qRevgU/1GQcXJ2/H6xw4cGDor9/tOubF79vdX9/TFaxtkbTIKhdnnEd47rIq3PYaEufAfGHM8Bq1\\n\",\n       \"niD6iUVkndp6xRrC64WVbM4ly+6snq7rRjqkbawaewk8X/BAOYYss47avpobyuPkKvJYd7OY4iMm\\n\",\n       \"esLDQL8feeSRmdfv2bNn5NVwrALgOtYwus8qPjMWeFOIZ5hVZR89uP98blnwXGaV/42sGrfjPlq5\\n\",\n       \"mYvIOSu2q/3csSCu0g4YS8YGS71lXQB8l++wB+G9cTwNexX7i/fozLNDe60nxmOBN4x7cg/HKQHH\\n\",\n       
\"ovJ7Fr9y1llnRcQhvTBnnI0H2FPs7ci4FpGRdcGaRR9e4+ecc84wD/BiOdYPoHP2Na5jf7M3nb2H\\n\",\n       \"PjJv6AM/IyY6Zi9FJn5msaMgi38FzF1XK/czsm0LXZkz1ePK/PdzYdb6b9t1RmXf9yMPI2D8+Q76\\n\",\n       \"csxbhvJIFQqFQqFQKGwQm+aR2r59+3D64wSZZUpw2sUKiBjHDwDacC0b7pFxp5mxHmTZDHv27Bks\\n\",\n       \"Ar+zdduuG2KrxW1zIvcp36fpr3/96yNvRWZRO17FtYr8nhk9YYFgFWE1tJad+ZeyjA5gfjMs14zv\\n\",\n       \"C1nOP//8iJiMlbM7W3kZR2LHsCyIcchkw7pDNus84/3r+37kYeJ3vFxYOZklTZwF495yp0XkHGQL\\n\",\n       \"CwspozwgHof5gjcwq+HEvLA3mHlgT93Kysogr7kBvaaYQ8wD4jXQC1Y+4J7mOUO2dj05K9HeHVvU\\n\",\n       \"/J170gesf4+3YwozvrO233gauDazdn0vxxpm3GzcBw/fwsLCKJ7GNamcMZ1xy6E3zyvrkTWKHtq3\\n\",\n       \"Bh5/exjttbDXCNnpE/MHfXkfIHbw5JNPHu7VZvK1cNa21737SZ+QFe+Xs2bBjh07hucD8xSvruVm\\n\",\n       \"TPwWwPyxAJ3zd9a0M+fa/iM38VrrvQXxs8rZe8DPIdc1nAU/kzKWFdYibbP3MHezfdEe3a1bt6Z7\\n\",\n       \"L3I6NuxoufbKI1UoFAqFQqGwQazLtfeM3LS49gqFQqFQKPwbwoa49gqFQqFQKBQKOTYtRurSSy8d\\n\",\n       \"ZXWQ3dJyZ0VMOKjaeARXjTZ3HlkHXOdsPri24CtyfBbvSnmnDtcO3Ezbtm0b3qNyL2SBIwguLD73\\n\",\n       \"u3/eYcNBZN4v3iE7RgaupYsvvnj0bt8VyeFxgn/KsVGu3XLTTTdFRAz8ecT30Fcy5nh/f+ONN8bP\\n\",\n       \"/uzPRkSMMsccEwS/FRx6WeYgMnE93FyOwWprhdHPt7/97VPyEl9DPAZjxfi/8Y1vjIhJnBbxbFxH\\n\",\n       \"RhR6Yb547LuuGz5jPK+66qqIGPMTOvYB2c21Z45Gvkf7jOlJJ5001K8hJsy8f7RtLkFnq9E23Fzm\\n\",\n       \"YiT+BFkYy6uuumpUmZ64LDKCzLXnGlZe/7feemtETNY/cRfEShJ7xH1uuummEeebY5n4CacY+4Wz\\n\",\n       \"MV19nvF//etfP3XdrDo5rM93vvOdU20Rb+K4E8YIvjJnMaEfPm953Nq/g5WVlUEuZIHfzCwJjoFE\\n\",\n       \"Fta/9wvmJGudMUWPbYwYbfMd9mjmreMz21iW9nrWHO0xZ7kO3TOm7Bftd9qq720/kZt7E2fkmFn0\\n\",\n       \"yJ7kyv8ZB+Hll18+yuj0/oXc9BN4v2P+Iwt7F/F97NXsdchyww03DPegP6wZc9BaFlfbRwb6C0+o\\n\",\n       \"x9R7Xdd1w7z9qZ/6qan+MY7mWrUsGVsHeqV95q4r/j/99NODXHCtXnPNNRExeVYx/sSUsVbh2sxQ\\n\",\n       \"HqlCoVAoFAqFDWJTK5tzqnWlWleIxkJts1wyNmrXUcLDYP4qYNZuvsep2F4krOG1tbXhxGuOJLdt\\n\",\n       \"Ky6ryeQsJ9rD4rAlu7S0NPSXfnKSdlaFMzqwWoCzGamvw/c+//nPR8Qkg6LN8sl4zYAzH7D2GBMq\\n\",\n       
\"eVO5njozwBWM0R/WYFtlnTbxMFFFmv7iBQFY/bRB2/TTc9HZjIzZSSedlNaRog3GxPMd0B/mnD16\\n\",\n       \"tsjwDj3vec8b5kFWm82eBVeodtYeY8TfaZ/6Q7P0QpvMDXTo7CR7rsiEQn/O9EEPeN1Yw85EavuB\\n\",\n       \"DOarzPjquM5eYMvC+mGusz7MpNC2Qb+oDwScQebMIWc1eYz8eas/jz9zy8wEyJhlVnru4TV0hqpr\\n\",\n       \"Y+3Zs2foj/tpb5+51LIq+6wfsluzOkJtbTfmSlYnjP2ANpCZuesYYvYm9MfzgEy8WRnKzqpEHxlH\\n\",\n       \"HLpEJvZ/y8K+iSeKtwXmPYwYZ62jDz73c5Rx9PPPbzIAMvjZRl/a55G9uWZdWI9T0vMq40N1Dckt\\n\",\n       \"W7aMnrk855iT7G9Uhz9alEeqUCgUCoVCYYPYNI/UqaeeOlhJ9957b0REvOhFL4qIvGYJXoalpaXh\\n\",\n       \"Ha9Pr1jknLA5cbqCudu2JwprwWzhnMj37NkTd999d0RMTv62dvguljSneKwa83hxPbK8/OUvj4hx\\n\",\n       \"DBZ46qmnBvmxTlxZFnBqp2YPMriys/sJN9M//MM/RMSkMu6s+iCuHWLeQ4D15qrQ8LpltUf4HFmJ\\n\",\n       \"B2q9I4wvnqi77rorIiY1h+wdYWzo7yc+8YmImFhJrtbtmAvqyOzevXvkkULX9NM1yzx38W7QP+Ys\\n\",\n       \"3kbLQr+ffPLJ+Pu///up/p933nlT1zKHkIWaXFivni9YqqwDdM+atUfi0UcfHf7mNeXxtBcHK5DP\\n\",\n       \"bR3Tf+bspz71qYiY6K/lN3PcneMNXf+KNQfnGGMEr6PXKO0iK94RvE3tOmLNMU+Zk3CtGY5vBMjk\\n\",\n       \"uWtPTltvKav2zFi89KUvnZLJ3m48OfaOI0NW0wi93HfffWlFdtpARjyrHjuAPvAaoD+45dw+7Zx4\\n\",\n       \"4onDnERey826YB0gG/PecxGvErp+4QtfGBF5rac9e/aMKtDbWwq4p72otGmvoeM/P/OZz0TEuA5b\\n\",\n       \"xGScmc/IhOfNnLX2vDKX7WkCrtfYjkHE9LPR8ZlcQ//81sgxop7bHn9zN7asFn6GfuELX4iIyV5t\\n\",\n       \"PtS2OvyRUB6pQqFQKBQKhQ1i0zxSBw8eHLxKnH6dnQJcnfgb3/jG8JnfM3Pi9ntlx04ALEruzXtX\\n\",\n       \"rF9bDZx22xgj3pNncjt+AM+C28bCsKeGuIRZHgyz1wPHSPBdZMCzxD19qr/zzjsjYmJpY8licbYW\\n\",\n       \"huMpuAf9thfQcVrI7ng24Gq6tIvsracGCxRPA23itfAY0aZ5rOin9UJfGRv0vGvXrlHcEG0yRlhe\\n\",\n       \"WOCGPU7OkMl4Ir/61a8O93rZy14WEWPvKDp1NegsLonfXREbC919XV5eHtYMOms9RS3wAqAX7oHX\\n\",\n       \"0F4APDoPPfRQREwsVrxurTXtTGDaYj44zoh70t+MBw9gqeKFdnZw+z3uyfpln2DPsnVsyxwdey37\\n\",\n       \"evMrLi8vj+YtcvPTmXKeLx4jV4D3GCFrW70947ez15/fzZEG8Ao4c9BrELTrxPPUexFjRH9pk/HN\\n\",\n       \"+ELxYLCf2DvY3p9rkCVjqqB/zGH2ReaJ28Y7Rmwpc5D9pdU797LHiH3OsrDPMYbMLXt6ATIyd9HH\\n\",\n       
\"LK469ECb7F3ZePJMZ/0gmxkgAH1ztuOWLVtG44nu2EuZa8zzrBK6UR6pQqFQKBQKhQ1i0zxSrVfp\\n\",\n       \"+c9/fkSMedIAp8L2fTendr9P5xTOKZe2sHp9IrUl4TorPlFjuSwuLg7xMZx8M56+jEPOstsqpL+c\\n\",\n       \"wN3O2tracI+2nlHbH9+L/mCBcHp323gXOKFj5aCvWTx3fieNXiwLlrSz/fjcemH8HedjazNinIWC\\n\",\n       \"RyrzdjIv+Iklko2dOR7tVZ0lNxYV8rquEMDT4gwi4PmFbIuLi4P3h+/Y88Ln5vFCFvoBsOKYg20m\\n\",\n       \"1Kz2FxYWhn62MQkRYy+AsxMvuOCCiMh17ixOxog9oI3XymK91uO3pP/IlMVG2IpumeUjpmNHnJ3q\\n\",\n       \"Nee9yJmUXOfYM4CezA+4bdu2kQ5bjtCIyTx2DTfLgsfC7XhM0cOsvc7XmluNuYTuMln4HC8C8D3b\\n\",\n       \"GCNnPmbcaejDHGuWnXXBGs30APbv3z+MD/dg3tpT52eVYw399oV2aa/1vERMr1HWOZ9lWbqWxXu4\\n\",\n       \"+TTdPjLyzHK9vfbe1kvGKZnVpuI67+n2Drexeta53w4x35G/lftIKIqYQqFQKBQKhXVQFDGFQqFQ\\n\",\n       \"KBQKxxib9mrv6quvHlxwTgtt6UciJtQpbcl8Bzv+2q/9WkRM6CRwG7oAHS48KEKgE7CL20FzlJ+H\\n\",\n       \"DuXEE08cXKm4GOkHbf/iL/5iREzcg7x2wJ1IP7keehPTEwCXt//5n//5QV6CJnGl4mo1FQpAH1yH\\n\",\n       \"7imF/8M//MNT9yT40q8Cb7311pEO7fbmJ3JfccUVU587NZtxRhZK/rt4YvsaCzoBxh8dM/7cg/6a\\n\",\n       \"ZoVXgNkr0euuuy4iIt7whjdMyc7rrC1btgz9Nl2NaSSYN8jGGF122WVT/WvT2SMmcxFaDsZodXV1\\n\",\n       \"eLWHax1ZWBdQYWSvQein6YpMy4HOGQv6esUVVwzBscjN6zG+Sz+RBfgVGHqi7Te96U1TfWJs/bri\\n\",\n       \"xhtvHNFDuDgorwluvvnmiJisf/YL1ijrg7nI/ILeBriI5NatW4d5y3giL207SBa9vO51r4uIcdAw\\n\",\n       \"+jN1FhRUyNomY3j82bdayo6IyZziu+gFSiEC5WmP+U5/oc6BOoU+Pfnkk6PXaVCbIItpR9jDTG/F\\n\",\n       \"fGnXWsRkTFnbyI5e9u/fP6w9lzNAL6x/EhpYawQ2cy/ahjoJ+LUSY8te95a3vGX0+hdZPM9Zcy76\\n\",\n       \"y/cZM88X9EjihAOkr7322tGaQw9OPvFz1OUPnJyA7NC+uNAzMm3ZsmXYW5CF9esi2ugFiiDmi0Nf\\n\",\n       \"GHf09IEPfCAiJs8LQN8OHDgwXPv+978/IibrmfAB+ofcHqMM5ZEqFAqFQqFQ2CA2tfwBFiunQFIQ\\n\",\n       \"HcjGqZgT61NPPTUEhTko1IGXWDmcLB2gbHJi/o6F4qA8ZHv2s5893Bu5HRTLSbu10iImFoOpE+gn\\n\",\n       \"Vo1L5rto2tra2qATLAHkd5Coi9ohe0adgWx8TnvcL0sLbvtnjxpw4KKDaR24zT3xXBAYfPbZZ0fE\\n\",\n       \"9Jh6HBk/xsYlBugHHim+R3HQrFApsreUCNl4MhcZV5IUbGHRT9ph/pDm7OKQ9OXgwYMj74WDRx1s\\n\",\n       
\"TOE5ZHEgq1PqTYHiQPiDBw8O4+0iiL7Wn+PJwsL0GrWnCmtxVgB5Rq9C/5wkYavfpUa8vzim1POh\\n\",\n       \"DWblb6xb99Nzy9Y//XTwMXBBQ2R+6KGHRp7HlrolYrIvkujjuci9HGxrImJAu/x99+7dI/JlwHgy\\n\",\n       \"Jsxj5o33DZdq8F6U7TOrq6ujgrNeQ+iFv5sSJ0tOsn7sTW+v5zPGPyvdQ5v8Hb2Z5gZwT6fszyrg\\n\",\n       \"7H3PSTkuOWCqJM/NrAiqr8vKQrTX0E+XZHE/XciTe7qAp98utfuN178L9SK/x2I9lEeqUCgUCoVC\\n\",\n       \"YYPYNI/Uvn37RkSrWMkuWMjpsI1J8rt+X5udan3ydhEwvCFYLngD3P7y8vIoJigrc0Cb9rTYUrOH\\n\",\n       \"xqS8s07TtsYyD5OL4NE/TuJuB6sPS8akyBlBccQ4vdtevSyGCivZxMIA2W39tpady1jQJl6hrOQE\\n\",\n       \"euD76N6wpdrGVtlbYWoYk/jaSuT7eNywGh3vB9p0amLYuNZWGl4ee/+4zp4XW8Ws1YzMdHl5OSVX\\n\",\n       \"trfDFBhY0PQv82CYYBrvUdtX5ESXljfLUs6Icu3ZNWlrGxsVMb0u7Ellf2NP8Ty3V4N7sx+aSJWx\\n\",\n       \"Zr60hQ69V5gIG08Nc5ICvQY6R1bT+wCvyb179w7jm1E+oRdkygi0HTPIuGfrAr33fT98NytnA+wd\\n\",\n       \"y7yeJrN30UzP3YMHD448L455BI4/Nem1dY7MLpY7y7PnuFKX+fH8d4yhCzp7H7W3iP2Iudvup3zX\\n\",\n       \"ZNu+N0AvlsWxZG7fXvTFxcU0RtTzP5snGcojVSgUCoVCobBBbJpHKmKc7WbPk8Fp+JRTThlZIYDT\\n\",\n       \"J224VH1W8p2TObK4uCTgvl/72teGtrMYBtMHYOXQD1sB/I5l6RL3tnbakzuWYkYn4kJzfo9uD4wt\\n\",\n       \"EMd3tO/I/V7cbVgWW0P0Dz1mVDu2HpxxOKtfAI+LrUDaJIaK2Dja8RhldEZra2sjbwcyuNAi45YR\\n\",\n       \"S5ucE5kdUwHm5+eH8WP+2qozXY8Lz3ldZIVasyJ4q6urg2eBtjM6CffPOrWnju8zb/CK2kPT9sOF\\n\",\n       \"abNCss6owoJGpsxr6GzHWbEUjtd09pX3C9MPseaYu77eJOjM2eXl5ZTiBuBJ9XowGEsTStuD7XiU\\n\",\n       \"7du3D/2xV89xKMjvuEPg7F6u91iD9i2Ds68ycl50jF7aDPEWjlfi+1nhxrm5ufTZ472Fv/sZ5DjF\\n\",\n       \"7Pusi1myON7WcluH3ovsZfZeZ+ooP6va9l1oNCse6+tdDDR7vvB3x23NWqP0jzmK/DyLjrYgZ3mk\\n\",\n       \"CoVCoVAoFDaITfVIcYq1t8An91k1LDgp+pTOSdt0E7Tptm2Rmuw0s9hWVlaGmIf1qA1o0+TLWWYN\\n\",\n       \"9zT5pK2CtbW1oW1nbWWgLVsi1iMy4unAGsRaauMebP27zSzzIbMsbGkxRnyOBwsZ2zgGk8465i2L\\n\",\n       \"7cqodrJ4H1MtLCwsjPrZUri0bWVzylY+HijumXmk+r4f5iJeDFuE9qQQ85LRMiALVqBrAnmezc3N\\n\",\n       \"DTpjfExxAmzlEk+RxZn4nrTnjKm2DcubzUHTTziuwt4RfreH19e1f0OXJvHNvKPeu1hzs9Z/24eW\\n\",\n       
\"MsYeg8z7la1/2nR8SpbN5rW/bdu2EckwQFf2Ejo7D9iD75hU7xez+mk6GuA3GI7bdHyX6UdcC8l6\\n\",\n       \"7Pt+FJ/FNfbU0g9+Ms+RzWvO8cCuq9WuI79xOdrnRBaDnF2fEUi3Y+T93rFSWeyYvWTZmuZ6e8QX\\n\",\n       \"FhZSz5q9ZFkMdobySBUKhUKhUChsEMW1VygUCoVCobAOimuvUCgUCoVC4Rhj02Kk3vrWt47eeRvw\\n\",\n       \"27zlLW+JiOl3xc5OMs8S70X5u98N33rrrREx4U7yO3Te0/L59ddfHxETHp/23bczmW677baImHDt\\n\",\n       \"udq043iQBQ4i7unMQWSCg+iiiy4a4mb8jhegF3ToGAe/877hhhsiIufm8vvsO+64Y+BlcpwZPxln\\n\",\n       \"+LjgtzLHlutt0U84qNo6ORHTXEtcC3cScSmuycLPD37wgxERcckll0TEJJ7A8QmMKTxRb3vb26au\\n\",\n       \"Q5a9e/cO8RT0k/Fk3IhhcCwg85y56L/TF3QOlxt6jxjXZuJaxp/xJB7J9W/oDxxhF1100VQ76Bp9\\n\",\n       \"EGN1++23R8QhbjbmCG0yN5ENvjr47Rx/wLpgjrEu4H0DfM/xjXfccccwz13TytfCEWd+M48/7ZiD\\n\",\n       \"jHnEmLcxaNdee21ETPYK5M0ygeAIY245ptJ1cZCF+UL7yLJr165BHtYFOmSeIwvxSdyLvQu9uOab\\n\",\n       \"Y0zgfbvyyisjYqLnAwcOjPZSrWEuGgAAHeZJREFUc+cxT5DbzwPmCzyhyO7YF/YLxpRnwMLCwtAW\\n\",\n       \"44mu0At7kXkbHX/D+md+ITOZo/xOViC8gm9729tGMT6ugs+15qBz7Bvrg+vhoHSFe/rAfW655ZYR\\n\",\n       \"1x5y009ibL0unGELzPvK3HXMEb9v27Zt2BfZc90WYFxpm+co/XHsKXpiXfAMMLvDtm3bhnFi3rIv\\n\",\n       \"Ojaaec6+h14ylEeqUCgUCoVCYYPYNI/U6urqyCJxJgzAWsSSe/jhh4e6D8973vOmrnWmmK0583hx\\n\",\n       \"8nSWVlYhPeMaihifrM2xx+/0B+8AQDbqpKznPWpP2FQ3pp/OZHGlaqweZDNc4wPLwlmR7d+cXeUx\\n\",\n       \"cD/RB5WeL7jggpmym4vrwQcfjIhx7ZdWFrxc6AUeqp07d061jQXtrK0sOwn9YR22f88yo/CGMn/x\\n\",\n       \"1GR1pNrsq4iJVeisvdYDiG5spQEsTmTBIj3zzDMjYpydxL25jv5eeOGFETHhaAMLCwsjDizXc3Hb\\n\",\n       \"6APZmRfmQ3OdMmSZ1dfMW8HvzgjCijcHnb1pgDF11f5ZVZqz2l7MW2cn0g/676zUbL6gD37O2pvo\\n\",\n       \"D9cwrugczkVgTjmvC2eY0kc8nWtra0M/XCWaNljX9qBYfvTF310x3WOKJ+Ib3/jGsL+x13rP9T18\\n\",\n       \"b/fTnih4Vl/xildExOyMY7MrMO6upu3sXPT20EMPRcTYg2svqj2As3D33XdHxESXr371qyNiXF9r\\n\",\n       \"1v4WkWc/ev9wrbBZcN0s+u22eI7gBc8YRAB6M3fr/Px8WhfQ2bfZ/p+hPFKFQqFQKBQKG8SmeaSe\\n\",\n       \"9axnDZbWvffeGxERZ599dkRMexgiJl4FrJ2zzz578F7Yo2KeoawyK+DkjCxYLHiFbE23p9sHHngg\\n\",\n       
\"IiZVUW154WnBIjnjjDMiYmJx2sLAYuFkfv7550dExKOPPjqlBzA3NzfUD8K6NxM8wErhNE9b6MkV\\n\",\n       \"nPk+MmGhnHfeeSPZXWvmSDx8bVuMJ94NLA6/22YsvvjFL0ZExIte9KKIiDjnnHMiYqLfiEn/sbTp\\n\",\n       \"N9faIrWXBw4yPDieN65ThFV5//33j3i5Wg9qe21WuR+4kjN68XxBz9u3b48vf/nLETHx0Jo7j3Xx\\n\",\n       \"j//4jxER8T3f8z1T92AeAXRK/1/72tdGxMSDhSXeykx/mVPo1nMRi5y1a743j5Hj0fBIsp5m1ddy\\n\",\n       \"9fDMq2u+vxe+8IURMfFMMB8A+nI9Nsbs1FNPHclg7y5yZ54D12zLKn4jI3Oevu3evXu0Bhkvxv+l\\n\",\n       \"L31pREzWM2PgtpHZNbts2TO//JYhYsyFRz+41pXNPXcZI1gHkJn91rySLaMAHmh0blnM1+a9x2uO\\n\",\n       \"+UAVcdYFY+l1tHfv3mEtMa7IZK5F7sXebE+mPZh42Znb7DOzeGUZf9bQD/zAD0TEZN/nGQPQg2s4\\n\",\n       \"sq9m/Km0jwyzmBYc44h+GAN7pF3Ti3tlPLGuacV8auP2gOOXaYszSMYTaZRHqlAoFAqFQmGD2DSP\\n\",\n       \"1L59+war3+9AfcLE8sCK2rlz53D6xpoBruDKqZSTpasscx3Wgqsn2yPB9Vu3bh2sT3PjAXOP+ZTu\\n\",\n       \"9/GOscJjg+fLPGF79+4dTuuc4rMTtOMnspgxQDtYclhD9Ln1BLpCbVvte1a/uB5LAY9UVh2Y6777\\n\",\n       \"u787IiachZ///OcjYtoiNV8jnkvGwl49X8+9XD0d+N3//fffHxGHrEF79dADn2PlZFX5+d0eTq63\\n\",\n       \"tdvOTTxuWKmtly5iYqUxnszv++67LyLGMRIveMELpmTB00VsiD01XdcN8tOWsw4tC/c499xzI2Ki\\n\",\n       \"c3sN6CdzHI8kc7FdF+iK7zi+yl5jZ5rSL/Ri69hZrI7TavuKp8zZps6cA96baIu+eG3TV+ZoVkk+\\n\",\n       \"YjIHv/M7vzMiJrFueCLs2benzp4G74tmWNiyZcswLhlfqTMpzRcK0DnrnnmCx99vDdDvjh07BnlZ\\n\",\n       \"O9aNM+LMb+p4HMaUvYj9Ag+V9bJjx45hnjojzOuCfqNznknMWcvCGjdHK/1v9wDe9vg5gYfSOndm\\n\",\n       \"Of10xhzwnsbf2Zvb671WkJt9w+vfHj2zMWRce/Yy+vnZ9g99eO/NOH2N8kgVCoVCoVAobBCb6pEy\\n\",\n       \"RxveguydOifL+++/f8QcDziNY1Fw2uVU6/fSnF6xpMySnXGQHTx4cDjdOzYEcPI2n5Vr3ABbVrzX\\n\",\n       \"5nO/Iz/55JMHa5QTtWPDLIuzGrOTt70L9JUxaL0pzs5zXIH7ye94JABxGq6z41owWKLcp42pM1M4\\n\",\n       \"lhDxFY5j4npzCLqmEfDvzKtv//ZvH/Ub3SKfvZ2+nrmGB9IeHXskmBcHDhwYxinzApin0NaurTrP\\n\",\n       \"/0ceeWRKdqPv+8Fq9Vz0GrIlST9Y/7Msx4jJvDBvYuuRzHi6zL0H7D1FL3gRvL+wnzj2albWnr1g\\n\",\n       \"/I4XPdsvaIN7ZXPRHh1+Li4upnFptIknijVnzzT6QtfMG/rguCTvF33fD3uFvbrm2OQ7XJ9l+XJv\\n\",\n       
\"xijj8mtjpCyD15C/65gpy4LHAg8wc9bZzWBhYWH0GeOfeersJbdn0/10DNWsPZp+owf2UHs93U+u\\n\",\n       \"N9+hY+RcG8uZu21fudaxXNnzg7b8/My4+bx/tNdncwUgE3sNeloP5ZEqFAqFQqFQ2CCKa69QKBQK\\n\",\n       \"hUJhHRTXXqFQKBQKhcIxxqbFSF188cUjPjvex/JO+N3vfndETPhw2kwcv4OFOwdOId6nko1CjRre\\n\",\n       \"Q8PjA6eQMwlcE4nrW24evkOcAXFWcArRNv3kvSvvsrkXPE7moOLevLfl3TGcVRdffPGIG8zxI+a3\\n\",\n       \"4p7Iyvt6+guPE/0kvoOMGarsIuNtt9028Cw5C83xRozRZZddNnU9MQ/UzSLuAD0ii+M+iMnZs2fP\\n\",\n       \"0E9zpxE/QAYkcwvuJHifXKsEvaBfOOvgieO9e5tpaO48dM41jD91cJAJvcDN5phAYgGor3TddddN\\n\",\n       \"tf/UU08N2XiufwQXHpxi9I97u0YV4w+Pm6uTO6as5SB0fBZtIws6hMePOAx0jQysD/NhAfMFgt/6\\n\",\n       \"rd8a8XhlFc3hWuN6x0Y4wwret5bfMGLM4Xjw4MFBJ17/zh7yfsHcoi2uQ4/E46EXOMjQd5vFR9wR\\n\",\n       \"c9F8mMDVpZkvrFHH0JAVSuwI/GbmcpubmxvkRxb6aQ5S9hLHDJlr1TFVzrjzPtq22XK+RUz4Kr3n\\n\",\n       \"OkuNPrifjCl7FfsKY4Aef+EXfmG4tzOn6T/rgvlivTiuB72wLpx5hkzEVP3mb/7mILfjF5lT3PP9\\n\",\n       \"739/REy488zF6XhYuFmZX+YTZI0fOHBgtEcD2kY//N4+52a17XpR5lt1zOHc3NywBumn1zNrkuvY\\n\",\n       \"m2655ZY4EsojVSgUCoVCobBBbJpH6oQTThjx22BpOJuFv3PCbqvwOguDk7Vr+HCqz7LTnL3j+jOg\\n\",\n       \"zQbj1Jpx/3CtmaXpp9vOTuZ835kSi4uLo8xALAbX1zKcvWZvkqsyO6PSGYQReRVpW5rmPzMPFrVN\\n\",\n       \"AHp0vS7Pi/ZvWIFY2syDLPPNbN/OtASuidVam86q4XfmiTO+snnurCasOuu85ZxjvLmX5TR3FDrl\\n\",\n       \"pz0Vrk6O5cocnpWZiTVO5WZ7VgH9QEZ+4gXwGjVvHh4uZGwz6zwHgbOJADrmHpkHA2QxpfSpzWq0\\n\",\n       \"p918ddaLM06Zq9kYOZuprfHjucLcor/OuvN8cZvMQT53Zq31Nj8/P6qflLXtzEfDWZ7mMDRaDwQ6\\n\",\n       \"ZP17L7JOmYPr9ROZmIvIYj2urq6O3iigj2xvcdZdVrurvUfEpMYb7bfrgrXC84KsTT5vK/JH5Nmu\\n\",\n       \"5l4EyGiGEHtLIybrG/nor984APrtzEj2pox9hHu2NQF9D2cKsm44Y2Rz0iiPVKFQKBQKhcIGsWke\\n\",\n       \"qeXl5VElU+I8XMHZlt2jjz46fEa8icHJk9geTr0+YbpODCdpLA17dlqPBrWJqPqcVeTl9M71WC+8\\n\",\n       \"jwV+T28PDVYhmJ+fH/RAtWe+C/cacKyLKxbbm0LtI3TOCd0esLZ/wO+ybQViURK/xjttdPsXf/EX\\n\",\n       \"U9fbi3TPPfdMyfLyl798dG/6yzhS98UVvF01Gh43ONds9boOE/r5l3/5l5HF6JiYH/mRH4mIiI9+\\n\",\n       
\"9KMRMdYL13MPYoCQxZ5Pxmhubi4++clPRsQkFtBV1rHeiG15zWteExERn/rUp6ZkBMwT7knlZvRx\\n\",\n       \"1llnjWR3DTbXoAG2brk3Y0RVcYCVyE/i9IjXaT0e9tK46r5rGrXei7af3MuVze1xYN6gp1aP9m7b\\n\",\n       \"kvZ42lvkmEF76lhHzBdqfZ111lmjeW7vH5Wu/+7v/m6qv8B6NJ+fvQDM3dZr6O8As0zY0+zrWVf2\\n\",\n       \"XOIltWeX3xcXF4c9Bt1m/WSOsm6ymlb2En7/939/RER8/OMfj4jxfrG2tjbIw7OE9Ww4lo49hjno\\n\",\n       \"Zxd6Y05/9rOfjYiIF7/4xVN9i5jojjFhDjLfzW/rGFDXNsuYE+j/q1/96oiY8KO2b1MYCz7jHvbg\\n\",\n       \"A3su7bnOvM+MNfvEgw8+OHpGO64MtgxzC66H8kgVCoVCoVAobBCb5pHaunXrcOLmdGsGZsCJE4/M\\n\",\n       \"zp07B8va3hBOt5w8OVFSJdweKMApFg8GFodPsO2JnHtlvEz29mAdZVkqnObpJ/3GUplVZRvdwaGF\\n\",\n       \"NZ/FSHCKR4ZZsU4RMcoGRB9c31o7tuqBLXHA+MJThxeIMXU76BnLDK8LFnirF8dyYHn5HT7gc8cn\\n\",\n       \"MU9cyZvvo4fWUvPcYmzoFx6bz33ucxEx9uo4lgLgDbTsXDc/Pz+Kp7IObUG+8pWvjIiIv/mbv4mI\\n\",\n       \"sXeU7/M566KNy2qxsrIyeNBok/HCgnQ/zQiArr2m0Stj5LnYepmyeELHtrRytzIiM/20F4i5bP43\\n\",\n       \"1nbrPcX6N98YPzNPCuOKt4D+ZlW4aQ99nHLKKVNxpK3cZAC+6lWviojJXLRHAlkYU8d5eo6aT3DP\\n\",\n       \"nj3DPc3LSH8YZ1es9tzy53gY8CJkXoaTTjpp8AI5Y85gbX3pS1+aatNvAVhH6B49fvrTn555fSsb\\n\",\n       \"c4Q2PLeY5/ydMczi+9jb0C9zvq0uD/g/vKb8ZG+y3NzTeztjYQ+239zgiaL91kPO+JhthHt4/OkX\\n\",\n       \"/fX+YNBXvteuecc8Mu68LWLN8abLHukMm3aQ2rt37zBIuPTbv7Ww63dhYWFY+CgXMEgMiqkt/MAA\\n\",\n       \"bFIoku+7/TYIl6BaJp1dkgyaJx2bthcGsvtVoEsyzMLdd98dEZOAQzZ4wKT1QQPZvTHSDnrxq772\\n\",\n       \"IeADk4PBDcYTPXzsYx+LiMli9GspB2mz0M4888yImH5dRxvcm/7aVQ14yHGPl7zkJREx2dQywmWT\\n\",\n       \"Y8/NzY2u9WH1z/7szyJiojtvCP6cV8ZZ0D5junXr1njZy1421V8/SJEBXX34wx+OiImefAjkcx5W\\n\",\n       \"9M0k1mDHjh1Tqc4R44BVYIJY2qQ/1gvXo3teY9MXjKSI/EGJ7nwwZs0xp5DVwbduB30x/rTb6oV9\\n\",\n       \"DF16E/ehzqn37fi2sgH0zL7THsz9YPOhizVHf7zm0DWfmyDZ4087yPic5zxn6I8PjC5JwVy1boED\\n\",\n       \"/5mTtJMdAr7+9a8PD8JZpNLtPV3Ggn54X6TfPPw/8pGPRMTkMGPaq5WVlWGc2UNp03OVOcS89oEi\\n\",\n       \"o6dCpu/93u+duq7tK/s4hznWDM/DzPB2WQfr3tczFnZItAaGX6daH97nGBPmuYPVDSd5tc8yryHm\\n\",\n       
\"FM9y5GcMsgB/o17tFQqFQqFQKGwQRRFTKBQKhUKhsA6KIqZQKBQKhULhGGNTKWJ4b8p7Sd6N8g70\\n\",\n       \"+uuvj4gxvcH8/PwQR0Ew5Ic+9KGImJTw512og4NNswL9AO9neZfOu1NkvOaaayJikqo/Nzc3ii9A\\n\",\n       \"PqhtKLPPO1velTu24fd///en+pkV5rTsl1566egdLv3k81/5lV+Z0gufI4PjjmgbigjejfM99ML7\\n\",\n       \"/Pe9731DyX8HkTIG6JC2oSvhHbcDm+n3tddeO/Sz7T9jxfv+r371q0NJAV9rGiL6C+UD40n/0J9T\\n\",\n       \"1JElo0Jo5f7t3/7tKVnoJ/dGh8SGQG1gqh2nrBOHAC0D1Alt2y5Sx7XI4mB6ficoFLoKZLHOia1g\\n\",\n       \"Xvzpn/5pRByiWkFurmVdoCNTeDhI2rEQUIpA42K9uK/XX3/9aPytH/rNGoUKx6n3rA9iPX73d383\\n\",\n       \"IiY6z+b44uLiIDdzhX6ZnoM2oOWBIoY5Zz0yb66++uqp9kEbc4JOoJPx+GdB94y/ZWduE+dDO5al\\n\",\n       \"LQrq2E7aZvyZz6ZnyuitZgUPt6Cv6HF5eXn4DvOV8YUihLnleC4SWpiLUMq8+c1vjohJELLLrXA9\\n\",\n       \"6+6yyy4bxc6ia74D/ZjpzQC/I9sdd9wREZNnlwPAie/hPjfeeOPQT/TBvYmlY13/yZ/8SUSM91Hg\\n\",\n       \"hIl3vOMdERFx+eWXT90Tmdv9lzVH2y6GafoZ9kU/0902v7One48mjnHLli1D27fddltETOaKE1mI\\n\",\n       \"U+T8wNzNUB6pQqFQKBQKhQ1i0zxSa2trwwmUApxk2Dnzwdlqp59++nCydFo3p1NbAS7+CbCG7LHA\\n\",\n       \"2nUGQXvC5h7IQFqu5cYKxrKkoKDT2V08EKuBwpwuVLq2tjbK/KOIn/sPnGGIZ4nMqbbtiHHRNGRu\\n\",\n       \"M5BsUSCTs04Af3fBTqwAZ5BwLxcHBC4+GjHRHbKRueF0VvrnQnuUZnDBV1tN4PTTTx+8OsDUHswD\\n\",\n       \"MsKyNF57AR5++OGIyDNUIyZWOpYlcyZrG/1ceOGFETHO2rLHhbmNF5C1CtosHuRGR1lKOLrH60OW\\n\",\n       \"qte/vcTolfXQFs3zfHXquLP2TIXEGPG5M6U8p50m384Lr3/2CbwAHiPrnAwpU0UZptrat29fmhHI\\n\",\n       \"NYwnxSHvuuuumbKga/TmMgruK7Kurq4ONCQuauhsTpO5u3SFiwkzD0jhZ96AliaI7zgDGphAGO8F\\n\",\n       \"+4VLd5hQGi86RYG9LtpsXnSPLFnbgPnCmvPeZQ83MpHB7fIqbT8pyEoG7F//9V9PXecyH36b4j3a\\n\",\n       \"zxt+R+Z272I92DPr54fh613mAFiP7dsU79umbeL5xl7uMcpQHqlCoVAoFAqFDWLTPFLbt28fvVfm\\n\",\n       \"BO46Uj6hdl03WCOul+L6HyZ8zE67XIdFgrVrS512FxYWhrY4xdryNrUNtZ0cCwA4WaOPtuBixLhu\\n\",\n       \"xvz8/IhOgOKW7if9wnoxPYc9e7bouY7PW2/IrOKM7Xcz0lpkpnYXsts75vbwSGHpPvXUUwNFimt2\\n\",\n       \"AbxAruXjujguROd6TI6laa2frJCgC3Py0541e/YsS1aoc2lpaUQmmtXLQef2zNqKc+FN5t655547\\n\",\n       
\"df3f/u3fDn11PBWW93rkvBkxOMAqdh9NpDyrLRfotSx4Wkw/wk/r0X1wHaqWIgS5ucaeNVvSru1m\\n\",\n       \"MtqMWNZ7Xd/3KV2V4+5YQ+sVKkYPeFy8Rh07s7q6OrTpvch1fjwfsvpzfM5axaPnvQsZlpeXB13z\\n\",\n       \"Ha9/kzkzXygoSa06t809GW/Wxaw3AS4S63EDJmd3XTbrhc8Zd37aAxYx8SghHzXqoJOhEClwjSfX\\n\",\n       \"hrMsGQ0YXtW2rqHjFb0XZV5fe2azmDnaRZ/tOrOcjo3i78idFaw2yiNVKBQKhUKhsEFsmkdq9+7d\\n\",\n       \"o2wtV6E2sNieeOKJKc9QC1tBWAyZp4I2XT2dU7ItjDbmgjgDrFefpE3wiYWOd8wxVbae+Z3MRFvT\\n\",\n       \"7f3w6nzmM5+JiLH1yrXoC53z7trWEf028eOsCsH0z++4M527+rwpEdarbI1XBW9h21f6acsDSzSr\\n\",\n       \"Du2sxAsuuCAixpaXvYR8/8knn0zv6Tawdhw7gr5MXowsBjLPzc0N2VRY6ZnczCl0/olPfCIixvoC\\n\",\n       \"XI8ngnuS1QROPPHEYVzpF2Nh0mnHUDGOGTkzfaE99gm+18rexse0v2c0RvYCcm/moL3MrqqM5c2c\\n\",\n       \"bPcj00wRV4TcWRV22mZfYV147jrWBJx88smjtYc+WuL3iHGsJLCH1nuSx9Qej6WlpSFGxx4pU1/Z\\n\",\n       \"i+oxok1TZUHSa68xY7S8vDzogXWRUX6Z+sYE08DxOXiX/+qv/ipmYe/evaNK7mbyACa5N9WLx9me\\n\",\n       \"TWRlfrXeUce8EQNK/JX3C78FcTxbNqaOB53lXfRao+0so9RV502x5r2L8Xf888GDB0fPOXTIvXku\\n\",\n       \"osMiLS4UCoVCoVB4hrFpHqldu3YN72lNVugYKU6L/NyzZ8/oPTtwLI/rCdmD5WwNZ5Rk7+sXFhZG\\n\",\n       \"XjFbgY6n4R6ZF4h7+Z60a49Xa3E5xsEWo/u/srISjzzyyODJsKVmriETjM6Ca6o4tgUwNq574rpi\\n\",\n       \"lh0ZsczsqWqvtSVtDxJwrSPX5fJYOFOk9fRkMXLcm78z/ieeeOKQwdn23/EJWdwT1tLc3NzI+vJc\\n\",\n       \"9Hqxl9QeJvrn2m6OMWmv9zjhgfGaow3XnLFHDuAdyfgSW+sYa9XxKM6IBY8//niceeaZoxgye2SA\\n\",\n       \"9QFmcbNZV96bvObs9WFN402xLLb624w5W/XmyrNHIYvv89jhZcr2G2JW9+3bN9oH3E/08dhjj8UZ\\n\",\n       \"Z5wxtJkRTqNzsjxZu/YatbyJs4jeW3ic7QWyXpgf3l/43NmMJ5xwwmgPyjwveCD5nJ9+hoFMT64R\\n\",\n       \"2Pbbc4Zn0iOPPDIVD2YvEHM545UF/N116lrZHQvn2Kf19uBsPlkGc75u2bIlnbfMIa9jr7kMm+aR\\n\",\n       \"cppo4fijxmDzQWmDwuag1sDmg9eMhc0DCQ2FjaG49gqFQqFQKBTWQXHtFQqFQqFQKBxj1EGqUCgU\\n\",\n       \"CoVCYYPYlFd7hUKhUCgUCt8MKI9UoVAoFAqFwgZRB6lCoVAoFAqFDeK4H6S6rntt13X3dl13X9d1\\n\",\n       \"lx3v+3+rouu6B7uu+2zXdXd2XffJw589q+u6j3Rd96Wu6z7cdd3J67VTOHp0Xfdfu657vOu6zzWf\\n\",\n       
\"pTrvuu7yw+vi3q7rXrM5Un9zIRmDq7qu+8rhtXBn13Xf1/ytxuAYouu6M7uu+2jXdXd3Xff5rusu\\n\",\n       \"Ofx5rYPjhCOMQa2DY4TjGiPVdd18RHwxIv5TRDwaEZ+KiNf1ff+F4ybEtyi6rnsgIv5d3/f/0nz2\\n\",\n       \"7oh4ou/7dx8+1H573/e/tGlCfpOh67rvjog9EfHf+r5/2eHPZuq867qXRMT/jIh/HxFnRMRfRsQF\\n\",\n       \"fd/P5m4pHBWSMbgyInb3ff/rurbG4Bij67rTIuK0vu/v6rru2yLiMxHxnyPijVHr4LjgCGPwI1Hr\\n\",\n       \"4JjgeHukXhUR9/d9/2Df9ysR8b8i4gePswzfynANjB+IiA8e/v8H49DiKhwj9H3/NxGxSx9nOv/B\\n\",\n       \"iPhQ3/crfd8/GBH3x6H1UvhXIBmDiPFaiKgxOObo+/6rfd/fdfj/eyLiC3Ho4Vzr4DjhCGMQUevg\\n\",\n       \"mOB4H6TOiIi2lPBXYjKghWcWfUT8Zdd1n+667qcOf/bcvu8fP/z/xyPiubO/WjiGyHR+ehxaD6DW\\n\",\n       \"xjOLt3Rd9w9d193RvFaqMXgG0XXdzoj4joj4RNQ62BQ0Y/Dxwx/VOjgGON4Hqaq1sHn4rr7vvyMi\\n\",\n       \"vi8iLj78ymNAf+gdb43PccRR6LzG45nBzRFxTkS8IiIei4gbjnBtjcExwOFXSv87It7a9/3u9m+1\\n\",\n       \"Do4PDo/BH8ShMdgTtQ6OGY73QerRiDiz+f3MmD75Fp4h9H3/2OGf/xwRfxiHXLWPH35/Hl3XPS8i\\n\",\n       \"vrZ5En7LINO518bzD39WOMbo+/5r/WFExPtj8tqixuAZQNd1C3HoEPXf+77/o8Mf1zo4jmjG4HcY\\n\",\n       \"g1oHxw7H+yD16Yg4v+u6nV3XLUbEj0bEHx9nGb7l0HXdUtd1Jx7+//aIeE1EfC4O6f71hy97fUT8\\n\",\n       \"0ewWCscQmc7/OCJ+rOu6xa7rzomI8yPik5sg3zc9Dj+4wQ/FobUQUWNwzNF1XRcRd0TEPX3f/0bz\\n\",\n       \"p1oHxwnZGNQ6OHbYcjxv1vf9atd1PxsR/yci5iPijsrYOy54bkT84aH1FFsi4n/0ff/hrus+HRG/\\n\",\n       \"13XdmyLiwTiUxVE4Rui67kMR8R8i4tld1z0SEb8SEdfHDJ33fX9P13W/FxH3RMRqRPxMX7QD/2rM\\n\",\n       \"GIMrI+I/dl33ijj0uuKBiHhzRI3BM4Tviogfj4jPdl135+HPLo9aB8cTs8bglyPidbUOjg2KIqZQ\\n\",\n       \"KBQKhUJhg6jK5oVCoVAoFAobRB2kCoVCoVAoFDaIOkgVCoVCoVAobBB1kCoUCoVCoVDYIOogVSgU\\n\",\n       \"CoVCobBB1EGqUCgUCoVCYYOog1ShUCgUCoXCBlEHqUKhUCgUCoUN4v8BaJ+sNGqpC6EAAAAASUVO\\n\",\n       \"RK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01f42290>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"filters = net.params['conv2'][0].data\\n\",\n    \"vis_square(filters[:48].reshape(48**2, 5, 
5))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The second layer output, `conv2` (rectified, only the first 36 of 256 channels)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 31,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlEAAAJNCAYAAAARaCA+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzs3VuMHdd97/l/SSSbZLN5b7LJ5k0kRTISJdMWJcuyjizL\\n\",\n       \"ythGjNgxjBMYuSGTGQxwkMyDHWSceZjIQBBMBnCekpMXjwPDmOOZIIkNx4EhObElHkqmJFIiJd7v\\n\",\n       \"tya7m5fmXbyIrHkQe+lXS13F6rXrtnd/P4Dhf3HXrqpdu3Z1af3X+q8ojmMDAADA+NxX9wEAAAC0\\n\",\n       \"Ix6iAAAAAvAQBQAAEICHKAAAgAA8RAEAAATgIQoAACBA4Q9RURR9IYqivVEUHYii6H8revsAAABN\\n\",\n       \"EBVZJyqKovvNbJ+ZPW9mA2b2ppl9PY7jPYXtBAAAoAGKbol6wswOxnF8NI7jW2b2/5rZlwveBwAA\\n\",\n       \"QO0mFby9fjM7IcsnzeyTukIURZRIBwAAbSOO42isfy/6IarWB6Q1a9YkloeGhlx88eLF1Pf19PSM\\n\",\n       \"uY19+/Yl1rty5UpLx7dixYrE8smTJ138/vvvp75v+fLlLj527Fjqevfff/+Y24uiMb/70s2fP9/F\\n\",\n       \"/vmfNOnDS2/69Oku9tPL+r7bt2+n7mvu3Lljbu/ChQuJ9dK+wylTppjZB+dt0qRJNmPGjDHf09XV\\n\",\n       \"lXjftGnTXKzXW9HnXD+Tmdm1a9cK3X670uulr68v8drly5ddvGDBAhfPmjUrsd57773n4v379+fa\\n\",\n       \"78KFC108Z86cxGt79+7NtQ11330fJgX0d2xmduvWrVzbmDlzpou7u7tdfPr06cR6S5cudfGlS5dc\\n\",\n       \"nHWPVP5vVI/3zp07ubaR9n4zs97eXhcPDg6mvm/27Nku9n/nSu/vek34v2Vd1vMyefLk1G3rPSnr\\n\",\n       \"s+v9afHixYnXdu7cOeZ79Pv0v5u67ukTzV/8xV+Ymdm3v/3t1HWK7hP1pJm9EMfxF+4u/7mZ3Ynj\\n\",\n       \"+K9lndIetPQPs1n2g0maefPmufjcuXMtH1Nd9Hst8wen58ss+R3oQ8VEEHLO/Rv0l770JRf/6Ec/\\n\",\n       \"avmYfud3fsfF+h8FW7dubXnbWVatWuVi/WO3bdu2Qvej5/zxxx9PvKYPRyMjIy7WP5BmYf9x9IUv\\n\",\n       \"fMHF+ofZzOzVV18d9/aqtG7dOheHPPAV/RDVRPrgpdeRWdjfFX3Q1msxL/+c8xBVjdHzHkVRaktU\\n\",\n       \"0X2itprZg1EUrYiiaIqZ/baZ/aTgfQAAANSu0HReHMfvR1H0x2b2opndb2b/NyPzAABAJyq6T5TF\\n\",\n       
\"cfwzM/tZ0dsFAABoksIfouoUkqv2tXM/qKpo53u/w/P27durPpy25nca3r17d6Hb1879p06dKnTb\\n\",\n       \"WW7cuOHioj9TGu0wbZbsb/av//qvLvb7uIQ4e/asi4u471QppB9UlnbqB6UDDIaHh1PXSxukMx5T\\n\",\n       \"p051cUg/KLQHpn0BAAAIwEMUAABAgI5K5zWdXyfq6NGjtRxHiEWLFrlY67i8++67dRxO4y1ZsiSx\\n\",\n       \"rDXBsvi1yfLQ7+PMmTOJ17773e+6+Lnnnhv3tkPl/bx56ZDutLIsfq0gTe8VnXLTUg1ZdYQmAq1J\\n\",\n       \"pWkwv5RECE2JaWkGM7MdO3a4OG+pHq3r5pdnefTRR138q1/9Ktf2nn32WRf7pTLKLiOCZqAlCgAA\\n\",\n       \"IAAPUQAAAAFI51WonUax+HTqCH8aiSb43Oc+5+L/+I//qGy/mh5Qfjor7xQVIfwUnrp69aqLW522\\n\",\n       \"qAyf/exnXfzLX/4ydT1N1+iUPOrIkSOJZf295Z06JYtOC6KjD2/evNnyttuZTqOj11sRtNL39evX\\n\",\n       \"E6+FzLah02atXr068Zp2TfD3lebll18e9zHg3oqYfSSEPwVRHrREAQAABOAhCgAAIAAPUQAAAAGi\\n\",\n       \"kLxySzuMomp32CBV5nn1e+3EGb8/+clPJpZ37drl4rr6/uQ957/xG7/h4n/7t38r9ZhUWp+eOv3B\\n\",\n       \"H/yBi7///e+P+/16zvXzmZlNmTLFxUVcE3mrXXc6/2+Glngo+p6Wp7SFz++nWESF+qr09fW5eHBw\\n\",\n       \"0MX+Z+/Ee7rOfuFfR3X1Oxw971EUWRzHY550WqIAAAAC8BAFAAAQoJZ03mjzbxHDjjG2pqfznnzy\\n\",\n       \"SRefOHHCxQMDA6nvCWnar1Lec66V6/20WhPLR4TQlEqZ6ZQqr3OtgK7XrG/u3LkuPn/+fKnHVAf/\\n\",\n       \"t6ezGVy+fNnFRZQ7SEtv+bSauV/yo2mTyvsV7rXMh1Y512unU9N5msLTmRe0FEWdSOcBAACUhIco\\n\",\n       \"AACAALWk8zZs2GBmZjt37ky8VlVV0rpo1Wqz4itXq7Q0hz9yRSvzVnktaAqgiSksHUmZ97oMSS3N\\n\",\n       \"nDkzsazfx0SvhJ1H1ui8tPO3Zs2axPL+/ftz7Stroud2panlvBOi+/eJxx9/3MVvv/22i2/fvp1Y\\n\",\n       \"T1NzFy9edHFWunflypUuPnv2bOK1IiY4LpKfplu2bJmLr1275mL//GWlKdPe0ynpvKaPeCWdBwAA\\n\",\n       \"UBIeogAAAALwEAUAABBg0r1XKd7ChQvN7KOzrmuevBP5s8+X2ScqTVOq9zaxH5Sqqn+e369Dh8fr\\n\",\n       \"d1XX96bHY9aMIfvz588f89/9GQG0T47G/n0nr6lTp44Zaz+2dlNEGYKRkREX+/2glPYJ1e/Q7xur\\n\",\n       \"9G9Cd3d34rW8faLKLI2ydu1aF/t9erQMBv0bx9bEflDjRUsUAABAAB6iAAAAAtSSzhtNTXR6+s5H\\n\",\n       \"ky7uRdNl999/f41H8gG/BEPedF6ZFcv9tM4oHUaedQyhMyXofrMq67eTvKnIrGtRU3NZqTO9/+Xt\\n\",\n       \"ylBEtfGiU3haSmPfvn2Fbhvth5YoAACAADxEAQAABKglndeEET51uHLlSt2HgAbIO0Ft1kinquSt\\n\",\n       
\"Yu0rczRh3slJi04p6mirO3fupK7X9CrMKm8Xg6xrUe9rel4OHz4cfmA1yxp96U8ajomNligAAIAA\\n\",\n       \"PEQBAAAE4CEKAAAgQFT08M977jCK4tEhsf6s3O2qp6cnsXz58uUx1/OHCZfZ50W/106Z8TsvrQxf\\n\",\n       \"ZT+0djrnfvX8pvXXmzJlSmI5re9OE8659oEyS1bm3r9/v4tXrlyZWE+rhQ8NDZV0dMXz/2aUed61\\n\",\n       \"rEQR1dWzfP7zn3fxiy++WOq+xqvKc44PjZ73KIosjuMxTzotUQAAAAF4iAIAAAhQSzqv0h1OUE1I\\n\",\n       \"c1TpySefdPGuXbtcnJZaLUPR5/yzn/2si998883Ea01Lv5Xtvvs+/O89HUZf5XWulapXrVrl4qyq\\n\",\n       \"2np8v//7v594bcuWLS7evHlzEYdYiaJTS7NmzUos6+TfWoU+9G+Vbn/OnDkuXrhwYWI9vW807ffV\\n\",\n       \"lHSeptn1d1jVhO1VI50HAABQEh6iAAAAAtRSsRwf0MldL126VOORtJ81a9YklnUi0CpTeGX65S9/\\n\",\n       \"6eLJkyfXeCT109SBP3KvKvp7Xbx4sYv9kXWPP/64i3/zN39zzPeYmf3iF78o+hAbxR8Bmlbpu+iJ\\n\",\n       \"6L/yla8klvW3c+DAARe//vrrhe53Ishb4X4ioSUKAAAgAA9RAAAAAXiIAgAACECfqBrdunWr7kNo\\n\",\n       \"vHnz5rm4t7fXxadPn06sV3S/ilZNnz49saxDtUNwrXxIq1hXSfti6TE89thjifWef/55F+vw+u9/\\n\",\n       \"//uJ9d56662iD7FR/GHvrV7Dy5YtSyxrP1I9z2+//XZivWPHjrW0XyALLVEAAAABeIgCAAAIQDqv\\n\",\n       \"Ru+9917dh9B4fX19Y/5709J3vo0bNyaWN23aVNOR1E/PxdatW4O2MXfu3KIOJ5hOLDx16lQX+yUX\\n\",\n       \"RkZGXPzGG2+4+Ec/+lGJR9c8169fL3R7x48fT33twoULhe4LyIuWKAAAgAA8RAEAAASoJZ03Onli\\n\",\n       \"1ZMfo/3oCBx/RF6TaWXkiW7//v0uXr9+feK1nTt35tqGXgd1TXY6PDzs4h07drjYT+dpGuull14q\\n\",\n       \"9BhWrlzp4sOHDxe67Ynm/vvvTyzrJNdljoZdtGhRYrmd7mv4KFqiAAAAAvAQBQAAEICHKAAAgAC1\\n\",\n       \"9IkaHSqsQ4EnIq2yq/n58+fP13E4jaTXSF19YUJU2c+hv78/9bWBgYHKjiON9mfK2wfKN23aNBdf\\n\",\n       \"vny55WMKcebMGRfrZ3r44YcT682YMcPFN27cKPQYtMwCWrNq1arEsvbdK5pWW+f+3lloiQIAAAjA\\n\",\n       \"QxQAAECAWtJ5/tDSiUorlmsKAB+6cuVK3YcQZPLkyYnlModMa8pz0qTOnIRgwYIFLq4rnac0Tadp\\n\",\n       \"eTOz27dvV7Lfdqb3uyp/4/Pnz3dx3lkP/BRqSHX0rGrraG+0RAEAAATgIQoAACBALW3/Z8+erWO3\\n\",\n       \"jdPT0+PiTmmmxwfKTN/5rl27Vtm+6tLkz3jz5s3E8tDQkIv1Nx6ahpw3b56LT506FbSNpglJ4c2Z\\n\",\n       \"Myd1G3l/b1evXnVx3gngmVlj/Lq6ulzc6X/baIkCAAAIwEMUAABAAB6iAAAAAnTmeOg2sXTpUhcP\\n\",\n       
\"Dg66uF2H9aM8GzduTCxv3bq1piOpR5Nnuj948GBiWctbzJ0718WhfaL+03/6Ty7+8Y9/HLSNTlDE\\n\",\n       \"DBd5+0Gp6dOnJ5bzlkaYyLQfmZY0KrP8R11oiQIAAAjAQxQAAEAA0nk10qZlTeehOH51fG1mvnPn\\n\",\n       \"TtWHE2zq1KmFbs+vbN5Okzs3jZ9m0vRPyHldvnx5Yrm3tzfswCaQdevWudj/rWzfvn3c21u5cqWL\\n\",\n       \"Q8tK6KTZ+p36Fc878d7vl/3oZLREAQAABOAhCgAAIADpvBrt37+/7kPoeO02GkQn2h0eHnbx5s2b\\n\",\n       \"C92Pn2bq7u52sVZ1xviFTFCrvvrVryaWixiVpqIocnGV1bg1zakV6PXaM0umL48ePZq6vdWrV7v4\\n\",\n       \"0qVLLt67d28rh2lmZjNnznTx4cOHg7ah3TWKOCY0Ey1RAAAAAXiIAgAACMBDFAAAQIBa+kSN5uTL\\n\",\n       \"zsdr5WCNmzIjPLODw6f9oKq0YsUKF+/atauWY2jV/PnzE8s61F37FRXR5+u++z78788ihnMvWbLE\\n\",\n       \"xbNnz0689s///M+5tqGfX+93frX3Mu87PT09LvY/x7lz58Z8j/99XL9+fcz1/LIcfqX4VmWVSQDS\\n\",\n       \"0BIFAAAQgIcoAACAALWk86pKY2nT8kQetq2ToJ4/f77GI0FTtWsKT33ta19LLB85csTFRZeIKLra\\n\",\n       \"/fr16118/PjxxGv+chqdGNev1J+Hlj4wy3+fnjdv3pjv8e81mo7TMgZnzpxJrJdWlqTsqvpaXmTT\\n\",\n       \"pk3jfr+fvkwrdaHnyyxZ4iHvd43moCUKAAAgAA9RAAAAATqqYrmmrcySTdo3btxoefvLli1zcRHN\\n\",\n       \"rppuvHz5csvbS5M22iXUY4895mKtymtmtnv37kL3BeTlp6MGBgZcXGU6P6QiuKbzfvrTnwbtt6ur\\n\",\n       \"y8U6Oi/v7z+0m4XuK2syXa1YrhXGmyIkhaf8e2Eaf5Ri2qjFvDQ16n/XZf5dwQdoiQIAAAjAQxQA\\n\",\n       \"AEAAHqIAAAACtH2fqGnTprm4r68v8ZrOxL18+XIXv/POO4n10ioOax8oM7MpU6aM+/h02Ks/LFor\\n\",\n       \"DGufrSIqIKu8Fdr1fC1evDjx2qxZs8Z8bcuWLS0eXTM9/PDDLj527JiLr1y5UsfhBNN+gWlDx9uN\\n\",\n       \"XovqxRdfTCwfPny4isP5iDVr1rh43759ud6zc+dOF+/duzdov1p1XssBaLX2MmT1g1JNmSmiLEX0\\n\",\n       \"u82rv7/fxatWrXKx/zdGr6u0kgt10vIOdZUh0j6M/t+9PGiJAgAACMBDFAAAQICo6klwoyiKmXgX\\n\",\n       \"AAC0gyiKLI7jaKzXaIkCAAAIwEMUAABAgFpG561bty7x/6O04rhWcT179mxiPR2BoKPfPvaxjyXW\\n\",\n       \"e+WVV1wcMopMK3Obmc2ZM8fFWn1XRwiame3YscPFOkLIH3WnVYo11hELZsmJO7XKuT8qRkfhaNVj\\n\",\n       \"v5IzyqFp6rrOuV6XZp05IkoncNV7gz96Vn8rOjJRJ+r16QjVKqtq6288dKJdvQ/dunWr5e2l8btj\\n\",\n       \"pF3rWsncPyatrp53VJuOSDNLjsbetm1brm0UTY9paGgo8VqR5z3vOS+b3l8effRRFy9dujSx3vDw\\n\",\n       
\"sIt1dPPRo0fLOzgLmy0gS55t0BIFAAAQgIcoAACAADxEAQAABKilT9Ro36Ks/hva12HPnj2J9TS3\\n\",\n       \"rtWL/Tzxrl27WjrO/fv3J5YfeeSRMY/Pn4X7xIkTubav1WRPnz7tYj+/PGPGjDFf0383q7ZiLrJl\\n\",\n       \"9QcpU5V9oKqsNrx69ep77suvyH7+/Plx76foflBaRfy++5L/zar9JbU/qPYnGY/33nvPxQsXLnSx\\n\",\n       \"31enaFrlWSv6Z1X3z3uv0r5izz33XOI1rcZfZZ8o3a/2zy2671kT6f1F++ROnTo1sZ7+RqusRF5H\\n\",\n       \"+SRaogAAAALwEAUAABCglnTeaJOgP+Rf03HaTOpPqqipkYGBARfrhL5mZpcvX27pOP336/Hu3r17\\n\",\n       \"zH8PpSlBv2SCNovrsO3e3t7EemkTs6J6VaXv6lRlM/3BgwfvuY6WIDH7aJq9DjqRtd99QdN2oSm8\\n\",\n       \"NJrCKzu1fOrUqUK3pzQFqilKM7MjR46Utl+9B2ua1CyZNtZuGBONpvOy/lbq9eaXISnib2fdaIkC\\n\",\n       \"AAAIwEMUAABAgFrSeaPNsP5oFa3+qmkrP02nzfQ6UqLsKq4HDhxwcZnNkDpSxyx5nrTZVD+72UfT\\n\",\n       \"BUCrdEYAs+SsADojgE9H7mlqxJ99oEhaBdssmd7OqlJeJk1f+OnGrNFrRaoytVxE5XWlo74uXLiQ\\n\",\n       \"eM3v9lCkIo59+fLlLtaq3Z3i+vXrY8YTDS1RAAAAAXiIAgAACFBLOm80JfXuu+8m/l2bUDW156cU\\n\",\n       \"dLRJ1gSkRSszJaBF+VatWpV4TQsAagrg+PHjifX8gmeoj/9dlNncvWjRIhfnHS3k/6Y0VaJp4eef\\n\",\n       \"fz6x3j/90z/l2r6mkKocxac0fabnv8qitGfOnHGx/3uti14vOsKqiEKFy5Ytc7EWEw2lXTn8FFta\\n\",\n       \"oeUiPkcRKdBOTOHho2iJAgAACMBDFAAAQAAeogAAAALU0idqlD9hqJYQ0OH7WRVjdfi/X9m8nehk\\n\",\n       \"wn75BK0+rBNA+iUOdFg56uVXiS6zT1RI1WR/uLjSayxvHyhfmSVA/Im3R/mVs3W4vQ6HX7BgQWK9\\n\",\n       \"vBOG56WlFrRfUNY5r1LRVbb1PlT0JLx6HfmlbrRPlH7XE2G2ADQHLVEAAAABeIgCAAAIUGs6z6dD\\n\",\n       \"j3fs2JG6XloKr52rpmoKRSdVNksOldVz5Jd0SEtzoHqtTn49EWmara+vL/GalkPJW+lbU0v6fZT9\\n\",\n       \"3eh+y5ycN4um9ssuMaHdK/KWcdAZKbK+D72n+ff3kZERF+vkxCdPnsx1DE0RUp4h7/lD+WiJAgAA\\n\",\n       \"CMBDFAAAQIBa0nmjTZF+M3Pe0XVp62ll73ajlY39UTxpzbX+en6lc9THnwxaRxmFjGDyJ1vViuNF\\n\",\n       \"j7bKsmTJEhcXkTbRkWzDw8Njxu3GH3VcBx2t1kSafsv6O6ApuzfffDOxnk7wu27dOhe3WzpP09g6\\n\",\n       \"EjvL5z73ORfrKEX/d6Pb0/uEf8/Q0cQTbXRjqyP8aYkCAAAIwEMUAABAAB6iAAAAAtSSOB8dpt/O\\n\",\n       \"FcaLFjJM9dy5c4nliZbLbjItWVEEv2q/v1yVkP4m2ufD77OhJTvQmnnz5rlY+3lUKe/Qey3bkvV3\\n\",\n       
\"IOtzPP744y7Wkg7//u//fs/jrIL295s7d66L/f5IIfeKRx55xMXLli1zsfYhM0vOAvLOO++42O+H\\n\",\n       \"pn87mvB3RK9ls+R1oNeV9tE0S57LrPIic+bMcbF+T4ODg+M+VlqiAAAAAvAQBQAAEKCWdN7ocG+/\\n\",\n       \"qZb03vj4za7tPCwc7ae3t9fFWqLD14TrciIM4V6xYoWLNdWf9d0UTScj9st8aKol73eQNbmxLuf9\\n\",\n       \"jM8884yLN23alOs9ofQzahkCP50X0pVDt6Ep8d27dyfWO3jwoIv37t077v1USc+RP9m0lvPRyvX6\\n\",\n       \"+cZDy5BoKjgELVEAAAABeIgCAAAIUGtZW9J3xWrnCZhRjvXr17t4z549Li6iqnaVaaJWdWIKz5+k\\n\",\n       \"WUdmaYX8Kumo0ax0Xl56T/Mn5926dauLt2/fnmt7O3fuHPcxhNLuKkXPKqCfXdOBhw4dKnQ/RZs5\\n\",\n       \"c2ZiWa9TrbLvjx7MW8k9L53dpNWZTmiJAgAACMBDFAAAQAAeogAAAAI0e6pvjEvTZ25H9arsA1I3\\n\",\n       \"LblgluwHpUOk81q0aFFiueh+La3y+xhdvHjRxXVVtNfh9lnV6HVYud//Rc2ePdvFfhVrLeMwMDCQ\\n\",\n       \"6/jOnz+fa70iRFHkYv1uirBjxw4X+33Fmsz/rrVvZrv26aUlCgAAIAAPUQAAAAHI/zSQVlc2yz88\\n\",\n       \"uxOHcQN5+de/Tk4aks5rWvrO1+rQ7DplpfCUpv38yXW1mnldEy5nKfN+3IQUnp7zGTNmJF7TdLKW\\n\",\n       \"MiqitErTNO/KAwAAaAM8RAEAAASYMOm8rq4uF2eNGsmiE3xqc3RW5WZtcs7blBnaDNzOzftAXjrq\\n\",\n       \"SfkzIGglZ00L+evVNZKt6dLOc5V0FJpfoV1HI4dUQ8f46WhJjX066Xinfze0RAEAAATgIQoAACAA\\n\",\n       \"D1EAAAABJkyfqNB+UEqrzmp/iyw6DLTs4Z1nz54tdftob1OnTnVxu1YHNksf3u3PEH/y5EkX62/P\\n\",\n       \"71uzbNkyF+/bt6+IQ6yFlnRYuXKli/2+ktu3b8+1PT3PftmVIuUt6eL3rQnpA5r1OSgRc29aKkT7\\n\",\n       \"Fvq/yU7vB6VoiQIAAAjAQxQAAECACZPOK4KWNXj//fdT19OhwVU2EdMcjSwhKTwtDWJWTFq8LP71\\n\",\n       \"n5Y+HxwcTCxrimfOnDkuvnLlSmI9Tc038TxoqmXt2rUu9qtJa9mVbdu25dp2mfeWvNv203ch5WOm\\n\",\n       \"TJni4rxV04s2bdq0xHK7ltigpM4HaIkCAAAIwEMUAABAANJ543Dz5s1c62kKRJuqO3HyxSr19PS4\\n\",\n       \"OO/oSIyfpreamLZavHjxmP/uVyLPe73oRMNZafqm0/Tjnj17XOyPRhwaGqrsmFql6Tf/+9WK2XlH\\n\",\n       \"JteVwlNLly5NLO/fv7+mI0ERaIkCAAAIwEMUAABAAB6iAAAAAkRp1X9L22EUVbvDGugQ1rqGr+r3\\n\",\n       \"2oTZ2CeCJp7zkGHgOpTf74dStOnTp7v44x//uIu1T4+Z2fnz58d8f9Y5176JWX275s6de8/9tDOt\\n\",\n       \"VG/WerV6/29G2rXuV5BnSPwH/D5qfsmNseQ95yjW6HmPosjiOB7zpNMSBQAAEICHKAAAgACk80rQ\\n\",\n       
\"hKH4TUwtdTrO+b2tWLEisXz06NGWthdyzhctWpRY1pIOWu6AGQA+pKlRPx3Yrte6pq3N8qeu9Rpu\\n\",\n       \"9frNi3RePUjnAQAAlISHKAAAgABULC+BTvhJZW1kWbBgQWJ59erVLt67d6+LO2XUWBHpD39C3fHS\\n\",\n       \"lB3SacqoiZXrVUgXilmzZiWWR0ZGcr2vu7s7/4Gh49ESBQAAEICHKAAAgAA8RAEAAASgT1SG0Eq/\\n\",\n       \"VZeNQPsaHh7OXMZHzZ8/P9e/37x508Uh1bL9itu6vVarfjfRwoULE8tDQ0Mtbc8vJVFmX7Qq+54e\\n\",\n       \"O3Zs3O+ZN2+ei8+dO1fk4RRO+2m+//77idc6pW9mkWiJAgAACMBDFAAAQADSeRlCm+zzTChZBobe\\n\",\n       \"IkvIxNg6Oa9ZWHO+bmPZsmWp62nqYOnSpYnXfvWrX7n4+PHjY77fH4avaQkdrn/x4sV7HPEH/IrW\\n\",\n       \"Wqlay09kva/sCZyL1Gr6zqeV4Jso73Xgu3Llyrjf09vb6+ImpvOmTJniYv29+l1adL26/s41TfBD\\n\",\n       \"VBRFR83skpndNrNbcRw/EUXRXDP7/8xsuZkdNbP/HMfxhQKOEwAAoFFaSefFZvZsHMcfj+P4ibv/\\n\",\n       \"9i0z+3kcx2vM7D/uLgMAAHSc4AmIoyg6YmYb4zg+J/+218w+E8fxUBRFfWb2chzH67z3MXStJA89\\n\",\n       \"9JCLd+3a5WImq6xG0RMQ+03pKm+qWZvfdXRZFh3ldvbs2dT1NH28fv36xGuHDh3KtY1WMelz9fJO\\n\",\n       \"hhs6urnpNE1c1Wjasicg1tSrVn/XiafNzO6//34Xa4X3q1evFno8TVH2BMSxmf17FEVboyj6n+/+\\n\",\n       \"28I4jkcT60NmtnDstwIAALS3VjqWfzqO49NRFPWa2c/vtkI5cRzHtDoBAIBOFfwQFcfx6bv/fyaK\\n\",\n       \"oh+Z2RNmNhRFUV8cx4NRFC0yMyoHAgCAtvPCCy/cc52gPlFRFE03s/vjOL4cRVG3mb1kZt82s+fN\\n\",\n       \"7Fwcx38dRdG3zGx2HMff8t5L61RJPvaxj7l4+/btLqavyIf0HJ08edLFfk5f+w/pb8T/vWj/Aa2K\\n\",\n       \"rRWKzcw++clPjrkvv5/IG2+8kf0BYH19fS7WKthc59UI7Z8T0j+vCfz+fnrsR44ccfGtW7dKO4as\\n\",\n       \"cz5nzhwX+5X5b9++XdoxFa2J10eePlGhLVELzexHd7/ISWb2/8Rx/FIURVvN7B+jKPoju1viIHD7\\n\",\n       \"AAAAjRb0EBXH8REz2zDGv5+3D1qjAAAAOhoVyzvI4cOH6z6ExtGmbjOzJ5980sWaVjt69GhiPa3G\\n\",\n       \"q5W0/ebyxYsXj7lff/Lan/3sZ/kOuGGeeuqpxPJrr71W05F8iErJ7akJKZq8VftXr17tYr9q/7Zt\\n\",\n       \"21ysZRz8Cu3Xrl0LPs57WblypYv1M4VWYW+Cdqrur5g7DwAAIAAPUQAAAAFI53WQy5cvj/s9mu7S\\n\",\n       \"ZuFTp04FHcP06dNdXGZzdl7+6Lf9+/e7WEfT6Eg9M7MLFz6c8rG/v9/F/mgXfxTeKD89mGb27Nmp\\n\",\n       \"+y2CjuIJGYlbdvquadcLWqMVrUNHhuk1q/ekIq6PvBNv6+/SH7mrxxE640ertOtGu0547dNuE+2E\\n\",\n       
\"ligAAIAAPEQBAAAE4CEKAAAgQK19onp7exPLmq++cuVK1YdjZslceFb/FB3arnl7s+QM9lXy+9eM\\n\",\n       \"mjFjhov986p9erQStF+lV3P/mnf3h/9q1VntV/X2228n1iu6708aLWlgZrZq1SoXa7+lhx9+OLFe\\n\",\n       \"d3e3i1ddgqajAAAgAElEQVSsWOHi48ePJ9abP3/+mPv1Z7DX/lfah6GI86DlFHTbWdv3z4vO1v7K\\n\",\n       \"K6+k7mvdunUu1s/oz2av/WF06Lff7yGkXIFeY6jfsmXLXKzX4qRJyT8vBw4ccLHfz0jpfUz7R2X1\\n\",\n       \"iXrooYdcvHv37nsc8Qf8/oz629H7pN+vUu+nIf1Qi1ZmPyi9D5olf8tV3cP943jggQdcrH3wzJLX\\n\",\n       \"1cGDB8s/MKMlCgAAIAgPUQAAAAGCJiBuaYdRFNc1LBQAAGA8siYgpiUKAAAgAA9RAAAAAWoZnacj\\n\",\n       \"LlCcJUuWuPjEiRMuLuJ86wi1vNW41d/8zd8kll988cUx46ItXLgwsTw0NNTS9rIqjGuauuxrXCvN\\n\",\n       \"62fcu3dvrvf39PQklkNGGRVRnbpVVZ5z/e51tGXWSLNOkTUiTUd26jXhT8Kt6+koTf83tWjRIhfr\\n\",\n       \"aLDTp08n1tN96UjRBQsWJNZbs2aNi/W78ids18nFdb/6XZslz4Xyu6no+3TkuZ4H/5jOnj3r4o9/\\n\",\n       \"/OMu/pd/+ZfEe/gbWo08XY9oiQIAAAjAQxQAAEAAHqIAAAAC1FqxHMV65JFHxv0erbidVaU3pB+U\\n\",\n       \"+sY3vpFY/spXvpLrfdOnT3dxyCzufh8ora6s/RT86rt+P4i09bRvktI+FVnby8uvDK/9KPbv3z/u\\n\",\n       \"7fmzBYT0iaqrH1RdtNr9u+++W+ORVC9rBgm9Dm7evOli/X2ZJfvxaNV5v4q93nv09zUwMJBYT/sP\\n\",\n       \"Ke2/aZa8x2nla7+/S9r17P829HMcOXIkdXvaT0v7W/lVwPXz62dM63uFZqElCgAAIAAPUQAAAAFI\\n\",\n       \"53WQRx99dNzvSWsSL9uPf/zjXOtpCm/jxo0u3r59e2I9nQQ6KzXlTyA8yh92vHz5chcfO3YsdXsj\\n\",\n       \"IyNj/nur6TufP5nrjRs3WtqeP7wbH+VPdKwpWk1bTXSantK0lU/TXVqWQycPNjM7f/68izW156f9\\n\",\n       \"0pw5cyaxrL9fnUDbX08njtf7Sda9QdOc/r1Aj13vs/49V/elv3M9D0XbsGFDYtm/nyI/WqIAAAAC\\n\",\n       \"8BAFAAAQgHReB9Fm4Xbyx3/8xy7+27/929T1tm7dmvqapvA0DZM37eKnxzQFELK9Iugoo7ypDBTH\\n\",\n       \"H63lj8zsdJrGuu++9P/eTqvk7o/OS6sw/sADDyTW01Fpmn6fN29eYj39Xeq+7ty5k1hv8+bNLtZ0\\n\",\n       \"mX+/9I93lD/qTo9Jj10ro5uZvfHGG2Nuz6f71c+k5yiLfjf+Z89L9+WPzC5S0TNIFOHTn/60i0Pu\\n\",\n       \"s7REAQAABOAhCgAAIAAPUQAAAAHoE9VB/FnTi6T9I1odXu/77ne/6+LvfOc7ide0X8qf/dmf5dqe\\n\",\n       \"9lvyyz6888474z6+kH5QoRXLtbqy5udPnjw57mMogpaVMDPbu3evi7OqWCsdEt7T05N4befOnbm2\\n\",\n       
\"of1pqqqU7u+nzL4iTZT3d679eBYsWOBiv9yB/ga0f5lfakD7Kmk/KL/Mh1YzP336dOrxaRXwTZs2\\n\",\n       \"ubi/vz+xnv729Nj9PlE6W0BIH6Yseq/xS2yk0X5Qs2bNSrym17D+XusqaVBlH6i8f7NeffXVlvZD\\n\",\n       \"SxQAAEAAHqIAAAACkM7rIGlppyJSIUWn8JSmSb75zW+mrrdkyRIX501vhaTvipA3fec3v2tKIOsz\\n\",\n       \"asojbWh2Ed56663Ect4h1EuXLnWxVp3Pqv6epQmTHYcOH59I+vr6XOxPBKzfoZYk8WcY0N+O/h78\\n\",\n       \"iuC6rPe+RYsWJdbTdF7eibs1lbZ48eLEazpMX++Lp06dSqyn6SRNc/r3hrRZDzR1ntcXvvCFxLKW\\n\",\n       \"P/jhD3847u3ltXbt2sSyfqf+eSmTTlj/yCOPuPjXf/3XE+v95V/+ZWH7pCUKAAAgAA9RAAAAAUjn\\n\",\n       \"dRBtqlbajO6PcKmqErY/ciVkpFNdI9SKoOkBrcjsp0nzfsYyU3gqNIV14sSJXOvNnTvXxVEUufjc\\n\",\n       \"uXNB+y1T2uTV+JDea/yK4JrS0u93cHAwdRtaEdy/Z+iIPL2P+SNjNcWj6S3/+HSUoKYHe3t7E+vp\\n\",\n       \"iEG9r128eDGxnr6m17lPR1WHprtHbdmyJbGsKa0y7du3r5L93It2HXj99dddXOb9hJYoAACAADxE\\n\",\n       \"AQAABOAhCgAAIAB9ojrI6tWr77lOEX2gNmzY4OK8lW8nWrVnrX5slszJnz9/vuXtVzXretG0f4pZ\\n\",\n       \"sv+L36cE7UEr0mvfP/83kFa53v/etT+dVhXXPlVmZkePHnWx9oXJGlKvZVL8a9GvnD5K+0CZJT+j\\n\",\n       \"9r/SCupmyf6n+hv1j0/7ArbK71PVah+rdqPfjfZrO3jwYGn7pCUKAAAgAA9RAAAAAUjndZCsYbRF\\n\",\n       \"0hSeDtc3yz8pbV00jaDDnUNTSX5KYNTZs2eDtpdXO6XwlKZd0Bk0HaVpZv9eoBW4NTXnVzbXa0Qn\\n\",\n       \"KvZLEoRM+KvVwjUdmMW/ZrUkgX52//PqxMV63/FL0aTde9LuLUgXMll8q2iJAgAACMBDFAAAQADS\\n\",\n       \"eR0kbSJLTbllpdu0mfrSpUu59ll2+q7oiXZ1YkxN54XSyZ2LUNWoO39SVU2bkHLDeOiIX01b+akV\\n\",\n       \"HSGl6W6/G4Leh/S6PHDgQGK94eHhcR/r0NBQrvX0d+2/R1/T0WD+qFsdkaufUd9jln4fIp3XHmiJ\\n\",\n       \"AgAACMBDFAAAQAAeogAAAALQJ6qDpM1UnbffUt5+UFUqosJ6mjt37rjYHz6dt/+V9rFSfnXlvP03\\n\",\n       \"tKKyVjYuup9SVlVnYDz0N6p9mHp7e1Pfo1XJ/SH/Wum7q6srdb0yaT9S/7enpRq0/5beT8ySfcIO\\n\",\n       \"HTrkYv18Wfsq4vNqv6pO7Ouo59+snr9htEQBAAAE4CEKAAAgAOm8DuI3E7cjvwpxVZW5iyifoEKG\\n\",\n       \"X5slh353YvM7Oo+m8zRN3N3dnVhPh/brsH6/hIBW+tY0u39vKDO9p2lEjc2S96SsmQ708+p58T+H\\n\",\n       \"TtSs93AtFxHqueeec/Ebb7zh4tD7U9P45TFI5wEAALQJHqIAAAACtH/+B07RKak6ZKXvQiqqtxsd\\n\",\n       
\"3dQEOsGqWTLVApgl03k6EtgfrabpKU116aTAZsmK4Jp2qjK9rfv1ZyXQVKSOzvUrkff397tYz8WZ\\n\",\n       \"M2cS6+nn199XERXLf/rTn7a8jTw0JWmWvG/4n7dIeSeRLhMtUQAAAAF4iAIAAAjAQxQAAEAA+kSh\\n\",\n       \"bRTRD0qHF5dZPsGfmV2Xy6zCXjS/n8eNGzfGvQ397JMnT255e2gWLVGgfYn6+voS62m5Ah3K7/8O\\n\",\n       \"/X54o/xrMW8f0Dlz5rh4ZGQk13uUX0pBj+P8+fOp79N+QkuXLnWxX+JAfwNaFsJfr8m0NMtEQ0sU\\n\",\n       \"AABAAB6iAAAAApDO6yD+JLrjpUOQ0ybWbXfadL5x40YXb926tdD9+MO7/eUm0zTM4OBgy9vTz076\\n\",\n       \"rvPcvn3bxfr9Hj9+PLGeprT0XqUT9ZolyyRomijvjAz+pLSaFtOJwf10oC5ryj20G4GmDlesWOFi\\n\",\n       \"v3SBlgDwz0Wn8UshaPry2LFjVR9OIWiJAgAACMBDFAAAQADSeW3Mr6TrN5WOV6em8JRWBNYUno7g\\n\",\n       \"MQsbxZNXlZXXQ1K0mk4B7iVttOnAwEBiWUey6XXpT/CrFbxDRn3529NlTS2njQI0S/4GQqv066hF\\n\",\n       \"raw9b968xHp6/rQqu6Ye25mmcf2RmO2awlO0RAEAAATgIQoAACAAD1EAAAAB6BPVxnRosVl29Vxk\\n\",\n       \"K7MPlK/sflAqpH8TfaIwHv59aJRfzkKXtQr4jBkzEutphXsta5C30r/2qfK3oXFvb29ivX379rk4\\n\",\n       \"tB9UGu0f5pd+SLsf5P0dzp4928UXLlwIOLriaT80/Rxl32e1v9m5c+dK3dcoWqIAAAAC8BAFAAAQ\\n\",\n       \"gHReB8k7ISeKo03pTVR0WgLwhdx3NOWWN8XjVyzX9J5OCuxXBNfXNPXob6/o+6cen6bZ8v4mtdxB\\n\",\n       \"lqak8JSWj6iyq0RVKTxFSxQAAEAAHqIAAAACkM7rIH4F83akTe9mnT8h50TjV5PWSaC1KX7v3r2V\\n\",\n       \"HVPT+NXzNa0zkSdw9q8dndRXz4uf0tF7iG7Dr1iu956iU3shafUmpuny8iuTdzJaogAAAALwEAUA\\n\",\n       \"ABCAhygAAIAA9InqIJ3QX6Ld+kC1c7+FOnR3dyeWDx065OLBwcGqD6eR/KH3Tf9d5x2Kn5f2TdL7\\n\",\n       \"gV/ZXM+TVsXOOp7Jkye7WKummyVLIzShXIxWbkd5tF9myN8fviUAAIAAPEQBAAAEIJ3Xxvwhuv4Q\\n\",\n       \"YBRj2bJlLj579mzitaJTGc8884yLd+3a5eKyK/FqakRTbhcvXix0P0ySfW9nzpyp+xBqlZZS8Sfk\\n\",\n       \"1arnmvryS0TopLR6nWdV0tZyMWkTLJetnbo26Dk2Sx775cuXqz6ccdm6dWtL76clCgAAIAAPUQAA\\n\",\n       \"AAFI57Uxvwru8ePHazqSzqbn9fd+7/cSr506darQfW3atKnQ7eWlk6VOnTrVxUWn84BQ/mg6XdaR\\n\",\n       \"dZp+NzPr7+938bRp01zsj6w9efLkmPu9dOlSYlnTe3fu3LnXYQfTdGUonSBd06H6ey+C391gyZIl\\n\",\n       \"Lv7Upz7l4pdeeqnQ/TYBLVEAAAABeIgCAAAIwEMUAABAAPpEdRC/ou+oJgzXzWvmzJmJZb8/Qqt0\\n\",\n       
\"KK729+nr60usd/r0aRfrOfvBD36QWG/+/PmFHl8T1FWtWb/7rO99wYIFVRwO2oiWGvGH1OvvXKu/\\n\",\n       \"L1y4MLGeli/RPj5+5fAi+iq1Sss4ZJVqqGtGBe1fltbXrFPQEgUAABCAhygAAIAApPM6iA7lVYsX\\n\",\n       \"L3axpqnM8g911Yk7y2zOLjp950ur/B3a5OxXMM9D0wNFD5Hu6elJLF+/ft3Feb+3sr+DVver1zOq\\n\",\n       \"0U6T4Q4MDCSW9d6ln8OvgK4lEzQ92MQuEFkpPIyPztDgl9HIo31+GQAAAA3CQxQAAEAA0nkdRJsl\\n\",\n       \"1YkTJ8a9rVmzZiWWy6xcvWbNGhfv37+/tP00RdEpPK1KXPRoHE2FmDVjZFJdI44mMj9N3E6OHj1a\\n\",\n       \"9yE0gt4ndJSiWX0jcpsgJIWnaIkCAAAIwEMUAABAAB6iAAAAAtAnqkKTJiVPt5YX0Grjfn46bYjt\\n\",\n       \"hg0bEstFDv0usw+Ur8x+UH7Zh8HBQRfXNXRZZzjPW1ohq29SmX2EmtAHykefqOo1cZg/xoffTTlo\\n\",\n       \"iQIAAAjAQxQAAECAWtJ5n/3sZ83M7Omnn078e29vr4s1nbRo0aLEelEUuViHi8dxnFhPJ97V9fxU\\n\",\n       \"Vdq+jhw5klhPq8ROnz7dxf6kuX4l3FE6+a1/THPnznXx1KlTx3y/WTKt4084/Nxzz6W+ryw6bNYs\\n\",\n       \"+Tn0WJctW5ZYTycJ3bp1q4unTZuWWE/LH2zbti31OPQ70NSDX7147dq1LtZ06qFDh1K3XYTnn3/e\\n\",\n       \"xZrOO3z4cGK97du3u1greBedVlu3bl1iWUtavP7666nve+qpp1z82muvjXu//kTPeo1oevrdd99N\\n\",\n       \"rKfD1OuqqD6RaeV71O+Tn/yki/U3tHnz5joOx774xS8mlleuXOniv/u7v0t932/91m+5WP/e6n2w\\n\",\n       \"bPq3yf97kQctUQAAAAF4iAIAAAgQ+Smw0ncYRXHV+wQAAAgRRZHFcRyN9RotUQAAAAF4iAIAAAjA\\n\",\n       \"QxQAAECAWkocaIkClEP7nWlpBb/y9fDw8JjvyUuHxpuZzZkzx8XLly938YkTJxLr+UP7R/llG3T7\\n\",\n       \"Otu2X1ZCl/VzaFkKM7NTp06NuV+fllrQbfvD67WMhpZtaPo13tXVlbqsn8m/JvTza4mOUFoqRIfR\\n\",\n       \"+/tNuzb135t+zptCz1PWbz5tPf89nPfy+edc7+nnz5+v+nAmjDx/E2mJAgAACMBDFAAAQAAmIJ4A\\n\",\n       \"NFXjVwRvtdyE//758+e7WFM1aek7n1/tPa36+40bNxLLU6ZMcbE2b4c2desk0P6E0Kqnpydo+3Xz\\n\",\n       \"z5+/XJVr166Vtm2tRKzfkz9jQd5JoNP4VfvzTvT6mc98xsVvvfWWizUtbBY2YXUWTenfvHnTxfob\\n\",\n       \"8l9Ds/jdMlAfWqIAAAAC8BAFAAAQoJZ03qpVq8zM7Pjx44l/b3WSVR0ZZpacyPf06dMtbbud6YS8\\n\",\n       \"fpN9q3QSX7PqJof1U2yattNRfGXzUy9oDr2/6HXvj+xsVd70nU7ObWb2yiuv5HpfWgrv0UcfTSy/\\n\",\n       \"8847ubaXlqYjfdc+ND2t96Ay0+MYGy1RAAAAAXiIAgAACFBLOm/SpA92qwXDzMwGBwfHva1169a5\\n\",\n       
\"+MEHH0y8pim8iZzO0yZeP/3WKj81MjQ05OJDhw4Vui+1cePGxPLKlStdrGmNvXv3lnYMaB9aQPTc\\n\",\n       \"uXO53qOj+8ySKcGDBw+O+xiyRoqOdnEwy/+7qTJtjWbRa1FTe6TzqkdLFAAAQAAeogAAAALwEAUA\\n\",\n       \"ABCglj5Ro0Pu/Uq/2p8mbyVt7fPiDzUO6WPVznp7e8f8dy1xUPRklf73pH2udL9F0z5QZmZr1qxx\\n\",\n       \"cd6h3loCQye/HQ//Gh6lfXDMyj0XuLesqvNpdHJus/R7kv9daz/NXbt25dpXSP/BvO/xS7/4k3Kj\\n\",\n       \"/ehsEHpvue++ZLvIRO4LXBVaogAAAALwEAUAABCglnTeaFOkPzx+/fr1LtYmypdffjmxXlpzdDun\\n\",\n       \"TLq7u13sV27PW0n4zJkzY/57FEXhB3YPmoIte1/9/f0u7uvrS7ymw9bzljUo4ljTtuGXYNi9e7eL\\n\",\n       \"i6hyrhOQtlrpH2PLm+Jdvnx5YrnM30BeK1ascLFOYGxmtnnz5nFvT9NHqJ+mkPVe4H9P2t0gb2V9\\n\",\n       \"jA8tUQAAAAF4iAIAAAhQSzpvtKrqnTt3Ev8+f/58Fy9atMjFGzZsSKynE3fqNvwRW6OV0c2aMUpB\\n\",\n       \"P5NZsmL7rFmzXHzlypXEepqmO3Xq1Lj3q+kebfr1XytC3lGVeemkrZ/+9Kdd7I+I8lO+eYSM2PKl\\n\",\n       \"pXx05F8ZSOGVQ1PG/ujetO4Chw8fLvWY0vhpxGPHjrlY0+xHjx4N2r5WxaYSdrPotagjoi9evJhY\\n\",\n       \"z/9bguLREgUAABCAhygAAIAAPEQBAAAEqKVP1PHjx83so0P3d+7c6eIZM2a4uKurK7Ge35dq1Ouv\\n\",\n       \"v17UIRZG+yAtXrw48ZoOOdVctt9X5+zZsy0dgw579auaa9+iffv2tbSfLNo/zSyZx8+ybNkyF2s1\\n\",\n       \"3jfffDOxXt7K0Gn8Pkzat0v7H/nXXlq/Kr+adBFlDVA+7Zs4MDDQ8vYee+wxF2/btq3l7SntA+Ur\\n\",\n       \"or+flkYYvWejGfT+pPdFv8+r9mvLe8/F+NASBQAAEICHKAAAgAC1pPPyVODWoZntPExTU0FFN+fn\\n\",\n       \"pcPw/QmI/QkryxLalKwpCx2q7ZdSSCutMG3atMRyWpojdALiNCdPnix0eyjOE088kVg+ceKEi7Wy\\n\",\n       \"fKg//MM/dHHeSdC/853vuPib3/xmy8eQl6Z7/JT22rVrXVxEahPF0Vk7tKuA322g6PsaPoqWKAAA\\n\",\n       \"gAA8RAEAAASIiq4wfc8dRlG1O5yg9HttwoSoRfM/k47+q6uad6ef8yYKOef+JK2a4q3yfvhHf/RH\\n\",\n       \"Ln744Ydd/I1vfCPX+3WSYbPwyuTj5Z8jrvXy+ed84cKFLh4eHq76cCaM0fMeRZHFcTzmhU5LFAAA\\n\",\n       \"QAAeogAAAALwEAUAABCAPlEdqmn9c3p6ehLLOrT63Llz496eVlo3S37GuirzNu2cTwRNOOd+H6tr\\n\",\n       \"166NuZ5e82b5Sr00EX2iquefc62sf+nSpaoPZ8KgTxQAAEBJeIgCAAAIUEvF8lbNmzfPxTqJ7+3b\\n\",\n       \"t+s4HKR46qmnXLxgwYLEazqs/MUXXxz3tvmu0RR++k5nAdAJq/2UdkgaGzCrthQHstESBQAAEICH\\n\",\n       
\"KAAAgABtmc7rxGZwrT585syZxGtXr17NtY3u7u4iD6llmnLr7+9PvKZpjtWrV7v44MGDLe83LZ2C\\n\",\n       \"iaWrq8vFN27cSF1PU81FVH9Ou+Y68b6FepDOaw5aogAAAALwEAUAABCAhygAAIAAVCxvoOXLlyeW\\n\",\n       \"tTr32bNnXexXqp08ebKLtRpyEyoKa78nM7NPfOITY663bdu2xPKhQ4dKO6YQ2s/GLHmetS9ME875\\n\",\n       \"RJBVsby3t9fFfj9DhKNiefX8cz5t2jQXX79+verDmTCoWA4AAFASHqIAAAAC1FLiYPHixWZmdurU\\n\",\n       \"qTp233jHjh0Let+tW7cKPpLiHD9+PLH8yCOPuFgncJ0zZ05iPW221irndckaKo9mKbq8BaUz0BSU\\n\",\n       \"OGgOWqIAAAAC8BAFAAAQoJZ0no5owsSgKTszs0mTPrz0hoaGXOyPxmtCCg/t6cqVK4Vur9NTeKPd\\n\",\n       \"LEbR3aK56FbQHJktUVEUfS+KoqEoit6Vf5sbRdHPoyjaH0XRS1EUzZbX/jyKogNRFO2Nouh/KPPA\\n\",\n       \"AQAA6nSvdN4/mNkXvH/7lpn9PI7jNWb2H3eXLYqih8zst83sobvv+a9RFJEuBAAAHSnzISeO4/9u\\n\",\n       \"ZiPeP/+mmX3/bvx9M/vK3fjLZvbDOI5vxXF81MwOmtkTxR0qAABAc4T0iVoYx/FoJ5YhM1t4N15s\\n\",\n       \"ZltkvZNm1j/WBm7fvh2wW7SzCxcuJJZPnDjh4pGRkTFjoBXab0Sr/vtlNHQWgE4xa9YsF1+8eDHX\\n\",\n       \"e+gDBYxfS+m2+INiFVkFKyhmAQAAOlJIS9RQFEV9cRwPRlG0yMyG7/77gJktlfWW3P23j2DEFQAA\\n\",\n       \"aLIXXnjhnuvccwLiKIpWmNm/xnH8yN3l/8vMzsVx/NdRFH3LzGbHcfytux3L/5t90A+q38z+3cxW\\n\",\n       \"x94OoiiK58+fb2ad2YzeFFkTszbBmjVrXDx16lQX79mzJ7Fek6uw+5p+zjtR3nOur1HtuTVNmYBY\\n\",\n       \"U7R6DO+//34dh1OqppzziSbPBMSZLVFRFP3QzD5jZvOjKDphZv+Hmf2fZvaPURT9kZkdNbP/fHdn\\n\",\n       \"u6Mo+kcz221m75vZf/EfoAAAADpF5kNUHMdfT3np+ZT1/8rM/qrVgwIAAGi6WiqWV5XG0yrZ2vx5\\n\",\n       \"9erVSvaPdJrC6+npcfGDDz6YWO/w4cMuvn79evkHdlfTJj7GvS1fvjyxrBN50yjeebSC/MyZM12c\\n\",\n       \"dzQiUASKYQIAAATgIQoAACAAD1EAAAAB7lnioPAdRhGdE8YwadKkMWOz9L5Aq1atSiyPlo4wM9uy\\n\",\n       \"5cPi8U0cDqt9GPr6+lys/djMkpXOr1y5Mua/m7U+rHnp0qWJZT2Offv25doGJQ6qxzlvjV7n165d\\n\",\n       \"y/UehttXzz/n9913X+prKE6eEge0RAEAAATgIQoAACBALem80XRVXZVl/ZSRVr69fPlyZcexaNEi\\n\",\n       \"F/f3fzhXc3d3d2K9/fv3u3hwcNDF/nenZQN0WH7Tm9vnzZvn4t7e3sRrXV1dLtZJi8+fP9/yfjX9\\n\",\n       \"qcOlQ7dPaql6nPPq+fcd/c0yC0VxpkyZ4mKdTNuMa70qpPMAAABKwkMUAABAgFrSeU8//bSZmZ06\\n\",\n       
\"dSrxmo4O0aqzN2/eTKynaavZs2e7eGBgoNBjbWftlObQ0Yh+ijdk4lhN186ZMyfxml47IyMjLi4i\\n\",\n       \"PdhO57xT6DnXKvNm1Va4n0j836HOMnDw4MGqD2dCYERkPUjnAQAAlISHKAAAgAA8RAEAAASYdO9V\\n\",\n       \"ijdaoVqH+Jsl+8a8/fbbLt67d29ivatXr44Zoz1l5fe1L4D2f/Mrlk+ePNnFWVWY6TfXuZYvX55Y\\n\",\n       \"zltpfsaMGS7WqvjIx+/bCkwktEQBAAAE4CEKAAAgQC3pvMOHD5tZMmVnxkSKE8nDDz/s4tu3b7t4\\n\",\n       \"aGgosZ5OtKkVxvU9ZsnSCFRNnphCU/vtmsLTWQ7M6ktV5524GOhEtEQBAAAE4CEKAAAgQC0Vyyvd\\n\",\n       \"4QSVt3q2Tr7sp8jKNGvWLBdrdfosS5cudfG5c+cSrzUhpUDF8upN5HOuowrNqktLUj27epzzelCx\\n\",\n       \"HAAAoCQ8RAEAAATgIQoAACBALSUOUJ/RavGj5s2b52ItDeCXGiha3n5Q6vz58y5uQh8oM7Np06bV\\n\",\n       \"fQhoGP1NPfjggy7esmVLy9vW661dSzMAnYSWKAAAgAA8RAEAAASoJZ032iT93nvvBb1fJ5u9detW\\n\",\n       \"rvfoUH6tgu2bOXOmi/0h//o+3Z4//FRTVXmPL685c+a4WNMGYx3HKJ3oWSuFm5n19vaO+f6RkZHE\\n\",\n       \"epreO3HihIv9UgNl0nNetqlTp7rYH0quFixYUMXhoI3obyIk5eYPX9ffZeg9M4TeX6r8nQPthJYo\\n\",\n       \"AACAADxEAQAABKilYjkTDQMAgHZAxXIAAICC8RAFAAAQgIcoAACAALWUOBjvDNTTp09PLLdardrf\\n\",\n       \"3q/92q+5uLu728UDAwOJ9U6dOuXikKHGTz/9dGJZSyYcP37cxQsXLkyst23bNhe///77Lp47d25i\\n\",\n       \"vfnz57t43759LmbG72poXz/OeTWKOOc6lF9/10VXxQ+9j3V1dbl4ypQpLr58+XKu9+t7zMxu3rzp\\n\",\n       \"4uXLl7vYr76/d+/eMbfn92nlWi+ff85nzZrl4kuXLuXahpbvyfsen15Leh11Cv83kOc3SksUAABA\\n\",\n       \"AB6iAAAAArTFBMRFN6v729N0WV6rV6928cGDB3O9Z/PmzePej1myQrum83RCXrPmTMqLZArGzOzG\\n\",\n       \"jRs1HcnEodX8zT5adT+NThKsv6+tW7cWc2B3hf4+tVK/dj3wf/96H9K0RFbXg+vXr7vYP39atX/S\\n\",\n       \"pLb4UzFhaLeTvKm50BSemj17touHh4db3l7ThHTToSUKAAAgAA9RAAAAAWijDZQ3hRfi6NGjQe/T\\n\",\n       \"pvk0fppJm/AHBwfHvU9Na5qZnT592sVXr151saYGzPIdaztbtWpVYlnTIfr9FtHEjg/4k2tryuPY\\n\",\n       \"sWMu9kedbdmyxcU6IXfZnnzySRcfOHDAxf5kv5qa09f80cMqb1pCJxbX2Mxs48aNLq5y8m/cm38f\\n\",\n       \"r8rnP/95F//gBz+obL8h3WdC6Ij53O8p4TgAAAA6Hg9RAAAAAXiIAgAACECfKI9WC/crh7/zzjst\\n\",\n       \"bebKW4cAACAASURBVHvFihWJ5dC+T+Ol/ZH8Ph8nTpwY9/Y0P+0Ps9Z+UCpvHyj/HOn2m9h/SIf8\\n\",\n       
\"Kr8PmPZrWbx4sYu1fIVZcoi9VqS+c+dOS8c5Efh9ndatW+fir33tay7+yU9+klhPf9dnzpwp6eg+\\n\",\n       \"SvsZZX2/Wq360KFDLe+3v7/fxVn9qvTewPXXLHV9H3WVutD+jmX2iQo5r7REAQAABOAhCgAAIEBH\\n\",\n       \"pfNmzJiRWL5165aL81aMzhryqxM4pg07NkumZFRV6Tvfo48+6mKdmHg8dJJWrVRbdIot9BwtW7bM\\n\",\n       \"xVpmwU+XaZrNT0XmoRN/mplduHBhzPX8/eq1o5N4+s3jWtVar1+q0d/b2bNnE8s6Q8CSJUtc3JTU\\n\",\n       \"1KuvvpprPb2v6fWW9Tk0lemnlvW1N99808V+qtC//6E59N5Qpaz0b5H0GjX76N/Ysvi/lTxoiQIA\\n\",\n       \"AAjAQxQAAECAjkrnXblypdTta+qqiSPF0mjaStNyZslzptVa/clImzbZ5Pr16xPLO3fuHHM9v9lb\\n\",\n       \"02KacvMr1aalfzUVZ2YWRdGY6508eTKxrBWkdaSinw7U46iryb5KOkLt9u3bpe1Hq5KHNNn79Hsq\\n\",\n       \"Oz2oI17z7kt/8/551deOHz/e4tGhDlqNP++1qPcqHfE5HocPHw5633g98sgjieW33nqrkv2GzKRB\\n\",\n       \"SxQAAEAAHqIAAAAC8BAFAAAQoKP6RPn9eEZGRnK9T4c/+31Zmsyvlq3D/JUO+fcriq9du3bM9xQ9\\n\",\n       \"pN7/bubOnevivFWYp0+f7uK0PlDjoX2O/L5OWnpAS1bkrWjtDwWuamhwuymzH5TKKp3x2GOPufjt\\n\",\n       \"t992cVb/Er0Wy+6LGTK8O6sPY1XDxVEe7dOUt59caD8oVWa1cOWXK9I+pU1DSxQAAEAAHqIAAAAC\\n\",\n       \"NCqdp014mnbK2wypKSKz/Om8VlN4Wr3crLqmR394fFr17LRJgc3CK5iPl/9d5P1uVN4Uo6Yo/f2k\\n\",\n       \"pTlu3rw57uNBZ9B0ct7USNkpvDr4ZT70XGgpCtRv0aJFLtb0bNrfgHaX9TesbrREAQAABOAhCgAA\\n\",\n       \"IECj0nnaRK6Vtf00Tlq6LO8or6IVkb5LGw2G8dMRdEzci3tpegqkq6vLxXknUg+RlcqsahQl8tHR\\n\",\n       \"zk1Ptfb19bl4cHAw13v89N3FixcLPaYi0RIFAAAQgIcoAACAADxEAQAABGhUnyilwza1f5RZuSUE\\n\",\n       \"tEyCznZeNvpBFafK702r3QNlyHtv6O/vd7EOgd+6dWvhx4R66d/EpvdXu379+rjf0/TPpGiJAgAA\\n\",\n       \"CMBDFAAAQIDGpvNUlRNmVpkKqoqmKC9fvpx4TSfhDTFz5szE8qVLl1raXlNo9ebe3l4XDw0NJdZL\\n\",\n       \"G16s7zHLP3GxouxFcVasWOHiEydOJF5reupAj09Tdv6k1lp1n9IenW3hwoUubnqJjpDja6cJ22mJ\\n\",\n       \"AgAACMBDFAAAQIC2SOehNXlTlDqix0/zaXqgp6fHxX56sEyaIstKjy1YsMDFaRMO34umQP0Unjp2\\n\",\n       \"7NiY/66TIJuZdXd3u/jo0aOp29MUXhRF9zrMtuOnP7Ua9+TJk12s58Gs9ZT+8ePHXZx3kuEm0kna\\n\",\n       \"ffqbCEkfo33EcVz3IZSKdB4AAECH4yEKAAAgAA9RAAAAATqqT9T06dMTy9oPhT4C93b69Olc6xUx\\n\",\n       
\"fNqvQj/K7/ui32ne7zC0H5Q6e/ZsrvXS+i1p/zKzZF+grDIQ2kdIZy5vtRRFU/jlBHT2Ae0TpbPU\\n\",\n       \"++uFXH9N7Ael/b40zqrwvH///lKPCe0h7/WsMyqcPHmyrMMpnP/3wS9L0iS0RAEAAATgIQoAACBA\\n\",\n       \"o9J5GzdudHHIpJl+M3/eiQ+1OrWmEcqulK7DlTVdc+PGjdL2OWvWrMSypozKpGkqs2Rzrabs/Gbc\\n\",\n       \"MtMXs2fPdvHUqVMTrw0ODubaRtpQY922WbLiuFZ5969ZXa8TSxz49PxpisL/7er10ynVuL/61a+6\\n\",\n       \"WEtlvPnmm4n19Lx0+tB25JO3tExV9/eJjJYoAACAADxEAQAABGhUOi8khZdlypQpLs5K7WlzeZWT\\n\",\n       \"HV+5cqWS/SxevNjFp06dqmSfZsl0pf9ZW03T+ek3TZHlHZ1X5sSdfipOr0VNX2oq2SyZytWRbFev\\n\",\n       \"Xi36EBtHU1U6ma5ZZ4xOXLNmTWJ59+7dLt65c+e4t+dXdS9zkuqJkFpuJ363jDRVzihRpHb6vdMS\\n\",\n       \"BQAAEICHKAAAgAA8RAEAAARoVJ+ovLQadFaV7bwlDjpdlf2gVNF9vrQvkVa3NiumSnmRsvo6aX+f\\n\",\n       \"rFIIev3629P1yuzbVSUtXeD3eWunPhJp/N9hq78PnZHBrNzh7JRWaJaBgYG6DyGT3q9CZgvo6ekp\\n\",\n       \"8nBKRUsUAABAAB6iAAAAArRlOi/vRLkTjT8BcxV0Yl2zj04w2+r2Vq9e7WJN477yyist7edeent7\\n\",\n       \"XZw18bE/zHyU34StQ8T1e/K/M01TapP4ggULEutpqm/79u0uLnOYe9k0ZeRfR+38uUYVnd72J2lO\\n\",\n       \"S+cV/RtF/ZpecqLVCb/9CdybjJYoAACAADxEAQAABGjLdF5eaSMEOrV5u46JWf00U95Uq040rFXi\\n\",\n       \"/e9CK3UfPXrUxUWPFtKK4mbZKTyVlmby03RaUV0ruU+bNi2x3nvvvedi/T79tKGm8/S1Jqa99Lv2\\n\",\n       \"R9mlVXL3z59+350yGjGEnq++vr7Ea/r7UHXd37KubbTmgQceqPsQSjU0NFT3IeRGSxQAAEAAHqIA\\n\",\n       \"AAAC8BAFAAAQoKP7RKUNsyy6j4A/3LRdq/tqX7Gurq7Ea2n9c0LLTWg/qCwnT54M2v54aRXxIvgV\\n\",\n       \"t7V/nvZryarMrdevf83qa7q9Jlbp19+H/5vUPlxZ/RbbtWK5lqwo4jPodToyMtLy9spEH6jyzJ07\\n\",\n       \"t+5DKNWePXvqPoTcaIkCAAAIwEMUAABAgLZM5+kQ8aKrAIdo1/SdT1NGWeUSQobR+9WV01IR/sTC\\n\",\n       \"daVxdPi4ft758+cn1ks7F/39/YllnYBY06H672bJ86IpT3/Iv6a+ml69OK1sg1l6JfdLly4l1tNS\\n\",\n       \"F+2k6JITev01fRJalKedSgAov5RMWjeKpqeqFS1RAAAAAXiIAgAACNCW6bwmpPCawE+R+amhJsnb\\n\",\n       \"PFtl+q67u9vFfrpocHBwzPecPXs2seyPrhs1a9as1PU0/eunrQ4fPuziy5cvu1gnRDZLpr78JvIm\\n\",\n       \"81OPOgovbYaBdqbf9VNPPZV4Tb+3l19+Odf2dIaA3bt3t3ZwbaZTZ5oIsXnz5roPIUiV35nOEuHf\\n\",\n       
\"Z4tESxQAAEAAHqIAAAAC8BAFAAAQoC37RIXQvhh1lSTQ0gxmZqtWrXLxokWLXOwf369+9SsXa27X\\n\",\n       \"7yfjL3eCMstZFJGfT6sQfv78+dT1tP+Vn6vXfm3aZ0bLIvjba3ofwazyBNr3qarq9HV57bXXEsvT\\n\",\n       \"pk0b9zaaWOpB+//p8fnlSlqtYO7fPy9evNjS9trZzp076z6EIFX2idLfF32iAAAAGoaHKAAAgABR\\n\",\n       \"1amtKIpy7TArjaOTL2rz4ERu3vXp99r0itYh/NICTZh4V8/5N77xjcRrWrrg9ddfd7HfvH3mzBkX\\n\",\n       \"azrPn3BUK3/r76NTSgPk1enXeRGyykr09PS42E95pKVe/L8ZWioka6aDvHS2AD2mIrbdrvxzzrV+\\n\",\n       \"b9oFInTmgNHzHkWRxXE85kmnJQoAACAAD1EAAAABGjs6L2vEkT/yCR9YuHBh5fv0qwjPmzfPxcPD\\n\",\n       \"w7m2sXbtWhfv27cvdT0dfaiVm83Mdu3alWtfVdm7d29i+cCBAy5Oq4bu08k5L1y4kHhNR/F1ygTY\\n\",\n       \"KId/fWhqo4iJXotOs+X9fTSdjk7U1JI/alFT8LNnz3ZxHffzpvAnevdnisij6Mm/09ASBQAAEICH\\n\",\n       \"KAAAgAA8RAEAAARobImDdtLV1ZVY1v4qVdJZq7XcA8Nhq6G/pQcffDDx2sGDB6s+nAmhiSUO9H6g\\n\",\n       \"xxRahkPLEFy+fDn8wArCcPt8tO+Txv7fCz1/WtpHS0z41fybcM6131JIn6UsGzduTCxv3bq10O3n\\n\",\n       \"RYkDAACAkvAQBQAAEKCxJQ7aSV3pO1+ZkyzWRSuTN6EqeV7Tp09PLGvFcS3RoSlYs2TqRq8rvxK5\\n\",\n       \"DpnW9IpWPPfNmTPHxX4JkVu3bqW+L41+Rh2abZZMX+i2s86LVmi/777kf9/pOcs7+WraOS+bfm9P\\n\",\n       \"Pvmki7ds2ZLr/Z/4xCcSy365jIlES6j4v4GmlfbQSutmyXScTszsTyKtv2VN0+nvwZf2+6pSmedf\\n\",\n       \"y7s0HS1RAAAAAXiIAgAACEA6D43ip3FCUniaAkibRPVedFJVv/k9j6VLlyaWp02b5mIdOblo0aLE\\n\",\n       \"epruyvrs2uyv58xP6WqTu6YA/PU03ZWVntb047Jly1zspxS0WrCmK/x0nqYB9Rz5o480NZe3QnYT\\n\",\n       \"ZjbIO5pOz9GSJUsSr7311luFHlNV/HRU2khKfz0doabXtn+NZaWu6+CnoPJef/o+/c3rPcinr/mz\\n\",\n       \"GZRJ760PPPCAi8+dO5f6Hr1n+Oco7R6Xlcosgm6/1dQhLVEAAAABeIgCAAAIwEMUAABAgFoqljdt\\n\",\n       \"aCoAAMBYqFgOAABQMB6iAAAAAtRS4mB02GqVaT0dZrlhw4bEa5s2bXJxb2+vi0OH0OowUN3Xtm3b\\n\",\n       \"ch2fP/w8b+XldevWuXjPnj0u1krVK1euTH3/gQMHXBw60akO2df46NGjifXyDv999tlnXbxr1y4X\\n\",\n       \"+9+NDlldsWJF6nojIyO59qvD7d97773U9XSIvm67rglCFyxYkFjW72DHjh2p79NJUUMq8PvD8v0J\\n\",\n       \"U/NYu3ati/ft25e63jPPPOPiV155xcVNmJR1IvDv27NmzXKxlrDwy4to2Y+sYeV5K9Lnpb+BefPm\\n\",\n       
\"ufjIkSOJ9UJKmej2/Kr9eg8dHh4e97ZV1qTPaRMYmyX/Fuk2/HIRev/U2Qy0VEkWf+YFrd6u14R/\\n\",\n       \"HvRvnT+LQh5+SRy/qn2r8jyj0BIFAAAQgIcoAACAALWk8+oYnffQQw+5WNNWviKq4H7xi190sVYi\\n\",\n       \"zkrnZU0enLci7ac+9akx/z1rYldtXg1N4anTp0+PGYd6+eWXc62n6YH9+/e3vN+sFJ6qslpwHn46\\n\",\n       \"T1ObWem8kBSepnFOnTo17vf7slJ4anBwsOV9oTh6zWn6x0+dDw0NuVir9petv7/fxTqheRETx+vn\\n\",\n       \"8NPJVU2Yrn9PQ+/heWcBSOP//cr6e1akotN3IWiJAgAACMBDFAAAQIC2nIBYe/vnbYbUptW8qRpf\\n\",\n       \"2qSFy5cvT6ynzbo//vGPg/al8jZZ/sM//IOLv/e977lYm7DPnj2beE+ro0ZQDR0taJZ+DfujHlev\\n\",\n       \"Xl3aMVWZklFFpGtxb5qu1cmrfToyU6/TgYGBxHp6bYaMxMrLT2nr/U8/RxHdSjR96Y9Q078DZX5e\\n\",\n       \"1IuWKAAAgAA8RAEAAATgIQoAACBAW/aJChmO2d3d7eK8w9K1PIFZsh9UT0+Pi//0T/80sd6f/Mmf\\n\",\n       \"5Nr+U0895eLXXnst13ta5feJqmoY7kTj9yHxKwSPdxt+9ec0/jW7d+/ece8X1dPK0nm/67JpXyLt\\n\",\n       \"D+rTe6Gu5/cRqqoPnV8N/dixYy7WvklFn2e/YnlaH1q/zEzeWRTQTLREAQAABOAhCgAAIECj0nlF\\n\",\n       \"N2nrxL3+BKl5ZE2++KUvfcnFv/jFL8a9bbPiU3h+M/Eo/RxNqPDaFGWmUELSd6Hb0LSBXvNmZtu3\\n\",\n       \"b2/5ONT69etdHDJRbOiEoZoa0glly6RD/M2SE3n7E2q3qikpPKUVxv0yLkqrU+sEuH4ZjlarYufl\\n\",\n       \"d9eoalYBv7yIfqdaSZx7cGehJQoAACAAD1EAAAABoqonA46iqLId6uiSrFFo8+fPd7E/eq1d6ffa\\n\",\n       \"1dXlYn/0VlVN7BOBnnN/MtIyaarLn1S11UlWn3nmmcTypk2bWtpeXppqNUueT/28586dG3OdMuj2\\n\",\n       \"65hEvWp6/9SJrPfs2ZNY74EHHnCxpvP8dLR/7xl14sSJxHJVk9cWTf+OmCVT15rqy+omksa/3tKu\\n\",\n       \"dT+dr6lM0ojjN3reoyiyOI7HPOm0RAEAAATgIQoAACAAD1EAAAABGlXioGhpQ/59ndIPKo1Wy/Wr\\n\",\n       \"6uowbj0PRQzRnwiq7PuUpug+JNr/ZWBgINd7/PPQap+hadOmJZa10rQ/lLwqE6EflNK+Z371caXl\\n\",\n       \"HrTchl8iQmeNmDdvnov9fjzt1CdKr3vtD2aWLBER0g8qhN/viX5Q5aMlCgAAIAAPUQAAAAHaIp3n\\n\",\n       \"V8vVCSWV35z6qU99ysU6+eUbb7yRa7+a1jDLX6W4t7fXxWfOnMn1nrxWrVrl4kOHDo37/cPDw4ll\\n\",\n       \"Te/ptpm49kNZlc3TUjz+EP3HH3/cxVu2bCnw6Iq3ePFiF+etqr9gwYLEsqYyQmj6rin0/tLE4yua\\n\",\n       \"3iu0lIRPSxfkLfOh14dW8243Wtagr68v8ZpeI9o9oszUXlXV2TuVX6YiD1qiAAAAAvAQBQAAEKCW\\n\",\n       
\"dN5o82/eZs28ozX8Jvaf//zn4zswM/vc5z7n4rfeemvc7zcrPoWnzeIhKbws2vxLU/DYQiaH1SrO\\n\",\n       \"Ztmjm5ombwpv8uTJLvYnBW41nddEEyGFl8ZPT6u0+7ifptP1dISlPxKzneh58UeDa/pXR0i30+jD\\n\",\n       \"iSZkpD4tUQAAAAF4iAIAAAjAQxQAAECAzD5RURR9z8x+w8yG4zh+5O6/vWBm/5OZjXb8+d/jOP7Z\\n\",\n       \"3df+3Mz+RzO7bWb/axzHL4213dG+FHn7RI2MjORarwg7d+7Mtd/169eP+Z4y5K2U3NXVVepxID+/\\n\",\n       \"T9SOHTtqOpLyrFy50sWDg4M1HgnKtnTp0tTXtBK53tP9vk7af0jLIoT0OWwKPfYbN24kXksbLk+f\\n\",\n       \"qM5yr5aofzCzL3j/FpvZ38Rx/PG7/xt9gHrIzH7bzB66+57/GkURLV0AAKAjZT7kxHH8381srOaY\\n\",\n       \"saqofdnMfhjH8a04jo+a2UEze6LlIwQAAGig0BIHfxJF0e+b2VYz+2YcxxfMbLGZaSnmk2bWP9ab\\n\",\n       \"33vvvcDdluPTn/60i1999dXU9XQYd1YKTyfh1KGtZfObk1Gfa9euJZY7cci/pnj2799f45GgbIcP\\n\",\n       \"H059raenx8XaBWLq1KmJ9bQkhqb2quyuUTS9v+usGGbJdJ5/LtA5QtJtf29mD5jZBjM7bWbfyVh3\\n\",\n       \"Yk17DgAAJoxxt0TFcewmVIqi6Ltm9q93FwfMTHsfLrn7bwAAAG3lhRdeuOc6436IiqJoURzHp+8u\\n\",\n       \"/paZvXs3/omZ/bcoiv7GPkjjPWhm+Wb6rZg/ii0thaejTszMTp8+PeZ6fjVqRl/gwIEDdR9CKXRS\\n\",\n       \"7unTp7vY/01dv369qkNCBbJGCOv9T9Nb/n1RlzXdrZPzmiW7Q1y9etXFd+7cGccRl0fTkhr7o831\\n\",\n       \"XGg6z5+YOe/oa5TPrzo/+hD17W9/O/U99ypx8EMz+4yZzY+i6ISZ/YWZPRtF0Qb7IFV3xMz+FzOz\\n\",\n       \"OI53R1H0j2a228zeN7P/EnN1AACADpX5EBXH8dfH+OfvZaz/V2b2V60eFAAAQNNRxwkAACBAaImD\\n\",\n       \"xtC+GHmH+OddT0sfmJm99NKYBdjpA9Uw2pdN+1RUaXh4+N4rtaG+vj4X67B3v/8gfaImDq1Mrvdj\\n\",\n       \"LWNgluzr1N//YfWbuXPnJtbTvkQnTpxw8cBAM8Yp6efSz57VZ0v7ffl9xfzSCKhPSL87WqIAAAAC\\n\",\n       \"8BAFAAAQoO3Tef6QxFZ97GMfc/HLL79c6LZRjbpSeO3EL0mQN8W9Z88eF2sagorME5dWLNeUmz8p\\n\",\n       \"9ZUrV8Z8z6xZsxLraTqviRNb66TDOgA9K4WtZQ38Egdob7REAQAABOAhCgAAIEDbp/OKnsx4x44d\\n\",\n       \"hW4PaApNfedN382ePTuxfOHChTHXmzSp7W8lCKTXiKaqzp07l1hPl3UU1EMPPZRYT68lHbl37Nix\\n\",\n       \"xHp11XLW1LWmIv2K5fqajuDWSv9m6b8ptAdaogAAAALwEAUAABCAhygAAIAAtXRkGM2bMz8xUJ1l\\n\",\n       \"y5a5+OjRo7ne4w8/T+u/sWLFisTyzp07x3VsPq1obdacatX4qMmTJ7s4rXq5WbIEgFYinzdvXmK9\\n\",\n       
\"OXPmuFirnGdtry6XL192sV9uR/t9aUV/v0L7yMiIi4vu44vy0RIFAAAQgIcoAACAALWk80aHeFJZ\\n\",\n       \"GqhO3omyNTWX9RudMWOGi/1h260ifdc+NAWllcj99FZaCtlPES9YsMDFmipcsmRJYr2DBw+O+1iL\\n\",\n       \"oKUMzp8/72K/e4pOVKxVzjUFaEYKr93REgUAABCAhygAAIAAtaTzmpbG0xEgOvkl0EnypvN01NOZ\\n\",\n       \"M2dS19MRdJrWMEuORmra7x3F0u9aR+f5Vex7e3tdrNeYprrMkteppgSbch3lTb8dOnSo5CNBE9AS\\n\",\n       \"BQAAEICHKAAAgAA8RAEAAASYkFOv69BTM/pBYWLQCso+rUw+ODiYa3val6Wu4eao38WLF12sw/z9\\n\",\n       \"yty6nl47fl8n7Qel/VVHZ7oAmoSWKAAAgAA8RAEAAASYMOk8nbzyxo0bNR4JJoLZs2cnltMm7q1S\\n\",\n       \"Vjrv2rVrubahFaSvXLnS8jEpHR5PFef2odeBfm+3bt1KrJdWsXzq1KmJZf2tFF0JHygaLVEAAAAB\\n\",\n       \"eIgCAAAIMGHSeVWm8HSkk45IKZqmKM3MvvzlL5e2L4yPn6JoOj/1kme9rFF8K1eudLGOvhoaGkp9\\n\",\n       \"T94U3vr163Othw/o/cgs/z1JR8k98cQTqetpOi9rQt40Wr3cLJn200mHs9LRQF1oiQIAAAjAQxQA\\n\",\n       \"AEAAHqIAAAACtH2fKJ0ZPGvG+Sr5Of7xWrFiRWJZ+xZk9WfYvXt3S/udCLSPxcmTJ0vbj85E3878\\n\",\n       \"IeZppRD6+/sTy7/7u7/r4r//+78v9Jh27txZ6PY6kX5v/v1kz549LtbZGvyyHMuWLRsz9r3//vsu\\n\",\n       \"1j5zfr9ALY+R1Qevp6fHxZcvX3Zx3kr6QJVoiQIAAAjAQxQAAECAtkjnPfPMM4llHQKrTdUjIyOJ\\n\",\n       \"9bSZuUqt7jetsu+9pFXF1mb6JlTOrtKcOXMSyzr0Xodj563YnVfR26tL3s/hn2dNGdWVZp806cPb\\n\",\n       \"W5n3guXLlyeWjx07Vtq+8tLfvKbEzJKV4XVSX7+EgKb6/G2oU6dOubiINHZWGQygaWiJAgAACMBD\\n\",\n       \"FAAAQIAob1XZwnYYRW6Ha9euTbymzbjaRL5gwYLEeps3b3axHn+ro+JCPf3004nlLVu2uLiulKKe\\n\",\n       \"l/vvv9/F/oiZTkk7NYGec02TtBv9vQ0PD+d6z9KlSxPLJ06cKPSY0mSd87lz57pYq6b7v8nbt2+P\\n\",\n       \"e79azXsiVNJet26dizVVa9be13q78P9OT+RzriNPp0yZknhNf9v6u/R/4/o3Uf8G+n8fR2dRiKLI\\n\",\n       \"4jge86TTEgUAABCAhygAAIAAPEQBAAAEqKXEwde//nUzMzty5Eji3/ft2+fipg/FX716tYu174VZ\\n\",\n       \"ff2g0mhu2O8DpbnhkL4hGJtfJTq0bEWrdKj7o48+6uJNmzalvufKlSsu7urqSrx248aNMd+TNQS+\\n\",\n       \"LlrComhF94PSfh5N7KdY1/UL+PT3UfRvxZ95IQ9aogAAAALwEAUAABCglnTeaCVcLQVQJx0mqVV6\\n\",\n       \"s2gz4k9+8pPCj6kqmsJrekqhnWzYsCGxXFc6RNNOWlk6i6an/QrUaem8Jqbfy6xYrmnOtHMyHk3/\\n\",\n       
\"vdVVPqaJtOK7Don3Z8xA+zl06NC430NLFAAAQAAeogAAAALUks4bGBioY7eOX11Zm+YPHjyY+r5n\\n\",\n       \"n33WxS+//HLqelpNVie8zdtU2NPTk1iuauRT01MK7STvpLtPPvlkYjktxe1P8Js3dZD32l64cKGL\\n\",\n       \"tUp53vR2qDJTyDqCdu/evanraTpfK0NPnjw5sZ6mORcvXuzi7u7uxHppVd41vei/T/el1dDNzF59\\n\",\n       \"9VUX6/fhz5Tw8Y9/3MW3bt1ysT9Kcffu3WMen39f1mts0aJFY77HrPjUZtPp9+h/V5h4uAIAAAAC\\n\",\n       \"8BAFAAAQoLaHqIsXL9a1awBoKxNhkmOgHUX+7NCl7zCK4jiO7YUXXrAXXnih0n0D98J1iSbiukQT\\n\",\n       \"TZTrMooii+M4Gus10nkAAAABeIgCAAAIUEs6r9IdAgAAtCAtnVf5QxQAAEAnIJ0HAP9/e3fvIlcZ\\n\",\n       \"hmH8uomkUAQRIX4FTBHBVNkmjYipwqYx2vhRpRAR/KjVRi1ttBJtjJJCImkiacREK7uwEDQQgwZc\\n\",\n       \"SCRsLPwDEngszrs4rjsiB2de2XP9mjnnPQfmGbjn4WHOzBlJGsEhSpIkaQSHKEmSpBG6DFFJVpNc\\n\",\n       \"SfJzkjd61CABJFlP8kOSi0kutLV7k5xP8lOSc0nu6V2ndrYknybZSHJpZm1uDpO81frnlSRH+lSt\\n\",\n       \"nW5OLt9Ncr31zItJjs4cm1wulz5EJdkFfAisAgeAF5I8tuw6pKaAw1W1UlWH2tqbwPmqehT4tu1L\\n\",\n       \"i/QZQ0+ctW0OkxwAnmPon6vAR0m8qqBF2C6XBXzQeuZKVX0F081ljxd4CLhaVetVdQv4AjjWoQ5p\\n\",\n       \"09afrj4FnGzbJ4Gnl1uOpqaqvgN+37I8L4fHgFNVdauq1oGrDH1V+k/NySX8vWfCRHPZY4h6CLg2\\n\",\n       \"s3+9rUk9FPBNkrUkL7W1PVW10bY3gD19StPEzcvhgwx9c5M9VMv2epLvk5yYucw8yVz2GKK8MZX+\\n\",\n       \"Tx6vqhXgKPBqkidmD9ZwIzUzq67+RQ7NqJblY2AfcBC4Abz/D+fu+Fz2GKJ+BfbO7O/lr9OrtDRV\\n\",\n       \"daM9/gacYfj4eSPJ/QBJHgBu9qtQEzYvh1t76MNtTVq4qrpZDfAJf16ym2QuewxRa8D+JI8k2c3w\\n\",\n       \"RbSzHerQxCW5M8ndbfsu4AhwiSGPx9tpx4Ev+1SoiZuXw7PA80l2J9kH7AcudKhPE9QG+k3PMPRM\\n\",\n       \"mGgu71j2E1bV7SSvAV8Du4ATVfXjsuuQGL5jciYJDO+Fz6vqXJI14HSSF4F14Nl+JWoKkpwCE+Fx\\n\",\n       \"pAAAAGlJREFUngTuS3INeBt4j21yWFWXk5wGLgO3gVfK/+/SAmyTy3eAw0kOMlyq+wV4GaabS/87\\n\",\n       \"T5IkaYQdfw8HSZKkRXCIkiRJGsEhSpIkaQSHKEmSpBEcoiRJkkZwiJIkSRrBIUqSJGmEPwDOrQm6\\n\",\n       \"MQ8HvQAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01f42310>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   
\"source\": [\n    \"feat = net.blobs['conv2'].data[0, :36]\\n\",\n    \"vis_square(feat, padval=1)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The third layer output, `conv3` (rectified, all 384 channels)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsnWuwZVV1tt+tCGLQGENEuTaXbqDtbrq5NgEEFFGQaKRi\\n\",\n       \"vlgVKxgvMWpiiKl8ookcYzTBsqiUGiSJxKh8CSZGUVNFQBAQL9BCS9PQdNMgGlBjLhoTNcZL9veD\\n\",\n       \"fs46+z1n9Jxr7bXPPg3j+bPP3mfvdZlzzLnWeNcYYw6Gw6GSJEmSJEmS9jxq2geQJEmSJEmyq5I3\\n\",\n       \"UkmSJEmSJB3JG6kkSZIkSZKO5I1UkiRJkiRJR/JGKkmSJEmSpCN5I5UkSZIkSdKRidxIDQaD5wwG\\n\",\n       \"g62DwWD7YDD4v5PYR5IkSZIkybQZ9F1HajAYPFrSNklnSPqqpC9IetFwOLy71x0lSZIkSZJMmUko\\n\",\n       \"UsdLunc4HH55OBz+UNIVkp4/gf0kSZIkSZJMlUncSO0n6YE57x/c8VmSJEmSJMnDit0msM3is8LB\\n\",\n       \"YJDr0iRJkiRJssswHA4HC30+iRupr0o6YM77A/SQKjXC4x//eP34xz+WJD3ucY/THnvsoRUrVkiS\\n\",\n       \"/uu//mv2O5L04he/WJK05557SpI2bNggSfr6178uSXrwwYc2/5M/+ZOSpN12e+i01q1bJ0mamZnZ\\n\",\n       \"6QH/9E//tCTpp37qpyRJX/nKVyRJj3rUQ4LdU57ylJHPncHgoba98MILd7q/xz3ucZKk733veyOf\\n\",\n       \"77vvvpKkb3/725Kk7373uzs9Xs7z/PPPlyS9613vkiQ9/elPlyT9+7//+8jx/+hHP5IkfelLX5Ik\\n\",\n       \"fe1rX5Mk7bXXXpKk73//+3rMYx6j//7v/5YkHXbYYZKadnzsYx8rSfr5n//5nZ7fYx7zGEnSD3/4\\n\",\n       \"w50ePzzpSU+SJH3zm99c8P/sh9dnP/vZkqQHHnhgZH+0/7Zt20Z+v8cee0iS/uM//mPk8/Xr10tq\\n\",\n       \"+n358uWSpCc+8Ykj+4s44IADRn7/r//6r5Iae2W/9OO9995bdX6w2267affdd59nJ7WsWrVqdjtS\\n\",\n       \"Yw8vfelLF9xf32Avr3/96xfcH+326Ec/WpL0L//yLyP/P/DAAyVJ//mf/ylpfv85tPcFF1ygiy66\\n\",\n       \"SJJmbbktkQ1zTt///vclNed04403SpKuv/56Sc05rV27duT3t912W6fjgchWJsVS3d9ee+2lxz/+\\n\",\n       
\"8bNzf1uYa1/xildIkv7yL/9SUnMtqZ27HOYgjzlmDn7Tm94kSbr44oslNbY9KabVf1dccYWk5tp7\\n\",\n       \"1FFHSWrm7r/5m7+RJL373e+W1FxbuRfgGssc8cIXvlCSdNBBB418n++99a1vlTS/3/bb76GHYF/9\\n\",\n       \"6lc7nc/ee++tH/zgB7P9dOqpp86O9YWYxKO9WyUtHwwGywaDwe6S/o+kj/uXnvCEJ2ivvfbSXnvt\\n\",\n       \"NTsRJkmSJEmSTJvdd9999u/TTz99p9/tXZEaDoc/GgwGr5F0taRHS7psoYy9r3/967N38U94whNG\\n\",\n       \"/ocHjyeKgnLIIYdIkv7pn/5JUnN3Cl/+8pclNV4ld8UlUHjw5FEo7r777pHtRqBElECBcaXh3/7t\\n\",\n       \"3yRJP/MzPyOprEj5/lAcUGTYPooEXhj7pz3/53/+Z3Yb9IXU9AfelN/to9yhoOGt8xopTM5cQ537\\n\",\n       \"nu07N9xww7zj3hmRKrFp06aR97T3aaedtuDx/OAHP5DUtBFeDl4UKgXHhTdEO7q3ijIZ8aMf/WhW\\n\",\n       \"RezCPffcI6kZB/RLW1wxjLxuh/YClL7vfOc7kqSnPvWpkmJFinaPjnvlypUj2/vWt741+z/Ouasi\\n\",\n       \"FSkSP/ETPyGp6Wtg7lizZo2kRmnANuhHxtL//u//7nT/++yzj6RGZasdS9MGxQC1lvP47Gc/K0n6\\n\",\n       \"53/+5wV/53NAiT322ENPeMITZvvc+8PhOL7xjW9Imj+3cI2h31Hj2S5PO0pEYwN1FZ785CdLatqJ\\n\",\n       \"uZi5tGQfwNwyaWWrLdgr/c344DwZ61yjeEricE1jTmWs8wpRe3WdPxmnpTnamcSjPQ2Hw6skXbWz\\n\",\n       \"7zzqUY8qTsjJ4sAFLUmSZCnT9gKXJIvBRG6kahgOh7NeG94X3glKCqC08MrzU+IQ7rvvPknN3Wuk\\n\",\n       \"aERwl4vy5O9L4K2W8PN173muZ70z8F4c7uLxClCuUAQ81oSYM/dq7rzzTkmNN3HwwQeP/B/FAW8A\\n\",\n       \"b4zPuaunf5j88BLuuusuSfO9VI4nUgX6uuHjuLEXtzfAO+U4+T7eFMcL7l2uXr1aUuOtc161Xue4\\n\",\n       \"cJxd9+dqyN577y2piQmLQMkCtzvsK1IjopgywM4WUgtcDeuKKwze13DrrbeGxyI1yoHHfdEn3jau\\n\",\n       \"nLgHvlThPJjDUNcjJQoiZzqasx71qEfp+9///ryYtQhvP+JC4aSTTlrwd1u2bJHU2DxzAWo0++Vp\\n\",\n       \"Atck5jr2g/IEbI//M6dht6jjjA2uEcxRzC3YkSudtU9HJgVjH+WNccPczzWwBKo158V5+9Mr3x77\\n\",\n       \"Lz3Vcbgmo2DSrrWKaS4RkyRJkiRJ0pGpKVIHHHDArMeM8sHd5/777y9JOuKIIyQ1Xg3KDd4KsShE\\n\",\n       \"8HNXz108cPcfeXfc1aLokBVWgrvYSEnC2+CulvPjPdmJUTYfEAtGO0TB+dxNoxhwXigL7jW7UsFd\\n\",\n       \"OO2It8jn4Hf70d0/mTAcL14b5+1EShscd9xxkqT7779fUqMQ4cXXgtfG7yMvif/jVdFetAf9i/eF\\n\",\n       \"90w/0q+8bt++XVK98tgVV2XaKlJ4vx5nUKtE1cZkdVWPUAEWOq+uWVeOKyWR4lSKoUE5YAwzd2Fz\\n\",\n       
\"UUbipG0kom3iD0oNNo8qiw0xl0ftFClSjDlvH5Q6FC9sMsqo9rnJ7YOxybWBpxtsj+PgGsWY57jY\\n\",\n       \"P+9RMvmdzy3M4Ywtrjke08c1DGUGJQsFlN8vlrpdy5FHHimpmau5Rh9++OGS5qvVEVw7GOv0O9c4\\n\",\n       \"nhbRf8QB83SIax5zOP3O3M01gGsO/YXixffp9xKpSCVJkiRJknRkaorUd7/73VlvgefN3M1zl42X\\n\",\n       \"wP/xevACPv3pT0tqvDfuYj3mpfS8FK+A11pK2WN4n9z94qm7IlKqF+R30X6X7HfrZELg5UQ1V2hf\\n\",\n       \"7uLxHvG6uDvvmvXl+62NOSuBlz8307ANeDkQZXhx/Hh/tAeKIl4o3o/3I7+jfyLlbNIZOO614u3i\\n\",\n       \"zfGe8+JzvGfshM+xQz8fvG9XMB1+z/fb1svyuJi5cS/YatsYiUmBp0uboKZGSgxzF7btCkpt5mQJ\\n\",\n       \"+pq+oA2jWLAIbANVllcyoFG4yLhF8YEouyqas2hHFCnmrkiRio4XUInJdHXbYm7YunWrpPnxkkA/\\n\",\n       \"8X2UDlf4+H0ptstr0JXiBmHaChXty1MdFCjaAfuohXhalC0/P66FjHvec+1l/8z52Df277UbufZh\\n\",\n       \"V7UxXalIJUmSJEmSdGRqitS3v/3tWW8ELwvPlLtaPP7oLvvmm2+W1DxH9ro+0NZ743ioW8VdqysZ\\n\",\n       \"tdvBc+eumrtmjx+IYkbYDjFGrrjhBXkNm1IMCgoWx0PsEnfnXRWfSeHZnSg5HLd7t6VK616J3KGd\\n\",\n       \"sUvahfYu1Soiw6fk5U+6nf38OS/aDW+O8+F4UCmo24T9eQV5348rUq500R7YZ1tFCjWA7c7NrPGq\\n\",\n       \"8pMGDzbyXJmLmENq56JI1aNStNdRou1Ldac8E5XvM1baZjx7Vh71tFACSu3TFtR2j1Ns+3vAdsis\\n\",\n       \"5ThRgLBZbDV6CoHdMWZQNHxu8bhZ7KGv2L5oLoniHsfF25/z5VrENdNj3fhe6akO7e7KLvCeccC9\\n\",\n       \"AnMadk1/ROUzXFH0bMkSqUglSZIkSZJ0ZGqK1J577jl79z93zT2pUWhqn/dyt4k31darAleKeA5P\\n\",\n       \"Jkpb8DY4P/fOuGsvKUe+5p17L9yNo3yV6iJxfngFxG34eeJFo/iVjq9vbwdoN481w3uI9ht5wZ5B\\n\",\n       \"FWWSuFLStlo2MXd4u+BeKt5WKWuxL/w8eI+9Yvcoliip2F1UG4j/u/3hJfM5rx4rVVpTD1zZmuul\\n\",\n       \"1nqQ0HZ9SIdziI4dW3U1m8+9Jhu2HNk0MSbMcbQdtoMt83viEj0OkjhT+pw+JTusLbQfcwXnx3bb\\n\",\n       \"qvkR2NKyZcsktY+79HbF9hnrrly44hfB+Xq2JXOMw3mgTPWlSEWZ6fSLX5O8nlntqhHgc6yvckG7\\n\",\n       \"esZ97bXC63L577iG8jlzq8dKet2vaC73dqlVPFORSpIkSZIk6chUFSlfE467P+5a8bJKz9e5e+R3\\n\",\n       \"kRews2ORmhgQ3pM50nXdLq/H5AoK/y/VuSIDAsUsUtzYjnvHeD8oMcQWse6RP1fm7t3XmovoW4ny\\n\",\n       \"6sO+bhnHVcqyjBRNvEavV1b7+1pob69m7PaEF4yygv0tVpyPx+1w3rySaYV6E8Xv8LlneeJtY0du\\n\",\n       
\"V22VOLxUjm+uctg23mxcJaBUd4lz9DmMMeYZmxw/bR2pl2yXV+ZQn8OwIa+ng01625eyySLYDrbC\\n\",\n       \"eqicV18V2tkOY5bzob5SCR/rtC/tHq2N6DUAGTO8xwY5Hq5lPrZ8TNN/tZXa2+LZn658YjeMqbaK\\n\",\n       \"lMP5oYz6OqTE0DH3lrItfa7wuRN74Dw9+472xz6juQZl2ZXC2mtAKlJJkiRJkiQdmZoitddee816\\n\",\n       \"A3hr3E16pkTkzXA37TVQvLJ5Ce5yiRvgOWrXuj7czRJrwvbdA+c9d+fsD6UIuCtGSSFmBWgHz3YE\\n\",\n       \"7spRPmgfvAVfT4z2ZD+1XkrtCvcOz+89awzca8J7w35qM0DAszvZr3v/eO38P6rIHkH/Y0+RHbu3\\n\",\n       \"OGklypUv7ILzw874HseHXdAufj70j6s8nBf7YXx4bFYt7Bcvcu76Yn3FmtRCG+KBe0Yntos66XGM\\n\",\n       \"tCk2jc248gS0JSo1cwa/pw+wIeYC2oWYokj56BqnRx+S7eZjsy98jHh2Vqn/PQbMFUOOl7mPdmZu\\n\",\n       \"4fc+d3I8PA1BGazNVuy6jihzJcpmlJVWWvOwLxiLzB3MKeyf+mLEjZYUKcYJ12bvP86bdqcdvfYi\\n\",\n       \"cwbjxeccnvZ4jGLt06hUpJIkSZIkSToyNUXqyU9+8ry7QJQYvAuvcO6QhebeYNe6PHiHnlVXAk8b\\n\",\n       \"orgI7oK9HhJeVZT54zFUvqI42yk9X3dvmLt4vFRfL8prcwDrGlGDhvP3DA28AjKNeO8VraMqxuDr\\n\",\n       \"SnG+XtW2lMHD9/Bi8JL43dq1a0e+7+st1SpGtCvtVFqjblIVzSM4Pvrba+D48aCS0B6RasHnbi94\\n\",\n       \"5/we757+QK3xNSIjOB7sgN8vtO+uuMqJJ+zZZ+vWrZPUjC0qZDMG+D1zGmuOAf9nbPA7bMz3x1jx\\n\",\n       \"CuGMabYTVUantllEyVYjPDOZ9mqr4pbwONC2c/5nPvMZSdIZZ5whab6ixRztcZ+MZWwZuGah4nJc\\n\",\n       \"vHoNOWzXswG7qtAeA+T2P24F/LZgf9g114g777xT0vx46BJ33HHHyOsJJ5wgSTrrrLMkzV+dwWvj\\n\",\n       \"YS9ca7w9uHbR/h7HWlvjLhWpJEmSJEmSjkxNkfrmN785e7fO3St31Xgz0fNe7j75Ht4Aio17DQ7b\\n\",\n       \"9btNvNBSFprXTXIPGk8ZL9WVIhQBvkf9pqgqsStymzdvltTcnXMepbvn22+/feS4UGaimKpICeSu\\n\",\n       \"3zMyPPaG7MKTTz5ZUtNOeNMcTy20D+2F91Va241+xRvkd3iTUY0bvFCO22vERBBfgVeD1xmtk1Zb\\n\",\n       \"P6kv3M5471WXUaZ83alIQfMMJ/BaLoCijPfIdiM75rgOPvhgSfNrIc09h3FhbLAP5hS3FfqONds8\\n\",\n       \"kxQ1kznDlQ+273GREVF22qRquNWC6uxZg1u2bBn5HjEr2MTc+LYauEa8733v636wc8D2iIWiX9yG\\n\",\n       \"Tz31VEnScccdJ6kZ0zfddJOkuNagx9jwu7ZKXW38ad8KYFs2bNggaX4F+Y997GOSxq/bdsstt0hq\\n\",\n       \"FCmuUcxh2J33o7cb49ufivn7WqUwFakkSZIkSZKOTE2R+t73vjf73BNP1JWpUhbWF77whQW/R+zL\\n\",\n       
\"2WefLanxqD37C+8VDxivom3dJPe0vbaL4woEd8HEevndNXfPKEnRXTLZb7Qn54nXy/bYPvulSjDt\\n\",\n       \"TnvwvNjjD9zr8Wq5vPoK7V1ryriCyPF7xgprI+J90P6eleaxYB4XgtdEe6FClLwTYnU4v8997nOS\\n\",\n       \"5mf4RL9D8aIfPTaO2jyRV0rs3KGHHiqpqVjvWaye2eL9i33TTq5YlWKYfDyi0HF+tIOfX0lR9X7k\\n\",\n       \"+3PX2jvyyCMlNWOZtuW7nBNzRKSSLV++XFKjKHhmLZCl5jaEzaLQeMYiqrQrNF4Hp6816mrxOQtV\\n\",\n       \"meNhjqKPaVcfo/SpZy/yvSiGa7EhZqrEVVddNfLalbbqM0ofNl/KIus7S9JhPGGnPqeV4j377m/m\\n\",\n       \"Kq+TxpyHPfpTC+zas/rIIvTtlkhFKkmSJEmSpCODxY7ql6TBYDCcmZlZ9P0mSZIkSZK0ZWZmRsPh\\n\",\n       \"cMH00FSkkiRJkiRJOjK1GKnFUKTYx2KpX9Pa37XXXitpfuwR8RjEqPB8nlo0PE8nXoPn38SPkIHE\\n\",\n       \"ds877zxJ0jve8Y4F9we+JiDP99mer1AfPVd/uPYfcQ+vetWrRvbn2aAloixAh3547WtfK0n65Cc/\\n\",\n       \"KanJsOH/xK/Qr/vss8/IdoiL8HpdxP8Qo8V6ZtTlatueXdcdm5mZmY0jpI2JVyR2gpgjxsRdd901\\n\",\n       \"si/iz7BhYpxokze/+c2SpHe+852z+1wM2M973/teSU2WH8fHWPa+OfbYYyU1WWm0D9BO/B5bOPfc\\n\",\n       \"cyVJl156qaSmXtaBBx4oqcnKoj4Qx4HtElPFnELMCu1PzAr9Q1bcYrfnjTfeKKmJPWKOJH6PMcmc\\n\",\n       \"ScYocZnE+WIfxNL5ahO/9Eu/JKlpl7e97W2SmrnXV/Wg3bZu3Spp/ligPZlLiavEzl/4whdKki65\\n\",\n       \"5BJJTcwf/ev1yrADrhmeaeurG3j23fr16yVJf/AHfyCpGUe0E7GHGzduHDkPjpt2efDBB7UzGJfn\\n\",\n       \"n3++pMW3l4hUpJIkSZIkSToyNUVqKUG1Ybwprxo8qZW5+wKvzusycfeO94A3SqYRXsDTnvY0SY0X\\n\",\n       \"RTZklEVFNhveFttnO7QjXjBKBd4xypSvYwXjrkDeF5PKnJqbZTYXvFC8RbxH+suJlCjUBezCa/VQ\\n\",\n       \"iwhvMqphFNXXcvBe8Z6pqeSV4msZZ5y95z3v6fzbGmhzX/NrsXCPvbQqQFR3CrzvGZMoUmTyeq0v\\n\",\n       \"lCj2z6tniDLWPWuPDNtp1z0ia682m4ysrkg9Lo0ZMqR9TUagfclwjeZCjpc5nVevo+YZ21S253PG\\n\",\n       \"Luo2ilPULyhgfJ85C0XKM4o5n02bNi24PV83s8S066VFpCKVJEmSJEnSkVSk1Nw1RzU4qIeEt7bU\\n\",\n       \"IEYFBcq9DLwaFCHqAuH5E6+A94nXiHLitU/ce+M9x4GSwvb4HC/LvWQqiOPte3XoaTGpGj5RFWRX\\n\",\n       \"G6JK9yVQomhP6k/BpBW/xa59tJgQNzatc0RZqF2VflxQOJgriIVCZWZsY7vYHMfp63B6DBtzFrDd\\n\",\n       \"xZpro5psxPYwVnzMRMpIqXJ3SUGEtmM0Wq0D5YxrAUqT18Sj30oKIefHHFOrKEbjhePzNQkjuipS\\n\",\n       
\"K1askNSMG2LdePpUWoOyRCpSSZIkSZIkHdmlFKlJxSrhJURrttWuAD0tUHq4q/cqzniBKD9eAZ1M\\n\",\n       \"H89wQllypY67ehQrfofX4zFZJe+S/eCdlCrLL1Vq15EiU6dE2xpvtRkwk475qz0/p23W4kLUZjJ2\\n\",\n       \"hWPzavBtQfEoVYJ2FkuJApQLYomoms9xoB67LdH+ZGuhSPA9fu9zCyr2YhEpJcyhbW2xNPY9hqwv\\n\",\n       \"ovMgOxUlsWtMmmdd0r/jVirn2lW7jmlXNZ3YNcYdCt64ShSkIpUkSZIkSdKRXUqRmnTWHM/Lufvm\\n\",\n       \"7nXSMTu+Jlxbrwzlh/pPrqyhCPnacsQnROs/8bl7jb6itntZKB4oE1FWGNvluFDU2q4IP21QgHju\\n\",\n       \"fs899+z0+5HySdwI3pnHp5TAiyamD/w98QLUwOkb7Lgt9Dvtg9fYJl5m0is1eEZkV3aVODJUTmKX\\n\",\n       \"UCBQBiIlwZULVyn5v8fqtFXoJsWkssMmtbYg/cHcC7wfd21DfudPIxijJVDExr2WekxdLZ7NyLWr\\n\",\n       \"r5i8VKSSJEmSJEk6MjVFarfddive9eOhc9eI8oHy4lV8xwVvkxXkuYudtCJFO3SND3AvI8pEca+i\\n\",\n       \"FG+B1+39hJeDgkTtEbxXauxQBTlSpPCiUCDo76VarysC7zqq9O64PbkCirJE7JhnREVgP1QW5/so\\n\",\n       \"nhApYn1R2w6OqxtLJXtzLq7mtoUxw1zDGPQxy5iqbUuPVyyBzXEc0X74HrblthlBbBQxKK6y8uoK\\n\",\n       \"ysOdSSuR0TWV9kYtpl/bjjGuIcwptTFf2DNzuyuRUFKuus5djFt+z37OOeccSePXn0tFKkmSJEmS\\n\",\n       \"pCNTU6RqnkHzXH3VqlUjv+laX6cECglKFwoJr5GyMm1q79KjGial8/MMJbxxvBu8WrxLnpvXxjqh\\n\",\n       \"xLDdXc1LbeudeXuSWXPQQQeNfI4Xh2pRW2mc/nQvENpmodKfkULpLHbm1WIyrqKAjdNHzHGsXQeR\\n\",\n       \"QhRlXLb11FEWSr+jr13F5rhdOWOVCI6PpwdAJWyUua6xSNTz4lrA+XBcUdzntJnUtQu8PV3pZK7m\\n\",\n       \"tavq6zFHJagIH8E1oJQNW5vdF8H5Yi9Umh83hisVqSRJkiRJko4s6aw9vC8UIlaOnnRlZryZ6Lny\\n\",\n       \"pDIvusLdfFvFjPMqeYXuheOF4tXQTr6ie/Qc3Ln//vslNcpHrfKyVGhbW4j1qgAvGgUPhQuvvm3t\\n\",\n       \"FxQn2t9jpGq9YvoVFYHjiSqzPxIYd+wTy8QrY6mWKH6w63GV5lLi7IiJwibIruP32BgxUZGixvew\\n\",\n       \"+a4ZnihjKAnsz219qbHYNfJoJ+Zm2qlrrBHH75nb44L9do2vLIHdYi/sz9cc7EoqUkmSJEmSJB1Z\\n\",\n       \"0rfvKCw8F8Uzx4vzFcnHBS+Ru1XuUvGeiGvoqxpqX3T1RrlLb/vcHi8ERcqVKLaLUlaC9qS+0VJd\\n\",\n       \"4bsv3BvkfIkbwd5QA4gH8TXzIugflFWv4VPrRXo2Yl/e567MuBmlzF1eM2vc2B76ijGJTfH5mjVr\\n\",\n       \"JM1fzaAEipPbAMfpc0+kKKA2M6dj06y20BbOg1fU4EkpGn2x2BnJroB5pnTXpyzY1bgV/mHSFfs5\\n\",\n       
\"Xu4ZeOrBOBm3X1KRSpIkSZIk6ciSVqTI+Nh///1HPuduuu8quH73zt0q3tdSrGsjdb+bL/2OuAiP\\n\",\n       \"6SEGB8XJFRa+f8ghh1QdBzVn1q5dK6l9RshSoTaWyWPZsGPsjFfiPWqrB9NfePtkz3n9Kep91SqR\\n\",\n       \"9OdStf/FxKvEt4W+JHsPhaA2k5L9E9vhoHTheXtGKWOV15LKiErK9hibePgcdxT7w/45XzKHWY0B\\n\",\n       \"lX9cJl3Rvi8mXcON/gL6ibneFST6p1aR4vhd8dxVIL4Te8m19pIkSZIkSabMklakwFexHzfCvhbu\\n\",\n       \"upe6J46XynNfvN5SvEDJCyE2zZWrY489VlKjJKGgoIR4JXrqI3ktEeJFqC573HHHjWxvV6OrndBf\\n\",\n       \"XquHDBMym0oqAvExePt4W+6lto1rQG2oVU2wA6hVP3YFiOnpaqP0DX3LXFabzYUtRIoUNojHzdzA\\n\",\n       \"+1I9H4e5lxgSr+jOeUQ16lCZvb1QYXe1VQzGpVZd7orP1ajOPvZcWayF7bAf7HmxQE3viq8V2de9\\n\",\n       \"RCpSSZIkSZIkHdklFCnn4IMPltTUH+Kumrv9vp677yrgNeIt9J254l7junXrJDUxUl5Hyb1rFCr3\\n\",\n       \"hqkqy+fE9Fx//fWSpPPOO6+Pw180UH7axqzde++9kpp+9AwpvMdaRYf2xFtEqYK2leO96nYJlEpY\\n\",\n       \"TCWKmI9JZX6OG+OCcsM6lF75u0Qp285jhSLlqi3YJDZOPGCpDhUquVekZk5xFZe5gv9z/MR+oYrW\\n\",\n       \"KikoYuwHBW1aLPb+o7HXV2zTpLPtnHHbj7kPO+prnkhFKkmSJEmSpCNTU6SWLVs2mw1EFhPPK32F\\n\",\n       \"Zu4eUZr4nXs1eKPRWl9RjRVgu57NRDbUuOv8TAqPXSH2iPbAK+F8S8+F+R3n7WvIbdmyRZL0xS9+\\n\",\n       \"UVLTf9Th2nvvvSU17Rh53XfeeefI62LB+aBg9hWnQbuV4g+8rhPthnfP8dFvkQrC58TNsF3UgqjK\\n\",\n       \"82tf+1pJ0kc+8hFJ8+NfUHyXL18uqYl5YpwxDhgvmzdvHnn9uZ/7OUmNXaBURplV2Bn793XA8Hq9\\n\",\n       \"qjZQf0yafA0y1EPPZK2F47vvvvsk7TrZZj53+Bp8qNPeN9g0tskrNvHAAw+MfJ8xSVwltnfkkUdK\\n\",\n       \"amzv2muv3enxcq1gbGDjbSvJj4tnIBMPeumll0pqlDbsgPdHHHGEpKY9PKMXtdnPq7bWXF/4eKN/\\n\",\n       \"uQb7qiBtr6E+p/p4QSFlznBF2mPzsEfaDbtlDsaeiaWqVe9TkUqSJEmSJOnIYBoe0WAwGM7MzCz6\\n\",\n       \"fpMkSZIkSdoyMzOj4XC4YMpzKlJJkiRJkiQdmVqM1OWXXz6bocHq8jyn5Dk8sTg8xzz00EMlNc/h\\n\",\n       \"yejg+SjPNffbbz9J0tOe9jRJD91JLgbs5xOf+ISk+c9fyaaiUvvhhx8uaX4mCs+TqcJKrAjtwvP/\\n\",\n       \"U089VZL0p3/6p5LiDB1qb/jK176WYe358Uq7lzJ3usJ+3vKWt0hq4iSIpSG+wuMCyBrjeTmxWsQO\\n\",\n       \"8RycfkCVffnLXz6y3wiey9OvxHNs27ZNUn17sJ+3ve1tkuZnOzIuOH7+z/kRi8bxE3fD+PDtef8B\\n\",\n       
\"/UhsE9sljgPIgKJdb7311gXPa/369ZKk5zznOZKkN7/5zSPHSdwEx0e8ETWPPGaO/ua8XEWnH97w\\n\",\n       \"hjfMxj4w1jy+iqw5xhrHSiwOY5GxxPcYMx57s9hzS9f9RdmMUabpuPtrC/u5/PLLJTU2CGRoExtE\\n\",\n       \"xu8zn/lMSU1m6YYNGyQ1MUP0O3ZBXOFrXvOakf1OGvZz5ZVXSpJuv/12SY0dMbeRlcn5M2ffdddd\\n\",\n       \"kubH/BDSw/x5AAAgAElEQVRTxdzGWDn//PMlSVdccYWkJuuSMcecQk1A2umzn/2spOYaxDWaduMa\\n\",\n       \"xHggluuMM84YOc9JMXesS9Jf/dVfSWrOn2sdx8c1ljmO8+PaTBwo7XLbbbdJamKjmJvOPffcnR5X\\n\",\n       \"KlJJkiRJkiQdmZoihdoiNRWYUZy4y+au2dd1wrvgrhEvC4Wl64rifbFp0yZJcQbR1q1bJZUzTyK2\\n\",\n       \"b98uqVGkSrVi8MKj6sNtwXvBm4oUGPeWSuAF4S2Ar2xP5lS0VhzH43XGAO8crw/vsBavQH7HHXe0\\n\",\n       \"+r0TVbWOsh05f8+ia8uaNWskNV7vWWedNbJdxhc1j17wghdIarLzNm7cKKnx/vEWf+EXfkFS4wW7\\n\",\n       \"guRZo6gMeLcOmUu0k6snc9sPlfKGG26Q1HimZBPRdoCtsI/Pf/7zI+fC7xjLp59+uiRp1apVkrpX\\n\",\n       \"iC6B8lVb8bxENBctdh0gx7PaGPsoC8xtrgjy1AHb4XrCeTL38D3mIObOaeFzNWPB1V3qb5XwjGPP\\n\",\n       \"WOdaE0HmNZm6QHtjfxyP76+2Dhr9wSsZyRwvcz/j0a8pjE9XKr2umtclwz7YHvaOvdA+KLOcH0pW\\n\",\n       \"7fmlIpUkSZIkSdKRqSlSg8Fg1hvBo0Zx4K519erVkpq7SbwJvA4ULO6aURq6rp9D7AleEUpZ2/WE\\n\",\n       \"eE7NcbqXQDxG13WKvA7R0UcfLanxLiadiYn3HSlCUKtEAV5QtGYdz69LeG0ah365++67F/z/pKtj\\n\",\n       \"98W4dc1QeFFwGTf0G14Zdvqxj31MUhPDhFeJl0iMU+1ajw7j7YQTThj5nO0RB7Oz9eLoO44FUCY8\\n\",\n       \"nixSI/FQiRHxdSxhUmONvhm3kjpzGdsZdx3LvscGcz0wdukfFARX5rgm0E6+bihzLjEuKBBt5yQH\\n\",\n       \"9ZU6WG23R9wuYxf74Tw5XmKmUNVRhek/5khqsN1zzz2S5q9zWYJ29TmXfuZ4OE9XpGpXLeD3UXtx\\n\",\n       \"/lHtO/q5dj1T5iS2G43TaDzzdMuVuohUpJIkSZIkSToyNUXq0Y9+9LxYJ3+OiheJN0hshnsfviJ1\\n\",\n       \"V08dL4u7epSXtsoR3gTPX7mr5+4fLwLvhrttYpjwhv25Ld/jbhnwtkreMc+h+R5eal9xGH3hK8w7\\n\",\n       \"tNu4awpGsV2LrUT1dT4R7vUD54n3hr2hYuB1EycQxQtgV9gpqoKv8VeC+BG8Urxw2mVuXOVcFjq/\\n\",\n       \"SNGJ2sJh7qGNOCdeDznkEElNWzGGaINx1xckbs1tsa2tMFcyp6EIdI3p6nts+JzlsTMR/n+UBWyW\\n\",\n       \"TFpss6Sel2BsrFy5UlIzB7dVpFDafDUK8Ax2rh1cA70iviuMfq0qPf3g6Qb7Yb/Ev0ZPVQDFdlwY\\n\",\n       
\"8+w32p8rVp45zvn40ym/pnKvEPUD9lIbb52KVJIkSZIkSUempkjttddes94DXgP1gLjLxpt68MEH\\n\",\n       \"JcUriHP3yd1017gC7vo5jq5Zbni9HK97f6VMCs9eo52iu+fazAJX8saF45zUdoFYHbwJ+qmtgsN2\\n\",\n       \"6N9pZyxBdB4otagdXVWOSKlknOB1elZsbTwN/Y/3hzdH5pWvtUf/sl8+x7tk/2Tk+Nqbbu+oD1Kj\\n\",\n       \"FBBHxT6IrSjZKmPOY6MYw67eetv1Bcfrqh7KQa3tY+Oe3RQRxaiUvt9VqfJ4VupDoaQwB5eeMnj7\\n\",\n       \"ez/Rr7XZcA5zB+3ftXYeipLDWEeJ8bGJGst58L2SPRNHjCLFfhhjKEDEFPI58YrR9rnGMbYhWleT\\n\",\n       \"70eZvK58+bWF9nAFDPujP6I5i3GAkoyCWVrntVZxTEUqSZIkSZKkI1NTpHbffffZWBjPRPDMgLm/\\n\",\n       \"kRpvw1cUxxvlLrsteEccD3fHbb2uceMQqODMq5+P11l6uOFeEN4MXmcpK5NsUJ5vE9dC1hfKDl6O\\n\",\n       \"e1VLhWgctCVS3hg3eHMov3hpbVUMxp9768RgEQPlXj3jjd+jSjDO6acou5C4obmfeZ9iO6V4QGzH\\n\",\n       \"V08A5iCYVFwb23WlZVz1l7aN+tbPLwJlg7mJ9uL4aEcUtUixQ2UG4kdRJPzpQm38KzZMzBE20lWF\\n\",\n       \"RqHBvrz+VS3MZdg+9kg7+DUG5Wauje8Mt3ufQ/w9Yx7Fq20Wno8P1GG3L2IO6b9S3HEUB+3ZuJwv\\n\",\n       \"/YFdRtcIrgF9k4pUkiRJkiRJR6amSH3rW9+afU7J3SZ343hj3D1zF+veJHfTHkdQe/fu+HNfvKO2\\n\",\n       \"tVc8xsfh+T81KvB2ovpHeJF8z+tI1UJ79x3PUYu3a5RJ414T60y5N+kKJRDP4N5V37FcuwqRMopd\\n\",\n       \"+1p2beuw4Z0ybt2rxu4YF4zXqLaRe7N8n+1gP3j1c+3F4ybxUFEmXOXi/+wTBYBzGTf7zkHBqW1j\\n\",\n       \"339thi0VwHnF9plzUP98LijNXUA7R6opNudKgX/fFT2Ok1gmfu8KUCk+k3bjd8yZXTO6UV5Kq0iU\\n\",\n       \"iGJuvJYisUC0j2dqR3jskSt+ESU7Z7yUvhc9DfJ4yK64HbBdtzfqdXHtqIV5AruqHQ+pSCVJkiRJ\\n\",\n       \"knRkaorUt7/97dm7bjxN7i65++VuMIpD8GrDXiW2Lb7eT5R1V4vXuAAUM2JHSpkk7lWTvdQWtoNX\\n\",\n       \"MW4MTtt28fiHCPe6o/50bxS7ocpvFNOGAlOqVfNwBztAvfEV32vB691///0lNf0MHjeDGlOrrvj4\\n\",\n       \"IU4CRXOuyoCqzBjjPXME50o2GKomygyvxHq4BzxuZexxVdFaj57jJmsLFdzXFHT6HhMcbxRT5Ofj\\n\",\n       \"FcvBx3JJGcEGUaeZq7Zt21Zz2FOD8ye7b9zVCxiTjBnPiEepKz11oT05Hto/UqA8e682szyCcep2\\n\",\n       \"4Iob3+uqfHFt4PjJPC6RilSSJEmSJElHpqZISY136JW7uUvGmyR2g7thVyi4+yQbqG1FZSBrkLvs\\n\",\n       \"rkoUyg93x3j6eODc7XKcUSVvsvN4Ps5r1xgpPPiuMWRObfwI549XWqrBUutNRFWRo+PyjCWPn3ik\\n\",\n       
\"QSYN3qZnCNVW0cbbRa3xTCu8Xa/63baiPt6or8M2VwHjmBm7PlfwXcYc58aY5JXfe12lcSt7j7uK\\n\",\n       \"QG3MC8fJ+RIbxRzbdW6DqF6QwxznNf4iZY/j5pX28v2Ujp/9oLzw6vbAHMs1ZqnA8dLftB/nXZst\\n\",\n       \"yvc8hoj2rK0bxrXXsx/96Qj7of15j93VXjP8aQ52QNbdaaedJmm+AoVdcq3knqBU2d5jJZkbWX+3\\n\",\n       \"RCpSSZIkSZIkHZmaIjUYDObFUnAXy10k3gKxQXgp7lWgIPE8mEyVrqBQcBfd9XmrxyVwV++V2Dnu\\n\",\n       \"TZs2SWrOk/Pibpq7c7L3ukJsVlsvoSv0K+04buZGiSg70bMi8ZbGjXvZVcEOUWKJY0AJLq0QQDvj\\n\",\n       \"5fIe+6J9vT/aqjIeH+Hjf6794oHj0Xu8I0oUc4yvDu8xVahtHPPq1atbHXvf1MZ/ct5eef3ee++V\\n\",\n       \"VK9oRIpN7RjGtmhv2pcx508P+D/tTj/WrsEHKDm+Bhv9CktldQOHMdX2vB2vuRbNxSWFkePxNfE8\\n\",\n       \"Rorfu2LY9hrj8crgWZPYMbGAKGSsXlE7t6NIoZ6jfHFtRqGKSEUqSZIkSZKkI1NTpIbD4ezdMl4C\\n\",\n       \"z1/xRnwF7KiWBneP3FWyvdJdZIQrZW0VINb58ro6KCLE6nD37lmKfD9a6y+qN1VLbcXqcfHn5LXr\\n\",\n       \"Q7Xdvns7rkR53At4PaOHC5wXdrR27doFv4d3SX8wvvgdSjBeIeOQWCcy29gfr6gB2L9704xPVJ5S\\n\",\n       \"3BHb837i+FeuXDn7GcfMZxwrHiq2wVzj2+SYUJNdXavN4okYd33KWgUFBemOO+6Q1LTx9ddf32p/\\n\",\n       \"eORbtmxp9TuH46b9OH+PD2WuxRYZu9gAykgpRoq5FSVyUpXiJwVjpmu8L3AtO/LIIyU110jal3Zi\\n\",\n       \"TDLWuQb7cVCRnPb3rMJSv9T2X2TnnvHNPQR24+fB+UYwD0RzUG1m+8PrCpIkSZIkSbKITDVrj6qj\\n\",\n       \"eCXcTXIXyV0xz83xiKloDniRbA8vk+qmUJtpgsJVG8vhNVJK3hveAPshBmqpPq/vSimLrq/tlzjh\\n\",\n       \"hBMkSTfffPPI511XcO8bnut7fSSorWoMxxxzjKQmfuH4449f8Hsovnh1vHpVb9QAXw8MZfP2228f\\n\",\n       \"2S7j9eijj5Y03+t3hau0hiLKF+OX+YL2OvbYY2e/yxziMUCMUWIsfO00wAP91Kc+JalRvfDMOdZz\\n\",\n       \"zjlnwWMt4WuOlWJfXDEjZql2rkDVxpP3Nc6i1QEgakfOA6WvBDEsJ554oqRmDvcxTEyV1zVqW0+J\\n\",\n       \"48RmplUzbq5tdmHcOcprBqLA0D6MZRQnsjv9//weu2NuqW1X7I85pJQlGSlWrmDSv9gNr77fqF4W\\n\",\n       \"dsnajM6GDRskSWefffZOjzcVqSRJkiRJko5MVZFyTxjILDnwwAMlNV4hK1RH4PnefffdkqTnPe95\\n\",\n       \"kprqts961rMkNV4ad8XcdXP3z92wrw3G3TxeaZR5UoLfo1z5c1juknm+vXXrVkmNd1hb+6MrZD1y\\n\",\n       \"vn6Xv9jUKokO8R3j1v7pCt4TXiH24s/t+T9qg9ch49WVWAclyGO/8BqJ92F7ZImWxpXXisEOI1Xk\\n\",\n       
\"4x//uKRGkeL3fJ/+qF23jPPgODkfXqnSfdppp816lowtfuvxW3jc1ImhD4g/ZC7wmm9eoZlz41g4\\n\",\n       \"N89kRH2jbxlb7tEztjkuzzLj+Olr3nP8vkpCaY04YsmijEjmwBUrVkiaXymdufr++++X1LQbts75\\n\",\n       \"cHzMzdgy54dShcqIrXLcXmuwBP3OeXEeHmOzfv16SY0SQ/8w1zD3oWCi3HFcbJ9rDFmdHgcIXlOv\\n\",\n       \"6+oStbF2KHlc6+hfrrmc95o1ayQ1qxps37595Pecp69egB3CIYccImn+mKdd2c7mzZsXPH76DTvg\\n\",\n       \"d7Sjj4fSNYHtR5XbmQPHJRWpJEmSJEmSjgwmXdNnwZ0OBsOZmZlF32+SJEmSJElbZmZmNBwOBwv9\\n\",\n       \"LxWpJEmSJEmSjkwtRmoxFCn2sVjq11LdH5kjxH945g7xG6iTvh4Xn//+7/++JOltb3ubpPqsRl85\\n\",\n       \"3PF1lYhLufDCCyVJl1xyiaQ4e43aQb5eEpQyk8Dbk+Pg+Dm+KLssgvbleT3xMb/2a782sr9Jw37e\\n\",\n       \"8pa3SJqfGUO8A/Ee2AtxDvyf99H6VcSGnX/++ZKkP/mTP5HUxEd4JhjtQfwN2/V6aR6r6LF7MzMz\\n\",\n       \"s+cYZen46u7EqBBrROwPteCiODD28973vldSOX5tXNjf29/+dklN35WyuqIq/w5jhDZ+3eteJ0m6\\n\",\n       \"/PLLJTWxUJOC82NuIUaMWCBswW3O41M9I5QYMI/1edGLXjSy30kzrWsD7Un9LmLWWK8SzjzzTEnS\\n\",\n       \"8uXLJUk33nijpMauiY1i7DGu6I9XvepVI/udNLXt+bM/+7OSmtivT3/605KauZiMYLJPiX3zelel\\n\",\n       \"/aQilSRJkiRJ0pGpZu0tNtQ+ISMED3vVqlWSGm/mr//6ryU1Cgd36e614d1eeeWVEz/2ceA8o7pL\\n\",\n       \"ZFrceuutVdtru1ZaqQaMe9VtV3r3TCqH/mp73K7QoV7g1dVmA+K9kZHSdS3I008/XVL76tRO1J5k\\n\",\n       \"Wt12222S5vdDqfYLeCZalDEDKFSR6gDYyc7qh/laYLQ1n6NwcIyeGYnHDai39LXXnUL5mrQiBbVZ\\n\",\n       \"a1Bbh4ix4QrPpJWo6DhqK6lH54cNkU3IHORZXw93aE8yXqMsv2uuuUZSM5eiWKHIMo4Yq6Vai4wL\\n\",\n       \"lF/G31FHHTXyPZQfFDPmCsYn/cdchFJcC7UDsQcy9n29W89qLV1znFSkkiRJkiRJOrJLKlIeU1ML\\n\",\n       \"tUB4RVFCaeC5OnESxE143SgUnrYVp4G7cBQGaqvgNfSdSYlXgfKEd038AN5GrSLFdvBK+gYvBrxy\\n\",\n       \"vIP3g5pALR2e69O/tSuB0+94W6gY1Hnad999JTXek8dkUXMHO8O7wRtm+7WgruDN8Tyfdnruc58r\\n\",\n       \"SXrXu94lafxK8hxn21iwElHcEkRKFGAHO6udg23T9oxV2pC2wSP1PgK+h02xHbehUmXkcSnZfom2\\n\",\n       \"nvWk1p7D5ksV2X39TJ4KtK23RP+yX/q5tPbawxXatRQvytqM2J1XBqc9Syp/VEGc7THH8HTI66Ax\\n\",\n       \"7rBfj6ushe1Si5F2YA5vWzE/IhWpJEmSJEmSjuySilRfa6R5hXDiIfBauCsndoPIf7wjvN+2lcZ9\\n\",\n       
\"5WzPqGj7HLgE1Vt9ZexS1eOIWmWnK67IebVmB++H33lF/LZxHvQv2/MK9njt7u2jgOF1sX+PsUL5\\n\",\n       \"rAVliNgo2uOUU04Z2W6tEoU644oT3ir/x/sstX8tXdc7w2tlvO0s5oo2x+NE5eI3KEusw8lYZ4x7\\n\",\n       \"ZXP6mFeOBSYdG9VWUQLagawkz9JatmyZpPlqqis/tYoQ6jZjj7HC+qf0vStSvpag23C0X54mRDaF\\n\",\n       \"rXhMWdtVKPqmq8I2Lm2vmdgdShSZy7R322sA7c41j/NHaXKlkLmXCunMTV0VJJ9DiPfs61qWilSS\\n\",\n       \"JEmSJElHlrQihQKE1zKuUkNsD4oA6/lwt8tzdJ7PclfMXbOvm0UWoK9FVgsxRnjL7LfkbbUFL9C9\\n\",\n       \"wXvuuafT9toqWONSq7TghdOOXWN8fM06Xomdo799nTW8HFQKjgevDu+461qJ2CFKEetV3XLLLa22\\n\",\n       \"E7UL8Q54r9STOuKIIyQ15+UqRi0odW2zJ6kXhbpCOy9kF3icZB+5ooQnjELj58i+6CNUasaOx++h\\n\",\n       \"uEyKrjE9tAOKnNfFYbulNdv8fCOI7zzmmGMkNe2JGh7NGaWx7ccN1I5jbKEk+vY8tmbaitRiK1EQ\\n\",\n       \"tWMJroG0NxAvWsrk9bptfJ/jQRFCBUdx4prYV8weCjSKKXPxpz71qZH9diUVqSRJkiRJko4saUWK\\n\",\n       \"u1LuIlGAuHusralCVpqvVM3dLnfFeJ98jvfiFb69npQfR+3K3Hi5bBclhXiD6667rub0qonqH+Hp\\n\",\n       \"9x2btdjg/QDt2bbuE/EA9KN773hjqBkoRMRi4XWxP7xQttO1lg3e9/HHHy+p6S9UB167xtUAChte\\n\",\n       \"Pt5iKauuhGdBRuPD/8/4oP1qFMqS6oUazCvxbV59njmINnUbmrTC4NvHs6/N7GWuZM7ysYBKv23b\\n\",\n       \"tgV/X8qyA9rpgx/8YNX3a4ls2ZW1yCZcMelL5XewWdp1GmvY7ozoWlWCMc/TGp7G0O6RIoXixJyF\\n\",\n       \"MogyibrN3MLcvX37dkmNnXPtHjfbkkrtHFdptY22pCKVJEmSJEnSkSWtSKEUkTVEdhuxKhs3bqza\\n\",\n       \"DoqBV1n1uAn2xysxMB4f4d6QZxHVeiPubXJXTkZN33fNHu+AMoJXsKuDV4r3Q7/VKlGAd0k/unKJ\\n\",\n       \"V4vCSQwf9oDXhjdHP+MNda3zRLVnvDns+sgjj5TUeH9U8x0XqgD3TUmp5XyIu0EdQglsg681Rx/R\\n\",\n       \"dvQp+6Jv8LT7yhDuC+Yub0PaKKq3g0LncyAqP+rmtGJ42oIt1MZwQdd6RJ49xhhnzH3xi1+UVB/T\\n\",\n       \"45X2a5W/ttBOHCc16Npy++23S5LOOOMMSU1WbFRLkfhj5lBiD10RY5xFcw1269mdXeHa3VcmMox1\\n\",\n       \"IzUYDL4s6T8l/VjSD4fD4fGDweBJkj4k6SBJX5b0i8PhcOdrRCRJkiRJkuyCjKtIDSWdNhwO5wZP\\n\",\n       \"vF7SJ4fD4dsHg8H/3fH+9WPtZMddLcoPd/O1K5v7c3avB8R28TZQgvBa3SvleygT/ty9pIB4BgXb\\n\",\n       \"I7MA+lKiorpBtFvXLKylBv2EvRDj0xZUCV7xeulX+hs7wssjxs6rAXM82G3XrD3UFLxqFLFxY6KW\\n\",\n       
\"GqgjtDvtyPm2yWp1xQLFhbHvFZyjDNelAsqIry/ZNruJ88OWfM3BaRFVkHdqn0YQY4Ny0lV9J46U\\n\",\n       \"scZTA5SStvbCXIJ9Yn+0v8/VfI/XtpnMZN2hAJXWv3RQcOgXxqTHe/rcx3hrqxwC/dX197XwlAs7\\n\",\n       \"aRvj1ode5mf4PEnv3/H3+yX9fA/7SJIkSZIkWXL0oUhdOxgMfizpz4bD4V9I2mc4HH5jx/+/IWmf\\n\",\n       \"Mfcxe1fLK3e9teB1EUeAZ8/dOl4F79mPP7elOirKB7/rmqHB82ueJ+M99V0tOaobhPdHu7TN6Fhq\\n\",\n       \"eHbXuODl0994gVHldV7pTxRNvED6oWscCkqp18tiPIy7xt5SAaXJY6JQ/NpkXtHWvsaY15cqUVon\\n\",\n       \"cLGgz33tulIsl8ft8YqqijI3bUWq7dxeYtxMU2Ds0u68bxtzxdzkawrSP/Snz9mM/bbnw/GxH9Rc\\n\",\n       \"5rC2WYxkd6Koud1Fc28phs/x2n20lyuxfcHc0pVxb6ROGg6HXx8MBj8j6ZODwWDr3H8Oh8PhYDBY\\n\",\n       \"WnmgSZIkSZIklbA8V8RYN1LD4fDrO17/dTAYfFTS8ZK+MRgMnjIcDv95MBg8VdLCYf0twKtsWxEZ\\n\",\n       \"uGvm+SexR14Bm8wCvFT38LlrRTFiu229VPeCOR5ibajZMWl47jyp7KzFBm/Fn8u3VQw9JsrjElCq\\n\",\n       \"eI8C5XEE/M7jJ7oqUihbeGeeLdg1JmypgTdKXAfeeJdxgXqH2outoy7zytimT5kLYKnUBXIbrIVY\\n\",\n       \"FuYeVo3wytPTzlLsex1Pj1ftqijSbow56h0xd3fN1AZ+jzLoREqUx1369rlmuhLUVvlj+yiWXifK\\n\",\n       \"98ecxDUWu6W9SnbG8dHeKIHTigc9/fTTZ2tRLUTnGKnBYPC4wWDw+B1//4SkMyVtlvRxSb+y42u/\\n\",\n       \"IunKrvtIkiRJkiRZyoyjSO0j6aM7PJndJP2/4XB4zWAwuFXS3w4Gg5dqR/mDcQ+yqxIFUSYOVVZr\\n\",\n       \"Y0v8rn5cJYeYJOpjcZe+WLVcSusk7arQrmQ4ta0ZgtfKdvCG8G7JJOF7HjtV6r+uNVFQScju9MyZ\\n\",\n       \"h4uy6DGIKFRd5gH6zF/xeLER+ppK586kKmI7rjaixsGhhx4qqfH0PaaJOc7XAKTt2B5KFAoBv5t2\\n\",\n       \"jFRUJ6srqJqo1ShxbSGz2WsJto1LZMxiT8wZPtfU4pm8Eb5aB9TOjVyjeEXB9Ww6jyelPzm/WkWJ\\n\",\n       \"82FNTOy1L7vom843UsPh8H5Jaxf4/JuSzhjnoJIkSZIkSXYFlnRl864QD1HCK1iXPF7uxomJQQno\\n\",\n       \"mkmAd4yXiHewdevW8DdJzGGHHSapUW7w7qjCS/XqEsQjYBduT/Q7sXXEzNV6p10VKY4H74z99h1X\\n\",\n       \"AniTZHf2VdesFmIGa73uhcAD9jYiniyquDwtvIYZqjlg09gybeOvxH6hQKBkeYVzVGn2E8XodKVt\\n\",\n       \"nOLJJ58sqZkbsTkUkFrVFdV29erVkpr+7lIdf+7xRHCeRx11lCRp06ZNC34vitGKahKWqB0TKI4e\\n\",\n       \"Z1kLcwDtwHiJYp04T9R53tfaAUox18Zxn0pNmlxrL0mSJEmSpCNTVaTwlrj79LXuqCbLXTTeJTU3\\n\",\n       
\"uHvnLhlP/cQTTxzZD94ZK7x7HAJKEHftXlNixYoVkpr4Ae6y8f78uXNb8C67KhWcl3vVtTVH8Da8\\n\",\n       \"6jNxGIvtDXi13BK0P/1x8MEHS2r6q1aJgnPOOUdSY5eoAF7zhXa7//77JTXtRgwV78kWxV6J2+gK\\n\",\n       \"3i7eaF/rUDmuCiwWvh4cqlEX5Y0+jFStcZUoFI7Fije89dZbJc2PbWEMeH0pbBaVG8XpwAMPlDQ/\\n\",\n       \"m6+vrCjmDiqA33nnnVW/ox199Yrly5dLapQmjhPbQKniqQHfx4b5PirnCSecIKlRsblGMGaJZULJ\\n\",\n       \"uueee3Z63CtXrpQ0/xrjClOk0DEn0F/RGnZArBdzXCnel/bsutYg10T240qpw5yLvbXNBqVfPXvW\\n\",\n       \"4emQK6rMzYwL2onxznZpd/of++N97XhIRSpJkiRJkqQjg2nURxkMBsOZmZlF32+SJEmSJElbZmZm\\n\",\n       \"NBwOF1z0LxWpJEmSJEmSjkwtRmoxFCn28dGPflRSE1tCtt5JJ50kSdq8ebOk5rn1M57xDEnNc31q\\n\",\n       \"iBAPQQYCsUPEZp1//vkj+yU+g+e1PJ+OanewrlBp3R+ex//iLz5UoouV0DkO4gB4DsxzfuInjj/+\\n\",\n       \"+JH98EpcAPEkfJ9Yqxe84AWSpGuvvVZS0z4e00I7Et9A3ADP73kO7Stt88p+X/7yl0uavK1w3L/3\\n\",\n       \"e7+3KPsD9uP7o914nr9q1SpJ0n777SdJuvrqq0e+T9wI/eDxKMQJXHDBBZKkyy67TFJjn8TG0R9k\\n\",\n       \"I0ZxDR4Dhr3Qz8QyvfSlL13w/CbFzMzM2Psi5gJbjMZq1HeTgv1cddVVkpq+JlaGmJy/+7u/k9TM\\n\",\n       \"cS972cskNbFC1113naQmZoTfMfaZ6/hddH7YlNsIY6mU5QYc5xvf+EZJ0hVXXCGpiXliP/QLtsUc\\n\",\n       \"wdxK3KLbLjbK75iDzjvvPEnSpZdeKinOvGY/vkZeKXYH6KcLL7xQUnt7Yf/EnJVik6DWPrED2rF0\\n\",\n       \"Xm43tDPn9xd/8ReSmhgpMp6ZyzzO0a8ZxCR57bjDDz9cUhPrx3lxvGT3EdvE+4suukhSMye+8pWv\\n\",\n       \"lNTExvE5+ye7lbmWa77XZXNSkUqSJEmSJOnIw7KOlOMZEHgpZMB4JsPnP/95SfPX9ooqVkeeO3fF\\n\",\n       \"vs5QRKRE4dXglXlmBooRXtWGDRskNd4e50sGycc//vEF94PSxd06dYpQ8lCk8DZr163Cq8Cr+spX\\n\",\n       \"vjJyXA51nxaLWu95UtAutIdnkuGdUaMGr5R+p79oV19p3e2TfmNctM2o8ZXp6V+8Ra92vJSgbVDP\\n\",\n       \"OGbGytq1D9UYRsVtWxV/0pAFR2YqSgyZo0Cf4knjcVO5/cgjj5TUZEaTvVR7vpHNtB1LPgcwh3m2\\n\",\n       \"m6+D2helzOZoVQzwseaMG4NMdiHHWatI1XL00UdLapSikiJVyuD2NfJQryO74Jrm9cGwL36PfXqN\\n\",\n       \"RZ6+cG0lu5Jrr19z//7v/17S/PVJyfZ85jOfOfI7nhKVSEUqSZIkSZKkI0takcL74rl9V6L6M5EX\\n\",\n       \"0Xa1ebxbh7tqXnke3NarwmukSu+2bdtG/v+5z31O0nylqu26RBzXli1bJMUV1r0iO3fvPD93ryWK\\n\",\n       
\"PxBeDWgAACAASURBVPCaKvx+GpmkbSAWDa/LvR6v5VMC+8C7dW8fr4lYOLwk4gFQiPDOSrViJlUJ\\n\",\n       \"HftbyooUtoWiA8w11IxjDCw1vF4U7xm7fI5N3HDDDQtuhz7yGnluO8Tnedydr2HXFff4161bJ0m6\\n\",\n       \"++67JTX95YpErYrqqrxTW8ndt0PsDzFqt912W9V2IqL2jPqvLxiz1N1CwWQ88HSAa0wJroXYH3YV\\n\",\n       \"XfOiSvXE7JUULa+pd8QRR0hq+gnFi2t5pOihznPt8npTHEdEKlJJkiRJkiQdWdKKFNV38aDxvtpW\\n\",\n       \"qm67VldtFV6IYp+8Sq5nlJA1xWtUPRdFjrt7zgfIuOCuv2v1WrzD7du3S2q8Fa+cjfdEzJh7DcR6\\n\",\n       \"RcoSygteMdshW22pK1J4K3g73v9RLF0EdhLZEfEixPThNWE3/D6K9/DK+3jTperJbaE/USqXAigO\\n\",\n       \"jA1UvLPPPltSM8fw+Sc+8QlJ89XoviqYt117ziHGieNgLN53332S5sevRRATgu0QNxcpB14hmjmo\\n\",\n       \"pPg4ntXnMVkoFD4W+D7787nDM575HnNjNDZKqjHnjX1wLcKuSk8tUAgjiHHzOYN4SOZij5tECWOu\\n\",\n       \"j+aOkjrOtZT2I052zZo1ktqv++rr1vpaeyVoV88Mjn7PtYS5jO/R/7UxZdgTChd2Q8xkKlJJkiRJ\\n\",\n       \"kiQTYkkrUnhftTU7IqK18Np6UxHRit18zt1ulO1EvAa1K66//voFt4didcghh4x8TgYOXhvKRVvw\\n\",\n       \"5vDS8SY89gov5d5775XUeJl8r+Rtc5x837MAx127cNKgMPp6Xl4Xq5ZS7R0UJOJV8CLx3rHjCI9Z\\n\",\n       \"m1T7em2fpYArLJ5BSwwIaiiKi9PXWnrYBipvtDZaBL/DVphjapUowOaYm7Ahj1lydR41vWucndu4\\n\",\n       \"90+kgLBfVxjBM55LKm0tKB5sn+OvtYfSuqHR8RGHedZZZ0lq+utDH/qQpEaJKtkP+0c9J44SmOOJ\\n\",\n       \"gfI17qJMchRah37CLts+HaFfGZ/YZ6QsUfeJfuGV8VC7lia/w26YM2vnslSkkiRJkiRJOrKkFSnq\\n\",\n       \"GI1L9Jya58zcfUfKUolISSjVIMGrIzOjNhPFMzvYTskr5Tk/x+PfR/Gitgheqj9/p14RtK0dU4oh\\n\",\n       \"alvXaLHxrDS8xa6U2g8vybPy8M5LCphnJk0qaw/vbynHuJEB+Z73vEdSo0CtX79eUuPBE0MUZRWN\\n\",\n       \"C0pCqQ6R46vau5oeVRx3GNOoncyRxOQsFh7vCVEs2bh1pEoKUe1x1FKKkYqgFmA0V5ZqEoIrTE6U\\n\",\n       \"HVeqF8Vx+dzFdnil3WqVIaCfaf9ozmL7XNP8uGozp4Hvc630+OCIVKSSJEmSJEk6sqQVqa64QhDd\\n\",\n       \"DfP815WZ6Dm8w91q5HXgbU1agYmyGD32hniQKK4CL4JYLbIF3evtK7YsIqrtMun91kIsVN9ZbxEo\\n\",\n       \"kF1r9rjXiL3WZtK0peTNLgVcaUJlPeOMMyQ1cYiTUqSA7ChWUygpH8wVXqcHahUp4ihZp5Hq+JNW\\n\",\n       \"EzlelCGPQeG9x5W2VaLIaGUORNEgVqgW5njmRI8xKtF1rhpX5a6FpzKsv4odfvGLX5TUtJfHKDFH\\n\",\n       
\"R3ML/Uw/RtfgaE732npRLBj7Yb8cL9cyYtmI6eKV+F6H8U67RDGTTipSSZIkSZIkHVmSilT0XJq7\\n\",\n       \"Se5ia7P5otih6PlprfeDVxV5D20zcmrhuTZwHnzumT1AteYoJgfvlN97TQ/A68XLaBvn0ZWlUjG7\\n\",\n       \"1uuPoL2mBf3ZtgJ7LbVe3FJm0rZGnSCy5Gizkk15rAhzD7+nb/kefeyKAHMAlayZI/uKS41AeaB9\\n\",\n       \"PYPU6xB5dqPPta5kuO15peq2sTo+V0arN0R0zfasnUvJ7usa30sFea/RyDWQWEHqfXlMko8T3vNa\\n\",\n       \"smdikVz55drj9bu8/zh/7B5F058qca9Qqivl9lN7DU9FKkmSJEmSpCNTU6Qe+9jHhrFD0fNXFJdS\\n\",\n       \"rIjfRUaR974ulcPdcLQiOl5J5D30vVI5Shz1i4C7aOICqE3iXlOpThFexC233CKped5M9V2gPd1b\\n\",\n       \"dG+E4y15Je5F+P5gqcTecL4l+yj9vi21sXuOe/2uXrRVpEpe+aSyAicJFZ1p40nFj7GGGTXBqEhe\\n\",\n       \"q27S9ygzeOi8Z+zQB9HcxPfdpmqPozb+00GNZe5w28QWfU5BCcFmmbOYY/gdyozXC4O2Y5W5lOPw\\n\",\n       \"+E2Op+1Tj74Yd6xxLSUmilgw+pdYIep7+Zj3p0a0N+ddirmLjp92pR+jdsTeeeUahz1zbUGRio6H\\n\",\n       \"/mWlAOY49lvKvkxFKkmSJEmSpCNTU6Se+MQnhsoSd5VefZfPuVuNPGKUGX+Pt8LvPdbFFZuoxonf\\n\",\n       \"LXs1YOCuFu+L4/Xn+vzf4xscfudeIO3kK2+TVYbXyl01lcu3bds2sn+e5/vzdn8uzXFyHK5QtfVu\\n\",\n       \"Pcaqa+2VxYJ+6Br/UFubxMGeUGZrM8pcwcIO29b/8t9H1Na4WUqgLhKHydjoGxQRxlBbhQQPGxvk\\n\",\n       \"lTmIdQ7pc7IR3aOnbtaJJ54oqclSrO27ruowyh/78TnWbdUVBL5PO9BfKEfMeVFF6q5ZiYxZlA/W\\n\",\n       \"2Cupw34t6pu+4nDJYF+3bp2kxl7op9r+dgWpFC8ZxayxWgcxW8Qj+/c9Zs4VSWKoIFL1sSvmAWop\\n\",\n       \"Mq5K2ZepSCVJkiRJknRkaorUsmXL5lUf9efMPK/kuS3Pw8k0iPDIfOISqB3BfvAWuOukqi9339zV\\n\",\n       \"clfN/lkbDy+I9w6KF3fF3NV63SevdVLijjvukNSsw8Tx4t2yHVc+aGf3trmL564cr7a0sjnnV6o+\\n\",\n       \"i3cY/R8vg/93rZe02HT1brtmN6LceRYlCiPt56qCe63EC0wqDmhXAnXPs8KoodY39D1zQaR6Rxx0\\n\",\n       \"0EGSmr5jTLMd99ijzEyUG7L0PCYEfOx2XSMQmGuYW1wxQGGK1F7OG/UdUL+Zo2vX2KuN9WLMEJsF\\n\",\n       \"pd+R9bar8I//+I+Smmtm2xg47NPrbdFvvj23I/qP7XBtQSm66aabRr6PwoT9Yk+uiPk13O81+D7j\\n\",\n       \"AiXXFd+IVKSSJEmSJEk6MjVFatu2bcXn8ShL3J3iPdXWjwJXDrgLxuvk+as/B0YZ4fv831dEj2rO\\n\",\n       \"cBfOaxT70zbewL1GvKTbbrut1Xb4HV45Xmcp9gZFrTbGpqTAeExW2/6dFnhDeO20J5lYfVdgR4mi\\n\",\n       
\"PfGSiG9hzUbHvb5JVYZnHE27TlYNxLqsWLFCUqPmlurMdIUM2GOPPVZS00Zt49RQ1RmD9CXqMGOn\\n\",\n       \"NLeyribfR4FAlX/Ws54lab6KPG5MDtlf4MpAKe4QRWPz5s2SGmWKubxtRmvt+fC9tmp527X9lhoc\\n\",\n       \"P8odim0pG5S5CbvEvvgdNQ29/YmNOvTQQyU1cx7j1SuSo3x5LCB2gELFtbeksHGPwT0DWYvPec5z\\n\",\n       \"dvq7VKSSJEmSJEk6MjVFas8995y9yyRWyesO4elzV8xdptfx4f9f+tKXRr4HkdeBN8Nzf56Poizg\\n\",\n       \"7blC43V0arOnXHkitgVFiLtrzgcly+MePBOEdiQugN/hRXB+PK/nbpt4BLw5jo/Pwb3Stl70pNfv\\n\",\n       \"mhbevtjZpM4Xbxi7xw7x7qYN573YayF2yYzCM0WJQuFpW9GceEIUIDxgPmdskgV0wgknSGoUIb7P\\n\",\n       \"9/CAaUOPY/vMZz7T6vhKoEjtKiowcI1gzuxas+/MM88ceY+iwfZRyLyOEnM21yiUFs8mxA6mBfXK\\n\",\n       \"AIWIay2xalzDmNu5JqC6sx3OG3tBCQLil3nKQL+wHeYItsN7zw5kLcgHHnhAUjNO/FpLvC/H7Zn5\\n\",\n       \"XqEdBZQsReZs5nLGIf1XWwk/FakkSZIkSZKODKahFgwGg+HMzMyi7zdJkiRJkqQtMzMzGg6HC0rW\\n\",\n       \"qUglSZIkSZJ0ZGoxUjMzM7N1oqK14bpC7Mgb3vCG2X11wdd5KsF+PvjBD0qSnv70p0tq4i94Ps3z\\n\",\n       \"dp7v8/yX58bEb7B/zofnvWRAnHLKKSP7nRRkHL361a9elP0B+5n0/ogXeOMb3yhJeve73y2peW5P\\n\",\n       \"Jolna4JnpETVqokTIIPkd37ndyRJb33rWyU1dsbzfPbvGTJkxFC/jGxN4mo4H2L9iIOgHa+55hpJ\\n\",\n       \"0u233y5p/rpYp59+uqTG7q6++uqR/a9du1ZSM159xXjsl/ZcTHu5+OKLJTWxEJ69Q+yDZ+fwf7Lp\\n\",\n       \"iAsk9oI4LL7/spe9bHafO4N4RzIrIxuqObe5r8R0eM07bIy+95gS5iC+H8WzsZ+3v/3tkpo+jeJN\\n\",\n       \"X/ziF0uSXvnKV0qSfvd3f1eS9PnPf15Skx1JNhbHtXHjxpH9vfOd75TUtDsxK8QFbtq0aeS4yaai\\n\",\n       \"36677jpJzZihHRhLxLws1twCvr81a9ZIamKwovhH2p05g+97FhrxvWS4v+QlL5EkffjDHx45Dr5P\\n\",\n       \"DBgxSOyXazLZcXyfdsTuyG5lPP36r//6yPlNCr8WYZ+lWCbGxUknnSSpmRPJsKZWJfcinhlfOq9U\\n\",\n       \"pJIkSZIkSToyNUVKmlwF676Ura7xYyhJKAMoTWR3oURxt89dPl4T3gVex1FHHSWpyQCZ9PpNzqRq\\n\",\n       \"6/QNNUzaZhX69/FmUTNKKkJtFWy8RVesyN7EO8JLjNYLw57wHl1V4H1UmR7vM/LiyCgjqxW7w+vH\\n\",\n       \"K0S14XjHXYk+Wk+rDb76O5794YcfPvJ/jhn4Pp4onj9tjefdNiORc1m9erWkpg1RViBa3T4C23Bb\\n\",\n       \"Yu28fffdV5L0t3/7tyP/R23E896wYcPI/6N1IEv1llDhn//850tq5i5slPpZzI1kTTkoiZ415xXh\\n\",\n       
\"gUrcEaXVGaYFq1M4KKVd8X7i/Et1xbCjSE3nmuS0tdtx8WtR7VzB+OY83C5ot6510lKRSpIkSZIk\\n\",\n       \"6chUFalpE61DNS54W/fff//I9vGqvLpqtOYZv8OLxqvAS05G8ZXgS0RVh/H2264zFYFShlLq1Ztd\\n\",\n       \"eUKhXLlypSTprrvuktTYDzVSovMkDgY7ce/fa784XlvGV3QnnojjHFeJguXLl0tqVBvOF+U6UjHm\\n\",\n       \"4h4qCovX+3Giz2lL1Omu6yTSh8ccc4ykJr6NzyOFIiJSiamrQztw/ihz/A7V0XGPvK06SCwJyhHK\\n\",\n       \"2GGHHSap6cNovdRIEUDdH5eulcapN4XtM4Zc0avliCOOkDS/0jtqL9emtqtI+JzlSmpX++0LxhP1\\n\",\n       \"0lDha5/++FqHzHG164Z6ewOKMbFUV111VdX2IBWpJEmSJEmSjjwiFSm8M+76UQr4HI+4a2wQXg8Z\\n\",\n       \"JzyXbRtzRXwB3jReBTE1beF48DL7UlyWCq5IEQvE83GPE4i87VI8AXEeKE2luBm8ySj+AS+I/iE2\\n\",\n       \"ifd4YWQ8YV94l8TM4ZVxfChGDmoI8SjYAcoPx+vqAN9jnERKLsfdFqoSn3baaSOff+ADH+i0Pak5\\n\",\n       \"Zm+bWpgbaJtaz9dBcSI2i7izGpWtDdg+NkUf0g7YoKuI2NS4a8MRT4itkoGK0sLqE77GHkRjKYrd\\n\",\n       \"aUvX+DviWplToliyWlCaHLLHUL7YTxSHybWB/vQK72SL9qVItc1kd4h98zXzavEK/F3Ho0MmNOOF\\n\",\n       \"OTeKDXNSkUqSJEmSJOnII1KRwsv0GBPu2vG4u4KSxHNg9sfzYc/YicA7JD6A36OEtMXPl+3hJUTg\\n\",\n       \"VS51UJJoL2qG8Bw+Uprw/sDjEVAo8aJRMvFa8PZRMNuu+4WXRf+gVnhsFIob+8frR8lhO56h4uBV\\n\",\n       \"0u+cB/vn/GkXYrpQOTgOj/XCLqNswxJ43ShQKK99eJ20DTZfS0mdrMUzbr3+E3FvkVID9BV9hO2h\\n\",\n       \"tKCY+BzBXMT5uxJGLTRX7FzxqIXjYw7Ddtg+tsvYLMFx0B+LvSIH6is128aN2fKxA74WXmku8X5x\\n\",\n       \"xYk4x0gBa8u0103tuqZiCX9aUBtnC6lIJUmSJEmSdOQRqUhF4EWNW3uEu3a8Ra82TCZLyUvAO+U5\\n\",\n       \"OTEveNcoX12prYnT1ot3fEX7SdUPQ5kBlBtqCBFn4fsv1Q7By6Mf6Ve8VPqlq7fkKgRepMdzsD+U\\n\",\n       \"G5SyKPYqimEiY4p4Heze94dXjD3zPeIb3P48u7Qr/J5XFOJx6sPRxm1rjPWFnwsKC+dUazvEn2HL\\n\",\n       \"fj6ujgIxIChf/J52iWKHuipSZLVho8Tq8L5W0WHO5BXlDLWV84jiPTl+lLGuGc8+1j2bs237RNlm\\n\",\n       \"zGH0SymGzseG28O4damWGszpfYP6f/TRR3f6fSpSSZIkSZIkHVkSitSk6jm1xevXdAUviQwWlBhi\\n\",\n       \"S2qfM/OcFgWAuAe8r3EVqVpKMVQlUD66xnZFoNS5okSNFuJS8Po84wNqvUlXDfgd+0EtaPt83e2e\\n\",\n       \"OBe8Sa/jxGspdihqH1d2ovOPMqX4PmsQeszUuPbioCpE/VeDZ1pOC9rO1zesBUUEdRR1m+1FCgax\\n\",\n       
\"PU972tMkNXMSryhI9KUfb1fw9FGUqI2GSu11fYghQ7HieOg3nhbUZjuyPd9uW/gd7cXYQunrqkjR\\n\",\n       \"b2wP5bL2/Jh7+J1nXdZmF9LOnrHL72uzHT3Gq+21FMUxylIcN6s0grmOeYL9RLFsTipSSZIkSZIk\\n\",\n       \"HVkSitS0lSjoa40+vBYUA+ISUJJq98N2qGHiygfgnbaN0fF6RBG1tTRK9BWf4hlAvKLkkGVIZXn6\\n\",\n       \"oTYmzL1PYD94aXiVURYoEB+CohLFctEP9AveLjFTteC9Yy8eL4OSSU0fj4Er2QPHz3Zd5WlbqymC\\n\",\n       \"rEGOdxxFKlpVYLHBVsZZT1Car7BhQ1GdH2yWMYiyRSwO22FsQV9ZWtu3bx/ZL7E9bjvYLN/jKQHH\\n\",\n       \"33YuQllBJfXzqc2WRAlj7uW1rT25kkV/0A4oQ9hr6bg4fo+5g9oMdJQXYoQ4LhTFWjw+2HH1Gntl\\n\",\n       \"PVnGOufvqrifD3Yy7rj2emu0ZypSSZIkSZIkE2ZJKFLTgrthFIi+FCn3yH1lear81nrYK1askNRU\\n\",\n       \"pEZ5gVKlafBK7rWK1Lh1tfomUt5cGeJ7tDdej9fxIm4DIuUKb4vt8j7KQiQeBW+YOA2PW3Dvh/3z\\n\",\n       \"eW1MFODd1sYDeSYZXllJQSSGy9UMj7NpC+3D+ld9KJkoQF4zbLFB/et6TnjO2AiKRaS4OKiJHitE\\n\",\n       \"vJuD6orNt80yBOYszp+YragdmJuYs3zNwCgOj//TDh6r42MCm2d7pX5hLBKT1DbDm/7jvLBH2hOl\\n\",\n       \"i+PYsmWLpLjmH9tjzPk1oK067PWr2vYzxxNlQmM/tDvra65atUpSeQWCtvGnbeF828Z5piKVJEmS\\n\",\n       \"JEnSkVSk1J8SBXhfPL9GeeJ5btvMA74feUttsx55Po/3gxcQtcOkqsnWUltHCC/xmc98pqT5yk+U\\n\",\n       \"wULGEtCetLuvSeegGN5+++2Smowk+rsUZ0C/sl/aGzWA/dcqUnjjKHEeY+VeHUoZihL/L8UHcJzE\\n\",\n       \"saB2jJu1R39zvn1m2tXGyU2KUsxLCa9jRB/T5p7ZCa4ouS2hIPgYwwbGjWNFeTjuuOMkNTFTDmPB\\n\",\n       \"xyrHRwyNj1ng+GkXxhBZcD6XYbO1CiHtMG6tQa49jHn6g1fmaLLXopi6UsxZ26w5v1a1pdSOzKH0\\n\",\n       \"A/G/KHPM4dE6t/50pO/xTPu3VYxTkUqSJEmSJOnII0qRwjvhLta9n1qvC28CT5xXwMsg4+Cee+6R\\n\",\n       \"1D12hOyqDRs2SGq8sle+8pWSmiy10vNjzg+lDOWipPT0rdiVcG+07f6pTUOMEv1w3XXXLfh9Vwlo\\n\",\n       \"J7x8jx0ClB68X/oXtaA24wUvi5pAeKMen+JeMP3uihP2wXpmfn4oUNgt3iDjwtd2dIUJ5YqYPeyd\\n\",\n       \"34+rYNLO9FufilTfNa7GBdWTeDBsj7Hs9Ym2bdsmqVFSsBkUBI+rA8YUWXEen0nNOycae/RJredO\\n\",\n       \"DMyaNWskNXGJXkcKm0eBwQYYW7XrOKKkoXhEGaZtlTZsn/Zsq4jQTx6X63Mzqjb9f+utty64PWoM\\n\",\n       \"ck1iFQeoHTtt4yrHhfOnfzkPxift7BnNnlXaV4Yw+8Ge6Zfa7acilSRJkiRJ0pFHhCKFB85dJ94J\\n\",\n       
\"3kTb9Xt8fSW/a+W5tnt9Xj+oFq+l4VliUbxARJS5s1Q47LDDxvo93gs1Z7oqgcRllPrtjjvukNQo\\n\",\n       \"SVRxxssqgX1ij65weq0gvHli3FyRIpsOZcjX20JJIh4BxQq75v9eGZ0YMK87hf1hl5xPVzy7NIqX\\n\",\n       \"6AI2X5vpWoLtoDR430c1yYC5A6WFto3qOrktRkpFtB8Umlqi424bQ8L3r7rqKknS1VdfLWm+eknt\\n\",\n       \"t65zJaBsuO34cdM/qLKMYX6PCsz/XTEjpubMM8+UJK1fv16SdN9990mab1+orRs3bpQ0f25nzmI/\\n\",\n       \"1KAjxotVIqLz8diz2orr+++/v6TG7nw/feMxX74/lLmDDjpIUqPu+/n0XZuQfsusvSRJkiRJkkXi\\n\",\n       \"EaFIcXfJ3SyeLkoFd/940jxfr82S8uymqD5UbRXdtvgagXi1eMt+F4+X6SvS+zpe4O8hyqwZF46r\\n\",\n       \"LagBtAfnfeCBB0pqlK5apagEMVj0J++JXfra174mKV43ClCM8H5RfvASiXki/gHFiP16LBPvo5Xf\\n\",\n       \"P/axj0mKvVUUIbbnVbF5xat2aqtP00545dhllHkGPo7aKKt9VzZHwWAOwHaJRcIWmWs4VmyCtqQN\\n\",\n       \"8NQ5t9rKyiW61gVibuA8SuogfeOxRNRDQr2MzouYMVekmJuZo0pZc2vXrpXUZFBjSz73nnDCCZKa\\n\",\n       \"rEKPwcKW6R+Om/PDhrEDxlxUWw6idSwZAyhQtD+KmCs3Xtm7awwgSuW4czjtgb1jD7Qb7cl72hs1\\n\",\n       \"HHuhwjqxYvR/28r2bN/nOq4VjAeOm3Zvq4KnIpUkSZIkSdKRQV9rKbXa6WAwnJmZWfT9JkmSJEmS\\n\",\n       \"tGVmZkbD4XDBNL5UpJIkSZIkSToytRipNoqUV+CufY7LPmr3RWYLtVmWLVsmqcmKop5T1/3VVuh2\\n\",\n       \"iKsgngIVkf1ceumlkpo4BuIBaCfP6gLPynJ4Lk3cx5ve9KaR/RK7w36J7yA2CGhX4hE2b948sn+e\\n\",\n       \"T/Ocn1iv3/zN35Qk/fEf//HI+fNKHAr7bZ1psWP/tO8FF1wwcn6TJrIX7P3000+X1NS/8pg94jSi\\n\",\n       \"uBfOjxix8847b2R/Xl3Z15mqrZhP+2EvvL7whS9c8PyI6+hadyqKkXrjG9+oK664QlJjg7QR2U+s\\n\",\n       \"8QaMccD2iPEg5sUrZr/61a+W1KzbiO0yxtkO8ZecM9sh/o0sK2rEMbaplk/dqNe85jWSYtuMYkG6\\n\",\n       \"wn7+8A//UFJ/2VEONupzy6TA5i+88MJW+xu3fdnPP/zDP0hqsi19DGPbz33ucyVJV155paT51wzs\\n\",\n       \"gxqF0f6mPZc93PYXkYpUkiRJkiRJR3aJrL3Fqnfk9XrwDsb1xshEQXnB6ywpXLDffvtJarwSV3zI\\n\",\n       \"TEFJw9OnJktEpEQde+yxkhpv+vrrr1/we1FmCtlleNOc780337zg90teXpRp1TW7Dzj/pVbtGjWE\\n\",\n       \"DJ0oe7Sk5HB+UdYg/eI1XRhv1HCh+rRXNMe+vA5bqS7buJlznnm1UK0jam1x7vzmiCOOkNTYNm2M\\n\",\n       \"qoqixP9RPcnicZunDfg/tsRYJIOX7CPamt+hXDH30CeoyFF2F9AnzFVsZ9yxAaW5r6Rql2j7Oyqj\\n\",\n       
\"c1xkA9bSNSY4WuuuLW77tB9j2Wu6PeMZz5AkXXvttZKk0047TVKjWkeKVDJK2/Vo25KKVJIkSZIk\\n\",\n       \"SUd2CUUKbw7vy+Mfxl3bCwWAu3y2z3pXbVfQjraPV4MyRXxGqc4QNS+iGhrEWbBeFnEcXStCuzdb\\n\",\n       \"277UUmHdMGKiPvCBD0hq4kSIraJ2Cd407TyNTNJpgupBu2MPbfsPr4t4DtSRtlWiUXgYX9gpKg0K\\n\",\n       \"GXZBTRheu64c3wfUKaLuDuoaqidzCbW2sLl169aNfI82o74Q2/HK4KjEvkYXyhBqHrXLUBzoK2Jd\\n\",\n       \"UCpQsGqrw9MHHO9irZUG1GbDdvusQr8QzKGRjbWt1Vd7DUGxZLusY0k/UvutBPbHXB0pctdcc40k\\n\",\n       \"6UUvepEk6eUvf7mk5lrw2c9+tmp/49I1rrcEsWpeP6qvWoTOpJQoSEUqSZIkSZKkI7uEIoWX6d5O\\n\",\n       \"5EV4Jk4JPO2TTz5ZUuMV4mXhhfJ5bUwNd/PcdRNfwd29e53EOhEXgbdDvES0/hHeMHf54yoCHGfb\\n\",\n       \"KrJUhUVR+dVf/VVJjTJFrNhHPvIRSY3iB480JQqiKs/RyuN40dgX3+N9235zsEvUGbaH6oF9oXzy\\n\",\n       \"OXbdt/faBjzPW265ZcH/o4YSn8ixoxi5Ck2cX+TRUs2eWBUUIeamKLaGWB8yKsniom1RLlhjrARz\\n\",\n       \"wGLH+7E22bhPBWphLqRyudM2likaY46rusyRbc/bxyZPQfzaxv4+8YlPSJJOOeUUSY1iQ+YzcA3r\\n\",\n       \"aw7Fjic1ljkfrgmo3FEcbV+gRLPfvmIJU5FKkiRJkiTpyNQUqac+9amzigsxSXjCvo5S7XN3PHWU\\n\",\n       \"nVp47oyHTYwPcQt/9md/Jin29kpr0aHUAMqWr8nnGTp4wdHafb4fYpSidZlqIQ6AWBtfU68EtXeo\\n\",\n       \"sXPOOedIarLALrvsMkmNEtN2+9OGjCzPnuyKe7WMC+zYY/T4PnE72G/tGoKrVq2S1HixeLfE8uH1\\n\",\n       \"4pXivREnwv5cAUO55f/TgBgZ4s5QDjjXSK3jc84Rzx6PnLFIGwEKCH2EwhFlotKnjFXmNpQotkNW\\n\",\n       \"X0lhIKPX14gjNqsWX7OtlsVSooC5iVfiTVF2ahU86JqR3TbuEDzDtLSWItdCYqq4NjmM1XFjjBi7\\n\",\n       \"jBfib/t+WoCdca1CYSwpUsw5bWF8ML7ZbypSSZIkSZIkU2ZqitQee+wx6wWV6s6UQBFC0Wr7XBcv\\n\",\n       \"AS8Or8YzOvDM8UJRUvjc4e6X7/nz+9qMCP7P812/K+f48a7whlEs8FK6eo9tvRw/TuJOvDK9r2yO\\n\",\n       \"Nz7pDItx6at6dERJHUD1wBt3NcK34/1Hf+DlgnvZ2C9VuF0hQ00hhhGiuleTguOUmiw7PqOtC73y\\n\",\n       \"lAAAIABJREFUOFZUXzx4r8OER86YY+xG50SMEIoWthzNaez/c5/73Mh+OV7GqGcHOsx1qOj0Ncd7\\n\",\n       \"zDHHSJof10aMkavck8qWmhS0N+3JXEK7eGYpRE8PukLWYq0a7DFZpZg2vo9ywnlTy43fc57jKiwo\\n\",\n       \"USh8nl3XF1Typz/Wr19f9buuMVv8DvuPVvvoSipSSZIkSZIkHZmaIvWd73xnVqHhLrrr3TR34ygZ\\n\",\n       
\"xOjUgneGd8NzabxKXvFiPRsvqrzuMSYoRsRHnHTSSSOfs3+8G86D/+O9ovgA3ixeJe3K8XEceBul\\n\",\n       \"mCuozQDCC2S79IdXWEe5oL2ooePP9z0jZakxaUUK78njKfASqc5Nu3n2I6AkeX/X9j/9yn6wS94v\\n\",\n       \"VFFcmm839Dffx17Hrc8Gc7NUOTdiO3zdScYiah82SNsy9rFVPGay6PyYidXhe15vKAKFjONibKIA\\n\",\n       \"lLLPsEHOg+8Tw8XxeIYw2YK+WkFbT5/tc7xdY42Y02qh/2hfbIrzQ6mhzpIrUj53jgvn7U8rIlAs\\n\",\n       \"24JdokCiRmM3KI/jxpuSfcr2J61U0j9f+MIXOv3e10Dk/P110tmsqUglSZIkSZJ0ZGqKVGkNqS50\\n\",\n       \"VQrIvvLn13ipeH1RhkUUP4HX6V4RmUR4L9xV47lHmQl4B64E4B2yPbxizwRqe1ce1VhBoaNd3Evn\\n\",\n       \"vDdu3CipOT+OB6WEGjq0B9mNc2NeHongbbsdoBrQ3l5R3KEfvH+wI1eKHM8uRd2h/1A/iNtwBRTw\\n\",\n       \"ClED3C7HZe744ljc1jkmFAEyL1GkaBNil3zb0eoD/I62QKGojfNjbFDfyhWmSOHAFuh7xiptiu3w\\n\",\n       \"f/qMGDCy/TxD2oniOLEdFBL22/apQm0FcsCWaG/2z3lh85FN97Vmnh9P7XZR4GoreXtskmd50j99\\n\",\n       \"V7Tva2xGuIKHfdGvpWxG8PbBLplraN9J17ZLRSpJkiRJkqQju0Rl80nDXStKCDVoiGUqZdC4N4B3\\n\",\n       \"iPfrMSx4S2RNUV+JbEG8FVcMIsWN5+R4zyhUkVdWC9v1OlgodXgT7AcVgFcyI/g+61PhLdAueFnE\\n\",\n       \"hE06Bmmp4c/5weMdvNo2dod36tuJat1g56VaOOzPs1PZPvt3r9C9c+wDbxxFs6/1tRZSf1xJ8crf\\n\",\n       \"jG3ivrrWBeJcvbo85xop716TizZBKeL4brzxxgV/z/48a4s+o688c7ZtHR7mIPfoUc7ou9oK4U7b\\n\",\n       \"3/n5EROFDXBc2KbHbkUZrm1ZuXKlpPZKB/biMXW1CgxPP4hhYv99r3E46Tk4UvDa2qcrZ9gTr4tV\\n\",\n       \"5ywVqSRJkiRJko48ohUpvAKP4cGb5O4Y74f/UzsE79EVGzz0Uu0NvCcUGq99Uns3jTfL+RCnMK6n\\n\",\n       \"H3kNfE47eW0WvATOj/gPnn8TF3H33XdLkrZs2SKpe9XaXZ1Spo0rNyg87tVix14fysGuau2D7RKP\\n\",\n       \"ggpCDSUnqlGEl4sX3VdG0Nx6W4yFKG6RsUF1dxSXUswmMVWuXKE6o976fiJQvWlb+oS5pFTviLkl\\n\",\n       \"ysBEoWHOYsy1jWWiPaNMYI67a9Ze1/pExMKdeOKJkpp29HVKXVnpK0aK2LK2sUn0F2O47VMD7A21\\n\",\n       \"vxTjtqsxbh0s5ibscbFWzUhFKkmSJEmSpCMPC0XKPXZf6wvvDq+RDBa8Ca9lgjfrz7/xyIn5wbuI\\n\",\n       \"vN9SlhxxC3gXXSuQex2qvjIio+fknBdeH+3KfukP2hcvkDgAvA5iwmjnSWdWLBW87lgUn4PS4l6V\\n\",\n       \"1wvDq62N8ynFY3B8KLH0i8dkdV3LcZI1XUoKAQoIWXjr1q2r2m60rqLbOupgqbo7cwcKlFceH7fe\\n\",\n       
\"ETZBXxOfiCJVq8xQn4l2wyZR1Jhb+47RqcUzQmnPaO7qK2aG9mirqo5bUZu50+MCfRWOXZVxj59r\\n\",\n       \"M+OJuSzKuu2LVKSSJEmSJEk68rBQpMhgwBvhPXC3jheIEuRr9LFOFxkzKD14oygueBWluAAyB7g7\\n\",\n       \"9tgQFJ1NmzbtdDt8j+34c2Tee7wEXlrfCoDXuIm8dbx0+oX2wEvuu6ZLX/Tl3WFXrGyOAkftolpK\\n\",\n       \"Xm/U/hG+AjpeG/2EnaJi+FqSqBIouR4/s9jMja2rVWPJxGUsd81wRZWjDxhz2D7qN7bE/mhDqv57\\n\",\n       \"rFHbmKMoS439cL5tt8vvfW7xWmfTUkKuuuoqSc0Y6XstvYjaLDvHM7g5Xub4Uv0mV6KIP122bJmk\\n\",\n       \"/rISYVJr7U0Kf0rStR5WFP8bkYpUkiRJkiRJR6amSB122GGz3g7eIHeRZCKU7ia5C8fD5nk5ihJ4\\n\",\n       \"7A1313hneOKefdcVvFJqnLAfFCXO25UlvEqyAtkO3+P83DvAWyQrjrtpFAfiF2rXWCMbkf4oxXsA\\n\",\n       \"d+8oGLxn3TPiFvCaUDzwbjnOuVlYi4HHIJEJRMV1r4OEAoP6Qf+S2cV5+PmxH/fevV5YrZeKGkCF\\n\",\n       \"eI7H+8tr9VADh+1iF3j1vh4Ydoia4soviifqDHY4KVD0sLNx4onGrbXm2VeR7Xq1ftq6VNk7WpvN\\n\",\n       \"a83RF8wh9ClzB2PP1cUSrDkYMa24Rmza4wh5qoD6u9TwdVGJu6VffC5g7btISaR/aYdDDjmk1+Nt\\n\",\n       \"q0TRL9gj8co+Tpi7UHL9KdFxxx0nqbmG0r+uvjM+iA/llTmB9mbO4Pc+p/p6r7R3rcKXilSSJEmS\\n\",\n       \"JElHBtN49jkYDIYzMzOLvt8kSZIkSZK2zMzMaDgcLliKPxWpJEmSJEmSjhRjpAaDwV9Keq6kfxkO\\n\",\n       \"h6t3fPYkSR+SdJCkL0v6xeFw+B87/neBpF+V9GNJvzkcDq9ZaLvvfOc752UwdMUzC4jt+a3f+i1J\\n\",\n       \"0sUXXyypnGnB81m258+leU7Lc1diXXju+4Y3vEHSQ3euC+ExUsBaexx/aW0/nv9ecMEFkprzI1aG\\n\",\n       \"diV2xWOjeG7N+fF9znvvvfeW1Dx/Jibmda97naQmVobjJHaI2LTbbrtNUhPr9YxnPGPkONgf7cAa\\n\",\n       \"e17pnerT11zzkAl5ZpRXsaZ/eE5O/InXCeO8fD2m8847T1Lcf87JJ58sqcm69DiUUlYi++GVDBza\\n\",\n       \"gfPCbsDth5o+POfHHj1Wyvc3abrub82aNZKa8Yp90F9RXNPMzIwuu+wySU12EzZCzIbH4zGWiU3h\\n\",\n       \"d/QhbehxYa94xSskSX/0R38kqbF1jpHve30lvscYIwYJG6UODn2MDf3yL/+yJOmWW26R1Ky3yP5O\\n\",\n       \"OeUUSc0qAYz93/iN35DUjJmrr75akrR161ZJTQwOx3P00UdLasbeYtvKFVdcIalpN2KwGFscJ3Mm\\n\",\n       \"/ee1+JjbsCFe+Zz9/fmf/7mkZm7BDsi4JU4S2J/H2HicHu1PvOBv//ZvS5Le8Y53SGrmKvqX42Ju\\n\",\n       \"pnJ7LbQL58m1yDPNsTPs++abb5bU2MG5554rqbFT7I1rI3M8cw77Iz60ZC/ERvF7j0smrpS6T1E2\\n\",\n       
\"KPt53/veJ2l+LB/bH7dSOnb2kpe8ZKffq1Gk3ifpOfbZ6yV9cjgcrpB03Y73GgwGKyX9H0krd/zm\\n\",\n       \"ksFgkKpXkiRJkiQPS4qK1HA4vGkwGCyzj58n6dQdf79f0g166Gbq+ZL+Zjgc/lDSlweDwb2Sjpd0\\n\",\n       \"s2+3LzVqxzGOvHdPvLbmR6kWCt5OaXvctXv9pGOPPVZSo+TgBZEZEGUNuuJ2wAEHjPyfzA+8KLyc\\n\",\n       \"qP4QXpXvj+PwWjZeS4PaN/fee6+kpt3I9sPLYS09vOxPf/rTkhov86yzzpLUrNm2ceNGSU374RXj\\n\",\n       \"7XlmC3jVWrw6r0DPcbJ92tXrbOFFlvqZ9nrzm98sqbE7vNj3v//9C24fLwc4n8i+OB/UFdQI+uXw\\n\",\n       \"ww+X1PR7tAbetKAfeUV5o19uuukmSY1X/exnP1tSM0fQv9g37YV3i51JcbV1xho2gHpHX7stMDa8\\n\",\n       \"77x6PP93xSOq9M2Y4dUhuwjFw+ck1EnO3dXgyy+/fOQ48Mg5vg996EOS5itxjMlSFuGkQanBNjzj\\n\",\n       \"1FdvYE5l7Hi70j6MIc/6cuUHtdOVKIgyaKNafT6WS5nobZUoiOqnffjDH5Y0/3x8NRDAfqL/O664\\n\",\n       \"lcDOvL14iuRrT5aIauiNq0RBKWsVuqpF+wyHQ67C35BEvYF9JT0453sPStqv4z6SJEmSJEmWNGPX\\n\",\n       \"kRoOh8PBYLCz1L9doyRqBbWxLygOrvjgJXIXz3P/O++8c8Ht4KFzd85dtj+P91gu3kfeBN6Ge59U\\n\",\n       \"4CYmBSXAt4P3gzfgsU2A937JJZeMHBdeJF60q5OuKKIy4K3hZXrsE+DVc1wcJ94t54cqQTuDVw8G\\n\",\n       \"V6o4Pz7n9Y477pAUe6muoOIFez0nwN6oqeKxeUAcRN+qAioICl/bSuoeC8fxcbzYGf/n/Ol32pEa\\n\",\n       \"QfS7153bGZGHiw0wtkq15FwN5T19SmyGKyoca2n72D7bc0WDMUYb4MGjULntXHTRRZKauQab4XfH\\n\",\n       \"HHPMyHnMrRI/SaLVA1xJKGWVo0ZGYxbbi87L10asVWKwG7YbXRNQ6SN8TintnzFY+0TH65fB+vXr\\n\",\n       \"JTVj2VX92jUEvY5YaTUNnp7wylMJrpmo6cSdlvqf2Kxa5WhSdFWkvjEYDJ4iSYPB4KmS0CO/Kmnu\\n\",\n       \"c6f9d3yWJEmSJEmyy3H99dfv9P9dFamPS/oVSRfteL1yzud/PRgMLtZDj/SWS9qw0Ab23HPP0ItY\\n\",\n       \"qqxevVpSk8kQgZfH3TqeNUoO3ikeNp+790TmBO3EdjxuALxqsWd7Ad4CcQPc1ePlsv3IKyHWBQUh\\n\",\n       \"yg4EV8xQooj5ca/UvccoA4d1zJzPfOYzC35Oth7eI/vFy4MoK8zbjf7y9b7oz8g7i+IZXPHk+Pg+\\n\",\n       \"7Uu/Yj8cbxSXMy7YA2pLW0UKe8c7pt08XoR+pio17YZ6gWpC+3h801xK6yXSV7UVxsE9cNqescAY\\n\",\n       \"8u+hJq5bt05S01dkuAIqZ+TRf+lLX5LUxFlii6weEKnH/5+9N4+17KrOfb8lckmL9PR0Fcg1Drgt\\n\",\n       \"u9yUm7JdtjG24UIIoXGkEHwheoQ4QAIJGBkSSILfgdCJToGrBEJCnhMSEEgI0xiDyw1l3JSbsss2\\n\",\n       
\"bqCuYhK4Ckn+uUprKVj7/WH/ap39nTPOnHOttc8+jsdPso7rnL3Xmmt2a45vjjGmQ3noU4wprsfc\\n\",\n       \"U0t01l8EfR2lDFp9W0rtRvv76QXg6rfXH30LBY8+SP8qlbdUH644RnMuCk7rGHf/Vurd/UdL0YLM\\n\",\n       \"1ZTPoxKBMe79lzmLdxn3QYGiP6NAcV33SXQ2S4k6//zztWfPnvDvNekPPq1HHMv/a9d135V0qaT3\\n\",\n       \"Svps13UX6dH0B5I0m83u67rus5Luk/QDSa+dPVZOO0ySJEmSJGmkJmrvfwR/+u/B598t6d2l626m\\n\",\n       \"GlWylkrWK7Aq54w/31cG9u2xdtzyZrWNlYNSgnWFFUmkBeXGyo1OeMdawEqOzsHCijrmmGMk9YoB\\n\",\n       \"1g5tE+VMofxYN1jhkdXnYIXcdtttc+UGP9/o2GOPldTXC+WKfKQc6pd9eNoNa8eVu+i5sbKwpviJ\\n\",\n       \"NUVUGedEoUhdc801c9dxfwruj1LnZ9cBSg39GWt20bYK/YHcQ6342Ys8X9Rf3JplXLp6wHNTH1Jf\\n\",\n       \"l9yLMYWy4Ke6M8ZqT4mP1GCIFBIiWcEjbyFSooCIWcYqfbg2ygncTxLLPvKpKUG9lyKGwZUooH3G\\n\",\n       \"nrfJnOYKSiu1ykjEVNHpQ9VmHzPUK9GN+BfSfiiRzGm8Q5gDmEt5B7l/aaTmM958nDGHucpOvUXR\\n\",\n       \"q1uNzPGUJEmSJEkykNFRe48FPDID8PlASYhyuwCrY/LgYCV4tBrZhX2/GasIHyCsAz7HKhzli9U/\\n\",\n       \"VgHWnlvFfhK8n4zu8Dn23bG6sOypJ6wLtyLYV+f33IestCXrCWWD+1IOrCMvN8/P57GCUDqwoqhf\\n\",\n       \"lAqUN7eOaW+s8dWKhtTXf61KgVLG/c444wxJvXV10003SVqbNRu8H0RWHf3YI83ox9Qj7YOyN9Sa\\n\",\n       \"drAK8ZGLok0dMu6jEqC01UTbrQY1hkgons/rU1rb5tQZbcp3qVOPmuJnFBE7lqEKA8/hpy+0Kvw3\\n\",\n       \"33yzJGnHjh2S1kYjtlLyq6xlqvoF5oRWRcqjGPlJv2DMlqLomMMXDXOa+6l6lCjtw1zDXMmcx1iK\\n\",\n       \"djPcf9PV9eid40ot45D7c2IA7c8csdWVKEhFKkmSJEmSZCCPC0WKVa/7QvHv0n68n2XGqhkrxxUi\\n\",\n       \"v57fF6vGo7KA1Tvfc98oX6X777FOI38NPudZkrHOXflw6zmy0lzZKYE1gzWCsueKDPd3nzGiDX0/\\n\",\n       \"P4J6IUIFRYR8RjA0lw71iiJJlCfnl5G53f0KoJRJPfIrob2wJqmvoX4hJVAGUQJLSi7qD/3DVZ9a\\n\",\n       \"PIu4q0pSPJa9T2GBo5YxRzBmoroeqpjQ5tyHcuIT1Jr7i+vws+S75XB/VE3aYqi/XVRfrdSeQoFi\\n\",\n       \"hP8kCgtzhO8+RESRta7GoyzRb6hv/BpR2Sk/fZ05CvxdUPLfpd/4+a7uP8lcXjumeC7KQ7uvPiWg\\n\",\n       \"Bu8vpTnYof/t379f0lp/4alwFZtxR72Tt2ooqUglSZIkSZIM5HGhSPnqm1W9WwH4cLg1g9WBUkGE\\n\",\n       \"C993KxgrJ8p+WxuJwOfc98nhOp69OILroFzge8LzcF+sIfefwCrj9+6n4WcDRlBvWFv4+Hi7sM/P\\n\",\n       
\"/j7l5t+1VhDWKu3hUXjgOWOoz9pcOfgO0Y/wW+A6U/mB8ByRn8zYyCeHdsVqJKdRCdqX7zM+hvqP\\n\",\n       \"YHW3Rm+uhjan79A2tXmJWsHypswoKfRh1MoSUd8Z2qfoozz3VGeUlUA99fqsVcTYJUBp4PzJVlyJ\\n\",\n       \"8szijClXs4m65PNRTjSfs+n7PCf14L5N+OEyRnh38XlXpGhH91GLzg3lc1yP5yi9O/zvY33ifO6l\\n\",\n       \"vkp5rVqhHsk9iG/bVLn3UpFKkiRJkiQZyONCkWL1H+VfwqKNovtQqrAKPEIjysyN9cFPVtsoU6Vo\\n\",\n       \"Klb7WF2uUPn9eD7Pvuvl477s6/Nv8ix51uFSbhCeD6sW3yDOnItA8cKPACvErZzIamn1y8Aq8/xf\\n\",\n       \"bgXzXF5/+CFE0XzUG+0Q+Q61ZgaPwCpGQcPPYFEcfvjhknoVx0+Uj3D/DurV84XVgl8TCmaUz60G\\n\",\n       \"+mzJEo+Ico45PDtjbahvUwSZ1VthrmAOZAwuGvrQUIWPOYHow6H+gD7Gfa71PGQoRfQ9/FujdiTj\\n\",\n       \"PKBqc1/6D3MZ9cJ9UN6Y43kXoQbzLuI6rs57VCrlpN2Zg4f6JI1VpByPvJ4K5maUKZS6qXyxUpFK\\n\",\n       \"kiRJkiQZyONCkfK8TYBygGUc+QdgdWBNuPXjq2f+zb6z52nCaiBqDGsI3yP3dfF8T9H9sA7cFyU6\\n\",\n       \"mZxyoLRgBWHF+DlgQDn5iQ8Z1mytzwpnEpIDxaMAwZ93bLbiknqA9eb9pfQ9niPyvYLW6MYI2nlo\\n\",\n       \"NupWUD1acw7xecYZ+bXoJ6gAtXmlyMjPCQND/GPos1yDSMuSyokiACgG9JVaXyXGOGPIcf88oA49\\n\",\n       \"1xtE+YQiyP3GHIOaHBHNUa1M5SdYG50X4UqS5zdiboyi7HwOd3zu9bmTMeDnjjI3e7Qg+P08XxS4\\n\",\n       \"4uJKKM83VNGMznMdCuWd2r8TnyjGL+MmqrdWUpFKkiRJkiQZyONCkYpWt1gVrEojRaqUJ8fx/WoU\\n\",\n       \"Fqw4FA+silKUG+UiX5JbxZ4Dp9bfA6sXq9qjq/zkcC8Pygv77PjsRAqYQ73cc889kvrnG+o7U4Ln\\n\",\n       \"5LloDyKoAKvefev4vEd6ofxh9fB7IkS4Hkqat99Q6Ee16sNYeA7P5F/C6xF/BdqDeiopUvRPxjPR\\n\",\n       \"syh/hx122EHFwM9+o6/RRvRZ+kKksnkkr/dNlBzGMn5jJd+R0hl/0Rg+88wzJfVjjXJxf9omyuoP\\n\",\n       \"qNvUC3WPKrxt27Z1v7eZZ6RuBq6M+TuAfuR+fh4pXQv1h28T9e3KEQop/ZfdAtqz9rQCcvT55z3v\\n\",\n       \"1FAFqPV0ghKML3Y3/LQN5oxWON/VTwS49dZbhxd2FalIJUmSJEmSDORxoUiVWFTuFM8azOrarRjP\\n\",\n       \"W4R1gB8H0VLke3IrFouc72PdYC1HChFKBtYJ9y1FTFx77bWSegUJsKJqrdavfOUrknrrPIqgQD3w\\n\",\n       \"DPDu31Aqt7czfgdu5WCNH3bYYZL6esSfgfLSHvz0aLxzzz137vuUf6rIqKlyrJTATwPrttWfguhQ\\n\",\n       \"t35R1Gqj/1ACiU6kPqn/M84446AigwVPn0JNow3JZk+fjfqO++D4uX6MrSiPj0NfRs31KKuSDxJK\\n\",\n       
\"xp49eza8Twnq/utf//rc7/fu3StJWllZGXX9rQLKG5Ge9957b9P3UQZpN8YC7VPym8QHDTyfU9Tv\\n\",\n       \"mDN4Z3j+qVoi9Zu5cGx0nCtZKGzMFZSfcUL/jfxbiRz3dxLXHVpeFCl+Tu3blYpUkiRJkiTJQB4X\\n\",\n       \"ilTJB8mz2UagEKH8sMrG6oFvf/vbc/9GOYh8jqLzpfg9+7hYR+4/gdWBgoJVixUTZQxHGePzPD+f\\n\",\n       \"c98f8Ky6UOsbBVgHjkftoS64IjXWmsKqpH7PPvtsSb3vDe2IiuFqRslP4ctf/rKkPk8W/YD6/JVf\\n\",\n       \"+ZVR5d8seM4oitPx9rv88ssljT+PDfUGn8X1fBdRoMhY7Cqvny7f2od8rJLDiuszF1BWt9i9DwMW\\n\",\n       \"PBGJ9E2y5IP/e9H4WW9EWOLXRn2gxkZnlqHMHHfccZLW+ikCcyx+q6j1fK412g8Fieu2qrj4QqGq\\n\",\n       \"8rOWaK6E6Hnov/xs9dMFFEaHfj/1bsz27dsl9bsVnnMxwvN3EYk71RmOztRRgalIJUmSJEmSDKSb\\n\",\n       \"OoNo1U27bvafZQ8+SZIkSZL/3KysrGg2m62bwj4VqSRJkiRJkoEszUdqZWXl4P41qljkKwRR1l58\\n\",\n       \"Vzxj9Bvf+EZJ0i233DL3dyJv8GN45jOfKan3d/jSl74kSdqxY4ckaefOnXPlY/+WaCOyI+Pf4Gob\\n\",\n       \"PlpEjuCDA/hVEJkQ+VE43Oe9732vpL4eazN+4/eAT0tU/5Trd37nd+buu2i4z8c//nFJ9WfU1Z5/\\n\",\n       \"BkTtveENb5AkffKTn5TUR6fRT/GXiPI2Rb54+IPg64U/wPOf/3xJcX2+6EUvktT7ruFzBa95zWsk\\n\",\n       \"9b5c11133brXIefQRRddtOH9pob7tN4Pnz9+ls7Sw+fs9a9//Zp7eSbqofiZbNzn/e9//9z1fez9\\n\",\n       \"6q/+qqT+3EnPW+M+Qc7qZ1t93yi6D58mfJKY+4CxTh/Ehwc/Pua0Sy+9dO5+i8b7Cn6d+AhNxUte\\n\",\n       \"8hJJva/WVh8Li74f78JW/1bAh+2SSy6RJP3hH/6hpH5OpX+WfMWIouNdw7uZfk40H75NF154oaTN\\n\",\n       \"r8+IVKSSJEmSJEkGstSoPRQfrD1WrVGEQqQEEG3FqtbzG1155ZUbluO2226T1GdpRVHAGqJcKBwe\\n\",\n       \"fcfnUKQcruflgpISV+L444+X1K/WseBLikytYuNRgiiAU52cXYLIq9pzvmqfC6LIFdqtNtIn8jdE\\n\",\n       \"xSDXUG1mcKLl/LooZKgPX/jCFza8zqIiX1qJ8oE5z372syX1SlykSBHptrp/ou4ypqjrSJFC9cIC\\n\",\n       \"xjJ3BYo+72O1NAaw2KPzFVELI0UqinqKxgB1G9UxdeoKQa3aWwIlAmWB52pVmFDv/fPRGYQobX6u\\n\",\n       \"qqvEU2Vmpz25fu1pEluNoUoU+Nzi70jPERcpxLSXR9MxLrlOdKbhsklFKkmSJEmSZCBLVaSuv/76\\n\",\n       \"Sa6DLwurYaygCy64oOr7WBNu+aNwsbrG+sQqZd+2lkVHSGLdcp8oi69njQUybmNV4gsW+fxsFpST\\n\",\n       \"dop8kbBK+T1qgytOWLWoF+7XgnI4tZVJ/6n1YUORImcPoMzhM1XKDD6039EfUOTG+hu5ShL5B6FI\\n\",\n       
\"R8rikUceKalXH8hOLrUrRuCWuVu+0Vl8Je644w5JazOYU5ebPZZqc+a14v6r+AMyl/gpBCUihS4a\\n\",\n       \"kzyPz2m0G/U/Vv3nOXbt2iWpz+vlGfAfr/BuYe71uQl/4tq8WKjvKMtbVflLRSpJkiRJkmQg/yky\\n\",\n       \"m5NpujZLK5EKWPaRtYQFHe33EomDj1JEtK/fivttAPv+PL9b4ViHWPKUhzPK8EHheyXFYaxV10pk\\n\",\n       \"nTooLzxfZO3TDpHP3VTtFVG7z49fCf0TNYF+WHtGXe1J8Q6Kz0knnTT376lOTI/6WZQd21nPf4ho\\n\",\n       \"tNpT6aM29rHGdaM+E3H//fdL6tU9IkpRQz2qrsRZZ50lqVdeKI9HAjPHUccoMlMrUUQ8f+QjH5Ek\\n\",\n       \"3X777ZL6jNp/+qd/uuH3ozPPpppjaF8UjVYfG/z6UFLweWOuTCVqHj9f1v1aUZiIRi1lPud6zMl+\\n\",\n       \"LupWIRWpJEmSJEmSgfynUKRazwvCKoQowsXzMpFrxfNeRVYJVhCWPNZQq1XI9/GB8jPqUCYinyis\\n\",\n       \"V6wqVvcoeYAVHkXEbHVoF6zv2nxcztjcKiVozxKoCfz0iKRaULacUvQlvodEdEXXGcrQfoV/ET9X\\n\",\n       \"l+voo4+e+xt9wf0bsZT9HEWPQmIMb9u2be46EPkbAtc7/fTTJfWWOH5nrdFyKFBu+TvMWfiqUO6o\\n\",\n       \"nDxH5E+HgkZfZGxQ935+JPUWKVKeWy36+1goB3N+q68WkeXM2fhPRueEPt6hfiPllt/j98k7MVLN\\n\",\n       \"3TdxrALIeOFdzhw3lsfGGzJJkiRJkmQLslRFCoWklLm4FvZPfRXr+7Ge2bwE+Wr4HqtiVtNuPbHq\\n\",\n       \"RUEi+ojft/qsYA1SX55bpeS3QXnJl0V5icqDKDJiar+KyNcrojVT+djvRzl/pqI2r5MrmEMVqciv\\n\",\n       \"oDaijfvX+qpFeLsPjYSjPfFbcv+g1ddGaWLsoaTgs4EFzRjjc4wBxg5j1vtsqS24D9dBBR6qaGBB\\n\",\n       \"l9qOclJXKElReUvP4ZHL8LnPfU5SP5egUEVRWYwtyhfN/cy13K92rnYYM9y3dS7juVHs0idqY6gv\\n\",\n       \"V0r9FA1+4jMVvRPpt8yZY3dHaP+p8qZBKlJJkiRJkiQDWaoixWq0VaGIwIpxq8MVI6yFQHXZAAAg\\n\",\n       \"AElEQVRCfu/RQ4cffrikPtIGawS/BhQg/BDc54rrYuWhhPDv1ufF14fPR5EuJf76r/9aktaccQj4\\n\",\n       \"jwy9fi2t7VyrJKEAkuOF7xHBRZRliVofJvobn5/aWqU9sMJa/TsAtcXBT2Bq6yzC+z3jtNUnjXFE\\n\",\n       \"tOxqhdbP03RfKZ8buGfp3rRtazZ/nhnVrHTmWInW+9P3x541WPJDJZ9SidpyMLcOVaLAM9y3jiHm\\n\",\n       \"XsbK1Op8LVHuvK0G/d0jqD0HHXNz6V3gGemJmhzL1O2YilSSJEmSJMlAlqZI/dAP/dBgJSqKNmIf\\n\",\n       \"1X1QiNAB7osV6tYK1hfXIWsvn+PfRMu5fwEKAuVkNc79eF4+57k0ovrAKh5qpZXyInnm8KlYVDZl\\n\",\n       \"hxPvUQjJk9XqW1Xrg+T9oBbvjxH0Q8o/NB+UqwD0f3wHUXRq82ZhFbae4ef+SiifHglWApWAel89\\n\",\n       
\"flAymBvoy9xjaB9E2YhOP4j6DH2xts2nZqwSBWP9FFuJ/OdQGqnv2vakX9SeKgCMDVc4p9pFqZ0b\\n\",\n       \"PUP7VoXdDsqJksTc4pG/JV9B1HTmmkVFUo8lFakkSZIkSZKBLE2R+uEf/uHB1mFk/dVmw3UrAh8X\\n\",\n       \"V6QAS9rPZsMa9VwoWG+cJYZViKKEJc19aq0a7j82qiyyKqPzqoZC/VAfQ/M61YJ1g0KB38ZRRx0l\\n\",\n       \"qT8bcb0or9WguNCfvD7oJ9Rjawb0qL25L9Ycvnzbt2+XtPbsu1rcLwdVZWj5UbJaFSnvtx79WgtW\\n\",\n       \"Kd/H6pX6NkO9w5IfqwhxL7eIS5GUlGdoRmavs2X5ykw1J4yFvspcSLuUlB3aoVVRI0rSdyWY21p9\\n\",\n       \"1pyovChgPNdmKYFjYU7gHcnz8W/eiVDqx7Qb7V3rv7rZpCKVJEmSJEkykKUpUg8//PBgK6fVgl1t\\n\",\n       \"sUq9lccqmdU+q2VXttynCmUF68gjCbA+eT6+x2raV9esyj06z8vL6n5o/h3gOVE2vH6mgudfdBQg\\n\",\n       \"YC2eeuqpktbWZ+3+uvvVOLRXa0Z9iKxL+oWfE0d/oh/WKlLUx2mnnTb3e/pRq28XDP0e/Zh+wU+s\\n\",\n       \"/Vq1BYUYJXj1PIJCQdvT9/jOUDhzrXWs8CxT+bdt9agtotvo40PPzKNNvU/Qd5nDqNfa3Q1yAtbC\\n\",\n       \"fRmTHgW6KJireE7qoVUF3myoLxQ15l7GDbsGJ554oqR+ziv1E8YBavhWIxWpJEmSJEmSgSxNkXro\\n\",\n       \"oYcOnlu0aErKgVuNvt/OPji+NW4lesZnrFd8XFAQOPfKz7grwf3uueeeuXK24idoY+VMlVk+YrOy\\n\",\n       \"AbtPFnmlUA5rc/jwuakinhyPXMGKf+Yznympb2d+/8IXvlCSdN9990mS9u3bJ2ltFBztyXlzWN/k\\n\",\n       \"RXOGKsKtkU9E8mClAv2RemYcRv4uqAEobXxvdbZjH+uMRfzkUIGpyxL0IVROr7OhfplT4TnsXAFo\\n\",\n       \"JbL4Ud1LcykKQ6sfn8NzefSXZ6hnTqVd/Lnpc5GPTi1E7zHHtPoVDgXfPp57sxSpoeeN0v6MN+r9\\n\",\n       \"jjvukLR2Lq7192XMbxVfPScVqSRJkiRJkoEsNbP5ZuWEcGUByx2fE6wUVtPs72MNHXnkkXOfx/8C\\n\",\n       \"a/Wkk06auz4+JB4dNlTh4ARyyj30nK6f/umfHvS9xwpYN1/72tck9dY5ClykpHgkSKviMpYzzzxT\\n\",\n       \"Um+F8xwoNNdee60k6Yorrpj7HuqBK2233367pF6NQcF81rOeNaqcWMdu9VMOyu1qBKqBR22iOLnC\\n\",\n       \"Gvm7MH74/N69e+eu+6IXveigvxVjlUhN2hhlpZT/ic+jRhO9NVQNnsqnhrkIxYBn9+z0nMtJmzAn\\n\",\n       \"lXxtPKcdlJQo1MUDBw5IGq/AkbWe0whoB/qG93nmcPom5SHilbl4qF8jbLYiQvsuSh2PGJpLkLFO\\n\",\n       \"/Xs+N/A5gnHL+ZmcwuH+pEP9MxdNKlJJkiRJkiQDWaoi5aAIsYqttWrIaM2ql4zWEb7KxZqLzhyL\\n\",\n       \"9vs54Zx932OOOUZSb816uXg+9osj+Pxhhx0mqd9v9pPkoTbrMNYCn0fxqI2cWBbkT6I/0D+oZ/wH\\n\",\n       
\"sBb5PdYN30M5cavb1YLTTz997vd8nvriJ/WFtUg7uUKD9Uz5jj322Ln7YcXzXPiscR1+4idA+2HF\\n\",\n       \"uXXO52lvf176Jb5T3J/+gxXs9eV+Ifhg+ZmZrlhFmdlL45v7ez1vpCrgV+aZrDkfkzrctm3b3N+x\\n\",\n       \"dHkWno02RI1zn46dO3dK0kF/T/cloi4i/0jakO9Rh9QZyhJwn5LCxVyGGks5hipFKGHuv0Z90SeZ\\n\",\n       \"E6kv5hbmZH7PGPVdCfwEUfmZe3neKFIaBcvzRVHvnt8IjjvuuLnno33oD4xJrhep1dQzcwPPN9RX\\n\",\n       \"DcWR/sq/eZ5FKzNDd4vwJaSdKCcKMO80dnNoX/w6gbGOkrjo/FEeQR/1s4hUpJIkSZIkSQbSLSMv\\n\",\n       \"Sdd1s5WVlU2/b5IkSZIkSSsrKyuazWbrOo+lIpUkSZIkSTKQpflIvf3tbz+4nwrso+PPUAv77vhU\\n\",\n       \"sL/7hje8QZL0J3/yJ5J6Xw/8HNj/Zp8f/wH2w0vZcvFlIjrqda97naRHVq6rYX+bvD/kJImu5yoh\\n\",\n       \"/gf4CrF/e9FFF0mS3ve+90lafBQkz7VZaqLfj3b2DPTuA4SfDPVFxBX+J/gZeHtQn5v9fG9/+9vn\\n\",\n       \"ykf74x/h7Ypfjp+sjv8Kz+s+c8tqv0984hOS4jxe+LJ5RneH/FvR+FlZWal+NuaAUjQUbcLcQJu8\\n\",\n       \"9a1vPXjPzYD7vOMd75BUnpvo4z//8z8vSbrqqqskxXWMrxA+Tv58iz7jb9lzC+Dnii8PPmn33nvv\\n\",\n       \"3OeYY/B7jKId8f152cteNnc/fIbof94PX/nKV0rqI5Ddf5e5EF8++gP+la31ecEFF0iSLr/88qrP\\n\",\n       \"O9H9asfZ0Pv9xV/8haQ+ys/BF8vnHuqPesNHKopmLdVjKlJJkiRJkiQDWZoi9bSnPU3nn3++pF6J\\n\",\n       \"8sgOj3hg1YgChBXAqp8opK985Stz34usMK6HIlWrRAFKUSnaDUWBn1i1HgESWXuUCyuIn379oaDM\\n\",\n       \"jD3JHEonsQ+F+n7qU58qqW9/twaxEok+8/xFKFooVmOjFXft2jX3b/Ib1YL1S7+m3ogE4npYSx4d\\n\",\n       \"iDWG9cd1yN6NOrEsShnlS0oURErUehB5iSWP6kUdkeeGqBxUPeoeJYo6p45POOGE6jIsAuYO5ooo\\n\",\n       \"r9GrX/1qSX3dl+rYFReHOZL7+2kOU0HfpZ0Y24wF5jpvJ37SriiIHn1VygzOXMEYi6IcPYI4wjPP\\n\",\n       \"QykfFX8nh6ArUvydfsnzOyhWpdM0xp5FGVGrRKHQsXsTKXxO9Dnqg+dHpWfc+NqC35fyy0WkIpUk\\n\",\n       \"SZIkSTKQpSlS3/nOd/Tnf/7nknplqOQbxWqV1SSrRvLToGi50hOtMslNgnXF50qw2sUaqf0eUD5y\\n\",\n       \"rHA9rK5ov/fb3/72oPuVmEqJgqmVKMBKxXqK8gmhiEVWGMoGis7Q86tQTp/3vOdJ6q2vu+66S1J9\\n\",\n       \"DhmUMfoDVjdWElYt5bzuuusk9dY7VjT5obgemey9PdwnCcVq0WcikmXcM/4D/d99Faln/HRK+dKk\\n\",\n       \"tYqJn7WGhU8bYbli4XIP6mSzTmEowRzIHMDc50rNpz71KUm9Je7wvFyvdFpCqW/g78o5kNCao87V\\n\",\n       
\"YxQwFCXaizmAXQjyETF3uk9X7f0ZU14e+iB9M8qt5qw+B7IFxvi5554rqffdeuCBByT1YwmfqEg5\\n\",\n       \"KylRvHv37NlTVS73GZwK3s2tSlA0LvGN4txS8pNFz0kesW9961tN94dUpJIkSZIkSQay1MzmWBG1\\n\",\n       \"+7NYI644YeGyOndKq1yui9Xj+/RYt1juroTgP1ALVoqflM31+TsRNM7UClKJE088cdT3sXr8/KVW\\n\",\n       \"aEcUpejcM6zHSOHEykUBGZo1l35z2WWXSer7S6REYc2hhEHk40V9UU7PuuzZjYm8oj+ikLki5dbk\\n\",\n       \"Zp1kHylRQLnI/M5zEHHWAs9MHbivDG2Ahc+cgoWLWs33+XdtpmMHBQn/RuaU1rFAn6YvcV1v40iJ\\n\",\n       \"AspRUvdKCgQZyIlY/vjHPy6p9+tr9T9kDJF5/v7775fUz4WMCcrF9d1fkHcKn3d1N8Kz51OvKFzM\\n\",\n       \"zbXRizXq6Xowh+DvS6TuU57ylLnr+juPuaIW3kGl/jKURUXtgWf4Z87wkwVQYF/+8pdLkl784hdL\\n\",\n       \"kt797ndLkvbv37/u9WvrMxWpJEmSJEmSgSxNkfrxH//xg5YnCgNKgu+TQ+T7RBQX+/i11oL7GmF9\\n\",\n       \"YOVgxXK9aFXdul/MffgeJ5zXKjaueLBqjnyG3BrGqsEqLvlH4HMzFPL/UI9DI12wNrBC8O9wX6FS\\n\",\n       \"lBhWGM/FdYDfu58F555xfeo7UkId2smjLvFV4n70c+qh1j+HfoXChZXJeW7gStZmK5wRtAP12RKl\\n\",\n       \"57hPCW3ovi20KUoPygdjBKUDlXDo2WkeZTUU96dz1Zoz5/y8yS996UuS+r7AXOM+IT72SooUc8vJ\\n\",\n       \"J58897M1ctXLjyLlfnF+ziZzAWMq8qNjDvKzEh1UU8/NRn3T/pEa7rT6X3IfovUY01yHOQQfoNtv\\n\",\n       \"v33u+7R3LbRrKVptqB9lSZHivuQ7u/baayXVR4V6P4jahfxYvEPYBfrmN7+54fU2OtdzNalIJUmS\\n\",\n       \"JEmSDGRpitTDDz98cNWPdYQi5VYQViF/Z5WIvwKrZVbjJUUCuD9Wmp9I7deJrIuh0U5Y3EQMYDWX\\n\",\n       \"TvZ2q7i0asbKYPVPPWJ9skpnNc+qnPrxHCat1OYJArcmsdJQHrES/UR2+k3JmuG52Tf3fXBXIbgu\\n\",\n       \"1qFHjdZCP6W/e3n4O+3A/dwPIIL+irVFFF+UY2azod/RXu7XgwqE9eoqCO1dU++0EddCyUBt5tqU\\n\",\n       \"yTNFY0nTFozxReVPqqWU3R0Lm7kRlY8+h/IW+S5F+XUiyPyN8heps0SQMpZdCQB8VSKfFWCOop1Q\\n\",\n       \"1aPyUl8llZN6o28ytzLnMDe4UhTRqkjRLtdff/265cK/E8XV8XeRK038m+hV2i+qN96NjM3Wd11t\\n\",\n       \"vqzdu3dLalfHfW6McjTyLmM8R5nK8a2rnXMhFakkSZIkSZKBLE2Reuihhw7uR/tqGWsRS519c6wC\\n\",\n       \"LG8sVDI4Y63VRiBwH1av+NpgnUbKlvvODM2bhLXMqp9/l/wShioM1BM/I3g+fkYKme8nLwraA+ua\\n\",\n       \"56fd3NouEZ1bBSiSHk03NOstYM1FGe393C3uH/kMOvjLuNW5bB8o2qvkX4QKgbX97Gc/W1Kfm6jW\\n\",\n       
\"X0Hq5wpXKxlTKAVY5LQp96Dvo1ZSp2PPmuO6WPilPD/R93k+xiD/5jl4vhtuuGHd62Chn3322ZKk\\n\",\n       \"G2+8UdLaOQcLP4rsZCy9613vkhQrTaittVGPpdMRmHOmnnsYc9Qj9eG5zVDHUeB8TEeUziyM/k57\\n\",\n       \"8pN3nOfpcgWMqEp84fA9Y47hHVc6VYOIYJTHqfNI1c5xjj+/t4NH1JeiKGvb0UlFKkmSJEmSZCBL\\n\",\n       \"zSPFqph9bl8N+r9RbrCS8L1h3xslwVfLWEFEP/nf2TfF/6GkaGHpo9SMzTSONYiywnNiBbolvui8\\n\",\n       \"P17vrsxgLW6W7w31wv417YdChRLl1kakIGFFHjhwQNJaVSCKMMFaHApWl/cX+iX9GOXGoxRrcQVx\\n\",\n       \"qII2Fa31RnkZ39QbVnPkB7HePfkMfYVr06ex4L3vuIWO5evqZS3MGSgZEN0/gjkOlZHnoq/Ql73P\\n\",\n       \"uHrM50t5cmrLxfWjCGLao1bJYM6ujY4bS23Gbuo9Upcd7/v0qygKznPMRSo7Y4N+CV4e5jLah88z\\n\",\n       \"hk455RRJa6P/gPpgTqLcter/oimdl8p4r1Wahs6VqUglSZIkSZIMZKmKFNYR++Bu7aG8sNokVwhW\\n\",\n       \"D99HscDKcl8MPleyNvAxiawwlBj3OWF1PxaUKM6P4mw9Z2guG6CeUEJaFS6e18+RWlQWW3zlsOpR\\n\",\n       \"KKivyGrFWsEPBtx/xiN56IfUD88zVpGi3r3/UP/0O8+iTXlq253nY9ws2oetRGt/QDGMIsBqrEss\\n\",\n       \"feoWSxPVmb9727oS5RnOh/Zt9xPjuq1jGUXKiaKpGCOuIDBmyNszFhSPc845R1Lvh0neLNoBxYW+\\n\",\n       \"HUW1jZ3jWqH9mdt4HvqavzvoB7x7ojk0OjUhUj54blemHO7n7er383J6zkCiKfHZczWb73O6gI+9\\n\",\n       \"ZUcEl/pJNFdE49198mrn/FSkkiRJkiRJBrI0ReoJT3jCwdUzq0qsJ1aBWNQoElgvfA/rklVnZC3W\\n\",\n       \"7o9idbBKdyUDHx23osbuF3M/Mr3j9/Dggw+u+/mxkUMoetQr5XcrCWvDT0Rn1T40wqEVnhe/CxQd\\n\",\n       \"slLT7iiKPEeUEZw8S2QP5jwvQPly3zCuRz9ojYYjqhQ/A6CeaXfql+eIlNbSffDFcj+KrQqKJtbz\\n\",\n       \"GFWCtqNvlKKCIjWVPo6CNNQ/0cvDs7VGP7Wqk6W5iT7t0YCt3HvvvZL6evaM6yhQ7uMVsdkqKvVK\\n\",\n       \"PZTyJTGH0n7R510hYqxHihT9jTHLLkwplx9zis95Xi7eXdQ/ShTRfFGEts/1rf1wUeArxviln0UR\\n\",\n       \"vrQHP0s+gLX+z6lIJUmSJEmSDGSpmc19NYg147/HukEJwEL3fEhDc1EAq+vIKvMow6l8gVjtu69P\\n\",\n       \"ZP3WWrFRLhaui9UXWUfcx8+nipSoRZ3wjVXm5ylhVWGFuQIT1R85Vcim68oj98OqIdcKChh+FH5O\\n\",\n       \"WQR50E477bR1y4V1h1WJVYxCi3LI87tS5jBOuK5HirVS8gOZCvoP9e+KcwvUIZYnY4C2Q/3zscfn\\n\",\n       \"8VHBYqcPnHjiic1lkfoxxpisPT/RoY9PrQSg+g7ND+R5jpxbbrllWMEmJvLpoT9QD1EUGDAWSnOe\\n\",\n       
\"z72l+uXv5GtCXYanPe1pknplj34QKWJEoDMn+DsGP8TWsb1sJQr8nN1SrjnWDlF9+VxX7Zda9akk\\n\",\n       \"SZIkSZJkDUuN2vPcJq5EsQ/N51hFuo+Q5xeK9tex+LHUUS74Pfv8Hi2EdXv00UdL6pUvrBZ8aoaC\\n\",\n       \"0nHYYYdJ6q3XSGGrzWXD6hqrm+fgPoAfRZRFuPWsvKm566675v6N1UF7uZUV5bJx8DtwKxq1YPv2\\n\",\n       \"7ZKknTt3Sur7IXm/ajn55JMl9daQKyzUP+XwiK5SdKJz6623rvv76HypEotWovz8OFcIUZHwbaO+\\n\",\n       \"qE/G/2r4LMoSSoTnJOOeWO5kVSeTs6vQYy3xoUoUtGR330xqfXlqKUX1odKiJDFmeEfQvmT2pk9F\\n\",\n       \"ihTqZxRl53AqR4naTO4Oc3F0tiNjJopIBvpvNHcse24fS20eNsY1c0c0p9E/Wue8VKSSJEmSJEkG\\n\",\n       \"sjRF6id+4icO7m/iA4VChA8IVgVKCkqN+/Twd1bxrqywGiU6DqvQM4lj+fr+9KGHHiqpV6CwEvxE\\n\",\n       \"bvDzf7ycXBdlyfM6RVaCW2ElojPdyH7M/YaeFbhZoDTir0J+KKwvrAeep2S10w+4jvscUW/0P9qT\\n\",\n       \"ctRarUB/cYXRlUxXUvn8WN+/VlBxeH6stCivk7OeQiT1UZL4+WAlY1V6ziPqmYgirE7GP+NutQ8f\\n\",\n       \"dcoYI2+O+z9SRr7L5/k+90BhYeygRjKnOMwp3Ae1+5BDDpHU9zV8rnhmzzM1NDKU66E633PPPZL6\\n\",\n       \"OYb710ZGou5yXRQf5i7aIFKOmPOISGZsUa/R/T03IPXhudWoZ8YYY4jnp57xKSrlZ2Iu5Lr85H70\\n\",\n       \"Qdqr5CPlf+f+0WkRnlMRpY9+QP1F0WRbfS6fmkhpo996xLPnDGRcRpHrtaQilSRJkiRJMpBubE6i\\n\",\n       \"QTftutlQf40kSZIkSZLNZGVlRbPZbN1U56lIJUmSJEmSDGRpPlIrKysHo9WIPBi6v4vvEPvW7Jui\\n\",\n       \"epXUryOPPFJS74txxRVXzP0dPwd8cSK/hdr7RdSeQO73e+c73ymp34/H94d64bwrIK8RUW+uSuIn\\n\",\n       \"wn4y7VL7fEcccYSk3tfL951r8xKNrc9S5E90v8suu0xS3874ReCXgJ+I++bVQvu89rWvnbtvBL6E\\n\",\n       \"/Ny7d2/T/YD7fPCDH5S0tl6i86daod+96U1vmrvv2OtHedH4/aWXXjq4rzglH6WxfbOV0v2mipoj\\n\",\n       \"MvllL3uZJOnOO++U1PuQnHDCCZL6eiGXH36k+I4x9okgZQ7lc/wbf098WpjLGGulaCzwvhX1FYjq\\n\",\n       \"k7mPdxPl2LNnz7rXwZeJn/g+eUR6aexNDff7/d//fUn175Sh/cjrkzmO9uPdRP3iuxbB3I3fK3MK\\n\",\n       \"/eSXf/mXJUkf+tCHJK31+cNHjn5a8u/k3ct9Pc9UaZynIpUkSZIkSTKQpeaRqo0CKlFrtUQQ2cOq\\n\",\n       \"2fPalM5dGgurdJS01uzCHhlSivb65je/ueH1SnmlIrACKU8UAUHUHNF1HsHiJ24TeRFFaESZ5rEm\\n\",\n       \"W60/rkO5aA+PMBqqrPjZhSWw3qPnQEWgnH7CuxO161T+ktF4HHt9j6hqyfnSGgXHHNAaNbcsaiN5\\n\",\n       
\"S6AYwTXXXCOpb9Mbbrhh3e/dfvvtVdeP1FQsfsayzwElXJFq/T7RdMzFlKN01ppn46ePehQmLFqJ\\n\",\n       \"cpgTqA+iLKO5eao8YERJUn8oc7Xnfno9MfejSEGUgZ7dllr8zETKXRvFt9SF1FaBwcfCLgpvH3pQ\\n\",\n       \"I99DnmSBhvzJZF3baFwnYmwILGkGGFS1kxL3LSWro9MyCFyG9xduKYkhLxEfVB7qWgvtUtpKqz0q\\n\",\n       \"yBeCUZK9CJ7ft2iBeistoGDoMSBbBRaWhIKvXkh5m9CGvECoI09pQts897nPlbR2rNOGQ40/krse\\n\",\n       \"OHBgTZnXgxcxiUFLUH6OEKlNGAnMKT73scVVMr5aiVJkRCluSvicx1YPC6PSAgGjif7DnFJKWEk7\\n\",\n       \"+ruD+377299e93ulrcepwGhjAUPi2Vp+6qd+SlKfMqcV2pH+7As6T9ETpQ7y6y2a1vvk1l6SJEmS\\n\",\n       \"JMlAtqQihcV5zjnnSOqtSBJ3RkfAgFs77izO35EfHawZv8/QZHl8j1UuVsLP/MzPSOqtZxwzS8pC\\n\",\n       \"7bEDrc7r4FZL7ZE0JUjq12rVl6z3sQqLHxvB1mZpK6r2kGa3brh+LZESBX6odAnac6x1F1nVUX9B\\n\",\n       \"1fEjcFpBPVhPKfW+wBaB92m2QxmLzAUESlx11VWS+rbCgvY5gb7DdSK18cEHH5RUf/QExwrVuhWw\\n\",\n       \"Tc7c1Qpbd1deeaUk6YwzzpC0uIPIh26ZeqLT6ADv0nFhEDmp0660WwTq/fe//31Ja+fmqP42K3Em\\n\",\n       \"5UNxbFWkhipR0e4NSjBBC7QfW8alw6K3ymHJTipSSZIkSZIkA1mqIoWFijXk1iT72uxXR6t4D0F1\\n\",\n       \"RQqrgNUsfhORIlWylvh76z43Vg8KAv4S/D6yZvk89ykpCThOYkUPtSpgqOJD/VAed34fegyGM/b7\\n\",\n       \"/nyt/iUR9NupDv1FbSBNB34rHGlUy1QH30b9PnJ8rj10me+jGu3fv3/dz62nGPrvonvedNNNknqf\\n\",\n       \"IhSO3bt3S1qrCvvhsEDfKfm9tSo7tc7bfv3IF8iVCdT9Eq3+fK20OsmjIDGnoAQyx6FAMuZKxyt5\\n\",\n       \"f+F6hOczt7hytWvXrrnPf/nLX567/9g5dyp4B9A/okObS1DfvHsiv1V8nLxecRLnOnyOfkk77du3\\n\",\n       \"b8P7b9UjcFKRSpIkSZIkGchSFanIWnQPfw+rd1Ca/PPg4djsZ49l6OoYK6+2HOy7oySUFBisMbc+\\n\",\n       \"sJ5aD8Et+RlEUD9+P/xQIkVwKLWJPjeLUrRhK/5cUeRLiUVHDLWWx6HeIiVqCpgzIjUPixnVdKzC\\n\",\n       \"QDTXovumH5pM9NhLX/pSSX0UWaRIeUQwKv9UKqbTegA4CmCtotaaBJa52d8Z1Cv9gWi2yIeKdxWp\\n\",\n       \"XpYF/ZgxOTTyHF8m0jygJLn/JrsnPgeg1KH00j9J6IpvFFGi7kdL+03lIzXVbgikIpUkSZIkSTKQ\\n\",\n       \"pSpSUR4e/BXYF8VCrY1C86ghV6qwgsYeWcGqtlV5aL2fK3euEKE0sapn9e8RPJudDC4C/4OhSlcE\\n\",\n       \"PnfUR6vv0GOFsTl9Fu1nMNQPYyuAIkN0EXmE8OUZGum46MSeqJP4fOHTxX1Rf/lchCsxrT5MrZHC\\n\",\n       
\"PkdNrZa2zrXUF8oH/eG0006T1D8fc1ikSDHX1kZYLwreCeTU83ct78Df+I3fkNTPLdddd92G10M5\\n\",\n       \"ckWKdyvtyHhBoUIRjaItUbpckaLcrRHKEVMlH4ZUpJIkSZIkSQayNEXqSU960pq8QqyOsWawDviJ\\n\",\n       \"dRD5VrH6daWDdO+eUXvsqpTVc6uy0rpP7dYZygvwHPwe68NX/WN9V4AIjKF+E1MrUYAVuCh/jqHg\\n\",\n       \"p7LoCKitgo/Pqf0RpgT1mzFJ3+YZFn081FRQx54ZHAv+rrvuktSuMLUqcK0Rvt4nlh2V5bsLKHQo\\n\",\n       \"k5SPf5fm8KF5vaYCZTI65QHliEN+UX4iRYr6iA4d5l3LO5Z/+7FbDgqwHwEDzO1THYU0tf9qKlJJ\\n\",\n       \"kiRJkiQDWZoi9WM/9mMHrUFWmayOWfWSVdj3XSNY/XpkDN+bOisq92mNPGk9UBPlC2sPJQ+wejbL\\n\",\n       \"N4UDXbHOavMDLZqpFLda6G9Y/dFhvVNZpWN9+mBoxvuhML4XpUiV5oWNQOXmJ326VRlhDnCLfCgo\\n\",\n       \"TLVRflj6bvHTV4iWOuSQQza8jh+oPbUviePKQOQ3O7Tvcz3aZ6gSEZ2ZF0GfnzoyuRV8tCJFijmA\\n\",\n       \"Q37pd5Q/qq8o4pwoQRQklF7u43M0ShS+fVF98Ryt787NojgDdV33Z13X/X3Xdfes+t1K13Xf67ru\\n\",\n       \"zkf/+9lVf3tr13UHuq57oOu65y6q4EmSJEmSJMumRpH6/yT9T0l/sep3M0kfms1mH1r9wa7rtkt6\\n\",\n       \"qaTtkg6RdHXXdUfPZrM15t2//du/HbQ6sIKwWP0srtIJ3igkWKbui4JVxyqb6xPBMtRSxsppVUKi\\n\",\n       \"Vb5HrPj+MvmY3CrDJ2izFCmUF9qFco71AcLqmDq6cGj+rBL4j5BTJlKkpvLZms3lJe0AACAASURB\\n\",\n       \"VEodYBwsKorTFaLIL2IqxvjV0DcYY7QpPiM+59DWURnGnk3nFv0YtU3q5zbmupKysmPHjrl/L9rH\\n\",\n       \"xyOsPQ8WczWKIQpT7Zg6/PDDJfXtfMstt2z4eXx0UD687x555JGS+ndKdAqC764sC3Z9ShB997zn\\n\",\n       \"PU9Sny8qytfFHOLPh/Lkp3HwjqR9+Tt+vShh99xzj9aD64wdD4uiWKrZbPYNSev12vU0thdL+vRs\\n\",\n       \"NvuP2Wz2HUn/S9Lpo0qYJEmSJEmyRRnjI/WbXdf9P5Jul3TJbDb7P5L+m6S9qz7zPT2iTK1h9UoZ\\n\",\n       \"RYPVP6tc/AxYjeIz5UT7v4BVwz35yXVrs+QCq2oUmdK5WChXpWg1t6w9AgfcCsB6mjoSIQIFEAUJ\\n\",\n       \"62SoIoWVgRU+Va4QrFj616Io+d5N7RtU8l8oMXV2bax0rHPOAgT3xcLajRTmMdDmtUoA+XBQgkoR\\n\",\n       \"vZ7hHIsaP0H6Lud58vmSPxoW+bOf/ey58rfmDIv6Bkqa5+dxPO/R1H6lDrkCgTkVP1CiJv2cUvpw\\n\",\n       \"bTvzbonagTm0pHQxV9FeP/mTPylprW/PspUoiOZk3/3AR4qxW9rdiPxiqQfeye7bhjJKxnfakc9F\\n\",\n       \"192KEb+rGaqTfVTSYZJOkvR3kj64wWcX662YJEmSJEmyJAaZ/rPZ7ODyu+u6P5X0pUf/+b8lHbrq\\n\",\n       
\"o0999Hdr+O53vxvmonBYpbKf3mqJ8z32ybEiPCtrLShMWGslHylXojyvju/L8/vouq5Q4QfAuUVT\\n\",\n       \"45E8WIluJaD0tebecSXO/UyI6KAdazOWozZMdbZixGbnGhqrPB533HGSpO9973uSeisfxbc1dxDW\\n\",\n       \"Pu3oUaVOSYkisoxcPS20KgEoFMxFKDo333xz1ff5PJY8ai1Ky7XXXispfhbmIqKXKAf+fKUz/piL\\n\",\n       \"UMmjORJlgrGM+utzzI033ihJOumkkyT1bbmoHGg+h/D8jFnmRvoo7VXbzrXRdrX+h63Re5uN+5xF\\n\",\n       \"/c7nXJSkj33sY6Pu77tDUb1u375dUj/XbFYEcS1Pe9rT9NBDDx3s91FeLRikSHVdt9rj8ucl4SH2\\n\",\n       \"RUkXdl33xK7rDpN0lKRbh9wjSZIkSZJkGax2CTn//PM3/GxRkeq67tOSzpX0X7uu+66k/1fSeV3X\\n\",\n       \"naRHtu0elPQaSZrNZvd1XfdZSfdJ+oGk186CJekTnvCEgwUt7X96ThaUm7PPPltSvwpnNew+T9wH\\n\",\n       \"6+tb3/qWpOEnuqNEDVW0gOfAOsDKal2do3gdccQRknprjutH1lt0rhV+Jli7rkjRXuxzc3+UI/wc\\n\",\n       \"sHpRIGgnj0xy69l9mvDboN0crFbKQ/mwpiMfOp7Pzx+rPe+Lz2EtU98eodLqkxTdn+tRXtq1NXM4\\n\",\n       \"/cvPoERN8SzO1D/1i0JM/VG/njOmFZ5jbA6mFrDE8XNErat9Bvo+qhxlJ29TpAjQhvguMReV1DpU\\n\",\n       \"VnyduD8/vQ8wFvk96vUdd9yx7vV9rOD/Rl+jb9NWjGHGLNM9v/ecZfiU0efcR4rP8XPoHP14xX3a\\n\",\n       \"6C+0X6REDqWUadzb38/cY/eB3zMuxpaPfs54bI3YjqIxI4oLqdls9j/W+fWfbfD5d0t6d1MpkiRJ\\n\",\n       \"kiRJHoN0i85cu+5Nu262srKy6fdNkiRJkiRpZWVlRbPZbN3U6lszu1WSJEmSJMljgKWdtbeysnLQ\\n\",\n       \"JyPyYSn5fnjmb4/2QvX60IceScDuvhf41rB/zH4vKl2trwb7wG9729skSQ8++KCk3j+CyAT8H/Ct\\n\",\n       \"OvrooyX1vl5Ef1Ef+MRQLnxVvvrVr0qSLr744rnnXDTcp/V+0flZzvHHHy+pjx57zWtes+H92Acf\\n\",\n       \"66sG3OeP/uiPJPX+M54raCzUx+/93u/N3beW0rlj+LW4ryD3ufLKKyXFWZ7PO+88Sb1fAbmMGI/k\\n\",\n       \"SKI/e4Z0ItAuuuiiufu2go9blD8OGB+/9Vu/pcsvv1ySdPfdd0vq6+icc86RJL3nPe+RJL3+9a+X\\n\",\n       \"JO3bt0+SdOGFF0rqMztTZnyR8P/Cp+dVr3qVJOmDH3wk80spS/wv/MIvSJKuv/56SWvzDhG9hz+d\\n\",\n       \"5zPysUfUk/ua1BKdIrC6Llffzxnq/+f43EmUI9fHR4qf1AsZ2M866yxJ0l133SWpb0/eLfht8g7B\\n\",\n       \"d+2SSy6R1D9fNKaiCFLai/5Ae3o7MHf4WKA/1ebpOuaYYyRJBw4ckBT78PFOe8Mb3jB3v4idO3dK\\n\",\n       \"6n2C/F1c67PU+m6gn9F+N9xwg6T+Xcm7AP9Zn+P9fvjx0h9L+eBoN/7O8/nZjPz9N3/zNzd8nlSk\\n\",\n       
\"kiRJkiRJBrI0RepJT3rSwdUwESTkbCDSgPOuohwOP/dzPyeptwLIzorVAZGy5FbfUMXBrQOUJqKe\\n\",\n       \"sNw9nxS/p3ysxnmepzzlKZJ664VVfGtU07nnniupX2Xv3r177u9Rdt6pqLWWUT48u7JDeRd1hptH\\n\",\n       \"otAvhp5A70T1Ucr4jQKHFRtFNHn/d0oRKfv375e09pwurPooehJq88NF0N+x5kuK1GqrnrI7KEG/\\n\",\n       \"/uu/LqlXrIB7XHPNNZJ6JcozpbsyUZvPiJxdHpFKJnMs8I9+9KNV1zvjjDMktZ/KAMw13qdrc4gx\\n\",\n       \"lzAWaTPKg9LEHBZFL/rcydzGXBnlxkOB4qdDffOzRDSmUUW9/JST/sFc5FFq0Vxdq0ShnDAnlaJJ\\n\",\n       \"S3OT5xsjahQV+UtfeiQl5KmnniqpV/7YBfnbv/3buevRvsB1mTuiOQqlifaJlEAUMZSm6B1FJDhz\\n\",\n       \"NYokiqvXN6o6awDKQ/1w/9rowVSkkiRJkiRJBrI0Reqf//mfD/pqOFhtpZOrsQJYNQ61zkpE/gQR\\n\",\n       \"vj+LleKrbqzZz33uc3O/57nJAcPqGSWj9rR79stR+li1A9YDChrW6Gbm8VmPUj1jdZROAi/52GBN\\n\",\n       \"eeZzrou1Rj3RXrXKHdfHmiopK1j37Pe7akK7kEU7svbob5HyVVLysN48v5Zf3zP8l6A/RueZkcOJ\\n\",\n       \"v9Mf3R8HXy1UHPxGavA6BVRaV0NLilOtOrl37yNHkP7sz/6sJOm5z32uJOllL3uZpF7lc9Uaxchx\\n\",\n       \"paJ0jqeD2sdcizJQO7e43ypjhlxjxx57rKR+7qnNUs916OP4rNxzzyM5nzc7yjxS6Kg/z09Um38M\\n\",\n       \"9ZkxHD0XiSBr322lXHIoaPQX+jcK1Atf+EJJvUJDv3UlijHo56IyRumXpXNBo1Mq6DfsSjEHUO+O\\n\",\n       \"+1DxDo3ODIzmAdYSkdIZkYpUkiRJkiTJQJamSG0EVkDJgofIcgbPpN1q1fA9jz5j1euRK6y+US54\\n\",\n       \"Hj8HKYLrUk6s1ZJC4NYQlj0/UZ5Y3ROh5M+D8jGVT9DU1O5bl/pPZG3WRgGWIm9az/jDHyTqJ1i/\\n\",\n       \"pSy9JasYpS3ylaJ+sQpRLagv/HqIGMOfqJQRPlKiwP1sUF1QiVDSsLqjqMMxRGoodVbqe6U6QIVH\\n\",\n       \"mUKRuOKKK9b9fJQ5mr499JxHFIVI8aqFvu9+cfSZKCIbXNFgzLQqWRHM/a0qO+pppOrWnj2IyuyU\\n\",\n       \"lChgtyJSVlCFUZhKUZSMQcYU/RVlCB/Dkm8ZqrnPodQX9cN4KY0Lf7dyf547Ot8VPLM+uwFcz/tn\\n\",\n       \"7TuktLaAVKSSJEmSJEkGslRFyi1d4N+l6CJW36x2o7xTU/n8uHUSWYNY1jwHCkLJOgNW5zwX5ef5\\n\",\n       \"nNp9eT+5nHxXkbUQWUtRuzlT5ZqJGGptQtR+tfmLaGes19I5aSWFj/41Nl8VvnUoot6ukXUbEY1P\\n\",\n       \"r/da/5oI+glWPPUa+TOsR23fbKX2zLfaOkCZQgWOlKfoOVABh0IfLKmEJfC58b7gc43jczb4eZUO\\n\",\n       \"eaFQVIj0dZh78C+9//77Ja0de9FYp15QehhTtUoURHNMq8ofzfGtvnGMDxRW6qf1bDveaZEfc200\\n\",\n       
\"K/i7leu72h8phPRD5k4UKN4RqNv4WFFv0e4DilatYpuKVJIkSZIkyUCWpkg9/elPP7jqIxstFimr\\n\",\n       \"yNJqm8+z2iQXRkkZcIaeOB0pLvhIUQ5+RpELDlaMn4ReqyTUZuJmP5ysuZ4zJbKaaq39RSlRsKjo\\n\",\n       \"Qqxerh9F6HjumAjqF2s2Kndk3beClY+i49cb6xdD3rdWqxMi1YhxgnIb5aNCzUC5XT1PRJF+EPkT\\n\",\n       \"DsV9VFohGo3M1/jJobRE5YzU6Vpq57qovlDQSj4k5D6jbWkfFCXPq0Wf5XPMQe67xpjiHfKNb3xD\\n\",\n       \"Uv9cRA2SDwglwxW40lxGOY844ghJ/dxcW39Rvxj6zhkK7chP9+NlLOH7xLvAxySw64OyBcwt9M+h\\n\",\n       \"cwTjivaln0R+pz43M+55DvprKR8VeD2VSEUqSZIkSZJkIEtTpI466qiDq2BWtaxysUqwKrHk2S9l\\n\",\n       \"lcr+rispredOYRV4RnLP3eE+LpTPc8+gBFEOVtO+uuX7KCDs16J08G+u5xEu4FYj9Vnra8PzbNUo\\n\",\n       \"vRJTlxt/CDKEl/JVldQNz64bfR4rbCy0O0qaK1Ilfx/6GfUQZfM+5ZRTJPURNrX5tSIVgHGCisF1\\n\",\n       \"3apnfG6krEVq6FRKFIxV95jb8DVhTot8fyCKBquFPlGKJI7qy+fACPIloRjQ9lzXFSl2Fyif52DD\\n\",\n       \"b5bM1Sgf7EbQd4866ihJ/ZhiDPscUcq75Bnfp1KQNluJor55BzGGmAt411EuFLjSnORqPMoP7Uj7\\n\",\n       \"tT4vY552Z1zQ7k6Ur4p3MIqa96sI+kX6SCVJkiRJkiyYpSlSs9lsTXQaq0lWjx7RgvKELwnWEN9n\\n\",\n       \"1R1ZvKwuo8zPRJpgiXtUmFszlNdXwSgCrPpR2Pg3++ysiska6xE/vmqOctxQjzx/bR4kwFqY2lrf\\n\",\n       \"LGinqaxG9ycoKVL+Pa/H2lw4WO/0c/pjq9KGYlTKcxbBuMAqJIO4qyRYnZxxORbGPdmxUZ49qzJs\\n\",\n       \"llW/EWMVKcY8/mC1kb1j/ehQR8f6WkXnYqKYMTd5eemj3rdRIFDloxxtRBx7BmveDbQLfRnlpTUi\\n\",\n       \"FsVwbHTjokGpcYWNdyRzEvVNu/s7EJ8y6jXKPA4+BnlX0y+4Pu88n3toJz7v/YTvMU7w4XKYi/z6\\n\",\n       \"rf7Sft/a8ZGKVJIkSZIkyUCWpkg9/PDDB/dfWfV5pmgsUlbHWClEPGC9oUiw2o5yufD3kg9Va64Q\\n\",\n       \"h/J7niuUBs8tEkV2YEXwvJFi5IpJawQROWnwTXms+kpNpVDQr2gnrF38Oagft5ZL7cP3o/bBj4Rx\\n\",\n       \"QX+h39fmKEKZqz2fK8ojRj1EfglD/R8iKDcqBvXL+EeRrVFOsayp61qlJ8Jzw3mZh0I5a303gL5U\\n\",\n       \"ikKMxjK/Z+6k7Us+Q0Cbcx3aiPpgLinVj9+P6zEGeD7qh3Jyf+qL9qEv+99rz0mNGBqVORWe+duJ\\n\",\n       \"+jfvVurRxzh+jb7rU+tn7H679Ev3c6Y9UQR918aVKMrB31EEo3cS5XefrbG7LLWnkaQilSRJkiRJ\\n\",\n       \"MpClKVL/+I//eNB68IgBcN8fVqVYLawW+Ttn0i0b9mk9OgrLvjbPE9YayhwnYZNrBqZSjsjRUboe\\n\",\n       
\"1ufQc75KRCd8R4zNBO4Q9Yh15dF2btWVsmnzefprFAnj+/me04XIJHyVprKSI2sVX0GyQjuMX1SV\\n\",\n       \"seei+X2pb6xaxr/fZ73+QtQWbTJWkaLNXIUcm0eKfEg8Y23kI32OOQXfIle0orFMneFbwpyK+lnC\\n\",\n       \"FSzK4z4x0ZhgLHhmbBQUnsejzIA+y/Pzb8pPe+CrxXOWzqncqrRGogP9gfpwnzXaAd8ydidQlEr4\\n\",\n       \"O8Bz69GeKJ/cBwUpmjPoT8x5vNujyHV+z/PwfdYUtYqk7waUolIhFakkSZIkSZKBLE2R+ru/+7uD\\n\",\n       \"q9HIxwKrxK0RFAhWt1gti446I6qqFAlAObD6/FwpLG4iFbByeS7P+cFzokx5DpmpzhWrzeKKNTGV\\n\",\n       \"IoX1gzVAlNhYhp675u3G80b+IyhHpRPTS2f3OVjPWHcoMlh3teqB4z5RkV9O5CtI3rPnPe95kqQ7\\n\",\n       \"77xzw/u1+tzxfO5XEbXjer/HAsdCPe644yT1Ea1RNBhgiZZURD43VJFCOWMsRZmbHfcnbT1FgPuh\\n\",\n       \"QLT2JdqUuuf5icYr+c9FOdVQqGjzKOcZf6e+qA9yv20V/07GylCi/Ej+d+rfFRT3raJ+aRfmFuZg\\n\",\n       \"7hMpP46/A/zdxb8j5dLf4cxJKMj8nX7leccAxdF998DnbuZAV/qoR/pdKXP/wetVfSpJkiRJkiRZ\\n\",\n       \"w9IUqRrrPFqFs8qN8ss47KOP9ZOotdpQDtiXZVXNM2MFsJqPziNiH/uGG26QJN11111z120FK8P3\\n\",\n       \"jz1KqhY/B8kz0NeCden5oCA6eR1rj3r1eiwpUfj2eNTjLbfcMnc9lEQURLe2FuUrBm61D1Wi4OlP\\n\",\n       \"f7qk9nxj8IIXvEBSX2+l529VB77whS9IWmuNRqxWPc466yxJfZ4h6g7Ls7ZvoqjUZkBuBYUMFXPP\\n\",\n       \"nj1N3/dIUtoC5Yyf9BXKiaruaiv14menRdCmPqf6GXwoXhGu5HH+Z62KPNYv78QTT5TU18vtt98u\\n\",\n       \"qexLxRjCb/XGG2+U1D9Pa/RbRGmXwCPR/X7us4TazpjyqEio9dXzsY2iRD9gbuadRb/w00l4Tnyh\\n\",\n       \"+DvPg9LGdR1y7nm5Udo8qtPfeVF/q41ITkUqSZIkSZJkIEtTpKQ+1wirYVaJrD7Z5/RcF6xWXcFg\\n\",\n       \"v9z9DDzrKZYuq+mhZ4Thq+Tfx0qLcmSwCuZ5PIN6xNhcKKzu3cJv9a8o+Yix6sd3iPvye6wDsjl7\\n\",\n       \"xvebb75ZkvSc5zxHUm/1UU4yX0OtMglHHnnkXPm8v3g2X1cdsKpot0VnPUZ18DMlh+JnO9ben/H5\\n\",\n       \"yU9+UtL47NoO8wH1Sr9BXaFdNrLSGYsoCq19O4K6b/VziyAb/NCs8B/+8IcHfa80dj33WClPlUPb\\n\",\n       \"eGRxLYw1ftLnUINRzLydI5j7UWbc1+0lL3mJpF4Fja5Hn2QOPvnkkyX1Coora/STkq9RVL9nn322\\n\",\n       \"pF6tp159zLqvnONRbIwpnnOsYublph4iJRKFhzMT+elEc3o09qPdJuZm2sGjBvk3PlFeH3nWXpIk\\n\",\n       \"SZIkyYLplhHd0HXdbGVlZdPvmyRJkiRJ0srKyopms9m6ElUqUkmSJEmSJANZmo/UysrKmqy6kf8B\\n\",\n       
\"kQb8ZP+cfXP+7fukqF6bpX5t1fvt3LlTUh+R4pQyU5N75OKLL5Ykvfe975UknX/++ZL6feXdu3fP\\n\",\n       \"fY99fnyQiLqjvYlcYl8dHzT271/72tdKkm666SZJ0lVXXTV3ffwW8AOIovrwHyj5wlGP73znO+ee\\n\",\n       \"K8LzTbX6SnG/97znPZL6iLMDBw5I6iO6tm/fLql/nquvvlpS7/9DOYhuxB/E/T2i/kI9Ep0Y5e5p\\n\",\n       \"hft87GMfkyQdffTRknq/D/xL3PcQHzb8GuiXlA//BvwtqJ+zzz5bn/rUpyTFvhqlTOTucxKdQ1ga\\n\",\n       \"e3yPspGN3tmxY4ekvi3xDaFO6INbdW4ZivuXcp9PfOITknr/SaDd3ve+90mSvv71r0uSLr/88rnP\\n\",\n       \"nXfeeZKkO+64Q1Lsx8f9PvCBD0gqR4aOxeuTaEF85OjT7gd70UUXSerHDP6jvPuYm9lZwufo137t\\n\",\n       \"1+buFzE0157jz4cvm0fv4QdL1CMwTvFZOuWUUyT1cyDvEPxjL7zwQknSRz/6UUn93M8cgU9j5GuF\\n\",\n       \"f6yfY0o/8/NFX/WqV63/4I+SilSSJEmSJMlAlqZIHXrooeEqHFgtY61hnfE9VuGsQlkF33333ete\\n\",\n       \"j1Uwq9Cpo462Ki9+8Ysl9Vl/UU62bdsmqY+CjBSp6MT7K6+8UlKvhDhYh9ddd93c/WHv3r2S+hwu\\n\",\n       \"WAlu/Ud5imi/qB2jiJAStZEsU53xh0JHRJVnSCcCiuf0SDQ/UT3Kwh1Rqsex0K6oDzynn9QOKE0o\\n\",\n       \"aswP/ETJ5DlXX4e6i6J7ShnIPfrJ+2IJ1D3mrlKkLeof+W6e8YxnSOpzmZVUTup2UWfItUbt1RKV\\n\",\n       \"15UooN3e8Y53SFqbSw2FB8UCtdMzv/vYiPIE8dz+joHWs9scFBrqNboeY5qxyRxN5DC7CTxHa14z\\n\",\n       \"n9tRUBk/HtFcq1z5mPV6jqL6UHJpX49a9Ihq3mEoUh71GUG7+Pimn1HP0Ry1ptxVn0qSJEmSJEnW\\n\",\n       \"sDRF6h/+4R8O+iNgVTn8Hl8qfCbYV/ZVLvurEaUsu1PtF5dgNYwSUZs9dSj4Z7h1i/WO9RYRWfFH\\n\",\n       \"HHGEpN4qov5Qgr72ta9Jin2zAKUC68/rH9+aW2+9dcPrtIL16vvoY61wsvVi7ZRy9/A5rGfPfQOl\\n\",\n       \"s/xg6LlvwPOTv8tzC0XjJPInAj6PdY1640SqBGClUh+rrX5XE0t5dsbieYLczwwV/NJLL5XUn06w\\n\",\n       \"b98+Sf0zMAb89AM/64t/48PBGKSNIlU56lP8HiWCUxTAxwI5/lB4aHPmgFLb0Xd4Dld1+X00J0ZZ\\n\",\n       \"/f3c0sjPj+cF3hn0Gz8rjuclxxlzEedL+thwojxE3k7RnHPttddKWtsPqH98zfCrHKscoowyhoe+\\n\",\n       \"C+n3fJ92451OJnKHuQF/Z/o5/cF92YaeKlGakyE6289JRSpJkiRJkmQgS1OkfvCDHxy05ny1jXXD\\n\",\n       \"yeh+ojMWqFstYyMvFq1EAc9dOkdpLFhpkZVae//IqqL+sYqofxSpkrUGnvmcKDQo7ae3+rFAFNEx\\n\",\n       \"1qo74YQTJNVnd0ZBojy0WwnaZepccDx/pC5g1btVF7UDKgG+eED/wwqtVQL5O/MAypm0Vo1zJcqz\\n\",\n       
\"po8dg1jMnvkasGix9N/0pjdJ6v0D3/zmN899Ht+hk046SdLaOY0xh5Llka4RKCsO10dhcbw+3T8P\\n\",\n       \"/8Zjjz1WUt8no7FFvUe+J/SRWv9Gxsr9998vqY9qc/zUDEBBpP78HcDzoiS2+s5E9Y5PF5nKKZ8r\\n\",\n       \"LCg5KKv+faLTUOCi/oxfMfUfqdb4sbpvVCsorPRXykl7ofC5bxPwe8Z49G7nPj53+NxY+65gnEbt\\n\",\n       \"FpGKVJIkSZIkyUCWpkg98YlPPLg/zWqSVaOvTskrQ26QoQrEkDJK431OHFbnWMuLyi6PFcLq2v0b\\n\",\n       \"yEtU8h2jXRysYiIp2KcHrKvoTELwk8ldkbjtttvW/R5K5qL8X4aCX0PpnK2I2oggrj/VeXJO5KdS\\n\",\n       \"618APA8+eliXPq7wn3BfO4d+VKvcrYa+uGvXLklrfYtKPj4OljNl8r7L2HjjG9849/vILxT2799f\\n\",\n       \"df/aSEssbc+1RhsMnePw6ULJQa1kDvfcgNRPFFXl5StRO1ZQYrx98dEq7UZQfygdlBOfqcjnx5UU\\n\",\n       \"rkPkKX9nro6i+HyO453himc0ZlFM6f9EWuN3Sj20KlHRu4HxQH1R/yhwvFvxU412L0r+w1wHxY76\\n\",\n       \"dCXKo1spl49X5ijq05XAiFSkkiRJkiRJBrI0Rerf//3fQyUEJYNoM1bPWDtYFYtSjIDVO6vaaN+/\\n\",\n       \"lVardyhYg1jb7uuEVViqR1c8qBciK6ifKOKm5E+AVUD0INYZoLz4dbEWUChL1gvXxY+ArMCeI2Ys\\n\",\n       \"lANfPz+xvUQpcglol6lz/ZCPDV8vlELqG7WE8euqiFup+HmU8nPx3KXoW6xNnr8lnxd9HsuUZxra\\n\",\n       \"B1AEsPi5fkkpKfk0eV+NiMYsigdtV5sbrRWUEpQ35pho1wAVMVLSUIaG+v+h9kenZDi19cKcR3/x\\n\",\n       \"XZRaGNO8A5gb6Ee1Edz4hBFtSf/z0z0ApYmfRBZPlQsvgnc244v6pl35yZh3BY/6pt/4uKK/RD5U\\n\",\n       \"9AfGE+UpjT/m0trdjlSkkiRJkiRJBrI0RaoGVtdf/epXJfXKyaJX0YAPUKsHP6te9lsX5QNVgtU/\\n\",\n       \"kRiRYkHkURTd56BYsA/Oav+YY46RtDbfU5TTxXPKUM+unGFVuF8FnyspOChDWDfsx2PFlPxVWkEF\\n\",\n       \"wLcP67PWl4nvl84GhKhdW3MoUY+oIChJfm4WvnUedQeuRtT6NGIllrISR76UNWCRU6ZWnxwHS5o2\\n\",\n       \"myrDOH1y6FyHglLK21MCpSNSkFAaqIcoCgv4XMm3a+ic6Rm/S3Na7W4G7wLPV1Vqb49Ij8ZMa8Q4\\n\",\n       \"5cZfkTkyyrPl1Oaki2AMRkouczvKIHMREbYPPvigpH5ujHYtWAOwe+CKVDRH0k6MS75f61PInFrb\\n\",\n       \"P1KRSpIkSZIkGciWVqRgaPbSqUAJq1UUsKZQTLDCaiNMpiZSLLCC3CepBFYWq3wgp0wtKBvkoMHK\\n\",\n       \"cx8y7oOPGlZylE/MOfTQQyX1VhKKFIrd1PnDUBFqrRn3UatVoiKwTlFdav1A6N/UJxnpqR/P61ab\\n\",\n       \"9bdWkcJaL7WHP0/LGYGMzbFKFHi0nvu6lOYM/P+iZxqa54rvM7Y9ItLrOFJ1S2omajblL6mEU/mz\\n\",\n       
\"RuVFcaw9b7LWrxC1HOWEdimNrWhumiryHF8p5tDNimjnPpGPmPcv3n20G/2gpDwyPqLcdRHeLtRT\\n\",\n       \"rYrdqnanIpUkSZIkSTKQx4QitejovAiUJHx/SmfGAVYoq/ZWJYp93UX7gmENtNYr5cLK5TpYg7VR\\n\",\n       \"Z0Tp4avEPrrnX8LfgSgx2oX7lqxProf17CwqDxN+KiX/Ga9/P4uRf9f2o9qIJcetWe5LfWOFY5Vj\\n\",\n       \"JU6dYb3Vql7vzD7KiDpH3aGwoNSUcrmh4NCHougg/CG5Lvf1vDyoq/yMIpepA++b7hcYqXeorq5E\\n\",\n       \"RYoUfdXVYL+/z8X0iVo/vKl8yEpKz9Qqs/tZcv/W56H+8DtkLvUca7XQbrTTWDW7lVpFz5VVlKzS\\n\",\n       \"9+lXtadkAOOZcTvEn7KFVKSSJEmSJEkG8phQpPC9KWXgnhryV7XmmKk9hylis6ISsVaj3CMRRIZg\\n\",\n       \"HbPq91wdUbSeQ2QNUW6uSHnUJPfH6sAqiyJWsPaI5KA9sVJqI11qISvuEUccIUnat2+fpHrrFWsf\\n\",\n       \"1cFVlbFEOVuicniEG5E31N+yolJhPfWBOqMO3VJHGcAijhQO/Pii0+eBOuJ69Cn3IaEPluayyO+L\\n\",\n       \"vlXy72MOQQ1FjWUs3HTTTXOf53OR7xjl9vtRr34+5n82mINq1fYI6pH+Co4G6gAAIABJREFUwruC\\n\",\n       \"MwYZ47U+fC996Usl9XPaFVdc0VQe5nD8H1HcOFuwBOOsBPXmUbNA//R3EUpb664B7cN9avOyOYz/\\n\",\n       \"EqlIJUmSJEmSDOQxoUhhRW0WRHmhjLTuzy4qi/BQsHpQIrBWn/70p0sqW8coPoD1yfVY9XOiN89f\\n\",\n       \"q0gBnyfrLmANooD4vnrJWsEX6957713371P7SKHIsb8/NEIMq4r2oX4ZD0PHRWv/JAcRVh0Zz++8\\n\",\n       \"886q72ONlrIvo96gQBOhU/KZWt1/XTnwKB/PXVaKCMZS5nOMmQhUPPqoq9OlUw1K/mbUBWOB5+Cn\\n\",\n       \"K1lEupZyxXGKRERJZd8sFb0Ec/fUp0egxPFOGKoOo/xwRp9HPqOY7NmzZ+7fDmo37Up/az3fEyWM\\n\",\n       \"69TOVfQ36rsE4xIFCwWN/l7r21U6+87zW1F/7ucZwfe4D3NuiVSkkiRJkiRJBvKYUKQ229oZa80s\\n\",\n       \"OkIgilLzDOH4c2D1HH/88ZKk66+/XlK9b5Bbx5FCd/fdd1ddL4J6Q0Hy30cRHiUrivraLKWQTPKf\\n\",\n       \"+9znJK31qxnqQ9d6Zl9E5FfjvlMoUKg4KG0oeLU+i6eddpqkvr9FipTnpuFk+NJ9VtevZ3Onz6Cq\\n\",\n       \"ovKV+owrPMwJ60UIrsb7KGMHy7akbntf9eu5glby8eI61OHf/M3fbHj/iKly+fF81O9Ufn+MKT8V\\n\",\n       \"Yez1qF/qbewcQl9lDKLCo5zWjin64+c//3lJfX8mX9KZZ5657veof/dtuvHGG6vuC/S3aCyjmDEX\\n\",\n       \"op7TPnyf8tT6j3p06HOe8xxJvWrOeOWdEWVwj5RL5kCU5euuu06SdO65525YrlSkkiRJkiRJBvKY\\n\",\n       \"UKQ8dweKy9DIic0Ci57yYmm3+rawj80qO7KKqQ+vFyJ98D3ZsWOHpHrramgeLPabURRRHPg99YE1\\n\",\n       
\"EvnClCJISlFjQ7NDA89DBAfWJFYk/6Y+I2ub547yWW021D9WIj8pJ/nTeB78L+hP1AfWcKR03XDD\\n\",\n       \"DU3lwq+n9ozL1YoUZXBfCeYQ9xdkrKC40Ib0RVdZW9VxrlPrZznVnMbcc99990mqP0fTIbKXucfz\\n\",\n       \"bwFzFOWnnlEI+DvqJmN2Kv9XlCP6Ti308QceeGDd64E/L32z1b8ShYZ6QNGhf6KU4DNEPRHtR444\\n\",\n       \"fPUoB/2ZsenwfeakVv/ViKi+eU4HVTraDWnNGUkEMXNMlM+M+WDXrl2S+n5M+/mZiK3tmopUkiRJ\\n\",\n       \"kiTJQLpl5IDpum62srKy6fdNkiRJkiRpZWVlRbPZbF0nvFSkkiRJkiRJBrI0H6mNFCnyCEUe9w4R\\n\",\n       \"CH7GGPfwe+HbMTSCxWF/+7d/+7clSZ/61Kck9ZEEY8HPAz8D/Dhe+cpXSpI+8pGPSKqPwmOfGN+j\\n\",\n       \"yN+B/Xv8BahHIkvIPcN+9j333CNJesUrXiFJOueccyRJr3nNa+bKja/WGWecIUk6cOCApD4qDZWU\\n\",\n       \"+vT227Ztm6S+3vfv3z/3d3zA8EO45ZZb1q0Hz6R+4YUXSpL++I//WFLZj+DlL3+5pL4eqE+iJHfu\\n\",\n       \"3Cmp96XCD4d/R/1zUXCfd77znZL69sAHjPr08+Ecj2jCrwM/BPyILr74YknS+9///rnP+7l1jEf6\\n\",\n       \"r/uWEfHG5z3nDOV529vepg984ANzn4Wpz+v0tqMM3Icxdeqpp879G58lYKx5TjZ+0ofpm0RlMUb4\\n\",\n       \"O/5t1BVj9LbbbpPU++XR1vzk/vgyvepVr5IkffGLX5x7vkXBGXbc98/+7M8k9T4s9BWe0yNL6Qv4\\n\",\n       \"6tCXGNv4YDGW8cf83d/9XUmLfz58ky655BJJ0oc//GFJvQ+O99MTTzxRUv9OI2oM6F9833eU/F00\\n\",\n       \"9vlKec04M5A5/rLLLpMknXTSSZJ6ny3akdxzjPFnPetZkvp3HP0W3y8fD7wrLrjgAkmbP3dGpCKV\\n\",\n       \"JEmSJEkykC0ZtYf1EJ1r5GeltZ52jwU9lSLluUtay1OC5/SID2g9K27v3r0b/h0rKcrbdPXVV0uK\\n\",\n       \"FUOsEqxBjw5EkfFyEEGD1RHB9Yhc4XpYp1j/UT4v2gur1SM0SooM/NVf/ZUk6ZRTTpn7PcoT16F9\\n\",\n       \"3KpeFt4eqCmlCKpf/MVflCR99rOfnfu952LxyCGsUaxpvw9RlagmrkidddZZc/dxRWp1P40yO0+l\\n\",\n       \"REVQBld3USSiCFksfc/3BB6h+5nPfGZ0WTeCXGG1EZNj8bkSZYk5HoWJevF8Q55TjzkdBQPljT7e\\n\",\n       \"6hNM/UdnH5bwzOWlMcZ9yMPkbN++XdJaFR6iqLWhlOrL84vRbuwqHXvssZL69uCMR6JI/cxIxjjt\\n\",\n       \"TlQe/XGqvGNTk4pUkiRJkiTJQLakIoV1x/6556rAyhuaQZxVNMoUigarZxQDFJOSz5bnKfIT36cm\\n\",\n       \"ytczFurDM6M7WHcoD5GV9dWvfnXD+6E8cj/PShuBNUPma8/tU/JNw8rifp6/qtVqjZTN0vllreD/\\n\",\n       \"goKG9YY/Cf4RJUWU/okfBtYv1n6U8yg66R2rnft7fTKeGEfus4aqEGXsp52i5xqinpR8P6ai5GfH\\n\",\n       
\"2IkyrUeq8KJgLJE/atFEcxlzC0pilPkaxYnroFjwe/7N7gZjpZbWs+ucViUUhYa5F39QzrfkOSNF\\n\",\n       \"qhXmgiiHH6o9Y4/+ypjjHQ2o7lwXNZ7nQqXn+fDFY873dw7thdJVm2GfdmP8LHqcpyKVJEmSJEky\\n\",\n       \"kC2pSGGNsBpHCfJIlqGgQGF18ZP97JNPPllSv4pl9e2KFP4E7qM0NpP2smBfmsigCFdgPIoLarM0\\n\",\n       \"t1obnn16aDZo2g2lZChYa09+8pMl1Z+g3gpRjpwbh9pB9m36dQnqjahMrMDjjjtuw+999KMfXff3\\n\",\n       \"KEkoUu7ngxWKKuBqEL5ukRJ58803S4rPKFzt41arUrdaqJGChcWO/6L/nbpFIXG/T76PCugKxqLP\\n\",\n       \"7XRK6p4rGCgGPAdtX+tTFN2PuSBSxf1zzAm8G5iD+T7+d62+Tq62o3SgwhKZe9VVV637/da5gHeM\\n\",\n       \"v2uYW0qnPLTCmI36Gf3VffwYc65e8zn3U6UefY6P6ofrM0cxx5R2Y6KM80MV6NpTKFKRSpIkSZIk\\n\",\n       \"GciWVKSw3lCKOMma/fvWVT5WE1aK7wdzPX6SzwirwyMvIFKeWD37Kj+KQtwqtJ6JBihRHk1ZAmUF\\n\",\n       \"K5F2Ovroozf8Hj5yWMelff4S9LexLLpdiXjBX+K8886T1OeaiaztkoLLOCMPWCs8d+THQv9gXDko\\n\",\n       \"a5H/ClapR8euR+sZWbVEliw+IqiSfq4k/ny0jfcRfHhqfWmmzodVwvNP8ZwoD4xh5sgXvvCFknp1\\n\",\n       \"FH/F2rP+UMVRSlCUqD/v43zOz/SjnihXNIeX8HbnuVGzN8uHjbEf+YUOxedqzyvGeKqNlqM+GKv0\\n\",\n       \"d65XW1+UCwXL3+EOf+cdy/jgOrVKFP2P/lJb3lSkkiRJkiRJBrIlFSn2QdkXJqqILL2t0VBY3ET/\\n\",\n       \"oWBE+61YU/wke6uDdYKvlJff4b74E3iWXs+LhVXLvu/YHCEoRqyyh0b/YQVSHqwPrEBW/zwvPjDs\\n\",\n       \"p3Nfnt/zApWixrCSsYapz9ZM8vg7jI3MAawnfHnov9QH1l2k3JTAN42fu3fvllRWwtyqcj8Cyhtd\\n\",\n       \"B+uMdvdItKG5Xeg3WJlHHXWUpN5n7Rvf+IaktePM70+/HnLvsdE8RE+VFIKojqh7FJ/SdWi7RSlS\\n\",\n       \"jE3Kg5KAKujwXPiIOaWcaa4gMma4P741KA20F2OWvEr0XeZM/PKo35Kv1VAefPDBhVzXYQyQ+Zx+\\n\",\n       \"MnXOQnZlUGZa51Tam90c2pd+Vbt78K1vfWvu38z1UWQv73jmYO7vKnfUj4F3C8pc7ekqqUglSZIk\\n\",\n       \"SZIMZEsqUlgTf/AHfzDJ9TyvE1YN1k5plYwl7nBmnCtSkXLE77EisPSxwvg9q3cUqqmy1WLpo1BE\\n\",\n       \"yk8JrA7K5fWJQoAVSCSPK2BRlJ5bqbQPyiA+P1gdWBH4YdT6aEVKFH4vtX4dDvWLoul+H0MVKac1\\n\",\n       \"KhIoB/XsUXYOVmBrBn2IolspF+2AjxZRu9yXccm//WzN1f3FfSlc9YWp8sqgDHhfijKZO/QVsvpH\\n\",\n       \"50LC2Cg+5pxo7PlzjPX7a1WCUET83EU/cw8/SeYAlCfGFsoD/WGo/2RElDNtUbjPl78zpoKxNXSO\\n\",\n       
\"8qg9lFPmcNqxVlGlnlGc3D+auYKIY8aj+896pHdUb+yOUM+1GQJSkUqSJEmSJBnIllSkpob9dWB1\\n\",\n       \"ir8BvhlYP56PJ1Iu8LOI8ttEoNyw/8pq21fNU0eE+L5zRMl/BCUh2m92qzmySrE6XQHy6Cz8JbCW\\n\",\n       \"sJIpH6pDrQoA0QnqNdFhQ+5D5nj38RkbdVjCFanWzPtRxFkt0fjhevykP0X5sKhHlMkaRYqxuag6\\n\",\n       \"jvLW1ILljsKyaEq52qaOPKXta8emn0vJXMmY5Druj+kZ0GkPfj/1mOa6i1akKDe7HygyQ8/+K8E4\\n\",\n       \"GepTRrvxk/aiPVrnaPqPq9fAeOd+nquOXRDmCt6pxx9/vKS+v7ALxhoAld7XDhGpSCVJkiRJkgzk\\n\",\n       \"caFIuY8Tq07211ml4svBPqlbP67UED011IfJo9XYl62N4FkUWPyRMjDUVybCfZE8MsMjMLBSsU6I\\n\",\n       \"sGi1prFy+D64NeZRiq1Qj6gjbuWUsgvX3p/rch9UGVegajPIQ210J6qKR6j5uAKs1Fo/BPwhUCAZ\\n\",\n       \"L6vbzxUClINSHTv4LPHs9FFXnnjmoWOC8tRGPo7tiyXGnhpB+bgOYzbyN/SxHuVtchij1Btzpfv7\\n\",\n       \"UU/ReaBD4Z1B+ccqkxEeYU7keW00WSutc4PvxvBv2t/PM+XvtUoe/YfdCD/vk3r3iHTmCBQn3g38\\n\",\n       \"nnr1/ua7FLVzXypSSZIkSZIkA3lcKFJRBAKrZZQNfFiIboui/Ry3tGvxnDDf/e53JbUrK1P7f5A9\\n\",\n       \"l9W9R0oM3Z+vtdr8+igtWDn4C5ABHau+1R+D/GReb26VoRBy/aF5k2hfp1QfWGWRCuGZ2V3JRNFb\\n\",\n       \"NNSL524p5eli/ET9A9WHXDIowbTTaoWPZ0Uh4m+tSgFlLn2/1efDYezWntbQ6o8JtacquHofnaMZ\\n\",\n       \"4f6npfu5/x7lLJ1XSh+nL6AceHvQFxd1ZuGxxx4rSdq1a5ck6S//8i8nuS5znT8XuwX33ntv1XVQ\\n\",\n       \"a/EBKuHRdIcffrikfq684447JPU+R34+J/3Z84cxxzJ3+3mtpfLzbnbFEWWSucB9sag/FEn6M/6Y\\n\",\n       \"Pu7ot/S/zGyeJEmSJEmyYB6TihQe97UZzkv7yVivKDFYSUTTtWZ3rcVXzygOrcoSviNjc4qgxLHK\\n\",\n       \"bz3TsMRQ/wGsHJQq/BJQQLiunwdWYt++fZJ6a+ecc86RtDb3EPfBWh+qSEXQfljj3Iffo4T6c2Ft\\n\",\n       \"kzn+gQceWPf6991336TljcCKfv7znz/3e1ds/bw4rGys3muuuUZSP/5QlDlT0FmtYOKPheXa6vMB\\n\",\n       \"zC0oXFEka3T2W8mHCcULxaaUywtafaMYO7UqN8/Dc/D9Wsuc+mcsRiosoABA6xzG5ykvCgbKw9Bc\\n\",\n       \"cLXQjvfff/+k1+W0Bq6PsuL1VSJS9kq7GNx327ZtkqRzzz1XUn9eJvXuuz2MWdqfdmActp5ripLG\\n\",\n       \"XE0GfeZsxgPvaMrDO5z6wresxFBfulSkkiRJkiRJBvKYVKQWdW4Sq2Z8MVh9e96fElgTpczhWAOR\\n\",\n       \"EsVq+vTTT5fUW8NjrbgSJaXP9++Hgl8BVkdklfPcKGRYWexzo3Acdthhknpl7a677trw/pG16jln\\n\",\n       
\"8K8Y62fhkSWAosbzEI1GvURWJSoDVlSkOtRm8cZ/wcsDUVQe0B9oR5Rj94FyPwysXqxVj7gpsXo8\\n\",\n       \"uH8Y10RZIoIyUlhoe1QyFBbapKRGRorRSSedJKlXTK6//npJvao4VbZ7p1XdRlGA2pxj1C+qrmeW\\n\",\n       \"jtTSof6lQH3zk/r0SNxF8fnPf34h1+W0BsY0ZzpG9Rjhiin9mXdolGGcXRKUtltvvVXS2neNR3Yf\\n\",\n       \"OHBAUj8XcB/mktpchnDDDTdIku6++25J/Tv6vPPOk9TPyb5rNDaynHdc7TmeqUglSZIkSZIMZEsr\\n\",\n       \"UljI7M9+5jOfkVTvT9AKq232dUv7+xGcSI4yQu4LrssqHt8t9qNZXaNAkHGdyBCs6NaM59F+OOXB\\n\",\n       \"mqw9/2jqbL6upHhumciKQTEcSmTNTO0D5fdz3Hri37SbRy3ye9QXrouSQ//xaEOIInmivGGcFYha\\n\",\n       \"c8opp0iS9uzZM/c57kdE0Qte8IK5v0c+cl/72tckxYpdidVWN0oSz4z/GHWH2hz5duCnRd3S1xkb\\n\",\n       \"Q3O7cY4g5aDPR+d4RrzrXe+S1CsFWPyohcwVJ5xwgqRereT+jC1XNpgLUJRuuummuXI6nr+HuZrr\\n\",\n       \"UM/UP23kfczzVvF5oE+hRrtqixJFOWg/YCxTrtrotWUT+QO2wrsHVdozvjsoVihKtE/tOxfljJ9j\\n\",\n       \"YfyVfB3pj5SXfsM4r909IRKceqiNUE9FKkmSJEmSZCDdVKegN92062YrKyubft8kSZIkSZJWVlZW\\n\",\n       \"NJvN1j20MRWpJEmSJEmSgSzNR+pd73pXMa9Qa1ZdB9WrpH7hX4BPx9e//nVJa6P18L+IovhK9xsb\\n\",\n       \"7eYn29c+31Rs1ft5vSz6fmOhH7zlLW+RJF1++eWSen8OfL/wecJHkFwo+APwEz8b6gEfJfoZ133z\\n\",\n       \"m98safPbj2zP+C3RTtQDvolR9KnnnXLwz3n1q1+95tlKfQNfnih6jL975GbUV0499VRJfdvs3bt3\\n\",\n       \"7u/4IxJhij8c/pj4r/GTyNLXve51c/fDl8h9Vzw7PHV+xhlnzP0en5MHH3xw7vfMhZdccokk6T3v\\n\",\n       \"eY+k3mcEnyvu8+QnP3nu/lGksvdZ2gNfp4svvliSdNVVV0nq/SLpE0SAkj+IOZgoMdrl/PPPn7su\\n\",\n       \"fnzud3jppZdK2npz2WPtfkSo/9Iv/VLT/UrjrsSy6jMiFakkSZIkSZKBLE2ROuSQQ4qRAETSlDJs\\n\",\n       \"t56x5mCxkwGajNKuPHleHaw0jzSJiM7JOuSQQyT10VlRNNrYE9+xNnm+2myvnldpqzFWiYpAgaRd\\n\",\n       \"sGo9kgPFiM9hVQPtjj+iR9FRfrL2Av2SCJ5SHivUgakz0pOzBaXWYZzynG5lenZjImmIBiTHEhnN\\n\",\n       \"nUiJor7Xy3WEEoWCEUV+lixij6R1UFhQmlDPfe5gjrjooovmvnfZZZfNfQ6VjrnMo9ogmjtd5ec6\\n\",\n       \"N95447qfdzxilag9fx5XO0sZtykX+ZGozygnHmOPuYdoP85oiyJrPXO272ZMHXEMY3dPABWan1GU\\n\",\n       \"oZ++sCx8bEbvYqIqUWzJo7ZV4MxA+hu5FP3MwIhUpJIkSZIkSQayNEWq5hTzkmWNb4QrRUOJMmHv\\n\",\n       
\"2LFDUm+d4gfAarw210RkDe3cuVNSb/2hSJEDBCstirAs+W4BFvzZZ58tqVe4SvW3jMjOFlAkUAg9\\n\",\n       \"RxD5vEpnLjpenyiH3t74sbjSBG6l1lqRrX4EYxVLB8WopFzyOdQFVzCpL7dW8e1C8eKkebJFl7J9\\n\",\n       \"Mz7WUyewgFG7UKQ4T7NWjeU6kRqI0uJ+XihigFLi51l6PieUJlTj1jxTY/F5mTZjDogUl9pzNL3v\\n\",\n       \"ew41+hBzUm3Gd1dw6BP8HLtrUSKqF1cUmavpJ9Qb9ev9FiWR5yBPFgpnpOJuFq5IRfVLu5YUu+hM\\n\",\n       \"y0Vz4oknSuqVVhQpz/QfsaEi1XXdoV3XXdd13b1d132z67rXP/r7/7vrut1d132767qruq77v1Z9\\n\",\n       \"561d1x3ouu6BruueO+ShkiRJkiRJHguUFKn/kPTG2Wy2v+u6n5C0r+u63ZJeKWn3bDZ7X9d1vy3p\\n\",\n       \"LZLe0nXddkkvlbRd0iGSru667ujZbLZGill9ynLtKhQrje96llyPWBmLR6a4IoV15b4tEZFigPKB\\n\",\n       \"tQy1Z+hRPqze6DwmrLtPf/rTG14Pf49apa0ESljtCfQeCVQCpQ/FiP1uIqPI6tyqSDlRfdAfIn+W\\n\",\n       \"Eu5ngtXWGtESKT8lGH+Un35Ke5WyQbtyRsQUYOX5yeq0D+MYH8XaMw0ZNxvNG0QwYulz6gBlvvPO\\n\",\n       \"OyWVLeXo/MzIV4X7kDUftfuVr3zlhvcBrjf0NPqhuJo4tYLjY8gVMHyoPIN8ySeI399+++2S1pab\\n\",\n       \"OQLVGvg3ylj0LjrmmGMk9dGVtX3UfcCYC9lNcV835uirr7563eegH6IElc6/XDS1Eej4R0b1Rj3h\\n\",\n       \"Q8U7dardphIofIxvomYnOWtvNpt9fzab7X/0//9F0v16ZIH0Ikl//ujH/lzSBY/+/4slfXo2m/3H\\n\",\n       \"bDb7jqT/Jen0ukdJkiRJkiR5bFHtI9V13dMlnSzpFklPns1mODD9vSQkhP8maXXilO/pkYXXGlZb\\n\",\n       \"gKyqsWQj/4WSdTY2YsJB2brtttsk9fvT+D9gLaDgODwXq/aTTz5ZUv98WPpY1qx++cn3/Lmw2oDV\\n\",\n       \"PlYU5eN7+FmUzisCtxr9foCVSJ6f0onbtXm0ak+cB6xIfM3oR7SXn6XXiitNlO+ss86StPYsOvIm\\n\",\n       \"OShF3l9Kvm1OyTqvVRF4Dqwx2oV+VIrEop/h00e/Rjn2z/n45XOoBfhE1vp6Uc71+gt9EhWSOuc7\\n\",\n       \"WJyUIfLFQWXj+94XUD9pQz+DDpWYnyga9AX6Kj9RoTdbiYLWsTcW97FBUXKFpdavsNT3/br++Ujd\\n\",\n       \"pN0Z65Sn5Gvn9Unfpp/Q7p5bLVK/6a+8O5jbeS5Xg/n71P6TgMIGz3jGMyT1uyKMK/q/z/3u34sv\\n\",\n       \"o9cb4yXyGRybS5D643lQKmvPZqxaSD26rfc5SW+YzWb/vFr+nc1ms67rNtqTW/dvqyv0Bz/4Qeio\\n\",\n       \"myRJkiRJspn867/+68GFWekQ6eLqpeu6/6JHFlGfnM1mlz/667/vuu4ps9ns+13X/ZQknDn+t6RD\\n\",\n       \"V339qY/+bg1PecpTDuYEYfUX7fPW5uiYOkcIViVWAIs9FpJYU1GeJX8eyu++T1iprIbJfRP5Ovlz\\n\",\n       
\"Ug6scMrJ74lu4vqteYai56u1Emt9o6BVocGK5LmI/qL93GpqxZUa1Al+71ZhCdppKFPljmHc0R+5\\n\",\n       \"LtZyyUePSQarEyvS64H6d/8T/IaImPF8bLRnqV7XG/dY4NQ1bXX//fdL6lW92qgw5iDPU4SPi48p\\n\",\n       \"FCX6PnWDbxbPSnmijOBj+0oJb5NIkeK5eZ6hpzOU8LaO/P3wWaL+o/Kg0LA7wFwLtX6g+/fvr/qc\\n\",\n       \"475ezO3UN+Uj8pddA3yh6C/UP59ntwQlBiXr6KOPnrsff6f/4qPk7V377nQ/ZPdxYy70dwZzi+8O\\n\",\n       \"+FzvSizPS/midh6rSLma/vDDD+tHfuRHDr6zzz///INZ8tejFLXXSfqEpPtms9kfrPrTFyW94tH/\\n\",\n       \"f4Wky1f9/sKu657Ydd1hko6SdGvbIyVJkiRJkjw2KClSZ0v6JUl3d11356O/e6uk90r6bNd1F0n6\\n\",\n       \"jqRflKTZbHZf13WflXSfpB9Ieu0s2HT+0R/90YNKCdYZChWrSlbjWLSR1eZMtc/P/i0KB9aFKyy1\\n\",\n       \"UYL79u3b8O9EL1EvEb5qxxrGlwkrFquc8g2tl2h/feooybGQUwUr6eabb5bU+12Mxa04zinjvK9S\\n\",\n       \"bjSs6kVlV4ZaXzSsU8pT8heKoP39J9BffSogMof7UR6scaL6du/e3VQeqe+zPBOWPmWhDRnjfA5/\\n\",\n       \"L+qOPk7buU9PpO5yP29rroc/YUlti7K6TwX1gOXt5UVh8M9NNfbdpcMjfBlTrkihQJx55pmS+rw/\\n\",\n       \"HuVFO4xVpaeC+uUnyqir3oxhlCh89fDtQxWn//F9V6tRhmgv96OlXmgHrz9Xery9fZeId5grcZSX\\n\",\n       \"8UaEbglXmKLxQH622rnL++9NN90kqZ97qO/afFYbLqRms9kNilWr/x58592S3l119yRJkiRJkscw\\n\",\n       \"S/Pw/qd/+qc11p6v/lgtsqqu3c+dyq+A1TX78Kyqyfc0tbUYnT/l+PNjFVA/ni0XK3xoTpjIx2mr\\n\",\n       \"KFGAdXXHHXdIWusP0UrkD0B90P78vbbfLfp8rFr/FVQYrNBSv4vAz4Tv449SKg9qB/0UBRo/kjGR\\n\",\n       \"ayhSfs4hMIZQhrgnkZ7kn6Jta3PFQWTJ0vZT5WgbiysKPkegctM21AP12qpekj+J+ve5hfu4j5nj\\n\",\n       \"pz+Uxt7U508OhXbHN4j6RRFiDkOJov95PfBO8t0Cb4/Ib5F2dqWKfzM2GcuR75GXC6XJ+z/tyfPu\\n\",\n       \"2rVLUu8bFs3Vte/Y1jxa+M/6+Z8ohFyvOgK66e5JkiRJkiTJQZamSHVdd3BViJXmfgtYi62+JVMp\\n\",\n       \"Jb4Kx2pFIbv77rslrY02KoH15KttrIHWKDfAqmHVzzlBWAO1SgXPg9WzaD+NqcC64KfnamnNkuv9\\n\",\n       \"yPuf/73m/Ehpbb9qjZyZCs9nVcoDFoHVSo4d72f0d6xUxg/jmkz2Rx111Fy5vvGNbwwqz2o8Sslh\\n\",\n       \"rGGRTnW2XXS/0liibijH0CikoXgmZ6KZGFMoKkP9LbkObe3qJWOAdqM8jGH6DHMdCpmXm/qjL/H3\\n\",\n       \"1tMCpoaxQDtTHo+8BhRDFBKUmyii1ue4SGX2iHN8sqgn2qWkUntuQhS16F1D1N4FF1ww97mxuwet\\n\",\n       
\"Kr+Xm3rA5w5lLTp/10lFKkmSJEmSZCBLU6T+5V/+5aBnP74a7AtjbaCk1O7Dc9Ya+65Ts2PHDkl9\\n\",\n       \"9CDWk5/5VwKrA2uDn1HUUy1Yj6yuqVfPIF3KcF57jtRWB38XrCSee2j9lpTCkqKEYuUnivv3plKo\\n\",\n       \"SlmNGW9k7x2aG4j7oGhF5XbFDqsYaxjrnCjLVv+b9YiUISxt8u5giW9WZu8oNx7lLSlRU2Ws9ghP\\n\",\n       \"9zWhPuiz3latMBajSE4/8w5/VFcOPbqN3QKizOiDHrnsysVmRx7zziudSsBcTj8hop13TeS74/VJ\\n\",\n       \"/4rmLvf35N1L+Vp9j6j/0lzCiQNTnaU31geOtcO2bdsk9eez1pKKVJIkSZIkyUCWpkg99NBDB1fD\\n\",\n       \"KDRYK6zC3Z/AIwjcmosycE8F1otH3LjCUIL9frLzEsGBFeAZ1LEE/rmGAAAgAElEQVTCPHoR/PfU\\n\",\n       \"C9Yd1lat9cr3aI/orL2tCpEXnoul1uqMToCPfNsg+j0+QKgIfh4WaojndRpL1N48H9YtSh0RU5Sj\\n\",\n       \"pEz+/+2da6ymV3Xf/4/SREprJC6FAMaOx8Y2M77fnQzOACIEx1HBgAA3KKg1VaQUg4IDrklSHyBI\\n\",\n       \"yFEQxpEiSNwowU2aKMjGkJAaw8Dg+2U8HmMPvqAxCsbBbWmkonwh8PTDnJ+f86456+z97Pd2Zvz/\\n\",\n       \"fXnPeS/PZd+evf57rbXZV4vcNmTij2pFzClDe+I++Z9+T8TcPIkKBfljZr1fZwZ9q6Q8YSlHZrV3\\n\",\n       \"Wkk5iD5L0+4ByO95zdRx+jBtJ64yUH58L9sFosSiI49rV0vom4zp9Nmx11u7ukB9xLFoLLV+lrPK\\n\",\n       \"7TcriCblWVHaQzFiRcoYY4wxppGlKVJr12yZxTIrZg+waHWV1mtLe4NNC7lLYob1GDGSgcUdFSUU\\n\",\n       \"p2wdGyWDaEauAygvFJO4f1HtOjfXhfKHooPisCim3cB627ZtkoZ2xX3UgoIUlUAUyUx5yvxxaMcc\\n\",\n       \"L7brsfm9yJ3T2t7x9UMNQCFDlSHLbwY+d6961askDdeP3wNZpiO0f+oXtYPy5rpQtO6+e/67S+F7\\n\",\n       \"QpvPrr0E94ZlG9V12iI+IZkShUWMElObWXnezCsPU1T3KT/GPNRJ6oe2T0Tx2AhnxoZF0ZpRPfrP\\n\",\n       \"tjLWx2msEjV2bC0x7Z55tbs6RN72trdJGsZCnnm1zyIrUsYYY4wxjSxNkfrpn/7pg3Zox9IlxwjK\\n\",\n       \"SimrMFYfylDmVwBxP6cSWEFYzvhEYYHHdX6sqa1bt0o62ArFOq1dh+U6M98ZygulikgXZveUB3vC\\n\",\n       \"YQXG686i+qLvGcoVs3XKn/NgBUUrNkajZZFHlPNYOD73h3JYu0cjoAJwXyh+kEVbRrBuyMtFuUQr\\n\",\n       \"s9QOo8/WtMor/Yz2wXGjLxPErN/0V6xX+mdUSgElNe6bFqP08FfKjjNPWpWomHOtlA+HtpT5ksQ6\\n\",\n       \"GBsRPC1Z1GLmN9h6/MzSpy2hzMXoPMp5LPRFfKpK0MaJ6oz+kXEvxayeuI8M+iBjAGMDfYXfo7DQ\\n\",\n       \"fjhffCbEsZo8WpQjfRblj/OWoudQBBmbab8xF10rPJPx08SXrNYn7Jd+6ZckDWMj5VPqP5Qnz3LG\\n\",\n       
\"Itpp7W4VVqSMMcYYYxrplrEG33Vdv7KysvDzGmOMMcaMZWVlRX3fr5sawIqUMcYYY0wjS/ORmoUi\\n\",\n       \"xXo3vkdxvZhzXHfddZKG9eOxOSKA9WDW91lHJRqL811//fWShozR7EGGDxHvsz6OjxDvk1eHyAH8\\n\",\n       \"A/j8kUcekSRdfvnlE+dlPZfoMo4bywWfL6Ik+Zz7iXv+cd/ve9/7Js530kknSRrW9fEfYL0ZtXP7\\n\",\n       \"9u0T58FniXV/osU4H/sc/fZv/7Yk6eqrr5Y0rOvjO8fvuQ7Oy/o95Uv5xWzKr371qyUNvmPvete7\\n\",\n       \"JEkf/ehHJQ1+Cpwn5nSpjajh/BwPPwXKcVHqLOf51Kc+JUk69thjJQ2+bNzf/v37JQ3+CfgE4g9E\\n\",\n       \"/+H+6YeUC/+/5S1vmTjvvFlZWdGNN94oSdqzZ8/EZ/QlfF6oc9oS/nr4gcU2ut651r5GduzYIWno\\n\",\n       \"gzECkbaMP+Odd9657nHwHbnyyis3PN+sifeHXx1jDH2pdR9OfG7oi7/1W78laRir8TXLorAYk2LU\\n\",\n       \"H2MbYwB+s4wF+CleccUVE/dH/fM9fp+t2HBdnI++wvVwvfgXXnrppZKGsSX6/uAbxNif5VIjd9ut\\n\",\n       \"t9667uf0wauuumri/ubNssayu+66S5J0xx13SBraZ5Z5H58z/DZj/jHq9eyzz5Yk7d69W5L0gQ98\\n\",\n       \"YMPrsSJljDHGGNPI0hQpKc8UjUKB8hNnjURgYBFHxSV62mMlTLuHHNFG0QqKMBsmCgnrAuWqlJOF\\n\",\n       \"z1EGOG8W3RWVEr6PMhCvN4v6ohxjXqO4QztEhYd6xFrkfu+77z5JB2ekx9omsoJ6yyI1KFfqMWbA\\n\",\n       \"j9YkakDMScJ5okIE2Z6HlM/YPFete9jNC1SYmG8sg3rM9r6L7QkFdhlEJQpQc2krMWqOvtUavRch\\n\",\n       \"qmvv3r3rfn7JJZdIypUoqI0amjdcB9FdY7Pvx4zZ9InYx2JfzfpONgbHtshYVIreYqxg7EPBjH0E\\n\",\n       \"RYPVBdRz7g/ljuuIUZzZ2MaYko3xqMelnH61Ps+sOtBXUWJZ7XjZy14mabgv9p5rjSJ961vfKkna\\n\",\n       \"tWuXpKE/Qm1uPMZ4+MpXviKpPgKfqNgsQpn2Rr+tzWdlRcoYY4wxppGlKlLZ+joKS5ajItufKTtu\\n\",\n       \"NvssEfPtYLXEnB8R/B+wmshVgTXDcUrKVlRKMn8B1vfj98dm/YV4X5kVFXNtxPLCqsGK4TgcH6uO\\n\",\n       \"42RKXWYVYBWTcyRm8c2sM9oH1le2E3sG9zl25/Jsp3msMa6/1u8EK7aUGT3LDTQv5r3DwDSgZkdF\\n\",\n       \"alb7GwI+VrENnnzyyZKkBx54QJJ02223bXicuC9jzMU2NideCdp2ZNrM5pnqGym1nSz3HMRdCGLe\\n\",\n       \"pRJcV1SiGMvww+SZElXt1lUPro/7i6D+4xOUkf0+wn3STt/85jdLkt7whjdIGsaWz3/+85KmX82J\\n\",\n       \"YzPlxrOgVnmNu4jMqt1HSs/miBUpY4wxxphGlqZIveAFL3hGaYjWBQpD5ptT2k9n2r3aIEa1RWUL\\n\",\n       \"65CoJogKStxLsLSOjVITdwrP7pfjZYrHtMT7A+6f9W58srAuiHygPLDqmO1jlfCKAlkbDRez9Y6F\\n\",\n       
\"8471Q8mUqFe84hWSBl+uqHq0WuEZtXv0RZWB7MQlf5cYGVW7D1bJjyPL7s37qAot5RIVmwh+h7OC\\n\",\n       \"vhH9xyjzuD8mY90tt9xSdfzoYxMzV8/aIkcdXRaZwpa1PaLcaMuM/ai6tW0Iv1xAqWFMJUoOHzzG\\n\",\n       \"4re//e0bHvfiiy+e+J9nWrYHXqa08ewoKSXZ77N+QXu6//77JQ3PVlZ9UKSmhTGaZ0T0kSqN+aWM\\n\",\n       \"+MvGipQxxhhjTCNLm95tZCkweyXaK1KKgoqWeowYyfaNYn2ZvERbtmyRNETWREUKRSMqDVk0VG1E\\n\",\n       \"RVSiSsSdqmMOlLGWPcdBkciisPgcRYfrxheM3CiUE/9j/bCvFlYm1lpUiKLVH9fXW/1bsHJnFRkV\\n\",\n       \"/Sg2C7H+a/1dom9VbQRLjKyJqkEsb9rRtm3bJA31G6Ndayi1BVQvxgj6butu86Vrw/+O6yrtG1qC\\n\",\n       \"vsWYRtlRp2PKShrqguuLYxTKGgrPtL4yERQoIO9S3B8zqx98zuIecDfffLOk+sjUqJID7eMjH/mI\\n\",\n       \"pOEZcdFFF214PJSs6O8Y/SFLxLGuVq3PjhNz4FHflBevEfoNz4ZM+QLGclZXiDqMuQSBsSbeL/Ds\\n\",\n       \"WFRE8FjfQytSxhhjjDGNbM4FxxmDVZHlrQJmoTGzeLSGsNKYrUalalY+SnFdO/OBwg+Dz7FSsQp2\\n\",\n       \"7tw56ryUE8eJCgOgvPB9rBTOS7mdd955614P38fawcqOik603lqtskjcwb0WlBNUAKxUFCmsZJS3\\n\",\n       \"aNUs2w+llljvlFPJTyP6PkXVIfpHcDyyf7dmy64h88ucNbTRWfswMSagJHCeVt8RjpMpAShqs/a7\\n\",\n       \"hFg++OjUwqrFOeecI0l69NFHm64jy4QNZFyHP/zDP9zw+5TX3/7t305c3xNPPDHquqgPMuCjCI5d\\n\",\n       \"ZeB6SlGPEcZmokfpq5mihuLFs4PVBNpRFj0Zd/uIz1yOO8+xYS1j+60VKWOMMcaYRg5JRSr6XJTA\\n\",\n       \"WkMRyXxEUCjIbrxv3z5Jw+w/5iiBOHseGz0XZ+EoGygAzNZRnuJegcz+8Z9AIcO6G3s9WKcoCVl0\\n\",\n       \"GOUVI4koZ5QXfNqIQOJ+eeW6UDJiXqdZKVARrKNaax7ljfKNViHZf1nPP/300yUdnPsly9Wz2aDc\\n\",\n       \"S3nbIrE8oyKV5b9ahLUZfWDmBZZ7lguvFRQF2iIK1VglB6JaH+tuXkpUxticfyhq+LFmPlylaLnN\\n\",\n       \"TlwdaY30zZQoxiRWBVCg4j6vtT54KDqM5Yz9Jf/m+GwFfN3iWDIvaiOUwYqUMcYYY0wjh6QihdVR\\n\",\n       \"m9k5KhrMNrGA4++ZtcfoIqyf0no6uTLI/ZGBYnPKKadIGqxYMqOTswRlKipRgNWLVcH3sSaJPqz1\\n\",\n       \"H+D3WHElazdG+mD1YUWwVyL1wCy/VI6AikA9Reu5NeIqRiwBVg/tAwWU72dWEVbiMcccI2nwL0Cp\\n\",\n       \"4jpr8z8tm1L7zYjKUvSZmpfCWMPYLPaRqL5mMIbMWmVDhWcMo89P26ayPQgXTa2CFzOKzzoz/bKg\\n\",\n       \"faE0MvZkKvisoK8TFccYRv6z2kjkqJDyyrOE+8l8pWjPmTK5qDxSY58pVqSMMcYYYxo5JBUp1m+j\\n\",\n       
\"D1EG67UoPqwDoxxFZST67GRgFcUM7LV7m5FNl4zY3BfXh29KyRpg3RmrAusSBYXIiahIZfm0sPJi\\n\",\n       \"pEctzObZGR3rIipmkVL2Xcqb++M1u48SZ555pqSDrSN8nLD6sYIyfxLUAP4nDxnfp14pl9Yd1BdN\\n\",\n       \"q5oSrebYH8bWUwZ+IxtZ6aVM5/OC89FWaCNj7z0qYNFnCZ+SaZW27PibFcbcWv+9WavAY/10a6F9\\n\",\n       \"oHrTjtgXdN489thjkoZnytgoSMZAnmG0S+6n5KOYRctNu/owb6xIGWOMMcY0ckgpUnGWXlKiIvgt\\n\",\n       \"8IqPElYj1h/Rb1HxiEoP6/kxEqE2MgTrj1k6Vg4+Ullm9wgKEtYZvjkoQJnFnlnHWBNYB2OtAHxg\\n\",\n       \"KBfOUzpOdj2UD1YJ/9MOaBdZREmWPyzzscOKwtrnPnjFGo6/4/wocCiJUYFaVC6UWRGzIpes+2hV\\n\",\n       \"Ul6z9u+g/2+kmEZFh77dmtE8U5uzTMjUdUmJirsvAJm+AXWZyOPYNku58jL4HZmol0VtJvJaJYr6\\n\",\n       \"RxWeNn8Y9cSzg/aTRQtmUWglOO6iFRjaAX11rJJL/6LcGQMZM0o59LL6pxx5ts2bsUq2FSljjDHG\\n\",\n       \"mEYOKUWKWenYyBJmx8y2sRqx2jguVgWKRIwuQjHC1wqLOFq9tdYgs36i8bB28HW68cYbq47D97/9\\n\",\n       \"7W9LGiIvsLrH5mahHIi247rI6F0iRiyRFwvI+RLJrPZo7VFfvJas06w+yDKcZabHdy76n2R+EZzn\\n\",\n       \"a1/7mqQh6rLE2H2dSlD/UckbC/2Fdl+7B2SWi+iEE06YeL81C3U8z3pRgHFPMSCCld/ee++9o86Z\\n\",\n       \"RRzib8m9YsnyPteT9cVMKYlqW8yFh2KBBc33a8cgFC9+h9/mooj+pXFsZ+wp9fUs7w9jSqZ0xPKl\\n\",\n       \"3rK+SP3SdkuKxax9zs4++2xJg8/UrPNildopyl5WD6yCEK3HcdgLEcUzy6SfgdK1devWifdRqGYd\\n\",\n       \"bRqjFktYkTLGGGOMaeSQUKSwRpiVjvW1wCrBOsTayGbz2V5hXAezbpQSrKaxoCBh1WAN/P3f//2o\\n\",\n       \"40RfKnx1WqPZAEWi9f5Q2jjO2J3pM7DysGawdrDKUeiwnvAfifVKdF60PksRUJkVijVcq0QB9UQ5\\n\",\n       \"x2hQ1IfayCOsQKzGqAjWZnlG1SgpUUSdYvVHVYPfx/23gHo799xzJUm7d++eOP8rX/lKSdKtt946\\n\",\n       \"8bv1yqOUW44My7P218L3gzZJXaL4UBfUTfTByNpUjCjmd5Qh56Wtc1+1KjR9J+b5aYXrqS3fE088\\n\",\n       \"ceL/uAtD5nsU4dkw1qcoKoGlqLKo9KFcjN1Dbyw8gzjPWCWKdvNzP/dzkob2wtjH6kMWIU4k8hln\\n\",\n       \"nCHp4HYMXBftE3X8Na95jaShX6AklaIeL7roIklDuzr++OMlDf3s2GOPlTQ7RYr+umPHDklWpIwx\\n\",\n       \"xhhj5s7SFKnnP//51bNIrJNaH42MViuUWXuWWXzaHC5YybyOhXw6MVN39A8oreezw/hxxx0nabDO\\n\",\n       \"8D9o5fHHH5/q9xncD1Zo/D+zTlm/x6qJGewzou9PSSFib0TKD2stWrUx6hO/DcofhQfrHNUg+o2Q\\n\",\n       
\"Ayb6L0Qrc1Z+FZQbVj3Xl9U3voW8wqmnnipJuvDCCyVJr3rVqyQNVi/1GBWp9SjVyayVKCxvxgjK\\n\",\n       \"nlf8KKMvSG00UMwuz5jJ8WkjWPZjLXP2E6UNMlagJPCKYoVvEAoBaiq+Z/yPH2SpPqKSc9ZZZ038\\n\",\n       \"vgSrDKjL3H9UmWvV+cxnhz4Y23bmA0XfrB1bSnA/Y/0o8UkiZx5jH+VBfWf7zwLt8I477pA03F9U\\n\",\n       \"l3k/jlGMgbTbuFsGYyuvJ5988sT3uD4ULZTU+ExG6aI9oLTVwhhGudRm2rciZYwxxhjTSDerLMOj\\n\",\n       \"Ttp1/crKysLPa4wxxhgzlpWVFfV9v+5mm1akjDHGGGMaWZqP1Kc+9amDcnuwHhl9KCKsm0bfkgiq\\n\",\n       \"13XXXTdx3JgLI4twwS8gRgKxTs26Od/Dx+PTn/60pCHCgPuKWV137dolaVhnJqN5jGQg/w7Xz3Vf\\n\",\n       \"cMEFkqRrr7124nPWkYmmgoceekjS4CNDNBe+UURgsJ4dd4R/05veNHG+af1NSrlEqL9rrrlG0uBv\\n\",\n       \"UFrPrz1+jFT6tV/7tYnzAj5LWa4f/ACIQKKdUJ74VUTfKM7zoQ99SNLQLolAYr0+tgfaI/4hWU4X\\n\",\n       \"Pue6f/d3f3fd+5uWWD60v/e///1TnY9yyPpnjJhaWVl55lytGb5LcK9XXHHFM+dcBJxn7PlKUYy1\\n\",\n       \"54vHoW7imIY/W9x1Ap8cvk8EL33mkksukSR9+MMfljT4wOCbk4315FWivu+55x5JQ5+Pvmjcx+/8\\n\",\n       \"zu9M3N+84Lo++MEPLuR8lPNVV10lSfr85z8vqbxLBn2VcmesoE/jh4vPHM9gcuaNbZ/4ReKjF33N\\n\",\n       \"6NtErUZ/19b+0ErpPFakjDHGGGMaWZoitV6m2ZISBSUlKoIVyayW35dyrZSim7CEo1XG3mq8MrvG\\n\",\n       \"GoqRH6VcFVhllE+MoovKEApFKfKFaC5eY1bkqOSgSE2rRMU9CkvZbWPkUi2l41K/pXqOmeuj2oE1\\n\",\n       \"hRVOzhcUJSJIUJZi5A1+itxn6X5RBUoZ3WOW7XkRlbrWTOqRUv+MkT9rIWKSXG2toNpS1zGvUy2Z\\n\",\n       \"OooCUKuyjmWsElV7HOoGNZ37y/Y/pY3HXHKxfmir9J3SWM/n3/jGNyQNbR6Vmc/p47WRtrV7/kXi\\n\",\n       \"LgXkXVoU0ee5duzcuXOnJOnjH/+4pIP7NM9OIudRu1vZu3fvhp/POy/XrLEiZYwxxhjTyNIUqRe9\\n\",\n       \"6EXV1h3ZRvEhuuuuu0adCytw1jlksnxF+IjE/bZa8/fEPEm1O2BnVjB+CViPWJfT7owOpezG7NmX\\n\",\n       \"KWal7MKLAsWD7LnkC0NRi1YV1jYKFIoRihbZfb/4xS/O87KfoXXvvugPM3Yn9GmzY9eyUcRxpkRR\\n\",\n       \"N7VjT5bpeSyxDzKm4Ws0L0Vq3qAA4Wc5dv/EmBkbalcnUKIiqL/44pQUECi1i9LebvhL0vfYBWNZ\\n\",\n       \"7N+/v+p77Dl55ZVXrvs5PlI8y0pK77S7ahxqWJEyxhhjjGlkaYrU008/Xb0DNEoOVtxYRQqrD6UI\\n\",\n       \"hSfOloniQpmp9S+IVhV+D1gvnK91PyD8NDhO3LEc5YSoL15ZH8faZb+k7du3S5IefvhhSXnGaNb7\\n\",\n       
\"KT+gHLN9sPAJyxSpUjnU7giegVVIe6ndLymCfwWqBP4gpeNFq5Z6yPaVwtqjvONeefOG+qUdoayh\\n\",\n       \"LGJdlzL40+64n3nT4v/DmEMb4Zo5FveYZayeFdPuZlAiy8Q9a8gs3aqoZbsPMIbQB6PvVdyTLyMq\\n\",\n       \"UdR3RmnsKY1dsY+w28D555+/4e8yaiPUM+IzrqQu33bbbeu+j5JL+T344IOjzptBBnO+T2T5oYYV\\n\",\n       \"KWOMMcaYRpamSEn1ygOWeauPEb9HScGKiTlmOD7WTi0xao+ID3xFsPBbd+zGCkLhif4D+I4xu+fz\\n\",\n       \"aCUy2y/N+lHUsFqiFVayjkp+BvOywoH6rS3vGJUXof2gwLWqFfghRPC5IhqTfkE9tkbd1eZS4n5i\\n\",\n       \"lNxYn0LaK34rr33ta0f9fiy1/h9ridFYUSVElZ63IlVLq7/gOeecI2kYE2688caZXdNaok/QtKDK\\n\",\n       \"oubv3r173e9t3bpV0nCf9NHPfvazGx6/du+7ki9URlR84lg9VmHie+x/+q1vfWvD71N+EJW71rEE\\n\",\n       \"dR91HWUqU9mBZyOKUyzPzMdtLPh14hOHUhqjROeFFSljjDHGmEaWqkjVglXWupM2s3As7loLPYs8\\n\",\n       \"wOrAZybm1EBRiDtfx0zTJUWO62YHa6yNaKXis4O1gHLSmtUZ6y6zMktWTUlxmlV0YImStQS16/lR\\n\",\n       \"pSCrMhntKTcyv0eyDOlYrfhXxB3SMz+SjLgzO/nMMsYePwMFcFFqTklJrAGLuTanWGTaDOqnnXaa\\n\",\n       \"pMGXJtZF9IeshTaGknDWWWdJOjjDdasPDirqli1bJA3+ltQ9bRClgDGD81Fu8f4Yc0rRf/g+1Ubj\\n\",\n       \"QW20ZqxP/Ai5j6ydxD4bx8pWX6eSEgUxhxvPnlJOthIoUIyV2bOBPkk5cP7WXIDAsy8qbrSj+KzN\\n\",\n       \"lCh2++B3Y9tPhhUpY4wxxphGDglFCuXm9ttvb/o9s1Re8ZUqrX9nSgVWBrPjaAVgvcTzYcWUop8i\\n\",\n       \"WM28ZpmjS7N+ImFQtLj/+LuSv8O0uUHG5vKZN1jXYznzzDMlDfto3X///ZKGLMFx/Z+oyegrRXug\\n\",\n       \"naMmtGYPRhVoVXAzSvnBOO9GGcfHgL9SZv2To2et9VnaFxGIhMVCRVEpqXeRaffy49ozpWLsWAG0\\n\",\n       \"aXx9sjbOGJad/8gjj5Q0ZLQG6pryjiok/1NHKDSor6ilWU68ZfuoUe6MtYzlpbExKoqLiryFGElN\\n\",\n       \"pvZpFSnqn7326JvRByzuIkJ/mjYSG6UrtmP6H6/4RmUwNs06p6QVKWOMMcaYRjaFIsWsH+uFWW6c\\n\",\n       \"/bfOIpnFYh1gqaMstVoNzPKzdXCud1qfJWbzKF2tETJcF9ZCyUpojVwpwXGxxue9F1yJVqsffxN2\\n\",\n       \"MEcFIZIoKlIlaymqKFxXjASi/Hg/8wco5Vka699D/yztg4biOC2l9rne57X+d8cff7yk4d6ntdhb\\n\",\n       \"iUrPrEDtjT4lkdJ9o0LyPdokdY86jQ9NHEsZy/Fh4fv4L8Z9Q2FWUYCM/bWZ0iP0IcbAsdeV3d+i\\n\",\n       \"iBHlrZCPjHpGkYrEnIrU+7SKFO0ty4QPJf/cPXv2THUdGVakjDHGGGMa2RSKFLN+Zs/4FM1qB2gU\\n\",\n       
\"LyxwfIQ4D1bsWGsjs3K4H6wv1vtbo6OwBnltta5idmCsikyZyJSokq9MCfJbbZZ9mFoVMRSpz33u\\n\",\n       \"c5KkU045RdLgixbJfMJKihBWLWpArUIY6zsyNkcR7bf0u2mjALlf2mV2vPV8Amv3BcRSpu/zf9wn\\n\",\n       \"kbqp3YVhs4BKiSIz1vcLGHPoq9EHjXLC5ywqUiUlobRn27SMHSuz9lP7bIj7VC7b1yvbfWIsjPVE\\n\",\n       \"iGcqOPcf9+bj99NG75XGymU9U6xIGWOMMcY0sikUKWAWj1VIFuJp18uxirCu4vr+tD46KFuA1cpx\\n\",\n       \"8QdotWb5Hdbe2Fk3VhYZy0888URJg6U/dt04ixqsZVqrpAR+Ia3tJuYnovxQSmIUGb5P5JU69thj\\n\",\n       \"JeV+I9n5ojULqADRl6u0wzoRKplVPDafF+WJlZlZu9NmE6ZdtihbqKwl3x/Kjqg9LF3u7etf//rE\\n\",\n       \"9zeLEpW1kQh1j18blNpMhLEyq2vKmSi8jFnlKmuFei5dx7TPglgvs4pgbaU1M36Edkd7yJQhxjrG\\n\",\n       \"LMbAo48+WtL0Y39JZZ81cZ/ZDCtSxhhjjDGNLFWRwgeCWTzKDrNXrINsj7KMmH+HWXS2Xo8Vy2x6\\n\",\n       \"7Lp2VGiybMGtWZDZL4vyqPVNQkHBOti2bZukQfHDPyG7nizip3Wn90UxVoki1wrEDPQl65+8URdd\\n\",\n       \"dJGkwbevNhqU89EuUTAp/9Z1/9ZoxAza86ys3FbIbURW7VtvvfWZz2qj7x555BFJB7f9VtV3XkR1\\n\",\n       \"tNQWYW2ZSIOiNPa+Sn2d8ma/z2mj5AA/THxtyPPVWi/0ydbM4rXwDKO8UamXxbSKGM8QlBnGJJ7R\\n\",\n       \"2V55jJ3syrFopm2HY3dTsSJljDHGGNPIUhWpGBHDOjuWea31BeyQfcIJJ0y8j/LCLBkFCaWG/7Em\\n\",\n       \"xuaVIuIHyPODssB9tvoWsct95qeAwsT94cNDOeLfgZKFFVnykZlVLpdIbWTVvMDKwl8iKpRZJu3s\\n\",\n       \"fazdG264QdJQX7VkeZko/6weZqWaxPLIwBpFJSGzO4raAw88IKleFWpVaDn/Rr+LfZu6e+UrXylJ\\n\",\n       \"euc73ylp6KMxWu8Tn/iEJOk73/nOqGubNdPWcVSmZg1jyO7du2d6XMaq6M/aSq0S1epfSTuLOeTi\\n\",\n       \"6giZwWcdrZiNqdFHbiyMCYxR9L3MD7K0G0Er+PeW9k9985vfLGlYVbrppptGnYexED/X2rHJipQx\\n\",\n       \"xhhjTCObImov5nnCyhmrWGA9Risgzo6ZvRNRgBWCclObM4Z8QS984Qsn3idfULTwxypsXFdUvCJY\\n\",\n       \"CdwX5ci6NjlkiM6jvFnnjnA/rT5jJWalRKE6sMM99Uy0Z4T7jtbmtP4ccPPNN0s6uJ5L0Xvz9tvI\\n\",\n       \"YO8/ygUrOV4n5Yt19vKXv1zSYH3SzrEaS/2G/lWy9mj3sf1xPnLarAWfEOo4qq4oNPwW5YNM57TN\\n\",\n       \"ZStRsyL6s22W6EPGqNJ+kpkKf/7550uS7rzzzqmugzHkta997cT1MCbgS0e7IFcc/+NDxrMgRvbG\\n\",\n       \"9sfx8Z2KEeUob7Rx3i/VG+039l0ykk8L/QH/xMx3btZKFGTnIxoTf1Aym4/1q2asodwz368MK1LG\\n\",\n       
\"GGOMMY10y4hO6bquX1lZWfh5jTHGGGPGsrKyor7vu/U+syJljDHGGNPI0nyk1ipS5HzAJ6jkmZ9B\\n\",\n       \"hALZUzlHq/oV95UqMe35xhLPx/o7PiX4icTcJhEilvgd5cd6P+vzV1555cT55k1Wnvgc4ZODH0Mk\\n\",\n       \"u29y03BftDfOg1/JX/3VX0k6OM8Tx6W94Zfw6KOPTnwfH1tnzHgAACAASURBVDX8BvDfedOb3iRp\\n\",\n       \"8I/gvBdeeOHEdX7xi1+UNOQR++Vf/mVJ0tVXXz3xPfrNeeedJ0natWvXesWxsPZJDqDLLrtMkvTR\\n\",\n       \"j35U0lCuRC7h25flmsHvhPujXeKDRn1Qv1dccYV+//d/X9Lgg0OZ448Vc7D96q/+qqQh0vL222+X\\n\",\n       \"NPgP8jvqHJ+Vd7/73ZKGsqzNON7KsseWw/V81157rSTpjDPOkDREntKmiPIkoza7GNAn8bVjLKHd\\n\",\n       \"0edp8/SF7P6IWM+i4cjTVdozEb/Fyy+/XJJ0zTXXSBr6Dj5F7I24d+9eSYNfL32Nz7/1rW9NHD/m\\n\",\n       \"93rooYcm7mts/WV+kCXi+bLIY/o394FPHde/Y8cOScMYRP1n58uwImWMMcYY08imiNrDiotZfMeS\\n\",\n       \"7TDeyti9yDIWtXN8lsm6tAM4VlCMplv2/lgR8nOh+JTKM7tvFCSU0Aj1NTZyoxasxgsuuEDSYMWi\\n\",\n       \"ikQrkPrJIpRQxsj+TGQNis5tt902q0uvIqo+WJu8Pvnkk5IGKzKDKEIUU6zmyHr1WLunFxZorGuU\\n\",\n       \"Byj14WzsabW4Ydp9LQ9ViAxlDKYvZ5GvjA3f//731/2cMSOOkaj2MV9S3NUBJZLzZ/m52CUBhakU\\n\",\n       \"NRf3pIuKFApSKdKZPkBuN8jGcBSZO+64Y+J9yiPbvYC+Xeq7tbkCZxURnt0nY0iM4kNh5PNMiarF\\n\",\n       \"ipQxxhhjTCObwtypzSZLfqPMh6p2D7pFwawfsp2rs735ImN3boeSj9S8M4zPSpHD2sRvpXYfJMBn\\n\",\n       \"CesyUxHmvcN4tMJQXLiv+DnXk11XfB9/ilKOnmWBKlDaCxClLWufKHtr87iV1Nd47Ex1RAnCF4Yc\\n\",\n       \"bBlZTizqgjot+bhEUCyebTDGUceZckEGavpypkhlbY33UXsZU2hzDz74oKQ8N12k9L3oS8f5UHj4\\n\",\n       \"nzGbdsXuCRnkvYrtKz5TUbxQbLP+UtqdoLRas+hdK2gn1CfPVFR6nkHRh3HsLiYZVqSMMcYYYxrZ\\n\",\n       \"FIpUyZ+BCIKzzjpLkvS5z31uLtfx/ve/X9KwboyVQMbqWlCOsD5Kvlu1ma1ZR8+sK9bJsWawjrBq\\n\",\n       \"UYQWnUl71r5h3H/0YygRlbyS/8C8INoQyBpMvdWCD1S0VrFKx6ofi6akVGbqAtYm/ii1e/tJgxKF\\n\",\n       \"WpeNPfSlWos1U9UZA7jmCG0hy9w8bZ4/7jf6Am12qJeSIlVSVhgzSz5L1M+2bdskDX6KPANOPfVU\\n\",\n       \"SUOUWwRfOPwTUWziXndREaHtopzwOUoU98/xS6s20bcvEhWpLNo0a49Q8pGiHPCHbKXkY0j94lPH\\n\",\n       \"dZPhnHpgNYJ+yn1nEcPAakEJK1LGGGOMMY1sCkWqBBb7vLOwo0SxbxFW61hFiuvESsqs0bGUlDvW\\n\",\n       
\"yePsHStk+/btkqStW7dKkr761a9OHHez+ZiVGLtDe+2eelnEyryp9e+Bkt/EZiHz7aN/jVGUpMH/\\n\",\n       \"Ait6rb9GKVqItl7qS/g21frGYPFyXPog6maMCsLSzvIGwVg/QMBCP+eccyQNY9Cf/umfNh1v0dAX\\n\",\n       \"uP9s7C/VI20kU+JQ688991xJgwpK/fA+oOaTMw6ob5QOjlv7zMpWLXif3GuMYdEvEuWqtNcdSiv3\\n\",\n       \"R1Tg2AjlUrTdrJ7VjB1ZVCaf79u3T1K7z1eE+ssihSNWpIwxxhhjGjkkFCmYdr21BLlBshwhY4nr\\n\",\n       \"yK1Rd5FSvq0sooJ8QpTjshSNWZUDYD1gLS5j/8gxPPbYY5Kkk046aclXshi2bNkiacgCjV9QFulW\\n\",\n       \"gt9xvLXQFjLLNO5+kEFbKkXNoV5iKUdfE5Sg0047TdLQ52rHstboJ3yB8PVBmTpUwJeFsWysogDU\\n\",\n       \"I/Ue83JxfF6JCGfspp5QDqMSFeE6GeNihDnnR2lDGeL64pjM//QZvodSQvvguCXfJZQrvs/qC/dH\\n\",\n       \"O+bzTMnLIufx85w2JyTQ1/Fxi4pUbb64sYzNIWlFyhhjjDGmkUNKkTpUiNYGVsOslJJpM7jPW4kq\\n\",\n       \"7Z046/VzrDWiv6JKUOsbBa1KSS2t+7ERQYK/AFZkyc9m2cT7jf2C/8kJRIQU1v1XvvKVdY8Da63w\\n\",\n       \"kp8ZbeWEE06QNPiUEDUFKDpY8BlY4NxD9DMkimhtrqu1lHLIcdxW6Gt33333VMdZNPg/Tqts0GYo\\n\",\n       \"36wNoW5SHygyROmVojfjvpCMPTGXG+0lKm60wyz6k3aUPUtQjkr+nfg2UQ58n1c+b/XrPeWUUyQN\\n\",\n       \"e/JB62oSzzoidKMiyLOW617WaoQVKWOMMcaYRqxIzQFmxcz6sW5mtUP8tFbqvCHPU2vEUS2UJ+WN\\n\",\n       \"8pT5U5BzpGRdzjvXTsmPIQPlCet1VntBzhusaKx2rFXUG/oD1jqqEdZ5lmeKSKYx0Y6xTx599NGS\\n\",\n       \"hjIl/wyUoqCog8yCR43NoqKI6qPNRp+Peaujm5WY+XtaUIiiwkVfpE3S9+PeehGUJhQTojz5HeeL\\n\",\n       \"PnaxvaAu08bxgWIMjdF59Al8hjj/UUcdJamsTnN97DHH96NiVMr9lz3DogI4bYbz0moO9ZDlnKsl\\n\",\n       \"i/YtKdLP/H6qsxtjjDHGPIuxIrUOWAsxQqI2Q3ec3bL+jMXdGoECm12R4r7H5kUaC+WAlZlZr2Qr\\n\",\n       \"Zp39y1/+8lyvq0SrIkm5jlWisHqhdmf2WYESSL+KOWi4H3ItocJgbVLPvI96QL8ak6kf3yeUCMqG\\n\",\n       \"Y0ZFqgT3goJA2cY920q/x4cqKlKlfD2HK4y1KDqxDYzdnYHjxF0hqB/aJn2zpHAwlqNsohxxXVx/\\n\",\n       \"PB/XTz1T7ygufD+2Tz5nrEMpZdcPFKmSjxS+VzyDsvbVuvsF94fSRX+oXQ0oHTcyq6i9OCaSF652\\n\",\n       \"jLQiZYwxxhjTyGGlSGHBlnaVL3HmmWdKGnxusIZK+xhFmP0za86UhNq8SszqZ70jPOvtWCGUX6sV\\n\",\n       \"wex+1hnCYznhz8B1Z1YL69yoDmR9JnoxKkTTRkWWIKqx9DnXEcsfa4n2XoqIidb1ohUpzpPtKRit\\n\",\n       
\"X/yJaOf4n8Ss5NR/i68Yx0aJaPWLY2zgOFEtLvWdmNcnQl0fLpT2V4zQNqJfIW2q1ocsK18UH3zl\\n\",\n       \"iN6M6jb1i08bfYrz06a5v2wPRcYwjsfvOA4KEWo+7ZLyoq1TLjGX2nHHHbfufUboQ5yf++U6YgR0\\n\",\n       \"bX2hdPE7yr31WUL9ZPXHs4Axgv60f/9+SQe3Dz7nelCgsz0Qa7EiZYwxxhjTyIaKVNd1R0n6c0kv\\n\",\n       \"ktRL+nTf95/sum5F0rskkSDog33ff3H1N1dK+o+SfiTpPX3fj9uobgqmVaIAJQkFamweImbPrH8z\\n\",\n       \"O2fdOFoptbkvTjzxxInjR1AymMXX5otidh4jPmqtB2b5WFvUQ62PVLY+HcEPICqDMbIlwnU88sgj\\n\",\n       \"kgarDvUgWiNZ1t5ZUcqNg28ekTWRabM9L0qJghiBhBXMDvEZMRt1vN/1lKhSXibgWLT57373uxt+\\n\",\n       \"PyMqCPTBUhtCFSU32K5du9b9HlFahwslZQOVlVfaTFQMGTNQqkqKIgpkHNMY61GYUHdj26I9MfbQ\\n\",\n       \"hzguYyAKSHY9tH2eKZwHRQwlK0a/cZ+xD3Ad+P6Nzf+EShyh/HmW1K7GRF/B6Hs2lpgTMIPrLK1O\\n\",\n       \"MOZn991KaWnvh5J+s+/7PV3XHSHpvq7rvqQDk6qP933/8bVf7rpum6S3Sdom6UhJt3Rdd0Lf94sd\\n\",\n       \"uY0xxhhjFsCGE6m+7/9R0j+u/v2Druv26cAESZLWM6vfIOkv+77/oaQnuq57XNK5ku4cc1FY7ERZ\\n\",\n       \"Ye0x+85yu2AVMKtv5b777pvq9zFyh3VrfHSiIlUCJYZyyaxtrJyYFRdi1lyukx3Av/CFL4y6Lq4H\\n\",\n       \"qwzrvFbxYF2b68BKyHzAyDs0dq8+fodfAFZSZrXQ7uYFfg07duxY9/NMiYJpcwstWpGK1iz9uWTl\\n\",\n       \"Yq3TX1B7NlKIa6ONUKBQu6ZVs+lTUYlCgWCvPdoi2dtLdXG455GK/ojUA0odygjlFPMTlZQolJzz\\n\",\n       \"zz9f0sH+hCg8t99++8RxM+LnHJ96Z8zP1PzYzhgz2RORZxvHQZVFpWfsy+4b/8K3vOUtG95HCa5z\\n\",\n       \"bL+gfUeftlaVn2dfSWnL8rRFps03lVHtI9V13TGSztAwKbqs67oHuq67ruu6566+91JJazWz72iY\\n\",\n       \"eBljjDHGHFZURe2tLuv9jaT3ripTfyTpw6sff0TSH0i6NPl59eY3KBFYHWP9FpjNt+bAGBuhkIHy\\n\",\n       \"w2yeyAAUEc4Ts9vyO6wPrFlm0czqS7kzMkUqvo91NdYHDFCEyLM1lqx+435lgEJRq0ShTMb1c+47\\n\",\n       \"81X6+Z//eUlD5mzqJe6HRaZu1t2xrrk+6vmxxx6buP5f+IVfqLr+eYMPHwoR14c1yX3UZimmPKIP\\n\",\n       \"X1RUURmIrMmgXFEIuS7UimnysVGX+M1FUGmx/MdG7AJlGzNg4zNSyls1dgykjDlf7VhGnWV+a9RF\\n\",\n       \"a14r+kxU1WMEK9eBEkeUHJHFjH1ZvUXYUxGFJypg1AP7PKIW4zdKnibaC2oxY3Mcq6jvUm67CGMF\\n\",\n       \"14dyxn2jcDGWxTGN16jm8/ss4zl9iXZCedDXnnrqqarrj1B+rbsv0E8oP+4vRvzOC9phbYR8cSLV\\n\",\n       
\"dd1PSvqspOv7vr9Rkvq+f3rN538i6fOr/z4p6ag1P3/Z6nvGGGOMMYcEP/7xj58RH3bu3Lnhd7uN\\n\",\n       \"rPvuwLT6zyT9n77vf3PN+y/p+/6p1b9/U9I5fd//+1Vn87/QAb+oIyXdIunlfThJ13X9ysrK+Dsz\\n\",\n       \"xhhjjFkwKysr6vt+3WWMkiK1XdI7JO3tuu7+1fc+KOmSrutO14Flu/2Sfl2S+r5/uOu6v5b0sKR/\\n\",\n       \"kfQbcRJljDHGGHO4sKEiNbeTdt26Jx0bjRXBFwPfmF/5lV+RdGAmuRZ8Q8ZGEsQ9vqJvEOf55Cc/\\n\",\n       \"KWnwJ8C/AF8ncsjgK7Jnz551z8f6POu1rJ+TW+UDH/iAJOnaa6+VNKzXxyy1kVLulejrwyv3F8uT\\n\",\n       \"dWRk0LFRYax740dBOXGeP/7jP5Y0+JTxPcqPiA3uB/8B7iPu+4QfCH4R3N9rXvMaSdLHPvYxSbmv\\n\",\n       \"HefFryDuC1bru5OVZwb9Y8uWLZIOzmo8q/OdfPLJkoZ2tHv37onP6V+UAxFH2fk+9KEPSRr8Vfg9\\n\",\n       \"vnmx/RPNSL/56le/Kmnw3+A41CPt5+yzz64uy2nJypI2tnXrVkmDTw3+kvTl008/XdIQIZyNeUQi\\n\",\n       \"X3755eueb17E++O+8LnJ/CJ/8Rd/UdLQJrI2StuiLVx66aUT55sVcZcFxuRSXzjppJMkHVx/jEH4\\n\",\n       \"AJVy2EE8H/VPVN7Xv/51SQfnW8JPMz5D+P/444+XNDz78MeMz77ox4kfbqt/a4TzXH/99ZIG3zbG\\n\",\n       \"XsZEfK6oj7PPPnviOIzljHU8cxlbeWZfdtllE+fl/hgbuS98yyhn4P4Zg3h2UL/47uEb9973vndD\\n\",\n       \"RcqZzY0xxhhjGlnaXntHHHHEMxEOWGNE58RIFSxfFJZMaSECprTHW1SisPqy/FSA4hP304qUclVg\\n\",\n       \"3ZT2XGvNvlqKrMmUKCI4SlYKVjXlzOweBWfsfkrUZxZtiOKDEsX18zv2RnzwwQcl5eUfsxFzPqLH\\n\",\n       \"UKRKUZ8x1w3lNe+cP/STaaNKS1C/WT6rsfnQuG7aBf2Ifh1BAcv646OPPjrxyvVE63YZ0MYyhQnF\\n\",\n       \"AaWmpL6XxprIrPYbjXBf2diAek7dlqIR6SulCGTUcxSOsSos111SyRlbUHZQOx966CFJwxjHakJU\\n\",\n       \"n2N0GvWWtXGiDrPdFaCUU44+hUIWxz6UP1T6qNpz36X6yohRkGRYj7n7Ioxhd91114bHr23HPPMY\\n\",\n       \"E2LkMGM69cT3Y3Qr79MO7rjjjqrzW5EyxhhjjGlkaYrUejNNFBpev/nNb6bfXQ+Updo95gCLFoWi\\n\",\n       \"pCwwa2VWPzZbKvl24mweLr74YknSPffcI6msTMU8UPiM1JYb58PaLClSWE+81voJZJQUNI7PK1Yc\\n\",\n       \"r1hBtWC9oYDeeeeoxPvPgLU5Kz+DDNoJVn/JSp0WlCD8V2KuHNp9KQ9UhPLGSuY19teSMhz30xur\\n\",\n       \"gK5HTdb0GlApY/6e2Bdrx4ySah3hPG984xslDWUb+1AttX6rKBuolFmfRimI+3JmkM8rU6u5T9rM\\n\",\n       \"l7/85Q2Pl8H1o3xFtZXrZazh+1mepFK+I37XmmcJKBeelVGx41kW6z0qM61kSl9JaZwXrJJQ/owR\\n\",\n       
\"5KWivKK/LatUjOUoZaXVLbAiZYwxxhjTyNIUqfXAox9LH+Wg1uIsWbIZWG1YkxdddJGkwRJn3ZVZ\\n\",\n       \"P7Tu21PKFst91+78Hq0KFLZaRQrFg9n72HKPcN0xW27tdZT8GbjfVmvq7rvvljR+p3TIdmKfF1hT\\n\",\n       \"RKtlWbZjxnXUkCxTfAbWNwrv6173OkmDWoO1mVnT0a+H49C+sPpQW/BHqPXTiDsgZMruGDK/sxg9\\n\",\n       \"lGWIBu6VNn/WWWdJGnxTvva1r426rujrUcvf/d3fSZLOPfdcSUP011hFqjaCmkzcpb5bOxYAyhCK\\n\",\n       \"AWMTkaU33HDDxHGJYhu7KhGj4mgP9CXer1Uss7GTsYO9FmkXe/fuHXW9kbFK6rRKFGS7QywLrof6\\n\",\n       \"ZE5AFCb3jXK3b98+SUO7grHt1IqUMcYYY0wjS1Okuq47yNrBwseqY8duIiWYbZI3KePEE09suiYs\\n\",\n       \"ZSxuosFq93WqBeuNWS9KAuXB+TK/gAgKHrky4u9Kfg5YyXEW38pYJQpQXjKlI+bHGpt3jPVuyomo\\n\",\n       \"0bF+KDH/VKb4UJ58b2x5ABFDKGjHHXecpIPzN1Hvsb3WKptAOXNfKJtY2agOHJfzxtwrgK8VVh+q\\n\",\n       \"DvXIcehv3Oftt9++7vVRHqhEs4C2EKHuSkoUKhl1jNJwwQUXSBrKiDKt3a1+bE42oGxvvfVWScOY\\n\",\n       \"Oi9q9zelnOjjpahExkr6Or4v3Nc111wjafAbbN0bjnpDeaLNogiOVbgyWPVgDCqtGtT64dL3Mn9N\\n\",\n       \"/BN5trXuRwv00aOPPnqq40wL9ROfpbSXWN6MQShTjJWUB3MNxoNSvwcrUsYYY4wxjSxNkdpIRSBK\\n\",\n       \"DasKxSWzrFEamCVn1mUJrDZmoazPM+uOlj7W0djz4dNBhAj/o8hhhWY+PHFHcf5nlh3Xyyk/LHgi\\n\",\n       \"iu69915JQ/nxeVQeapUxoJ7GKjCUe1y/x7qgPGgX/F8b+YJShpVHrpjMisusQa6vFK03rR8Cygvt\\n\",\n       \"i5wmJZ+n2C7HqhpYr7QL/DfwryF6kFf8DTI/Ddo1fZ7/saJj1CX+L1leM9ov1nyr0jdLaIO0VRQN\\n\",\n       \"lAzqYGyUVmsbwrKmrLHIua6xka6zItZVbduk7aBQwfve9z5JB0cDjoXfM6ag3GT+iJHajOe0h6jK\\n\",\n       \"ZtT64ZbqE98sXhkL8T2rfYZRvoxN8Vk0b2IUXXzWUA/8T/9hTrFt2zZJgxJ6yy23SBoUZMayOKco\\n\",\n       \"YUXKGGOMMaaRTRW1B8x2mTWinDCL5nOsLd5nVt2a+ZnZZ1y3J3IjkkULlda1YyZmrCxm9/h4cT3R\\n\",\n       \"2ojKXFSQIvisoMDgY8PvmKUzC0d5a7WGuW6UjVrrN1u3xxrFmkdxweqI6+QliMKkXjM/haz+an3m\\n\",\n       \"4v5OpdwqMZcR7Qsrsjb6jvZFeY31kaIfYcVF65rr4f6w2rP2Qrvk+1jB9LPo61byH6L+KZ955/Ea\\n\",\n       \"A3VH36JtUZZjc4C15rWiTzAWUmcoVSgjs1Lzxkbcxt0RWuF89B3GntpdIWiDlFOW96sEfa707MFH\\n\",\n       \"DGWQNj82Y3stcV9S4D657qhgUk/02Rg9WburwayJ+bli+415orjP2267TdIw9md+wMwtGKtrn4FW\\n\",\n       
\"pIwxxhhjGtlUihSzQXyHmP0y60RR4XvRh4f15taIBM7zwhe+UNLgG8UsOGZUzma1pdwvWC1YA1hz\\n\",\n       \"/E9enSxiIPMhKkEerNNOO03SkOOGHbPvv//+dY8/lrE+VbWU8kdh3WJ94/MW926EWUXiZNB+uN5M\\n\",\n       \"kaL+ecWKpL3F90tRlShI8bi1EPmUWfX0M5SrUnvhvunX3AfWOTmOOG+p/TAuUM+tkW3SoGTgIzGt\\n\",\n       \"QoOChHrIvbbumzltBC1+ZqeffrqkYayYlRIVfWaou2w/T4i7I7RCBC1jJm29trxpu4wVjCFcV+2z\\n\",\n       \"hDbIs4I2HRVIFCjayazGyhhBC/yPgsT56bv0IT7nfqlP7idT2qIyVdq3tla5zIgKLfXFK0oi94ni\\n\",\n       \"yH1lueoYi8hDhspeu7plRcoYY4wxppFNpUixPouvELNmZsfMKpnNss6M9ZMpD7VgOXMc8lhxPbXW\\n\",\n       \"ScknhevFBwerIfqKZBEDcZaM1VDrT4FisXXr1onjTWPZrwWrYFGZv4HyG2v1TOunEfNbwWOPPSap\\n\",\n       \"nCsn5mXCOorUZrmmX9B/xt4fik8EnzL8LWp9xaJyTPunHXK8M844Q9JQjvTD6KNF/iz6R2lfs42g\\n\",\n       \"raIGZ/degrrhXuI9l9oidURfxL+wVbFAEaAtZRGxs4K6QOGb936QwPlYRRibxZ9Vju3bt0/8nvpk\\n\",\n       \"TC354fGMinssRhjbqYfaHHglSu2EPsd5GbNob/H+uB9U4troQX7HWEY51K6aRIWpFNVIv2Ws4zrH\\n\",\n       \"RtLj17x7925JQz0SQVzCipQxxhhjTCObSpEClAxmo8yamVXzeeZzgpXRCrNo1knHKislRQormOgk\\n\",\n       \"1pWZFXOf2X5N0aocm8sDpYScJ1gLMbJjWjLrHmt71lYy1il+EyWFMuYUidRGA2ZWE/VYUoRQDaIS\\n\",\n       \"RdQd58fHqLRfGooUKshYdYDrRunh/qJPUm290b6w9rEWsca53qj8ZjmXsL6pt7GZ6dcy9l4yaCv0\\n\",\n       \"fSx8LHRU0qgm8z6KCn2COmhViYm24rrwGZo2o3UkZu3HV4n7bPUNK0G5MYbxLNizZ8+o41DutFHa\\n\",\n       \"eowMjv6xEeqdvp6ppJQLY/usx9yMuFpRymfGGNOqiDJWAO2kpMAxFtQqi/SPV7ziFZKGCPhszKNf\\n\",\n       \"xLkDvl7UN/dPfe7YsWPD67AiZYwxxhjTyKZUpKKFH/f8Ks2Sp/XNwfcDS5dotlpqz0/uEDJE44PC\\n\",\n       \"/e7atavqOK3ZkmO22yyiYVZgHTPbr/UPKPkdAOVQm5H94Ycfnvj/4osvnvh/Wv8FrCSsnUxBRUGL\\n\",\n       \"mfW5H1QNIoGwku677751j0d7ReEZ2z6A31OetOvW3Eb0J8qB46IsocLgl1FS8hgPWvdXW0um/sac\\n\",\n       \"dRH6LHl4HnjgAUlD28NXCSUi+u/xGlVGzofCMxbUbq4fFZjI3VmD+owSU7tHWSsoQPjIlPpYiYce\\n\",\n       \"ekjSwbtKoBjFthh3g+D7jK1ZtFdUt1HUGBuzdrhoWpUoFLtWhZf7rz0/ChTtG79f+h3lzZgV20cW\\n\",\n       \"XclYVfLRAitSxhhjjDGNLE2R6rputMVfO0sdm5U2wnXdfffdTb+vXd/F2sH3hd8xq8fqKc3ux0Yo\\n\",\n       
\"YD2iXDz44IOS6nekb6U1Z0ytdUt5oVC0KifQmiH/zDPPlCSdeuqpkqQvfOELG34faxjwlUJJoj3S\\n\",\n       \"PvAjQMmJ7YPrblWiIPMjGAvXH/sv7ZydC1DkUKZQaksRQ7OMDsVnhj5SOja+UCeccIKkwdeGOqRu\\n\",\n       \"aJsxz06pjY3NARbh+rkvOOmkkyQNYxBjQa0FnjHv3GxAedI2Wn2xGFsYKxhLY74tlDZ8bOJ+mzHq\\n\",\n       \"Las32kXMUzVtvrBFQbuhHcf7nFaJLOUfi1BPsf5px+ecc46koX5R8RlbXv/610uSPvOZz0ycnzlE\\n\",\n       \"7bPVipQxxhhjTCNLU6Rq1CisPCxhZp/kdmAWOtYK4jhYGSUFC+uHLMH8jii7mE+nNtIGCx1rEOuU\\n\",\n       \"dXNm/TFSpHWH88i8/CUy4o7aWAXRKo9789VGzxGtiVVB+aHc1CoqZIHGhw1rMbbZ6O9CvXB+FL6x\\n\",\n       \"ymtUkqK1GrNmL4pps1CXog3jfbbkIqKNsY8ldYLvRdbm8QeLkabUMX56UXmgronwxSLneCg+KBko\\n\",\n       \"Ehxv//79kg7uA7TZWCb4btB3UCfxx8Pvj1xb3Dev+HQxtqI68v7YCOB5Q/2x/yhjMPdDffE+9UCf\\n\",\n       \"pPwZc6IqWqtkoUzUKhTZMyD27Xn7kgHtg3bD2MizK/qLRig/xthZZ8hvhf7E2BLHmJ07d0oa2jf3\\n\",\n       \"zzP1pptukjT4Ccd+WLsKZkXKGGOMMaaRblaZVUedtOv6lZWVhZ/XGGOMMWYsKysr6vt+3eUgK1LG\\n\",\n       \"GGOMMY0szUdqrSKFz9HY6LPacyxK/ao9X8nnB78HcsA8+uijG57v937v9yQN67z4VWQ+YDG3DH4C\\n\",\n       \"cd+x6Pvznve8Z8P7MsYYY55tWJEyxhhjjGlkqZnNiYghGyn5ZMZCpAlZgIlY2ayUIh1QgMj9kilS\\n\",\n       \"8Xgxb1IWjUhumSxHztgd1I0xxphnK1akjDHGGGMaWaoihQ9OloGZnCn49PBK/hyylJLFdFZKynnn\\n\",\n       \"nTdx/i996UtTHQ/fpZipOoOcF+SpaiXmOTLGGGPMbLEiZYwxxhjTyFIVKYiZrIEsxSg4+AJ997vf\\n\",\n       \"Xff7s8r4TXZUdnJvhQzlZIXlfoiqIzoPXyh8u9jHKWZZHouVKGOMMWa+WJEyxhhjjGlkUyhSRN3F\\n\",\n       \"fW3YByruQp+B8tMKyhDKFopSaY+wCL9jnyf21WLvNYg7geNDtXv3bknSz/7sz0oa9ulC0VrU/kzG\\n\",\n       \"GGOM2RgrUsYYY4wxjSxNkTriiCOeUZB4Jfou23EZpQelZt++fRO/I5N3K+wIzfnZgX0s/I48Tlkm\\n\",\n       \"czKKx+vGJ+zb3/62pCFakfucNy996UslDUog5WKMMcaYSaxIGWOMMcY0sjRF6nnPe55+9KMfSRry\\n\",\n       \"HfEawWcJpQTfIpQelJpp9+ojmm5ayGdF1BzXGUGhyjKMo0xRLplSN2vIkE7+K3yzjDHGGDOJFSlj\\n\",\n       \"jDHGmEaWpkit9RdC+cBXKGY6J/qN6D7yLPEKi1JsSsT8TdNG2X3/+9+f+P85z3nOVMeLxAzo+HZl\\n\",\n       \"e/UZY4wx5gBWpIwxxhhjGlmaIsV+edIQFbb2vfX4e1kaUwAABjpJREFUp3/6J0mDz1FUpNgbLyo4\\n\",\n       
\"reCbVdobD58tyKL0poXjEsUXQdlDWYrRdnzOdZ166qkT/+/Zs2fd484qY7wxxhhzuGFFyhhjjDGm\\n\",\n       \"kaUpUs997nP1z//8z5IGpQSefPLJif+jkpJF1z3xxBMbnpMM47U+SyUlCl70ohdN/B99pFCSiH5r\\n\",\n       \"9eXi/p9++umJ98nDFaMZeR8Fi+viOjgeUXr4qKFAEVWJb5oxxhhjJrEiZYwxxhjTyNIUqf379z+T\\n\",\n       \"PwnFqaQAoaRkeZlKoBzhO8Qrx2vN4H3UUUdN/B+VHxQhlB+i5F7ykpdIGhSyWqUqZjg/5phjJs6L\\n\",\n       \"oke0YzwPn/N/KTovRlEaY4wx5gBWpIwxxhhjGlmaIrU2mze+UhnkTUI5yTKBZ6B4odC8+MUvljT4\\n\",\n       \"EKHM8PqCF7xA0qBQxfPxO5QgXiP4GJHpPML9sDdfq+8Ue/ZFZYlM79/73veajmuMMcaYjbEiZYwx\\n\",\n       \"xhjTyNIUqbWgAOH7E6PyHn/88arjoOzE/FLs0XfcccdJkrZs2SJpiEZ76qmnJA1Rb/yeKEAUMaL+\\n\",\n       \"TjvtNEmDghTPV8tYZS1j1hnI8ekieo/8XWYx7N+//5k2apaP62Pz4LrYXLg+DmBFyphNRimNh1ks\\n\",\n       \"ro/Ng+tic+H6OMDSFKkdO3bo1a9+9ULO9Y53vGPDz1GqIueff37T+VZWVpp+18q059u5c+fC6sIY\\n\",\n       \"Y4w5nLAiZYwxxhjTSFebvXumJ+26xZ/UGGOMMaaRvu/X3Xh2KRMpY4wxxpjDAS/tGWOMMcY04omU\\n\",\n       \"McYYY0wjC59IdV33+q7rvtl13WNd112x6PMbqeu6J7qu29t13f1d1929+t7zu677Utd1j3Zdd3PX\\n\",\n       \"dc9d9nUejnRd99+6rvte13UPrnkvLfuu665c7Svf7Lrudcu56sOXpD5Wuq77zmr/uL/rugvXfOb6\\n\",\n       \"mCNd1x3Vdd3Oruse6rruG13XvWf1ffeRBbNBXbh/BBbqI9V13U9IekTSayU9KekeSZf0fb9vYRdh\\n\",\n       \"1HXdfkln9X3//TXvXS3pf/d9f/XqBPd5fd//l6Vd5GFK13UXSPqBpD/v+/6U1ffWLfuu67ZJ+gtJ\\n\",\n       \"50g6UtItkk7o+/7HS7r8w46kPq6S9P/6vv94+K7rY850XfdiSS/u+35P13VHSLpP0hsl/Qe5jyyU\\n\",\n       \"DerirXL/mGDRitS5kh7v+/6Jvu9/KOl/SHrDgq/BHCBGH/w7SX+2+vef6UCHMTOm7/uvS/q/4e2s\\n\",\n       \"7N8g6S/7vv9h3/dPSHpcB/qQmRFJfUgH9w/J9TF3+r7/x77v96z+/QNJ+3Tgoew+smA2qAvJ/WOC\\n\",\n       \"RU+kjpT0D2v+/46GijGLo5d0S9d193Zd959W3/uZvu/Z3fh7kn5mOZf2rCQr+5fqQB8B95fFcVnX\\n\",\n       \"dQ90XXfdmmUk18cC6bruGElnSLpL7iNLZU1d3Ln6lvvHGhY9kXKuhc3B9r7vz5B0oaT/vLq88Qz9\\n\",\n       \"gfVe19USqCh718v8+SNJWySdLukpSX+wwXddH3NgdSnps5Le2/f9xOar7iOLZbUu/kYH6uIHcv84\\n\",\n       \"iEVPpJ6UdNSa/4/S5AzWLIC+759aff1fkm7QAfn1e6tr4uq67iWSnl7eFT7ryMo+9peXrb5n5kjf\\n\",\n       
\"90/3q0j6Ew3LE66PBdB13U/qwCTqM33f37j6tvvIElhTF9dTF+4fB7PoidS9ko7vuu6Yrut+StLb\\n\",\n       \"JN204Gt4VtN13b/uuu45q3//G0mvk/SgDtTDO1e/9k5JN65/BDMHsrK/SdLbu677qa7rtkg6XtLd\\n\",\n       \"S7i+ZxWrD2q4WAf6h+T6mDtd13WSrpP0cN/3n1jzkfvIgsnqwv3jYBa6aXHf9//Sdd27Jf1PST8h\\n\",\n       \"6TpH7C2cn5F0w4E+on8l6b/3fX9z13X3SvrrrusulfSEDkRmmBnTdd1fStoh6d92XfcPkv6rpI9p\\n\",\n       \"nbLv+/7hruv+WtLDkv5F0m/03opgpqxTH1dJelXXdafrwLLEfkm/Lrk+FsR2Se+QtLfruvtX37tS\\n\",\n       \"7iPLYL26+KCkS9w/JvEWMcYYY4wxjTizuTHGGGNMI55IGWOMMcY04omUMcYYY0wjnkgZY4wxxjTi\\n\",\n       \"iZQxxhhjTCOeSBljjDHGNOKJlDHGGGNMI55IGWOMMcY08v8BGKNoaFbqcjsAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01dfb3d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['conv3'].data[0]\\n\",\n    \"vis_square(feat, padval=0.5)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The fourth layer output, `conv4` (rectified, all 384 channels)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 33,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvXvQZVV19vtsTbxHEy+A3OluupsGBJHCexEqVuqzEv0S\\n\",\n       \"K/V9aGlSGsuYxBK1SOSYiK8GRVNENIZEy8o5ieU5+BkrWqlUWQaUEFEJcpVuaGiguSNijOaeiNnn\\n\",\n       \"D/j16v287+x5WWvtvd/u8ftnd+9377XmmnPMudd41hhjTqbTqYIgCIIgCIJ6HrPoBgRBEARBEKxX\\n\",\n       \"4kYqCIIgCIKgkbiRCoIgCIIgaCRupIIgCIIgCBqJG6kgCIIgCIJG4kYqCIIgCIKgkVFupCaTyf+Y\\n\",\n       \"TCY7J5PJrslk8s4xzhEEQRAEQbBoJkPXkZpMJo+VdIukl0m6T9I3Jb16Op3ePOiJgiAIgiAIFswY\\n\",\n       \"itRpkm6bTqd3TqfTH0r6jKT/OcJ5giAIgiAIFsoYN1KHSbpnr//f++h7QRAEQRAE+xU/NsIxs88K\\n\",\n       
\"J5NJ7EsTBEEQBMG6YTqdTtZ6f4wbqfskHbHX/4/QI6pUlp/4iZ+QJP37v/+7JOnhhx+WJD31qU+V\\n\",\n       \"JD3xiU+UJH3ve9+b+fxjH/vYmfd/9KMfSZJWVlZmXkvP/8///M9Fn3f8fBs2bJAk3XHHHU3HS/HM\\n\",\n       \"Zz5TkvSWt7xl5nxjU9qfxx9/vCRp27ZtkqS/+Iu/mPn7kUceKUl64IEHJEk//OEPe51vKDjP+9//\\n\",\n       \"fkmd3cE//MM/jHK+ZRu/yeSRtYL58E//9E+DnO8nf/InJUnf//73Zz6HvTBfvv71r0uSfvzHf1yS\\n\",\n       \"tGXLFknSjh07JEnf/e53JUlPecpTJHXz/+1vf7s+9rGPSZKe/OQnS5L+4z/+Q5L0ne98Z+acBx98\\n\",\n       \"sCTpMY95RJTHFp/2tKdJkv7rv/5LkvTf//3fevjhh/fMOfrkta99rSTpwgsvlCT94Ac/mDn+qaee\\n\",\n       \"Kkl6/OMfL0n62te+NvP3zZs3S5L+9V//daZ9zIWf+qmfkiQddNBBkqRXv/rVkpbHVg499FBJ0v33\\n\",\n       \"31913KOPPlqS9I//+I+Sun5b1Fz44Ac/KKmzE+yGcfmxH3vkJ5I1q3QtZ9yf8YxnSJLe9KY3zZw3\\n\",\n       \"BbbMb92//Mu/FJ3PGas/n/SkJ0nqro9xXNa1DGjvf/7nf868zxqS6+fTTz9dl19+efLvYzzau1rS\\n\",\n       \"sZPJ5OjJZPI4Sf9b0l+NcJ4gCIIgCIJROeOMM/b598EVqel0+vBkMnmLpC9JeqykP01l7HE3iEfK\\n\",\n       \"3S53uShSeI14ibziFfH67W9/W5J09913N7Wdu1ZXpGgnlHoJQytRgEdeC94VXhLXgfc1FHh1P/Mz\\n\",\n       \"PyNJ2r17t6ROiTjkkEMktY9TK3h7KJY5sDOUkVawK+wYxXVZoZ0pO0epq1WqXImC++67T5L0rGc9\\n\",\n       \"S1KnQNFvqBZu97SPdUN6REGSurFL9fWDDz645vvMfY4D9MkTnvCENd93+L7PLdYS2sc1uSpLO579\\n\",\n       \"7GevefwU2Dh90qqu56hVooA1yBW8ZcHHi98g1rBSUD5q+4m1CQXU5yBK1aLWELL8U08Rlg3mJwov\\n\",\n       \"9whw+OGHS5J27tzZ6zxjPNrTdDr9oqQvjnHsIAiCIAiCZWGUG6lSuPvmFSWKWCfuyv/t3/5NUncX\\n\",\n       \"zudQPoh3cC+yFvd4USJQvPDuWp9bLwo8e7yZlDc+FMQSffOb35TU9RvjefPNdSXFHve4x0nqvPhW\\n\",\n       \"UFKwnxR4W0N5Xf5cftnJKXatMVMpiJ0iHggvkvn30EMPrfk99zalzmNmrUBBwnawQVQw5+lPf7qk\\n\",\n       \"1WsBSgVrjp8PiKHCZom9AdYoYrJStuEqZimMHe3g/Lzfd43sC4rZskA8YA7GHVvjN4ffpptuumnQ\\n\",\n       \"dt177yNhxaxZzI1aZWxomFfLNo4psHdXoqCvEgWxRUwQBEEQBEEjC1OknvrUp+7xunhuzvNoFCky\\n\",\n       \"WdyrA+42ed7uGQ99QZG49dZbBzneosDrnZcygpf2hS98QVL/bLe+SlTf4+C1Dr0LwKLwzDReUS3w\\n\",\n       \"glFn+Dv2UzuexOtgFw7qzJ133impU7w88w51hnnv81/qYpBcUeK7rDkoVb62cE7/Pm1yG3B1mr7y\\n\",\n       
\"46JQpfrAQQFrVUUZO7LNWEtzcZs5BYy/YyOpuLcUjPGyUBprxDjzes89j5RKHEuZ4bcQuxtaBT5Q\\n\",\n       \"GOppRo5QpIIgCIIgCBpZmCL1tKc9bY8HjCKFF1cag0RGBLEUxFrUZlnN6651WcllgtQqfHhpjIcr\\n\",\n       \"GNSSmbd3irKEfeS8fZQbvPBcbFXp+RetbOUyppgHeMWoPLXxOpBSlAGl69prr505L3B+7AnFGvZe\\n\",\n       \"L1CaUKs8Zgr1C2WKTEEUhrvuumufbfUsOI854lpoBxmyrHGo26UKQ2vGKH3I3CXOM6dIpWKomNPY\\n\",\n       \"AGtmLestXhBSGb+lGcCLorROUi1jrWHEgnndt1YYN35zXM1mLSlVilOEIhUEQRAEQdDIwhSphx9+\\n\",\n       \"eM9drddUqYW7Sry+2rgCvDZiOWhXaa0Tz+RZb9Bf3L2jnODV4s2WwnG85g7/X1R/cZ2lmTqeOeUx\\n\",\n       \"OrV4LOBQDK10oRowH3OKEtfF+d1LL+0v1Be+z/Hwpkvshjnr8V+AMkX21d41qFrg2v082A5ZgsQ8\\n\",\n       \"1Y6917ArBVvH06YvW2uAeYb1stdCGxrfRWPoXQ5ScD5q7zGOrXW8hmLo7E/WMObTUGCnvi4w//nt\\n\",\n       \"D0UqCIIgCIJgQSxMkdr7jpZ/lyoFDnELeA21z+G5GyXTBeWEOADuZvEm/W586MrgrXh2VKmCkvOS\\n\",\n       \"a/vTFQmPNUrV8Bmb2usgIwm77OuFedxOLbTDlb7DDjtMknTbbbf1aF0H7cspUd6u1HWVKma58Snp\\n\",\n       \"t1TNKQebv+WWW4o+D9gyoJLh6fJ/roX3iU0q9Xx9DaqFdriCNFSWWalt7C+MVSE+BQoUiiYKZ23s\\n\",\n       \"0LLXPOQ3wffNHBoUWN9dYqiYvVCkgiAIgiAIGlmYIvXEJz5xj1fIXSiKUqlXCTxX5W6z9nmn78/l\\n\",\n       \"MUM8R8ULW9bME2JM8F6GqsxNlV3IKQzErJGBQcYECk8uM2pZGDqLs2+GD/09VHwP9I3dytkZ7cwp\\n\",\n       \"tzllOjWvW5TsnGefqjPlMRyuUqIEoWBwzbUZj4xF65gwB4lvZA1j7z7WsNbYkPWy11qO1Dg7ubk7\\n\",\n       \"dHYc9kI/kzG8KDV/bIjda1Vgc7AGcR7mQ+29RopQpIIgCIIgCBpZaIyU7wPFXX1tFhL7HuENuoKS\\n\",\n       \"Ay8TBcoVJ7yDZa8Zgnc8tGKGogS5ccHL9XEl06W0jlMwC3ZI7aPWHeYd98qHzirk+DlFiutDySQ2\\n\",\n       \"MZch1hJzRjyk16RK7ZEHbrOeoco1sLcXc+e4446T1GVC5pQLxqB1LrMGHn/88ZK6NY692vpmKe0v\\n\",\n       \"DBXrNXScLPXO+C0cqq7SsuG/WWNVcOc8zFfm11BPHUKRCoIgCIIgaGRhitSPfvSjPZ4msTN4lsT4\\n\",\n       \"lHoLmzdvnvn+3XffPfN3vMxU1t0xxxwjqfMq8fC5O172DBWe9+L14mXzXBgFaNeuXZK668JLJeaG\\n\",\n       \"jKQtW7ZI6u7et27d2tQuvDQyT+hfKlQP9Xz6QMG9N9SNvl4cKgoxiihBQ9WKKVWMUC5RllFNmLct\\n\",\n       \"CqZnsrIWEDvE3/H8yYCkAjlzadOmTZJWq7O0iePQl1wzfeg11EpjaRiTVpiDVFhvrdW3rIxVm62U\\n\",\n       
\"VMXzvngNtf1dvR9bkfK9K7GboeqhhSIVBEEQBEHQyMIUqcc//vF7Yj3wDh988EFJ5QoQ3tpRRx0l\\n\",\n       \"qfM6N27cOPM5IvVz3hiZNuvtefSLXvQiSV3tEbLiPG6Du3IULBQAvB+8VvqBceB7L33pSyWt9vLx\\n\",\n       \"sj1OgLt9+p0Yqb571rXC9aaqXq8XiJ9ozRBC8c15gXyOfmN8GX+PL0qR86Y5/uGHHy6pU1C9HprX\\n\",\n       \"8sG+mf97g4rKsTkWawavXDtqLrFTN9xww8zxvC3edo8H5Dh4vrfffrukTn2nD/mex0Lx977xb5yP\\n\",\n       \"623du29ZYTywVfpzXgrV2HGzpbtrwHod37HH64gjjpC0evcSfotST0d4mpIjFKkgCIIgCIJGFqZI\\n\",\n       \"HXXUUXuUDOIP8LBRRvDiUDT87v+MM86QJJ188smS0tVnPQaEOAnuTvFs8Zz5HMfzGhSeAeD1fHif\\n\",\n       \"bMKUV+k7qrc+B6c9O3bskNRVa/a9y1CIyIri7yiB9DPtQLHh+yhSfJ/+IbaKzxNvwveouN16fcTA\\n\",\n       \"EbdSC94qit0999wz8/exqumORa2X2orHFjK+td4jmWOuoKHW8OpVw71+G9/nFa9yLS98586da7aF\\n\",\n       \"c51wwgmSOvUadZysNo+dYA6jVqMCP//5z5ckbd++febzvrs8/yf+0FXdb33rW2tee25PN/oMtZC5\\n\",\n       \"t2HDhpnrYo058cQTZ47vKv16UTQYR35DeMUmSustedwmYPuuLKIGk5XJ2kGcqSsYrbt1pKBdKJ/Y\\n\",\n       \"F9dRuy9qLZwnVfORGEMUVtZ8vsf4+Pyin7Bn1mz6l/7n/VJ8f07/baJdqUoBpdmYoUgFQRAEQRA0\\n\",\n       \"Mhlqx/iqk04m05WVlbmfNwiCIAiCoJaVlRVNp9M1JcZQpIIgCIIgCBpZWIzUPBQpzvGJT3xCUvd8\\n\",\n       \"e+zzffzjH5fUPUd3+tY+4bnuOeecI0l63/veJ6mLATryyCMldc/9/+7v/k5SF4/Bc3xinq644gpJ\\n\",\n       \"3fNunvezLxfxFb/0S78kSbr00kslSVddddXM5zi+x7oQ08L18nza3yd+hBim3/qt35KUthVq+hC7\\n\",\n       \"47Fo/J2YolS2IHEQb33rWyVJH/3oRyV1sTmMV+sO8B5bx/ne8IY3SJIuuugiSeV1tYhvIc6AOAL6\\n\",\n       \"PZVNRz/+3u/9nqQu3mKoWiqp833oQx+aaR/zglg78GxZz7BJtRO7f/Ob3zyXdUXqru3888+X1MWE\\n\",\n       \"EJPEXPAYi+c85zmSujH8+7//+5m/00fElnAczpe7PuZqaw0w2vXOd75TknThhRdKWp2VyJwg69Ez\\n\",\n       \"P4lNYcy8PVwn/fXa175WknTeeedJ6jI4mYOpKvesZbTb4x+Zex4HedZZZ0mSLr74YkldXOlYMG6X\\n\",\n       \"XXaZpK4/6Rdsn3hS+pkYPNZKYtpYoz0Dl+t8+ctfLkn68Ic/LGl1HC52uW3btpnzsaYDazL9S0Y4\\n\",\n       \"40H86plnnjlznaUw/rSLLNMcpfNhKLLzbi6tCIIgCIIg2A9ZmCI1T0rvcocCL8q9BTJi8MTxKsj4\\n\",\n       \"QWniNdVur7OFV4MHj0KEJ+/1kvBqPv/5z695fNpF5o8redddd93McfFSoLVieSrTKgXeaaruF+3O\\n\",\n       
\"7VfmmUpD17lyb90zeehv/3vqulIZL2TS4L3yvl+/Z3GODefx8+XqtZXuB5dSfhcBfZvK9iE7DwXA\\n\",\n       \"Ye6jFtbuicf3U9/LKVY+Rp4hWlq7LJftxBrmY+d1obDdlK3magOidqOoeEzwHXfcsc/vDw1rC+NP\\n\",\n       \"+31/WPr5y1/+8prHwT7YlYPsTq4XRSpXKdyz8ADV/JRTTpHUZYT72tiq0sP+srtFKFJBEARBEASN\\n\",\n       \"HBCKVF/PO1VjIodXQeY5MJ441VYdf56Pt8Dzfq+R4u3Dy8BbQPEovfun7g/xB+4t4d3hdQ613xXP\\n\",\n       \"y0trd+CtpsYlp0SBK3zEb3gtFNQG+hslq7ZCOv0F2CfHqbVX+h1lEG8XhbK1Uj9e71gV4LFnvN87\\n\",\n       \"77yz6Tjen/PEbazU5lzZYayY6yhGtR57SonCJrDZRe0uAPST95fbmlfXr63mz9qQquI/7z3siF/k\\n\",\n       \"t4FXxj+3pgFrEGs036u1FxRH1jziSn1/1FQtRGoQHuiEIhUEQRAEQdDIAaFI5SBmyTMkeO7sVYcd\\n\",\n       \"r2yei8nCg8b7QjGiKi3PowFlKfU8Gu+FSvAoRnineCm0E+8jlcXI5/m7x0B5jA/XQzta959yZSYH\\n\",\n       \"19E3Bs6VQ9+3y71WvHrGy+3FvUk/vseneKZRX7wCeCtkfaKS0M6hFCrUAewVhap2B/gWJRT1k6wk\\n\",\n       \"bJ051EpOTcVmsS3mKDbHXCrdx7CUUltAgVg0jAdj2/pUge+hOrfWTeQpwQte8IKZ97/2ta9VHQdl\\n\",\n       \"h9iwVhWftZynE6i5tbs0+K4FrGlkXe7atUtSejeFee1puOyEIhUEQRAEQdDIulSkWp+Xp8ATfvGL\\n\",\n       \"XyypixG6++67JUm/+Zu/uc/v1+6n5HuI5eoc1YJy5cfD+80pRng3KBKuqLgXwvN5YmroT49PyHmV\\n\",\n       \"peOJt4SqgBLUag+MB9BPqeNhf1wn8QTUM7r55pslra4VAx6TVYvvBVlLaa0hYqu4Dq43lylVyyLi\\n\",\n       \"LFBdU7EzraTiBelzbBdVk88xV1GEcjaCIoGSgKJVGl+YAhteNPRTrvZfqYrZdwcP5gD2wh6NtVAf\\n\",\n       \"C3tojQtE2WRfWhQur+GXm+usdb6G83/iY3l/rJiyVjW6Fp7+0E/Mu777l4YiFQRBEARB0Mi6UKRQ\\n\",\n       \"APDSWmNwUlBN9pJLLpHUKRyld9+1XiDxD0PHQbhSk/K2c5lFeDcHH3ywpM7rAVe68PZ4xcum//oq\\n\",\n       \"MA7eFd477WtVpDzGLffc33eGpz1cJ3EKKTv1/q/NjuPzrdRWvd60aZOkbnz5/tD2uwiG9oA9PtFV\\n\",\n       \"yVT2k7cH20q1DxtADcfTRhFzDxsFLLemeQZkzpbHotTGx1YwnKuvvnrmtRbicfvWX+K62cWCNd+V\\n\",\n       \"t9xcR13GXvmt5beEtQrFqK/imWJe40j/8Bs21FOtUKSCIAiCIAgaWReKFDEo3DW31sXJwb5XPBcm\\n\",\n       \"g2Fez2+B5++1GUQoK8Q20V7iQWq9SrzXnHeI14r3g6JTq0SV1qPifBzfK5MD/YD90A/urXsMWE4Z\\n\",\n       \"IksQb4Z+wrvzfvY4BW8v3l+pIoViOjYvetGLJHWZbV/5ylckjadE+T5upbF1ywRj36rg4PFjsylQ\\n\",\n       
\"NLAt+iplQ6XqeqoKfg7Ua2yGuLevf/3r+/xebXzpesczfFOgbPq+mowHc5A1hzXR50qpEondcRx+\\n\",\n       \"a5mLHGdRCuVQjLXLSShSQRAEQRAEjawLRWpsJQpQQjgfGSO5jI++MStOq+LgShBeAxk9tft24YXk\\n\",\n       \"YqpckWpVEEprkjAeHpPkoHDwmlIU/bxe5yv1ef7O/2kH3+eV/uf8vs9arf1wXM6XGp++2X14b9/8\\n\",\n       \"5jclra4nNjT0A/3Fa98aQGuBesucQBkhhqU1k3Ao9Szn8fN3nwt94xFLK7M72Nju3bsllXv+qTHd\\n\",\n       \"X5WqXEYw6vRJJ50kqVPP2ROQOcHaQ6wP4+ZrGYpWSpGi/1lTaB/HoT3MTY43VGzR/kIoUkEQBEEQ\\n\",\n       \"BI2sC0VqbCUKiDfAG+KuO+cdDl0NuPU5rtf6wGtprf2BIpW7flcOxoLrw4tifIgnwXvieT/qAOpC\\n\",\n       \"yo7cC+c4nIfzclyvSo0yRH973SX6EUXKvfBUjFcK7DOXkVOrQDo33XSTpOEV1xT0uyttKHh9M532\\n\",\n       \"Bpsl4/Owww6T1HnirYoUa0hrDImruzn4HLbZ9/ytVeuxtZ07d0pavRaU1seCVmUsxdj7RpbC+KT6\\n\",\n       \"gXGkvpTvvUfcp8fEpdTu2srj/nlXBkOJWptQpIIgCIIgCBpZF4pUaSXmvuAl+M7cufN61te8wbv2\\n\",\n       \"eAnaxfPvWmWqVNGif/pWvc31N+/j/RIfcMwxx0jqnt/j1ZXGmrmiSP9xPq+h4nWz+D7KEt4vqoB7\\n\",\n       \"wV6Lxe0nlxlT2s9DZdb4XopjVTdGcUJhxK49q3EItYLYE86BeknGbilUSOZ4biu1CgiVr1HKSqvI\\n\",\n       \"Y6t9Y4v6qo/MBc/gre0HxrwvrAlkQteO79CUPr2g3ti2bdskdYoTawP9g/1SR4y4SCjt95Ry5ZX3\\n\",\n       \"g7UJRSoIgiAIgqCRdaFIja1EAYoC3iAeODtsl4JXWJplRHVavJDa59ooBMQA8Rwbz71VmfjWt75V\\n\",\n       \"9Dm8vr5Ze16d2fH9yfi/V1SvBa8O6E/6LbXfF+P87Gc/e+b/xNekYpRc0fHxwWtNVRGed12zvrFW\\n\",\n       \"pRAb6NWaURaHjJvBVqnizzXW9ik2iyLVd88u1ppcXagUXlm9Ft93shbGkLlQ2x5i1Y466qiZ76G0\\n\",\n       \"UFuvdK670rjommSl52dNIosUpXDXrl2SurXBlSJX4b2fUmsHShfZrNgf/08pUot+GrMsRC8EQRAE\\n\",\n       \"QRA0si4Uqb5wd5/zaPGCUARQCnK1OPzuvrbeDRW3W3HFbmwFwesTkSlCv9HPuevyqrup7DXiATgP\\n\",\n       \"44JyRe2aVMXynKLpXn+pt0ssFf2AHdQqgK545vazmvf+YvMG9QB7GLt+ldTep65W9lXPUQZq49Bq\\n\",\n       \"VfAUHmPTCkrKEUccIUm64YYbJOWvi7nAdaDSe9xhLWNVtC7djQFKM9D5zcK+uO7c2pCzP+zLsxj5\\n\",\n       \"nmfGomCldjMYyl6c1NpN1uKyEYpUEARBEARBI+tSkfL9hxzPXMl5C9xVc/eL94K3wXPilNfqd814\\n\",\n       \"0p5JgxdAe8hO4nPs+I6yc+211655Pry9FHiBeBsoNbkaLig/fJ92UE8I3KtEeUp5S3hXjBvepitI\\n\",\n       
\"xKs4KFz0E9fF9aDA+fP6Uu/VvS3GBQWMTCrOz3HpH+yEeJlNmzZJ6saTz5dWIOd79Cf2gh1yPo6L\\n\",\n       \"vdEeMr2IMfJMJfZFo98YH1dD3F6Yb4wfxydzjb/33YuP73u9sGWEMcd2iZdjTOlj/p5ai7Zu3SpJ\\n\",\n       \"OvHEEyV1awYVrQEbZwx9/0aP78vFWHEcjouN9YX2Y9O1ChtKFCy6/lMK1rTSGmf8BuRipfitod8Y\\n\",\n       \"J9Yi/p5TUvk8c8nVfcaJfWVdVceeDznkkJl2s5Yde+yxM+cjvpjx8qcWnnntYH9kK3q8bC6Gj7+z\\n\",\n       \"NpXGLvKbx/igjD7vec/b5/cgFKkgCIIgCIJGJkPuX1V80slkurKyMvfzBkEQBEEQ1LKysqLpdLpm\\n\",\n       \"obZQpIIgCIIgCBpZWIzUysrKnliQvjEVxHagrhGzcdZZZ0mSPvnJT0rqns/yPJTz3nbbbTPHO/74\\n\",\n       \"42eO5zFCPC/muTXPgX/5l39ZkvSpT31KknTPPfdI6mqj8LyX8/E896d/+qcldc+fL7/88pnzAzEw\\n\",\n       \"xPC88Y1vlCR94QtfmLk+njNzfbfeeuvMcX7u535OUlcb5Otf/7qk7jk2mRFcJzEqZ555piTpox/9\\n\",\n       \"6Mz3UxDLxfPpXNYkz++JC3jXu94lSfrc5z4nqYsv8crzZO1x/C1btkjq+pesL6+VwnN6rvMXfuEX\\n\",\n       \"JD1imzVwncQVlFZU5zy58/G8P1VlmPPnMoI4D/1JLJVnWxLfQDVo4hW4Lvob+yL+h7gg+uHss8+W\\n\",\n       \"1NkLxycuw/e0zMH4EdPFK/Pvd3/3d/dcG5/FBoj94Bqwda+8zdrB32mbz8VXv/rVkqQ//MM/nOkL\\n\",\n       \"YK5jW8ReUGOMdjGXicnwWBtqar3hDW+QVG6bHjNVC+f52Mc+JqmL1yNuDttgDrF2nXHGGZK6fr3+\\n\",\n       \"+usldXGnHhvz0pe+VFK3Buau7+ijj5bUjZfHUtGvnD91/ZznvPPOk7R6fGuzA1m7fL9TjvPOd75z\\n\",\n       \"5rxj49eXis3LZXv6fquprMDStWwoas/HvCZ+lbhe5m1uDcqdJxSpIAiCIAiCRhaatddXiQL3HjyT\\n\",\n       \"wbOQ8IpStUV27Nixz/PhfeGluZeFEsV5yZryrDKO88UvfnGf5wPa71lYKEh4Q7SLjCIHRcGvH+8Y\\n\",\n       \"L5rve/aW/z9VSyWnkHA+2kG7vTYJ3gI7opNJgpfh2YK58cuBAoTXgqKIt+sKJioC7UlRW3MGuP5U\\n\",\n       \"xffa45Ghg3eJXaKWYJccl/GgqvLdd98tqbMPvHHswpVHlGf6CTtFCaNadS5ek7/zfcZp7/Nt3759\\n\",\n       \"n8cArtVJVbFPwbVhw/QpfcKYsSaRRcVrTq2trW+FwnD66adLki699NKZv9fuW5pSVxkz5ytf+Yqk\\n\",\n       \"bg1hLUzxt3/7t5I6RSoHay39iirJ0wHWQmw1p8i17obgsAbl6jzNm9z1oc6ndqXAnua1u0gtXg8r\\n\",\n       \"BfPMsxGHygwORSoIgiAIgqCRdVlHKocrVNxtcteKUuJ367m4Au5+8Sa5W0/VP4K+Xo/H9ngcBbVm\\n\",\n       \"uD6vO+R4zBdQAwQFguN5hXL3WmqvD2+S6/Dve/+j9PA92ofXu3Pnzqrze2ybK0mujOCNpeyC/sjV\\n\",\n       
\"hmm1g9xxa5VdVA68e8bZz4MX9+Uvf3nN46CioMpwXFdXOD6qDSqC16WC1E4EXueN+bi3IpyrMeek\\n\",\n       \"5jxzgT5g7FwVZC1xj51roDI07UGF5bx+Le5Zp1TIFPQlaxKqndcPKlXuaum75yAqZyr+0mvtHXPM\\n\",\n       \"MZK68UMtLR1/joetuaLEWp/a2461hDVyWZWbFOzmgZ3wW+LzIqcsLora/iZmD4baKzAUqSAIgiAI\\n\",\n       \"gkb2S0XKIXaDu9dU7E7uebrHIHnGSulxakH5Sj1/53y0A28il1Xnx8fLw7tKVQwvxTOfAG89dT2u\\n\",\n       \"KOI1EttDe1r3KMTLTMXmuBKC4pPaC3C9QYaYZxbVgtqCaoJy6jBe9DN26vOE+UWcSyqOCdZS+GgT\\n\",\n       \"yg7nTHmu3gbmAmuGn8PVy1xMDHOAa+OVjEnahbrq8Xetc88VJ2zXK6UvG6jorF2sSfQj40mcKP3P\\n\",\n       \"+/RXKh7RK2P73nMcB7sgSxDIACZbkOOn7Cs1J5YF+ps1lrnJ2r/s9tL3aY/HibYqiqFIBUEQBEEQ\\n\",\n       \"NHJAKFK1d5l4b9yt87wfZQUvib/zfDlH7V0v3gxxA/zfY2LwkvG2UGpK93/CC8ML59X3PKsltbdf\\n\",\n       \"zot3lQCvCIWidL+qFB6/4ePhChoKVa4OVoq+NX3AY4daswCxC15bFSlUFD+u42oC7fV2oyLllChY\\n\",\n       \"67qZI7y27tGGR048Hm1n7gOxVK5yY+Oofygb3me0D5vO1fXJkfq+11saitqYtByeJcgaQowXcxNF\\n\",\n       \"z6FOEMdxVd7bic26gskrdoRig10QM5SrGdc6t+YFa5//pmEvqT3x9jeYf7n9aFOEIhUEQRAEQdDI\\n\",\n       \"AaFI4aV59VnP5vNYjtIdo/1zKFTc7aMYEfuBl5rzvPFm+D4ZInj4wPVAqxeOF0LdJK6r9XhD1WjB\\n\",\n       \"q3RFiHH+xp14AAAgAElEQVStBS+S/vXMqKH3n3RvtlQpdPz6W/sXe2ptB5ClR+waipnjStRQrKVs\\n\",\n       \"+i4Htfj3+D/n8mtIxcB4XSEUFOrXAAoJcXv0JWtEbfYbawEqqKurKDYei9XKUEoUpBQB1iBXBJ3a\\n\",\n       \"uM6cAnHVVVdJ6tbEeTGUspoDe2Yt5P/Yfavqvd5oVaIgFKkgCIIgCIJGFqpI9Y2UL8U9Yrw1zotS\\n\",\n       \"xJ5lvJ/y2vAuiXvwu1m8CN+fCAWFu3zPlkNx8P7Aq8XbJDMJ3Gv1/ZHcm0CZc4+eOA9qi/jed4vC\\n\",\n       \"FSNX4Oiv0ngEavfQ7368oeE8QylBfaGfXJGr9TqJXeN7qZ0C+saJpGLM1jquZ131xRUdn+ullceZ\\n\",\n       \"a6wprqYSi+VV/Wvj8vh8SpFDHaWeVSrWaNmg33N1tRgPVPtUTTLIxaShuLEbxLxinnyt90zXoZRA\\n\",\n       \"lE/OR//Rz8QA8lRgLGVsvROKVBAEQRAEQSMLVaS8wvFQpCp649V4LAfvczde6nUQU+TKDs/p8dT9\\n\",\n       \"eb1nw3E+PG/3nvD08Qq8v4j58XiOFKnYIo8vQUnx2itj44ob4+l78Xkl91I78npGOUWxLyh9eHeL\\n\",\n       \"xq+3b/wD9pfy+hkv79ecGkC2Kv3mCvFaGWOpulBDx71B695q3h72kRxKHU3t2sBalIs1WlZyCh0K\\n\",\n       
\"IjaW+3zOLlh7fM0Ym9LdEvrC3EWBwk68Nhz9EIrU2oQiFQRBEARB0MhCFamx7ra99gVeqSsuvI/3\\n\",\n       \"ltszz6H9HqPE+65IuVcIubt8YoCIrfG4DLzw0kwV2ufeOtmGeHPEjs0b9xLxjvDWUSL4P9ddak/Y\\n\",\n       \"Acd1BTNVp4rP1dZW4TxD7evUl9Z6WClQjrhO1BVAAeb6XYFN2T/KZKq92Cv7q0mr50ZfJYq4Sc/s\\n\",\n       \"HRqO3zd7CPy66StioqgMvuy4eo4tsSam+muo35YDpY4SCiXZidg5sWGp+MfgEZZjZQ+CIAiCIFiH\\n\",\n       \"HBB1pFASPFsJjxiFojSbyrOwPJbGs4zwmjyTpDS+guw5lBhvJ++jCODduuLmx6PdfN5joYhtGcpL\\n\",\n       \"LiW3pyCkFKUcrqB4XS7+jldLv7DTPP2/txKyL5773OdK6vrdK9PPG+zX7QgFkvmA3aRqGdFP1Dej\\n\",\n       \"n8gEAs8exdtlHvi8hFzdrbWyfVtrizmMFXGJ2N5Qx3foO64ZRSlVmytHqrI5WXvrBc/CxBbGzvQ+\\n\",\n       \"0GANZC6zBq/XWLp5E4pUEARBEARBIweEIkXMi++71Johg0JDnEFqh3i8y1QF7VJQMPBOPaaE/btQ\\n\",\n       \"Djgfn8/tYE9WGechjoL9pOZd1da9IK6PSvBcJ16pK0o5vHYOx33Zy14mSTrppJMkdfaBN0ycCf1X\\n\",\n       \"qkihAnAc1I5FQUV+VyRdkeJ9FL8HHnhA0uqYNY7jtW6AcfJ9zDx7Dzgfiih7RzprxfnUxsZ43Sbm\\n\",\n       \"bEoVHSv7DxXb48ZOPvnkQc+zLKoovPjFL676fOlaRC08YuboT/q3FuYMleF37dolqVP61rtCRsV9\\n\",\n       \"FClio1gLcvW49hf4LaHOWum4hiIVBEEQBEHQyEIVKerD5HbQrgXvAVyJAryUofeSc28PL7ZvpgyK\\n\",\n       \"gMNxvQpzynugfdx9076hvFS8/NYK3q6Y3XXXXZLStVX6tttVBpQmzz7jemqrZl966aWSVo/Porj9\\n\",\n       \"9tslrY59Km0fdo+Sl9szEsXTSWVE4QWnvrcvahWjvlXmh6pTRTtQ6VD7atVWBwXG91RzUAFRXcH3\\n\",\n       \"mPP4t1TVeYc1wWsH+lMBdotgDm7YsEFSlzXmCpPjew1yPaiobnPYGp8nUxTbo19OPfXUmetgLeb7\\n\",\n       \"i1KkaA/ji6IEXA/jzvVgt/yf6+e3kjWX/kbJ5HyM96Iyu8eCNbB2rQ5FKgiCIAiCoJGFKVKbNm3a\\n\",\n       \"8xzSlRG8Ct8jD68C743sH+6WeX+sCtJrVVKWVu8Af8IJJ0iStm/fLqlThvCOuPsn1or/c7fPdXNd\\n\",\n       \"XD9emHupubtn+pP+pr2eDUhMVN+K3ng7HBfvhXbnapK4d5+KkRmLW2+9dc33S/dVc5ZFiYJ5x7zh\\n\",\n       \"tXp2H9A/zBPsx9Uir77McYZQUpkjter0aaedJqlTL135Qamgral9IcmA9DUwF1fJGsjxfb9OV4q8\\n\",\n       \"r+hL1gKPIcopWW5LrC2swShA3q++n+npp58uafW+oqimOSUK+Htqn1S/Ps8Y9ePT/zfeeKOkrn+p\\n\",\n       \"OTjvfUjpX8bba7O5vdBvXsuN3yLexy5QAokj9fHn+D5nD3RCkQqCIAiCIGhkMlYWyj5POplMV1ZW\\n\",\n       
\"5n7eIAiCIAiCWlZWVjSdTtcsJBeKVBAEQRAEQSMLi5H67Gc/u+q5K7ERPH8le4vYHp5H89zdY6d4\\n\",\n       \"rs5zX56759QvzuvPm2vhPLnz0d5UpfIctPc973mPJOnjH/+4pC52i3gH4hBSewhyvR6XQSwT/cA4\\n\",\n       \"vP71r5ckvfe975353ljQj6XnIxandV+o0vFLQdwB45mrCN/3fLWsl/MRY4g9k2UI2CfrBHZ87rnn\\n\",\n       \"6qKLLpK0OjsL2yZmxGtYbdu2bebvxAsC2U9Ut//FX/xFSd3cY8yJwWGOe40ttwmuhe9h4/z/0EMP\\n\",\n       \"nTnfosaO9qRivEphjnIcYqLe/e53z5xvbDjPFVdcIamLCWLt2Lx5s6Que43Mct5nD0Yypvke/6e/\\n\",\n       \"+NzrXve6mfOODef5yEc+Ikk6/PDDJXVZmNjrddddJ2l13C91pchOZJ5s2bJFUhfvSM2/F77whZKk\\n\",\n       \"P/uzP5O0OuPZYxCZs8cdd5yk7jcqlcHPbzrtOOecc2auc2xy5wlFKgiCIAiCoJGFKVLf/va392QK\\n\",\n       \"cHeaqsXBXTDeGxkZeKR4OSg8pVl7XgMGL2vjxo0zx+PunXo57lV67Q7u/rk+9445Lt9zRYpaKnzf\\n\",\n       \"s8W88jdZbe795vYF8/4mA8MzMfz8846rKz3fUNloVC/Gi/IK5p6FR7YlagXjhr1gt1Q4H3on9daM\\n\",\n       \"sxQomtjBvLP8yHZNZax51uneVchTGZ65SuU33XTTzP8d5oBng2ELqeN7/STWKtYostKwCfqa/7fu\\n\",\n       \"jYeK71X8WxkqO62v7aMMtmbQOihI1MdiLWZcvAZhaX/yG9G3Tllf6G/f75LrSu0E4DULmRc7d+6U\\n\",\n       \"1NmD7wyQmj+prE0UqFz237JnB4YiFQRBEARB0MjCFKknPelJq/aGS3HLLbdI6pQYV4S8pkWq2ipe\\n\",\n       \"IYoQSgGKEV4OihKfwwv1+k08B/dK3HjFqbtovJSUt4KXkPK63IvwPeHw2Gv3HUvhXgbH71tvamhS\\n\",\n       \"9Zo8piYHXirXnfOGGMerrrpqn3/H3nOxUznwyhlv1AeP62kFdaV1b0iH9jLPiNNABUgpXv4+85f4\\n\",\n       \"jVSl+33BNfmxS1VPV3FT32PtYG6iDmOL1I3yueQxIrVzjPOmFCnUbo9hWS8wfuxBeP311/c6HoqN\\n\",\n       \"21Cr4oXSyLj57gHzJrVHnj/VKIX+5zfPK9PX7mWYit91mD+ugJUytJLphCIVBEEQBEHQyMIUqYMP\\n\",\n       \"PniPN4fXlPP83ZPnrhoFiuewxE44HvPhd+t4EVdffbWk7u4aD532epahP//FK+T92pii3N5ljlcz\\n\",\n       \"Tnn4xx9/vCTp2GOPldQpbcSkpPD2L5sSlaO2snhfxShFam+5WvCeUaSGUqIcvL/UXpWlMA+82nZt\\n\",\n       \"7BXzCu/y/vvvnznuvmCu+t5ztbhy4RnFtIXYJl8jeGXNYw1jLXJq1w7aRyyLs16VKCD+cOvWrZK6\\n\",\n       \"eMbdu3dLql+bUopI6xo31L6x7IGHPaHioqCVngeFDLvgt8t34ygFJZVxcGVrKBXbKd3TMQVrRyhS\\n\",\n       \"QRAEQRAES8bCFKmHH3541V55PG8tzXTwWi3crabuirl75pW71JR3yl0w7aR93M1zPrwHQCHjPO71\\n\",\n       
\"0L7nP//5M/9HIcLTLvVGc9l5gLKGMjKW8rLe8PFbVmjnUMpWDuZVX2/Q7SyV4ZaDeep7Oe7tZaZi\\n\",\n       \"oLxWXC30gdsKqh1KgdfJQVHAkydTFwWKsUT14/gcZywPuhbPcF40KDL0F7XHbrjhhoW1aUh8jnst\\n\",\n       \"tVLoH//tap0H/KZ57Uag7lYpqf1rnb4ZyTyVQEFmDRoqjjgUqSAIgiAIgkYWpkh973vf2+NV+A7m\\n\",\n       \"pYoUd5m84jX5823uwt2r5NXPjzfL3arXV/LMBM/m43Pf+c531mw3x+dzRx55pKTubh8v1mt5pCh9\\n\",\n       \"nk98BArZelGk+nrDOSVnXgpPX1Lt9KzNoVSDsdSQ1jgK5jnqDfNw77pxudin1nPzPffkmUOsFcxd\\n\",\n       \"YqBoI2sQyhRznawuj9OE1qyvI444QlIXQ0SdrNr4S1gWJYo1mvHgeuj3oWuqrXeYM/SH23Gt2kw8\\n\",\n       \"JjFbXrOxVuGZV4067ILfAtrNPUjfemuhSAVBEARBEDSyMEVK6ryIoSooc5ed2s+KbB/iE7hLxdOn\\n\",\n       \"flDpebi7xcsEMhpy2UQ333yzpK5yNgoWXkJt/aNS1osSBX29YcZnvShPtXi2amsM0ljQ/9SeIYYQ\\n\",\n       \"xas29grVhuOksnT3hrnoVf9LbYvvuSLFNTAGroqzpnHtfB4lhVpa2GaqRlwtKAe0Y+hq+g4KG2si\\n\",\n       \"a/rQtsjxUaa8Nltr7M+y0XffUODpDnZF/zFevj8oMX/8hqV+K/hN6vub7fHGY0F7mWfUoqOf+ypS\\n\",\n       \"vW6kJpPJnZL+SdKPJP1wOp2eNplMni7p/0g6StKdkv7XdDoddxYHQRAEQRAsgL6K1FTST0+n0+/t\\n\",\n       \"9d45ki6ZTqe/P5lM3vno/8/xL04mk1U7iveFu1u/S+a5MN4nd7/+/LgUvC28H8+aKz1eqgosXihe\\n\",\n       \"19CKFFDlmP7Bi132fY1qSe2/tr8xtDee2wOzFLxcMmZQZ1qzAfGSU/N9X/SNnXEP3f+f2gWAa8UW\\n\",\n       \"uWbmGnMQpYA577sm1EIG8NjQD4z1UUcdJamLzRoKbNFr+gH9t97V59oK4SlSNdwAxdKfDuXqTPlu\\n\",\n       \"Da2MrUQBaxlrBfZRmvGePf4Ax/CWvFLSnz/67z+X9AsDnCMIgiAIgmDpGEKRunQymfxI0iem0+kn\\n\",\n       \"JR08nU5JDXlQ0sFrfnE63XM3SMxS31gY7r7dS+QulFfuTrn7JhYpVdMi5TnzOX+OzfFz2XTc1ZMB\\n\",\n       \"QawH3x9LSeF8VDrnPL6z/f7C/prBg9fo3uSygWrA/MG+W9UDrpf5My+vVlq911cuC9AzcLlmlCav\\n\",\n       \"hedjONZebUOpjQ4xOKiPQ+9xRsYm/cR48Nuxv8RIDRVbxm+azxGvtM/5XO1Ngd30XVsZP377ON7Q\\n\",\n       \"v30+r1g7hooX7nsj9eLpdPrAZDJ5lqRLJpPJzL4E0+l0OplM1rw7+sEPfjAzaGOVlg+CIAiCIGjl\\n\",\n       \"sssu2+ffe929TKfTBx59fWgymXxe0mmSHpxMJodMp9NvTyaTZ0tas5jSE57whD2e6HQ6nal03hc/\\n\",\n       \"jlc894wdqg2XVlkFrx8Fpc9dUzuEj31TibeBklZarypYLvD+c/WjWuMAhlIrUHQ9/sL37SoFRZU4\\n\",\n       
\"ojvuuKP4u9RsI1M2B9k9vLoHnovP8s/TF4wV8ZDEJ/oYjhUfObQSBazpXPdQlaMd+gUlg6capTUI\\n\",\n       \"50XrnnapfktV7k/hvyWeIe+/lfQj7+cUp6HUYNaoeWWU19rlGWecocsvvzz59+Y7l8lk8qTJZPIT\\n\",\n       \"j/77yZJ+VtKNkv5K0q88+rFfkfSF1nMEQRAEQRAsM32kj4Mlff7RO8kfk/T/TqfTv5lMJldL+uxk\\n\",\n       \"MvlVPVr+YK0vP+5xj1u1l5e/lnpjKEl4d17XCc+duARqZ/A9dmpP7Yzud90HHXSQJGnz5s2Suuw3\\n\",\n       \"P19tXADt4fvEAwwNcQt4Ja3VjoNhcCUUO2Z+oFK4iuCxUdgxXjDqACrMonDlldotrd4sSvBacRrs\\n\",\n       \"ucZc5jPM2S1btkjqPG4+554+Y8JaRAyJKwy1c5wxxvNnjUupicRUrRcYC65rrPg1bPvQQw+V1I3n\\n\",\n       \"WDFlrZTuOuGk7KG2bpN/PleXKhevyNrk++S2wnxgvg2VRTdvmm+kptPpbkknr/H+9yS9rE+jgiAI\\n\",\n       \"giAI1gMLi/DeO8AcD7M0281BaeJu2WOXNm7cKKnzhN3L47l6qnYHx+Pumbt8drrmLp59rWqzkHxX\\n\",\n       \"e+Ilhob+Zp8hlIxlz2qjJg39zXjQb4zr4YcfLqnz5tgJnnHFvlAX6G/PxJo3xPyQNYlawbgwTrQ3\\n\",\n       \"FVvEvlFObQzS0KB4Ms/6Zszs2LFD0tqKLTbuajbKEx45NoIt+ZxlrpfGUqXwjF/WNo/RwCPHlrHR\\n\",\n       \"5zznOU3nZU3kuOyvWRtjUwpziz3+xs6e8+MzV5ZlT8ChQElFCaUCd6nShxrN2sDa4hXhsUevZ4ad\\n\",\n       \"oPwdcsghM99zBal0Nw6+x5rN/8dWFD07kH5kTfV+LmX/yBUNgiAIgiBYAAtTpL7//e/vuRskzqA0\\n\",\n       \"U8DBe+Mu+Pbbb5ckvfSlL5XUecDcleO1cFfN3afXmvA4BsCr5Lju3ZVmxHD3y10+xxvLq+Kum+te\\n\",\n       \"L7FRKFHg44G37ePhrw797HEuZGihXgydeYTqANgLXhneIN4TXiLfQ1HcufORaiOMo8fYcVxip1K4\\n\",\n       \"SoHdp+pSEWNHP7FHJAoRChvkqn/Xwnxfa1x37969z+8ypq5GMjdSqh54jJR7sq6m+155jAlKFZ/3\\n\",\n       \"eDiuLaWIMcaslXyedqBIMVc4H/GhjFlOmeJ41IXymC6uH9Uf26SfUZNRGlAssK2cAsH4eN0r5jyv\\n\",\n       \"zrLuO1mL27rXYmPupn4zfQ9CPs9awTj69xln7Jd+R7nlNVfp38FusSvf+7K1Ij1rlu+aQHtY41P2\\n\",\n       \"gJLK2sY9QWmccihSQRAEQRAEjUwW8Ux5MplMV1ZW5n7eIAiCIAiCWlZWVjSdTtdMKwxFKgiCIAiC\\n\",\n       \"oJGFxUidd955e56vE8NBfEJtxDyR/8SW8Fz0jW98o6RH7iTnAef59Kc/LUm69957Z9ozFDzvPvfc\\n\",\n       \"c2fOC8QpEF9AxhLfe/nLXy6pi0+45ZZbJK2uguvPlTnPH/3RH0nqnh8T30H8BfEJjCvPvXkejgrK\\n\",\n       \"cX2fMZ5vv+Md71jz+saC83zhC4/UkGXcvF/oT/qXzBjiCog/IQ6A6+T5O3EOv/EbvzFz3rHhPO9/\\n\",\n       
\"DNxC9f0TKRtrzMaNGyV1FjAKF8ejDqL2Uf71GqlaCyrid77zHUldBGiJeb89cEoR02ODAhWtxXyP\\n\",\n       \"/+JYUZbMWZ+70Xzoy9hKFPda7tW15/f29ZyI/gyAb59T+9Zh156tSZIkSZIkC2RdK1JYoHwOBauQ\\n\",\n       \"9/dY4K2K1MEHHywpVrR8/6bIquDpGsXMd8bmaXionwRWR22G8hK0Y+RzRX08Ooz375TDo7+wyrEW\\n\",\n       \"+tIa6eKgiKBgok7U9gOqAnvd1UYEoeB5ni3GSeSfgRVPpA/qC34CtWpAidKO94z7vv4wzEeU5tr8\\n\",\n       \"Zcuh7jfccIOknRUHouDwlaDNaUPyH23fvl2SdOutt86c12GuUmeuR5/R9vgDMgfYjxBlijnBWsLc\\n\",\n       \"97akvK1r1lpDvZj7UXm9j0tK1FrVn3HCZ19lyPNB4ZNUejvgmb25PhGx+H8yTvh76e3IvBl71wLq\\n\",\n       \"HSlRtddjPDIfeQZgvkbKnPv1RqQilSRJkiRJ0pPJPLMMhxedTKZLS0trft0kSZIkSZJWlpaWNJ1O\\n\",\n       \"V0zslopUkiRJkiRJTxbmI7Vly5a5R5GherWqX30jR7jOxz/+cUnSEUccIanzh2BPQCJ3UAM9TxZR\\n\",\n       \"iUQaHHXUUZK69+J8/8IXvnDmuvOG62zZskVS1068X478CPDzKPU3uXbwkXrta187c9154+MF/w38\\n\",\n       \"DtxHiXxmHtmFrxftQb3dZ47rXH755TPn5/29Zzl2fwgH/xHKjf8OfgSvfvWrZ647b7w98Tcgoo6o\\n\",\n       \"PNoNfxLqQT3xLWMc4aPH+MN/4qUvfak++MEPSurakDmEfxdtzLWYgz52KSu+Kvh/Uebzzjtvpm6U\\n\",\n       \"2ffPxO/LlX8ifvGZoo6MfepKOc844wxJ0kc+8hFJ3ZjAn44x6BGtlJ928DXtwAMPlNTNPcrzspe9\\n\",\n       \"bKZ+DrtMuN8c5yGjOf6FDzzwwIrnYW6dffbZkqRPfvKTkroM89T/2muvXfH3jAHagesx5slJxnX4\\n\",\n       \"/vTTT5fUrdX0F+ehv/j3vffeK6nrJ9qT69Bv7h+Ln+Sf/MmfSJLe/e53S+rGvPtNEj3KXMdHDxjX\\n\",\n       \"RNJGEbH027ve9a4VrwMedVqC431OnnvuuTPXnTdc56Mf/aikbp65LxXjm/nt45A9IPkdfsrR9SJS\\n\",\n       \"kUqSJEmSJOnJwhSp1faQQqG56aabVvz+tNNOk9Ttw+MZzrFS+oL14tYbVhLRXNHTPNYsShJWDVYG\\n\",\n       \"VjHWH1FmWEEoGChZlAcrb9F5kSgnESi0Q6SU1CqPtMvY2Zz7Qr9G0XIog05r9miUI37HuKO/jznm\\n\",\n       \"GEnSd7/7XUk7t/Pzn/98SV0uFPI9cV7Ug1ZardUIVAUUNuanR30yv1B1OJ72oB/4HeddPt+ZK1ig\\n\",\n       \"rDMeHYYl7Tm1UBM9whTFIVJdUShQplA+XImirFF+GspLn3kUluezQoGKLOlS3h2iFPfff/+Z65eI\\n\",\n       \"IjgZqygn3/ve91Y9DwoOEPH7rGc9a+bvnCfKLE++MId2RD3mrQBwDwEfD56HDHwcMFY9mpB/o0iV\\n\",\n       \"3nIwd6P+ZO4zzko52kpraevcHroW9IW57vVhLYjqGSlRwL166L69qUglSZIkSZL0ZGGK1GpP5pES\\n\",\n       
\"BTzlRxmWh2ZX9ZwcWMhu1UZP51g199xzj6ROWUJx4Om4lJuEcmCdYAUuWpHyTNj4RcDQ7MRjZ9td\\n\",\n       \"73h/oryQ84R+Z7wzLk8++WRJ0qGHHipJ+sY3vjFzHsZha14tfPFQZ+6+++6Z87WCMobVGJ0HpRdQ\\n\",\n       \"csH3AVvpfCgHWOoc4yqhl8H9yfieTxSAKMqZMUtbu4p52GGHSeqyt7MGoHigUHEe5pjXmTWIseD+\\n\",\n       \"Za35jrDIGVNDM32TybwWV1epj2firs2O7zB3PHdgLb6LhStxGzZskNQpkr4PJeOmFp8DDsoaKv+i\\n\",\n       \"84mtVQb6kuIUESlRwHweqrSlIpUkSZIkSdKTdZ3ZPGLr1q2S4gzYY+1PRGQM1kytUsLTsysBWA+t\\n\",\n       \"5fOIm/XiQ4RV5L4uRNthfROJgz9GMotH+rgqEvlo4R9Tux9ULSiMRNc997nPldTtjdiqOqDoUk/q\\n\",\n       \"hfrhKg9WOSoR44x/M3/4XK6IMhZbI4JdgXLI0B3B9chg7qBUMCfYfxLliT6kTlzPfYg4Hp8Ovu+r\\n\",\n       \"wrOWoAz1VX764n1PeehH2iXykSnB8Sh7rYpbaVzsvvvuM9dh30zGw9h7z6EcLlqJWi+wb2tp1wUg\\n\",\n       \"ipN5ytueyMeONadEKlJJkiRJkiQ92SUVKYjeaw7dqw14P4+ViDJU8gHiOCJGeKpt3TuM6xNh5EpB\\n\",\n       \"K/hqsVcgUXfuW1OCemAl0g7Um7+T54pcHkSuRJBzpa8VR/2IHPrmN78paTyFcl64VYVfBRFjkU+c\\n\",\n       \"55hx8H9BUa2FvS3pB3KtYL1t3LhRUucDWAJfK+qFykB9I2ufv6PQuT8Q83x5/fgOn5ax1bq+oPTg\\n\",\n       \"s4H/V+S/hqLhUXSuzrnK17dcrGm026JgbWEtoXx9FSnGMGOQyNCxuPPOOyV1SpEroX32i5S6uUvk\\n\",\n       \"LkrXtm3bms7D+Bk6Tlrxe7D7Mx933HGSurWatz/k9XKFiLXQ7w219wrueeQPI9KfPTUjRap2f9pU\\n\",\n       \"pJIkSZIkSXqyMEXql3/5l8OnSaKRnvOc50jqLPYrr7xSUuer4dYUyshYkQRYF1iPWMRYC55FGFy5\\n\",\n       \"8gzZrfBeH2UFJakVnrpRGPpGKvA7Pn2H9H/913+V1L3Hp/+IcIkiKbCW+lrF9NOzn/1sSZ018dWv\\n\",\n       \"frXX+dYK2o1xS3vW+qtE1rorOa3lIVqPSCf6Z/PmzU3nwz8FXy/Gc1Q/+p9xg2rDeGLdoN7LrX7K\\n\",\n       \"jkXMHB66JpR2ma9VTKg7dYoij/28QN1pI9aYvpGy9ClrWmuU2dhQX9ZKV6igtr74fuGnNzb4zfLW\\n\",\n       \"wOm7ly3j47rrrutXMLv+WilR4LnjHObnH/zBH0jq1G4f7xA9K0S5Cx1Ua54dUPYi/1MofQ+pSCVJ\\n\",\n       \"kiRJkvRkYYrUau82eYrFGkFpwDeI99JYtG6VjBUpwVMzvklkwS1ZGZ57hKfsvnsL4iNTyuJaC0/l\\n\",\n       \"Y+HWOhFKfNJ/pRwuWGG12ZUdFCjaGx81fK/IGrwoIlXD/XiwsmrHC2qCR65gLbs13wrlwS+n1m8A\\n\",\n       \"UF1QpqgXVjzz9/jjj5fU+TOQR8tz5rjiu9wHjnN5xuqhUHfP6wS1c9IjXEv49QpFo0oAACAASURB\\n\",\n       
\"VFCM6HPf8w0LutU3jDUrUgTWCupBOVDfWUNQLmqVN9qBe0LrGlzaJ5TyscZ4RvLWufKLQimqkLcS\\n\",\n       \"r3nNayR1a/+8FFEUOfw/a/07a0lFKkmSJEmSpCfrMmqPPcz+8z//U1K3hxj5o1CifCfqsfBd6D1q\\n\",\n       \"r6R4YaW4soIS0bqHGVYV1uxa53op4VFxWHGUszWbcF+/AiJzsF7JAoz1sWhFCgXFrdQoYzfjrRR1\\n\",\n       \"GKkcWGG1uVAccuSg7jCuW339sNKj8U75uB7zBiXL1Qfag/pFKtGYUOe1uNZK1wX6gLHEWGFM8T0K\\n\",\n       \"E8oAigprmqt7pb0E1wrK4TnzUOCiPehKcB6/V5TWYto/+r3/28dJyQfu8QpRiA7jM5r7fcHfszWC\\n\",\n       \"uZZUpJIkSZIkSXqyLhUplIwovxFWRCl6ri9YGb7vFdZe6T27Kw5YfSglKF6t0VSct68P0VpB/5DF\\n\",\n       \"ubV/+iqMWIFY21jljJNFQ708x4qPg9b6l1SEvtGZKHyMO3zOWqNGS+VDWUIx9BxCkQpUyjo9Jq2q\\n\",\n       \"6ligPIHvP+hRiaiTqMKRYoLqyxyhTyPfFr6f9z6YzA3fK9HrXQv1I0+Rv00oraWlTOgoJ/STvy3o\\n\",\n       \"6xf7eGVsJQrol7F9JyEVqSRJkiRJkp6sS0WqBBZ2bY6HElEOGN/zi316fP8rByuE/bCwsjwbce1T\\n\",\n       \"N9YO54neL68XsCqwAql3rXLQ1w+F6DLaByVqaOQM0ZeMu74KCNbwWu+T1VfhQ30garWvdV3rd0M/\\n\",\n       \"oWQyv0uK5lrsPbko3yGPvkNJos70CRa353hz3CKnbzmfKypRtGKrn6fD75fvkyjtnHuPOexZ7ftm\\n\",\n       \"OPf6DY3wRvlbLz5muzq1e+YNhbxirVG0EalIJUmSJEmS9GSXVKR4X79WEREoEJ7JOwKrhKddIk14\\n\",\n       \"396qSHhuHHJhkBW2FZQiz9Uydm4N6t2q4PRVPlCkyF+FkjHU6kAVGCsnTKsVjLXbd7z39Wvhd0TS\\n\",\n       \"ME7uvffepvPURh+iFqDY7rPPPpI6ZThSoNfLfno1uLpXisB1xYexQ1uwNvTtY/zS+PQ8UihS9CFz\\n\",\n       \"mZx+tH2rDxnX8RxnKF98z1pLLr1WJYr2/fKXvyypa8/DDjus6TwRqNX333//KOd7vLHvvvvO/Ju3\\n\",\n       \"CX0jt0sQtTfW2yxIRSpJkiRJkqQnu6QiNXbkSG0UHNZMaX8f9rRDQcLC7uvLQfl4r9t3rz3A74Cd\\n\",\n       \"r+cFUV/uvzEvsIq/973vzVx3rCjHsTLmt4IfRq2y4/TNVo0agerBOG4dNyh5nCeqxy233CKpU+AY\\n\",\n       \"7yWFci18pMbCfZtaQZmZl5+dt6XnhKPPHnrooUHXYSy4ynvTTTdJ6hQDrjs0dx4q9djQj0Pbw+nr\\n\",\n       \"C7arMbTdSntgOrV787WSilSSJEmSJElP1oUiNfTpG0t5w4YNksq5P5xapaTWqsF6wscDCxvrjlwu\\n\",\n       \"vFePovdQjp797GdLko466ihJsc9PqR3JA0RUFPsd8VTfN1rOFQaUDOq52267Seqs6XmzqJw/fUG5\\n\",\n       \"83LTX+TQwepCjSgpU8yLzZs39yqXR4eWolUj8LHCGiwpez/60Y8kdeOcf/8iMFRNx8eDuc4nawt7\\n\",\n       
\"ltUqOPioRDny+mYSr8Wz7jMHXA3tC2PPox3HgnuN5/saCv2BT9paqeGe+X690zcn49ikIpUkSZIk\\n\",\n       \"SdKTyby841e96GQyXVpaWvPrJkmSJEmStLK0tKTpdLqiw20qUkmSJEmSJD1ZmI/Uu971rvB9Nbk5\\n\",\n       \"8BEij4xDxEf0/hjV633ve9/M8ZEPFb5Ihx9+uCTpu9/9rqQumoi/U258OPBLOP/882euO2+4Tuv1\\n\",\n       \"dt99d0lduUsRRLyHfuc73ylJes973iOp81/gfT7HRe2Lbxb5kPDJ8igtcu286EUvkiRdfPHFksp+\\n\",\n       \"AuTHcr8OyhXlxOHvb37zmyV17el+I5STf7u/C+XHz8N9mPCjwHfsrLPOkiRddNFFknaOkIrAVyk6\\n\",\n       \"jnbAzwEfuLPPPnumfvOCecZ8IO8U84j5Qzswr9ihfdu2bZI6fyDWA9oHXyv+/Wd/9meSHmkXr9t+\\n\",\n       \"++0nqfOhYMx7bjOuxZhmbDI26Dt8SP7iL/5C0s5tWcr4fcopp8xcf+vWrSse5/mmzjnnHEnSlVde\\n\",\n       \"KanzG/NyeWZoxix+iswh2pC+oty0A2266LWMOcVY9jUAP1T+7pnbfVcI5uaFF14oSXr/+98vqcuH\\n\",\n       \"xdzyuRrt0cbY9FxmtDffv+IVr1ixfvg/MpZrM3tH/rCsbRdccMHM9aJI2SOOOEJSt5Z5PizGFWuI\\n\",\n       \"7z8LUf8xriIfNcbboYceKkm67rrrZr5nHvzmb/6mpG7NO/300yU98hyx/Ly0Z6ufNNDvvlcla2dE\\n\",\n       \"KlJJkiRJkiQ9WZgi9au/+qthZABP91iDUVRTbQSGZ8mN+I//+A9JXdZerBOe4rFmsZK4PtbHrkKk\\n\",\n       \"mER4rhe35rG+S1F/5NfyfsN6wJrwfqqNWKmNMMKqwUrDqnVccSrlICllUEft8P3F6IfaSKvScd4O\\n\",\n       \"pYii1qjZ0vHeX3fccYekOP8UCpTDeOEz4oorrpAkveUtb9npOzJ2l6J6or5FEWLMo2wAbUvfRtfZ\\n\",\n       \"a6+9JHVKz+c///lVyxNFa/mYZOxESgZ9RAQm54sUiqH5mobie/+hZHg70A8oKl//+tdXPB9rF2uT\\n\",\n       \"zwVXkjxfFmu7K1KcN8rRxpgt3XNQ51F+SqCSczy7NxDhG+0iEa31vO2J+p23F3fddVdV+QAlDkWK\\n\",\n       \"cvmawbyLdm2gvtwb2L8W6E/6l3t2390guA7zozbvVCpSSZIkSZIkPVmYIlVj/WJlkT8JJcgVotJT\\n\",\n       \"I1ZNbYbrz372s5Kkk046aebvPFVjNZFXaqzM2WtFa26WkpVKu2ClRX1b6nMUnbFypmANQWSdlPJO\\n\",\n       \"uepQC+MC6wwrh/fva0Wp3K25dVqPn3der9WszqH5ZbyurjpiMZfaGEv6G9/4hiTp29/+9qrHR3MA\\n\",\n       \"nx3aFLUeVRe/UsYc5UJ5QOV3v0jqgc/KWPiaUFLBPft91LfM7VJuP87XuqZQTlePgfOVfHFK9yb3\\n\",\n       \"3astl+9RWOtb5fiec54pvFWJYm3ztzrkLIzuJfhPOiiBnM/vtZ51gHHdd94zTiln7ZqfilSSJEmS\\n\",\n       \"JElPFqZItby75GkV64Dor1rPfKyi1n12iCDAWuMpF2uQOrS+hwWUtjvvvFPS8Cy+a0Wk0LiV1Art\\n\",\n       
\"2LqXHNBP+KOgAqCYuZWCf4P7STgcV5v1F2sZ9YD39YvKvtuqmDLPPv7xj0vq5sE999wjSbrqqqtW\\n\",\n       \"/T31hnnvF4a1uha4RV1bN+bKl770pVWPYwxHGdBdIWEN5PwoDCgUPuYov1vyXK8283rtvo8oN8yJ\\n\",\n       \"WkUKmDv+d+pdugf0zZOIksKc77vPZWk3gFIErsNaVvLH7Av91PetAO0z9n649LP7S3o5uTf3zTTP\\n\",\n       \"Pah1D79UpJIkSZIkSXqyLvbaqwVri6dE3seWlBzem/a1Tni6Zs8yrCT8EPru5I7SRmRIKTppvYDV\\n\",\n       \"QrtHigv14jisqVJ/eWRULeQieclLXiKpi1S68cYbJUk33HDDzPGoCZEi5f4vpahErBgiaDivW7Nr\\n\",\n       \"HRnVOu6ZZzfffLMk6cADD5Qk/emf/qkk6frrr5cUtwe5m8D3UxubIbszUNaNGzdK6nxCfJ/BiJKa\\n\",\n       \"CSgb+CB5bjz+7lFXPhfcJ8jnUkkdH2snC8YyPlklH5++anVftR/6qsDMZRQP+qfWlwncT9MjXksK\\n\",\n       \"F/e46B7TVymLIAdd3/0t3eeLe1pUftqHdonuDYwDXzuZV1yPt09RXrUSlId7O/eAEqlIJUmSJEmS\\n\",\n       \"9GSXUqQAK+Hoo4+WtHN2YM971FcxcrAijznmGEk7Wxt92VWUKMB6wF8DK8StBXyVNm3aJKlTOr72\\n\",\n       \"ta+tev6+7bpjxw5J0kEHHSSps9ai/EUlsEI9q7LXE9XgyCOPlNRZm5SH9mJcjmU9zpv3vve9M/+m\\n\",\n       \"H0uKIvWGvv4KtUS5c2pgrJx66qmSOt+TWkXK+zTyLWEMRf51tf6b0VqG6oeSUlJPh8J1yDgdKTX4\\n\",\n       \"gRJ9tausdbyFoF6tShS4okaeotpdJVBKWcvwpx1b1WZuH3vssZI6RQpFpzV6j/kQRYcSgY9CGmWO\\n\",\n       \"B+41Pq4Zf6wxnI952errxO9QWqM8YU4qUkmSJEmSJD3ZJRWp0ntvtwo9b09fhYqnYZ7SS3v9/aKC\\n\",\n       \"tVaKWELBw4+ktt37WtNYf//8z//c6/cOETVebrcG8SE67LDDZspBpAnHM14if5GSP8SiabVKwbMR\\n\",\n       \"j02fPG777ruvJOmQQw6R1FngvtdYCSxvFKFoLajtW8/IXOsjxBjDQi/l04n2iOP3JbDwqT+KBpnk\\n\",\n       \"8RPEsmdOD1WkIp+gvrneIjh/a6R3dB5ojWZD1SfDuKu9Y0G7PvTQQ5K68YRy1EoUjYjv0oYNGyR1\\n\",\n       \"PlnMO/djBdZO8qb5dVC6uCex9vJ9pFpTHt6W0N+MWxTEEqlIJUmSJEmS9GRhitSTnvSk3tYDVlZt\\n\",\n       \"Lg2sQayooRY/VhfnXetM1YumNndOX/+Ceft3lPD36nxG78uxdvDtYu8+rDvyLzFu3WeI8YMa8OMf\\n\",\n       \"/3iEWnQMze81lHn3Zx//FSJK2auM/QBLOcIcFKiSKo1ljEUd7QvZmnPLo8CwrEuqfaR0uX9pCaII\\n\",\n       \"UQRcSUORoP6tPitO5F8YtX9rniaHewWKY6s/nueRai0H95qrr75a0vi56Ohv/EG5Hv2HctsK9WRN\\n\",\n       \"pNwokiii5IBj7YuUUp4V3H+W8/p44jjqxbxjfqMMUr97771XUte/qOieEy8iFakkSZIkSZKeLEyR\\n\",\n       
\"qvWGX43a99dY/L7zObRGUfF7nroXlbH6F5VF+whhpUU5YJy7775bknTwwQdLkvbee29J3fhgvETj\\n\",\n       \"BIVqXuNo0VGC8+7PPoobGZApW6tvVCsoGvgSRYpUSfFA4fD8RqyFtYoHvje0AxZ6676IjK0onxZv\\n\",\n       \"DfA1GSuPVVSO2r/XgprK3Gz1i+17fZQ7+tWvx+4eRLvRrqVM6g7nxUeQT67fV5HCR472QjVmnDEf\\n\",\n       \"UKBYW/GduvXWW2fOhzLkb7FYo+kn5o/vr4tiyvG+xvu8Q5Gr9e9MRSpJkiRJkqQnC1OkxvCbqPWN\\n\",\n       \"4OnTrSa35mrhaZsIG3xhknEgQmVs8F3iPX3ko+eZ27GqPWLEIUoR64dPsmdzHs+2y7iszZK9qzHv\\n\",\n       \"qFbff64GFKhStJ3TN0s7Y65v1n6/Pha3KyUlHyvPk+MZrMfeF5Hyosy4z0xfaqP0uF5fVRSFD+WE\\n\",\n       \"dq8dL333nOM6kc8eqjc+ftDq4waeNw2Vt3U8oDyyVtLurIWMA/7NdTietR+/UpQi7tFeP/rhwQcf\\n\",\n       \"nPm3q/zUg3s+a63vFAAcX8pvBalIJUmSJEmS9GRhitRkMim+Lx/LeuGpl6dlnkb9vXKtf4H7Rs17\\n\",\n       \"d/vHG+T2aAVfOPqTCKaTTjpJUmdVlrJWo5b63o4lxQhfKSJReL/Oe3rGS20kSFIHOWNWAwsYhQDL\\n\",\n       \"ttWCd18K+tJ9MyIiXyzGPApO5EOFAsOn+4fh++S5zABflKjNWtXDUn4sxjxzjzU4iriu9UGqjfhm\\n\",\n       \"Lej7BsT3xxzDt3c5+FWydjgeXUn7EF3nDFW1UYSive1KcC9k7cPnzucN+bBYG8nnxvzx41FOGUfA\\n\",\n       \"WykfL9H48LdPQ/dyhFSkkiRJkiRJerLQPFKlp12syNbcLm41RD5MNZbsSvh76ccbUa6Psajdcdvx\\n\",\n       \"nC/kJqGcN998c9V5qB/7XN1+++2SOmsNq8Z99J7+9KdL6jKcYyVhRfK7IXvD1eC5hfCHWRTHHXec\\n\",\n       \"JOn73/9+r9+jYkTztcZHyn1VGCt91U+/dl/Fg7F+wgknSIqVCWAMcj0sapQTPqO2Ys3Fn89p3Rex\\n\",\n       \"5HtEu6NclXyGeAvRqowxxzyvUGsUYsRYGdMdf5vB2kW0nO8mcN5550mSvvnNb0pqzzlHJvFI8WQ8\\n\",\n       \"0l+tPn2t/p7ud1oaT37eqB61jBXRnIpUkiRJkiRJTxamSNVYHFh7WBf4rJSYt88SUVgoEn0zR0eW\\n\",\n       \"Nk/pY1lTEbyfxgrCmitZBUQ+0M70jys2nB8/EvcnQdkjy7Tn+IBof60IrCisXyIzaiNoPKKD8mL9\\n\",\n       \"RO/V8ftgvyjGB+2K9TivXDqAEsd88XxYEd7OnGfoXnmHH364JOm2226TtPP4IifOaaedJqlT0PAn\\n\",\n       \"Itpy+/btkqQbb7xRUqeeoBxKO+9n6EoFigxj3lVvj8pjjEf+mvjDeV4qh/PyPaohaxxRfSWfDc5D\\n\",\n       \"3zJWmZP0Xeu+jShznGdsaN/SHK5Va4844ghJ3RrKW4fWCGrWctqTccPYwufM1y6UHfwtb7nlFkk7\\n\",\n       \"16+064VHjTEuiAxHmeK8jLe+mdpLCg5KJ+Om73VKsOZTX8a9/31X8T9ORSpJkiRJkqQnk3lbxyte\\n\",\n       
\"dDKZLi0trfl1kyRJkiRJWllaWtJ0Op2s9F0qUkmSJEmSJD1ZmI/Uli1bHnsP2+oD4+C/gE8O/leo\\n\",\n       \"Xp/73Ock7bwvEf4F+CngWxNFQuBv4bks8Am68MILZ64LnoW3FDkB3i68N+aTCI6LLrpIUucrQuQP\\n\",\n       \"/gMcjw/PgQceKKnzD8CnB/8PfFLwB+B3b3vb2yRJl112maTOXwP/DjjmmGMkSXfccYeknf00at9/\\n\",\n       \"0458jpVXrHS9T3ziE5I6XzXan3GBv4fvYE5/Uj/8TfD5wUcLf4BXvepVkqT3v//9M9/TrpFPF9eL\\n\",\n       \"xinX8yy/Z5xxxkw95w3X+ehHPyqpGwfRTgLMQ3ztiCxj3jA/n/a0p818jx/Keeedp0svvVRSN1dp\\n\",\n       \"Qx+DtDHnYozTp/jEkIUe3xHm5AUXXCBJj12P83M+j+wlKo+xQ9nxQWFs4y/m2fdpy3/5l3+ZOb/7\\n\",\n       \"fjFm8bHBd8uz6eP7g28Qn/hKvelNb5IkXXLJJTPnb4V60W74pDE2mQvUb8uWLTP14Hs+yejNHKPd\\n\",\n       \"aS9fUziPRyP+5V/+5cx1542vZev9ep53LYI8Xeecc44k6Yorrpj5nUezMh6Y661+wMzr888/X1J3\\n\",\n       \"72Nt4J5HuciQ7nC8PzMw/5k/+GWW2jEVqSRJkiRJkp4sTJFabiEO3Q8JqwxFxhUSoo6wBrFmfD+j\\n\",\n       \"Ur4qnlpbn6JdkSplxyXvDlFLWPBYWx4JgkLxzGc+U1JnPWJVYM1hJfA90YJYo7QfmcCJOiOPErgV\\n\",\n       \"62zbtm3V+kVKFEpYlHl8XkqUg1LpEVY+PjjOc70wnhknKFVYU57ZHKsN5atkBUZZocHzWzE+Wjng\\n\",\n       \"gAMkxdm4a/H5GFEaV4xjb7/l6wYqKsdGUVMcR5/yb8YYyhDn8T7yMkGUY44+pS1c4eG6paz7RKVF\\n\",\n       \"eY0Yc6U1KsqM7WN8aNSWz9nSGs/3kVrPWkg0G/3mewhSD45fhC/weqA1ehNqI5w9hxvXidZ4FCDu\\n\",\n       \"NVu3bp35nrkd5WXzaH/mI2sG46A0j6LoWMZ7a6RyKlJJkiRJkiQ9WZgitZy+SpTjViagzGDF8RTr\\n\",\n       \"T728F3V/gqG4NYRvC+/78aXZtGnTzPUjXxJ/2ucpnf2IsDaxRryefO9WKdfl6Z4dtYdmj631gWvN\\n\",\n       \"0jsv6A+sGxQmFKhSFmu3wrHCqL+Pd85fmzOlNcsy12+FcYtC6vPIrcl5g++U58Vi3ko7t6H3Bb5R\\n\",\n       \"rBWufjGXPK/QUPAXHEppDo3tR0h70a7+OZTWfSdREvBdYWwzVvHxGjsTOWOfz7HvESWoV2tOxbEy\\n\",\n       \"d0d4O/j+pA7zy99y4DfMPfA73/nOmMXcidJuCa2sukpMJpO9J5PJNZPJZPtkMrljMpmc+ejff2My\\n\",\n       \"mVw1mUzumUwm35pMJv/Xst+cO5lM7p1MJndNJpNTRyllkiRJkiTJOqSkSP1c0pum0+ktk8nkVyXd\\n\",\n       \"NJlMrpL0F5Kumk6n751MJm+XdI6kcyaTyWZJfypps6Q9Jf37ZDLZOJ1OBz0WY2X9/u//viTpK1/5\\n\",\n       \"yorHocg4/r6U9+i+0/S8rIxo3yHKi4KGUuE+N44rGjzlo6SQkRpFqjbbL1Ym5yeywfdDaqVWkeL6\\n\",\n       
\"+D8sChQO+g0fKaxz6oFiVdpXiu+xKr0d5rWPF/Tdsf6BBx6Y+TzyyCMlxZEwEbVRqhFEwpEhHTUC\\n\",\n       \"lvtBeUQocwK/MY+Oow99juAjwXlc+VgUzG3a0scSCg/lPfjggyV1CkDrPo+0F36ezAEioPvCmBiq\\n\",\n       \"+Hl/1PrjtfLyl79ckvTKV75SkvSP//iPkqRPfvKTc7me07rfLKyVXynUzhP3wcLvs1Xl5nqltZhx\\n\",\n       \"xj2tpESxbtQqf6uO4ul0+t/T6fSWR////5N0px55QHqRpH949LB/kPT/PPr/fyjps9Pp9OfT6fQH\\n\",\n       \"ku6TdHxVSZIkSZIkSXYxqn2kJpPJfpKOlvQ9SbtNp1Me/X8qabdH/38PScsfKR/WIw9eg/irv/or\\n\",\n       \"SdJLXvISSdK3vvUtSe3WFfA0WvId4SmX41wJqo2IKOVNKkUrOW7FYW2zbxPKGu+Ba0EZ4ame+nu5\\n\",\n       \"qTdP7SUrsNUqipSLeeeRAiI3+KT/PWdPa0QTOUpq974bC9+7sC+33nrrqt8zzvEjgVY/GAf1gvHt\\n\",\n       \"8201pcujfBjbKCzs2cbY51yRZVvaO23ecP1I3fU1hBxxfWGMk5eHdnMlgD5njpRUWo+gdrge/TKW\\n\",\n       \"Eogq3ArRaccee6ykbiz+wz88oifU7B27nmGO+luaVhifrWsj44Z+r4Xysia4Lxn94tF9JSL/5Igq\\n\",\n       \"XfXR13pflHTWdDqdmSHTR0q82ih/fMadJkmSJEmyy3PNNdes+n3RNJ5MJr+sRx6i/mk6nX750T//\\n\",\n       \"dDKZ/PZ0Ov3vyWSyuySci34sae9lP9/r0b+tiluw/v4UnyaUF3xY3K8BK8mtHFeOaiNOeJrl92QD\\n\",\n       \"JkcL71s9b4/TN2qq9Xzu11Eql0P7YC1ifUZKnGcr9n7Diqu1ArwcTq0SVeuTRRSY59yh/lwP62ao\\n\",\n       \"LxNKlI93xivlHdsPp6+PVAnvd8ofKZh9wSeKdsFPB5aPC792NPbwybjxxhsl1c+VsSKM+8IaRGb0\\n\",\n       \"ecMa4EqFKwGsla1jN/KRoo+H5rEai8985jOSpNtuu02S9N3vflfSeEoUyl7pfLUZx1s5+eSTJXW5\\n\",\n       \"4/7t3/5NUqe8lcoDjJe+/XbYYYdJ6uZla3Qoa3SpHfH7rc0zdsopp+jaa68Nvy9F7U0kXSlpx3Q6\\n\",\n       \"/btlX31V0v9+9P//t6QvL/v7yyaTyf+aTCb7SzpI0jBtOUmSJEmSZJ1SUqSeKekVkm6bTCY3P/q3\\n\",\n       \"cyVdIunzk8nk1ZJ+IOmlkjSdTndMJpPPS9oh6f9I+utphYnC03XkS8FeeVim+Oa4IoUV48oFT599\\n\",\n       \"o4Y4nysxKEO+j1P0+1pQLrDwXVmJntJRgIbuXYhCw/lc0fD9ryJoHyKkoqzPTl9FBl8az9weEWWr\\n\",\n       \"pr70K/UsWVmewd6h39xHir+Xft+XoRFWDn4IPg7pN7eWh6o4Ph76+m+sREmJWiu/vFp8n8954z4r\\n\",\n       \"qPDuF9k6Z5lL0Vhv9VGppe/cor5XXXXVmMV5jFpli6jNsRQplDDWPHIHltZq7g1+z+4bXQisVX3z\\n\",\n       \"lJXmKc8OlLt1l5KIVR+kptPpdYpVq+cGv9kiacvAciVJkiRJkqx71kVmc4gyRpNXabfdHgkOjHwu\\n\",\n       
\"ot/7e1wUG6yokjWFYoGixXWwjMfK8gs8Vdf6tqBk4EuGFVnKwB3hiklfHy/fw3BeigvQDyhThx56\\n\",\n       \"qKROdfD8XJH15BFHKJ0lpa9UL6zwyPqcV7uUxkHrflxEg9bmJ2v11QPGcYu/RW1das+NBcvYGsuC\\n\",\n       \"7Uut8kPurdK+jCWYC6wB7Ic5lNq1d2wW7eM2lLHVZfqVcU3Ed+mehu/h2App7ZoS4YoUazYKFOvD\\n\",\n       \"mKq2lHvtJUmSJEmS9GZdKVIlUAawEiM8EiTaMRpKO06Dfz+2EuXUnh8fMD5RsminVn8DrFkihMbK\\n\",\n       \"FlxSXLCOUB6dKLKF/mZ8sIchESDsKVjKGA++tx4+QZ7ttjViB2tpXv4fESVlE1WgVs2p9XWDvtmr\\n\",\n       \"I7UIP5GVdmivVXFr1T/m4Fh77g2lttz49XmmaVRrFIiSIsT18I1ibm7YsEFSl/W+FfcDTeroq1rT\\n\",\n       \"f+7XylxnTePtDdF7KFSRstkakb3WML5Yq1kfhvpyOetjdUiSJEmSJNkF2aUUKZ6KS+9lW/cyQ+mo\\n\",\n       \"zeWx3sBaxu8D6wNrlPqUouyA/EpYndddd914hV0FrOPIpwbFBJWA+mJF8T31xafN91oswfmw2nmf\\n\",\n       \"jkKCNc04rPWb8X3Q1goibCJor9oM6K3zYyy/FKI/jzvuOEldLp/lypT7PpT2/ipF5dG3a91nEbX+\\n\",\n       \"Zp4pmn97HqhaWDvw0Rm6RtKXtWvSWJQirH/RQE1n7Ynam3xRGzdunPl7pPCW3h6sFbUqOvcM3jKM\\n\",\n       \"nZ8sFakkSZIkSZKe7FKKFHmkWq2KUs6Nsd+XOvNWujg//g9ch/fXrYoA5yOnSN/8W32J2gkfJY9q\\n\",\n       \"pP9QJkrZeEugUuCLQ5Qa6gTjD+WPdi7lMMEaOvDAAweVr5Vav6F5jU9XhPvmOXv6058uqRufNeoM\\n\",\n       \"qur9998/83f6uFaZqT2uT6RhC6yBJfAXdJWxNeoQ5Yg5xhimD1sjcVFH2Z1i6D6MrezqUXuteNQp\\n\",\n       \"48LVe/x/ydzuUW4RQ6NCh1Lbn7V7QDq1vpGpSCVJkiRJkvRkl1KkeL/fmgNird/DO1iFWLVERNx9\\n\",\n       \"9929zocSAjxl//jHj2xrONRKYB8vzuPKAU/pfLoS4zln8KugHSg/0WscX7JqsZ44bmjepWgvQKw3\\n\",\n       \"rGZ8plCmsOrcF62UA4XrDN17rpVa5SfKDI+iFUWR4lOHguftwDghmvJHP/pRddmlzheNdrv99tsl\\n\",\n       \"1dUr2pOOOg1VjtwHa957w9VG8qIw0HZ9fbxQefGFoY8ZG61zkOOJpGVOrRXzjrTuS8mXLwIfKNrR\\n\",\n       \"I2rpNyKWS/dQ5gVr40qRscvLWdo9IgJfLM4/ViSz+82W/G0jRQtFjkj4jJb9hAAAIABJREFUEqlI\\n\",\n       \"JUmSJEmS9GSXUqQct7J4Km99uh26N10JrEMUDKxEFA9yddTiebTwe4jyPfn7bsqDnwj/xgrBiudp\\n\",\n       \"3n3SsEawTrHyyD9FOYiW853i+eR3tDuKSOSrw3k2bdo0U65aJY76oZRRfhQOIDdOZB3Sbx6953m3\\n\",\n       \"onq4InPwwQfP/I5+wreP9sL6ZNxzfRQ/rKcjjjhCUufbVvIj4Dy0p5e7ZMVzfvrBowSxPvfff/+Z\\n\",\n       
\"40v5ychM73tromashPsjRmWnD7FgKTuWeu1awNwq+WG6ZTvvDOnUO1IMave/pB1dwUBVp0+3bdsm\\n\",\n       \"qVvLornDGs2n7zpRS18FZ9F4pm1X+pizkZIKrDXMXeacvx1gXDOeUZxoPz/e1x5+x9rDWsW48ntt\\n\",\n       \"lK/Kueeee1b9vhbKy5rO9bl3uSLF2wTWbBRS+oP6subUvs1KRSpJkiRJkqQnk0U80U8mk+nS0tKa\\n\",\n       \"XzdJkiRJkqSVpaUlTafTFZ0NU5FKkiRJkiTpycJ8pC677LJwnx7e/+K/0DcvEKoXn7w/xX+A9/77\\n\",\n       \"7befpM5fIPIp4f0274vxKcEv4IILLpi53rzx+kXU7iU41vXGgutcfPHFkjq/DN6Lk/PEfXuOOeYY\\n\",\n       \"SZ3PFOOMdsCXC/8BfvfGN75x5nqotXvuuaekLjoS/wHer3t2X/f94fe8d2d8vfzlL5ck/d3f/Z2k\\n\",\n       \"nX2siGgp+YPwPXnEqC/+AJSb8fnBD35Q0s7+DfiocR38eNxnyv0g8LvAX4N/n3322ZKkj3zkI5Ji\\n\",\n       \"X0Daj/L0zXt2/vnn6/LLL5fU1Rmfh5KfVykyEZj7b3rTmyTtPBeGzjXWGPqCerzlLW+ZuR59Tdsx\\n\",\n       \"VqIoK3xCPI8OYw6fJXx0Xv/6189cD5hr9FHtGw3GIlGN1JOx+453vEOS9OEPf3imXu7D5jnt9t13\\n\",\n       \"X0lde+CrRX8++9nPltStGZznj//4j1esHz5bzFnuCUMjv7nOli1bJHXt4fciIltpZ/wpaS98gdxv\\n\",\n       \"0H3i3v72t0vq5h7lZ1y6fy7XpT/dD5Hr0S533HGHpM7378wzz5ypZyu0e8nXkHZgLbvoooskdT5m\\n\",\n       \"tGt0L8e/Ft8pX2O5N7jPGvULy7Xqt0mSJEmSJEnIwhSp1Sw2vosiZ7BCSnl7HJ5OPdtvbXSTW7dE\\n\",\n       \"Sqx15u9WvK3ZqwxrpxQ91ZfWrMcRRJZgjXgeLc6PIoKC6RFLKCwoH1iBnr3WowlpvyjSqtR+WMGR\\n\",\n       \"wkM5sb5dVeDvUe4XrEjqgfVNu/k88nZhHDMvSnmesG6x4mgvPr1/SlGpZM3um3V6eXtSttYs7bX5\\n\",\n       \"hUoW89DM2fSF7w3mMBbo4yhSGQUt2r0B5YG5WsrR13cu+3mj9maser6jqPzcA/xewPlRHlgbSlFm\\n\",\n       \"HIdiQ33HykXIHEWZ8zlUitbbsWOHpE4hKkWoR3PP/86cp/4O1/HvfTzUKktO7fG+lqGk0p4eIe5Q\\n\",\n       \"T78ev2Pt5Tq15UpFKkmSJEmSpCcLU6RW25cMy533+g7+A30VKX9abc1GjA/KUJ+jI488UlJnZcxr\\n\",\n       \"rzOnNo8PuMJQC/4ibrXU7tgNWHEoKZFSibVU2k8JK5Bx5L56fv7WTPoR+H2Q98mJlE3Ke+yxx0ra\\n\",\n       \"eQdz2pnsxb4PmuO5j5iLrXtOun8M1iEKGuC3ELXj0GzgyxXFee+lFlnsUNpvsURtpmjmFGuQX5e1\\n\",\n       \"84QTTpDU5UaL/E2ZA5EF7tn4h7ZzNCYYO9SPsVGrmDnuE+RjEx8fFCfmForNajnL+kB9KAeKIfWt\\n\",\n       \"9TmrzZWIQoQPGep4lEcqGr+o1ZwP5cbbkzWO8TZvGLeUw1V5H6e1ChPjpvaenIpUkiRJkiRJT9Z1\\n\",\n       
\"ZnPfYwurpJQVtbTbfWtW3CgDOR7+fX2kyAp89NFHS5I+/elP9zpPK0S71dJ3bzi3mk455RRJXf9h\\n\",\n       \"HZWgP4nCcwUDKwTfL46/9tprVzwfv/eImYih/hEoeqgZkZWDVeX188z4Rx11lKQuMuUb3/jGzPHu\\n\",\n       \"Z4L1D8wrrLOhyirQnj4fKEdUPyeKMItgHvahdVeDvnvDsYZ49FSJSFWkrSMfGPY/ZC6UfJtQIqK1\\n\",\n       \"E8s+2rOPaC7U1lJ7Rn5/vI2IxgjtQYZt2mH79u0rHo8C58oFcB3qzRxjXHB8q89PBFFjrKl8MkdR\\n\",\n       \"eqLrRfcudg/weyP1o19or9Y1nePdt83XxigSH2oz6tfCuKUcQ9fqkj9qRCpSSZIkSZIkPVmXihRW\\n\",\n       \"wYYNGyR11gI+UaWnWd/ry2nN5h5ZfUMteRSbAw88cNB55s1Y+4KRm+POO+9s+h39iZXgfiRYv+TE\\n\",\n       \"YY82rFFXvrBisG5L+31xfay7Vv8QL3dkrZf8a1DYbrjhhpnzRqCAufXpv0PRKe1VWKKkINHOJUWq\\n\",\n       \"dV617lW5nNb9NUs+UhF9y1i7z6HD2CcKrDRm6ZPSbvfR2lka207k4+P7grrCgbJWq6rj24Mi6PcG\\n\",\n       \"7xcUR9qBcvZVpDxCnPOg2rqiEkG5Ih+66K1I5FuGslUbWR3lR/PfRedh7p966qmSpFtuuWXmsxZX\\n\",\n       \"FMfG16ZIgXVSkUqSJEmSJOnJwhSpX//1Xw+jhLBqeFrnKRzrquRr4dZOa5RYLaX3wSVQGFqjD3c1\\n\",\n       \"sGZ++MMfSmq3KrCCSkrF17/+dUmdP0DJByuyMrEi8WXyHdNbQVUo/b424qv2/T3nc6XIrd9aX6Ra\\n\",\n       \"H6eIWiW4NqfTIhgalTfv67nPV+2ax5giiqyVsXyIUDR4KzF0jfU57AqR49Ff5Czsm2vPc9SVcrRF\\n\",\n       \"lKI5o3Zi/Pjcc+WItQn/RD8fvlslH8FobeAeTHuiULUqUn3X4FZQniMfRScVqSRJkiRJkp4sTJFa\\n\",\n       \"zTrFiuLp1q0BnhJrLeO1ys/Ul1I22xKtUYhjUbou1g25RbBy8D+g/0pRj7U+Mxy3devWquMjXJFC\\n\",\n       \"IaE+fbM70061Vs5YuN8KVjL1qq3P0HxPtTmSYFHjejXWs1omtft8OX19wMYCZQvFwucic5C5Tn+U\\n\",\n       \"fH0efPBBSV2kdAS/x18Q/8G+amzftYIoSJQ5yh8pjCg+rrjVKprMsZICSHQhc9kVt+hey1sifPfG\\n\",\n       \"ypA/L1pzJ6YilSRJkiRJ0pOFmR+rPZHyFM37ZJ5+eX/faqHytN73KbiUmXnekQQl5m2xu9XRel2O\\n\",\n       \"Q4kio3irQlECa3zo3n7+Ht5zzfDZqk5gNS96vMxboY1yEbX6N0T+GoukVh2dl19mCd8rrjVfz6LH\\n\",\n       \"JtDOjCVXh93vsJQ7kDHk+1g6vjdflLuuL7VrB3nA8CliDY1yKDLOfDeQoQql42+HWpWbe++9V1Ln\\n\",\n       \"azVv1bnvPOT40riCVKSSJEmSJEl6si4VKcA6AJ62eVrkHLU5MErHkWPELeAoFwfn7QtP4+RXGpIP\\n\",\n       \"Zx7wNN7Xp4dosM2bN0uSDjjgAEld/w3NWzQvXIH0DOh9rTz8F9bKz4bxPFShayVSMFvrvZ6UKIjq\\n\",\n       
\"5gxVolz9rAU1HwWllH/Iqd1tYN54PcAVwdrM7VBSY5nb0b1gKLVrgO+hiEJFxG2kNPo9E8ZSSH3t\\n\",\n       \"6+szhvJJ7r++0Ywl+taX9i/lGIRUpJIkSZIkSXqyLjObOzy9+o7ZtbvV1+6UHVkfvKd2q46n8b4K\\n\",\n       \"Be+FsX5QpFr3/5oXWE1DM7hjRVEf2nm9KA6uMnjWX6waPlvf5z/lKU+R1PVz1J5cD6s1ynaM/0Yp\\n\",\n       \"IgdF0Mvr1mmUtbgv+Of8IuI5t8ZuO2DuRap3ybeE6LfSLg/OWkVFYel7/inGvud9oj18TWQu1Coj\\n\",\n       \"pbmLgofvD9evvYeUqC0n48wzzh900EGS6n3fGD+o7JFiVYLxxl55KJetig/1QhmsVXhL+HgZ6tPG\\n\",\n       \"GpmZzZMkSZIkSebMLqFI8XTJnm+t78VrYWdurBEUsBNPPFGSdOutt84cP5ZihDVLNBtKQ60PkStY\\n\",\n       \"KCBDrSiexodG8jzwwAMz5xvLChkL70esN6wbrBMUw1r14alPfaqkTmmi/lGWZMZzaVy15oZx/PzU\\n\",\n       \"x3PQRP4kqBye7w3mrTTWlnMtGFuJcqKs8yVlBb9L1hT6ZOia6XmdgDWCtbmU6TzqMxQUFKgoMze0\\n\",\n       \"Kg++pjI3+aTdSnvf9cWj9lgjXClC+TnkkEMkdf3G8bX7bXIvoB3pJ+5x1DNqR9Ys8m+hNqNItSqe\\n\",\n       \"Dz/8sKSun4fu6sF4ZE1DUSQ6k3amX+ln2iPKYehvIUqsrztakiRJkiTJLsQuoUhB7Q7jZKNt5cAD\\n\",\n       \"D5TUWXFkl523zwfvu/vmJ/L3uGO9z8cKHOo3we/vuOOOwWVaC1CkPLsxPnklFQK/Daw+fsdnaW+7\\n\",\n       \"eSsskaKFdRfVD+sTH8Uo0sateSKOaE/qx7zyaNVSFmky5WOll3IDjQERwygFrfA75jptTSTrXXfd\\n\",\n       \"NbSIM9CmrGVY5CgCfYkUqQ0bNkjq+o63B9GuDZGlX+v3OhbMBcaS7/1WG7VVC+o2a2Lks4SKv3Hj\\n\",\n       \"RkndnCTDee2aHPnS0X8oVnyPQsO9iPoz/l3BaY3opt8ZH61QXvBxGClMPDuwdoytJKcilSRJkiRJ\\n\",\n       \"0pOFKVJPecpTHrPYfUdo37Wep0x8a/w9L0/Tvj9TK/hAYUVx/rGtkoi++YV4f+3ZeyO/ApQFlBcU\\n\",\n       \"LLcS+b7kh0A7odxw/ZKfhGf6xmr233mUZmQVl8DKQgnhOu7TgxXnPl20B+eh3rzvpx4oUrQn1k9r\\n\",\n       \"hEspo36JaCeAKKdMyUqr9bPB6oOf/OQnKx4XWeO+87pfr0VVoe/22WcfSV2da6Oe6Gv6uK/azRxl\\n\",\n       \"7qHa4dNRS20+INYS6jmWyuljhD464ogjZr4/+OCDJXXtj8JSC/XkE1p9l7i3MBeiNY4x536K5Dni\\n\",\n       \"uLvvvltS/zlZ+zvW0P/6r/+a+V2rkoMKjKLEWs4na8Pee+8tqVvTuB5KDvdk34vR12oUUNZCvqc9\\n\",\n       \"KUdt5DPji9+3ZlIHxn9p7aB8PHtk1F6SJEmSJMmcmSxiZ/XJZDJdWlpa8+smSZIkSZK0srS0pOl0\\n\",\n       \"uqJElYpUkiRJkiRJTxbmIzVPRYr3queee64k6ZJLLpHU+Tr1ze6KnwTvl/HV4b06deITXxHet/J+\\n\",\n       
\"mffPJb8F94fAXwO/ije84Q0z1wOipCLflFqIYuT9+Kte9SpJ0rvf/W5JO++b5Du0Q5SpnffP9Av+\\n\",\n       \"D7wHf+tb3yqpfax4HqiovWlfjnvb2942c73aKMra/czIToxP2oUXXiipG5/ui8Y4xq8GfwNUZOpD\\n\",\n       \"/1Bejwyi/S+44AJJ0qWXXiqp6zf8KPgd9fV6Ux6uX/LToR0/+MEPSupy4NDPtEPkcxW1P/3LJ/nf\\n\",\n       \"Tj31VF188cWSumg4Im/Ju+MRk5TJ8xZ5NBJtSJu/8Y1vlCRt2bJFUjeWvS6er8ep9XnyteWoo46a\\n\",\n       \"KSfRZn59/AEpB9F81Nd3i6B+p59+uiTpC1/4gqTOT5UxWvJ/BM7P2omfI9fHp+Z1r3udJOmjH/2o\\n\",\n       \"pJ1zreHDQ395dB1jAR821njaFd8X+unMM8+UtPPaQj9zXGnuR358/j33oosuukhSfQ7CUgZ4v6fQ\\n\",\n       \"33/9138tqVuriXZzv1TWLG9v/FWjiFj6AZ8t1rIrrrhi5u8cR30ZP76PqeNzn/6lX84++2xJ0t/8\\n\",\n       \"zd/M1DvK0eflbn0GKN2DUpFKkiRJkiTpyS6VR6oWtw54im19Cj3uuOMkdZEvWO48XfO0HO3QjnUT\\n\",\n       \"RUOh+Nx3330rfu9RcFiLHrnh0WRYX0MVKaxzV1q8fUtRfZH1hbLhvx+aVRhrp5QpPMpei/VeG63Y\\n\",\n       \"Wl73S4yu43mSsBJRVTwfFOPRI4M8+zDl9eNKqkjfrNhE+7XmN4usVcrNJ7l1pE4FRukgysqVKOYe\\n\",\n       \"UDbfs4trEHX3wx/+cOZ39CVzztuINYByucXcGsmJcnT44YdL6sbA7bffvuLxKEGe88tzeaHgefnI\\n\",\n       \"YF1SWyNoj2iM+1yIcpyhpEVjiH6KcpvVjr1apYi8WZQrmhvMWWBNRxmKyguR8kf/ucruOdlcgeJe\\n\",\n       \"RDtHUWml6EDGtUfxuZJEu7dGjfrcj6IdGT+l3HzQ+gwQ7XXppCKVJEmSJEnSk4UpUk984hOr9wxz\\n\",\n       \"Iusueq8bPc2W9qS7+eabJcWWcan8UZZViJQowCrwp3m34lx5qc2RU4KcG+4vwlM61nSpHUp+IGR9\\n\",\n       \"pr1arfSIvnsh1ipMrUpUZOVhpUZ7K+Lzdthhh0nqrMGtW7dKirNBY21iPTuMo74Z9SPcSsXPo7QH\\n\",\n       \"H+VFbUFpaskEjwV/3XXXSYrHJmOba7r/F5YrChBj3vu8lLsOZYw+5PyuHNTiPjwQ1TPKLA7sdcZc\\n\",\n       \"8T6qtfQjSlHhnheIdqb9UVJoX46nHVr3dYzeHtRCeVB5S/mxfE7RT6Vch5FfqUN7RW89uL6/zeD8\\n\",\n       \"0VpbWgv4nd+bGO+Um3LNe7eGee15WXveVKSSJEmSJEl6sjBF6pd+6Zd6K1KRBd6651bJB2aohd43\\n\",\n       \"RxfWZu3O5vhneHbeodA/WOWA9V7K0ks0G9ZPpMhgnWEttu7fBCWfs4ihViqgcGLF1PZDSZHC163k\\n\",\n       \"80ZUIO2H71xktdOvjLfI927oXotYpyXli/mCvw7WNspbxPLs4CgGjF36gjJEfnlY2L6vImMXBcst\\n\",\n       \"+JLvEG2KKjvU/w+oB+fvq74CfexrlmeoHhsfW96enokbX7VjjjlGkrR9+3ZJZeWNMe4+L6yZtJ+3\\n\",\n       
\"oytDlMOjBh0/L9C++J5F1PZnaW4y1yiP+/71heg3vxfzloh+5N7EfKy9p40N0bsonYyDvnv+OalI\\n\",\n       \"JUmSJEmS9GRhilSLaoIli8WPdTfUF2joU/nYnHbaaZI636xaPwrag/bBShhq/eL/4YoN77s9WtD9\\n\",\n       \"KShHSZnjfIyJ2kgJByvP/VZKqoErJFjhtGtJ0YHayCDOD/fcc0/V7yLod/qbcqNwRf4JUXmJ5Dr5\\n\",\n       \"5JMlSTfeeKOkstUPWPFAf0Z5xpwvfelLVdeB5co2ljJtggVayi9Dmfw4xjQqn/u2lCI3PdJ3qC8H\\n\",\n       \"ihjnRaXsq+5DpNR5DrOIWp8ex9fwkk8WUZMoCe6PF0H9/Hj6Naqf1wf1E7/OHTt2rHo9V5lZM1pV\\n\",\n       \"a8fzc0XKCv1JPcbyVYrWDq7HOKW+HkU7dLz25fjjj5fUzddUpJIkSZIkSRbMLpFHCmuRp3u3eMcG\\n\",\n       \"i94zSM8bnuKxAiNFyhUirDism7EiGLAuPEKI9i9FztT6iHlm7r7+GGMrjKV+R3HzXDEl3Orv69+C\\n\",\n       \"P8/mzZslSffee6+krh1oR5TDVrDWUPhqcT8i6juW756zXA1izKHU1KqEkaLC2Iyy46NMoJTU5kty\\n\",\n       \"6COuF409Lw/l7asIAeVm7YNaP9Fav8mhoC73zWvlimLrmGQu0N74ALmSRrv59VCSht7D6K/99ttP\\n\",\n       \"0s651RzGH2v50Ejd6B7jUbDUn/HMdfsqUrUKpMPbipJvWl9SkUqSJEmSJOnJLqFIOUMjVEqMlceo\\n\",\n       \"FXK6lHyEPFM1VgDWJO0z1EcKq8mj6LBuxuoHt+L7RjuORWtkydCoP/ezweoqWW2MA46j/z26r1Ul\\n\",\n       \"QPnFd2uoH0Gr1ct4rh0Hy4+rzcRNLjqUIOZc5PMStSF5hEp9xZiKIiE9uipSpPgeFZQ5iDISRTSX\\n\",\n       \"QL0k3xXUriHrzd/Uod1q13aUIxQrj7ZD6Ywyg4P3I2spanIps3kJV52jfsBXiXFKO5Tmtvs3uj+r\\n\",\n       \"K2ucn+85P+NoqCo977dRfVmfpUqSJEmSJNkF2CUVqbEgAqPv+/axQUnYuHHjqse5lePvoceKiMCK\\n\",\n       \"c8UF62Is5Yhyr1drIwKrcq+99pLURYi1+tShSHm+pVIEE+MFqzRSI1oVIcpRspYj/xCn1QptHVfL\\n\",\n       \"/aBq5zJ+fah41AElwlXJSHWsnWvMpaitsPyJbrrrrrtWPI41i5xh+Krwu76KVFS/eWWMHkqtagvR\\n\",\n       \"vqGRqo7ay/GuZKFERf6cKEQ+juifsdZo5ihzkWhVQImiPKw1XJ/xHkXA87toTfP243zML8b7WPmj\\n\",\n       \"5v02qi+71p0rSZIkSZJkHfG4VKR4aubpvK8ihdXSut9TBFaCZ6h2i979FrBSyYEzlo8X53UfKf49\\n\",\n       \"lpWBdYkiNa/orrFxvxbGVasixe+x/vh3SfkrReqAR2KNBeOspA6U/EiGUtq3bCVoM889FikJQ3c5\\n\",\n       \"cL9HV6ZYg0pqHIoDc5C2bY2sdFBX5zVW+hJFI9Ln5DyjHdjdonYN9HxjwPmi85Qii2lH7xf+7hHR\\n\",\n       \"Q9X9kirM9Rhn3PtKc4fy1a5pHlXHWj7Wfp5j7UIxNqlIJUmSJEmS9ORxqUjxdN7XnwBQosby7eGp\\n\",\n       
\"H5+P2vfoXJ+nfX8P3jeihsggt8rcp2noe+vaXD/rDcqNNdjXWqJ/fB+osXzQPJP6WNQqh/P2a1ie\\n\",\n       \"x6t1THKcq4vRcX2p9SssrUn0JWOOOY6ShgLSmouNepfUQ/fHHIrPGT8/a5mr/rQD/omUm3oQAe14\\n\",\n       \"/aK3CaVM+CXoR4+wpj74JEXjYax2phy0C2sM48XL57T6yHE+rx/1pj59fe/GutdyHsbDUKUsFakk\\n\",\n       \"SZIkSZKePC4VKfwLsN4OOuggSV3+HLIU14J/wViUdhZ3UEawmtnpmvr0VaTIrO5WG+/513vumLUC\\n\",\n       \"VaCvauHti7VGO7f6vLkqQ0SPU8o9Mxb42rkfylgsVxm8D2ojC2nzyDIdGmXVN1u/Q18x51G6aNu+\\n\",\n       \"18FXDP/MiLF3edh///1n/s1eif/93/8taee5wdjed999Z/798MMPSyrX3xWgee9a4bsK1O6xR7lQ\\n\",\n       \"pyMFp3aNiCKvuQdGEezRmnbEEUdIkm677baZv0eKEQoin30Vqb5rLNf1fGIoaL4nYiupSCVJkiRJ\\n\",\n       \"kvTkcaVIeQZprBesnr4ZnBe1kzXg8+X7GA2NqsMqdMgKjXI39D0zuU54n9/3PThWB1YLVgc5dqL6\\n\",\n       \"9AW1w8cTimDJDwLcKqYfOX+tknPSSSdJ6hTX++67T1IXyeRgjbUqUj6PovNC5HcURfKU/JywornO\\n\",\n       \"YYcd9th39DVt3lo39/8by4eillIU1/bt2yV1cxxfoLEiaFvXsqG+PO7LVFKUGBPsK8n110suQMfL\\n\",\n       \"Rblrx1OpX0vnYY7svffekrr2ZXxzz4uu43+nvVEOPaLb/V2Z41wfZbivDxpKZCsog5SXe8FQJQpS\\n\",\n       \"kUqSJEmSJOnJulakUCp4CiXbL5EaKCK1VpRb0Dwdc53IGuJpOsr03Ncaas3OC67Y+Pt2lAjHc9B4\\n\",\n       \"bpFasDIoB0/77ofC+3vf5wnlCQXJ+6Xve3D8CVBy8PtozfNV8kugvviT4J+BdYhvWW3U3aZNmyR1\\n\",\n       \"1qP7PVAO1BUUGcbPMcccI0k64YQTJEl33nmnpK6/o6jIyCpEcUQdac3r5e1NO3Be/DH22GMPSZ3i\\n\",\n       \"dvfdd1ddj9+vtN8XawNjjb3wSjBmaFvmBv+mb13VJI8Rljp97wqNq2yR6uaZpH2ucx0iFV1xc/Wx\\n\",\n       \"lFEdDjzwQEk7r0XPec5zJEkPPvigpG6sMzYjJcr3gaSvXEFx/z3WepQC2p9P2sMjUX0t43vOx/W9\\n\",\n       \"flzfM8XTjlyP3Sb4PXPH7yEedeljmbnNGsM9J1KEmDuRUskcj5QV6sN1uC7nY/zUKpq+1vv4ZPwD\\n\",\n       \"44B23m+//SR148DzfkUKMuO+79sfyluKEO+b1ysVqSRJkiRJkp5MxspV03TRyWS6tLS05tdNkiRJ\\n\",\n       \"kiRpZWlpSdPpdMVEa6lIJUmSJEmS9GRhPlKrKVL48PA+199v4x9A/iaiozzq7p3vfKck6QMf+MDM\\n\",\n       \"790XivNxncinBn8Ifw/sdeKT9+H4gvzkJz+ZuQ7w3tl9wRy+p55vectbJEkXX3yxpO69c18fo1Je\\n\",\n       \"Ier1nve8Z9Xj8PmJ6lGLt2fEKaecMlOerVu3znzPe3rPKtx6PfwoeM/uai6+Y7R/pPYyrs8+++xV\\n\",\n       
\"r4dP0dAsy0Syve51r5u5nu8rV1Kna3368DO48MILZ65XSykSzP0YmGfnnXfeY3N99913l9TNOeb0\\n\",\n       \"WAo8dbrkkksklX1M8BHZsGGDpM6P0dcifF722WcfSdLtt98uSbrgggskSe9617skdWsKa8L9998v\\n\",\n       \"qasveHQV39MO+PacfPLJkjofo9/93d+VtPPaSc69o48+WpL07W9/W1IXfcfcP/zww2f+fsMNN8xc\\n\",\n       \"j3oypl7xildIisfKAQccIKnzBXJfJCI3o2gsroev1Wte8xpJ0sc+9jFJnZ+m+23Sry960YskSddf\\n\",\n       \"f72kne8R9IPvZcdxZ5555qr1G4rvmXf++efP9XoO17nssssk1fulshbiIxWtLYx31oQzzjhDkvSp\\n\",\n       \"T31KUjeu+d73oozuVZFPFO3JPHzlK1+5aj1SkUqSJEmSJOnJuozaQ7GJoneIxvI96Uo5NaLsr1hX\\n\",\n       \"3/rWt1b9PVZGpEgBT8NYynxG9eFpOVJw+D3Rgx7l5rk8sPKwjohyQoFwqxV46o+sPi9vxFAlKso7\\n\",\n       \"FIFis2PHjhW/p30iJaqWUp6x2twwtZnKh+4FCVHEVmuOpdqImZLqU8rGXMpJhLqDmrP8erQZcwVl\\n\",\n       \"aqy2dGqjneiDW2+9ddXjXO10dZkxxphnbEeRt/yd83nfUC6OcyXH282P8zxQrC2sOShRfj7KgVJU\\n\",\n       \"gnozdhz6mzxNDmPKM3zTf6ypfKJi8/1Xv/rVVcvnkdIoMr42R3g0YhQxTD1pD9Y06ueZ6Wkv2pvy\\n\",\n       \"+D0EJZHxgLLnOeN8jfPM7Sg4d9xxx4r1Yzz79binR3B9Xxv4XbQ2lda4aK2inrVrdSpSSZIkSZIk\\n\",\n       \"PVmYIvUbv/Ebj1kHPL235qup3a+nZI3y3r60czrvU0u5JlxR8/fXrfBUjDXoma4pL9aD5/bgd9Ge\\n\",\n       \"a+Te8f2XIp+yEvj2cD36CWsiyrQNrZnNsRojH57S9daaWmWnpHYwf/BbiZTAsfZ5KxFlPMfKxhoe\\n\",\n       \"qg65H8tyK5Ux4BYx4AtEfiR8j9Yb+Da50kSfozRgkUeWd0klRSH4wQ9+IKk8NunjaO1lzOLbBa7M\\n\",\n       \"4CtWCwpYlE0fv8joHsLvvNzRrgIlSvcK1uxaRYN+iNr1rLPOkiTm3A+kAAAeOklEQVQ94xnPkCS9\\n\",\n       \"7W1vW/E4b2fKidLp44G5xJ6HKESMg/+/vXOL1bQq7/h/hWq0SEAOHcYZyIzhEMdUFDKjiSFgAqYa\\n\",\n       \"g/SCWhsSLZaYCEqMMVQTy45cWE1UqBfERhq1UqrB4DGkggEsFzCAjAOMQCfM6ECYmQJqJNEEm9WL\\n\",\n       \"/f3m2/vZe+31vus77b3n/7vZ+zu9h3V61/Nfz/Ms2l/8HUpifLaVfKO4rtJekTVKx6UfxHbbdVWj\\n\",\n       \"NGZRH139ja1IGWOMMcY0MjNF6rWvfe0Sz3h2ksZK2LFjh6ShsvHggw9KKs9OW7OSEtWEr9R99923\\n\",\n       \"7PeYTXc9fsyay+/7+qZgJZVm71xPLXtxzPqL9cism8/junqkZo3hQ9YabdbVioOu2atLYJW0cvXV\\n\",\n       \"V0saZhbndUl5Kfl59OXSSy+VNFQvRvVNG5VS+8SPgr+jKlJE+sQoSalsocLZZ58tSbrsssskSQ88\\n\",\n       
\"8IAk6cc//nHTtZT2C2yFMRHfLqLxgPPUxoS+0Jdj3UR/NtTwksVf8m/ldcyM3XX/y9p9onDgm1Xy\\n\",\n       \"GYsKQ1SkKN/SM4YxnSi9kvLZ+iwqgQJUi+SN/q+xXXL/tDOeAXfccUen8wPPgFhO7DYSidfR+myI\\n\",\n       \"7a7UDrs+Q2rPYsbWGlakjDHGGGMamZkidfjw4SPWALP3aB0Q8XHhhRdKGu4pds899yx7zNbZP7Pj\\n\",\n       \"p556asXv9d0pmlk8ESC1WXhp/y2sh7iXXFewHkqKVYxmq/mq1Wb7s8iWPwqjWvVYW29729sk1RWX\\n\",\n       \"vopbCfxCukY+zQrKB+uX9ltTUGvHa1H2UKB+9KMfSRpdxaNPtt5LhL5e8/Ho6h/aFRSuGIUF3Cf+\\n\",\n       \"pF2j0SL8blz+o8B1cZ1d9w+lnOmTtYhSIptrYwZjYN8I5BKf+9znJEn33nuvpO7+xH33GY1Qrigz\\n\",\n       \"KIjsudiVWM41Sr5Lsc/Hz+MztDbW1JTkrsqZFSljjDHGmEZmpkj1seBKChTUfHZQhEqzc2bZtWgx\\n\",\n       \"ZtNdMzwTLYb1ULvnUoQA5+XzGFmCXwBEa6wWtUX58D1m6aV1/q55fkr3y07qNQVwtUM9/PSnP5Uk\\n\",\n       \"3XTTTVM9PxFb41K4JgXth37QNfdSiaigLlRHaHvR/w/I44RfXa1PMrZEVQ1qfaEvKE21XHW0PSz3\\n\",\n       \"2q72NRgz2L0BaFsoPaMqHCgItIm+6noERWv79u2ShmN5a844fJDIGRfHzlq9RFB0RoVnTsl/t0RU\\n\",\n       \"AHnGdW23qOuUQy06rrT7B0pSVyWVdlFSnKC0esMzEbW+VXl2HiljjDHGmAmzKjOb96WmuNRmlczW\\n\",\n       \"u/pcdM0DREQAyk4rzLqxnuOsHGsR64DvdY0kImoRKw4rYNOmTZL652GK5Yj1fNFFF0ka+hlQ7jE7\\n\",\n       \"8lqB+5y1stY371br8Vv3cKT91yLc8MPAT6fkhxF/vzDypmv+l66KeG1sGVVdK1Hz8Yn+ZqMSc9VF\\n\",\n       \"KGPqsFUBY+xEkRo1YpbrQUmCGE1XWkWIygxRiSXlpO99jyuaE2WLvtjVRwmFhrG2tQ93feZFRYo+\\n\",\n       \"TT139V1DSeI4lGPN/zbulxtzLval69hqRcoYY4wxppGZKVKvetWrJmbNxXXb2nnwzC/lwIh0zR3D\\n\",\n       \"9/pmbC/BLDtaS8zSWc8uXRc7pGO9kaMGxaCU3bVv5AlWG4oNPlHkXhmXUjdr+uYDmxS19j1q5BAK\\n\",\n       \"EVZi34gd2lXNGo6RV/ip1PwqFlqp0YIcNZ8PljRl3NUyH5WulvS4FA8oRaPRl2lLo47dHGfUCF+u\\n\",\n       \"Y9euXZKGCh0qOz5TvB8jauP5ua5x5YHqmim9Bu2OXShQamqrLSUfolbfPvpoaQznfLxP++S85FSs\\n\",\n       \"7VvKfZ188smShvfbddWo79hcqu+uPl1WpIwxxhhjGpmZIpVSWpJhe1z0tRr75oDhukv+E1ixREy0\\n\",\n       \"rtPGWTLnjdfL/dYiavAb4HhYF6X7b80+i1XC8bHy2M+rlEXZtFErx1Fz9aDytOYuIiq05h9BJN2W\\n\",\n       \"LVskdVfSFkbtRfWXNogF3nffQa4FSpmsx01XFbtm2Y8LMq2jMKA6d41gjowrvxKgrlNfMc9VySeL\\n\",\n       
\"92P+LhSsUaMUY0R1Kyg5XB/PgpoiVVq96FtvlA/9iXYQ2ynPcr5P+XG+0047bdHvSv1x7969y74/\\n\",\n       \"LoUvQvnyrOY8vF/DipQxxhhjTCMzU6T+8Ic/HPG5YN0zzv5RZPD853utOUJKMAtltnzgwIEVv1+z\\n\",\n       \"zPHxGDVjNlYR1hVRdKX1X6yN2vo+uXGwGkqRKDHioy+xvoj+wwrpGsFhRmNUXy4Uy1Zlq28WbNpb\\n\",\n       \"V/+UhX5R0feDe29tw9NSoCIoQHHvtEkTFZQYEYziQN9GGem76wNtigjeUYltJkYzMtYwxgPKSvQ1\\n\",\n       \"w8dqVEWqb8RzCcqZtt7VVyhG3Lb6+NGvyF1HeUVlMe51yXmpF575rT5ak/KrjmMU99d1rmFFyhhj\\n\",\n       \"jDGmkRUVqZTSaZK+KekvJGVJ/5pz/peU0pykf5DEdPvTOec7Br/5lKQrJP2fpI/lnH+y3LGPO+64\\n\",\n       \"I9YO69bMAlEsTjnlFElDq4j1YGa1UUlh/bbV9wYrsKZI1SxlFCGspNYIEKyhmIMjKl1xll46D7Nu\\n\",\n       \"ypMoOq4zRi2OOvvHisKKMZOhtuP9uGi1Ikt+PCU/Ddpd1/a3UE2IPhcln6nV6p+Hjw/XOW1FKrYh\\n\",\n       \"VMi4Fx2KT6vCgVKI3+SoMOaVMtBDzaeoa6RoV0ZVtIDy4pnYVR2OOQJjxHltb8EIfZZIXo4PjPkx\\n\",\n       \"WpB+SM691t0YRl3lKVGKfu36zK4t7b0s6eM5510ppddIejildKfmJ1Vfyjl/aeGXU0rbJL1P0jZJ\\n\",\n       \"myTdlVI6K+fclgXMGGOMMWYVs+JEKud8UNLBwf8vpZR+qfkJkiQtlwTovZJuzTm/LGl/SmmvpB2S\\n\",\n       \"7o9fPP744zvno8Fzntl9ydO/1cpEqcEXqwTWQG3dm1k+s9yueadKMPtHecDHCbpGHWIVkFsFq6Cm\\n\",\n       \"wEW65iLZt29fr+NOmq7115eYs2ba4GcyK5+zWvsutZOSmtE3e3QX6xZfEfLSoBKPWmbkSONesJi5\\n\",\n       \"h1of4TrOOeccSUP1GUWKvQFnBRGXRPAy9rZmyI6MqnpTr2TUJs/S7t27l/1+10jk2nWNK89UV558\\n\",\n       \"8klJw7Gra7Qmqw8oiKy6RJ8rfAFLOwOcfvrpkoZjDT5x8VlUUlC7Knwcj+urRbxzf6MqiKXy5Nlb\\n\",\n       \"o7OPVEppi6S3aDgp+mhK6RcppZtTSngovk7SwtnRMxpOvIwxxhhj1hWdovYGy3q3SbpmoEzdJOmz\\n\",\n       \"g4+vl/RFSR8q/HzZKXsXNQqFCUWK2X/JAi35TtXAmnzkkUcWvY+l3Rp1hDXHcVjn7muFxfXwuC7d\\n\",\n       \"l9adsAHfqlKuDxiX1dpKVKDGrUShbpApnnY3rkz2Xakpf6PuQE99syci1iJqS01pRWVBrcHKwxru\\n\",\n       \"m9updHxp2Fdi5CzXSL6hcUGdcx7Gpq7qM2Na7Et999DDd4UyjWNkbb9EFJ04NsXdDXjNmBajsmgb\\n\",\n       \"1Ekpmo96og+1goJB+Y+aZyhmNo+KYvTrpbxow9Q7Y3TMQ9R3dSJmAu/rc8UzJ+bPYixkbEDJozw5\\n\",\n       \"D88+fs/qB/Uan+OlsS8qXYzNKE4xQp9yrilSnI8xJWZcp13TT0tzh5JyzJyiRnUilVJ6haTvSvpW\\n\",\n       
\"zvl7kpRzPrzg869J+uHg5bOSFsaXbh68Z4wxxhizZmBCeffdd6/4vbTSGm+an9Z9Q9ILOeePL3h/\\n\",\n       \"Y875ucH/H5e0Pef8dwNn8//QvF/UJkl3STojh5OklPLc3Fz/uzLGGGOMmTJzc3PKOS+7QWxNkXq7\\n\",\n       \"pMsl7U4pse71aUnvTym9WfPLdvskfViScs57UkrfkbRH0p8kfSROoowxxhhj1gsrKlITO2lK+YQT\\n\",\n       \"TjgSMUM0Wswfw3op66SsX8f9huJeYKz3Xn755ZLmZ5ILfxejhVinL63nx/XXEpzn85///IrfjxEf\\n\",\n       \"XX27+B33fe211y4676ThPPztu5M46+1dfYji+cZ9/Nr5aEcxfxdRmdx/yceHeopZl/GT+MxnPrPo\\n\",\n       \"fJOCdnPddddN5XzAeb7yla9IGvZj/EfwgYz9BH8M/Bvo1zHDOuMD8vtVV12l66+/XlJ7hGxX+rZN\\n\",\n       \"iP5ifc/32c/Ou6biQ4IPzeHD894WNf9L2m4t0zXnm3Z53nDDDZKWRlOW+hJjOr49+MDwPm2F6+dZ\\n\",\n       \"cc011yw6L2M85RMjwN/znvdIGkalPfzww4s+ZwzHt43rp21ynhtvvFHScKzn+uL+rdQPz0aeUeSD\\n\",\n       \"ok/Qt7kv7pPzffWrX5U09NelnVBOnI92+cY3vlHSsJx37dolqTy2Uu6f/OQnF52371jc9ft9xzLq\\n\",\n       \"hftszV81Nze3oiLlzObGGGOMMY3MbK+9hUoYlma0enh/27ZtkoaRC1i2pX20StFkJeurtk8UljFR\\n\",\n       \"S1gtKDFR1ePz2p540DXKkN+VjosVhvUSI3SwRsZF30zXk45mG/fxS5GlXSOzSurApK37yKxX10vt\\n\",\n       \"vVRfqDVY2aVIJZSthblepl22fRk1MzN9muP0PR5jRNe8O9Muz9LYVupLjOmjZqrn+KXz80wpRTxz\\n\",\n       \"/tpYXrqPUi5AlK1SvrOY0b30+1qGfNrRAw88sOznpb5aUuPj6k+tHXUdu+NYUouC7PpsHTUflRUp\\n\",\n       \"Y4wxxphGZqZILZyZx9kkO3ST4+Liiy+WNFRUmDWXcsKMmkskghIFMXdKJO5H1TXzeImuWXSZ/XM+\\n\",\n       \"rAWyL7/73e+WJN13332ShhY9e+zFvfaOdmLunKjwTctaX+37w9XA1wxrH7+QkhWKv0QtZw7WI8rV\\n\",\n       \"cuB7EpWAHTt2SJJ27ty54jnWG2SmruWAg1L5Ab43tNGYx6qmhMQcedPOwYYSgbpeUtlrufdi3jLu\\n\",\n       \"K+YgxKeIvsDrUTPso8rGeqqtjkyKUXPDdWXUMZi5As9IfML6YkXKGGOMMaaRmSlSp5xyStEHhdk8\\n\",\n       \"s3t8opjlnn322ZLKVlXfWSr7D0HNiqrteM7nk7Kuajt/R2sAy/3pp5+WJJ1//vmShlYQCtV6pXX9\\n\",\n       \"m3aEtTfqfk6lqNFaxAoKLaw1ZYryI5sxVnJJcerabyjHlTLol5SU7du3SxoqKqjco6rHNWoZxvtS\\n\",\n       \"ajsxQzfn7etTVVPB6SNER7EnG5mz+bzknznt3Q8oD0A5Y4yAmv9njPSO5crvY+b2qFDVlCiUPtpp\\n\",\n       \"bV/UWF+cr6Ys1uA4XX2eJkWspxq1iHiiFLtmMC9hRcoYY4wxppGZKVIr+TGRK4PZ8/79+yUNZ6Pn\\n\",\n       
\"nnvuWK+F2Sg5WWqKVA2sjHFFS8Xj9I2WQ8H42c9+Jkl69NFHJfXfk3CtghXaN+9VzPs0KWoKDPW0\\n\",\n       \"1pQoQFHmL/2s775jEcqtJTfMvffeK2noozKq2lgCZegTn/iEJOmSSy6RNMx/c+edd/Y6Hj5OtImS\\n\",\n       \"7wsqO0oJY9LBgwd7na/WNlG/d+/evegv1PYpjXU/Lr/SEnEsjVFvK/nbLaQW6Q2tkdMoWai59JmS\\n\",\n       \"IlVqB+OK1J6271oJlLWu1BRP8meN6kNmRcoYY4wxppGZKVIrWaGldVysxtpu930hP9W41utnnbcn\\n\",\n       \"Eq27qERt2rRJkvTss+tzf+nWCBJ8esigXYvcqVHzrSux2tpTX+KOBZQDak2rIkU7JvtzHx577LGm\\n\",\n       \"c/aFe/3hD+f3dUcJu//++5uOh1pfo+ZLMy2iT1KNSfuoRVCgJhVlFtXvrmp4VLxWS33Omr7RjTWf\\n\",\n       \"sEOHDklqG0MWYkXKGGOMMaaRVZHZvC9EhIwrAgbfjVGzDq9V3vSmN0kaWviT8heZNV3bC34dtAd8\\n\",\n       \"6PCnmVaOFHj9618vSTrzzDMlDVWNtQJWOAoUVmL0u2jtz30jeWbBnj17pnq+1ZJ7rJTfaLVA25lU\\n\",\n       \"OdHmjzaIzG3d267EuKM8UXjxzeuaszFiRcoYY4wxppGZTZdb/UWkoU/FuGanrdlf2YNv3HvYTQt8\\n\",\n       \"o7AaukasrFWwjmuKEhEy+M5htYxqXZILppaxO4JCRj6ptaZIRT+Zkt9M3/7cN4LnaGLaChB1yl/q\\n\",\n       \"BmUM36c47tdy4k2aSY95jCHTYrWM4eNWolqp5ejjmc9fvt+3/6yOUjfGGGOMWYPMTJE6/vjjj3jM\\n\",\n       \"l8BawVJFiSL3w7jWtVt9XlAY1poiheLCLBz/jfXuI9bVSsJq5vvjyqGCj1XfciavGT5SKGvT3j+r\\n\",\n       \"FcoRtQIlN1qDXaEcY9ZuM2RcbbZrri+UJ8Zm+hB1U1qB4PvrlVFWXlpgdcXM07X8mWu05gz0CGSM\\n\",\n       \"McYY08jMFKkuM+doaWIVTSqqrO9+Ql13UF9tTHvdflrUIkXIFULUZwmUzhhFNmo+JxSUvr5Av/rV\\n\",\n       \"ryQN74/9oR566KGRrmdaELUXy7HVNzEqyGsham+tQjQT+Xt4TZ0xRuI/SJ9hjK7V7azVxNWSsbtE\\n\",\n       \"3z3yVmt05KSoRdnRDtkRgNUjnvEoouQMfPzxx5uuw4qUMcYYY0wjM1Okav5R0lBZwBcJa4jZJHmP\\n\",\n       \"uloVpfV+LFoyWD/zzDOdjgd9s/f2BWtv2uvtaw18Z6hnrOGzzjpLkvSGN7xBkvT9739/xeNMKuKk\\n\",\n       \"VUml3vFt27Ztm6S1o0jF/dN+/etfS1qqZvRt39TzpPdCHCe13egnTd+97GIWeiz7qDTFMZXXNUWq\\n\",\n       \"ps6iIvfNaF0iroTgbzit3HmbN2+W1P0ZwzMJJaXvPqvrna6rBDzj47Oadk1/aF11sCJljDHGGNPI\\n\",\n       \"zBSpLioSs0N8l7BesB76KkHRamKHbaKI8J3pOyvFSpsU41Kiajk11gv4XWB942fQur/ZqKCYtEbU\\n\",\n       \"YE1xP+ecc44k6dxzz5Uk/fznPx/1EifKli1bJC1tx/hz1PJqcf/0S1SOt771rZLWhs8f9zBqLjLG\\n\",\n       
\"rLgXW1f67mUXI0yffPLJRa9RWKLSRV3XFKeozBE9hfIybrU/qs3T3sWhr3/ket3/tJVWn7qSPzNK\\n\",\n       \"56hjqBUpY4wxxphGZqZIbdy4sWqJYp3gI4X1gK8IFu3BgwebrgFLlr32WtdHo9XQNffKpIkK1FpV\\n\",\n       \"orr6dWBt4ldB9N1TTz0lqX9EC+0MK6hVBaAdtLYH+sm3v/1tScP7G/e+U5MCJY5+jDrD9dNO8XGj\\n\",\n       \"Hrlvfo86wu9QdxaOI0Th8N0NGzZIGiokKB20Ker2wIEDne6Fa+0Kx0f1ZqxphTbI2Ng1ErWVkood\\n\",\n       \"fUvoa7FuKG/GxPh669ati44bVcuoWJ166qmShvdPW+LZwPtPP/10p/ur+XDh08b991WwYub2vjkL\\n\",\n       \"x7Wf7LSIfZsxj7GUdtr3fshTxhgybkYtZytSxhhjjDGNpFFz4zSdNKU8Nzc39fMaY4wxxvRlbm5O\\n\",\n       \"OedlnfasSBljjDHGNDIzH6nbbrttiQ8EuTJ4n2gfwI8BFY31Uvwi8EPAj+CKK66QJF1//fWShhEv\\n\",\n       \"ZDFlXXT37t2LznPGGWdIGvrUxJwf+CWwzo+/xQc/+EFJ0s0337zoeiM33nijJOmee+6RJN1+++2L\\n\",\n       \"Pr/44oslDf0PYrTZpk2bJElXXnmlJOnLX/6ypKGfAlFq+DfgV8A6f/QJ4/pZz6YeWJfGT4D7M8YY\\n\",\n       \"Y8w8VqSMMcYYYxqZmSL1/PPPV6Pt9u/fv+Ln5IAg1wjKEcoMoLSQTZ3vlzJY1/bQi1l2Y/RhLQLo\\n\",\n       \"lltukSTt3Llz2c/PO+88SdKtt9667Ocxt0vMdouSxl8iTUpZcYmoiPVBOfbdC80YY4w5WrAiZYwx\\n\",\n       \"xhjTyMwUqT/+8Y9HfIzIOdKa56i2bxW5LVBk8Knqm9OjRMwtg89XSfEqKVH4fHFd+CpF4nXX9oar\\n\",\n       \"RWbGfD4x39Kk9p4zxhhj1jpWpIwxxhhjGpmZIvXSSy+NnPG5KzEbLQrWuBSpmL22dW88fK2+/vWv\\n\",\n       \"Syr7JsXd7mvn4/ul7/E+UYz4kM0ix5gxxhizlrAiZYwxxhjTyMwUqeXUDvbaGlUpwkep9j6vUWD6\\n\",\n       \"Rqfxe3ywYNR9kcgfVdqzr6svGddXU6yoi7hvlTHGGGNWxoqUMcYYY0wjM1OkjjvuuCX5mFqj9iIx\\n\",\n       \"ygwfJpQnIHM3ChjRal19g4j+iz5Lkb47S5NJHIWs1YeM+6j9nvxaJ510UtN5jDHGmKMVK1LGGGOM\\n\",\n       \"MY3MTJE66aSTlihS0ZcHJYc99EqZ0FGGUJxiHiTyR6EcsRddVJJQZPAR4i970eG7BGQY5/wl8FWK\\n\",\n       \"11WC45YykXcFJYo8XTVfKftGGWOMMf2wImWMMcYY08jMFKljjz32iDJUUmrw8akpJSeffLKkoQLz\\n\",\n       \"4osvLvs9Pn/1q18taWlG8qhAnX766Ys+f+KJJxa9xhcrXl/Mw9RViYKaElWKSozgk4XP1QsvvLDi\\n\",\n       \"972nnjHGGNMPK1LGGGOMMY3MTJF67rnnjkTT1RQpMn6X4HP2qqtF//3ud7+TJJ144omL3sdnCl8p\\n\",\n       \"8iqVFC7Alyte96SImdS7QvmgPI3qg2WMMcYc7ViRMsYYY4xpZGaK1G9/+9slikjffEuAYoQSVYtO\\n\",\n       
\"I28UmdTxlYpK1v79+xcdv0QtT1NUykqKGb5M+FyVvtc38ztKG8ffu3dvr98bY4wxZnmsSBljjDHG\\n\",\n       \"NDIzRWo5/5xR96jrqtSg+Dz//POShlF6v/nNbxZ9ji9VjdJ1kzmd/FN8Dx8n8jtxfpQxrmNcHD58\\n\",\n       \"WNLSjO/GGGOMGQ0rUsYYY4wxjcxMkRqF6PNDlByKT1dFhwziKFD8vm/eJ5Sn+Doel/xVGzZskDRU\\n\",\n       \"vIgOjJnTxwU+Wpyf+4v3ecIJJ0jSkozzxhhjjFkeK1LGGGOMMY2sKkUKRQSfoUOHDkla6ttDlBwZ\\n\",\n       \"zfE1wucpKkTx+OyNx+/x13r88cc7XefGjRslDaP+Nm/evOjzkm8VyhnRgNBVQeuaJytChnYyyeOj\\n\",\n       \"hVJ26NAhHXPMMUfux4rUbNm3b5+2bt0668swA1wfqwfXxerC9TGPFSkzspO/GS9xom1mi+tj9eC6\\n\",\n       \"WF24PuaZmSJ1wQUX6B3veMdUzjU3N9fpe5dddtlUzzcuRj3f3XffPbW6MMYYY9YTVqSMMcYYYxpJ\\n\",\n       \"k94XbtmTpjT9kxpjjDHGNJJzTsu9P5OJlDHGGGPMesBLe8YYY4wxjXgiZYwxxhjTyNQnUimlv0op\\n\",\n       \"PZFS+p+U0rXTPr+RUkr7U0q7U0qPpJR2Dt47MaV0Z0rpqZTST1JKJ8z6OtcjKaV/SykdSik9uuC9\\n\",\n       \"YtmnlD416CtPpJTeOZurXr8U6mMupfTMoH88klJ614LPXB8TJKV0Wkrp7pTS4ymlx1JKHxu87z4y\\n\",\n       \"ZVaoC/ePwFR9pFJKx0h6UtJFkp6V9KCk9+ecfzm1izBKKe2TdF7O+cUF731B0vM55y8MJrivzTn/\\n\",\n       \"48wucp2SUjpf0kuSvplz/svBe8uWfUppm6T/kLRd0iZJd0k6K+fsxF9jolAf10n6fc75S+G7ro8J\\n\",\n       \"k1I6VdKpOeddKaXXSHpY0qWS/l7uI1Nlhbr4G7l/LGLaitQOSXtzzvtzzi9L+k9J753yNZh5YvTB\\n\",\n       \"JZK+Mfj/G5rvMGbM5Jz/W1JMZV8q+/dKujXn/HLOeb+kvZrvQ2ZMFOpDWto/JNfHxMk5H8w57xr8\\n\",\n       \"/5KkX2r+oew+MmVWqAvJ/WMR055IbZJ0YMHrZzSsGDM9sqS7UkoPpZSuHLy3Ied8aPD/IUkbZnNp\\n\",\n       \"RyWlsn+d5vsIuL9Mj4+mlH6RUrp5wTKS62OKpJS2SHqLpAfkPjJTFtTF/YO33D8WMO2JlHMtrA7e\\n\",\n       \"nnN+i6R3SbpqsLxxhDy/3uu6mgEdyt71MnlukrRV0pslPSfpiyt81/UxAQZLSd+VdE3O+fcLP3Mf\\n\",\n       \"mS6DurhN83Xxktw/ljDtidSzkk5b8Po0LZ7BmimQc35u8Pd/Jd2uefn10GBNXCmljZIOz+4KjzpK\\n\",\n       \"ZR/7y+bBe2aC5JwP5wGSvqbh8oTrYwqklF6h+UnUv+ecvzd4231kBiyoi29RF+4fS5n2ROohSWem\\n\",\n       \"lLaklF4p6X2SfjDlaziqSSn9eUrpuMH/x0p6p6RHNV8PHxh87QOSvrf8EcwEKJX9DyT9bUrplSml\\n\",\n       \"rZLOlLRzBtd3VDF4UMNfa75/SK6PiZNSSpJulrQn53zDgo/cR6ZMqS7cP5Yy1U2Lc85/SildLem/\\n\",\n       
\"JB0j6WZH7E2dDZJun+8j+jNJt+Scf5JSekjSd1JKH5K0X/ORGWbMpJRulXSBpJNTSgck/ZOkf9Yy\\n\",\n       \"ZZ9z3pNS+o6kPZL+JOkj2VsRjJVl6uM6SRemlN6s+WWJfZI+LLk+psTbJV0uaXdK6ZHBe5+S+8gs\\n\",\n       \"WK4uPi3p/e4fi/EWMcYYY4wxjTizuTHGGGNMI55IGWOMMcY04omUMcYYY0wjnkgZY4wxxjTiiZQx\\n\",\n       \"xhhjTCOeSBljjDHGNOKJlDHGGGNMI55IGWOMMcY08v87vVmfz9SwBgAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01cf9110>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['conv4'].data[0]\\n\",\n    \"vis_square(feat, padval=0.5)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The fifth layer output, `conv5` (rectified, all 256 channels)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 34,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAJOCAYAAAB8y+mTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzs3XmMXdd17/nfEUWRLM7FKhZZHDUPLcu2HDg2XgBbL0Hw\\n\",\n       \"AmR4iYOkAwSdP/JHA+5uJ51AkB14uIITDw3HaSCIjaT7OfFrdAYjhuP8Y9hOolY8QLFlJ9ZEDTQH\\n\",\n       \"cagqVpHFSRQlUTr9h7j2XZd1eId9z3jv9wMYOt5VrLvrDqfOWWvttZM0TQUAAIDB3VD1BAAAAJqK\\n\",\n       \"CykAAIBIXEgBAABE4kIKAAAgEhdSAAAAkbiQAgAAiFTIhVSSJP8lSZJnkyR5IUmSh4p4DAAAgKol\\n\",\n       \"efeRSpJklaTnJP2MpBOSvi/pN9I0PZDrAwEAAFSsiIjUOyUdTNP0SJqmr0n6W0m/VMDjAAAAVKqI\\n\",\n       \"C6ldko65/3/86hgAAMBIubGAn9kzV5gkCfvSAACAxkjTNMkaL+JC6oSkPe7/79GbUam+TExMhONL\\n\",\n       \"ly5d9/uSpP37DFrn1Wq1Mo+rwFyyMZdszCWbPX5V87jxxvap9MMf/nCuc7GffeXKlYH/bdXPi8dc\\n\",\n       \"sg06l5tuuikcv/HGG5I6/26uW7dOknTmzJkw9tprr133561fvz4cP/jggwPNZfXq1WE+ly9fliS9\\n\",\n       \"/vrrff3bXur0Gr3nPe/Ro48+et2vF3Eh9bik25Mk2S/ppKRfl/Qb/f7jbhdPHpstA6iLmIucOvxs\\n\",\n       
\"NM+rr766YsxfKG3YsGHFWDd2MRbjtdde6/txmuyBBx4o90IqTdMrSZL8r5K+LmmVpP/Gij0AADCK\\n\",\n       \"iohIKU3Tr0n6WhE/G20WwvXH/u7CQq32X5TvrW99azi2FM3TTz8dxnht0CQ33PDm+qRhohgx9u3b\\n\",\n       \"J0nas6ddNWKfncOHD4ex06dPlzqvunj55Zczjwf9t4hDZ3MAAIBIhUSkUI61a9eGY7tTtPy49GYh\\n\",\n       \"oETUo0qrVq0Kx1bX54tFeW1QV3ZOGWZhT15uu+02SdI999wTxpaWliR1FlWPa0QK1SIiBQAAEIkL\\n\",\n       \"KQAAgEhjldqzVFfTWajdLztdXl6uajro4sknnwzH47BMuF++aNh64Jw7dy6Mzc/Plz4ndNq4caMk\\n\",\n       \"afPmzWHMzj2Li4thrN+WNcP4wQ9+IElaWFgIY5YWn5ubK/zxgW6ISAEAAEQaq4iU7z7cZGvWrJHU\\n\",\n       \"2ZHWlrDm1VV2UL4Vw9TUlCTppZdeCmO+IHScEIXqZO/d7du3h7EtW7ZIkl555ZVK5oRs58+fl9S5\\n\",\n       \"YMLOob4AvQxnz57t+C/qwZ/3bfGTb41x4cKFFWOjiIgUAABAJC6kAAAAIo1GrqtPo9LB1YrmfR8p\\n\",\n       \"S49U1UfFes5I7V4zdUilVtWJuZu77rorHNv8nnnmmaqmUyp7b/gUtKWAfQEzqmev1bim5XvxpRV2\\n\",\n       \"rvMLJsaBT/vefPPNkqRt27aFsRdeeEGSdPTo0TA2ivvkEpECAACIVH3IAAOz5ca+Q/bFixermo6k\\n\",\n       \"zkJh6zhch0Jr6/RuRY9SdXdE+/fvlyTde++9Ycxet5MnT4axUS6otZ3rDx06FMbstan6TtW3R8la\\n\",\n       \"tFGnqCaq56Mx1sLDZz3svT7KfCbCfl9/LrPjqj/bRSMiBQAAEIkLKQAAgEhjldqzHjZNZ2kH3828\\n\",\n       \"6rTDlStXMo+rZr1wquIXBMzOzkrqfB9agfW49VCq+nXxrIO3LdiQ2gXwPj3t08MYDT41Neg51PfJ\\n\",\n       \"sy7r45DO8/zn49SpU5I6P9vj8nwQkQIAAIg0VhEpX5w9CqqOQqE3/xpZ4bvvCm13taPSmqOJLNK0\\n\",\n       \"adOmFV/jMzbahnl9/YKEqnaUqJqPONl5Lauw3Bfmj+JzRUQKAAAgEhdSAAAAkcYqtUexKMrmQ99W\\n\",\n       \"mOkXCdSp6HrcnThxouop5MIXUANF8mk6O9f5dKn11/ILbPz5b1TwiQMAAIg0VhGpJvNX9Lak3i+t\\n\",\n       \"t2ibdT2X2gV+fk8oi4qsW7cujE1OThYwY1zL9pvyxeaHDx+uajq1YXetfm9GInXx6rDHJcaDj37a\\n\",\n       \"+87/XbJ994hIAQAAIBMXUgAAAJGIAdfI9PS0pHY4VGr3F9q6dWsYs5SdL/Sz7sx+w1vrUeRDrdbL\\n\",\n       \"yIdkfUdnFMdv1DuuLMS/ffv2MLZv3z5Jnf1nLFXtU3xHjhyR1Pl+HeUNnmONSzfpPPg0e7eNdX0f\\n\",\n       \"pM2bN6/4fnufjmKPpG7882J92Hbt2hXG7Lka9T55RKQAAAAiJd2uwgt70CRJW61W6Y8LAAAwqFar\\n\",\n       \"pTRNk6yvEZECAACIxIUUAABApMqKzatM7fnHrjrFyFyy1XUuf/InfyJJOnfuXK6P4Yv/szZStQ23\\n\",\n       
\"/+AP/iBzXlXwj//xj39cUnXFtjaXqp8TPwfm0snm8Id/+Idh7MqVK5XOpU7PC3PpZHP49Kc/HcaK\\n\",\n       \"Klr351/fY9E8+OCD3f997jMCAAAYE7Q/QFe+I63xS4attYIfs6v7y5cvhzHrXr169eowtrCwIKkz\\n\",\n       \"iuG7sNdRUUvLs6JQZTxuXsZt2XdRbAn5KHd2ryoKhWbyf0eyZLUDGpRF/KX27h/+b1ovRKQAAAAi\\n\",\n       \"cSEFAAAQqRGpvV6FuCjOK6+80vXrlorwnaqtm+2JEyfCmIVnz5w5E8ayQrHWjb2umtyh13fMt+7g\\n\",\n       \"p0+fDmN16hJuG6COWxqo1+cNGDe2a4eUnfK2cpFhUnu+n6aVUfgdQXohIgUAABCpcREpu/rkzq0e\\n\",\n       \"rIjc9gmU2vv5LS0thbGLFy9KahfyXY9/resoj8LGqvg5W6THXj+pHW2r6rPl9+2y4k8iUsB4s+i0\\n\",\n       \"1M6A+LGsxU39slYH/txjn8GshVbXU++/WgAAADXGhRQAAECkRqT2fHjfwm2+74OF4urea2cUzc/P\\n\",\n       \"S2qn7qR2/43FxcUV3++7xloqyYdp/etaR01M6RlfTF51YbnvJ2bvCf/c1j3FC6Ac/lyVtdjM//0Y\\n\",\n       \"lF1b+GJzM0ianbMVAABApEZEpHwhmN2p+qvFrKvJMtiVsBXASe35+avoJhfMZnV39c+3Ffj1W+iX\\n\",\n       \"FZHydxnD3F2gOfyig6wFCLwPVvJRvF6LNjBe/N/IJkfNs/RqeTTM39dunyMiUgAAACXgQgoAACBS\\n\",\n       \"rePnVng8OzsbxizcduHChUrm5Dt4T05OSuoMLdpGvE1O53l5p02zUoU+dBvTC6Qb65N06dKlXH/u\\n\",\n       \"KLKFHHXoZTQqn588lZ3asx0Kpqamwpid3/ziElTPp8LtHMtnqH/++bO/R4MsfCIiBQAAEKnWEams\\n\",\n       \"K+qqIws+omJz8R28q55f3S0vL3f9et4tLOr0elihfR3267O7ra1bt4YxuyvzeySOA+vEX/coS9kF\\n\",\n       \"5haZtBYnUv33wiyTL/C2yH1Ve8H6v0ujFonyv5tFZf3fCTuXDfO3w7dbsfOBf317/vvoRwYAABhz\\n\",\n       \"XEgBAABEqnVqz8KkvidT3sXIg7JiS6nZG9hWparQdx3UIaVndu7cKUlav359GKtjassXfBa1c0HV\\n\",\n       \"55R+lZ3aa8rzUgdZi2jKNMp9xXzvwaJSez6Nt3btWkmD/a0iIgUAABCp1hEp468WreXA3NxcVdMJ\\n\",\n       \"7OrYL52sw9Jx1IcvYrQO+L51R5nRzKy7Ll+YWlVLkSz2vJXx/NSxONffhdtr5SNEZUY3fbSlql0k\\n\",\n       \"6sheF6n9vFS1uGWUsyJ+QYw5f/58OM4jGuh/ni2o8K9vL0SkAAAAInEhBQAAEKkRqT0f0q5TsbLN\\n\",\n       \"xXccJrUHz4edN27cKKnz/XL69GlJ5byvffjfUkM+FeHD21Wr0+e8Cj51V/UiBdJ52eipVQ6/2Czr\\n\",\n       \"Oc+7JMHOk4MUrxORAgAAiNSIiJS/Cq3TXYBdsRa1LBv1Y/uP+de8W8Rgy5Yt4diiU77ou9/Ii0Wz\\n\",\n       \"8nLy5ElJcYXWgxRhXsueg+np6TDmI3TGPuf+bhTA+KmqDccgLSWISAEAAETiQgoAACBSZam9G2+8\\n\",\n       
\"sSNFYEXavtOydSz14f2sNNrExISkzpTDmTNnrvt9ZfC9pSyV4wvR/e9ZR1V1bd+7d2+pjzcoK87u\\n\",\n       \"N+zr035WtBvThTiroDKrF5Qd+3RZ1uP1m9KzVJwvOM5KM1rfp16pStsQ1H8WszoT9/u+yzvlCaDz\\n\",\n       \"b2nVHe6b0COLiBQAAECkpIqlrUmSpK1Wq/THBQAAGFSr1VKapplt1IlIAQAAROJCCgAAIFJlxeZf\\n\",\n       \"+cpXOgpYrYjXF7YtLCxI6iwotWJk/31LS0uSOgtiszYetMLZ3/md3wljVacY/ePXaS6f/OQnJXU+\\n\",\n       \"p1Yg7F8PKy72xdBZCwKyiowvXrwoKbsw+gMf+EDmvKrgH/8v/uIvJHV2BLffN68NS9esWSNJ2rFj\\n\",\n       \"Rxg7fvy4JOkjH/lI5ryqUMf3btXz8HNgLp2YSzbmki2PufhFX/0usLENw/1CoV5zICIFAAAQqbKI\\n\",\n       \"1JUrVzoiERZhsjtvz7cyyGprYGwZtdSOPvml33Xqil53WXsGWuQlptt01Uto82JR0iKX5Npzf/To\\n\",\n       \"0cIeAwBGXczODTF7WxKRAgAAiMSFFAAAQKTKUnvHjx/vSPdkpZIGlVXknMfPxfVZQbRPm2Z14R4V\\n\",\n       \"Teiyez3btm0Lx6dPn65wJsBgtm/fLkmanZ0NY7ZY5eDBg5XMCTBEpAAAACJVFpE6d+5cVQ+NHFl7\\n\",\n       \"iZmZmTBm7Sh8of+pU6fKnRgCa1fxlre8JYzZsuBDhw6FMX8M1Int/ekXFFmE1RaASKMdDUf/7Jy3\\n\",\n       \"efPmMGbvk157nWbt/dkLESkAAIBIXEgBAABEqiy1h2L40PcgoclYJ0+elNQZUrdu6PTtGs4NN7Tv\\n\",\n       \"c6yDfAx7bZ5++ukwZn3WSLmiCax/oN9pwY5jegVhtNkiM98TytLDvVJ7k5OTkqT5+fm+H4+IFAAA\\n\",\n       \"QKTKIlKrV6/ueWWI7vw+QtPT05I6o0BlRKQssuHbAti8aD0xHH/3nQffkZ7PHprEziWLi4thrMmt\\n\",\n       \"SDAc2w9Pau/f6lu62N++mDYv69evH/jfEJECAACIxIUUAABApMpSe3mnLcaR3/R5eXlZUvmbA2/Y\\n\",\n       \"sEFSZ9dsK/48f/58GCsjzThq8v6MTE1NhWNL88Vs0Ini2Gfa+rNJ9EbySOd1uu222yS1O79L0oED\\n\",\n       \"ByS1/yaMIr+oype45CHmbygRKQAAgEiVRaRYsjq8OkQTbIm+v0OwwlAfMSszIrVp06ZwbHewvvB9\\n\",\n       \"XN97fiGCvTY+6lV2NBMr2SIA/9mxz9gwLTAwmixyafsOSqMdiTL+74kvPM+Dz6T0i4gUAABAJC6k\\n\",\n       \"AAAAItHZHEOxMKhPRVjKsapUxJo1a8KxzS+vdJ51x21i0atP3VnfMf8azc3NlT4nZLt06VLVU6gl\\n\",\n       \"+/xJ7XOOT/OMW/rzqaeeqnoKlfDnMutEnpeYMhQiUgAAAJGISGEoVsTtl2hb8V9Vxcs++mRz8Xeq\\n\",\n       \"w3T1bnLbDr9339atWyVl70XVxGgbxoN/D9v7ddyiUOg8D8cUh3cTsyMHESkAAIBIXEgBAABEIrWH\\n\",\n       \"XPhQq6XTqupz5fslzczMSMqveLfJaQSfFrG+M76wkpQe6s4vatm4caOkzvcwm3GPnzp0/iciBQAA\\n\",\n       
\"EImIFHIxMTERji2yUVVEyt+hHjt2rJI5DMrvrWZF+r6NQ0wB5LV8VO748eOSml08j2LZPpq+a3bV\\n\",\n       \"/EIS+0wQhULViEgBAABE4kIKAAAgEqm9Evli36L4zYOtMLrITXrXr1/f8ViSND8/L6kzbWQ9X/xY\\n\",\n       \"Ewu37TXM2lT2xhvbH6ckSST17nFiBbM+NWo/b/v27WEsayNSe+59cf21jy9lp+/GdePmLPTPylZU\\n\",\n       \"Ss93J+/3Obc0o0+BD/p6+VQ5mqNXiUO382Av1hXd9zy08+8gfRCJSAEAAERKqig2TZIkbbVapT8u\\n\",\n       \"AADAoFqtltI0TbK+RkQKAAAgEhdSAAAAkSorNq8ytecfu6p5WMHlRz7ykei55N1nqA7PS9bjf+5z\\n\",\n       \"n5PUWXBdVO8YXzBuBeUPPfRQGPujP/ojSZ0Fs2VuzlzX16guc6l6Hn4OdZrLxz/+8RVf8wtTbDcC\\n\",\n       \"3yXaPmNWkCu13+u9dgqwz5E/Rz344IMdc6pSHV+jOs3lz/7sz8KYFfWfOXMm18fqtSBm0OfF/7wt\\n\",\n       \"W7ZIyl6kE6PXHIhIAQAARKL9wYB8xGKYJeR5LLXOIwrVBEtLS5LKaZfgX9Os19fu0ummPJy8o6no\\n\",\n       \"Lut843ce6LYLQUwkwj47tNloJt/6oqgdKnpFpAblf4a1ldm1a1cYs1Y0/v2cV4sPIlIAAACRuJAC\\n\",\n       \"AACIRGpvQFbEJklnz56VRPi6aNZptomd0NHJPj/33XdfGDt8+LCkzpC7pRN4zYHyFbnhvC1y8JvL\\n\",\n       \"5+3kyZMdjyW1y3KKKMsgIgUAABCp1hEpW2JuS3Ol4vZ/6pcVPg/LF9qhO6ISo8M+035fLNsryzt2\\n\",\n       \"7FhpcwJ68efrW2+9VVLnvn9zc3OSpNOnT0c/ht+/s+rFLH5f2LzPv/a8FRmRsizRwYMHC3sMj4gU\\n\",\n       \"AABAJC6kAAAAItU6tWeFqdPT02HMQnbz8/NhrOp0X4wqNotuKlJ7+bPCy7IXSljqY5gUCFA2n3az\\n\",\n       \"1JRP7W3dulVSZ8p60B0Pss5zfgcF+5tRxvnQF2nnvXNDE/9e90JECgAAIFKtI1Ld7l793QCA3nxR\\n\",\n       \"98zMjCTp3LlzYYwoEZDNF0bbsd/lwgwTycnqPp/HDhgxiox6jWKGgYgUAABAJC6kAAAAItU6tddN\\n\",\n       \"3gVw/cpr02KgbD7tULUdO3aEY9shwKdPRjH8P47q9J7Ly6lTpyRJk5OTK742MTERjm2T3CYqo4+V\\n\",\n       \"LzXwRfpNREQKAAAg0lARqSRJjkg6L+l1Sa+lafrOJEkmJf2dpH2Sjkj6tTRNzw45z9ogCoWmeuWV\\n\",\n       \"V8Kx3VVXtRR527ZtK479Xandzfv999AcFrkfxYiULdDYuHFjGLPoii8Ot9YFVRWMD6PI9jz23vA7\\n\",\n       \"lox7RCqV9N40Td+epuk7r459UNI30zS9Q9I/X/3/AAAAIyeP1N61m8b9oqQvXj3+oqT/msNjAAAA\\n\",\n       \"1M6wxeappH9KkuR1SX+epun/JWkmTdOFq19fkDQz5GMAyIFPS1edbrhw4UI4np2dldTeycAjtddM\\n\",\n       \"9l4bxS7WlvZaXl4OY5cuXZLUmT6v+jNWV9Yl3m+MbJtCN3XHj2EvpP5TmqZzSZJMS/pmkiTP+i+m\\n\",\n       
\"aZomSdLMZwYAAKCHoS6k0jSdu/rfxSRJviLpnZIWkiTZkabpfJIkOyWdymGegV3FVrU82hfIrVmz\\n\",\n       \"RlJ7+TY6+TsOw7L26vi7vc2bN0vqjAzZ18tYUOHv3Dds2LBiftzNo+78e9hYZArXZ9EnH22ueyTq\\n\",\n       \"kUce6fr16BqpJEkmkiTZePV4vaSflfSkpH+U9FtXv+23JP1D7GMAAABU6YEHHuj69WEiUjOSvnL1\\n\",\n       \"6vJGSf9vmqbfSJLkcUlfSpLkt3W1/cEQjwEAAFBb0RdSaZoelvS2jPEzkn5mmEldy/frsOOFhYUw\\n\",\n       \"VmYawD+WpUCseE4qpyNsU/hwrW0y/fLLL1c1nbHn35v2Pq6qU7//HNkuBb4weX5+vrS5ADH858U+\\n\",\n       \"R00qXbDSC9+NvQyjmP6kszkAAECkRuy155dFWxdkXyTrj4vm76St2LCJUagyomg+ImWPR0SqHuy1\\n\",\n       \"sUih1L6bts7NRbKWB1L7jtgv2mjSnT3Gk98rziJSw5zfbNGFVE7bCOu8bv9FPCJSAAAAkbiQAgAA\\n\",\n       \"iNSI1N6xY8cyj/Mw6MaS/vvs2PpiSPXvh2HKTkc2fVPKUZO1qWyZizaOHDkSjk+cOCGpXXRe9lyA\\n\",\n       \"GHmn38o+J9vjlZHKH3VEpAAAACIlVURQkiRJW61W6Y8LAAAwqFarpTRNk6yvEZECAACIxIUUAABA\\n\",\n       \"pMqKzatM7fnH/uM//mNJ0quvvhrGrIfNpk2bwph1sfXf54+N789kfBGtsQ2PP/ShD2XOqwr+8es4\\n\",\n       \"F9+3pczi9bo+L5/+9KclVdeby8/lM5/5jKT8C3D77Xdmc6n69fFzYC6dmEs25pKtjnO5HiJSAAAA\\n\",\n       \"kRrR/qBI3bqinz59euCf1++y7axoFrqzvaHwpjp1iS+qE3MTdw0YxuTkpCTpzJkzXb/PPgtFdoAf\\n\",\n       \"tDUMUDdl7SXKXyYAAIBIXEgBAABEGvvUXlXh66Z0QK+TMjenboKNGzdK4nkZJbZLgt+o3c5NPs2Z\\n\",\n       \"tYAlb/fee68kaWZmJoxZF/oDBw6EMTaYRt1Y6nvnzp1hzBYrLS0thTF/PNTj5fJTAAAAxtDYR6Ss\\n\",\n       \"GK3siJTfn28U+KK+7du3S5IWFhbCGAWrQG9nz56V1BnlqTp67VtQrF27VlLnwo9xiEj58/Wdd94p\\n\",\n       \"Sbr55pvDmEUIn3jiiTAWs1gJ+bD3bFYU1+8vmhciUgAAAJG4kAIAAIg09qm9V155pZLHrTpcn7c7\\n\",\n       \"7rgjHL/jHe+QJD355JNh7JlnnpFU//5Z1nFeai9EqCuKzEdPnVLgL7zwgiTp6NGjYcx68RTZkycP\\n\",\n       \"PvU4MTEhabi+a/58bT2+fCGzpTeLSBuhP/4137ZtmyRpeno6jFna/Ny5c/k/du4/EQAAYEw0LiJl\\n\",\n       \"V5r+yn9ubq6q6eCqw4cPh2N7bXz0qe7RHeMjlFn7JgLj4tKlSx3/LZIv5rY9ToeJHPh9OTdv3iyp\\n\",\n       \"c0HMME6dOiWpMyJsj1HmHqDolLVAw5/DswrQ80JECgAAIBIXUgAAAJEal9qzXhB5pYos3Od/nj1G\\n\",\n       \"kRseNiXV1S9fyPkf//EfFc4kP+O2Ye6427BhQzi2fkk+vcT7oTi+mDuPzbh92s1+dt6vn59nVRuI\\n\",\n       
\"Wwd8K6TGm+bn5yV1fn6LTFETkQIAAIjUuIiUFfPlVdRny93tDlRqL0H2Y3kvNfdLNWMVGTEDxo3/\\n\",\n       \"DFn0gihU+fJukXLx4sVcf16dFLGUfxRYFLKMhRISESkAAIBoXEgBAABEalxqL2+WxvPFjpZ28wXh\\n\",\n       \"lkbLK4WWR8qAdB6QH1tkIo3ezgMYTbxP64GIFAAAQKSxj0jZslXfFdW6W/sxfwxgtBHtBdAvIlIA\\n\",\n       \"AACRuJACAACINPapPeM3qzX0kAHGky04scUoAHA9RKQAAAAiEZECAHXuZGA7HtA5enRMTEyU+nhZ\\n\",\n       \"+6laG52tW7eWOhcUi4gUAABAJC6kAAAAIpHaAzDWNmzYIEmanZ0NY1lF5mWm+fym5qtXr5bU2cV6\\n\",\n       \"0I19t2zZEo7Pnj0rqXPT802bNnV8TRq8d57NU2o/pzfddNOK70uSJBwX1Zl73bp14Xjjxo2SOp+D\\n\",\n       \"MmS9h2xsfn5+xdd86tHm+tJLL4Ux+538e8M2ZD5//nwOM0YsIlIAAACRkir26kmSJG21WqU/LgAA\\n\",\n       \"wKBarZbSNE2yvkZECgAAIBIXUgAAAJEqKzavMrXnH7vqFGPMXKw/SVYxoy9EtLRtr/St/ZuPfvSj\\n\",\n       \"A8+lKE1/jYrCXLLZ41c9Dz+HOs3lz//8z8OY7eLgC5TL2KS5js/L3/7t34axqakpSdLly5fD2PLy\\n\",\n       \"siTp5MmTYcw2ul+/fn0Ys0J7v0OGjVnhvSRt375dUrtIXJJ+8zd/s2NOVbI5fPKTnwxjWbt+FMUv\\n\",\n       \"EnjooYc65lSlXnMgIgUAABCJ9gcN1G3/r0GXLMf+G9MtOoZi2XJpfwdtr6W/s7M76GH4JetZS9qb\\n\",\n       \"zJb+j/ISct9l216/IqNQTTkvzM3NheNDhw5J6r+1hG9NkMU+d/59tbi4KKn++7hWsQhNGrytR10Q\\n\",\n       \"kQIAAIjEhRQAAEAkUnsYSt1D96Ps0qVL1/1aHuk8z4f6yyw+LcPk5KQkaefOnWHsueeeq2o6hfAp\\n\",\n       \"rDI+s005LxTZrT4rvVn3lJ6pKsXmSwiahIgUAABAJCJS6GiZAIybM2fOSGrvyTaKmhIhuh5bPOFb\\n\",\n       \"DlgLAb/Yomp+vzwr6vf7FzbFmjVrwnG3CPS2bdvCsUXBu0XKe/H7NTYJf0EBAAAicSEFAAAQidQe\\n\",\n       \"tHbt2qqnUBgrXvTF0pbCuXDhQiVzqopP4e7du1dSZ1H6wsJC6XOqA+vzM8p9pJpu69atkqTZ2dkw\\n\",\n       \"Zqk9vzBg0P5H/aaw+uV/RlbJhI0N07uvDL4PXbfnxRelD5PSM1X1rxoWESkAAIBIRKSGcOONbz59\\n\",\n       \"ZexThThZdzhWEDpuESl7v0rtKGRTlmOjfJs3b5bUGbWxz0ze7TV6sb3ufDdxi4YME8WIiQxZEXlW\\n\",\n       \"iwBf1N+r83md9Rud63UOtQicP/d0a61Qp4UDgyAiBQAAEIkLKQAAgEik9oYwKim9Ufk9+lVkN+M6\\n\",\n       \"8ykaK9Qd1wLzuqpTMbIVePui6bJTetfK+7Pbb2rbPwf99t3LSjnW4XXtR8zrPDU1Janz+bHfd2lp\\n\",\n       \"KZ+J1RQRKQAAgEhEpDB2nc3LLGj0rSWqLqS0KJRUfaH9hg0bwrGf17izO/isu/qyWfTHR6yrft9U\\n\",\n       
\"xb8Go7bXZF7sM+0Ly23XgFE3Xn9BAQAAcsSFFAAAQCRSewh9UZC/qtN5XlO7Bo+jOhQlLy8vVz2F\\n\",\n       \"Wur3c2RpfZ/qsn9bp/NClpjUshXu+993XBCRAgAAiDR+l45YwXfjBcpAR/Xm8G0z7Fwxbi1TYqxa\\n\",\n       \"tUpSMxdTxESkrNh8x44dYczeJ6NedE5ECgAAIBIXUgAAAJFI7aHRm2uimeqY2vO9rWzDXp+SqLqr\\n\",\n       \"d1V83yQWpvRv3M6rlsr0pSLjkgImIgUAABCJiBSA3PRbpFqH5f3X2r9/fzjetGmTpM476nGNSHmv\\n\",\n       \"vvpq1VNACWJ2uzh69Kgk6cSJE2FsXPY1JSIFAAAQiQspAACASKT2gAJYUe66devCmKWJYopQfSF0\\n\",\n       \"nW3dujUcnz59Ovrn2PNXZippbm4uHJ88eVJSvfrf+ELvJqfYrChZaqeQ/OID64zti5bz6Mq/evXq\\n\",\n       \"oX/GuPCbrdvr4NPx9lwmSRLGbFHCuBXZS0SkAAAAoiVV7L+VJEnaarVKf1wAAIBBtVotpWmaZH2N\\n\",\n       \"iBQAAEAkLqQAAAAiVVZsXmVqzz92t3n4Qro8UqC+WNQKLh966KG+5tIvv8Go70jcD//4X/jCFyR1\\n\",\n       \"Fmhu3LhRkrRz584w9v3vf1+StLS0FMZ+6qd+SpK0b9++MPboo49K6izeXb9+vSTp1ltvDWNW5Pje\\n\",\n       \"9743jH3XPGmmAAAgAElEQVT84x+XJG3bti2MnTp1SlJn4arN9fLly91+zb5t375dkvT+978/jFWd\\n\",\n       \"ku73vVuGOs6l6nn4OQwzl6xi35hu8KP2vOSljnN5+OGHw1gVJTd+LnV6Xuo0l+shIgUAABCpsojU\\n\",\n       \"mjVrOiI+eUURrmUdimMeI++7gjKWLA8ahboeu/udmJgIY/Z8+DtjH4kyjz32mCTpwIEDYSxrKfyW\\n\",\n       \"LVskdd59Z/08uyO3KFTW1649zkPW4w3Kom7SeC4LLtvMzEw4ts+C/xzbZzCvLuX23s37/JX1uVte\\n\",\n       \"Xs71MRDH2jPkvY9cVVEoDI+IFAAAQCQupAAAACJVltrLKwXVy/nz50t5nFFjXZ59t2czNTXV9d9a\\n\",\n       \"yLtXZ+tLly5Jko4dOxbG/PEoIJ1XLr/4wEoHfEfmvNO/RaX2/CKPqjZ4zurOX9UmtJs3b6708b28\\n\",\n       \"U3rjwNKh0mg+f0SkAAAAIrHXHjJ1K6jM667Qitb9z6vq7hujwS9csOiUj1KdPXtWUn4LP2wPxLwX\\n\",\n       \"kiwsLIRj24+ubPY7Zf1u/jnNO8pnbrnllnB85513Suo8P9hr+cILL4SxqvdG9K8V57I2iyhKw+3B\\n\",\n       \"WVdEpAAAACJxIQUAABCJ1F6fLJTt+wJZGsEXtBfVD6tsVmjqw9N2HNNhOYsVm3s+ZQAMyi9isV5R\\n\",\n       \"/j2V9yIXew8XWUBbxxSR7wFoz29RhfySND09LUnaunVrGLOFMCdPngxjVaf26vha1cGo/F28HiJS\\n\",\n       \"AAAAkYhI9cmWI9t+c1J7XztfjJnVTbmJ7M7K791nv2ded572HPk9CP2dLjAo3zYg67OYd9TEIlGj\\n\",\n       \"uKS7m8nJyXBsEeq8O6/7SNPRo0clSRcuXFjxfX4PzjLbp/jzli068KqOjtXJqLeBISIFAAAQiQsp\\n\",\n       
\"AACASKT2+mRpAl80Z+H8vIqv68R+N5/as+JP3yslK9Q+KF+gWcZzWVQ3alTPd1C219m/p+zrefV9\\n\",\n       \"slThKBcZ79mzJxxnFZYX9btbnyipvQH67OxsGNuxY4ekzgJ0K73I47zUi/+97X3lU8vWET6vDbJR\\n\",\n       \"X0SkAAAAIhGR6pMVrPqCShtremF5lqxuyhal8i0grDjcPwf2b/u9U/VFm2UUJRbViRlx7D3ko5/2\\n\",\n       \"nhhmr8ysz6ePWOWhqM++//zZcdkF7RZ92rRpUxiziIs/D5YRjbPdD/y+fxMTE5I6X1MbKyMi5V97\\n\",\n       \"ezx/LrOIKBGp0UdECgAAIBIXUgAAAJFI7Q0oK4U1iqkiC5f7vk5WoOs7Dlso23eM7jfUb2mCmZmZ\\n\",\n       \"4SY7oFFcHNBklkLyhbqxxeD+tb148aKkzs9s3qm4Mj77VfWost/Nbypu/aP8nMr4PNl5xheWG1+U\\n\",\n       \"bq95GXwq2hau+DSepRkx+ohIAQAARCIiNSAfobFutn4Zfd57eRXFF4xnzdnuOH10ye7m/d2oRRH8\\n\",\n       \"z+h37y37t34umzdv7u8XwMiw91PerTRsHzxfjOyLlWP5QnBbbp/3575O7RT8nphbtmxZ8fUyIlL2\\n\",\n       \"nGctSPDzK7ODdlZ2wsvaSxSjiYgUAABAJC6kAAAAIpHaG5APufsizKaw1Nn09HQYyyrQ9IW/xlIw\\n\",\n       \"vVJ2ltqz/0rZxcM2duTIkRWPMW6yNmv26YJRXNBQBP+c2fsvq8/QMBvK+nNAnVJwRfG9vKynVNnl\\n\",\n       \"DJY+9I9lr2VVrwH9oWCISAEAAEQiIlWRrIiPjRVZvGlRNN+BNyvaYXd5WRGiXnvU9bt03X72wsJC\\n\",\n       \"X99fJ77bsy06sKXhUrtw+sUXXwxj3Zbe33fffeHYlnj714jC1f749569T4uMnnSLbPmIbFERRd86\\n\",\n       \"xKJx/n2TB78YxN7D/ncrY2cHK3L3LQWs8Nw+f0BViEgBAABE4kIKAAAgUuNSe4NuiFtXWek7C1X7\\n\",\n       \"dFreYXMLjfsUSNZzSSFltltvvVVSZ2rWUkdZr+nU1FQ4th43/rm3vkZ+E1hLv/r0Sd6b7Y6qpaWl\\n\",\n       \"rl+P7Zgew6d67TX1n2d77YfZmNmn3bJ2I8iDX1RT1QIbe978uerUqVOSpKNHj1YyJ8AQkQIAAIiU\\n\",\n       \"lFEouOJBkyRttVqlPy4AAMCgWq2W0jTNDPcSkQIAAIjEhRQAAECkyipYq0zt+ceuOsXY71x8x+Zu\\n\",\n       \"hfa+0HTQtG0Tn5cy9DsXX/hrr1FeRftWSPzhD384jH3qU59a8X3WJ8l6UUntYma/iMHeJ36T6LNn\\n\",\n       \"z/Y1Fyui/sAHPhDG6vIa/emf/mkYs+fA9ymzBR2+/5O9Vr64347952nt2rWSsguufV+x3/u93+uY\\n\",\n       \"U5VsDv3Opd/zzDBz+au/+qsVX/OLMuy1OXToUBizzaHf/va3r/g+36vNekr5XlrWa+uFF14IY/be\\n\",\n       \"beJrVCTmkq3XHHpGpJIk+UKSJAtJkjzpxiaTJPlmkiTPJ0nyjSRJtrivfShJkheSJHk2SZKfHWby\\n\",\n       \"AAAAddZPROovJf2ppP/uxj4o6Ztpmv4fSZI8dPX/fzBJknsk/bqkeyTtkvRPSZLckabpULc2vntv\\n\",\n       
\"Vhdsu1sf5X3a+r07rGLxgFTsnWxTWHuDImS9t20pf1YUMiu6ktVWo98olDfMPnVF860E7PmxLvPX\\n\",\n       \"Hl/Ldx/P6kTerSv6MC0M6sR/di0K1O05i+EjSPZ4fr/NLDaHf/3Xf+36fbZvqP897HWr8/sWzdYz\\n\",\n       \"IpWm6bckLV8z/IuSvnj1+IuS/uvV41+S9Ddpmr6WpukRSQclvTOfqQIAANRLbLH5TJqmFhpakGQh\\n\",\n       \"o1lJx933HdebkSkAAICRM3SxeZqmaZIk3fJJQ+eabMNKqV0c6zsUW1id0G11br/99nC8Z88eSZ3p\\n\",\n       \"kaefflpSuxsxhpfV7dlYikPKf6PevFmXeF8gHJsm9d3li9z8exzkndIzeXde7/Wz/WcBxbHzvi8h\\n\",\n       \"sOfe/72u+/koRmxEaiFJkh2SlCTJTkn21/GEpD3u+3ZfHQMAAGicRx55pOvXYyNS/yjptyR9+up/\\n\",\n       \"/8GN/3WSJJ/Vmym92yV9L/IxAn9HYQWQthRZki5dujTsQ2BIc3Nz4diWIPvCdyJR5ar7Xol+Acn0\\n\",\n       \"9LSkdmsCSTp27Jikwd839t6TiER4fm/IqiN19npL7SxCXnsg2qIXHw23LEaRi0HQPt/719Jeh6oW\\n\",\n       \"QQ3D9sKUpAceeECPPvrodb+354VUkiR/I+k9kqaSJDkm6aOSPiXpS0mS/LakI5J+TZLSNH0mSZIv\\n\",\n       \"SXpG0hVJ70+b+AwCAAD0oeeFVJqmv3GdL/3Mdb7/E5I+McykAAAAmqCyzuaD8L2jrO+NTwMUWbx4\\n\",\n       \"Ld8vyTpZ++K5vELUTeP76PzgBz+ocCbjo8nB3sXFxXBsKTgfSl9evrbjyuDsszqufc08X8hfdWrP\\n\",\n       \"d9O318YXKF++fFlSXMmGvea+9MO6ppPqLdb8/Lyk0enn6P/W9/zeAucBAAAw0hoRkZqYmFhx7O+q\\n\",\n       \"sroQF8XfTdleXqdPnw5j4xqRwmiySIaP+ubxHvdRIisCzqMY2M+NSFRbnQqt/fnaok/+/WXvuZiI\\n\",\n       \"lGUH/M+zhRcsSiqWRZTtNZXaf6d9hNB/PQ+2wCTviOMgkTUiUgAAAJG4kAIAAIjUiNRe1gaUPlRd\\n\",\n       \"ZjrNpxTPnTvX8V9cn3Wnj9kkF9UhVY28+cU5dh736b5Binyvxy/EWFpakjSa7+Vt27ZJ6lxM4Hv6\\n\",\n       \"lckWgPnyF0u35VWAbhuw+/dLUYsnBunATkQKAAAgUiMiUv7K0K4+/V2LFbLlXcSWxRe0sZy2O3+X\\n\",\n       \"ZB3py45IZbWoGJXluVhpFAvMrXDan/PKXGCTNx8tyvo98n4NLYoxiqyQ3i94qoq1wMl6vov8W1mH\\n\",\n       \"vfuISAEAAETiQgoAACBSI1J7vpjM+kj5LsgWHi4jtVcV38n9xhvffNnq1Bsmiy/uPH78eGmP60PL\\n\",\n       \"9j7x6bwyUnuWjimj+7i9HyTSlnkUKg/CUsd5FzJbKtwfW9G01OzUnn+/ZvHnOjNo+sZ6/EnS3r17\\n\",\n       \"JeWf/vK9qqraZaBOvbHsM1BkKrWuuzkQkQIAAIjUiIiUZ1fgq1evDmN1vUrNk7/Ttjs2fzdS9+eg\\n\",\n       \"zPn5u3W7ky27ILGM39eWPvvf14r5/R253SH6iMag/Puvzu+1HTt2hGOL2PpIhC2A2LRpUxizz1Gv\\n\",\n       
\"u3tr4eG/zx4j70iYX0Jui2nq/LwPwne5zmLF5sM8p75g3Yqg847i+R037Pwy7hFhqXORUd7n3bou\\n\",\n       \"JiEiBQAAEIkLKQAAgEiNS+0Z303cF/2NKusXcu0xuhvlBQiWsssqol1eXg7HPo0Vq64h9WstLi6G\\n\",\n       \"46xUjhXE+jRnr+JnY+ecrBRb3s/PmTNnwrGlworq4Fy2Xr3k8vg9/d+Honae8Cks+xvk075N+czk\\n\",\n       \"7ejRo+F4VNLRvRCRAgAAiNTYiJRnxddVRR82bNgQjm0OFB3G8YsI8jAqd/FZ+r3TtmLbcRATre33\\n\",\n       \"s1rm3bX/PZq8R1zW53l+fr6CmeTPvx+sMN4XyI9rRKrpUSiLUPsFH70QkQIAAIjEhRQAAECkylJ7\\n\",\n       \"a9as6QipW2FoVpdm3+nWCmx9CNUK/XyKzfq7+IJT//U8WK8eH8och5Sefz3y6BPi+8pMTk4O/fPG\\n\",\n       \"RR026+yHf03ts+rTHvZZzUph+c/5zp07JXWG3PNOBdfFqJxT8kqtW88mX+Bt6c+qPgdlb8BeJt9Z\\n\",\n       \"3/6G+kJ6+zvt/16XUVqzZ8+eFWO2e4VfoNFvzzz7W3b77beHMTvPDPK5IyIFAAAQKamiMCxJkrTV\\n\",\n       \"apX+uAAAAINqtVpK0zSz1xIRKQAAgEhcSAEAAESqrNi8ytSef+yqU4xNn4sV++bdr2mY5+Wuu+4K\\n\",\n       \"x4cPH5aUXZBqRYWSNDc3V8hc8sZcstnjVz0PP4dBP0NScZ+jmOfFConzKnZv8mtUZH+oOj4vDz/8\\n\",\n       \"cBirqi9UHZ+X6yEiBQAAEGkkOpuPq127doXjEydOrPi63ekWuZQ6qwWE3+/MzM7OSpJOnjyZ6+Nn\\n\",\n       \"efHFF8Nx1t2ULcFv8rJyjA7f/qPqTvy+vUC3yIvf39SiNVl7G+bFHq+q6Ig/1168eFFS536WZmpq\\n\",\n       \"Khz3uwS/jprYndz+xkjtlipZuz/4925evycRKQAAgEhEpBosKwrl2d2tz+/nzfZx849hzfMsWuWP\\n\",\n       \"y4hI+Tt8a/Lomz3a/OyuBdXavXu3pM4Gh3bXPw4uXLhQ9RSCfvf183fyZUQvqo6QHDt2rOvX169f\\n\",\n       \"L6l9bpHa0b0m75XYJH5/yv3790vqPI9YhsZ/3nr9De0XESkAAIBIXEgBAABEIrXXJ9uT51d+5VfC\\n\",\n       \"2PT0tKTO0O1XvvIVSdLCwkKJs+su7+W6Xrf9lXwxd14h1H74FFHW725pgiJTnujO7+V1yy23SOrc\\n\",\n       \"y+vxxx8vfU6IU+T5pSmsTMCXC9i+sCiHT6vawgfbh09q/13IWgw1LP6SAAAARGpERMrv9m5X+Vbk\\n\",\n       \"LJWzjN2Wv959991h7Jd/+ZcldV7hHj16VJL0ta99rfA5ZfF3QUUuR+5HVUWWve6QfeQD5bKFAHv3\\n\",\n       \"7g1j9tk6dOhQJXOywtTJyckwZkvXfSuNMtjn1y+jtyjHOBXgj4K8z79Vt4CoO2sgK7XP8T4iZREr\\n\",\n       \"yy5J+f2NIiIFAAAQiQspAACASI1I7flQpvXr8GE8C3kXmbKxlN3Xv/71FWO+L0W3PduKZGkJ30Op\\n\",\n       \"jJ5NwCBscYJPh8/Pz0vK7kJcBjuXWD8rqd1zpuzUnvFFy1V3O68DW5zg02XjlqK3vlRZ+4ai8zNj\\n\",\n       
\"7w2furMSIZ/Cz6uHGxEpAACASI2ISPmrRiss90XV3Zbg58XuhL7zne+EMX9chZmZmXBsd9VZ+z8B\\n\",\n       \"dXPq1KnM4yocPHhQknT69OkwVlXXezvPUFjeyaJyZZzr64oO6d35iHZWFNc+30V8tohIAQAAROJC\\n\",\n       \"CgAAIFIjUnu+2HzcCgy78eFLC2uOc+gbGAZp8frivEb/qF76XZRRRLE+ESkAAIBIjYhIIdvLL79c\\n\",\n       \"9RQAAA3h2waVsSPIuCAiBQAAEIkLKQAAgEik9hrMp/aso7l1ZJbaYVxSgGgqv8EoHZ1RB3aO9akx\\n\",\n       \"2xC3qv5jvdjnyHYGkaQzZ85UNZ2RQ0QKAAAgEhGpEWGd3qempsLYDTe8eZ3s99yjwBB1sG/fvnBs\\n\",\n       \"3bxtLyyp/T4+fvx4GPvxj39c0uxQN/be8DtaWPTHL3t/4403Cp+LRfq3bdsWxizSY/uvSsWda+28\\n\",\n       \"LvX/+1rGIqurt89ijDL7PYvYu5KIFAAAQCQupAAAACKR2hsRFm62EK4fA+rmlltuCcfT09OSpNnZ\\n\",\n       \"2TBmheULCwvlTgy1ZKm9ycnJMHb+/HlJ0vz8fBgrY+eLjRs3SuosozBFlk5YWtM+L1I7VZeVskuS\\n\",\n       \"ZMW/zdr42BegN5kV/Evt94lPBVtn+CJKXYhIAQAARBqJkIVdfY7zck5/5W3s7mOcC8ytMNMXZVrU\\n\",\n       \"bvv27WHsxRdfLHdiY+6FF14Ix7ZPpC8st8/ys88+W+7EUEtWWO4jTtbWpez9VxcXFyV1RvyzIj2D\\n\",\n       \"8udwW4Dh2ZiPwNk5zBeg21x8dqJbW4azZ89GzrhefBF51nNg75Mi/h4SkQIAAIjEhRQAAECkxAqw\\n\",\n       \"Sn3QJElbrVbpjwsAADCoVqulNE2TrK8RkQIAAIhUWbF5PxGpn//5n5cknTp1Kox973vfW/F9+/fv\\n\",\n       \"l9RZhHf58uW+HrvqyFhec7GlrsNEGEfxeckDc8kWMxcr0M274NMev+rnxM+BuXQa5bncdNNNkvov\\n\",\n       \"OrcWCpL0+7//+7nOZRhNeY3WrVsXjovaS9a3hXjwwQe7fi8RKQAAgEhcSAEAAESqdR+p3bt3S5Le\\n\",\n       \"9773hbEdO3ZIkp5//vkw9u53v1uSdOLEiTD2jW98I9e5vOtd75LU2XOjTj1uLNTpUyZ59DYB8jTO\\n\",\n       \"Pc0wugY91164cKGgmYyHrHTenXfeGY7f/va3S2r3qJPa/b983zHbCN12UvC69d66FhEpAACASLWO\\n\",\n       \"SD3yyCOSpOeeey6MHT16VFJnsdmWLVskSU8//XRhc/mJn/gJSdL999+/Yi6PPvpoGDty5Iikzj3C\\n\",\n       \"iiqG88ru7gvEWLNmjaTsO0DPinf9fmZ+jywUzxbx+O7QdYrCV2316tXh2HfVRrk2bdokSfqFX/iF\\n\",\n       \"MPae97xHUueOFrYQ67HHHgtjX/ziFyVJc3NzQ82BiBQAAEAkLqQAAAAi1Tq1Zyk9n9rLcuDAAUnt\\n\",\n       \"XkpFsEI1XyRoaQeffrCNV30PijJSe2WytIvU/t2XlpbCGEXu9WKhb0vVSO3PihVbStLFixcLn4vf\\n\",\n       \"XLUbNtyuxvT0dDi23jk//dM/HcasfOJzn/tcGPvnf/7nQubi3yt2XKf3w8TERDj2Rc0ol523Tp8+\\n\",\n       
\"HcbsfepTrva3ypfB5LVhMxEpAACASLWOSPXLF5QV5Stf+Yok6e/+7u/6+n5fiDhqXn/99XB8/vx5\\n\",\n       \"SUSh6szunO+6664wZgXEPgpVRkRq0IjCqEVz684Xlu/Zs0dS57LyzZs3S5K++tWvFj4Xf14v4xw/\\n\",\n       \"KKJQ9WDvjb/8y78MY7aoxXZSkKRVq1ZJ6swq5bXXMBEpAACASFxIAQAARBqJ1F4ZBk1djXJfER8O\\n\",\n       \"7dUPCNWzzbwff/zxMGYF6GWnoAf9XAzSXXhUWbqt20bsebHFMpL05S9/WZL0L//yL2HM5vDNb36z\\n\",\n       \"8LkAsezvUll/n4hIAQAARCIi1Se7g7fi6nFW9yJQZDt06FDVUxgY769y+WjzX//1X0sa7eg6kAci\\n\",\n       \"UgAAAJG4kAIAAIhEaq9P/XZkHjdWrEz4H0WwDcmlduHouPWW6rfI3LrV59Ubh8800B+uDgAAACIR\\n\",\n       \"keqT7cnjI1MUwra7xXL3iiJMTk6GY2uFMG4RqX7lFYkCMBgiUgAAAJG4kAIAAIhEam9ApPM68Xyg\\n\",\n       \"SNu3bw/Hc3NzFc4E17Nu3TpJpFwxvohIAQAARCIihaG8/vrrVU8BI2jbtm2SOguoFxcXq5oOrjE7\\n\",\n       \"OxuObaFJGRGpG29s/8m6cuWKJGlmZiaMLS0tSeK8hHIRkQIAAIjEhRQAAECkRqT2rFeR1C5stJ4y\\n\",\n       \"knTTTTdJanc+Rnk2bNggqTMFw8bOg7NNsaX28zfKPcuyUjSedek+d+5cGLt06VLxE0Owdu1aSdK+\\n\",\n       \"ffvC2NTUlKT2eViSvv3tbxc+l40bN0qS7rzzzhXz85+ThYWF6MdYs2aNpN5/R+zx/HNg579+36N2\\n\",\n       \"3sRoICIFAAAQqRERKV84uHXrVkmdV/R2RxsTkbK7mn73s+rXxMREOB7lO+lXX31VUv7PX11ZdLRX\\n\",\n       \"Mavd3U5PT4cxW8rvo0t2d3v69OkwZhGpUYtCeVlRKM+Khi9evFjGdEaCvZfyet/YZ/q5554LY0eO\\n\",\n       \"HJHUjhj67yvShQsXJEnHjx8PY7YPo+06MSzbN9T/HbG/M37nBhvz53iLSNnP8HwRvp0v7e9OXfnf\\n\",\n       \"Y8eOHZI6s0BnzpwZ6Of5v9e7d++WJD3//PNhzN5PTV0kQEQKAAAgEhdSAAAAkZIqNrpMkiRttVql\\n\",\n       \"Py4AAMCgWq2W0jRNsr5GRAoAACBSZcXmVUak/GP3Ow9rsWDFglXOpSjMJVvT5zJO792q5+Hn0O9c\\n\",\n       \"9uzZE46t0PnUqVMrvi+mHUaTn5csk5OT4diKn2MWGfU7F3tt3vve94axF198UZL06KOPhjFrz7B3\\n\",\n       \"794w9vTTT+c6lzI0eS6++H/nzp2SpB//+Me5zuV6iEgBAABE4kIKAAAgUiP6SPXL+opI+fUWMdYX\\n\",\n       \"KO/0SAzr8eG7Q3fbMLTI5wX153vgoD6sS/gtt9wSxqxfku/NZCkL39vn2LFjZUyxdgbtXzQs66zv\\n\",\n       \"03QnT55c8X2WZrT+Zyif7zRvqdjNmzeHsR/+8IeFPTYRKQAAgEiNi0hZh1TfAdW6JBcZbbE7RS/v\\n\",\n       \"TsL9sghDv5EGolDjrYoWJ+jNohc+2pEV0bBO93Xvhj2K7Ln30Qzf1d3Y34Bh9vqLYX8PfTRmcXGx\\n\",\n       
\"1DnUhd+X89ChQ5LK29OQiBQAAEAkLqQAAAAi1Tq1Zxu+zszMhDEr9vb9RF544QVJnRu/5s0KPv0G\\n\",\n       \"xKO8qWwTWVH9z/3cz4Ux2yj43/7t38KYHZPyQpWseLzfjZlHZWNw2/hbauYmtVWfN2677bZwfP/9\\n\",\n       \"90vqXGxkm0z7TYHHgd8I/bvf/a6kuB5jMYhIAQAARKp1RMqiCXfffXcYyyosLzISZXwkCvVky8l3\\n\",\n       \"7NgRxu69996Or0nlRDCNRVWldgTClkpLnQWSRdu/f384tufj+PHjYWx+fr60uaAdjfEdy7vxnZub\\n\",\n       \"fD7yBcC2iIfofv98ZNKeP78YynYyGGdlRaIMESkAAIBIXEgBAABEqnVqz1IwPkxn4e28NiNsItsg\\n\",\n       \"07qtS+0UUb/dzvO2e/fucGz9bnzaqIxC2YMHD0pqFxpK7XSVL7wsI6VnfH8Xe+9ab5qy+Q73liJf\\n\",\n       \"v359GLPXbVSKmuvO0ln9piGanM7z/O4QpPQG58+rVuLiz/U+BYxyEJECAACIVOuIlN25WKQBb7Ln\\n\",\n       \"xUek7M6uzCiU5++C7rjjDkmde4h961vfklTO/LJaHVTFR79sn7B+l7vnzUfl7D3klwwDZajqHFWk\\n\",\n       \"bdu2SepcXLK8vCyp2G7ntmDB78NIRLl8RKQAAAAicSEFAAAQqdapPWSz4tSye2V0c/To0XBsaT5f\\n\",\n       \"SFqnuZbJ94yq2qgUKwN14DeR3rVrV8d/pfbCnyJTe3v27JHUuZDEPudzc3NhrMwFNuOIiBQAAEAk\\n\",\n       \"IlLIhY84HT58WFLnnlq2P5UvkB/XKBWA5vN77ln03S/eKKq1Q5Ik4XjTpk2SpFtvvXXFvHzEjIhU\\n\",\n       \"sYhIAQAAROp5IZUkyReSJFlIkuRJN9ZKkuR4kiT/fvV/P+e+9qEkSV5IkuTZJEl+tqiJAwAAVK2f\\n\",\n       \"1N5fSvpTSf/djaWSPpum6Wf9NyZJco+kX5d0j6Rdkv4pSZI70jSlfe0YsZSeDy1v3bpVUucGraT2\\n\",\n       \"gPHjU1M+PdY0/vy1tLQkqTO1V9TiDv+cWc+o1157LYzZzhd+DMXqGZFK0/RbkpYzvpRkjP2SpL9J\\n\",\n       \"0/S1NE2PSDoo6Z1DzRAAAKCmhik2/9+SJPmfJD0u6ffTND0raVbSY+57juvNyFRu7G7GRzusu6vf\\n\",\n       \"wwnVsc7Fk5OTYcwiUVXtMwc0gZ3XfNTBltH7sSa3sti8eXM4tr3ims7Oa/7vUhnRNtslYXFxMYxZ\\n\",\n       \"YfmBAwcKf3y8KbbY/POSbpb0Nklzkv64y/c2N3YLAADQRdSFVJqmp9KrJP3faqfvTkja475199Ux\\n\",\n       \"AACAxnnkkUe6fj0qtZckyc40Ta1t6i9LshV9/yjpr5Mk+azeTOndLul7MY/h+d5D1i/Db9JoqSS/\\n\",\n       \"KSvK5cP1loqwjTwlaffu3ZKkEyfa19WPPeazwBgn/jNt6Xr/mbaURZOLkftlCzEk6a1vfaukzoJs\\n\",\n       \"c/LkyXD83HPPFT8x9M06i/veUbZJeZGOHDnS8d+y2Dl+XDY9f+CBB/Too49e9+s9L6SSJPkbSe+R\\n\",\n       \"NJUkyTFJH5P03iRJ3qY303aHJf3PkpSm6TNJknxJ0jOSrkh6fzoOZ0IAADCWel5IpWn6GxnDX+jy\\n\",\n       
\"/Z+Q9IlhJnWtDRs2hGPbt8gXLQ+zzNOiJv5OYnl55SJFW9Jvhe3oNDMzE46tyPzuu+8OY7fccouk\\n\",\n       \"8Wt54KMNWe+rceU/RxbNvHDhQhgbp/sv/76wSJPfFcDOb6OyUKPfRUG+VYqdn30xt0VD+o2K+H+b\\n\",\n       \"N2tD4KOqddpnM29ZEdNxRmdzAACASFxIAQAAREqqCKEnSZK2Wq3SHxcAAGBQrVZLaZpm5jSJSAEA\\n\",\n       \"AEQaprP5UKqMSPnHrjoyVtVc/PJzK2b96Ec/WslcsvAaZWMu2ezxq56Hn0MT5+ILsq2AOos/f/S7\\n\",\n       \"gKTJz0svtmjp3LlzYazfIvgmPi9WbL5v374w1q0Fw/79+8Pxiy++KKlzgdcwc7EC/yL3Fuw1ByJS\\n\",\n       \"AAAAkbiQAgAAiFRZag/10SvEitEzMTEhqTMcXmRoHM3QLZ0ntTta23+lcnrDWcoxa372Xpba/cny\\n\",\n       \"nvILYpUAAB2fSURBVJPvabVx40ZJnWk823x5XPoMWmrX+jpez/bt2yVJe/fuDWOnTp2S1P/G275n\\n\",\n       \"VdbiuDqct4hIAQAARGpcRMo6/t50001hzPbaK9ttt90mqb3PkiQdOnRIUjn7LA1j3DqMo7OQ+Pbb\\n\",\n       \"b5fUeYdvRaBVfZ4grVu3TlLnXXad9jMbtJt4Xu644w5J7WiQd/r06XBc1J5zPiqS1WF+XCJRplfk\\n\",\n       \"0likzu8TOWhX9LxaNNnfaZtTnohIAQAAROJCCgAAIFLjUnsWQvXphzL6SBhfZHnrrbdKaqf4pHbI\\n\",\n       \"s+6pPYwfH47/0Y9+VOFM4PkyBSvO9Ru1WyrixIkT5U6sT5aqyXuXDHsuJGnXrl2SpN27d4cxe7xv\\n\",\n       \"fetbYazflNOgfN+sfouk0d6g2grMpeJeo15mZmYkSZs2bQpjVs4wLCJSAAAAkRoXkcpS1fJHX7xr\\n\",\n       \"6lQYCsSwCK9Uj6XFo87u2qV29MnftS8uLhY+B4u0+6hSvwXURe3XaguLpPb78PDhw2HMonZWoF8k\\n\",\n       \"P5c6sWigj5hVtSCgm6wC/bxYV3n/9zgrevvcc89J6owA54WIFAAAQCQupAAAACKNRGqvTD5c+u1v\\n\",\n       \"f1tSu4hNkp599tnS54Q4vifNhQsXKpxJvfjnhUUT5bJu2b5rdhmsc3eddjmYm5sLx5b+9D2jpqam\\n\",\n       \"JHUW5luX87wLwut6frC0alUF3LOzs+HYirj938hjx45JKrZvob02/rx1yy23SGr3dfR8Kr0bv7Cs\\n\",\n       \"FyJSAAAAkYhIDcHujvxdUlP47rK+uHic1PUusyrWzsMXGRORGg/93qVXJescu7S0JKmz2NyiIrQo\\n\",\n       \"KIfvWL68vCyp/J0R7L2bd1f7QYr1iUgBAABE4kIKAAAgEqm9MRXTLwajx4pzJWn9+vWSpIWFhaqm\\n\",\n       \"A/TNyhM2b94cxiwdMz8/X8mcxtk4b3ZORAoAACASESkQkRpjvijXCketaBSoM79nGsaXtb/wC6jK\\n\",\n       \"XkhERAoAACASF1IAAACRSO2hFBZ2nZ6eDmOnTp2qajrIYH15gLqxLtN+c17rZO07m5OWHj8XL16U\\n\",\n       \"VG2ql4gUAABAJCJSOVi1alU4vummmySN91LQLPv27ZNUThHgli1bwvHZs2cLfzwAxbLok9/X1Pba\\n\",\n       
\"886fP1/anFAvVb72RKQAAAAicSEFAAAQidReDnwfJlJ62S5fviyp2A2ed+/eLamz0zGpvXzccEP7\\n\",\n       \"nmvt2rW5/mwrJB5kk9Drse7skvTSSy8N/fPyYhuDv/HGG2HM5upLA2wDXr9ZtH12PHs9/M+zBR1+\\n\",\n       \"14JB1bVMwQqJb7755jBmz8FTTz0Vxubm5sqdGGrD95Ea5jMQg4gUAABAJCJSKEUZe18dP35cUufd\\n\",\n       \"SNad+6iwonofLbLl4Vu3bg1j1r38yJEjff3c7du3h2Pbi88vMc/7bi+PSJSpUxTKs0iTL4h99dVX\\n\",\n       \"JXVGUC3yYsXVUjuK6yPfFsVbXFwMY3m8Lv4xXnvttaF/Xl6OHj3a8V+pHeWr0zxRnbxb6/gIVy9E\\n\",\n       \"pAAAACJxIQUAABApKbsoS5KSJElbrVbpjwsAADCoVqulNE0z831EpAAAACJVVmw+bETKd6+2AlO/\\n\",\n       \"dNcKObOKjP1jVx0ZYy7Z6jqXhx9+WFL+Bde21Fxqv3d7zaXf58UKxW1PqmFZEebHPvaxgedSFHv8\\n\",\n       \"qufh51CnuXz2s58NY1kdoIcp3Lb3l/9M2KIHv/edvV/q9Lwwl05NnIu912xBh9R+j2f9/e/3XJs1\\n\",\n       \"l+shIgUAABCJCykAAIBIje0jldWxmn4iKFpRizP6DTHH6PdzYT2Mem3+mcdz4PtSWc+kpaWlMDZO\\n\",\n       \"n2VLq0nF/d69XtNhHjcrZVzXfl4YPfbe9X2k9u/fL6mz0/3CwoKkznNtXn0GiUgBAABEamxECriW\\n\",\n       \"LTZ429veFsYuXLggSXr++ecrmVMdvPLKK319X6+oRZ7uvPPOcGx7JPp9GA8cOLBibFSNU/QN+bJo\\n\",\n       \"5q5du8KYFfifO3eukjmVzaJJPvpk541t27aFsR07dkjK3l2AiBQAAEBFuJACAACIVOvU3ihvODsq\\n\",\n       \"rKeQ9fLwx2WHlq0/iN909+abb5bUTvFJnSHgceI3N7Zi7zLTeV5WOsu/h9auXVvmdPpSxvnIeuHY\\n\",\n       \"psRS53sX1bNz3q233hrG7FznN5Eug70XZ2dnw5ilsB577LFS51I1v+H2yy+/LKm9kf31+IUewyAi\\n\",\n       \"BQAAEKnWEak6RaLsDvnKlSthzI793aP/+jiwpfCXL18OY/64THYXYoXKknTvvfdKkvbt2xfGxi0i\\n\",\n       \"tXfvXknS+973vhVjn/nMZ8LYiRMnSpvTE088seLYR6Gqeg91U8a+pPY+9eeUp556qvDHRf/sfXDw\\n\",\n       \"4MGKZ9KOwnz3u9+teCbNlNdCDyJSAAAAkbiQAgAAiFTr1F6dWKGp3xjRikDHLZ1Xd0eOHAnHVky9\\n\",\n       \"c+fOimZTvcnJSUnS/fffH8as8+9dd90VxspM7WWpYzrPKyO19+yzzxb+GADyRUQKAAAgEhGpPl26\\n\",\n       \"dKnqKSDCmTNnOv47jqyY+/Of/3wYsyjq9773vUrmBABFs1YVUrERZSJSAAAAkbiQAgAAiERqDxhx\\n\",\n       \"1o+NXjMYhC2ssf5sQNOUsUBEIiIFAAAQjYgUAGCFvLo+A2Xw++aV/d4lIgUAABCJCykAAIBIpPYA\\n\",\n       \"ACusWbNGUuduDpYy8akT+/rFixcLn8srr7xS2GOg2TZu3BiOy+4bSEQKAAAg0khFpNauXRuO675v\\n\",\n       
\"F4ozMTERjl9//XVJ3MkCg7I7/FtuuWXF2PHjx8PYwYMHC5+LfY6B66ly9woiUgAAAJG4kAIAAIjU\\n\",\n       \"iNTeli1bwvG2bdskdabxzOLiYjgmtTe+Xn311XBsqYgbb2y/1W0D6rK63qIa1lcmr54yds7xxdeW\\n\",\n       \"Mq77puYxPXY2bdokSZqZmQlju3btktTe9FqSnn766Tym2JV/PKBuiEgBAABEqiwidcMNN4Q9wHrZ\\n\",\n       \"vHlzOJ6ampLUWXy4vLwsSTp16lSOM0RT+bvXrCLzcY1EVdn5tyw33XRTYT87SRJJ0oULF8JYUyIl\\n\",\n       \"Ma/3wsLCirGTJ09e92solkUIvfPnz/f1by2T4//mWuTe3teIR0QKAAAgEhdSAAAAkZIq0hxJkqSt\\n\",\n       \"Vqv0xwUAABhUq9VSmqaZeVAiUgAAAJEqKzbPMyJlxXK9omuTk5OSpA984AOFzCOGf/xh5jI9PS3p\\n\",\n       \"zSJ+M2hBaF5zyUPec/EFlYNGYUf5eRlGHefS7zxWrVq1Yiyv7tmDzqVIzCXbqM3Ft6iw81vM4qtR\\n\",\n       \"e156sVYmL7/8cl9zuR4iUgAAAJG4kAIAAIhU687m99xzj6TOXkBLS0uSOsPw69evl9Q7leX7v4yC\\n\",\n       \"DRs2hOP7779fUucGovR6aRvX3lFNZR3p/Xt8bm4ut5/vO933u6G1pc377X+XlzIft669xnbv3i2p\\n\",\n       \"c8cK+1vge4f5XQ3GiU9V23t7mHKGGPY61P018OcUO8/0Su31QkQKAAAgUq0jUqdPn5bUeZdkV9b+\\n\",\n       \"arvfffXqdIeVB38n/f3vf1+SdObMmaqmU0tVRREwuF/91V8Nx3v27JEkffWrXy3ksfqNQvkiXjsP\\n\",\n       \"HT9+vJA5Se336zvf+c4wZvv4PfHEE4U9rvH7CNbpfGnRjv3794cx6+rtI+/2N6MpHefzYh3npXaU\\n\",\n       \"xe8Icvbs2UIe10cDLSuSFTmen58Px/1+9opy8eLFcJzX3wUiUgAAAJG4kAIAAIhU69Te4uKiJOm2\\n\",\n       \"224LY3v37pUkHTp0KIz5sOE48aF3UnrZSOkNzqd3hi3CHIT1QpPa4Xf/OS+TpZDe8Y53hLETJ05I\\n\",\n       \"Kja1Z+/Xxx57rLDH6MbSiHWT9T6w9wib1Xcqc1HVfffdF45/8id/UpI0Ozsbxo4cOSJJ+vKXvxzG\\n\",\n       \"inq9JiYmwnG39/HatWvDcV5pRiJSAAAAkWodkbK7M3+leeedd0riLqRurNBQkvbt2yepXQAvFXsX\\n\",\n       \"P67yLqTftWuXpOIKU3v5/Oc/X8njZrFInL+zteLmUZZXd/du1qxZs+K43+e2qgglsvno03vf+15J\\n\",\n       \"nRmkJ598UpL04x//OIx94xvfKGQut99+ezi2RSK+7YNF6p555pkw1u9CtV6ISAEAAETiQgoAACBS\\n\",\n       \"rVN7xhfPWWjXhwpRPd+bw/rtbNmyJYyR2stfHik9nza3nzduPXiyWG+ir33ta2HMd0MfVdaDSGp3\\n\",\n       \"qM4r/WH8+7buXbCzWLFy03+PPPgSGysJ8M/F1NSUpM4C76L4RTJ33323pM7P7FNPPSWpXRKRJyJS\\n\",\n       \"AAAAkRpxi+WjT0Si6un555/PPEa9+WLqui59L4K/U+03AldmpK7sfdJMGcXmvm1Lnbqn98uiLD7i\\n\",\n       
\"btGOcWPF5JL093//95KkAwcOhDF7H/vO60XtyXf48OFwbJ8ZH021v0tFtHQhIgUAABCJCykAAIBI\\n\",\n       \"jUjtAUCeKKjP9tJLL1U9hUxWBO87UVdV4L20tCSpc4HNuPLvl69//euSpB/+8IdhbOfOnZI6Xyvr\\n\",\n       \"8XTs2LFc5+I3r/bHZSAiBQAAEImIFADUTJkF5k1giwPqUJxuBcx5t4VoOovyzs3NhTGL2llkSurs\\n\",\n       \"bD8qiEgBAABE4kIKAAAgEqk9AEA0KwT3O1DkzbpWb9q0KYwtLy9LGo/NpJvKUrF+cYe9bqOEiBQA\\n\",\n       \"AEAkIlIA0IetW7dKGs076mFs2LBBUrERKeuGvW3btjBmkTC/20URXasxGHutpHYXeGsZIY1mBJGI\\n\",\n       \"FAAAQCQupAAAACKR2quRVatWSSpn41DgeiYmJsLxOGxk3O8mquvXr5dEak9qp9Wkzm7jRTly5Iik\\n\",\n       \"zhSR9SNq4vnSzvWjyPf6svPHKKbzPCJSAAAAkSqLSK1evbpnl1q7A3zjjTfC2CgXEzbxzmrc9Bu9\\n\",\n       \"KMMNN7x5H+Q/H8OwO/zVq1cP/G+TJJGU3ZHb333b85fX59gedxj2PPZy/PjxoR9rVFiBudTZybpo\\n\",\n       \"fn+7Ju51Z3/TRrG7t/HngDNnzgz98/xnfJiO/3mfLzt+du4/EQAAYExwIQUAABApqWJzzCRJ0lar\\n\",\n       \"VfrjAgAADKrVailN08xaAiJSAAAAkSorNn/44YeHKhwbho+GVR0ZYy7ZYuZihZwvvfRS5XPply0j\\n\",\n       \"94Wz3T4XTX+NimKPX/U8/ByYSyfmkm3QufjFEdY53C/eGKbDfJOflyL1mgMRKQAAgEhcSAEAAESq\\n\",\n       \"LLW3Zs0aXb58uaqHxwi6++67JUnz8/NhLKv3z+bNmyVJ586dK2diXRS50Ws/bryxfQq4cuVKaY/n\\n\",\n       \"u6fXpeuxpYal/NPDTeZTSUX04MFg/GtgvQftnCaVe07x7w3rETeOf9eJSAEAAESqLCKVR0diwDt4\\n\",\n       \"8GBf31eHSFRdlBGFynq8Ue7sPIx7771XkvTud787jD355JOSpMcee6ySOfmoZR06+lfNFoj4RSFV\\n\",\n       \"dVk/ffp0JY9r/HtjdnZWUmen+1HeicTrGpFKkmRPkiSPJEnydJIkTyVJ8oGr45NJknwzSZLnkyT5\\n\",\n       \"RpIkW9y/+VCSJC8kSfJskiQ/W/QvAAAAUJVeqb3XJP3vaZr+D5LeJel/SZLkbkkflPTNNE3vkPTP\\n\",\n       \"V/+/kiS5R9KvS7pH0n+R9LkkSUgfAgCAkdQ1tZem6byk+avHF5MkOSBpl6RflPSeq9/2RUn/n968\\n\",\n       \"mPolSX+Tpulrko4kSXJQ0jslrYhJj0vID+U5e/Zs1VNAnxYXF3P5OT61MKw6FJhfunRJkrR27dow\\n\",\n       \"5o+rQDqvU9ULROrE+lhJ0smTJyVRbN5VkiT7Jb1d0r9JmknTdOHqlxYkzVw9npXkl0kd15sXXgAA\\n\",\n       \"ACOnr9u5JEk2SPqypN9J0/SCLxRP0zRNkqRbi/Lo9uV2t1l2QSyAZqhqd4SiWLTDL5wgAoK6OnPm\\n\",\n       \"TDgex0iU6RmRSpJktd68iPp/0jT9h6vDC0mS7Lj69Z2STl0dPyFpj/vnu6+OAQAANM4jjzzS9eu9\\n\",\n       
\"Vu0lkv6bpGfSNP0/3Zf+UdJvXT3+LUn/4Mb/xyRJbkqS5GZJt0v6XsS8AQAAKvfAAw90/Xqv1N5/\\n\",\n       \"kvSbkp5IkuTfr459SNKnJH0pSZLflnRE0q9JUpqmzyRJ8iVJz0i6Iun96RCxd1J69Tcz82Z5nO+j\\n\",\n       \"Uoei3W6sG28ZXZq3b98ejnfv3i2ps9v6qVOnVvwb9M86O48Ke2+eONEO5C8vL0uSVq1aFcZG7fdG\\n\",\n       \"sVavXi1Jeu2113L9ueOczvN6rdr7tq4ftfqZ6/ybT0j6xJDzAgAAqL3KOptjNNjdcpOWSJcRibJ9\\n\",\n       \"p+64444wdvPNN0vq3BfLijWJvkKSFhYWOv4L5CHvSBQ60SwTAAAgEhdSAAAAkUjtYShNSumVKavj\\n\",\n       \"thVm+q9NT09L6kzllJF6BIBhrV+/Phxv3bpVUnvBhNQ+l83Pz4exUSxjICIFAAAQaWQjUkUt98yy\\n\",\n       \"bt26cGxX236pclFLRK/pMF/IY2A4/nWx94Ef27hxo6T2+1WSjh07tuL7AKBu/N+gXbve3A3OR5zO\\n\",\n       \"nTsnqTNyZWOjhIgUAABAJC6kAAAAIo1saq+olN6aNWvCsRXXWUhTaoctX3nllTBmqZq8bdu2LRzb\\n\",\n       \"xqa+0M/md/LkyUIeH9d36dIlSZ0dqO29s2XLljA2MTEhqXPzzxdffLGMKQLAUPyOFnau86UsVuLi\\n\",\n       \"z3mk9gAAABCMbESqKL5obtOmTZI6C4XtaryMq24fCbMrfx+lstYERKSq41sZWKuDffv2hTHbd+/l\\n\",\n       \"l18ud2IAkCPbzcHv3GB/G/3+oqOIiBQAAEAkLqQAAAAiVZbaW79+vV566aUV477/ki/UrQs/PysU\\n\",\n       \"9t29T58+LamcVI31IJLavax8Ud+PfvSjwueA7nzHcuvuu2HDhjC2uLgoSTpx4kS5ExsDlob3n0Xr\\n\",\n       \"Kk9H/vqy3kR59VGzRR5+ARDyMTMzE453794tSdq+fXsYm5qakiQ9/fTTYcyOR6lPHhEpAACASJVF\\n\",\n       \"pK53d1DHKJRnEQR/XFWH8e985zvh2KJjPmJ2/vz5oR+D7unDOXr0aDi2pcLf//73w5i9Rnm8Vuhc\\n\",\n       \"+GEtKPz7lkjU4PxzagXFfol73udsiyT6n2vtbGL2abP5E5HKn19U9fjjj0tqL6qR2lEq/3dzFP+O\\n\",\n       \"EJECAACIxIUUAABApKSKMFuSJGmr1Sr9cQEAAAbVarWUpmmS9TUiUgAAAJEqKzavMiLlH7vqyFiR\\n\",\n       \"c7Eiy373HRyV5yXvFhqj8rzkrY5z6Xce9913Xzi2AuZnnnlmxff5vTWtS32vz9OgcylS1lxsP07f\\n\",\n       \"dT9vtkjFP38f/OAHV8ylGytsl7ovEvC7TWTt95al7q9RVWwOn/rUp8JYr+ey6LnU6Xm5HiJSAAAA\\n\",\n       \"kbiQAgAAiMSmxX2yjRjL2Ix4GH7DSFtI0G9qr6767XRsKT3f+6qbflMH6K2MdFGe/GtvG337z/by\\n\",\n       \"8rIk6Y477ghj9h7JSgE2SRmvkX1Wh0kL+X5E3Tr/33777eHY+rEdOnQo+nHrznoGSu1eaXnr93Xz\\n\",\n       \"51r7TI1jvy4iUgAAAJGISPWpqoK7Qfm76l27dkmSLly4EMaa2FW23zn3W1hu+60hP/1GOSz6Y3tS\\n\",\n       
\"VsW6MF9PVgfvY8eOFTonSVq7dq2kzgLqs2fPSqr/rg95s6jg9dhz5SM09lyNsqKiUDEsEi1JW7du\\n\",\n       \"ldQ5v3HZsYGIFAAAQCQupAAAACKR4+hTEwvoFhYWJBWbzrNiQ1/waeH1U6dOFfa4w4jZ+DSWhbul\\n\",\n       \"3qmKPPki0KmpKUmdaaqXX35ZUrnPhST95//8nyV1FgP/4Ac/kNSZznrLW94iSfr3f//3MFbmZ9AK\\n\",\n       \"y5999tnSHlNqv0bDlBL4TYazWCqsTikiz967vd6b9hwtLS2Fsaak9nyvuybz6eb5+XlJ7fS91LxF\\n\",\n       \"KLGISAEAAEQiIjXCyog2WOTgd3/3d8OY3SE+9NBDYayJRe55KDMK5fmC+k2bNknqbPFgr4dvl2F3\\n\",\n       \"j0XO2d6Ts7OzYcwiUn5+o7YgwH6fMj6TvT5rFqF88cUXC5vDhg0bJEkXL14c+N8Oeq54/vnnB36M\\n\",\n       \"qo3ywoGqF5JI0rve9S5J0rp168LY3NycpGKizESkAAAAInEhBQAAEGm04ucF6re79rg5c+aMJGlx\\n\",\n       \"cTGMWQ+ePXv2hLEi0wjX8oWcN998c8ecJOnAgQOSRvu19MXm9tpk9XTxfcfKKIC1XkxZKS7fgf+H\\n\",\n       \"P/yhpNFJgZRZ1N/rscp438ek9K61cePGcGyLNmyRhNS92znGmy2k8Oe0Ihc/EZECAACIRESqT6Mc\\n\",\n       \"vRjG8ePHJUkf+9jHwphFEara489HMayru49IjcNr6SNS/XYXLiP6YwsRerUysMiDFS1LnR36ES+P\\n\",\n       \"aFEZ/Dzt/dzENjQo3xNPPFHq4xGRAgAAiMSFFAAAQCRSe8hFXTd1tu7u48an9qxj+EsvvVTVdALr\\n\",\n       \"MdMrRWOp2C1btoQxm/+od0kuWlNSe/49QkoPdUZECgAAIBIRKWAE+YJ6K9iuQ0Sq34Jx60jsO69b\\n\",\n       \"q4Z+i+eRrapFIOPAL7cfldYd6I2IFAAAQCQupAAAACKR2gNGkC/O9d2g68I6VUvtTZJXr14dxmxT\\n\",\n       \"Y9tgV5Lm5+dLmh0Qx7+HLc3nN+PGaCIiBQAAEImIFDDi6lic7e/Sb7jhzfu5iYmJFWM+CmVd0YG6\\n\",\n       \"qmsbGBSLiBQAAEAkLqQAAAAikdoDULqsnlY+3Wf9pv7/9u4nNI4yjOP490djCf4BMSZRa6A9VLBe\\n\",\n       \"mksRiqS5lPRi9eIfEHoQEfyLp9qL5uhF8SB6MUoRqRSkpRexVXLwZAlEW02LFgy0UtNAzEFyycLj\\n\",\n       \"YWZ0stmVOtmd6ez+PpedfWeZeZMnz87DO5P3zWZCv1n5h31vxfmS8vMMZbcyt7IYc34G+2w2eM8C\\n\",\n       \"blYuj0iZmZmZFXRLj0hlD5xmrwCNRqOq7phZGwMDyVfJVvIzP03D8vIyAGtra//rGJ0ahcpGtrZy\\n\",\n       \"vPzo0/DwMLDx95PtLzIilU0LkZ89O3vQOT9K1crg4OCGz3dDtkbi6upqR47XiXi0MjIy8s/2ysoK\\n\",\n       \"UM41ZmhoqCPHyVYtaDXFgqddKI9HpMzMzMwKciFlZmZmVpDyi5uWdlIpJiYmmJycLP3cVp3Z2VnH\\n\",\n       \"vI843v3HMe8v/RTv6elpIqLlfXOPSJmZmZkVVNmIVOknNTMzMyuo3YhUJYWUmZmZWS/wrT0zMzOz\\n\",\n       
\"glxImZmZmRVUSSElaUrSZUm/SjpaRR+suyQtSrogaV7S+bTtHknnJP0i6ayku6vupxUn6RNJS5Iu\\n\",\n       \"5traxljSsTTnL0s6WE2vrag28Z6WdC3N83lJh3L7HO8akzQmaVbSz5J+kvRa2u4cb1J6ISVpG/AB\\n\",\n       \"MAXsAZ6V9HDZ/bCuC+BARIxHxL607U3gXEQ8BHybvrf6+pQkj/NaxljSHuBpkpyfAj6U5BHxemkV\\n\",\n       \"7wDeS/N8PCK+Ase7R6wDb0TEI8CjwMvptdo53qSKH3IfcCUiFiNiHfgCOFxBP6z7mv/D4XHgeLp9\\n\",\n       \"HHii3O5YJ0XEd8CfTc3tYnwYOBER6xGxCFwh+S6wmmgTb9ic5+B4115E/BERP6TbfwGXgB04xzep\\n\",\n       \"opDaAVzNvb+WtllvCeAbSXOSXkjbRiNiKd1eAkar6Zp1UbsYP0CS6xnnfe94VdKPkmZyt3kc7x4i\\n\",\n       \"aScwDnyPc3yTKgopz7fQH/ZHxDhwiGRI+LH8zkjm3fDfQg+7iRg7/vX3EbAL2AtcB979j8863jUk\\n\",\n       \"6U7gS+D1iNiwwrZzPFFFIfU7MJZ7P8bGKtZ6QERcT1+XgVMkQ7xLku4DkHQ/cKO6HlqXtItxc94/\\n\",\n       \"mLZZjUXEjUgBH/PvrRzHuwdIuo2kiPosIk6nzc7xJlUUUnPAbkk7JW0neTjtTAX9sC6RdLuku9Lt\\n\",\n       \"O4CDwEWSOB9JP3YEON36CFZj7WJ8BnhG0nZJu4DdwPkK+mcdlF5IM0+S5Dk43rUnScAMsBAR7+d2\\n\",\n       \"OcebDJR9wohoSHoF+BrYBsxExKWy+2FdNQqcSvKQAeDziDgraQ44Kel5YBF4qrou2lZJOgFMAPdK\\n\",\n       \"ugq8BbxDixhHxIKkk8AC0ABeCi+rUCst4v02cEDSXpJbOL8BL4Lj3SP2A88BFyTNp23HcI5v4iVi\\n\",\n       \"zMzMzArqizkezMzMzLrBhZSZmZlZQS6kzMzMzApyIWVmZmZWkAspMzMzs4JcSJmZmZkV5ELKzMzM\\n\",\n       \"rCAXUmZmZmYF/Q0kXnLFpKb3UQAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01c2f0d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['conv5'].data[0]\\n\",\n    \"vis_square(feat, padval=0.5)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The fifth layer after pooling, `pool5`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 35,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlEAAAJMCAYAAADaNPObAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       
\"AAALEgAACxIB0t1+/AAAIABJREFUeJzt3XmMXfd14PlzWPvG2lhciquojSIVWZsj27GgcqA4GseQ\\n\",\n       \"nX9sB4ghpNMBgo67Y4+nZSuDNKQ/0tM20OMMMsgf44kNJZioo0k7XgYtWEuz5FYUWZJlmSEliqTF\\n\",\n       \"EllkVZFVxdr3qt/8wZJC1u+U9OPv3nfvfa++H8CweHiXU/fe997hrXPPU+ecAAAA4NpsyjsBAACA\\n\",\n       \"ckQRBQAAEIEiCgAAIAJFFAAAQASKKAAAgAgUUQAAABFSL6JU9QFVPa6qJ1X1a2lvHwAAoAg0zTlR\\n\",\n       \"qlolIm+JyP0ick5EXhGR33HOvZnaTgAAAAqgOuXt/aqInHLO9YmIqOp/EZHPiMh7RZSqMt0TAACU\\n\",\n       \"DeecWvG0i6idInL2ij/3i8g9axdS1XeTElWVhoYGb0MzMzNe7N31rhR6J81aztpeFpLk0tHREbSc\\n\",\n       \"dfzm5ubeN5dHH31UHn300bI8LmlLkstdd90VtNzPfvazkueStqLkklce1dX+W+bi4mKquVj7WFpa\\n\",\n       \"Clo37eOyaZPf8bGyslLyXD7+8Y+/79+/8847snfvXnnzTf+XHCMjI6nmkrbQXGpra71YfX29F2tu\\n\",\n       \"bvZiFy9eDNre1NRUUC5NTU1e7F0LCwvvbXt6enrd5T5Ikc/R++WRdhEVVNFcmSBfOwMAAIqit7dX\\n\",\n       \"ent7g5ZNu4g6JyK7r/jzbrl8N+oqa+9EAQAAFEFPT4/09PS89+fHHnts3WXTLqJeFZEbVXWfiJwX\\n\",\n       \"kc+LyO+sXWjtnSjrV0+W0LtWNTU1QcsVnVVgWrfXGxsbvVhVVZUXs36dd6UrLxogbTfffHPQcm+9\\n\",\n       \"9VaJM4kX+mu1ou/DsmfPHi9mvQdZvyoKfQ8Pdccdd7zv33d1dcmuXbvk/Pnz3t9Zv84rRwsLC0Gx\\n\",\n       \"iYmJoO1Zv3YO9UG/pkuy7XKXahHlnFtS1S+JyI9FpEpE/oon88oHRRSAcrBr1668UwBEJP07UeKc\\n\",\n       \"e0pEnkp7uwAAAEXCxHIAAIAIFFEAAAARUv91XhFYc1bKkTX/w5rXYc36yKK5cvfu3R+8kIicPXv2\\n\",\n       \"gxdKyGq4t4TOuEniyJEjXuzGG2/0YgcPHvRib7zxRklyylpdXZ0XC+1jKXJjObKZ3XP48OGg5QYG\\n\",\n       \"Bkqcif2ea33GjI+PlzyXvFifRR/96EeD1n322We9WCWNNuJOFAAAQASKKAAAgAgUUQAAABEoogAA\\n\",\n       \"ACJURgf2Glk0D6NYzYFW46MldLpvEh/+8Ie92HXXXefFrOb/Smkst66NJF9OmqbQByLGxsa82OTk\\n\",\n       \"ZNrpFMaZM2fyTuE9R48eLfk+Qr9c2fr2B+tbImZnZ72YNWG8HIU+uLMRcWQAAAAiUEQBAABEoIgC\\n\",\n       \"AACIQBEFAAAQoSIby2tra/NOIRVWE6vVnJtXI31/f38u+7Vk0TAeympyt6Y8W42olcJqqH3ppZdy\\n\",\n       \"yKRy0eybDes9d25uzotVShO5ZXFx0YtZ38xgHYO2tjYvdunSpXQSKwBehQAAABEoogAAACJQRAEA\\n\",\n       \"AESgiAIAAIhQkY3l5ThV+MCBA16sqakpaN3x8XEvNjMz48W6u7uvPTFcM6sJ02qkHBgYyCKdXNTV\\n\",\n       
\"1Xkxa5K7xXpIIM1p2mfPnk1tW3mqrq7It+/MWFP1renkra2tQetWcmP50tKSF9uxY4cXs46V9QDN\\n\",\n       \"yy+/nE5iBcCdKAAAgAgUUQAAABEoogAAACJQRAEAAERQq0GupDtUdVnvEwAAIIaqinPO/9oJ4U4U\\n\",\n       \"AABAFIooAACACBRRAAAAESiiAAAAIuQy8lbV7M+6yqZNfn23srISvU+rmT0kj1Ioei719fVezJpE\\n\",\n       \"e+ONNwbt4/Tp017MmrI+NTXlxYp0XNra2ryY9XOEsq5xawr1/Py8Fws9Ltdff33Qcr/85S+DlrOO\\n\",\n       \"i5Xz8vJy0PZCp26vnZhc9NdQaC6bN2/2YtbE9ixySZuVS01NjRezpmFnkUuRjgu52LlYnzvWZPi5\\n\",\n       \"ubmgfVifbdaUeuuzaD3ciQIAAIhAEQUAABCBIgoAACACRRQAAECEXBrLQzQ0NHgxqxHXarq1Gs8Q\\n\",\n       \"zjqmjY2NXqyjoyNoe1bTcjlOrb+WZsMQ1oMSeV27dXV1Xsy6DiyhTeRWE7XFujYmJyeD1i03oce4\\n\",\n       \"UmTRRI7KYT14Ultb68VCG8ut7VmN5deCO1EAAAARKKIAAAAiUEQBAABEoIgCAACIUIjGcqtRrKur\\n\",\n       \"y4tZTZhZNGYeOHAgaLnh4eGgWNFZE2sHBwe92PT0dND2rAbq0EnVoazGd8vMzEz0PkIbqIvEauq3\\n\",\n       \"msiRj9CHOCxJrmVL2tPtkb7Q126lPLAwNjbmxawHzEJZ7+FJ39e5EwUAABCBIgoAACACRRQAAEAE\\n\",\n       \"iigAAIAIhWgsT6JI08krpZnPmhhtTYQNnRLb2dkZvW45sqbtW2ZnZ0ucicjevXu9WFNTkxezprGf\\n\",\n       \"OXMm1VwmJiaClrMeNElTc3OzF0t7Gj0qR+hE6ywePLEe+qlkLS0tXsw6zqGv35qaGi8W+n69Hu5E\\n\",\n       \"AQAARKCIAgAAiEARBQAAEIEiCgAAIEIhGsut5vAiNWn39fV5MavZMHSC90YzMjJS8n2kPb05iS1b\\n\",\n       \"tgQtd+7cOS9mTXdPor6+PtXtpS3J9OFYRXqowWpqtaZSZ5Hz22+/XfJ9lKPQ11AW7/+Li4sl30eR\\n\",\n       \"WA+BhD6gkhXuRAEAAESgiAIAAIhAEQUAABCBIgoAACBCLo3la5vFGhsbvWWsqaRWU11HR4cXS7sJ\\n\",\n       \"02r2tRrLrYbQ6upC9O5fE+tnS3sa7549e1LdXhZaW1u92Pj4uBdLMlXYmtBrsZpdrXMUOhXdanLv\\n\",\n       \"6uoKWtdqDremou/YsSNoexcuXPBi1msr1tLSUmrbSso6P1lMsrdY31SAYj0wlMVUdOu9Ja+HMcbG\\n\",\n       \"xrxYkvNhNaUnfb1xJwoAACACRRQAAEAEiigAAIAIFFEAAAARNOtmQlV1NDACAIByoKrinDOfGOJO\\n\",\n       \"FAAAQASKKAAAgAgUUQAAABEoogAAACLkMk77jjvuuOrPU1NT3jJrp5qvp7+/34tZU11HR0e9WJLJ\\n\",\n       \"0klYjfVFyqW9vd2LWZO0rUnu8/PzXsyaSG8tNzg46MWKdFx27tzpxayJujMzM0H7sKZwb9++3Yv1\\n\",\n       \"9fV5sdDj0tnZ6cVGRkaC1rUU5dotSh4i5LKeJLncfvvtQcu9/vrrJc8lbeRiS5JL6DeDWN9W0NDQ\\n\",\n       
\"4MVC38NFuBMFAAAQhSIKAAAgAkUUAABABIooAACACLk0lp8+ffqqP4+Pj+eRRsWora31YgsLC9Hb\\n\",\n       \"s5ql29ravFhHR4cXGx4e9mJnzpyJzqVIzp8/n+r2rOb6s2fPRm/Pav5f+xDHep599tno/QJps97T\\n\",\n       \"rCZj65qfnJwsSU4oLuuBHIv1IFpra2uifXMnCgAAIAJFFAAAQASKKAAAgAgUUQAAABFyaSy/lmmg\\n\",\n       \"uNqOHTuClhsYGEh1v1azptX8aU3hRjasc3Ts2DEvZk30Rz6sxmjLRmuWDn0YxZpAXck+9rGPBS33\\n\",\n       \"4osvljiTbFhN3/X19V5sdnY2eh9NTU3R64pwJwoAACAKRRQAAEAEiigAAIAIFFEAAAARcmksd87l\\n\",\n       \"sduKYE0Tz+J4btrk19tWY3lejZ5dXV1ezHqAYXp6OnofVVVVXiztJu2VlZVUt2edNysGFMnFixe9\\n\",\n       \"WE1NjRdL8s0MKE/V1X7ZMjQ0FL29ubm5JOlwJwoAACAGRRQAAEAEiigAAIAIFFEAAAARcmks32hT\\n\",\n       \"ZtOUZDJrEhMTE16ssbExh0xs1hRbK5aksbwctbe3By03ODjoxZhsXnobbRJ5KOubDzZv3uzFRkZG\\n\",\n       \"vFjaD2cUSaVMIg9lfe40Nzenuo/h4eFE63MnCgAAIAJFFAAAQASKKAAAgAgUUQAAABFyaSzfSLKY\\n\",\n       \"Dm1NDreaK5M09FtTYkNlMek7C9ZxtqYoW6yp8lZjfkNDQ9D2mpqavJg1oX1qaipoe+V4PmJVyvWY\\n\",\n       \"NqthN/T6CRV67JM8tGI1paM8We+51rd2hOro6PBiSafecycKAAAgAkUUAABABIooAACACFFFlKru\\n\",\n       \"VtXDqnpMVY+q6r9bjXeo6jOqekJVn1bVtnTTBQAAKAa1Gl4/cCXV7SKy3Tn3uqo2i8jPROSzIvJ7\\n\",\n       \"IjLsnPumqn5NRNqdc19fs66L2ScAAEDWVFWcc2r9XdSdKOfcoHPu9dX/nhKRN0Vkp4g8KCKPry72\\n\",\n       \"uFwurAAAACpO4p4oVd0nIneIyE9FZJtzbmj1r4ZEZFvS7QMAABRRojlRq7/K+68i8sfOuUnVf7nb\\n\",\n       \"5Zxzqmr+3u7RRx997797enqkp6cnSRoAAACp6O3tld7e3qBlo3qiRERUtUZE/j8Reco59+erseMi\\n\",\n       \"0uOcG1TVHSJy2Dl3YM169EQBAICy8H49UVF3ovTyLae/EpE33i2gVv1QRB4SkW+s/v/310soa1bh\\n\",\n       \"FprHli1bvJg1NdWaCG5N6LWWC83FmsY7Pz8ftK4lyXFJm5XLtm3+b4QvXLiQ6n6tKeHW9G9rubm5\\n\",\n       \"uVRzsRT9HFm5tLX5D+bedtttXuz06dNe7Ny5c15s7QT+cjwmWUh7Mr5ldHTUi1kT9K1p50U6LuRi\\n\",\n       \"57J161YvdvHixeh9WFPHrc9Aa3K9dVys2KFDh4Jy6evr82LWdXotN3pif533ayLyuyJyRFV/vhp7\\n\",\n       \"RET+k4g8qaq/LyJ9IvK5yO0DAAAUWlQR5Zx7QdZvSr8/Ph0AAIDywMRyAACACBRRAAAAERKNOEhL\\n\",\n       \"Z2enF+vq6vJiVjPaqVOnSpJTWtY2xCaVpIm8HFlNrGmbnZ0NWi6LJvK0VVeHvcSt11YS1gMVFqsh\\n\",\n       
\"Gemyru/Qaz7U9PR0qtvLi9UEbT0QYTl69KgXC33PqKmp8WLWZ4fVfJ22ycnJVLdXX1/vxSYmJqK3\\n\",\n       \"ZzV9hx7nxcXF6P2uhztRAAAAESiiAAAAIlBEAQAARKCIAgAAiFCIxvKiC51ObuErbpJJu+F5o9mz\\n\",\n       \"Z0/Qcm+//Xaq+x0ZGfFiJ06c8GLWayvthzFQelZD9kZjTekfHBwMWreSr3mrGT7tB0ryfMCMO1EA\\n\",\n       \"AAARKKIAAAAiUEQBAABEoIgCAACIUIjGcqsJ1YoBKF/79u3zYtZk83Pnznmxvr6+EmSEtFRKY/nC\\n\",\n       \"woIXs741wfp56+rqovebxSTyUGl/K0alfysBd6IAAAAiUEQBAABEoIgCAACIQBEFAAAQoRCN5Zbu\\n\",\n       \"7u6g5c6fP1/iTESam5uDlrOmL280DQ0NQcvNzs6WOBOIhF+TNTU1XmxxcTHVXEJfRyg/U1NTeadQ\\n\",\n       \"MhcuXPBiVhN52g3ZaQt9/fEtG9eGO1EAAAARKKIAAAAiUEQBAABEoIgCAACIUNjG8q1btwYtl6Sx\\n\",\n       \"3JqWbE2OXVpait5HkVjNw3ntg8by/FjTltva2rzYxYsXU93vpk1h/2azJpYDRWJ9TiRprreavtNu\\n\",\n       \"1rc+77JgNeZXEu5EAQAARKCIAgAAiEARBQAAEIEiCgAAIIJmPZ1UVR0TUQEAQDlQVXHOqfV33IkC\\n\",\n       \"AACIQBEFAAAQgSIKAAAgAkUUAABAhFwmlqua/VklZTWzb9682YtZ05ytieXz8/PR+52bm/NieRwT\\n\",\n       \"ETu/IuVinaPJyclccuG42Lm0tLR4sSTTlq2p94uLix+YR5HOT2gunZ2dXsyahm29Z1ixJLl88pOf\\n\",\n       \"DFru2Wef9WIrKyup5pI2crFVSi67d+/2Yk1NTV5seHg4KHYtD79xJwoAACACRRQAAEAEiigAAIAI\\n\",\n       \"FFEAAAARcmksL4qFhQUvlnZzbl5Neln44he/GLTc3/3d33kx69hbsmiWbmxsLPk+KlmSJnLL2iZy\\n\",\n       \"ZMNq6Lds2uT/29tqLM+ClUuS17P1fv3AAw8Erfv66697sYGBgehcYLPOeUdHR9C61oNjSXEnCgAA\\n\",\n       \"IAJFFAAAQASKKAAAgAgUUQAAABEK21huTfK1msKSNO5ZTYTV1f4hsSaWh7Ka4DaaqqqqvFOoOFk0\\n\",\n       \"3Feq5uZmL1ZfX+/FxsfHvVjaTe8jIyOpbi+Jf/zHfwxaLsn7ofWea03ft469xZpK3draeu2JrQr9\\n\",\n       \"hom8tLW1ebGxsbEcMimW0IciQq+ra8EnPAAAQASKKAAAgAgUUQAAABEoogAAACKo1UhX0h2qBu3Q\\n\",\n       \"ahi0YhcuXAjar/VzdnV1eTGradKayBzaXGk1vFnTukMnm6fd+G4dl7ymrJOLjVzSzSPtxvKiHBOR\\n\",\n       \"8szFemAo9BsNLNb5tR7EKPpxsaTdWF6O14sldEr9zMxMVC6qKs45MxnuRAEAAESgiAIAAIhAEQUA\\n\",\n       \"ABCBIgoAACBCYRvL02b9nFu2bPFiy8vLXixJY3loLuXYzJc2crGRS+nzqKur82Lz8/O55JIEudjI\\n\",\n       \"xUYuNhrLAQAASowiCgAAIAJFFAAAQASKKAAAgAj++OsNZGRkJO8UAGTspptuClpuaGjIi1lTzJPY\\n\",\n       
\"tMn/d6z1LQdW023oVG9ryrW1X2vy9crKSlB+HR0dQblYjcJJHm5qaGjwYrt3747eXhJVVVVezPpW\\n\",\n       \"DEt3d3f0fs+fPx+9LpLjThQAAEAEiigAAIAIFFEAAAARKKIAAAAi5DKxPOt9AgAAxGBiOQAAQMoo\\n\",\n       \"ogAAACJQRAEAAESgiAIAAIiQy8Rya2ptqVnN7HnkIRKeizUBN3R7oftYXl5ONRdre6HK8Rxlwcql\\n\",\n       \"qakpaLnZ2dno/dbV1Xmxubk5L1Zur+ebb745aLm33nqr5LmkzcrFmuA9ODjoxZaWlqL3a70/WNsr\\n\",\n       \"0nG59957vZh1fb/55ptezJra3tjY6MW2bt3qxX7+8597sSIdl/r6ei82Pz+f6n6tSfMzMzNerEjH\\n\",\n       \"ZT3ciQIAAIhAEQUAABCBIgoAACACRRQAAECEXBrLESZJk7bFauYLlXYuabMaEJubm4PWnZycTDud\\n\",\n       \"krOaMC379u0LWq6vr8+Lpd1MinwMDAx4sbRfz2lv7+677w5a7mc/+5kXC20KfuWVV7xY6DVvNZFb\\n\",\n       \"jhw5ErRckVifE6HvpSMjI2mnU3jciQIAAIhAEQUAABCBIgoAACACRRQAAEAEGssDVFeHHaYkE3+R\\n\",\n       \"vtDmz3JsLA9lTR/e6EInkWdh0yb/37ErKyup7iO06Xv//v1By7399ttJ0ik063zU1tZ6MasB3Vq3\\n\",\n       \"UiwsLHgxqwHdmtA+Njbmxa5lInjRVe5ZBwAAKCGKKAAAgAgUUQAAABEoogAAACLQWB6gUhrGQxvk\\n\",\n       \"y5HVqDg+Pp5DJnYz99zcXA6ZiExNTeWyX4s19bhI+RVFFs3mRffqq6+WfB9JmsNDm/VDH24pkomJ\\n\",\n       \"CS9mXX/btm0L2t6FCxcS51Rk3IkCAACIQBEFAAAQgSIKAAAgAkUUAABAhMrtNMaGl1czd177tfT3\\n\",\n       \"9+edQuFYDe6tra1ebHR01IvNzs6mmkuRGsbPnTuXdwq5s85HkV7PWQh9sGHnzp1B2zt16lTinIqM\\n\",\n       \"O1EAAAARKKIAAAAiJCqiVLVKVX+uqj9a/XOHqj6jqidU9WlVbUsnTQAAgGJJeifqj0XkDRF5d9Lh\\n\",\n       \"10XkGefcTSLy3OqfAQAAKk50Y7mq7hKRT4nIn4nI/7waflBE7lv978dFpFcopArDmkSLZGpra71Y\\n\",\n       \"Q0ODF7Om3k9PT3sxq+m56Do7O73YyMiIF5uZmfFi1vGzLCwsXHti69i3b58X27x5sxezzlnajeWW\\n\",\n       \"PI6JiD3BOzSXUDU1NV7M+rYBayK4tVzoPrJQKdP3rSnr1rUROrW90iW5E/UtEfn3InJl2/4259zQ\\n\",\n       \"6n8PiUjYXHgAAIAyE1VEqeqnReSCc+7nIqLWMu7yPxvC/ukAAABQZmJ/nfcxEXlQVT8lIvUisllV\\n\",\n       \"/0ZEhlR1u3NuUFV3iEhlf/MgAACoKL29vdLb2xu0bFQR5Zz7ExH5ExERVb1PRP4X59wXVfWbIvKQ\\n\",\n       \"iHxj9f+/H7N9AACAPPT09EhPT897f37sscfWXVZDm/XW3cDlIuqrzrkHVbVDRJ4UkT0i0icin3PO\\n\",\n       \"ja1Z3iXdJwAAQBZUVZxzZutS4iIqIhmKKAAAUBber4hiYjkAAEAEiigAAIAIFFEAAAARoieWJ7F2\\n\",\n       
\"ovPc3Fyq27emD4+Pj3sxVf9XnFYsSQ+XNb3amtxs7TdUXV2dF7MmzFqsn+2OO+7wYjt27PBik5OT\\n\",\n       \"Xmzv3r1e7Pnnn/diN9xwgxc7fPiwF9u2zZ/XGjp5PfS62rp1qxcbGhryYqHnqKmpyYtZ08lDWeco\\n\",\n       \"yfWSRFFysfLYuXOnF7OOu/VeEKqtzf860EuXLnmx0GNiXd8W63q0JDk/ra2tQcuFHr+0c0ly3pLk\\n\",\n       \"Ul3tf0xa0+yT5LJpk38/I4ve4bRfz9axsljHryjvLSLXduy5EwUAABCBIgoAACACRRQAAEAEiigA\\n\",\n       \"AIAIuTSWp91IvlZo47HFavDr7OwMWvfCBf+rAmdnZ6NzCRXaRB7qxIkTXqy/v9+LDQ8Pe7EXXnjB\\n\",\n       \"i1nNuGfPng3KxTqmaUt7H0mayBGvvr7ei9XW1nqxJA3Kzc3N0etWirQbrQ8ePOjFfuM3fiNo3e99\\n\",\n       \"73teLPS9JZT1s1mfE5aVlZWg5SplAHXowwkjIyMlziQ73IkCAACIQBEFAAAQgSIKAAAgAkUUAABA\\n\",\n       \"hFway4uspqbGi4U2k168eNGLlWPDoDUlNknzeui0+LRZjcaWUj/ogGxYzb5pN/lb3zaQROgk8ixY\\n\",\n       \"3/RQVVUVtFzazdyhrG8+SDsX6+GErq6uoHXPnTuXai5FtxHfS7kTBQAAEIEiCgAAIAJFFAAAQASK\\n\",\n       \"KAAAgAg0lq9hNVBvtAnUVtO31XBfV1fnxRYXF72Y1Zy6c+fOyOxQrqxJ142NjUHrJvkWgjSl/e0A\\n\",\n       \"VjO8FUsyETyU1TBuve4vXbqU6n7feOMNL2Y1c2/fvj3V/YayrlvYNtpnpQh3ogAAAKJQRAEAAESg\\n\",\n       \"iAIAAIhAEQUAABCBjrk1rAnjoVNYizSdvLW11YuFTtm1LC8vezGr6dQ6BlZjZlNTU3QuoYo+Pddq\\n\",\n       \"4LeaiiuZdb1YDyeEsNazrtFQ1rloaGiI3l6olZWVku/DYn2zQEdHhxfLosndemjFauofGRkpeS6W\\n\",\n       \"tJvrUb421js2AABASiiiAAAAIlBEAQAARKCIAgAAiFDYxvLQBtssmjCthstQoY2tzc3NXmxqaip6\\n\",\n       \"v0lYzb5JJjUvLCx4sb6+vujt5eXAgQNezGqan5yc9GKjo6Ne7K677vJi1qTmSmE1JFvHqiis95bQ\\n\",\n       \"9xvrIQ7rNRQ6iX3//v1eLEnTvMU6F9b1Hdv4fy2sbzmwms2zMDMzk8t+UR64EwUAABCBIgoAACAC\\n\",\n       \"RRQAAEAEiigAAIAImvWUbVV1RZrsDQAAsB5VFeec/xUTwp0oAACAKBRRAAAAESiiAAAAIlBEAQAA\\n\",\n       \"RMhlYrmq2Z91lZtvvtmLWQ3pw8PDXsyaDm2tG5JHKYTmYk1ttyYmW+uGNu8nOS73339/0HJHjhzx\\n\",\n       \"YhcuXEg1l7SF5tLU1OTFrHM0OzsbtN/QCdENDQ1erKWlxYtZE7GtdcfGxrxYR0eHFxsZGfFioefI\\n\",\n       \"mrBtTWifnp7+wG1Z5+fGG2/0YtbPdenSJS9mHff6+vqg/VrfaFD069Z6b7Gu5SQT5a1cPvzhDwet\\n\",\n       \"29jYGLTcmTNnvNi2bdu82EsvveTFin6OspAkl927d3sx61sJrM9ja3p/0Y/LergTBQAAEIEiCgAA\\n\",\n       
\"IAJFFAAAQASKKAAAgAi5NJaHsBrKrJjVEJo2q2HXkqQJ02I1KFuYAJ+fkCboa2E1Zlqsxkzr9WEt\\n\",\n       \"Z8UsVkNoEtbDInV1dV7s7NmzXsx6EGGtwcFBLzY1NRWU2/LyshcLPU5FZzWMW9dt2u9fFmsfFy9e\\n\",\n       \"9GKh157V/N/W1nbtiWHDsl4f14I7UQAAABEoogAAACJQRAEAAESgiAIAAIhQ2MZya9KwxWoITVtX\\n\",\n       \"V1fQclk0ZhbJs88+68Vo6syG9TBB6FR0izU5PO1pwadPn/Zi1vR0a6I4AJEdO3YELTcwMFDiTOz3\\n\",\n       \"oKqqKi9mfVNB6EMb1vas94zQB0hKgTtRAAAAESiiAAAAIlBEAQAARKCIAgAAiFDYxnKr8WzTJr/m\\n\",\n       \"y6KxfHx8vOT7sIROUk17anYSodPdQx8cCLVly5ag5YaHh1Pdb2jzddpT5a1J/aHTzi0LCwtJ0gli\\n\",\n       \"XadpXrtZ/AxJpxuvZb1erJh13Yb+vEV6f7De1638rEn21rpWk3Ha32KR12sctsXFxVS3l/T1wZ0o\\n\",\n       \"AACACBRRAAAAESiiAAAAIlBEAQAARCirxnKriTALIyMjuezXaq60FKlxtL+/P+8UNoSOjg4vtrKy\\n\",\n       \"ErRuaHO99SBHkW3fvt2LzczMBK1rHRNr+n7aTa2tra1erL6+PtV9FEk5/myh38JgPYAU+poMlcUk\\n\",\n       \"8lChDfxJpolbzfqh086zUl7vkgAAAAVBEQUAABCBIgoAACACRRQAAECEXBrL1zZMh04dtyb0Ws3X\\n\",\n       \"VtNtEu3t7UHLTU5OerEkU6RHR0ej101b6ATh0Km93d3diXO6Ul5T5Ys0pTjtid1JmmI7OzuDlrNe\\n\",\n       \"M11dXV4sZCL9xYsXvdjs7KwXC22Ita6ptM+39Rq3mq+zmMaeRE1NTdBy1jcVWO8jVjO39WDRpUuX\\n\",\n       \"vNirr74alEvarAcx0m4sz4I1Md96oGJwcNCLpd30vXPnTi9mXQfW68h6WOTWW2/1Yjt27IjM7jLu\\n\",\n       \"RAEAAESgiAIAAIhAEQUAABCBIgoAACCCZt0Yq6quSM24AAAA61FVcc6p9XfciQIAAIhAEQUAABCB\\n\",\n       \"IgoAACACRRQAAECEXCaWq5r9WSVlNbPnkYdIslysCa4XLlwIWteaOhuay9atW4P2Ebrc0aNHo3Ox\\n\",\n       \"HDhwwIv19/d7MWsa78DAQKq5pK1Sctm1a1fQctZ5SzOPtIXmYl17FmuKe9q51NbWBm3Pes+wJnNb\\n\",\n       \"3zqR5BxZyyV5ICk0l927dwdtL/TnOHPmTHQuWbBysc5vFg+DhR4X69surG85sKanW6+tc+fOBeWy\\n\",\n       \"Hu5EAQAARKCIAgAAiEARBQAAEIEiCgAAIEIujeUhOjo6gpYbHR1Ndb/79+/3Yj/4wQ+C1n344Ye9\\n\",\n       \"2FNPPZWLiRC7AAAgAElEQVQ4pytZTXBVVVVeLO1GwImJCS9mNSCGNpanLbS5PknTLsLV19d7sXvv\\n\",\n       \"vTdo3SeeeCLtdK5y5513Bi332muvpbpf69rbtm2bF2tsbPRiQ0NDqeaysLAQvW5eTcZZOHv2bNBy\\n\",\n       \"e/bsKXEm6TfXhyr6N4rU1NQELVddnU15w50oAACACBRRAAAAESiiAAAAIhS2J6qpqSloubm5OS82\\n\",\n       
\"MzOTdjqFYQ3btFi9U0lYx9liDdHMQtq9cahcoUM/0+6JqhQrKysl34c1mNQa6JnXe701RDNt1kBU\\n\",\n       \"a4AkRJaWloKWK0WfFHeiAAAAIlBEAQAARKCIAgAAiEARBQAAEEGzHqylqkE7tAbOWUMlp6envZjV\\n\",\n       \"+Fj0b84OzSXtxvJKOS5pIxdbUXJJkkd7e3vQcpcuXSp5LmmrlFysYa2hD7eknUvaQnNJMmwztIF6\\n\",\n       \"cXExaL9ZCD0uocM2rWHQFqtZf20uqirOOfPAcCcKAAAgAkUUAABABIooAACACNFFlKq2qerfq+qb\\n\",\n       \"qvqGqt6jqh2q+oyqnlDVp1W1Lc1kAQAAiiK6sVxVHxeR551z31HVahFpEpH/VUSGnXPfVNWviUi7\\n\",\n       \"c+7ra9bL5Suiy7Gx0GJNcu/s7Axa15qyWynHJW3kYkuSy969e73Yvn37gtZ9/vnnU8sjbeRiS5JL\\n\",\n       \"R0dH0HJjY2NeLO0Hi7Zv3+7FrPfhd955x4tZk7RDcwltjLZ+3tDjNzIyEpRLFtK+dq0GdKuRPiSX\\n\",\n       \"1BvLVbVVRO51zn1ndYdLzrlxEXlQRB5fXexxEflszPYBAACKLvbXedeJyEVV/a6qvqaq31bVJhHZ\\n\",\n       \"5pwbWl1mSES2pZIlAABAwcR+G1+1iNwpIl9yzr2iqn8uIlf92s455/L61R0AAECM3t5e6e3tDVo2\\n\",\n       \"tojqF5F+59wrq3/+exF5REQGVXW7c25QVXeIyIXI7QMAAGSup6dHenp63vvzY489tu6yUUXUapF0\\n\",\n       \"VlVvcs6dEJH7ReTY6v8eEpFvrP7/92O2v57QZrnR0dE0d1sodXV1Xsya7r7RWE2YtbW1Xmzr1q1e\\n\",\n       \"zGq4R/ruueceL9bd3e3FrAnCaxvLUdn27NkTtJw1xXxmZibVXLZs2ZLq9izWt3F0dXUFrRs6yd1q\\n\",\n       \"QC9H1udd6LGyvsnDav6/FrF3okRE/q2I/D+qWisivxSR3xORKhF5UlV/X0T6RORzibIDAAAoqOgi\\n\",\n       \"yjn3CxH5sPFX98enAwAAUB6YWA4AABCBIgoAACBC9MTy6B2quqz3CQAAECP1ieUAAAAbHUUUAABA\\n\",\n       \"BIooAACACBRRAAAAEZIM24ymenV/1h/+4R96y1jTjY8cORK0/W9961tezGpmX5uHiMiXvvQlL3bn\\n\",\n       \"nXd6sXfeeceLPf74416sr68vOhdL6HTy0Km9SXJJG7nYyjEXa7J+6PasCdHnz5+PyiMLlZLLgQMH\\n\",\n       \"gpY7fvx4yXNJW5JcampqgtZdWFjwYi0tLV5sYmIiOpe0Ff0cWZ93f/RHf+TF7rvvPi9mTWj/p3/6\\n\",\n       \"Jy/2F3/xF15sampq3TzX4k4UAABABIooAACACBRRAAAAESiiAAAAIuTSWL7W4cOHvdhbb73lxW6/\\n\",\n       \"/fYs0gliNb9WV5f+cIY2jKetu7s7aLm1DcDIzvbt273YJz/5yaB1//qv/zrVXDZtCvv32fLycqr7\\n\",\n       \"RZjdu3d7sTfffDNo3YceesiLJbl+rGvFii0tLUXvI4nQh3msxvLJycm009lQZmdnS76P6enpROtz\\n\",\n       \"JwoAACACRRQAAEAEiigAAIAIFFEAAAAR1JoSWtIdqma7w1Whk1k3b94ctD1r6qw12XZxcTE6lyyE\\n\",\n       
\"5tLc3By0vWuZ9BqbSxbKMZcsGstDc7FeCxbr9WFNeV7boFuO5ydUfX29F5ubm0s1F6ux/MyZM0H7\\n\",\n       \"CG0sr+RzlAS52JLk0tTU5MWqqqq8mNXob+13bUxVxTlnJsOdKAAAgAgUUQAAABEoogAAACJQRAEA\\n\",\n       \"AESgsXyNtra2oO2NjY2VPJcshOZiNbtaQhtgk+SSBXKxZZHL/v37vdjaqcKDg4MlzyNUXo3l1j5W\\n\",\n       \"Vlaicwl9MCbURrtuQ5GLrci50FgOAACQMoooAACACBRRAAAAESiiAAAAIlTnnUDRWA3jmzZRawKl\\n\",\n       \"0NnZ6cW2bt3qxQYGBrJIpxCSTCdPwmoi7+7uDlr3/PnzqeZSXe1/NFnXimVoaCjVXID3Q3UAAAAQ\\n\",\n       \"gSIKAAAgAkUUAABABIooAACACIVtLN+9e3fQcrOzs15seHg4aN3QycDWFGBLY2OjF5uZmQlat+jq\\n\",\n       \"6uqClksysbzoqqqqvJjVAHvo0KHofbz++uvR6xaddaysicRWw/TFixdLklM5S/uBl5tvvtmLhb4P\\n\",\n       \"p91Yfvvtt3sx6/3VEtpY3tzcHLRce3t70HITExNezJoCX3TWOV/7jQEiIqOjo17MOqa7du3yYidP\\n\",\n       \"nozMrni4EwUAABCBIgoAACACRRQAAEAEiigAAIAImvbU2w/coarLep8AAAAxVFWcc/5TMMKdKAAA\\n\",\n       \"gCgUUQAAABEoogAAACJQRAEAAETIZWK5NaV4rdBpstYU5LGxMS9mNbNbebS0tATt15rgGjrZPDQX\\n\",\n       \"a9qt9fNaU9vb2tq8WJLjkgVysYXmUltbG7S9hYWFkudSakXJQyQ8l/vuuy9oe2+//XbQcmfPno3O\\n\",\n       \"JQtJcuno6AhazpqanSSXL37xi0Hbe+2114KWO3bsWHQulq6urqDlQif8l+P10tnZ6cU++9nPejFr\\n\",\n       \"KvpPfvKT6FzWw50oAACACBRRAAAAESiiAAAAIlBEAQAARMhlYvna2KFDh7zlDh486MWWlpa8mNUo\\n\",\n       \"NjIy4sVCm9YaGxu9mGVubs6Lpd1YnoVyzOWGG27wYo888kjQPh5++GEvluR6sXR3dwctd/78+aDl\\n\",\n       \"kuRy9913By336quvljyXNBUlD5HwXLZs2RK0vZmZmejlin5crAdexsfHc8kl9LhYyyX53EzyPnfn\\n\",\n       \"nXd6MevBoh/96Eep5pKF0Fysh6vuvfdeL7a8vOzFYhvLmVgOAACQMoooAACACBRRAAAAESiiAAAA\\n\",\n       \"IuQysTxNVlNwEqFNnVmwpqfX1dV5MWt6utVsmLZPf/rTQcs999xzXiyL/PKyefPmoOVCG8uzUF9f\\n\",\n       \"78WshycQb3JyMmi5qqqqEmeCa2E1PB84cCBo3ePHj0fv13r9WddQ6HVVKawHzA4fPpxDJpdxJwoA\\n\",\n       \"ACACRRQAAEAEiigAAIAIFFEAAAARCjGxPAvlOJnVaiK3ms0vXbrkxaxprUlysaTdWF6O58jS3Nwc\\n\",\n       \"tNzU1FTJc7Em8C8sLHgxq1kz7VzSVJQ8RMhlPZWSi/XQxSc+8YmgdZ966qnoXKzYF77whaD9njx5\\n\",\n       \"0otZ30pQKecobUwsBwAAKDGKKAAAgAgUUQAAABEoogAAACIUdmK51VBmNfhZDdRW42w5mp+fD4rl\\n\",\n       
\"5cUXX/RiHR0dXqyzs9OL9ff3lySnIghtGM9CkSbww2e9p1kTy62m23I8t21tbV5sbGwsh0zCWcc+\\n\",\n       \"9EGMLFjXELLDnSgAAIAIFFEAAAARKKIAAAAiUEQBAABEKGxj+S233OLFampqvJg1+frEiRNB+7Aa\\n\",\n       \"nkObHEMngleym266yYv91m/9VtC6f/qnf5p2OoXR3t7uxayp8huN9XqzjI6OerGsv1mhFKzr4vbb\\n\",\n       \"bw9a9/z5817srbfeSpxTEWza5P9bvra2Nmjdubk5L5Z2o7X1MI81ETxt1jVvXQfW8Tty5EiquVif\\n\",\n       \"vYuLi6nuo1xxJwoAACACRRQAAEAEiigAAIAIFFEAAAARNOuGTVV1ldAkCgAAKp+qinPO/xoV4U4U\\n\",\n       \"AABAFIooAACACBRRAAAAESiiAAAAIuQysVzV7M+6SlVVlRezpthaE8stVjN7SB4iIjfccIMXa2tr\\n\",\n       \"82Jvv/22FwudvhyaSxJ1dXVezJr4m0UulryOi6VScvnQhz4UtJw15d96bRXluBQlD5FkuTQ0NHgx\\n\",\n       \"axL00tJSqrmETvW23oct09PT0blYfu3Xfi1ouZGRES92/Pjx6Fysn7e5udmLVVf7H53j4+NezDpv\\n\",\n       \"5Xjt7tu3L2h7e/bsCVruJz/5SVAu1jR269hbrxnrMzr0G0mu5eE37kQBAABEoIgCAACIQBEFAAAQ\\n\",\n       \"gSIKAAAgQi6N5WmqqanxYlaTWRKHDh3yYrt27fJiVpO21Viel/n5+bxTADYs68GY/fv3B6177Nix\\n\",\n       \"VHMJbXjO4sGTHTt2eLGDBw8GrfvKK6+kmktjY6MXW1lZ8WJWQ3sWQh8IsM5bEn19fUHLWU3fSVjX\\n\",\n       \"5NatW4PWtR4I2Lx5sxc7c+bMtSd2Be5EAQAARKCIAgAAiEARBQAAEIEiCgAAIEJhG8uXl5e9WOh0\\n\",\n       \"8ryEThXOi9XYutG0tLR4scnJyRwyycYvfvELL3b99dd7se7ubi/2y1/+siQ5bVQLCwtebHh4OIdM\\n\",\n       \"7EnQVgO15VqmOcc6ffq0F7MahZEN6/3BatK2PgNPnTqVai7Ww2TW9WxdQ5akzfDciQIAAIhAEQUA\\n\",\n       \"ABCBIgoAACBCdBGlqo+o6jFV/WdV/VtVrVPVDlV9RlVPqOrTqup/jTIAAEAF0JgmQVXdJyL/XURu\\n\",\n       \"cc7Nq+rfich/E5FDIjLsnPumqn5NRNqdc19fs27puxIN1s+Z9uTdUHnlYjWWW1PMN9pxsVRyLrfd\\n\",\n       \"dlvQckeOHCl5LrGKkocIuawn7VxuvfXWoOWOHj1a8lySqJRcGhoavFiSh79Cc7EeDrIkeWBobS6q\\n\",\n       \"Ks4588DE3omaEJFFEWlU1WoRaRSR8yLyoIg8vrrM4yLy2cjtAwAAFFpUEeWcGxWR/ywiZ+Ry8TTm\\n\",\n       \"nHtGRLY554ZWFxsSkW2pZAkAAFAwUQMSVPV6EfmyiOwTkXER+X9V9XevXMY55/L61R0AAECM3t5e\\n\",\n       \"6e3tDVo2dsrU3SLyonNuREREVb8nIh8VkUFV3e6cG1TVHSJyIXL7AAAAmevp6ZGenp73/vzYY4+t\\n\",\n       \"u2xsEXVcRP5UVRtEZE5E7heRl0VkWkQeEpFvrP7/9yO3X3hWU52lSFPWrYnJadu3b1/Qcn19fanu\\n\",\n       
\"t63NfxB0bGws1X1UivPnz+edAvC+Dhw44MWs1zjyk9dnW15N+OuJKqKcc79Q1b8WkVdFZEVEXhOR\\n\",\n       \"/0tEWkTkSVX9fRHpE5HPpZQnAABAoUR/aYxz7psi8s014VG5fFcKAACgojGxHAAAIAJFFAAAQITo\\n\",\n       \"X+cBedm1a5cXa21t9WITExNebGVlpSQ5ZW3Hjh1erL293YvNzMx4Maup33ogYPv27VG5rae6Ouzt\\n\",\n       \"ZmlpKWr7TU1NXmx6ejpqW0lt3rzZi1kPo1jnzJrcPDo66sWGh4cjsxOpqqryYtY3GuTVPHzdddd5\\n\",\n       \"sU2b/H/zj4+PZ5EOCqS+vj5oOev935K0UZ07UQAAABEoogAAACJQRAEAAESgiAIAAIigVhNjSXeo\\n\",\n       \"6rLeJwAAQAxVFeec2YHOnSgAAIAIFFEAAAARKKIAAAAiUEQBAABEyGVieeyE0La2Ni82NzfnxRYW\\n\",\n       \"FrzY8vJyankkZTXWk0t4Lta05dtvv92LTU5OerETJ05E52JNTE7ykIQ1Idq6dkOPS3Nzc9B+rZ/D\\n\",\n       \"Yh0/a+J76PXymc98Jmi5F154wYuNjIxc9edyvG5ramqCtre4uJhqLtY0f2sSeeh+revMupbXnjOR\\n\",\n       \"4p+jLCTJxfpmAYv1rQRp55K20Fysb6zYsmWLF7Mm+luT/61vdbiW93XuRAEAAESgiAIAAIhAEQUA\\n\",\n       \"ABCBIgoAACBCLo3la4U2uo6NjZU4E5H6+vqg5ZaWloJilcJq8Nu8eXPQuuPj46nmYjWxXnfddUHr\\n\",\n       \"hjaWW9KetG81kSfR0dHhxb7yla8ErRu6XBZCX4OlZr0vWY31oazG7VtvvTVo3aNHj0bvd2JiInpd\\n\",\n       \"y9TUVKrbK7obbrghaLlTp06VOBOR7u7uoOVCG8vL0cDAQN4pXIU7UQAAABEoogAAACJQRAEAAESg\\n\",\n       \"iAIAAIhQiMbyJM2aabOaSa0J2ZXcRG6xmqrTbhgPZU1btibXbzRWY/ndd9+dQya2H/zgB17MaiIv\\n\",\n       \"yrlM+0EClKcsGsZDvfjii3mnkDvr20fOnTsXtG4pXtPciQIAAIhAEQUAABCBIgoAACACRRQAAEAE\\n\",\n       \"zbp5UlVz6da0fk5rCncWyMVGLrbQXKyHIj7ykY94MeuhiJdffjnVXEqtKHmIVE4uDQ0NXsx6iCOL\\n\",\n       \"XNJGLrZyzKWmpiZoe9b7XGi9s3Y5VRXnnHlguBMFAAAQgSIKAAAgAkUUAABABIooAACACGXfWB46\\n\",\n       \"8bgcG+iyUCm5NDY2ejFrsu38/HzJc0kbuRQ3D5HKyeVjH/uYF2tpafFi/f39XuzYsWOp5lJXV+fF\\n\",\n       \"Ql+7lko5R2krx1ysb2awjI6OppYLjeUAAAApo4gCAACIQBEFAAAQgSIKAAAgQnUeO107WXllZcVb\\n\",\n       \"5p577gna1tTUlBezmhxR2axGVMvCwoIXy/rhiixZ030XFxdzyKT0amtrvZh1vkO1t7cnSacwQq+B\\n\",\n       \"bdu2ebGdO3d6MWsSdNrvudZDIRvN5s2bg5abmJjwYp2dnV7M+pwtR0kaxkuBO1EAAAARKKIAAAAi\\n\",\n       \"UEQBAABEoIgCAACIkMvE8kpu5AUAAJWDieUAAAApo4gCAACIQBEFAAAQgSIKAAAgQi4Ty1Wv7s86\\n\",\n       
\"ePBg0HojIyNBy1nTeK111+aRFauxPjSX3/zN3wxa7sc//nHJc0lbklzWTsFfT+jU3ko5LmlLO5ev\\n\",\n       \"fOUrQct961vfisojdJJ9W1ubF7Mmfff393uxJMfkIx/5SNByL730UtByobkkmYYdKslx+fjHPx60\\n\",\n       \"3MmTJ73Y0NBQqrmkLUku3d3dXqy62v8Yt87b2NhYdC7WtwHceeedXmxgYMCLDQ4OerH5+fnoXEI1\\n\",\n       \"Nzd7sZaWFi9m5XwtD79xJwoAACACRRQAAEAEiigAAIAIFFEAAAARcmksX8tq+v7Upz4VtO73vvc9\\n\",\n       \"LzY+Pp44p6J65ZVX8k7hPb/9278dtNw//MM/lDiTjcdqpA9tmt+5c2fQcufOnbumnIrIamC1hDYy\\n\",\n       \"W43lCGc1tCdpXt9ozp8/78WshyLS9sADD3ixP/uzPwta96tf/aoXe/rpp6Nz+dCHPuTFtm3b5sWs\\n\",\n       \"5vCf/vSn0ftdD3eiAAAAIlBEAQAARKCIAgAAiEARBQAAEKEQjeXWhNlQldxEbhkdHc07hfdYE53z\\n\",\n       \"EtpUjWJZO4k8bdY0Z8vMzExJ81hPKRpdK8ELL7yQdwrvufXWW4OWO3r0aIkzsVmTyIukvr4+1e01\\n\",\n       \"NDR4sV/5lV8JWrcUD2ZxJwoAACACRRQAAEAEiigAAIAIFFEAAAARCtFYbvnud7+bdwr4AE8++WTe\\n\",\n       \"KWxYSRrpK2ESedqeeuqpvFPI1PLyci77DZ1O3t7e7sVmZ2e92NzcXOKc8MGee+45L3bgwAEvpqpe\\n\",\n       \"zJqynsTp06e9mDWd3Lo2StGEz50oAACACBRRAAAAESiiAAAAIlBEAQAARFCrIaukO1TNdoerrJ/T\\n\",\n       \"aoLLArnYyMVGLvnkYTU3X7p0KZdcQmWRy44dO7zYwMBAqrncddddQcu98cYbXsxqQN9o5yhUklxq\\n\",\n       \"a2u92L59+4LWPXHiRKq5pG1tLqoqzjkzGe5EAQAARKCIAgAAiEARBQAAEIEiCgAAIEIuE8tramqu\\n\",\n       \"+rPVoGaZnp4uRTqFUFVV5cXymioMm3WdLiws5JCJyKZN/r9/QqeY19XVebH6+novNj4+HrQ9q/nT\\n\",\n       \"2p7FagIO3UcsKzdrunFTU5MXsxrLK5nVRI5wHR0deaeQqZmZGS8WOqXeYr3uQx+GS/IeeS24EwUA\\n\",\n       \"ABCBIgoAACACRRQAAEAEiigAAIAIuUwsz3qfAAAAMZhYDgAAkDKKKAAAgAgUUQAAABEoogAAACLk\\n\",\n       \"MrG8sbHxqj+HTi1OwmpmT3MK8rUgF1toLnfffXfQ9k6ePBm0nDWZO+3j0tLSErTc5ORkyXOprvZf\\n\",\n       \"9ktLS0HrhuZi7aO9vd2LXbx4MWi/sXlYrEnkltBvSEiSyx/8wR8ELfftb3+75LmEfnNE6JT+cnxv\\n\",\n       \"sVivXWt7U1NTqeZiTdzu7u4O2kd/f3+quVjXxq5du7zYwMCAFwv9fC/69bIe7kQBAABEoIgCAACI\\n\",\n       \"QBEFAAAQgSIKAAAgQi6N5WtZTaiW0OZXVLZTp04FLWc1jOfFahjPSxavo7T3EfoeUW7q6+vzTgEF\\n\",\n       \"tbKy4sVCG8bTtmXLllz2Ww64EwUAABCBIgoAACDC+xZRqvodVR1S1X++Itahqs+o6glVfVpV2674\\n\",\n       
\"u0dU9aSqHlfVT5YycQAAgDx90J2o74rIA2tiXxeRZ5xzN4nIc6t/FlU9KCKfF5GDq+v8papypwsA\\n\",\n       \"AFSk9+3WdM79D1Xdtyb8oIjct/rfj4tIr1wupD4jIk845xZFpE9VT4nIr4rIS2u3u3aCaaU2jVaS\\n\",\n       \"bdu2BS03NDRU4kxExsbGgpazJv5arAbOUNYk389//vNB6z7xxBNerJIfnoidTi5ybROEP0joJPIs\\n\",\n       \"hD4kUVVV5cWWl5dTzSV0EvlGU6SHQkLV1NR4scXFxejtjY6OBi03NzcXvY9yFXOnaJtz7t1PyiER\\n\",\n       \"effTtVtErnx0oF9EdibIDQAAoLAS/brNXf7n4fv9EzG9fz4CAAAUSMzv0YZUdbtzblBVd4jIhdX4\\n\",\n       \"ORHZfcVyu1ZjAAAAZaG3t1d6e3uDlo0pon4oIg+JyDdW///7V8T/VlX/d7n8a7wbReTliO0DAADk\\n\",\n       \"oqenR3p6et7782OPPbbusu9bRKnqE3K5iXyLqp4Vkf8gIv9JRJ5U1d8XkT4R+ZyIiHPuDVV9UkTe\\n\",\n       \"EJElEfk3LrAbNEkzrdVAl4WGhgYvZv24aTfaqWrQftN26dKlku8jbUkaxkMleShi//79XuzkyZNJ\\n\",\n       \"0qlYaTdRF8VTTz2VdwqoQEmayC3W+9yhQ4e8mPVZbj08UaRvk0jqg57O+511/ur+dZb/jyLyH5Mm\\n\",\n       \"BQAAUHTMcQIAAIhAEQUAABCBIgoAACACo8ID3HTTTV6so6PDiw0PD3ux0InEoe644w4v1tnZ6cWs\\n\",\n       \"6cPPP/989H6ZZhyuubnZi+3du9eLtba2ejEaywGUA+t9znrQa35+3otVUmM5d6IAAAAiUEQBAABE\\n\",\n       \"oIgCAACIQBEFAAAQIZfG8qampqv+bE31Dp1QnPZkVktjY2PQcmlPJ7dYTeRtbW1e7MKFC14sVF5T\\n\",\n       \"0cvRzMxMqtvjOMdb+74iYr8/8JCEzWoKrq2t9WJJ3q8tVoOytb3Z2dno7SEZ67V1yy23eLEtW7Z4\\n\",\n       \"sWPHjnmxo0ePppNYAXAnCgAAIAJFFAAAQASKKAAAgAgUUQAAABE060ZWVXU0zwIAgHKgquKc85+4\\n\",\n       \"Eu5EAQAARKGIAgAAiEARBQAAEIEiCgAAIEIuE8utidilZjWzW3m0trYGbW98fLzkuYTauXNn0HLn\\n\",\n       \"zp0reS5JWLls2uTX+VYsNGdrXWuCddGPS5Fyqaqq8mLt7e1B2xsZGUktD+uY3H333V6spaXFi126\\n\",\n       \"dClov6+//np0LhbrGwgsoccp9PysrKwEbS9UfX29F7MmjFvHJfT9a+/evUHLvfjii16s6K8hK5fQ\\n\",\n       \"b8pI8q0JVi4NDQ1ezJr839XVFZTLxMREdC5FOkfr4U4UAABABIooAACACBRRAAAAESiiAAAAIuTS\\n\",\n       \"WF5kc3NzeadQSLfddpsX+/KXv+zFhoeHvdjDDz8cvV+rwW95eTlo3erqfC5vq6k6tHE5ieuvvz5o\\n\",\n       \"udHR0aDlQnO2mpR//dd/PWjdgYGBoOVeeOGFoOXWevXVV4OWO3DgQNT212Nde0tLS6nuo6amJmi5\\n\",\n       \"Xbt2ebEzZ86kmkuS11rodRbaWF4kVlN/qCQN40lYn4GhP0foNZm2T3ziE0HLHT58OPV9cycKAAAg\\n\",\n       
\"AkUUAABABIooAACACBRRAAAAEWgsX8OaXl101iTyvOTVWHjDDTd4sdraWi/25ptvljyXLJrIiy7t\\n\",\n       \"JupSO378eC77jZ3YLnJtU5WLzJqQbT2c8c4773ixIr33WUIfgik66+cYHBzMIZPi4U4UAABABIoo\\n\",\n       \"AACACBRRAAAAESiiAAAAImjWzYmqmks3pPVzqmoOmVROLvX19V7MakBcXFwseS7btm3zYlZj+dmz\\n\",\n       \"Z0ueS9pCc7GOgWVoaKjkuezbty9oe2NjY14sZML2+Ph4UB5ZKPq10tHR4cWyePgh9LjU1dV5sZaW\\n\",\n       \"Fi82Pz/vxSYnJ1PNJVToBG/r/bDo1wu5+LmoqjjnzGS4EwUAABCBIgoAACACRRQAAEAEiigAAIAI\\n\",\n       \"TCxHtLm5ubxTeE+SZmmkr6+vz4tZU6itJvItW7Z4MaYjxyv6BH2rYdyKFUnoNzNUysRyrI87UQAA\\n\",\n       \"ABEoogAAACJQRAEAAESgiAIAAIhQiMbyTZv8Ws6KLS0tZZEOUJbyaq6vrvbfRqzX6sLCQtD2hoeH\\n\",\n       \"g2JrWc2+odPyk6xr2b59uxezJjKHnjOr2T60adn6ZoG0Hwppa2tLdXtpnw/L1q1bvdiFCxeC1g09\\n\",\n       \"fp2dndeU0wdpbm72YtbrKvS1huS4EwUAABCBIgoAACACRRQAAEAEiigAAIAIajU7lnSHqi7rfQIA\\n\",\n       \"AMRQVXHOqfV3ud6J6u3tzXP3WIPzURyci2LhfBQL56M4Nvq5oIjCezgfxcG5KBbOR7FwPopjo58L\\n\",\n       \"eqIAAAAiUEQBAABEyKWxPNMdAgAAJLBeY3nmRRQAAEAl4Nd5AAAAESiiAAAAIlBEAQAARMiliFLV\\n\",\n       \"B1T1uKqeVNWv5ZHDRqaqu1X1sKoeU9WjqvrvVuMdqvqMqp5Q1adVtS3vXDcKVa1S1Z+r6o9W/8y5\\n\",\n       \"yImqtqnq36vqm6r6hqrew/nIj6o+svpe9c+q+reqWsf5yI6qfkdVh1T1n6+IrXv8V8/XydXP+E/m\\n\",\n       \"k3V2Mi+iVLVKRP5PEXlARA6KyO+o6i1Z57HBLYrIV5xzh0TkIyLyR6vn4Osi8oxz7iYReW71z8jG\\n\",\n       \"H4vIGyLy7pMenIv8/B8i8t+cc7eIyG0iclw4H7lQ1X0i8gcicqdz7ldEpEpEviCcjyx9Vy5/Xl/J\\n\",\n       \"PAveniIAAAMgSURBVP6qelBEPi+XP9sfEJG/VNWK/o1XHj/cr4rIKedcn3NuUUT+i4h8Joc8Nizn\\n\",\n       \"3KBz7vXV/54SkTdFZKeIPCgij68u9riIfDafDDcWVd0lIp8Skf9bRN59jJZzkQNVbRWRe51z3xER\\n\",\n       \"cc4tOefGhfORlwm5/I++RlWtFpFGETkvnI/MOOf+h4hcWhNe7/h/RkSecM4tOuf6ROSUXP7Mr1h5\\n\",\n       \"FFE7ReTsFX/uX40hB6v/0rtDRH4qItucc0OrfzUkIttySmuj+ZaI/HsRWbkixrnIx3UiclFVv6uq\\n\",\n       \"r6nqt1W1STgfuXDOjYrIfxaRM3K5eBpzzj0jnI+8rXf8u+XyZ/q7Kv7zPY8iisFUBaGqzSLyX0Xk\\n\",\n       \"j51zk1f+nbs8QIxzVWKq+mkRueCc+7n8y12oq3AuMlUtIneKyF865+4UkWlZ86sizkd2VPV6Efmy\\n\",\n       
\"iOyTyx/Qzar6u1cuw/nIV8Dxr+hzk0cRdU5Edl/x591ydeWKDKhqjVwuoP7GOff91fCQqm5f/fsd\\n\",\n       \"InIhr/w2kI+JyIOqelpEnhCRX1fVvxHORV76RaTfOffK6p//Xi4XVYOcj1zcLSIvOudGnHNLIvI9\\n\",\n       \"EfmocD7ytt7709rP912rsYqVRxH1qojcqKr7VLVWLjeh/TCHPDYsVVUR+SsRecM59+dX/NUPReSh\\n\",\n       \"1f9+SES+v3ZdpMs59yfOud3OuevkcsPsf3fOfVE4F7lwzg2KyFlVvWk1dL+IHBORHwnnIw/HReQj\\n\",\n       \"qtqw+r51v1x+AIPzka/13p9+KCJfUNVaVb1ORG4UkZdzyC8zuXzti6r+TyLy53L5SYu/cs79b5kn\\n\",\n       \"sYGp6sdF5CcickT+5VbrI3L5Yn9SRPaISJ+IfM45N5ZHjhuRqt4nIl91zj2oqh3CuciFqn5ILjf5\\n\",\n       \"14rIL0Xk9+TyexXnIweq+rBc/qBeEZHXRORfi0iLcD4yoapPiMh9IrJFLvc//QcR+YGsc/xV9U9E\\n\",\n       \"5F+JyJJcbhX5cQ5pZ4bvzgMAAIhQ0fMbAAAASoUiCgAAIAJFFAAAQASKKAAAgAgUUQAAABEoogAA\\n\",\n       \"ACJQRAEAAET4/wEtHY5P7rJ1tgAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01bbe590>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['pool5'].data[0]\\n\",\n    \"vis_square(feat, padval=1)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The first fully connected layer, `fc6` (rectified)\\n\",\n    \"\\n\",\n    \"We show the output values and the histogram of the positive values\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 36,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlgAAAJPCAYAAACgtar/AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzs3Xv8LEdd5//3OzdCEpIQAyeBBBKQICCQgITI9RAIBJQQ\\n\",\n       \"RIEoGFkWXUVAVCTghYOiXBQVxXVXIRhZhPUHSww3ySHyVXA1LEsCIVwi/kBByQkriCDLLpjaP6Yn\\n\",\n       \"ZzKne6YvVd3V3a/n43EeZ74zPVXV1bfPVFVXO4QgAAAAxHPQ0AUAAACYGgIsAACAyAiwAAAAIiPA\\n\",\n       \"AgAAiIwACwAAIDICLAAAgMhqBVi2j7X9Ztsft/0x2/e3fZztvbavs3257WNTFxYAAGAM6rZgvUrS\\n\",\n       
\"O0MId5N0L0mfkHSRpL0hhNMkXVH8DQAAMHveNtGo7WMkXRVCuNPa+5+Q9NAQwj7bJ0jaCSF8W7qi\\n\",\n       \"AgAAjEOdFqxTJX3B9utsf8j279s+UtKuEMK+Ypl9knYlKyUAAMCI1AmwDpF0H0n/MYRwH0n/qrXu\\n\",\n       \"wLBoBuOZOwAAAFoET9t8TtLnQgj/o/j7zZJeIOl62yeEEK63faKkG9a/aJugCwAAjEYIwTHS2Rpg\\n\",\n       \"FQHUZ22fFkK4TtIjJF1b/LtQ0suL/y9NWdAxsb0nhLBn6HL0jfWeF9Z7XljveZnxekdrGKrTgiVJ\\n\",\n       \"z5L0BtuHSfpbSU+TdLCkP7b9dEmfkfTEWIUCAAAYs1oBVgjhw5LuV/LRI+IWBwAAYPyYyT2NnaEL\\n\",\n       \"MJCdoQswkJ2hCzCQnaELMJCdoQswkJ2hCzCQnaELMJCdoQswdlvnweqUuB3mOAYLAACMT8y4hRYs\\n\",\n       \"AACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAgGztGroM\\n\",\n       \"wJQQYAHADNiyrTMqPrutpOt7LhIwaQRYADAPZ0n6UMVnt+izIMAcEGABwDwcNnQBgDkhwAIAAIiM\\n\",\n       \"AAsAACAyAiwAAIDICLAAAAAiI8ACAACIjAALAAAgMgIsAACAyAiwAAAAIiPAAgAAiIwACwAAIDIC\\n\",\n       \"LAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAs2XrbFs/PnQ5AEwPARaAOXu5pN8euhAApocACwAA\\n\",\n       \"IDICLABzFoYuAIBpIsACAACI7JA6C9n+jKR/kfRvkr4RQjjT9nGS/qukO0r6jKQnhhD+OVE5ASAF\\n\",\n       \"WrAWqAcgsrotWEHS7hDCGSGEM4v3LpK0N4RwmqQrir8BAOPjoQsATE2TLsL1A/A8SZcUry+RdH6U\\n\",\n       \"EgGozZZtvX/ocowYLTcAkmjSgvUe2x+0/YzivV0hhH3F632SdkUvHYA6Hjh0AQAAN1drDJakB4YQ\\n\",\n       \"Pm/7NpL22v7E6ochhGCbX4IAAACqGWCFED5f/P8F22+VdKakfbZPCCFcb/tESTeUfdf2npU/d0II\\n\",\n       \"O92KDADR8MMQmDHbuyXtTpJ2CJvPL7aPkHRwCOErto+UdLmkF0t6hKR/CiG83PZFko4NIVy09t0Q\\n\",\n       \"QmDwJJCILUu6MQQGKbdh668knbVaf7aeIemVIejo4UoWn62HStop21dsnSzp79mPMHcx45Y6LVi7\\n\",\n       \"JL3V9nL5N4QQLrf9QUl/bPvpKqZpiFEgAOhR2S/MB0i6Vd8FGRgteUBkWwOsEMKnJZ1e8v4XtWjF\\n\",\n       \"AoCxIrBYoOUKiIyZ3IFxs3RTVyEAIBMEWAAAAJERYAEAAERGgAVgzhiDBSAJAiwAAIDICLAAzBkt\\n\",\n       \"WAvZ14Ots4cuA9AEARaAOcs+sMBNd8leMXQ5gCYIsIBx89r/mDhbt7T13NjJRk4PmD0CrImxFWyd\\n\",\n       \"MnQ5ACTzIEm/PnQhAGxGgDVNpw5dAADJTLJbs/hxOLdHFGHCCLAAzFlZsJJ7d9kkA6zCcUMXAIiF\\n\",\n       \"AGuacr9AAGhvygEWMBkEWADmrCxYyT2ASVG+3NeZZ25idAiwgHHjgtNN7oFFmTGWGZgdAixgGgi0\\n\",\n       
\"5oMACxgBAiwAAAE6EBkBFgCMy41DFwDAdgRY08SvUaCeMXa3jbHMwOwQYGXA1h1s3TlikofZOjpi\\n\",\n       \"egDyMccAix+NGB0CrDxcKelTEdP7PUlfjpgeMFVjDFbmOE3DEoEWRoMAKw9HRE7vpMjpIV9ccLoZ\\n\",\n       \"S2CxaoxlBmaHACtDtv7A1h8MXQ6MCoEWAGSEACtPFxb/AGBdihYsAnQgMgIsALi53IMNugiBESDA\\n\",\n       \"ykPuJ3RgqsYYrIyxzF1xjsToEGABmLMxPux5zhONEmhhNAiw8jDGkzyAYXBuAEaAACtf/FJDHewn\\n\",\n       \"3RCsLFAPQGQEWMA0NAq0bD3E1s+kKgySIhgCRoAAC5inn5f08qELgVYIsIAR6DXAsvV9tn6xzzxH\\n\",\n       \"oqz1gZNoj2wdbGvX0OVAFnLvdmUeLGAE+m7B+jktfjljpGy9zNZbhy5HAj8q6fqhC4HejfGHzBjL\\n\",\n       \"3JXX/geyRxdhHsZ0wvwBSecPXYgEaL2apzHewZt7+ZKy9SxbZw5dDmAbAixg3Nr+oqclAGP1W5Je\\n\",\n       \"NHQhgG1qBVi2D7Z9le23FX8fZ3uv7etsX2772Jr5zfqXF4DsjPGclKLMY6wHIGt1W7CeI+lj2n8Q\\n\",\n       \"XiRpbwjhNElXFH9jHjgR56lpixTbcbzmPJM7MBpbAyzbJ0l6jKTXaP9J/DxJlxSvL9E0x+T0ie4a\\n\",\n       \"IB+5H48Ex8AI1GnB+g1Jz9PNfzXtCiHsK17vEwOEgbHJPYhAtTkGWOyvGJ1DNn1o+7sl3RBCuMr2\\n\",\n       \"7rJlQgjBduUBb3vP/r/efpT0Xa0KOnFjvJMJmAKOs4WxBDBjKSdGoohtdqdIe2OAJekBks6z/RhJ\\n\",\n       \"h0s62vbrJe2zfUII4XrbJ0q6oSqBEMKe5WubrsQGOJEA6Y3xxw3nBiCSEMKOpJ3l37aj3aG6sYsw\\n\",\n       \"hPDCEMLJIYRTJT1Z0p+FEJ4q6TJJFxaLXSjp0pr55X7iwnZsw7y0vdiyHQEgoabzYC1Pyi+TdI7t\\n\",\n       \"6ySdXfwNYDi0arRDoLlAPQCRbesivEkI4c8l/Xnx+ouSHpGqUDPEswjRNwIyAEiImdzRma1g615D\\n\",\n       \"lwPzZOtwO+oPktyDz9n8+LL1cFuPVv7bZNSKc/h9hi7H1BBg6aad68gBizCFE+bJQxcAs3WLoQuA\\n\",\n       \"ZN4u6Z0rfxNopXOXoQswNX0HWDkHEkcMXYA5sPWDtq4duhx9smVbxw1dDpTiLsJ0aQKzRgsWmup6\\n\",\n       \"8XmUpLvHKEhXtr5g6049ZPVdkv6ph3yayD2I6Av1ACAJAizM2fGS7tFDPimfdNC25YEWC2ALW8fb\\n\",\n       \"us3Q5egJPzYiq30XIZIa9GJn65AQ9M0hy4DOCJjaGeNFJUWZc62Hoffr/6nF8JG5BFmIiBasAdm6\\n\",\n       \"1NZJQ5dD0jdsfUdPeQ19wgS2YR/F0u20aOkGGiPAGtbjJD1YeQy0PbHn/IAc5Npyg4Xl9hkq6CXY\\n\",\n       \"RmuzvYvQ1j1t/cPQ5Vhl66P2TRO4TvXAvmkfsHU/W6cMUQhb9xsi3zEqpjG51dDlwE2mem6oY87r\\n\",\n       \"jpGZcwvWmVo0/+bkHpIePnQhtugaJFuSbN1e0gdU/zmWsb1noHzH6uihC9CjbH4IAj1iv49szgEW\\n\",\n       
\"hvW54n8O6m74Rd/NaPc/O+q2Zz8CIiPAygPPIhw5W0fYOnPocgDYz9bRNjP9YxhzDrBqBzDFQfqA\\n\",\n       \"lIXB6D1P0pWSZOtgWy/pOX9aINrhh8zCVOvhy5Je0+H7HFdobbaD3Bv6eUl/mTD9JPVi60hb902R\\n\",\n       \"Ng5w2MrrYyX97FAFQWdcVPPTZZt8a7RSAA3MuQVrI1sH2Tq7+HOsE7L+rKQPRk5zrEFyn7hAj0en\\n\",\n       \"/dnWGfbNgus+zHn/mvO6p8a5PTICrGoPkXRF8XqsB3WKsQdjrQsghQ9J+g8D5c2xOEHFw+EPHroc\\n\",\n       \"6G7OAda2aJ0dPCJb59j6b0OXIxdF9+1/HrocKNX0lzyDqBHTHkn/Z+hCoLs5B1g5GdMv0bbNyE+W\\n\",\n       \"9Hjlt65DNYt/u6QfjpDOTfVp69Yr3dq1vzdzdItgkyGOk/sowg98W3ewdVGE8qAlAizMXe0TaNF0\\n\",\n       \"n/NDX39O+7u1gdkqfmxcO3Q5BvY0SS8duhBzNuhdhLaeauucnsuQo87PIrR1vK1dkcqT0phbDL5H\\n\",\n       \"0g01lhviVy8tUu2U7Y/UZX6abpM7Sbp7ioJM2JjPzVkaugXrDyX93sBlqGMMF8wPSPpkioKsifKo\\n\",\n       \"nJG67dAFaKNoeTtp7W1OpuM15mNoMmw929bpQ5cD+Ro6wBrS1C4wJ0o6ZuhCjFCK/WCoC2BVvt8n\\n\",\n       \"6bN9FmRExnweSL6fFdPVPDZ1Pg3kFFy+StILhi4E8jXnAAuYi1uXvJfThWrsxhykbXMfSZcNXYgB\\n\",\n       \"cZygNQKs/c5a+7vPA4tnEQ5nSidQ9pk4qMf9uEYALfGonP1m8yvN1q/Y+nctv35i1MKgqykFiEOI\\n\",\n       \"cU5iG2AKcr4+jxK/Tqqt7mypT6B979gvUPuxA7ds+b1cL0JzGINVprd9ztZhtvb0lR/Ss3Vq31n2\\n\",\n       \"nB/Q2ZwDLKJ1TEnTC1CfF6y7SHpRj/k1EWOahqHOJYMEHbZuIen/7yu7nvLJjq1Hd/juD+nAYS/o\\n\",\n       \"2ZwDrG1me2AnsrwI5VavuZUH2Gbofbb1dcPWBXangHTode/TOzt893WSzo1VELSTQ4A1pwOmCnWA\\n\",\n       \"GEbdKmvrm7aO7znbUdfZCN2r4fJV24fthuzlEGCNAQFQPHM4MY5hDFaaAlgn2QdcROuW62BJt49c\\n\",\n       \"pNpsHVG0rsxhHx27wff1CWK/jyyHuwhz3ai5liuWzg8TnYixb+fcLjR/IunDQxeipbYT9f6q3f3u\\n\",\n       \"Wlvn2tltTwAtzbkFa9uFta8TXdUv5tQX/r7vAlriAjI/OQexscp2Qt0Fbf2WrW8r+ehdkk6LVJ6p\\n\",\n       \"4HwhydaDCb7HJ4cWrLlbP2hc8T7SmFI953B8/d+S96ZUxzE8S9KTIqRzQL3aOrPLIPKML+K5lqsv\\n\",\n       \"fyHpzKELgWY2Bli2D7d9pe2rbX/M9kuL94+zvdf2dbYvt31szfzmfpCkRN3mY6iHg+ewD5QFWDlr\\n\",\n       \"NE2DrWDrkITlqWPTdr5ngjTbJWj9st05kMzhR0MuWg3rsHV6xoHzpG0MsEIIX5f0sBDC6Vrc/fEw\\n\",\n       \"2w+SdJGkvSGE0yRdUfw9Zal3ztX0p35CyW39citP72wdbuvpkZL7RqR0+tJm+899/GLd8+ELteHa\\n\",\n       
\"UMynlSrvMatax7brfpWk+9VYbvbnwti2dhGGEL5WvDxMixPLlySdJ+mS4v1LJJ1fMz82YH1N62os\\n\",\n       \"dTurE6St8yW9ZsCy1PFQxSvj2AIsDMDWmZK+PnQ5ZuTQoQswR1sDLNsH2b5a0j5J7w0hXCtpVwhh\\n\",\n       \"X7HIPkm7EpYxlVwCkq3lsPV8W3/SR2EQ3WOHLkDPcjmuupjCOqQUo37q3nUZ7QeZre+OlVZi7H8T\\n\",\n       \"sXUsQQjhRkmn2z5G0rttP2zt82CbmXnbs7Y/9/Apkr59SzqHRysRxmTux8+cjXnb91p2WwdLelvf\\n\",\n       \"+UY25rLPUu3BmiGEL9t+h6T7Stpn+4QQwvW2T5R0Q9X3bO/Z/9dlR8/vBz3WzO0kMYb1jVnGMazv\\n\",\n       \"qlitBWNb77qGXq8cHrFFi9KE2d4taXeKtDcGWLaPl/TNEMI/276lpHMkvVjSZZIulPTy4v9Lq9II\\n\",\n       \"IezZn17ps5HGsPP2eXCPoT7aGPpE3ac5resU5b79UpZvvUU9V2MoY1uxB7nXNeU6rRRC2JG0s/zb\\n\",\n       \"drQH028bg3WipD8rxmBdKeltIYQrJL1M0jm2r5N0dvE32jlE5QfOobbqTn8xJnM7iPta36Yn31R3\\n\",\n       \"ro5t+8Yq72jW29atbb3U1vuHLksLbecJ7BKcjGLb2rrS1qOHLgf229iCFUK4RtJ9St7/oqRHtMgv\\n\",\n       \"px01l7L8oaSvVHz2QUnf2mNZ6ghqd7LKpb7Rv15ahGw5hMHuvs291WvVz+jA6RP6Ln9VfofYOiYE\\n\",\n       \"fXnLcn3quwxV++S2cpwp6TFaPBEAGcjhUTk5HEA5Wa2POw5WinRy2d59lSOX9V03pYA31zpGc78o\\n\",\n       \"6Z8TpJt8f7d1sK0jU+eD8eBROXkYU71wMcPUjeV4LDsWj6jxva7rl/IcsP6M1By2Rd0yvFjSVxOW\\n\",\n       \"g3PvyPQdYJXtIDkcQNswyL273E4OKeu5z5n5XfEaDdg6RtIDOiSRy3H7W0MXIJGu+3Yfx8Zdesgj\\n\",\n       \"pVz24clIHmDZepOts1Lng95wEDaTa9BDuRaW+/MeSW/ukE6f5R5628U4B3Rdh1ZlaPlMvlzOeUNv\\n\",\n       \"dzTURwvWkyT9VfG60YNVE8vloJHyOHBS10dO9T1Vc6/jLsfR6g0/ORyPQ8lt3YeasqBNXrnVHQaW\\n\",\n       \"wyD3XA11sMwt36FMbX2nsj5tgsTJ34IfWc5jsFLlPYXjYwrrMCs5BFhjOMGxY3c3pzocQ5A8huOu\\n\",\n       \"D7XqwdajbN2jazqR9TEDf2UeLbvbxqjutk095Uc29W3rj2wez7YNdxFW67OsoeI10pl7PWdzsl6T\\n\",\n       \"a7n+VNLrN3yea7lzVTeIiH2cpp4Ffy4ukHTy0IXIXQ53EQ7lgAPX1n+29bQhCjMibU94cw9oUunz\\n\",\n       \"jsW2cjru69hWj2NbnylIHXht0vdxNdRxnOv5Y7Ry6CLM6WT1w5J+rHidU7n6EOXgsvVIW0fHSGtD\\n\",\n       \"HmfZOidScpxU8lL2w+cWNb87l2N2kDvwerJetljdZV3qbOt3bT1IW56MEsFc9u/JoIswDznMZRTr\\n\",\n       \"pP1uST/RMa1t3ibp8o5ptH2mWZO0+1YrX1u3lPSgxGVZ1fW4/7qtB0cpyYFyurM5C7ZuaW+8NqQ8\\n\",\n       
\"dsbqfZIenzgPHvY8Mjm0YPW2UW2dt+XEUfnV6IXJlK2TbH1hwyJtn5PFSTmtJvX6Y5IO7Zyh9Tu2\\n\",\n       \"frfOol3zknS7xOnXkSwfW2d0/P4P2XpspOJ8TdJPRUqrStuWqSEmHO078Mj6BxrqS92kuW7oCPlP\\n\",\n       \"JN1L0jU1lp1rP/jdJR0/cBm2iVFHfc3knqNYx/2yO/1HI6U3hL6nhajyoRbfWS3H6yRdH6EcyzRz\\n\",\n       \"ecj8HB/GncLc138QOQxyz3Wi0bk+HidV0B3W/s9Fn4+yQX5y2x/7kMM6Vx0XOZRtXS6D3OkiHJkc\\n\",\n       \"ugiRh+XBdXDifAg4ZsDWO2z9et/ZRkpnjheaJnWXwyzqfW6jKZyz5rhPD663AKuYlC67jZzhZHlD\\n\",\n       \"97+3bcFa3ba51WmZvsaE5dBCtl6GPo7Dx0j6vh7y6dMY9uvYNq1znzeIrL8/5Rndc+3VQUM5tGCN\\n\",\n       \"YaOm3uFzOujbtmClHpxarxBWsHWnPvLaVIyB818XpTxF3R4RI61N2fT0nS6G3r5D55+lLTOLT+Fx\\n\",\n       \"SkmnmuiYPkrkEGDlaq6D3Le1YA1dvjp21VimrzFhUztp3bLBsmNY9zFP09C2nDkcw3W7/pqU9X/b\\n\",\n       \"2d+g0wVB0sjkMA9Wrs2hc91RYwxyL6u7mCf13INfS713P09lf81lQDGq5fyw5yY/AOqawj5ZJ02O\\n\",\n       \"hcj6DLCsPC4C62WIcWtzV5127EgX8r4Guccw1IlgDHPo1NFXmeYyf9CU5DrYfV3TfavLmMsx7Vc5\\n\",\n       \"nm9mK4cuwqF3iNsOnP+6oQe5t90nxtYi2Lo8xVike9dMO7f17tMYLmZNzj9znCz33w+cf6xH5YxJ\\n\",\n       \"inVuczMMOsqhi3AMpnwwNzXUHC25yWUixhjGfhNH7PSjn6dsHZTRHct11m9Z1vMSlaFpXTwzSSnq\\n\",\n       \"CZJk6yhbLxiwHHWkDsLQQA4tWF1aEo63dauGX8sxyFutg8HKZ+sZki6OkVTLz/rUx0zufaxrLvVZ\\n\",\n       \"pk0dpwhu7mLrTbHTbegfJb06Ulp9To9wY0/5b/OSAfJc92BJv7L23hN7zH8Kd0LOSt9jsGJv5H1a\\n\",\n       \"PPj3wMwWJ9VNz1ub0g4X64T3/QnzGqS+bZ1cPNwY1VJtm+W+ckD6to6w9QuJ8l33GElParB8igBi\\n\",\n       \"l6QzI6eZMtDp9ckLK6176+sU+1E5YxhHeVN+tr6w8vzc1N18U7omZiGHFqwuDpJ0csVn10l6To9l\\n\",\n       \"iSXXuyrHls/S30t65YbPs2kBsvWLth6eIOmYE422ee5gWR3fT9KLay7bJZ9tabYZg5WrE3rOr059\\n\",\n       \"nD5AnrENud2Pj5R/7vvuJOUQYKW84G7qPiRav7m6B2BVvW2byT1mt1nTbfctJe+lPOG0TfvnJf10\\n\",\n       \"hzzbDo5tUt5farDsGI6xXB72nKuyLsIY6tZh3fmypmSocVRTrtNB9N1FuP8P68495l1WjkFvR7b1\\n\",\n       \"/pU/x7Rj51DWKcypVYs9imkzqrQZi5f19thgKkFXqi66tvl3Xa7t8qty2SensA6zMmQL1qeK/6dy\\n\",\n       \"YmrqgRXvj/UX9bYy5HaAZz/I3dZxkr7Z4Ctt1ym3bdNUlxa6sa97G03WOafJoWMYQ9m73Kk9x/05\\n\",\n       
\"Wzl0EQ4lpx1xDAf90pjKOnZH1lim0fawFSTdvV1xokox3cdTO3x3VZuy9Xk+6eMu1WXam7oIU+Q/\\n\",\n       \"xIPJt8mhDHX9eMX7nLcH0HeAVXY315h23s5s2dZtNi3SW2FuLuZ22DQG68APrFvb2ps4/zbLpMx/\\n\",\n       \"yHxPa/m98kzjdmO+s8N3fytaKTab4sWq7y7b9fxiB40/GCkdqf/tPcX9a5b6HoP1HT3mt81Qdw+d\\n\",\n       \"K+mGiOnF0nWQexd3l/SIBOmiH3+28rprl9KJHcoRawbsMcxAn1ouj6NqG/g9u2Y6YxAj+JxVQ0Yu\\n\",\n       \"cugizP3XfmwpnvbeZJxOKjkdwEOXZWytkF3r60Err8d2IetzX6nT0tdH/TVZ51wmGo2V9xjmwaqS\\n\",\n       \"ut5zWc/JyCHAkiTZutjW5T1kleNOFCpej0ndQe5lyzVd5xjPPexjlvE+L0SbpmkYWup9uq9f9l3H\\n\",\n       \"YJ1h67AG+eVgsEk2t8h1X48hRZ0nqS9bR6RIdyq2Bli2T7b9XtvX2v6o7WcX7x9ne6/t62xfbvvY\\n\",\n       \"jmV5nKRzOqZRR+Xs0gMa08miyXPMpqbNLeQ51MW2gcNdy3jQltnyN42/O9jWIzvmv03sbdAlvW2t\\n\",\n       \"WEPvL02maehjvqbYM7m3MfS1Yuh9osqjJP3r0IXIWZ0WrG9Iem4I4R6SzpL0TNt3k3SRpL0hhNMk\\n\",\n       \"XVH8vcnY5r2ZW9dlX60MMfLZlsacg8Ayrdd1wyNM1v2n4v+m2/dsSe9u+J2YcriAt83rtfbGpxTE\\n\",\n       \"cLP6sfVLkv4mYX45Hpe5DHLPrW5OGroAudsaYIUQrg8hXF28/qqkj0u6vRZPWb+kWOwSSeenKmQi\\n\",\n       \"U3pERgxDrmOuQXZbQ1xsu4hR/3co/m9angPOQXbl46+2qcr7jIbpDLk/Nqm/J0r64RZ5NPkBsr7s\\n\",\n       \"2UozjrRpUNF2G7U5Xg5vmVdsY26Fm6VGY7Bsn6LFyepKSbtCCPuKj/Zp8TDTNnJtKWKHbGbbo3Lq\\n\",\n       \"fDY11nz2o0MipnV0xLQk6QcaLt+mBWGq23mq6zU4Wy+wa91ZH+OcySSlA6gdYNk+StJbJD0nhPCV\\n\",\n       \"1c9CCEFsnFjGWo9jC1j7mMn9Zq/t3uZpairGCXxrgGXrjrYOsqPfXJNDF06sMuTyg3N5fKSaaLTt\\n\",\n       \"d9tOWXDA8rb+2m40X1bsbfMrkn6q5P3czpWSJFuH2Eyn00StE53tQ7UIrl4fQri0eHuf7ROKz09U\\n\",\n       \"5dxOe4p/t/h5aadTYauKV2uhA8eSxLgTrY0Yd9FNzZDr32fez+oxr1V9zI59aI20z5T0dkl/lSD/\\n\",\n       \"tuZ+7G0z9LMIU+Z/f0mPTZh+W7mOwfouKeqE0Fmwvdv2nuW/mGnXuYvQkl4r6WMhhN9c+egySRcW\\n\",\n       \"ry+UdOn6dxf2FP/+z0uk3WUL9HUAx54pOKYc7jqrux26jn3oY5B7k5ncU9T3ENuwS7dsnTvFtqWx\\n\",\n       \"bMHa1rrzUC0CrVpsPaTusk0VY71ymvxYKm9peYGtL9VdvoauzyKcq1wmX20jSJKtN9p6fMs0spnW\\n\",\n       \"KaYQwk4IYc/yX8y061TYAyU9RdLDbF9V/DtX0ssknWP7Oi0GPr6sZRm6XuiazpvS5kIf84KZ6wmr\\n\",\n       
\"zRQEbZaJUZeHbl9kWorm+cOK14+1ddu1RWKOgWojVf5/3mBZS5Kt41darDf5r6p+6Hpl+n2y9Zta\\n\",\n       \"dCV1nQZnVZOpF/6iKEfsdb9VxfuxpxBZT2eIbs1YYpT9yYr7GCFsUOcuwveHEA4KIZweQjij+Pen\\n\",\n       \"IYQvhhAeEUI4LYTwyBDCP/dR4BKpdvq+nsE1FV3qq+l3d0mSrb+0S28VTrXtHlNzueU2vm/EvP+b\\n\",\n       \"pI8Vry+T9Py1z5/ZIe0Y+2TZ/E5DzTn3BdW7q7ntHWopj+H1tC/cskzqur1yw2dd6uEeFWn1PT8W\\n\",\n       \"9qNOI+v7WYSbpJ69tukBnHtXXWybHkC9qqp8Q2zfB6g8iEl1x8x3N1z+r3Tgg5Xbup+kO6/8fQdb\\n\",\n       \"p2r/um66867PfWrTswhjtsKUpb/qhMjpDSWnsaJ9yKnu1/V1HFW14pV1H/+MrSenLxLayKFPddMj\\n\",\n       \"VJp8f5tcxmCVlbequbxPr42YVtO7rKY2yH11Xes8f66N75X0qZrLbhs4HKMO6ty88XuNEvQBrRxN\\n\",\n       \"dKn3Ic8RQ5+fquRWrmTlsXV6xZ2ufdVBk3xeLumlNZajdWoAOQRYXdW9cLcKsGzdtXGJtiTZ8fOh\\n\",\n       \"jeFAbTKRom09WtLPNc3E1mkJxqY0cZCkuyXOo+5xU7Zc67opun4/2vb7ShPY5nhspnqmZtcfvm2l\\n\",\n       \"eKzTWWp2/r9K9YcD1GLrmAjnijGPH5ulHLoIU47TWM3zmJb5fVQNdk5bj7N1z7rLIws/0/J7n5T0\\n\",\n       \"QlvPXnu/z5NZjO6BxuW19Z4Gi2/qNlymd/ra521uZOij5bBvfd9lXffv2Pn1lW9dtyh5r8u2+GeV\\n\",\n       \"j6erI5fgKJdyjMYUWrC27vS2ztL+ebqa7iRNT9SXav9z2cqMoQVok7YDgzd9HrtO+jwRvETSq/rK\\n\",\n       \"rGhtu9lba//35eEV74eS13XKFnturJRdhClbEsZ0ERuyRSX5edTWUetvdUzy9hX53HSHcGGoa0TT\\n\",\n       \"fMd+LUtuCgFWnS7C1QHcTQ+Svn65LZXttJ13ZDva40fqlGXTs7tyuYCEtf9jSrmOf9TiOznO35O6\\n\",\n       \"C2rULVhUtAEhAAAgAElEQVS2jtj0ccXrKFlX/B07n1TTMVRpk35fPRFvlnRdjeW61FG0+Qdt/fvi\\n\",\n       \"71zO5dnKIcDq65ElY1a5HrZesfXLi8kavxypLHXuIvyJkrFrfW6LOf2yWr0A1u12STnIvc1YrfUy\\n\",\n       \"bFqXTWmuahpgrY5la3unbAy2da6kf91SlhoJ6WRbv9ry67kfQ223xdBBSpn7Srrjyt91j+MmYu67\\n\",\n       \"663oqJDTGKy2cj8RrIt92/UzaizT9kHcXRw3QJ5Ldeowl7tKu2oyPnDZdV3aelBxQ0fqekp1/DYN\\n\",\n       \"sFa7PFPuE3XSPrlBepvq73sk/XSDtFblMgZrbOf3FLbV/Sn24BMNo0QOLVgpbRpcO9RM7nO6oB9i\\n\",\n       \"67K1t+dwwkzZhVOVV50Hrv/I2nfWbZqnqu561B2Lt95FuD52q81+EmuQex/jpPbYpT9CmgYYbcoS\\n\",\n       \"69zXR4tKb4Pebf2Era+nSj+x7xu6ADhQ9gGWrVvYrScoHHswszSGoKSsjMdo/8NUN425ib1+c33W\\n\",\n       
\"Wt0bEHIYd5bqwr00hu2aeoqNLsZQf03U2afur/K7B/vWpot6WwtWk6k56praPhJd9gGWpIulyged\\n\",\n       \"tpEk6LL1AFufq7Fom7vthpw/ZV3TiV1TlKFp3mX6GvvX+GRp609tXZyoPGWazBu27fOm2znlzQax\\n\",\n       \"xdiHt7Vu3vSeratrpHekHWXOpjbTNLzU1vNa5le1vevuB233l87b0Natu6axLYu1vx9X8X6XNNGD\\n\",\n       \"HMZgbTtQ7rzl8036PGk/UBW34TZU5+I8pOWdJIfbevXK+xzA5drUy6O0/6S6Lb0YY6Q2pdEl/eM3\\n\",\n       \"fBZz2o6p7nv3Vr0fXO/YlIjdaQzmprp9lqQXRk435225LNsXe863zZ3DKeS8bbKUQwtW1wvEzb5n\\n\",\n       \"6472xu6oNvkMtWPluEMvy3QXbX7AcONuuorHU7QRo0WmizpjsFLM/ZVDQL5ahsMql0pT/30dL7Hy\\n\",\n       \"SdFaXeZva+S7Lf++xhLWHYNWVZ6fsDc+eLxOGVJ5SeL0kZlBAqzEjxe5XVmWJe/lcDGqa0xl7eLf\\n\",\n       \"hi5AAqnHl60OEu86EL1LWbuOqZrLPj7Ej6YjW3wnl+3RtL5+UtJtUxREw9fJ0FNMrKdxUYQ0Jy2n\\n\",\n       \"Wzvb7gB9zBA+5gNLilv+thfnPutw0/imixSnK7du/qnXO0bwUqeMXR68vKrNXbxtlu+iTStOm2P0\\n\",\n       \"tyXdJ0I6Kay3KPVVriGPlxR3bLYx1Dxsuaz/ZPQZYFV1m3Sd0bnOSW99EsOYecRW9qiRTcay03c5\\n\",\n       \"scX0XC1+4f5LT/nlsH1iTDuwPt1GmzQ2LR/j5J77+MV1Z9RYZgzr0Ubbrsi2AfqYjLn7HCuGGoNV\\n\",\n       \"Z4xK/cSs+1b0u3dKNnJ6ddONWjc9ajuvTQpNAlNL2h05/9rruGGCQOvmE05uuyClqtcnRk4v6iNS\\n\",\n       \"ivGWp3ZJoyzZLZ/fzz6gFbTt3bXbPk8RSLS5ZT/2/tX3ua3rfnaoph3U1TH39W8sh0HuSzdtPFth\\n\",\n       \"7eGX2773QUnnbUpzRZtfQEMHOl27QYco/5B1NvT2auI5Fe/fWtJf9lmQCr/ecPkH11yuzUDrKifW\\n\",\n       \"SDOmd0l6Q9WHtu7QIe1Hav/6VGbRIf26+u4arJLLRf0vNHxdDJ0/GsqhBavqvbotUsvvlQVkQw8K\\n\",\n       \"jJVukgPL1rG2Pr7yVorJ6Op8f8iT6NCTbm6aymBVVVd6mzFYVX/HmOrhVg2Xr5oHa/CufFvfsyH9\\n\",\n       \"Tdvh72yd1DLbl0h6ccvvdtH3xbvu9o59V2Pb753V8nsx9XI9s/VMWw/rkBcKQ82D1Xpgrq3D4xSn\\n\",\n       \"kZgX4SYHSerA446Svm3l75gHcJOLeOo77XL5FZzSAfVqb20JqfxuQnUCxMEmkVzzFkmyW3Uhr0+N\\n\",\n       \"sJR7K0Rf0zSkzOfeidLt6zwy1I++Zb6vlvSylmlgRQ5dhNtmcl7fqP/bvtnYlKZdZG3GCg01yD1W\\n\",\n       \"vr8aKZ0upnIRr/P9lOta65e/reMl/WPFMjkMot0UBLvk8zZptrWezntbfKfuEIem6faVZoxWzaG8\\n\",\n       \"K1G6jerC1hNsXZA6H+Qrhy7Cqr7+TSeBoyvSGoM2d0t1PclWjQn5+Yj5jG07dC1vrEH95yQqx6Zn\\n\",\n       
\"qt3YIP3YcwqtX7i3/cBa/16Z2OOFctqXh2p9TR2Ut03v+cX/Q3ft1/FHijsL+9D75Rx6AqLKrosw\\n\",\n       \"0SSkZa1CQw1yjz22pEvZnrD2d8x+9yhdobZ+x9a1HcuSolVwXdsWz/vaOsFuNRlkLHWDnLbqjl1a\\n\",\n       \"6muG8yptuq/qlu/o7Ytgg5/q+P3cxuXeJkGaZWKUfegAb3Ry6CJcWv9lW3djxt7pl4N0c4jWcyhD\\n\",\n       \"lW3b55fX/m67LmdLunvL7y6lDtq7+ryk19RctipAaXKnaYrWidg/QqZ6Mv/71T9a/qBs/B1b92r4\\n\",\n       \"lbrn46G2U9N8W4/77ZBnHcc1KEeMm1DQoxy6CNffS3HgNFn+rg3Ta2pM00M0sb5e37H295jWpY5N\\n\",\n       \"XdqueH+TE7oVp1HrSt/dK5smz13+fWziMsTQ5cHzbcXaVndsmGbquqwK8lvlm/jxa30bal1y/kE/\\n\",\n       \"SkM9KqfOxSBmt1gvvxIjpZvbNAZ9aHIL8Tsipd9XnfZ1sjxC0lM6pjFE92lVnmPb57vc+djHul4m\\n\",\n       \"6QMdvj+lAGas2AYjM9QYrMbLFZOP3rHko7GdiLfZ1LpXpu/1H6rrdukxLfJuWqddDTFeaLekQ1um\\n\",\n       \"ccC2svUke5AWm1rs0vm2hjwXpNrmMdfpzA2fVf2wzfWi3rReYt8IMTbB7vw80alda5MbWwtW2aSM\\n\",\n       \"TYOPoQe5N8l3TDt0TieunMoSW5sJObfNC1aWxpuKf0PYuN8XF4qP6sBzxRjvIhxiXz2/xjJjOvdI\\n\",\n       \"0z7ml7qOwfrWAfOfpZzGYFV9NtTAvibdVk3uAGtSphx36Lbz43R93M+Y9Nk61uYHQwqp77RdXb9v\\n\",\n       \"qfyib7ora+j6yNl31lhmLC1ZMdVex4hjvureRbvM9yl2rQC5fgGmNX4tKzncRVj1a7pxy5Sth9dY\\n\",\n       \"vlNXZYWvqv4jQtoYalLLJvq4U62uSV1ca54AY4xZ7GMetLaz7Netgxtqptc1r3Vt6+6pdumdZF3T\\n\",\n       \"jWVZF4+peDB57Fn3Y+4PY9B0SpLXS3pdorJsM/S+ODo5zIN1ytrDUdseOPeU9J619/qcjbjJw6mb\\n\",\n       \"fj6Vk4nUz7o06TbLbcLCPm7gqOvbE6RZJ/AOFe+vft6+ANaP2bUfSl072Zbfe52kp234vGyi1yHO\\n\",\n       \"B29U3Hny1nXt5t32vZy7kVP86F/HPFgD6HMM1qY7iX5e7efBqkqzTRpVzouUjlT/RCDl+Yuhyfax\\n\",\n       \"lG0TdK6DXo+JlM7qBIbr+1HdH1YpAqw6Um+T35H0l4nziCXWI3di6HTHo60vSvp0jTymonJdbP1i\\n\",\n       \"zPQ6fndKdZ6VPgOsl2z5vOsFr2nTcpOWpJTdf5uMaccfS1lzvety0/J1gu717121IY+zG+RdV+zj\\n\",\n       \"tklX4lj2vTFa/2Hcpev/1sW/pt8rM8QzGjs/EcLWoVpMl3F4g3w75dkAx1FkfXYR3m3ldZO7COtu\\n\",\n       \"9LLnq+X4bL02ZcppgHib/Jq02jX1vR2+20XXC36fXRU5njibdt9vagFvI8c6GVKd823KOls+yaDt\\n\",\n       \"GL1tctlnjpJ0eoR0YuJYSGRrgGX7Ytv7bF+z8t5xtvfavs725babzsK86c6Jtgd43SkZetuZbD3S\\n\",\n       
\"1k+vv90gia51kkJud0F2nVyzlpJuzj4Gjtcx1pPj0OXe1N01dNly0PcPt+8v/m97nPUxjqlVOrZe\\n\",\n       \"HSnPLraNa6ybhiKkMxt1WrBeJ+nctfcukrQ3hHCapCuKv7s6oOXK1t104CNXJOnkkvdy7EL4JUm/\\n\",\n       \"2vA7OY67quPta39XBbW5HJxDD3Kv2zW2nl6XLpqUYv4wKvvOpsftNEl3Ux7Yr6xFK6f6ilWW1Ts4\\n\",\n       \"t6XZ9lzxzEhpDtGFHzuNWdkaYIUQ3ifpS2tvnyfpkuL1Jao3cd2qOifjMyV9z3pxiv/fVfK9g0ve\\n\",\n       \"G+Mdebme0Nro+1flUpOLa19j/upKUZ4cg/ZNv4aPlw54asPYj4WYplAXuaxD1XjEMinK3KZHIPY5\\n\",\n       \"IpdtMTltB7nvCiHsK17vk7Sr4ffrdBHubZjmz9bMZz2/McixrG27Cvs4mEvzsHUXld/23ibNnLpt\\n\",\n       \"Y+V5lK37haD/kTD/Ot0M75R0ywR51/0OF5xyVcfVwZJuDKF5a4ytEEKr+m76RIPY27Rtej8cMa22\\n\",\n       \"qq7XQ/0gnqzOdxGGEILtDQfWnuL/fzpVeoIWj0wr1TU6H1rKu1pGUSf2AS2OuZV7T8l7P9J3IWwd\\n\",\n       \"pv13U61r2+oaI2D4OUkPbbB8XU0D7G3BVYxu0k1jsLBfnW33TS32nV9ukG7d1pRbNEhzDH40Qhpd\\n\",\n       \"99vvtHV8CPpfEcoyerZ3a0Ng0kXbAGuf7RNCCNfbPlEbZ0/es3zxd5JOKV63vRg0fZjtGLsIK9n6\\n\",\n       \"gRD0hqHLscG9V143uajW2ia2/qlmOR5t64Eh7B//VtweXba/153jbNOYoE3Lli33a5KeVTOfdW2e\\n\",\n       \"RVhXnxezpheJ2MGQJd01cppzULUd7tVrKdrPF9fnIPe6QxX6mqZh9ftP0mI+uBjpjloIYUfSzvJv\\n\",\n       \"2y+KlXbbaRouk3Rh8fpCSZfW+E7dC+6mz55XI5+6acUWK69Ndzz+l0h59KWsTrrU06ZHiqz6OUmv\\n\",\n       \"WHvvU5KeuPJ31wt2l+b09fFFXfKNdaLvKvVYs21Ba1OW1PTu57mz+g1QNnlQw/RyDSJoRZ2wOtM0\\n\",\n       \"vFHSf5d0V9uftf00SS+TdI7t67QYJPiyGnk9ZFM2Fa/XnVgjnzKDTNPQ0qbxaTlZjqH4gTbfq/ww\\n\",\n       \"zazvQbrZ45i2lqOF1W6tLrNvV7W6nlCx/HI9ctxHlpIcc7b+oMPX6wbrc1E1RGNKN93ENGRdlOXd\\n\",\n       \"ZbhPXzfizM7WjRJCuKDio0d0yHd9Az1d++9U7OtOjc4XJFtHqnzAYKyLXc4XzaWqVrX1oLZul1qK\\n\",\n       \"sTEx9qltXYTPXXn9BxHya2rok17qQeNl+8SFJe/VdecNnw1dl9sMWb5YE4FWrUPfXY05anO338Up\\n\",\n       \"CiLdNLb2slTpT1mfM7mvSnkyTnrysfUIW7cr/vw1SX9bvE7ZSpb7Cb9Km1+/Y1nXTeW83YbPuqSb\\n\",\n       \"U5p9509LyvDKugjbbouhfkTl0sW5VNaSWpV219bqbd+ryvctWkybhIaGCrA26Rp8bVunrgfGXkmv\\n\",\n       \"LF6nHsMx6IXE1qG2/tbWU+xGdwgdkFTF623LxjKmJ8k3vTHjCakKMiJj6VbPWdW5bLVuYz41oe9z\\n\",\n       
\"W4z8mt5gVCfPO9Zcrmm6Tb7XdBwnP3BqmkIL1vqJNNY65bTTDXWxOErSnbSYY+yFLb7f5ldlin1y\\n\",\n       \"yAtwjJPnt3XMJ+q62qVPUqh7TE8l8Ln39kVG5aU1lvk+1dx+icZS9u1pa3+vBqFTWL91B09ku2Vj\\n\",\n       \"qADrURs+63pBSt2C1We6uVyMYnbbrqe1PiC8rwO8az6pHiRe9tmHW2eU5oR5n7KsNhVjw3J16zGX\\n\",\n       \"Y2Gp7TQB2bF1mjaPSbvZ4jWX23Ye/qGa6QwlSHpGmy/ajbvTbtq3bf2ayp9KIqW7oWWZ7p+qfCJU\\n\",\n       \"qdnEvygMFWD94YbPUo/B+paa6XR53lmOg7Rjq1OmsmXW37t7i3SbStFFmOM2qTJ0WWO0WPfZCjZ0\\n\",\n       \"fVUqxn9WXYDb+qSkk6qybJlm39eWukMPYo3V2vT5lR3K8VPaPvQk5f75nyrevyJhnpM1tjFYdU6s\\n\",\n       \"29Yp1VxSKQbh5nKib9MVul4f683tm5aNrSzNWyXIp0zZ/tjng8lz2IemuE7r+irTP0j6sZrLtglE\\n\",\n       \"p3buys1yupg2194hW91XsW1rGluA9Z01vl93nWIPtt64vK3vsVtf1HPrHqnjIEl/0uJ7uR68bbu2\\n\",\n       \"uuYTQ+yyVs3JlVKfQWnublNzuSHrJcdrS0xD1G2q3p25HT+9yfEg6Lqxy9bp+R3TrNLkwvUWNR93\\n\",\n       \"kGNgVff5iEdIum/J+0nuIrR1N1vHV3ycoh67dDX01UqXKq+yboSYN64MrU15g60jopekJVt3l/Sb\\n\",\n       \"NRf/xqak1v7u67Eu2zO4+fjCse1jY5bjdSlLcwmwHtAxzSpNu85+sGU+Yz95NAmq2q7rx1Td/Vv1\\n\",\n       \"9PgctV3/NpMTxpQ6wMr9GDhG0r8OXYgVD0uUbqxB7jE02SeajsFKHUSs1s96marGwy0NPcY392Mx\\n\",\n       \"G3MJsNrYtBO33cG/o+HyOe7IqW9C6LL9qu502d0hzaW+fsm3rd+cW5BW8+/rbqQoF6HiaQ113LJY\\n\",\n       \"fui6bqPsmPteW2d1SLN2PXSoszbfe5Wt74qQdsrtXPWDsGmQmMrQ+Y8GAVa1pheslDvd2Jtk2/zS\\n\",\n       \"zF2Xcva5PXOozxhlGOoYmMMDocvuSry1pNeq/Q+LJufhGAFW3TSeJOnHW+aXSjbdrohrzgFWypP+\\n\",\n       \"VJ5FuOkXU9v6K5sDbdOJcug6WIr5K7vv1qY+6jD1yd+SZOtbe8i7S3pTvwjGmuag7bJt5bSPSJuv\\n\",\n       \"U32f86a+zw5migHWUDtLWZAwldtilxe3w5X+1u/1ZZd5/26LfFOqWw/3bJhu7O6SofcdKW4Z/kZM\\n\",\n       \"ephym66n/Wst02lybVmfbLiuHPbtFHL5UVllqvUe3RQDrD7GYJUt03e3YR/WW7Denjif9der/kOi\\n\",\n       \"vHPR9aQ69DQGbVvl6gbOq+uX83mrTV23DWKG0Ko7y974va+1LMuQwzIOTZj3jRXvx1zfNnMbPjBi\\n\",\n       \"/rOQ84mqrT6fRdiXoX7RrF807rbyWVmZYtzJllO9r0p1cltPv9euDFs/qfQtQpvKcErivBFX1bZM\\n\",\n       \"9UzYTVJO07AtvbpPBKmdvt3rDO5t0vrJ4v/cW9iyQYAVT1m5h5xJOYZNrXKpWuxyWv8hpLijatN+\\n\",\n       
\"+Eqp8bPTmhrbfsEYrIUuP4xuOg8nvLNyyB9uKYKdx0ZMM6Up7eNJTT3AOs7WHyXI5xBbt1V1M2vd\\n\",\n       \"yTjL5NTVGKtVpcktzznuk9Lwvx63if0suaaS57PhQh3rx82UpFr/Nl2EXJAPVHaeG/o832r5DRM8\\n\",\n       \"z16OF7OuO9nqLcf3lXTBARlYD1G3ySe/V9I+DTtDeB82BVjvSpBPVV5j8+qhC9DQgxKnH+uO3SYB\\n\",\n       \"/1B3Yk1h/13qUoebJtKMZcjzRswfnXV/kKceptDW4RHTmpRDhi7AilsX//fRRfjnij/rcqxf0U0f\\n\",\n       \"DpzSpotG2RiEtuWcWoD1jx2+23n9bX37Wnrb0rxZgFXcLdo425afNUl/mU6dY3zKLVg5HiPr9d3H\\n\",\n       \"MZ1yrq3U+8+mnoo+nsqwLa1TG6RVNSh/9qbQgrW+I9T9fspfC11+zea0TZquR5MunFWrrY45Xjyk\\n\",\n       \"uOM9Ut/xd7fti2z0GxHKsCrWNh1DC1Yux2+KsXxNvrdaD6nuuHtqonT7kONQkFV1HyguSfe19YVY\\n\",\n       \"hZmSXE4Gq/oa5L4tn6a/kmNfRLa916fU+W/qTsi1JWLb88I26XvAdtM6PCFBGbrqs4uwy9jJHM+p\\n\",\n       \"Q1itw99PlMdqEDC2LsJNY7D6Xpcf6fj9+0mMwyqT48kg9YDqpW3r3qUcsU7uh9o6WCu30ds629ZR\\n\",\n       \"HdJvU5bU26SP8RpdrZfr+xss2yWfseij3Mv9pM8WrKatt2PdfrGtHtP3HqwU+9294fJDbse2PQFN\\n\",\n       \"BEmy9QBJryjNzLpFxPxmaYoBVt112rZc07qJNQZrPZ2nS7rzyt9/LOkrLdJtVgjrnmp+0fiFltmt\\n\",\n       \"dhHmuE+Wid3N1+XO001yuIsw9his9fRS3kXY9AfbWPbfOpq0pm8ag5WqTprsV7EnSY55nVrfr/sM\\n\",\n       \"7v5yw2dfrni/zvEH5XkyyCXAavorOVbX3nq5br32d+sJ7mw9u8HiR4sWrE1SBxSx0s6hPmN3n/c5\\n\",\n       \"yJ0WrHo2jcEqe5h07Dxj1/u2/afrtXPT9aIq7T4HuUuqbMEiwKqJAKtbOVLc7VF7m9g6QdIxDdJ+\\n\",\n       \"VcOy9HXR6HSitPUTEctSmU2iZWN+N1djHoOVYwtWX/tIk6c1rFst410jlGVbHrG9JXHeq99fb73O\\n\",\n       \"8bq8ydjK25scK2ZMg9y3pbfs5/6XjulU+YsGy7bRVwvWpi7COnnHvuutjtiD7+vWda4tWH206DUJ\\n\",\n       \"YvpuwVruw1MMkFfFGuMaw5B1nbIFq6rFL/oYrEhyjCOykGPF9BVgxU5n0y+SW7VMp+zvVevdh9sT\\n\",\n       \"bzbHUV8nsIMqXuekrxasIdPOXZN1HyrAynX/ratt+Yd4FuGQuq5fTjO5dzXWcieX40GQS4BVpxwp\\n\",\n       \"dqzUO+vJNZerGlSconzZBFi2Xmzrog7fP17Sni5F6Ph52fJ9nAD7aMHKeZD7ctLmXC42bdd/2/HX\\n\",\n       \"poswldTnpU1Wn7XYZsqWNmN2+x6DNURak0KAVa1OObYNzo4xk3vqSSk3GaKLsMuA2Id0+O7SL0h6\\n\",\n       \"UYfvf2eEMmwyxpPZ2Aa5t7lwL/fbD9z0xXHe5r7t+LtzxfubBrmnkksX4ctbfL9NgPXwFvn0Icc4\\n\",\n       
\"Igs5Vkzqi3ld9+qSl61T1O5RRLlvkxQTf2664yiXiUZTn8xTPY/MGr4Oy26ueG/DNKxhpkLYdIyv\\n\",\n       \"Pkh+eazfaeW9/y9JidLa9lSFupPQDt1qmlrMMVh1f8g+qmOesXAXYU05PYtwqevGinVL8M/WWGbT\\n\",\n       \"L91Pt8w3p521rxasbLoIM9BnN0HMtDd995QO6Zbl0Wf3/ab98Ukrr8vOO10fWVQl5T7Q9vw5xBis\\n\",\n       \"LLoIW2rTghVTzCcd5HTNykqOF7OuG+vXopSinqHHYLWeE6um9Qva7ZRmnWN1EaY01pPIWMu9bnWa\\n\",\n       \"hnVDBZ2rn+X4Y7WN1WtC348b6kuMR0HFDLBSTTKcytxuaGgtx4oZy04mpfkFVSsdW98aKb+N2az9\\n\",\n       \"L608tqdFOlX6mJSwL6lb+3JtwepD1Ris2K0BXcZgjV3b9RjTGKzTbX3bQHkv9X0XYcpWpxzjiCx0\\n\",\n       \"qhjb59r+hO2/sf38SGU6L1I6fag4EV/WZPLPdXW3yWGJ05cadxHuNCzKTUYeYO3ETCxGgLW+X/52\\n\",\n       \"++JsslOVZyqNxmDZOiTCJLQl67VT9tlUWrA2HH87TdLJ/S7CI+svulP25sZ90Na3SDp00yI13xvQ\\n\",\n       \"TtUHdBHW1DrAsn2wpFdLOleLB2leYDvGmIM6Y5/qSr3hK+rvii4BVt0yt33gc5MAJlaA1eQuwhx+\\n\",\n       \"DZV1jWxYh51t342pzT5dNxhvmPZO03J0VbU/Vl2s7qzuk9BuCrBW5fbDoO0djG0DrKjdRrZ+0NYd\\n\",\n       \"ti3WJY/6dsre3LZ+79ryeVnZ+wzSa5yndmT3Ot5xcrocBGdK+lQI4TMhhG9IepOkx8Up1misXrhS\\n\",\n       \"DaqtSrftWLO6FwKrcYBV6S5bPt/UgpVqoHCuYrRgbXtGZgp9tlikzmu1G7xuXmUXxyF/LOxq+b22\\n\",\n       \"geIh9s3Oh13X/RJJP71lmVQ3ZdSxbaza/bZ8v2yoxSvbFyeZOvWUw4/iLHWJmG8v6bMrf39O0v27\\n\",\n       \"Fac+W8fWWGxTE23jtEveX22puu/+lwdvqtdtTdPrB17V8tt+3UnSESXv1ak3aTH7/NeL1227I5e2\\n\",\n       \"zSu12hp3TM1tu1WHdG6xZVtL1XWysWWxSLdsNv3ldt/W8lDnmF3dZ5rUwXKdyvabbQ4tq+9Y21KL\\n\",\n       \"el1ug/VjouwYOUJrTzpoUJbVqRbqtkaXbfejI67/avnrtE4d3TDNpdW/6zz1YbnMEyT9ra17Nsl/\\n\",\n       \"i23nyqNWyt+01+Cm8q2k0WR86erTOdocL6uatjYevmm/qvjsVrr5PnpUzX2zbJn1erplka9DGHxK\\n\",\n       \"mKw4hHb1YfsJks4NITyj+Pspku4fQnjWyjJUNgAAGI0QQpRW8i4tWP+gmz925WQtWrFuEquQAAAA\\n\",\n       \"Y9Kl7/SDku5i+xTbh2kx6d5lcYoFAAAwXq1bsEII37T945LercXAyNeGED4erWQAAAAj1XoMFgAA\\n\",\n       \"AMolub0y0QSk2bD9GdsfsX2V7Q8U7x1ne6/t62xfbvvYleVfUNTFJ2w/criSN2P7Ytv7bF+z8l7j\\n\",\n       \"9bR9X9vXFJ+VPfg3KxXrvcf254ptfpXtR698NpX1Ptn2e21fa/ujtp9dvD/pbb5hvSe9zW0fbvtK\\n\",\n       
\"21fb/pjtlxbvT317V633pLf3ku2Di/V7W/H3pLf3Usl6p9/eIYSo/7ToLvyUFg94PVTS1ZLuFjuf\\n\",\n       \"If9p8SDn49bee4WknyleP1/Sy4rXdy/q4NCiTj4l6aCh16Hmej5Y0hmSrmm5nssW0g9IOrN4/U4t\\n\",\n       \"7j4dfP0arveLJP1kybJTWu8TJJ1evD5K0ie1mIds0tt8w3rPYZsfUfx/iKS/lvSgqW/vDes9+e1d\\n\",\n       \"lPMnJb1B0mXF35Pf3hXrnXx7p2jBmssEpOt3SJ6nxeR4Kv4/v3j9OElvDCF8I4TwGS021pm9lLCj\\n\",\n       \"EML7JH1p7e0m63l/2ydKulUI4QPFcn+48p0sVay3VD7p3pTW+/oQwtXF669K+rgW891NeptvWG9p\\n\",\n       \"+tv8a8XLw7T4cfwlTXx7S5XrLU18e9s+SdJjJL1G+9d18tu7Yr1XJ9NeFW29UwRYZROQ3r5i2bEK\\n\",\n       \"kt5j+4O2n1G8tyuEsK94vU/7Z1K+nW4+fcXY66Ppeq6//w8a7/o/y/aHbb92pRl9kutt+xQtWvGu\\n\",\n       \"1Iy2+cp6/3Xx1qS3ue2DbF+txXZ9bwjhWs1ge1estzTx7a3Fo6OeJ+nGlfcmv71Vvt5Bibd3igBr\\n\",\n       \"DqPmHxhCOEPSoyU90/aDVz8Mi/bDTfUwiTqqsZ5T8ruSTpV0uqTPK8/HWkRh+yhJb5H0nBDCV1Y/\\n\",\n       \"m/I2L9b7zVqs91c1g20eQrgxhHC6pJMkPcT2w9Y+n+T2Llnv3Zr49rb93ZJuCCFcpYpH4Exxe29Y\\n\",\n       \"7+TbO0WAtXUC0rELIXy++P8Lkt6qRZffPtsnSFLRlHhDsfh6fZxUvDdWTdbzc8X7J629P7r1DyHc\\n\",\n       \"EF70LhgAABTnSURBVApaNDMvu3kntd62D9UiuHp9COHS4u3Jb/OV9f4vy/WeyzaXpBDClyW9Q4tH\\n\",\n       \"fk1+ey+trPd3zGB7P0DSebY/LemNks62/XpNf3uXrfcf9rG9UwRYk56A1PYRtm9VvD5S0iMlXaPF\\n\",\n       \"Ol5YLHahpOXF6TJJT7Z9mO1TtXjw8Qc0Xo3WM4RwvaR/sX1/25b01JXvjEZx4ll6vBbbXJrQehfl\\n\",\n       \"fK2kj4UQfnPlo0lv86r1nvo2t338slvE9i0lnSPpKk1/e5eu9zLIKExue4cQXhhCODmEcKqkJ0v6\\n\",\n       \"sxDCUzXx7V2x3j/Yy/G9aQR8239adJ19UovBYS9IkcdQ/7RoUry6+PfR5fpJOk7SeyRdJ+lySceu\\n\",\n       \"fOeFRV18QtKjhl6HBuv6Rkn/KOn/ajGu7mlt1lOLX8XXFJ/91tDr1WK9/50WAxo/IunDxUG1a4Lr\\n\",\n       \"/SAtxihcrcWF9ipJ5059m1es96Onvs0l3VPSh4r1/oik5xXvT317V633pLf3Wh08VPvvppv09l5b\\n\",\n       \"790r6/361NubiUYBAAAiSzLRKAAAwJwRYAEAAERGgAUAABAZARYAAEBkBFgAAACREWABAABERoAF\\n\",\n       \"AAAQGQEWAABAZARYAAAAkRFgAQAAREaABQAAEBkBFgAAQGQEWAAAAJERYAEAAERGgAUAABAZARYA\\n\",\n       \"AEBkBFgAAACREWABAABERoAFAAAQGQEWAABAZARYAAAAkRFgAQAAREaABQAAEBkBFgAAQGQEWAAA\\n\",\n       
\"AJERYAEAAERGgAUAABAZARYAAEBkBFgAAACREWABAABERoAFAAAQGQEWAABAZARYAAAAkW0MsGwf\\n\",\n       \"bvtK21fb/pjtlxbvH2d7r+3rbF9u+9h+igsAAJA/hxA2L2AfEUL4mu1DJL1f0k9LOk/S/wohvML2\\n\",\n       \"8yXdOoRwUfriAgAA5G9rF2EI4WvFy8MkHSzpS1oEWJcU718i6fwkpQMAABihrQGW7YNsXy1pn6T3\\n\",\n       \"hhCulbQrhLCvWGSfpF0JywgAADAqh2xbIIRwo6TTbR8j6d22H7b2ebBd2s9Y9T4AAECOQgiOkc7W\\n\",\n       \"AGslwy/bfoek+0raZ/uEEML1tk+UdMOG70UpKOqxvSeEsGfocswJdd4/6rx/1Hn/qPP+xWwY2nYX\\n\",\n       \"4fHLOwRt31LSOZKuknSZpAuLxS6UdGmsAgEAAIzdthasEyVdYvsgLYKx14cQrrB9laQ/tv10SZ+R\\n\",\n       \"9MS0xQQAABiPjQFWCOEaSfcpef+Lkh6RqlDoZGfoAszQztAFmKGdoQswQztDF2CGdoYuANrbOg9W\\n\",\n       \"p8TtwBgsAAAwBjHjFh6VAwAAEBkBFgAAQGQEWAAAAJERYAEAAERGgAUAABAZARYAAEBkBFgAAACR\\n\",\n       \"EWABAABERoAFAAAQGQEWAABAZARYAAAAkW182HMKtksffsgzCwEAwFT0HmAtrMdYxFYAAGA66CIE\\n\",\n       \"AACIjAALAAAgMgIsAACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgGmgcrvaoJTSUmNQUAAGlN\\n\",\n       \"NsBaKIuxiK0AAEBadBECAABERoAFAAAQGQEWAABAZARYAAAAkRFgAQAAREaABQAAEBkBFgAAQGQE\\n\",\n       \"WAAAAJERYAEAAERGgAUAABDZ1gDL9sm232v7Wtsftf3s4v09tj9n+6ri37npiwsAAJA/h1D5TOTF\\n\",\n       \"AvYJkk4IIVxt+yhJ/1PS+ZKeKOkrIYRf3/DdsP5g5cVDmNfzdPQHMJfnkyYvAAAwfmVxS1tbH/Yc\\n\",\n       \"Qrhe0vXF66/a/rik2y/LEqMQAAAAU9JoDJbtUySdIemvi7eeZfvDtl9r+9jIZQMAABilrS1YS0X3\\n\",\n       \"4JslPadoyfpdSb9YfPxLkl4p6ekl39uz8udO65ICAABEZHu3pN1J0t42BqsowKGS3i7pXSGE3yz5\\n\",\n       \"/BRJbwsh3HPtfcZgAQCAUYg5BqvOXYSW9FpJH1sNrmyfuLLY4yVdE6NAAAAAY1fnLsIHSfoLSR/R\\n\",\n       \"/iahF0q6QNLpxXuflvQjIYR9a9+lBQsAAIxCzBasWl2ErRMnwAIAACPRaxchAAAAmiHAAgAAiIwA\\n\",\n       \"CwAAIDICLAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAAIDICLAAAAAiI8ACAACIjAALAAAgMgIs\\n\",\n       \"AACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAAIDICLAA\\n\",\n       \"AAAiI8ACAACIjAALAAAgMgIsAACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgOSZ2B7YemzgMA\\n\",\n       \"ACAnDiGkS9wO0hlf3v/OFw+V/u4IaT1PK4Tg+HmXrVv8vLpYlPNAOZURAIA5sB1iXX97CLBW03+T\\n\",\n       
\"pAtEgLVfeTnzKiMAAHMQM8BiDBYAAEBkBFgAAACRbQ2wbJ9s+722r7X9UdvPLt4/zvZe29fZvtz2\\n\",\n       \"semLCwAAkL86LVjfkPTcEMI9JJ0l6Zm27ybpIkl7QwinSbqi+BsAAGD2tgZYIYTrQwhXF6+/Kunj\\n\",\n       \"km4v6TxJlxSLXSLp/FSFBAAAGJNGY7BsnyLpDElXStoVQthXfLRP0q6oJQMAABip2gGW7aMkvUXS\\n\",\n       \"c0IIX1n9LCzmekg33wMAAMCI1JrJ3fahWgRXrw8hXFq8vc/2CSGE622fKOmG8m/vWXl9Y/uSTkDV\\n\",\n       \"pKIAAKB/tndL2p0k7W0Tjdq2FmOs/imE8NyV919RvPdy2xdJOjaEcNHad5lodDXniklFmWgUAIDh\\n\",\n       \"xZxotE4L1gMlPUXSR2xfVbz3Akkvk/THtp8u6TOSnhijQAAAAGO3NcAKIbxf1WO1HhG3OAAAAOPH\\n\",\n       \"TO4AAACREWABAABERoAFAAAQGQEWAABAZLXmwRpK1bxRw06zUC63MjHNAwAAw8k6wFoomzdqSOVz\\n\",\n       \"aw0rtzoCAGDe6CIEAACIjAALAAAgMgIsAACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMhGMA/W\\n\",\n       \"dpsmAAUAAOjbJAKsBSbbBAAAeaCLEAAAIDICLAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAAIDI\\n\",\n       \"JjRNQ31l82aFEKLO6zCWubmqyhm7PgAAmJNZBlj9zZk1lrm5xlJOAADGgS5CAACAyAiwAAAAIiPA\\n\",\n       \"AgAAiIwACwAAIDICLAAAgMgIsAAAACIjwAIAAIhslPNgjWUSzy6YABQAgPEaZYA1j4kxy+KrKa4n\\n\",\n       \"AADTQxchAABAZFsDLNsX295n+5qV9/bY/pztq4p/56YtJgAAwHjUacF6naT1ACpI+vUQwhnFvz+N\\n\",\n       \"XzQAAIBx2hpghRDeJ+lLJR8xIAgAAKBElzFYz7L9YduvtX1stBIBAACMXNsA63clnSrpdEmfl/TK\\n\",\n       \"aCUCAAAYuVbTNIQQbli+tv0aSW+rXnrPyusbK5fKbW6r3MqzlGu52mCuLwDAkGzvlrQ7RdqtAizb\\n\",\n       \"J4YQPl/8+XhJ11QvvWfl9Zs2pJrjvE85zreVY5m6mNr6AADGIoSwI2ln+bftF8VKe2uAZfuNkh4q\\n\",\n       \"6Xjbn5X0Ikm7bZ+uxdXx05J+JFaBAAAAxm5rgBVCuKDk7YsTlAUAAGASmMkdAAAgMgIsAACAyAiw\\n\",\n       \"AAAAIiPAAgAAiKzVNA1TNKX5pYaW2/xWm7Ytc24BAFIgwLoJ8zHFlVt95jjPGgBgqugiBAAAiIwA\\n\",\n       \"CwAAIDICLAAAgMgIsAAAACIjwAIAAIiMAAsAACAyAiwAAIDImAcLvclxMteyMq1PPprbxKkAgPwR\\n\",\n       \"YKFHuU0+KtUvU45lBwDkii5CAACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgIsAAAACJjmgaU\\n\",\n       \"6jI/1JByLBMAYH4IsFCh7fxQm5btC3NWAQCGRRchAABAZARYAAAAkRFgAQAAREaABQAAEBkBFgAA\\n\",\n       \"QGQEWAAAAJERYAEAAETGPFgYLSYVBQDkigALI8ekogCA/NBFCAAAEBkBFgAAQGRbAyzbF9veZ/ua\\n\",\n       
\"lfeOs73X9nW2L7d9bNpiAgAAjEedFqzXSTp37b2LJO0NIZwm6YribwAAAKhGgBVCeJ+kL629fZ6k\\n\",\n       \"S4rXl0g6P3K5AAAARqvtGKxdIYR9xet9knZFKg8AAMDodZ6mIYQQNs9HtGfl9Y1ds8tWbnMy5VYe\\n\",\n       \"xFW1fUMIzFMBADXZ3i1pd4q02wZY+2yfEEK43vaJkm6oXnTPyus3tcxuDHKbj6msPGXX5KHLifZy\\n\",\n       \"2+cAYFxCCDuSdpZ/235RrLTbdhFeJunC4vWFki6NUxwAAIDxqzNNwxsl/XdJd7X9WdtPk/QySefY\\n\",\n       \"vk7S2cXfAAAAUI0uwhDCBRUfPSJyWQAAACaBmdwBAAAiI8ACAACIjAALAAAgMgIsAACAyDpPNIr5\\n\",\n       \"YPJSAADqIcBCA0xsCQBAHXQRAgAAREaABQAAEBkBFgAAQGQEWAAAAJERYAEAAERGgAUAABAZ0zRg\\n\",\n       \"FJiDCwAwJgRYGBHm4QIAjANdhAAAAJERYAEAAERGgAUAABAZARYAAEBkBFgAAACREWABAABERoAF\\n\",\n       \"AAAQGfNgAWvqTmpatlwIodbkXFV51P1+XygnALRDgAUcoO6Epl0nPh3LxKmUEwCaoosQAAAgMgIs\\n\",\n       \"AACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMiYpgHZqTsP1RR1mVurT7HLyTxWAKaGAAsZKrvW\\n\",\n       \"zuU6O5a5nFKUcyzrDgDb0UUIAAAQGQEWAABAZJ26CG1/RtK/SPo3Sd8IIZwZo1AAAABj1nUMVpC0\\n\",\n       \"O4TwxRiFAQAAmIIYXYSMRAUAAFjRNcAKkt5j+4O2nxGjQAAAAGPXtYvwgSGEz9u+jaS9tj8RQnhf\\n\",\n       \"jIIBAACMVacAK4Tw+eL/L9h+q6QzJa0FWHtWXt/YJTsgeykmSW2S5lgmKu1Ll/qoO/lp10lSu+ZT\\n\",\n       \"Ny8mcwUOZHu3pN1J0g6h3fXA9hGSDg4hfMX2kZIul/TiEMLlK8uEm08e+CZJF6h8QsGqySXrLFv3\\n\",\n       \"vbGkOeayzznNPMvefYb14fJpFqTELWfdMqUqe7186ufVtZzAHNgOsY6JLi1YuyS91fYynTesBlcA\\n\",\n       \"AABz1TrACiF8WtLpEcsCAAAwCczkDgAAEBkBFgAAQGQEWAAAAJERYAEAAETWdaJRABi1FHOXpUy3\\n\",\n       \"bl5MvwAMiwALACrnGkuRZtl8XV2lSBNAF3QRAgAAREaABQAAEBkBFgAAQGQEWAAAAJERYAEAAERG\\n\",\n       \"gAUAABAZ0zQAM7NpfqbYcyf1ORdUXTmWKYWh1rPP/asvVes01vVBPwiwgFlKMe9T3bxSzQVV11zm\\n\",\n       \"jMqpjvvOP4W57DeIhS5CAACAyAiwAAAAIiPAAgAAiIwACwAAIDICLAAAgMgIsAAAACIjwAIAAIiM\\n\",\n       \"ebAARNHXxJZDTxQ6dP5jVlZ3bSfrnOKEpnMwp+1GgAUgkqEnL+0LE062F7vupjih6RzMY7vRRQgA\\n\",\n       \"ABAZARYAAEBkBFgAAACREWABAABERoAFAAAQGQEWAABAZEzTAExck3mbcpvjKbfy9GnoecXW5ySq\\n\",\n       \"u1wOYs631Wc+TbZ53XzGtN2mhgALmLwmcw/lNsfTPObLKdfXtmhSx7ntH1WGqrsY+ZSl2TWfsWy3\\n\",\n       
\"aaGLEAAAIDICLAAAgMg6BVi2z7X9Cdt/Y/v5sQoFAAAwZq0DLNsHS3q1pHMl3V3SBbbvFqtgaGtn\\n\",\n       \"6ALM0M7QBZihnaELAPRgZ+gCoIMuLVhnSvpUCOEzIYRvSHqTpMfFKRba2xm6ADO0M3QBZmhn6AIA\\n\",\n       \"PdgZugDooEuAdXtJn135+3PFewAAALPWZZqGmvN1nP3l/a+vP0zSLTvkCQAAkD2H0G4uO9tnSdoT\\n\",\n       \"Qji3+PsFkm4MIbx8ZZnZThIIAADGJ9YkrF0CrEMkfVLSwyX9o6QPSLoghPDxGAUDAAAYq9ZdhCGE\\n\",\n       \"b9r+cUnvlnSwpNcSXAEAAHRowQIAAEC5JDO5MwFperYvtr3P9jUr7x1ne6/t62xfbvvYIcs4NbZP\\n\",\n       \"tv1e29fa/qjtZxfvU++J2D7c9pW2r7b9MdsvLd6nzhOzfbDtq2y/rfibOk/I9mdsf6So8w8U71Hn\\n\",\n       \"Cdk+1vabbX+8OL/cP2adRw+wmIC0N6/Too5XXSRpbwjhNElXFH8jnm9Iem4I4R6SzpL0zGLfpt4T\\n\",\n       \"CSF8XdLDQginS7qXpIfZfpCo8z48R9LHtP+Oceo8rSBpdwjhjBDCmcV71Hlar5L0zhDC3bQ4v3xC\\n\",\n       \"Ees8RQsWE5D2IITwPklfWnv7PEmXFK8vkXR+r4WauBDC9SGEq4vXX5X0cS3mfqPeEwohfK14eZgW\\n\",\n       \"4z2/JOo8KdsnSXqMpNdIWt5RRZ2nt373GnWeiO1jJD04hHCxtBhXHkL4siLWeYoAiwlIh7MrhLCv\\n\",\n       \"eL1P0q4hCzNltk+RdIakK0W9J2X7INtXa1G37w0hXCvqPLXfkPQ8STeuvEedpxUkvcf2B20/o3iP\\n\",\n       \"Ok/nVElfsP062x+y/fu2j1TEOk8RYDFqPgNhcfcC2yIB20dJeouk54QQvrL6GfUeXwjhxqKL8CRJ\\n\",\n       \"D7H9sLXPqfOIbH+3pBtCCFfpwBYVSdR5Ig8MIZwh6dFaDD948OqH1Hl0h0i6j6T/GEK4j6R/1Vp3\\n\",\n       \"YNc6TxFg/YOkk1f+PlmLViykt8/2CZJk+0RJNwxcnsmxfagWwdXrQwiXFm9T7z0omu//X3t3r5pF\\n\",\n       \"EEdh/DkWASNpJG3EFNoFCzubgETBJqWxkeA1WGlhm8ImN2AVJCBCNGJrYasgGLQThQQMpPEO/haz\\n\",\n       \"EiEgCDMI5vnBsl8v7HKqw+7OvK+Bq5j5SNeA1SRfgW3gepItzHyoqvo+rY+AHdrnNmY+zgFwUFXv\\n\",\n       \"pv3ntMJ12CvzEQXrPXApycUkM8AasDvgOjppF1iftteBF3/4rf5SkgBPgM9VtfnbKXMfJMn8r1E8\\n\",\n       \"Sc4CN4APmPkwVfWwqhaqahG4A7ypqruY+TBJZpPMTdvngJvAHmY+TFUdAvtJLk+HVoBPwCs6ZT5k\\n\",\n       \"Hqwkt4BNjicg3eh+kVMuyTawDMzT3hM/Al4Cz4ALwDfgdlX9+Ff3+L+ZRq+9BT5y/Nj4Ae1fDMx9\\n\",\n       \"gCRLtA9Nz0zLVlU9TnIeMx8uyTJwv6pWzXycJIu0p1bQXl09raoNMx8ryRXaQI4Z4Atwj9ZbumTu\\n\",\n       \"RKOSJEmdDZloVJIk6TSzYEmSJHVmwZIkSerMgiVJktSZBUuSJKkzC5YkSVJnFixJkqTOLFiSJEmd\\n\",\n       \"/QRSxC44KICduwAAAABJRU5ErkJggg==\\n\"\n      
],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb01ae5a90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['fc6'].data[0]\\n\",\n    \"plt.subplot(2, 1, 1)\\n\",\n    \"plt.plot(feat.flat)\\n\",\n    \"plt.subplot(2, 1, 2)\\n\",\n    \"_ = plt.hist(feat.flat[feat.flat > 0], bins=100)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The second fully connected layer, `fc7` (rectified)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 37,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlcAAAJPCAYAAABRvvFyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xv0LGdd5/vPJ3dygRADOzGJJsgdkWRxNyAb5BJw5OaI\\n\",\n       \"okBE5CAiIJ6jgs6Y7XgDR5DjcMRZQ4KRYXB00JyIoyYoP424JKIJCTcjZ5JlgskOs4gKXkGe80dX\\n\",\n       \"79TuXdVdl29VPdX9fq211+5fd9VT37p/+3mefsopJQEAACDGUVMHAAAAsE1IrgAAAAKRXAEAAAQi\\n\",\n       \"uQIAAAhEcgUAABCI5AoAACDQ2uTK9jm2P2D7Y7Y/avs1xfun2b7a9k22r7J96jjhAgAA5M3rxrmy\\n\",\n       \"fYakM1JK19s+WdKfSXqupJdK+t8ppZ+x/UOS7p1Sev0oEQMAAGRsbc1VSumOlNL1xevPS/qEpLMk\\n\",\n       \"PVvS5cVkl2uRcAEAAOy8xn2ubJ8r6QJJH5K0L6V0sPjooKR94ZEBAADMUKPkqmgSfK+k16aUPlf+\\n\",\n       \"LC3aFXmGDgAAgKRjNk1g+1gtEqt3pZSuKN4+aPuMlNIdts+UdGfFfCRcAABgNlJKjihnbXJl25Iu\\n\",\n       \"lfTxlNJbSx9dKeliSW8q/r+iYvawIOfE9oGU0oGp4xgb671bWO/dwnrvlh1e77BKoU3NghdKepGk\\n\",\n       \"J9u+rvh3kaQ3Snqa7ZskPaX4GwCAWbGVbH3z1HFgu6ytuUop/ZHqE7CnxocDAMDovlrSr00dBLYH\\n\",\n       \"I7TH25s6gInsTR3ARPamDmAie1MHMJG9qQOYyN7UAUxkb+oAJrI3dQBzt3YQ0V4F22kX+1wBAObD\\n\",\n       \"VpL0H1LSJVPHgmlF5i3UXAEAAAQiuQIAAAhEcgUAABCI5AoAACAQyRUAAEAgkisAAIBAJFcAAACB\\n\",\n       \"SK4AAAACkVwBAAAEIrkCAAAIRHIFAAAQiOQKAAAgEMkVAABAIJIrAACAQCRXAAAAgUiuAAAAApFc\\n\",\n       
\"AQAABCK5AgAACERyBQAAEIjkCgAAIBDJFQAAQCCSKwAAgEAkVwAAAIFIrgAAAAKRXAEAAAQiuQIA\\n\",\n       \"AAhEcgUAABCI5AoAACAQyRWwZWwdZ+sRU8cBALuK5KqCrSfY+oWp4wA6+m5J108dBADsKpKrai+T\\n\",\n       \"9MqpgwA6usfUAQDALiO5AgAACERyBQAAEIjkCgAAIBDJVbU0dQAAAGCeSK4AAJBk61hbJ04dB+aP\\n\",\n       \"5AoAgIVLJX126iAwfyRXAAAsPEzS8VMHgfkjuQIAAAhEcgXMGP1DACA/JFfATNl6qqS/nzoOAMDh\\n\",\n       \"SK6qMRQD5uCsqQMAAByJ5AoAACAQyRUAAEAgkisAAIBAJFcAAACBSK4AAAACkVxV49eCAACgE5Ir\\n\",\n       \"zJKt420dN3UcAACsIrnCXN0g6XemDgIAgFXHTB0A0NEDJd136iAAAFhFzRUAAEAgkisAAIBAG5Mr\\n\",\n       \"25fZPmj7xtJ7B2zfZvu64t9Fw4YJAAAwD01qrt4paTV5SpLeklK6oPi3bR2LGYoBAAB0sjG5Sild\\n\",\n       \"I+muio8cHw4AAMC89elz9WrbH7F9qe1TwyLKA4kjAADopOtQDG+X9B+K1z8u6c2SXrY6ke0DpT/3\\n\",\n       \"Ukp7HZcHAAAQxvZ+SfuHKLtTcpVSunP52vY7JP1mzXQHuoU1OfpcAQCwxYoKn73l37YviSq7U7Og\\n\",\n       \"7TNLfz5P0o110wIAAOySjTVXtt8j6UmSTrd9q6RLJO23fb4WNTw3S3rFoFECAADMxMbkKqX0woq3\\n\",\n       \"LxsglpzQLDgP/PAAAJAdRmgHAAAIRHJVjRqReaCGEUAWbP2yrQdPHQfyQHIFAEB/L5b0nKmDQB5I\\n\",\n       \"rqpRIwIAADohuQIAAAhEctWCrStt/cHUceAQ+sYBALLT9fE3266uWfBpkk4YMxAAADAv1Fxhzugb\\n\",\n       \"BwDIDslVNZqbAABAJyRXAAAAgUiuqtHcBAAAOiG5wmhsvdLW+6aOAwCAIfFrQYzp2yQ9YeogAAAY\\n\",\n       \"EjVX1WgWBAAAnZBcAQAABCK5qsZQDMNguwIAth7JVTskBwAAYC2Sq2r0uQIAAJ2QXLVD0tVP9Paj\\n\",\n       \"JhEAkB2SK4yJZAgAsPVIrqolSbJ1f1tXTB0MalGTCADIDsnVek+T9JypgwAAAPNBcgUAABCI5AoA\\n\",\n       \"gEzYer6tN0wdB/ohuVqPDth52/X9Q58zYPv8uKSfmjoI9ENyBczXrieXAJAlkqv1qBnIG/sHQCS+\\n\",\n       \"sFSw9SybbdMGyVW1ups2BxeQKVuPt/X/TR0HsIV+S9LpUwcxJyRX1bzyf1ZsPdTWg6eOA8jM10m6\\n\",\n       \"39RBYNaoDUeIY6YOYGZyOfE+JumLko6dOhAAAHA4aq6q5ZJEYb0saxYBYAtxX2yB5ArAtuDiDyAL\\n\",\n       \"JFcYU3RNEzdTAEB2SK7ayakZKqdYAABAgeSqGjUiw2C7AgC2HslVtbpaoZySgyNisXWNrQdNEQwA\\n\",\n       \"YKvldP/LHsnVdnmCFmP95Cq6KZOmUZRx8QeQBZKralykZ8TWF2y9duo4JsBxCmw5W/9i6xFTx4F2\\n\",\n       \"SK5mwNa9qt4ePZB8HSPpUUMvxNZRdlYD73IMYDC2vszW06aOAzpW0tdMHQTaIblab/XmNfrNzNZX\\n\",\n       
\"SfqbsZc7kOialrFrbt4m6TMjLxOYyo9KumrqIIA5IrnK3z1r3p9jk9Dca1oeKenUqYMAgAnM8Z4z\\n\",\n       \"GZKranUHEQcXkC/OT2wrju2ZIbmqtqxh4YDO29xrwgCgCa51M0NytV7OB3TOsWEcJP8AkCGSq2p1\\n\",\n       \"Ny1Lkq1jMvvVGABge/FFamZmm1zZ+hZbx060+D1J10y07CVONrYBtZcAxrLr19tWZptcSfoVLUYk\\n\",\n       \"H9Py4LpQ0uNGWiY3UKAZLv4AsjDn5AoAACA7JFfV5vANeI41WnOMGQCAVkiuquWUBMwh0ZtKTvsJ\\n\",\n       \"AABJJFdtTXEzr1vmHJOuOcacM7YnsBtyONdziGE2SK6qcRAhO7aeYuvC8luTBZMnzlsAWWCspna4\\n\",\n       \"ePcTnQzs2v74PUmfl3TK1IFgJ2xl8m7rbEmfTmnnrh8YETVX+Vs7oCkAoJVbJT1l6iCw3TYmV7Yv\\n\",\n       \"s33Q9o2l906zfbXtm2xfZfvUYcMcXU7faEiiAEwhp+tgtHtOHcBc2NyDumhSc/VOSRetvPd6SVen\\n\",\n       \"lB6oRVPF66MDm5hX/l99PwdzvPBFx+ya1wCwTWqvnbYebeuEMYPBZhuTq5TSNZLuWnn72ZIuL15f\\n\",\n       \"Lum5wXEB2GyOCfaQ2B7YVuu+PF4r6TUTLRs1uva52pdSOli8PihpX1A8ueFiHWvIk5R9BWBXjfGc\\n\",\n       \"XZKsFnp3aE8pJW3fja1ufXJaTw70vPbHFDgGgN2w6VrHtSAzXYdiOGj7jJTSHbbPlHRn1US2D5T+\\n\",\n       \"3Esp7XVc3lQ4YIGJ2PoxSb+Ukm6eOhZgh23tfdD2fkn7hyi7a3J1paSLJb2p+P+KqolSSgc6lo/N\\n\",\n       \"dr3WZtXWXgB22I9K+qKkH586EADbp6jw2Vv+bfuSqLKbDMXwHkl/LOlBtm+1/VJJb5T0NNs3aTFe\\n\",\n       \"yBujAspETmNLkTTUY9ugjC8csTi/IHEcdLKx5iql9MKaj54aHEtOOJjmhxsrgF01xj2L+2ILjNCe\\n\",\n       \"v5xq0ZAXEkoMieML6Ijkqh0uNnlhfwBbwtYf2nr11HFkasprHV/kOyC5qpbTTbvuwM4pxhwcPXUA\\n\",\n       \"E+Cih23yRI03IDXnDgZFcoVtcahvoK3jpwwE2BIkIPMx5L7iOOhg7snV2Dt9ioOMGqp6dfvjn2x9\\n\",\n       \"7aiRAD3ZOtPeyRrYKXBdbY8kq4W5J1dDyenEqzugOdDXO3PqAAbCfq+X03nbxV9LWfU5mvv2RAyu\\n\",\n       \"OR3MPbka6uTnYBoG2zUWN7/tc5+pA9gRva9Ftn7O1n+OCAbbZ+7J1dBWT8CcbmY5xYLxsN8xFr4M\\n\",\n       \"rfc9kv6PqYMosK8yQ3KFOdv1RCPbC6qtk2w9ceo4gBq7fu1oI9vrTM5IrqrNYeDOnGJpigtaf3PZ\\n\",\n       \"76+T9IdTB4FeOF/zkcO+mMu1JwskV+vlcEBvk0Mnp6372fpMVHnbytYrbV04dRwddH0ofB+cr2hq\\n\",\n       \"668dgdhWHZBcrcdBNZyHSzpdkmy9xNZPTBxPrn5B0k+V/iaBCGDrLFtvmzoOIAj3qsyQXFWru4Hl\\n\",\n       \"dGPLKZa+/r2kH5k6COyUZ0l61dRBDMXWo2zt61tMSDB52qbrJzI09+RqqJN/my8qO8fWk+3ZH+tL\\n\",\n       
\"5WOTGwTq/KmkS6cOYsvtyn1iV9Yz1LbccHYRB3zz5OL3JT1pyEBwBI7P6fUd7X2bk3eOz/bYZi3M\\n\",\n       \"Pbka++Sfw8G1zRfEPlof67YusPWYIYLpobx/53A8joljvyFbt9l6QYf5kq1HDxHTDI15vG1aFs8W\\n\",\n       \"zMzck6uh5HSRbntg79KJMPS6/rGkDw28DGAKZ6l7be6DIwOZSE7XeGyhuSdXu5RIrOLiAGBIu3x9\\n\",\n       \"BXqZRXJl61xbD5k6Dk2T0GxTEjXmxTpiu3Fz6Y5tBxSK5tR7TR1HR5zLHcwiuZL0QUkfr3h/qMRj\\n\",\n       \"DgkNB/x6u7B95nCcYr62+fia4vpw8oBl78L1blbmklwdN/Ly6g7UKQ7gbTppoi/W23zxR3scD0A8\\n\",\n       \"r/yPBuaSXAFtcaOdFhfi+dvmfTi368Pc4t15oyRXtu60dY8hih6gTOnuAzmHi8scRotvasztOcSy\\n\",\n       \"ctjmrnkNYHtxrs/MWDVX95F06kjLwu5oc8GhgzuApdVz+RxbfzPIgsb5MRbjXGVm7s2CY9ck9F6e\\n\",\n       \"rafa+t9tZmn5/lzMPf6p5FB7ht2wS8fag6XWv+Zreg2r+jFWW7u0L7bC3JOroQ1xQF8o6cvazmTr\\n\",\n       \"+AFiGVtOIxo3QQLY3RTbjhsQmuJYaY4O7R3MPbkaemevlj/lwfVPtk6YcPnRhr64beuFgAc3S7J1\\n\",\n       \"tK3vzSCO59l66dRxrGPr2219tsus4cHkq8u5tLPnHzabe3K1FWz9s61/12DS8oNY53hiR1+sh94G\\n\",\n       \"q+XPcZtvq3Ml/aepg5D0DkmXTR3EBk+QdO+pg8jMtiWO27Y+szeX5KruwNmWm91xkh45dRBbZohm\\n\",\n       \"wRwuYHN5cHPOsXVm61hbPzd1HEE27aNtub6in608l4c2l+RqbHO4qBxT0w9rDrFHaXXS2zppqECw\\n\",\n       \"M75c0vdNHURDF00dQMaolcag5pJc1R34Y2fUuZ2Ar6p4z7Y+YuvfRi/M1rk9i5hy+z1C0ucnXH4U\\n\",\n       \"vkXWy+38zN2m7cWxlo8pj206tHcwl+SKnVqt7heEXyPpGQMs72b7sH5fbU05AObpAWVs7c3b1olT\\n\",\n       \"xwCMKOL6k9N9KadYoPkkV7ngAN5tOez/VPO6M1tnSfr7iLLKxQaXB8yGrbfZ+uGp4xiDrSfbeu/U\\n\",\n       \"ceRmLsnVNj0CZpZs3T+4yIh916aMrTlWbN3PDhmYcOmegWVhe2zNOVNh6OvPqyS9NmAZOdj0Rek4\\n\",\n       \"SaeMEciczCW5QnPhF0Rb50v6y+hig8vLbXlDsRa/LI18pMa2bBugq21OJIdmsf2OMHlyZesuW2d0\\n\",\n       \"nb3Hcn/K1q1d58/YEDfKIR66vZGtv9vQMX/sQWRzk3N82Y7QbutUKayPWc77YJNNsc953TaZW5+r\\n\",\n       \"3Du0k1ytmDy50uKBzl81wXKfKOnsms9ohjzcEBeR8rasa3I8RdLjB1jelGXkaFvXq85tkt46dRBA\\n\",\n       \"CzknwtuchHeWQ3LVx9g3BQ6i+W6DLnGvHl85rPuuJUJDqBzvzNaP2HpAy7LmvD/mHHs0Hn/THc2C\\n\",\n       \"FeaeXA0th2rfbb2h57Bem+QYY44xbYufkPRdLeeZ8/4IHaHd1vG2Ht4jnjHNLRnIoVlwnbltz8HN\\n\",\n       
\"Pbma84VtVdS65HyQb9P+ygH7GkNquw+/X9INQwQygLn1ucoZ26FCLslV75uErV+39eCIYEpyvnnV\\n\",\n       \"ya6Tt60ftPXJlbebbtuoTpRz3JdoZ4p9vHaZtt5p681jBVPju2vej95ekw1Ea+sBdq+HUx+xLYpm\\n\",\n       \"4mzGb7J10pp4hrzub+rQTrNghVySq02aHDjPk/TMgeMY8gDauk70tk4pfpn1NEkPUvxDh9uUEbG8\\n\",\n       \"HPZFUsd1sfU7tl4cHM+u27QvvkPtmxrR3k2SfrnH/FXn9kskPb/lPEO6v9bHM6Ucro1ZmUtyVafT\\n\",\n       \"DrX1g2r2LSuH6s4cYihrE88fSPoL9TvxcuprkMO+6PMIoWdIesGGMrE7QvtcZeDkFtNG9HHdlfMm\\n\",\n       \"518qZuuYqQNoKLqz95tGXt6YcrogfpUOH/27vP3GjjOn7dLXoOti6z6SjkpJB/sUExVP5rbpuMrB\\n\",\n       \"WNtzjONzV84BmgUrzCW52pWDdJMm24EnmMfJ9YIRvW9X1/NaSffWYgw6rDfn82zT8T3Fuk21PdPK\\n\",\n       \"/23mGUOO+6q87FyvlZOZe7PgLsjtoK08iW0dZ+uymnmq1mEOj79ZnSeHfRH+4GYduZ5nSLpXUNl9\\n\",\n       \"3d9uvJ457J85mXNimA1br6z5KPJ4nDp5qozB1mlatFBw7q3IJbnatGO2rrN3C11PqrG3zZmSXjry\\n\",\n       \"Msf+lpnLzWgOx33Utpri6Q1tTL4vbH3RruxLh3qR14Nv2/D5Nvtv0uS/hs1SLsnVJrkcpBtPSFsP\\n\",\n       \"tfWVAy1/qs6Wfcuq2m7blOyMaehHEeXmX6cOYINJjsFimIBlQnW0pEd3KKbrl9oh9VqmrafYekKT\\n\",\n       \"SQNjGewYsHV6y1gGCWPNZ8s+tTlfQyaRdZ8rW+dp/S9Acry5fkzSLZLOm2j5Qx/kUdt8Ds2CuZvr\\n\",\n       \"vmgj9+RqKj8h6ZOSfrVHGTn+CqzPMvcX//5BNY84WmOIx99EXIvvuXkSSfPqq7YTcq+5+oAWI/52\\n\",\n       \"3nG2bGtf19k3/F3nuI7Li7CMMdm6Pai5YBtOnG1Yh9mz9e22HtRiltyTq6yPKzvrxLk3W6+09QdT\\n\",\n       \"Lb74P+QYqDkvsj6+UC/35OrogDK+WdIdLefpe0DncEGzFh2Tn9hoYus0W+esKauP7C8Qts5r8Fy0\\n\",\n       \"HNYjhxiaqDtm/qsWtS5NtUmupmhqbvML3tzM5Vha5/mSvm7lvRNt/Y8N82W17rbuIR3xFAtJ+tLY\\n\",\n       \"sVRY9wv0ZXxZbc8c5J5cLdVdnJrs0Pv0WO6h8m29T9LxDecb6mI65EX6f0r6qxFjGLo/Q5Iku/GI\\n\",\n       \"xsta0iFiiTTlWGFT6FxzZet3ez4SBYFsPdjWKzZMFnVMf9OmcAZabld19+JJ4rL1WltvbzApzYI1\\n\",\n       \"5pJc1RnzZvcNIy4rQtuD/csCy2pi6MfRLMt/SINppepa0hySqSpDj3OVkz7Ngk+X9NCW87TdFk2m\\n\",\n       \"z7Uz8tgjtP+IpF/cME2u51yViGfifsDW4xosY+xxrl6ju59JOad9ko1ckqscf7UidT+opjwYl9tq\\n\",\n       \"0F8L2vrUhotCWQ6Pv+ECUW+IbdOoTFuPtvV3aybp2+fKxXL2t5m+bflbKut1sye/f0Vsn/2SnrXm\\n\",\n       
\"85y/+EjUXNXqdXDavsX2Dbavs31tVFBVixqw7CEMddLn0L/jLFu3aDH+0Kb+XOtiiWgWjCh/3fSr\\n\",\n       \"7+VwAUnKI44+yvE/StIpa6bt3aG9eHj4B/qWs4VmexzZ+kbF/tihz7aouw5ty8Pi1yG5qtE3CUiS\\n\",\n       \"9qeULkgpPSYioBVzfbBoDslg5LYpl/UwqfU4XlW1aUNvo4g+FXWj0d9h6wc6lBfBquhgausoWydM\\n\",\n       \"E9Kg+g4Um9TuWBuiWbD1sW7reLvxz/DXLaNPs2DOHthz/ojr41C16vcbYBl9rOvQTnJVI6KGZYwT\\n\",\n       \"dKoHKXc9YLrEFX1w5nDhzOkCFrGc5TbdJzUaqHBMl0j6xwbTNR7x3Nattn6ye0hZaNp03dVQ59ll\\n\",\n       \"kv52oLIx7X1rk/KXpMkHMkU3ETVX77f9Ydsvjwioonxp/ANniuU2+eYZOarwokDr1GIsnKbr2meb\\n\",\n       \"5PSg08aKWqGcvplVxfLghvO2eRjz2eqeROZysf/ZqQNQt2Px/iMsI9e+rk2M8aOO8q/Fbetetk63\\n\",\n       \"9aiGMWxLs2CT7hc5xJmVvsnVhSmlCyQ9U9KrbDcaU2lkb5tgmTncWNYe7Pah0fnv0mIssLWT17zu\\n\",\n       \"E8vYJ2OfJuYc9udSuZmLC1q8tvs6933Qp1kwp+N+at8o6W8k/YKkP10zXZdtNuftTHJVo9fjb1JK\\n\",\n       \"txf/f8b2b0h6jKRrlp/bPrB4dYmkqx4v/fGvd11UnzgnENks2Hbd17WPLz6wHiHp+tI0Z3ZYTptY\\n\",\n       \"2n7WdLo2CdGcmijXyf5CbOtiLY6pOpsS/3tJh56plrvs98cAcjgP+mq7Dsvj+dgWZURup6l/GVln\\n\",\n       \"1seC7f1S418St9I5ubJ9oqSjU0qfs32SFmPK/Fh5mpTSgcW0ukQ68Mc94uwziOjgioEK/z4l/cvy\\n\",\n       \"rcjia153mX/pjJZl9N3Ooc2spUf6DNlRuWxZIzv187vq/q57byq/1HP+y6TGg7+W5ZpAj3Hc5NCF\\n\",\n       \"IeKJGmOJ2F5VNcip4vM+sSw/25RcDbn/t7ZDe0ppT9Le8m/bl0SV3Scb3ifpGtvXS/qQpPellK5a\\n\",\n       \"M33kz+ablDmmz0p6c+nvXL9ldNU1wVu3X5vu86rpHtthvj7HyvJn/FMdb6vLzeW4H0qbfmHRdqlZ\\n\",\n       \"MLrmZduue02NcT7mvm1zPw9G17nmKqV0s6TzowIpHlr5zSkd9tyxOd1EvmLqAApDj8009qCerY6B\\n\",\n       \"HsMRVK1Xjsdfl2/G2bF1tqRP13w8pwt1ox+Z2Po/Je2lpD8bKI6woUZafB69PGm6fT/EciPH8hvt\\n\",\n       \"XC+ec1j+grNu2Tk8+zBLg2fDLZ7K/gpJP17z2ZwutlLsiZDDzbRvDH2qjl9nt2pu2IYb9qrV7d94\\n\",\n       \"XWydFR9OiFvVrelvaEM1C/6spNcPUPYm755gmX2MdY0bowm5y/hmdfP0HPC7/hmbth628mvo90o6\\n\",\n       \"rWHRs24WHNKYVY2bqqfXjbY7aJ8rW19h65GB5Q7V52rsZfctM6o56+QW09ZdFLr8WjCXC0bddmwS\\n\",\n       \"320ZPCakTt2+ymW71yol/LnHuu6XwOfYjYfwiJDTtorscxVRzqahePrWMn7W1tfXfLba6tJ4LLyS\\n\",\n       
\"nPZtFnK66HapXow6uK+Q9OGgsqQZN9eMqM3JeGzN+0Of0JP1dbL1Hlu/VfHRSS3KGHzIBltn2npj\\n\",\n       \"y9mGGkduzAv8F4v/c27qOoKtf7IPdef4N5I+YevEKWOSlMNzAof4ctWl3+cgNVeF+3acb2s7tA9p\\n\",\n       \"jAO66QW0quZqrB0XfZHvsl3bjBAeVV7Tstc2C9p6rKrHE1u3/9qsS11yFbk9ord/YzU3ludq/QNd\\n\",\n       \"V63bxkMmhc+W9EMd521SI53beGhzd7yO7Cvb9stL3fuWWnUFKVsOVrtNI5K36Ve6qebqs/3DGaSf\\n\",\n       \"K0lVjVyaBaX1NVdzOJH6ilrH1VoB23q6rTf1KHNTbN+pxbfgoRwXUMbkx5CtB9mVVe7/aq8d16nv\\n\",\n       \"r2nD170Ytf4lfYtpOk3HG/YYJr25lJp6xuwOMMSyls2sOfW5yumY+4eAMqK6d5RRc1Vj6qrYshx/\\n\",\n       \"dTDEwTi0qmW/TtIPjh1ISd8Tr+6b9dx8UtKf13y2rrmvbkDUTcdZ1fkddWyeI+nynmXkOH5dZI1v\\n\",\n       \"m2m6el2Hedr+AvfhLcvNKSmpU3dOlUX3R+vyq8DI2iYXj/G5YMO8bc4BkqsavUZob6jpwVGVXA3e\\n\",\n       \"Z2QgfX5RF1XuUNusTQxRF9mI4zTip+oR6xOxLm0vuK+2Dz3UOadzqe6Xj1XN0K1+JdnDHBKDKkNu\\n\",\n       \"mxtsPTAl/eUIy9rI1nOkw36ANHdjDcVgSY/Q4gveXI/z2RgjuVratDO7/FowV11qBIccU6bJN8/K\\n\",\n       \"C6atkySdEhJR3nJuJujbLPhz0qGnB+SkSXLVxhQ3/SbLHDKuLsdkl3nK94q+v2xbp8m2uqJH+ZHm\\n\",\n       \"Npq/1a0VgGbBDsZMrjbJqVmw0QFj6/iU9M8jxNNG1SCifX4N9Ac6/Fti1Ym2qXlnqBNv6BM66fB1\\n\",\n       \"++qBl1enrto+fJydkbVpUsupL05ZdLNg41+D9jDVcTF1E+o6QwzC2nT6Jh3aI2wsy9Y9JT2wxfzL\\n\",\n       \"7ZZTF6Ms5PJrwU3jXE1lU+x/GLisXJsFH9Rj3tVm3XL8Xdr168qP0OQXjaF9v2z9la2Xd5m14r25\\n\",\n       \"jDAvbe4zNuW34Km32WtbDpq7NHXcUlAibOueGQ9+W8kO+dGNNGw/3ybn2z1alr2c98UNlr9Tcv+1\\n\",\n       \"YPSzr6I9auLlV+l6gevTcXKMb31R8+bkHElPHrD8NrWMwwRgPcDWU1be/tYOsfQ+Xup+cWjrDLtz\\n\",\n       \"Lf7U16CxmgXHLP+/S7otIpARVSXEtceGrX22/n75p/rXXH1fg2mG2O9TH//Zyqkqb+wbdISpl99E\\n\",\n       \"TjH27Ts0pLEuEl06yVc19XZZVpt5o/yypN+r+azJN+kx9v3t6v5r2qnPr6mXX3ZYLLZObzGERrmW\\n\",\n       \"u+tgl2Pr0yx/P1V31xiyeZDkakS5NAvOVaMTwdbX2rozYBldmwXnvg/mHn9fdQnZpu2S+3aLrrna\\n\",\n       \"1Hy8rox9LWJpa4h+XF2mbTtP3y9Dn5H0LR3nnYO59bnqer9v0iyIFaPXXNn68qLTXKPJG5aZUw2c\\n\",\n       \"JB1j68tKfz9e0n1GWnbf/k2bdEnwhnrcSaSxLhJdHonRttzvaFn2VPul7rzt9cXA1hUr518bbWsj\\n\",\n       
\"cvmVaa/j11aq6OvVtk9c1fRNk9apb9Ktlm/rfl3KtPWfbH1nm2VlJOfrd3YGTUpsXV/+s/j/05Le\\n\",\n       \"E7yoHDvD/0xQOREXnakuXFEn4+xPalvX1nz03LpZSq/bPoZkOfhh11rPHLWJ/zk68hEvTcsYqq/i\\n\",\n       \"0GVGxNG031nXGtOph62IXO6rG0xTtR2+V9JrK6arq10d9deCHeabOinO1tA1Po9Q9Y5ZbVPPbQdF\\n\",\n       \"1LQMsW3b3BzWffuPviHM4cY96jFm66KVTtKPrpn0+CbFdQyj6QjtuSXfTWqu2sSc9fFp6yW2zi6/\\n\",\n       \"1Wb2LovsME/ncm3dt+h7lfN+CPmlY8NlTIU+VyMaozntjIAymu7A6B099ckgDfNto48u5VaNyxR+\\n\",\n       \"c7R1c98yAuZZ+m1Jzwgus62q2q+cLoZj/2J0lE7Btl5g6/9pUebl6vYYm0bxBM1T1rbp9KCk5zcs\\n\",\n       \"e+prbuSnufsoAAAgAElEQVTyI58KMeSx26fvZk7Xk6yMkVz9dPF/eQc9SpKi+161mC4bdueLalN9\\n\",\n       \"D/6hqqWH+HZ+bosyV839InFexXu5nw9taqXqakWaNJ+OvR1eLel7Sn/n3qE9smN2nXUPJt8GVUlQ\\n\",\n       \"5H68SLr7mYC2nm+37sd7VMuYmpj7dXMwU/9a8G9Lr3d1J72lxbRdfjEV/cugPh3a52bTCP1/b+vU\\n\",\n       \"yDJrpmsyT9XI3rknV02uPzn0ORl7uWPHmUM3garuDP2CaD4MRFUcXedbN3+TpLQu8f1+Se8svf/e\\n\",\n       \"4r02+u7n3K8nWZnsV3YND/y2J9qFXWJZI4eDaarxl9osv2mNQW5NnGVdRjg/UdKZA8TSNI5NtTZV\\n\",\n       \"F+2cEt0+NVdD1+6MUVZEuZE1V9t2821zfxujz1XV8pavx1iuFT+eZE7Xk6yMWXO17uIYVZ3/TS2n\\n\",\n       \"3ySHi8XU33pblVeTNC9rVXJOruauahs1fW9I6y6+TWLp2h8kh3Xvaqqaq6ha7j79MueQCFeVPVSC\\n\",\n       \"mvv2ILmqkdv4ULssl/FyNukSwzPDo9gs15O+S//BJkMxbEomGiU5tn5R0gkbo2uuSzNJ9HMSy+tX\\n\",\n       \"99y+qWv25vQ8xa4do8c2ZrNg2/lXz90xtukQX25zvc5ObsrkKrcTsWyKQS+jljXFwT70dhrjIjmn\\n\",\n       \"i8Rhsdp6jqSHV0xXuw1sueZhs6/oGVsbIQlTyaZ+apb0rg7ltvk8wlQd2qPWrU85kedhl2bBppom\\n\",\n       \"R3XLmOL+N0Tz45yum6Oa8sHNOSdXETqvn62jbR0bGcyI1tWWNNomtv7c1lvjQqqMZU6etPL36nZ8\\n\",\n       \"waEP3PgC/jpJ/1y8HnLbdGkWrKqR6/PNvjxN3XhjrbdBcY7eu+18dcXVvG4zX1fL+0DTsr6n5pfe\\n\",\n       \"EbFMXVPTpWa5/HffZebcLLjpCw1Kpvy14BySqzFjLB+kvybpLwYoN2L+Phf+pvNeIOlpHZe5xEm/\\n\",\n       \"/qL9kDWfDRVDm8/G1iWWi8OjaK9x3HblsDityig8TtI3RMQ0sLk0C0YstwlXLLfpfHW4ztYguRpO\\n\",\n       \"n4Pu0eo+blH0N5+o/hhdyuozX641pUM/57EuoaqsCbV1pqQ3N1qY9YrgGtUj4rf1H3X4I4HqagSm\\n\",\n       
\"uHGultFkdP2mxnjQ+utryh+qM3b1RA7vU1fnH1pM27cbQZ9mwTFrrnK5Dm69MZKrOTy0t05onxBb\\n\",\n       \"z5N0oEeZUU4JLi/nby9HjLVl6922HlQzXaRNyc8QyuU3Ob+/UdUjyd9doHWhrZdJ+kVJX90ynrbN\\n\",\n       \"gv9Xy/K7fOGoMvUxPGZ/nF3pc9VF11ryJtvwh4OXG/XFt3ab23qppPuvKXPq/ZWtpg/q7KNpzVWO\\n\",\n       \"Oyn6IvfQEWJo8q1wU6fedeVV9aka+2bQxLrYvk3Snyuu6XUqET+JbzLNWyQ9psF0bTVppmhz42/z\\n\",\n       \"68k2n+Usss9VX7n1uRpy+V1aZL6z9HrosQGblF31/uo0l0n64Joyc7xvZyGXZsHcdlBEPGPVhDSx\\n\",\n       \"Gkub5oy2N6Rcb2Cblj3E/qoq8x4dR45uqmkNSFXN6tj7p08zRZ++f30MtY1qE0Nbjy5qvaPkXHM1\\n\",\n       \"lTG7T3yNFgMQ911+22tW3/t9VYy53buzkctQDJOdjDXt/9E4ABc630htfaPdqqa1smbUzmJst28d\\n\",\n       \"sOy251XOCW+Tads0dcy1Y+4vSfr1ms8ia4vG7p9ZlsujspquQ10y3HT+8jAoXftcvX7NZ1W6Xg/m\\n\",\n       \"et5MKpc+VydI+tkRYmkrqs/VmDewUZvobJ2nw59rt3H5xfhKbceguVLSE1vMU6euirvtRaLPoK9n\\n\",\n       \"t1xWG02Pu6rzsktTR5+L65DJ35ySzNXlj10rF30fmHMNR9dmwYjuG0MLqSW29U22/ktAPFstl2bB\\n\",\n       \"qsEMsZBFDd8a/0uLPkxt/LOkNwYs+15rPltevFYfrPy4gOX29TPB5XVJipaGaCoYYt6Ivii9zh9b\\n\",\n       \"D5b0lT3LONnWXaW/q5rox+7DOEbNVd++TEOLWvcu67mueXyQJm1bjyw6qzdZ3up73y3pu4rXXwqK\\n\",\n       \"bevk1KE9RxHV3Kuv22p784v+RtS2vKYX2AcElblJ21+fRWqz7bqs4+kbyomqsVnXxDZ0crVpmrq+\\n\",\n       \"m22HNmiyrz7RYJpN7qPDE/5/arH8Kjk1C66WN/Q8Uvz1rkuzYJf56+YZqz/mz0h6iqR3rpmmibnU\\n\",\n       \"SI6OEdqHE3XCTL2dIvfbqOvSso9W6+KL/1+yaRq78tE0EV5Q8V6f5GrsY23Ix5O07TsS2aG47w1n\\n\",\n       \"qpqrqHKjvpS2WebYorfhqDVXE82/U3Lp0N7VycUgiEPJ4WCaW9+RdULa/Fv4rs2THNL1htikU2lV\\n\",\n       \"ErQq8ubepMwTGkzTpOypku3VfiDPUs1AqT2X0ziGBu83HfNr6D5Xq/O0ffyNNEwfxSn1PZ77zD9F\\n\",\n       \"zVWbadbNl/t+ncwYzYJNdD2gLteRfWq6WD1AoodiGKrT7xSJVJtmmrGWWafcJ2ubLwJdfrn0ogbT\\n\",\n       \"DK3NxX5TIvNbkt7bY3ldjo8cfnnaxdB9rtreoLtMN5QpmgXXzZdTzVWVbb6u9jJmh/Z11ahdd3pE\\n\",\n       \"YrVu+VHfrAf7dm/rNPuwTrY5H+xHrIt9qAalid/vsexN22Wsca7qDNHfJaIpbdO0U98Mm7i+wTRT\\n\",\n       \"r0fY42+KX+O2mqXLYjrMM7oBx24bqrvEkN1nqLka0RjJ1YNr3p+yr0dTOcS1bjslLb6t3zJaNJu1\\n\",\n       
\"3Wb3HqDMydj6tH3ocRFDNe+8aM1n5XKanN9Db9u2j79pOk3V+3XLOq3F8tro0vzXtqzDJ7K8kjBU\\n\",\n       \"zdf2uj5GzVVTU92so7fBJnX76JU9l79pmY8vXkfXvGHFGMnVw2re/7dVcRQPh33bsCGNbsixgKJq\\n\",\n       \"75oqx1O1Xv/YcN5uC+/+TXTTEAxRTeRfLun8oLLqfOPK331qXofuY3LBms+OUvNzY+hmk7VxtDzu\\n\",\n       \"wm7Qts6WtK/02bskfaznsrr0uTq5w3LKpk6ahpp+db6+8/9kzft3v9H9GvhwSW+tK7eBNl9odt6Y\\n\",\n       \"fa5Wd8wP1nz2fZIebOsKxT51fkpNDuSuI/52eTr73LStjaly0YbPz+1YbpQu+60qYWxbI9y1Brnp\\n\",\n       \"tPfoWcZYzSKblnPfDvN0nb483S2Sji79/URJX7GhzKiaq/K+u7ZmWU3KqXtvXRlNRTxfM8JYSVwf\\n\",\n       \"TZqL1+47W0+X9NTSZyRXNabs0L7pon71WIGsMeYJ0OSb+dy/OUR09hxqn0zd5ypKn+Sqiz61zH2S\\n\",\n       \"uT438BzVHStH17y/TtMkaNP0q8+/G0vkedM2qZuiWdBr5svtmH7Nyt9zuv+MaspfuxxV8zonbQ7s\\n\",\n       \"yIOsXNambbM6Qm7bZ6qNKSKOqGPlOUHlyK5tmq1dX1uXNJ22hyYJSNua0abzNXFYGbYe2bOMXGox\\n\",\n       \"+va56luDuOkc6dIs2CaWPl8CU0AcZWvLsvVlbaZvsJwxjrGIL6ltzuvye19Y+SyXZ0JmZ8pBRFcz\\n\",\n       \"9ykNMRTDiba+qkcMS11vfm2naWqKfTVEzdUTJcnWo+za8ZGauk+HeQ70XGadtjWdY9d0rSvrKWum\\n\",\n       \"adu0XnfMr2umXCzI+oYGy1pbRObz1yXYnZdr668kPb3io6Fvum3XddX3dFxu3+4Yq+ddSM2VrePs\\n\",\n       \"w571WhbxA5fV5Ao1qLkazjdL+lRAOZtufo8OWEZXUyRa0cfKn2r9r+/G0mdbdu2v12e5UTVXTcs5\\n\",\n       \"q8HyN637qRXlVHlfw9H9u2yDpvOM+cisLuuxGt85xb/V8rokbnNoFuybDPW5jq1bxrskfabDfE2X\\n\",\n       \"+y8r7y1bTqi5WjFlUlMem2nqmqs6OcQ1akfJlTGzoo3Z56rNA0WPU+zF4dc6zBN1rPWp5WtbO9RX\\n\",\n       \"m+TvIz3KWGrSh6hvk1yEoZsF68rI4XonjdgsGLjsPjGvm7fqs3esmf6hqq+dLZf1tS2WV37viyuf\\n\",\n       \"0SxYI5cao1xO6qEMORRDtFs2PFJoqItI0/maHrNzf1p7m1/KtvkW3bbPVdtljl1GtKNW/m/FVpLW\\n\",\n       \"fkEZ40HDozcLril/6ptv3TodZR82xMXQy1s33boa3Krj8DvaBFSzzCYJWBWSqIam7HNVlkuSt2qI\\n\",\n       \"i02feceqWWg7wnNTXW4EXaffmFytjBcz9UVjqpqrLJIrW3dJekDP5bfu0G4fkcTWJVdtjt1zat4/\\n\",\n       \"NI+tMzZN01BkzVXfctqUvU6bc7HrjxheLOmOiun6ng/RzYKR98aIc72uf/LU18/s5JLUTP0NNqQz\\n\",\n       \"YU91yUDbKuOhRS+zbSfLyD4rU9TCDW3TD0X6dmiPVF7eqZK+esM0m8ro4vSVvyOSqzrlX8I9c810\\n\",\n       
\"EX1j2nxe92vBrteevttqjGbB+274PGo5TaZvW3PVVdWXi6Ml7V83zQqSq4ZySa5yiWMoTQ68LsnV\\n\",\n       \"d3aIJTdtL0pNj5W2yVWSJFv3tvU1LeeNUK7FiUr6qrbVLjcLNlneoeTK1pNtXbtm3i7xj9G/adM5\\n\",\n       \"sprE1sV0WDkdj8vK83DAZ/4dsajg6Z8btLzoGq6uy3ySpB/oMT9JVQ2aBauN0S9iVZPkarWcNkM9\\n\",\n       \"DKHtdurahDFUzdVq50xJ+nnVd54um/LmuM5YzYIRrMP3U9WQGJG1yq2SKy1G9X/0yvt9Re+TLknf\\n\",\n       \"tzecfnWdm26DqmOwzdAFY/5oYrmssZsFm37JGTq5Wh2kdtPxRM1VQ7kkNbk2sSz7R4zxGJ665GqK\\n\",\n       \"fbRuMMeN+2rNt9Ku3/431cZEWC5j4zhIAy9f6jYq91J5+xyoWs6aB/+O3XSzWsajAspYp8mxsxzj\\n\",\n       \"qmtisUmTm3DfrgBtY23aLDhGc120XJeTS81Vl2mqkFytyCW5GjyODQN6bhrA86E9F9/kwPvXDTH0\\n\",\n       \"8f6W07+39IvBugHp1hnyJh1d7S4t9s9yXKMhym9bXp/zoTzvsxosK/dmwcjtXLVd77Xy9/KXfkP0\\n\",\n       \"uSrPE9VZvGtt8Go55f/ryin/ve6atq6mo2qaPtr2L+tbXtT8WfS5WrPcptNQc1Ujl2bBMb5dnD/C\\n\",\n       \"MupUdST8fVvfXXqrS7NgU3/WYZ6oG3xZRLPgUMfsMrnq0/RRp+7xOHWGGmBw+XlUn66+xo6jqkbw\\n\",\n       \"Yw3LjzrumjxqZuhmwVW/1LDsLtsgslm3i7bJzlTNgnVyq7miWbChXGquRmsWtHXuwItoepA9WYtR\\n\",\n       \"3Je6dGhvatm8eVhZtk5YN09N894YfUbWTTPUsdKn5mrTefS9Lcvr0yzYttmo7bbtexOqi6OrNnH8\\n\",\n       \"botph665Wjfvv29YVt21pu12XQ6BsSmZikwcos7j6OdJrhvjr8kyvs3WvVvOsy7GHIZiKM8/xKPi\\n\",\n       \"tlIuydUocdh6uKSb28wStOgmVeNDJld1/nHD511v8tE1V5vKiNCnuSb6+B2i1nBpteZq7GaTtmVF\\n\",\n       \"JuqntShryj5XL+9Q3qb3upQVsQ2GbhbcZIzlHF1azn0lvazBPE37s0VeW6rKavJDA5oFO+i842xf\\n\",\n       \"ZPuTtv/S9g81maXjZ5HWPv5izQNbW8T3ga7zD9ks2KW24Sip0fPVCnur80rSKQ1mbHuD7dJs10bL\\n\",\n       \"b+d7fZZVt9whmwVXp+mYXL0w4pmWEc2CQ107ampt9qpikK1T7VZN3ENe8wb4ZePeyt9N55tds+CK\\n\",\n       \"veWLddfCY1ssZ2nTeHSy9UJJz2hZbtNl1i276jgvO5RE2fp6kVzV6viIBx8t6W1a/FT5oZJeaPsh\\n\",\n       \"PeIYo9mnSV+T9xX/r/48t2n5kv6g6zftKWqu1jlG1ReUTRchqV2T4pQ1J1VaJm97beZpashmQSuk\\n\",\n       \"WfDPx0quhk6S69TU2uzVTX+X2jX/Rh3HVeVUDWnRpazSNthb+bu33JKrGnvLF5uSq7ImiUaTJtf/\\n\",\n       \"Jun7G5RVy9b/sg89mL7FdWXvsGJqJvpKkVTValEzcZjHSPpUSukWSbL9K5KeI+kTHcv70Y7zDW2s\\n\",\n       
\"fgHScL8W/EpJr+pQ1jHqfpE+qmbers2CUbU6TXSpGYvuOzRks+DqsromtxHnxrdIenxAOUPo0ufq\\n\",\n       \"KxqUO0bNVZMHVFfZ1FTVpRboe4r//3PHsvqqW07bpCC65mqMCgVJOk/SLxavq64NqzVjm2IpbzfX\\n\",\n       \"vA91T67OknRr6e/bJD12wzz3tGt/NXW/0uuq59odLy2q3jcFtmaaUySdXPPZoY7dxfzLsY6W/9cu\\n\",\n       \"t7S8e1Qtu3hvebFbveiVx1Q6oTR/eX3LMd+rbv1K7y+nXy7r7NJk97YrB848SUc+KPg0Vdemldeh\\n\",\n       \"bpiGe6v6RK7at/dcvlhZt3Jn+3LzYnl7rtvfJ6v5Taa8jU/cUO5Sed/V7ZdTmhyzhfL5cUSH2A3l\\n\",\n       \"lGPZtLwTV6Ypb6Oqee+hw/dFEeeJhx0vm9az5vN7SHrQ+nDvPi5XylgeE+XmuOMrPm9q9Vi5tw4/\\n\",\n       \"FlaHbKia57DzYfV81uIHLD+/jG9lfVbPpSN+bFJMf1zNNCeVyrtPRaxNLI/jh0r6oKQ/Wvm8fFye\\n\",\n       \"vGaf38vW54rXD6yZ5lT7sHVZrv89VaxXw3Nn3Y9ylsv5kha1Nv9Sen+5T05c+Vuryy5erx5PJ+nu\\n\",\n       \"L5Anq3Qd093XkxM2XJ+WTlyZv7GKOKu23XLbVo3XeKhmrJinKo7lfIf2TancoZ5DO3tOqX3Cafub\\n\",\n       \"JF2UUnp58feLJD02pfTq0jRksgAAYDZSSiE1iV1rrj6tw5/8fo4WtVeHRAUIAAAwJ137dnxY0gNs\\n\",\n       \"n2v7OC36TlwZFxYAAMA8daq5Sil90fb3ajEo39GSLk0pde3MDgAAsDU69bkCAABAtUF+1t5hgNFZ\\n\",\n       \"sX2L7RtsX2f72uK902xfbfsm21fZLv2Kw28otsUnbT99usjbsX2Z7YO2byy913o9bT/S9o3FZ//3\\n\",\n       \"2OvRVs16H7B9W7HPr7P9zNJns19v2+fY/oDtj9n+qO3XFO9v9f5es97bvr9PsP0h29fb/rjtny7e\\n\",\n       \"3/b9XbfeW72/l2wfXazfbxZ/b/X+XqpY7+H3d0op9J8WzYSfknSuFj9VvV7SQ6KXM+U/LR6hc9rK\\n\",\n       \"ez8j6QeL1z8k6Y3F64cW2+DYYpt8StJRU69Dw/V8oqQLJN3YcT2XNaPXSnpM8fp/avFL08nXr+V6\\n\",\n       \"XyLp+yum3Yr1lnSGpPOL1ydL+gtJD9n2/b1mvbd6fxcxnlj8f4ykP5H0hG3f32vWe+v3dxHn90t6\\n\",\n       \"t6Qri7+3fn/XrPfg+3uImqtDA4ymlL4gaTnA6LZZ/TXksyVdXry+XNJzi9fPkfSelNIX0mLQ1U9p\\n\",\n       \"sY2yl1K6RouRp8varOdjbZ8p6ZSU0rXFdL9cmidLNestVQ+wtxXrnVK6I6V0ffH681oMCHyWtnx/\\n\",\n       \"r1lvaYv3tySllP6heHmcFl+K79KW72+pdr2lLd/fts+W9CxJ79Dd67r1+7tmveue2BK23kMkV1UD\\n\",\n       \"jJ5VM+1cJUnvt/1h28uHrO5LKR0sXh+UtK94/eU6fJiKuW+Ptuu5+v6nNd/1f7Xtj9i+tFR9vnXr\\n\",\n       \"bftcLWruPqQd2t+l9f6T4q2t3t+2j7J9vRb79QMppY9pB/Z3zXpLW76/Jf2cpB/Q4YNDb/3+VvV6\\n\",\n       
\"Jw28v4dIrnahh/yFKaULJD1T0qtsP7H8YVrUG67bDluxjRqs5zZ5uxaPkjhf0u2S3jxtOMOwfbKk\\n\",\n       \"90p6bUrpc+XPtnl/F+v9P7RY789rB/Z3SulLKaXztXiKw9fZfvLK51u5vyvWe7+2fH/b/jeS7kwp\\n\",\n       \"XaeaR9xs4/5es96D7+8hkquNA4zOXUrp9uL/z0j6DS2a+Q7aPkOSiirEO4vJV7fH2cV7c9VmPW8r\\n\",\n       \"3j975f3ZrX9K6c5U0KJ6edm0uzXrbftYLRKrd6WUrije3vr9XVrv/7pc713Y30sppb+V9FuSHqkd\\n\",\n       \"2N9LpfV+1A7s76+V9GzbN0t6j6Sn2H6Xtn9/V633L4+xv4dIrrZ6gFHbJ9o+pXh9kqSnS7pRi3W8\\n\",\n       \"uJjsYknLm9OVkr7V9nG2z5P0AC06xs1Vq/VMKd0h6e9sP9a2Jb24NM9sFBeepedpsc+lLVnvIsZL\\n\",\n       \"JX08pfTW0kdbvb/r1nsH9vfpy6YQ2/eQ9DRJ12n793flei8TjMLW7e+U0g+nlM5JKZ0n6Vsl/X5K\\n\",\n       \"6cXa8v1ds94vGeX8Xtfbves/LZrL/kKLzmBvGGIZU/3Toirx+uLfR5frp8WDjt8v6SZJV0k6tTTP\\n\",\n       \"Dxfb4pOSnjH1OrRY1/dI+mstHnh6q6SXdllPLb4R31h89vNTr1eH9f5OLTow3iDpI8VJtW+b1luL\\n\",\n       \"X0x9qTiuryv+XbTt+7tmvZ+5A/v74ZL+vFjvGyT9QPH+tu/vuvXe6v29sg2epLt/NbfV+3tlvfeX\\n\",\n       \"1vtdQ+9vBhEFAAAINMggogAAALuK5AoAACAQyRUAAEAgkisAAIBAJFcAAACBSK4AAAACkVwBAAAE\\n\",\n       \"IrkCAAAIRHIFAAAQiOQKAAAgEMkVAABAIJIrAACAQCRXAAAAgUiuAAAAApFcAQAABCK5AgAACERy\\n\",\n       \"BQAAEIjkCgAAIBDJFQAAQCCSKwAAgEAkVwAAAIFIrgAAAAKRXAEAAAQiuQIAAAhEcgUAABCI5AoA\\n\",\n       \"ACAQyRUAAEAgkisAAIBAJFcAAACBSK4AAAACkVwBAAAEIrkCAAAIRHIFAAAQaG1yZfsc2x+w/THb\\n\",\n       \"H7X9muL9A7Zvs31d8e+iccIFAADIm1NK9R/aZ0g6I6V0ve2TJf2ZpOdKeoGkz6WU3jJOmAAAAPNw\\n\",\n       \"zLoPU0p3SLqjeP1525+QdFbxsQeODQAAYHYa97myfa6kCyT9SfHWq21/xPaltk8dIDYAAIDZWdss\\n\",\n       \"eGiiRZPgnqSfSCldYfu+kj5TfPzjks5MKb1sZZ7NBQMAAGQipRTSKrcxubJ9rKT3SfrtlNJbKz4/\\n\",\n       \"V9JvppQevvJ+klbL/klJB96Y0hfe0CdobB/bB1JKB6aOA/njWEEbHC9oynaKSq42/VrQki6V9PFy\\n\",\n       \"YmX7zNJkz5N0Y0QwAAAAc7e2Q7ukCyW9SNINtq8r3vthSS+0fb4WVVM3S3rFcCECAADMx6ZfC/6R\\n\",\n       \"qmu3fnuYcLDD9qYOALOxN3UAmJW9qQPA7mGEdmQhpbQ3dQyYB44VtMHxgimQXAEAAAQiuQIAAAhE\\n\",\n       \"cgUAABCI5AoAACAQyRUAAEAgkisAAIBAJFcAAACBSK4AAAACkVwBAAAEIrkCAAAIRHIFAAAQiOQK\\n\",\n       
\"AAAgEMkVAABAIJIrAACAQCRXAAAAgUiuAAAAApFcAQAABCK5AgAACERyBQAAEIjkCgAAIBDJFQAA\\n\",\n       \"QCCSKwAAgEAkVwAAAIFIrgAAAAKRXAEAAAQiuQIAAAhEcgUAABCI5AoAACAQyRUAAEAgkisAAIBA\\n\",\n       \"JFcAAACBSK4AAAACkVwBAAAEIrkCAAAIRHIFAAAQiOQKAAAgEMkVAABAIJIrAACAQCRXAAAAgUiu\\n\",\n       \"AAAAAq1NrmyfY/sDtj9m+6O2X1O8f5rtq23fZPsq26eOEy4AAEDeNtVcfUHS61JKD5P0OEmvsv0Q\\n\",\n       \"Sa+XdHVK6YGSfq/4GwAAYOetTa5SSneklK4vXn9e0icknSXp2ZIuLya7XNJzhwwSAABgLhr3ubJ9\\n\",\n       \"rqQLJH1I0r6U0sHio4OS9oVHBgAAMEONkivbJ0t6r6TXppQ+V/4spZQkpQFiAwAAmJ1jNk1g+1gt\\n\",\n       \"Eqt3pZSuKN4+aPuMlNIdts+UdGf13AdKr/f3ChQAACCK7f0aKDlZm1zZtqRLJX08pfTW0kdXSrpY\\n\",\n       \"0puK/6+omF2HJ1eS9MGOYQIAAMRJKe1J2lv+bfuSqLI31VxdKOlFkm6wfV3x3hskvVHSr9p+maRb\\n\",\n       \"JL0gKiAAAIA5W5tcpZT+SPX9sp4aHw4AAMC8MUI7AABAIJIrAACAQCRXAAAAgUiuAAAAApFcAQAA\\n\",\n       \"BCK5AgAACLRxhPZdYrvyMT4pJY8dCwAAmCeSqyOs5lfkVQAAoDmaBQEAAAKRXAEAAAQiuQIAAAhE\\n\",\n       \"cgUAABCI5AoAACAQyRUAAEAgkisAAIBAJFcAAACBtmYQ0brR1SVGWAcAAOPZmuRqoSq/Iq8CAADj\\n\",\n       \"oVkQAAAgEMkVAABAIJIrAACAQCRXAAAAgUiuAAAAApFcAQAABCK5AgAACERyBQAAECjrQUSjRl2v\\n\",\n       \"KodR2wEAwBCyTq4WIkZdXy2DvAoAAAyDZkEAAIBAJFcAAACBSK4AAAACkVwBAAAEIrkCAAAIRHIF\\n\",\n       \"AAAQiOQKAAAgEMkVAABAoGwGEV03GnuO6uJl5HcAAHZbNsnVwtxGUp9bvAAAYGg0CwIAAAQiuQIA\\n\",\n       \"AAhEcgUAABCI5AoAACDQxuTK9mW2D9q+sfTeAdu32b6u+HfRsGECAADMQ5Oaq3dKWk2ekqS3pJQu\\n\",\n       \"KP79TnxoAAAA87MxuUopXSPproqPGHcAAABgRZ8+V6+2/RHbl9o+NSwiAACAGeuaXL1d0nmSzpd0\\n\",\n       \"u6Q3h0UEAAAwY51GaE8p3bl8bfsdkn6zesoDpdf7uywKAAAgnO39Gig56ZRc2T4zpXR78efzJN1Y\\n\",\n       \"PeWBlb8/2GVxAAAAoVJKe5L2ln/bviSq7I3Jle33SHqSpNNt3yrpEkn7bZ+vxa8Gb5b0iqiAAAAA\\n\",\n       \"5mxjcpVSemHF25cNEAsAAMDsMUI7AABAIJIrAACAQCRXAAAAgUiuAAAAApFcAQAABCK5AgAACNRp\\n\",\n       \"EFHUs52q3k8pucl0VdMCAID5ILkKV5Uz1eVKbaYFAABzQLMgAABAIJIrAACAQCRXAAAAgUiuAAAA\\n\",\n       \"ApFcAQAABCK5AgAACERyBQAAEIjkCgAAINAEydUXX287rf4bP475q9qObf9NvQ4AAGybiUZoZ2Ty\\n\",\n       
\"OKvb0hXv1b3PNgcAIBrNggAAAIFIrgAAAAKRXAEAAAQiuQIAAAhEcgUAABCI5AoAACAQyRUAAEAg\\n\",\n       \"kisAAIBAEw0iOj1GJ1+o2w4pJUYYBQCgg51NrhglfontAABAJJoFAQAAApFcAQAABCK5AgAACERy\\n\",\n       \"BQAAEIjkCgAAIBDJFQAAQCCSKwAAgEAkVwAAAIF2eBBRRGg70v0QI7+vi4GR5gEAYyO5QoDV3MYV\\n\",\n       \"74zGqzoAAAiBSURBVC3fHyuGoZcHAEA1mgUBAAACkVwBAAAEIrkCAAAIRHIFAAAQaGNyZfsy2wdt\\n\",\n       \"31h67zTbV9u+yfZVtk8dNkwAAIB5aFJz9U5JF62893pJV6eUHijp94q/AQAAdt7G5CqldI2ku1be\\n\",\n       \"fraky4vXl0t6bnBcAAAAs9S1z9W+lNLB4vVBSfuC4gEAAJi13oOIppRS/QjZB0qv9/dd1GHajgw+\\n\",\n       \"l2XVLW/qkcbH3gYAAAzJ9n5FJyeFrsnVQdtnpJTusH2mpDurJzuw8vcHOy6uStWo4EOZerTxXEYa\\n\",\n       \"ZxR0AMB2SCntSdpb/m37kqiyuzYLXinp4uL1xZKuiAkHAABg3poMxfAeSX8s6UG2b7X9UklvlPQ0\\n\",\n       \"2zdJekrxNwAAwM7b2CyYUnphzUdPDY4FAABg9hihHQAAIBDJFQAAQCCSKwAAgEAkVwAAAIFIrgAA\\n\",\n       \"AAL1HqEdGMK6EeGnHq0eAIB1SK6QMUaEBwDMD82CAAAAgUiuAAAAApFcAQAABCK5AgAACERyBQAA\\n\",\n       \"EIjkCgAAIBDJFQAAQCCSKwAAgEAkVwAAAIEYoX0k6x7nkqO5xVunaj2aPj6HR/AAALoguRrN6n06\\n\",\n       \"93vz3OKt03c9eAQPAKAdmgUBAAACkVwBAAAEIrkCAAAIRHIFAAAQiOQKAAAgEMkVAABAIJIrAACA\\n\",\n       \"QCRXAAAAgUiuAAAAApFcAQAABCK5AgAACERyBQAAEIjkCgAAIBDJFQAAQCCSKwAAgEAkVwAAAIFI\\n\",\n       \"rgAAAAKRXAEAAAQ6ZuoA0IztNHUMEbZlPZqqW9+UkseOBQAwDpKr2ai6R8/x/ly3Hqvvz3Hd6mzz\\n\",\n       \"ugEAVtEsCAAAEIjkCgAAIBDJFQAAQKBefa5s3yLp7yT9q6QvpJQeExEUAADAXPXt0J4k7U8pfTYi\\n\",\n       \"GAAAgLmLaBbkp08AAACFvslVkvR+2x+2/fKIgAAAAOasb7PghSml223fR9LVtj+ZUromIjAAAIA5\\n\",\n       \"6pVcpZRuL/7/jO3fkPQYSaXk6kBp6v19FgUc0neU9yFGTW8bU9MY1pXLKO95Y3R+IG+292ug5MQp\\n\",\n       \"dbtP2T5R0tEppc/ZPknSVZJ+LKV0VfF5OnJk6p+U9O/UbpRupt3uafOJrTqx6T/tWDEgL9X7jv0G\\n\",\n       \"5Mp2ijo/+9Rc7ZP0G7aX5bx7mVgBAADsqs7JVUrpZknnB8YCAAAwe4zQDgAAEIjkCgAAIBDJFQAA\\n\",\n       \"QCCSKwAAgEAkVwAAAIFIrgAAAAL1ffwNgC3HKPEA0A7JFYAG6kafBwCsolkQAAAgEMkVAABAIJIr\\n\",\n       \"AACAQCRXAAAAgUiuAAAAApFcAQAABCK5AgAACERyBQAAEIhBRIHCupHIc1QVb9MR0xl1HQCGQ3IF\\n\",\n       
\"HLKab+SeY/SNl1HXAWAINAsCAAAEIrkCAAAIRHIFAAAQiOQKAAAgEMkVAABAIJIrAACAQCRXAAAA\\n\",\n       \"gUiuAAAAApFcAQAABGKEdqCDoR6VM0S5bcvsE0OXeXN83E7Ojweqi23quDCNnI/VXUZyBXQy1KNy\\n\",\n       \"cih3qMfqzO1xOznHO7dHNWFYOR+ru4lmQQAAgEAkVwAAAIFIrgAAAAKRXAEAAAQiuQIAAAhEcgUA\\n\",\n       \"ABCI5AoAACAQyRUAAEAgBhEF0FnEiPJVZVSNLD3UqPh1yxvKUCOsR2yfPjG0GSm87ajiY45KP9SI\\n\",\n       \"50NuH9SbaluSXAHoIWKk8L6jxzcdET6XUeKHiiGH7dCm3LYxjDkq/Ry3D+qNvy1pFgQAAAhEcgUA\\n\",\n       \"ABCI5AoAACBQ5+TK9kW2P2n7L23/UGRQAAAAc9UpubJ9tKS3SbpI0kMlvdD2QyIDw67ZmzoAzMbe\\n\",\n       \"1AFgRmzvnzoG7J6uNVePkfSplNItKaUvSPoVSc+JCwu7Z2/qADAbe1MHgHnZP3UA2D1dk6uzJN1a\\n\",\n       \"+vu24j0AAICd1nWcq4aD1T3lbw//++YTJB3fcZkAAADZc0rtB/W1/ThJB1JKFxV/v0HSl1JKbypN\\n\",\n       \"M9hoygAAANGiRm3vmlwdI+kvJH29pL+WdK2kF6aUPhERFAAAwFx1ahZMKX3R9vdK+l1JR0u6lMQK\\n\",\n       \"AACgY80VAAAAqg0yQjsDjKIN27fYvsH2dbavnToe5MP2ZbYP2r6x9N5ptq+2fZPtq2yfOmWMyEfN\\n\",\n       \"8XLA9m3F9eU62xdNGSPyYPsc2x+w/THbH7X9muL9kOtLeHLFAKPoIEnan1K6IKX0mKmDQVbeqcW1\\n\",\n       \"pOz1kq5OKT1Q0u8VfwNS9fGSJL2luL5ckFL6nQniQn6+IOl1KaWHSXqcpFcVuUrI9WWImisGGEUX\\n\",\n       \"Ib/QwHZJKV0j6a6Vt58t6fLi9eWSnjtqUMhWzfEicX3BipTSHSml64vXn5f0CS3G6wy5vgyRXDHA\\n\",\n       \"KNpKkt5v+8O2Xz51MMjevpTSweL1QUn7pgwGs/Bq2x+xfSnNyFhl+1xJF0j6kIKuL0MkV/SQR1sX\\n\",\n       \"ppQukPRMLapmnzh1QJiHtPhFDtccrPN2SedJOl/S7ZLePG04yIntkyW9V9JrU0qfK3/W5/oyRHL1\\n\",\n       \"aUnnlP4+R4vaK6BSSun24v/PSPoNLZqWgToHbZ8hSbbPlHTnxPEgYymlO1NB0jvE9QUF28dqkVi9\\n\",\n       \"K6V0RfF2yPVliOTqw5IeYPtc28dJ+hZJVw6wHGwB2yfaPqV4fZKkp0u6cf1c2HFXSrq4eH2xpCvW\\n\",\n       \"TIsdV9wgl54nri+QZNuSLpX08ZTSW0sfhVxfBhnnyvYzJb1Vdw8w+tPhC8FWsH2eFrVV0mJQ23dz\\n\",\n       \"vGDJ9nskPUnS6Vr0f/hRSf+vpF+V9BWSbpH0gpTS30wVI/JRcbxcImm/Fk2CSdLNkl5R6lODHWX7\\n\",\n       \"CZL+UNINurvp7w1aPHGm9/WFQUQBAAACDTKIKAAAwK4iuQIAAAhEcgUAABCI5AoAACAQyRUAAEAg\\n\",\n       \"kisAAIBAJFcAAACBSK4AAAAC/f9A40wIgLpJlQAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 
0x7ffb01953f90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['fc7'].data[0]\\n\",\n    \"plt.subplot(2, 1, 1)\\n\",\n    \"plt.plot(feat.flat)\\n\",\n    \"plt.subplot(2, 1, 2)\\n\",\n    \"_ = plt.hist(feat.flat[feat.flat > 0], bins=100)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The final probability output, `prob`\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 38,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[<matplotlib.lines.Line2D at 0x7ffb01f12a50>]\"\n      ]\n     },\n     \"execution_count\": 38,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"<base64 PNG data stripped>\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7ffb019b9dd0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = net.blobs['prob'].data[0]\\n\",\n    \"plt.plot(feat.flat)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's see the top 5 predicted labels.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 39,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"['n02123045 tabby, tabby cat' 'n02123159 tiger cat'\\n\",\n  
    \" 'n02124075 Egyptian cat' 'n02119022 red fox, Vulpes vulpes'\\n\",\n      \" 'n02127052 lynx, catamount']\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# load labels\\n\",\n    \"imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'\\n\",\n    \"try:\\n\",\n    \"    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\\\\t')\\n\",\n    \"except:\\n\",\n    \"    !../data/ilsvrc12/get_ilsvrc_aux.sh\\n\",\n    \"    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\\\\t')\\n\",\n    \"\\n\",\n    \"# sort top k predictions from softmax output\\n\",\n    \"top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]\\n\",\n    \"print labels[top_k]\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Instant recognition with a pre-trained model and a tour of the net interface for visualizing features and parameters layer-by-layer.\",\n  \"example_name\": \"Image Classification and Filter Visualization\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 1\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/01-learning-lenet.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Python solving with LeNet\\n\",\n    \"\\n\",\n    \"In this example, we'll explore learning with Caffe in Python, using the fully-exposed `Solver` interface.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"os.chdir('..')\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import sys\\n\",\n    \"sys.path.insert(0, './python')\\n\",\n    \"import caffe\\n\",\n    \"\\n\",\n    \"from pylab import *\\n\",\n    \"%matplotlib inline\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Downloading...\\n\",\n      \"--2015-06-30 14:41:56--  http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\\n\",\n      \"Resolving yann.lecun.com... 128.122.47.89\\n\",\n      \"Connecting to yann.lecun.com|128.122.47.89|:80... connected.\\n\",\n      \"HTTP request sent, awaiting response... 
200 OK\\n\",\n      \"Length: 9912422 (9.5M) [application/x-gzip]\\n\",\n      \"Saving to: 'train-images-idx3-ubyte.gz'\\n\",\n      \"\\n\",\n      \"train-images-idx3-u 100%[=====================>]   9.45M   146KB/s   in 57s    \\n\",\n      \"\\n\",\n      \"2015-06-30 14:42:53 (171 KB/s) - 'train-images-idx3-ubyte.gz' saved [9912422/9912422]\\n\",\n      \"\\n\",\n      \"--2015-06-30 14:42:53--  http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\\n\",\n      \"Resolving yann.lecun.com... 128.122.47.89\\n\",\n      \"Connecting to yann.lecun.com|128.122.47.89|:80... connected.\\n\",\n      \"HTTP request sent, awaiting response... 200 OK\\n\",\n      \"Length: 28881 (28K) [application/x-gzip]\\n\",\n      \"Saving to: 'train-labels-idx1-ubyte.gz'\\n\",\n      \"\\n\",\n      \"train-labels-idx1-u 100%[=====================>]  28.20K   107KB/s   in 0.3s   \\n\",\n      \"\\n\",\n      \"2015-06-30 14:42:53 (107 KB/s) - 'train-labels-idx1-ubyte.gz' saved [28881/28881]\\n\",\n      \"\\n\",\n      \"--2015-06-30 14:42:53--  http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\\n\",\n      \"Resolving yann.lecun.com... 128.122.47.89\\n\",\n      \"Connecting to yann.lecun.com|128.122.47.89|:80... connected.\\n\",\n      \"HTTP request sent, awaiting response... 200 OK\\n\",\n      \"Length: 1648877 (1.6M) [application/x-gzip]\\n\",\n      \"Saving to: 't10k-images-idx3-ubyte.gz'\\n\",\n      \"\\n\",\n      \"t10k-images-idx3-ub 100%[=====================>]   1.57M   205KB/s   in 8.2s   \\n\",\n      \"\\n\",\n      \"2015-06-30 14:43:02 (197 KB/s) - 't10k-images-idx3-ubyte.gz' saved [1648877/1648877]\\n\",\n      \"\\n\",\n      \"--2015-06-30 14:43:02--  http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\\n\",\n      \"Resolving yann.lecun.com... 128.122.47.89\\n\",\n      \"Connecting to yann.lecun.com|128.122.47.89|:80... connected.\\n\",\n      \"HTTP request sent, awaiting response... 
200 OK\\n\",\n      \"Length: 4542 (4.4K) [application/x-gzip]\\n\",\n      \"Saving to: 't10k-labels-idx1-ubyte.gz'\\n\",\n      \"\\n\",\n      \"t10k-labels-idx1-ub 100%[=====================>]   4.44K  26.9KB/s   in 0.2s   \\n\",\n      \"\\n\",\n      \"2015-06-30 14:43:02 (26.9 KB/s) - 't10k-labels-idx1-ubyte.gz' saved [4542/4542]\\n\",\n      \"\\n\",\n      \"Unzipping...\\n\",\n      \"Done.\\n\",\n      \"Creating lmdb...\\n\",\n      \"Done.\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Download and prepare data\\n\",\n    \"!data/mnist/get_mnist.sh\\n\",\n    \"!examples/mnist/create_mnist.sh\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"We need two external files to help out:\\n\",\n    \"* the net prototxt, defining the architecture and pointing to the train/test data\\n\",\n    \"* the solver prototxt, defining the learning parameters\\n\",\n    \"\\n\",\n    \"We start with the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.\\n\",\n    \"\\n\",\n    \"This network expects to read from pregenerated LMDBs, but reading directly from `ndarray`s is also possible using `MemoryDataLayer`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from caffe import layers as L\\n\",\n    \"from caffe import params as P\\n\",\n    \"\\n\",\n    \"def lenet(lmdb, batch_size):\\n\",\n    \"    # our version of LeNet: a series of linear and simple nonlinear transformations\\n\",\n    \"    n = caffe.NetSpec()\\n\",\n    \"    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,\\n\",\n    \"                             transform_param=dict(scale=1./255), ntop=2)\\n\",\n    \"    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))\\n\",\n    \"  
  n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)\\n\",\n    \"    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))\\n\",\n    \"    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)\\n\",\n    \"    n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))\\n\",\n    \"    n.relu1 = L.ReLU(n.ip1, in_place=True)\\n\",\n    \"    n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))\\n\",\n    \"    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\\n\",\n    \"    return n.to_proto()\\n\",\n    \"    \\n\",\n    \"with open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))\\n\",\n    \"    \\n\",\n    \"with open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100)))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The net has been written to disk in more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. 
Let's take a look at the train net.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"layer {\\r\\n\",\n      \"  name: \\\"data\\\"\\r\\n\",\n      \"  type: \\\"Data\\\"\\r\\n\",\n      \"  top: \\\"data\\\"\\r\\n\",\n      \"  top: \\\"label\\\"\\r\\n\",\n      \"  transform_param {\\r\\n\",\n      \"    scale: 0.00392156862745\\r\\n\",\n      \"  }\\r\\n\",\n      \"  data_param {\\r\\n\",\n      \"    source: \\\"examples/mnist/mnist_train_lmdb\\\"\\r\\n\",\n      \"    batch_size: 64\\r\\n\",\n      \"    backend: LMDB\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"conv1\\\"\\r\\n\",\n      \"  type: \\\"Convolution\\\"\\r\\n\",\n      \"  bottom: \\\"data\\\"\\r\\n\",\n      \"  top: \\\"conv1\\\"\\r\\n\",\n      \"  convolution_param {\\r\\n\",\n      \"    num_output: 20\\r\\n\",\n      \"    kernel_size: 5\\r\\n\",\n      \"    weight_filler {\\r\\n\",\n      \"      type: \\\"xavier\\\"\\r\\n\",\n      \"    }\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"pool1\\\"\\r\\n\",\n      \"  type: \\\"Pooling\\\"\\r\\n\",\n      \"  bottom: \\\"conv1\\\"\\r\\n\",\n      \"  top: \\\"pool1\\\"\\r\\n\",\n      \"  pooling_param {\\r\\n\",\n      \"    pool: MAX\\r\\n\",\n      \"    kernel_size: 2\\r\\n\",\n      \"    stride: 2\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"conv2\\\"\\r\\n\",\n      \"  type: \\\"Convolution\\\"\\r\\n\",\n      \"  bottom: \\\"pool1\\\"\\r\\n\",\n      \"  top: \\\"conv2\\\"\\r\\n\",\n      \"  convolution_param {\\r\\n\",\n      \"    num_output: 50\\r\\n\",\n      \"    kernel_size: 5\\r\\n\",\n      \"    weight_filler {\\r\\n\",\n      \"      type: \\\"xavier\\\"\\r\\n\",\n      \" 
   }\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"pool2\\\"\\r\\n\",\n      \"  type: \\\"Pooling\\\"\\r\\n\",\n      \"  bottom: \\\"conv2\\\"\\r\\n\",\n      \"  top: \\\"pool2\\\"\\r\\n\",\n      \"  pooling_param {\\r\\n\",\n      \"    pool: MAX\\r\\n\",\n      \"    kernel_size: 2\\r\\n\",\n      \"    stride: 2\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"ip1\\\"\\r\\n\",\n      \"  type: \\\"InnerProduct\\\"\\r\\n\",\n      \"  bottom: \\\"pool2\\\"\\r\\n\",\n      \"  top: \\\"ip1\\\"\\r\\n\",\n      \"  inner_product_param {\\r\\n\",\n      \"    num_output: 500\\r\\n\",\n      \"    weight_filler {\\r\\n\",\n      \"      type: \\\"xavier\\\"\\r\\n\",\n      \"    }\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"relu1\\\"\\r\\n\",\n      \"  type: \\\"ReLU\\\"\\r\\n\",\n      \"  bottom: \\\"ip1\\\"\\r\\n\",\n      \"  top: \\\"ip1\\\"\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"ip2\\\"\\r\\n\",\n      \"  type: \\\"InnerProduct\\\"\\r\\n\",\n      \"  bottom: \\\"ip1\\\"\\r\\n\",\n      \"  top: \\\"ip2\\\"\\r\\n\",\n      \"  inner_product_param {\\r\\n\",\n      \"    num_output: 10\\r\\n\",\n      \"    weight_filler {\\r\\n\",\n      \"      type: \\\"xavier\\\"\\r\\n\",\n      \"    }\\r\\n\",\n      \"  }\\r\\n\",\n      \"}\\r\\n\",\n      \"layer {\\r\\n\",\n      \"  name: \\\"loss\\\"\\r\\n\",\n      \"  type: \\\"SoftmaxWithLoss\\\"\\r\\n\",\n      \"  bottom: \\\"ip2\\\"\\r\\n\",\n      \"  bottom: \\\"label\\\"\\r\\n\",\n      \"  top: \\\"loss\\\"\\r\\n\",\n      \"}\\r\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!cat examples/mnist/lenet_auto_train.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Now let's see the learning parameters, which are also written as a `prototxt` file. 
We're using SGD with momentum, weight decay, and a specific learning rate schedule.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"# The train/test net protocol buffer definition\\r\\n\",\n      \"train_net: \\\"examples/mnist/lenet_auto_train.prototxt\\\"\\r\\n\",\n      \"test_net: \\\"examples/mnist/lenet_auto_test.prototxt\\\"\\r\\n\",\n      \"# test_iter specifies how many forward passes the test should carry out.\\r\\n\",\n      \"# In the case of MNIST, we have test batch size 100 and 100 test iterations,\\r\\n\",\n      \"# covering the full 10,000 testing images.\\r\\n\",\n      \"test_iter: 100\\r\\n\",\n      \"# Carry out testing every 500 training iterations.\\r\\n\",\n      \"test_interval: 500\\r\\n\",\n      \"# The base learning rate, momentum and the weight decay of the network.\\r\\n\",\n      \"base_lr: 0.01\\r\\n\",\n      \"momentum: 0.9\\r\\n\",\n      \"weight_decay: 0.0005\\r\\n\",\n      \"# The learning rate policy\\r\\n\",\n      \"lr_policy: \\\"inv\\\"\\r\\n\",\n      \"gamma: 0.0001\\r\\n\",\n      \"power: 0.75\\r\\n\",\n      \"# Display every 100 iterations\\r\\n\",\n      \"display: 100\\r\\n\",\n      \"# The maximum number of iterations\\r\\n\",\n      \"max_iter: 10000\\r\\n\",\n      \"# snapshot intermediate results\\r\\n\",\n      \"snapshot: 5000\\r\\n\",\n      \"snapshot_prefix: \\\"examples/mnist/lenet\\\"\\r\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!cat examples/mnist/lenet_auto_solver.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's pick a device and load the solver. 
We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"caffe.set_device(0)\\n\",\n    \"caffe.set_mode_gpu()\\n\",\n    \"solver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"metadata\": {\n    \"collapsed\": false,\n    \"scrolled\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[('data', (64, 1, 28, 28)),\\n\",\n       \" ('label', (64,)),\\n\",\n       \" ('conv1', (64, 20, 24, 24)),\\n\",\n       \" ('pool1', (64, 20, 12, 12)),\\n\",\n       \" ('conv2', (64, 50, 8, 8)),\\n\",\n       \" ('pool2', (64, 50, 4, 4)),\\n\",\n       \" ('ip1', (64, 500)),\\n\",\n       \" ('ip2', (64, 10)),\\n\",\n       \" ('loss', ())]\"\n      ]\n     },\n     \"execution_count\": 8,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"# each output is (batch size, feature dim, spatial dim)\\n\",\n    \"[(k, v.data.shape) for k, v in solver.net.blobs.items()]\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[('conv1', (20, 1, 5, 5)),\\n\",\n       \" ('conv2', (50, 20, 5, 5)),\\n\",\n       \" ('ip1', (500, 800)),\\n\",\n       \" ('ip2', (10, 500))]\"\n      ]\n     },\n     \"execution_count\": 9,\n     \"metadata\": {},\n   
  \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"# just print the weight sizes (not biases)\\n\",\n    \"[(k, v[0].data.shape) for k, v in solver.net.params.items()]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"{'loss': array(2.301163673400879, dtype=float32)}\"\n      ]\n     },\n     \"execution_count\": 10,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"solver.net.forward()  # train net\\n\",\n    \"solver.test_nets[0].forward()  # test net (there can be more than one)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"[ 5.  0.  4.  1.  9.  2.  1.  
3.]\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAWwAAABKCAYAAACfHW4mAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJztvXlQW1me5/s5EhJaECAJhEBgdrMbDNjgtNNOp7d02pk1\\n\",\n       \"mVlZW1dWd0XH9ERMzxIzEzE1M3/M1HvzIt68iZjpF9HRPdFvpqeqZ6ajJyozy5VbpZ1e0k4n6R0w\\n\",\n       \"JBizrwIJxCYJgQTc9wfcW+D0KiOwK+8ngkBcJN2jo3N/95zf+f5+PyFJEioqKioqzz6arW6AioqK\\n\",\n       \"isrjoRpsFRUVlecE1WCrqKioPCeoBltFRUXlOUE12CoqKirPCarBVlFRUXlOiNpgCyFeEUJ0CCG6\\n\",\n       \"hBA/28hGqaioqKh8ExGNDlsIoQXuAoeBEeAG8ENJku5sbPNUVFRUVGSinWHvBrolSeqXJCkC/G/g\\n\",\n       \"OxvXLBUVFRWVe4nWYLuAoTV/D68eU1FRUVGJEXFRvu6RfhQhhBrzrqKiohIFkiSJ+x2PdoY9AmSt\\n\",\n       \"+TuLlVm2ioqKikqMiNZg3wQKhRA5Qgg98H3gw41rloqKiorKvUTlEpEkaVEI8Y+AM4AW+GtVIaKi\\n\",\n       \"oqISW6KS9SkvFqIfmAWWgIgkSbvX/O9b7cPWaDRotVr0ej06nQ6tVkskEiEcDhOJRFheXkZNbaui\\n\",\n       \"8js0Gg3x8fHEx8ej1WpZXl5mfn6ehYUFlpeXt7p5m8qDfNjRbjoq7wu8JEnS5FO+z+8dVquVnJwc\\n\",\n       \"amtrqaqqwuFw0NLSwq1bt2hpaWFsbIxwOLzVzVRReWZITk7m2LFj7Nmzh/z8fLxeL2fOnOHs2bPM\\n\",\n       \"zMywuLi41U3ccp7WYAPc906wUQgh0Gq1aLVa5Vh8fDwulwuj0UhcXBwmkwm3243P5yM/P5+MjAyS\\n\",\n       \"k5NZXl7G4/EwMDBAf38/8/PzsWwqAFqtloSEBGpqanjppZeoqqqitLQUu91OdnY227ZtIz09nVOn\\n\",\n       \"TjExMRHz9jwNcXFxWK1WCgoKsNvtNDU1MTExwcLCwlY3DYPBQHZ2Nrm5uTidToQQdHV10dHRwfT0\\n\",\n       \"tHpxPwKNRkNcXBxarZakpCSSkpLQ6/WMj4/j9Xo3fUar1+txOBwcOHCAgwcPkpubi8/nw+Px0NjY\\n\",\n       \"SDAYfCa+0/j4eAwGA0IIEhISSE1NZfv27RiNxge+JhKJMDExQVdXF/39/U/Vtxsxwz4nhFgC/kqS\\n\",\n       \"pP8a7RsJ8Tu7r9FoEEIoxtpoNGIwGJT/22w2jh49isPhwGw2k56ezrlz52hqauLHP/4xL7/8MiUl\\n\",\n       \"JYTDYb744gtOnTrF+++/H3ODLYQgPj6ebdu28dprr/GTn/wEo9GIRrOyt1tZWUlxcTElJSVcvnwZ\\n\",\n       \"n8/31G4R+b2BDb/IDAYDBQUFvPPOO1RVVfHzn/+cW7dubbnB1mg0JCcnc/jwYb773e+yf/9+NBoN\\n\",\n       \"v/jFL/jLv/xL2tvbn4mL+1lDvqY0Gg0GgwGj0YjRaKSoqIiioiISExO5fv06DQ0Nm+6GMJlMZGRk\\n\",\n       
\"UF1djcvlQqvVkpqaSnp6Oqmpqbjd7k1ry72s7Ter1UpqaiparZbs7Gzq6ur46U9/Slpa2gOvZb/f\\n\",\n       \"z61bt/jlL3/J6OjoU/Xt0xrsvZIkjQohUoGzQogOSZIuP+mbaLVaEhMT0ev1GAwGXC4XLpcLh8OB\\n\",\n       \"wWAgNzeX7Oxs5fk6nY60tDTi4+MRQhCJRFhcXKS4uJgDBw7gdDrx+/14PB46Ozvp7e3dFCNjs9ko\\n\",\n       \"Li7mhz/8Ifv371fuxIuLi4TDYZaXl5XPKs8gJicno/Znx8fHk5mZSWJiIktLS3R1dREKhTbs8+h0\\n\",\n       \"OrKyVtSbQ0NDRCKRDXvvpyE7O5v6+npOnDjB9u3bkSRJ3RN4DEwmE+np6ezevZv8/HxcLhdpaWlY\\n\",\n       \"rVYsFgtxcXHk5eVhtVo5d+4cU1NTm9a2hYUFfD4f3d3dOBwOZdw9CyQmJpKenk55eTk7duygoKAA\\n\",\n       \"jUaD3W7H5XKRlJSEJEkPHH8Gg4HS0lJOnDgBwPnz5xkfH4/qenoqgy1J0ujq73EhxClWQtafyGAb\\n\",\n       \"DAYcDgd1dXXY7XZlxiwb7Pj4eLKyssjMzLzv6/1+P7dv3yYcDmMwGPB6vYyPjzM7O8vo6CjXr1+P\\n\",\n       \"qcGWl5Zms5nKykoOHjzIoUOHyMrKIi4uDkmSCIVCTExMMDw8jMvlUlYIRqOR1tZWBgcHo2pffHy8\\n\",\n       \"4gIKh8MMDw9vqMHW6/U4nU5MJhORSARJktathLaK1NRUSkpKKCkpwW63b1m7ZPeX3W7HZrORmJiI\\n\",\n       \"2WwmPj4eAI/Hw+joKOFwGKPRiE6nU8bmRn5PDyM+Ph6LxYLT6SQnJ4fi4mLq6urIzc3F4XBgsViU\\n\",\n       \"VZrsYgyFQrS2tjI3N7dpq6lIJILf72dkZISZmZlnymBnZWWxd+9e9u7dS1lZmdI2eYUihHjoZCEu\\n\",\n       \"Lo7U1FRqa2sJh8N0dnYSCAQ212ALIUyAVpIkvxDCDBwF/o8nfZ/k5GR27tzJz372M7Zv3664PuQL\\n\",\n       \"8GEXoiRJzMzMcPr0afr7+wmHw5w7d45AIMDMzAyBQACPx8PExETMZl/yRZubm8uJEyd4++23SU1N\\n\",\n       \"RafTKc8JBAJ0d3fz4YcfcvDgQd58803+9E//lJKSEt59910++OCDqA12Xl4excXFhEIhLl26tGGf\\n\",\n       \"SwhBXFwcNpuN1NRUYGXgPQuYTCasVquiJtgq9Ho927Zto66ujrq6OkpLS8nOzlb66+zZs3z66adM\\n\",\n       \"TU2Rnp5OYmIiX375Je3t7YyMjGxKGy0WC9u3b+fVV1/lwIEDVFZWEh8fj0ajYWlpCZ/PRzAYBMDl\\n\",\n       \"cpGZmUltbS3Z2dmbul+xuLjI3Nwck5OTzM3Nbco5H5eKigrefvttqqursVgsUU8OsrOzWV5e5vTp\\n\",\n       \"07jdbmZnZ5/4PR55BQoh/jtwAvBKklSxeswGfADsEkKEWYly/J+SJH32pA0IhUJMTU0xNzenuAzu\\n\",\n       \"RzgcxuPxEAwG0Wg0uFwudDod09PT3Lhxg+7ubpaWlgAUF8Ti4iLz8/MxXSrv3LmTl19+merqakpL\\n\",\n       \"S7HZbGi12nVfanJyMi6XC5PJxOTkJH19fbhcLqxWKxkZGeuM+5NgMpmoqanBbrfT2dm5UR8JWFn5\\n\",\n       
\"OJ1O9uzZg8VioaWlBbfbjd/v39DzPAnyaqu+vp69e/eSkJCA3++nv7+fjz/+mMuXL2+a6yYzM5Md\\n\",\n       \"O3Zw4sQJSktLcblcJCcnEwqF6OnpwWg0kpubyzvvvKPMsJeWlkhISCAcDsfUYOt0Oux2Oy+88ALV\\n\",\n       \"1dWUlZUpm7OysZ6bm6Ovr49f/epXeL1e0tPT+ZM/+RNSU1OxWCxYLBb0en3M2ni/NicmJpKRkUFS\\n\",\n       \"UtKmnfdxGB8fp7e3l4qKCoQQLC8vMzc3R29vLwMDA8q1LtsZu91Oeno6GRkZymoLUPzgT7MafJwp\\n\",\n       \"0y+APwf+x5pj/wr4SJKkF1dzYVslSfq/o2lAKBRifHyc9vZ2jEYjKSkphEKhdTO7hYUFPB4Ply5d\\n\",\n       \"wuPxoNVqKSoqwul04vF46Ovro7+/P5rTR41Go8FkMrFjxw5OnjxJRUUFZrOZpaUl/H4/fr+fxcVF\\n\",\n       \"MjIyFN98OBymv7+f1tZW7HY7RqORxMTEdRuHT4JerycrK2vdhuxG4XK5qKmpoaioCK/Xi9vtZmpq\\n\",\n       \"ass2HI1GI06nk/3797Nnzx4KCgqIj49neHiY69ev8+tf/5re3l5CoVDMNhzlTXD5ez9x4gTHjh0j\\n\",\n       \"OTmZ+fl5hoaG6OrqYmhoiISEBEpLSyktLSUxMZG4uDimpqYYHBzEYrHEpH0yiYmJbN++nRMnTlBX\\n\",\n       \"V0dOTo6y37O8vMzCwgJ3797l0qVLfPTRR0xPT1NWVsaPf/xj4uLiMBgMGAyGTV1RyW7FtLQ0EhIS\\n\",\n       \"lOPx8fEkJiZiMpmYn5/fks3k4eFhrly5QnJyMjabjeXlZQKBAO3t7XR2dn7DJZKWlkZxcTFHjx4l\\n\",\n       \"NTVV6cdgMIjX62V6ejrq6+iR34gkSZeFEDn3HH4dOLD6+G+Ai6wY8ScmEokwOjrKhx9+iNvtJjMz\\n\",\n       \"k8HBQV588UUOHz4MwPT0NC0tLfzFX/wF3d3daDQasrKy2LdvH+np6ZvmD1yLXq8nMzNT8aXKvqxw\\n\",\n       \"OExfXx8tLS3Mzs7y/e9/H5PJhM/nY3BwkGAwSDgcZteuXcru89PccWPlu62pqeHtt9/GarXS29vL\\n\",\n       \"xMTEliov7HY71dXVvPPOO5SVlSkbup2dnZw/f57BwUH8fn9MV1NCCEVOeOTIEb73ve9hsVhwu900\\n\",\n       \"Nzfz4Ycf0tjYyPDwMDqdjrfeeos/+IM/oLy8nISEBCKRCENDQzHfzHO5XNTV1XHgwAEyMzPXrfgW\\n\",\n       \"Fxfx+Xx8+umn/M3f/A1utxuz2RzT9jwOcXFxGI1GrFYrJpNJOW6z2cjJyWFoaIj5+fmo3AhPS09P\\n\",\n       \"D2NjYzQ0NKDT6ZAkicXFRaanp+/bHqPRSGlpKYWFhVgsFuUGNDIyQlNTE11dXVGPgWhvoWmSJHlW\\n\",\n       \"H3uAtCjfB0mSmJub4/bt24yPj5OUlITP52NxcRG73U5RURHd3d1cunRJ2ZCQlRcLCwskJCQwPT0d\\n\",\n       \"7emjIjExkcLCQn7wgx9w4MABTCYTQgi8Xi9tbW28//77TE1NYbVa+eyzzxgfH6epqYmWlhYWFhbQ\\n\",\n       \"aDSEQiEcDofi9/T7/czMzDxRG9LT0xW9+UZjMplISkpCq9Xi9Xrp7u7eUjnftm3b2L17N9nZ2Vgs\\n\",\n       
\"FhYXFxkdHaWxsZEbN24QCARirhJxuVxUVVVx8uRJ9uzZQ3x8PAMDA5w9e5bTp0/T0dGhaJhzcnIo\\n\",\n       \"KCggOzsbvV5PKBRibGyMmzdvMjQ09OiTPQWZmZlUVlaSlJREXFycsurz+XwMDAxw8eJFLl26pARv\\n\",\n       \"ORwOUlJStnSPYn5+Hq/XS3NzMw6HA6vVCkBxcTGvvvoq8/PzhMPhLTHYi4uL+P1+wuGwshpeXl4m\\n\",\n       \"Eonc1/2WkpKCy+UiISFhXZ8Gg0F8Ph9zc3NRT36e+huSJEl62jB02afn9XqJi4tjYWFBuThycnII\\n\",\n       \"BoPKBohsnGZmZp7IwG0EGo1GcUPs2rWLkydPsm3bNkUJ0tfXx5UrVzh9+jSwYmR8Ph/9/f3cuXOH\\n\",\n       \"ubk5lpaWsFqtLCwskJycTH5+Pjt27GBiYuKJPo/NZiM3N5fk5GQCgcCGfUZ5s9FkMpGQkIBGo2Fq\\n\",\n       \"aoqhoaEticyU3RB5eXns3r0bm82muBdu377N7du3Y+4O0+l0OBwOdu/ezbFjx/jOd76DXq9ncHCQ\\n\",\n       \"q1evcvr0ac6fP6+Mz5SUFGpra6moqMDpdALgdru5ffs2ra2teL3emLZXdnu43W68Xi+zs7N4vV6G\\n\",\n       \"h4fp6Ojg7NmzDA0NKZt7stJlKzdw5eCSL7/8koKCAsrLywHIyMhAo9Fw+/Zt2tratqRtkiSxtLT0\\n\",\n       \"yM3QuLg4xR1VWVmJ1Wpdtz81Pj5Od3c3c3NzUU8uojXYHiGEU5KkMSFEOrAhI3BxcZHFxUUkSSIS\\n\",\n       \"iSgbhrm5udTV1XHp0iUmJye3TG8bFxdHSkoK1dXVHDx4kJSUFMWfNTMzQ2trKzdv3mR6eppAIMDY\\n\",\n       \"2BharXadDht+tzmh1WqxWq289NJLDA0Ncffu3cdui9PppKSkBIvFsqEuIY1GQ0JCAjabTTGOoVCI\\n\",\n       \"2dlZZVN3M9FoNJjNZoqKiti1axd6vZ6FhQXGxsY4e/YsHR0dMW9DUlKSYqj379+PyWSitbWV8+fP\\n\",\n       \"895779Hd3b1uczsxMZGXX36Z4uJi5T2am5s5deoUbrc75iuVxsZGAoEA/f39zMzM0NXVxdjYGJOT\\n\",\n       \"k8zMzHxjhhcfH09CQsKWGmyA2dlZzp07R11dHa+//vqWtiUajEYjVVVVyj5LcnLyuj5ta2vjzJkz\\n\",\n       \"T7VxH63B/hD4Q+D/Wf39m6hbsIa1hri7u5vPP/+cvLw8MjMzKS8vZ/v27QQCAXw+30ac7olYG113\\n\",\n       \"+PBhdu7ciU6no7m5mStXruB2u+nr66O3t5f5+fkHLpdkZJ+irHVOTEx8ovYkJSXhdDrR6/UEAoEN\\n\",\n       \"8zHLxqayshKTyaSsZEKh0KbfKI1GIxkZGRw5coT6+nqMRiPLy8sMDAzQ0NDArVu3GBsbi2kb8vLy\\n\",\n       \"qK+v5/XXX6eyspK4uDj6+/u5dOkSH3/8Md3d3coyPS4uDrvdTnFxMTk5OSQlJREMBmltbeXSpUs0\\n\",\n       \"NTVtiutmenqajo4OQqEQoVCIyclJgsGgMi7vxWq1kpmZGbVaaaNYXl5W9nieBb3/4yAH9pWUlFBW\\n\",\n       \"VkZxcTF5eXnrxACLi4tMTk4qGvynmfg8jqzv71jZYEwRQgwB/xb4D8CvhBB/DPQD34u6BQ9gYGAA\\n\",\n       
\"SZIoLi7m2LFj5OTkUF9fz9LSkuJPDQQCioY01siG9fDhw7zwwgs4HA7FH3jq1CmGh4eVL+JRsjJ5\\n\",\n       \"V1mW+awNtnhcjEYjSUlJaDQaZmZmGB0djUrOJrtAjEajoic/fvw4FRUVaDQa+vr6njqcNloSExMp\\n\",\n       \"KirizTffpLKyUlmWtre3c/HiRbq6umLmFpO/m5KSEk6cOMHevXsxGAyMjo7y5Zdfcv78ea5du0Yk\\n\",\n       \"EkGr1WKxWEhPT6e4uJj6+nqcTieSJOF2uzl79ixXrlxhcHAwJm29l3A4zMTExDdy1ZjNZqxW6zc2\\n\",\n       \"urOyssjKykKv1xMOhwkEAgQCgWcmOdlWG28562ZCQgImk2ndtZqYmMiePXvYv38/u3btwm63K9LJ\\n\",\n       \"5eVlpqenlT2gwcFBlpaWnuqG/Tgz7BArOa/vrtFh/xwoBsaBVKAeOB11K+7D0tISHo+Hd999F6vV\\n\",\n       \"yne/+13eeustysrK6OjoYHBwkJs3b9La2ropMz/Z3yz7JcfHx/nlL3/JhQsX6OjoWDe4H3UHldu7\\n\",\n       \"Ue2emppieHg4qgtMp9ORnJxMSUkJO3fupLa2lj179pCWlqYsUVtaWgiFQptusJ1OJ+Xl5eTm5pKY\\n\",\n       \"mMjCwgLd3d189dVXfPnllxvqu78XWRFSXFzMnj17MJvNjIyM8NVXX/FXf/VXdHR0KNGfssvmJz/5\\n\",\n       \"CdXV1WRnZ2Oz2ZSVwKeffrrhOvloKCgoUAJn1hrByspK0tPT0el0+Hw+urq66OrqYnJy65Jwrg31\\n\",\n       \"3uqUA2azmYyMDHbt2kVJScm6KEyz2UxZWRkpKSmYTCa0Wq1irOfn57l+/Trnzp3j9u3b3L17Vxkz\\n\",\n       \"0RKtDlsC/rMkSf856jM/AkmSmJ+fp6enhy+++AKHw0FlZSU1NTXk5eUxOTlJamoqer2e/v5+Zmdn\\n\",\n       \"YzojKC4u5tChQ6SlpREKheju7ub69ev09PQ8cWTWWjlfLGcPCQkJmM1mhBBYrVZF+y3PqLdt20ZC\\n\",\n       \"QgJGo5HU1FSSkpKUwAmdTsf8/DydnZ14PJ5NNdbx8fFkZGSwd+9eDh06hN1uJxgM0tvbywcffMBX\\n\",\n       \"X33F+Ph4TGWGQgjlZuZwOIiLi6O9vZ3Tp08zPDyshBrLaRTy8/PZu3cvLpdL0VkPDQ0pqpBY3lwe\\n\",\n       \"1H6tVovZbCY1NZXS0lL27NmjuHXWjruUlBQcDgeRSITGxkY++eQTPB7PluWO2WoDLSNvItbX13Pg\\n\",\n       \"wAFKSkqUgDcZnU63TmstE4lEGBkZ4caNG5w9e5bR0dENkZ1Gq8OGGKdVhZWZ6szMDNeuXVOSJhUV\\n\",\n       \"FVFWVqYsUeLi4rh48SI9PT2Mj49veBIgOal6RUUFBw8exGKxMDg4yJ07d+jt7Y1aUigba3mD9UmN\\n\",\n       \"z9oZSHJyMjk5OUxMTKwzDGlpaYpKISsri9zcXEWqZzab2bFjBwaDgUgkQiAQYHh4GK/Xy/z8PAkJ\\n\",\n       \"CYrSYDNlkxqNhqSkJOrr6zl8+DB79+5Fr9czMDBAW1sbH330EV1dXTFfrssuEZ1Oh06nQwjBxMQE\\n\",\n       \"breb7OxsSkpKqK2tpby8nMzMTOx2O3q9XskfI/vam5qaNnXDdm1wT1paGunp6Wzfvp1jx45RVVVF\\n\",\n       
\"VlYW4XAYnU63LpJxcXGRqakpOjo6uHLlyqb42h/2GZ4FtFotdrudffv28aMf/YiUlJT7+vnvjXQE\\n\",\n       \"FJ324OCg4sLdiEnP08j6/rEQ4ies1Hf8F5IkxeyqHhkZ4fPPP2diYoIjR45w6NAh8vPzKSsrw2Kx\\n\",\n       \"YLPZOHfuHNeuXWN2dnZDZ14Gg4GSkhKKi4uVMPLR0VHa29ufKl2rbHBlSeOT+mJlHagkSezcuRO7\\n\",\n       \"3c7IyMg6BUJaWhppaWkIIdYZk6mpKSYmJmhubmZ0dJShoSF6enrweDzYbDby8/OxWq1EIhG8Xu+m\\n\",\n       \"hqObzWYKCgr46U9/SlVVlbJ8n5ubY3x8nOnp6U3Jay4HR4RCIYLBIGazmePHj1NdXY0kSRiNRiwW\\n\",\n       \"CyaTiaWlJYLBIKFQSPFxBoNB+vv7n3qcPAnyTSYlJYWysjLeeecdCgsLsdlsyuRGzg/vcDjYtm2b\\n\",\n       \"cjOClZtlZmYmFRUV+Hy+dcqmzeRZmWGv5WEr4vsdMxqNlJSUUFVVxa1bt+js7NyQHCnRGuz/Avyf\\n\",\n       \"q4//PfCfgD9+6tY8gPn5ecbGxhTx/MTEBJWVlVRVVZGXl8fBgwfR6/UYjUYuXLiwoTNCrVZLcnIy\\n\",\n       \"ycnJGI1GFhYW6Ovro7W19YnkdPLFlJWVRXV1NUajkZmZGSUoqKen54na1dPTw7lz59Dr9WRkZCi5\\n\",\n       \"P9ZeYLKOemxsjOnpaWZmZhSjJxtt+cfj8Si+OoPBoAQLxDLU+37IyZSKioqw2WzK8YGBAW7evMns\\n\",\n       \"7OymGBFZy9zS0sKZM2fYv38/KSkppKSkEAwGlfwSQ0ND+Hw+NBoNNTU15ObmAigh6pt1s5N97mlp\\n\",\n       \"aezdu5fDhw9z4MABFhYWGBkZwePx4PF48Hq9zMzMsHfvXpKSkrBarYqLxGAwUF5eztzcHD6fj/b2\\n\",\n       \"diYmJjY9de1awyhJkpJOOdYh/feytLTE1NQUN27cICEhgaKiIsLhMFNTU+v6RG6rnOsmJyeH9PR0\\n\",\n       \"LBYLSUlJ6zIiPi3RFuFVdNdCiP8GfLQhrXkIi4uLirD+zp077Nixgz/6oz8iPz9fEaknJSXR2tqK\\n\",\n       \"3+/fsCWorKKQfVRzc3NRzZzkrH61tbUcO3aMhIQE3G43N27c4MyZM/T19T1Ru9rb2xkbGyMQCFBe\\n\",\n       \"Xk5GRsZ9nxcIBGhubmZwcJCRkRFFdng/8vLyqK2txW63Ew6HY+4nXou8lC8qKuLFF1/EbDavW4V0\\n\",\n       \"dHTw1VdfbZoqSJIkFhYWuHr1KouLi9hsNvLy8pTIzzt37tDU1MRXX32Fz+fD6XTicDjIyMhACEFb\\n\",\n       \"W9umJt03Go04HA5qamp46623eO2114hEIpw/f55PPvmEO3fu0N3djc/nIy0tDbvdzu7du5Vc6rLa\\n\",\n       \"Rb5RjoyMIEkS7e3tBAIBJT4CVgxZrG+aa5VU8fHxFBQU4HQ6lcjNzbiByDbn/Pnz3L17l7q6OiUQ\\n\",\n       \"7n6bh0lJSezbt4+TJ08qrsiNJiqDLYRIl3NhA28ArRvXpIcTiUTw+XzcvHmTgwcPsry8rCwD5Wou\\n\",\n       \"fr8/JvpceZk8Nzf3RH5JOUDmwIEDvP766+zbt4+lpSVu377NxYsXlVD8J8Xv93Px4kVu3rz5QFmg\\n\",\n       
\"HJYcCoVYWFh4qO/X4XAoSZXkJFWbladFp9PhdDrZvn07hYWFyucJhUJ0dHTQ2dm5qTcQmcnJSa5e\\n\",\n       \"vYrb7VZSEMjSt9nZWaanp7HZbLhcLoqKikhNTWV6elrRPscaeeVWXl7OoUOHOHr0KEVFRczPz9PV\\n\",\n       \"1UVDQwMXLlxQxlh2djZ/+Id/yMsvv4zT6USj0dDW1sbdu3eV79/hcPCjH/2IyspKbt26xZdffonX\\n\",\n       \"61XGzuTkZMwVJGtnsEajkfLycoqLi3E4HFEn/4+Wubk5BgcHmZmZIRKJPFAx5fF4mJ2dJT8/nxdf\\n\",\n       \"fDEmbXmowRZCZLHio7YBWiHEFPAvgKNCiBOAHpgB9sakdWuQZ6ipqamkpKQoqRjXbt7Jy9hYfZny\\n\",\n       \"zu/ExMRjnUPW52ZlZVFeXs7x48fZvn07k5OTXLlyhcuXL9Pc3Bz1Bo+cOGujMBgMJCYmotVqcbvd\\n\",\n       \"fP3115tmsJOSkjh+/DgvvPCCIjGT/cC//e1vuX379qb5gteysLDA+Pg44+PjD3yO3W7HYrGQmJhI\\n\",\n       \"fHw8CwsLdHV1PfQ1G4EQApPJRGFhIS+//DKvvPIK5eXlTE1NKTlWGhoaGBkZUaqevPDCC0qBjenp\\n\",\n       \"adra2mhoaKCtrQ2Hw6EkM3M6nRQWFpKamorL5VIyzM3Pz3P58mUaGhpi9rm8Xi89PT24XC4l57lc\\n\",\n       \"W7SsrEzRv28Wi4uLijb9Uc97Wtneo3jUDDsCHJMkqVkIkQDcAq6wosH+95Ik/cfV9Kp/TJTZ+h6F\\n\",\n       \"RqNRVA05OTns3LlT8V2XlJQoviG/38/Q0BDd3d0xu/vLOuDHuRDlG4zsYz98+DA1NTUMDQ3x2Wef\\n\",\n       \"8ed//ueblsQ+GkZHR+no6Ng0I2mz2XjnnXeorKxUIsQmJiZoamriV7/61RP7+DeTezej5ubm+Prr\\n\",\n       \"r2P+/Wq1WlJSUnjllVd47bXXqKmpIRAIcP36dT744AOuX7/O5OSkUpno5MmTvPbaa2RmZuL3+2lr\\n\",\n       \"a+MXv/gF165dY3h4WJF6yhLA3bt3K/p8rVar5KSfnZ2NqcHu7+/nxo0bSpEKmezsbGpqamhtbY15\\n\",\n       \"Eig5b/Xj+u/lyVl5eTnp6ekxa9dDDbYkSWPA2OrjgBDiDuBiA9OrPgydTofNZqOsrIzq6molGZRc\\n\",\n       \"tkrehFheXiYUCim+642+w8kXpLw0e5C/eC2FhYXs2bOHI0eOUFRURHJyMnfv3uXjjz/mo48+2pLw\\n\",\n       \"+mcZ+aa8VmrW09NDQ0MDk5OTz3RR3bGxMTo7O2NeLONe8vLyePHFFzlx4gT5+fl4vV4++eQTLl26\\n\",\n       \"xJ07d0hNTaW+vp7S0lIqKiooKCggISGBS5cucfPmTRobG2lra2N8fFyJexgeHmZmZoaOjg6amprY\\n\",\n       \"vXs35eXlmM1mJicn+fTTT7l8+YnLtj4Rfr8fr9e7JTpw2cWUnp6O1WplYGCAYDD4SPdnTk4OdXV1\\n\",\n       \"vP322+zcuTNm7XtsH/aqFnsncI0NTK96n/Og1+uxWq24XC5KS0upra1l586d5Ofnk5ycrMzAQqEQ\\n\",\n       \"brdb0egd8pInAAAMP0lEQVReu3YtprvyOp1OadOOHTvweDyKy8BisZCSkqLUpayoqGDXrl3s2LGD\\n\",\n       
\"+Ph4vF4v165d48qVK7S3t8esjRtFfHw8ZrN5w3a3H0ZmZiZVVVVKon/Z6Lndbtra2ggEAlsiL3tc\\n\",\n       \"tFrtN6oMbQaFhYUcPnyY4uJiEhIS8Hg8LC0tkZqaSnV1Nbm5uUpF9JycHPx+Px0dHZw+fZobN27Q\\n\",\n       \"29u7boN+bfENuYTV2NgYd+/eVVRNX3zxRcw3U2V33JEjR0hOTlZu4ikpKYrkdHJycsO1+GazWcm/\\n\",\n       \"LReeuDftxFq0Wi0GgwGr1Up9fT3Hjx9n9+7dpKSkKC7apaWlDdXgP5bBXnWHvA/809Uajsr/NiK9\\n\",\n       \"6roGrUYX7dixg/379/PKK68oaUTXnBMAn89HS0sLv/71r2loaIhZ+K+sVpCXPfX19QQCAT7//HPF\\n\",\n       \"h1xYWEh9fT27du0iJyeH1NRUDAYDExMT9Pb2cvv2bS5cuEBvb29M2riRCCGUG9Bm5EjetWsX3/ve\\n\",\n       \"97DZbOvkXJOTkwwMDDwzOS0eRFpaGkVFRd8I+Y41JSUlHDp0SEkclpiYyEsvvcSBAwcwm804nU4l\\n\",\n       \"0EOSJG7cuMH777/Pb37zG7xe7yNXA0NDQwwNDXH+/PmYf5a19PT0oNFoeOutt0hLS1PcIgUFBQC8\\n\",\n       \"9957Si7vjSQlJYWamhreeOMN6urqWFpa4tatW4yPj983w2J8fDypqans3LmTkydP8uqrr2I0GpXQ\\n\",\n       \"9HA4zMLCwjqFzdPyOMmfdKwY6/8pSZKclW9D06vKkVlyxN7Ro0fZsWMH+fn5OJ3OdRUx5Iv45s2b\\n\",\n       \"NDc3097ezvDwcMw3eGSEEOTn52MymaioqGBmZgZJksjKylLkUiaTiWAwSF9fH93d3comzejo6KYX\\n\",\n       \"W4iGtfrSWBogOay3sLBQUacsLS0xOzur5I72+/3P9OwafjczW7s62Azm5+fx+/0kJCSg0+mU7Iaw\\n\",\n       \"sgHW1dWF2+1mcHCQnp4e7ty5Q0dHx6bnkY+GSCSi+MvXhoLHsn+rqqr4wQ9+QGVlJQ6Hg0AgwMGD\\n\",\n       \"B8nJyblv0EtGRgYFBQUUFRWRl5en3LDHx8fp6+ujqamJzz77bEOLfzxKJSKAvwbaJUn6f9f866nT\\n\",\n       \"q8r6ZqfTSVZWFikpKaSnp1NYWMiRI0fYtm2bMnMIhUL4fD7Gxsbo7u6mra2Nq1evKtWnY/klyuHx\\n\",\n       \"U1NTzM7OYjabsdvtJCcnk52drWzKyVnQFhYWlJl/Y2PjuiRVzxNGo/Eb+Xw3Grkqe0ZGBhkZGcTF\\n\",\n       \"xSnRlZ999hmtra3Mz88/8wZbzncu64Y3i97eXi5evEhhYSFWqxW9Xq+EmMt+9f7+fgYGBujq6lJS\\n\",\n       \"Fzzr/QkrN6O7d+9SVFREdna2cjyW13pGRgYVFRVkZmYqs/p9+/ZRXl5+X3+60+lk27ZtpKWlodFo\\n\",\n       \"mJ+fZ3p6Wkm53NjYSHt7+4aKIB41w94L/BhoEUI0rR7712xAelWNRoPRaGT//v288cYbSnSQXHV8\\n\",\n       \"re9UNoC//e1vuX79Ol1dXYRCoU0R0IfDYQYGBuju7mZ4eFgJnpDdI3K9NiEEoVAIr9fLlStXePfd\\n\",\n       \"dzlz5gzLy8vPxQVyL3a7nW3btsW0crY8BuQq3RqNBr/fz+DgIB988AGdnZ1bUjThSZmYmKCvr49I\\n\",\n       
\"JBLzVclaLl68SEdHB7W1tYoEb3Z2llu3btHY2Ijf71dkrrLa4VkM+74fwWCQq1evUlFRwe7du7ek\\n\",\n       \"DWazmRdeeOGB/abRaJSfYDCIx+OhqamJ9957j9OnTzM/P7/h4/dRBnsAuAQ4WMnQ9/9JknQ62vSq\\n\",\n       \"er1eqWNYWlqKy+WioqKC4uJiLBYLRqNRMRDy0ri1tZXLly9z9epVenp68Hq9BIPBTQuXXVpaYnp6\\n\",\n       \"mosXLzI3N0d9fT21tbVs375duTCDwSAtLS3KMr6zs5Ourq4trYH4NMhGZzM2HO81cPJmTSQSeS6M\\n\",\n       \"NaxUShkeHmZsbIzU1FS0Wi02mw2z2bwh+SMehFx559q1a3R0dGAwGNblwl5rqJ835BiDkZERxsfH\\n\",\n       \"v1FuKxY0Njbyt3/7t9TX11NWVkZWVtZ9V5iLi4vMzs6ysLBAKBRScgutvfbn5uZi0vePo8P+Z2t1\\n\",\n       \"2EKIs0SZXlUOMT148CAHDx7E5XIpqo+FhQVmZ2eVemc+n0+pkdjQ0EBLS4uy7NxMZMmgLH8aGxtj\\n\",\n       \"fHyc4eFhJWBndnaW69evc+PGDdra2p55GdqDCIVCzMzMbJqhXCvHDAQCz0T17miQV1Z37tzB4XAo\\n\",\n       \"ebTdbjeBQCBmxR/kgg4DAwMb/t5bjRwW3tzcTFpampIpz+12Rx0Z/CjkLJATExNKErm0tDQMBgPL\\n\",\n       \"y8uKgmZqaoqBgQFmZ2eZmZlhcHCQ1tZW2traYi5HjFaHDVGkV5V1zLt27aKyslJxfcgbDHfu3KGn\\n\",\n       \"p4elpSWuX7/OlStXmJ6eJhQKxTyC6FHMz88rd/sLFy6scxXIO8LhcFiZ1TyPjI2N0dHREbOw2ntZ\\n\",\n       \"XFzE6/XS399Pf38/hYWFm3LeWOD3+zl//jxOp5N9+/Zx/PhxJYmS2+1+5pUuzxqyL/43v/kNZ86c\\n\",\n       \"UQJZZOlhLIzi9PS0El0rV4h54403cLlcLC4u8vXXXys/jY2NSi4fOcJxM679aHTYV1nxbT9xetVA\\n\",\n       \"IEBDQwMej4cPP/xQOS5/CT6fj6mpKSRJYnR0VCl79Sws6eQ8InIukd9HBgYG+OSTTxgaGlJULrGM\\n\",\n       \"KJPdXhcuXGB0dBSr1apUGrq3vNWzztzcHM3NzZSVlZGfn09ubi779+8nGAxy+vRpxsfHnxsXz7PC\\n\",\n       \"8vIyc3Nzm3a9yRMv2TjPzs7S1taGxWJZNy7Hx8fxeDwEg8FND+4Rj2MMV90hF4H/S5Kk3wghHKz4\\n\",\n       \"r2ElvWq6JEl/fM9rtt7KqqhsEnKhi6NHj/Lmm29SV1fH/Pw8TU1N/Nmf/ZmyUa6i8jhIknRfD8aT\\n\",\n       \"6LD/l6zD3or0qioqzzJy4jE5s93ExAT79u2jqqqK9PR0RkZGVIOt8tREpcPeyvSqKirPKnKV7I6O\\n\",\n       \"Dk6dOkVraysGg4He3l7VWKtsCA91iQgh9gFfAC2sKEMA/g3wQ6Bq9Vgf8A/W5BaRX6u6RFRUVFSi\\n\",\n       \"4EEukcfyYUeDarBVVFRUomPTDbaKioqKysYS+1A2FRUVFZUNQTXYKioqKs8JMTPYQohXhBAdQoiu\\n\",\n       \"1TJiKqsIIfqFEC1CiCYhxPXVYzYhxFkhRKcQ4jMhRPKj3uf3ESHEfxdCeIQQrWuOPbBvhBD/enWM\\n\",\n       
\"dQghjm5Nq7eGB/TVz4UQw6tjq0kIcXzN/76VfSWEyBJCfC6EaBNCfC2E+Cerx5+/cSVnotrIH0AL\\n\",\n       \"dAM5gA5oBkpica7n8YcVZY3tnmP/EfiXq49/BvyHrW7nFvXNi6xE1LY+qm+A0tWxpVsda92AZqs/\\n\",\n       \"wxb31b8D/vl9nvut7SvACVStPk4A7gIlz+O4itUMezfQLUlSvyRJEeB/A9+J0bmeV+7dBX6dlfqY\\n\",\n       \"rP7+e5vbnGcDSZIuA1P3HH5Q33wH+DtJkiKSJPWzcmFtTS7OLeABfQX3z/Pzre0rSZLGJElqXn0c\\n\",\n       \"ANbWpn2uxlWsDLYLGFrz9zC/SxqlsqJfPyeEuCmE+Purx2JWJ/P3gAf1TQYrY0tGHWcr/GMhxG0h\\n\",\n       \"xF+vWearfcVj16Z9ZvsqVgZb1Qo+nL2SJO0EjgN/KoRYlx5PWlmXqX14Hx6jb77t/fZfgFxWAttG\\n\",\n       \"gf/0kOd+q/rq3tq0a//3vIyrWBnsESBrzd9ZrL9jfauRVsP6JUkaB06xstzyCCGcsBL6z1PWyfw9\\n\",\n       \"40F9c+84y1w99q1FkiSvtArw3/jdUv5b3VcPq027+v/nYlzFymDfBAqFEDlCCD3wfVbqQH7rEUKY\\n\",\n       \"hBCW1cdm4CgruVjkOpkQZZ3M32Me1DcfAj8QQuiFELlAIXB9C9r3zLBqeGTW5vn51vbVY9Smhedk\\n\",\n       \"XD12PuwnQZKkRSHEPwLOsKIY+WtJku7E4lzPIWnAqdWyWHHA30qS9JkQ4iZPWSfz9wEhxN8BB4AU\\n\",\n       \"IcQQ8G95QA1RSZLahRC/AtqBReAfrs4svxXcp6/+HfCSEGJdnh/41vfVE9WmfZb7Sg1NV1FRUXlO\\n\",\n       \"UCMdVVRUVJ4TVIOtoqKi8pygGmwVFRWV5wTVYKuoqKg8J6gGW0VFReU5QTXYKioqKs8JqsFWUVFR\\n\",\n       \"eU5QDbaKiorKc8L/DzAr6bE92WeRAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7939901710>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# we use a little trick to tile the first eight images\\n\",\n    \"imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\\n\",\n    \"print solver.net.blobs['label'].data[:8]\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"[ 7.  2.  1.  0.  4.  1.  4.  
9.]\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAWwAAABKCAYAAACfHW4mAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJztnWlwXNd153+3V3RjaaDR2Bs7sRMgQIKgKAokuIgUpcg2\\n\",\n       \"q+QljstO4kpS9iQzlUlqMpkPSWZSlclM1SSTmg+umrI9ZWdGViS5ZMuyJVIUSJEUwA0QSew7QKCx\\n\",\n       \"Aw2ggd4bbz4A7wncRABEo4Ho/apYbLzeTt9+fd695/7POUKSJFRUVFRUdj6aSBugoqKiorI+VIet\\n\",\n       \"oqKisktQHbaKiorKLkF12CoqKiq7BNVhq6ioqOwSVIetoqKiskvYtMMWQrwkhOgUQvQIIf5iK41S\\n\",\n       \"UVFRUXkUsRkdthBCC3QBpwAHcAv4bUmSOrbWPBUVFRUVmc3OsGuAXkmSBiVJCgBvAF/eOrNUVFRU\\n\",\n       \"VB5msw47Axhe8/fI6jEVFRUVlTCh2+TznhpHEUKoOe8qKioqm0CSJPG445udYTuAzDV/Z7Iyy1ZR\\n\",\n       \"UVFRCRObddi3gQIhRI4QwgB8HXh368xSUVFRUXmYTYVEJEkKCiH+GDgPaIEffdEUIlFRUeTm5mI2\\n\",\n       \"mzEYDMTGxpKUlITVagVgYmKC0dFR5ubmGB8fZ3p6OsIWq6io7HY2JetTnizEILAAhICAJEk1a+77\\n\",\n       \"VxvDNhqNpKWl8Vu/9VukpKQQHR2N3W6noqKCoqIiAD799FMaGxvp6enh2rVrNDc3s7y8HGHLn4xG\\n\",\n       \"o8FkMmE0GvF6vfj9foLBYMTsEUKg1+sxGo3o9XoAgsEgHo+HQCAQMbtUVLaDJ8WwN7vpqLwuUCdJ\\n\",\n       \"0uwzvs6u4tChQ7z66qscPXqU+Ph4dDodUVFRxMbGIl8Ac3NziYmJobS0FLfbTX9/PwsLCxF1gk9C\\n\",\n       \"r9djtVp57bXXqK2t5fz58zQ0NNDV1RURe4QQGAwGqqurOXbsGPv378fv99PV1cXbb7/N4OAgS0tL\\n\",\n       \"EbFNRSWSPKvDBnjsleBfI9HR0ZSVlXHmzBnOnDlDXl4eJpOJUCiE2+1mdnaW4eFhTCYTCQkJZGdn\\n\",\n       \"k5CQQHl5OXfu3KGtrW3HOuzU1FSqqqo4cuQIra2tmM3miNmj0Wgwm81UVFTw6quvUlZWRiAQwG63\\n\",\n       \"09HRgcvlipjD1mq1xMbGsmfPHhITE2lsbGRhYeGpz4uNjSU6Ohq3271jVgl6vZ7ExESKiopITU2l\\n\",\n       \"p6eH4eFhpqen2Y7GJjqdDovFQllZGQCzs7OMjo6yuLiI3+9f9+totVqMRiMWiwW3243L5drRq9ln\\n\",\n       \"4VlriUjARSHEbSHEH2yFQTsVIQSJiYl885vf5Ctf+QplZWVERUURDAZZWlrC4XBw+/ZtLly4QGNj\\n\",\n       \"IyMjI2i1WqxWK8XFxezbtw+TyRTpj/FY9Ho9GRkZWCwWgsEgTqcTt9sdMXt0Oh0JCQmUlpZSXV2N\\n\",\n       \"yWTCYrGQl5dHeXk5KSkpEbPNYDBgt9v5+te/zp//+Z+v25aUlBRKS0tJT08nJiYmzFauD5PJxN69\\n\",\n       
\"e/n+97/PP/7jP/LVr36VPXv2oNFoECL887Do6GgKCgr43ve+xx/+4R/y8ssvk5WVteHfSVRUFMnJ\\n\",\n       \"yezbt4/MzEy0Wm1Y7RdCoNPp0Ov16PX6bRkrmWedYR+RJGlMCJEEfCiE6JQk6epWGLbTSEhIoKCg\\n\",\n       \"gKqqKtLT0wkEAkxPT1NfX8/ly5eZnZ1lenoal8uF2Wzmtddew2KxkJycjMFgwGw2o9HszFpber1e\\n\",\n       \"cSTz8/O0tLQwOjoaEVuMRiOZmZl84xvfoLq6OiI2fB5RUVEUFhaSlpa2IceQnp7O4cOHEUJw5coV\\n\",\n       \"rly5EmZLPx+j0UhGRgavvfYalZWVxMfH85WvfIWRkRGamprCOkPVaDRERUXxyiuv8I1vfIOysjIc\\n\",\n       \"DgdDQ0MsLCyse7Kg0WgwGo289NJLfOlLXyIlJYVf/epXTE5OMjs7G5bVrF6vx2azcfToUbKysvD5\\n\",\n       \"fLz11ltMTExsy6z+mRy2JEljq/9PCSHeYSVlfV0OW46bJicnY7FY8Pv9hEIh/H4/09PTeDwe/H4/\\n\",\n       \"Xq93RyxvYmJiSEtLIz09Hb1ez/j4OFevXuXdd9/l0qVLir1arZasrCy8Xq+yrPR4PMzPzxMKhSL8\\n\",\n       \"KR7FaDSSlJREeXk5UVFRDAwM4HA4cLlc22qHwWBQxresrIxTp06Rl5f3wGPkmXdmZiYzMzMsLS3h\\n\",\n       \"crm2bTUghMBkMlFcXExaWtqGLsA6nY74+Hjy8vIYHBxEq9WyvLy8LaGHxxEbG0tOTg7V1dXY7Xa0\\n\",\n       \"Wi1xcXFERUWF1SatVovFYqG8vJxTp05x5MgRnE4n9+/fp6uri/n5+XWHi4xGI4WFhdTW1lJXV8fY\\n\",\n       \"2BhCiLD4C9lR5+TkUFZWxsmTJ8nOzmZmZoaPPvqI6enpne2whRBmQCtJkksIEQ2cBv7zep9vMpmo\\n\",\n       \"qKigrq6O8vJynE4nXq+XmZkZrl27hsPhYHZ2lomJiQ3Fs8KFyWQiLi4OrVaLy+WitbWVH/zgB7S2\\n\",\n       \"tj4Qw4yPj+fYsWMcOHCAlJQUdDodY2NjdHR04PV6I/gJHo/FYmHPnj3U1dXhcrn49NNPIxIOiYmJ\\n\",\n       \"4YUXXuDw4cPs37+f0tJSoqOjH3iM0WgkLy+PpaUlrFYrQ0NDdHV1cf/+/W2xUQiB2WympKSE1NRU\\n\",\n       \"pqam1v1cp9PJxMQE1dXVJCQkYDAY8Pl8EXPYNpuNPXv2kJSUhMlkYn5+njt37jA4OEgwGAybXUaj\\n\",\n       \"kaysLH73d3+X2tpalpeXuXnzJj/72c94//33N+T04uLiOHXqFAcOHECj0fD+++/T2NgYFgmt2Wxm\\n\",\n       \"//79nDt3jtOnT2Oz2TAYDHR1dREXF4dOp9uWfYmnOmwhxI+BV4BJSZLKV49ZgV8CB4UQflayHP9Z\\n\",\n       \"kqQL631ji8XCK6+8wuHDh0lPT1dm2D6fj0OHDrGwsMD8/DwOhwOfz7eu1wyFQiwuLtLR0UF7ezsT\\n\",\n       \"ExNbNohjY2NcvnwZj8eDRqNhfHyc3t5ePB7PA48zm81UVlaSlZWlLJklSYrYD/NpxMbGKtLEnp4e\\n\",\n       \"7t69+8hn2g7i4+N59dVX2bdvH1arFaPR+Mhj5Jir3W7nueeeY3x8nA8++IALFy4wOTkZ9h9MUlIS\\n\",\n       
\"JSUlFBYW4nK5aGpqWvfFTa/XExsbS2pqKgkJCej1+ohMRIQQaLVaSktLOXHiBLGxsQC43W6am5u5\\n\",\n       \"f/9+WM9VeZVRWlpKUlISMzMzfPjhh7S3t2/YWefn53Py5EliYmJoamqisbGR4eHhpz95g5hMJrKz\\n\",\n       \"szl37hyHDx8mMTFRiV3Hx8fz5S9/Gb1ez+3bt8MeEVjPDPv/AP8L+OmaY/8R+JUkSbWrtbATJEn6\\n\",\n       \"rxt5Y1lnGwgEmJ+fx+v1KvK4goIC5Qcr3xcKhTAYDA/EDGVHGAwGlY2ShYUFrly5gt/v39Dy6mks\\n\",\n       \"LCzQ3d2txKr8fj8ej+eBLychIYH8/HxKSkpISUlBkiRcLhfz8/O4XK4dGRJJTU2lqKgInU7H6Ogo\\n\",\n       \"XV1d274SSE9Pp7q6moMHD5KTk/NYhyGHD6xWK0lJSQghmJ+fB8Dn8/HBBx8wOxsedakccy0tLaW2\\n\",\n       \"thabzUZTUxMNDQ0sLi6u6zXi4uJIT0/HarViNpvRarVhsfVpaDQaYmNjlf0Ys9mMJEm43W7u3buH\\n\",\n       \"w+EIy/vKG3WpqakUFhaSnp7O0tIS9+7do7m5ed17JvJ3UVJSwvHjx6moqGB0dJTm5ma6u7txOp1b\\n\",\n       \"bntycjLl5eUcOnSIvLw8jEajco7GxcVx/PhxJEkiOjqawcFBxsfHmZubC8uF76kOW5Kkq0KInIcO\\n\",\n       \"fwk4tnr7J8BlVpz4unE6nbz11ls0NzcTHx/P2NgYsbGxZGRkkJ2dTWFhIbm5uWRmZipSKKvVqiRR\\n\",\n       \"AIrjdLlcmEwmoqOjkSQJv9/PxMQE7e3tWxqLlRUUwGNnzYWFhZw8eVLRYPv9fu7fv09vby/379/f\\n\",\n       \"EaGdhykqKqK2tpZgMIjD4WBkZGTb7XzhhRf47ne/i81me+Imnt/vZ2lpCZ/PR0xMDBaLBavVyquv\\n\",\n       \"vkp2djZNTU1hc9iy7PHFF1/ka1/7GsFgkKamJj766KN1n19ybD4qKiosNq4X+bPk5OSQkZGBRqNR\\n\",\n       \"HHZnZyfj4+NheV+NRkNMTAz5+fmUl5djMplobGzkjTfeYHh4eN3nnGz/K6+8wne+8x1sNhu3bt2i\\n\",\n       \"paVlSydoayksLOT48eOKgGAtJpOJffv2kZ+fz+nTp3n33Xc5f/48TU1NYQktbTaGnSJJ0sTq7Qlg\\n\",\n       \"wzorr9dLR0cHDocDo9HI4uIiBoOBmJgYYmNjSU5OVjah5BlqZmbmA0vlYDCI1+tlamqKuro6Tp8+\\n\",\n       \"jdFoJBAIhE2L+bjXjIqKIjMzk+PHj3PmzBkSExMJBoPKkv3evXsPbELuBGQ9cWZmJllZWXg8Hrxe\\n\",\n       \"77baKW807tmzB7vdjl6vf+RC6Ha76ejooK+vD4fDgcfjITU1VZkhxsTEYLPZqKmpwev1MjQ0tOV2\\n\",\n       \"xsbGUldXR0VFBcvLy/ziF7+gsbGRxcXFp55jspIhIyOD/Pz8iDtsk8lEWVkZGRkZyix/cnKS3t5e\\n\",\n       \"lpaWwract1gsnD59mpdeeon9+/ezsLBAS0sLzc3NuN3udZ9zBoOBnJwccnJyiImJoa2tjYaGBpqb\\n\",\n       \"m7dcm6/VaomJiaGkpISamhpiYmKUFbPb7cZqtWKz2dDr9cTFxZGdnc3JkycZGRmhvb2dpaWlLV9V\\n\",\n       
\"P3PijCRJ0mbS0AOBABMTE0xMTDz2frk+R2pqKi6XC5fLRUZGxgMOOxQKEQgE8Pv92O12Tpw4wfLy\\n\",\n       \"MhMTExu6aj8rRqNRcSAVFRVotVqmpqbo6enh8uXL9PT07LhwiDxTkWOqcrLCdiX26HQ6rFYrhw8f\\n\",\n       \"Zu/evSQmJqLVapEkieXlZQKBAIuLi4yMjPDhhx9y79497t+/j8/nIyMjg+rqanJycjCZTMTExFBX\\n\",\n       \"V8fS0hJerxen00kgENiSC4/ZbMZut1NbW0tOTg5Op5OLFy/S3t6+rrEyGAxkZ2eTnZ1NYmIii4uL\\n\",\n       \"uN3uiJ0PUVFRFBcXk5qaCqysFIeGhhTHGS5MJhOVlZVUVVWRlZVFd3c3Q0ND6155ajQaLBYLOTk5\\n\",\n       \"HDx4kNzcXPx+Px9//DENDQ1huVDrdDqSkpLIy8tT3q+jo4O2tjbcbjd5eXmUlJRgs9mUXIGKigqq\\n\",\n       \"q6tpa2tTkry28iK4WYc9IYRIlSRpXAiRBkxumUWrBAIB5ubmlA8sSRJLS0uPLJmjo6PZt28faWlp\\n\",\n       \"iiKjra1tWzfP5GQK2ekIIRgZGeHmzZu0tbVtSE2wXciKi6SkJEKhkBJu2C7MZjP5+fl8+9vfprKy\\n\",\n       \"koSEBGXGJ4ee2tvbaWxs5O2332ZkZASPx4MkSYyNjbG8vMzZs2dJTExUNrCNRiMajUbRxW+Fw7bb\\n\",\n       \"7dTU1Ciz+cHBQQYHB9cdK42NjeXkyZMUFRXh9Xrp6+tjbGwsYnJVo9FIbm4uiYmJSJJEKBSipaWF\\n\",\n       \"8+fPrytj81mR95xkvXUoFHrq9yTvd1VUVChZxrm5uYyNjfHmm29y9+7dsNgq5yfIobqhoSHeeOMN\\n\",\n       \"3nzzTbRaLXv37qW2tpazZ8+Sm5tLfHw8sbGxnDhxAq1Wyz/90z/R19e3pXtCm3XY7wLfAf7b6v+/\\n\",\n       \"2DKLVpFPprUzkYdnJQaDAZvNxssvv0xFRQVut5urV69y79495ubmtmW2WFlZSV1dHXV1deTn5+P1\\n\",\n       \"eunv76e+vp4PP/yQmZmZHTe7hs9ORovFwtLSEtevX2dgYGDb3l+r1WI2m0lPTyc+Pl7ZNJ6dnaW3\\n\",\n       \"t5cLFy7Q3t5Of38/Q0NDD4QfnE4nfX19XL16FaPRSFlZGQkJCdTU1BAMBunr68Pj8WzJEtlut1NZ\\n\",\n       \"WYnNZmNkZITbt2+vW1NvMplIS0vj0KFDZGVlKRviXV1dYZXOPYn4+HhycnIoLCzEZrMp+z3j4+P0\\n\",\n       \"9/eHfUUqf1551fHiiy8SHR2Nz+djaGiIkZFHS+onJCSQmppKeno6VVVVHDhwgNzcXBYXF+ns7GRy\\n\",\n       \"cjJsEzM5nKXX65XfSEtLCxMTE2g0GuViIyte4uPjEUIQFRVFdHS0ohDbStYj6/sZKxuMNiHEMPBX\\n\",\n       \"wN8DbwohvgsMAl/bUqvWSXx8PIWFhZw4cYK8vDwmJye5ceMGPT09YZ8tyvKoqqoqvvrVr1JSUoLZ\\n\",\n       \"bGZ2dpbr169z6dIlbt++vSM3Gg0GA/Hx8eTn52OxWJiZmaGhoWFbHbY8flFRUUrs2uPx0NfXR319\\n\",\n       \"Pa+//jpDQ0OPnZ34fD4mJia4cuUKdrtdiQ3n5eURCoVISkrakgJRGo2G9PR0SktLldl1Q0PDul9X\\n\",\n       
\"zo7du3cvVquV4eFhrl27Rm9vb0Rm11arlT179pCZmUlsbCx+v5/JyUnGx8fDtmErEwqFmJ2dxeVy\\n\",\n       \"odFolBCmvPJoaWmhvb39keelpaWRl5dHfn4+2dnZJCUlEQgEaG1t5ebNm2GtKaPVaklISMBsNisX\\n\",\n       \"FXnlFgqFGB8fJyoqSlGASZKEEAKv18vCwgJ+v3/Lv+f1zLA9rNS87lqjw/4boBiYApKA54APttSy\\n\",\n       \"dZCfn88LL7xASkoKGo0Gl8vFxMTEuqVWz4KcGZaRkaHEUuWNxgsXLtDa2hqWL2wrsFqtlJSU8Pzz\\n\",\n       \"z2OxWOjp6VF065HC7/cr8eq33noLh8PxuRc7j8dDe3s77e3tiu794R38Z0EuN5uSkqK8dl9fH9ev\\n\",\n       \"X1+3k8jNzeXw4cMkJSUpn6+rqytiITKr1aps7gIsLi7S2NhIf39/2N/b5XJx7do1cnNzKS4uJi4u\\n\",\n       \"DpvNhsViQZIkiouLHztTNhgMGI1GDAYDBoOBYDDI7OwsN27c4IMPPghrGMdsNlNVVYXdblcyXJub\\n\",\n       \"mx8Iwcjh0ISEBOXYzMwMQ0NDuFyuLV/lb1aHLQH/IEnSP2ypNetErjJ28OBB6urqsFgstLa28v77\\n\",\n       \"79Pd3b0tsTiLxcLLL79MTU2NUmLV4XBw9+5durq6mJmZ2ZHOGlYSQIqKisjIyCAQCDA2NobT6dzW\\n\",\n       \"GLaMEAIhBE6nk9dff52LFy8yMDCA2+3+3PFbXl5W5J5+v1+Z3URHR1NTU8PU1NQzaXJl3XBcXBzR\\n\",\n       \"0dEMDAxw//79DZ1bCQkJyo99bm6OsbExXC5XxCr1yfasdXzbtbLy+Xz09vby61//mqmpKdLS0khN\\n\",\n       \"TSUjI4OioiIsFgsajYbBwcEHVlVy8tuLL76I2WxmaWmJkZERent7GRwcDOsKNhAIMDo6yvz8PHl5\\n\",\n       \"eeTk5JCZmUlqaioWi4WUlBT27dtHRkYGOp0Ov9+PwWAgKSmJwsJCUlJSFEXJVrFZHTZEsKyq0Wgk\\n\",\n       \"Pz+fAwcOUFVVRTAY5ObNm7z99tv09/eHfbNRlqOdO3eOAwcOYDAY8Pv9dHd309DQwNjYWESyBddL\\n\",\n       \"UlISe/bswWKxMDg4qGzobecFRq4fLm/Syrr8jo71NS6SS7CazWZls1GOHxYWFnLr1q1ntlGj0aDT\\n\",\n       \"6RQlhdPpRKPRPHWchBBoNBoSEhJIS0tDr9czPz/P2NhYRJy1XF88NTWV3NxcDAYDgUCA2dlZ7ty5\\n\",\n       \"89jY8VYTDAaZmpri4sWL3Lx5E7vdTkFBAfv27SMQCGA0GpmYmODmzZsPaNu7u7uRJInq6mri4uJY\\n\",\n       \"WFigra2NgYEBJXEqXHg8Hu7du8ehQ4c4ePAgdrudsrIyJicnSU9Pp6SkhL1792KxWJiensbtdmO3\\n\",\n       \"20lNTWX//v0UFxczMzOzvQ77c/gTIcS3Wenv+GeSJM1tkU1PRZYmpaWlEQgE6Ovro7Ozk6GhoW2J\\n\",\n       \"GaelpVFZWUl+fj7x8fH4fD76+/v5+OOP+fDDD7dlhv8sWCwWRVXjcDhoaWnZ9gtMXl4ezz33HGaz\\n\",\n       \"eVNFkEwmE6WlpZSUlJCeno5Op2N5eZn5+XkuXLiwbsf/JCRJwufzMT09zdTUFDabDZvNhtlsfurs\\n\",\n       
\"f22SSEVFBSaTienpaQYGBiKyp2E0GsnJyaGqqorKykplprq0tITT6dzW715Wf3k8HkVJ9dZbbyGE\\n\",\n       \"UPIn1m7o6nQ6SkpKkCSJ+fl5Ojs7eeedd2hpaQm7rV6vl97eXhwOB6FQiOTkZF577TVefPFFDAYD\\n\",\n       \"JpMJjUaDw+Hgo48+oq+vj9///d+noKCApKQkamtrGR8f39J0+c067B8A/2X19t8C/wP47pZY9BQ0\\n\",\n       \"Gg3R0dEUFxeTkpKixMba2trWlcjwLMgzleLiYmpra0lOTsZoNOJyuRgZGVEq3e3EJgXw2a53amoq\\n\",\n       \"drsdj8dDb29vRBy23FJtI4kkGo0Gg8FAdHQ0OTk51NXVUVxcTFRUFBqNRsl6HRsbe+aLpqygmJub\\n\",\n       \"Y25ujoyMDA4fPozT6aSjo+ORWZNOp1OaFMTExJCQkEBVVRVxcXHAZ5makVAMyVUOExMTFSWD0+lk\\n\",\n       \"eHh42zXhy8vLD+jsnxbPr6ioUDIje3t7qa+vp7W1lZmZmbDbGgqFmJmZoampifr6eg4fPqyEcQCl\\n\",\n       \"qFt9fT3Xrl1jfn6eAwcOKCuriooKmpublZozWzHOm23Cq+iuhRA/BH71zJasE5PJRHJyMsXFxSQk\\n\",\n       \"JDA1NUV9fT1dXV1hX9LrdDoSExOprKzkhRdeIC4ujlAoxMLCAn19fVtabCocyPZnZWWRlpaG0+mk\\n\",\n       \"t7eXnp6ebbdFDss8rsjTk9DpdNhsNrKysqiqqlLUQbDy45IbHm/Fj0PWC8/OzjI+Pk5eXh61tbWk\\n\",\n       \"pqZy8eLFR+LjcpgsOTkZm81GUlIS2dnZyspBrj0TiWxXrVZLfHw8ZrNZkZqNjY3R2dm5IytIwmeT\\n\",\n       \"o71793L8+HGio6Pp7Oykvr6eycnJbVmpLC8v4/V6uXHjBhqNBpvNRmFhIdHR0SwvL9Pe3s57773H\\n\",\n       \"z3/+c8bHx4mPj+f27dtkZ2eTnp5OXl4eBQUFZGRkMDg4GDmHLYRIk2thA+eA8K9PViksLOTUqVMU\\n\",\n       \"FRXh9/tpb2+nu7t7W664sbGxHD16lJqaGjIzMzEYDIyMjHDjxg3efffdZ16Ghxuz2Ux1dTWFhYWK\\n\",\n       \"7eGWc20lchJKXV2dUsfZaDQqiRj19fW8++67W7qP0dbWxi9/+UsASkpKlLTux62iQqGQIlW0WCwP\\n\",\n       \"dJYZGhri9u3bEWltJpcyzsjIUJxQV1fXhhQv243ZbKa0tJTnn3+e/fv3o9PpcLvdTw1HhQNZLhwV\\n\",\n       \"FaXowL1eL5cuXXog18LlcvHRRx8pq5m8vDwOHDjAyMgI//Iv/7IlF8fPddhCiExWYtRWQCuEcAJ/\\n\",\n       \"BpwWQrwCGIB54MgzW/IU5ALihw4d4uTJkyQmJnLnzh0++eQTJiYmtkXhEBMTw9GjR9m7d69Sq7mz\\n\",\n       \"s5MPPviA1tbWHe/85IQFudrd3Nzcjv3BPkxBQQEHDx7k2LFj1NTUUFhYqNzn9/uZmpri3r17NDY2\\n\",\n       \"4nQ6tywsNT4+zo0bNwiFQpSXl5Ofn09MTMwj1fZkna7X6yU2NpYDBw6Qk5OjrCAWFhaYmJjY9pCI\\n\",\n       \"wWAgMTGRqqoqMjMzFccyNDREd3f3jp1hR0VFUVZWRkFBAXFxcQwPD3P//n2mpqa2PeTo8XgYHR3l\\n\",\n       
\"ypUrjIyMkJaWhs/no7Ozk8HBQaUMglzsraWlhZKSErKzs8nNzaWqqorz589vyXn5tBl2ADgjSdId\\n\",\n       \"IUQM0AQ0sqLB/ltJkv77annV77LBan0bQS4cX15eTl1dHbW1tSwtLdHS0sKlS5e2rTtKTEwMhw8f\\n\",\n       \"fqATSmtrK7/5zW9wOp2EQqFHUucfXgKvvV+WtOl0usc+LxAIbG0dgtWQiLykW1xcjIiUby3y55bH\\n\",\n       \"QR4Lg8HwQG/Buro6fud3foe0tDRsNtsDr+H3+xkeHqavr2/LmxksLi7S09NDT08P9fX12O12UlJS\\n\",\n       \"HqgaCSs641u3bik1b77//e9z9uxZrFYr8FnsdrtDImazWdkkT09Px+fz4XQ6GR0djVgbuPUQFRVF\\n\",\n       \"UVERKSkpeDweWltb6ezsZGxs7OlPDgOysODzNOtyGG1oaIg7d+5w9OhRJVnJZrMxNjb2zDkin+uw\\n\",\n       \"JUkaB8ZXby8KITqADLagvOpGMBgMpKenc+7cOSorK5WrW1tbW8TLlsoVBoPB4CNXz+XlZXw+n5IF\\n\",\n       \"JW+ayT92efZTU1OjFJKHlS/e6XRy6dIlZmZmtmxGIdcRTktLUxJPIvUDkC9W8j85McFoNGK1Wjl7\\n\",\n       \"9iwZGRlKQ9aMjAylWuPDddFnZ2f58Y9/TENDQ1htnpubw+fzMTIy8kh7sEAgoIRhgsGg0sJMxmKx\\n\",\n       \"KF1qtnOfIzExUZnpCyHweDy0tbXtaGcNnynBbDYbMzMzShG13cDIyAjXr1+npqaGgwcPkpGRwdGj\\n\",\n       \"R3G73c9c92TdMexVLXYVcIMtKK+6gfdlz549nDhxQskam5qa4pNPPqG9vT3iErri4mLOnTv32Nia\\n\",\n       \"z+fD4XCwuLhIMBjEaDSSkpKizBD1ej0JCQns37//gXinJElKEfS7d+9uiVONjo4mJSWFzMxMLBYL\\n\",\n       \"ExMTjziV7UTOBpO7r8TFxXHmzBlmZmaIj4/nxIkTpKWlKSoS2bE/XH51dnaWzs7ODRXB3yw+n29d\\n\",\n       \"KxI5TrzWMcsz7O1GLj+r1+sJBoPMzc1x8+bNsFS32yoSExPZs2cP2dnZREdHMz4+Tmtr646/yMi4\\n\",\n       \"XC4GBga4evUqKSkpVFVVcfz4cRwOh9KlarPnwroc9mo45OfAv1vt4ajct9nyqut8XzQaDbW1tfzR\\n\",\n       \"H/0ReXl5uN1u+vv7+c1vfvPY2gPbwVqHcerUKU6dOvXYxy0sLHD9+nUcDgderxeLxcK+ffsoKyt7\\n\",\n       \"6uvKnZ+Xlpa2xGHLPfxsNtuGlBnhor+/n8bGRvLz8zGbzVitVn7v937vEYf8cAhBPtHl4z09PRuq\\n\",\n       \"77EdPLx6gJUfcbiaA3wesgxSlj3KNVj6+vq23Zb1kpeXx5EjR0hNTUWn0ynZjXNz25bq8UxIksTC\\n\",\n       \"wgKXL1+mrKyM559/nlOnTtHV1cUnn3zy1LILn8d6ij/pWXHW/yxJklyVL+zlVWFFjiTXP5BrINy9\\n\",\n       \"e5d33nmH/v7+bf+RejweOjo6SExMVLSYn4e8O19QUEAoFEKv1xMfHw981vJKdjyjo6MMDQ0pTYcX\\n\",\n       \"FhZoamovsU04AAAJfUlEQVRicnJrhjY9PZ29e/diNpuZn5+nv7+frq6usDQsXQ9yGv9LL71EfHz8\\n\",\n       
\"ui4ioVAIt9utdNm+ePEibW1t9PX1hT3rbSPI3+va7zdSzStSUlKUlntut1uZBOxk+enapKq1Dax1\\n\",\n       \"Oh0Gg4GoqCi8Xm9EmkWvFzmt/ZNPPsFut3Ps2DHKy8s5e/Ysv/jFLza9Af00lYgAfgS0S5L0P9fc\\n\",\n       \"FfbyqnK9kMOHD1NWVobRaGRgYIDGxkauXr26bVrMtbhcLi5dugSsCPof3ig0Go1ER0djtVrR6XRK\\n\",\n       \"k4D5+XmlVsfY2Bj3799XwhHyEtvhcDAwMMD4+Dh+v1+JlT6rQ5VXKRkZGZSWlmIymZiYmKCrq4vR\\n\",\n       \"0dGIhkQ6Ojq4fv06brdbqS3x8GbeWnw+H93d3XR0dHDnzh3Onz+vdKHZSQ5IHnP5/IjEZqNsh6y7\\n\",\n       \"NxgMzM3Nsbi4qPRI3alYrVbF5oWFBZxOJ4mJidhsNpKTk5Ekid7e3h29SpDzM1paWkhNTaW0tJSs\\n\",\n       \"rCyOHj1KY2Mjc3Nzm7rgPG2GfQT4FnBPCPHp6rG/ZBvKq5rNZgoKCvje977H/v37mZ+f57333lPq\\n\",\n       \"JEciHjgzM8NPfvIT+vr6eP7555U6GDJyd4rnnntOyXADGBwc5Pbt28qGk8vl4ubNmwwODir68YdV\\n\",\n       \"BPLs7Fk/p1z83W63K1mBDoeDe/fusbS0FLGZn9vtpq+vj5/+9Kfs37+f5557jjNnzjxQ9exhFhYW\\n\",\n       \"+Oijj3jvvfe4deuWUlN6J7Veg8/GXO6gEwgEtt1ByuVrY2NjSUxMRKfTKR2agsHgji1M9jCSJBEV\\n\",\n       \"FUVVVRV79+6lsrISh8PBm2++uaMdtszQ0BA3btzglVdeobKyksrKSjIzM3E4HGFx2EPAx0AyKxX6\\n\",\n       \"/rckSR9sR3nVnJwcnn/+eaV32/j4OL29vYyPj0dsdiBrLdva2pienn4gRgkrO9txcXGPNOucmZlh\\n\",\n       \"ampK6TIi64ZdLlfYdbBrW24tLCzQ0dHBtWvX+OSTTyIa95XrX3d3dzM7O0t3dze9vb2Ul5dTVFRE\\n\",\n       \"dnY2c3Nz9Pf3097ejtvtxuVycePGjW1t/7YZzGYz+/fvJy0tjcnJSW7evBmRpCpJkpibm2N0dJS0\\n\",\n       \"tDSWlpa2VHW0HWRnZysJK4uLiwwODvLxxx/vGsWI1+tlYGCAH/7wh3zrW9+iurqaY8eOMTMzs6lw\\n\",\n       \"53p02H+6VocthPiQMJZXlTuRyFlOSUlJSjnIqampbal1/TQ+rxflTiQUCjE4OMiVK1eUrjzd3d0R\\n\",\n       \"DyMEg0Gmp6eZnp5maGiI0dFRenp62LdvH4WFhUxPT9PR0UFzczOLi4sEAgGmpqZ2/OaTrHdfWFjg\\n\",\n       \"3r17vP/++3R1dW2rDfLKY3h4mObmZmw2G+Pj40xOTu54hz09PU1fXx9msxmtVqusVuSs4oaGhl2j\\n\",\n       \"GJHP8YsXL1JWVkZVVRUHDx6kra2N5uZmfD7fhlY7m9VhQ5jKqxoMBnJzczl06BBHjhxRMgpVNofc\\n\",\n       \"HaO+vp6GhgZFG75VTWq3CjldemBggF//+tdK9b1AIPBAAlEoFNrxy3k5kaepqYmPP/6Yu3fvRuQi\\n\",\n       \"s7y8TGtrK8FgEK1Wq9i1k1cnAHfu3OGNN97A7/fj9XppbW3l8uXLjIyMMD8/z9LS0o6/6KxF7lE6\\n\",\n       
\"ODjI8PAw+fn5SlLQ6Ojohr6Pzeiwr7MS2w5LeVW5SajdbicuLg6tVqvsbns8nl31Re0kPB7Pjq7R\\n\",\n       \"Lcd6Iz3r3wpmZ2d54403WFxc5P79+8zNzUXsc7lcLrq7u3nrrbdYXl5mdnZ2R6srYKUcQENDA9PT\\n\",\n       \"08rKemhoSHHUO/2C/TiWl5e5desWFouF73znO5hMJmw224bFExvRYb/Nig57UQgRtvKqOp2O5ORk\\n\",\n       \"pTGr3Gm6qamJqampHT87UFFZqyaKNPJ+SaTakm0Gl8uFy+XaFZuKG6Grqwufz0deXh7Dw8ObWuFu\\n\",\n       \"RIf9f2Ud9naVV5XbV73zzju8/vrrjI6O7uhZooqKisqTCAQCDA0N8Xd/93cEAoEHZL3rRXyel1/V\\n\",\n       \"Yf8EmJEk6U/XHFfKqwoh/hQ4KEnSNx967qYCpGazmaKiIkpKSrDb7czPz/Ppp59y9+5dpXefioqK\\n\",\n       \"yr9mJEl67B7h0xz2C8AV4B4ryhCA/wT8NlC5emwA+KM1tUXk56qeVUVFRWUTbMphPwuqw1ZRUVHZ\\n\",\n       \"HNvusFVUVFRUthbN0x+ioqKiorITUB22ioqKyi4hbA5bCPGSEKJTCNGz2kZMZRUhxKAQ4p4Q4lMh\\n\",\n       \"xM3VY1YhxIdCiG4hxAUhRHyk7YwEQogfCyEmhBAta449cWyEEH+5eo51CiFOR8bqyPCEsfobIcTI\\n\",\n       \"6rn1qRDi7Jr7vpBjJYTIFEJcEkK0CSFahRD/dvX47juvHq7duxX/AC3QC+QAeuAOUBKO99qN/1hR\\n\",\n       \"1lgfOvbfgf+wevsvgL+PtJ0RGptaVjJqW542NkDp6rmlXz3XegFNpD9DhMfqr4F//5jHfmHHCkgF\\n\",\n       \"KldvxwBdQMluPK/CNcOuAXolSRqUJCkAvAF8OUzvtVt5eBf4S6xo3ln9/yvba87OQJKkq4DzocNP\\n\",\n       \"GpsvAz+TJCkgSdIgKz+smu2wcyfwhLGCx9f5+cKOlSRJ45Ik3Vm9vQis7U27q86rcDnsDGB4zd8j\\n\",\n       \"fFY0SmVFv35RCHFbCPEHq8e2rU/mLuRJY5POyrklo55nK/yJEOKuEOJHa5b56lix7t60O3aswuWw\\n\",\n       \"Va3g53NEkqQq4Czwb4QQtWvvlFbWZeoYPoZ1jM0Xfdx+AOSyktg2xkqdnyfxhRqrh3vTrr1vt5xX\\n\",\n       \"4XLYDiBzzd+ZPHjF+kIjrab1S5I0BbzDynJrQgiRCiup/4SpT+Yu5Ulj8/B5Zl899oVFkqRJaRXg\\n\",\n       \"h3y2lP9Cj9Xn9aZdvX9XnFfhcti3gQIhRI4QwgB8nZU+kF94hBBmIUTs6u1o4DTQwmd9MiFMfTJ3\\n\",\n       \"MU8am3eBbwghDEKIXKAAuBkB+3YMq45H5hwr5xZ8gcdqHb1pYZecV+uuh70RJEkKCiH+GDjPimLk\\n\",\n       \"R5IkbX+PpJ1JCvDOamsxHfD/JEm6IIS4TZj7ZO4GhBA/A44BNiHEMPBXPKGHqCRJ7UKIN4F2IAh8\\n\",\n       \"f3Vm+YXgMWP110CdEOKBOj/whR+rDfWm3cljpaamq6ioqOwS1ExHFRUVlV2C6rBVVFRUdgmqw1ZR\\n\",\n       \"UVHZJagOW0VFRWWXoDpsFRUVlV2C6rBVVFRUdgmqw1ZRUVHZJagOW0VFRWWX8P8BCxPUWfGXxrcA\\n\",\n       \"AAAASUVORK5CYII=\\n\"\n      ],\n      
\"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7939901490>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\\n\",\n    \"print solver.test_nets[0].blobs['label'].data[:8]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Both train and test nets seem to be loading data, and to have correct labels.\\n\",\n    \"\\n\",\n    \"Let's take one step of (minibatch) SGD and see what happens.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 13,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"solver.step(1)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \\\\times 5$ grid of $5 \\\\times 5$ filters.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 14,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.image.AxesImage at 0x7f79383819d0>\"\n      ]\n     },\n     \"execution_count\": 14,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAATQAAAD7CAYAAADkSGhKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJztvV+obt113jfWOfvYcmSLEtvfJ/FZqnSR4siWsS8sG9Ii\\n\",\n       \"XZSgEEiam8QCU18kJZg2LaUXcS6cpO1Fm4KMIYFQ6j84dew0UOy6hqRxikuNLxwLkkpuJepgCUup\\n\",\n       \"8snQmqb6952zz+rFd8b5nv3s5xljzPfde7/7HL8DFnOuudaaa84xx/yNMdda797bvu9xlrOc5Swv\\n\",\n       \"gzw4dQPOcpaznOWm5Ay0s5zlLC+NnIF2lrOc5aWRM9DOcpazvDRyBtpZznKWl0bOQDvLWc7y0sjF\\n\",\n       
\"bVW8bdv5e5CznOUstyL7vm+q/GCgbdv2kYj48Yh4GBE/se/73+BzfviHf/jadR//+Mfj+7//++Ph\\n\",\n       \"w4fx4MGD5xvvP3jwIPZ9j6dPnz7fLi8vy/19369trnzbtufbs/7IsocPH8Y73vGO+KZv+qZ4xzve\\n\",\n       \"cS2P28c+9rH40R/90Xjy5Ek8efIkLi8vn+fV9vjx43jjjTfijTfesPncf/z4saxX3ePp06dyzNw3\\n\",\n       \"h5eXl891qdLMf+pTn4pv//Zvj4cPHz7fcuxUnnWJqSqrUszv+y77rvTy9OnTeNvb3hbf8A3fcCXN\\n\",\n       \"Dfd/+Zd/OX7gB37gSv9U37iP2Ncun7ZdlUVEaTPYv8ePH8dXv/rV0fbrv/7r8b3f+73ynpzPfaV/\\n\",\n       \"tY/zLeccpi4/SVl+9md/VtpxxIFLzm3bHkbE34qIj0TE+yPio9u2/dFD6jrLWc5ylpuSQ5+hfTAi\\n\",\n       \"/vm+75/d9/1xRPy9iPjTN9es0wtGEqe4/ixvyn3R4221477072WRQ4H2WkR8DvY//6ysv/C10Wkn\\n\",\n       \"l0N+EvbhD3/4qOvvu3zzN3/znd/zLvX4/ve//87bcZf9e9e73nVn9zqVHPoMbTQKv/mbv/k8/9pr\\n\",\n       \"r8Vrr70W3/Zt32bX46rMHVPb88aBkeT6Xh3LfOclndHxc7kPfehDcXl5+fx89exudcN6qj5gf524\\n\",\n       \"Y/m8EuvjNCLiW7/1W+WzTrdVz9CqZ2qT49wv3B48eBBPnz59/jxKPetS+9/5nd8p78E65vF5+vSp\\n\",\n       \"fa7EeXyGi2m2N4/nczLe8PkxP0PG58VKRxlMTOwd5wbPE3WMr7sJ28/t9ddfj9dff71sc8qhQPsX\\n\",\n       \"EfFu2H93vBmlXZHv+77vu3ahMyYFNr5m8iAfjYSFJ2llvG7QHWxue8N7YTtZt1Xbqz7nZEoIOLBV\\n\",\n       \"LwEQdg8fPjzohUDXVjxWbQm16kE8nqvsUekJ9YEgUxDDLYGVKeY5jYhrLzgc3BBq2Sa2G6VTp3c1\\n\",\n       \"f6p9zN+W7b/yyivxyiuvPL//Jz/5SWsXhwLt4xHxR7Zte29E/F8R8eci4qN8Ug4OSgWzLmKbXMOe\\n\",\n       \"byI8WJPzOqix53RvWw8F20S4L25f1esmA4KLAcZl0+isixi6PlYgw3Z3byUrO0S9KFgwyLBtESHB\\n\",\n       \"5aAWEVdApsCm3vJPYaYk26/yeW0HtdsC24ocBLR9359s2/YfRMT/FG9+tvGT+75/is9TCpzArIvC\\n\",\n       \"VDmf76CmlDSBmIuSjtk6yLn7c9u4LyoSqvIclSnJ8xlcKkKbfLahUjVWbr+yE1y6Zf8QampZrJbJ\\n\",\n       \"fG9nA5O+uWgMoYqfKyXIXHTmlqLOjjpQc18UyKpjkzlxqFNfkYO/Q9v3/R9ExD+ozlGKW424Vq5n\\n\",\n       \"L8mTf1U5lRwCtUnE1tXJ969kNTJygESZRmjVktOl3D+eMOq8vN5tCl6TY84OcSzZzpzDyP1uqYlL\\n\",\n       \"TvzWDCM1BTG35HRtqco7qPE+j9HUllfgtiK39kuBiHWguTpWtk4BEyVVHozTQ8B2qJdaGWQXHVVQ\\n\",\n       \"q+plCLhnZ8e8FMA+ooNSoKsAxrrrXlrgtZU9Th2Ls3sXlXEacX3JqZabCmwVDCbzDfuEYJtAberU\\n\",\n       
\"V8G2IrcKNPcMzcFsUrayHSqsRGXANwWqm/JUCh6qvAKdGi8Wt7xU0Zq6N98XU+6vgpnqM8MM9zGS\\n\",\n       \"cstMBTjWI7aBHdtUJtFZPkNzMOuWnO7r+tU5oeBVBQydAz7WmU/lXkVoCkbHgEwNovNcnVSR0k0N\\n\",\n       \"nKq3q1/pFyd15xyUdBHQBGZuyelg5vrpxqKzkYTb5HkZX+fsBvWrjnGer5vALO/LwJq8FJjYSGXr\\n\",\n       \"DDHVb+w/5vleq0582v5O7hxo7vW4846ovCnM3KBVE6S6rqrjNmBWwW3aJ5aJA6jqZBhUMJsuOTHv\\n\",\n       \"DF+1qwI0w4yhVi01uyWn0o1yPiqN8BEatyEiSpgpqCEUqiiNdY/9YOeCY4N5B7W87i7s3slJIrTq\\n\",\n       \"IWyew/kVsLl7R1z3JDyIU6jd9DZ94znxWl2ExbrCpWZGElxXljmAVUvODmYpqANeguHY8Pg5mOW5\\n\",\n       \"k6WmisxclMZ2MMlzhMYQc0vOlaVm9emG0nflzCpoueM3bfMTW2e5N8/QnPfgc9I4pjBMqSKcCdQq\\n\",\n       \"uEwGTj0MPeQ1dtefSkfOkbh6EWRYxxRqHcgwj5O+Woopx8b7+CyN295Fa6rNbAecThxR1uUghsci\\n\",\n       \"Qj7878DWwaBz9txPBS0HukOAtmrzEznZklN5R+XV1eSsOjmBDr4ez/O4DlVfVSdCyr1Od57Vedru\\n\",\n       \"tXw1eRhMrgzLGYY5XnjO6k+f8F6V00mY8RhPdK9sQPVT1eOuw7JqYq9MSuwj2zpHiNUbTXeMwTbR\\n\",\n       \"heo39o9tAh2F0+kK5LFtqKeUyYsqlDuP0BzEVJqD7wxtAhdXhlCrAOfqruplWKnf4HXe9ljIsa5Y\\n\",\n       \"GB58HRtVGnIHsO6NoUsnTmqiD35+NHFubCsuCmG9ZL3TTw9cRKkcNtpAl/IvCRhqeD+lG7fPbXKB\\n\",\n       \"htNvJS7YWa2H5eRA66DGiovQwHFG5cCGIb+qM+L65wTd2xmGmvsx8QrUHORcm7gvEdeBjcITHoWN\\n\",\n       \"rgOaghrXUy3pFCy68T1mYz3i8rrSgWpf5/RU/ytdKPup7Mp9aOvu38EM552DMJ7H+lD7TpSO7yXQ\\n\",\n       \"Hj58eK1MhdkKbmqQWSbwcp5YPadhuE0nRBWhsUedAmy6LO08Wu5jRFYt4RHmaNRZPl1qpi4dHCpo\\n\",\n       \"VGMw1cGhY9dFZBXQlON0k7KqNyLseHd21cHM2YdKlZ0o+2F9cB7r7XSQ17mVWScnfSlQPUeoPFlE\\n\",\n       \"/axsYux5XZdOJoQD5hRqFbC6pVXVv07QuypR0Fl5fuackjNkF4lXEFIgUfZRjSfqlPvMunRAq9qT\\n\",\n       \"kZ/Sv5NVW3BQw3ZX0ZnSUQUv1ZdqjrHgmKtjnX6cnPwtp3rTlGUcVSiZeGIFAhWduQkwuY+D2+pS\\n\",\n       \"c/J6nvu2CjQ0VjaqLl8tMd1LAR5/JdPxZafhIOLGbnJ91SYFNFePahNeiymXdY6ycppVhOb0qjYX\\n\",\n       \"0TvQdVBzzgFtsILmRE4OtApsx4CsA0+2D2GGiud1/ARgDqCHwqyKzNzErICmQOaiIxcld8/NcMO6\\n\",\n       \"DpFO5xXIOkfk9IhtdnrEiagA1rWN24X7XK9bCUzsz7Vf9U/pppqr3XipPlVtwfQYuTdAm3qBiPot\\n\",\n       
\"5MpSjWHm4OYmg8pPlwfuj/dV0d10srCOWNAjYh7HyI1Xt8ScLlGqNrkxVhP4EIBUY5d9rvSH51Tg\\n\",\n       \"6YCb96vOm0R/XR7H0elW6dLBDI/x1wErtjhZGazKvQYaDgamKQpkzniUIVUwqwbdTQYFtekLgOrF\\n\",\n       \"AL6Kny5vWBIYnOI4OZCpt8/T52h4/0rUGLuJNoFFBzPnhCYw4/ZNoqlDgFSVTTfWsbMN1e60DQUz\\n\",\n       \"9cnTpA1uvLt0KicFGu9XXr6C2XRz0FMww4lfgayadB3YJsvNyacaaqsmpoIbluNYqU8xpkDjccf7\\n\",\n       \"O7tQbVXwmU74ia0w0KYyhZkrOxTSDhjYX6VnFSxUulBjo8CGkeYEZlgvt0vtr8jJgIb5CeQq6YxU\\n\",\n       \"AYHbeQgg3X0YRu67IRedqfwq0JyeeKmZ56IOeBxWnpspoKmJ1Rm7G9cVHUwmrapnKiuwUvesbEe1\\n\",\n       \"Hfc5X0nlLFjXDmjdJ06uPZ1upwHNVE4CtEw7Mk865wxXyarBqms7kDlgVf9BvXqexs/QViYww8vl\\n\",\n       \"nV4qXXUGx8BU9anJUAGrglA3caZOaSrcTgeqzlbcMbwH6sqlHFljXp3jYI/92rbrP6jHeniedrDF\\n\",\n       \"VYHa3LPYFbkXEZrLV52vrnMKcddNRU0AhNXjx4+v5R8/fnwl78o4z5ubxNguZ0ScsoGzEU/Amefn\\n\",\n       \"OCMM9n23ERpPxMyzM6gi7Omm7tk5JdVmtgFu9yEwmwJtokMe08qBrW5qecm2xPNctZXnmoOXKl+R\\n\",\n       \"kwOtK5sAjPPunghKd6zzEGikKgJjWHEetw5iHLXl/SdA67wp9mcFYggDBJmLEKYRxuqnK67fPFaq\\n\",\n       \"Hdl+TlPHXR0pFbg6mE2A5trBZQyYDnArDgGXlwkzF3hMZAIw9YH9VO78p08Ra284qk8C+PzMT6O6\\n\",\n       \"CcBSlFdXMFsBWwcyhhpPSsyryKcyPjZ+7J8DXNbPz1IwTY/NIFAg4zpX/yprpwu8Px9nmFURWgVj\\n\",\n       \"F0l2MOugtiKd02JdKJgrvaQgyDhSU/PGzUtMeV7z29R7CTT3Ad6kw5nvILYCJXWdO0fVxYPuoKbA\\n\",\n       \"5iK0CmZYNxpZN6G5D2ofverKUjPFPVvh8ZnCRy05Fcxw8qk6caxQXH+qpV6XHgqxDmoqGkPbVGXs\\n\",\n       \"oFz0VI2tchhKXKSm5lQXSDiQ8c/npnKylwKcV8eUF1gB1+Saab0TmOFWgUwBjSMyPqYmMrYLy1CX\\n\",\n       \"Ez1MQKaMXD1byUlUfaCs7slAq56jcSRVgafrZ7WUneQPhdgUaG6+KMfPeew/wm51rBGQHaCcjU1A\\n\",\n       \"9sIDrdvvvkCfKFRtaoAq0KUwzBhqHbTcMYzIHNxWo4eq/+ohLkZpLu+8tnI8DlwOaqjLDgBVRKba\\n\",\n       \"WIFI7XPbVD+y35Nt3/eyXw5oaOddqkDmwLYCNAVFBU+EEo8Jn18BTB1bkZM8Q2NxjZ7Qf7XDXG91\\n\",\n       \"P5YqonDLxy5CU8tLlTqgubIKYgig1W/w8j5Tx3EI0FSk5q5nHTiduHYo+FQg665fjcbUMYSIA0jn\\n\",\n       \"oFEPvL+yMcxc2/b9rbedCSN1bgcwBbkVOckztIlsm/5zxXnM7U+2vEZdy23AAcu0W3JOlpsINIYZ\\n\",\n       
\"56dAw7x6sKrgtmrgPEGyjmpCMQAqMEzecCJwuB2VMKRwCauAdtMwWwXa1M67OcA6ONSB5b6bQykM\\n\",\n       \"MxT+MHuSrsi9BVrEfACrc13koOpwZSnKkNWS00Vq6sVB9fMnfsPZQYzbqhyB+65oxcDxHipKyHtV\\n\",\n       \"IFBl6mNihkCe2wmO8QqUOvDy8ckS+SaAxtFNRBy0LFsZE3RgrM/Mp+DvgdGmUiYR2ksHNJ4wHJ11\\n\",\n       \"UJvKFGQcoVUwu7i4KCM0VV7BTC27Kl3xfi4l0Sgy5Z+tYN86AOD92BM7oPFkUeCYTHo8/1CpIiv1\\n\",\n       \"a4yu7SvAWgFa9X0W6r77rAkByGNd9Q/tTcEsBecH2oAaoynMXjigTQ3SAcdBbBqhYd0roFQRxZMn\\n\",\n       \"T+Lhw4fx+PHjePDgwfiTDQc0XnbhBKiEveHTp/pbsWw7H0sjzHxXxjpU6aFAm0RNE5vhMZtEam5y\\n\",\n       \"V0A7BmLqOEZGDDJ+xpRjUn0ixY4IHRnbztTGuO4KPAgqF4m9EEDrFKWOV8pXkcKKsPeYRGp4DEHG\\n\",\n       \"S8MciDfeeOP5luDCMizPt5fdUgs/ql0d4Gw3wom/H6vq5TFQ11T1qKVMBQ4Hs2MnXQWxClxV2TEQ\\n\",\n       \"c33k/nIUrKRy7rzPkVtCMu+hytR9+J4PHrz5v1gfPnwYFxcXz/NuYzC/EC8FlLDBVfsMs+o61/EJ\\n\",\n       \"tKoyvB9HZwyzbduuRGAMMC5Ho1cTgJ8dOY/YQcbpNSLKyI/1Xt1blVUgcWVVVHaMQ+tA5e5bpTe1\\n\",\n       \"VUBjZ1JBJveVXWPewS3vg/kuCs88Ao03BTiGVpVfkZNEaKtQc2DrDLsbbCyrBh7vx8BJoOV5CC21\\n\",\n       \"zMTy7lkRl7n2d0bOusrobMVgOGpQ9+K8m6hTsB0LMWz3JEpzQLsNqKk2OKChg5hGasqe+VgHNleH\\n\",\n       \"KptGaBcXF+XPnl5IoKnjCmrKU03qjfCDy2VdpKaijYSZUvp0uclA6yYRGpjqC/eZdYX9wGPumipq\\n\",\n       \"cPdXxzqgOMCtRmbumAKr07OK2iqorQLM7bs+c9tXYMb5FYBheefo8fxJZMYR2mRbkXu15FSRmYOa\\n\",\n       \"qy+iHtyp1+Jr814YnfH5+76Pl5u55JxM6FwWKmNkQ3dGoPSeE8bJFGiqTIHE1TnZ8FrVn6q/Dp4d\\n\",\n       \"2Lr8KtwmTivHuoKYGnPOVxFVBTY1rhVo8Pru2Vker76RPAZmEfdgyenyuK8MuTPoiNmzpUmkhpOT\\n\",\n       \"l5w8cVcjtJUJrX6yhFByBlDBbNu255MIj+/71WcpHCFNoMbXKqipcyqQqf64sq7uCkYduDqQqeMM\\n\",\n       \"NJVOdOLG2QFMlU3h5o6rsml0lr8gcgHGCwc0PlblV71zNdG6iEyV4zGcBJeXl7J8JUJzRox57isa\\n\",\n       \"3ARk3H7Oqz4yyPiH5pWusbzrm+rnJM997Mo6eHUAmoBsAkcHMF5qKr04UcDidBINuZ/GOYjxfvdW\\n\",\n       \"EwGnvo3s0qmcdMmp8gpoE/ixrCqqC3XR+Bzknj59KkHmIrQ05E4XCCb+pgwjLewHtq0SvJbhhXmM\\n\",\n       \"BJ2uXf3cFuWcDk3VvVzfqm0FdIfATcGLQcb2sAo3FmXzqROGU9q1is6qH5FnfvWzDdfGKj+Re7Pk\\n\",\n       
\"rKDVgc3JKsRcpMZtwSVa7j948CAuLy9biOExXupVesJ28A/KJ4OuJnulJwZZBTQUFREeAqmqvavH\\n\",\n       \"ViG2GslNojkFsC5Cw/Z3usZ9BQQXpamfwvELgcnPk6bRWUZoVfu7skpOFqF1EQmfg+d2E8sNKKcq\\n\",\n       \"GnOQY5ghyHBg3ScbCnAc7bk+cLsd5J10+uW6+ct0TA+RQwBW1ePKujxDBH85cejG17t9By+1cVud\\n\",\n       \"zlIqgKk8RmlZpn5VwhEY5tX+ymcbtyV3HqFNjFoN6qEgw3w1yG5TbU+obdv1P0LplpvqmzQE2tS7\\n\",\n       \"5jXK2PE6F8WoiYJeWkEN00OkGmPX/kPrVfWpiEj9DOymwDaNzCqgOV05qWy/cuoRIWGG0OOfLSmo\\n\",\n       \"HRKhVeN5qNwq0L70pS+151SdcH99Ao0l4uqEZ2VxaJ3XKQ/jvJBqM0MlRf23Jn6OoiKjLnXLYqXH\\n\",\n       \"DhAMPAdz9Tarin65zEG1a+9UpqDMCTVJO3Dx5zpoV1V+FWg58RMWmHK+WgqqsePnrxiBo506mLly\\n\",\n       \"hhbm1acaK9H1VG4VaF/+8pfbc6olVxoQp+rP6TDM2ODQa0bEtYGp8lXEyNGO+rNADLVKD86jdtFj\\n\",\n       \"3r/Ts4q2qgi1i16rMteubt9JFXVWZROYVUBjiPFxBTIVpWW7uk0BQwEsyxlgFdRwzNl2E2y5r5x7\\n\",\n       \"BTcGL8MM24NAdWN3iKO78whtJSrpHsRineh9WGHoIRXQOOUyZ5S8pHj69Kn9F3RuWaHAovIuWmRx\\n\",\n       \"dasxYH1PwaUmiosIJvByhtudW00ABloFMQc0BJkr5+Ua2iqXdY4R210t8xRUHNQqZ+PAgSuIauXS\\n\",\n       \"tYshzHbsxpf1tCJ3HqFNPDt7dw6F1dJNhfWoOL5GeQ0HOfx7ZPn9GUM2z1H/3ISXnE4nuK90445N\\n\",\n       \"REEM61P6V/eqwM9lLFNwTfenaQUwzjO0JoBjqKmILe0z2+VgxkBz4GB9Tx2Nswm3X61epu1T8HXj\\n\",\n       \"6XQzlZNEaM6bV5O1C0dxEk6WIhPPxlGRgmRCzP3pbAc11f5phOTghP2roKXKXF6VrUQOE4N0kVWX\\n\",\n       \"n6buzVsFNAcz3HfgctGZm7hcFuH/VLVzuh3I3LiijXBZBbMu78pyq5zWvQWaitCqyIiVkVJ5EjVQ\\n\",\n       \"Ef36WylZ7WddvGTNPINM/eOTLkLr2lkZJQoC1+lnauQuVUsK97BatTHb2ZWvQqtKO5jxkhPhxVEZ\\n\",\n       \"76edVFDj1UO2qwOaA1cHs25jW6jsZAKqSXCA+2hPVfSN820qJwXaJJydDE6E9jDVfmcQqHylXFxy\\n\",\n       \"JsCq/xHgorNDDHIVangf1WenP3Us6+A3WWp/VVYgtXJs+gV7Ao3hxSBzz9AwGlPR2QRoPF4VGKbA\\n\",\n       \"Wt0qeFYwW7Hdauxwvt1ZhLZt22cj4v+NiMuIeLzv+wf5HPcMbbpkcWtyzEf4/3CUaZXvjCTirf+/\\n\",\n       \"iGWpdP4nKfw2Vr3pQl0o/UyN1QlDrTJUrovr5WMcoVVRT9VG1+5MO1CtQK77GQ63OSFWgY3HNdMu\\n\",\n       \"Opv251AY4bVcj7KFLgJ056nU3Zvz2E+1r1ZEUzkmQtsj4sP7vv/f7gT1DK1aorhjqmzb/D8Mvskt\\n\",\n       
\"4q3v4ThqS6PG/yGALwh4q54LVLB1Hu7KYJB3R91wnWyo3A4lWA8DzX0Kwc6lk27CHwq3FaDheE3A\\n\",\n       \"tu+7hZh7wznpQwWqCcB4zNT4sYOqgodJymOt8mre8vgjzO56yVlaqltyqg/vXBl7/uc3JmV0zxg4\\n\",\n       \"j3VUhhMRzyMwhFkqnCM09akJ568psfGsh8CNjYlhhgbr2qOE68Ex4m217mpZdgjcViK07Au/BOCU\\n\",\n       \"wYaRmYrWpkDjdAosBa8uzU1921Z9alHBrHOE03Jebt7ZkjPejND+8bZtlxHxX+/7/t/wCSpCmxgV\\n\",\n       \"T5KnT5/GxcXFFe+VxpL7rOTJUrWCWObRw3OEhlBLoPFgqH2UyuAmEKsAgefwhjp6PqAD41HLTQbZ\\n\",\n       \"o0ePngNtxaAZYBXcppFcRFj7qoCWwOIUAacisgps2KYKaEpHU2BNrlMQU4FFBzAX5au+VI4cz1FR\\n\",\n       \"2oocA7Q/tu/7F7Zt+9aI+JVt2z697/uv4Qmvv/768/zb3/72+MZv/MayQjdxXcd5Hw0jFZFGh/Vw\\n\",\n       \"yOsgURmaiz44Oso2cPv4+d/EG1bQ43LlbZ0XRj06yWNVvS76m0ZqzohduzrgscOrdJfH8JGBeh7K\\n\",\n       \"/4kLnZXLO9BinlNul7KrCmaurAJhJXl/1fYEPp+/ImosU4e/8zu/E5/5zGdG9RwMtH3fv/As/b1t\\n\",\n       \"234hIj4YEVeA9s53vvPadZMIhCe0A4iieQIMoZRAQcNw9+BjeC8lCj7VRMMy1YYuP9VhB7Ep0LhM\\n\",\n       \"1ePg6yaN2uflVtUGB68KalX/8FGGAlj18zsFLhWh8/0rqKVOKsdZ5StoVePCbWGA4YsxnGcTOFbz\\n\",\n       \"V5Xh9r73vS/e9773PT/nV3/1V+19DgLatm1/KCIe7vv+r7Zte3tE/PGI+E8nnViZiJU3VQqICKlk\\n\",\n       \"hAeDJPfVM7ZqSVD1QXljhpmqg/vsdFBB2AGt2lCXSr8oVTQ2GbOVfWfsPO7TJf6kf+qfPTvIVdGY\\n\",\n       \"+1xj2g43rk46mDmo8TFsSxVlOZi5+6nIE+/ntrtacr4aEb/wrHEXEfF3933/R3zSBGhqYkwGkQ0E\\n\",\n       \"FazglRt+DIkwy7bxc5FKcPC4zs4bp6xCq4soV4HmjFm1NaMZfOayMn4VvHBs1L2xDQoc1fdfXX9Q\\n\",\n       \"HMjUfgWvalJWdsU26xyO0muVKtBU4qDL46r2+Z6qzaruLsKeyEFA2/f9MxHx3d15FdDcBJhMEmiH\\n\",\n       \"7Xin/LwGl4j4XAshWSnWTeDOaN21K9Cq9qsoircJfDNfLWPduCp7UB5cOSIVLVRQU9+Huf4wzPIa\\n\",\n       \"92E0l02iMo4qJ4Jjmc6V7SZ1xPqsIOZsinVQzSccG3UfngPcBlVvNbZ3ArSpVEBzIKsmtRI2FqVE\\n\",\n       \"pXRUGHpBFFde9QEn/kQ/q5vSkSurQIZlHcSwzEVnDq7Y10meQYZSAUR999cBjet++PChhZfaqqhi\\n\",\n       \"MjGdTaO94vgwxJwuXZm7H+pAAY6PV3Osm7MINgUyzL8wQFNLI7XfQU15QJwMbuJgVMYKY9jhvuqL\\n\",\n       \"Ao3qu9s/FGiTdAI39pwV0HLSrzxHU/1VZajviRPrQIYb90flM1qbgOwQoCmpJjw7RLbZzkEoJ455\\n\",\n       
\"NZewTIEM83gc26ycmesr30s5rLv+sLYUN2Au2qgiNJap51XXPn36NB4+fFh6TQZbdV4VkUy8pjM4\\n\",\n       \"9tQVsBzEurwDmgObWm5W41f1nY/lpHWT1XlxBhj/UkP1A5eb2FcFLvcb3SnIOghVOppGK3y9A1jV\\n\",\n       \"Bm5rBTV3/xx7XCpnXdU8XnUGTu4caO65mQJb95BZGc5Ku9jLbNv1v9nfwbKC0mq+259EXVPYTYHG\\n\",\n       \"abXkVOOJ+u4mlZt4KMqLI9QQZvh37NT4ZVnCLeupIIbHOoBxeQV2LktbzP2JjfP8cDBzumW9TMpc\\n\",\n       \"fxBq2PYKopi/l0BTz5GU0TuwqcnB4jpe7btBzy0N24HN9akD3LHw656LTeClUuUUVIoRWrV1E0ml\\n\",\n       \"GBmoiceOpYIaf2bBdTDE8Pg0Qnvy5EkLMixj26tS1kFli2pucB2uXqXjvA87OhQ1t/I6nPNoWx3I\\n\",\n       \"lP7uHdBUJxzEXORReW3ltSYT08FRgW1yHrY7wn+K0YEN61ZlHUQcWDqoqcnnyrpnaJP+qVTplaVa\\n\",\n       \"ljiYXV5e/Q/3lb3kSwEFMoZaBzSlu+l483IdYebE6a0aEyUIp6nesO2ZX4FTN64rcnKgTYxfiVIu\\n\",\n       \"5qdlqq2ubTyB+a804J+fUdA+dLKr+08Ax/d2efTKnXOofhfJZazTKsWoeDoBV7y3gg7+RjPb4p7D\\n\",\n       \"uTeonQPobM5FQCoqUzpS109tudvPtjj7wD5MYKl0wHXxvOjqYzn5W05u9BRiKPxbsgpWqg0TQKT3\\n\",\n       \"Tnjhj+bZuA+BGebVsSoKc2VVO7Cdla5Ylwytat85JtVnfN7Ck1a11e1jf3MJpPrpAMew4n0VcSiH\\n\",\n       \"qfpd6ZihUYGMv9LHMcf6WXds652jc/rivma7J6ssrk+NXdaBLxamcufP0LrlWMqEzKvrazYANcAV\\n\",\n       \"3PjvZeVfAEGD7/rW9dfpQBmKg5oyjkrX2K5OJiDjv0yyMrH597grYGOYdTamljYuCnNLJxchoXNT\\n\",\n       \"yzIWFyFzJMlQS0EAZF14z87WVar66MDW2ZsT187pr3VYTr7kVGVOASudq+BYwWwCMt4uLi5k/ROY\\n\",\n       \"qba5fAU6yo3XAAAgAElEQVQy1Z/pxvfr2uBAxsdU/ypBmGXq2sXlasOIRbUDgZH3q57nrICtA5lr\\n\",\n       \"j2obAoxBhpIAUG1BnTi75zLXLpXm/Sdz2UWy3E7Vl05O9pZzQm8U9DrK+7FXwn28d6bTQa3g5pYe\\n\",\n       \"NwEz1eZJe9VE7nTO91D5TKfRGT9Dmximi8y6Nqu+uiiNJduFUKuelzmgKVmBObfJLTu5XrZ57hfq\\n\",\n       \"poroVZ7rcstNvk/laJzOOqc0lZN8hzZddqYwzDrFqH2nrGqgFch4iclAq6DgYNYZfh6fLhU6oCp9\\n\",\n       \"OB0pnU2WndWyRe1n/3Lj50RKV127qwhN3b+ClxvzmxAX3SmQ4T5fg9dxOT9mcUDDjdvWAS3vh/fF\\n\",\n       \"Midu7A7R8b36Dg3PUeI6WEVjeE83uJ0HY6jlrwzUAOP9sB0OdFPp2q10Wt1X6UuNCZdNQIZLThwj\\n\",\n       \"1hXnJ0BVepleV4ENHeUkMnMT2e2re1bHqgjN6cBBwIGs2hCQqm1dX6b9V/M0+3EI1E7+DM1NMjY+\\n\",\n       
\"teRcuS/f23msbuD5JzPKI7p06rFW267yXVsmdbol+OT5WV7jlisq7aJ3pQsHLewLjw2OH++rt5wq\\n\",\n       \"n+ejTTp753sovbAupjDLe2RUyxBgvaxCbeKInD4rcUGAgtqK3IufPuG5E7JXYMtBmEwA5ZEcxNTy\\n\",\n       \"UoX2ri/Tfq48Y6giKndf1xbuvwP9aoSm9KWi2zRe169KZ9gvngAOBGoibtsmHyt0z9C4XW6sGWaq\\n\",\n       \"PayjbH9OcP7lQ9ZZvRXsnDjaOS85K311YKtgrdpXOaOp3IuXAquG24XsDmrV/SceC3/QXj074DJ3\\n\",\n       \"fLV/brIrI3CTTOVXvXcHMoxkVdTh9rF/XbTJbedz2dNX+sU2YIQ2WXIi1FSUxu3l8127+B4INn45\\n\",\n       \"4NpZ2TmPN8Is81XbuvbiPKxAznIIxFBemA9rp+EsKjDr4/zKMwSGGT8bwnqrflTHKo/t7jNxDOpe\\n\",\n       \"VZsc3JSxTz/bqICAG0+0ztExHNw48xf3SscqKmKouZcCHH2xPvkcbjuPdQUO9xmL+uUCjzM7CzW+\\n\",\n       \"PM6VrlTeQZ/PVW1z47cqtwo0NfmrEJcVjuIIj2Vd5NJBy0Uh6eX5+ZmDm9t3E1L1ZQq0qtwJH3eG\\n\",\n       \"vgIzdSz7UW24hHIwVuUYgfA5CDO3DOOooZuwnTidd2OB93IQSCgzoLNvWa6gphxzlWLegdeVqTHl\\n\",\n       \"89U+trWLeDu5VaA9evToWtl0WVOFvLyPyxWejFmu4MnQqpZYeC8FMyxzEYXKT/qG11WTuLuvK1sZ\\n\",\n       \"k84h8fgpI5+AogMEGzxDDGGmlmJq4iMksr4q3+lanYM6URBT16eo33FmXycgY8fTQY7boVLlmLG/\\n\",\n       \"7hzVR+xDpxsnJwOaAhCXoXRhuYswquir2vB8VK6KyricwVKB5pAobSXl61Q9K/CaOIA8D2GWE4/7\\n\",\n       \"rqIk1+6JPhlm7kfeeY0qQ3hxlK8iw0rv2K8KYtUEVm1BqOE3khOoOaDxOGO7uB94LNtRja+L9vge\\n\",\n       \"7p5TuXOgoYK7fMok7HURWgW0KfSUVJCbwKYbYL4XXs/3OSavItgptDp9M8w4ulqRlagXnVDqT0HN\\n\",\n       \"AbSCmCpX4iax2lflXA/mVVQ6gdnFxUUJMd7HtqjoiSNdZedVhNZB/YWI0FLJkzSl8hAo0+XjJFJz\\n\",\n       \"EVoVqWEfM3UwQ5mCrbufK6tgUEVoq/rkLduPSyU0/i46c+1X/cVxwuUtw43rVWPLwHL5Sv/ZJxWJ\\n\",\n       \"VyBz+uBoB9s/eX7Gdu0ghvuurawr5bgQvjjmDmJKdy8M0BBaVT6lC3mz3tUIrYrUugitEgZZBbgJ\\n\",\n       \"zCaDyudMosRMVyE2BZwz1GMjNAeQFOeEJhGag5faJrICsgnMUBeZn8BsGoVjuWub6gfCrHNeTk+T\\n\",\n       \"8enkZEDj8L0yGDXYXKaWTl1kVkVqXYRWiYKY2u8AVkGt21ftcOkqxDrvzkBTUHBt50nJ/an2sXwC\\n\",\n       \"M/ebSAUyBTolDuKZViCrdFGN701EaGqMJzBDXTrwHwKmqt+V3DnQIjTUHNAcwLgs4vqSc+U52SER\\n\",\n       \"mhrYCmAuyqgAVpVVx6YRoopsb2pLnbhIZyIKYmgb6jjqoAIYthEfaDuQTaDsHNUEZC5C47q4vwg0\\n\",\n       
\"1lM3H6o5wm11TkDpbTrOE8ezIrcKNP47YRG646o8xQHNRWiTgVMQW4nQKpn0zQFb9RnLKqAjULkt\\n\",\n       \"XX41IpuejzpDMGD7UJxu1cRQEyVhgiDLCR9xfSnqnrVNojS2URWJuPFy46r2WS98LPun6uY5sfKG\\n\",\n       \"Wtl9FamxLrpxdnPpXgOtWnJO8pUSuQxD625phKmDGD9LYOkm3wTcqq4JzDoDy/uo9qgyB/1DIIb7\\n\",\n       \"PNEmEU4leC1CxJW5qKj6feQEZFnG39Z1UOucsnNalV10EVrn6Kst4vrPqpwuKzvLfCUO5C8E0Kbp\\n\",\n       \"xDNgh7tvaxzMnBfDJWflUbh/043r6Ix3ZZtGwpMI7dCoLdvt/pz2MVLVwVDDaIO/UYu4/vvIDmQq\\n\",\n       \"ClHjpcau2udr3Tl8/k1HaLkhMNX9+ZcYzmlNYOZ0tSoneYYWUT/g5cnuIMYTeAIyBTO37yK0yps4\\n\",\n       \"eD14oH+f6AxZHUsjqkDG8Eh9VoA7BGgrHh4Borx3JVXbU9/uOoSagheDjO+n7s/nVhOvs9mb2E8d\\n\",\n       \"K5s8dowRWt1H0a7/2GZV1tly9WeTlNz5X9tgUQbpDBajtjyOIFFLSpdeXFxcSauN76UiIde3buKi\\n\",\n       \"UeAkdILRhmsPt2ua5zYjiNnYsayCdk6GrqwCV6W7HBvczzJlR5OJiDKZeIdurn7skwOB6pc6numx\\n\",\n       \"/y8B61B/nhz/Dyqn7pjL878QXJGTA60SBTPMc1kFMBWNKagpwEVcf27AE86BYqWfHcxYF50hKli5\\n\",\n       \"tIoqO5hxvqqPf2iN6bFgS1GOEMtVitdiqsqm0cUqyBxMurw7Pr2HE9cv3ioodekkvyL3AmiVcivj\\n\",\n       \"5HMUvBzQusgMz933t57D8BfRCmwT4Wiiu5Yh5soV0Lo8g42jrg5sDoTTiKyCmmsn6q3SLerIQcyJ\\n\",\n       \"g1oFpUMioQlsujmC+Wn9U7g5oFVwmkReDowvZISmJl6n2CqSyQitey7GW7f0zGtzQBNsOEmxbV37\\n\",\n       \"VcSxEs3l+ZVhTgHp2sYgUzCbwm0SpVVRWeUwVpyHgpkD2yGAqGC2ArmVCcxOsYMu/izJwc3pAvvR\\n\",\n       \"LSkVlKbLzBc2QuPJ544ro1VlKiKroq5umYnXJMByW31rNzlH9Y8NFo0RJ6OCmRJ1bAKgDmAOag5e\\n\",\n       \"k2dorLdDoI96Q6jl8U5XfHwVZiuA47IK3JVjX4WxusbpQkVouFVL0ZUIjo+tyEmAhhNyxcicUeME\\n\",\n       \"6h7wuwhNgQ2js1wS8aSuoNaBbAJCzDuoMdxQppHwFGgdwBTMphFbBTanr4kOGWqdnbH+qohmCqjV\\n\",\n       \"fexztT9t8xRuXV3HAq0DnILaitz5X6xVRsEGpgasW3ZMYeaisSpKe/r06fO0e86j8iwKdCuT1QFs\\n\",\n       \"Wubuo2AzjdQmYKvyTq8OZofqq5JjIOfg5MDlzsEJzACuIrXpppadrv88T6sl5wRe08iM0xW58wiN\\n\",\n       \"vZDzDuyVUpyhT56TrR7DKA1hpqITbN9KVDZZWvA+6sYZJINM6RkjYwezDmLqBcAxEVkVqTn9Tvan\\n\",\n       \"9R8a3XQwU3mX5vh2jtPNG9cXhhmfM61jGqEdE5ndywitAxpOKAc29tYun5OrA9bkuzMGmlpuTpac\\n\",\n       
\"rh9YxvlJlKfAhvrllPVc1emg5iDW6aSC2PRZmtNfpS91bTdGTm4KbNNUAU3NJR5PBSkXKToH5+yD\\n\",\n       \"YYZgevLkSVxeXtpzcqvqeCkiNKdYFAUxNtAc8JU3mtMl6oMHD+Ly8vJGQcbHOe9S1KEqZx1jvvLC\\n\",\n       \"LjpzEVoHsA5q3QsC1a5Kdx3UJmDs9MhlU2hUS04FtJzADLJ0qqqf2S6XTrdKBwxotexU0KogN31p\\n\",\n       \"kIBXDpzl5EBzBHZhdzVhKqipzzO6N6JYZ0LNAXVVXLSm6q0iOydTPat7O/2ujEcXZXWO6hidTkCm\\n\",\n       \"9itRDkEBYVLmYIjQyLnD0XVlN9g2dd+E4hRuCsIMsYzOGEBdfuUXBStykpcCHForL1AZPteXadax\\n\",\n       \"bdvzFF/7svFkdHB5eXkNcJnfti0eP37cbk+ePIknT55cAwBGIjhZ0VDVRJyWqRT7inlnuOgIIuKK\\n\",\n       \"7lJfT548eX5OFbnhMfWcxW2pP9Qn6hUnzuXl5VgXaF+sc4yAKpCnTeD9Ly4u7DJuJVJz57vntSp9\\n\",\n       \"29veFl//9V//PH306NFz242I5/p94403rqw20M65LMcPx0DlM01bqfrWRWxqieoiRycnidAqiHFE\\n\",\n       \"4bytqhevVSBTQEuYqYHOjSeZm3AJNF5SJdgQag5okyiG9aI2521ZH1lHti2PY6ifAEFYMdTUsX3f\\n\",\n       \"rz00Vg+SEWhq8iioTeHPfUcbQpgxRDAydxHFJNLpop/KySiAcdmDBw/i0aNH8XVf93VX0gRajuXj\\n\",\n       \"x4/ltVU+HdlkbBzc3eaW2+rYitz7CE1JFaVhhOaO41tL9YyMJ6oaROWtcqIhvFxeRaHdPk5ALmej\\n\",\n       \"xz5PUtan+6CxmghclvVMHvwqqLnoDAGLelB57BMCHPuDx3Gc0FbU854KUnzP1bQDGZbh4xR+pJJj\\n\",\n       \"gDY33RBoPD64n2PC/VB947k4gd6K3PsIja+txF2b5RhKd1EGGo2aXG7iqQhNgcwBzYEty9JIFdSw\\n\",\n       \"3UpnCmLOiFyZmlguRc/LQFD7qNsqxei9cwJO8DjqQy2L3D5OUKXfymlU5dgXBTSVV3ac/csILaGD\\n\",\n       \"emEd4X4CjZ+XccrPurhfbr+Cf6c7J/c6QuNOVZFHRFiQZd0uoug8IYOMU8zz9QwwTiuAqQ37ywaN\\n\",\n       \"S2XWDepEjQcaJZdhOtUZA617VqKeqbnnbeksWGduH9Mu7575OOBXenXStYXHvINaN778+IXzztGh\\n\",\n       \"o2G48daBxzmXVdh3cu8jNGc8PLlVKJtAwSUEGoQyEi6PiHKSqYfVeR91bwe0bstrsa8RHmpTQYPH\\n\",\n       \"tvHDegS20pmaaAg0l0egVc/Y+E3axBlgeeZZb3xe6nmyrUoHYT5W6Rj3O2ehlsmTZ17VGPA+Bh1K\\n\",\n       \"190xzqv9iZRA27btpyLiT0bEF/d9/8Czsj8cEf9dRPzrEfHZiPiz+77/vrr+JiI0Z0Quj8aO0OBJ\\n\",\n       \"4LZnfbyydW/ocIDT0DAiVIBbBVq2C2GlYIZAcwaB5ahrfNidHpnf5naOgCM0Fem4yIcjQhclVrpT\\n\",\n       \"44d6q9rLTkNFMM6pVDp2bar0OAFabu5RSETI5XoHPd668cg6pmPB5eo8LFuRLkL76Yj4mxHxd6Ds\\n\",\n       
\"RyLiV/Z9/6+2bfvLz/Z/RF18bISGIFPLSbePyo3oHxp3ZZMH2rgUSsAwwLiPU8NWE04dQ6iloEG4\\n\",\n       \"fHrjLGOgvfHGG883vnfVZuWQXGTQLUtxUh0a3XL/1bMnZa/VPtuNKlP6Wnn0Uek7Iq6MUTqdiKvP\\n\",\n       \"A/OzDWe3Lp2MR87Nzi66MgXwVSmBtu/7r23b9l4q/lMR8aFn+Z+JiP8lDNCOjdBwQDLSyTryutzP\\n\",\n       \"lEPfY/MRUQ4klzPIEGYKbjiQPFF5gFE/2FaGGX6D1KURcSW6TD1mWU6Wr371q/G1r33t+bWTDcdk\\n\",\n       \"slVLU86nsXdOAPur9OuW6gpuqpzvUeUZoG7rnAWWR8TzscE3m+lcc3xzHNVjEvcopVq2sv3nWEyA\\n\",\n       \"zXm3X42Dk0Oeob267/vrz/KvR8Sr7sRjI7ScxBFv/fQjyxXMUDiim4hT3kqYjoPCIOPnMxw9sLHm\\n\",\n       \"xMV9hhlPFgTaZLJFxPNIDF8mcIT2ta99Lb7yla9cq8PBg4HGY8V5F8WpfF6nIOUmv4qs8jg6ArQD\\n\",\n       \"7pM61ukDN/cBq9qvgMb35u/O8HknR2jVpzHu27LuUUHq1sHZgVvpIOtFh7UiR70U2Pd937bNPh29\\n\",\n       \"iQgNBb0O18mTXYHO5bv9lYFlqCmQcd9xciHoGGqqnwpmFxcX48m273s8fvz4SlSX/ciJkUD76le/\\n\",\n       \"+rw+rJfvged0YzGFHPe9ilpUW9RynScTO4JDN7e0Uj+5Y6AeAjT3q4Dsc5Z97Wtfkx+Hu7x6POAe\\n\",\n       \"GSh9un385QV/M5dQy3HiednJIUB7fdu2d+77/i+3bXtXRHzRnfhzP/dzz/Mf+MAH4ru+67si4up/\\n\",\n       \"LkJo8cRP6bxwSqWALqKrwKcgkm3CtjuYdcsLtczgssq7O31kWzudVdd3gmCe6L0qq6A2bYe7f4oC\\n\",\n       \"Der1JoDmIFctuyZwVlvVbwYRrzBY36kf1BPrsLOPiQ2pcXbjFRHx+uuvxxe/aDFzRQ4B2i9FxA9F\\n\",\n       \"xN94lv6iO/EHf/AH28pc9IIdxqWQ88B5vJIuauuOde3mCdKF2RXAuIy/AFdAW5Hq/M55uHQiFcz4\\n\",\n       \"HGyPG4sOxArefN1qVNTpZqK7SicrUUkXzaqoyumPIZv1p/6rep3tOyer8k4/r776arz66ltPtn7r\\n\",\n       \"t37L6qP7bOPn480XAN+ybdvnIuKvRsR/GRF/f9u2Px/PPtsorq+qbyWV5CI0dY/K+FmqqEzlVX9w\\n\",\n       \"sCP0cwQFoVWoqb8Uoq5z7ZxIF13w84zKMFF3q2nVPm7rMf1y/bwpkKnyia4OBZr6MqAKFlByhYHP\\n\",\n       \"sRTI3OOirGMSbbIOjuUESveW86Pm0L89qbxrqCO/itAwVcrhpY8DWxeldRGaGgg8bxVoDmoKaO5P\\n\",\n       \"HClDmRgJ64tTNeH5/EPlWLgdIwrU1RJ+BWiYr6DGesj8IUCbgIzfkOM48vji8pvrUGUKaMo5q+Dj\\n\",\n       \"puVWfyngGsxkr45lfhKpOUOoDEQBbAVqvK8AVu1PYJbXdEvOQw2mgpmLAlel0u0EYui0uI3TdrnI\\n\",\n       \"4RigYb0qj2mlmy6Cml5bwU199oNtzjmGb3vxOpybXJbnO5Ap/XAbVH5V7hxoCjxMfSxDRVeejyOH\\n\",\n       
\"CcRWYFbBogJaB7cKYpyfLjkPkQoUeJ8q6lUy1XHnRLJtHfS6CVP17RigqXtU963GagI0hPsKyNim\\n\",\n       \"GWaZIqx4fmKkhhGagpjr8xT2q3KSCA2PK4hxVFZFZnyvQ5Yq3URzA6GMtgKaylcQ44m2suTk9k6k\\n\",\n       \"MkS8D+tJ6b7Lq+VPFbFX+l/pn+qncy6HAk2VTSA2jc7YJlWE122sl+x31pcpLzMZZjg+KzqrxucY\\n\",\n       \"ORnQWBHsFdjwUYFq6wyBB7KaTG6CVd6WJwf/NdDus4sqv22bXW5WS87pOLgy1TdeZqDOJuOgdN1d\\n\",\n       \"U9Xr+q3gwn1i/a0ALSc/1+/u7RxO6oDt3kkFs2OWm6pP3C4HM/XTJ95f1cmhctIILc9Ry0wVmT14\\n\",\n       \"4P9L0FQ6o1GgqyYEt8OBrAJaBTV+jlY9gzvU+yldqomPETPrjMfO6dWNA+u5a3+33/XR9W8CtDw+\\n\",\n       \"uT/rs2vr6gpDRV/VLy34Hjy3EmrbtpUAy+M89lV/pzo4Rk7654NQeEIoz6Cghtere1XiorQKas6b\\n\",\n       \"YyRVAY1TN6HcvoLiFOpTI6qgjecoIHXOogKbi4hdPyY2UPXtEKAd4zyq6xgyU7BVAKuWm3y/iKt/\\n\",\n       \"PHQCMyzjutS4qL4r6B0j9+IZmkvRa6glZ3ePyjCOgRlPBJ4U/LMWTrsIbQVuatIdOibV5M19nhhK\\n\",\n       \"p7jvzlMTbAXKDmQTmKg+3iTQVgRXJ1ymzuX7Omh1kEO9sJ7UKqmCGV7Douq/Kd0puVWgTbxMp4xq\\n\",\n       \"4zCZvUXEW16n+rG6iy7cBM228n7VxwksFKzdZGUY4HIClw2uTqxL/QFFnAhd31x5BT8+XtXjylYn\\n\",\n       \"hnNI6GyqMZk4jkpf1TW86nD347K8Vv2kqYrQXNvUWLC+c95hvnOU03KO+lflVoGmpINGSjWwCKnM\\n\",\n       \"R8SVffZWqlwtmVwb2Wtx5MhtR4OrJrW7P16bZWg47EnRc1Zg5LJ936/9FQaGm3IGbgIoz66isckk\\n\",\n       \"q0T1RwnXzTBTUXXnSBloKsrsJiZfo1YCmHdlaRc8XlOHtCJqzDkYWHUyKU5fq+0/WYQ29RQMMn6e\\n\",\n       \"psA0nUQMI2d4Ci6ZImSyXMFzRWdoIAhrNGLUS/WiRE14hqL6/6IcrXVAxvq4/dU4YJnSxaGiQFuB\\n\",\n       \"jP9KSQcyBzS8t4s+O6feRWU8ftVfonXOW+nLCY4n7x8KMCVuHq/IyZecTpwxTSItBhZfw3/hgyMQ\\n\",\n       \"ZXA4iFlngqTKd5NWRWloLBVklcFjnQpivI9AU/8yTum5MmiGewU01guPYxftdNGZOk+Bjf8wZgcy\\n\",\n       \"3J+AmduNZZVjn0At4vrf6zvUmVaioHZsfUqUfazIyYC2opDOYyY4FLiqMvx+BturQFLtY4TEMHMD\\n\",\n       \"xEbJ9bo2OGhVk9vpmiM0hlkVoSmdueMVwFy+6/+qOLthqE1Axk4IIzHnZDlqxWMqnUI17VgtNQ8F\\n\",\n       \"baVDd62KWKf1ODlkvO8caNOJkHlnUAyMCiCqjKOsvIeK9pS4ZR4DE6O0Y3XH5Svwqu6Tf4ue/1pp\\n\",\n       \"9QxN3c8BehVmnHcw6Prq7G+y5MxzqzzbVjXOCs5VlNpFZphXz88qZ+p0syoOYpWjdqLafEiEebKX\\n\",\n       
\"Agi2KoKYRmRZVwU09RLBGaprd9WvBFpuhzxLuwlPulJPRmj8H+BVhMZjNcl3E7cbPzceSirAKRvC\\n\",\n       \"t5v5O9kp0LB/CiLOCVQTl/tdQQ3boF4KqCgN00NkxVnyfTiQwfMqBtwroLnGpJG66IxhV8FMPadi\\n\",\n       \"eLlJhe3sAKAGhNuX9508x3BerAvdnZc9NL/v+xWYqWUn609NbpdOIIZlEw891ac6p4vQ8Lyuf8qu\\n\",\n       \"+HMGp3fus3Ic1VITbW4aoR0qDkZ8HPu1Er2xOAc6kZMsObOhXYO75Sameb/O46fxOK839RTch7xu\\n\",\n       \"5TkGD3pnOJXH7Y5V16h/kIEwc6//1fjg/kr71KainEOMHNvLdqSeoXHfXIptRCfKz2adM8R+sq4r\\n\",\n       \"2+T96hnasUBTEXIFMGXH0yibIXZI2+9FhIZ5lTLIMs/3qICmnmdVnnDSDwXmSYRWTUiu0wHBwdqV\\n\",\n       \"YT+4TH2DNvlsI/tSTb4VuPLX7fzyhnXSCdtVBzNecuZ1mGKebSni+hvwPL+DmdJ1p9tMq2/QWM/V\\n\",\n       \"GHSiQKX0rUDWjZsC2cpYp9zLt5wYTrvoDPfxfm6C437mua6qD065Cjwu/O/67wbRAWvly/CqTP1f\\n\",\n       \"Rl5yYvt4nNQYHQI01F8V6bBelLjJWsEsXwrw+S6P7cNoUn0zqNrnwMY2WYFNQdE51A4OSlcOTnm+\\n\",\n       \"ymN9K1Dja1ZhFnGClwJKHEyqgXSgw2cYbh+frSlIYpucMTqZQmYyyHyOgli1n3V0gHMgU5MEdYLt\\n\",\n       \"Rj25Scxj6wx+avhd2aQd2B435m6fIwq3YdSkNv5v5djuzu5x/DhCUw51Agl1rrMddUxdq8p41aRS\\n\",\n       \"toluPE/yF2s5dObySb2qHtVh3ne/NqiiQCedkVQGvvJJhwNaBzdsg8tXE60DMU8s/NnQxKA55ck/\\n\",\n       \"cQhcH+dd39QSjSNC1L8aEwWlyX6XZ3t09p5p9YZT6crpX8kE1NPx5XrRrvH3xxHXA5IpkE8KNLU/\\n\",\n       \"ua4bYBT1S4AVmKloBOtRA9WBA/en0gGtW2ZUaQde3HIiu2jBOaYJ4Fb6V9WDeQSP6hfv57iyuEid\\n\",\n       \"oyuXTqCXZROgoY5dZF05s05/FbxcvhsLHBOEGIpaWa3IyYDGeT6fz2FwuMmkhKG2AjNnyEomkMAJ\\n\",\n       \"VE16VffEqHBpyHU6o5tEMdjmlcnWtQXzk6hsCjQHywpuFdBUOUIIAebyHcwc0KofzEeEhJmK1Jx+\\n\",\n       \"JvbBqYukJ2ONkZm6NuLq76FX5ORAc2XuOgZcBR4U/pMnE5ipJSEOCJdzqjwcpthmVx/uHwK0ru6b\\n\",\n       \"BFo1HhPIrmxO55xihOaismOApp49umeSDjqqfajHSu8RIe9VwUKNS+UIFMxUm6djHRFX/v+nEzXP\\n\",\n       \"OjkJ0DJVYFPXccfcMqcTDGenMGPo4D2dst0kVF4Nr+G885SuLlWvatu0vVivAxqPRxWhuX5yGyZt\\n\",\n       \"Unpyx1aAnf2pxIGk26ZtYFusIjVsRxehdQ6gs4kqwlR258a6KkObYqc/kZMCrUsZHApiXWTn2oXQ\\n\",\n       \"qmCmIg0Hs8mk5AhtJdqoIOZC/04PkwnPfeiisi5Cc/tOb5Vuu7xa3nRAyz50usPJjRt+BoP5KVxR\\n\",\n       
\"xwgx5YgjwkaAyslVMHNlrDd3v8lYZ9+4nG0n+4r1TuTkQHNluY8ww4FW+ZV2uU1959ZNRiUKYmwY\\n\",\n       \"DLQu4pgAzRlApduVKIb1h/vq2NQzHwKrKn9I37B/kzHmyY0A49R9VtHpGCHGEVoecy8EJpETl01s\\n\",\n       \"T/VbwdDt89xX++iIVuRkQJvmcx+9rYLaikwjs6pu58XcNlkGTCZmlWK+cxiYV21yk44dSAU2p69K\\n\",\n       \"j9PU6YrLVBRa9U+13Yl7Zoa/unB/vcSBzdklAgzBlu2ofuHBuun0zec4ffE93bhymdOvCy5W5KRA\\n\",\n       \"4313PsOMQbYKtLxmAjMXabjyFbjh9e5cPNYBjScEttOlEf5bLWXIqp6qbieTKKEqq4DmHIEC2zFA\\n\",\n       \"40nNPyGrfhvr2oDj50CG+xH6OzQGmtNnNw4TmDHQHDhTFAN4HuY9V+f2Sf7rUwU13ncw43TSFlYU\\n\",\n       \"K9ApVrVJyRRuHO1MIThNsc9uKciwdhELw2AKSjUmnbdVE88dmwAtj3cAYaC59rOoic1wy79i4u6p\\n\",\n       \"8vv+1n88Q3t0YHPRWQc1pV+Vsh2qZTZHaO4+rNtsf/ZTzc0VOel/Tl8tr2DWgQ0V45aT3TLT1emi\\n\",\n       \"KfY2CDKM0FaAVsEM8y7KVP2b3JujhyqtxpHHojvm9qd6S11PtxWpPtOo/hSTStkhZdsTXgk51HVG\\n\",\n       \"aApiOF6drjuYKfCrb+0qcfMv5wmnCsQTOfl/Tj/m/JRVI+8mrvKm0xQ9Dnoe7h9HR6vtdAOuIjIX\\n\",\n       \"nVXnOj3fVLSsxo/Luok2GfdslxpbjmouLy/btmM7+KF/9YmGajOmSo9KT2qloGyhGn+sa+owJvpD\\n\",\n       \"qZNw3ZoAAB8OSURBVFZdzib5mKqnk5P/OP2QaEil6vwVKPDAqf1JyjDL1PVVtaFrPx9XOu02PA/D\\n\",\n       \"fG6r0vchEKuEJ9ZEF5PzssxNRjUxsQ0sXDb5xqwbQ6WHLro9RN8OkHhfLK/mjdMl2g62s4JZBd1D\\n\",\n       \"+nmvIrRjRBmMO65CaAeyVaApmKXnX4nQXJ/UZGDPnOkq2DqvWMHsWLBlfasb6knlI+ofqPPSifur\\n\",\n       \"8rnfRWbKRiobTT1kWo3DVKfTcXXlzrGzDhlg1T2r6Kzreycnj9AOkWpy5/5q1FN5IDWgFdAUzCZ9\\n\",\n       \"mUQhlfByZbrxA2jVRmxPBbNjwOaiBHY0DvCqLCNQFY2pVLXHgU3BbOV7MCUcLU0mt6rXRUUTeKk5\\n\",\n       \"4YIB7G/akutX5UiraG5FTh6hTb2NO7ea+CueXi0RHMSqMhwgB7Osn6MS1WbXb9btamSGxleF/U7/\\n\",\n       \"xwDMQRPzbgJlHq+pUgZaFaGt1Buhf/rULTuxf0onaEOVzpQTd7p2Y7sKt0p3aAdsW649eFyB7hA5\\n\",\n       \"eYQ2UbCS6twKEB3IquXnBGir4iIMBzT0tGgAPBE6kHUG5KCDE4/P6+A2dXDdRFK6cdFUAo3r4Wdo\\n\",\n       \"GKGpOlUZRnzd0rNqs9NR5VhwHDqYYZ7B6cDYjYHSY0KM/2ijg5eL2FR7p3KyCM0NpiqvPAmnHSAm\\n\",\n       \"YHODV0UMCLRJpIMGNWk/16tgxnqvQNZtWI/StYoYDoWZG3c3gXgSV/kqQlMvBrivlY2pTxecQ6za\\n\",\n       
\"rPSgxoClAhnvV3ZTQa1aufCYsKjvyKYgO1ROFqGxp59KpfxMHci4TD3jqKI1Bz3cn7T/4cOH4/Zj\\n\",\n       \"ykDisH4KK76e63U658gA5RAjdNGIczbqE4EKEh3QVITF/VV2k/tqyTr5ZOMQnXCf2CFy/dW4IzQ7\\n\",\n       \"x1BFaKg3vrerdxKd8XkrcpIIreq889buelXegYzBpQDnorMp0Fx0kzBzxqP6gXpJ+OBHiJ1HVtGZ\\n\",\n       \"i9bwetYtj9NU1LmdbVTRgHIclU1wPd1nFnhd5xC7evFcp4NOl5N5VF3bgU3pS21VXxW0Kpup7O7e\\n\",\n       \"RmjsSTCNuLp8WYXYxLNk2g1SlarB7IA2maxdlMYpR1TcP2cUE7gp47pNUQDmMawmU+p5OqEnkRlH\\n\",\n       \"aG7D45OlLNqIchJKH05P6jqGhrt+dZynzp4jtLQv165JJHaMPd76knPihSo4TferaGy6Kag5b+vg\\n\",\n       \"5vqq2ocTmVMuU3UiFFHf1YYR3irUKoBMJ2W3n/fpIoOpOKA5EKn7T+ylivhw6+bDygRmm8LrK4c2\\n\",\n       \"jcRX+o1ON1cOPF8727wJh3qrQPvyl798rWw6gTPtIihlNJMtBQc3BwFBwRMiB8wBDf/nI2/VYGE7\\n\",\n       \"lOfO9uX/ksSU8+q+Kq8+21BjkX28uLi4dn43YVQfXb8RujhWDG4GWgdapf/sE/72Ev8s9CFAq5yd\\n\",\n       \"6m+VdvrCfR7T6s9bZ58fPHhw5Qv/bGfm89jFxUU5f7INOU4XFxdX/tcp/s9TVd6dd6/+ScqXvvQl\\n\",\n       \"e8yBbeodKlB1kFRQq34QyxCr9jkCqp5V5b1ZXBmDTAGtApuDXRUluSW18vQKZhOIuzFwk3KyXGK9\\n\",\n       \"Yds48kuY4Y+9pzCbOl1sj4uiuAwjOlfG+q+glrDCKArhxh+DT8YC2zUB1iHbitx5hBYxg1mmDlgT\\n\",\n       \"RUf4JQ0e58gAr8vBmj5Dw7p4ok+jNLWPhurAhsdU5KaOVcsOnLBoXG7ZUi1luv66CE1Jgqca9zxe\\n\",\n       \"RWj7vl+J0BioDmxcruDGedXPaj/bnqkrw3p5hYG2nBuCLK/BP6LAf1Chi8zwXGWfbh/LVPSG56/I\\n\",\n       \"vQFaB7WuLMUZCOfdNTjJMRSvojIFNK63muQuquF61GB3+Qp42VelW+wTfpKgIq+u/RNhmFU/oZku\\n\",\n       \"ORFo7KC4f0+ePLlybAKvDnx4HPvo7KKzlS5CYyBfXFxcq5MfiyDAEGqpv+zbdMwq21O22G33aslZ\\n\",\n       \"AW0FapO8i4ZUXl1XlXcQw2N57SRF6SDsjMEZR+URcauiM5zwl5eXz5+hYZu7/FR47NR9eAKxVBEd\\n\",\n       \"R2jYN7SPKsqawMtt3MeVTemC9aYcAes0bXTlvtO5lvd2KwH1aKSC2L1ccn7lK1+5VsbgWom+KsFB\\n\",\n       \"wzwrOweUr424+v868xr1jKyK2ro2qjKGgdp4yTmBGm8utGeAMcj4s4ZpvyJmY4hLqEqHCB1XtypT\\n\",\n       \"S2sGNraX4VUBjqHlyrD97rlqpwPnKCpHwJEZLjU7eHbLTLRLnGPdNrXbPL4iJdC2bfupiPiTEfHF\\n\",\n       \"fd8/8Kzsr0fEX4iI33t22l/Z9/0fquvdW041+OoYtaXcVx4Ay7I+5/0zz23IwapeAmDUxu3u9vHe\\n\",\n       
\"laFNQ3QHMJfnZQr3ld8CrgpDpBJ0OFWkUS051T04QsN+Kpi5h/uqDO/Z2TFGM268Kx0gGFnHeD7X\\n\",\n       \"hW1GXTiYVc7IARiBxgGFKl9ZVaxIF6H9dET8zYj4O1C2R8SP7fv+Y13lFdA6qOWgR8yWcKwMfpCd\\n\",\n       \"57GXx3K8Z7YlIsrlJue7qBPLUZSRsBesjEGBDFOXR2ArmF1cXDxPnzx5IsezE56I1TVq6Y/LJFxW\\n\",\n       \"OZ1y/WrJiYB68uRJCTIGGjuvrh0IMIZZ9gehgjBS+sA6eC6oVQb2oYoOsR6WatWQH9YytNCOuWzl\\n\",\n       \"8ciKlEDb9/3Xtm17r+rfpHL32cYhYToqXA2EmswMEjQiLOvyHI1NgOY21AHeyxnM1AAqkKkNz+OI\\n\",\n       \"RS0381zVflfGk3AqOEY8Ofkh9STPesX+5rlcvxtnBtxUHww0hBnCIM91euh01gUKCmh4TyUVyBTQ\\n\",\n       \"2HadPXd2feNLzkL+0rZt/25EfDwi/pN9339fndS9FOjA5iY6K2rbtucT9OnTtz4GTMlz1JvIPM4p\\n\",\n       \"5hW8KrjhhMGJk3104iKyLlRXy8kE16NHj67BDMsw8uS/PvHkyZNrdVdRCZdhRIZQc4BD2OA1uPEk\\n\",\n       \"n7SFJ28ez/HBKLCCGOdXhIHGMMtzIq7CKW2N9YL1sv5QWDc8b1Q9lf6UfSqgVXNV2bLbv+klp5K/\\n\",\n       \"HRH/2bP8fx4RH4uIP69O7H4pUEUxmXakR6A9evRIvmbG6/AeVdSHA66emXVwywmSho9vQCfAZsOZ\\n\",\n       \"PG9AmCG0Ms9pnq9eAjx58iQePXr0HGoYzfE44nhmXgFsNVJDXbHNKHtSKdfH5/CknjxWqIDmIh0V\\n\",\n       \"lSHM3LW8oujuyaBS9VW2zvVW9sgvizgQqLYOZHcWoe37/kXo8E9ExP/ozv3EJz7xPP/KK6/EK6+8\\n\",\n       \"UkZkXIYTwE0ON3grE6cL51fqOWY/ojaELmTnaE15O+UYOsPOtiqH4/rR9bM75iI6jrI6O+A6J/lJ\\n\",\n       \"NLnanzzmbF1FYnzc1ckgQX0wYBKik005EKeziYNmoLGzZjtN+/z85z8fn//850f6Xwbatm3v2vf9\\n\",\n       \"C892/0xEfNKd+x3f8R1X9t2A4nFzzzblydh5KpbJxMLzqgiz6+OknxXM0BBUBOegxobS6WkSQVd5\\n\",\n       \"p+NJVLUCzC4aZHtRZZny5zf4iIIn7gR0auxdxDmBGJ+jXihU80OBSx3DiLKTmwBa9ZjlPe95T7zn\\n\",\n       \"Pe95fr/f+I3fsG3pPtv4+Yj4UER8y7Ztn4uIvxYRH9627bvjzbedn4mIv+iur7yKSllJaqJ1MOuu\\n\",\n       \"O1Qm0FoFWzfBKphVz9QqiHXRWNUnfIjurlP1HJJOpIriKv12DpIhxlsXvbmyym6yLhWxod4Ttnxt\\n\",\n       \"2gf2A/ub46/6lvaBx/A+3bisAs3ZsbPXFenecn5UFP/UtHKnhM4jo3Te1UVl6rpVmURZK6BT/cT9\\n\",\n       \"m4RZB7bKYFT73Xd2ri+HyDFwi9BLU5xoWd7l+boKbAy1yqYrgClbS72zLSfYsK4EUJY7qK1skzFQ\\n\",\n       \"91H2O4Fa1Y6p3OovBfDNkZNqgqdMjbHzvBO4KcDgsZXI7JBopDKK6XKTP5ytojSnFwe1FbmNqIwF\\n\",\n       
\"J7nb72zFORLshxoXPNZFqB3AFNDwA1ucSwgtjNr4PsdAJO9RAcUFFB3IKkfNZTcaoR0rnUfncyup\\n\",\n       \"IrGbiMx4QmA5G50qPwRqqo0uGnAGUH2XNl1+Vjpx/V/V66HpVBgqbC+ZOqBXERoCRE1i11bVlxWg\\n\",\n       \"pXTP8dB2MaLDNlaRfgU0p+vKdhXMuEytFlzZitw60I4RNTDOI1SGOmmnAplqwyEAc8dW+9bBbPoy\\n\",\n       \"QOmFJzNHZrjvdDp1IisQc7py5e7aylYULBTElON0bVaAQgB1QHP35jaoSLSK0DLqOgZorHOeb2hn\\n\",\n       \"CmTKSVfXrcitLzlZVieDm3wdvKoJ2wlO2pVIrIMb51XbJiBjqKklp1qaOu/JMFP6wG+vFNR4cmFd\\n\",\n       \"lR6mOnI6q85l25hs+OzIRWYIDtV/7Efl5CrAcT3O1hloCkR4jKO9KdA6veU5DkoV5DrwrcidR2g8\\n\",\n       \"GXBgqsa7CYh1dMddG3lCT9vE9VTG2Rkt9yNTN/gcgVWRmfoGrTKWDtCq3RyFcN0TqFV5J5Px6QDG\\n\",\n       \"+nAwUxN42tZJVMb53GedYprl+FZU9d1Bi99sKqBl/ZVec7+DmHOoHSynctIlZzURuo50HT9EIQpm\\n\",\n       \"q5FYB4JJtOYmngJU9cV1tU10w33gXzsonaPeXB87qLnrDpUJyLKMYVZFaa6NlX2wTeH5eYxfpqkA\\n\",\n       \"gPuVcHJvozuwKbipscF78s8IHbhWYcb3WZF7seR0EwSP3wbVXTTm2jKFm4JYBTalFzX5JktOBhpf\\n\",\n       \"p3S00lcnDDOsczUS6yBYySE2xJOvghlPuq5fk0is029lK9nO/KE41+OA3W3uL32gU0OoYVsmKYOL\\n\",\n       \"21qNZSUnfylQNdrB6VioVe1Sk7L7axrd8ax3Cjt3fhX13IRRrLQb76smnIOU6g+fMylTDkC1jdMJ\\n\",\n       \"zCei+jg5h6M0zCuITNqBEHJ/yqqro4I9/l6Zy7CtCl4un/fF+1f5qZwMaOzB2cgm3rQDGd/DAUUZ\\n\",\n       \"PqZoKGpbidacfhiO6U27+6q/8sHQcBMIIwwHzmmElvpCb451qP6qlPNqH8tWQKbEORLOd23Ee3OE\\n\",\n       \"pACs7hURVyIeJ1x/Rurux/Td2DlHhW3PNNumQIfgciuESYTG+RU5eYQWMYvSKsBVyylsRw40GxkO\\n\",\n       \"Grdn2+b/qHYCti4CYrCp+7PB4r2xjJcgCmo8Tl273Bih7jpwOZhVIHOTzpWpyTGZJFU7uzZiGxg6\\n\",\n       \"3f266KyaS/mXLxBqbBvYp4lj4r6oiB/hlul0y+td9PzCAc01VhnfSrSG53NbXCTSGaD600AKYvxA\\n\",\n       \"tgOCOw89X6ZssOqe1f0RYux1XVtVXagjt491qn66Y6yXKq/uN50czuGxHlz7JtLdQwlGOitw72BW\\n\",\n       \"2X7Xbi7DPK4iOLjoXkxNYbYKtZMArYqGMN9FZi5Cq9qTA4D3qyZqhAYa1lUt9xgEbr+CWrapAqr6\\n\",\n       \"+JXrdFDr2r3i2Ttoc5lKXZ7rdpNPTRB3PurB3W/S3k4q26x0xse5bNu2K3+XTDk9ZXNd+117VRnP\\n\",\n       \"xeqzoYcPH8r5jPc8Jko7CdBwMDBKSFGQ6wCnIja8XzWx3L1TVpZ6VWTjIFFBLfuiIFotd3nJiRDD\\n\",\n       
\"+7tIlvcV0BTAHNQOgUQHus7Yu8gM61Rwr9p9E6L0xHbbpdu2XYvOpi+pDulPBTn1Bl6lVYTm4DaV\\n\",\n       \"kwHNwSxlGp0pkDlPPIk03GRUD+FVGV6D9blJiWUIMN4OeQlRHWewTa9V46Sgpvp+E1DjtuP9Mt/Z\\n\",\n       \"gpKqLW68lEwgi/1wUHM2pHSEfzq9ejHQtZ3bWM1NZU8dzKoPvKvAZUVOBjQ2TBVmTqIz932LagPC\\n\",\n       \"aaXtDmgqXZm0HUQQcMpoq+Wnis4OAd/EETBglB5vOsX78v5kQlSgUv2t+o/3n0Kf++LawPdWbctn\\n\",\n       \"rJPnZ1OwYZunkbD7RpK/l1QrKTe/V+XOgebors7LtFteVp9yZDvcpJwYmnuLqMq4DmVAaqI4mLk3\\n\",\n       \"nW6ZOYVQJatAS3Fguw2ouQi/WrIwDFU7O5A5PXSRqls1qDzfp3JCEXHlP3U5G8F7VH1YaT/220GN\\n\",\n       \"YaaAVs3dex+hVcpBmXSYv3XpqF55qCrvgKGgpvrd3edQmE2WoKr+leOHwFFB5JjUOQI34ZxT6yKl\\n\",\n       \"qr+rzoEjnA5evL8yfvlCoHssoe416Udep4DHMOv+jFX+4YQJyF6IJWc3QapOTaI1p4Ru4lRlDmiu\\n\",\n       \"rNODMnQFszwnIiTMKrBON4aP6/vqhFb9PQZiVdsqmHH/HFw6gKxIBbNKL6o93ZhG1B9+r/bDOQLX\\n\",\n       \"x0wnkRn+eSu+Vwe4qdwq0JR0EZeDVPUtiwNfCg7KoZOrMii8j+szHlftnA6uCsW76KTSv2rnTYia\\n\",\n       \"pJifOBgWjggmxo/XuPpdWyeiJn6VdzK1Nwc0d6zbn/av6281P91K6oUD2sXF9eqr71PcWxL1r61Y\\n\",\n       \"USsK6CIBdc7EELp75xfSeL57tc0p/8Ng9ee2V/7KRuov28WOJNs30anSiXMgTsedLicwO2YiKKkc\\n\",\n       \"b7Z7EomhqCgRj92kTB1uxPW+5kslbHN1D/dZBl/HOrtpuVWgZWiJcgjQ1PEV0kdcHcw0qA5sKs91\\n\",\n       \"ZX3VvitXb4LUGyGEF0Nt8ue2K7hlu6qIt+qTMnY3kSqoOXFtqOyga3PVRrwWJx5O9HwsoM5V+929\\n\",\n       \"+dhKBNWJq0vpScGMz3WiIrHuuXZ3j3sXod0E0CqYHRKdRXhYufA88yrlJU2WTfLVq20GFEdoq5GZ\\n\",\n       \"23JMptHOBBI8oafLz65+BhpHlTgxDxE38TEyzLKc9AwMBbNJhOOO3RTgOrAdEuUqez40Qqvm0Iqc\\n\",\n       \"DGgVwBzEuggtQlO9Mia3tFTHnTDUpin2dfJ2CKHGD1lVpMdG5oCmwKDg0MGMoxrU2wRwSq+cR91V\\n\",\n       \"k/DQCZHXYn/UvflbRo76WabL0WPrmNyDRc0fjNK6+YTXTCJnZys3IS800FRY2xm0m2BuU9cq4cmn\\n\",\n       \"2qOMpgIY7ncRmvt5yVRn1ZLTeWue9Or50EreAczpsoMa1nXoMysHdI7QVmHT2ZarcwWMnajo2+mx\\n\",\n       \"c3Boa110xjBTzuNQOQnQFKymAHMG3Ck/4npYW4GMDbVStPNYPAF5vwKZSt1WLT2d51xZcnb9VTpe\\n\",\n       \"hZrTa6XLyfFDhe2Eyw4VtyzH48rZTtuq6uqg2+lzmlaPAZwOOlnV+b2N0NwEVJ4Al04p1SBnugI0\\n\",\n       
\"lMpLZTqZeByhOYgx0FZeBjDQWH+T9uJWLYeyvhWo8bWr7VGTj8eqe07VSVXv5Gd0eC/Vnm5Zvtq2\\n\",\n       \"VZk4iEl+4hQZ6FV0dkjf7n2EhrR3aWXQE1kBWqX8LOsintxchOaiMvXphnqpMHUGGKF1y4VKdwgy\\n\",\n       \"1BGWcd4dc3rl8e4m1op00Qvv4/lq6ekgh7bj4Jbt4XZNYDeJ5pw9Z94FCRXsun3XDnaANyEn/Q5t\\n\",\n       \"CjQ2ZgUMJZUyGVgKZPnBIg7QoTBT5d1LgG6ZWT1HY/BXQOu8M/ZX6VQt5fm425+ATDkxvEY5tUMm\\n\",\n       \"yNR55TEHrgcPrr80wHZOYda1b0U654x6w/ZPHLPSucu7CM3J6jie9LONFaB1kJiImlgOaurPrkT4\\n\",\n       \"h/+4P4Ew992BS70MmLwUUPByYJg4i07Hled3uuf9FajxeKgJdIzXZ/CoujA6Y4Ax1DgKmkC+gttU\\n\",\n       \"uqjN9XMKMuX0qrZgfuIcV+VeA+1QZU4mD0Os+j2kkgpkFUAwVVFWF5lVIFNvOKtUGa/rX6bTybUy\\n\",\n       \"CatnaLyvIjTVZlXvVBzEuBztRcEty/J4xNU/s81bFUE5uFYOVumE68ZzEMIYvVdzEccDZSXCxLl2\\n\",\n       \"DMwibhlo6EmxzEVj6vmPMmzOO3ERAkKr+tNAmaolWooC2DSvIKaepXVbB3/st5o4CuBVdFXp3TkX\\n\",\n       \"vi9PKNdu15fVSaQeM7i2V9DgutM+UIdufxrZoG6qyC1lJRhQ/UVBMOc5hwQUxzgXXimtyL35pYAq\\n\",\n       \"Y0/cpWpCqrQDGKfZbjcoCDVlWFVU2kVhVVTmtgr21bJ7ssTpJrqLanic1LHJhFl1ZNxPd45qixJ2\\n\",\n       \"EFOoTSDG+umcS9ZbPWpQDiIF4aU2bNdkXHicsb0TmKKuKsdTycmXnFW0FlErYmrY1RKz+5PaKqpA\\n\",\n       \"o61g1m1VFKbA5j6gZQPGSNJNIBe5qHIWNwaVgVdREU5MFZ11Y10tZVR/XRvwPuq+zoGqicj7K3DG\\n\",\n       \"P+Ve9SWlsgPWoQOM0gm30QHNjREHANW9FchwqT6Ve7HkdJHaTYibpApmDm4RcU3ZKTi4VZ9UH1ee\\n\",\n       \"k1UvAFx0VoGM9aOgpvrJZV3qorFKj9WEUZPG7Xd51QZui7s/w6uL0lSdE+dcgSzrnb5Qw/uo1Yay\\n\",\n       \"8Q5sru0KZpMAhG3jhVlyTiY8Kl9JZbBqicEgq2CGZWwYPOBV1MnwwWOTFwDT782U3iaTpHt+5qTy\\n\",\n       \"0GqisrG6qKjasE7VRm5/BQPVHzXWDkS5z/pj0CXUHjx4EJeXl6WOqv6tRGgOaM4esM2cxz5jWyug\\n\",\n       \"VTBz0arr26rcq2do6qXAxDhZYXysi8pcGQItnzeg8OAqsCkIZTqNzCZ/WaMzXGc8LjqrDMtFGqqM\\n\",\n       \"AVZBbTU6475gn1QfJlBTWwUHBzBVxv3BFEUtOd2+itDYHtkZd2Dj86YptgnbyzBzcizMIk6w5Owg\\n\",\n       \"hvspk4lWlavITIGuAhq+mkdho2eYMdCwrHoJsLL0VFsVwbBu3J9qVnVwZOEiF57wmHdl0+iM6+X+\\n\",\n       \"cH5FHMhwfLFNFcA4cuP2T6Qbx2zT5E2nui/DC/Xn9O7KVCSGbe5gpoKS1TG8Fz99cvsR15eP00jC\\n\",\n       
\"XVPBy20JCL638uRqczBSX/5XbznVctN5Z9aF2meYKf2iVPByEKoAxh68A5mDGbeXx0pNpKpMQQ31\\n\",\n       \"y49DJlCbgmzVYUfoJWfnGLjfKpri9rp8VZZ1VfWyPajrpnIvn6EpoPEbR5ZJhLYSmT19+ua/BcNz\\n\",\n       \"qoneAUxFWwpm1eca7lmcm2xKLw7yrm+dh+yiKgUwvJajjWmEpmTSJxdBMMzU0k09p6yisnSE+A3a\\n\",\n       \"5eVl2X6lF3densufOjGAJzB10dQKuNT1VdSm6nG2MpWTLjk7qLnojN98VIRX1ypIKZipj2+zrpQq\\n\",\n       \"QqsiM04rsLk3m91bTtYF62WysVTQ6aKBCm7Tujup+oD35/uq+7uIGx+JcCSo7t+1/xBHjTKN0FSf\\n\",\n       \"lVSRXCcTmKkxcDBfBdu9iNBcPiKuRGYJl/SGWba6BFEgc9FaFaFNJkAFMwW2KqKbgCzzLqI9BGJO\\n\",\n       \"HHQ4gqmiM0yn0OqcF/ezu2/XN+d0+T4qSsvyKjJzfVDHVTunL4qOAdVEDoUZ13GMrH21dpaz3LIc\\n\",\n       \"a9B/EOWss7fkzoH22c9+9q5veWfy27/926duwq3K7/7u7976PW4qWjhEPvOZz5zs3sfIVGef+tSn\\n\",\n       \"brklp5cz0G5QXnagfe5zn7v1e5wy2rht27wtWE919ulPf/pW7n+f5LzkPMu9klNGaLcttwXrl1ln\\n\",\n       \"q3IG2lnulZyfB63LWWdvyXaLXuOs5bOc5Sy3Ivu+y7D01oB2lrOc5Sx3Lecl51nOcpaXRs5AO8tZ\\n\",\n       \"zvLSyJ0Bbdu2j2zb9ult235727a/fFf3vSvZtu2z27Z9Ytu2f7pt2z85dXuOlW3bfmrbtte3bfsk\\n\",\n       \"lP3hbdt+Zdu2/3Pbtn+0bdu/dso2HiOmf39927bPPxvDf7pt20dO2cZDZdu2d2/b9qvbtv3v27b9\\n\",\n       \"1rZt/+Gz8pdm/JzcCdC2bXsYEX8rIj4SEe+PiI9u2/ZH7+Ledyh7RHx43/fv2ff9g6duzA3IT8eb\\n\",\n       \"44XyIxHxK/u+/xsR8T8/239RRfVvj4gfezaG37Pv+z88QbtuQh5HxH+87/t3RMT3R8S//2y+vUzj\\n\",\n       \"J+WuIrQPRsQ/3/f9s/u+P46IvxcRf/qO7n2X8tJ8ELTv+69FxP9DxX8qIn7mWf5nIuLfudNG3aCY\\n\",\n       \"/kW8BGO47/u/3Pf9nz3L/38R8amIeC1eovFzcldAey0i8DPzzz8re5lkj4h/vG3bx7dt+/dO3Zhb\\n\",\n       \"klf3fX/9Wf71iHj1lI25JflL27b9b9u2/eTLsCTbtu29EfE9EfEb8Qdg/O4KaH8Qvg35Y/u+f09E\\n\",\n       \"/Il4M8T/t07doNuU/c3vfV62cf3bEfG+iPjuiPhCRHzstM05TrZt+8aI+O8j4j/a9/1f4bGXdPzu\\n\",\n       \"DGj/IiLeDfvvjjejtJdG9n3/wrP09yLiF+LNZfbLJq9v2/bOiIht294VEV88cXtuVPZ9/+L+TCLi\\n\",\n       \"J+IFHsNt2x7FmzD7b/d9/8VnxS/1+EXcHdA+HhF/ZNu2927b9nUR8eci4pfu6N63Ltu2/aFt277p\\n\",\n       \"Wf7tEfHHI+KT9VUvpPxSRPzQs/wPRcQvFue+cPJskqf8mXhBx3B788edPxkR/8e+7z8Oh17q8Yu4\\n\",\n       
\"w18KbNv2JyLixyPiYUT85L7v/8Wd3PgOZNu298WbUVnEm3808+++6P3btu3nI+JDEfEt8ebzlr8a\\n\",\n       \"Ef9DRPz9iHhPRHw2Iv7svu+/f6o2HiOif38tIj4cby4394j4TET8RXjm9MLItm3/ZkT8rxHxiXhr\\n\",\n       \"WflXIuKfxEsyfk7OP306y1nO8tLI+ZcCZznLWV4aOQPtLGc5y0sjZ6Cd5SxneWnkDLSznOUsL42c\\n\",\n       \"gXaWs5zlpZEz0M5ylrO8NHIG2lnOcpaXRs5AO8tZzvLSyP8P5bdSohzrzUEAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7939901850>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)\\n\",\n    \"       .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Something is happening. Let's run the net for a while, keeping track of a few things as it goes.\\n\",\n    \"Note that this process will be the same as if training through the `caffe` binary. In particular:\\n\",\n    \"* logging will continue to happen as normal\\n\",\n    \"* snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)\\n\",\n    \"* testing will happen at the interval specified (here, every 500 iterations)\\n\",\n    \"\\n\",\n    \"Since we have control of the loop in Python, we're free to compute additional things as we go, as we show below. 
We can do many other things as well, for example:\\n\",\n    \"* write a custom stopping criterion\\n\",\n    \"* change the solving process by updating the net in the loop\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Iteration 0 testing...\\n\",\n      \"Iteration 25 testing...\\n\",\n      \"Iteration 50 testing...\\n\",\n      \"Iteration 75 testing...\\n\",\n      \"Iteration 100 testing...\\n\",\n      \"Iteration 125 testing...\\n\",\n      \"Iteration 150 testing...\\n\",\n      \"Iteration 175 testing...\\n\",\n      \"CPU times: user 12.3 s, sys: 3.96 s, total: 16.2 s\\n\",\n      \"Wall time: 15.7 s\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"%%time\\n\",\n    \"niter = 200\\n\",\n    \"test_interval = 25\\n\",\n    \"# losses will also be stored in the log\\n\",\n    \"train_loss = zeros(niter)\\n\",\n    \"test_acc = zeros(int(np.ceil(niter / test_interval)))\\n\",\n    \"output = zeros((niter, 8, 10))\\n\",\n    \"\\n\",\n    \"# the main solver loop\\n\",\n    \"for it in range(niter):\\n\",\n    \"    solver.step(1)  # SGD by Caffe\\n\",\n    \"    \\n\",\n    \"    # store the train loss\\n\",\n    \"    train_loss[it] = solver.net.blobs['loss'].data\\n\",\n    \"    \\n\",\n    \"    # store the output on the first test batch\\n\",\n    \"    # (start the forward pass at conv1 to avoid loading new data)\\n\",\n    \"    solver.test_nets[0].forward(start='conv1')\\n\",\n    \"    output[it] = solver.test_nets[0].blobs['ip2'].data[:8]\\n\",\n    \"    \\n\",\n    \"    # run a full test every so often\\n\",\n    \"    # (Caffe can also do this for us and write to a log, but we show here\\n\",\n    \"    #  how to do it directly in Python, where more complicated things are easier.)\\n\",\n    \"    if it % test_interval == 0:\\n\",\n    \"        
print 'Iteration', it, 'testing...'\\n\",\n    \"        correct = 0\\n\",\n    \"        for test_it in range(100):\\n\",\n    \"            solver.test_nets[0].forward()\\n\",\n    \"            correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)\\n\",\n    \"                           == solver.test_nets[0].blobs['label'].data)\\n\",\n    \"        test_acc[it // test_interval] = correct / 1e4\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's plot the train loss and test accuracy.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 16,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.text.Text at 0x7f793878f490>\"\n      ]\n     },\n     \"execution_count\": 16,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAaAAAAEPCAYAAAAEfBBiAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJztnXm4HGWV/z9fwhK2JIRAgCTsYYkswsgiiwYBRVRwGxV1\\n\",\n       \"dNRxcEGZUcdtVBhHZ3Abcf8xiruCjguigohIANmXQBISIAECYd9CSFgTOL8/zlvpun2r+1bf23V7\\n\",\n       \"uefzPP10d9XbVe+t2/1+65z3vOfIzAiCIAiC0WadTncgCIIgGJuEAAVBEAQdIQQoCIIg6AghQEEQ\\n\",\n       \"BEFHCAEKgiAIOkIIUBAEQdARKhMgSTMkXSjpRkkLJH2woM1sSSskzU2PT1XVnyAIgrGOpO9Lul/S\\n\",\n       \"/CZtvi5psaQbJO1TZX/WrfDYq4F/NbPrJW0CXCvpfDNbVNfuIjM7psJ+BEEQBM4PgG8APy7aKelo\\n\",\n       \"YGczmynpAOA7wIFVdaYyC8jM7jOz69PrVcAiYJuCpqqqD0EQBEENM7sEWN6kyTHAj1LbK4FJkqZW\\n\",\n       \"1Z9RmQOStD2wD3Bl3S4DDkqm3jmSZo1Gf4IgCIJCpgHLcu/vAqZXdbIqXXAAJPfbr4ATkyWU5zpg\\n\",\n       \"hpk9IenlwFnALlX3KQiCIGhIvVeqsnxtlQqQpPWAXwM/NbOz6veb2crc63MlfVvSZDN7pO44kbAu\\n\",\n       \"CIJgGJhZK9McdwMzcu+np22VUJkASRJwOrDQzE5t0GYq8ICZmaT9AdWLT0aLFzFogqSTzezkTvej\\n\",\n       
\"H4hr2V7ieraXYdy8nw2cAJwp6UDgUTO7v/09c6q0gA4G3grMkzQ3bfsksC2AmZ0GvB54r6Q1wBPA\\n\",\n       \"myrsTxAEwZhG0hnAi4EpkpYBJwHrgY/JZnaOpKMlLQEeB95RZX8qEyAz+xtDBDmY2beAb1XVhyAI\\n\",\n       \"gqCGmR1Xos0Jo9EXiEwIY5U5ne5AHzGn0x3oM+Z0ugPB6KFeKEgnyWIOKAiCoDW6fewMCygIgiDo\\n\",\n       \"CCFAQRAEQUcIAQqCIAg6QghQEARB0BFCgIIgCIKOEAIUBEEQdIQQoCAIgqAjhAAFQRCURdoAaQuk\\n\",\n       \"8Z3uSj9QeTmGIAg6iCcF3hjP9/UU8DRmz3W2U6OMX4PxwITcY2Ld+0aP+nbrACuA1wEXjeaf0Y/0\\n\",\n       \"TCYEsPXMWNPpvgTBqOKD50bApBYem+VeTwSeAVYDG+AD8WpcjBo9nh5i/0jbrKHMwFP720cqGhOA\\n\",\n       \"Z3HheKzko1Hbp0v1vUvo9kwIvSRAJ5nx2U73JQhaonb33UggyjzWAI82eSxvsm8FZs/U9Wd9amLU\\n\",\n       \"6DHU/pG0WYfGIrUeNdHYNO0fjlDkHysxe7r5P6o/CQFqA0mAVgHTzHis0/0Jxjju/98F2B3YgaEF\\n\",\n       \"5TmaC0gzMVnRd4OntC4DhSl7vSFunWWishKz8HqMgG4XoF6aA7oB2A+4oNMdCcYI0kRcZOof04Db\\n\",\n       \"gEXp+RHgdorFZAVmT41637sZF5U1eL2ZYAzTSwJ0BfBCQoCCduIuqakMFJhZ6XkCcBMuNIvwCr8u\\n\",\n       \"OmarO9LfIOgjekmALqfi6nxBHyOtA2xPsUXzLC4sC9PzH9LzXWMuYizoeyQdBZwKjAO+Z2ZfqNu/\\n\",\n       \"GfB9YEd8Xu6dZnZjJX3poTmgacA8YAszur/TQWeQ1gdmMlhkdgEepmbN1B5mD3ams0FQLfVzQJLG\\n\",\n       \"ATcDRwB3A1cDx5nZolybLwGPmdl/StoV+JaZHVFF/3rGAjLjHokngJ2BxZ3uT9BhpE2A3RjsNtsO\\n\",\n       \"uIOawPwJ+CpwE2YrO9PZIOga9geWmNlSAElnAsfiv5WM3YFTAMzsZknbS9rCKrhR6xkBSlyOzwOF\\n\",\n       \"AI0VpC0odptNAW6hJjQ/w11oS/ouaiwI2sc0YFnu/V3AAXVtbgBeC/xN0v74Td10YMwLUBaI8ONO\\n\",\n       \"dySoAHefHQgcCbwIt2rWoyYyC4Hz0+s7MHu2Qz0Ngq5E0mxgdpMmZaYvTgG+JmkuMB+Yi8+Ttp1e\\n\",\n       \"E6BrgDd2uhNBm/AItN1xwTkSOBS3bs8HPo9/+e/rpZXnQdBJzGwOMCd7L+mkuiZ3AzNy72fgVlD+\\n\",\n       \"GCuBd+aOcTu+3KDt9JoAzQP2kFjHjIhO6kWkLfEJ0Ex0ngX+jFu1/4jZQx3sXRD0O9cAMyVtD9yD\\n\",\n       \"39Afl28gX//2pJk9I+ndwEVmtqqKzvSUAJmxQuIhYCdiHqg3kDYEDsHF5qV4KPQc3Mr5b2BxWDhB\\n\",\n       \"MDqY2RpJJwDn4WHYp5vZIknHp/2n4a7vH3r0MQuAd1XVn54Jw85CCSXOBn5kxq873K2gCF9vszc1\\n\",\n       \"C+dA3HL9My46V0V6lSAYHSIVT/u5AdgLQoC6Bmk6NcE5Ak9Bcz7wTeD1mK3oYO+CIOhSelGA5gFv\\n\",\n       
\"6XQnxjTSpnikTSY6W+Apks4HPoHZHZ3rXBAEvUIvuuB2Bc41Y8cOd2vs4NmLX0BNcPbBV1BnbrW5\\n\",\n       \"kbImCLqPbnfB9aIAjcNTtW8dpRkqRNqJmuC8BF+8dn56XIzZEx3sXRAEJeh2Aeo5F5wZz0rcCOwB\\n\",\n       \"XNbp/vQN0mRcaDLR2RAXm7OAEzC7t4O9C4KgD+k5AUpcARxOCNDw8awDL6QmOLsDf8NF5xvAjREe\\n\",\n       \"HQRBlfScC87fcxBem2VWZMZuAWka8DrgZXjWgZupudUuixxqQdBfdLsLrlcFSHhqiNeaMbdzPesB\\n\",\n       \"pM2B1+OrnfcCzgbOAS7A7OFOdi0IgmrpdgHqSRecGSbxc+DNEAI0CC9VcCwuOofiJQlOBf4U5aGD\\n\",\n       \"IOgWetIC8m3MwsOAt428cIC0AfByXHSOwudzfg6cHXVwgmBsEhZQRZixUOIZvCjZwk73pyP4+pzD\\n\",\n       \"cNF5Nb5I9wzgfeFeC4Kg21mnqgNLmiHpQkk3Slog6YMN2n1d0mJJN0jap8XTXAocNPLe9hCSkA5C\\n\",\n       \"+gaeRv2/8ISBe2I2G7PTQnyCIOgFqrSAVgP/ambXy+ckrpV0fl3t8aOBnc1spqQDgO/gySvLchku\\n\",\n       \"QN9rZ8e7Dq+bsxdu6bwJeAK3dA7FLLKCB0HQk1QmQGZ2H3Bfer1K0iJgGwbWHj8G+FFqc6WkSZKm\\n\",\n       \"mtn9JU9zKfAvbex2dyHtjAvOccDGQFa/fV6s0QmCoNcZlTmgVPxoH+DKul1F9cmnA2UF6EZgK4kp\\n\",\n       \"ZvRHITNpG2pForYF/g94N3B5iE4QBP1E5QKU3G+/Ak5sUFWvPkKjcJCVdHLu7Rwzm5PS8lyBu+HO\\n\",\n       \"bkd/O4Kv1XkdLjp74+lv/h24MGrnBEHQr1QqQJLWw+v2/NTMzipoUl+ffHraNggzO7nBaS4DXkSv\\n\",\n       \"CVDxWp2vEWt1giCoEElH4esCxwHfM7Mv1O2fAvwU2ArXiC+b2Q+r6EuVUXDC0+UsNLNTGzQ7G3hb\\n\",\n       \"an8g8GgL8z8ZvwHeKnGBxM7D7vBoIG2A9GqkX+BC+2Y8mGA6Zm/E7KwQnyAIqkLSOLxQ5FF46e3j\\n\",\n       \"JO1e1+wEYK6ZPR+v+/UV+ZKPtlOlBXQw8FZgnqQsW8En8XkNzOw0MztH0tGSlgCPA+9o9SRmzJfY\\n\",\n       \"Fvgi8CHgfW3pfbvwf/hhuNjEWp0gCDrJ/sASM1sKICkLbMoHh92LR90CTAAetoqmAqqMgvsbJSws\\n\",\n       \"Mzth5OfiGYmvAVdJnGjG6pEec0S49Xcg7l57Ax5ocQbwacwKXYxBEASjQFHg1wF1bb4L/FXSPcCm\\n\",\n       \"+BhWCT2bCaEeM26XuAUvLXDOqHfARWdPamt1niTW6gRBMIpImo27zRpRJpL2k8D1ZjZbXpjyfEl7\\n\",\n       \"WwUpvfpGgBI/B95CJwQIfoebrWeSudoibDoIglHEzOYAc7L3kk6qa1If+DUDt4LyHAR8Ph3vVkm3\\n\",\n       \"A7sC17S5u72bjLS4HVsCi4EtzRi92jZSdt4pmHXW/RcEQZAYXMpG6+J1wA4H7gGuAo6ry1DzP8AK\\n\",\n       \"M/sPSVOBa4G9zOyRdvevsii4TmDGA/gE2mhHw70EuCjEJwiCbiYFE5wAnIcncf6FmS2SdLyk41Oz\\n\",\n       
\"/wJeIOkG4C/AR6sQH+gzC8jbchbwUzN+VXG38ic9Hbges2+M2jmDIAiGoNvLMfSVBZS4CS/RMDp4\\n\",\n       \"8MGReFnrIAiCoCT9KECLGE0Bgpn4dbx5FM8ZBEHQ8/SjAN0E1K/srRK3fnrBlxkEQdBF9KMA3Qzs\\n\",\n       \"Ko3a33YE4X4LgiBomb4TIDMeBVbiK36rxUMaZwMXVH6uIAiCPqPvBCixiNFxw+0H3EnrCVSDIAjG\\n\",\n       \"PP0qQKMVCXcEHicfBEEQtEgI0MiI8OsgCIJh0q8CtAivdVEdXlBuX+CSSs8TBEHQp/SrAC0A9pAG\\n\",\n       \"lftuJy8Grsbs8QrPEQRB0Lf0qwDdl563qvAc4X4LgiAYAX0pQGYYMB+vz1MVEYAQBEEwAvpSgBLz\\n\",\n       \"gT0qObK0DbANnqY8CIIgGAb9LEALqM4COgL4K2bPVnT8IAiCvqefBahKF1y434IgCEZI39UDqn2G\\n\",\n       \"TfFghAlmtM9S8fILdwOHYnZr244bBEHQZqIeUIcwYyXwALBjmw89C3gauK3Nxw2CIKgcSUdJuknS\\n\",\n       \"YkkfK9j/EUlz02O+pDWSJlXRl74VoEQV80BRfiEIgp5E0jjgm8BR+M30cZIG5M00sy+b2T5mtg/w\\n\",\n       \"CWCOmT1aRX/6XYBuB7Zt8zFj/U8QBL3K/sASM1tqZquBM4Fjm7R/M3BGVZ3pdwF6ENiibUeT1gcO\\n\",\n       \"Bf7atmMGQRCMHtOAZbn3d9GgdI2kjYCXAb+uqjPrVnXgLuFBPF9buzgQuAWzh9t4zCAIgrYgaTZe\\n\",\n       \"o6wRrUwdvAr4W1XuNxgbAtQ+CyjCr4Mg6GLMbA4wJ3sv6aS6JncDM3LvZ+BWUBFvokL3G4QLrlVi\\n\",\n       \"/icIgl7mGmCmpO3lUwpvBM6ubyRpIvAi4HdVdiYsoLJ4GOIewKVtOV4QBMEoY2ZrJJ0AnAeMA043\\n\",\n       \"s0WSjk/7T0tNXw2cZ2ZPVtmfvl2I6p9jCnCLGZPb0IlXA+/D7KUjPlYQBMEoUPVCVEnjbAQpyfrd\\n\",\n       \"BfcIMEFivTYcK9xvQRAEA1ks6UuShlUAtK8FyIzncBHavA2HiwCEIAiCgTwfWAx8T9KVko6XNKHs\\n\",\n       \"h/tagBIPAVNGdARpW2Az4IZ2dCgIgqAfMLPHzOx/zewg4GPAZ4D7JP1I0s5DfX4sCFA7AhGOBC7A\\n\",\n       \"7Lk29CcIgqAvkLSupGMlnQWcCnwFz7/5e+CcoT7f71Fw0B4BCvdbEATBYG7B1x190cwuy23/laQX\\n\",\n       \"D/XhSi0gSd+XdL+k+Q32z5a0Ipd59VMVdGNkAiStAxxOBCAEQRDUs5eZvbNOfAAwsw8M9eGqXXA/\\n\",\n       \"wLOuNuOiLPOqmX2ugj6M1ALaC3gUszvb1J8gCIJ+4Vv5Ug2SJkv6ftkPVypAZnYJsHyIZlUXSxqp\\n\",\n       \"AEX4dRAEQTF753PFmdkjtJB/s9NBCAYcJOkGSecMN5Z8CB4EtpA4XBqccqIEIUBBEATFSNLk3JvJ\\n\",\n       \"eIaFUnQ6COE6YIaZPSHp5cBZwC5tPkdmAb0NeKnEeDOeKvVJaTzwQuANbe5TEARBP/AV4HJJv8S9\\n\",\n       \"WX8PfL7sh4cUIEmbAE+a2bOSdgV2Bc5NxYxGhJmtzL0+V9K3JU1OZlx9P07OvZ2Tsr6W4UFgOrA3\\n\",\n       
\"ngl2f+Dikp89GFhAhenIgyAIehUz+7Gka4GX4B6t15jZwrKfL2MBXQwcImkzPIHd1XgG1bcMo78D\\n\",\n       \"kDQVeMDMTNL+eG66QeIDYGYnD/M0DwI7AxfhfX8x5QUowq+DIAiaYGY3SnoIGA+YpG2tZNBWmTkg\\n\",\n       \"mdkTwGuBb5vZ3+NZoYf+oHQGcBmwq6Rlkt6ZUjUcn5q8Hpgv6Xp8EdObyhy3RbLicb/ERWjI2PQc\\n\",\n       \"Mf8TBEHQAEnHSFoM3IavB1oKnFv680Nlw5Y0F3gf8FXgXUnt5pvZnsPtdKuMNKOrxN9wAX0GL0e7\\n\",\n       \"uRnPDPGhzYHbgSmYNW8bBEHQhYxCNux5uPvtfDPbR9JhwD+Y2TvLfL6MBfQvwCeA3ybx2Qm4cNg9\\n\",\n       \"7gBmHGLGA2Y8CiwBXlDiYy8BLgnxCYIgaMhqM3sIWCeVZriQcuMrUEKAzOwiMzvGzL4gzwrwoJl9\\n\",\n       \"cAQd7jSXU+4ChfstCIK+Q9JRkm6StFjSxxq0mZ2y0yyQNKfJ4ZZL2hS4BPiZpK8Dq8r2ZUgBknSG\\n\",\n       \"pAmSNgYWAIskfbTsCbqQu4BtSrSLAIQgCPoKSeOAb+IZamYBx0nava7NJOBbwKvMbA98rr4RxwJP\\n\",\n       \"AP8K/An3ML2qbH/KuOBmmdljeInWc4HtgX8oe4Iu5B6GEiB3M44HbhyNDgVBEIwS+wNLzGxpWkpz\\n\",\n       \"Ji4ied4M/NrM7gJILrZBSFoX+IOZPWtmq83sh2b2dTN7uKh9EWUEaF1J6+EC9PvU6e6v492Ye4Gt\\n\",\n       \"h2hzJPAXeqFeeRAEQXmm4YFYGXelbXlmApMlXSjpGkmFBoeZrQGey+eCa5Uy64BOw0Pr5gEXS9oe\\n\",\n       \"WDHcE3YBZQToCBhW2p4gCIJupsxN9Xp4PrfDgY3wTAdXmNnigraP40tp/oy74gCsbJzAkAJkZl8H\\n\",\n       \"vp69l3QHHiHWqzR3wbmP9CVALwdaBEEwBpE0G5jdpMndwIzc+xm4FZRnGfCQmT0JPCnpYjyTTJEA\\n\",\n       \"/SY98pT2HJVZBzQJOAl4Udo0B/ismY2aFdTOWHYJAU8BEwtzwkn7AT/E7HntOF8QBEGnqB8707zN\\n\",\n       \"zbh1cw9wFXCcmS3KtdkND1R4GbABcCXwxlZS7JSljAvu+8B8PMmc8ACEH+ALO3sOM0ziPtwNd3tB\\n\",\n       \"kwi/DoKgLzGzNZJOwNOqjQNON7NFWXYaMzvNzG6S9Cd82uU54LuNxEdS0RhqZrZjmf6UsYBuMLO9\\n\",\n       \"h9pWJe1ezStxBfAhMwZV8UO6EPgyZn9s1/mCIAg6wShkQpiSezseD9ne3Mw+XebzZaLgnpR0aO6E\\n\",\n       \"h1CbbOpViueBfK3TfpRPVhoEQTBmMbOHco+7zOxU4BVlP1/GBfce4MeSJqb3y4G3D6Ov3USjSLhD\\n\",\n       \"gevIlYkIgiAIipH0d9SCDtbBs8y0ryCdmV0P7CVpQnr/2DD62W00ioSL7AdBEATl+Qo1AVqDL9kp\\n\",\n       \"XcCzoQBJ+nDureW2C59k+p+Wutld3Estqi/PkbjFFwRBEAyBmc0eyeebzQFtCmySHpvmHtn7XuYe\\n\",\n       \"kgtOSuaiF8fbDi9aFwRBEAyBpP/KZ0KQtJmkz5X+fC9km6kgCm5v4KfAF4APAAcaOg54A2avbtd5\\n\",\n       
\"giAIOskoRMFdb2bPr9s218z2KfP5MkEI/cg9uLXzZXzC7PnE+p8gCIJWWUfSeDN7CkDShsD6ZT88\\n\",\n       \"VgXoYWBD4MfAw+K5f8QDEE7pZKeCIAh6jJ8BF0j6Pp6o4B34uFqKMemC82Pyb8B3gc1nceNVC9jj\\n\",\n       \"ccF2kQE7CIJ+oWoXXDrHy/HUPuCluc8r/dkSmRDGA6/D6wBlFpOZ2Wdb7+rwqPoifkafXfImzrx9\\n\",\n       \"li08sqpzBEEQjDajMAe0A3BfSlyaueCmmtnSMp8vkwnhd8AxwGq81OoqPAV33/BqzrLf86oHOt2P\\n\",\n       \"IAiCHuNXwLO598+lbaUoMwc0zcxe1mqvegZpvd0ZP/3N/Px3hcXRgyAIgkaMM7Nnsjdm9nQqYFqK\\n\",\n       \"MhbQZZL2GlbXeoP9VjDxkZvZrcy1CIIgCGo8JGltSe/0urCEdxFlLKBDgXektNtPp21mZv0iSkcu\\n\",\n       \"YvcF+ALbIAiCoDzvAX4m6Zvp/V14yZ5SlBGglw+nVz3EkZdy8CV4kEUQBEFQEjNbAhwgaVN/a6ta\\n\",\n       \"+XyzXHATUuLRfkg+WownWH3+d3n3qcAene5OEARBryHplcAsYLynCoWyUdLN5j3OSM/XAdcWPPqB\\n\",\n       \"FwNX3sl2jxAuuCAIxgCSjpJ0k6TFkgbFXkmaLWmFpLnp8akmxzoNz379QXwh6hvwLDOlaGgBmdkr\\n\",\n       \"0vP2ZQ/Wg2TlF1YSAhQEQZ8jaRzwTXzsuxu4WtLZZraorulFZnZMiUMeZGZ7SppnZv8h6SvAn8r2\\n\",\n       \"p1QqHkmbATPxkqsAmFk/VA09Engbvq6p1zN8B0EQDMX+wJJsoaikM4FjgXoBKrt49cn0/ISkaXia\\n\",\n       \"s63KdmZIAZL0bty8mgHMBQ4ELgdeUvYkXYk0HZiK/03bEBZQEAT9zzRgWe79XcABdW0MOEjSDbiV\\n\",\n       \"9BEzW9jgeL9PBsqXqE3NfLdsZ8pYQCcC+wGXm9lhknYD/rvsCbqYw4G/YvYsYhUhQEEQ9D9lcl1e\\n\",\n       \"B8wwsydSnrezgF0KD2b2n+nlryX9ERhvZo+W7UwZAXrKzJ6UREq7fZOkXcueoIvJl19YBWwiIWAC\\n\",\n       \"8DIzftmxngVBEAwDSbOB2U2a3I17szJm4FbQWsxsZe71uZK+LWmymT3S7NypJMNTLfW3RDLSs/AU\\n\",\n       \"2yfiVsNyYF0zO7qVE42EtifU81jBe4GDMLvNN/EUMAk4GPiqGf2y0DYIgjFK/dgpaV3gZnwsvwe4\\n\",\n       \"CjguH4Qgrw79gJmZpP2BX1YVjDakBWS1CqEnS5qDWwiloxy6lD2AxzPxSazCAxE2BzbrSK+CIAgq\\n\",\n       \"xMzWSDoBOA8vxnm6mS2SdHzafxrweuC9ktYATwBvqqo/TS2gpJYLzGy3qjpQhgosoA8Bu2D2ntom\\n\",\n       \"lgKHAS8DvmLGxm07XxAEQQcYhXIMF5jZ4UNta0TTBJxmtga4WVLphUU9Qrb+J08WiLA5sJFUvqxs\\n\",\n       \"EATBWELShpI2B7aQNDn32B6PtCtFmSCEycCNkq6iVgfIyixSSmVaX4H7E/ds0ObreL65J4B/NLO5\\n\",\n       \"pXo+XKQNgEOAt9btyRajTk7vNwPur7QvQRAEvcnxeFzANgzMjLMSX+haijIC9CkGL0oqW7b6B8A3\\n\",\n       
\"aFAjXNLRwM5mNlPSAcB38HVGVXIgcDODIzryFhC4EIUABUEQ1GFmpwKnSvqAmX1juMcpUwPnFWY2\\n\",\n       \"J/8ASkXAmdkleNRcI44BfpTaXglMShEYVZIPv86TBSHkLSCktYIUBEEQDOT+lAkbSZ+W9BtJ+5b9\\n\",\n       \"cBkBOrJgW7tCsItW5U5v07Eb0UyAMgtoJbVIuGukKNUQBEFQwKfNbKWkQ/DQ7u8D/6/shxsKkKT3\\n\",\n       \"SpoP7Cppfu6xFJg30l7nT1X3vqx7bxhn0mZ42vDLCvZmc0CbA0uAyRLr4Au1tqysT0EQBL3Ls+n5\\n\",\n       \"lcB3zewPQOmS3M3mgH4OnAucAnyMmlCsNLOHh9HRIupX5U5P2wYh6eTc28wV2CqHAZdi9nTBvswC\\n\",\n       \"mowL7GbAFDxWPtYFBUEQDOZuSf+Le5ZOkTSecp41oHk5hhXACipchAScDZwAnCnpQOBRMyuc+Dez\\n\",\n       \"k9twviMZHH6dkc0BbQbcmp63TvsmN/hMEATBWOYN+NrJL5nZo5K2Bv6t7IdLlWMYLpLOwIu+TZG0\\n\",\n       \"DDiJZJ6Z2Wlmdo6koyUtwUO831Flf/D1P438k6vwDAlP4tFvO1AToLCAgiAI6jCzxyU9iC9tWQys\\n\",\n       \"wacwSlGpAJnZcSXanFBlH9biC6QmAPMbtFiFV/J7BI/c25daXYuwgIIgCOpIUyN/B+yKL7tZH/gJ\\n\",\n       \"nlNzSEr76vqAI4ALMHuuwf6VuAA9jAtQ5oJ7hrCAgiAIingNXtDucQAzu5sWinuOJQFqFH6dsQoP\\n\",\n       \"iHgYt4Im4wJ0MyFAQRAERTxtuZt6SS3l0BwbAiStg8eoNwpAABeg9am54DbDXXCLCBdcEARBEf8n\\n\",\n       \"6TQ8icA/AxcA3yv74UrngLqI5wMPYbasSZtV6bneBfcXer38eBAEQQWY2ZckvRSfwtgFX5jazNM0\\n\",\n       \"gLEiQEO538AvINQsoMwFdyNeHyMIgiDIIekLZvYx4M8F24ZkbLjgissv1LPWAjLjSeA5YFvcBRdz\\n\",\n       \"QEEQBIN5acG20qna+l+ApA3xDNhzhmiZCVCWJXs5sBq4kxCgIAj6BElHSbpJ0mJJDS0VSftJWiPp\\n\",\n       \"tQX72pKqbSy44A4B5uOZHZqR1TrK0gwtx2sUrQLWl9jAjKIUPkEQBD2BpHF4vZ4j8LRnV0s628wW\\n\",\n       \"FbT7AvAnBufrhDalahsLAnQEQ8//YMZqiaepWUCPpO0m8QhuBd1XWS+DIAiqZ39giZktBZB0Jr6O\\n\",\n       \"Z1Fduw8AvwL2KzpIu1K19b8LrlwAQsZKBlpA9+VedywUW2J9qfziriAIggYUlcAZUEJb0jRclL6T\\n\",\n       \"NlVWoaC/LSBpCrATcGXJT3wAuD29Xo4rfPa6k/NAb8ZTW7y7g30IgqDLkTQbmN2kSRkxORX4uJmZ\\n\",\n       \"JFHsgmsL/S1Avvj0YsxWl2lsxpm5t/cBD6TXmQuuU0yklhg1CIKgkFSmZk72XtJJdU3qS+DMwK2g\\n\",\n       \"PH+HVygAL0nzckmrzezsdve33wWoFfdbPSfjodjQYRccMB6iNHgQBCPmGmCmPDnzPcAbgQFJo81s\\n\",\n       \"x+y1pB8Av69CfKCf54BcvpvV/2mKGU/mot467YLbEL8TCYIgGDZmtgavwXYesBD4hZktknS8pONH\\n\",\n       
\"uz/9bAHtjP999dEdwyFLTtopxhMCFARBGzCzc/EQ6vy20xq0rbRGW/9aQFn2A7N2RHB0gwU0Serr\\n\",\n       \"G4YgCMYY/SxAI5n/qafTAjQ+PUdW7iAI+ob+FCBpXeAwhjn/U0CnXXAbpudwwwVB0Df0pwB5GOFd\\n\",\n       \"mLUrc8FyYAepYyKUWUARCRcEQd/QrwLUTvcbeOjiFcASide08bhl2RB4lrCAgiDoI/pVgMqUXyiN\\n\",\n       \"GU+Z8U7geOA97TpuC4zHY/ZDgIIg6Bv6T4CkTXAX3EUVHP084CCJTSTGS7ywgnMUsSG+WjlccEEQ\\n\",\n       \"9A39GNb7IuBazB4fsmWLmPGYxFV4ip+dgH/Bi9ZVzXjgNsICCoKgj+hHAWqr+62AP+Iluo8Apkis\\n\",\n       \"Z0apXHMjICygIKhDYm9gnll12ZqDauk/F1z7AxDq+SPwVuByPGHptObN28J4XIDCAgqCGr8Fdu90\\n\",\n       \"J4Lh018CJG0FTAeurfAstwB/Bj4L3AFsV+G5MjILKAQoCGpsTHgFepp+c8EdAVyIJ9yrhGTuvwxA\\n\",\n       \"GjUByiyg+LEFQY2NiN9ET9NfFlD17rd6wgIKgg4gIVyAIj1VD9M/AuTlF6oOQKincgFKP7QNgHuB\\n\",\n       \"CZGQNAgAWB8fv8IC6mH6R4B8MnI1sGQUzzkaFtD6wOoUafconU2KGgTdwkbpOQSoh+knAWpn+YWy\\n\",\n       \"jIYAbQhcmI0AAAAgAElEQVQ8lV4/TLjhggBqAhQuuB6mnwRotOd/AO4EZkiNr6PEBInPj+Ac46kJ\\n\",\n       \"0EOEAAUBhAU0bCQdJekmSYslfaxg/7GSbpA0V9K1kl5SVV/6Q4Ck9fAMCBeM5mnNeBxYBWzZpNm+\\n\",\n       \"wCelYQvHhsCT6fVtxLqHYBSR2E7isE73o4CwgIaBpHHAN4GjgFnAcZLqx5S/mNneZrYP8I/A/1bV\\n\",\n       \"n/4QIDgAuBWzhzpw7qHccLum5+H+iPMW0AV4GqC+R2JSp/sQAHA08MFOd6KAjfAbs7CAWmN/YImZ\\n\",\n       \"LTWz1cCZwLH5BjYwjdkmuOelEvpFgDrhfstYSnMB2g1YxvCFI28BXQAcnnf5Sewpsekwj92VSMyg\\n\",\n       \"2sXEQXm2ojvdvhvhSxPCAmqNafh4lHEXBdlcJL1a0iLgXCq8AekXARrt8Os8dwAnSnxb4nKJi+v2\\n\",\n       \"7wacxvAFaK0FZMYy/G5k79z+rwHHAEhsILFF0UEkjpWYOcw+jDab4wNf0Hm6WYCWERbQACTNlnRy\\n\",\n       \"9ihoUipIy8zOMrPdgVcBP2lnH/NUKkAlJrtmS1qRJrvmSvrUME4yEdgL+FsbujwcvgH8EFgEfAbY\\n\",\n       \"X2KD3P5dgV8Bk6TmmbMlZkq8UWKv3Oa8BQQutEfk3m+Lpx8CeCPu3y3ivcDrmv8pXcOmwEbS2kqw\\n\",\n       \"AEiM61B/xjJT6c5BfiM8KlTS2pL1Yx4zm2NmJ2ePgiZ3AzNy72fgVlCj410CrCupku9AZQJUcrIL\\n\",\n       \"4CIz2yc9PjeMU80GrsDsyaEaVoEZS834rhnfMON83CLaGSD9MLYBbgX+CjSMJkmWy3zg48CHc7vy\\n\",\n       \"c0CQE6DkiptB7Qu1Ez5gFDGdgZZTN5O5FNe6VyS2Am7sTHfGNFsBmzeL9OwQGwGP4yLUjQJZGRKv\\n\",\n       
\"HMH/4xpgpqTtJa2P37SePfD42km+sB9J+wKY2cMj6XMjqvxSDTnZldAIz9NJ91sRN+FuN4CZwG1m\\n\",\n       \"rMEttAOafG4rYDHwSQaKSL0FNAc4JH0Bt8AXqmYCtF3aVsQ0GGBZdTODBIiBll4wemyFjxPdFhSy\\n\",\n       \"EfAE8AhjTICAMxhmFn7zPJkn4MU1FwK/MLNFko6XdHxq9jpgvqS5uIv/TW3ocyFVpnUpmuyqH4AN\\n\",\n       \"OEjSDbhp+BEzW9jieY4E3jzsXrafRdRCpXfFBQk8i3aRAGdMwed37megAA2wgMx4VGI5LjpbAs9Q\\n\",\n       \"E6DtKfDXS2ySjrOjxHizARZVS6TUQOtWXAOpSIC2AjaWGGfGsxWeO0ik//VWeBqoKfhg3y1kAvQw\\n\",\n       \"YygQIaXi2gTPiLJsiOaFmNm5eHBBfttpuddfBL44gm6WpkoBKjPZdR0ww8yekPRy4Cxgl6KGdRNq\\n\",\n       \"c8xsDtIM/Idx/Ug720ZuojZHsxs1AVoMTYMAGglQvQUELma74l/Ea3PH3Y7kLjHjuVz7afgNwFO4\\n\",\n       \"O/S6Fv6eevYDvgocPIJjDEUjAQL/m1dUeO6gxgT8BucO/Pt5S2e7M4CxagFNTM99IbpVuuCGnOwy\\n\",\n       \"s5Vm9kR6fS6wnqTCC5ufWDOzOWnzEcAFmD1X9JkOkbeAdgNuTq/vBKbWT6zn2AIXoAfwSqvZ/6Z+\\n\",\n       \"Doh0zF1xt9S1wKbJypmW2k6sa58J0A2MfB5oGrBfk7+jHWySnvPfha3Tc0+FnHdT5KHEKS32Zyv8\\n\",\n       \"hqgb51mGZQGlSNFeDlrIXKEhQENQZrJram6ya39AZtaKmd/J9T+NuAnYNQnI3+F+VtI80B3Ajg0+\\n\",\n       \"NwV4KLm2VlL7gjWygHbBBegOXOwPAB6k5i7JM52aAI10HmgysB7VBjRsilvQRRbQhArP21aSC+ta\\n\",\n       \"ae1i5E7zUuB5LbTfCq/6uzYFlMR+FfRrOAzXAjoRj1btVTIB6oukxJUJUMnJrtfjk13XA6fSymSX\\n\",\n       \"tA6+tqabAhAwYwXwGHA8bo1ck9u9hMZuuMwFB37XmQ24zSygGbhltQw4BF8U+xCDAxGm4yI1j5EL\\n\",\n       \"RyYK+4/wOM3YFLiHYgHqJQtoCt7f53e6I4mJtDZYTyUnQBKTgasqtn7LMtw5oBlUEMwi8QJpVG6O\\n\",\n       \"+soCqrS2TInJrm8B3xrm4fcEHsNs6bA7WB03AacA704VVDOazQNNAa5Mr7N5oAW4BbSyrm1mAd2P\\n\",\n       \"i08mQHfgA169BTQNdw3eAOwlobp+tcJkPKy8agG6g4GD5Vb4gNMzFhA1a3dv4Bed7EhiEq0JUOaC\\n\",\n       \"yyygmbntS9vaM9ZOsFvJIJNMgEQt6rQMW1JNRN8XgO/hEWpV0lcC1G2x/a3QbeHXeRbhP9xf121f\\n\",\n       \"TFojVMAU3IUGAwMRiiygpfggMJOaBfRCahZQIxfc/bhrayRZBiYDf6J6AbqTwRbQEnrLAtoRT1bb\\n\",\n       \"8fVXyR3YqgVU74LLAoS2rm8ocYjEV0fYzVOAd5ZsO1wLaEuaJw8eLpszcM67KsIF1yV04/xPxveB\\n\",\n       \"dxXcybXigssEaNAcUJpPuh0fjO/DBWhj3GpoJEB3J6vnRtI8gMQbGvn0077fS3y3btfm+JqmbSQ2\\n\",\n       
\"qygPXWYBTU59ET7oLab3LKBzSAIksYU0KEBktNgIGMfwXHBZEELeAqpnJ0YutFvReCF1PcOdA5ra\\n\",\n       \"wjlaYTKjJ0B3ExZQB5HG42HAF3a6K0WYca0ZlxTsGsoFVyRARRYQ+DzQ3UnksvUASykWoCwKDlyA\\n\",\n       \"9kivPwR8Jw3wa5E4HPgKcDGeaSLPZNxSuw5fXPyIxA4N/qbhsgn+t2Q/sol4OPB9tMECkviixAtH\\n\",\n       \"epwS7Ih/RydIbI6nbPrnBn3apGh7G8nunEdqAT1GgQWE/49GKq4TKH+DkRegVi2gLSrI7DAZmqfa\\n\",\n       \"ahOT8LIsIUAd5IXAQsyWd7ojLZKFYr9A4kN1+1oVoFvS8aAmQHfg4rBWgCTWx7+sD6RNC4DnSayH\\n\",\n       \"z6ONB16Taz8e+A7wfjyVUn3Bvcn4j/5M3KK7nBYj6yT2ltinSZMBFhC1gfAx2mMBvZDRyQqxI36N\\n\",\n       \"5uGLkF9O48Se10vsVGFfMnEY6RzQ3ximAEm8bYiBfyLlbzAyAXp0qPPmzr8ePoA/QRvngVLux40Z\\n\",\n       \"PQvodsIF11G62f3WkOQ6W4pHBn5acp+65MW1zHgiNW3qgkvMpbbGKBOgOxkcBbcNcF/OHZi54J6X\\n\",\n       \"+vJvwOdyA8MHgYVmnG3Gk/gPPD/gTAYeMeM7ZrwfD5wYFNorMVHivQ0uxdvwKMFGFAnQvXgwRjtc\\n\",\n       \"flMoHkTbzY743eoNwH/h/5tBApBEf0eoXIBW0Vpm63wU3Ba4BXQxwxCg9B3/UYPPZrRiAW2MC8kK\\n\",\n       \"yovJFNydeC8tuuHqEgzXMxm30EfLArqdsIA6SjcHIAzF2/ConR8Cb03bskWoGWUsoDNJ7hwzHgEO\\n\",\n       \"SwJW74LLu9+gJkD74SHif8LvprZJ+w/BB4qMpQysd5RZQPnjzSro3wuBb0scVbBvK5pHLmVh2OOT\\n\",\n       \"BdduC6hyAcr1exkuQFPxpQZFA8f2eDRXlQPYJDx6sZQFlG5ItsQt5+X4d+QpPMCmoQDVu3NzZK7n\\n\",\n       \"ZgP/cCygVcCGKYJuKLbEf1sP0EIgQkoUvLhJk83xG41NspvJCgkXXEfxTAm74a6fnsOMK814EPgx\\n\",\n       \"8A/ph553v0EJC8gMy4dSmzEnvawXoB2oueow4+F0vFcD16RjLMUHQXCxuSP3+TuyfekucH38R5+x\\n\",\n       \"kOLFjTvgg9W3C36UW9NAgNIAtkk6xyP4wLc1LkBrLSCJGcMpz5Cu92Sqt4C2w+foVgMXAacDl1Es\\n\",\n       \"AJnlU6UATSQNXAVzfpI4sK79ZsDjZjydLPdHcbfvvTQWoPWg4RqhbDFus4G/5TmglHLqMcq54abi\\n\",\n       \"4nP/EP2oZyY0/b5Nxn93dwHTJQ6V+GgLx2+FSfjveeOSotvV9J4AeUmDv2H2dKc7MkKux9PJH8xg\\n\",\n       \"AXoA2DINFI0soEbUC9BeeJmHPAuAl1FbJLuUmpVTL0D5fZvh7rf8GqJFeOaH+h/njrgldTXwgbp9\\n\",\n       \"W+FzYUV+7I2AZ9Kgl0U4FVlAv8RX9g9AYh2J4wqOmzEJ/95XLUCZ+w0zbjbjn2ic0mZH/LvQdgFK\\n\",\n       \"wQ/gf/dD+M1H/SC/NXBxnTBtQW1ZAOmzi3EBKoqCm1j3XE8Wwl1oAeXCxFu1gGCIeSCJbdLxh2UB\\n\",\n       
\"UVvP1cjVtzn+v12G/w9fD7yrheO3wiT8d9GK67Fr6UUB6mX321rSIP4T3A03QIBStuon8AG/0RxQ\\n\",\n       \"Ix7FXQHrpfd74y6gPFldnSyJ6x3A9ilEeF0GutiWUrOO6t1vmLESH6jqI+GyAfh/gdfW7ds6Hbco\\n\",\n       \"Rc2m1BbeZhFOmQCtpDZ4TqfY8poG/Fxam4+vnim4oI+GAN1at61RxNZO+OR+WwVIXto8u/mYiH83\\n\",\n       \"ikRwW9x6yd8QbEEtcAX8+3kLPoBvUXDDMZQA7ZqO0cgFNx7/7g1pAaVzr0/txmxF/XmTCzTjz/iN\\n\",\n       \"XuZSfKCoHxJfbRAIkn23G7m9st/FnXggwqHALlIl640m4f/HVqP/upJeFKCeDEBowK9wV9hUBt5t\\n\",\n       \"Qs0N15IFlFwS+bURezM4W/iNwIIUZAA1K2c74I46C2etCy4dsyhX343ALIndVKv6mgnQxfiPcWtY\\n\",\n       \"O+G+Ee6OKhKgTWgsQI/hiVfXSduKRCaby3p7wT5wAVqIW5hVfv93xCeL8zyCZyuvnyfJwrVHFEVV\\n\",\n       \"cNwdgK2T63QiPlA3EiAYaNlsycDv5DJgfnIpLmdwuqeJ+E3TxNSXcRL/mVtHtisuso0G5Qn4Iuky\\n\",\n       \"LrgNgSdz39MB1kBKuHp1rv0MPFdi5oIbZAElUfsnKFwXN5QA5S2gPXFr70LgoNzxN5PasiA5BKhj\\n\",\n       \"SDvi0S8LOt2VdmDGrbhL41gGuuBI27eldQsIarm7tsLvbO+u2382kC+RnolMvfsNBrrgBllAiYV4\\n\",\n       \"8MJfgI+kgXBHvBjfajzQ4ZWp7VRcXBdRPA+Ut4AeBvbFk7reQM0FNwW/Wy4SoGzB6lsb+Oyn4AEO\\n\",\n       \"K6k2w/PW1F33JPjPwqA5sZ3weaL6kPfSSLwAuLRucyZo21AbuB5mcCRc1i4vQPUuuLcAf0ivi+aB\\n\",\n       \"JuLfncwS+Q3wIuCNaY3TrvjNSCMLaGI6bhkXXN79BoNdcNviNz1K554AvICaC65oDmgWfvNTZAHt\\n\",\n       \"iN8EDmUBLQP+HrgKr4B8SK7N64DPD/WHNSPN+WzIwPnRnqa3BChzv5kNN49ZN/Jr/IdaL0B/xcuZ\\n\",\n       \"tzoHBLV5oL2BG+rzvplxnxnn5TYtJWcB1R3rDmC7JCqNBOhG4CP4Hf8LU7vnzMjWaf0eeFV6nQUU\\n\",\n       \"5CvH5ql3wX0cONVsrWhsmjvG7gV3/dsAF6T9hxccP3N3NppMbxf1FkTGAAsk9X8H/KZqJY0r2jYk\\n\",\n       \"HePLwAvrFrTmBaiMBZQXhwEuODPW5L5HjQRoGR4JNx7/7h6Oh+m/FViN33Q0EqAJ+P9sgxKT6/UC\\n\",\n       \"VD8fknkOJuN/+zO4ZZO3gKZKbC3xs3T99k/tirLV74DfAA1lAd2Ju4Yvwa29vADtwMgzMEwAHkte\\n\",\n       \"juVN+tMUSUdJuknSYkkfK9j/Fkk3SJon6VJJla2Z6zUB6if3W8Zv0nO9AP0GnzsZjgV0Nx58UDT/\\n\",\n       \"U8Sd+CC0PXUCZMbj+B3XljQWoMtxl8Mx+J3k8xjofjoXmJ2i4bI1PWsFKN2pbpfmoOoF6C58cIWa\\n\",\n       \"BbQ1PrfxLIN/1Nuk4/8ELwFSz+aMngA9ULC9XgC2BlaasYra/6FVXoELxkIGFnTMBGgazQVoBv6d\\n\",\n       
\"aeaCy3Nfvm2abxyPW5YTSVZuCiQ5G/gwtfmjqekze0pr3aXk+vcYQ1tBQ1lAmXUzHf/br07n3Y2B\\n\",\n       \"QQhvxKspvwB30Z1DnQWUC6dvJkB5CwhcgK4C9sxFgG7PyHIwQs2KhWG64CSNwxeYH4X/Vo+TVO9J\\n\",\n       \"uA14kZntBfwnPo9bCb0jQH7hXkIfBCDUsRC/M7yvbvuNuOWzEa1bQP8DfAKfDB1SgJLIrMTvAust\\n\",\n       \"IKgFIkzGB7D6z99kxuHJ4lkAvIEUAZb2L8f/nv2oWS9LgB0kTsQHrpuAzzJQgL4PHJ0rIf44PtBN\\n\",\n       \"xwVkIYPdcFun411CccLU0bKA6ifxM+oHjnywQssClKLcTsVdqjcy0KqcgQ+K9S64IgvoKpq74PLU\\n\",\n       \"X7sJuHBkQpBlUAC3fnfGF03n517+g4GLkbNj5ANNGjGUBZSdYwb+ty/DU0ftwMAw7DfhN09vwgXo\\n\",\n       \"5wy2gLbFv0/301yAMgvoceCKtCZvPrU5pR2oRbYOl3oBGo4Lbn9giZktNbPV+HrCY/MNzOxyM8uq\\n\",\n       \"Dl9JBeUrMnpHgGAf4H7M6uczeprk1jiYOv992v4bYE26k2zlmNfg2RZeSTkLCFx4DqBYgLI5okYW\\n\",\n       \"UJ7L8TvL2+q2X4vP5WyFZ2Z4Che21+Hh1MfgFttaATLjHrNaGeh0TVbhd/n34MJdvwh2m7RvAbCT\\n\",\n       \"xMZ1+/Or4YclQCnUe7EaJGJNg0yjAfxhPBBhssRJ+ER1dq2W0UIgQgou+A3wWzP+QK1OVMYMfADJ\\n\",\n       \"W0BF2Rhm4FbCcAUoO3YWjZZlUMCMpXgqokyAsmq/e8CAtUcTaK8FtJqaBXQ3tSUHD6RzjMetnffg\\n\",\n       \"VtDOwB9x11w+60EWTNPM4tgcX57wGDAjWbOkc2Ypp7aHtamAAJA4UuJ7Q/yteSZRK0c/3CCEadQs\\n\",\n       \"NXAPw7Qm7d+FW4aV0EsC1Bfh10WYsTz5dev5Na273zI+if8AFpZsvxQPbS0SoGvx619GgC7DB/lG\\n\",\n       \"ArQ1PoCB+8hnmzEfH6T2wgefVTRmJT7I3svA8ucZ2wD3mvEMbhHUF4NrhwW0PT5gbddg/0Q8Sqto\\n\",\n       \"rVo2cLwYeC/u4liS9rVqAb0fv1aZH79IgK6gyRxQmq+ZhF//vDuzkQsRfEDP3xXnBWgStajFjI8A\\n\",\n       \"v04BKY+lz24H7J8LupjIyCygvABNxa2P6dRuSK7GXZ1Z9NwDqU/z0uv5yWpZRi3qE9xyuZ3mA/5a\\n\",\n       \"z0Bu3pPUh73kJcAn47+J/DV+Ea2VNclbQMvxG5mPShyWNZA0W9LJ2aPgGKXnzyUdhpfHGDRP1C56\\n\",\n       \"SYD6cf5nKK7F3Y4tk6yH/RoMgkXcgd813luw73R8cd0ulLOAoFiA9iU3OJnxUCa8Ztyfzr8rgwvw\\n\",\n       \"5XmMoQXonvT6Glz08rRDgLJs4o2slWbzJ5kAzMKzYWwPa+votCpA+wK/zN285OfVNsKjum7A73Ab\\n\",\n       \"RcFl1XLvobwFdCsDXVWZ9ZIJQd4Fhxnnm61NZXM/Lr43p/7MrDtGmXRLRRZQvQvuWmouuLtxD0M+\\n\",\n       \"Q/1ifJ4QvJDcn3N/W34eqLQFVLB9Hh6WvS0ubPU56PbF3dACkDimIEvFUbmgjHoX3Btwcf9w1t7M\\n\",\n       
\"5pjZydmjoE93M/B7O4OBqbrSebUX8F3gGKsw6XMvCdABsDbdzJggpdu5ZuiWbWEpsKzIEjPjAeAs\\n\",\n       \"3JpoKkBmLMOtrpvqdi3Ef4gzKRY58B/swTQXoJX4AHEvbuHsmYVbpwnjidQGzmvwCeY8LQUhSKwv\\n\",\n       \"8cG68OhMgBqJRTPrIROg3fGkr/emMu7gAtRKaYtZDLRwbwFmpr5mwnIXPshsjF+7+jmgbdN515aB\\n\",\n       \"T5/PrlMRt+LuzWywbOiCK+ABPDpuAW6dZW64zAIajguu3gLKBChzwd1jxjIzXpFrc7iZu73N+IYZ\\n\",\n       \"n8n9bXlxbWoBJesmn0g4zwL8f7Qz/vvKp9gCd8+Nx92SmwK/I1dTKQXlnAv8S9qUF6B5eKaRvYFD\\n\",\n       \"G2QVKeIaYKak7SWtj7vLzx74N2lb3LX7VjNbUnCMttFLAnQ9Zs0GpmBk3JIejfhaeh7KAsKM55nV\\n\",\n       \"8s+lbWvwH+RuNB6c5uMRdENZQOvig8pd+OCQJTzdCo++ykS0SIBanQP6Mv635y2tPfCBqpEF1CgA\\n\",\n       \"AWoD2SzcgstzA75+ZUCKFRUUsUuiuys5oU9zEI+mfmUBCPfgVtbKdF3qBWgGLkAP4i6dcfhA93hy\\n\",\n       \"Yw4iZb9YRe361QtQvQsuz/3UBOhK/MYSRhaEsNYCSqI4FQ86yCyge+oP0KQk/W0kCyjlxzsMF8pG\\n\",\n       \"FlAj6ye7TtnfezsDowC3wsVnHi5yWfDIy3OH+DvcPfvxlKFhrQCZcYcZ/2zGvfiyg9dQAjNbA5yA\\n\",\n       \"zxEvBH5hZoskHS8pCwr5DB7g8B1JcyVdVebYw6GXBGisud9Gm/MZnDJnLWZcj2cXWNaoTQmuTc/3\\n\",\n       \"N9g/Lz0PZQFBzYo6jVo01TYMtK4W4muYsgSm2eC6PLXbusDlcUTu9XH4gPB7GFDAbg98Ynbbus9m\\n\",\n       \"A+dQLrgpuHgMEKC0UPVScm7XNAjeI9VW1Se2Ax5Og1yebB5oBm7RPkatbAG4MGyZc+tsm9plGQ6m\\n\",\n       \"0NyCy1hCzVWVCVBRFFw99+OWSZEADTcIIW8BZUEnN6XzZFGRZbkV2FFiN9wK+EeztRZQkZVRGBma\\n\",\n       \"Yz4eYLOUgRbQPnhJldtwi2sW/r85OvfZ/fHFv/+Fe3/eQu3/mOcXFC85KMTMzjWzXc1sZzP777Tt\\n\",\n       \"NDM7Lb3+JzPb3Mz2SY9W5qlaopcEqC8DELqF5O5rGvBgxo9bjcir41pgeS6sup4sb9lQFtCKXF9/\\n\",\n       \"ARyc8p4NuNtNg+o8agsCN0ufXZMilVaQm0yXmAqcL61dDPoO3Mf+J5KrKK152RmfM9g299nxwJ1p\\n\",\n       \"bUuzAfwRfPBZnsShnvPwRLEZWdLY30q8U+IlSTR3pzjA5CZyApS23ZP+1izP4H3UJtozFxzU3HDN\\n\",\n       \"5n8yluDXAVp3wYEL0FxgtzRflXfBDbCAkhv0BokXp03N5oC2xK3glfjC0ieG+l7XcRtZwmP4dzP+\\n\",\n       \"mLYvBzYrCKNuaAEl5uNCPcACwud/rkvbd8D/n6cDe+cs4P3w8Piv4gu5/wNP31XPH4ADMzecxAYj\\n\",\n       \"DPceNXpJgCozA4NR4zIGu53yLASeY2gLKC8yjwNn4FZQ0d3ul4GvpXDs+qzjC6jN50AtZDZzt83C\\n\",\n       
\"8+hdQc0C2hmfW7mZgS64Q/FB9Pk0H8AfxoWy0XU4D3hZbgA5HL8DficeiPMjPH9g/fxPxqV4nai9\\n\",\n       \"qQnQ3dTmDiDNFaXX21EToPvwAbKMAN3KQAF6jPIuuCeApUkMF+Nu18wCKio6+Np03B+ndU87UWAB\\n\",\n       \"5dxvmcgtY3AaqqFYBHwB2MuMH2Qb083MUwV9G8oCyqz6pTS2gHbA/5/X4cJ3ZGqzP3B1ujm8Pt0A\\n\",\n       \"DgoYSL+B66hF1H2RwRnou5LeESBfNBX0MGYsMuPgJvufxAfVZj/oxxgcxPAVPKT5gPp9Zvwad/V8\\n\",\n       \"hdr8T0a9AO2bnmeleZdJ+CA2D3flTUrtF1Cr/ZL9hl6GD1B7MbQFBI3D4xfhc1wzU1qdfYG/mfFH\\n\",\n       \"M44DTsJdoY0E6Of4xPXrKLCAErdQy5iwF7XcilmGg7IuuCILaHN8XGkUSn8vcGNunu6G1IeGFhAe\\n\",\n       \"bv5hPBBmKf4/+L9sZ4r0fBbPGpLv+1205n7DjGfM+FxK/VRP0TzQDjQXucyqv52awIP/X+em7ZkL\\n\",\n       \"biH+v3t1miPamMEZ1RtxNbVFrwfjgtT19I4ABWOFQ6ndNRaxksEiczvwLeAfKB5wPoAHI/ycwRZQ\\n\",\n       \"vqTDvvgPeRZuBd1kxnPJ7Xgtfoe5B7VM4iuorbp/KV7ldk+GDkKABhZQmhw/D1+dfyheNPDxXJNf\\n\",\n       \"AbNxt+KgY6TPfwwX5Gxx8z0MtoCyDOXrUgvDbdUFN2AOKAnBM/hC40aT/OfjCz8zsvVfeQtorQCl\\n\",\n       \"DNLb4xFiH8UDTg40GxRlmVlfWcJRGIYADUGRAB3C4CSweRbj4fZZBoapEtNTX2/BBWh3PFrvVuBn\\n\",\n       \"uIB8jmT9lOzb1cB+6aZldxi16NkREQIUdBVmPDrEj+4vuJDUcwq+lmlp0TFx6+iL1HLvgYdx11tA\\n\",\n       \"P8UFqN7CuAIf2N+HzwlBWreT5n2m4z78zAIqHMBzizGbLRD+PC4gn8YjnPKffwz3+e9MExEz4//l\\n\",\n       \"FkXezMC1HpkFtA8wN3e978Pdg6XngFQrJJdZWCto7H7DvMJqPrR3Hu4uzKLgspIbkng7LrhfT/N2\\n\",\n       \"T5txaYPvRxYAkXfBzaO9mfMHCFCyfg/B3WaFpH6/PVl8mcAfA5xjxrP4d3ZrUuZ4s7Xre95Ga9MO\\n\",\n       \"V+E3SPvjCYhbTd/VEXq+pGswtmi0LsqMJyT2gAHWQn7/s3gSxjwL8Yza6+CD1xb4nfbHGSxA5+NJ\\n\",\n       \"P49MEYFQq4A5CxeKBbgwPElzF9b3GFyjKd/X2ySOxSOf/rWgyQ+Bl6TBakjMOL1uUyZAmRso4xI8\\n\",\n       \"Hc1qPPii2TEfkXgOd7nVC1CjCLgiMgHKUjBlLriXA5/CU+X8tcRxsiwMW1KrRPuNFvpRhnoLaDc8\\n\",\n       \"O3Wpeab0HV2NZwf/atr2tMTd5L5rZlwl8RoGr6Vrxp3AOFy8mllkXUVYQEHfYMaqFlwWmTWRVXPd\\n\",\n       \"BxeFO/EMAgcxcFD4ixl75MSH1HY74N3A2emu8zY82q7RIk7M+HCDCLh8m6uAbcy4smD3BdTCl4fD\\n\",\n       \"HbilcBA5AUrneh5uYV5R4jhZIEJpC6ge8wwYzwBPJVdnFoRwOPBDMy4o+T/Nu+CGmr8aLvUCdAgD\\n\",\n       
\"syuU4X58fc+fcttuY3BI/h/NSs//ZK7Xq3HLKQQoCHqEzA23L3Bd+iEvwqPemkXsgVtAH8Lza/0s\\n\",\n       \"bZuHh1iPOGgmuQ6LtpvZ8NdjJWvwNjy/39y6ffcngSwz+F2Di++wBSgxD9YKcmYBHUZrmU+yUOz8\\n\",\n       \"HFC7qRegQxmeAF1Yt37rXLycyUi5Gg/EuKwNxxoVwgUXjHUW4GG+++LzSOCWz94MLqldz524C+ol\\n\",\n       \"uaiuedTCubuZW3DrbfFQDZvwUbzK6TSG74IDv2bZeqyV1AIhrm74icGswANN9qQgt1mbeAR4kcTH\\n\",\n       \"cAt3Nh4s0ArLqBNWs7Xfu5FyJXCLWWUWYNsJCygY61yDzzd8G6+NAi5AN5dYdHsO8FIzbs5tu5bW\\n\",\n       \"1550glvwyepnh3uAdBf/Ctxll93R/w+eOaIVsnLrpOeNgMutQSqgBjyK50z797r/Rzu5Cr9p3xJP\\n\",\n       \"qAceZTUAAAbWSURBVDqX5umring3nuSzCv7MwEXMXY96obq1JDOznljZG/QWKYprXF5sJPYFjjDj\\n\",\n       \"i8M83iYFKXK6ComjgVlma6vNdrIvU/Hgjp+m6/c0cJIZ/93CMfYANjVbm409oPvHzhCgIAi6ComH\\n\",\n       \"gFealQqECJrQ7WNnuOCCIOg2/p5IvTUmCAsoCIKgT+n2sTMsoCAIgqAjVCpAko6SdJOkxZIK64pL\\n\",\n       \"+nraf4OkXghfDYIg6FmGGpcl7SbpcklPSfpw0THaRWUCJGkcnvrkKDxVyXGSdq9rczSws5nNxFPI\\n\",\n       \"f6eq/gQ1JM3udB/6hbiW7SWuZ7WUGZfxjPEfgOojJKu0gPYHlpjZUvNSCmcCx9a1OQavb4KZXQlM\\n\",\n       \"kjSVoGpmd7oDfcTsTnegz5jd6Q70OUOOy2b2oJldAyPP5jEUVQrQNAaWb74rbRuqzXSCIAiCKigz\\n\",\n       \"Lo8aVQpQ2fC6+giN7g/LC4Ig6E26anytMhfc3QwsWTyDwTma6ttMp0EaE0lddeF6HUkndboP/UJc\\n\",\n       \"y/YS17NSyozLo0aVAnQNMFPS9nhVwjcCx9W1ORs4AThT0oHAo2Y2KJFhN8exB0EQ9BBlxuWMysfd\\n\",\n       \"ygTIzNZIOgEvLzwOON3MFkk6Pu0/zczOkXS0pCV4IbF3VNWfIAiCsU6ZcVnSVngm8gnAc5JOBGaZ\\n\",\n       \"2ap296cnMiEEQRAE/UdXZ0Ios5A1aI6kpZLmSZor6aq0bbKk8yXdIunPkiZ1up/diqTvS7pf0vzc\\n\",\n       \"tobXT9In0vf1Jkkv7Uyvu5MG1/JkSXel7+dcSS/P7Ytr2QRJMyRdKOlGSQskfTBt75nvZ9cKUMkF\\n\",\n       \"U8HQGDDbzPYxs/3Tto8D55vZLnh55493rHfdzw/w72CewusnaRbuU5+VPvNtSV37G+sARdfSgP9J\\n\",\n       \"3899zOxciGtZktXAv5rZ84ADgfenMbJnvp/d/A8ts5A1KEf9ZOLaBcDp+dWj253ewcwuAZbXbW50\\n\",\n       \"/Y4FzjCz1Wa2FFiCf48DGl5LKJ7sjms5BGZ2n5ldn16vwkvIT6OHvp/dLEBdtWCqhzHgL5KukfTu\\n\",\n       \"tG1qLtrwfiCyT7RGo+u3DQNDWuM7W44PpFyQp+fcRXEtWyBFte2Dl+Xume9nNwtQREe0h4PNbB+8\\n\",\n       
\"7PT7JR2a32kehRLXepiUuH5xbZvzHWAH4PnAvcBXmrSNa1mApE2AXwMnmtmASrzd/v3sZgHqqgVT\\n\",\n       \"vYqZ3ZueHwR+i5vc96dQSyRtDTzQuR72JI2uX+mF1YFjZg9YAvgeNZdQXMsSSFoPF5+fmNlZaXPP\\n\",\n       \"fD+7WYDWLpiStD4+eXZ2h/vUU0jaSNKm6fXGwEuB+fh1fHtq9nbgrOIjBA1odP3OBt4kaX1JOwAz\\n\",\n       \"icqeTUkDZMZr8O8nxLUcEkkCTgcWmtmpuV098/2sMhPCiGi0YKrD3eo1pgK/9e8p6wI/M7M/S7oG\\n\",\n       \"+KWkdwFLgTd0rovdjaQzgBcDUyQtAz4DnELB9TOzhZJ+CSwE1gDvs1hot5aCa3kSMFvS83FX0O1A\\n\",\n       \"tiAyruXQHAy8FZgnaW7a9gl66PsZC1GDIAiCjtDNLrggCIKgjwkBCoIgCDpCCFAQBEHQEUKAgiAI\\n\",\n       \"go4QAhQEQRB0hBCgIAiCoCOEAAVjCkmXpuftJDWqBDncY3+y6FxBEBQT64CCMYmk2cCHzexVLXxm\\n\",\n       \"XTNb02T/SjPbtB39C4KxQFhAwZhCUlZW+BTg0FQE7URJ60j6kqSrUmbmf07tZ0u6RNLvgAVp21kp\\n\",\n       \"u/iCLMO4pFOADdPxfpI/l5wvSZovLw74htyx50j6P0mLJP10dK9GEHSWrk3FEwQVkZn8HwM+kllA\\n\",\n       \"SXAeNbP9JW0A/E3Sn1PbfYDnmdkd6f07zGy5pA2BqyT9ysw+Lun9KfN4/bleC+wN7AVsAVwt6eK0\\n\",\n       \"7/l4gbB7gUslHWxm4boLxgRhAQVjlfoiaC8F3pZyal0BTAZ2TvuuyokPwImSrgcux7MLzxziXIcA\\n\",\n       \"P09Jnx8ALgL2wwXqKjO7J+Xkuh7YfgR/UxD0FGEBBUGNE8zs/PyGNFf0eN37w4EDzewpSRcC44c4\\n\",\n       \"rjFY8DLr6OnctmeJ32QwhggLKBirrATyAQPnAe+TtC6ApF0kbVTwuQnA8iQ+uwEH5vatzj5fxyXA\\n\",\n       \"G9M80xbAi/A0+EWlqINgzBB3W8FYI7M8bgCeTa60HwBfx91f16U6Kw/g9WnqK0r+CXiPpIXAzbgb\\n\",\n       \"LuN/8dT415rZP2SfM7PfSnphOqcB/2ZmD0jancEVKSMsNRgzRBh2EARB0BHCBRcEQRB0hBCgIAiC\\n\",\n       \"oCOEAAVBEAQdIQQoCIIg6AghQEEQBEFHCAEKgiAIOkIIUBAEQdARQoCCIAiCjvD/AXFRJnS871y9\\n\",\n       \"AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f79387025d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"_, ax1 = subplots()\\n\",\n    \"ax2 = ax1.twinx()\\n\",\n    \"ax1.plot(arange(niter), train_loss)\\n\",\n    \"ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')\\n\",\n    \"ax1.set_xlabel('iteration')\\n\",\n    \"ax1.set_ylabel('train loss')\\n\",\n    
\"ax2.set_ylabel('test accuracy')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The loss seems to have dropped quickly and coverged (except for stochasticity), while the accuracy rose correspondingly. Hooray!\\n\",\n    \"\\n\",\n    \"Since we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 17,\n   \"metadata\": {\n    \"collapsed\": false,\n    \"scrolled\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAFZtJREFUeJztnVtsY8d5x//f4Z2H94skaiXvemUbsAsD9otbwA2ahyCw\\n\",\n       \"USBpXxoYKFD0EvShN7QPddyHJo9pgAZF+1CgiB30hqRFCxfpQ1vbRQukD724sGOnaydZY8XVihJF\\n\",\n       \"iXfykDwipw/kNzuHklbiRRRJzQ8Y8OgsdXYk/vXNN9988w0JIaDRjIJx1R3QLB5aNJqR0aLRjIwW\\n\",\n       \"jWZktGg0I6NFoxmZsUVDRC8R0cdE9CMienWandLMNzROnIaIXAB+AOAzAHYB/A+AV4QQH023e5p5\\n\",\n       \"ZFxL8wKAu0KIbSGEDeDbAD4/vW5p5hn3mN93A8CO8vUDAD+uvoGIdKh5wRFC0Gn3x7U0WhDXmHFF\\n\",\n       \"swtgU/l6E31ro7kGjCuadwE8SUS3iMgL4AsAvjO9bmnmmbF8GiHEMRH9OoB/AeAC8LqeOV0fxppy\\n\",\n       \"X+jB2hFeeKbtCGuuMVo0mpHRotGMjBaNZmS0aDQjo0WjGRktGs3IaNFoRkaLRjMyWjSakdGi0YzM\\n\",\n       \"uElYAAAi2gZQBdAFYAshXphGpzTzzUSiQT8Z69NCiOI0OqNZDKYxPJ26EqpZXiYVjQDwDhG9S0Rf\\n\",\n       \"nEaHNPPPpMPTi0KIPSJKA3ibiD4WQnx3Gh3TzC8TWRohxN7gtQDgTfS3tmiWnEl2WAaJKDy4NgF8\\n\",\n       \"FsCH0+qYZn6ZZHhaBfAmEfFz/loI8dZUeqWZaxYyR9gwDBARiEheq/eY0342IQSEEOj1evJafR9f\\n\",\n       \"D79eR87KEZ7UEZ45hmHA5/PB6/XC6/XC5/PB7/fD7/fL+71ez9FUAfR6PbTbbdk6nY58D7+/2+2i\\n\",\n       \"2+3Ka42ThRSN1+tFKBSSLRKJyBYIBNDtdnF8fOz48NmidLtd1Go11Ot11Go1NBqNE++3bVs2LZqT\\n\",\n       
\"LJxoiAg+nw+hUAjxeBzxeBzpdBqpVAqpVAqRSASdTkd+6MfHxw6LY9s2isUiisUifD4f3G63QyS2\\n\",\n       \"bYOIpMA0J1k40RiGAb/fj0gkgmQyidXVVWQyGaytrWFtbQ2xWAydTkc227YdQ5Vt2wiHwwgGg3I4\\n\",\n       \"U9/f6XRgWZZsbvfV/IrUIVX1wYaH26vwuRZONC6XC6ZpIplMYmNjAxsbG9LKsKVhC8Ov6i/7+PgY\\n\",\n       \"wWAQ8XgcKysrqFarDstk2zaazSYsy0Kz2USr1Zr5zzjsk7VaLViWJV/55+I2a+EspGhCoRBSqRQ2\\n\",\n       \"Nzdx69YtxGIxxGIxRKNRmKZ5wpFVZ0m9Xg/xeBzNZhONRgOWZTkExqLhZlnWzH9G9rG41Wo1lMtl\\n\",\n       \"2ZrNJtrttnyvFs05GIbhsDSPP/44QqEQTNNEKBSC3+8/MZ0eNucsDvUvlj8oVTQsqlnDfWMLeHh4\\n\",\n       \"iHw+D4/Hg16vJ8MK3W4XnU5n5v1bSNH4fD6Ew2HpBPOU2+/3w+v1noi78C/5rFe2Sr1eD8fHxw4r\\n\",\n       \"02w2Hc9Sv28aqP1jbNuW4YB2u41IJCIF0263Hf7ZNPtyURZONDxlzufzyGazMAwDgUBANq/XK4cn\\n\",\n       \"FoPL5XI0t9sNt9strw3DgGEYcLlcMlDo9XpBRHC73Sd8DH4/t1E4zbFVA5SGYZwYLrvdLlqtFlqt\\n\",\n       \"lhyWhBBot9taNBeBRVMoFLC9vQ3bthEMBmXj2RCb9263C6/XC4/HI199Pp+cOfG1eo+I4PV64Xa7\\n\",\n       \"4ff7Hf5Ft9t1CO6is6vhAKMq7GFBs8VT36OK5vj4GO12G7VabWTRToOFFE29Xkc+n4dhGGg0GggG\\n\",\n       \"gzBNE6Zpwuv1yl9wq9WCbduO4cvv9ztEFgwGZZBQCOH48PhaFWGn05HRaBbheZzmU6nN7XbD4/HI\\n\",\n       \"V36vao3UCHar1UK9XkexWJxP0RDRGwB+GsCBEOLZwb0EgL8BcBPANoCfE0KUL7GfEhbN4eEhjo+P\\n\",\n       \"Ua1WpWBYNOrsx7ZtBAKBU0XCjWM77GSy1fF4PPB4PNJJBh4KgIezi4pmuPH/xT4NWzW/339iDc22\\n\",\n       \"bTQaDdTrdTQaDZTLZWlV53V4+iaAPwHwF8q9LwF4WwjxtUHh6S8N2qXDUV3LsmAYBrrdLprNJur1\\n\",\n       \"Ovx+P9xuN9rttsOU8/oUi0H1gQKBAMLhMEKhkHzlD499JI6PcKxk+PvPY3g2xw4uW65oNCrjTMlk\\n\",\n       \"Eh6Px7Egy8MVW5l2uy19nbkM7gkhvktEt4Zufw7ATw2u/xzAv2NGoun1euh0Omg2m/Ja9VdcLpdj\\n\",\n       \"Ot3tdh2mn9+r+jfBYFBao+FXv9/vsFzNZhOmaTos13l/7cPRXFXU7XYbmUwGt27dgmEYCIfDICK4\\n\",\n       \"XC4YhiG/7yzRXAXj+jSrQoj84DqPfm7NTGBLw3+xPPvhXzJbH3W9SZ3p8PtU53PYGWZRmKaJQCDg\\n\",\n       \"GBoajYZjaDNN80KiUfujRnhbrRa2trZgGAZCoRAymYx0rvm57DgPi0ZdUpglEzvCQggxy/p6LBrb\\n\",\n       \"tqfyPPYnuHk8HhkoZPHU63XHqng4HHYMaeehCkZdFmDhCCEQj8dx48YNdDod+Hw+EJGcjrNg2u22\\n\",\n       
\"jB91Op0rWUIAxhdNnojWhBD7RJQBcDDNTs0S1TFlc99ut6Uvoa5F8QfVbrfhcrkuvBI+PDzxB81T\\n\",\n       \"fHUBlf0Znmp3Oh3UajVUKhUUi0UUCgWUSiXU63V0Op2FEs13APwCgD8YvP7D1Hp0BfR6PQAPBdRq\\n\",\n       \"taRgWq2WdFjb7Ta63S7a7bacOnOw7VEMz5zUBDKObpumCb/fL0WjLm3w2tPR0REKhQKKxSLq9bqM\\n\",\n       \"Ds+ai0y5v4W+05sioh0Avw/gqwD+loh+GYMp92V28rIZjomw49lqteByuRxBNp6xqBZnlP8HAEKh\\n\",\n       \"kHTCI5GIw9K43W45NLGjzKJRLY1lWVK8s+Yis6dXzvinz0y5L1fGcF4Kx2TOYtJZi8fjQTQaRSAQ\\n\",\n       \"QCwWQyQScVgaFm273Ua9Xke1WpWiOTw8RKVSkYHBubQ0mskZToAPh8NIpVLY2NjA5uYmNjc3kUql\\n\",\n       \"YJomDMNAq9VCtVrF0dERjo6OkMvlcHR0hFqtJpdGeIZ4FWjRzAB1uu9yuRyieeKJJ5DJZJBOpxEK\\n\",\n       \"hWAYBjqdDqrVKg4ODrC7u4tcLofDw0MpGrYwV7VTQotmBnCwjqf1nETGokkkEjKBTBVNoVDAzs6O\\n\",\n       \"tDQ8Y+Kptk73XGJYNByRVi3N1taWdIy9Xq8UDa/k7+zsYG9vzyGaq05416KZARyL4SjyysoK4vG4\\n\",\n       \"XGDlPB52gDkJSw0AztN2Gi2aGcD7tJLJJJLJJNLp9Kmi4ak8x4RarZZMbmcLMw87PrVoZgAH8FKp\\n\",\n       \"FDKZDFZWVhCLxRAKhaRo1BgR5+6oa1RXuao9jC7UOAN4eEomk1hfXz8xPPECpbqafZalmQe0pbkE\\n\",\n       \"htNBE4kE0uk01tfXsbm5idXVVcRiMZlw1Ww2UalUUK1WUalUcPfuXezs7KBQKKBer0vRXNUC5Ymf\\n\",\n       \"76o7sIyo6RZ+vx+JRAIrKyvIZDLY3NxEMpmUEWEigmVZODw8xN7eHnK5HO7fv4+dnR0Zm+HF0nkZ\\n\",\n       \"nrRoLgEWDaegDosmHA4jEAhIS8OiyWazuHv3LnK5HPb39x2W5iojwMOMmyP8FQC/AqAweNtrQoh/\\n\",\n       \"vqxOLhqc72uaJqLRqBTN+vo6HnvsMRmP4ZROVTR37txBoVCQuylrtdq5a2Gz5iKO8DcBvDR0TwD4\\n\",\n       \"uhDi+UHTglFg0fCGvmg0KlexOU+n3W7LJPFyuYxKpSJbvV6X24XnYTgaZtwcYUDXDz4Tj8eDQCAg\\n\",\n       \"K1vwKjZbGDXtodVqnRAO58pwWuu8McmU+zeI6HtE9DoRxabWoyXA7XbLXQ6JRAKRSERuOeH0TU57\\n\",\n       \"GBZMtVpFo9FAq9WaW0szrmj+FMDjAJ4DsAfgD6fWoyWARROJRORi5GmiUYcnFs4iiGas2ZMQQuYE\\n\",\n       \"E9E3APzj1Hq0gAwXFPB6vQgGg9Kn4dkSR35brRYqlQry+Tzy+Tz29vZQKpXQbDYdaQ/zKBhgTEsz\\n\",\n       \"SCZnfha6frBjcxuLhi1NJBKRG/lYNOVyWRYx2NvbQ7FYlHu55lkwwHg5wl8G8Gkieg79WdQ9AL96\\n\",\n       \"qb2cc4bL0w6Lhi3NsGj29/cdouGikfNejnbcHOE3LqEvCw0Lx+VynTk8qaLh4SmbzeLo6EhWuLrK\\n\",\n       
\"jLyLoiPCU4CXC3hv9+rqKpLJpBSMz+eT23G73a7D+eUAHtfSm3fBAFo0U8Hn8yEajSIajSIWi8mc\\n\",\n       \"X05/APor2JZlodvtOgJ5alxmXmdLw2jRTAHev7SysiJL1HKiVTgcdmyntSzrxBSbZ03ztlxwFlo0\\n\",\n       \"U4BFk06nsbm56RANF0vi2Mtpywbq9lptaZYUtWCA2+1GKpXC2toaNjY2cPPmTayuriIajUpfhh3f\\n\",\n       \"g4MD5PN57O/vy12S87R6fVG0aMbA4/FIx9fv98u0hxs3buCxxx5DPB5HJBKRh3s0m01HXGZ/fx/l\\n\",\n       \"chmWZS2EZRlGi2YMPB6PnFKHw2Gk02mHpeFKWlx6rdlsolQqYX9/H/fv33dYGi2aawAROVaxuVx+\\n\",\n       \"JpORogEe7g8fFk02m0WxWESlUtGiWWY4cMdRX66Yzgd5rK+vI5VKIRwOw+v1yqJLnP5wVlxmUWZL\\n\",\n       \"w2jRXAAWy/BebD6bYWNjA4lEQtbfOz4+hmVZsuxaqVRyzJjmfRX7PLRoLoAqGN5Wm06n5bZarsoZ\\n\",\n       \"DAZl6oNlWbJEiCoa1cospaUhok30S8GuoL84+WdCiD++yjrCVwFbGq7JFwqFpGiefPJJWZuPdxdw\\n\",\n       \"QaRarSYPJFOFw/UC5301+yzOS42wAfy2EOLHAPwEgF8joqfxsI7wUwD+FTMqB3sVGIaBYDAoT315\\n\",\n       \"4okncPPmTWQyGXm+lN/vl7kynJFXLBaRz+exs7ODfD4v82WGK48uIo+0NEKIfQD7g+s6EX0E4Aau\\n\",\n       \"sI7wrFATq8LhMNbW1qTje/v2bayvryMej8ttKABknbxKpYLDw0Pkcjlsb29jb28P5XJZVvJcdC7s\\n\",\n       \"0wySy58H8F+4wjrCs0AVDNf3XVtbw9bWFra2tuTRh4lEAn6/33FgarfblbVldnd3sb29jcPDQxSL\\n\",\n       \"xYWdYg9zIdEQUQjA3wP4LSFEbejs65nWEZ41bGlWV1extbWFZ599Vq5o8y5J3szGvky1WnVYmlqt\\n\",\n       \"Jhcsr4VoiMiDvmD+UgjBpV+Xpo7waagVzrkMPgfy0um0LG/P+5hs25b7sCuVCnK5HA4ODhxBPE59\\n\",\n       \"WAYe6QhT36S8DuCOEOKPlH/iOsLAEtQRHsYwDFmylY/64SrmoVBICobLwVqWhVKphFwuh08++QQP\\n\",\n       \"HjzAwcGBXF+66rMMps15luZFAD8P4AMiem9w7zUsWR1hFXV6zSe2qKIJh8OOk+mAvmiKxSJyuRzu\\n\",\n       \"3buH3d1dHBwcSCvDDvIiz5hUzps9/QfOtkZLU0d4GA7ieb1eh2hYOOqyAnDS0hQKBbkf27KspREL\\n\",\n       \"oyPCgGMzPi8TxONxxGIxJBIJrK+vI5FIwDRNuN3uE4e/7+/vy8YxmVqtJh3kZRIMoEUDwLlM4PV6\\n\",\n       \"ZcUqzpG5ffs2VldXYZomgL5lUbfRZrNZ7O7uIp/PyyqcfKrdMnLtRTO8RBAIBJBIJHDjxg0Zl+Hc\\n\",\n       \"X9M0IYSQwxHvkLx//76cMR0dHcnV7UXZXTAq1140AKRo2IdJJBLY2NjAU089hWeeecZxRibw0PHd\\n\",\n       \"3d1FNpvF/fv3HZaGh6RFS+O8KNdeNLwjkkURi8Uc50mmUqkTJ+PycYjlclmeisK7CjhJfBktDKNF\\n\",\n       
\"Mzgdl8uCpNNppFIpuWeJj9PhYwwByMgvlwrhEmc8HC2zYAAtGgAn6/yyaEzTlEE8jhADD0XTaDTk\\n\",\n       \"ZjdOqroOXHvRsKXhqlVra2uO3ZF8niTHZLiq+GmWRotmiVFPzOUttYlEAqurqzLfV82TUVETxvkc\\n\",\n       \"g2WL+J7HtRMNn47Lzq1aspWTxJPJpCxBrznJtRQNb3ZTa8ik02kZzOOFSZ/Pd9XdnUvOW+XeJKJ/\\n\",\n       \"I6L/I6LvE9FvDu5/hYgeENF7gzZcMnau4ZKtoVAI0WhUnoxy2nYUzUnOszScI/z+IBHrf4nobTys\\n\",\n       \"I/z1S+/hlOHhSRVNJBKRp9aGQiGHzwM4T9NlX0Y9R/I6TLNVxs0RBha0jrA6PLFoQqEQAoGAPEZH\\n\",\n       \"nV4DkALhbSfqARfqscvXRTgXLtSo5Aj/5+DWwtYR5pKtLBr1eGMWjTrNVs9g4qSqTqcjE6sWfXfB\\n\",\n       \"qFxINIOh6e/QzxGuY4HrCKuWxjRNRCKRUy0Ni4aHJj4nWxXNdZtqM6PkCP8V5wgveh1h9SBSn88H\\n\",\n       \"j8cDt9stxcIi6PV6cneB2tRzsnk1+6oOVr8KzttheWqOMBFlhBB7gy+Xqo4wO7sshE6n4yhGxBvg\\n\",\n       \"+DymZrO5dDnA5zFOjvDvAXhlmesId7tdeSRgs9lEoVBANpvFvXv3cO/ePRwdHcmm1svTlgaPzBH+\\n\",\n       \"p8vpznzAorEsC/V6HQcHB8hms/joo49w584deRgpny953abd1y4iDDhTG0qlkoz+ulwu2LaNZrMp\\n\",\n       \"W61WQzabxc7ODnZ3d7G3t+eYfl+XRUqVaycarudbKpXkRrdGo4HDw0Ps7u4iHo87zmKyLAsPHjzA\\n\",\n       \"gwcPUC6Xr+2MSYUu6wef1626XMlKTd9cW1uT602maTpWrzudDkqlEkqlEorFIsrlsiMus8x+jBDi\\n\",\n       \"1ADutRMNT7d5w5tt244NcG632yEINbDHDXhY73eZrY0WjWZkzhLNJMcRaq4pWjSakbm04UmzvGhL\\n\",\n       \"oxkZLRrNyFyqaIjoJSL6mIh+RESvTuF520T0wSDF9L/H+P43iChPRB8q9xJE9DYR/ZCI3holN+iM\\n\",\n       \"542dCvuI9Nqx+nhp6bq8ZjLtBsAF4C6AWwA8AN4H8PSEz7wHIDHB938K/USyD5V7XwPwu4PrVwF8\\n\",\n       \"dcLnfRnA74zZvzUAzw2uQwB+AODpcfv4iOeN3UchxKVamhcA3BVCbAshbADfBvD5KTx37DRTIcR3\\n\",\n       \"AZSGbn8O/bK2GLz+zITPA8bsoxBiXwjx/uC6DkAtwTtyHx/xvLH7CFzu8HQDwI7y9QM87PC4CADv\\n\",\n       \"ENG7RPTFCZ/FXEZ524lTYaddgnea6bqXKZrLmMu/KIR4HsDL6FdP/9Q0Hy76dnzSfk+cCjtcgnfS\\n\",\n       \"Pk47XfcyRbMLYFP5ehN9azM2YpAtKIQoAHgT/SFwUvJEtAb0MxIxYXlbIcSBGADgG6P28VEleMfp\\n\",\n       \"41npupP08TJF8y6AJ4noFhF5AXwB/VKyY0FEQSIKD65NAJ/FdNJMp1redvChMiOlwk67BO+j0nXH\\n\",\n       \"7SOAy5s9DTz2l9H32O8CeG3CZz2O/gzsfQDfH+d5AL4FIAegg76/9YsAEgDeAfBDAG8BiE3wvF9C\\n\",\n       
\"/9SaDwB8b/Dhro7wvJ8E0Bv8jO8N2kvj9vGM5708SR+FEHoZQTM6OiKsGRktGs3IaNFoRkaLRjMy\\n\",\n       \"WjSakdGi0YyMFo1mZLRoNCPz/yU19i71FpCwAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7938797710>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEjpJREFUeJzt3X+QXXV5x/HPJ793ITSAiWyysaEttsBoDRFYEEGq7VBG\\n\",\n       \"0bZWpa1S27HTUQulyojMtH+1o5XpiI7TzlgoCv5qqxZ1WhFaU0TsJhASfiQBsUNawq9N24DEZZMl\\n\",\n       \"PP3j3oRls5s9T06+e84N79dMhnvOfe73fO/5nnv34Zxzv48jQgAAADh0c5ruAAAAQK8joQIAAKiJ\\n\",\n       \"hAoAAKAmEioAAICaSKgAAABqIqECAACoaV6TG7fNnA0AAKBnRISnWl80obJ9gaRrJM2VdG1E/OXk\\n\",\n       \"mEsvvfSA1w0PD2toaOhF60rOl5Vpe+/evcX60QZz5lQ/aWlPeUxp3bp1OvPMMw9Xlw6rzPg9//zz\\n\",\n       \"Rdp97rnnisRmxk6S5s6de8C6jRs3avXq1QesX7BgQbF+VLVnz57KsaOjo6m2x8bGKseOj49Xjs0c\\n\",\n       \"F5nYqcZOkh566CGddNJJB6xfuHBh5bb7+voqxy5atKhI7Lx51f80TbcvpjPd99ZUMn8bMsfF7t27\\n\",\n       \"D1i3YcMGrVmz5oD1mWNzuranU7fP02nD92wpa9eunfa5Ypf8bM+V9BlJF0g6RdLFtk8utT0AAICm\\n\",\n       \"lLyH6gxJP4qIbRExLukrkt5acHsAAACNKJlQrZD0yITl7d11MxocHCzSIcyOFSsqDTNa6IQTTmi6\\n\",\n       \"C6jhuOOOa7oLOEQDAwNNdwE1lUyoDvmmJxKq3sb49S6+1Hvb8ccf33QXcIiWL1/edBdQU8mb0h+V\\n\",\n       \"tHLC8kp1zlK9yPDw8P7Hg4OD/DEGAACtsHPnTj311FOVYksmVHdJOsn2KkmPSXqnpIsnB03+NR8A\\n\",\n       \"AEAbHHvssTr22GP3L2/btm3a2GIJVUQ8Z/uDkr6jzrQJ10XE1lLbAwAAaErReagi4tuSvl1yGwAA\\n\",\n       \"AE1rdKZ0qR0TdWUmFis1wWim3Uxs5r1J5SZka0Os1I79XGqywOxnKdN2ZoLRUpOcZmTbzeyLTGxm\\n\",\n       \"MtKM7OSpmcksMxNlZvqRaTczEWl2X5SayLnUhL3ZYzmzP0rFZiZmzcRmxzoTn/1bMuX2arcAAADw\\n\",\n       \"EkdCBQAAUBMJFQAAQE0kVAAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFAABQ\\n\",\n       \"EwkVAABATY3X8svU8ek1baj7V7LtUnUYMzWVsn0oue96TWbftaE+X0mZ4yLznZWpX1eqxqNUrnZc\\n\",\n       
\"ps+ZdsfGxirHZvdF5lguVb+uv7+/cuyiRYsqx0q5OoiZ2Pnz51eOzey3TJ3JTKyUO+ZaX8vP9krb\\n\",\n       \"a21vtn2/7UtLbg8AAKAJpU8PjUu6PCI22T5a0gbbt0bE1sLbBQAAmDVFz1BFxBMRsan7eJekrZKW\\n\",\n       \"l9wmAADAbJu1m9Jtr5K0WtK62domAADAbJiVhKp7ue+rki7rnqkCAAA4YhT/iZ3t+ZK+JukLEXHT\\n\",\n       \"5OeHh4f3Px4cHNTg4GDpLgEAAMxoZGREIyMjlWKLJlTu/MbxOklbIuKaqWKGhoZKdgEAAOCQLFu2\\n\",\n       \"TMuWLdu/vHnz5mljS1/ye52k35F0vu2N3X8XFN4mAADArCp6hioivi9mYwcAAEc4kh0AAICaGq/7\\n\",\n       \"UnW69+yU80eykvuiVBmANsRm4zPlEzIlRkrFltRrJXuy5W8y7y9zXCxYsKBybF9fX+XYo446qnKs\\n\",\n       \"lCsbkpEp+7Jnz57KsaOjo5Vjn3322cqxUq7ESOb7IlMiZsmSJZVjFy9eXDlWypXAyeyLUmOSic2W\\n\",\n       \"GcrEH47vOM5QAQAA1ERCBQAAUBMJFQAAQE0kVAAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1\\n\",\n       \"kVABAADUREIFAABQU+OlZ6qWcchMC9+WMhml+pwpF1BSpgRHJjajZBmevXv3Vo7NlkQo0YdsuZVM\\n\",\n       \"fOaYK1WyJzPW2c9I5vOXKaGSGb9S+03KjXXm/WVid+/eXTk2Mx6Zki9SrhxQZkwyfR4bG6scu2vX\\n\",\n       \"rsqxUv57oKrM5y9T6ihTKmfhwoWVY6VypcamM+07sf0bkkLSVD2KiPh6lQ3YnivpLknbI+Ith9RL\\n\",\n       \"AACAFjtYavgWdRKq6VRKqCRdJmmLpFyFRwAAgB4xbUIVEb9bt3Hbg5IulPQXkv6kbnsAAABtNONF\\n\",\n       \"Q9sn2L7O9s3d5VNs/37F9j8p6QpJ7bjpBwAAoIAqd2F9TtItkpZ3lx+SdPlML7L9ZkkjEbFRU9+H\\n\",\n       \"BQAAcESocnv9yyLi721fKUkRMW67ys8IzpZ0ke0LJS2SdIztGyLiPRODhoeH9z8eHBzU4OBg9d4D\\n\",\n       \"AAAUsmPHDu3YsaNSbJWEapft4/ct2B6S9PRML4qIqyRd1X3NeZI+PDmZkqShoaFKHQUAAJhNS5cu\\n\",\n       \"1dKlS/cvb926ddrYKgnVhyR9S9LP2P6BpKWS3n4I/WrH5FAAAACH2YwJVURssH2upJ9X516oByMi\\n\",\n       \"NYthRNwm6bZD6yIAAEC7zZhQ2e6T9H5J56hzlul2238TEdWnegUAADiCVbnkd4OkH0v6tDpnqH5L\\n\",\n       \"0o2SfrNgvwAAAHpGlYTq1Ig4ZcLyd21vOVwdqFoLqmQtv1JtZ2IzNYdK1q/LyNRLK1VfMVMrTcr1\\n\",\n       \"OROb7UdVmTpX2fpumdpVmX5k6ruNjo5Wji1Vjy4rs98ydeMydcqytQozx0amNl5/f3+RPmS0pbZp\\n\",\n       \"5pjLjHXmsyfl9kem7UxsppZf5rjItFu67alU+Wa42/ZZ+xa6v/LbUHvLAAAAR4iDFUe+b0LMHbYf\\n\",\n       \"UeceqldIenAW+gYAANATZiqODAAAgBkcrDjytonLtpepM+M5AAAAJqhSHPki2w9JeliduaS2Sfp2\\n\",\n       
\"4X4BAAD0jCo3pf+5pLMk/TAiTpT0RknrivYKAACgh1RJqMYj4n8kzbE9NyLWSnpt4X4BAAD0jCoT\\n\",\n       \"S+y0vVjS7ZK+aHtE0q6y3QIAAOgdVc5QvU3SqKTLJd0s6UfiF4AAAAD7VSmOvO9s1F5JnyvaGwAA\\n\",\n       \"gB50sIk9d6kzkedUIiKOORwdyJSTqCpbmqVUWZRMP0qVRMkq1Y/MOGfGI1vypQ39yPQhE7t79+7K\\n\",\n       \"sdm2Sx1zpY637Gd6bKx6rfdMbGZMSrUrteP7pVQZpUwpIKlc6a7MfhsfH68cmy2jlGm7VHm0UmWt\\n\",\n       \"SpaeORylkQ42D9XRdRu3vUTStZJOVSc5+72IGK7bLgAAQJvkqi7mfUrSv0TE223Pk3RU4e0BAADM\\n\",\n       \"umIJle2fkvT6iLhEkiLiOUlPl9oeAABAU3IXn3NOlLTD9vW277b9t7b7C24PAACgESUTqnmSTpP0\\n\",\n       \"1xFxmqSfSLqy4PYAAAAaUfIequ2StkfEnd3lr2qKhGrDhg37Hw8MDGj58uUFuwQAAFDN+Ph45V9O\\n\",\n       \"FkuoIuIJ24/YfmVE/FDSmyRtnhy3Zs2aUl0AAAA4ZPPnz3/RdA0Hm96k9K/8/kidcjULJP2npPcW\\n\",\n       \"3h4AAMCsK5pQRcQ9kk4vuQ0AAICmlbwpHQAA4CWh9CW/GVWd7r3UtPeZPmTbLtVuqdiSbWf2xYIF\\n\",\n       \"CyrHLlq0qHKsJC1cuLBIbKZ8QqlSR9mxzvQ5Myb9/dVnR1myZEnl2MWLF1eOPeqo3BzCmfjMfsso\\n\",\n       \"+bnOKFVGKVOaJfN9kS1HUqrUSeYzUqr8jdSeck5Nt5tVtR8H+5vDGSoAAICaSKgAAABqIqECAACo\\n\",\n       \"iYQKAACgJhIqAACAmkioAAAAaiKhAgAAqImECgAAoCYSKgAAgJpIqAAAAGpqvPRM1Sn4S5ZaKDX9\\n\",\n       \"fqbdTCmCTGzJaf1L9SOz3zKlL0r2I9NuqZIv2XIWmfc3Pj5eOTYzJqXaLVmSqFRppMwxlD3uM+Vk\\n\",\n       \"SrW7Z8+eyrGZ4yJ73GfaLlUup2RpnUx85rso8xnJHMuljotsfKYf0yl6hsr2R21vtn2f7S/Zrj4i\\n\",\n       \"AAAAPaJYQmV7laT3STotIl4laa6kd5XaHgAAQFNKXvL7saRxSf2290rql/Rowe0BAAA0otgZqoj4\\n\",\n       \"P0l/Jem/JT0m6amI+NdS2wMAAGhKyUt+PyvpjyWtkrRc0tG2f7vU9gAAAJpS8pLfayX9ICL+V5Js\\n\",\n       \"f13S2ZK+ODHozjvv3P94+fLlWrFiRcEuAQAAVPP444/riSeeqBRbMqF6QNKf2u6TNCbpTZLWTw46\\n\",\n       \"/fTTC3YBAADg0AwMDGhgYGD/8qZNm6aNLXkP1T2SbpB0l6R7u6s/W2p7AAAATSk6sWdEfELSJ0pu\\n\",\n       \"AwAAoGmUngEAAKiJhAoAAKCmxmv5ZWoaHclK1dwrWcuvlExtrkwtqqzMsZnpR6bdTA3LkjW/MkrV\\n\",\n       \"QMzst2wtv/7+/sqxmVp+Y2NjlWNHR0eLxErl9nNfX1+R2Mz4ZWuwZY7PzOcvs98ydfEyx2a2H5la\\n\",\n       \"jKXqoJaqMynlxq/qd/j1118//fYqbw0AAABTIqECAACoiYQKAACgJhIqAACAmkioAAAAaiKhAgAA\\n\",\n       
\"qKmVCdX27dub7gJqYPx618MPP9x0F1DDtm3bmu4CDtEDDzzQdBdQUysTqkcffbTpLqAGEqreRULV\\n\",\n       \"20ioehcJVe9rZUIFAADQS0ioAAAAanKTpUls915dFAAA8JIVEVPWR2s0oQIAADgScMkPAACgJhIq\\n\",\n       \"AACAmlqXUNm+wPYDth+y/ZGm+4Pp2f4720/avm/CuuNs32r7h7Zvsb2kyT5ierZX2l5re7Pt+21f\\n\",\n       \"2l3PGLac7UW219neZHuL7Y911zN2PcT2XNsbbX+ru8z49bBWJVS250r6jKQLJJ0i6WLbJzfbKxzE\\n\",\n       \"9eqM1URXSro1Il4p6d+6y2incUmXR8SpkoYkfaD7eWMMWy4ixiSdHxGvkfRqSefbPkeMXa+5TNIW\\n\",\n       \"SftuZmb8elirEipJZ0j6UURsi4hxSV+R9NaG+4RpRMTtknZOWn2RpM93H39e0ttmtVOoLCKeiIhN\\n\",\n       \"3ce7JG2VtEKMYU+IiNHuwwWS5qrzWWTseoTtQUkXSrpW0r5fjTF+PaxtCdUKSY9MWN7eXYfe8fKI\\n\",\n       \"eLL7+ElJL2+yM6jG9ipJqyWtE2PYE2zPsb1JnTFaGxGbxdj1kk9KukLS8xPWMX49rG0JFXM4HEGi\\n\",\n       \"MycHY9pyto+W9DVJl0XEMxOfYwzbKyKe717yG5R0ru3zJz3P2LWU7TdLGomIjXrh7NSLMH69p20J\\n\",\n       \"1aOSVk5YXqnOWSr0jidtnyBJtgckjTTcHxyE7fnqJFM3RsRN3dWMYQ+JiKcl/bOkNWLsesXZki6y\\n\",\n       \"/bCkL0v6Jds3ivHraW1LqO6SdJLtVbYXSHqnpG823CfkfFPSJd3Hl0i66SCxaJBtS7pO0paIuGbC\\n\",\n       \"U4xhy9l+2b5fgNnuk/TLkjaKsesJEXFVRKyMiBMlvUvSdyPi3WL8elrrZkq3/auSrlHnJsvrIuJj\\n\",\n       \"DXcJ07D9ZUnnSXqZOtf7/0zSNyT9g6RXSNom6R0R8VRTfcT0ur8K+56ke/XCpYWPSlovxrDVbL9K\\n\",\n       \"nZuW53T/3RgRV9s+ToxdT7F9nqQPRcRFjF9va11CBQAA0GvadskPAACg55BQAQAA1ERCBQAAUBMJ\\n\",\n       \"FQAAQE0kVAAAADWRUAEAANREQgWgcbbv6P73p21ffJjbvmqqbQHA4cQ8VABaw/Yb1Jnk8C2J18yL\\n\",\n       \"iOcO8vwzEbH4cPQPAKbDGSoAjbO9q/vw45Jeb3uj7ctsz7F9te31tu+x/Qfd+DfYvt32NyTd3113\\n\",\n       \"k+27bN9v+33ddR+X1Ndt78aJ23LH1bbvs32v7XdMaPvfbf+j7a22vzC7ewNAL5rXdAcAQC+UvvmI\\n\",\n       \"pA/vO0PVTaCeiogzbC+U9H3bt3RjV0s6NSL+q7v83ojY2a1tt972VyPiStsfiIjVU2zr1yX9oqRX\\n\",\n       \"S1oq6U7b3+s+9xpJp0h6XNIdtl8XEVwqBDAtzlABaBNPWv4VSe+xvVHSsKTjJP1c97n1E5IpSbrM\\n\",\n       \"9iZJ/yFppaSTZtjWOZK+FB0jkm6TdLo6Cdf6iHgsOvdEbJK0qsZ7AvASwBkqAG33wYi4deKK7r1W\\n\",\n       \"P5m0/EZJQxExZnutpEUztBs6MIHbd/Zq94R1e8V3JYAZcIYKQJs8I2niDeTfkfR+2/MkyfYrbfdP\\n\",\n       
\"8bpjJO3sJlO/IGlownPj+14/ye2S3tm9T2uppHMlrdeBSRYAzIj/6wLQBvvODN0jaW/30t31kj6t\\n\",\n       \"zuW2u21b0oikX+vGT/yJ8s2S/tD2FkkPqnPZb5/PSrrX9oaIePe+10XEP9k+q7vNkHRFRIzYPnlS\\n\",\n       \"25piGQBehGkTAAAAauKSHwAAQE0kVAAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADU\\n\",\n       \"REIFAABQ0/8Dsw8TC+BipngAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7938323190>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGPlJREFUeJztXUlsbNlZ/v6a53myy37Pft1B6kiRkk1YJFGyiKKOkJKw\\n\",\n       \"IYqEFAWEWDAJFjRhQWAXIhEhWCAg6SgMSkCgoICESAeBaBYMjbrTHUgP7nbZZbuq7JrnW9NhUfWf\\n\",\n       \"Pve6bNfk91zl+0lHVa7yuz717lf//59/JCEETJiYB5YnvQET6weTNCbmhkkaE3PDJI2JuWGSxsTc\\n\",\n       \"MEljYm4sTBoiepaIXieit4jouVVuysTdBi3ipyEiK4A3AHwcwCmA/wbwOSHEj1a7PRN3EYtKmg8C\\n\",\n       \"OBBCZIQQfQDfBvDp1W3LxF2GbcF/lwaQVX4+AfDj6i8QkelqXnMIIWja64tKGpMQ9xiLkuYUwK7y\\n\",\n       \"8y7G0sbEPcCipHkJwHuIaI+IHAA+C+C7q9uWibuMhWwaIcSAiH4RwD8BsAL4unlyuj9Y6Mg904VN\\n\",\n       \"Q3jtsWpD2MQ9hkkaE3PDJI2JubGoc2+tYbVa5bJYLCDSq24ikq/zc3UZMRwOMRqN5ONoNIIQAkII\\n\",\n       \"+bP6/rrj3pHGYrHA7XbD7XbD4/HA7XZLcgBjwthsNt1yOp1wuVxwOp1wOp2XrtnpdHSr3+/rlqZp\\n\",\n       \"6Ha7cq17Xva9Iw0RweVyIRQKIRwOIxgM6iSLxWKR5HA4HHC5XPD5fPD5fPD7/fB6vZeuWavVUK1W\\n\",\n       \"UavVUKvV0Ol00O125WO9Xkej0cBoNIKmaSZp1g1EBLfbjVAohGQyiXg8riOM1WqVEsjj8cDj8SAc\\n\",\n       \"DiMSiSAcDiMcDl+65vn5Oc7Pz3FxcYHz83M0m03dstlsEEKg2+0+gU+8emw8aVjdWK1W2Gw2uFwu\\n\",\n       \"RKNRJJNJpNNpbG1t6ewVJo3H44HX64XX60U0GkUkEkEsFkMkErn0N1hiBQIBeL1e1Ot1KV3q9Tos\\n\",\n       \"FgsGgwFarRasVisASGmzjlJn40ljsVjg8/kQCAQQCAQQCoWQTqexvb2N7e1tJBIJnaRh9eRyueTy\\n\",\n       \"+/3weDxSYhjhcDjg9/sxGAxARPD7/QgEAlLSEBF6vR6azSaq1Sp6vZ7OeF433AvS+P1+pFIpuba2\\n\",\n       
\"tpBMJrG1tYVYLCYNYSaOzWaD3W6Xy+Vywe12w263T/0bDocDPp8PFosFLpcLrVZLt1jKVCoVuFwu\\n\",\n       \"AEC/38dgMJAnrXXCxpPGarXC5/MhkUhgf38fe3t7SCaTSCQSSCQSiEajAKA7PbHEISLd8fw6ScOE\\n\",\n       \"CQQCaLfb6HQ6usdKpYJCoQCXy4XhcCiP4+uIpUhDRBkAdQBDAH0hxAdXsallwDea7Rifz4d4PI7t\\n\",\n       \"7W08fPgQTz31FKLRKKLRKGKxGEKh0KVrsI+FCcLSoN/vo9fr6QjGy263S/K43W7dEbtUKiGfzyMU\\n\",\n       \"CsHn80EIASLCaDRCr9d7rP8/q8CykkYA+JgQoryKzawCrI7YholEInjqqafw4MEDqY4CgYC0UaZh\\n\",\n       \"OBzKNRgMJFl6vR76/b40mHk5HA65nE4nLBYL7Ha7JIfX60UoFEIkEkEikYDdbketVsNwOESn07mX\\n\",\n       \"6mlqJPRJwWq1wu/3I5lMIpVKYXt7G7u7uzrSsHNvGmlYbahEabfbuqXaO3a7XZ6y2Ihmuwh41xAP\\n\",\n       \"BoPy1AZAEsbojV4HrELSfJ+IhgD+WAjxpyvY01Jg0qRSKTx69Aj7+/vS+E2lUojFYroj+DQMh0Pp\\n\",\n       \"ye10OqjVaqjX66jVamg0GtL5xw5APjU5HA65B7aN7Ha7jjSJRAL9fl86/e4jaT4khMgRURzAC0T0\\n\",\n       \"uhDixVVsbB4YbQu/3494PI7d3V08evRI2i/RaBSBQEBnr7AKUtXRNCO2Wq3KR/U47nK5MBgMpJTh\\n\",\n       \"UxT7h9iZyMf+cDiMer1+paRbByy1ayFEbvJ4QUTfwbi05bGSRo0V2e126YsJh8PS4A0EAvImCSEu\\n\",\n       \"2SvsjOPFBiyHARqNhlzNZlNnwzgcDjx48EAayHw0Z0nGxFGP8izppgU/1wELk4aIPACsQogGEXkB\\n\",\n       \"fALA76xsZ7PvQ3p6nU4n/H4/gsGgjjRGG2Y4HEp7pdPpoFAoIJfLIZfLoVAoQNM0uZg86lJvvt1u\\n\",\n       \"R7/fBxHJkIPL5YLD4dBJGzaeVcKogdJ1wjKSJgngO5MPbQPwl0KI761kV3OAVZLT6YTX651KGr7B\\n\",\n       \"qqRhwjSbTeTzebzzzjs4ODjA4eGhJJR6YlIf1bQKfnS73QiHw0ilUlL18d9T/T2qpFlHwgBLkEYI\\n\",\n       \"cQjg/Svcy0LgG+JwOKTtwBFpPnarGA6H6Ha7aDabaDQaqFarODs7QyaTwVtvvYU33nhDqi1+ZHXG\\n\",\n       \"bn9VYlitVmxtbaFWq6HVaqHX60m1pTrvVN+PalOtI9bTEjNAtRccDoe0KfibrN70Xq+Hi4sLGZEu\\n\",\n       \"FArIZDI4OztDpVKBpmnSMOZHNakKAFwulwxqejweeSoLBoPwer1SPXFwko3rRqOBcrmMer2OdruN\\n\",\n       \"fr+/luRZe9Ko9gKTRrUbWB2xaul0Ori4uMDx8TGOjo6QzWZRKBRwfn6OarWKbrd7KdNOlQ5EBKfT\\n\",\n       \"KR2HrJKi0SiCwSA8Hg+cTqeOuP1+H51OB/V6HaVSCbVazSTNk4YxyGi0GdiGYbV0cXGBbDaLN998\\n\",\n       \"E2+//bYujYEz64xqRL25TqcTwWAQ8XhcBkFVSWM8HV0ladYxhABsCGmMMBq67MntdDqoVqvI5/PI\\n\",\n       
\"ZrM4PDzEwcHBJaP3OrCkYV9QOp1GIpFAJBJBIBCAy+XSBTx5P5qmodVqoV6vo9VqSTVoSponACEE\\n\",\n       \"er2eTHCyWq3wer2wWq3QNA35fF4XPGw0Gjg8PEQul0OtVpNE4cjzLHA4HDKelEgkEA6H4fV6ZcBy\\n\",\n       \"XU9Fs2LtScPGbbvdlnaIxWJBr9dDtVpFOByWUoTDAoVCAYVCQZKGDeV5SRMOhyVpfD4fnE6nzju9\\n\",\n       \"qVh70rCk4aTtTqcDTdNQq9WQz+fh9XplVcBgMECv19N5eHu9ngxSzkoaDlKGQiHE43GEQiGdpGFs\\n\",\n       \"KnE2gjRMCmBsFHO8iE8xxlQH47oJarWC1WqF2+2G3+9HKBRCLBaTUW72Aqt5OMC7dVHGta5Ye9IY\\n\",\n       \"oaY2ENGlG6b6bGaRLFzy4na7pX+Gj9gc03I6nfLEBOASOflkxks1hNcRG0caYHzT+BtvtVqvrHic\\n\",\n       \"VR1xGmcoFEIoFLrkl2GHIqsm9YivaZo80vMyT093DEwM9uayXWH0uczqymdJEwwGkUgkZHKXMXpu\\n\",\n       \"s9l0pNE0De12Wx6z1cU+mo2VNET0PICfAHAuhHjf5LUIgL8C8BBABsBPCSGqt7jPubCszaDmALMN\\n\",\n       \"Ew6HkUwm8eDBA0kav98vy3rZ5mEbi31EXHXJ0qbZbELTtLlU5F3DLAkd3wDwrOG13wDwghDixwD8\\n\",\n       \"8+TnjQBHzT0ejy5Fc2trCw8ePMDe3h5SqRTC4bAkDAAp3fr9PprNJkqlEs7OznB4eIjT01OUSiVZ\\n\",\n       \"zmKMZa0bbiTNJBOvYnj5UwC+OXn+TQCfWfG+niiYNIFAQKZobm9vY2dnB3t7e9ja2kIoFILb7ZYq\\n\",\n       \"iY1sLopTSXN2diZJY+wssY5Y1KZJCiEKk+cFjHNrNgKqpOH0zEQiga2tLezu7uLhw4fyJMUhAyYB\\n\",\n       \"n5aMpCkUCiiXyzrSrCthgBUYwkIIsWn99YzqKZFIIJVKIZ1OY3d3V1fGy3aMGklvNBoolUrI5XI4\\n\",\n       \"OjpCpVKRke1N6FGzKGkKRJQSQuSJaAvA+So39aRhs9l0SV2qL2Zamma/39eV4RaLRZRKJZmI3mw2\\n\",\n       \"0e121/aIbcSimc3fBfD5yfPPA/i71WznyYNPTE6nEx6PRxb/M2mmxZY49lWtVmXLkXK5jHK5rCPN\\n\",\n       \"TRH0dcEsR+5vAfgogBgRZQH8FoAvA/hrIvpZTI7ct7nJxw3ufjWNNNNiSyxpqtUqLi4udJJGDYpu\\n\",\n       \"iqS5kTRCiM9d8dbHV7yXJwZVehhVE/eccblcsNvtl5yFQghomoZmsymL/Jk03B1r3W0YIzbOIzwv\\n\",\n       \"LBYLvF4vfD4fvF4vAoEAdnZ2sLOzg3Q6jZ2dHcTjcQQCAdlvj9MsOOXi7OwMJycnyGazOD4+Ri6X\\n\",\n       \"Q7VaRafTecKf7nZw70nDTYg4RMDHa7WfTSgUQjAY1JGm1WrJOBKT5ujoCJlMRqqmdSzunwX3njRc\\n\",\n       \"oJ9MJmX/mng8LlcsFpNRbqfTKfN3WB1VKhXkcjmcnJzg+PgYmUwGzWYTrVZrY3rsGXHvSUNEOtI8\\n\",\n       \"88wzusaMoVDoUs4v2zDlcllWZ56eniKbzSKTyehqw01JsyEwtn/lTp5cnakavmqYABgbwK1WC+Vy\\n\",\n       
\"WSao5/P5qR7fTSQMcA9Jo9ZJqZWZbAxzr2DVL2PMw2Epk8/ncXR0hEKhgGq1qvP4btqJScW9Iw0A\\n\",\n       \"XXEdd/JU68D5NZY0akkux5bK5bIME1SrVUka1RdjSpoNglpcZ1RPgUBAV3jHpBFCyMR0VT0dHx+j\\n\",\n       \"3W7L7hLr2OJ1Xtw70vBpibuPx2Ix2fHT7/frsvBYNTFReLHjrtFooNPpyNqpTVZJKu4tabhjOefI\\n\",\n       \"JBIJ+P1+KV3U05LaOLpSqch67GazKQdocHLVfcC9JA03cnz06BGefvppJJNJnaQxpj2wpKlUKjg/\\n\",\n       \"P5eShgOR69x9fBHcGOUmoueJqEBErymv/TYRnRDRy5NlTAe9s2APMJPmve99L/b39yVpuIHAdX6Z\\n\",\n       \"YrEoScPFeaZ60uMbAP4QwJ8prwkAXxVCfPVWdrVCcCYeG7dsz3BogPNljP1kePV6PZRKJRQKBZye\\n\",\n       \"niKTyciS3k2Y3bQIZolyv0hEe1PeWouaUzV9k0f2qFNTfD6frgkRVxNwv712u41isYhcLic9vqye\\n\",\n       \"NjVMcBOWaS/5S0T0AyL6OhFd7hV/R8D9fbn2OhqNSknj9/ulpOEmRMC7/WTY+C0Wi8jn8zIomc/n\\n\",\n       \"TdIsgD8CsI9xz70cgN9b2Y5WDCaNmvMbiUQQCoWkpFHVE/tjuAESJ1YxaTKZjEmaRf6REOJcTADg\\n\",\n       \"axj3D74zUPv28lidSCSCVCqly4/hagJj+iaXorTbbdkUiUtsuZfNulcULIOFSDNJJmf8JIDXrvrd\\n\",\n       \"x41psSWfz4doNCrLUBKJhAwXTCMNG8CsorgJtUqY+3JSmoZFcoS/BOBjRPR+jE9RhwB+/lZ3OSfU\\n\",\n       \"vr089S0ajcrhGolEQkoalSw8TodrsdX2a6qU2fTY0k1YNEf4+VvYy8rAHl1VPUWjUaRSKezu7sr+\\n\",\n       \"wty5ygi1Z1+r1dL5Yu6LA+86bJxHmCe8caOhSCSCZDIph5aqbVtVw5frsDVNk4bv6ekpjo+PpV9G\\n\",\n       \"07Qn/fHuBDaWNOpQLm7ZypPdeOQOH7FZqrA6KhQKODs7QzabxdHRkfQA39fTkhEbRxqehMJzt9Pp\\n\",\n       \"tI40fr9f16aenXncR6ZSqeikjJrza0qaMTaONDzcgkmzs7Mj1RN34VT7z6jH61qthmKxeEnSzNOf\\n\",\n       \"7z5g40mTTqd1dUvGGUtCCLTbbZTLZZyenuLk5ASnp6coFotoNBrQNG0jmiuuEhtLGm53tr29LSUM\\n\",\n       \"jwtUwaQpFos4OTnBwcEB8vm8rp/MOo9Dvg1sHGm4R54qaTidc9owdq4uKJVKyGazePvtt2WyFZOG\\n\",\n       \"f++++mWM2DjSTFNP6jAvI0aj0SVJwy3xWTWZZNFj40hjHP13VU8ZFWq7WFZH6nXmhbGLqPF1da/G\\n\",\n       \"x2l7NL7PSe88jGzezuiDwUA3cnFeL/fGkWYRGIm27AQ4lk6q8cxE5Ef+u4C+eG+aNFTfs1gssnKC\\n\",\n       \"Uztmmbqrfg4+KdZqNVSrVWiaduWYomkwSQPoCGO323U3fBHSGFukTbsZqlRhMnC87Kr9sYoNBAKI\\n\",\n       \"xWJybPQ0A98IdQ+1Wg2FQkEeAtRU1aUlDRHtYpzmmcA4OPknQog/uOt9hOcB37yrJM0isFgs0qdz\\n\",\n       
\"07dXJcxVA1HV92w2GwKBgJw1lU6nZTeL66Duo1gsAgA6nQ7K5bJsiTJrXO0mSdMH8KtCiFeIyAfg\\n\",\n       \"f4joBQBfwLiP8FeI6DmM+wivZS9hbgCQSCSwv78vh3NwWcoiDj11PLPq52EJpKoaridXl1FFqZLG\\n\",\n       \"ZrPJ8AhXUcwraZxOp2zD3263YbfbdTOxbvrM15JGCJEHkJ88bxLRjwCkMe4j/NHJr30TwL9iTUlj\\n\",\n       \"sVgQDAaxs7MDIQT8fr9uassiUe1WqyVDD9xwWh2mygRgEng8HtlUyefzTSWNurh8mGc1zGLTAO8S\\n\",\n       \"x263S2L3ej3Y7XbZVZ2/LNdhZptmklz+AQD/iQ3qI8ykEULA5/Nhe3tbZ88s4tRjA5MfeQ4CSzA+\\n\",\n       \"1bENpQ7rCIVCl+waVYXyOETumaM2wJ4FQgg4HA6dRGVJxnXqN2Em0kxU098C+BUhREPVueveR5gN\\n\",\n       \"S6/Xi1QqdamnzCJ2TalUQrFYlIu/0dxn2DgOOhKJyAZK8Xh8KmmMezb2Mr4J6udg9aSWE3N7/llc\\n\",\n       \"DLNk7tkxJsyfCyG49eud7SPMUWt1YLs6RXeaKL/q9WX2AEBmDvLNMUoa3lcwGJRNlILBoLxxsxLW\\n\",\n       \"OABNtZ+mqddKpYJGo6HLSLy4uEChUJgpkn/T6YkAfB3A/wkhfl95i/sI/y7uWB9h/ta0222Z6sBi\\n\",\n       \"nMfs3Da4zkoIAZvNdq1Nw4NauSeO2j1UfWRMIxJ/SbhzhSpFeFSjilqthlwuh/PzcznymcdGd7vd\\n\",\n       \"G0c/3/Q/+CEAPw3gVSJ6efLaF3GH+wirtde1Wg3lchmBQECWsjwO8IxLm80Gt9t9abqdUb04HA6Z\\n\",\n       \"GMYOQP4ss6jKwWCAdrsth5DxOGleRmnTaDSk6lSHy08j2DTcdHr6d1xdsXAn+wgzadjrWalUdLVP\\n\",\n       \"jwNceOd2uy85zVSPMHDZyGXSGH1F15GH50vV63UpOZrNplzG0xDP+ORmTOrgsqVJs44YjUZyPkE2\\n\",\n       \"m4XNZpP/KSy+543VGI+8qqNtmrrjagiGmsTV7/flqeyq0xkP5+DfV22Uab/PRX28+LjPk+2MpNE0\\n\",\n       \"TQ4s47a23DLlXpJmOByi0Wggn88DGH+rUqmUPALHYjEAl4OF10FtIMB+lXlsJJZ86k1k+2aaocrk\\n\",\n       \"ZjWjHtmntcrvdru6pktcPcHORePfYMnEf4ftISb0Tdg40oxGIzQaDRCRTgyzyK7X67q29rOQhj21\\n\",\n       \"LpdLGoyj0UjaLDeBc5B5L3xiYUPVCN4nLyYBd0o3SgOOWvP7KsGmNVtiSaZ6vudpzLRxpBkOh1Id\\n\",\n       \"XVxcwO12y65VXGY7D2mISEoWj8cDj8eD0WgkbZZZwJ20yuUyzs/PL0kC47ebc5VLpRLK5bKs8rzK\\n\",\n       \"sFVTO4z20LQY2lW/M6szc+NIo9YxAeMbxmOQiQiaps1NGvXI7na7Ua1WUS6XUSwWEQ6Hb7wGO/t4\\n\",\n       \"QguThSWD8aayq4CXao91u90nnnq6caQxQgiBTqeDarUKItIZwrOqJ0524mMx57Hw400wDnNnHwqr\\n\",\n       \"BSNYIqqlMzz+5y5kEW48aUajkSRKr9dDrVYDMJsBzL9nTE1Q7ZtZ0hLUCDL36OM1LWeHKz15qW3z\\n\",\n       
\"7wLotph7V+JRaq7MoumbRnVmPILfBGNqxFXOO4aaBDbNTnlcEEJM/WZtPGlMLI6rSLNM+zQT9xQm\\n\",\n       \"aUzMjWtJQ0S7RPQvRPS/RPRDIvrlyetr20fYxPK41qYhohSAlJojDOAzGEe1G+KaPsKmTbP+uMqm\\n\",\n       \"WTRHGFiTPsImVo+ZbRolR/g/Ji+tRR9hE6vHTKSZqKa/wThHuIk16iNsYvW40U8zyRH+BwD/aEj5\\n\",\n       \"5Pf3APy9EOJ9htdNm2bNsZCf5qocYbrDfYRN3D5uOj19GMC/AXgV47JcAPhNAJ/DWDXJPsJKHRT/\\n\",\n       \"W1PSrDnMMIKJuWGGEUysDCZpTMwNkzQm5oZJGhNzwySNiblhksbE3DBJY2Ju3JqfxsTmwpQ0JuaG\\n\",\n       \"SRoTc+NWSUNEzxLR60T01qQL6LLXyxDRq5MU0/9a4N8/T0QFInpNeS1CRC8Q0ZtE9L15coOuuN7C\\n\",\n       \"qbDXpNcutMdbS9e9rq53mQXACuAAwB4AO4BXADyz5DUPAUSW+PcfwTiR7DXlta8A+PXJ8+cAfHnJ\\n\",\n       \"630JwK8tuL8UgPdPnvsAvAHgmUX3eM31Ft6jEOJWJc0HARwIITJCiD6AbwP49Aquu3CaqRDiRQAV\\n\",\n       \"w8ufwritLSaPn1nyesCCexRC5IUQr0yeNwGoLXjn3uM111t4j8Dtqqc0gKzy8wne3fCiEAC+T0Qv\\n\",\n       \"EdHPLXktxm20t106FVZJr11JC95VpuveJmlu4yz/ISHEBwB8EsAvENFHVnlxMZbjy+576VRYMrTg\\n\",\n       \"XXaPq07XvU3SnALYVX7exVjaLAwhRG7yeAHgOxirwGVRmJTqcEbiUu1thRDnYgIAX5t3j3RNC95F\\n\",\n       \"9qhc7y/4esvu8TZJ8xKA9xDRHhE5AHwW41ayC4GIPETknzz3AvgEVpNmyu1tgRW0t10mFfaq9NpF\\n\",\n       \"93hr6brLnGZmsN4/ibHFfgDgi0teax/jE9grAH64yPUAfAvAGYAexvbWFwBEAHwfwJsAvgcgtMT1\\n\",\n       \"fgbjqTWvAvjB5OYm57jehwGMJp/x5cl6dtE9XnG9Ty6zRyGEGUYwMT9Mj7CJuWGSxsTcMEljYm6Y\\n\",\n       \"pDExN0zSmJgbJmlMzA2TNCbmhkkaE3Pj/wFJ7Hv45ZreFAAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f7938233050>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEx1JREFUeJzt3X2QXXV9x/HPJ4+bZPOglUqV2BsabAVNg1UHfEzQdigj\\n\",\n       \"aFur0Fap7djpqJUSdURm2vGPMlqdjg/jtDNWqoAibdWiTqtCYaNRkQhmCU8+pAMFFEhLMdnN4yb5\\n\",\n       \"9o97N1mS3ezvuye/vffi+zWT4T5895zvnt+5Z7+cc+7v64gQAAAAZm5OtxMAAADodxRUAAAADVFQ\\n\",\n       
\"AQAANERBBQAA0BAFFQAAQEMUVAAAAA3N6+bKbTNnAwAA6BsR4cler1pQ2T5X0kckzZX0yYj426Nj\\n\",\n       \"3ve+9x3zc0NDQ1q/fn3N1J4gMxfXoUOHKmZSxp50LCc1Z07uJGQmfqo8br75Zp1zzjknfLknQmas\\n\",\n       \"Dxw4UBx78ODB4tixsbEqsfPnzy+OlaSBgYFjXpvqs7dkyZLi5S5cuLA4dt688kPQ3r17i2N3795d\\n\",\n       \"HCtJo6OjVfLI7BeZ2AULFkz6+i233KKzzz77mNcHBweLl71ixYri2OXLlxfHLl26tDg2sw9Nth8f\\n\",\n       \"T+ZzkjkW7du3rzh2z549x7x23XXX6cILLzzm9V27dhUvV8rt+/v3768Sm9mXa86Fmfm7M3fu3KK4\\n\",\n       \"DRs2TL2+4rUl2Z4r6eOSzpV0uqSLbD+n1voAAAC6peY9VC+StC0i7o+IMUnXSXpNxfUBAAB0Rc2C\\n\",\n       \"6pmSHpzw/KHOa9NqtVo18sEsWbVqVbdTwAzx2etvp5xySrdTwAw997nP7XYKaKhmQTXjC6P8Qe5v\\n\",\n       \"jF//Yuz628qVK7udAmaIgqr/1bwp/SeSJn66V6p9luoJhoaGDj9utVoc0AEAQE/Ytm2btm3bVhRb\\n\",\n       \"s6C6TdJptluSfirpDZIuOjpoNr/NBwAAUGr16tVavXr14ec33HDDlLHVCqqIOGD77ZK+rva0CVdG\\n\",\n       \"xL211gcAANAtVeehioivSvpqzXUAAAB0W1dnSpfKJ9PKTlCZ0QuTdWb0yraoNUlmRs2JS2tNUFlr\\n\",\n       \"/LIT5GXiM7GTTVp4ImS2cXaS01qTTmYme6w5+W0mPjMxZGbi2R07dhTHZiaSLf0bMi5zjMtMZpk5\\n\",\n       \"xmViM9tYqjOZZdZUE89OJvNZrTnWmQl7p0IvPwAAgIYoqAAAABqioAIAAGiIggoAAKAhCioAAICG\\n\",\n       \"KKgAAAAaoqACAABoiIIKAACgIQoqAACAhiioAAAAGqKgAgAAaKjrvfxK+w5lelFl+1xl+gPVyiOb\\n\",\n       \"c7eXm1Urj2z/ukwemZ5YmT5ztXrSZfpnSfXGJPN5yvx+mR562V5+mTEZHBwsjl28eHFxbM0enQcP\\n\",\n       \"HiyOrXXcyvRVGxkZKY7N/G41ZbZFZr9YtmxZKo/ly5cXx2Z6WC5atKg4tleO95k+iKV9Gzds2DDl\\n\",\n       \"e1XPUNleaXvI9t2277L9jprrAwAA6IbaZ6jGJF0aEcO2ByXdbvvGiLi38noBAABmTdUzVBHxSEQM\\n\",\n       \"dx6PSrpX0jNqrhMAAGC2zdpN6bZbks6UdOtsrRMAAGA2zEpB1bnc93lJl3TOVAEAADxpVP+Wn+35\\n\",\n       \"kr4g6TMRcf3R7990002HH69atUqnnnpq7ZQAAACmtWnTJm3atKko1tmvIWa4/d3JqyQ9FhGXTvJ+\\n\",\n       \"XHHFFaXLyqy3OLbmspk24Yhe+Rot0yYcwbQJRzBtwhFMmzAzTJtwRK8c72tMm7B06VJFxKS/YO1L\\n\",\n       \"fi+R9EeS1tve0vl3buV1AgAAzKqql/wi4ltiNnYAAPAkR7EDAADQUNdbz5TKXDuteU09k0et2My9\\n\",\n       \"CNlrzrW2c+a+moya98tl7mnJ/H61YjP7RVatZde616L0fohxBw4cKI7N7BeZe7mWLFlSHJu5T0bK\\n\",\n       
\"3V+XGevdu3cXx+7atas4dseOHcWxO3fuLI6VcmOd2RYrVqwojj355JOLY0866aTiWEkaGBgojs38\\n\",\n       \"fqOj5V/Qz+wXmdh9+/YVx0q548CJOMZxhgoAAKAhCioAAICGKKgAAAAaoqACAABoiIIKAACgIQoq\\n\",\n       \"AACAhiioAAAAGqKgAgAAaIiCCgAAoCEKKgAAgIa63nqmtA1Ar7RbqRWbaeNSs8VIRi+0A8q2Lsnk\\n\",\n       \"PDY2VhybaWeRid2zZ09xbKa1R3bZmRYOtdrJZGS2sZT7TGW2WyaPTJuhefNyh+5MHpn2Hpn9Its2\\n\",\n       \"pFSmZY+Ua82SaTOUkWnjkm2tk9k/M8e4zGek1r6c3e8z43cijltTZmf79ySFpMnWEhHxxZIV2J4r\\n\",\n       \"6TZJD0XE+TPKEgAAoIcdr9w7X+2CaipFBZWkSyTdI2lpaVIAAAD9ZMqCKiL+uOnCbZ8i6TxJV0ja\\n\",\n       \"0HR5AAAAvWjaC4y2T7Z9pe2vdZ6fbvtPC5f/YUnvltQbN/0AAABUUHLH1qcl3SDpGZ3nP5Z06XQ/\\n\",\n       \"ZPvVkrZHxBZNfh8WAADAk0LJLfNPi4h/tn2ZJEXEmO2Sr4y8WNIFts+TNCBpme2rI+JNE4OGhoYO\\n\",\n       \"P261Wlq1alV59gAAAJXs3bu3+BuqJQXVqO1fGH9i+yxJO6b7oYi4XNLlnZ95haR3HV1MSdL69euL\\n\",\n       \"EgUAAJhNAwMDT5hqY2RkZMrYkoLqnZK+IulU29+RdJKk180gr9zkUAAAAH1i2oIqIm63/XJJv6r2\\n\",\n       \"vVA/jIjy2cDay/iGpG/MLEUAAIDeNm1BZXuRpLdKeqnaZ5k22f6HiNhbOzkAAIB+UHLJ72pJOyV9\\n\",\n       \"TO0zVH8g6RpJv18xLwAAgL5RUlCdERGnT3h+s+17TlQCpb12Mj15avbyqyXTRygTm+mpJOW2c60+\\n\",\n       \"STXHOqNWT7rMmAwODhbHLly4MJXHokWLimMXLFhQHJvp21irn2C2b1xmn8tsi8w2zvSky+6bmfhs\\n\",\n       \"H8RS2WNRqWy+mf0zs9127Jj2u1qHZfpuZvflTH++jFo99zL7Rba34vz580947MUXXzzleyXZfd/2\\n\",\n       \"2eNPOt/yu71ozQAAAD8Hjtcc+c4JMd+2/aDa91A9S9IPZyE3AACAvjBdc2QAAABM43jNke+f+Nz2\\n\",\n       \"L6o94zkAAAAmKGmOfIHtH0u6T+25pO6X9NXKeQEAAPSNkpvS/0bS2ZJ+FBGrJL1S0q1VswIAAOgj\\n\",\n       \"JQXVWET8r6Q5tudGxJCkF1TOCwAAoG+UTBbxuO2lkjZJ+qzt7ZJG66YFAADQP0rOUL1W0m5Jl0r6\\n\",\n       \"mqRt4huAAAAAh5U0Rx4/G3VQ0qerZgMAANCHjjex56jaE3lOJiJi2YlIoLRtQK2WKFJu6vtaLWJ6\\n\",\n       \"RaaVy6FDh6ost1abmqxa7RMyv1+mRcXIyEhxbDaPWu1WBgbKZ2J5ylOeUiUHKde+JNPaIxObaV2S\\n\",\n       \"bbdSq71HZvwysYsXLy6OzeybUu5zndkWmZY2NT/Xo6Pld+Rk2trUOoZn9otse61M65lMHlM53jxU\\n\",\n       \"5U3EpmB7haRPSjpD7eLsTyLiu02XCwAA0EvKS/WZ+aik/4iI19meJ6m8+ycAAECfqFZQ2V4u6WUR\\n\",\n       
\"cbEkRcQBSeXntAEAAPpEvRtQpFWS/sf2p2x/3/Y/2i6/MA4AANAnahZU8yQ9X9LfR8TzJe2SdFnF\\n\",\n       \"9QEAAHRFzXuoHpL0UER8r/P885qkoNq4cePhx61WS61Wq2JKAAAAZYaHhzU8PFwUW62giohHbD9o\\n\",\n       \"+9kR8SNJr5J099Fx69atq5UCAADAjK1du1Zr1649/Pyqq66aMrb2t/z+Qu12NQsk/ZekN1deHwAA\\n\",\n       \"wKyrWlBFxB2SXlhzHQAAAN1W86Z0AACAnwu1L/lNq0brkMwU+VKuZUAtmZxrtYfJxme2Wy+0qcku\\n\",\n       \"O/P77d+/vzh2586dxbGZdiSZlhOStHv37uLYTIuKzHbLtA3JHCsyrVak3H6UaSezZ8+e4tjMNs7k\\n\",\n       \"IOX2z0zOtWIzn9Ps35Bara0yrUtqtbWS6rVpy2y3TMuXTA6Z7TaT+KY4QwUAANAQBRUAAEBDFFQA\\n\",\n       \"AAANUVABAAA0REEFAADQEAUVAABAQxRUAAAADVFQAQAANERBBQAA0BAFFQAAQENdbz2TbY1SItuO\\n\",\n       \"JKPWVP2Z9gKZ5Wbb8NRqW5BRq01NTZltcdpppxXHLlmypDg20+4hK9PqJNPmJLPcAwcOFMeuWLGi\\n\",\n       \"OFaSli1bViU2m0epTJsaKbftMp+/zPhlWh1l9qHs8T6z7IzFixcXx2ba1AwODqbyyMRncs4cizL7\\n\",\n       \"UGa/GBkZKY7NLru0NdJ555035XtVz1DZfq/tu23fafta2wtrrg8AAKAbqhVUtluS3iLp+RHxPElz\\n\",\n       \"JV1Ya30AAADdUvOS305JY5IW2z4oabGkn1RcHwAAQFdUO0MVEf8n6e8kPSDpp5J+FhH/WWt9AAAA\\n\",\n       \"3VLzkt+vSPpLSS1Jz5A0aPsPa60PAACgW2pe8nuBpO9ExGOSZPuLkl4s6bMTgzZu3Hj4cavVUqvV\\n\",\n       \"qpgSAABAma1bt2rr1q1FsTULqh9I+ivbiyTtlfQqSZuPDlq3bl3FFAAAAGZmzZo1WrNmzeHn1157\\n\",\n       \"7ZSxNe+hukPS1ZJukzRe3n2i1voAAAC6perEnhHxQUkfrLkOAACAbqP1DAAAQEMUVAAAAA11vZff\\n\",\n       \"vHllKWR6tmX7u2XiDx48WGW5tWIz+Ur1+uhl8sjEZnqJZZfdC2PSK30NM73j9u7dWxxb2j9Lym2L\\n\",\n       \"bJ/JTB/ETO+4HTt2FMc+9thjVWKl3Phl9s/Mdlu4sLzzWGa5mX1IqtfXMLPPZbZFpt+eVP43Vcpt\\n\",\n       \"i5p/d2rJbItM7FQ4QwUAANAQBRUAAEBDFFQAAAANUVABAAA0REEFAADQEAUVAABAQz1ZUN13333d\\n\",\n       \"TgENPPDAA91OATP08MMPdzsFNJCZHgG9hbHrfxRUOOEoqPrXI4880u0U0MD+/fu7nQJmiIKq//Vk\\n\",\n       \"QQUAANBPKKgAAAAacs12FdOu3O7eygEAAJIiYtL+U10tqAAAAJ4MuOQHAADQEAUVAABAQz1XUNk+\\n\",\n       \"1/YPbP/Y9nu6nQ+mZvufbD9q+84Jrz3V9o22f2T7BtsrupkjpmZ7pe0h23fbvsv2OzqvM4Y9zvaA\\n\",\n       \"7VttD9u+x/b7O68zdn3E9lzbW2x/pfOc8etjPVVQ2Z4r6eOSzpV0uqSLbD+nu1nhOD6l9lhNdJmk\\n\",\n       
\"GyPi2ZJu6jxHbxqTdGlEnCHpLElv63zeGMMeFxF7Ja2PiLWS1khab/ulYuz6zSWS7pE0fjMz49fH\\n\",\n       \"eqqgkvQiSdsi4v6IGJN0naTXdDknTCEiNkl6/KiXL5B0VefxVZJeO6tJoVhEPBIRw53Ho5LulfRM\\n\",\n       \"MYZ9ISJ2dx4ukDRX7c8iY9cnbJ8i6TxJn5Q0/q0xxq+P9VpB9UxJD054/lDnNfSPp0fEo53Hj0p6\\n\",\n       \"ejeTQRnbLUlnSrpVjGFfsD3H9rDaYzQUEXeLsesnH5b0bkmHJrzG+PWxXiuomMPhSSTac3Iwpj3O\\n\",\n       \"9qCkL0i6JCJGJr7HGPauiDjUueR3iqSX215/1PuMXY+y/WpJ2yNii46cnXoCxq//9FpB9RNJKyc8\\n\",\n       \"X6n2WSr0j0dtnyxJtn9J0vYu54PjsD1f7WLqmoi4vvMyY9hHImKHpH+X9Bti7PrFiyVdYPs+SZ+T\\n\",\n       \"dI7ta8T49bVeK6huk3Sa7ZbtBZLeIOnLXc4JOV+WdHHn8cWSrj9OLLrItiVdKemeiPjIhLcYwx5n\\n\",\n       \"+2nj3wCzvUjSb0raIsauL0TE5RGxMiJWSbpQ0s0R8UYxfn2t52ZKt/3bkj6i9k2WV0bE+7ucEqZg\\n\",\n       \"+3OSXiHpaWpf7/9rSV+S9C+SniXpfkmvj4ifdStHTK3zrbBvStqqI5cW3itpsxjDnmb7eWrftDyn\\n\",\n       \"8++aiPiQ7aeKsesrtl8h6Z0RcQHj1996rqACAADoN712yQ8AAKDvUFABAAA0REEFAADQEAUVAABA\\n\",\n       \"QxRUAAAADVFQAQAANERBBaDrbH+7899ftn3RCV725ZOtCwBOJOahAtAzbK9Te5LD8xM/My8iDhzn\\n\",\n       \"/ZGIWHoi8gOAqXCGCkDX2R7tPPyApJfZ3mL7EttzbH/I9mbbd9j+s078OtubbH9J0l2d1663fZvt\\n\",\n       \"u2y/pfPaByQt6izvmonrctuHbN9pe6vt109Y9kbb/2r7Xtufmd2tAaAfzet2AgCgI61v3iPpXeNn\\n\",\n       \"qDoF1M8i4kW2F0r6lu0bOrFnSjojIv678/zNEfF4p7fdZtufj4jLbL8tIs6cZF2/K+nXJa2RdJKk\\n\",\n       \"79n+Zue9tZJOl/SwpG/bfklEcKkQwJQ4QwWgl/io578l6U22t0j6rqSnSlrdeW/zhGJKki6xPSzp\\n\",\n       \"FkkrJZ02zbpeKunaaNsu6RuSXqh2wbU5In4a7XsihiW1GvxOAH4OcIYKQK97e0TcOPGFzr1Wu456\\n\",\n       \"/kpJZ0XEXttDkgamWW7o2AJu/OzVvgmvHRTHSgDT4AwVgF4yImniDeRfl/RW2/MkyfazbS+e5OeW\\n\",\n       \"SXq8U0z9mqSzJrw3Nv7zR9kk6Q2d+7ROkvRySZt1bJEFANPi/7oA9ILxM0N3SDrYuXT3KUkfU/ty\\n\",\n       \"2/dtW9J2Sb/TiZ/4FeWvSfpz2/dI+qHal/3GfULSVtu3R8Qbx38uIv7N9tmddYakd0fEdtvPOWrZ\\n\",\n       \"muQ5ADwB0yYAAAA0xCU/AACAhiioAAAAGqKgAgAAaIiCCgAAoCEKKgAAgIYoqAAAABqioAIAAGiI\\n\",\n       \"ggoAAKCh/wcQESvdP72F3wAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 
0x7f793821a310>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEBVJREFUeJztnVuMJNdZx39f36d6+rI9O7PDrveSlQKysSX7xSA5EREK\\n\",\n       \"0fqFwEsiS0hRgIgHboIHTHiJHyMkIsQLEoqNwkWJEMgoIAG2ERJBKIDROnYgjmPJK8/sXHd2unu6\\n\",\n       \"p+99eOg+h+qanktX12Snqs5PKk13zXTpm93/fOec73zfd0QphcUyC4lHbYAlfFjRWGbGisYyM1Y0\\n\",\n       \"lpmxorHMjBWNZWZ8i0ZE7ojIuyLyAxF5MUijLBcb8ROnEZEk8H3gk8B94L+AF5RS3wvWPMtFxK+n\\n\",\n       \"eRZ4Xyl1TynVA74BfDo4sywXmZTPz10D1lzv14GfcP+AiNhQc8hRSsm0+349jRVEjPErmvvAddf7\\n\",\n       \"64y8jSUG+BXNm8BHReSWiGSAzwLfDM4sy0XG15xGKdUXkV8D/glIAi/blVN88LXkPtOD7UQ49AQ9\\n\",\n       \"EbbEGCsay8xY0VhmxorGMjNWNJaZsaKxzIwVjWVm/G5YWk5ARMwFkE6nzZVKpRgOhwwGA/r9PoPB\\n\",\n       \"wLzXXy86VjTnQCKRIJlMkkwmSaVSlMtlKpWKuVqtFo1Gw1ytVmviGg6Hj/pXOBErmnMgkUgYz5LJ\\n\",\n       \"ZFhZWeHWrVvcvHmTmzdvUqvV2N3dNdf+/j7VahWlFO12+1GbfypWNOeA9jDZbJZcLsfKygq3b9/m\\n\",\n       \"qaee4sknn2RnZ4e1tTU+/PBDstks6XQapRSdTscMaReZuUQjIveAOjAAekqpZ4MwKuwkEgkymQy5\\n\",\n       \"XA7HcahUKly9epXbt2/zxBNPUCqVEBE6nQ4HBwc0m03q9TqpVCr6omGUjPUJpdTDIIyJCslkknQ6\\n\",\n       \"zcLCAouLiywsLJDJZEgmkwAMh0P6/T69Xo9Op0Ov16Pf7zMcDglDbX0QS+6L/6fxQyaZTJLJZE4U\\n\",\n       \"zWAwoNfr0e126Xa7RjRhYF7RKOANEXlTRL4QhEFR4CyiOc7ThIF5h6fnlFKbIrIMvC4i7yqlvhWE\\n\",\n       \"YWFGiyaXy02IJpFIoJQyXqbdbnN4eEir1aLX6zEYDKI/PCmlNsdfd4FXGZW2xB49EXZ7mnQ6TSKR\\n\",\n       \"YDgc0ul0aDabVKtVHjx4QLVapdls0u12oy0aEXFEpDB+nQc+BbwTlGFhxj085fN5IxoRYTAY0O12\\n\",\n       \"aTab1Go19vb2qNVqRjRhYJ7h6Qrw6niJmAL+Uin1WiBWhRzvnCaXyxnRuD1NrVbjwYMHNJtN+v0+\\n\",\n       \"/X4/FJ7Gt2iUUh8ATwdoS2jRsRW93+QWTLlcxnEcUqmUCeC1Wi2azSYHBwfUarVQRIHd2IhwACQS\\n\",\n       \"CVKplNlvKhaLVCoVVlZWuHr1KoVCgVQqRavVMtsGjUYjNHMYL1Y0AaCDeZlMhkwmQ6lUYmlpiStX\\n\",\n       
\"rnDt2jVEBKWU8TAPHz6k0WjQ6XQetem+sPk0AeBeYufzeUql0oSnKZVKJJNJ42m0aMLqaaxoAkB7\\n\",\n       \"Gi2aQqHApUuXWFpaYmVlhcXFRUSEZrMZieHJiiYAdCqE3qDM5XJkMhkTmxkMBrRaLZMSsb+/H6q4\\n\",\n       \"jBcrmgDQniabzeI4jokA68nxNNFoTxNGrGgCQHuabDbLwsICuVzO5MlME03YIsBerGgCQE+EHceh\\n\",\n       \"UCjgOA7ZbNbkx/T7fVqtFvV6nb29PSsaC+RyOUqlEisrK9y4cYPl5WXy+TyJRMIE8w4PD01A7/Dw\\n\",\n       \"kE6nQ7/ff9Sm+8KKJgCy2SzFYpGVlRUee+wxLl++PCEavZutE8kPDw/pdruh2dX2YoN7AZDNZimV\\n\",\n       \"SiwvL3Pjxg2KxSKLi4skk8ljPY3OoQkjp3oaEXlFRLZF5B3XvYqIvC4i74nIayJSPl8zLx7u2qZc\\n\",\n       \"LkexWGR5eZmrV69SqVRwHGfC02jh6JKVMHuaswxPfwrc8dz7XeB1pdSPAv88fh8b9F6TjgIvLCzg\\n\",\n       \"OI4J7DmOM5F0pdM7dUGcvsIoGDiDaMaZePue2z8LfG38+mvAzwVs14XmONEsLi5SKBQmMvXcInFX\\n\",\n       \"UiqlQisav3OaK0qp7fHrbUa5NbHBLRp3spX2NDphXCllcn+93iasgoEAJsJKKRW3/nq6GE4LxnEc\\n\",\n       \"Mzzl83k6nc4RwXivMON3yb0tIqsAIvIjwE5wJl18UqmUSRovlUrk83lyuRyp1OhvUM9jdMWBu9A/\\n\",\n       \"CvgVzTeBz41ffw7422DMufiIyIRoyuWySenUEWB3Vwi3aMI8JLk5y5L768C/Az8mImsi8nngy8DP\\n\",\n       \"iMh7wE+P38cGr6dxiwame5qwT37dnDqnUUq9cMy3PhmwLaFBF/fn83nK5TL5fN7sNcHRYriwVVCe\\n\",\n       \"ho0I+2BaiYq7grLT6VCv19nd3WV7e5udnR2TQB4FT2P3nnzg7gqhJ8Fe0dRqNdNSZHt7m2q1Grqq\\n\",\n       \"g+OwovGBN71T1zVp0XS7XeNp1tbWIudp7PA0I3r1dNLw1G63jWjW19d58OCBFU3c0GmbqVSKdDpN\\n\",\n       \"uVw2SeOrq6uUSiUymQyDwYBGo2GK4KrVKvv7+xwcHJgi/yhgRXMKIkIymSSbzZp2aLrSYHl5mdXV\\n\",\n       \"VfL5/BHR1Ov1I6IJS9ntaVjRnAEdl9Gbkl5Pk0gkTHF/o9GgXq9PeBqdCmE9TYxwx2WKxeJETdPq\\n\",\n       \"6qrJmdGX19P0er3Qp0O4saunM6A9TaFQoFwuUygUTEDP20Lk8PCQdrs9EdCLSiRYY0VzCiIy0XTx\\n\",\n       \"0qVLFIvFIxUHw+GQbrdLq9U6IpqoCceK5gy4PY0WjTs+Yz2Nh2NyhF8SkXURuTu+vOmgkUJ7Gu/w\\n\",\n       \"5N6kHAwGdDqdqaKJmnD85ggr4CtKqWfG1z8Gb9rFQC+5j+uhB9Dr9Tg8PDSdrXQxXK/Xi4xQ3PjN\\n\",\n       \"EYYY9Q8+rcWrWzTestsoMs+c5tdF5Dsi8nLUS1i8u9reDcper2e6de7u7lKr1UxBXBTxK5o/Bj7C\\n\",\n       \"qOfeJvAHgVl0ATlteNITYDs8nYBSakeNAb5KxPoH6/Oa3GUqOhpcKpVYWFiY6AbRaDSo1Wo8fPjQ\\n\",\n       
\"iEZXUUYRX6IZJ5Nrfp6I9Q/Wk1/dR08PS4VCgWKxSC6XM6LRVZP1ep39/X2zox3l4enUbYRxjvBP\\n\",\n       \"AZdFZA34EvAJEXma0SrqA+BXztXKHzLTPI0Wje6fl0wmTQsR7Wm0aHSKZ1SHJ785wq+cgy0XAl2f\\n\",\n       \"7fY0enjSonGfP9npdI4MT1HZYzoOu2E5hUwmw+LiotmgrFQqE1sHrVbLzGfa7TbNZpN2uz2xMRll\\n\",\n       \"7DaCBxEhm82yuLhIpVLhypUrLC0tUSqVcBzHbBv0ej3T3Uo3KYpSbdNJWNF40G3qC4UCS0tLrK6u\\n\",\n       \"srS0ZDxNOp0GmGiJ1mw2Q93ZalasaKZwkqfJZDLApGjcniYOWNFMwdvi1d2pUzOt7DbqcxmNFY0H\\n\",\n       \"ETnSf0b3BNanrejJrj5uJ0yHlgaBFc0U3KLJZrNmn0lvG7i7W+ljBK1oYo7X06TTaVKp1FTRuIcm\\n\",\n       \"K5qY4k7vdCdduROu+v0+7XabRqNBtVql0WjQbrft6inOpNNpHMcx5zZ5l9vu/Bl9BmWUNyi92Iiw\\n\",\n       \"B+1pHMcx0WCdDqE9jRaNbluvl91xEc2JnkZErovIv4jI/4jId0XkN8b3I91HeJqn0Tk0gIkGa09T\\n\",\n       \"r9cjVXZ7GqcNTz3gt5RSPw78JPCrIvI4Eesj7E2yyufz5ggefaLKtImwXnbbibALpdSWUuqt8esG\\n\",\n       \"8D3gGhHqI6yHI90OTbeodx/2pRsA6DhN3DnznEZEbgHPAP9BxPoIu2u1S6WS6TquRaPza7SniTtn\\n\",\n       \"Eo2ILAJ/A/ymUurA/RcX9j7C3m6duvGiWzTucxAsZ8vcSzMSzJ8rpXTr120RWVVKbYW9j/C01ZKu\\n\",\n       \"oNSCUUqZeYtO8Ww2m+YonrAfkDErp62eBHgZ+F+l1B+6vhWpPsKZTMZ06tQ72nq15C651amd7h40\\n\",\n       \"tVrNJGHFJbh3mqd5DvgF4G0RuTu+90VGfYP/SkR+CbgHfObcLDxndP6M4ziUy2UuX748kXAFGNHo\\n\",\n       \"I3jcgqlWq3S7XVOGGwdOFI1S6t843htFoo+wFo0+hN0tGj08DYdDE5txexktnKiceXBWYhkRdk9o\\n\",\n       \"dXtXLZpKpWKO35kWAa5WqxNDkq44iMNcRhNL0cBk1YHucqVFoyPAqVQKpZSpoNRlt97TbrVg4iKc\\n\",\n       \"WIpGJ1q5RaNjNJcuXZrqaZrNpinw157GXQwXF8FAjEWjhTPN0+jewHoi3O12TYH/tFrtOAkGYioa\\n\",\n       \"L1pAWkR6SBIRut0u1WqVvb09tre32dzcZG9vj0ajYQ4DixuxFI32Dvryns+klDKrJRFhZ2eHra0t\\n\",\n       \"Njc3WV9fNzvbnU7nUf8qj4RYikbjFY1bOJp+v29OU9nY2OD+/fsmwBfVAv/TiK1o9LDijrG4S1L0\\n\",\n       \"1el0jGg2Nze5f/8+vV7PXHEklqJxz0OGwyHNZpO9vT02NjZwHMcIRx/ytba2xtbWFvv7+zQajdgF\\n\",\n       \"87zEUjQavRFZr9fZ2NggkUhwcHBghipdorK5ucnOzg4HBweR7NY5K7EVjf4PHwwG1Go1RIRms8nW\\n\",\n       \"1taRw9f1lkGj0TClt3EVDICc9MuLyHXgz4AVRg2M/kQp9Uci8hLwy8Du+Ee/6G0LG5YcG50aoWub\\n\",\n       
\"3IeX6q/uOY7elIyDaJRSUxOIThPNKrCqlHprnIj134xSOz8DHCilvnLCZ6P/rxpxjhPNabvcW8DW\\n\",\n       \"+HVDRHSOMMSoj7BlkjMnvbpyhL89vhWbPsKWSc4kmvHQ9NeMcoQbxKyPsGWSE+c0YHKE/x74B0/K\\n\",\n       \"p/7+LeDvlFJPee7bOU3IOW5O4ytHOOp9hC0nc9rq6WPAvwJvM1pyA/we8AKjocn0EXbVQenPWk8T\\n\",\n       \"cnwtuefBiib8+BqeLJZpWNFYZsaKxjIzVjSWmbGiscyMFY1lZqxoLDNzbnEaS3SxnsYyM1Y0lpk5\\n\",\n       \"V9GIyB0ReVdEfiAiLwbwvHsi8raI3BWR//Tx+VdEZFtE3nHd893e9pjnvSQi62Mb74rInRmeF2gL\\n\",\n       \"3hOe59tG4Gi1YVAXkATeB24BaeAt4PE5n/kBUJnj8x9nlEj2juve7wO/M379IvDlOZ/3JeC3fdq3\\n\",\n       \"Cjw9fr0IfB943K+NJzzPt41KqXP1NM8C7yul7imlesA3gE8H8FzfaaZKqW8B+57bvtvbHvM88Gmj\\n\",\n       \"CrgF7wnP820jnO/wdA1Yc71f5/8N9osC3hCRN0XkC3M+S3Me7W3nToUNugVvkOm65yma81jLP6eU\\n\",\n       \"egZ4nlH39I8H+XA18uPz2j13Kqy3Be+8NgadrnueorkPXHe9v87I2/hGKbU5/roLvMpoCJyX7XGp\\n\",\n       \"js5InKu9rVJqR40BvjqrjSe14PVjo+t5f6GfN6+N5ymaN4GPisgtEckAn2XUStYXIuKISGH8Og98\\n\",\n       \"imDSTANtbztPKmzQLXjPLV13ntXMGWbvzzOasb/PqApznmd9hNEK7C3gu36eB3wd2AC6jOZbnwcq\\n\",\n       \"wBvAe8BrQHmO5/0io4rUt4HvjP9zr8zwvI8Bw/HveHd83fFr4zHPe34eG5VSdhvBMjs2ImyZGSsa\\n\",\n       \"y8xY0VhmxorGMjNWNJaZsaKxzIwVjWVmrGgsM/N/z4EQsKT2Kt0AAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f793819f790>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEn9JREFUeJzt3X+QXfVZx/HPJz9IsiSQwYRsAqmJChoyLQRbBiiFYquD\\n\",\n       \"TKFVaykqxerUcdraiC1Tyoz+pdPajlPsdHSmgrTQXyqttB2FghYDpZYU2A2QpIQ4RIH8MpjQ/Nqw\\n\",\n       \"SR7/uHfDZtnNnmdPvnvPDe/XTCb3nPvcc773fM8999lzzv0+jggBAABg4qZ0ugEAAADdjoQKAACg\\n\",\n       \"JhIqAACAmkioAAAAaiKhAgAAqImECgAAoKZpnVy5bcZsAAAAXSMiPNr8ogmV7Ssk3SJpqqRbI+Iv\\n\",\n       \"R8Zcd911r3rdmjVrdO655x41LzNeVnZsrcOHD5+wsSWNtZ3Xr1+vZcuWHTWv1Ps7ePBg5VhJGhwc\\n\",\n       
\"rBx74MCByrEvv/xykeUODAxUjj0eduzYoXnz5r1qvj3q8WNUU6ZUP/Gd6ev9+/cXic3GZ/o6s38e\\n\",\n       \"OnSocuxY/RERoz43bVr1Q/2MGTMqx5500kmVYzNtyOxDmVgpty9nvkvq9vW+ffvU09PzqvmZY1a2\\n\",\n       \"HZn3l/mslvy+zsj0ddXYYx2/i13ysz1V0uclXSHpHEnX2l527FcBAAB0n5L3UF0gaWNEbIqIQUlf\\n\",\n       \"l/TOgusDAADoiJIJ1RmSnhs2/Xx73rgWLFhQpEGYHKNdMkJ3GO2SA4Dypk+f3ukmoKaSCdWEL4z2\\n\",\n       \"9vYez3Zgks2fP7/TTcAEkVB1t8w9I2gWEqruV/Km9BckLR42vVits1RHWbNmzZHHCxYsIJkCAACN\\n\",\n       \"cPjw4co35JdMqB6VdJbtJZI2S7pG0rUjg0b+mg8AAKAJpkyZctQvSY/1a9xiCVVEHLT9YUnfVWvY\\n\",\n       \"hNsiYn2p9QEAAHRK0XGoIuIeSfeUXAcAAECndXSkdKn6IHJNGeht6tSpRWIzbc4MkJdpg1RuW5R6\\n\",\n       \"f9kbOTODFmZiZ86cWSQ28/4ygyxm4zPtyMRm9qHMdsvenN2EQXgz2y3zGZFyfT1r1qzKsZnPSGa5\\n\",\n       \"c+bMqRyb3RaZgSSbMBBwdpDazLE2+/1QYrmlvkekMgMHn3HG2IMVUMsPAACgJhIqAACAmkioAAAA\\n\",\n       \"aiKhAgAAqImECgAAoCYSKgAAgJpIqAAAAGoioQIAAKiJhAoAAKAmEioAAICaSKgAAABq6ngtv0xN\\n\",\n       \"o1Kydb86vdxua4OUq5+VkanVJOW2R6nYUrUYszXNMg4dOlRkuZn3l6l1l635lYnP1K/L1NDL7EPZ\\n\",\n       \"/f7gwYOVYzOf1UxsZh/au3dv5djsd0imHaViM/typgaiJPX09BRZdmZfznyuS35HZfrkeOQiRc9Q\\n\",\n       \"2V5s+wHba20/ZfsjJdcHAADQCaXPUA1KuiEi+m3PlvSY7fsjYn3h9QIAAEyaomeoImJrRPS3H++R\\n\",\n       \"tF7SopLrBAAAmGyTdlO67SWSVkh6ZLLWCQAAMBkmJaFqX+67S9LK9pkqAACAE0bxX/nZni7pG5K+\\n\",\n       \"HBF3j3y+v7//yOPe3l719vaWbhIAAMC4nn32WW3atKlSbNGEyq3fQ94maV1E3DJazHnnnVeyCQAA\\n\",\n       \"ABOydOlSLV269Mj0qlWrxowtfcnvzZJ+R9Lltvva/64ovE4AAIBJVfQMVUR8X4zGDgAATnAkOwAA\\n\",\n       \"ADV1vPRM1WHnm1JCJaNUuZVSy83KtCNbCqTUckvtR5myL6XaMDg4WGS5JWX2oUz5lGypnEwpl0z/\\n\",\n       \"ZUqMzJw5s3LsnDlzKsdKubIhGZk+yWzjkqVLSpU7ypR8mTt3buXYU089tXKslHt/mWPG7t27K8fu\\n\",\n       \"2VP9x/z79u2rHDswMFA5VsqVk8nsy2PhDBUAAEBNJFQAAAA1kVABAADUREIFAABQEwkVAABATSRU\\n\",\n       \"AAAANZFQAQAA1ERCBQAAUBMJFQAAQE0kVAAAADV1vPRM1dIhmRIVJUuzlGpHpixDJrYpZWqaUjoo\\n\",\n       \"sz0y5UtKleDIlE7IlmXIlJ3IbLdM6YtMbKa8T1amhFGpMjWZ8jAzZsyoHJtV6lhUqvRMtgxPpkRM\\n\",\n       
\"pvRM5hiwefPmyrEbN26sHCvljhmlvksy/ZfZ77PHgEx8ps1jrm+sJ2z/hqSQNNoRISLim1VWYHuq\\n\",\n       \"pEclPR8RV02olQAAAA12rPTtKrUSqrFUSqgkrZS0TlLuzwgAAIAuMWZCFRG/W3fhts+UdKWkv5D0\\n\",\n       \"J3WXBwAA0ETj3jhgu9f2bbbvbU+fY/v3Ky7/s5JulFT9Qi0AAECXqXIn5hcl3SdpUXv6GUk3jPci\\n\",\n       \"2++QtD0i+jT6fVgAAAAnhCq3wM+LiH+wfZMkRcSg7So/Z7hY0tW2r5Q0U9Iptu+IiPcND+rr6zvy\\n\",\n       \"uLe3VwsXLqzeegAAgEI2b96sLVu2VIqtklDtsf1TQxO2L5T00ngvioibJd3cfs1lkj42MpmSpBUr\\n\",\n       \"VlRqKAAAwGRatGiRFi1adGR6+EmgkaokVB+V9B1JP2P7B5LmS3r3BNrVjAGRAAAAjrNxE6qIeMz2\\n\",\n       \"pZJ+Xq17oZ6OiOojAraWsUrSqok1EQAAoNnGTahsz5L0QUmXqHWW6SHbfxsRuWGZAQAATlBVLvnd\\n\",\n       \"Ieknkj6n1hmq35J0p6TfLNguAACArlEloVoeEecMm/6e7XXHqwFV64mVrEmXqWfUBJm6YyVr6DWh\\n\",\n       \"Pl+27zLxme1cqs7c7Nmzi7Uh8/4ysZlaYvv27ascm6lVmFmuVK4+X6ZPMrUj9+7dWzk2K9OOjMw+\\n\",\n       \"lDne7969O9WOzPvLtGP//v2VYw8cOFA5NlMjUMq9v1LHuMxyMzX0st85mTZnagqOpcq7ftz2RUMT\\n\",\n       \"7V/5PVZ7zQAAACeIYxVHfnJYzMO2n1PrHqrXSXp6EtoGAADQFcYrjgwAAIBxHKs48qbh07ZPV2vE\\n\",\n       \"cwAAAAxTpTjy1bafkfSsWmNJbZJ0T+F2AQAAdI0qN6X/uaSLJG2IiKWS3ibpkaKtAgAA6CJVEqrB\\n\",\n       \"iNghaYrtqRHxgKQ3Fm4XAABA16gySMNO23MkPSTpK7a3S9pTtlkAAADdo8oZqndJ2ifpBkn3Stoo\\n\",\n       \"fgEIAABwRJXiyENnow5J+mLR1gAAAHShYw3suUetgTxHExFxyvFowPTp0yvFNaXcSimlSuuULKuT\\n\",\n       \"aXOp2EzZAqncfpSJLbUtMiVfsjLbORM7Z86cyrGnnFL9kJPdLzKlQDKxVUtrlYzNKlWSaObM6qPu\\n\",\n       \"ZEqGlCy5lJEp+ZLZhzIllyYSX1XmM1Wq5Eu2PEzV/EKqvn8++OCDYz53rHGoqhcRG4PtuZJulbRc\\n\",\n       \"reTs9yLih3WXCwAA0CRlKrq+4q8l/WtEvNv2NEknF14fAADApCuWUNk+VdJbIuJ6SYqIg5JeKrU+\\n\",\n       \"AACATilzMbllqaT/tX277cdt/53tnoLrAwAA6IiSCdU0SedL+puIOF/SXkk3FVwfAABAR5S8h+p5\\n\",\n       \"Sc9HxI/a03dplISqr6/vyOPe3l4tXLiwYJMAAACq2bJli7Zu3VoptlhCFRFbbT9n++yI2CDp7ZLW\\n\",\n       \"joxbsWJFqSYAAABM2MKFC4860dPf3z9mbOlf+f2RWuVqTpL0X5LeX3h9AAAAk65oQhURayS9qeQ6\\n\",\n       \"AAAAOq3kTekAAACvCaUv+Y2ragmFUmVAsvGlyhY0RRPKyWS2cbbsRKZ8QqlyMqXKnBw8eLBybDY+\\n\",\n       
\"8/4yfTJr1qzKsZnSJdkSFZkSOBmltnG2VFWmLEqmrE2m3FFmuaVKcWVljkU9PdVHBZo7d27l2KaU\\n\",\n       \"Uit1DMhs46aUGhtzfbWXAAAA8BpHQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFAABQEwkVAABA\\n\",\n       \"TSRUAAAANZFQAQAA1ERCBQAAUFPHS89ULa3RlNIzpcoAZIfUL6VUGZ5M7OHDhyvHZkqzZJUqfzFj\\n\",\n       \"xozKsZkSFdmySJntnCkbkumTTOmSPXv2VI49+eSTK8dK5UrgZMqRZMrDZLablOvrzLEoU2IkU4Yn\\n\",\n       \"E5vZblJuX85st8y2yMRmyyiVWnYmNtMne/furRy7f//+yrFS2dJdoyl6hsr2J2yvtf2k7a/arv5N\\n\",\n       \"AgAA0CWKJVS2l0j6gKTzI+L1kqZKem+p9QEAAHRKyUt+P5E0KKnH9iFJPZJeKLg+AACAjih2hioi\\n\",\n       \"/k/SX0n6H0mbJe2KiH8rtT4AAIBOKXnJ72cl/bGkJZIWSZpt+7dLrQ8AAKBTSl7ye6OkH0TEi5Jk\\n\",\n       \"+5uSLpb0leFBa9euPfJ4/vz5Ov300ws2CQAAoJodO3boxRdfrBRbMqH6saQ/tT1L0oCkt0taPTJo\\n\",\n       \"+fLlBZsAAAAwMfPmzdO8efOOTG/YsGHM2JL3UK2RdIekRyU90Z79hVLrAwAA6JSiA3tGxKclfbrk\\n\",\n       \"OgAAADqN0jMAAAA1kVABAADU1PFaflVrbmXqqpWqwSblat2VbEdVmVpUUq7N2RpaJZab3caZ+FK1\\n\",\n       \"CgcGBirHZmpRZWv5NaF+ZKbNmbp4JesaZvpvx44dlWN3795dOTZT11A6PnXKRjN9+vTKsZkaiJm6\\n\",\n       \"cZn+kPLHxKoyn6dMPc+S+3Kmjl7muFyqbmNWpk8y+/JYOEMFAABQEwkVAABATSRUAAAANZFQAQAA\\n\",\n       \"1ERCBQAAUBMJFQAAQE2NTKi2bNnS6Saghq1bt3a6CZggPnvdbefOnZ1uAiYoM8QGmqmRCRVfyN1t\\n\",\n       \"27ZtnW4CJojPXnfbtWtXp5uACSKh6n6NTKgAAAC6CQkVAABATe5keRTbna/NAgAAUFFEjFqDrqMJ\\n\",\n       \"FQAAwImAS34AAAA1kVABAADU1LiEyvYVtn9s+xnbH+90ezA2239ve5vtJ4fNO832/bY32L7P9txO\\n\",\n       \"thFjs73Y9gO219p+yvZH2vPpw4azPdP2I7b7ba+z/cn2fPqui9iearvP9nfa0/RfF2tUQmV7qqTP\\n\",\n       \"S7pC0jmSrrW9rLOtwjHcrlZfDXeTpPsj4mxJ/96eRjMNSrohIpZLulDSh9qfN/qw4SJiQNLlEXGe\\n\",\n       \"pDdIutz2JaLvus1KSeskDd3MTP91sUYlVJIukLQxIjZFxKCkr0t6Z4fbhDFExEOSRg7NfLWkL7Uf\\n\",\n       \"f0nSuya1UagsIrZGRH/78R5J6yWdIfqwK0TEvvbDkyRNVeuzSN91CdtnSrpS0q2Shn41Rv91saYl\\n\",\n       \"VGdIem7Y9PPteegeCyJiaKj0bZIWdLIxqMb2EkkrJD0i+rAr2J5iu1+tPnogItaKvusmn5V0o6TD\\n\",\n       \"w+bRf12saQkVYzicQKI1Jgd92nC2Z0v6hqSVEbF7+HP0YXNFxOH2Jb8zJV1q+/IRz9N3DWX7HZK2\\n\",\n       
\"R0SfXjk7dRT6r/s0LaF6QdLiYdOL1TpLhe6xzXavJNleKGl7h9uDY7A9Xa1k6s6IuLs9mz7sIhHx\\n\",\n       \"kqR/kfSLou+6xcWSrrb9rKSvSfol23eK/utqTUuoHpV0lu0ltk+SdI2kb3e4Tcj5tqTr24+vl3T3\\n\",\n       \"MWLRQbYt6TZJ6yLilmFP0YcNZ3ve0C/AbM+S9MuS+kTfdYWIuDkiFkfEUknvlfS9iLhO9F9Xa9xI\\n\",\n       \"6bZ/VdItat1keVtEfLLDTcIYbH9N0mWS5ql1vf/PJH1L0j9Kep2kTZLeExG7OtVGjK39q7AHJT2h\\n\",\n       \"Vy4tfELSatGHjWb79WrdtDyl/e/OiPiM7dNE33UV25dJ+mhEXE3/dbfGJVQAAADdpmmX/AAAALoO\\n\",\n       \"CRUAAEBNJFQAAAA1kVABAADUREIFAABQEwkVAABATSRUADrO9sPt/3/a9rXHedk3j7YuADieGIcK\\n\",\n       \"QGPYfqtagxxelXjNtIg4eIznd0fEnOPRPgAYC2eoAHSc7T3th5+S9BbbfbZX2p5i+zO2V9teY/sP\\n\",\n       \"2vFvtf2Q7W9Jeqo9727bj9p+yvYH2vM+JWlWe3l3Dl+XWz5j+0nbT9h+z7Bl/4ftf7K93vaXJ3dr\\n\",\n       \"AOhG0zrdAADQK6VvPi7pY0NnqNoJ1K6IuMD2DEnft31fO3aFpOUR8d/t6fdHxM52bbvVtu+KiJts\\n\",\n       \"fygiVoyyrl+XdK6kN0iaL+lHth9sP3eepHMkbZH0sO03RwSXCgGMiTNUAJrEI6Z/RdL7bPdJ+qGk\\n\",\n       \"0yT9XPu51cOSKUlaabtf0n9KWizprHHWdYmkr0bLdkmrJL1JrYRrdURsjtY9Ef2SltR4TwBeAzhD\\n\",\n       \"BaDpPhwR9w+f0b7Xau+I6bdJujAiBmw/IGnmOMsNvTqBGzp7dWDYvEPiWAlgHJyhAtAkuyUNv4H8\\n\",\n       \"u5I+aHuaJNk+23bPKK87RdLOdjL1C5IuHPbc4NDrR3hI0jXt+7TmS7pU0mq9OskCgHHxVxeAJhg6\\n\",\n       \"M7RG0qH2pbvbJX1Orcttj9u2pO2Sfq0dP/wnyvdK+kPb6yQ9rdZlvyFfkPSE7cci4rqh10XEP9u+\\n\",\n       \"qL3OkHRjRGy3vWzEsjXKNAAchWETAAAAauKSHwAAQE0kVAAAADWRUAEAANREQgUAAFATCRUAAEBN\\n\",\n       \"JFQAAAA1kVABAADUREIFAABQ0/8DDbzgL6GT1HEAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f793811c290>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGlxJREFUeJztnVls49t93z+Hi7iJu0Rq44xm5l4bAVzAfnEfnKB5CIJr\\n\",\n       \"FEjSlwYGChTpgj50Q/vQJH1o/JgGaFC0D0Xb2EE3JC1auEgKtLUNtIj70DYOfGNf9/pO7r3SSENR\\n\",\n       
\"JCVS3ClS5OmD+Ds+5FALN4nS/L/AwZ/L8Mwh/1/9zm8/SmuNAweTwHXfC3Dw8OCQxsHEcEjjYGI4\\n\",\n       \"pHEwMRzSOJgYDmkcTIypSaOUek8p9SOl1J8opX55notysNxQ0/hplFJu4CPgZ4As8IfAV7TWH853\\n\",\n       \"eQ6WEdNKmi8CH2ut97XWXeB3gZ+f37IcLDM8U35uGzi0nr8G/rT9D5RSjqv5gUNrrca9Pq2kcQjx\\n\",\n       \"FmNa0mSBjPU8w6W0cfAWYFrSfBd4Vym1q5RaAX4R+L35LcvBMmMqnUZrfaGU+hvAfwfcwNccy+nt\\n\",\n       \"wVQm960mdhThB495K8IO3mI4pHEwMRzSOJgYDmkcTAyHNA4mhkMaBxPDIY2DiTFtwPLRQ6kfuyjc\\n\",\n       \"brcZHo8Hl8tlhtvtRmtNv99Ha43WmouLC3q9HhcXF1xcXJjXAR5DyZBDmjFQSpnhcrkIhUKsrq6a\\n\",\n       \"q9/vJxAImKuQo9vtcnFxQbVaNaNSqdDr9ej1evT7fXq93n1/vZnhkGYMbMK43W5WV1dZW1szIxqN\\n\",\n       \"Do3z83Pa7Tbn5+ecn5+Ty+XI5XIcHx/T6XTodDqGVA5pHimEMKOkyWQy7OzskEqlWF9fN6PZbNJo\\n\",\n       \"NMw1Go3i9XrpdruUy2Uzb6/XQyn14LeomUijlNoHqkAP6GqtvziPRd03bCnj9XpZXV0lmUyyvb3N\\n\",\n       \"8+fP2djYIJ1Os7GxQSqVotFo0Gg0qNfrNBoN+v0+zWaTcrnM6uqqIcrFxcV9f7W5YFZJo4Gf1lqX\\n\",\n       \"5rGYZYBsTUKYlZUVQqEQsViMtbU1Njc3icfjrK6u4vV6gUtFeWVlhUAggFKKSCRCNBolHo+TSCRw\\n\",\n       \"u90AXFxc0G63325JM8DYSOhDhi1lfD7fEGk2NjYIhUKEQiFWVlaAH5NGKYXH4yESiRCLxYjH4yST\\n\",\n       \"SSNlWq3WPX+z+WAekubbSqke8M+11v9yDmu6d7hcLjweDx6PZ4g06+vrbG5u4vV68Xg8eL1etNa4\\n\",\n       \"3W5DGJ/PZ0iTSCRIJpN0u13a7Ta1Wm3IlH+omJU0X9Ja55RS68C3lFI/0lp/Zx4Lu2vIzXS5XHi9\\n\",\n       \"Xvx+P6urq0NSQ7abUYjSLPMEg0FjnofDYarVKj6fz2xTDx0zkUZrnRtci0qpb3BZ2vLgSCM3XaRF\\n\",\n       \"NBo15nUqleLJkyckk0mCweB9L3UpMEuFZVApFR48DgE/C/xgXgu7S4i15PF4WFlZIRKJkE6n2d3d\\n\",\n       \"5bOf/SyZTIa1tTUCgcCN87wNmEXSpIFvDH4oD/DvtNbfnMuq7hC2eS2kiUajbGxssLu7yzvvvMP6\\n\",\n       \"+rojaSxMTRqt9R7w+Tmu5c5gb0di+cgIBoPGStrZ2WF3d5dwOGzCB9dBa/2GN1kUZjHffT6fiUfJ\\n\",\n       \"Z0bHsuOt8wgrpfD7/UPxo2AwSDAYJBAIEA6Hef78Odvb26ytrREOhwkGg7dSZGV7sq2uaDRKu92m\\n\",\n       \"2+3S7/dRStHpdExAs9fr0e12Tdyq2+0uPXHeOtK4XC78fv9Q7CgSiRAOhwmHw0SjUXZ2dgxpIpGI\\n\",\n       \"kRK3tX7cbjd+v9+QxiaMx+Oh3W7T6XQ4Pz+n0+nQarVot9u0Wi263e6Cf4HZ8daRRiRNNBo1MSQx\\n\",\n       
\"p2VITEkkjWw1N5FGtqdRSWMTxufz0Wg0aLVatFotms3mg/MYv5WkCQQCxGIx0uk029vbJJNJMxKJ\\n\",\n       \"hAkDRCKRqZRfj8eD3+8nHA5zfn4+5PsJhUJDcap6vW7CEeI1lhjVsuo4bx1pXC4XwWCQRCLBzs4O\\n\",\n       \"z58/N9tTJBIxRJnGGSc6jdfrJRgMmq0mEAgQiUSGIuH2tVgscnx8jMfj4eLigvPz86FErmUjzltH\\n\",\n       \"GvHYrq2tsbOzw4sXL4YSqvx+Pz6fD5/Ph8cz3c8jpAFYWVkxEkf0GNmW5BoOh3G73SY63mg0TG5O\\n\",\n       \"r9dzSHPfkEy8ZDJpJI3EmcREttM7J4HoNEIar9dLKBQymXsiPYQ4ovyKGd5qtSiXy4YkvV6PTqez\\n\",\n       \"iJ9hJrwVpLFzev1+P5FIhGQyycbGBpnMZceUWXJ4hSzy2O1243K5TBRcIP9GpIhk/ImEqdVqlMtl\\n\",\n       \"4+tZ1sj4oyeN2+025nQ4HGZtbY3d3V3W19cJBoMmQcq+Tgo7sdxOMJdhk9Z2+glB4/E4Ozs79Pt9\\n\",\n       \"/H4/2WyW168v2/3U6/WlSxF99KRxuVxEIhE2NjbY2Nhge3t7iDTAG8SZFFprkzg+bsj25/V6h+Jc\\n\",\n       \"8jgejxvCJBIJVldX0VpTr9c5Pj6e908yMx49aUTSbG5u8uLFC54/f87Ozs5CJI3oLfJYrjKv6Ehi\\n\",\n       \"govESSQSBAIBkskkmUwGr9drCLOM6RQ3kkYp9XXgzwIFrfWfGryWAP498BTYB/681vpsgeucCHLj\\n\",\n       \"7Uy6dDrNs2fP+MxnPkMikTA3ahqMbj8SBpAhiq+tAPf7feDNNAyllEncEjSbTV6/fk0sFjPkWqbY\\n\",\n       \"1G0kzW8D/xT419ZrvwJ8S2v9G4PG078yGPcOsV5kSxDfi9QtiQ9Git5GFeDb3JTRkhWxhGSIlSQj\\n\",\n       \"EAiY+JbEuGT4/f43pJsksycSCTY3N/H5fEP/pxDwvnAjabTW31FK7Y68/HPAnxk8/lfA/2SJSOPx\\n\",\n       \"eMwNiUajRgmWGyaksS0e+3oTzs/PqdVqVKtVarXakHe30WgMkebi4oJQKGQi5aurq8RiMWKxGAA+\\n\",\n       \"n28saUKhkCGN2+2mWq0CLIUJPq1Ok9Za5weP81zm1iwFRNJIuqaEA0TSBAIBVlZW8Hq9M0maWq3G\\n\",\n       \"yckJp6enVCoVU00pFZVSddnr9Yy3WdbSbrcBjPl/HWk2NjaMTtTpdJbCmppZEdZa62XpryfWiF12\\n\",\n       \"kkgkiMViRCIRQ5rbShr7ua1TNJtNzs7OKBaL5HI5SqUS5XKZs7MzSqXSUAig1+sN5RnHYjG01kaP\\n\",\n       \"6ff7byi7Xq+XcDhMMplka2vLzNVsNqlUKm+Y9XeNaUmTV0ptaK2PlVKbQGGei5oEdjWkOO+SySSb\\n\",\n       \"m5vGxM5kMqRSKaLRKD6fzyiXkgw+CttcFq9st9s1JbavX78mm82aIduUbFm25WQPqUoIBoPEYjGa\\n\",\n       \"zabZbkQ5drlc+Hw+otEo6XSaXq9nzHVZc61We8M5CHfXXGBa0vwe8BeBfzi4/ue5rWgKiN9D3PeJ\\n\",\n       \"RILt7W2ePXvG06dPTapDJBK5NWnson47wFiv1zk8POT169fm2mw2h2JJNumk5qnT6dBut02saW1t\\n\",\n       
\"jUajQafTeSP1wufzEYvF6PV6ZiuV8IbL5aJUKpkGA7JGwV0Q5zYm9+9wqfSuKaUOgX8A/DrwH5RS\\n\",\n       \"f5mByb3IRd6wvqFqSCHN1tYWL1684MWLF0MeYYley1/2KMTnIjda9JezszMzDg8POTg4MEMkkIxR\\n\",\n       \"k1wkQqvVol6vk0gkODs7M5JGAqOjksbr9RKJRFhZWTEE11oby08IfdeR8NtYT1+54q2fmfNapoZd\\n\",\n       \"SSCk2dzcZHd3l3fffdfk6MpfrO3HEdg3WdITRDKcnZ1xcnLCyckJxWLRkGV/f5+Dg4Mhb/A4JVXM\\n\",\n       \"8kajgd/vJ5VKUalUqNfrnJ+fm3waIcbKygoej4dwOEy/3zfv2zpMr9ej0WiYP4C73KIehUfY7idj\\n\",\n       \"u+ltolwnWUTEi95ydnZGuVw2yu3oNZ/PUyqVzFZ0k1IqZJJtpF6vUyqVyOfzHB4eEo/HjSS0Qw39\\n\",\n       \"fh+Xy2WSxjY2Nsx36Ha71Ot1yuWykYzifV40cR4FaQRXVQLI86tCBKKgik6Sy+U4OjoyPWbEFyPX\\n\",\n       \"s7MzKpUKzWbT3CghzzjITZXHtVqNUqnE8fGxybXp9XrGqSdrFZdAIBAgHo+brEORMqVSiUAgQKfT\\n\",\n       \"Md/tunXMC4+GNKOSxiaN/d4o7OL8er1OpVIhl8uxt7fH3t4e+/v7Q7kvrVZryDsr29F1N8oOaF5c\\n\",\n       \"XAyRRspihDB2/Euufr+fWCxmrp1Oh3K5TC6XIxAI0Gq1zP9xF3jwpBklwyhpxmXf2QFKiR1Jgb7c\\n\",\n       \"jL29PT788EM++uijoe1rmmoB0XcEsj0FAgFDbOmBI+SziSPebVl7q9Uil8sRjUaNs1K2v7uo8nzw\\n\",\n       \"pJkGtu+k0+lQLBYpFArk83mOj485ODigWCxSq9WGPLvzEvsi2arVqgl1yJZXrVYJBoNDmYTLVu77\\n\",\n       \"1pHG9ptIzdHJyQnZbJbDw0MODw/J5XIUi0Xq9bqJWo9Ki1lgk8btdhOPxymXy1SrVer1OsDMecqL\\n\",\n       \"xPKtaMGwTepWq0WtVqNYLJLNZvn000/59NNPjT+mVquZisd5KpjdbpdWq2WSyePxuJE0tVrNmN52\\n\",\n       \"s6RlwltHGsCQptFoUKlUjKTZ29vj5cuXQ4ruIioeRdL0+32j1NqkEY/1ysrKlUQdp9zfFbkeJWmu\\n\",\n       \"+yH7/T7VatW0bT06OmJ/f59sNku5XDalJrIl3ee6bVLYmYXSDmV9fZ1MJoPH4zHEsy26ReFRkgbe\\n\",\n       \"tKoEWmuq1SpHR0e8fPmSTz75hGKxSLFYNKQR5XfRpBlH7nHrHk1JlVqqVCpFJpMxFmOn06FarS68\\n\",\n       \"HvzRkWYcUewUCJE02WyWjz76iA8++MC0dG02myYz7i6cZLK2cWGN6ySNpFWkUikT5RbCXBWEnSem\\n\",\n       \"zRH+KvBXgOLgn/2q1vq/LWqR02Lcnt/v96lUKoY077///ht5M/eJ20oaaWAgfqZqtUqhULiTRPTb\\n\",\n       \"0PK3gfdGXtPAb2qtvzAY90YY+y9vfX2ddDpNPB4nFApda66K1LmPhCZJRw2HwyQSCZOcJRmG4rAT\\n\",\n       \"AowmiYlUEasvl8tRLpdNWGPRuJE0+rJbZ3nMW0thB0pALxqNsr6+zsbGBolEglAoZFq2jhv3CSFN\\n\",\n       
\"JBIxlRE2aezk93HZhUKaQqHAwcEBR0dHQwHURWOWDfBvKqX+WCn1NaVUbG4rmhB2kyIhzW0kzX3C\\n\",\n       \"6/UOkSYejw81V7pJ0pyfnxtJc3h4OESapZA0V+CfAc+47LmXA/7R3FY0Iez67IcmaSQPWCRNNBod\\n\",\n       \"kjS2fnKdpMlms5yenppzGRa+/mk+pLU2OcFKqd8Cfn9uK5oQotNIInkymSQSiZhg4H2tScxgO4dZ\\n\",\n       \"EqYSiQTr6+tsbW2RyWSMHhYIBMZaP6P6l/Tnk5jYXVp7MCVplFKbetB4Gvhz3GP/YHF2SasySWjy\\n\",\n       \"+/33VtIqKaj2GQt2QlgymSSVSg2RRlIfrktBHW1ZctdkEUyTI/xrwE8rpT7PpRW1B/y1ha7yGowr\\n\",\n       \"WRG94L4kjUgW+3wFaZYk1RLpdJqtrS2ePHliiH6dpLFrqUTS2Jl6d7n1Tpsj/PUFrGUqjEoaqdH2\\n\",\n       \"+/33uj3ZSWB221nJYRZJ8+TJE6N/XRWctJPdbcKMugzuCstpXkwIW4cQPeI+I8M+n8+U4MqhGlJT\\n\",\n       \"HolEePr0Kdvb24bgkkhuVxzYhGg0Gqauqlarsbe3Z3wzrVbLHHd4V7GyR0GaZYM4G9fW1ox1ZI9U\\n\",\n       \"KkUqlSIWiw0RRgg/WmQnllKhUKBYLPLq1StjMUkZzLwTxa6DQ5oFQOqWUqmUOfPSHuFw2BztI6a1\\n\",\n       \"LR1HKyQqlQr5fJ6DgwNevXrF0dERx8fHxjfT6XTmmiR2ExzSLAB2WW0mk2F7e5utrS0z7C5Y44KV\\n\",\n       \"dsmLOPLy+Tz7+/u8fPmSQqFgEsXsLudLbXI7GIa0MBFFN5PJ8OTJEzY3N0mlUsTj8aH+xDdFovv9\\n\",\n       \"viFMs9mkXq9TrVY5Ozvj9PSUs7Mz0zb2PlrGPkrS3LUSHAqFTFt88UrLibrJZJJoNGpaxN4GYikJ\\n\",\n       \"aaSGXBoM1Ot1c0jHQ+oasdS46x8yFAqRSqXY3d1ld3fXHFsoZy2IlLltvq9IGrv+W6SNlPOKz8Yh\\n\",\n       \"zQPF6uoqqVSKZ8+e8bnPfc7Ej+RUXemGPomksbcn2+QWSTNJE6Z541GSZpxyab832ilr9H3x3IoX\\n\",\n       \"1/b9jJvz3Xff5Z133uHJkyek0+mhz/r9ftMu5LrSYBvtdptyuUw2m+Xo6IhXr15RKBRMKufS99x7\\n\",\n       \"SBiXOjl6kyTsEAwGTWrC6Pv29hKLxYba3o+LZ0kDpc3NTRKJhCGJHW+ynXc3EafdbnNycsKrV694\\n\",\n       \"+fIl2WyW4+Nj0zDpvvGoSDOK6yRNMBg0YQcbbrebra0ttre32d7efuMc7nGhCdvbG4lEhoKVoxHu\\n\",\n       \"20gaKeA7ODjgww8/pFAomO1p6UmjlMpw2Qo2xWVw8l9orf+JeiB9hEcfy/PrJI3H4yGTyZgt5/nz\\n\",\n       \"50NnXI6edwAYCSTkuGo9t7XqhDT7+/v88Ic/NH327tKBdx1ukjRd4O9ord9XSq0Cf6SU+hbwSyxJ\\n\",\n       \"H2HpNNVoNCiXy5yenhKJRNBa4/V68fl8Q/9eKWVM5KdPn74xn9vtNrGh9fV10wBaAopXKbOjVQOy\\n\",\n       \"tttgtHl1qVQyVpJ0qVgmXEsarfUxcDx4XFdKfQhss0R9hPv9Pu12m0qlQrFYJBaL0e12TUbf6Mlw\\n\",\n       
\"LpeLcDhMOp1Ga004HH7j/fX1ddPYUdIuRYkdR4TRagGYrMVsp9MxZTT1ep1isUi1Wh1qwrhMuLVO\\n\",\n       \"oy4bUH8B+D8sUR/hUdKsrq6a9hx263iBUopwOIzWmlAoRDr946XLjbcj1NLv7jp9xCbMNJJG+gOX\\n\",\n       \"SiVKpRLFYpFKpbK051neijSDrek/AX9ba12zfzyt77ePcL/fNx0YisXiUKdyaZpow+VyGT9KKpUa\\n\",\n       \"ynyT66gCK1iUpOl2u9RqNU5PTzk+Pn74kkYp5eWSMP9Gay2tX5emj7DoNNJdSvrTSRPEi4uLIR+L\\n\",\n       \"Umqom6bMYV9nWYsQxz7/abRFrH1I2Pn5OScnJ+TzeZP+IA2tm83mw5M06vJX/Rrw/7TW/9h6a2n6\\n\",\n       \"CEtJqijCfr+f9fX1ofiMLTkWGZeSue1On5KmaSeDS4TabgApXc/L5TLFYvFO65gmxU2S5kvAXwC+\\n\",\n       \"r5T63uC1X2WJ+gjLX269XjfnKJ2dnb0R1LvLjlKjOb2jkuX4+JijoyMz7IM55FqtVh8mabTW/4ur\\n\",\n       \"a6OWoo+w1AFJzY/b7aZSqZgD06WrpuSvLHot9vYkEkZiSNI9NJ/Ps7e3x8cff8wnn3xCo9Ew/XCE\\n\",\n       \"6DIeHGkeAuzOVoBptpjP58lms0YxtmNBtt9lnsnnIslsE3o0v7dWq5mO57lcjkKhQLvdNucvCFHs\\n\",\n       \"U+mWDY+CNPJXrZSi1WpxenpKNpvF7/fT7XZNaqVc7bGIioXz83MqlQqlUonT01Ojq4jeIo0hT05O\\n\",\n       \"3sjxtfsSLysePGkAE4+RDt9iRQE0Gg2i0agpsJcqTGnZMer8mwWyPQlp8vk8R0dHpmmSJIaLxKnX\\n\",\n       \"66b+etTCWoby4avw4Eljm7WiT5RKJQBzOHoymTRVAc1mE8B0k1oEhDR2Vwdp15bL5Ux7NluyyHex\\n\",\n       \"v9ey4sGTBob9LHa7VVGS7Vzber1uXpPsOEldsHv32qkQdinsOBMahmNP0skhl8uRz+cpFotmm5KT\\n\",\n       \"5x4yHgVpbEiBfKvVAjBZ/XIqW6lUolarUalUKJfLpFIpkxAuB6OK4uz3+00vO/tgU/v8J5Fctjkv\\n\",\n       \"JnUul+Pk5MSYz+M81A8Rj5Y0UnQmUqZarbKyskIgEBjK7C8UCibZSq4Sm5K2rEJC0UVEsRXnnA2l\\n\",\n       \"FKenp0NDSLPoBop3hUdJGtlK2u320FGFkoAl5zcVCgXW1tZIp9NDTQ/FGSjnEYxKqnw+b0ahUDAK\\n\",\n       \"sMB20tVqNeOjua/qgXnj0ZEGuNZctQ/VEgki3ctF75EjkMXiEokiQ6wgGaOQKgK52r2JHwMeJWmu\\n\",\n       \"g2xfzWZzqG5aTpArFAqmikB0HPvMbTnexx4C27knyrb4YKRA35E0DxBiUQFmC2s2m5TLZQKBgOni\\n\",\n       \"4PP5THqn3HyJHdlSRHJe7O1p9IhlGcvssJsE6jrmX5Mj/FVu6CN8nzk2N2FUzxmXCG6/P5rmYB+h\\n\",\n       \"PI4IVzUaWmaH3ThorcdGeG8izQawYecIA7/AZVS7prX+zWs++3B+HQdjcRVpps0RhiXpI+zg7nHr\\n\",\n       \"XAErR/h/D15aij7CDu4etyLNYGv6j1zmCNdZoj7CDu4e1+o0YHKE/wvwX0dSPuX9XeD39eCwDet1\\n\",\n       
\"R6d54LhKp7lW0lyVIzxIJhfcax9hB3ePm6ynnwT+APg+lyY3wN8HvsLl1mT6CFt1UPJZR9I8cExl\\n\",\n       \"cs8ChzQPH1NtTw4cjINDGgcTwyGNg4nhkMbBxHBI42BiOKRxMDEc0jiYGAvz0zh4vHAkjYOJ4ZDG\\n\",\n       \"wcRYKGmUUu8ppX6klPqTQRfQWefbV0p9Xyn1PaXU/53i819XSuWVUj+wXksopb6llHqplPrmJLlB\\n\",\n       \"V8z3VaXU68Eav6eUem+C+TJKqf+hlPqhUuoDpdTfmmWN18w39RqB8fms8xiAG/gY2AW8wPvAT8w4\\n\",\n       \"5x6QmOHzP8VlItkPrNd+A/h7g8e/DPz6jPP9GvB3p1zfBvD5weNV4CPgJ6Zd4zXzTb1GrfVCJc0X\\n\",\n       \"gY+11vta6y7wu8DPz2HeqdNMtdbfAcojL/8cl21tGVx/Ycb5YMo1aq2PtdbvDx7XAbsF78RrvGa+\\n\",\n       \"qdcIi92etoFD6/lrfrzgaaGBbyulvquU+qszziVYRHvbmVNh592Cd57puoskzSJs+S9prb8AfBn4\\n\",\n       \"60qpn5rn5PpSjs+67plTYUdb8M66xnmn6y6SNFkgYz3PcCltpobWOje4FoFvcLkFzor8oFRHMhJn\\n\",\n       \"am+rtS7oAYDfmnSN17XgnWaN1nz/VuabdY2LJM13gXeVUrtKqRXgF7lsJTsVlFJBpVR48DgE/Czz\\n\",\n       \"STOV9rYwh/a2s6TC3qIF70RrXFi67izWzC209y9zqbF/zGUV5ixzPePSAnsf+GCa+YDfAY6ADpf6\\n\",\n       \"1i8BCeDbwEvgm0Bshvn+EpcVqd8H/nhwc9MTzPeTQH/wHb83GO9Nu8Yr5vvyLGvUWjthBAeTw/EI\\n\",\n       \"O5gYDmkcTAyHNA4mhkMaBxPDIY2DieGQxsHEcEjjYGI4pHEwMf4/w2zPGHuGeikAAAAASUVORK5C\\n\",\n       \"YII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7f79380a2550>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEqlJREFUeJzt3X2QXXV9x/HPJ5unTUICFKoFdjcWYitBa6gygAkP1RbK\\n\",\n       \"aLStBWmr1HbsdNSaUmUMzLR/tSPIdKSO085YqAI+QdWiTiuBVioLYiKQ8BTkoSNNAoWkFDbPybJ8\\n\",\n       \"+8e9Cctmb/Z89+S3997wfs0w3nPud3/nd8/v3JOv55z7+zoiBAAAgMmb1u4OAAAAdDsSKgAAgJpI\\n\",\n       \"qAAAAGoioQIAAKiJhAoAAKAmEioAAICaprdz47aZswEAAHSNiPB464smVLbPl3SNpB5J10bEVWNj\\n\",\n       \"Lr/88gP+bnBwUMuWLXvVumnTql9Ms8f9rIdEpu1SsZl9MX16bogz8a1iV61apfPOO2/S7Zb8fJm2\\n\",\n       
\"MzLj9/LLL1eOzcwT19vbWzlWkubMmXPAuptvvlkXXnjhAevnzZtXq91WZs6cWTl27969lWN37dpV\\n\",\n       \"OVaSdu7cWaQfmbHOxLbaxzfddJMuuuiiA9YfccQRlds+6qijKsceeeSRlWPnz59fOTZzLM+ePbty\\n\",\n       \"rCTNmDEjFV/Vnj17KseOd7xdeeWVWrly5QHrt2/fnurHjh07Ksfu3r27SOzw8HDl2FLnw6yq5/Bz\\n\",\n       \"zjmn5XvFbvnZ7pH0BUnnSzpZ0sW231RqewAAAO1S8hmq0yQ9GRFPRcSwpG9Iem/B7QEAALRFyYTq\\n\",\n       \"eEkbRy1vaq6bUH9/f5EOYWqceOKJ7e4CJmnx4sXt7gJqYPy619KlS9vdBdRUMqGa9M3OgYGBQ9kP\\n\",\n       \"TLGTTjqp3V3AJPEPcnc75ZRT2t0FTBIJVfcr+VD605L6Ri33qXGV6lUGBwf3v+7v7yeZAgAAHWHt\\n\",\n       \"2rVat25dpdiSCdW9khbZXijpGUkXSbp4bNDYX/MBAAB0giVLlmjJkiX7l6+//vqWscUSqoh4yfbH\\n\",\n       \"Ja1SY9qE6yLi0VLbAwAAaJei81BFxPclfb/kNgAAANqtrTOlS9Un9RoZGancZnbyr1KT73VCbGa/\\n\",\n       \"SdJLL73UVbElP19mP5f6fJnJAjMTTmbbLtWPUpNkltwXmQkOM+1mPl9JpSa/7enpqRybmfA1299S\\n\",\n       \"3+tOON9L5SZGzrSbmTz1UEwm3UqpCZdboZYfAABATSRUAAAANZFQAQAA1ERCBQAAUBMJFQAAQE0k\\n\",\n       \"VAAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADU1PZaftnaPOicml+ZfmTqK2ZiM7W2\\n\",\n       \"OkWmJlamFlUmViq3n0sdF6VqzGVl6syVOr8NDw8Xiy9VZy5zDO3cubNIu1nZurBVzZo1q3Jsb29v\\n\",\n       \"qu1M/OzZsyvHZvqcUWofS2VqrF511VUt3yt6hrLdZ/sO24/Yftj2J0puDwAAoB1KXx4alnRpRKyz\\n\",\n       \"PU/SfbZvj4hHC28XAABgyhS9QhURz0bEuubr7ZIelXRcyW0CAABMtSl7KMH2QklLJK2eqm0CAABM\\n\",\n       \"hSlJqJq3+74paUXzShUAAMBho/hP7GzPkPQtSV+JiFvGvn/nnXfufz0wMKCBgYHSXQIAAJjQhg0b\\n\",\n       \"tGHDhkqxRRMqN37HfZ2k9RFxzXgxZ511VskuAAAATEp/f7/6+/v3L999990tY0vf8nuHpD+QdK7t\\n\",\n       \"tc3/zi+8TQAAgClV9ApVRNwlZmMHAACHOZIdAACAmtpe96VTyqhUVaqESqk+lGw7M3bZsihVlSxd\\n\",\n       \"lCl1kont6ekp0m72uzQyMlI5NrOfS5WeybS7d+/eyrFSrkRFpu0ZM2ZUjs2UDJk/f37lWCl3zGWO\\n\",\n       \"i0yJmB07dlSO3b17d5FYKTd+mX0xd+7cyrGZ8cuOdaZETOY7NTQ0VDl2165dlWMz47dnz57KsVJu\\n\",\n       \"/A7Fv6tcoQIAAKiJhAoAAKAmEioAAICaSKgAAABqIqECAACoiYQKAACgJhIqAACAmkioAAAAaiKh\\n\",\n       \"AgAAqImECgAAoKauKT1TqpxFSaXKapT8fKVKxHRCaRYp9/lKjV+pEirDw8OVY7PxmRIOpUr2ZMYu\\n\",\n       
\"U0pGyo1JptxKph+Zz5c97jNjXarsSyY2c7xlSvZIudIsmTHJlNbZsmVL5ditW7dWjpVyY505PjPn\\n\",\n       \"w8zxOXPmzMqxmVJOUtl/S8bTMqGy/TuSQtJ4R1RExLerbMB2j6R7JW2KiPdMqpcAAAAd7GBXqN6j\\n\",\n       \"RkLVSqWEStIKSeslHVG1UwAAAN2kZUIVEX9Yt3HbJ0i6QNLfSPqLuu0BAAB0oglvGtp+ve3rbN/a\\n\",\n       \"XD7Z9h9XbP9zki6TVP0BBQAAgC5T5SmsL0u6TdJxzeUnJF060R/ZfrekzRGxVuM/hwUAAHBYqPIr\\n\",\n       \"v2Mi4ibbKyUpIoZtV/lpwJmSltu+QNJsSfNt3xARHxoddNddd+1/3d/fr/7+/uq9BwAAKGRoaKjy\\n\",\n       \"Ly2rJFTbbf/cvgXbp0samuiPIuIKSVc0/+ZsSZ8am0xJ0tKlSyt1FAAAYCotWLBACxYs2L/89NNP\\n\",\n       \"t4ytklB9UtL3JP2i7R9JOlbS+yfRr86YHAoAAOAQmzChioj7bJ8l6ZfUeBbqsYhIzSAYET+U9MPJ\\n\",\n       \"dREAAKCzTZhQ2e6V9FFJS9W4yjRo+x8iovq0twAAAIexKrf8bpC0VdLn1bhC9XuSbpT0uwX7BQAA\\n\",\n       \"0DWqJFSLI+LkUcs/sL3+UHWgav2cUjXYsvGl6uhNn169rGLJml+l2s60W6qeYFapsc7Ulzr66KMr\\n\",\n       \"x2ZqYmXjMzW0MnXYMvXdMnXHsrX8MmMye/bsyrFz5sypHDtv3rzKsdnvSOa7WqpuY6lzXKYOYza+\\n\",\n       \"VC2/Xbt2VY7N1ugsNX6ZGoiZ2My5JVvLL3PMVY1dvnx5y/eq7M37bZ+xb6H5K7/7Km0ZAADgNeBg\\n\",\n       \"xZEfGhVzt+2NajxD1S/psSnoGwAAQFeYqDgyAAAAJnCw4shPjV62/fNqzHgOAACAUaoUR15u+wlJ\\n\",\n       \"P1NjLqmnJH2/cL8AAAC6RpWH0v9a0hmSHo+IN0h6p6TVRXsFAADQRaokVMMR8b+SptnuiYg7JL2t\\n\",\n       \"cL8AAAC6RpWJF16wfYSkQUlftb1Z0vay3QIAAOgeVa5QvU/STkmXSrpV0pPiF4AAAAD7VSmOvO9q\\n\",\n       \"1IikLxftDQAAQBc62MSe29WYyHM8ERHzD0UHqpaeyEyR343lVjpFptxKpoRDpt1u3G+Z0iWZ0hDb\\n\",\n       \"tm2rHJstt5LZz5kSDpmSNqXazZRxkXLlPTJlQ4aGhor0IVuOpNT5s9SYZMr7lCxHktkXmfNhJjZT\\n\",\n       \"nknKlcDJHEeZPmfOh5ljKFteq0TpmYO20eqNiMidkcZh+0hJ10parEZy9kcR8eO67QIAAHSS+inZ\\n\",\n       \"wf2dpH+LiPfbni5pbuHtAQAATLliCZXtBZKWRcQlkhQRL0mqfv0bAACgS+QeNsp5g6Qttr9k+37b\\n\",\n       \"/2h7TsHtAQAAtEXJhGq6pFMl/X1EnCpph6SVBbcHAADQFiWfodokaVNE/KS5/E2Nk1Ddc889+1+f\\n\",\n       \"cMIJ6uvrK9glAACAap544gk9+eSTlWKLJVQR8aztjbbfGBGPS3qXpEfGxp1xxhmlugAAADBpixYt\\n\",\n       \"0qJFi/Yvr1q1qmVs6V/5/Zka5WpmSvovSR8uvD0AAIApVzShiogHJL295DYAAADareRD6QAAAK8J\\n\",\n       
\"pW/5TSg7lXwV2dIlpUqdZEvgdIJMeYGMUqVnMiUOpHKlgzL7befOnUVi9+7dWzlWypWqyXy+zD6e\\n\",\n       \"M6f6TCqZciTZ80omPnMs79mzp3JsZjwyfZBy45c5jjKfL9Nu5rtXsvRMJjZzLGf6XPIc1wmxJcvw\\n\",\n       \"ZI6j7HdqPN33Lz4AAECHIaECAACoiYQKAACgJhIqAACAmkioAAAAaiKhAgAAqImECgAAoCYSKgAA\\n\",\n       \"gJpIqAAAAGoioQIAAKip7aVnSpV9yShVFqVUGZdS0/p3ikyJiuHh4VTbIyMjbY/NlDnJlFuZO3du\\n\",\n       \"5Vgp1+fMfs6UI8mU1tm6dWvl2EwZEEnq7e0t0vb8+fMrx2bOQyXLDGX2RabdTNmQTLtZpc4vme9T\\n\",\n       \"pvRM5hwg5cavVGyp8ky7du2qHCuVK6PUStErVLYvt/2I7Ydsf832rJLbAwAAaIdiCZXthZI+IunU\\n\",\n       \"iHizpB5JHyi1PQAAgHYpectvq6RhSXNsj0iaI+npgtsDAABoi2JXqCLi/yT9raQNkp6R9GJE/Hup\\n\",\n       \"7QEAALRLyVt+J0r6c0kLJR0naZ7t3y+1PQAAgHYpecvvbZJ+FBHPS5Ltb0s6U9JXRwcNDg7uf93f\\n\",\n       \"36+BgYGCXQIAAKhm48aN2rRpU6XYkgnVTyX9pe1eSbslvUvSmrFBy5YtK9gFAACAyenr61NfX9/+\\n\",\n       \"5dWrV7eMLfkM1QOSbpB0r6QHm6u/WGp7AAAA7VJ0Ys+I+Kykz5bcBgAAQLtRegYAAKAmEioAAICa\\n\",\n       \"2l7Lr2rNn0xtoEzs4S5by69U7b9S41fy802fXv3rkanPl/l8mfpg2X2RqQmZqWGZaTcTm6kllhkP\\n\",\n       \"SZo1q3pVrMy+eP755yvHDg0NVY7dvn175VipXG28TJ25TGxmPDI1AqXcd6rUeStzDGXr3WY+X+Y4\\n\",\n       \"2rZtW+XYTvn3OnN+mTdvXv3t1W4BAADgNY6ECgAAoCYSKgAAgJpIqAAAAGoioQIAAKiJhAoAAKCm\\n\",\n       \"jkyoNmzY0O4uoIaNGze2uwuYJMauu23evLndXcAkPfPMM+3uAmoiocIhV7UyNzoPY9fdtmzZ0u4u\\n\",\n       \"YJJIqLpfRyZUAAAA3YSECgAAoCa3s0yLbWrEAACArhER49YDamtCBQAAcDjglh8AAEBNJFQAAAA1\\n\",\n       \"dVxCZft82z+1/YTtT7e7P2jN9j/Zfs72Q6PWHW37dtuP277N9pHt7CNas91n+w7bj9h+2PYnmusZ\\n\",\n       \"ww5ne7bt1bbX2V5v+zPN9YxdF7HdY3ut7e81lxm/LtZRCZXtHklfkHS+pJMlXWz7Te3tFQ7iS2qM\\n\",\n       \"1WgrJd0eEW+U9B/NZXSmYUmXRsRiSadL+ljz+8YYdriI2C3p3Ih4q6S3SDrX9lIxdt1mhaT1kvY9\\n\",\n       \"zMz4dbGOSqgknSbpyYh4KiKGJX1D0nvb3Ce0EBGDkl4Ys3q5pOubr6+X9L4p7RQqi4hnI2Jd8/V2\\n\",\n       \"SY9KOl6MYVeIiJ3NlzMl9ajxXWTsuoTtEyRdIOlaSft+Ncb4dbFOS6iOlzS69sWm5jp0j9dFxHPN\\n\",\n       \"189Jel07O4NqbC+UtETSajGGXcH2NNvr1BijOyLiETF23eRzki6T9PKodYxfF+u0hIo5HA4j0ZiT\\n\",\n       
\"gzHtcLbnSfqWpBURsW30e4xh54qIl5u3/E6QdJbtc8e8z9h1KNvvlrQ5ItbqlatTr8L4dZ9OS6ie\\n\",\n       \"ltQ3arlPjatU6B7P2X69JNn+BUlUa+1gtmeokUzdGBG3NFczhl0kIoYk/aukXxVj1y3OlLTc9s8k\\n\",\n       \"fV3Sr9m+UYxfV+u0hOpeSYtsL7Q9U9JFkr7b5j4h57uSLmm+vkTSLQeJRRvZtqTrJK2PiGtGvcUY\\n\",\n       \"djjbx+z7BZjtXkm/LmmtGLuuEBFXRERfRLxB0gck/SAiPijGr6t13Ezptn9T0jVqPGR5XUR8ps1d\\n\",\n       \"Qgu2vy7pbEnHqHG//68kfUfSzZL6JT0l6cKIeLFdfURrzV+F3SnpQb1ya+FySWvEGHY0229W46Hl\\n\",\n       \"ac3/boyIq20fLcauq9g+W9InI2I549fdOi6hAgAA6DaddssPAACg65BQAQAA1ERCBQAAUBMJFQAA\\n\",\n       \"QE0kVAAAADWRUAEAANREQgWg7Wzf3fzfAdsXH+K2rxhvWwBwKDEPFYCOYfscNSY5fE/ib6ZHxEsH\\n\",\n       \"eX9bRBxxKPoHAK1whQpA29ne3nx5paRlttfaXmF7mu2rba+x/YDtP2nGn2N70PZ3JD3cXHeL7Xtt\\n\",\n       \"P2z7I811V0rqbbZ34+htueFq2w/ZftD2haPa/k/b/2z7Udtfmdq9AaAbTW93BwBAr5S++bSkT+27\\n\",\n       \"QtVMoF6MiNNsz5J0l+3bmrFLJC2OiP9uLn84Il5o1rZbY/ubEbHS9sciYsk42/ptSb8i6S2SjpX0\\n\",\n       \"E9t3Nt97q6STJf2PpLttvyMiuFUIoCWuUAHoJB6z/BuSPmR7raQfSzpa0knN99aMSqYkaYXtdZLu\\n\",\n       \"kdQnadEE21oq6WvRsFnSDyW9XY2Ea01EPBONZyLWSVpY4zMBeA3gChWATvfxiLh99Irms1Y7xiy/\\n\",\n       \"U9LpEbHb9h2SZk/QbujABG7f1as9o9aNiHMlgAlwhQpAJ9kmafQD5KskfdT2dEmy/Ubbc8b5u/mS\\n\",\n       \"XmgmU78s6fRR7w3v+/sxBiVd1HxO61hJZ0laowOTLACYEP+vC0An2Hdl6AFJI81bd1+S9Hk1brfd\\n\",\n       \"b9uSNkv6rWb86J8o3yrpT22vl/SYGrf99vmipAdt3xcRH9z3dxHxL7bPaG4zJF0WEZttv2lM2xpn\\n\",\n       \"GQBehWkTAAAAauKWHwAAQE0kVAAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIF\\n\",\n       \"AABQ0/8DEfw5JxfRlIgAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792bd10b10>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAFYRJREFUeJztnVuMZHldxz+/ut+7q+89072zs8Oa8EACL/gARB4IWWIi\\n\",\n       
\"+qIhMRpE44OiURMRHwSjD0gCMb4QlV2Dl4BGAwETFTAa8cHLml12UXbZTRimZ/tW3VXVdb//fej6\\n\",\n       \"/ffUmeqeruq6nJo5n+Skbl2nf931rd//8rscMcbg4zMKgXkb4LN4+KLxGRlfND4j44vGZ2R80fiM\\n\",\n       \"jC8an5EZWzQi8oyIvCIir4nIxyZplI+3kXH2aUQkCLwKvA94A/hv4EPGmO9O1jwfLzKup3kn8Lox\\n\",\n       \"5q4xpg18Cfjg5Mzy8TKhMd93E9hzPL4P/LDzB0TE32pecIwxMuz5cT2NL4jHmHFF8waw63i8y7m3\\n\",\n       \"8XkMGFc0zwNPi8iTIhIBfgr46uTM8vEyY81pjDEdEfll4J+AIPCsv3J6fBhryX2lE/sT4YVn0hNh\\n\",\n       \"n8cYXzQ+I+OLxmdkfNH4jIwvGp+R8UXjMzK+aHxGxheNz8j4ovEZGV80PiPji8ZnZMZNwgJARO4C\\n\",\n       \"JaALtI0x75yEUdNGROwRCAQIBoOEQiGCwSDBYJB2u02n07G3s7AnEAhYmwCMMWhc0H1/3lxLNJwn\\n\",\n       \"Y73XGJOfhDGzQEQIhUL2CIfDLC0tkclkyGQyJBIJ8vm8PQqFwtRtCoVCRCIRwuEwkUiEXq9Ht9ul\\n\",\n       \"2+3S6XTs416vR6/Xm7twrisagKGRUC8TDAaJRCLEYjHi8ThbW1tsb2+ztbXF6uoqd+/e5e7du7Tb\\n\",\n       \"7ZmIJhgMWlsSiQTdbpd2u02r1aLVatFutxERK6B5MwlP800R6QJ/bIz50wnYNFXU00SjURKJBOl0\\n\",\n       \"mu3tbe7cucNb3vIWbt68STwep91uk8/PxoGGQiFisRjpdJpMJkOn06Fer9NoNOywBXhCMHB90bzL\\n\",\n       \"GHMgIuvAN0TkFWPMtyZh2DTRDymRSLC0tMT6+jo7Ozs89dRT3Lp1i0KhwBtvvEE8Hp+pPalUimw2\\n\",\n       \"S6fTIRaLUa/Xqdfr1Go16vU6xhja7fZiD0/GmIP+bU5Evsx5aYunRSMihMNh4vE4mUyGbDZLJpMh\\n\",\n       \"Ho8TDocfmJDOglAoZO1ZXV214tCjWCySz+cREZrN5tw9ztiiEZEEEDTGlEUkCbwf+N2JWTYlVDSJ\\n\",\n       \"RMKKJpVKDYhm1sJxDk8rKysEg0G7Yur1euRyOUSERqNBsViciU2X2nuN924CX+7/Y0PAXxljvj4R\\n\",\n       \"q6aIiBCJRB7wNIlEgkgkMnMvA4OeZmVlhVgsZrcCAoEA0WiUZrNJsVgkEJj/1trYojHGfB94+wRt\\n\",\n       \"mQnDPE06nZ7r8BQIBAiHw8RiMZLJJMlkkmg0SjQaJRKJ0G63OT09JR6PW/vmOa+ZxJJ7IVARqKfR\\n\",\n       \"SfDKygrpdJpYLEYoNJ9/R6/Xo91uU6/XqVQqBINBwuEwwWCQeDxOPB4nGo3a5wKBgB2+5iGe+fu6\\n\",\n       \"GeAUTCAQIBKJkEwmB0QTj8fnJpput0ur1aJer1Mul2k0GvR6PTtsOUUTCoXm4g2dPBaiAQbCBupp\\n\",\n       \"nMPTPD2NUzSVSoVGo0G32yUYDJJIJIZ6mnkK57EYnlQs7rnD0tIS2WzWzh0AG2+a5XZ9r9ej0+nQ\\n\",\n       \"arVoNBq0220AuzWgoolEIlY0ML841CMvGp34RiIRIpEI6XSa5eVlG29Kp9MD3/RGo0GhUKBSqdgP\\n\",\n       
\"bxY2qqg1DhWNRu0GpIpGg6o6p5kXj41oNK6jgtEjnU5TLpft0HB2dkY+n6dardJqtWZmozPaHg6H\\n\",\n       \"iUaj1uZhw1Ov1/PnNNPCKZpUKsXy8vKAp0mlUoRCIVqtFmdnZxwdHVlPM0vRiIgVzTBPo1Fw95xm\\n\",\n       \"HjySnsa5WgoGg0SjURvXWVtbI5vNWi+j8SUdlg4ODsjn8zMdnnRY0mFUBaORb53P6Mpp3qunR040\\n\",\n       \"KhQ94vE4a2tr3Lx5kxs3bnDjxg1u3rzJ0tISgUCAer3O2dkZJycn7O/vs7e3x9HREaVSiWazOROb\\n\",\n       \"3bGn5eVlksmk3aH2Go+caODNfJlIJEIqlbKieeqpp3jiiSdYXV0lk8kQCASo1WpWNAcHB9y7d49i\\n\",\n       \"sThX0SwtLQ2ENbzGIycazZfR+FI6nbapD3fu3OHOnTtWUCJiPU0ul7OeRlMRfE8znIeKRkSeA34U\\n\",\n       \"ODbGvK3/3Arw18At4C7wk8aY+YdfeVM0zr2YtbU1bty4wa1bt7h9+7bNiGu1WpTLZQqFAicnJxwd\\n\",\n       \"HbG/v29TK7vd7szsdQYsve5prrJ6+jPgGddzvwV8wxjzQ8A/9x97gkAgQDweJ5vNsr29ze7uLhsb\\n\",\n       \"GywtLRGPxxERarUauVyOu3fv8sorr3Dv3j1yuRzVanUmubi6xNY8Zc0iTKVSAxP0cDi8mKLpZ+K5\\n\",\n       \"E2V/DPhC//4XgB+fsF1jEwgESCQSZLNZtra22N3dZXNzk+XlZWKx2AOiefXVV7l37x4nJydUKhUr\\n\",\n       \"mmkGA937Mrono6LR/J5QKORJ0Yw7p9k0xhz17x9xnlvjCS7zNE7RnJyc8IMf/IDXX3+d4+Nj62mm\\n\",\n       \"LRgYXOEN8zSpVMq+7lxae0VA154IG2PMvPvrOf+poVCIZDLJysoKW1tb7Ozs2NVSLBYDsKK5d+8e\\n\",\n       \"r732GuVymUqlYoenaaNeRvdldOdXE8vj8fhA6sMshDwK44rmSES2jDGHIrINHE/SqFHQIKSuiDKZ\\n\",\n       \"DGtra6yurrK6uko2myWRSBAMBm0gslarUa1WqVQqNoTQarVmIhjAbjam02nS6fQDw6eWsGjwtFwu\\n\",\n       \"U6vVbDBTa6LmlSs8rmi+Cvws8Af9269MzKIRERGi0ajNeFtZWWFtbc0KJ5vN2jyUbrdLo9EYEI2G\\n\",\n       \"C1qt1sw+hEgkYld1a2trbG9vs7y8bCfqnU6HRqNBo9GgXq9TKpWoVqs0Gg06nc7Mo/BurrLk/iLw\\n\",\n       \"I8CaiOwBvwN8CvgbEfkI/SX3NI28DM2h1bjS+vr6A55G/8FaT+QWjX5zZ+lpMpkMGxsb7OzssLm5\\n\",\n       \"STabtcNnp9Oh2WxSrVYpl8vW0zSbTetpPC0aY8yHLnjpfRO2ZSw0fdMZW1LR6J6HfmtbrdYDgqlU\\n\",\n       \"KjO31yma3d1dtra2hnqaSqVCsVjk7OzMJmepaOaZ7rnwO8LO4UlFs7S0NLCjqpt4xWKRk5MTTk9P\\n\",\n       \"Zx6QdB7OVNONjQ1WVlasvQCtVotSqcTx8TGHh4ccHBxQKBSo1WpzFww8AqJxDk8qmuXl5YEdVf0Q\\n\",\n       \"NL6kUexZpT44N/JCoZBdWq+urrKxsWFrr9TeZrPJ2dkZx8fH7O3tWZt1dTfvldTCi0Y9jVM0F3ka\\n\",\n       
\"FY16mlmLRld4bk/jDBsANJtNSqUSuVxuQDROTwN+uufYaKL4VUSTy+Xmmi+jnSqSyaSNM62vr5NM\\n\",\n       \"Jm0+DbwpmuPjY+7fv8/R0RHFYvEB0cyLhRSNsymRe0c1nU7bb20wGKTX69FoNGwkex6eJhwOk0ql\\n\",\n       \"yGQytuGAM5Kt6ZvaVkRXTaVSiWKxaPeSvFD8DwsoGmcpim7sOWM3GuzTD8MYQ6PRGJjT6A7wrDyN\\n\",\n       \"esLV1VXW1tasaBKJhK3q1AZG3W7Xru6cotGVky+aMVHBuGM3usPq9DS6fHV6mmazaTf0ZoGKZmVl\\n\",\n       \"he3t7Qc8DTDQxGiYp9FNPV80Y+IM+A3zNDrhVLevnkZFM2ucotna2npg3tVutwdKc1U0Z2dndo/G\\n\",\n       \"SyykaODBWqFwOGyPXq9nd371n1+r1WbSdHEYaqOKW8tRtOhNBdNoNKhWqwOxMC94FjcLKZqLunNq\\n\",\n       \"xn6z2aTRaNBsNsnn83blMas5jBv36slZwwRviqbZbNrApJdFs3B1T27BuNMMnJ5G0zjPzs6o1+tz\\n\",\n       \"9zSaBuEsR3FGtTWYqiulWcXCRuWhohGR50TkSERedjz3SRG5LyIv9A93OujUcQpHd1rdoikWi+Ry\\n\",\n       \"Oc94Gq1nusjTuIcnr0x83YybI2yAzxpj3tE//nHypg3HmZikyd9aOF+r1QbEod0h3AVnWqU4rUw4\\n\",\n       \"Z7WkVndqDbmGOZLJpM0B7na7dmhyp0EspGguyBGGOfYPVtHo3oaKplqtWtE4A5mzFo27mF9Fk81m\\n\",\n       \"B5bb0WjUikaX2qVSiUqlYocoL3KdOc1HReTbIvKsiCxPzKIrcJFo1NN0Oh0byEwmkyQSCWKx2AP1\\n\",\n       \"0LP0NJrvo2XBzjCH29NoGsTCepoL+Bxwm/OeewfAZyZm0RVwDk9u0VSr1Qs9jWbwOSeh0+Ci4ckZ\\n\",\n       \"hR8mmmq1+kDujBdFM9aS2xhjc4JF5PPA1yZm0dV+/4C30d1UTbbSzT3dC1lbW2NnZ4dqtWpLcfVo\\n\",\n       \"NBoj/35n501nFwdnaENtiEQi3Lp1i42NDVtlMEyszhyZeac+PIyxRCMi29p4GvgJ4OXLfn6SGGNs\\n\",\n       \"d0u3t2k2mzSbzYFWqvF4nPX1dWq1GsYYEokExWLRHuVyeWQbnBuJejjzZbTzg97evn2bjY0Nksnk\\n\",\n       \"lf4+rzNOjvAngPeKyNs5X0V9H/jFqVrpQr+J6tqdQ1Sj0bB9eLWt6vr6Or1ej2g0yvLyMsfHx/YY\\n\",\n       \"5/oH2gZED6dXcbYK0d+/vb3N5uYmqVRqaDvXhz32GuPmCD83BVuujDMJySka9TTdbteKJp1OY4yx\\n\",\n       \"gllfX7dJT7qSGhWdXOttPB5/QEjOQ7tuJZPJC+dR7mHJy8JZyDCCE61jKhQKHB4eEo1G7Ra8czkL\\n\",\n       \"5x4im80OTJSXl0df+KkX0Vv3cKSHvq6T8YtqszudzkDLk0KhQLVapdlselI8j4xo8vk8+/v7ADZq\\n\",\n       \"LCL0ej27UtK5R6/Xs00bNzY2Rv6d6qH01jmncXazcovoopaz7XbbCv/o6IjT01PK5bInLp4xjIUX\\n\",\n       \"jSYt6VVKNGajy15tCJBIJAZax2vfmnFzapzDiDMGNmz1pIK6aOXUarWoVCpWNMVi0fbH8T3NFFBP\\n\",\n       
\"o1n89XodYGAVo3MaTbtMp9PX6lnnvL6lc9dWz+eMgzmHpIt+n3qaYrHI8fEx5XLZFu/5opkCWjnZ\\n\",\n       \"bDZtADOXy9lhqFqtks1mWVlZsVdccV7kdJyJsPYb1lu3PSpMvXUvz93icU7onc2vvXC9ymEsvGiA\\n\",\n       \"gckuwMnJifVAp6enLC0tDbSBdX+Io2CMsZWZmmvsRvOA9dA6c80Jvui8zgucenmDb+FFo99SeLNd\\n\",\n       \"fLfbtYLRchHtTp5OpwfmNpqjOwqFQmFgg9DNzs4OTzzxBLVajV6vRzabxRhDKBQikUgM9TTu6L1X\\n\",\n       \"BQOPkGhUOCJiwwVa4pJKpQYO3VfRmNSo5HI5Tk5O7OHm6aeftoJRUapgLvs7VDBeXDE5WXjRwOBG\\n\",\n       \"mN5XEWnE2zn3ce7WjuppjDEUi0UKhQKlUolarfbAz2jgVFM13IX7btzpoM45jRez9x4J0VyGUyyA\\n\",\n       \"jSg7Y0ajUqlULs0EdMbDdOmse0fDcF8dxtnUyItD1SMvGsDmBmsVoztCPSrOi6wPw52yoambVxFN\\n\",\n       \"IpGwO9Z6noUSjYjsAn8ObHAenPwTY8wfebmPsBvnnKfVag1MQsfdp3lY+oJbNM5mRMNw1m8lk0m7\\n\",\n       \"E+z0kF7iYZ6mDfyaMeZFEUkB/yMi3wA+zHkf4U+LyMc47yPsmV7CbmYdBBx1JeTOQnTu1XiRS32z\\n\",\n       \"MebQGPNi/34F+C5wEw/3EfYSV/Vk7moE55DmtaEJRpjTiMiTwDuA/8TDfYS9xrA+wG4xuYvltEbL\\n\",\n       \"iysnuKJo+kPT3wG/aowpO/9oL/QRXgQuiz/p/MVZLAd4dlf4KsVyYc4F8xfGGG39eiQiW/3X59pH\\n\",\n       \"eFG4bF7lruPy+q7wpaKR86/Fs8D/GWP+0PGS9hGGOfcRXgQWIRtvFB42PL0L+GngJRF5of/cx/FQ\\n\",\n       \"H2Evc9HwsujiuVQ0xph/52Jv5Ik+wj6z57HYEZ41zs4WFyV7OYesRah1cuKLZko4W6G4xeMWipcn\\n\",\n       \"vcNYuP40i8CwHjoX5dA4V0qLIhxfNFPiKp7GmT+zSMLxh6cpoBde1YbY6XSaWCxm+xprIFIPZ9t9\\n\",\n       \"XzSPKe5unnrBDG1Rq4VxpVKJUqnE/v6+bWPvi+YxRa9/kM1m2dzctL2N9UJl1WqVQqFALpfj+PjY\\n\",\n       \"XmWlXq/7onlc0foq9TTRaNRWZGrSe6FQ4ODggPv37w94Gq+mQzjxRTMDms2mjVp3Oh329/e5f/8+\\n\",\n       \"e3t77O3tDVRV+p7mMUWzBOv1uq3Jdiab64W/9NAk9Uaj4YvmccXZQ0+vcZDP5+2Ry+XsfEavB67V\\n\",\n       \"mgsvmktyhD8J/DyQ6//ox2fZFtbrqKdR0ZyenrK/v8/BwYGdv+TzeQqFAvl83g5d87xs8iiMmyOs\\n\",\n       \"fYQ/O3ULF5BGo0GxWOTw8JBMJsPJyQlHR0ccHh5yeHg4sNyuVqsLIRQnD4tyHwKH/fsVEdEcYZhj\\n\",\n       \"H2GvU6vVyOVyhMNh2u02pVKJQqFgD683l34Y4+QI/wfneTYfFZGfAZ4HfsOrJSzzoFqtcnx8TLPZ\\n\",\n       \"pFAoDPQ4rtVqD62b8jpyFaX3h6Z/BX7fGPMVEdngzfnM7wHbxpiPuN6zeF+hCaGVm1rF6bxYvLMr\\n\",\n       
\"hNfrto0xQ0eTh4qmnyP898A/uFI+9fUnga8ZY97mev6xFc2jwkWiGStHuJ9Mrsy0j7DP/LnU04jI\\n\",\n       \"u4F/A17ifMUE8NvAhzhvcW/7CDvqoPS9vqdZcMYensbFF83iM9bw5OMzDF80PiPji8ZnZHzR+IyM\\n\",\n       \"LxqfkfFF4zMyvmh8RmZq+zQ+jy6+p/EZGV80PiMzVdGIyDMi8oqIvNbvAnrd890VkZdE5AUR+a8x\\n\",\n       \"3v+ciByJyMuO51ZE5Bsi8j0R+foo1xi/4HyfFJH7fRtfEJFnRjjfroj8i4j8r4h8R0R+5To2XnK+\\n\",\n       \"sW0Ehl/adxIHEAReB54EwsCLwFuvec7vAyvXeP97OE8ke9nx3KeB3+zf/xjwqWue7xPAr49p3xbw\\n\",\n       \"9v79FPAq8NZxbbzkfGPbaIyZqqd5J/C6MeauMaYNfAn44ATOO3aaqTHmW0DB9fTY7W0vOB+MaaOZ\\n\",\n       \"cAveS843to0w3eHpJrDneHyfNw0eFwN8U0SeF5FfuOa5lGm0t/2oiHxbRJ4dZbhzMukWvK503WvZ\\n\",\n       \"OE3RTGMt/y5jzDuADwC/JCLvmeTJzbkfv67dnwNuc55vdAB8ZtQTuFvwXtfG/vn+tn++ynVtnKZo\\n\",\n       \"3gB2HY93Ofc2Y2OMOejf5oAvcz4EXpeJtrc1xhybPsDnR7Vx0i14Hef7Sz3fdW2cpmieB54WkSdF\\n\",\n       \"JAL8FOetZMdCRBIiku7fTwLvZzJpphNtb3udVNhJt+CdWrrudVYzV5i9f4DzGfvrnFdhXudctzlf\\n\",\n       \"gb0IfGec8wFfBPaBFufzrQ8DK8A3ge8BXweWr3G+n+O8IvUl4Nv9D3dzhPO9G+j1/8YX+scz49p4\\n\",\n       \"wfk+cB0bjTF+GMFndPwdYZ+R8UXjMzK+aHxGxheNz8j4ovEZGV80PiPji8ZnZHzR+IzM/wMn9Av6\\n\",\n       \"T5UJ3wAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792bca2290>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEr9JREFUeJzt3X+wXGV9x/HPJ79vQoJaaCwYmx83psJohYqDv6IUwwRG\\n\",\n       \"0LZWpa1S27HTUWtK1RGZKX+1amU6UsdpZ6zUH6hIqxZ1KDEpgkGthB8JEBJ+hAktYE1aCprLzQ03\\n\",\n       \"4ds/dhOvl7u5z/eePHd3w/s1w7Bn97vPefY8Z8/95pyzz9cRIQAAAEzdjG53AAAAoN+RUAEAADRE\\n\",\n       \"QgUAANAQCRUAAEBDJFQAAAANkVABAAA0NKubK7fNnA0AAKBvRIQner5qQmV7raQrJM2U9NmI+Jvx\\n\",\n       \"MZdffvkz3rdhwwadc845Nbs2Zb0wb5c94VhOaMaM3EnIo9H29ddfr3PPPbdRP0plxyMTf+DAgSqx\\n\",\n       
\"o6OjVWJnz55dHNspfuPGjVqzZs0znh8YGChud86cOcWxmf1teHi4OHZoaKg4VpL27t1bHLtv377i\\n\",\n       \"2Mz4ZfahTmN966236owzznjG83Pnzi1ue/78+cWxCxcuLI6ttQ/NmpX7MzZz5szi2MzxYmRkpDh2\\n\",\n       \"on2o09+97L6c2T8zfc7EPvXUU8WxBw8erBJby9VXX93xtWqX/GzPlPRpSWslnSLpQtsvrrU+AACA\\n\",\n       \"bql5D9UrJO2MiIciYlTSVyW9qeL6AAAAuqJmQnWypIfHLD/Sfm5SK1asqNIhTI/BwcFudwFTtHz5\\n\",\n       \"8m53AQ2cdNJJ3e4Cpoi/e/2vZkI15ZuN2LH628qVK7vdBUwR373+dvLJRf9mRQ/iu9f/at6U/qik\\n\",\n       \"JWOWl6h1luoXbNiw4fDjFStWsFMBAICesHv3bu3Zs6cotmZCdZuklbaXSvqxpLdJunB8UK/+mg8A\\n\",\n       \"ADy7LV68WIsXLz68vG3bto6x1RKqiDhg+32SvqPWtAlXRsSOWusDAADolqrzUEXE9ZKur7kOAACA\\n\",\n       \"buvqTOlS+SRrmQnWspM9ZiYLe/rpp4tjMxP1ZdrN9DczsaDUGxNU1oqV6m27TLuZSe8yk+nt37+/\\n\",\n       \"OFaqNwFg5vNl+lxrG0vNJ2XsJPP5MseAmjKTrdaaZDgzEWl2Ys+MzJj0QqyUm7g0s+0y7c6bN684\\n\",\n       \"NjPWmQlfpdw+dzQmn6aWHwAAQEMkVAAAAA2RUAEAADREQgUAANAQCRUAAEBDJFQAAAANkVABAAA0\\n\",\n       \"REIFAADQEAkVAABAQyRUAAAADZFQAQAANNT1Wn6ltaCORp2dTjL1jDK1qzIy7WZqFdbqb02Zz1ez\\n\",\n       \"/lmtOlCZ/S3TbqbWllRvO2draZbKfL7s8SLzPZk9e3ZxbKamWWa7ZWtY1qqZWKv255NPPlkcm63b\\n\",\n       \"mPl8tWrIZvahTK07qXfq6JWqWdcws3+Wxq5bt67ja1XPUNleYvtG2/fY3mb7/TXXBwAA0A21z1CN\\n\",\n       \"Sro4IrbaPk7S7bY3RsSOyusFAACYNlXPUEXETyJia/vxkKQdkk6quU4AAIDpNm03pdteKuk0SbdM\\n\",\n       \"1zoBAACmw7QkVO3LfV+TtK59pgoAAOCYUf1XfrZnS/q6pC9FxLXjX1+/fv3hx4ODgxocHKzdJQAA\\n\",\n       \"gEk98MAD2rlzZ1Fs1YTKrd8iXylpe0RcMVHM2rVra3YBAABgSlauXKmVK1ceXh57Emi82pf8Xi3p\\n\",\n       \"DySdZXtL+z8yKAAAcEypeoYqIr4vZmMHAADHOJIdAACAhrpeeiYzNXwvqFXqpFaJg2wZkF4p+3Is\\n\",\n       \"q1VGKVuOJFOCI9N25jud6UMmdmRkpDhWyn2+zH6fGetMyZAFCxYUx0q5sj2Zz5fZzvv27SuOzZSe\\n\",\n       \"GR4eLo6V6pWemT9/fnHsc5/73OLYRYsWFcdKubI2tcY6U+ooE5s9xmWORUejZBZnqAAAABoioQIA\\n\",\n       \"AGiIhAoAAKAhEioAAICGSKgAAAAaIqECAABoiIQKAACgIRIqAACAhkioAAAAGiKhAgAAaKjrpWdm\\n\",\n       \"zSrrwtGYFr4T21Vie0G2zEnm85WOnZQrhzBnzpzi2Llz5xbHZvuRKdeR2c6ZPmfGI/sdybSd2W4D\\n\",\n       
\"AwPFsZlyHZnYzNhl4zOlS2qVnsls46zM53vqqaeKYzMlRjKypVky2y6z32e+f5nSOk888URxrFSv\\n\",\n       \"jFJGrWN49nif+V6Xxl5zzTUdX+v4F9H270gKSRMddSMivlGyctszJd0m6ZGIOL/kPQAAAP3kSKcY\\n\",\n       \"zlcroeqkKKGStE7SdkkLSzsFAADQTzomVBHxh00bt/0CSedJ+mtJf9G0PQAAgF406QV+28+3faXt\\n\",\n       \"9e3lU2z/cWH7n5T0IUl1LtQCAAD0gJI7Jj8vaYOkk9rLD0i6eLI32X6jpD0RsUUT34cFAABwTCj5\\n\",\n       \"mdYJEXGN7UskKSJGbR8oeN+rJF1g+zxJ8yQtsv3FiHjn2KD169cffjw4OKjBwcHy3gMAAFSyadMm\\n\",\n       \"bdq0qSi2JKEasv1LhxZsnynpp5O9KSIulXRp+z2vk/TB8cmUJK1du7aoowAAANNp9erVWr169eHl\\n\",\n       \"j370ox1jSxKqD0j6tqTltn8o6URJb5lCv+pNJAUAANBFkyZUEXG77dWSVql1L9R9EVE+c1irje9J\\n\",\n       \"+t7UuggAANDbJk2obA9Ieo+k16h1lulm2/8QESO1OwcAANAPSi75fVHSzyR9Sq0zVL8n6SpJv1ux\\n\",\n       \"XwAAAH2jJKE6NSJOGbP8Xdvbj1YHSutGZWoOZWuaZeJr9aNWjcCatQdr1pkrla1FlalTlmk7025G\\n\",\n       \"piZWpraiVK9WYaaWWKamWS/UjZNy2zkzfplafjVrm9balzP7UObzZfY3qd7n27dvX3FsZl/OHuMy\\n\",\n       \"2y5zDKhVc6/W9ynbdvb4OZGSPfwO2688tND+ld/tjdcMAABwjDhSceS7x8T8wPbDat1D9UJJ901D\\n\",\n       \"3wAAAPrCZMWRAQAAMIkjFUd+aOyy7V9Wa8ZzAAAAjFFSHPkC2w9I2qXWXFIPSbq+cr8AAAD6RslN\\n\",\n       \"6X8l6ZWS7o+IZZLOlnRL1V4BAAD0kZKEajQi/lfSDNszI+JGSS+v3C8AAIC+UTLxwuO2F0q6WdKX\\n\",\n       \"be+RNFS3WwAAAP2j5AzVmyUNS7pY0npJO8UvAAEAAA4rKY586GzUQUmfr9obAACAPnSkiT2H1JrI\\n\",\n       \"cyIREYuORgdKS0Rkypxky61k4jPlE3qhNEuvqFUCJ1M6QcqNX63YjMx+kS2pkWk78/kWLSo/NJxw\\n\",\n       \"wgnFsbNnzy6OzZaoyJQCyZQ6OXDgQHFsplxOpr9SvWNcZjvPnz+/SrvZkiGZ+My2yHz/MmM9PDxc\\n\",\n       \"HCvVK+eUKYGT2d8y45E5Bki5vw/ZtidypHmojmvauO3nSPqspFPVSs7+KCJ+1LRdAACAXtK8GuCR\\n\",\n       \"/Z2kf4uIt9ieJWlB5fUBAABMu2oJle3jJb02Ii6SpIg4IOmntdYHAADQLXVu/GhZJul/bH/O9h22\\n\",\n       \"/9F2+UV0AACAPlEzoZol6XRJfx8Rp0t6UtIlFdcHAADQFTXvoXpE0iMRcWt7+WuaIKG64YYbDj9e\\n\",\n       \"tmyZli9fXrFLAAAAZe677z7df//9RbHVEqqI+Inth22/KCLul/QGSfeMjzv77LNrdQEAAGDKVq1a\\n\",\n       \"pVWrVh1evu666zrG1v6V35+pVa5mjqQHJb2r8voAAACmXdWEKiLulHRGzXUAAAB0W82b0gEAAJ4V\\n\",\n       
\"al/ym1Tp1P41S8/UKifTC7G9IlPmpGa5gFplJzJlGUZGRqrEZvog5cpO7Nu3rzg2M36ZciTz5s0r\\n\",\n       \"js2UfJFypU6yZW1KZfa3gYGBVNuZ719mv8jEDg0NTR7UltkW2fJTmfjM8SIzJpnj1nHH5YqWZEo/\\n\",\n       \"1VLrb1TNEm1Ho23OUAEAADREQgUAANAQCRUAAEBDJFQAAAANkVABAAA0REIFAADQEAkVAABAQyRU\\n\",\n       \"AAAADZFQAQAANERCBQAA0NAxWXqmV2Smss98vky7Nafqz6hVmiVbYuTgwYNV2s58vlplTrIlODL2\\n\",\n       \"799fHDs8PFwc+9hjjxXHjo6OFscuWLCgOFbKlQ3JxGb6kfmuZvf7bFmiGu3W+j5lj3GZfTnTdqac\\n\",\n       \"TK2SNlKuRFOm7Uy7tY73mRJY2bYzZZQ6qXqGyvZHbN9j+27bX7E9t+b6AAAAuqFaQmV7qaR3Szo9\\n\",\n       \"Il4iaaakt9daHwAAQLfUvOT3M0mjkubbPihpvqRHK64PAACgK6qdoYqI/5P0t5L+S9KPJT0REf9e\\n\",\n       \"a30AAADdUvOS3wpJfy5pqaSTJB1n+/drrQ8AAKBbal7ye7mkH0bEY5Jk+xuSXiXpy2ODNm7cePjx\\n\",\n       \"8uXLtWLFiopdAgAAKLNr1y7t2rWrKLZmQnWvpL+0PSBpRNIbJG0eH7RmzZqKXQAAAJiaZcuWadmy\\n\",\n       \"ZYeXb7rppo6xNe+hulPSFyXdJumu9tOfqbU+AACAbqk6sWdEfELSJ2quAwAAoNsoPQMAANAQCRUA\\n\",\n       \"AEBDXa/lV6NGX6/Ur8t8tkzto1p1/7Iy/Sit2Sjl6tdlamJJuT5n2s7EZmru1eqDlKs9ltmPMjX3\\n\",\n       \"Mvt95vNla/kdf/zxxbGZ+meZWmJ79+4tjh0aGiqOlXpjOy9cuLBKu9n6bpl6npn9PvO9zuxDixYt\\n\",\n       \"Ko6VpLlzyyu8ZeorZrZb5hiQic30QaozfpdddlnH1zhDBQAA0BAJFQAAQEMkVAAAAA2RUAEAADRE\\n\",\n       \"QgUAANAQCRUAAEBDPZlQPfjgg93uAhrYuXNnt7uAKdqxY0e3u4AGGL/+dccdd3S7C2iIhApHHePX\\n\",\n       \"v/iD3N/uvffebncBU7Rly5ZudwEN9WRCBQAA0E9IqAAAABpyN8u02O6NGjEAAAAFImLCGmZdTagA\\n\",\n       \"AACOBVzyAwAAaIiECgAAoKGeS6hsr7V9r+0HbH+42/1BZ7b/yfZu23ePee55tjfavt/2BtvP6WYf\\n\",\n       \"0ZntJbZvtH2P7W22399+njHscbbn2b7F9lbb221/rP08Y9dHbM+0vcX2t9vLjF8f66mEyvZMSZ+W\\n\",\n       \"tFbSKZIutP3i7vYKR/A5tcZqrEskbYyIF0m6ob2M3jQq6eKIOFXSmZLe2/6+MYY9LiJGJJ0VES+T\\n\",\n       \"9FJJZ9l+jRi7frNO0nZJh25mZvz6WE8lVJJeIWlnRDwUEaOSvirpTV3uEzqIiJslPT7u6QskfaH9\\n\",\n       \"+AuS3jytnUKxiPhJRGxtPx6StEPSyWIM+0JEDLcfzpE0U63vImPXJ2y/QNJ5kj4r6dCvxhi/PtZr\\n\",\n       \"CdXJkh4es/xI+zn0j8URsbv9eLekxd3sDMrYXirpNEm3iDHsC7Zn2N6q1hjdGBH3iLHrJ5+U9CFJ\\n\",\n       
\"T495jvHrY72WUDGHwzEkWnNyMKY9zvZxkr4uaV1E7B37GmPYuyLi6fYlvxdIWm37rHGvM3Y9yvYb\\n\",\n       \"Je2JiC36+dmpX8D49Z9eS6gelbRkzPIStc5SoX/stv18SbL9K5L2dLk/OALbs9VKpq6KiGvbTzOG\\n\",\n       \"fSQifirpOkm/IcauX7xK0gW2d0m6WtJv2r5KjF9f67WE6jZJK20vtT1H0tskfavLfULOtyRd1H58\\n\",\n       \"kaRrjxCLLrJtSVdK2h4RV4x5iTHscbZPOPQLMNsDktZI2iLGri9ExKURsSQilkl6u6TvRsQ7xPj1\\n\",\n       \"tZ6bKd32uZKuUOsmyysj4mNd7hI6sH21pNdJOkGt6/2XSfqmpH+W9EJJD0l6a0Q80a0+orP2r8I2\\n\",\n       \"SbpLP7+08BFJm8UY9jTbL1HrpuUZ7f+uiojLbT9PjF1fsf06SR+IiAsYv/7WcwkVAABAv+m1S34A\\n\",\n       \"AAB9h4QKAACgIRIqAACAhkioAAAAGiKhAgAAaIiECgAAoCESKgBdZ/sH7f//qu0Lj3Lbl060LgA4\\n\",\n       \"mpiHCkDPsP16tSY5PD/xnlkRceAIr++NiIVHo38A0AlnqAB0ne2h9sOPS3qt7S2219meYfty25tt\\n\",\n       \"32n7T9rxr7d9s+1vStrWfu5a27fZ3mb73e3nPi5poN3eVWPX5ZbLbd9t+y7bbx3T9k22/8X2Dttf\\n\",\n       \"mt6tAaAfzep2BwBAPy9982FJHzx0hqqdQD0REa+wPVfS921vaMeeJunUiPjP9vK7IuLxdm27zba/\\n\",\n       \"FhGX2H5vRJw2wbp+W9KvS3qppBMl3Wp7U/u1l0k6RdJ/S/qB7VdHBJcKAXTEGSoAvcTjls+R9E7b\\n\",\n       \"WyT9SNLzJA22X9s8JpmSpHW2t0r6D0lLJK2cZF2vkfSVaNkj6XuSzlAr4docET+O1j0RWyUtbfCZ\\n\",\n       \"ADwLcIYKQK97X0RsHPtE+16rJ8ctny3pzIgYsX2jpHmTtBt6ZgJ36OzV/jHPHRTHSgCT4AwVgF6y\\n\",\n       \"V9LYG8i/I+k9tmdJku0X2Z4/wfsWSXq8nUz9mqQzx7w2euj949ws6W3t+7ROlLRa0mY9M8kCgEnx\\n\",\n       \"ry4AveDQmaE7JR1sX7r7nKRPqXW57Q7blrRH0m+148f+RHm9pD+1vV3SfWpd9jvkM5Lusn17RLzj\\n\",\n       \"0Psi4l9tv7K9zpD0oYjYY/vF49rWBMsA8AuYNgEAAKAhLvkBAAA0REIFAADQEAkVAABAQyRUAAAA\\n\",\n       \"DZFQAQAANERCBQAA0BAJFQAAQEMkVAAAAA39PxShDsSnYXpyAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792bc15650>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAES9JREFUeJztnVmMpNdVx3+nq6prX3qbnvG4x4tmsEYRkv1ikJyICIVo\\n\",\n       
\"/ELghcgSUmQC4gECgkiY8BIjeIiQEiFeIiA2CosSIZCjBAmwjQIYIRYjb4E4jqVZPNPdM91de9fe\\n\",\n       \"fXmoOt/c/qZ6qaU9VfXdn/Sp9qvTM3+du5zlE2MMDscgzN1vAxzThxONY2CcaBwD40TjGBgnGsfA\\n\",\n       \"ONE4BmZo0YjIFRF5V0R+KCLPjdMox2Qjw5zTiEgI+AHwCeAW8N/AM8aY74/XPMckMqyneRJ43xhz\\n\",\n       \"zRjTBr4JfGp8ZjkmmfCQvzsPfGC9vgn8mP0FEXFHzVOOMUb6vT+sp3GCCDDDiuYWsGa9XqPrbRwB\\n\",\n       \"YFjRvA5cEpGHRWQe+DTw7fGZ5ZhkhlrTGGM6IvKrwD8CIeAFt3MKDkNtuU80sFsITz3jXgg7AowT\\n\",\n       \"jWNgnGgcA+NE4xgYJxrHwDjROAbGicYxME40joFxonEMjBONY2CcaBwDM2wSFgAicg0oA3tA2xjz\\n\",\n       \"5DiMmnbm5uYQEe8xHA4TCoUIh8OEw2H29vYOXPv7+weuSS+VHkk0dJOxPm6MyY/DmFkhHA4zPz9P\\n\",\n       \"NBplfn6eTCZDJpMhm82SyWSo1+tUq1Xvqtfr1Ot1Go0G9XrdG2dSxTOqaAD6RkKDTCQSIZFIkEgk\\n\",\n       \"SKVSnD17lnPnznHu3DkeeOABisUiW1tb3Llzh62tLUqlEqVSCYBms8ne3h4AIjKRwhmHp3lVRPaA\\n\",\n       \"PzbG/OkYbJp6wuEw8XicbDZLNptlbW2NixcvcunSJS5evMjt27e5ceMG169f97wRQKvVolKpTPwU\\n\",\n       \"NaponjLGbIjICvCKiLxrjHltHIZNGyJ3HW40GiWVSrG4uMjKygpra2s8+uijPPbYY1y+fJnFxUVi\\n\",\n       \"sRihUMj7bbvdplqteusgmNHpyRiz0XvcEpGX6Ja2BE40/oVvKpViZWWFBx98kAsXLrC2tsby8jKJ\\n\",\n       \"RAIRIR6Ps7CwwPnz573fNJtNisUioVCITqcDzKBoRCQBhIwxFRFJAp8Efndslk0RKphQKEQoFCKd\\n\",\n       \"Tnse5tKlS6yurrK0tEQymUREiMViLCwsICIkEglarRaFQoH19XVCoZC3lpnFNc0q8FLPlYaBvzLG\\n\",\n       \"vDwWq6YMEfG21KFQiFQqxfLysica3TWpp1HRJBIJlpeXqVarrK+vk0qlPNFMqmBgBNEYY64Cj4/R\\n\",\n       \"lqlFvUwkEmF+fp50Os3i4iKrq6usra0Rj8e9LbiIEIlECIVCxONxjDEsLi6SSqWIRqPedKXXJDKO\\n\",\n       \"LXfgCYfDxGIx4vE4iUSCdDpNKpUiHo8TjUaJRCKEw2Hm5roH8O12m2azSbPZpNVqkc/nqVarNBoN\\n\",\n       \"77DPGDN7nsZxl1AoRCwWI5VKkU6nyWQyJJPJvqLRnVKtVqNSqVCtVsnn81QqFU80xpiJ3nY70YyB\\n\",\n       \"cDhMNBolmUySzWZJp9Mkk0kSiQTRaJRwOOwtlqHraXZ3dymVSuTzeU80erA3yYIBJ5qxoCfAmUyG\\n\",\n       \"paUlcrmcNz3p+sVGPU2hUODOnTsHPI0tmEkVjhPNEOgCVR/j8Ti5XM5b+K6urpLL5YjFYn0Xs+pp\\n\",\n       \"isWiJ5pqtUqz2Zx4wYATzdDYOxxbNA899NAB0fRDReP3NM1mc+KnJnCiGRoVzNzc3AHRXLhwgaWl\\n\",\n       
\"JbLZ7JGeplarHelpJhknmiFQseilwcmVlRXOnz/vbbmj0Wjf359keppknGiGQA/y9EqlUt52O5VK\\n\",\n       \"ebsmewFsi6HdblOv1ymXyxQKBcrlMrVajXa77UQzq4RCIebn54nFYt5W2xaOpjvoVltRQahoKpUK\\n\",\n       \"+XyecrlMvV53opll1NPEYjEv0coWjcag9FzGPt01xhwQTaFQoFqt0ul0vOSrSefYxHIReVFEbovI\\n\",\n       \"O9Z7iyLyioi8JyIvi0judM2cLEKhENFolHg87k1JyWTSO9CLxWL3nM8YY7yc4Far5aV8lkolL4Qw\\n\",\n       \"LZ7mJNUIfwZc8b3328ArxpgfAf6p9zoQiAjz8/MkEglyuRzLy8vkcjmSySTz8/OH7pYajQbVapVC\\n\",\n       \"oUClUpmq6cjPsaLpZeIVfG//NPD13vOvAz8zZrsmGj0B1h1TNpslkUgQiUT6fn9vb+/AdGSvYaaR\\n\",\n       \"Ydc0q8aY273nt+nm1gSGSCRCMpn0PM1xoul0Op6nKZVKVCqVqdot+Rm5WM50/+rp+8tHwPY0tmg0\\n\",\n       \"QdyPvcXe2dmhVCoFUjS3ReQsgIicA+6Mz6TJR/NnUqkU2WyWVCp1IFEc7u6Y9vf3qdfrFItFNjY2\\n\",\n       \"uH79Ouvr6xQKBer1eqBE823gM73nnwG+NR5zpgM7FUIXwbFYjHC4O9urYPRqNBoUCgU2Nze5evUq\\n\",\n       \"Gxsb5PN5arXabIpGRL4B/DvwmIh8ICLPAl8CfkpE3gN+svc6MBzmaVQ0it/TbG5ucu3atan3NMcu\\n\",\n       \"hI0xzxzy0SfGbMtEo1vpubk572BPRZNMJg8kW9lTkzHmgGhu3LhBsVj0ynFnUjQOvML9cDhMJBI5\\n\",\n       \"UJetouk3PWlBv+YE64FerVaj1Wp5qZ3ThhPNMWjXh2g0SiwWIxaLeYLRS2NQtmjsLhC65a7Vap6H\\n\",\n       \"UdFMI040J0DXMBqY9HeB0JCBf/e0v7/P3t7eAU+zu7tLvV73QgrO08wo9m7J9jB6+UMHtqfpdDr3\\n\",\n       \"TE+tVus+/SXjwYnmBGiAUmua+uXL2LRaLXZ3d71ra2uLcrlMs9n8kC0/HZxojkHXNBqkPIloms2m\\n\",\n       \"F2fK5/Nsb297OcDTOB35caI5AX5Po2W2R3maSqXC9vY2m5ubbG9vUy6XaTQaMyEa16jxBOiaxu9p\\n\",\n       \"/Id5iopmZ2eHjY0NTzRuegoQmt6pSVcanFRP4w8baHBye3ub9fV1Nz0FDW0jctj0JCL3dOes1WqU\\n\",\n       \"SiW2t7fv8TRONAHBnp76VRvYZzKacKWiWV9fp1gsep5mFhg2R/h5EbkpIm/0Ln866Eyh09NhC2Fb\\n\",\n       \"NJ1O5x7RzJqnGTZH2ABfMcY80bv+Yfym3T/8XSByuRyLi4ssLy9z5swZFhYWvJxgOHiYpyfA/h40\\n\",\n       \"nU6H/f39+/yXjYeTRLlfE5GH+3w0mW2aRsQfa0okEiwsLLC0tMTKygpnzpzxqg80vdPvafQUuNVq\\n\",\n       \"0Ww2abfbU9FC5KSMsuX+nIi8JSIvzFoJix1rymaznmiO8jRaomILxhbNLHmaYUXzVeARuj33NoAv\\n\",\n       \"j82i+4x6mn6iWVlZ8TpCDOJpAjc99cMY4+UEi8jXgO+MzaIJwG68qJdWTGrVJNxNzLKrDez7HUxr\\n\",\n       
\"4vhxDOVpesnkys8C7xz23WnE7gtspz30Ewzg7Ziq1aq3vW40Gl4T6VnjWE/TyxH+CWBZRD4Avgh8\\n\",\n       \"XEQep7uLugr88qla+SFj9wXWy24/70+FUE8zK8VwxzFsjvCLp2DLxHCUpzms7NZf0F+v1+l0OjM5\\n\",\n       \"PbkTYR92rXY2m2VpaYlMJuM1XYT+JSpaCLexscHOzo7XpGgWcaLpg7/sNpfLkUgk+uYAG2Oo1Wpe\\n\",\n       \"gFJjTbMUNvDjRONDPY2KRgv8D9tia12THTbY2dmhXC7TarXc9BQUbNH0K/A/KkC5sbHhFfk7TzPD\\n\",\n       \"2Pdr0vWMNpLudwLsjzHV63WvbX2pVGJ3d5dmsxncLXcQsO/VZAcq9c5wmnhlexq7ykBrmnZ3d72d\\n\",\n       \"k4rGTU8ziHoYPY+xRaOeJhqNejfGAPrWM6loKpWKF2ua1mK44wi8aOCup9GOnVqjraKxpy/gnnom\\n\",\n       \"u3pyGm5cOipONPSva9JEK7utqz5qDz1dw2gf4Far5W3FZ5nAi8Yf1dZ7NdldIGy0pau2qZ+FzlaD\\n\",\n       \"EnjRQPcwzy679Zeo+IWjotEDPVs0QeDIKLeIrInId0Xkf0XkeyLya733Z6aPsO1ptLhfPY1WG/hR\\n\",\n       \"0ZRKpXs8TRA4LjWiDfyGMeYjwI8DvyIil5mxPsK6e9Ibl9r5M3B44vjW1pZ3AjzLh3l+jhSNMWbT\\n\",\n       \"GPNm73kV+D5wnhnrI9wvqm1HtP2pnPa9mm7dujVzFZTHceI1TS+5/AngP5mxPsL98mfsqcluG9Lp\\n\",\n       \"dA7cdufmzZuUy+WZKlE5jhOJRkRSwN8Cv26MqfjuLGJEZGr/pex7N+kUdZin0QM9WzS3bt2i0Wh4\\n\",\n       \"+cBB4CTFchG6gvkLY4y2fp2pPsKHbblDodCB9YwKQ0+B9STYLlMJAsftngR4Afg/Y8wfWh/NTB9h\\n\",\n       \"ESESiRCPx8lkMiwsLHgtXu2wga5nGo3GgbIUDRfM+imwzXHT01PAzwNvi8gbvfe+QLdv8F+LyGeB\\n\",\n       \"a8DPnZqFHwLatj6dTrOwsOCV3uo5jYrGLklptVqed1HRBIUjRWOM+TcO90Yz0Ue4n6dJp9MHWrz6\\n\",\n       \"A5R+T2Nn8QWBQDc10jveanu0eDzu9QSORCKHntOod1EPo4IJimgCGUawA5D2+YzeLU4DlXadk783\\n\",\n       \"sC2UIAkGAuxp7Juxa1qEFv1r7oz/HpR263p/crl+LwgE0tModgLWqJ4mSATa0yj2QleL9e1u4v6k\\n\",\n       \"K7t1SBAJrKexpxT73gWVSsVLytLDOhWNZumpcIIqmsB6GkVF02w2+97wQkMI6oXspHEnmoDiF43e\\n\",\n       \"/ti+S4rtafSzIIUN/AR+etrf3/dqsbe3t0mlUgAHKhQ0D7hSqRzoPeNEE0B0R1Sr1SgUCqyvrwN4\\n\",\n       \"gtDPd3Z22NnZ8XrPaJDSiSZAGGMOpD3UajXy+TzGmANFbvqdYrHoXX7RBG27DceIRkTWgD8HztBt\\n\",\n       \"YPQnxpg/EpHngV8Etnpf/cK0tYW1p6darQbg9ZgxxnjT09zcHJVKxUu0sstVnKfpj+YIv9lLxPof\\n\",\n       \"EXmFu32Ev3LqFp4yWpKiTYja7bZXjTA3N+clXWkFpSZg7e7uTv3NvobluCj3JrDZe14VEc0Rhhnq\\n\",\n       
\"I6y7I+ge+hWLRcLhMJ1Ox0vj1POZRqPhTVNBFY2cdE7u5Qj/C/AR4PPAs0AJeB34vDGm6Pv+1Ez2\\n\",\n       \"dufOUCjkNZfW+yCoB9JHO2uvXq/fb/NPDWNMX8dwItH0pqZ/Bn7fGPMtETnD3fXM7wHnjDGf9f1m\\n\",\n       \"akRjR701VUK7SITD4XuClBpC0PSIWWVo0fRyhP8O+Htfyqd+/jDwHWPMj/renxrROPpzmGiGyhGe\\n\",\n       \"9T7CjqM50tOIyEeBfwXeprtjAvgd4Bm6Le69PsJWHZT+1nmaKWekNc0wONFMP0NNTw5HP5xoHAPj\\n\",\n       \"ROMYGCcax8A40TgGxonGMTBONI6BObVzGsfs4jyNY2CcaBwDc6qiEZErIvKuiPxQRJ4bw3jXRORt\\n\",\n       \"EXlDRP5riN+/KCK3ReQd672h29seMt7zInKzZ+MbInJlgPHG2oL3iPGGthG499Z647qAEPA+8DAQ\\n\",\n       \"Ad4ELo845lVgcYTff4xus8l3rPf+APit3vPngC+NON4Xgd8c0r6zwOO95yngB8DlYW08YryhbTTG\\n\",\n       \"nKqneRJ43xhzzRjTBr4JfGoM4w6dZmqMeQ0o+N4eur3tIePBkDaaMbfgPWK8oW2E052ezgMfWK9v\\n\",\n       \"ctfgYTHAqyLyuoj80ohjKafR3vZzIvKWiLwwbDf3cbfgtcb7j1FtPE3RnMZe/iljzBPA03S7p39s\\n\",\n       \"nIObrh8f1e6vAo/QzTfaAL486AD+Fryj2tgb729641VHtfE0RXMLWLNer9H1NkNjjNnoPW4BL9Gd\\n\",\n       \"AkdlrO1tjTF3TA/ga4PaOO4WvNZ4f6njjWrjaYrmdeCSiDwsIvPAp+m2kh0KEUmISLr3PAl8kvGk\\n\",\n       \"mY61ve0oqbDjbsF7aum6o+xmTrB6f5ruiv19ulWYo4z1CN0d2JvA94YZD/gGsA606K63ngUWgVeB\\n\",\n       \"94CXgdwI4/0C3YrUt4G3ev+5qwOM91Fgv/c3vtG7rgxr4yHjPT2KjcYYF0ZwDI47EXYMjBONY2Cc\\n\",\n       \"aBwD40TjGBgnGsfAONE4BsaJxjEwTjSOgfl/g7yNWl4b+UcAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792bb98fd0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEptJREFUeJzt3X2QXfVdx/HPJ89PJExMFrJJaqKCQobaRGCAUii2OsgU\\n\",\n       \"WrWWorZYmTpOWxuxZUqZ0b90Wttxip2OzlSQFvqk0grtKBS0QCm1hJAEQkIDOETJ4xKEkMfNZvfr\\n\",\n       \"H/du2Cy72fPNyW/vveH9mslwz7nfe85vz+/cc7+ch9/XESEAAAAcvwmtbgAAAECnI6ECAACoiYQK\\n\",\n       \"AACgJhIqAACAmkioAAAAaiKhAgAAqGlSK1dumzEbAABAx4gIjzS/aEJl+3JJN0uaKOmWiPjr4THX\\n\",\n       
\"XXfd6z63Zs0arVix4qh5AwMDhVqZW3Z/f3+R2IxMe7PjjGXiR4vdsGGDli1bdtS8zLbI/H2HDx+u\\n\",\n       \"HJuN7+3trRx78ODBIrGHDh2qHHsiviMvvvii5s+f/7r5EydOrLwMe8RjzYgybd6/f3/l2Mw2zi47\\n\",\n       \"s19k9rfMd2TChJEvLgwMDIz43qRJ1Q/1U6dOrRw7ZcqUyrGTJ0+uHDva31c3NitzPMz030j7xb59\\n\",\n       \"+zRz5sxKsdllj6bUcbnkb1QpVfejYx2Ti+2JtidK+pKkyyWdLeka22eVWh8AAECrlLyH6nxJz0XE\\n\",\n       \"5ojok/QtSe8uuD4AAICWKJlQLZT0wpDpLc15Y1qwYEGRBmF8jHTJCJ1hxowZrW4CashcbkV7yVwS\\n\",\n       \"RXsqmVAd94VREqrO1tXV1eom4DiNdA8HOgcJVefK3I+G9lTypvStkhYPmV6sxlmqo6xZs+bI6wUL\\n\",\n       \"FpBMAQCAtjAwMFD5JvuSCdVqSWfYXiJpm6SrJV0zPGj403wAAADtYMKECUc9AXisp/yKJVQRcdj2\\n\",\n       \"xyR9X41hE26NiKdLrQ8AAKBVio5DFRH3SLqn5DoAAABaraUjpUvVB5wrOdBbJj5z02dmMMRSsdlt\\n\",\n       \"UWrZmeVmBiHMPhmTGbRw2rRpRWIzbcj8fdmbWksNypiJzewXme2WVWpw38xyS+73mb7O7MuZ5Wae\\n\",\n       \"IJ09e3bl2Mx2k3IDSfb19VWOLTW474EDByrHSrnfqFK/q5k+ycRmjhdS7rtatU8WLhx9sAJq+QEA\\n\",\n       \"ANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFAABQEwkVAABATSRUAAAANZFQAQAA1ERCBQAA\\n\",\n       \"UBMJFQAAQE0tr+V36NChSnHZmnSlZOoklVpuphZVqfa2i8y2yOq0uo3ZOlcZmZp0GaXqg2W3Raav\\n\",\n       \"MzUFM7GZNmRqlEnS4cOHK8dm+rpUXbx9+/ZVjq36GzIosy0ysZk+yezLmRqI2fhMbGZfznz/Sv5G\\n\",\n       \"Zfokux+NpGiWYnux7Qdsb7D9lO2Pl1wfAABAK5Q+Q9Un6fqIWGd7lqTHbd8fEU8XXi8AAMC4KXqG\\n\",\n       \"KiJ2RMS65uu9kp6W1F1ynQAAAONt3G5Msr1E0nJJj47XOgEAAMbDuCRUzct9d0pa2TxTBQAAcNIo\\n\",\n       \"/pSf7cmSvi3paxFx1/D3165de+T16aefrgULFpRuEgAAwJief/55bd68uVJs0YTKjechb5W0MSJu\\n\",\n       \"Hilm+fLlJZsAAABwXJYuXaqlS5cemX7wwQdHjS19ye+tkn5f0mW21zb/XV54nQAAAOOq6BmqiPiR\\n\",\n       \"GI0dAACc5Eh2AAAAamp56ZmS5TI6SaaEQ6nYrEyJilKlCEqWGClV9iVTbqVkWYZMO0r1X6ltkSkZ\\n\",\n       \"IuW+J5llZ8qtZMqAzJ49u3KsVK4ETqZcRyb24MGDlWMPHDhQOTYrs9/PnDmzcuzcuXMrx86ZM6dy\\n\",\n       \"rJT7TmX2z1dffbVy7N691R/mz5QZyuwXktTb21s5NrMtRsMZKgAAgJpIqAAAAGoioQIAAKiJhAoA\\n\",\n       \"AKAmEioAAICaSKgAAABqIqECAACoiYQKAACgJhIqAACAmkioAAAAamp56ZlJk6o1IVPmpGS5lYxM\\n\",\n       
\"m0vFZpXadv39/ZVjS5Zbyfx9mTZnypFk+i9TriNTZkHKlVooVWYoE1v1WHE8Mu3I7BeZMiBTpkwp\\n\",\n       \"EltSZluU2oeyZXgyJWIyJXsyJXC2bt1aOTZTmkXKHTNK/a5m+m/y5MmVY7PHgPEubTdq62z/tqSQ\\n\",\n       \"NNKvW0TEd6qswPZESaslbYmIK4+rlQAAAG3sWOnelWokVKOplFBJWilpo6RTqjYKAACgk4yaUEXE\\n\",\n       \"H9RduO1Fkq6Q9FeS/qzu8gAAANrRmBf4bZ9u+1bb9zanz7Z9XcXlf0HSDZLK3fQDAADQYlXumPyK\\n\",\n       \"pPskdTenn5V0/Vgfsv0uST0RsVYj34cFAABwUqhyy/y8iPgn2zdKUkT02a7ySNNFkq6yfYWkaZJm\\n\",\n       \"2749Ij44NOixxx478rq7u1sLFy6s3noAAIBCtmzZUvmpzCoJ1V7bPzM4YfsCSbvH+lBE3CTppuZn\\n\",\n       \"LpX0yeHJlCSdd955lRoKAAAwnhYtWqRFixYdmR56Emi4KgnVJyR9T9LP2f6xpPmS3nsc7WqPwaEA\\n\",\n       \"AABOsDETqoh43PYlkn5RjXuhNkVE9REBG8t4SNJDx9dEAACA9jZmQmV7uqSPSLpYjbNMD9v++4g4\\n\",\n       \"WLpxAAAAnaDKJb/bJb0q6YtqnKH6XUl3SPqdgu0CAADoGFUSqmURcfaQ6R/Y3niiGnDwYLUTXSXr\\n\",\n       \"82WWXSo2U7+uVGzJZZfqv2xdw1I1E0v19SmnVC8wcOqpp1aOlXJ15jKxmVpi+/fvLxJb9bgyKFOT\\n\",\n       \"LqNUTbOszP6ZqUuZkdmHMt+97du3p9qR6evMdiu1f2b7o9SxKFNHr9SxJRMr5dp8IupjVmndGtsX\\n\",\n       \"Dk40n/J7vPaaAQAAThLHKo68fkjMI7ZfUOMeqjdJ2jQObQMAAOgIYxVHBgAAwBiOVRx589Bp211q\\n\",\n       \"jHgOAACAIaoUR77K9rOSnldjLKnNku4p3C4AAICOUeWm9L+UdKGkZyJiqaR3SHq0aKsAAAA6SJWE\\n\",\n       \"qi8idkmaYHtiRDwg6dzC7QIAAOgYVQZpeNn2KZIelvR12z2S9pZtFgAAQOeocobqPZL2S7pe0r2S\\n\",\n       \"nhNPAAIAABxRpTjy4NmofklfKdoaAACADnSsgT33qjGQ50giImafiAZUHe69ZLmVjFIlVNqh/E02\\n\",\n       \"PlMeolSZmmwpgoxSZUNKbeNsuZV2KI2UKfeQic3uF5ltl4nt7e0tstxMeR+p3DEjs52nTp1aOTZT\\n\",\n       \"MiQTK+XanNkW06dPrxxbar+QcvtG5vhSqk8y+0W2PEwmftq0+qNCHWscqll1F277VEm3SFqmRnL2\\n\",\n       \"hxHxk7rLBQAAaCe51D7vbyX9e0S81/YkSTMLrw8AAGDcFUuobM+R9LaIuFaSIuKwpN2l1gcAANAq\\n\",\n       \"5W5AkZZKetH2bbbX2P4H2zMKrg8AAKAlSiZUkyStkPR3EbFC0j5JNxZcHwAAQEuUvIdqi6QtEfFY\\n\",\n       \"c/pOjZBQrV69+sjr7u5udXd3F2wSAABANbt27dKuXbsqxRZLqCJih+0XbJ8ZEc9IeqekDcPjzj2X\\n\",\n       \"KjYAAKD9zJs3T/PmzTsyvWnTplFjSz/l9ydqlKuZIum/JX2o8PoAAADGXdGEKiKekHReyXUAAAC0\\n\",\n       
\"Wsmb0gEAAN4QSl/yG9OBAwcqxWXKWWTLTpQqVVOqLEqpMi7tIlO2IFPyRcqVkym1X2TKTmRLjGT0\\n\",\n       \"9/cXWW6mT2bMqD6SSqY0RKacRTY+s1/09fWl2lFKpq8zbc6URTl8+HDl2IzscTYTn4nNlDnJHrdK\\n\",\n       \"KfX7kDnOlorNxlft67vvvnv0ZVReGwAAAEZEQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFAABQ\\n\",\n       \"EwkVAABATSRUAAAANZFQAQAA1ERCBQAAUFPLS89ULUeQKQEwMDCQakOmlEQmNtOOUmVqSsqUiMnI\\n\",\n       \"lEPIlL6Qyu5HVWVKqMyZM6dybHYfyvx9mRI4mdj9+/dXjt29e3fl2OnTp1eOlXIlcDLLnjlzZuXY\\n\",\n       \"THmYbEmiTDmZUiWJMqVnMn9ftkRUZtmZY1HmeJgpiZIto5Qpa5Mpl5NpR2Z/27t3b+XYffv2VY7N\\n\",\n       \"tuNElIkq+itu+9O2N9heb/sbtnN7BgAAQAcollDZXiLpw5JWRMQ5kiZKen+p9QEAALRKyUt+r0rq\\n\",\n       \"kzTDdr+kGZK2FlwfAABASxQ7QxUR/yfpbyT9r6Rtkl6JiP8otT4AAIBWKXnJ7+cl/amkJZK6Jc2y\\n\",\n       \"/Xul1gcAANAqJS/5nSvpxxHxkiTZ/o6kiyR9fWjQ+vXrj7zu6urSaaedVrBJAAAA1fT09Kinp6dS\\n\",\n       \"bMmE6qeS/tz2dEkHJb1T0qrhQeecc07BJgAAAByfrq4udXV1HZnesGHDqLEl76F6QtLtklZLerI5\\n\",\n       \"+8ul1gcAANAqRQf2jIjPSfpcyXUAAAC0WucNzw0AANBmSKgAAABqanktv6q1rjI1lUrVYMPRMtu5\\n\",\n       \"VJ9kl5upU5apjZepzZWpJZapf5at5Zep+ZVZdqnlZmroZfpDytU0yxyLdu7cWTk2U6twz549lWOl\\n\",\n       \"cvX5MvXdMjUsM8vN1vPM9F+2TmBVpertSbmagr29vZVjM3X02qF2pJQ7vsyaNav++movAQAA4A2O\\n\",\n       \"hAoAAKAmEioAAICaSKgAAABqIqECAACoiYQKAACgprZMqLZt29bqJqCG7du3t7oJOE5btmxpdRNQ\\n\",\n       \"w65du1rdBBwnjpudry0TKnaszkb/da6tW7e2ugmo4aWXXmp1E3CcduzY0eomoKa2TKgAAAA6CQkV\\n\",\n       \"AABATc4Mw3/CV263buUAAABJETFiTaKWJlQAAAAnAy75AQAA1ERCBQAAUFPbJVS2L7f9U9vP2v5U\\n\",\n       \"q9uD0dn+R9s7ba8fMm+u7fttP2P7PtuntrKNGJ3txbYfsL3B9lO2P96cTx+2OdvTbD9qe53tjbY/\\n\",\n       \"05xP33UQ2xNtr7X9veY0/dfB2iqhsj1R0pckXS7pbEnX2D6rta3CMdymRl8NdaOk+yPiTEn/2ZxG\\n\",\n       \"e+qTdH1ELJN0gaSPNr9v9GGbi4iDki6LiLdIerOky2xfLPqu06yUtFHS4M3M9F8Ha6uEStL5kp6L\\n\",\n       \"iM0R0SfpW5Le3eI2YRQR8bCkl4fNvkrSV5uvvyrpPePaKFQWETsiYl3z9V5JT0taKPqwI0TE/ubL\\n\",\n       \"KZImqvFdpO86hO1Fkq6QdIukwafG6L8O1m4J1UJJLwyZ3tKch85xWkTsbL7eKem0VjYG1dheImm5\\n\",\n       
\"pEdFH3YE2xNsr1Ojjx6IiA2i7zrJFyTdIGlgyDz6r4O1W0LFGA4nkWiMyUGftjnbsyR9W9LKiNgz\\n\",\n       \"9D36sH1FxEDzkt8iSZfYvmzY+/Rdm7L9Lkk9EbFWr52dOgr913naLaHaKmnxkOnFapylQufYaft0\\n\",\n       \"SbK9QFJPi9uDY7A9WY1k6o6IuKs5mz7sIBGxW9K/SfoV0Xed4iJJV9l+XtI3Jf2q7TtE/3W0dkuo\\n\",\n       \"Vks6w/YS21MkXS3puy1uE3K+K+na5utrJd11jFi0kG1LulXSxoi4echb9GGbsz1v8Akw29Ml/Zqk\\n\",\n       \"taLvOkJE3BQRiyNiqaT3S/pBRHxA9F9Ha7uR0m3/hqSb1bjJ8taI+EyLm4RR2P6mpEslzVPjev9f\\n\",\n       \"SLpb0j9LepOkzZLeFxGvtKqNGF3zqbAfSnpSr11a+LSkVaIP25rtc9S4aXlC898dEfF523NF33UU\\n\",\n       \"25dK+kREXEX/dba2S6gAAAA6Tbtd8gMAAOg4JFQAAAA1kVABAADUREIFAABQEwkVAABATSRUAAAA\\n\",\n       \"NZFQAWg52480//uztq85wcu+aaR1AcCJxDhUANqG7berMcjhlYnPTIqIw8d4f09EnHIi2gcAo+EM\\n\",\n       \"FYCWs723+fKzkt5me63tlbYn2P687VW2n7D9R834t9t+2Pbdkp5qzrvL9mrbT9n+cHPeZyVNby7v\\n\",\n       \"jqHrcsPnba+3/aTt9w1Z9oO2/8X207a/Nr5bA0AnmtTqBgCAXit98ylJnxw8Q9VMoF6JiPNtT5X0\\n\",\n       \"I9v3NWOXS1oWEf/TnP5QRLzcrG23yvadEXGj7Y9GxPIR1vVbkn5Z0pslzZf0mO0fNt97i6SzJW2X\\n\",\n       \"9Ijtt0YElwoBjIozVADaiYdN/7qkD9peK+knkuZK+oXme6uGJFOStNL2Okn/JWmxpDPGWNfFkr4R\\n\",\n       \"DT2SHpJ0nhoJ16qI2BaNeyLWSVpS428C8AbAGSoA7e5jEXH/0BnNe632DZt+h6QLIuKg7QckTRtj\\n\",\n       \"uaHXJ3CDZ696h8zrF8dKAGPgDBWAdrJH0tAbyL8v6SO2J0mS7TNtzxjhc7MlvdxMpn5J0gVD3usb\\n\",\n       \"/PwwD0u6unmf1nxJl0hapdcnWQAwJv6vC0A7GDwz9ISk/ualu9skfVGNy21rbFtSj6TfbMYPfUT5\\n\",\n       \"Xkl/bHujpE1qXPYb9GVJT9p+PCI+MPi5iPhX2xc21xmSboiIHttnDVu2RpgGgKMwbAIAAEBNXPID\\n\",\n       \"AACoiYQKAACgJhIqAACAmkioAAAAaiKhAgAAqImECgAAoCYSKgAAgJpIqAAAAGr6f7xE4rRkFyo0\\n\",\n       \"AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792bb3b910>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       
\"AAALEgAACxIB0t1+/AAAGNNJREFUeJztnVmM7Fldxz+na1+6qrqW7q5ebt+ZuTMDTEzgBU2AyAMh\\n\",\n       \"Q0xAXyQkRoNofFA0SiLig6D4gCYQow9EZYlbQKOBgInKYETxwQUzw4CyzMxdeu/au/a1jw9dv8Op\\n\",\n       \"ukvfWrvrzv+bnPyram7/51TVt37n9/ud3+97lNYaBw5GwdJlT8DB4sEhjYOR4ZDGwchwSONgZDik\\n\",\n       \"cTAyHNI4GBljk0Yp9axS6rtKqZeUUh+c5qQcXG2ocfI0SikX8D3gbcAB8N/Ae7TW35nu9BxcRYxr\\n\",\n       \"ad4IvKy1vq217gCfB941vWk5uMpwj/l3m8Ce9Xwf+GH7HyilnFTzgkNrre71+riWxiHEqxjjkuYA\\n\",\n       \"2Laeb3NubRy8CjAuab4BPKmUuq6U8gLvBr40vWk5uMoYy6fRWneVUr8E/BPgAj7tRE6vHowVcj/U\\n\",\n       \"jR1HeOExbUfYwasYDmkcjAyHNA5GhkMaByPDIY2DkeGQxsHIcEjjYGQ4pHEwMhzSOBgZDmkcjAyH\\n\",\n       \"NA5GxrhFWAAopW4DZaAHdLTWb5zGpKYNpdTAY7fbjcvlMtezszO01macnZ0NDHkd4LLamIffg1KK\\n\",\n       \"paUllFID87bnOitMRBrOi7HeqrUuTGMys4J8wEtLS7jdbiKRCMvLy0QiEUKhEN1ul06nY0az2aTR\\n\",\n       \"aNBsNmk2m5ydndHr9cz1MuZvE8XtduPxeMzo9Xq0220zzs7OzN/OgkCTkgbgnjuhVwXyQbtcLlwu\\n\",\n       \"Fz6fj5WVFdbX10mn0yQSCUOORqNBo9GgXC5zenpKuVwGoNPpoJSi0+lc6nuQ4fV6CQQCZnQ6Her1\\n\",\n       \"OrVajV6vN3OrOA1L81WlVA/4Y631n05hTlOHUsosRz6fj3g8zvb2Nk888QRbW1tUq1UzKpUK2WwW\\n\",\n       \"j8cDQLvdNkvDZViZ4fcgxA+FQoTDYZaXl2m1Wiil6PV6tFqtAeLMApOS5k1a6yOlVAp4Tin1Xa31\\n\",\n       \"16cxsUkhX/TS0hI+n8/8KiORCOl0mu3tbR5//HF2dnY4PT01lqVUKqG1pt1uU61WzRcl/sNlvAeP\\n\",\n       \"x4PP5zMjHA4TjUaJRqNEIhHq9TpKKbrdLvV63SylskxNm0ATkUZrfdS/ZpVSX+C8teXSSSNfsFIK\\n\",\n       \"r9dLIpEglUqRSqVYXV1lZ2eH7e1tVldXiUQixlfpdrs0m03c7vOPRZ53Oh263e6ArzBLDPswy8vL\\n\",\n       \"xONxEokEiUSCSCRCOBw2o1gs4vP50FpTr9cH3k+32536/MYmjVIqCLi01hWlVAh4O/DbU5vZmLA/\\n\",\n       \"bDHliUSCnZ0dHnvsMa5du0YqlSKZTJJKpYhEIvR6PTqdDu12m3q9bpambrdLq9Wi2+2aX++83oPt\\n\",\n       \"w4TDYdLpNNeuXePatWtEo1GCwSDBYJBAIMDJyQlnZ2fU63Xy+bx5L1rrmSxVk1iaNeALfRPqBv5K\\n\",\n       \"a/2VqcxqQgyv/0KaZ555hqeffppQKEQwGCQUCuHz+cyH3Gq18Pv9d1kaCWXnRRpgwHlfXl5mfX2d\\n\",\n       \"J598kte97nXEYrG7lqt6vU6xWOTg4IBGo2EIM4sldWzSaK1vAa+f4lwmgu3D2D5ANBollUoZx/ep\\n\",\n       
\"p54a+BVrrc1VTLodfrfb7bm/F3HavV4vXq/XRHs7Ozs89dRTRCIRQ6qlpSWq1SqRSIRAIIDH45m5\\n\",\n       \"DzaNkPvSIR+QECYWi5mRSqXY2dkhlUoRDodxu920221DiGazyeHhIUdHR2bs7e2Ry+Wo1+uX8n68\\n\",\n       \"Xi/Ly8tmbGxskEwmiUajBAIBAOr1ukkV7O7ucnx8TLFYpFar0Wq1aLfbM4uiFp409vrvcrnw+/2s\\n\",\n       \"rKyQTqdJp9Nsbm5y/fp1VldXCYfDuFwuut2uCa9LpRJ7e3tm7O/vk8/nKRQKl0qaSCRinPeNjQ1S\\n\",\n       \"qZTxZTqdDrVazcxTSFMoFKjVajSbTeOHOaS5D4bzMLFYjI2NDR5//HEee+wx1tbWjKVxuVx0Oh2q\\n\",\n       \"1Sq5XI5MJsPu7i63bt3i1q1b3Llzx/yCm83mpbwfmzTb29tsbm4a0tjJvGw2y97e3j0tjR1yTxsL\\n\",\n       \"Txo7te71evH7/cRiMeMDPPHEEyanEQqFWFpaotPpUKlUyOVyHBwcmA/+9u3b3Llz59Leh/ggfr+f\\n\",\n       \"aDTK6uoqW1tbrK+vE4/HCYVCuN1uut2uSULu7u5ycHBANpulVCrRaDRm7octPGkAY2ECgYDxAyKR\\n\",\n       \"yABZfD4fLpcLrTWNRoNiscjR0RG7u7tkMhlOT09ptVqXMn+JkmREo1ESiQRra2tsbW2RSCRMlFcs\\n\",\n       \"FsnlcuRyObLZLNlslkKhQKVSodVqzWVDdeFJI7vWPp+PYDBoCBOJRIjFYkSjURNJyY62kOb4+Jjd\\n\",\n       \"3V1yudylkkYceImWIpEIiUSC9fV1tra2CIVCZu+rVCoNECaTyVAoFKhWq2ZZmjUeOdLIfoxtaexf\\n\",\n       \"ca/Xo9FoUCqVjKWpVqvGF7gM2GkC2eqIx+Osra2xubmJy+WiVqtRr9ep1+sDpMlmsxSLRVqtlkOa\\n\",\n       \"izCcl/H7/YRCIWNlpPwhHA4P1Ma0Wi0qlQqFQoFMJsPh4eFAXmbe81dK4fF4CAQCZlsgHo+bjPXa\\n\",\n       \"2ppJD7TbbbM85fN5MyqVCr1eb+YblYKFI81wAZLs+MqvU/ZmAoEAbrebs7MzY0mq1SrFYpG9vT2y\\n\",\n       \"2SyVSoVOp2O2COZVYDXsw9h7Y6lUihs3brCxsWGSeN1u15Rs5PN5isUi1WrVhNZ2sdg8sHCkgcEU\\n\",\n       \"u9frJRgM3pc0vV6PWq1GJpMhl8txfHxsknfVapVOpzP3D932YTweD/F4nM3NTbO3tLW1RTqdJhKJ\\n\",\n       \"mLzSMGkqlQqNRmNgX2weVXuw4KSxfRkhjZ05tS1NNpvlzp077O7uGksjpLFLPecBl8s14MMIaW7c\\n\",\n       \"uMFrXvMaEokEsVjsoUgjlhLmV4p6IWmUUp8BfgzIaK1/qP9aHPhrYAe4Dfyk1ro0w3na8xn40CVi\\n\",\n       \"isViJBIJkskkkUgEv99vHF8hzd7eHq+88gonJyfk83lqtdpMSgcuwrAfFo/HSafTXL9+naefftps\\n\",\n       \"d3g8HrTWtFotarUap6enA6Sxl6e5zv8h/s1ngWeHXvsN4Dmt9VPAP/efzwUul4tQKDSwCSklD5ub\\n\",\n       \"m6ytrRGJRPB4PCaJd3p6SrFYJJ/Pm/C6Xq9fWvmmXX0nDnsoFCIQCOD1eg1RTk9POTk54fj4eOAq\\n\",\n       
\"zu+88jLDuNDSaK2/rpS6PvTyO4Ef7T/+M+BrzIk4LpeLcDhMMpk0db6bm5tmpFIpPB4PbrfbRB3D\\n\",\n       \"pKlUKtTr9UuxMvCDZGQoFCIajRrS+P1+Q5pms2lCbJswJycnZo9pXiH2XfMf8+/WtNYn/ccnnNfW\\n\",\n       \"zAVLS0uEQiGSySRbW1vs7OyYzcl0Os3KyoqpWJM9pmHSNJtNWq3WlbI0wWDQkKbVatFsNs28hTCZ\\n\",\n       \"TIaTkxMT9Q13HswLEzvCWms9T309l8tFMBgkHo+zsbHBtWvXWFtbMyMSiVCpVIw1qVarpva3UCiQ\\n\",\n       \"z+dNOeS8CsXt8k1gwBdbWVkhGo0SDofN8iSbpaVSiePj4wErc5klG4JxSXOilFrXWh8rpdJAZpqT\\n\",\n       \"ehjcby2XCjuxNFI7Yye/Zh0p2Yk7ON+A9Pv9BAIB/H4/6+vrbGxsmJFOp4nH4wSDQZRSNJtNCoUC\\n\",\n       \"BwcHvPLKK2Z/rFqtXmpHhGBc0nwJ+Bng9/rXL05tRmPCJoNdgSc1vhJlzCu0tpOQgUBgoDAsnU4P\\n\",\n       \"kCaZTLKyskIgEEApRaPRoFAosL+/z0svvcTx8TH5fJ5qtXopy9EwHibk/hznTm9SKbUH/BbwMeBv\\n\",\n       \"lFLvox9yz3KSF8EmgVgau1hcugmGk2AzlMMdKA4LBoOsrKyYJVTIsrm5ycbGhtnuENLYluall14i\\n\",\n       \"n8+bRr6FsDRa6/fc5z+9bcpzGQvDhLFJIxZGliX5Iu3Hw71BYx5lNPDYrt+VFIHkYiQ1YFsbr9dr\\n\",\n       \"+soBY2kODw+5efOm6fS8KljIjPCDIG2rwWDQEGdzc9NU6IdCobsa/GWHWGprR4Xb7R4YUuIgY9iH\\n\",\n       \"SSaTpqhKEnh2z3ipVDIh9VU8N/2RI400yEmVnpj7s7MzU3RuR09SBVcul6lUKlSr1ZH/n+Lo2g6v\\n\",\n       \"9CQFAgHTpCdDCsUk8yu+lyxBUrZ5WSH1RXgkSePxeAx5JFnmdrsJh8Osrq4asojagpQa5HI50yg3\\n\",\n       \"CuxuRxk2MeLxOCsrK8TjceLxuMnHyIaldEVI9rpUKlGtVk3D21XDQpPG9kPksU0WwCwBktupVCrG\\n\",\n       \"15EOysPDQ4LBoGmSGxVSIWhfZUi/tT2G+5FkiSyXy+RyuQFL45BmCrAr705OTvD7/abJX9LxduQC\\n\",\n       \"5z5HIBCg1+uZTUwZIiNiF3OPCrEuouQgVxnBYBCfz2dIOexTCVkODw/Z39/n4ODAbBU4y9MUIDW+\\n\",\n       \"QhrZLZZlwC7vhB8sV36/HzhvD7G/sF6vh8/nIxKJkEwmp+LT+Hy+u557vV4zJ0k+Si5JOiMODw+5\\n\",\n       \"ffs2h4eHJsx2LM0UYJPG7XYbVYWVlRXq9Trtdtv8om31K2lZHc4Ka62JRCImcppG9CTDlmiTq1LK\\n\",\n       \"kEb6x21Lc/v2bVP361iaKcEmjSTr4vE4q6urA2GqXd0nX6LP57srpyKY5y9aSCMbk8OkOT09NZGU\\n\",\n       \"Q5opQGttPnCXy8Xp6SlHR0f4/X601hQKhbuWBilokqu9hMhyZQs02vp1D7MTbveGdzodEz3Jddjx\\n\",\n       \"FcUq2VCVUL9Wq9FoNMwO/DxLUEfBQpKm0+kMZHaPjo6MoM/x8bHJj0iuRHIn8ppENJIIlP0qCcWl\\n\",\n       
\"CF2+yItQq9XMqFarA8m8cDh8T9KIFo7kh6RFRUgjKYGriIUkjRRPiTOptaZWq5HL5Uxtiq1BIzUr\\n\",\n       \"y8vLRKNRo6MXCATu2naQGpxisUihUKBUuriKtVgsDowbN24AGDGiYdiWRso4qtXqAGnmXew+Csat\\n\",\n       \"Ef4I8HNAtv/PPqS1/sdZTdKGkEaI02g0qNVqZLPZgYo4O8kmSbWVlRWSyaQJsWOxmLmnkEa09gqF\\n\",\n       \"gil8ughS6yKj2+0SDodZX1+/55cupKnVapTLZcrl8l2kucp4GEvzWeCPgD+3XtPAJ7TWn5jJrEaA\\n\",\n       \"XT8jkPyMvWlpZ1wlxD06OiIejw+Ev51Oh0KhYEaxWLxwDpLFlUL14T4qe9ui1+tRLBbJZDIcHByw\\n\",\n       \"v7/P4eHhlc7LDGPcGmG4QvrBNnHEzzk7OxvY06lWq8anyeVy7O/vm/pc+wsV7ZpRfBopH202m/ds\\n\",\n       \"vLPFoaWJP5PJsLe3x82bN029TL1ev5LL0TAm8Wner5T6ac4Pdv/AvFpYhmFr4dkRkERYEqHYCt92\\n\",\n       \"8k0cYTuCkvzJKHkb29kdLvYSAovAtSQm9/f3uXXrFoVCwXRIPMqk+STwO/3HHwU+DrxvKjMaA+KT\\n\",\n       \"CGRrwMa98jP306QbtbZmOCNsE1lEoWVTslarGUuzv7/PzZs3zeakhNlXHWORRmttvEOl1KeAL09t\\n\",\n       \"RlPC8Bc+y1+waOTZOn8icQKYzgLZTRcRonK5bHSK5yk5OynGIo1SKi3C08BPAN+a3pQWDxKJSTnn\\n\",\n       \"+vo6sVjM7He1Wi3TWSAbkrlcjnK5TLvdvqfzfJUxTo3wh4G3KqVez3kUdQv4hZnO8opDpGfX1ta4\\n\",\n       \"fv36XaSxlbdu3rzJ4eGhUa2QRN6iEAbGrxH+zAzmsrCwLc21a9cGSCOVgyKiJBuSInkm0q2LhIXL\\n\",\n       \"CF8FSBQme1m2AJF0eUrxl123I86w7C3NS4Ro2nBIMwakElCG1P6KTyNbGCJ1Mpw8XCT/5V5wSDMG\\n\",\n       \"pHBdNP1swqytrZmWFBFVsjPTw201iwiHNCNCKgGDwSCxWMwsTba1sZOMYmnsDVFb8mwR4ZBmDAhp\\n\",\n       \"RH3L1iqWsxds/0UOIZPd7GazOdOzC2YNhzRjQNQ4ZQd9eXnZlJMCRu5M9q+KxaI5uU52s+XYwEWE\\n\",\n       \"Q5oxIDXHYmnsY3MAc3ZBuVweqMuRgivZ2LwsUaVJ4ZBmDIilGSaNfbiYCCtKeYWQxhaHhMs753sS\\n\",\n       \"OKQZA/dankQY8uzsbKBYXPaZpFh8kfaY7oeHEWp0MITh5WmYNI1Gg9PTU7OTLQd2NJvNhbQsw3As\\n\",\n       \"zYiwZemFNKKXN3z2QjabfSRJ80BLo5TaVkr9i1Lqf5VS31ZK/XL/9bhS6jml1PeVUl9RSsXmM92r\\n\",\n       \"AQm57eXJ5/OxtLRkSHM/S/Mo4KLlqQP8qtb6GeBHgF9USr2WS9QRvgwISaLRqJE6k7OxRVzRbrlt\\n\",\n       \"t9sm5BYFCBGKfhQszQOXJ631MXDcf1xVSn0H2OQSdYTnDfvQDulwECn9YDA4IE0iFYRSmyylptIu\\n\",\n       \"vKh5mWE8tE/TLy5/A/CfXKKO8GVABAKkFSaVShGLxQiFQqZDU6Iiux5YSGOLRT7ylkaglAoDfwf8\\n\",\n       
\"ita6MtQDPVcd4XlDLM3y8jKJRIJ0Ok0ymSQWixEMBvF6vYYMdhmEbWmktfdRIAw8XOWeh3PC/IXW\\n\",\n       \"WqRfL11HeJ4Q0iSTSaOZJ2G2FI7b8mfVatUcQGpbF1uE2h5298Ii5HAuip4U8Gng/7TWf2D9J9ER\\n\",\n       \"hiuiIzwr2JYmmUySTqdJJBIsLy8bta1OpzOQAS6Xy3cd2GGfHS7FW16v12jXiGzK/TokrhIusjRv\\n\",\n       \"An4KeFEp9Xz/tQ9xxXSEZw3xaRKJBBsbG/ckjew13Ys0tq6wLRUrQ0omhltxriouip7+nftboyuh\\n\",\n       \"IzxriIafbWlERmTY0sjZmOVy2RzgJfcY1suxdXNEAWMRCANORvihIF+4dGfaywkMhtoirmjndoY1\\n\",\n       \"cWSpEjLVajWjHnFVxRltOKSZATweD6FQyEjbh0KhgfPChyv75ORe6SO/6s6wQ5opYFjDzy4HbbVa\\n\",\n       \"rKysmKMSk8mkadMVy+Tz+YyY0qPgCDsYESJ0LZZGaz1wiMbGxoZxnBuNhjm7qVqtksvlHNK8WuB2\\n\",\n       \"u40sbSwWw+VyDTwXSxOJRPD5fCZ7LMJGktNxHOFXESSPI1YmFouZRF+9Xjc5GXF6S6US+XzejKOj\\n\",\n       \"IwqFwiMvNeLAgtfrNYdjBIPBu9Q+7VNe6vW6kRqRowbl2OR6vX7lnWBwSDMVSAgeDAYHBBblKod1\\n\",\n       \"5HK5AX2ag4MD9vb2zHmbV1WhfBgOaR4Cw+KNtg8ie0f2LrfdFNfpdMjlcmSzWTOOjo7IZDLm8Ay5\\n\",\n       \"n11wfpXhkOYCiD6xHA/o9/tN6CyJOFusWvSBbW3hUqk0MMSXkWo+u+tyEeCQ5gIIafL5vJFGs4/V\\n\",\n       \"cblcxuGVIb1OoitsC1MPX6Wib5FEAR5IGqXUNudSsKucCxj9idb6Dy9TR3je0FrTaDTI5/MAZimR\\n\",\n       \"SCkQCBgtYDmh7uTkxJylfXx8bEQf5Tq8fC1aD9RFlkZqhF/oF2L9j1LqOa6QjvA8ID6NUopOpzNw\\n\",\n       \"bpMcvWOPTCYzMERexK7esx3lRcO4NcJwhXSEZw0RrxanVyRERFpfcjKyTA0fKSj+yrwOk5811MNO\\n\",\n       \"vl8j/K/AM8AHgPcCp9xHR/hRKgGVsyZlDJ8iJ5GP5GZsEsnxO3YYDpMd6TwvaK3vaRgeijT9pelr\\n\",\n       \"wO9qrb+olFrlB/7MR4G01vp9Q39zdT+NEWGXMSil7jr4azgvYyug20vQIhDFxtik6dcI/z3wD0Ml\\n\",\n       \"n/LfrwNflsM2rNcX45NxcF/cjzRj1Qj3i8kFr3od4VcbHmhplFJvBv4NeJHziAngN4H3AAM6wlYf\\n\",\n       \"lPytY2kWHBP5NOPAIc3iY6zlyYGDe8EhjYOR4ZDGwchwSONgZDikcTAyHNI4GBkOaRyMjJnlaRw8\\n\",\n       \"unAsjYOR4ZDGwciYKWmUUs8qpb6rlHpJKfXBKdzvtlLqRaXU80qp/xrj7z+jlDpRSn3Lem1sedv7\\n\",\n       \"3O8jSqn9/hyfV0o9O8L9pirB+4D7jT1H4O7m9WkNwAW8DFwHPMALwGsnvOctID7B37+Fc7HJb1mv\\n\",\n       \"/T7w6/3HHwQ+NuH9Pgz82pjzWwde338cBr4HvHbcOT7gfmPPUWs9U0vzRuBlrfVtrXUH+Dzwrinc\\n\",\n       
\"d+wyU63114Hi0Mvv5FzWlv71xye8H4w5R631sdb6hf7jKmBL8I48xwfcb+w5wmyXp01gz3q+zw8m\\n\",\n       \"PC408FWl1DeUUj8/4b0Es5C3fb9S6ptKqU+Pq+Y+bQle637/MekcZ0maWcTyb9JavwF4B+fq6W+Z\\n\",\n       \"5s31uR2fdN6fBB7jvN7oCPj4qDcYluCddI79+/1t/37VSec4S9IcANvW823Orc3Y0Fof9a9Z4Auc\\n\",\n       \"L4GT4kQptQ6mInEieVutdUb3AXxq1Dk+SIJ3nDla9/tLud+kc5wlab4BPKmUuq6U8gLv5lxKdiwo\\n\",\n       \"pYJKqeX+4xDwdqZTZjpVedtJSmGnLcE7s3LdSaKZh/De38G5x/4y512Yk9zrMc4jsBeAb49zP+Bz\\n\",\n       \"wCHQ5tzfei8QB74KfB/4ChCb4H4/y3lH6ovAN/tf7toI93szcNZ/j8/3x7PjzvE+93vHJHPUWjvb\\n\",\n       \"CA5Gh5MRdjAyHNI4GBkOaRyMDIc0DkaGQxoHI8MhjYOR4ZDGwchwSONgZPw/UDzRgG/E2K8AAAAA\\n\",\n       \"SUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792ba9ddd0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEvtJREFUeJzt3X+QXXV5x/HPJxuS7OYXQwNYdbNLG20loxWrDoKYpNoO\\n\",\n       \"ZQRta1XaqrUdOx21UqqOyNj+4bSj1elIHaadsVB/4K+2alGmVaQ1iRFKIpgQSFAMYyhoZQkkkt3N\\n\",\n       \"j014+se9G5ewN3uePfnuuTe+XzMM95z73HO+93zPOfvknHO/jyNCAAAAmL15TTcAAACg15FQAQAA\\n\",\n       \"1ERCBQAAUBMJFQAAQE0kVAAAADWRUAEAANQ0v8mV22bMBgAA0DMiwtPNL5pQ2b5Y0jWS+iRdFxF/\\n\",\n       \"e3zM+973vqd8buPGjVqzZs2s13v06NFUfGYsrieeeKJIbC+OB2ZPu0/p1ltv1YUXXviked2wjSXp\\n\",\n       \"yJEjlWMPHz5cJPbgwYNFYufNy11w7uvre8q8HTt2aPXq1U+Zf9ppp9VabieZY3V0dLRy7N69eyvH\\n\",\n       \"StK+ffsqx46Pj1eOPXToUOXYzD7UaRvv379fS5cufcr8TP8NDAxUju3v768cu2jRosqx8+dX/9OU\\n\",\n       \"2d+kzuet6WTOF5ljdbp9aGRkRGeddVal2BM5cOBA5dhS57iJiYnKsZlzQPZv+1wrdsvPdp+kayVd\\n\",\n       \"LOlcSZfbfk6p9QEAADSl5DNUL5a0KyJ2R8SEpM9LelXB9QEAADSiZEL1DEkPTpl+qD1vRkNDQ0Ua\\n\",\n       \"hLkxODjYdBMwS2eeeWbTTUANCxYsaLoJmKXFixc33QTUVDKhmvVDQcPDwyexGZhrK1eubLoJmKXp\\n\",\n       
\"nuFA71i4cGHTTcAskVD1vpIPpf9Q0tRLFYNqXaV6ko0bNx57PTQ0RDIFAAB6TsmE6g5Jz7I9LOlH\\n\",\n       \"kl4n6fLjg+r8mg8AAKAbFEuoIuKI7bdLulmtYROuj4h7S60PAACgKUXHoYqIr0r6asl1AAAANK3R\\n\",\n       \"kdKl6r9KyQwMmRkUTsoNDpkZWKzXlivlBrLLDN6WWW4mtuT3KzUYaalB77IDAJYadLJUbEZ2uZlt\\n\",\n       \"UWrgxMyAk9kBbTP7UeYYGRsbqxybOS9nBhfNDFoq5Y7rUuetkue4zGCry5cvrxybGTg4M4hrpq+z\\n\",\n       \"v2LNHFNV94sNGzZ0fI9afgAAADWRUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFAABQ\\n\",\n       \"EwkVAABATSRUAAAANZFQAQAA1ERCBQAAUFPjtfyq1qTK1OTJytQoKtWOUnXjMsuVcnWjStUHy3y/\\n\",\n       \"bM22UjUTs9u5RBuyMjUFS9X9y/RHqWMk246+vr7KsZn6dZlj5ODBg5VjpVz9wcx+UWofeuyxxyrH\\n\",\n       \"ZrdFqZp7mf0zU38wU5tPytXGK1WfL7PcUucWqdzfnU6KXqGyPWh7ve0dtu+x/Y6S6wMAAGhC6StU\\n\",\n       \"E5KujIhttpdIutP2LRFxb+H1AgAAzJmiV6gi4scRsa39elTSvZKeXnKdAAAAc23OHkq3PSzpPEmb\\n\",\n       \"52qdAAAAc2FOEqr27b4vSLqifaUKAADglFH8V362T5P0RUmfjogbj39//fr1x14PDw/rnHPOKd0k\\n\",\n       \"AACAGY2Ojmp0tNp1oKIJlVtjDFwvaWdEXDNdzLp160o2AQAAYFaWLFmiJUuWHJseGRnpGFv6lt+F\\n\",\n       \"kv5A0jrbW9v/XVx4nQAAAHOq6BWqiPiWGI0dAACc4kh2AAAAamq89Eym5EM3KFVipFtKl2RK62TK\\n\",\n       \"aixYsKBybKZsQSZWyn2/UiVGSn2/TJkFKbdvlCpJlGlDydIsmRIqpUqMLF68uHLs8uXLK8dK0sKF\\n\",\n       \"CyvHZr7f+Ph4kdj9+/dXjn388ccrx0rl/uYsW7ascuzZZ59dJFbKlYjJHH+ZPhkbGysSmymhJOVK\\n\",\n       \"1VTdFtu3b+/4HleoAAAAaiKhAgAAqImECgAAoCYSKgAAgJpIqAAAAGoioQIAAKiJhAoAAKAmEioA\\n\",\n       \"AICaSKgAAABqIqECAACoqfHSM1VlyiFky7iUKgWSKXNSqvRMpiSKlCu3kvl+mdIzpWKlXCmQTF9n\\n\",\n       \"tlu2zVVlywxllCqhUio2u99n4jOlSzL7xcDAQOXYzLaQcueXTImfQ4cOFYnNHHtnnHFG5VgpV7an\\n\",\n       \"v78/teyqRkdHK8fu2bMntezMds7InO9L7cuZ5Uq581bV73fttdd2fK/jWcT270gKSdOtJSLiS1VW\\n\",\n       \"brtP0h2SHoqIS6t8BgAAoJec6J9ll6qVUHVSKaGSdIWknZKWVm0UAABAL+mYUEXEH9ZduO1nSrpE\\n\",\n       \"0t9I+ou6ywMAAOhGM96otv0029fb/lp7+lzbf1xx+R+R9G5J5R7uAAAAaFiVJ/8+Ienrkp7env6+\\n\",\n       \"pCtn+pDtV0oaiYitmv45LAAAgFNClZ+2rIiIf7F9lSRFxITtKj8DuUDSZbYvkbRI0jLbn4qIN04N\\n\",\n       
\"2rhx47HXQ0NDGh4ertx4AACAUjZs2KANGzZUiq2SUI3a/rnJCdvnS/rJTB+KiKslXd3+zBpJ7zo+\\n\",\n       \"mZKkNWvWVGooAADAXFq7dq3Wrl17bPr9739/x9gqCdU7Jd0k6Rds3ybpTEmvmUW7ygy0BAAA0LAZ\\n\",\n       \"E6qIuNP2yyT9klrPQn0vIiYyK4mIjZI2zhgIAADQg2ZMqGz3S3qrpJeqdZVpk+1/jIiDpRsHAADQ\\n\",\n       \"C6rc8vuUpMclfVStK1S/J+kGSb9bsF0AAAA9o0pCtToizp0y/Q3bO09WA6rWjcrUosrWNCu17Exs\\n\",\n       \"qRqBmdisbqhVmO3rTHymplkmNmPRokWVYzN1q6RcnblMrbvDhw9Xjh0bG6scm6lRdvBg7gJ6qbqU\\n\",\n       \"CxcurBybqWmWPa4z55fMsZppR4m6alL+2MvUYsy0I1OfL7MvZ89xme+X6ZPMuSiz32fOLdk6qJll\\n\",\n       \"Z75fJ1WOsu/YfsnkRPtXfnfWXjMAAMAp4kTFke+eEnOr7QfVeoZqpaTvzUHbAAAAesJMxZEBAAAw\\n\",\n       \"gxMVR949ddr2WWqNeA4AAIApqhRHvsz29yX9QK2xpHZL+mrhdgEAAPSMKg+l/7Wkl0i6LyLOkfRy\\n\",\n       \"SZuLtgoAAKCHVEmoJiJij6R5tvsiYr2kFxZuFwAAQM+oMkjDXttLJW2S9BnbI5KqD7gBAABwiqty\\n\",\n       \"herVksYlXSnpa5J2iV8AAgAAHFOlOPLk1aijkj5RtDUAAAA96EQDe46qNZDndCIilp2MBpQoSZIp\\n\",\n       \"qdEtMtuhVBkXqVxpnUw5hFLLzcaXKkmUacP4+Hjl2Ew5i2w7MiU4MsdfpiRKqfJMUq4EziOPPFI5\\n\",\n       \"NlMC58CBA0WWK5U7v2RKe/T391eOzZQYKVlyKRObOZ4y/ZfZN6XcOaNUmaFM/2ViM/uQlNs3MuVy\\n\",\n       \"OjnROFRL6i7c9umSrpO0Wq3k7I8i4va6ywUAAOgm1f95MTt/L+k/I+I1tudLql79EwAAoEcUS6hs\\n\",\n       \"L5d0UUS8SZIi4oikn5RaHwAAQFOqP5SQd46kR2x/3PZ3bP+T7YGC6wMAAGhEyYRqvqQXSPqHiHiB\\n\",\n       \"pDFJVxVcHwAAQCNKPkP1kKSHIuLb7ekvaJqEatOmTcder1y5UkNDQwWbBAAAUM2ePXv06KOPVoot\\n\",\n       \"llBFxI9tP2j72RFxn6RXSNpxfNxFF11UqgkAAACztmLFCq1YseLY9H333dcxtvSv/P5MrXI1CyTd\\n\",\n       \"L+nNhdcHAAAw54omVBFxl6QXlVwHAABA00o+lA4AAPAzofQtvxkdOXKkUlzJshOlyl+UanPJbZGR\\n\",\n       \"WXYmNlPOYtGiRZVjpVx5gUzZgsz3y5SIOXz4cOXYqsfSpEzZicw+l9nGy5cvrxy7dOnSyrFLluQK\\n\",\n       \"PSxeXH3M4cy2mJiYSLWjlExppMz+mSmhki2NVFWmP7LxmdIzmXNR5hyXPYdnjuvM/pk5v3RDCTOp\\n\",\n       \"TPmwm266qeN7XKECAACoiYQKAACgJhIqAACAmkioAAAAaiKhAgAAqImECgAAoCYSKgAAgJpIqAAA\\n\",\n       \"AGoioQIAAKiJhAoAAKCmxkvPVB0aPjM8fcmh+ksMZS/lShyUaq9UrgROph3j4+OVY/ft21c5Vsr1\\n\",\n       
\"SbaUS1WZEhWlSuVIuTIOmRI4pcqRZMpkZMrUSLlSNQMDA5Vjly1bVjm2VMkQqdz5M9PmTF9njr1s\\n\",\n       \"6ZlMOzLbbcGCBZVjS5bX6u/vLxKbOUZK7ReZvw3ZZZ+M0khFr1DZfq/tHbbvtv1Z29X/OgAAAPSI\\n\",\n       \"YgmV7WFJb5H0goh4rqQ+Sa8vtT4AAICmlLzl97ikCUkDto9KGpD0w4LrAwAAaESxK1QR8Zikv5P0\\n\",\n       \"v5J+JGlfRPxXqfUBAAA0peQtv1+U9OeShiU9XdIS279fan0AAABNKXnL74WSbouIRyXJ9pckXSDp\\n\",\n       \"M1ODbrvttmOvBwcHNTg4WLBJAAAA1ezatUu7du2qFFsyofqupL+03S/poKRXSNpyfNAFF1xQsAkA\\n\",\n       \"AACzs2rVKq1aterY9M0339wxtuQzVHdJ+pSkOyRtb8/+WKn1AQAANKXowJ4R8SFJHyq5DgAAgKZR\\n\",\n       \"egYAAKAmEioAAICaGq/ll6mLVUqp+nWZ2FJtyMosO1N/MPP9StUSyy47U0cvU28rU8crUx+sZM2v\\n\",\n       \"zH6RqX+WkdkWmbpjknT66adXjs2cszK1x/bu3Vs5NlvDMlMbL1MTMlOrMFNfMdN/+/fvrxwr5eog\\n\",\n       \"Zs4vmeM68/0y+2a2HZn988CBA5Vjx8bGirQhU1NUKldPtxOuUAEAANREQgUAAFATCRUAAEBNJFQA\\n\",\n       \"AAA1kVABAADUREIFAABQU1cmVLt37266Cajh/vvvb7oJmKXt27fPHISuRf/1rttvv73pJqCmrkyo\\n\",\n       \"HnjggaabgBpIqHoXf5B7G/3XuzZv3tx0E1BTVyZUAAAAvYSECgAAoCZnS3ec1JXbza0cAAAgKSKm\\n\",\n       \"rWHWaEIFAABwKuCWHwAAQE0kVAAAADV1XUJl+2Lb37X9fdvvabo96Mz2P9t+2PbdU+adYfsW2/fZ\\n\",\n       \"/rrt05tsIzqzPWh7ve0dtu+x/Y72fPqwy9leZHuz7W22d9r+QHs+fddDbPfZ3mr7pvY0/dfDuiqh\\n\",\n       \"st0n6VpJF0s6V9Lltp/TbKtwAh9Xq6+mukrSLRHxbEn/3Z5Gd5qQdGVErJZ0vqS3tY83+rDLRcRB\\n\",\n       \"Sesi4vmSnidpne2Xir7rNVdI2ilp8mFm+q+HdVVCJenFknZFxO6ImJD0eUmvarhN6CAiNknae9zs\\n\",\n       \"yyR9sv36k5JePaeNQmUR8eOI2NZ+PSrpXknPEH3YEyJivP1ygaQ+tY5F+q5H2H6mpEskXSdp8ldj\\n\",\n       \"9F8P67aE6hmSHpwy/VB7HnrH2RHxcPv1w5LObrIxqMb2sKTzJG0WfdgTbM+zvU2tPlofETtE3/WS\\n\",\n       \"j0h6t6Qnpsyj/3pYtyVUjOFwConWmBz0aZezvUTSFyVdERH7p75HH3aviHiifcvvmZJeZnvdce/T\\n\",\n       \"d13K9isljUTEVv306tST0H+9p9sSqh9KGpwyPajWVSr0jodtP02SbP+8pJGG24MTsH2aWsnUDRFx\\n\",\n       \"Y3s2fdhDIuInkv5D0q+KvusVF0i6zPYPJH1O0q/ZvkH0X0/rtoTqDknPsj1se4Gk10n6SsNtQs5X\\n\",\n       \"JL2p/fpNkm48QSwaZNuSrpe0MyKumfIWfdjlbK+Y/AWY7X5Jvy5pq+i7nhARV0fEYEScI+n1kr4R\\n\",\n       
\"EW8Q/dfTum6kdNu/KekatR6yvD4iPtBwk9CB7c9JWiNphVr3+/9K0pcl/auklZJ2S3ptROxrqo3o\\n\",\n       \"rP2rsG9K2q6f3lp4r6Qtog+7mu3nqvXQ8rz2fzdExIdtnyH6rqfYXiPpnRFxGf3X27ouoQIAAOg1\\n\",\n       \"3XbLDwAAoOeQUAEAANREQgUAAFATCRUAAEBNJFQAAAA1kVABAADUREIFoHG2b23/f8j25Sd52VdP\\n\",\n       \"ty4AOJkYhwpA17C9Vq1BDi9NfGZ+RBw5wfv7I2LpyWgfAHTCFSoAjbM92n75QUkX2d5q+wrb82x/\\n\",\n       \"2PYW23fZ/pN2/Frbm2x/WdI97Xk32r7D9j2239Ke90FJ/e3l3TB1XW75sO27bW+3/dopy95g+99s\\n\",\n       \"32v703O7NQD0ovlNNwAA9NPSN++R9K7JK1TtBGpfRLzY9kJJ37L99XbseZJWR8QD7ek3R8Tedm27\\n\",\n       \"Lba/EBFX2X5bRJw3zbp+W9KvSHqepDMlfdv2N9vvPV/SuZL+T9Ktti+MCG4VAuiIK1QAuomPm/4N\\n\",\n       \"SW+0vVXS7ZLOkLSq/d6WKcmUJF1he5uk/5E0KOlZM6zrpZI+Gy0jkjZKepFaCdeWiPhRtJ6J2CZp\\n\",\n       \"uMZ3AvAzgCtUALrd2yPilqkz2s9ajR03/XJJ50fEQdvrJS2aYbmhpyZwk1evDk2Zd1ScKwHMgCtU\\n\",\n       \"ALrJfklTHyC/WdJbbc+XJNvPtj0wzeeWSdrbTqZ+WdL5U96bmPz8cTZJel37Oa0zJb1M0hY9NckC\\n\",\n       \"gBnxry4A3WDyytBdko62b919XNJH1brd9h3bljQi6bfa8VN/ovw1SX9qe6ek76l122/SxyRtt31n\\n\",\n       \"RLxh8nMR8e+2X9JeZ0h6d0SM2H7OccvWNNMA8CQMmwAAAFATt/wAAABqIqECAACoiYQKAACgJhIq\\n\",\n       \"AACAmkioAAAAaiKhAgAAqImECgAAoCYSKgAAgJr+H9OLZ8u3dMr8AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b9b9550>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGLBJREFUeJztnVtsY+tVx3+ft+/29i224yQzk+lp+9AHpNOX8lAq+lBV\\n\",\n       \"p0Jq4YWqEgKVUvEABQESbXmgBV5KJSoED0ioLeKmFgQqKi/QVgKpPHA5qKcXzqVnTjOTjJ2L49jx\\n\",\n       \"/f7xYK9vtj3JnLHjTOxk/6WtOJ5k5zv1v+tb31r/9d9Ka40LF7PAc9ULcLF6cEnjYma4pHExM1zS\\n\",\n       \"uJgZLmlczAyXNC5mxtykUUq9oJR6VSn1ulLqk4tclIvlhpqnTqOUsoDXgPcBeeB/gI9orV9Z7PJc\\n\",\n       \"LCPmjTTvAu5pre9rrXvAV4EPLW5ZLpYZ3jl/bwvYc3z/EPhx5w8opdxS84pDa63Oen/eSOMS4gZj\\n\",\n       
\"XtLkgduO728zijYubgDmJc2LwNuVUneVUn7gw8DXF7csF8uMuXIarXVfKfWrwL8CFvAl9+R0czDX\\n\",\n       \"kfupbuwmwiuPRSfCLm4wXNK4mBkuaVzMDJc0LmaGSxoXM8MljYuZ4ZLGxcxwSeNiZrikcTEzXNK4\\n\",\n       \"mBkuaVzMjHlFWAAope4DVWAA9LTW71rEolwsNy5EGkZirPdqrU8WsRgXq4FFbE9ndkJdXF9clDQa\\n\",\n       \"+JZS6kWl1McXsSAXy4+Lbk/v1lrvK6UywDeVUq9qrb+9iIU9ayj1KGBaloXH48Hj8WBZlvl3+ZnB\\n\",\n       \"YGCufr9/Jeu9SlyINFrr/fHXolLqa4xGW1aONEIIpRSWZREOh4lEIkQiEcLhMJZl4fV6sSwLy7I4\\n\",\n       \"PT01V7VaRYRsN8XrZ27SKKXCgKW1rimlIsD7gd9b2MqeIZRSJrJ4vV5s22ZtbY21tTVSqRR+v99c\\n\",\n       \"Pp+PQqFAoVBgOBxSq9WAEWGUUjeCOBeJNOvA18Yh2wv8rdb6GwtZ1TOGkEYiim3bZDIZtra22Nzc\\n\",\n       \"JBQKmSsQCBAOhxkOh1SrVTweD8PhEHAjzZtCa70DPL/AtTwzeDwesx15PB4TQXw+H6FQiHQ6zcbG\\n\",\n       \"Bnfu3OHOnTtmuwqHwwSDQdrtNuVymXw+j8fjmSDLTSDORRPhlYNSCr/fTyAQIBAIEAwGsW2baDSK\\n\",\n       \"bdvEYjE2NjbY2Nhgc3OTXC5nfi4QCOD3+4lEIoRCIfx+v0mUh8Mhg8Hgiv/rng1uJGkCgcAESdLp\\n\",\n       \"NOl0mrW1NfNVrmQyaaKQz+fDsiyi0aghjdc7+p9QcpqbgBtJGr/fTzQaJZVKsba2xq1bt9ja2mJr\\n\",\n       \"a4uNjQ0TeeSSJFm2NSFNIBDAsiyGw+FEbnPdcWNII1HAsixCoRDxeHwid9ne3mZ7e5vbt2+brUu2\\n\",\n       \"IyeGwyHRaNQcySORCJ1Oh06nAzCxRV3X/OZGkMZZrAsEAsTjcbLZrEl0c7kcmUyGWCxGMBjE5/Ph\\n\",\n       \"9XrP3G6UUoRCIVKpFFtbW7ztbW8z9ZrT01OGwyHD4RCttfl63XDtSTN9nA4GgyQSCdbX17lz5w5v\\n\",\n       \"fetbSSaTpFIpYrGY2XKEaGchHA6bba3VanFwcMDR0RHD4ZBWq2WqxcC1TI6vPWkAU7Tz+/0Eg0ET\\n\",\n       \"aW7fvs1zzz1njtORSIRAIGCIdl5i64w0Wmt8Ph9aa1qtFicno4a/1vpaRhm4AaSR1oAQJhqNkkgk\\n\",\n       \"yGQybG5ucvv2bbMdyVfBeR96KBQimUzS7/fx+XwMBgM6nQ71ep1yuUyn06Hb7dLr9UyVWK7rkCzf\\n\",\n       \"CNJI4huPx8lkMmSzWRKJBOFw2ByjLct6qiOzUgqfz0ckEqHf7+PxeOh2u+Yovra2Rr1eN1etVqPd\\n\",\n       \"bpur0+msfAX52pPG4/GYPCabzbKxsWFIEwqF8Hq9E8dpwZM+UCnwyfHd4/EQiURYW1tjc3OTk5MT\\n\",\n       \"SqWS+epMlHu93sT9V5E4N4I0oVDIJL+3bt0im80Sj8cJh8OGNE7pw5tBIo3Ue4QwzWaTRqPBwcEB\\n\",\n       \"hUKBg4MD03qQiFSv11eaMPAUpFFKfRn4KeBIa/1j4/dSwN8B28B94Ge11pVLXOfMEAJMk2Zra4t0\\n\",\n       
\"Ok08HjeRxonztg65n2xP0qcCiMViRlszGAxIpVKmACjbXq/Xo1armWKg3GsVifM0keYvgD8F/srx\\n\",\n       \"3qeAb2qtPz82nv7U+LpyOGUOQhjbtkkmk2QyGdbX10kmk0SjUXw+HzApqhoMBqbWInUWIYkky+f9\\n\",\n       \"TelDhcNhEokEnU4HrTX9fp9ms0m5XDYnrcFgsJKEgacgjdb620qpu1NvfxD4yfHrvwT+nSUhDTw6\\n\",\n       \"YktdRk5M6XSa9fV1YrEYkUjEfID9fp9er0e326Xb7T5GoGAwaKQRTyKNvJbIJoQTwsjfFMIMh8OV\\n\",\n       \"jDbz5jTrWuvD8etDRtqapYB8gFKXOSvSCAmckabb7dJqtWi32/R6PUOkwWCAbdtmWzrvb8KjynM4\\n\",\n       \"HDaEiUQiVCoVDg8PDWn6/b4hzSoW/y6cCGut9bL561mWhc/nIxAIEAqFiEajxONx1tbWyGQyZiuR\\n\",\n       \"JmO326XZbFKv12k0GqbG0u12TZ4ipAmFQhP5jTN3EkiuJEf9YrFIPp/Htm1CoZDJaVaRMDA/aQ6V\\n\",\n       \"Ujmt9YFSagM4WuSiLgKlFF6v1yjsRPsiPSUhSr/fN1+Pjo4oFosUi0VKpZIhivxMKpUyHfFUKkUw\\n\",\n       \"GDTRSk5GTiLKa5/Ph1KKeDxOLpfjueeeo9PpcHx8TKVSoVwuUy6XJ7arVdiq5iXN14FfAP5w/PWf\\n\",\n       \"FraiC8JZAZYoEw6HCQQCppDX7/fpdrt0Oh2azSb7+/vs7u7y4MED8vn8RMMRMDob0dokEglzxePx\\n\",\n       \"iQanHN8lKfZ4PMRiMXK5HO12G4/HQz6fJ5/Po7U2kW2Vos/THLm/wijpTSul9oDfBT4H/L1S6mOM\\n\",\n       \"j9yXuchZcVakEdJIpOl0OjQaDU5PT9nf3+eNN97gtdde44033jD3ke0nk8lMXOvr67TbbWBU6JPo\\n\",\n       \"IMdrIY5IQSXSeDweotEowWDQEKZYLJqItyri9Kc5PX3knH9634LXshBMRxqnNFMKedO9okKhwM7O\\n\",\n       \"Dq+88govv/zyxJHd6/VSKpVMhVd6SwDBYJBYLGYIIomycyQGwLZt4FHPajAYUKvVODw8xOfz0ev1\\n\",\n       \"VqovdS0rws4PbfrSWtNutzk9PeXo6Ij9/X1KpRK1Wo1ut2vuIQ3GwWBAu92mVqvh9XoZDodEIhFi\\n\",\n       \"sRipVIpWqzVxWnPKPqdfO+s/sgU6r1XBtSQNPCLOdItgOBzSbrepVCocHR2Rz+cpFosTpJEPUP6f\\n\",\n       \"L6SR343FYiSTSWq1Gs1m0xDmrEgx3eGeLh5Ok2cVcC1J4yTMWY3IVqtlIk0+n+f4+PjcSKO1Np1p\\n\",\n       \"IU8qlSKbzVKv12m1WuZoL0nstMh8Oso4ibNqhIEbQhoncYQE1WqVYrHI/v4+Jycn1Ov1xyKNvBZZ\\n\",\n       \"g9x7fX2dSqVCvV6n3W6buo6QYLrx+WaRxkmeVcC1I43ogG3bJpVKkclkTEdbElWfz0c4HCYej5NK\\n\",\n       \"pcxJarp5eRamE23bts2R/jxNjiTezWaTWq1mIpS0LJz1mVUgzrWzT3OOqEjrQARXUksR0kgya9s2\\n\",\n       \"wWDwqUgDk0d65wyU1GZg8sOXNkWz2aRardJoNAxppIC4KoU9uIakebNII6RyRppYLHamTOI8SKQJ\\n\",\n       
\"h8MTkea833+zSCMV4VUhzbXbnpwTlKlUymhnzoo08XjcFNjO62BP3xsmI41t26YG5Iw0TjgjTa1W\\n\",\n       \"o9FomFxItqdVwrUjjSS6tVqNk5MTisUiXq/XkAQe/9BFfRcOh40fjfNyjuX6/X62t7dZX1/Htu0J\\n\",\n       \"5Z9UdqeP+9LtTqVS9Ho96vU6p6enlEolIpEI7XbbJMnXoo2wapAWQb1e5+TkxESZeDxupgPOIo2T\\n\",\n       \"OE4vGtmG5IpEIty9e5dsNott22bLk7+ttZ44sTlzLPn71WqVUqlkdD2A0Q6vQm5z7UijtTZa3HK5\\n\",\n       \"TDQaNUP+EgWENDJS6yRMJBIxXWzpZMskg1zb29uGNF6vd6LWMhgMJpqVgCGNbJ3lcpnDw0NisZjx\\n\",\n       \"upFIJeRZZsyrEf4s8EtAcfxjn9Za/8tlLXIWSKSR7SkYDJJOp2k2m+YDEUWfCK1s256wS3POaUej\\n\",\n       \"UdPdzmQyrK2tkc1mzRiviKokNxEPPmfj0e/3G8LEYjFKpRKpVIp4PG5GYQaDwcSc1DJjXo2wBr6g\\n\",\n       \"tf7CpazqAtBa0+v1aLVaVKtVwuEwtVpt4rQCj4gzGAzIZrO0Wi2jfXFqZSTSiAwiHo8TjUaxLItO\\n\",\n       \"p0OlUqHRaBgBV71eNxYmcsm6JMqJhlgG9iSRlrHeZce8GmFYUv9g2Z6azSaWZREIBEyPqNPpGBWe\\n\",\n       \"kMbj8ZDL5bAsC9u2zYco+Ywzp5FIJCKrTqdDv9/n+PjYXKVSiWw2Sy6XI5fLTRg8yu8JEcWiTeQa\\n\",\n       \"Qtxlx0Vymk8opX6e0YPdf2tZRlickQZG1V8hjUQaIY3kNqJzyWazhmxnnZ6ETGItIuO3+/v7PHz4\\n\",\n       \"kHw+z97eHnfv3qXb7eL1eo2pgAzVOWWg6XSaRqNhphUqlcq1Js2fAb8/fv0HwB8BH1vIii4IIQ2M\\n\",\n       \"TiQej+ex7UmIIKSIRqNn5hHy3vQHWalU6Pf7Znva399nZ2eHe/fu8frrr9NqtYzhYy6Xm2g9OI//\\n\",\n       \"mUyGfr9vuu7BYPD6kkZrbTTBSqkvAv+8sBUtAE6Vv3ywBwcH7OzsmGqx2KfJeO20cAoekUakofK1\\n\",\n       \"WCxyfHxsvu7u7pLP5ymVSjQaDWq1mtEAHx8fG1WeRBnZGm3bpt1uk0gkzFpkAM/ZzFw2zEUapdSG\\n\",\n       \"GE8DPwN8f3FLujjk6Asj0oikU2oi2WyWbDZr5rzPklAA5hgspzG59vf32d/fN6O3Qp6TkxMzmlut\\n\",\n       \"Vg1pJNKEQqEJ8ti2Tb/fJx6Pm5qNTCv0+31DtmU7Tc2jEf4M8F6l1POMTlE7wC9f6ipngFOGIElx\\n\",\n       \"pVKhUCiglKLT6ZiEMxKJkEgkzO86db1OiJRCEt3d3V329vbY3d3l4cOHxiFCGpFS8T05OeH4+Bif\\n\",\n       \"z0cwGDTSC/leZKASaaT5KfUkGeRbNsyrEf7yJaxlYZgO6ZVKBY/HYyrF4vKQyWRMYuyMMtO1Eok0\\n\",\n       \"x8fHFAoFHjx4wM7ODj/60Y/Y2dl5bKy3Xq9PRBqJKs46kWxDXq+XRCIxEWm63a4hzDLWba5dRXga\\n\",\n       \"Em0ajYbZggqFgnG9krFb8QkOBAKPVXjz+TwPHz5kb2+PfD5PoVCYUPtNC6mcGmSZgJBBu0gkYo7Y\\n\",\n       
\"sj1KR15cLYLBIJXK6DAqIvZlwo0gjfMIrrXm4OCAQCAAQKvVeqz/5ExC+/0+e3t7Zjva29vj+PiY\\n\",\n       \"crlMq9U6U3kneZTMWEm9SPQ3Mrgnx3jpyOdyOer1uulndbtdqtXq0jUxbwxpADNqK4SRXCWZTJor\\n\",\n       \"Ho9PbDW9Xo+9vT0ePHjAgwcP2N3dNflLs9mc0AULhDTS0ZYoI+PB0j6Q0WFx0Go0Gmat3W6XWq12\\n\",\n       \"rlnkVeLGkEb6Os1mE3iUp0gFN5PJmERWTi4inpIoc//+fXZ3d41BgKjuptFut+n3+9TrdSzLMltQ\\n\",\n       \"IpEglUpNmAn4/X6zPckaJcIcHR25pLkqTOtvJSFWSpmI0m63jZzCaVDU7/cpFApmzEW0L0+qoUgS\\n\",\n       \"K/UikXmWy2WKxeLE6clpLGDbNp1Ox0QikaBalrVUUws3gjTTcOY4MtctUoqzchoZ1G80GhPD+ufB\\n\",\n       \"SVJxpWg0GoY0slXJKUlyHul4y0nKaYQ9PblwlbhxpHHmOFLCbzQaVCoV06CcHnBrNpu0Wi1jLC33\\n\",\n       \"edKH5zQtkshWqVQoFotG0O4kjUg1gMdII/qcqyaL4MaSpt/vmyLfWW2E6dmnWQbbztoOncRMpVJm\\n\",\n       \"zkoqxJKcS89KSON80suytBRuHGng2buJ93o908X2eDxG8F4qlSZ8+JRSxu7NacJkWRbNZtOc1q46\\n\",\n       \"4txI0jxr9Ho9U1wcDAYkk8mJZ0qJvaxTvyMegZubm3i9XsrlsikcXnXEcUnzDCCRZjAY0Gq1SCQS\\n\",\n       \"Ew8kSyaTRm8sxtYindjc3DTbo5giXTWeSBql1G1GMs8so+bkn2ut/0StgI/wMkEMH9vtNkopQxq5\\n\",\n       \"pM0gNrXSSJXaUa/XM62JpScN0AN+Q2v9klIqCvyvUuqbwEdZUh/hZYTTNUIKjKenp+YkJbUaEYP5\\n\",\n       \"/X5isZgRaUmJQH7nqo0DnkgarfUBcDB+XVdKvQJsseQ+wsuMadcKsXULh8PGJcvn8xmtjc/nMxXi\\n\",\n       \"4+NjU0cSMl2FdOKpc5qxuPydwH+xxD7CywqndNRJGq21sVVrNpuGKDKIF4vFaLValEolIyTrdrtG\\n\",\n       \"IHYVp6mnIs14a/pH4Ne11rUpSeTS+QgvM5yRBjDOWrlcziTL0mIQkVa9Xufg4IBEIkEkEqHVak3o\\n\",\n       \"bZ41nka552NEmL/WWov169L6CK8CJCkWXY0o/A4PDzk4ODDP0pRLbPqz2Sybm5tYlmUmF8RMSfAs\\n\",\n       \"os6bnZ4U8CXgZa31Hzv+aWl9hFcBIrmQmou0FwqFghltEZ9iEYbF43HW19fZ3t424zeDwWCiH/as\\n\",\n       \"tqk3izTvBn4O+J5S6jvj9z7NkvsILzvEekRmt8U0Umzw5d/kFOX3+w1pxMZNCHNycjJhKXvlkUZr\\n\",\n       \"/R+cb3y0lD7CqwAhi+hnJNLIeIvMfcfjcbTWE5EGMHKLUqlkrE7kOP8sNMVuRfgKML2VtNttYz/i\\n\",\n       \"NFxKp9NGIyzmAVprI9ByRqYnicIWDZc0SwBpM5TLZfMshXQ6TbVapdlsmnHeaDRq+lCJRMI890Eq\\n\",\n       \"zSKEdyPNDcB0QzORSFAul80MumxbYkAgNiUSaeRZCzKOc9lwSbMEENKIEUAymaRSqRjS+P1+YyIQ\\n\",\n       
\"CAQ4OTkxA3bhcNicoJ6VIZJLmiWAJMYSLZxu6rFYjPX1dRKJBMlk0kQbeVpeNptlMBiYKOVUF14W\\n\",\n       \"XNIsAeS4LB92tVo1s1n9fp/T01Nu3bqFZVnGfcu2bTKZDLdu3QIetSdOT08vfb0uaZYAzsc6D4dD\\n\",\n       \"arUaBwcH5hE/nU7HJMgbGxtm7CWdTlOv143tSbVaPdeWdpFwSbMEmJZOyFSlPBNKa21cupxd8HQ6\\n\",\n       \"Ta/Xm2iAPgu9jUuaJYFTjC7uEiJ+Fy1xtVo1BgZyBM9kMqZuIyO/Z82XLxIuaZYQkhDLtEKr1TKe\\n\",\n       \"N+KYpbUmGAySTCYnjuAinZBin3jcLBIuaZYQTp2MzF2JfUmlUjFqv2AwaOo2TjetdrtNp9O5NH+b\\n\",\n       \"J26ASqnbSql/U0r9n1LqB0qpXxu//1ml1EOl1HfG1wsLX9kNhnzYkq9MR5pms2kijcyHT7tpybzU\\n\",\n       \"ZRT75tUIL62P8HXA9LBdq9UyvoGJRIJut0s6nTZ5jbhPSOSRaU1R+C0a82qEYUl9hK8j2u02pVKJ\\n\",\n       \"vb09AOOJLM6kcuqSSc1AIGD+/SoijYFDI/yfjHQ2S+kjfB0hpNH60cPfPR6PKfBNG2o7H1x/GaR5\\n\",\n       \"qkP9eGv6B0Ya4TojH+G3AM8D+4x8hF1cEuTYvbe3x6uvvsq9e/coFArmGeESacTCxGkccCWRxqER\\n\",\n       \"/hvRCOsl9xG+bpB6jViWHB0dUSgUWFtbIx6P02q1ODo6olKpPPaYw8uQScylEV52H+HrCKcnTrVa\\n\",\n       \"pVAo4Pf7zfH66OiIYrHI4eGheU6mRKFFYx6N8O8AH1lWH+HrCCdhtNbUajUKhQKdTsc4oosxdrVa\\n\",\n       \"NY+EluLgoqEuS+XlzkItFs7cxPmEGL/fP+Fw7nz21EWds7TWZyZELmlcnIvzSHP1FgQuVg4uaVzM\\n\",\n       \"DJc0LmaGSxoXM8MljYuZ4ZLGxcy4tCO3i+sLN9K4mBkuaVzMjEsljVLqBaXUq0qp18cuoBe9332l\\n\",\n       \"1PfGEtP/nuP3v6yUOlRKfd/xXkop9U2l1A+VUt9QSiWedI+nuN/cUtgnyGvnWuOlyXWdfv+LvAAL\\n\",\n       \"uAfcBXzAS8A7LnjPHSB1gd9/DyMh2fcd730e+O3x608Cn7vg/T4D/Oac68sBz49fR4HXgHfMu8Yn\\n\",\n       \"3G/uNWqtLzXSvAu4p7W+r7XuAV8FPrSA+86tKtJafxsoT739QUa2toy//vQF7wdzrlFrfaC1fmn8\\n\",\n       \"ug44LXhnXuMT7jf3GuFyt6ctYM/x/UMeLXheaOBbSqkXlVIfv+C9BJdhb/sJpdR3lVJfmmW7c2LR\\n\",\n       \"FrxTct0LrfEySXMZZ/l3a63fCXwA+BWl1HsWeXM9iuMXXfeFpbDTFrwXXeOi5bqXSZo8cNvx/W1G\\n\",\n       \"0WZu6LFaUGtdBL7GaAu8KA6VUjkYKRK5oL2t1vpIjwF8cdY1PsmCd541nifXvcgaL5M0LwJvV0rd\\n\",\n       \"VUr5gQ8zspKdC0qpsFLKHr+OAO9nMTJTsbeFBdjbjj9UwUxS2Kew4J1pjU+S6867RuDyTk/jjP0D\\n\",\n       \"jDL2e8CnL3ivtzA6gb0E/GCe+wFfAQpAl1G+9VEgBXwL+CHwDSBxgfv9IqOn1nwP+O74w12f4X4/\\n\",\n       
\"AQzH/43fGV8vzLvGc+73gYusUWvtthFczA63IuxiZrikcTEzXNK4mBkuaVzMDJc0LmaGSxoXM8Ml\\n\",\n       \"jYuZ4ZLGxcz4f041SDwzkyB1AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b97ec50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAE2tJREFUeJzt3XuwXWV5x/HfLwm5kavBBDSHRKi0wsSKVbkoIFU6lFG0\\n\",\n       \"rVVpq9Z27HTUSqk6Ik77VztanY7oMO1opSp4oa1a1Gm9pDWQaDARTEJCUKCaNhFISi6cnFxPwtM/\\n\",\n       \"9k48nJyT8z5Zec/eO34/Mwx7rf3std693rX2ebIu7+OIEAAAAE7chE43AAAAoNeRUAEAADREQgUA\\n\",\n       \"ANAQCRUAAEBDJFQAAAANkVABAAA0NKmTK7fNmA0AAKBnRIRHml81obJ9taSbJU2U9KmI+NvhMR/4\\n\",\n       \"wAeO+dzy5ct1+eWXP23eU089VamVuWV3Q2xNEyY0P2m5cuVKXXrppU+bV+v7HT58OBU/ODhYHLt/\\n\",\n       \"//4qsXv37i2O3bNnT3HsxIkTi2MladKkYw//hx56SOedd94x86dOnVq83NNOO604NrNf7Nq1qzh2\\n\",\n       \"+/btxbHZ+P7+/uLYffv2Fcdm9qHRtvGhQ4dG7Ndp06YVL3v27NnFsbNmzSqOnTFjRnFsrf1Nyv3G\\n\",\n       \"HTp0qDg2c6wODAwcM2/r1q1asGDBMfN3795dvNxsOw4ePFglNvM7m/kNqDlupj1ijpRqQ7VLfrYn\\n\",\n       \"SrpF0tWSzpd0ne3n1VofAABAp9S8h+olkh6JiE0RMSjpDkmvqbg+AACAjqiZUD1b0uYh01va88a0\\n\",\n       \"aNGiKg3C+Ojr6+t0E3CC5s2b1+kmoIGTcckenXH66ad3ugloqObRd8IXO0moehsJVe8ioeptJFS9\\n\",\n       \"K3OPGbpTzZvSfyZp6F/WPrXOUj3N8uXLj75etGgRyRQAAOgKmRvhayZU90p6ru3Fkh6V9AZJ1w0P\\n\",\n       \"Gv40HwAAQDcY/vTf8RKsaglVRByy/U5J31Jr2IRbI+LBWusDAADolKrjUEXENyR9o+Y6AAAAOq2j\\n\",\n       \"I6VL5YOylQ66JeUH/6o1sFit2JoDhmYGyswMeldrudnvl4nP3OCbGYhwypQpxbHTp08vjs1sN0k6\\n\",\n       \"cOBAcWxmsMDMcjODBdYc1C+znTMy+0VmfzsZg7iOJtPmzHIzx17m9z4rs89l9uXM8Zfpv8ygrFLu\\n\",\n       \"acHMoKiZPsn8xtUcxDXT5tLBSO+6665R3+OREAAAgIZIqAAAABoioQIAAGiIhAoAAKAhEioAAICG\\n\",\n       \"SKgAAAAaIqECAABoiIQKAACgIRIqAACAhkioAAAAGiKhAgAAaKjjtfwytZLQkq1fV0umTlKt2lyZ\\n\",\n       
\"GoFSvfqDterMZeq7ZepnSbl6YpnYTL2tTH9ktkXNGo+16jZm9qFMPTpJ2r9/f3Fspk9K659JuTY/\\n\",\n       \"+uijxbGZ7ybV+w3I7EOZ+nwzZswojpWkmTNnFsfOmjWrOLZWXcpaNW+l3D6XrYU6kqpnqGz32V5m\\n\",\n       \"+wHbG2y/q+b6AAAAOqH2GapBSTdExFrbMyTdZ3tpRDxYeb0AAADjpuoZqoh4PCLWtl8PSHpQ0rNq\\n\",\n       \"rhMAAGC8jdtN6bYXS7pQ0qrxWicAAMB4GJeEqn2570uSrm+fqQIAADhlVH/Kz/Zpkr4s6XMRcefw\\n\",\n       \"91esWHH09dlnn61FixbVbhIAAMCYdu7cqV27dhXFVk2o3HpW/lZJGyPi5pFiLrvssppNAAAAOCFz\\n\",\n       \"587V3Llzj05v2rRp1Njal/xeKukPJF1pe037v6srrxMAAGBcVT1DFRHfFaOxAwCAUxzJDgAAQEMd\\n\",\n       \"Lz0zefLkorhapUukukPf11hupsRBtgRHpmRAphxJZrkZmTZIuf1o0qTywyMTW2u7ZUsnZMpq1IrN\\n\",\n       \"7PeZ5e7bt684VsqVUMnsQ5kyPLNnzy6OnTdvXnGslCt1krFnz57i2P7+/uLYHTt2VImVcqVqMvvc\\n\",\n       \"nDlzimMzD18tXLiwOFbKlarJfL/SG7Ml6cknnyyO3b17d3FsZn+Tcn1d+rdy2bJlo77HGSoAAICG\\n\",\n       \"SKgAAAAaIqECAABoiIQKAACgIRIqAACAhkioAAAAGiKhAgAAaIiECgAAoCESKgAAgIZIqAAAABrq\\n\",\n       \"eOmZ6dOnF8XVLLdSq6xNrXIymeVmv1um1EkmNlOCo1aslCsRk9l2mdhM6ZlMaYiDBw8Wx2aXnenr\\n\",\n       \"008/vTh25syZxbGlvxVSviRRaQksqV6Zmkx5mMw2lnJ9feDAgeLYTGmPbDmgUmeeeWYqPlO2J1PG\\n\",\n       \"JSNTxuWxxx5LLTtTniWzL2eOv3PPPbc4ttZvgCRNmTIlFV/iE5/4xKjvjfrXxfbvSApJI/0iRER8\\n\",\n       \"pWTltidKulfSloh4dclnAAAAesnx/rn+arUSqtEUJVSSrpe0UVJ5GgoAANBDRk2oIuIPmy7c9kJJ\\n\",\n       \"10j6G0l/0XR5AAAA3WjMGyNsn2n7VtvfbE+fb/uPC5f/UUnvlZS7qQkAAKCHlNxp+hlJ35b0rPb0\\n\",\n       \"w5JuGOtDtl8laVtErNHI92EBAACcEkoeeTojIv7Z9o2SFBGDtg8VfO5SSdfavkbSVEmzbN8WEW8e\\n\",\n       \"GrR06dKjr88555zU0wEAAAC1rFq1SqtXry6KLUmoBmwffc7U9sWSnhzrQxFxk6Sb2p+5QtJ7hidT\\n\",\n       \"knTVVVcVNRQAAGA8XXTRRbrooouOTt9yyy2jxpYkVO+W9HVJ59heKemZkl53Au0qHzwJAACgh4yZ\\n\",\n       \"UEXEfbYvl/TLat0L9eOIKB8NrLWMuyXdfWJNBAAA6G5jJlS2p0l6u6SXqXWWaYXtf4iI8iFyAQAA\\n\",\n       \"TmEll/xuk9Qv6eNqnaH6PUm3S/rdiu0CAADoGSUJ1QURcf6Q6e/Y3niyGlBad6hWrbvayy6VqfmV\\n\",\n       \"qUeXrWmWkWnzoUMlD4a2ZLZxpu5YVq2+zvTJrFmzimMzNbGkXF2sTK27TF/v3r27OHb79u3FsXv3\\n\",\n       
\"7i2OlXL7cqY+WKbm3uzZs4tjd+zYURwr5WoxZn4PM/typu5mpj8ef/zx4lhJ2rx5cyq+VKZPMvt9\\n\",\n       \"pl6ilDv+MvtF5vcic4xkYjO/QzXbMZqSrflD25ccmWg/5Xdf4zUDAACcIo5XHHn9kJjv2d6s1j1U\\n\",\n       \"Z0v68Ti0DQAAoCeMVRwZAAAAYzheceRNQ6dtz1drxHMAAAAMUVIc+VrbD0v6qVpjSW2S9I3K7QIA\\n\",\n       \"AOgZJTel/7WkSyQ9FBHPkfQKSauqtgoAAKCHlCRUgxHxhKQJtidGxDJJL6rcLgAAgJ5RMqDRTtsz\\n\",\n       \"Ja2Q9Hnb2yQN1G0WAABA7yg5Q/VaSXsl3SDpm5IeEU8AAgAAHFVSHPnI2ajDkj5TtTUAAAA96HgD\\n\",\n       \"ew6oNZDnSCIiymtiHEdpmYPMEPndUm4lE1urzEnNMjy1SvbU2hbZZWf2uUxsxsBA+dX1TMkJKbd/\\n\",\n       \"ZsqGTJs2rcpyM2UnMiVfpFwpkK1btxbHlpbWyrYhW1qn1nGd6ZMZM2YUx2b2oWw5kkzprsxxnSmD\\n\",\n       \"1d/fXxybLTO0a9eu4tjBwcHi2My2yBzXmZI2mVgpV05m6tTmo0Idbxyq8r1/FLbnSPqUpAvUSs7+\\n\",\n       \"KCK+33S5AAAA3aQ8VT8xH5P0HxHxOtuTJOX+2QgAANADqiVUtmdLuiwi3iJJEXFI0pO11gcAANAp\\n\",\n       \"dW78aHmOpP+z/WnbP7T9j7ZzF0ABAAB6QM2EapKkF0r6+4h4oaQ9km6suD4AAICOqHkP1RZJWyLi\\n\",\n       \"B+3pL2mEhOruu+8++nrRokVavHhxxSYBAACU2bx5s7Zs2VIUWy2hiojHbW+2fV5EPCTplZIeGB53\\n\",\n       \"xRVX1GoCAADACevr61NfX9/R6VWrRi9lXPspvz9Tq1zNZEn/LemtldcHAAAw7qomVBGxTtKLa64D\\n\",\n       \"AACg02relA4AAPALofYlv47IlE6Q6pWTqdWGbliuVK9ETKZ0ULZcQKYkQmbbHT58uDg2U2KkZjmS\\n\",\n       \"gwcPFsdmjqnMNp4zZ05x7MyZM4tjsyUqMmVRMmVtMuVIMvtQ9tjLLHvfvn3FsZl9LrPcWttNypWe\\n\",\n       \"ycTOmlVejW3+/PlV2iDlSsRkylVlt3M3qFUSbNT1jevaAAAATkEkVAAAAA2RUAEAADREQgUAANAQ\\n\",\n       \"CRUAAEBDJFQAAAANkVABAAA0REIFAADQEAkVAABAQyRUAAAADXW89MzAwEBRXM0SKplSJ5mh7DOx\\n\",\n       \"tdqQLVFR6/tlZMpOlO4/R2S2R61SC5kyJwsWLCiOnTx5cqodmXIy+/fvL46tVY5k27ZtxbGZkjZS\\n\",\n       \"br/IlMA566yzqrQhc4xIuRIjGbVK2gwODhbHZkodZZedkSlJNGXKlOLYTEmbbPzcuXOrxGb25czv\\n\",\n       \"RaYUl5Tb57J/S0ZS9QyV7ffbfsD2ettfsF2+FwEAAPSIagmV7cWS3ibphRGxRNJESW+stT4AAIBO\\n\",\n       \"qXnJr1/SoKTptg9Lmi7pZxXXBwAA0BHVzlBFxA5JfyfpfyU9KmlXRPxnrfUBAAB0Ss1LfudK+nNJ\\n\",\n       \"iyU9S9IM279fa30AAACdUvOS34skrYyI7ZJk+yuSLpX0+aFB99xzz9HXCxcuVF9fX8UmAQAAlFm/\\n\",\n       
\"fr02bNhQFFszofqRpL+0PU3SfkmvlLR6eNAll1xSsQkAAAAnZsmSJVqyZMnR6TvuuGPU2Jr3UK2T\\n\",\n       \"dJukeyXd3579yVrrAwAA6JSqA3tGxIclfbjmOgAAADqN0jMAAAANkVABAAA01PFafvPnzy+Ky9ak\\n\",\n       \"y8jUCeyGWne1livltkWm/mCtNmTq0Um5bZepjTdt2rTi2Mx2y9T8mjp1anGslKs9lumTTN24zHFd\\n\",\n       \"s/7ZGWecURybqcWYqT32xBNPFMdu3769OFbK1a+bNKn8z0JmO2fqK2aW29/fXxwr1atrmPm9yHy/\\n\",\n       \"efPmpdqRqW1Yq+5mZrmZ9mbqaErS7Nmzi2Ozv58j4QwVAABAQyRUAAAADZFQAQAANERCBQAA0BAJ\\n\",\n       \"FQAAQEMkVAAAAA11ZUL1k5/8pNNNQAMPP/xwp5uAE7R27dpONwENrFu3rtNNwAlauXJlp5uAhkio\\n\",\n       \"cNI98sgjnW4CThB/kHsb/de7SKh6X1cmVAAAAL2EhAoAAKAh1yzpMubK7c6tHAAAICkiRqzF1dGE\\n\",\n       \"CgAA4FTAJT8AAICGSKgAAAAa6rqEyvbVtn9k+2Hb7+t0ezA62/9ke6vt9UPmPcP2UtsP2f627Tmd\\n\",\n       \"bCNGZ7vP9jLbD9jeYPtd7fn0YZezPdX2KttrbW+0/cH2fPquh9ieaHuN7a+3p+m/HtZVCZXtiZJu\\n\",\n       \"kXS1pPMlXWf7eZ1tFY7j02r11VA3SloaEedJ+q/2NLrToKQbIuICSRdLekf7eKMPu1xE7Jd0ZUS8\\n\",\n       \"QNLzJV1p+2Wi73rN9ZI2SjpyMzP918O6KqGS9BJJj0TEpogYlHSHpNd0uE0YRUSskLRz2OxrJX22\\n\",\n       \"/fqzkl47ro1CsYh4PCLWtl8PSHpQ0rNFH/aEiNjbfjlZ0kS1jkX6rkfYXijpGkmfknTkqTH6r4d1\\n\",\n       \"W0L1bEmbh0xvac9D71gQEVvbr7dKWtDJxqCM7cWSLpS0SvRhT7A9wfZatfpoWUQ8IPqul3xU0nsl\\n\",\n       \"PTVkHv3Xw7otoWIMh1NItMbkoE+7nO0Zkr4s6fqI2D30Pfqwe0XEU+1LfgslXW77ymHv03ddyvar\\n\",\n       \"JG2LiDX6+dmpp6H/ek+3JVQ/k9Q3ZLpPrbNU6B1bbZ8pSbbPkrStw+3Bcdg+Ta1k6vaIuLM9mz7s\\n\",\n       \"IRHxpKR/l/Rrou96xaWSrrX9U0lflPTrtm8X/dfTui2hulfSc20vtj1Z0hskfa3DbULO1yS9pf36\\n\",\n       \"LZLuPE4sOsi2Jd0qaWNE3DzkLfqwy9k+48gTYLanSbpK0hrRdz0hIm6KiL6IeI6kN0r6TkS8SfRf\\n\",\n       \"T+u6kdJt/6akm9W6yfLWiPhgh5uEUdj+oqQrJJ2h1vX+v5L0VUn/IulsSZskvT4idnWqjRhd+6mw\\n\",\n       \"5ZLu188vLbxf0mrRh13N9hK1blqe0P7v9oj4iO1niL7rKbavkPTuiLiW/uttXZdQAQAA9Jpuu+QH\\n\",\n       \"AADQc0ioAAAAGiKhAgAAaIiECgAAoCESKgAAgIZIqAAAABoioQLQcba/1/7/ItvXneRl3zTSugDg\\n\",\n       \"ZGIcKgBdw/bL1Rrk8NWJz0yKiEPHeX93RMw8Ge0DgNFwhgpAx9keaL/8kKTLbK+xfb3tCbY/Ynu1\\n\",\n       
\"7XW2/6Qd/3LbK2x/VdKG9rw7bd9re4Ptt7XnfUjStPbybh+6Lrd8xPZ62/fbfv2QZd9l+19tP2j7\\n\",\n       \"c+O7NQD0okmdbgAA6Oelb94n6T1HzlC1E6hdEfES21Mkfdf2t9uxF0q6ICL+pz391ojY2a5tt9r2\\n\",\n       \"lyLiRtvviIgLR1jXb0v6VUnPl/RMST+wvbz93gsknS/pMUnfs/3SiOBSIYBRcYYKQDfxsOnfkPRm\\n\",\n       \"22skfV/SMyT9Uvu91UOSKUm63vZaSfdI6pP03DHW9TJJX4iWbZLulvRitRKu1RHxaLTuiVgraXGD\\n\",\n       \"7wTgFwBnqAB0u3dGxNKhM9r3Wu0ZNv0KSRdHxH7byyRNHWO5oWMTuCNnrw4MmXdY/FYCGANnqAB0\\n\",\n       \"k92Sht5A/i1Jb7c9SZJsn2d7+gifmyVpZzuZ+hVJFw95b/DI54dZIekN7fu0ninpckmrdWySBQBj\\n\",\n       \"4l9dALrBkTND6yQdbl+6+7Skj6t1ue2Hti1pm6TfascPfUT5m5L+1PZGST9W67LfEZ+UdL/t+yLi\\n\",\n       \"TUc+FxH/ZvuS9jpD0nsjYpvt5w1btkaYBoCnYdgEAACAhrjkBwAA0BAJFQAAQEMkVAAAAA2RUAEA\\n\",\n       \"ADREQgUAANAQCRUAAEBDJFQAAAANkVABAAA09P8W4xDCBDf4RgAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b91c3d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"for i in range(8):\\n\",\n    \"    figure(figsize=(2, 2))\\n\",\n    \"    imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\\n\",\n    \"    figure(figsize=(10, 2))\\n\",\n    \"    imshow(output[:50, i].T, interpolation='nearest', cmap='gray')\\n\",\n    \"    xlabel('iteration')\\n\",\n    \"    ylabel('label')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted \\\"9\\\" that's (understandably) most confused with \\\"4\\\".\\n\",\n    \"\\n\",\n    \"Note that these are the \\\"raw\\\" output scores rather than the softmax-computed probability vectors. 
The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 18,\n   \"metadata\": {\n    \"collapsed\": false,\n    \"scrolled\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAFZtJREFUeJztnVtsY8d5x//f4Z2H94skaiXvemUbsAsD9otbwA2ahyCw\\n\",\n       \"USBpXxoYKFD0EvShN7QPddyHJo9pgAZF+1CgiB30hqRFCxfpQ1vbRQukD724sGOnaydZY8XVihJF\\n\",\n       \"iXfykDwipw/kNzuHklbiRRRJzQ8Y8OgsdXYk/vXNN9988w0JIaDRjIJx1R3QLB5aNJqR0aLRjIwW\\n\",\n       \"jWZktGg0I6NFoxmZsUVDRC8R0cdE9CMienWandLMNzROnIaIXAB+AOAzAHYB/A+AV4QQH023e5p5\\n\",\n       \"ZFxL8wKAu0KIbSGEDeDbAD4/vW5p5hn3mN93A8CO8vUDAD+uvoGIdKh5wRFC0Gn3x7U0WhDXmHFF\\n\",\n       \"swtgU/l6E31ro7kGjCuadwE8SUS3iMgL4AsAvjO9bmnmmbF8GiHEMRH9OoB/AeAC8LqeOV0fxppy\\n\",\n       \"X+jB2hFeeKbtCGuuMVo0mpHRotGMjBaNZmS0aDQjo0WjGRktGs3IaNFoRkaLRjMyWjSakdGi0YzM\\n\",\n       \"uElYAAAi2gZQBdAFYAshXphGpzTzzUSiQT8Z69NCiOI0OqNZDKYxPJ26EqpZXiYVjQDwDhG9S0Rf\\n\",\n       \"nEaHNPPPpMPTi0KIPSJKA3ibiD4WQnx3Gh3TzC8TWRohxN7gtQDgTfS3tmiWnEl2WAaJKDy4NgF8\\n\",\n       \"FsCH0+qYZn6ZZHhaBfAmEfFz/loI8dZUeqWZaxYyR9gwDBARiEheq/eY0342IQSEEOj1evJafR9f\\n\",\n       \"D79eR87KEZ7UEZ45hmHA5/PB6/XC6/XC5/PB7/fD7/fL+71ez9FUAfR6PbTbbdk6nY58D7+/2+2i\\n\",\n       \"2+3Ka42ThRSN1+tFKBSSLRKJyBYIBNDtdnF8fOz48NmidLtd1Go11Ot11Go1NBqNE++3bVs2LZqT\\n\",\n       \"LJxoiAg+nw+hUAjxeBzxeBzpdBqpVAqpVAqRSASdTkd+6MfHxw6LY9s2isUiisUifD4f3G63QyS2\\n\",\n       \"bYOIpMA0J1k40RiGAb/fj0gkgmQyidXVVWQyGaytrWFtbQ2xWAydTkc227YdQ5Vt2wiHwwgGg3I4\\n\",\n       \"U9/f6XRgWZZsbvfV/IrUIVX1wYaH26vwuRZONC6XC6ZpIplMYmNjAxsbG9LKsKVhC8Ov6i/7+PgY\\n\",\n       \"wWAQ8XgcKysrqFarDstk2zaazSYsy0Kz2USr1Zr5zzjsk7VaLViWJV/55+I2a+EspGhCoRBSqRQ2\\n\",\n       
\"Nzdx69YtxGIxxGIxRKNRmKZ5wpFVZ0m9Xg/xeBzNZhONRgOWZTkExqLhZlnWzH9G9rG41Wo1lMtl\\n\",\n       \"2ZrNJtrttnyvFs05GIbhsDSPP/44QqEQTNNEKBSC3+8/MZ0eNucsDvUvlj8oVTQsqlnDfWMLeHh4\\n\",\n       \"iHw+D4/Hg16vJ8MK3W4XnU5n5v1bSNH4fD6Ew2HpBPOU2+/3w+v1noi78C/5rFe2Sr1eD8fHxw4r\\n\",\n       \"02w2Hc9Sv28aqP1jbNuW4YB2u41IJCIF0263Hf7ZNPtyURZONDxlzufzyGazMAwDgUBANq/XK4cn\\n\",\n       \"FoPL5XI0t9sNt9strw3DgGEYcLlcMlDo9XpBRHC73Sd8DH4/t1E4zbFVA5SGYZwYLrvdLlqtFlqt\\n\",\n       \"lhyWhBBot9taNBeBRVMoFLC9vQ3bthEMBmXj2RCb9263C6/XC4/HI199Pp+cOfG1eo+I4PV64Xa7\\n\",\n       \"4ff7Hf5Ft9t1CO6is6vhAKMq7GFBs8VT36OK5vj4GO12G7VabWTRToOFFE29Xkc+n4dhGGg0GggG\\n\",\n       \"gzBNE6Zpwuv1yl9wq9WCbduO4cvv9ztEFgwGZZBQCOH48PhaFWGn05HRaBbheZzmU6nN7XbD4/HI\\n\",\n       \"V36vao3UCHar1UK9XkexWJxP0RDRGwB+GsCBEOLZwb0EgL8BcBPANoCfE0KUL7GfEhbN4eEhjo+P\\n\",\n       \"Ua1WpWBYNOrsx7ZtBAKBU0XCjWM77GSy1fF4PPB4PNJJBh4KgIezi4pmuPH/xT4NWzW/339iDc22\\n\",\n       \"bTQaDdTrdTQaDZTLZWlV53V4+iaAPwHwF8q9LwF4WwjxtUHh6S8N2qXDUV3LsmAYBrrdLprNJur1\\n\",\n       \"Ovx+P9xuN9rttsOU8/oUi0H1gQKBAMLhMEKhkHzlD499JI6PcKxk+PvPY3g2xw4uW65oNCrjTMlk\\n\",\n       \"Eh6Px7Egy8MVW5l2uy19nbkM7gkhvktEt4Zufw7ATw2u/xzAv2NGoun1euh0Omg2m/Ja9VdcLpdj\\n\",\n       \"Ot3tdh2mn9+r+jfBYFBao+FXv9/vsFzNZhOmaTos13l/7cPRXFXU7XYbmUwGt27dgmEYCIfDICK4\\n\",\n       \"XC4YhiG/7yzRXAXj+jSrQoj84DqPfm7NTGBLw3+xPPvhXzJbH3W9SZ3p8PtU53PYGWZRmKaJQCDg\\n\",\n       \"GBoajYZjaDNN80KiUfujRnhbrRa2trZgGAZCoRAymYx0rvm57DgPi0ZdUpglEzvCQggxy/p6LBrb\\n\",\n       \"tqfyPPYnuHk8HhkoZPHU63XHqng4HHYMaeehCkZdFmDhCCEQj8dx48YNdDod+Hw+EJGcjrNg2u22\\n\",\n       \"jB91Op0rWUIAxhdNnojWhBD7RJQBcDDNTs0S1TFlc99ut6Uvoa5F8QfVbrfhcrkuvBI+PDzxB81T\\n\",\n       \"fHUBlf0Znmp3Oh3UajVUKhUUi0UUCgWUSiXU63V0Op2FEs13APwCgD8YvP7D1Hp0BfR6PQAPBdRq\\n\",\n       \"taRgWq2WdFjb7Ta63S7a7bacOnOw7VEMz5zUBDKObpumCb/fL0WjLm3w2tPR0REKhQKKxSLq9bqM\\n\",\n       \"Ds+ai0y5v4W+05sioh0Avw/gqwD+loh+GYMp92V28rIZjomw49lqteByuRxBNp6xqBZnlP8HAEKh\\n\",\n       
\"kHTCI5GIw9K43W45NLGjzKJRLY1lWVK8s+Yis6dXzvinz0y5L1fGcF4Kx2TOYtJZi8fjQTQaRSAQ\\n\",\n       \"QCwWQyQScVgaFm273Ua9Xke1WpWiOTw8RKVSkYHBubQ0mskZToAPh8NIpVLY2NjA5uYmNjc3kUql\\n\",\n       \"YJomDMNAq9VCtVrF0dERjo6OkMvlcHR0hFqtJpdGeIZ4FWjRzAB1uu9yuRyieeKJJ5DJZJBOpxEK\\n\",\n       \"hWAYBjqdDqrVKg4ODrC7u4tcLofDw0MpGrYwV7VTQotmBnCwjqf1nETGokkkEjKBTBVNoVDAzs6O\\n\",\n       \"tDQ8Y+Kptk73XGJYNByRVi3N1taWdIy9Xq8UDa/k7+zsYG9vzyGaq05416KZARyL4SjyysoK4vG4\\n\",\n       \"XGDlPB52gDkJSw0AztN2Gi2aGcD7tJLJJJLJJNLp9Kmi4ak8x4RarZZMbmcLMw87PrVoZgAH8FKp\\n\",\n       \"FDKZDFZWVhCLxRAKhaRo1BgR5+6oa1RXuao9jC7UOAN4eEomk1hfXz8xPPECpbqafZalmQe0pbkE\\n\",\n       \"htNBE4kE0uk01tfXsbm5idXVVcRiMZlw1Ww2UalUUK1WUalUcPfuXezs7KBQKKBer0vRXNUC5Ymf\\n\",\n       \"76o7sIyo6RZ+vx+JRAIrKyvIZDLY3NxEMpmUEWEigmVZODw8xN7eHnK5HO7fv4+dnR0Zm+HF0nkZ\\n\",\n       \"nrRoLgEWDaegDosmHA4jEAhIS8OiyWazuHv3LnK5HPb39x2W5iojwMOMmyP8FQC/AqAweNtrQoh/\\n\",\n       \"vqxOLhqc72uaJqLRqBTN+vo6HnvsMRmP4ZROVTR37txBoVCQuylrtdq5a2Gz5iKO8DcBvDR0TwD4\\n\",\n       \"uhDi+UHTglFg0fCGvmg0KlexOU+n3W7LJPFyuYxKpSJbvV6X24XnYTgaZtwcYUDXDz4Tj8eDQCAg\\n\",\n       \"K1vwKjZbGDXtodVqnRAO58pwWuu8McmU+zeI6HtE9DoRxabWoyXA7XbLXQ6JRAKRSERuOeH0TU57\\n\",\n       \"GBZMtVpFo9FAq9WaW0szrmj+FMDjAJ4DsAfgD6fWoyWARROJRORi5GmiUYcnFs4iiGas2ZMQQuYE\\n\",\n       \"E9E3APzj1Hq0gAwXFPB6vQgGg9Kn4dkSR35brRYqlQry+Tzy+Tz29vZQKpXQbDYdaQ/zKBhgTEsz\\n\",\n       \"SCZnfha6frBjcxuLhi1NJBKRG/lYNOVyWRYx2NvbQ7FYlHu55lkwwHg5wl8G8Gkieg79WdQ9AL96\\n\",\n       \"qb2cc4bL0w6Lhi3NsGj29/cdouGikfNejnbcHOE3LqEvCw0Lx+VynTk8qaLh4SmbzeLo6EhWuLrK\\n\",\n       \"jLyLoiPCU4CXC3hv9+rqKpLJpBSMz+eT23G73a7D+eUAHtfSm3fBAFo0U8Hn8yEajSIajSIWi8mc\\n\",\n       \"X05/APor2JZlodvtOgJ5alxmXmdLw2jRTAHev7SysiJL1HKiVTgcdmyntSzrxBSbZ03ztlxwFlo0\\n\",\n       \"U4BFk06nsbm56RANF0vi2Mtpywbq9lptaZYUtWCA2+1GKpXC2toaNjY2cPPmTayuriIajUpfhh3f\\n\",\n       \"g4MD5PN57O/vy12S87R6fVG0aMbA4/FIx9fv98u0hxs3buCxxx5DPB5HJBKRh3s0m01HXGZ/fx/l\\n\",\n       
\"chmWZS2EZRlGi2YMPB6PnFKHw2Gk02mHpeFKWlx6rdlsolQqYX9/H/fv33dYGi2aawAROVaxuVx+\\n\",\n       \"JpORogEe7g8fFk02m0WxWESlUtGiWWY4cMdRX66Yzgd5rK+vI5VKIRwOw+v1yqJLnP5wVlxmUWZL\\n\",\n       \"w2jRXAAWy/BebD6bYWNjA4lEQtbfOz4+hmVZsuxaqVRyzJjmfRX7PLRoLoAqGN5Wm06n5bZarsoZ\\n\",\n       \"DAZl6oNlWbJEiCoa1cospaUhok30S8GuoL84+WdCiD++yjrCVwFbGq7JFwqFpGiefPJJWZuPdxdw\\n\",\n       \"QaRarSYPJFOFw/UC5301+yzOS42wAfy2EOLHAPwEgF8joqfxsI7wUwD+FTMqB3sVGIaBYDAoT315\\n\",\n       \"4okncPPmTWQyGXm+lN/vl7kynJFXLBaRz+exs7ODfD4v82WGK48uIo+0NEKIfQD7g+s6EX0E4Aau\\n\",\n       \"sI7wrFATq8LhMNbW1qTje/v2bayvryMej8ttKABknbxKpYLDw0Pkcjlsb29jb28P5XJZVvJcdC7s\\n\",\n       \"0wySy58H8F+4wjrCs0AVDNf3XVtbw9bWFra2tuTRh4lEAn6/33FgarfblbVldnd3sb29jcPDQxSL\\n\",\n       \"xYWdYg9zIdEQUQjA3wP4LSFEbejs65nWEZ41bGlWV1extbWFZ599Vq5o8y5J3szGvky1WnVYmlqt\\n\",\n       \"Jhcsr4VoiMiDvmD+UgjBpV+Xpo7waagVzrkMPgfy0um0LG/P+5hs25b7sCuVCnK5HA4ODhxBPE59\\n\",\n       \"WAYe6QhT36S8DuCOEOKPlH/iOsLAEtQRHsYwDFmylY/64SrmoVBICobLwVqWhVKphFwuh08++QQP\\n\",\n       \"HjzAwcGBXF+66rMMps15luZFAD8P4AMiem9w7zUsWR1hFXV6zSe2qKIJh8OOk+mAvmiKxSJyuRzu\\n\",\n       \"3buH3d1dHBwcSCvDDvIiz5hUzps9/QfOtkZLU0d4GA7ieb1eh2hYOOqyAnDS0hQKBbkf27KspREL\\n\",\n       \"oyPCgGMzPi8TxONxxGIxJBIJrK+vI5FIwDRNuN3uE4e/7+/vy8YxmVqtJh3kZRIMoEUDwLlM4PV6\\n\",\n       \"ZcUqzpG5ffs2VldXYZomgL5lUbfRZrNZ7O7uIp/PyyqcfKrdMnLtRTO8RBAIBJBIJHDjxg0Zl+Hc\\n\",\n       \"X9M0IYSQwxHvkLx//76cMR0dHcnV7UXZXTAq1140AKRo2IdJJBLY2NjAU089hWeeecZxRibw0PHd\\n\",\n       \"3d1FNpvF/fv3HZaGh6RFS+O8KNdeNLwjkkURi8Uc50mmUqkTJ+PycYjlclmeisK7CjhJfBktDKNF\\n\",\n       \"Mzgdl8uCpNNppFIpuWeJj9PhYwwByMgvlwrhEmc8HC2zYAAtGgAn6/yyaEzTlEE8jhADD0XTaDTk\\n\",\n       \"ZjdOqroOXHvRsKXhqlVra2uO3ZF8niTHZLiq+GmWRotmiVFPzOUttYlEAqurqzLfV82TUVETxvkc\\n\",\n       \"g2WL+J7HtRMNn47Lzq1aspWTxJPJpCxBrznJtRQNb3ZTa8ik02kZzOOFSZ/Pd9XdnUvOW+XeJKJ/\\n\",\n       \"I6L/I6LvE9FvDu5/hYgeENF7gzZcMnau4ZKtoVAI0WhUnoxy2nYUzUnOszScI/z+IBHrf4nobTys\\n\",\n       
\"I/z1S+/hlOHhSRVNJBKRp9aGQiGHzwM4T9NlX0Y9R/I6TLNVxs0RBha0jrA6PLFoQqEQAoGAPEZH\\n\",\n       \"nV4DkALhbSfqARfqscvXRTgXLtSo5Aj/5+DWwtYR5pKtLBr1eGMWjTrNVs9g4qSqTqcjE6sWfXfB\\n\",\n       \"qFxINIOh6e/QzxGuY4HrCKuWxjRNRCKRUy0Ni4aHJj4nWxXNdZtqM6PkCP8V5wgveh1h9SBSn88H\\n\",\n       \"j8cDt9stxcIi6PV6cneB2tRzsnk1+6oOVr8KzttheWqOMBFlhBB7gy+Xqo4wO7sshE6n4yhGxBvg\\n\",\n       \"+DymZrO5dDnA5zFOjvDvAXhlmesId7tdeSRgs9lEoVBANpvFvXv3cO/ePRwdHcmm1svTlgaPzBH+\\n\",\n       \"p8vpznzAorEsC/V6HQcHB8hms/joo49w584deRgpny953abd1y4iDDhTG0qlkoz+ulwu2LaNZrMp\\n\",\n       \"W61WQzabxc7ODnZ3d7G3t+eYfl+XRUqVaycarudbKpXkRrdGo4HDw0Ps7u4iHo87zmKyLAsPHjzA\\n\",\n       \"gwcPUC6Xr+2MSYUu6wef1626XMlKTd9cW1uT602maTpWrzudDkqlEkqlEorFIsrlsiMus8x+jBDi\\n\",\n       \"1ADutRMNT7d5w5tt244NcG632yEINbDHDXhY73eZrY0WjWZkzhLNJMcRaq4pWjSakbm04UmzvGhL\\n\",\n       \"oxkZLRrNyFyqaIjoJSL6mIh+RESvTuF520T0wSDF9L/H+P43iChPRB8q9xJE9DYR/ZCI3holN+iM\\n\",\n       \"542dCvuI9Nqx+nhp6bq8ZjLtBsAF4C6AWwA8AN4H8PSEz7wHIDHB938K/USyD5V7XwPwu4PrVwF8\\n\",\n       \"dcLnfRnA74zZvzUAzw2uQwB+AODpcfv4iOeN3UchxKVamhcA3BVCbAshbADfBvD5KTx37DRTIcR3\\n\",\n       \"AZSGbn8O/bK2GLz+zITPA8bsoxBiXwjx/uC6DkAtwTtyHx/xvLH7CFzu8HQDwI7y9QM87PC4CADv\\n\",\n       \"ENG7RPTFCZ/FXEZ524lTYaddgnea6bqXKZrLmMu/KIR4HsDL6FdP/9Q0Hy76dnzSfk+cCjtcgnfS\\n\",\n       \"Pk47XfcyRbMLYFP5ehN9azM2YpAtKIQoAHgT/SFwUvJEtAb0MxIxYXlbIcSBGADgG6P28VEleMfp\\n\",\n       \"41npupP08TJF8y6AJ4noFhF5AXwB/VKyY0FEQSIKD65NAJ/FdNJMp1redvChMiOlwk67BO+j0nXH\\n\",\n       \"7SOAy5s9DTz2l9H32O8CeG3CZz2O/gzsfQDfH+d5AL4FIAegg76/9YsAEgDeAfBDAG8BiE3wvF9C\\n\",\n       \"/9SaDwB8b/Dhro7wvJ8E0Bv8jO8N2kvj9vGM5708SR+FEHoZQTM6OiKsGRktGs3IaNFoRkaLRjMy\\n\",\n       \"WjSakdGi0YyMFo1mZLRoNCPz/yU19i71FpCwAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b65d950>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       
\"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAD0ZJREFUeJzt3XvQXVdZx/HvL2+ubUoh0oJA2lAFJR2QIjDlTgGdykDx\\n\",\n       \"gkBFQHRwHEAqAgN0Rv/SAYdxqAyDM0jlUm4qYIFRLlUqFJCGQtNbys2h2oI0SkNoapO8SR7/OCfN\\n\",\n       \"2/Am2Ts76z3npN/PzDs5e5/n7PXkrJPkyVr7rJWqQpIkSUdv2aQTkCRJmnUWVJIkSQNZUEmSJA1k\\n\",\n       \"QSVJkjSQBZUkSdJAFlSSJEkDLZ9k40lcs0GSJM2Mqspi55sWVEnOBS4C5oB3VdVfHByzfv36n3jd\\n\",\n       \"9u3bOfnkk4+63b5ra+3du7dJbJ889u3b1zm2j2TRfj9m8Yu54447OPHEE4/69X3e4927d/e69vz8\\n\",\n       \"fOfYPXv2dI7t09fTvPZbVQ3+DEzz70+SWmk25ZdkDng7cC6wETg/ycNatSdJkjQpLe+heizwnaq6\\n\",\n       \"qarmgQ8Dz2nYniRJ0kS0LKgeCNy84PiW8bkjWrVqVZOEtDRWrFgx6RQkSVpSLQuqo76RYvXq1ccy\\n\",\n       \"Dy2xlStXTjoFHaVjcQ+dJN0Ttbwp/XvAwjvO1zMapbqb7du33/V41apVFlOSJGnmtCyorgIekmQD\\n\",\n       \"8H3g+cD5BwcN+TafJEnSNGhWUFXVniSvBD7DaNmEi6vqxlbtSZIkTUrTdaiq6lPAp1q2IUmSNGkT\\n\",\n       \"XSkd+i2e2FXfhQX7LKrZKrbVIqB9Fwzts6jmNLxvfX9/rRaddDFLSbpncy8/SZKkgSyoJEmSBrKg\\n\",\n       \"kiRJGsiCSpIkaSALKkmSpIEsqCRJkgayoJIkSRrIgkqSJGkgCypJkqSBLKgkSZIGsqCSJEkaaOJ7\\n\",\n       \"+a1atWrSKfTSZ8+2VrF99tvru8dcn2v3ie2zZ2Of2Pn5+c6x0G6fwCS98mih5X6C7lUoSYfXdIQq\\n\",\n       \"yfoklye5Icn1SV7Vsj1JkqRJaD1CNQ+8uqo2J1kLfC3JZVV1Y+N2JUmSlkzTEaqq+kFVbR4/3gHc\\n\",\n       \"CDygZZuSJElLbcluSk+yATgLuHKp2pQkSVoKS1JQjaf7PgJcMB6pkiRJOm40/5ZfkhXAR4H3V9Wl\\n\",\n       \"Bz+/bdu2ux6vXr2aNWvWtE5JkiTpmGpaUGX0XfKLgS1VddFiMfe5z31apiBJktRc6ym/JwC/DZyT\\n\",\n       \"5Orxz7mN25QkSVpSTUeoquqLuBq7JEk6zlnsSJIkDTTxrWe6bjPSZ2uPltuATMMWHC3fiz7xy5Z1\\n\",\n       \"r8dXrFjRObbPljZ9t57pE99n65k+5ubmOsf2eY/75jsN2wy1eo8laak5QiVJkjSQBZUkSdJAFlSS\\n\",\n       \"JEkDWVBJkiQNZEElSZI0kAWVJEnSQBZUkiRJA1lQSZIkDWRBJUmSNJAFlSRJ0kAT33pm5cqVneL6\\n\",\n       \"bH3RdzuLVtvJtNraYxa362i1HVDLrXX6vM99YvtszdLns9nycz8NWy5J0jQ7ZEGV5DeAAhb7F6iq\\n\",\n       \"6mNdGkgyB1wF3FJVzz6qLCVJkqbY4Uaons2ooDqUTgUVcAGwBTipa1KSJEmz5JAFVVX9ztCLJ3kQ\\n\",\n       
\"8Ezgz4E/Hno9SZKkaXTEm9KT3D/JxUk+PT7emOT3Ol7/rcDrgNm76UeSJKmjLt/yew/wWeAB4+Nv\\n\",\n       \"A68+0ouSPAvYWlVXs/h9WJIkSceFLgXVfavq74C9AFU1D3T5mtLjgfOSfBf4EPC0JO87OOi22267\\n\",\n       \"6+fOO+/skbokSdJ06LJswo4kP7X/IMnZwPYjvaiqLgQuHL/mKcBrq+rFB8etW7eue7aSJElTqEtB\\n\",\n       \"9Rrgk8AZSb4MnAI89yjaciEbSZJ0XEqXBfuSLAd+jtG9UN8cT/sNbzypM844o1OsC3se4MKeB/RZ\\n\",\n       \"JBNgfr77R7dPbKs+cWFPSZouVbXoP2hHHKFKsgZ4OfBERqNMVyT566raeWxTlCRJmk1dpvzeB/wY\\n\",\n       \"eBujEarfAi4BfrNhXpIkSTOjS0F1ZlVtXHD8uSRbjlUCt99+e6e4PtMZfacnWl271TRJq+mzvlq9\\n\",\n       \"F32mP/tO+fW59jT09bTo85k73t8LSVpMl2UTvp7kcfsPxt/y+1q7lCRJkmbL4TZHvm5BzJeS3Mzo\\n\",\n       \"HqrTgG8uQW6SJEkz4UibI0uSJOkIDrc58k0Lj5OcCqxunZAkSdKs6bI58nlJvg18F/g8cBPwqcZ5\\n\",\n       \"SZIkzYwuN6X/GfA44FtV9WDg6cCVTbOSJEmaIV0Kqvmq+l9gWZK5qroceHTjvCRJkmZGl3WotiU5\\n\",\n       \"CbgC+ECSrcCOtmlJkiTNjiPu5ZdkLXAno9GsFwL3Aj5QVT8c3HhSp5xySqdYF/Y8wIU9D3Bhz+nj\\n\",\n       \"eyHpeHbUe/lV1f7RqL3Ae45hTpIkSceFwy3suYPRQp6Lqaq617FIYNeuXZ3i+ozKTMsITh+tRnBa\\n\",\n       \"jta1GtlrORrpqJMkqYXDrUO1dujFk9wbeBdwJqPi7Her6itDrytJkjRNutyUPsRfAf9cVc9Nshw4\\n\",\n       \"sXF7kiRJS65ZQZXkZOBJVfUSgKraA2xv1Z4kSdKkdFmH6mg9GPifJO9O8vUkf5PkhIbtSZIkTUTL\\n\",\n       \"gmo58CjgHVX1KOAO4A0N25MkSZqIlgXVLcAtVfXV8fFHGBVYd7Nz5867fvquKSRJkjQNmt1DVVU/\\n\",\n       \"SHJzkodW1beAZwA3HBy3evXqVilIkiQtidbf8vtDRtvVrAT+A3hp4/YkSZKWXNOCqqquAR7Tsg1J\\n\",\n       \"kqRJa3kPlSRJ0j1C6ym/IyewvFsKc3Nzx/ya+61YsaJz7MqVKyce2+e+szVr1nSO7XvtE07ovgrG\\n\",\n       \"2rXdF94/7bTTOsdu3LixcyzA6aef3jl23bp1nWP7vG+7d+/uHDs/P985dtmyfv8/6vO5P+mkk5rE\\n\",\n       \"rlq1qnPs8b79lKTpd7i/WxyhkiRJGsiCSpIkaSALKkmSpIEsqCRJkgayoJIkSRrIgkqSJGkgCypJ\\n\",\n       \"kqSBLKgkSZIGsqCSJEkayIJKkiRpoJnZeqbPVhJV1SuHPtt77N27t3Psrl27Osf22Takz3vRJ9+W\\n\",\n       \"+uSxc+fOJrEAe/bs6RzbJ+c+n7k+/dd3O5k+9u3b1yS2758/SToeNB2hSvLGJDckuS7JB5N037hL\\n\",\n       \"kiRpRjQrqJJsAF4GPKqqHg7MAS9o1Z4kSdKktJzy+zEwD5yQZC9wAvC9hu1JkiRNRLMRqqq6DfhL\\n\",\n       
\"4L+A7wM/qqp/adWeJEnSpLSc8vsZ4I+ADcADgLVJXtiqPUmSpElpeVP6o4EvV9UPq2oP8DHg8QcH\\n\",\n       \"7dix466f3bt3N0xHkiSpjZb3UH0D+JMka4CdwDOATQcHrV27tmEKkiRJ7bW8h+oa4H3AVcC149Pv\\n\",\n       \"bNWeJEnSpGSSi/AlqVNPPbVrbJ/r9s2jc2yfhRZbxbqw59HFggt7LuTCnpLUX1Ut+pe4W89IkiQN\\n\",\n       \"ZEElSZI00Mzs5ddSq+nEPrF9plRa7mvYauqqz3X7THPNzc11ju2bR6u+noZp477xfb6B2ye21bSq\\n\",\n       \"JC01R6gkSZIGsqCSJEkayIJKkiRpIAsqSZKkgSyoJEmSBrKgkiRJGmgqC6pdu3ZNOgUN0Hf1ck2P\\n\",\n       \"+fn5SacgSTPJgkrHnP03u/pszSNJOmAqCypJkqRZYkElSZI0UCa5nUMS95KQJEkzo6oW3WtsogWV\\n\",\n       \"JEnS8cApP0mSpIEsqCRJkgaauoIqyblJvpHk20leP+l8dGhJ/jbJrUmuW3BuXZLLknwryWeT3HuS\\n\",\n       \"OerQkqxPcnmSG5Jcn+RV4/P24ZRLsjrJlUk2J9mS5E3j8/bdDEkyl+TqJJ8cH9t/M2yqCqokc8Db\\n\",\n       \"gXOBjcD5SR422ax0GO9m1FcLvQG4rKoeCvzr+FjTaR54dVWdCZwNvGL8580+nHJVtRM4p6oeCTwC\\n\",\n       \"OCfJE7HvZs0FwBZg/83M9t8Mm6qCCngs8J2quqmq5oEPA8+ZcE46hKq6Ath20OnzgPeOH78X+NUl\\n\",\n       \"TUqdVdUPqmrz+PEO4EbggdiHM6Gq/m/8cCUwx+jPon03I5I8CHgm8C5g/7fG7L8ZNm0F1QOBmxcc\\n\",\n       \"3zI+p9lxv6q6dfz4VuB+k0xG3STZAJwFXIl9OBOSLEuymVEfXV5VN2DfzZK3Aq8D9i04Z//NsGkr\\n\",\n       \"qFzD4ThSozU57NMpl2Qt8FHggqq6feFz9uH0qqp94ym/BwFPTnLOQc/bd1MqybOArVV1NQdGp+7G\\n\",\n       \"/ps901ZQfQ9Yv+B4PaNRKs2OW5PcHyDJTwNbJ5yPDiPJCkbF1CVVden4tH04Q6pqO/BPwC9i382K\\n\",\n       \"xwPnJfku8CHgaUkuwf6badNWUF0FPCTJhiQrgecDn5hwTurnE8BLxo9fAlx6mFhNUJIAFwNbquqi\\n\",\n       \"BU/Zh1MuyX33fwMsyRrgl4Crse9mQlVdWFXrq+rBwAuAz1XVi7D/ZtrUrZSe5FeAixjdZHlxVb1p\\n\",\n       \"winpEJJ8CHgKcF9G8/1/Cnwc+HvgNOAm4HlV9aNJ5ahDG38r7AvAtRyYWngjsAn7cKoleTijm5aX\\n\",\n       \"jX8uqaq3JFmHfTdTkjwFeE1VnWf/zbapK6gkSZJmzbRN+UmSJM0cCypJkqSBLKgkSZIGsqCSJEka\\n\",\n       \"yIJKkiRpIAsqSZKkgSyoJE1cki+Nfz09yfnH+NoXLtaWJB1LrkMlaWokeSqjRQ6f3eM1y6tqz2Ge\\n\",\n       \"v72qTjoW+UnSoThCJWnikuwYP3wz8KQkVye5IMmyJG9JsinJNUl+fxz/1CRXJPk4cP343KVJrkpy\\n\",\n       \"fZKXjc+9GVgzvt4lC9vKyFuSXJfk2iTPW3Dtf0vyD0luTPL+pX03JM2i5ZNOQJI4sPXN64HX7h+h\\n\",\n       
\"GhdQP6qqxyZZBXwxyWfHsWcBZ1bVf46PX1pV28Z7221K8pGqekOSV1TVWYu09evALwCPAE4Bvprk\\n\",\n       \"C+PnHglsBP4b+FKSJ1SVU4WSDskRKknTJAcd/zLw4iRXA18B1gE/O35u04JiCuCCJJuBfwfWAw85\\n\",\n       \"QltPBD5YI1uBzwOPYVRwbaqq79fonojNwIYBvydJ9wCOUEmadq+sqssWnhjfa3XHQcdPB86uqp1J\\n\",\n       \"LgdWH+G6xU8WcPtHr3YtOLcX/66UdASOUEmaJrcDC28g/wzw8iTLAZI8NMkJi7zuXsC2cTH188DZ\\n\",\n       \"C56b3//6g1wBPH98n9YpwJOBTfxkkSVJR+T/uiRNg/0jQ9cAe8dTd+8G3sZouu3rSQJsBX5tHL/w\\n\",\n       \"K8qfBv4gyRbgm4ym/fZ7J3Btkq9V1Yv2v66q/jHJ48ZtFvC6qtqa5GEHXZtFjiXpblw2QZIkaSCn\\n\",\n       \"/CRJkgayoJIkSRrIgkqSJGkgCypJkqSBLKgkSZIGsqCSJEkayIJKkiRpIAsqSZKkgf4fuHwpG022\\n\",\n       \"rncAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b4bf3d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGPlJREFUeJztXUlsbNlZ/v6a53myy37Pft1B6kiRkk1YJFGyiKKOkJKw\\n\",\n       \"IYqEFAWEWDAJFjRhQWAXIhEhWCAg6SgMSkCgoICESAeBaBYMjbrTHUgP7nbZZbuq7JrnW9NhUfWf\\n\",\n       \"Pve6bNfk91zl+0lHVa7yuz717lf//59/JCEETJiYB5YnvQET6weTNCbmhkkaE3PDJI2JuWGSxsTc\\n\",\n       \"MEljYm4sTBoiepaIXieit4jouVVuysTdBi3ipyEiK4A3AHwcwCmA/wbwOSHEj1a7PRN3EYtKmg8C\\n\",\n       \"OBBCZIQQfQDfBvDp1W3LxF2GbcF/lwaQVX4+AfDj6i8QkelqXnMIIWja64tKGpMQ9xiLkuYUwK7y\\n\",\n       \"8y7G0sbEPcCipHkJwHuIaI+IHAA+C+C7q9uWibuMhWwaIcSAiH4RwD8BsAL4unlyuj9Y6Mg904VN\\n\",\n       \"Q3jtsWpD2MQ9hkkaE3PDJI2JubGoc2+tYbVa5bJYLCDSq24ikq/zc3UZMRwOMRqN5ONoNIIQAkII\\n\",\n       \"+bP6/rrj3pHGYrHA7XbD7XbD4/HA7XZLcgBjwthsNt1yOp1wuVxwOp1wOp2XrtnpdHSr3+/rlqZp\\n\",\n       \"6Ha7cq17Xva9Iw0RweVyIRQKIRwOIxgM6iSLxWKR5HA4HHC5XPD5fPD5fPD7/fB6vZeuWavVUK1W\\n\",\n       \"UavVUKvV0Ol00O125WO9Xkej0cBoNIKmaSZp1g1EBLfbjVAohGQyiXg8riOM1WqVEsjj8cDj8SAc\\n\",\n       
\"DiMSiSAcDiMcDl+65vn5Oc7Pz3FxcYHz83M0m03dstlsEEKg2+0+gU+8emw8aVjdWK1W2Gw2uFwu\\n\",\n       \"RKNRJJNJpNNpbG1t6ewVJo3H44HX64XX60U0GkUkEkEsFkMkErn0N1hiBQIBeL1e1Ot1KV3q9Tos\\n\",\n       \"FgsGgwFarRasVisASGmzjlJn40ljsVjg8/kQCAQQCAQQCoWQTqexvb2N7e1tJBIJnaRh9eRyueTy\\n\",\n       \"+/3weDxSYhjhcDjg9/sxGAxARPD7/QgEAlLSEBF6vR6azSaq1Sp6vZ7OeF433AvS+P1+pFIpuba2\\n\",\n       \"tpBMJrG1tYVYLCYNYSaOzWaD3W6Xy+Vywe12w263T/0bDocDPp8PFosFLpcLrVZLt1jKVCoVuFwu\\n\",\n       \"AEC/38dgMJAnrXXCxpPGarXC5/MhkUhgf38fe3t7SCaTSCQSSCQSiEajAKA7PbHEISLd8fw6ScOE\\n\",\n       \"CQQCaLfb6HQ6usdKpYJCoQCXy4XhcCiP4+uIpUhDRBkAdQBDAH0hxAdXsallwDea7Rifz4d4PI7t\\n\",\n       \"7W08fPgQTz31FKLRKKLRKGKxGEKh0KVrsI+FCcLSoN/vo9fr6QjGy263S/K43W7dEbtUKiGfzyMU\\n\",\n       \"CsHn80EIASLCaDRCr9d7rP8/q8CykkYA+JgQoryKzawCrI7YholEInjqqafw4MEDqY4CgYC0UaZh\\n\",\n       \"OBzKNRgMJFl6vR76/b40mHk5HA65nE4nLBYL7Ha7JIfX60UoFEIkEkEikYDdbketVsNwOESn07mX\\n\",\n       \"6mlqJPRJwWq1wu/3I5lMIpVKYXt7G7u7uzrSsHNvGmlYbahEabfbuqXaO3a7XZ6y2Ihmuwh41xAP\\n\",\n       \"BoPy1AZAEsbojV4HrELSfJ+IhgD+WAjxpyvY01Jg0qRSKTx69Aj7+/vS+E2lUojFYroj+DQMh0Pp\\n\",\n       \"ye10OqjVaqjX66jVamg0GtL5xw5APjU5HA65B7aN7Ha7jjSJRAL9fl86/e4jaT4khMgRURzAC0T0\\n\",\n       \"uhDixVVsbB4YbQu/3494PI7d3V08evRI2i/RaBSBQEBnr7AKUtXRNCO2Wq3KR/U47nK5MBgMpJTh\\n\",\n       \"UxT7h9iZyMf+cDiMer1+paRbByy1ayFEbvJ4QUTfwbi05bGSRo0V2e126YsJh8PS4A0EAvImCSEu\\n\",\n       \"2SvsjOPFBiyHARqNhlzNZlNnwzgcDjx48EAayHw0Z0nGxFGP8izppgU/1wELk4aIPACsQogGEXkB\\n\",\n       \"fALA76xsZ7PvQ3p6nU4n/H4/gsGgjjRGG2Y4HEp7pdPpoFAoIJfLIZfLoVAoQNM0uZg86lJvvt1u\\n\",\n       \"R7/fBxHJkIPL5YLD4dBJGzaeVcKogdJ1wjKSJgngO5MPbQPwl0KI761kV3OAVZLT6YTX651KGr7B\\n\",\n       \"qqRhwjSbTeTzebzzzjs4ODjA4eGhJJR6YlIf1bQKfnS73QiHw0ilUlL18d9T/T2qpFlHwgBLkEYI\\n\",\n       \"cQjg/Svcy0LgG+JwOKTtwBFpPnarGA6H6Ha7aDabaDQaqFarODs7QyaTwVtvvYU33nhDqi1+ZHXG\\n\",\n       \"bn9VYlitVmxtbaFWq6HVaqHX60m1pTrvVN+PalOtI9bTEjNAtRccDoe0KfibrN70Xq+Hi4sLGZEu\\n\",\n       
\"FArIZDI4OztDpVKBpmnSMOZHNakKAFwulwxqejweeSoLBoPwer1SPXFwko3rRqOBcrmMer2OdruN\\n\",\n       \"fr+/luRZe9Ko9gKTRrUbWB2xaul0Ori4uMDx8TGOjo6QzWZRKBRwfn6OarWKbrd7KdNOlQ5EBKfT\\n\",\n       \"KR2HrJKi0SiCwSA8Hg+cTqeOuP1+H51OB/V6HaVSCbVazSTNk4YxyGi0GdiGYbV0cXGBbDaLN998\\n\",\n       \"E2+//bYujYEz64xqRL25TqcTwWAQ8XhcBkFVSWM8HV0ladYxhABsCGmMMBq67MntdDqoVqvI5/PI\\n\",\n       \"ZrM4PDzEwcHBJaP3OrCkYV9QOp1GIpFAJBJBIBCAy+XSBTx5P5qmodVqoV6vo9VqSTVoSponACEE\\n\",\n       \"er2eTHCyWq3wer2wWq3QNA35fF4XPGw0Gjg8PEQul0OtVpNE4cjzLHA4HDKelEgkEA6H4fV6ZcBy\\n\",\n       \"XU9Fs2LtScPGbbvdlnaIxWJBr9dDtVpFOByWUoTDAoVCAYVCQZKGDeV5SRMOhyVpfD4fnE6nzju9\\n\",\n       \"qVh70rCk4aTtTqcDTdNQq9WQz+fh9XplVcBgMECv19N5eHu9ngxSzkoaDlKGQiHE43GEQiGdpGFs\\n\",\n       \"KnE2gjRMCmBsFHO8iE8xxlQH47oJarWC1WqF2+2G3+9HKBRCLBaTUW72Aqt5OMC7dVHGta5Ye9IY\\n\",\n       \"oaY2ENGlG6b6bGaRLFzy4na7pX+Gj9gc03I6nfLEBOASOflkxks1hNcRG0caYHzT+BtvtVqvrHic\\n\",\n       \"VR1xGmcoFEIoFLrkl2GHIqsm9YivaZo80vMyT093DEwM9uayXWH0uczqymdJEwwGkUgkZHKXMXpu\\n\",\n       \"s9l0pNE0De12Wx6z1cU+mo2VNET0PICfAHAuhHjf5LUIgL8C8BBABsBPCSGqt7jPubCszaDmALMN\\n\",\n       \"Ew6HkUwm8eDBA0kav98vy3rZ5mEbi31EXHXJ0qbZbELTtLlU5F3DLAkd3wDwrOG13wDwghDixwD8\\n\",\n       \"8+TnjQBHzT0ejy5Fc2trCw8ePMDe3h5SqRTC4bAkDAAp3fr9PprNJkqlEs7OznB4eIjT01OUSiVZ\\n\",\n       \"zmKMZa0bbiTNJBOvYnj5UwC+OXn+TQCfWfG+niiYNIFAQKZobm9vY2dnB3t7e9ja2kIoFILb7ZYq\\n\",\n       \"iY1sLopTSXN2diZJY+wssY5Y1KZJCiEKk+cFjHNrNgKqpOH0zEQiga2tLezu7uLhw4fyJMUhAyYB\\n\",\n       \"n5aMpCkUCiiXyzrSrCthgBUYwkIIsWn99YzqKZFIIJVKIZ1OY3d3V1fGy3aMGklvNBoolUrI5XI4\\n\",\n       \"OjpCpVKRke1N6FGzKGkKRJQSQuSJaAvA+So39aRhs9l0SV2qL2Zamma/39eV4RaLRZRKJZmI3mw2\\n\",\n       \"0e121/aIbcSimc3fBfD5yfPPA/i71WznyYNPTE6nEx6PRxb/M2mmxZY49lWtVmXLkXK5jHK5rCPN\\n\",\n       \"TRH0dcEsR+5vAfgogBgRZQH8FoAvA/hrIvpZTI7ct7nJxw3ufjWNNNNiSyxpqtUqLi4udJJGDYpu\\n\",\n       \"iqS5kTRCiM9d8dbHV7yXJwZVehhVE/eccblcsNvtl5yFQghomoZmsymL/Jk03B1r3W0YIzbOIzwv\\n\",\n       
\"LBYLvF4vfD4fvF4vAoEAdnZ2sLOzg3Q6jZ2dHcTjcQQCAdlvj9MsOOXi7OwMJycnyGazOD4+Ri6X\\n\",\n       \"Q7VaRafTecKf7nZw70nDTYg4RMDHa7WfTSgUQjAY1JGm1WrJOBKT5ujoCJlMRqqmdSzunwX3njRc\\n\",\n       \"oJ9MJmX/mng8LlcsFpNRbqfTKfN3WB1VKhXkcjmcnJzg+PgYmUwGzWYTrVZrY3rsGXHvSUNEOtI8\\n\",\n       \"88wzusaMoVDoUs4v2zDlcllWZ56eniKbzSKTyehqw01JsyEwtn/lTp5cnakavmqYABgbwK1WC+Vy\\n\",\n       \"WSao5/P5qR7fTSQMcA9Jo9ZJqZWZbAxzr2DVL2PMw2Epk8/ncXR0hEKhgGq1qvP4btqJScW9Iw0A\\n\",\n       \"XXEdd/JU68D5NZY0akkux5bK5bIME1SrVUka1RdjSpoNglpcZ1RPgUBAV3jHpBFCyMR0VT0dHx+j\\n\",\n       \"3W7L7hLr2OJ1Xtw70vBpibuPx2Ix2fHT7/frsvBYNTFReLHjrtFooNPpyNqpTVZJKu4tabhjOefI\\n\",\n       \"JBIJ+P1+KV3U05LaOLpSqch67GazKQdocHLVfcC9JA03cnz06BGefvppJJNJnaQxpj2wpKlUKjg/\\n\",\n       \"P5eShgOR69x9fBHcGOUmoueJqEBErymv/TYRnRDRy5NlTAe9s2APMJPmve99L/b39yVpuIHAdX6Z\\n\",\n       \"YrEoScPFeaZ60uMbAP4QwJ8prwkAXxVCfPVWdrVCcCYeG7dsz3BogPNljP1kePV6PZRKJRQKBZye\\n\",\n       \"niKTyciS3k2Y3bQIZolyv0hEe1PeWouaUzV9k0f2qFNTfD6frgkRVxNwv712u41isYhcLic9vqye\\n\",\n       \"NjVMcBOWaS/5S0T0AyL6OhFd7hV/R8D9fbn2OhqNSknj9/ulpOEmRMC7/WTY+C0Wi8jn8zIomc/n\\n\",\n       \"TdIsgD8CsI9xz70cgN9b2Y5WDCaNmvMbiUQQCoWkpFHVE/tjuAESJ1YxaTKZjEmaRf6REOJcTADg\\n\",\n       \"axj3D74zUPv28lidSCSCVCqly4/hagJj+iaXorTbbdkUiUtsuZfNulcULIOFSDNJJmf8JIDXrvrd\\n\",\n       \"x41psSWfz4doNCrLUBKJhAwXTCMNG8CsorgJtUqY+3JSmoZFcoS/BOBjRPR+jE9RhwB+/lZ3OSfU\\n\",\n       \"vr089S0ajcrhGolEQkoalSw8TodrsdX2a6qU2fTY0k1YNEf4+VvYy8rAHl1VPUWjUaRSKezu7sr+\\n\",\n       \"wty5ygi1Z1+r1dL5Yu6LA+86bJxHmCe8caOhSCSCZDIph5aqbVtVw5frsDVNk4bv6ekpjo+PpV9G\\n\",\n       \"07Qn/fHuBDaWNOpQLm7ZypPdeOQOH7FZqrA6KhQKODs7QzabxdHRkfQA39fTkhEbRxqehMJzt9Pp\\n\",\n       \"tI40fr9f16aenXncR6ZSqeikjJrza0qaMTaONDzcgkmzs7Mj1RN34VT7z6jH61qthmKxeEnSzNOf\\n\",\n       \"7z5g40mTTqd1dUvGGUtCCLTbbZTLZZyenuLk5ASnp6coFotoNBrQNG0jmiuuEhtLGm53tr29LSUM\\n\",\n       \"jwtUwaQpFos4OTnBwcEB8vm8rp/MOo9Dvg1sHGm4R54qaTidc9owdq4uKJVKyGazePvtt2WyFZOG\\n\",\n       
\"f++++mWM2DjSTFNP6jAvI0aj0SVJwy3xWTWZZNFj40hjHP13VU8ZFWq7WFZH6nXmhbGLqPF1da/G\\n\",\n       \"x2l7NL7PSe88jGzezuiDwUA3cnFeL/fGkWYRGIm27AQ4lk6q8cxE5Ef+u4C+eG+aNFTfs1gssnKC\\n\",\n       \"Uztmmbqrfg4+KdZqNVSrVWiaduWYomkwSQPoCGO323U3fBHSGFukTbsZqlRhMnC87Kr9sYoNBAKI\\n\",\n       \"xWJybPQ0A98IdQ+1Wg2FQkEeAtRU1aUlDRHtYpzmmcA4OPknQog/uOt9hOcB37yrJM0isFgs0qdz\\n\",\n       \"07dXJcxVA1HV92w2GwKBgJw1lU6nZTeL66Duo1gsAgA6nQ7K5bJsiTJrXO0mSdMH8KtCiFeIyAfg\\n\",\n       \"f4joBQBfwLiP8FeI6DmM+wivZS9hbgCQSCSwv78vh3NwWcoiDj11PLPq52EJpKoaridXl1FFqZLG\\n\",\n       \"ZrPJ8AhXUcwraZxOp2zD3263YbfbdTOxbvrM15JGCJEHkJ88bxLRjwCkMe4j/NHJr30TwL9iTUlj\\n\",\n       \"sVgQDAaxs7MDIQT8fr9uassiUe1WqyVDD9xwWh2mygRgEng8HtlUyefzTSWNurh8mGc1zGLTAO8S\\n\",\n       \"x263S2L3ej3Y7XbZVZ2/LNdhZptmklz+AQD/iQ3qI8ykEULA5/Nhe3tbZ88s4tRjA5MfeQ4CSzA+\\n\",\n       \"1bENpQ7rCIVCl+waVYXyOETumaM2wJ4FQgg4HA6dRGVJxnXqN2Em0kxU098C+BUhREPVueveR5gN\\n\",\n       \"S6/Xi1QqdamnzCJ2TalUQrFYlIu/0dxn2DgOOhKJyAZK8Xh8KmmMezb2Mr4J6udg9aSWE3N7/llc\\n\",\n       \"DLNk7tkxJsyfCyG49eud7SPMUWt1YLs6RXeaKL/q9WX2AEBmDvLNMUoa3lcwGJRNlILBoLxxsxLW\\n\",\n       \"OABNtZ+mqddKpYJGo6HLSLy4uEChUJgpkn/T6YkAfB3A/wkhfl95i/sI/y7uWB9h/ta0222Z6sBi\\n\",\n       \"nMfs3Da4zkoIAZvNdq1Nw4NauSeO2j1UfWRMIxJ/SbhzhSpFeFSjilqthlwuh/PzcznymcdGd7vd\\n\",\n       \"G0c/3/Q/+CEAPw3gVSJ6efLaF3GH+wirtde1Wg3lchmBQECWsjwO8IxLm80Gt9t9abqdUb04HA6Z\\n\",\n       \"GMYOQP4ss6jKwWCAdrsth5DxOGleRmnTaDSk6lSHy08j2DTcdHr6d1xdsXAn+wgzadjrWalUdLVP\\n\",\n       \"jwNceOd2uy85zVSPMHDZyGXSGH1F15GH50vV63UpOZrNplzG0xDP+ORmTOrgsqVJs44YjUZyPkE2\\n\",\n       \"m4XNZpP/KSy+543VGI+8qqNtmrrjagiGmsTV7/flqeyq0xkP5+DfV22Uab/PRX28+LjPk+2MpNE0\\n\",\n       \"TQ4s47a23DLlXpJmOByi0Wggn88DGH+rUqmUPALHYjEAl4OF10FtIMB+lXlsJJZ86k1k+2aaocrk\\n\",\n       \"ZjWjHtmntcrvdru6pktcPcHORePfYMnEf4ftISb0Tdg40oxGIzQaDRCRTgyzyK7X67q29rOQhj21\\n\",\n       \"LpdLGoyj0UjaLDeBc5B5L3xiYUPVCN4nLyYBd0o3SgOOWvP7KsGmNVtiSaZ6vudpzLRxpBkOh1Id\\n\",\n       
\"XVxcwO12y65VXGY7D2mISEoWj8cDj8eD0WgkbZZZwJ20yuUyzs/PL0kC47ebc5VLpRLK5bKs8rzK\\n\",\n       \"sFVTO4z20LQY2lW/M6szc+NIo9YxAeMbxmOQiQiaps1NGvXI7na7Ua1WUS6XUSwWEQ6Hb7wGO/t4\\n\",\n       \"QguThSWD8aayq4CXao91u90nnnq6caQxQgiBTqeDarUKItIZwrOqJ0524mMx57Hw400wDnNnHwqr\\n\",\n       \"BSNYIqqlMzz+5y5kEW48aUajkSRKr9dDrVYDMJsBzL9nTE1Q7ZtZ0hLUCDL36OM1LWeHKz15qW3z\\n\",\n       \"7wLotph7V+JRaq7MoumbRnVmPILfBGNqxFXOO4aaBDbNTnlcEEJM/WZtPGlMLI6rSLNM+zQT9xQm\\n\",\n       \"aUzMjWtJQ0S7RPQvRPS/RPRDIvrlyetr20fYxPK41qYhohSAlJojDOAzGEe1G+KaPsKmTbP+uMqm\\n\",\n       \"WTRHGFiTPsImVo+ZbRolR/g/Ji+tRR9hE6vHTKSZqKa/wThHuIk16iNsYvW40U8zyRH+BwD/aEj5\\n\",\n       \"5Pf3APy9EOJ9htdNm2bNsZCf5qocYbrDfYRN3D5uOj19GMC/AXgV47JcAPhNAJ/DWDXJPsJKHRT/\\n\",\n       \"W1PSrDnMMIKJuWGGEUysDCZpTMwNkzQm5oZJGhNzwySNiblhksbE3DBJY2Ju3JqfxsTmwpQ0JuaG\\n\",\n       \"SRoTc+NWSUNEzxLR60T01qQL6LLXyxDRq5MU0/9a4N8/T0QFInpNeS1CRC8Q0ZtE9L15coOuuN7C\\n\",\n       \"qbDXpNcutMdbS9e9rq53mQXACuAAwB4AO4BXADyz5DUPAUSW+PcfwTiR7DXlta8A+PXJ8+cAfHnJ\\n\",\n       \"630JwK8tuL8UgPdPnvsAvAHgmUX3eM31Ft6jEOJWJc0HARwIITJCiD6AbwP49Aquu3CaqRDiRQAV\\n\",\n       \"w8ufwritLSaPn1nyesCCexRC5IUQr0yeNwGoLXjn3uM111t4j8Dtqqc0gKzy8wne3fCiEAC+T0Qv\\n\",\n       \"EdHPLXktxm20t106FVZJr11JC95VpuveJmlu4yz/ISHEBwB8EsAvENFHVnlxMZbjy+576VRYMrTg\\n\",\n       \"XXaPq07XvU3SnALYVX7exVjaLAwhRG7yeAHgOxirwGVRmJTqcEbiUu1thRDnYgIAX5t3j3RNC95F\\n\",\n       \"9qhc7y/4esvu8TZJ8xKA9xDRHhE5AHwW41ayC4GIPETknzz3AvgEVpNmyu1tgRW0t10mFfaq9NpF\\n\",\n       \"93hr6brLnGZmsN4/ibHFfgDgi0teax/jE9grAH64yPUAfAvAGYAexvbWFwBEAHwfwJsAvgcgtMT1\\n\",\n       \"fgbjqTWvAvjB5OYm57jehwGMJp/x5cl6dtE9XnG9Ty6zRyGEGUYwMT9Mj7CJuWGSxsTcMEljYm6Y\\n\",\n       \"pDExN0zSmJgbJmlMzA2TNCbmhkkaE3Pj/wFJ7Hv45ZreFAAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b44f290>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       
\"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEbZJREFUeJzt3X+QXeVdx/HPJ5vdTXaTGAIhIcliooIC01qwMEChKbQ6\\n\",\n       \"2Cm0ai1FbbE6dZy2NmLLlDKjf+G0tuMUOx2dqSAt1FKVVtqO8kuLKaWUFEjCj1AgDigkkohJyO4m\\n\",\n       \"2exuvv5xb8Jmsz+eJyfP3nvC+zWT4Z5zv/c8z73PuXe/nB/P1xEhAAAAHL1Zre4AAABA3ZFQAQAA\\n\",\n       \"VERCBQAAUBEJFQAAQEUkVAAAABWRUAEAAFQ0u5WN22bOBgAAUBsR4YnWF02obF8m6UZJHZJuioi/\\n\",\n       \"GB+zZMmSI143MDCgefPmHXW7uXNr5cQfOHAgtzvHvA/2hGM5oY6Ojqx+5MTPmjXxAc7du3drwYIF\\n\",\n       \"R73dnPeXO9Y54zc0NJQcu3///iKxw8PDybE5n9tk8SMjI5o9+8ifhc7OzuTtzpkzJzl2orYmMzAw\\n\",\n       \"kBybM3aSNDo6mhx7vM/dN9n3eiI53+vu7u7k2Pnz5yfH9vb2Jsfm9iPH4OBgcmx/f/8R6/bs2aOe\\n\",\n       \"np4j1u/evTurHyMjI8mxx/u+PNOKnfKz3SHpS5Iuk3SmpKtsn1GqPQAAgFYpeQ3VeZI2R8QLETEs\\n\",\n       \"6RuS3l2wPQAAgJYomVAtl/TimOWXmuum1dXVVaRDmBmlDqmjvJzTPQCOnZzT6mhPJX89j/rkLAlV\\n\",\n       \"vZFQ1RcJFdAaJFT1V/Ki9C2S+sYs96lxlOowYy807erqIpkCAAC1UzKhekTSabZXStoq6UpJV40P\\n\",\n       \"qnI3HwAAQDsollBFxIjtj0m6R41pE26OiKdLtQcAANAqReehioi7JN1Vsg0AAIBWcysn9rIdfX19\\n\",\n       \"0weq7MWyOZ9BqQk420XOBIelJkPMiS05cWnOpJM5sTnvL2eywNzJLHMmAMwZ63aQMyGqlDfha05s\\n\",\n       \"HSdOLPW7VWpC4tz+lhrrUtqhDzjcZDOlc0sPAABARSRUAAAAFZFQAQAAVERCBQAAUBEJFQAAQEUk\\n\",\n       \"VAAAABWRUAEAAFREQgUAAFARCRUAAEBFJFQAAAAVkVABAABUVLQ4corUGn2l6kDlxufUFCxZj6pE\\n\",\n       \"H3L70Q51ynJrzOWMX059vjlz5iTHdnZ2JsfmvL/u7u7kWCmv3t2ePXuSY3t6epJjTzzxxOTYk08+\\n\",\n       \"OTl2//79ybGStHDhwuTYVatWZW07Vc4+1Nvbm7XtnDqP8+fPT47t6upKjt2xY0dy7D333JMcu3Xr\\n\",\n       \"1uRYKW9fzqmjl/O5XXrppcmxF198cXKsJC1btiw5Nmc/yvks5s6dmxyb87nlfEdypf4eTrXPFz1C\\n\",\n       \"ZbvP9v22n7L9pO2Pl2wPAACgFUofoRqWdE1EbLA9T9Kjtu+LiKcLtwsAADBjih6hioiXI2JD8/GA\\n\",\n       \"pKclpR+PBAAAqIEZuyjd9kpJZ0t6eKbaBAAAmAkzklA1T/fdIWlN80gVAADAcaP4XX62OyV9U9LX\\n\",\n       \"IuLO8c/v2rXr0OM5c+YUvYofAAAg1dq1a7V27dqk2KIJlRv34N8saVNE3DhRTM7tygAAADNl9erV\\n\",\n       
\"Wr169aHlG264YdLY0qf83iLpdyRdYnt9899lhdsEAACYUUWPUEXED8Rs7AAA4DhHsgMAAFCRW1k+\\n\",\n       \"xHaceuqpqbHJ2819TzlT6ueUAsnpR05sqT5IeZ9FTmyp0jq5282JzylTkxObMyYjIyPJsbnlVkqN\\n\",\n       \"dY6c8cjpQ87nlqsdSi7llEXKjS+1f7bD/tYuckqC5Y51qb9nx/uY5IiICX+4OEIFAABQEQkVAABA\\n\",\n       \"RSRUAAAAFZFQAQAAVERCBQAAUBEJFQAAQEUkVAAAABWRUAEAAFREQgUAAFARCRUAAEBFRYsjpxge\\n\",\n       \"Hk6KKzlFfqkSMaVKLeTEliyTkbPtdik9U7eyGu0y1mgvqb+bRxuPsnL+nuXEorUmTahs/4akkDTR\\n\",\n       \"X6yIiG+lNGC7Q9Ijkl6KiMuPqpcAAABtbKojVJerkVBNJimhkrRG0iZJ81M7BQAAUCeTJlQR8btV\\n\",\n       \"N257haR3SvpzSX9SdXsAAADtaNqL0m0vtX2z7buby2fa/v3E7X9B0rWS8i5qAgAAqJGUu/y+Iule\\n\",\n       \"Scuay89Juma6F9l+l6TtEbFeE1+HBQAAcFxISahOioh/kDQqSRExLCnl9qcLJV1h+3lJt0u61Pat\\n\",\n       \"44P6+/sP/RsaGsroOgAAQHtImTZhwPaJBxdsny/p1eleFBHXS7q++ZrVkj4ZER8cHzd/PteqAwCA\\n\",\n       \"ektJqD4h6buSfsb2DyUtlvTeo2iLSXIAAMBxySmTAdqeLenn1bgW6pnmab/qjdtxyimnJMUysefR\\n\",\n       \"xTKx5+GY2BMAUEVETPiHZ9ojVLbnSvqIpIvUOMr0gO2/iYh9x7aLAAAA9ZRyyu9WSbslfVGNI1S/\\n\",\n       \"Jek2Sb9ZsF8AAAC1kZJQnRURZ45Z/p7tTcesA7PTygnOmpVexzn31EfuKcIS/ejo6Gh5rJQ+Hu0S\\n\",\n       \"mzvWpU7ZlqqVtnjx4uTYhQsXZm17xYoVybFLly5Njt22bVty7MaNG5Njt2zZkhy7c+fO5Nhcpfah\\n\",\n       \"np6e5NjOzs7kWEnq6upKjs252zrnlHvO9zqnv4ODg8mxUt77y7Fnz57k2JzLCXJr+XHq/+ik7stT\\n\",\n       \"fb4pWcpjti8Y0+j5kh5NahkAAOB1YKriyE+MiXnQ9otqXEN1qqRnZqBvAAAAtTBdcWQAAABMY6ri\\n\",\n       \"yC+MXbZ9sqQ5pTsEAABQNynFka+w/Zyk5yWtlfSCpLsK9wsAAKA2Ui5Kv0HSBZKejYhVkt4u6eGi\\n\",\n       \"vQIAAKiRlIRqOCJekTTLdkdE3C/pzYX7BQAAUBspE4PstD1f0gOS/t72dkkDZbsFAABQHylHqN4j\\n\",\n       \"aY+kayTdLWmzuAMQAADgkGmPUEXEwaNRo5K+UrQ3AAAANeTJplG3PaDGRJ4TiYhYULlxO1JLa+SU\\n\",\n       \"UMkpcZC77ZzYkuVy2kGpPueUs8gtwZGju7s7OTZnn8spfdHf358cOzCQdyY+pyxKzvtbtGhRcuyy\\n\",\n       \"ZcuSY3NK5Sxfvjw5VpI2b96cHLthw4bk2JwyPDnli3K/e6V+t3K+I0uWLEmOPffcc5Njc/YhKW//\\n\",\n       \"zPl9ySkns3bt2uTYhx56KDlWknbt2pUcm1PWph1+73Pl7Mup5Y727t2riJiw01PNQzUvuSeTsL1Q\\n\",\n       
\"0k2SzlIjOfu9iPhR1e0CAAC0k7xDOfn+StK/RsR7bc+W1Fu4PQAAgBlXLKGy/VOSLo6IqyUpIkYk\\n\",\n       \"vVqqPQAAgFZJP8GYb5Wk/7V9i+3HbP+t7Z6C7QEAALREyYRqtqRzJP11RJwjaVDSdQXbAwAAaImS\\n\",\n       \"11C9JOmliPhxc/kOTZBQDQ4OHnrc2dmZfKU9AABASaOjo8l3RBdLqCLiZdsv2j49Ip6V9A5JT42P\\n\",\n       \"6+3lOnUAANB+Ojo6Dpt2ZKrpMUrf5fdHapSr6ZL0n5I+VLg9AACAGVc0oYqIjZLSZ2gDAACooZIX\\n\",\n       \"pQMAALwuTFp6ZkYat2Pp0qWpsSX7UWzbqXLKgJSKlfJKEZSKzZE7djn7e6n3l1Oioo4liQDgeDZZ\\n\",\n       \"6RmOUAEAAFREQgUAAFARCRUAAEBFJFQAAAAVkVABAABUREIFAABQEQkVAABARSRUAAAAFZFQAQAA\\n\",\n       \"VERCBQAAUFHR4sgpUkt2tEvpmVmz0nPQnNju7u7k2Jz+5pYu6ejoSI6dPbvM7pNTmqVUSRspv2xP\\n\",\n       \"qhNOOKFIbGdnZ1Y/du3alRy7Y8eO5NjBwcHk2KGhoeTY/fv3J8f29PQkx0pSagksSVqyZEly7LJl\\n\",\n       \"y5Jj+/v7k2NfeeWV5FhJGh4eTo7du3dvcuzu3buLxOb8BuTK2Y9yzJ07Nzl2wYIFRbYr5f1u5fx+\\n\",\n       \"5vze79u3Lzk2Z7/I2TelvM8i9W/lVN+lokeobH/a9lO2n7D9ddvpWQMAAEBNFEuobK+U9GFJ50TE\\n\",\n       \"GyR1SHp/qfYAAABapeQpv92ShiX12B6V1CNpS8H2AAAAWqLYEaqI2CHpLyX9t6StknZFxL+Vag8A\\n\",\n       \"AKBVSp7y+1lJfyxppaRlkubZ/u1S7QEAALRKyYvS3yzphxHxfxExIulbki4cHzQ4OHjoX6m7LwAA\\n\",\n       \"AHIdOHBAo6Ojh/5NpeQ1VD+R9Ke250raJ+kdktaND+rt7S3YBQAAgKMzfvqjqaZiKHkN1UZJt0p6\\n\",\n       \"RNLjzdVfLtUeAABAqxSd2DMiPifpcyXbAAAAaDVKzwAAAFREQgUAAFCRc2u9HdPG7ejr60uKzak5\\n\",\n       \"lFuDLeczyInN6UeJmkNSfq27Uv0o9VmUHOtS222HWADA0YmICQvqcoQKAACgIhIqAACAikioAAAA\\n\",\n       \"KiKhAgAAqIiECgAAoCISKgAAgIraMqHat29fq7uACnKnagAAoO7aMqEaGhpqdRdQAQkVAOD1pi0T\\n\",\n       \"KgAAgDohoQIAAKio5aVnWtY4AABApslKz7Q0oQIAADgecMoPAACgIhIqAACAitouobJ9me2f2H7O\\n\",\n       \"9qda3R9Mzvbf2d5m+4kx6xbZvs/2s7bvtb2wlX3E5Gz32b7f9lO2n7T98eZ6xrDN2Z5j+2HbG2xv\\n\",\n       \"sv2Z5nrGrkZsd9heb/u7zWXGr8baKqGy3SHpS5Iuk3SmpKtsn9HaXmEKt6gxVmNdJ+m+iDhd0r83\\n\",\n       \"l9GehiVdExFnSTpf0keb3zfGsM1FxD5Jl0TEmyS9UdIlti8SY1c3ayRtknTwYmbGr8baKqGSdJ6k\\n\",\n       \"zRHxQkQMS/qGpHe3uE+YREQ8IGnnuNVXSPpq8/FXJb1nRjuFZBHxckRsaD4ekPS0pOViDGshIvY0\\n\",\n       
\"H3ZJ6lDju8jY1YTtFZLeKekmSQfvGmP8aqzdEqrlkl4cs/xScx3qY0lEbGs+3iZpSSs7gzS2V0o6\\n\",\n       \"W9LDYgxrwfYs2xvUGKP7I+IpMXZ18gVJ10o6MGYd41dj7ZZQMYfDcSQac3Iwpm3O9jxJ35S0JiL6\\n\",\n       \"xz7HGLaviDjQPOW3QtJbbV8y7nnGrk3Zfpek7RGxXq8dnToM41c/7ZZQbZHUN2a5T42jVKiPbbaX\\n\",\n       \"SpLtUyRtb3F/MAXbnWokU7dFxJ3N1YxhjUTEq5L+RdIvibGriwslXWH7eUm3S7rU9m1i/Gqt3RKq\\n\",\n       \"RySdZnul7S5JV0r6Tov7hDzfkXR18/HVku6cIhYtZNuSbpa0KSJuHPMUY9jmbJ908A4w23Ml/bKk\\n\",\n       \"9WLsaiEiro+IvohYJen9kr4XER8Q41drbTdTuu1flXSjGhdZ3hwRn2lxlzAJ27dLWi3pJDXO9/+Z\\n\",\n       \"pG9L+kdJp0p6QdL7ImJXq/qIyTXvCvu+pMf12qmFT0taJ8awrdl+gxoXLc9q/rstIj5ve5EYu1qx\\n\",\n       \"vVrSJyLiCsav3touoQIAAKibdjvlBwAAUDskVAAAABWRUAEAAFREQgUAAFARCRUAAEBFJFQAAAAV\\n\",\n       \"kVABaDnbDzb/+9O2rzrG275+orYA4FhiHioAbcP229SY5PDyjNfMjoiRKZ7vj4j5x6J/ADAZjlAB\\n\",\n       \"aDnbA82Hn5V0se31ttfYnmX787bX2d5o+w+a8W+z/YDtb0t6srnuTtuP2H7S9oeb6z4raW5ze7eN\\n\",\n       \"bcsNn7f9hO3Hbb9vzLb/w/Y/2X7a9tdm9tMAUEezW90BANBrpW8+JemTB49QNROoXRFxnu1uST+w\\n\",\n       \"fW8z9mxJZ0XEfzWXPxQRO5u17dbZviMirrP90Yg4e4K2fl3SL0p6o6TFkn5s+/vN594k6UxJ/yPp\\n\",\n       \"QdtviQhOFQKYFEeoALQTj1v+FUkftL1e0o8kLZL0c83n1o1JpiRpje0Nkh6S1CfptGnaukjS16Nh\\n\",\n       \"u6S1ks5VI+FaFxFbo3FNxAZJKyu8JwCvAxyhAtDuPhYR941d0bzWanDc8tslnR8R+2zfL2nONNsN\\n\",\n       \"HZnAHTx6NTRm3aj4rQQwDY5QAWgn/ZLGXkB+j6SP2J4tSbZPt90zwesWSNrZTKZ+QdL5Y54bPvj6\\n\",\n       \"cR6QdGXzOq3Fkt4qaZ2OTLIAYFr8XxeAdnDwyNBGSaPNU3e3SPqiGqfbHrNtSdsl/VozfuwtyndL\\n\",\n       \"+kPbmyQ9o8Zpv4O+LOlx249GxAcOvi4i/tn2Bc02Q9K1EbHd9hnjtq0JlgHgMEybAAAAUBGn/AAA\\n\",\n       \"ACoioQIAAKiIhAoAAKAiEioAAICKSKgAAAAqIqECAACoiIQKAACgIhIqAACAiv4fPgLxE2ST8JkA\\n\",\n       \"AAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b3b3a50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       
\"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEBVJREFUeJztnVuMJNdZx39f36d6+rI9O7PDrveSlQKysSX7xSA5EREK\\n\",\n       \"0fqFwEsiS0hRgIgHboIHTHiJHyMkIsQLEoqNwkWJEMgoIAG2ERJBKIDROnYgjmPJK8/sXHd2unu6\\n\",\n       \"p+99eOg+h+qanktX12Snqs5PKk13zXTpm93/fOec73zfd0QphcUyC4lHbYAlfFjRWGbGisYyM1Y0\\n\",\n       \"lpmxorHMjBWNZWZ8i0ZE7ojIuyLyAxF5MUijLBcb8ROnEZEk8H3gk8B94L+AF5RS3wvWPMtFxK+n\\n\",\n       \"eRZ4Xyl1TynVA74BfDo4sywXmZTPz10D1lzv14GfcP+AiNhQc8hRSsm0+349jRVEjPErmvvAddf7\\n\",\n       \"64y8jSUG+BXNm8BHReSWiGSAzwLfDM4sy0XG15xGKdUXkV8D/glIAi/blVN88LXkPtOD7UQ49AQ9\\n\",\n       \"EbbEGCsay8xY0VhmxorGMjNWNJaZsaKxzIwVjWVm/G5YWk5ARMwFkE6nzZVKpRgOhwwGA/r9PoPB\\n\",\n       \"wLzXXy86VjTnQCKRIJlMkkwmSaVSlMtlKpWKuVqtFo1Gw1ytVmviGg6Hj/pXOBErmnMgkUgYz5LJ\\n\",\n       \"ZFhZWeHWrVvcvHmTmzdvUqvV2N3dNdf+/j7VahWlFO12+1GbfypWNOeA9jDZbJZcLsfKygq3b9/m\\n\",\n       \"qaee4sknn2RnZ4e1tTU+/PBDstks6XQapRSdTscMaReZuUQjIveAOjAAekqpZ4MwKuwkEgkymQy5\\n\",\n       \"XA7HcahUKly9epXbt2/zxBNPUCqVEBE6nQ4HBwc0m03q9TqpVCr6omGUjPUJpdTDIIyJCslkknQ6\\n\",\n       \"zcLCAouLiywsLJDJZEgmkwAMh0P6/T69Xo9Op0Ov16Pf7zMcDglDbX0QS+6L/6fxQyaZTJLJZE4U\\n\",\n       \"zWAwoNfr0e126Xa7RjRhYF7RKOANEXlTRL4QhEFR4CyiOc7ThIF5h6fnlFKbIrIMvC4i7yqlvhWE\\n\",\n       \"YWFGiyaXy02IJpFIoJQyXqbdbnN4eEir1aLX6zEYDKI/PCmlNsdfd4FXGZW2xB49EXZ7mnQ6TSKR\\n\",\n       \"YDgc0ul0aDabVKtVHjx4QLVapdls0u12oy0aEXFEpDB+nQc+BbwTlGFhxj085fN5IxoRYTAY0O12\\n\",\n       \"aTab1Go19vb2qNVqRjRhYJ7h6Qrw6niJmAL+Uin1WiBWhRzvnCaXyxnRuD1NrVbjwYMHNJtN+v0+\\n\",\n       \"/X4/FJ7Gt2iUUh8ATwdoS2jRsRW93+QWTLlcxnEcUqmUCeC1Wi2azSYHBwfUarVQRIHd2IhwACQS\\n\",\n       \"CVKplNlvKhaLVCoVVlZWuHr1KoVCgVQqRavVMtsGjUYjNHMYL1Y0AaCDeZlMhkwmQ6lUYmlpiStX\\n\",\n       \"rnDt2jVEBKWU8TAPHz6k0WjQ6XQetem+sPk0AeBeYufzeUql0oSnKZVKJJNJ42m0aMLqaaxoAkB7\\n\",\n       \"Gi2aQqHApUuXWFpaYmVlhcXFRUSEZrMZieHJiiYAdCqE3qDM5XJkMhkTmxkMBrRaLZMSsb+/H6q4\\n\",\n       
\"jBcrmgDQniabzeI4jokA68nxNNFoTxNGrGgCQHuabDbLwsICuVzO5MlME03YIsBerGgCQE+EHceh\\n\",\n       \"UCjgOA7ZbNbkx/T7fVqtFvV6nb29PSsaC+RyOUqlEisrK9y4cYPl5WXy+TyJRMIE8w4PD01A7/Dw\\n\",\n       \"kE6nQ7/ff9Sm+8KKJgCy2SzFYpGVlRUee+wxLl++PCEavZutE8kPDw/pdruh2dX2YoN7AZDNZimV\\n\",\n       \"SiwvL3Pjxg2KxSKLi4skk8ljPY3OoQkjp3oaEXlFRLZF5B3XvYqIvC4i74nIayJSPl8zLx7u2qZc\\n\",\n       \"LkexWGR5eZmrV69SqVRwHGfC02jh6JKVMHuaswxPfwrc8dz7XeB1pdSPAv88fh8b9F6TjgIvLCzg\\n\",\n       \"OI4J7DmOM5F0pdM7dUGcvsIoGDiDaMaZePue2z8LfG38+mvAzwVs14XmONEsLi5SKBQmMvXcInFX\\n\",\n       \"UiqlQisav3OaK0qp7fHrbUa5NbHBLRp3spX2NDphXCllcn+93iasgoEAJsJKKRW3/nq6GE4LxnEc\\n\",\n       \"Mzzl83k6nc4RwXivMON3yb0tIqsAIvIjwE5wJl18UqmUSRovlUrk83lyuRyp1OhvUM9jdMWBu9A/\\n\",\n       \"CvgVzTeBz41ffw7422DMufiIyIRoyuWySenUEWB3Vwi3aMI8JLk5y5L768C/Az8mImsi8nngy8DP\\n\",\n       \"iMh7wE+P38cGr6dxiwame5qwT37dnDqnUUq9cMy3PhmwLaFBF/fn83nK5TL5fN7sNcHRYriwVVCe\\n\",\n       \"ho0I+2BaiYq7grLT6VCv19nd3WV7e5udnR2TQB4FT2P3nnzg7gqhJ8Fe0dRqNdNSZHt7m2q1Grqq\\n\",\n       \"g+OwovGBN71T1zVp0XS7XeNp1tbWIudp7PA0I3r1dNLw1G63jWjW19d58OCBFU3c0GmbqVSKdDpN\\n\",\n       \"uVw2SeOrq6uUSiUymQyDwYBGo2GK4KrVKvv7+xwcHJgi/yhgRXMKIkIymSSbzZp2aLrSYHl5mdXV\\n\",\n       \"VfL5/BHR1Ov1I6IJS9ntaVjRnAEdl9Gbkl5Pk0gkTHF/o9GgXq9PeBqdCmE9TYxwx2WKxeJETdPq\\n\",\n       \"6qrJmdGX19P0er3Qp0O4saunM6A9TaFQoFwuUygUTEDP20Lk8PCQdrs9EdCLSiRYY0VzCiIy0XTx\\n\",\n       \"0qVLFIvFIxUHw+GQbrdLq9U6IpqoCceK5gy4PY0WjTs+Yz2Nh2NyhF8SkXURuTu+vOmgkUJ7Gu/w\\n\",\n       \"5N6kHAwGdDqdqaKJmnD85ggr4CtKqWfG1z8Gb9rFQC+5j+uhB9Dr9Tg8PDSdrXQxXK/Xi4xQ3PjN\\n\",\n       \"EYYY9Q8+rcWrWzTestsoMs+c5tdF5Dsi8nLUS1i8u9reDcper2e6de7u7lKr1UxBXBTxK5o/Bj7C\\n\",\n       \"qOfeJvAHgVl0ATlteNITYDs8nYBSakeNAb5KxPoH6/Oa3GUqOhpcKpVYWFiY6AbRaDSo1Wo8fPjQ\\n\",\n       \"iEZXUUYRX6IZJ5Nrfp6I9Q/Wk1/dR08PS4VCgWKxSC6XM6LRVZP1ep39/X2zox3l4enUbYRxjvBP\\n\",\n       \"AZdFZA34EvAJEXma0SrqA+BXztXKHzLTPI0Wje6fl0wmTQsR7Wm0aHSKZ1SHJ785wq+cgy0XAl2f\\n\",\n       
\"7fY0enjSonGfP9npdI4MT1HZYzoOu2E5hUwmw+LiotmgrFQqE1sHrVbLzGfa7TbNZpN2uz2xMRll\\n\",\n       \"7DaCBxEhm82yuLhIpVLhypUrLC0tUSqVcBzHbBv0ej3T3Uo3KYpSbdNJWNF40G3qC4UCS0tLrK6u\\n\",\n       \"srS0ZDxNOp0GmGiJ1mw2Q93ZalasaKZwkqfJZDLApGjcniYOWNFMwdvi1d2pUzOt7DbqcxmNFY0H\\n\",\n       \"ETnSf0b3BNanrejJrj5uJ0yHlgaBFc0U3KLJZrNmn0lvG7i7W+ljBK1oYo7X06TTaVKp1FTRuIcm\\n\",\n       \"K5qY4k7vdCdduROu+v0+7XabRqNBtVql0WjQbrft6inOpNNpHMcx5zZ5l9vu/Bl9BmWUNyi92Iiw\\n\",\n       \"B+1pHMcx0WCdDqE9jRaNbluvl91xEc2JnkZErovIv4jI/4jId0XkN8b3I91HeJqn0Tk0gIkGa09T\\n\",\n       \"r9cjVXZ7GqcNTz3gt5RSPw78JPCrIvI4Eesj7E2yyufz5ggefaLKtImwXnbbibALpdSWUuqt8esG\\n\",\n       \"8D3gGhHqI6yHI90OTbeodx/2pRsA6DhN3DnznEZEbgHPAP9BxPoIu2u1S6WS6TquRaPza7SniTtn\\n\",\n       \"Eo2ILAJ/A/ymUurA/RcX9j7C3m6duvGiWzTucxAsZ8vcSzMSzJ8rpXTr120RWVVKbYW9j/C01ZKu\\n\",\n       \"oNSCUUqZeYtO8Ww2m+YonrAfkDErp62eBHgZ+F+l1B+6vhWpPsKZTMZ06tQ72nq15C651amd7h40\\n\",\n       \"tVrNJGHFJbh3mqd5DvgF4G0RuTu+90VGfYP/SkR+CbgHfObcLDxndP6M4ziUy2UuX748kXAFGNHo\\n\",\n       \"I3jcgqlWq3S7XVOGGwdOFI1S6t843htFoo+wFo0+hN0tGj08DYdDE5txexktnKiceXBWYhkRdk9o\\n\",\n       \"dXtXLZpKpWKO35kWAa5WqxNDkq44iMNcRhNL0cBk1YHucqVFoyPAqVQKpZSpoNRlt97TbrVg4iKc\\n\",\n       \"WIpGJ1q5RaNjNJcuXZrqaZrNpinw157GXQwXF8FAjEWjhTPN0+jewHoi3O12TYH/tFrtOAkGYioa\\n\",\n       \"L1pAWkR6SBIRut0u1WqVvb09tre32dzcZG9vj0ajYQ4DixuxFI32Dvryns+klDKrJRFhZ2eHra0t\\n\",\n       \"Njc3WV9fNzvbnU7nUf8qj4RYikbjFY1bOJp+v29OU9nY2OD+/fsmwBfVAv/TiK1o9LDijrG4S1L0\\n\",\n       \"1el0jGg2Nze5f/8+vV7PXHEklqJxz0OGwyHNZpO9vT02NjZwHMcIRx/ytba2xtbWFvv7+zQajdgF\\n\",\n       \"87zEUjQavRFZr9fZ2NggkUhwcHBghipdorK5ucnOzg4HBweR7NY5K7EVjf4PHwwG1Go1RIRms8nW\\n\",\n       \"1taRw9f1lkGj0TClt3EVDICc9MuLyHXgz4AVRg2M/kQp9Uci8hLwy8Du+Ee/6G0LG5YcG50aoWub\\n\",\n       \"3IeX6q/uOY7elIyDaJRSUxOIThPNKrCqlHprnIj134xSOz8DHCilvnLCZ6P/rxpxjhPNabvcW8DW\\n\",\n       \"+HVDRHSOMMSoj7BlkjMnvbpyhL89vhWbPsKWSc4kmvHQ9NeMcoQbxKyPsGWSE+c0YHKE/x74B0/K\\n\",\n       
\"p/7+LeDvlFJPee7bOU3IOW5O4ytHOOp9hC0nc9rq6WPAvwJvM1pyA/we8AKjocn0EXbVQenPWk8T\\n\",\n       \"cnwtuefBiib8+BqeLJZpWNFYZsaKxjIzVjSWmbGiscyMFY1lZqxoLDNzbnEaS3SxnsYyM1Y0lpk5\\n\",\n       \"V9GIyB0ReVdEfiAiLwbwvHsi8raI3BWR//Tx+VdEZFtE3nHd893e9pjnvSQi62Mb74rInRmeF2gL\\n\",\n       \"3hOe59tG4Gi1YVAXkATeB24BaeAt4PE5n/kBUJnj8x9nlEj2juve7wO/M379IvDlOZ/3JeC3fdq3\\n\",\n       \"Cjw9fr0IfB943K+NJzzPt41KqXP1NM8C7yul7imlesA3gE8H8FzfaaZKqW8B+57bvtvbHvM88Gmj\\n\",\n       \"CrgF7wnP820jnO/wdA1Yc71f5/8N9osC3hCRN0XkC3M+S3Me7W3nToUNugVvkOm65yma81jLP6eU\\n\",\n       \"egZ4nlH39I8H+XA18uPz2j13Kqy3Be+8NgadrnueorkPXHe9v87I2/hGKbU5/roLvMpoCJyX7XGp\\n\",\n       \"js5InKu9rVJqR40BvjqrjSe14PVjo+t5f6GfN6+N5ymaN4GPisgtEckAn2XUStYXIuKISGH8Og98\\n\",\n       \"imDSTANtbztPKmzQLXjPLV13ntXMGWbvzzOasb/PqApznmd9hNEK7C3gu36eB3wd2AC6jOZbnwcq\\n\",\n       \"wBvAe8BrQHmO5/0io4rUt4HvjP9zr8zwvI8Bw/HveHd83fFr4zHPe34eG5VSdhvBMjs2ImyZGSsa\\n\",\n       \"y8xY0VhmxorGMjNWNJaZsaKxzIwVjWVmrGgsM/N/z4EQsKT2Kt0AAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b33bd10>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAD4tJREFUeJzt3WuQZVdZh/HnPz2XnjhDpiJJEJg4qKAkBRIEKtwJoBUp\\n\",\n       \"CKgIRAVEC8sCJEaggHzwkxZYlEWkKK1CIpcgoAIGKBWIEiGAJARmcptws4gmIBkvGchA9dA98/qh\\n\",\n       \"z0w6Tc/02b2z+pw9eX5VXXP2PuvstfqsfU6/s/ba601VIUmSpLXbMOkGSJIkDZ0BlSRJUk8GVJIk\\n\",\n       \"ST0ZUEmSJPVkQCVJktSTAZUkSVJPGydZeRLXbJAkSYNRVVlpf9OAKsl5wCXADPD2qvqT5WVOP/30\\n\",\n       \"H3rdgQMH2LZt2932dVkv6/Dhw53a2aX8oUOHxi7bqs1djttynbFjHXt+fp5NmzatuR1d3ouufT0N\\n\",\n       \"fSJJOvE0u+SXZAZ4K3AecCZwQZKHtqpPkiRpUlrOoXoM8PWquqWq5oH3A89uWJ8kSdJEtAyoHgDc\\n\",\n       \"umT7ttG+VW3evLlJg7Q+NmzwXgdJ0r1Ly798a55UYkA1bDMzM5NugiRJ66rlpPRvAjuXbO9kcZTq\\n\",\n       
\"bg4cOHD08ebNmw2mJEnS4LQMqK4FHpxkF/At4PnABcsLLb+bT5IkaWiaBVRVtZDkFcDHWVw24dKq\\n\",\n       \"urlVfZIkSZOSSa6fk6RWWodqJa5DtbbjTmIdqr5lXYdKkjStJrKw5zi2bt06VrmNG8dvapey0G0S\\n\",\n       \"dZdjb9myZeyys7OzTcqO+/4e0WUOW5eyXdqxffv2scuecsopY5cFOO2008Yue+qppzYpu2PHjrHL\\n\",\n       \"djmHpqWvu3xGunz2utw92jXA7VJ+YWFh7LJd/gPW5b1IVvw+P6ZW359d+qRLm7v+fkPT5Xzrcg51\\n\",\n       \"PXarPmnVfy3Pi3Hft+N9Tr2/XZIkqScDKkmSpJ4MqCRJknoyoJIkSerJgEqSJKknAypJkqSeDKgk\\n\",\n       \"SZJ6MqCSJEnqyYBKkiSpJwMqSZKkngyoJEmSepp4Lr+5ublJN6GTLrmEWuVUGmLC41ZJibvkVet6\\n\",\n       \"7GlIhN1Fy/x1Joq+S5f8dV3y801LAvhW/XeinxdDdKLnTFxvTUeokuxMcmWSm5LcmOSVLeuTJEma\\n\",\n       \"hNYjVPPARVW1J8k24ItJrqiqmxvXK0mStG6ajlBV1beras/o8QHgZuD+LeuUJElab+s2KT3JLuBs\\n\",\n       \"4Or1qlOSJGk9rEtANbrc9wHgwtFIlSRJ0gmj+V1+STYBHwTeU1WXL3/+zjvvPPp48+bNbNmypXWT\\n\",\n       \"JEmSVtXl7tSmAVUW78m8FNhbVZesVGb79u0tmyBJkrQmy5eWOF6A1fqS3+OB3wDOTbJ79HNe4zol\\n\",\n       \"SZLWVdMRqqr6DK7GLkmSTnAGO5IkST1NPPXMuGkchpiapYsuqSFapS6BdukvWvVflzQgLY/d6r1o\\n\",\n       \"eV4MLRXItLS3S0qiLmW7nBddU4a0SjEyLX0yNF36o+t3XBfTkGrsROIIlSRJUk8GVJIkST0ZUEmS\\n\",\n       \"JPVkQCVJktSTAZUkSVJPBlSSJEk9GVBJkiT1ZEAlSZLUkwGVJElSTwZUkiRJPU089czGjeM1oUsK\\n\",\n       \"h67L3rdaUr9LKpAuv1+X4w4xDU/LFAet0oa0Oj9N93DvYf/de3Tp6y7fLZqsY0YzSX4FKGClpENV\\n\",\n       \"VR8ap4IkM8C1wG1V9aw1tVKSJGmKHW946FksBlTHMlZABVwI7AW2j9soSZKkITlmQFVVv9n34Eke\\n\",\n       \"CDwD+GPgD/oeT5IkaRqtOik9yf2SXJrkY6PtM5P89pjHfzPwGmD8ST+SJEkDM85dfu8EPgHcf7T9\\n\",\n       \"NeCi1V6U5JnAvqrazcrzsCRJkk4I4wRU962qvwEOAVTVPLAwxuseB5yf5BvA+4CnJnn38kL79+8/\\n\",\n       \"+jM3N9eh6ZIkSdNhnDULDiT50SMbSc4BvrPai6rqYuDi0WueDLy6ql60vNyOHTvGb60kSdIUGieg\\n\",\n       \"ehXwUeAnknwOOBV47hrqcpEVSZJ0Qso4C4wl2Qj8NItzob4yuuzXv/KkzjjjjLHKtlr4ElzYc61c\\n\",\n       \"2HNtZV3YU5KGq6pWnBe+6ghVkq3Ay4AnsDjKdFWSv6gqJzxJkiQx3iW/dwPfBd7C4gjVrwGXAb/a\\n\",\n       \"sF2SJEmDMU5AdVZVnblk+5NJ9t5TDRj3zr6Wlz6Gdglmw4bh5bRu9b4tLIxzw+naJOOv9tGlbKs2\\n\",\n       
\"tDS0z4gkrbdx/jJ/Kcljj2yM7vL7YrsmSZIkDcvxkiPfsKTMZ5PcyuIcqjOAr6xD2yRJkgZhteTI\\n\",\n       \"kiRJWsXxkiPfsnQ7yWnAbOsGSZIkDc04yZHPT/I14BvAp4BbgH9q3C5JkqTBGGdS+h8BjwW+WlUP\\n\",\n       \"Ap4GXN20VZIkSQMyTkA1X1X/A2xIMlNVVwKPatwuSZKkwRhnHao7kmwHrgL+Osk+4EDbZkmSJA3H\\n\",\n       \"OCNUzwG+D1wEfAz4Ot4BKEmSdNSqI1RVdWQ06hDwzqatkSRJGqDjLex5gMWFPFdSVXWfe6IBs7Pj\\n\",\n       \"rcQwDWlAuuqSVuPw4cMTLzst7ehSdmZmZuyyMB2pTlq9Fy1TLrUyxM+qJK3keOtQbet78CQ7gLcD\\n\",\n       \"Z7EYnP1WVX2+73ElSZKmyTiT0vv4M+Afq+q5STYCP9K4PkmSpHXXLKBKcjLwxKp6MUBVLQDfaVWf\\n\",\n       \"JEnSpIxzl99aPQj47yTvSPKlJH+Z5KSG9UmSJE1Ey4BqI/BI4M+r6pHA94DXNaxPkiRpIlrOoboN\\n\",\n       \"uK2qvjDa/gArBFT79+8/+nh2dnbsu/4kSZKmRbOAqqq+neTWJA+pqq8CTwduWl5ux44drZogSZK0\\n\",\n       \"Llrf5fd7LKar2Qz8O/CSxvVJkiStu6YBVVVdBzy6ZR2SJEmT1nJSuiRJ0r1C60t+q5qbmxurXMvU\\n\",\n       \"M0NLa9MqpU3XY7cqu2HD+HH+xo3dTuFNmzaNXXZhYWHssvPz82OX/cEPfjB22S5apk9pdexp+DxB\\n\",\n       \"t3OulSH2n6S7TP5bRJIkaeAMqCRJknoyoJIkSerJgEqSJKknAypJkqSeDKgkSZJ6MqCSJEnqyYBK\\n\",\n       \"kiSpJwMqSZKkngyoJEmSepp46pmDBw+OVa5lioppSD3T6rhdU050aUeXdB1dyh46dGjssl1SvnQ9\\n\",\n       \"dpe0PV2O2+U97pJap2tfd/n9upRtlZJoWkzD98UQ3zfpRNd0hCrJ65PclOSGJO9NsqVlfZIkSZPQ\\n\",\n       \"LKBKsgt4KfDIqnoYMAO8oFV9kiRJk9Lykt93gXngpCSHgJOAbzasT5IkaSKajVBV1f8Bfwr8J/At\\n\",\n       \"YH9V/XOr+iRJkial5SW/nwR+H9gF3B/YluTXW9UnSZI0KS0npT8K+FxV/W9VLQAfAh63vNDc3NzR\\n\",\n       \"n4WFhYbNkSRJaqNlQPVl4JwkW7N47/DTgb3LC83Ozh796XKLuCRJ0rRoOYfqOuDdwLXA9aPdb2tV\\n\",\n       \"nyRJ0qRkkgvEJamTTz553LIt29GkbKs2dDEtC3t2aUeXRTK7lO1avtXCnl106Q8X9ly7Vt8Brfqv\\n\",\n       \"6/s2Le+zdCKoqhU/2KaekSRJ6smASpIkqaeJzwLftm3bWOVaXXJoqUs7uvx+XXQ9bqtLc60uL7W8\\n\",\n       \"5NfqPJqWc3laPidDMw2fa0nTxxEqSZKkngyoJEmSejKgkiRJ6smASpIkqScDKkmSpJ4MqCRJknqa\\n\",\n       \"yoDq4MGDk26Cepifn590E7RGLqUgSWtjQKV7nAHVcBlQSdLaTGVAJUmSNCQGVJIkST1lkkP8Sby+\\n\",\n       \"IEmSBqOqstL+iQZUkiRJJwIv+UmSJPVkQCVJktTT1AVUSc5L8uUkX0vy2km3R8eW5K+S3J7khiX7\\n\",\n       
\"TklyRZKvJvlEkh2TbKOOLcnOJFcmuSnJjUleOdpvH065JLNJrk6yJ8neJG8Y7bfvBiTJTJLdST46\\n\",\n       \"2rb/BmyqAqokM8BbgfOAM4ELkjx0sq3ScbyDxb5a6nXAFVX1EOBfRtuaTvPARVV1FnAO8PLR580+\\n\",\n       \"nHJVNQecW1WPAB4OnJvkCdh3Q3MhsBc4MpnZ/huwqQqogMcAX6+qW6pqHng/8OwJt0nHUFVXAXcs\\n\",\n       \"230+8K7R43cBz1nXRmlsVfXtqtozenwAuBl4APbhIFTV90cPNwMzLH4W7buBSPJA4BnA24Ejd43Z\\n\",\n       \"fwM2bQHVA4Bbl2zfNtqn4Ti9qm4fPb4dOH2SjdF4kuwCzgauxj4chCQbkuxhsY+urKqbsO+G5M3A\\n\",\n       \"a4DDS/bZfwM2bQGVazicQGpxTQ77dMol2QZ8ELiwqu5c+px9OL2q6vDokt8DgSclOXfZ8/bdlEry\\n\",\n       \"TGBfVe3mrtGpu7H/hmfaAqpvAjuXbO9kcZRKw3F7kvsBJPkxYN+E26PjSLKJxWDqsqq6fLTbPhyQ\\n\",\n       \"qvoO8A/Az2HfDcXjgPOTfAN4H/DUJJdh/w3atAVU1wIPTrIryWbg+cBHJtwmdfMR4MWjxy8GLj9O\\n\",\n       \"WU1QkgCXAnur6pIlT9mHUy7JfY/cAZZkK/DzwG7su0GoqouramdVPQh4AfDJqnoh9t+gTd1K6Ul+\\n\",\n       \"EbiExUmWl1bVGybcJB1DkvcBTwbuy+L1/j8EPgz8LXAGcAvwvKraP6k26thGd4V9Grieuy4tvB64\\n\",\n       \"BvtwqiV5GIuTljeMfi6rqjclOQX7blCSPBl4VVWdb/8N29QFVJIkSUMzbZf8JEmSBseASpIkqScD\\n\",\n       \"KkmSpJ4MqCRJknoyoJIkSerJgEqSJKknAypJE5fks6N/fzzJBffwsS9eqS5Juie5DpWkqZHkKSwu\\n\",\n       \"cvisDq/ZWFULx3n+zqrafk+0T5KOxREqSROX5MDo4RuBJybZneTCJBuSvCnJNUmuS/I7o/JPSXJV\\n\",\n       \"kg8DN472XZ7k2iQ3JnnpaN8bga2j4122tK4selOSG5Jcn+R5S479r0n+LsnNSd6zvu+GpCHaOOkG\\n\",\n       \"SBJ3pb55LfDqIyNUowBqf1U9JskW4DNJPjEqezZwVlX9x2j7JVV1xyi33TVJPlBVr0vy8qo6e4W6\\n\",\n       \"fhn4WeDhwKnAF5J8evTcI4Azgf8CPpvk8VXlpUJJx+QIlaRpkmXbvwC8KMlu4PPAKcBPjZ67Zkkw\\n\",\n       \"BXBhkj3AvwE7gQevUtcTgPfWon3Ap4BHsxhwXVNV36rFORF7gF09fidJ9wKOUEmadq+oqiuW7hjN\\n\",\n       \"tfresu2nAedU1VySK4HZVY5b/HAAd2T06uCSfYfwu1LSKhyhkjRN7gSWTiD/OPCyJBsBkjwkyUkr\\n\",\n       \"vO4+wB2jYOpngHOWPDd/5PXLXAU8fzRP61TgScA1/HCQJUmr8n9dkqbBkZGh64BDo0t37wDewuLl\\n\",\n       \"ti8lCbAP+KVR+aW3KH8M+N0ke4GvsHjZ74i3Adcn+WJVvfDI66rq75M8dlRnAa+pqn1JHrrs2Kyw\\n\",\n       \"LUl347IJkiRJPXnJT5IkqScDKkmSpJ4MqCRJknoyoJIkSerJgEqSJKknAypJkqSeDKgkSZJ6MqCS\\n\",\n       \"JEnq6f8BUrepXiOjd3cAAAAASUVORK5CYII=\\n\"\n  
    ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b2b79d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGlxJREFUeJztnVls49t93z+Hi7iJu0Rq44xm5l4bAVzAfnEfnKB5CIJr\\n\",\n       \"FEjSlwYGChTpgj50Q/vQJH1o/JgGaFC0D0Xb2EE3JC1auEgKtLUNtIj70DYOfGNf9/pO7r3SSENR\\n\",\n       \"JCVS3ClS5OmD+Ds+5FALN4nS/L/AwZ/L8Mwh/1/9zm8/SmuNAweTwHXfC3Dw8OCQxsHEcEjjYGI4\\n\",\n       \"pHEwMRzSOJgYDmkcTIypSaOUek8p9SOl1J8opX55notysNxQ0/hplFJu4CPgZ4As8IfAV7TWH853\\n\",\n       \"eQ6WEdNKmi8CH2ut97XWXeB3gZ+f37IcLDM8U35uGzi0nr8G/rT9D5RSjqv5gUNrrca9Pq2kcQjx\\n\",\n       \"FmNa0mSBjPU8w6W0cfAWYFrSfBd4Vym1q5RaAX4R+L35LcvBMmMqnUZrfaGU+hvAfwfcwNccy+nt\\n\",\n       \"wVQm960mdhThB495K8IO3mI4pHEwMRzSOJgYDmkcTAyHNA4mhkMaBxPDIY2DiTFtwPLRQ6kfuyjc\\n\",\n       \"brcZHo8Hl8tlhtvtRmtNv99Ha43WmouLC3q9HhcXF1xcXJjXAR5DyZBDmjFQSpnhcrkIhUKsrq6a\\n\",\n       \"q9/vJxAImKuQo9vtcnFxQbVaNaNSqdDr9ej1evT7fXq93n1/vZnhkGYMbMK43W5WV1dZW1szIxqN\\n\",\n       \"Do3z83Pa7Tbn5+ecn5+Ty+XI5XIcHx/T6XTodDqGVA5pHimEMKOkyWQy7OzskEqlWF9fN6PZbNJo\\n\",\n       \"NMw1Go3i9XrpdruUy2Uzb6/XQyn14LeomUijlNoHqkAP6GqtvziPRd03bCnj9XpZXV0lmUyyvb3N\\n\",\n       \"8+fP2djYIJ1Os7GxQSqVotFo0Gg0qNfrNBoN+v0+zWaTcrnM6uqqIcrFxcV9f7W5YFZJo4Gf1lqX\\n\",\n       \"5rGYZYBsTUKYlZUVQqEQsViMtbU1Njc3icfjrK6u4vV6gUtFeWVlhUAggFKKSCRCNBolHo+TSCRw\\n\",\n       \"u90AXFxc0G63325JM8DYSOhDhi1lfD7fEGk2NjYIhUKEQiFWVlaAH5NGKYXH4yESiRCLxYjH4yST\\n\",\n       \"SSNlWq3WPX+z+WAekubbSqke8M+11v9yDmu6d7hcLjweDx6PZ4g06+vrbG5u4vV68Xg8eL1etNa4\\n\",\n       \"3W5DGJ/PZ0iTSCRIJpN0u13a7Ta1Wm3IlH+omJU0X9Ja55RS68C3lFI/0lp/Zx4Lu2vIzXS5XHi9\\n\",\n       \"Xvx+P6urq0NSQ7abUYjSLPMEg0FjnofDYarVKj6fz2xTDx0zkUZrnRtci0qpb3BZ2vLgSCM3XaRF\\n\",\n       \"NBo15nUqleLJkyckk0mCweB9L3UpMEuFZVApFR48DgE/C/xgXgu7S4i15PF4WFlZIRKJkE6n2d3d\\n\",\n    
   \"5bOf/SyZTIa1tTUCgcCN87wNmEXSpIFvDH4oD/DvtNbfnMuq7hC2eS2kiUajbGxssLu7yzvvvMP6\\n\",\n       \"+rojaSxMTRqt9R7w+Tmu5c5gb0di+cgIBoPGStrZ2WF3d5dwOGzCB9dBa/2GN1kUZjHffT6fiUfJ\\n\",\n       \"Z0bHsuOt8wgrpfD7/UPxo2AwSDAYJBAIEA6Hef78Odvb26ytrREOhwkGg7dSZGV7sq2uaDRKu92m\\n\",\n       \"2+3S7/dRStHpdExAs9fr0e12Tdyq2+0uPXHeOtK4XC78fv9Q7CgSiRAOhwmHw0SjUXZ2dgxpIpGI\\n\",\n       \"kRK3tX7cbjd+v9+QxiaMx+Oh3W7T6XQ4Pz+n0+nQarVot9u0Wi263e6Cf4HZ8daRRiRNNBo1MSQx\\n\",\n       \"p2VITEkkjWw1N5FGtqdRSWMTxufz0Wg0aLVatFotms3mg/MYv5WkCQQCxGIx0uk029vbJJNJMxKJ\\n\",\n       \"hAkDRCKRqZRfj8eD3+8nHA5zfn4+5PsJhUJDcap6vW7CEeI1lhjVsuo4bx1pXC4XwWCQRCLBzs4O\\n\",\n       \"z58/N9tTJBIxRJnGGSc6jdfrJRgMmq0mEAgQiUSGIuH2tVgscnx8jMfj4eLigvPz86FErmUjzltH\\n\",\n       \"GvHYrq2tsbOzw4sXL4YSqvx+Pz6fD5/Ph8cz3c8jpAFYWVkxEkf0GNmW5BoOh3G73SY63mg0TG5O\\n\",\n       \"r9dzSHPfkEy8ZDJpJI3EmcREttM7J4HoNEIar9dLKBQymXsiPYQ4ovyKGd5qtSiXy4YkvV6PTqez\\n\",\n       \"iJ9hJrwVpLFzev1+P5FIhGQyycbGBpnMZceUWXJ4hSzy2O1243K5TBRcIP9GpIhk/ImEqdVqlMtl\\n\",\n       \"4+tZ1sj4oyeN2+025nQ4HGZtbY3d3V3W19cJBoMmQcq+Tgo7sdxOMJdhk9Z2+glB4/E4Ozs79Pt9\\n\",\n       \"/H4/2WyW168v2/3U6/WlSxF99KRxuVxEIhE2NjbY2Nhge3t7iDTAG8SZFFprkzg+bsj25/V6h+Jc\\n\",\n       \"8jgejxvCJBIJVldX0VpTr9c5Pj6e908yMx49aUTSbG5u8uLFC54/f87Ozs5CJI3oLfJYrjKv6Ehi\\n\",\n       \"govESSQSBAIBkskkmUwGr9drCLOM6RQ3kkYp9XXgzwIFrfWfGryWAP498BTYB/681vpsgeucCHLj\\n\",\n       \"7Uy6dDrNs2fP+MxnPkMikTA3ahqMbj8SBpAhiq+tAPf7feDNNAyllEncEjSbTV6/fk0sFjPkWqbY\\n\",\n       \"1G0kzW8D/xT419ZrvwJ8S2v9G4PG078yGPcOsV5kSxDfi9QtiQ9Git5GFeDb3JTRkhWxhGSIlSQj\\n\",\n       \"EAiY+JbEuGT4/f43pJsksycSCTY3N/H5fEP/pxDwvnAjabTW31FK7Y68/HPAnxk8/lfA/2SJSOPx\\n\",\n       \"eMwNiUajRgmWGyaksS0e+3oTzs/PqdVqVKtVarXakHe30WgMkebi4oJQKGQi5aurq8RiMWKxGAA+\\n\",\n       \"n28saUKhkCGN2+2mWq0CLIUJPq1Ok9Za5weP81zm1iwFRNJIuqaEA0TSBAIBVlZW8Hq9M0maWq3G\\n\",\n       \"yckJp6enVCoVU00pFZVSddnr9Yy3WdbSbrcBjPl/HWk2NjaMTtTpdJbCmppZEdZa62XpryfWiF12\\n\",\n       
\"kkgkiMViRCIRQ5rbShr7ua1TNJtNzs7OKBaL5HI5SqUS5XKZs7MzSqXSUAig1+sN5RnHYjG01kaP\\n\",\n       \"6ff7byi7Xq+XcDhMMplka2vLzNVsNqlUKm+Y9XeNaUmTV0ptaK2PlVKbQGGei5oEdjWkOO+SySSb\\n\",\n       \"m5vGxM5kMqRSKaLRKD6fzyiXkgw+CttcFq9st9s1JbavX78mm82aIduUbFm25WQPqUoIBoPEYjGa\\n\",\n       \"zabZbkQ5drlc+Hw+otEo6XSaXq9nzHVZc61We8M5CHfXXGBa0vwe8BeBfzi4/ue5rWgKiN9D3PeJ\\n\",\n       \"RILt7W2ePXvG06dPTapDJBK5NWnson47wFiv1zk8POT169fm2mw2h2JJNumk5qnT6dBut02saW1t\\n\",\n       \"jUajQafTeSP1wufzEYvF6PV6ZiuV8IbL5aJUKpkGA7JGwV0Q5zYm9+9wqfSuKaUOgX8A/DrwH5RS\\n\",\n       \"f5mByb3IRd6wvqFqSCHN1tYWL1684MWLF0MeYYley1/2KMTnIjda9JezszMzDg8POTg4MEMkkIxR\\n\",\n       \"k1wkQqvVol6vk0gkODs7M5JGAqOjksbr9RKJRFhZWTEE11oby08IfdeR8NtYT1+54q2fmfNapoZd\\n\",\n       \"SSCk2dzcZHd3l3fffdfk6MpfrO3HEdg3WdITRDKcnZ1xcnLCyckJxWLRkGV/f5+Dg4Mhb/A4JVXM\\n\",\n       \"8kajgd/vJ5VKUalUqNfrnJ+fm3waIcbKygoej4dwOEy/3zfv2zpMr9ej0WiYP4C73KIehUfY7idj\\n\",\n       \"u+ltolwnWUTEi95ydnZGuVw2yu3oNZ/PUyqVzFZ0k1IqZJJtpF6vUyqVyOfzHB4eEo/HjSS0Qw39\\n\",\n       \"fh+Xy2WSxjY2Nsx36Ha71Ot1yuWykYzifV40cR4FaQRXVQLI86tCBKKgik6Sy+U4OjoyPWbEFyPX\\n\",\n       \"s7MzKpUKzWbT3CghzzjITZXHtVqNUqnE8fGxybXp9XrGqSdrFZdAIBAgHo+brEORMqVSiUAgQKfT\\n\",\n       \"Md/tunXMC4+GNKOSxiaN/d4o7OL8er1OpVIhl8uxt7fH3t4e+/v7Q7kvrVZryDsr29F1N8oOaF5c\\n\",\n       \"XAyRRspihDB2/Euufr+fWCxmrp1Oh3K5TC6XIxAI0Gq1zP9xF3jwpBklwyhpxmXf2QFKiR1Jgb7c\\n\",\n       \"jL29PT788EM++uijoe1rmmoB0XcEsj0FAgFDbOmBI+SziSPebVl7q9Uil8sRjUaNs1K2v7uo8nzw\\n\",\n       \"pJkGtu+k0+lQLBYpFArk83mOj485ODigWCxSq9WGPLvzEvsi2arVqgl1yJZXrVYJBoNDmYTLVu77\\n\",\n       \"1pHG9ptIzdHJyQnZbJbDw0MODw/J5XIUi0Xq9bqJWo9Ki1lgk8btdhOPxymXy1SrVer1OsDMecqL\\n\",\n       \"xPKtaMGwTepWq0WtVqNYLJLNZvn000/59NNPjT+mVquZisd5KpjdbpdWq2WSyePxuJE0tVrNmN52\\n\",\n       \"s6RlwltHGsCQptFoUKlUjKTZ29vj5cuXQ4ruIioeRdL0+32j1NqkEY/1ysrKlUQdp9zfFbkeJWmu\\n\",\n       \"+yH7/T7VatW0bT06OmJ/f59sNku5XDalJrIl3ee6bVLYmYXSDmV9fZ1MJoPH4zHEsy26ReFRkgbe\\n\",\n       
\"tKoEWmuq1SpHR0e8fPmSTz75hGKxSLFYNKQR5XfRpBlH7nHrHk1JlVqqVCpFJpMxFmOn06FarS68\\n\",\n       \"HvzRkWYcUewUCJE02WyWjz76iA8++MC0dG02myYz7i6cZLK2cWGN6ySNpFWkUikT5RbCXBWEnSem\\n\",\n       \"zRH+KvBXgOLgn/2q1vq/LWqR02Lcnt/v96lUKoY077///ht5M/eJ20oaaWAgfqZqtUqhULiTRPTb\\n\",\n       \"0PK3gfdGXtPAb2qtvzAY90YY+y9vfX2ddDpNPB4nFApda66K1LmPhCZJRw2HwyQSCZOcJRmG4rAT\\n\",\n       \"AowmiYlUEasvl8tRLpdNWGPRuJE0+rJbZ3nMW0thB0pALxqNsr6+zsbGBolEglAoZFq2jhv3CSFN\\n\",\n       \"JBIxlRE2aezk93HZhUKaQqHAwcEBR0dHQwHURWOWDfBvKqX+WCn1NaVUbG4rmhB2kyIhzW0kzX3C\\n\",\n       \"6/UOkSYejw81V7pJ0pyfnxtJc3h4OESapZA0V+CfAc+47LmXA/7R3FY0Iez67IcmaSQPWCRNNBod\\n\",\n       \"kjS2fnKdpMlms5yenppzGRa+/mk+pLU2OcFKqd8Cfn9uK5oQotNIInkymSQSiZhg4H2tScxgO4dZ\\n\",\n       \"EqYSiQTr6+tsbW2RyWSMHhYIBMZaP6P6l/Tnk5jYXVp7MCVplFKbetB4Gvhz3GP/YHF2SasySWjy\\n\",\n       \"+/33VtIqKaj2GQt2QlgymSSVSg2RRlIfrktBHW1ZctdkEUyTI/xrwE8rpT7PpRW1B/y1ha7yGowr\\n\",\n       \"WRG94L4kjUgW+3wFaZYk1RLpdJqtrS2ePHliiH6dpLFrqUTS2Jl6d7n1Tpsj/PUFrGUqjEoaqdH2\\n\",\n       \"+/33uj3ZSWB221nJYRZJ8+TJE6N/XRWctJPdbcKMugzuCstpXkwIW4cQPeI+I8M+n8+U4MqhGlJT\\n\",\n       \"HolEePr0Kdvb24bgkkhuVxzYhGg0Gqauqlarsbe3Z3wzrVbLHHd4V7GyR0GaZYM4G9fW1ox1ZI9U\\n\",\n       \"KkUqlSIWiw0RRgg/WmQnllKhUKBYLPLq1StjMUkZzLwTxa6DQ5oFQOqWUqmUOfPSHuFw2BztI6a1\\n\",\n       \"LR1HKyQqlQr5fJ6DgwNevXrF0dERx8fHxjfT6XTmmiR2ExzSLAB2WW0mk2F7e5utrS0z7C5Y44KV\\n\",\n       \"dsmLOPLy+Tz7+/u8fPmSQqFgEsXsLudLbXI7GIa0MBFFN5PJ8OTJEzY3N0mlUsTj8aH+xDdFovv9\\n\",\n       \"viFMs9mkXq9TrVY5Ozvj9PSUs7Mz0zb2PlrGPkrS3LUSHAqFTFt88UrLibrJZJJoNGpaxN4GYikJ\\n\",\n       \"aaSGXBoM1Ot1c0jHQ+oasdS46x8yFAqRSqXY3d1ld3fXHFsoZy2IlLltvq9IGrv+W6SNlPOKz8Yh\\n\",\n       \"zQPF6uoqqVSKZ8+e8bnPfc7Ej+RUXemGPomksbcn2+QWSTNJE6Z541GSZpxyab832ilr9H3x3IoX\\n\",\n       \"1/b9jJvz3Xff5Z133uHJkyek0+mhz/r9ftMu5LrSYBvtdptyuUw2m+Xo6IhXr15RKBRMKufS99x7\\n\",\n       \"SBiXOjl6kyTsEAwGTWrC6Pv29hKLxYba3o+LZ0kDpc3NTRKJhCGJHW+ynXc3EafdbnNycsKrV694\\n\",\n       
\"+fIl2WyW4+Nj0zDpvvGoSDOK6yRNMBg0YQcbbrebra0ttre32d7efuMc7nGhCdvbG4lEhoKVoxHu\\n\",\n       \"20gaKeA7ODjgww8/pFAomO1p6UmjlMpw2Qo2xWVw8l9orf+JeiB9hEcfy/PrJI3H4yGTyZgt5/nz\\n\",\n       \"50NnXI6edwAYCSTkuGo9t7XqhDT7+/v88Ic/NH327tKBdx1ukjRd4O9ord9XSq0Cf6SU+hbwSyxJ\\n\",\n       \"H2HpNNVoNCiXy5yenhKJRNBa4/V68fl8Q/9eKWVM5KdPn74xn9vtNrGh9fV10wBaAopXKbOjVQOy\\n\",\n       \"tttgtHl1qVQyVpJ0qVgmXEsarfUxcDx4XFdKfQhss0R9hPv9Pu12m0qlQrFYJBaL0e12TUbf6Mlw\\n\",\n       \"LpeLcDhMOp1Ga004HH7j/fX1ddPYUdIuRYkdR4TRagGYrMVsp9MxZTT1ep1isUi1Wh1qwrhMuLVO\\n\",\n       \"oy4bUH8B+D8sUR/hUdKsrq6a9hx263iBUopwOIzWmlAoRDr946XLjbcj1NLv7jp9xCbMNJJG+gOX\\n\",\n       \"SiVKpRLFYpFKpbK051neijSDrek/AX9ba12zfzyt77ePcL/fNx0YisXiUKdyaZpow+VyGT9KKpUa\\n\",\n       \"ynyT66gCK1iUpOl2u9RqNU5PTzk+Pn74kkYp5eWSMP9Gay2tX5emj7DoNNJdSvrTSRPEi4uLIR+L\\n\",\n       \"Umqom6bMYV9nWYsQxz7/abRFrH1I2Pn5OScnJ+TzeZP+IA2tm83mw5M06vJX/Rrw/7TW/9h6a2n6\\n\",\n       \"CEtJqijCfr+f9fX1ofiMLTkWGZeSue1On5KmaSeDS4TabgApXc/L5TLFYvFO65gmxU2S5kvAXwC+\\n\",\n       \"r5T63uC1X2WJ+gjLX269XjfnKJ2dnb0R1LvLjlKjOb2jkuX4+JijoyMz7IM55FqtVh8mabTW/4ur\\n\",\n       \"a6OWoo+w1AFJzY/b7aZSqZgD06WrpuSvLHot9vYkEkZiSNI9NJ/Ps7e3x8cff8wnn3xCo9Ew/XCE\\n\",\n       \"6DIeHGkeAuzOVoBptpjP58lms0YxtmNBtt9lnsnnIslsE3o0v7dWq5mO57lcjkKhQLvdNucvCFHs\\n\",\n       \"U+mWDY+CNPJXrZSi1WpxenpKNpvF7/fT7XZNaqVc7bGIioXz83MqlQqlUonT01Ojq4jeIo0hT05O\\n\",\n       \"3sjxtfsSLysePGkAE4+RDt9iRQE0Gg2i0agpsJcqTGnZMer8mwWyPQlp8vk8R0dHpmmSJIaLxKnX\\n\",\n       \"66b+etTCWoby4avw4Eljm7WiT5RKJQBzOHoymTRVAc1mE8B0k1oEhDR2Vwdp15bL5Ux7NluyyHex\\n\",\n       \"v9ey4sGTBob9LHa7VVGS7Vzber1uXpPsOEldsHv32qkQdinsOBMahmNP0skhl8uRz+cpFotmm5KT\\n\",\n       \"5x4yHgVpbEiBfKvVAjBZ/XIqW6lUolarUalUKJfLpFIpkxAuB6OK4uz3+00vO/tgU/v8J5Fctjkv\\n\",\n       \"JnUul+Pk5MSYz+M81A8Rj5Y0UnQmUqZarbKyskIgEBjK7C8UCibZSq4Sm5K2rEJC0UVEsRXnnA2l\\n\",\n       \"FKenp0NDSLPoBop3hUdJGtlK2u320FGFkoAl5zcVCgXW1tZIp9NDTQ/FGSjnEYxKqnw+b0ahUDAK\\n\",\n       
\"sMB20tVqNeOjua/qgXnj0ZEGuNZctQ/VEgki3ctF75EjkMXiEokiQ6wgGaOQKgK52r2JHwMeJWmu\\n\",\n       \"g2xfzWZzqG5aTpArFAqmikB0HPvMbTnexx4C27knyrb4YKRA35E0DxBiUQFmC2s2m5TLZQKBgOni\\n\",\n       \"4PP5THqn3HyJHdlSRHJe7O1p9IhlGcvssJsE6jrmX5Mj/FVu6CN8nzk2N2FUzxmXCG6/P5rmYB+h\\n\",\n       \"PI4IVzUaWmaH3ThorcdGeG8izQawYecIA7/AZVS7prX+zWs++3B+HQdjcRVpps0RhiXpI+zg7nHr\\n\",\n       \"XAErR/h/D15aij7CDu4etyLNYGv6j1zmCNdZoj7CDu4e1+o0YHKE/wvwX0dSPuX9XeD39eCwDet1\\n\",\n       \"R6d54LhKp7lW0lyVIzxIJhfcax9hB3ePm6ynnwT+APg+lyY3wN8HvsLl1mT6CFt1UPJZR9I8cExl\\n\",\n       \"cs8ChzQPH1NtTw4cjINDGgcTwyGNg4nhkMbBxHBI42BiOKRxMDEc0jiYGAvz0zh4vHAkjYOJ4ZDG\\n\",\n       \"wcRYKGmUUu8ppX6klPqTQRfQWefbV0p9Xyn1PaXU/53i819XSuWVUj+wXksopb6llHqplPrmJLlB\\n\",\n       \"V8z3VaXU68Eav6eUem+C+TJKqf+hlPqhUuoDpdTfmmWN18w39RqB8fms8xiAG/gY2AW8wPvAT8w4\\n\",\n       \"5x6QmOHzP8VlItkPrNd+A/h7g8e/DPz6jPP9GvB3p1zfBvD5weNV4CPgJ6Zd4zXzTb1GrfVCJc0X\\n\",\n       \"gY+11vta6y7wu8DPz2HeqdNMtdbfAcojL/8cl21tGVx/Ycb5YMo1aq2PtdbvDx7XAbsF78RrvGa+\\n\",\n       \"qdcIi92etoFD6/lrfrzgaaGBbyulvquU+qszziVYRHvbmVNh592Cd57puoskzSJs+S9prb8AfBn4\\n\",\n       \"60qpn5rn5PpSjs+67plTYUdb8M66xnmn6y6SNFkgYz3PcCltpobWOje4FoFvcLkFzor8oFRHMhJn\\n\",\n       \"am+rtS7oAYDfmnSN17XgnWaN1nz/VuabdY2LJM13gXeVUrtKqRXgF7lsJTsVlFJBpVR48DgE/Czz\\n\",\n       \"STOV9rYwh/a2s6TC3qIF70RrXFi67izWzC209y9zqbF/zGUV5ixzPePSAnsf+GCa+YDfAY6ADpf6\\n\",\n       \"1i8BCeDbwEvgm0Bshvn+EpcVqd8H/nhwc9MTzPeTQH/wHb83GO9Nu8Yr5vvyLGvUWjthBAeTw/EI\\n\",\n       \"O5gYDmkcTAyHNA4mhkMaBxPDIY2DieGQxsHEcEjjYGI4pHEwMf4/w2zPGHuGeikAAAAASUVORK5C\\n\",\n       \"YII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b23ec90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       
\"AAALEgAACxIB0t1+/AAAEJBJREFUeJzt3X2wXVV5x/HvjxtIbkxEgoINxsZSoYbRiiiD4guoZaiD\\n\",\n       \"KK1VaavWdux01ApUHZWZdvijHa1OR3ScdsZCfcGqbdXiS1GhlSoqgkTeXxQ70oIi0BIhCSG5CU//\\n\",\n       \"OCfhcsnN3TubnXNO+H5m7nD2Ps/Za92zzrk82Wvv9aSqkCRJ0u7bZ9QdkCRJmnQmVJIkSR2ZUEmS\\n\",\n       \"JHVkQiVJktSRCZUkSVJHJlSSJEkdLRpl40lcs0GSJE2MqsrO9veaUCU5ETgbmALOqaq/nhuzevXq\\n\",\n       \"h71u3bp1HHDAAQ/Zt2hR865OTU217Wer+Kba9Hnx4sWNY5ctW9Y49uCDD24cC7By5crOsRdeeCEn\\n\",\n       \"nHDCQ/YdcsghjY974IEHNo5dvnx541iA6enpxrF9rdE2MzPTOPb2229vHNtm7AAOOuigh+07++yz\\n\",\n       \"Of300x+2f8WKFY2Pu3Tp0saxbb6rbcZjy5YtjWMBNm7c2Dj2rrvuahx70003NY694IILGsceeeSR\\n\",\n       \"O93/la98hZNOOulh+4899tjGxz788MMbx+63336NY/v6O7u3OOusszjrrLNG3Y15bdu2rXFsm+/f\\n\",\n       \"/fff3zh206ZNjWMB1q9f3zj23nvvbRR39NFHz/tcb1N+SaaAjwAnAmuAU5M8ra/2JEmSRqXPa6iO\\n\",\n       \"Bn5cVbdU1QzwWeAVPbYnSZI0En0mVIcAt87avm24b0FLlizppUPaMw499NBRd0G76Zhjjhl1F9TB\\n\",\n       \"YYcdNuouaDcdd9xxo+6COuozodrtC1DaXOei8WNCNblMqCabCdXkMqGafH1elP5TYNWs7VUMzlI9\\n\",\n       \"xLp163Y8XrJkicmUJEkaC2vXrmXt2rWNYvtMqK4AnppkNfAz4DXAqXOD5t7NJ0mSNA6OOuoojjrq\\n\",\n       \"qB3b55xzzryxvSVUVbU1yVuBrzNYNuHcqrqxr/YkSZJGpdd1qKrqq8BX+2xDkiRp1Ea6UjrAfffd\\n\",\n       \"1yiuzaJ+bRYgaxvfJvaBBx4Y+XHbLk7Z17H7WiRTkqRxYC0/SZKkjkyoJEmSOjKhkiRJ6siESpIk\\n\",\n       \"qSMTKkmSpI5MqCRJkjoyoZIkSerIhEqSJKkjEypJkqSOTKgkSZI6MqGSJEnqaOS1/JYtW9YoLknj\\n\",\n       \"Y7aJ7VNfte62bt3aSyzAli1bGsfOzMz0EttXbUUYj5qCfX0+29Rh3J14DYzD35dx+BxLeqhez1Al\\n\",\n       \"WZXk4iTXJ7kuydv6bE+SJGkU+j5DNQOcUVVXJVkGrE1yUVXd2HO7kiRJe0yvZ6iq6udVddXw8Qbg\\n\",\n       \"RmBln21KkiTtaXvsovQkq4Ejgcv2VJuSJEl7wh5JqIbTfZ8DThueqZIkSdpr9H6XX5J9gc8Dn6qq\\n\",\n       \"8+c+f/fdd+94PD09zfT0dN9dkiRJekT1mlBlcH/xucANVXX2zmJWrFjRZxckSZJ61/eU37HA7wPH\\n\",\n       \"J7ly+HNiz21KkiTtUb2eoaqqb+Nq7JIkaS9nsiNJktTRyEvPNC110qbcQ9uyDG3i25Tr6Ou4bcqt\\n\",\n       \"tC0v0ubYbcrJ9FWGp88yIPvs0/zfG236MTU11Uvspk2bGsdCf2OytxuH96Lt534c+izt7TxDJUmS\\n\",\n       
\"1JEJlSRJUkcmVJIkSR2ZUEmSJHVkQiVJktSRCZUkSVJHJlSSJEkdmVBJkiR1ZEIlSZLUkQmVJElS\\n\",\n       \"RxNTeqbPcitt9FVOpq/YPktO9PU+tyn50lZf5Vb6GpOm3w+ArVu3No7VZLOUjDR+5k2okvw2UMDO\\n\",\n       \"ikZVVX2hSQNJpoArgNuq6uW71UtJkqQxtqszVC9nkFDNp1FCBZwG3AAsb9opSZKkSTJvQlVVf9D1\\n\",\n       \"4EmeBLwM+Cvgz7oeT5IkaRwteLFKkicmOTfJ14bba5L8UcPjfxB4J9DfRU2SJEkj1uTq348DFwIr\\n\",\n       \"h9s3A2cs9KIkJwF3VtWV7Pw6LEmSpL1Ck4Tq8VX1T8A2gKqaAZrcTvQ84OQkPwE+A7w4ySfnBm3Y\\n\",\n       \"sGHHT5s7miRJksZFk2UTNiQ5cPtGkmOAexZ6UVWdCZw5fM2LgHdU1evnxi1btqx5byVJksZQk4Tq\\n\",\n       \"7cCXgV9J8l3gCcCrdqMtF06RJEl7pTRZIC7JIuBwBtdC/XA47de98aQOOuigRrEu7Ll7sZO4sGfS\\n\",\n       \"3yV3fS3s2Sa2ze/X5j12YU9J6l9V7fSP+IJnqJJMA28Gns/gLNMlSf6uqu5/ZLsoSZI0mZpM+X0S\\n\",\n       \"uBf4MIMzVL8LnAf8To/9kiRJmhhNEqojqmrNrO1vJLnhkerA9PR0o7iZmeazjG2notpMJ7aJbTO1\\n\",\n       \"s2hR87KKbWrdtZ0+62u6ra8p27bTXG360UabKb999923cezKlSsXDhpq+7nfuHFj49h77lnwPpQd\\n\",\n       \"Nm3a1Di2r+9TW23G5IADDmgcu2bNmoWDhk455ZTGsddee23jWIBLL720cezNN9/cOLbNndnWH5xs\\n\",\n       \"bf6/0+b7tGTJksaxS5cubRwL8JjHPKZx7P77798obu3atfM+1+Qd+kGS527fGN7lN/8RJUmSHmV2\\n\",\n       \"VRz52lkx30lyK4NrqJ4M/HAP9E2SJGkiLFQcWZIkSQvYVXHkW2ZvJzkIaD7ZKUmS9CjRpDjyyUlu\\n\",\n       \"Bn4CfBO4Bfhqz/2SJEmaGE0uSv9L4LnAj6rqKcBLgMt67ZUkSdIEaZJQzVTV/wL7JJmqqouBZ/fc\\n\",\n       \"L0mSpInRZPGjdUmWA5cA/5jkTmBDv92SJEmaHE3OUL0SuA84A/ga8GO8A1CSJGmHBc9QVdX2s1Hb\\n\",\n       \"gI/32htJkqQJlPnKASTZwGAhz52pqnps58aTWrZsWaPYqampxsdtE9s2vs3y+21i22hTwqHPMjx9\\n\",\n       \"lYhp04e25Sza9LlNqZM2sW0+F21+v82bNzeOhXbvxTiUDemrlNPuxDfV1+e+7fdaejTqo1xVVVFV\\n\",\n       \"Oz3wrtahapbp7EKSxwHnAEcwSM7+sKq+1/W4kiRJ46Sff5Y96EPABVX1qiSLgOaVCiVJkiZEbwlV\\n\",\n       \"kv2BF1TVGwCqaivQvGS9JEnShOjnIp+BpwB3JflYkh8k+fskS3tsT5IkaST6TKgWAc8C/raqngVs\\n\",\n       \"BN7dY3uSJEkj0WdCdRtwW1V9f7j9OQYJ1kNs3rx5x0+bO2IkSZL6NLyrb8fPrvR2DVVV/TzJrUkO\\n\",\n       \"q6ofAS8Frp8bt3jx4r66IEmStNvmLr2wq6Sq77v8/pRBuZr9gP8C3thze5IkSXtcrwlVVV0NPKfP\\n\",\n       
\"NiRJkkatz2uoJEmSHhX6nvJb0NKlzVZS6LPkS1+lZ/pY9r6tNuUsALZs2dJLbF+lS/osM9Rm/Nq8\\n\",\n       \"z21KxPRZhmfStPn9ZmZmWh27zQ0xbT4Xbfq8t4+ftKft6e+UZ6gkSZI6MqGSJEnqyIRKkiSpIxMq\\n\",\n       \"SZKkjkyoJEmSOjKhkiRJ6siESpIkqSMTKkmSpI5MqCRJkjoyoZIkSepo5KVnmpYC6bOMywMPPNA4\\n\",\n       \"ts1S9m363LaESlNtSmpAu/eirxIcbfrc9vfrqxRBX5+LNqWO2v5ubcZ6bzcOpZHajIdjp3HW1/8b\\n\",\n       \"xl2vZ6iSvCfJ9UmuTfLpJIv7bE+SJGkUekuokqwG3gQ8q6qeDkwBr+2rPUmSpFHpc8rvXmAGWJpk\\n\",\n       \"G7AU+GmP7UmSJI1Eb2eoqupu4G+A/wF+Bvyiqv69r/YkSZJGpc8pv0OB04HVwEpgWZLf66s9SZKk\\n\",\n       \"UenzovRnA9+tqv+rqq3AF4DnzQ1av379jp/Nmzf32B1JkqR+9HkN1U3AnyeZBu4HXgpcPjdo+fLl\\n\",\n       \"PXZBkiSpf31eQ3U18EngCuCa4e6P9tWeJEnSqPS6sGdVvR94f59tSJIkjZqlZyRJkjoyoZIkSepo\\n\",\n       \"5LX8tm3b1ihuXOpc9VVTsE09o77q4kHz8YB273Nfxx0XbWrutdHX50K7r83fgLbfP2lv8Gj9W+QZ\\n\",\n       \"KkmSpI5MqCRJkjoyoZIkSerIhEqSJKkjEypJkqSOTKgkSZI6GsuEasuWLaPugjpos0SCJEl7AxMq\\n\",\n       \"PeImcR0pSZK6GMuESpIkaZKYUEmSJHWUUS4Rn+TRuT69JEmaSFW10/pTI02oJEmS9gZO+UmSJHVk\\n\",\n       \"QiVJktTR2CVUSU5MclOSm5O8a9T90fyS/EOSO5JcO2vfiiQXJflRkguTPG6UfdT8kqxKcnGS65Nc\\n\",\n       \"l+Rtw/2O4ZhLsiTJZUmuSnJDkvcO9zt2EyTJVJIrk3x5uO34TbCxSqiSTAEfAU4E1gCnJnnaaHul\\n\",\n       \"XfgYg7Ga7d3ARVV1GPAfw22NpxngjKo6AjgGeMvw++YYjrmquh84vqqeCTwDOD7J83HsJs1pwA3A\\n\",\n       \"9ouZHb8JNlYJFXA08OOquqWqZoDPAq8YcZ80j6q6BFg3Z/fJwCeGjz8BvHKPdkqNVdXPq+qq4eMN\\n\",\n       \"wI3AITiGE6Gq7hs+3A+YYvBddOwmRJInAS8DzgG23zXm+E2wcUuoDgFunbV923CfJsfBVXXH8PEd\\n\",\n       \"wMGj7IyaSbIaOBK4DMdwIiTZJ8lVDMbo4qq6HsduknwQeCcwu7SE4zfBxi2hcg2HvUgN1uRwTMdc\\n\",\n       \"kmXA54HTqmr97Occw/FVVQ8Mp/yeBLwwyfFznnfsxlSSk4A7q+pKHjw79RCO3+QZt4Tqp8CqWdur\\n\",\n       \"GJyl0uS4I8kTAZL8EnDniPujXUiyL4Nk6ryqOn+42zGcIFV1D/BvwFE4dpPiecDJSX4CfAZ4cZLz\\n\",\n       \"cPwm2rglVFcAT02yOsl+wGuAL424T2rnS8Abho/fAJy/i1iNUJIA5wI3VNXZs55yDMdcksdvvwMs\\n\",\n       \"yTTwG8CVOHYToarOrKpVVfUU4LXAN6rqdTh+E23sVkpP8pvA2Qwusjy3qt474i5pHkk+A7wIeDyD\\n\",\n       
\"+f6/AL4I/DPwZOAW4NVV9YtR9VHzG94V9i3gGh6cWngPcDmO4VhL8nQGFy3vM/w5r6o+kGQFjt1E\\n\",\n       \"SfIi4O1VdbLjN9nGLqGSJEmaNOM25SdJkjRxTKgkSZI6MqGSJEnqyIRKkiSpIxMqSZKkjkyoJEmS\\n\",\n       \"OjKhkjRySb4z/O8vJzn1ET72mTtrS5IeSa5DJWlsJDmOwSKHL2/xmkVVtXUXz6+vquWPRP8kaT6e\\n\",\n       \"oZI0ckk2DB++D3hBkiuTnJZknyQfSHJ5kquT/PEw/rgklyT5InDdcN/5Sa5Icl2SNw33vQ+YHh7v\\n\",\n       \"vNltZeADSa5Nck2SV8869n8m+ZckNyb51J59NyRNokWj7oAk8WDpm3cB79h+hmqYQP2iqo5Oshj4\\n\",\n       \"dpILh7FHAkdU1X8Pt99YVeuGte0uT/K5qnp3krdU1ZE7aeu3gF8HngE8Afh+km8Nn3smsAa4HfhO\\n\",\n       \"kmOryqlCSfPyDJWkcZI52ycAr09yJfA9YAXwq8PnLp+VTAGcluQq4FJgFfDUBdp6PvDpGrgT+Cbw\\n\",\n       \"HAYJ1+VV9bMaXBNxFbC6w+8k6VHAM1SSxt1bq+qi2TuG11ptnLP9EuCYqro/ycXAkgWOWzw8gdt+\\n\",\n       \"9mrzrH3b8G+lpAV4hkrSOFkPzL6A/OvAm5MsAkhyWJKlO3ndY4F1w2Tq14BjZj03s/31c1wCvGZ4\\n\",\n       \"ndYTgBcCl/PwJEuSFuS/uiSNg+1nhq4Gtg2n7j4GfJjBdNsPkgS4EzhlGD/7FuWvAX+S5Abghwym\\n\",\n       \"/bb7KHBNkrVV9brtr6uqf03y3GGbBbyzqu5M8rQ5x2Yn25L0EC6bIEmS1JFTfpIkSR2ZUEmSJHVk\\n\",\n       \"QiVJktSRCZUkSVJHJlSSJEkdmVBJkiR1ZEIlSZLUkQmVJElSR/8PcYZmdpOLkfYAAAAASUVORK5C\\n\",\n       \"YII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b1bd290>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAFYRJREFUeJztnVuMZHldxz+/ut+7q+89072zs8Oa8EACL/gARB4IWWIi\\n\",\n       \"+qIhMRpE44OiURMRHwSjD0gCMb4QlV2Dl4BGAwETFTAa8cHLml12UXbZTRimZ/tW3VXVdb//fej6\\n\",\n       \"/ffUmeqeruq6nJo5n+Skbl2nf931rd//8rscMcbg4zMKgXkb4LN4+KLxGRlfND4j44vGZ2R80fiM\\n\",\n       \"jC8an5EZWzQi8oyIvCIir4nIxyZplI+3kXH2aUQkCLwKvA94A/hv4EPGmO9O1jwfLzKup3kn8Lox\\n\",\n       \"5q4xpg18Cfjg5Mzy8TKhMd93E9hzPL4P/LDzB0TE32pecIwxMuz5cT2NL4jHmHFF8waw63i8y7m3\\n\",\n       
\"8XkMGFc0zwNPi8iTIhIBfgr46uTM8vEyY81pjDEdEfll4J+AIPCsv3J6fBhryX2lE/sT4YVn0hNh\\n\",\n       \"n8cYXzQ+I+OLxmdkfNH4jIwvGp+R8UXjMzK+aHxGxheNz8j4ovEZGV80PiPji8ZnZMZNwgJARO4C\\n\",\n       \"JaALtI0x75yEUdNGROwRCAQIBoOEQiGCwSDBYJB2u02n07G3s7AnEAhYmwCMMWhc0H1/3lxLNJwn\\n\",\n       \"Y73XGJOfhDGzQEQIhUL2CIfDLC0tkclkyGQyJBIJ8vm8PQqFwtRtCoVCRCIRwuEwkUiEXq9Ht9ul\\n\",\n       \"2+3S6XTs416vR6/Xm7twrisagKGRUC8TDAaJRCLEYjHi8ThbW1tsb2+ztbXF6uoqd+/e5e7du7Tb\\n\",\n       \"7ZmIJhgMWlsSiQTdbpd2u02r1aLVatFutxERK6B5MwlP800R6QJ/bIz50wnYNFXU00SjURKJBOl0\\n\",\n       \"mu3tbe7cucNb3vIWbt68STwep91uk8/PxoGGQiFisRjpdJpMJkOn06Fer9NoNOywBXhCMHB90bzL\\n\",\n       \"GHMgIuvAN0TkFWPMtyZh2DTRDymRSLC0tMT6+jo7Ozs89dRT3Lp1i0KhwBtvvEE8Hp+pPalUimw2\\n\",\n       \"S6fTIRaLUa/Xqdfr1Go16vU6xhja7fZiD0/GmIP+bU5Evsx5aYunRSMihMNh4vE4mUyGbDZLJpMh\\n\",\n       \"Ho8TDocfmJDOglAoZO1ZXV214tCjWCySz+cREZrN5tw9ztiiEZEEEDTGlEUkCbwf+N2JWTYlVDSJ\\n\",\n       \"RMKKJpVKDYhm1sJxDk8rKysEg0G7Yur1euRyOUSERqNBsViciU2X2nuN924CX+7/Y0PAXxljvj4R\\n\",\n       \"q6aIiBCJRB7wNIlEgkgkMnMvA4OeZmVlhVgsZrcCAoEA0WiUZrNJsVgkEJj/1trYojHGfB94+wRt\\n\",\n       \"mQnDPE06nZ7r8BQIBAiHw8RiMZLJJMlkkmg0SjQaJRKJ0G63OT09JR6PW/vmOa+ZxJJ7IVARqKfR\\n\",\n       \"SfDKygrpdJpYLEYoNJ9/R6/Xo91uU6/XqVQqBINBwuEwwWCQeDxOPB4nGo3a5wKBgB2+5iGe+fu6\\n\",\n       \"GeAUTCAQIBKJkEwmB0QTj8fnJpput0ur1aJer1Mul2k0GvR6PTtsOUUTCoXm4g2dPBaiAQbCBupp\\n\",\n       \"nMPTPD2NUzSVSoVGo0G32yUYDJJIJIZ6mnkK57EYnlQs7rnD0tIS2WzWzh0AG2+a5XZ9r9ej0+nQ\\n\",\n       \"arVoNBq0220AuzWgoolEIlY0ML841CMvGp34RiIRIpEI6XSa5eVlG29Kp9MD3/RGo0GhUKBSqdgP\\n\",\n       \"bxY2qqg1DhWNRu0GpIpGg6o6p5kXj41oNK6jgtEjnU5TLpft0HB2dkY+n6dardJqtWZmozPaHg6H\\n\",\n       \"iUaj1uZhw1Ov1/PnNNPCKZpUKsXy8vKAp0mlUoRCIVqtFmdnZxwdHVlPM0vRiIgVzTBPo1Fw95xm\\n\",\n       \"HjySnsa5WgoGg0SjURvXWVtbI5vNWi+j8SUdlg4ODsjn8zMdnnRY0mFUBaORb53P6Mpp3qunR040\\n\",\n       \"KhQ94vE4a2tr3Lx5kxs3bnDjxg1u3rzJ0tISgUCAer3O2dkZJycn7O/vs7e3x9HREaVSiWazOROb\\n\",\n       
\"3bGn5eVlksmk3aH2Go+caODNfJlIJEIqlbKieeqpp3jiiSdYXV0lk8kQCASo1WpWNAcHB9y7d49i\\n\",\n       \"sThX0SwtLQ2ENbzGIycazZfR+FI6nbapD3fu3OHOnTtWUCJiPU0ul7OeRlMRfE8znIeKRkSeA34U\\n\",\n       \"ODbGvK3/3Arw18At4C7wk8aY+YdfeVM0zr2YtbU1bty4wa1bt7h9+7bNiGu1WpTLZQqFAicnJxwd\\n\",\n       \"HbG/v29TK7vd7szsdQYsve5prrJ6+jPgGddzvwV8wxjzQ8A/9x97gkAgQDweJ5vNsr29ze7uLhsb\\n\",\n       \"GywtLRGPxxERarUauVyOu3fv8sorr3Dv3j1yuRzVanUmubi6xNY8Zc0iTKVSAxP0cDi8mKLpZ+K5\\n\",\n       \"E2V/DPhC//4XgB+fsF1jEwgESCQSZLNZtra22N3dZXNzk+XlZWKx2AOiefXVV7l37x4nJydUKhUr\\n\",\n       \"mmkGA937Mrono6LR/J5QKORJ0Yw7p9k0xhz17x9xnlvjCS7zNE7RnJyc8IMf/IDXX3+d4+Nj62mm\\n\",\n       \"LRgYXOEN8zSpVMq+7lxae0VA154IG2PMvPvrOf+poVCIZDLJysoKW1tb7Ozs2NVSLBYDsKK5d+8e\\n\",\n       \"r732GuVymUqlYoenaaNeRvdldOdXE8vj8fhA6sMshDwK44rmSES2jDGHIrINHE/SqFHQIKSuiDKZ\\n\",\n       \"DGtra6yurrK6uko2myWRSBAMBm0gslarUa1WqVQqNoTQarVmIhjAbjam02nS6fQDw6eWsGjwtFwu\\n\",\n       \"U6vVbDBTa6LmlSs8rmi+Cvws8Af9269MzKIRERGi0ajNeFtZWWFtbc0KJ5vN2jyUbrdLo9EYEI2G\\n\",\n       \"C1qt1sw+hEgkYld1a2trbG9vs7y8bCfqnU6HRqNBo9GgXq9TKpWoVqs0Gg06nc7Mo/BurrLk/iLw\\n\",\n       \"I8CaiOwBvwN8CvgbEfkI/SX3NI28DM2h1bjS+vr6A55G/8FaT+QWjX5zZ+lpMpkMGxsb7OzssLm5\\n\",\n       \"STabtcNnp9Oh2WxSrVYpl8vW0zSbTetpPC0aY8yHLnjpfRO2ZSw0fdMZW1LR6J6HfmtbrdYDgqlU\\n\",\n       \"KjO31yma3d1dtra2hnqaSqVCsVjk7OzMJmepaOaZ7rnwO8LO4UlFs7S0NLCjqpt4xWKRk5MTTk9P\\n\",\n       \"Zx6QdB7OVNONjQ1WVlasvQCtVotSqcTx8TGHh4ccHBxQKBSo1WpzFww8AqJxDk8qmuXl5YEdVf0Q\\n\",\n       \"NL6kUexZpT44N/JCoZBdWq+urrKxsWFrr9TeZrPJ2dkZx8fH7O3tWZt1dTfvldTCi0Y9jVM0F3ka\\n\",\n       \"FY16mlmLRld4bk/jDBsANJtNSqUSuVxuQDROTwN+uufYaKL4VUSTy+Xmmi+jnSqSyaSNM62vr5NM\\n\",\n       \"Jm0+DbwpmuPjY+7fv8/R0RHFYvEB0cyLhRSNsymRe0c1nU7bb20wGKTX69FoNGwkex6eJhwOk0ql\\n\",\n       \"yGQytuGAM5Kt6ZvaVkRXTaVSiWKxaPeSvFD8DwsoGmcpim7sOWM3GuzTD8MYQ6PRGJjT6A7wrDyN\\n\",\n       \"esLV1VXW1tasaBKJhK3q1AZG3W7Xru6cotGVky+aMVHBuGM3usPq9DS6fHV6mmazaTf0ZoGKZmVl\\n\",\n       
\"he3t7Qc8DTDQxGiYp9FNPV80Y+IM+A3zNDrhVLevnkZFM2ucotna2npg3tVutwdKc1U0Z2dndo/G\\n\",\n       \"SyykaODBWqFwOGyPXq9nd371n1+r1WbSdHEYaqOKW8tRtOhNBdNoNKhWqwOxMC94FjcLKZqLunNq\\n\",\n       \"xn6z2aTRaNBsNsnn83blMas5jBv36slZwwRviqbZbNrApJdFs3B1T27BuNMMnJ5G0zjPzs6o1+tz\\n\",\n       \"9zSaBuEsR3FGtTWYqiulWcXCRuWhohGR50TkSERedjz3SRG5LyIv9A93OujUcQpHd1rdoikWi+Ry\\n\",\n       \"Oc94Gq1nusjTuIcnr0x83YybI2yAzxpj3tE//nHypg3HmZikyd9aOF+r1QbEod0h3AVnWqU4rUw4\\n\",\n       \"Z7WkVndqDbmGOZLJpM0B7na7dmhyp0EspGguyBGGOfYPVtHo3oaKplqtWtE4A5mzFo27mF9Fk81m\\n\",\n       \"B5bb0WjUikaX2qVSiUqlYocoL3KdOc1HReTbIvKsiCxPzKIrcJFo1NN0Oh0byEwmkyQSCWKx2AP1\\n\",\n       \"0LP0NJrvo2XBzjCH29NoGsTCepoL+Bxwm/OeewfAZyZm0RVwDk9u0VSr1Qs9jWbwOSeh0+Ci4ckZ\\n\",\n       \"hR8mmmq1+kDujBdFM9aS2xhjc4JF5PPA1yZm0dV+/4C30d1UTbbSzT3dC1lbW2NnZ4dqtWpLcfVo\\n\",\n       \"NBoj/35n501nFwdnaENtiEQi3Lp1i42NDVtlMEyszhyZeac+PIyxRCMi29p4GvgJ4OXLfn6SGGNs\\n\",\n       \"d0u3t2k2mzSbzYFWqvF4nPX1dWq1GsYYEokExWLRHuVyeWQbnBuJejjzZbTzg97evn2bjY0Nksnk\\n\",\n       \"lf4+rzNOjvAngPeKyNs5X0V9H/jFqVrpQr+J6tqdQ1Sj0bB9eLWt6vr6Or1ej2g0yvLyMsfHx/YY\\n\",\n       \"5/oH2gZED6dXcbYK0d+/vb3N5uYmqVRqaDvXhz32GuPmCD83BVuujDMJySka9TTdbteKJp1OY4yx\\n\",\n       \"gllfX7dJT7qSGhWdXOttPB5/QEjOQ7tuJZPJC+dR7mHJy8JZyDCCE61jKhQKHB4eEo1G7Ra8czkL\\n\",\n       \"5x4im80OTJSXl0df+KkX0Vv3cKSHvq6T8YtqszudzkDLk0KhQLVapdlselI8j4xo8vk8+/v7ADZq\\n\",\n       \"LCL0ej27UtK5R6/Xs00bNzY2Rv6d6qH01jmncXazcovoopaz7XbbCv/o6IjT01PK5bInLp4xjIUX\\n\",\n       \"jSYt6VVKNGajy15tCJBIJAZax2vfmnFzapzDiDMGNmz1pIK6aOXUarWoVCpWNMVi0fbH8T3NFFBP\\n\",\n       \"o1n89XodYGAVo3MaTbtMp9PX6lnnvL6lc9dWz+eMgzmHpIt+n3qaYrHI8fEx5XLZFu/5opkCWjnZ\\n\",\n       \"bDZtADOXy9lhqFqtks1mWVlZsVdccV7kdJyJsPYb1lu3PSpMvXUvz93icU7onc2vvXC9ymEsvGiA\\n\",\n       \"gckuwMnJifVAp6enLC0tDbSBdX+Io2CMsZWZmmvsRvOA9dA6c80Jvui8zgucenmDb+FFo99SeLNd\\n\",\n       \"fLfbtYLRchHtTp5OpwfmNpqjOwqFQmFgg9DNzs4OTzzxBLVajV6vRzabxRhDKBQikUgM9TTu6L1X\\n\",\n       
\"BQOPkGhUOCJiwwVa4pJKpQYO3VfRmNSo5HI5Tk5O7OHm6aeftoJRUapgLvs7VDBeXDE5WXjRwOBG\\n\",\n       \"mN5XEWnE2zn3ce7WjuppjDEUi0UKhQKlUolarfbAz2jgVFM13IX7btzpoM45jRez9x4J0VyGUyyA\\n\",\n       \"jSg7Y0ajUqlULs0EdMbDdOmse0fDcF8dxtnUyItD1SMvGsDmBmsVoztCPSrOi6wPw52yoambVxFN\\n\",\n       \"IpGwO9Z6noUSjYjsAn8ObHAenPwTY8wfebmPsBvnnKfVag1MQsfdp3lY+oJbNM5mRMNw1m8lk0m7\\n\",\n       \"E+z0kF7iYZ6mDfyaMeZFEUkB/yMi3wA+zHkf4U+LyMc47yPsmV7CbmYdBBx1JeTOQnTu1XiRS32z\\n\",\n       \"MebQGPNi/34F+C5wEw/3EfYSV/Vk7moE55DmtaEJRpjTiMiTwDuA/8TDfYS9xrA+wG4xuYvltEbL\\n\",\n       \"iysnuKJo+kPT3wG/aowpO/9oL/QRXgQuiz/p/MVZLAd4dlf4KsVyYc4F8xfGGG39eiQiW/3X59pH\\n\",\n       \"eFG4bF7lruPy+q7wpaKR86/Fs8D/GWP+0PGS9hGGOfcRXgQWIRtvFB42PL0L+GngJRF5of/cx/FQ\\n\",\n       \"H2Evc9HwsujiuVQ0xph/52Jv5Ik+wj6z57HYEZ41zs4WFyV7OYesRah1cuKLZko4W6G4xeMWipcn\\n\",\n       \"vcNYuP40i8CwHjoX5dA4V0qLIhxfNFPiKp7GmT+zSMLxh6cpoBde1YbY6XSaWCxm+xprIFIPZ9t9\\n\",\n       \"XzSPKe5unnrBDG1Rq4VxpVKJUqnE/v6+bWPvi+YxRa9/kM1m2dzctL2N9UJl1WqVQqFALpfj+PjY\\n\",\n       \"XmWlXq/7onlc0foq9TTRaNRWZGrSe6FQ4ODggPv37w94Gq+mQzjxRTMDms2mjVp3Oh329/e5f/8+\\n\",\n       \"e3t77O3tDVRV+p7mMUWzBOv1uq3Jdiab64W/9NAk9Uaj4YvmccXZQ0+vcZDP5+2Ry+XsfEavB67V\\n\",\n       \"mgsvmktyhD8J/DyQ6//ox2fZFtbrqKdR0ZyenrK/v8/BwYGdv+TzeQqFAvl83g5d87xs8iiMmyOs\\n\",\n       \"fYQ/O3ULF5BGo0GxWOTw8JBMJsPJyQlHR0ccHh5yeHg4sNyuVqsLIRQnD4tyHwKH/fsVEdEcYZhj\\n\",\n       \"H2GvU6vVyOVyhMNh2u02pVKJQqFgD683l34Y4+QI/wfneTYfFZGfAZ4HfsOrJSzzoFqtcnx8TLPZ\\n\",\n       \"pFAoDPQ4rtVqD62b8jpyFaX3h6Z/BX7fGPMVEdngzfnM7wHbxpiPuN6zeF+hCaGVm1rF6bxYvLMr\\n\",\n       \"hNfrto0xQ0eTh4qmnyP898A/uFI+9fUnga8ZY97mev6xFc2jwkWiGStHuJ9Mrsy0j7DP/LnU04jI\\n\",\n       \"u4F/A17ifMUE8NvAhzhvcW/7CDvqoPS9vqdZcMYensbFF83iM9bw5OMzDF80PiPji8ZnZHzR+IyM\\n\",\n       \"LxqfkfFF4zMyvmh8RmZq+zQ+jy6+p/EZGV80PiMzVdGIyDMi8oqIvNbvAnrd890VkZdE5AUR+a8x\\n\",\n       \"3v+ciByJyMuO51ZE5Bsi8j0R+foo1xi/4HyfFJH7fRtfEJFnRjjfroj8i4j8r4h8R0R+5To2XnK+\\n\",\n       
\"sW0Ehl/adxIHEAReB54EwsCLwFuvec7vAyvXeP97OE8ke9nx3KeB3+zf/xjwqWue7xPAr49p3xbw\\n\",\n       \"9v79FPAq8NZxbbzkfGPbaIyZqqd5J/C6MeauMaYNfAn44ATOO3aaqTHmW0DB9fTY7W0vOB+MaaOZ\\n\",\n       \"cAveS843to0w3eHpJrDneHyfNw0eFwN8U0SeF5FfuOa5lGm0t/2oiHxbRJ4dZbhzMukWvK503WvZ\\n\",\n       \"OE3RTGMt/y5jzDuADwC/JCLvmeTJzbkfv67dnwNuc55vdAB8ZtQTuFvwXtfG/vn+tn++ynVtnKZo\\n\",\n       \"3gB2HY93Ofc2Y2OMOejf5oAvcz4EXpeJtrc1xhybPsDnR7Vx0i14Hef7Sz3fdW2cpmieB54WkSdF\\n\",\n       \"JAL8FOetZMdCRBIiku7fTwLvZzJpphNtb3udVNhJt+CdWrrudVYzV5i9f4DzGfvrnFdhXudctzlf\\n\",\n       \"gb0IfGec8wFfBPaBFufzrQ8DK8A3ge8BXweWr3G+n+O8IvUl4Nv9D3dzhPO9G+j1/8YX+scz49p4\\n\",\n       \"wfk+cB0bjTF+GMFndPwdYZ+R8UXjMzK+aHxGxheNz8j4ovEZGV80PiPji8ZnZHzR+IzM/wMn9Av6\\n\",\n       \"T5UJ3wAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b1419d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAELpJREFUeJzt3X+QXeVdx/HPZ3fZ7MaEkBBoA0kJKlF+tBYsDFACxVYH\\n\",\n       \"mUKr1lLUFqtTx2lrI7ZMKTP6lw61jFPsdHSmgrRQ26q00nYUCirSlNqkQEKAUCBOUX5IorKEXfJr\\n\",\n       \"d/P1j3s3XJb98Tx78uw9Z/t+zezknnu/9zzPnufes9+cH8/XESEAAADMXU+3OwAAANB0JFQAAAAV\\n\",\n       \"kVABAABUREIFAABQEQkVAABARSRUAAAAFfV1s3HbzNkAAAAaIyI81fNFEyrbF0m6XlKvpBsi4k8n\\n\",\n       \"x6xatepV7xseHtbSpUsnr6tQL6WcubgOHjxYrB+pcrZFb29v1rpz4nt6pj7AOTQ0pOXLl2e12yln\\n\",\n       \"PA4cOJC17pz4nNixsbEisTmft9zvyFTjNzY2pr6+V+8Wcj4XOeM3Pj6eHJuz3Zhfr35K7cMZa9RF\\n\",\n       \"sVN+tnslfVbSRZJOkXS57ZNLtQcAANAtJa+hOkvSjoh4MiJGJX1F0jsKtgcAANAVJROq4yU91bH8\\n\",\n       \"dPu5WfX39xfpEObHwMBAt7uAOZruNC4AYGYl955zPrG9aNGiw9kPzLPBwcFudwFzREIFAHNT8qL0\\n\",\n       \"ZySt6Vheo9ZRqlcYHh4+9Li/v59kCgAANE7JhOo+SSfZXivpWUmXSbp8ctDku/kAAACaplhCFRFj\\n\",\n       
\"tj8s6VtqTZtwY0Q8Wqo9AACAbik6D1VE3C7p9pJtAAAAdFtXZ0qX0u/oy5ngMGeywNx150wumNOP\\n\",\n       \"UrG5E5HmxJeKLTlRH5MAAt3Bdw8LHbf0AAAAVERCBQAAUBEJFQAAQEUkVAAAABWRUAEAAFREQgUA\\n\",\n       \"AFARCRUAAEBFJFQAAAAVkVABAABUREIFAABQEQkVAABARV2v5Wc7Ke6II45IXmdObE4fcmNz9PSU\\n\",\n       \"yW1z+5tTb6tULb/R0dHk2P379yfHSnl1EHO2RR0+F319eV/nnLqUBw4cKBKb04fcupQ5csY6Z0xy\\n\",\n       \"YkvWK20a6v7Njzr87csZ67p/LooeobK9xvbdth+x/bDtj5RsDwAAoBtKH6EalXRlRGy1vUTS/bbv\\n\",\n       \"iohHC7cLAAAwb4oeoYqI5yJia/vxiKRHJR1Xsk0AAID5Nm8XpdteK+l0SZvmq00AAID5MC8JVft0\\n\",\n       \"362SNrSPVAEAACwYxe/ys32EpK9K+mJE3Db59aGhoUOPBwYGNDg4WLpLAAAAh1XRhMqt+yxvlLQ9\\n\",\n       \"Iq6fKmb58uUluwAAAFBc6VN+b5b0G5IutL2l/XNR4TYBAADmVdEjVBHxHTEbOwAAWOBIdgAAACrq\\n\",\n       \"eumZ1NIhpaa9l8qVWyk1pX6pPuSuu2nbYi7xqUr9fiXLMtTlM9c0JUvgAPNpIZV9qQOOUAEAAFRE\\n\",\n       \"QgUAAFARCRUAAEBFJFQAAAAVkVABAABUREIFAABQEQkVAABARSRUAAAAFZFQAQAAVERCBQAAUFHX\\n\",\n       \"S88sWrTosK8zd4r83t7e5NhSJXBKTevf15c3xP39/UXWvXz58uTYY489Njn2uOOOS46VpJUrVybH\\n\",\n       \"Dg4OJscuWbIkOXbVqlXJsTn27duXFT8+Pp4cu2LFiuTY1atXJ8ceffTRybEDAwPJsTm/mySNjY0l\\n\",\n       \"xz7//PPJsUuXLk2OzdnGOduipNHR0eTY3bt3F4nN3Qfk/M3p6Uk/5pCzD3/ppZeSYzdt2pQcK0nb\\n\",\n       \"tm1Ljs3527d48eLk2HXr1iXHnnDCCcmxy5YtS46V8v/+pZjpOz1ta7Z/RVJImiqDiIj4Wkrjtnsl\\n\",\n       \"3Sfp6Yi4JOU9AAAATTJT+naJWgnVdJISKkkbJG2XlP5fNQAAgAaZNqGKiN+sunLbqyVdLOlPJP1B\\n\",\n       \"1fUBAADU0awniG2/1vaNtu9oL59i+7cT1/9pSVdJOlihjwAAALWWcsXd5yXdKWniyr8nJF0525ts\\n\",\n       \"v13SrojYoqmvwwIAAFgQUi6BXxkRf2v7akmKiFHbKbfEnCvpUtsXSxqQdKTtmyPifZ1BQ0NDhx4P\\n\",\n       \"DAxk3VkFAABQysaNG7Vx48ak2JSEasT2oXubbZ8tadZ7WiPiGknXtN9zgaSPTU6mpLzb6QEAAObL\\n\",\n       \"+vXrtX79+kPL11577bSxKQnVRyV9U9KP2/6upGMkvWsO/Soz0RIAAECXzZpQRcT9ts+X9FNqXQv1\\n\",\n       \"WESkz+bWWsc9ku6ZWxcBAADqbdaEyvagpA9KOk+to0wbbf9lRORNywwAALBApZzyu1nSi5I+o9YR\\n\",\n       \"ql+TdIukXy3YLwAAgMbwbPWHbG+PiFNme25OjduRWofp4MH0qaxyYnPjc+o1larPl6NU7cFcpcYv\\n\",\n       
\"pwablFfjrVSfcz4XObXEcsc6Jz6nzznbOLfmHgB0W0RMufNM2Vs/YPuciYX2XX73H66OAQAANN1M\\n\",\n       \"xZEf6oi51/ZTal1D9TpJj81D3wAAABphtuLIAAAAmMVMxZGf7Fy2faxaM54DAACgQ0px5EttPyHp\\n\",\n       \"h2rNJfWkpNsL9wsAAKAxUi5K/2NJ50h6PCJOlPRWSZuK9goAAKBBUhKq0Yj4X0k9tnsj4m5Jbyrc\\n\",\n       \"LwAAgMZImdhzyPZSSRsl/Y3tXZJGynYLAACgOVKOUL1T0h5JV0q6Q9IOcQcgAADAISnFkSeORo1L\\n\",\n       \"+nzR3gAAADTQTBN7jqg1kedUIiKOPBwd2LNnT1JcXUpwlOpHHcrUSPUorZOz3Xp7e4v0IXfdOdui\\n\",\n       \"LqWOcuJzxqSvL+VKgpac71NJdRiTuuwDSqnLvnOhb+eF/vvV2UzzUC2punLbR0m6QdKpaiVnvxUR\\n\",\n       \"36u6XgAAgDpJ/6/k3Py5pH+KiHfZ7pP0Y4XbAwAAmHfFEirbyyStj4grJCkixiTtLtUeAABAt5S8\\n\",\n       \"gOFESf9j+ybbD9j+K9uLC7YHAADQFSUTqj5JZ0j6i4g4Q9JLkq4u2B4AAEBXlEyonpb0dER8v718\\n\",\n       \"q1oJ1ivs3bv30M/o6GjB7gAAAJRR7BqqiHjO9lO210XE45LeJumRyXGDg4OlugAAADAvSt/l93tq\\n\",\n       \"lavpl/Qfkt5fuD0AAIB5VzShiogHJZ1Zsg0AAIBuq8c0xQAAAA1W+pTf7B1ILFNRqjxMbnyp2JK/\\n\",\n       \"X45SJVRyYkuVOZlLfKr9+/cnx+7enT4d24EDB5Jjc7axtLBLqOSWn6rLulPVYRvnyv18LmQl9/c5\\n\",\n       \"6y71Wa7LvqXE92SmdXKECgAAoCISKgAAgIpIqAAAACoioQIAAKiIhAoAAKAiEioAAICKSKgAAAAq\\n\",\n       \"IqECAACoiIQKAACgIhIqAACAirpeeiZ1Wv06TKcvlZsmP6fPOesdHx9Pjs1dd07s6OhocmxOuZWc\\n\",\n       \"WClve+SUyig11nX53JfaFk0soZKj1PcazVZyH15qn5FTAqdUCbO6f0eKHqGy/Qnbj9h+yPaXbC8q\\n\",\n       \"2R4AAEA3FEuobK+V9AFJZ0TE6yX1SnpPqfYAAAC6peQpvxcljUpabHtc0mJJzxRsDwAAoCuKHaGK\\n\",\n       \"iOcl/Zmk/5L0rKQXIuKfS7UHAADQLSVP+f2EpN+XtFbScZKW2P71Uu0BAAB0S8mL0t8k6bsR8X8R\\n\",\n       \"MSbpa5LOnRw0MjJy6Cf3ji0AAIA6KHkN1Q8k/aHtQUn7JL1N0ubJQUuWLCnYBQAAgPJKXkP1oKSb\\n\",\n       \"Jd0naVv76c+Vag8AAKBbik7sGRGfkvSpkm0AAAB0G6VnAAAAKiKhAgAAqKgxtfyaWB+sVL2mnPpL\\n\",\n       \"OXWSpLw+59R26u3tTY7t7+9Pjs2tX5ezPUr9fjnbOGe9ixblVXYaGBhIjt27d29y7O7du5Njc+7s\\n\",\n       \"zRm7nO0m5W2Lo446Kjn2xRdfTI4dGRlJjh0bG0uOLamvL/1PSM4NSMuWLUuO3blzZ3KslFdXNOcz\\n\",\n       \"l7MvGhwcTI4988wzk2Ml6bTTTkuOzfn+5Xw+d+zYkRz7zDPp833n9EHKG+vU/f3w8PD060huDQAA\\n\",\n       
\"AFMioQIAAKiIhAoAAKAiEioAAICKSKgAAAAqIqECAACoqJYJ1f79+7vdBVTA+DXXnj17ut0FVFCX\\n\",\n       \"KWOQry5TYWDuaplQ5cyNgfohoWqunPmmABw+OXMRop5qmVABAAA0CQkVAABARe7mOXfbnPAHAACN\\n\",\n       \"ERFT1hnqakIFAACwEHDKDwAAoCISKgAAgIpql1DZvsj2D2w/Yfvj3e4Ppmf7r23vtP1Qx3MrbN9l\\n\",\n       \"+3Hbd9o+qpt9xPRsr7F9t+1HbD9s+yPt5xnDmrM9YHuT7a22t9u+tv08Y9cgtnttb7H9zfYy49dg\\n\",\n       \"tUqobPdK+qykiySdIuly2yd3t1eYwU1qjVWnqyXdFRHrJP1Lexn1NCrpyog4VdLZkj7U/r4xhjUX\\n\",\n       \"EfskXRgRb5T0BkkX2j5PjF3TbJC0XdLExcyMX4PVKqGSdJakHRHxZESMSvqKpHd0uU+YRkRslDQ0\\n\",\n       \"6elLJX2h/fgLkt45r51Csoh4LiK2th+PSHpU0vFiDBshIiamte+X1KvWd5GxawjbqyVdLOkGSRN3\\n\",\n       \"jTF+DVa3hOp4SU91LD/dfg7N8ZqI2Nl+vFPSa7rZGaSxvVbS6ZI2iTFsBNs9treqNUZ3R8QjYuya\\n\",\n       \"5NOSrpJ0sOM5xq/B6pZQMYfDAhKtOTkY05qzvUTSVyVtiIjhztcYw/qKiIPtU36rJZ1v+8JJrzN2\\n\",\n       \"NWX77ZJ2RcQWvXx06hUYv+apW0L1jKQ1Hctr1DpKhebYafu1kmR7laRdXe4PZmD7CLWSqVsi4rb2\\n\",\n       \"04xhg0TEbkn/KOlnxdg1xbmSLrX9Q0lflvRztm8R49dodUuo7pN0ku21tvslXSbpG13uE/J8Q9IV\\n\",\n       \"7cdXSLpthlh0kW1LulHS9oi4vuMlxrDmbK+cuAPM9qCkn5e0RYxdI0TENRGxJiJOlPQeSf8aEe8V\\n\",\n       \"49dotZsp3fYvSrperYssb4yIa7vcJUzD9pclXSBppVrn+/9I0tcl/Z2k10l6UtK7I+KFbvUR02vf\\n\",\n       \"FfZtSdv08qmFT0jaLMaw1my/Xq2LlnvaP7dExHW2V4ixaxTbF0j6aERcyvg1W+0SKgAAgKap2yk/\\n\",\n       \"AACAxiGhAgAAqIiECgAAoCISKgAAgIpIqAAAACoioQIAAKiIhApA19m+t/3vCbYvP8zrvmaqtgDg\\n\",\n       \"cGIeKgC1Yfstak1yeEnGe/oiYmyG14cjYunh6B8ATIcjVAC6zvZI++EnJa23vcX2Bts9tq+zvdn2\\n\",\n       \"g7Z/px3/FtsbbX9d0sPt526zfZ/th21/oP3cJyUNttd3S2dbbrnO9kO2t9l+d8e6/83239t+1PYX\\n\",\n       \"53drAGiivm53AAD0cumbj0v62MQRqnYC9UJEnGV7kaTv2L6zHXu6pFMj4j/by++PiKF2bbvNtm+N\\n\",\n       \"iKttfygiTp+irV+W9DOS3iDpGEnft/3t9mtvlHSKpP+WdK/tN0cEpwoBTIsjVADqxJOWf0HS+2xv\\n\",\n       \"kfQ9SSsk/WT7tc0dyZQkbbC9VdK/S1oj6aRZ2jpP0peiZZekeySdqVbCtTkino3WNRFbJa2t8DsB\\n\",\n       \"+BHAESoAdffhiLir84n2tVYvTVp+q6SzI2Kf7bslDcyy3tCrE7iJo1f7O54bF/tKALPgCBWAOhmW\\n\",\n       
\"1HkB+bckfdB2nyTZXmd78RTvO1LSUDuZ+mlJZ3e8Njrx/kk2SrqsfZ3WMZLOl7RZr06yAGBW/K8L\\n\",\n       \"QB1MHBl6UNJ4+9TdTZI+o9bptgdsW9IuSb/Uju+8RfkOSb9re7ukx9Q67Tfhc5K22b4/It478b6I\\n\",\n       \"+Afb57TbDElXRcQu2ydPWremWAaAV2DaBAAAgIo45QcAAFARCRUAAEBFJFQAAAAVkVABAABUREIF\\n\",\n       \"AABQEQkVAABARSRUAAAAFZFQAQAAVPT/E259UVIep5MAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b127d90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAES9JREFUeJztnVmMpNdVx3+nq6prX3qbnvG4x4tmsEYRkv1ikJyICIVo\\n\",\n       \"/ELghcgSUmQC4gECgkiY8BIjeIiQEiFeIiA2CosSIZCjBAmwjQIYIRYjb4E4jqVZPNPdM91de9fe\\n\",\n       \"fXmoOt/c/qZ6qaU9VfXdn/Sp9qvTM3+du5zlE2MMDscgzN1vAxzThxONY2CcaBwD40TjGBgnGsfA\\n\",\n       \"ONE4BmZo0YjIFRF5V0R+KCLPjdMox2Qjw5zTiEgI+AHwCeAW8N/AM8aY74/XPMckMqyneRJ43xhz\\n\",\n       \"zRjTBr4JfGp8ZjkmmfCQvzsPfGC9vgn8mP0FEXFHzVOOMUb6vT+sp3GCCDDDiuYWsGa9XqPrbRwB\\n\",\n       \"YFjRvA5cEpGHRWQe+DTw7fGZ5ZhkhlrTGGM6IvKrwD8CIeAFt3MKDkNtuU80sFsITz3jXgg7AowT\\n\",\n       \"jWNgnGgcA+NE4xgYJxrHwDjROAbGicYxME40joFxonEMjBONY2CcaBwDM2wSFgAicg0oA3tA2xjz\\n\",\n       \"5DiMmnbm5uYQEe8xHA4TCoUIh8OEw2H29vYOXPv7+weuSS+VHkk0dJOxPm6MyY/DmFkhHA4zPz9P\\n\",\n       \"NBplfn6eTCZDJpMhm82SyWSo1+tUq1Xvqtfr1Ot1Go0G9XrdG2dSxTOqaAD6RkKDTCQSIZFIkEgk\\n\",\n       \"SKVSnD17lnPnznHu3DkeeOABisUiW1tb3Llzh62tLUqlEqVSCYBms8ne3h4AIjKRwhmHp3lVRPaA\\n\",\n       \"PzbG/OkYbJp6wuEw8XicbDZLNptlbW2NixcvcunSJS5evMjt27e5ceMG169f97wRQKvVolKpTPwU\\n\",\n       \"NaponjLGbIjICvCKiLxrjHltHIZNGyJ3HW40GiWVSrG4uMjKygpra2s8+uijPPbYY1y+fJnFxUVi\\n\",\n       \"sRihUMj7bbvdplqteusgmNHpyRiz0XvcEpGX6Ja2BE40/oVvKpViZWWFBx98kAsXLrC2tsby8jKJ\\n\",\n       \"RAIRIR6Ps7CwwPnz573fNJtNisUioVCITqcDzKBoRCQBhIwxFRFJAp8Efndslk0RKphQKEQoFCKd\\n\",\n       
\"Tnse5tKlS6yurrK0tEQymUREiMViLCwsICIkEglarRaFQoH19XVCoZC3lpnFNc0q8FLPlYaBvzLG\\n\",\n       \"vDwWq6YMEfG21KFQiFQqxfLysica3TWpp1HRJBIJlpeXqVarrK+vk0qlPNFMqmBgBNEYY64Cj4/R\\n\",\n       \"lqlFvUwkEmF+fp50Os3i4iKrq6usra0Rj8e9LbiIEIlECIVCxONxjDEsLi6SSqWIRqPedKXXJDKO\\n\",\n       \"LXfgCYfDxGIx4vE4iUSCdDpNKpUiHo8TjUaJRCKEw2Hm5roH8O12m2azSbPZpNVqkc/nqVarNBoN\\n\",\n       \"77DPGDN7nsZxl1AoRCwWI5VKkU6nyWQyJJPJvqLRnVKtVqNSqVCtVsnn81QqFU80xpiJ3nY70YyB\\n\",\n       \"cDhMNBolmUySzWZJp9Mkk0kSiQTRaJRwOOwtlqHraXZ3dymVSuTzeU80erA3yYIBJ5qxoCfAmUyG\\n\",\n       \"paUlcrmcNz3p+sVGPU2hUODOnTsHPI0tmEkVjhPNEOgCVR/j8Ti5XM5b+K6urpLL5YjFYn0Xs+pp\\n\",\n       \"isWiJ5pqtUqz2Zx4wYATzdDYOxxbNA899NAB0fRDReP3NM1mc+KnJnCiGRoVzNzc3AHRXLhwgaWl\\n\",\n       \"JbLZ7JGeplarHelpJhknmiFQseilwcmVlRXOnz/vbbmj0Wjf359keppknGiGQA/y9EqlUt52O5VK\\n\",\n       \"ebsmewFsi6HdblOv1ymXyxQKBcrlMrVajXa77UQzq4RCIebn54nFYt5W2xaOpjvoVltRQahoKpUK\\n\",\n       \"+XyecrlMvV53opll1NPEYjEv0coWjcag9FzGPt01xhwQTaFQoFqt0ul0vOSrSefYxHIReVFEbovI\\n\",\n       \"O9Z7iyLyioi8JyIvi0judM2cLEKhENFolHg87k1JyWTSO9CLxWL3nM8YY7yc4Far5aV8lkolL4Qw\\n\",\n       \"LZ7mJNUIfwZc8b3328ArxpgfAf6p9zoQiAjz8/MkEglyuRzLy8vkcjmSySTz8/OH7pYajQbVapVC\\n\",\n       \"oUClUpmq6cjPsaLpZeIVfG//NPD13vOvAz8zZrsmGj0B1h1TNpslkUgQiUT6fn9vb+/AdGSvYaaR\\n\",\n       \"Ydc0q8aY273nt+nm1gSGSCRCMpn0PM1xoul0Op6nKZVKVCqVqdot+Rm5WM50/+rp+8tHwPY0tmg0\\n\",\n       \"QdyPvcXe2dmhVCoFUjS3ReQsgIicA+6Mz6TJR/NnUqkU2WyWVCp1IFEc7u6Y9vf3qdfrFItFNjY2\\n\",\n       \"uH79Ouvr6xQKBer1eqBE823gM73nnwG+NR5zpgM7FUIXwbFYjHC4O9urYPRqNBoUCgU2Nze5evUq\\n\",\n       \"Gxsb5PN5arXabIpGRL4B/DvwmIh8ICLPAl8CfkpE3gN+svc6MBzmaVQ0it/TbG5ucu3atan3NMcu\\n\",\n       \"hI0xzxzy0SfGbMtEo1vpubk572BPRZNMJg8kW9lTkzHmgGhu3LhBsVj0ynFnUjQOvML9cDhMJBI5\\n\",\n       \"UJetouk3PWlBv+YE64FerVaj1Wp5qZ3ThhPNMWjXh2g0SiwWIxaLeYLRS2NQtmjsLhC65a7Vap6H\\n\",\n       \"UdFMI040J0DXMBqY9HeB0JCBf/e0v7/P3t7eAU+zu7tLvV73QgrO08wo9m7J9jB6+UMHtqfpdDr3\\n\",\n       
\"TE+tVus+/SXjwYnmBGiAUmua+uXL2LRaLXZ3d71ra2uLcrlMs9n8kC0/HZxojkHXNBqkPIloms2m\\n\",\n       \"F2fK5/Nsb297OcDTOB35caI5AX5Po2W2R3maSqXC9vY2m5ubbG9vUy6XaTQaMyEa16jxBOiaxu9p\\n\",\n       \"/Id5iopmZ2eHjY0NTzRuegoQmt6pSVcanFRP4w8baHBye3ub9fV1Nz0FDW0jctj0JCL3dOes1WqU\\n\",\n       \"SiW2t7fv8TRONAHBnp76VRvYZzKacKWiWV9fp1gsep5mFhg2R/h5EbkpIm/0Ln866Eyh09NhC2Fb\\n\",\n       \"NJ1O5x7RzJqnGTZH2ABfMcY80bv+Yfym3T/8XSByuRyLi4ssLy9z5swZFhYWvJxgOHiYpyfA/h40\\n\",\n       \"nU6H/f39+/yXjYeTRLlfE5GH+3w0mW2aRsQfa0okEiwsLLC0tMTKygpnzpzxqg80vdPvafQUuNVq\\n\",\n       \"0Ww2abfbU9FC5KSMsuX+nIi8JSIvzFoJix1rymaznmiO8jRaomILxhbNLHmaYUXzVeARuj33NoAv\\n\",\n       \"j82i+4x6mn6iWVlZ8TpCDOJpAjc99cMY4+UEi8jXgO+MzaIJwG68qJdWTGrVJNxNzLKrDez7HUxr\\n\",\n       \"4vhxDOVpesnkys8C7xz23WnE7gtspz30Ewzg7Ziq1aq3vW40Gl4T6VnjWE/TyxH+CWBZRD4Avgh8\\n\",\n       \"XEQep7uLugr88qla+SFj9wXWy24/70+FUE8zK8VwxzFsjvCLp2DLxHCUpzms7NZf0F+v1+l0OjM5\\n\",\n       \"PbkTYR92rXY2m2VpaYlMJuM1XYT+JSpaCLexscHOzo7XpGgWcaLpg7/sNpfLkUgk+uYAG2Oo1Wpe\\n\",\n       \"gFJjTbMUNvDjRONDPY2KRgv8D9tia12THTbY2dmhXC7TarXc9BQUbNH0K/A/KkC5sbHhFfk7TzPD\\n\",\n       \"2Pdr0vWMNpLudwLsjzHV63WvbX2pVGJ3d5dmsxncLXcQsO/VZAcq9c5wmnhlexq7ykBrmnZ3d72d\\n\",\n       \"k4rGTU8ziHoYPY+xRaOeJhqNejfGAPrWM6loKpWKF2ua1mK44wi8aOCup9GOnVqjraKxpy/gnnom\\n\",\n       \"u3pyGm5cOipONPSva9JEK7utqz5qDz1dw2gf4Far5W3FZ5nAi8Yf1dZ7NdldIGy0pau2qZ+FzlaD\\n\",\n       \"EnjRQPcwzy679Zeo+IWjotEDPVs0QeDIKLeIrInId0Xkf0XkeyLya733Z6aPsO1ptLhfPY1WG/hR\\n\",\n       \"0ZRKpXs8TRA4LjWiDfyGMeYjwI8DvyIil5mxPsK6e9Ibl9r5M3B44vjW1pZ3AjzLh3l+jhSNMWbT\\n\",\n       \"GPNm73kV+D5wnhnrI9wvqm1HtP2pnPa9mm7dujVzFZTHceI1TS+5/AngP5mxPsL98mfsqcluG9Lp\\n\",\n       \"dA7cdufmzZuUy+WZKlE5jhOJRkRSwN8Cv26MqfjuLGJEZGr/pex7N+kUdZin0QM9WzS3bt2i0Wh4\\n\",\n       \"+cBB4CTFchG6gvkLY4y2fp2pPsKHbblDodCB9YwKQ0+B9STYLlMJAsftngR4Afg/Y8wfWh/NTB9h\\n\",\n       \"ESESiRCPx8lkMiwsLHgtXu2wga5nGo3GgbIUDRfM+imwzXHT01PAzwNvi8gbvfe+QLdv8F+LyGeB\\n\",\n       
\"a8DPnZqFHwLatj6dTrOwsOCV3uo5jYrGLklptVqed1HRBIUjRWOM+TcO90Yz0Ue4n6dJp9MHWrz6\\n\",\n       \"A5R+T2Nn8QWBQDc10jveanu0eDzu9QSORCKHntOod1EPo4IJimgCGUawA5D2+YzeLU4DlXadk783\\n\",\n       \"sC2UIAkGAuxp7Juxa1qEFv1r7oz/HpR263p/crl+LwgE0tModgLWqJ4mSATa0yj2QleL9e1u4v6k\\n\",\n       \"K7t1SBAJrKexpxT73gWVSsVLytLDOhWNZumpcIIqmsB6GkVF02w2+97wQkMI6oXspHEnmoDiF43e\\n\",\n       \"/ti+S4rtafSzIIUN/AR+etrf3/dqsbe3t0mlUgAHKhQ0D7hSqRzoPeNEE0B0R1Sr1SgUCqyvrwN4\\n\",\n       \"gtDPd3Z22NnZ8XrPaJDSiSZAGGMOpD3UajXy+TzGmANFbvqdYrHoXX7RBG27DceIRkTWgD8HztBt\\n\",\n       \"YPQnxpg/EpHngV8Etnpf/cK0tYW1p6darQbg9ZgxxnjT09zcHJVKxUu0sstVnKfpj+YIv9lLxPof\\n\",\n       \"EXmFu32Ev3LqFp4yWpKiTYja7bZXjTA3N+clXWkFpSZg7e7uTv3NvobluCj3JrDZe14VEc0Rhhnq\\n\",\n       \"I6y7I+ge+hWLRcLhMJ1Ox0vj1POZRqPhTVNBFY2cdE7u5Qj/C/AR4PPAs0AJeB34vDGm6Pv+1Ez2\\n\",\n       \"dufOUCjkNZfW+yCoB9JHO2uvXq/fb/NPDWNMX8dwItH0pqZ/Bn7fGPMtETnD3fXM7wHnjDGf9f1m\\n\",\n       \"akRjR701VUK7SITD4XuClBpC0PSIWWVo0fRyhP8O+Htfyqd+/jDwHWPMj/renxrROPpzmGiGyhGe\\n\",\n       \"9T7CjqM50tOIyEeBfwXeprtjAvgd4Bm6Le69PsJWHZT+1nmaKWekNc0wONFMP0NNTw5HP5xoHAPj\\n\",\n       \"ROMYGCcax8A40TgGxonGMTBONI6BObVzGsfs4jyNY2CcaBwDc6qiEZErIvKuiPxQRJ4bw3jXRORt\\n\",\n       \"EXlDRP5riN+/KCK3ReQd672h29seMt7zInKzZ+MbInJlgPHG2oL3iPGGthG499Z647qAEPA+8DAQ\\n\",\n       \"Ad4ELo845lVgcYTff4xus8l3rPf+APit3vPngC+NON4Xgd8c0r6zwOO95yngB8DlYW08YryhbTTG\\n\",\n       \"nKqneRJ43xhzzRjTBr4JfGoM4w6dZmqMeQ0o+N4eur3tIePBkDaaMbfgPWK8oW2E052ezgMfWK9v\\n\",\n       \"ctfgYTHAqyLyuoj80ohjKafR3vZzIvKWiLwwbDf3cbfgtcb7j1FtPE3RnMZe/iljzBPA03S7p39s\\n\",\n       \"nIObrh8f1e6vAo/QzTfaAL486AD+Fryj2tgb729641VHtfE0RXMLWLNer9H1NkNjjNnoPW4BL9Gd\\n\",\n       \"AkdlrO1tjTF3TA/ga4PaOO4WvNZ4f6njjWrjaYrmdeCSiDwsIvPAp+m2kh0KEUmISLr3PAl8kvGk\\n\",\n       \"mY61ve0oqbDjbsF7aum6o+xmTrB6f5ruiv19ulWYo4z1CN0d2JvA94YZD/gGsA606K63ngUWgVeB\\n\",\n       \"94CXgdwI4/0C3YrUt4G3ev+5qwOM91Fgv/c3vtG7rgxr4yHjPT2KjcYYF0ZwDI47EXYMjBONY2Cc\\n\",\n       
\"aBwD40TjGBgnGsfAONE4BsaJxjEwTjSOgfl/g7yNWl4b+UcAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b044750>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAD4BJREFUeJzt3XuspVdZx/Hvb87cO4OT2pYWGBysoLQBKQIpdwpoKoHi\\n\",\n       \"BYGKgGgwBpCKQID+4V8aMMRQCcEEqVzKTQUsEOVSpUIBaSnM9DblZqi2IB2ltyl2JnN5/GPvaU8P\\n\",\n       \"Z+a877yzzt779PtJTma/7177Xevstc85z6y13vWkqpAkSdLRWzXpBkiSJM06AypJkqSBDKgkSZIG\\n\",\n       \"MqCSJEkayIBKkiRpIAMqSZKkgVZPsvIk7tkgSZJmRlVlsfNNA6okZwMXAHPAu6vqLxaWOeWUU37i\\n\",\n       \"dbt372bz5s33Onfw4MHO9fYpC3DgwIEmZVu1uU/ZvvuM9Sl/uLL79+9n9erVncouZtq/P0mSFmo2\\n\",\n       \"5ZdkDngHcDZwGnBukoe3qk+SJGlSWq6hehzw3aq6oar2AR8BntuwPkmSpIloGVA9ELhx3vFN43NL\\n\",\n       \"Wrt2bZMGaXmsWuW9DpKk+5aWf/mOegHKunXrjmU7tMwMqCRJ9zUtF6V/H9g673gro1Gqe9m9e/fd\\n\",\n       \"j9euXWswJUmSZk7LgOpK4KFJtgE/AF4AnLuw0MK7+SRJkmZNs4CqqvYneRXwWUbbJlxYVde3qk+S\\n\",\n       \"JGlSmu5DVVWfBj7dsg5JkqRJm+hO6dB9AXqfO/8Wbiq5lDVr1jS59oYNGzqXPe644zqX3bhxY+ey\\n\",\n       \"fadU+1x7/fr1nctu2rSpc9ktW7Z0LnvSSSd1Lgtw8skndy574okndi7bp819+rrP563P5xhgbm6u\\n\",\n       \"Sdk+NyUki244PLhsX6021u2zEXDLmzn6vHfT0NcrXZ9Ng/fu3dvs2n1+v/T5XEzLz3ULR2qvt2NJ\\n\",\n       \"kiQNZEAlSZI0kAGVJEnSQAZUkiRJAxlQSZIkDWRAJUmSNJABlSRJ0kAGVJIkSQMZUEmSJA1kQCVJ\\n\",\n       \"kjSQAZUkSdJAE8/lt2fPnk7l+uQz6pPLqK9pyDvUJ5dYX33eu1b5z/qU3b9/f+ey0C+3Wp+yrd63\\n\",\n       \"Plp+7vtcu2U7WmmVe6xP2Zbv8Sz2iTRrmo5QJdma5NIk1yW5NsmrW9YnSZI0Ca1HqPYBr6mqHUk2\\n\",\n       \"AV9PcklVXd+4XkmSpGXTdISqqn5YVTvGj+8Ergce0LJOSZKk5bZsi9KTbAPOAC5frjolSZKWw7IE\\n\",\n       \"VOPpvo8C541HqiRJklaM5nf5JVkDfAz4QFVdvPD53bt33/147dq1rFu3rnWTJEmSjqmmAVVG9wxf\\n\",\n       \"COysqgsWK7N58+aWTZAkSWqu9ZTfE4HfAc5Ksn38dXbjOiVJkpZV0xGqqvoS7sYuSZJWOIMdSZKk\\n\",\n       
\"gSaeembVqm4xXcuUL61SPrQq2yd1ScsUFX3a0Spdx9zcXOeyLa30dDJ9tPpZnZb3ok/ZWfzct/r9\\n\",\n       \"Yvqbe/Tp65Y3avVJr9Unzdd9ta8doZIkSRrIgEqSJGkgAypJkqSBDKgkSZIGMqCSJEkayIBKkiRp\\n\",\n       \"IAMqSZKkgQyoJEmSBjKgkiRJGsiASpIkaaCJp57ZsGFDp3J9tr3vs51+S31SOLT6/qYl9cy0tKHP\\n\",\n       \"e9enbMt0QJpd05B+StOnT1/v2bOnYUt0LB02oErym0ABiyUdqqr6eJcKkswBVwI3VdVzjqqVkiRJ\\n\",\n       \"U+xII1TPYRRQHU6ngAo4D9gJbO7aKEmSpFly2ICqqn536MWTPAh4FvDnwJ8MvZ4kSdI0WnJRepKT\\n\",\n       \"k1yY5DPj49OS/H7H678NeD3ghL8kSVqxutzl917gc8ADxsffAV6z1IuSPBvYVVXbWXwdliRJ0orQ\\n\",\n       \"JaA6oar+DjgAUFX7gC63pD0BOCfJ94APA09P8v6FhW655Za7v+66664eTZckSZoOXbZNuDPJTx86\\n\",\n       \"SHImcPtSL6qq84Hzx695KvC6qnrJwnLHH39899ZKkiRNoS4B1WuBTwE/m+QrwInA846iLjffkSRJ\\n\",\n       \"K1K6bDCWZDXw84zWQn1rPO03vPKkTj311E5lW27s2WrzvT7tcGPP5WmDG3tKkoaoqkXXhS85QpVk\\n\",\n       \"A/AK4EmMRpkuS/LXVeX2rZIkSXSb8ns/cAfwdkYjVL8NXAT8VsN2SZIkzYwuAdXpVXXavOPPJ9l5\\n\",\n       \"rBpw++1Lrm8H2k3Ltbx2qymxZDp2oVi1qntu7VbvRd/rtvwctdCnr6ehvZJ0X9XlL+I3kjz+0MH4\\n\",\n       \"Lr+vt2uSJEnSbDlScuRr5pX5cpIbGa2hejDwrWVomyRJ0kxYKjmyJEmSlnCk5Mg3zD9OchKwvnWD\\n\",\n       \"JEmSZk2X5MjnJPkO8D3gC8ANwKcbt0uSJGlmdFmU/mfA44FvV9VDgGcAlzdtlSRJ0gzpElDtq6r/\\n\",\n       \"BVYlmauqS4HHNG6XJEnSzOiyD9WtSTYDlwEfTLILuLNtsyRJkmbHkrn8kmwC7mI0mvUi4H7AB6vq\\n\",\n       \"R4MrT+qEE07oVNaNPe/RcmPPVptDtsp11ycHYt92tMqZ2KesG3tK0nQ56lx+VXVoNOoA8N5j2CZJ\\n\",\n       \"kqQV4Ugbe97JaCPPxVRV3e9YNOC4447rVK7P/9T7pESB6UiL0mekpdUoS99rt3rf+rS55WjdNKTW\\n\",\n       \"cdRJkmbDkfah2jT04km2AO8GTmcUnP1eVX116HUlSZKmSZdF6UP8FfDPVfW8JKuBbsNRkiRJM6RZ\\n\",\n       \"QJXkp4AnV9VLAapqP3B7q/okSZImpd9io34eAvxPkvck+UaSv0mysWF9kiRJE9EyoFoNPBp4Z1U9\\n\",\n       \"Gvgx8MaG9UmSJE1EyzVUNwE3VdXXxscfZZGA6rbbbrv78fr161m/3vzLkiRptjQLqKrqh0luTPKw\\n\",\n       \"qvo28EzguoXltmzZ0qoJkiRJy6L1XX5/xChdzVrgP4CXNa5PkiRp2TUNqKrqKuCxLeuQJEmatJaL\\n\",\n       \"0iVJku4TWk/5LemOO+7oVK5PipG+6Uj6pBhp2Y6uZjFRdJ/3YvXq7h/LNWvWdC4L7RIv9ym7b9++\\n\",\n       
\"zmX7ME3N0WuZwqgF+1qaPo5QSZIkDWRAJUmSNJABlSRJ0kAGVJIkSQMZUEmSJA1kQCVJkjSQAZUk\\n\",\n       \"SdJABlSSJEkDGVBJkiQNZEAlSZI00MRTz3RN2TENKV+gX5qaabjuwYMHe5Xv897Nzc11LtsnVcaB\\n\",\n       \"Awc6l+2bxqXPtfuU7fM+T8tnuU+bV3qqk1bfX5+f65X+HksrXdMRqiRvSnJdkmuSfCjJupb1SZIk\\n\",\n       \"TUKzgCrJNuDlwKOr6hHAHPDCVvVJkiRNSsspvzuAfcDGJAeAjcD3G9YnSZI0Ec1GqKrqFuAvgf8C\\n\",\n       \"fgDcVlX/0qo+SZKkSWk55Xcq8MfANuABwKYkL2pVnyRJ0qS0XJT+GOArVfWjqtoPfBx4wsJCe/fu\\n\",\n       \"vfur6x1/kiRJ06RlQPVN4MwkGzK69/uZwM6FhdatW3f31+rVE9/FQZIkqbeWa6iuAt4PXAlcPT79\\n\",\n       \"rlb1SZIkTUomuZlcktq8eXPXsn2ue7RNWpIbex5d2VYbe/adJp6GjT2nhRt7ttdqY0/7Q5qcqlr0\\n\",\n       \"j5+pZyRJkgYyoJIkSRpo4qvAu075zeL0RJ92tJoy6jv92Wraoc/316dsn2m5vtdu9TmaxamdVtO7\\n\",\n       \"K90sTgVLOjqOUEmSJA1kQCVJkjSQAZUkSdJABlSSJEkDGVBJkiQNZEAlSZI00FQGVHv37p10EzTA\\n\",\n       \"vn37Jt0EHSW3PJCko2NApWOubzoYSZJm3VQGVJIkSbPEgEqSJGmgTHLNRBIXbEiSpJlRVYvm4ppo\\n\",\n       \"QCVJkrQSOOUnSZI0kAGVJEnSQFMXUCU5O8k3k3wnyRsm3R4dXpK/TXJzkmvmnTs+ySVJvp3kc0m2\\n\",\n       \"TLKNOrwkW5NcmuS6JNcmefX4vH045ZKsT3J5kh1JdiZ58/i8fTdDkswl2Z7kU+Nj+2+GTVVAlWQO\\n\",\n       \"eAdwNnAacG6Sh0+2VTqC9zDqq/neCFxSVQ8D/nV8rOm0D3hNVZ0OnAm8cvzzZh9OuaraA5xVVY8C\\n\",\n       \"HgmcleRJ2Hez5jxgJ3BoMbP9N8OmKqACHgd8t6puqKp9wEeA5064TTqMqroMuHXB6XOA940fvw/4\\n\",\n       \"tWVtlDqrqh9W1Y7x4zuB64EHYh/OhKr6v/HDtcAco59F+25GJHkQ8Czg3cChu8bsvxk2bQHVA4Eb\\n\",\n       \"5x3fND6n2XH/qrp5/Phm4P6TbIy6SbINOAO4HPtwJiRZlWQHoz66tKquw76bJW8DXg8cnHfO/pth\\n\",\n       \"0xZQuYfDClKjPTns0ymXZBPwMeC8qto9/zn7cHpV1cHxlN+DgKckOWvB8/bdlErybGBXVW3nntGp\\n\",\n       \"e7H/Zs+0BVTfB7bOO97KaJRKs+PmJCcDJDkF2DXh9ugIkqxhFExdVFUXj0/bhzOkqm4H/gn4Jey7\\n\",\n       \"WfEE4Jwk3wM+DDw9yUXYfzNt2gKqK4GHJtmWZC3wAuCTE26T+vkk8NLx45cCFx+hrCYoSYALgZ1V\\n\",\n       \"dcG8p+zDKZfkhEN3gCXZAPwysB37biZU1flVtbWqHgK8EPh8Vb0Y+2+mTd1O6Ul+FbiA0SLLC6vq\\n\",\n       \"zRNukg4jyYeBpwInMJrv/1PgE8DfAw8GbgCeX1W3TaqNOrzxXWFfBK7mnqmFNwFXYB9OtSSPYLRo\\n\",\n       
\"edX466KqemuS47HvZkqSpwKvrapz7L/ZNnUBlSRJ0qyZtik/SZKkmWNAJUmSNJABlSRJ0kAGVJIk\\n\",\n       \"SQMZUEmSJA1kQCVJkjSQAZWkiUvy5fG/P5Pk3GN87fMXq0uSjiX3oZI0NZI8jdEmh8/p8ZrVVbX/\\n\",\n       \"CM/vrqrNx6J9knQ4jlBJmrgkd44fvgV4cpLtSc5LsirJW5NckeSqJH8wLv+0JJcl+QRw7fjcxUmu\\n\",\n       \"THJtkpePz70F2DC+3kXz68rIW5Nck+TqJM+fd+1/S/IPSa5P8oHlfTckzaLVk26AJHFP6ps3AK87\\n\",\n       \"NEI1DqBuq6rHJVkHfCnJ58ZlzwBOr6r/HB+/rKpuHee2uyLJR6vqjUleWVVnLFLXbwC/CDwSOBH4\\n\",\n       \"WpIvjp97FHAa8N/Al5M8saqcKpR0WI5QSZomWXD8K8BLkmwHvgocD/zc+Lkr5gVTAOcl2QH8O7AV\\n\",\n       \"eOgSdT0J+FCN7AK+ADyWUcB1RVX9oEZrInYA2wZ8T5LuAxyhkjTtXlVVl8w/MV5r9eMFx88Azqyq\\n\",\n       \"PUkuBdYvcd3iJwO4Q6NXe+edO4C/KyUtwREqSdNkNzB/AflngVckWQ2Q5GFJNi7yuvsBt46DqV8A\\n\",\n       \"zpz33L5Dr1/gMuAF43VaJwJPAa7gJ4MsSVqS/+uSNA0OjQxdBRwYT929B3g7o+m2byQJsAv49XH5\\n\",\n       \"+bcofwb4wyQ7gW8xmvY75F3A1Um+XlUvPvS6qvrHJI8f11nA66tqV5KHL7g2ixxL0r24bYIkSdJA\\n\",\n       \"TvlJkiQNZEAlSZI0kAGVJEnSQAZUkiRJAxlQSZIkDWRAJUmSNJABlSRJ0kAGVJIkSQP9P+4wayRS\\n\",\n       \"hyMkAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b06c950>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGNNJREFUeJztnVmM7Fldxz+na1+6qrqW7q5ebt+ZuTMDTEzgBU2AyAMh\\n\",\n       \"Q0xAXyQkRoNofFA0SiLig6D4gCYQow9EZYlbQKOBgInKYETxwQUzw4CyzMxdeu/au/a1jw9dv8Op\\n\",\n       \"ukvfWrvrzv+bnPyram7/51TVt37n9/ud3+97lNYaBw5GwdJlT8DB4sEhjYOR4ZDGwchwSONgZDik\\n\",\n       \"cTAyHNI4GBljk0Yp9axS6rtKqZeUUh+c5qQcXG2ocfI0SikX8D3gbcAB8N/Ae7TW35nu9BxcRYxr\\n\",\n       \"ad4IvKy1vq217gCfB941vWk5uMpwj/l3m8Ce9Xwf+GH7HyilnFTzgkNrre71+riWxiHEqxjjkuYA\\n\",\n       \"2Laeb3NubRy8CjAuab4BPKmUuq6U8gLvBr40vWk5uMoYy6fRWneVUr8E/BPgAj7tRE6vHowVcj/U\\n\",\n       
\"jR1HeOExbUfYwasYDmkcjAyHNA5GhkMaByPDIY2DkeGQxsHIcEjjYGQ4pHEwMhzSOBgZDmkcjAyH\\n\",\n       \"NA5GxrhFWAAopW4DZaAHdLTWb5zGpKYNpdTAY7fbjcvlMtezszO01macnZ0NDHkd4LLamIffg1KK\\n\",\n       \"paUllFID87bnOitMRBrOi7HeqrUuTGMys4J8wEtLS7jdbiKRCMvLy0QiEUKhEN1ul06nY0az2aTR\\n\",\n       \"aNBsNmk2m5ydndHr9cz1MuZvE8XtduPxeMzo9Xq0220zzs7OzN/OgkCTkgbgnjuhVwXyQbtcLlwu\\n\",\n       \"Fz6fj5WVFdbX10mn0yQSCUOORqNBo9GgXC5zenpKuVwGoNPpoJSi0+lc6nuQ4fV6CQQCZnQ6Her1\\n\",\n       \"OrVajV6vN3OrOA1L81WlVA/4Y631n05hTlOHUsosRz6fj3g8zvb2Nk888QRbW1tUq1UzKpUK2WwW\\n\",\n       \"j8cDQLvdNkvDZViZ4fcgxA+FQoTDYZaXl2m1Wiil6PV6tFqtAeLMApOS5k1a6yOlVAp4Tin1Xa31\\n\",\n       \"16cxsUkhX/TS0hI+n8/8KiORCOl0mu3tbR5//HF2dnY4PT01lqVUKqG1pt1uU61WzRcl/sNlvAeP\\n\",\n       \"x4PP5zMjHA4TjUaJRqNEIhHq9TpKKbrdLvV63SylskxNm0ATkUZrfdS/ZpVSX+C8teXSSSNfsFIK\\n\",\n       \"r9dLIpEglUqRSqVYXV1lZ2eH7e1tVldXiUQixlfpdrs0m03c7vOPRZ53Oh263e6ArzBLDPswy8vL\\n\",\n       \"xONxEokEiUSCSCRCOBw2o1gs4vP50FpTr9cH3k+32536/MYmjVIqCLi01hWlVAh4O/DbU5vZmLA/\\n\",\n       \"bDHliUSCnZ0dHnvsMa5du0YqlSKZTJJKpYhEIvR6PTqdDu12m3q9bpambrdLq9Wi2+2aX++83oPt\\n\",\n       \"w4TDYdLpNNeuXePatWtEo1GCwSDBYJBAIMDJyQlnZ2fU63Xy+bx5L1rrmSxVk1iaNeALfRPqBv5K\\n\",\n       \"a/2VqcxqQgyv/0KaZ555hqeffppQKEQwGCQUCuHz+cyH3Gq18Pv9d1kaCWXnRRpgwHlfXl5mfX2d\\n\",\n       \"J598kte97nXEYrG7lqt6vU6xWOTg4IBGo2EIM4sldWzSaK1vAa+f4lwmgu3D2D5ANBollUoZx/ep\\n\",\n       \"p54a+BVrrc1VTLodfrfb7bm/F3HavV4vXq/XRHs7Ozs89dRTRCIRQ6qlpSWq1SqRSIRAIIDH45m5\\n\",\n       \"DzaNkPvSIR+QECYWi5mRSqXY2dkhlUoRDodxu920221DiGazyeHhIUdHR2bs7e2Ry+Wo1+uX8n68\\n\",\n       \"Xi/Ly8tmbGxskEwmiUajBAIBAOr1ukkV7O7ucnx8TLFYpFar0Wq1aLfbM4uiFp409vrvcrnw+/2s\\n\",\n       \"rKyQTqdJp9Nsbm5y/fp1VldXCYfDuFwuut2uCa9LpRJ7e3tm7O/vk8/nKRQKl0qaSCRinPeNjQ1S\\n\",\n       \"qZTxZTqdDrVazcxTSFMoFKjVajSbTeOHOaS5D4bzMLFYjI2NDR5//HEee+wx1tbWjKVxuVx0Oh2q\\n\",\n       \"1Sq5XI5MJsPu7i63bt3i1q1b3Llzx/yCm83mpbwfmzTb29tsbm4a0tjJvGw2y97e3j0tjR1yTxsL\\n\",\n       
\"Txo7te71evH7/cRiMeMDPPHEEyanEQqFWFpaotPpUKlUyOVyHBwcmA/+9u3b3Llz59Leh/ggfr+f\\n\",\n       \"aDTK6uoqW1tbrK+vE4/HCYVCuN1uut2uSULu7u5ycHBANpulVCrRaDRm7octPGkAY2ECgYDxAyKR\\n\",\n       \"yABZfD4fLpcLrTWNRoNiscjR0RG7u7tkMhlOT09ptVqXMn+JkmREo1ESiQRra2tsbW2RSCRMlFcs\\n\",\n       \"FsnlcuRyObLZLNlslkKhQKVSodVqzWVDdeFJI7vWPp+PYDBoCBOJRIjFYkSjURNJyY62kOb4+Jjd\\n\",\n       \"3V1yudylkkYceImWIpEIiUSC9fV1tra2CIVCZu+rVCoNECaTyVAoFKhWq2ZZmjUeOdLIfoxtaexf\\n\",\n       \"ca/Xo9FoUCqVjKWpVqvGF7gM2GkC2eqIx+Osra2xubmJy+WiVqtRr9ep1+sDpMlmsxSLRVqtlkOa\\n\",\n       \"izCcl/H7/YRCIWNlpPwhHA4P1Ma0Wi0qlQqFQoFMJsPh4eFAXmbe81dK4fF4CAQCZlsgHo+bjPXa\\n\",\n       \"2ppJD7TbbbM85fN5MyqVCr1eb+YblYKFI81wAZLs+MqvU/ZmAoEAbrebs7MzY0mq1SrFYpG9vT2y\\n\",\n       \"2SyVSoVOp2O2COZVYDXsw9h7Y6lUihs3brCxsWGSeN1u15Rs5PN5isUi1WrVhNZ2sdg8sHCkgcEU\\n\",\n       \"u9frJRgM3pc0vV6PWq1GJpMhl8txfHxsknfVapVOpzP3D932YTweD/F4nM3NTbO3tLW1RTqdJhKJ\\n\",\n       \"mLzSMGkqlQqNRmNgX2weVXuw4KSxfRkhjZ05tS1NNpvlzp077O7uGksjpLFLPecBl8s14MMIaW7c\\n\",\n       \"uMFrXvMaEokEsVjsoUgjlhLmV4p6IWmUUp8BfgzIaK1/qP9aHPhrYAe4Dfyk1ro0w3na8xn40CVi\\n\",\n       \"isViJBIJkskkkUgEv99vHF8hzd7eHq+88gonJyfk83lqtdpMSgcuwrAfFo/HSafTXL9+naefftps\\n\",\n       \"d3g8HrTWtFotarUap6enA6Sxl6e5zv8h/s1ngWeHXvsN4Dmt9VPAP/efzwUul4tQKDSwCSklD5ub\\n\",\n       \"m6ytrRGJRPB4PCaJd3p6SrFYJJ/Pm/C6Xq9fWvmmXX0nDnsoFCIQCOD1eg1RTk9POTk54fj4eOAq\\n\",\n       \"zu+88jLDuNDSaK2/rpS6PvTyO4Ef7T/+M+BrzIk4LpeLcDhMMpk0db6bm5tmpFIpPB4PbrfbRB3D\\n\",\n       \"pKlUKtTr9UuxMvCDZGQoFCIajRrS+P1+Q5pms2lCbJswJycnZo9pXiH2XfMf8+/WtNYn/ccnnNfW\\n\",\n       \"zAVLS0uEQiGSySRbW1vs7OyYzcl0Os3KyoqpWJM9pmHSNJtNWq3WlbI0wWDQkKbVatFsNs28hTCZ\\n\",\n       \"TIaTkxMT9Q13HswLEzvCWms9T309l8tFMBgkHo+zsbHBtWvXWFtbMyMSiVCpVIw1qVarpva3UCiQ\\n\",\n       \"z+dNOeS8CsXt8k1gwBdbWVkhGo0SDofN8iSbpaVSiePj4wErc5klG4JxSXOilFrXWh8rpdJAZpqT\\n\",\n       \"ehjcby2XCjuxNFI7Yye/Zh0p2Yk7ON+A9Pv9BAIB/H4/6+vrbGxsmJFOp4nH4wSDQZRSNJtNCoUC\\n\",\n       
\"BwcHvPLKK2Z/rFqtXmpHhGBc0nwJ+Bng9/rXL05tRmPCJoNdgSc1vhJlzCu0tpOQgUBgoDAsnU4P\\n\",\n       \"kCaZTLKyskIgEEApRaPRoFAosL+/z0svvcTx8TH5fJ5qtXopy9EwHibk/hznTm9SKbUH/BbwMeBv\\n\",\n       \"lFLvox9yz3KSF8EmgVgau1hcugmGk2AzlMMdKA4LBoOsrKyYJVTIsrm5ycbGhtnuENLYluall14i\\n\",\n       \"n8+bRr6FsDRa6/fc5z+9bcpzGQvDhLFJIxZGliX5Iu3Hw71BYx5lNPDYrt+VFIHkYiQ1YFsbr9dr\\n\",\n       \"+soBY2kODw+5efOm6fS8KljIjPCDIG2rwWDQEGdzc9NU6IdCobsa/GWHWGprR4Xb7R4YUuIgY9iH\\n\",\n       \"SSaTpqhKEnh2z3ipVDIh9VU8N/2RI400yEmVnpj7s7MzU3RuR09SBVcul6lUKlSr1ZH/n+Lo2g6v\\n\",\n       \"9CQFAgHTpCdDCsUk8yu+lyxBUrZ5WSH1RXgkSePxeAx5JFnmdrsJh8Osrq4asojagpQa5HI50yg3\\n\",\n       \"CuxuRxk2MeLxOCsrK8TjceLxuMnHyIaldEVI9rpUKlGtVk3D21XDQpPG9kPksU0WwCwBktupVCrG\\n\",\n       \"15EOysPDQ4LBoGmSGxVSIWhfZUi/tT2G+5FkiSyXy+RyuQFL45BmCrAr705OTvD7/abJX9LxduQC\\n\",\n       \"5z5HIBCg1+uZTUwZIiNiF3OPCrEuouQgVxnBYBCfz2dIOexTCVkODw/Z39/n4ODAbBU4y9MUIDW+\\n\",\n       \"QhrZLZZlwC7vhB8sV36/HzhvD7G/sF6vh8/nIxKJkEwmp+LT+Hy+u557vV4zJ0k+Si5JOiMODw+5\\n\",\n       \"ffs2h4eHJsx2LM0UYJPG7XYbVYWVlRXq9Trtdtv8om31K2lZHc4Ka62JRCImcppG9CTDlmiTq1LK\\n\",\n       \"kEb6x21Lc/v2bVP361iaKcEmjSTr4vE4q6urA2GqXd0nX6LP57srpyKY5y9aSCMbk8OkOT09NZGU\\n\",\n       \"Q5opQGttPnCXy8Xp6SlHR0f4/X601hQKhbuWBilokqu9hMhyZQs02vp1D7MTbveGdzodEz3Jddjx\\n\",\n       \"FcUq2VCVUL9Wq9FoNMwO/DxLUEfBQpKm0+kMZHaPjo6MoM/x8bHJj0iuRHIn8ppENJIIlP0qCcWl\\n\",\n       \"CF2+yItQq9XMqFarA8m8cDh8T9KIFo7kh6RFRUgjKYGriIUkjRRPiTOptaZWq5HL5Uxtiq1BIzUr\\n\",\n       \"y8vLRKNRo6MXCATu2naQGpxisUihUKBUuriKtVgsDowbN24AGDGiYdiWRso4qtXqAGnmXew+Csat\\n\",\n       \"Ef4I8HNAtv/PPqS1/sdZTdKGkEaI02g0qNVqZLPZgYo4O8kmSbWVlRWSyaQJsWOxmLmnkEa09gqF\\n\",\n       \"gil8ughS6yKj2+0SDodZX1+/55cupKnVapTLZcrl8l2kucp4GEvzWeCPgD+3XtPAJ7TWn5jJrEaA\\n\",\n       \"XT8jkPyMvWlpZ1wlxD06OiIejw+Ev51Oh0KhYEaxWLxwDpLFlUL14T4qe9ui1+tRLBbJZDIcHByw\\n\",\n       \"v7/P4eHhlc7LDGPcGmG4QvrBNnHEzzk7OxvY06lWq8anyeVy7O/vm/pc+wsV7ZpRfBopH202m/ds\\n\",\n       
\"vLPFoaWJP5PJsLe3x82bN029TL1ev5LL0TAm8Wner5T6ac4Pdv/AvFpYhmFr4dkRkERYEqHYCt92\\n\",\n       \"8k0cYTuCkvzJKHkb29kdLvYSAovAtSQm9/f3uXXrFoVCwXRIPMqk+STwO/3HHwU+DrxvKjMaA+KT\\n\",\n       \"CGRrwMa98jP306QbtbZmOCNsE1lEoWVTslarGUuzv7/PzZs3zeakhNlXHWORRmttvEOl1KeAL09t\\n\",\n       \"RlPC8Bc+y1+waOTZOn8icQKYzgLZTRcRonK5bHSK5yk5OynGIo1SKi3C08BPAN+a3pQWDxKJSTnn\\n\",\n       \"+vo6sVjM7He1Wi3TWSAbkrlcjnK5TLvdvqfzfJUxTo3wh4G3KqVez3kUdQv4hZnO8opDpGfX1ta4\\n\",\n       \"fv36XaSxlbdu3rzJ4eGhUa2QRN6iEAbGrxH+zAzmsrCwLc21a9cGSCOVgyKiJBuSInkm0q2LhIXL\\n\",\n       \"CF8FSBQme1m2AJF0eUrxl123I86w7C3NS4Ro2nBIMwakElCG1P6KTyNbGCJ1Mpw8XCT/5V5wSDMG\\n\",\n       \"pHBdNP1swqytrZmWFBFVsjPTw201iwiHNCNCKgGDwSCxWMwsTba1sZOMYmnsDVFb8mwR4ZBmDAhp\\n\",\n       \"RH3L1iqWsxds/0UOIZPd7GazOdOzC2YNhzRjQNQ4ZQd9eXnZlJMCRu5M9q+KxaI5uU52s+XYwEWE\\n\",\n       \"Q5oxIDXHYmnsY3MAc3ZBuVweqMuRgivZ2LwsUaVJ4ZBmDIilGSaNfbiYCCtKeYWQxhaHhMs753sS\\n\",\n       \"OKQZA/dankQY8uzsbKBYXPaZpFh8kfaY7oeHEWp0MITh5WmYNI1Gg9PTU7OTLQd2NJvNhbQsw3As\\n\",\n       \"zYiwZemFNKKXN3z2QjabfSRJ80BLo5TaVkr9i1Lqf5VS31ZK/XL/9bhS6jml1PeVUl9RSsXmM92r\\n\",\n       \"AQm57eXJ5/OxtLRkSHM/S/Mo4KLlqQP8qtb6GeBHgF9USr2WS9QRvgwISaLRqJE6k7OxRVzRbrlt\\n\",\n       \"t9sm5BYFCBGKfhQszQOXJ631MXDcf1xVSn0H2OQSdYTnDfvQDulwECn9YDA4IE0iFYRSmyylptIu\\n\",\n       \"vKh5mWE8tE/TLy5/A/CfXKKO8GVABAKkFSaVShGLxQiFQqZDU6Iiux5YSGOLRT7ylkaglAoDfwf8\\n\",\n       \"ita6MtQDPVcd4XlDLM3y8jKJRIJ0Ok0ymSQWixEMBvF6vYYMdhmEbWmktfdRIAw8XOWeh3PC/IXW\\n\",\n       \"WqRfL11HeJ4Q0iSTSaOZJ2G2FI7b8mfVatUcQGpbF1uE2h5298Ii5HAuip4U8Gng/7TWf2D9J9ER\\n\",\n       \"hiuiIzwr2JYmmUySTqdJJBIsLy8bta1OpzOQAS6Xy3cd2GGfHS7FW16v12jXiGzK/TokrhIusjRv\\n\",\n       \"An4KeFEp9Xz/tQ9xxXSEZw3xaRKJBBsbG/ckjew13Ys0tq6wLRUrQ0omhltxriouip7+nftboyuh\\n\",\n       \"IzxriIafbWlERmTY0sjZmOVy2RzgJfcY1suxdXNEAWMRCANORvihIF+4dGfaywkMhtoirmjndoY1\\n\",\n       \"cWSpEjLVajWjHnFVxRltOKSZATweD6FQyEjbh0KhgfPChyv75ORe6SO/6s6wQ5opYFjDzy4HbbVa\\n\",\n       
\"rKysmKMSk8mkadMVy+Tz+YyY0qPgCDsYESJ0LZZGaz1wiMbGxoZxnBuNhjm7qVqtksvlHNK8WuB2\\n\",\n       \"u40sbSwWw+VyDTwXSxOJRPD5fCZ7LMJGktNxHOFXESSPI1YmFouZRF+9Xjc5GXF6S6US+XzejKOj\\n\",\n       \"IwqFwiMvNeLAgtfrNYdjBIPBu9Q+7VNe6vW6kRqRowbl2OR6vX7lnWBwSDMVSAgeDAYHBBblKod1\\n\",\n       \"5HK5AX2ag4MD9vb2zHmbV1WhfBgOaR4Cw+KNtg8ie0f2LrfdFNfpdMjlcmSzWTOOjo7IZDLm8Ay5\\n\",\n       \"n11wfpXhkOYCiD6xHA/o9/tN6CyJOFusWvSBbW3hUqk0MMSXkWo+u+tyEeCQ5gIIafL5vJFGs4/V\\n\",\n       \"cblcxuGVIb1OoitsC1MPX6Wib5FEAR5IGqXUNudSsKucCxj9idb6Dy9TR3je0FrTaDTI5/MAZimR\\n\",\n       \"SCkQCBgtYDmh7uTkxJylfXx8bEQf5Tq8fC1aD9RFlkZqhF/oF2L9j1LqOa6QjvA8ID6NUopOpzNw\\n\",\n       \"bpMcvWOPTCYzMERexK7esx3lRcO4NcJwhXSEZw0RrxanVyRERFpfcjKyTA0fKSj+yrwOk5811MNO\\n\",\n       \"vl8j/K/AM8AHgPcCp9xHR/hRKgGVsyZlDJ8iJ5GP5GZsEsnxO3YYDpMd6TwvaK3vaRgeijT9pelr\\n\",\n       \"wO9qrb+olFrlB/7MR4G01vp9Q39zdT+NEWGXMSil7jr4azgvYyug20vQIhDFxtik6dcI/z3wD0Ml\\n\",\n       \"n/LfrwNflsM2rNcX45NxcF/cjzRj1Qj3i8kFr3od4VcbHmhplFJvBv4NeJHziAngN4H3AAM6wlYf\\n\",\n       \"lPytY2kWHBP5NOPAIc3iY6zlyYGDe8EhjYOR4ZDGwchwSONgZDikcTAyHNI4GBkOaRyMjJnlaRw8\\n\",\n       \"unAsjYOR4ZDGwciYKWmUUs8qpb6rlHpJKfXBKdzvtlLqRaXU80qp/xrj7z+jlDpRSn3Lem1sedv7\\n\",\n       \"3O8jSqn9/hyfV0o9O8L9pirB+4D7jT1H4O7m9WkNwAW8DFwHPMALwGsnvOctID7B37+Fc7HJb1mv\\n\",\n       \"/T7w6/3HHwQ+NuH9Pgz82pjzWwde338cBr4HvHbcOT7gfmPPUWs9U0vzRuBlrfVtrXUH+Dzwrinc\\n\",\n       \"d+wyU63114Hi0Mvv5FzWlv71xye8H4w5R631sdb6hf7jKmBL8I48xwfcb+w5wmyXp01gz3q+zw8m\\n\",\n       \"PC408FWl1DeUUj8/4b0Es5C3fb9S6ptKqU+Pq+Y+bQle637/MekcZ0maWcTyb9JavwF4B+fq6W+Z\\n\",\n       \"5s31uR2fdN6fBB7jvN7oCPj4qDcYluCddI79+/1t/37VSec4S9IcANvW823Orc3Y0Fof9a9Z4Auc\\n\",\n       \"L4GT4kQptQ6mInEieVutdUb3AXxq1Dk+SIJ3nDla9/tLud+kc5wlab4BPKmUuq6U8gLv5lxKdiwo\\n\",\n       \"pYJKqeX+4xDwdqZTZjpVedtJSmGnLcE7s3LdSaKZh/De38G5x/4y512Yk9zrMc4jsBeAb49zP+Bz\\n\",\n       \"wCHQ5tzfei8QB74KfB/4ChCb4H4/y3lH6ovAN/tf7toI93szcNZ/j8/3x7PjzvE+93vHJHPUWjvb\\n\",\n       
\"CA5Gh5MRdjAyHNI4GBkOaRyMDIc0DkaGQxoHI8MhjYOR4ZDGwchwSONgZPw/UDzRgG/E2K8AAAAA\\n\",\n       \"SUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792af4a550>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEQZJREFUeJzt3X2QXmV5x/HfL7ub3SVvJE2QEiKBQFLIaIWaDCBCqLRQ\\n\",\n       \"RtC2VqStUtux00FrSoUR+aN/MC1amY7oOHTGQlGwalu0iNOCUEowYiUCCS9ZJIEhJUAhUEiWQF52\\n\",\n       \"w9U/nmfDZtmX+87Jvc9z4vczs5PnnL2ec9977rPPXjkv9+WIEAAAAPbflFZ3AAAAoO5IqAAAACoi\\n\",\n       \"oQIAAKiIhAoAAKAiEioAAICKSKgAAAAq6mxl47aZswEAANRGRHi09UUTKtvnSLpGUoek6yLib0fG\\n\",\n       \"zJ8//y3v6+/v18yZM/dZlzNfVu7cWnv27CkSW6rPObH2qON+QLY9lh07dqi3t3e/t1tqPEpuux3G\\n\",\n       \"GgDQOsUu+dnukPRVSedIOkHShbaPL9UeAABAq5S8h2q5pCciYlNEDEj6jqQPFGwPAACgJUomVPMl\\n\",\n       \"bR62/Exz3YS6u7uLdAiTo7OzpbfmAQAw6UomVPt98wcJVb11dXW1ugsAAEyqkqcSnpW0YNjyAjXO\\n\",\n       \"Uu2jv79/7+vu7m6SKQAAUDslE6r7JR1ne6Gk5yRdIOnCkUEjn+YDAACom2IJVUQM2v6UpB+qMW3C\\n\",\n       \"9RHxWKn2AAAAWqXo3cMRcZuk20q2AQAA0Gotfxxr2rRpSXFvvPFG8jZzYiVpcHCwSGypSSTrOPFl\\n\",\n       \"qfHLnfiyHSbVZLJOADj4UMsPAACgIhIqAACAikioAAAAKiKhAgAAqIiECgAAoCISKgAAgIpIqAAA\\n\",\n       \"ACoioQIAAKiIhAoAAKAiEioAAICKSKgAAAAqanktv9xacylsZ8V3dXUlx3Z2pu+yUvXrStXQy912\\n\",\n       \"qRqIpWKlcnUCc2Jzj88SfdifeADA2IqeobK9wPbdttfbftT2p0u2BwAA0Aqlz1ANSLokItbZni7p\\n\",\n       \"Adt3RsRjhdsFAACYNEXPUEXE8xGxrvl6u6THJB1Rsk0AAIDJNmk3pdteKOlESfdNVpsAAACTYVIS\\n\",\n       \"qublvpslrWyeqQIAADhoFH/Kz3aXpO9K+mZE3DLy+y+//PLe1729vert7S3dJQAAgAOqaELlxvPh\\n\",\n       \"10vqi4hrRouZM2dOyS4AAAAUV/qS33sk/aGkM22vbX6dU7hNAACASVX0DFVE/FjMxg4AAA5yJDsA\\n\",\n       \"AAAVtbz0zK5du5Licsp15Jb2yCnBkVvKpYSSfZgyJT3HzinDM3Xq1OTYkiVRSu27digns3v37qxt\\n\",\n       
\"DwwMJMeWKndUKvZgL6tT6njL3Xap8kx4U+5Yl/pbWbe/k63AGSoAAICKSKgAAAAqIqECAACoiIQK\\n\",\n       \"AACgIhIqAACAikioAAAAKiKhAgAAqIiECgAAoCISKgAAgIpIqAAAACpqeemZVDmlL3J1dHQkx3Z1\\n\",\n       \"dRXpQ6kSDjmlZKS8EjE5254+fXpy7KxZs5JjZ8+enRwrST09PcmxOaVZckotzJ07Nzl2586dybEv\\n\",\n       \"vfRScqwkvfjii8mx06ZNS45dvHhxkdicPjz99NPJsZL05JNPFoldtGhRcux5552XHLt8+fLkWCmv\\n\",\n       \"xEjOcb958+bk2NWrVyfHbtq0KTn24osvTo6VpCVLliTH5hxzOfutr68vOfaqq65KjpXyxmTevHnJ\\n\",\n       \"sQsXLkyOPfvss5Njly1blhyb018p7+916t/V8f7ujZlQ2f5dSSFptN/EiIjvpTRuu0PS/ZKeiYj0\\n\",\n       \"TwwAAICaGO8M1XlqJFRjSUqoJK2U1CdpRmqnAAAA6mTMhCoi/qjqxm0fKelcSX8j6S+rbg8AAKAd\\n\",\n       \"TXgTjO3DbV9v+/bm8gm2/yRx+1+SdJmk9BtMAAAAaiblruKvS7pD0hHN5Y2SLpnoTbbfL2lLRKzV\\n\",\n       \"6PdhAQAAHBRSnvKbGxH/bPtySYqIAduDCe87VdL5ts+V1CNppu0bI+Jjw4O2bdu293V3d3fWU1gA\\n\",\n       \"AAClrFq1SqtWrUqKTUmottv+paEF2ydL2jZOvCQpIq6QdEXzPWdIunRkMiXlPSIPAAAwWVasWKEV\\n\",\n       \"K1bsXb7yyivHjE1JqD4j6QeSjrH9E0nzJH1oP/qVPnkSAABAjUyYUEXEA7ZPl7REjXuhHo+I9BnM\\n\",\n       \"Gtu4R9I9+9dFAACA9jZhQmW7V9LFkk5T4yzTatt/HxHpUzgDAAAcxFIu+d0oqV/SV9Q4Q/X7km6S\\n\",\n       \"9HsF+wUAAFAbnqh+je2+iDhhonX71bgdhx9+eFJsTi2/nFp3Ul4dtpxt52w3p9ZWqdiSSu23wcGU\\n\",\n       \"B07flHMc5fSj1DFUx7Fuh/0GAKVExKgftinzUD1o+5ShheZTfg8cqI4BAADU3XjFkR8ZFnOv7c1q\\n\",\n       \"3EP1dkmPT0LfAAAAamGi4sgAAACYwHjFkTcNX7Z9mBozngMAAGCYlOLI59veKOkpNeaS2iTptsL9\\n\",\n       \"AgAAqI2Um9L/WtIpkjZExNGS3ifpvqK9AgAAqJGUhGogIl6SNMV2R0TcLendhfsFAABQGykTe75i\\n\",\n       \"e4ak1ZL+yfYWSdvLdgsAAKA+Us5QfVDS65IukXS7pCfEE4AAAAB7pRRHHjobtUfS14v2BgAAoIbG\\n\",\n       \"m9hzuxoTeY4mImLmgehAaimQnLIaHR0d+9udlilVgqNkGZ6c2JySL6XKw+Ruu9R+LlWGJxelXADg\\n\",\n       \"wBlvHqrpVTdu+1BJ10laqkZy9scR8dOq2wUAAGgnKTelV/FlSf8RER+y3SlpWuH2AAAAJl2xhMr2\\n\",\n       \"LEnvjYiLJCkiBiVtK9UeAABAq6Q85be/jpb0ou0bbD9o+x9sH1KwPQAAgJYomVB1SjpJ0rURcZKk\\n\",\n       \"1yRdXrA9AACAliiZUD0j6ZmI+Flz+WY1Eqx9vPbaa3u/du/eXbA7AAAAZRS7hyoinre92fbiiNgg\\n\",\n       
\"6SxJ60fGTZvGfeoAAKDeSj/l9+dqlKuZKulJSR8v3B4AAMCkK5pQRcRDkpaVbAMAAKDVSt5DBQAA\\n\",\n       \"8Auh9CW/CQ0ODibFTZmSnvvlltTIKVVTKjbn5ysV2y66u7uTY2fNmpW17Zkz0ysm9fT0JMfu3Lkz\\n\",\n       \"OXbz5s3JsVu3bi3SB0nFHgLp7Ez/WMm5hzLnuOjt7U2OlfJ+T3L2c86+yNHf358Vn/OZmPqZLEkD\\n\",\n       \"AwPJsbt27UqOLVUiKnfbOaWfco6hrq6u5NjcY6hUn0v1Ied4y4mVypTXGu9nq99fWwAAgDZDQgUA\\n\",\n       \"AFARCRUAAEBFJFQAAAAVkVABAABUREIFAABQEQkVAABARSRUAAAAFZFQAQAAVERCBQAAUJFLTM2e\\n\",\n       \"3Lgdhx56aGpsznaz+lGqlEtOP3LKC5Qsw1NqP+eUIsgpUZETK+WVLihV/iJn/HKOi9zjPqdsSM6+\\n\",\n       \"KLXfWvlZNVzOfm6H0h65co+jVKXGr12Oi3ZR6vicOnVqcmzOmOR8DuUe9wWPuVF3ctEzVLY/Z3u9\\n\",\n       \"7Udsf8t2ejEuAACAmiiWUNleKOkTkk6KiHdI6pD0kVLtAQAAtEqZUugN/ZIGJB1ie4+kQyQ9W7A9\\n\",\n       \"AACAlih2hioiXpb0d5KelvScpK0R8Z+l2gMAAGiVkpf8Fkn6C0kLJR0habrtPyjVHgAAQKuUvCn9\\n\",\n       \"3ZJ+EhH/FxGDkr4n6dSRQTt27Nj7lXO3PwAAQLsomVD9XNLJtnvdeI7zLEl9I4N6e3v3fnV1dRXs\\n\",\n       \"DgAAQBkl76F6SNKNku6X9HBz9ddKtQcAANAqJZ/yU0R8UdIXS7YBAADQapSeAQAAqIiECgAAoKKi\\n\",\n       \"l/xSzJgxIykupyZPyVp+ObXVStVhy6mplLsvOjo6kmN7enqytp0qp17T66+/nrXtnFp+OT/f7Nmz\\n\",\n       \"k2N7e3uTY+fNm5cce8wxxyTHStKSJUuSY3P228aNG5Nj+/re8pzKmPr7+5NjDzvssORYKW/fLV26\\n\",\n       \"NDk25+e79dZbk2M3bNiQHCvlfX7mHPeLFi1Kjj3rrLOSYxcvXpwce+211ybHStJTTz2VHJvz+ZLz\\n\",\n       \"2Xnssccmx1566aXJsZJ01FFHJcc++2z6XNs5v9d33XVXcmzO78i2bduSY6W8z63UPGDHjh1jbyO5\\n\",\n       \"NQAAAIyKhAoAAKAiEioAAICKSKgAAAAqIqECAACoiIQKAACgorZMqHbu3NnqLqCC3Edb0T7Wr1/f\\n\",\n       \"6i6ggpzHxNFecqYFQXtqy4Rq165dre4CKiChqi8SqnojoaqvV199tdVdQEVtmVABAADUCQkVAABA\\n\",\n       \"Rc4pSXDAG7db1zgAAECmiBi1pltLEyoAAICDAZf8AAAAKiKhAgAAqKjtEirb59j+ue2Ntj/b6v5g\\n\",\n       \"bLb/0fYLth8Ztm6O7Tttb7B9h+1DW9lHjM32Att3215v+1Hbn26uZwzbnO0e2/fZXme7z/bnm+sZ\\n\",\n       \"uxqx3WF7re0fNJcZvxprq4TKdoekr0o6R9IJki60fXxre4Vx3KDGWA13uaQ7I2KxpLuay2hPA5Iu\\n\",\n       \"iYilkk6W9Mnm7xtj2OYiYqekMyPiXZLeKelM26eJsaublZL6JA3dzMz41VhbJVSSlkt6IiI2RcSA\\n\",\n       
\"pO9I+kCL+4QxRMRqSa+MWH2+pG80X39D0gcntVNIFhHPR8S65uvtkh6TNF+MYS1ExOvNl1Mldajx\\n\",\n       \"u8jY1YTtIyWdK+k6SUNPjTF+NdZuCdV8SZuHLT/TXIf6eFtEvNB8/YKkt7WyM0hje6GkEyXdJ8aw\\n\",\n       \"FmxPsb1OjTG6OyLWi7Grky9JukzSG8PWMX411m4JFXM4HESiMScHY9rmbE+X9F1JKyNin/oXjGH7\\n\",\n       \"iog3mpf8jpR0uu0zR3yfsWtTtt8vaUtErNWbZ6f2wfjVT7slVM9KWjBseYEaZ6lQHy/YPlySbP+y\\n\",\n       \"pC0t7g/GYbtLjWTqpoi4pbmaMayRiNgm6d8l/ZoYu7o4VdL5tp+S9G1Jv277JjF+tdZuCdX9ko6z\\n\",\n       \"vdD2VEkXSLq1xX1CnlslXdR8fZGkW8aJRQvZtqTrJfVFxDXDvsUYtjnbc4eeALPdK+k3JK0VY1cL\\n\",\n       \"EXFFRCyIiKMlfUTSf0XER8X41VrbzZRu+7ckXaPGTZbXR8TnW9wljMH2tyWdIWmuGtf7/0rS9yX9\\n\",\n       \"i6S3S9ok6cMRsbVVfcTYmk+F/UjSw3rz0sLnJK0RY9jWbL9DjZuWpzS/boqIq23PEWNXK7bPkPSZ\\n\",\n       \"iDif8au3tkuoAAAA6qbdLvkBAADUDgkVAABARSRUAAAAFZFQAQAAVERCBQAAUBEJFQAAQEUkVABa\\n\",\n       \"zva9zX+Psn3hAd72FaO1BQAHEvNQAWgbtleoMcnheRnv6YyIwXG+/2pEzDgQ/QOAsXCGCkDL2d7e\\n\",\n       \"fPkFSe+1vdb2SttTbF9te43th2z/aTN+he3Vtr8v6dHmults32/7UdufaK77gqTe5vZuGt6WG662\\n\",\n       \"/Yjth21/eNi2V9n+V9uP2f7m5O4NAHXU2eoOAIDeLH3zWUmXDp2haiZQWyNiue1uST+2fUcz9kRJ\\n\",\n       \"SyPif5rLH4+IV5q17dbYvjkiLrf9yYg4cZS2fkfSr0p6p6R5kn5m+0fN771L0gmS/lfSvbbfExFc\\n\",\n       \"KgQwJs5QAWgnHrH8m5I+ZnutpJ9KmiPp2Ob31gxLpiRppe11kv5b0gJJx03Q1mmSvhUNWyTdI2mZ\\n\",\n       \"GgnXmoh4Lhr3RKyTtLDCzwTgFwBnqAC0u09FxJ3DVzTvtXptxPL7JJ0cETtt3y2pZ4Ltht6awA2d\\n\",\n       \"vdo1bN0e8VkJYAKcoQLQTl6VNPwG8h9Kuth2pyTZXmz7kFHeN1PSK81k6lcknTzsewND7x9htaQL\\n\",\n       \"mvdpzZN0uqQ1emuSBQAT4n9dANrB0JmhhyTtaV66u0HSV9S43PagbUvaIum3m/HDH1G+XdKf2e6T\\n\",\n       \"9Lgal/2GfE3Sw7YfiIiPDr0vIv7N9inNNkPSZRGxxfbxI7atUZYBYB9MmwAAAFARl/wAAAAqIqEC\\n\",\n       \"AACoiIQKAACgIhIqAACAikioAAAAKiKhAgAAqIiECgAAoCISKgAAgIr+Hyoqh+rLDshuAAAAAElF\\n\",\n       \"TkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792aed8c90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      
\"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAI0AAACPCAYAAADHlliuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAGLBJREFUeJztnVtsY+tVx3+ft+/29i224yQzk+lp+9AHpNOX8lAq+lBV\\n\",\n       \"p0Jq4YWqEgKVUvEABQESbXmgBV5KJSoED0ioLeKmFgQqKi/QVgKpPHA5qKcXzqVnTjOTjJ2L49jx\\n\",\n       \"/f7xYK9vtj3JnLHjTOxk/6WtOJ5k5zv1v+tb31r/9d9Ka40LF7PAc9ULcLF6cEnjYma4pHExM1zS\\n\",\n       \"uJgZLmlczAyXNC5mxtykUUq9oJR6VSn1ulLqk4tclIvlhpqnTqOUsoDXgPcBeeB/gI9orV9Z7PJc\\n\",\n       \"LCPmjTTvAu5pre9rrXvAV4EPLW5ZLpYZ3jl/bwvYc3z/EPhx5w8opdxS84pDa63Oen/eSOMS4gZj\\n\",\n       \"XtLkgduO728zijYubgDmJc2LwNuVUneVUn7gw8DXF7csF8uMuXIarXVfKfWrwL8CFvAl9+R0czDX\\n\",\n       \"kfupbuwmwiuPRSfCLm4wXNK4mBkuaVzMDJc0LmaGSxoXM8MljYuZ4ZLGxcxwSeNiZrikcTEzXNK4\\n\",\n       \"mBkuaVzMjHlFWAAope4DVWAA9LTW71rEolwsNy5EGkZirPdqrU8WsRgXq4FFbE9ndkJdXF9clDQa\\n\",\n       \"+JZS6kWl1McXsSAXy4+Lbk/v1lrvK6UywDeVUq9qrb+9iIU9ayj1KGBaloXH48Hj8WBZlvl3+ZnB\\n\",\n       \"YGCufr9/Jeu9SlyINFrr/fHXolLqa4xGW1aONEIIpRSWZREOh4lEIkQiEcLhMJZl4fV6sSwLy7I4\\n\",\n       \"PT01V7VaRYRsN8XrZ27SKKXCgKW1rimlIsD7gd9b2MqeIZRSJrJ4vV5s22ZtbY21tTVSqRR+v99c\\n\",\n       \"Pp+PQqFAoVBgOBxSq9WAEWGUUjeCOBeJNOvA18Yh2wv8rdb6GwtZ1TOGkEYiim3bZDIZtra22Nzc\\n\",\n       \"JBQKmSsQCBAOhxkOh1SrVTweD8PhEHAjzZtCa70DPL/AtTwzeDwesx15PB4TQXw+H6FQiHQ6zcbG\\n\",\n       \"Bnfu3OHOnTtmuwqHwwSDQdrtNuVymXw+j8fjmSDLTSDORRPhlYNSCr/fTyAQIBAIEAwGsW2baDSK\\n\",\n       \"bdvEYjE2NjbY2Nhgc3OTXC5nfi4QCOD3+4lEIoRCIfx+v0mUh8Mhg8Hgiv/rng1uJGkCgcAESdLp\\n\",\n       \"NOl0mrW1NfNVrmQyaaKQz+fDsiyi0aghjdc7+p9QcpqbgBtJGr/fTzQaJZVKsba2xq1bt9ja2mJr\\n\",\n       \"a4uNjQ0TeeSSJFm2NSFNIBDAsiyGw+FEbnPdcWNII1HAsixCoRDxeHwid9ne3mZ7e5vbt2+brUu2\\n\",\n       \"IyeGwyHRaNQcySORCJ1Oh06nAzCxRV3X/OZGkMZZrAsEAsTjcbLZrEl0c7kcmUyGWCxGMBjE5/Ph\\n\",\n       \"9XrP3G6UUoRCIVKpFFtbW7ztbW8z9ZrT01OGwyHD4RCttfl63XDtSTN9nA4GgyQSCdbX17lz5w5v\\n\",\n       \"fetbSSaTpFIpYrGY2XKEaGchHA6bba3VanFwcMDR0RHD4ZBWq2WqxcC1TI6vPWkAU7Tz+/0Eg0ET\\n\",\n       
\"aW7fvs1zzz1njtORSIRAIGCIdl5i64w0Wmt8Ph9aa1qtFicno4a/1vpaRhm4AaSR1oAQJhqNkkgk\\n\",\n       \"yGQybG5ucvv2bbMdyVfBeR96KBQimUzS7/fx+XwMBgM6nQ71ep1yuUyn06Hb7dLr9UyVWK7rkCzf\\n\",\n       \"CNJI4huPx8lkMmSzWRKJBOFw2ByjLct6qiOzUgqfz0ckEqHf7+PxeOh2u+Yovra2Rr1eN1etVqPd\\n\",\n       \"bpur0+msfAX52pPG4/GYPCabzbKxsWFIEwqF8Hq9E8dpwZM+UCnwyfHd4/EQiURYW1tjc3OTk5MT\\n\",\n       \"SqWS+epMlHu93sT9V5E4N4I0oVDIJL+3bt0im80Sj8cJh8OGNE7pw5tBIo3Ue4QwzWaTRqPBwcEB\\n\",\n       \"hUKBg4MD03qQiFSv11eaMPAUpFFKfRn4KeBIa/1j4/dSwN8B28B94Ge11pVLXOfMEAJMk2Zra4t0\\n\",\n       \"Ok08HjeRxonztg65n2xP0qcCiMViRlszGAxIpVKmACjbXq/Xo1armWKg3GsVifM0keYvgD8F/srx\\n\",\n       \"3qeAb2qtPz82nv7U+LpyOGUOQhjbtkkmk2QyGdbX10kmk0SjUXw+HzApqhoMBqbWInUWIYkky+f9\\n\",\n       \"TelDhcNhEokEnU4HrTX9fp9ms0m5XDYnrcFgsJKEgacgjdb620qpu1NvfxD4yfHrvwT+nSUhDTw6\\n\",\n       \"YktdRk5M6XSa9fV1YrEYkUjEfID9fp9er0e326Xb7T5GoGAwaKQRTyKNvJbIJoQTwsjfFMIMh8OV\\n\",\n       \"jDbz5jTrWuvD8etDRtqapYB8gFKXOSvSCAmckabb7dJqtWi32/R6PUOkwWCAbdtmWzrvb8KjynM4\\n\",\n       \"HDaEiUQiVCoVDg8PDWn6/b4hzSoW/y6cCGut9bL561mWhc/nIxAIEAqFiEajxONx1tbWyGQyZiuR\\n\",\n       \"JmO326XZbFKv12k0GqbG0u12TZ4ipAmFQhP5jTN3EkiuJEf9YrFIPp/Htm1CoZDJaVaRMDA/aQ6V\\n\",\n       \"Ujmt9YFSagM4WuSiLgKlFF6v1yjsRPsiPSUhSr/fN1+Pjo4oFosUi0VKpZIhivxMKpUyHfFUKkUw\\n\",\n       \"GDTRSk5GTiLKa5/Ph1KKeDxOLpfjueeeo9PpcHx8TKVSoVwuUy6XJ7arVdiq5iXN14FfAP5w/PWf\\n\",\n       \"FraiC8JZAZYoEw6HCQQCppDX7/fpdrt0Oh2azSb7+/vs7u7y4MED8vn8RMMRMDob0dokEglzxePx\\n\",\n       \"iQanHN8lKfZ4PMRiMXK5HO12G4/HQz6fJ5/Po7U2kW2Vos/THLm/wijpTSul9oDfBT4H/L1S6mOM\\n\",\n       \"j9yXuchZcVakEdJIpOl0OjQaDU5PT9nf3+eNN97gtdde44033jD3ke0nk8lMXOvr67TbbWBU6JPo\\n\",\n       \"IMdrIY5IQSXSeDweotEowWDQEKZYLJqItyri9Kc5PX3knH9634LXshBMRxqnNFMKedO9okKhwM7O\\n\",\n       \"Dq+88govv/zyxJHd6/VSKpVMhVd6SwDBYJBYLGYIIomycyQGwLZt4FHPajAYUKvVODw8xOfz0ev1\\n\",\n       \"VqovdS0rws4PbfrSWtNutzk9PeXo6Ij9/X1KpRK1Wo1ut2vuIQ3GwWBAu92mVqvh9XoZDodEIhFi\\n\",\n       
\"sRipVIpWqzVxWnPKPqdfO+s/sgU6r1XBtSQNPCLOdItgOBzSbrepVCocHR2Rz+cpFosTpJEPUP6f\\n\",\n       \"L6SR343FYiSTSWq1Gs1m0xDmrEgx3eGeLh5Ok2cVcC1J4yTMWY3IVqtlIk0+n+f4+PjcSKO1Np1p\\n\",\n       \"IU8qlSKbzVKv12m1WuZoL0nstMh8Oso4ibNqhIEbQhoncYQE1WqVYrHI/v4+Jycn1Ov1xyKNvBZZ\\n\",\n       \"g9x7fX2dSqVCvV6n3W6buo6QYLrx+WaRxkmeVcC1I43ogG3bJpVKkclkTEdbElWfz0c4HCYej5NK\\n\",\n       \"pcxJarp5eRamE23bts2R/jxNjiTezWaTWq1mIpS0LJz1mVUgzrWzT3OOqEjrQARXUksR0kgya9s2\\n\",\n       \"wWDwqUgDk0d65wyU1GZg8sOXNkWz2aRardJoNAxppIC4KoU9uIakebNII6RyRppYLHamTOI8SKQJ\\n\",\n       \"h8MTkea833+zSCMV4VUhzbXbnpwTlKlUymhnzoo08XjcFNjO62BP3xsmI41t26YG5Iw0TjgjTa1W\\n\",\n       \"o9FomFxItqdVwrUjjSS6tVqNk5MTisUiXq/XkAQe/9BFfRcOh40fjfNyjuX6/X62t7dZX1/Htu0J\\n\",\n       \"5Z9UdqeP+9LtTqVS9Ho96vU6p6enlEolIpEI7XbbJMnXoo2wapAWQb1e5+TkxESZeDxupgPOIo2T\\n\",\n       \"OE4vGtmG5IpEIty9e5dsNott22bLk7+ttZ44sTlzLPn71WqVUqlkdD2A0Q6vQm5z7UijtTZa3HK5\\n\",\n       \"TDQaNUP+EgWENDJS6yRMJBIxXWzpZMskg1zb29uGNF6vd6LWMhgMJpqVgCGNbJ3lcpnDw0NisZjx\\n\",\n       \"upFIJeRZZsyrEf4s8EtAcfxjn9Za/8tlLXIWSKSR7SkYDJJOp2k2m+YDEUWfCK1s256wS3POaUej\\n\",\n       \"UdPdzmQyrK2tkc1mzRiviKokNxEPPmfj0e/3G8LEYjFKpRKpVIp4PG5GYQaDwcSc1DJjXo2wBr6g\\n\",\n       \"tf7CpazqAtBa0+v1aLVaVKtVwuEwtVpt4rQCj4gzGAzIZrO0Wi2jfXFqZSTSiAwiHo8TjUaxLItO\\n\",\n       \"p0OlUqHRaBgBV71eNxYmcsm6JMqJhlgG9iSRlrHeZce8GmFYUv9g2Z6azSaWZREIBEyPqNPpGBWe\\n\",\n       \"kMbj8ZDL5bAsC9u2zYco+Ywzp5FIJCKrTqdDv9/n+PjYXKVSiWw2Sy6XI5fLTRg8yu8JEcWiTeQa\\n\",\n       \"Qtxlx0Vymk8opX6e0YPdf2tZRlickQZG1V8hjUQaIY3kNqJzyWazhmxnnZ6ETGItIuO3+/v7PHz4\\n\",\n       \"kHw+z97eHnfv3qXb7eL1eo2pgAzVOWWg6XSaRqNhphUqlcq1Js2fAb8/fv0HwB8BH1vIii4IIQ2M\\n\",\n       \"TiQej+ex7UmIIKSIRqNn5hHy3vQHWalU6Pf7Znva399nZ2eHe/fu8frrr9NqtYzhYy6Xm2g9OI//\\n\",\n       \"mUyGfr9vuu7BYPD6kkZrbTTBSqkvAv+8sBUtAE6Vv3ywBwcH7OzsmGqx2KfJeO20cAoekUakofK1\\n\",\n       \"WCxyfHxsvu7u7pLP5ymVSjQaDWq1mtEAHx8fG1WeRBnZGm3bpt1uk0gkzFpkAM/ZzFw2zEUapdSG\\n\",\n       
\"GE8DPwN8f3FLujjk6Asj0oikU2oi2WyWbDZr5rzPklAA5hgspzG59vf32d/fN6O3Qp6TkxMzmlut\\n\",\n       \"Vg1pJNKEQqEJ8ti2Tb/fJx6Pm5qNTCv0+31DtmU7Tc2jEf4M8F6l1POMTlE7wC9f6ipngFOGIElx\\n\",\n       \"pVKhUCiglKLT6ZiEMxKJkEgkzO86db1OiJRCEt3d3V329vbY3d3l4cOHxiFCGpFS8T05OeH4+Bif\\n\",\n       \"z0cwGDTSC/leZKASaaT5KfUkGeRbNsyrEf7yJaxlYZgO6ZVKBY/HYyrF4vKQyWRMYuyMMtO1Eok0\\n\",\n       \"x8fHFAoFHjx4wM7ODj/60Y/Y2dl5bKy3Xq9PRBqJKs46kWxDXq+XRCIxEWm63a4hzDLWba5dRXga\\n\",\n       \"Em0ajYbZggqFgnG9krFb8QkOBAKPVXjz+TwPHz5kb2+PfD5PoVCYUPtNC6mcGmSZgJBBu0gkYo7Y\\n\",\n       \"sj1KR15cLYLBIJXK6DAqIvZlwo0gjfMIrrXm4OCAQCAAQKvVeqz/5ExC+/0+e3t7Zjva29vj+PiY\\n\",\n       \"crlMq9U6U3kneZTMWEm9SPQ3Mrgnx3jpyOdyOer1uulndbtdqtXq0jUxbwxpADNqK4SRXCWZTJor\\n\",\n       \"Ho9PbDW9Xo+9vT0ePHjAgwcP2N3dNflLs9mc0AULhDTS0ZYoI+PB0j6Q0WFx0Go0Gmat3W6XWq12\\n\",\n       \"rlnkVeLGkEb6Os1mE3iUp0gFN5PJmERWTi4inpIoc//+fXZ3d41BgKjuptFut+n3+9TrdSzLMltQ\\n\",\n       \"IpEglUpNmAn4/X6zPckaJcIcHR25pLkqTOtvJSFWSpmI0m63jZzCaVDU7/cpFApmzEW0L0+qoUgS\\n\",\n       \"K/UikXmWy2WKxeLE6clpLGDbNp1Ox0QikaBalrVUUws3gjTTcOY4MtctUoqzchoZ1G80GhPD+ufB\\n\",\n       \"SVJxpWg0GoY0slXJKUlyHul4y0nKaYQ9PblwlbhxpHHmOFLCbzQaVCoV06CcHnBrNpu0Wi1jLC33\\n\",\n       \"edKH5zQtkshWqVQoFotG0O4kjUg1gMdII/qcqyaL4MaSpt/vmyLfWW2E6dmnWQbbztoOncRMpVJm\\n\",\n       \"zkoqxJKcS89KSON80suytBRuHGng2buJ93o908X2eDxG8F4qlSZ8+JRSxu7NacJkWRbNZtOc1q46\\n\",\n       \"4txI0jxr9Ho9U1wcDAYkk8mJZ0qJvaxTvyMegZubm3i9XsrlsikcXnXEcUnzDCCRZjAY0Gq1SCQS\\n\",\n       \"Ew8kSyaTRm8sxtYindjc3DTbo5giXTWeSBql1G1GMs8so+bkn2ut/0StgI/wMkEMH9vtNkopQxq5\\n\",\n       \"pM0gNrXSSJXaUa/XM62JpScN0AN+Q2v9klIqCvyvUuqbwEdZUh/hZYTTNUIKjKenp+YkJbUaEYP5\\n\",\n       \"/X5isZgRaUmJQH7nqo0DnkgarfUBcDB+XVdKvQJsseQ+wsuMadcKsXULh8PGJcvn8xmtjc/nMxXi\\n\",\n       \"4+NjU0cSMl2FdOKpc5qxuPydwH+xxD7CywqndNRJGq21sVVrNpuGKDKIF4vFaLValEolIyTrdrtG\\n\",\n       \"IHYVp6mnIs14a/pH4Ne11rUpSeTS+QgvM5yRBjDOWrlcziTL0mIQkVa9Xufg4IBEIkEkEqHVak3o\\n\",\n       
\"bZ41nka552NEmL/WWov169L6CK8CJCkWXY0o/A4PDzk4ODDP0pRLbPqz2Sybm5tYlmUmF8RMSfAs\\n\",\n       \"os6bnZ4U8CXgZa31Hzv+aWl9hFcBIrmQmou0FwqFghltEZ9iEYbF43HW19fZ3t424zeDwWCiH/as\\n\",\n       \"tqk3izTvBn4O+J5S6jvj9z7NkvsILzvEekRmt8U0Umzw5d/kFOX3+w1pxMZNCHNycjJhKXvlkUZr\\n\",\n       \"/R+cb3y0lD7CqwAhi+hnJNLIeIvMfcfjcbTWE5EGMHKLUqlkrE7kOP8sNMVuRfgKML2VtNttYz/i\\n\",\n       \"NFxKp9NGIyzmAVprI9ByRqYnicIWDZc0SwBpM5TLZfMshXQ6TbVapdlsmnHeaDRq+lCJRMI890Eq\\n\",\n       \"zSKEdyPNDcB0QzORSFAul80MumxbYkAgNiUSaeRZCzKOc9lwSbMEENKIEUAymaRSqRjS+P1+YyIQ\\n\",\n       \"CAQ4OTkxA3bhcNicoJ6VIZJLmiWAJMYSLZxu6rFYjPX1dRKJBMlk0kQbeVpeNptlMBiYKOVUF14W\\n\",\n       \"XNIsAeS4LB92tVo1s1n9fp/T01Nu3bqFZVnGfcu2bTKZDLdu3QIetSdOT08vfb0uaZYAzsc6D4dD\\n\",\n       \"arUaBwcH5hE/nU7HJMgbGxtm7CWdTlOv143tSbVaPdeWdpFwSbMEmJZOyFSlPBNKa21cupxd8HQ6\\n\",\n       \"Ta/Xm2iAPgu9jUuaJYFTjC7uEiJ+Fy1xtVo1BgZyBM9kMqZuIyO/Z82XLxIuaZYQkhDLtEKr1TKe\\n\",\n       \"N+KYpbUmGAySTCYnjuAinZBin3jcLBIuaZYQTp2MzF2JfUmlUjFqv2AwaOo2TjetdrtNp9O5NH+b\\n\",\n       \"J26ASqnbSql/U0r9n1LqB0qpXxu//1ml1EOl1HfG1wsLX9kNhnzYkq9MR5pms2kijcyHT7tpybzU\\n\",\n       \"ZRT75tUIL62P8HXA9LBdq9UyvoGJRIJut0s6nTZ5jbhPSOSRaU1R+C0a82qEYUl9hK8j2u02pVKJ\\n\",\n       \"vb09AOOJLM6kcuqSSc1AIGD+/SoijYFDI/yfjHQ2S+kjfB0hpNH60cPfPR6PKfBNG2o7H1x/GaR5\\n\",\n       \"qkP9eGv6B0Ya4TojH+G3AM8D+4x8hF1cEuTYvbe3x6uvvsq9e/coFArmGeESacTCxGkccCWRxqER\\n\",\n       \"/hvRCOsl9xG+bpB6jViWHB0dUSgUWFtbIx6P02q1ODo6olKpPPaYw8uQScylEV52H+HrCKcnTrVa\\n\",\n       \"pVAo4Pf7zfH66OiIYrHI4eGheU6mRKFFYx6N8O8AH1lWH+HrCCdhtNbUajUKhQKdTsc4oosxdrVa\\n\",\n       \"NY+EluLgoqEuS+XlzkItFs7cxPmEGL/fP+Fw7nz21EWds7TWZyZELmlcnIvzSHP1FgQuVg4uaVzM\\n\",\n       \"DJc0LmaGSxoXM8MljYuZ4ZLGxcy4tCO3i+sLN9K4mBkuaVzMjEsljVLqBaXUq0qp18cuoBe9332l\\n\",\n       \"1PfGEtP/nuP3v6yUOlRKfd/xXkop9U2l1A+VUt9QSiWedI+nuN/cUtgnyGvnWuOlyXWdfv+LvAAL\\n\",\n       \"uAfcBXzAS8A7LnjPHSB1gd9/DyMh2fcd730e+O3x608Cn7vg/T4D/Oac68sBz49fR4HXgHfMu8Yn\\n\",\n       
\"3G/uNWqtLzXSvAu4p7W+r7XuAV8FPrSA+86tKtJafxsoT739QUa2toy//vQF7wdzrlFrfaC1fmn8\\n\",\n       \"ug44LXhnXuMT7jf3GuFyt6ctYM/x/UMeLXheaOBbSqkXlVIfv+C9BJdhb/sJpdR3lVJfmmW7c2LR\\n\",\n       \"FrxTct0LrfEySXMZZ/l3a63fCXwA+BWl1HsWeXM9iuMXXfeFpbDTFrwXXeOi5bqXSZo8cNvx/W1G\\n\",\n       \"0WZu6LFaUGtdBL7GaAu8KA6VUjkYKRK5oL2t1vpIjwF8cdY1PsmCd541nifXvcgaL5M0LwJvV0rd\\n\",\n       \"VUr5gQ8zspKdC0qpsFLKHr+OAO9nMTJTsbeFBdjbjj9UwUxS2Kew4J1pjU+S6867RuDyTk/jjP0D\\n\",\n       \"jDL2e8CnL3ivtzA6gb0E/GCe+wFfAQpAl1G+9VEgBXwL+CHwDSBxgfv9IqOn1nwP+O74w12f4X4/\\n\",\n       \"AQzH/43fGV8vzLvGc+73gYusUWvtthFczA63IuxiZrikcTEzXNK4mBkuaVzMDJc0LmaGSxoXM8Ml\\n\",\n       \"jYuZ4ZLGxcz4f041SDwzkyB1AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792b570150>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlQAAACbCAYAAACkuQVhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAEbVJREFUeJzt3X+QXWV9x/HPJ7vZZJONYfgRAyE0SQsUMloBYQIiCNpC\\n\",\n       \"QcBSK9JWqWXsdNRKqTJiZtq/2tHKdERH2hlLioJVS1GiDi1CW0DEQviRAEkghI78SGiyQCHssmST\\n\",\n       \"Dd/+ce+GzbI/nicnz557w/s1s5N7zv3e8zz3POfe+8359XVECAAAAHtvWt0dAAAAaHckVAAAABWR\\n\",\n       \"UAEAAFREQgUAAFARCRUAAEBFJFQAAAAVddbZuG3u2QAAANpGRHis+UUTKttnS7paUoekayPib0fH\\n\",\n       \"zJs3702v6+/vV09Pz163+/rrrxeLLxW7P90PbHBwUDNmzNhjXs77K7WOc+N37dqVHJvz/krFAgDq\\n\",\n       \"U+yQn+0OSd+QdLakYyVdbPuYUu0BAADUpeQ5VCdJejIinoqInZK+L+mCgu0BAADUomRCtUDSsyOm\\n\",\n       \"NzXnTaqrq6tIhzA1Ojo66u4CAABTqmRCtdcnf5BQtbfOzlqvdQAAYMqV/OXbLGnhiOmFauyl2kN/\\n\",\n       \"f//ux11dXSRTAACg7ZRMqB6QdKTtRZKek3SRpItHB1W5mg8AAKAVFEuoImLI9mck/VSN2yasiIjH\\n\",\n       \"SrUHAABQF9d5nxvbMdZ9qKriPlSth/tQlY8FAJRXy409U8ycOXOfLzP3R6gVfmRLJV+5CUfO+xsa\\n\",\n       \"GirSj5KJaE68PeZnZr9BsgYA+w61/AAAACoioQIAAKiIhAoAAKAiEioAAICKSKgAAAAqIqECAACo\\n\",\n       
\"iIQKAACgIhIqAACAikioAAAAKiKhAgAAqIiECgAAoKLaa/kNDAzU3YViStVKy63Pl6Pdat2VrNtI\\n\",\n       \"rTsAQKqie6hsL7R9h+11ttfa/mzJ9gAAAOpQeg/VTkmXR8Qa2z2SHrR9e0Q8VrhdAACAKVN0D1VE\\n\",\n       \"bImINc3H/ZIek3RYyTYBAACm2pSdlG57kaTjJN03VW0CAABMhSlJqJqH+26SdFlzTxUAAMB+o/hV\\n\",\n       \"franS/qBpO9ExMrRz7/66qu7H0+fPl1dXV2luwQAALBPFU2o3LiufoWk9RFx9Vgxs2fPLtkFAACA\\n\",\n       \"4kof8nuPpD+UdIbt1c2/swu3CQAAMKWK7qGKiJ+Lu7EDAID9HMkOAABARbWXnpkxY0ZS3LRp5XK/\\n\",\n       \"nBIjpUqXtEIfpLz1vGvXruTY6dOnZ/WjFeSsi5zYnJI9OWM9ODiYHCtJO3fuTI7NGetSctZFbnmm\\n\",\n       \"VigzlLMNdXbmfXXnXOyT04+cbS5neytZXqvd5P725Xy/lPouKvXb1+rYQwUAAFARCRUAAEBFJFQA\\n\",\n       \"AAAVkVABAABUREIFAABQEQkVAABARSRUAAAAFZFQAQAAVERCBQAAUBEJFQAAQEW1l5456KCDkuKG\\n\",\n       \"hoaSl5lbJiPn9vs5cvqcc1v/nPeX+95yS1qkmj17dpHYuXPnZvUjtdSRlFcqY8eOHcmxOeu4t7c3\\n\",\n       \"Ofb5559PjpWkvr6+rPhUBx54YHLskiVLiix369atybGStGnTpuTYF154ITk2Z/s88cQTk2PPPffc\\n\",\n       \"5Fgpb/vcsmVLcuyqVauSY9euXZscm7NtnnXWWcmxknTBBRckxx555JHJsQMDA8mxK1asSI69+eab\\n\",\n       \"k2Ml6bXXXkuOzfl9yPnemjVrVnLsEUcckRx7wgknJMdK0uGHH54cm/q5vuaaa8Z9btw1ZPt3JYWk\\n\",\n       \"sdZ4RMQPUxq33SHpAUmbIuK8lNcAAAC0k4lSzvPUSKjGk5RQSbpM0npJc1I7BQAA0E7GTagi4o+q\\n\",\n       \"Ltz24ZLOkfQ3kv6i6vIAAABa0aQnpdueb3uF7Vub08favjRx+V+VdIWk9BOEAAAA2kzKVX7fknSb\\n\",\n       \"pMOa0xslXT7Zi2x/UFJvRKzW2OdhAQAA7BdSTts/OCL+xfaVkhQRO22nXL52iqTzbZ8jaaakt9m+\\n\",\n       \"PiI+PjJo5BUlPT096unpSe89AABAIZs3b9bmzZuTYlMSqn7bu+9tYHuZpG2TvSgilkta3nzN6ZI+\\n\",\n       \"PzqZkqT58+cndRQAAGAqLViwQAsWLNg9ff/9948bm5JQfU7STyQtsf0LSYdI+vBe9GuiKwYBAADa\\n\",\n       \"1qQJVUQ8aPs0SUercS7UhohIv+NhYxl3Sbpr77oIAADQ2iZNqGx3S/qUpFPV2Mt0t+1/iIjtpTsH\\n\",\n       \"AADQDlIO+V0v6RVJX1djD9XvS7pB0u8V7BcAAEDbcMTEpzbZXh8Rx042b68at+Owww6bPFB59esm\\n\",\n       \"e0+j5dTRy112qmnT0utUd3R0JMdOnz49qx85y86p7ZSz3Jz6Ujn1EqW8sc5Zdk7dv5yxTq11KUkH\\n\",\n       \"HHBAcqyUV9cwp45ef39/cuyTTz6ZHJtTqzCnD1K575eurq7k2Hnz5iXH5n6uu7u7k2Nzts+c2O3b\\n\",\n       
\"0w9qbNs26XVPu+XWK83ZNkrV82yF35xcOeu51O9ZTmxufGqfX3nlFUXEmCsjZQkP2T55eKJ5ld+D\\n\",\n       \"SS0DAAC8BUxUHPnRETH32H5WjXOojpC0YQr6BgAA0BYmK44MAACASUxUHPmpkdO256lxx3MAAACM\\n\",\n       \"kFIc+XzbGyX9Uo17ST0l6d8L9wsAAKBtpJyU/teSTpb0REQslvR+SfcV7RUAAEAbSUmodkbEC5Km\\n\",\n       \"2e6IiDskvbtwvwAAANpGyo2EXrI9R9Ldkv7Zdq+kvJu8AAAA7MdS9lB9SNKApMsl3SrpSXEFIAAA\\n\",\n       \"wG4pxZGH90btkvStor0BAABoQ+OWnrHdr8aNPMcSEfG2yo3bMX/+/KTYnFvZ55REafajSGwpOaUI\\n\",\n       \"cssW5MTnlE9ohVgpr8RIKa1Q/ia3HznbfU5ZlNzPaqrccR4YGEiOzd3mUFbJ7+RWKfuC1jJe6ZmJ\\n\",\n       \"7kPVU7VR2wdIulbSUjWSsz+OiHurLhcAAKCVlPnv4Ru+JunfIuLDtjslzS7cHgAAwJQrllDZnivp\\n\",\n       \"vRFxiSRFxJCk9BLiAAAAbSL9xKR8iyU9b/s62w/Z/kfbswq2BwAAUIuSCVWnpOMl/X1EHC/pVUlX\\n\",\n       \"FmwPAACgFiUTqk2SNkXE/c3pm9RIsPbQ19e3+29wcLBgdwAAAMoodg5VRGyx/aztoyLiCUkfkLRu\\n\",\n       \"dNycOXNKdQEAAGBKlL7K78/UKFfTJel/JH2icHsAAABTrmhCFREPSzqxZBsAAAB1K3kOFQAAwFtC\\n\",\n       \"6UN+tcgtO9HR0VGkHzklEXJK65TUCmU1ckqXdHd3Zy17xowZybE56yKndMmLL76YHLtjx47k2Nzt\\n\",\n       \"vtRY55TAydnuc2Jzy5GU+vyVKmuV+/5KlasqVaoqR6uUh2mFsmTtqFVKv6VuRxNtx63xKw4AANDG\\n\",\n       \"SKgAAAAqIqECAACoiIQKAACgIhIqAACAikioAAAAKiKhAgAAqIiECgAAoCISKgAAgIpIqAAAACpy\\n\",\n       \"nbfttx1z5swpsdxi8aXKX3R2plcByulD7vjm9DmnZE9O2Ymccis5ZU5y+5FTyiVnPeeU1skplZNb\\n\",\n       \"PuW1115Ljs0Zk5x1XCq2VcqAzJo1Kzl25syZybG5233O9pmzHQ0NDSXHDg4OJsfmfPZyt/uc7Shn\\n\",\n       \"vbVKqbGc7+VDDjkkOXbZsmXJsYceemhybF9fX3Ls2rVrk2MlacuWLcmx27ZtS4rr6+tTRIw52EX3\\n\",\n       \"UNn+ou11th+1/V3b6b8OAAAAbaJYQmV7kaRPSjo+It4hqUPSR0u1BwAAUJf040z5XpG0U9Is27sk\\n\",\n       \"zZK0uWB7AAAAtSi2hyoi/k/S30l6RtJzkl6OiP8o1R4AAEBdSh7y+1VJfy5pkaTDJPXY/oNS7QEA\\n\",\n       \"ANSl5Enp75b0i4h4MSKGJP1Q0imjgwYHB3f/5VwxAgAAUNLQ0NAeecpESp5D9bikv7TdLWm7pA9I\\n\",\n       \"WjU6KOeycAAAgKnS2dm5x22NJrqFTMlzqB6WdL2kByQ90pz9zVLtAQAA1KXkHipFxFckfaVkGwAA\\n\",\n       \"AHWj9AwAAEBFJFQAAAAV1V7L7+ijj06KLdnPnLpKOTX3cmoq5dSB6urqSo7NqVsl5a2LnH7kyOnz\\n\",\n       
\"wMBAkT5I0ty5c5Njc2pXdXd3F+nDQQcdlBwr5fU5Zz1v3749OTbnyt6c7S31e2XY0qVLk2Nz1tvT\\n\",\n       \"Tz+dHHvjjTcmx65cuTI5VsqrxZizHZ122mnJsWeeeWZy7JIlS5Jjb7nlluRYSbrzzjuTY5955pnk\\n\",\n       \"2Jy6tJdeemly7IUXXpgcK+XVCt26dWty7L333pscu3HjxuTY3t7e5Nic+qNSXv3IxYsXJ8UtX768\\n\",\n       \"nlp+AAAAbwUkVAAAABWRUAEAAFREQgUAAFARCRUAAEBFJFQAAAAVtWRCVfJSeJT38ssv190F7KXH\\n\",\n       \"H3+87i6ggr6+vrq7gL20bt26uruAikiosM9t27at7i5gL23YsKHuLqACEqr2tX79+rq7gIpaMqEC\\n\",\n       \"AABoJyRUAAAAFdVeeqa2xgEAADKNV3qm1oQKAABgf8AhPwAAgIpIqAAAACpquYTK9tm2H7e90fYX\\n\",\n       \"6u4Pxmf7n2xvtf3oiHkH2r7d9hO2b7N9QJ19xPhsL7R9h+11ttfa/mxzPmPY4mzPtH2f7TW219v+\\n\",\n       \"UnM+Y9dGbHfYXm37J81pxq+NtVRCZbtD0jcknS3pWEkX2z6m3l5hAtepMVYjXSnp9og4StJ/NqfR\\n\",\n       \"mnZKujwilkpaJunTzc8bY9jiImK7pDMi4l2S3inpDNunirFrN5dJWi9p+GRmxq+NtVRCJekkSU9G\\n\",\n       \"xFMRsVPS9yVdUHOfMI6IuFvSS6Nmny/p283H35b0oSntFJJFxJaIWNN83C/pMUkLxBi2hYgYvgNy\\n\",\n       \"l6QONT6LjF2bsH24pHMkXStp+Koxxq+NtVpCtUDSsyOmNzXnoX28PSK2Nh9vlfT2OjuDNLYXSTpO\\n\",\n       \"0n1iDNuC7Wm216gxRndExDoxdu3kq5KukPT6iHmMXxtrtYSKezjsR6JxTw7GtMXZ7pH0A0mXRcQe\\n\",\n       \"tUsYw9YVEa83D/kdLuk022eMep6xa1G2PyipNyJW6429U3tg/NpPqyVUmyUtHDG9UI29VGgfW23P\\n\",\n       \"lyTbh0rqrbk/mIDt6WokUzdExMrmbMawjUTENkm3SDpBjF27OEXS+bZ/Kel7ks60fYMYv7bWagnV\\n\",\n       \"A5KOtL3IdpekiyT9uOY+Ic+PJV3SfHyJpJUTxKJGti1phaT1EXH1iKcYwxZn++DhK8Bsd0v6TUmr\\n\",\n       \"xdi1hYhYHhELI2KxpI9K+q+I+JgYv7bWcndKt/3bkq5W4yTLFRHxpZq7hHHY/p6k0yUdrMbx/r+S\\n\",\n       \"9CNJN0o6QtJTkj4SES/X1UeMr3lV2M8kPaI3Di18UdIqMYYtzfY71DhpeVrz74aIuMr2gWLs2ort\\n\",\n       \"0yV9LiLOZ/zaW8slVAAAAO2m1Q75AQAAtB0SKgAAgIpIqAAAACoioQIAAKiIhAoAAKAiEioAAICK\\n\",\n       \"SKgA1M72Pc1/f8X2xft42cvHagsA9iXuQwWgZdh+nxo3OTwv4zWdETE0wfN9ETFnX/QPAMbDHioA\\n\",\n       \"tbPd33z4ZUnvtb3a9mW2p9m+yvYq2w/b/pNm/Pts3237R5LWNuettP2A7bW2P9mc92VJ3c3l3TCy\\n\",\n       \"LTdcZftR24/Y/siIZd9p+19tP2b7O1O7NgC0o866OwAAeqP0zRckfX54D1UzgXo5Ik6yPUPSz23f\\n\",\n       
\"1ow9TtLSiHi6Of2JiHipWdtule2bIuJK25+OiOPGaOtCSb8h6Z2SDpF0v+2fNZ97l6RjJf2vpHts\\n\",\n       \"vyciOFQIYFzsoQLQSjxq+rckfdz2akn3SjpQ0q81n1s1IpmSpMtsr5H035IWSjpykrZOlfTdaOiV\\n\",\n       \"dJekE9VIuFZFxHPROCdijaRFFd4TgLcA9lABaHWfiYjbR85onmv16qjp90taFhHbbd8haeYkyw29\\n\",\n       \"OYEb3ns1OGLeLvFdCWAS7KEC0Er6JI08gfynkj5lu1OSbB9le9YYr3ubpJeaydSvS1o24rmdw68f\\n\",\n       \"5W5JFzXP0zpE0mmSVunNSRYATIr/dQFoBcN7hh6WtKt56O46SV9X43DbQ7YtqVfS7zTjR16ifKuk\\n\",\n       \"P7W9XtIGNQ77DfumpEdsPxgRHxt+XUTcbPvkZpsh6YqI6LV9zKhla4xpANgDt00AAACoiEN+AAAA\\n\",\n       \"FZFQAQAAVERCBQAAUBEJFQAAQEUkVAAAABWRUAEAAFREQgUAAFARCRUAAEBF/w/CsMbhRL/ldgAA\\n\",\n       \"AABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7d792baffb50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"for i in range(8):\\n\",\n    \"    figure(figsize=(2, 2))\\n\",\n    \"    imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\\n\",\n    \"    figure(figsize=(10, 2))\\n\",\n    \"    imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')\\n\",\n    \"    xlabel('iteration')\\n\",\n    \"    ylabel('label')\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Define, train, and test the classic LeNet with the Python interface.\",\n  \"example_name\": \"Learning LeNet\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 2\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/02-brewing-logreg.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Brewing Logistic Regression then Going Deeper\\n\",\n    \"\\n\",\n    \"While Caffe is made for deep networks it can likewise represent \\\"shallow\\\" models like logistic regression for classification. We'll do simple logistic regression on synthetic data that we'll generate and save to HDF5 to feed vectors to Caffe. Once that model is done, we'll add layers to improve accuracy. That's what Caffe is about: define a model, experiment, and then deploy.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"import os\\n\",\n    \"os.chdir('..')\\n\",\n    \"\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, './python')\\n\",\n    \"import caffe\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"import os\\n\",\n    \"import h5py\\n\",\n    \"import shutil\\n\",\n    \"import tempfile\\n\",\n    \"\\n\",\n    \"import sklearn\\n\",\n    \"import sklearn.datasets\\n\",\n    \"import sklearn.linear_model\\n\",\n    \"\\n\",\n    \"import pandas as pd\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Synthesize a dataset of 10,000 4-vectors for binary classification with 2 informative features and 2 noise features.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAiMAAAImCAYAAACB54oCAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvXmQHOd5p/m8mZWVdZ9dfV/oRqPRAIiDIMH7ECmJkqjL\\n\",\n       
\"lqxrZK3Xno0Zz3i0G2Fv7EZs7IR3YsYbc2x45N2Vx4csj6WRZy1bHh2ULZOUSPGUCJIAiLvRDfR9\\n\",\n       \"Vtd9V+a3f2RBbIIACRJoNAjkE4GjMquy3swvK/OX7/ceopTCxcXFxcXFxWWz0DbbABcXFxcXF5eb\\n\",\n       \"G1eMuLi4uLi4uGwqrhhxcXFxcXFx2VRcMeLi4uLi4uKyqbhixMXFxcXFxWVTccWIi4uLi4uLy6ay\\n\",\n       \"oWJERH5fRH4qIv/xguU+EfmaiDwpIl/ZSBtcXFxcXFxcrm82TIyIyK1AUCl1P+AVkdvWrf4y8F+U\\n\",\n       \"Ug8rpf7HjbLBxcXFxcXF5fpnIz0jdwD/0Pr/E8Bd69Y9AHxcRH4iIh/bQBtcXFxcXFxcrnM2UozE\\n\",\n       \"gELr/7nW6/MMAz8AHgX+dxHRN9AOFxcXFxcXl+sYzwZuOwdEWv+PAtkL1j2tlGqIyBmgA5hf/2ER\\n\",\n       \"cevUu7i4uLi43EAopeRiyzdSjLwA/BPg28DDwNfXrXse2CMirwKDwPLFNnApo11uTERE3QxjLiIa\\n\",\n       \"dPwWPByCbavO0ooHnuiDl/9cKXVqcy28ttws4/52iIgHur4M7zdhOO0sLRnweC8c+ppS6szmWnh1\\n\",\n       \"ccf92iMSvAf2fxQeOAceBTZwqAeePaVU+lsb//2XdjJs2DSNUupVoCoiPwWaSqmDIvIHrdX/Fvg3\\n\",\n       \"wLPAnyilmhtlh4vLdUg/DLS/LkQA/E3YnYGOuzfPLJdNZgAGE68LEYBgA/bkoP2uS3/MxeXtERGB\\n\",\n       \"+P2wf94RIuBIgL1z0D4mIrG33MAGs5GeEZRS/9MFr7/c+ncReGQjv9vF5TomAOGLPCFEK2Akrr05\\n\",\n       \"LtcJgddnttcTqYAnfs2tcbnR0MAIQiz9psWEFBDgjeEU1xS36Nl1hgg7RfiSCMnNtsVlw1iGeQ2s\\n\",\n       \"C1zUswkonN4ck1yuA5ZhThzX+Xrm4lAc3xSLXG4YlFIWVGdg+gIPSNkDKxawtimGtXDFyHWECF8E\\n\",\n       \"fgz8MvCqCIOba5HLRqCUWoX5n8NTg5D2Q02Hkx1wUEHuhc22z2VzUEotwewr8PQWWGudF8c74WAD\\n\",\n       \"8j/bbPtcbgQW/wGej8DZBDQ0WAzBT/pg5XGlVHUzLROlrs+klZstuEmEUZwYmgeU4rgIvw18CrhH\\n\",\n       \"Ka7PQbrK3Exj7qSzm7dB272ghaB0AtaeUkpdNJj7RuZmGve3wwli9d0OyXtAC0LpWOu8WH37T7+3\\n\",\n       \"cMd9cxCRAWh/CLz9YKVh6Smwj6lrIAbeasxdMXKdIMLfAC8oxX9ovdaAl4DfU4q/2VTjrhE325i7\\n\",\n       \"OLjjfnPijvvNhytGrnNEuAWnWu2wUpTXLX8U+NfArTeDd+RmGnOX13HH/ebEHfebj7caczdm5Prg\\n\",\n       \"N4GvrhciLf4OCAMHrr1JLi4uLi4u1wZXjGwyIviAzwL/+cJ1SmEDf4RTPM7lOkVEvCISFxFjs21x\\n\",\n       \"2RhEJCwi0c22w8Xl3SAigdY16rq957vTNJuMCJ8F/rFSfOAS67uBo0CXUtSuqXHXmPfamIuIHoL7\\n\",\n       \"k3B/GPQCNFbhqRI8ey2CwW4UrudxF5FkCj6RgCEBsjC3CN9VSs2/7Ydd3pLredxvFEQkEIcPJ2Cv\\n\",\n       
\"D8hCbgF+YCl1cpPsueSYb2jRM5fL4vPANy+1UinmRTgKfACnuaDLJiEiW9vhPi+kKnDWB/ZeuO1u\\n\",\n       \"mAlAswTGs/CRw2DhtDxweQ8jIr4e+O/vB98ITGvAOYj/BH5DRP5AKZW7Ct8x2gH3eiBRhokMPOek\\n\",\n       \"+Lq4XBkiEo3Bv+yF3TFYaYOJTrCehi+JyB8ppaY228b1XLcum5sBEfzAQ7y9yPg28JmNt8jlUvhF\\n\",\n       \"9u+F3/gItH8Wyu+HsR74F/1QCEATIAiNu2EuCe9zO1G/99Fg23aIjcLy+QvlIGRuAcMPu690+wGR\\n\",\n       \"O26FX3sUkp+DyiOwcxh+U0S6rnTbLjc3IhLphN+5HQ58GFZvBU8T9o/D8H4opODezbbxQlzPyOby\\n\",\n       \"MPCKUqTf5n1/A/wrEbxKUb8GdrmsQ0SMfvjwwzAXxjn+vZAvQXMOtm9z6sMAEIZ6AEyc0sqFS2wv\\n\",\n       \"AcG9EGiHwhRUX1NKFa/N3rhcLkFoS0DjwuVJKIfgsgRDa45+BNp2gW3B2lFgAjAHnXNqNtj6ju2w\\n\",\n       \"pEMqB+8HvnEVd8XlBkdEUhDaA/42yE76IbkTkoOQC0AzAM19UHsGtgjMmpd5/l5LXDGyuXwC+N7b\\n\",\n       \"vak1VTOOo2Z/vOFWuVxIWxt4zwsRAB/U/FBahVQNdNOZmiELZgkqQFlEAsAwiAlqXik1LyKDMPxr\\n\",\n       \"sEsgXoGlnXD0ARH50xuxsNV7mRIsr8KbgpJXIVCA2bf7vCNEYr8Mo/thaxFsgfE74NRzUDjc4VQ1\\n\",\n       \"e4PYGYJVP2yT1uT6BdvrAOkF1QQmXAHrAiCib4Ntvwo7LYhWYX6Xl5/u8VNdzkC0HYqGU26VdlDn\\n\",\n       \"oLsK112lZ1eMbBIiCPBRnA7Gl8NjwKO4YmQzqJZBs3l9XtMAy3Dm+PdXW2IkB+bz0JOGv8XpzPtF\\n\",\n       \"GPGBX8G0iERfga5BeLgA3QXn5jS4BtEOePyDwJtaeIuIF1BKqTc9obtsLDaMn4K1FHSMwrKAmoS2\\n\",\n       \"16BahdcuYxPDjhB55OzrZ86wQPMuODRTusg0eQG8NrxBZDjdViMfhp33wBYFVQ9M1ET0byllnboa\\n\",\n       \"++ry3qM1FWxC76fgg2loO18aIlmjb+sy6f4wtn2a0v5eGuNRSOfAPAeeVXhm0wy/BBsqRkTk94H9\\n\",\n       \"wCvrO/iKyO8CnwQywPeUUr+/kXZcp4wBVaU4c5nvfwzHdfvbG2eSy8VQSmVSIuNHYXA3LIDTymzR\\n\",\n       \"6Tr1138N0QCYBbAzcC4EO2q0/1Ob903AYCtIbB/wg/ugFAB92s8rt3qoRy085TLRKfA9ItJmQ/Ec\\n\",\n       \"1F4DAtD2IdiyDZQSSRyCzONKqYtO/bhcfZRSNRH5+pPw4ZdhTIAcnF2Bx95qHESkDcK7IfEIJMPQ\\n\",\n       \"MMBsiUmPguEmjCeXKJ07Dl07YAmgAsZzcFcViv3wr9pEjqXhSSAB2++D23KwtgOCCQho0BwWkS8r\\n\",\n       \"pS7Z4MwRMgy3wT4NvKvwmg0nXHG78YiIH4ydENsClTUoHr4a3k+nhEDofui7G1QbxEZAPQOUYTUB\\n\",\n       \"9d1V9uQX+ck2H9H8El4tQ3aXQX3yCOQX4StKqbf17F1rNkyMiMitQFApdb+IfFVEblNKHWytVsBv\\n\",\n       
\"K6We3Kjvfw/wMO/My/EKEBdhSCkmN8imGxIRMYHe1stZpdQ7TpFehe88A184B/1xsBdBm4dXy/Df\\n\",\n       \"ys75fF8X/JNtcKtAXogOlDnTW2R2HMKrMLQMW9Pw8t4kk73tGCUTM1OhFlth9hMZAiXFg6cgvRMO\\n\",\n       \"fwA0A+6vw8gMWBoc3w3P94jIf3JvJNcOpVQG+JZzY8EHdIOxRUSCwDml1Bta7IroIzDyq7DLhsUO\\n\",\n       \"8A7ARBIGn4NAqxGZBdjNFfj2U/DFCeiPgpqCPX6ofgmOBKFxBkaeh+FpkkswYEH6HuipQzjjnBPV\\n\",\n       \"QVj7TRH5PaWUEhG/CXsTsLMJpRU4GIKh7fDgqOOqb56Fna/BuIg8jXP9X1RK5a/hIb3hEZE4MAjJ\\n\",\n       \"T8A+D3SXoGA607H6N5SyrrADc/wTcOs+2D8H6SpM7IDZ98GZk1DrENp6QjTCeUSfIxP1oVMmoPLU\\n\",\n       \"tTT8llJq+mrs59VmIz0jd+CUOAd4ArgLOLhu/b8VkQzwO0qpwxtox/XKQzhZMpeFUtgi/B3wEeD/\\n\",\n       \"2TCrbjBE9FHo/wz0eZ0ls3UR/dtKWe8oz14pVRCRP16BHiAEpJVSKwBBkQe2wG/vAyMB+Qnon2Nl\\n\",\n       \"QCNR0on3WMg0PKdD7IhONhKhUjTxVRWgaHb6Uc06lXqR3iyMrEL2NvAkYfRHkDXh+AgUBiASg2xC\\n\",\n       \"JDzuxMkGK1CegNrR9fEDTjCbMeS8aky4sShXhQj0/BpsC0MEONsO0yGRyDhUX4LGT4Ey9H4aHllz\\n\",\n       \"XObhGkz2grkVJraAbwoikzDVhMpppVRORL667AjlkV0Q/DT84rzcDks5GJ6lOWBTDMGIDZGWK95j\\n\",\n       \"QzIHqSEodYtIpgN+Yy90DECmCqnDcFcGEh+CZzyOYMYPjdNEPg/9d0IoB1NxEf8RqH4HmL9R6+OI\\n\",\n       \"SA+kHnLGws7C6k+h/spb7a8jQD1jEOqE0jI0jiulLqySvf79OkQ+BKN3Q2QIvD1QnYboSzCUhp4A\\n\",\n       \"fPfTIvLvlVLNd7kfbbBjL9wz5czymXl4QQO7H2LdoPngZIefeWMrqtJGpFCiYZyhYdrgBc/7RPRD\\n\",\n       \"YI+/1b5sBhspRmLwiyf4HLBz3bo/UEr9HyKyFfgz4P4NtOO6QwQdeAD4Z+/wo48Bv4ErRi4LJ2tl\\n\",\n       \"6xfgI2uQqDhLMz744RdE5CtKqbfLYnoDrQvXG9ybIhIbgUfGQIaduAIqUOnCE5whWvQS81UI9kEb\\n\",\n       \"8MJIkFza4Kidpz8OAU+dfNSklI1il4ocHILODJg6+KOQN+Gle2EoALdUYDIFvZ+DVaBrBdIV8B2H\\n\",\n       \"6QdF5GtKqRWR0P0w9kHY2rrInhGR4N8pVXquZe+bAiNd3hpnqqPzM06pn74ZOHkPbDsAfX7IjkLt\\n\",\n       \"fTD9UVj+E+jyg6cIR7shF4RsCIZC0KnACMOJe2Hme0qpRfjFOTUjIu1beL0dhAXaKYzdZQIjJh6z\\n\",\n       \"wmovDCyCVQLdgqbAItCXhXMJH2zZBx13wi+eehVEj8O+LBxsg7IF8jTRA152Wx4k2UT1wH0a5MZg\\n\",\n       \"+haYf0xEvn+ht+e9joiMwcCXYXcNRiegosOrn4YjcZwH5Tf9Lpybfs+vw44ItFVh1YQTD4vIn51/\\n\",\n       
\"CHkz5m2w+1548Byc2gWjM7AQgkO3wcPPOAK1MwEzXcDMu9ydBLTbr4cbLUQg4oW2AKgEYPjR/T0E\\n\",\n       \"lQ+P1KgG2zALFUqBEvFBgx2fbBAYgdNVEfnPSql3a8dVZyPFSA7nEQIgCmTPr2i5PlFKnXGmNC9O\\n\",\n       \"K7bkPE8ppZ666lZuDvuABaWc+IN3wJPA10XwKUV1A+y6wfDvcKb6zwsRgHgVxhTM7QKevgpfMtQJ\\n\",\n       \"YQW6AhFQecxoikB6num2JjEDwg2wbIiUFPHaANlKgEOncniSJfz+PEnPArFeiNchZ8FqwLnAnOmC\\n\",\n       \"riAMZ2B6AJIeaK9CxIC2DGxtwpEOuHcVnviIiDwBux6Bj85CJgW5URgLQX23iPkDSLVBv18kdQJW\\n\",\n       \"n3SLa102HdDZDoMzMN8FshdCNlRsyPdAxyyU90LjY5DphIP7oFODYgcMhCA5B8sGtB+GO1cgHxOR\\n\",\n       \"5Hkx3ApETExARxIK/ZCZRBtZJLmtjNey6TgBJYHpPqho4JuHKT94T0OxAhQScP8QpBegcxH/tjpa\\n\",\n       \"CBoSoW4uQLwNytMQr9IZ9uBr2GQG4aFxCDShVHLCnmJ3wgvjwPHNPNhXE5HgPdD7O3Bb1LkOTA9B\\n\",\n       \"/CC8bwqWHxCRM5C8E/p3ivTVIPsCFJ+F1KNwvxdGW+JuFGhPweOfAP704t+WvA/2LTpxQShQAn0F\\n\",\n       \"mIrDfLgVtA6tv94lBVhbF/g80wPdSUjUYDIEARWmokxiYlPVGkRZZjbuI6gC6HaGALA9D4MWPPZ5\\n\",\n       \"Efm/lFLWFdhz1dhIMfICTk+Vb+PER3z9/AoRCbfc3m1vZYNS6nc30L7N5CHeRVaMUmREOI4z5fWT\\n\",\n       \"q27VDYcZgeBF4iuCDfBF3rz88hEnb/f+DvhMAw7UIXAMurbDMRvRPaiaRdNsULJBhUBVwciX6Dsx\\n\",\n       \"S237QzRe3ktz9rs0Rldp668zkoZEEmwTlBdeqYP0wO46VE0odDmpwB6BjjpkA9C3CN44RMsQ2Qrl\\n\",\n       \"FRitw3I/NPZCXxG8RZjfA/F/Abu/DT2zcGYIXhgWka++U+/QTYreusEA+S1QD0LNB9uakGhAO2C1\\n\",\n       \"weIdYPth3xokS5DvhJEKzA1A8EkYmHC20RWFEykgLSLDkPpdaB87jZ5KkzuQZPmkh+aOABW9TMkI\\n\",\n       \"otk1trwE8wKrEQjkoeMVJ1vz1TQwbUNlDhmtkhrxkCh5MUo1apFFZjtzlHvjjmrxKnxkyMVteguO\\n\",\n       \"EAEwGoAPRrMwsZ8bRIyIyADseRR8Hhibd8awzYDJ2yHyBMRM6P7ncH8ets5CzQOHH4KDQxDpd+K1\\n\",\n       \"1jO8Ai8NnL9/vfkbtTBEW20C/GchPeJ4On1AzXAEyUIBx6X1rlBKLYjEq/DjT0LKhmwb+FPChCfC\\n\",\n       \"kh7EtD1kBDxYKI+PYKkEAYW/UUH3W3QmIfsANBehexnme4HrohLrhokRpdSrIlIVkZ8CryqlDrZK\\n\",\n       \"KH8Z+PcisgvH1/S/bJQN1zH3AX/xLj/7OE5RJFeMvC3ZszB3D2y/YPmsH7JXFATshX3b4Vd6oXcV\\n\",\n       \"wl2gCtD5KoTDVBcnqGzNMCgQa4DHC1Ud0qZNW/4MybNF8r1JiI3j81cIVWy0GEgVbNuJNQg34WgQ\\n\",\n       
\"ZKcXb1ChouApNAhYUM2AtyWyBOcJTAG6AZpAZp/jRcnFoV6HgAE9y1AeAX0RRpeh1g1rB3A6Q7tw\\n\",\n       \"PtbGP+YUWM1PAJOtKYtlWKrCmt85zmtBOFAFSwdfBQwTegNw/FZofwEWvJCNQ7YHFn1O8cvMGMxN\\n\",\n       \"Qc8K5AWnDk0Ekn/kZ/uWKPEyWOUS4c4Gpf4dZKshPItxYhN9+MpHmd42x8jzcPpW8KdhwQerpyD9\\n\",\n       \"XaWUrYm8chLz10ZIzekYTYA6IitELYEHTMojWbBWWA6W2TrhCPXzFELgnXbiUDTvZhz7jSG+F8Yq\\n\",\n       \"MFuCohdiNSerKS6QboeVHtizCNtbHkJPA+6eguWtsBKAwhKE62/MwJZf/PVmqqfg7BYYWYGeMzDZ\\n\",\n       \"5hQ7PROFs/tgpQjLf3AlnggR320wGoN6GpYTkAtBNtJNs96J6THxSI6geFmTJpqZJaw38eglNHJ0\\n\",\n       \"LkOyBt46ZLqc6gTXTxX2DU3tXZ/O23r95da//3Qjv/d6plVf5C7g3R6DJ3Bqk/xvV82oG5dxOHkO\\n\",\n       \"AgMwugKi4FQHnJoG+/SVbDgJ7+uBvm3Q6IbjEzCUgOUiJI9BY45GCXxhCJmAghUTvE2YuK9BLTdD\\n\",\n       \"Z2wGyo54aOYglIGGB7xV2FqAxXaDSSPEaV8nw/kaZqVEPZilEquxosM9x5z6WLWCc3PMnoT8Cryy\\n\",\n       \"H3YNQaDiXGfmA45nJnEcFuJOFoZuQ3cWQkNX5SjfAIiYe2H007DdAtOCqfth/LiI/H9KqYaI/rfw\\n\",\n       \"D1+AnipUNSj6oFKDqG5SSipKugezGqSWylPI1SgNQp8FWQvQIbsTFrvhmUlYfBaYBeOLQWIjXXQt\\n\",\n       \"amgKwEOlEcHXZ2NWdNomPPgrAINU/SvkQ3XqT8Hk14Dy+t44ChbPkpirUY62I6qG8ixh9YcZOa44\\n\",\n       \"5dlK+eUiBBdY6bJJZRxPzqAHSiFYsaB3Ag6lIP2OH3Ja6cOj0H47eAKw9hpUD21+gKQRAF8D2sdh\\n\",\n       \"/DbYV3e8Izow3Q5VCwbm3viZugeiAzDXAc/uAt8CDL4Gw2k4m4TMzKWzj1Z/Ai9uA6sD+tbAOAHP\\n\",\n       \"doO/AH0ZGMzAxKdFTE2p2qvvdG+clN6+R+DhKSguwultYEV8pK0ONJ+XoLJoSgilVhERBI1KPYtm\\n\",\n       \"rBJWdXr8oHdAtRPsohMHw3XT8NEtenbtGQFKSjH3tu+8OC8AO0SIK0XmKtp1w6GUaorIN+CZO+Do\\n\",\n       \"bYBA5nEo/+wqpMcOJkEPQT4ExOH4AoTKUFyG0026UhCvwUIMfBYM1sHjg/xOGJ6FWBXyOswagAlr\\n\",\n       \"JdjZiiEqe6BUj8HWA6ycWaSZgJiWZdZvojSLULTJwT2QWQLPEfiJ3sHaQCfsyzM/kqGZzOJZhs6q\\n\",\n       \"E9dQAuomUGvVbgMyQahPXOExuCEQkTAM/xJ8dNF5EgbYAZi74PnjwKtKWSdE5KswfwCCt0OiDbps\\n\",\n       \"nVxUkbe8ZPMhqismxUKT0PYaQwoGZuCHg1CP+iloMWp+k6ZZpG4W4DkL444gZuO8EAEQmnoEj61B\\n\",\n       \"vclKQJHSDMyKgRKDM711cn+mlLpYrFm5TujMOe7PnWMlJpQ6+8AK4c1VsQKDMKWDqtHM/Yij4xaR\\n\",\n       
\"EKzd6cQgmQswtRPO/Awal1PM7QLCH4SxB2FHDnx1OPsROLy/FVS9iYJk9Tic2wX3T8FLAXhqG8QE\\n\",\n       \"pmMw8zSoBcgNQue6InNnb3OmR+98HvwjoKfg1Adg4jDMLsPydy/1bc4UivwhrHwJwh+BUgr6YjA0\\n\",\n       \"BYEAxGZhLAt/+0siMvEuUqqjEDdhOQKzd8GwBUNFD/8gTdKaQc72IVoFj3gIqQLY81BWbDXrRKpQ\\n\",\n       \"E0h7wKjDmRSUD8H1017EFSPXnru4glK8SlET4TngfcB3rppVNyhKqSpOoOrTTgOy2K0Q/pSIOWVQ\\n\",\n       \"19qcaFZrFV5qwmuX60KtwWLRKVwHgAlWL+RPgNeAWgUtACPL0Cw6sQOVMORM6NGdKpwdraDaI+1w\\n\",\n       \"OgiNPmjPQkOHMyaY54LY226BpR1kVg5T2dlFfMFGb55mMTVPZh7ypyH3zR545BGnWV91jo7JIqpy\\n\",\n       \"hp91LrF9Cvqn4fQgnBmAnsegqcNUHxzcCrlTIhI/H1B+EzMAQ/rrQuQ829bg9H7gVXBuNsB3RYJ5\\n\",\n       \"WPs8qC0BKoNemgZko14KHiEftPAEhZ664lAcCAXxWMO010wahsaK0UMzvkzlt6aozesUlEVN1zEt\\n\",\n       \"AI1AI0tV66aWHab03ALlkSqeRJG6Vqf+LaAkkvqs4+VbOQKcVkrZSqm8SPIYLI3B/mkPJ2NesvUi\\n\",\n       \"c7GtZI/prdTeKNS9EK0wOAf9fwXlJJSDsKpB8fg7rcHjxP2N3QcfmAKjJXQ7iyAD8NQ+4LkrGZgr\\n\",\n       \"wz4Bx6bBMwDD81BIw9FemPwOFL8O9MKRMegwnSmctSgUtkBjEXadAzUDy12QGIAXViD3f799Cf7A\\n\",\n       \"MGxPQNcrMPlx2N+EYBegIJeCxkHn9z8xCBx5hztUdorznt0LB8qgMDjcL5Q8BootKGlH2SVqzKM4\\n\",\n       \"hthBRnMNYok6/Tk4ZcJEwIk9C8yDlmnVp0lAaJ9jZ2EWyoeUUtm3teYq44qRa8/dXHl7+fNxI64Y\\n\",\n       \"uUxEjF0w9jnYVQdfPcjTnx9hydxH7UkPNE7CZ4/CqIj81eWkvq7Bd1+DD6UgGAMrC+Gj0DlJiCId\\n\",\n       \"eaiY8FwnDOiQbDjBhw0DUjqEgqCvOV6K7SswbcCiCd/vgWAOuo5BNlDCLs1DyoNIgKTtI1hqoIwA\\n\",\n       \"sgwPPQGnwvBichv4klCYhM4mdihB9HQfRTIU9DqaDmYWXq3BnALtY2AFoP0obLsFjt/SKsR0uZWA\\n\",\n       \"b0SE1s36jWg26+IDRCQKdAN4ODxlEd+q8KaCNK0eqpl+AvkF6tEKZ7x5PBVohGBbrY0l8eHRbZQu\\n\",\n       \"9GAx3x7D9iyQLxisFWqc8uv0NwXdyjGXTFM1vAR8E1gHOikd9cDkEag1CAvs/BKMtJ6oT++DUwdF\\n\",\n       \"5DvOObv2DPxkBI7f2SAUWGW+J0lhfg07/gxs74T0PPgqtG2Be2da5cNb2SIFL/zVgyLy83foNeyB\\n\",\n       \"AV4XIucZTEN8F5soRpRSdRH5c3hmLxzbCyoDy98H+/jrKdXGX0L+E44gKbSDVoEHXnCm6rCgfwoi\\n\",\n       \"i/CaoVT2LYWI07ph4GF4YBZeuAfaxOmXGbKdhKe+cZjYBeoEl4w7eUtMmKuBvQNOphOcHuxCjzQQ\\n\",\n       
\"+oEaSA7BoGkZ2JqNqVWxQxYLApk4RDTYtQbNEJyIQOWQiNwNfR+D/XVIlGB1BI7c1+qV9U6zPa8I\\n\",\n       \"V4xce+7ikqlhl80TvPuYk5sOpwLrwC/BI0vOE9CRnmE80klPpcp01yjNo12QL8Pul+FnwLmLbCOM\\n\",\n       \"M8XWj5O2PjkJ/28V+Z+DGIkqmiwSDtXoaNj402AvQK0PxtGYbyQ546ni16r0WopGsEmh3bnIrXmd\\n\",\n       \"aZlQENp1KHXD5GiM5aUOmJ+E3TFQNna+Qi2QoelfIXQS+jOQD4D01pEdPye0TfAE6pSiNbyDfrRJ\\n\",\n       \"k8piHe9rUGqD9J9CaQfsN+HeU87UEcBAAP7bZ1qFmG7Wyq5TMKlgr/HG7KvxJKSfBhAJPgBb368R\\n\",\n       \"GwlT6NNZDTQpxjvQzE6MchKlIngaIczVBRqJJXJpiKdAs00aXhvNFAJ1DU/Jxh/woed06EiT/VEP\\n\",\n       \"Zx5ULPlXoatOl1bltidPUp+aZ2YY1h6okfurAjwLY78CH1zX52ZLGqzb4ODJpMiBfvSPe9Da1kgH\\n\",\n       \"sqS9eYIxYVuygbKyLLPKUn4OJiF01hEiC2EY3waVTtCroOWAqIjkgCEgCKzgVC2+lEBvwMWcKVUD\\n\",\n       \"7E0vP9Dy9Pys9eci6xtHReQUzLQDKdj36TdO2wCshqF+9mKfd4oqtj8IRjsks2C0wXITyvtgyAMr\\n\",\n       \"EQg1oC7AWadp5oTJBdcYJ5iZLTgiZWq9t9KJyQm9HwY/B8n94OnycXKgm5ovjJIqNj78YlFgCdu2\\n\",\n       \"8Gg1YraHiC1sLddYMkH3OJV7vRnwlSHSDp33QGgUfDoUTzrTxP1ZiCeg8lHgT97lYX9XuGLkGiJC\\n\",\n       \"FOdHfugKN/UaEBVhUKk33zhvNlp1GiJApTUtcyFd0G04QgSCzHUmMKpevJUcZh80j2rAADSPO2Lj\\n\",\n       \"3Bu3b+yCwS9B5z4I+2BNIHO2QXp8irZjBmqgQWUnbPfAliJoKajPw5EuKAbaKWm9+CrnyItFXdnU\\n\",\n       \"DMGONjAyzjTNgOk8gXWehELCx6FhE2+jg8njNZqFcdSDipUhH2Y+jydvES1BzgcZDYx9M4Qe2E6y\\n\",\n       \"YSLNBqVimYo/Q2NXmdAqvBKC0y+COumUp77zzOtCBJybUk8CZru5TlL8NhInCJAQUFJK1QGcKQ7f\\n\",\n       \"9+GxT8BY0wk0nvbDyVPQOCwiI3Drh2C3lWSm20fMyjNiengl2Is/pxB/lka0SsaroWeDmIt+FoIa\\n\",\n       \"VV+DQCBHzgzTXjfQchaNkE3NX0G1l2lXDYovTFB/zEv+n9foj9qMzYLodXz2Kvd9Dxai8MOaE3Q4\\n\",\n       \"VH5j8oORGQzxAAAgAElEQVQGDFUiHPrHe7AiGh3+ObyaQShikEw20Cyb5YCP8ILGVjsH1k6WZl8k\\n\",\n       \"3V3jXBrO3AOjCjqKUPJCeQxWPgKxDhiOQVQ58Y1TJ1sew4vFF5yFiRqMBaG95CxqChyPwdLbdiS/\\n\",\n       \"HmiJ8DkRmYfZ2+DVPtg975QOWgnAK0FYeZOHR8S/H/b8CuxPQ2oVJrvh4Ifg1boTa2JUYK0Gkzoo\\n\",\n       \"A5ZG4HQB5r6/PvjYCZ4e/mXYqjkB7ZNqfaFCkF2w5Z/BwHYYVFAt+chHgqD5MOwGFZpYdhil5VFa\\n\",\n       
\"mYRtYTbzVGolyhmnyFu/D/wLkFiBQ/3Q3nTaU5QM2DUHrw3Bq1U4cBoG1iDSLyJBpVTpWo2DK0au\\n\",\n       \"LQeAV5Tiip5AW6Xhn8Cp3/K1q2LZexTnh9z7YYj6oaJE4i9B9h8uuHC2Klaef+GtNbF0ha0JWHXw\\n\",\n       \"6GDXQGviFJMTkRDQCZiw9bOwqx8GixBbdLb1syQE+6Hpa7BnFV5THpIVWAs28fqg/yyMjsMLQ0ns\\n\",\n       \"vIGsdSPxSY70K/o9YHhgdQtIDSJF6DgNnoZGM2jSU62TC84RGdzF2hELtnix+vLoq+1EFsLk2sf5\\n\",\n       \"649XyaWhYyhPuJImbwQQvwKjSqW+RLnhJVeqUPsznHbztkjfJQ/jRo3P9YLzdBm4G/ofcgrKFZoi\\n\",\n       \"kWeg8LRSylKq+pKIzMD0LvAGYO00MK6UskRSt0NXSmdpjx9POI+Keog1LKKqTs4IECiXiRabWHYE\\n\",\n       \"/6Rien8KZipYVolFslSUh2kjRCQWQJRC5WZI5hp0NCDzqEX5RxWGcpA6A/1rTv2PuVGYa0DXJHg6\\n\",\n       \"wJp06opYAucSwlpE4a/CYiJCcyiPkcxQjVTo9gpdEiBRqdEMCPFqhsl6F/3H6lTjEZbKcQrNRV44\\n\",\n       \"AHcJ9OWcGlzlIGw/BNkvwL4X4ZZ1wvTZMXj+7lY/GxOonfeUKKUqIvJN+MEXYWvSWT2FEyD6emn7\\n\",\n       \"9wKt+Im/hGcehWO3OPVBMjlY+MaF/VxamS0fggfnIFprzfINwq1lmOuBvhUgAilgXIe6BY1OyE7D\\n\",\n       \"eiEiSdj2Kfj4EoRa16zdHvjhR0Rkymlol/gsDLVDW6Pl0QrYKK1AjSw5rU6FMnVuQbMEpZXxFLIE\\n\",\n       \"V7Kk1qD3R5C7BWJRWCvC7CrU4rBrHmYSLSuA7Vl4eis0W31zrF/89U4QkW6BbgUNnOvO28TYvI4r\\n\",\n       \"Rq4tVxS8egFP4sSN3LRiRETfDrs/C/cvQiINNR1evhNe8gF/ve6t8zCXP18Fscrw3CxzwwbpjhCl\\n\",\n       \"+gR8pAr6Ucg0YNpxyQ8/DN0Cc93QvgW8liNEwEkPHKjDfA+0Bz2cNdvJBxKcVaAkQ8WzxPyOJjtO\\n\",\n       \"GhQDJtWmTj0IXrMTtVrgRLBOuJLBVhCugG8Fym1QbdcodFvURdGoZ7C8r8LWrRCqovzd5HvrlDos\\n\",\n       \"lL4Fu2HimV2m3F7E1E+S1PxYmgbNBj1anXTVx0SlrNS6plzZl+DE/XD7umJOKwGYq3AZKX4tD1Q/\\n\",\n       \"4MXpY/Ie6iDsPwC7Pwb3zkKgAfPtcOiLcLRHRP5SKdVslWh/Q0EqEfFB4g6QHUKyCiVDYYVsECj7\\n\",\n       \"cmT0OFVvk4LewJPNkh5YRQsa7MvGMCyN6a4oTd2kpFnkfHNo5Sw75+vsn4JjAeg2Apz8lzHOBX1M\\n\",\n       \"xstM5HL0LVe4dRYWR2BuDWqzUBiHEw8HOPlAF8VkCq1RoGnMszDWCc12iOo0fKusqjS6JcTKAuIh\\n\",\n       \"ZjXw+UusxIsU26bw9Gk0s5BVkO+As3FHfxuTkJyFrr1gXHATumUBXv0UxG9zigVWiyK+H0PtZeVw\\n\",\n       \"VkT+A0xvwTk35i5dMv36pJWe3A2EYe3HsPYkzv1x5RLl8WMQjUA2DGk/WBZ42mDwFJzrAJqw0nDS\\n\",\n       
\"6dssGJmFnA23/xwOfUZEryplnQDfdsc7FVr38ORvwva6U6KfWTDHIJkHCTupxzNdwgqQJYUiik0Z\\n\",\n       \"OIqtp52c3UqR2087mVIzoxA9B7N3wMDzoOcg39VqtFgFzzJMd4O/Bg2fUw/pbBvkjl3Cy3yp46dF\\n\",\n       \"4WO74cCgMz+njUNdF/mW9YZr0KVxxci15W7gD6/Stp4Efk/EedS6Stt8j9H+ENyefr3cu2nBndMw\\n\",\n       \"tVdEnlzXdsASkf8Kf/8lGI1DyJ6mmVcsDd0Ciw2wVpwLei4Lv55jLOJkB/gsOGiCsQ2qHa1mvYCt\\n\",\n       \"QSMG1ZSXfLyLpWYbutcRQyHVgQ9YGZjj5a4I9dUY5UYcO9aNXZumaKRIvjJPxczQdwKWdkM2AknD\\n\",\n       \"QEVC+KtCIaTTMDsotRXAlwZ7FKw4LK5ipcYgPw6+WfSOXmzvPOlEGUP5ERSa1QQbkkaJlQsKWBWf\\n\",\n       \"g5e2OU33eipO0OIJu/Xk95beOicTqesfwUDUeWKcRST4hFKlq1FS/13Tuolsg9Q+p2DXyuFWgGJj\\n\",\n       \"3Xt06HsI7poDQ8Hpe8Bsg70Cxf8B1lIi8hegdUByJ9hNSB8G6dFJ/Hc23tsVpVgTvZanogtSt/lZ\\n\",\n       \"W4Kyt4K3OkHN046lFSi3z2ApD9snY8RUnXP9u2mW4oSLSzRSFmg+YpTJtNU5tgr1mQ4W93Rhd+/B\\n\",\n       \"N1WiYqyQbV+jmphgsbdMfAnmm2DMQN/2IKfv7YP+MP5yGbGgFLody7TRghECqonHGydQb7BipkmY\\n\",\n       \"DRJlm6Y0aJgr2FsVKW2V6OAy6SzkJiFyGiKWk9nlr8H0IOSjMNMHwarjNQGY6IbuffDgGUewrPnh\\n\",\n       \"xU/BazqtWAylVIX3aOVWJx4s9QXo7XNu+KvDIBXgJORfFpHHLzJd0QGVO0GrgSSgnIKyD2QcrDSk\\n\",\n       \"JiEB1Ech1YTYFDTK0L/s9ChafBg4AZoJVhQmw6A1IL4I0aIzVWgEnK+qp6EUhJIfzmwxOW30UiKG\\n\",\n       \"zQgKDaGOwgMUERYJzzoF7AJVaCbAmICXF6Dshe4gzEWcINbgMTD6oNzlbLtowbc/DvlZsLMixj3Q\\n\",\n       \"fPlyRIkGYzvgzvfDufOZW9vB/z34goj8u9b58Za4YuQaIYKG08n4S1dje0pxToQisAsnhuQmxOiA\\n\",\n       \"zgue6DUgaeH0Q/pFEJhSakZEfh9mR8Ab8lGdHoEZ02n2Yd8N6RDUsxgfz9F15PW4irY0zBYhmoK1\\n\",\n       \"GPhqwvwozCY8ZD0+yipO1aehdJs2BR4RmkTxacuUfHXagqdINjrJGu2oagq9eJh89xI7noH3H4Y/\\n\",\n       \"7IL5EIjfwO9tUkEx6YlRtrxIMIR4YygtCbk0pDzgr0PQxEMD3aOj6UHKWhc5/PQom7pWYk1N0oiU\\n\",\n       \"8Ovrj4xSqiwifwyr2yA6CJUsVI+9XRqf45Lu+VV4BOhveVVqOjz+IRFZVEqdusKBvALCj8LYPU6/\\n\",\n       \"DY8Fk9vh6LiIfHOdIPFBwA/RNJzZDbEEdLbK4PdaMNgOP//XMJaB4Tw0NZ2nP9dBdkc/kekVqsEZ\\n\",\n       \"JuN1Rq0sOhrjZhTd66HfMtA8RdLWDIuNCI1jJXx2hDalqKQ8FIwE3oaNRAwCykO6aVL3pNBSkM8o\\n\",\n       
\"Ao0k1WA3dr1ANZbH5/cTL0fRfAma/jIrfdC7BPeeifD3946x1rmVat1PwcpB6AQEeiGzgDdQx1IB\\n\",\n       \"sItUPQEsbZ41zcIKZkDpFP1+hpYC1FcKhDNNDjwHjw3D4R740DHH6ze5C9J7nHYEA71wthcWjsOt\\n\",\n       \"4zB9O2w7BcnWDSVahduWYP79IvLyu+1Ae/3Q9sswuhPqPVC+BQZqECpAcgpy++DFDhH5k/MeEuf3\\n\",\n       \"0PsxSC3A0gGnR1RH1kkDnhyE8jlY0p3KvH4djAKkBZKHnSmRzgJ4+hyRHNoJS7c5XX1tDRZ3QPEV\\n\",\n       \"OGc4dVIAKn8PP/8/oSsFvkCUBmEUA0AMwUYoIpjYhAjZGmbQ5lgYjpmwoEPxKKx9BZZTEP4oRHIQ\\n\",\n       \"TkH9YfDUoPccTIUhuKxRuyWGPxVBH6xgfjpL6ZiI/K/rY1wuRgpuH4OMvi4rTXfmF4cX4B+JyJNc\\n\",\n       \"JDFgPa4YuXaMAWmlWL6K23wSJ27kJhUjjQWna2Xvuh+KJZDWWdeY8Tytm/GpKNWHIvDJglNTfXEY\\n\",\n       \"joZaxX9iaF5orosS7M/AzFmY74CVLWG0sEkhbDBnmWQCGppE8YkQpEaGOiXAryCLQdzWiNsG8cVl\\n\",\n       \"TnQbYMBSWwU9IyiBE51QPQ7mksb4dh+lfg+NkBd/yYNHa2AHB9GtKpYmqEAIrBLiz2LQJGRp+DQN\\n\",\n       \"H100qFOlQUGgIWEsqwtLy1PZf5Fj0ACOtf5cLoMwGH5diIDjhdqVhZk7gWsmRkTEDMI9MbjThqSP\\n\",\n       \"9oEq+5+DjtaTa38W1Ag8s53XfxdVKFWcJ/r6AKRa50tNc4rCtdVheBfc9W0IVWA+3Emzv58O04NO\\n\",\n       \"haDZQU8zS85boIiGoSWINiLQ0NDKAdolR8XfTdlco2mkKQXzmEYYsFD+OrpWp0kXygqi2QX81Tj+\\n\",\n       \"YJaFPXkavjqReoFQD3i1JhXLwKx6qVehewm2KJjq7CaXCtEIedANA6vUBcY8GFmI6XhUk0CxSilY\\n\",\n       \"IBNsElUBYjUb2y5z1mjiXa2jHy3TOWPRNe3U5rt1Hv6+6Ljso93QvAu8ORg4CHYbjFXhyB540oSS\\n\",\n       \"BoNHwUbn4JYYE6NeGt4yRW/ZaUtxpaUKNg2nxkb/HWAOQn8IUhWncNupKJy7Ax7+Biz1w8oAcD6j\\n\",\n       \"phvauiHe53gUTmvgizqCYxUYLEFuDipjMBeCHa/B0IuQaF2TFiLOtUvbBXu7wXgNzg3AQBmiGrzy\\n\",\n       \"EIz/BdCa3ihNgnghXjFZiASoEcKDhkKniY7jq3QCv/y24GvCljocPwvz/xWkDIk7IWfAcAo+9EOY\\n\",\n       \"G4L8h8E24OUU7PihcPKhQUy7G1NLYi5Y2DLN0q5xar8F/Ju3Oo4amMa6GJMZZ2f2JiC8H4p1GDoD\\n\",\n       \"r7zVU48rRq4dVzNe5DxPAl8E/uNV3u57hOUn4ee/7pR8bitDxQMHe2Dp4MWe9sWJFvvcARiOOPWe\\n\",\n       \"OxoQOgr3euGpTihWaKxBxf/6pzTgwBH480CcXF8SvRnEW1U0rUE8bWUs/wqWN4GoMGGVo4qNr5kl\\n\",\n       \"pDUZUAbL6PiaNUy9RNZXA49BrdHJaw/Oc0zqyAQUum3aZ/2cEYv2oE1fo0jRBDyrrCiBagbLC5pW\\n\",\n       
\"xBYPZjOHEgNThKal49FsTCIYdgMNH9VmgpqKU+rSRe6xlLrSWg8+JwHlQkI18FxRw8F3Qmv8Pr8f\\n\",\n       \"Rm6BhUWIncFIjvPje9f48FPns6VgSx5O/MJj6EzT+Z6A538Fej1O/ZCqDkfjED0KxU7oqDm9ZgpB\\n\",\n       \"jRO3tEFYI8AyhUEvPUYQA5Nko0rRGyEmDdB07JKGFLzgt0g2l1iK9dL8qSI9tEZnb5WAb4ayP4g0\\n\",\n       \"TEr1TozsAvVYCV1FKWttVLVJwtgEPQY+y0PEtqhrVZaDRew8dGfAD5zuNiiONQiFSii9iS9gUq53\\n\",\n       \"0CytQtjAVm0EawUaoSCR2gKNZoBFS8cqBfDNrRFpq7HjGcdt//rRBFbg3CqE74fbs05sSDkKixak\\n\",\n       \"005c98EpJ15Bq3t4cesI47cMEMx58VYKLCUy8ElTpFRT6vC1Og+uMj4wtsCuAuQSzrXEb8HOIjze\\n\",\n       \"AQUTOsCZcznbmhbsAHsvpDRoX3D6SVV0iPihUHa6NU93QfsZCC07TS4XxiD6ImT88POkU++k/R7o\\n\",\n       \"Auyo02fm552gz0H1BGReWudx2g5bVvzkhuPU8eKhgWINjaSzA9jYlIAMtrIoNuCED46fhM67YGfQ\\n\",\n       \"6ep77oATc5JdcLyEnnPQnYN0L5weDZDr7qA7q1EyAHQ01UdifpnsAyLylbcKRl2DI2fh0S4oFCFQ\\n\",\n       \"gj2DkJsH9sHpJJQfh/1vdQN0xci1YyPEyI+BPxbBoxTvcVfpO0cpa1zE+CasfhgifVBpwtrTULhU\\n\",\n       \"f43eAdh6B0yvQWHFeV3cCvoZ2LIGy4vYh2DSA+2dsGXVSaF9fkRYXtuJVRiG/BIDexbIDXrQzDi6\\n\",\n       \"toCuFihJAp+qobFCgyVSyiKvNNY8eZo9RcpRizZVpVMzWY5XKabAZ0OqAxomlMYW8etRUrYXo6Fj\\n\",\n       \"aAZ9jXkMvYeZ6jk01QnNVZQnT1Mv4pUIHtbI6xaoECZNllEIFkUBVe9xvEOfEpEXLhGEd7kswowG\\n\",\n       \"B8RJdzzPdBxyz1zOBlrBr6NtsKMJ9SwcUUqde4d2DA7ByN2tFOQ1aMQxSsM0zTzH+5rc3SrcVtfB\\n\",\n       \"uqD4Re0gHNNgfhiWukEKkHgN9k/AT1pz/7kUVG4Bf0IRCDaphOso5cNb1tErFpWQh2ogQFgpDKtE\\n\",\n       \"TQLUu2pglWhKGbvWjs9QZAQqngzm/8/ee8fIlp7pfb/3O7FyV3V1TjenuZMzJ5AccpbLsCvSpCVt\\n\",\n       \"kAWvBFuwDQkLGDAM2IYA+Q8DtoSVdyUIsCBrd21Ju7LWXIpLcmkOORqGyfnO3Bz6dqyurpxOnfT5\\n\",\n       \"j68v53J2MufO7lJ8gAa6uqtOVZ3vhDc87/O0rtDzFoj1PJG+gmTX8HYnqZ7x6Lghbl7I7NtGW1XC\\n\",\n       \"yCZNU2yrDWqEF8IQ8AVqJzvkpwuU1Q47UiLEJWv36LqXIPYZ2kOa1R6xP4AYFsdT2KMYqx0Qu23a\\n\",\n       \"lYQzD5vsfKILXR9O3Q+LczB3CFQRRj4E28Zl1ilBM4TKcyDPQzOFH3y+ws7xA+SbNkJAozxPcPom\\n\",\n       \"2NyBR0XklfciFvgXEG2IM5BrmerZuGwIpIkDuZHZV3UBa0Vk5nMwfxdgw2AZmiPIJzDdg2IM5z3w\\n\",\n       
\"+qAV3PI43HXVtDOfugmeexCenoXROWj+HqQDGH0RztwBVQuW+2Ybm8dhIwN4e9ojY2BGaM7ksXM2\\n\",\n       \"FgEeXRRtEgI0VTQN4BJOWiduQ/E1GAWQL8HHkzfch9MRZHJw+mZ44FnjELB2HAoFKGUSpLxFx1+G\\n\",\n       \"y+AAoBBypuKRx8i/viXG8OJLcFsCKwXI2ZB7AVQOXp8yBzJHofPzYOQvBu4HfufD3KDW1EW4AtzN\\n\",\n       \"hx/o/LlARGyMXnJBG87HlXeSaNc6ek1ETsNmDgjehYhZmd3raVagHcJTl8wJ5J+HQwN4fhe+CmsW\\n\",\n       \"9B8C73bQx2G2q3kgc5HNAx1qm1XqmQS3nFAMLRIp4iRtfOsifekzkQ7IaOjaUCBgpRCwNQF5xhRE\\n\",\n       \"MUhsjmhNnQka9pBRYUhmlGXNdlmwMwQ6Yej2UGmAnRSYD5s0rApxcAEZXCTnd7g5dNHKoekoJmTI\\n\",\n       \"jqzQpIgd23QkwQ4CVFwiGbbg5g58XESeei8ksrfex7ouUnoSvvsA3FyHTARXJuGFIfSffbfXi4hd\\n\",\n       \"hl85AccPQC8C+wzclxf5Tl/r777Xz2HB1IKZQwVgEna2aFFgOsyyO92FC2b89UzeEFB/4jto4GkR\\n\",\n       \"+Xtg/R24Y88s8Ow0XBCjIKofgDs2UiaDJhulBXYcH1UY040c3HGPoTtJtjOi5ZWYsvu4qod2YmJ3\\n\",\n       \"SCt1yGbanLgrR7Ee0GdAU2qEF9vo+S1UfpLcsIDf9wj9Md2wjq7CyGqS0ztEiaKZ0eSigMrQIzvo\\n\",\n       \"cWnKEFj9I3WKVpG+zjBNQKQjurJG7CQk5Bl3zjKe6FDC5oBt0Z0IqBU8KCdUGop2ADuLMPw8pDuw\\n\",\n       \"edSIYB1W4OZhbhVqx+Dcfph8HcpdMyl2vgHD88AWvDiZRd2bkCUhZJLBuQMkZy3QWVjCJOgf6Pj6\\n\",\n       \"KCEiOXBOQG4K+lvAGQhehnMnjE7K7hSQg3EK413YLsL5GbjnIageNXySeg7OR7BrGWXVkTY/uyNw\\n\",\n       \"FLRH8MAel+38NNTvg+kpkHkYHYb0IXB8mK2CnwdvaKZyggQKFtTuhsXfAnkV0jmo9oWNWRAnIERI\\n\",\n       \"iLBw8agTsUZMD802hQTuXDNGf6ddmFiAlW+88e3dGuT2QzptWpT1CPblQBKobKeEKwFjp8GGKmOK\\n\",\n       \"wyO6xYBwAyP0eP1+VMDhKbhVmw7RKzvwL5+Am/LwuWVonoQXDhvRvGt4x2D1hgYjhjDIncALb3bw\\n\",\n       \"3St3vQj8ttb6Z3o8VYQJzAl7I7gd13gjPxPByCz81wdhugq6BlwyYkS//04lwr2s/72MmY42IROA\\n\",\n       \"5UMyC7Up+PYpWOnAmZbWv7eXwR8QWhua4kF45DTctgHt4pjMdJ9KIeFlO8JOXyHJOCgHvDRHKZkm\\n\",\n       \"VG02YsNavzmGgjJeMPcgdIAkTdFqxKvuUSxyWDImR8CW32IsEZ6GSW3hxAVia5fESmhainFYQg93\\n\",\n       \"yTgBRwLFpIa2PYPNBFOEKJpc1h1i24PIRUbzjIMOWNMgR+AXTsOte8ZlH3Act/sNeHoDLn3MTBy0\\n\",\n       \"12H4no5nBcdPwvFPXUdg2w9WCI+IyKn3+gkSGHSu00TJw2iR7guvEj80ZiGC50xSx+pjwKW32obW\\n\",\n       
\"et1clzp3Q3Z/jsbCfqJOFrXUYryvzvpUm8MbW4zGMfFugdiJOO3VWLFSvMEsuc1L1JbqDCeKzKUW\\n\",\n       \"aRhSdwV/oKlqHys/wsoHzPbhodfhm0fGlHsNuv0JxrZg+XW2btLoZBLllMhQwB6uM7auYqUxTlZo\\n\",\n       \"ZFOa+y3i0GY0GrOUHbHCWZpSYIgiJwEzRPRwGMoWvYxidqTJeIoNew5LJshbOeJxn42pHsPamOU/\\n\",\n       \"MSJn9RXjEH13BJZAtwrShdw2rC/Ba9NQGBu58LXvG50LEJFvCtx1iGGrAEPP6EjQAS8wme/78rT5\\n\",\n       \"84CIzMDS34ITGdO22PHgtR7svgBn5ve+0prxqOlapoD62g4s+HAEyCZQ7RtdkfE8RJuw1YUnfJM7\\n\",\n       \"6QJcqcPNZ4xWzMiCU1+C2/KQsQ1npKYhPQkkUE1hEujmoKehE8Mogf0JTOTg8oNw1yXFqbuX6MpB\\n\",\n       \"UorkGGNzgQEdNC4T9IEu3RTuHYA1B/Rh/5bhoWwtwcoVswdmLhlfqmEFagVQPthb0B0BnZDMDwLi\\n\",\n       \"W1uks0M67YS+vUNr3Eb/4fWJjBhfhC8eg7uPQFeA83DbGXihBf+uBZen4TeXoHn9/r8AE++0Pjcs\\n\",\n       \"GBGRO4Cc1vphEfmnInKX1vq5657yS8AO7xIt/YzgXuD5G9RKeQz4b4H/+QZs+yPHw1A8cZ0S6Iuw\\n\",\n       \"8B/gUeD//aDbNIFv7kFY/uTryE11Brcfo/PCx4gutCBzBnQbvisihWn4zw7C/ARYz5B9YEC8EdDt\\n\",\n       \"wNpck92FPuNpKOVssmHEnC242Ixw2Q41TjAm6xuR3eUh7JRhMgVHaUoorgAz2sdRDqGGFCFhBksg\\n\",\n       \"lXW2xKWoYWRrlFYkEtDSLZRuU3T72J7CGwvttMKGWyEXWQTaRqksRSKaqYbdAnEyhE4edn04fQes\\n\",\n       \"deDWOtnfFMk89l4maN6MvYDvRRFpLsGvrhjZ2JWr8IW8yP/X1/qJt3ttFW45+KbMyoPkIOizsPI+\\n\",\n       \"lJUunIf+PphY2SMoW+j2BoMrYy6+COcuAc9ord+RJK61bgDfyoh87F5YehDOXkDpXTLJNH3neZ6Z\\n\",\n       \"Djhc2+Zma4fNnM9Ze57TL9TJ3LaKP69BaVQ9ZlNsnLFQ8EMyXoZursVqachU2OeWSyAWTFSEuU6E\\n\",\n       \"nz1Ps1ugVVGIexBn7DFxNiVc6hDPjclhM21ZFAJFlgFNT+H1xmzNQyCwSUSGJovAEooeNldRFNOA\\n\",\n       \"rOthh3mazFDTBUQNsSUiyeSxuku4g9MMspr9ocuum2EwkaMepVi6w8gb4a3AvmdhVcPGFgQuXPxW\\n\",\n       \"jnE0LfJX2ybCO9WEH7wOD9y7V64fgf00zO/C137KNuCHBnOuqxMw/RDYEzC8AM3va61rMPMleCSF\\n\",\n       \"/Xtti2IBGo/C7Ajowuv7wKlD+ALsnDUBOIvwwFcgmTIVQQAngul1E7itDI1fTD6A+lW4+QJcPgE/\\n\",\n       \"DA03vjBpbrOlBBoa1hbgNgWhhkCZqpwvhvyaJqAci57rsX1QGErKN+cUYW4JrElSFAkOcByPc2i2\\n\",\n       \"yRKS02MKEcx1QK/D0iqkAjszcOX+N4KR/BDS1+FKwbSCChqO/xBuu8qec3SbK2eHnHnkNLVWAtst\\n\",\n       
\"+FoAu1MiX04gbJkgf2USPn87nJo1luDsg0YCdzwDL2qtL2RFvvUn8NmjEDqQXIHsWXjHxONGVkbu\\n\",\n       \"Bb699/t3MG2K64ORXwH+Df8RqD8C93HjKhdPAH8gQlZr/hztuj8cHIPa9Y9PwtYrcJuI/MnbyFG/\\n\",\n       \"B2Tvh1s+Bw+uh/CtOmfu7HDh/jOcnxbi1zfh97XW62WRv3ESuX0SJ2oROha2V2S0HPHMoQQKcEcY\\n\",\n       \"4rThQjlEeUBqcyIIGFpDzmVhVWDBgbyA9sBWRqXe8AYVY5WnRwEfYVcUA1Js1ScQi5D9NPSANSVU\\n\",\n       \"tcWYMXWKhExRZQzRBWI7pmdrcvYkapxH9R1U2KdXyZJpBnjZC8S5VVxXozIlwtbtRI2vU3q4z74J\\n\",\n       \"m8JKzIQNr39aRH5Pa/2WfhtvBxHxl+HXP2fMTBqYb2Z9Gz4rIpta67c020uNIpR6898T87/3fBPT\\n\",\n       \"Wo9F5F9+C/76LCxFULzI5IkRizVQD0Hvs7D7sIj8Q3PzeWdMwv0noaaAMnGtQXA0Jhe5TC8G3NWD\\n\",\n       \"0iBlYjyk1L3K8zfZzI888r6FF6RMhAlXsjkygyHjXpWtlSq262DpkMha4+VjdYZti7hoo1xNwbKZ\\n\",\n       \"1i22cvO85M0hic+w2mVYuIiXahbdDBARekOKVsRELLwybzGrNMf2dpKP6VyeIgVcptF0VIqnXba8\\n\",\n       \"ZZSsUIgsHBkRyiotNEfX8mxbObbnBnhTBwiyHvn1XYazFkXJYccxq9WIaN5MgtibsDo4wHjhVpgq\\n\",\n       \"QrAJN5+CB7fgd58HrsB9e3Pz6S58O3gb35c/H+QehBOfg9saMNEznIhnbjLKqscXYf9VU63cmoNX\\n\",\n       \"fgFusmE4gGPfgehpeOIYvPQ9CB8HjkPmi3DqAZhrQVqA4t7UViOCxgh29hkfKvUqfPZl6Duw/glY\\n\",\n       \"/VVz/qsJ6AhkGoYzv+gZA72LtvGruTk1p0ZX4FRGkTDFscQj1QmRO6bl2JxVDh42DikxWXxyQJaY\\n\",\n       \"LDFDBqRU26D7UK6bwEJpOHIFvrsE7hJMBiY3eb0Ljd8CdkH+GxD/WiBikPZC0v9jHf4VkFTgP70D\\n\",\n       \"fvlMF/0AACAASURBVHE/DFZhvg63lmA8Y1qts1fh9XvgvAIOmJ7eMeDCUOsfiMjlVThpgd8yarwX\\n\",\n       \"gP/r7VbuRgYjE7xRKu0AN137h4j8AvA45nr0HwNv5X4+PLGzn4DW9ER4GXgA4+b7lxpvvmNZkFrm\\n\",\n       \"z9ZbPf/dYDgoi58RljyP8/ekOKOQ5dMhyy/36U3D+j/WWgcisjyD/5/bTA/PkMw1yC6OIavxhpp+\\n\",\n       \"1pBZ51uwUYJyBBUXVhW0BTwb5lO46ICnoWbDjG8mFiIMGewyOSwgJqbJBD18Ino0ZZssGWJsInFY\\n\",\n       \"JWSdSUIpE9MmR0TOzUK0TEKD1qhPktMkaOLMkJHv0NdCJg1wnJDSUHM4jSioBoPlx7nwn/gsdieZ\\n\",\n       \"Xa+jBe5ah8U8fO0rIvKP9gTh9iYEuDZFlMVkv2tvyngPHoLM7HV9YB+Sm6C7BndhLjZ/Brvw4jnD\\n\",\n       \"rm9e0yEYgHMB0G/TTnk7aK23ReQfb8ECzPxXcMurkByFgxHku3D+PnjxH4jI/7SnqPq2SKD4LN70\\n\",\n       
\"Jtl9CYmXpylD0lmHA9rIo7czEK1Bth4y82DI7DjA7ziMCppkasSMNWQjKdONllA6pm+BTZ403Yf2\\n\",\n       \"A84uj0gSqGiL2simmVa5ksvhJIqRaxHNZInyfbJ2QsiABMhiUY6h6VoUEeb3liYmZYSRW/cAF4WX\\n\",\n       \"htSVoi5FBnpERlrE7hRae8QyTaq32Vh0iVohYyfHenUadw2s/hUuTSQcdhTlWAgFnpuEq/8com8v\\n\",\n       \"wJc/D7XJPQ7IfmjmYOFx+FhL66+LyOMYQmP3/ah03mgYPsiBR+GRNUNCBThWM15RjY+DaDP6f/Eu\\n\",\n       \"iPbBdBGWRnB2Hi7fDkefgfsvwNqdEB6GpV+GpRzE89Bagdf1nrBhH64ehANdaA8BGxqfhq9PmcmY\\n\",\n       \"43kopBD7cNkBncKlGdDaBCg9TEHhuDa7cV1ACyyJww4OKtGM7YQYh0AgZpOIHFkcAlxkr0aSkhKQ\\n\",\n       \"pT8KiT2T+GSvO1+HAuMX4Ltfg8Is9LYhev2agJuI/DF87zfglpxx7N0pwCsR1L9fhE9k4NMTcLIC\\n\",\n       \"zwdGRe3wL8H6edg3Z/p7rSfhpqch8SFsQjHCVP9FpApku6ZSsnPdGr3t+t3IQKCD2TtgBKiuLwv/\\n\",\n       \"LYz416+80wZE5O9f9/BxrfXjH+Ln+0hwndjZ37yBb3ONN/KXPhjZgsLcdfyPVaj0jIvlByXHTXno\\n\",\n       \"+2fphlnsUUxYbHJhpUn1tKY0gvUMEEDlc4osQ5xCl9xShQPjLP20wUZR8Czw8lAfQsMGP4bUEcS2\\n\",\n       \"uZJxmZKYUZoyUNNcUFCiydNOwAwxI0DjskaWLAlbZNhimYgLuDSZYoBHEWGKiAl6QCBXsXCYIGJa\\n\",\n       \"drG0zyhfpBUPsNbGbLs1UidH2xZsbVEadoimtnF0l0rOx0l9JB6R9UaUKpqJfoO+PWJpr5Uy24e5\\n\",\n       \"JdicE5EyFL4CCzmf/v5pWlNzDFcT9OpV2NoTD7vWYvEyb7GDs2ZesPAOa3DuDDw1hnsOQByBnANZ\\n\",\n       \"hT/WWrfe6QL1VtjzELFgScPgADzQe8Nt9+QmhIswfhT4/WuvMc7NTGF8VeoikrEoLwRM3w0lS1Px\\n\",\n       \"23SLEZd8ze4uTNagdBG2y3D11ywWJj2qPU235DC2DIlQrIT+pCK0wSKLrSPyhCSe0PJyJDpmZpjw\\n\",\n       \"4kSBdlhlkK/Qc2qM3G1COYToTfJqzAIhywywNOxamgsiVLBQQAFBoXCJiEh/HIy0GbImc2zpaXoy\\n\",\n       \"RyoxKQ0S8YnTCjE5YuXSrIxRhWUuSp8RQ/ITLfRMiePNhMgKuVwU4s2Q5SehN4CdwTz4XUhWYSoD\\n\",\n       \"4QGoH4Xac4b/93Wt9RDevgpr1kadgOm7jT5G62UIXvywApc98uSt4B0E2YXgnNZ606zvLG8EItew\\n\",\n       \"vwGFBWhuwqkjMDEHpY4hq44VlGqQzsPuJBTaEN0Kxxw4qQ25N9iBnYNwqgivzBuX7SNNU1QrWOCV\\n\",\n       \"YFZg9ZPQs2BxA6IpKI8gG8I5H9y9gzxJoalMQTCrIdUm7h9pKGuLoUoZJyldt0xNVQCNRcqQFl2q\\n\",\n       \"QIeEBj5NFB0SMiyPFZO9lKwHL98OVg8SG2p9E4xEbWhe1Fr/BIdDa31VRH4bardDfg66z8Lw4gL8\\n\",\n       
\"9bshb8GkA/4afGUH4gOmh1UoQVCHXAHsEOaGMLUMOwMo+dAuicycMOTWdBdUVeRMA/7du13Db2Qw\\n\",\n       \"8iTwXwL/FnOj/D+v+98R4KvAAqbN932t9bk3b0Br/fdv4Of7qHAEaGnNu5aNfwo8BvxvN3D7Hxm+\\n\",\n       \"C6VbIV+Bfh0KL0Fag29+0O3l4dYSI7uANCyseERcTWDBon44prcOuf8gIk/CoYUBfm2X3Yds5h0Q\\n\",\n       \"x6dAnhF9OklC3YJBYk70xgoUbEDHFFVMKBYX5BD9NIsaa2K3TESTVTZwSemTJWSClH30KBNzGmGL\\n\",\n       \"GULKeOyyQIEcFjGKIm2WyfE6iiJTpHjSZQQMrCFjzyKtjcj451H5DEnBpZ/vMkoz2PExlMDp7JCx\\n\",\n       \"XaAchQxUi4zVJRtrNvdBeR3Kgcn8K/8AZj4GM65Ql1nizi0cODWmXl1i+/IBqHwPvgz8i73duXkV\\n\",\n       \"5A5DhvhxafcqTLTgqbdbg73g4WsvwQtnYX8K0RjO7XE3PihsiLNQtd8IRACs1CiE5o6IiLWnMXIX\\n\",\n       \"rHwWZhwYKpGZNbAvJKxIj6KXZSlr4ziKxWhE0UtYzUFkwziEwX059jsRg8hnYhBjTViMMnlGWhFG\\n\",\n       \"CTVtgZuhnEzh9yFJWuhMi8QTBrjUUp/1eBnLtRCrQqQWiTmP4hlEQpaxKDHARjMl5pL4qtLskjLA\\n\",\n       \"Y0RMB8gh+AhdNLVUsS5lQjlAIDnQPonkCMhicYW+8kAP0NLHTifwIwerWEHGTeL8EM/xSRMHaRao\\n\",\n       \"vLaJikyFaX0f7BQacPM2OFXMbfIJMwLyLO/BOM1U2Up/BU7eDceaxufm0hfgpdtE5F9orX8qoquI\\n\",\n       \"LEDlf4DFm2EugXEEW+dF8v8WeB36f6YdaMbz0y5sfxWe+V/hTh8qEax6YA9hZQOGGehMw45jiL3V\\n\",\n       \"HJT3kiJ/BIunYGseBjsmaCvsgPoYOAtwa2i0SfIedF04vQQ3KbALxt+qDVzcq8qUlKmeBgKbYqa/\\n\",\n       \"+omR31dWQuClKMthXU3j4KPRxNgouqRcAVxstkgZU8XmcBqSZjRqFbJDUAuwWoW5i5CdhpXPmMmh\\n\",\n       \"bl2k/By0v369Yu7eOfida4/zIo/cAYXbYP1VM2418zC0vweTUxCUIbNmyCbPPwnHJ2EiY6IrTsKT\\n\",\n       \"E4Y5e8eX4esKE3I9B0d/BL/Iu/D+blgworV+UUQCEXkCU6p5TkT+d63139Va3w4gIn8TsN4qEPkZ\\n\",\n       \"wv28w4X6Q8JTwDERylq/IYH+lxHn4Xd24K4MzA3g1R489+aI/v2gBDcfpfPCZS7fNGY20yF3WKGU\\n\",\n       \"xeVsgt/VHPsinHUhcbpktM14wifGRsYxqfTxsXGDmLbWWD64FaEkmqtAS2LmRaiRZ1MK2DpGWZq+\\n\",\n       \"bRNTxaLDiC534fEUJ+hRROMjlHBZp8wYYRqLBIsYjZAhYICDMCYliweAIp/u4EtIZ6pAcWOKud0m\\n\",\n       \"u8E2nu1hyxTKm8fVFmOZwgZUcoEgzmKnDqsOZByX3O09LlTrzJxLaN4G97pQcWC563J6MsCer9Nr\\n\",\n       \"TFLe3aFz6DZGj5+C/SIyqbVuaK23J0Se+S7cdwIaPkSXYfIlaAbw0jutw95o7drez1tCRDJ7z30v\\n\",\n       
\"VbBNqIdQetM1rFUAtQppCKQichBu/TJ8egMKe5yjM9Pw2H1gVRNmRz0GZYVKUspD2NczwlSbxwDb\\n\",\n       \"olJW6NSjOxyxVbJR2QxiFUnSlJ30AEl3nYzXo6dKWJaHlWRBrxImwthaYCwJtjdmIvWZkIC6rpDK\\n\",\n       \"UVL5IRk8JgnRCDFvtPSywJAYjUsdlxJj2pjWXBNIWKYgOXpUcIG+hMTkSCkgFFDsoPQYO1nEi1zs\\n\",\n       \"cICXUyx2Ha4kI6KxR7EV0cu00XaPueeN6miifLhlGoq3QCO3R3bahtwP4BMt+EfvYV0WYN+d8KnL\\n\",\n       \"prtay5ub7+ydsPsahuP2gWDaMDN/D44egYevGMPKkQuXlsD6JXj9HGyvwelpOL7XGogFXpqB+h9p\\n\",\n       \"rbdE8v8P1D4PwdhUVbYXoeRC4MHaBFzcAbkMcutPzlYowB2BakISQXsOrCKsNCEpmsmZ2IJ8CgPP\\n\",\n       \"FAB9TIvG0Yakugx09saAI4GzGqZT48YLsK5C2gS4UiEhj01EFyHARxEjXEWxnxLTKC6SRTHLkB4a\\n\",\n       \"FUM/gtkBhB5sZOHkBuyvw8VpOPgyPHUPPN0C/oyflIgcrMIdRfiCZ85ne2xkZcWFdBbCDXBXTLSX\\n\",\n       \"y8HVTZj1IYnh1BKcyRnyzdGxUQbOVIzVBnfA+lm4XUT+9J3W94byNd48zqu1/rtvevy7N/L9/4Lg\\n\",\n       \"RpJXAdCasQg/Aj7BTzF18hcBe/3Fb7zrE9/r9iA4SLI14rz/Mo2/5pB3ynTVLBIpvIkrDD+5jVep\\n\",\n       \"sObMMDqmcZNtav4Q144pBC65ep6IEWezsFO2KeYVRa3ppTG+MpmqTQaLCNsCx4KiNreXAuZ6PqKH\\n\",\n       \"w1U0B4iJgRoRI1I8CkQ0sXEZEZKSYKEBjxifATExmgYJwpAF+pkpMgsBa0mDQabKIWyCJM9Q9em6\\n\",\n       \"wsiOUGkOW9t43jpxdJJIK1I3otyfZ5izeeET2yxOe+gwJHKEvmuh8zbLgx1O7Z/C3oqwcgrIsPdh\\n\",\n       \"9tCBf/8kXLkE9ynw2/D4wEywvNlI7H1hSuTXD8LRvd/P7cI33qlyYqzrvX8D9kk4M2ssAXo+tEaG\\n\",\n       \"WNh60lRkZj4Gt3XeCEQAjuzAE4/A4GCRc4VJLMdB6zaXnSbTvZjyBqxuQb0oHAoq5GohmeMttnxN\\n\",\n       \"XzloEpqyzBAXrYpY+gqJ7DDOWETeyNyYVBXiGbAttNWkbzVJUQh5EBdXazQBWkb4KKZJyGpDanSB\\n\",\n       \"o2guEnGOiC1ismhcoIDCxccnJSAlwcbDQugyJCChjsUYS46RkYDU0SSBkNchth6inISGvcupxSEz\\n\",\n       \"nZTCNlQ34Dt3wriSpXR/hkSfYriwSNr2IR6DHRmVrrdNGvd4RwuC+kVtkuhtmxeX57h4Yg5ICP1t\\n\",\n       \"rN/MiAxGWj//wY4S+yhML8DBgQlEwFg1TWehWYGrR2HnD+B7vwbnl6GYwpaC7e9D9LyZSGUbun34\\n\",\n       \"xHnTzrl4AU4fNlWS+tdh+Bhkb4XGAWjMwOLeMTi0YVtBqwHDPtQ+BZVpM6GuR9DOAyPoZ4zo2Y4N\\n\",\n       \"pRj6tvEPrGgz4VLBVEQOa9gQw+sghi0POkpzqNvicr5Chz4pAQFLQI6EESAkxAwYU2CMJkbSGFeD\\n\",\n       
\"N4a4CEsvwvYUeC4c3TLzIb5jjPbu2IQLD+11In7MLcmJfPxW+OUqLNXh1jrkLpsPrnvgrxv2rtMH\\n\",\n       \"6+tQycJuAcp9yAFnjsPzlnHqda0957/oOo6fbbRoBBOdvf3qfrCD4ud4H7gf+Ocfwftc4438pQ5G\\n\",\n       \"3g0ikofSw8byAKD7HHSfeKuboYiI4Ay/SvGvjrGXNEl+ho4+Rj7UKDK4jRKxD+37DxF/XdO5RVNU\\n\",\n       \"GUZRjQ27R9HN4rh92gls/8DlyIpHf0nYZ48oFmyqqcZWCSF9asQM8Uiw2JUGFj0ihuRQbFMBhmTY\\n\",\n       \"ZIAP2CS4XGVEnwYRM/QpIkAfF80mMKaIjas7KK1Z14fpMoUkDtodou0sEZuspw7aOkCo84TKQbND\\n\",\n       \"qtqAx4gctnKZ7IU0UWyOfZrhPsgEeJkSqdRoZyJyCrxUM8jEKCKCfI6w1gGvbjx7fkxY3buIvbz3\\n\",\n       \"86HhU7Dv6F7V5Bys/AD+toj89h4/4fo1zWCu6P296ut/B4P/EeZmIN+AwRXYeBX6e8qwziSU3lRp\\n\",\n       \"Wa1ANV9iwznATOowHwtWXKLnZLhQWCW/AZVNn3N4nK+OmViOqNgFDgaaZ61FOqpMgEVsp6Rln0jl\\n\",\n       \"cfU62XQfYlVAlujgg6yi7DmcZBqlnqcnARNpA6XaBNJCE9ImpkJKB8iLoLRmKC45HDJUccgxZJ02\\n\",\n       \"XebxyTJmpPqElPBoU2caB41HQsgIHxeHPOgeIxGs1MNXAYkecWlmiaHcQmbjLPWZbcaLPQqf0Zy5\\n\",\n       \"HSSCz/6h4nv358n2xrQmt9nqVUjWM0aRy+Nt2jQiYpXgr+yDO13SqVUaxwf86MAUa85JSusWKg0Z\\n\",\n       \"FJZg9zR8UUSufLAWnTdhqizXT38AeBFIFsTSWrdF5J/CzgKmxFQHulD8LMzeZ2hDaxPwh4/CymVI\\n\",\n       \"NGxsQO2faD16fu/7nIUXt6G+DCu3GiHRTWD7ZeisGepT+BzsPAqXi5CbBhWC3YNhAuuYMeCBMgFM\\n\",\n       \"RUPWgoGGIDXCaK6GldhwhIc1GMyYKp8egpUbETEg5RhCEU0IeAh5FDVgQEhAThvV6SgxxNhQwcYk\\n\",\n       \"1DWUh+b+H7jGiTmdg5k1sH1MoHDN9K98ED5fgE8lJqpTu6YyVnahUYTWszDvA/fAiyHo12DfizAd\\n\",\n       \"w3dvN74KKYAH4RCiDtiTeyO/AE3ItMzjdzTb+3kwcgMhQgEjOvFR+DY8xjuMTf0swJAQZ34D7qnC\\n\",\n       \"iW2TtJ++H548URA5V4JjKQwb8FQIL0HuY5qbDzVxsj72pKKlMjRVB1QJ6cYkJRuiWWIX8GcYNQSR\\n\",\n       \"PjqXo+6cwfH6eH4HvQbRUxade12KXkA9oylLSrjXn47oUaFGj0MM2KFAzBKaDHkcAobEtBkBNVwy\\n\",\n       \"JBTR7KfLCE0Tj3N0KBOTY0wXTZ0BIZoe2xIxYpItNUEuHGG5PRylABcHRYtpfCCyCjhJjwQHnz4W\\n\",\n       \"dfObilH5gFwohNUSUsqg8xZtVWJ/DHGww24mYCJosmXP0AuHDMse4YXHYH4b/vCaqq2I+BiVpsH7\\n\",\n       \"1Sl5NxznDQPJY7DTgKUanGBPDmBPK+aTsP9hqCroKpHyGZgswb4L4O1Cw4OdPjT//RvchP5F2LwD\\n\",\n       
\"KiOoZ6FehKv7wGcOte6ytS9hnBeyCSRJhX5SZ2d2gtGXDpJccmjvdkj21Rl6PfxRhEQD+l6VhAJI\\n\",\n       \"F1t5eIQkaoqOFJlKHSLJI+Sw01kUTex0HqFMYq0zsHcROmTI4gA1QkI0CbCBJtQwKy5dUurYZPFw\\n\",\n       \"2U+D8+wQkUWYokWEyxrrhHRQuMRsoxgS61l8mvQEknQaiFHSYVstEKgZshtruI5mMZ0iHyeo3AB7\\n\",\n       \"QTFIHF753JB8u80gV6FSH9ObmKVzZQzsmCGEt+S9KThxEu7+FFzuwnaL2qxHNDlDOqlgAxKV0vYW\\n\",\n       \"iC4lkFk1FbAPYK432IBxG9YrhltzjR7S9WCjAb2xiBzCTIGtX3uVSP5huO1BONCCzUVYseDSArx+\\n\",\n       \"CDJt6F+GyBERF7J/G1a+DJOekXw/lUDuNExdgf1j2Pg8rLxqpuriEM64cCIxx2RYgKEGdwid1Bgz\\n\",\n       \"nggh75qpu+kUagJXUuOKQGgqMtYkVB2j2rw9DZPWkAYhHfpERGgiBLBYxOIFPEbkMdWhVy1wY6gc\\n\",\n       \"APrw5EmwXoWugrUZ8x3yIeSPwtlboPvidedzxoUvhvDXIqjuM30e1TGzylKCyiqoKqRlaDowCMFb\\n\",\n       \"gScEdn4I3/kRPDKEuVnoNCH/mpEMttagPAvdHcg/D+Ud+Nd7k3tvu7o/D0ZuLO4BXtImsL3ReAmY\\n\",\n       \"FmFBazY+gvf7c4A6Biem4c6rb/zteN3j0peW2L1rDhoKxj3Ydxr2d5k8BkdDRaNoU9lSDGcUWS8h\\n\",\n       \"SIY4ToYwP2QQ+ow7GvJl6HWIymO85AoFv08JwU8Vw4U8+m8UqZPDi7Y5k3WYFJsKMWOGdAk5QkCH\\n\",\n       \"i/ToMYmQwSOz177pE1HFYagLaBmRUiCnQxIJGKczFBKfntPGw6VK4cfZcIcuc6mgFfhJgOfCvNK4\\n\",\n       \"pAyIKBFxQXkM0x5K24RWQAmHMj5jYJ4RPbZpqBy27ZIqi/JEwDhToBO3uJwpsBhU6AVb7LoBV1WL\\n\",\n       \"ZsslTi/BwTr8UQgvvSEat+8RmLJMIFB9DRp//ObKxYeFKQgKMP/GX7y74ZZH4eNXwU8MGfB7v2SU\\n\",\n       \"Mr/wbX4slHd6Br77JX5Muu08Cc/d4XDhgRk61Um06tJe2sbyFSKK2VATBSmdjGbg2nScKdLyncio\\n\",\n       \"BIdaJElKW9sMkhH9bEAxXMez8wytCG25KFqMdIhKFxESxjjoOMC2NJZy0cmYVA/JJl0iZ4OczqFl\\n\",\n       \"hSp9AoRlXHK0SUnQmGz6KiFTZKkyQjNilzKKMoodUiwuk1BmFXAoIijGTKKYwWHIDuuhj8sKw+EZ\\n\",\n       \"LHeMHjkMveNYY0HFdfwFoew6uFGRKE3IamHfEJ5fzlFQp2hPLdLFx+Zp1GCb9Oo6fAemf1VkaQ7G\\n\",\n       \"G1B/Qmu9CjANdx/bIzFOwPhOtp/5Ef1ftvG8AXrBod+bpntqChrbsKCua/u9T1yErVMgVXhqCpbG\\n\",\n       \"0CvAqRw0evDQp02CcjUUsf5A6+ScOXYXH4KShtWPmwms7SNwrAqDCE5ehv40PPu/wNpjcPDLcJsL\\n\",\n       \"VtnwSJrAxWPgXYKZOrQ/AxeOw4Rt+Ju2BZsKrmpj1mkFkOmYSs0gZ2wGaokpCsSW8bKxBbZSo7y6\\n\",\n       
\"oMEpmsDqfKzAdZiVEUMCOgiGsFrCRhCGJLiMGLMCbAnkemaEOGhCqQn7ijBagfMjCGbg5hbMnTPn\\n\",\n       \"SUdgIiMii8DONPzGMtzrG0vfcAjuGjhzkK4Zx754T6XNTaBwEUpleOmA0cKfLoB9Ef5JA+7JwkoA\\n\",\n       \"rzXNoIrzTfi4C4sR1GrwR2+nQXQ9fh6M3Fg8CPzgo3gjrUlEeBx4hOtGGn+2UF6B+Z8ouSuuHsnh\\n\",\n       \"LWZw8/sIsz2whhBVIN8lM4QoJxQSF783Yio/pGFlCVTMyBrQzboEF7bRYQWidTKLQik5TVzsUMmE\\n\",\n       \"zCQOlaFHlIb4s0XO79TZXvYpKdPr7TBFTIYCp3mFhJg+VWAZizIhARFdLYSUsaWITY5YT6O1SyQ9\\n\",\n       \"XC1mvE9NkUUokKWKpsiYXXLUGLKrFIvpgKYKsdUbVuEuIQFDCoxIyOCkTUaWIktEaihtuARMskOf\\n\",\n       \"Bfq4lLtd2pld/PQofqfJqt+lZgmWlWOYTjC6+Ovo37aACNSfwm3Pw4bGGcFNn4dPXjV99hR44Tg8\\n\",\n       \"qYD/+0asdAv8wU9k4tVPwN3bJhAB05efy5rBnmbGVD4Ajtbg5f0iUtJad7TWu5bID49i3zlnOg3W\\n\",\n       \"EuPTeTJ3N8iWi0x1bbxmSuzHrE+O6FgJKtomm3QJfZfQA8edJY1iQhUxznaIaZCKxtWKFJ8cN6F0\\n\",\n       \"i45ShHpMRQb0xMLRHQIJ0HSI7E1KpKzIPNvYTDIiwKWEh0fEmC6RhkQETRHIkENhkWDTokdMgM3k\\n\",\n       \"3pTNZSaxEMp0KTBiCpuBdlgeDwhUkWY9T7QOUdKl+zvT8Is+8W1D9ExC1fLJBimpK6AUBBovTVET\\n\",\n       \"HrbqcSD7Gl43w2uZhIliQCsPU4/CfbvGFK62CM/9FyLW72qdnBNwr7ePPw47Y/rf2qb/hUl4fdmY\\n\",\n       \"BY1S4BI4w/epLXMNWutYRH4Pxquw9Rl4eQaCy5DLwq+8CuW9algzA9/4NRH5LaAPVg4aJ+Fw1uj1\\n\",\n       \"+XNwMIZtD3aqcMdFM9rb/jsw7UDXgZEHOQsOJKCroG+FM4dgMAlHxpBzDEViWsMhDc8JHBwaLZHv\\n\",\n       \"zIMXGw2RjmXGbI8lIMrwRhIFT3kwKYZTtqQho102XZebRUgRCnSBDBofB4XgAC0UBYRdHIzD8CiG\\n\",\n       \"ox2461VwI3NubszB1RKkz8DVAC7nwWvAwXNG/6h+k6JbOQHztrEl1gFYTbAdw9LVWWNiRAgIbFpg\\n\",\n       \"rcAT1T2J9y7YI2hrrevAn7zFcl15v+v782DkxuJh4B9+hO93jTfyMxqMDFtmdO4NeNRvrTCWw8SN\\n\",\n       \"ZegCVCHbhgPQWYVBKaGQpCSOx/TuFjt2SitbReKUUX8LGV/G3qihF2cplALGOYc0ux9JQ9ppjdbE\\n\",\n       \"iEMDiwPuJer7C8xYM/gaRCI8etToM8aiQJ8xDkXYK69buIzxxcFiRJ8hETVyUkFJkYQRCSmx0ojk\\n\",\n       \"UGgsQDOgRwsLzQEsPBIyohHp0CLHiBSHgIQaMQNm9TZbMktMHqGIT48AIUsGTR+HAQXGjGKbbGgT\\n\",\n       \"BPvxIps0nSIfHaE+PA0TFWNUcSUx4kb6PM7NWfzDebKHezgueB14ch7SDGRqcPAizBy/Nmnz067s\\n\",\n       
\"JZg8sKfqehkqr5lxh9fgmm7FviJUrquIpXv30XwKfe+NYESx5zjqXHvmDBy7k/i5EfHNOUO6K/Vw\\n\",\n       \"vDrK2aLmT1K0NGGuQTdtU3QyzLgBrrVDaCe05BA9XcQFhjpPX8ZETJHIPpwUFCGxSsnIRYQcnmTx\\n\",\n       \"aNLVQ0JrG2UliLUGaYP9aYVVlSNmTA9FHrUnhOej6KJwcYhwETK4FFAoIiw6OHtHUMaENGgcyiSU\\n\",\n       \"ydLF5jwDVoi44iXYaUwcnYb+EP4ZyKtNso+eJdx3gHyoGSmNqJBYArzUxR8PWS0KytZYGRtXByT+\\n\",\n       \"gCO7UBDYPQzOGWNsDXCgAdkxtD8vIud9ePkSfGH2OlfXQ7D9LFxVMMqZHlrmLJQumRHh69bx/WGv\\n\",\n       \"EvdN4Jum6pF/BO596I1ABMyxcExg/bjWox+JVIYwswDrk0ZDMSemOFN1YGM/hOuQi8GdhEJkEvqu\\n\",\n       \"ZYZBGgI5Bb0S5BagWYA4BxOR6Vxoy/A1ysDTM4YfksO0cKy+IZUeSKE4NHI8oQ1Vz1RwDmLGiFsC\\n\",\n       \"dZ1nLEMG5EjJUCdGs44wS4yDzQioo3CwCAiA/sBwi5caRqL+2vG/sAWZGcMDKl8Fz1ZkSMk4hrNi\\n\",\n       \"ewV4wILbIzNOF27CRBV0ybBxaZnIiSkYvwRT+2F1/14gsg6lMxDEcPaDruNb4efByA2CCC6mTfMB\\n\",\n       \"eqMfGI8B/70IovXPoufP6BS8/BVwZqDQh1LNYjCRYTtaIv0xOWraUN+PlqhHmtcX+xye6VFyPfQ4\\n\",\n       \"xWGHylaNnarGt4XZWYdJf5edlSZFOUzNqpLRwqT4JK5PqK+ylg04SELeUXhaUBSxdUAiNUpoMtKG\\n\",\n       \"SwAAIABJREFUmqREaFZIKAGX6LNJhgXGKCCgT40Rlgg2FjC1l/MOiCXGZpOYDCAMqJMScYgURYRG\\n\",\n       \"YYnDLD3GXCGiQA8Ln0kmKDCUDVIdk+gSwjpDYjxciliEe14WDR1wshHzsdeEs1PC07fYpL0xycw5\\n\",\n       \"3GkojiIiHAonnma3NEl8qsPcvgz5hgMCaRX8g0bMaWoTdpbh1UXwzmPSv586GPlTaJaNmaS0YL0G\\n\",\n       \"X71mjmh6zXObsF4yUzNg+uRJE+pzcPI68nItD40O15l0WVDows37zSxmYwD+NjIo42ZPU2nUSRON\\n\",\n       \"thTiT3EAD5GU1J4ko7pk2WRVIPV2sbVHUSIcrhIQ0JeD+NohpkNHDXH1NqLKtHRKhS4WYyZI8SRi\\n\",\n       \"zZ5nnQJClgSPLruEaKoIiogYi0ggROPSp0RMl5QWYyxCSmgExWWqhP8/e3caY0t63of991ad/Zze\\n\",\n       \"l7tvs3JmuIyGQ4qiRJqmaNqOZdmxkQSJAyNCYhsxDCUfEiTwh0RBPgRBYlgO4h3whsRW4gRW7DiS\\n\",\n       \"aS2kFlIkh8sMh5w7c/e9b+999qWq3nyoc8Wr0XCROAtl6AEafft0n6q6XdVVz/t//ouqdSNTXHcC\\n\",\n       \"mzJXXdFVjS2N/LZ4L+PPlb+BJ/5iZnn6DZdn94wXZ7LqyNV6tBqbjo0Tk8Wm64H7aaHdyVQKfuAm\\n\",\n       \"jdD00nM1h9OMzsDJezw+JzMf77N4hrud8Tw+PuPcOY7G1F6hfcBP32P/Cs9G8m3+OS7OZd7fc5Vq\\n\",\n       
\"qfUFWm+Q1N3KqM2N+A52ufdUSX5tRO42SknwMyMqHa69j2/UWU3Kj0qVlYTNwNcTDlPiu2gXZBW6\\n\",\n       \"kfv10o11W0mKHeMCmqE0H28lfGm5DMBrVThsMwhlGvCxOJcKh2A9lg3Jl0PVWHBRzZIzBno6pUez\\n\",\n       \"XKEiqmqquaRAb8KHv8IX30e/W+KlUdls3Wovu7V8XHy64eAH+mI6sraXOba/71ZasZ/X+VMt1tYY\\n\",\n       \"fp3iaaaNchyTB/Id0ozRLo9ss7dLvMkTCaP7HNzlH36vCrrX1+83I29dPYfLMXpTiX7foV5TXpWP\\n\",\n       \"+zYyvN+LVa6OVz5Bq8rVR1io001z18JZ3e3GQzDxDicy2j8m/7XPuPuJscmpaKEyLZcwo57hiURn\\n\",\n       \"0vahvUQUbS8lkk5bvZqqGxqEch67GBvaYUk3HTnMghiTUoKbdCQyRUw0w1iUe1RiXWJVYVXmK3q+\\n\",\n       \"oGnmuD3nsK9pqu/Ioq8bW5GpmbplYmzJI2oGxkaOq9k0NZsjI32FJbkVy7pOSGXGxqbqqta18sw4\\n\",\n       \"FCrFgWpStSpRUZPHUkY4GtTIMvdPRtX+jvf/X1233ldx8/Syc7f6OpOqUf209cORyqn7tlrLli/e\\n\",\n       \"N1jq2bhL/z08NSRbKg0Zzx9hiU+d9bpkzt9tbcX4t+fSy/DG5NjtT/G5nyjFaaePSij++iG3Mi5v\\n\",\n       \"lOODgxYvVtn6Rw9LF4/YHpc5K7f2WdznTDRoDdQrDbXm0PplLm+sSCupJNTlsUc4RFXVgVrct2/m\\n\",\n       \"dBKtyWUCcdsgDtwKpySidixZH43irnqSOaFcJY8CRag7oeG6dRuGdi3rW9O1I4jW9NTV9EV3JTYU\\n\",\n       \"Cl23NdUkEtGGuuNSd7RkclOJkQtyLW2ZVBTCLYdxZGWwKD2xbzZi8c9S+xDJsZkTC9s6aaKRc5Rw\\n\",\n       \"FKO79WWhWDLsHTMYH0rrmceGM+P6cderm6qxolJPjdItn/2RG6afi57ZKh/E4wLTeWbQ3/sVnlkr\\n\",\n       \"mZnDvdJb6kH20UtvxjXy+pqH4l3m9gfKW97DdbtO93oI4Rwn/1LZKGxiWC3Jr/uBFxOyjO4KaYfj\\n\",\n       \"Wdlk5JFmQhLKhIT7Rfkc3qmUlvJpUjYgmRIM+kZ5lekohSoV5CmtUN6OFyP1pOSt3KmUVIyKiiKk\\n\",\n       \"+iFXk2kpHFi2L3HPWE2NOQ4atUwdKVwX7GtE4jaVHTYu8XKNyslUdTnRa3V8buNxo8OOvJ6otk9r\\n\",\n       \"H92yvXxfb7Lu1sk6Ty6X/JB0kUGFeI3ZKeTlYuCwS+8Yqx3CY2R9lr7B4f2Si3XxrQhG/P1m5K2r\\n\",\n       \"j/oeDH5+NxWjGMJvjmr+jWpGSvLqU8/xia9SvMjtZWZpdPP9EwaXyj8cOeEmx2u8Ug49VqpPye53\\n\",\n       \"HNhTWRzLu0ca50ZOazg/gMxGLfqNauFWvW6cnJDExFEYGCZbKgrNrHBHSuypJX1ThUyVMNY1siBY\\n\",\n       \"w0iQzYctx9Td9C4zZ4z1NPTkms5raYjaBqI9WzJ3TVRc0hdMZIKpsVyQGKpqChKFAwvGJhr2FYYO\\n\",\n       \"1azEtiTfVdTWHNOU27etL2JkIstqGnf4SjWxt1h4/NdpDCYOT04cvz3ww99gfVTx0vHbrh5b1c5m\\n\",\n       
\"qpsT/XsXJUszjSUWGuy0SzfNpRoKsjrFXoyx963P2e+sHrKdf4Pv5ZdDSP8+e5+kfrZUEO79IyZX\\n\",\n       \"+IXn6Zxl9CqHL7w+k6bHV67xZxY5Fji5QrZndlTTjw03V6a8t1Cv1GU6akVNlrRlcUcRtgVjwrK2\\n\",\n       \"TQ1dibE1I9Gyjol9C3ZjZjqH2Ssh+qDgSVE1lNLNL6sZ65vquS9VcV9fzUR0YNeaXEdNZkXXlqGq\\n\",\n       \"e85JLVhXONR1YFtb6owjh1bmNOkaciMjhanMmpEDtdFYp8b4vw/qf5rjMYqd0rr8QkzsSiwWmaHc\\n\",\n       \"nZC4OmqZjI6064eag7rVybLLy6esHaWyIhpW2uLoCc3ukZfffeDJLb58isPfeKBamn/+8vzjLa0Q\\n\",\n       \"wnHWP86Zp8jHfLVG7TxP3S/Hdxc3eO0KLrPyv/Jcmyf2CJ0yA+bavKG4VOdUSnNE6LNaKc1FbzbL\\n\",\n       \"tU0pyC8FZJ3A7cjjBbWE66FsbnIlPeK0cjwzUSIUaSgTEpqhNDgbKsdDq7gW6IjIHSpB7K9ZFj2B\\n\",\n       \"RGHZ2C0l3jKVGAq2cGRplFk85Pz/zlPXOXwPLz7a9OKppmrM9bJT9pw0bS+S1ez1tx0trpL1ZI+e\\n\",\n       \"L31C4vGyk8xf4fSIQZP8LlmHgxNcjLz/dOncNtjgdpUs4Ykh7+vG+I234rz+fjPy1tVH8Q/egf3+\\n\",\n       \"In7cWxTM987V+g/wxFEJbyaR8wcw9t5rL/lM5T3cm5RD2uoVmj/Crw2pJ2Zr57THLZVsSSyumeXR\\n\",\n       \"6QmxPnO0Fg07iXFa0ay1XE/P6sRFzTgzji1FSA19WU/Ncr+hs9A1SjJFUThKyPT18YzEIhKJ37Bm\\n\",\n       \"bMHM0KFlmQoyExNLNi0qTEQVQVOq6kBX05NGamauYqZtpq5mgEKh6simREvdnrqBk6KqsVD03asP\\n\",\n       \"LMf7+pbVnHDMWHTXTsxUxuecujjw6urM1z41ce8W4zbdV/jEKTZGJDHz7L0rVoZbXryQyOtjq8OZ\\n\",\n       \"RxssDLi5Vn6+u8b+k7QuU7yCK2/nFRBjfhmXH9i8P/StT32r95RoS3VxX777ouLEGnoli/HmmkHt\\n\",\n       \"X+s/0ZZXZ7ZWGob51CBUdeJENFSVqWhaioW1cEchEdW0IewoQrQah/o2TMKmatjRVMwbwURfYqBl\\n\",\n       \"XdW2czKJvtocR9sT9aRSi+55UtS356pUw3FdyxaEuRV4XSbqO3JMX8OCmVRXoSEX9BXqquWa3O5a\\n\",\n       \"Rzp9t9q/M1Kv7NuNPZsxs4zdNLMQK/qxqR+DNAbV0WmT1wYqGy2t9j0XV+uOWkSJwV5N7/ai2dWh\\n\",\n       \"bLMpdA7808fZ+SqHb3sWVhnAduHP80M5j96mX+NLS3x+Uo5URPb/P8YvYJON0xyflP580xQpp2OJ\\n\",\n       \"jCQFa1c5d5W997PVprlYoh67ygZiG4uBVwNFUQq81rIyAO+BGr2uREaWlM1IXdn07CnV6cfMk3nn\\n\",\n       \"2ztSIjlrcjMVv+G4Xc8p255dJcbyXqWufRk9LQOVYqoyqdkbj6weZ/8+V+OSrWzRXjpTTaZymyaz\\n\",\n       \"J7l3yKlj9NZl01c43aT+FJN9woTqSaZNRp9ncsDkJAc14j7HU5pd+qulhrkfcI7xy6Uo45+8Fef2\\n\",\n       
\"95uRt6BCkCpTdP/jd2D3v4i/OndPetOhtHeuAsRSHpfmDzQltMfb/NyvlUyx0wN2q7xYlCvgQTCs\\n\",\n       \"NSz2Y8nmEiUToopppbBz7Jgkr0pDrkg25fqyUDFLa9JYGIsKuVpaVW1PVaWuOGkWVsvVj662266Z\\n\",\n       \"OS9125m5eVnFSE20KDURnBJcUjGRqwpmc+vvgUJuWdW6qaFUVcO6iZtyi3OCY0/NPSetuW5H1XEL\\n\",\n       \"KoaqoiLkTqo4sm2pOFCkiYmpdux6XFUvO9A9PXFiMrH18qbtE+c53aDxeZc2MqdC7vSlkgB35mjg\\n\",\n       \"Uo/JP2TwR1PXl2r2n4qypcz1kFm/VlqPnPgsL6yz+7YoxV5fr2tEflvNE0OPYYXzn+CZZM/uQc3X\\n\",\n       \"fuA9xuPzijtD6lcoWpavVqSnhvK1ZQ19N8KRNQONUNrP9WxaKnqSpO94SGxpmmppyVTiWCOMLBjI\\n\",\n       \"3NSIQSfUDM3sqqprSyQaMn2n5kTVLVEFuyrGFqRSDVeMJHjKum3rcmSiBYVDi2bGEtvuWlC3M0dD\\n\",\n       \"Ts1twpva6mpumyULxrVVORZHazrDpu7Cq3qV0gujHRI3berG4xbzaJJMhfZt1k4Y3D5y9L6gmo1N\\n\",\n       \"ZsFwSF6smXzt3bIX7ttbfpkz7P61h308vsU5eJT156l2OHiF8VffHBn40g/x/sATc+7K0oSPXWbv\\n\",\n       \"LF//xzHG31RghRBaLN6j/zjHMmr3ywYjdjhIcIWzN0rEL01J5w1Goyibif3AoChBgXNKa/ndyG7F\\n\",\n       \"3A23REFmypHNnnJM01GapE2VwX3t+c8FD7AO7sjtYN8Zr2nPL9iOcst3fTN58r7CvoEa8RHjYsFC\\n\",\n       \"se/ij17x4kc6jl5raT/R0G5nivaBfn0X92hG8kkpP1vMSOvkoxIemkTCDu0luqEMxrn2QtnZXRnx\\n\",\n       \"rpPs/0FeXJ43Iub/g/jbg9XftPr9ZuStqXfj/lscjveGFaM7IdjBs94GyPTtqHI2nE556ZMMRiQD\\n\",\n       \"ll4pvXSuwMVxqej4/PznX/08f+GDNGtGW9sOTjQsToP6nQWjStdOyHSyJcM00az0ddPEraQqSE3t\\n\",\n       \"qsZUkeTaJjrIw1Clkbgaz2pYE2JNPeQqFjQFO674VS0sWJKYSEyk6g4VVkwtqlsxsGWigcSRgQN9\\n\",\n       \"XYtGgq8o1C2qiw5UrGjaVjERTKwLjmQOzQRbotn8NriSBEsSC4F2OpJqipoqIbcXonHrUH4hc/xn\\n\",\n       \"Gzz+gyy9f65mGLjdfsWv/aGRR9vR4j2u1bj8OfKfrrn2Y22bZ9sezWrq+zOz2p7+yaEX23z5NXZ/\\n\",\n       \"juztMPP7rqvkFS39GE9/gJOR7Q9T63HuM7kP3L5vVr3m0g+N9Ncr4tZUc2lq+OTAbKmmbl01X1XE\\n\",\n       \"HTth26jStiBIZWbJgtXi0FFa/nZH8ySZe8oQtCVDi6p6YWpP1argvjgnHUY9GwaGCuQK0ackZlYE\\n\",\n       \"G9iU2pHoizZkuoKJ1CFyQWosmJko7IpCaYdnUapiUaFQuG/gQPS4epEbp1uS9shROlGTOpnMbOKY\\n\",\n       \"tpshmiZkoW0yWtAqegaLl9RrA0tp32qaWc8O3F9eMmj1xDM7XL1YJsf9y+/ciLQ+zoU/09ZqZprD\\n\",\n       
\"iQvv4doHQwh/93snPXYe49Tr8rcSpV/H19f9VmO2XQ73GF4qreI3xiXP4mqNnQlnvsArz3DsqLxO\\n\",\n       \"zi4ilhyQqTJpd3P+8L2h5IecSHgqlA3GlvL+kysxjW8wz9UtJbZrSjfW48o2Yzw/vJlVV1SkMlNN\\n\",\n       \"ifH83VEpA0vn/95VRpgvyYun6a4Ydo+MqgvCTldybtux56ZO15tCI1NUc5Nw143Q0K8ulEfYqJTp\\n\",\n       \"ucWU6hGjDkWj3HZ6l409QoN7p3jhJpd2+Qdr/NRIqRFS/jaSiyzt8cvf2/n71vX7zchbUx/Fr76D\\n\",\n       \"+3/AG/k3ohmh9SGe+GAwHNV97VSmlmaKZ+i+zM5ff720NMa4FUL4W4d8JJEtTu199LTBbEFlVjca\\n\",\n       \"jxzOhk6kqVDpykLqMHnccBYklQ1ZGOqFbSvqVvX11LWLulZyqBU6siKRmppqObJgQUNi7MBERTIf\\n\",\n       \"ytS0VC0Y2nPfTE8u6jl0INOUqliw54KBhqpgaM/Aoba6ZVOXZAYqplr6Kjp2TCw4L0g07OsYmblr\\n\",\n       \"V9+eXGrdipa6VJSp2VLVSO6YNDI3rXA6pfcip0+x/8flX3zElZufcuPDRxq/Tv/TSvXXWkdotT2+\\n\",\n       \"07TRQ5kpq7lw214rd/Ufxxjf0rwlyib0u1FehBBWcZLKU7z7eT5+lcOl0pOmEXnpA3z80zMffumG\\n\",\n       \"o/UgO71ieVS33M3txr779bp68nU951SKqiJ5WiwqqiGoFMEk7LoblsXYNwsTfUEjNoxVnMfQcbnE\\n\",\n       \"kvu29dy2anku+I6C2x4ztIod0V1RS8OGiolDu0Z6jiGTCHrW3XfVewWM9I1NcCjTct7MmoE7EgM7\\n\",\n       \"TtiVK3TVlEyEQ73kyAmFs6EQjBSxdC6JRXAz6SgES7perTQlWS4epIrptvb62LvuRM0FlkbXXeos\\n\",\n       \"e21thx9+iZMHklfHJr8UQki+FYExhHCsY+m/eFQMK7JR5rB9W37ypuPtwtEH8Onv7aqY7rJ9gX6V\\n\",\n       \"vMrCAasHpcOp34K8xBiPQlj6lVLCm43YfhfDBle/yr2/RW9IzDn/LJtL3KmXzqXtWDYia0o+SGoe\\n\",\n       \"XjgnpB6hmGfNVJVNxnVl9MqDrOO2b5JZryuRk1RQWJU4qy5xBld0TX4zAOqcsgnp4xVlm7WgZJo0\\n\",\n       \"tRol2WWavWhxfWQlnTkVouU06qXrurGpYddmGJQJ4vEeSaaoIauWbN0kIz2gXicMyJ/map37B1TP\\n\",\n       \"sfLLnNnll3+DjxxjrUG8Q+smL+Vv7CnyptTvNyNvTX0E/+Id3P8v4s/jf3oHj+FNqRBClVN/ZNHk\\n\",\n       \"7FnDWuGwuWCyGuTVHXl7X/xGCOHLr19xxRjvhxA+u0FnKjauGZ0tyI5srrG5TfdE5lijInXc4mTg\\n\",\n       \"KG3amr1mXG+KcWDmtu14pB5qOkmug3Uzl5NVE0HDxjziYaTQsaSqMNXVUpUj0dRQc1fbvtSBTFVf\\n\",\n       \"sI6B42raOnJ95Dakxo5M0XRgVUVFtCu1b2jJWaUh2X01LamqmtKhsWpH5lDLqkxhYCzR1ifNzZrc\\n\",\n       \"e7Ze5pb3qhQvERZ5+XmuvCJ74Uj/bz5o6kIIKzXVfkXejqa1oDalSEv7rfZh/rqb/pt8vkON59b5\\n\",\n       
\"g2dZPhHCrS3+dYzxtxllhTKo6Eef5GNniS9pf6inMR463Ke/GAyPJdqzXIJrmzy6veNDX+j5l48t\\n\",\n       \"asSJ3nrXsXFFu8gspDsyB66kp43ieWkxME7bFo9Sabpo2r5nV9CvNKypSAw1QkVmRW5x7o7ZNrVg\\n\",\n       \"z749FYnEyNMmHlE1EG2Jqkp2QWEgkdrEgX1RQzFfSQ/mg5sVhZGxXR2Ftp4chVRdxTFVK2amciuG\\n\",\n       \"7hnblmmFluNmqnNp+HLItWNwlESjgjRrSiJpUjdsdQxaXaYVrSI1Xs88uUW9mJo1tq2MuT1h7Q5P\\n\",\n       \"fYZf/xhfPjRHIV9fNT75mNrCOcvXH7y2KE/H9s9sWXje99yM7NzlpT/PB/u0cw4TLh5y4wt+04X3\\n\",\n       \"4er+Ai8/x7H3lfZDswqanPtY6Vm0scb4iK83aaTldGMZ5yP3lHyRFSX6sakcxVxHJ7IUeDLyUiiR\\n\",\n       \"j0U8pRyyvKgc3ywqFetT5Oq+ZllVooaJsYlN5Tw9KHGTknFUfj1ThrpsSMJYXjsS0tuqaccpuSxJ\\n\",\n       \"NcOGVGpJbhpaxpY0JNLQEIuzVA5UZ7tC/l71m3uGJ67LFxZKo7PkFIcXiBNO3Cs94y+u8Mw1/lrO\\n\",\n       \"xR0+GqmMeeGwRMXeVDnvw/WWNiMhhL+K9+PLDyf4luFW/qhSN/XfxRjftJTWd7pCEJTIyH/5Dh7G\\n\",\n       \"p/EPQlB7m6zo39QqV7oLz7NwgcVhavbUk6at3KRz2qx9SjJNpKM7xgtd8d/6UrmA+KnXbWPjEf7c\\n\",\n       \"DzO9wNdGXPw/tT6565n79LOaU+tNxbhmXK0pQrAeuyZp0052WVqJahY0Q1ti6tCqhih1pHBeEE1F\\n\",\n       \"DZmuPVQ09KTu4rj2PLZs36GZqQtuu2vFSMVJdcdMXNZUV1gSJbiratGCzC23PG1q3UzfoqpNO7p6\\n\",\n       \"ZmYyS84aaeqbCIZWbTuncGCq477LFvRdMNPW0hY0cel90aXu2LSfMTvL0U1WvkSxXw7KH5bodpns\\n\",\n       \"JSZZotcpxE4o42S3CqMBb51cvcMfeDd/+Hnur3PzKmuf4y+3Q/jsjJdmvPLQDfFdT/GJT3K9SnFN\\n\",\n       \"9b3LarObLv6hupBFsZ3qTKdCK3ftR0Y2/xU7a2PrXx37dz/DZz7Bs73cxVM1K7FhscgtZnVfrE6J\\n\",\n       \"Fbt5YlpkOmlF0Qv2q+dM6i1HlaGaYDV2tELpDxP1dGVyE6clOpjo2XXTRCHXUz5Y3oeWVCLRk7th\\n\",\n       \"aBEHqkrY/8iSuqjiokOZ0yrWRBW5A7mxpmNzgfctE6cEq/O8565UQ1tNxUw0MpaLlkPdTEeYTQ0q\\n\",\n       \"iWlcdpRsiHFBOtoltiT1pqEdd5ejzoRhp1SXLG+zUBArPH+P6x8LIXzhjVCrJZ5YkfyWZrUqzTfF\\n\",\n       \"dMuk/b1cG2VI4rkf5tTnee0JlkrzWLdX2H7hjdGa6rM83+aj/5TL76dzuny2/uIP0lghbNCs8miV\\n\",\n       \"1UAvlD3NFaVt+9NKVCSJpQFayfZhObIe2Q5lo/HM/PUrSozjgtIT7GvK5mYDh4KxsURuX91YMHUk\\n\",\n       \"ehRPKFueoXK4U0VXsCjYUgiy0BCrQ4uhoWpspm6sqSUXjNX15tGLdcJdRboqxLrmkCK9pNKoafcO\\n\",\n       
\"dWstilP0n2C3WfY+OuUqbTMpD2N0EOPPhhD+hVJun30v5+67qbesGQkhPId2jPGjIYS/EUJ4Psb4\\n\",\n       \"wvzbfyXG+D+GENr4V97EyPjvg3oSkxjfqEt/eypG+yF4DR/yNsuLv9cqZXvn/xw/kHKsy+7JxOef\\n\",\n       \"K0yvBoPNk2JoqkyJFpidYPcGPxpC+NsxxnsPtrPEh57Do3NDriazVDtdVl3YkxRTa92Bfn2o3wyK\\n\",\n       \"tEa+q1GZSZzVsWRJpQyjcqTrjrtaEkNctOCkmUxiB9FMak/0tKFV11Sldsy0jG1JVaypqom66nJ1\\n\",\n       \"NFX15MZzYlihYiiRWNTQ0HRgLDo1f/9UX0/iOTUzVTU1LV0TI4c20VXTc0pVx7Ig2MN2yDUFx5Ou\\n\",\n       \"7tKuW0ub5Nc4gf6XaN7lJx5+sMQYt5ZD+DWu/6llT99t64ymstqWrZMzO7/ke3DQ/A7nvnmBj/0B\\n\",\n       \"bjVLb4PWmOeeY2WBzXUuvMIfCiH8vRjj/WN88BkOqvPU0HWDrR39Ex3x9Jr2lULl1R33Hk3sxCUL\\n\",\n       \"tS2/9CO5O19gYVq6tlabtEaFpe5UczWVJYlGMhCStkZMNZIl0/pMv3IkKzbEbCYpMoXUyMQgjCVW\\n\",\n       \"5mdw11DTKQuW5yqpoUzLfQUOdZQw/En0TAVNqypGcjvGqvpmTgqioUwuiFYtiaq6epI5yH9HZklF\\n\",\n       \"V/CIjk2ZibFVTY+J7ulqioKJisyujqqOwopZZWqS9d2qMsxzndFlQkdtq6G/OdBvrrl2MtEZDk1a\\n\",\n       \"fd0hJw+5W2+5+lQhhLHZMU785RBOR46+SP+zDxrECjv0stxaPVWbu6JGM4MWR1/5Hi+R8zxS48Ov\\n\",\n       \"MbrK/YXSdv3RQPcpbxi9sf5Bntpl0CKe5sQ+L5/g9GppB79UL13RH4nlM7goSlfVS7HkodTn/M08\\n\",\n       \"zI3KlM/uPCWNZSjwksQ26grLSmzjtJLtcTg/6wXOqTgtt2/gNRWpnr4Vc/2NcrCzPN/DFoaqGjLH\\n\",\n       \"hLAmDzWsyA1FUw1s21dX0dKTmGmIDgw9Zmw7zBw4ob80U89WLU8yMUsNw5osaVEMWR8xXWQ7ozjk\\n\",\n       \"bI9fijGO+M5k8Tezvm0zEkJ4Cn9C6RJD2bD98xjjK9/Ftn/QNyV3v6B0K3qBMmNg/nrLW7jKeofq\\n\",\n       \"4/ild/ogfJM38nuqGWHjk/xwweNzItrxfmJn667XHj0jq9clGVEuq6WKYY1Zp/zBMjmqjIU8sy75\\n\",\n       \"RFDkPY4aTG5LzuXimaphlfwg17mcOVw6rj1s6NWCmKXGnZoQNjRVTBX6ppo6Gqp2dYgDKyFadckN\\n\",\n       \"K6aaqmpatmSCRC4xtaiE0msqToq61gytK+zZVTgptWDfoWVDTVOpwpGZAwtaggRBQ26gpTBV/q8L\\n\",\n       \"HSO76mbzx1VVV81tmX1tiyqCIxesCDomEmu64bY7YSg4UM9OmRSPEa9SrXO4Uv7uvvrwWTji72bu\\n\",\n       \"ZUODP05ncyqfTQx+lv7ferMcNN+gVtdJmuWA3V3efZzaKvdusvpBbh9n9V/xJ/G3Uzot30T+njV6\\n\",\n       \"9edcPJ862YhqITErOm5uL7u5tSDtjo3CHj/N7E/wyrPlW0dVFg9nqtVMMotCZaay0FaNTZUwEpNM\\n\",\n       
\"Vukq0uNqs32KiWS6Z6WWWjQ2dl/bnomxms7cR/fIREe0oGFgxWMO3VPe8CaomMyB+6ZErmuEVFOC\\n\",\n       \"RQM9E1sesW5BWzGPRgwObWmK7ipUVDUEE9FYYqqlpq8himpWpEYKIxXXNAwlJE392rLRbFctP1Sb\\n\",\n       \"prLRSC3NDTRcKthMMktabtgwrudefa6iGHQs1Gqq69tm9S2P3Zn68Ctc/AhffGJOTp1u8/Ki4R9s\\n\",\n       \"u3QhWMwWtbtRL+zr3iB+Swn2d1npNx9ZzeyBvL903U1qb/yWpEYlL13pm7FsKrobnBlxbYNToXRE\\n\",\n       \"beSMKmXzsRHnqpmkPFtt80tSCSAeKduHG4EjiYa6NYUlM4Voee4dMpVKVGybWMeKoX31ucC7qqci\\n\",\n       \"WlYiZg/vaeYBEze3IGiq2JJKTUQDia4NSxqqsedG2BKkhqoqFnTkqlI1uXG4bpRumBV9exs9kk0x\\n\",\n       \"PyQ5SW9AJ6VxhRMj9m5T2X6HwIFv2YzMRyn/Pn7GN+eDZ/BPQgj/R4zxf/gO2172zUCkIyWO9fD2\\n\",\n       \"/wb+bfyHv4vj/n6uj+Nn3+mDUDYj/w3+23f6QH5n1X6cR289/MrU+V+buPbjPdPmkaQxXMXCAAAg\\n\",\n       \"AElEQVS5JE4T+SBjOColu0cYhBASlv4kp374SO/5X5cf/5LZeNXhzopjw3Xp7iWj82xUefmJjpWD\\n\",\n       \"oN440t2oGjVP2Q8jLKjEXAyFmZrMWGpBEWeSOFGEih2bGoK+k8YKQU9i4MCyhh2ZQkvqrJmb83Vx\\n\",\n       \"amZFzcTYKzqOG2g5sGNNX13mnqmemsSCqVyqa1GmLjiSyzSN7JrJZUZaZnL0NcysmVo3tm1TX8Wm\\n\",\n       \"TKKYYyin3DOQWddK98Rw1zQ51AxHmufGir8UQvM+4y88gLrnMsy/HkL4h3TXsPtWzovnNegS8vIj\\n\",\n       \"zTm+zEGfamCcEB9hb5UzIYTlFl+/wY9ulvdyx+k/b+ezn3d4OujMmvLes7pffRf3B1RvlIzE8wx2\\n\",\n       \"+ewOS0Mqp1hOOXk5SlocLI+cqF0xqa9qD/ckychO2hGTlmntjGpxi5ipGOtIFQqHhnYk6voyVVMV\\n\",\n       \"Jcsow0TigTLhUPmgWUJd7sCRa1KFtqqg4Z7julIVg7lHzWiOmLVVnDPT1RdkppZNjaSiwlRwYKaB\\n\",\n       \"aOKKaCw3k7umPU9KWihXxQbUdkzjXZMQnYuFhUZm3Go7CNHdg7rb4awsrCsafTFfVx1edvjESGvy\\n\",\n       \"tLWvpO5duGb/Bs/fpn+WnSdDCAfn+egmcdVsOLXXumFv7ZAvbPN3cDyE0PsesoxucT3yXKVsRh7U\\n\",\n       \"1dXSW+SNav8rfPY/oLHK7DHu7pceX7Nx6Xg+nZs3Tx98hLkMN5ZfXyovGW0PJLnlo21FOcK5JHhR\\n\",\n       \"piM3ELVFGb6mpacl1TYW3HPfVSNP29NRM7JkoCHYFZ2c7+mB+HeoxFJOCu5ZEm1YVtM3NnbPkhv2\\n\",\n       \"rTnSCcHYSV2rmqaCbdGyFVU1XS1PGdkUdOWNVUWyK5+2Obg8Nz/p0tgpQ6e+fpu/+TDC/HbWt0NG\\n\",\n       \"/hM8HWP8Lb7/IYS/olQhfadm5EjJ3qH82/stCEiM8S+GEP5r/GslivLbKoTwUw99+ekY46e/wz7f\\n\",\n       
\"0QrlgvZj+M/e4UOhhCyfDUEnxm8GWH3/V5Ez+S03m+js9SPhYluRvKR4/nz59zoL7L1K44CXcZXk\\n\",\n       \"KR79CM3zM+8ZDPRGFTG549Z7x7ovN8TByN09Jjn7K4VrjZHFbm44O2V/dNp48WX1atc0tDWM5OqS\\n\",\n       \"IjMJQ2MrYnhGIqjomnlWY27bPbMuU7HjtprEMYW2iTtKTvyCI+ckanNS42uqLhoJ1uQacotSqwo7\\n\",\n       \"em5J7Mqc0ZNbcqTtrkKu0HRgWYKKroapmcKyzIJEW3TOyMsyh6aWUZXoWjdSVXNC3aE03KLWczKJ\\n\",\n       \"Gp2ovkDxJ/jaMn7+4bMxz4d5W66fGOPhWggvf5l3v6dkD8oJF1la5ysPDA7muHkY8sJXeC5w+jz7\\n\",\n       \"Q2q3aUzMfuUTDm6cf4gH80XObllu8vSfLW2/D1K+8Q2+8gLHP8DRCWyX4WNrzcIkO3Ji2FJP+YpT\\n\",\n       \"DprHTZNCDIs6aV07vqYwcVrDVMMgVoxCw8RUTTJnAk3kUkfmhMFwA8cERzoKub6luUT3gpl9Ffct\\n\",\n       \"OjTVsoa6qUJNfZ7cmkosGWCkZmhPXWZFUMgtmKmpaKtaldqdS4HXNa2rGBsZ2HPbmp6I+9bqHZ2V\\n\",\n       \"mTRPtSosTAeq1WNuTZ+QHubydqGorJi5oKh9TZjWnekf16vccns9c/awVC0tXzhu/6MfZ3iGn99l\\n\",\n       \"bVAGvXR+VRqjR/50iTgcJCGsfYP9f/a7uD6OQmj/HD//Yzw9pD7j5iIv32b8LUZASZvsAmcSkpTe\\n\",\n       \"U2zXqO6UI57bsQzEe7xKCORFOXrZSUolViVwS9ka7Ct1LReUdMdJeYV6BCtzs7u7OFB3zIozElMT\\n\",\n       \"UdUNp01ccR+FoNCRWRXdVqIgm8oxTV/ZXT+Bm5qaTklVZeioSlRVXFZ1X8t9j2CsaaBuRUUiM8RE\\n\",\n       \"jCsKaxiIcSAvFhW9J9nfIjnGYJ1bFfJPcWybX4184Xd6Xt6s+nbNSK4cz1x/3esnPZQD8m3qc/gL\\n\",\n       \"+KfKccHff/CNEEJ9bh889m1MVGKMP/Vd7Of7qd6H3RjdeacPJEbDELygJNP+HuLkHH6elz7MDz6E\\n\",\n       \"jlxeyw1/7haf2uYn7vChdkllX88ZZOXd5FRh/Tkaizxa4ZH7R3YmU7feVXEiHjl8ZlXv3mm12ZHe\\n\",\n       \"2oHT91pup5u6g5Htx56TH1y3tLymFu87DJuWFBp2ZcmB3aJjPGsL9VWJmYqawp6ZmsK6DLkVIz3X\\n\",\n       \"deVSBNsqDi3YEBX6ojIL56zomlNmclM1mY6KTO6cmoq+Kybz9qEp18Ixyw4VbiscUxHkhtpKPVDX\\n\",\n       \"TNNEPrfkKt05SyCjpq9QaCjMNMOuakitSzTSgUYHZ6l+gLXnQlg9ycEvxRivv/3nnn3++ecIV3im\\n\",\n       \"yuwyJzf56g/MlRLXWdkvx+mHZVBa+DuHfGCFd+fsbvMvC3Z/gf/oMc4u4S7haxrN3I90ef4hvsup\\n\",\n       \"dT7V5dJPcuPjtD5Sl38k0WwVnpwmWkuJnVhzWGuYhJFZmKkXFY3YshRb+mEkN9YTVUPHvrF7ploq\\n\",\n       \"qhKZ3JYVh8V9laSQFEOSixbUpUgMXVBxRsW+RRVta/Zdt6ZrJtewY2ZJrjQPH+haMbEg05EpXDFy\\n\",\n       
\"TM8JdX0z+xYsq1rWm2uwqp7Qt21dsKhtz0k3Xbaiomeh2JOktVIOHuuqRXS8UrjZDhyMBYHJiuJ2\\n\",\n       \"hcVvmG2QJwWhkM4Jo/0ag3yFE2fnfKIN9jbYy4THW1Y+3PXH/gnL0/Kh/qV38dk/xuvsQr6LinHw\\n\",\n       \"2RDCHW4+Wxqq7b9C9vUHtvQPVwhhkcc+xCf/H177Uer9Eg17bKUUjZxPy9yZGMrMmLVYsjVuoJ6z\\n\",\n       \"ntGrl/lCMyUy8l7lY4uyhTiBvtRUoSVXinVaFqWaOFSTz23yjjRdM5HY1NVSeEK0qlzd7yofyD3l\\n\",\n       \"3HQdd7U1VHWVbJVF5fDnlorcxFk05yESuaimak1iaqYvhkf0swlplBXHOBhTG5UypF187ibHbnDi\\n\",\n       \"Ki+M+JkY4zsmePh2zch/jl8IIVxWtoaUY5rH8Ze+04ZjjF8JIYxDCL+iDE16IYTwv8QYfxI/HUJ4\\n\",\n       \"l3Lw9ntefvpQfb/wRR7UA97I76FmpPtpvniC7UdK46r9wJVddv7vGOM+fqodwr93ij/yFFunuXtA\\n\",\n       \"7bP8xMsmh8zW2JiPEzYORypXq15bWXFseFpyt6py4owTu9fsre5q3F9TZBXt7JrhJp1Yk2tZidvG\\n\",\n       \"4UjfQDurqw+WZfWxLB7KbSvCigWJVMdAlAuCqkLL1Lod++pq9qVaFmTqc/D2UGLPkpqKVT0nROdV\\n\",\n       \"7CrcEnTkcmNVUeaMqg0trfn2S2b92F253MSyhqdlJqK6aDCnxE1kBloyE/U4kocojccsh4qankzV\\n\",\n       \"oqAWc2t5kFaCyXujc6/yxJPcfSyE+s/EOHlLAs6+Xc2Jcz8TQljBmRP8+Dr1a6zt0/w6ky3+2QPe\\n\",\n       \"yhy5+WWvM2MKIfz0bR5N6eQc0fkpjp7k1z/I2i02X2U1Y/kptv9Z0/TGedMfXbJ6OBMPe7Y7267X\\n\",\n       \"F4VYkVSqEn1F7MuSmV6s6iRBHoODwIX5g6in6oKamYk9wcApNYXFpKvq0DAZKuamd1Uja4IViW0N\\n\",\n       \"mUKwoGmqourgN0c6ZZNZXhk9M2cEOxKvaqk7homRkQpWdWyLxkZm1pVr7aGRzB1xLh1tWNCMa/J4\\n\",\n       \"l7CiMyvUJ1NFJTespnq1iSIdyVdHQh5JbrNWYViRZ2Oj9l0xFM5ts9sqH6WTlwt+qPDN1WVB2NV6\\n\",\n       \"nE6X2rxxSfDcHV573++mGZmf8xveUMb722qzXDvHCq0aj7/M/jLdhdKUNPY5vsQjB2wnXF9ic1be\\n\",\n       \"d/oJ/QqNUK4xawXfSErmwaZybENJJ52omEg0REOFZYUxqnJBrm8mmurHRAh1U2vGdvElpRj4A7im\\n\",\n       \"7LKfVbEukZmqKuYco3LUsK8vOHRMZlM5bth54I2qbzDPf+6aqBoVU7Mkp1gupVFypi+xusW9nmrn\\n\",\n       \"ls2TR46Ppto5N34yhPQfz2MX3vb6ls1IjPHnQwhP4oNKhCQqB2YvfLcyn4flvPOvf3L++T/9XR/x\\n\",\n       \"93d9XJlq+P1Sv4i/8U4fxO+kYoyjEMLfZ/csX1pTopZXH1xzIYT2ozzzh/lCfY7QNZl8iOSuo5V9\\n\",\n       \"g6QkJS7Mx4uToqFbKQxqBw7ONKyly4yOK7J9rXtft7Ld0Hv6QLJwVjKLYlq1rDDTtV4MVWcdL7UX\\n\",\n       
\"NMOKUegK6loORU0zNVVBVXRkV6Fi0TEdB4ZaDtUtW5bPRwupmZEDYzNn52vcfW2HqngUA4WmoCeq\\n\",\n       \"6CpM5da0bci07SuzgVMbcomhzETTTEWnzEjRchRzuZ05fXHfSnxMEYOasSIEE7lQjKRZtDQN8kBn\\n\",\n       \"xuVlfvA+78o4/LEQwjfeDknfG1WM8QAHIYQruzy9yKkB98e8/O3C+UIItYQnFuaEvIxbrP1lzvwI\\n\",\n       \"53oMcu6e4uDDpXdGq8rGf3XMTv0RjVZm8Wiit1u11VgwTMcahmbumYZzqvm6UNkyC3fd01cLhUeQ\\n\",\n       \"S8yMta1bmSMaqyaO3HFgJo1DUeqkqrvhpIkVbVftqethKpeoWVGZJ988pS6Y2FXGLy6bKeYi7apS\\n\",\n       \"BlxzzlKZMm1iVaFjYldHRc+mctJ1oGXJqqA9p7kOJHOT+qHgIEanMmLSVplNdGZjr7WDpLgqhkfF\\n\",\n       \"8QL7N1m5zMJA2rvu5pk9J77OV1e4M+bO/xZjvLEZwtVLnHiynHOYm2stdW1epvXQdZTG0qfjrasQ\\n\",\n       \"QgWxdDOf1KlHtjeZnKKyWDYpzfrcjGyrXGcXGYs9JifK9N6kUg5Lhkpi65pyDHNXiVMMlaObqYnV\\n\",\n       \"+XKhdAcZq2k4mNOU12SaduyFQrRk36pvcoh+XbkqH+GYVGXuv7qEit580FOfK/iObIo2zCwpsZkj\\n\",\n       \"QaEyx2sL9/XiWD2bquSXhGRN3OqUMcuVfWoT9vbp3fToYu5Dn+HkXWpZ2Vj+v38mhPA/vw38sN9W\\n\",\n       \"31ZNM5f1vOVOi/8mVAiqyhChP/tOH8tD9UVcCMFGjHbe6YP5bmu+6v1Wq5/VddRfNyo8zVGblX03\\n\",\n       \"P8sXf5wf3CamNZceW9CfVdypL+ofHxo3busunLH+1brcwLnZ1GxcczG7olFhIJOYOVvUrZu62Dwm\\n\",\n       \"KdZUYlMt///Zu7MYa9LzPuy/t6rOfnpfvu5vnflmvuHMcIYixUWUqM2yHNlaINhBYiGB7ViOEyBG\\n\",\n       \"kps4CZIAgQPkyoETJIDtq9iOlzheEmuxSFmyRIqWSYrrLORs3770vp/9nKp6c1E9JCVrn6GGSvgH\\n\",\n       \"+qILp7vfrjqn6nmf57/MNNJrWl4xclOhodCWOj6/EVxQuGVqQTzPmuibGcs0HRrqa0qtyowEXVOL\\n\",\n       \"XnbH+x26LPjXoouCD2HbVKblyJF9TC0bnDd9l84zboZuOXNBMG9s38C+TCqol5lOXCYcSOJ9kqn9\\n\",\n       \"cslUA1v2jH3bgCSNyjI6yRnmrA54daNt+p4mf+G8s/maqkO8qepV3/uDKlLOb4qfDSG8ovI7+O0K\\n\",\n       \"kfkN/vy7WL9QhX/V/rX2swPPX+DqA1ZaPD7llSvVdO/set1+2pIvzUkXeyZJx1a9JYaOvDUnuqvQ\\n\",\n       \"lZUXKpePpKleLkmcCfGBmATHITG2KDPS1Ve51TJV6jjUNVYvGsqkStmdhcvq9gxdMdK1YqhuXuLQ\\n\",\n       \"kR3RM4ZmgvY5GTHT1NCUmjjR9LpCsKirrYVS09jeOfG5IZzTmjnQtaergbpEet5ZuycI4UgwZ280\\n\",\n       \"dCupuTApxFlpr73iVClzIA9Hyk6T2bFGvmPTyLWXo/Qur+V8+qfwwpu8wn1+6hP8+R2uLlbf115V\\n\",\n       
\"HA1dv/vrr1S/XtWJby+qOADvovsjXHycRo+H13j9lPoi9XU2I2+0uHFSlXWP6swusPA55g44XGLQ\\n\",\n       \"qIYkF2Jletav/htt1Ufgy6q9+Z6K3fEM5yqXWKlsTFx3pNQ8l/UWJhKXBK+7oRr2nKsAXcGnVV2w\\n\",\n       \"mboD0ZnpuRHayNB9YysoXTRUN7Kq4k5lqt7XialE34k0nmjGmYXJmfL+JfHRiNXXqn92Ms+D72Xv\\n\",\n       \"nuZzhff+Ix7b+toZXB3y5Bp3r6sMUv5A8S0H1rcPH8TtGP1+meJvO2I0C8En8Ufwj97p9bxN6J2c\\n\",\n       \"Ky7S6o4A9mlXc9D+/8oLD9n7cebbXb1RYn/lwwaTC7I4UmSv6s/fV94YetevsP99xNgxfzTVmZu6\\n\",\n       \"UJtZzOoWyqlhaOuVTQvFsZ00EqvZfamBI4tuG9hUk4lWznkeZ6YuuSB36nGFV93Rt2wiFXR1nZoK\\n\",\n       \"ziwKLijwgr5vMzFWmlM9+Sf6dqVqOiaOTaVaenZ1najLNY08rqrLtk2cyl2OJ2qmYtF0GEemYcHj\\n\",\n       \"9yem3XteWRtK4qFWMlaP3MyiVo2iIDtk8ZWmf/Z9jzm4vGwQrnDpHn/2K0wvkFxV7QXvcRZC+Ht/\\n\",\n       \"EKz7KvRu7ce4cZ0ghI377P60qse/hvGb6oxF/sSzPB81Vl5VrE3lBd1LNaFVc3Q6dbCYWy5Zyri5\\n\",\n       \"0HGcXBf2U83ljrOVDTG2xOFQ0ZoXkyFSsewqbcimO/LWUF4eKkPQVdcwcikm+mHqSEtDTzCTaciN\\n\",\n       \"pHK1GBRpsBQGUqWeh3a0DTwndWLfQE2hZtnMzMSKuiOlQzVtM22ZR2qOTS3ZFB2anPfFRgozTR0N\\n\",\n       \"Q7dM1eTn+q2GU08rZjvu1MbWlMgc2zC0J8RTrWLTcHDVg92Zo+WORv/MMJbGzcfV+os6oxPT1uti\\n\",\n       \"Z+TxQXT1xeg7P1o9Bq8u87EPxrj9pneUGONhCOEf9Pjz3cpvaVAYf54vrtGacfmU/Q6fW2P/n+Gv\\n\",\n       \"vA3vj2Xaz9O4wNxzrH0bTy+x0ac34daUV36A9UBrjVspoVc9wmdpNfxIVhh/H0dbnHYpZxX/tlji\\n\",\n       \"QlnJfx8EishCrMzRFrEYqu7TnqrP8UBwLOpIva5wxdCZlurstyT6alZVXZDa+dcF1dBnS2pJTd9Y\\n\",\n       \"du7K2lTFcS7oqyktqNQgB77mS5KJemIcyl12Ib+kUQz08ruONqb0v43bnYptKyUe0ozShOEaxY6v\\n\",\n       \"cn+oOkh+C5n0NxbfKkbePnyz8UXexJu8kf9PFCMxxpPlEF76LM9/gIcZcUDts2wc8H+d79j/Xgjh\\n\",\n       \"/1nkL13k+6/KVpc1lWKaSeNlZfbA6NLY8fvZXuc7XzjVf+ay0DtzvHxqy0gnLTWlpob6tahd7ihk\\n\",\n       \"euYV5lVtw4FGeKA4v8G07YvGpiYOzbTsOfOYka6hr2g4taCvo6lpoCZTw5K+iz7poeXzx8whElHb\\n\",\n       \"qZG+jmjJnvw87TfzuJHU2EyqKdc1sW5fNyyJedv8tKcVjz0cLQrlRGzMdPIdi0UUa1ErlBbqHB+w\\n\",\n       \"/oCTAUebm4ZXV8zNUtNJVFt7zmxvxh97F//8qfP2+30WP8afCSH8tW9kh6Ry3Lz0k3xvnRvnvLU7\\n\",\n       
\"q4mf+28vOjzZwIiwHsLdfX52Xe3fPzG3NjW5tKDTnKpnDfVuYhYXhJOa9ta+7c2+Rit1nF2TzVo2\\n\",\n       \"5gtntZ7TWt1wNjRr1YWwzOweWVMMmaJoCmFByLdlouW0Zq2IhqHUCzVrZlKFPWMP5VpKhVmlegl1\\n\",\n       \"N6KvdjFyA9EFD3VFHaXxuUB3XeJYdCQx0XJipCczMjKVxa40ZE4kJgo9afV8OS9yN9QlprYV5lCY\\n\",\n       \"GMd9J9mauqnxeVZO11gt7krHOYP7+p/7DpNXM5PvzGTNR8rLV6TDZUke5Z056XBd1jw2bU7UB5x1\\n\",\n       \"Wejz2BHLV0IIS+cjNSGE1mX+zPcQbpwbkN1k7Re8Nt131Kd5kfyA/X8Q4+ylyhLorbw/sid44s/y\\n\",\n       \"bCC/xPA5Dru8+0U6ObdXmHua73idV5Yo5thsc7/Nrw1Ya7MSuFty1qB+idMWP3jC1hxZrBQ1SWQ+\\n\",\n       \"pYjBzWReazYyq01dVpUDE5Vqrq5hzcR9GRIXNZTWUBclEkEiV6lCgqrXsqgaFmWqjKFLqq7JQxWH\\n\",\n       \"ZBHLEi3B4TlFdlX18J6oXB1Lwg0L4znj2ciotqm+nVmsvyife9Fs6XkOF5g+YOlTatcnWgVH38Fr\\n\",\n       \"myx/kY3dKpX4bjj/03/g+FYx8vbhj+J/eqcX8ZvgX+IvvdOLeDtxzE99mvwW75urKv3igJ+dnht3\\n\",\n       \"VZEltWdzYamU31jUPuobr5UsJLI4lBUtzSHzrzBrMqsNNQZ7Hl5dkqR17DkJQ5tlMKcvhIYsqZvl\\n\",\n       \"S0qMDDQsy8K8mUeiaMmpXN3YVU2resbqXhNcP6e2tU08NFTXlWk6MxbV5Q4MdM1bseDEIkozO3Lr\\n\",\n       \"akYKO6oNy5IVmcKZQ01reueqmsRYzWOinTgwiz3zpi7v5aaPzhwkcw6ud6T5TFKe2TiZ6Relr3Rz\\n\",\n       \"s5yb22RP12yuzEmyKPTbNt841b428fDaU/SOqvvfPlzl5DGublUGDN9Aslv2NM/M866vU8DEcEX6\\n\",\n       \"7R/mM89U4yNf4eIv8p/lOmu52fKmxbRlpT82bW8YFqdiODbaWJY+7Gj0+vrLTZKOx8ogndXktaaL\\n\",\n       \"8cvu1xZMp10x36u2iLGBfTGbE5NjaZhZmtWs5GfibOZqs9TJ+vZDIhFNRMuCe2ZIdGIVhnZZbuBA\\n\",\n       \"X0tpxaJ9u7JzFVaVa5PYkjhRM7FgYOTIvJmOBbuuOQ5tVRDebVwVLOjLNBwr3HbHkZrENXUtUSMu\\n\",\n       \"eCNsWDJvpKtppOEOds2FiYt1JrOhq48+7o0L847vNOXX2rJRSy3pK5pdIS+V7aDRLMhLq/Ns/xEm\\n\",\n       \"n2b93xj71nn2mSrf5KtKuKfYP6T5S/Y/O43xs2/XO6PihFz+d/ihE9YHfPl5ru/zaJFbF3jPI846\\n\",\n       \"XE7Ir7BaZ25As1PxP77YJk0rh4DHJpWNTTFg0OR0vlIlL0VeTSrb9xMMQt1eOefZsnRazvSSaKxS\\n\",\n       \"19zAktyhDG2PFLZ0rWpJpaYGcoNz4uoTqoLjEK+qmCfXVR+oXHUCT6SmglzdvK4DwZnjc/5Q1RQ+\\n\",\n       \"whlxJrOqnUehJPSbasOmWllTm+6a3R/TmbKyo7kRPfcoWJow6DNbYut7Of40dzPu/6sY4x942jzf\\n\",\n       
\"KkbeFoSKV/ZB35xupy9jLgSPx+jOO72Yt4oQwkLKjSG7b/D3VHeJkzelfZWd/JWf5N2tvln92GcW\\n\",\n       \"ekaNDckkNz8spWWuSEqNhKUp8XTJr9y4YmHC4tnEbHmsVx+5nieuKrTCI9vlxO1wwUlIheJYkSw4\\n\",\n       \"mx3Ka0OtkKvyeJlY0T13Hdm1IZHIPBCM1OQKHQeG6miKqtzVAqtWBH1B32smNqU6KvLijsSquvG5\\n\",\n       \"y8SCUt2pB04El9V1ZNbVbRpJY8OrxdhkOhPPEgdrTzquddVmC64czpTxoa90p5pxVToa26kfyhd2\\n\",\n       \"PTmLVu5m8vma7miof7Vr7vZIdr0t305+A0enU3WJG9/Yq929wOqvk2x2vfrEVa1eVG+VpuGAlQ7d\\n\",\n       \"Fcl772jOmnpzdXOzIIhidlmYnXhQC5bb0cmVxCwGh7WgU0SmpUkzCFlHW2rZIzvpE5z2JfU1ad7Q\\n\",\n       \"LHNF+nnDrKk96yjzQ2fpkc1Gai0Qy2icFDINIw339Uxiy3VzhmFMLIzC+DwYPtd2rFBT8wWFFUFX\\n\",\n       \"Q8vMl5R2taWitiUbNuTuuWzRRK6FmYFFXBatKA2NLEt1FXoKhVrMXIuZBy5px7ZZHKinlRNFDcM4\\n\",\n       \"8G14LAT38obmha6wtOil48zwpYH8mWXtfqK5v2eyMNOtF0I5k9RYOWPS5tFH6P0KR1tvdkXO3xMb\\n\",\n       \"q+fjgK/HCuO5imzxdmKDzQ7rb3rJJNSnzE+rUMQ3Rhws0Kgjq/xCVvr0O9Q6aLCccn1U/Xg6ozvl\\n\",\n       \"VsndOk9GNiLtsuoWPCxr5mPd4ijXDwuGtaimcGBsqoo/PJALmHeqo+3LlSGSjompgSMXDM9DEr+W\\n\",\n       \"yPtmSu/GeXlaU9NDIXOkpq2ukEqMXVN3Ud2JsZFoJonrZrP7Yu1EXmbCtKbeGyhbE8VJLuw+4hNd\\n\",\n       \"nspYmLoxN7I66bv2a8wf8uoVHj3L/RYH/7uqxfOO4FvFyNuD78KLMfotyXXvFGJUhuDn8cfxN97p\\n\",\n       \"9bwV1EN49gl+4ilCg3iX9A6fO+WnvvaqC3+SHyh5/AHsevTG625/e6YI5HEs6OuMB9Kkod4o9M7a\\n\",\n       \"ppcOhUZUS6Kz0FWLK07CgdWi4Wo5MZc+lBr5bHFD+6AmX9+zmgVrMh25iS37nrZv2dCZbRN1+xKF\\n\",\n       \"qZ6GeUvWjF0z85rbth0ZWRBcFi1IEAUNF/Scum1qZmSi1JCZNyc10DfWtAKmguiGzAMtryttYd5I\\n\",\n       \"drYgvDJ0y5JHR0+Y7R+5vEwoU9P0ulm6Ly27lsq2ZLBmPsuN2oeu3d2z15mT5rSzqd5CLhnfpP3e\\n\",\n       \"qmMMcsL9qsv8DeaM9HfZr/P0194DBt3EpDUzbb/ED87T6VJeVG5uGaV9knlljUIhBkJYczBecZI3\\n\",\n       \"FftrJsef1X7XqUutIzutRUsh1ZmMjWo9ZVrTKjvy5UQ9f1UMXclgItR2LKVTS2Fe587I5ELThWZJ\\n\",\n       \"vVJTTZOOB2HdINS14gN5mJick437ISjVBKW2ID1XRGSGasainqltHQMXBFdk2mr2nTmzJtFUR6qm\\n\",\n       \"VZ5K1PWSgKFSQ6JUWJGaExxpyiQxM461qrOTFNplX5GM1CSaIWoXNaNJQ1rMmayuu3Cn7/7Sdxve\\n\",\n       
\"/jgnxwaNTQuTqFk2rA6OHHfGZnjtPazMOGzx6cDOr7N46LN7+JsUqEcVk3TnNx5/i4iUX6fIaTxg\\n\",\n       \"eo3ThMMLXC9pt3mwwnqf9TO6Q5r3uHmF+lI1oklTGlMmdYpGRUO6k1fiE6FibcTIUpHYz+g3umJR\\n\",\n       \"Myz6HiTRfMxcDjPXfO0jcSra0HdRdMtYoS1aNnZJMCdzqHQkihJtQVN0X9ueoY7ootQiTnTiSCMc\\n\",\n       \"OrGEmmBJplu56Bqrx76ZvrR8QaNR10oyg3LFbO/QIOmb/EP835/iekP6I1Nr7ei5F6tzAe+/zdoR\\n\",\n       \"W9sxxq+8zdfo94RvFSNvD/6YahzyzYqPqqz9/9AWIyGEzuP8uz/G/uL57ut5wtsZ5lMAACAASURB\\n\",\n       \"VC/yoc/wOr4cQljkmUs8/tW2funGp0ZOrrzgeD4ViqA7HmnGhtU7ibLG0aXgxlbbchnFem5x2tNf\\n\",\n       \"rRnnuZjmJkUmTqN2Y6aZ5JK5I82k7kJZkyU1szCSaZsXHNoUVMmnfRNLcWwpjKzEu2I4sWdF4VQw\\n\",\n       \"UkdDy46ugVRfXV3vPG+kbsGxA2cmLphTattz6kxf3YIzDWO0fcUVx5aIU2fxzGn5wHh+T/5saWf3\\n\",\n       \"itmnLnP/xOgDU735xLC2Kiu3TZtTx0XL+DCxat04HhrXtj21VXh9c01MOb14rHhjxL2LzLWZDmi8\\n\",\n       \"zPIWnzj3ffkGIn+VV85YX+PGPoPazOEVDjfmKove5ZKjFvcDe++SN7fEzry8nOrmhXQ2cLayQtFQ\\n\",\n       \"HD9n8PKIxr5e8ZL+c/cspieGoa2II5N8KMmXzeWfczbXFIp1c4OpdPhAq557PCndX5h4VyN6lIzd\\n\",\n       \"lFsLQZG13UsuEqO6PXmYWFHoahrqKsxsmVg201PREY+talgWnRra0dXylEV1UyNjUWnF2C2FGMm1\\n\",\n       \"xTiVm0i/mpIyUaXStGTlmCQRY65Xjs8Nc6fGxUArmZFMkcnjyKxcZDhmWDdNZsLirtn1UkhG1Ja4\\n\",\n       \"uau4fOa4HGjWR04mQ3NHicuYv0m/WYUM3niVyQdwK4TQavPhFb7rK3wg5eLzfLHJ9A4rX2Y0qyQo\\n\",\n       \"bycqqwzbc2z22HyDNx6jt8DaIeNm1dXYPWB7vrL835nnUUXdsRAZlRVPYjEQSva6VXbNU0c0suCl\\n\",\n       \"+WAuYaEoHcXSnfGKabJuT8d4lmmkuy6HoIy5k1CJcRfRE23rOLSso+NIZuJMRXVNZbpaFtVFwba+\\n\",\n       \"XGnm8Xhi19MmoSZ3IsrFEK3Eh85Cw1iOU7moNNZWmIYDWdJz6VHh2lGLhUQRt93anDn7m+T/9Dzi\\n\",\n       \"4Y0Qws8yWPtaIfLVUznHya++zdfn94xvFSNvD34I/+k7vYjfBv8CfzMEjRj/zTbqHxJcf4La4te1\\n\",\n       \"gVPi0xzf5gN+y5vdtYe5l25eVlvdNQ0Dodm2uD02WRjZXk5151ILRU16t6ZxPJWs55IP9vQz6gdN\\n\",\n       \"28sNC6L9WLMyfGCnQ11NG1NDdJXW1G0rnJiZPzc423UaTtUVZjI1hzp2LCv0YpCGS265oI3M+Fya\\n\",\n       \"d6CuYUMqt68m6jpyqi13YsmpCwptwWLk1bDp9DxabxwztbhoFqbGHrg24cLZll/74VvKlxlPZs4m\\n\",\n       
\"mUmtp6iVYujqHW+Kk23DS6m05LXFwod2tq3c3PaVy2wfUvyX2PolPjzPuwtO9/hYWY3/fs84DzJc\\n\",\n       \"VXWp93/z2PcK554z/zu/+KN89smGkxvXnfRmit6scmbNdyRX/6X02oHs3g2jL+3KWrftL60bNzPp\\n\",\n       \"dN/oLFM2N42PHlaihb2PiL+8507nnsceO3K9PDJtv5kGcmJ6PHGYXFY/OJXMdtQXc08ftbVHR27r\\n\",\n       \"u5+zNKoZ1TOvjWoOOiti7EuymTlt188j4uc0XDHz4LzYvCu3rXRVdNUtIw9sS800LJ1H5JWaCnVT\\n\",\n       \"R4iyeExcdlSuivmWIqzKk0OSAiuiHRwpw4ki9gWFvTixkBS64YHj+rqZzIl5MwO1cCYmU6NQyuNM\\n\",\n       \"Wi+tLCTyLNUtPmX1h5ccDDLGN8R//AXF80PrF3ILj01dfsjjh+x22BvzPa+y9VwI4aNr/MT7ufY0\\n\",\n       \"O0d84ot85FV+JOOLZ9zZrcJV31bfihhjEUL4R/yLP8fTiyxMuT9iepen3yAdcWGnCqH95Z/kU5s1\\n\",\n       \"83mQ16ayLicliyMOA8M604SzgsMpz/bXfXlzTas4cRr6eulMy9DTh8d25i45KubNOg/UdYUYjONQ\\n\",\n       \"P42mqrTeTXUPLXrWxC3HtrWxoXJXzRTnxPhKzrsl19Q0th8aeqYyD4kjIeZOk8xqiC656ZFLCi+a\\n\",\n       \"WFNXlzqQlzetjcfec0TjwUz33OByLuMTn4+x9/WfrzvcusNnrvH8NrWS1y7wYp/xC2/n9fn94FvF\\n\",\n       \"yFtECDZUvKNPv8NL+S0Ro8MQfEXlg/LN3MH57ZBkXyflfRM1iqRSyIkxnoSw8Yg7yzx+xLDBdGFf\\n\",\n       \"a9QxmVyW7o08yHbtXDrWaXDpK4XFjTOt+dTscaZFTWsvGN3r612vedTtCMXEC0lNb9Ax96itNr4j\\n\",\n       \"fy5Qb6jFYFYmJHuOLIlORXVVw3Z0nhHxIe3QMIwnhmFfzbZpWHfqhrpg5kCpo7AsWjb1wH19waY5\\n\",\n       \"DdGh6AVDIwtqlnUx9UpIHceWGApJWchjYpQmogX1ZMu4W3iueey4+cCDa4/beHHk9LEzw0sHprWr\\n\",\n       \"ws3n5TdbjIfO2m9Y3KDzauUyOVhm6zYH/8PXJXT//PnX7xshhEsb/NvrrAXsV6Zm//S3+5lz2e7f\\n\",\n       \"CSEsXOC/+mEe7NH9Iv/h61pLifrkSG3a9O6Hd2xdbbv5pTdMP3qo92SqvTRVzK6YrD7BzUNar2lf\\n\",\n       \"+VWNH+4b15veKCamnejxAQvZvFmja7g6cJoObGyUFvq00lQjG9haiDY/GT37af7Vj+eOPnBNXizp\\n\",\n       \"JMt6aTB1G3XBnMLEgb6BVOLMjhVRS4zHgmAcUicmji0YS52Z01cKxiprvMzQ1K556exIMd0TFabp\\n\",\n       \"qkQmKW8qww6hipsvwpDymrQ8cmbglsJaONUKXffiVVEq0RXKa6I7PtMau65wKWs6KRI7zZZumZpr\\n\",\n       \"TgwmJ0aDQz6yavrKA2+sn7qal64MeHG58p179+epxXMF6NPv4tqHzz2BFplc52d+nic/zU+VMX7m\\n\",\n       \"t766v+P7JeAa808TC3qvxhi/So6NMd4LIfwvPHyaxpO4yHcWLD7gwja1AhKLN1NfXC+ELt2MNOFy\\n\",\n       
\"we0GT25Xib15nQcn7O525O+5pDbKDdu5+bBgsyCb7SrqI0t7t714sWZQdsziyKC2oJEMZeedirHS\\n\",\n       \"sZqU8w1GV7VbylQylQNR3czgXG9VU/eEugP75szMqcVcFk9NwiVB3x0jK46seGhXV4hHUhNFfuLi\\n\",\n       \"3syH+oQVZm0OWlz7RdYS2iu/4bNUhBD+Pp/8CC99JyGj/wLHv3zuZPyO4lvFyFvHv4VfitE3TOL4\\n\",\n       \"NuGj+BP+8BYjD+4Q3kva/Doy5S2WD/iVEEKD7N10Ir/wNFfrXclGU785cLB232Nnu5KQGoeeYiX3\\n\",\n       \"nj0uZby2fGwxnzhNW/rXo8nZ2LhW03tpyRdajwuvXhLvHll7X8v8UWIhPnBvI+pv1qyUQ7N600xd\\n\",\n       \"z5NyZyrJSWXNTSbXMDS1HOY0q0eRh1ITK6YKmZGpK6pbV9S0oPS0hp4FUdOaqaaegR2Jvr5oXRmb\\n\",\n       \"xDWl0p6RScKC1OZ5M7+Yz7z43pln9l63nx2Yteqahz1LecNW9ym2+6RHzB+aHadOj+pevzi19DmO\\n\",\n       \"v8T2T8cY3zbCcwihe5X/4AeZXT1XW2wx94v8udu/0w9XmJ4P9Yt1hgu6Jx3rk450ODZuFxp54rHB\\n\",\n       \"Q/sXOfmZ/GuE5vpt/mKD6w9ceHriQrfp4LFrWrVS3hg5zLdksbAR5yV5tFW/qjaZOBwfGnXPLGRl\\n\",\n       \"lTb0Kb7vlzlu0+4sOkgvyWs1pVSwItUX7TiybNFIqalyFkmMLMjtG4ZVLzvSEi2b03bZxIK+fT11\\n\",\n       \"c7qqFNfcUWw7LHsahuYnp5Yfrtu9nMtrjytNJMmuMj1POA5diXWdsiXJ9s2VqXFo200WZXFoKadd\\n\",\n       \"ZPK0ZZxu2ol3PGrPKUYd2XhZM2vJ4kPF8pHO6LuNPrZL+ByfzfX/Gtt/iXurrO3zkd3KTXWny8nR\\n\",\n       \"PMuXfhPS6nVOX38LpNWqEJn7EZ78Lp6cVPuQN74/hLmPx9j7hfPXXGLhPcT3ceE662PiBcxx85jr\\n\",\n       \"nyKbpW5fzayNg6fLme5SkBfRoxqNku1FRmcc1Hj0OqP/uu3g5zPza2faUutlMFcm0mHTtBjqd8eW\\n\",\n       \"MnbHbVOZg1rdZpwzCjOkBuVYL4lqMndsGltSnYigItQ8UvmMPMC7BBdE9/VdU2jIykO1kAjhaUl4\\n\",\n       \"qPBufQcmrmiWN43jfe8uZjaKyB6d2/Se4/GS6ZjiEgffx6O7HP/S153PBZWA5zjG3i+qol7Cm7EK\\n\",\n       \"3wz4VjHy1vFD3uKO8Q8IH8Xfxn/xDq/j94UY41EnhH/xsSqTZtAgv8f8y9yZ8Qar/wkb765pxZlJ\\n\",\n       \"WPXis1cVb0wVk4H3HbdtnD0ymztWHNHsVeS1hRHr9+hfHloZDc3iilMrHm5tmt59pPs9PcXzJ0a9\\n\",\n       \"RB4TIU7laaa9m9ip9xwuzMvixDCsOI3LYkjObbeDoM+57fZMLouJWmgaWVdoqUtMkLsiWhCciE6x\\n\",\n       \"pKajZWwqV9c2taHuvpmr9u2qyTV1tZJjZ/GKcQhaYWxd1HIkDXXvjdHDkDvsFpbG+67c4rnbPLh8\\n\",\n       \"0Ue/LdX77OtsJsyu8eDdpuN/Yjvbtf0PVZ2mt7WtXuOZZ2he9TUn4Iv0nmXhd1OMxBhHmyE8vMvS\\n\",\n       
\"InlN83gsS6dmcz1Znkqax2I2MPfg6wPCY4zTEMLfOtL+y6lrjQVbTzylMWtLpmNhfuQ01OyFAyfl\\n\",\n       \"nGbZ0Z61dGYrRtuHhvXP214YunbEhz5J3uQLP1qz/dy6cdaVxYZQZtJiYpY2TDQNFZbUzkW+ZyYW\\n\",\n       \"5dblenLXtK1q2JNq6OgK2o4tuevYgkwzTg3DRafjq8rTLxlNF7iZGC9Hg6wlL5Y0JntqnjKrL+C2\\n\",\n       \"WTaRJrTSOY14Ig0dawrj0LQfcu2SMmWxzOwXDa24qGXBaj42S46VjSFJppYHzXxg89lFBwdrZpe2\\n\",\n       \"YtytRmXbP1mlHQ/r3F/hi3V2//a00tX+G8+RfhXIdvIbj/8e8BhPfYQfvlvZx8O7En7m+ys33vo6\\n\",\n       \"7/5TlSB258M8mzM4Zu9eZfve3OSl59g/od9IfOeE1bSmeVpKamPTdhQTFhusjpgbUjvgle8ptI6D\\n\",\n       \"Ri1otCuufJ6VynZuWnTMQiJMT4R8qta+6qw4FJO2thOHYaadBAvnrLAqY6qtKkKiyszwTPUB6+Ip\\n\",\n       \"dWUVz6ApKevmtJTl2DRpq8WmJBwpzGsWQShXxOS2SYxaA5ZfZ7iMEbvNiq/SHDBKuXmZ2cvVJm3p\\n\",\n       \"R3nqvRVP5lAIc5+g/8u/3Yj0ncC3ipG3gBAkqs7If/NOr+V3gc9jPQRXY3T/d3z1NyEGMX4ihHD/\\n\",\n       \"Pt9Wo73Pl0teof7jK2o/ds1o2DYuHzndWFdPFxWdR9KiZalfk+RVFkS8xnRSRYCPmjy7zQuBTz+T\\n\",\n       \"VMZXLydieeB9l4LW4cDowquOP7Jsa3fd2cIjZe3M2v3g4OJFu+k1WU633JPU9sWwRMyFUBk5l2VO\\n\",\n       \"aMrCyCQErZhXDbSwLrNjZlmhRszEMBbPJZxBoVAIcvtWECQyqUVR09RDs9DQsyqGuwgazvT0ZGZu\\n\",\n       \"yGShsBh4o8G77rB/Q5XyNR7Ke4+z3/W1dOlf5Wqfoyf5jy/glLBW5cL847ejhdtieamykv91WPxa\\n\",\n       \"BOrviB1+7pP8hefJZsZhJt+9o1zY9/hWaXE319zjpSnq55u+KcQYhyFcHuYWbl+wdW1FOpwpNrpC\\n\",\n       \"UlNLMqedseZ4wcI46nSjXiAkmVqamLvDcMAnnqfzJMfPN60npU4spWHgLK2UK0nMDMJUDTNzEn0j\\n\",\n       \"NVuahoYyF6SONWNXQzAK1GPNQjgx1XFi1YFexaQsNpk0Kw/z5vuNJsFo+3OsP2J1lUmqrPflWRRD\\n\",\n       \"UzQiCbJYiEVl8V8F4o0kMr2MaZmYxro8H0uLtmk41W8sydI6WSGdnOllTdOzjov7p8qN3O7G+anf\\n\",\n       \"qnvjsw0P/mShsTyTvzgz+DsxxlshhMNX+KFrtNcq4o0jWl8hDN+Spfjis5WCP/26nXut5MaMm8+z\\n\",\n       \"8WF+eJt7K1yZVGPZnSVajxjd5eET3Gqx+zcKm88VwvuCZgxqI3qNoBWiemAa2Vmk8Smuzdj+z09d\\n\",\n       \"6B3bNW/+sO9ss7RUTo2bI0lZMykmjjV0mhn5RUl60Th+ySxp6aIhWlToG5liaKKyC+6oBluJKlNm\\n\",\n       \"iGOlXBbJpMowOw/drPyQJmEk1aDIjEIqDUOxbDrOJ3ZiYbkkafDcHh9frXxCwrCKEeoMqz+3+AN8\\n\",\n       
\"6H186EF1Licpv/KDfH7gm4xa8K1i5K3h23EQ4+8qQfIdxbnE92fx4/jf3un1/H5xPjr46vgghBAW\\n\",\n       \"pP/ee3R687pnMDGcXzWZG0vXMtPe0LRW15omxEQtlOqH3LrMkyW7XXobXC4XrN3kJD3UfGLNyutd\\n\",\n       \"te2+etlTv3Rq0r7vXnpirRulmyfm5rv2i4F6qIYxl8a37DefMAodSdyRuKMRpuYdmOqYWDEKI2MD\\n\",\n       \"mRG6gjMh7osuoke5SrFvVEvkcnNqxuo6DoykKoulDubPg7TejBR/aMXIvNyiKFE4Na1C2CKPn5A3\\n\",\n       \"qhv3q7We0d/6OT7yTDVknt6n+wLJe9j4AW7XKhsEX+LaJ/lT+D/e6nXrs7VD4+nfcHy3qgp/V4gx\\n\",\n       \"Pggh/PVDPpQYLQ3F9bHv/SRPHNGvBz/93jXbg2X+8ohiPoRP9/ilKjtldpw5ebIQ6yPFfFeYlGKj\\n\",\n       \"MGsuSoqbJs1o2o/6s6A4HEj6O+JZ3zMf5yzlV065+u7g8UFHmvTtpxPjtNR12yyQnvex+rHwWpiq\\n\",\n       \"mxetGZua19I2cWqgG5Jz1UzUNjTWsmzq1KJojaJk/3HSIe0ms4LmWmXrvfNpVj5v1loUi6EYnhaK\\n\",\n       \"BfVyT9EYG9jVSGtaojxUOh6edKwrT0YOsx2Nsqd71jaYW3OvCBbMdELdSZizE4L6EE7U0iPtcQgh\\n\",\n       \"WeEn3sfTzxq/UDMu3mD9s/xYCOFvxhhPshD+7k/xp69USXIeMXnI33/Tpv/tR7nG1YTOrHJHfVPi\\n\",\n       \"u9jn+Aprn2F8n9dP8HIpvDAxeCbTW2BpmBsUpSRUPODLPRq32LrB1gdZbRY292/pNxcdLKfm81Ox\\n\",\n       \"MbE+Kw3znn69rT4eWa+npum2wzRKnbliQUfHmSO3lYKJkUVVebqvIqtOcF9VjHTUbYk21OK8hkP9\\n\",\n       \"MG9WFjplXSO5bS/kYlxDUEzvEYaS8VX93T23iiOzBbIF7rbobnHlszz+SqUIun0ZddbexwcffK2o\\n\",\n       \"axR8cIt73x9C+Mz/b8Y0IYT/Ge/HF74+wTeE8N+rxhvw38UYvxlt1H83+CF87J1exO8B/xR/2R/i\\n\",\n       \"YuQ3wfqystVVP33zQF3rbGCwWposbxj969c8uJHrjPpiWmrfobnCG484Cxw/k7jSrJn1Clk2opOZ\\n\",\n       \"a48N3jcRp31pvW+pXxI47vH0NtlGU9mgeWkmNtuaeUeaHlsfftGDViYJhdWyZkNdluyZhcKpXf1I\\n\",\n       \"M7YkycjUTG5ZEIXyFqEmK09N02P14lCSLmpKz42tRkqXjPRUBklHKkeEkSSuacVbeknTikVTJ3rl\\n\",\n       \"yCyZ6c1YOeDBJvdz9loc/p95jC+EEL70gOebzB3y+gY/+kGGbxYi8B62XudGCGHlrT5YSl57lf0l\\n\",\n       \"Lj7LTiC+ysaXq3CQ3zVijHv42RDCxzj8Qf7Vh/nSZXbmbzho/DHx5TWGI7LP8b1foBVC+OkmrZrm\\n\",\n       \"tZEwV5rVBrJaYjZpOHUkr+Wy6SMPm+vauy1rr2+Z1u5Z22H9hC92WD5kLomuHUc7WbDafs3dpFCm\\n\",\n       \"dU/GmWkciGHRWPTAsrELEj2JVEMpONDQ0IlDeXliOVnU9dCsXHISGxJHijCgeIz+gLXzJ23zlGfH\\n\",\n       
\"QnYkac0L6a4QMrHWlRQvCRoYivkt0yRRCx2nBurl0Cxc1HFWGZ8libNkZpI1ZK0oT54yGU8chZ5m\\n\",\n       \"SKnNi7NHyrVbTg7uaJ8WBq8zfOw67/ruysEcfDsPx1w94Hn8Wl51SP7qw0pKAo/e7Er9/nHyFd74\\n\",\n       \"bm6Erz1IZwlv1BjfJr9RHbt6yK8W9GvVKOreJrt/lHyB9j3m/iIHt6O1L8/svXtmWqMfGc5oHFWd\\n\",\n       \"hF6gt8b1QL7Heiw1R0denCdPEjtF0+3QkMZoflRqJ/dl+YZaPjCo78uTRY1zPVTbnGAkcWrNffc8\\n\",\n       \"pSK5vaZyXK2jLvW0zK0qRjN0DOOOvJio95ZlcSa2T6yWhX79TD+ta+ZD2XRF55V1p1tPOX3jl+0U\\n\",\n       \"Pc2U927wHS/ROTdwe3GT4xeRMFdWtvZfj8UJtXXVtOgtXqe3D9+wYiSE8O3oxBi/N4Tw10MIH4gx\\n\",\n       \"vhmq9HdijH/lnFTz0745M11+N/jj+B/f6UX8HvAL+Lsh2IzxG21Y9QeGpGGyNdXbbFo5gIlYe117\\n\",\n       \"kW63pXh35tFox9nVExf2ybaqbsj1l6mFlul7VhWjttatodPLAwePz1sd1SRZphGa0mQqLSaKza76\\n\",\n       \"u5e8cTwy6y5K03krowf26l2H9abGdIgd7btNnY01l/Oo1hzJ41RWJFbKM+O0o5UemUiNzavZEcNU\\n\",\n       \"LSl1ponuzraTuX3zLRbTLXlkWF8wDs9IjM8lwyeqgqRFXFLGoamGqYGmM4sGYpiKZVQeVjkdX9Dy\\n\",\n       \"2l7T+HPHDM6Ja9vOTctC5a39pxf9+pDHc4fIqIrQeEvFyJvcjY/zA1/gfaq0speO+EX81d/H78vx\\n\",\n       \"sRDCx9G5wE98L4/eHBW0yL+L+/d5/xmLV/lTPByf6NZeJ2wosmCStuXbW8zeY/rxT/Hysf5SKf3A\\n\",\n       \"yNUtWod8tMmDv8v8uyjGZPUzncGqWTGz0hmbqw8UoZDkhSxONWrBONz30JlSkEkcG0uMNZR27OvE\\n\",\n       \"oevj3DTL1OOuIk/VQleSXZblW9JmbpSUitoSkWy+FOtL2qEvNbNa3BRD5iReMhlPTWXyWGp0BrrJ\\n\",\n       \"lrmkNIvX9S1Ii7qNWU2RBdNk3qz4gklzQWFH2c61pxPprNAZJTrTI2eNY5u3K0uZ1z/XZPPy1xHG\\n\",\n       \"qzAEtQ16S5UH+q+dX48Zb6vD811e/1XKcwLruMPtJnd/Dp/l/vdXkferQ65+lk9/B4MnWBszzire\\n\",\n       \"yNwV6n+c/kPGN3llxsqYUYJBTb07Z1LOnD02tNgtDCaJYjF1eymnFq3KHMZFY8smZU0zLeXF0Dg+\\n\",\n       \"shxed7UsdXTcNJGZKbSkOgqJwpHSmWpWtaDijSziSBCk2rJ4RWJPVuwpen3T3dLceNvaQqo2Cqbt\\n\",\n       \"3Gg0FWpNyeFl5f11h69cM3ttn2ZL3OoZ/RPe+AnGT7Besp/w4C5HP4eco5JhVpGO38T2HOO9t14w\\n\",\n       \"vr34RnZGvkPlb0F1w/lOfA5ijHfPj0/9JnLNPwwIwSLe65vTAv43RYwmIfjn+JP46+/0et4m7J3x\\n\",\n       \"erBTH5uuHMva2xYvZy4N9rVe7mkclrYWpl79GC//YxWR/Zij5+aV/1HX0sGC1q1MbTIxuPCYwp69\\n\",\n       
\"hWg5LciDZNB0tNF1L33KfL+jGQvbyVBsbVHPvHd4pl/bN01OjRq5s2Ri42ii2cmIqVJTYlN9cCif\\n\",\n       \"WzdWs1GOLXroLI61kprh9HGLt1e1vpSI1/atXOJoI3cjb1nJBh6mrzi0KlgRPIOomg3ex5oYn9Yq\\n\",\n       \"Smde1otTTWOtKRcGNR+9ftX28VPyew0++JD3v8GnQwg//WaLNsYYN0K4fZ/1x6qmC9WJyvaq58/B\\n\",\n       \"23GhYoxn+GchhJ85/76AtxKYFmMcY3w1hAsXf0PAV0pM2bzBD71HNjenMTvS7z9ULuC0oLzHyfP8\\n\",\n       \"/Da1Ab8SY/5KCOEKR+9T3Z9+NcbYCyEMOHyNWx8Zeaa356AMYqNUTwuTaeGZT6/YfiI1XTxSLjQM\\n\",\n       \"nKonTM2bmNO2rhtHSkNH5ci9/5e9Nw+y6zzP/H7vWe6+9b6hF+wgQYAkQFIkJUqiFkqivMkaxXbZ\\n\",\n       \"HieWx7NUMqlJTSqVeCrl5I+ZZOIlY3ts2bFleyxbXiRZ1kJZIsV9AwmCAAgCaHQD6L373tt3387+\\n\",\n       \"5Y9zQTRBUCRINAhy+FQ1gNs499zvft9Z3vO+z/s8tok0chTyCWLNIqlJk75YkYaXx2m5OL05ymoU\\n\",\n       \"mwF8PUpUCihVQ/wIA16KjtkmIsJZT3DiO5FAo9OZoZPUSaswkM6ogEAM2hEfnyhOK4VLFAwDGk1i\\n\",\n       \"iSSmM4YEOp6Wp5J2SbZhdTuUfg+YdSDZDBtBmA4dFfcJpBoQ64QdGcZmmCUqpZSIfBdeXIH5X4Z0\\n\",\n       \"T0gT6z0QVhaX/hoe+AXY3QdBFBZqYDdChdXOHfDRJqRy4FrwwiSs2nD7X8JTe6Cd7Cc1NYHM1anH\\n\",\n       \"KjhjOnGnjZc0GQl8LIljSZSkZJhmDBVY6NowflSjZS7gSgdXisRjPZRkHAuXChESWNiUcMiik6bE\\n\",\n       \"VsKU0jxhSTUDjJB2NeKtaVpxG7QAt7aCOeOjuduwshb5IMDw4zh5aFlVgvy9tB7IgC2EJ+SZ0Bj0\\n\",\n       \"O10tnj+D9THCWKcGLF04t0VSD8Pj98Pta9DXgeUMPNsL+T+/2mv2drGZwUiOkDAH4QTtvcw2vw58\\n\",\n       \"aRPHsJn4NPCYUrTfcMvrC18D/kfeI8HIBfGjOG58G/lSnsxtAVPFGuk1jxsf9Uh1YCIBqzdCawiY\\n\",\n       \"hsRtkPlwkur2fprrTUqZBH0dk/ZQH9FWh1K0RhPFOB5WwmRe30pQj2IEHsqIIVqGwF/GNGNkXJOG\\n\",\n       \"1ktNHyIILCLZFRoJi90B9CqDlmdSijSp5aJoJBgMAiJKCCQgJwl2Bg6ndZsg5xFEW2h1GJyFlc/B\\n\",\n       \"rAm+9OEHHTwZRWQYXdXxpEZ4YcuCpuMGa4iqE3UzDFsZXP00tuNxLjdERY2Q9upEP6ERWH005y3a\\n\",\n       \"PTU4zoYn2Tw89Az8agAyDpUSJI7AYAEeUEp1rvaaXc39AbiwtgbpUS5aMlih8tXeKaLpJKmOgeEM\\n\",\n       \"kpyPUBufxY5nYXYADpfBOB2q+M6kRO7ZCfdNhPvkPBwUkYeAKqz+QWisuLbfJqtBoQXKhrufAtc0\\n\",\n       \"EbuNE0/TYJyOxGmxjoNPBg0Tl14xcYNhGnodL27isoWIVAkGk0xaHbxInETVxKykaE5UMDSfVS+D\\n\",\n       
\"MpqIWsVXJaJK0TI9TNXCMsroqXtAFlH+MiORKAdtF990Eb1GQxvE9H0KkqbjZHHN5TCwaMVgsUhw\\n\",\n       \"015oCKI7+I00WjCB3zxJs6BU/bsAIjJzBpwoTLXhltugmYDaDOhJGDkBKLvQvgAAIABJREFU9xNm\\n\",\n       \"tzcDJox+HD6xDNuOh78qx+HBn4dTvw/nfgtWfwFG74QtbbAcqN4OdzphY70Tg7gON+rQuAFeGIDk\\n\",\n       \"ssbayggqkWRYz2K0M7iLc9T2xZkMPPxYlKYxSDLoUNL6CGQbIjU8qeDLMEnZgsE6AREaMoalejFU\\n\",\n       \"lSVlgT6AgY9PC4cYLtuAOBoG4s3i6wkQHUcsfL0GTpSeRRdVqNOyYGzbLOmBJIGfxG97JIxltEyL\\n\",\n       \"0jcewB/cFcr7B+fBPAuHg9BjD+jaFsFsNzjfgNaTcKQNC/dCZACsFSh8Syn/zCat2VvGZgYjF66X\\n\",\n       \"EGapXtXmJSKfA3qUUn/9ejsQkV/f8PJRpdSjV3mMbwc/weadhJuJ7wNfFmFcqYvumu9mKKXOich/\\n\",\n       \"WoObbWJjPrvOKIZWIObA0Uko3wx7MrD7CzA3AZkKfOhYwPea/fjSZnq4Str0acfnaGXj6AxidxKU\\n\",\n       \"3Da+4WLrPWSwcQ2XckLQdQOlDHzlUNAy5PUdRDolTMOFuNDUl1mKWiQ8j5Rfpe07zETG6KgOSjlY\\n\",\n       \"rKOpDoMqg4ciSplSZpXOrSVip0HqkDxkMBDozH4wTWBqGIZLRGp06OCpHmAI7DaYFjgGAW2CToyy\\n\",\n       \"7xJRBr1NUMkcEvUZTsdIrfq4UYvKaJKVdJLGB9gQjCilFkTkD8vw0Shs86FchL/yfkRHhC6ydwju\\n\",\n       \"NWDQgcV8SBY9ey3WXES0CNzaBx/UINWE+hMw/slQ8a5jg/4UbFdomkl/vk27LwIOCDmyS4rqyAxO\\n\",\n       \"x4JzVXjSDR2fJ3bB/Z+ChRj4HYjm4IMx+OQQPF2GYJ71h1rUT5nIwQi23cbepyinoa/WpNmXoqb3\\n\",\n       \"se7vISYayBSBPk2bFg4RdN/ECEz6/SjrZgstvkgm4eGkXYi00YvrJGci1O9JYmoaORWn6pVQmqB0\\n\",\n       \"F03ZDJkmPYGFplxGqNFvHOJcYFDHZcRPkbQUHdWmHq0SkRWq9NPwxml7DrqxCJIiW69iD6UIrCaJ\\n\",\n       \"uoOXdInlk2QXB2jlMpTcC/OslGqJyH+x4TfvhkgNEisQ5OC53bBchNtF5DGl1BVxfy6uY+xOyIxB\\n\",\n       \"swCd45fsZzvsyMK2Dd1/vR3Yb8PabVA5BXvH4VNPh/SaB/vA3wGpCLg6aBEwXDA0GA5AVxrTe016\\n\",\n       \"Dgi4Po2KQo9GaGlC04L+pCBGFF8c1iXCKr0YgKMG0VjFkwaWeJjKwZc460EEXTQynk/BnCSKRgyP\\n\",\n       \"sKfGxeVlIAZBCnnFGM/EUXFiegqjtETLz+P2wmgTdsc77Fy0aEYqLCZ8qm04WICHt0xT/LV52KND\\n\",\n       \"tAVnCZOiukj2M7Dtru4tNhDJPA6NRy+07XYzJC8AL4iIvhkPAlcLmxmMPAP8c+DvgI8Df3rhP0Rk\\n\",\n       \"P/CvgM/+qB0opX59E8f3liGCSZgZeddpdihFR4S/Bf4p7y6+y49E1yflEZGRPaCnwkBkJQ3VW+Cu\\n\",\n       
\"WpgszZZgaCfkB8E26wwvtMhvGSBWqbK4awz8gESQQ/NrOJQpm5BWVaL+ICQMOvYk9dUq5tA6Qdwi\\n\",\n       \"plLUmv1o6TJG1EbTdGytF1OzKKlVzokCw6cidTSngyERAqXolQjjrkJpa8ybAY7WYlwsegwXScOL\\n\",\n       \"t8COYy771xT1XcvEhjNYbkBL66DrDh6j+E4E7Ar4W2Clg5ZrEdhVMh2XtG2R8nS8KHg6pHQfFdWJ\\n\",\n       \"2B6JhE/a9HB2XGYOF4G/eDPzHRO5/Vb4/G1QHITlZeh5Dr6oi/y5r9T01V3d1yIDn74JPnwzFNJQ\\n\",\n       \"nYP+p8D8G+jtA+lAsA6PxzBvMkg1q7gxhZNMoLk2gbaOrtbgzx2lvnxhnwMiB28MzVd8gAW4eRwS\\n\",\n       \"EWgpcHsRrcDwv4Rhs5/UskZVb3K2UueZbTaZqsdKsodGsIuYiqArH6ULQhbRTDzl0rJN4lqbpvJJ\\n\",\n       \"dpKk2yv0LoRmgH064IIz4THhJGhGdZoSoS+iUdVT2Jwgppv0+gGG5jOOju0liTsdIlGPWfFoqTpB\\n\",\n       \"TOjDIIZGX2OVs7pFXhM0IuATEmrtEobbTzUSug6nG1ESxQDf9CnHTOxXGaYppebHRU4Nh3wcbQQq\\n\",\n       \"kdDpnn4IzkAPV0hEvoh7PwuD7fCB4cS9IvJlpVS33CbJMMF+KbItiPVD/22wzQPfANOBsRNw7I7Q\\n\",\n       \"OycjoAO1KCRcqBspIr0G0XiUyRbkzSRmwkHsOP1zgywES8xGA6IGtFQPhvRjKIUvDp6ApmxMFTCo\\n\",\n       \"WgSBhy6CrVxsimhajAQx+gOFEpcAiCqFyBotNYHpTuPrY0RtH709SLtp0NGaaKZPkKmTrcPuGKRS\\n\",\n       \"4OqKVMPn9jU4nIBYAEamSyB/lX+MSObjcPAeuHMx7JKxdHjyk/CiKyIvEq5L/UKAdz0HIrCJwYhS\\n\",\n       \"6kURsUTkceBFpdRhEfkdpdS/Bv4jMAh8X0RqSqmf2qxxbBLuAWaVYuWdHshbxJ8CXxHh3yv17uTs\\n\",\n       \"vD4KD8GhXwbDh6URGCNs4ojMhJLPvUF44i6Ndtg38yKr+6KoPTvIpNoEotCJoKkkeIvUzVV0T2EG\\n\",\n       \"FYqxbTitDtGEh+5Xcf0KtuvQMNIo08TDwA8UzUBjTOsjRomEyuApDQyDut7Gtso0/CgTcUVgeKyL\\n\",\n       \"ohEkiJd1MusdlA790zBhwuJ+2LLLIxeBqWaNsjnPtDlJYGuY0Q6+4YLtgFODhIYrHhJvoNwlsKE5\\n\",\n       \"4OPHS3T0BI5tYEQAO0CjgGg2ibdcXhQRcxzu+ygspbts/AmoRsArw6dF5MxmtgyKSM9u+ODH4PwF\\n\",\n       \"i4AbYS0A+QEcPQmPAR2llB2V6P5F6vdtZeBcnU6mSju3jBtfJnXEp/PAxv0akIp3v08rTH2P9EO5\\n\",\n       \"Gqr8xk8wcLtixyBEzQZmkCJRjOJHB5mXBn4PBG6GwPRpGS2iARiBYOLRIUrEh1QQIWovoOmK3R3F\\n\",\n       \"Ga2XelzHbbd4alebrRkYMcAxalRUP3XPJaWqOPoKHk00pdHEo0dptMTEdIU4PgmxyBk+Hc/EwyMi\\n\",\n       \"Pp4fJ9qwMdMGKp9kdL6O1buMN7ZIKqpj2ooWo6z5DokgihER7FYHe7Zykev3CnxY8yC2sQwWAJXw\\n\",\n       
\"jv823Mrv7GZntwFDWXjgcyLye+Hxo9ZhRUIFhY1YzUK1CrnPgZWD8zYYS7DrJZj7IRz/8ZBbm7VA\\n\",\n       \"i8O8rtGQASa0MmU/xUC7QTuZoho3MWM+yhT0aEBe6yeJhhVEiOk6MVH4dAjkPIo8WVUBcQg0kz2+\\n\",\n       \"T6Ct86IyqGnDpDFQyiVQgrhp9HaFdKqNqz1Hr5OhrXaRWIkQ+A7tdhUjV8dIu6CHBYTt87A2Br4O\\n\",\n       \"9QFCHRofluIQvCa4F5EYTN0NH+gGIhDK2t+2Ame+CMNL0B9AVRPpfREq31FdVeLrFZva2ruxnbf7\\n\",\n       \"+l93//70Zn7uNcC7tURzAc8R1hg/BDzxDo/lqkIpf0ZE/zNY/wxoo7AnCuMvwtQsFIbCTn8zgMCA\\n\",\n       \"0/scDnoRXo6A4XgYArVIg5aZI+b1IK0qIwsaxb4ijlUlmU6RxsKsOeTmhNJEBT3ej66lSQXQ0nRM\\n\",\n       \"peMFVTp6Gl+2YEsPcT/AIIZtPc+SX6GhHKKmRpMEPflRdh730VWLSN1j7CjUbwoFnBabEG23SAQJ\\n\",\n       \"RNUYdM7T0pK0Iz4S7EaVcmBUoaeF4hw2yzgG9CYh50C2lOdsT8BSZpiYpRH18jQTBbKnofLC25jm\\n\",\n       \"XC9E05e0BQ6HfIJxQt/1q8ozuQRDY+ECvirgGYdyGnbVlPqHC79zcP6/M9g7i9S2pIl0msTrZTIL\\n\",\n       \"PoVvKaVeZWtfgVOL8GNboOaCaYSGKDINqVUStxXp2ZskjkGvo5O265QnU6RjI2hrfZzrzJKMNkkl\\n\",\n       \"+lBuhKivcDQLRxUJRBFQpRVYKCwSvmI6PUqlM0Zvw8UebxFLLrGUWA9Fy7QAnBoDHY/AtIloPgkr\\n\",\n       \"gqV8PDeKpGPoQQfD9tHEA83BNAQdn5LvUtJ8EuUmxWSaTkWIN22izWmC/joHzuuMmgaYLYzgBaaT\\n\",\n       \"22if8CBdhdw66FnYKyILXcIxAHl45DD84kfAyYLtgbwAW/Jw4urpiYzXoHccVnsJu7fm4fwMPLMD\\n\",\n       \"9q9C1IPZQTgch6m9sGUBIgnYXYHiFpiLwCefhT/PQmEYBsfA0UF3UwxrEVAarZiJ+DDoNxC/RtN0\\n\",\n       \"8TQN3zuIVUri9gcYqkFVFfCZAnsa3VxiQDR2+TWU4dIWYUUXskGDKDq27yJ6BN+Nojsm4rUJoiXQ\\n\",\n       \"6gwWFfcVFCczJc73D1ApW6TiJn3tGDp1iMGgEZaTcwHYSUhpUB2B6Q5wFEqXk49IQlrCAGQjzkzA\\n\",\n       \"1A645wjE3TALfPLH4Fi/iPzx9ZwdeV/07AohghAGI++2bM4rUAolwpeBX+Y9FowAKOVPi8gZ4AYY\\n\",\n       \"/UWYOB82qvYXYcaCUk8oCe9tg2i/R0oEo2aQ8DyMHhs/WsSJlXE7TaqiUVgZIZkPuMXoEGg+vqEj\\n\",\n       \"gY/0BAx4K6x7k6AlyEYcNL1D1a+QZhBLBkn4CsdXuE6DwIoxnoaBehytY9CpWzRTDoG2RjVnIxFY\\n\",\n       \"v19YvTlL3R7mjNvE7y9TtJpMlKMk4m22uj6LVVjN+ej9A4ihg6yj3En8H0Dj7mWGZmCqBMu7AsZj\\n\",\n       \"eXKqyLINvZ2A5BwsvgjtF9/GFLebgBc6bb0SELTAtMMAZbNbBjsXOjw2ohFemV/FTVOhnPn/UoD7\\n\",\n       
\"C/h7gRrkH4P205e+34ZjR+EOHcYnoboO+gtEbyoyjk1/DIaUhZ1QrKko6SBCQneImQFoNqKlmCqv\\n\",\n       \"4iYCKvERHKXQnQKa5qJVFokbMGGaDPk+7SiUomt4fht7Vwo9nWOyNUo9ahHxbPboARXNI2+4BH6S\\n\",\n       \"ljZEy/WxKvPkYz4DukOv38HRAywvIFbXUSaoiE+86RNfB6/pMNiu0/YszqdXaCY77KrA3Us+AT7z\\n\",\n       \"YzBh58kGFsdu3ELQHMKa/RTeY+dgz1MwJCJ/0G3ZxVfqZEzk60X4VC9Em0AJjlRDm4mrCAVdrZtu\\n\",\n       \"R81X4al74OW7Qg5IdRn8FNyowZ55eHEoNIkbaUFzEh5sQvOPofE0VH8Vpm6Csa0+9S0Wq4lRrHKb\\n\",\n       \"fCJJVO+np9ogG3MotvbjOlPQdPBjffhOAJGTkFoGq8aI5bOdBlFN4Sdhp4K2Fjr+9otPf2WdZipK\\n\",\n       \"x5zEQ0eCKkryBJ5LrKhRDRxSahndj6Bi/fRaAYa/SCdWZvs6HJiHB++B0XWI29BOhCZ+TgDOGAzf\\n\",\n       \"J5KMQ/uJDcFEHWoONCOQ6p5vnsD6jTC2DsqAM/dAOhHKwVR+AfKRbhnsqlo9XC28H4xcOfYS3tne\\n\",\n       \"htTxdYE/B86I8G+VensaEtcjuhey0zB7FH54C+yqhPevhSWYbUFPGvpTsN5rsb2wzkxslERgkCkF\\n\",\n       \"rCctRApUzrUp/q86tMr0/JZBW3dIZKAfh2LCQe+FiWqD8eYZShJnqR/i8Q5tP0m22Us+oqP5AW7d\\n\",\n       \"oe2uMJyOMOyZaH6Z3JE4btpBHzzH0mDAkAW3tMFK9SDxKCk9Qll24Fcq1DMzrHttts5FmU0EKFUk\\n\",\n       \"3bHwhwRpa8SsJIlWjErfJOZKjeMDTRIWqGLYwerrAcUMrJ2GzkNQOweZO0TMMninlVJXVLJRSrV6\\n\",\n       \"RY4choN3wKJGGJg8D2MVePAaPH0tLkBxFvp3dNuObdCPQV8BvnuZ8a4BX34jY7Bum+SfPAq35eDm\\n\",\n       \"KpRcBm5PEMehHYOy2OxyNZZ1j0rMJK65tLSAprJItGP0pGL0nj/Pwtg5amYE24eBRpvWvE88A/1D\\n\",\n       \"DpJUxMw4O1yNuu9SyNTQIj5lv594I0ot4WN6PlHRyOuj1LQe6vY4RnmU4OxRir3Pc3xXk7rhMroG\\n\",\n       \"yVVoOR7pdVjZCv2PAInQGmZk0eXBKZf2X0DwMZCfgfkesAQaCdi5pDOdmyIen2LkZYty3xpL2/YT\\n\",\n       \"TBdgMg/budixgRWW2o8thkSO9tW/qZ3rg9KSUqqyYU1swhbiRyD7UzB1ALK3Q8qGI9tg8nlopOH4\\n\",\n       \"GOTrcOoB4Pvd8/+34MRuWPppB//+FMHoBJpV4GTQINMzRzpTx+pkKDZ34a/GQZ2DXaNQjoE9Dtkj\\n\",\n       \"6HGfvqjge4qWD/ucMHMRJKAHoaed5pyt04l5ZIM5PEPDS7RRbh/eyxpqdZ3MMWjerrNNXqaxN0cQ\\n\",\n       \"sfCjRaae8xlNgB5AfxPOCDgJaMch6sKBRfBbocLwc/eF4nvh8a2UckWSD8MTPw4fWAuJvasZaOSg\\n\",\n       \"72FYuQWGDfAsKKUgKrB9D3gfA759ddft6uD9YOTK8RPAt97tXAulKIrwLeCLhBye9xyUUoGIfA2e\\n\",\n       
\"Pg2nPgeRneCWwf8OzMyB+/PQE4Hh5BKa1eJsNktHdwmcMrrTwvsPSqnHRUQr467ppO+Is6MRCj8l\\n\",\n       \"GzCTC420xHbY0uOwuw0lDx5J+5S9Mp4aplN0qMUtsijGDJ+0GRBLgHGwQ2QOjIfhyEG4tQl2IsbZ\\n\",\n       \"qQx+J8qg0aEV6UWtJLGddabH2yz1tkhbilhHkTT7aZWmSNc7qEQnlAQJojTdOFapSexpSHSl4JsG\\n\",\n       \"nEjA6pdg6Jfgw3tgyIGKCafbIvKnl5Ys3ggV+N5zEJ2HfT0hWVTLwzPNa5Bp667rVx6Enz8F4wlg\\n\",\n       \"BVQe/jH0Knrd973hOdu9wT4mInMaE/9NhJtbQs5TOBJQMxXTmke/qrHcbyLozGtR1EtD+NYi9kAF\\n\",\n       \"a9JmZ0TR17LpAKcT8HIE9npw0yHFwgHoiyvaCQ1MjzUvw1CtzarYuLYQCTSa4lJOZ1ljiqDcIdKq\\n\",\n       \"0HajIBNQmaXaWKD1LDQsaDkwuACmCccK8PSHYUoLv80TNVj5i1BxOLkF6gqa62EAY+6DbDPK+ngC\\n\",\n       \"vQUQJVMtE98KrenB0BWw7zLz47LB7PDt49GJsBmrbMJ0C/LfvPx25q2w/za49zzMDsNQD0z6cPgg\\n\",\n       \"3Psg6Ofh6xPACxv0c1zgBHBCRL60BP9zC/0nkyQjLXxpUglSOGcmcactGMuFXVhzs9A7ACyDVAlW\\n\",\n       \"NSItH8MDbwxED+XW24Dl6Zimi2X246kpGlYBJ57Adj3sSJy+uEtgrJMoN9n7A4uTtwhmf4fdczbb\\n\",\n       \"j8PIKpzbC8u3QjMDcR9u0GBFwVYLIqlQ6j3mwT3zsHKniDxxsXzWfgaOurD8MYj2Q7sG/tOhOJ/0\\n\",\n       \"wmICvAHoVzAYgfZ2iP6kiHxvM7Rh3i7eD0auHD8J/Lt3ehBXCb8LfE2E31SK67aW+HYQ6pDkJmFv\\n\",\n       \"DG5+HpIOnN8Dh3ZD4XHIjkLsYMDucom0W+JEHIbKcGYBeLK7j0Bk6KxH31CDTBuiVmgdcnwYykZI\\n\",\n       \"ko3noZ2EJQXRNZ/l54u0x1dRO2Lk0lYoiJS20W2fPgVGJFSVbE+AvwqJQ2DrOkEqHRpaJVuY8RaN\\n\",\n       \"WATlx1FKGDobwa/GaN8eoJvLtPURooFDxIpjrrtEUgFtzSYXwFBXqMwXODICpW9Dz8fh7l64eUOr\\n\",\n       \"5Fgv/OPnReT3r4R02n1i/RsReZCwY7H6Vts73wqUUiUR+d18OPkxIK+UehtEykvR/9kIo4k42WqE\\n\",\n       \"REeINJvEt5jMRBTresB63SKyGKNeysFSFEsdIz9gsy2mmKiHRmU1DSQNxpbQ/6Q5FVJRelsOIgb1\\n\",\n       \"WBRNaeAqtESBmlXm5vUonajBGZWk6fnoxWGCp3yCxjTc7ENa8PMa5SDgprOQsWA5Dc8lYD9wxzdg\\n\",\n       \"qS8kQk4aUP4k3FqAXWtwZB30LVDrh5QObcOh6fvdYETQlAolikdmiBy0SEVFMiloPLt5a/v4X0Ny\\n\",\n       \"BDrr3Szd63CN+j8ANxXDpHTvDKx9GLa0oDcKMwNQicHKcaXU6wVKroMZrZPRmxiqQ2ZNZyBQuLFV\\n\",\n       \"Tgy26aRrEN0K367B+DzsW4LGMOrwCZb2d/jQ87DqgB6F4v6QezbVAjvpcy7aIXBjxLVJfHsYvb5A\\n\",\n       
\"LFUjOtygbUPaBsvwWFuGyl9B9SOQK4aZ2m0vwyM7YSWAD/gwUAyN+wYUNFOQLkBxAIYLYVAxkyM0\\n\",\n       \"/r0QXD8vIocJVeYdMG+AQ78KgznIZWGHC8s94XHSn4OlAUInietOgfv9YOQKIMIEYery0Xd4KFcF\\n\",\n       \"SnFYhFXgx4HXeSJ5d0NE+uGGO+Cj5y96XNy4Bv4oPOjAqd+GlV+Dl26AhA3RdZibg7U/AIZFcneA\\n\",\n       \"VYc+F9JPw8oeMCNhZ07qGMzeAg0dcgrqDdCXYGABNLdOJXICQ8ugmyZRs0JNBWyzY2TbHs2kzfpO\\n\",\n       \"WKuBNQ1aA7bUHAq7AuzhHFreo9SXptKyUXGLXG2AYt8eEmWT5Mkmxq4qRvoUpfQNTJ7QENehaq7j\\n\",\n       \"nKtTWoFvToSEuOUYrB4H6zgMfQZuvKQDbKoM/eOwdoE0eEXotlSX33DDTUD3Yrz0hhu+JZjbkiRL\\n\",\n       \"AbUhRVwUXiyN4zloOsxLls6qUG+uYv/hI+Cl4LYObgvWXLBN8JPQ9GHHacgPwWocxmIhrcY2AjLN\\n\",\n       \"FieiinjTpxWxqHo2QVExrXWoKig+WEPRj1coYOwuo39mG5qfwusU8CtDFMrH+BY2PStQnYWeD8Nt\\n\",\n       \"Tqiy2dedk3N9ULob9p2CbBtueRJO3AbOFNR6QXc8ZH4elUhjxyM4EYXNMfo+tkiuDp8swNrdcPgW\\n\",\n       \"EflDpVT1R07ZW4BS/gnC7MUbQItDtNsN0l8G/ylYuAmKGTg/ANYDUL+srYiIJEfhX42g37+FXN1E\\n\",\n       \"nHWs+CyFoMyYtk5PxqDzzSUYHAlbWAor8N1iqBi+CmsaHPqJUHT21ACkapAtgjMYsGbEyfktlHme\\n\",\n       \"FXMcx28iGTAbNRyziorDN3dAvQPrf6dUcEQkWYfSJ2FYh4YJlQCGnwRtFzgeaAGU9bAtP1cFKw1B\\n\",\n       \"Acoal+lc6p4HFzplTopofwDrfwa3JOBcBIba0NcMH5YmIrD6BeB33sJybSreD0auDJ8H/kEp3Dfc\\n\",\n       \"8t2D3wP+B96jwQgwCGPq1VbkAOMlSO9UqvZtEfkJKO6E7H7wa9CcgYFPwK5JGLOhZcCR7VAowu3f\\n\",\n       \"h1YS7CjENPjBGGTL4UXEdKDWhC3TsPKz8KGKRf9hi9k7DIaSARYaS4ZDM+OjYsISimwxfLh/+hPw\\n\",\n       \"8QdcRk6uUt7vUs5NUF0Zxc0/jbHTImJtQ2/3kGiAaircmE7f4BIrHKI0lsAWi9rZGvwRNF6Ao9sg\\n\",\n       \"+VkYGIedu6D4b8EfA235tVOkvfLH+7iAYMnBvbkPd7nKyriHmUuBC8v6FhZP3of6ewuMb8Dty0r9\\n\",\n       \"RshPSn8WDtRC0zYlsK8Vtpg/m4IgD6ezkFKwkADNUZjrLW44CS/mYP00xAcgmISJFuztq7BW/SEz\\n\",\n       \"X+ghot+E0UqiqxJu3MJv3U5zvoKTniH/LaVUXWT8M9BzSfainIEhBU4kfD3chMzTcKgHSgp2WpCJ\\n\",\n       \"5TmZbjE3mKHdUkRVg/HpJh88CgPt8D2MQe0DhIKJ7xBqx+DcPXCgG2gNFSH9BDw3DoX/+0cFSjHY\\n\",\n       \"vx2m4hgqjm4DDBNvt2gnwG2tECs48BdtqK3CEKHOTKHLO8mC2gIL3wC+A/l/AXv3gJeA80GAajj0\\n\",\n       
\"p3TQl1nDxRCfQG8Q15vkVn2smVD3ZLAF5v0iMV0p6+lQB2RuANBg5xdhKAOxWGgGGHVgrg/GnFAB\\n\",\n       \"Xu/AoQkoHN/Ip3k9KBVMi/T+FThfhKk49DfCY8D1YXgBem8VkZ43s69rifeDkSvDF4D/850exFXG\\n\",\n       \"3wG/IcJepXj5nR7MJqATunJeilocvCK8IgZ0uvuDSOpjcHAC7py/uH1PBx65J7xYxKaAFCxGoH0S\\n\",\n       \"Gi3oXQ1JdTcX4bm9kIzD1pchbgunfB1lBfQmBenAQjzBOsOojoObKmP6TYIa/O0eGCzWKMzGKGVr\\n\",\n       \"OJWjYNt4JwL8uzJkulkNUWnMOVgP+rGsBdTDJVoPE6oyBoBA351w5wAcmA0DsUYEvnkrvLAX7tjw\\n\",\n       \"JLqShvUyV8l/5r2D4vdb9NyVYjTXQ2TOoRKJUtcGWKnejfpBCpxUeIWfEBEdKENlBvK7wpKIIpQj\\n\",\n       \"P98LrTmonIboXkgCNQuq8TAFv56Fsy/DthYkBuCuM5B2oNADGbVGsu1xKldDVAPd0uiZj4AUmds6\\n\",\n       \"hFucCXkddbBnYX4LbN+Q3dK9cFnTG56mC5PQF4PIizCxFOWZjw5QiYbW9c7cMr39FwORC5gsQWYv\\n\",\n       \"72gw0nwWDu8HbxzGq9CKwok05L/9RhmbLGzNgaPhdFyciEnEAcghaplazKSCDZVuhuGVzGFS5CM7\\n\",\n       \"4ROThO1h58FapP4d8F1w9sL+J0CL2EzfqdOOKHq8Jba4iqG6hx/xKSRgpQ9+4R9DPkgjAo/8tIje\\n\",\n       \"UEqdJjxfERlehriCZQ/6HOhLwDJwpA86JnRWoP4NqF5B51LlIah8AYwaNAwwyqHy9Ok0TFRgZQR4\\n\",\n       \"3WBERHoIy5+la2Wo934w8iYhwjiwm3evw/BloRSOCF8C/g3wK+/0eDYBCzC/DrP9sKN7w+0YcLwH\\n\",\n       \"8v9w6cYiYkLvZ+NUki5PRT12rMJQE3aU4KVpODoauoimVuGmBdgeg0O3hbpYwxVY7IG5rZBZBl+F\\n\",\n       \"CYftizrnxz3cREAh3UdLTZGtmehGBG9lHJejRPUW/otw7O998JpwZwE+YIBRh4JFPdHBzCQwW4pA\\n\",\n       \"HBrZLLWzTbxl8B6Fkbtg4LNhHXrFhZ5BOPjSxYRH2oGDL8CTB8Lsz1A7rLO/HMDKn22mSNm7EUr5\\n\",\n       \"p0Riv71K+VeiRG8wace20Vo+gPXQMBQAVkMyQH6D4d/vwJP/L5R3QToNjTjMAs0HISmgHYPWLugt\\n\",\n       \"w5YGTMdh7VGIN2BbBhwTst3AYagCq5Owu2GxFFFkZ6MYtkMn49PsqSHpElhAt6Ol+DA8+6sQDMBE\\n\",\n       \"OQx2lmOwfAZKsfApG6A+AUUXdi4YzPbvoRpMkjvuoHp6aOV78QZP8MRtDX7q8YvHTisK/lUv0VwJ\\n\",\n       \"umaFX4LKAcjtDk3xysUc5NIin2jCSaXUZUUo27DugJ/GKbUo9Jv06hEibh07ZrHqKirfvvT4F5Ed\\n\",\n       \"B+DT93WtAQDykPo26vYlFl34oAMDHShFocf2seodRgwY1YSsr2iyZLMsAAAgAElEQVQ0w0AzeSYM\\n\",\n       \"RCA8Bw+UYeWjbOhSgvx34PgXYcu5sMW3vxfEgehTsPUYzCVh5fwVipadgaXTMLMDpioQCCykITIN\\n\",\n       
\"nTYXyzqvgoikoe9zsGcXJAMouiKx7yllHb6Cz35LeD8YefP4PGEXzXVlu3yV8J+BGRH+nVJcUVfF\\n\",\n       \"9Y6ukd5X4MGfg5PjYYfJCrD6XXWJbLmIxAbgFwdo39pHseWyziJnb1zhwPOKm1ZAcrA7D/ed0Hhh\\n\",\n       \"spfDt6XxzTZ+qsbjcYuTDQgq0F6FgTthdgJuWVcMrQVIJsJMkOJcbDd9nomrFO5qhsSsIkKO0lgL\\n\",\n       \"V20g4H2vSw41lFKWiDySZ+XfJSindJAIVqdGMDoPWaGvV3HfHGztprAPT8LqTdCehdQGQuCuAjzx\\n\",\n       \"Mjz6ndBevV2E9ptK/f7XCXt5mMKJLDRsWLXCx9cmwBz0PAI3B7C0ReTflOAQcBgW/ido/T8w4EH0\\n\",\n       \"NNxyNNS16dwOH/06NA5DdQqCGIx4MPccJO+EVAuqG26IQihj3jaa5EoLNDPjWJksTjqKZ5RwvCbs\\n\",\n       \"iIQ1voJSallE/ggqH4H4TvAqUPxL8Nbhuz8HY+NgqvDeuvsETFSzPHnnKMmaICh8SUO5l3a5l8pI\\n\",\n       \"g7U0jDZC7YoTvVC4ynoiV45up9MTIvJ0D/yTXfDJrdB2QT8N9yZFvttS6jX6MQ04fQL+2TAM5WhH\\n\",\n       \"67QV6HYJ1TIJHnEuo5WSg08OwtZzsFsP+2VnhqCwC/qWWHkEDu2DZi9YKSgL7MtDwoPzacVSHTwt\\n\",\n       \"VFHdeklZdKAJ5sAl32tJRH4P1j4Go9ug7wXoWYCt+XCL4RTkPwocu4K58kW034Ol/x6CAHQHRlfC\\n\",\n       \"NuLjitBI+FUQEYGBn4V7xsIHLY0wm/PDz3eV0mfe7Oe/FbwfjLx5/BPg37/Tg9gMKMW6CF8l5I78\\n\",\n       \"2js9nqsNpdR6eLLnR4Eo4dPsazQS4nD7rTDVj3usjNoZJVPJ4RkuRw8U6G1AwYR7ivDy8HZOHNhB\\n\",\n       \"shrBaFqIX6PYP02hWlbqb0WkD/KT4Wc9PgD9ymKtrXHSG8DzdBoZncxanPgyCEKASSsClROXjNuj\\n\",\n       \"6wGilHopKvJ/WbQ/3wN3JiErsHwAynNoN59jJrDZ2g0qthVgRYf8FkhtuIAs5SA4p5R1iPDm+T5e\\n\",\n       \"ByLSsw3+u3uhMw6nPDh9BA48BB8x4CUbdtwAhZth0QP9FHz2KOwsEZ+HD74ABzbchJ7LQFqD4jCM\\n\",\n       \"L8BAt5QS7YGXUuAugpaDshfyTVJuWG0LOiF50iifJaugvM1G8ywsPQJH74fnH4KfEpEZpVRTKbUM\\n\",\n       \"/NVlvstvwcoYoIP0wMQXwBUNL2ISb/k4EZ2a0w/FOO1DM/j3wTMTMFQPZWRWHoXguinharDnRrjl\\n\",\n       \"43D+Qu5mBxjfhPtFZPpSRdg++OhuyOcgosNoB3rO4MdW4est+M+XdgqJyOQE3D8C+ihULUjm4YMu\\n\",\n       \"vJAI629LUP0dKHwSRqtQD0ANg6nBLcchWYfDYzDrwvgl5c+VLNgLXIJud9hzMLjvIi/mAvpboI9d\\n\",\n       \"+Uypl2Dp6xDcA+MKziRhvg4rX3md1t5RGJuE/RvGl3bg1gqsfhh4Pxh5p9Et0dwIPPROj2UT8VvA\\n\",\n       \"MyL8B6XCp7/3Erpp2MuQNy+iF27bDYUkvt9gddjC7dVJOH20cgW+uxXyX4fSwSz5bZPEmhGM7glt\\n\",\n       
\"GTcRnKnCPhF5sHth+WNo/xjEd8DpJDS/F9D8Sgd+tof2Tou1XTa5HtCVQ9VoUT8DPP+jxmcr9WJS\\n\",\n       \"ZHAbJG+ExWFoTmPsnSRbarM2Nc/KufBptrcD5hxMb4Wh+VChcTkDz2Uh/7dXaUrf00jCrftAH+8a\\n\",\n       \"wBmg7oAXOrD+GJz8KHgf3vB0eQ/MN2BXCSMFuUts3DO1UJqjJ/Hq3+dTUJsHuwBHvghbTsNzN8GY\\n\",\n       \"Bp04rKzB3PNgrSRQv9APCwZWYxfuiZ2EjttT0HMWJuH1+V7dMlKXnyDz8PIWqH+ghbhF1sZztGrb\\n\",\n       \"aD4bAc+Bjo39FLz0N/CSAKvXW+asH27dGfZNv4I4eDuAMzDFhq4wERm6CXZ9GI77cKIGGQX0gv59\\n\",\n       \"KDaVyl+6/yH4zBQstGDSBN+EThTcGdh7PiyvrED9GEw3ofhBCM5CJR0uQ38W8v0wU4Tat+DEEBxc\\n\",\n       \"Cc/BpSw8l4bCV1/nq5UhT8hLM4OLv17MgfOaAOaN0L3mfT8McqZHCEsz8z9CYyQFPZcp1/a3wOy/\\n\",\n       \"0s+/UrwfjLw5/Dzwd+/REg0ASjErwmOEEvHXXdvXtYDqalHHwNmH9USBpeE6kf4obgPUfwFegpNb\\n\",\n       \"DfhQgr4lCDSbZiZCqTEU9saOEGpulLvOo18SkSTgK6UsABH57nH4lZtpHI3SiDQguQBeDf70zQgR\\n\",\n       \"5WDHPpgfCFWXSOJVV2nFs5hAIRsGIxDu9ul5WB0CMwqdPBT+RCl1fjPm7r2GZKiq9Rpl2j5w07Br\\n\",\n       \"/DItluPgHaXpOawlYduGp/PtRXixDSoGIzoYAZwZhJeaYB9XSrVE9D+F0qdD8bKTWfDOQe17oE4q\\n\",\n       \"pewRkf5dkJ4INcg34jLk7NeHCq3lvyUiz7XDXPxPDkLdCj14hk9BfA2+rpR6XfG46wCX5Te9Dukp\\n\",\n       \"09eVl9ch6O3aBSTAMMPOmVdBRCJTMLYNFk6Gi9UzCi0XtAUYmIe/V0pd0Oj4YdcINgZkIH8XaOPQ\\n\",\n       \"OA88ArTg+Xtg9oPhOdheheI3lFKvKZHABV5M9ml47COh4V3OhsUsHMpA/m+ubIpetd8KP4KsugEl\\n\",\n       \"yEs4XRtDveUcdGbf6ue/WbwfjLwBul40/xT4Z+/0WK4BfgP4qgi/rxTXnULfZqMMz0/D/XfDfAS8\\n\",\n       \"LbBUwVk/BFngmFLKFpE/8TF3VqntSCBWBntxCveUFqqQCq/1Rmld8npJRP6gDHclYMqGwjo8rZSa\\n\",\n       \"ezNj9KDehLEBaC9B9lmyu6uYY03MJJx3Qz2JZhROV8D6CizagHmF5Lf/6tGAxXW4YeqSi3gBDBvm\\n\",\n       \"WzB66XtaoPmol+DlNGRGYM8a+BqcHIbSQ7A6C7O3gmjQfAlKP7xwfIQGjzJLKF7ldoOGV1CCQ3Pw\\n\",\n       \"uckNT/0NiJwPyZVzV/r9uoq7ayKmWif538bRB11Up4XzA2hdNyWZy2Edjp6Bmya6whsAHTDOAgou\\n\",\n       \"DbbLxVC441W31zXION3s0iXYUiB326PklMKVdUoygGVGobkELzcvMUjtqry6hMHp1y6zv0dE5DHe\\n\",\n       \"9DlY/wEcrsL5j4A5BNYirH39zV4f3g7CcnbPEXjyNjiwAgkHFnrh+SiUNl1ZeVODERH5beAgcERt\\n\",\n       
\"cPAVkV8mVDF9Sin1i5s5hquAg4Q8g9cQo95rUIpnRVgCfgb4y3d6PNcaFhw+Aje0YOs4WC0wTwOL\\n\",\n       \"8FcXLiRKqZqI/GYZ94s3hiSAZgvMJ0MvjzelVKnC1PBb0nUpwLMvwS8nwX6c/ruFfUGS2GwJZxCM\\n\",\n       \"GDz6Kah+GWrf3qBmaYtIBszdEElDawE4fw08ZN616MDx4/DhLAxsh2IAchKGz0DRgn88Cf98MpxM\\n\",\n       \"B6AAyelQyOElWD4FD38Cnt8HKoD6EagehcgYFB+Dzhwwd2kHxyXiVa+CC0ePww0W7J4EqwPGadCW\\n\",\n       \"4Gtv1SNGRL8BbvmEzb2HbNJOSFZ9cQye+Wmu4/M/gNMn4UUXbtkKlgv6GdDn4TtdEb5XoJQq9Yoc\\n\",\n       \"expuOQjLcfBWIP08ZArwqnKJiAzAjl+yuPG8jz3VS2y9QyVV5yV/O5V5C069FeG3bmD5et0rGjAJ\\n\",\n       \"iUlwO8C0Up1nReQQoF972fbqt+DZEszcA3osPFbzP3i9TqWrCdmsjj4ROQD8C6XUr4rI7wNfVkod\\n\",\n       \"7v5fH6HZ0q+/XjDS9bS6ohTkZkCE3wXWleL/eKfHci0gwseAPwJuuNbibtfDmouIAezMwVYbGp2w\\n\",\n       \"ZfA1yqS6yO5huD8FvRa4FXiyAY9fi4tHQuRDaeSXAvbv1hmpNzA6bbY/B5k6PL8FHn1Aqc4zG77T\\n\",\n       \"Ntj6i7DHgKQHSxGYmYHSV6+HjMn1sO6Xg4gMDcBnMrA9AFWDE+XQhK0WFbl5FH5qnJA4tAidJfhr\\n\",\n       \"pdS5De/Xw39F9sPUT8NuBREF50w4dxhq37w0A/IG49GB7TnY7kC7HR6bb9knRmT0X8KPJy6W9iBM\\n\",\n       \"0X9tHE7+J6XUpmrPvJ11797Et2Zhlwd2K5yLy3YCioiZhnt74a4IGE0o5uEBX6nZV2+X/RR84i64\\n\",\n       \"MW8yfXOa+ngCUQ0WcgEnv9+A31ZKXTU+XXit6fkC7LgJJl3o6DAdwLmvKuWffuM9bB7CzpqrHwz9\\n\",\n       \"qDXfzMzIB4AfdP/9EHAXobzuBeZwehM/+6pAhCjws8Cd7/RYrhWU4mERzhNyR/7wnR7PtUb35DvF\\n\",\n       \"jzBcA/CVmhaRM0AccK7lE0xbqSdFsoMw3oTJQqhbcUFhts+C5Cutg6FuyvjPwWfqoQokwE1AameY\\n\",\n       \"juWpazXudxu6Gaw/E5E4EGwM3GyljonImTnYQlgqWeym7De+3w8VPCc+Bz+Rv2j1vhd48HZ45jRw\\n\",\n       \"8grG4wNnuj9XAfogDFxC4NSAXkXIfbpuhfC6QdzZ7s8bbesCPxCRhwnLYJ3L6+rEh6CnBYbvsvdI\\n\",\n       \"mfqZMu1k2IE080OlnKtM7Ddvgf374CNzF4tIO2LwDz8jIv9Rva5Pz+ajOz/XNCuzmRLQOS6SvGrd\\n\",\n       \"1+82fB44ptQbH/DvMfxvwP8uQuqdHsj1DBWife1TqQDtc9BphW2iG6XuCzGob0ypjsOW2MVA5AJu\\n\",\n       \"LEDvHddkqO9yKKU6l8sgdX8/o5Q6d2kgsgFb+f/Ze+/4OLLrzvd7qqpzowMaORIgmHMaTp7hZEnj\\n\",\n       \"0UgrybKsMPKT5ZV3V2v7eT9rrf389Lwv2ZL8tLZ318/yyrYkK1hhlEZhcuYEZg7JIQmCIBIJNBro\\n\",\n       
\"RudQffePaooYDjlMAJoA6vv54AOguqvu6b7dt86995zfoUc/54iANeyuSEDD5tmx+HIpDsJw8K3H\\n\",\n       \"TIGoxuUFPM4rlFKlyvf1ItsByQEYmzbmBVLQNAqJDBRnwTGr2worYm+9DYdz0OXAygpaVMymM5IA\\n\",\n       \"ApW/g5wX2MdFg5/PISKfn/Zz5wzbdzn8G+C/VaHdqqIUr2OtZi006fsFROlNODIJbzRbe/2mwOFG\\n\",\n       \"OJyA4vQARAH9AudrZRA7gH320UC7wFinla1y9NVk9Bkr3XukskqddsCLnXDmtauJjZj/ZPbC/oJV\\n\",\n       \"BbiMlWK7txV6T3MZKzBXjuhvr5kFlc/LoqsVNZuD0U7gd7Bqn9wN/MN5j19yr1Ap9fmZN+vyEGEj\\n\",\n       \"VuL4jy/13AXK/wocEuEbSrGn2sbYvJWKKuv/gGfuh11rQASmjkDsF+ct7w7DUAkSLghOm90fb4D4\\n\",\n       \"s3Nt9yLkFPQBG3RwTQsYPhGCaBVrvYBS6qSVUjz5AHg7oJCHiSch/Xw17aoWleD0v4fH3wU1S8Es\\n\",\n       \"w9RemHxydoK9J/ZA74NWuYmzpB3Qr6jowiwmZi2AFUBEvgxsBvYqpf69iPyVUuqzIvIg8B+BpVgZ\\n\",\n       \"NR+8wLlVDWoT4SvAgFL852rZUG1E+E0sRdYb5kIIrdp9Pl8RESeAukhBKxHHWuj5MKwpgS8PQ344\\n\",\n       \"dAZGv3q1mRgzyULvdxHfHbDsfliZA0cJ+v1w5CjEvvkO2ztzaJ8IVsZgcS4zrK7nfhcRF1ac0Kz1\\n\",\n       \"j9VG/SdgdSd0pCDrhCNO6P2RUrnXZqvdavJOfT6rzsi1UM0PqggtwBvACqW46mj1hYAIX8UK+vqY\\n\",\n       \"UpfeWru2tq7fwWm+IyJNULMe3CGrumzp8PWQSQOLo99FpBNCa8HhhugR4Fh1Yo2uHxZDv18KK8Bc\\n\",\n       \"Wwm1K6CYhsQBZcn6L0hsZ+SK2+aLgKEUv3fJJy9wRPBi1TH570rNbvyMPTgtTux+X5zY/b74sJ2R\\n\",\n       \"K2qXCFZBoA1KXVChb9EhQg9WCui/UooXZ68de3BajNj9vjix+33x8U59vugidi+DPwS+bzsi51CK\\n\",\n       \"XuAR4F9EaKuyOTY2NjY2Cwx7ZeQtbdIO7APWK/XOFV4XIyL8J+C9wB1KkbvU86/8+vZMaTFi9/vi\\n\",\n       \"xO73xYe9TXPZbfKPwJBS/MlctjtfqBQN/C6WhsynZjqg1R6cFid2vy9O7H5ffNjbNJeBCFuBB4C/\\n\",\n       \"qLYt1ysV5+MRLKn/f11da2xsbGxsFgq2AiMgggF8BfgPSjFVbXuuZ5QiJcLDwEsiHJzNgFYbGxsb\\n\",\n       \"m8WBvTJi8XtADPhGtQ2ZD0wLaP2OCM1VNsfGxsbGZp6z6GNGRFgPPAXcVLnJ2lwmIvzvWFL/dyvF\\n\",\n       \"NSsV2nvIixO73xcndr8vPuyYkYsggg/4DvAHtiNyVfxnIA3839U2xMbGxsZm/rJoV0ZE0LEyQ+JK\\n\",\n       \"8Vuz1c5CpyIStxvLofvBtV1rfs2UKjU9lkPdRtA0GD8I5TcXu8z3lTLf+n06ItINtZvA4YHYISgd\\n\",\n       \"uliNIJu3Mp/7fa5561gjOsQOzMexxk7tfdu10YC/BlYBDyiFPXhcAyJsA34GPKwUL139debP4GQN\\n\",\n       
\"DoEHYeXNsGLKKgXeG4TDh2DiW3NZcGy+M5/6fToi/rtg+T2wKg2uIvQH4Y0BiP7j9VL353pmvvb7\\n\",\n       \"XLOQxpp36vNFl00jggMrc2Y58G7bEbl2lOJ1ET4KPCrCQ0rxSrVtmgPaoOsmuL/fGhwAlkxAeQ28\\n\",\n       \"vBw4UkXbbGYZEYnAyrvggUFwVW4GnXHQOuH59cDrVTXQZiHRCl03vn2sUavhpQUz1sxqzIiI/H8i\\n\",\n       \"8ryIfPm84y0i8rSIvCQid8+mDW9tlx7gRaAWuFcp4nPV9kJHKX4JfBL4iQgfrQikLWA8XdBdOjc4\\n\",\n       \"nKU7CXVrqmOTzRzSDl2cc0TOsnQC6jZUxSKbBYqnG7rNhT7WzJozIiKbAZ9S6nbAKSJbpz38R8Af\\n\",\n       \"A/fB7KuditAuwl9iVZ/9Z+C9SpGe7XYXG0rxGHAP8DngRyKsrbJJs4hZAvMCDldJB9Neol/4mFC8\\n\",\n       \"QP8XDChfc2aZjc05ykW4UGhI0QBzwXzWZnNlZDvweOXvJ4Gbpj22Vim1UymVBpIiUnOxi4hQI0JI\\n\",\n       \"BOflzrZF8IqwWYTfE+FxrHozOrBGKf5qpmXMbc6hFPuBzVhVfp8Q4QURPifCu0VYLUKnCI2VnyYR\\n\",\n       \"mqps8lVSOAZHBdKOc8eKGrzpgcmD1bPLZo7ogxNFmHSfO2QKHA1BdFf1zLJZeOSPwbELjDVHvTB5\\n\",\n       \"oHp2zSyzGTMSAvoqfyeA6ctJ+rS/E5XnJi9ynT8FPg14AU2ENFY6aaryOw1kATdQA0QqPyewtmS+\\n\",\n       \"grUSkr32l2RzOShFHvhzEb4M3AXcjyUs1wl4sPoKQAEFoL0adl4LSqlxEfcP4UfvhWVYfn2fQP9T\\n\",\n       \"Sqn+KptnM8sopdIizu/ATz8MyzRwAf0a9L8C5QWxh29zfaCUiom4f2SNNT1Yt88+gVNPKaVOVtu+\\n\",\n       \"mWLWsmlE5HeBqFLquyLyfqBVKfXXlceeUUrtqPz9I+A3lVKp8863Vy9sbGxsbGwWENXIptkJ/A6W\\n\",\n       \"lsfdwD9Me+yAiNwIHAQC5zsiZ5kvaV8iorfCH74P8nWQOXv8KDQ8AX1Rpb5ZTdvC8IEe2NAFuSJo\\n\",\n       \"b4LRDz/JKPVqtey6EHaq3+JkPva7iIS64P/aCp11gBPMJDiOQHkPfCmn1DPVtvF6Zz72+4XwiNx8\\n\",\n       \"B7z7Jhg4eywP+g+g9Sh8SSk1ea1tXGwc74Mf5ZSaN5lb77TIMGvOiFJqr4jkROR5YK9SapeI/JVS\\n\",\n       \"6rNYlXG/hrVk/6ezZcMcUt8Avjp4y4euB6IvwyoRMaolTqPBqtWw8W44eTZAaBkYP4QHReTYTHxR\\n\",\n       \"bGwWIUuaYPVSGHVjlUIIAwVo7IWHANsZWSTUwoZumJh+zAXmEuCotQV9zWOsBqvPH8d7wPFDeEhE\\n\",\n       \"jiul5n1m6KzqjCilfu+8/z9b+T2MtVqyUCgVeXtwbQH0shUGXa6CTQDUwaZlEJ8eqeyF0lKQY7CE\\n\",\n       \"Gfii2FwbZwOz7cDqeUWDAcZZR+QsPsgY0FIto2zmnjLkC+A//3hFwGpGJqH1sHk5TE4fx31Q7AHt\\n\",\n       \"uDWO75uJdqrJoq5NM4PExmC4F+qmHzwIzQl4XSlVNWcEQLjoTW7eL5HOZyoZRY9iDVj9Inys2jbZ\\n\",\n       
\"XDaDUSglwXn2QBkYAG8ehqpol80cMwavHYZac9p4Og7ePsgDMxZgepHBWl38ofnFolNgnQ2UUkpE\\n\",\n       \"vvc0PDIAHWHgNHAS+qeqvFw7DvuOw+r2aasjWWuvEaC/WnYtdkQIA88D3wN+E1gL/LMITUrxhaoa\\n\",\n       \"Z3M59E3A3pdgZRsoF6hRkDMQz8JPq22czdxRhsOH4ZUU3LAEVA7kOBQG4Z+VUjOSxRmFvcfhw23n\\n\",\n       \"jeMnLGekfybaqDaLsjbNbCEiTmCpgF9BDOiv+qqIiBGGDy2Hdd2QLoDxJjhOwE+zSu2spm3nMx/7\\n\",\n       \"/GqobMv8EOhTit+fdrwVeBn4rFL8qFr2zTXztd91kZWd8IkGCOnWTag4BIfH4GszdRNayMzXfr8Y\\n\",\n       \"ItIi0KKsrbveio7WTF3bqIVfXw5ruiAzbRz/SVapeVN+47oslCcia4C/A0zgkFLqM+c9vqA+qNVE\\n\",\n       \"RHSgOwwripBLwWGl1Ei17TqfxdLnIrwf+D+BjefXRhLhRuDHwBalGKyGfXPNfO53Eanzwho3BOPQ\\n\",\n       \"V4ajSqkFo4o5m8znfq8GlXF8aRiWX8/j+DtxvTojv8owEZGvAn+tlNo77XH7g7rIWAx9LoILOAZ8\\n\",\n       \"XCmeu8hz/hTYBjy0GIJaF0O/27wdu98XH+/U51ULYD0v1dUDdtE6m0XBI8DhizkiFf5foBv4wJxY\\n\",\n       \"ZGNjY1NlqppNIyIPichBILeQZG1tbC6ECA6sIoJ/9k7Pq2zd/DbwZZG3pwza2NjYLDSq6owopX6s\\n\",\n       \"lFqHVSzv3mracjmISEBEakXEXlq0uRo+BvQqxSUDh5XiZeBprArXNtcZIuISkYiIuKpti82VYY/j\\n\",\n       \"1ydVS+0VEadS6mzw3hTT8vWnPefz0/59Vin17ByY9jZEJFQH710OPU5gAiZE5EdKqb5LnmxjA4hg\\n\",\n       \"AP8J+K0rOO1zwD4RvqIUp2bHMpsrQUQ0P9zRCbcHQZ+CckDk+SQ8p5Qyq22fzcURkXAdvHcFLDWA\\n\",\n       \"CYiJyA/twpbXB9UMYH0I+AMswZaTwG9NT4O9XoKbRMRogn9zOwRWwqgGnIaapyF4HP6rUmq02jYu\\n\",\n       \"FK6XPp8NKoJmn1KKO67wvM8DK5Xiw7Ni2HXAfOp3v8jtG+Fdt8KgB0pZMF6Gtr3wRNKuR3NFzGW/\\n\",\n       \"i4ijGf7t7eBbAWMaMAyBZ8DfC3+jlBqfCzsWO+/U51VbGVFK/RgrhfF6p3sp1K+eVgSpGZIbwB+F\\n\",\n       \"rcBjVbTtHRGRlgjc5oHOAoyNwQtKqRPVtmuxIYIO/Anwu1dx+heA4yJsUoq9l3y2zawhIo4OuP1m\\n\",\n       \"GPJUZL49ULoJhgbgdhF5adpq7+Ve0+OBbbWwBSjH4PUc7FZK5WflRSxeepZBZNW0cbwVptaDf8x6\\n\",\n       \"739ZRdsuiojoTtgYgRs18EzBgSS8opSaqrZtM42twHoJBAKRCxyPQMoDTXNu0GUiIp3L4VNbIdcM\\n\",\n       \"iRg07YFPuUS+nVdqf7XtW2R8EEsE7+krPVEp0iL8OVZByffNtGE2V4THB07fefVovFDyW/F3XuCy\\n\",\n       \"nRERcTTAxzdDxzIYUyBvwnv2wQoR+Zq97TNzOCAYuUCNsFpIe6G5GjZdDkF473q4YQ2MuSHfB7fu\\n\",\n       
\"gnUi8v9frNr9fMV2Ri6BgskL7cNEoSYFB+fcIKASeCVYBblqgInzt4ua4IFbIdlRSZmugYkQpCet\\n\",\n       \"ar2HgDZo2AHOViiOwuhzSqljc/5iFjgiaFirIv/hGjRD/g74jyJsVGr+F8SaL4iIF/w3Q3CLdcS7\\n\",\n       \"K0GmmABX0Ko7AkASnEnLQbkixU0NVqyCzhunyXnfCqfSsGwcbhKRGJCohrCViOgLyRkqwsQo6Ocf\\n\",\n       \"j4I/CbtF9FXQeCfoEcj3Q/Q5pdRliw6KiIZVGWTG4h5EpHkdbNkBfWczTTbDkAntk7AZq5zE2ec2\\n\",\n       \"ABEgBQzNpB1zhe2MXJr+PhjaA63r4LQO5VNQuw/KSdh9uRepqOd16RAxIQGcuFKlRhHxQ3AHtN0M\\n\",\n       \"ah04s9ByEhJFkdoDMPkDpVRRRFzd0NbBWxU8Q5CPgGsQ2QKr3gs3TEFTDKIh2PVJEdd3lMrbN7uZ\\n\",\n       \"5dewZsu/uNoLKEVWhC8Af4y1ymIzy1ilHRo+CduaYOUolAXevPM0Ox0vEK+5FQZDkI+Daye0jMOP\\n\",\n       \"r/T7HIaetvMcmDw4HBgrnDR0FVjdC1FNpP44jH/naiXmLaeKHgGXsspmDV/sZiXi2gB1d8GSiEhL\\n\",\n       \"FMafUKpw+Gravc440QsjDdC6FkY0UP0Q2Q+lNC4N1n4cNscgEoPhdnjt0yLy90qpdwwcF5EIRO6B\\n\",\n       \"jrWgSiLB12Dq2RkqB9DUAWp6ymsMPDnwuuB+ETkCxCH8MKzZAE0KJgUGh0Tkm/NtK8d2Ri6BUsoU\\n\",\n       \"ka8/D/cdhI06aEkYGIWfKaVil3MNEfHVw8eWQnuTtS+s9VoZOf+glJq4zGs4oPETsL0B3I2WE5x1\\n\",\n       \"wfEuuO85OLAeXokBTwKlIhQyYHinlbAuAxnQoOFOuH0cmirLfO0J8Och9i4RObiQZkTVpFKD5nPA\\n\",\n       \"/zMDSqp/D/yJCO2LRSa+umgrYHUzbBs4d2z7oCLZuZcX95yBbi80ZCATgx9m4bUrbSEPyQw4ph87\\n\",\n       \"gWNNmkidyfpX4MZKP7++FF6+H6ue0RUhIl1L4KPLwOUBNQBaP+wVkUfP/56LeLfD+ofhhjFoGIAz\\n\",\n       \"fnjlYyKubyuVn9dbu5Vx/GvPwX37Yb0O2hT0j8GT0PQI3DUEvoozuTQGehni9wFfudg1RSQA7b8N\\n\",\n       \"Nzth2RAUdTh4M7zeJiJfnYFxNDfdUz0A7eOwKQze9RCPwr8/gR4rsKkObj11TqnjjSZ49v3AP15j\\n\",\n       \"+3OK7YxcBpWCR4+KyGOAoZTKXMn5IbjnRmjZwrn0zDaofwA2RcYAACAASURBVNKKAfgfFztPRHyg\\n\",\n       \"LQOjBmiH9nXQOgTxRmiYsHZqMmE40QybRuDozSLytFLKrBHZuQd23Aynzn5ED0JLFE5AoBuazruh\\n\",\n       \"hXMQroehEFZ8g821cwcQBn5wrRdSiqQIXwc+g5UibDOrhLqh9QKz27acSTA5ROILgBtLsPGqbjop\\n\",\n       \"OHgYdiypbPsUwBjH1T2CN2nSkIXTTeBJw8ZhOLJFRH6hlMqdf52zehnnr3aIiLMdPvJuSBkwdRJp\\n\",\n       \"aEQMoXznAegD9kx7rgHt98BtIxCsbEE1peA2E6L3ViYpVS36ea0opZLA90Xkp4CmlMqKSAvU6+cc\\n\",\n       
\"kbN0TIKnc3rZkrfj2QgbvLBqyPrfKMENgxDrhOgSEZnAiivMAQOX8zkREW3a+9x3AtLLocaA8jhs\\n\",\n       \"vgGSZ0Bvhd0BmPgOjl8/ieM12NMFoqBtHFafgUM9IlJ7uZPd6wHbGbkCKpHyVxot7+iEzWut5dFf\\n\",\n       \"sQyi+2CJiISVUpMXOG+J0PTHLhqWg9FUJOU1yedgdACohXwWHHmoLUBvBDynwHBg9WkhBc/vhshp\\n\",\n       \"WN8E5gRoQ9A/CY9C/rOQNcAz7UtWEsgorC+OzczwOeDPlWKmVpr+K/CSCH+mlN1Ps0suDqm3aR9B\\n\",\n       \"ygH5ROXGclVVWUXEjRXrNXUCvv0o/KsOMIrgOoZ4Y3RNgtoGZQVjAgyDMwq4mPb9FBF/CO5st7L6\\n\",\n       \"pFZk7yQ8o5RKVJ7S2QnuMcTYS9NWRZsmGKrAqNvN8KdF5Hen3fgCEHSdc0TOEsmCP4IVnLsgAibP\\n\",\n       \"y1TKQkqsdePpGyJJF5Qy8E7f3WAXNCbffry5DO5fg846aCtDWuBUXES+rpQau9CVRKStAe7tgO4W\\n\",\n       \"ETUBO4HHB+Hrj8Fv+mBFN9QMgvLCgQaImiAGelhj9K4ya8+AAg4LBA+At4j1eZk32M7I7KPpoDvO\\n\",\n       \"+1Br/OrNf1sfiIhuEPk/6ljeWoPTUPhyBUpqjLH6HLEM+FthPAzOBMRS4OiDgTBkBs+mFlZ+f0dE\\n\",\n       \"njkEtUASGFFKKZHAK7D7Dri5srRXBva0QWzPTJa9XsyIsAVYDXxjpq6pFMdFOIAVh/LdmbquzYXI\\n\",\n       \"vAGH74JOD9RWVkiiXjhShtxVxVBYcWP+HdB1KwQ1SJRLRF/oJ/XFfmgFFHjaoS0MXVHrLIW1FTth\\n\",\n       \"YolDnr2WoxEeuQEa1lixbOoIbNwJS0Xkv1ViFgzAsY/GDR42pAxcJYACEccYrqVwdBVw6OwLhnQZ\\n\",\n       \"8jq4po1VaQdkSyzQSYpSalKk7k3Yvxw2DVtHTYE9zRD72TsHgmbHILHE2uaezplGWNICH3gdjMr5\\n\",\n       \"/bXwy4+KyH95+/aYtCyD394IRg53Vxp3eBTzrl7K782T/rNh+JITPtAFmU446q0ET5+C2inqncKy\\n\",\n       \"LPRUJrQdGry4CUb2Ms9WuG1nZJZRSuUbRHqPQ1sR9LiVAVN2w0QMJrjwB2adh3B3PcGhHPk1Gp6M\\n\",\n       \"m3IujLt+lLHOMmsHYbIB9CQMNYI7Ai9lYPRbF2h/DDjPG08+Y63yDW+E+jLENDh9BOJXHWRp8zb+\\n\",\n       \"CPhSpc7MTPJ1LFl52xmZRZRSMRH9n+HRD0JbBJTAUBZOf10pdZVFPf23w6a74ZYBSHgcHO50Uvp3\\n\",\n       \"efIrSxT/CTDAPQH9QXD7IZSFuBuGTVA5rH1ZBaDB8hXQtGWabsZ6GElCR8xygncDQ70QctPsOOuI\\n\",\n       \"AGQo+Qt0n4CJrVScEaVUTiT4Krx2K9w0YN1Eixq81goTT1x8q2IhEPshvPQh6FsK4bKlbXlmJ2Qv\\n\",\n       \"UbYhsceKEWnyQl1l6/5kLQzUw72vnHNEAJZMQEc7nG5nWvYUQARu2wik8a8WWss1eMb9qPE4sWXD\\n\",\n       \"BH4XTn+xCOPHobUEWgsMLoVYH+7WEg0Jk0IWJgPgS0PeBU4/TBy/Us2bamM7IzOAiDQBQSB+IUXW\\n\",\n       
\"KDzxNHx5FbR0QiILrjfBOQF/f5F92IgbTVmB1OfE6gwcBYO0p8CAglgG4nnQR+CEE8b+9mwKoIjU\\n\",\n       \"YkW4JpVSZ86/eCXq/3si8gwcCQFTSqnozLwbNiJ0AHcBn5yFy/8A+C8i1CuF3WeziFLmMRH5Cxhs\\n\",\n       \"w3IChq80Y+YsVgB65+1w0yAMhVvYeVMnghu3Gcf44BjF4DD8AAJJcB6C3e3gdIJrAjpfgwE/RHUq\\n\",\n       \"Whk10NoApRMQKYCzHSb8kG+CdAA6sYTTkg6RnV5Y66CQ19FKGUqeKM6kInIKNMdbrZx6EnY5YGAb\\n\",\n       \"hJQ1V4q+AKnn3/aCFhCV1eB/EJFGwA/ElFJxEdEqmUj5C8V7KKVGRfSvQ/r90FBrbXWPjUL5wLnV\\n\",\n       \"tOl4ADoq9YxiZ1VfPbDUCe4kEYcLzySAINTiygwTjtRw5g9Wozw6YMLmfljVB8eGoS6BbxA6X4fx\\n\",\n       \"bjgTseJyHcOgz7tSJbYzcg2IiCcCH1wHy+ugHAUtInJkAr731n1JbXstRoOOoY9QavJR7L0D9VoZ\\n\",\n       \"lotI0wUchv4SE+UypTKYJTB1UFIga0BwBJpOwaTDj2PUTTpcIBdOQoOIREPwaythcyOUJ0CrF+kd\\n\",\n       \"h3+5UNBtJRtoXi3lzRM+A3xNqZnfY68Esj4G/DrwNzN9fZu3UnE+flVRXERcXrgxDDcKuFKwLw7P\\n\",\n       \"X8ZqiRd8OrjMWnZtXoMr68WZB3DhNrrIep6Gj7jpvaGOVKGIZo7hSk9y65tWlkbm1HRHKAnuQ3Bv\\n\",\n       \"A3oANNdRKIcxd3kpn8pM+06X4CdTTG3NsMRpgDNP/XFF85C14jH+5AVe64+tSQoBLI2TBREncjlU\\n\",\n       \"JpKjACKujdB6D3gDkMuL+J6BzCvnTx4rDusXYKgBays+CqH3wMBWWDdNH2bc42Pf5m5w10F2FLSw\\n\",\n       \"yJ44/LgJxicxlgrOyuqTIk+yNktsiZ/JlgaUW4OReisWMJcC9zi095P7JmTXQigOtZVgZFPgJx2Q\\n\",\n       \"mXdK29UslLcd+EssT/91pdQfVMuWqyUED9wIy7ZVlkrLwKuwaifcAzxmRbkb7/PQ8L/58TgF75gD\\n\",\n       \"NZIlXjPBmZYWyvoRS9zop+fNuPpMYi9McvA2N21xk3xzkaKe5aQU2D4KA752hqQHl9+LI5el4I7D\\n\",\n       \"h9+Egc3Qdgv0K9ANMHdD10vwEPDtarxHiw0R3MD/Atwyi818B/h9bGdkThFLVOLDm2H5WjjjhGQv\\n\",\n       \"bH7FmlT8rVIqJSJ1EL7FKoFSmoKxF6F8CEhDMg/9tSEKbi+BSYAyJUOjWKiFUgPcdwO5Z8dIboEa\\n\",\n       \"vYF08CCP3xvHeA1GvznNjkAb3N2Ao3EJ/owb50QeU99P5uZD5IMZzC+efa5SakLE970S/vtLrM4A\\n\",\n       \"JdjZCof6oHjgQq+zknVygcDMxYGltbL21+GWMxAZgqQTdv4aHHABb6s/VEkbjgJlKyZPXoLX14Fq\\n\",\n       \"sXbNUm4vP7ttG1Pjd8DxEWitwb3EQN2cRi0fpfDMKUo3NpNxOvGRY6K5wGSHB3eihYy/BUI9lhrb\\n\",\n       \"YY8l3uZ9wXIWT8BRE7R10JMEU4Njfuh7CRia6/ftWqnmykg/sEMpVRCRb4jIWqXUG1W054oQEc9S\\n\",\n       
\"2LRpWqdrwGYYPgrbROQJ8N4Krf+2TGMB0lLAWVsgFfYQOH2KzLvSEDOp80J+tYj+L0qZx8FK0ROR\\n\",\n       \"LwrDkybRrWAUchQdSVyHYEACZIwVeLIepFxgPNBN/nUfRKPwvmbYvRfvPSV0r4GZbiTzZh2sEZHA\\n\",\n       \"fBPBmad8ENijFMdnsY0ngK+LEFHKXtmaQ5Z0wfJbpqXor4ORLLTHYIOIHIXOz8A2DTpjMOWH/R+F\\n\",\n       \"I48rNfW0iPdJ2P8bJqYBUKboKDAebCazbwq6/JB2QVGR0/MYdQpx1RE34+R2K6UGLZXNunuh7o4g\\n\",\n       \"mdugJn6Gkuah4AVw4chO4ilD7C1ZQEqlnxORUzCwARxeGD8E5TfnW0zBXGBNIFvuhZtHrUwigJoC\\n\",\n       \"3DYII3eIyM7p6dUi0gz198OSHjALIjUvA8/DwN9C4iaoWQW5YhPjI7fBnuPoq2PUr3AQTrVCdIjk\\n\",\n       \"vROIdpzB748x/tkA5U5INgfwRiM4ckPEu5oxyi4keIby5kbMfY2QrIUGgTrF5L/Ay/vg2HpQJsT2\\n\",\n       \"A722AusVcF5sRZFp4lzzBLcbxOCtYlYuMF2WXxKBxnugPpdn1dQ4r/Z04kyZeH1J0qsglBmlnIX7\\n\",\n       \"D4Gm4OcfFZG/OiukVknP+0trplVwAeNQrIVjf+JAPphCL5rkBrsoPNlmRdYZBtSNUr/dSd2kC+ek\\n\",\n       \"SdE5xPgNOmODWBuWtjMy+3wMS6Bs1qgosj4FvAf42my2ZXMOBzS2VWI2suAaQu9K4GrNYRoeCqUc\\n\",\n       \"wUbYDqyupPHXFKAuDRM7ROR14DU4ok2g//kwse1eJBMhva+F8uCLsNENZ4YJbXXTGvPiOAOQxtkI\\n\",\n       \"uXtEZBd0fAJuEYPolJNeVcbvLhBPabgHNYyih7I4KFAkFqay3XAWpVQ/5wVO2lwQF3gCUHfeyoKn\\n\",\n       \"BCHBWpHIAYhIPXR/Gm4pQNeAFbaz+xHo/QQYOyH6CiR+CTSE4TNZ8MTwL3dSO6VhFDQEB84crHcX\\n\",\n       \"iLZE8T4WJfuuJZTMCKl8EUfIgW+8SL7GgZ4XTGcMowvyx01QCqKVWJYjlZ95TdVjRkRkPVCvlHqz\\n\",\n       \"2rZcIYk4TI2Dtw5+FY9xBvxTVuSXH9pMMHMQLAzROVpkoKGWgreE6Y+RLURZvxcME8YCUFcHg9uB\\n\",\n       \"n01v5GyQkzUrav5b2BwuMzCmY6g8E/XDnN7STmEgBa4o7tYS7pYyBeUkl2tC6w1QGx8j2QHZq8wA\\n\",\n       \"sLlcRGgGtgHvnYPmfgg8jO2MzBlFIArBBMR78dxQpMnnwJvKk/ZmYK2lT9X+8lvPcpnQqqC3CeiH\\n\",\n       \"0Noky/b0caannZIzi3vpASbaRzHHI+hdwwTqIJcLk4014RmLoZnQU4CpB2GjE1YMl9BUguOmB4e3\\n\",\n       \"iFZfJBsy0JKTlFNZjD7sSce1UIB8BibdlhDkWfJ65W2dFkMT2m6ViFkag5wTxrbDRh+4HeBug8HP\\n\",\n       \"QN8+yPxTFMwjyNY0rqVZsnmhVDJhMktkGPJu6PowbH8cCi8qnvWVSTidhGJ+zIk4pW6NklchChz+\\n\",\n       \"cfItY1YW1DvGhYjoayr1duqgMABjT19K3r6aVNUZqWR9/DXzsN6GUqrsFHnsWfjoFphqsJTxAq9C\\n\",\n       
\"YwpONsC/jjO8oUBtQuOpJSV64kNsOzHCia4ykwYYRyG6AR5/yEon805Bc0TEl1Yq/dzbW/R8yFIC\\n\",\n       \"3nZqCnc2xvGWJtqiUVIrf0HsA8dxrE/TFYDWYghfoYgqnaBvo07s5CThfZB1Mq24l82s8GHgh0ox\\n\",\n       \"E3UpLsVjwN+I4LYF0GYXEXGE4D1L4YYorDkAN5aplQCBowVK+gBCntt3was74GQ9rB+BCY9wolGn\\n\",\n       \"4Cgx6saaTS+H7jUGrqYCroZjmK4cNShayqB7MhR6ArgKYdqTo0TbBzm+KkHNaci4wdcBkV2WRV3j\\n\",\n       \"I+zNGWRbAoQLQqk4Rdnbx2RYMfVzzhNYtLGqmDfAXU5YYkIsCs+W4OD52xlKqbKI9xl49WG4rSIR\\n\",\n       \"n9fhlTYYf1EplRERtwbLNeruK+NIQ0mH0Xao9UHjJBxZAqM3w/I0+FfCYMcgY3mhZnUDtWUPkWyG\\n\",\n       \"tOcU451TBA7B+HpYNQpLJqHMBIdO1VFc40f3uHEMnUL5EpRrBFfJxPBM4ZRhCt84mwFp1VH6lbZU\\n\",\n       \"5fV6t8Omh2HLONSPwnAjvPrbIsZXlSpdl5k21QxgNbAEof7wHVTpPj/t32eVUs/OgWlnC0s1Yg0g\\n\",\n       \"Zy62/1ZQ6rCIfGUMbnNCSwbiLqh7EGqaoe+7DN/uJ73MA+k8w+EzOIihF8CMWrOcZJew3KPh1MuM\\n\",\n       \"NCjWnITwgyIyrpQ6NM2WCDTeZkkUg8mq6AlyjiiHO+vIRpwQKhJwOWgu52ktxcgCRU+J5ekiRz0w\\n\",\n       \"NcRlihaJSBhLC6UA9F9tKuMi5UPA5+eiIaWIiXAYuBl4ei7aXKwE4J4tsO0mOHUaJp/F+SEDMzLA\\n\",\n       \"xPokomVoivmIbk9Tm4ODnZB213Foaxt6yY1pxBgrR+HmON6gm6EHOig6O3AlT2HWTtDqnSSUKBMq\\n\",\n       \"lqnfF+fo2jTRWiHgLrC+AM2nQRlQdMHAdgi9APFIjnWxXnJ5L28s0zC9GZqiJZqOwP7T08eryjjr\\n\",\n       \"wPr+13hhUwC6chCNwx6l1IJ3XERkyUr41HZItcNIDHy74TcOWVsuL779jOxrcNAJp++EkAOmyhB7\\n\",\n       \"AaaeFJH6VvjkSggMk2me4FRdklRXFk8Z6rKWMF62CW46BPVJ6zbS4lLs236KVb8Yp/8+g4wvT/1E\\n\",\n       \"jo0n4WA3iB9Cu6wdf40kd77+Jj8NuBjbXkOhS1F2nSSs3HjJopOjNZOhuFlE2xZB9XTBagUqLNIX\\n\",\n       \"h13Aaei436q34yxDPAw1ZdhkwtgjIvJPWGP7dVWDrJorIx/EkjH+i0pphc8ppV6Z/gSl1Ofn2igR\\n\",\n       \"3y3QfR80C6Q1GDktIt+6mMa/UuokldS/WpGHb4X6FTC2C3o2k487iRPDF9Jx9UdIGG+Qjk/Q7gaz\\n\",\n       \"RcNbr6OZUBADV0ln8jaFc8SBryck8rUi5LrhpgbgINklGUbdirpDYJhFlo07ONko6AShGMFrKETF\\n\",\n       \"SHiK1KXK+HJFPDHoDUF0z6WcCitwy38XrNgB7QpyAv0pEfmGUmreRWbPNSI0AKu4QLT9LPI4cB+2\\n\",\n       \"MzJriIhrCdy4DYYMUO2QqEPfPUzThhRaYyONe1pwZouUjRHiSyYZcrrI3eUkXIySVLVMnH6Awo92\\n\",\n       
\"Iu/ejXOjm3BTCWe6j6irhE9vpS6RZ6ouRX0cAlFoP13kQAP0TEBeQX8ThI/Apl7htUdcZJcLLkee\\n\",\n       \"sK+M71SKxhdgyyvgzVsLn8fqKnY7amBHB9zoBEfCqk8T2QK5JkgmoOMg3KiLfMNU6mh13+XZpRHu\\n\",\n       \"uxmmOiBe+T+1A3JjcLeI7Dq/3k/FmXtBRF6FUwEgfbYKb6PI+3aA0QMDJ8mknmfkdg/u4DB5T4nG\\n\",\n       \"BLzZYAW+RirZSEXAa0KnDoZKc9+3YHg7ePyWMPfYEoNy0UfdxgLRpVna3oDGaJaO3jr6b1uFgwK+\\n\",\n       \"YhqPnMKhK+pNNw6ziNZgcOaP7mZqXxecOQUbxuH2fvhIFo6cwVNvEj8OiY3gNSDTAFk/NEzBUgOG\\n\",\n       \"Yu8kT18NqhnA+i3gbYqhs4WIaFazF48yFpGVsOVBuGfwXN2WY3Xw1EdF5G8uVSjKa5UEnwSIw5Lt\\n\",\n       \"EPORP3OEfPcozroizkYfLmMCTwrGgkKroXAoIadMyjWCKT5KTh0Ju+GPDdC3w3dXwLiXqRd3cvxD\\n\",\n       \"OSJdirZeGPW5SATcZIwemBgl3+zHURDSRMl6CwSBcg3ERiD31GW8RSth1T1w3ylwVF7nSA387GMi\\n\",\n       \"8iU78v6SvBt4chYUV9+JJ4AvY6m92swOXh+Ie1o5h2VkB0aJ3VFDe9bAKAIocrrGqO5keX2E+qO1\\n\",\n       \"OLIKzBRB9z4OrXDj6/YQbDUJOsZRAaFGypQMD5NuBwUHpJstyXd3HDKtwvFGIaWVye+HjbucHLxb\\n\",\n       \"cDh1RpVGbanEhFvQekw69kIoBaebnfSu1YmPOkQ2hmDFVli/BYY9UPol3FoDnUvg5z7IeUBClm7F\\n\",\n       \"+0XkCwtVYVVE9E7o6JimVAvggVI96P2WOOTwhc6tjHnj064VXgUdPZVrdcHEAKeGjpC8U6gJw4kS\\n\",\n       \"mJOwo8/KYUi7YazGS2FFnmynMPEeJ7kzCtdEFr1XY7ijlsSEh6wqs79TpzaRIN6QIPdshP23bsAc\\n\",\n       \"DGOGBvHVufBqDRTlJKaRJ9BRJufz4TyzAsaOwuYirGiDhAecechnmFo6SbQLVvVDIQhBD5CF3hq4\\n\",\n       \"dcQKtv3FRypJE9dFAcSqB7DONiLSCg33QUc3mFkR/4uQfvnCX77GW2DD5FsLyC0fh+Md1p4g7xj8\\n\",\n       \"U4LJBESC1pRG08Ech/BpAj0l6pxegikPk4YO9SaNuiJWMnAaXlJuH6ZepIRJsraTjMsJ9S5IDsF9\\n\",\n       \"JXj2Vjg6yrHnj5G8tUiHwGBTgmKwndpUHAn6SDPFiDtIS0Ewi0OkczCRgInHgEYRuUTVyMYbYE38\\n\",\n       \"nCMC0JKEJR1w+iGR1lbQPJB6A+IvXr0k9oLlQeAnc9zmK0CPrcY6q+iTIJPgDle2OttgKshQpo9c\\n\",\n       \"Y4KxTTpm3sd41IM/WaRW1xAMnFMKNCeRwCn873LgqMkjHiElGn7RaZYiJ4mRdUEEoehRGH7hWE+Q\\n\",\n       \"cWcb2UyBBAlcS8fZ+bCOs9FJeKoBOarjLI0z2pmnKZgitU3nDVeEiSVOTsgm0kfG4ZFBWHID/NwJ\\n\",\n       \"Z+80tSth4iTs8ELRY+nLS8AKvm/lEmPbfEUpZbaJJOPgCk2LmSsDyUqJMJHQu6FmPZRzENsJ+d0X\\n\",\n       
\"cc40fVr25EloH6LuRp3WgoZrCvyTVgX1Q8uheFKI1oUp5OvxZk9zmgASqcHvKmKSIN4yxdFuD1MO\\n\",\n       \"Jy5/iqwrz0jZyWBKeMPTRCJaAw4BVxGlFO6ynxrlRml5mvQSwxGTwtTzsC4Fd/VAtgDhODjTMFnD\\n\",\n       \"VGKKZI+J0QtT9RDIwgk3RBJWSOOGIWjrgDNtnOeoVYsF7YxYMu1LPw235GDJIKSdsOcB2FvHBcu6\\n\",\n       \"G2EIXiD4MFAGvCLSCZHboLw+QMoQim8k4DmgT1m5yi/th0/UQyoAAydhxSDOtUm0gJN0PkveE8db\\n\",\n       \"FMIFnUxAY9LhJ0mYWq1MSQxSGAyLm7KzC8onILgC1JuwsQdGP0jhue9zavQgQznY3GOS7zSYDJoY\\n\",\n       \"gTCSS3DMf5rRmhJes0iuCGocNvvB/C0YmKpsuVxkj1j3g/cCs/r4StjSBNveAG8STmyF19aIyH+3\\n\",\n       \"dUssRHAA9wK/O5ftKkVRhOeAu7FF7WYUEalvhPevhrYMND0JN62HV1bAwH5oPUVjxE93nxd/okTO\\n\",\n       \"MMFZIu8o40zFMV15Ej1lRru9JN05iu5JajHZXNZoNgymlDBU1miWBGOGh/qch7KZZXd7mH7nUkol\\n\",\n       \"B1GnQTBXh9edZ3TTFGFxkzyl02TGiTUXyPt1dteEyTY7oclFuX8L2afWw+lRS4Niax80rDyX4msm\\n\",\n       \"IeCFthXwulZxUt6AlUH4/UaR1yfgUAkOL7RV0Bg8vwseuhP6jcrr3getZ6APWj8EN/hg6TjkDDj4\\n\",\n       \"MOzvFJEXQ7DZZcmbnsjDAWBiDJI/w7E1iT80ibmuTJfTQ+tUGU8clp+EzhH46TbQmnQifhPJD9Hf\\n\",\n       \"0s2ZyUaKiZOMLjXRNrsp5F0UG3wECdGumtBLI2S1KPmAwXiPg/L309A4Ac4YaZcwpIRwqUSIEq4C\\n\",\n       \"5IwCxbY0PLAEAgEoB6wbnHEY2n2oIZ3TGZMX68Dwg5TAH4WuNEy4rXfGq7iOKvsuaGcEam+FbSXo\\n\",\n       \"rghD1RTgtn4Y3Swiz59Nm4Wz2zjBkzC4FoLTbtimwIgGWhiWf9JHcWMn+VAQX0GIb0yS2twHjwJP\\n\",\n       \"Acf3ox0/Rs0HHIhLiHd1oULL0EpOtPI4BSOB00xSKyWOqABxrYWCyjAuBk5ClKjFW05Q0EsUlQMc\\n\",\n       \"OfAGIT0KNY2QzEAKGgpw04tJDjuPk1jWgMKg4F0HOBmNjkB6EG96iB27FNsqM57BIPzi4yLylxeO\\n\",\n       \"H4kfhoEd0DCtau+pINAIt7wMwUr68voRKLbBxBbmNj7iemY9MKTU+QUJ54SnsJ2RGUVEXC3wyF3g\\n\",\n       \"6IFBYHAnxF+ELc+BZ4JQZ54NL3og4sU1peMp5/C4TrF/aw79ZYXe42B4ZTfK7cZPlow+RUmVmCzl\\n\",\n       \"aCuXCOk6hbLJgKbhMuG4qsEpBpPmespxP6KmKHkcFIsGOSJkVALRi5Q9/YxsL9PiDJBztxHQDEpq\\n\",\n       \"KfEBIDkCXWU47YJsAEoT0IBV9Q0/9J+E96yCca0yu38dOt3g2w7rG2B8CFYfgpOVWIIFk3mXg1f3\\n\",\n       \"Q+gEPAQ17RkMT57coRzlw7BpibVKAOAvwB0nYfSebuK3bIVxP+SHYeVBuHkEvj1GYyRJy6Za/KUU\\n\",\n       
\"400mzYVJ8hRZVhE5dDmgrQTFXhfDK1xoaY2SQ0eUUK69k2zaBCMJ5hhijOI1veilAQoO6BAvHpVn\\n\",\n       \"3NvH4a0miUIrOOtxF0Fppxlx9pEplXGUPbhSBrpjGRQ9iBrB3TyIu1DCLIZIF6bI+MA9AsufgIGN\\n\",\n       \"0FJnafO9FoFwAnI6DCuuo8yrBe6MeLuhdfKtxzSgqQyH64Bxywnx3gRtd4DUwqurIHcc1h2FpAv2\\n\",\n       \"N8LwK9D0AIS3thFrr6M1qyh6BHepg8FIifwOEdkPoTtKrFuWZOWzEA818kzQIKdPYNTo+Io6Nbk2\\n\",\n       \"yo40p91pVClAzhHGJ+0EcQIlskBJsnj118jjI0cEPAUgA87noGMUngRtO0S7oMM5gn9fmZfXb8KR\\n\",\n       \"gpKeIZ90YUx10qjSHN06wcYBcChonYKudhjuBi4QsJbZBXs2g7RB5wTkHPDyKmgYhmDKyqMvixUo\\n\",\n       \"1xaH4HJsZ+QsNwGXqPA5a7wA/E6V2l5wiDUgLFsBwZ5py9c3wXEvpH8Br+eovxG2D43RtyzF+AoX\\n\",\n       \"SB60HO5exfEWneZQEw7lwDBLREWj1gxiGBlizhEypolHmThNIWvq+GhF0x249XESDg+Gu0DOdBMs\\n\",\n       \"+fHENVQujhGCUCZNrMtgdd5JsdiKT7mIKHAVJsi1rySfP41a+3PEkSWgxSlHhMyqWszTzZBshpFn\\n\",\n       \"QTmt2ibhKDhPQ/1dcCgK7nrI9MCYBl0vwDqsrIwFgZWu608lWZGBZQcglLASH1/6CAQOQH8PZDus\\n\",\n       \"ysxa1I1v9TJ4aXVlVakd4h5o/QX6v8ux4oYsSydHyPlhShQNJmSnBcAmGyGQh7pjTpKeMMXgFO7g\\n\",\n       \"OGNtnRRMF0Y6jWaWKPqD6JLHlBFSOizFQ025gNKFiIqzdLCd/ZsjmP1TFPxZ3GWISDsOcxCX7kJc\\n\",\n       \"TsxEAow4IYeDGjJoeghjcJKkL8pEa4HBx6EjCN0HYOQ2GOmEySGoM+GJDjjz82utPSSV7JOZUHxd\\n\",\n       \"4M5IMQYTEWtFZDpxDaisANTcDRvugm0jEByHQ1PwryWyKgAAIABJREFU2kbYLSCnrPLS+dPQ+Ckf\\n\",\n       \"hGoJTem4iuDCxPCkGFvRSb7/CNwAtXdD0xnwZSGv1xI840RzmDhFw+UUDOUgj2LQlccrUQyC6Pgx\\n\",\n       \"VZKsTDFFDl3G0EjQorspmpOcieQpDhXBN4lnWFFq8xLtynPiDpNtg4quWJk3xxMU9RKFgE7AreFS\\n\",\n       \"QtB0k47AgVscuLJuUq0l0v4yjFTiR96yHaWUSorI30HiBgiugXIczjwKkdvh2A1gNlUqCMehOAil\\n\",\n       \"EWzOciPVc8z2A60i1Cl1LtjO5vKx/A/PNgjfDu0BRVKPEW97GjrL4PTByEo4GYaMFxqypGvhWNjE\\n\",\n       \"KCRp2Z9kqgcIguaFOC5yAQ9SFBziwFnK4fIrKAcpcJqYUjSYQgoHZtFJ1GHQUEqjOdzU5uPEXF5y\\n\",\n       \"hhOnylFsymIWR8j4vJw+kUGkiFec9Hk1NFOjnNCoKSXxNU7iTMUhNEJrh4dwMQGSJN/0XYaXeykP\\n\",\n       \"QPyEm8KP3ZCZBCMOoRVgeqFQAMNnqSIGwuAMwg4R2T0fJcUvhIj4oPseuPfEuXjAZqA3D0MPwMqo\\n\",\n       
\"FadcrIO410nKX0TvH8ecHMG1IoOjvoxZcqE25ug5AB0JBaMwYUBsOfhNmKoBZxEmg5Aag21DSXY9\\n\",\n       \"6ERvNXGSAWcMbzmDcniRqImYJYoKSkyScHiJqgyFco5Q2aRUcGEGXPhLMRKZNKXeHI5OF+6yn6xD\\n\",\n       \"Z1CvgVgr2cQwjmAXwSETPVxEBUdxtERxlmLUH4GRr8AzK6BmM5h7rYX1mhL0TcDYo+q8LKqKrIMG\\n\",\n       \"TFyq70WkJgh3tlsZsdSK7J2Ep69l636BOyOjL8C+T0IkYy3BlYHDTTA4BAxZH9Klt8JtA5ZSIsCa\\n\",\n       \"MxB6GX6SgdPfgNoHoPE3gA1lEqUCmugYeQ3JaRi5ElooDWEIfxw6W8HshN0KjN40ZW0Jgb4MIx0m\\n\",\n       \"mWARRyhDwZujhho6JoQ6bYBopEBCeUhJCA0PGiVqyOGTKZq1OO566I/wP9m7s2Db8vs+6J//GvZ8\\n\",\n       \"9pnvuefOt+/tWd2S2pIs2TKOU46dCXCcpBzKdiqGQFHhAQoeKHig8kBBQUGRFKSo4gESIAxJSBzi\\n\",\n       \"OEPZjuNJsmRZLau71eOdxzOfffa8hj8P+7TVNrJly0PLkr8v955d65617hr2+v1/v+/guc1E/9qa\\n\",\n       \"0bmme8WxmyndqzMnL8487o80Qq4Xg0mvaTis3U7mumXm8Du3tW4s69w7cZBe4vqX+KEQwv/86wmt\\n\",\n       \"pwFZP4mfDCH06XwHt/44a4GLtxE4eJpXPsjuH3ZFvoxP4L94P3YcoyoEn7II5vuH78cx/MFH/3t5\\n\",\n       \"4V+hHxidiw5evCVcuGZ4t6FVDU0u/4LyiSY3DqxfpHmp4e3nW1aS2qw3tXG3lD9i4yY+WUh6hUlM\\n\",\n       \"NbLaUhYN0iCLU7MYjSIno+hWp7KTBgf1LSedJetlS7O+7ySedRI6yuaAxkwZr+ruthW9u6bZA69O\\n\",\n       \"ZnpFaZgksnml6tUUd8TssdV+bXt+oJmNbYy4cL/ymfUT9w44Pznxpdu3TLc/vlCI1Hf52Fua1wrp\\n\",\n       \"/ftm39O23JxqNibikHoUQvjbp7EUf9BxZmHV8G4hMm2ws40ug4skYxprbAxRM+wP9J79rPK5JWf3\\n\",\n       \"aORD0/O1ySaf/TC7t9g6ZqnFg5r8DPW3Mt/j4SHn3+bmh0ornccelZysJVbDDVWyZh56ypXEdD43\\n\",\n       \"m80MWlf0qoZpPDJI9jyIbIyXJctR1e6zHBW92oMmS1khS2ZqlfHyNoeflrVy805fNy019ofSk/vO\\n\",\n       \"F8xXeLAa49GP48cXflXNF6iuMz/yHpfeEMLmGf7Ms1xKifvshxD+fozxKxJbQwiNLX7k46w/xyN4\\n\",\n       \"nQ99mqunXMKvyYTxG7oYiTG+GULnRzn4Xs5kjBN2brD7/5yG0a2wHr5ciLyL8wPyK2z9m3z7Mpdu\\n\",\n       \"8/pswvqu1/Jzlo+CpMd+lZgfvqH9MT78kO0G3ZrjM9z94GPjum86v2b+qQvGX3pb+Og9a8+Mnb2z\\n\",\n       \"LlyvaIxED0ielhpJFc7qaDpv6qZBqF1P2P+W4OD5ymyW6rzV9PRR4dVG5uWNzEpz7Eq85zBsmcSz\\n\",\n       \"OgmN5thRo6VRNxTVkfzq3Fv5NY++8Ed4/Yhndxdhfp/9SgqbRSFy/t/hmat0TrtLDz5Kf0494MyE\\n\",\n       
\"8i+G0Hwc4+wLvz9X8+sTp/4ia3g/4wx+Dt/hD4uR3zYW9/qT30arR3Kd5+a5w6WJs623vXX1aWff\\n\",\n       \"mZu3B2698MBkNXr+CxfcWut4mI4M+pVeo+ne1YGyKJwvefGoUG/uuOOCbB6NmixVM/eTXbFk723e\\n\",\n       \"zhMnW1HRLlytKg1H6rzjIOmpyhnhobIRJclVjSozWwlCtawMqcHSTZsne46y3L2LPaPQNhrfU20e\\n\",\n       \"u9yoZLESG9FJxpvJoumRfJL+DTZf/BV3//rbmn9iovPHGlbW1p0dpJL2vmGr5+hWVM9Hnt7Ht/K5\\n\",\n       \"HH/9azyvDWxg9m7e1vuIKcNk8ddhhzvfzkqblS2GMz73HOcPFp3f+8lEo75r+1Iia2UmF9aUjZF2\\n\",\n       \"Y+RaYLxMZ50vPcGLR6QFXxqydosHG1Q5J9c52mKyzMkST6otV4ceJgOPQ+Z+u6XR6JqELWURjVUe\\n\",\n       \"ZpuCllm4ZWetttI4MukMeL6ilWlLdOZjiW3bs7GYvSy7nDuublmranUx1z0eu/qwduWQnzhH9zJ+\\n\",\n       \"KYSwxPZf5sW1hUJyeI1Xvi2E5v/N/M3z/Mi73ilwj+Wf4kdOZb+Hv/5kJjz9LGc+/J4x5gd5MODS\\n\",\n       \"7sJr6fNfy0X6hi5GIMbxL4YQXub2Bqa/7sEYchQWJNX0PW2pvQ6jlOc3efE2R0ukJSvDBzaXCneW\\n\",\n       \"VqUq43TPvLnvpQdcbPLFJxm+RDfyxHxmff9Nn1/ZceMTy8rZfb3LY+eGxJUT75y5JjaiGEuTGAml\\n\",\n       \"NWyi1tHUdhAqqyHXiktajYca6b7Gs/tuZw1ls+V8UlhDI9R6HtsNR24lUd6da8635XHqdmtoo9X0\\n\",\n       \"OL9kdv+n+M5sQR65fsLdEMLfwS9Y3AvTxZ/dj/FSf2Hc0x2Sl4vPy8DV1xaGjrcqqu8LIbzxtVbC\\n\",\n       \"3yD4BH4xRu+nVv9n8V+9j/v/g4wN2svsvcS1GUcd8k7uuf25aX/HYKVvZVR4YjB3b61jsHVBfaaQ\\n\",\n       \"zCfmsXZSdU1Siu19FwZcf5PXrj22Fqf20yVHoTILx5YmY/F2y+Dtq2bbDdn5Xamxpebcap2SlPrZ\\n\",\n       \"wIN84qAuXKivOEqW5XFq2pjY3061q46TNPdaf2gWjx3nU1Uo6BRUTTGZaGTRJGeSETqsVJyveLK3\\n\",\n       \"+H+W/93Y5oiVeuYDbx27c6bnZDlxafpA8tRUdZ8nd7l/tqH50UYIRcH/+ZVeSr8RmiF8+Bw/3JNd\\n\",\n       \"nkmaKyH5lWPxv38f3V4f8fD+wpAsvcDWqUjm5S3O3WVljf2K7l3WLhaeOTh2a2tJJy/NGjfcSWc2\\n\",\n       \"J7WnI7dyHm2Rp/zc8iLktB4vPrvaZgXTe+wEHp5nO6WIjJOJTkg8JdNQKJJl90NUxcSRLUdFIglL\\n\",\n       \"ZEOT9tCgPVFXv0JoEjJlmBm1CleKjn4yctLY0W807M6PHah85GWevU+d8KVV2m8tpm7Q/yQfW+HD\\n\",\n       \"d798Si40+fvfF9z+sVN+1K9Kuy9w/AK9x3zYVzBU7HPh7FeIFjm7IE5f8ofFyG+MU2b4VzK2aS04\\n\",\n       \"Wz/1p7j6Ntu3UfKZs4ze5MylxWbHG5x5QLlReiI8VLUe6k7Yay++B7IVXr6W6XRyl5JUQ2mnXbhx\\n\",\n       
\"du6P/PyO1tKue9+eeGqXi+sd46VVmYk7MnNNMw1NE3OZmVxUi8hMHemYm2lmLGXk+TkxFHrWrMRC\\n\",\n       \"HWYKE1HUEXXDFbN0JqQzMea6MXN9Otbc/GUH33ONmHCmy/K+7KVE+oMHuvdpfalt0Oga7h9LLmRu\\n\",\n       \"jprebEysnZlJm3zHCUct6gaTQOcO5zNuncPXZdbB7xM+YeH38X7iM/hACLoxGn3Vrf8Q78UKx9/G\\n\",\n       \"B5cXSrKdpcqjbqY1T60eT433+no3KydbCc80HH7oUOxEV+uupVkpTUpHYebGKqHNzkrDQw1lo9DK\\n\",\n       \"H9iKtcmcc0Xbm5fX3N3INMtAkVvLL2g5dpyRh7EQE+uG0mRTljTFqi+bd3WSHUW7kJjIi9xefkmd\\n\",\n       \"XJOVUTfMFPGWsnFiN2mZpxd0tMRwZG7fW2Hmo3NOziWO0548C/aXJtbMFWHk+Xsjr11lraCdsj9a\\n\",\n       \"8rlL5w1WOup2Lf/3DhXXQgh/I8b4VZ/zEMK1J4X/6KL1My1rw0Q62XX0idccPRVC+JHTUfDvK067\\n\",\n       \"4P8XP/VDbH4PT53whVWmN7h3gS5WksWicyNj2JnK86nVhCwuzs1yubAZKWpagXnGWsXFOZ2aX1nh\\n\",\n       \"UkEPr71Ae4cnEs6EZBGxGGpPqFHYFtwOiV4MBqG1+MVJX+2YpBRcxkxILyKK7gtKTUMHeamZBkEh\\n\",\n       \"pkEjXfMoNPyTlyZ+8rlKNi41HpM3a3v3QwhbXPgwT50q/Y57qceXctOlmaVeymyJ9L3na4+1iexq\\n\",\n       \"kG+G0J4w/YKFBPgy6pzZMY1ff56PaY597by1b4pi5CshhPRJnv5hnqnYPeDVj/D5D3H8WQ7/0WKu\\n\",\n       \"dnCVImGWLUxj1t7hlTXCbq6YNhQX56pmYX8zt513ZYKmxJJEHVMvp6VPvViaF6mz9YZkf+DBhSDN\\n\",\n       \"m5aM9ZNr7poo1DqxZRY4kukbGGNFZkcphGXnjXUs2UvWbXnk2FyUmuk40ldYljhUoLasSjuqdN9x\\n\",\n       \"7MrzgU53LEkL9SSnOdBZ2hAa61ayHeHyI/nlygcPxu7WZ+32thwWfb0vDDxu3Dbb2vXUw8XtUjU4\\n\",\n       \"nnPhDm+se48z5TcpPo7/8v08gBhNQvBFCzLZVwhZ/MZECKFF8iTNVSY7ePu34yS6IK6e+y6e2mWt\\n\",\n       \"z9aU9br21tnC22tt3cNS0T7w+W9ZM1/JzKuhZGNotdHTnlbKrBKbiUasLOHm+bP2+tvKVqblQOpQ\\n\",\n       \"CBODfMsgWZPVS7JWw9H0gWanqTVrCSEK6UAZK8JIK6ZyK3rxwGFYN29lkjCUigbpurk1uUy0K6Rn\\n\",\n       \"xLiqTudiMjS1qdSSm6Kp1DUMt91oJ3ouelj09ePUSb4nzOd2+4eyIf0h8x5VmXqwet2st6VZUlZ9\\n\",\n       \"WX6i+OiMfyuE8J99tTyTDb57S297yfYj0hrO2rq3J145Uf0p75MEPcZ4GEL4G4QnuD5keo3mhabj\\n\",\n       \"jcqnN0rdnLrJ9hIXjxfRXFnNalhkA9XpQvMwqjlXsVvQnjBrBKsd+lk0axBTqpQnh9wT1HJJmJtg\\n\",\n       \"KNiQO7apXRemdSJPW6qwTUhxR+Ka6FjtWbS0pWZStQcm5mp7jpIM1+3HC7IwkyYPTdofUkVCPpeH\\n\",\n       
\"I2WRKC/t81feVmSLIPmH68t2nl+TlCeqrSCeKV148g1H86ZZ/ynFF+8LFx9Y+9CeVnviyqv0/lTq\\n\",\n       \"U//GVYPJ9UUl5i3SX6Z/gePzp9yThyy9toiof/VrvUbvZ1DetkXy6LPo/l5b0i72t/LSQsV2dJOt\\n\",\n       \"f4XvPmBzjHscf47Xz/GLr8c4/LkQQq/hl7b63vp4LqtPdM4NXT/iaL6uqC+LSW3QKq3N33GSNZwN\\n\",\n       \"uTQWhMpd6x6EFZXao5Uj43LHSnkiPN80bc60s7GmrjSZm8WJXnxLP1mSaNiX2jOyrKXSdl/beaXa\\n\",\n       \"XKGn1hCl5iZqXXuuy4zklgSbasfmZjJNSdnUivuO8sxRLKUrmXr+UNa5ap6f6DdTzbgpZg15vKOx\\n\",\n       \"3NYOV3TGJHmpemJb95cTs2zsZ8+PPLVL94CtLzBMuTf0G1gpfzMgBBm+Bb/4fh+LRXfko75JipEQ\\n\",\n       \"whnO/yWeXma15HHKWw9DCH/rtyFX3GJrhRf/OW/8EPkKYZlmKA2bU6Otys7WeXndsTXpG99+0/Hl\\n\",\n       \"gbyZinlDb5IqZhOpmZVs1Stbz1iZtCzPK2neU8g8jhOc0U264izTbq84CQ1F9kUHaUs/jFS6EstS\\n\",\n       \"j43D1IFCM9TWve5AT6EycNbcWD9sSrRNw74iDM2r/mKFkLbVlnTtmTjWUFqSmGnZS1JFvaFR1opG\\n\",\n       \"X2N2aCfvWpuO3NyYO0mDopcYanu43JGOK1m2zKgpfTd//sOPOOurPO8JTzQszd8tRN7FkuaI3ge9\\n\",\n       \"j344C4nv6k9z5/sb4jNXVPmalWFpNt/1YO2ek3Zlc8oo0JxTtdlNuJWQdVitWC8ZNLlb0u6z2owe\\n\",\n       \"WMTeHISeQaeynlRuP9lxN5zR0LJST5wkj3zeyPPajnRtl7VOcstj1yzaCQOJocRlZEq5oMRbco8s\\n\",\n       \"Geo5NJbacdmKNVU4q5VOTOKSNFTKzjpxX+U6N16XfPC6unps9zKf7TQtr26K4UgWC8u68v2+Z+88\\n\",\n       \"dGNryckHgr3qSPfqiXbztjpUVo5ZHp/X+c6XDH/5OfU78AKNf0DnR2mdX7C+PWbwgL/5O3Hmfj87\\n\",\n       \"Iwf4oxaGYb+nCCF9hmd+kBeKhcPqrRe5+SLxH395q+UZH7nFl66HELIVvu8ZJ822IqXVnTic33D7\\n\",\n       \"0pFeuqkVKlO5dDw36aybq40sq8IDM32PXLSiMFKRnLGRpDTu227NtUPtrloWMuO4Zz3puSSTmlk2\\n\",\n       \"M5W77YyW4EjLxIFopBK1FOaRKqxrx/seh7NqqVRlojSyopCI3lQktX4ZqafezFcMJqmifYduX9mb\\n\",\n       \"aYbSclJohUoZ+rpVw27elc5TnWZTdTywvz7TW1lV3eq5G0byn+Pph7ze48059//WN2qmxW8RL+JO\\n\",\n       \"jL4erPE/g3/9/T6IrxV5CC9tLAo7+3y+4Au/ebjj1p/lj6ZcO511v4D18/zMd+NHv9r+Qgg5LlKs\\n\",\n       \"Lsyhrvxdbn4/zS3ak2C/21DNG/JWTzd0zKpasn5R9/HrxiuHhNxJIwjzmd6gNhltm27k9ppTszwq\\n\",\n       \"00xiW7BrlrJUV2adTKVN1lLXmZ38vsyqtdDSUClj08OwbmzfUNd1pXW3PXLJsSHmpslELZPqmduT\\n\",\n       
\"hFRipDSV2ZUbu6hl1VyltqTwUEMRKmUjdVwt2RxsmvQfebVZyfLEJR1V3dKtaOSHbvX6ZpOGlaQ0\\n\",\n       \"78w0NoP51GKe8ZtizK2x+bcuveezKDpUZ4TfF+l5CGGT7gs0lzh6h/qNL99LRz/F3X//rOWNTauT\\n\",\n       \"oMoayukHnP1UavrhG96p6KSLkVWz5lGGesGbO0kWHnKDsEjruJSQptSCgbNelWnEkTxmpskVZQha\\n\",\n       \"dTCNTWndsJvc9Fl9F2Mii5VxSGT1CckeNgWpYKxWoxTcETz2hMSyllQ00XdH5khHQyaLpTpsSb2u\\n\",\n       \"Dmu6saS3p325MC721cvnTHqlt16f2GqlVsdT9bnEwcG69ddbOuMdl1uvGNy/afDxTLnSMppclE7P\\n\",\n       \"+/l/7Z6qcVZ7b6R1nvGrsMT8eY7/KT+5uwiJDXj8O20ovJ9BeTPMTj1Tfs+wiNC++Gf4nsUoDIv5\\n\",\n       \"X/wYP/8dwbk3ogv7CwVNkVBXuNDX+OG+zTLTPapMW6mDsGV02DP66BXNtJDM70nzjnHjOalfdmhq\\n\",\n       \"U8OhVUOFSmVPtFpnhGV1cuBE4lIYODL2SOUkNJw30BLVEkPk5noGbtg0c4RjO0505QbG0nissCbR\\n\",\n       \"N66CcTowkKh0FTHXqEtC0K0KM0MDV5T5hlgnWvXQdH0oyVYtVczk2nEqC4mQEM1VIZNOc820YbB/\\n\",\n       \"zt3PZ0xe53O89rd5+xrFEfHlGOM3Oz/h/TQ7+/X4LP7z9/sgvlZ8O3/+6dN581t8/6/w/Gl8wVdS\\n\",\n       \"e63z/HmuvUd6WKYL4uXLL4UQfuw3K5LTEK5f5AfO0brh3tMjP//E2Lk3mK8t1HAPsxXDeUtjUssa\\n\",\n       \"bUVYE8spS6VkpWU0K8QwszZCOlEk7B+nspVM2V4yiW2hKsj31HEixiVl2hCrwig8Jj8iS9VW3bNk\\n\",\n       \"P07loVCHLa1IEh57ZCrV0JY6EDSUttEwtKNhosJY4k1Vdk9wIlXINS0plIJSVJtbEdxTGSVn1fei\\n\",\n       \"/XRJUg3Mci5bMZl29capPJ0LSxOTpHQ7nTvOVjQfjWTpvnKJ+n4IoUnrJdY+ujibB59j+kvvOrYO\\n\",\n       \"+Uc3Df9cw/Haiv5BqU4fGK4+ZsjhT/xu3S8hhDYunv54510SfQj5B3j6B3iuWvD57n6ML9057ZhN\\n\",\n       \"0clUy6tCHZU5RZWqJ4l03lKHRNWrHU+5n9G2MDDrNRbekzcDs0A743y6ENF1kGp7w7JgnTB1x1AU\\n\",\n       \"tQVZMhOrYFiuqIqO485UEnnY6DiJZ5xUgfKIbLHgpBDVglzwSE9mWRAFUVdTz5LMnrFCJQkkShXS\\n\",\n       \"WAhKSUi0WmOhWZq0cuvzvnR96k590Z3DiY1W7slxolHuenBpana21BtNdIvnzfcuSvNVdasjm/ZM\\n\",\n       \"lzOzJwu/NgC0QcxpzmN89Lt1Tb8ZOCNbbLVZ+1UVTXCw0VX2W/bPXzdoPPJ6ct+VNwvtE44/Reu7\\n\",\n       \"l6y02jbvFMpuLawc6m71DTfPEZ9QxFLWzM27B7KkqRH77mGoL9NzrGXHSFLXkpgqk1QaohMT76ic\\n\",\n       \"4Jy5lsoVM5XcRGGkZaBh3zkTfYmm3FCJIVqxMAl3TBwYiJpVIYQL2prEHYfJWBUm8nouJA/MGg2q\\n\",\n       
\"dRvFimr3xGy5YZLsG4UDSZpomNkLubmBZpg4Z2ZWjIW6VjZz1duXuflZLu5xj/N/mrXlRSz23pMh\\n\",\n       \"hB/9Jg/M+wR++v0+iFO8jdU/qKF537VYYYFznEx5+lNc9xXdgiVfVr+Nm9x/luIiwuliYstvME4I\\n\",\n       \"Iaxc44f+OEc7pEeK45k3X2g4fGluIyb20r53kheIQ6P+HZuOzPRIjkzzoVmaKtKhL8XEajfVjrlp\\n\",\n       \"txbbe2L3qhhWFGmqGWFXGQ4IZxzrmYWocIt8ReKahlIZZqZaqKw70Q1HmGianZYglVVB4ZJaMFJK\\n\",\n       \"DJT2lB6LgSyuaVcdkkPCsqlEUCicyFU248xuHJnGynylZ368p56e6GSZtWJFp04kjUqoMo1ipJ3v\\n\",\n       \"qOtMPR6rk1vqxoGlmxxnbPwAH7rG07uLBfGbf5LPPxdC+JsxxiLGeC+E8J8M7f7HHcMrpbwccLcy\\n\",\n       \"/FHqL/5u3Csh5M9z5c9xOVs4298uQ2j8XYqbXP5zPJlwfJaTmnP3aFzmZ384hO0G299VS4tamGVa\\n\",\n       \"46hcGymeu2H4gaEzWe1iZKNBGhKdOjpuROPICWLk6QZ3k0UR0osLZeGuFQPXdeRKU4l1EwNVvCMN\\n\",\n       \"qXZCC9O6MjzsOnm0wdKKsDzRzh+pswtKZ7Gv8sBib3dVokxXLZMZaejqGTmwJpiqPDLVEtxTSLUc\\n\",\n       \"ysPcWhyZtY81ilSdbFl+3NSMJyYlg82OeTkzb+7aeXLuYrdy7hGamcHFmXeqJXHe1z2ALe3DR04u\\n\",\n       \"rijbQzo9xjVukY3f88yePlvLfb5tiQ/UjPf59JyXvxrP6F18MxQj1UKS+i5OussefnBb6+2R6VPL\\n\",\n       \"YtLXiMHnP3pT9c+DkwOWfuBYWhXKbmn6ZF8ojo07F6Rhqirm6mluXuWs9MTkgap+UVbnRmHPOAzU\\n\",\n       \"NqX1XDOW8jAXwo5o7gmVQ9EZmVwpCg6sWparlFIHjvREd6zqGevKbFs3NzMVQtey1Chu25wfGeV7\\n\",\n       \"8pCqnFGEtio+0rWrq3R2cmwQnpalDXUxMLsQLZfBsoa5G/bCBfO4JjFSGRlYUaSHzo1fMc2XPS4u\\n\",\n       \"m3zpH3L5Fo9HnrzCJw+pxouW5eAiP/MXT1n236wk1k94n8mr7yJGdQg+a8Eb+fH3+3h+p7jC+HWe\\n\",\n       \"9JWLkX32Drm/zODDbCwt0kjv99iaEf5SCOF/+EqGXS2ee5bkdfnFG554run8aNVw78Bgee6L6ZZG\\n\",\n       \"uGj7qOdRb418bOqOmIwcrmyp045WPK8rEcKRfJYT2i4UY+90xkL+WCcuupzT5Bg7og1l/cgspIQJ\\n\",\n       \"liVhW1JnOJaFZambGmrRwDLWlbqiqDCOtS+FPWfxwEXHeqcvvCCLS0p39MOJvmhkbm6kJ1MrZRZC\\n\",\n       \"iaNQqTy2Xg7drxNV+0QopuI8mjVKjbQnmVdCNpOGVDUbMD7RGNWaw339t2bmNzi+wlPX+eStL5/R\\n\",\n       \"jdtMrrD7JF6DGONnQgg/ODJ80uJt/SD+Lq2gFyOYp/4Cf3KHlVN56WGLH/8LvPWPab1IaPH0dGFy\\n\",\n       \"eefSQgX4xMe49jJHF2rj7K43NyaGea2VzUzDyFI61kwoC04a9JPoZkKvpJUScspkoaB5jJAsxjhV\\n\",\n       
\"4ChZMZap0NIVjI01pKGhZVm7pB0PdUJUVE1Hh48XcsRWVMXLsioR7CqSNQvDoHdlS581NZVZ1zp9\\n\",\n       \"S1SmNtxzXx9HUnMj+xId3djWV2o7tBxHurH2RroiqQ/tra1zsiOsNx0X99zqjn3oqLT1OFJm7jzd\\n\",\n       \"t1RFa73a/ThRNTKtwbJpeEM5yElucqHPzpusvsPnvMdnJISwdJ5/+6MsXWVvSvMV/vyvLKS+vyUq\\n\",\n       \"xtdLMfIVZzUhhL/6nh9/Osb401/D737Mox1urXLlMNg/s4qZ/eZVg5+8Yr4/pzVku+TZp+kcmD23\\n\",\n       \"73jzdYP8AzZ2Mtm8NG9OievsP2KrT9onPVbbV4dnlSHFM8RXCPd1QktMJnYcW7XrikJX6pZMR2Uq\\n\",\n       \"VVn2SKYtlRo71rClq5TKjU5r300jK6Z2TKzoVzNnQuFRY0M7RD0PtJ24r6shMY5Ry0TZLBWziTqd\\n\",\n       \"m7UbzgwzjfmQdq6bToQ4MoipflULYdlhbHB/4kHY13xrYPzTd7V3mLzK8iabH+DN51lNmAfGx5x7\\n\",\n       \"yOOrFqvybyqcmp2t40vv97G8B5/Fx3wDFCNTsvki3v7/hwURMfwD/vl/yHMX2NjlrRXuzLn6KsUH\\n\",\n       \"mf/XIWx/np2fpX7tXXvrJssJyU2bT7ddPp66e75heOGsJE1Nmk2DdCjrFqrkgiJ5wiBMJNalVaGT\\n\",\n       \"pJbrFuGyRjk3yYdaIfi8rmnJqlK/vm8S5qoYhLBtNxyJcUqyJI+H6nBWqqeZ1EqpJA7EkKo9VKj0\\n\",\n       \"jeRKM6vesaEOmZkTd0yMkJoiUVlTGjoTWi7V3dNtAAAgAElEQVQIturCOORGCneUtizGCAGPJPJy\\n\",\n       \"qh2PbXRrzbJj0Go6U4+MkwMNQUhbkvncYRI8Gh7Y/sKxp19mbbgYX/+Lm3RepJcvFnfZe3yZLkxY\\n\",\n       \"u+60GDm9RlP8rnRCfi06z3M557XrjM+RzNh4h6dx49tZO8NLN7/8SlkKfObpzO7jc175lhWNtVr0\\n\",\n       \"SKPzwJKOtXhsJ689GYPN+YIDctTmYYw6CVcSBgmbkdsJN+KCxPpOTj9JPQ59r+iZq0UtU7WmVM9M\\n\",\n       \"Q72wUqgLMa/kZdNyGhy/tClv1qpkKFrWEeSaDpxTauKAuEbYc+yORxrOaGiYKCQem2p6ZF20pDaM\\n\",\n       \"tXk89FyVWlMJoibmMwbFfY/PfdxkcE1zMDC2r9zNjJcKk2btcLfh+NxF6eul9kpt9Xru/vGSSRpN\\n\",\n       \"jgNHy8x3OHiV/YzHu/yTepHu/Kv3QIePfJj+h7gHy8w2GB3yLSGET/1WitH3U02T4Z/ig/hnIYT/\\n\",\n       \"NMb4mfduE2P8q7/T/ZxqzP8OP/GXePIStqfu9bc8vPmt5m90KIc0Znzsw+yWek8uW63aurNHdtc/\\n\",\n       \"7aCxqbN7IMQV9bRP6wKxzfSQrCTbInlsEZH5ELlM05J3jOMjuZluCB7quy1R6dl2pBDsOqdp4p5D\\n\",\n       \"B2qXNUWpVEtPYtVYFIykdqWO0E9rtcqhhpamsQtObNiWqkLTfQPHyVA5Wbd+a6C6dIY8aDQqdSyE\\n\",\n       \"sKuUynT0TM2TcmFTXZeqak21t2/zHv9qRusMty/xxlPk63zboy+71T7s8fMvWLj8fDPi495/s7Nf\\n\",\n       
\"j8/iL7/fB/G1YEijdzqXHpG/Tjb6TWSCMcabITR+nDPLiy/d9iGXj7j7rTzf4qkxKz1e/iFe/adO\\n\",\n       \"VUbH3P8CT5fiubkvdK9LqmWt8cDJxjUxOVKFnt0wkMYvmsW+EFsmsc840RPUzR3tTma97huZ2qiW\\n\",\n       \"7KcddXZiCXXat1aXgso8DD0MI5K2EOey0FZo48RUH00hTPQcyezrqC0LWla87Kq5pkXJcs7AA7Vz\\n\",\n       \"2pZlZsZ+RRWiSzqWnOinlQ6WJN5Uu2ARM5LgAoqQeJS2PRPmTqpEM1mzkrSUBo7iY+Nmbphnjkcz\\n\",\n       \"9S+NXX2dOmv59IuZB/0l8w+sCPXMF7cP3b1y6Ft/iSunRmiTnNnvKHDtt45si70P88Kcs0OmTd75\\n\",\n       \"GLu79O/RmzLLaZ0SVsslssZ5g/4Lzt8cqNcKs/W2i/O7jvNj7ZPa5eXgagzGKfOUfsIxBpHjMmjI\\n\",\n       \"LJ0ULjX5hQZ7KceRg3TbOK6qTXQUKE1VRiYajmTGWnKx0Vs0LxNmKxetqrVUakGhr3RgZhOlRFSb\\n\",\n       \"EeZ4SmXPTZU9pZauUtvMA9eVrkodxr6RUgwjbyWlj8+CPKGqOEzYmJT2RY3Ht8z6mfrR0/yzoXh5\\n\",\n       \"rn76WOfVM07CkubRTDbZMTt/n9Yqg116Oxyv88ZdTvb5b2OM8690VVZ45qJfS+RPiRfxykKF9fVb\\n\",\n       \"jJwSzL7792lfj0MIf41716P0akf1J/4EX2wu+nhuLfq8rUKzG5wrl7RuDJTXVmwu7wvxnmmdCXt3\\n\",\n       \"JRuVeOaSmMzF6g5Zj9C16FlNLfyPJ2pTQ3MtbEpdj2veCuc0TA0deaRjyUwhE60qlOaCxFBhdqou\\n\",\n       \"j6KgY+7AWE+2mF2bm+idyoAJ1mxIdFXmZrpxSRK3mBdWjh4abL2tytcNs1LIDnDkqVnHnTD2KLuk\\n\",\n       \"UfbMQ66YDVWt14QnOpYfjD25c+qxssHtZ1ifcXTIzspCc796TKttMTz9ZsTXE3n1XXwG/1MIQoz+\\n\",\n       \"QIWc/QM2r5IG3Fjkpvz9GOPj3/xfFXfo3llkS8HPfIRnIo05JwcLUvrSnIc/fCaEJwsGazy5znpi\\n\",\n       \"uNkUG6vS6tCsvUTa0p23FfmuJCwJ6b5ePXJ2NPKoPzDqJJIkkcWgXc+l6UQdUgNtBaahaWxXVc/M\\n\",\n       \"0obczECh9gRWpKGHL6jjQB2WpQ5Rqowce2xVYtUiSHPgrKmlUxn/xNCa4Oh067m2WmWuLVhTCBK0\\n\",\n       \"NX5VQRPkotWacZI6tCZLKkU2d1tlLSusGRnpa9RrluqZpdmhWZiZ/B9j/gM+852XhR+8Ih7NWf4A\\n\",\n       \"nQOxe+Aov6TZec3Pf2LH+k8skry/lDB6NYRw3qIl8fD3bnRb9Dnb4YlTE69OuSCZ/ujzlNmimHj9\\n\",\n       \"0iI7pjlnt9Pw0LbmUWGyHI1DpdlNJMmmGI8Mm8HZOiqSSpHSS8hrOgn7MdEVmBNrqhlZO7cUo8tS\\n\",\n       \"b8Q105BjrOXAzIaGucJDwY6WtrUwNzMkTBTJhrkgrxJVmpwOXgZ2JaeGpk2ZwSlXdNninXJelBlo\\n\",\n       \"OZHqeltPoafjtiumoS2Pc2UYG7jrF6uJ5/cX5pS7bVrjRDFLNR83jIbPqn+hy+RpY6+59wHON099\\n\",\n       
\"8kXjdMX+K0eU/5InA0WHN24yusf//hsVIlAwGC0Wpr+mKD1VOPyWHLq/XsY0v+c4ZXu/ilf7IcSf\\n\",\n       \"4JNPMcyoXuXiCodN7bSlcwzL8rcLwyrlXGGSNlx8cOS43XLUjgKKvKPSEsxEZyw6Ix1sq90yD4U1\\n\",\n       \"mS2FPetSTbmGSuWBfZWx3LGpNZlcYmQiyh1roFhQngw8Vom2FO6ZG1qQ4fooRWMtqUpDLZWokiXZ\\n\",\n       \"dEeR7buz2RIPJvLWTb2s0E4zvcMg78xNG+tmcUWRNswtq8Iq8ZYsXDftveEff3LF6NKWav2RamPf\\n\",\n       \"2+3K8XmuPV6stu6dZzC04Fp9M+Lrhi/yLmJ0PwSVxVL47lfb/usJ7/DfvLNweITbX82pcxFyaYc3\\n\",\n       \"jji3uTAum5xdBGI+TBfExWmj5a1P9HXPf4Tdh5wbyz4y1nkwVs7Oqru1JGPc7Frdq427lMlA3d1w\\n\",\n       \"oVpW1m9p90oXpycO9cyalVbNtJo5SGdmoavMNs3rM+owNAoPNZKHGnrqOFd5WhKWhTCSWBK8IITP\\n\",\n       \"qU8NC4OBloFULdWxpzRQ62iZaEgURhrGGlpa2De1KhUXSjgnClFPX/tUkVeqjdUGp+4lj22ZWRWS\\n\",\n       \"h5q2POGxOrBZl4Ijd8KmaexaHhxLu9MFKbHY5tnvFb94hwuXmOeS5Y58/VCaz52sbyiqHT+2z+wO\\n\",\n       \"d37mHH9xezET8ZBRCOHvxRjf+d2/U1aXaN7jwTprp/fIvSt0+4tuWChZLbjfpH+T5PXE3veV4upE\\n\",\n       \"a2kh2x2pzNJgXheGKYnaYbboXDTighS7izMSG2o7VWGnzVFMrGp5flopmx3D2LSXZO4lHc1QWPGa\\n\",\n       \"A8cSR5pqB0ot61qihoZjS4t+SLoQZ1fa2h4KWtLTb/NohisWfuv3LSRDS7gvmqvlck1TywaW9YUF\\n\",\n       \"AbrIhLBl0Hnsdp6IdTAa1Yat2s6dVHXzRV5rUEKbWXDv88TJvvZSqg5te8PvcvTpCxz/BLc+xeer\\n\",\n       \"Rbf15rtqqd8Iu/ziK3xgm+P26T7usXxjUZz8lhy6v2mKkffihB//LG/c4EMJ2R7/8jrfvyQ+t7gR\\n\",\n       \"A0I9UR8d2r4b3Pu24MxB0I9H6kFf7DWFrOlIS4y3VWFVNLHwyF2Yj809NnO04G85sSPT19TQUSkd\\n\",\n       \"irbtWMNMT2HfY2Pb5hJjqwYOZcbOOevYocRM7a4ejhUOtdARDFXWpUZx4RJ5mO2TjayeX5YfrGjs\\n\",\n       \"jRysDqw2otBsuJctu50uKWKpDg0xHlJFWmeEcirUm+5+8lkbg1XN48wgm6haM+vdubq/4OGdG/Kw\\n\",\n       \"TfhgCOGV39wT4hsL7zE7+8xX2/Z9wMv4kD9gxchp8fHKV9suhLC8wZ++zjMZ9j0c7frxks9fOm3Z\\n\",\n       \"l1z8BfrDxNtPnVUsBfPJUHLpnrU/su5cKLTah0I88aiZKKqmSTJxsLaqngyEaa2bTvTikYmRy+OG\\n\",\n       \"1cktn+mvu1t3PcjnqrQgPkk4Y6QWkqEYN4zjXC8M5OY6EuOweN5bCmOPRT3BEm4LgpauflyTh6kj\\n\",\n       \"57Ud2VPIzE77HtFEpTBWaAh2BF90om+x6jxwYNk5iVQtShxK7cutGNtJeoY2FMZOtOVairgiOjZR\\n\",\n       
\"WzG1Wd93s26r5sFg3uHOCb1lNs9x9wadivUTS2cSeZYrGsHq2x2DTu7GYaH4H6/y734Po+1TvsAO\\n\",\n       \"3X/GD4cQ/vpvJ9PmN7jeKdkH2fz4IqZltk3n9cWi/dZlygbzcuGOvXaTg2NuPUOvxb0+j/enjn7m\\n\",\n       \"yPr3NOVFIa7XymaQz3Y0WpVz8+Aoi9YRq8WkoSgYp/SS0iwg42jM3eXgWhVMqpZYVNaSEwfpum5I\\n\",\n       \"HXoLI88rbUkFtdtyN5Q6gsyagVpTD4e/6iiSachNbTpWuOWuy7hlMSiaWlic9i1MXtaM7UsMZPoS\\n\",\n       \"jOSq2FYntUF9RrM68nq4IB9MFC4a3b1J9XmmFRfPcXSPJ29wpa1+venuzj5/L+WZZcIX+einyO8t\\n\",\n       \"CpG/Z/FKa4cQ5u/liPx6xBjf7obwY8d87wXCjHCH4wf8r79ZR+W9+KYsRk5P6tveQ7zshtCMRs9u\\n\",\n       \"OdhKNJNDk+27sqR2VKE+drXouBcOXU8SD5OLYrlnlJdmhhIbgkLqLU1x4dehMBC8YduJdROrKocW\\n\",\n       \"dXmlq3LRqqZc6eSU0Jq4Y8nbcrlUJlhy4lDi2HXnMPO2Y8e6Ej0te24bWvFA17yoVfGuVuPIclyS\\n\",\n       \"t9fMzncN6rZ8PlbF6F7+lGq3QWdL3swUjbZqfMR0k8N75udrx+f66k5f+jAh5qZJQ2PWUpUHJmml\\n\",\n       \"a2EffeGE0fezn4UQ/revVj1/A+EF3I3R7+iL9vcI7xYj/+j9PpDfLZzyy84jW+fPvsjW09zbYnSX\\n\",\n       \"5Z921H3H0f9C9i849928eAAtxxeCQZ463HxoeytxpW5rOlFspLaPa2s7d728fkFW7Ws3Bor6jnmY\\n\",\n       \"6zg0T/ZJaum0kHfnrjfGBnHdSbVknjyhoa+ot8RkIrorhnuCrhCX7Ie2k3pMGi3bM9RRydX21O4J\\n\",\n       \"mqKmYGhqoIpz03BoILfqrFouc3iaOJVgV1SJvhUP8Kq2XWfkgsJbCys1E4ljG0aO7KIlNT3llz3S\\n\",\n       \"djG2FGZSA4chqmMQ66AcNN0ZpgY7u4v7p5ijXLz9jx/Lr16RVTnzplA1xeW78ryw2m3YeepZ8u33\\n\",\n       \"dEjPMHqO1YcLU8DfoSPwyvfxwkd4fpdGxad7fPHD/LH/l8vv8GCbo+9gf8hL+wuvmYcPePs8X4wc\\n\",\n       \"/jXO/5X7+m/Ujj667P9j705jJUvv+75/nnNO7Xffuvv2Oj0znIU7xUWiqM2WYlMLDMeLEseGYTnI\\n\",\n       \"Ar9IgCBGXgSJE8BB3iQBEsQvYidxFMOIIy+KF8qmaYuWRHEbchYOhzM9M73f27fvfmuvOsuTF1UM\\n\",\n       \"SZmUhhxSQ0X6AQ3cRlfdevo8p875n//y/a0U0er4yP7ygYWUmuh4wIMO5wuGGad1NsuZ0++NZNZ7\\n\",\n       \"kSWUZaZetuXpRFIfW0ju64TUkQaGNiQuSSQ66gYeEfXV7EtkSlEhuK+Y481yJ3Ycazhnoq2vqeHI\\n\",\n       \"SA3bZlmRXfMoDyfY0tdVqrTUtFAlpaGWqUonrovF+5wVlWo4YnwqWX/O2rVPaPXquu/fcjb9cT7x\\n\",\n       \"KMcPWPw1LjyY1eMvbc0ogGdhlqX8b84zSGaf+2COcviWBN5BjJ8KIbzwOhfM+A93v51y3e/LYOSb\\n\",\n       
\"aRDjJ0MI/fuO/2rH4tVAtWnUuuhksacWj7y+NLZkydX8RGda2akl0vRQO6xIwkBuILMt1RFMRWOF\\n\",\n       \"ykPXLSpcVjgRdDGVaWjZlVnRsomapr7zgi2VgcxQ6cCOqaGLVh04c6Qu9zQ6ckOlJUcOHFI2TJJK\\n\",\n       \"lg48XSxop0tqoe8kRA/CmmKSCdW+vLFsnE7F1q4yu6hMGqSLtPZmxdKi4Wy7J2tODTu5vFrU627J\\n\",\n       \"xwMHbdI1LpywfJfmkO4Jl6/z6z9g5v77+0Hfj/0iX9Vz+FNv9SK+WwohPMKln2e7w8G7BqbvekX7\\n\",\n       \"1m3jJ9cdPfhxw2ffTWefR3uKT8xawQ4/zIVYuL+ceNjatnJwS+tCzfrJme7GREuqkUWtowPLawOj\\n\",\n       \"pKlejAzrE0tJagUrMbeWB2drqXBSGVapqrZktXlFFZclBcN6LpMobEgM1YyMQk+Ii6p0LLWvb91U\\n\",\n       \"Oe8B2xE9KtpUuWUsMwjV/EGmY6SQOJPanLeuvqhlxaqWRKbrgVOLKjWpmmWpqQZmGd2+jq5ly7qi\\n\",\n       \"VF/w0JJD14yrL+uGuk4shVBIq7ajKjHortnbaxid7Sh+2czfp1wL4bnnedcqxZeEdEWRN8hyjdHQ\\n\",\n       \"VE8rp9ass7X0W4BY0CZvzCzI3+TeX/tZ1HnpMqt3+IkX+Kcd/v57eeQ+p+sMIj/2mdnlC8736PVI\\n\",\n       \"J5hS1UYebd5UO2vq1aIiTGwP2W+TnczcmgcTHi7zaJyxatZwP/JqNZvYyTLKamJQ71qTioLU2Lqb\\n\",\n       \"7lqWKG3JVDJjhVJuINFVWdKwpnBi4shInM9NjeXzc6PmyFNSU6WaxAOZA3WJQsvYxCzV+TaJBUt+\\n\",\n       \"SNN9XYyEeUm/L00OKBfUpk0r1bHBym15deKRTuKJlyrv+kLNZ7dTd9KRW02iC/TOcXWFx3+Ej72m\\n\",\n       \"8b6p9qV1ro4Nm9umv/xj4st3WPkkvxBC+B+/2cj8VzXPcH5Hpfs/CEa+US9tGTyzatDdklxtyIqG\\n\",\n       \"iwd9k5Ud9y/dcGE6dKWsWym67jUuS8q+pexEFXNH4R1q2lJDY30NKxaN9d0xseHIVN95leuCPVGl\\n\",\n       \"MnZopKbU1HBi5juRGGnJLNpy177cvqGBDRNXZqeQSqmlsiSzHEtFcmo/JOrVqk66Lg9NIUx0Ylc9\\n\",\n       \"LBgt1yTTiXFyS2OtTRmMRq9Ia21la4DIYYf+iXjQkl9P7Dav8GKN0YH9p0cWCjYPeHw+svfMBhsv\\n\",\n       \"sHU2Mxr8fRWMfL96wDyHv/pWL+K7oRDCMtf/PH/0jPvLrGw2bA1GTjaW+Mqxg3P/yovvfrfhjQ7b\\n\",\n       \"3RmO+mMhhE/zymbJZ5pq/9GChdMgP1dXn/TUxgNxsS5JJnobueX81HvudhTXCrfSxCPlVJ5kqjJV\\n\",\n       \"1CtrZenLC5lury4bVfqtUgyZNPbV4tg0LAuGApK4qxnOrBrKLDs2MNKa/W+qe8pkTU1i6jVsmLoi\\n\",\n       \"OFObhxFdTWMnUgfqbqtLXXVeTaLQs+GGA5ftCIbI5YLckS2JhonMWE9b07LEkkzw0KmUJDqMtzRC\\n\",\n       \"33Je13jxyGBcd7esdF/sqt2qbJzS/2AIYY/m8BOsnDO+sqrqPSekOSHTGLKx37A4qXsxG/DKLj/w\\n\",\n       
\"5HzPDmm/xNMPearD5Y0QsiP+2bdbrpllw5b+Am97nKd3Z8HW3ffymQv8wJf4hyN+9RMoufgnv/bO\\n\",\n       \"YZPbH+Dmdba+xMpfZmeNwVWebo51puwszkxodwsuVSRD7o/Zbs14IiMzT5payXrOE13WIzs5z14e\\n\",\n       \"mqqpqUxCMFSzOv/ssdyOwoIpGl5Ss62jrm1JadmppoZbrmNRqpTqzfPiY9FYsG7ROdwWjC3rO5Pr\\n\",\n       \"a2FBS8RAYcminqGRMu4qjDXiWBqeMh5n0upEzY7m1lhzVDla5zc/2CBpe/TkJSdPPe1kN0FkZZXs\\n\",\n       \"FY0PTV1aChrlkryIhsOviB88b/f4CfbfzqW9Wbbr17+T7/PvpD8IRr5R61vEBs0PqG48I7wnV2Rt\\n\",\n       \"zeMVw3TRa4tHFh5OdWOq9uCW9JHKYlpoGJqGoURppLIss6qptI7b6qIHFkUrZtvfNDU2tKbpdQ+V\\n\",\n       \"1i0ZSoz0VXKpTM+iiYGGgbZUKlNXGCllUh2VXalhKOwKzse6UdKSxEoMxSw/ExJjZyb60ubQ07Gm\\n\",\n       \"JRGqh+53rtotOtLpnmkyJTvj9R/j4y/wkwe8c41am6LtYPgai6XOKbdX2EvJb/O+PY7as3rr7xv9\\n\",\n       \"EP7bt3oR30Kv4lwIlmP0LZ9gfm+o8TRPJWwMeO5tvPe4NFoMFpMje481NOMdS1cLw7T/dQZs8xvf\\n\",\n       \"SQjh9ZtqPzPSe6KmmpzaWRvZ6o5k45HQnj1Rdx6y2ZvY265RW7I46uo36qZFbhKiflnopdFBOZFk\\n\",\n       \"Z6ZFS5KuGYbSzNxsKHEXd7RD4jELmiEXNazr27Fqb47xrjTlUrOutHWzC3BHYWRgVeKWiYGmPYnc\\n\",\n       \"ZQtqzuYgs9SSaNVNqYEzq26pbBgI7jrWNsK6zLplq0ZKXTsW5cY4Jw8tt6vXtMNEKhj94kT56Ykn\\n\",\n       \"/hRv77B8nhc+wtl1nnhmZPHgti9vdb120NQeLnjbcUNnPDVodd3cSBz/cuTGy+wucWltNgzw49ss\\n\",\n       \"PsGdx3n+Fo//BpfmcMTRG9/75G287TxXz74GN1ud8Lnz3DnFp2OML0MI4f/kV/4c1y+Tv5dykbUv\\n\",\n       \"8NMvUib8gw/PWkEXIxspxwmhYn3MV9q0Trm/wFptNkF4YYLarEyzNgeWBFwe8PqIu/Wgn7Ia29rh\\n\",\n       \"iovG2g7tKn3ILKtSSixqWDGWKwxEUSFxTtA2QBDm5qalyhiFjqlCXTCQe10imvWOLKLUsKFlrHAk\\n\",\n       \"1bHitkHIncSWKj/naBo1khuqzm0hnlrusF62tK/kqotdewvr1p4tNepT0iZlPkv9hKnFi6n6NFcs\\n\",\n       \"N8RQClXN1uhl/cee0N1fZ7gwG9P9nugPgpFv1LBLtkLjVy0+cWR9obTSDAZlR9ZvKE9pHrE2Kl3+\\n\",\n       \"lxMf+9OF4t2cy2tOG5FaqdRU4lihrmeqI1rV0jEV5AYK5wUDJ051FDK5ysChCuMZZVVwMh/WW5er\\n\",\n       \"CTK1uTFeT1Q3lQraxhoGTuyFqYape4ETuYGOussmesbJoUuhbl1PiEE/27BdvW5YX3ZYtNQnpXR6\\n\",\n       \"onPuV/nRy05fvqlIb7Bdl7V60sHA6Wf4YpPyHuePuXw0w3K/ssnxP3yrN+93Q3PY2YbvL9jZ/6cY\\n\",\n       
\"lSH4khm/59fe6vW8OTWWZgTMKqDJ+snI/QsLji+tyEcbavtHGs0x74izO8A3KMZYhpD8Tzct/yXO\\n\",\n       \"jTl8nH6DZpsbFxn0eO8pp6uF0ShTNgpTddMi6pYTSUzlMTEWXAmsOtOdDt1rjR2nJElDaiQayGJh\\n\",\n       \"Lcy+5WOJhkNraHnNWHQW2nI9s5nrJYnmfAC3r9JQqGmrjIwFOSp1A3V3JZatSGVy0UjLxLboVUv2\\n\",\n       \"XFTMzCKkWgrLJk7cMHbibUYuzo/HPmr0PmJ41OczBzzyJZuP8lMP2Bhy1qBxmQ83OHuS9c/yM58a\\n\",\n       \"+scXtnx5v2t4fl9jvWl8Epy9PuafxBjzEMIv/jr/WYefvsZGm/2S9inrT7N3PLOTeNqMgPAGtfI4\\n\",\n       \"j+5wtsVqg/Y8INks+fVtDp/5un2+H0L477j/fq5u8dFXWJqPkyaRy4EbE6a7PNziqMYPjthd4As9\\n\",\n       \"Fi7SbtJpUB8yHswG0potjhbYqc+4IlVrdi4u9yvTRZbTVKXvTEfdok09XaUCU5XCbL5yoDCdz0wN\\n\",\n       \"1SW25/t/otA1y9OM5fM5qdxEqScTDC2qLEmsiqbG9qW2NDRM3ZcqJLE9Y9U/fKA5uq8+CNJqoLzU\\n\",\n       \"slWs2tjN1Ce5vH4oz3bsvnPb5IX6DKedHc1inZVLkuWWfEhs9eTLDyVHizaOR+4uwAHt3tfaV77r\\n\",\n       \"+oNg5OsUYzxaC+ErJxb/2AVPFps6B2fiuYmN5p6d5Z5BxvPZjIz78tPFrLcn5059KnHqQXyUsKiO\\n\",\n       \"yrFDXblrrmvqGTvRUGljqLIuSvSkyJ041bBrzaqWpsJE047CmcpYW2pb4VjUVNdQk6GQmlhzSdM9\\n\",\n       \"Dx14yrKmqQ2pMyP3jZSWbFirBtI5z6RM+5bz3KPjtlZ/X7lYaS8MpbVSrTqzstW3kw195FcYbJFd\\n\",\n       \"oL3Fa5u8+iTFMUc5+2fc+adMvviWbt7vnn4Qn/s+g539Vn21ifX3eDDSvcvOj/JUJDvluBlVvQVF\\n\",\n       \"lTI9k7em+uP38dkJP78YQujzqRjj100SxZe58w8Y/xE2jri/xt5Dzv4qtWXSf4vHj1idJqZ514Ol\\n\",\n       \"RGvcsr1fefgYeRZsT4OttHS3UYj1wvr0jknjvFpcVgsd0ZKzsO9Ycw4qnMx7waJc5cCBtVBz30Df\\n\",\n       \"u9WNpRLRQGEi0VDpy02lcmuWPaZnJDhvqu9UTV00NVVqa1sSZTYkzpvdzBYFd3WNkMq9zdiVeWdJ\\n\",\n       \"UFg1KV+aXfN7B8LVSmNSE5dzzf9ndqxefpzVR9kYMzw3w6If745d//Krjn9i2bW9uta9M5NsZG/C\\n\",\n       \"zSU8SLj+TpI2n77K5Yv0JtRe5/2rfHyL8cKsAfnbCEamgxnp9cJnuP0hOu0Z2v72Irv/OMb4Db4o\\n\",\n       \"McZpCGGXduBLP0zSoBpTnjC9wvICD1cYRdIBzzfpdmcmeI/26QfGgcd73FnieEAR2QmsL7LRZ2+D\\n\",\n       \"fkWzXXhEzWqoaZrqG3lV3SUrMiP3DazKpUZOtdXUNZVyIylKq7NsupYgEe0wh571pBpuqFkUvEuq\\n\",\n       \"JjU0cd9YR+HYUE3qTOGuppo8vt3C9IZ0bdfCjcK/8zf4zafqbv58TXvv2GCb6bQpGaxaOnng5sap\\n\",\n       
\"0/7f5x2nbJ7RW+H0vsHKtnqYCpO7qjLaqjFprRk9eJXNFxmN38DE23eqPwhGfotO+Hzqwg71q7mi\\n\",\n       \"VjEcKbOBtcZQ0uOdH+PVxzh6hI/8Crv3OPpA5WAzUjyQ1wYGyvmk/6p0PsBVKTTtmrqqbmIomnkL\\n\",\n       \"XFc5E1Qa7prl1hOFoWCkY9lFDU1TZ1jUc0tzXq5ZNrLhvHROW7xiNMefzTyAl0xsim5KqqY0GZrG\\n\",\n       \"hnaMYpJaGTKsdU1qQxdD3bUkStNC0ui6s5ErhhQLbF3gXUeMmixfZDiepUrbr82Gzuo1/H4p03zY\\n\",\n       \"939vzHP40Fu9iO+CXuWVuyxeZusGL3w4k11osEv7Tt/dpYtOiw5PXRLWSrU/35H/yVoIXyx42ew4\\n\",\n       \"NKi/h8n7OJrOvFQ2j7nxGPt/k1tfZvinWLqQq4eG/kGiU43c3IjyUGpVwTumFYuV9yTciZSNFYNk\\n\",\n       \"WxmitoCWFRtOve4Em/NZmYlKV1M9Rgthz6mWqUNNhdKLCssSbUFP5VUT+5qanlJa1LZnat9EUyGZ\\n\",\n       \"M0i6WhYsylWG3q6hlJuVfiuPaXveUEZCDJQAACAASURBVBBsS+TKeS41ix2VJXnjczqPTF1dzCyc\\n\",\n       \"lIbbNb+03VY9GGlcmrrWSQ2zUlWyfcb+RQ5bpctfPvYDzxADy2czP5hf/rkQwqvn+cCTHO8Tu2QX\\n\",\n       \"Zwc9XyI7Zv10dvff//a2vv8iL/04j/R528c5WqPX4n7G+O9+8/c0rs5Kex/cZfmEu4s8/4fZ6rBV\\n\",\n       \"cn6XgwV+c4FXdige5z31WfmmVXEHzyzO2Hs3a0yTGexxtZiRD9Yr0jrtIlgqKgfpVE2iHQoLKoey\\n\",\n       \"eXNx0KoqSXLsxMSylpahidzY4ZxAElRWzJxuemZAjkxlVy7T8bRq3ldS6ai7aOaAWYleNnEqmqKu\\n\",\n       \"qO5rhCjNma4t+J//zJpR59hGZ2glS3TGmVoYmaxMHBSJwcmJ+BsbvPsHebk3s5CIXzHdeUk9bdvs\\n\",\n       \"tq28vOf4qWOvnB8b3rzFzgEf+534P29GfxCM/OuKpcu/eeD6b5zafzwzOJ9rDwsX7/GlKY/dI38H\\n\",\n       \"C+mqT/3QNWmeiXdv66+umbYWpEXXNFuUuSY4lbvrgUWrFiSGxl4wdWhGCGmrdGQ6UstyAyceKKyr\\n\",\n       \"uSYx1XTiwMiaqGPseJ6o3XHZspaacm7hUSm1JXKLaImCMyPJ7CKYzN57LkRTqdTIw1bDTjmQlTVP\\n\",\n       \"HhZqjYmiGFt7EK08ZHCd/af4wMEMdHa6xmZBHNGPPH6LzSHFVfavm/UrvCGFkD7J1h+ido58l/1/\\n\",\n       \"+T3Zze++fhj/1Vu9iN9Bz+Hff6sX8WYVYyxCCL/Ir32YlfczfSEYJ1ONrKEsrju9uy0+0rWwdqbV\\n\",\n       \"7jm3mMnXVzz46Ibhaa4qjywcnHq8zweP6Ke8ts1Tr7Fyjn/1I/Re5PjXeDBKhT/blF6tVEmN9Irz\\n\",\n       \"ZSo49aXWoYtJ5UmUaWovLEsQpCrBBVGi4UTHsZF1LaWRQ9Guyko4NVTIYksalixoS5zKPZSb4tjE\\n\",\n       \"SMdQRzBRt6rtnMKZsddVMixpaltXSB3NnUoCgqZoLKoL6kKcEGZI+iAXzfzqojPRwKX6utWVUr6x\\n\",\n       
\"qd1MdJbq7p5/xLix58XY1Rw2lIOR1x4fO7/L6eNs/wbrX9eEujVg5TK7S4E0o3yUB5/j6fVZCnWY\\n\",\n       \"EA9Y/jInk98G7f8t9n4vhMbf4+//cR5JqCK3C+79XzHGo9/6+lnD6+WP8MQXOH7b7P97ssz1GoOU\\n\",\n       \"uyNubhPqLKeM38aFOo8ms2CjStgIvJxSFjOy73qN1W36AyYZm9UsCOtUUZpG7cnUca3SDJVakroT\\n\",\n       \"B0Zh5LKgTGir1PWcmtrXMvCoSpR50WgeMM5KdetmAckmUqWhoa7UJZVEUM2nNPdFqQW5IMhlugYu\\n\",\n       \"J9FymaoPgrtrFx2Gi6qDL8tkhrWpZpZrPKhbGI/sXKrpLqxr/6fHGuUXVLevOst+kINUvvrQnd2e\\n\",\n       \"s/Vc1u4Y3NrV/++HfObb6/f5zvSmgpEQwl+IMf7vb+L9/4MZPOqLMcb/+M2s5c0ohHCe2o+y+hhb\\n\",\n       \"BTcv84HP5y7u5l57P1sbHGbE2PbMT+eSJ1L5yjnlwwuWXk2EB335ExMPmk1Feg2pxFQmEy0pPFAo\\n\",\n       \"tKVacj0Nhz6gtIDXRLsKG9iSGenatqAuzivJwUNndi2qeVR0Og9AEkMjuYGgdGwYmzpheY6Ijkg1\\n\",\n       \"HZhgEjKvx4mzEGwYGhnppZcUr3Q1Lkwl2UR9WGk/ZHWP4eLsiztaml1TmIGG2jCapUFvn6N1b2ae\\n\",\n       \"ZcsbDEZCaLyHd/08Hzzk3C4PlvjcL/DCd31vv5sKQQPvw2ff6rX8DvoSngxBPcZ/fezy95LmF8F/\\n\",\n       \"Mf8jDeHpC3p/7ie5e4uP9DU3h1ZaB5KFVZ16lKlrlBvuFecV9R3ZpVsO+w9Uh1ztki3y6lP8wBf5\\n\",\n       \"7J/h+j3OrS57+Q9t29+Y2atrLsvSaJQlimrFNDbcTe6oC9ohuGpsR2mASs1Y4UDHmWVB7obcvppK\\n\",\n       \"w8TImdKK0nLoGtpReVRTWybYMFLZcWTFBSNtY4eilPkUzYKOoYeC3MLcj2rJmUyiZqqvUqGm7kyr\\n\",\n       \"6ukmicxtuU2LsWcS6oo4VoY7amHN6rSjap8ItW2d3ljeOFLvrGuWlxxVr/tsreOdg5rx4titx2Y0\\n\",\n       \"04vv5MZ5Nm6wdjoroYwjJsc8/xo/9xHuPMWnXuC9GesPWTjiwT5/+zt5oo5x8mwI4VVuXjEbp7kT\\n\",\n       \"Y/ym5olYYKnOk19m92zWP7u3SCtjv8V2Ovt5nFCLHCWz/pNBwkacMUyqOovJLG7KFnmwzShLDdul\\n\",\n       \"pcDWcMYAG7YIZd00YZhUepEHcWIaxkY4FvWQCtZkSo8ZW7IgUero2ZR4RfS44L4ZsvTdvjYjvYGu\\n\",\n       \"0o5SZx5Qjk3sK3VNnFezKhq4JlpMjqkFxdaqrdDTbRw5rS9ZHeTuN6OzZGThAsNh6iiJOnHTY2VL\\n\",\n       \"u79jfOWGw7zpVvuq6uFEWfyMo48d0/rHM7z7p+emh99zvdnMyH+N7ygYCSG8D50Y44+GEP5aCOH9\\n\",\n       \"McZnfsc3fhcVQkiX+GOX1P7igrWlocbo0PLBUFHnV36aD36SacmdFe5mWzrTDdnJPYN26mwhKloP\\n\",\n       \"jVcTVZ7rL2WqZJ2wjZHoSHSgMYtPFYLrenoSGVo+476nFZZQm/tLjHBFIlPoS+fFlpYNqZ6pRGJs\\n\",\n       
\"oC9xx5nr8xNzKFdIPMSyqKlhYKqQe6ipI1eXh0ccV/tO4sRiVbg6va3zBe69g/EaizmthzNvpAdv\\n\",\n       \"52SB1iafXuDiPRo5B826wVrT660103Tfl352qFlwrRXC5iMc/qPfbg59RlS89FF+7AGr85P80tns\\n\",\n       \"d39/ByNmgfMrMX5/I/BjNAzBHTzp98BB/XZU8ZWX+een/MQClybq233tUW49XdJIg3FtRWdypN6+\\n\",\n       \"JJ0sSrINRePA564Vlqd1h62GvcdGjuuFzcf42U81/eY7N0yaK5rNZUFmWjTlCeNaJShUMTXI9jWq\\n\",\n       \"qeWQ2nJmEFaV2nqiI2sGxoLHtDxu7At2XdQyVjd02VRdX2Wi7bYdQwOLasYmjkx1XTK1oBCRqXso\\n\",\n       \"11KqVKYyQ21HViW2JFbV3Ra9LrWgNi/VVCb6IZFI1Z0SC5NwJhMUYU8ou9phWVrvStNMYqBqFoq0\\n\",\n       \"YZSkJsVUEbfEg45bu0cmG5nGucL289RqbK2z96NUv8HtNsfPxRhHIYRnn+ddBVev0tvm5edZvc0/\\n\",\n       \"m8zGer9jn6QYY9/XuQH/NhrO+jnGGRfvz/7czTl+mnemXMwZZdQmPNtkKcwCqjO8XufcZBYKPAjs\\n\",\n       \"By5uEJsdd5NMTenUxIN67lrBXQ3raWYDm6F0HHKFwmWJC4InEZW+MrfruGe2J4lEZSw3VBniGaXU\\n\",\n       \"rF02Zb77sybWBZl9hUexrHKsVMOa6O0qL2hpW9AUYkOZ7Au1TYkZDv9kbapbLVmbFCZhYJilmtOJ\\n\",\n       \"hVFH47ShVatLyi2N6b5Lg1ecLtcdFUN6L3Hhy9R2+aXfrUCENxCMhBB+OxvorTfx2R/Cx+c/f8Js\\n\",\n       \"XPJ3NRhp85En+ekVW6Fl83YU7emtv2BxPHbW5v/+w7RbpGt1K5MNG7tTvXO5g+VcVe+5WJ3pb6UO\\n\",\n       \"PW6UVDODryQl5DMqo5al+eWk40wu09OyNp8fX1G5693OdEQ3VXIZ6jq2JFpmrRgn81O4NDZRd2TJ\\n\",\n       \"0LJKquZE1LQ8XTVOh2L6JVOrClHUR2FdqnJBETOd2Bfj1NaYw4TTS2Q3ePW9s+mC9nnG1zlOafVp\\n\",\n       \"tDlYouiQ3qyrwqru4nnx1onFzQ2Xs548nHj6AQfX+fU/H0L4a3MjxG+mJVbarP6WVOvmt3ri+X7S\\n\",\n       \"R/Abb/Ui3qCew3v9/ywYmd/Y/mUI4Qs0LixZe/Sc99yfOnisEqqmLE5Ma0EYUsmFeKBaoNZYomhZ\\n\",\n       \"SDJJsWrnAweuj0a6y6n+uZrxxkXNdCyEoXF2RYxjk9CXpEFjuqyqWu4Z2yjbknRow22HllXWDPSl\\n\",\n       \"mprWNLxqoq3hSKontWxkXVNhaKjAsj17duWiptJFbBmLEplKZeiOiMqymQfVqo6mjpahA/tG8w6x\\n\",\n       \"0p6eukpX10TpgmtVYimuGYfgJDzUtaduIqmWFGFEEmRxorCozBbkZWqYvJ3pnrz5smL5mmy4J19J\\n\",\n       \"XHnIez/Os+/m5ByNBs/9CAe/xOmvzLelPNB4+ZOWnqiJjwWD16ZG/1uM8be7d3y3z4tpCEu/zmd+\\n\",\n       \"kg/fm2Hiy6fpVDOC6lmdNCXvzCZiQpyBRp8q2U95PZtlenbMhrN6i8ti0nBF3QUJJo7CsZu1XD/m\\n\",\n       
\"hqG0o64RWxZD8ITS/rxX6EyiPW+i2xNt66npOrRqz0WZY7lc1DXDvJ+ZBSIRGxJn8xJeT7A3B8dv\\n\",\n       \"y5wT7YoyidYce1kXQkcaEzHO/MnSLJXEmp1YMy5ydbnVvK35YOru+qon7o1MrjfEUBPKIMlYu/+S\\n\",\n       \"o7MDvvgSN854Lsb44Hdr/3hjmZEt/FG+Kfr6zTTyrfiagc4Z3v4mfte3rRBCcpkfOa8WC50xVKp6\\n\",\n       \"Xf3KguHS2PqUKyPWdiieZ+9Hd7zcKpXnKxfrNY3Y1Im5Kpk6TVJnNnBoVq2sSV1Td2riIcY21Eyw\\n\",\n       \"rWZRoiFKTARf8qotfXFusBQ1XZmdVCotmdyZA6U1F51Td0nfy7btzxvUmqbO0kovbFg0tGHqnJGu\\n\",\n       \"JVFw37rJfA0dC6qkodeeaJxy6y6dEY2vcHyFpTVENs74uf3ZKfLsOW71OK3WDG4/Zjo40zwXPdHs\\n\",\n       \"SB42tM4GDq5NveMZdq/y8DpufItDP2JYMUlpfF3T6+j3Qv/SD+Nvv9WLeIN61myi5v94qxfy7SoN\\n\",\n       \"4e3n+PF01j1464BPfuOEDBhzYW+gdufA8WpNMpmYtmZ8j2l1zmQQlLW7kuXo+llN1l6zOmEUJi7H\\n\",\n       \"qZezddnZQztPltL1XAxtQVSm98WkKWiq5nbwedqfcSNCKs/GDiwYKZzz0BQjF6xrOXXb0KFLlqyY\\n\",\n       \"qgsWnDk28VDuHM7NKZsNLXcFNazqI9oUDKVKM/P4+6IrZuzNDxq6Z2qsUneGZQtagpE1NZnUoQW3\\n\",\n       \"QlNNJohiElwsl2xWe0ZZQyfN7RjbLWrOZVPjUJnG6EHtMXnZErJleWwSjt1Mxi53px75OCsjfuIz\\n\",\n       \"M/DcWYvuKkd/62sPHKt/gve8m3fezbVe5/VNPvuzIYS7GJqxKaLvqZsv9P4VX8y4+2Gal8gXWD5m\\n\",\n       \"a4WLGQJ54KWErJqt6WbGRjkzcH8l4UJkOc2M05oVDdvznHUpqElcwTjUvUflSO5maMzNCmdD2yOl\\n\",\n       \"ocotLMqck80RDA3bBiqVI9k8GFlTmarcNktjpiC1LDrEBcGixCWVNZUTdFRGpralXjLAlsks9Ik5\\n\",\n       \"1dhh9oOCsTJUzhpn6rHrrOizsCbczGSjSrpzanSxVDYnascjg8Wc/3XE3/tWmawQQr3JB9b4YCA9\\n\",\n       \"5bnBrIwz+G7s3hu5AfwTLMQYn/0mi3szFMozs1EMZm7Jp2/id30nyjIaC6rBsfJ8qWx2jd8btBeD\\n\",\n       \"wzpLDVa6LHXpxoa1r9T0LxZOO01XIrX81Mu1juPQUsSJkBwLYUFqT2XTDAc/Ee3KFUqpQy2Pyp3I\\n\",\n       \"tM1Qvm09S6KxmkTUEqzrGllUyQ115R5o27SgpYFU6pyJYwtyE0MtaVKThZ6euodKwbKo5cBjjpE4\\n\",\n       \"dhiYpplSRxajk4Wg8a6JP/13aL/Kx36E8TWupHzohPp8fPXtR7Pa6v7O+/TvfJB7/0Dzx9qWmk2t\\n\",\n       \"4VRZT+Wd2Ws3I2H5Wx30GOM4hNXP84Uf5Afvzhpjy8AzF7/Ve74fFIIUP4L/8K1eyxvUc/joW72I\\n\",\n       \"b1edED78Pn7ufRyuc3CfS5/j3wsh/I0Y452ve2lFLEs/9A9PffkPsTw4VdRbuq0NO1aVxUNFdqY9\\n\",\n       
\"HUinDcq+3fZY2qgsjypLydTdtVSajV3a73r1/Jlh1jEMmRBfJayrjKROVOFAlTREjzidg9lnDeNb\\n\",\n       \"0pgqQ8uxqcqRJSPrOprS+Rh9ZduZV9Sti4JFUysW1D1m6nV9hbYL+gqVIw1LxgrRttmsxSbOG6q7\\n\",\n       \"71lL8+fkQmmqZlFNQ6KtLZHG3GHZshVyaVVzLHWWrZrKrBvbKGvupsFgVJm0T5XJqhAPhWpkWs8Y\\n\",\n       \"N6ndVr6Xg+GSL17qyvszz5dLZyyP+HRpPj0XQtjmXe/iR27PvsvwjgeMrvLgL51XnNuejaWMd+mG\\n\",\n       \"EP5OjPHu9+LcmQc6Hw8h/DqNP8LjHeof4mKXVspwjUZt9hz8iln712HJa+UM01GvWMxIsxnufWUe\\n\",\n       \"LM4sCEdqMpsm7qJZZbaSUicOfCqU+pouaUjmV+SaGdQsk+LMSEOu0nHPgb6GfUuCY8nMddfQDITX\\n\",\n       \"ldvFER4RbakELM7JJa+jPYen7djVVehaCiPT0NYrtg3SOmFAPGfSXzWpmuTPcLwkedB1f3vJIwcj\\n\",\n       \"4n3Hm2PHgeqUcyvzB8l/zW05hJCu82fey+NPzVJJxav82DM8FUL4X74bvmS/YzASY/yF3+bf/u03\\n\",\n       \"8dmfNuv4/yX8Yd+k9ySE8Fe+7q+fjDF+8k183jcoxji9EMIDyio6TsZcS6XtfaPYtVhPXc4z29WE\\n\",\n       \"83TXB+ovZBbeXspj3UKsHMcVI2sW4lAMp0bWtOSm6nhF7kghVzmzreWehtSaoJQZOdUz9UDT1JKm\\n\",\n       \"0pqBpimWndpyrKcydGBBw2ie+BuYpeIyh4I1dZmplTkkJ5G74ljHsSWZnsIaXtcy0pHqVGeotOKK\\n\",\n       \"V8KixXDTjUtLHlzcdLK9bzWOLbSis2u5xgkLh7Nxvk5CPr7J1nV6K8aHO4aPsGCsaE8tz5tX9wPx\\n\",\n       \"dwgsTz/O55vcfTdr82ay/c9/t/b2e6T3YzdGu2/1Qt6gnsN7QhDi7JHt+14hhOZVfuonuN+ZGW15\\n\",\n       \"lKN0dqX8N/DXv/raGWxr9VlO3seP/yNeP58rN3KDD1R6lz+tWq3EkxNrO4W1l+p2PhotWrVxEgU9\\n\",\n       \"D5oVnYnXN5gMD6yWz7sbHleEBVk4UzqRxpGWpmUNuZFJjA6s6oSGYZwahXWXnSA3EDVMLCstOlVY\\n\",\n       \"lGjK9VFpyNExlDq1ApraWtYd2J8P6ieW5Ag6ZodghKaau9pKlS0P9OcF2KZNUaWva+xYX81ijCRj\\n\",\n       \"B+W6Ml0j3TOOlywXK2rVPWexpzU90U4q16crM3xBcdNh+qh7R23qNaHz1MwFuJvbf8cDv3rlruPf\\n\",\n       \"5P23+cI2x1/fB7I1R6p/3U7ubTH50JaFq0/q3GzoVy29gy1JEZX/eQjhv3yzJYAQQhPJN2tonfex\\n\",\n       \"PM/4Z2ZZnX7GaJksCbpVlKMe6x6MogtyzZLJhGmNTqDKphqxMAhRMncaK1UyhSrSiLlcXYzR1Ozn\\n\",\n       \"JXVdDTXLODHQ01TINeRWtQSlnpFjpWMrxt7hllc95siKUz0M54iGsaghuqdy1aybZWBGJ5vitsqt\\n\",\n       \"+de68lA5c5APi4pkSRUGatqyaVCUF5UParRfYHRT9bczN/7svsNHJpp5ZTFy/XWe+jTHa3zu3w0h\\n\",\n       
\"/Zsxlq/M+vtcNsPYtB7jbR+ZGQqDD3BvzJWjWVXjTXOm3rLUeIzx2RDCOITwa3j2mzWvxhj/yvdy\\n\",\n       \"DQf802f5hWtOdqYmj3c1a/ctdQrnpqkwydQnhaPFmnGS6L2/J9ZKZ9PSWbPtXrKiHZuyUBiFPatx\\n\",\n       \"aqJShhWlKCi0HAkyJ644E9V07TJnE2QypaZMYuKpefhyFwfqHlezMrff2lVJ5UptQ7mohpGGGyZW\\n\",\n       \"JVK515QuSdREj6o08QAftyC4IuroGiSlqupYnBYW0lzv4obP/LFV2XBR6/TYWSM1FeVpKW5UyhYP\\n\",\n       \"cw4LVhdu22p80tE7Ng13KzezY9m5Y4v90uZdnt/m9V1fK799U80tpf9uCOFfmGfFYoynIYTv5Xa/\\n\",\n       \"Wf0U/vlbvYg3qhjth5mByVcdyX8vaH2L9KuByFd1mZMG10MI6Tem+U//OZ89x/7lWXp9ENYcn3xU\\n\",\n       \"+fce5fCE5t919tEDO08m0oWaxTSIaem0WKPXtdCIpoF6I/VIr2f56CteurChys6kajpx1VLoKsK+\\n\",\n       \"lnXnIvfC2+2WXdLPS+KuKqTqbs5dWHtOla4p1Y2dmqUIjxUGOLZhLJjK1HVN0LNkIjqwaGpZVyJz\\n\",\n       \"onKmEl1Tc8OGd6mpyVUqi/p6xqKRPctyXRHnBGvFlKovD2P3akOFzFKxaVkQkjUxOZUHtuPEomND\\n\",\n       \"m5KsoVne0FtfcFy8W+3kWKwN6bQddVaFtdInt4985stj5d9i+Kmv254h3a/74g4bnHwok7cvaXYX\\n\",\n       \"LZ92hcf7Vh/ryG6e02vtKf+LEMJ35J8UQlhh/aNcf5pECOdusf9PYowPf8tLb/Pw87TfwVGL5iQx\\n\",\n       \"rqfaRWUpROdDqcgy6YRY0Ltbt/tES5FGRRiowsBE4q6WRTMzw7GJQWRzUhrUR8Y4SoKrGjbVpHNo\\n\",\n       \"w0OLagb2NbTVLMpNZIKga11wqIGahiVNY0NT0dSqaFmUiO6qSfEJk7kb2deCkZl/WeVUdKqNbZn7\\n\",\n       \"4aFpYyxxjeqc2lmqWe7qbj8Uy+lsHjtn+pfZu8K5v8hP3uXS4eyQLUxoFBz/dAjh5AJ/9jJrHapb\\n\",\n       \"XKuIFbe/Puy8RH+Nx/1eDkbgrRznhSLGmyGEv/6QP94xfP+Zaq3v8Zi5Mi3daybK2oa80SFPFYsT\\n\",\n       \"u7WB3vjIi0kiNpo6aTRROI0152NuP7lgWV0jBmloSKS6DrWdN9QWvOCuaENNw9SSuvuWNRwZSnQ0\\n\",\n       \"zXAdU/05EmkoMxQtmqhbUtMSde1LDe3b0DGIwVgHXdPYNg1jRMEDHRNvk7psiIm1qrIXSnu1Jeth\\n\",\n       \"Tb68oKidF0e39N62av1k4rVa1KgSabsybNGr2C9oXBnb/PKLFgYd958unYWJV6qok/Klp+l/gpNf\\n\",\n       \"eaN14a96iHwPt/i7qZ/y/etH8630VRLr7bd4HW9Uwx5J5WvP2Ttc3OedkeVt/pNmCL864Zk40yCE\\n\",\n       \"8DfYv4a1Nf74n5hx3nuwzvgH9L/4L9z4D6YeKaKpwkmWOailGmWQtLjSZeGsUt8sZGlpeFi4txW1\\n\",\n       \"q+iR0QO9ha48VDYkFpNCjJ83SQqlbe3Qlghz3seesSUNfTtS1wwtz+kfRxoqiwbWzG4oZ6aCfe+W\\n\",\n       
\"SwSFkUVMrKJtzdh9fQ/tW3OqckNuQ18PF7TR8IptdROVhi3nZNVDspownWqEro3YdlJdsC4q0twk\\n\",\n       \"JvrVRD0ZawxYyqc62ZlBu1KmY1spvXBffSVKq1Wx3lYfNXX7J6b7jxndep7xyTc2p9/i5hm317h2\\n\",\n       \"zNEW7Syz1+xo7A8NLkSbSarRL/UaC5xm1pJc9m+y922dHCGEOhf+Aj+8xFP/L3v3+StLmt+H/fNU\\n\",\n       \"de4+fXK4+U7cCTuzecmlyF0uVzQVaFm0DEqyKRtUgAEbkG1Ihl/Y/gsMCxb9QjYMUTIgCjQlSKJF\\n\",\n       \"kDRJMS43z+5Ozjffe3Lo07m7qh6/qDPL0WzgkrPSzED6vro4t+vcuvVUV/2e3+8b7pQk1Gtb/O5f\\n\",\n       \"CyH872+WD8cYYwjhZ7l2H/X/lEdP6mp5VM+ia0vl2CZT6NXYTRa9uHpZzGqW01wtHRmnN80ce9Xp\\n\",\n       \"mahgbo4rkcWCyYTdJt1YMUemph4zm0kZw7EQK15WkYaZC6Yqoj1dAxWLGlJzFZkVuUzuyFXBA2oS\\n\",\n       \"0bJCR8UtQd/Ugj8gua5g7czVdabuVTM33FVVUXE19owMzdwxXK+axIGFcEvdQHeV4f9HMmTheWpt\\n\",\n       \"js/R+V2Wzq7d5oD25S3+6o/hCreOWFok7vCBZ7jzwdIdDgyoT3xzBlYIoa2ch51+t7Lu9wJp8N8o\\n\",\n       \"Yow3Qwif7bnyNxsePglCNbfSKPRVPd2pWSwIw8Sd6SN6Wa5a/4LbJpLkUBImGjGT5H1HyUVF6OqI\\n\",\n       \"WrFQhJaxDRP7Fuxp2lJYdeScEzsakjMya0swkzmVSpVs6kV7jhwbipYlzuk5dWJb/cxvYKahrtCL\\n\",\n       \"FdGullNjNbUwVfOasXXBvq5U1cihiQJ9wVKMpFX5nIU8VzFzvHBFrZiYth6Uv37Pl+8PukVmtaCS\\n\",\n       \"8EOnPPM4+SW2joaWO5wkPPGzJDkvrHB08G/Soe+dQgg6ygHze81e/Q1Fzb94p0/ku0GM8Xg9hJee\\n\",\n       \"5qEPcfcuF4d8bEblA3zlfqZf5D9+tuSIf+7smByvhxDuLPHnLrwlvvxQa7WpftR0ezYRG3Wded2V\\n\",\n       \"WW64kYk5xZCLr0T2R/LHo241UxlEJ+0NNxYWVWJLJfQcGVgULISpJ3S8DEZSp4KgJtVQkQuuOXSs\\n\",\n       \"0FUakmUaTnT041BbXwj0fNSpBcFLomWpRew7MpdoaMXL1vK+NCxqp1W3Hbojs6pmrG5kU01V6UcR\\n\",\n       \"1SE5R3Es1pZV47FWHNsvxrZDIQpqYaKVlJFtw1YingbFtCft1CxFWsVIPUxcSNr2tdRGqepkKOsW\\n\",\n       \"Bjvfx6s3+VgI4dfOupveyKbhV/8yFy6Rn0dzqvu1gdm5BdlSqjMqZNVI5Uiezt1/h/E6O0IILWV3\\n\",\n       \"dPCHPzuSh3jfKu9/E+fkwQOOL7L3JL7RsQkhBDo/XHJn9w8JKzOxGkyqhc2sYnVQeKrGzUmi8uxF\\n\",\n       \"yfpEqO077SQqYUFWbMnDHRuCRpipRRZPuNbmizVWptwKPFSwVCSmyVhMKxZjoR2G9gV5KOM/npFI\\n\",\n       \"NEQ1OVJNQdPA0NyemVWlJUMqGivsoSJVM1HHBalUbv/sYjXNzc8+cc5YX6Ln/phYj1O74brTkMgV\\n\",\n       
\"OtgKQx1RI0+EUEgWqH+Ew2MWm7y+zAf/MWlRCglOW5cJl7j1Gh8KZeBPMmXhNn/uPD+/wX6P+gtU\\n\",\n       \"T8vnzBvXPe3yow/wAys4ISyF8OUevxJj/Nc6nm/Fv/PFSDkXW/2ruQ9tZ5YWql4fFCYpzeZ51WzV\\n\",\n       \"0XGqP2sZF7mlxblQ78iTXe24p2HDcmxrn5zorRzat+JEyyiZq8XEPCwiSOKuldByoqbivEzbxIvK\\n\",\n       \"kO+ZQkUqlTg0RnSq60RL3cyGsQ2ZDxjr69tWaKnbNxGth541iWrsm6vaFxWuW9UxMJEYm5zJB7eU\\n\",\n       \"qZQH6BkR96wMU/1qIV+6Kp80GEjaOwAAIABJREFUxUHdwvyi+fA1MefBMbUWs4IHK3QSBqs8dsRR\\n\",\n       \"jVc/yp/8VS70OP5UCOGL3yt29bsIP4bPx+i99v/6Gv7KO30SfxQc8M9/n598nQdqfLxBbPPSR0rZ\\n\",\n       \"Q/wkd7b5TAjhK2+8EM8wmzI4prFcNh/AifrKxObeir0sGK1UnZtVNOLQMB06zVnusTpimOZMgnS1\\n\",\n       \"EOJls7wpmxXa466FxQ6Va24mEwtGoroFJ+pnhmNb2meh8ntOVBxoOFbeLlWLmhblNgg9u05FlxQS\\n\",\n       \"qeuCniUXJGpyFfmZQVoeKmJaxaGopmYgwUVNqzJ7uCWRaarFnllo6ZjKk6gwFIqpuWAeRrrzinYl\\n\",\n       \"UyQ3zeJcd8SsE93XLYS8qpekXgtN+3Gko28eO9JAUmSydE8yS1Uz5pWk9OPaVEp8QIxxN4TwM+yc\\n\",\n       \"w2Ue//PRZ25d98ufuuKovigfZyb1U4PRbYvHPLDPc2eE9fv/hzIZ9ySEsPwVTn7527+0mpusfYu/\\n\",\n       \"Wx/RvfiWHz7EI5/hx27yyv/L7qdyqw8wWU40ZoWX64n+IHr891p25j3dq9Fm0tCZ1YTaxDTNvBZL\\n\",\n       \"p5iLRV87zG00eWTEl+rsptQjLXO1pDBMaybmpqGwrdAz9f4YXAsbMm0rirPguyMDUV1baSJ/qKqn\\n\",\n       \"cHwWATLVEiwj0bOvau6ehpdEq1Kn6sZon4XtVVQ0dY3VVK3H1IqBG2HuROG8xLKZpbxwPeGCoB5T\\n\",\n       \"YmZUY5bROs/1h7jvFb5ykePnFri6zYUaVy9zUPr8ml/j/s/xoy0+d5fpNr8QY/xGi6vDD32IT/4g\\n\",\n       \"t6oUGeELfN+XS1+3X/5O3/1/54sRnGe5y/Lh1OZ2YIsb56N8PjFarjkNdfLU+QWa1YKQEYmGHpod\\n\",\n       \"6lUOjZZSQWrJxIlLxgayMNNwcJYyM1ePtx2HpsJ1QUdUs2euaapmqozUmqi668MqBgLqjp0YSmVS\\n\",\n       \"mbnE+pkL301dU5ccKeSSEDQMdM0MsHLWCbmG9yt52jNlFHY7UrHo8o2K5ig3S+8Zb3TtDgdi7VRv\\n\",\n       \"fU877XtkUuYxTMbca/Jgxl5JE1fUaU5ZrHF7iavHbIZS1veee2n/YfgJ/LN3+iT+GPg6/s47fRJ/\\n\",\n       \"FJwVsv8ghHDfJZZ+gldXShYnaDNfonqrTFQ/fNNxsRHCb32Rn/gkt1tkc5KJSWVi9fVbusfLXn9i\\n\",\n       \"7uvrY9XWyDTJrH62vI8PF+k9umRSq5sMW4r++62c9vVXt2XrUciDorJmFu9ZjhXjJPGQVE/NqtSy\\n\",\n       
\"qQMzE11LUizakxq7LrevbqQdb5uETKJj2f5ZAHyuoWbzzF85P3NebQoO9U1CYdHImsJcalW07FRb\\n\",\n       \"LjlTZ0yk8hAMjbWKkWoYKkIwDg1HMZqlrxpG1vKJGObuG1CMqaXRsF4R0rpqTCwVbYexbim9Q9Jz\\n\",\n       \"mg+Ma1NxVtG+2zJb/y2t/3pfI6XzcAiLv87p331DRRFjLHC3DKu7c44vfOzEk8+MfHmlafuBIPSG\\n\",\n       \"rn5t7uMvc6/LwVlWzV/cKSX+WeALH+crOX7pW98d432Oqt/884MWg7fMfNY/wmOnVAse3eZgQu+Y\\n\",\n       \"Xq1QhOjS3cyFE4b7hWK9abO2bPF2bv5YUz1raeT3rFeDJOYUUa1WBgfeScvpQ56UPiU38fH8VC1U\\n\",\n       \"3U0Sp8q4u/vV3QhVhUOLpgYKDOX6Viw4lpgqNDU0jFUcm6pZlOhIJDKZXNtM1YtyM/cbOIot9ZBq\\n\",\n       \"FUOHITHRVwtl0k2jSBBVVJ0rhg5DYTGyEetOQ6afFioRMQgFW6G8JrsLjB/ny5OSZzP77B3+2zWu\\n\",\n       \"XC4TBMGEeJHfPqT4HL+F3yvVkaFSfjVNL/PJj3O3WlJcVIgf487rfDyE8Fvf6bv/74sREqrH7K5w\\n\",\n       \"pRddvc6V62y3Dz3//RcNh12X8kxoF4IDucx6QUhmNosTMVtykKxIVTTirhg6Zqq4a+hA4gG0TUJL\\n\",\n       \"RabpNXOPiR6UueGGXa2zhMc5LqlYE+Wi5yzJbWnpS52aaZu6Kziy6NiW1GVFGXse26qhZ4xtXYW5\\n\",\n       \"e/pWFSZKJV6tKJ/fg8Dm/MC9zZr1vOS9pMkXLYa+ZjXqZFTq7GZs9ljpM7taFjNRMGzV7aWJxsHU\\n\",\n       \"5L5c/xkcc5rgvWBe9l0jBDX8Gfz37/S5/DFwDUshWI3RN+V5vMtxC9vVtwQwTkn75YPum+6zKV9+\\n\",\n       \"lvo2n16m0ifuGv+Lwp1HZ35kb9cT19ntcmedV1d58kW2n+RzH01laxW9sGR4r656mgixpVpbMygG\\n\",\n       \"WrOphZR6dWwcamfbjQSptqqKqYaop2HhzN1nqmZB3cyRy8amOg5VDBwba1mMiRDINBWOBdfNBA3B\\n\",\n       \"0KncsUSQSFw2cAuLZzndt23Ys6lp1UTF1I5cz4tJT0dbdN7EsizeVs9T48rYjUqqbWjSnOvWeaCf\\n\",\n       \"upfUhEqulgSNrC/HUZbbdNdSNnN4+pCFF4PxhVNL67vWjvo+8SIrE774H/FUxP/y5jU442r8Ip9/\\n\",\n       \"lY2PzeS/NOM+Hh9zbsCzW7wwYecf4Wf+wGuoEvm+21z7eAjhN7+17XvxCi+fsLXOg2d5WTeXeH7O\\n\",\n       \"6GkIIXSwxsYWjbPOWa/LxZyrz/D0RXpp9Ng9Qs6vPpybFyP1yqrGbiFbzkwuz4UwVYs1w6SqnwyM\\n\",\n       \"sSy4L604VtUME7NQuI2TKnVzI2Whch790LBgXhaicttqTqUmrpjoSixq6jlyV8+mmUNV588CQ44V\\n\",\n       \"+godK47lUlls68YdWYyOk3NEWvGuk+RE3akiFhbyWMp/04FZUphjFNuup8sacaASxwYIodBKqQ75\\n\",\n       \"4C1+OXL9dxn8vTMen24Iv9vkby2VGUPzHTqvk3+QV18qu2P3MG2F8H2X+EyHxqC8oJfrpR79G6iT\\n\",\n       
\"d8rFan2nL/y/L0a4R3+3DFX6+jqXBmRNXr04MPvsM6pXN0wfbEkcCdWBbpx/I35qkmYaRVTJ27J8\\n\",\n       \"7Ki6apZsiTEKyVC0KMc0ZCpYtCk3VnddqmYsM5WZmGgpPIzzEusKPW0rZ3bTK1pGjm07kDhQta8m\\n\",\n       \"F3XUhTLBN0yNLJpgatO6KHHP3L6+Qlkzly6DzcgsHXuglZumNXfklsPI5RM++RvcuMArDyEpuY9L\\n\",\n       \"GUmflzs1J60l+7OoWWQuxKpjmfpjQ1+fs3M9xvhHTOd81+NH8GKM/q26EX4vEKMiBE8rYy/eK0GE\\n\",\n       \"KLkgrRB+5wv82R/iVoN8TvJFLh7y2TP5ZkL1I6z/EMkia68MHfzWkC/dLD2MhjHGUQjJwxz8dF16\\n\",\n       \"NcrizPQpxr/Nb/5tHulweVz18vS8fNbRLIYGYc98YVXeWpbMZkbTsUp9T6eaq2Cmqmns9MwfqGmo\\n\",\n       \"sCQYaauIKlK5iqqKuqdd0QplPF6zKJwmYzHuaJsah0TPQEXPlqaa1ImhSswthyCTGImCiqCmrmvP\\n\",\n       \"FqqWnJiqn/EKbpupOXAOqRB2hYQsPKZSDKQGlgyNqrcchomlSpQUuZV5VVIv7d06aUVa5DYDi8N9\\n\",\n       \"h/2B595fMWrPfeD1iQ/cZPOsSPi+W9z40yGE/zPGePrWtVPmIz0LIYQ6n32UzjmGh8xfiDEOvlk9\\n\",\n       \"Vy3Khpe2b1FsxhinIYR/yK//OZ66jyRwvM3Oz2EeQv2nOfcxLp2ws8Wz6yz9GlnC9gr3LpZdlOQW\\n\",\n       \"v3o/0y3yLunRUKt3w+iJhpBEkqhIG7J5017CQtFUhIGNUDOKiancrkQH7wuF+9BRurc+j5GKB7XM\\n\",\n       \"DQxVDV2Q62iZOHZVcGwkmFk0tyizo22scEtbU64vNdWxoKZwIGjGqSIZWimuaU0O7VejYVKYFYV5\\n\",\n       \"NrV6XHdaPTGpT6WNzJ2irT6verZa1Y1zK6FOHHpOw5ViTjrX3Odzj5Vu2/kylfeFEJ6OMY77/Nrz\\n\",\n       \"nMv4i4tkrdKH4bU2sxvli+R2nY88wZ//E9xb5GBA9bf4gc/ygR9+E4+kT+2EKU7fuqZvxr/zxUhJ\\n\",\n       \"wEp/gfp/znzG9Q+SLTI+4LE7fS8ujHXu6+iMgpPGpmnSltqxnRyRTK32p2KRyyK7Sw+U9rxJ/Ux+\\n\",\n       \"ex4DUydyEysOTbSUjqyZFfecnM2IM3nZ/jPzuqChoq4ql59R5BqW5Joq2nIP45qhuzo2BU25Aw2n\\n\",\n       \"mgaCviC3qtCzEycuzEt79wFuLhI1HSYXHMWmNCnUwrHDhR1ffbLwwdfJn+fmAs+f58YCw/22k+rD\\n\",\n       \"muGS1t2h4eKepzZ3hf2qo5WhfsHJL4QQyshgjv8wwtJ7BD+FX3inT+Jt4A0S63uqGIExn3uG2j0+\\n\",\n       \"uUzSwwG/f1rGR6D7YzzxST60y9I9rl/lC/8lr/8fb/hYhBDCIo8+6MAVrs1pPK/x4zddGOaqG6W3\\n\",\n       \"2k46EWanGq26RrWq6Nw0TKeGtZpkUhGP79ltn8gL5iGYhwUXZTrFqeOQlzqYUComJqIbNsxsqJor\\n\",\n       \"WzwNBxq6WaaT1CUSo+QWpqpnzp0PKQxMnJy1gqrB2XYjWrVsZmjbwJaOKKgr5KrGFiyrKfQca5hZ\\n\",\n       
\"k1pS8aJpWMeCVj6RJ6lqsmzL0G13TVLWijInK93N7C4k1qWyIjqd1s1rTXExce5g5m41uhBZfpOx\\n\",\n       \"VTMrDcKs+0NeMmejnK970wvqW2NY5WTuW6gz3vS7DvCzIYQuEpIrrP9N6p9hqcvmbSZDPvxMqbT5\\n\",\n       \"7I+S7lLZ5MKM+24zaPLKubL7u3bKPGdnbeLR8cTyuGJ2p+50reegkQvTxM0w1UwTKyF3EoOpwiSU\\n\",\n       \"XawrglTUxIrUpuj4zAmqLrFn1VzHgqnrVswtiWdP65FM1JZirKvjQFB1TkVmInFiJjiU6GpYzFvl\\n\",\n       \"qL6WCfk5cX7O/F6F2ct213Yld6LWWmFYOW8/3bI8b8qTmr3itnuVoVicIzs1LDLtcUWy0TardA17\\n\",\n       \"DSsfvavy8Mjkd0MI/1eM8TSE8Av7tLc4f4neEZ3P0r5XjtEG63zm+9lZLAsNHeZP8LnP8cM3uHGV\\n\",\n       \"k31aX2TrgH9+Rnb+tqv/ri5GQqh9jNUnSx34/lcoXjybTX5PEWP+Sgjhf6P1V3hgiYdfPHMv/iDn\\n\",\n       \"H89k3bl78RGyumrIHVYuGsWal4o9rcpQ42jH6doVA92STxLmZ4Y0bWV/qibYlhvZ0rNnx55FNRfP\\n\",\n       \"OiTbDh2oCtZlUsGGuVUT4zOOdYolmYmhiqoLcgsKTxkZq+gI7ukaaqiamRWMtSxkC3rpxLOBzgJH\\n\",\n       \"GfsheGh+ST9Z1EgKSaAeNtUqMzv3HdhdpHlE5XrF5KDrzp0JCw8yfcJwMlerLct7j8uefZ6FYyc3\\n\",\n       \"TvmdDn9qlSfbxCNmrRB+fRTjuz3h9tsilCyyH8d/806fy9vA1/Dpd/ok/jg4+67/Zgjh89fLTsfg\\n\",\n       \"DXJ0WWM8/Cf49I2yxU/pElpscPJJ/D9nv+b+h/j+P82NlPhVtU+0ddqrjj5cWG6OLWZDF3OO27cc\\n\",\n       \"FVGaLErymo3+K8YtZsVIsjl2Ja1byKd6lan9oi+xYCl2ZOHUUyExjYUYmurFiknykLYg1Rfdr2Fo\\n\",\n       \"otCvkCjUzozU6rhP4ggvWJFra5uq21PVl+IwVk1Cbv3spXfHzJEoiDJLUnU1hYEFifukdkTLWDjj\\n\",\n       \"puUkc7U0UckZJy2VwI0QqYylMdVvVMxniUfGU4dpMBxfsr5NvxUUu4furnX0Onv21goXz1rw/Sq9\\n\",\n       \"uT+qPvebsNNha1B6gXxpi/1/+RZi8re7N05DaH+KR/8CFx9lI5CMubnO+RFPf4rOtTJpuPUgH366\\n\",\n       \"tMQISekO210qx8of3WZhzO93ea3Lqtx4IZdOGh4ZDcWlxGJsmExytVqwUsxtp6Vishqi2tmzeSqY\\n\",\n       \"q+rIjeTqxgYa6DpSOLJoIMokogY6ClNBIjszmy8sOtY3NdaxhoadYkW/uGOcJ+oxtWjkNLnsxKKh\\n\",\n       \"jmJljVl00hgaPXZkYbagmly0VjRlRV08XDBvdGXVa2ZFyp3zRl+4ofmDNbXsnOU767YOU0nR0lt9\\n\",\n       \"xqVHoskP4pfLrmL4+0c8scxjGf0jvhpjvBFCaDbprLzFmuEqt7/Es79EpcPlGUcH/PyMp/+wNX1X\\n\",\n       \"FyN88ie4/4h5yss/xfNfDCH84ttJgXwDZYvXJWW9sB9j3A/hYpP/4Lc5bfDKp1lY41yd7qDjy52K\\n\",\n       
\"WcJ+XpNMaaeXxfHMINs2mOWSwyHnR2JoKvWFFeWQoyNYUdU/ixqPqq6I7sfIxJ6JVSM1N4wkCovG\\n\",\n       \"duXYdmRLU1VqLjg6Y9+XBjoLuCjYU3Nd9+yBNDpTpafCtHTnaydUZuWYb2/eViQNtyvLqiHKQtAS\\n\",\n       \"NSUyK+rZgVhhtlR3/JEtvTsPE3tcWGTzRDx90vQrlTOC0mXCLeoNPvgJVj5cMrxin9pv8xNpCP08\\n\",\n       \"xu8mdfPdiP8Mv/Ie5Fu8GV/Hf/dOn8TbQYxx7IzEGkLYWub7N/hw3+SRsTv9kjz9Bi4flWaRJZZ4\\n\",\n       \"5EFGKXFA6yYfWzBaXrFaHVqPmeV8x0my46pM5rpGXneS0Xo5c/G5luNPd2zGXK0SrIq6cebIqX42\\n\",\n       \"d5RG87jgKD5sEnYk8chmWDoT+I4V5qoSM1tS+2ZqZx6bUV1VzdyKwqFLMnVdQaZmasW+e4JUCKfu\\n\",\n       \"mOiJOpYc6RqeJVjVtM5o7YfGZ7THmsqZqXwU5CZOzZPMahFkSVCPpZdJHgOR/nAqvTN1Lo3SbhTy\\n\",\n       \"Va1ZGYwZV+ZOl1acng683qlKtqYu7HLU5EvnOfi5tyrnQggPbvLDNbam3N0rnbOvf/vV/cWc+iXm\\n\",\n       \"Pfb/KbPvyjyrlATf/yO8ryg5LHmFjXE5Ufjq+7k6pxFJqmWj5dyXuXmFGz/GeqtU8CQ58xVm+2xl\\n\",\n       \"LBW8kEadItUdVDRPxlrtukcGud3q3HYS1NJgsyg8n0bLkU4ot5sDQaYwk5rIHZkZqDk885CZWD7T\\n\",\n       \"QA1FbYmOqqroptwFiR3DM/nvQNuuy3LnFIYkJ7L0pjtx7FTTKCzI4op5JVA5ZJYLratyLQMNSWgq\\n\",\n       \"Qq7WmppUFs1dJh8zP8/+83ywbnz4iIXjhqVvfHfWNQYLxp1TjU+FEL6GPXS6PNDgwUiWMQkh7GI6\\n\",\n       \"ZnBE880E8wnpmL0D/s4B8z9KFtG7vBj5oRt/8OdLJ4w+xtNfwZ2381tDCMub/NRlNheJ90iWQniq\\n\",\n       \"VDLVcm5dYqmGZtnak0X1NKg1c1kIClE1tuRpXatZVV24LBtlVqav2W88aGJTMJG6JjVWsaQwkymD\\n\",\n       \"vJo2TdEzxCXRilTfzOvu2DGyYumMXbLi1PzMPL5q7AmlyvweKgoHcseqGg6smuvZQMUsBLG6Lwk9\\n\",\n       \"9ZzdKtXpeUv5innl0GloWopVbadOk0wSy9jrWiTmvLbYsT35qPn4CY5eoNYkHbL5Og8+yGsFYZvl\\n\",\n       \"e7xyhbWPcPMNs6oFZh9h/y4/7LuLAH9XIQQBfx1/650+l7eJ5/FgCJox/sFD472IEMLlh/hrHyZv\\n\",\n       \"Mn9Zf/Wu3/nULaPP89jZjv24Sfbm3VqMhJs8cMgH2+bnlqW1iWoahKJKsqEWT+yGifM5L0+ovbrs\\n\",\n       \"VM/uY2319pYjuRj23Q0TSxKtkInZWHdetR22xHRmJVnXDiP9MFYx0YhRElJDiakoQ2FDy8DAS/qi\\n\",\n       \"izJ9VUNDWdyRh1xLRd9lDWuIojXRjqFg6EMaatbiPf1wz8Bc1FOVSzxojiVTM32JlLgj0yxZZSHX\\n\",\n       \"CFOZI6lK+X0Pia1KYTrg/Cm3rjBIgsVGatIsTNKandknTG++5PVLR47qvHaF6YD9/5vJ33/z+tRD\\n\",\n       
\"ePJJ/vJHOd7gcIfNL/M30hD+YR7jtwzOjPHe3w0hVJH9ETeZ66V6L61RyQgDpk1Omlxocn+PwZzp\\n\",\n       \"iGF7wW/8pRWzjapuree4cWw5ZCqh5HqMlsr36VGLrVri8nBquDF3bSOSFU4bE9qZep66m1Y1YnR0\\n\",\n       \"Nip7OZa2N2mIego7Go6LiknYMrUuCcffCAMIzuE1QY5l0TEqVlQMzUycKizhCVyXesFiUhrK56aW\\n\",\n       \"Y12wbGRJLemoxLlREmheEkNfNa5Jk7vGYcXcVC3tyBpBnN6VpcfMF5h12e1RU0orv4GhabPm5MJV\\n\",\n       \"w80V/qttphndH2HwEHfmpM/xiS9xYZ+f3edffYG/cMYZmY6ofJ5L+/yrGOPEHxHv8mLkzUgj9+W8\\n\",\n       \"eMXbKEZCCGGDn/w0i49wq097Q3KpJvkvnrV/N/d0tSSw5meS+sqIeXdo5WTssFrTSQuDZFGRV2Wx\\n\",\n       \"qVl0JaEjqyRqyUR38rpZ/RVpMrIoWLJl0avGZg5dlnNmbDPVtmxq2VyQq2NRz9wQ9zTkptaRaZg6\\n\",\n       \"EQQrZwz7uYptFacSNZnCyCWlT0BP3TiMtNLCg3nFxZBbzHmpWbg9q1mLCw7Dkb1kTevsVj+VaWeH\\n\",\n       \"AiYhdW9wyXTyYGlK4jzHL7F5oXzIVF+kOyZ5jteH/ONlfiJ5y7XeYFDl3B93rd5hfFg5GviOcrR3\\n\",\n       \"O2I0DcEryvyIb4pceC9hqySzDi9zEnFsfKdhcXXgmQ8cefjXS6Li19fZ+0dvHHPCC8/xH36AqxtM\\n\",\n       \"ErFfYWWskMoUinlFXqs5MRFV9ZIPa2UN1clNs/WrFkJTNRam2YrF9IZKcuJC0bdXTd2bLerP6hqV\\n\",\n       \"hmplrB4rstBzWF0wDnVVbdVY1/Cc0zCUmCtMVTXkZm6oOTRx4tCVUHW/iqZo6rrXdN2xbFVwaMVc\\n\",\n       \"JmipKsSwZcmuuX1THQ1Boo9cxaHUgnZsmg1uaVX2jOstdwx1wtiK1IW8IYToFYVQSVQuFe7cof4a\\n\",\n       \"oTpw9EN11ZA4dNnodKS60jU/ajj+7MzxzyhdNf81gmkIIb3In/lhdt7YKV/lqMHsiD8TQnj12xUb\\n\",\n       \"f0xu2agcszR3Ob3KxjYnD3O0wMUJvRqjCt2nF+x//AHF+bq1mNroLTtNWq63b7o65bhDPmG3zmY1\\n\",\n       \"lU9rupNCZzZ3ulB4vjO3lpPmJCG3OI/WcZAWukW5ybuKuijGwjxMHYfztsMFqY4irsniEcld0VBF\\n\",\n       \"oa2rbarMsHlYRSFVV1hRXrwbUrdtmFlVWDWUyd1NanbiRBHuoCoPBemq0sSjR7wgDXOJvmjJVKqw\\n\",\n       \"Jg6fpUj43Qf47JDaK/ypqD/LTRup+mQmSw8dPPyQau9+2dcf4vYzPHGNJ9f45xVihexj3D7k6j73\\n\",\n       \"TXnqWdJtPrNQWs3PDvm1wR/THPIdK0ZCCH9a6YFwEGP8oe/uqGkgf7vpgBtbXHyE2/usvm7xB1i1\\n\",\n       \"LhQLxo+feGbE0gHdRYoutRn58dBW44Z7cdO40jXKJyrFDUkt2pwHR+nUuLGsPq/L01Q3u2ahVrEq\\n\",\n       \"kbgnqmgppI7dNBdkOlZlFgRR48wgKeiecbT7mk71rTvV1TTXUHPsnl+XWVN1qmYmlVo1VZd62bbc\\n\",\n       
\"R/RN9N2U6ISKZlJTiXPtLPFQNjRozuzHNdOiJ0sOnYYFoRjqFIdMTnzwKZJp3cFSXX+lfkZOWmJ6\\n\",\n       \"juvP8ciESY1ntzm9U87mj/ZxTD2cfRbusDQr+XvvRfx1/GyMvuccpXcAb9jCv2eLkRBC80HOXz4z\\n\",\n       \"2wp42OQrr9r+SEf96pFffpCTMTu/QvH8mw69eZfeCo0twiHZivE8N6w2tOPQJC1ks0x1yr3GOdPh\\n\",\n       \"mpXDe44ub5mniVmsC3GonnR08y3D2HOYLImx5dy0JS+mQlyRVW87qk48EArLbrllzcTIOBwrYk/q\\n\",\n       \"PisydS2HFgQ9wYmeV1Q0PKChI0qlKiYetmvHTKGtomlorGaKucI1JypqZhJHck1LjjE0ULEUJ4bF\\n\",\n       \"vrrE1mzuXjG1lp1qNSsuh8J2uuEgrMizxLzo2e/cEbfHtl5k5QO5fDCzl16V9hq6kxfVl8ZOtrtm\\n\",\n       \"ee9NBldvweISrRWO3vzDLQZdLm6X4/DB9+A+WFducCbs3GDvMvUDDteo3uZgqzRmLE7Z+iz56aZK\\n\",\n       \"v+Eki/bSKK9VTIcr5smBfhgaFtxLmYzqRpNonE4UK1E1YZDyYFryUVJczXktFG6ERDcjiD6al+/5\\n\",\n       \"04L1EC0VuUmo62uahwVFbKiETYkXFA7ULViUKS9U4Yqbti0qL1BXudMuRQqbmu53IpPYLZOHVELb\\n\",\n       \"wB5exXI5Qg9zLCtE0/CQNPuCPDkR1cXhAQczJsul/870Noun/No144srbjyRWEy2zbqLsrBh+vIV\\n\",\n       \"+WvQoP0g0+tcXOcbna1z5FXWZzG+hi+GEJ46O/3RG4VlCCFJeHSNj6XUj3hmWvLXvi3eyc7I55WS\\n\",\n       \"w3/17T8yTf9Ah35S55VI/urb/HfrrTLwJ9zU+nDF+XGqPi3EUCHl8af5nUdZrDNs0VqgOyOZ96wO\\n\",\n       \"Bl6zpDVY1spyi8s1q1nTJD8xS+qOKk15Hs3TaKsYSJJG6aqUtBVGZ/bNXaltifxsjthUNVUDXYlD\\n\",\n       \"U0OZJVXn5YLSSueyTXN3bcu0nVfREIxkeuhr6BqdWQRnagpdhdM404uJ9izXSDP1RsO4uCKd5Irk\\n\",\n       \"2Dx/Tlrra4eR9cDuVY7vjnjtWPa+EWmHIqJBusxTX+S5Y34Od2OMWQhh5YDOb/LpRfoJB2u8/hK1\\n\",\n       \"3fegEiWUkal/EU++0+fyPcLXlIqa9zKyebkTSetn3iNtxk+afvYZ032++g9x460chhhjPB/CzWXm\\n\",\n       \"hywdky/RWnN05ch8NTod7cuTkdZRsH/rqv4R0yQxnSyJe5nTS1OdNFpVKJKyY9q3omkqqc2EvK3h\\n\",\n       \"JceVka0QdMytxFOLDuzGqWNbYnHeeqUpMXViydRQMDMwMBDcL5OamUjPFDhjDZklbOk7sW8eE6du\\n\",\n       \"2QvB3IbUqiWnCnsSd0XHHhKdxiWN0VDI6ppSoywxLvriNJElwX5tzVFyUS1fEOYzh5Utqkt86qad\\n\",\n       \"cKxfTV3anblQeUleT0yrC9ZuB5U4de87FROTMTJChW90QMZUJuV6va1NZMnxW/xxHv84F5W+iq9N\\n\",\n       \"+Z0TLtyllpU5K70vs3GPT3yVSs4r61VZvao4rEhPZ4q1xPqkZfEodbDCC4Hh14PJ1bqFJLdVKcRq\\n\",\n       
\"ZtJiOSmLkIOknGrMA9XI3aywUrBdsDljmtCu0ohRZ5jZqh94rXlR4sQ0rCkQBG1jbTtacm0tLVW5\\n\",\n       \"nmBg5EklG3iCuTrW9E1OrZRTAAAgAElEQVRl6jZMXXDeibF1I39SYQdHZ4VIDStmSVApMkXaEezJ\\n\",\n       \"Jym3I62C6jYPvMzxU9R7/GyPrGH8kdTk4blw/nFF40leSM/WLy1d3sJJOcv6Bk5I5m9SUMUyq+hf\\n\",\n       \"U0B1+bOP8wOPlCGX2TV+/Ot8+Dv5PryTqb0n8J2TWv/ZBe5XNqFez7j98zHGbyv7+i6xt0O+x1Km\\n\",\n       \"1ayrH0PPuDO2skNW55ENPvKzfPknefkSlWYwrTclB4k8nTpamBrWGnpx5HqtLswWnN+/bn+xIqu0\\n\",\n       \"hWxkVqw5n/YNql1ZPifUDLSMpdpJYmpb18yJ+6QKHR19M4nbgpq5VVWpwkzfqVR0ZMVM36pLoqZZ\\n\",\n       \"zNTDwIYDI1UVURZLL5FBEszl1pNMI7LTYhKjaT4Rp31FURHzTFoUpCPTYaE3KncZ9VMGv3mb3/9l\\n\",\n       \"fup9TBKyPqfXuHnMz8UYb56tX/08P/2D5bD+82MeOuXCl1m6w//8nclr71r8J/hcjG+Pm/Quwtfw\\n\",\n       \"l97pk3g7iDHOl0P4ytf5vu97kxX5c5zv8VSM8flvdVwomY7pdf7EKqM2+R0OG/LpoZOreyanPa1X\\n\",\n       \"mPxSorfX5NMfMbr9NJf6/OiSeTNzsjEQkoFpODSONetzalkhjmp6rZaYjiyZWjfUialG7KJmMd5z\\n\",\n       \"z8Dr6dyJTKqhr6lqRy4xtYG+kRpmEuMzphipYCL3okzH3P0hM/GKXRcNLes4xETLkkU1A5kiTozi\\n\",\n       \"RL8+1y3m8nuM77V8MF81CQPPPJbZWV5Vj031mJvFNeN8VSOuGM8rXDmRt1/VqNd9YNTSOCqM04HX\\n\",\n       \"tjJxOuLbKuNijKPlEL72FB/+GLcTJcP9q1w4Lp0636bMv/Ik7/8EP3KjHNnD+xb5lZTnfkZpJ7CP\\n\",\n       \"wPM/TX6hLBR2W8cOXTV/aaz7SOqkzWRxKKYT1SE3n+bwb0ULf2Pm4k8GzTTRSlKHIdfjjBZcN42l\\n\",\n       \"0LZhroFGj+ywtMDpZDQuUBsFo2bdcW2gG3eNLIlun/XCb8tDbk1XV8/coR1BW3BBsOj3HKobS2Rn\\n\",\n       \"I/oo11ITLXEWuNizKZgquyg56kp3wxFFQ5bdJD0tN8G7n+aVwNINLh2z9Rs8us//GGO8d/b9eHlD\\n\",\n       \"3MjFS7tcOeHe6plCZpVbX+ZDzTc5al9j9ZWyEHnt261UCOHcE3z/nzxTsFF2yAqu/MZ3WuE/1n3x\\n\",\n       \"bw0v/x1evqy85tdjjG+7zRdjnLRC+NXP8Ze6skaQVfomrVcx9MGXePYHOXeKFo8fcf5W3Y33171c\\n\",\n       \"Oaf7zNDg4xcdt+tGxYrmPBGS61QPpK0Dj92ODi8um+ym+iv79rs0i7FqoF9UbIdUXswNksuiFYmx\\n\",\n       \"um20Tc3O8gnauvZN9M1MzUw0ZFoqJqJEfqas6QqhQpyYBTJzx1KdIpUXUR5yG6F0BGzMKFJeCHW7\\n\",\n       \"R1Elu2lWVC3uzmUrc92lwhMTLr9QbkBfq3P8ibHh336B39/lR2sszHjpsHyw3HvjeiY8/D6WHyvH\\n\",\n       
\"MTs5r+Wkaakt73ybZXi346/jf32nT+J7iKfwRAjqMb69Heo7iRN+40us7fDgJvGgJKbeOeRffrtj\\n\",\n       \"WvzA+7nvPCcXqNYZP8faTbbv8XMDk3/CpIgxzkII7Rf4/i1WHmb7d7gxZTkRxz3HrbpRsqheZE7j\\n\",\n       \"gXnRluxvKmoNR6s17XpmNdKJDVneVismimpVHo6tx5lOOO9AqupY1aKWgW2bFENH4Z7XQsMDRaKe\\n\",\n       \"TI3lrll0pOqcJZeQOnFqqOFEoqUvUcN5U0NVFalBDORzKxJbo8LCPvuzkcHxhu2PPmohDgzyVaO0\\n\",\n       \"a5IV5BsqkTDLJGlVEVbULMrmR+q9qXk3VTV1vpJ5/RC/94esz698kdYtHl0l36eyw9On3xOPm/Uf\\n\",\n       \"4P0HZVLvc+fYv79837bnWI0xPvfGJ0MIf4+9h+lcYHzYM/+tm/ztLfvVluXjvkHtQF5ti0fB8T+d\\n\",\n       \"xPhqCOEfzdz886nqZrRZK5zkHFYT61b1NLRDMC8mTkzsVk8t7REmXHuBh5vlK2p3q2W72TGaZR6L\\n\",\n       \"t9xM+g6SqFUMTLJTWnXzNLVnzcBFdXNDx8g9rCIzds1csGpH5kDqnMJE5kTbSEu0qCLI9YhzIaZi\\n\",\n       \"GAjxRWk2kceRynFqnjzJpKB2zHSFX3w/18ec3y+v0WrCRx7jxz7O/iI3f4+HnuXPPshvbLJ3i86r\\n\",\n       \"/H7K4JCLGWGPnV3+yRsxAN8KgQtXiembOmRwhZPvtML/xouREMImfv4tP96JMf7lP+zYGOORt8wg\\n\",\n       \"vxcYxfjFEMLxgun/VDFandi6Ofbka6Xe/fPLtJ4v54CVyLjb1MwTC/nMqLVkUr2g0h/RPNVQk8/a\\n\",\n       \"YnFNOil0bzIYjtXutmXn5+5uzITNiqS6bJbXFEkqqVwx1Ve3qmUuc91IYWpN3VRxNg2+Ysd1D2Ns\\n\",\n       \"pqJSTM3DrmCqG3fMk0yhKY25PAZZ+P/Ze9MgO6/zvvN33v3uW9/eu4EG0ACIhQTE1ZREipREyyPF\\n\",\n       \"sl0ex055MhNr7DjJB6fictlV8yU1S5VnqXKcZcZZnBlJySi247G1WIoWiqQpcV8AEBsBNND7dvvu\\n\",\n       \"9777e858eJsSRIGUGBGEKOZfdT/gVt/ug3vufd/nPM9/sWlKSSXRcXzJeCYhFrCaQDmA2ABlRESa\\n\",\n       \"Tv/iKKY0cOIr+LOSPQImVmB0V4VQ8WBtHhhVSn3HSfFGyEF9BL5z6tHTD2E8Am4+9Xx7V0EIDgMH\\n\",\n       \"gL+61Wt5u6AUg10S6wne5HT7447dHIxPN2EKqJCe0FbeyHtICGHOwkMPwDUBy5uwL4CpHOx0oTuA\\n\",\n       \"P7ne00IpNRRC/Nv/BD9fg8kYrm2nDMW796MO6khNQ4k6ATpdPUBUFVqQIRgKvKJiSxuhJu1UO6El\\n\",\n       \"DJA0gKoacpdcZEvkuSKy9AnoI1BK4Mg9+EbABdVmR0vIKvDIMBBViipmlByaMIilQ6J10VREVSQ0\\n\",\n       \"d4e3HRJMegxUDw+NQ4lAj23yrsuUC3s6Nv/xjgPkz0iMCcmw2kGfnADVIbEEoqOQ4RbKccCXWH0b\\n\",\n       \"d1KR9Hyq2+DtHoy1xR8k09xVUPz7XV5HidT48G2SxWvZNAvr2aOgH4Tjw3SEf24WOr8qhPhfXkv9\\n\",\n       
\"jEX4c6BNKT5zwwY3C8ynedcPmfV/BOvqKuleVknBAhCgYAkT+ldIfAvtAY2j1+/MJYQIp27baoAO\\n\",\n       \"NzSmYCIPx7WHQ4yAo7yti2p6iEjlIvj97ZBvg2AGTK/D8iMQicMPgf4LS/7nHxeb84UwzWUiIoZa\\n\",\n       \"+M0dULG8nF2cBNOz8Cm3yAY7TAThaAlOKqVGRWxNULMVPHZIKpGKPZDoh8b3g98DWRGpOgn0Qzsw\\n\",\n       \"6jExXFXEVFAsGoOlUehajy7nmj4WL1S+H9pXgAGRun4Ye+piVTMiUr92Su17BixxzItqsC4fwrY0\\n\",\n       \"g2WPiPHnShW7QFdooJ/AbiWaWqB985Ts8DbtlqwP6gqGp0Skein85oMQrIQgwDC4n4ffEpEvX2wB\\n\",\n       \"0oaG699B24PgNkAsJ+L6KSS/MemxKBulJ6Z9zwOVH4AlK7U0TWRcRH4OFG0YfrWE744E7iQkcmYi\\n\",\n       \"mfz/Zu+9gy257vy+z+ncfXN6OU7OgwkYECASyWVcMK652vVKxd1VKFtr0SqXvKu1Vest+w+Vtsp2\\n\",\n       \"uaQqa+W15A1eaYNEU6SYiUQABDAAZoCZwWDezLycb8634/Ef/SAMSSSCAAYE8K2aqnn3vdvdt897\\n\",\n       \"p3/nd74BJw03jrUC7KnD3AnefWP2roQQ7AW+DBz/aQqRHfwL4KoQjEr59u3pv8tRhNTEGFf3H8Xq\\n\",\n       \"WThdgEm6yfNEv1VlagbIQDgOqQUIGkLkL0D966+0JS6EyCXgxHC8L7vWjPkhjfh75gmY+BgkE9AL\\n\",\n       \"hEj9ADoPv9iBFUIMW7AHYADzaTh+ApRdO3OSA8FtsFRB/1CbkRzYDSGGF2H7azcucH6ecNOKESHE\\n\",\n       \"bcQ21RFwVkr5392sa3mdmJyE4QM3PMi2YN9u2GfC0GF4ch6OXYLLQmgX4ZZfgrvWIVuFjgbf/i9i\\n\",\n       \"Y81PPQSl1fhjX9wDDxwxuLZ7khcSwwgZIlnlhaMbTFyPiF4uf8CA0d+AO1NwYBWEhMUiPPS3hRD/\\n\",\n       \"/FVySPLD8UmRwBXM031GixmcTRupDrgXePDXhRD/8l2swMiPvAzxcwSaTtwFIgnHjkCUg/4WJEMQ\\n\",\n       \"Q9DZD9mV2Dz36Vc+fOofwS2fgjuW465Fx4DH/hacd4F//eM/vbPlp8Hw34IPFuHwakxKW0sLvvVP\\n\",\n       \"RlnRYWbGYKLjsXo8jUmCjNdjUd/iq/ka9z72Ev/kffyc4X8H/kBKfoLw/FqQkqoQ/DnwW8A/edOv\\n\",\n       \"7L0JqdIZmsLGQt8hB0siBqPDZBNVxtKQmIQDLtR2QeEhWDoMP0wA/+bGA8WLC/bMwGePAwXolmH/\\n\",\n       \"BbhLCPFHsdLlyC/DnZuQr0NXhyc/CudM4FtJIe4+DB/bDZECXAN1C+whWA134j9cMK6Q/GCGjAO3\\n\",\n       \"NOHECswPw0O/ufMceNu4Hm8WbmZnZBH4kJTSE0L8mRDiiJTy4k28nteCk+GlFUwdMiHs2wNbFdBn\\n\",\n       \"oTYbSy0P/oD8EbhtE7IuXBmC5ROgTsOuAmy7YDwJmQ4c24C528e4vvsA0/MaxgAECRpDLs9+poKT\\n\",\n       \"FGIsB1vflTLcIY+KwzAzEluXvIhdVahNQOU48PArXH+ruqP0qUO2R7pkka016SU8MjUY7sBJG8p3\\n\",\n       
\"EjP7341ov3gPbkQVkoMdfxAb8hGoXyF9T5d8BlQM6u4QtVUHMq904HgHaOYj8IHll7ZPkh6c2YCV\\n\",\n       \"z+90Oho7CioTknfB5B3gDcHIJEz94CV2fM51GNsd0dLy7N4YsDF0iCCwMLU+dsuiPlrCM8/zw1ta\\n\",\n       \"fOEHOzTpHJRfi4n/Pt4BEIJfBPYBv/QzHOYPgW8Iwe+9gc7K+/hJVE2aro7xn5+JPoOEh5ZPoJcV\\n\",\n       \"WrMRJzpQ6oGWgsosnDoHS7uFEKNSyo142zz3adh71ECeimhqLvVnUgSVNPTzkOjCp8sUk3D7FuR3\\n\",\n       \"+IYJH+5YgZU7hBBzR+Dj98GqtWMRcRiUP4XPnMc4nsXsq4QDk0E7Im90Ufvx9KUAeyqwPQWbh4Cz\\n\",\n       \"b8ZN2Qn6M4DuT0u2/Wlx04oR+aOxxj43uJC+Q7G9DkoIQgXZgnwGoho4Fi+tbqah/UOUWwOGrsN6\\n\",\n       \"CtY/AGe68EIEkz3QM7B2Ozj3gyJ11H0O/dQmV+8OUEWE1jNIM0qqUeHUOowY8MMvCaH9hzzh7Dji\\n\",\n       \"voil6S5fGWtx6kKsLwcodiEx/irXv7qy441RBCmw5QBfX8S3uhzcKWxKLTBH38J7eLOxtAibF2Hk\\n\",\n       \"EGwqQBmc87E/yOKQEH9TgTufwLxHZ/dqgalNAI++cYkLJ9ts/LtXOXYJEhokfuz3uGak6B0cht/x\\n\",\n       \"oZ0U4lHIjcOt++CWNbhsgHYYNu4C9UFIdaGZS2EFLlZBw1xSqBdNXFMQJHTIRXhdgZtNM0i1ODsD\\n\",\n       \"ZQFXz0P0/Mtd2Pt450AITOKuyH97o5fITwspuSAEFeBDvIYFwPt4bUgppSrEXzWIfjeBzAmMwKVR\\n\",\n       \"ULD9Hmo9whDx4hLAdiHMxv8vSCANbED+83D7QdhfTXBJ2ojW08x9ap35ehY6Mk7erYHeh9KPZYWZ\\n\",\n       \"IeQw4cQ+8K0bvKqqUMqTzC9j6GlGKjpCVtg42ieSa6Suw64bvKQKA0j+ZwGeEMJJwh0ZOCWAOpzt\\n\",\n       \"xttFr6oQEkJYGfjoDJw2QGlDRRPi68EriCPeDNx0zogQ4hhQejnZ6zsJUspqVoizD8Ftt8BWFBtd\\n\",\n       \"OdvQPnyDpluCIvAbsJWEpWnYFULGA30AtSzsbkN1WOW5ey2647C+PwO9U4SbGqG5gj+6iFKXZGsQ\\n\",\n       \"apAO4Ew1zdV/fA/9CznkwhpKwWPgPM8PPrjFxx+Iw5CqCeisCyEcoAAYOTiegKMSPAceK8NfPQyf\\n\",\n       \"yMCZkGamg+3VOPVk3FkBKKdh8KYzqN8pkFKGQog/fRA+ewH2GkAN2uvw/f3w2TPQA2pXSSkdtmea\\n\",\n       \"aGGS0VoXkWyzqw6d3IvHEkIYGnyoEKf2JpKge2yNCZ5JeoyUJUMV2EhOcnbfBG7/QzDvgngUPncO\\n\",\n       \"u+Ryx84DJN2HRg/GBWzPQuoiCBnQ1y0G9QEdK6KdK6JEFpo3gCikF/TwBgbdCjxwHnqXgQUgsRN2\\n\",\n       \"1QG23ndefUfiHwJXpHxNP4nXgz8BvsT7xcibggge3qJ/vMjS/gLqIEG4cp3MiRWmtuM5vJaD0Q70\\n\",\n       \"rNiGIwK2FaAeCxEOHgLTS/Hdu1Rae5pYaoGs65B27qW1vAqpZRiH/kNQceJ5+0X4CjQQIPwbnsub\\n\",\n       
\"kHwB87RFvlkhu/UUnpPBs1wMt4ZaGHDX2ViB+SIqJnQ2d9SAmRL8ypmYXrAVgbgMH34a9goh/vWr\\n\",\n       \"kWYL8MtnYO9eqNXiXJAzc/A5S4j/042tI95wIf1KuKnFiBAiD/xz4Iuv8P3fv+HLB6WUD76F12KA\\n\",\n       \"eRIKpwEFak/D4OkbfSia8PUnYes63ClBVaD+YXhqJJ78CUBcgaRP4y/h8XvAyYMhNa7s0qmMwVre\\n\",\n       \"5PwndQJVRe2pECSpR5NgRaAloTOK2oVAe5TOLODFAoptfQRROrgz6WxSdRXC4ihRtsp37wsYvgyL\\n\",\n       \"VZteahR+Jw9WE25NQv1OeFiD6AJ87BmYqsKfVeHrwJdg1zDs34RKHpZugcvjIO4XwjwF3jNvdVvu\\n\",\n       \"ZmCHU/MnO2ohHagPw2/eAe1paCwgMhlS8zaavc3W9BZ516V4JWSkAi+cEmJsH3AEhg4lUHUTWTMo\\n\",\n       \"W8cI2aLmrzNXBM+q0sqYzDklalkDf/F7aJ+TBJoAPUdrZJOLz8ORDZitwIPNeAzcYXi6BxsHGqwM\\n\",\n       \"7ad5vsq1iSwickEqSF3i9XL4mwKiFrIHThJKX7KonCjQG80jm0FMpn7yJzNsfjbsTHAG0Hs3/m68\\n\",\n       \"1diR5f73xGGKbwb+Avg9IdCl5G1NWH2Xwt9GPFIjM24Szep0Nrp4T/uMduP0jit3xh2RjoDcEjw+\\n\",\n       \"BRsXpJTbQoj9Cr0jkzx9cBgnBDVj0nZqtFubhM5jcMSPHbN7Gtu9gMdH4K41yLhxhtgD+9IsiSLc\\n\",\n       \"fQ1OlmFRgpaAdEi4Z4uqFqCpNUqDGmEEE8/A2iHYLsLQDm/s6hA83wWtBKO/C9HYEOVDCuHaFYx9\\n\",\n       \"GpE7yWC+A1PlmBz7spllO2KHfcdgawHuHgJrBiojkDoLv7IRL3r++M1e7NxMAqsG/Bnwj16J/Sul\\n\",\n       \"/P236VpUKPwq3LIf9pdjUujVT8G5w0KIf/NiBbnDdH585x+2EKceh89VwdFBXgd1CX4A0bfh0iak\\n\",\n       \"/r7O4ANDJH2VDTFO4Cv0DB1P9YmUbfrOBEG9CKzAZBn8iMDqIIVPsg1JCbVJqBR1hHMJblcxgg7q\\n\",\n       \"bhdlVMEQBjgBg0BQCY7BXR+CxY3YyKLXAeM5OPYheOJOWKrBgRpMSimXYyLVDz8OF+8F5xSkGnDr\\n\",\n       \"tyDfhae/CBcywP1vx/2/GXiR6CuE0GyYnoRlH9QeshSxtcvEdNMo/W2c87B3DR68HfIGnF4CZgy8\\n\",\n       \"vGDVl3giRTCao1nO4q/VWBpvUs2oRAWNvpkic71NJsox2FskVBP0Wil8Q/DEXRtkvw0TTTjyDDz6\\n\",\n       \"i1C5A6ZOQmEpYPTBBa4esZi3k8jwMiI5TOQNEWzVoDsHdg+7BPdOZnhm/wzq+GEynkPH0gjsefjI\\n\",\n       \"82AIIf7pG3PifQlxoZ7+CEyfAVODdlUI9T+9qLx6H68b/wz4V1K+fnfMV4OUrAvBHHAP8L0345jv\\n\",\n       \"VcRk8twXYP8dAcN2AE0oW/DCFXh8FQpHIZiHK8NgboG0oP4ItC4LMfJ3YfxMlsa9Y8xUTFLNkK7q\\n\",\n       \"oDhtyrki/WAypiIMhsHtoP7qCnNPwfwuSBngqRm2wo8TPbMfNq+Dchk+VgT2wNwCYWKCIKFSSy9T\\n\",\n       
\"XPA4tQLre+CFFjy0BM/thHM2r0GnB7feBR9YtXlgysIv1kiPJlCuS0RnmcZtFtvbGaIZXqEYAbJD\\n\",\n       \"ILdhIg92cSdypAjdDGgW7K7ADDD/Cu9/Q7iZnZEvErsJ/kEsKuB3pZSP36Rr2QV798Ndiy+9VFqC\\n\",\n       \"wTRU9wMvS6ztS/m0EGL+OuxRQfdg4QZ57XkhxP+VQR4QTAUF/ENF8h2X1WyJQQ+MroOSLcNwH7Wr\\n\",\n       \"E6kOaqgRRQFC1/CtOI1xjwSh96goG6j3GaQ2IixbYdhzaVsutg2T3RQLt47SO6/Ebb7SEPSHYbAG\\n\",\n       \"I9uQGILuBETPQEkIsQaMQ9iF1gCOPQKnFmISFMC9S7B9jxDiiZ9HVvZPiTDYKdw24EQWRrO4jQDN\\n\",\n       \"3iRIFDh3a5WBBfMT8KlvQuMEFH2Tvqdi+D0enxzC6V3DHaszrMCQLJK6uE19KmDD8TDLCq2jQ6Sk\\n\",\n       \"ghr1UEswUCYJ0k0u7OszaBks/0JAIx8yW4eTC7EKJ4h63PdXPb56zzBXl07C1TpoG5AxoaVDyWdW\\n\",\n       \"EXh6iU5xFtW1EXSxRnVIpgnJ4H9mleA54K9+tluU+zzcehRO7njhbCbh4V8XQvyrHzfMex8vDyG4\\n\",\n       \"g5jfceBNPvRXgc/yfjHys2IKxu+F6SkoAqYf8/X8++Cp34fWfyTuCnYAi7i4SMHsfwN3+7CVynC5\\n\",\n       \"JfFSHm1Nw2wNkD2FfiYLskJRBoisizs+hrZUI7Gvy6QBaqTjmzqp8WdZHcvS+6oBKzlYG4o1xMNJ\\n\",\n       \"GGRIawWMgcLy6DYDs8HxDUj6MP+XUH1xyySCXf8YPrgCZujRzwakVZ2hhkujaKOVNYYGK3Rm+3Re\\n\",\n       \"ravRqsQqwlIBbtwZMHWolSA6Hxt/vjuKESnlv+Udo9rIzMLUy9iCl3ywPmoKkfFikurSj7enpZR1\\n\",\n       \"IcT5EOUAZD9pCXsmy0ABljXo3ULtyUU6hyx0PaDv6ww8G2EERE4GxVtBsZaRySlEaKN1B0T6JjIV\\n\",\n       \"kBUmWsKl34DJeplqxiHKCshJkmaIaG/i9EP2tmDlgAIjNYrWOdwpnU7gIZUEDBIQdWODrW4zVpJ4\\n\",\n       \"kPsvYd9BmHJh43ZotuBKHw5uxp/KCmFIwHwB+LkoRoQQFuhHIbcPvCY0zv+4q+rLQUopHSEe/iH8\\n\",\n       \"yi4YGYdqQNi4RGs3GMFutp02mwmPvc/GKZnVFDj9kB4aab+P0Jv01HUyRYf9PRXDk4ixCDXTIVNZ\\n\",\n       \"4NLhvQgH8CJ8tUtkBBhbKo2EQeeDNpuBTWgJqHg4Vp3tEY8912FxFwQLcPDKGtedw0T9U7ClglyE\\n\",\n       \"3Dm0MZ+9103KORikAiKriZIwyAY9unoLw9BIJhS0/1GIXBIaf/FGOiRCiBIcPhqrhF4sVkc6cKsB\\n\",\n       \"lXuBP/5pj/legxAIYhuD/2EnZ+bNxP8HfEsIvizlT8rW380QQkxC7hbQ01C/Av7FNxrvAMY0FPfB\\n\",\n       \"dB2cnYd7uhtTy67/TWgHQAT+NSllNT5/6iQc02B2C8q2T6rt4Q5ctvMaApVB5CDRSbsqad2HrE/a\\n\",\n       \"V+ikdcK0wsxGks09EbqSISUH2LvPcu03huh+PwddC4I0pNNoTY8AH6+goSqjVOwWP4wiCs/DfElK\\n\",\n       
\"eWXnfgxDVsRkWAix1Q3wU0QoCEMiFZdQW8ORHp3GK9wIgPVVuJ6BXXnQk9BvgTEH1hRcW42roDf7\\n\",\n       \"9/jmE1jfTgghhoAEUP1Rt7p+COt5GGq9RCqaLxR5/O7d9NfHILcM2gI8L4T4ixuJP/E+euHXYO/R\\n\",\n       \"PFsH9+CYCQZhglrxOpGoQeoo3tkOwewEdreFmW/TG3LQ+nVI93CiK7jKKkI1iXIStdZE71pM+QG6\\n\",\n       \"5jJUhqTXoatfo5dKs5jVEaJHSJMjVSgnIK+4zHoeFZkl0emxNbrEtmrCoAliP3QWIXcFAkh8GvZ+\\n\",\n       \"EqZqYC/ASBXGJDx9Gqa+HcvMIqAl+PkpRBIw/JtwbBjG29DfDRdvF8L6ipSDp17r/X344bNw0oXj\\n\",\n       \"XYjaoPhw+V68p108fQnYwsvvKLub0MgPEH2DrqNg9BYIR3WGEdiRwKqB4g4YmDCS22QzZVM3TXw7\\n\",\n       \"IowCRJRikGsQqTr96iyDxQK7uyt4xQZKwaY17HElC1LAUhN6yy3kVx6AiXNwQAXRjG38/xwqH3FY\\n\",\n       \"OBLSK20gnAKqUKgHIRlPkPXaRIOIg3WofxAu+MTboj8tslCULxUiL2KkBcarqbfex0v4G8T8pDdy\\n\",\n       \"/18Ll4lX6Ud5DxneCWHdCkc+B4fdWN2yeggu3L7Dk3oD25KhA8KIC5FAga4db9e302DdC8d6kOjA\\n\",\n       \"NSGE9VUpB2chOQGlnYeyvd2hfmoCTytBOKBrbCDNNqoGZiIg0iV6mMDoV2jnPQzfYMMKURIGoak3\\n\",\n       \"QUsAACAASURBVOzjoZDw+mStLhzr020qMSEsDAj1LfxUHcv3cPp5SlUbJepSO8qPhmi2oSHjUFMz\\n\",\n       \"BKe9zcyqz9WxHF2tjMiU0YIKxQuw/Ypp7lLGQX/nQF2HL43FmRPdSfihBuFc/Fx407do3xPFSBxm\\n\",\n       \"VvwiHNkNGQlbCJF5FFrfS8MvzMKHNc6d6LNyoMrIcp/bnyvw2AcO0wlOIM9loX0UeAQOPQJ36EL0\\n\",\n       \"M/BxF3tXzCMoJVX88l5EOEJxHSLRIZyZpf7cebgzAdNdIi1Bq2Ria3W0wQaD1DaYBnpQYYSQoSDA\\n\",\n       \"dFXWTIeyImllJGoUt+z1ELqGTjJyyHZdUiLF3pbCXL5Hz9HZ23HR1EV808YzHQrlBs3cd3HH2rB0\\n\",\n       \"P4ysQrPMqAojd8Dtm5DyoLIHGkBLhayEp3fDUA2aNmxdeXEF8M5H4gNweghuvcGufUqHv75PCPH8\\n\",\n       \"a01OOyqbbwBTCaiXwB2PLeBHr2CebGP0oVJV+bf7c4RZAzHTxwlbCF0nanUoRjq+dAl7ITWjyyCv\\n\",\n       \"gdBp5SEp6wh1myiyULQEq34GEawTpaYgbODfus2CZ6D32gwKPmd8hRE/Qh1AdRyezoL8g4qUD+8o\\n\",\n       \"pdSdGIFkkqd+2yZ7oM8hZwXFaNAVFltGjihs4aqbjOz4xzTNJPxySYhsDR6OYjvq1yulb0H1Zeyb\\n\",\n       \"t1Lgb77eEXqvQggs4J8Cv/5W+IFIiRSC7wAf5T1SjMSLj933wSfX48UTwEwdjEl48Ay8ZuDgyyC8\\n\",\n       \"Gje/V4swGAXPNHg+n2Qxa6DXTRYO1UnVWnzgPPBZIcQCpDZg/ghcOKazcnIEzzJQA8nASSJkgcB4\\n\",\n       
\"HlOmMcIkiq+hBwFRWKdrqiStIl0lQBUhUioQgasN6ApJwlmnG9Vjo81wlWSuw6TVJuUqKEGPctFl\\n\",\n       \"dCO2uiL94ieQUvZip+/H7oYPrMby4dZInQMbdbpzkFsHKeH7ITtO3K+EnTnzjzQhznXh8xMQzYG5\\n\",\n       \"Ad0N+H/feAfqlfGeKEag8AW4cxpu2XlY+Qo8dI/OY5PHiXbdDfMVup0lxKktOgcvsTKRp55LEG6+\\n\",\n       \"QPJejag3RP/qLLJ2CX5rFtEzKEwKUoNl3Nlt+rbNZrJBWK3S7PTRE30SaR3l8DCd5gC3DLjP4u0J\\n\",\n       \"8S0NLVBAJkmXy0ykUkz2dbq2RFNVRu2QHiGboYmlFpgbqZOsq4Qdh22riBqGiN46T2VG8MwaOUWl\\n\",\n       \"LwV6q8Nk6ywt1aZn+aiyC/+zHzvGdsCYhYOfgbqEpgHZAYzUoV+InYpXzuxQS/qwXoHWgztqp4aU\\n\",\n       \"MorzVpyTkNkHfgMqZ6WUizdvTG9E9jjsKf/oawkfplRYmADmXsdBFjdg7QDY49BaRJldJn9qAVvp\\n\",\n       \"cfcTKnN7Zrl8bBKxoWCt99geruD359FWQ/YuBUSJPq1xSFoWqqVT00PKOHQVnZxYoKon6Eobz6gR\\n\",\n       \"JvcgOwI0DfQqUaJClNhDV1nmrN5j4IEIodmE3AqsjgkhWgJmknBcFWLKgcM6+gnBXs0iE0oCv4+n\\n\",\n       \"NbCURZYdl6FqQEJqPLR3L92tIoo/AuOb8KsX4cJOh+81w/+klFtxbMHZfXBiDTQJNRueycLWX7+x\\n\",\n       \"8XpP4cvAs1Ly0Ft4ju8Cfw/4X9/Cc7yTMBn/bSf8H315TwWePcEbKkaYh41z8NhnBEPRCEvZacqJ\\n\",\n       \"Akbg4poV2sNjaN4LPHFrnclllWtfNGmrPS59AfajMpVs0Q99yk4WXxvB6I6gNDfR8lVsv0LPMXHD\\n\",\n       \"Jr6exCBJW4WMOoYSRfFWuhxQVSLaiUvI6TFoWrD4JM4+lVlhkPMtHF/BjMqQCGhmIRfA0JeFKH4f\\n\",\n       \"qt+Mw/xa34WnfVi8E6JhGKQgacVejs+PQy+E3neACWLj0Z2tWNLEc/2PLECDmBd5YT0ODQ2J87be\\n\",\n       \"EsuAd30xEuu/D+2FYzdYnOsRnNhM8dQXTjP4hg5RFuoBnWczdDJrMKNiAzPSxGpEBMYa9VNNNloz\\n\",\n       \"UMyR7OiMbob45h4aiTSerZPxHDaGlzHGYcLLoGoSOegysDwW1Ntpf1MB+Q3kIYF/9CCWuUnCsikq\\n\",\n       \"oJhdzHBAy0mi4FEMLOb7Gstan4Tl4OdCOrJAd6WALDdxZodxFQ3htlC0kLQDga4SyYB8o8OBZ+EH\\n\",\n       \"UzBxD4QSqheB26BzHHI2zA/BZg0OL8ScrPok7J6HiYdjHu7mPrj0e+A9A/0lIZQHYfwjcCIF403o\\n\",\n       \"DMGFk0JY/yFuV95syAB89Sdf9+EG86BXPYKUgRDiT74HX3wOZvuYtzfJtKocOwdT9TxP5mcZmQtZ\\n\",\n       \"GlJphzp08hDWoFOldh1yJ6EQKSR1DT/0aWmSUBFEoUUjdFFpcoAmTUWhrPdodhOIRA7dM9H8Mqol\\n\",\n       \"yMthOrLGUiNk93fgg/OwmoOF3Xla9+XQPtkkO2uRSvt4eoTQk6Q7oAgXOzDQ6RMl+vhqxJE0uEUb\\n\",\n       
\"XQFRc/DaM7B5EPoeHHkidmh8na3W6r+HRz8BL5yI04Yb3Z2E0LfMAOndACHIAb8N3PEWn+p+4I+F\\n\",\n       \"wJKSN33F+g5ECP7L8GN8NXZKf/3YiWRIAh7IpyF/PMvSzAwDbxjN0BCqg6FquBOb1JUREs0B61P7\\n\",\n       \"kdk29v4qM0aHgq/iCgM96GFEJh0SSFfFEDo9NUBnwKi3hS6zrGqT6P46bTyW1HVGlZCk4lMXSTqh\\n\",\n       \"TSQ6GL6K03EJ8iqpKCRZdlFTPr7Sxw888j3wNdCb8JF5CHfDD35TCPEvdvw/vi+ENYBDvwS3/nvo\\n\",\n       \"6XDuEzBahPHvgzYEF/6eEObXILkLDh+EQgRlRYj8eah/9UYqgpTS44YYlLcK7/piBHAg9TJ73ran\\n\",\n       \"IZJpcJcQ0xukj0nSCkh0arsrON1dmANQUDA8i1J1gcrpWfzzEjsZIB2PxlQJ4QJ2GVeRCDPPuNvG\\n\",\n       \"VHR8kUIJl1CsPqXd3yGdsgjaKo2ui7se4Y31CEo+UhdEionud1E1gQwyOK09OJUObUeyXYSQPtRc\\n\",\n       \"lEyZKMrglruoqQoTpiAbwURfZawT0jR9LtwGy0Nw6vtwZgUaDjz8ORgcheNrsUpoTY0NvJ7cD0EE\\n\",\n       \"moDR+2F4C67cBcMZSNdgRYvl6F/7bTi6HhO1qgchyMK+DtR/TQhx8bXc/N56VJ+Ey5+F4g2KoO0E\\n\",\n       \"rAyATSHsD0L+VkBA/Snon325NqOUsiqE+MNtuBOGRuDu52G6CTXbItQi1KEWYiaHiKYQCKJwADNt\\n\",\n       \"lqRHIQXjSwqLuwRrRZWSrzOhSTqyRkcJaSOYiRzq0qAo51jJHWI7rBLZ1xH2NqYIkYGF0nEIr7U5\\n\",\n       \"fSW+qpZh0N43jn1fg7GsxoQcRe1VGaQrVIVPYFhoAw2p9IkCF0eJSAK6qiAGaabVBtf2jrP6TR38\\n\",\n       \"NiSnwLsCe3mdxcjO+H5FCPFtYiVB8/V0Vd4H/xD4j1K+rs7cG4aUNITgInEC8HvBAG0JlgZQdmJ7\\n\",\n       \"doj5XJeLUPnK6z2IEOo+GPtFSOfBjaCVBP1ikTBlMBgSSEWSVEERKTx3hcGwTq+Uwl/cT7TwKLnb\\n\",\n       \"S0yt27hOBQgIHY2S1mBNX6Wf16Afkeim2FJaYKjkOg79LYElPMKUhqJY+LpNR/okpEZCLOKIiIzp\\n\",\n       \"kMi6uNLHtw3UXpeorREVi9hCougdyrbLZnbAr9Rj1+fKFGztB56LpfhTH44LlaQHjx6D000Y2obl\\n\",\n       \"adj7AIw1oPZlOLAId+zMmxHw2Al4ogV8500ftdfAe6EYqcB2FBvL2Dfsk29nPKLlqzBeJXeLyXhT\\n\",\n       \"oIY+odpHE3Wc/iWapTEiRRKl2wTmFpo6je/7uLscBoGGW9IRUYCHgWf2MWSEYum40kNxB7h0EZbg\\n\",\n       \"lpZG13XQm1usTvvMzWn0ruvUv+BRTduYUYhvh/iagRJo1PAIMwnC9BiaXEG1IgpqgrCwTK9UJWz1\\n\",\n       \"UdISWzPZ21GQfsBSRoKQJAT0enB6UePKkSS1PQH+IZfRrOTqaEC4CfllSJdhcRqulOGDL8DECpQL\\n\",\n       \"YOWgVIOKDaEdcw5K+Zh/V9kPwx4k+tBPw/JuWLsb+PbNGtwY7tPw7G5oH4LJELoKzHmw8udQ/FU4\\n\",\n       
\"OQv7twEJcx+HZw4IIf4f4rbQblAtCNeBch4+PwynBZUDXb5+uIk2aLHnfJfQ7uBO6GAfwqpGKE5A\\n\",\n       \"aI8x0BsoH13Ga0uuI6n7EUlsRjshbileXoxEHgkRUsdBiXx0VaUULlPVwEZlV2CiCwvVr9EzetQn\\n\",\n       \"YhJazYHngyQc62In2hhFQTXtYysOBhEGFTytADoYUYdeso+lQKIJQV/SSEV0hEq6XUMZukb4CQOU\\n\",\n       \"SkwYEkKIZ/kp2q47+8g/k2fJewU7XZHfIk6DfjvwHeBjvAeKkTjPTPt38I1fg31FcCJYVmH+Avjn\\n\",\n       \"Xuv9O92QO2DP34n9xXYvARF87xc1rh+ICBN9tLyHUCzUUBIoPsKMMMM2fcWk0r5MYl8f3dbwPAtN\\n\",\n       \"CnBqKFGKri5xRIc027h2QLJl0g2h4Wg4/SqVXBdF6ZKJFBwtiSUUpFRRaNBRQnah46oqUSqH3ajg\\n\",\n       \"KQ3kaIJukAYlws8O8JUS7nYSo/ksl8clp5eg5EFqhJg3lIOCHhciELs6jLXBjIA8eBqoAobHoXTx\\n\",\n       \"pQWcApxag6u3CyEeeDWH1rcC7/piRErZFyJ5PzzwKTi5DYVeLON+ItHE/cPHUf7+BAndRAn7eMYS\\n\",\n       \"/VSD8RXJaLjEJW8DbVQlEQ0Y2pYs5a9z6Y7DdDoKaSK0XoBvNkEk8VQTb7CNr7kYRKidCj0VZmVI\\n\",\n       \"mBowKA3wEwNmrga09Ue4fmIUzzGYEx1adkgKnShqUrdmaEUBhjuMjBoECRcnEJD2SSgOSWuOtB3R\\n\",\n       \"FgamNMEw6STShDJAhi1UpQ+74dnbS/TG0qBvoVsqwx2TsldnPePT2Qusg1yHzr+Eyq1xVezaYO+0\\n\",\n       \"P7dtSOysnPUuDA7A/quQGLz0WiYB+buFEN+7mSvlnfC5P4faNDw3Dn4qJv0qp2DPQbjjyks/ffsy\\n\",\n       \"9Gag/GmYvA32BfFk9kJWZ8GZJfBOwNomfdNBOE3U5CKXb99C0dfpZE5DV6KYIPVtIuFRkHmEvsWg\\n\",\n       \"BFY2JIp8ko5GGEaEnoarxt2KnKdTabRIJAJ6iQyoHRSRZSxyMVQDwzeRXojqSMxhnT+6z0c+BZW/\\n\",\n       \"7JH9M8nkeMBEWiKVLSoih5RJbBrUhUsC0NSAuqFApDIdeeQ6ktFyh8sTSXqmRTi+D55rgHMBSnvg\\n\",\n       \"Ez7sasKzQoi/fufwf941+DLwNSl5u7ayvkvsZv07b9P5biqkDK4LIf43WNoLhhNbKv2k9cKL2HEP\\n\",\n       \"zgM+5D8J2b8Bp5OQGoLreyHxfJqM2WMzs03gJFCjFjoKA9Ug9GuEYRdBFSVIcdAckE/4bIU1emMO\\n\",\n       \"qi9IKD2cqAehjkqf0cBDUw18Z52a4tEJxlmz6iS1NrolySsBQ4qHygbbSpGO7IOQpCKLKCrjWgpB\\n\",\n       \"yUITEes66JGB2fNphjbtYArTDTFIszLd5PQSVPU4HBiAXuwSG4o4fFN48RaWECB90EJoOXEosBlA\\n\",\n       \"JHa2fYJ4wW5qxAu194uRNx/dH8AzDVi5B9RhcBeg/EAk5YoQ6SMVrI+ZtHM+VrfJ8ackBWD+Y+AI\\n\",\n       \"j0+9EA/oZhG8aI1OSrJuZNgyA1SjR6jnUUJwtRS6u0jF6pMgjZqqEugeRCZ11UOYLooMaB4AR+sw\\n\",\n       
\"GlZopvcyUBOsRH1MavhKE19UUO0yrlrH1zxywsXUdMinKPngRzAhJG1caqpCxS5heSoGgiiy6CoV\\n\",\n       \"Onmf1p0Wwg0QioHd7lNN2aTKPoFWJ70B233YXIOUBldDuP8gzLTiqrmdg7UO3LHj0xHVobYfnBu2\\n\",\n       \"Nio29FowLKGSZsel72ZhR462DPYJOHQaZkNY2AvpkfizjO+EGbYdMA7D+H1wZhNkFzYimE4oZA5t\\n\",\n       \"01n9LssfmqLnN+kmJIoqURMForBGIMqodkCk6mhuB0foFKSk4wgi4dOyQmwJoQhwBdQkiChCCkGo\\n\",\n       \"hggRofkeKc+lr5hoWh9DMVGlgTTa9A0LWkWcrTGqjQGR9gLOvT67ZAo9LVFVQZoIhxZXlSQZ6dEM\\n\",\n       \"fOaDJOgJlKDHZGRg6W1WJ5qM1toM1wLOT1tE1y9DYQ5md8PKEdiag0walPvhS0KI/0NK+SPeA3EK\\n\",\n       \"qXUL5I5C2IPts8Dc+1bwr46dMLz/mtjk7O3CE8AuIShKySvKNt9NkFJ2gNfRCbFPweQnoGjC1iRM\\n\",\n       \"5ECRMLsRdwuyFlz8sEQWBNNql4G3wHaiSFepE0YdhGhgD1rk62Dnh8l6FvYgR/9amc1bqzgZH2UA\\n\",\n       \"DRX6is/+qo/MChK6jzAkDZGh4ts0zQZJ3SGnJEkpdQQBScASG1wRGr6EvnAIoz66TOHqNobMUI3y\\n\",\n       \"LIYRmmXhRQKFAQVXIPISqwz3H4WrI5C/VQg1AJZhYwUe2xXzzvLzcO045GVsF3L9JNR2w/YwrH/I\\n\",\n       \"ZiMyEZGPOuhiL0F7hZvQAX1PFCM7k+dzvKz0rftwi7EJuGcx/vrqECxMwhULxrW4i+IJaBZgsqmS\\n\",\n       \"rTQ470dUkgXaSgFvuw97WhjaGsJR6IYGi7KGZrgMDJVkM2TYjUiqHmTA1gTVSCcKUmhqm5TsEAkX\\n\",\n       \"SYpATAM9QlEnUjcYUSymQwWTFogOFd1FIElKHRH5bAuNltRIu+BrEa6m0+qO0daXqToqu3sBjZyG\\n\",\n       \"qCvM+ZJ9QsUAAh8uHoShLtx9ENoCzqdgzgWlC8M9uOvpmK2+nIXqOnSvwSNFGJLQTCpsWgqFuYDV\\n\",\n       \"LLzx9NE3GQfg0Bk4swFXpyAqQqcAC3dD+wEoj0H3FggSMXncTUJ5EkwHjp8zudIVbGUSJItFvO4x\\n\",\n       \"grUWofkw4aQO66OoHVBEDsVoIQ2JpbYIlQq6qrI/9NhWJL6AlpDs0sEKoNiEalKyYEicrErkGgwQ\\n\",\n       \"lMMEdhBhWRp6KPGUAqrbBBlQTwsiZwzSixifdkh4XVxVpSZAyghLdLHRqBJSUVP0m3vBERS1Oj2z\\n\",\n       \"zhXLpOhrNPMBXtOjs52E763A6G7gFigDGCAy4B2C5AYcAR558UbGMuKRvw3Hh2G0r7A5q3P9SxqL\\n\",\n       \"F1Qh/u8defD7RcnL45eB56R8RbvtNx1S4gvBI8QF0M/ouPvugRBiH+z5uzBig6KBOwG7PFiwoBHE\\n\",\n       \"c11iAEoxIGcJyoFKSe1ycNCmocOqgMwAaMDmMDjuKuExnTUlINRDMnpA14d5oVATCawgoDnkcjCA\\n\",\n       \"TGSgRZKUYjBDh6v2gI4+wqQIkDh06BMiSBKSJsGGjJgXNUoiiS51FOkzGGSo6Q4hJwkDDfAJxXXK\\n\",\n       
\"pTnqyRbZARgl+PDjoOcVnvmfslxhls7cOg+eqHH+zoBSJaShQ6YGpQNg2dCtweTTJtXbdRLdUZIX\\n\",\n       \"PLp2ledub7H5cHgT/rbfE8XIqyO6As/Pgz4LbhH0PZBRYXId7By0epBcgOHDMNz0qY9aDKwJ9EEP\\n\",\n       \"I13FnfbBCVC9ACOQWF4S18nQE1l6cp35tM5QS9I0JZpqsCqytNRxPC1Co0xeQBKDiBRdWqyRY8Aa\\n\",\n       \"GTXNWFBFqDZqJLFEDU0MCKWCRoQmBHZkMkefqqGgqzr97jSDCgTmMotmg3ZSIzBz0NmDPVdhrlgm\\n\",\n       \"LIGZgmwVPv1gbMoKUIxMvnl6iM3LA8ojNVaPh2TKggYOtctd7C3oHNDoT+QIZZahtT6VUcmW04Z7\\n\",\n       \"gW/cvDF8EUOnYLIHT90JUw7MdGBjArb3waUinNgCmYMrhdiUdioAPQuhCi/MuhgDSXl4BiuAhNWg\\n\",\n       \"WYhAGYt981GRlSp2dBl3OkWo13BlGVvUGZHQI4Gi5NGljhAdrokKlu7TtsFpQJgyWbBtItXECzRM\\n\",\n       \"GdI3Q7YDnWEsorBBGDSoOWlWlV0w3YdbDIJxSScaQRUCKfs0RBNVePRkhN31SDBBw5zEaLfwDQfd\\n\",\n       \"TdJPtlkwevh+h0IrQeeJWSi7kC7sFI4+qB5ENrgZcJ3YA/sGWCfglmE4Wk7w/F3DYJrsq3XoHSuw\\n\",\n       \"9neuw7e46Vyhdyz+AfC/3ITzfh/4MO8XIzcg9xswvg8ONcD24WIJtj1It+D5LDhBTKFCD2mo0IsU\\n\",\n       \"ZiNJVlEoBgEDGywJFyYh8mBX08OzPdwh2KvGLq1bisWamkDoSZaFQUfU6OoDjoUeiiqoS522ZqGJ\\n\",\n       \"cdqiy1V8UmRIkMWjSQePTUy2hEoU9fFkSEp4dLCpS4WB3At+C0wBShrCYWT4Ar5ikBm3KFwJaB7y\\n\",\n       \"UPwRRhyVrp3h+skJwlyDKk2qXROuPcdm2CeagKPzcMtWiksfHaLzXI2tqTqXxsfwrh2nfe5JOCCE\\n\",\n       \"+Nrbnfr9ni9GdvgGfwoP/gLs/gdwfBNSizC9BWd/AcwM9LIwBMgoYCVU2B6p4hsaoboLgjwETUL9\\n\",\n       \"PB27gOJnSOGiiRSu0OmwzqPZIpYcBSXCkwp5IVHpkWWEMcp0SaCSIccGEo15YZCRIbYIUMMNpC7R\\n\",\n       \"gGngkogICSlKuCxyEByi1fDp6jlSqz3c/CJBcRS7bLKthgRulkQYEhX7eEMu6Tr0k3DswZcKkXIh\\n\",\n       \"z+atJqXkONuJEuHiJtXCItXx0zCfhfF1+s5Znpu2KBbT5CsBc8YQWyt3M3jq+/BBIcS5G3J5bhIU\\n\",\n       \"DTbGYNqGfS9uG82DdxvIDHQ74FnxxNRKQd2EVATjHjxTdAm3QlxHYhg+QuuC0YYoAwwgYxA9MsLA\\n\",\n       \"KaOZm8jRRVxFMBZFZBWN69osGWmjyog8WUwc1lmgrkaknWE69hRIg4CQpF5BCeoMyS6b2oCy9FGU\\n\",\n       \"LH3jMH6QJEguwx4b5G56SoW2OkYGl1B0UYWBoExLREzbCoEbYBhdMAWDZotKUVJSTAwkXR227A7h\\n\",\n       \"tx6FE0PQ24SsDmMVSPvwQhuSm3HK84+ZIOWPwWxNZW1miNBKY9YBXEYGM6y1+3CXEOKslLL29o7x\\n\",\n       
\"OxtCcAoocXOK8/uB/+omnPcdidgcbeYM3LYVFx0A0xvQnICqB8lz8Pg4aEVYkgG5BZXMHgUjVIkG\\n\",\n       \"AVECFC/2XnJCgd1RWCqFODYMKbEzcdtR6GIjtCK2ULDEBBKdBgs8KXQyUZYuYyjCZoSQdcZo0qKD\\n\",\n       \"IEEfjQwWITUiRmWFEAs/UNhQcrRFAanVQFERTg+NJnq0RSjAEwl0zSOZ0QmSSbRejeoZB70hSRYU\\n\",\n       \"FH2GqFYFpQozY7BaRHy0QTpS6GU6LGwaRHqKfFXHcBNstD5N+3GAizBJrOF/Wx24b2Zq7yjwn4CD\\n\",\n       \"QOLtrsJuhJTSFUIrw/6zcOgGP5LpJ2DuHmAmNnpqZkdZ6+9CK+t4E9Mg+8A2hvTQ9BSBatFV+gwH\\n\",\n       \"EclIUInySBVUkcFEJaKIKRr0WSIkIkdIhEM86h0EPikqWHQQSh9NcdGRKBLUCNYVCCKJlHBNQBCu\\n\",\n       \"obWztK1ZgpUK3akeMtEj3Rtj1oe25jIfzdEo9UlEPQ5vQb4McyVwd8NgHizPYu2gRpRvYxeWOCy3\\n\",\n       \"0DqC5clR6s198EQOBvsBnf7eMitrB1h5NgX9CWgC7ILohTjF8SYXI5VnIf+rcOYG/ooSQboOyjaU\\n\",\n       \"VyGbgpILjgeXsnE+T9MBJzKZzyhoLY8gEzEQtZjxFQqgDlYB3YK8mcCsKbQdla502UyHNI0MTWGQ\\n\",\n       \"RCKRqDKiL1KkogR1y2fdmMZGp68aFEIdSzq4ooerNBnHYzGcocspUAoYigq0CZwrIIpEYsCKbDNA\\n\",\n       \"4sg6AzHAx8DF50poMKS0MJ156pqNTCukQ4XIbzLQe0z//+zdaaxk+Xke9t//LLXf/d6+vS/Ts3PI\\n\",\n       \"ISlxFWVro2TLi+zIih1ElgMHSeAYCZJ8SRDAiT8bMRAHcGDHMGzHgZPAkuM4lmVLIkNJliguw2U4\\n\",\n       \"5GzdPb3ffalbe9U5558PdUmNKEpibA17JPIBGtVdt+rWv885dc5z3vd5n6cgr1fSv/w5u//HuuLd\\n\",\n       \"LS5NKK5yu44v8cde4hPFb0nwLEeMF+p651ryN+VQzEKN4hL5q5zDd8jIb8ZP4R/E+M352/we40Ws\\n\",\n       \"hOBSjO79rq/+g49zrJ4wq+OUjKzvcbLM/XMsnWeyPJe77f565eKFSr1f6K4xm0WzA/YnnP90UP6J\\n\",\n       \"NWuTnntLleUsWk7oJuyHtpE1mVxdpl4tKUImhj2TpG9qURUSZ2LpUEsSzps4kdsxdknmjj1Lph6I\\n\",\n       \"oeFMGBmFiXByUaNsKtqpqjOVmyqTkVRwthyb1Xb1qpF6QnjHyGhxkVgzvhhVg1w0pNlhVnJ5wAc+\\n\",\n       \"IB58Vn9j0cXjA71LPZONUv24MKwvGXWhR200D8f7lvvVPMrKyKF5SfGbngt/axELZl9ne/3kHrN/\\n\",\n       \"zb/oE5aXtb//eWu3ZmbXR8oYxGpgJR9bSINazOyqmSRd/Wx+EUurRWU2FJWmgdJYZUmCwgQ9pULi\\n\",\n       \"CDO5XKKvZSgx01JYl+mH1CBwoDQsomOlULLwWt/Kp7/o87M3uBQV19csz5as54n+uUpPTa2bS0ZD\\n\",\n       \"659pGRRD6/e43KT7g9x4N0+9EPSujjUWU6Ppks3dXKsqNNKeV9b2dBdWGBeEVY4Pefwx9upvMhOb\\n\",\n       
\"mSdIfSv31jdG8dKccNy5wuXuvP2yvUAYZfprHbPp2Elnanu1srJDtsO05FaN8XSJvai52nW//X7j\\n\",\n       \"uzWqh7TvkhXEnrXzTSsPEtXkoXJl0dU7hx68c1lVNp3kqWVBkNtPckU5k4RUlMtmiVFGFTO1pFKW\\n\",\n       \"xwZpoZ8mzsdMrF2QxVxQSRNoSsKaqQOsm4QL7ldf0Ek6Llqz4IGyGipTTtKpMu1ajnWT0JInpSwb\\n\",\n       \"OCu6OqZ5wqhe2Xi+8FL9+5R/6x6Xv8xlc6Z2b8bD3+q7svcZXv7J0plpqWzlktnUqJbaml1l/z7n\\n\",\n       \"vX20Qm8LhCDHnzP3+/iWI0ZVCP5f/CD+/qNYw9sMBeVdth+f38x1RvNDfjqiP+JqYOV1ek12nuFg\\n\",\n       \"QLUXpRgHHuzz1D9l77GankQaUpdmmWxcOmgmNmKpKXUSMqmRng295Fiu1JJLw8g4CSZVZhxTEx1J\\n\",\n       \"0pAplXInlmQuKb0mOG8cUvfiWFH1tY4m2t2XjN7RV9kRwlkZYjIxSO5qx7EnYyUP0Zkw8WoaTgAA\\n\",\n       \"IABJREFUMDqf2KraimRma3lDrJapjudhO0ubZJsc5bYW+pYeW1JvDxRrex4uNdx7+LTZ3SHZJ7lw\\n\",\n       \"wP/zKKYjH2Vq7wST+cj32wHVzflUybM1QsaoST7ijSaDf0xyLvLDN4yfKozWojwk6rWGZkJWVdLZ\\n\",\n       \"iLQuD+SBIBomE9FUXV1HIsaeQah0ZYLEQxNXHWsqdByr8BWptsqKobtqRuraSvtx2V65qF490B33\\n\",\n       \"5EVUPuTgwdjs74w1/9KG9kfOydKaw1gpqyCzzeqqyeoF9yTy4tiNDzxQtQsLnejkSunB944cLhZC\\n\",\n       \"3rB0vK33XMPhYE27GltvHOgu/KLGuS2LVwqx03dy4VMm19foXeaAuXNW6a0fYZynUjprfhG8deoM\\n\",\n       \"+DXM/QeSv8GN/4w8zmURtVt19/9k09HwsisPh4r+loP3Dt09H3mN5AbrP5f5hY8uGW51TEdLJkc3\\n\",\n       \"aLVJdogHvHLP0rtZupBI1mbG44lLn2qpLjUMa2MhjxKF25axIisD1ZFJuqDmxL20UC9yaR6FatdB\\n\",\n       \"qBul1zTjQL3oqrJKnkxNzETZKaWJSvtyQd3ENMmcl2nbl+rqJJXFZO7ovKnSxINQOZQbhkK9YpBy\\n\",\n       \"7wp7A/KtaHn1nINPXeXF6lTMHQkP52XZr0P1Fb7yKxP7f3LHwvVlsZt5MPugvV8/oH2Hnt8l3+Lb\\n\",\n       \"EB/FrRjdeIRr+Kpu5O8/wjU8UszNzDZ/mMvn2L3GuT2qHCtUBbdSrv0iH/oC44xP/Thpm/Elhg84\\n\",\n       \"KOZmkPU6X/wRJkuZ4eDIw3O5q+PKYZ46J7odok6cGIWBY0sO1S1gaZ6xK8FlXZ9L1nTHpSQv1eLY\\n\",\n       \"TFcaG9IkE0QzhbrEukqFfiPVu1rnIOoUXRtlV5XMPEhXRNsmZi6F3GJWKWeVWznrugbtmQfhWcdH\\n\",\n       \"i5QHpDfJG5QFa116Tzi5u+dLT00sJW1hVOofBr36L/C+EfmQWwUxhND8VptZfttrRr6KGGM3hNo/\\n\",\n       \"46f/a66co1lxL+HeL7HyYd71xEQx3nNwcWa1nTuKC8bJmk4cOwiJk9pMLdw2dt5uyMj7VNsS61bL\\n\",\n       
\"XKGpNFIL+7KkMI3LBuVNt7OxcyC4bdlI9Izcgsyhhq5EjKVMwyRuaveOvLHQM9lh9o/p/aslfqJu\\n\",\n       \"/BNB3pgapZU8yWWxZ1qOzWoNMUy1Fhet7p5x97GaVnbs8nau3D22v3AsW+Dxfs1Kg7Qxsr3ywN08\\n\",\n       \"lW3f0HqWd4a2je6+h4upB7V9x38mmGy9yvSAr+zxt+e5CG8NQgjpEn/8Od5/iTgk3mQYQvhfY4wP\\n\",\n       \"vm5Pvsitf8Lkw1yWe/iuVffv55JxX75MPbZs3y8MFiee/SIHx9woc7v/8D3i6tK8CvL+DW4NqR2x\\n\",\n       \"NLSaTz0zLHzwC/MfT/CZtaHj0PBUtWRWlB5Oxw7rLWWckvRJdmRxg6JwlN3XkEliy07SNk42NB1a\\n\",\n       \"qRqypKYT+nqxpx6mRnKVXOohdp3Vtu7AtmBRoaFvQeWSeSX1GXPxbDesaOC84A3rbiZ3HbUK7ymo\\n\",\n       \"pUze07Mbj2iepf9Vm6NDGiUnvg6nkzL/IoTw2QH/Tpt3X+boJu3b8xDBf/itNkX6fYCfxD98xGv4\\n\",\n       \"OP67EIQYfdtNO4WQPsU7/gIfPOTCHV4Z8cIPsHiPtS/xoGJvxh/7IkXgVz5CeJYrCU9VHC7Ow2r7\\n\",\n       \"bd5Y55lf4/B8obE6022WXquxWlXuhIoYDKqJu8nUWJAaWTHFsYYTq3JnqwOX1dxvrFipMlU8kTgy\\n\",\n       \"Sh4T47FZeKCm6VrctGAsOjZKj91J7xue6xMzeVK357wVQ4kosWZJVIV9rTBSZjW7Se6gXDOJ+xqN\\n\",\n       \"f2KUXxHLs0yG5G+wcZlPL7D23UYv9oxqv8rSR/jnr/Pcizz9PfziIuM7/PEv8t3/5gnI/2Z4W5OR\\n\",\n       \"EMJffdM/PxFj/MRb+4mtTa7cYfk1YsKHjsifIHucD/3rsU+22Ki3PNOberB0xxuhqwgdMfaFMNCI\\n\",\n       \"a5ZDX4KRI/1w4LzoJGTidKyej8VwZ14Fibk8mzmLjsRES0emqVCTqmuoW8JMEiYKLbOysJM19IZR\\n\",\n       \"8T/jpwPfdZE/c1mcHZgUD1T1JyQyaTUxzB9TJKm06tnd3LW33rIQlnUmXQerTZvDvtqZhqvJWLHU\\n\",\n       \"U41TZSitllMHMXezlTi3vmxtZ9fx2Zrdjeetf6bQWGq5t7Oh2voMpm/xHXLGu97JB7+f26n5CfZJ\\n\",\n       \"Fv8FPxlC+OtvTqB900X007x8fonGj3O/TnHTwZmxUEvF8g5rL9n9OG7g1oj25/nLf4jegAe7XD+Q\\n\",\n       \"LN5Q1brO1wuLfe48HawcRp0+a0tMVkuNtG4/XNYqxsa1vmkIpo7FuKAKK4pky3qMVlMy2wZhxTTk\\n\",\n       \"0qpUSxiLVuOxrtzAZQmCmwp3LVj2mBOJnqiyhLrgTEUtzEfOo6CwJEotIDd1bF3LqiTs6iZsnnDh\\n\",\n       \"sHBw6Rft/8B1bq2zc4X9z3P2kP/7t9v2McZd/K0Qwsr9uUZkYm4u9c2m/n5b4NRb5EfxXz7ipdww\\n\",\n       \"b6E+hVce8Vq+pZg7q577Yd45Y3yV1xq0dvjhf8ZPP8YLP4N7nP8p+g1efSfN97Ha5mJF39yp+dV1\\n\",\n       \"muk8FXf/Iu8wUWRBfVB5vRUcSSzHoBui3SR4MnY1w9Cx3LFMpWfT1IUYhCq6GG6b2jUJmVaxQLqm\\n\",\n       
\"re8UjMlfF3H/ZcLBf3aI9VSVvLVM9UaT0nCP0WJCdqDppnWy5TNUO4Zhawl7qiF9jVGzQqYPwRpz\\n\",\n       \"HZd14+OalIFyuKzLJLliKDEtCVBKCKTYnxqaZnpIYidYIlS14qwCiwaaKRYZUKOD5lWgQ4UGHkcs\\n\",\n       \"MWOdBjMSWoyYcYWAORaYI6BMxpBlRqxg8LBMyFi6uGaIZRwCUVTyLkOVk+QZeWpjshCHDl6Q8tDT\\n\",\n       \"UIvg5XMJUkkId+EgVZzeM+SLI9JSRNdP6L8EF0+CpeHEEK62i/DI4LYUCiLiL8N/tQBPbUJlHYIh\\n\",\n       \"fOganL8Kf3DbCUa3H58F/rs7vYjbhWNr+DdGNT+Bxcjwm/Ds/WBW4UQfzDZ8+aFiKjBqFrmNl3OQ\\n\",\n       \"GugArlZgaVT45dysQHwJ6tUS7ZMKSRQrnRnJ+RJnxiELFI2ISjmkrywWjSGR4orZocGymdE3M3aA\\n\",\n       \"SQaVGB5MelxtlplI7TidN8TiAIucSBwi6mhq3JtdZ6oy6lLiJbkHyyT4kjFki8KGrQqcBy5S+EvJ\\n\",\n       \"MU3WwiXClyLH15UuXb3ATNURqqhkBS0TIutlCq+PM7DzTXYf7DO3pNjYz5AsYLdZ4SD7EPkzl6B+\\n\",\n       \"CZbbMNmAfh0eEJHfX4GDTcbf0MyaKVZpnWR7COebxcxLAczAuVQYtVx+u93JIDwHz90Hew5kb0gr\\n\",\n       \"74Hp+7IYAb4NPEThCvhD8IlbVBknhhA8Bi9+l1v4Hz8OfJHHz8PnzoNRZOYqY+sGbE/gS/D6P4bh\\n\",\n       \"fwHNFUU/9akeNFkcCUpprO3rRG2h7iX4lkdtXMHvR2S1MU61QlW5lDjLgIiEKgFdehg0LRpMCMnJ\\n\",\n       \"UBjTwBObDhNChCllAuQ4lSRASHApkeAwT50KFYSEq9SZmRNM5QaIJgViMyWROQSDx01GakArPUVo\\n\",\n       \"pWinuGWPyRCTUJKIRnLAUQWuV7aQX1jBBI+QTG7AXAWvleElE2arLrlYuMESzrRDNhczkJDpJkTr\\n\",\n       \"kH4XBlsw+SPgyu2KkL5dMCZ5VUT+UQSf9nD+wSHW3FWckzM2spTGrsvF+jKT/iqzZ7vYVbfoPrZP\\n\",\n       \"oOQQ4V6G5hJTM2ZfZihWGccOydEB3pxF3+sTOmWWjcITzSF1jsw8C1oxMx0O5JA1sRmhyJhgUJRY\\n\",\n       \"ZA7NITYZDRI2GFDkTJTZYkadmE0MMyCiT48+EXN4dFklNxlKUmasIcaiIdcocY2cnKltyKni5AuY\\n\",\n       \"cUTidRj5Q+7tQb8BByXoXof2DFhXzKwSzz5Zw74pZLlLTsqZ9oh6Ct/7BPzZDsS/C+O/MMZM/pJH\\n\",\n       \"jYi4FN3lHyr/9eChE3BmAbwPFw7YLMHEgZaBR9/PxYgIy8BZ3veeSj8yvgj8OvBP7/RCflQYY/qF\\n\",\n       \"9fvwo1A7D/mzcPD70FqAm58pCKof/hMYVeDKo4Uqsj+E8CpE34D2P7JorlSpjxxmyRhzMuGMGEQs\\n\",\n       \"hlEMeDSUg2UyLpOigSqKRYYMJSPUkKdwZgR2AHvtnHS6Te577FqCR4hDSkqZm1SwjcNauoNYEffm\\n\",\n       \"ULX7RFzkJZZIWcVQoZAk3Dh+FR1zaGKIgISYGBjgsk9kfGbqdOHcrRMyq0uuqli0cYDq6rPEiwlW\\n\",\n       
\"VuLApAQlB8IS8ZFLtdVl6G/hLB0y/4kS1alhasYcDSGVHrywD5tPkb/6hovCt8B8Ex49Bf4ObFwB\\n\",\n       \"swW//8OyxwTcMuDeIhsGKL9NJ+VW3LFi5A2HTpG/qiWFZeBUDq9u8i6KERFpn4fP/W3Ye0NjfT/I\\n\",\n       \"F+GTz8HrxqRfK/wV9ueW4B8+RWf2OgenhpiTKwQqwRrvsrEUsbIX8cDhiNSBwYLPYLmGEgtL1VCm\\n\",\n       \"ykxKFJrxfXwTUaPLmmScRhFKQoowwmORAS1CLAzXj+nGEXUW2MRgA+Y4h0RTIaUrXTDreKYGUsGI\\n\",\n       \"TVt3ScxlQqlimQoTa0Ike5iqQwmNYwQVdTGS4RvhsGyx4FdoTUKM3+HwhDCdbODriKyRshBNGVQH\\n\",\n       \"TMMb2I2QkxKw4BmaCkoCVx6E8RLsVWH4P/GDVrZ3HMcF0p+IOCdg/dfgyZehOQNlFKO6oetvwuWU\\n\",\n       \"zJ8w+aV5/NhBLAHlYZwGkSwRJW1QVYhH0OkT6wlqYZ2ZypiqlLrJGHEWO5syloyQBTKzi5IYnxCX\\n\",\n       \"Cgp9zPbxsFnBZQ+FIgMUE+YQUmwUE1q4rOCwj2KHAwJzilQ2MJKh6QF9FHVSU2KEi5OPybOEk9LB\\n\",\n       \"TPdwtqA1bxGWfIYriptLHnFvRKOb8eQhrIeNQor8/ISL84pEL/GRqxrHSlBpyOlnS3z1wz7d+yfQ\\n\",\n       \"E5GXjDFvOw4TEasCn9yEpypgTSD2Rb4SwjNvLU6bcNYDZ/kWSbYAdTALRV/7/YzPAl825t91of4A\\n\",\n       \"4KvA50WoG/POxML3I4wxA+CPj19vQkS+ANMn4NqHwIyh/08hep5iLFWrYv3jnPJ6GTexmMqUtAab\\n\",\n       \"mc2yp7HDhFkFIjemrMEoYQ04DRwYzbbEzASsPiwkMJqBnbns9O5C6whZzEnJmeQlmpbHfLJNVt/m\\n\",\n       \"tIKGEkraYk5BRQtzlo0j62hKOChSZhQk1msUY5o+sAzEGI4o2J9blLGYk5AxQ45YI6WEEb9Q8pgy\\n\",\n       \"trg0jMXEfpWWU6aapBi9SHXXoikhVvU79M71OWW1OfO6g5WNCRtj9mtw9cMpwXMvwoMZnN6ESQTO\\n\",\n       \"AcyuwT+5Aj0N2sC14+f/9nsDezdBHi5aTG+eBTeh+U57+r7mjPwgYil8VH58eHD2fOG1/2bVZoM5\\n\",\n       \"B7PrRavqGsV3WXdB5EXgdM5svoVVU9Qpk8uQfRUhKxn3H0JUcjh40KJkQ24cMDGOtsmki1E2xmjy\\n\",\n       \"fJ+yFdHWhlAZcmCFMTEKB2EFYUbxh+/jskWMzZQUD5hg6GIRYJEwT8wR86TiYEkNi5hYGYw5jzF1\\n\",\n       \"HDMikBw/PSK2poyUoak19qFhIXborlk0yiWaqokRQWmhlhyy1RgwlRY4HonEuKUd8oUhbSujNtK0\\n\",\n       \"K5rFBGq9IvFyB8g+BcPfAV59N3vy3qKVw8LlgtyWO5BKhDuZ4ske040UljOybMjY3QcrwUouYiyL\\n\",\n       \"eR1SZ59MbCZ+TnzXEU5/jnG6Ab0Juh5TLgfMZftE2mFmpyRqh6pS2Hgs4iJSRkioMOUmJWJsDGC4\\n\",\n       \"SRmLEQl9loEJTWxaGDQag0uFBqGsA/NogkJDJdtAD0sbfBOgJERbNU7HE7ZrhntWK8TSoJQ71GxN\\n\",\n       
\"WwuX5sFpDolUjnWphDRK+NGUDQK6zXlK+wZSYefkaV5qrmBZPs7DkNYvwOMi8hvGmNlbn2oVfupD\\n\",\n       \"8NNPwI4P2QTcb8AvvViw5p+99b0JjDKQ+PvOg2/8XLJbjQ7en/hl4F/c6UXcbhjDTIRvU0h8v3Cn\\n\",\n       \"1iEy/8tQ3oD4AHrfMsa8K4+iY+vxPzt+3fI54rTh19axPtyjNCrjVSJ69YA128Kf5gzKhgWnGGum\\n\",\n       \"ApMyKGM4r2FeoG2Ki9hrU+AIsilUZ3BzLWexusu0GoNbpZm7xEbTT4ZkZYt5nWM5UJPifzvVEGqf\\n\",\n       \"PbVAVc+TaU2ubFIro3Bf1RSs0EXgWxTf5kNKjFlA0WIZmxk++zgscFMaIBE2A1IJsJjDWC52WiG1\\n\",\n       \"AkpAvNoDr4odzqDaoVqvsfRSii5NyJwebpjxsWdh8FMQfKcHv/kNuPsi3JPCbAgXfhRxgjFmryHy\\n\",\n       \"7J/C4/dA34PsGrQvFPYpPxTveTEiIkvA59/y4wNjzN/5y3/7X/1C4egJsHoNdkeQvytGuxRWtj8w\\n\",\n       \"VrBA53DPWhGwUlsR2e3B63+E/1/aNB72OJVZ2NojiGwGnRo3rAF/dB5WV23aohkaRZcVlogYiE+a\\n\",\n       \"pxjTJ2cPbWAjh8UMUkczkYBDJbSMoSMFhbWPZpmAiBo2GQkDoIowooKQYmGTM8+ECjcZEBGaFiXj\\n\",\n       \"kJASyyIoYawrZFmJeFjCyPOUqjN88ah4p/GCDnFaZjlJ0aWMzFbYYwsz79O2hozjkNBrEiiFE++Q\\n\",\n       \"Nqa4owTXzaibHBPAQQPGZdifg9oAuIf3dTFiObDxDNguBE2wYnikt8WVz/gc/Y3zELWgv1+kJJpd\\n\",\n       \"8kVYNcusYmFph0o0IaPPTYm56xWbF57oE+Q5ef+A8TLUcMmzIxKZ0cwtLNWgwYQtSWlg46GZIOzT\\n\",\n       \"wANiUmKWaJsDhD65eEQ4JEAXhXUs3y7C9yI0Y8DFxiFDgBuUJOKkzqkazb7S9MSiql1MxSOwSng6\\n\",\n       \"R4lD1UypKIeW63G4HnDfH4RMHy+iy700x/ZStB0RVhocNu6huQWp52IlD5Fu2bDxtSIc7mu3PtFj\\n\",\n       \"a/6nnoTtN8yNapB8FPb24KdF5LlbieYDeL4Pn7RhbQ1sH7IAvCPIt74fTPy+gwgt4KMUBckHEW/w\\n\",\n       \"Ru5YMQKfvrvgcxxtwvP3iTi/ZUz6yu38BBFZh/Z/bSj9VA9zImFQA1+n6JKHa2LKFuzrjIlrONcB\\n\",\n       \"bOiUYB6oipAgJFrjZVAZgHwDXrwMi78MD9o5VjhlvwVnVYRC0Ew5cGx2jItrQlIDMwWrRjMV4Xlr\\n\",\n       \"npk2OJKglM1UK9Ae2AmFpfo6YMB0ClK6pCxj0QIEgzAH9KhzlZI8QMQhmksYMhxcEkpgxYQmBzVH\\n\",\n       \"JehQnYyo9TO2Q1jrhdT3B5QOUyoDw/IeeCl4JwDXGBNRBAu/9OM+8zH84Xfg6jV4XIHXhz+OigiC\\n\",\n       \"//2H/c57XowYYzoU7oU/BuwXihZZBlzNYPtfHxvf/NiI4NpVsO4vYnkNFJXoy3D/Gkx/Bl5pwWAb\\n\",\n       \"Gl+g9usDNqXKcmSxkhhMPmKSt2nvK7LKlK0rKf6ihY5KZJ5g2WBjqElCzwqxdQ8JhabJsVyHTBzQ\\n\",\n       
\"OVPtsu2AkYiJ8fGkStVkTGRGGU0Jlx6GMSPqJickZ4bNCRQbJIxQDE2PPdllnyVcDKrgbiNmQJke\\n\",\n       \"qt5lwbi0JGDOitHz+3ScDKwSSht0GCOpS9rKkUqKOApP9mklDiIw0zHszMhfBvcEUIOpD+28iHU5\\n\",\n       \"FcPhPJTufjf78d5j8DLc/Hl4YguOrV9IVULUt2B/D9wc/FNwUMVd6OGvjziVX8e1hdhy8GZCbWTR\\n\",\n       \"aTW48ijUpjdISoaTQ0MjtsnLU3J7xJ5rWNE2A1OigSJnQkegUFa5CDkxIaAYmh1EDGcpY9NnjwYO\\n\",\n       \"NjOKdJuMPTLmMCbDNUdoSmSS45kenkpwTJWZ8gj0kJQApQo/G3IwZcFJbOLYIgltUs8wQbEaQOZM\\n\",\n       \"adzoMzydMGto+nNjBg+OmFZOQN9G6YiZv0h8CeAM9F4seF1fe8tDrdVBld4ynpuDqFzw7jxuUTsZ\\n\",\n       \"Y7ZLIr81hr/fh/sWCvJbegAvHcDvAP/te7P37xq/CHzFmGPH7Q8evgj8gzsr8T1/HMPVjKEZwOhz\\n\",\n       \"IvLa7QpkFJEGnPzP4YFmjVGtRZ6FpOkRu57CTWNGXogpZ7gTiC7BdhO2y4rANmwIDCxQSogToTYx\\n\",\n       \"OFFhBi2vwNmzsHYGLpzKOZtazDs5iaQEBjZVxNg4NLXFiTRnz4YdKb5totwmkRjDFUI2QLdwGJPm\\n\",\n       \"N0FlFMlzI8qSsmrmsdhinpAKVYQuIfOk1EkxKHMZZAsPm/sp02CfKSE3VY7Ja1xTipanmMwnTLZh\\n\",\n       \"8XvQecjQ2UxYugqbx4XIYQXCPuCJyD2qsKC9+eMaWB5fRn6goHknWsadVNM8CvzPwP0i8mXgc8aY\\n\",\n       \"t4xgLv2vcOkExaF33Rjzrg8FY8xeXeQbX4RPnoOZDfllmO+A96vwzcrx+CYD5dKoCvFSQpp4ZH2b\\n\",\n       \"emLjuBP2Fy3aScaBC5tTg28VnZB69RKHUkVbmlz2qeopmevSCHw6sY1nOWxZG+jEULEzOsaQqxkh\\n\",\n       \"y8xTxtUdjmSf3CjGss6IHloSGjqiIi5tERIyHPo4tGnKTY7QjKWPlfsoASUptrVLzSrR1gors8jS\\n\",\n       \"nHoeYMWaI5XTqbdZTntYnoPGkMmAsSg2Q4cHAtBxyk5e5pm9GZ3nCge//U/CXXnBQdoB1lJIIlht\\n\",\n       \"ikjdGDMWkYUqPFiG1giuJfDKcZV9BxF9D174EJgNODmAyIGLjSpHuw9Dbx1GN2D/EPtnc1pVhbZy\\n\",\n       \"WtpiZRYzdmNMeZFXrA2ycpXKEZTDA7rLI4zfAkej3THiZqxhmCQxdj5lZJdZReEzI0KTodmjQ45P\\n\",\n       \"xCPkconqsQerRcB9XKNDE0wRjicyJs3m0FJBpRY1UhI1QtsjtD5FybhF0JZeYMZ1et4YiRyWKoUH\\n\",\n       \"9lSVUaOMUb4G3Q7DckQ9h9jJOPfyHt9cneP6ykkGh2N2SlUae4psZUz3bJ3+6yuYXYC0IGq93f5N\\n\",\n       \"RpCHYJdvGXf2oRzClB8IpoTImO+KyCsHBRl0GTig+J+e/NV5Y//e8R8Bv3mnF/Ee4hIFj+d+3sUt\\n\",\n       \"+PZhPoTWPOwUF//bgtJ9cJ8LtdTQ9ys4PTCqQsOqcz3eZeZMODWAh/8F5AI3fwbq3zbEH1XYJchc\\n\",\n       
\"Q0U0talhN4RwCJ0LUHehFhYeS8MW+FmOYyKMqzHkKDIakpOLwddwV1h0lKcsAMuIXqRqpoRqn9Tu\\n\",\n       \"o/QQVAPSJ0Dv4ZQaNNllKkNaJGRYLJPQx8elwwybAWcI2McjZRmwicnI8YlZMktcM3eRxlMG/QTt\\n\",\n       \"HjFvGcwTFuV1Ta1mmMxn/GnPsPb/wZayOHj+NPzDuwqjFHMNqIp8eWrMv5c8pjtJYH2OIgXznd5z\\n\",\n       \"xJu32duHCXzpOdh+FX7OgdoIXngUogpkBrgJm1/H/twMfwNWTEI7HdOpOvRSh3YeE1kZwZ4h/hOY\\n\",\n       \"PDTj1Ew4kCqu0qx4W2yriLkkZ+kCRCZjbdPhQBJestcxfomSl9CTMkMWqZibxGTsm4QSdQJCAl1H\\n\",\n       \"5wZxq6TmgETZQJkZmjIuFhGIoEyOK4e4eozLK0i2TubYlEzCkoypWBlGa7LQYhI6xK4mskAmYzzH\\n\",\n       \"pWoPMYwYiFAK6qyNNUNbEZTncF/qMVeG3f8HLnwI6h+Bo2phmTwfwFEMc1+H1RiutS2RlXPwq/dB\\n\",\n       \"XoN4Dx56BZ4SkX/+V1Fl/KiwRO5ehE/Y0Arh+hF83RjzA+mdxphARP45DB+GC/dDHsDhv8lh4wg+\\n\",\n       \"tg6jFbi5g9+PsUo5zizBWC6S2IzLLvulJYKySxzbNGJBaOOnKWNvUtjAi1CnSosJUSlG6T774mLE\\n\",\n       \"sAyUmbKDYYMaCU0umgqYGmUVAGN8ypR1yAm1T9kY9sk5oaEXb3NYssntNpECMTeI8PFVCS+BsZUR\\n\",\n       \"OTZ5tkpshFByXibEZ0hZZcR+g8kIkBDresbeGLYqUD5RpiOfYvIbp+FoixvN16mcGhGXbbL8bnjG\\n\",\n       \"hUwDF6HdhT9/41mKSBvK90Dd32d87Ztw75OwU4F0BN63C2v+3/1hXkDHZNgL/ATISY9HNE9SFCQf\\n\",\n       \"SNwi8f2bvC+KkVwgEt6mmP3xUVmCuVDQtTHGeo3kpKGiwdERHcvjaM8lej6hW4F4Hqb7MB8YWl/K\\n\",\n       \"ufBzMG8VZ16QwUECo9+Cye+B9Rhcvwc228V7ghIYZbBMjDnmbY4xLGhwdGE/38XnQC0Ta0hNQIcq\\n\",\n       \"duTgWK+QezUU96HskEzKOJJimyaZ7B+7e2dsUXBDXIqc3ql5FUtsFvS9LCQdEuc6uZRQ0iZjhdA0\\n\",\n       \"SfUQ1b6bUuc1ug8pNt0Sa7OYfD8EP6R8V8x3lqH7z+6Ch3/RfcHcAAAgAElEQVQB9t7I/noQrD+G\\n\",\n       \"T4vI9ffa0BJ+4gistw2rG/CL56FUB70FjxzA+QnsX4d7X6byc0M253IWlEUiGUPLcEYbXvcU49Al\\n\",\n       \"mmZ00gwuZFzfhtbqlLXRlGAGr7chyWDjS/Chb8Jey3D4dxWbUZW+ahCXhYnMMdYWRhrEapVUDkmp\\n\",\n       \"MdU+DjYwxuQDVKaImDFyDCtkWOR0KaGok6OxpEKu57DJKXOdchhQNi6JHaLEYj4FL8zp5TFWWGPk\\n\",\n       \"G5x4ndOdLfqnD+lKRinVlDUsaxsd+ThhjilPMSqgehkYGzP6HZGlDZg/CX4TVARz12GxBy9sAPEa\\n\",\n       \"/J3PQK913J4/DUcVWPtzeIq3MN7fLXyRxx+BX3oYjlow2YGzz8K9IvJ/G2PeTmk1B14D8hH0XgOu\\n\",\n       
\"T6F3gf+fvfcOluw8z/x+70md081h5t4JmAyAgwEIIjEApEglipRWkZRFaVfWrlRey2W7ymuXXSUH\\n\",\n       \"1daW7d2tpezdUqlIUV4xiJQYxASCJAgQEOJggMnx5tDhdu4+ffLnP04PMBgMSFAEMCCBp6qnbvd0\\n\",\n       \"n/76O93feb/3fd7n4e5RKIzBQAOnR6D12LasaCWF/vQIVii0zSJ+YNH1HbTREKNn4MsonuUQUCJP\\n\",\n       \"SDMq09ZcQqWYFY++Cmgp0MUkUAkyIgSiSFJnNydZFYe+srDEIWDAVqQRSohFrBIpIaR0l5lBlUWz\\n\",\n       \"T1ETUtGAuiqgazqEFh2VIwybaGYOXyVIRkk0v0f/0hpbsw5Jepgdj9FjA1ItjZNeifazPWhMg3nj\\n\",\n       \"ULxoP1T3069ehEuPw/t92FEEexlkCZ4J4DkAkcTNsPfXYL+CZOCynHqMU9EyvbEc6D0YbMEX3Z8M\\n\",\n       \"EbNXgg8D3/kpLtFcxt8D/zNviBbfk9OwdebV3bx01mDlvgzWoSwqo5NKOCitS08J21Zg3FZcOAt5\\n\",\n       \"gYMVGFmGlUm4qEP7X8V/J0vgboL7daXUEoBIfhzGdNi1DEYJyhmYU9DUIBdBXYPukCviJ2MFkb6k\\n\",\n       \"WI8mGagkaRWR8R18rUvCAN9PEco6CQMs+gjgyiiayuDjkFY+vhayRkBeaaQimKWHRAnyeoVIt9H8\\n\",\n       \"SaTrEBQC/LBP4G5AMI3yXPx0SDIzyVg3xEmkkHAOqYJZvkChmKb23ikYZGJ7HCD2lNkP7nKcOXsr\\n\",\n       \"GHm1ISLaNPzm+8HbFpfmuAl4AHZ8C97pk73J5YCeYucgRCcgGeksZHSWVEDSdTjtTtE+czuNxx+D\\n\",\n       \"W1ao/Sk89wewtBM0DRrrsGcVbv8etJKweruiVrKpJxVB4NLWxxkYoxDWUZYLYQAUCHSdSAdRfQK9\\n\",\n       \"REJaBL0mRipFWo8wtFg6bRrFEllCNmgqCwefKEzSUSmClMWN7YhMwudiSjC6Frm+Rz8X0gv7tDtC\\n\",\n       \"mLxEsMNknFmSfcGQdWqmg+7XSAYGSTvC6UeMPwJnHOIOM6DxAFR+C+49A6lhE8RzM1BeAIwZSIwM\\n\",\n       \"5/My9kHlKBzhVQxGRCS5A372Pli9HMHvg6oJo+24M+AvX/z85Nvh0IfhoAtpD5ZvhlOrUPvUJfhk\\n\",\n       \"Hz40ApNtBqpB1h1wx4LBhYlJFlIRZiKioZLYwQRSr1IZcShshNizJppkKVEgFwqiEjTFoa977PIi\\n\",\n       \"0oQ0jRI9xgkloEKdIop5TNIiaKrPikCXUVajEE2z2RmFFBUciqCiLNo9RT5qkQhsEpqLmw/JSYFu\\n\",\n       \"lKaug4QBoRvhJ0x8sUhFOfTIwkyl0FfWaE2XSRcUazdnaW29A++peZiswO7jcMsatLddcb5moeXC\\n\",\n       \"Iw/BlwwwfFgH1mKJfcnAzl+BD1YgN9QKOEhEesca378fgtNA/9Wq879B8DvAx6/3IF4HfBf4jAiT\\n\",\n       \"SsUaMK8v/m4OJhXUBVZXofGVV/f4/tkUJ3aPsk/pJKIUqbZJVzPZNBRT1RZTvs/WvXDoCdizEZuA\\n\",\n       \"HqjA5Cw81Feq8W+vPmLsgbXtENx8P6zeA3YaogAu6LG7r6bAV2AJbI8gO6xD5gCTEFtSeLpCNwdE\\n\",\n       
\"5NDVgEhbwZCAZKRQojBUjjxrVDBxoxyXwg6h2UOLYEKEBDqLXpaZQYXN3AJJbxRroDAaOr6Zoxbs\\n\",\n       \"RV0ag0we5T2DPy0kQpOoLUjaROlGTE9Bw2AEOkZsQJZMX1GaNSE04iaf1xxvumAEmJqG4rarIr3b\\n\",\n       \"4Pufhl8MGcuaJJVNZOhkKmkEh1JCZ0ESGNUJls/uJTonQA6S4D0Nyxdg9BaQcehsgx3T0EkJT/xK\\n\",\n       \"mm1jCW5ctDk7r7GUcrATE+Bn0QIdI6rgm5vo2k6sqIdQQ5cWPjqetQPdmyHv+wyMLmV8oENGAnRp\\n\",\n       \"clHl8GUfaTVCFNToYDCQDU5kArb7I6SaHS6YPUbGfFwDNtsR5a0CUoHo0BiltkLpPmGxSNqtcU4P\\n\",\n       \"2ZUJCGqgnYAnTdj47AsXmOAEnCpA7b0woUFPg8pF2PoCUIpi6YgXIYpbOV/t1s2xCdAzvFj3YTy2\\n\",\n       \"mr0j1ohhSSnli0gWdv8i/PwGZIbP39mAxBw8dESpwaMi8vEyFMEfheBfwdl9KXqZaXK2hd8ZkFwe\\n\",\n       \"0M5Oo05bML5Bq7OFfyBCIp2u5mNqgqtyuFEJkTJbSqcbbWPLm8TSBV0PSZlFNLXKgvQoAS2ZJ4FH\\n\",\n       \"kwk0UuRosqStYQeb+EqRDzSmA8VKJGiBh2FF7PfBSlY572foYlAVhYsiH1TRZD8JxwBlQ0mRGc3R\\n\",\n       \"9xXbzugE0xbObA+vNgrLc9DqwIWn4O4ifD0Lng3GY7Gq4AORUk9fY87nYafxQiByGXvrcPawUhuP\\n\",\n       \"v8rn+LpChJ3Eu8GfJgn4a0IpXBHuBz4I/MXrP4Lj/w+xvEaHYfD7Kr/B6D6ap4XjMzVGdvhYiTx+\\n\",\n       \"OIXZPEVtqke0BjM7Id2FE4fAugS7l2CqDblbROQrL29B0pqEfMoiZafJdmxaWY9GLl72bg+gYcIe\\n\",\n       \"id2CB4HGwFaMp7sM2E0oGhE+PhdQ+OhMkmSaSDKYUQNHVukTEGi7MbWQhiogKiQlq5zCIxOOYPTH\\n\",\n       \"KfYDVswKS+0NCksW4ViJVi1Np5mArge58tAN0MRuOdhKJ20aaB5AnyARUWcSWjXY3Yf0lcHIImTr\\n\",\n       \"r1O35JsxGNH1a7T2psHVYMkmn86R6rqE2xJYXQVEGD2TUBlsBduIciNwcxukHQs2JZVSiyLpCdh+\\n\",\n       \"C9yUgNoe+Id3GuTH0pSqHo10kRVvCs0oYZnn8PQRUA5Ka2DRQBOXVNRCAodUkMNPTaAFCrFA97IY\\n\",\n       \"joaTKdNXWVRUoaP3yUgJWMXRfDzLQaIsbn0cO1WjV9YYa2uMbtNxXY9KC+751IAzH9B4dm+BFdMm\\n\",\n       \"ZWkk1QDH0Sm7swy8FY53IPo+RMeg8aRSMZERnhcR+76IPA2LY4B9mWktIoONuNiam76iVfMUTDZf\\n\",\n       \"2onx48LpX6FXoYAl2NeBg0WwdsHvLENfRP4ayMK8/kIgchk3bMGJI8Cjw8/VBJoi8j/A4//SRf+1\\n\",\n       \"JmE2jdRLOOsGauPR2P7TbOJvbMGlW1C7kjRTIUislZvQDDyVxXYsmv5NJOoDmiWXVCIkF+poRh7b\\n\",\n       \"CFDRFEosdKbRySJEaIEBWsggGrCsmhx2A6oFRZeIMTTyvk7RiShGDoXBeZ4pFOhaBVyjT2h4lGQD\\n\",\n       
\"CllEBhiqS5s2qYslJuoa3QmN+cDg1OxZom23wuK74dlPQOqzMF4A6cWmVt/qwcsR1eQasSYvtN3/\\n\",\n       \"1OG/AD6n1KvJXXhD48vAb3EdgpEhz+slXK8fFSKiExOjFbF0xOUAQs/A4F3Yjx5Fy5nMd4WM6REY\\n\",\n       \"Ic0dsNsCmnBTMxZxfOp90KlDsgt+AIwB1avGrEQKJ2D1j+GucsCqNiAsKHxLRxeDCTS6moNSiuwA\\n\",\n       \"XEuoRWnCdp5cZguTM3hakUgNUFEbXY0g2jhjYYeB7jPQNCKSOBKiK+goE8gjfkjO1AhDodftkjRq\\n\",\n       \"ULQYSA59q8PcMy5Lk5MMRubhiQXY7sD0FLQy4M4SLVVYuEexzVLkej7uWINK6QbqS9Ow6yQUH4X9\\n\",\n       \"h+GsQHQJimdjZfpryr6/2ngzBiPlMgzqkBq9ov3wHEza8JBLexSmjQx21aYyBrnIoakH1NQtbPXf\\n\",\n       \"A+UIZANGDsGmB78gIl+BAx+E99Y0zs9nGWRdtkpgpYWV0SSbajdRO0Qzx7CCDp5po8QmkESsIKE8\\n\",\n       \"RoKQPQMFXp7nlEZbE7zIoO945FOCjk1b6yJozJLBpohg43CJVbmZvutBz6dz/gIp0yM7Dql2SLcB\\n\",\n       \"Nz8Ig7eFHHQd6myn5vpczHmYgxxsTpJaKqPnYPCwUs3/FUBEsiLpd0HpUCzwU30SOD8kIb4oq6SU\\n\",\n       \"CkXks/fDx/ZDMQvBBlgXYLkf6/a8alBKbY2LXDoJ22+EcgUmQjgYguyGp2+D1Q3IfQN+exX+HsJr\\n\",\n       \"XDBDDa7Z5RPM0O/eAA+n4W1FcMqQzkIrBccegqgFfzEKfxpAxiLKteiOuIgWYEgFP/LYlCI9Q6FN\\n\",\n       \"TiLSoGd5OAKlTpGg4GCqNIbWp6mV8CW21fJMHTMycaWIHtZ5LqeRQ2dnpJGTgNFkRDWhkEZEP53E\\n\",\n       \"MNLM9LL0cnME0TK6WsASHVd5uJpDGKUZd6uUd2YJUyWSPSFp9LCngUUBScLaAvyfa7Hlaf+HeAst\\n\",\n       \"w1IIt1iQveJ5F8Zg60s/1gl9g0EEIS7RvAIdpJ8afB34TyJkfxI5MiKyC7b9GkxmQAmUWyLyeaXU\\n\",\n       \"CrC+Br4D2hT26QpbN5uM9Jr0sn0KxM69bh/KeVjeAXe50E3GGkrJLsjviMjHX9rp2TkOqwN4dldE\\n\",\n       \"yQ2p5lOsiDCqICUJfAJqkc+KDIMaCZHREC+aIoq2g6FhRBNgN/GzJcCgqZuEmkZAiMMYES6RmgTX\\n\",\n       \"BiOBcpLYtmDpa4QpoW32aUZ58k1FPg3zwO2PLPH5X8iwduhugu9WYPUivKcNzj7YaNB7dJPze3uk\\n\",\n       \"tuUJ9btwnjkUp7iLc/DUAnhfhIyg0SZ6WMG3X6/y65suGFFK+brI330LPnoIwjw4m5A9Aa0ufAWa\\n\",\n       \"3jqbH80w4ul4tR4bIz7Lz+aoOjNgLEDJBSy49HY4vQpzVRJvh3mtyIN3zdHdNYkkG0SdBbqZInrf\\n\",\n       \"wBFFVGriZmYJZAAqg9JngAGe6iIsY6LIayEVfwDaBglLBzHpJwJ0HQxCFBYFlcUVE0WIMEKSGgU0\\n\",\n       \"2k6eIF3GHzhUnlT4d8Khc3D4FPTnIJOHyS2PC0sOW6UxojFIGCZWqkf7xjL1LRh8CuJABKb/Szgy\\n\",\n       
\"AnNNGBTg5O/CmQe4StnwinldFpF/tw77k5DrxpyDi6/FF3kL/vYh+K1LsN2EGzVIZODMnbGSMjPQ\\n\",\n       \"3QXbV0GDJQ8aqbjpBuKq0Zlx2Prbq49bgLtvh/xhOFaGjQYcHof0UXh3Gz7dijUx0rNQ7sMpA2ZN\\n\",\n       \"PLtFfWwTK+iQqChGFkfov8snNMBwR9DbJv2whlZqYZEiM6izlhnBIYMii9CkLx0SWoQQIj4YKiIT\\n\",\n       \"QWRY6AK+FpLzUzw+NgNhHk816adbaGo7GfZiqyU6qka67WJmQ7YZHuOjAUrr0MzXWR07iHMxMexS\\n\",\n       \"OB2rTJ16pa3ySqmeSOJL8LV/AgeCuLtgJQ3nLoD/7Kt1Xt8guJu4f/1a5aqfSihFS4THgfdzXQXQ\\n\",\n       \"fnSISAl2/w68vwOTQ/L6Rg6+9TER+fdKqW5C5EvfhF8/RGSnqJxZpnngIpoRsGcVxh8BpcFzH4Tt\\n\",\n       \"SUjbsFkAZwl2nYb6neDNi8xehK1HwDs2zLpUQZ4F5w7oLxo0jDzpTIcwo+FLipSCpLfFc5Zi30Ax\\n\",\n       \"pruEeodlfQo/7EFYQFFHUn1CxhCVp6mnSeMSUkCoohgQkQKjCEED/CTBVp1gIocrc+iDFmJ28fUy\\n\",\n       \"osDZD85Gm1/65tN8+o4ltsYFenX4NwG0ViDRhRWIFiL6/9V2+HAf3IegaMDaQVh3Sd/3DLsMxZ41\\n\",\n       \"WH0nLIyKyN8opV5zO4Q3XTACECp1TkT+bAMOp2Li46ILJ5RSfRH5QsC5pTalnwEpxs6l7e/l4Y92\\n\",\n       \"QcWFVAoGyaHpjwkKjJRBeXYfg0yOhJUD10IP6jS9Jnopi2rrdLO70aM1XGMaor1o3iaR2UML0xTs\\n\",\n       \"IiQqbKoIM12laGYgSOLo0FN5umGZyBiQYAJbRjGVgaJKSAQqTdofYERdwsIlknOKO87AmgvRjdAM\\n\",\n       \"wJ+FgoJGBnr+GhNBhLmVp5O1aTsVpN8gfxwGQ7+B9Nvh1hLcdkUGZLoNnXtF5JnLvkJXY8iCf+pa\\n\",\n       \"//ejQETGgTzQVEo1rvE+HRH58xrMjMDvfwCCfVeRZ9PDQ8Hq5+BrH4W9GqQjWDbh0gnwnxWRWWLV\\n\",\n       \"5Q2lVCUHt+wZpmSnoDIely6yyzC1AA8opZoikjbBeSd85zzMbGC8C8bXdpLf6hKaDqlcid0bq6wX\\n\",\n       \"TGiMkehl8JOL1EcbJKM0UUJho6NkCyFHRIoo6BBF4OtNIi3BRD3FWDOkPKehSUQ3kcQzSpjBBCZJ\\n\",\n       \"Qm2EwGuQUqukmESLsgyMZbykTcKEkAi7ZLKnpbOrM0AfOU17bj+dZ78D8xegWo8Fr14xlHKPicgG\\n\",\n       \"rN4IVgYaF4ALL2eO9xOMfwp86vqJgF03fJm4g+gnKhiB9M1wUIPJKwLrmS4cKML6AeBJV6njIlJb\\n\",\n       \"h8NJKDbxPuPDAvT/GDJdSIagX4KEB800qEWYOAq1O2FHDkYXYbcGz/0qnJgAvhGvQaVTMHMXHFi3\\n\",\n       \"CMcFO6FYCzQiGcRdkIYiFDhngKmZWI2QnNXGzupoA5sg2SMydLSgQ2TsJog0uuKDpFGXzfKUHXdG\\n\",\n       \"6Gas/eG3wLgVFUUoCdCUTqE3jxWcws4qOpOw85zH+KUNtv7dy4mEjous74MHCzH71stC8AzZ9ycY\\n\",\n       
\"cwwmWj5vW4tbO75/I/zDJeA154W9KYMRAKVUFfjWNR4PgSeGt+cxIXJhHbbvueKiV4P0FvSgfyzL\\n\",\n       \"6j8vMtHqEYy6mAlIkqCpbVExmiSLBwisHKE7iHu/Mmso3UXzNcxuElUtUtu2QS8ZcsjukHYW2ExN\\n\",\n       \"IWIiKiTUCuTUNEEUd9wkCXElQxh10KjTM2zcnItoNlkf+rfBzhQ0J+HZHSmyTpq247Kp+9T7Lu9b\\n\",\n       \"XiQd6pybijCPKg5eiFPu9+8CjkHxJthxVRCQCGEOOD8LXDMYeTUwJvLRQ3BwFMIaaCMizzbhy1dH\\n\",\n       \"5kOux3pG5HuduIvmeYQgKzHJYVOpsCoi/xaW94M+Bu4isAGF/wlG3xWLF3URKT44BZ5/BR9FB1WI\\n\",\n       \"+/MKvKA2Wt6Ebg+sI7Dkk7zRZKy7QW+mTTpM0DqooYcZsv4GXn+VyOygsi2KoUXkZLHTKRANjSVC\\n\",\n       \"tQQyikgScVsEloYZJqEzoD2ZpCAehhES6BYNNFqqja31sbU8qbBIRJl0VCE7OEuz0OMGgakeqFSA\\n\",\n       \"Y0ScL+pst2Gi0mF742mWFiL6XyLOWP3IO52hmvJ16Lh4fSCxyvavAHuv91iuA74C/G8iGErxExRg\\n\",\n       \"pkagcI2Sa96HVOnyvWtxU0Sy34bv/By8rRFzPBvbIVmG+X+A2m6YBJYGUGrDRB/uXYLaXSLyWLwh\\n\",\n       \"a30ezt4Hg109alMBWs4i6TlsGBFFPWQmgqKAo2uUnRx7NzTMaINnx+ucSvlEKYeilEiYGmXO4jNL\\n\",\n       \"nK60QSYgGgO1Hjv+SQ/yLdDmSZg1RsQmJMAfWGQGOVJemq1sn6wOp6egWYOXN0BswPky3Hpg+Huu\\n\",\n       \"wnhEzmwgymdqqO2lEbc7n7udt4KRNw5qcP8j8Ac9mJ2ETgvSz4G1CX8FLGi0V5ro74bJrEZSt2nQ\\n\",\n       \"ZUpPMhIoejWbPsfpJkIoJSHQUSqLYfcQlaGw3GZThVguVHNwIGOzT6pEpk9daRy3ZsmGBgNVpawN\\n\",\n       \"SEYahmaDeLiRR11GEBN0b5zEZJnWbJm5OmwPDC6NCMetAonOLNlFG9l2hvOzNvdeDJnchH0nQY/i\\n\",\n       \"r4IMvw/RAJxrOCw68ENsoH9cvAP23wbLl9twHoVbnoQ28ADErdlXstttOHoMbtNg206oD8A8BaOr\\n\",\n       \"MTm1CqDBxBTB3SWCkgN3rGPtdti7He7ZgIIHngZPvrfC42dP0qvcc0Wv/Trk12NS7jo8z4/5/P3w\\n\",\n       \"sYNQbOOoZTbuLpNIhaSigPlEl75WwnMStLNVXNNhXyZA1yIip0nT7rIyPo4nM5jKIRm4GMploGfx\\n\",\n       \"7Cq6FtBJRIzpJmOBhRZ1CEyfpqTYivaDE6IS4CXXCcRmPdhizOhz0IeMxGaSYw3YUY04XohQy7HM\\n\",\n       \"9lg5oh0p1XsD+whdd3wM+LpSLyYsvhmgFKsiXCK27njgeo/nlaO1GgcEjgUj3TgrArEyam8NQEQs\\n\",\n       \"SN4KI2+PjWQbR2HwFPB9OLoVt+eSBm0D7lyGfAfWJqFuQL0DNw03oIaCGQXnJ4CWUsoR0T8N5p9F\\n\",\n       \"zGkRxcjE0H1CCWkGAX2B2RBGwghNt1kfS7O9qUhKm5wVcKPA3obLaiZPQYcFQ7BJQGTFwYgqQ1QE\\n\",\n       
\"MQEbmCSb1okkQEUFxpt9PMuhPB5QaCmc5CTLt1t0PJ/+P9Rj37DT15q1CM6ehuUEzO+BWgMya3jF\\n\",\n       \"FWbOw+4r1G+NaPj2rzneCkZeIZRSFRH5szrcmoP5AZxrxXXltgGHXYJKk25hjFxT4aZ7kNSZD3v4\\n\",\n       \"CQdnooboeylpCUKtSUufIvCa+LkB0WKa1X4D9ynImTA7D6MT0C8IpAyMaMAullmWnSQiCyfscU4G\\n\",\n       \"ZPAQAvxolkIE4xp0TJOxzAzJcMBKocfNbp7dbZeqjHB+M42YCfROkfVdNqe2YPZR6GVA82FBIBxe\\n\",\n       \"hKuPw6mPwFQn/gECbOZgyeaKC/VrgSOwdjk1oQG3wfoFuFPEqMPEu2BuTGSmDJUHlArPxXwG+fMO\\n\",\n       \"3P4M3BRCpwrfiuAkgIjM7oeP3QuNSVhtQOGLZG4pM+b45IeBlRXBreuK1R2Pca66ivGzkyg/Iqyt\\n\",\n       \"xP41n7qS+6KUWhKRf78K90Rk74uYGIFpgQywri/SI0KZaSg2KWR9pnp9EhWDKB+rQZrVC1ycuZmk\\n\",\n       \"ypIKhVBqJFUPFXYIVoVSwSTs9Tk+pjCMiEhl6Kk5rChDgEPaVyT0PF3jHGbYwfMh8qDdgWQPSIPp\\n\",\n       \"wXgf+h6UdcjUIHhZ2+83O4bE1T8E/tn1Hst1xGeB3+QnJBgRkRGYfDd4+6CfgLoNZ9cgW4ZzmxCd\\n\",\n       \"j7tsRj8Ch/fA/hpoEVz4ADxzAKqfUCo8zfCCHZu6PvxzcPQQBDMw6cIN58FLgDnknHU0ho0PIpKA\\n\",\n       \"mXthRxl26Q7dGQc0Ia0pcgKrUfz8UYEx16Y84VHPxurJu7MwE0GodSg5FoN0mboqMNB3oMIl8I/F\\n\",\n       \"u0RR6DRI90ZIuwajdNjM5OkHkNVTmJFDQm2xVJiD9behnp4hXPIhehA+OhSCXLp67obSB596GG49\\n\",\n       \"DndGMKghZxX3PBe7QFzGxXFoPXj1618LvBWM/AhQSjWBb1++LyKFafijQ1DqwO4lIsthbTxBxm6Q\\n\",\n       \"DAdI4GIqSJp5iipDKjJAOqQ5RsVK43tNhC7u6gZ8AuzfBnMncUsXAxx0uppGmho51cHBJ+nbpEIv\\n\",\n       \"dgAeFOkk8xR1nYztc2ZcI6lM8kGGWrpPDyEZ6uRtG7N1hI3THYyxLKLDJQf8e8AwYMWClQeIMxBA\\n\",\n       \"dApOPQqdO+LSjA0sDGDtr35I18WPjcvmhZeRgkCQebjxI3D3OkysxAS1x39XxPzPSvmnhlyV7wxv\\n\",\n       \"L8Io3HkLOAaYKzDnQjpW9hCjQTcD+f7wnQNIz/Y5NHqObRfP0yooqgkoH1XKfomqq1KqLTI+D8Xp\\n\",\n       \"DGlHo1Hw0HDZo3wucZF+mCRheCRUxEgbpk56dKcCKntgOmzT6/foallszUDzXRKNGlk7YO2TEPyG\\n\",\n       \"z5QBKdMklTDxE2na+gA72SXyLVJtG2dUkfVHSLsRJh5bkcfNX4SkDau/Bm4RNnUop2D8aFxdrL6q\\n\",\n       \"nU0/ZbiPmOD7Zp6jvwGeE+GP3uhtzbHw2MSvw3sTsO2rsLEXBvOwsg+OnoP+Xw4vuHtg7x545xWb\\n\",\n       \"qNHl+Llb+7lCBn+44fw2pHbB5HNQ2BvflnfAzKNQScD6JrE5F8AcjOVgxIbQhbwTe1+5hjCqCwmJ\\n\",\n       
\"TUU3DJ12XmMZj1agmNqE7A1xc0A5E1Fw6yjfxKdNXZbx/ByoELwaujFGtjPGxAWhv9sg6ftMs8pq\\n\",\n       \"IknHMbH0ZWxDCJ77PXgRKf9W6FVjBeyll5tHh9JUmdECZBVsePCNI3B4EdIurOfgRAX6T7zc619N\\n\",\n       \"vBWM/BgowfvugGwHch247RYC3cKM6vRTOvpgiWoIpQSkVIZcZJMRDVt0xv2QhOrQbq4Q9fr4FWhv\\n\",\n       \"QvfLsHQ7iAX5iksr1OmmoK0rEIttgyzGmoOaElSg2H8UnrhFIxFp6KGG4Sm6BuSiEPSASCJcR6Pn\\n\",\n       \"JtCdGdgcIag+CglwzbjsYjpwQwWKOTjxYeBzwzLIV0XkSTg1TVyaWXhpe9urj2asvfx8DXgdCi1S\\n\",\n       \"k3DfsbikAnEq9p0hbH1ARE4T670UiLVPBlcez4JtA9jbgrEsRAqMNJ3cgL4D3hXpx40piDS46wLM\\n\",\n       \"duKIyNXhy/eIyFGl1IsIsiJS0Bl97wxOYYwRQ0iFPn60yapsMRZBz3Q4sAyLeRg7CYkBdGoR3ijY\\n\",\n       \"WZhdrzDVabE2qqG2QvIbLuEGrD0ZK7a398O44ZPt+GxMp/DnfBJ+AzGEVlFI93JkahpBqsToeodK\\n\",\n       \"SWf9wIA7vwHWf4YTH4CLGRh7Gi71YPO7EF0zZfsWAPgj4P99ExJXn4dSrIlwEvgAMYfkjYwJmNoG\\n\",\n       \"e1fiuzccB3UctiVgI61Ub7jJyO2A7cN1azMXS25kHNjWh9INXBGMxAHO1IfgvT3YvgZPd+HM/tg1\\n\",\n       \"5MS7ofENqHzmxcJslhu3BW8PIbNlcmEqpIPCUjoGKWxyuMqnIkW8MKITLmPqPsEAVASlLjTQ8PQE\\n\",\n       \"oV5EBgEsrIDjIisB4zv2kt9IYYUO9rYQx8qS9nsUgjpjCx6WWWWtfRecvcYEdRMw/fJTOPIrcMcB\\n\",\n       \"OLIWZ8C7Vfj6fvhaNSb2Ns+Df+L1Mju9nq69fwD83niYgbwAACAASURBVPDuf1BKfeZ6jeUfAxHR\\n\",\n       \"p+DuCiRacNcB8CMsFZI3s2gofGwW1CqzBsz3ekgyh6YpCgMwQwMlHXr6gEwWMh+G8XGon4KV0zCb\\n\",\n       \"hXwVJt2QWsHgwQNjDETD9gwStSytiRRjmT6J/V0SUYtmYoTRwEBvRtSSNko65Acgyz0ujeSxBzP0\\n\",\n       \"1zYhdwzGN8k78IGzcSR/9BBs3QYWUDwiYm4q5T8Mz5N8X9f6+fdg+jZoTMQiaoXHYKLN5PILgchl\\n\",\n       \"jNuQ3Q7WnXGqtpiCPiLFp6F9/+XAyY5rJ/O7YOWybFef3vo3ubAX5hNQ1KCZhdOzoJ+B2StIX4kQ\\n\",\n       \"dgFn5rmqWwdIFOnvnqIQgpEQDF/HMGYQ1WdLG2AF0A4h9wQ8W4RoDrQ8qAxUDcHJJrGqPvPfCZi9\\n\",\n       \"AIk+fGkWOAfVL8KlP4FEHRbHNZyxHglvg0BtwwwnUX5IOGjghDB+KUmmk6e2tIE9OaCyF8SG8peg\\n\",\n       \"8hCs9oCKUqr/GpyunwqIsJ2YK/G713kobwRcLtW80YORBKSuUkYVYpVgbfyFx7wedC146DaQWSgp\\n\",\n       \"WBOohDC42rCxAKUpmB92Ed5+HtrLMY/v7DhUPnXV72gVah1I+7EjxYQjOCKA4pwoelESkwAnUlh6\\n\",\n       
\"l1ExGIRFKk4NpwzHZmCHo9FUJXqhyYazG+/RIpxZgpsuorIeGXcNZ3Q/ySWNdNmnucNmkGhAT0hL\\n\",\n       \"k2oAjc924dDVE1SOF/ila02eiIzCwYNw28oLnP2cB+9chK9oSm3+9Y9wLl4VXM/MyP1KqT8XEYOY\\n\",\n       \"qfsTEYzE0TM70vCrGbhnWJyfaJDIjjDeNzFSgJkkI3m2IuFCV5E0ttAlRzvMMxGEeIkWZW2VbCri\\n\",\n       \"8BaoAexegzN74MEmnK1B24wzJG53guDiYfIdlwVJEuSzcOIJ/MN5WrMpjKCOaw4ou3koD5D6JvXx\\n\",\n       \"ATiwuOQzON2l6h5Duc+AW4a/VuR/A8Z78P3bYGIG9rTiyHhCg+O/IaJXlArPXY/5fQ4+WYZ3mzDp\\n\",\n       \"wXoVvgTyEfC12P3yMvomNPJw8EPwnjUo1uPnPH07PJkkTjljQaoGg8ZQ5C6IGWzVFBt0uN+D3Ay4\\n\",\n       \"deh+Fz5wDSXIUGLjOhARk9inwcvBr0/h5jK0UxEp3cUwFCYmoZagxoDEOpzpwM88A4//jEVmZ4ac\\n\",\n       \"puNpMN5x6IQWi7kR/H01lm/z6Aygfhoyh6H/KCx9BYI/hBvyKcb6SfbVW2yM9jg1pgiCCM1cRYsK\\n\",\n       \"TFUHXMjfhfv4M7iZJo0vAGWg+hpIa/+04r8G/lKpF9SD38T4W+Bfi5BRijdEABtzM9hBfL1aI87U\\n\",\n       \"FmCtBPUWjF7R1ntpHOwrMoDuWXj2D+EdU/C2ShyweDo8Phu7I7wIYdw0F/HCBbrgxkJ/j2aBjIjY\\n\",\n       \"Q78mHZKHwclDMAonMqBnFUkzRRgoZkKFa/pckDQzyqCowPM0Ik1Y3jtBZU2n3a7z7KiGUgX81hjt\\n\",\n       \"Y32UrsEvb4+7CLQW5UqN2fyA1g2T6I0+sllhc8JGXwzpPBPQ+QxwbB0+8hgcOALrCQg3IXc07vb8\\n\",\n       \"/stMaz528dCueniiF1uNvf64bsGIUupyDS+En4xWMhHJwejvJlG/NMNgXwHf6BAkDbB10uKj6xa0\\n\",\n       \"NZRhofwIyxynf0nnwniDXZMLWFaChXyIz4CkUhRdsEtxMIIGR9ZhcR7Wn4KbLNixBX0rSeV9IY1U\\n\",\n       \"gcDZCc0VKNxMu2njRAnSnVGC/BaRfY6i8pl3obEGzSZUqtD4hIc6uX5FK6fI9D1wZhqYgf3N+Aca\\n\",\n       \"AVEEt25A/V7gugQjSqnzXCU/LFJ8Cp68A+5YiclVvgZPzsZy5LdXoThMw5oRvGMFlm8WkQeUUs0E\\n\",\n       \"eHPw/WfhkMBICFEKVubgRIXKn0OlTry47YSlj8FeeYHA1bPgQgjRck7kffNwdxr0JkyNQGoEtVik\\n\",\n       \"bTUxxjKkNUFwcUODlqMxfjxi7dPw1d+zSNw6wYxvEgQmTujhhR561iZImJzMTWB6G+x+OuIXTsCJ\\n\",\n       \"98LROag3oZgTJjUTlYBwWii6HlObZeqqhG4psF2OZw5Qe86EcAvqwPG3gpBXDhHyxNoiR673WN4I\\n\",\n       \"UIqqCE8Qe9V89nqPR0R2wvaPwM5kbDx3sQR2Eg5sxGKG3/1l2P0s7LgI60U4qqB+JeGyAzRASrBa\\n\",\n       \"ite6bgS7HoL1GREpXNbiUEp1RSYWYvmC/cOM8MBI8LW7Sqwli/AvO9DURb4Ghf3wtnfA4VWwvwDP\\n\",\n       
\"/BZkDANxp0E6aEkHX2wSZkhg6NSUgatNkSwJowR0xkt0t3bgPLKGmjwC31mF/QdgfBL6KXBTkFml\\n\",\n       \"71dYX1ynMLGGlME5Ac4nlVJnr5qnLzwG7zsHb7dA78JWBf5SKfVybrsNqAkE8kKTAsB6AfzX3KH3\\n\",\n       \"WngjcEb+BfATIik98uE0pfsOUpu5iUzFRKjT3HmGoNRBU/OElo3uJTE2t3Ane5j2JKmlt9P+5tM8\\n\",\n       \"90+WyO7uMm4pbvAgFcJcD1YNGFhQ3Qbzl+Lr5YlvwqO3wcl9EM1tsbVnG8F6AgoJWGzB+H6wu7i1\\n\",\n       \"cdyNdchn0fwmyfkqqcfg3Ssw3YXH5uCxKaXax178OaoPwNH/Fvaa8Y/T12GzFCu4z22BMXVdpvdl\\n\",\n       \"0b4fnjJg+UicZt0iVkPM3flisSOII/2RkJhD0uzDcR/edi98rwsJK3ZJ1r4Qm3NtXi7niMg5OPs4\\n\",\n       \"OO+AXQH4EsdEK39XgDuOwD23w1oKgqNwiw2JBnRGCJZKVLI6KX2ASA+as2TWUmwUVinuhh3rGVrN\\n\",\n       \"NKWGTzfdprndYIeeI+c0qSR0eilYyiTYuH3AsxHsPwe1AyA7YNuKojXjY5cMUBnsVAKtlsU45dAd\\n\",\n       \"SdNduREuuSAPQHITPvNWIPIj4/eBbyn12naJ/YThr4hLVtc1GBGRNMz/NvxCDyZq8TW6eBjqCZhZ\\n\",\n       \"gHufhJOr8NSt8OQWOEeh9dhlz6wh9FjW/cA3oDkS8+N3NyHhQ3obkOR54j5A7Svw8O9BZQ5GgxRP\\n\",\n       \"HD5AJX0PwTdGoF+FzHfhn5+laMK7T76wcXE3oTQ+QKIGtlWg6rXpWyE3CBRCA+VrtFSLNWWSdfIo\\n\",\n       \"+yYGtXMwPQa5JXjvKGRNUAlo9cFKgL0fumn6GwX6F47G2d6j1zLuG65jXxORB4hr7v0ftBbEBPzi\\n\",\n       \"k/DInXDbRpz92cjBk0WofP7HPnn/CLzmwUjcLvWSL/WmUuojIvIO4GeJlf+u9do/ueLu95RS33tN\\n\",\n       \"BvkKICIFmL91mtAaw3ANtEgQNUJxaZatXc8huk0Q5AkHHn5hCy1MUlqcobzZg1veT3TpEUJzkdEp\\n\",\n       \"jVSlgTvmEg7g8BY8nofWDGy/BBUNWITqElj/PRxZ7HDzqQWO3rGN2vYFBroN6RbYU7CRBieEnCJr\\n\",\n       \"ZigswZ7lWPgK4PAGnH3HMEvwfPZJqfCiiPZJkP8D8iUggPTJeHexVgB//VpzcL0w7OD5ooh8l1iZ\\n\",\n       \"tQX0ILUdNvKw/YrFJBy6ZNIk/ufhJ2C/C7Oz0C5D4QSkNuELVxJyh6nXr8BzR+HcDgh98C8A3hj8\\n\",\n       \"8l2wYsYEWJIgu6D1BGgXwZ+ArTwDq45kPcz6blonx/HdNcYPKKY2LSpuQKAG9KyIaaWTNQIiT8dL\\n\",\n       \"WUxGfSLxMX2Ym4FnJ0BrwaQJjpvFHKRR9QKBlcWK+jRSLoMdZfiLDjy8BTN9qPtwZthV9BZeIUQw\\n\",\n       \"gf+GWOjsLbyAvwP+gwjbleK67JKH2A17EjAxFNqrT8a+dSM9uDQHe2pw4ybI0/Ctp5RqvURVWCnl\\n\",\n       \"iUyvxRLv81fw31oJ2HKAxlXPr4vIx6G8B8wbtuFOfwCeTQxFDyegvx/UGmquh34izp4evRtmNSiW\\n\",\n       
\"oaCatCybCymL2UGAUfIgABGLEl1sJmg3CwTdPORHQPdhfQXS2wEX8jVQPXDHYdEHFUAqiLOeay/v\\n\",\n       \"IPzC5wW8IZ3gh6D9DXi6B5fuHsY+Vah8Uim1+MNf++rjNQ9GhqqN9179+FCK+/8CfunlIjil1J+8\\n\",\n       \"tqP7kWCBliggCpLtAfZIBmugoQfjmOUkHapst9YYPx+QX0mzdXCchcY0oS2QiMBtYeYEQ1Kk7HH0\\n\",\n       \"yga1UkTBgLRAD/jePotlfwL+uIO1q8cNKuLQE5Dw1xj5dp2H+2WWbsrgV1OwlQGnCsUI6gH6SAcj\\n\",\n       \"gFH7hSGnAjB04vP8olKYUtFTIsX/BNU74bZlyPpQycJTJai8xLfljYA4mhdzBH4uA4dsypkGD80o\\n\",\n       \"7nsGZjpgG/D0LFSfuiL12hCR/1iHIwXYY0O9PSTlikhp2K59+fiKuCb9fCuviMyOAGZcx0LivzeI\\n\",\n       \"CTbmJJyfAicJgUIZB/G+ngTvcTggmKIYb+sEVY+LIx4JA8ZCD9f06aSS4EAytMmZAek+zHfiltxH\\n\",\n       \"JmHnurCcSzOayDNVsXHSLWojwmYwS3TMg9ORUs8Bz71+Z+CnDr8KLCj15vGheSVQioEInyPOjvzv\\n\",\n       \"13EoVkzRuozQiksKWgBh8oXHUz5YuZc/TPmb8Ojvx8JoMy2oZ+GZAlQ+dy0l4uEm5aSIuLNwJPGC\\n\",\n       \"+jIA49Cw6A3f7/wMzGZg9zo8XgSiiD0nBmwecUjq4JQDCvUQY8QEMcnkPS5a8/T6KUgIqD4MdsHj\\n\",\n       \"a3DfFNACbS+cz4JzBkoDaGzGa1JbRG4CSYDaUEptXDkuidXdDo/Be7bDyJTISgW+o5RauNasDDeo\\n\",\n       \"D4rIw8SGme71zKxezzLN/0LsC/J3wyDu516vFqJ/JBrQb7ZI7pqlUN3CKUR4KQvoE0Y60UWXxS+F\\n\",\n       \"2BnQUkKtdwCvp4NWQ58tk8p5aMUuoWHTmS+QLBuohse6grNpaLXzNNL3EtQOQvmLZA52CEo1Hnuv\\n\",\n       \"z75HYHt7wG/cv8nfeKOcj56A7dsgm4oJSxcWse9rsOP4i0mey0Xor738vLa/Ckd7sHgXJAzoNaD8\\n\",\n       \"l0qpS6/LjP6IEJHiHPzBnWDsgQ0HzO9wMXuSxi0+owvg+tD4LvQevvJ1w8DkQZHUAOZ+AXbvif/n\\n\",\n       \"khJJf1Up+wf10XeaIEEsT6sAJuHcxbg3UCtBowz7p8CehKeTQ0n5VXAi3CqU7Cbbz25j5UAAiT5G\\n\",\n       \"CEYUEtlChgqNrE/P1/FSIY1k7KrjCtRbFvmLORbzLuWkhqgRutVZmke1eFt3DYXct/BKIYIG/I/D\\n\",\n       \"21t4KT4B/I0If6oUP3A3/hpiE5YFDg95XNk6dPXYTiV3xRq1koX6S1pbL2No4vkfof5OSM2DV4bq\\n\",\n       \"51/BOtceqqS9iObpQegzWIczE9Afh1knHt9oFdYbsJaD6kARNOHur8DW9pDaezyinMa62kXFHsDM\\n\",\n       \"OSgugTMBD98BFx+GERd2R+BdgqwDE+fAbcOJOjwG8/9dnClKAcsiUjwK7S9fFmTMwD03ws/fBpVR\\n\",\n       \"WF2D0uPwzwyRTwQ/4LMOX/+6OPP+IFxPAuu/uF7v/Y9BLANu/HWZ4s0jRGPjjC5U6Mx41EdWicpL\\n\",\n       
\"8HGlwudFt0Rk8mn4pzNwWJHYngFl4qoCDS3CKjSxcz7pNahswubHLbqP3Ak/fwus+aD1CYo9erMG\\n\",\n       \"xXmf0yPx9e/tx1xGLm7A5zcgtwpvS0PBh2oF/9/A+jvh4ljMLq8U4KgB5Zet+w53BQ+IyPeI64z2\\n\",\n       \"G5lzkIVbb4HEgaE0exa8D8Ez0Jg7RuP/AxZfznNFRGbg0AfhF9fjjBHATQZ89YMisjTM4L0ESqlu\\n\",\n       \"UeTJf4A7b4fVJIQKvBU4dx7OVsFw4Og8JA+AdxqmzkFiCb4NzRZ86zc9JnIbTJhJyrrPIjb7j4ZM\\n\",\n       \"bKRpvyePZ+tsZEYxZyuUx6qMroL9V7DSdJn7OQvp7cdY7tAprTMYXeSG2Q6yr4WdEkkuK+W8tav/\\n\",\n       \"x+GXiUXOvn69B/IGxVFiG4T38DJO3a81lFIbcav+d2+Hg3WwHLjowlYW7mxANROvd6dXIHrZYOTy\\n\",\n       \"sYDPXf24iExDeh9oBvQuACuX10ClVHVU5MxTsP8IrJsQbcUu3oUug/8bHnwHpA6ANQqNbtzAcuMC\\n\",\n       \"NBPw4E1xZfegCTvPRhTXPZ78UIq17Bojc+tkTRtXSrS7Ac6uS7BxJxz9ezCqUNP/f/beOzqu68rT\\n\",\n       \"/XblQhWqgELOIAACBHMmFShRWbYcJGfZbdnu5Ncz73WvsfvNmuk30+3uWT0zr8PMtKdf2/2m3XZb\\n\",\n       \"cpZkS5aVAyVKpJgzGEAEImdUzlVn/jgXUhEEkxgAUvWtxSWhwr2n6ta9d599fvu3IRSA/ii8DfRB\\n\",\n       \"3R8b2hnjOrUGeH0DvNsNHBQRRyPcvRX6nUYWvB78JlBTuhvzd67agblGLAQB6w2DUuljIvLNY9j/\\n\",\n       \"Dw9qhYOURVCjBXB0EWxxiyQj8LbSjIrI/xyk4J/qscV9FBQ2YJck8fQwfdYoWSkkFk8ROhoj8e0i\\n\",\n       \"+FSVXqthN5aWDA1WE3VpB6WpGJ4EjFXCttugfzdw1Ihm384dn4gch7FbwVYB0WMwvUMpNXLxz6VS\\n\",\n       \"wDVvEX2leKCl6iyxmaYGOADuCzd/cy+FJan3AxGAgjS0paG3nQs0gAvAC3sg0Q23FoApCLFx+FFc\\n\",\n       \"qb2g++VMQ1MfLM1AKgDHlFJ9AB6R28twLXZg6/cR2+chHtpNfF2QwiVuMl4vKXM9priJuKOAjDXO\\n\",\n       \"mDVI6BVgCI6d6cfxh1ZGa4IUOE2sHM7glDDOHlh0Al77tIhMztca742KkRX5M+Dff5hNzi6EUigR\\n\",\n       \"vocW+M5LMKIJPAM7u6FrE5gcMP1tSKUhuEwnK6eeh/i+D+IMLeLeAssehNa0dl7vuhu63xWRX88E\\n\",\n       \"JFPw1A64/ySsc4ApAIExeFwp1SEih4C3wfz78EAfVIQs7Gwu4fgaL+OWNLETgzy1KkHlIGTFxBBp\\n\",\n       \"SlJOqsedWGPFpAtThLwpuio7iW2YgGNjsGsM/jlXzyYirbqaqDzH68gELJ3Q3wsHAV8pmJ2zluNr\\n\",\n       \"IeCAehGxqAXeYTsfjFwmSqmTIvINF/zrZdqg64zRrMDyOnx0nxZEHTNeG/aI21NGRciKJZsl5bJh\\n\",\n       \"U/VI2MmIw4z/oIKp/VAeg4mgvjP6eylqKaF+yE4qNs5gM8SdICkYL4KRX+b2SZk1tjPk9I4REdPs\\n\",\n       
\"xnI3MnGYCEBZpRG0zWBEJxfxRLA6wDrH92bNgsVx7uPvY5zEL4vIm+gcaWhWr5oscNr49x4iUtIO\\n\",\n       \"ZZ8l8rwpZ3jFBIOvE/zqSgr8JVhMWtdjwkPSOkysKqiV8FnggIh8/Rg8Yqblaxa8oTi+3iw1fWBL\\n\",\n       \"w4owDG4E8sHI5fEw+azIpfA48OciVCl1dsfb64VxHsyljTqff8YlISLlsPQB+PgAOIxzuV3gxc2w\\n\",\n       \"5zjQaew/DjwrIi+jK2+CM9dTI2A5JmL+B3juM+BsbmB05WKSo83EdvvAf4JA+WsEomE4VkzRGhPV\\n\",\n       \"oWqcIQCFORLE5Buj2GMiVngQ3knCG3M4XVu0pGM2tozWjwAQCYEpo32U3guwp8GR0hmueV+GuRj5\\n\",\n       \"YOSDUVULlRtzbvxOSK+BiQG4AyMYAciSCmfI2JzYsk4cQROismTFRNZagMls1wUahSE4eBTu8IE7\\n\",\n       \"haXQRtZiQaEwnQTvbnDEwOvBaNI0g2HEVYReYokYj3mh+B6oXw1KiRTvB//rM9UW2mjO1AYlKyCb\\n\",\n       \"hslD6LbyC3qWOAW7D8PaCrAX6ZsJ3VDSpatr5hRpvc/0Kei5TXeHn1kBzgI9Dgh2Xsr+jYvE5Vji\\n\",\n       \"u4u1x/xZlEDUjPgKsYwOYHFMYrNB1lKGJWzWF5clwE5jn0kROZmhcW+SdYb1td8D/XUQ8YHJKyLP\\n\",\n       \"5R1WLw0RLMBfAP8unxW5MEoxLcJP0PYLfzbf47m62FqgJft+IAJa97E4At0rMIKRGYygZE7tnVKZ\\n\",\n       \"kyLyV6XwjTZIjePwnKJsi4CqJNRXTTzSB0HBbBas6RkNiiDKiTkSoCCehmNJ2D67nYXBoL7VLKqE\\n\",\n       \"eBUoCxQMwYAV/C8b4wv4RI7sh+XroN8EJMC8F6om4Zm5ru26dJpGdDuNvhnR/3yRD0Y+GC4v54q6\\n\",\n       \"iiFmBp9258MC2MB8eIKJzXZKsiksVhOoONOmKJlgAwQ6dYHGpFJq3CzykxD8mSLckGDClMAZS+M+\\n\",\n       \"rdvfqAxMOzBKVgFECjZB/X1QbIMwIsUHwP86VP8OrKmEwiJIlsPUBjhyi4j8KZAB3xe0FXBzENJm\\n\",\n       \"OLkOTr4N/OZ6fYEfBKVUv03k59PwW2XgyEJ4HAZHdD+diy0zdcGpw2BdCS1+/VBnEZw+yBVmFkTE\\n\",\n       \"BLY1UHIbmN0Q6YDp7cD0+CzxK3pnFUHMpv1YGxw0iAurWJDUEOHSEN0I0TaMYMRgHAZF/+RGKyG4\\n\",\n       \"EUqzEHPqbsG2PxCRfwGmzpc1y/Mev402qlnQv/UFxP8E3hDhPy/05nmXh0nOdR8FfTl+vyxWn9tY\\n\",\n       \"geRFJmsFWSwbjlK8TNFssuMKWzEPjBCsj3LMEiK0z0ncFCPhGSJbXoQ56MQUTJAxxwlIAg6drzRf\\n\",\n       \"VxEWxOHdT0N7AApj0N0OR85A9D0PqWl49l0wdcGyIsiOA2Pwehx2z96mTaS9ET7XBFYzqF7AKfKb\\n\",\n       \"mFLvXtLXdw3IByOXgIhYrLC2DDahc/qd/TpPftZNphdKA5itUP0nJrKtboK+OkIjEyTCZxhxlYDN\\n\",\n       \"gSRjmP0erJMTRLJDWv8xCuCC2mVw2kS49zij6zy0BIrJWvo5dEuSqV4Yf3rmpitiXgZrHtFW6IVJ\\n\",\n       
\"7aS3dy1sb4OWCnA3Q0UWCiO6yiSzBXZ9GdI7oX0p3Nv7/idcNAHJW0XkoFJqQXmM5CIilVVwTwUk\\n\",\n       \"s2CbBPMU7FKzmtjNhRYgy89hx1HoXANKwcRvIHtcKZUVkSIP3FqodR/hMdiRhSOXli3yfBRW3Aar\\n\",\n       \"xsDth56VsGsZ9H1nDHa+BVs26BaYyS7wvYXvljCuERO+hhKcJhsQJmPL4EkIlZkyurca69aTxtgH\\n\",\n       \"RXyH4c21ULQUWqIwZYfhELSfhFO3g2kDSIeIdw8EX1/glWnzguG2+ufAQ/msyKWhFMdFOAR8Hm2G\\n\",\n       \"dpMQ74bTZliW02YiC3T4nEwU1Yp8M4apUvA5FF4/xMdFzC/N1SZDTz4rvhzC7rFSLV4apyFjThFY\\n\",\n       \"5KSkc5S6YkVX0zRVZiuJzDjuVBSKLIRLI4yHUky/GYYn5xqliNi98JVaCh+0M+2fZIc3jJs0TTug\\n\",\n       \"BZhoxFjCMrIqPxGRYsANTCulwnNs09sMX/gYTM40JV0Llufh4yIyqM7v2npNyQcjF0H0kf3cKli+\\n\",\n       \"FMYtkD0Nm3aD91Vo3qA9iOM9ULIdy4oQ64agNVzOWKmHVCzGyco1DO88g3PVSaw2sAULSZhMhPoj\\n\",\n       \"8MsIvGXsx1SnnT4HCyBdQb//KP62IHafi6QnSegZpbI5EW7F3bB+XAcioOvvN/ZDx71gN0Ol0RES\\n\",\n       \"dMFHy6QWaMWT0DQrArcoaMrAqUaMSpWFhojYauAr94NqMJbHImB9CT4lIhOXIuI0sgZHyOnUaWy7\\n\",\n       \"qBa+vkHX/U1GwXUEHj2mtbHnGCnNem8JtG+Gu3vfd2NcNgLZaghsDhB4aR+Eu2GLDRyTiC3Asg7I\\n\",\n       \"jMShSmG2xbCkTWQsEFVZHGMezD4bmXXAy+/vafpp2JGExrUwlgXXINQNac3Ihqy2xm4eh0O3wp5y\\n\",\n       \"EfmXhb7sNg/8R+Alpdg/3wO5wfg74C9FePxmCeKUUsMi7tfhubuhNQmWDJwo9nGk9HZotGDyTVC6\\n\",\n       \"ZAJHpovqzgTL/bDjKyLm7yuVmb2s2wCNlSksQyEcdQ6Sdhu2RAYn4wQWxSjrgtCKJB97Ksux9SbG\\n\",\n       \"l0aIFGeJZmD0ZUj/m/Nldr1w/3LY5KYk4KBoWqEGBwj5jhK1J2kbhK4VzNLTGN5J03NtD8AKbW3a\\n\",\n       \"luC9CYsT0u2Q6IeVMD9Gd/lg5OLUN8GyO6B3Jqm3EfqTYHoLOvug1AwlERiZon4IHjjm5OBWL6aI\\n\",\n       \"GXM2RY39NGOti4gEi6B8FNPkBNntQfj7WbNXs1mrLNMAzTDZTGgHhHgGaqfPWUqwlEHFwNmPmQBn\\n\",\n       \"RJsEemZpKKIm8MRgsqGAfcvNdETD1Awp2od0hUkGyCzkFH9TMxQ2QN/MAy5IrYDwEGzgCpZaCmHT\\n\",\n       \"eihYZQRiRZAoh/A03CYiu5RSUxd4ezP47LpPTi51k+BeopT/eeAtEXkbsELhVqjZCIF4kmRfGMrN\\n\",\n       \"ZGxp4mkHI6l6olYPmWoFjzlFnHZIOaHcBX0RUsd1F+AHjO9gx3JoRncwnnbq47i5D8ZaYLKWebqo\\n\",\n       \"LERE2AB8GX2xzXN5vAj8F+Cj3ETLW0qFXxORTuhZCmarnWnPLbBkKYwepHB1MZUjRUg2wnDjGZZ1\\n\",\n       
\"w6ZpGL+PWXoSwAMlChiP4hwYwuG0EvFkMWXjFI6DfxKKLeCOpmneZyMQqSFQW4rFFMPcFCD9hyLy\\n\",\n       \"uNJd0t9DRBxNsH4xDA+jKgEEoRpnsJ/ONVFGaiAwICK7uAzNnwUcjjlkBk5IWuEC5nHXlnwwchEc\\n\",\n       \"UFsPmdmri3UQKILUoFJ/AyAibdBg/GAGq9NMFNnJmtKEiqtQahNsH4boarK7u6Bopxa6vjfzVUql\\n\",\n       \"ykW6eqCqWVv/AjAFzgktWp04ewTJAd3UyJ2AkAN8US2cTozDQI2uwa8yMiCTThhIQ9LdSmJxKSOV\\n\",\n       \"DkonpzhS0cWZhhB37IVTYtifL1Scc50lHohZofhKNuyB9vpZttAWULWgTkAVs54DEBFfGTzcDGuS\\n\",\n       \"dKwKMV3vZ/1BaDKOXdAJ6feOo6HAT4iYh2DMDu39Wd6dzlBi81AcynCgoRUVchGLuGDSB9Md8P/4\\n\",\n       \"YO9y6B6BpkOQ7KMvBf1ebYEfLwFfDCa84MxJH1cqoIR8MAKACHa0ide/UYqxi70+z9koRVaE/wT8\\n\",\n       \"qQjP3yzZEQCjBL8PoEbkDxpgKgIFunDGlBW0M+cZxr2wagBsdXNUKE7BiMCtg3CoLcXa6RSLe2DS\\n\",\n       \"C70T0B2GAq+PJ++z4S+vIFxdgXvYjHvKgUx7dIb3iyLybWPJWDB87+1gLoPxEfxZRaEZRKWYWFRG\\n\",\n       \"pKSQmKok3jcIv31Cl1+/cimfOQZnzoBlOWerElvtLgAAIABJREFUZs6AZxI6zve+a00+GLkISYhG\\n\",\n       \"5lA6RcGR5KyW4yGYNguHa6sZ9S7BLlZIjJC2NJJJD0FNFMIlEFoN0ydgs4i8lis4HIeX34HfjUJV\\n\",\n       \"FQSmwXUQCowGaLNqxEe3wyt/CTVuKErCcRME/TD1U4i8CW98A+qtesYei4B1uJix5k/Bs30EpieR\\n\",\n       \"1kq82Qwji47yZEIx9V11doOphcbYAMg6zj4YA+ANwIHzvelSSEMoDCW+WZVKUa1mO0d7ISLWKvja\\n\",\n       \"neBshqOHiJeHibtP8s6tw7jeAG8MDvtg9Nfn7i17Co5Pgs8LLa8F2PfJFMnaRUzbnCRjDhJTJ7Co\\n\",\n       \"CYruseGydBPfbCKg7iB+tBB8LzAxNc3LhdBWDyEbnCoDVx8055QVTwln/zY/7HwbOMEC6EJ7A/M0\\n\",\n       \"Wm/zADpTctORhlAEvOUQUyRNWg4oxkXBlobxAkhPz2GV0A993VC5CGp3wZ7VYK3Wq/qTu2H05Tps\\n\",\n       \"f7mYurifjGMpDr8i7h0hVFxP/OdVMFUL9SNQKyIxqPgslFeBRXo5s+44U/trCR4YoH9tCofXQajM\\n\",\n       \"gko0Eju8BE4sB1MM7jQ0fxfVzwF9XXDodVizBKYskDkNJcegN6vPk3khH4xchCx0noJkC7jKDS+L\\n\",\n       \"CFiPgd3PWWvPwzDQ42Xqs424ziTILkqScJlJmatQEz1Q44BXCiCeBcxaoW0mp/5bOw7Kd6ZgUwE0\\n\",\n       \"JqFzAt6dMdA6m+I2aByFiiSY3DpFn1CQmILMOzCUgfj9UAZkonDGtIbwMQdkW8l0+JkcmGaq1IFy\\n\",\n       \"jsDpMcPAa6GilBr0iRzeDqtWwKgDUl1QfgCiUe0W+YEZh51H4csVEJrpQ3FGWzUHYM5uri2LoXiJ\\n\",\n       
\"MaNaTGzXaYY31mKvmuCltSmkG4ZfYI4TWymVEJHvw+v3g3dFmuzLQUZqCkg2OqDvDGbGaKvxUJwu\\n\",\n       \"wByNIalhAo1v05HdSuKoF2qn6f8rGFwE5iYI3wf3ndYXyyxwogJ6xoDeK/lObhZE+DqwBdh0M83o\\n\",\n       \"rzdGduTP0dqRl+fRIv6aMQa7jsCy+yHgIdwfxF+XxhEdxpKACj+8Ww0T57i4KqWUiPwEdtwD3vVg\\n\",\n       \"PgF+P0y9Bhwqgc9tJLk7yVDzJBSaMKetpJSPRMgKSYACHfm4ofrzcI8FmvsBYpywbuetO7cy9EYr\\n\",\n       \"gdc7CHxkAqYbYFu7biuOFbKNQAfUA5ci5lci8tQObeu9UXQhwI4EHPgg5nFXi3wwchGUUmEReeLX\\n\",\n       \"8Gg9+KwgfZAZhF8ppQZyXqdE5Gd2LA+ZKbRlMY8miRRFSdj9ei0ulEJMHVhXJ0gmwnBq9oE3SoKz\\n\",\n       \"ftg2PYcKOud1bmhZB3cc1KLJpBUsaai3wdgWGNihVOhFEdkJfWXoGX+JBT43s40iCBahgj3gY45l\\n\",\n       \"iIXINDy1E/pOwa0C7jDs9+va/OCVbDcLxzvg1WndYEpFwXQG/MPwxFyuhVYoKslZc/VCeA2xN04R\\n\",\n       \"q+vC75+C786o2I1aficQmNmWUsoP/FxEfoXOvvjC8H8ugv59FD/ooiaYIVI/SdqTwDbqoio8xHBj\\n\",\n       \"gJHOrA6WokplDgOHRcwnIPYwlBRDwgST3UbV1ULW/1wXRPgt4E+BrUrlM0VXgV+guxx/Fb3sdVOh\\n\",\n       \"lOp0i7wQhPtqSE1NMuAbpMA3TfUReKYYxn8NyYPneW8MeE5EXgLMuXrAWpHKGhgrItY3DUSgtRJT\\n\",\n       \"OA3EwBWHwKAORpyw2AvNOZPPJV0BAvZXmfIWE08loWc1HFw3S7eSBtQs99XZiEiRMVa/cX1YUI02\\n\",\n       \"5y0YEZHHgN8B7MD/r5RasD9upVS3iPx1v448zehWzueYTCmlQtUir1cx7SmCuBuiJ6DyFGwFS52V\\n\",\n       \"ylAck6mbmH2c9ISIuGa2YxZpq4FP+MATB0pFOibh13OVZgEu8ChdBQNgN5TYRQmwlfN+B8YAhkGp\\n\",\n       \"iAROa8W0Y0ZFnQY5Ad5x+NXV/cauDcbNfCdne3Bcje0q4DUR2dupNSIJtAnQnDf0FEyO6t/Be5h0\\n\",\n       \"EJNJwH4jgHUUw4PNsLYAxA8xu8gLCaXeW1LKCUaHvSJ7XoPb41i9WTJFMeyeCNasUFQwSqjNhX3y\\n\",\n       \"CNQF4c2z3V8zJ0Tkr2GgBEipnC7EH1ZEEOD/Bv4IuE+pcwSHeT4AhkX8HwHPiPCkUlzRJGAhElbq\\n\",\n       \"LRE5eBqqIZOE0CScdAD+OZxRz2Gu1hox6BuD5TEotuIp7tJthh0JUvY46bUHSDmG4TkQExTPkb2r\\n\",\n       \"HYhQNBlWw98XkTY/fDXXVsIP9k4diMxp/CgilRXwSLuuDqRSZHAUfnkprUKuJ/OZGfmxUuqHhqnM\\n\",\n       \"bhZ4pG3cOE5f7HWj8MoB+MrtELFApgYmD2Czj1HebcXjj+GdirG8C4a9sO1WdKO6+qXw2N0wXgr9\\n\",\n       \"GZAj0PYOFIrI/5pDJe2HyQxELbq/ygwjbohNYKT+Zo0/bBP52a/h84vBYgfVDXJG97c5p3b+w0hu\\n\",\n       
\"8HYRurtg6CDULIchE6geKDkE6YixdOeDhzfBsnUwYAHlB/s2+JxZJJpR6pzvOwjP7oEeK/HbBIs9\\n\",\n       \"SU2HGbxOsGQR+wTx+ig86Z+jT4gRnOSFmYAIi9EakTJgs1J5Ee/VRCn2iPAsutz3a/M9nmuBkWnN\\n\",\n       \"DbSuyJl0CnbuhQ3VuNZ7qZs0kw2cYKJxCnsoTkE2zNBhSL0FapE2N1w9awsjHgjNSAJOnYDtUbht\\n\",\n       \"EWSTIKch06+NH8/J/omIux5+5y7ILDKWlXvA9wb8tiGYPW8G/nozn117Z26idi7aV+TGIaPUCavI\\n\",\n       \"E+PwgBvqwuAcpfZkhi/ueD+TAToBcmA98Eop3LoGoqUQBRBQK2C4D+rHoY6cclaY0R24X4c3Pwob\\n\",\n       \"R6AkBkOFsLMURs/rL5FU6riI/G03NAtYFfQvtOj4RkAplRaRH74J9x+ClSYwhbRQ4wWllF9ESpbC\\n\",\n       \"8g3QNyO2LYLEBpgchrtFpHO2CM5Q0R9LYjsI6UbwhNJ4hkKMF0NvEdgGY3o2E73+n3jhI4IL+BPg\\n\",\n       \"68D/C/ydUucG5XmuCn8MHBDh00rx1HwPZqGjlBoRkZdG8awqJOZJY8oEWLIrwS3HIW6FJ0uVGlQi\\n\",\n       \"cga6e2FXA6wc1r1nOsvhUAIi+4xtKeB5Edl/DGqVXrbtOt9StR2WLQXHopzKukUwNQp1w7CUOdxZ\\n\",\n       \"54t51YwY9uS/B/yH+RzH1Sal1DER6UBrBSqg7GtnByIAWcHQHdihshRCMbAcgsUBaAIsCZ3hWMSs\\n\",\n       \"YEQTeRsORGHgLrCWQWIERn8wl0NgLkb0POe6Z55Lx/genxKRX6PXiHMrcTy+WT1pEmDug2oLrKqH\\n\",\n       \"6nKRU+PwqlIqtwGZAsc4FI3C3iWQLQKJQMU+8KKTZnlyMZZkHgb+B7ADWKXUwjTuu1lQirAIXwSe\\n\",\n       \"F+GMUixo8fsCYTBE8/4Qm0d1w067cS5HbZBxloh8pg5WKiZNk7yajHGoWt8zIsdh8pXZfWOMSeRF\\n\",\n       \"J5IuKPfNURFYDIlCqLhKn+2qcM2DERGp4NySuhGl1KNKqb8Qkf+KXq9/anbKSES+lfPnNqXUtms7\\n\",\n       \"2quHEcFGRWQABmPa96M8JwN0vAL8bwDEYGAUlgxDUzlUrIGAFTJ7tZnEQyJyZLbxlrH9fcA+ETHn\\n\",\n       \"BYvzw3nU5/7xWR0034U1LmjeDCPLoL8L6nfC74nId2bK8bRlve8gqJVw7+va4t+i4EwRTMQ5x2vm\\n\",\n       \"w40IjcDfo4P3ryrFG/M7og8PxnLN7wK/FuERpZi3niY3CP3Ql4S1ZnDnXDOOVJUy5roTrEt0CaQc\\n\",\n       \"I1W1i6HhMfinS9GpXIgwDI3DxtZZj0+APQxDV7Ltq801D0aMvit3zX5cRGzGhTyFzhDIHO/91rUe\\n\",\n       \"37VGKZUSkZ/Bi49BezG40zBgh1N9ENkBMAXv7IAtTdCwTP8gTWNQ7IXu9RANaIfRly6wj3wgsoBQ\\n\",\n       \"Sk0Xiex7B9avh8EpcGVgUSEkiuGYCVgM4wndUXMzkONHMv0q7KiDqXqoTMC0DY7HYeSJvL27RgQr\\n\",\n       \"8A20SPVvgU/ll2SuP0rxjAhpdEDyN8B/zx+HudFL6+afwW++BO0CrjT024VDiTWozAoYBl15sA4G\\n\",\n       
\"pqFhTAfZx69kv0no6IC7fFDeapT9noKyDggk59HgbC7mc5nm34vIVrRm5KdziW9uFoxqnP8Og0vB\\n\",\n       \"7oFgH9CZU+o5LCIv1UPrCSjOgrJDzyI47gd7oV6qOS8iUlIAyx3g8UNXVm/7Yl1s81xDAvDcbgid\\n\",\n       \"hlvTUN0AWS/sLM/JblSD360vOO+hlAqKyHdhrBU8NRCZhNTx8zS8qnFDuwVsfi1C7pnDkOmmQoT1\\n\",\n       \"wPfQF++NSs1dQZDn+qAUvxFhE1rQ+kcifA9tkHYw7+tyNkplTonI/4DBdrAVQrC3BFZXz3F9r4Jk\\n\",\n       \"oa7su6JgRCkVE5HvvwYP7oElAEE4MQEvzlpafg8RsQAtxdCSgHAUOmZb1V8L5lPA+udoR78PBcaa\\n\",\n       \"34VKUjsV7GuEEQtkLIb51jT44hew9TaLLFkMX1wKqgBSA7D5BPQZjdLynVvnCSMYfFVEtgGtNfDF\\n\",\n       \"ShjNfc00FCTmMFUzUrPnNPTLxSVy5wq4vxWSFl0ZtaUT9orIL2/GgESEAuAv0P1lvgn8KH+zWxgY\\n\",\n       \"AeHHRViOrrD5BWAT4SngJ8Ce/LHSGKX3O2b+dovU+LWQ9Cz8YImB/yrtcxL4kYg4jL/Pe18QEVsJ\\n\",\n       \"fHEJLK6HWBwsHXCvXeQXCaWuqSfJOTbneeaNnm6Y7ANPTiDiOAzOKdg11xtExF4Hn/0oTKyBwTYY\\n\",\n       \"uwfOrIP6Ath0fYefZy6M7NeJPhg9rttcABAA+yEonPgAnikiUt4I9z0EAytguB1GPgI9S2A9MHt5\\n\",\n       \"+IZGBLMIX0G72VYBy5XiifzNbeGhFEeV4pvAYuBBdHnsE8AREb4qgm1eB7gAicChY6BGwT3zWB8U\\n\",\n       \"nYR4+ipbLiil4heboFph9WpovQ/OtMHYKhj6GIzUwCMi4rqa45lN3oF1gTBTLvoqfOYw1NlBjUHC\\n\",\n       \"6EtzvsxIXS3YimeppVth7AisA9689iPPczGMst0nXodPH4UGpz62yRH4uVLqsrsNW6GpBZQjp5WA\\n\",\n       \"oUMJnoYVzGN/iauFCA7gi2htSBD4glLvzyjzLFyMQLED+DMRvgXci9b3/CcR1iiVF2LPoJSaFJEf\\n\",\n       \"xuAz5VCbBcZgynB/vu6WF2WwbvEsoXwhJBeBpQcauIY6k3wwsoAw0mn/KCJlaC3TeF77cXNgpGf/\\n\",\n       \"SURK0Tqp8avdB0LNIQK/kRDBAtwJfBb4FLAXvSTzcj4TcmNiHLdXgFdEWJIPRM7F0BT+7aDOnGaB\\n\",\n       \"sQUqVr+mY8oHIwsQdWmdFwH6+yE5Bc7cjrMnoWwKXr5Gw8tzBSilrvhinIKuTpClYJ7JjmSBTr3s\\n\",\n       \"c/iKBzl//Ft0EPILtHtqXpx6E6HUjZ+xu1YYFZHDF33hNWYc9p2ChyvhPcF8CGw92m5+Dr+rq4cs\\n\",\n       \"zAAMREQppW7omd71wCzS1qwFR+KGZD84T0LvODx+owlY88f80nGLbFkED7ZBygzZbnB2wp6AbuB4\\n\",\n       \"QwlYZ467CKabsRtsnrnJn+8LD0PA+mgrtDVALAaWE2DqgSevhoD1Qsd8QQcj8z2GPHny5MmTJ8/V\\n\",\n       \"43zByIJepslHzZeHiIgN1pTBvU4ojEN4HF5LwD6llHKK3Ho7PHR7TjlpCGxPQ2kP/PV8CKZmjX/B\\n\",\n       
\"zpREpHIZ/F+f0I517y2NvAyL3oV/UUqdyHmt2GC1cRw8s4/DvH2IBcpCPu4Xwyay8Q54eAv0zjwW\\n\",\n       \"AevPYHkC+gqhIA7RCdgWh103WtbqWnIjH/c8H4wLJRkWdDCS5/Kww/oV8OlbYLgE/FPgfBc+fUS3\\n\",\n       \"u9/lg7Wts5TSSTB7dYfTDSLyZv5mOTc2aGqGjH1WBUuLNjY7q4LFDuuM4zBSAv2zj8OVjkVEzECt\\n\",\n       \"HhZD8x1EfpgphTUtMJn7WABK62GFG4KbocsP9t3wicPggCu3rBeRYvQ5G0Ef//w5m+eGJx+M3CSI\\n\",\n       \"iLkW7r0dBosgAeCD2BYYHIZ7RWRfFaiZiosMyC5YGocWL3iWgWkclonIj5RSV8Vs52ZCna9lAYjK\\n\",\n       \"UZkbx+G+22HofMchp2P1ZSMi1dXwpTrw2EENAE6R52NK5XuDzA9nRQIKCMBSn+6AFgXdsXkL9A/B\\n\",\n       \"nSLy7vmcLy+GiJi98FAbbKyCrB9Mg9AvIj85X9fWPHluFObN9ExElonIOyLyloh8Z77GcRPhKgTn\\n\",\n       \"zA1whkJIevQM2j0Be05CKcBhqLNB62YIVcD0J+DoXVBcDp8XkXzqdBYzFSyxnAA+A9IJ7smzK1gK\\n\",\n       \"3Oc/Dnb9vx8MEbHVwmMfAT4C/XfDwGdgZAl8UkSaLrqBPFedSdg3c04BpMCSBM8UxBtyMiZOSPv0\\n\",\n       \"9dbzQfflgI2rYPPnoH8rDDwMfXdCRSl8+so+RZ488898ZkZOKqVuAxCRfxaRNUqpA/M4nhudWBRS\\n\",\n       \"EbA69X9tDkinwRTRs/poCg4cgvYotMVhzUow9YG7GPbYIdUOYyegfgwqWQBlZgsJpdS4S+SF5+Aj\\n\",\n       \"rZAxQ7YL7N3wLtCZ89JkCCQINg/vNw2LgSWil3iuZEmluRncdTkldgWQXg6hAdgI+VLY600SDh+B\\n\",\n       \"9ji0N0AiBpZjYGuHg4U5xz8FpmkdjHzgppYlcMdaGDbnZOKWw8hxaBKR0qtRNp4nz3wxn71pclPV\\n\",\n       \"Tq6SD/+HFaVUqlBk+wvwpULwWcGehGwQghPwgxmDLRF5YhKaqqBWwWgNDBYa6WQAt77QOebtgyxg\\n\",\n       \"Ikq9IyJdPbDEBJawDkL6jLpUswtuq4MtZqh/Hm5tg51roC8B5nehdgJeu0KjM4d7jgfdELeC9wq2\\n\",\n       \"m+cDYnTl/vEuaDoBTQmIZGF7OdwTBUsBpP1gfwW2JCHdAn9UJTI0As8rpc7pS3Q+RETqwO2B6dnP\\n\",\n       \"Gees86p+sDx5rjPzqhkRkU8Afwns/SC22HnOJgJjaXCVgcursyPmNJgVVNpFblOQQnf07SwVeT4N\\n\",\n       \"LbmBSBzMw/rCds07NN6oKKVGgJHZjxfC3avh7k0w6IQdx2HlcbjzCBxRMDYB28Kw7Qp3P9IPso6z\\n\",\n       \"11cHoCgA+3NfKCIuE7TawBWHIaA3X8lxbTC+19PGP0REjkB8BO72ai3W6maIPgTvFECqD4q2w2+L\\n\",\n       \"yD8opUbPt10RKbFCi2jhc08CTvdCdVPO8k8ULKM623KpRol58ixI5jUYUUo9CzwrIt8WkfuUUq/k\\n\",\n       \"Pi8i38r5c5tSatv1HN+NRiXcfQfsK9VmNc5aiNuhPQhfb4K3zZDuBOwiTydh2y5oU1BRB1MBcByE\\n\",\n       
\"0hHdWjpfnXEZiEhBE9x+O/TNVNssh0M+6P0V2Mfgbz+oaDEXpdRwsciBbbpaZ9QB6W4oPQChKOzL\\n\",\n       \"GU9jI3y5FewuyA6AuQtOiMjPrrYFfZ5zMapbtovIbqB9NTjuga6Z5+vBvwbsE7AZeGaubThE1rfC\\n\",\n       \"J1sBC6guMJ+GjnfAkYLyOpiaAtcB8I3Br280g8M8ZyOCFfhddGXeL5S68qqrG415C0ZExJZzYQzC\\n\",\n       \"uR0dlVLfuq6DuoEREVMDVFYbegIbhEag3ArNLeCvgOkG8LeD7Rn4VCf8t274rh+2FMDiDPhH4UdZ\\n\",\n       \"ODbfn+UGxOsDsc/SA1RDwAX1aCvlq4IffrkT+k7DZgFXCHYH4R2lVAhARKy18OhHIFIBowCrgO2w\\n\",\n       \"5B3d1TffbO46oZRKiEi2Kkc7MkMFBAv0b+McRKSkDR7+BAy7dDaT5SAvwPJ98IxfN1tsSsPkGDyX\\n\",\n       \"Uer4tf4sea4dIpjRLRA8wHPA4yL8R6X4/vyO7Poyn5mRB0XkG+hyyR7ghXkcyw2PUipbIzI5Bq5y\\n\",\n       \"iGTANAorCsAzCqrSEL25IbkYOA1NWaX2AU/O78hvCkLTQBrEkiMunARnUmuh0qBvMg5YYgFHWP/m\\n\",\n       \"L3vpxOhhsdv4Nxe1dVBQMcv7YimMHYdN5IOR601wYlZJ+BR4T0NrWJfl1iilBnOft0LLYlAzgQjo\\n\",\n       \"7EgbhHuhekKpn12nsee5PvwxUATcrxRJEX4DvC3Czg9TP5/5FLA+Czw7X/u/GRmD13bDo5thfBzW\\n\",\n       \"WqA1AF4HTJ6C2zOwcxFMWQCL7gqc5yqglAoXiezZCbdshH47ZMJg2wWV4/ALpZSyi6xqhc+0gbJD\\n\",\n       \"5gzcfRqOiMiTV+I7Mgdm8xwPWiBryguT54O+MzB4GKqXwXAftAZh5QTY1+lA5V8VirwZhldmzMsE\\n\",\n       \"LLlB7QwWyEr+vL2pEKEa3SByrVI6g6YUJ0X4z8BfAZ+Yz/FdT/KmZzcBIlIANAHmw7BtEL5Qr1PA\\n\",\n       \"QTekt8KpMFj3wvpKeK0bTCk9M89zlQjAi3sh0wObXGAKQGIMnknCARFxN8EjH4fRmXLPpcAbsGqn\\n\",\n       \"dm49eKX7F5EqoApQfaBCYMstLT0NZX5460r3k+fCiIgLfS5agH6l1ISIPPEmfPwAbHTBukIYb4E3\\n\",\n       \"22AsBabnYOshOInRpiEJPV1gXj4r09al/UuOzM8ny3ON+HfAD5RidmXVd4BvirBWqbPF6Tcr+WDk\\n\",\n       \"Bsci0twIv9UMNjuoM8AIuGrhnSKITsKaM1BbCtMW8D4Ly/rh2Qup+PNcGiJiARrRZZWjfqWeF5E3\\n\",\n       \"gAK0Dmrm+UVNYMkNDkxAG0x1wjquIBgREXMRfHIlrKsDFQFOgvdpKFoPEy5IDIL7CEyEtSdKnmuE\\n\",\n       \"WaRtETzaDFYrqF7ArX8Pr08o9eMCkeAdkFgF/REonIKiYvC3QqIH2jGCEaXUkFfknRdhSyuELdrT\\n\",\n       \"xnMcjmXh1PX4LEbLgUb0b3ncqCLLcxURoRT4MrBk9nNKERfh/wP+APi96z22+SAfjNzAiIizAb70\\n\",\n       \"MQiUQjSuZ8PrTLApCK1pcNnAn4boaagZ17qBnwLvzPfYb3REpLwavtwIxYXall2KRPYAzymlYk6R\\n\",\n       
\"zVXwYDWYRqA8DI0BmPJCeGYbJm1Gd0UuyFZYswI23AU9MxtqhcJnwfcyHHDom15nCo5ejYqePHMj\\n\",\n       \"IoVN8OjHYaoY4gCrwfwS3LtfN9HrMkNcge8MLHWBpEBGIJSEXpMu332PILywF072wCoT2CbhSFYb\\n\",\n       \"RV7NJb3zfZaSSv3bLvWCGgSTT+TgNPzyeuz/Q8RXgWeV4nwTwx8AHSJ8QylC121U80Q+GLlBMHwj\\n\",\n       \"HiiCdQKRALwGxFvAVmp4hfTBSh+UtsD4KNRuhjNjUCAwVQKTh7QV+cF8Y60rQ0RMlfDF+8HaaFQv\\n\",\n       \"ZUC2wS3vwpCI+NfAJ++D/jhYj0DxMNTvgE+3w7ONhnHVaSiZPM/SiYg4LdDmgJIojGS1P8w5VRml\\n\",\n       \"sGk5jOdGNNUQWgTFO+GYX6m8K+t1wATNi8EyE4gA2CHTDuE+WAt0hSE5BkvXwZmZyqtpcL0K6/3w\\n\",\n       \"vdztGedoFzklwXMhIhUOWGwCcxS6lFIDV/I5RETK4Qv36PYSU93QWALeLDSHdSCdLzS4CohgAr4O\\n\",\n       \"PHa+1yjFsAhvAY8AP7xeY5sv8sHIDYCIVJfDf1kMLQ0QTWob9zs64IDFaNQ1CuVx/fxoSDfRyh4B\\n\",\n       \"bxkkzkB9DLrKoLMfFpFfd74gRm+eGvTyy5hSKjDrJTW1UNqYY8tuBrUCRrvg1hRMrYJAGOyHYEsj\\n\",\n       \"WGtgeBSaj8KnemFXGsLH4VQKDs2x//Ja+NoSKCyG1BhYT8K4iPwgdywiYq6EqhikkhC35ZQQ27XW\\n\",\n       \"4IYRO4qIFag2/hxSSqUu9PqFhoDNAuYxKBPIFoPfors8p8yGO2oZNDjg1H6oqIWU6IyaLQlTzCFY\\n\",\n       \"vRgukduWwkcXQ9oEdMMDXpG3g/DCFUw4KqqgygyRDrirGbIeSEyAOwrfEJGjSqn+D7jtPO9zJxDj\\n\",\n       \"4kunT5EPRm5+RKQGfBvBXq6r5kJ7lFJT8z2uXERESuB3WqH2LuiZ6UtRDkVxWHoKAoXQaIVyO9T2\\n\",\n       \"QNUZKPNAKKCDFo8fAvfAtmEo2m/4uYiIDV1dEc47c76PiBSXwxdroaoQskMgHpHtoZxqB8DunOPm\\n\",\n       \"4YKkCbxWUIUQOwZL28Bcr8t7/WZdSTNxAEoH4Z+AjrnS3hXw6bvB1GQEO8uAEqjaBvej/QgQkbpq\\n\",\n       \"+IITGgegaQpSaWzTdkx+F/HhPj3zHpy97YVKHfxxNTgFZBDiZpGfZZQ6Pd/julQUFHTBraUQAkwD\\n\",\n       \"4Ehjnx4iaxsn9biImGugaDkcnYS+PigRoAyGs+DsvEw7dxGpWAof+bh2/E0DLNNeJFv2aVH0B82I\\n\",\n       \"2R2Q7YZVayHuMzI9XkhOQEUcPoYWV+a5Mh4FHlfqokHoc8Dfi+BS6or6Wi14PrTBiIh5CbR/GVbF\\n\",\n       \"oTgCI7fAwY0i8r8WmFir3A0NDRCbCUSMKW/aB3VnoGoYUsthPALpAJTEgTZILdEujfa3wJIFOaP9\\n\",\n       \"DsYKRR6qhk+asVVFIWIR+WkGnvmwL9+IiFTAo3dB8RIjEEiB6XW4a4+2255p5DjcD6YDUJeEAieE\\n\",\n       \"W2C0G0rDcDALqQHYGIXqGgiAPmYpSLbAnlHwDsL0XIGIiPiaoaUHh/UQ9qUeEv6lxHuXwsh+WCEi\\n\",\n       
\"vwKsDfCVFeAcB8cA1JdhLjBhivnxnRoi0TpA9BVtxHtj8AmIlRneKBNQ8AJ8WUT+7npNDnRwbl0J\\n\",\n       \"JatApWF8L2SPK6WyRqasvgAWpSGZ1NqNyZz3Vi6Beyrg+Bg0FCOlZgo8fUhDJ5VHs5Suhw5riGDJ\\n\",\n       \"XthaBOE0ROvgcDOMH4IGLrMFg0N7kWScOdkwC6hWiHRrF88PGoyMDoK1SRv5vdd4LwCFpdDjgWoR\\n\",\n       \"cSulwhfaSJ7zY7itPoI2IbwgSjElwi7gAeDpaz22+eRDGYxopXjtI3DvGJQYF+yKMBSUQfg+4PF5\\n\",\n       \"HeDZiAmyacM4KQsMQH0CKuzgqYVQBEZ3QrEZUmWQXg9j3TrFXxSCTANMvg3rR+DHHtjQhPV3Syk3\\n\",\n       \"OfCEYiRdXUz9215ilcB35/ejzjvVlVBbBLEhqCqCqQJIrIbxXtjC+8GINQLuCVhfB9EoqBdBxuFd\\n\",\n       \"P2wH2AdrisEZh4gZGAOvQG8RhFLa4OgsCwtpAAAgAElEQVScZQhjqWLNGBW3CC3jFuypXqYqexhY\\n\",\n       \"disT20T/BkwmWFwFJSFYvBpMEQqCk1gDI8R93URLA3z0p9Drgh2L4cYwTSrL6ZFUCtGl4BvSSaHt\\n\",\n       \"13rf+nsvfQxWLoImv171OPlbcGyXiDzrhYebYH2TXg6TU/BRu8hTCaPLeCGsXAaZZbB/D6Q7cFeZ\\n\",\n       \"KZiKIukwLcdhY7eT3s+tIjhaDv4aEAWmg3DHCegcgNdzq9tEpAzwoSuyRt7zH9HZzEbAagfPXMpn\\n\",\n       \"E6jZYtjLQSkVt4m86IYtAYjZIRmGgjFtnHgqrRsyziliNX6/ZUAy30H4gtwNdM1Rzns+XgDuIx+M\\n\",\n       \"3JSUQZlT6wdzaR4HV6uIWBaQanwsCmd6dXbEEoSiIDQXgCUEJoFSE1QngEJIecEch3hA31B7TXom\\n\",\n       \"lx6FvQF4swH+powKCigdB3Bjj7dgjU8x9rCIPKOUGjZmgtysmRIRMc21NCXQatGaj7AJXWfpgiNl\\n\",\n       \"0G/RNwcASuCjW2GyBF4IQpMTCqohNqj1JVPGPv7RD5btcHc9jLrgYBP09oBvWM+Cz1LQW0WW18In\\n\",\n       \"43huS1LvSxL3KAayVWRRWC27sX5pktQ/A1hgXQQ2LwIxIyk7jmQ91ngF1kQIJQGyIlBkx/55s8gv\\n\",\n       \"jSqMxLX8Tq82hZBw6qDtOmBqh+VNcEfv+4/V+SG6EQ7522HD/TnVSm1g+xU8IiLdSqmAA4rcEDcB\\n\",\n       \"bmzmaqp6rBSEJwh7u0lZU4wUlmEqbsB8upzMiVFYnITqAkgdhXE//Ap0sFEMjyyGDTbwRcA+CYdF\\n\",\n       \"5B8AXy18qRGcdqATCjugZBkMzXiRZIHjUBOH6QqRRyd0KfDxy9XfJJV6s0jk6UNwazlghd467X1S\\n\",\n       \"5IfDc/XBsYusrIOPl4M9DlIh0jcGTyqlzukynIfPAZfjovsqusT3pmY+e9NsAv4b+hzao5T6xnXc\\n\",\n       \"fQoScu7DCQtk0saYFgpqFH5shupXYJkdFleAcwjiSYjWQI0bJAKEQcIgWaiwwLtr4JcAb0N9WFff\\n\",\n       \"eBxYSmy4o7k7cOGIW7FXgCwTKb0d6leCSot4d0HwzZuhJFRnwxwboeQOqHeLVJ6B0ZeVUn3G8+46\\n\",\n       
\"uMcGiSotPlQVYOqB1cfBGoVdxuuczdC+BAbMerY4BvoH0wP1IlKolAoZZlf/tRN6U9Bao5ud1Z/W\\n\",\n       \"PYB+lhvoiUjdMnh0E/i34ckK3pMpOm5ZhCXtwTGSBuyETSZSjRb4w0XQmAJfKSSTKFeaZMaCNR4n\\n\",\n       \"Y7Fhifl4Z2sdGWspqQIFnzsB0yLyzzfSjWEQCvy6JPY6ULoMGoNnP2ZWsCjt5fDWNlQgNwthtFQw\\n\",\n       \"n9bmZgem4fQArNLuxtmkImsGmCRrTlE+DcECD1nMqJQHwh6dYTsQAesxcE9pi38KYWsj3OWG2np0\\n\",\n       \"NU4nfOwoNLpgeCukenD4eimst6BkimnP02SSK/WkI3sMlgbA+TEYdoKnB1Ycg9Mi8sRMJZaIyKzf\\n\",\n       \"nsw16QjA945BKg4VpUAnlHbD0BS8OPu1IlK/DD5/L4wWG92DT0DFNnhMRP7eaGGQBxDBBjwMfOsy\\n\",\n       \"3nYU8IjQqNT1OieuP/OZGekF7lJKJUXkCRFZrpQ6ej12rJSaFKnogxMVsCRnrfZQNQTevh6CTiMV\\n\",\n       \"WwpE0E6NatbztWVwbx00ZyA4Dr8ch7fK4U8rYLAI/HZY2wRZpduIq0WQOgT2EcAHVQNQNgr2IzCW\\n\",\n       \"hiCUfqwf1RYnli0jMeLBYc5icidJqSSSguKPwT2T0DqgiwAO3w5764wb2YK+oGi5BzXo5Y8upVT0\\n\",\n       \"7Fd4HoBVd8DaIfBOQ18J7Px9EflHpVS/CVraATMc3Q9LF0OoAFJpsO6D2gjsqBX5k2pwpWFJGKZn\\n\",\n       \"PEOSYBnEVB/Fugo8vy/ieBsS+41GaT+chKZ9WO4C7zKtGU7cJyKvzWiTSmDTKoiWQFTIoAjYGjGP\\n\",\n       \"CVZnAElbMI+6MNEITjesegi2PwlLT0N9BcQTRMqSmJN+rLEQKddSnH4X8WQb2WM+7UFT8Rp8BPjx\\n\",\n       \"dTwkl8VuqGuHUQGOQ8VxGMpqV9Krjnar9d0KznpIjELSrSciMwx4YcwLQ2WCGjPNMTkxgRLDIyYN\\n\",\n       \"HUfhNifUlZMeH2GyfZS0tR/fqM629vqChKSU7FnZsDEoTBgiYxGxlMP9KViyGAJ1EBSgFoJx2BSA\\n\",\n       \"jgMUWeO0F7goCQAkGHWc4rh3kPBRgcJCqHkU3ikwllEaYcoELdthuVUkVQZ31UF5lcjgKBxVlLdA\\n\",\n       \"XZNITQim3oL4npnzXCkVEJF/GIUmAa/S5ei9c10HSmHzaojmljUvgbFeqB/Ry0oXLE/+kHEvcFwp\\n\",\n       \"LrkiSSmyIrwG3MOsEvCbifnsTZN7Yqa4ip1NL42xp2DbY9BXrzPwwwK9nRDcdi33ql07iz6h7Qaq\\n\",\n       \"MuA3Qf+wMXsJiIhN4PZGXUETqoFeB6SPwD37oKMI9ruhbRxaK8FtBREgDKYMmH16uSA5DKajUBOB\\n\",\n       \"p8PQC4sfg1tiCfz7o0zfPkhsZZhIuAT39AD+whCmrJZLLO3QI7WkYVMfTDXAeCML9IIiIuKBh5bD\\n\",\n       \"LQ2gEiCnIWkReSKtVJfxGi8svgXu7AGLEfQ1TkPGBFNbgcfN4CwAtRJOHoPwflicBmcM/jd7bx4j\\n\",\n       \"6Xnf+X2e96z77q7qnr7nvmdIDY8hRVEUdVqyLUuyHcuGV+sjiyBBgt0AySIB1kj+WCDYAEF2Y2eN\\n\",\n       
\"jZ21pWhtadeyJQvSihLPITkccjgnZ3rOvqu7676r3uvJH08NORyOxFukaH6BBmeGVU9XvW/V+/6e\\n\",\n       \"3+97VNvg3Q93HlQ7zsr3Yc95+PQB+H4InPPY924S3dFBlxEm9nUR22Fh1/Cc+kJk7oBDW+HQNYg5\\n\",\n       \"cG1mWAj9CVDOwo41GK9BJkq9XiG6NYTwQ4iOg7EJAxmlt96G0SRU1yHeI9U7QcpOYsSgboZohXT0\\n\",\n       \"psRNxmi4BVpnM0Py7E7YfB52CyFC79eo+Sfg8dNwRICowxNtePp2viq3QgihAXNxmHWV8d/Fm8ml\\n\",\n       \"t3n8FGz/fbjThUIdKtNwbAJOWzBRDXFs3xjr0yP4eo9SpAich5EUXIpAzwR/APo1QL7imNoXQvzZ\\n\",\n       \"Y3BvCu4cUL9YxUr4ZNbhkT02l0bbdIonYN9dcCoGgxJEnod4CZ5U14Tkl1vEPh4hHn+BXuYS5d49\\n\",\n       \"dK7HwM0rae02j+lKgtGXCb0J8rUSg0iDc1cgMI7Cjsgt19GtUDsFX5wD4wiURmHpNEw+SuGrTe45\\n\",\n       \"DgfmoRGCk78KpzPA9288d1h4XH69c2DBSJrXKj3SanwUe73n/wPDb/DmRjQ38CjwIB8WI+8ehBAH\\n\",\n       \"gBEp5c+VaCelrAoh/g2sz6K+MFVg6d3nSYTvhUMfUfPpG83fl/Lw6JeFEN8qwNeS8IkRmO5D+Kq6\\n\",\n       \"+C2OwslRuK8Fcx1FjMxYgA96B2QaBhHwO+AkwQ3g2avwl30pnxFi5LfgrgHMVXy8YxWe2WFjzjWp\\n\",\n       \"J1fZ9NpE1iDvQusu+P4IhKqwZR52bsJ4AFaed6EYGRLeokDnrfpKaLBnD9z3MCzcmJ3vgsh34beE\\n\",\n       \"EP9qOGLKQl6+UojcwEQdQnMALqwsgnYA2A+r8+AuwSFgpgCWC64Lm3FwPgJPHYfPHYePpKG6hH5X\\n\",\n       \"B1dsJ3k5RCdVxA1dJ550qT4thGjCvkPw0euvnO/tJTUNqtwbpUEa7khDLgHNDj2rxKrcwIrkiCFo\\n\",\n       \"hSM0NrbinHkUPhuH6pOk7wlxuGsROtGjO9Ejl12kbHpssAW3nKdzbvqm7CGhdvHDP74/0ZLyEdRs\\n\",\n       \"/A1DCGFk4De2wd4Z6PfAmIfP2EJ8ayDlmds/q/BZuL8N03X199QA0l34//bANw7P4O2Zwq5rNNxx\\n\",\n       \"uo/Ow5ZrcO8x2J+CngYLa7B5FU7D6C8LMTkNfhmMx9p4Px6+D4QQKYONz8zCJ/bDlRD0zsChb8Ev\\n\",\n       \"heBUG8qb8HUp5YIQsQfgjgM+VtGmE0lidltkI89xcfYo3es+tLto2QTJV31Hunhhl7FVWJ6C2srt\\n\",\n       \"Wpd9MCOw60F4/EYcwRrRsSzbGx7eRBcxD+m+KtQ37hVCHLuNr87PRAeuF+Gu3E0kZICi+sB/SGQd\\n\",\n       \"Qghs4AvAP38LT38a+B/e2Vf0/sJ7WowIITLAvwa+8lP+/x/d9NfHpJSPvZO/f0hSfd3K/+1CCBED\\n\",\n       \"e69S6yQ+DfuuvdoFfM8GnN0To/RHeThQh+0FcHZDxQOzDCMLcNiCfFh1PlIxcNeAURBN0GZhMABx\\n\",\n       \"CmIxOLMOxcHL7y08BxPDHZXuB6Q3exSK0J8A9wXIl6H5MMxYMNkHLwwXjoL/HDQ0cG6Zp7/t46FB\\n\",\n       
\"9H6Y/hjEDGi5QkQffStr5eDIbpW7I2/6t+4c5BaVwdtLQEd1oG5FNQLejYvlygKcehTuyCmnzHtm\\n\",\n       \"QZTV+CQ8gHufgX0H4O8nYe0APPI9SDsYuRyh7i7G5kOYLkAC2emxObmM9XnQJIR3w2YfCjd1A0fb\\n\",\n       \"YP+2weiYh917iUbuME33ftiAHov0fGiExlTrO/40fOwyXJ6A2ICxcJJ4FcAjtFEknPLYcRWuF9ts\\n\",\n       \"Rq5SvhtqzBJcA7gMo03l4PoLz/25GQYc2A97P6Y4EwDsAOs/wa8JIa5KKV+1WxdC2LB1EqaXXr1S\\n\",\n       \"pgdbNnKcdvdAKwOtrPKD2Z6HsVnlZHytohjv+UU4Idl9QKUuTKxBMa3xzP+cYL42IcRSA04AJydh\\n\",\n       \"16/A6diwCNgBjzwDM0/A8R5895VxcOp+uGPNoTUocX5nlEHBIGJskrZP0k/VCU7VkfMu7UMGCVcg\\n\",\n       \"gi5udBOjI3Er0K0C166AvwfsKkQXFUk2U4RIZMjjOIGYaWLH17C3pUisW3ixLo4JtqsK9YKESzmG\\n\",\n       \"HbU3igYcPwkfCcHIVij1wDwNY8twAVh7s+f1A4zPAmekfEvH5AKQFYJRKd+cDPwXBe8lgdUAvg78\\n\",\n       \"91LK2x5cKeUf/Vxf1LsANZ+e+sewJ6R2YWcOwkYBQsd4mUjajoRwD+2FyB2w9hyYI2C2wU9C3YSo\\n\",\n       \"gFkHZjJQ6kJfB7OquiKNOISugrUJYhNWGvDkksqRGN5ovQrUIhBuga9DLwf1GMoAbQ6uHISH1mA9\\n\",\n       \"DVKHXAsO+vD4EVh/kne8YIs9AAc/DUdXIOpC24Jjn4dn3vRKOtj2bUZ8lvqPCWokKMTIVTg1DQfW\\n\",\n       \"VCHYNeBkDja+MXyMFEL8p2fgSgr+8BB4A2UotcsCYxvUXUhsqFbp0z50NHjOQB6eJDp5oxABCAiE\\n\",\n       \"j1eA8d+AqARzHDb2w/p52PljCA3g2j2QmYpQcCR6qI1TP8aVfI2ipimZ8HkD4hVVrfkBNBPQugzV\\n\",\n       \"EPoejYH0gCLelMeOBXANg9Yhh8LqRfqhDt2jDr1ODazz0Cl9AG28c3DnDsWLeBlxcOZAv664Cudv\\n\",\n       \"eYqnGlw9A8I3fWYCoK+Z4GyBxSz02mB1YfYuqD2F2HqB9MclKSPAkxrtf+az8z+oUZ9jhNg8mGc8\\n\",\n       \"blCOfpbNi9fhgafg3gkwYzeFIwLshuI52L4yLERUYT4VhmgZYsUqy4sB3b1JhHSIulfJtIq4BZPG\\n\",\n       \"cy2uTrcIhUyynT65iwFGFR4zYHBeStkyhfjWX8F/OQ6HtoMjwY1B6RJs+xuyBZjt60S8DmvZDu2M\\n\",\n       \"S2hFjWNvoK5xm3HL60Fx8MSftuGTEdgZwKAGj7XhiQ+qIu8t4qvAN97KE4e8kePAvcDfvqOv6n2C\\n\",\n       \"97Iz8hWU6cv/NlSS/nMp5QcqVVRJZPNfhIc8mBkSltavgDkJa/tgx3MAOte3Z6hr41ALQIwoL4qY\\n\",\n       \"gNhAmRiNCYhuQ0W8rkJyHvSjsFAGYxW6NpypKzOt/+grt9Cb5uYbT8DJ34F0D4o7IR2AZ0GyAvsr\\n\",\n       \"cHIaFgKwXoI1qYolJNQDWP3rG7LQoc9BFuhJKetv8ZhYMP1RuG8ZIsMLYcyBo6tvpRipwdnr8LnC\\n\",\n       
\"TQF0LmgL6o83kcTK34Infw3md0A8gE0fNr+nRAgKwxn5i1uEOLMdLlXhrjnY3IRIG6wUeA70KrD/\\n\",\n       \"knLSfC6Nnwqo3ytJ6gLDB9igOdImkVCeFUeuwAs6ZEPg7IKFJiTmYW06QtxI4SVjyO4AoZfYYqzS\\n\",\n       \"2Jimu5CDiV+Hp25kmADMw+jfw1qD3qkGKV85iXcsHT8ZpphKYq5mSNbqzIlz9DKX6XUd+JEDZ2/u\\n\",\n       \"EgghQqjz2P1FUtjcCqE8V15zs9PUOOo1IynF30keh1P3w703dUfOjUP5QgPWrsFDWVhugh0FrkDm\\n\",\n       \"IvntBeaKceyaS2CcwdnaZ/kTkl1fh3I+h5NIE620SKT7bJpHYHkFDrZV1fuyl4Sn3GXTDgQ3VCzK\\n\",\n       \"VC1/Ba6PK57oiFMn/1SD9azGelxjWylFEB3j7OFJGpfO8sLBFcaKAekebAxg7etSyuZw/fMBrE2D\\n\",\n       \"Z4OTgMocNM8S3Qoz0yNsOa6B9PGvr9E91Ce6CpUIODqUY7C28FYNH4fP+8th2m/wYRHyaghBEuUV\\n\",\n       \"8odvY5mngaN8WIy8s5BSfhP45nv1+39OSEN27JVCBGDrBaXisfZCaBECx+DUjl3UTurKeW3OBc2A\\n\",\n       \"ZhkyK4rYGJtV8fB+B2JT0O5C9BRk8+Cm4GQXNjfgKV9p+2/pFAQvwUvfhdJnIHkQ0g3oDZSBaCWu\\n\",\n       \"du+rcdj+NHjblH1JX1emr/EvC5HNAnlFcE2tgFsXInsBqt+5tRX+BhCFuPlKIXID8dclK94OfXjx\\n\",\n       \"FByWMDUN9T6YFyCxAo/c7N45dIz8i+FoMAKUfxqZswOL63CPpRQKNR2urcP0FYjrYBUhtAA/lFIu\\n\",\n       \"CSEeb9K6N8biqEZSSBAbdKe6FFyYcmD9HkhH4QogHWjsgY2+Rqw/TqFn09sSQnPCgI4t1zF39qGW\\n\",\n       \"h+rNhQjADFRs2BdjyQTtMIx12+jjFr2wTbGRYnTToWX3qGV1kqNtUvug5UNxBeio4jhyL8x8EkZ0\\n\",\n       \"aGlC5C5A5a2cx58Lhh3UrRbkHKXouHKD2FqGk5fh18Z4JdG0A+Y11er4KYZSzUfhRA6Ku6EQQEXA\\n\",\n       \"8jJU/q4LwUk46MFUAVplCF/F2ppirJ3E7gDoCBkh1PGw8i7zkyZWMoLu+EhtQDfcgHABmlPQfhYO\\n\",\n       \"P4sdG8Vd6xL0z5LZVyWRL2FdhuZ/LYT4tpSyqDwKn/gSzOQgEwF9VbI+8CksT2DnYlD0CRlH6M3v\\n\",\n       \"YnD92yzMLrLw5yh1y81Kn2gSwvvhQgCchtlV+JgknuthRhdp7AwR2ugRo0/lJJzeA8tfAi1Ql4/K\\n\",\n       \"373d8/V+V929h/gS8BMpeTvF/9PAv3iHXs/7Du85gfX9hGFVP4baVRXfqPGZEPpuyD8I+gg4i7D5\\n\",\n       \"mJTythdDAYkwhh7wYs7m5G9J3IGgu1qGA1LdpbMBxK5D1IJgHYyMkpjKKWi2IdJTVs2DCyCuQV3C\\n\",\n       \"s1V4xlMx8a95zcNdyjEhxCWYjsM9y1BoQSsOnRhYJRUwOtgPOyvKtuTFg7BFQPP3YLICXgG2LcLA\\n\",\n       \"BuMUVHfAsV8H/vym4zeVgSM2pBpwtQsvSClvjb5uq8ZPx1Qjmhto2G/kWN/mvfWEEH/2OBxIwz5f\\n\",\n       
\"jSReQN39b7wuzYQ7R+C+CTUbu1iFx7lJijhUZczqkPOh8SKIXWA6oFvKOK7kwuJWOLkGcU/9DoDG\\n\",\n       \"Eiy2aY1HaEkHYh2ilkEQ1WjfFRD2BTE/YE7zWZOwfB6K3wwz879GCG+6BFaLwUgI4fn0LUlfW4IX\\n\",\n       \"Xbj726Qe7GNE0/RK++lcakEoDnt+BeeRCpc2rrCxdRG2asTTWSa6Pu1tG3Rikik/RnfTJ9dxiZrw\\n\",\n       \"yNeEEP8HaFOw55fhoSVVDAbA6R1w7CuouPL3FYQQ8Tz87jYYGwW/Atpl5Zny51LKqgunz8A+B7ZP\\n\",\n       \"QWegClFzRY0obxu5PuzyfV2NT89kUIXMy9J6IcSf1uBQGnZ34ZiBuXUKq+fhZAOk3gVCGK02/his\\n\",\n       \"P+gzutiin65T2p2l0mnBXf8Z9DbhTJnpay/QGx2hu62MnveYK/exvSj2iIv7UQcxIUToe3DoC1C4\\n\",\n       \"CnUPLt1pUs9GCBYE8YxOoeQgwzkGawAh8Oaguwj2bSwInAEEfdAvwsQADt0NHQfR8zD7VYzmJtH1\\n\",\n       \"gKmzcPw+2LUGO36kRGOuD4/8mhBiA8XzyAMJIKErpd4GPxeC/wcWXwX++G2ucQI4LAS6lHzgir4P\\n\",\n       \"i5EhhBDTsOU3YDwO/QisGUKI7wGP/yyJoRCRu+HQr8KdFciVYG0MnvtDIYz/B7iulJgLaTVfLmcy\\n\",\n       \"rNwZohzJ0+rch3+lDeETcCAEiRFY74FfgpCBppURpoYvNZAZ6Lig2eA2wKiB1YH1OvxfAyJXoC8h\\n\",\n       \"sLiNzfhNMKGfhxd3g9GH1FU4eFkZuz4xB5YJXgxKIxDtwi4PKmnohBWv1s9A7iJc2wt7fwDXtgoh\\n\",\n       \"8lLKDUuIj8zB12ahPwkbZZg+BXcLIf705lGAlNIVIvY4HPusGs3EHGhZ8Mz4z3jdPxNDYubx4c9r\\n\",\n       \"kIDPHYD7DsJGDKoLsOdZJXX94+G8OzwCX52FmXGQdRAXQD47NLOKK4LNwgNw4QTMrSuliiGEmN0O\\n\",\n       \"/+ggBCE4fh0ebEEkTe+JeQa/IWnTx7U8YlLZwQQCOvuBswab5TZr6SiF1T60S6xv6bAc9vA9yPxX\\n\",\n       \"K4QLcRLlAuPzDbzsY1x+QLLGTrg2Ac0JaI7T6J+DuQadqItjdLCikE+GaK/YmKUqVhcmG7AnAet7\\n\",\n       \"IXMn3FFSoqBGFKJ92L8GV7cJIUZ/GnfrvUIGPnMUcgdv6nKMw+iPlWnUnw09iv7yKdiRge0udFuq\\n\",\n       \"IC++3trDx7zmcUN/mqeBp4UQWoz+wzWauyDpSAh8sELohmC9CxYB7q4GK7PT1IsfZbA4Cp3n0e6+\\n\",\n       \"jrAk235UYvuxEk/cLYikUojoNNHzJvpggBZdJXdfm95uGA2BaYBlbaFljBHrR+mYHo5V4fL2Js7a\\n\",\n       \"BP2rf0X64QFWtEPPNmkNJoToByrA7lkXTkopnZQQz52Eo23YdRQaMXAtmnRw2yNk1rt08k1WViE8\\n\",\n       \"CvmrkC+98u4PDmD146rmTe7MUDuwha6Ro3ulCcXrcEEI8VdvVfn2DxVCsAU4DPz921lHShpCUAR2\\n\",\n       \"ooj5Hyh8WIwAyqpi9nfhkx24llZTkbkYLH8a1teFGPkJNL8Hzou3uBdaMPUp+PjqK2OG2SqYPtQ+\\n\",\n       
\"I2Xxj4UQfwM/+RrsnTTob7MpR2wup+/Hv5SB3hWY2A9GFOoliFkIM4lmbxBuZom1LRq2Ti8pQa+A\\n\",\n       \"kQI9C8EyWB0s3WHHJ2DnFDQ1OBMVIleESEhZjWw8JofJp8pkbe73Ye8GxFOqAbC6Cx4tQH8R3Cdh\\n\",\n       \"e1vFZnAE9riwtEeFia6kQK+DIcAPgRbXuPARMKwAtgshUpPwL3eAZyrXyG1b4MW7IdRRUdnfefUR\\n\",\n       \"7zwFpyWsfEw1gNoDKP0t8D++C+c2uxvu+Tgs3Aga3A3rPozV1fz1uwl48AhM3zUMxwOYhOwPYWMB\\n\",\n       \"flKAOwLQ/ho+HYHgsPKJ+GcVKHwU5qeg0YCYBz0LgjMEB8OURZUlQ7Bfgi3ACQSelHgZIKbT/U6Y\\n\",\n       \"M5+scnWsSXwiIGFr5BKS3X4cOxQiVqvSHFlgI5Rm9GqbWcunHnyW7mWALoQWEJ/Mo6U6+Nosm90y\\n\",\n       \"Vj1Goefi6kWsIGB0yNpPDSCSBa0AL00qVVcKuJyA56PgtyG+MhwbvC+kmEIIexb27b1FjbEDNl+E\\n\",\n       \"OSFEUkrZGHYCX+LduThPZvEvQzHnE3YihDUNLdVgMxrQWYDpZ2D5oRC5bo9R6xFKB11aJoyEA0ba\\n\",\n       \"UPoELG+CFDaWyEBgoQ8EEMbsRNAtH+7LUSyGQFZZGxsQ0dYZ2DFa9ihyaQv9sItnXGfb7ggzTQuz\\n\",\n       \"t0l5LsPG5/az9PgcweZ5+OIZmBVC/DXw4+OQG4WP1aFRBJGlc7XKpUKL7nZB2IeVOch4oFfhwkMQ\\n\",\n       \"RMAogb4Ckc/DA+fSnJk4zEgliuEM2Bg7wua1FOw5BncBx96FY/1Bxm8CfyMltx0Lv0mcBO7gw2Lk\\n\",\n       \"gwpzF+wyoZyC0FbY04HOhJK6NQRcuQuCJMxneLUXQhqy5mv5DhMNiEwKISwp5ZoQ4v+EjT022u9M\\n\",\n       \"0atFlSqm2wBLU3cJqUO3BLE6mh3HGFSBOJbTwwpy4J+mNxKHYEXxRvQmrEXYseozkWiwtaK6LzsO\\n\",\n       \"QHQnHPwelDLw/O8JYX5DSvccpO6GOwTse0k1LjZ3AiY0UjD/x5BIQPteGPWgHoGyBu001CyIOrBa\\n\",\n       \"AOlbbIbjLGctnG1d2miQtkC7B9g79BTognEcjszC41E4wC3FyLC9/KQQ4hkUf6MrpfSGROZ3GqPj\\n\",\n       \"6vi+qr08AdU47BBCaBNwzw6oSF5hPW6FyihMnYO/ugKPlOBrR0G/ezj+aUD0GDx8FowKLJdUouud\\n\",\n       \"UdAGoBugpwjJPvP4+ATEhcD2BFEjSuX/DqCyzCCjE8rEmNp0aU/6FAYmu0oD6nmBJ0fIbaxgUYQV\\n\",\n       \"OHJe5/I9ZbqxBLgvEf6ohzGRJlyDrn6RIGnTD+q06RLymxTOvqLW2ghBYwUIQ3YSDq/B6QmYSkEa\\n\",\n       \"WE7Bvik4+U+G3aKfS1Lu60DTbkNQ1VDptLyNMLg3igjM3gmLK9SDIufu3SRUkGiGYNAOGC3CYJfB\\n\",\n       \"4XacZiiMu1RBZLtsG8lgVyJYWg1zNEDm4KoMaGf7RCo3O4AJVgvb8Y00ic4S5XyMaVMjRhfNaBIl\\n\",\n       \"YHGwF+eZGqkH40xcaiLTZVqjWcKtBDtWF2huT1PtmJBMw1dq0M9CPArbO+C+BPUx6ORgcpJqdZ1a\\n\",\n       
\"/HnMTA933WG3AXP7lA2+3YRWGl7YC5EiRLwRnFCMRA3AIN3doLltL/0XLyg1x4fFyJvDbwPvVNzJ\\n\",\n       \"jWLk6+/Qeu8bfFiMABBJQcKF4g441IbeGMQCFc1Qj4OZh34UsgeEMLtSuk8Pn9iDjgBfqCyLG2hb\\n\",\n       \"4A4YSk6H8+vjcSHiSXioC9IH4YOmAw4MHBgNIJRC0yIE4gr98CLVKEivji8CNbf1siAyMGhiGWXq\\n\",\n       \"hQHxEjSSsLoX7q5AJQr1EcjV4KA5qa4AACAASURBVGMDqHxWCHEBxudUMwAgv6l+ADqTcLEBjQq8\\n\",\n       \"+N8qO5OcoYqVugaaC7tbsJyJUkzM0UulkK6GG+/gaUtk7s3i0KdXGeDXbTXj8mbAX4QpyU+3PR7u\\n\",\n       \"at9RD5PboNe6jbKiCWFFiIx8oo//8FnMfhi/OkHvTGEYYmcqUoUBuGOQPXITD6UMkyOQ24B7inDA\\n\",\n       \"hvEtEC6gexaa8RJSs7CFRTboE5ESgxjX7DhtfxfszYKzBJHLiEidlm/hGD6jA0GgaYT9Ho1oGnvT\\n\",\n       \"xIgMaGfg6j4fb/ACJBzIuYxEJK3eIs5oj4ieIVvpsS50inWHuAbpDXA1uJCHCxUILkFEUwqc1aS6\\n\",\n       \"bx3uQScCoR5s3QSRhPo93OTC+V5BStkbFeL6dchvhZeVYauQqKq/v+sqIBf8K7A3BfE8jojiDCws\\n\",\n       \"x8dsn2PzcJ28F9CvdBnYIYLAxbZMZqoDlhMeMhmwbRUSfajHAupBEyMSxw1ZaE6D5VyKupEk3Gyw\\n\",\n       \"mRyQjkSx7RCB0cUXBjm9RXDoCheTPeKlKtEWmGMWYcPEiTg4iRZmsAGJrdANQ0aHfzEL8/vg+Dw0\\n\",\n       \"avDgdTC3wboBJR/ZP4jzowqI56nmlWGt7qluaEdANYCkBbWRgGCqhjupI7s2ouyiR20VZxx9o8dP\\n\",\n       \"+SuhA81/qFwTIdiDSjJ+/B1a8iTwP71Da72v8GExAkBjGdY+BtKCkKeInaEebIwopvlEHxJN6Nhw\\n\",\n       \"/neEsLtSDk5JKZtCZM/Bi3vhjhW1b/MFPD8OlR/eSjBrw/MX4MgItC5BehpaVRUHrvchaYAwcFlS\\n\",\n       \"YXdyioGAaK9PP1ECfRt0JxXhrnkabWQHRvUEa+OwXSijRbsH3SlohVWkhaxArISyX9iE2hxkbzG+\\n\",\n       \"qgmgA/nPw8ETajO1NqWsLTTLxsOgOOpQMsdYM2wCTeBJA83bzkhZoxnzCJstdHuN9vYZ/LOqDY23\\n\",\n       \"CePVt2Z9DChC8a3sfCFEKgH3JeBgoGS2zwzg+Z9BNl5agtIVyG0bdm76oJ+B9Ca2liD1FZteVOJn\\n\",\n       \"Xcz8JcyCRvMHAfSLYKN+tKiKZgegDJkAds3ARkkFiVkzYGxgaGWEkcAaTKKH5mnqfcKay5gfomjE\\n\",\n       \"qDONp02BNqqKsJQEq0bD0YhSpxNy6Y57aD2BZ/p4tqQ7kqKejLFOnGq5CtUfY0QEbsYlNGITaFOM\\n\",\n       \"LLfxwi4ip9PI+Zw9DevjoPWgeQZqjwC+ss4YfRwu3AvhsCJMhpYhGoCnK0fa2A7eB8UIQAm+/wT8\\n\",\n       \"QR0mRtV3JXoGKMKf/6ybm1Bs5D15eECHTB+ulRX3602ZTQmIhqGQUzLZwRZYvYQ7uYRX2E6ouoyT\\n\",\n       
\"DqiHW3RDJej46IaOCAZIOtgOCB36IfCtgMRVwWaiyqWJBLKUobGm4U6apBp9agXIR1xCuoGDj4uO\\n\",\n       \"rweEzU3C2/v0rkM8BrOLkvnDAich0aSJk9wKnVGoXofwndCwIH0VHp6BIAWVC7BtHqaaMKdBMwKG\\n\",\n       \"BYs6yaZPfREenQWhgbUBO07D6S+AbVRJ6GNEOwFYLco7swyevwyjrZ/Cy7rl+Kcg9wXYvgN0AY11\\n\",\n       \"oRLB33AeywcIXwW+KeU7Fr76IorEqr2Da74v8GExonAF5ldgbAaW05ByYSMONRt2FuG0DTM9GOhw\\n\",\n       \"5ypUHhZCnFYXxOp34Rkbru9Q96hNDdafhe6rWplKomh8sox9uEYwN08/NIKseVBehG07QUrMYJ2I\\n\",\n       \"3kCSoEsZ10jTzISRgY7wXaQWgpIOfhqnX8dNRRA9JRl1NNXN0HTYuQ56AJUE9OYg+dvgHoZnD6g6\\n\",\n       \"aOIcjFTgwihc3wRtJ2Q+pwqwRBHsY1AaiWFFJRfCIUoNjfXRBEndQggb6QRoeotuNInpL+NYcdKd\\n\",\n       \"NRwbemMRGJyHxCL8sAdv2jtGCH0H5D8FMwUhJupQfgwGLwCxLfAHdynPlVIPzHPwy6dhmp9S9Cgf\\n\",\n       \"B/Efvwf/NAtHDWUnu1aFH6ax/rtDxOsGqXqFjW1j+KZLkH8e/XNVfDcL5ybh95ehvQ7hNlgxcBow\\n\",\n       \"kVWtr3IDsikYb6PFNXTW0Ls2pi/RAocKHcZRBpgNX6cq4jhSIyErtAshAtIMRJ+eDEgFNlWhk9T7\\n\",\n       \"GHaETr1OdU5jw4hSb47R3NgO6wuI5AsUxgfs6oPW7lGMdtjYkSfwcriDBKHeVvpL51kur8Gf3Kwq\\n\",\n       \"EaJwDdqjMH4cqqb6DAxMcCyIt9W4xt243XF8LyClXBdC/OsSHE7ARBvW2/Di6/FaInB0D3zhMJQz\\n\",\n       \"UFuBuecUYflPpZQrNx4nhBhJwEeiMNmDtbpKD3/5/edgzxwcuwa/lAfRBVFGilmkbBKkFqjYLZKm\\n\",\n       \"wAzAmXOp+x2KToB1TjLTATOAUgL8pYAjT3V44WjA9YSHnV0itC8G4V00NY2oIRGexDYHdHQbKcMg\\n\",\n       \"XAZWFNkx6E25LCTA8TU8u4PvG2xY0wz6YZVvM9cEcx+U+5DpqmiB6x5EpqG3AcEh6IWhXoP8CsQM\\n\",\n       \"qn2f/+L74L+kiNVmAJtReNKHjlZjx/I15ifyeJpOO1hGG9+Ak43XGdGoiIfCP4IH4rBrWW3QllLw\\n\",\n       \"2D8eJvj+1NygDxqEQAN+C/jiO7WmlJSFoI5KjL7yeo//RcKHxQg3FB7i30O7COV/AlNAQkKkA/Nh\\n\",\n       \"SK+q77JXhW2bEJlE7Zj7Q/b9XwghRlEZN43hjyWEeQCiOWgWDezfnCb20CgxdIg06Uav0cp0KGzm\\n\",\n       \"aQSX0PSAMT2BJbNIunRFmyUS9MigyR7SaEO8CNMW+DqBO6Dc9Yn3YbkADQMupeDQKTACNVZfyUAs\\n\",\n       \"A5Evw64+9AewcA8s7oD2BSjPQ9+Avb8CeQ2296C+ReeFO5IUZyJYlsO6YUEuREiX1IREZ0BYmsSd\\n\",\n       \"LgO7Td8vInRBO5XDWujQe2kRwhfgeBv+1Ztl3guh74S9vwv3VmB8CSphOPElOBM16U6PwOEBrHag\\n\",\n       
\"MQqdB2ChBgeEEE9JKVdfu56IFuBLw1Tckz3lEaJVYMc4ppUg3B3gmRAvz9NNg2fXCbZ8Bv7ttuF4\\n\",\n       \"YAWSfwvRR2B6l3LnnO1B/pIiC21pYtgR0AYY0iNnaehOH33gohlwzhaYSDQh6OOh+QHQRE91CEyD\\n\",\n       \"Hg38iE68b2L3upyNCqRWQ4/ptPtj9JlCigzaqE9grhJLG8QZ0E5DTsBeBFF6FPUqhKIEERexawLJ\\n\",\n       \"JsbvCDG6AbUN8FaBJ+An/43J5KykMRdgzgaYm5D9iTK+OpuGze/cegxvOpb68DPev2GE925jaK73\\n\",\n       \"hqMChBDhWfjkJ2ApPByTblcFfLahUk///fBxk1vh9w6CHIVWBe44BXcNZcPXhRDmFohMweIqnOpD\\n\",\n       \"oQ2RLogawlrFFG08IVg2bcL00bBo9UKck222pFyKrvIiXBwzYMTiR3eE2UgamL0Occ1hd6tO4C5z\\n\",\n       \"YTxJx9JZwmVc76MJnZRXpiOgokvseIQxGgwGUDR9hFllVYtQHVj4ogZ6TVkCnO+AMYBsB9Ln0VJJ\\n\",\n       \"AmMTrG3QyoNXU/Pm3gBIUnM3eX4SPrKsRsyuBs9Pgn0GtLKkuG0VrVjGCyzS1Q70AopffwO5Ndtg\\n\",\n       \"+wjsucnaYKoOByOweZg3mT/0C46jqLye0+/wuidR/pcfFiMfRAyLiu8IIX4M1Y9A7MtgPwQHy2qu\\n\",\n       \"Wm7A1POqW9LvcIvNMzCA9EchcRC8CAQF1VXJt2BxPMTlT42TWEuiJyNorRS+CLDGrqLPCKQmSWsJ\\n\",\n       \"bHwQEoFBDIdRaiyKLoHugwiBNMANQ3URcj0IKrijUPwEtDzoroAwIJGBzbByg87XwYrAoWWgBNVN\\n\",\n       \"OBeF6pwKCxOHYa4BKyY4mkExNEO1MIo0DOpmGpcqmtEkjEdPC+HTJWa7aFYPl1VMz+Po+jyt6irn\\n\",\n       \"8ybtXB//zzrwzE8zFfvZyH8KjlZgbLijz/bgaNHk2j/dQ9ferzSy8XOwuwAn9kBxEuRp5YvwmmIk\\n\",\n       \"DHcehpG7b1LK7FLKpKMGfaNBN3oN5iQzmsAKXFaiGov+GJ2mrwiUwQQ09kLjSSU/+KgFGQlTEYhs\\n\",\n       \"QdObID2kH8bTOrh6Dzu6TsXok9MsdosIRuBTBAY4mGKNZj6ErqXwhYPwM7gMuGwYWIMxnJVL6NYG\\n\",\n       \"dz49TufIdvSJbZgdD0t3qIej2GKTaEzdG7NxEEYMx5jEFSPEWgJpZehEz2N8ymDLpMeWDWhtA9+B\\n\",\n       \"cj3D6niaYhVCl5qs5fskI23SE5KGBuvflVLO33z8hBB5YFzFH43thrQNXSlE8lloPvI+lHnmRkEL\\n\",\n       \"3xIRMKMM47bdcD7Nwy89AN1pqAMUFFs73oDfFiK3ADM7ytS2HafljxBcvwSzLRKzDUYiJWyjgiZs\\n\",\n       \"SuyiKy16fg1PX6On30X79AnKk2ssboHJlM6kabEcm2bdyOABRqiD7hWpDza4d3MNM9Lgii6pyAFt\\n\",\n       \"I84W6bBhJKiLOD08ZskSlmvoqQ6mtLC6DhlDYgSXWIuZDOwwyDAYZ2F3BvJhwtLGCJZxQgs44g6k\\n\",\n       \"qIFdAnw1y700gXdykyfrsDilNl4bwPojkLsb9q2DfQ1gAAwY6PDNAm+Ip2OllVLvVmQ7EH3L8v1f\\n\",\n       
\"UHwV+IaUr3UJfps4BRzkbYzA34/4sBi5BcO29qPAo0LEvwi1h2FmAfpJuPwxuF6AzuOo0cB1uBHA\\n\",\n       \"VfgaHE3B7jV47H5l4ujbMPcYBFtCYJRZnc6h9fo4hQ5eLEEqCFO2ozhijAGSKpu41IiTwQR0uszi\\n\",\n       \"YmltGrJMFR8nFQGxSshaYTTd5rPPQHYAl6dhPQT1NrRTkNJAxmCtAPsuvfIOEwOI7oMDi7BpASaU\\n\",\n       \"c4p3+ogepzybRg8b9ENRanIUP4hgai9RFSGyQRGXPj0cIjTIij5aPyBek0Q2m8x8H57X4PSFt+7o\\n\",\n       \"GRpVGVuvQKO8JU00MwXXs0qX3B4H4xjcOQM/GFYttw2BS8LhHHgNiCWHlvE2+DNQv0avW6WzV+dw\\n\",\n       \"2yLWh0AL6BouovME81+cxuujrscVTyXDRu6G0gamdREZzODpUwgC9OAKrt7CxaEh1pF6k7xvMCmS\\n\",\n       \"JAMdA5+wrNERPlKk8bRJhNxEl2tEtBhp6eNqTRrZAKfdIbEKnzjl8Sf3J8Ar4llLGKEa8VCZgSax\\n\",\n       \"JeiuQBfQEjlcooSlRyWq4UbbCJHF7M7jJZTT6v2rER7fM05z1xhpx6ZqVmi38/BCmFr3DAuU4H+/\\n\",\n       \"YSs+/EwLiH8O9h6FfAacA+DWYfsxmKzBifvh+RDwN2/tPL9r6LVvQ1huQCiA9jCDKLodtkzfQq5O\\n\",\n       \"gGcTfgjuewoOLQ9Yr5/he5/bykZyhXBGZ86WGHoHQ1hERJgwPkURwxK6onlZK7RnNLRImHHPwNS7\\n\",\n       \"NGNpmmaUMJKeHEUjgW9AKVVj3texgz4pPyAWFvQ9j2VrKz3mMKgSsEZN9NCDLDY9PL1LPq4R8zyW\\n\",\n       \"rQGaf4219A5aJQO2bEKhTLg9gik0hOkSGVjoxll6Yp2kqZPvO5i4NMwOG1PQ+Jdw1uYVR+KWECng\\n\",\n       \"haNwz5IasQTAC1ug+iwwI4SdBacKXL19IepUoHSbUMpSDDon34Hz+wsBIbCALwNH3oXlzwJfexfW\\n\",\n       \"fU/xXgbljaFMYHYD0du4Cb4P0P4unOvB0m/CyDawmrDlGByuw/E/EEL8OynlNdB2wO4cHFiCUgTE\\n\",\n       \"lFKkVPJwuQ16FyKyh5GW+GYIDAfhV+na47haEgsLCGFh4bNClSYJLCbJ0MYENKJinQjzXMcnXHMY\\n\",\n       \"M7vc3YGx4U1/Zg1KeyA4Ag+cgEgAxYIa2WzmYLAKgzBUCiBNaMZtLv9qBiOZQsOjozXpiBBSTyMI\\n\",\n       \"QOoIBIY2QoCOJ2ySgU7Iv0jfqzPrQ6BBowyVl2DuHOSqUJqEM2PcpkvxxuB2oG4rbwwFm8pMmH43\\n\",\n       \"C1fLcHcUehHwcqCfhumrasd29daVbCEO5eDBtpJGDdagNgkvxKDbBq+F9A20nE0l16fhaGitGqKU\\n\",\n       \"IxZxiI7uoXG6DdEl+Hgb/F1QiaCPJDHTMXrRafBBGD4Yc0TlKl4wjxfU8b0QsaqGn7NBg0DXsJCM\\n\",\n       \"4lLWypSlhj1okBAWk4ZACAPP6JDX1ikWXKRu8M0vgWNfwYuWyFkRAgoYWoIBGyxoJTrRgIwUBFh0\\n\",\n       \"iNIQaQJ0QCATPRzDYDDmsOeEwXxujq6RIGqNECo16U3spdu4AmMGXI8oo7wor1I2aXtg7/3w8AJc\\n\",\n       
\"noGpNUVqPn4XjP9nle2yfKcQ4tG3mlP0bkBKWR4R4tpZmNwP66AyYU7CWOWVTA/PA+mpNuLLu9Yi\\n\",\n       \"2pRLwoKYCauTkKpU+fLfNfj2/RaaliRWMnHdNkE2jRQWCVZYYUBLGPi6Ry+ow5RGRE5h1wO8RJVN\\n\",\n       \"s0BLjGPLFq5eJCBHwk/R1y1W4gO26Bo7PJOS7+EGFjEk18jgE0KXOnCFqghTkDrKZk1QNpJ00Inp\\n\",\n       \"JSaDOlfCGs5EGl2kMZ0Gom8QCdJYcZtq+kWS2XG2NgSWoWEXJePtKrEwXChIKS+++gg2fgwnYrB8\\n\",\n       \"ALI+lHVYuwyh7XDgHhgJlOz/clkI8f/e5txfhcubMDoGe9fVCGghA2d8aP+DKUaATwMXpWThXVj7\\n\",\n       \"LLD/XVj3PcV72RmpAg/x/ttZvYyh98XjkL4f7jwHuQ6Ehi1IAVQ/CfxbSExAfqCua6sHQE6o0NeQ\\n\",\n       \"AYND4F9s4xDC0wNESEfILn2jTFWbxkYQkh06oouGDhi41NAx0KkRxcIB3CBCwo8SdTpEWy6ppCR9\\n\",\n       \"k+V1tAeDPoyYcHlWSXK92rBAyMOZXTBqKjPXQQROjc/Q08bxdBuJQLUWiiBnEEFfWV/LTSQxpIQg\\n\",\n       \"kPRkHx8HS0JXKq5cvwKJy+r3gAqzfTt8gvJjcOJX4IElsH3wRUAxlaO2MgPFJTh9BfZFQZQhcQFk\\n\",\n       \"Ff7iVpdcIcTsAfj1SbjgwOH9UGtAfAnuMeFMGfbvg/YaWi+GNPoMrHX0sEdsySUY7YLZBbOl2MzU\\n\",\n       \"IVzAHHWIM8AkzkC4+LqJL7roQQOXCIGeItBbtKSDW9Tpp/oMTEkTjYEI6ApJA5ewWEVaLhlCCCnx\\n\",\n       \"hUCTYPtJplphrox49AsacmMD0wpRNycJhMAPwPLHsLBZiy/wHBqTskcDl75cx0XgkQLNQmo6bhSM\\n\",\n       \"IMr1sTiEBG64jpwKYQSTKt/IHoFBS0mUD/IqR9KRI7CnpqLl/SREauozPxqDhRzsWVfJBSQZjjp+\\n\",\n       \"XhgGNk6jrl8rt1q/l+HbT8BvXoHplIpU0IrwVF/ZaSOlHGSEOH0GDtxxU9F8Dv3eBmbeZvnhAemO\\n\",\n       \"Sl+InfXZuTpgcdTE26ETxKcI6OIQIAloMIoghUkXR3RBO42QFtLsUrcm6Ik8kESKGOAjWaSqh7Ck\\n\",\n       \"Q98IiBohHD1MX/ZxzBApGmTlJkUi6FLQEylClOnrHmBxXuiUhE5ChjCJ4uiWUub5OoHp4WZ3EVo4\\n\",\n       \"Ry9xEU1zsfUuI7KMIeOIQQs3VSd2zeeOs1C+SwgRgEiArKGybgbAX6tx9Y1zm/00fDQO+28KFywU\\n\",\n       \"4NHPo6z1LYjeD+mjMGVDawEe6cPpyaGydwXWv/eLHMr4FvCWE3rfAK4BI0KQkPJdt0b4ueG9DMob\\n\",\n       \"AIN3yejqnUQW8gFM3HLSp+pgTykVoVmGugWVDNT3KuJaNwyGpkLnerOCJbNP0XoJEQpAGrgiBEgC\\n\",\n       \"TBxhEaDTBUxqCDxiZEmiA308mlS1LP2ugd6Fjt3HIUDc9OXuWdBPQMRU6bwALR1KBtTDyi+l7usU\\n\",\n       \"0zbLRhpX24qi6go0OkgCJC0QZ5DaNhK+TqB7DOQFNAGSPt1gHel0GfdgrAkdC7oZiB+AbhWaJlzu\\n\",\n       
\"87aIVf3n4HQUig9AVkBDuKw+kaQfR73e631YLcLIIsSr8L8M+T6vwgjccxBaM6q/nHgK5vIg1yF3\\n\",\n       \"HrTdsD4LuR7Nks2Elybsa3jxNTrbBIPFKoPEfyB8CIJwHreeQ4pNwoHEY4FWXOLrl0BLIRkw0GYA\\n\",\n       \"H0EbZBbPmWdpqkm+Z7EQTaEJDWhRoUkcjSlCrGoRDHIE0gTaeFoJ3RQEI3E8mcbTL9ObczEYJdCS\\n\",\n       \"6AyIahYpBEaQQBoRWvQ4T4uGMAgYw0DDZA2PRYJwmEHQYiFr0s7F0LChDX7cxwlaEEIVcsktcGFR\\n\",\n       \"2VX/4JUjqIfBHvIu9AZ0Q8o+3pRKBuwLKAt+/oXIzCR8dRrCJsglkGEhvt+7KfFbjRvEvytBAdXx\\n\",\n       \"Kd+6g6/BD56F3DpMpcG4ALt85NadWA2Lrt6gnllCZATBtEap3sYbK5OJzCC0PJpzkc1QQJE9DMig\\n\",\n       \"0adHh0CMoHkJulodJx5lICaIIgkY4GAxIDz8ti0zMMDWbHoyoGcYDISNKwQbtHFFkUDOopOi763T\\n\",\n       \"06u0tBGkGEGSRMgmbW2RgCxCjuCa46CHEXoHJ3oGf5fPVheSuosfwEi/gSZ8Oo0MoeU2InDRAf1z\\n\",\n       \"/P/s3Wmsbvt9F/bPfw3PuJ89n73PfM6958732tcTMYnBAWcqSRQ3oiltAlWFQK1EEZX6ohUSfVOp\\n\",\n       \"byqBVFUgASnQCqigkBISgklIQoht4tg3tq/vcKZ7zj3jnsdnftZa/75Y+9jX5jq2g6/tmPykLZ2z\\n\",\n       \"9/OstbSG//oN38GHnqoNnLcCb9wLIfzfMcb+CetlN4TQZvE5nrv3ZRfCsxt89ukQQpfln+T9z/Li\\n\",\n       \"Q9ozbpzhE4Gbf0OtMdL3H1GEYB5/An/hndj+SfXyKl5QWxd8V8QfYEa+dgw4TOrZ6VtHoXttZocn\\n\",\n       \"1NHXePWHGbxAd7mWXL++Ti+QzlZ86l2P2c1WlZN9sj2yVWKXEJTGCm3RKnaNBZetGCgVOjITmbGF\\n\",\n       \"su9oumB4dKh8MHWY8dIKz7dJVrhxjqM5mjMO30Oa5qY6JkVqaza105yYthrayYJu6DpwDvMShbp+\\n\",\n       \"mQg6ogcaClWIQjzSjBtSibScGlcjp3JWjxkNuL1JMaJ9il/+INuvcO//OvGK+b1Gp6nfb+l/vu+N\\n\",\n       \"QclLFYcv82emPHaB0bA2REs2+Nm3S0Qg59Qiw5T4IT7/gFubLA1YLbjX4ntbhBfs3/+8GxdGzkqk\\n\",\n       \"SeKwdeTg3Bkf+ELX8vHEcOXAvWen7jYbsntTB8++W0wmdYvfm1ir96cvxkUGyxwXDsefNzm3pB1W\\n\",\n       \"RT1TE9F9Mw9t6RlpGGnrBCaWJHGqCtuOGkcm5YEsm1gJqVlMjByLKt2QyCSyLJeEXIhTx1YVujoG\\n\",\n       \"oiCa17AgJnumndQnvyfVGg8dxDn5wcRx57TJ9AFr9zi4zBsX+fwn6zfSW+/vz3PrT9RG0otXufUD\\n\",\n       \"QStGN/LaBuYTF9n69NfBrvhinIhgnVO7Ed/53Tyfvsr3O5f4Mz/KYL3mSxuR/Ut+IoSwEWO8/eiz\\n\",\n       \"JzokX9WjJsY4CCH8rS3ZT7L8Myum739cs0wd9xoWhm3j5qK9hT3NYmpwvqWRHNpsblpIjg3TtgfW\\n\",\n       
\"jZ1DrlQJVUMSB9J4lnjoTtKQxVw7jOUO9c0pHYpGkjgWy2WDEG0lc7JQOa4WHIVTUtGRvqlXzcIy\\n\",\n       \"6ZbKssx5LZmWxDg8qRLMLKvCAqbyNJPFurDJ01SWbqtMNCeVgxA8FgbG5ztm+7kqH7n2h+vR7g+9\\n\",\n       \"pdvxO+f4jR/BP3nLqUrr1eGRqGOltocYtCi6uFiTzD70FvbM01uMz7L3rhiPPvaNXOPvkvhJ/JsY\\n\",\n       \"vZM05kejmj9IRr4VEUL4ayTLxIr48zHGb/lIJ8a4XwubvfTcl4TNJimfPs3uz518ph9C+DsM/jrf\\n\",\n       \"kzPMagXMUb/j488+7aC5Jhe0sz3D8KTKI9TYUKWnpnqMsSnV1NUQPXTHyJqxXKk/nbi/v6+4NvLU\\n\",\n       \"DU6vs3GRjz1bY1Kk5BP6PZ6eNoRGRxkKx62ZPQsWQkcwMbUqN5XYO9l3gkQ0EQylOpZ1NIxNI3OH\\n\",\n       \"fU9ei/KSz2S1e++Nuyx8nvfepjfmxmnuRHa/YSrvV8ZF/tLztJYYb9N6lXfd5//c5u9u8+QSVyYc\\n\",\n       \"D3nlq+lNhBDO5brrvyB78ZzJ7eeNb53luFO3XS5HyQ+NVE8PqQrKJ23uTu3svSlbmDedHXimaJgb\\n\",\n       \"T/Q7QT4YWZns211YcXDpgtgYSJOojL7YehcP0KSqaBySZ6pJbpqtCPGMtlQ7RImeypE9i4Kph/Zl\\n\",\n       \"saERGtKQ6GMrbFlPWrigsqwII6VCJRGMJGmibWIch9bkYsjMqwSLHhkfl6FnVA5ksedWnmsXt1Wz\\n\",\n       \"maJ1xnS3YnyL9jL/8jleebkGHn0F/XD8O/zO+ygfy4WLPYftzPbqnK3jodd/cFP5D6LJL3y917UT\\n\",\n       \"wgev8KMX6vmhO4xDCP8gxnjrG7g9nniS1vqJQi61uN5l0pvyPxvC8qfYv1pvPnuelRcoJ+y8hBtv\\n\",\n       \"I5T2dO7Cn5u3MnfZhmW9MDRLJ26f75mEjui2Ya/tyuRxp/fuOejueTgfbOSPKeNI0w1VeMwsJhJN\\n\",\n       \"eTgwy0fy6jHTMjVOZo7SrhibYmjWB+ZAFlddGQfj9r6t0LQaC0fJk1KHJx2UFa1wV+mqIhRSz0mr\\n\",\n       \"s5JwbKJUSARtlY66xdVQ6kskMpVGuCuIulrmkpGjRnCvmZhWU9MXxkYH7G22nXuQ+KX3D1y6z1Ob\\n\",\n       \"LA9o/EAI4fPqMXoHOxxscO0Ma3tzfv0D645Wu2bNQ5vFNj8xtP42gNUzh/SufAPX9rspfgZ/+x3e\\n\",\n       \"x3cdbuQ7JRn5KrOa9w65csgscP39IbQfxjj6hkW0/sNj75/xiYQbz7FQ1UXZ1q8w+e1azMyTKzxz\\n\",\n       \"aPpGYXKa7hZZi72nl/QX1swnmamWqVTdOx6oT/4FNVbjAa5jTomRiSVMJO6ap6oMQ8Pe0oYza/yp\\n\",\n       \"X6S4yfUF+lv8zrvpteskaB0vt1PjUKm09HWUFg0w0MfYVMu8O0YCeiozlZvoC1qSMJGEgbLc1NiL\\n\",\n       \"3nWLN+e5eIf5J1l6yNNveXmd2aPR/2bQPP845WMnLIensFy7tP4n2zH+fbVr6LX6Tys/GMJaxfbn\\n\",\n       \"cP0RADqE9Eme+a9mnk22HSaF4TO33Xhm1c5wU/7kxNzyU9qbx46bI0XjjPHgWDwzUc41lFcnsu7U\\n\",\n       
\"cevQv/v+rrQcakx2tbcXDGhudgAAIABJREFUTBcXlMtLsiRXSsQkV/O7dwlLMrk0jE2yfc3OQ/Pd\\n\",\n       \"TAhti9WRftpTyIy1BfMKicT7jD1wPTzUNJBoG2tp49hTgp7MTK6ro69mk6faDhx7YCFEK3hTKtoT\\n\",\n       \"TU21pAamHsrSwrpU2eg5Hj1r/95tGp/W6M2kyxPTckv7Xa/Jnz82fnhs9A/r8xcWcB6LbP5G8Kvf\\n\",\n       \"u67z4hnV5x43uHOBwxHpL3Lm9olU/te6piGES+/iJ/4E9zsnlNstuv+CPx1C+Ktf772R0up8hVfN\\n\",\n       \"TcmTu3rvi04d8YemXPsj3F7khX2u7DNLee09vP5r+Fdfvr3eT68608xNhsymmTLpaRUHsuUl/XFK\\n\",\n       \"uaCRcTg+kLcGynZwKkwV8Q3tsGLOnmjfQThvK/RM7AseCGFZTC5KbInVWQ1U4VAsbypDYqE4drcz\\n\",\n       \"MQsTHNgNlwXHKESZeUemekptqed0tIySzFRTpilSj91M0BakUqU0VKK+mXnz2vI40893NGLhqEhd\\n\",\n       \"zzrGvWjUzORLlzTzXH6wY+PSGz6dTVyacGWR1b/KccbyyzUIf9Dkt9abrD5l0l/T2AkOhx82/fi/\\n\",\n       \"4+JL7i9VvhKoedBh8hWjne/+CMG6mkHz0Xd4Vy+r3au/a+LbyabJ1DPqF/GxEMJfjjF+6ss/9eN3\\n\",\n       \"vtQefCbj//vxEMKNb7Wz6MnY4R+EEJbUok97J23efIX/8plaGnxw03H7C+4tDbgYrWapyVxTSGfG\\n\",\n       \"eqbaEs2TvOtLpmyJAQ6lunJrxoY2DbWsamlrxNJU4XDwlPxuJq7f9vqFWvL99RcIpznTbGhoOMZ1\\n\",\n       \"XVMrUpVooO8xwQWJKNgx9bpoTkuBPTN76uToSG5DV66nUpZD7eHY8zcY5dwree81PnGFxlt0BMrA\\n\",\n       \"F06x+cvfjHP9WF2RfTGeYvu3eSaE0IJU5y/nLv6xxMpsqrldOPw+bvxG7Y4scPajfGSPtUHfwe7E\\n\",\n       \"F96/Kv3eI3lybCU5bRL7yksjvfIlw3JROdc1HR7QGDA3VSy/YGv1knxIGQeK8Zbj1aGynGmWlWmy\\n\",\n       \"IgbqTtZ5XEVL6ow82RWroV62aVHTJKZyiZVq4F5ySt9AQ1eiEk1UVpQuShwpXZWZ09Awc0ZTW2Jo\\n\",\n       \"YijR0vUZQ1EXS6KWRW8IjvVV5gSHCvuCqRXbcvMWQynYtdp8SXG5sj4cON8bK2djxfyh/D2Z+CBK\\n\",\n       \"Drn1386F8NJjfHiZpyLpgP2BMn/B8ct/+C04oDHLp3hmhz+fhvCrFVd/t0R0mfe9cMKAevS7NQZP\\n\",\n       \"snKbJ77ee6PkwR3Ce9TdvFe5eE3zh4eqbtPw7sRer3C2oPsunvnYl7yYLu0x+nAI4bMxxq1H28vE\\n\",\n       \"D3DvQluZ9w3mpkZJoZd3ZaGFe2KrrTFtGZVXjReip5PEfojaYaaBkWWlPeteU+naNtSqgvVwYBrn\\n\",\n       \"TMNZjfGhSXbgIB9LbEiqS/azgfmk57Q1uV19qU1tU5klQQsTTZmemRWZBzJrSpVSJUFhoE5GmurO\\n\",\n       \"5raRQs2vioZmstCQyh2Hlr3GRZ1qyZ5l8i1Fa2Z8OpfMM07bGqcmhrsUs+CMhvlsYv+P8keP2IuM\\n\",\n       
\"P7fktQ91TTotx/tXlC/1GH6YwWvu/NjA1bV6PEPNhvtch92vWM//o4ifwMdifHupgW9ivIx3hSC8\\n\",\n       \"Azom35b4dgJYC/zg7/6pt5rPtQuejLxxxYnHyLc6TtDgXwSNZrzwAs/8cW4fMrers75sqzOlVxhm\\n\",\n       \"DdOqEsOe0mlNucwCHigfuY96Q25owbHEWWv2DHBX4oaenl3CglHxjFE1lS6dId72qZ9INE9nhqsr\\n\",\n       \"pmHRtonSUKLUdNrAgpZSaklhCbmGQmJe6oLMdTPzmqYYKvTN62sa+cCU5W2KnJdatarr/T5PfoK8\\n\",\n       \"5PAqGyMOL9Zd3DuBe5+g+sI7ff57/MW2sx9Z8+SbqaQamvYeOvVYX1ny2d/BIcsLrJ3oRywe5faH\\n\",\n       \"ucb+vudOVdqzQ7sLmW4jdxyD7uiBYbzgfj9w94jOOkvPK5K2tDFUhZnYXFX0HgrpsVAMxEbUCAET\\n\",\n       \"M7uiU6I3xHhf4VBaww6tZFPj6Zbd1rJugIHCrplCPXl/Q6mH2yaOsS26pDQ1cN/UmrZFubGhFJf1\\n\",\n       \"7ciVgqaHFh3bt+bQJaUDXTN9xw4cY82KTNCajfSTB+YWmlrNSrMYG2SJFyep42Zi+/S86aiv/cx5\\n\",\n       \"6UeeUx4+XQvWTG+w+DpLh7y4xcM1Brd5YsYLK3S+h8szfvpVroUQ/v5XS0jyum337+FDujXHvP0N\\n\",\n       \"tNPuv8lnf433rTB7wA+eFXqF5u7jFrbvu/auV7Tzyvcc0j/vhNpbM4Iew6sXsAUhhIvzms9fkLR7\\n\",\n       \"To2OdYttm62O/XSkig/M8gazVZNsR5hPjNNCEQpjuSUNiT2FNVVcFMKGRUf68aylZMcgDuXhqkl4\\n\",\n       \"4KBBGaeKYWlhNm/Y3ZUlK3JL+mboSQ319B3oKNXvsdKWRNBQKcxL3FQJSlM1qP1sfQ7LLVmyoQp9\\n\",\n       \"ibbEWT0HDgwM9dGw5Zw8dswVUTNdNqnmVfEzdtd3pYd9F+Y61odHdBu2O0tuzZ/T2z1weGHT63eO\\n\",\n       \"fc8tPn25bW5zwcqgdKvZMxrCPNMVh68O/Mph7Z7QimxO2fhHMcY3QwhrKZcrysit7xBH6HcyPuqd\\n\",\n       \"Y9F8MWK0GYICZ/2eZRS+s+I7ZUzzdUbCVx3pfPPjBHDXxMFXGrbBCi8+cZKcvKH13l3za7mD+Sfc\\n\",\n       \"ambuhoeigTl9M9dV5pX6MrekzikNMdQxlGpbNCcXLRoaWffAil1vSuKadtGV56lpuGOwkFueazuY\\n\",\n       \"P20aHpMYKZ1WigqbCptmDk1EmYsKFY4NZVqCoCsxk5tZdaRyoGOiE2kd8cJNGgUPF5iNuPfrPFbw\\n\",\n       \"apeNnI2/zfQqDx8/OTcP3+rn8R8at1h+a3fkGmtHvI61ec33d5zZzSQVdDWO10wWh5ZPVeaf4ugT\\n\",\n       \"TMNbwcaJrUuFc42m9mSo3ymk6WX9pCfIxORQGu5pzc0Z787z4TPMllTlxKTRkTgSuqU8aWnMhnR3\\n\",\n       \"TU1VWtjSNNTRNjCShGOrsTSJ6xaqgSKMLDbvGVcM05apkWiqnvG/oH74jiTmsK2yIGqbaEksmjhU\\n\",\n       \"SXQkZkZGUjxv30OFBexo6TiDBaU1G3b1Lcrcc9GWkWGYWM1bNuIV06SpH/ddT2dCqy3EVBUnQl4o\\n\",\n       
\"H++YZvPmqoZie9e0t2p28woHt1huMX+HM20eTHj+HEdvUn6QN3pMM576t/X8+m11JPZ4/R5PXXwL\\n\",\n       \"86bCmzV+5Os2rzsRLPunv8Ubi/zF5ynGuptz1m8FWXlZY/qm3eeOjPe+vJChnibVZoq1oNvSn1/T\\n\",\n       \"3GubrjMJixaGA2k8cC/fNhw9y4MnySdma03tNJi54XpVqtKora8j07FlHEqlqUNnhdA1NjYJpc3w\\n\",\n       \"hBjXGee1YnJ1y+FRzgtNmXfJ5IIoGqp8Dnc0LMlOoKmljlRTlJrpiXLBfbl7J4D3BY14rEweKCSa\\n\",\n       \"uiozqZ5FDVEhGphJzHTMQmWSJdJqSLrIlCItzbWbWnEqThuKuObcaCpNj2wsNBTZaRsrY6+OZ9Ld\\n\",\n       \"I+l0JsQgmytIM8q7LEy5yfbfYPuUGsu9FWOczYXwkef4yJUa9Oc6WiH8/DjG3/56r/nvpwjBHD6M\\n\",\n       \"P/0t2uUj3MgfJCPvfLyVwTJJuRmYfSOAt99TnNDVfpwnX6hfHfv9EPJfiHH2yld8NFaEYzrbGk8c\\n\",\n       \"6T99RWyekYdCu+oI6XXLdmxZlDmWOEBPW2bbNbm2RQsn2hUDdabVsCDYMFJpS2JCOiU7ovPApDdn\\n\",\n       \"M8w7Cuc1jAXnRatS4xNmzorKrhrlP8KOzGld0QSFvlTUiwPtWV82nVge12rZ65vsN5g1uIfBr3L0\\n\",\n       \"P/O5FfUis/kWBsRr78S5/zXCJueXmG7TfK1WVfslXFpQldOvMKpsyUaZo+WpIsECW7f5wjne/RCm\\n\",\n       \"minNZkvcLPRfWFFk8/LQQDBpLsmKA2k+4bnzTCriA3ptVT4UQ6pVToViX9HuWhEdhI7UQCa1bMmK\\n\",\n       \"u+47UmpoVy1bIbdkwWJcVqSZU9WWA12bnlC39LpY9MgSOLgtOC3qSCwp9UUPlHpm3jQ0Fo3VwhoR\\n\",\n       \"LcdmUpV5LalCZWhsUWZRz0j7pM81CTtu5I/rjkdmMdWdtB22Vxw2p5qTrrZD+809jdBQaCjTRGxP\\n\",\n       \"Ze1NZfus6rM9Do9IHnAh0s7ofRprvNQ76XZcYf813uOrJCMzXv5cre9y4Qo7M9JXWb1dg2a/ISfX\\n\",\n       \"k6LgMxdDuPk8r1xX/DFChESIp8TDI1eXeP9bkpyDJtcLvBFCmMc55k8vSB6kkiLaW53SqhRK/TBX\\n\",\n       \"d8jWD4nBTKIsGhppV5rMXNYQTCwa2arRIPY8Zd+6oI2ZIq6LYR1nas794WWmm7VBVFwRHSuQiypN\\n\",\n       \"lWW5a7qG5vTNa9r3vL5FDXcdaynsm9nHUBJPa4QBYaJ0WaqnUii8IpoodGVmJhLBw1r11bxB0kRR\\n\",\n       \"A7fDWDNpaZuYhaFRryWbxrpz15hznJ83FWVh6uUn7jh1uG9u6zV3X7hocjAk3WD5M7Q2+H9PwMFf\\n\",\n       \"HIGFEC69yA/+KHebNXvKc+T/vGY+3Y4xbn8j1/33SfwwfivGbxnd/VEy8i+/1gd/P8R3eDLyscd4\\n\",\n       \"fECZcLXJ7X/9zazCIYTwOKc+RGOd0W32Ps7qj/F9F3jxXl1h7bb5tZ85cf28/ei727x0jZ9+kaND\\n\",\n       \"8XxX2TivMhFCpUpapsma3F1nXLfptNSqpmisLzHEsbaJXKKmCCyq2/8NwS1UiiQhmyiyDTrLJCum\\n\",\n       
\"KM2b2jlpzZbqJ76ttKq+sInUntKewm0Tq3IDldtidSwWpdabUfOYq2dp7NC7QZmzOc/du+z/45PW\\n\",\n       \"+8Y385z/bnGH//0hz8/VduUPCl6LMY5CCCu52e7Ig6XKUkhOKt+xabtyd/684Q+d5Xv3HDTv+6Xl\\n\",\n       \"kTdbrBdT4+MjR7Gj1+o4aizrmWkZG8lVoRLjnCjheErZ4MyI7CzVvCTdUSabQuuhhXJZzBKJfUFP\\n\",\n       \"R6pj08Ce01KlrhvpunEcejNMTJOe+TIaho5dT0lclBifqKQeekQHLm3hOalDLbm25ZOE656RA8yf\\n\",\n       \"3BcJltHALaWJiYZobGLOVEulFCVGTht5w9S8SqFqBKPRWD9fMosNeXHLYZprFmuaWobxvmFemFRB\\n\",\n       \"ki/JxmNheYPnD7lX8PnX+Df3ePIxlj9Yc0C/qLtT+yl5Gz+SOk6u38/+Oh/8HV6MTHb4uVntwBu/\\n\",\n       \"Xq2hWvMie5aFSz3Zwo6iOu3w5Q3h3cFiGSRxqp9z+G9qJ+LHL9Sdshsld/7ZMj+2yvMZ7Ztmz27L\\n\",\n       \"wzm9qzOtBxOz5SObz5wW08P6fM+j3KAxVmmo4oLGbKqVBYOkcEtTruFIZtu80QmQNDoyDo9jRhiR\\n\",\n       \"BdqR5JTQuCqmUcPYqjkNUaVvqLQjk1tx2lRuzrKbXnHO0EhlR5RjSVAK4dDUohpwU9XHpyVxRvCG\\n\",\n       \"DQt6FiTuS42slVvuJ++RxoapkslnyXNF3DXt0hhNOeyKrajfntPPV01mC7Ldmap9wSir3H/qjFXH\\n\",\n       \"xuGWaX6dpwt+c5e3Nahc5IVnGDffcl90mT1FvF1j078bk5GP+pLK77ciXsb3fwv3947Gd3gy8lt/\\n\",\n       \"h2vPUk05fCXGeOdrf+frjxCaL/Lu/4L3Hta0to1n+M3vY77J2YdsnGX+oDZ5et8xm98fQmOVxY8Q\\n\",\n       \"LjF3/3f0Jzt8pNBfymmNNOJIKXOUnsOOHXMuKjXt2HNspHR0AmEMMbEbOCOTgg0zY9typSNBTnxF\\n\",\n       \"kSyowWrnTS3hQDA9YWUUMjOJkZnGyaI1FgSlqClXUwrrsdH8rO/yzcr2K/z2y9hhfI3FD/BLHyJN\\n\",\n       \"GX+c3b8bY/XqN/N8fz1x4mnzdsC3W1vcPudua095jjNKMdt29/RFW9f/FNeblCXhU0bFJ3zm7nFd\\n\",\n       \"qX9yKPwPmdnzibIa200b5spEqSMZTU3CsXG1zOs7vPg+QoLXyfqqdCKrxlqjzOl4ZJqNtFKe1JAK\\n\",\n       \"EjNBU6VpV1PqReINo3zPlokqSUSZqQ0j93Gs7ow8e/LvPameaCKRSzROAIo9HVOlOU0XzKwaO1b3\\n\",\n       \"Y5fUK/yOA4VdDYsWFep05cgFfacVRqZuIjcJOXcTG2lLfjbIqsp+xkwqVPMmYcvQsby/xyRVZIkQ\\n\",\n       \"77C6w3irdi/+5WN+Y8DSfI3dRd27vMrSNv/i67iuv3ry8w1HzfI58+d4fonTw2P3Ox/zyvf8sK1f\\n\",\n       \"fsber+3YP31XOHWs+k38b7yyzqvniSVuLPOfPcEfmqfKGU/0d/esPXbTwYVzupsHdi4t688vEEri\\n\",\n       \"MVmb0Kn9CZpNlZG9kIlhJHcK8wpDdyybOHNy9mdmUlGFIIsH2o19w/UoNmeSfKxhpmViZKaSSBUy\\n\",\n       
\"uyo9iyrlSVHRMbbooR2Jpp6o7cAZ8USvpEZsnVYzcO6pO6GXRFOZvsIDLUNrMmV1KIY7ZipVVZAN\\n\",\n       \"NMKe+eTQJEmtJHPKwcykCsbtBXdna4bbubQcOcw6hEuyIrFdPW/0r05x/+c5+zq/+nYdjhBCNid5\\n\",\n       \"z32N7zuUFV3TjQumry1ynBOTOg//rooQZPgx/JVv4W5fxn/3LdzfOxrf0clIjPEarn3ND/4eIoSQ\\n\",\n       \"c+HH+ciDL/mgLGxx8zzl9zO5RZLV4+nZhPw+/gjv2mX1EusjNt7d8Fvr9I93zcfS1J1acjU0REFp\\n\",\n       \"0a4NZexaUEgcKsKyaWwTFpWhsGlqLDVvopI71jFyQ0MiapqFUyfH3OQE/zFVqjVJOtgyc1o0qGm5\\n\",\n       \"dlW2sGJFTf6htFKVdSq0XbnykFvbjP+Xt2gv/MojR9N34nz/h8aJNP/fKxU/teBOf+buwhHTjvjG\\n\",\n       \"R3n1UQWWEj/Andf5viUurlHdcvf0UKPZ9ORw191OW5I2jRWOs5bx0YEqnUlePBYWbylb+cl2JmKZ\\n\",\n       \"aE1XdcKR0GzrmtkQzUSnFMZSAz0NEwcaJloYWgzB5TAn0zJwZOJAaUGhq2Fg5JNK59AX7akMpc6K\\n\",\n       \"MkGUGproSyxrSfVMbTtr5tfVXZXzOK3wmpu2LGloaZpYc+hiTfOUGxkSO4y67B+aHM2bHG1xaZnJ\\n\",\n       \"Y4bTSuhsanZyyXjB6c9sGizd0j9TKLq09jZ0DZw7z8O/wPbP3uWf/jx/8mk0ibfI3uDT1Ts0tvtS\\n\",\n       \"LP0A3ztX+z/BEzsHFopf8OsfetbBp/riaFv85QN+7uQe3jj5EUK42OOnzuIU0zHpnqN2n8Nr1tO7\\n\",\n       \"9h+bc7j8hFK7ttsOdwmBLJFm+7JkaGJeFp/Xt60Gv+zrxUQUCcucjNNKi7J4VxHOCEVilrSExkCV\\n\",\n       \"DxUWtBxI9UxOCPUcK+1bOhmtkCkMPDCyq6uhJ1Nq2LFg6p5n1cPKvjo5zU5+WuiJUlPrcnv6Doyq\\n\",\n       \"05qjmZCfFaoxyT2tsvJ4OK3T7BgVI3vG0rO5o/7M3uSS436braEyGzB+nJ1rpsnjpr92vr79PEV8\\n\",\n       \"gye9bYdj8aN95y4/sNd+3MLmULl61eaHn3b0G2+QDWs58++2+BDuxOibWjB/jXgFT4cgi/HLXap/\\n\",\n       \"P8Z3dDLyDscyq19myFbjUiZP15YdzYrpEheG9KfsNOmcrYFwjX02qtRhecmykXy+59ys8GbWNGxe\\n\",\n       \"kmjL444qvCI6ZRYv2khywQMpYjinRu49g3MO7Tl0pMYUXFNXv6fUC0xfXQVdPPndUL3yV1q2BDeM\\n\",\n       \"NQQrkhOWzkxHW2reTLQrSJ2SmouZa4szry+RHD9KPE5o1lfonQshO6R8/ffuuvvORYxxN4TwN7d4\\n\",\n       \"mtjE9cf5nxbqptEXY4uzK1x5N6/2GN0131i0NA3Gnaa5wb69zqqq2DLJ+4p2IcSh1uOlSZZJw2lB\\n\",\n       \"U1kWsuObhs37YnOikBpVTQcpn1Z4WsO8FIX7MntOG3ogTSbOSjTsGzltzwUN805p2XUsuiozE11V\\n\",\n       \"WRUtYEHmodKeqY7o1kmFPNG3rW0oNzYzpx4SN9SdlYsmPmUjHhCeklhS0zyneCAgxmvSuKB1qZQM\\n\",\n       
\"Hxrklerq5fq2avbFtWtmnYneqC2Zn9c5GJl0Eou3dkx7q4rWOYPH9jReONBuT+Lwfw0h3L3NUzmt\\n\",\n       \"Ye1cfeebmcTWhYLH1Jn0Lu5y8UWe/TJF1eg9Xxi41f+0g3+KN7+a90mTP/oUvfdwe0hzhzMXWdp3\\n\",\n       \"lA8d34zS8SnllYT8LMkeoUEIQpiJZqoYNeOiM2FoPyxqxnlFuONheMw0BjU4c+HkqNqK8HniliK/\\n\",\n       \"ZJaWtVqwS1JruCrzpmOVwkBuYlEUNbXROOFcvekpHfMa5nBo5J7MzLJoUxQcn5B8H3VlJrgrigpB\\n\",\n       \"X0NlQQhd3eFYa3NTXCmV62NLjaaumSrk2rML0t09w6yyc9Q22H7I4y26XUY90jtcGvDK2ZNEBFKq\\n\",\n       \"9G3eHyGE9cT5H181XslMexvuna3kg5asvK7xU/eVv0j5Hbe2fBPiWz2iEaNBCO6rk8J3uBh45+M/\\n\",\n       \"imSkBqRaV7eXH54snBNG4ctBsq88wdoFhqPaffdCRbNVCwC9vsaTgeWEi9s0pql/fanQPBwq2kEy\\n\",\n       \"OaPTSTXLO/pJUIWoGVeJh4SOxJ5SVyFRV7b3kAimch2V2wpTtRDa0+oj21G3YoMaLbinXnxqW9zU\\n\",\n       \"vCXH5u3q2bKta0fPoTmpPSMHchPnqyZFpVlVGrgSuf9YCOGpetOnPsJT5zkzrf1lrk5OHDm/IXDh\\n\",\n       \"Ox0hpFdSC39pXni+qWhMTR7umz18SO9M/W4GuzydcbzO8U1WS63Fy/LxgTLLdaWa1YZptW+WPm9v\\n\",\n       \"NtPpnJZV98zSNbPQlaqE0FK2VoXqqiLOTGJhKTSslomNpO2VMNVVCnoG1g2cU3lZ04FER6V0pCvV\\n\",\n       \"OKFnTs2cxaGZEVbVyegOxiqrMrcwlGqZea/CnJaxwpaZePKdmhJ+khajR3iILZGTEcE2mhKlkGy6\\n\",\n       \"NNjV3O5L9kfK4ao7p+YNb7wpefq++emOJ/+fnvvPDxwudPWTXLq7a9Y5ZbRwztrVlt7BTOvodXs/\\n\",\n       \"mobw8zHGL+AdER8MIayc4b++wtIS8T7hNjf3lCnV24FLCvUz/VVN2JZ5vMdgROMBT57CBfpjFtti\\n\",\n       \"475iISdPqe6THZM8jamGO6KzWrEgHDg2L2o5MDCLi8ZatWGjnJORWP2stggDMR6eMGauqDlP1/Vd\\n\",\n       \"Vlg6GfRtmLjl0JHUoaFKouHAMwqrWgaCEmsaRiY2zdmV6atcUC9otzxSc65xZ1NzWuYlJg6MwsTx\\n\",\n       \"YjC/d13WXDNXlnpxIo8TMXSVSSnOd3UHQ41yanDhmI1jtlv1uCo+ZLHJ+NEqWRBuko3exikb77tk\\n\",\n       \"8uSzlu7kuq+OHK7u2n/+hk449L6rrGW88hdDCH/v93J/fCdGCII6GfmT34bdPwKx/kEy8p0cNYWv\\n\",\n       \"+0e58gOcTurH9f5GCOEfxhj3Qli9zhcee8S84PgJGilPX+P2efp1gaS/XCt9Pztl3GDvNKc2oouj\\n\",\n       \"fTsLuWowUQ4bqm4qtlJ5rKimpkmbsC/xpkql8ozaZrFmzjT0BW3BgdI2HlczJs6rk5GmWqG1p05E\\n\",\n       \"FtX0gzHuGGtYMLYqWDa1ZuR+tWO/SMhWnKpyucziLErSsVE608HDK6SX+cCIg4z1U3zPx1g4eaFf\\n\",\n       
\"mucX//MQwl97pGz67Y4Qwnpq/a88qXHhsoWtprToO1q65d4f+iWz3o/w6jmOdujc4Nw5fqtNscWV\\n\",\n       \"ZbPDmaq1JJkOpf2RuLqr0VgUp5XEJelsrGouiRZljiRxpEoKVWusLBNZyKwVhUaaGKeVUwqlFQ+N\\n\",\n       \"T+QzKrUEZUdlZGjRnLHCosREif0TH5NgXXCg1k9NZJaU9kyMTQXBxMR75Cf/S/QkUqVX1YnqbR5p\\n\",\n       \"TGip/XHueCRNmuqqcQTXRYtC2HLc2XF+kfa5VNLc1U4+4eVTMyu3+fF/zIW9A3feuO7jHzwlnI2O\\n\",\n       \"TidGjYalOy29A8hl5XnpqF9TF98RTZkQQljjpz5S+9Xfgffjk1z5NTuzqVdP8963gCXvLbC15y0s\\n\",\n       \"jrfZZrpOd0h5lRfOYoHjCfmY9DE+e8gP9UleJL1fM02SHUEhlchEeVyVlwcmSYfQc2yJySdpHp9I\\n\",\n       \"DYzUbKfL6s7m2snuHxNrtT716rOArrHeyUBmAQ19r5o39S6pB1LHUoVSJRUdm5gppYKZoXuiSuUy\\n\",\n       \"zslsKe2cdNgu6NjQtKLjntSqNaW7WWl4fmQ52aVMDZLEYliVxKEsXDXrlfrtxGiuFOLzFnf7xvPR\\n\",\n       \"6OVlHna5+CrPvML1nPIac7e+CnB1mefOqia5tISCbtcTO0sm7UNLu7zvPmd6jH7yG2B1f6fH82oD\\n\",\n       \"n899rQ++A/EoGflH34Z9f1PjuzoZqc1Zn/1RfvBNWieo7mur/Os/HUL4P/Bz/OZPc/siy5E7a8zt\\n\",\n       \"875NqjbLfUYLbDSZDYmBokWvYrA8s3Y4MFu4ZHj/gTQ9dFSdliSpPGQYmrijCm9qmsnjk0aB+gXy\\n\",\n       \"eUGqY2Bi39S20ll1j6amndYxpy75a5GtujKeqROTicKhqURHUKhsR0YF79qpbC7s2WvXep/jZmmU\\n\",\n       \"TM3KSj6miHx/Sf99pHdr2f37f5i5X6nZQ+eOOH2BB3UD5jsieh9ckl143OLmo4VuzsL+JcPGyzbL\\n\",\n       \"n2fU4uKM/T4fe/cJhmTK8nOOb73u4PLA4lxd3aX77J2ZNylH8k4pyQtFWgphTiLTjBsmMZeUDTMd\\n\",\n       \"XSumk21HvZm5WWmSpfIwVTmnfsGM1UvrGXQdOdSSypUmZsaWjbRlpifJaBTN4/hETb2n1lUlmlMZ\\n\",\n       \"i2bmTnych5pSlcwtU++VWRBMlKYqpTqJraQKwU1RT/SsqJJUqWFnZL/dshpbJBOnq6nNc/OG7bHb\\n\",\n       \"o4H5X+XiwZ71X9nz2Yt8/Huf19Oy8sW3xcyoO2+20Xg0jXhnYnWN848SkUfxIg9eNjn7wCe3OLjE\\n\",\n       \"mXHtkv3alIf/8KuNiB4pJM9zLq0f2lafuZKs4PiIWxlnEvnlLTHfUCRzddVRdcQwlYW+1Fg7NE1C\\n\",\n       \"IomJcejK4lVFWCJdnziLAAAgAElEQVRcJtYFR92NOlQ/ryvq53Wofmb3BaMvMmLoKQ1FudxluTeN\\n\",\n       \"jS3JJUqFiXuOHWhL5FIzDQcm9hx45kQ6/q66WzqvaU3pQDQTTHHb1NicZV2Hemkw63V1jVTlsq0w\\n\",\n       \"1EruuxzrnspxSDzM10zSiWS2obrYsjgpWHne6Gfn+e2XmP7KyS73+bSvguVrYc5gf2Ywl2sPZ6qF\\n\",\n       
\"VHvUctzzRePCM8esXvguSkY+in/2bVJCfRl/5tuw3296fFuTkdoIz/vxUozxv//m72HtQ7xn90uJ\\n\",\n       \"CDy1w7WLbJ6PMd4JIfxNts5jjtUR5y7w2Uu1M+3DJUZzbBXEIbOCnZwqY9biqDwy3brn4Ch3cKkU\\n\",\n       \"5gpZyM3ManhaOJabWZFKwpGxe6ZGeFOUONJTaauxIKX6YR6c/Dt65KZbZwRN9UJ3qK6O24LctomB\\n\",\n       \"oTxGpyfMTqi5D5uli9OR00l9vFu4Fep6uhdo92m0eLDK4jUGi+yusHaibpvyLRSY+9rRuDgnLR8l\\n\",\n       \"Io+iqTvtEt7k72B84qJ8/lP8N8Ma05Alik7Pze6RxSqYizODbjSYJbozJunEtJlrhLEiPhBPJvhF\\n\",\n       \"0tMqrinSUj47FvOmy2VfnuWSkNoxs+mSY7tqAEZeG9lpi1IPHSrcMbZsZiaxq34pHaMtOlQ/gA11\\n\",\n       \"0jlVj2xqMGJh4kCuZU4mlcnlRjJ79rVVKolDtdniKWwrLMmkKldU9qkGlKc0ssKwmulr6ZVD46TQ\\n\",\n       \"jRPD3pK9hZm7z0wtfJpmWSfg1T8/sP8TY43FTHNSmrSCnSI3uT96Z8GHeZN/rxPXpMyIbP1Ndp5g\\n\",\n       \"/jz9I4pt3l52O4SQNfjgizzzAT75SQS6cxSb9WhlM6H8grM/cuRMPpZWr3gjnNUP0dg8cWgYCktV\\n\",\n       \"1C1vG6aHpqFjMttUZkOa766ZOemwPkZP4/PqZ3mgLizeUD95y+qxbEfd3TquR2hKwaaZRK7tNQtK\\n\",\n       \"MwN3ZR7XV+mKlkzFE4uIBdfddcoRUlHtRZNiX+qeto4Vbd16IEmM0ho85Mi6pTRXqczi1L2Q14yh\\n\",\n       \"uGSpOjaIQSsZWpsbaadHlrq/4uZHP2DyC/uS8b5z55hr0L/M7q+GEP7dVyaCQ260jFe4e2Gsu1wJ\\n\",\n       \"LfZj33Sj9rZ6FF+VCf77Mf5T/I/fpn1/1xjmfTu9ad6HbozxwyGEvx5C+ECM8dPf3L2kiyy8zWI1\\n\",\n       \"X6nXhUdW43frY0pTBj/D0g53zrPZ5vAM58fELv+2wdMvMzfh2jlu9aPRZjB3cUlx6qzKnlacGIdU\\n\",\n       \"mUS5dR1DG8ZmRkqvnkiAn0GtEFBXUefU0Oi6aqr9ThbUYg576hpiTp2EPMCmdTwhcaxypqy30kj5\\n\",\n       \"dMF+wlrFelH3UQaN+u9N3G6Tjetuz9qAxog7C/XfihPK3U6Hh0PfQn2Rrx2jG0ONRqkKqST2jdtH\\n\",\n       \"+isT+6v9eu1vxBiHEGO81wrhMzP+bIf1Gzz+vGLYsLsZ7RxvyEOpc/BQmp+TJfuG+aKy2RTDfeJV\\n\",\n       \"w5hR7AplKcymjmNTO2+4nlfSUFhTmpNZ9LKRU1rOiVKFmcp9+xoyi9pGKtc03RfNCSLaUpSuqxOR\\n\",\n       \"y+qs7576mm+rX2DzGBl7Q0NAsCoXDPS9qVA74jQEu9W2KpmpE5LT6qW+TyyVcSqJa/J415GZEDIb\\n\",\n       \"1Rmd8g1VIzhqNPTPTm2ss/s4L68x/ltHir+74/YPn9aYrpjeLsX+Z0h2+bfv4EXe3mKyVzsKf/G5\\n\",\n       \"vcGpYa03M8YXmiHkp/mxtfrmTdZCuLXNP4kxHoQQ2vP8yFm+L+U9Q/Z3OPgwH/842y/xkWXKc7z2\\n\",\n       
\"eQt/curpcmxpVGrmI63GPVuhdDMdmVlQFkMH4VA/2aKiSt48Ie4+JimXKIdmVZN8oH5Oezgv8ZmT\\n\",\n       \"McppdeLx/7N3Z7G2pPd12H9fVe35nLPPfO58b9+eyWY3KZKiRFODNdqOIkuWbdARLENGEMMPjh+M\\n\",\n       \"BEiAJH4zkMQPCWA7ieJMzmBYliVDEjRYMi2ZpEixJXaz52ZPdz7zsOehqr481G6SoilzENlNClrA\\n\",\n       \"xcXd9+yza9euXd/6/v/1X+ue1IngzMw5LCndQ1CYqLYILROrgjruSb0mtaSFmaG61KXFoPjciaaZ\\n\",\n       \"0rG+lrlE4kiU27Sjo764to6NjRzJzMJ5R87MHNsSbYXMyJZQpFoK0YmNZMvGfFQNEtVqLo4OTO//\\n\",\n       \"Za99x4nH7vJDpyzN6Nf52I/yVPRF2qFTPv4Z3v0ek1uXTF67Kz27Kb1y6PqzVUUEXt3g8I9FWSQE\\n\",\n       \"F1WFyW/k9+Lfh1ewE4LlGD+vm/tWxNtZGfmAz6do/ga+U1X++zpi9FluPs47v2BRzQN3El+izxxj\\n\",\n       \"8WwI7V/g4IdYu8PoEg8+w9WCjRNOl3jyKs+dcRBXlc0V7c01/etRq5nLw7a8jIqwJBrJvGBkS2Fb\\n\",\n       \"ZqK6WlbxDhUbeknldf5ela/8q4vHuz4fg9kWBEGq4UjpwENSlwUXFvNcrwRqkTJWbabunOV18oxe\\n\",\n       \"ViUNdwKdyDRWG81GjXtNLj7PKxuE81xKObjAswl3/8kiP+ibBKNPnqrfuOHo6rJ6r3By3wVFfW5U\\n\",\n       \"rjF6jb8ZQviZGONhCOHSu3j/n+NXP8Yje+rbL6kt1YRHpsr5jNk1G596is0bllorRkuvy7NEkq4q\\n\",\n       \"Y5AWZ4qsbl6P0tGaYWPHPGaWTSSG7rgj07MqGFsWbWk5MJE59oCKXJwZ2rcic79TfYXEhkThzIET\\n\",\n       \"wdBcVQ0JKro4X/wdVavxCs4UXtbS0ndO1Besaaijp4wDteLQ3EWlg4Uae+HUGSZiiIbFREzGzrQs\\n\",\n       \"Fee1i6E87ZhMxu6VE+EivR+upl4vPMeV9/V9evc5Bz+zZ/odKe0JN/f4VzHGb9giEmOc10L4hd/g\\n\",\n       \"rzzBpMvoLqufYu3Q+lkI5/86+/vv4rt+iFtvOsA+z4Xf4idDCP+oI/kvOro/WLNcMN4OztZfNGvf\\n\",\n       \"4cVxJV7tnXD5Dt99oLE+1xnNdQe0G8fKctVSPTVMZ+7M7yhrbYVUEh+SJ11LcYAl5exAUb+rVraq\\n\",\n       \"UeyyNE+GcoXgdVFX8IwlVf2ysGpmRWIfz5tZUm1AmirSsKLuwLGb6toypWbMzALn1SxZVTNTN5Wj\\n\",\n       \"Y1nPZU1NdX09x8YmmhJ37du2rGHiVMNReMREkLilZs2O1KaJ86LSrlvpBfOirlC3oS2EMyGZS+OZ\\n\",\n       \"IpnYvDc3zPjh5+ksYoSWZ/ypO9z5vhDCk190n2gO8GkeTdgYKUb7Bi8UwhHPXuC4xss99n4O//Ab\\n\",\n       \"dS29hfhR/EqMXz69+huBGBUheEGlW3kbEu2/fng7yciqz5d8z1Qn8+uMk4/xqcdJd7h2SL/B0zvc\\n\",\n       \"++gfFtgU4+iTIYRPc/N9fLDOB2/y+ndytF53Y31Vv9V2N6SK5oqdGxN2WiwvmyW7ZiEjvSDVkHjZ\\n\",\n       
\"WB2PWFJTc+jUO1QkY6iqgrxD1YK5qdohD1WK+H1kUm0dA4kVNa9KDNwveEDqUCJXqJVREqrizkZC\\n\",\n       \"c7WqkAyRJGwn1VRQEbiRMC2qn7vdIR5y/i75kGc+zst3GR8xePqtTkb+cogxnoQQ/suXTP7OhvzH\\n\",\n       \"3yUWhfJoXfzE+3h+h50h34d/1uVdjzArSF6x+t4VsZNLamNCqVMsS7Op+QOpznPH3vtyMP+hroOL\\n\",\n       \"25JmpiGo1ztGemZxrnQfg2CyNJZKtaygJzdQV/OIU8d6blk3c0VitNBy1NRdsORYsGTTPRxJpUpj\\n\",\n       \"NY+4Y0XuVOkcrkm9vJiSOFAZBwylVkQdQytYXgxy7hsIVSheGEjL+zTzkTw9spQeydMrGmUhjyVS\\n\",\n       \"J3GoDJcoo5jumoXozMysl6vPcuNlJrtsvdpxvNFyenWq+d6B9Oi24u8h+VLZTN8IzGN8LoTwj3b5\\n\",\n       \"9gbbRxprM+/KebxF0uz49T+/4W4xkvci4xl1audXhW/fk5+vufRXtlwcN2XlSJEeO7nCa0tHpu+4\\n\",\n       \"WPG7uMTxEkd3yebOq1zuz1pzq5MD3WnDkxsXpdn96vmWOHld3r4mJPuKpCuZtdSLwjy7RXrdksys\\n\",\n       \"mMmTPXUtTTcNZJasayuljjUd2DZ1y7K2FUzNtFQ9qTlywapuPBTC1Hlzt0JHrlSqqamagDOnopYT\\n\",\n       \"l6WW1aSCzKrMxNQovkeZ/75eekMjuWzuumhDY/F7ltwy1hWNDbEqsePM7WTFPAZjNe00JQzV0qA0\\n\",\n       \"1+jTiJ8nIm9ieUaniZbFJFsIYeUaP/2DDC/yKyV2Wf41w+UbPv5LtDtMjik/G2OcfKXOu9/k+PP4\\n\",\n       \"x2/zMbzZqvkTMvI14ky18aMqBPw7fv4hhL/7Bf/8NzHGf/PVvECM8SCE8D/R/x7aj1AMOPgXzH/v\\n\",\n       \"yzxvFkKYsjGjOePBjzb80l94yMH2tnSaC51z0vlt/fuGknbEBUkIaj5r5o1FD3eA+4WFfVkpU1U9\\n\",\n       \"thdvtpquqPrJr6C3cFJty8x0tDXUNOS2RDWJl3FtUcjNYuEsRK2kEp0eqPzWHp4zrPFiWjV2LsTq\\n\",\n       \"8cHilK+G6h5+OmX/aW4fc/BL5E/HOPqmmJz5wxBjvBNC+PvnmL+jmr0e1Rex9A+x9zs8FkL42Q2a\\n\",\n       \"dfIXOZ9JLm8oWpd1CtJy37y+q4zR6Hzfuae4/9Ujzz297dbFJXE+l9YnmmWmMVmTx9vyekOZnOqV\\n\",\n       \"M51Y0445SUNfaaqwbNVQV9uGqSWZYuERUqhb13ZqJGpbUndgVXX/5q6+mRPRIglo0bZbF7SkC6t3\\n\",\n       \"gpam3Ia6poFEW1vbyEzNmUQZlhTFHZv1pnfEQjE/sK9ulpZ62UxaduX5un6WSsKBeXpXOTuyM5t7\\n\",\n       \"ZMgDY/qdLb/94w9rvN61cjhWJrclP3VbcXMY48fe6s8ZPx9CuJ/H/zp/9oWFk0baNNvMZOf68rV9\\n\",\n       \"slPtpZrzt1tmTfIPd2x3lzWOU6FoSMcnuvW7utc+YL/3DgYjwhGzFzWvrUmO993r8ESfpR43NjhI\\n\",\n       \"G3r5mlQXMxodIanJtCVxpB7HBvUV2+Xr+tnLMhnJvqmpeXyHZkhk3mlJprZI5eV1M6eWbBk4L5Gr\\n\",\n       
\"NiMNDNQW240y1KoNhlVXjYyMHJlZ0lAa4cSZC451pKqB8LqmmkIqVYQ9Ic0thagelwx1xHgmTQZm\\n\",\n       \"ZqKOUs9Mw8hYzUxUmMa6o7wpZPuupKmsKOT9ue2b3K1VPkvD2h8kJP06w6kvaKfVeM/DtC4u2rsJ\\n\",\n       \"LtB/jNU96uM4/K235gp6axCCFXwQf+ltPpRn8PjbfAx/ZHxFZCSE8CEcxxifDyF8L96nypX4zT/C\\n\",\n       \"a/8O/gZ+Ft+vEiD+AcQY/+4f4fe/+Tv2F6/x1eKQu4H3Cj770I7Daw9r38uNG6VcQ1jqKpYLYdbU\\n\",\n       \"nP2+XqsQkdtWEY49VS94skjQnS7aNBOf6+lLVIr7luB5Ncc6alLrWgtd/FDQUhpXCSrGC+fGHpTc\\n\",\n       \"SyoVSYw8MaFsUE+qbvUungtVi6aMrESSkpdTjp/l9G/Qu1Bl82TfH8LaZzn92JtVkRDCm/KWKW6/\\n\",\n       \"VbvjL4O8ZLa6yEbZZekzWg8cap07NgsMHjrixTd4f865HVqpc3lfP7QIG+qzE8PsZTHPvZHxvz7Y\\n\",\n       \"tX/luvK1TbW1ubRx4OjcsUaYWS2m+nmhnGwoB/fsnc+NUhJTcy3njAzcj0JwIreDVVFNbtfQ2Imh\\n\",\n       \"trDwyu3YMzGTmVg1dllQLoLyZqpSYV2io+7YRBQMBAMthbq5QuZUIqgJsSYpa/LdU9pjq60lq3mm\\n\",\n       \"0FdkB0YhsxzXlOWW+bhungaz9Jx4dFtdy6M3Jy68RPJQ28HaNduzmtNuJt1vyvKHheMZPxBC+P0Y\\n\",\n       \"45cUin5jsXyda7M3fYBqPvnQqnljSX3aNUsbspWaenpstHyqOUw0aqXlYiJf6sjOghATtbOW5maT\\n\",\n       \"wSG9O5rZsfr6UOhmxke8PGa0yVqToyzxwqimnIzV6n3zdlusDaSBUCZKiSI7VQ8jE4laHKrHQ+10\\n\",\n       \"4D7BmZfcdFlYLPKnMkFHal1wV3tBSyu9SF21GYnqZhLVQHYdUa6U2rQmt2fXgQK5ZT3LghXB4nN2\\n\",\n       \"IF+4mfCUjVD6rmnNXpr7bPqqs+ScGHckYWjsjrmeJW2rmkb29Mpcb1AYlTVJa+heMtTNS2dly43V\\n\",\n       \"qXhvJj/m45f44O2KkAzqfPwiR7+4cEbe2OTPbfHnIjsvce4cz3SrW5wuk3Y1VPTHDX8GH/sm0Gp8\\n\",\n       \"RtUu+pbGlyUjIYS/hz+NNITwEZXPwC/jvwkhfFuM8b/7Wl44xvjpEMIkhPDbKmLzddaLfOUIIdzH\\n\",\n       \"5ndS36zkB6efwE3eeJmPPtI0fXSFSWFWD45jU9lHvS6v10mDkJbmEtF9KiKSsyjHRoXcWKktuKeK\\n\",\n       \"dLzkTSFqpRMZWDa2rnBNoWFqbmqgbVdm4NBIX4pPqW5jm4FrsaIyUxwHPtmsqiBXcBDZjmRJlWN2\\n\",\n       \"Hu3AzRqTgsk92u/i0R/hiZNK6Hv7CT71RAjhf6b1Lq5/P5fzyhzu1kkI4f+OMe6FEN7MbJvHGM+8\\n\",\n       \"tdjfY+911hvMP2Lnu6MHTdQbY62b7P40z/3yiyYvbfLuNdOQWCp72mXPKCbMC3kcWzvgtX+5LN9/\\n\",\n       \"L8mURwtxZyZv7VpJchcUluLMLL7m5c67FEct45OJ2OlrdXJroSszMjIXjQwtLyobmcSS1KqGOwo9\\n\",\n       
\"Dee9oauhlJg7dSLaVLesrjCyr9BeaAKeVFqXm6qbKh0uHC8SY12pfjUSWjANM8W0wc1jdqasRweN\\n\",\n       \"nmGYWDZzXrAXJlJBPW0YH7WFWjQd1zSWOhrjgeny3LxoOux2rPYLJ22KpHDWumz21D7ZG9VC8jYY\\n\",\n       \"4OUTZguvrXmy6ub1KzbfuOneY01lu60emmqjQ/2Le859LFOejISNM7HbWbiFThQNRvkhg2es76Qu\\n\",\n       \"FHPNbiFfmjporzt96UwjL8xD5nin7vLZzFY4cmulFMtUOrsr1HJF2TZJgnkytJHsqZeH5krtLPrg\\n\",\n       \"PLWUBGdpz0qcei7M9GxoqgsKiVUzDT2HC/LQUX3vOyqR8syxVwVntqVmRmga2Jea6UpEwZELluVK\\n\",\n       \"ZxJLn6MxUz3n4qlRGNspO7aTqfHoWFi+rhl3zDUWRnojiQMDTTEem8REL3a04jXZaSq4pdeoOQvr\\n\",\n       \"mst1ybQUl24bvnvXJ/4/bmyxkjKYcvhLjH4nhNC6yE9/qCohvxxY2WH1Dh9q8K+bVVpxq+cttUl/\\n\",\n       \"q/CWu67+IXgG7wpBeJvGi78u+EoqI39eVQKqq9bOSzHGsxDCf49P4msiI/CNGef96hBC4908/per\\n\",\n       \"Uu3qiHvv5fe+jdd/hqN/ykf/Wq7zY/smaxdMT+uyYmYjOdXrBvM0ytMDE2sL/4jrqjv3THWybuBl\\n\",\n       \"hW6VWeEF0am6O1rmCmdmohWlUqa58GjtOJVrW9G3hzUj51VF3VUVAWmrvt+tOY9HjjNCyXFSFQ2u\\n\",\n       \"BE5DVSTuxopQ9Ev2JnzgHr/9GOFH+IFbtBYCtJVd5vdx8t9y9X4e6lHbZ+eZyhzt138qhPRfcuFH\\n\",\n       \"2egyCyFsvcLhv/z3OWB+PbFIef1n/4q/lmn/qcL1lVxnfKq9W3r0qUp6dPz9R17/+1NWl02ulXYv\\n\",\n       \"BVvzxNqkNAoTo5zQI392hQ8+yq3PsLknf/ex0OZCiBrhgKxtczKQx9/14nrTfPdMPjpTNtftZIme\\n\",\n       \"jr6egY6xC3hVsCwoJQ4Xlt1dA2vaODCwrLAp1bErGhjoKtR1DdSlel6SWVXTUCo0UViyb6TjUI6Z\\n\",\n       \"Q0V6KiujneFE7zqjdrAXThVh7r2iTdHUXNvMJBk6bg8UjW3xdEBaxfwNr6Rqg7nLn8xNHpg67aZm\\n\",\n       \"YeLO6rreZ7fZ7XE/VkMIoxjj0VvxGX8e4xd58Yd5sA41ZbqsNciFO28wHMsfCcpiT+dw7L3P1rzW\\n\",\n       \"zp2+d18tzSXtoAhH9pqJyehZD+ykroTcMFvRS9a1Bdf7hRsbXafZ64qE+8uW9ZNEkt8xXdrXX+04\\n\",\n       \"V9Zk8UVnIXUkMQ8DK6HvvkAR6ZaJMmFSBuNYM05SMWyZCRjrsHBPXZW7I4nPCWGLz4WJVOGIwQVL\\n\",\n       \"GkbumaNZJcZ4VOaiOWp2HTux7sgbcutKbaUjHFkJU5dwYKaczXUGPc1mSu2UkIqGoomZjhO3PR+b\\n\",\n       \"2nlLzMfOwp5aHGjWZ+rh3c4dJLJ0YNqt2e9+p/yN5/jwwO2znBfnfPykmm6KtRAefoTuo9wak/0G\\n\",\n       \"s+OqN9y6Q97hjee5l1c6/T82CEENfxb/2dt9LDHaC5VR73nfwuYtXwkZmS3U0nkI4dU3d8KLWPBv\\n\",\n       
\"ao3Bl0OVgXHlR/j+e3QX+SYb4yqbpv8DjG/w8PXM0ms1vc6r7j52XpyuqvdGeo0DYlMMA1UVZFNF\\n\",\n       \"F6JKJb+jEqh9RqXoOMDIFevWZRJEc3NNt23I1cwMvOKeTVPvMfW6qoZyXeUX9JCqZH2oepkrOE2o\\n\",\n       \"T6nVmKdcjNxOqp+rqeZ1rofKWLIc8cg9WjXa6zTO0Xq9et+7Oxw/xvSdbG3xnhe4ekR/hbvfxZWP\\n\",\n       \"cPEa6d/iR15n59Yis/Uyv/VTIYR/8FZN38QY90MI/wMr13jwRZb7bB5VMyl1ldvu6+sD/tmJ4uEL\\n\",\n       \"Rk9wvFpI21NluS8dc/qv8eyMR3Z5V1ute1vo7Kq3Uq0wk8aJ5TAzayxrDArrN2+Le3OPT5edjgbe\\n\",\n       \"ePSckJxHaSgoZRI7on11fW01a1SeFWibSvWs23ReU6EvF2zak7iGqcxtmab7tGUShcaizH9mojR1\\n\",\n       \"Kjq2ZappSb0I0uW2cWjaLc6Jk0PD1k0vmVhCqlAqnCtP9EOX5NC8ccLBBdOjGw7O5WY7LIeBy796\\n\",\n       \"7Onve9jxZ+4Xn16h/9uS77xheZ1H/xK9UDkWH/3cW5VbVGm+Gj/Hz/849ydDs/qZw637DX73Mm88\\n\",\n       \"Kcxya+ktCXYGc51PBr98re2gmGtNSvN0btKPHk3rttZSm1lqujlzFO66q22pM1dfbzuOdfN0Znk+\\n\",\n       \"MVtqu3gytzUZOTefmpc4G9tqly4vV0S/G1lLqpmlrtKJaBA7boXLWuWpJBmalBtiiPKwp+5UQ9eq\\n\",\n       \"XXnYFcwNLOl7SLFwWR6Koi1dZ+4s5m8um3hoIWWuSWyKmub6VmwamjgztW+qcMlYC3uhcLfB2Xau\\n\",\n       \"npyomwtloh6jZpKiZjdsOZy3NaZz7eE9l8e3TBqlzclFt7KGLJ2YtzrKWqLe65LtsPIBDt6oohce\\n\",\n       \"/DhXQwj/sMv5rcVU0126TVrL1cfXPObRFynu8b8vxrL/OOF78dkYv2kW/zdFrN8sx/NV4yshI9MQ\\n\",\n       \"Qnvh4fBtbz4YQlj1JQyKvsWwzXb980TkTdx/QOMxug/xQzcmDmYjb3zvmfXTnrutxM1LK8r0IVk+\\n\",\n       \"UqR7YhJUNYi+6qSuqEjJTFXHOI9dbeuLocypuplCYsVc4sxUKrdkZsOufSeLZ2+oTnNbRXHSxSvd\\n\",\n       \"CdX/JUUV9ne7wUFgcyFoPcZprOzs+6F67iNnZKscZYQZ83dyc5fGkN4HK0ldfTGZEx/h7hXat1kZ\\n\",\n       \"c3CJ7DwXT6p39qag7dE9bl1l9z5VXthbgkpkfOl1Vuesj7l3rmX3ocS8M7UnZy3G+FQSwv/Yd/Z3\\n\",\n       \"OqbvzDWHJ7LegN9h/3+JMU5DCDd/z9LfvGSr09GKTeVsoFUb6oTMZlGal0f26uQrc088nThXC0K+\\n\",\n       \"4sb0AdPmZVkIUs8r9RdTMnMrmi4Z6st1F9ZkjCQSOwoENRO5JoJtfXfNpUY6trSlCrlCgkRTas3M\\n\",\n       \"sbm6rh3R1DJpKaa08hPLzZlitmKt3DZLbnpesCLaKodaZl7VkybLOrXLxr2h2Do1rkfd+7j1Q6Wb\\n\",\n       \"s9ed/WYUJ6d093jktu2dkb/w62yPqmvw6fv56E/g/3rrPufpp0MIr/LqfT1+d4/vuY/TnKRr/MIn\\n\",\n       
\"3f6uE9df4zefYK8+s/eLp0avrTjbmXJn7Pp38IG9udd+ciKULaFIrGW5oyxq1gppOLGmaq4+OB06\\n\",\n       \"ujDx7FapaEQPFiPHgaIbLWVksRIft3M6RTUavxtoxGgcNs2LsZN6z4ZXjMMtRWCi0LDhYbt6Mh0r\\n\",\n       \"nrdmTQ23FRrWlJrGhpZM7Qh2FRIdqUS+SLyZ6pibmGBNqqXhxETfmpkLRdRLKu+i04SNYuQ49O2H\\n\",\n       \"rtYk167NNQMnTm3GmWV9N+pD/XrTvY1UUtQq19U0N84ok6bkpBQmgXaD0VVOb7B1kd9/vHJHfHzI\\n\",\n       \"3hH1Eq/z7vfSX+fgHmubPPsgJ7/C9/hjkJ3yRfgJ/NzbfRBfgDdFrL/2dh/I14qvhIx8z5us9oty\\n\",\n       \"SjL8tW/IUb11mDH5EvNlvQbjNg/MyMpoY/dE/7lU2Ji5OKm7t/aYcj40q9ckSalwqhrN3VPtnTKV\\n\",\n       \"euuzqv36QN1sYWI0k5qoOZGqOyco1E2klg0MtO2rWt5RtfvaVIWXliwEbtUrDFUju4ctTgpWZqzW\\n\",\n       \"uZNVEtkHF88vYuW6emOT9SNGE5KbLO9y5wM0Rqx2OexyeI7OtLrxduskqwzXGR5w7yrdLV67zCiy\\n\",\n       \"9Crve4HN0ucno95CHH2Up34suBa37L17XTbM9dPgoMRPhBCOcbZv6zO055RtBjc4+sefDwHcefim\\n\",\n       \"+5/p2/u+jrmeaRhKYrRVlLIYhSxqTiiXV8TvXHKQjYw6q1qqocwYCdakbqiHpvoi3n3mwLq5hlMD\\n\",\n       \"dcdKURBMZHITpZZD02r5UBpZUxiamqtLBE2FqbEzUz2p0hzBfHFNCWvCrFJ4NkOUZJlaXNGMmYuh\\n\",\n       \"0FMR2rm5lRjc15+azp+xe7HUnp939XduibXC5susNmYmuy/b/5mX2WDzJ/mJ5yq/MapK2xN3eOWh\\n\",\n       \"EMLmWzn6HWPsqXI/ng4hPHPCn65zec7tPZP/Ojr4UVprwcVuS/KDy3ZPVwyf36NPY8LWcaH3kUL8\\n\",\n       \"9pTtVJJFaVIaFIlJSD0YCo3FmtuOBU1ulpTzaKXOSp1xwixW+qwrOULiwrz0ap1WQhELu8nApqAV\\n\",\n       \"Dxw5r7SuaeLAmZ4TazFTk2qGQs9AzaprOuoigg0DUxO3lOqCiXLh11u5rTacSc3NYmEQpkpTbYlL\\n\",\n       \"scqxe90izRAG0Y8AACAASURBVCqptkWX4x0xFnZbS6Yl9XgmFgc2sJROZcmqk8kFjV5iUB+Zz3It\\n\",\n       \"R6bNpjzNTGuFs+WXhHiokZSSa0OjWUk4R7/LtT1+/gUGHS7V6Kxz0qNzyvwqd9pMV7gcQuh8MyaB\\n\",\n       \"fy0IQYofV03SfLPgM6pqzbcsviwZ+cPKa4ub0TeVF8VXi6oMvHOTF3Y4f8atde5e4ORh2qeZl757\\n\",\n       \"1SunNfXBQGvUl85SJ5t1/dpnleE+kvtUMysjVVjDJu6qoWUq6GlLZOZmTk2UJhIdPfFzI3vRSGpt\\n\",\n       \"Ec1VN1Kq2jID1Uc0W/x9qKqGHKoWwluqxXCc0JrSKTnLkfB9JSsFo5RXSt6osxw5KTk64b2/VolW\\n\",\n       \"f/0naLyP+8/YazCr8cCocj8pamyVlRj+uXeQDviOCedPKvO4Zx/iycjxUMV6vmFYiGavqkjPqUow\\n\",\n       
\"8yk+fbHp1f+0bWs0M2q27fa/V/+Tg6pM/CNHrqzxbQ3yM5xV5/HTP1W1eQy4cq30oeeOvHr1yGzO\\n\",\n       \"6Q7TJW5k0WpRZf2cSnS724ZnF2wdH5IlYlKqxSNzXWKilixZ87Io0zVQoC6aG0gNtHQUpu7oOWek\\n\",\n       \"riZVSs2NVI4zlQFuz5m6VXVNhcLQxGxhdsWxoTUdbblTuVk9CiEaqelkUSjHYqiyod+M0ZuWdPpN\\n\",\n       \"DUuyF+dWl3Lb9RPTrYmlnNojnItsP87JizHOfjGEK1lVcfpCJBbuxR1f43e/ao26pOLod2KM0y/z\\n\",\n       \"lD+AGOPrqlV3EYS59bd5/zL1tU1nF7c9enNsvX3B062HzS79mr0rc7sH7LwWfPrPzLVqc7GW6sXE\\n\",\n       \"blKzHEeaolbkuMlywXJa5Te9XOeJKefH9Nu8ErkdWKoxSpuSGI3zqdNYOkn2q++hxHJMhPCanhtO\\n\",\n       \"BTVRq0ysqdkNqZaRvrkLxhqiVCKKChMdA8sSm2YGi6m5gYqQjBVeNbUaS9OycBbqapEk1BwqjMpo\\n\",\n       \"syzVa1wrgzyWNsMbbieZF5K6pKSlcD1JXc6Dk6Rwt9x3rGtYFl5aLbXzl81r58yT8/KsL5hYEqyM\\n\",\n       \"7tleO3JYvuH2VirmY45ijKMQwv825i+vs7IQhpyc4zNtpjkhtxgz/OODD+FejF8ytfjtwjP4W2/3\\n\",\n       \"QfxR8Mc8KO8rwf4/51f/K1Z/nPoGyx3O9zvubV7WP7ciO9dWvPy646uZ/P7zsnTVehn1s5Gp38ap\\n\",\n       \"JWMr+nK3nWlb0ZAZaGm4rpA78pKR81J3FS4obChEE7d1rQgLL8boUF9lVVXZr9zAU6pEi5dUN6Zm\\n\",\n       \"/Hxo+DjQOmOwxN2iuok+UlQ31TSQFtw/YNzgTo/6x/gPP16VmucJjRkrr7F8ynLG3cdop9TGHOa8\\n\",\n       \"lnEw42DMX/xlTj9YmRetDHn4jF95D3d+1ucNY7/uCCGssP1XuXaezche4MYbHP4/9D/xgP53POpO\\n\",\n       \"v8XsCicpcZ1RED7AWsrpVqWlgdsp7YzOo9W5GV/nt29WLacLxxxusZbTSxbVqIzQqWk6c7jdUs9O\\n\",\n       \"FBusxBUxZMjNkr7EiWjmgsy6NamZqbETU5mpq6Y+I+orrIk6qlW4oyKWM8FwMRTcd2BXY2EDnztd\\n\",\n       \"5Pi25F4U3RN0bRuFNbnMyGSRZlRqJIeysuKGA9yIPNxj6+zAUTHXHEyUO1ONpJC0efg1OosNx5WE\\n\",\n       \"2Z8JITzL9uvc3OG+LzAHnKYL4+KvSciahXD/ZT58iUYg3GaWhvC1jN2/iUfY+FGujhruXV7SSaem\\n\",\n       \"DzWtv37Hyn0fdvhLn3G8edOvvpMr51jTND2ZuZE1rIaJnbTvrmAWKruMcU4tZBqxLoltjfmp5xq5\\n\",\n       \"k8qN341FdeSVGu041U9aJsWS9nzF+vi2s/WuuQv2QgMzmV0dx0o1B5bE0LIacwdxYCNJZM4wl2sK\\n\",\n       \"RoIB5hqmVlXX7IuBeaQp2Itb9mJdOp7armWuyNXD1Ena1CzH0qQ0TxP3lVEjieahtJus2V8okCZh\\n\",\n       \"pEjqsnIsTyJxKssaht1zyvFFjeOxSXHbbPtEMjxRmyzbDlvS1sB45bPG065HnvmM0beljp7q8ftU\\n\",\n       
\"mzr8g/UQpmPue0910wLPcOGUp79a0vlNjm+2Fg1VmsjDIchi/NYkfn9CRlirxODvHHLW5eFxZn95\\n\",\n       \"VW/9mvnpzGh11+y9Y9v5wybFskZeo5ioL3PgVOKiq0aaeoIjQycL82ZWFtkT0cxlPGziOUvuOrMv\\n\",\n       \"WDN1VV+htfAO6uub2sa3q0rCmar5M1r82cL5RWvpesFJ5PU6o4Kl06q91Et5o1aZthVJRUSWzpjP\\n\",\n       \"eN+zFREZZ/zedcar3P/JqsjTneCEkyY3VmnfIHmJC/eYXa0qIo2Pcu+xKmAvlpVz59EvfFEL7+uM\\n\",\n       \"zR/lQ5s8/gXjgU9e4aM/xOm/zZk+wu5iBtSU2l22x7Jt2suVdXWtrOzx6+9mfIGLIx4bM2/TfyQz\\n\",\n       \"HATPlXODWCU2P6GqLIgch9wnAtPyloNLmcdmpWnxoufipsI6BmLY04gTq2FsVaqnVF8MfG/HTc9L\\n\",\n       \"nF/YXu2GvtxYG7maJ+QK0ackLittloVRMpJ507FmyZpEZux+pSPnHdpCR8TIiprPGhhqx6GLRZX6\\n\",\n       \"/jzONam3SD4y8x2f3nfzIkcFvUvcd1gRkagyJd2r89AJx+9j/zf5xH9SFTCuHHPQ4d8+zK0nVdv0\\n\",\n       \"wVfzCYYQutf5q3+2YoYHcEzrV/nJLxn/+hVh9Qeq87Nzkrp3pWZpQDObO7lYSvZ6LLc0HqJ+i9tX\\n\",\n       \"o0kWTcM57f7EdnNEmjtuVsW2DgZJzc0y1dOxpmW9ziwMlOXEMiZpNVx0QSYJmULulbDkhXyumWaa\\n\",\n       \"zptra2JZS+ayobkbZs6XwSSM3UxKwzjyxIi9epBmA8vq0jLRT2amZkoVOZoF3q3yO9oVPRL68ryu\\n\",\n       \"0dxwbm8idvuS9twVuRtZZlYW0liaBDpJw71w1Z5Nc3WpzDy5aerQZ8LY+/NoNK+Rn9eslTQpR6W0\\n\",\n       \"cV5t2hJHQ50Jm+OBeWuqXClNylNpra/bmDr65YWH0+dwws9+gp/a5+oW5R7hNW6ffAvrGL4YoXKc\\n\",\n       \"/LBvrhaNGI1CcFuV1/jc2308Xwv+hIzo/AUeb/PwXV5YTy03UwfzdePuujhtCme7alsdeWNL7WRC\\n\",\n       \"IxcbiSzQXtSrozUsiY4kcpsmDlRkoq0ax72JXVOrCqXSrlLAgaFNw0WsVdXr/ZAgiJ+Tvq6odrob\\n\",\n       \"qgpJrtpNz1PKlEFK/YTuiKLNUSQrudOoKhyP3uH1NtMpv/M+9l7j4NuoL3Eu4+Q+XmnRukq5UWXb\\n\",\n       \"5Dd55//L8oR/dZHRKxVJWT9j/WPMMkYZn1jxJXJ+vl6ojNceepjHvsjn4l33eObbOP2VfV5+hutP\\n\",\n       \"cPeGcPWupcdvq60NhVAZQt5uEtNq0rC5WWlcmgUPDWoGrehO2tDLg6MsOmhWtnUDlUanDExCYcOp\\n\",\n       \"e50NO0Vbfz7Qr5+5nB/rFDX3mnUHMTGJiXsht2eGUlTtxu+FibbMUjx1GOqiDWsOtU1FpYmgJS5q\\n\",\n       \"ZSuGyUxp6qqgJTOVmpojFXRNbC/GOpctK6V65lqCU12ZPMu9cj+1Q5ae5OgSZnzkw9TOMWxVE1dZ\\n\",\n       \"QnvInfvYXaK5y8EHKJv457z0M5x+P+ExwgOc2+d7r3Hnb4ew+inOfukrNcJr8I5HSbcqRg3WGb+D\\n\",\n       
\"8mshIyGEBtd2yE/p13KtwcysWVebTY1XtwxefFXj/We2E37waT69w/f2Jl56/9jd5a7BcOh0PdiO\\n\",\n       \"HIeoENRDzXZSeioWrsS+YVaoR0YJd0uaJTuBSUjFsimJNRdk7tUT9yZtm1oLp5lK21XI5LrmsWee\\n\",\n       \"ZG6FqAzHdhaj+IM86mW5mnwhdq00H2/6DG6r7PRmi8zmkzi1nI5dMhLOJ2ahLpQ1RZxIlUJJqFUt\\n\",\n       \"1ruh69Q6OjLRxKo0EkPpNAw9V5u6f3fm9sauWXaibmC61lCEVVmxpJ6eKLqFXN3yQTCbTIRXex78\\n\",\n       \"bd5Y9SUEqQvLh3+4V43/vdlOff0bu1F5y/Ef4MUYvfJ2H8iXwO+rhkz+hIx8a2L50SpNsugwWi9M\\n\",\n       \"2i33UkI8o32mVp+ISVLJyVq5ELpirRRMpZqimtREIlVakhstQtBSj8WpVoiaKlJyR5U4U60Npang\\n\",\n       \"wsKNImOhNWGw8HAMKiHqNdXNqada95s+H7HVUA0RPzbjhTXWs0qsuRJp1dmt8RvbpE/yY7/Crz7O\\n\",\n       \"U+/jnRmzUyYN5o9WGTbXn2PlFZ5/B2eR39/gaMK9n6uM0z72F/nO3Wr8eZjxifMc/mKM8RsZElWr\\n\",\n       \"3mfyRQ83CmoJsiP+xb/lwy/yBCvv7lsbHrr0FOMV4sNk19m6yfEWZy3GJRvDxLRbV5sWurGuP62r\\n\",\n       \"J21JsS+oJpLGSUX6uoFTc+frh5rlqrNkbCOZeXBMo8hdqJdeShODkNqSqimdqdRELcGWxIr6Ygpm\\n\",\n       \"7Lamqu4xUYhqi2vjRFupaVNNy8xARM1UlMiUogNddBeuvEyVGpaU2gpNN4xsx+A9Meok3HtP1Z47\\n\",\n       \"+xCPdVgZ8M7X2K/zew/zRsrlMe9+hvND7qwz3aH3RIzzp0II/wfn/jY/vM+1RcumCHzkO/jkHV9h\\n\",\n       \"uGWDlSX/bphYZzEW+jWgRMHFp/i9b585NzjUX10xaWZec8Ho8LNW3n/sgSfZGpP0eeN6YW144HR1\\n\",\n       \"ZJrOhVhTn0a1ciYIxrWOGOZW5lMHtcS5SCzndhOGSfWd62GjnKlJzENpHGmHJUlgEqvKJQuX9EBf\\n\",\n       \"Q6YtU9O15VLZ0gs3LddYCVXVc1cV07CqWsO7qs3HFMuifdFYbi1UYQErWhohkYgO04ZpHImCh5JM\\n\",\n       \"Qy5bRBXsioK5qbmJnpU4MVeYxJr+dGp/febBGEzKKC327JdL7mqJw4KybrlfmC3PjPJD49jzxNPc\\n\",\n       \"bXH48h/mK7Qgp2/ZVN3bgJ/2JdzCv0nwpMod/Z+83QfyteBPyIj8LgeP02qxM+KNZU5rd4SsrVls\\n\",\n       \"6+SbRqNblpt3HLXOSfIq6qyM0TAcauuoOxQWS8iKuVeVthfWV6cqm/h1lW5vf9EPrh6LVqRWpYLc\\n\",\n       \"qWhLarzwSzwWlarkmhuq+/aOz5ORvkXMVqg0IlmXy5HkmHmjqqCsRl4PXHi2cmcNj3B9pSIXV7eZ\\n\",\n       \"ZvSWKwHj/sPMX2DtozzQ46NPMv+lN/u9ITRK7n0/rU0mQ45+gfHvfoM/oFOOz9hbYucLWgM3Vunf\\n\",\n       \"WYycw/8aQvc/5n0zHr3F+ohf/FEaR6RN9rv0a5Ux3FHgQp4o0yCdEQVlK5OVNaGWmcRcL1Tnq6si\\n\",\n       
\"hCHyRMy9nhxS4+KApRGT9agWCheUjhTOISyi75bwmtIDaubmBqGpWRbOJRNvmC9ISHRXFLSdSk0U\\n\",\n       \"OjJNHXtGXhRcUjiTLOzk13SdGljWkBujNJfpmYvyMLYWg6JFMatM8GqNqnX/2O9Syz+XGmJ3k7M1\\n\",\n       \"3v98VUnbW+NswHufY+9DKrHSJS6vc+0LKlNp5PF9Xv2gr5CM9Lhxm+9+9Isev1Ox9K8aMcZ5COuf\\n\",\n       \"IX8XD/wWN64OnPVyd3c6Dke/y/jA1oulD73AnUtsbHJ3uUqESsNQNmU+5uo9lhvsr0WDbCgGpo01\\n\",\n       \"V/ulfjZw0giuljWtydydTnU9lKLGbC6tzYyTmbPQkiVjoRjaz5Z1VaRiiIGJlVCax5ZaDEKyigP9\\n\",\n       \"OPY+FdGeqCbf7qrkOEG1KXlOVVHdlrshuIxCYmymq6mxUIa+LlgKQRJzIaQuxOjUzJqxV000DFzW\\n\",\n       \"1E3mZmXfII6dohVZ7g8kcYPYdDWOzLNDu+muLHY1JzW95rHdnVPdT1bHcmePw5//Wj6zb3WE4Br+\\n\",\n       \"FP6jt/lQ/jD8Hn7s7T6IrxV/Qkac/iZvfLhy007m9OLESla6GF9yKzlwnGayOHJaHmvEuVF9JphX\\n\",\n       \"I5ixpowzeRLVDcwlxi4s5h96boSgZaSlujE1VMLUM9VN6AGcyDU0nGpU5VrBifA5AfoWPr34+euq\\n\",\n       \"9kFP5TVyqrppbQR+5zxipQE4XuPcoNoF1+dVv//un2b2aXaaFXm5Grl2yivLlYvr+UNOWzSe5L47\\n\",\n       \"dLo834lx93PCs4Xnw1MqJjR9K8qvMcaycn39yE/x7g4bffZX+HTC3j/9gz/dwQN32Rxx3KoWjuY9\\n\",\n       \"PnvfQrQ6qM7NqOTmamGtKMQ0Ogs1I4les6kxr/wj2ioy0VZViZYE18roOOFuTtjn8HwlEu6hE6Mi\\n\",\n       \"RF3VYv/mGntkZiOW9kNpXIzEhJqJscJMa1H1mntJZjmWdkT9EBy7YlVPMPe83NhlcysypTUHi6mr\\n\",\n       \"GsZSJ1qOrBlJYzV2fRh5tcXwiPuG1HaqqIOlEdM0c+9cMFmK8lrhuY1o6Zjmy1x7nVCQrS/eQH2h\\n\",\n       \"nfkidGakX8049ysv88YSVx+tFMheZvv5P5JJ08mv8/ELPHiOnSOK04nykxP5/4kz8v+csyZnT/DO\\n\",\n       \"A2aHPH2JO4NKrJ73GZa0BuQFzdbIwWoqjyPt/alxd+5SLXV+3pT0zuy1uJVyoWCe54ZJ6tVQV05X\\n\",\n       \"JPv3pJs3JUs7jkPHUagIIlEa6w50dEPQF5RS+Fw1dIIVNXNRqjRTmqlIyYrqetoUzaS6Gm7KHZlo\\n\",\n       \"Ss2VjjV0jZWhGgWexNRSmFtzJEqsartkpnQsCyOdaSGUtIqaWE6kcc8kb0gauaV4LOlNvPfJI9NN\\n\",\n       \"zr3Bc/8/e+8dZNl13/l9fje9+3LqHKYnDwYY5EyAQSQhElyJpiRLstamd9cluXbX5XK5LIdSuVxb\\n\",\n       \"Lru89m6tg1SSVqpd2YqrsF6LUaIYAIIAiTAIg8nTM53T63453Xz8x3mDGYCUCJAABgPqV/Vquvul\\n\",\n       \"O/fce87v/H7f0IWlf4bG+qwrpW5ayfEfMv5r4LeUemt4qXcxXgTuEsFUiveCj9hbihuWjIjI48C/\\n\",\n       
\"APaUUh+8UccB0Tp0T8OLs8ACHOnE1EQoGDHCOtspG2Nokdnz6RXX2cv5REaZfFQmg6JnbbNIkxQR\\n\",\n       \"ittIqCLJGQaSI02HUPpEcY/ADAlGiiR5dJJxNUHx8Elj4Y1KsrvEzKIooOUVriYxndHPe+hE5PDo\\n\",\n       \"oUa/Py0aZGeYkGtCVNLeHmYAVRPW74epAbSKUB25JLsRdAQiE8q7unIgQCcN/nexJkYT0btqmqZU\\n\",\n       \"fElEfgNqD4I7A4OXofWdNwLooHUR1h/TbbdUpJO5w5vwF4dgMoY4gXQfDjtQsxVPpn1cM0d7I2a3\\n\",\n       \"0MXNNkjbEVMBLLv6nNeBtMBcYhBHin1Rwnlg7wxgQ2YSchYMRFdPMsIohdDzQR/FZNQiMFNEiaJu\\n\",\n       \"BMRABxeXNC0SFIoFFCiDKXFwEFqEtMkxy8ZrbBt75Ai8qzJkxcPgCuO0uBWdBFWANYEGihMD7cz8\\n\",\n       \"ogXtSZjIQOMO2B2kaFuTOF5E6Ed0Y4cwqBG3Aw6c12N/YQIGV0vt27Cp9HXlXjfBXRmD7otvfgxV\\n\",\n       \"JCK/9xQ8dBruEzCb8ERfG2b+2g92XaiuduXeOaTHwWtAcvG6St4X4Cv/CRzNw9CDvQwMd+HDT0F/\\n\",\n       \"DL7603Cyr8X+preh5cLqfMzkWpPjNbg4BbmDJn5Ke9y4vk7uL6a01UJ2N6YqA/LOZbpGSMUIqbZX\\n\",\n       \"OFdwUYbCVGk84yANMbCkjUWfWPlEDMigGcpL6I7TKgZZYsIRAi2N4CMcJxnpNxsEGAgW85j4BKxi\\n\",\n       \"UadIKDWyKBwxiJOEbcMml0SEbGMaLlUG6LuhSyFMSEfQsWz23Cqhl8JtDLBUAysX48aw8ALc/3V9\\n\",\n       \"lteLcHaI3hFlgTkR6dwAT6obGiJMAb8AvLG4954JpWiLsAncwk2IG7mRlZFvoykLP4zz79sRfcie\\n\",\n       \"AXcd8hGUmgo73efy3SYTaeFg4NOOB/gluJBNKGDSTwxsc4BiQEFtk0NhqCo75InI4SRZ6tLCNIS8\\n\",\n       \"KqAkoU9AS/mkJXxNuGwRnZgcAHJE7KD7xjbQUVoNdT96Z3pB6V342Oj3KrqnPADyid5BTSh4wYQ7\\n\",\n       \"hhA6MOjBlRLEISQVzaqYuQANEy7OwJGa/j+fN3SZ3m5pumsrBacdaL3pxeadDqXUJvB9ysPDl+Dl\\n\",\n       \"B8CZgcO7QA2euhemTbh3V9OYa3lYSul59VKQEDQHxBMDZu2YhbQi5cOOpam+KQdmRKgmwl6saMcG\\n\",\n       \"UQidTsKlSJf8pyIQBxYF9iuDRBQxBiGKZRI6Ck4ZignlERh6fPvYPKpiYnqcFQdT2eyTkBomXhKS\\n\",\n       \"Mi0q1KkT0qHPGMIBLrPDPLHKEqiItnRIqQ53oDikwDB0jmiPzsVuDJggOTg2gM1AJ6cF26TrBOy0\\n\",\n       \"8uz4Y7Se32J9LMfEQoPty5qJ9awJ9SdG570nkvsr+KtPwe0tDWheL8NJH9pPv8Ux9IAnRo/XQuR7\\n\",\n       \"6A6++c8M0b4n3+V9Mqrk/S4E/50Wbc0swt3LGvPUDSD+GsQNkCKspKA2hJ1/CeFPw6lxvXzHtZg7\\n\",\n       \"FmM2yxqLtYBmno3vwnwAmwcifC/iqA/VXdiuZpiVKnNKSFTAjuqzTcxQOlSASeliqoRkpHQ8h65u\\n\",\n       
\"rhGQxsRVikBEN36TBAx9b6dG0ngOUMTFxEJRQACLgE1sskmKiuoxxKduCBuJsD/sUzEHZEUxBKYS\\n\",\n       \"yAQGNcuiNxhQlDSGX8YNAtYLbZYFblvRlcVaAV50YOv/hcLjMP6Qnn+aIlJ+CVpfUEr9oJifmy3+\\n\",\n       \"S+APlWLnRh/I94mruJG/TUbebCilWvDDTURvU2zD9jpkq1BuwkQDFquQTmKm+lqzp5GCjAMTmMyq\\n\",\n       \"Bn2jw+UkwjZ8yniMRS5GNwXmCo30Fn7UY2hPsjSsk01HWFIlibcZNy0OEZJFJyEDNCDVQFc8BJ3U\\n\",\n       \"9oDLAh9Q0BZdHb1NdHvnKqsyh154VoBKpGXhrRjqu3AmBytFyK6A2oVOCcZDaAXQzsLhZWgcghUL\\n\",\n       \"3BCcS3ChqcGs2bQG1G3+kVJq690ejB8mRjvl34avPwLP3wXheRguwN1F6FjgWeD3oRJr09/4HEwR\\n\",\n       \"EU7CtgXTsYVyDTyV0DYjptALUl8lVGPhnCmcN1IM1ocYM7AeQMMQpkJFxhS2LUVfCZYY9FEYJIwL\\n\",\n       \"bBi6uqAMmE1gUtmURWup7sfnkrhYCqrKx4w8uqZJmgSLhL4SJiXFjAqoskXbN6hbPQYyJCuwLwYj\\n\",\n       \"0dihIfrfrg+dADpZcJRmwQTAphic2ycEgcl6/CCdLxyE5VOcn3+RjfsbnJ+D6JtQf1IptXPtvPae\\n\",\n       \"EpEarD+kcUmdb0P32av38Hs8XoD+k3Awcw2AC/DKNHT/FHZehsX96N3B8ogR8mXw/x5MLgAnIByD\\n\",\n       \"idOwsAbnfxx2NmD/q3A5DxdKMN+DaFLTxDtpm7wRsIdNVtnskwZ51WdTKcrikUa9dv8eQY/LIcBF\\n\",\n       \"8QoKTyyqSUxFGZhRQtPWVbey6GsxIuQiERHOyDZvQAVhSMyaDOhjYicRkaErq4lp0hGhqiLyol13\\n\",\n       \"mwWL/rCCvNRh7WCHaMwl8IvU6x7dP/HxL8OZaRiehcYzkDkKdz8Kj65qinws8Ow98GwIfP7dHc53\\n\",\n       \"P0QoA7/IdXYo7+E4CdwL/D83+kDeavzIY0ZGLrB/DFERuic0NmO1CpNLWkiwX9VAL8eAshhsJhnG\\n\",\n       \"E49NI01fpamLR0d1mXW2MWUfxSTFThwSB9o4r5dMQrJCWpnkkz32TBiIT0MScugk5CB6LjTQQ1JH\\n\",\n       \"76DXBMJYszr2oVsATXRZt4ve3WcD3fNuhhB7cO/XYPkADMYhdQ7Sdah8GPbycMdlMMdgbx/0QjiT\\n\",\n       \"h8oSRKdgexs2voLOfLZv1h3PqHz8pdEDkemsTt5iH8oDnZilU7BqQKoD0QNwaxfCso3nWaQs2HJt\\n\",\n       \"MuTxgy5dy2DQgEQMBrbLIOwxndYLSK+g/94kYSiKtjIYw6SsIMDExuQ4PssJdENYSGBcwHECDBxM\\n\",\n       \"DMYl4iwRbeUxnQgpBT0vZsu22cIkFOEgBnFiY5MwqYbMi08auJLAaYH95jUXpPUI+kOt0C0GpBN4\\n\",\n       \"uQB3vAT3dIQXggr1YZWNxcMjobq7UGt7dFil8/tKqQt/zXm9gFbdu6lidH9/3eXPfjlFfsKj2POR\\n\",\n       \"DaidhMEzo+v85Te8pwn8HyIyBczCzj1QWQDy0PxTGNZgtwi9JniLMPtTQFZX3Vw7YFpLjLEraZQy\\n\",\n       
\"sVFMSsBkrNgfwauWFu9TwKKp8T05LI4IvICQiU1igV4E+T7k8lpozUTTfRWKHD7z+PjYuBjMqJCh\\n\",\n       \"pLg1CmkZBsPEQMWKtmlQjmMuABOinbs3sVFhlX2bGVptmxe8EwzOrMDsNuzA3hevSreLiAnVX4QT\\n\",\n       \"DVg/oqtrVhtuWYfL94vI164Dkb9f4z8DPq/UNTG393C8gBZlu+niHU9GRGQSeAPQkG2l1C+8iff+\\n\",\n       \"k+t+fUIp9cTbeGjXhw2Nz0G3BdFdUKxB75CmjxYHMDWEJAdxLmTJdDmrDqBUEUVCxCohGbbjDuXk\\n\",\n       \"HM2URc4U5uxNQsulEaYIgz5OWpENTNLmOAXq7Fp9BqY2sUtxDah2VeRMi1mBZeodUW/0tyCBgwlc\\n\",\n       \"MsEWrYMR2KA6EPTg+BXY6cHlLrRKwATYRfjoKc0AUbswl4UXp+GFbTj/P6NLLstvhqI7kvOObx7t\\n\",\n       \"AKlB+WW4eAz2OdApg7JhZwj798BzwO1CVqAd5amZRfyoT8ou4XSKDJ1t+l4OiR2SQppUOEnn1nVe\\n\",\n       \"cfrMBtCTmHHLYiY22EOYVgnbhklKwaREOEAp0YycrgX7Q+gEEetuRCC6ktFWQ52AmgrlaZxPV0LG\\n\",\n       \"PWjlIMZFcEm8PmZiYioTP5VQSiBK4EVTXzu9RLfuTjiQG2GBkhDKPdjNwEQnptBssVY4TO81PNAS\\n\",\n       \"VK5oQNKVGzJE72CIyIGj8PN30Fuy6W012CpdBFmCp9+YcItIBV16rCulPKXUNrpMeFJE0vCaW7kD\\n\",\n       \"zidh4gFwx3WiL45B1nWZQ6iqlFaEEY8rYpJNInrETPrgC5BoMULlQ2YAUQ6UFREY4CQmvSRhPFJ4\\n\",\n       \"pqahbyRaqK5gwIbSlPbDaOzvGiE2QkoUfRXRNxJsUXQlpqOETJDQEpNqoKi7iqKCmXrEruPhl4f0\\n\",\n       \"Ugk+UX79WwAAIABJREFUTzH+8Q7VTMD4ENb3iTh/olRwFkiBXYG9g1Ad2U4MJzROJ3WFa1LQ78sQ\\n\",\n       \"IYuWWf/IDT6UNxsvAXfejEqs73gyMir3/tgP+N5/8vYezetDRPIl+Knb4OgUJE1CY4nzy02yG1CY\\n\",\n       \"gUfXIeOCNwu+gi1lUzTS9KSAUg6hZEgnMYjCjVr0nYhJK0UmjnGHTcyhxVz0ImeqKTzVomULsddm\\n\",\n       \"LecxJXCngiujNoyFFjs6h164ZtHVEQ896Vy14JtREIpWPz0zEq4SC+wMpHrwpWOw+wxUZ7WHxjCj\\n\",\n       \"xZqsEQJe0EJXEz0wryil3hQ1U0RmYeKTcOAARIFI4TvQffK9L/O8802Yn4fjT8HONGwJFCdg7iJU\\n\",\n       \"65pZ0XbB84Xt1DiBJIRJly16+FmDyLchlae0OKQ35lAyW9hWmik8SnbMAAhURIiFb1hIYjMfBzQM\\n\",\n       \"kxIxLtBO4FEfvpSFyykBsamIhRCwgSInMTOBlqBvpiCOoTcyPcx5sJruYRGRNkL6TsySmdBCV8jc\\n\",\n       \"BI7EsG7p91RNSBm67ZSygDSsjjyHZjqwobpsPn2JuDeAOR+MLdjZgT/WdFmx0Nnx4L3OmhCRIzD5\\n\",\n       \"QbAnYLg6ai9tXPe8TMOnPwKtGV1KBGAeil+EnxCRXxtVTnJQ/Sk4fgxyCewmItm/GlVOFOgk5No3\\n\",\n       
\"Z/8DmP37kBuDyIZoDLyyzQR5nLjNltFAiYEixiFiVRSugqYNhiGoSLGrdCVzcggTIdQr0ArhoZWY\\n\",\n       \"Qgjny3rOWS0Dy1Aeh+N5zQrb4Brw9R7AQrEOtCVEiUkqThi3IDGElnJY6AmBgrokpMOYYttnt3SZ\\n\",\n       \"1t0uq1aBrFMi6WcYLO/Smuzw4S146udF5F8AHRjO6vbOxEhbJD/UWKTOYeD9DmT9ReBbSt0crsNK\\n\",\n       \"0RFhBbgdnZjcNHEj2TT3Av8UOCEiXwF+8t1a2ESkVIVPTcPjB2D/PliahFfL0D4FM9+kzx7Bb8KF\\n\",\n       \"fzSyTy9BEBtkbBffGdI3anSNDDEDlJHGTQz6tiKxAo54DVTaxVQOhuVjscNE22V9DFwrwslGHEGL\\n\",\n       \"krVFbywKaOzHFfTG7GF0QnJ1XYjQ0t7pEVtjWTRe5ChACL1NCF+Foy/ByRkY+zjM5qCsoFcG34Bn\\n\",\n       \"b4OHz2tH3l4aagqil7/H6fle52sCDv0SPOrBwqrGXrz8YXhhAvj9t3t83t5ITsPpL8PeR2GyA8OL\\n\",\n       \"2ixw30UotmFzE87th51BitAYsme3yTgeGRRjqZi869BglyvZMSLpU7aFg0YaoUdkxEwrcGI4nUQo\\n\",\n       \"X1gWYdpOiI0ID6gnULa06eDUAC4VUtyCTYAQkMclZpYWK07EB+sQGdAyNbtpcgcODOFcOeFsdYBl\\n\",\n       \"g2nAtAcP+lq2ftmBlyKNEckYMG6NWEQ+ZBKNJznvZHh1xuC3pU/4B4rwV9ehu65Fa0L07t8UyX8M\\n\",\n       \"9j0CaQt6bRH7L5QK35NAOJHU3XDnz8E9TRjrwuYBeOG4iPwrpdTy6GWFEozNaKToazGnfR/mtyAn\\n\",\n       \"Ij0Y+1l4dEHbDRhoZeFv/AScbAOnX/+9MgMzvwwHsjDmQLsEwzSsGCEpuqxjUSWkgI9CWEdhKpgC\\n\",\n       \"njGFlEqhYqHNkHRGKyY3YliyYWwXDvc0JmMiAfHB97RmUAwshbDfABGtH/MA1+aHfegW85okPCog\\n\",\n       \"gVAME76ejtlJFZjaMdnX2KWRgq/OgGkk2LUKMp6jEJn4FBk6E1j507x0osvRFVg+onFVRh2u5CDt\\n\",\n       \"QsWDtgOXXZBNRtz3d3Swb1CMpN9/Ge3QezPF02g9lL9NRt5MKKVOAo+9W9+ne58sAMUx+MkP6zup\\n\",\n       \"+CFY8yG3CY848I0TsHkWju0R/iHwPBRSKbYzJtaxhCBKuODO0pLDWLHCNraAGqFEBKmYglikGcM3\\n\",\n       \"XdKASMCwXCO0PW6JtYtu04KZUTLRQRvbHR1hUi6iJ63s6KhNrt3rJTS7JiuA0gycJtpFdH4TGiU9\\n\",\n       \"kbqzMH4QPnwK0jH4A1iqasbMhYEGspobsKugvSIit6DbZq8DI4rIAlTv1S2e3ATcZlwzTctE8IEV\\n\",\n       \"2DkuIjMjpst7MkY72ydF5Hm4Mo5Geabgq5+G0gwM1zTbaKbWZ/ODBkbRI23Y3I2QUhF+aNGxs0w4\\n\",\n       \"MTtGgGU4pEaf3kdXM4ompLAoYuP1hMt5k7641PEoi5AWoeZGBNi4VLAQFAZphBQJWRxWJWI1Dc5Q\\n\",\n       \"4zzmEkgygAPHElgNwRrCtCVUTJtKqEgIcQx4KdEsqKMx1E3AGnnxKIeL1jzbUiW1XqT1ckxovQI/\\n\",\n       
\"UYffU0q95vUjUnoc7nkY7l/X47ubgaf+IxHzd5SKf3D7mHcgdKtw/lPw4S1NQV8vw2QbHgmh9Ung\\n\",\n       \"N0cvDQMg0qXD16o8PUh1MA7B9H+jk4HCAhz+1jWV30wE9+zB+od5QzIC9uMwVRIOGSa7+YQpOyGj\\n\",\n       \"IFIJu5KwgE0JhYFJlhA9Sno8SmJwORrH6Y4zvXeF9kSLxTRYlu50HHMMLhxyaFsK24JqP8B2FHMl\\n\",\n       \"qIbwrMA5A90CHB1PHIMSvYmZAWpKMfDB9CBKUuS6Npelz041ICcJSQS+A7cs5blUnkUxhnIcrLhN\\n\",\n       \"e6GOokxidRmvabG8OAF3D6pb8OIx6M9CFEF+EZwW3Hx6Fm8hPgucVerNCfu9h+Jp4BP8gJT5GxU/\\n\",\n       \"EgBW3Que+qz2b3LGFCsnVqids/GsFMQu9D0tsjG7X5u0KKABl8/lGPzDEhNRQjeO2banaDINEpOI\\n\",\n       \"jTCLTQ1l1HElIBMUaJopKoEiyoKRmGC5xEaPqQgwdRIyFuukIiNwQfSiptATylXfmQy6pSJcE9GK\\n\",\n       \"0CqI08CyGmlg2BA8oNUkT6YAW+MS0qNJIuXDzBKs3gYbArlluJDXiqb3/zjkFGwgkv8m9L6qy9bu\\n\",\n       \"A3DHZ+C2ga6knP0obA+gtQul66pXUwm8UuWHEq56d2IEsnsNgCYivw7befRkWrDY+aUSks5SVC2U\\n\",\n       \"xAxJyGHimiEDJSRmwN4I4xGLh49PBj12ZcAkQ2EoTHshvUyBnSTPpNkhMCMakUFsdPENkxATQbDR\\n\",\n       \"10IMRFgIJrtGzPioBL6bhvFIL1KhQl8gRSiGgp9E7OYTQgc8Q1dLCraWLPeUdmPuZ6AdZ1i2iqT7\\n\",\n       \"BdJrDpZ5P/6KB0ef0VSO86NzUYAjD8AjK9faeeMDeKABtY+hs+T3UlQgVyrytdsnGea08o6wx/4L\\n\",\n       \"IZm8iKSUUr5SalARefU03HrX6BqNQb6F/YkG+4fw01uwWoHde2H1ETj4JDiBbo008hBlRMR4PT7K\\n\",\n       \"XYB8SvDSMY4DrqWn0VlgkQQTjwDBQjPlImButIk4rGJCu85OISRSKY7FWg140gClYLEIsYopk1BQ\\n\",\n       \"BvWixSAI2TahndGblFaok5eh0nNCMdHXSGtkXYACbwiVU4rE88nYAZVpxdRFLT/vPzjCR82U6GZm\\n\",\n       \"sRMwTME2K6SihOGBHZIJODsLppYmoXkO/GMZhv40jbAExQ57n9il1W+RPiUin78KeH2/hAgm8N8C\\n\",\n       \"v3Sjj+UHiKeB//FGH8RbjR+JZAQmfhZ+LAfHVg0W3RKZZp3KbIZXzF1UZgIGafA7UG5Bak8DNRo5\\n\",\n       \"+rXjLJ5ZZ/WRKqIy+MxD1CblDLAMwVB1ImmjqDMXQ6TSbIYBSWKSsRJ8K2bPyCJJj5atWTN2AruG\\n\",\n       \"1gZRI7nx9RFt10L3gdfR+iIGeqLpcG3XNkigbkBO9AToKY2Sn3K1mdniEIyN1///C3u6GnL+23D5\\n\",\n       \"Jch/FH68rtsWRU+D6b72UXhuW0SuwIFPwSc2tNYG6H784BCcPwQPnb32uQ3hul78zRSjikkHQETm\\n\",\n       \"jxIdm2BirUc6FWCNmySqzh4+U0AlbtARF9/IEKkmHTVkThQ+ujR+2YQVNWSYCMOS0BUbL0m4YhlM\\n\",\n       
\"JCYBPh2JsRGG+Pi4FIEIRYzP9kjMLAO4OdhzYc8GqwR+CLURSLliQikxwFTU0Xb2pdiglihyhiIK\\n\",\n       \"YNzRgNmqMlCGzVzSYnMwhkQZwj2ABRic1XSgq9ocJc3usNTrz9JMG+x978qAvLXwC2zeeTeTvSKF\\n\",\n       \"JsACiVxi8cQFehfhGnCvCV96BsobsG8c1BqMXaFqeHz6SUhHMN7VuKFiGlYXYH1cg1LTaZjqgvwj\\n\",\n       \"Efk9pVRHf2LYBT8lGlyeaP+onoI1w8ZQMCSmLQYWFllCckCCkBWwQ6iaEQY9rDzsCczaOpntKthn\\n\",\n       \"JHQkYQ/YlZg4EgqWcMhQFBQMQ1hSsB7D0NHgdwcwQ8gF8GxO/2wuQX302vELir4LR56H0/8h5HPQ\\n\",\n       \"caGn8mCG2JaNFSmUAaaRwXUDvBzcU4fCQ3Dpblj9/0y+Vj1E6liJQj5B0jnM9gxTWy8R/YMO/riI\\n\",\n       \"/MublYH318TPoMF837zRB/IDxCLgijCv1OtblO/leN8nIxrvcPscHFsDSEj3fDBKTLe6LFVO004d\\n\",\n       \"0zVupwPxaZjegT9RSoUlkSM9uP0OglQRghUorWNaFaZUD6VWEAmZVH0M0c65XZVChgV2G11UxaZv\\n\",\n       \"ZFGJR54d9kw4KroN01FwCr0bikQzJqd8nZz0bBiKbqOURSurukq3djaAsqErIxk0zQ8Fiwa4ogWU\\n\",\n       \"rhiw6cFYAUqj/nOtBNs7oP4YghKUboOVMqyO6V10ZVFjKJYfgW0P5s1riQhA+TL4+6C5HzirE6Zz\\n\",\n       \"k7CyhS7V3NQxCQ9PY1spins+nf0pnDgkljSm8lCxwcCIIeoQqw77lXZxvWJoczNbQZgYHAgNekaW\\n\",\n       \"Wy+FtOZbJIUS6cjHshzqZpmWFEnLLqbqsiIJHiYZYupE7Ko+toItBUYNdqfh4aEGoO45sGBA29SA\\n\",\n       \"xk0zwhSdsDqi3xPFYMc6h16I9PXTVYrENMjFQ6zMgM38cYJFgCHYgS7pXI2OxiDEon1nrkYtB9EO\\n\",\n       \"XG1z2vfA+AfAyELvDLSeUkpdp93xrkVphl7fpXpV4Q0TURMEVo1W2LjOSViLtslv1zSoogjOLfDw\\n\",\n       \"rZAfLZyVIbhXYPV2aN0NBzyY8WALePR5uHQfnPqgyNxJaD6vvazqA5izFb6p2AJiEwST/cTsiEtJ\\n\",\n       \"GRgCrvIRgZZSTAHLpkV2aJLEHu2MSWBVSBKLphriW30KKqEaa4fpW2I4LQrLMnHDGMfQQ/ZwC57I\\n\",\n       \"wJU+PJ3XrRkjgR0T2i0wFHRyYKXBsqF2HFJdOP2TcKhgMG4YWIOIi5khS0YbIy4wtBx8P0GpJpHZ\\n\",\n       \"Z64Ld17WQNVqBdwH8rwwOEzvdAtudykvO5S7YCYTdCodJg/D3jHg1XfxGnjHQgQBfgX475VCfb/X\\n\",\n       \"v9dCKZTIa7iRNzJZ37Pxvk9G0N4a111Q47UGWwOXKGPiREfgyVNwx6a29nxlD76ilLosIqlpuO0I\\n\",\n       \"WPthmIN+FgqvIPkeAXs4IhxWaQqqhWdARsFUrcYr2SrCFKaXENgRGW+LKAVFH17Mw21Ky4Z3Qm1a\\n\",\n       \"tp2GRIGK9G6nh7a494BzAgcSncDsokv606KrJTHaBM9Renc0jGC8A24TFl8GdwpyBU393KnD7peB\\n\",\n       
\"HSj9IhxcgCNpKITgxXD6OHSroBzgL7/bXLXYgvNLcPko/NlB3TPeW4S9f3fzUHz/+rCglCNuDYmm\\n\",\n       \"fWiZ7FZ6DJyEcVE0gV5ssBfFuAnc24aNgjCwwBZtZDirbGbbCUuFiJ4T0FAWJwKPhjuNGaWZSZrs\\n\",\n       \"popsJilyxjIOEREWuySY+JQVGB2wNiDwtcppJoGcr6+JATAfwaKCFQMypr4megqMOGHS1LoxXQO8\\n\",\n       \"EJIIainFkD6RnycOIgxjERZsaLwK7uA6hUalVEuk8jI8e5fGjNgJtFPw3DjsjBxAi5+GOx6AO3Yg\\n\",\n       \"04Yrd8Hzt4rIb9wA8TN3kviCy8b0kM6Y4CrFQMp0ltPEtTe+eHSNLgNo6Fj3xOtf8cAZ+IssbN+l\\n\",\n       \"pTa2+lB6Eeq3waECDC04Bmw+Bs/nIfM1xSs/LpiRwnU0wy0DGFgYScKG4VCJe+wZMR6wLxl1unwY\\n\",\n       \"b/i0qg5bZoEZP42yFKHhkk4cBmaDbKzPv6EUFYGUitkTsCKtGrti6oJe+Tto4b1EV3UWPGjYsF2G\\n\",\n       \"XAjFsk4mmj3YSMH+kpBN2xjDGHEMqsEeWbOArzwilSbpRrisUrQ9TuxBMirF5n2YskLycxb9VZfK\\n\",\n       \"rkO2c/XMWYiCcgDlA7xPkhHgcXR//Is3+kB+iPjbZOQ9GDWohdB19G7IiQYce2aFk4/k6KunodCC\\n\",\n       \"Z7bhc9fTAoGDh6Ft6JX8lhx0UxDmsaNleuIzYYLIgD4R+dDEVzF31jx6hXPsbhWpjRskQYdcts+x\\n\",\n       \"LaiV9cThCsx5MN+HSxnttjs5onWGpmbIjIsWsaoBm4ZOKHJK73rSBliiuyNRBLYPxRg6CQx7MFyH\\n\",\n       \"7X8G/scgdQ/E4xC2wHKAD8BCEdJZKHhafdUFjvjwah6SKrABqz3YymuPl9USnHsQZBIOXIBdG7ae\\n\",\n       \"UMr7i3d/KN+Z6ML5PvHCEq39HofSDumeS9fdYz0NV/wszqYickOKPdjLwRCDCQwKXsIOCco2MKKQ\\n\",\n       \"ot/h0oSBZSTkGika0ylEQaxsClGXmlkhl2RJpM8EIQKoWGtInM/A+CEYjzVAcUPBWEv7CoU29C3Y\\n\",\n       \"NeFECFMGIFqzomaA8nSZPu7qNp6rdELbdXqYYchS2Sd9WDG24JG+NKR7DurH0RnuKJqfg2d9WLx/\\n\",\n       \"hE3woPZvlYrP6uribffBR5avVU5u34J4FpoPAF95l4dsZxviR/Gf6eIXAnAz0F+BfAe2RXIf1dT3\\n\",\n       \"waU33NNAchEuDGFfCfaNkqhmGho7YD0BJy7CwIWl+0HdCX4bVKArTw+twN7d0PUSqt+AvY/oyoRY\\n\",\n       \"0LU9jCiNoxQFZ8B2EJBBJwgLoQagpnYjxk/Csw+X6VlCx46ZFQPDhCFpbGWxZYaklAGRIjJ15bSe\\n\",\n       \"6OpXwwaxYTIB7oIrAUx5cMsKxArUPFQU7MzAeAOakZat396vFZqHKQO3pzc8LbvH9HCJVWuadLuH\\n\",\n       \"sbWLWd1j31BriWQ8fW4CC8yhj9ltkrgWoXn1TAZEVh0jBtuD4fuJUfMrwP9yM1ZFrotvoQG4N028\\n\",\n       \"75MRpVQgkvoi/NXPwl0dLWK2WQhZOdVk8PtNLXD0vW4kJwtxGU6t69V6nwNikKiETNghb5vkVEwq\\n\",\n       
\"NMEPcLqwWICdskd3xSP4n8DfgL3fgnMzkLehGsOqwJqjhbcObkC2q03M1mZhHsirkUqn0vS+0NSt\\n\",\n       \"mpxo1s0ADVpLh7Bl6/fsWJD0YDmCxu8qpbZF5Otw4DDc19b28a0p+OYjkDNBdfWiWlF6F5aYukxf\\n\",\n       \"2YHdPGz8IXz5P4aDVWg8pEXWMmfh0ItakOsrHxaRRaXU4rs6mO9QdOA7J7E+k2LOz1JIIpQbkbNt\\n\",\n       \"jmwUuLA9zu5LS3Ae1AE4/wlI+SZmbOJ0DSw/YLcYQiGh6UKnljCfJDh9EzsZElvglSK9mmCRRAam\\n\",\n       \"CalEJxgtgVUbjigYN3WLJRnCgQFcLkLsaOuAJNaMKuWgd21AmMB4AhdNnWxuAbshHElBNwuTDdhO\\n\",\n       \"fCZrPicSaOSg/AxMr8BXHheRbaXURdD3CfB5EfkqGkndUUpdxV5Mwpx6fQsHYK4JhWO8y8mIUqpZ\\n\",\n       \"FPnWN+BDd+gLtrEE1acxDvQ4WoTbDugW6KXHRHJfU6r39eveOxSR/xu+/HMwOa8Zazt92PwdGP8x\\n\",\n       \"uDQD8R2Qn9c2C14G+hVYmYXpC3BwC/7yDpibhuIqzBc19b+RMYjbQ4ohSFl3wnbRMi4XxuBgAIkF\\n\",\n       \"z8zCWiPFWCYDqQYrKYPJSLCNiD1TaCm4N0nIoNWXMwHEkdaqcWI4ZOgNiLMBwyoc2wHjomYFGQ/B\\n\",\n       \"8QBUBjJD2FfXQNRsExqi8FMRO2mDQWhqbAk9zHARqxNx8DsQ3QbteUht6+QrsDQ7a28Y0f3zM6hH\\n\",\n       \"p6kdLWFFISKrxFadw2fgYgKD90VVRIQPoqlKf3qjj+WHjBeABREmlOK7qoXvxXjfJyMASvkvikgT\\n\",\n       \"tj6grcP7J0eur3t/w9s2F6Fahv13watdyPdgooW/v4bn2vRiixiFZw9p20oD1RIodeDoGTj/MNQL\\n\",\n       \"OlmYNmDa12C5joLzJjSykN/Urrk7s9o7ZUHAE10BUWhMQFHBmoJjgQbcLRmarluyoSfwpAvNGAZ9\\n\",\n       \"GP4WBCOviOpn4NZxKDT15xxogG/Bq4/B9B44e7AzBokNvR6UzsOOB3hKqboWPFr9ONw3CfMXodLU\\n\",\n       \"a2A6gRMd7QDMTZ+MiEgVcJqMbVrsN7pEYwlqxSPfcTAij065Rv1zwFegpSD4CSh/JqbxmEUQmSSB\\n\",\n       \"4EQhZlYrah4xYDkNhWSAiicoPxfQPRzRmc7RiT3aOY9SqBfLoQJlwa3oNkAaXfE6mdIKu7i6beOH\\n\",\n       \"sO5CNdAvaojgJDZOnBDGBm0nRzecoGa2EHeHDVtxpKf1ITJDODgF+4Zan8If0xiJ29uw/iBvYMqM\\n\",\n       \"xL3e6Mo8/N445Z4L4Q1hUnXgL5+DrSV4xIRCG7a7HCvDz1y4BsQ9ZsIXPiYiF66vkCilNkXk/4St\\n\",\n       \"STQyfEcpFYuIB0//r/BACZwuNKZhJ4S7L8LlY9Ba1hXWbBtmL8Hlh4WLt1jMh0K2l7CTdQiaHlci\\n\",\n       \"MM7Ard/UyULfhq8/CBc9KBZh/ECLVK5CwRjHo8cVIsw4RYuEiRAkhDOWheFZ1IjYsSMKJuRiGA4g\\n\",\n       \"qUF1DdyqBitvHgczD2MBpAPNpEvy0CxA2YOzBQgugDNM6JhCfr/BROjS6Fjcst6iruCUguTPQd2l\\n\",\n       
\"k+6tfRCGOs9bfgmib23B8w0GFx22/v2ASuxT3ISzLdj80xuEHXon4leA/02pm5uyrBSRCE8CH+Um\\n\",\n       \"adX8SCQjAEqpJbRs4fcN0e593QbENUhVNWdysIkp25j9g3i+x/qgyXAyxVwipJMe2Iq+BzM9rQh5\\n\",\n       \"9AFYzcItkT7NFaCX09RB39Wl3fJB2PctaG3p0nAjDYVEJx1KRgJYiSb6DPqwGoNj67LtakqXZu0N\\n\",\n       \"+MA6NE149RPg/blI9jhM/l2Y6wAHYS2B3MtwdBWebcNWAMUM7L+kHXpXs2C0ofHKVYreaAe5BhNb\\n\",\n       \"UG2+/gzlfLAKb+sAvcshIkUY/xm49YAWk9u+LyL7asSx56++RnOYa/PApeuwMZ8XkSdi+JBH47OK\\n\",\n       \"iXsUZQUdD8YDKJlQM+G822estc7uQ2mS2KbWSxEOVhEvZKYDhTlopTWu0mPkEgxgwT6BdqB3paqm\\n\",\n       \"QbKVPKyX4DYElZjkI31AW3aWKCkQD45y5ImQOH2SzQ/WsMegNNTVkKIFnQykdnVCClrZ9U2P4TIs\\n\",\n       \"t2G5DPtH18LQglNFqP3bH24kfrAYjcfLo8dIJ+X2qdczglIxHA3hyjE0+vv69yu02Nv1f1sTmboE\\n\",\n       \"e13YGgMKcOsGzHagXYbLU1oh2W3q/Ce1z8EToW0LZQGlTC5SYfnVBtPndPK+k4fNvHYEvlNgoQQ7\\n\",\n       \"RptivMWiU6WgClgEDNQGBBHNHrwU26TCLMrLYjSapOcVExVFVhIOxtCbhtUCuB2o58Ab00mJZcJu\\n\",\n       \"VjPz5i1dSekbut0XZeG5tZix+2ImE6FNgNlImP823LcOfzwOK78GKA3kvXgM0kNonkFf/xG6d/xv\\n\",\n       \"ROTPtN7IGsDmm7GRuBlChHvQyqWfudHH8jbFV4GP87fJyM0XIlI0kU9aVD4WkVcDuscbtC6/TGKt\\n\",\n       \"kbqzx6yRZRjNke6DWm/SpM9WaxNjIcVk5HHbeTAysP4hONqCelYvMkYCrq/pmaGh8R+hoZktu3eC\\n\",\n       \"5cPcEM67miEzbmqA6jrQ9CH7vE5U9vsw34FeBhr36efmmjC3rDd4yT7o/EM4lIJMA0p9XfnwZmD7\\n\",\n       \"56BxEaQFrz4Fa49Bdj+oAfiXoHMSml96wynZ1mZ9976BZbFWgvaL79KwvO0xMv/6L+Duadi/BRO7\\n\",\n       \"8O2LsP4oVFowPvJtaaS1Jh7r179fKdUFvigiqzD9TyH1IBwdQKYHbg+mYvh6GTbdNsZlj7jvkve2\\n\",\n       \"cMcjKvtgLKvbdBZaJXfPgh1DW9KbQEq0VP3WGNxag0kb1hXYKaGdgVys2BDF0DBZlAxeNEFaBoRz\\n\",\n       \"AUZ5iunmHr3xBNuEggtLJaiehngI7mhRXq1A+8k3c76UUpGI/C785d+F2X06edtQsPn1FIxVRY53\\n\",\n       \"YTOEszfONE0MXVH8rr+jK0JvNowa3NKGqZOwPQm7H4DOcX2+VjOw+wWL8omIkz9pMlVxmO8KXRWx\\n\",\n       \"mrJYM9IkDa0DorLwbAG8deg8DUcfhVsy2q03iBV5c4v7kwY7iUUgHpkkJhlqfFh5OEacFMiuBQwD\\n\",\n       \"SB8xKEdCWwWElpYDGE5C4sKrMZQdmK9r3aGGpTcyHU8rw+74kNmD9DrsVGDuEty9p7BjRTsLG0dh\\n\",\n       
\"Zk2DonFHFObnRo/vGaPEZOWve/4mjv8B+OdK8R63uHjT8VXgl0WQmwH/ciPl4P9T4B+Mfv2/lFJ/\\n\",\n       \"dKOOZXQ8C5PwX4VMfMBheigUhx2i3C7t/Q3ORT3mKhkqhslaLiSOQYYpKnGBVKNHbLWoljzscQgn\\n\",\n       \"9C5kO691AYZ9XeFIZzUllgA6KTgQalpmOKvxAAMT2pGWAU+uMm4URIHWDVkswWRFa0tcPqyrLJYP\\n\",\n       \"y4d0ZWNyCebakPsIHH8Cepfg7J1QHIcJBWOe3lk7BpgNWP0smpJhAW2l1PdiIWyLlE7CEw/AbTUN\\n\",\n       \"eL04By8OYXBTSQ1fDW14VvjP4cBjcLgD3gJc7MOx5+HZHXjmITj2AqyPwdIchOeg/EkRWazCrWkt\\n\",\n       \"h9rZhiXNTCreDTMmpFIa59N3wWhCxoFWFop1n8lLCdE+uHjcYt4xyaqQwE7omSPRq0QL0pmO9pqp\\n\",\n       \"KWjmdDWsd0DrXYQp7frb8w0uGxbZWLDCPKVslmbikO0MifYZZIYuydCg3Ui43ISZCCIXXjgMuStw\\n\",\n       \"qAMvz8KLXei/KiIlND7kb2RFKaV2Rq2NWXQJRy3AL9wKqQp4Nbj/LHxERP71jSnZt87B4qNw5LrE\\n\",\n       \"ORK45Oh7Qceo6pkDgu9tP7H7DLz6czDWh/RAC3/FQwhWMxwLMxR/3uFKccDKXEReUtjikSQm4TB4\\n\",\n       \"mcqJAAAgAElEQVTL1KBPMhmR3oOFBnQmoHm3xnZM+qAMiBx9P64DpcSnKD5GpLVDUg5kTSHBoBJH\\n\",\n       \"qP1D6koYE5deEDLMwdMZtAWBoTH17Qg229AqamxJKoSDKUjvwq4L/qvw0BOwcwTGJkC1tbeVDRT7\\n\",\n       \"0KnA0jx027ye7v0jFSLcB9wHfF8D15sozqPn90PcBC11uVFeWCKyoJRaGRlzfUcpdd8bnldKqbew\\n\",\n       \"o3nL339gHB60odqFKxY8VsZ6bIwxy8UK6wg7FFVEMVG8tDDBoWiWfL/GVnmKAIU59EmGLoWtFTr5\\n\",\n       \"ZQ5MxxQCGIxraqwCdhOQBA6HkMnoXe+egnoAczGMDfXEsGvoNv0gA/0++Bm9y8u0oB/C1m9APoKF\\n\",\n       \"W2HjUYhnwHHhdk8D2gYG7HqwNYRL6/D45/Vk+Pl/D8YOavZO04XaGtz/efjmGJz+1ZGJ4fc7TyY4\\n\",\n       \"d0H5Y2DcD1YAuXXo1GHr3ykVX/p+n/EWxuQdHXP9HcXH4cinYf6QTkYyHnSyms459gL80ST4qzDz\\n\",\n       \"MNy9DFN1WJnO8J0HP8jOc3fD8hUY+zLlnxoyUUgRZCJm0j4pSy8QSaTBwJEHYR8eew7OzLucn4rI\\n\",\n       \"VhSTKUEiwUhCJtMasuEA5VgnobsmrA01k+lAWrfk+i6ElkkkVfqME8URttrDFJ+UOU9nOM78csig\\n\",\n       \"YuL2tvGSRZw6DOvgHtYqoc1NzahoBND+gwJ9ZxyOpoAWNLfhC/EIzPomxkmm4B9/EnL7tScBAOdg\\n\",\n       \"8qtweU+pP3xrY/LDj7tOMoqfhiMPweGBrg5cSsPiU9D5slYWNg/D1N+BfFVrttTPavaZY8NglRH9\\n\",\n       \"FwqPw+zDkL9VCPZn2VMmjlFgfDcgGrNYnBrHVxvM5cC1bCw/IteFYtSlV/DYXlaMDyCVgU4RtqtQ\\n\",\n       
\"7sPhyxAcgPKENrGsoyteKoKMrxNaowGdnTw7+/LklM+anTAmJWKEQcYlkT6JWQfVYzKEqoJA9Mbk\\n\",\n       \"4BpspeH8GEzuQep5ePCbuu373Mdhz4DqBoS3wommVlO+MgnPb8Hl/10p/015Vb1d8W7c7282RPgS\\n\",\n       \"8AWl+PUbfSxvZ4jwO8CLSvGrN/pY4G8e8xvpTXO1zKf1sN/FSIvcewf8zB3QK8LgMnziAnx6H5Y/\\n\",\n       \"RaENMIUiRSt/itjZh5gWfVUnXwqZjFfZtsqE+ZCB2cPL10lbMbsRFCowKwYSK5qiCA3BShSXbL3j\\n\",\n       \"7Y8ou4Y3clv1oeWAIZpVMQzhaANK25rnv5QBfwkKeWjUwf47WsysGGuBqnpa0zhvGWg65npGT0rn\\n\",\n       \"j2jF73ILDp/VLSEjD/d/VRv/zSo4PY62Av4bYwTsOwXq03D3Nuzfhuoe1DPwtc+KyK+PrNbfM6Fd\\n\",\n       \"WDP3anG3uA87z6KV5QSmPgJqCjYOaMGr7Rgqq1rtdH0Skuegcgt86gUN/gObrdIYR8IdBlMu3ctX\\n\",\n       \"cBYKGJUDhLkMuZ0+lwq7uM4Ox2JF1tbA1LoHhW3Ih4JdDBmvmNziRXRSEErEWlorlJfQFN0VQwNV\\n\",\n       \"hxHYl2B6Hhb64BfhUAhrwSxn0pNM+4qha5KOcoRqkXWzSRJn6NqKULUYZtfJNXQ1bbIMj56CzQrI\\n\",\n       \"CzC7Ac/OZ/nKnR+C6C5t9atqkP0G/D0R+U2l1JtRbCxXYGr/GwzojkLtOTh+VY79bR7WvzFG7ruf\\n\",\n       \"gxdOw+XjGiDcOgssj56bh+N/Hz7UhOl12JiEc/8YWg04dgrWLFg8D41/o1T7iyJyMov7B4cpdyvk\\n\",\n       \"OwHe4VVWT/SYyKQ4EbapGQb9JEWh71Ie1oncNi07pllXRA5053UlpGjAwRj+//bOOzqu67rX354O\\n\",\n       \"YAAMei8Ee++UqGbJKnFV7DiJ7cSOY8eWX8pLntOzXvPKy4sTO8mLveKVRHYc23GNLUdyHNmS1alC\\n\",\n       \"SSTFDjaA6H2AGWAG0+e8P/YwBCGQBMkBZgjcby2sBVxgzj2459xz991n798+XAJjNeAp05iiFKqs\\n\",\n       \"awMqbDBpB3ccPCOw82SUl6bL6K0sw+bz01VSQomtkvJYHHGV46KCUTlFlGl8AmuicNyp2kXlcSgt\\n\",\n       \"gvEuePAFcCfhlVVwYgMUFUFxndazeake7FP6ctT5bWNSi2qI5BOZDJqN3HwF8ebDY8BvQX4YI1ci\\n\",\n       \"H2JG/gvw6GKdTETcLfDO+zWqLA4wqfmztgAJR5Kkw4EjaUOoxRE/S8BbiQlO4ndEqMdNeTyG19HN\\n\",\n       \"YMkUk644k4dSsA5qisEed5KMOiiyG5pTUexFhmGHpuh6hiFVBBVOLUI2GoFni8BXoJofjQFVTjzZ\\n\",\n       \"CGu71eXa2KnbOU/shLoa2OGB+hRE7DCUBpsDzhrw2DUbp7Qf1h+GU3Xw9Ard0om7dcErOXUxFiIo\\n\",\n       \"aI7wfK6XC4p+B9ruhaYgTLfBWAiaXoEtSRjeBfxogYbrmlFDpP4h2F6m3qHpYjjxEWh/GkL7wLED\\n\",\n       \"doZgeBCiZXo9h1eAPwjtDgicgRXbocx/oU0X4eoSysYmKayMMWX3Y1u7Envc4EgJ3iSEp1z4CqIM\\n\",\n       
\"OwOsj2vWVMUg9Abg1WbDRIWhJZli3C1E0vrwacxohQSNCpwVZbZJplPgr4XhOEwUapZhzFZAdLqC\\n\",\n       \"ipTB77XhihtCKScmUk1F+CzO8DjjIsSnYjjDUNYOVYNQ7INACUxOw6qM4VmeLqBoxy4mH7vw/1VD\\n\",\n       \"eAcUjsIdwHy2S/Ny/zkTlNqR+ZpF1R2wc1o9TnEHBHfCHUPwuhfKp2DzJLywHl7aCey3QXUbJtVE\\n\",\n       \"2RjYbX4iHjdtaS8FtjS2ZCnVcRexAj9dTkNseJqoJJkaBucklO+A7SnApsrMpwUaYzC50kYkBaFY\\n\",\n       \"mna3xgutSWu15T67Lkd7O0BIcFtXDwN+F89trmbIVo5xgXGmMHY7hfgoSZURZhoH6nH1pkBS0HgO\\n\",\n       \"RgLw6qvw3UpIroWy1bApkpljlRCchHUT0DkFwZ9CerF1YvIG0SJCfwf84RKKFZnJT4Gvi1BmDBNX\\n\",\n       \"/escsuDGiIjU8OZo3kFjzC+JyC3A21jc6OWaWnBcMEQAnBArgcgEKWecoMdGWdiGzaQwTkM8Eifq\\n\",\n       \"bKEgdJaO0j6KysHjTBOxRagIQkMZDIxBaZHgs7uojELamSZWJDiNDdJCJJykNgqrxlVSvSIF4oZo\\n\",\n       \"Gjb1QSAJlaehf6e61D39sH5Mc/17KsFbA2vKYG0PRJvAnanc2ZPUNL7AeCZ4sgsSk1A6BS/+CErv\\n\",\n       \"hOCdsOcorOzVYL7TVdA1yrwD0Lx3Qts6WBWEukwswIQX+nZC2WHw1Fz584tN4W7Y4YNdF97aw9AQ\\n\",\n       \"hMm74fQk1IS0QOGWHk21PFwNoQLomITRLwAhiNg0vkdFKNM4IwliHhuppAPSLhLuUpyRAFPFaVwu\\n\",\n       \"wZtyUzZewogvwNAUtJ3Tz55pA4bA2Awhl42RlINbJlKMl6WoQb0iIaPS/w1ROOsEX1x1XU6KigLX\\n\",\n       \"RqEYO70iOMVGWcRFfFQIpKDcODQGIRZnvAP8/wNIQvrt0L8dGktAzkDTMXBlvI+hEi/JN9UQqdJA\\n\",\n       \"prp5XuTAOAx0QXkr/Gd8yGmontQg1jxc1F2NWosJYKIcihzgmVIV42ChZsysG4VTe4D9FbChldjp\\n\",\n       \"OIFVLsrHJ3C6SihLJImkQ4TtXpxDNtJVYZyecjqmJkgfgvgj0PI12GRTVdaAS70jGw3E7ODFji+W\\n\",\n       \"pt3uhhE7pakUA54k054UU2OAU7OevFGdP850HIYLYGIzkdERUpvGcFYIFTbwJB1EM4URIzb1ruGA\\n\",\n       \"zgKwHYf0s9BzBNb+HqwrgLUxjUcbq1PNoddjMDkKo0tCRfkG+HVgDPjXXHdkITCGsAjPAu8Avpnr\\n\",\n       \"/lyJBTdGMnEJ98w+LiINwF8BD5rLBK6IyKdn/PicMea5LHQpHr1YdQ6AUt0DGYlDZRH+iUmmKxO4\\n\",\n       \"7UPE7BHS3x0ivcHAzjQ1jlqqYzEMI8RTmlrX2QQ7nofh1QYvKQo8NlxJQ2LaTtikmbLrm29rWDNq\\n\",\n       \"PIP6MI8VqgDZaBLKu6EkDF0JKAvCSKXGlYQEIgNQVqr7vrYkuEcg5QNvAeBWpUlHDIoG4FwaNgTg\\n\",\n       \"oAfM68ZM7BfxPAuRd8LpBhXUGuqHkX81M+p3XA7dh2/cq56a8dqLvykLwVgZ9NZAKM/cu77N0Dqu\\n\",\n       
\"ol+hYnDFVCOl2cDpFVDVBYOtkCiFliDUROB8AZx7/sIWhUjVOTjZDJuGACJUnx3h1fu3EDxgB1NM\\n\",\n       \"ajRKcLWdsrE0/mQCRwU4nCkm49D8pGZfeMqhYUqNolcqDJ4yQ4HLEHYliWW8ZWmjFVSHBDwu3Vqw\\n\",\n       \"JdRLdqoG7H0QaAQn05TEIvQ5y6kYFeJxL4VdAtFOYjXgfQM4lElfB3hYRCpB/itsD0JxxgsWs8N5\\n\",\n       \"h4PpwdlXbUTdc/MKcstsezz6DHxsPTSVQ2wE3O0Q8EOeKvPGB2G0CYr9Gk9yYQkICrRkdFUcadTN\\n\",\n       \"QBLCBTDqZSQ9yvSaNCXxKOHCNLFkCsKDJMrBEYrhGBoi+kwIPg+u34LmOKQLNBYsmmmuxsAbDiiO\\n\",\n       \"CS4HlKFRtOuGDDF7kImyFK5noGsXvFysczUBBOIQeikA9aNQ2UD8SJL0Zj/+2igVjiBFSZiKwoCB\\n\",\n       \"5BgU7YOac3CsBqbOA3UqRe9wgSusBmlzl+obxVNQ2AED9tlXarkgQg2aQfOWmyHb5AZ4DH3hX97G\\n\",\n       \"yBX4n2ge6w80wJ23G2OiM//AGPPpBTjv8BAMdkDFSo0goxhCQzAU0ChGv5OIJ0DEMQSnUvD5IdgR\\n\",\n       \"QP65gIpUmmjpFKP2NF4bJGvAVMDoeQj1wEg6TmiLG684SETguEkwWZCixaYxIANlWj8kHoT0eZhu\\n\",\n       \"huSoKmmeLYfkeQ2CNANgOvRNLpWE46vB9MB4E1QHNfhu2qbVXYcFivy67VN7GI5Vgv/f0R2W3VC5\\n\",\n       \"W3VKDvVA4gDQcznjbw4EbG5Y3Q/7xuGMD1ZM6qIdLNRSFKE8S/FNTkP3HeCt0hTqgKjbOtQPjMBY\\n\",\n       \"GDY9ByMtEKgA2zAkYhDZf7GNsX+DFz4E3c2qUjtom6TzuR4SPA2NfhhNE/JWEg9F8TSBSAwkSOF5\\n\",\n       \"WHsCzrZqwLB7HAZLITEGkfEU9uYUNgMuBySdUBqCcKlmTrnjQErl38fL1FA551Fp7qlUGmdfP3ab\\n\",\n       \"jaHGYhKRKZzFA0w0D1M2Dp1tMPrczHL3xpgxEfkn+NGHoLVMyxB0G+j7fgBWvA5t22DACekBKD4I\\n\",\n       \"BX6tZzEvjDGDIvL5QVhfBFUhGExC++x7OH8Y3QcHH9J6Lb4J6DDQXgHxEd1+BDhbBYGnACbgyCnY\\n\",\n       \"+3aSpxsInB8mGZmi0RfA4wrjOx3AlrIRsaeITgGfAaahaD3INEw7YcirYoNJ0aoS8SSUjiQZr7OT\\n\",\n       \"Tl4orhclXpwkPgGbDkO7E/xTmpWVTsPoKAx9c1StjM9tgJJSkmejjKS6CZUHqZ1WTZEzAr4oeFbD\\n\",\n       \"vp0QHYC1HwW/W7fpykMw7dYgWYCkA4jCeAqVd16u/AXwVWM4edW/vLn5d+BvRCg2Jn+rrOcsm+Zq\\n\",\n       \"LGSktYhU1sGvrIDyUmAApBuOB+B4Deywq0jH0Si8YYyZVg9B7b/CXVVwchtsFqg1EHGqTPPxOAwe\\n\",\n       \"gOQPwbfRQWKLEK60URSO0xAwBFdCXSFURdUNn56CaT+80gK3fS9TGC0EURe88E5ofAX2dEDQA4dq\\n\",\n       \"4YQN9rig4BYorYCiJJwvhkMBGPseVFRmFpwIjD0L0y9D2QdgyxZYP6rGw7kqONQHI1+5Fje6SNWH\\n\",\n       
\"4YFmLSd/ZB2EWyDlhO4pGPpvb679cUPjkoWsCtcvw67fhLd2qlgYQFclPBGBwQ9A5YdgV5vWRnSn\\n\",\n       \"9Lq8ZKDnizMLvomIDVUk8wLjGdVOLxrEkSiFe314PlJOhS2CLd5PqnyK+pjWiZmwQ8MZWN+jZeNL\\n\",\n       \"AnBkJZx4F7QW6vZMuQ1K0OrJSRuUCBxOQtu0PsCGA1A2Ah02GD+pyr4mamd0owfv5hTl7ijVBpIx\\n\",\n       \"SHeA/RC0vwrBf5tpbGrMDy1oyk6fMSYoIgU+eKAMdrhAJsE/Aj9KGjNHrMXCs1hZFSL2dVD3LvCV\\n\",\n       \"wlQNmHLY3q6ZLr3FcHwARv75glZKgcjeJnjHCpAgFB2jaus0W9phZaeKyR11QMdXjUl2iIgbKh6D\\n\",\n       \"0lthpx3cdr1PEqgxEgzD6vMQSwmddUIfNlbFkpR0QPErmgXTfgT8j6Dlt5Poi0Mic41qnTh+uwDX\\n\",\n       \"LUnsU9OkBqG6BtaOw3gJpFfAdBHUBVSLKDYCRUfh+XfD+jMqslcfUu/s/lro7YTefzEmtG+hr/vl\\n\",\n       \"xyN32TQi3IV6Cjbk8wM6W4jwQ+ARY/habvtx+TFflsZIpn0H0IqmoYwaY64oay3iex+0fhQK7oKd\\n\",\n       \"NhVSKo7qg2QwDs8mYfRBlV123A633Q+3darc+8kVcObjUO6EpjFIJKEvAkOdEA7B2l51UnUZ6D4I\\n\",\n       \"xcVQsBpSk+DfB7F2qPolaFgDhY0wXgVDZyDwf4wxfSLizPwfYWNMQkRWwM5PwDu7Lt2Rer4F9n3f\\n\",\n       \"mMS8NUI05qf1k7BDoC6oio9HiqDj28bEjl7zhb/yubJgjDT8EaxuBVuTOt5iaL2WoR7o+DNgCorf\\n\",\n       \"AqW3gM0J4Xbw/9QYM3rllmefp/BO2P1eqAiBJw52H/i3QH859E7AriG45WVwZLbDYnb45koIVcOa\\n\",\n       \"tVDkg4RL03YLS3Tfv3wSSpNwLgbFP4HbTmp5gEdKoPdvgHLY8BuwsQQce6EqrP/PURusegYOlsEb\\n\",\n       \"882IQR+guIDQNXjLss5iPpQyRmYpOjFKoWQreEpVEDD5pngXVeqlGQ0imgDveihugekhCB68kB6v\\n\",\n       \"XsjNf6py6hWF6lFLF8A5gVQCWnuh9QR0VsOBYzD8dSjcC76fgWIXJPtgcj+Mfm/2XNSXoYqPwK5V\\n\",\n       \"sHVA155D74WpCDQdgIlbYFsQDm7XWLPq89BboRlUowIHtkNjD9jbVKF1vB0m/xESh5bLuF96XlzA\\n\",\n       \"G8D/MoacKAgvNiL8PPDrxnBvbvuRh6m9uSajIngNQjDB56DrQbglCAkv+GyQcGhFT28SCsPg+whw\\n\",\n       \"EGz3QUuXptUBOAqgbVL3Z7tD4A2pCFJ5FPafgGd+hG4u95rLlGMXkYdhtBl9nZ5A80JXiNh2ozXF\\n\",\n       \"Oy4WNito0kqhtlmttEzB8bXojTgvMmJXX4TxPeBtg/gZGHt1Rmp23qCLdpMX7joMY2dhuFS9H3eM\\n\",\n       \"wXO10OHJLPRPiMiTgG0+sTNz422AthGNT+lZAZGd0BzVwofFnTCwFvbfBltfhWABHK0A/+MQPgCn\\n\",\n       \"fg9q2sDhhfhmkFFIn4PgaigYVzXX4UI1clyAux7KP6I6Yw3l4LRBTY+KVoFWYu6v07ik9hZmpdxe\\n\",\n       
\"jsyDNw+DTReOzDbWhayCabS6IKDzRw15qQQTRu+pILofeYHLvLRU7dVCkr2rYFUMJoohFYNqm27J\\n\",\n       \"dtjhVAWMHoDAZ4Ak1P4M3PuUBs8CdFTAsx8Tkc/P2u5qhtZVsLdHf/T7oD6q9tHx9bApBVEnlMbA\\n\",\n       \"lGiyU2kEhuph+2sar/Lqv6Dr/eC1Gt5LkE+hmjI/yHE/FpMfAf8oQrMx9OS6M3OxbI2Ra8Vo8bh/\\n\",\n       \"gIFbob4AYlGwxzQwLOgARxLqt8OaMThbA6kq3UppOwmRJiiNa3BiRSfUZR4iR2ohbTPGXHXPMrOI\\n\",\n       \"dsGF9NXqh6CtUYPj/MC5MRH5mjFmAuLT+qY9m2kXJK7ZJWmM8QM/vtbPLTYaWFl9RhUlV41p5gxo\\n\",\n       \"obLhFFpG9T//FuYuhqUqrTRlfuw1WjxuFuFB8K+HegcM3wdVhVoLZKoAgilY8zzsX6NFytJjMPI4\\n\",\n       \"pE9mgj//HELroeBWWDWlxpM3DO12WBvSGJLuCj3Py5ugeCvs9GvAcOFKmEpCxYzUbIdR2f+4QOpN\\n\",\n       \"mTIWVyezzfJBWLFadXiCwNnJzD11VT0ezY6rHYPzUzBVrRo2ElKhsfFhmBiEoc+COaXeS+9bYbPt\\n\",\n       \"oiECsNIPA80wsFpETqHbaw6gCupnnMuR1F2cqggki3UrqDABIZsKqAkak2JLwEgRJEfms8YsB0Ro\\n\",\n       \"BP4A2LPEg1YvwRiiInwL+CTw33Pdn7mwjJF5ots6pQ3gn4DuGtiQ8TyMCBzzQEsvTE1A2yhMnAFH\\n\",\n       \"CyRXQ7BXy8B3OmHCB3VRmB7RgmNjBTB26tp7U/YA7K2F7TO8Ew018MyDwNcgdQZOp2BVAVRkHqRh\\n\",\n       \"J5xwQ+BINq5H/jL6NLzykAYXN0yoVsfhMhh+dD7BlSLOTdD6PmjJ3BvdSRHn941JnLj0L6ePwPG7\\n\",\n       \"IL0NXNWq8dDrAv807Iio8FbNKTj85dlS+xmPxGER6YN4o3rKbEBBBwyv0/ghRxh6SyG8Fdacg/V9\\n\",\n       \"YItDsFa1aYZ9GkdiS8OgEwon4LgNEllTxF3K6PZLyW1QsgGSYfVi7myF22bcU63l8JNfFJG/u/p2\\n\",\n       \"xtQxOPE+zdDyO1SYsCgJ/V4YKITRLxmTnuFhKayGsjm0fsoS4NgA1e+CliINZj1Vrm3szLzRloZg\\n\",\n       \"cBwG68HXBf21ajubKPQlwefQ4FXHELxaDSM5LbWRZ3wa+JIxdOa6IzngC8DLIvxfY+anM7WYLGtj\\n\",\n       \"JFOXowHVHOkyl6k+KSJl4L4btuyB7d+CH78PRtaBpwimE1qELF2lmSc9O6FqGM7FoawKgq3Q0wRV\\n\",\n       \"bmidAl+RfvbIBEx0Q/rla+yzC1q3waZZ7uK1w3B4tYiUGGMmRezfhB9+AForwS7QnYK+R40xfXO3\\n\",\n       \"fHMjIkXg2Qa162C0Fx53qXJtwg/DjxljTl/l8wXASlj5Ebh9TKsou5OwKQQ//YCI/G3GQwSAMSYg\\n\",\n       \"Il+B0HehqQimEuCZhtVxGGoA5xgMF6Gv2HOiGS+Vp+GNVbC9H5rOwmkfHFwL6WPw7HpoCkFLJmNp\\n\",\n       \"1Sjs64fUKg1cfaNGNVJGh7WcQN/31DNmcSXUEGn8L7CrEFrHNPDztQcheRxmurBbx6G2SQvmXVrh\\n\",\n       
\"981MvgztH4W9SajsgxN14C8Fx2ko7YbkLI/kVC+MbrhYBfkC/YVQeRu8q1tLEQBsG4ZH3wOHx6Cx\\n\",\n       \"H4ZKNWbldA14wuDqhUc3QnkvjMXhZAPYuiAWh5H/MCZxxdiuTL2edVC9U4UUx45C8rgxZkl52UTY\\n\",\n       \"ADwIrMl1X3KBMZwV4WXgw8A/5ro/s1mWxojok+ueNXBPM7qMd0NYRL4x82EtIoVQ/h5Ytx6St0FF\\n\",\n       \"DCYFfuHb8NJWiK4Atxe6mlQb4JYQSDn4G6ChC06HodsLG/vANw4tAU2rK3BqvYrAT2a/NV+mvzZU\\n\",\n       \"kMoJBMBuA/ssoSIbmfLpzSIur/5b3X8H3dWZX/YaY0JZuoR5hYgUQ/0nYGsZNAU0KPS4F04/ZszM\\n\",\n       \"lN05PytQdAe0vhVsKxy4dlbxfLIW+1AM0oO4pieo74bejcALsz4ehdIE7DoINEF1WANWzxfB6/Uw\\n\",\n       \"Po/MJf8j8NJ74MwGKE7DWA8M/hvEOsHWCmv3av2cC7R0wBGnysebYQgMoim5J41WE7a4Kt49sLMQ\\n\",\n       \"tmUywUpjWkBwtBVGzuk4XsBpmMc6qS8AlU9C79vBvlJjlZp7oWQchqK8qeRF9CgcvQtKqtWb2lkJ\\n\",\n       \"51bACQM7J6B2xr1aGoeN+x08tbUKubUSm0kQS04QPTysgpJOMmW70TiYIVTkZOpiHNmVKH03rNur\\n\",\n       \"1YodKej8eTi2TUT+5XIvaDcpfw78pTHLOp35r4GviPDPxpBXxuayNEaAtevhgfuh252JGxiA4sfh\\n\",\n       \"wyLy1xffCCreC7etga298FQENgShfxOMROCeNyB4Eh6/FQoESsrAG9GKod4wnGyD4DHw9MIWO9jd\\n\",\n       \"MLAJKNWg15IzMPUmwTA1gNxbwLcKYgEI9ELdPVBXCe40DKa0eFp3OayYUR21txSG2mDDp6DWDbEw\\n\",\n       \"9A1C99dmiGEtUby3wk4f7LwQuDkFjRMQeruIHL+yEebYAhvfAff0wpNrWwi566lKO5kuKcbVWUOk\\n\",\n       \"8ABndk/hiYvUNMHYMUi3ZxbpWvWCuYrA0wHjdUAhYCAwBbFHNb7HuR48PpjqB87OXOAzaaTfynjp\\n\",\n       \"CtE04iiAiPTD2VtggwsmPXByNxSUgFkJlRO6BVcYgzNvhb4e/b8tZpMx5tvA2wKJaSjfCc0z7h0B\\n\",\n       \"PAPgXQsjpReNkQkPDMWYRw0nxd8LK+rhLadUpBCgvxJeqgQuuQeNMVMi8mV44t3gfi9UlUL5ELQ4\\n\",\n       \"NS7tiaNQdw7WDqlhk3CtJGZ/gMRjNpASmOoG3w8o+FSI1vNQkdQt4+HXIXhsfkbIBfHJrbfAA10X\\n\",\n       \"qx03BSG9CvatB7KaMZcrRLgd2A58INd9ySXG8III54CPQ34VBVyWxkgN3LIJJtwzAhjrYaoNmvtg\\n\",\n       \"BXBaC2ut2KMeDgdaW2asHGonoWsN0Kf6HdMFcNfrMFoHL63S8t6JjGBV76tQ4VJxszY/VD2vdTHs\\n\",\n       \"KRhuYlbJbn3Dr/s4bKnQwLbAajj0Kdh2DLZlAtBCLviP1fBkEdxSoArcfi/sWwUtbRroWJCCcJlW\\n\",\n       \"B01+VET+bKm5XC+ldCusnJUhUJDU+i8dDWiRvMtQ9RbYOQoFyRImvRWQclAYSRAvTpFyT5AoFYpW\\n\",\n       
\"wKZzUD8J57bA8WER+TYQg1QvjKyAcg94z0PMA+OFEHsBqISWX4V1bvWgDLrh9ICIfNUYc8nYZ7Ko\\n\",\n       \"ArOPibgfgR/8AhTthU0GIl6om4C6DjjQBM3noTUNj79fRL6wzKW934SmvVe8H1ZugOYYRO1wZBsM\\n\",\n       \"nIGKGVWKa0/ByVXgqdBaNcEiOOaCwW/P3ztQWQP203CgGmrRwNKBNBT2AhUikkDzzePAiDFmVKS4\\n\",\n       \"B3Ydhz098PoaqN8K5YWQ2ASpMnhpBG5/1cvptWtInq/QzDkAzlLQ4mZdQ4g9J6BuRItnvngrvBYC\\n\",\n       \"nplfn10tsCJ10RC5wIpJaN/IEjBGRBDgL9FU3jwV5VtU/gR4XIRvGnP5beTFZlkaI3bwFvJmF1Wh\\n\",\n       \"5sS5NdK9+WehZhucXgntw9DYAe1VsMoDERd0lcERHwQPqof0tuMw0A1D5Wps+DyQPgujYTagsfYA\\n\",\n       \"ABVoSURBVDi+CeoD4EmpJPNAMXSGYXYQVcntsLsMdmT2rcMe2JQAVyal1pUEbxy2j8Dj/fBUHxQ3\\n\",\n       \"w3Q7eLbCDr8GbQKUAwWl0L8R+pu5pjTmmw0Tg5iTN6WpxuGqFaEdFVAxAODAkXATCiYZLwKxTRH2\\n\",\n       \"jWJvKaBuYpLWHkiUQcMG4DboWQvh16A3CGsOQ7QEJqsh7dcgwvHvQf0vwv0xaMxsxW0AfI3wwluA\\n\",\n       \"x+f1n5nYEY0T2l0DhSMQ3w7NAa1btDIOna1w5yGob4KhWi6berpccW6DLRvgLV0Xj4kNzr4FWruh\\n\",\n       \"ODNnJpww+BwEX4auFoichcDBaxP1c1fC3kMQdsNwma4Du0bhRDV07IGGdVDjVO2YoX4R+Z6WW9jW\\n\",\n       \"q8Uz42vg9gGt4DtUDzVJSNXDU9scDCVquZhOHwT3CL6mIiqn/KSdetRuYFc/nLtdRJ6fX9p6Kq5Z\\n\",\n       \"WLOJOTQ1eUnwLlRp8Bu57kg+YAxviPAY8DngoVz35wLL0hgJwMkeuKd6hmciCdIDAo4q2Hw/3Hoe\\n\",\n       \"BlfCmgCcr1bphg3Pw5Gt0OOG5AiMZPLU2z8GzRNQP6VfEx44Uo7qFEyJeH8Mk/drxHsU6AlB/7+8\\n\",\n       \"OZ6gZBusmRFDEnVBSUoL4wV8UD2W+bsIFLqN6X8KLgTYlv03jUuZSXkQnC1AQVYvYN4xuh9O/Jyq\\n\",\n       \"kl/IaB4s1ut8tZz6WBf01UDreJia/iQJdzHhYITxxmmidjsbR0MUBCDugfQWWD0OFXGVdPc1wjMh\\n\",\n       \"eNoGLXF9A+5FH2pmHGrKoHGW5sfGQXhjt4j8+BoEp9JQOwzNPTC5BWyZzxXFtRI0ZOKFlm2dkctT\\n\",\n       \"tQvWjl16bHsPdHTBI+vVYxkBuidh6MvGmDfV7Zk/050wsAU2DF9M2U0D58pgzR3w9rNQnHkJOlMJ\\n\",\n       \"T38YcGpK+GC5pu86DPh6dMulPwWpKHSOx4k+NgS3NmS8ZxFwpimQKWxA8Yy326IEuF2oQM0cKemz\\n\",\n       \"SZ2FM2lY69bYGVCBvlOF4M+zulPXjgh2NFbkT4yZO5V/mfKHwHERfsYYnsh1ZyCHxoiI/Arwa4Ab\\n\",\n       \"eNgY85XFOvc0HDikkqKNreCPgOuEhqK/CNVbVT3TNw3j56BvLTQGNX1urBfGhmHsVSgx4CyDxAk4\\n\",\n       
\"9SxE7lJB1wRwLgm9370QUGhM6AUROQzn6jN/0DO369ckID7jgVI+CX2iFUBtM9zv/T4IzlwobJAe\\n\",\n       \"UvXPtTOi82M2GDNA1iTb85PEITjWCsFtGkgcFjg3Df1fv7qLfeQZ2P8QSDrG5o4z9LeuYLKwkdAL\\n\",\n       \"AVKxKdK7Q1S1Q7RZM1scaXWHSwq2DEJHMxz5Bxh0oXN5ICO5Xq96IbOxG30zvzoi0gglm6C4QQMc\\n\",\n       \"t/RqDNJEI1QGYbQACnozsQ1xrprxsRwxNh2v43UQrFFtoKZ+qO2GZx6DrhDqUeuZb5zF5Zl4RVVQ\\n\",\n       \"XZXQNgZhFxyuh8k03DF00RAB1SPqbIbjg5oVY0tfdOLFneph2/AkdPtAzk/DvoPqWruwZjknCRZP\\n\",\n       \"UnQcSmbERA0WQ3gE5rcdoXPV83344c/DKrs+EjqBnqeMMUsh/fWX0Yy2H+W6I/mEMUyK8GHguyLs\\n\",\n       \"NebSmKZckEvPyLeMMV/PBJe9BiyaMZIJHns4CHt8Kl8YGIGfpOE42G9VuwSgtV1l2zvXwHgxHHBr\\n\",\n       \"Dv8DW6AsAiPb4MQ90PdlOHEYTmxH90c6mfVGboyZZMZ+79z498OJd8CdGXdsUwDa/XCqAfZEdL/7\\n\",\n       \"XDUcnobwwRkfHIfIUThzByTLtRJt1AEnfTD2tDFmfM7TLRGMMSkR+T5MvKxCcsRQr9RV3wyNMd0i\\n\",\n       \"8iWYuB88LX6iB8OMBwZISxwSE0w4VV67byW4w/pg6y6A6guaD2nAY4yZvQ02DMMhGPZeFF8DaK+B\\n\",\n       \"0BtX8opo7JDj56D5Lljnh4ogHKiDn9bDlldgsAb6mqAvDLVReLIaBq4htmE5MX4MXvojWC2wIqZv\\n\",\n       \"/afXw5kuNAMpfLUW5osxZkSVkifv03IO6SiMPwml28A3x3gXC4TG4al10ObW6rqlZRqHXPYakIbT\\n\",\n       \"JTB2KLNm/eMk3FIKG1Mw7if4RTi/DSqLoToEfT54vRSG//laZN6NiR4TkS7oaOOiEvRNr9IqQgHw\\n\",\n       \"p8CHlpPA2XwxhudF+AzwmAh3G0NOnxM5r02T0Xf4iTHmLbOO56huQe3H4O3VqjFwgZgdvlMPoSA8\\n\",\n       \"KBerfII+XH7aCakQtOyBFqMvJWcT0PsNY5LzLj6msQHl74e2tdCUVkXFU3EYPgFVK0E8ED4KEy/M\\n\",\n       \"NjA04LbpE1BdD/YS9dAOdcDoX83Ux8hnclk4K3N+++x9dhH3Vmh+H9RugmofBCNg74Bbj0Na4F8b\\n\",\n       \"4cxfzyXjLyJtqluyESiNwlAhnAjAwJczMuNz9MG5ERo/orEEjSEYToPrDGw/A4/tBb8fnCGYDINr\\n\",\n       \"ElI9mdiGG9heyC0LWxTTdSfs/GPYIFAS09T6YQfs74Xe312odPfMS5ZRxV3fO+De3epJu8BwIfz4\\n\",\n       \"nVB7QiUD+mqhp1S9J6sHwDsFnTboehmmHr+ccSFiXw/V9+j2cqIPhp+5WbLnFr7+GH8GrDaG9y/U\\n\",\n       \"OW52MsG9fwXcAdxvzNVemG/0fHlaKE9E/hfwCeB/GGO+Nut3OTJGpAXWfQJundI4kPFCOFQDh1+H\\n\",\n       \"tu3w87NiAFICD2+HFUG4v/NiVPpYIfywGHo+Nx/lzxnntwEt4KqHZAjSZzPpn/P5bCkUbAFvNUz2\\n\",\n       
\"Qex4Nt/8FppcGyOXQ9Nu5Rao/QXYMQI7uyBQAG/UwdEXjAlcNhhVRMqhcAsUVUCgGxInLuexEZES\\n\",\n       \"WPn7sMet2zN141rB90AFlL8IngQ8PmLM4FcX6n/NBQtrjNT/Jry7APDCdA3YYlDWD8dK4JlvL4ZM\\n\",\n       \"uohUQOtvwF6jWV8RJzz+NmgIwR0zsl4ONMJz7RA+Aw4nJLpvZiPzaizsuLMO2AdsNcYK6r4SGYPk\\n\",\n       \"i2jq8zsX0kOS00J5WvWV78w6PGSM+aAx5k9F5C+Ap0XkkdlvKSLy6Rk/PmeMeW5hezvTbT9+H7hb\\n\",\n       \"IRWA0UcgcR7M9jk+ATjqYcP5S9PjKqehrVKLqNF+DedPo5oE1/x2k3nbzllJ8KVKxuvxhIicgNfu\\n\",\n       \"g6MrtaLy+KMQff0qnx0HnpvfmWwrYbVNtUPI3LAOowX4OpqhqRPSVhDeNSFpjceoHeCSTCNTCovj\\n\",\n       \"ujda1+phmLoXCtdD3IB3HPa8culfbh6EY2sh9D1j4jcYv7J8EcENfAtN5bUMkatgDEaE3wQ+C7yQ\\n\",\n       \"CWpd9DjDBTdGMkWm7pl9XERcGe2LBBpy/iZryRjz6YXu31wYrUj7T5Ix4+CCUufYIJyfJTZ2slYz\\n\",\n       \"azxz6HjMT73R4uYgo8771ZnzIruIU2uRlPuhNw2Vdk3hdaYh7YCzPhj7j+yfdykzegBOv/dSRdOA\\n\",\n       \"G84nyBSeXAwy6+C3MtLrlVDz25pFMxNnCmx2NG7DMkaugxlv+eeBf8hxd24aMjE1fyDCKPCiCA8Y\\n\",\n       \"w6LWucrlg/JPRORuNAPhO/koZT3zgaN7v/IDeOajsK4JyhMw7IJTYxpF33HXpTLSMTt0wzzLuVvc\\n\",\n       \"PCyMIQIqoHbeBlui4D0MnduhzMC5EuidgLGDkJ63l80CIHEYjq6F6fXQHNcg8FNA/3fmE+CcbTLr\\n\",\n       \"iB/8wTcHN3dWQqjj6iUELOYiY4h8FtgK3GsFrV47xvBZEfzA8yK8xxheW6xz5zyA9XLkcfxAITjW\\n\",\n       \"QVElTA1C+gxgg5qPwZZ6rT8TccHJIjj9hDHh53Pd55uFfB3zxUSk9B2w7k5YNwkJN3SugDOjMPEw\\n\",\n       \"KiWfnzfsDbDwgYwX5OCLV0AiDNFTuc4wE3GshJW/CluSUB6C4RI4nITuLxljlkWKdjbHXYQS4GFU\\n\",\n       \"X+FdxjB25U9YXAkR3g38E/BxY/hh9trN0wDWK3GzPZg0K8i5GSo2auDp2CFjzLwzaSxuvjFfCDJu\\n\",\n       \"/DVQtV2rQPuPLcUKqjNZruOu8XS+XVBQA1M9EDpollHV5WyNuwjvQuus/AT4HWPmI/ZmcTVE2A08\\n\",\n       \"BvxfY/hidtq0jJGskElDTi3lB0Muyccxv14yb+MFQHR2urDFpdxs4671bnACkaXoqVosbnTcRWgG\\n\",\n       \"/h+6LfOQMfOtx2MxX0RoQ0tX/BD4Y2O4odpXljFyg2hly5p3QnETJNMQfAMmfnozpc3eDOTTmF8v\\n\",\n       \"6tlw74aqt0JhEUxPg/8ZiLxmPbjm5mYZdxFxQ+m94NsNTgeEBmHoxzeLrke+cb3jLoIL+F3g94Ev\\n\",\n       \"AJ+1CuAtHCJUAI8CY8Cv3Ujqb05Te292RKQS2j4Od8WgtRcSNji6HfbXiMiXjFUl1eISCvbA5vfC\\n\",\n       
\"bf1aKyjghlfekyl++mque2dxI5T/IuxeC9v7tehlXym88DER+XtjjJVCugiIcB/wd8BZYI8xs4uN\\n\",\n       \"WmQbY/BnrvtngCMiPGQMP872eeZVI2N5U7IbtnFRkdWZhp390NoEtOSyZxb5hYg4oPJeuLMPfJmM\\n\",\n       \"CF8M7uiHyvv09xY3IyJSB41rYW+PGiKgNat2xqDi9tz2bukjQpMI3wO+BPyBMbzbMkQWD2OIGcPv\\n\",\n       \"ovXkPi/CkyLcJ5I9G2JZGCOZFOLrxNsM1XOkHdcC3Hf97V6ZG+vz0mo7W20uQjtF4PVcWhAN9Odi\\n\",\n       \"t/5+UftzU7ezUG1eZ1vlUDPHNltNEJLX096cZPva5cF1u8FzUiXCnwOH4eEwsMEY/v3G2rzx/yOf\\n\",\n       \"7pXF7IsxPInWt/g+8DmgW4SvivBJEe4WueN9mRTra2ZZGCPA3df/0elBGC968/ExATZff7tX5W6r\\n\",\n       \"7ay3udDtTEM4AWHnpYfDTj3ObFn/he7Pzd7OQrV5PW1Ngn+ORdbvhenaG+3QDO7OYlvZbi+bbV0W\\n\",\n       \"ESpFeI8I3wROA2XATvhkV5YyZe7Okzay1U422ph3O8aQMIaHgR3AA+j2863An8FdXwcmRTgkwndF\\n\",\n       \"+D8ivHU+7S4XY+QGCLwGh10w5NWf08CJGugcBpZNGp7F1THGJGDsBXilESKZLZmIA/Y3gH+fsarq\\n\",\n       \"3sz0QW83vNGg9ahA608d8kKi58oftbgSIvxl5sH1ogjdQAfwm8DLwDpj+HVjFk8t12J+GIMxhnZj\\n\",\n       \"+Htj+Kgx3AGf+RzQBHwSTQtOAc3zac/aw74KxpghEcdXIfSzUN4EcWD8LIw9Bnwqx92zyDum98ER\\n\",\n       \"O/TdCSV2mEyB/ykIWTWDbmIyyqnfghffDSc3gsfAeAhGvgn8Yq77d5PTDRwG+oB+oNsYrHT4mxRj\\n\",\n       \"CACvZ77mTV6n9ua6DxYWFhYWFhbZ46bTGbGwsLCwsLBYHlgxIxYWFhYWFhY5xTJGliEisifXfbDI\\n\",\n       \"LtaYWsyFNS+WB0thnJfFNo2IeIwxCyIXLCLubJT8FpFdwF7ABwSAV4wxB26wzbmMTQGeMMbcsEaK\\n\",\n       \"iGwCksaYUzOO3WqM2X+jbc9oz4vO0zm0Xq6rvazMhesZ92yMcTbHNFvjJyLbgYAx5ryI3A+4gB9n\\n\",\n       \"Q5042+M/o92srQk3sgZk675fiHs9m/f3jc6RbM6DXK0B+XT/5+O9v6SMERH5IPB7QBLV0v/LTBT8\\n\",\n       \"s8aYexbonE8aYx64wTb+Fh3Ep4AgUArci06W37mBdiPAXJNrqzGm/HrbzbT9N0A1kACqgI8ZY0Zu\\n\",\n       \"9FqLyMeA3wDCwFeAj6P51I8YY75wDe0s6Fy41nHP1hhna0yzNX4i8veAm0xRQGAKmAQajTG/Ot92\\n\",\n       \"ZrSXlfGf0d6CrwnXuwZk877P9r2ezfv7euZINuZBPq0B+XT/5+u9j8kkCy+FL+AVNF1ZgF9H85zL\\n\",\n       \"gGez0Pa+y3xNZKHtF67l+DW0ewjwzXH8qWxcjxnfbwGeB3bf6LVGb7QLFW970RtYgJdzMReyNe7Z\\n\",\n       \"GuNsjWm2xm9m/4FjM75/Ppfjn+15kM25kO05kc15ke35cb1zJBvzIJ/WgHy6//P13l9yOiPGmGTm\\n\",\n       
\"278XkUNo6ePqLDRdiVqfl0h9i8hPs9D2QRF5GHgStS5LUKv50A22+06YU7HwbTfYLoBNRFzGmLgx\\n\",\n       \"5qiIvBf4BioVfCPEjLr4IplChHEAEblmN3iW5kK2xj1bY5ytMc3W+NlnfP/fZ3x/vS7XrI3/f3Yk\\n\",\n       \"e2tCtteAbN732b7Xs3l/X88cyco8yKM1IJ/u//y896/HgsnXL+AhoGXWsQbgH7LQ9tuZ2yLdmaW+\\n\",\n       \"70Ct9z9B3ZPbc309r9LfW4CaWcccwAdvsN1fARyzjrmA/52LuZDNcc+nMc7W+KEL2Fzj9WAuxz/b\\n\",\n       \"8yDbcyEf58RCzI/rnSPZmAf5tgbky1jn672/pGJGZiMi3zLG/NICtf1tY8wHF6Jti4tk6zpnay5Y\\n\",\n       \"4764ZPt6Z3NNsObC4pGNa22tAfnNUk/trVvAtrNZHMvi8mTrOmdrLljjvrhk+3pnc02w5sLikY1r\\n\",\n       \"ba0BecxSN0YsLCwsLCws8hzLGLGwsLCwsLDIKZYxYmFhYWFhYZFTlnoAa40xZvhma9viItm6zvnW\\n\",\n       \"jsX8yPb1zmZ71lxYPLJxra01IL9Z0saIhYWFhYWFRf5jbdNYWFhYWFhY5BTLGLGwsLCwsLDIKZYx\\n\",\n       \"YmFhYWFhYZFTLGMkjxCRt4nIKRE5KyJ/lOv+WCw8IvIVERkWkWO57ovF4iAiTSLyrIicEJHjIvLb\\n\",\n       \"ue6TxcIjIh4ReVVEDovISRH5TK77lE9YAax5gojYgdPAfUA/8DpaK6A9px2zWFBE5E4gBHzdGLM5\\n\",\n       \"1/2xWHhEpBaoNcYcFhEvcBB4j3WvL31EpNAYMy0iDuBF4PeNMS/mul/5gOUZyR/2AOeMMV3GmATw\\n\",\n       \"HeBnc9wniwXGGLMPmMh1PywWD2PMkDHmcOb7ENAO1Oe2VxaLgTFmOvOtC616O57D7uQVljGSPzQA\\n\",\n       \"vTN+7sscs7CwWKKISCuwHXg1tz2xWAxExCYih4Fh4FljzMlc9ylfsIyR/MHaL7OwWEZktmi+D/xO\\n\",\n       \"xkNiscQxxqSNMduARuAuEbk7x13KGyxjJH/oB5pm/NyEekcsLCyWGCLiBB4BvmGMeTTX/bFYXIwx\\n\",\n       \"QeA/gF257ku+YBkj+cMBYLWItIqIC3g/8MMc98nCwiLLiIgA/wScNMb8ba77Y7E4iEiliPgy3xcA\\n\",\n       \"9wNv5LZX+YNljOQJxpgk8FvAE8BJ4LtWdP3SR0S+DbwMrBGRXhH5aK77ZLHg3A58CLhHRN7IfL0t\\n\",\n       \"152yWHDqgGcyMSOvAv9ujHk6x33KG6zUXgsLCwsLC4ucYnlGLCwsLCwsLHKKZYxYWFhYWFhY5BTL\\n\",\n       \"GLGwsLCwsLDIKZYxYmFhYWFhYZFTLGPEwsLCwsLCIqdYxoiFhYWFhYVFTrGMEQsLCwsLC4ucYhkj\\n\",\n       \"FhYWFhYWFjnl/wPBByFp6Gp27QAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x11dbd0090>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"X, y = sklearn.datasets.make_classification(\\n\",\n    \"    
n_samples=10000, n_features=4, n_redundant=0, n_informative=2, \\n\",\n    \"    n_clusters_per_class=2, hypercube=False, random_state=0\\n\",\n    \")\\n\",\n    \"\\n\",\n    \"# Split into train and test\\n\",\n    \"X, Xt, y, yt = sklearn.cross_validation.train_test_split(X, y)\\n\",\n    \"\\n\",\n    \"# Visualize sample of the data\\n\",\n    \"ind = np.random.permutation(X.shape[0])[:1000]\\n\",\n    \"df = pd.DataFrame(X[ind])\\n\",\n    \"_ = pd.scatter_matrix(df, figsize=(9, 9), diagonal='kde', marker='o', s=40, alpha=.4, c=y[ind])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Learn and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD) training. Time and check the classifier's accuracy.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Accuracy: 0.783\\n\",\n      \"Accuracy: 0.783\\n\",\n      \"Accuracy: 0.783\\n\",\n      \"Accuracy: 0.783\\n\",\n      \"1 loops, best of 3: 508 ms per loop\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"%%timeit\\n\",\n    \"# Train and test the scikit-learn SGD logistic regression.\\n\",\n    \"clf = sklearn.linear_model.SGDClassifier(\\n\",\n    \"    loss='log', n_iter=1000, penalty='l2', alpha=1e-3, class_weight='auto')\\n\",\n    \"\\n\",\n    \"clf.fit(X, y)\\n\",\n    \"yt_pred = clf.predict(Xt)\\n\",\n    \"print('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred)))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Save the dataset to HDF5 for loading in Caffe.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Write out the data to HDF5 
files in a temp directory.\\n\",\n    \"# This file is assumed to be caffe_root/examples/hdf5_classification.ipynb\\n\",\n    \"dirname = os.path.abspath('./examples/hdf5_classification/data')\\n\",\n    \"if not os.path.exists(dirname):\\n\",\n    \"    os.makedirs(dirname)\\n\",\n    \"\\n\",\n    \"train_filename = os.path.join(dirname, 'train.h5')\\n\",\n    \"test_filename = os.path.join(dirname, 'test.h5')\\n\",\n    \"\\n\",\n    \"# HDF5DataLayer source should be a file containing a list of HDF5 filenames.\\n\",\n    \"# To show this off, we'll list the same data file twice.\\n\",\n    \"with h5py.File(train_filename, 'w') as f:\\n\",\n    \"    f['data'] = X\\n\",\n    \"    f['label'] = y.astype(np.float32)\\n\",\n    \"with open(os.path.join(dirname, 'train.txt'), 'w') as f:\\n\",\n    \"    f.write(train_filename + '\\\\n')\\n\",\n    \"    f.write(train_filename + '\\\\n')\\n\",\n    \"    \\n\",\n    \"# HDF5 is pretty efficient, but can be further compressed.\\n\",\n    \"comp_kwargs = {'compression': 'gzip', 'compression_opts': 1}\\n\",\n    \"with h5py.File(test_filename, 'w') as f:\\n\",\n    \"    f.create_dataset('data', data=Xt, **comp_kwargs)\\n\",\n    \"    f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs)\\n\",\n    \"with open(os.path.join(dirname, 'test.txt'), 'w') as f:\\n\",\n    \"    f.write(test_filename + '\\\\n')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's define logistic regression in Caffe through Python net specification. 
This is a quick and natural way to define nets that sidesteps manually editing the protobuf model.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from caffe import layers as L\\n\",\n    \"from caffe import params as P\\n\",\n    \"\\n\",\n    \"def logreg(hdf5, batch_size):\\n\",\n    \"    # logistic regression: data, matrix multiplication, and 2-class softmax loss\\n\",\n    \"    n = caffe.NetSpec()\\n\",\n    \"    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)\\n\",\n    \"    n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier'))\\n\",\n    \"    n.accuracy = L.Accuracy(n.ip1, n.label)\\n\",\n    \"    n.loss = L.SoftmaxWithLoss(n.ip1, n.label)\\n\",\n    \"    return n.to_proto()\\n\",\n    \"    \\n\",\n    \"with open('examples/hdf5_classification/logreg_auto_train.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10)))\\n\",\n    \"    \\n\",\n    \"with open('examples/hdf5_classification/logreg_auto_test.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10)))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Time to learn and evaluate our Caffeinated logistic regression in Python.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Accuracy: 0.782\\n\",\n      \"Accuracy: 0.782\\n\",\n      \"Accuracy: 0.782\\n\",\n      \"Accuracy: 0.782\\n\",\n      \"1 loops, best of 3: 287 ms per loop\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"%%timeit\\n\",\n    \"caffe.set_mode_cpu()\\n\",\n    \"solver = 
caffe.get_solver('examples/hdf5_classification/solver.prototxt')\\n\",\n    \"solver.solve()\\n\",\n    \"\\n\",\n    \"accuracy = 0\\n\",\n    \"batch_size = solver.test_nets[0].blobs['data'].num\\n\",\n    \"test_iters = int(len(Xt) / batch_size)\\n\",\n    \"for i in range(test_iters):\\n\",\n    \"    solver.test_nets[0].forward()\\n\",\n    \"    accuracy += solver.test_nets[0].blobs['accuracy'].data\\n\",\n    \"accuracy /= test_iters\\n\",\n    \"\\n\",\n    \"print(\\\"Accuracy: {:.3f}\\\".format(accuracy))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Do the same through the command line interface for detailed output on the model and solving.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"I0318 00:58:32.322571 2013098752 caffe.cpp:117] Use CPU.\\n\",\n      \"I0318 00:58:32.643163 2013098752 caffe.cpp:121] Starting Optimization\\n\",\n      \"I0318 00:58:32.643229 2013098752 solver.cpp:32] Initializing solver from parameters: \\n\",\n      \"train_net: \\\"examples/hdf5_classification/logreg_auto_train.prototxt\\\"\\n\",\n      \"test_net: \\\"examples/hdf5_classification/logreg_auto_test.prototxt\\\"\\n\",\n      \"test_iter: 250\\n\",\n      \"test_interval: 1000\\n\",\n      \"base_lr: 0.01\\n\",\n      \"display: 1000\\n\",\n      \"max_iter: 10000\\n\",\n      \"lr_policy: \\\"step\\\"\\n\",\n      \"gamma: 0.1\\n\",\n      \"momentum: 0.9\\n\",\n      \"weight_decay: 0.0005\\n\",\n      \"stepsize: 5000\\n\",\n      \"snapshot: 10000\\n\",\n      \"snapshot_prefix: \\\"examples/hdf5_classification/data/train\\\"\\n\",\n      \"solver_mode: CPU\\n\",\n      \"I0318 00:58:32.643333 2013098752 solver.cpp:61] Creating training net from train_net file: 
examples/hdf5_classification/logreg_auto_train.prototxt\\n\",\n      \"I0318 00:58:32.643465 2013098752 net.cpp:42] Initializing net from parameters: \\n\",\n      \"state {\\n\",\n      \"  phase: TRAIN\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"data\\\"\\n\",\n      \"  type: \\\"HDF5Data\\\"\\n\",\n      \"  top: \\\"data\\\"\\n\",\n      \"  top: \\\"label\\\"\\n\",\n      \"  hdf5_data_param {\\n\",\n      \"    source: \\\"examples/hdf5_classification/data/train.txt\\\"\\n\",\n      \"    batch_size: 10\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip1\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"data\\\"\\n\",\n      \"  top: \\\"ip1\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 2\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"accuracy\\\"\\n\",\n      \"  type: \\\"Accuracy\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"accuracy\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"loss\\\"\\n\",\n      \"  type: \\\"SoftmaxWithLoss\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"loss\\\"\\n\",\n      \"}\\n\",\n      \"I0318 00:58:32.644197 2013098752 layer_factory.hpp:74] Creating layer data\\n\",\n      \"I0318 00:58:32.644219 2013098752 net.cpp:84] Creating Layer data\\n\",\n      \"I0318 00:58:32.644230 2013098752 net.cpp:338] data -> data\\n\",\n      \"I0318 00:58:32.644256 2013098752 net.cpp:338] data -> label\\n\",\n      \"I0318 00:58:32.644269 2013098752 net.cpp:113] Setting up data\\n\",\n      \"I0318 00:58:32.644278 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/train.txt\\n\",\n      \"I0318 
00:58:32.644327 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 2\\n\",\n      \"I0318 00:58:32.646458 2013098752 net.cpp:120] Top shape: 10 4 (40)\\n\",\n      \"I0318 00:58:32.646502 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.646518 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split\\n\",\n      \"I0318 00:58:32.646538 2013098752 net.cpp:84] Creating Layer label_data_1_split\\n\",\n      \"I0318 00:58:32.646546 2013098752 net.cpp:380] label_data_1_split <- label\\n\",\n      \"I0318 00:58:32.646556 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0\\n\",\n      \"I0318 00:58:32.646569 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1\\n\",\n      \"I0318 00:58:32.646579 2013098752 net.cpp:113] Setting up label_data_1_split\\n\",\n      \"I0318 00:58:32.646586 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.646595 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.646601 2013098752 layer_factory.hpp:74] Creating layer ip1\\n\",\n      \"I0318 00:58:32.646615 2013098752 net.cpp:84] Creating Layer ip1\\n\",\n      \"I0318 00:58:32.646622 2013098752 net.cpp:380] ip1 <- data\\n\",\n      \"I0318 00:58:32.646664 2013098752 net.cpp:338] ip1 -> ip1\\n\",\n      \"I0318 00:58:32.646689 2013098752 net.cpp:113] Setting up ip1\\n\",\n      \"I0318 00:58:32.652330 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.652371 2013098752 layer_factory.hpp:74] Creating layer ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.652393 2013098752 net.cpp:84] Creating Layer ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.652407 2013098752 net.cpp:380] ip1_ip1_0_split <- ip1\\n\",\n      \"I0318 00:58:32.652421 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_0\\n\",\n      \"I0318 00:58:32.652467 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_1\\n\",\n      \"I0318 00:58:32.652480 2013098752 net.cpp:113] Setting up 
ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.652489 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.652498 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.652505 2013098752 layer_factory.hpp:74] Creating layer accuracy\\n\",\n      \"I0318 00:58:32.652521 2013098752 net.cpp:84] Creating Layer accuracy\\n\",\n      \"I0318 00:58:32.652534 2013098752 net.cpp:380] accuracy <- ip1_ip1_0_split_0\\n\",\n      \"I0318 00:58:32.652545 2013098752 net.cpp:380] accuracy <- label_data_1_split_0\\n\",\n      \"I0318 00:58:32.652562 2013098752 net.cpp:338] accuracy -> accuracy\\n\",\n      \"I0318 00:58:32.652577 2013098752 net.cpp:113] Setting up accuracy\\n\",\n      \"I0318 00:58:32.652590 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:32.652642 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:32.652655 2013098752 net.cpp:84] Creating Layer loss\\n\",\n      \"I0318 00:58:32.652663 2013098752 net.cpp:380] loss <- ip1_ip1_0_split_1\\n\",\n      \"I0318 00:58:32.652672 2013098752 net.cpp:380] loss <- label_data_1_split_1\\n\",\n      \"I0318 00:58:32.652679 2013098752 net.cpp:338] loss -> loss\\n\",\n      \"I0318 00:58:32.652689 2013098752 net.cpp:113] Setting up loss\\n\",\n      \"I0318 00:58:32.652701 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:32.652716 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:32.652724 2013098752 net.cpp:122]     with loss weight 1\\n\",\n      \"I0318 00:58:32.652740 2013098752 net.cpp:167] loss needs backward computation.\\n\",\n      \"I0318 00:58:32.652746 2013098752 net.cpp:169] accuracy does not need backward computation.\\n\",\n      \"I0318 00:58:32.652753 2013098752 net.cpp:167] ip1_ip1_0_split needs backward computation.\\n\",\n      \"I0318 00:58:32.652760 2013098752 net.cpp:167] ip1 needs backward computation.\\n\",\n      \"I0318 00:58:32.652786 2013098752 net.cpp:169] 
label_data_1_split does not need backward computation.\\n\",\n      \"I0318 00:58:32.652801 2013098752 net.cpp:169] data does not need backward computation.\\n\",\n      \"I0318 00:58:32.652808 2013098752 net.cpp:205] This network produces output accuracy\\n\",\n      \"I0318 00:58:32.652815 2013098752 net.cpp:205] This network produces output loss\\n\",\n      \"I0318 00:58:32.652825 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.\\n\",\n      \"I0318 00:58:32.652833 2013098752 net.cpp:217] Network initialization done.\\n\",\n      \"I0318 00:58:32.652839 2013098752 net.cpp:218] Memory required for data: 528\\n\",\n      \"I0318 00:58:32.652964 2013098752 solver.cpp:154] Creating test net (#0) specified by test_net file: examples/hdf5_classification/logreg_auto_test.prototxt\\n\",\n      \"I0318 00:58:32.652986 2013098752 net.cpp:42] Initializing net from parameters: \\n\",\n      \"state {\\n\",\n      \"  phase: TEST\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"data\\\"\\n\",\n      \"  type: \\\"HDF5Data\\\"\\n\",\n      \"  top: \\\"data\\\"\\n\",\n      \"  top: \\\"label\\\"\\n\",\n      \"  hdf5_data_param {\\n\",\n      \"    source: \\\"examples/hdf5_classification/data/test.txt\\\"\\n\",\n      \"    batch_size: 10\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip1\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"data\\\"\\n\",\n      \"  top: \\\"ip1\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 2\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"accuracy\\\"\\n\",\n      \"  type: \\\"Accuracy\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"accuracy\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: 
\\\"loss\\\"\\n\",\n      \"  type: \\\"SoftmaxWithLoss\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"loss\\\"\\n\",\n      \"}\\n\",\n      \"I0318 00:58:32.653069 2013098752 layer_factory.hpp:74] Creating layer data\\n\",\n      \"I0318 00:58:32.653080 2013098752 net.cpp:84] Creating Layer data\\n\",\n      \"I0318 00:58:32.653090 2013098752 net.cpp:338] data -> data\\n\",\n      \"I0318 00:58:32.653128 2013098752 net.cpp:338] data -> label\\n\",\n      \"I0318 00:58:32.653146 2013098752 net.cpp:113] Setting up data\\n\",\n      \"I0318 00:58:32.653154 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/test.txt\\n\",\n      \"I0318 00:58:32.653192 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 1\\n\",\n      \"I0318 00:58:32.654850 2013098752 net.cpp:120] Top shape: 10 4 (40)\\n\",\n      \"I0318 00:58:32.654897 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.654914 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split\\n\",\n      \"I0318 00:58:32.654933 2013098752 net.cpp:84] Creating Layer label_data_1_split\\n\",\n      \"I0318 00:58:32.654943 2013098752 net.cpp:380] label_data_1_split <- label\\n\",\n      \"I0318 00:58:32.654953 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0\\n\",\n      \"I0318 00:58:32.654966 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1\\n\",\n      \"I0318 00:58:32.654976 2013098752 net.cpp:113] Setting up label_data_1_split\\n\",\n      \"I0318 00:58:32.654985 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.654992 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:32.655000 2013098752 layer_factory.hpp:74] Creating layer ip1\\n\",\n      \"I0318 00:58:32.655010 2013098752 net.cpp:84] Creating Layer ip1\\n\",\n      \"I0318 00:58:32.655017 2013098752 net.cpp:380] ip1 <- data\\n\",\n      
\"I0318 00:58:32.655030 2013098752 net.cpp:338] ip1 -> ip1\\n\",\n      \"I0318 00:58:32.655041 2013098752 net.cpp:113] Setting up ip1\\n\",\n      \"I0318 00:58:32.655061 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.655072 2013098752 layer_factory.hpp:74] Creating layer ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.655148 2013098752 net.cpp:84] Creating Layer ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.655159 2013098752 net.cpp:380] ip1_ip1_0_split <- ip1\\n\",\n      \"I0318 00:58:32.655170 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_0\\n\",\n      \"I0318 00:58:32.655180 2013098752 net.cpp:338] ip1_ip1_0_split -> ip1_ip1_0_split_1\\n\",\n      \"I0318 00:58:32.655190 2013098752 net.cpp:113] Setting up ip1_ip1_0_split\\n\",\n      \"I0318 00:58:32.655199 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.655206 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:32.655213 2013098752 layer_factory.hpp:74] Creating layer accuracy\\n\",\n      \"I0318 00:58:32.655223 2013098752 net.cpp:84] Creating Layer accuracy\\n\",\n      \"I0318 00:58:32.655230 2013098752 net.cpp:380] accuracy <- ip1_ip1_0_split_0\\n\",\n      \"I0318 00:58:32.655237 2013098752 net.cpp:380] accuracy <- label_data_1_split_0\\n\",\n      \"I0318 00:58:32.655251 2013098752 net.cpp:338] accuracy -> accuracy\\n\",\n      \"I0318 00:58:32.655259 2013098752 net.cpp:113] Setting up accuracy\\n\",\n      \"I0318 00:58:32.655267 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:32.655340 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:32.655354 2013098752 net.cpp:84] Creating Layer loss\\n\",\n      \"I0318 00:58:32.655361 2013098752 net.cpp:380] loss <- ip1_ip1_0_split_1\\n\",\n      \"I0318 00:58:32.655369 2013098752 net.cpp:380] loss <- label_data_1_split_1\\n\",\n      \"I0318 00:58:32.655378 2013098752 net.cpp:338] loss -> loss\\n\",\n      \"I0318 
00:58:32.655388 2013098752 net.cpp:113] Setting up loss\\n\",\n      \"I0318 00:58:32.655397 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:32.655414 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:32.655422 2013098752 net.cpp:122]     with loss weight 1\\n\",\n      \"I0318 00:58:32.655438 2013098752 net.cpp:167] loss needs backward computation.\\n\",\n      \"I0318 00:58:32.655446 2013098752 net.cpp:169] accuracy does not need backward computation.\\n\",\n      \"I0318 00:58:32.655455 2013098752 net.cpp:167] ip1_ip1_0_split needs backward computation.\\n\",\n      \"I0318 00:58:32.655462 2013098752 net.cpp:167] ip1 needs backward computation.\\n\",\n      \"I0318 00:58:32.655469 2013098752 net.cpp:169] label_data_1_split does not need backward computation.\\n\",\n      \"I0318 00:58:32.655477 2013098752 net.cpp:169] data does not need backward computation.\\n\",\n      \"I0318 00:58:32.655483 2013098752 net.cpp:205] This network produces output accuracy\\n\",\n      \"I0318 00:58:32.655489 2013098752 net.cpp:205] This network produces output loss\\n\",\n      \"I0318 00:58:32.655503 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.\\n\",\n      \"I0318 00:58:32.655511 2013098752 net.cpp:217] Network initialization done.\\n\",\n      \"I0318 00:58:32.655517 2013098752 net.cpp:218] Memory required for data: 528\\n\",\n      \"I0318 00:58:32.655547 2013098752 solver.cpp:42] Solver scaffolding done.\\n\",\n      \"I0318 00:58:32.655567 2013098752 solver.cpp:222] Solving \\n\",\n      \"I0318 00:58:32.655575 2013098752 solver.cpp:223] Learning Rate Policy: step\\n\",\n      \"I0318 00:58:32.655583 2013098752 solver.cpp:266] Iteration 0, Testing net (#0)\\n\",\n      \"I0318 00:58:32.683643 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.3736\\n\",\n      \"I0318 00:58:32.683686 2013098752 solver.cpp:315]     Test net output #1: loss = 1.00555 (* 1 = 1.00555 loss)\\n\",\n      \"I0318 
00:58:32.683846 2013098752 solver.cpp:189] Iteration 0, loss = 0.869394\\n\",\n      \"I0318 00:58:32.683861 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.3\\n\",\n      \"I0318 00:58:32.683871 2013098752 solver.cpp:204]     Train net output #1: loss = 0.869394 (* 1 = 0.869394 loss)\\n\",\n      \"I0318 00:58:32.683883 2013098752 solver.cpp:464] Iteration 0, lr = 0.01\\n\",\n      \"I0318 00:58:32.698721 2013098752 solver.cpp:266] Iteration 1000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.701917 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7848\\n\",\n      \"I0318 00:58:32.701961 2013098752 solver.cpp:315]     Test net output #1: loss = 0.590972 (* 1 = 0.590972 loss)\\n\",\n      \"I0318 00:58:32.702014 2013098752 solver.cpp:189] Iteration 1000, loss = 0.54742\\n\",\n      \"I0318 00:58:32.702029 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 00:58:32.702041 2013098752 solver.cpp:204]     Train net output #1: loss = 0.54742 (* 1 = 0.54742 loss)\\n\",\n      \"I0318 00:58:32.702051 2013098752 solver.cpp:464] Iteration 1000, lr = 0.01\\n\",\n      \"I0318 00:58:32.718360 2013098752 solver.cpp:266] Iteration 2000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.721529 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7696\\n\",\n      \"I0318 00:58:32.721562 2013098752 solver.cpp:315]     Test net output #1: loss = 0.593946 (* 1 = 0.593946 loss)\\n\",\n      \"I0318 00:58:32.721593 2013098752 solver.cpp:189] Iteration 2000, loss = 0.729569\\n\",\n      \"I0318 00:58:32.721603 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.5\\n\",\n      \"I0318 00:58:32.721613 2013098752 solver.cpp:204]     Train net output #1: loss = 0.729569 (* 1 = 0.729569 loss)\\n\",\n      \"I0318 00:58:32.721622 2013098752 solver.cpp:464] Iteration 2000, lr = 0.01\\n\",\n      \"I0318 00:58:32.740182 2013098752 solver.cpp:266] Iteration 3000, Testing net (#0)\\n\",\n      \"I0318 
00:58:32.743494 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.77\\n\",\n      \"I0318 00:58:32.743544 2013098752 solver.cpp:315]     Test net output #1: loss = 0.591229 (* 1 = 0.591229 loss)\\n\",\n      \"I0318 00:58:32.744209 2013098752 solver.cpp:189] Iteration 3000, loss = 0.406097\\n\",\n      \"I0318 00:58:32.744231 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.8\\n\",\n      \"I0318 00:58:32.744249 2013098752 solver.cpp:204]     Train net output #1: loss = 0.406096 (* 1 = 0.406096 loss)\\n\",\n      \"I0318 00:58:32.744266 2013098752 solver.cpp:464] Iteration 3000, lr = 0.01\\n\",\n      \"I0318 00:58:32.764135 2013098752 solver.cpp:266] Iteration 4000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.769110 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7848\\n\",\n      \"I0318 00:58:32.769170 2013098752 solver.cpp:315]     Test net output #1: loss = 0.590972 (* 1 = 0.590972 loss)\\n\",\n      \"I0318 00:58:32.769223 2013098752 solver.cpp:189] Iteration 4000, loss = 0.54742\\n\",\n      \"I0318 00:58:32.769242 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 00:58:32.769255 2013098752 solver.cpp:204]     Train net output #1: loss = 0.54742 (* 1 = 0.54742 loss)\\n\",\n      \"I0318 00:58:32.769265 2013098752 solver.cpp:464] Iteration 4000, lr = 0.01\\n\",\n      \"I0318 00:58:32.785846 2013098752 solver.cpp:266] Iteration 5000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.788722 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7696\\n\",\n      \"I0318 00:58:32.788751 2013098752 solver.cpp:315]     Test net output #1: loss = 0.593946 (* 1 = 0.593946 loss)\\n\",\n      \"I0318 00:58:32.788811 2013098752 solver.cpp:189] Iteration 5000, loss = 0.72957\\n\",\n      \"I0318 00:58:32.788833 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.5\\n\",\n      \"I0318 00:58:32.788846 2013098752 solver.cpp:204]     Train net output #1: loss = 
0.729569 (* 1 = 0.729569 loss)\\n\",\n      \"I0318 00:58:32.788856 2013098752 solver.cpp:464] Iteration 5000, lr = 0.001\\n\",\n      \"I0318 00:58:32.804762 2013098752 solver.cpp:266] Iteration 6000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.808061 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7856\\n\",\n      \"I0318 00:58:32.808112 2013098752 solver.cpp:315]     Test net output #1: loss = 0.59028 (* 1 = 0.59028 loss)\\n\",\n      \"I0318 00:58:32.808732 2013098752 solver.cpp:189] Iteration 6000, loss = 0.415444\\n\",\n      \"I0318 00:58:32.808753 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:32.808773 2013098752 solver.cpp:204]     Train net output #1: loss = 0.415444 (* 1 = 0.415444 loss)\\n\",\n      \"I0318 00:58:32.808786 2013098752 solver.cpp:464] Iteration 6000, lr = 0.001\\n\",\n      \"I0318 00:58:32.827118 2013098752 solver.cpp:266] Iteration 7000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.831614 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7848\\n\",\n      \"I0318 00:58:32.831657 2013098752 solver.cpp:315]     Test net output #1: loss = 0.589454 (* 1 = 0.589454 loss)\\n\",\n      \"I0318 00:58:32.831707 2013098752 solver.cpp:189] Iteration 7000, loss = 0.538038\\n\",\n      \"I0318 00:58:32.831728 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.8\\n\",\n      \"I0318 00:58:32.831745 2013098752 solver.cpp:204]     Train net output #1: loss = 0.538037 (* 1 = 0.538037 loss)\\n\",\n      \"I0318 00:58:32.831759 2013098752 solver.cpp:464] Iteration 7000, lr = 0.001\\n\",\n      \"I0318 00:58:32.849634 2013098752 solver.cpp:266] Iteration 8000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.852712 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7796\\n\",\n      \"I0318 00:58:32.852748 2013098752 solver.cpp:315]     Test net output #1: loss = 0.589365 (* 1 = 0.589365 loss)\\n\",\n      \"I0318 00:58:32.852792 2013098752 
solver.cpp:189] Iteration 8000, loss = 0.684219\\n\",\n      \"I0318 00:58:32.852840 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.5\\n\",\n      \"I0318 00:58:32.852852 2013098752 solver.cpp:204]     Train net output #1: loss = 0.684219 (* 1 = 0.684219 loss)\\n\",\n      \"I0318 00:58:32.852861 2013098752 solver.cpp:464] Iteration 8000, lr = 0.001\\n\",\n      \"I0318 00:58:32.868440 2013098752 solver.cpp:266] Iteration 9000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.871438 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.7816\\n\",\n      \"I0318 00:58:32.871461 2013098752 solver.cpp:315]     Test net output #1: loss = 0.589656 (* 1 = 0.589656 loss)\\n\",\n      \"I0318 00:58:32.872109 2013098752 solver.cpp:189] Iteration 9000, loss = 0.421879\\n\",\n      \"I0318 00:58:32.872131 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:32.872143 2013098752 solver.cpp:204]     Train net output #1: loss = 0.421879 (* 1 = 0.421879 loss)\\n\",\n      \"I0318 00:58:32.872153 2013098752 solver.cpp:464] Iteration 9000, lr = 0.001\\n\",\n      \"I0318 00:58:32.889981 2013098752 solver.cpp:334] Snapshotting to examples/hdf5_classification/data/train_iter_10000.caffemodel\\n\",\n      \"I0318 00:58:32.890224 2013098752 solver.cpp:342] Snapshotting solver state to examples/hdf5_classification/data/train_iter_10000.solverstate\\n\",\n      \"I0318 00:58:32.890362 2013098752 solver.cpp:248] Iteration 10000, loss = 0.538933\\n\",\n      \"I0318 00:58:32.890380 2013098752 solver.cpp:266] Iteration 10000, Testing net (#0)\\n\",\n      \"I0318 00:58:32.893728 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.782\\n\",\n      \"I0318 00:58:32.893757 2013098752 solver.cpp:315]     Test net output #1: loss = 0.589366 (* 1 = 0.589366 loss)\\n\",\n      \"I0318 00:58:32.893775 2013098752 solver.cpp:253] Optimization Done.\\n\",\n      \"I0318 00:58:32.893786 2013098752 caffe.cpp:134] Optimization 
Done.\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!./build/tools/caffe train -solver examples/hdf5_classification/solver.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"If you look at output or the `logreg_auto_train.prototxt`, you'll see that the model is simple logistic regression.\\n\",\n    \"We can make it a little more advanced by introducing a non-linearity between weights that take the input and weights that give the output -- now we have a two-layer network.\\n\",\n    \"That network is given in `nonlinear_auto_train.prototxt`, and that's the only change made in `nonlinear_solver.prototxt` which we will now use.\\n\",\n    \"\\n\",\n    \"The final accuracy of the new network should be higher than logistic regression!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"from caffe import layers as L\\n\",\n    \"from caffe import params as P\\n\",\n    \"\\n\",\n    \"def nonlinear_net(hdf5, batch_size):\\n\",\n    \"    # one small nonlinearity, one leap for model kind\\n\",\n    \"    n = caffe.NetSpec()\\n\",\n    \"    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)\\n\",\n    \"    # define a hidden layer of dimension 40\\n\",\n    \"    n.ip1 = L.InnerProduct(n.data, num_output=40, weight_filler=dict(type='xavier'))\\n\",\n    \"    # transform the output through the ReLU (rectified linear) non-linearity\\n\",\n    \"    n.relu1 = L.ReLU(n.ip1, in_place=True)\\n\",\n    \"    # score the (now non-linear) features\\n\",\n    \"    n.ip2 = L.InnerProduct(n.ip1, num_output=2, weight_filler=dict(type='xavier'))\\n\",\n    \"    # same accuracy and loss as before\\n\",\n    \"    n.accuracy = L.Accuracy(n.ip2, n.label)\\n\",\n    \"    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\\n\",\n    \"    return n.to_proto()\\n\",\n    \"    \\n\",\n    
\"with open('examples/hdf5_classification/nonlinear_auto_train.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10)))\\n\",\n    \"    \\n\",\n    \"with open('examples/hdf5_classification/nonlinear_auto_test.prototxt', 'w') as f:\\n\",\n    \"    f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10)))\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Accuracy: 0.832\\n\",\n      \"Accuracy: 0.832\\n\",\n      \"Accuracy: 0.832\\n\",\n      \"Accuracy: 0.831\\n\",\n      \"1 loops, best of 3: 386 ms per loop\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"%%timeit\\n\",\n    \"caffe.set_mode_cpu()\\n\",\n    \"solver = caffe.get_solver('examples/hdf5_classification/nonlinear_solver.prototxt')\\n\",\n    \"solver.solve()\\n\",\n    \"\\n\",\n    \"accuracy = 0\\n\",\n    \"batch_size = solver.test_nets[0].blobs['data'].num\\n\",\n    \"test_iters = int(len(Xt) / batch_size)\\n\",\n    \"for i in range(test_iters):\\n\",\n    \"    solver.test_nets[0].forward()\\n\",\n    \"    accuracy += solver.test_nets[0].blobs['accuracy'].data\\n\",\n    \"accuracy /= test_iters\\n\",\n    \"\\n\",\n    \"print(\\\"Accuracy: {:.3f}\\\".format(accuracy))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Do the same through the command line interface for detailed output on the model and solving.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"I0318 00:58:43.336922 2013098752 caffe.cpp:117] Use CPU.\\n\",\n      \"I0318 00:58:43.654698 2013098752 caffe.cpp:121] 
Starting Optimization\\n\",\n      \"I0318 00:58:43.654747 2013098752 solver.cpp:32] Initializing solver from parameters: \\n\",\n      \"train_net: \\\"examples/hdf5_classification/nonlinear_auto_train.prototxt\\\"\\n\",\n      \"test_net: \\\"examples/hdf5_classification/nonlinear_auto_test.prototxt\\\"\\n\",\n      \"test_iter: 250\\n\",\n      \"test_interval: 1000\\n\",\n      \"base_lr: 0.01\\n\",\n      \"display: 1000\\n\",\n      \"max_iter: 10000\\n\",\n      \"lr_policy: \\\"step\\\"\\n\",\n      \"gamma: 0.1\\n\",\n      \"momentum: 0.9\\n\",\n      \"weight_decay: 0.0005\\n\",\n      \"stepsize: 5000\\n\",\n      \"snapshot: 10000\\n\",\n      \"snapshot_prefix: \\\"examples/hdf5_classification/data/train\\\"\\n\",\n      \"solver_mode: CPU\\n\",\n      \"I0318 00:58:43.654855 2013098752 solver.cpp:61] Creating training net from train_net file: examples/hdf5_classification/nonlinear_auto_train.prototxt\\n\",\n      \"I0318 00:58:43.655004 2013098752 net.cpp:42] Initializing net from parameters: \\n\",\n      \"state {\\n\",\n      \"  phase: TRAIN\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"data\\\"\\n\",\n      \"  type: \\\"HDF5Data\\\"\\n\",\n      \"  top: \\\"data\\\"\\n\",\n      \"  top: \\\"label\\\"\\n\",\n      \"  hdf5_data_param {\\n\",\n      \"    source: \\\"examples/hdf5_classification/data/train.txt\\\"\\n\",\n      \"    batch_size: 10\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip1\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"data\\\"\\n\",\n      \"  top: \\\"ip1\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 40\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu1\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  top: 
\\\"ip1\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip2\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  top: \\\"ip2\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 2\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"accuracy\\\"\\n\",\n      \"  type: \\\"Accuracy\\\"\\n\",\n      \"  bottom: \\\"ip2\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"accuracy\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"loss\\\"\\n\",\n      \"  type: \\\"SoftmaxWithLoss\\\"\\n\",\n      \"  bottom: \\\"ip2\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"loss\\\"\\n\",\n      \"}\\n\",\n      \"I0318 00:58:43.655120 2013098752 layer_factory.hpp:74] Creating layer data\\n\",\n      \"I0318 00:58:43.655139 2013098752 net.cpp:84] Creating Layer data\\n\",\n      \"I0318 00:58:43.655264 2013098752 net.cpp:338] data -> data\\n\",\n      \"I0318 00:58:43.655297 2013098752 net.cpp:338] data -> label\\n\",\n      \"I0318 00:58:43.655310 2013098752 net.cpp:113] Setting up data\\n\",\n      \"I0318 00:58:43.655318 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/train.txt\\n\",\n      \"I0318 00:58:43.655365 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 2\\n\",\n      \"I0318 00:58:43.657317 2013098752 net.cpp:120] Top shape: 10 4 (40)\\n\",\n      \"I0318 00:58:43.657342 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.657356 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split\\n\",\n      \"I0318 00:58:43.657373 2013098752 net.cpp:84] Creating Layer label_data_1_split\\n\",\n      \"I0318 00:58:43.657384 2013098752 net.cpp:380] label_data_1_split <- label\\n\",\n      \"I0318 
00:58:43.657395 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0\\n\",\n      \"I0318 00:58:43.657407 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1\\n\",\n      \"I0318 00:58:43.657418 2013098752 net.cpp:113] Setting up label_data_1_split\\n\",\n      \"I0318 00:58:43.657426 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.657433 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.657441 2013098752 layer_factory.hpp:74] Creating layer ip1\\n\",\n      \"I0318 00:58:43.657451 2013098752 net.cpp:84] Creating Layer ip1\\n\",\n      \"I0318 00:58:43.657459 2013098752 net.cpp:380] ip1 <- data\\n\",\n      \"I0318 00:58:43.657467 2013098752 net.cpp:338] ip1 -> ip1\\n\",\n      \"I0318 00:58:43.657479 2013098752 net.cpp:113] Setting up ip1\\n\",\n      \"I0318 00:58:43.662454 2013098752 net.cpp:120] Top shape: 10 40 (400)\\n\",\n      \"I0318 00:58:43.662477 2013098752 layer_factory.hpp:74] Creating layer relu1\\n\",\n      \"I0318 00:58:43.662497 2013098752 net.cpp:84] Creating Layer relu1\\n\",\n      \"I0318 00:58:43.662508 2013098752 net.cpp:380] relu1 <- ip1\\n\",\n      \"I0318 00:58:43.662520 2013098752 net.cpp:327] relu1 -> ip1 (in-place)\\n\",\n      \"I0318 00:58:43.662530 2013098752 net.cpp:113] Setting up relu1\\n\",\n      \"I0318 00:58:43.662539 2013098752 net.cpp:120] Top shape: 10 40 (400)\\n\",\n      \"I0318 00:58:43.662546 2013098752 layer_factory.hpp:74] Creating layer ip2\\n\",\n      \"I0318 00:58:43.662555 2013098752 net.cpp:84] Creating Layer ip2\\n\",\n      \"I0318 00:58:43.662562 2013098752 net.cpp:380] ip2 <- ip1\\n\",\n      \"I0318 00:58:43.662571 2013098752 net.cpp:338] ip2 -> ip2\\n\",\n      \"I0318 00:58:43.662580 2013098752 net.cpp:113] Setting up ip2\\n\",\n      \"I0318 00:58:43.662595 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.662606 2013098752 layer_factory.hpp:74] Creating layer ip2_ip2_0_split\\n\",\n      \"I0318 
00:58:43.662654 2013098752 net.cpp:84] Creating Layer ip2_ip2_0_split\\n\",\n      \"I0318 00:58:43.662665 2013098752 net.cpp:380] ip2_ip2_0_split <- ip2\\n\",\n      \"I0318 00:58:43.662678 2013098752 net.cpp:338] ip2_ip2_0_split -> ip2_ip2_0_split_0\\n\",\n      \"I0318 00:58:43.662689 2013098752 net.cpp:338] ip2_ip2_0_split -> ip2_ip2_0_split_1\\n\",\n      \"I0318 00:58:43.662698 2013098752 net.cpp:113] Setting up ip2_ip2_0_split\\n\",\n      \"I0318 00:58:43.662706 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.662714 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.662722 2013098752 layer_factory.hpp:74] Creating layer accuracy\\n\",\n      \"I0318 00:58:43.662734 2013098752 net.cpp:84] Creating Layer accuracy\\n\",\n      \"I0318 00:58:43.662740 2013098752 net.cpp:380] accuracy <- ip2_ip2_0_split_0\\n\",\n      \"I0318 00:58:43.662749 2013098752 net.cpp:380] accuracy <- label_data_1_split_0\\n\",\n      \"I0318 00:58:43.662756 2013098752 net.cpp:338] accuracy -> accuracy\\n\",\n      \"I0318 00:58:43.662766 2013098752 net.cpp:113] Setting up accuracy\\n\",\n      \"I0318 00:58:43.662818 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:43.662827 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:43.662839 2013098752 net.cpp:84] Creating Layer loss\\n\",\n      \"I0318 00:58:43.662847 2013098752 net.cpp:380] loss <- ip2_ip2_0_split_1\\n\",\n      \"I0318 00:58:43.662854 2013098752 net.cpp:380] loss <- label_data_1_split_1\\n\",\n      \"I0318 00:58:43.662863 2013098752 net.cpp:338] loss -> loss\\n\",\n      \"I0318 00:58:43.662873 2013098752 net.cpp:113] Setting up loss\\n\",\n      \"I0318 00:58:43.662883 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:43.662901 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:43.662909 2013098752 net.cpp:122]     with loss weight 1\\n\",\n      \"I0318 00:58:43.662922 2013098752 
net.cpp:167] loss needs backward computation.\\n\",\n      \"I0318 00:58:43.662930 2013098752 net.cpp:169] accuracy does not need backward computation.\\n\",\n      \"I0318 00:58:43.662936 2013098752 net.cpp:167] ip2_ip2_0_split needs backward computation.\\n\",\n      \"I0318 00:58:43.662942 2013098752 net.cpp:167] ip2 needs backward computation.\\n\",\n      \"I0318 00:58:43.662976 2013098752 net.cpp:167] relu1 needs backward computation.\\n\",\n      \"I0318 00:58:43.662988 2013098752 net.cpp:167] ip1 needs backward computation.\\n\",\n      \"I0318 00:58:43.662997 2013098752 net.cpp:169] label_data_1_split does not need backward computation.\\n\",\n      \"I0318 00:58:43.663003 2013098752 net.cpp:169] data does not need backward computation.\\n\",\n      \"I0318 00:58:43.663009 2013098752 net.cpp:205] This network produces output accuracy\\n\",\n      \"I0318 00:58:43.663017 2013098752 net.cpp:205] This network produces output loss\\n\",\n      \"I0318 00:58:43.663028 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.\\n\",\n      \"I0318 00:58:43.663035 2013098752 net.cpp:217] Network initialization done.\\n\",\n      \"I0318 00:58:43.663041 2013098752 net.cpp:218] Memory required for data: 3728\\n\",\n      \"I0318 00:58:43.663158 2013098752 solver.cpp:154] Creating test net (#0) specified by test_net file: examples/hdf5_classification/nonlinear_auto_test.prototxt\\n\",\n      \"I0318 00:58:43.663179 2013098752 net.cpp:42] Initializing net from parameters: \\n\",\n      \"state {\\n\",\n      \"  phase: TEST\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"data\\\"\\n\",\n      \"  type: \\\"HDF5Data\\\"\\n\",\n      \"  top: \\\"data\\\"\\n\",\n      \"  top: \\\"label\\\"\\n\",\n      \"  hdf5_data_param {\\n\",\n      \"    source: \\\"examples/hdf5_classification/data/test.txt\\\"\\n\",\n      \"    batch_size: 10\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip1\\\"\\n\",\n      \" 
 type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"data\\\"\\n\",\n      \"  top: \\\"ip1\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 40\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu1\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  top: \\\"ip1\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"ip2\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"ip1\\\"\\n\",\n      \"  top: \\\"ip2\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 2\\n\",\n      \"    weight_filler {\\n\",\n      \"      type: \\\"xavier\\\"\\n\",\n      \"    }\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"accuracy\\\"\\n\",\n      \"  type: \\\"Accuracy\\\"\\n\",\n      \"  bottom: \\\"ip2\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"accuracy\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"loss\\\"\\n\",\n      \"  type: \\\"SoftmaxWithLoss\\\"\\n\",\n      \"  bottom: \\\"ip2\\\"\\n\",\n      \"  bottom: \\\"label\\\"\\n\",\n      \"  top: \\\"loss\\\"\\n\",\n      \"}\\n\",\n      \"I0318 00:58:43.663349 2013098752 layer_factory.hpp:74] Creating layer data\\n\",\n      \"I0318 00:58:43.663365 2013098752 net.cpp:84] Creating Layer data\\n\",\n      \"I0318 00:58:43.663373 2013098752 net.cpp:338] data -> data\\n\",\n      \"I0318 00:58:43.663385 2013098752 net.cpp:338] data -> label\\n\",\n      \"I0318 00:58:43.663396 2013098752 net.cpp:113] Setting up data\\n\",\n      \"I0318 00:58:43.663422 2013098752 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/hdf5_classification/data/test.txt\\n\",\n      \"I0318 00:58:43.663457 2013098752 hdf5_data_layer.cpp:80] Number of HDF5 files: 1\\n\",\n      
\"I0318 00:58:43.664719 2013098752 net.cpp:120] Top shape: 10 4 (40)\\n\",\n      \"I0318 00:58:43.664739 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.664754 2013098752 layer_factory.hpp:74] Creating layer label_data_1_split\\n\",\n      \"I0318 00:58:43.664772 2013098752 net.cpp:84] Creating Layer label_data_1_split\\n\",\n      \"I0318 00:58:43.664783 2013098752 net.cpp:380] label_data_1_split <- label\\n\",\n      \"I0318 00:58:43.664791 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_0\\n\",\n      \"I0318 00:58:43.664803 2013098752 net.cpp:338] label_data_1_split -> label_data_1_split_1\\n\",\n      \"I0318 00:58:43.664813 2013098752 net.cpp:113] Setting up label_data_1_split\\n\",\n      \"I0318 00:58:43.664822 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.664829 2013098752 net.cpp:120] Top shape: 10 (10)\\n\",\n      \"I0318 00:58:43.664837 2013098752 layer_factory.hpp:74] Creating layer ip1\\n\",\n      \"I0318 00:58:43.664846 2013098752 net.cpp:84] Creating Layer ip1\\n\",\n      \"I0318 00:58:43.664854 2013098752 net.cpp:380] ip1 <- data\\n\",\n      \"I0318 00:58:43.664862 2013098752 net.cpp:338] ip1 -> ip1\\n\",\n      \"I0318 00:58:43.664875 2013098752 net.cpp:113] Setting up ip1\\n\",\n      \"I0318 00:58:43.664901 2013098752 net.cpp:120] Top shape: 10 40 (400)\\n\",\n      \"I0318 00:58:43.664924 2013098752 layer_factory.hpp:74] Creating layer relu1\\n\",\n      \"I0318 00:58:43.664945 2013098752 net.cpp:84] Creating Layer relu1\\n\",\n      \"I0318 00:58:43.664958 2013098752 net.cpp:380] relu1 <- ip1\\n\",\n      \"I0318 00:58:43.664966 2013098752 net.cpp:327] relu1 -> ip1 (in-place)\\n\",\n      \"I0318 00:58:43.664975 2013098752 net.cpp:113] Setting up relu1\\n\",\n      \"I0318 00:58:43.664983 2013098752 net.cpp:120] Top shape: 10 40 (400)\\n\",\n      \"I0318 00:58:43.664990 2013098752 layer_factory.hpp:74] Creating layer ip2\\n\",\n      \"I0318 00:58:43.665000 2013098752 
net.cpp:84] Creating Layer ip2\\n\",\n      \"I0318 00:58:43.665006 2013098752 net.cpp:380] ip2 <- ip1\\n\",\n      \"I0318 00:58:43.665015 2013098752 net.cpp:338] ip2 -> ip2\\n\",\n      \"I0318 00:58:43.665030 2013098752 net.cpp:113] Setting up ip2\\n\",\n      \"I0318 00:58:43.665052 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.665066 2013098752 layer_factory.hpp:74] Creating layer ip2_ip2_0_split\\n\",\n      \"I0318 00:58:43.665077 2013098752 net.cpp:84] Creating Layer ip2_ip2_0_split\\n\",\n      \"I0318 00:58:43.665086 2013098752 net.cpp:380] ip2_ip2_0_split <- ip2\\n\",\n      \"I0318 00:58:43.665093 2013098752 net.cpp:338] ip2_ip2_0_split -> ip2_ip2_0_split_0\\n\",\n      \"I0318 00:58:43.665103 2013098752 net.cpp:338] ip2_ip2_0_split -> ip2_ip2_0_split_1\\n\",\n      \"I0318 00:58:43.665113 2013098752 net.cpp:113] Setting up ip2_ip2_0_split\\n\",\n      \"I0318 00:58:43.665122 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.665128 2013098752 net.cpp:120] Top shape: 10 2 (20)\\n\",\n      \"I0318 00:58:43.665137 2013098752 layer_factory.hpp:74] Creating layer accuracy\\n\",\n      \"I0318 00:58:43.665144 2013098752 net.cpp:84] Creating Layer accuracy\\n\",\n      \"I0318 00:58:43.665153 2013098752 net.cpp:380] accuracy <- ip2_ip2_0_split_0\\n\",\n      \"I0318 00:58:43.665168 2013098752 net.cpp:380] accuracy <- label_data_1_split_0\\n\",\n      \"I0318 00:58:43.665180 2013098752 net.cpp:338] accuracy -> accuracy\\n\",\n      \"I0318 00:58:43.665192 2013098752 net.cpp:113] Setting up accuracy\\n\",\n      \"I0318 00:58:43.665200 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:43.665207 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:43.665216 2013098752 net.cpp:84] Creating Layer loss\\n\",\n      \"I0318 00:58:43.665223 2013098752 net.cpp:380] loss <- ip2_ip2_0_split_1\\n\",\n      \"I0318 00:58:43.665230 2013098752 net.cpp:380] loss <- 
label_data_1_split_1\\n\",\n      \"I0318 00:58:43.665241 2013098752 net.cpp:338] loss -> loss\\n\",\n      \"I0318 00:58:43.665251 2013098752 net.cpp:113] Setting up loss\\n\",\n      \"I0318 00:58:43.665259 2013098752 layer_factory.hpp:74] Creating layer loss\\n\",\n      \"I0318 00:58:43.665273 2013098752 net.cpp:120] Top shape: (1)\\n\",\n      \"I0318 00:58:43.665282 2013098752 net.cpp:122]     with loss weight 1\\n\",\n      \"I0318 00:58:43.665290 2013098752 net.cpp:167] loss needs backward computation.\\n\",\n      \"I0318 00:58:43.665338 2013098752 net.cpp:169] accuracy does not need backward computation.\\n\",\n      \"I0318 00:58:43.665351 2013098752 net.cpp:167] ip2_ip2_0_split needs backward computation.\\n\",\n      \"I0318 00:58:43.665380 2013098752 net.cpp:167] ip2 needs backward computation.\\n\",\n      \"I0318 00:58:43.665387 2013098752 net.cpp:167] relu1 needs backward computation.\\n\",\n      \"I0318 00:58:43.665393 2013098752 net.cpp:167] ip1 needs backward computation.\\n\",\n      \"I0318 00:58:43.665400 2013098752 net.cpp:169] label_data_1_split does not need backward computation.\\n\",\n      \"I0318 00:58:43.665407 2013098752 net.cpp:169] data does not need backward computation.\\n\",\n      \"I0318 00:58:43.665415 2013098752 net.cpp:205] This network produces output accuracy\\n\",\n      \"I0318 00:58:43.665421 2013098752 net.cpp:205] This network produces output loss\\n\",\n      \"I0318 00:58:43.665431 2013098752 net.cpp:447] Collecting Learning Rate and Weight Decay.\\n\",\n      \"I0318 00:58:43.665441 2013098752 net.cpp:217] Network initialization done.\\n\",\n      \"I0318 00:58:43.665446 2013098752 net.cpp:218] Memory required for data: 3728\\n\",\n      \"I0318 00:58:43.665534 2013098752 solver.cpp:42] Solver scaffolding done.\\n\",\n      \"I0318 00:58:43.665568 2013098752 solver.cpp:222] Solving \\n\",\n      \"I0318 00:58:43.665577 2013098752 solver.cpp:223] Learning Rate Policy: step\\n\",\n      \"I0318 00:58:43.665586 
2013098752 solver.cpp:266] Iteration 0, Testing net (#0)\\n\",\n      \"I0318 00:58:43.683938 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.5184\\n\",\n      \"I0318 00:58:43.683981 2013098752 solver.cpp:315]     Test net output #1: loss = 0.716141 (* 1 = 0.716141 loss)\\n\",\n      \"I0318 00:58:43.684236 2013098752 solver.cpp:189] Iteration 0, loss = 0.764954\\n\",\n      \"I0318 00:58:43.684267 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.5\\n\",\n      \"I0318 00:58:43.684285 2013098752 solver.cpp:204]     Train net output #1: loss = 0.764954 (* 1 = 0.764954 loss)\\n\",\n      \"I0318 00:58:43.684305 2013098752 solver.cpp:464] Iteration 0, lr = 0.01\\n\",\n      \"I0318 00:58:43.714700 2013098752 solver.cpp:266] Iteration 1000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.721762 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8168\\n\",\n      \"I0318 00:58:43.721818 2013098752 solver.cpp:315]     Test net output #1: loss = 0.434918 (* 1 = 0.434918 loss)\\n\",\n      \"I0318 00:58:43.721899 2013098752 solver.cpp:189] Iteration 1000, loss = 0.282425\\n\",\n      \"I0318 00:58:43.721917 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:43.721932 2013098752 solver.cpp:204]     Train net output #1: loss = 0.282426 (* 1 = 0.282426 loss)\\n\",\n      \"I0318 00:58:43.721942 2013098752 solver.cpp:464] Iteration 1000, lr = 0.01\\n\",\n      \"I0318 00:58:43.750509 2013098752 solver.cpp:266] Iteration 2000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.754590 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8224\\n\",\n      \"I0318 00:58:43.754621 2013098752 solver.cpp:315]     Test net output #1: loss = 0.416874 (* 1 = 0.416874 loss)\\n\",\n      \"I0318 00:58:43.754660 2013098752 solver.cpp:189] Iteration 2000, loss = 0.51988\\n\",\n      \"I0318 00:58:43.754672 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 
00:58:43.754683 2013098752 solver.cpp:204]     Train net output #1: loss = 0.51988 (* 1 = 0.51988 loss)\\n\",\n      \"I0318 00:58:43.754690 2013098752 solver.cpp:464] Iteration 2000, lr = 0.01\\n\",\n      \"I0318 00:58:43.782609 2013098752 solver.cpp:266] Iteration 3000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.789728 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8176\\n\",\n      \"I0318 00:58:43.789777 2013098752 solver.cpp:315]     Test net output #1: loss = 0.415907 (* 1 = 0.415907 loss)\\n\",\n      \"I0318 00:58:43.790487 2013098752 solver.cpp:189] Iteration 3000, loss = 0.5093\\n\",\n      \"I0318 00:58:43.790510 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 00:58:43.790530 2013098752 solver.cpp:204]     Train net output #1: loss = 0.509301 (* 1 = 0.509301 loss)\\n\",\n      \"I0318 00:58:43.790544 2013098752 solver.cpp:464] Iteration 3000, lr = 0.01\\n\",\n      \"I0318 00:58:43.817451 2013098752 solver.cpp:266] Iteration 4000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.821740 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8252\\n\",\n      \"I0318 00:58:43.821770 2013098752 solver.cpp:315]     Test net output #1: loss = 0.409124 (* 1 = 0.409124 loss)\\n\",\n      \"I0318 00:58:43.821822 2013098752 solver.cpp:189] Iteration 4000, loss = 0.284815\\n\",\n      \"I0318 00:58:43.821835 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:43.821846 2013098752 solver.cpp:204]     Train net output #1: loss = 0.284815 (* 1 = 0.284815 loss)\\n\",\n      \"I0318 00:58:43.821890 2013098752 solver.cpp:464] Iteration 4000, lr = 0.01\\n\",\n      \"I0318 00:58:43.847015 2013098752 solver.cpp:266] Iteration 5000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.852102 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8256\\n\",\n      \"I0318 00:58:43.852145 2013098752 solver.cpp:315]     Test net output #1: loss = 0.404445 (* 1 = 
0.404445 loss)\\n\",\n      \"I0318 00:58:43.852188 2013098752 solver.cpp:189] Iteration 5000, loss = 0.511566\\n\",\n      \"I0318 00:58:43.852200 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 00:58:43.852210 2013098752 solver.cpp:204]     Train net output #1: loss = 0.511566 (* 1 = 0.511566 loss)\\n\",\n      \"I0318 00:58:43.852219 2013098752 solver.cpp:464] Iteration 5000, lr = 0.001\\n\",\n      \"I0318 00:58:43.876060 2013098752 solver.cpp:266] Iteration 6000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.880080 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8328\\n\",\n      \"I0318 00:58:43.880105 2013098752 solver.cpp:315]     Test net output #1: loss = 0.396847 (* 1 = 0.396847 loss)\\n\",\n      \"I0318 00:58:43.880700 2013098752 solver.cpp:189] Iteration 6000, loss = 0.397858\\n\",\n      \"I0318 00:58:43.880718 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:43.880729 2013098752 solver.cpp:204]     Train net output #1: loss = 0.397858 (* 1 = 0.397858 loss)\\n\",\n      \"I0318 00:58:43.880738 2013098752 solver.cpp:464] Iteration 6000, lr = 0.001\\n\",\n      \"I0318 00:58:43.913795 2013098752 solver.cpp:266] Iteration 7000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.917851 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8316\\n\",\n      \"I0318 00:58:43.917876 2013098752 solver.cpp:315]     Test net output #1: loss = 0.398135 (* 1 = 0.398135 loss)\\n\",\n      \"I0318 00:58:43.917956 2013098752 solver.cpp:189] Iteration 7000, loss = 0.243849\\n\",\n      \"I0318 00:58:43.917971 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:43.917989 2013098752 solver.cpp:204]     Train net output #1: loss = 0.243849 (* 1 = 0.243849 loss)\\n\",\n      \"I0318 00:58:43.918002 2013098752 solver.cpp:464] Iteration 7000, lr = 0.001\\n\",\n      \"I0318 00:58:43.943681 2013098752 solver.cpp:266] 
Iteration 8000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.947589 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.8312\\n\",\n      \"I0318 00:58:43.947615 2013098752 solver.cpp:315]     Test net output #1: loss = 0.394763 (* 1 = 0.394763 loss)\\n\",\n      \"I0318 00:58:43.947651 2013098752 solver.cpp:189] Iteration 8000, loss = 0.513399\\n\",\n      \"I0318 00:58:43.947664 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.7\\n\",\n      \"I0318 00:58:43.947674 2013098752 solver.cpp:204]     Train net output #1: loss = 0.513399 (* 1 = 0.513399 loss)\\n\",\n      \"I0318 00:58:43.947682 2013098752 solver.cpp:464] Iteration 8000, lr = 0.001\\n\",\n      \"I0318 00:58:43.973080 2013098752 solver.cpp:266] Iteration 9000, Testing net (#0)\\n\",\n      \"I0318 00:58:43.977033 2013098752 solver.cpp:315]     Test net output #0: accuracy = 0.834\\n\",\n      \"I0318 00:58:43.977056 2013098752 solver.cpp:315]     Test net output #1: loss = 0.395663 (* 1 = 0.395663 loss)\\n\",\n      \"I0318 00:58:43.977710 2013098752 solver.cpp:189] Iteration 9000, loss = 0.399341\\n\",\n      \"I0318 00:58:43.977735 2013098752 solver.cpp:204]     Train net output #0: accuracy = 0.9\\n\",\n      \"I0318 00:58:43.977746 2013098752 solver.cpp:204]     Train net output #1: loss = 0.399342 (* 1 = 0.399342 loss)\\n\",\n      \"I0318 00:58:43.977756 2013098752 solver.cpp:464] Iteration 9000, lr = 0.001\\n\",\n      \"I0318 00:58:44.003437 2013098752 solver.cpp:334] Snapshotting to examples/hdf5_classification/data/train_iter_10000.caffemodel\\n\",\n      \"I0318 00:58:44.003702 2013098752 solver.cpp:342] Snapshotting solver state to examples/hdf5_classification/data/train_iter_10000.solverstate\\n\",\n      \"I0318 00:58:44.003850 2013098752 solver.cpp:248] Iteration 10000, loss = 0.244639\\n\",\n      \"I0318 00:58:44.003871 2013098752 solver.cpp:266] Iteration 10000, Testing net (#0)\\n\",\n      \"I0318 00:58:44.008216 2013098752 solver.cpp:315]     Test net 
output #0: accuracy = 0.8308\\n\",\n      \"I0318 00:58:44.008252 2013098752 solver.cpp:315]     Test net output #1: loss = 0.397291 (* 1 = 0.397291 loss)\\n\",\n      \"I0318 00:58:44.008262 2013098752 solver.cpp:253] Optimization Done.\\n\",\n      \"I0318 00:58:44.008270 2013098752 caffe.cpp:134] Optimization Done.\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_solver.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Clean up (comment this out if you want to examine the hdf5_classification/data directory).\\n\",\n    \"shutil.rmtree(dirname)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Use Caffe as a generic SGD optimizer to train logistic regression on non-image HDF5 data.\",\n  \"example_name\": \"Off-the-shelf SGD for classification\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 3\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/03-fine-tuning.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Fine-tuning a Pretrained Network for Style Recognition\\n\",\n    \"\\n\",\n    \"In this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.\\n\",\n    \"\\n\",\n    \"The upside of such an approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the \\\"semantics\\\" of the general visual appearance. Think of it as a very powerful feature that you can treat as a black box. On top of that, only a few layers will be needed to obtain very good performance on the data.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"First, we will need to prepare the data. This involves the following parts:\\n\",\n    \"(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.\\n\",\n    \"(2) Download a subset of the overall Flickr style dataset for this demo.\\n\",\n    \"(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"os.chdir('..')\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, './python')\\n\",\n    \"\\n\",\n    \"import caffe\\n\",\n    \"import numpy as np\\n\",\n    \"from pylab import *\\n\",\n    \"%matplotlib inline\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# This downloads the ilsvrc auxiliary data (mean file, etc),\\n\",\n    \"# and a subset of 2000 images for the style recognition task.\\n\",\n    
\"!data/ilsvrc12/get_ilsvrc_aux.sh\\n\",\n    \"!scripts/download_model_binary.py models/bvlc_reference_caffenet\\n\",\n    \"!python examples/finetune_flickr_style/assemble_data.py \\\\\\n\",\n    \"    --workers=-1 --images=2000 --seed=1701 --label=5\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's show what is the difference between the fine-tuning network and the original caffe model.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"1c1\\r\\n\",\n      \"< name: \\\"CaffeNet\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \"> name: \\\"FlickrStyleCaffeNet\\\"\\r\\n\",\n      \"4c4\\r\\n\",\n      \"<   type: \\\"Data\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   type: \\\"ImageData\\\"\\r\\n\",\n      \"15,26c15,19\\r\\n\",\n      \"< # mean pixel / channel-wise mean instead of mean image\\r\\n\",\n      \"< #  transform_param {\\r\\n\",\n      \"< #    crop_size: 227\\r\\n\",\n      \"< #    mean_value: 104\\r\\n\",\n      \"< #    mean_value: 117\\r\\n\",\n      \"< #    mean_value: 123\\r\\n\",\n      \"< #    mirror: true\\r\\n\",\n      \"< #  }\\r\\n\",\n      \"<   data_param {\\r\\n\",\n      \"<     source: \\\"examples/imagenet/ilsvrc12_train_lmdb\\\"\\r\\n\",\n      \"<     batch_size: 256\\r\\n\",\n      \"<     backend: LMDB\\r\\n\",\n      \"---\\r\\n\",\n      \">   image_data_param {\\r\\n\",\n      \">     source: \\\"data/flickr_style/train.txt\\\"\\r\\n\",\n      \">     batch_size: 50\\r\\n\",\n      \">     new_height: 256\\r\\n\",\n      \">     new_width: 256\\r\\n\",\n      \"31c24\\r\\n\",\n      \"<   type: \\\"Data\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   type: \\\"ImageData\\\"\\r\\n\",\n      \"42,51c35,36\\r\\n\",\n      \"< # mean pixel / channel-wise mean instead of mean 
image\\r\\n\",\n      \"< #  transform_param {\\r\\n\",\n      \"< #    crop_size: 227\\r\\n\",\n      \"< #    mean_value: 104\\r\\n\",\n      \"< #    mean_value: 117\\r\\n\",\n      \"< #    mean_value: 123\\r\\n\",\n      \"< #    mirror: true\\r\\n\",\n      \"< #  }\\r\\n\",\n      \"<   data_param {\\r\\n\",\n      \"<     source: \\\"examples/imagenet/ilsvrc12_val_lmdb\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   image_data_param {\\r\\n\",\n      \">     source: \\\"data/flickr_style/test.txt\\\"\\r\\n\",\n      \"53c38,39\\r\\n\",\n      \"<     backend: LMDB\\r\\n\",\n      \"---\\r\\n\",\n      \">     new_height: 256\\r\\n\",\n      \">     new_width: 256\\r\\n\",\n      \"323a310\\r\\n\",\n      \">   # Note that lr_mult can be set to 0 to disable any fine-tuning of this, and any other, layer\\r\\n\",\n      \"360c347\\r\\n\",\n      \"<   name: \\\"fc8\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   name: \\\"fc8_flickr\\\"\\r\\n\",\n      \"363c350,351\\r\\n\",\n      \"<   top: \\\"fc8\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   top: \\\"fc8_flickr\\\"\\r\\n\",\n      \">   # lr_mult is set to higher than for other layers, because this layer is starting from random while the others are already trained\\r\\n\",\n      \"365c353\\r\\n\",\n      \"<     lr_mult: 1\\r\\n\",\n      \"---\\r\\n\",\n      \">     lr_mult: 10\\r\\n\",\n      \"369c357\\r\\n\",\n      \"<     lr_mult: 2\\r\\n\",\n      \"---\\r\\n\",\n      \">     lr_mult: 20\\r\\n\",\n      \"373c361\\r\\n\",\n      \"<     num_output: 1000\\r\\n\",\n      \"---\\r\\n\",\n      \">     num_output: 20\\r\\n\",\n      \"384a373,379\\r\\n\",\n      \">   name: \\\"loss\\\"\\r\\n\",\n      \">   type: \\\"SoftmaxWithLoss\\\"\\r\\n\",\n      \">   bottom: \\\"fc8_flickr\\\"\\r\\n\",\n      \">   bottom: \\\"label\\\"\\r\\n\",\n      \">   top: \\\"loss\\\"\\r\\n\",\n      \"> }\\r\\n\",\n      \"> layer {\\r\\n\",\n      \"387c382\\r\\n\",\n      \"<   bottom: \\\"fc8\\\"\\r\\n\",\n     
 \"---\\r\\n\",\n      \">   bottom: \\\"fc8_flickr\\\"\\r\\n\",\n      \"393,399d387\\r\\n\",\n      \"< }\\r\\n\",\n      \"< layer {\\r\\n\",\n      \"<   name: \\\"loss\\\"\\r\\n\",\n      \"<   type: \\\"SoftmaxWithLoss\\\"\\r\\n\",\n      \"<   bottom: \\\"fc8\\\"\\r\\n\",\n      \"<   bottom: \\\"label\\\"\\r\\n\",\n      \"<   top: \\\"loss\\\"\\r\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!diff models/bvlc_reference_caffenet/train_val.prototxt models/finetune_flickr_style/train_val.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"For your record, if you want to train the network in pure C++ tools, here is the command:\\n\",\n    \"\\n\",\n    \"<code>\\n\",\n    \"build/tools/caffe train \\\\\\n\",\n    \"    -solver models/finetune_flickr_style/solver.prototxt \\\\\\n\",\n    \"    -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \\\\\\n\",\n    \"    -gpu 0\\n\",\n    \"</code>\\n\",\n    \"\\n\",\n    \"However, we will train using Python in this example.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"iter 0, finetune_loss=3.360094, scratch_loss=3.136188\\n\",\n      \"iter 10, finetune_loss=2.672608, scratch_loss=9.736364\\n\",\n      \"iter 20, finetune_loss=2.071996, scratch_loss=2.250404\\n\",\n      \"iter 30, finetune_loss=1.758295, scratch_loss=2.049553\\n\",\n      \"iter 40, finetune_loss=1.533391, scratch_loss=1.941318\\n\",\n      \"iter 50, finetune_loss=1.561658, scratch_loss=1.839706\\n\",\n      \"iter 60, finetune_loss=1.461696, scratch_loss=1.880035\\n\",\n      \"iter 70, finetune_loss=1.267941, scratch_loss=1.719161\\n\",\n      \"iter 80, finetune_loss=1.192778, scratch_loss=1.627453\\n\",\n      \"iter 90, finetune_loss=1.541176, 
scratch_loss=1.822061\\n\",\n      \"iter 100, finetune_loss=1.029039, scratch_loss=1.654087\\n\",\n      \"iter 110, finetune_loss=1.138547, scratch_loss=1.735837\\n\",\n      \"iter 120, finetune_loss=0.917412, scratch_loss=1.851918\\n\",\n      \"iter 130, finetune_loss=0.971519, scratch_loss=1.801927\\n\",\n      \"iter 140, finetune_loss=0.868252, scratch_loss=1.745545\\n\",\n      \"iter 150, finetune_loss=0.790020, scratch_loss=1.844925\\n\",\n      \"iter 160, finetune_loss=1.092668, scratch_loss=1.695591\\n\",\n      \"iter 170, finetune_loss=1.055344, scratch_loss=1.661715\\n\",\n      \"iter 180, finetune_loss=0.969769, scratch_loss=1.823639\\n\",\n      \"iter 190, finetune_loss=0.780566, scratch_loss=1.820862\\n\",\n      \"done\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"niter = 200\\n\",\n    \"# losses will also be stored in the log\\n\",\n    \"train_loss = np.zeros(niter)\\n\",\n    \"scratch_train_loss = np.zeros(niter)\\n\",\n    \"\\n\",\n    \"caffe.set_device(0)\\n\",\n    \"caffe.set_mode_gpu()\\n\",\n    \"# We create a solver that fine-tunes from a previously trained network.\\n\",\n    \"solver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')\\n\",\n    \"solver.net.copy_from('models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\\n\",\n    \"# For reference, we also create a solver that does no finetuning.\\n\",\n    \"scratch_solver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')\\n\",\n    \"\\n\",\n    \"# We run the solver for niter times, and record the training loss.\\n\",\n    \"for it in range(niter):\\n\",\n    \"    solver.step(1)  # SGD by Caffe\\n\",\n    \"    scratch_solver.step(1)\\n\",\n    \"    # store the train loss\\n\",\n    \"    train_loss[it] = solver.net.blobs['loss'].data\\n\",\n    \"    scratch_train_loss[it] = scratch_solver.net.blobs['loss'].data\\n\",\n    \"    if it % 10 == 0:\\n\",\n    \"        print 'iter %d, finetune_loss=%f, scratch_loss=%f' % 
(it, train_loss[it], scratch_train_loss[it])\\n\",\n    \"print 'done'\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's look at the training loss produced by the two training procedures respectively.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false,\n    \"scrolled\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[<matplotlib.lines.Line2D at 0x7fbb36f0ad50>,\\n\",\n       \" <matplotlib.lines.Line2D at 0x7fbb36f0afd0>]\"\n      ]\n     },\n     \"execution_count\": 5,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAXUAAAEACAYAAABMEua6AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzt3XmcXFWd9/HPtzt7AlkkJCGAgbCIqCSyuIDaRECEYZvB\\n\",\n       \"EQRFB5iMo8CjzuMwOlpdioo4IM4iM6wTgdHhgRFBRAhLM6gQtgQCIQQkYc8CJIEQQpb+PX+c01hp\\n\",\n       \"eqmqrl5SfN+vV7266tZdzr11+3tPnXvuLUUEZmZWHxr6uwBmZlY7DnUzszriUDczqyMOdTOzOuJQ\\n\",\n       \"NzOrIw51M7M6UlaoS2qUNFfS9fn1OEmzJS2SdLOkMb1bTDMzK0e5NfUzgAVAW6f2M4HZEbEbcGt+\\n\",\n       \"bWZm/azbUJe0PXAYcDGgPPhIYFZ+Pgs4uldKZ2ZmFSmnpv5j4P8CrSXDJkTEsvx8GTCh1gUzM7PK\\n\",\n       \"dRnqkv4MWB4Rc/lTLX0zke4z4HsNmJkNAIO6ef/DwJGSDgOGAVtLuhxYJmliRCyVNAlY3tHEkhz2\\n\",\n       \"ZmZViIgOK9LdUbk39JL0MeDvIuIISecAL0XEDyWdCYyJiLecLJUU1RbMNiepOSKa+7sc9cLbs7a8\\n\",\n       \"PWurJ9lZaT/1tiPA2cDBkhYBM/JrMzPrZ901v7wpIu4A7sjPXwYO6q1CmZlZdXxF6Zajpb8LUGda\\n\",\n       \"+rsAdaalvwtgSdlt6lXN3G3qZmYV68s2dTMzG8Ac6mZmdcShbmZWRxzqZmZ1xKFuZlZHHOpmZnXE\\n\",\n       \"oW5mVkcc6mZmdcShbmZWRxzqZmZ1xKFuZlZHHOpmZnXEoW5mVkcc6mZmdaTPQ11FSUUd1tfLNTN7\\n\",\n       \"O+iPmvo44HoV5fusm5nVWH+FegMwqh+WbWZW17oNdUnDJM2RNE/SAkk/yMObJT0raW5+HFrmMse2\\n\",\n       \"+2tmZjXS7Q9PR8Q6SQdGxFpJg4DfSToACOC8iDivwmW2hfkY4OkKpzUzsy6U1fwSEWvz0yFAI7Ay\\n\",\n       
\"ZGZWR3rl4iMzM+sfDnUzszriUDczqyMOdTOzOuJQNzOrIw51M7M64lA3M6sjDnUzszry/wFBsEB8\\n\",\n       \"UlvRigAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7fbb37f20990>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"plot(np.vstack([train_loss, scratch_train_loss]).T)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Notice how the fine-tuning procedure produces a more smooth loss function change, and ends up at a better loss. A closer look at small values, clipping to avoid showing too large loss during training:\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[<matplotlib.lines.Line2D at 0x7fbb347a8310>,\\n\",\n       \" <matplotlib.lines.Line2D at 0x7fbb347a8590>]\"\n      ]\n     },\n     \"execution_count\": 6,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAXgAAAEACAYAAAC57G0KAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsnXeYHNWVt98jgXIY5ZyQMNlIJJMMwhhssI0Dxsbr8Dms\\n\",\n       \"zTpne9e73qa9tnFYrzMYe53WOeyuFzA4YBAYTEYiCQQCCSRAaZQTEtL5/jj3TlXXVHdX9/SMZsR5\\n\",\n       \"n2ee6a6uqq5Ov3vu7557rqgqjuM4zv5Hv319AY7jOE734ALvOI6zn+IC7ziOs5/iAu84jrOf4gLv\\n\",\n       \"OI6zn+IC7ziOs59SSOBFpL+ILBSRK6s8/g0ReURE7hGRea29RMdxHKcZikbwHwQWA52S5kXkXGCO\\n\",\n       \"qh4MvAu4rHWX5ziO4zRLXYEXkanAucB/ApKzy3nAjwFU9TagTUQmtPIiHcdxnMYpEsF/Ffg4sLfK\\n\",\n       \"41OAFan7K4GpXbwux3Ecp4vUFHgReTmwRlUXkh+9d+yaue/1DxzHcfYxB9R5/GTgvOCzDwJGiMh/\\n\",\n       \"qepbUvs8CUxL3Z8atlUgIi76juM4TaCqtQLsqkjRYmMicjrwMVV9RWb7ucD7VPVcETkR+Jqqnphz\\n\",\n       
\"vHIxy4EXaUmXNXyhZbkK+I6W9KpGj90fEZGLVfXifX0d+wP+XrYWfz9bi4hoswJfL4LPouEJLwJQ\\n\",\n       \"1ctV9WoROVdElgLbgLfVOH4QsLOZCwWeAQY2eazjOM5zjsICr6o3ADeE25dnHntfwdN0ReB3AQOa\\n\",\n       \"PNZxHOc5R0/PZPUIvnUs2NcXsB+xYF9fwH7Ggn19AY7R0wI/EBPqZvAIPoWqLtjX17C/4O9la/H3\\n\",\n       \"s/fQ0wL/rJa0Wj59PTyCdxzHaYCeFvhm7RnwCN5xHKch+prAewTvOI5TkL4k8M/gEbzjOE5helrg\\n\",\n       \"mx1gBY/gHcdxGsIjeMdxnP2UviTwPsjqOI7TAH1J4D1N0nEcpwH6mgfvEbzjOE5BPIJ3HMfZT+lL\\n\",\n       \"Au8RvOM4TgP0NYH3CN5xHKcgfUngPU3ScRynAfraIKtH8I7jOAXxCN5xHGc/pS8JvEfwjuM4DdCX\\n\",\n       \"BN4jeMdxnAboSwLvaZKO4zgN0JcGWX2ik+M4TgN4BO84jrOf0tcE3iN4x3GcgtQVeBEZJCK3icgi\\n\",\n       \"EVksIpfk7DNfRDaJyMLw9y9VTueDrI7jOD3EAfV2UNWdInKGqm4XkQOAm0TkVFW9KbPrDap6Xp3T\\n\",\n       \"+UQnx3GcHqKQRaOq28PNAUB/YH3OblLgVF2J4HcDB0hZetpWchzH6ZMUEksR6Scii4DVwPWqujiz\\n\",\n       \"iwIni8g9InK1iBxe5VRNC7yWVPGBVsdxnMLUtWgAVHUvMFdERgJ/FJH5qrogtcvdwLRg45wD/A54\\n\",\n       \"XqcTXcqb5GI5I9xbkDlHEaIP35WegOM4Tq9FROYD81tyLlVt9Mk/DexQ1X+vsc8y4FhVXZ/aplzM\\n\",\n       \"XC3pPU1fbFnWAYdpSdc2ew7HcZy+hIioqhaxwDtRJItmrIi0hduDgbOAhZl9JoiIhNsnYA1Hnk/f\\n\",\n       \"lUHWeLxbNI7jOAUoYtFMAn4sIv2wBuEnqvoXEbkIQFUvB14LvFtEngW2AxdWOVdXrRX34B3HcQpS\\n\",\n       \"JE3yPuCYnO2Xp25/G/h2gedrhcB7qqTjOE4B+tJMVnCLxnEcpzB9TeA9gnccxylITwv8ri4e7xG8\\n\",\n       \"4zhOQXpU4LWke7t4Co/gHcdxCtLXpv17BO84jlOQvibwHsE7juMUpK8JvEfwjuM4BelrAu8RvOM4\\n\",\n       \"TkH6osB7BO84jlOAvibwvvC24zhOQfqawHsE7ziOU5C+JvA+yOo4jlOQvibwPsjqOI5TkL4m8B7B\\n\",\n       \"O47jFKSvCbxH8I7jOAXpawLvEbzjOE5B+prAewTvOI5TkL4o8B7BO47jFKCvCbxPdHIcxylIXxN4\\n\",\n       \"j+Adx3EK0tcE3iN4x3GcgvQ1gfcI3nEcpyB9TeB7bZqklGWelGXWvr4Ox3GcSE2BF5FBInKbiCwS\\n\",\n       \"kcUickmV/b4hIo+IyD0iMq97LhXo3WmSHwReva8vwnEcJ1JT4FV1J3CGqs4Fng+cISKnpvcRkXOB\\n\",\n       \"Oap6MPAu4LLuulh6cQQPTKb3Nj6O4zwHqWvRqOr2cHMA0B9Yn9nlPODHYd/bgDYRmdDKi0zRmyP4\\n\",\n       \"SfTea3Mc5zlIXYEXkX4isghYDVyvqoszu0wBVqTurwSm5p+ry56/R/CO4zgFOaDeDqq6F5grIiOB\\n\",\n       
\"P4rIfFVdkNlNsofln21gWWTXnnBnQc556tErI3gpy0BgNL3w2hzH6VuIyHxgfivOVVfgI6q6SUR+\\n\",\n       \"DxwHLEg99CQwLXV/atiWwzNfVGVrw1eZ0FvTJCeF/4P26VU4jtPnCYHvgnhfRErNnqteFs1YEWkL\\n\",\n       \"twcDZwELM7tdAbwl7HMisFFVV1c5ZVfFubdOdJoc/vfGa3Mc5zlKPU98EnBd8OBvA65U1b+IyEUi\\n\",\n       \"chGAql4NPCYiS4HLgffUOF9XBX4LMELKkrWE9jUxgneBdxyn11DTolHV+4BjcrZfnrn/voLPd2Dx\\n\",\n       \"S8u5npJul7LsAYZhYt9bmAysxQXecZxeRE/PZG2Ff74WGNeC87SSScBy3IN3HKcX4QLfGiYDy/AI\\n\",\n       \"3nGcXoQLfGuYhAu84zi9jJ4W+C558IF1wNgWnKcTIswXqUj5LIpH8I7j9Dr6dAQvZZkkZanwvaUs\\n\",\n       \"35OyHN/oSUUYBPwSeHET1+QevOM4vY4+LfCYIN8lZTkWQMpyJPD3wGHVDhZhugjn5Dz0dmACDYp0\\n\",\n       \"mMU6HJvc5RG84zi9hr5o0aQFfjbwI+APUpb5wMeAbcCYGse/EKt62YEIBwKfwHL9G43CJ2J1enbg\\n\",\n       \"Au84Ti+icKmCFtGyCD5EzuOArwJ3Ar/GXs93qe3RD6azEL8Yi8D/SuMCPxl4mt47y9ZxnOcofVbg\\n\",\n       \"genAk1rSZ4HrpSzvxCpbKnB0jePzBH4i8DCwk8YFfjTQjgu84zi9jL4s8DOxgU0AtKT/ByBleR21\\n\",\n       \"LZrBdBbxNmAjJvCjGryekcAmTOB9kNVxnF5DX/bgZ2KpiVnaqW3RDKJzpD0K2EBzEXwU+J0553Uc\\n\",\n       \"x9ln9MUsms3hPIeRiuBTrCNE8FKWfsGrT5Nn0YwiieCbEfiNwG6gv5Slry1k7jjOfkqfE3gtqWIi\\n\",\n       \"fjz5Ap+O4F9P5zVi8wS+jeYj+DZgU7iuXrkgieM4z036okUDZtMcQ3WLZkwoKXwwMD7zeK0IfgfN\\n\",\n       \"WzTQXAPhOI7TLfS5CD6wFhgCLBdhsAh3xge0pDuAZ4GhWKbNsMyxrY7g0wLvmTSO4/Qa+ozAizBF\\n\",\n       \"hP7h7jrM834aGAEck1nQO9o007FZpmnqefCDG7w0F3jHcXolfUbggZ9hs1DBIvjHtaR7sEheqBTm\\n\",\n       \"ONA6jc4Cn5dF01UPfmO47QLvOE6voS958KNIctTXkgywDgn/01ZMvQh+kAjpZf+6mkXjHrzjOL2O\\n\",\n       \"vjTRaUT4A1iCeeyQRO7DsJowYAL/POz15XnwEh7bLcLAcHs77sE7jrMf0Zcsmg6B15L+Wkv6j2F7\\n\",\n       \"XgS/DpiHlR8YkslNjw1CFOI2YKMqigu84zj7EX3Cogl2SjqCT1PNojkGs3F2kET7kC/wG8LthgRe\\n\",\n       \"ytI/PH9cANwF3nGcXkNfieCjjZL106F6BH8E8AQmvunjooBHIY7+OzQewY8AtmpJ9zZ5vOM4TrfR\\n\",\n       \"VwR+ROZ/mrQHH2nHGoQngK2kBX7ONSN5xTv30IIInkp7BjyCdxynF1FX4EVkmohcLyIPiMj9IvKB\\n\",\n       \"nH3mi8gmEVkY/v6lyum6Q+CrWTSQRPDJY2MeHswhVwr9n0lH8C7wjuPsdxTJotkNfFhVF4nIMOAu\\n\",\n       
\"Efmzqj6Y2e8GVT2vzrmaTZNsVODXhf+dLZrB7QMYtrofx3x/JrznISrz2HcBB4rQX5U9Ba4rFhqL\\n\",\n       \"uMA7jtNrqBvBq+oqVV0Ubm8FHsRWMcoiOduyNBvBD8cW8mjEooGMwIvQnyHt9prnXHNM2Kcjgk9l\\n\",\n       \"0hQV6TYqI3j34B3H6TU05MGLyEws/fC2zEMKnCwi94jI1SJyeJVTdMWiWU31QdZnqRT4tVhe+9NU\\n\",\n       \"RvCDGbpmD+uet43RS+dKWQbz5rNexwE7NqeObaTgmFs0juP0WgoLfLBnfgt8METyae4Gpqnq0cA3\\n\",\n       \"gd/ln+X9zxORi8Pf/AaucwSwkuoWzVpS4q8l3QbMCsv5pQdZBzF09V6WnbmGYauPAt7F7GuP5vDf\\n\",\n       \"psscNBKFu8A7jtNSwphm1MmLu3KuQgIvIgcC/w38VFU7ibeqblHV7eH2NcCBIjK685m++aSqXhz+\\n\",\n       \"FjRwnbUEfjCwhsyMVS3pmnAzPcg6mKFr4eGXPc3AzbOAT7J+9kae9/t0SeFGCo65B+84TktR1QUp\\n\",\n       \"nby4K+cqkkUjwPeBxar6tSr7TAj7ISInAKKq63N27YoH/yTVI/hOAp+i0qIZ0t6P9udtYufIp4A7\\n\",\n       \"uPeNq5m4cFpq/0Yi+DwP3gXecZxeQZEI/hTgTcAZqTTIc0TkIhG5KOzzWuA+EVkEfA24sMq5uuLB\\n\",\n       \"r8EyXLKZOMUFvt+uwQza0I/NU7Zw99//BvgIj58Gw5+ck9q/qxaND7I6jtMrqJsmqao3UachUNVv\\n\",\n       \"A98u8HxdSZN8FFuPdTiQ7h0MAR4BplY5dgtWeAym3TKeZwft5dkhO7j2i0v0z198VD67fQADtk+S\\n\",\n       \"sgzSku7EPXjHcfYT+tJM1s3hL2vT5HrwKbZ2PDbuwYk8M3I3aSF+dkgbe/s/DMwN+zcq8O7BO47T\\n\",\n       \"K+lLAr8l/GUFvqpFI8LZ/OlLryJaNMOeGsfOthilD+woYiZ7b8MW8YauRfDuwTuO02voE9UkMYGu\\n\",\n       \"FsHHNMm8CP4INs6cRBT4Ie3j2DlyJ0mkPQR4hn57bwdOCMd0ZZDVPXjHcXoNfSmCb8aiGceOMQcS\\n\",\n       \"BX7QxrE8M3I7icDHnsF1wFmhbnyXPXgpyxgpy9sKnsNxHKdb6LUCL8JIEf4S7qYFPjubtSOCzyzD\\n\",\n       \"BzCeHW0DOo4ZuGU0O0duJRH44cAWLelj4RwvoDGBj9cViec9GagouCZlmS1l+UXB8zqO43SZXivw\\n\",\n       \"wATgRSIdC33U8uA3Y+UKsv73OHaOGkCM7gdsbWNn2xYyAh/2vQI4j4ICH6L9wcC21ObowU8HZkhZ\\n\",\n       \"0pbUHOC4eud1HMdpFb3Zg4+R+iHU9uAHY3VnkmyZhPHsbBvUca4Dt7axY/RmEq88nhcaFHhslajt\\n\",\n       \"qcU+SJ13OtAfSE+gGkOyaLjjOE6309MC30+E/gX3jUJ+GCamW8kIvAj9MEHdSb7Aj+OZEUOI67IO\\n\",\n       \"2DqcHaM3kUTasWcAcAcwmuk3DqKYwKej/0jsGcwI92enHhsDtElZilTddBzH6TI9LfC7KB7Fxwj+\\n\",\n       \"OGB7qM+e9eAHATtV2Uu1CF77DyWuyzpg23C2TthAjkUTIvE/ccRvplBc4LNF1+J5pwNLgINSj43B\\n\",\n       
\"ovqhOI7j9AC9WeBHYIJ5HEmknPXgh2DiDZml+UQYhNk3Q9Eg/gO2DmXLlHbyPXiAexm3eDTNR/Bp\\n\",\n       \"D34BlRH82PDfbRrHcXqEnhb43YSBVhEGi1SdfQom5Aux+vPRJ8968NF/h84R/DgsffJZEBP/AVsH\\n\",\n       \"s372OioFPp0Fs5i25eMoVk2ymkUzHBgP/JXOETxY7rzjOE63sy8i+JhJ83HgX2vsOxy4C4umqwn8\\n\",\n       \"EGoL/FpgG3v7bwMuZPvY7WyZspHOefCRxQxbPYFiEfww8gV+ArbQyBI6e/DgAu84Tg+xLwX+aCqz\\n\",\n       \"TLKMAFYBK6gt8NGiqVxc26LoNcA29h64A/gX/vzFRWj/9EzWbBT+BP2fGcSwp/NWjspSLYK388Bj\\n\",\n       \"wOzUoOrY8Hpc4B3H6RH2pQd/BDCpxr7RPnmISg8+Lb71LBqL4J8duBP4H+5702asQYipkBUirSVV\\n\",\n       \"dratZPKd4wq8lrxB1p3h/xNa0vXAHpLIfQxWEdM9eMdxeoR94sGHAdA55C/eHYn2yRKas2iSCP7W\\n\",\n       \"D18GvBtrEHZQPYKHHaMfZ8J9Y6hPXgS/K/x/Ivx/lMSmGQMsxSN4x3F6iJ4W+I2YR30IZlcUieD/\\n\",\n       \"Ctwftm3G6r9EinnwN/zrai3pBjoLfLbUAGybsIwxS4qIcCeBD6mWu0kE/jHgICnLIMyaWoELvOM4\\n\",\n       \"PURPC/x1wFmYPXMLNvGpWibNCGCLKr9W5cth2yZgeJjgBIlgQy2BT3LP60fwG6c/wuileUsDZsmL\\n\",\n       \"4AnnjgL/CHAwFr23Axtwi8ZxnB6ipwX+GuBcTOAfwLJNqkXx2RRGwmSnrSQ2TTaCT/vziUVTKfC1\\n\",\n       \"Bllh5UlLGPXYkAKvZRidPXjC+aPAPwQcSiLwG/EI3nGcHqKnBf42bBLQWSQCX82Hz6YwRtIimRb4\\n\",\n       \"tZioR2IEv5VE4AdRL4Jf9NZlHLijv5RlVp3XUi2CLwMPh9su8I7j7DN6VOBVeRb4M7Z60v3UjuA7\\n\",\n       \"++PGRhKbI50muRyYmdqvWgQfs2jyPfjdQ3ew6P9tBj5a5+XkCryW9Fta0pgu+RA23jCOxKLpUYEX\\n\",\n       \"YYgIv+/J53Qcp3fQ0xE8mE2zG8soqWfR5EXIaZFMp0k+TlLkC+p78J3SJAM7uPkTO4G/k7JMqPE6\\n\",\n       \"ql1fB1rSLeF655FE8D3twY8BXpJTK99xnP2cfSHwVwJfVWU3VQQ+iFHeTFGobtGsBw4QoS0M3B6A\\n\",\n       \"Ree1BllR7ZicFNnJlikDgF8CH6jxOuoKfOAh4BRgHfvGohmKFTkrMq7gOM5+RI8LvCrtqnwy3H2K\\n\",\n       \"/Ah+KLAjDKpmyRV4VRSzaWZgJYaXhG3bgKEiHIgJ3a5QffJZ8gU6ToL6NXBajZeSN9Epjwex9V73\\n\",\n       \"iUVDIuxFMoMcx9mPqCvwIjJNRK4XkQdE5H4RyY1qReQbIvKIiNwjIvMKPn+1QdZqA6xQaXOk0yQh\\n\",\n       \"sWkOBxaHbTGCHwesCaIPFsVXE/jB7DlwJTAl/UBYa/XccLdaDyPLQ1hvoR3rUQwLq0H1FLH3MrLm\\n\",\n       \"Xo7j7HcUEZrdwIdV9QjgROC9InJYegcROReYo6oHA+8CLiv4/NU8+E4pkinSUXDaooFkoDWmYUIi\\n\",\n       
\"8JOwyVWRZ/KeIwwEK7d8aB0wObNAx3zg31LXWETgHwz/28NEqK30rNi6wDvOc5S6Aq+qq1R1Ubi9\\n\",\n       \"FROsbNR9HvDjsM9tQJtIzQHKSDWBrxfBVxP4x6ku8BPpLPDVnuNPXPulz6FsJ6klAxbRz6yyHms1\\n\",\n       \"Hgr/23OuvyfoswIvZZkoZSnt6+twnL5KQ1aBiMzEMkJuyzw0BZuGH1kJTC1wyvXAEBGrvy7CiSKc\\n\",\n       \"S/UUSagUyKF0juCjRVNP4HdSXeDfAJzO9rE7qLRpJgOjsUYpux5rNZ4Or2VduN/TPnz04BsSeCnL\\n\",\n       \"BQXmAnQ3JwB/v6+eXMrSX8qSXcjdcfoMhQVeRIYBvwU+GCL5Trtk7munHUQuTv3ND354uibNa7Ef\\n\",\n       \"dC37I+3BZ22X5cCRWL2bx8K2tMA/ndq3agSvyibgvaw7ZCidBZ7wHEUGWK1CJbyIZPJTx/WLcIAI\\n\",\n       \"J6X3l7IMkrKcX+TcBbEIfuziSVKW2XX2TfMB4PQWXkczzAImSVmKruPbNFKWN0hZPp/Z/P+Ab3T3\\n\",\n       \"cztOGhGZn9bKrpzrgIJPeCDw38BPVfV3Obs8SWVt96lhWwWqenHOsSuwqPsx4HlY9F0rgk9HwFOp\\n\",\n       \"7Dk8Hs6xKJWBE2eyTsRqw0RyPfgUS9hw0CBm3JwV+J3AURTz3wHQkt6VupvugRwNXCXCuJDZA9ZD\\n\",\n       \"+qGU5X8L9hDqYQJ/3rsuBM4EXlnwuBkkywzuK2ZhmU/jgaelLKcBf9OSPtsNz3UwNgEvu62RRtFx\\n\",\n       \"uoyqLsCW/ARApHmbskgWjQDfBxar6teq7HYF8Jaw/4nARlVdXfAaHsBEHUycZ2OReU2LJuS6D8QE\\n\",\n       \"P7IWy6pZnNq2Dct4yRtkrSXSq9k8rR87R6SX3ZsM3I5F8IUFPu/6w+0RwGjec8TXpdyRnTQN68HM\\n\",\n       \"aeSkIpxVpXDbUAZsgcl3Hos1HvXPVZYDsJ5Lkbr43Um0iGIj+xvg2G56rrF0fs+nUcxqdJxeSRGL\\n\",\n       \"5hTgTcAZIrIw/J0jIheJyEUAqno18JiILAUuB97TwDU8ABwpwgHYAOkDwMnUH2SdCqxMpT2mc+Ef\\n\",\n       \"SO3fzCCrnWvH6DU8M+KQ1ObJwN9oMILPkK4oOQIUhj99IYn4xp7QMQ2e9wt0jkABhnD0j7ew+qgV\\n\",\n       \"wAgpSxHRnoJ9NwoLvJTl/G5I/5yF9QSnSlniWrcHt/g5ImOB6RnPfVp4bp8F3CBSlqG13jcpy1FS\\n\",\n       \"lrf24CX1KVr1nSuSRXOTqvZT1bmqOi/8XaOql6vq5an93qeqc1T1aFW9u4FruB/LepmJeeS3Y41K\\n\",\n       \"rQh+FJ3tmci9wB2p+00JPAA7Rq1A+80E+8JiPYZFWI+jkAefwzISkRrOhHt3MGjjWJI6OtOwAdlG\\n\",\n       \"I9U28hcLH8rxlx3Anf/wGHA3xRqOWPKhkMCHkg6/xT7HlhC+4LOw9QCmkCxg3l0CPwb7PaQHlqdj\\n\",\n       \"351uy0CSsgyXshzXXeffh/wBqDWW9BrgrT1zKX2SN0pZvtXVk+yLUgVZHsCE4RDMI78X+7FVE98t\\n\",\n       \"2EzTWVi2TgWqXKjKn1Obqg2y1sqiMXaMeYT+uyaGe5OxmbfLwvM3G8EvAuaG28M5/rLHefjc3WiF\\n\",\n       
\"wF9B4xF8vsAfd9mhDNyyl0Vv3UpjAr+c4hH8yeF/Kwdlx2CzjR/ABH42sJcmBV7KcmCdXcZiFt+c\\n\",\n       \"sH8/7DN/jO61ac4HftcTA8k9ReglngK8vMZux1PZmDqVzCdJsW6a3iDwa7Af7mlYpsm9YXtuBB9s\\n\",\n       \"mE2YD54XwWf33x3OL1RG3TvCeaqzadr9DNgW7ZTJmF2wPNxvVuDvBY4MP+jhPO/Ksdzx3idBpgTv\\n\",\n       \"eyrwf8AxRbppUpZ+csSv+wFtjHx8lJTlnanHhjK/dDJ/+dyNaP8RmMAX6RnMAO6k+CDrKdiXsXUC\\n\",\n       \"f8uHXoayjCTldjaWnttsBH9LNlKWsrxIyhLHLcYCt5L48BOw3uKjdK/An4A1YC+qtoOUpaHxmCrn\\n\",\n       \"OEjKcp2U5dNdPVcBzsG+ay/Ns+3C9/o4YEqBhve5yhnA9V09yT4X+CDYDwCvxgT+vvBQLQHdiPng\\n\",\n       \"nSL4KmwDVqX9euDjmJBWZ+3h93DAzoHBl40RfBzIbUrgtaSbgNXAHGZeP4MBWwfy6FlXs2vYNkxI\\n\",\n       \"pmE/jq0Ui3Dezflv/Av9dvfjNW+6EPiulOX54bESTx+7gXvffDdmM9yFNRwTpSxfqnHO6WHfohH8\\n\",\n       \"KcAlwOkt86ufPvar7Bi9DmtUYwT/B+DgRp9DyjIaa9iOzDz0DZJ6Q2OxBiSK6XQsgCg6p6NZXgD8\\n\",\n       \"nCp2hZRlMHC/lGVG3uP1kLKIlOVd2Gu7H3hHD4wpvBy4FPudzs15fCoWcK3A3uemCPZWoUzAvoSU\\n\",\n       \"ZTqWaLG43r712OcCH7gfi8weVmUtyeSgajQl8OkNqjyuWmcm6p5BS9k2fi8m7pOBp0Je+3Kaj+Ah\\n\",\n       \"2jTP/+nRrJp7L3rAY2yduA3LIhqLvf67gJdIWU6p030/Htk7jwtfDZMWHg98C7goTFJ6B1ddvgxr\\n\",\n       \"mEZiFtg44FrgY1KWiVXOOQNraAfnTfSRspyYuj0YeD6WRrsNK/TWdUasGM7Wie0kAn8QNrayl8bT\\n\",\n       \"N08J/zui/yByM4Fp4TX0B+4hEfhp2MpcLRd4Kcs4KcuA8LyHAZ8EXiZlyfP6T8XGfpq1M34EXIR1\\n\",\n       \"+T+IBScvaPJcdQkR+VnA1Vhp8HNydjsO6yEuIxlbaYb/g47lPPcn5gMLgtZ0id4i8DHrJU4G+hxJ\\n\",\n       \"JJ/HBuxHXteiCWyj0n8vyuNsmtaPbWNnkkTwYALf7CArRIGfueAQHjn3VqCdjTOewbzs1VrSPVj3\\n\",\n       \"rMze/n9g15Az0wenbAWAw3j45Z9i+JNw88euBL6IzcT9D+AbbJo+IFz3yJBXfztwQzh/XnQFiQff\\n\",\n       \"TkZMpSxTMbsjTk47DlisJd0WztuUTSNlaZOynAcgwmDaHj+A9bO3UGnRPEpY5zYbhUpZBkpZpgbR\\n\",\n       \"zHIaVmIjbe+MxcZmpmJ+/7pw7rTAd1cE/zOsptFc4EEt6UpsIZyP5OwbP/uGI/gQGFwAnKElfSAI\\n\",\n       \"xi+x70dLCdH097AGZamWdBXVBf54rLFeRpMNV7DbDgH+n5Sl2qpwfZUzSOXBd4XeIvD3Y0XNHgdQ\\n\",\n       \"5duq1Mqj3xj+Nx3BF0GVZ9k+djsbDjqWSoG/jq51nxYB5zFk3XBuf+/dQDsbZu8BXkhotLSkX9eS\\n\",\n       
\"jueO9+xl/ZzXZI6/WcrywiByh/G3jz/B5QvhhouXBbG4EWss/gMTsRjBg3Wf3wcsJCcvPpxzOvZZ\\n\",\n       \"rKWzTXN2+H90+H8KcHO43bTAY6m4Pw6iNJq2ZbBq7g4t6WZsVnS8priQ+delLHdJWc6SslyGjac8\\n\",\n       \"EF5zlhcCP6BS4GeG/1MxsV+HCc60EIVGi2YFKYGXsoyXshxNQaQsr5KypHsOw7DP5h3YD/n28NAH\\n\",\n       \"gbdJWV6fOcWZ2Oc5Mxz/DilLUYGejQUM6d7wL4DXdcOg7kex92kR8I9h243AYSEoSJOO4GcBSFk6\\n\",\n       \"9cqkLIfVsJM+DnwF+CHwT12++hyqjQ9IWQ6WsmTtvqbOL2W5OOezmE8L/HfoPQJ/N/CvVeq/57ER\\n\",\n       \"62quL7h/UwIPwMMvX8q4xR/CIrsnAbSk/64lvbqp8xn3AIez7EVr2T1sI9DOukP6Y9U6O3olIoxg\\n\",\n       \"2RkjGLKuI/NFyjIKs0ROw6yL7TxxavyCxOj1U8AbwopSQ7D3qZ8IA7WkO0Ikl87mSTMO2KEl3YoJ\\n\",\n       \"fPaHdzb2XsZjTwVuCrcX0LwP/xpsQtox9Ns1ijEPw8qT4/dhJWaPPYMJ/DnA3wHfw+Zd7MEmss0D\\n\",\n       \"zk/7siG99Sgsap6TurZZmA0YBb5dS7oL+4xnUt2ieTPWWBTlEuBjqfsvBm7B0j//kSDwWtKngFcA\\n\",\n       \"34oNQhg7OAT4FUkEfz5wacH5DEdgwVMHWtKHsc+1S6mZUpYxqdvjgfcD79WSfllL+pfwXDvDtb81\\n\",\n       \"jAUskLIsxoKCDoEPabYr0j1TKcsIrMG+POuzS1lmYg3f94AvYSmF1ezGZl/f84Hl4feW3t4fWyvi\\n\",\n       \"2no9BylX2NA3AAAgAElEQVTL66Qs36vxe5gDlLDxx3jMBCwY63IGDfQSgVdlmypfaOCQjWQmOdVh\\n\",\n       \"K80K/F0X/Y31c57GfhBPZR8WYZ4IVzW4JN4KYD33v34z5uWvY90hAzExTttOh7PyJBiy7vBUNsIL\\n\",\n       \"gF1YFHgY1pNIL2GIlnSxlvS6sG0o1sBtpnLRj2oCHyNlyETw4cv9YuDrwNxw/1QsUgOzdXZjYwmF\\n\",\n       \"CdHbsdiM6Rcz/zMnov1g6dnx830Ss2fABP71wNe1pN/Rkh6kJX2flnSDlvQx7P1LL9RyIrBQS/o0\\n\",\n       \"lhobq5zOwnoe00gieLBg4zyqWzSHYwPV6Qlw1V7XVKwRPl/KMiBsPhfzp7+GDaTFCB4t6T2YYMUZ\\n\",\n       \"4/PDNT5C0uM4HGscLsk810Apy6VSlqNSm4+kctJf5GbsfUkf32kWtJTljKzAhe2zgaekLPFz/jTw\\n\",\n       \"8/D+Z/k+1ls5H/uevhV4T7BwHsM+h1dgqcfp780xWCA0AxuwTXMScL2WdIuWdDXwP8Dbcp67K5yK\\n\",\n       \"fffLme3vwn5L3wJ+I2WptVLafGyG/yerPD4Hs5s/lWoEjgDub4X/Dr1E4JtgA8XtGbBlAm9t8rke\\n\",\n       \"5RdX3An8b5XnnA+8jNo5vxWED+9sFr82rirVzvqD4w8sLfBHsnUiPDtoJxbJgQn7z7Av+RGYt9yG\\n\",\n       \"NWJ5X7Yo8JuonLDzEDZLc5iU5Z9SXc6DqCLwmAivBq7CGoe5WGS9JvW6Otk0BSL68zAP+krgLA7+\\n\",\n       
\"/Su55y1Av/jcaYG/DxtPqVY24zfABcGP/yU2+Pvr8NhSEptmJiaUaYsGrPfzT9j7vQILJg4Um0kL\\n\",\n       \"9p7fRTEf+8WYD70YGzAXTOB/jzWKbyFZLyDydayncTEmLtcQqqQGER6PFUE7V8ryAuh4f7+NBSHX\\n\",\n       \"SlliAbsjyUTwgVvJCDxwp5TlTfFOCCh+A/xfzkD7C7Be079LWV6KRaCfqfIe3I29h98HPqElvV1L\\n\",\n       \"+uPwWLRoXol9fw9NHXcs1jO8AHitlCVdE2o6SboyWC/undLamdQnYNH166UsH5KyfE7Kcin2Ot8P\\n\",\n       \"fB77DT0oZemo7yRlOTTV4zgMG+B+v1gdpSxzsN/yASRjFenFirpMXxX4dSQiVBdVLlXN/aIX4VE2\\n\",\n       \"T5umJX2NlnRHzuNzMcH7jEjx91NLehd6QKyauZFN04dhBccqBR620X7Ik9BRdfIkLGLZiP2wFmMz\\n\",\n       \"e58mM9Ep9CqGkCPwoWDXYqx65+eBd4eHXo6NMYC9z2mBPxv4I7AE+5G9jM6DQR0CH7rlXwDuk7LU\\n\",\n       \"KpH86vCabgSOZ+xDp3Pf320ksYduI/j8WtL7gIOC/ZTHb7GqpDdjP8BDtKSxImT078GE5V4sK2d2\\n\",\n       \"eK3RwvgqZhc9HRqtlVjOtmA/wE8DbwiNyAeyFkKKs7CMpZ9jYv56rBfxiJZUtaQ/yRaUCzbRe4GX\\n\",\n       \"AJ/FhDuOAxwBLNGSbsCiwm+HXtQnMdE9E4uWfxuuqZNFk3o/OzJppCzTsEj5c6lB6mPCe7Ia+Hna\\n\",\n       \"ksEaki+G9+KXwBu1pGvz3oDw/n0TuFFL+qfMw6ux93k+ZntlBf6uMH7wc+AfUo9Nx+yzyJ3Y7+HF\\n\",\n       \"edfQJCdgjes7sdf7DPZevk5Leq+WdK+W9B1YY/uT1HfgKqwRBxP4a4EPY2NG/UNDEXtfc7DEkq9i\\n\",\n       \"nxu4wAPwEyp9ze7kUUJFQRFOEOH7mcfnYa36XhqI4gMjgC2q7GHPwI1ov6eoFPgjgFtYeWI7cFL4\\n\",\n       \"MZ+ARWB/I8kOacPso2wEPwhbg3YPnSN4MJvmy5iQvDb41a/ARBI6R/CnAddpSXdjX8KLMEFPswDz\\n\",\n       \"4fsD38UGEm8FfpY3sBe2zQf+EER7IdsmrGX9wQ8RBF5LeqmW9EfxmODt5qIlXYo1FJ/Wkl4cuvCR\\n\",\n       \"dJbMTCyCXIF9hu2p/b6M/ZDjGMAKrGdjYx6Wj38A9oO/CPhKGDB7v5TloPC6BBOcP2Pv5yuwgcgP\\n\",\n       \"1et+a0mv1ZKepCX9lZZ0T3i96zEBjz/+n2LjUH/FBqhfFiyLq7Dg57zwWvO83IeBUcE7B2uQf4/1\\n\",\n       \"TN4ftp0dXudbMCFeImW5IDx2PBZdvwP4qJY0+x3Ivp4fhOvJbo8px7dh35FOAh9ufwuL0GNPokLg\\n\",\n       \"w3kux8S4y4ilq04HHtCSXqElfZOW9DPhe1gx+KklXYAFAM8P4wCzgROCtTUE633+BgvkfovV6XpJ\\n\",\n       \"OHwO1qv8C/DCVADx3BZ4VbarVvwgu5PHgJkhOj8dEyMARBiERYT3YStavazBc6fr3rdz5z+8l+RL\\n\",\n       \"DRbB38jD527FunDnY1kR6zCBh8SDf4rOpQrSC6LkCfxtWDZNCROxL5L41ZAaZA1fvmOxaAmscZhM\\n\",\n       
\"Z4F/FJvE8mcsSj4T6x0MBr6TI/IHA2u4WDeLcCjwNW765N8wMa5M0SzYQ9KSnq8l/a+ch5ZiKZb9\\n\",\n       \"sIj1cezHOZfEokFLuktL+j+p427AovEjsJRQxayD52MDhi8J1/tJLLsDQkE6Leny8HmN1ZIeHwS4\\n\",\n       \"GZZjkeHicI2Kva8PA6doSdMR7fexVOPH8xrD0Gu4gySKPx1rmP8R+LhYCuxLgD+FQfn3YL2Pz4RI\\n\",\n       \"dR4WXd+gJc0GPLnUaNSWAr/DGqJDoUNgpxDsKy3pQ1hjGhuJGVRG8GDiebaUZVDcEBrdi6QsV0hZ\\n\",\n       \"XlXt2kJPc3Rq07HAohDIFOFmzDqNRRJfgEXvD4WemgIfwmyxM4FDw3XOwVJKn8Aa61guPW/cpCn6\\n\",\n       \"pMD3JGEy1EbsCzcXmCFCTJ86HFiqyk4skjq16HlFGIBNrok/wHau/vb62GUXYQwWAdzHo2eD/WB/\\n\",\n       \"gGVggH2p1mPRVbUIPvrvkC/wPwBOD8/5K8wa+FXq8XQEPx14JgyOgQn8ktR9oOOH/AcsIn6ZlnRr\\n\",\n       \"+KG8Eouar5SyfF3K8uFwyDyskTkZ+JWW9H+4892bMOHKZvDcIdL8zEdMhJ+HDbRuDrn7K7H3ZV2N\\n\",\n       \"467AxKVjKUgt6d1a0ie0pBuxxvcj2A/49WHg7b1Y5EbYvyvzJsAao5NIRXda0vu1pG9Vmx2d5leY\\n\",\n       \"pVNLKG4lEfj52MSah7HG4VLMokk33tcBA7DewlPhdbeCd2I9vYexxrc/9p24Vyvr/i8gyfzJWjRo\\n\",\n       \"SduxQCs9/vMWrBFcTu3o/mTgrtRY0fGkBr8L8DesoT8F+00dj2lDx/iKlvRuYEr4vyTsM4XEav4r\\n\",\n       \"ZlUeSLMJITm4wBcj2jQx6ySmrEVxAvtyTQ3CXIThmD0TI5t2Ktd/jWLSDv1GaUkvw6KCT0FHxsWx\\n\",\n       \"QVBzPXgqBT6bRUPwEeO4wq+xga505Jr24NNdZrBB54+Tzzu1pBeEtMb4XFswC+tG1s9+ht2DLgl5\\n\",\n       \"xsdg7+EYkmnro7Av/iAR0gN8E+hajfoHsPfknzF7BpKB81oCfy/2wzufnO6zlnSZlvR/1OYg3BrO\\n\",\n       \"/xpaO8tyOfZ7rdt9D43Jz0i+m3nchllp07H3O57337DP+vbQAMZzKvBfWPbOHbQILenq0GPahtWl\\n\",\n       \"mkHn7xrY7+uoEN0fQH6K9FVU2qRvAS7GxkxeGCzIPE7Cgo+YxXMCjQl8jOBPwXojG7CApmIAPTXe\\n\",\n       \"cif2XXoyjLmAWV7vJOkhtgQX+GI8itkls7DWOq7yMxeLZG1SlP24T8k7QQ7ZZQnXkS/wG7B1YNGS\\n\",\n       \"rtCSdqyUpSVdHm5WWDQivEqET5AMsEJ+BN+BlvRxYGJmsCwdwR+LZUTE/Z/Ukl5Z5Vy5K1FpSZ/R\\n\",\n       \"kn6Bbyy9kY2z9mIRZGwkR2MLuQzHBGd9znsyhGQR8YYJjdmrsZS65SKUWHtofH+qWn7hB3cl9iOu\\n\",\n       \"133+T6wR/rKWtOg8jSI8jqXHPlpvx8AHoGbq8Q3YBLI7gRviZxaE9g1k0jADP8Gqst6Z81greAgL\\n\",\n       \"Yl5O53Wf78Nsr+nAE1VE8Crg5cFymYVF0VeHHs4dwItD1tibxQrNxeDhBOzzf0mwTk4jsUCL8Aj2\\n\",\n       
\"3ZyLNQy3Yb26bIZU5E7gdZg9FfkrNs7TMv8dXOCL8ijWIi/BPrROAh9oxKYZTmW9nWxZgIOxbut6\\n\",\n       \"kjVc+6fsoTRZi+Zw7Etaz4OvIB2xpa5pSJjQcQydo6pmaePRs3ZiPu88rOGIQj4NE/sNmMCn35Mh\\n\",\n       \"kLtqVWHUsnBeh2VmnMfy+XFMoFYEDzvargFg19B6P8CrsCJm3+zKdeawHLPECi1XGKLiqh5yiPJP\\n\",\n       \"xyyML2ceu1lLem3OMY9hPb3rso+1iIewAf9hVFqFYK+/DZtBnfXfIw9gmnYENiHtl6kIOdpsP8Je\\n\",\n       \"8zcx2xMs0PgqNrB8ATa+UO05OhEam79httJ2TOD7U1vgJ1Ep8A9iv3UX+H3Ao1g2yKJwe3YQ2qOp\\n\",\n       \"7AbfhE2LL0I2gs9aNDMxG6EjgsfSrfKqQGYHWUdi0UDaokmvJFWIICbfAf6FTATfRdp45GWKTXrZ\\n\",\n       \"FXz8+BqnURnBj4OOAdaBdFHgAbSkv9eSXgEMYdXcrdg4yPaaB/38qnu56RPw+a15tW7S596lJf2g\\n\",\n       \"5qfUdoW/0OIaMmEA8L+1pIXniGhJXx/swe7gQWyg9S3Zxin0MB7ABppzxTcI7fexaP2fMEspciWW\\n\",\n       \"0jgDK818AXBhGFAejn3PT8NKRmQnVhXhD1haJVgUvwtL0MjjvvB4h8CH1/dDkkmDLcEFvhiPYi1y\\n\",\n       \"h8Bjrf5S1Yo1YW8HjhIpVK2vnsDPApapsgNQEQZjX/6z0icRoT8memtIIviRWAMxjETgn6K5olmX\\n\",\n       \"ABdi3flOC6k3SRvLTx+ICXlsIEdjqaZR4LMRfBTWLgt8iqEsOW8T8JW6vueKUwZy7RehVdUyGyQ0\\n\",\n       \"HC3Lruil/BY4V0taLfK9D3gpNebAaEk/g/2O5mpJ70htfwyL0l+rJd2pJV2MBREfBu4Ig7SLsYDi\\n\",\n       \"941euJb0Mi1pXBz7duAd1XpboVexEHME0ts/piVtVS8ZsMEKpz7R91yEWR2zsXSnv6R3UmW7CO8H\\n\",\n       \"fiXCNaodk4fyyPPg03bETJLZeusxAZwFHCHCeFXWhMdGYlbPNmBwmNw0AhPEWSQCv4JkvdfCaEnX\\n\",\n       \"SVn+AzihhYM/bewZOBTlKqRjgZfR2Be+msDHxqtpDz6HoWyddKCW9F8K7Btnsh4GZCfsOC0gpJPW\\n\",\n       \"KrJ1HzYxr6Z9EmySJTnbs0kBv8IGYKNF9X1ANZn/0BSh9/HTOrudR41xn1bhAl+MdZivtgiLMg/C\\n\",\n       \"JrF8LrujKj8U4XZSKXJVqBrBi9CG9RjiIF20V2ZivvxpJJOR2oANqjwrwh4slS167UeSlDVeAUwT\\n\",\n       \"QRqo4RP5HPllEJrFZrWuOKXE9JtjSthorPbIYcAzquwSyRX41kbwxV9XWuCdfUMsIV7YH6/DrzDP\\n\",\n       \"/zYALen3WnTeumgo79HduEVTAFVUlRNV2aTKFiwqPh4bVM2jncRTrkYti2YmsDwlxOuxruNUzFec\\n\",\n       \"nzqujaR88g4sch+JWTJHUjnICplUySIEr7b24iiNYQL/g5tWq1VRhETgj4YO2ytdzbKlFk2wtmKB\\n\",\n       \"tyIMxz73w1vx/NUQ4TIRWloZcT+ipQKvNuv5SyTVUPc7XOCb41Hg9horQq0HRtepMFkrTXIWlcWU\\n\",\n       
\"NmBivQ6rBTM/9dgoEoHfTiLwi7Bocxt0LI3YlE0jgoSJWa0i1qUZnto2Brvmg0kEvjsj+Gj1NCLw\\n\",\n       \"d9H9EfzbyB9If84TLJx/pvhCP0XO+ckWTtrqdbjAN8cjWBGhXFTZhRUnqiVGI6gU+HSjMJNkIk58\\n\",\n       \"7JiwbRE2oSrWEclG8EMwgV+IRajpRqgpgcemqf+47l7FacOsrrTAxwheSKyp7hT4IZn/9RiOfe4H\\n\",\n       \"ijS8ZGAhQq9iAHCGSOFsrOcUWtLPF00VdQoIvIj8QERWi0juEnoiMl9ENonIwvBXZMCqr/Nx8lcO\\n\",\n       \"SrMeas5qrYjgQ6OwAxPnvAj+WCyr5llsZuUR4bE2kog3G8FDawT+NJpYMq4GMa1zOHTU9IlTtDeR\\n\",\n       \"vJ703IDcQdYu9CyGZv7XI35eD9J9Ufwg7DvwaZJVkRynaYpE8D/EUpNqcYOqzgt/n23BdfVqVFmt\\n\",\n       \"WndN1mzaIwAinCbCyXSe6AQWsU4gP4I/nET0nyZZuGIMiSDuwCLcYVg0DJX53c0K/Imp52sFbeFa\\n\",\n       \"YgQ/ClifspHi69mS2mcwVoM8G8HfK1KxFF9RmrFoWiLwInxAJHfG8xDs83qQrpVkcByggMCr6l+h\\n\",\n       \"Itc7j2aWaNvfiamNWd6EpWVlPXiw9Lu3kR/B9ycR/dXQMRA3iWRB8e2YMGwPx++lMoJfSYMCL8JQ\\n\",\n       \"TNBaMvAXLKiRVAr8aJKUMVvtykgL/BBs0DVZ1s0Kjx3S5LU1KvDDSAS+qwOtp2NjKlmiwG+htdlC\\n\",\n       \"znOUVnjwCpwsIveIyNUi0q1ZBn2I3AgeE9iTsYlSWYH/HLYk2GwqBT4KXty2ikqBj5koO8L2Tars\\n\",\n       \"xrINumrRHIfZPSLSEtEZio1PbCAR7zEkr/EJCgo8SeXAWouJ1LqOeN4ixAb5UZLl85plRJXnjQK/\\n\",\n       \"lcrxCcdpilbkwd8NTFPV7SJyDlZNLXdNThG5OHV3gaouaMHz91aqpUpOw1bBuZCMwKuyQoSfAW9S\\n\",\n       \"JT2yH3tQMYJfRbIy0WQqI/hJJCmR11OZUtaMwJ+IlSiegDUeS2vvniDCcaqdClPFQeG0eI8mEfVL\\n\",\n       \"sAYA7PUMFOEAzKJZg5VYjZyO9VKaFfjtNC7wT2auoRmGV3lej+AdRGQ+lZlyTdNlgVfV1EChXiMi\\n\",\n       \"l4rIaNXOlfRU9eKuPl8fomOQVYS3Ar9Q5RlMYF+DVTXMevBgEy+yNSzWY/5zTA+rG8EDqPL2zHlW\\n\",\n       \"YBk4Eh4vMuHpJKww10mYyBcS+CDKt4gwVZX0qko1BV41mYauioqwFRO7GMGnF7uej+UwNyPw8Xzd\\n\",\n       \"KvBh0trWMDgeKRTBNzkpzenjhMB3QbwvIqWqO9ehyxaNiEwQsUL5InICIHni/hykHRgTxPTbwFwR\\n\",\n       \"RmBe+iNYUbLO06ltAPermc1PAQ+mRGI1MDGcOx3BR4HPazgIA8PPYEWrltTJ049++YlYGeR0o1KE\\n\",\n       \"GVgAkU0prBfBZ4n7VVg0IkwJ57qZggIfqnHGBZKH0pzArwLGhgasCD+gcj3ReK5qAr8j2GvPQkUt\\n\",\n       \"fMdpmCJpkr/ASmEeIiIrROTtInKRiFwUdnktcJ+ILMJWur+w+y63TxEHWcdhP9xDseh9RZgZe4fa\\n\",\n       
\"Itt1UbU1H1ObotgOB/aG2bXQ2aLJYwW2fuUU6Milr8Y8LJpcgTUqjWTSxJLK2XGIRgU+G8EPCw3P\\n\",\n       \"6dhM4vXUKYOcYiYmuGACny7QVo/hJJH4Ooo3dvOw3lqaahH8YJKsJ/fhnS5TNwpR1ZolSlX121iE\\n\",\n       \"6lQSB1lnhfuHYiK5suoRNch01ddgkfE0EnsGkgg+d85C4Nck67AeFK6pGhcAvwlWySoaE/i4uHVR\\n\",\n       \"ga82OzHuNzjssxebDDQXKwu7kcrFmmsxHBgVSg/HCH5eA8fGhjTaNDU/y7B4yQRsAttoVdaHxim+\\n\",\n       \"nizRooGkYVubs5/jFMJnsnYf0YM/CBPeQwgRfFdPHLrwG7EVbp5OPbSdlAdf5djPqvJ7zOc/qNp+\\n\",\n       \"QYguICmalk7NLEIjAp/OosmStmi2Y1lBQ8O1PIW91qIe/HDoqLY5FIvEG53oBMV9+COxErTXkyzI\\n\",\n       \"PjRcQy0PHirfH8dpChf47iNm0czCfuAdFk2Lzr8aiz6zEfwgals0kWUkvYs8jsa+H7Fee0cEL9Kx\\n\",\n       \"+HEHIpyQ8fTnYAOyRT34aqVT0wK/gySynYg1bhspLvAxM2U0zXvwUFzgj8J6U78DXhW2xWJv9QQ+\\n\",\n       \"vk7HaRoX+O4jWjQHYROYDsIEtVUCvwqrT5MW+GzlyFrUjOBJ2TPhfhzYnQjcIZIsHiLSsYZmevxl\\n\",\n       \"dtjWVQ8+bdHEDJNh2FjDKhoT+PTzDQnHUmUZxA6CpTOEpPTyUzQm8FcBZ4WB2XRefxaP4J2W4gLf\\n\",\n       \"fWzEBv9mY930J7GBwVYK/DwqLZq4TFwrIvgzgaszzzcBW7oQbKJWTAP8DlY75SsitAVBPAhb2abV\\n\",\n       \"Fk06go8CPzJcy7w6q2nFiHgUyXKGRXLhhwLbU4PiDUXwqqzDspdG4RG804O4wHcTIdtiCzYYuAxb\\n\",\n       \"ULiVFs0qTBhbFsGLWI55KOB1FJWLbEcP/kxM4E4M28vAlap8EVvY+HOY+G3AJlnFuQBfEOF9ZAQ+\\n\",\n       \"FBobT/XlALMWzbZwjjbMQ09H8G+kdpGudATfiMBny0rUFfhgV8UIHpLCacPD7appkuF2RwMo0r01\\n\",\n       \"6J3qiPAGEd64r6+jWVzgu5eYwvcEJvDQWg8emo/gV2CWywAAEY4EHgoWzFHAY+mCaqH2/bPAy7Fa\\n\",\n       \"Oi8IlsMbSOqX/zNm05yN+e/papBHAx/FBD8dwc/BFjepWGQ5RV4EPxtYq8qe8FrbgqBOwUrtVssO\\n\",\n       \"6zGBxywkJfmcYunjEVjjXCRNMkbwN4e6O07Pczz23e2TuMB3L+1Y3vsuTOA3pXLWu0pc6i4vgs+d\\n\",\n       \"6JQmCOpT0CEccWHts7Ev9R05h63CMkB+hPn/Z2Cvb1k4ZzvwLUzwo8BHi2YGJmCnYgK/E0vTPZKk\\n\",\n       \"8csjz4OfE64lllnejQnmVEwUj61yrmHYjOBGLZpcga8zUSzaM3EMIwr8cKoLfCcPXoSBWA9lUp1r\\n\",\n       \"3KeIMFIk+RxFGN/AZLDeTByv6ZO4wHcv60nqxzxAZQngrhIFvtkIHoIPH4TqQsxLfylwAuafZ1kN\\n\",\n       \"XK/KJmxl+zLw35l9voo1AksJq1SF888ALsZEfWMQvi1YMbNOM3pT5EXwc6h83dGmmYKNG7y4yrmG\\n\",\n       
\"Y+KcjuC30bjAxwa01vKHp2AzgCPpCH51fE4RThThv8I+eR587AHtE4EXYYgIRUqAzwQOTjV6P6L6\\n\",\n       \"59CXcIF3qtJOUlfmFiw6bhWrsJmVaeFpVOCjD38cNoHoEuwaX0B+BP8E8Odw+1asPs1v0zuEImlv\\n\",\n       \"CdvXY8I7FtiFzSK9mcS22IL1FopE8GkPviOCD2zEovLJ2MpTtQT+cZIf7fbwVy8XPrs4i5Jj04gk\\n\",\n       \"lhfWu7k+9XBckrEjgk+t3hUnauVl0cS68Psqgp8DfCTbWxHpVEhvKqYncQLXqPDX13GBd6qyBisv\\n\",\n       \"GxfubuWsxEeA7OpZjQyygkXwLw3n+aUqK7DIeA75s2HfiUVmYAJ/vyoPZ3dS5SpVHg4DzdswD/Nx\\n\",\n       \"VXapcmqqUdqCWT1FIvi0RTObzgI/OzzXH4HjRDoi5GkiHWUThmGNVFWLJmTiZAU/r3b/Sjqnmf4Y\\n\",\n       \"+GB47nlYYxaJ4xEjsIZPsVWs2kiEsFYEv68W4p6MvfcdvZUwHrA0I/oxbXZ46n9FmmemAewrjCJ/\\n\",\n       \"1nGfwAW+e/ks8M3uOLEqO1X5embzDpJiVUX4C8lCIrHcxB+AhcHbzj7ntjCwCfAz4PwCz9GOifjj\\n\",\n       \"OY/Fsri1BD7WZElPdBpIZ4vmSGBlGBi+FTgvPPYV4MPh9nBM4HMHWUM+/B/pnIkzHDqt4PVj4Asi\\n\",\n       \"9uMPYncsVs//VGBRZlH2tAe/OfW8I8kX+KYieBEmiXBJkX0LMjnn+edi15we+I1lqOPAcCeBx4KD\\n\",\n       \"l7fw2lqKCB8V4SOZzR7BO/mo0q5af8CzhawjVWa0Hqrcqsp5qnwoVdL3P4EvFDh2e170nkM7Fs1W\\n\",\n       \"E/h1YXC2GlswkdsdGpcotNkI/kiSVMvLgPeLMAkr9BUHetMWTV4E/zLsPXxPxoLIi+B/go2rxAyi\\n\",\n       \"KdgA7k6szs/1mf3THvwWMgKfmkwVbbYYwY/DMp6KWjRHYuWoW0UU+HQPIha+OyK1rW4Ej/WyevNg\\n\",\n       \"8fNI2Xuh0W5I4EUYWq9Ka0/iAr8focoWVc7t4jmWqPK7Vl0TJmzHULnwSGQLtf33uM94ksg2RsVp\\n\",\n       \"gd+EiU0s/nUFJjjfDM8bbY5o0UzEqnDuplLg34algP4v8KHU+eNyfR0EH/4i4M0ijMcasYXAd7EV\\n\",\n       \"u/IEPnrw2Qi+H5U2VHzdMYK/j+IWzURqD/42Sl4E/3zs/U/n53cIfKqgWofAhwZsBq1d27fVjMHS\\n\",\n       \"f6NAD8YK2zUSwV+Nfd97BS7wTnfTjkVG1SL4WvZM3Gckld40dLZoDiUIfPD+L8MspEuojOCj4MeG\\n\",\n       \"Yjs24DkROA0rrnYJ8N7UDz0vgo8DytcBL8Fsi4XAT7ESDbdkdq8VwYNZHtU8+PsoHvl2h8BnexBH\\n\",\n       \"YVVJ0xH8NOy9HYZZaOmyDITjD6T3C/xokkJ5sRfXiAc/kfylOvcJLvBOdxPtlzyBXwPcW+f4dK17\\n\",\n       \"SAQ+XeZ4IyYe6dmw3wU+jWX9xAg+ZrDsIiPwmJVzhSpbVXkUs1smpo6rNn/hGuAcLIJfpMoGVU5U\\n\",\n       \"7bBaImmBz0bwUJnZE193jODvB8aHKLgeE7EGq6Ec9ODd51kLk7AZzRPDfoOxSPx/CQIfjpuKLUie\\n\",\n       
\"jtzTAh/LYvSYwIswQoRzGjhkDPAwySzt0dhn1UgEP5JeVGLCBd7pbmoJ/D8Dl9Y5ficmtmlvemt6\\n\",\n       \"li10rF/bUZ9dlfWqfJbK2bTRatlAZ4E/ksrSDA+RpC/WE/izsQHWhVX2AbORhmGikY7g27CJWtUi\\n\",\n       \"+HFYw7WZYpFhFNBGC5XdTP4EscnYussxgj8cE8FFwGGh0WnDZjk/Fa45Pnda6GZiEX5PRvDnAp9p\\n\",\n       \"YP+xWGG4WM9oFPbeu8A7ThVioa1OKaIhbfLZzodU7BMnRKVn6T6V2S0KfF49m21Av1AULc58XU+l\\n\",\n       \"pz8EOAyLQCNLSAQ+2ip517cyXM8YaqxXGwqVbcAyT2IEP5iklMVYzO+NC46nI/i1mCVVxKaJvY7C\\n\",\n       \"No0Is7AIe1Jmez9MkBemzvt8bIbuxvA6poW/lalrzovgZ2LWVU8K/HEUbOhCL2QM5qGnI/jCAh9S\\n\",\n       \"QAdRfI2BbscF3ulu2oEnii5PWIW0wN9HkgIZiXn/nVZYCg3EOkxgokivpzKCH4qJeXrA9yHg0PDD\\n\",\n       \"PxqrCFqNa4B7CrzGdZiVlPXgl2NZONtTpQ22Y172REzgV1Fc4PfQmA8fK4SOy2wfhzWeT6Se+/kk\\n\",\n       \"tjqGaqcAABLySURBVNoDmE0zFXvvY0rrcKyhyhP4estEtpJjybwPYd2CdhE+GVNcA0Ox9+1vWM9k\\n\",\n       \"MInAD8qzr3Iss2i3eQTvPGd4CMu37wpbCBaNKntVOw3MbsQEu9oEr3YqBT5r0UzCxCjdQCwhWYXr\\n\",\n       \"AGqXmfgeFMo9Xxf+b6VS4JdhVkiHbx+EfhtmE6zHIvgimTQTw/kaFfh2OotvXNA93XuYS77AryCZ\\n\",\n       \"1zAc69VkPfj7gf45E8laThDfY+gcwZ+Drc9wHvC+1PYxQHsYO1mCWXajSHqgg3Ke5gYRHhLhg+G+\\n\",\n       \"C7zz3EKVe1V5dxdPk47g81gJ3JZZtzbNOkxgom+fjeCPBZZkjo8e/EnALTXOjSpLVbmq7quw64iT\\n\",\n       \"xbaTiOHjhAg+s/9WYENI56xq0YgwV4SDw0St2GAUEvgQmZ6B1RTKRvCTMaFux9IfR2C2R6yxczNW\\n\",\n       \"onkOnS2arMDPDNe1GpggwgHhfN3FbMIAaSbSPgObw/DLcE2RsSQN8IOYZTcaCwaqFaSbAXyeZEZ5\\n\",\n       \"FHi3aBynAWoKvCorVTmzxvH1LJrxVPrvYLbJBOBFdE55bJZ2kkJlO0jy+9diUXD2NW4hGbuo5cH/\\n\",\n       \"G/CecL61mCgVFc/ZWHG4v4XjSdkRk4GngvW0Bot6HwzF5sAahaXAB7AIPm3RdAi8CP3D63ucIPDA\\n\",\n       \"m7BJdZ0Q4VSRQguq1OJYrJ5SbEgJaw8cB/w1XF/6/RxDkhAQG/e40tgO8gV+DHAlNlGtP8m6BB7B\\n\",\n       \"O04DREujWeoJPGQmXIXB38eA19M6gV9HZdrnJMxW2kD1CD4K/CpyLJpQTvgMLE0zrnK1meICfyY2\\n\",\n       \"KWsNSQT/DhG+E64vDmg/jUXrHRO4Qq/m77HofQmVFs0qYHCInidjM5afIRH440nyzbN8HGsAusJx\\n\",\n       \"WFZUeuWwk7D6SVvC65qc2j9P4KM91imCDx79gSRrG7ThFo3jNEWHB98k7VRaNDeSjAvkCnxq21Dg\\n\",\n       
\"zi48d5p1JBH8dkxgNmEikifw6Qj+CeDwnMG+UzCxmhfOV1jgg/i+H6srtJZE4Odhs3RfSSLwq4Cz\\n\",\n       \"sIldHYRsmkNVuYlKiyZmCg3D3vvl4ZAo8PMwiyOPcVjJ6q5wLPa5bSYR+HSFzzyBjxbNQyQWTRT4\\n\",\n       \"7GSnMVijFQfxx2ACv5m+JPAi8gMRWS0iedUF4z7fEJFHROQeEZnX2kt0nLoefD0qInhVblDlN+Gx\\n\",\n       \"eN6sRQP2Q1+k2qXnzl5HtQh+APkRfBSdW7Hfa7YU8kuAX2DCciImoEUj+FdjDecfMIGPg6wzsHLP\\n\",\n       \"6UXdn8YqYN6cOUfs7cTrjQK/hUTwZ5DMg1gTXvfzgWEiuWmM4+m6wB+OZVxtIXkv0gK/ClvRLGpg\\n\",\n       \"OoJ/BGuUJlDdg0/vHyexjcSybvqUB/9DrKRsLiJyLjBHVQ/GKuld1qJrc5zICiprzzTKOqpPVtqG\\n\",\n       \"pcfl5bD/gSo+cZPch/m/0Fng47Y0HRF88MEvAT6V2eclWAXMhViGSKEIPvQEPg18JkSha4FxqcVZ\\n\",\n       \"PodF9jFjZhVwe2aCWZa0RZMW+MkkcxRWY72O1ZgFlhfFjwNGi1RE2IVJFQlrp9KiOQyboEWwizaT\\n\",\n       \"TIIbG/ZHlZ2Y7XQ4VSwaKgU+TqaLAt93InhV/SvJFzCP87DSqajqbUCbiPTmehNOH0OVL6jyrS6c\\n\",\n       \"Iv4Q88RpFXBRlfLIf1Xl8i48b/Z8d6tSCne3YxHiRkxEoLMNtRmLeCO/BGaIcDJYeQFs4tTtmMDP\\n\",\n       \"o7hF8wIsz/6qcG3bsdmoMeJersqb4nKMwLXUL32dtmi2kMzGjemWYMJ+Wrje5WQEPowpDMFstGaj\\n\",\n       \"+CFY9dEo4iNSM27Xp/ZLD7SmBRus99af6oOs6aybtEXTtwS+AFOoXEh6JUllOcfpDcQfYl7BsGdV\\n\",\n       \"+X4PXw+YwPfDIvjNmP2RjeAvxnrQQIcV8j1seUUIA6Rh+91hW1GBfzFwdSb9cw1WGE6Dt96BKjeq\\n\",\n       \"8ss656xm0aQHa1djdtRCzLaZmTlHFM5baV7g20hmN8drGImVuEjPnE778GkPHsyyUyrrBqWpZtE8\\n\",\n       \"RS+yaFq1KG524Cc3Z1hELk7dXaCqC1r0/I5Ti6oCvw/pWH1Llb0ibCQj8KoVFTMj15KI/otIBotj\\n\",\n       \"HZxV2KScegJ/JlYaOc1aLPskr25QEaJFE0s7pAU+HcHH691DiOBFGKXKBsyeWYP1Sj7a5HWkBT4O\\n\",\n       \"smYjdOgs8NkIfkP4bKoNsuZZNHfTxQheROYD87tyjkgrBP5JktVcwKL3vJogqOrFLXg+x2mUWhbN\\n\",\n       \"viK7vGK6Pk4t7gamiDABE+m44MgKTKCfxESyqsCHJQWPx2yQNF0SeFV2iaCY+GU9+Cjw0XJaiAni\\n\",\n       \"MSKMA5aLMJak9s4dwPEiDMizz+qQjeBHkGTEpKkl8A+m9o8lpQWYptpROyg6F+uw+QRttMCiCYHv\\n\",\n       \"gnhfREpVd65DKyyaK7BFlhGRE4GNqrq69iGO06Nsw6ab98oIPvzfQAGBD7Ngb8Dyzw8k1NMPVstc\\n\",\n       \"VZaSsmhE+IecGaNxScFsg7cGE/5mI3iwRnQy1SP49diYx9MkHvw5mAUykyDwqqzDBqSbieLzLJp6\\n\",\n       
\"EfzYzOO3Aq8Nt6NFcyhJFlHa0klbNKuw2jX9m7jullMkTfIX2Cy3Q0RkhYi8XUQuEpGLAFT1auAx\\n\",\n       \"EVkKXI7NqHOcXkMqV7k3CnwUokICH7gO+BhwXdpDV+3wuePAogBfxPLZ05yJWT1Z1mK1Zboi8Fuo\\n\",\n       \"jOCnAnviQuth8fnvhn2jB/9yYC+WmhgjeLAc/Y+KdFrcHAAR/l2EN+Y8lLVoRpAv8E8Dk0MVyIEk\\n\",\n       \"cxRizaN7wt04yDoZmCrCMKpbNLEuUq9Yx7WuRaOqbyiwz/vq7eM4+5h17B8WDZjAf53MpKMUUdTG\\n\",\n       \"h/+vxOqvEFauuoDQ686wBssc6arAx/9bsAYjbywBLNodiaV6XoEJfCy3gCrLRPh3bKzg/Jzjz8WE\\n\",\n       \"9WeZ7XkRfDWLZhIm1utr1Bvajs1qjfME5tB5kDVm0WwiyR6qGVCEBni+aqflHVtGqwZZHae383Ys\\n\",\n       \"D723kBX432B54UV4AKvMmBeFQyIwh2CTds4KdVhmYLn9PyBnwhJJ5NxViyb+34Jl5WTr9wMWJYuw\\n\",\n       \"AhPeWzGBH0GSEQRhXV0RpgfvG7BBWez15bkQjVo0B1GZQZMlzjqO6d8H0zlNcmx4nhjBF/HhZwN/\\n\",\n       \"FGFgrWJ2XcFLFTjPCUIO+u59fR0pKgRelf9WrbkiVAfB5jgqLXiZx2O1ynlYHZ17sFmrVwGfV+Xf\\n\",\n       \"qghKKwR+C1bXfg+JwFeL4MF8+Kuwxi1aNB25/6psw3of/5A57kSskZos0rF2aqSoRbMKi8r/E/hK\\n\",\n       \"jWuMHvx4bK7A8zLnW4/1EPphK5BtpUqqpAjDwqAy2Nq2B9LaNXQrcIF3nH1DnNRUrYZ9V9mMZcQ8\\n\",\n       \"DPwfllr5J1W+V+OYNdhg9Joa+9RjK0kUvwVLL8yN4AP/io3dLcMi6bQHH7kMK4A2MLXtJOAmrN7M\\n\",\n       \"C6BjkhQUtGhCg78BuEk1mW+QQ/Tgx2MFzA7HIvSN4TzPYu/3ptBwxh5UBeH6rgG+HTYdGf5nyzS3\\n\",\n       \"DLdoHGcfoMoeEbZRe5Z4V4gCfyUWxc8FPlLnmEeB/2rB6ltpHx5qRPCqVqlThD1YBL+ajMCrskSE\\n\",\n       \"e4HXEcYSMIH/Ojbn5sQwEPtKbH3cohYN2HjE7XVeU4zgh2ONyqsIOfKpfdaRzAfqZNEEv/3SsM8p\\n\",\n       \"4f5R4eFx1FjusSt4BO84+46jVJPMjRazGfOoH1ZlhSpvDlP3q6LKRlXe1cXnTQt8jORrWTSR9Zge\\n\",\n       \"zSRn/V7gq8BHRJCQgngC5tvfimXhfA6zTiDfoom1aSoIM3R31rm2ONFpPCbws3PO1U7SG8uzaE7A\\n\",\n       \"Jqa9FAusp2ER/EqSejgtxwXecfYRqTov3cFmLFrslsiwBnFwldT/WhYN0JHKugzL4snr1fwBS2V8\\n\",\n       \"EVblcnXIlb8t3P8B5sfHhTfyIvhsFk1RYgQ/gaRCZVbg11Ep8FmL5u3Ad8Pcg1uwmaqzsFx/t2gc\\n\",\n       \"x2mIzdhqTD2dGtqQRZNhGTAxzyIKGTdfwQZDJxKWyVNljQgfwgZK34iJcFbgh2GNXZ5FU4S4MPt4\\n\",\n       \"zEJ6OOdc60gW/KiwaMLM4QtILJm/YYL/GMnM4wrC5LTRqh119JvCI3jH2T/ZjAlRT9NVga81wPsz\\n\",\n       
\"LPPlDapJGWdVvh6ybVZg1keHwIdsnp1YBN7sgPYOTIQ1NJiP0DmtslYEfz62rm8s4XILcDrWG+hY\\n\",\n       \"aEWE14rwcxHuxno972zyejvwCN5x9k/2lcAvhI7aMRtJVlUqwjLy/Xego0571bUpqBT4tJjH1M1m\\n\",\n       \"c823Y1VzY/roYqygW5rVJIOsW4GhIhwJfAZ4IVZaInInlm55P/Z6Dw/bP4FlPH0TuLvemEkRXOAd\\n\",\n       \"Z/9kAfR83r8qfyUsahIE+fgGDr+RrrkKK7CB5d0ZcdwMXcoM2o6Jd+xdfJnOFXQvJdHTrViJhndi\\n\",\n       \"kf0LVZMlIVXZIcKd2OIjQmLRHAR8X7VLi9tU4ALvOPshqvzvvr6GRlFlEWHFpSZZga3FujGzfQvU\\n\",\n       \"zZSpRZyUtho6Gq4KgkUUiR78kcCn0uKe4mXhOk/AVtIagdlILS3U6B684zj7Cyuwgcw8gW82gwaS\\n\",\n       \"SWlFJ4BtxWyio7GJUZ1QZX0YTI4e/CxgWatLFrjAO46zvxAtmqzAb6b5DJo4ULuL4tH1VqyUwuOx\\n\",\n       \"imYN1mJ58LMoXouoMC7wjuPsL6zE8ujzIvimBT6wneIR/DZssPeOAvtuwZYw/P/t3VuIVVUcx/Hv\\n\",\n       \"r9IHMwgJxi4D+uDD+OQQDJFI8yT60oWiFAIfeoju0EMiSPrQgwVBD0EEGViEJUViEGRBRRAkkrdS\\n\",\n       \"KcEBLS8DRSQSKP17WOvk8Xgue2b2OXtm+/vAxj1775mz/LP8u2fv9V9rBMqvi3CCN7O6+J00dUHZ\\n\",\n       \"j2hgagm+UXvQM8E3rVUwhu/gzczay5OHneHaBP8ezPil80Wm9ogGes9x0zBJmjCt9ATvUTRmVien\\n\",\n       \"aEnwEXxfws/9hrw8YgEXSENUD/W6MJskvZAt/RGNE7yZ1ck1Cb4MEVOqKj0FPDyFQqVGVawTvJlZ\\n\",\n       \"Fx+RXrZWJo+6+WwK3zJJWmi89HmDnODNrDYi+LjqNkzDJH14/g5+yWpmVrWz9GlaZ0X0Za3Xaz9I\\n\",\n       \"iohonb/BzOy6lhdEXxDRfijnTHJnoTt4SWskHZf0q6SNbc6PS/pL0oG8bZ5OY8zMrjcR/NMpuc9U\\n\",\n       \"zwQv6UbgTdI0ncuB9ZJG2lz6bUSM5u2VkttpLSSNV92GunAsy+V4zh5F7uDHgBMRMRERl4APSYvb\\n\",\n       \"tvLjl8Ear7oBNTJedQNqZrzqBlhSJMHfSRrX2XA6H2sWwL2SDkn6XNJyzMysUkWGSRZ5C/sjMBwR\\n\",\n       \"FyWtBXZzZYVzMzOrQM9RNJLuAbZGxJr89Sbg34h4tcv3nATujog/mo4NZriOmVnNTHcUTZE7+P3A\\n\",\n       \"MklLSLO1PQasb75A0hBwPiJC0hjpP46r3gp7iKSZ2WD1TPARcVnSs8AXpLmWt0fEMUlP5vNvA48A\\n\",\n       \"T0m6TJp1bV0f22xmZgUMrNDJzMwGayBTFfQqlLLuJE1IOpyLyPblY4skfSnpF0l7Jd1adTtnK0nv\\n\",\n       \"Sjon6UjTsY7xk7Qp99XjklZX0+rZqUMst0o63VTouLbpnGPZhaRhSV9L+lnST5Kez8fL6Z8R0deN\\n\",\n       \"9FjnBLAEmEdaNX2k359bp400jeiilmOvAS/l/Y3AtqrbOVs3YBUwChzpFT9SMd/B3FeX5L57Q9V/\\n\",\n       
\"h9mydYjlFuDFNtc6lr3juRhYkfcXkuacHymrfw7iDr5ooZR11/qS+n5gR97fATw42ObMHRHxHfBn\\n\",\n       \"y+FO8XsA2BkRlyJigvQPaGwQ7ZwLOsQS2hc6OpY9RMTZiDiY9y8Ax0h1RqX0z0Ek+CKFUtZdAF9J\\n\",\n       \"2i+psfDAUEQ0lhA7BwxV07Q5q1P87uDq+cTdX4t5Lhc6bm96nOBYTkEeqTgK/EBJ/XMQCd5vcWdu\\n\",\n       \"ZUSMAmuBZyStaj4Z6Xc3x3maCsTPse3uLWApsIK0JurrXa51LNuQtBD4BHghIv5uPjeT/jmIBP8b\\n\",\n       \"MNz09TAVr7gy10TEmfznJGnx4DHgnKTFAJJup/iK75Z0il9rf70rH7MOIuJ8ZMA7XHlk4FgWIGke\\n\",\n       \"Kbm/HxG78+FS+ucgEvz/hVKS5pMKpfYM4HNrQdICSbfk/ZuB1cARUgw35Ms2kKaHsOI6xW8PsE7S\\n\",\n       \"fElLgWXAvgraN2fkBNTwEKl/gmPZkyQB24GjEfFG06lS+mffl+yLDoVS/f7cGhkCPk39gJuADyJi\\n\",\n       \"r6T9wC5JTwATwKPVNXF2k7QTuA+4TdIp4GVgG23iFxFHJe0CjgKXgafznanRNpZbgHFJK0iPCk4C\\n\",\n       \"jSJIx7K3lcDjwGFJB/KxTZTUP13oZGZWU16T1cysppzgzcxqygnezKymnODNzGrKCd7MrKac4M3M\\n\",\n       \"asoJ3sysppzgzcxq6j+vUsbacqJa4gAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7fbb37f207d0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"plot(np.vstack([train_loss, scratch_train_loss]).clip(0, 4).T)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's take a look at the testing accuracy after running 200 iterations. Note that we are running a classification task of 5 classes, thus a chance accuracy is 20%. As we will reasonably expect, the finetuning result will be much better than the one from training from scratch. 
Let's see.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Accuracy for fine-tuning: 0.570000001788\\n\",\n      \"Accuracy for training from scratch: 0.224000000954\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"test_iters = 10\\n\",\n    \"accuracy = 0\\n\",\n    \"scratch_accuracy = 0\\n\",\n    \"for it in arange(test_iters):\\n\",\n    \"    solver.test_nets[0].forward()\\n\",\n    \"    accuracy += solver.test_nets[0].blobs['accuracy'].data\\n\",\n    \"    scratch_solver.test_nets[0].forward()\\n\",\n    \"    scratch_accuracy += scratch_solver.test_nets[0].blobs['accuracy'].data\\n\",\n    \"accuracy /= test_iters\\n\",\n    \"scratch_accuracy /= test_iters\\n\",\n    \"print 'Accuracy for fine-tuning:', accuracy\\n\",\n    \"print 'Accuracy for training from scratch:', scratch_accuracy\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Huzzah! So we did fine-tuning and it clearly helps. Let's take a look at what kind of results we are able to get with a longer, more complete run of the style recognition dataset. 
Note: the URL below might occasionally be down because it is run on a research machine.\\n\",\n    \"\\n\",\n    \"http://demo.vislab.berkeleyvision.org/\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Fine-tune the ImageNet-trained CaffeNet on new data.\",\n  \"example_name\": \"Fine-tuning for Style Recognition\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 4\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/CMakeLists.txt",
"content": "file(GLOB_RECURSE examples_srcs \"${PROJECT_SOURCE_DIR}/examples/*.cpp\")\n\nforeach(source_file ${examples_srcs})\n  # get file name\n  get_filename_component(name ${source_file} NAME_WE)\n    \n  # get folder name\n  get_filename_component(path ${source_file} PATH)\n  get_filename_component(folder ${path} NAME_WE)\n    \n  add_executable(${name} ${source_file})\n  target_link_libraries(${name} ${Caffe_LINK})\n  caffe_default_properties(${name})\n\n  # set back RUNTIME_OUTPUT_DIRECTORY\n  set_target_properties(${name} PROPERTIES\n    RUNTIME_OUTPUT_DIRECTORY \"${PROJECT_BINARY_DIR}/examples/${folder}\")\n\n  caffe_set_solution_folder(${name} examples)\n\n  # install\n  install(TARGETS ${name} DESTINATION bin)\n\n  if(UNIX OR APPLE)\n    # Create a *.bin symlink so the tutorial paths keep working\n    # TODO: remove once naming is standardized everywhere\n    set(__outname ${PROJECT_BINARY_DIR}/examples/${folder}/${name}${Caffe_POSTFIX})\n    add_custom_command(TARGET ${name} POST_BUILD\n                       COMMAND ln -sf \"${__outname}\" \"${__outname}.bin\")\n  endif()\nendforeach()\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full.prototxt",
    "content": "name: \"CIFAR10_full_deploy\"\n# N.B. input image must be in CIFAR-10 format\n# as described at http://www.cs.toronto.edu/~kriz/cifar.html\ninput: \"data\"\ninput_shape {\n  dim: 1\n  dim: 3\n  dim: 32\n  dim: 32\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"pool1\"\n  top: \"pool1\"\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 3\n    alpha: 5e-05\n    beta: 0.75\n    norm_region: WITHIN_CHANNEL\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 3\n    alpha: 5e-05\n    beta: 0.75\n    norm_region: WITHIN_CHANNEL\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3\"\n  top: \"pool3\"\n  
pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n    decay_mult: 250\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 10\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"ip1\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_sigmoid_solver.prototxt",
"content": "# reduce learning rate after 120 epochs (60000 iters) by a factor of 10\n# then another factor of 10 after 10 more epochs (5000 iters)\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_full_sigmoid_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of CIFAR10, we have test batch size 1000 and 10 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 10\n# Carry out testing every 1000 training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.001\nmomentum: 0.9\n#weight_decay: 0.004\n# The learning rate policy\nlr_policy: \"step\"\ngamma: 1\nstepsize: 5000\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 60000\n# snapshot intermediate results\nsnapshot: 10000\nsnapshot_prefix: \"examples/cifar10_full_sigmoid\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_sigmoid_solver_bn.prototxt",
"content": "# reduce learning rate after 120 epochs (60000 iters) by a factor of 10\n# then another factor of 10 after 10 more epochs (5000 iters)\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_full_sigmoid_train_test_bn.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of CIFAR10, we have test batch size 1000 and 10 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 10\n# Carry out testing every 1000 training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.001\nmomentum: 0.9\n#weight_decay: 0.004\n# The learning rate policy\nlr_policy: \"step\"\ngamma: 1\nstepsize: 5000\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 60000\n# snapshot intermediate results\nsnapshot: 10000\nsnapshot_prefix: \"examples/cifar10_full_sigmoid_bn\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_sigmoid_train_test.prototxt",
    "content": "name: \"CIFAR10_full\"\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_train_lmdb\"\n    batch_size: 111\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_test_lmdb\"\n    batch_size: 1000\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.0001\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\n\n\n\nlayer {\n  name: \"Sigmoid1\"\n  type: \"Sigmoid\"\n  bottom: \"pool1\"\n  top: \"Sigmoid1\"\n}\n\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"Sigmoid1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\n\n\nlayer {\n  name: \"Sigmoid2\"\n  type: \"Sigmoid\"\n  bottom: \"conv2\"\n  top: \"Sigmoid2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"Sigmoid2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: 
\"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 1\n  }\n\n}\n\nlayer {\n  name: \"Sigmoid3\"\n  type: \"Sigmoid\"\n  bottom: \"conv3\"\n  top: \"Sigmoid3\"\n}\n\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"Sigmoid3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\n\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_sigmoid_train_test_bn.prototxt",
    "content": "name: \"CIFAR10_full\"\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_train_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_test_lmdb\"\n    batch_size: 1000\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    bias_term: false\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.0001\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\n\nlayer {\n  name: \"bn1\"\n  type: \"BatchNorm\"\n  bottom: \"pool1\"\n  top: \"bn1\"\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n}\n\nlayer {\n  name: \"Sigmoid1\"\n  type: \"Sigmoid\"\n  bottom: \"bn1\"\n  top: \"Sigmoid1\"\n}\n\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"Sigmoid1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    bias_term: false\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n  }\n}\n\nlayer {\n  name: \"bn2\"\n  type: \"BatchNorm\"\n  bottom: \"conv2\"\n  top: \"bn2\"\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n}\n\nlayer {\n  name: \"Sigmoid2\"\n  type: \"Sigmoid\"\n  
bottom: \"bn2\"\n  top: \"Sigmoid2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"Sigmoid2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    bias_term: false\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n  }\n}\n\nlayer {\n  name: \"bn3\"\n  type: \"BatchNorm\"\n  bottom: \"conv3\"\n  top: \"bn3\"\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n  param {\n    lr_mult: 0\n  }\n}\n\nlayer {\n  name: \"Sigmoid3\"\n  type: \"Sigmoid\"\n  bottom: \"bn3\"\n  top: \"Sigmoid3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"Sigmoid3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\n\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_solver.prototxt",
"content": "# reduce learning rate after 120 epochs (60000 iters) by a factor of 10\n# then another factor of 10 after 10 more epochs (5000 iters)\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_full_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of CIFAR10, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 1000 training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.001\nmomentum: 0.9\nweight_decay: 0.004\n# The learning rate policy\nlr_policy: \"fixed\"\n# Display every 200 iterations\ndisplay: 200\n# The maximum number of iterations\nmax_iter: 60000\n# snapshot intermediate results\nsnapshot: 10000\nsnapshot_format: HDF5\nsnapshot_prefix: \"examples/cifar10/cifar10_full\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_solver_lr1.prototxt",
"content": "# reduce learning rate after 120 epochs (60000 iters) by a factor of 10\n# then another factor of 10 after 10 more epochs (5000 iters)\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_full_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of CIFAR10, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 1000 training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.0001\nmomentum: 0.9\nweight_decay: 0.004\n# The learning rate policy\nlr_policy: \"fixed\"\n# Display every 200 iterations\ndisplay: 200\n# The maximum number of iterations\nmax_iter: 65000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_format: HDF5\nsnapshot_prefix: \"examples/cifar10/cifar10_full\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_solver_lr2.prototxt",
"content": "# reduce learning rate after 120 epochs (60000 iters) by a factor of 10\n# then another factor of 10 after 10 more epochs (5000 iters)\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_full_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of CIFAR10, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 1000 training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.00001\nmomentum: 0.9\nweight_decay: 0.004\n# The learning rate policy\nlr_policy: \"fixed\"\n# Display every 200 iterations\ndisplay: 200\n# The maximum number of iterations\nmax_iter: 70000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_format: HDF5\nsnapshot_prefix: \"examples/cifar10/cifar10_full\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_full_train_test.prototxt",
    "content": "name: \"CIFAR10_full\"\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_train_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_test_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.0001\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"pool1\"\n  top: \"pool1\"\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 3\n    alpha: 5e-05\n    beta: 0.75\n    norm_region: WITHIN_CHANNEL\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: 
\"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 3\n    alpha: 5e-05\n    beta: 0.75\n    norm_region: WITHIN_CHANNEL\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n    decay_mult: 250\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip1\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_quick.prototxt",
    "content": "name: \"CIFAR10_quick_test\"\ninput: \"data\"\ninput_shape {\n  dim: 1\n  dim: 3\n  dim: 32\n  dim: 32\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"pool1\"\n  top: \"pool1\"\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 64\n  }\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  
}\n  inner_product_param {\n    num_output: 10\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"ip2\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_quick_solver.prototxt",
    "content": "# reduce the learning rate after 8 epochs (4000 iters) by a factor of 10\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_quick_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.001\nmomentum: 0.9\nweight_decay: 0.004\n# The learning rate policy\nlr_policy: \"fixed\"\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 4000\n# snapshot intermediate results\nsnapshot: 4000\nsnapshot_format: HDF5\nsnapshot_prefix: \"examples/cifar10/cifar10_quick\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_quick_solver_lr1.prototxt",
    "content": "# reduce the learning rate after 8 epochs (4000 iters) by a factor of 10\n\n# The train/test net protocol buffer definition\nnet: \"examples/cifar10/cifar10_quick_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.0001\nmomentum: 0.9\nweight_decay: 0.004\n# The learning rate policy\nlr_policy: \"fixed\"\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 5000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_format: HDF5\nsnapshot_prefix: \"examples/cifar10/cifar10_quick\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/cifar10_quick_train_test.prototxt",
    "content": "name: \"CIFAR10_quick\"\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_train_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"cifar\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mean_file: \"examples/cifar10/mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/cifar10/cifar10_test_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.0001\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"pool1\"\n  top: \"pool1\"\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: 
\"pool2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool3\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 64\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/convert_cifar_data.cpp",
    "content": "//\n// This script converts the CIFAR dataset to the leveldb format used\n// by caffe to perform classification.\n// Usage:\n//    convert_cifar_data input_folder output_db_file\n// The CIFAR dataset could be downloaded at\n//    http://www.cs.toronto.edu/~kriz/cifar.html\n\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"glog/logging.h\"\n#include \"google/protobuf/text_format.h\"\n#include \"stdint.h\"\n\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/format.hpp\"\n\nusing caffe::Datum;\nusing boost::scoped_ptr;\nusing std::string;\nnamespace db = caffe::db;\n\nconst int kCIFARSize = 32;\nconst int kCIFARImageNBytes = 3072;\nconst int kCIFARBatchSize = 10000;\nconst int kCIFARTrainBatches = 5;\n\nvoid read_image(std::ifstream* file, int* label, char* buffer) {\n  char label_char;\n  file->read(&label_char, 1);\n  *label = label_char;\n  file->read(buffer, kCIFARImageNBytes);\n  return;\n}\n\nvoid convert_dataset(const string& input_folder, const string& output_folder,\n    const string& db_type) {\n  scoped_ptr<db::DB> train_db(db::GetDB(db_type));\n  train_db->Open(output_folder + \"/cifar10_train_\" + db_type, db::NEW);\n  scoped_ptr<db::Transaction> txn(train_db->NewTransaction());\n  // Data buffer\n  int label;\n  char str_buffer[kCIFARImageNBytes];\n  Datum datum;\n  datum.set_channels(3);\n  datum.set_height(kCIFARSize);\n  datum.set_width(kCIFARSize);\n\n  LOG(INFO) << \"Writing Training data\";\n  for (int fileid = 0; fileid < kCIFARTrainBatches; ++fileid) {\n    // Open files\n    LOG(INFO) << \"Training Batch \" << fileid + 1;\n    string batchFileName = input_folder + \"/data_batch_\"\n      + caffe::format_int(fileid+1) + \".bin\";\n    std::ifstream data_file(batchFileName.c_str(),\n        std::ios::in | std::ios::binary);\n    CHECK(data_file) << \"Unable to open train file #\" << fileid + 1;\n    for (int itemid = 0; 
itemid < kCIFARBatchSize; ++itemid) {\n      read_image(&data_file, &label, str_buffer);\n      datum.set_label(label);\n      datum.set_data(str_buffer, kCIFARImageNBytes);\n      string out;\n      CHECK(datum.SerializeToString(&out));\n      txn->Put(caffe::format_int(fileid * kCIFARBatchSize + itemid, 5), out);\n    }\n  }\n  txn->Commit();\n  train_db->Close();\n\n  LOG(INFO) << \"Writing Testing data\";\n  scoped_ptr<db::DB> test_db(db::GetDB(db_type));\n  test_db->Open(output_folder + \"/cifar10_test_\" + db_type, db::NEW);\n  txn.reset(test_db->NewTransaction());\n  // Open files\n  std::ifstream data_file((input_folder + \"/test_batch.bin\").c_str(),\n      std::ios::in | std::ios::binary);\n  CHECK(data_file) << \"Unable to open test file.\";\n  for (int itemid = 0; itemid < kCIFARBatchSize; ++itemid) {\n    read_image(&data_file, &label, str_buffer);\n    datum.set_label(label);\n    datum.set_data(str_buffer, kCIFARImageNBytes);\n    string out;\n    CHECK(datum.SerializeToString(&out));\n    txn->Put(caffe::format_int(itemid, 5), out);\n  }\n  txn->Commit();\n  test_db->Close();\n}\n\nint main(int argc, char** argv) {\n  if (argc != 4) {\n    printf(\"This script converts the CIFAR dataset to the leveldb format used\\n\"\n           \"by caffe to perform classification.\\n\"\n           \"Usage:\\n\"\n           \"    convert_cifar_data input_folder output_folder db_type\\n\"\n           \"Where the input folder should contain the binary batch files.\\n\"\n           \"The CIFAR dataset could be downloaded at\\n\"\n           \"    http://www.cs.toronto.edu/~kriz/cifar.html\\n\"\n           \"You should gunzip them after downloading.\\n\");\n  } else {\n    google::InitGoogleLogging(argv[0]);\n    convert_dataset(string(argv[1]), string(argv[2]), string(argv[3]));\n  }\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/create_cifar10.sh",
    "content": "#!/usr/bin/env sh\n# This script converts the cifar data into leveldb format.\n\nEXAMPLE=examples/cifar10\nDATA=data/cifar10\nDBTYPE=lmdb\n\necho \"Creating $DBTYPE...\"\n\nrm -rf $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/cifar10_test_$DBTYPE\n\n./build/examples/cifar10/convert_cifar_data.bin $DATA $EXAMPLE $DBTYPE\n\necho \"Computing image mean...\"\n\n./build/tools/compute_image_mean -backend=$DBTYPE \\\n  $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/mean.binaryproto\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/readme.md",
    "content": "---\ntitle: CIFAR-10 tutorial\ncategory: example\ndescription: Train and test Caffe on CIFAR-10 data.\ninclude_in_docs: true\npriority: 5\n---\n\nAlex's CIFAR-10 tutorial, Caffe style\n=====================================\n\nAlex Krizhevsky's [cuda-convnet](https://code.google.com/p/cuda-convnet/) details the model definitions, parameters, and training procedure for good performance on CIFAR-10. This example reproduces his results in Caffe.\n\nWe will assume that you have Caffe successfully compiled. If not, please refer to the [Installation page](/installation.html). In this tutorial, we will assume that your caffe installation is located at `CAFFE_ROOT`.\n\nWe thank @chyojn for the pull request that defined the model schemas and solver configurations.\n\n*This example is a work-in-progress. It would be nice to further explain details of the network and training choices and benchmark the full training.*\n\nPrepare the Dataset\n-------------------\n\nYou will first need to download and convert the data format from the [CIFAR-10 website](http://www.cs.toronto.edu/~kriz/cifar.html). To do this, simply run the following commands:\n\n    cd $CAFFE_ROOT\n    ./data/cifar10/get_cifar10.sh\n    ./examples/cifar10/create_cifar10.sh\n\nIf it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be the dataset, `./cifar10-leveldb`, and the data set image mean `./mean.binaryproto`.\n\nThe Model\n---------\n\nThe CIFAR-10 model is a CNN that composes layers of convolution, pooling, rectified linear unit (ReLU) nonlinearities, and local contrast normalization with a linear classifier on top of it all. 
We have defined the model in the `CAFFE_ROOT/examples/cifar10` directory's `cifar10_quick_train_test.prototxt`.\n\nTraining and Testing the \"Quick\" Model\n--------------------------------------\n\nTraining the model is simple after you have written the network definition protobuf and solver protobuf files (refer to [MNIST Tutorial](../examples/mnist.html)). Simply run `train_quick.sh`, or the following command directly:\n\n    cd $CAFFE_ROOT\n    ./examples/cifar10/train_quick.sh\n\n`train_quick.sh` is a simple script, so have a look inside. The main tool for training is `caffe` with the `train` action, and the solver protobuf text file as its argument.\n\nWhen you run the code, you will see a lot of messages flying by like this:\n\n    I0317 21:52:48.945710 2008298256 net.cpp:74] Creating Layer conv1\n    I0317 21:52:48.945716 2008298256 net.cpp:84] conv1 <- data\n    I0317 21:52:48.945725 2008298256 net.cpp:110] conv1 -> conv1\n    I0317 21:52:49.298691 2008298256 net.cpp:125] Top shape: 100 32 32 32 (3276800)\n    I0317 21:52:49.298719 2008298256 net.cpp:151] conv1 needs backward computation.\n\nThese messages tell you the details about each layer, its connections and its output shape, which may be helpful in debugging. After the initialization, the training will start:\n\n    I0317 21:52:49.309370 2008298256 net.cpp:166] Network initialization done.\n    I0317 21:52:49.309376 2008298256 net.cpp:167] Memory required for Data 23790808\n    I0317 21:52:49.309422 2008298256 solver.cpp:36] Solver scaffolding done.\n    I0317 21:52:49.309447 2008298256 solver.cpp:47] Solving CIFAR10_quick_train\n\nBased on the solver setting, we will print the training loss function every 100 iterations, and test the network every 500 iterations. 
You will see messages like this:\n\n    I0317 21:53:12.179772 2008298256 solver.cpp:208] Iteration 100, lr = 0.001\n    I0317 21:53:12.185698 2008298256 solver.cpp:65] Iteration 100, loss = 1.73643\n    ...\n    I0317 21:54:41.150030 2008298256 solver.cpp:87] Iteration 500, Testing net\n    I0317 21:54:47.129461 2008298256 solver.cpp:114] Test score #0: 0.5504\n    I0317 21:54:47.129500 2008298256 solver.cpp:114] Test score #1: 1.27805\n\nFor each training iteration, `lr` is the learning rate of that iteration, and `loss` is the training function. For the output of the testing phase, **score 0 is the accuracy**, and **score 1 is the testing loss function**.\n\nAnd after making yourself a cup of coffee, you are done!\n\n    I0317 22:12:19.666914 2008298256 solver.cpp:87] Iteration 5000, Testing net\n    I0317 22:12:25.580330 2008298256 solver.cpp:114] Test score #0: 0.7533\n    I0317 22:12:25.580379 2008298256 solver.cpp:114] Test score #1: 0.739837\n    I0317 22:12:25.587262 2008298256 solver.cpp:130] Snapshotting to cifar10_quick_iter_5000\n    I0317 22:12:25.590215 2008298256 solver.cpp:137] Snapshotting solver state to cifar10_quick_iter_5000.solverstate\n    I0317 22:12:25.592813 2008298256 solver.cpp:81] Optimization Done.\n\nOur model achieved ~75% test accuracy. The model parameters are stored in binary protobuf format in\n\n    cifar10_quick_iter_5000\n\nwhich is ready-to-deploy in CPU or GPU mode! Refer to the `CAFFE_ROOT/examples/cifar10/cifar10_quick.prototxt` for the deployment model definition that can be called on new data.\n\nWhy train on a GPU?\n-------------------\n\nCIFAR-10, while still small, has enough data to make GPU training attractive.\n\nTo compare CPU vs. GPU training speed, simply change one line in all the `cifar*solver.prototxt`:\n\n    # solver mode: CPU or GPU\n    solver_mode: CPU\n\nand you will be using CPU for training.\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/train_full.sh",
    "content": "#!/usr/bin/env sh\n\nTOOLS=./build/tools\n\n$TOOLS/caffe train \\\n    --solver=examples/cifar10/cifar10_full_solver.prototxt\n\n# reduce learning rate by factor of 10\n$TOOLS/caffe train \\\n    --solver=examples/cifar10/cifar10_full_solver_lr1.prototxt \\\n    --snapshot=examples/cifar10/cifar10_full_iter_60000.solverstate.h5\n\n# reduce learning rate by factor of 10\n$TOOLS/caffe train \\\n    --solver=examples/cifar10/cifar10_full_solver_lr2.prototxt \\\n    --snapshot=examples/cifar10/cifar10_full_iter_65000.solverstate.h5\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/train_full_sigmoid.sh",
    "content": "#!/usr/bin/env sh\n\nTOOLS=./build/tools\n\n$TOOLS/caffe train \\\n    --solver=examples/cifar10/cifar10_full_sigmoid_solver.prototxt\n\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/train_full_sigmoid_bn.sh",
    "content": "#!/usr/bin/env sh\n\nTOOLS=./build/tools\n\n$TOOLS/caffe train \\\n    --solver=examples/cifar10/cifar10_full_sigmoid_solver_bn.prototxt\n\n"
  },
  {
    "path": "caffe-fpn/examples/cifar10/train_quick.sh",
    "content": "#!/usr/bin/env sh\n\nTOOLS=./build/tools\n\n$TOOLS/caffe train \\\n  --solver=examples/cifar10/cifar10_quick_solver.prototxt\n\n# reduce learning rate by factor of 10 after 8 epochs\n$TOOLS/caffe train \\\n  --solver=examples/cifar10/cifar10_quick_solver_lr1.prototxt \\\n  --snapshot=examples/cifar10/cifar10_quick_iter_4000.solverstate.h5\n"
  },
  {
    "path": "caffe-fpn/examples/cpp_classification/classification.cpp",
    "content": "#include <caffe/caffe.hpp>\n#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#endif  // USE_OPENCV\n#include <algorithm>\n#include <iosfwd>\n#include <memory>\n#include <string>\n#include <utility>\n#include <vector>\n\n#ifdef USE_OPENCV\nusing namespace caffe;  // NOLINT(build/namespaces)\nusing std::string;\n\n/* Pair (label, confidence) representing a prediction. */\ntypedef std::pair<string, float> Prediction;\n\nclass Classifier {\n public:\n  Classifier(const string& model_file,\n             const string& trained_file,\n             const string& mean_file,\n             const string& label_file);\n\n  std::vector<Prediction> Classify(const cv::Mat& img, int N = 5);\n\n private:\n  void SetMean(const string& mean_file);\n\n  std::vector<float> Predict(const cv::Mat& img);\n\n  void WrapInputLayer(std::vector<cv::Mat>* input_channels);\n\n  void Preprocess(const cv::Mat& img,\n                  std::vector<cv::Mat>* input_channels);\n\n private:\n  shared_ptr<Net<float> > net_;\n  cv::Size input_geometry_;\n  int num_channels_;\n  cv::Mat mean_;\n  std::vector<string> labels_;\n};\n\nClassifier::Classifier(const string& model_file,\n                       const string& trained_file,\n                       const string& mean_file,\n                       const string& label_file) {\n#ifdef CPU_ONLY\n  Caffe::set_mode(Caffe::CPU);\n#else\n  Caffe::set_mode(Caffe::GPU);\n#endif\n\n  /* Load the network. 
*/\n  net_.reset(new Net<float>(model_file, TEST));\n  net_->CopyTrainedLayersFrom(trained_file);\n\n  CHECK_EQ(net_->num_inputs(), 1) << \"Network should have exactly one input.\";\n  CHECK_EQ(net_->num_outputs(), 1) << \"Network should have exactly one output.\";\n\n  Blob<float>* input_layer = net_->input_blobs()[0];\n  num_channels_ = input_layer->channels();\n  CHECK(num_channels_ == 3 || num_channels_ == 1)\n    << \"Input layer should have 1 or 3 channels.\";\n  input_geometry_ = cv::Size(input_layer->width(), input_layer->height());\n\n  /* Load the binaryproto mean file. */\n  SetMean(mean_file);\n\n  /* Load labels. */\n  std::ifstream labels(label_file.c_str());\n  CHECK(labels) << \"Unable to open labels file \" << label_file;\n  string line;\n  while (std::getline(labels, line))\n    labels_.push_back(string(line));\n\n  Blob<float>* output_layer = net_->output_blobs()[0];\n  CHECK_EQ(labels_.size(), output_layer->channels())\n    << \"Number of labels is different from the output layer dimension.\";\n}\n\nstatic bool PairCompare(const std::pair<float, int>& lhs,\n                        const std::pair<float, int>& rhs) {\n  return lhs.first > rhs.first;\n}\n\n/* Return the indices of the top N values of vector v. */\nstatic std::vector<int> Argmax(const std::vector<float>& v, int N) {\n  std::vector<std::pair<float, int> > pairs;\n  for (size_t i = 0; i < v.size(); ++i)\n    pairs.push_back(std::make_pair(v[i], i));\n  std::partial_sort(pairs.begin(), pairs.begin() + N, pairs.end(), PairCompare);\n\n  std::vector<int> result;\n  for (int i = 0; i < N; ++i)\n    result.push_back(pairs[i].second);\n  return result;\n}\n\n/* Return the top N predictions. 
*/\nstd::vector<Prediction> Classifier::Classify(const cv::Mat& img, int N) {\n  std::vector<float> output = Predict(img);\n\n  N = std::min<int>(labels_.size(), N);\n  std::vector<int> maxN = Argmax(output, N);\n  std::vector<Prediction> predictions;\n  for (int i = 0; i < N; ++i) {\n    int idx = maxN[i];\n    predictions.push_back(std::make_pair(labels_[idx], output[idx]));\n  }\n\n  return predictions;\n}\n\n/* Load the mean file in binaryproto format. */\nvoid Classifier::SetMean(const string& mean_file) {\n  BlobProto blob_proto;\n  ReadProtoFromBinaryFileOrDie(mean_file.c_str(), &blob_proto);\n\n  /* Convert from BlobProto to Blob<float> */\n  Blob<float> mean_blob;\n  mean_blob.FromProto(blob_proto);\n  CHECK_EQ(mean_blob.channels(), num_channels_)\n    << \"Number of channels of mean file doesn't match input layer.\";\n\n  /* The format of the mean file is planar 32-bit float BGR or grayscale. */\n  std::vector<cv::Mat> channels;\n  float* data = mean_blob.mutable_cpu_data();\n  for (int i = 0; i < num_channels_; ++i) {\n    /* Extract an individual channel. */\n    cv::Mat channel(mean_blob.height(), mean_blob.width(), CV_32FC1, data);\n    channels.push_back(channel);\n    data += mean_blob.height() * mean_blob.width();\n  }\n\n  /* Merge the separate channels into a single image. */\n  cv::Mat mean;\n  cv::merge(channels, mean);\n\n  /* Compute the global mean pixel value and create a mean image\n   * filled with this value. */\n  cv::Scalar channel_mean = cv::mean(mean);\n  mean_ = cv::Mat(input_geometry_, mean.type(), channel_mean);\n}\n\nstd::vector<float> Classifier::Predict(const cv::Mat& img) {\n  Blob<float>* input_layer = net_->input_blobs()[0];\n  input_layer->Reshape(1, num_channels_,\n                       input_geometry_.height, input_geometry_.width);\n  /* Forward dimension change to all layers. 
*/\n  net_->Reshape();\n\n  std::vector<cv::Mat> input_channels;\n  WrapInputLayer(&input_channels);\n\n  Preprocess(img, &input_channels);\n\n  net_->ForwardPrefilled();\n\n  /* Copy the output layer to a std::vector */\n  Blob<float>* output_layer = net_->output_blobs()[0];\n  const float* begin = output_layer->cpu_data();\n  const float* end = begin + output_layer->channels();\n  return std::vector<float>(begin, end);\n}\n\n/* Wrap the input layer of the network in separate cv::Mat objects\n * (one per channel). This way we save one memcpy operation and we\n * don't need to rely on cudaMemcpy2D. The last preprocessing\n * operation will write the separate channels directly to the input\n * layer. */\nvoid Classifier::WrapInputLayer(std::vector<cv::Mat>* input_channels) {\n  Blob<float>* input_layer = net_->input_blobs()[0];\n\n  int width = input_layer->width();\n  int height = input_layer->height();\n  float* input_data = input_layer->mutable_cpu_data();\n  for (int i = 0; i < input_layer->channels(); ++i) {\n    cv::Mat channel(height, width, CV_32FC1, input_data);\n    input_channels->push_back(channel);\n    input_data += width * height;\n  }\n}\n\nvoid Classifier::Preprocess(const cv::Mat& img,\n                            std::vector<cv::Mat>* input_channels) {\n  /* Convert the input image to the input image format of the network. 
*/\n  cv::Mat sample;\n  if (img.channels() == 3 && num_channels_ == 1)\n    cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);\n  else if (img.channels() == 4 && num_channels_ == 1)\n    cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);\n  else if (img.channels() == 4 && num_channels_ == 3)\n    cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);\n  else if (img.channels() == 1 && num_channels_ == 3)\n    cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);\n  else\n    sample = img;\n\n  cv::Mat sample_resized;\n  if (sample.size() != input_geometry_)\n    cv::resize(sample, sample_resized, input_geometry_);\n  else\n    sample_resized = sample;\n\n  cv::Mat sample_float;\n  if (num_channels_ == 3)\n    sample_resized.convertTo(sample_float, CV_32FC3);\n  else\n    sample_resized.convertTo(sample_float, CV_32FC1);\n\n  cv::Mat sample_normalized;\n  cv::subtract(sample_float, mean_, sample_normalized);\n\n  /* This operation will write the separate BGR planes directly to the\n   * input layer of the network because it is wrapped by the cv::Mat\n   * objects in input_channels. 
*/\n  cv::split(sample_normalized, *input_channels);\n\n  CHECK(reinterpret_cast<float*>(input_channels->at(0).data)\n        == net_->input_blobs()[0]->cpu_data())\n    << \"Input channels are not wrapping the input layer of the network.\";\n}\n\nint main(int argc, char** argv) {\n  if (argc != 6) {\n    std::cerr << \"Usage: \" << argv[0]\n              << \" deploy.prototxt network.caffemodel\"\n              << \" mean.binaryproto labels.txt img.jpg\" << std::endl;\n    return 1;\n  }\n\n  ::google::InitGoogleLogging(argv[0]);\n\n  string model_file   = argv[1];\n  string trained_file = argv[2];\n  string mean_file    = argv[3];\n  string label_file   = argv[4];\n  Classifier classifier(model_file, trained_file, mean_file, label_file);\n\n  string file = argv[5];\n\n  std::cout << \"---------- Prediction for \"\n            << file << \" ----------\" << std::endl;\n\n  cv::Mat img = cv::imread(file, -1);\n  CHECK(!img.empty()) << \"Unable to decode image \" << file;\n  std::vector<Prediction> predictions = classifier.Classify(img);\n\n  /* Print the top N predictions. */\n  for (size_t i = 0; i < predictions.size(); ++i) {\n    Prediction p = predictions[i];\n    std::cout << std::fixed << std::setprecision(4) << p.second << \" - \\\"\"\n              << p.first << \"\\\"\" << std::endl;\n  }\n}\n#else\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"This example requires OpenCV; compile with USE_OPENCV.\";\n}\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/examples/cpp_classification/readme.md",
    "content": "---\ntitle: CaffeNet C++ Classification example\ndescription: A simple example performing image classification using the low-level C++ API.\ncategory: example\ninclude_in_docs: true\npriority: 10\n---\n\n# Classifying ImageNet: using the C++ API\n\nCaffe, at its core, is written in C++. It is possible to use the C++\nAPI of Caffe to implement an image classification application similar\nto the Python code presented in one of the Notebook example. To look\nat a more general-purpose example of the Caffe C++ API, you should\nstudy the source code of the command line tool `caffe` in `tools/caffe.cpp`.\n\n## Presentation\n\nA simple C++ code is proposed in\n`examples/cpp_classification/classification.cpp`. For the sake of\nsimplicity, this example does not support oversampling of a single\nsample nor batching of multiple independant samples. This example is\nnot trying to reach the maximum possible classification throughput on\na system, but special care was given to avoid unnecessary\npessimization while keeping the code readable.\n\n## Compiling\n\nThe C++ example is built automatically when compiling Caffe. To\ncompile Caffe you should follow the documented instructions. 
The\nclassification example will be built as\n`examples/cpp_classification/classification.bin`\nin your build directory.\n\n## Usage\n\nTo use the pre-trained CaffeNet model with the classification example,\nyou need to download it from the \"Model Zoo\" using the following\nscript:\n```\n./scripts/download_model_binary.py models/bvlc_reference_caffenet\n```\nThe ImageNet labels file (also called the *synset file*) is also\nrequired in order to map a prediction to the name of the class:\n```\n./data/ilsvrc12/get_ilsvrc_aux.sh\n```\nUsing the files that were downloaded, we can classify the provided cat\nimage (`examples/images/cat.jpg`) using this command:\n```\n./build/examples/cpp_classification/classification.bin \\\n  models/bvlc_reference_caffenet/deploy.prototxt \\\n  models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \\\n  data/ilsvrc12/imagenet_mean.binaryproto \\\n  data/ilsvrc12/synset_words.txt \\\n  examples/images/cat.jpg\n```\nThe output should look like this:\n```\n---------- Prediction for examples/images/cat.jpg ----------\n0.3134 - \"n02123045 tabby, tabby cat\"\n0.2380 - \"n02123159 tiger cat\"\n0.1235 - \"n02124075 Egyptian cat\"\n0.1003 - \"n02119022 red fox, Vulpes vulpes\"\n0.0715 - \"n02127052 lynx, catamount\"\n```\n\n## Improving Performance\n\nTo further improve performance, you will need to leverage the GPU\nmore. Here are some guidelines:\n\n* Move the data to the GPU early and perform all preprocessing\noperations there.\n* If you have many images to classify simultaneously, you should use\nbatching (independent images are classified in a single forward pass).\n* Use multiple classification threads to ensure the GPU is always fully\nutilized and not waiting for an I/O-blocked CPU thread.\n"
  },
  {
    "path": "caffe-fpn/examples/detection.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"[R-CNN](https://github.com/rbgirshick/rcnn) is a state-of-the-art detector that classifies region proposals by a finetuned Caffe model. For the full details of the R-CNN system and model, refer to its project site and the paper:\\n\",\n    \"\\n\",\n    \"> *Rich feature hierarchies for accurate object detection and semantic segmentation*. Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. CVPR 2014. [Arxiv 2013](http://arxiv.org/abs/1311.2524).\\n\",\n    \"\\n\",\n    \"In this example, we do detection by a pure Caffe edition of the R-CNN model for ImageNet. The R-CNN detector outputs class scores for the 200 detection classes of ILSVRC13. Keep in mind that these are raw one vs. all SVM scores, so they are not probabilistically calibrated or exactly comparable across classes. Note that this off-the-shelf model is simply for convenience, and is not the full R-CNN model.\\n\",\n    \"\\n\",\n    \"Let's run detection on an image of a bicyclist riding a fish bike in the desert (from the ImageNet challenge—no joke).\\n\",\n    \"\\n\",\n    \"First, we'll need region proposals and the Caffe R-CNN ImageNet model:\\n\",\n    \"\\n\",\n    \"- [Selective Search](http://koen.me/research/selectivesearch/) is the region proposer used by R-CNN. The [selective_search_ijcv_with_python](https://github.com/sergeyk/selective_search_ijcv_with_python) Python module takes care of extracting proposals through the selective search MATLAB implementation. To install it, download the module and name its directory `selective_search_ijcv_with_python`, run the demo in MATLAB to compile the necessary functions, then add it to your `PYTHONPATH` for importing. 
(If you have your own region proposals prepared, or would rather not bother with this step, [detect.py](https://github.com/BVLC/caffe/blob/master/python/detect.py) accepts a list of images and bounding boxes as CSV.)\\n\",\n    \"\\n\",\n    \"- Run `./scripts/download_model_binary.py models/bvlc_reference_rcnn_ilsvrc13` to get the Caffe R-CNN ImageNet model.\\n\",\n    \"\\n\",\n    \"With that done, we'll call the bundled `detect.py` to generate the region proposals and run the network. For an explanation of the arguments, do `./detect.py --help`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"WARNING: Logging before InitGoogleLogging() is written to STDERR\\n\",\n      \"I0218 20:43:25.383932 2099749632 net.cpp:42] Initializing net from parameters: \\n\",\n      \"name: \\\"R-CNN-ilsvrc13\\\"\\n\",\n      \"input: \\\"data\\\"\\n\",\n      \"input_dim: 10\\n\",\n      \"input_dim: 3\\n\",\n      \"input_dim: 227\\n\",\n      \"input_dim: 227\\n\",\n      \"state {\\n\",\n      \"  phase: TEST\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"conv1\\\"\\n\",\n      \"  type: \\\"Convolution\\\"\\n\",\n      \"  bottom: \\\"data\\\"\\n\",\n      \"  top: \\\"conv1\\\"\\n\",\n      \"  convolution_param {\\n\",\n      \"    num_output: 96\\n\",\n      \"    kernel_size: 11\\n\",\n      \"    stride: 4\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu1\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"conv1\\\"\\n\",\n      \"  top: \\\"conv1\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"pool1\\\"\\n\",\n      \"  type: \\\"Pooling\\\"\\n\",\n      \"  bottom: \\\"conv1\\\"\\n\",\n      \"  top: \\\"pool1\\\"\\n\",\n      \"  pooling_param {\\n\",\n      \"    pool: 
MAX\\n\",\n      \"    kernel_size: 3\\n\",\n      \"    stride: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"norm1\\\"\\n\",\n      \"  type: \\\"LRN\\\"\\n\",\n      \"  bottom: \\\"pool1\\\"\\n\",\n      \"  top: \\\"norm1\\\"\\n\",\n      \"  lrn_param {\\n\",\n      \"    local_size: 5\\n\",\n      \"    alpha: 0.0001\\n\",\n      \"    beta: 0.75\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"conv2\\\"\\n\",\n      \"  type: \\\"Convolution\\\"\\n\",\n      \"  bottom: \\\"norm1\\\"\\n\",\n      \"  top: \\\"conv2\\\"\\n\",\n      \"  convolution_param {\\n\",\n      \"    num_output: 256\\n\",\n      \"    pad: 2\\n\",\n      \"    kernel_size: 5\\n\",\n      \"    group: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu2\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"conv2\\\"\\n\",\n      \"  top: \\\"conv2\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"pool2\\\"\\n\",\n      \"  type: \\\"Pooling\\\"\\n\",\n      \"  bottom: \\\"conv2\\\"\\n\",\n      \"  top: \\\"pool2\\\"\\n\",\n      \"  pooling_param {\\n\",\n      \"    pool: MAX\\n\",\n      \"    kernel_size: 3\\n\",\n      \"    stride: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"norm2\\\"\\n\",\n      \"  type: \\\"LRN\\\"\\n\",\n      \"  bottom: \\\"pool2\\\"\\n\",\n      \"  top: \\\"norm2\\\"\\n\",\n      \"  lrn_param {\\n\",\n      \"    local_size: 5\\n\",\n      \"    alpha: 0.0001\\n\",\n      \"    beta: 0.75\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"conv3\\\"\\n\",\n      \"  type: \\\"Convolution\\\"\\n\",\n      \"  bottom: \\\"norm2\\\"\\n\",\n      \"  top: \\\"conv3\\\"\\n\",\n      \"  convolution_param {\\n\",\n      \"    num_output: 384\\n\",\n      \"    pad: 1\\n\",\n      \"    kernel_size: 3\\n\",\n      \"  }\\n\",\n  
    \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu3\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"conv3\\\"\\n\",\n      \"  top: \\\"conv3\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"conv4\\\"\\n\",\n      \"  type: \\\"Convolution\\\"\\n\",\n      \"  bottom: \\\"conv3\\\"\\n\",\n      \"  top: \\\"conv4\\\"\\n\",\n      \"  convolution_param {\\n\",\n      \"    num_output: 384\\n\",\n      \"    pad: 1\\n\",\n      \"    kernel_size: 3\\n\",\n      \"    group: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu4\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"conv4\\\"\\n\",\n      \"  top: \\\"conv4\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"conv5\\\"\\n\",\n      \"  type: \\\"Convolution\\\"\\n\",\n      \"  bottom: \\\"conv4\\\"\\n\",\n      \"  top: \\\"conv5\\\"\\n\",\n      \"  convolution_param {\\n\",\n      \"    num_output: 256\\n\",\n      \"    pad: 1\\n\",\n      \"    kernel_size: 3\\n\",\n      \"    group: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu5\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"conv5\\\"\\n\",\n      \"  top: \\\"conv5\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"pool5\\\"\\n\",\n      \"  type: \\\"Pooling\\\"\\n\",\n      \"  bottom: \\\"conv5\\\"\\n\",\n      \"  top: \\\"pool5\\\"\\n\",\n      \"  pooling_param {\\n\",\n      \"    pool: MAX\\n\",\n      \"    kernel_size: 3\\n\",\n      \"    stride: 2\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"fc6\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"pool5\\\"\\n\",\n      \"  top: \\\"fc6\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 4096\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: 
\\\"relu6\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"fc6\\\"\\n\",\n      \"  top: \\\"fc6\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"drop6\\\"\\n\",\n      \"  type: \\\"Dropout\\\"\\n\",\n      \"  bottom: \\\"fc6\\\"\\n\",\n      \"  top: \\\"fc6\\\"\\n\",\n      \"  dropout_param {\\n\",\n      \"    dropout_ratio: 0.5\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"fc7\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"fc6\\\"\\n\",\n      \"  top: \\\"fc7\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 4096\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"relu7\\\"\\n\",\n      \"  type: \\\"ReLU\\\"\\n\",\n      \"  bottom: \\\"fc7\\\"\\n\",\n      \"  top: \\\"fc7\\\"\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"drop7\\\"\\n\",\n      \"  type: \\\"Dropout\\\"\\n\",\n      \"  bottom: \\\"fc7\\\"\\n\",\n      \"  top: \\\"fc7\\\"\\n\",\n      \"  dropout_param {\\n\",\n      \"    dropout_ratio: 0.5\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"layer {\\n\",\n      \"  name: \\\"fc-rcnn\\\"\\n\",\n      \"  type: \\\"InnerProduct\\\"\\n\",\n      \"  bottom: \\\"fc7\\\"\\n\",\n      \"  top: \\\"fc-rcnn\\\"\\n\",\n      \"  inner_product_param {\\n\",\n      \"    num_output: 200\\n\",\n      \"  }\\n\",\n      \"}\\n\",\n      \"I0218 20:43:25.385720 2099749632 net.cpp:336] Input 0 -> data\\n\",\n      \"I0218 20:43:25.385769 2099749632 layer_factory.hpp:74] Creating layer conv1\\n\",\n      \"I0218 20:43:25.385783 2099749632 net.cpp:76] Creating Layer conv1\\n\",\n      \"I0218 20:43:25.385790 2099749632 net.cpp:372] conv1 <- data\\n\",\n      \"I0218 20:43:25.385802 2099749632 net.cpp:334] conv1 -> conv1\\n\",\n      \"I0218 20:43:25.385815 2099749632 net.cpp:105] Setting up conv1\\n\",\n      \"I0218 20:43:25.386574 2099749632 net.cpp:112] Top 
shape: 10 96 55 55 (2904000)\\n\",\n      \"I0218 20:43:25.386610 2099749632 layer_factory.hpp:74] Creating layer relu1\\n\",\n      \"I0218 20:43:25.386625 2099749632 net.cpp:76] Creating Layer relu1\\n\",\n      \"I0218 20:43:25.386631 2099749632 net.cpp:372] relu1 <- conv1\\n\",\n      \"I0218 20:43:25.386641 2099749632 net.cpp:323] relu1 -> conv1 (in-place)\\n\",\n      \"I0218 20:43:25.386649 2099749632 net.cpp:105] Setting up relu1\\n\",\n      \"I0218 20:43:25.386656 2099749632 net.cpp:112] Top shape: 10 96 55 55 (2904000)\\n\",\n      \"I0218 20:43:25.386663 2099749632 layer_factory.hpp:74] Creating layer pool1\\n\",\n      \"I0218 20:43:25.386675 2099749632 net.cpp:76] Creating Layer pool1\\n\",\n      \"I0218 20:43:25.386682 2099749632 net.cpp:372] pool1 <- conv1\\n\",\n      \"I0218 20:43:25.386690 2099749632 net.cpp:334] pool1 -> pool1\\n\",\n      \"I0218 20:43:25.386699 2099749632 net.cpp:105] Setting up pool1\\n\",\n      \"I0218 20:43:25.386716 2099749632 net.cpp:112] Top shape: 10 96 27 27 (699840)\\n\",\n      \"I0218 20:43:25.386725 2099749632 layer_factory.hpp:74] Creating layer norm1\\n\",\n      \"I0218 20:43:25.386736 2099749632 net.cpp:76] Creating Layer norm1\\n\",\n      \"I0218 20:43:25.386744 2099749632 net.cpp:372] norm1 <- pool1\\n\",\n      \"I0218 20:43:25.386803 2099749632 net.cpp:334] norm1 -> norm1\\n\",\n      \"I0218 20:43:25.386819 2099749632 net.cpp:105] Setting up norm1\\n\",\n      \"I0218 20:43:25.386832 2099749632 net.cpp:112] Top shape: 10 96 27 27 (699840)\\n\",\n      \"I0218 20:43:25.386842 2099749632 layer_factory.hpp:74] Creating layer conv2\\n\",\n      \"I0218 20:43:25.386852 2099749632 net.cpp:76] Creating Layer conv2\\n\",\n      \"I0218 20:43:25.386865 2099749632 net.cpp:372] conv2 <- norm1\\n\",\n      \"I0218 20:43:25.386878 2099749632 net.cpp:334] conv2 -> conv2\\n\",\n      \"I0218 20:43:25.386899 2099749632 net.cpp:105] Setting up conv2\\n\",\n      \"I0218 20:43:25.387024 2099749632 net.cpp:112] Top shape: 
10 256 27 27 (1866240)\\n\",\n      \"I0218 20:43:25.387042 2099749632 layer_factory.hpp:74] Creating layer relu2\\n\",\n      \"I0218 20:43:25.387050 2099749632 net.cpp:76] Creating Layer relu2\\n\",\n      \"I0218 20:43:25.387058 2099749632 net.cpp:372] relu2 <- conv2\\n\",\n      \"I0218 20:43:25.387066 2099749632 net.cpp:323] relu2 -> conv2 (in-place)\\n\",\n      \"I0218 20:43:25.387075 2099749632 net.cpp:105] Setting up relu2\\n\",\n      \"I0218 20:43:25.387081 2099749632 net.cpp:112] Top shape: 10 256 27 27 (1866240)\\n\",\n      \"I0218 20:43:25.387089 2099749632 layer_factory.hpp:74] Creating layer pool2\\n\",\n      \"I0218 20:43:25.387097 2099749632 net.cpp:76] Creating Layer pool2\\n\",\n      \"I0218 20:43:25.387104 2099749632 net.cpp:372] pool2 <- conv2\\n\",\n      \"I0218 20:43:25.387112 2099749632 net.cpp:334] pool2 -> pool2\\n\",\n      \"I0218 20:43:25.387121 2099749632 net.cpp:105] Setting up pool2\\n\",\n      \"I0218 20:43:25.387130 2099749632 net.cpp:112] Top shape: 10 256 13 13 (432640)\\n\",\n      \"I0218 20:43:25.387137 2099749632 layer_factory.hpp:74] Creating layer norm2\\n\",\n      \"I0218 20:43:25.387145 2099749632 net.cpp:76] Creating Layer norm2\\n\",\n      \"I0218 20:43:25.387152 2099749632 net.cpp:372] norm2 <- pool2\\n\",\n      \"I0218 20:43:25.387161 2099749632 net.cpp:334] norm2 -> norm2\\n\",\n      \"I0218 20:43:25.387168 2099749632 net.cpp:105] Setting up norm2\\n\",\n      \"I0218 20:43:25.387176 2099749632 net.cpp:112] Top shape: 10 256 13 13 (432640)\\n\",\n      \"I0218 20:43:25.387228 2099749632 layer_factory.hpp:74] Creating layer conv3\\n\",\n      \"I0218 20:43:25.387249 2099749632 net.cpp:76] Creating Layer conv3\\n\",\n      \"I0218 20:43:25.387258 2099749632 net.cpp:372] conv3 <- norm2\\n\",\n      \"I0218 20:43:25.387266 2099749632 net.cpp:334] conv3 -> conv3\\n\",\n      \"I0218 20:43:25.387276 2099749632 net.cpp:105] Setting up conv3\\n\",\n      \"I0218 20:43:25.389375 2099749632 net.cpp:112] Top shape: 10 
384 13 13 (648960)\\n\",\n      \"I0218 20:43:25.389408 2099749632 layer_factory.hpp:74] Creating layer relu3\\n\",\n      \"I0218 20:43:25.389421 2099749632 net.cpp:76] Creating Layer relu3\\n\",\n      \"I0218 20:43:25.389430 2099749632 net.cpp:372] relu3 <- conv3\\n\",\n      \"I0218 20:43:25.389438 2099749632 net.cpp:323] relu3 -> conv3 (in-place)\\n\",\n      \"I0218 20:43:25.389447 2099749632 net.cpp:105] Setting up relu3\\n\",\n      \"I0218 20:43:25.389456 2099749632 net.cpp:112] Top shape: 10 384 13 13 (648960)\\n\",\n      \"I0218 20:43:25.389462 2099749632 layer_factory.hpp:74] Creating layer conv4\\n\",\n      \"I0218 20:43:25.389472 2099749632 net.cpp:76] Creating Layer conv4\\n\",\n      \"I0218 20:43:25.389478 2099749632 net.cpp:372] conv4 <- conv3\\n\",\n      \"I0218 20:43:25.389487 2099749632 net.cpp:334] conv4 -> conv4\\n\",\n      \"I0218 20:43:25.389497 2099749632 net.cpp:105] Setting up conv4\\n\",\n      \"I0218 20:43:25.391810 2099749632 net.cpp:112] Top shape: 10 384 13 13 (648960)\\n\",\n      \"I0218 20:43:25.391856 2099749632 layer_factory.hpp:74] Creating layer relu4\\n\",\n      \"I0218 20:43:25.391871 2099749632 net.cpp:76] Creating Layer relu4\\n\",\n      \"I0218 20:43:25.391880 2099749632 net.cpp:372] relu4 <- conv4\\n\",\n      \"I0218 20:43:25.391888 2099749632 net.cpp:323] relu4 -> conv4 (in-place)\\n\",\n      \"I0218 20:43:25.391898 2099749632 net.cpp:105] Setting up relu4\\n\",\n      \"I0218 20:43:25.391906 2099749632 net.cpp:112] Top shape: 10 384 13 13 (648960)\\n\",\n      \"I0218 20:43:25.391913 2099749632 layer_factory.hpp:74] Creating layer conv5\\n\",\n      \"I0218 20:43:25.391923 2099749632 net.cpp:76] Creating Layer conv5\\n\",\n      \"I0218 20:43:25.391929 2099749632 net.cpp:372] conv5 <- conv4\\n\",\n      \"I0218 20:43:25.391937 2099749632 net.cpp:334] conv5 -> conv5\\n\",\n      \"I0218 20:43:25.391947 2099749632 net.cpp:105] Setting up conv5\\n\",\n      \"I0218 20:43:25.393072 2099749632 net.cpp:112] Top 
shape: 10 256 13 13 (432640)\\n\",\n      \"I0218 20:43:25.393108 2099749632 layer_factory.hpp:74] Creating layer relu5\\n\",\n      \"I0218 20:43:25.393122 2099749632 net.cpp:76] Creating Layer relu5\\n\",\n      \"I0218 20:43:25.393129 2099749632 net.cpp:372] relu5 <- conv5\\n\",\n      \"I0218 20:43:25.393138 2099749632 net.cpp:323] relu5 -> conv5 (in-place)\\n\",\n      \"I0218 20:43:25.393148 2099749632 net.cpp:105] Setting up relu5\\n\",\n      \"I0218 20:43:25.393157 2099749632 net.cpp:112] Top shape: 10 256 13 13 (432640)\\n\",\n      \"I0218 20:43:25.393167 2099749632 layer_factory.hpp:74] Creating layer pool5\\n\",\n      \"I0218 20:43:25.393175 2099749632 net.cpp:76] Creating Layer pool5\\n\",\n      \"I0218 20:43:25.393182 2099749632 net.cpp:372] pool5 <- conv5\\n\",\n      \"I0218 20:43:25.393190 2099749632 net.cpp:334] pool5 -> pool5\\n\",\n      \"I0218 20:43:25.393199 2099749632 net.cpp:105] Setting up pool5\\n\",\n      \"I0218 20:43:25.393209 2099749632 net.cpp:112] Top shape: 10 256 6 6 (92160)\\n\",\n      \"I0218 20:43:25.393218 2099749632 layer_factory.hpp:74] Creating layer fc6\\n\",\n      \"I0218 20:43:25.393226 2099749632 net.cpp:76] Creating Layer fc6\\n\",\n      \"I0218 20:43:25.393232 2099749632 net.cpp:372] fc6 <- pool5\\n\",\n      \"I0218 20:43:25.393240 2099749632 net.cpp:334] fc6 -> fc6\\n\",\n      \"I0218 20:43:25.393249 2099749632 net.cpp:105] Setting up fc6\\n\",\n      \"I0218 20:43:25.516396 2099749632 net.cpp:112] Top shape: 10 4096 1 1 (40960)\\n\",\n      \"I0218 20:43:25.516445 2099749632 layer_factory.hpp:74] Creating layer relu6\\n\",\n      \"I0218 20:43:25.516463 2099749632 net.cpp:76] Creating Layer relu6\\n\",\n      \"I0218 20:43:25.516470 2099749632 net.cpp:372] relu6 <- fc6\\n\",\n      \"I0218 20:43:25.516480 2099749632 net.cpp:323] relu6 -> fc6 (in-place)\\n\",\n      \"I0218 20:43:25.516490 2099749632 net.cpp:105] Setting up relu6\\n\",\n      \"I0218 20:43:25.516497 2099749632 net.cpp:112] Top shape: 10 4096 
1 1 (40960)\\n\",\n      \"I0218 20:43:25.516505 2099749632 layer_factory.hpp:74] Creating layer drop6\\n\",\n      \"I0218 20:43:25.516515 2099749632 net.cpp:76] Creating Layer drop6\\n\",\n      \"I0218 20:43:25.516521 2099749632 net.cpp:372] drop6 <- fc6\\n\",\n      \"I0218 20:43:25.516530 2099749632 net.cpp:323] drop6 -> fc6 (in-place)\\n\",\n      \"I0218 20:43:25.516538 2099749632 net.cpp:105] Setting up drop6\\n\",\n      \"I0218 20:43:25.516557 2099749632 net.cpp:112] Top shape: 10 4096 1 1 (40960)\\n\",\n      \"I0218 20:43:25.516566 2099749632 layer_factory.hpp:74] Creating layer fc7\\n\",\n      \"I0218 20:43:25.516576 2099749632 net.cpp:76] Creating Layer fc7\\n\",\n      \"I0218 20:43:25.516582 2099749632 net.cpp:372] fc7 <- fc6\\n\",\n      \"I0218 20:43:25.516589 2099749632 net.cpp:334] fc7 -> fc7\\n\",\n      \"I0218 20:43:25.516599 2099749632 net.cpp:105] Setting up fc7\\n\",\n      \"I0218 20:43:25.604786 2099749632 net.cpp:112] Top shape: 10 4096 1 1 (40960)\\n\",\n      \"I0218 20:43:25.604838 2099749632 layer_factory.hpp:74] Creating layer relu7\\n\",\n      \"I0218 20:43:25.604852 2099749632 net.cpp:76] Creating Layer relu7\\n\",\n      \"I0218 20:43:25.604859 2099749632 net.cpp:372] relu7 <- fc7\\n\",\n      \"I0218 20:43:25.604868 2099749632 net.cpp:323] relu7 -> fc7 (in-place)\\n\",\n      \"I0218 20:43:25.604878 2099749632 net.cpp:105] Setting up relu7\\n\",\n      \"I0218 20:43:25.604885 2099749632 net.cpp:112] Top shape: 10 4096 1 1 (40960)\\n\",\n      \"I0218 20:43:25.604893 2099749632 layer_factory.hpp:74] Creating layer drop7\\n\",\n      \"I0218 20:43:25.604902 2099749632 net.cpp:76] Creating Layer drop7\\n\",\n      \"I0218 20:43:25.604908 2099749632 net.cpp:372] drop7 <- fc7\\n\",\n      \"I0218 20:43:25.604917 2099749632 net.cpp:323] drop7 -> fc7 (in-place)\\n\",\n      \"I0218 20:43:25.604924 2099749632 net.cpp:105] Setting up drop7\\n\",\n      \"I0218 20:43:25.604933 2099749632 net.cpp:112] Top shape: 10 4096 1 1 
(40960)\\n\",\n      \"I0218 20:43:25.604939 2099749632 layer_factory.hpp:74] Creating layer fc-rcnn\\n\",\n      \"I0218 20:43:25.604948 2099749632 net.cpp:76] Creating Layer fc-rcnn\\n\",\n      \"I0218 20:43:25.604954 2099749632 net.cpp:372] fc-rcnn <- fc7\\n\",\n      \"I0218 20:43:25.604962 2099749632 net.cpp:334] fc-rcnn -> fc-rcnn\\n\",\n      \"I0218 20:43:25.604971 2099749632 net.cpp:105] Setting up fc-rcnn\\n\",\n      \"I0218 20:43:25.606878 2099749632 net.cpp:112] Top shape: 10 200 1 1 (2000)\\n\",\n      \"I0218 20:43:25.606904 2099749632 net.cpp:165] fc-rcnn does not need backward computation.\\n\",\n      \"I0218 20:43:25.606909 2099749632 net.cpp:165] drop7 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606916 2099749632 net.cpp:165] relu7 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606922 2099749632 net.cpp:165] fc7 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606928 2099749632 net.cpp:165] drop6 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606935 2099749632 net.cpp:165] relu6 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606940 2099749632 net.cpp:165] fc6 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606946 2099749632 net.cpp:165] pool5 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606952 2099749632 net.cpp:165] relu5 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606958 2099749632 net.cpp:165] conv5 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606964 2099749632 net.cpp:165] relu4 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606971 2099749632 net.cpp:165] conv4 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606976 2099749632 net.cpp:165] relu3 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606982 2099749632 net.cpp:165] conv3 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606988 2099749632 
net.cpp:165] norm2 does not need backward computation.\\n\",\n      \"I0218 20:43:25.606995 2099749632 net.cpp:165] pool2 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607002 2099749632 net.cpp:165] relu2 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607007 2099749632 net.cpp:165] conv2 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607013 2099749632 net.cpp:165] norm1 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607199 2099749632 net.cpp:165] pool1 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607213 2099749632 net.cpp:165] relu1 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607219 2099749632 net.cpp:165] conv1 does not need backward computation.\\n\",\n      \"I0218 20:43:25.607225 2099749632 net.cpp:201] This network produces output fc-rcnn\\n\",\n      \"I0218 20:43:25.607239 2099749632 net.cpp:446] Collecting Learning Rate and Weight Decay.\\n\",\n      \"I0218 20:43:25.607255 2099749632 net.cpp:213] Network initialization done.\\n\",\n      \"I0218 20:43:25.607262 2099749632 net.cpp:214] Memory required for data: 62425920\\n\",\n      \"E0218 20:43:26.388214 2099749632 upgrade_proto.cpp:618] Attempting to upgrade input file specified using deprecated V1LayerParameter: ../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel\\n\",\n      \"I0218 20:43:27.089423 2099749632 upgrade_proto.cpp:626] Successfully upgraded file specified using deprecated V1LayerParameter\\n\",\n      \"GPU mode\\n\",\n      \"Loading input...\\n\",\n      \"selective_search_rcnn({'/Users/shelhamer/h/desk/caffe/caffe-dev/examples/images/fish-bike.jpg'}, '/var/folders/bk/dtkn5qjd11bd17b2j36zplyw0000gp/T/tmpakaRLL.mat')\\n\",\n      \"Processed 1570 windows in 102.895 s.\\n\",\n      \"/Users/shelhamer/anaconda/lib/python2.7/site-packages/pandas/io/pytables.py:2453: PerformanceWarning: \\n\",\n      \"your performance may suffer as PyTables will 
pickle object types that it cannot\\n\",\n      \"map directly to c-types [inferred_type->mixed,key->block1_values] [items->['prediction']]\\n\",\n      \"\\n\",\n      \"  warnings.warn(ws, PerformanceWarning)\\n\",\n      \"Saved to _temp/det_output.h5 in 0.298 s.\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!mkdir -p _temp\\n\",\n    \"!echo `pwd`/images/fish-bike.jpg > _temp/det_input.txt\\n\",\n    \"!../python/detect.py --crop_mode=selective_search --pretrained_model=../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel --model_def=../models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"This run was in GPU mode. For CPU mode detection, call `detect.py` without the `--gpu` argument.\\n\",\n    \"\\n\",\n    \"Running this outputs a DataFrame with the filenames, selected windows, and their detection scores to an HDF5 file.\\n\",\n    \"(We only ran on one image, so the filenames will all be the same.)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"(1570, 5)\\n\",\n      \"prediction    [-2.62247, -2.84579, -2.85122, -3.20838, -1.94...\\n\",\n      \"ymin                                                     79.846\\n\",\n      \"xmin                                                       9.62\\n\",\n      \"ymax                                                     246.31\\n\",\n      \"xmax                                                    339.624\\n\",\n      \"Name: /Users/shelhamer/h/desk/caffe/caffe-dev/examples/images/fish-bike.jpg, dtype: object\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import pandas as pd\\n\",\n    \"import 
matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"df = pd.read_hdf('_temp/det_output.h5', 'df')\\n\",\n    \"print(df.shape)\\n\",\n    \"print(df.iloc[0])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"1570 regions were proposed with the R-CNN configuration of selective search. The number of proposals will vary from image to image based on its contents and size -- selective search isn't scale invariant.\\n\",\n    \"\\n\",\n    \"In general, `detect.py` is most efficient when running on a lot of images: it first extracts window proposals for all of them, batches the windows for efficient GPU processing, and then outputs the results.\\n\",\n    \"Simply list an image per line in the `images_file`, and it will process all of them.\\n\",\n    \"\\n\",\n    \"Although this guide gives an example of R-CNN ImageNet detection, `detect.py` is clever enough to adapt to different Caffe models’ input dimensions, batch size, and output categories. You can switch the model definition and pretrained model as desired. Refer to `python detect.py --help` for the parameters to describe your data set. There's no need for hardcoding.\\n\",\n    \"\\n\",\n    \"Anyway, let's now load the ILSVRC13 detection class names and make a DataFrame of the predictions. 
Note you'll need the auxiliary ilsvrc2012 data fetched by `data/ilsvrc12/get_ilsvrc_aux.sh`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"name\\n\",\n      \"accordion      -2.622471\\n\",\n      \"airplane       -2.845788\\n\",\n      \"ant            -2.851219\\n\",\n      \"antelope       -3.208377\\n\",\n      \"apple          -1.949950\\n\",\n      \"armadillo      -2.472935\\n\",\n      \"artichoke      -2.201684\\n\",\n      \"axe            -2.327404\\n\",\n      \"baby bed       -2.737925\\n\",\n      \"backpack       -2.176763\\n\",\n      \"bagel          -2.681061\\n\",\n      \"balance beam   -2.722538\\n\",\n      \"banana         -2.390628\\n\",\n      \"band aid       -1.598909\\n\",\n      \"banjo          -2.298197\\n\",\n      \"...\\n\",\n      \"trombone        -2.582361\\n\",\n      \"trumpet         -2.352853\\n\",\n      \"turtle          -2.360859\\n\",\n      \"tv or monitor   -2.761043\\n\",\n      \"unicycle        -2.218467\\n\",\n      \"vacuum          -1.907717\\n\",\n      \"violin          -2.757079\\n\",\n      \"volleyball      -2.723689\\n\",\n      \"waffle iron     -2.418540\\n\",\n      \"washer          -2.408994\\n\",\n      \"water bottle    -2.174899\\n\",\n      \"watercraft      -2.837425\\n\",\n      \"whale           -3.120338\\n\",\n      \"wine bottle     -2.772960\\n\",\n      \"zebra           -2.742913\\n\",\n      \"Name: 0, Length: 200, dtype: float32\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"with open('../data/ilsvrc12/det_synset_words.txt') as f:\\n\",\n    \"    labels_df = pd.DataFrame([\\n\",\n    \"        {\\n\",\n    \"            'synset_id': l.strip().split(' ')[0],\\n\",\n    \"            'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]\\n\",\n    \"        }\\n\",\n    \"        for 
l in f.readlines()\\n\",\n    \"    ])\\n\",\n    \"labels_df.sort('synset_id')\\n\",\n    \"predictions_df = pd.DataFrame(np.vstack(df.prediction.values), columns=labels_df['name'])\\n\",\n    \"print(predictions_df.iloc[0])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Let's look at the activations.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.text.Text at 0x114f15f90>\"\n      ]\n     },\n     \"execution_count\": 4,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x114254b50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAALMAAAOoCAYAAACa7cU2AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvUmMZel1Jvbd9+6b75vnMV5MGZlZmSlmJYulEkWCpIQ2\\n\",\n       \"e+O2V+0GBBE25AXhCTQBEe2FbEAbw4AhayFIC9NA24s2BAluSAuBLUEDSIKqYjIzq3KKOd48z/fN\\n\",\n       \"0/Ui8juMYIsSkZESuwJ5gUJlRUW+8b//f843HcUwDLy93l7X4TL9vF/A2+vt9aaut4v57XVtrreL\\n\",\n       \"+e11ba63i/ntdW2ut4v57XVtrreL+e11ba5rsZgVRfmqoij7iqIcKYryrX+k58gpivKJoiiPFUX5\\n\",\n       \"6NXPAoqi/LmiKIeKovx7RVF8V3j8/0tRlLqiKE8v/OynPr6iKP/61fvdVxTln73B5/xfFEUpvXqf\\n\",\n       \"jxVF+edv6jkVRUkrivJXiqI8VxTlmaIo//0bfZ+GYXyq/wFgBnAMIAvAAuAJgFv/CM9zBiDwEz/7\\n\",\n       \"3wD85qs/fwvA/3qFx/8CgPsAnv5Djw/g9qv3aXn1vo8BmN7Qc/7PAP7Hv+N3r/ycAGIAPvPqzxqA\\n\",\n       \"AwC33tT7vA478+cAHBuGkTMMYwHg/wXwL/6Rnkv5if/+TwH8m1d//jcA/rPXfWDDML4LoPszPv6/\\n\",\n       \"APBvDcNYGIaRw/mX/Lk39JzAf/g+38hzGoZRMwzjyas/DwG8BJDEG3qf12ExJwEUL/x36dXP3vRl\\n\",\n       \"APgLRVEeKoryX7/6WdQwjPqrP9cBRN/wc/60x0/g/H3yetPv+b9TFOVjRVG+feHIf6PPqShKFuen\\n\",\n       
\"wod4Q+/zOizmfyo+/vOGYdwH8M8B/DeKonzh0os4Pxf/0V7Lz/D4b+q5fx/AJoDPAKgC+N/f9HMq\\n\",\n       \"iqIB+GMA/4NhGPqlB7zC+7wOi7kMIH3hv9O4fDe/kcswjOqrfzcB/H84P+7qiqLEAEBRlDiAxht+\\n\",\n       \"2p/2+D/5nlOvfnblyzCMhvHqAvB/4sfH+ht5TkVRLDhfyP+PYRj/7tWP38j7vA6L+SGAXUVRsoqi\\n\",\n       \"WAH8SwB/8iafQFEUp6Io7ld/dgH4ZwCevnqer736ta8B+Hd/9yO89vXTHv9PAPwXiqJYFUXZBLAL\\n\",\n       \"4KM38YSvFhOv/xzn7/ONPKeiKAqAbwN4YRjG/3Hhf72Z9/mmu/6fxz84P/oPcN4g/Ot/hMffxHlX\\n\",\n       \"/QTAMz4HgACAvwBwCODfA/Bd4Tn+LYAKgDnOe4D/8u97fAD/06v3uw/gP3lDz/lfAfi/AXwC4ONX\\n\",\n       \"iyr6pp4TwC8DWL/6HB+/+uerb+p9Kq/+wtvr7fWpvz41ZcY/BTHy9vp0X5+KnVlRFDPOy4hfxXkD\\n\",\n       \"8EMA/8owjJc/1xf29vqP6vq07Mz/lMTI2+tTen1aFvM/FTHy9voUX5+Wxfwffy309vq5X+rP+wX8\\n\",\n       \"jNc/SIwoivJ2wV+TyzCMv0sb8g9en5bFLMQIznHRfwngX/3kL/36r/86gsEgJpMJRqMRNE3DcrlE\\n\",\n       \"JpOBruvQdR2TyQTVahWbm5tYr9eo1+vQNA07OzsYjUbodrtQFAWz2Qwmkwnz+Rxmsxmr1Qq3b99G\\n\",\n       \"Pp+H3W7HbDbD3/zN3+BXfuVXsFqt4Pf7USqVYLFYsFqt4HQ6MZlMEAwGMRgM5OcAEI/Hoes6LBYL\\n\",\n       \"3G43yuUyFEWBruvY3t7GeDxGsViExWLB2dkZ7t69i1qthmazid3dXUynU6zXa7zzzjs4PDyE2+3G\\n\",\n       \"YDCQz2E6ncJqtSIcDmMymaDf72NzcxPL5RI+nw/Hx8eYTqfIZDLI5XJwuVxQFAXBYBBnZ2cIhUII\\n\",\n       \"hUL48z//c3zhC19AuVxGJBLBdDpFs9lEIpHAcrlEKpXCcrnEy5cvsbe3h3w+j42NDVgsFjx79gzh\\n\",\n       \"cBiapgEAGo1zUm8ymcDlcmGxWCAWi+Hs7AyJRAIWiwWTyQS/93u/99qL5FOxmA3DWCqK8t8C+A7O\\n\",\n       \"JZ/f/ruQjGg0Kov3ww8/RDQahWEYWK/XmM1mCAQCMJvNsFgsMJvNsFqtGI/HuHnzJvr9PqxWK6xW\\n\",\n       \"K7rdLrLZLCqVCvb29tBqtRCJRAAAe3t7mEwmMJvNePbsGex2O/x+P0ajEaxWK2KxGOx2O6rVKpLJ\\n\",\n       \"JFRVRSAQwHg8BgCMRiPEYjGMx2N0Oh3EYjEoigKz2YxwOCwL0zAMqKqKBw8eYDwew+PxoFarYWNj\\n\",\n       \"AwBQrVZhs9kQi8VgsVhgt9sxHo/h9XqRTCZRqVQwGo1gs9nks2k2m5jP50gkEmi321BVFaFQCG63\\n\",\n       \"G91uF5FIBKvVCuFwGKvVClarFT7fuc7I4XDAMAwEAgHM53M4HA4AwHK5RDKZxHK5xMbGBlRVhaqq\\n\",\n       \"cLvdcLlcMJlMMJvNsNvt2NzcRKvVgtlslsdaLBZwOp1wuVzo9/tXWiefisUMAIZh/BmAP/v7fqfX\\n\",\n       
\"68HpdOLg4ADL5RIA8OGHH+Lzn/88qtUqarUaYrEYFosFUqkUjo6OoOs6Dg8PEQ6H0el0oCgKKpUK\\n\",\n       \"HA4HTk9PYbVasVwu4XQ6oes61us17HY7Go0GdF3HfD7H8+fP5Uv86KOP8ODBA7RaLVgsFqiqikaj\\n\",\n       \"gfV6LQsrl8vh+fPnePfdd1GtVrFarTCfzzGfz/HRRx/B5XJhOp1iPB5jPB6j1WrB7XZD13UcHBzA\\n\",\n       \"brejVqvB6XTCZDJhOBzK7j8YDHBycoJAIIBu91zdWSwW4fF40Ov1sLm5ibOzMwQCAUwmE/R6Pezv\\n\",\n       \"7yMcDuPly5cYDocol8t49913USqVUCgUoKoqjo+PYbfb0ev15JTp9/tyk4xGI2xsbGA8HsNkMiGX\\n\",\n       \"yyEUCkHXdTSbTaxWKyyXS8xmM4xGI/h8PhwdHaHX6yEYDMLn8+HZs2dXWiOfCpz5Z7kURTF++7d/\\n\",\n       \"G4ZhoNvtYjQaIZ1Oo91uY2trC5VKBeFwGGdnZ5hOp3A4HLDb7TAMAxaLBdPpFH6/H8PhEGazGTab\\n\",\n       \"Dfl8HtFoVH6fi3E6ncLtduPRo0e4c+cOOp0OQqEQhsMhfD4fFosFrFYrCoUC/H4/1uu1PE6n05Ej\\n\",\n       \"VdM0BAIBDAYDLBYLeDwetNttWK1W9Pt9KZNMJhM0TcMPfvADpFIpWCwWZDIZ9Ho9WK1WOJ1O7O/v\\n\",\n       \"y07o9XqxXq8xHo/l35ubm+h0OvB4PPjkk08Qj8cRiURQLpdhs9ng8XhgtVpRqVSgqirS6TSePXuG\\n\",\n       \"W7duYTabYTgcYj6fYzabwW63YzAYYGtrC4ZhwOl0olw+1/+EQiEAkMdar9eIRCLodruYTCawWq3w\\n\",\n       \"er0wmUxot9sIhUI4OTlBMBhEr9fDt7/97WtfM/9MV6lUQiAQOOfpX9W9rInr9TpyuRwikQj6/T68\\n\",\n       \"Xi+Ojo4wGo2QTJ6jfL1eD6FQCC9evEA6nUav14PP50O1WsWtW7dgGAZarRZ8Pp/8P5PJhMlkglwu\\n\",\n       \"J19gMpnE0dEREokEFosFRqMRZrMZxuMxptMpstksarUa4vG47FZOpxNnZ2cAAEVRsFgsMB6PsVwu\\n\",\n       \"MZ/PsVqtsLGxAZPJBIfDgSdPniCbzWI+n2M8HsPpdGK5XMJsNiOXy0FVVVitVqiqKrv+aDSCyWRC\\n\",\n       \"IpGAruvweDyw2+1oNpswDEMWWDgcxnK5hMPhwHw+x2KxQK/Xg6ZpGAwGUFUV0+kU/X4fLpcLx8fH\\n\",\n       \"0DQNi8UCuq5DURScnp4iHA7D6XSiUChgPp/D5XLBbrejXq/Ld1apVGCz2aCqKkqlq4kdr9ViBs53\\n\",\n       \"TTZAZrMZOzs7cLvd8Hg8ODk5gcPhkF00FArB4XDA7XbDbDYjEonAbDZDVVVYLBYoioJoNAqn0yk7\\n\",\n       \"kqIoGI/HWCwWOBeBQR6n2WwiHo/DbDZjd3cXzWYTZrMZbrcbJpNJSpn5fI5kMgmTyQSTyYTFYoHV\\n\",\n       \"agWXyyVlDGvWWq2G7e1tFAoFWK1WaRR9Ph+8Xi/6/T4Mw5Bd2WKxYDweIxAIYL1ew+v1wmw2o1Qq\\n\",\n       \"SX08nU7h8XjkebPZLMxmM3RdRzabxXq9hqqqSKVSACBlgaZpUBRFPlu73Q6r1QqPx4NQKIRqtYpQ\\n\",\n       
\"KIT5fI6trS10u13MZjNYrVbY7Xb5ffYf8/kcxWJRmsadnR185zvfee3v/tOCM/9MF5smRVHQ7/dh\\n\",\n       \"NpuxXC5ht9vR7Xbh9Xphs9mwXC5htVrx1a9+Fd1uFw6HA2azGYPBAI1GA4PBAIqi4HOf+xyq1SpG\\n\",\n       \"oxFarRZWq5U0VYPBAE6nE6qqYjKZSNPW7/dlNwuFQtLscFcfDodwOBzweDwAIGUNj/H1eo1er4fV\\n\",\n       \"aoXVaiWPabFYMJvNYDab4fF4kEqlMB6PYbfbpQTyeDyYzWZYrVZwOBzSYOm6jnA4LI3vbDbDfD5H\\n\",\n       \"NBpFq9XCbDZDs9kUlMZsNmMymWCxWMhn2263MZvNMBgMYDKZpNZlGeNyuaSkYt3MTcTj8cBisQCA\\n\",\n       \"IES9Xg+pVAo2mw3D4RBOpxOtVutK3/+1WsysLa1Wq+yohmFgPp/DZrPB7XYDOD/GnU6ndOVEAzY2\\n\",\n       \"NhAOh2G322XRcDcOBoNwuVwIhUKw2WwIh8NYLBYwm81wOBxwuVxyUywWC2iaJjeX1WqVXZNNm2EY\\n\",\n       \"srupqgqXywWn04lMJgOXy4VgMCiPrWkanE6noB28WQ3DwHK5hKqqMJvNePfdd2XnBM7rVpvNBk3T\\n\",\n       \"4Pf7oWkabDYbHA6H1KhutxsWiwWhUEhe33w+h8VigdfrBQAsFguEw2HM53NYrVZsbW3B7/dDVVXM\\n\",\n       \"ZjN5TS6XC2azGYFAAG63G5qmyfs2mUxyeqxWK/h8PkwmE0GDptMpAoHAlb7/a1Vm+P1+NJtNRKNR\\n\",\n       \"PHr0CGazGb1eDzabDfV6XdCF2WyG9XqNv/zLv0Sz2ZRFeXx8jMFgICjCs2fPsLe3h8FgII0aS47Z\\n\",\n       \"bAabzYaTkxP4/X40Gg1MJhO43W7BnK1Wq9wo/X4fTqdTXt/x8TGcTicSiQSazabsvIFAAIqi4OXL\\n\",\n       \"l1BVFU6nE5VKBfP5HPF4HLPZDKFQCE+fPsWNGzcwHA4FzTg4OJCbtt1u4+TkBKFQCGdnZ9IrLJdL\\n\",\n       \"aJqGYrGIRCKB0Wgk5cb29jb6/b6UT/l8HjabDWazGe12G+l0GoPBAB9++CHm8zny+TxcLhcmkwkK\\n\",\n       \"hQLW6zVWqxXi8TgePnyIjY0NubGazSYAyG7PGl5VVTSbTdksrnJdq515Op3CYrGgWCxCVVX0+32B\\n\",\n       \"zLjzzedz+fdyuUSv15MjdTabwe/3S40XCATQ7/exXq/R6XQQCASQy+Wk8ZlOp4jH42g0GrIDLhYL\\n\",\n       \"tNttAOclxGKxQKVSgaIo6HQ6cLlcsjNFo1GsViu43W44nU5p3nRdh6ZpSCQSUFUVJpMJNpsNTqcT\\n\",\n       \"0+kUpVJJmsThcAhN0zCfz6FpmpA0q9UKHo8HqqrC7/djPB7LydHr9aSpZIngcrnQ7XYxHo8xm83+\\n\",\n       \"gxPF7/fL52mz2WC326WvGAwGcDgc0HVdyCI2ydPpVH6X5RPret7kqqpitVphOBxe6fu/Vjvzer2W\\n\",\n       \"49Lv98NisQj05na75YvvdDpIJBJYr9d47733pHmJx+NYr9dIpVKYzWYoFovY2toCcF6Pj0YjbG9v\\n\",\n       \"S2OjaZqgGiQKLBYLut0u1us14vE4xuOx7KgOhwOTyQTr9VoayPV6LSXJYDBALBaDw+GQXXQ+n8Pv\\n\",\n       
\"96Pb7Qoyw3LAbDYLqREKhTCZTORGYelhs9nQ7XYFGw6FQtLMBgIBYfF4IiiKAsMwpDEeDodCrhiG\\n\",\n       \"gXq9jnA4jHa7LU0r2UE2qHa7XW5Aj8cjJVKr1bpUang8HgyHQ6TTaSwWC8Tj8b/v6/0Hr2u1mE0m\\n\",\n       \"k+woxD+5aNxut1DUxItZP6qqKrUxG6pOp4Pt7W24XC4YhiEd/Hq9lkXCGhKANFeqqsLn8wmbdfFx\\n\",\n       \"AUhzyNqWmDRJFTaLLIlWq5XU1g6HA+v1GhaLRd6D1+uVJtLv98vuzTp2PB7LIluv1/B4PBgMBphM\\n\",\n       \"JlAURero4XAIr9eL0WgEv98PAPJY7D2IcvDzWK/X8lqXyyXcbjfcbjfsdvulv8Od2el0ygkzHA4F\\n\",\n       \"c+Z3papXW47XajGzk2ZpQJhuc3MTx8fHWC6XWCwWwuixM282m3jvvffw9OlTwTwbjYZguuFwGPF4\\n\",\n       \"HIVCAcC5vsDpdEq5wWOVdaPJZEIkEkEul8POzo7UxKVSCR6PB4FAAHa7HaVSCZqm4dmzZ4jFYvB6\\n\",\n       \"vahUKrDb7Wi323A4HGi320gmk7LYisWisHAejwf5fB5+vx/lchnhcBi6rqNYLAqGHY1Gkc/nEQ6H\\n\",\n       \"oaoqOp0ODg4OEAgEUKlUMBwOMR6PsVqtBKPnjdput6HrOnq9HgDAbDajXC5jZ2cHiqLAYrFgvV4L\\n\",\n       \"IhIKhdDpdLBcLmEYhpAr7EN8Ph8Mw8CLFy8QiUTw5MkTAMDZ2RmSySSKxeJP/W5/luta1cxOpxPj\\n\",\n       \"8RixWEzqVKvVina7DcMwpKsn6D+bzfDw4UOEw2EUi0U4nU6MRiM5Svv9PsLhMGq1GlqtljR0qqpC\\n\",\n       \"13Wpm/1+P3Rdx3K5hMfjQSwWQ7vdRiQSga7r2Nvbw3q9RjqdhslkQjKZxHQ6xXA4hN/vh8/nkzJg\\n\",\n       \"d3dXdCOTyQSxWAyTyURecywWw+7uruy4fr8fs9kMFosFuq5jMBjg/fffF4hO13XRSRCmS6VS6HQ6\\n\",\n       \"iEajGI/Hsjun02lYrVbMZjNZkJFIRAgZAPD5fGg0GqjVagDO+wKWEzxV/H4/er0eDMOQfoBwKLH7\\n\",\n       \"1WqFdDoNt9uNYDAopchVrmu1mGu1GtbrNUqlEsLhMBRFkcVG/JaKMzaELD/YlKTTafh8PmGsyOwN\\n\",\n       \"BgP0+33ZzQFIudHr9aSUaLVaODw8hNfrRb1ex3K5xJMnT2AymdDv9zGZTHB6eio0b7FYFChwNpvh\\n\",\n       \"0aNHMAxDVHXtdhsulwuDwQC6rmOxWODjjz+G2+3GfD4XjDYYDMJms8HlcuG73/2uqNXYWBUKBei6\\n\",\n       \"jtlshnw+D6/XK83ZcrnEcDjEZDIRWI6NcbValROO+DNvDuLgFFlxMxgMBvD5fFLnt9ttmEwmjMdj\\n\",\n       \"jEYjHB4eYjgcYr1e4/Hjx6Ihefr06U/9bn+W61otZqfTKX8mRTsej9FsNqEoijRmq9VKSpJsNovx\\n\",\n       \"eCyNT7PZRKPREE2E3+8XUkNVVazXawCQsuHil93pdAAAHo8H0+kUiqLA5/MJ8rBer+F2u4UmXq1W\\n\",\n       \"UFUVvV5PbrZQKITFYgG32y2NIutWvg5FUdBoNDCdTjGZTDCdTlEoFGCz2bBYLEQItVwuMRqNBFdm\\n\",\n       
\"4xcMBjGfz6HrupBFZrMZo9EIiqKgXq9Lv8FewWQyiUSWRBBJlOl0ilqtJiUStSV8zYZhCPyn6zqi\\n\",\n       \"0Sg8Hg8ajQbS6TTMZjO63a7cgK97Xaua2eFwwOl0QtM0qV0TiYRQupubm2g2m5hMJjAMAz6fDz/6\\n\",\n       \"0Y/wwQcfwGw2I5lMXmLLBoMB7Ha71MSBQABHR0eIRqMYDocIBoOiGdY0TZAFRVEwn8+xsbGB1WqF\\n\",\n       \"RCIhwiHgfLFzB/R4PFgul7DZbMKasWZnHcuTw2KxIB6PC1tHUoOUOUsVljOKoggjarfbkU6nhSRq\\n\",\n       \"NpsCnyWTScxmM3g8Hui6Lo1kPB4XEVQoFBJSio/BRtFmswniYjaboWkaXC4XbDYbfD4f/H4/isWi\\n\",\n       \"vCaSKNSZK4oCm82Gmzdv4k//9E9f+/u/dos5l8vBZDKhVqvh7t27KBaLWK/XGA6HGI1GsrPlcjkE\\n\",\n       \"g0FomoZSqQSv1wuHw4HVaoXHjx/j/v378Pl8cDqd6Ha7cDqdqNVq8Hq9mEwmoqngEdnr9VAoFHDj\\n\",\n       \"xg2B56bTqezmhLFI5ebzedE7T6dT2elY7qiqinK5jGQyiZOTE6xWK9y4cQOHh4fweDzSXBIrnkwm\\n\",\n       \"cnpwxyaeznKgXC7D4/FgNBqhUqkgEAhgOp0CgLCSy+US7XYbwWAQR0dH2NjYQLlchqqqqFQqiEQi\\n\",\n       \"sNlsWK/XmEwmcDgcmM1moq9erVZot9tot9twOp1oNBoYjUZYLpeIxWJSdrA273a7cLvdWK1Worx7\\n\",\n       \"3etalRnD4RCbm5vw+/0iGHe73QgEAhiNRqIeo5Ccu5LL5cKtW7dEdXbv3j0Mh0O0Wi2RcdINwSPU\\n\",\n       \"7XbLMUwMOZVKYTQaoVqtIhaLYTqdIhKJiAC9VCpJ+bGxsSEEA3c/KtiIDft8PpTLZezu7iKTycDj\\n\",\n       \"8cBkMsnuSRbN6/UKBOjxeFAul+FyuTCbzeDz+VAsFrFYLOD1euU5U6kUdF0X3cpsNpP6PJFIwOFw\\n\",\n       \"wO/3o91uy3/funULq9UKkUhEqHUKmogds3zLZDIAzqFJ1vadTgd2ux3T6RSapsmmQOLkqqTJtVrM\\n\",\n       \"7XZbFqvdbpcOm1RwIBCAx+MRwXssFsPjx4/lg9za2oKmaUJMRKNRuFwujMdjHB0dwePxoNPpwOFw\\n\",\n       \"oFqtyvHudrsRj8eFdGADSf1BOBxGt9tFNBqVGpT4tMfjQTgcFtYxlUpBVVURIlFmSnbT5/NJrc33\\n\",\n       \"xrqaKA4RGyrzCO2FQiGkUikpbagr5o1FRISCqK2tLYHXbDab3MyUdhLNIbFDvD6ZTEofwBKE6Mx0\\n\",\n       \"OhWxF0mqUCgERVFEivu617VazGazGfP5XHxvg8EA9Xodi8UCtVoNz549E9IDAMbjMba3t2EYBiqV\\n\",\n       \"Cl6+fInpdIoXL16IBpq1MzUEPGLJMNKZksvl0G63BV1ot9sYjUYYj8fQdR0Oh0OazNVqhWKxKN5C\\n\",\n       \"Noe6rqNWq8nPCHV1u120Wi2Uy2VRq5VKJZjNZgSDQdlhy+WynCAX3StUrD158gSnp6eo1+uXqP56\\n\",\n       \"vS6wGZtNq9WK58+fAzgnZpbLJdbrNebzOUqlEvb39wWLXywWODw8FJJmMpkgn89jsVig2Wzi5ORE\\n\",\n       
\"UCAqGXu9Hk5PT9Hv99FqtaCqKqrV6pW+/2u1mCORCFwuF3w+H0ajkSjnVFWVJovsVyQSQbvdRqvV\\n\",\n       \"EqaK/55Op6JfIBxFFotoxnA4hK7rGA6H4vEjPOXxeKAoCvx+v/gDqZleLBaCUxN9GI/HcLvd8Pl8\\n\",\n       \"QhvPZjOB4rrdLoLBoODYbKAoGgoGg7Db7fD5fJcE/Y1GA51OB91uV9R5s9kMsVhMqGoSSe12Wyhp\\n\",\n       \"ylR1/Tw6mWUP6+/5fC6NJckQi8UiiEWn04HNZpOTiPix1WqFw+EQ/yB9jzRRXNX1dK0Ws6ZpmE6n\\n\",\n       \"aLfbl2hk0rar1Qper1dqaE3TcHZ2JsJzr9crOmIC/Wze6NhgM3PRga0oimh8TSaT1MCDwUBkmPy7\\n\",\n       \"3IFtNpuIepLJJObzOabTqezOFotFdnNN0zAej+XUASC7OmtOLloK/slwkvVrNBpCPfN1ES9nOTAe\\n\",\n       \"j2VRDodDaVrNZjMURRFsmLs9kYtOp4P1eo1+vy/mB/4+f5dMIfXgiUQCNpsNtVpNTAlkWF/3ulZo\\n\",\n       \"Rr1el91tMBig2+1K7enxeBCNRtHtdkXEQ8yVWloSILdu3ZJaDoAIheiqJipBAU6n08Hm5qbsyGyw\\n\",\n       \"eAOQRSS6QDobgCwGs9ksNTjNqIvFQrQhPAE0TUO1WkU2m4XdbkelUhG4zO/3X8J6b9y4IYL/RCIh\\n\",\n       \"2giTySS753w+RyaTkViEcrkszpjPfe5zggbFYjHpLUKhkLhVgPO6nvpoOtxpzYpEIvD5fJjNZlJi\\n\",\n       \"kZ0kscVa/Etf+pJQ3K9zXaudmdrcTqcDt9uNbDYrWgPWoRQWKYqCDz74AMlkUnYpTdOkIYvH4/jq\\n\",\n       \"V78q7JzdbsdisYDP5xPBDUVNbH646EwmE6LRqLwmAPI7VPI5nU4EAgE4HA7ZTe12O2KxmEhAicg4\\n\",\n       \"nU4pc4hFc/EHg0E4HA4kEgncv39fMGZN05DJZOD1euFyuUTUFAqF0G634ff7YTKZ4PF44PV65X0F\\n\",\n       \"AgFp7Ig1s8lkbZ9IJIRip1fQ7XYjkUgglUrBbDYjkUhgd3dXSCHGMPCz9Xg84lRxuVxwu91y6rzu\\n\",\n       \"da12ZtZoLpdLsNHZbCY6iKOjI/HXhUIh/PEf/zHW6zWq1So8Hg+Oj49FaE58l7tZpVKB1+uV0Bab\\n\",\n       \"zYZisYhWqyVlAMX7lEo2Gg3cu3cPz549g9PpvKTlpZ53tVrh+PgYiUQCxWIR4XBYMGHi41arFdPp\\n\",\n       \"VBRt/X4fs9kMiURC7GC1Wk0sXGwu9/f3BcqLRqMoFAoCI/I1dDodtFotKU2sVis6nQ5u376N4+Nj\\n\",\n       \"IW5arZb0EnyP/X5fmrxGowG73S4Sz3a7Lbgy3zOpbxJWjx49gqIo+OSTT7C9vX2lXRm4Zjtzu91G\\n\",\n       \"rVbDYrHAfD5Hv98XQqLT6WBra0vw0fl8ji9+8Yt49OgRIpEIXr58KU0cpZcX3dW0OdGhMRgMEA6H\\n\",\n       \"EQqF0Gq1xJ1BnJW7UavVgtPpFC3zfD4Xva/b7UY4HMbW1pZIJJllQQlpLBZDsVjEdDoVhIP2LcYK\\n\",\n       \"UHJZq9UwnU4RjUZFl00cnbEDg8EAo9EIjUZDJJ5kEllWud1uKQlcLhd6vR50XZfAmuVyiUKhIKcP\\n\",\n       
\"yyOq69gf1Ot1iXS4GMjD+p4nTyQSkRPuKte1Wsy03qTTafkivF4vFosFgsEgAIhmVlVVPHv2DOl0\\n\",\n       \"GhaLBRsbG6KpCAQC8Pv92N/fRyKRgKIoyGazcpwTXaDTeWdnR7x6wWBQHM4+nw/L5RI3b94UBRoA\\n\",\n       \"WYilUklwXOpAdnd3EQqFLkUm0FOXzWbhdrslwosGXYryl8ulwI3L5VKc4IQSWTdHIhHBsi9mh2ia\\n\",\n       \"Jhg4dd/UUbPuZbCL1+uVG8Ln88FqtYrLmyekz+cTcT4p+Gg0is3NTSmVWNZomobd3d0rff/XajGz\\n\",\n       \"ZqVgxuFw4OTkRPBTpgQRnXA4HBIYU61WoSgKVquVeO6azSYGg4FkU1itVvGymUwmie1iidFsNtHr\\n\",\n       \"9dBut9HpdESo/9FHHwnNPBgMxPHN0BdCdoPBAJ1OB7quiyi/Wq2i1WqJvrjX6wnaQTgRAE5PT+V1\\n\",\n       \"7e/vYzabSRQWSwGSLLQvUWjEXZg0NJnQfD6PwWAgbGi9Xpe8DfoOWdpczOVot9soFApSVhACrVQq\\n\",\n       \"osY7OjrCfD6XaLLxeIxKpXKl7/9aJRp985vfFI1ur9cThosZEbTs12o1JJNJgag0TRNLFb9ILiiH\\n\",\n       \"wyGCI3rpgXcGAAAgAElEQVT8JpOJEBOkcG02m7BkrG8pKGK9DEB+3u12EQ6HRePAPAzu+Iz+oqKP\\n\",\n       \"aAbdNIS8qMjTdV1uUGo82MgVCgVsbW1hPB4jGAxKk0yh0avPDy6XS3QZ4XAYw+FQtCcMhmk2m4jF\\n\",\n       \"YqLvZsadw+GQml5VVezv78Pv98Nms8FqtULXdXHAOJ1O1Ot1JJNJiQaj7ew3f/M33yYaAYCu69JY\\n\",\n       \"nZycSDLl3t4eLBYLjo+PEYlEUK/XxdVRqVRw7949SdqZzWY4PT3FvXv3cHh4iLt37+L58+e4ceOG\\n\",\n       \"7JT0FLLZ4S5DnDsUCmE0GgkzV6/X4ff7JamzWq1KXsfFpspms2E6nUpjyXqTATOVSgXRaBTBYBD1\\n\",\n       \"el1yKVhTU2HHBKaXL1/CbDbj6dOnl2IB6ERZLBaCP5P27nQ6gpfTCeP1evGjH/1IwhuZqKqqKj7z\\n\",\n       \"mc/g8PAQAETu6nK5hNUbDAaX3Oq8eb1eLwaDAcrlsqA3xL1f97pWO/PXvvY1JBIJ1Go1BAIBbG9v\\n\",\n       \"4+DgAPF4HNVqVaSh/JIePHiAP/qjP8KdO3dgsViQzWYl7GU2m+HevXv45JNPMBgMJLtiMBgIwwYA\\n\",\n       \"iURCcjlqtZrseKlUCt1uV8oAitsXiwUCgYDoGEwmE6rVqsgj4/E4crmcMJfNZlN2cmZpsJkju0gC\\n\",\n       \"5f3338fDhw/RarWQTCYRDAbRarUEWQCAZDKJH/zgBwiFQlBVVVRrhmFI7U1vZDweR6fTQaFQgMlk\\n\",\n       \"ElSFFrNYLCaUORlKn88HXdeF9ez1egLrsYcg6hMIBEQF6HA40Gg08Ad/8Advd2bgx9FcpFSdTieq\\n\",\n       \"1aqI0UulksBS6/UaP/rRj6DrunzZh4eHUlc7nU5873vfE7tQrVaTxxmPx7KAWq0WWq0Wtre3MRgM\\n\",\n       \"MJvNBB4DzmWpvV5PvsCL7ufBYCBIQr1eRyQSkXqXiaTdblfez0WTLBV79XodLpcLnU5HYDDWyuv1\\n\",\n       
\"Gt1uF+VyWUoS0vPtdhs2m008fvP5HF6vV3LvaMotl8tCqe/u7iKfz6NYLEoQjcPhwPPnz8UCRSnq\\n\",\n       \"ixcv8MEHH6Db7Qpt3mg0xDNZLBZx69YtqdEZA3yV61ot5mg0KouDH2AmkxGigOk5JCBu376NFy9e\\n\",\n       \"IJvNSo1NyxDF4kyyVFVVogXS6TRqtRqsVqugDvxiWfuFQiHx11HySScKPXB0fvDE+MnaXFEU7O7u\\n\",\n       \"4uTkRBADZiiTfibyQnJntVoJ3svXzNDvi3Af47EODw8lDIYQnsfjgaZp6Ha7SCQSAktyx2bN/eUv\\n\",\n       \"f1nqcU3T5DGcTifef/99kafyJGK/QQOC3W6XBClmlFzlulZoRq/XEyuPx+MRmpbZxQxL6Xa7smPQ\\n\",\n       \"t+Z2u+Vo3tnZgd/vF3aOZYTD4YDVakWv1xP2i6whAxNXq5UIdoipmkwm1Ot1YQ3JVHIRs6EMBAK4\\n\",\n       \"deuWiIbozmaJwKkA3W5XUBXutIZh4Ctf+YqUIQ6HQwwHjF7Y2NgQ9i4ajUozCkCaRWos+v0+QqEQ\\n\",\n       \"lsulnG6NRgOBQAC3b99GLBaTm5PiLjpZmF1y0aDK9CLqmQnZ8e8DkI3oda9rtTPv7u7KEdtsNiWN\\n\",\n       \"nYJ1TdPQbDYxHo+xtbUlwd1M62Rj+MMf/hDpdBqPHj3CrVu3UK/Xsbe3h1KphF6vh93dXXQ6HRmZ\\n\",\n       \"cHBwIOUD0ZOHDx+K1arX62E0GkniPj1z29vbIuJxOBwSaUvPIp0cz58/lxuLzN/BwQHu3LkDj8eD\\n\",\n       \"SqWC09NTiR2j8IdwHQmc09NTsXDVajXs7OwIDBkIBBAIBCQrOZ1O46OPPoLf74eiKOIkyeVyAjHy\\n\",\n       \"FKDemzkYmqah0WggEolITV+r1QR/7vV6YqAlW8v6+SrXtWoAf+3Xfg3pdBqNRgN+vx/b29t48eIF\\n\",\n       \"EomE0K0AUCgU4Ha78d577+EP//APcefOHQlOrNfrYiS9d+8eHj9+LLFTbrdbRErUKaTTaZGKNptN\\n\",\n       \"0T0kEgkRqNOISn0ETbLckS4iCqlUCqenpzJnhF495iYDEPVdMBiUHW+xWOBLX/oSvve976HVaiGV\\n\",\n       \"SiEQCIhiLpVKwWQyIRaL4fvf//6lHA2PxyOjMqjE83g8EhJO7TQRkQ8++ABPnjyRxKbZbCajIxhK\\n\",\n       \"Qzy60+kIoUOokhJTWrPYADabTfz+7//+azeA12oxf/Ob34TP55NMDCbIUzfM2rdarYoplVRwqVSS\\n\",\n       \"fOblconJZILJZCLOFMMwEAwGJUaLBAsA2V2JqbIJ4mtheUEnB7v8YDAohA5F/rQoMeiFODMNscR2\\n\",\n       \"AUiyEL1/dDiPx2OEQqFLxAtLBqZ5djodSTBi2hFfC0uS4XAosCIXPKcEMH42GAyi0+lIWpSmaQiF\\n\",\n       \"QhKnQHktYx6o+S6Xy9jY2MDBwYE4TMxmM37rt37rLZoBQOhbpuik02l0Oh3s7Ozg2bNnElZCXLdY\\n\",\n       \"LOLo6Ai/9Eu/BJPJhNPTU0SjURwdHYm4BvixdWk0Gol/jSJ1NkpskE5OTiTettvtSuYF2T5FUUQZ\\n\",\n       \"Rziu1+vB5XJhNBpJ6UNV2mAwEMlpq9WC1+uFruswDAM3btxAtVqV+n08HouGeDgcolKpIBgMolar\\n\",\n       
\"iQ6brKbT6bykIe71erh9+7bk2fV6PRweHkLTNJn9srW1hf39fWQyGdFhc7YLwxYnkwnK5bLk2zF6\\n\",\n       \"l2wlk5zS6TSOj49xdHSE4XCIeDyOR48eXen7v1YNIO3+Pp8PmUwGsVhMPkgygyQuWDoAuJSwmc/n\\n\",\n       \"ZRKTxWLB/fv3MZlMZAZIPB6Hz+cT0oSLjbVrKBRCJpMR4TqllTS38r8pxp9Op+LrYwIQM+yY00aR\\n\",\n       \"EcuJcDgsoieiFmQwqXOgwo8sJwVUgUDg0pEfCASg67o0bWQwAUhjmk6nYbPZpA8hXV8ul6UGZonC\\n\",\n       \"7+FiBC/ZSiYlMfqB6AwjDiiXfd3rWu3M/X5f4lsDgQC8Xq+Ij0ajEe7du4f1eo1cLof1eo1f/MVf\\n\",\n       \"RKFQEBf3r/7qr6LdbmN3dxfr9Rq//Mu/jMPDQ/lS6FBhc7Zer7G7u4vZbAaXy4Xlcim75t27d8Vg\\n\",\n       \"u7m5iW63KwIeQncU+XQ6HVitVqRSKSQSCZnwxMgCegqZ2EnHNJPuuSi+8pWv4Lvf/a7Y/D//+c+j\\n\",\n       \"1Wqh0WiIN5DmWjpI+v0+7t69KygMhfmLxQJf+cpXxHsYi8Ukfvezn/0sjo+P5UZIp9NIJpOy+FVV\\n\",\n       \"FSc4R67REU9ZAbXcvLF4mj18+PC1v/9rVTP/xm/8hgSOx+NxoYopWD84OJAEIy58kgkM2bZYLJjP\\n\",\n       \"57hx4wa+//3v486dO4JyOBwO5PN5ZDIZ2VnD4TAODg7gdDrR6XSQSqVkxz47O5OJTGz2iAAUi0Xs\\n\",\n       \"7u6KCIgCKKrO6DKv1Wrw+XzybzKBdH9wgZVKJYxGI8nNY0QtpaFUEFosFmkgycjRysRSxDAMiQpg\\n\",\n       \"2hEx9NPTUwmX5MSAer0uDhQmI1HjQQyeJ+T29rYwgnSGc7zGw4cP8Sd/8idva2YAl/LTTk5OkMlk\\n\",\n       \"RKzPzDhivGSiSqWSgP5Ucem6jhcvXogQh40krVMU1U+nU9TrdSEp2L3Te0h7EbFqKueI5xKpuJia\\n\",\n       \"xEaMfyeRSIimghYqivy5ABlzexFRCAaDkqF8kWbmdKxgMChZdsxGJvLCcqPZbEpaKHCuU3a73RKT\\n\",\n       \"wPfD8qbdbiObzcLr9YqakDpoBk1e9ExSJ0Ip6duogQtXr9cTL1y9XhdXcaPRkGaOKjIOaCwWixgO\\n\",\n       \"h+j3+9B1HZ1OB6VSCcvlEq1WC5PJRMbzAucTSSuVClarlUR9DQYDFItFVKtVybB48eIF/H6/sGwU\\n\",\n       \"ELXbbWENaZjN5XLyxQPn4xcePXqEZrOJ/f19iZZ9+fKlHNvdbhf5fB69Xk8gQebPMXCl2WzKv5ld\\n\",\n       \"x5mGpPHZHH700UeXXiNwLiut1WoYDodCz1erVZycnOCjjz6SiQP7+/uXUkEHgwH++q//Whg9Mn9U\\n\",\n       \"EzKXg+E0tJz98Ic/vNL3f612ZsZdUYTDwBWKYIh5sllSFEUEN5FIBBaLBdVqVVKMOBaiUqkglUpJ\\n\",\n       \"LAEhtXg8jlQqhfl8jtPTUxHqM2eNKT8Mn6FYh0gFa06Xy4VAICDHNIXzbKoikQh6vZ4YcGOxmEgn\\n\",\n       \"iSUfHx/j/fffx2AwQK1WQyQSkcfj6eB0OhEKhVAuly9N3qL1iuaE0WgkzpfVaoVMJoNGoyGoTTab\\n\",\n       
\"lXR+ZnhwABBw3gDev39f6PF4PI5arYZsNivZcqzh+dy6rl85Of9a7cyse7vdLjY2NjAajcSSw2gA\\n\",\n       \"fpBMHNrd3RWtcLFYhMPhwAcffCBw2XA4xP379y8lwnOK6Ww2k6ziVCoFTdOQz+dl+A4F8bFYTHZ6\\n\",\n       \"1qBUpLG+Zj7c2dkZlsulBJlfLDvu3buHVquFWq0mU6+Ojo4AnLtXLqYIcehQLBaTG5PBjjw9qF+h\\n\",\n       \"0L/f78Nmswkzubu7K3G+DGrkfEKOq+Bzc+oVoxaIXjAzj3U+iZ/pdIqzszO0Wi10Oh1EIhFx4rzu\\n\",\n       \"da0WM0MEedRdPMq501AcztqVc7OPj49FYD8cDkWrTHz34OBAJJccJkl9Ba1YFDKx1tY0TSBARVEk\\n\",\n       \"uJs5dcFgEFarFbu7u9JIUaNMPx5vSmK2xMoZysIoMTq3V6uVMIz0JRIn56nF/Ds2aD6fD9lsVnKs\\n\",\n       \"yUgSleCQn9VqJZ7CVqslOXIul0sE+2Q4OVtQVVWZ90fzLMu/3d1dsaIBEJ3I617XajFHo1HxyjFr\\n\",\n       \"jSIb4LzepcTzopOYXT/F7aPRCMFg8JJL5aIwhynwwWBQ0ISLY8moT6AskjNHmLQJnBM8vNlo108k\\n\",\n       \"Epfm41HOyh2amDTjAbhDslyg04UiqYupRZx2xRuUN5TFYkG9XpeIrvV6LVOpSLCwDOBnVa/XJdeZ\\n\",\n       \"aBDFWhy1xrFwlAZwhBwAubEp2KLV7arjhq/VYqbHjaMJSGK43W5EIhHJkgiHwzCZTPjGN74h9enF\\n\",\n       \"3ZM7zu3bt6GqKjY2NpDP50VzkEwmpYRgCihJCS4aTdOkkWJYDONgZ7MZ0um0mDkJZ5G9JEphMpmE\\n\",\n       \"OSNxQmPp/fv3xZ2xXq9x48YNRKNRCcC5mOi0ubkpRAwpd2Ljq9VK4nsJ9/GGDQaDcjLYbLZLUtn1\\n\",\n       \"eo1sNivGBZZZvGlJhtAl73K5kEqlJCeaTCgD2JmncZXrWi1mBqMAkJ2XGC79f4xvtVgs+N3f/V2B\\n\",\n       \"kJj03m635QN++vQpPB4PCoWCDNCJx+Oi8wUg5ljuRjy2qVVg88UZH9xtdV1Ht9uV3Zy1K28KwzAQ\\n\",\n       \"CoXEo8jFdnh4iNlshkqlIrssiZdCoSCxsfQAMqSRr+0iEUNJZqPREP0IP5/1ei11fyQSwXq9Rj6f\\n\",\n       \"lxiGxWKBs7Mz0bgUCgUMh0OZgtvpdFCr1SRdaTAY4PDwUDLw1uu1jMAgucTv7nWva7WYCa15vV6Z\\n\",\n       \"trpardDv99Hr9UThRYeyrut48uSJNCrValUWf7PZRLPZxGw2kwR6jnoYDAbo9XpYLpeo1+sAzskF\\n\",\n       \"SkSn06mI+pfLpUw25fxpjmTg8E2KcihCYrYFQ2lqtZrMHWQUVj6fF3qe8BthOr/fj1wuh7OzMwlS\\n\",\n       \"BCAWrVAohHw+j+l0KtoKDjSi2Ii1M8Nu2CMQmuTi43QpPn6tVsNyuZSkfAY8kjKnhIAacc7q5glz\\n\",\n       \"letaQXPdblfS4Cm8Zy1IZRe9emyCqEzjF7VcLsV8ydkhTNFknUn5JuEx4tLMkqOAnmqzP/uzP8PN\\n\",\n       \"mzelsWLcLlP4J5OJ1M2cbUIEoFQqSbJ8Pp/H7u6uRBWUy2WZoMoTwul04vj4WGxZxKQNw5BTh8J9\\n\",\n       
\"yl1nsxnG4zEajQba7bYYU3O5HOLxuDB4RGAYS0AGks0x7VakwGnv4vAhOsGp075oAm40Gm8H9Fy8\\n\",\n       \"7Ha7hB/evHlTglISiYTMKSFExQ79wYMHmE6nuHPnDlRVRTwex+c//3nRC6uqir29Pezt7QGAfDHE\\n\",\n       \"bGlz4iTXyWSC0WiEW7duodVqST1rtVpl8DyDwWmMJYvGkoei/kAgAIvFIlG9X/jCF7BYLGCz2ZBM\\n\",\n       \"JhGNRuUGZRA5bf0mk0kEV3zf8Xhc3Dcsjbxer9TWZA8pZHrnnXdgGIY0nDS87u3t4b333pMmc2Nj\\n\",\n       \"A5qmSTOqqqrAmbxBORErlUqJu50eRTa2b0NgLlysvxKJBE5OTmTHYbCKy+WSBU+56Icffgiv1yu5\\n\",\n       \"cqenpzg7O4PX65XdsVwuo1arCRFis9nQaDQAQPDqSqUi6fbBYBCFQkG0FFy49NNxNPB8PkcoFBJI\\n\",\n       \"ajQaYWdnR2r/fr8vtDRwznD6/X5ks1lBFKivGI/H6Ha70HUd9+7dk3hceh8pSqJovt1ui/aZ5leq\\n\",\n       \"DBl8Q6wdOO8NOCelXC6jWCzKjcjPhG4ap9OJs7Mz0XirqiqnHxvG6XSKTCYjoiuGU17lulaLmePO\\n\",\n       \"Dg4OsLe3JzYdMmfPnz+XcJf1ei3/jyHYtFwxmJBA/0UJI3FVAGK9Z/RUv9+XnYbjwjweDw4ODkRj\\n\",\n       \"bbVacXZ2Jk1krVZDPB4XAdDx8bFkTlD0RAd1rVZDrVYT10w+nxcUxul0yjFPkQ+H3qxWKxwdHaFc\\n\",\n       \"LsuUVe7qjNsi9HcxXIZzDKlb4XzCyWQiZYnFYpGsDeZSc1GrqoqXL1/KaVKtVqWUWa1WaDQaKBQK\\n\",\n       \"6Pf7Qt1f5bpWNTMpW5vNhrOzM3H9khb+hV/4BYlYJcZLS5PH40EqlQIAHB8fCxXLY5Y1bjQaFa0D\\n\",\n       \"m0IaTZkFx+aGOuV3331XBkl2u11sb2/j8PAQuVwOd+7cwcuXL5HNZmXeB61KzOoggcFRxXTChMNh\\n\",\n       \"afhY5wYCAaG7Cf01m03cunULjUYDDocDW1tbElAzmUxw8+ZNsWVRQ71YLJDJZGTMMo2zJFrcbjf8\\n\",\n       \"fr8I6ymBJc7MyNxMJiM3GrFlirRYc1NPAwB/9Vd/9drf/7VazPyCAAjiQG0EXdDUKjCSi116r9eD\\n\",\n       \"z+dDvV4XbJQNJZVkw+EQw+FQxjSwSSQZ0m63ZSYg0/UXiwUqlYpkdZByp0KPdXQ+n5dxCkRKKDxi\\n\",\n       \"gmir1RIyp9VqyQChVqslC4mjH6jKG41GKBaL2NzchMlkEhKl0+lcYuqoFmw0GjLtlWo2lhsUDjmd\\n\",\n       \"TgkPJyZP0wPnCc5mM9RqNcGsq9Wq3FgARB8D/Dg1dDweX+n7v1ZlhtfrhWEYQooAELaOXTU/dMJJ\\n\",\n       \"jL/iImcsFus6Cvu5g9AadHHsA1N66ARhtgSz6OiqoG6CyAfrRJpSiX+T4CEdTe8c/YgApAShPprJ\\n\",\n       \"QbwBer2ePD4TUCnx5ONejOHt9/uXyin2Fsy5YF0MnDe5xNEpKeX4NFL7JF2YcsoShcmjfr9fSCc2\\n\",\n       \"o2/p7AtXPp+HYRhyxIbDYclEowCJaAfhNuDH0tH5fI5kMimIw/b2tjRQ1WpVspNp8eGRzClPZPIo\\n\",\n       
\"pGfKZrVaRb1eFwz64OBA0BRS5P1+H3a7Haenp1Izq6oqxANd1s+fP8fBwYEsZir5eJOs12vs7+9L\\n\",\n       \"xCxxY07d0jQNtVpNfH4sWS6eIEQtyPA1m03k83kEg0H0+31sbGxIVDB9hKTMSbxwI6CICoCI+Dud\\n\",\n       \"jkSSES8fDoc4Pj6+0vd/rcqMbDYr0FupVMJisZAoAArKWZtNp1Msl0upNznfw2KxoFwuY3t7G51O\\n\",\n       \"RwgPzgcMh8Oye3NqKScmud1u+X0KkkwmEx48eIBcLifYbzabxYsXLwBAdtubN28CgCzKg4MDaJqG\\n\",\n       \"O3fuoNlsSt4xg84phmKaEWFETdPwxS9+EScnJ3LDWiwWiT5YLpdIpVKXBksyTejGjRuo1WryfiqV\\n\",\n       \"CrLZLBqNhhA0LpdLCCHe9BsbG5LrQadMLpeTGz6RSFyaheLz+YTc4qajKApu376N73znO6/9/V+r\\n\",\n       \"xUzT6Gg0ws2bN2W4JDFYhh4Sp2Xjx1RNqti+/OUvXzK1MvqKRyqd2Ryyzi+R5lLGupJkGQwGUrcz\\n\",\n       \"RsDtdiOVSkmjBUAyLKxWK+7fvy8qQE5TpaieTpLVaoVUKiUjlEmH53I50SdPJhMZLq+qKmKxmCQW\\n\",\n       \"UU3H8oE0Nhc4zar0U7KeZ+NnGAa8Xu8llpGqxI2NDQlUZNnH98cJASR8SPaQCn/d61otZgBCXfPD\\n\",\n       \"yufzuHHjBp49ewafzyeeO2ZA5PN53L17FwBwdHQkKZxMQcpkMhiNRkilUmg2mzJplWMOMpmMDLBk\\n\",\n       \"TobX6xUfXjQaxenpqZAZlFFWq1WxQtXrdWEuB4OBjJKgXgM437HL5bLkJNfrdRH3cN4IAJlv4vF4\\n\",\n       \"RILZ7/elxqcvkjoNUtMUP7GM0jQNP/jBD3Dz5k2h6UOhEEqlkuiiOV6OJxFnZTscDpRKJdGxkLpn\\n\",\n       \"w8dm9y/+4i+QSqWENudp9brXtVrMgUBARPjEkblTkDHL5/PQNE0cIz6fTySWsVgMzWYTiURCnNjR\\n\",\n       \"aFQICtLTi8UCkUhE/IYWiwVbW1vi8ubgRu7ioVBIbiSOLPP7/UKnMwaBijcOn+eQyovyUWK2Ozs7\\n\",\n       \"siBjsRhevnwpJ8Lt27elAWM5Qe9iLBZDLpdDOp0WMoXZdgyUofz0/v37MJlMQjYtl0sJPGezerG8\\n\",\n       \"YcgM2VE2pZTjEl+m129zcxM2m02mxfKEet3rWjWAF8MJg8Gg1GSr1QrxeFx2QNbJDBnkbBIuPIac\\n\",\n       \"0BVBOxFNmOPxWBIvKbfkUKCLORdMMGKICwdk0p1MfBqANKZer1coeEJfxKuj0SgcDofMGnG73dLA\\n\",\n       \"+f1+gREZpEisG8ClkW7vvfeeCPndbre4wTVNExSDNwazLjRNw8bGBnw+H9555x3E43FprAlRMp/D\\n\",\n       \"brcLle9yubC9vS2oEV//bDZDNpuVuYOBQEAkAa97XavFzN0DOC832u227GzEoLloGa/V6XRk5t1s\\n\",\n       \"NsNsNkMul5Nalzg05/Qx8YcKL5IEq9VKfk7pJ5NDya5R08Fxv/TDkV2j+P1iMpGiKKJEs9lsooku\\n\",\n       \"lUrC2AEQGahhGPL3uTPSB0mamTsrcI7NE1lgiUMTA3PnWPJwnBtDXwhB8vMDICZfMpcOh0PKHAbM\\n\",\n       
\"DIdDzGYzOX14IrBUet3rWpUZDodDZjAXCgUEAgGpTUmdcmcFIEQE4SjivdVqFTdu3JB8t9FohFAo\\n\",\n       \"JNQrmbbj42PJVpvP53jx4oWMLeOIhWw2i48//hg3b97EaDSSDp8RBxxaTxMudRgWiwW5XE6cJnQ1\\n\",\n       \"Ewmg/467f7ValezmRqMBn88nQ4esViseP34s4qOLcQesV5kNQudJOBzGixcvcOvWLamz1+s1KpUK\\n\",\n       \"7ty5AwAy4o2qOo4/I4O5s7ODDz/8UNAMEiiDwQA+nw8nJycolUrisKGc9nWva7UzD4dDxGIxgaIo\\n\",\n       \"0Kf+wul0iheQx6/H4xHBeSwWE3WXw+GQbAqyeszgIImQyWQkItbhcGBjYwN2ux2JREIym8fjMdLp\\n\",\n       \"tAyOZCNmt9vFrGqz2SRgnHnRrNdJPCiKIuTH9va2zMter9cSCAlAAiOpAGQCPnsIlgekqvk50Huo\\n\",\n       \"KMqlYHMK+4mY+Hw+wYqZthoIBLBYLC7NDySUmclkLvUlpOhJbxN14ed8letaLWZqhRl8clFj7HA4\\n\",\n       \"EA6HcffuXdnpvv71r+PFixfSXPEL59/d3t6WCazPnz+XLGLOiWbKPFVnHLfQ7/dlpAIDwxlGbjKZ\\n\",\n       \"EAgEkEgkhJpm+cPRYiyLdF0XmSjpcvrx3n33XcnQcLlciEQi2NnZkRqa7B6jCbi4xuMxVFUVCnw0\\n\",\n       \"GmFzc1OylcnIseS66CKnu4YL+otf/KIYIO7evQvDMGSHppyVtLzD4UAsFhMaezAYIJlMCmxK7+RV\\n\",\n       \"rmu3mDksnblubMbIVp2cnAiM9Du/8zvSsTscDmn0qG57+vSpTIn67Gc/i/V6Da/XK8J0UuZU1tG5\\n\",\n       \"zLwJDuBhPgZdK4qiIJ/PS6O2Wq1E6EQNM4MMiaIwXouumMePH0sJslgsUC6XxWHOHDdGDnBUMmN2\\n\",\n       \"6dEjRX94eCi0PN04F6WbDIqs1WoSx2u32/Hw4UOEw2EJnAQgWpXxeCz5cV6vF/P5HNVqVXZ9pqrS\\n\",\n       \"dMu5Kle5rtVipsKMNSE7eLJq/X5ffsaGZ7VaSQI+pZsAhOigOu3p06eCKFD1xWGYTqdThEntdhvD\\n\",\n       \"4RAAZNwC8V2GwLAhm81mWCwWwpaxGaVnzjAMcWdQ4wD8WFA1n89xcnIionqO/3358qUM4Wk0GjCb\\n\",\n       \"zQiHw7BardIQE5HhTTedTmVoD08aNop8LjZ1HBwfi8Uk3ovzCAHIeGbe5LRWsV6v1WoSKcxcDeqk\\n\",\n       \"r3Jdq8V8MZ3T6/XKFKaLugVCciaTSUYHZ7NZ6eAJ7VFZxgRQ1qs0krL+pD7jYs4Gm0WWOBe1yYw6\\n\",\n       \"YEnD2SuEyWaz2aVwFr/fL4IiLmafzyejzliLspxhOj5PJeBcGnt8fHxJa8zyhK+dsb4U2XMsBgNi\\n\",\n       \"FosFksmk4MK0hlE2qqqqSAYURREFIeE6mlXJ9nHYJwDs7OzITn6V61otZkJoJpMJ3/3ud8X/RmF7\\n\",\n       \"r9fD2dkZcrkcDMPAxx9/LJFcFOZTCMTjtlQqoVqtinlTURQR2p+enkodPRqN0Gw2pVkiccPxxWxu\\n\",\n       \"CHfVajVxKr98+VLeQygUEsr78PBQhj5SvEP9NTMwLBYLFosFjo6OJAjxyZMnUtPzPft8PokzsNvt\\n\",\n       
\"Yv7t9XrY399HvV5HLpeTkcI0InCQka7rODg4EPaQPj6iHMys43tyOp1y+i0WCxQKBei6jlarhXw+\\n\",\n       \"LyeRYRg4OjpCs9mUPLvXva5VpO03vvENAOeoxmKxQCqVwmg0QiAQkKHrmqbJh2i1WtHv9xGJROTI\\n\",\n       \"j8ViqFQqCAQCqNVqSCQSaLfbCIVCcLlcODk5kbhYHsG6rl9iz+je0HVdEoiAH4fQsAHlbl+r1aAo\\n\",\n       \"CrxeLwqFAnZ3d6EoClqtlkTvskzhjMLxeIzbt2+jUqkgmUyiWCxKvkWv1xOpJYfqkATh+7fZbPB6\\n\",\n       \"vYKIUIP86rNEIBDAeDyWnZgpTqxrecoBEKkrB3ayvGDqkdvtliE89EISdalWqxJa0+v1rjTT5Frt\\n\",\n       \"zMWIwIUAACAASURBVPzwmB3MLx+AfGDMauNR+vDhQ/T7fYTDYezs7GA6nUrGM0dGUNbInZb1KGWb\\n\",\n       \"TqcT0WhUnq9cLssOyFnWTBulmIfDNRmUyAZsY2MDg8EAlUoF3W5XalFOUiXURl1Fs9lEqVRCo9FA\\n\",\n       \"JpMR5u0nU/CZMLS1tSV2L6IZFBCREOFjhEIhSUelk5uqP5p5+dlQn00Eh8pCjm5jCcXanL0MCSwO\\n\",\n       \"PbrKda0WM+flLZdLsSExSYhlgt/vRzKZlKC+nZ0d+XtMsOe8kHg8LtJLkh1utxuZTEbqTSIjpVJJ\\n\",\n       \"glWYt8bSgNroiwlErG25Q1HYTgUelWqcnzKbzVCv1wWTdblcCIVCoi3e29sTXyEDYJgdnUwmJeiG\\n\",\n       \"C8nv9yMej0NRFESjUamhfT4fotGowHfUlXC8BCdUcb4KNSCksdmfMPOagemhUAiBQAAAZEIAMWcA\\n\",\n       \"gthc5bpWDCDLi9FohCdPniCVSiH3ag51pVKRBUlqmV8KVW8cTs6gFHb6xKnr9TpWqxVOT09hMplk\\n\",\n       \"5APrbe7ezNIg2cGGidOr6LrodDqIxWKCgrAWpdSUuzjw41xohh/+ZC5boVAQtIV5eazN2+02dnZ2\\n\",\n       \"UC6Xpf5l0OJyucSzZ89EZcfdlKpC4LwMInxIlKfT6eDo6Ag+nw+NRkPCKjkOIp/P4zOf+Yy42uv1\\n\",\n       \"upyGVPNRW16pVMTCdpXrWu3MTqcT8XgcgUBA7niyetytaGciAXDjxg2MRiPcv38fiqLA7XZjc3NT\\n\",\n       \"sGoGJJJ4YNYzmUZKNPln1tAckh4IBODz+SS3gySJw+GQNNJqtQqv1yu6aVqskskkFosFEokEotEo\\n\",\n       \"7t69KzQ2d0/CgNRHc+fmc1MvQgMqJ7/y7xK9GY/HsFgsYp5lWTKZTATF4HvkwCOyp7RjMcaLMxXp\\n\",\n       \"qzQMA4qiIJVK4ebNmwiHw6LRps0rFApdOdHoWjWAX//612V+HTHe4XAoi6zT6UiUQCQSkWiqvb09\\n\",\n       \"1Go1mXGiqqrY+y8SFVxsbOyIWTN+gKybqqoCyVFcREvRYDCQnTeTyUiSEgNROGbhohEAgGRyAJCd\\n\",\n       \"0zAMaQY5NJMQGLXDtExxRATHyZGJYyQYE/07nY5kWVA3wosZ1tvb26L7oD2MiUoM2Dk8PJSGk0QL\\n\",\n       \"4Tev14tWq4V4PI6PP/4Y8XhcXN/f+ta33s40Ac4bQCIFAKRBymQyUmb4/X7k83mZiVcqlcTQWiqV\\n\",\n       
\"EA6H8cknn+DGjRsoFotSa2cyGSiKgv39faTTaQDnx+/t27dxenoq5AZhOWZZ+Hw+CQgkKbO1tYVW\\n\",\n       \"qyWlAC1QnU5HRDt0ceTzefj9fvR6PXGJ0LKfyWTk98vlMhwOhwiRaFz1+/1CdDBat1gsyk5JzyLt\\n\",\n       \"WERyVFXFyckJdnZ2pNZutVoCUzJsZz6fS7AkMX6WKCx5Op2OSGbZfHP8BU/K4XCIv/3bv73S93+t\\n\",\n       \"yox33nlHpJN0h4zHY/j9foGsCoUCer0ebDYb0uk0bt26JfFRzH4gW8WOu9Vqwel0Qtd1Cd02m83i\\n\",\n       \"FOEX7HA4sLe3h0wmI/44nnyRSETkqUyMZ0onSRiOEKMYirU6GTaeDBw5RnsTUzQzmYzQ0ER06Ptj\\n\",\n       \"1pvZbJbH4/Pruo7T01OEQiE0Gg0RVHHXZ7QBm1SKmkqlkliuqtUqhsOhmCMo2OJ74aINh8Oiw7bZ\\n\",\n       \"bIKQ8Ma+ynWtFvNoNILdbsft27eh67qwcMPhEOPxGNvb24hGo8hms/L7T58+leOWOmEmcZIgoAOE\\n\",\n       \"WDM1vIvFQgaqr1Yr9Ho9Yc04IZZ5yPP5HLFYDGazGVtbW0ilUqhWq3A4HJLyQ2EP6WxqJGazmexy\\n\",\n       \"zMpgXVutVkWmms/n5fgn0cL4r1AohHQ6DV3XZUfmPwyQ6XQ6yGazItzXdR3j8VgGVM7nc0SjUTQa\\n\",\n       \"Dek7mPO8sbEhs17G47GMcrtYj5Oh5fPSNEGChWaI172uVZlBRVs+n0ckEkEwGJRwQIvFglarJeId\\n\",\n       \"BsL4/X4oiiINXzqdFpaNkBaF+0RKmBxKZzUdHABET3FRNRYIBGT4OaOumOcxm83EgkVVWzQalbgv\\n\",\n       \"wnM0oNIdTSE/cE5X7+3twePxiNE0Go3KDcJkUgASGH4xuszhcAhuTNNsPB7H9vY2FEUR7yFPvAcP\\n\",\n       \"HiCfzws+bLPZRDDFBZpOpyVEh2OMiV+rqioumKOjI5mJ8tZpcuHiLLrBYIBEIiGEAcNRqBpjorvd\\n\",\n       \"bsfe3p4sFsZ70d3N0b4cEQGcL9ZEIiGa4VQqJdpn6ik2Nzehqira7baEG3JSLMkFm80m7g9KUFVV\\n\",\n       \"lUDGdDqNjY0NjMdjaJom6UO1Wk2GaVJcRLZuPB5L4A2tVByswxKHo9CIaVOboaoqRqMRNE0T/Jnl\\n\",\n       \"xI0bNyTqS1VViXFwOBzi2GE+CWtzNqScB1Ov10WZyJtmPp8jm83C5XIhHA7LHO/Xva7Vzsxhj5qm\\n\",\n       \"SaPW6/XEDa2qKnK5nEBrxDt7vR7u3bsnCjRFUfDkyRMoioJisSiDdZj1fHp6eilln/kUvV5P/j99\\n\",\n       \"dX6/H/v7+/D5fMjlcgLV8Yter9c4ODiQm4KZ0C9fvpR6neo1UsqcPciprjabTY7y1Wol+dOcsnV2\\n\",\n       \"doYHDx5IrV8qleByuUTUVCgUYDKZkE6nRS7rcrlQLpehqqroNfx+PwqFguiQL0Z8UYlHRSFRDWpX\\n\",\n       \"FEURBKZcLosYi/R4PB6/smruWkFz3/rWt8QFTbiL+RX0nGmahmaziXA4LEA9gX6OSqP4vN1uIxqN\\n\",\n       \"otvtioictTHtV/T/EaMmKzgejy/Nhma9SHiK0BTDvinKobOExA71HhwqxKaJ8B0AoYh5IwMQRo+D\\n\",\n       
\"KtPptLB1hmHI9FWOgCAuzc/m/2fvTWIjTdMzsScYjGDs+75zX5JVWdndVdXVDbUgoaUWIMD2QbB9\\n\",\n       \"MDAwfPPBgiEImjnqMjB8kQUddDIaAx0MDGBpZB1aLVVD6pbVKk11rZlkMrnFvm+MIIOMYGw+sJ6n\\n\",\n       \"gpZkA0nMtETUDzQ6i5kMBuP//u973+d9Fopt+/2+DBOtVqvkaGyKWdOz5LJYLHC5XHjx4gVWVlYU\\n\",\n       \"mtTtdtFsNnVCVatVxGIxHBwciLvt8/nw27/9219xMwAo8anb7SoMnuy3RTvWwWCg3D1292z25vM5\\n\",\n       \"jEajUA2WKhSeLvoQkzBDtl2/30e32xW7jg9Et9tFvV4Xx5pQFxtDAKrHFw1VCGfRQ48sOJqJ03OZ\\n\",\n       \"OyJPB4bVk5dC+ii1jFSvUE3CYB3ymLvdrnBu/u6cYM7nc8TjcSlISB0tFAryZl5aWpJXHh1Tyf/m\\n\",\n       \"58TPjIkGvH8PuR5VmcH8PRqJs2FhhgdZYAyeoXjVYrFgPp+jUqnAbrfj+PhYjSJwN13joq/X6+L/\\n\",\n       \"slHkhOvq6koEnXK5rHQnBkESCru5uZHzktVqVWaJ2+1Gs9kUt5lWYTRx5OIhlEfiExvMRqOBJ0+e\\n\",\n       \"4OTkRLs/3z+xbFJj2fhS7sXFvyhwYIlGSI+BQp988sk9X7nFPER6WxPbbrVaeP/99+FyuTQQms/n\\n\",\n       \"sFqtqFQqMmpcWlr6ajEvXtT8+Xw+PH/+XJozjk37/b4ytdfW1mRgyDhhGhXm83nFJdAEnMd6KBRC\\n\",\n       \"uVyW2XYoFFLOx3Q6lT6QtNJUKiWnHrPZrOO01WrJ5Pvk5EQDCL/fL7NCngCMKO73+9jY2MDl5SXy\\n\",\n       \"+TxWV1e1E/d6PU3m6KBEeufFxQW++c1vqoyhRjKXyyko02AwCJEYDAYapWezWbkmVSoVGI1G+ekF\\n\",\n       \"AgGMRiMpdmiQw5OBGDRtvlKpFLLZrJxO+/0+vF6v3J9yudyD7v+jKjOq1ap25EQiIaql2WyW0plj\\n\",\n       \"3eFwiFgsJk0eFRtLS0vY3d2VDu/y8hLj8VhwV6FQ0FjZaDQqKm19fV0DjpubG43D6VZP7JY3mgwx\\n\",\n       \"xjqQgEPOBkWp1CNeX1/j3XfflQMpANXtVG3kcjk9SFdXV0ilUmK9MQAoEAig1Wqh1WpJSsWfSTst\\n\",\n       \"p9OJTqej7JHz83OsrKyIXcif53A4pG+Mx+MajcdiMXQ6HfHAiYTQ0YibBi3PMpkMEokEvv71rz/o\\n\",\n       \"/j+qnZlukjQxJG7sdDrRbDYRCASwtLQkM+2vf/3r+L3f+z1873vfQ7/f140ol8uS0FNizykaSUP5\\n\",\n       \"fB7xeByJROIeMZ5DlWAwKDUGH6ZKpQIA4gHT0jYYDMpmgLU7c1BWV1fRbDZFZN/b25NaejabSVlN\\n\",\n       \"3jWZd6SbMkKZSQDME+dYvFqtIhKJSKS6uroqeiktgClrItxJJCIajaJSqWhgwkGIwWDAzs6OamHW\\n\",\n       \"/SzFrFarBjqJRAIGgwEul+vBCa2PajFzxEvcM5FI4ODgQAhHtVpFIpGQWvmv//qvcXFxoQV0e3ur\\n\",\n       \"3WM+n+OnP/0p1tfXtZsAdwuxUCiIRHR+fq5FzmleKpVS7shkMpFsi7vxaDQSsYcppcxFYYxEsVgU\\n\",\n       
\"ZmuxWPRQnZ2dod/va6o3GAyEP7Nc4I5Nf2oy05hJmM/nRS6irGtlZUU0VJKhWq0WvF6v8hHfeOMN\\n\",\n       \"jEYjpNNpnJ6e4ujoCH6/X0Y6FxcX0lMeHx9rgbMBzGazWF1dRbVaVT/A/gbAgx2NHlWZ4Xa7EQgE\\n\",\n       \"7imfGQlGEj1r0EQigZubGzx79kxNT7fbRTweV5fPiRv9nUnhpIcdWWNckGTXkWhEwxia0jDPLxQK\\n\",\n       \"yRaBDkQczpAjQggNuPPQoxuR0+nE+vq6kBgS6jOZjDzq7HY77Ha7FjEpoNQ0kqq5vLyMZDKpBpY+\\n\",\n       \"HlarVTAdhaypVEqec8+fP4fdbleNHQ6HMRwOJd/qdDry0qPRIjNlaJDDptTj8Sj+Ymtr60H3/1Et\\n\",\n       \"5larJaiMHyitZ6m8praOuwCJ4TQ+zGaz99AQBlAyzIfjYZLyLy8vUSqVhBww+4OjZO5ANzc3aDQa\\n\",\n       \"Sjbl2NhoNKq2pvi2XC7j/Pz8nkh2Mpng6OgIo9EI5XJZkn2qoOlHxwg1ohHX19figFBGRYiNjSQN\\n\",\n       \"ZgaDgWLTFk84+tqRA12r1XQ63dzcKFGKjfJ0OkW321XaADeTfr8v1hwbZnqL3N7e4qc//emD7v+j\\n\",\n       \"KjOoPWMEwng8VvY0rVdpMUA+MY1ZKFUymUzI5/OiQTKhlAMCaveY/+f1etUI0nSQuyfVF5ubm+Jl\\n\",\n       \"cNTLJCzu2g6HQ1Fku7u7SKfTODs7Uz27tLSEJ0+eyDO63+8jGo1KEU03ImZ4MzWAUjCeFJRD8YRg\\n\",\n       \"zHEkEpG7Pl2NAoGAfEKAuxOCJwdfkyHwlHuxTn/y5Il6Ao76Kczl570IGXo8Huzv7+PHP/7xa9//\\n\",\n       \"R7Uzc9pGT2AAWFtbg8fjkT6PcBERD5YfVBVbLBY8ffpU3nHtdhvxeFxTMgo7eYTTrd7hcEhEy12d\\n\",\n       \"/A36qBEXpoKbRCP6UrCpy2azaDabKl+i0SgcDodISdPpFHt7e+Is076WCpJWqyW5FnV4tOWiiyjp\\n\",\n       \"q6PRSPKli4sLNceTyQTRaFTWvNPpFKurq8LAl5aWZDlG/2W/3w+XywWTySTlDADxOOh5R19nTkBD\\n\",\n       \"oZAcjh5yParFTMcdyp3MZjNqtZoMSrrdriTxjUYDn3/+uaZ9i9a35D189NFHikVg3Tyfz/UAMAeQ\\n\",\n       \"WHa73ZYBOTkf3InYWLHM4f/oRkRVNHdS2oItLS3p4by4uJAq/Pj4WOWRy+XS1I6EKe5+zEAslUqa\\n\",\n       \"kLInWFSqMHyHsCLLBv6OzDShjIyjfzbOzWYTJycnmprm83n5i9Btlb8bdZS8Z/V6HdfX11+5gC5e\\n\",\n       \"HMlS2UAVBg2z6dc8n8+xvr6ORqOBjz/+WDesXq/fI/CQSUbrVyYnEbdttVqCxFiDz+dznJ2dKU11\\n\",\n       \"eXlZN5mqEu5YbC65yMm9uL29lTEKPZhLpZJsb0ejEarVKgDcy22h1xxr0o8++gjZbFZlwnw+R61W\\n\",\n       \"k2av0+losPLixYt7UivgrqzodrsSLZDeenJycm+BM6+EZQNJ+q1WS/ZjRGpqtRqazabIXzS0oRPr\\n\",\n       \"Q65HtZiZn0FF9Xw+1wIlr4KKB8qQ1tbWMBgMxNug1eyibJ6dPxdRoVDQkUj4jP5qREuoEuGxCkD2\\n\",\n       
\"WmSmkQtxe3t7L/GVpwPN0AEoQ7vb7crl/vLyEs1mU80nQ4goHiDxiTtot9vVJJFHPrPF6RNCxTkb\\n\",\n       \"aUKXbK5Zo3Msv7KyIq4I7c4MBoM+Dy5QnobcTMhbuby8xNramsI4H3I9qgbQ7/dLjUweMXVxhI48\\n\",\n       \"Ho9y8DweD3Z2dhRvRhGq1+tFMBhEOBzG6empMkuYyrq8vCz4y+12C9q7uLhANBrF5eUlMpmM7KYy\\n\",\n       \"X8QQEwlwOp3ySaZ6gw9SMpmU+/ze3h4ODw8Ri8XQarWwv7+vBNlAIACn04l6vS4FNAlKNpsNLpcL\\n\",\n       \"kUgEVqsVn332GWKx2L1m1eFwCFKjeJX1Kwc+a2tr4o6k02nZ/NJWN51Oi4uytraGXq+H1dVVQYGt\\n\",\n       \"VguxWAyJRAKtVgubm5sK2mw0GvjlX/5l/OAHP8B0OsXOzg6cTid+9rOfvfb9f1QU0N/8zd/UsXh8\\n\",\n       \"fCzSzfb2tthlhJii0aiErbSNslgs6Ha7ODk5wRtvvCH3916vB6fTKUUF4xs4GeQRzOOVmdCLlrcM\\n\",\n       \"vATumtLDw0OF5JDIw9oagOBE4tRkwIXDYRgMBlSrVQXIE4fmQ8UkVKYGXF1dYX19XTUwgzY3Nzfx\\n\",\n       \"6tUrAHcPAO26mHf42Wef4c033xTDr1AoyMe52WwikUjA4/Gg2+1qOEKrhOPjYyEs9KxmrszKyopE\\n\",\n       \"wjSlcTgcKBaL+NM//dOv1NnA3Q3xer2o1+vaBb7xjW8gFouJMONwOMT9/Y3f+A380R/9ETKZjPSD\\n\",\n       \"dAl1OBz4pV/6JTQaDaTTaVl/XV1dIZPJoFqtSnXSbrdhMBg0uiZhiOUKUQh6ylmtVuzv74tFR7UG\\n\",\n       \"c0TsdrvKH7fbjUqlIppmJBJBv9/Ht7/9bZVPe3t7aDQaKj04zGFgD2mYVHzf3NzIsZ7QGkk/HH0z\\n\",\n       \"u48PyHQ6xZMnT5DNZhEKhTAej7G/v696mFNPcrf39/cxmUzkgBoOh+9lKrrdbsnFVlZWpCd8yPWo\\n\",\n       \"FjO5zIwF3t3dxYsXLwDcBcdUq1WkUikFNf7kJz/BYDBAuVyWf1s8Hhd984MPPkA8Hsfx8TE2NzdF\\n\",\n       \"MG80GlJG0K6KTpfJZBLxeBxnZ2dihxGB4Eh7Mpng4OBARueVSgWxWEw1LP2Xy+WyXIdYw5+cnGB5\\n\",\n       \"eRn5fF5EqVKphH6/j62tLdW1FxcXop7Sl5puRByGMJCSRzuFCLTK+uSTT7C+vq6fFw6H0Wq19Bof\\n\",\n       \"fPCB8kqIQbPefv78Ofb29hQiz82CfBDW+7RmuLq6wqeffvqg+/+oGkBygknYYe3IuAT6FlMpXC6X\\n\",\n       \"JRh9++234fP50O/3FZHAhi7zRQb1fD6X2xH1fLQIsNlseOedd7CysqLIXr/fj3Q6LX0crQSMRiMy\\n\",\n       \"mYxsb2liyAxv5gASv6bmkCN2NqSEE5lexXzCWq2GeDwuFUy/31cAJiVP8XhcwfTsNdLptAYpAFTy\\n\",\n       \"8DObTCYaxbNnYELVovnhZDLB22+/rTSAxRwZpnft7+/DZrPJWNLr9crf+XWvR7WYF6GinZ0dkdnJ\\n\",\n       \"XlsUktLIm01To9EQPTMYDMLlcmFjYwNerxflclnSIGZkL+r+GEhDp3qGXzLugTtWKBQCAJUBxF23\\n\",\n       
\"t7dxc3Mjh3kA2NzchMlkkscdx9qMKeNr0PJg0Td6e3tbam9yJ7gzDgYDCW/JPQG+FAMzr5ClCNU6\\n\",\n       \"VKDQyZQIkN/vFxUAgEze2+22ZGNUyVCixaRX4C5dIBqNKhb5IdejWszU9Q2HQ3S7XQCQEw9H0Pl8\\n\",\n       \"Hs1m8x5mTPiJ/hOExg4PD3UzFj3VOLXjpI96PUr4J5OJyoxFI0TyIsh+I4Zcr9dF9KGmrtFoiFvB\\n\",\n       \"QQ+taPv9vqBE4tSE9MijJixpNBoVIsQ0WYpKyQOhAIGnGndhui4tfpZseBkeNB6Ppf0zmUxy/a/V\\n\",\n       \"alrAnMz6/X4hOuRxUK1Dn5KHXI+qZp7NZojFYlhaWhIzK5lMIplMCsyPxWJSiAwGAxQKBZkaUmtH\\n\",\n       \"VyIqV2hGSF8KAHLuIRY9mUyws7Mjgj8HEIT5iFfzwfB6vYjFYsK+DQYDUqmUSEgXFxfY2NgAADV9\\n\",\n       \"5CEDwJtvvimVCL9G+RLH6E+fPhVRiKlR0WhUZQgDNjnVo3qbO/Y3vvENmEwmlUp+v19ezIt2B4QJ\\n\",\n       \"R6MRIpGIpF38nWw2GzqdjhTewWBQJRAN14PBIFZWVvCTn/zkte//o1rMHDRwFEuzlV6vJ+yWH2Sr\\n\",\n       \"1YLT6VSOc7ValekKhy6cBtIqi1atXPz0U6vVarJ2pUcG3Y2CwaBU3STgU19ITsbFxYVqzmKxiL29\\n\",\n       \"PUSjUTSbTXQ6HY2cS6WSsPCjoyMkEgkR9fk7U33On0FyUT6fh9vtRi6Xg9lsvmeHxYkhR/acmLKW\\n\",\n       \"pyNppVJRnc/MbWoAyRFn6USzce7ALENoTMnXpUSLtIKHXI+qzOAO4fP5YLFYxFnmxUWczWYlvmSZ\\n\",\n       \"QMNvp9OJ4+NjSYAsFosolNw1CS/x6GZQZbvdlvFKuVxGJpPReBmA6k5OIGmE4nA4sLa2hkAggJ2d\\n\",\n       \"HblvcrLXbDblnETrWtapXDA2mw1ra2twu933vOn4bzY3N3F1dSVCUqVSuccZ4fCkWCzKe4/1Nf07\\n\",\n       \"GHVBshaNGInBEyun2ICDGrfbjdlsJhnXeDyWEIBTS+BfIDnfYDAkDQbDXxkMhgODwfDCYDD8T198\\n\",\n       \"3WcwGP7SYDAcGwyGvzAYDJ6F7/k3BoPhxGAwHBkMhl/9p16bdSMXCWMaOE6mm9HGxgbMZjPi8Tia\\n\",\n       \"zaZU1CsrKygUCjIJNxqN0sExooDUULLhms0mQqGQpnckLXE34nHKhFhi2H6/H4VCQcw1RjkQ8SiX\\n\",\n       \"y4qYoPKZBJ/JZCLiPyeH3A0HgwEymYyaPyqo6WfBppK+cBxDE6HIZDKSRxEPJkGJ7kqLKawcXZM1\\n\",\n       \"SPX7YlyF1WrV+yX2TIkaHzqLxSIPwNe9fh5lxhjA/zyfzz81GAwOAB8ZDIa/BPDfA/jL+Xz+vxoM\\n\",\n       \"ht8B8K8B/GuDwbAH4L8BsAcgDuB9g8GwNZ/P/4Ez9e3tLYLBIFqtlhZbt9tFIpG4F25JP+VOpyNb\\n\",\n       \"K4L3Ho8Hp6enagRJdKcZIgBZurKB5Gv0+31cXV0hkUiorrTb7SgUCoLBhsOhFl0kEhEPmpO08/Nz\\n\",\n       \"iU/J3+DAhikAsVhM7krMB6R1gslkQqlUkqSKooJSqYR2u60UAS484M6yy2w2o9FooFwuYzweCyMm\\n\",\n       
\"GkS3fe7MfF+xWAzT6RS5XE7OT2woAQiqm06nsugisnJ6eipP6PF4rGDM173+s+/M8/m8Np/PP/3i\\n\",\n       \"z1cAXuJukf4XAP7dF//s3wH4r774838J4P+Yz+fj+XyeA3AK4J1/7LUX+QXlclnNVq/XU11IMSVN\\n\",\n       \"FXmM0juj2WwC+JIMdHNzI8YdvSqYFEXkhMoU7py3t7caFpDHSx+5+XwOp9Op98t/Wy6XNf2bTqf6\\n\",\n       \"b6YxcYGQj8EdkdRPADqBWNfyAer1enIkqtVqKqn4sNLei/YAJFXd3NyItGSxWGS2SBah2+0W25Bw\\n\",\n       \"IdESfjbAlwY67F/m87mwa46/6dH8kOvn2gAaDIYMgGcA/h5AeD6fk9BaBxD+4s8xAIsu1CXcLf5/\\n\",\n       \"cLGLn8/n2NzcxHA4RCqVklKZKVJer1e7IXcLhpmT9UaOMF8zEAjA4XDg7OxMcB71hhyEkKlntVrl\\n\",\n       \"HXd5eYlvfvObikQAoJExSe7hcFjDA7PZjNXVVSEHPp9PO1skEkEwGES329W4mp4diyVBMBgULk5H\\n\",\n       \"f1rzUoKVy+WQSCRUgvBUYh08m82wtbUlvjWpqUQdaEVLC2GauZB4tLGxAbvdLqNELnRuBGazGcFg\\n\",\n       \"EO12W7503/rWt/BXf/VXr72efm4N4Bclxv8J4Dfn8/nl4t/N70ZP/18MqH/079jscSchl8LhcMjQ\\n\",\n       \"hOR9h8MhMxhmBrbbbfj9foTDYXXwhJNYlgSDQeHJ5DuTgE+iPnP3GNLTbrflSkSXJZqfE9NmZBux\\n\",\n       \"6FqtphKCBB5yRAiPAVDOISeDoVAIhUJBJQRppDy1eMq4XC7t9AaDQYGgjICgfnE4HEoqRqEAHZWG\\n\",\n       \"wyFms5lI/Uzgury8VMDRzc0NKpWKsGbGdHD3rtVqmnbSiuF1r5/LYjYYDCbcLeQ/ms/n/+GLL9cN\\n\",\n       \"BkPki7+PAmh88fUygOTCtye++No/uD7++GP8+Mc/xt///d/j/PxcNSvrvFQqpVByGmm/fPlSdSZz\\n\",\n       \"TjhsKJfLkuRTjcGdvVarYT6f66FhA+V0OpHJZNDtdpFMJrVIybdgI0hZFXcvLm6n06lcwevra+22\\n\",\n       \"0+kU+Xwefr9frzebzTRtI/OOMq/FqGGiO1TceL1e6RrJX2b5wBRV2oqxZuaOTliNfBEOZejLwclh\\n\",\n       \"s9mUH14ymRRzLp/PC4NnQ3h4eIj3338fZ2dnD1pXPw80wwDgfwdwOJ/P/7eFv/q/APyrL/78rwD8\\n\",\n       \"h4Wv/7cGg8FsMBhWAWwC+I//2Gt/97vfxXe/+1289957WF9fh9ls1iJi88JO2mazyRibPAfu1vRi\\n\",\n       \"406zmHe9tbWlv6f1AO2uAGiqxbJhEYaLRqMAIJsANmyEqzj2Zh0diUQ0YGF+YK1WE8JC+wE6CwEQ\\n\",\n       \"8uL1esVUYxwFvfZ4wiya5ZAlxwUKQDAe/ZoXp30MBeLCpIKdOYRut1tYP/+O6AYHKhcXF0in03jj\\n\",\n       \"jTfwne98B0+fPn3Q2vp51MzfBvDfAfjcYDB88sXX/g2A/wXAvzcYDP8DgByA/xoA5vP5ocFg+PcA\\n\",\n       \"DgFMAPyP83+ChE2CD8lCzL6LxWI4Pj7WzVp06WT07qIxCy24SL1kWVEqlaRqZijPs2fP5BTE8oaW\\n\",\n       
\"VUajEcFgUDYATLciU44LL5vNIhaLoVgsyqX/9vYWR0dH2rEZJ7G0tAS73Y6joyNsbW3BaDSi3++j\\n\",\n       \"0WgIFiOB//LyUlyLdrstLnGv1xNJ6fb2Fp1OR/TXUqmkB+Gzzz7D6uoqyuUyms2mpojkbpBkJGAf\\n\",\n       \"twAAIABJREFUXy6X0Wq1RA01Go3I5XLY2dnRiJoUVZ6CPp9PbMF+v4+VlRV89NFHD1pY/9kX83w+\\n\",\n       \"/7/xT58I3/0nvuffAvi3/3+vzc6aRnzdblflBN2GLBYL/H6/HOpZNnzxc9DpdODxeKTTMxgMSlIl\\n\",\n       \"2M9BwKKUyOFwCJpjfVqtVvHkyRMpM2j8wikep2wulwu5XE71Mz3xms2mLGd5qjCmjNNDmpFT3Gq3\\n\",\n       \"2xWISSOWZrMppTb9NJjbMh6P4XQ6sb29jfPzc/UX19fXSpCiqQ6RGVokMJNwOp2K9ERONi1sGTFM\\n\",\n       \"l9HFnZxCXkJ9RGNe93pUSpMvMuRgt9txcnIiHJdcB8JmDKW5uLhALpfDG2+8IViMOrV4PI6joyO8\\n\",\n       \"8cYb8mVjx06Xei5Gj8cjKI35KXTr4QImf6LVakkBs7y8jFQqhVKppHqUjLJ2u63kKdqF0dHIbDbj\\n\",\n       \"9PRU3iDX19f3FBvMRqGNVy6XwzvvvHNPZZ7NZrGzs4OjoyNJwTKZjFKnaMtFBIh+dhaLBePxGKPR\\n\",\n       \"CLFYTIgNfTQ4STw4OBAcaLPZcHZ2hlQqJToscxqbzabq7ouLC/z+7//+V2bjACT5pwMPdydip/1+\\n\",\n       \"X9Mr8mrJfONCDYVCwllJPL+9vdUOmM/ncXFxcU+Kz0aRr8shS7FYxM3NjRYrORg87imGpZXA5eUl\\n\",\n       \"Tk9PFVlMKI/KZUZEMMlpNpvptQAIxisUCvKKBiAfOT4UXJw8SQjd1Wo1ABDfYjqdypKApCsGeZJF\\n\",\n       \"yH/Pet5qtUroenV1hYuLC+HHZN1xQ6G9AAn95G+87vWoFrPT6YTT6YTb7QYADS0WrQO4s3HETM4F\\n\",\n       \"ORaTyQT9fl8LmiJQk8mE29tbPH36VFl2dDsKBALaOROJBCwWi2J6rVYrdnd3MZ1OJUsKhUK4ublB\\n\",\n       \"sViUupkB7NTVUXVNSyxi4LSoJRLAnZtj7mAwiF/7tV9Tbgo1f4twGxtg4O4B4AApnU4jEAiIuE/H\\n\",\n       \"VPKSKcliY8vmjs0o4xy4oMPhMJaWlhSLlk6n5S1HPz3SUxfzBV/3elSsOTLM+EFarVa8ePECb7/9\\n\",\n       \"tuq509NT3N7eIhQKCRajYplw1+3trVwtg8HgPTfNSqWi0oUBkJTtLy0toVQqCYWgK+bh4aF+PgDk\\n\",\n       \"cjkd6yyLzs7ONP3jjSUXmqPpVqslGI0m3zzaiV3P53P8+Z//uXR9jL148eLFvTSoRYst7qyLHnZv\\n\",\n       \"vvkmCoUCCoWCnPTtdjtyuZyQELIQ6SGXyWTkMc1BD6eshOqonVxeXsbBwYEyWtrt9r88aO4/5cVd\\n\",\n       \"mbvd5eWlrJ84cuYY1e12i1REz2LyoGnFCtx14fSlowplNptJ0c1deXl5GePxGLPZTCgIEQ42eTRs\\n\",\n       \"XPTR6Ha7oqNykdJmy2Kx6Huur6+VOEvaJ49yOp1SGNtsNjXm9ng8aLVaWF1dVRA8ADWCTKdiL8Df\\n\",\n       
\"bdFnhEw4CgQikYiwZdolENlYWlrS5HUx74VOo6Tccgp4fX2NZrMpIfFDrke1mBknzKxqZpBw5yTX\\n\",\n       \"FgDy+TysViuq1apyOrgQer2ejm3CdoVCAcPhEMfHx3C5XFJqAFB9SmUIxavdbleK7H6/r52Xk0Bq\\n\",\n       \"5yKRiFQZtBXgzsdGtV6vo1AoKHCI3G1yT25ubnB+fi67A9bUDMM8PT2VmQtrfnIkWOIQ3+bwg2aI\\n\",\n       \"JFkxqKfX66kWprspkRUOkF69egWTyYRGo6EThaofg8GghpXcED5YD7ke1WJ2OBziBOfzeeGu1AXS\\n\",\n       \"9GQ0Ggle4k4VDofh8Xi0eMnl4BFLUSZdf0iSYRAjF+vy8jLS6TS8Xi/W1tbkl8yGyWKxiGnGq1qt\\n\",\n       \"qn5krC8XOuvpRCIBv9+PQCCgkEp6dxDm293dBQDEYjFRTVnDklttNps1+OFAhYobUmG50LhTj8dj\\n\",\n       \"xGIxTCaTe/0IbcrYQ7BkMxgMgjvj8biExDRD56kyHA7lnE/W4kOuR1UzM7z94uICm5ubiuH1+/2o\\n\",\n       \"VCqYTqdIpVJoNpuw2+1477338Ad/8Af42te+pskWiTlWqxXvvPMOXr58qeQmkmbi8TgajQYSiYQU\\n\",\n       \"GJwiUgnOCDHgDk1Ip9Pq3OfzOdLptP4tGyaqyvkQsflivQncuSP1+32Zs7hcLnFN2PRdX1+LTGUy\\n\",\n       \"mfQg0laWkWxUquzv7wv3ZV633++XPS/9OYLBIM7OzmSzsLu7i7OzM7mL0v3f5XJpgDIajeRnQoUL\\n\",\n       \"J4hMAbPZbAiFQjg6OnrQ/X9Ui/no6Eg7R7lcxurqKlqtFoxGo3BncpgHgwFOTk6Qy+VQLpfV/Fmt\\n\",\n       \"VuTzeUQiERwdHaHZbKJer8s7jTed9TBrWDYy7Mi5OzmdTlSrVdWGtCe4vr6Wsz+TSVl78uexdiec\\n\",\n       \"yAUN3E07vV6vXqPT6eDs7ExpVUQx+v0+2u226m9yt/v9vtQkLKHoukQHVRKhaJyeTqfFua5UKjrp\\n\",\n       \"Go2GBL38jIvFItbX1++VVxzukILL5pnQX+6rtKkvL2aE0FOYQweaYpfLZfEFvF4v3n//fezv7+P2\\n\",\n       \"9lYeauTlWq1WHB4eaqoWiURgNBqRSqXQarU0FTMYDHC73VJTkHfALp/iTvJ22RBdXl7q/3u9ntyF\\n\",\n       \"5vM5otGodtbRaIRgMCgoiwoWckIoOuj3+0IZer2eFmG/39e42mKxqPSgb7PT6ZSZ+WAwuOdtPRwO\\n\",\n       \"Bc8xQZZU03Q6rfE98CWSxJKMfBKbzSYHVd4jytGKxaI+AzIDH3I9qsVMLHaxM+aImLUa67PF3OdF\\n\",\n       \"X2fipotWUoxNYBLpdDqVdQEtBBZTT7lAyAF2u91yx2RmCJELvk9GCy+OsTn0WST+cFReq9VU1y7K\\n\",\n       \"mkjr5DSQfQObWQAKAaIFAut5ci46nY5EBPzdKO8iYYmRGre3tzAYDEry4i7carWUo0jvjX6/j2q1\\n\",\n       \"Kq9sDrRcLpfEwA+5HtVi5k5jNBrlzENne+KdrHsZWcAPlz4Qi65DjEQYjUbadbl7D4dDRKNRrK6u\\n\",\n       \"isLJGjcYDAIAXC6X3PLZ0NEAkQ+IyWTC6uqqyoVFqigHDMRrWTIBEDne5/NpQZCFRyd9vj4d+Umm\\n\",\n       
\"L5VKGqhwcETIEcC9FAEmEbDfMBgM8Pl8Ut6QGMXGjzs7SxZyqAkjUg5mNpsRCoXUN9BQ5yHXo6qZ\\n\",\n       \"uZi4IGgcSEvaXC4nWT4XSTKZRDgcFkIxn88VDE/bK07alpeXxfaiTwSRDx6vS0tLKg04vaMqhLnY\\n\",\n       \"XNDkKtCrgm5FVH4Ph0NhuhcXF+IQB4NB1ZnkYBPHpcqFtlxkxgF32dSJRAKZL+LNVlZW1Jh1u12k\\n\",\n       \"02nVwmx6J5OJmkFyukejkax70+m0jCdZKrHBY0m0u7srYQSnhBQMn56eIpFIwOVyYX9/H++///5r\\n\",\n       \"3/9HtZhJPTQajRpgEB1oNptCMcg5IPYM3CEhxKdbrZaQER6n3DkWw3rICiNaQNnT1dWVOvv19XVp\\n\",\n       \"9shj5uJvt9vaDdlw3d7eagfjrk5sl2YqwWBQNrvAndHMq1ev4PV6hZ3zQbq8vMTV1ZVQg3K5jHK5\\n\",\n       \"jGQyqabs1atXKj2m0ykODg6wv7+Pg4MDRb7lcjmpz6mPJFpRqVRkcMOypdFo3HNIpQSt3+/rpGs2\\n\",\n       \"m2IQjkajB3kzA4+szCAHgEcaR7A0c6Fae7Hmo2cwFRgANDRgKI7b7dbomimwtJOiSpuDDE4OqQFc\\n\",\n       \"bABJdg+Hw/eO9dPTUxGU6H/MI5tQFk8IIghsasmRJueCVmMUCzCn8OnTp1KbJBIJDUfYfBkMBnQ6\\n\",\n       \"HR37w+EQ6+vrGu9HIhHRadnAcbHS29lgMIhzzQkpKZ/kX9DHg5g2zdWpmH/Q/X/Qd/8zu+r1um7S\\n\",\n       \"eDxWZgenfixBaB3LnYSqCY6Db29v5dLT6XQ0bQMgGIvKDzLlaLXFUoKZ2awnCYsNBgMtbKIBDocD\\n\",\n       \"Nzc393zyKIWiUeHNzQ0SiYRIRyylxuOxHtBOp6Nyh+Y2HHU3Go17sRDMxmYJwnCixei0brcreI/N\\n\",\n       \"LQDh6dQCknXIXXY+n6NSqYgeQASp0+nIuuHq6upeP8LS7CHXo1rMq6urMuErFAqIx+PodrsSnp6c\\n\",\n       \"nIjbTGJQuVxWXghDZkjcp2SoWq1KPcJdn6VFMpmEz+cTZk0G3vLyMtrtNpaXl5HL5RAIBDTZy+fz\\n\",\n       \"+PTTT5XNTZ3g9vY2AEj1QeyXRo/n5+caQ7Np5dSNPnuE5hi9TMrqzc0NyuUyrq6u0Ov1NKjhKdPt\\n\",\n       \"doWjMw44l8uJy1KpVGA0GlGpVOByudDr9VAsFjV65/dQGADcOTidn58rANTj8SCZTOrnEvPm+2UC\\n\",\n       \"1etej2oxc7TKLp+7InkGZKPRZoAUS4582ZFPp1PdGIpJyW5j5C7DdBhFRhcfwmFUaJAHzCOYOYAu\\n\",\n       \"lwsABJ1R8LmIQrBJ5ciaDwqzwC8uLuSBQV4Fa1e+X5ZKixEThCT50LEJpfKaDkcUtVJDSaSHqhmW\\n\",\n       \"Nnx9qkz4fii4pS/JYpQcSyzu8g6H46uAnsWLDDgGQVqtVni9XuVDUzbFiIXt7W1ZaNEgcTqdYnd3\\n\",\n       \"V8E+sVgMfr9fUz86hXLxcpchIkFaJRGBTqejE4N1Msn2HFkTuguFQuh0OkJh+CCwObTZbIjH42i3\\n\",\n       \"21hfX1dYJ5GJ2WymMX4qlZL9F1UxVKYTRuR4+dmzZ/eYbRzFv/nmm8Lb+VBsbGyI+01UwufzSYNI\\n\",\n       
\"jgZPJQYBjUYjJJNJXF9f69Qh1s3f8Tvf+c6DRtqPamfmolg0BqSQlFlzk8kE1WoVKysrEn6SqM/6\\n\",\n       \"stlsit98dXWFFy9eKEqBtSEneDabDYFAALPZTP4XhNeur6+RTCaRz+dVO3I8TfOWbreLUqkk/JhH\\n\",\n       \"OC14Wc+zVGJoJadqtVoNw+EQZ2dnsFgs8j4mw67Vasmckbss86zp8FSr1dBqtdRLcLyezWalsnn5\\n\",\n       \"8iUqlQqKxSLMZrNOJEKItFmgTpInEetwIjzz+VxlBZtdNqIP5TM/qp2ZsiiDwYBf/MVfxGg0QiKR\\n\",\n       \"0FHMJCUqKRgQ4/F4YDAYUKlUFJHgcrlUUqTTaXX8s9kMfr9fWjpOzajmsFqtOi5p5RWNRqXHI0uN\\n\",\n       \"zvoAsLGxoVEyd3oAGs1Tlu/3+3F5eSnTF7LOqFlcWlpSnDAALRxKtbjgWq0W4vG40BFCZ3t7e3j5\\n\",\n       \"8qUQG5PJhPF4rFhgmkuyWSVD8Pb2VsoVDof40DNjnBkqLGkcDge63S663S52d3c1KPrRj3702vf/\\n\",\n       \"Ue3MrBcHgwFevnyJyWSCfD6P0WikepTYKLm+bI6o1DCZTKjVauLXEmpjmORgMEC1WhWiYDabtdNx\\n\",\n       \"vMtBhdPpFE+CyhMeszy2iRZwRM66nlg00RQGUXLH5+KgiTdtAhqNhnZrq9WKZDKpo547JbWMRFnI\\n\",\n       \"ISE+zSB51sr8GSxLuBszEpkedQzbASBn/0gkcs+/j2Lfy8tLOBwORCIRTRNp4fu616NazDabTQuN\\n\",\n       \"OzQ7fTYtRqNRxoKRSAQABK+xCbJaraovibtypyYXgTwL7kR09+TpQJtaypMACDYbDAZot9sIh8Py\\n\",\n       \"sSDLjkcw1TKsX/l+2GQFg0HVrnQ74g7KB6RSqcjVkzAka1VafxHLrtVqaLfbSo4yGo3iT3BX5qCE\\n\",\n       \"ggLCdW63W78r+RbkuBDy5AbCySVxbXKqCZE+5HpUi7larSLzRYrT4kKmTxp3NcafdbtdvHr1SkGY\\n\",\n       \"9E7j8GE0GiGXy2EwGKBer0sQC9xZBtBFs9frIR6PazzN5i4SiWgMzN2JCEgwGMT5+bkWJjFyRk1k\\n\",\n       \"s1mNmTkip/v+5eUlarUaKpWKVB5msxnFYhHNZlO7XjKZVK3Oh5hDDQAav3PHBCCjnMlkgu3tbcxm\\n\",\n       \"M2kBOdanxQKbw3q9Dq/XKzIVJ6GUgBEGZQjSbDZDLpeT+p2G7g+9HtViDgQC6qhp1kc4ik0Vechr\\n\",\n       \"a2s4Pz/H1772tXvsr8lkgs8//1xYKamSvEksWRa5GYFAAPV6HblcDul0WlAf4S+6edJTw+FwoFKp\\n\",\n       \"aAE8f/4cgUBAzD4OYDju5u54fX0tOihra5KHbm5uhDPTW/rs7Exyf4fDoV2f5CGy4BahOYpLDQYD\\n\",\n       \"crmc/KYZFrpYlrEUInYPQPU0m2meCuTD8AFKp9PI5/OiDwD/Ap3z/1Ne9FJm3QxADphsqjqdDvL5\\n\",\n       \"vIy4Of6lbKfX62lnYQ1Zr9eVyX11dSXjQk7ygC9D3klIInuMN5blB2/YxcWF0I9FY3HW6eQqc2cn\\n\",\n       \"T5rSp1qtdi8QnsR2q9Uq6iaRgm63K5ISHy6WL6PRSGNq+twxQo2+ILlcThEWy8vLqNVqwuCJSjBo\\n\",\n       
\"h9Af/7w4kaTqnbwWOjU5nU71Ig+5HtViJmC/qAJmDUlkgkaFBoMBmUxGpHpOo+jHTIk/m0Lu8GyK\\n\",\n       \"6LHhdDrVHFEzR0yaU0KbzaadnMMWppX6/X6EQiFxkFkaud1ukZJo+E1UgtnfixhyPB4XnXPRrXMx\\n\",\n       \"wJIPCG0S+HsyB8VqtQpa5GJfWVlBJBKBx+NR3AUxavIt+ACSB86SZXHQwywYNpGcUDIGmU6lD7ke\\n\",\n       \"1WKmspkCVlIeOW1j00YvYxoaTqdTPH36VDtHMpnUrpnP57GxsaE6k8aE5C9w0rWysoJ4PI5Op4O/\\n\",\n       \"/du/FTmJRB36UbB8oRSfEQtsCvP5vOREvLk+n08PBJOjiCgwB4WMuPF4LOSGxCXuzFyg9EOmap0O\\n\",\n       \"RDR6pH6PYoWLiwuN+DnoIJHeZDLds9alSxENxPl14uTkr3Dgwt7joQsZeGSLmbASox4YLk43eKo0\\n\",\n       \"qMW7vLzzOLfZbDg/P4fL5YLP5xNhfz6fIxgMyjKLRy89Meiwz5RShuPs7+9jNBopa4819NbWlhhm\\n\",\n       \"9OPg0IayKE4lSdjhkUyhbiQSQSaTEb+CqaYUsTJl1mQySenCXXE4HGpX93g8mnKSwcYdk4udu2oo\\n\",\n       \"FBJGzPqcDDtCkfTvo1BgUYzA7OzFv+dgyeVyweVyqcd5yPWoFjM9KiwWCzY2NhRxwBg1t9uNVCol\\n\",\n       \"9tnm5qZG4IlEAsAdPvorv/Ir8Pv92NnZQbfblQs8oTzuRExSpWNmIpEQwkHuBx8k4A6r5m5MI3KX\\n\",\n       \"y4VIJCIvC9bOJN7XajVEo1HM53O88847yOVyKJVKsv5iGP3a2ppKEWoBiWZQwUJLACI1LpdLll92\\n\",\n       \"u10cZ54w/Mw4rWSDabPZFNlM1Uk6ndbn4nK51PyxtHO5XMLLGbUcj8fRarUQDoeRyWTw1ltvPej+\\n\",\n       \"P6rFnE6nAUBUSx6pwN1Cr9fr0r2xI6cFVzabRbFYRCgUwocffohGo4FsNotEIiEuM0WiDI+nWTeP\\n\",\n       \"4mKxqB2zXC4r42QxCIgK5efPn2t3Go1GCIfDuLm5EbRGP7pgMIijoyO0222USqV7tgX8d9fX1ygU\\n\",\n       \"ChIN0LTl5ORETk7T6RSlUkljeJZHNGphmXN5eYlisQiv14uPP/5YueOkbXLo0m63pWp3Op04OjpS\\n\",\n       \"6A/jhulD3ev1lAvOCaTH4xGpv9lsot1uI5vNPuj+PypL29/93d+V7dT19bWiFDKZDPL5PCaTCQKB\\n\",\n       \"gBxA7Xa7FjN1buQ3OJ1ODUwcDgeq1aqaMrphcuJF/jN5yYTlWLMuktSJUDC+12KxoNls6vuvrq7g\\n\",\n       \"crng8Xhwfn6uOpdoCSeHALC7u6sRPYMvfT4fisWiIorZqLI8WCTIc3hEezK+7tXVFUKhkAzEq9Wq\\n\",\n       \"6ulF5IM7O8lG1WpVJjaMkgAg03OPx4NoNCrx6mQy0YLmfz/E0vZRcTMod7+9vdWxR8kSieLZbFZm\\n\",\n       \"KMCdlxwzoEm8r1arcLlcOD09RTQahcPhkGUtE1YdDgdarRYcDoekWOQEA5CZSiwWQzabFSRHXzoA\\n\",\n       \"GnszNoxDjX6/j1KpJDEo9YSLkWP9fh8vX76Ex+NRucOpIQW7FCYQ/uJAiIQs/v7Ly8vIZrMKJKLt\\n\",\n       
\"FxtYckPoo8e+hIlbJCKtrq7KQ4PZJtPpVBpGk8mEo6MjTQHZILJpbbfbD7r/j2oxc7TK5CKHw4FU\\n\",\n       \"KiX/s5ubG0SjUUl/kskkPv30UxiNRglPWU7YbDZ84xvfQK/XkxyKI9xAIIBGo4FwOIxIJCJ8lLte\\n\",\n       \"o9FAJpMRu45UTbpt0lOCdrQ+nw9Wq/Wec3w8HhfXl5Iqejqz/DAYDIIAidDw/dNf2mw2o9PpCKFg\\n\",\n       \"o0dif6lU0o7LwREhtkwmoxqbdNVer4dUKoVsNov33nsP+XweJpMJOzs7Kh+oyqGo1Wg0IhqNij89\\n\",\n       \"/yLIMxgM4vj4WPfpK6+5hYv2ADTpo6UtfedOTk7QarVweHgIAPjhD3+IUqkkUhGbuXK5jMlkgr/7\\n\",\n       \"u7/DYDDAhx9+qAHIaDSSJRUX70cffYTb21s0m02VI8xVaTabcvHhBPLw8BA/+tGPZDrI3YxWs9Pp\\n\",\n       \"FCcnJ6jVauJg012oXq9jOp0qUctgMKBYLKJSqUiaxJ03l8uhVqvp3zIznM0gd8disags61KphEKh\\n\",\n       \"gMFggE6noxPi+fPnyGazyOVyqFaruL29xU9+8hMl4h4fH4tfMZvNcHp6KqaiyWTC+fk5Pv30U3FL\\n\",\n       \"SMI6PT1FuVzG8fHxg4lGj6pm/q3f+i34/X5Uq1UlONVqNaTTaU2zBoMBKpWKSOY8iuPxOA4PDxXs\\n\",\n       \"SAn/8vIyksmkGklK+GnISNir0+kgFouh2WxqBA7c2R8Ui0XBhoPBAPF4HIVCAdvb27i8vES5XIbJ\\n\",\n       \"ZMLW1haazaZU4NPpFOHwXbbnYpYKoUVmllCAy1E5f+eXL1+K60FEAYC+32azyfOtVCphd3cX1WpV\\n\",\n       \"aaxHR0dyZgKg9AE2tPxcV1ZW0Gg0sLm5iUKhoF6DJxjlWFar9Z4SZdHckZHNf/iHf/hVzQxAzDf6\\n\",\n       \"lq2trSnmloMJCjHPz88RCoXwZ3/2Z/jVX73LliedczQaaVGZTCZ17AT2uUszDy+fz8uHjUMW1pPE\\n\",\n       \"bkmyp46O2HYkElEA5tnZmWpdlkyFQkF1cbValYkiSyUiHL1eT6NkckKCwSAGgwGurq5E6uGEE4AG\\n\",\n       \"L9zhT09PtasPh0M10KSA7uzs6OuDwQD9fh8OhwMnJydwOp149eqVyplXr17pdTgm5/Tx5uYG/X4f\\n\",\n       \"T58+RS6X04T1q5p54VoMs+EukkqlFN3LuAOSyff29nB8fKwaFoD4DCTxl8tlDR/IxWWcmtlsVoNI\\n\",\n       \"hlgikcB4PL5ndsKMPnpnhEIh8Z/JQEskEiI00V6MMb0Oh0PvmYzAaDQqewNmsTBygegIcEe+mkwm\\n\",\n       \"8Hg8SnPl7ur1ehWvTHSHo3aiIA6HQykDrOlJed3b25PMazHLcDwe48033xS5iT+XsKjP58PGxoYG\\n\",\n       \"O8T4H1ozP6rFDEDTs9lsBofDgdwX0b80iInH4+j3+5jNZvjZz36GbDaLjY0NMdDy+bwWzuHhoeTv\\n\",\n       \"3F1MJpPCeqbTKc7Pz2E0GsW96PV6iEQimhAC0Gic3hKVSgWXl5dSf5MC6XQ61UCdnJwICiS+zZ/Z\\n\",\n       \"6XSkR2T4TTablVSJC3fR9bTRaAiiI5easioqSBjOyQXOJrHdbovlN5lMkEql0Gg0cHx8LN0im91n\\n\",\n       
\"z55hPp/j5cuXiMViorVeXl6iWq0ilUqJOx0KhdDv9/Hxxx8jk8koIOh1r0e1mI1GI2KxmNhdixke\\n\",\n       \"oVBIx7DX60U0GkWpVMLq6qqIRgys5Gu53W6B/JQj8Uin0npR7UFbLZKRKEeinIqOlxyVr66uiqcQ\\n\",\n       \"iUQ0bjYYDNjd3RV5x2azKZ9veXkZq6urKicIjdFtiBNPwnckQFEpHo/Hpf+jqaPf70en00EkEkGt\\n\",\n       \"VpMsi7tzMpkUhOZwOMRNoR9dOp3W4qRCJRaL4fr6Gi6XC6urq8hmswiHw3A6nQiFQqLUXl5eSgWz\\n\",\n       \"vb2NH/zgB699/x8VmrG0tKQMkKOjI3Q6HRl8UzTKHYSRaAw05/96vR5evXqlWo+qZ5JyuIB4rNJU\\n\",\n       \"hhNFst4YbEPftcXwn3g8DqPRiGKxCI/Hg3w+L4I8hxilUgn9fh+5XA7n5+eo1+tSbVgsFiEq5JdQ\\n\",\n       \"rTEej9FoNDR1Ix86Go3CbDbj+PhYBCrgDs6s1+uaCpICS4HAbDYTmalQKOD29lbTRzLgms2mkgRI\\n\",\n       \"bT04OBDllcT+eDwuTPzk5ASz2Qyrq6sAvgysf9D9f9jy+ed1UYt3fX2N1dVV1WCcolGhTbmTx+NR\\n\",\n       \"Ph6VETTwBqDdhznZvJE0bgFwT1VNpKPRaGhX5i5I6RbJ73a7HTabTYmnXEC8oYuOQaRu0g+j2Wwi\\n\",\n       \"EAiIUM9aepHuSrsrlky5XE7c4m63K4mVzWaTGyizXMj9ps0tveZWV1dRr9c13FmklfLzo0MqR/4c\\n\",\n       \"cdPgnPwT9gGVSgVms1lj9Ydcj6rMoGKZww1yeVkLkgvB0S5wl/8BQIOLarWKUCgkByPulG63G+Fw\\n\",\n       \"WIuFlgY8/umSD0BGMhx0UNHByRmhKA41aPfFGpsjdRLx2Rim02lJoviQ8Pgn14L6Pu6Uw+EQwWBQ\\n\",\n       \"pQ2TV8nko0CBfGpaHTCHBAAGgwG2trZQKBSwuroqMevt7S1cLpdOFH6mJC3RNIecZWLwVMOT1ETF\\n\",\n       \"SjKZfND9f1Q7Mx2KqtWqMu2IGhiNRh17nELRfRK44w80Gg3tmJ1OR2UJd18ORrjD0MlnOp1qoEHV\\n\",\n       \"Ch3xeTGfkKUKCfFUx3DHZpQaU2VZOnBnB+5OoFwuh/l8LnOY2WwmxGBRpe7xeNDr9bR4WeeSOVcq\\n\",\n       \"lSRoAKBShoJU5mMzP2Vxs/D5fIIvuVFwbtHtdlGpVKRdbLVaMkYcDu8yzk0mE8rlskbrvBevez2q\\n\",\n       \"ndnn86mxuby8RCwWUxd+e3uL09NT0TgZ5P7ZZ58hlUrJ7op149ramkJziAxQ+Q18iVBQBV6r1dBs\\n\",\n       \"NuH3+xGPx1GtVmEwGLCxsYFyuYx6vQ6LxSICEtl1rFX584PBoFhr3OnoqcG8QJqb1+t1IQX5fB6p\\n\",\n       \"VOqeOWSz2USxWBTfArg7varVKsLhsCRdlUpF8WmMQ1tdXcXx8bHUJq9evUIymZSggQ9xLBbDYDBA\\n\",\n       \"LpdDJpNBsViEzWZDqVTSUIhCYpY4fB8vX76UqY3L5cKLFy8edP8f1c5M61faW5FTPJlx/k4sAAAg\\n\",\n       \"AElEQVRMZA1LRyNaxL777rvqwP1+P+x2O8rlsoSZRAVIKieLjgR07lwsH1ZXV7WjZTIZABCbjGUD\\n\",\n       
\"v4fQG4/i9fV1OJ1ORCIRWCwW9Ho91bGkj8bjceG9NJ6hkU0ymZQZJG3CFqd8FMtmMhmZsNMujG6i\\n\",\n       \"bBgBaLdedCtiWP1in8FpH+0S3G43LBaLdmtO/KjmpnYxFAohlUqpnOEO/brXo9qZecROJhPs7++j\\n\",\n       \"Xq9jf39fzkB2u10UxMlkgm9+85v44z/+Y2kBSREF7hZaMpnEaDTCW2+9JfdLq9WKUCgkMhLrXNa2\\n\",\n       \"5+fnSKVSCIfD6Pf7sFgsiEQi9wwVKRZg7ondbke321XtyiY1nU5LxUHlChEWwnPM7CbLzWazYX19\\n\",\n       \"HSsrK/D5fIq0oBiWvhf0ox6Px4ItOYnj0GZ3dxfxeBylUkmLlEMVDox6vR78fr8aYp6Oq6urGAwG\\n\",\n       \"8Hq9uLq6gt/v18Mci8XUuN7e3iIWi8FisWB3dxeffvrp69//hy+hfz4Xa1kAohrSI+Pi4gK1Wg3n\\n\",\n       \"5+ci2z9//lyj5m63i2KxiE6nI8X28fExjEYjyuWynHqcTqfIPBw9k9JIC9p6vS6XHgAapxM+A4Dz\\n\",\n       \"83PRK2u1mnZ82hcwJYr/hg8qFzZJ/IPBQO+fTqCkldI3r91uo9frodFooFKpSIFDbjUht4uLC0Ft\\n\",\n       \"V1dXOD8/v/d7sE4nS/CTTz7ReyP3hb1CPp8Xz3qxhON7orKGvPBarfaVCczixcbH5/NpFEu6ISdX\\n\",\n       \"4XD4Hjc4FotpzErVBhGBxeAZo9EoV1GLxaJjn/9N+iQlQMwCoSZvUTlNLzuiJGx8zGYz0uk03G63\\n\",\n       \"ShjyiGns6PP5VNvSkZTHNodCHJ5wlG6327G1tYV0Oo1IJIJwOIxisSjeCrOyV1dXlfcHQNPPdrut\\n\",\n       \"E2tRyLu5uSmeRyAQuNeIcgEHAgH4fD54vV6dkORsUDLGrBj+jNe9HlWZQSVFpVJBtVrF+vq6uumr\\n\",\n       \"qyuVF8SiaQ5Oji5z6er1OhKJhAy6yc2gJIhqEDZdHF1zYEFYrVAoIBaLyfuNmC/pkZwu0ouOo2cS\\n\",\n       \"ibi7UmnSarWwt7eHQqEg4hEZeqVSCclkEgaDQaw40kfL5TJOT08xHo+l2eMDSGcnn8+HbDaL8XiM\\n\",\n       \"k5MTvPPOOygUCkJV6MvB0HamyTJEk8iR2+3GxcWFNg265g8GAxwcHKgBTqfTODs7k41CMpnERx99\\n\",\n       \"9KD7/6gooL/zO78jWI27FfOoc7kcRqORSEI2mw3Pnj3D97//fXzrW9+ShJ4RBk6nE0+ePMHZ2Zls\\n\",\n       \"CxjKHovFZBBD6IuUTw5X2CCSief3+3W0cuelrIjTReBLc0EA8rCgkY3VasX6+rpIU2T4UY3O0oEB\\n\",\n       \"l5PJBEajEe12W+lQXq9XZUkymcTJyQmi0ahG53x4KJuixzSHJ8fHx1hfX0elUsHbb7+NXC6H6XQK\\n\",\n       \"v9+PVqslVh7jMGhKTlIRbQfo3lQqlRTddnh4iO9///uvTQF9VGUG67fr62usra0piqHX6wmQNxgM\\n\",\n       \"ytOgyoFMLgpSDQYDQqGQambuqCwvSqWSeLjn5+cSyC7Gkd3c3AjDvbq6kvdwv98XdjwajcTjpWMm\\n\",\n       \"LWIZZUFGHSmnvV4PpVIJtVpNUivi2s1mE+FwWDERHF0ToqOdLrH2bDYra1367HG8TSHu/1vtQryc\\n\",\n       
\"TMTl5WWsrKzg6OhIxCI2s4sj6qOjIxk38pTiQ7+ysoJisfhgq4FHVWbQPPD6+hoHBwdIJBKo1+vw\\n\",\n       \"+/04OjpS5hwAGRNSElWr1TS1YxNERh2nh/V6Xd07A3y4QzLyl75u5GrwlCC1lNavrVZLLvfkRRMT\\n\",\n       \"ZtRDq9VCqVRCuVzWwuDrcpLGZpQiVT4szWbzni8zldVEM/gadNfncc/TYjQa4fz8XNwJn88nvz6n\\n\",\n       \"03nPo5kBPp9//jn8fr+UJpubmygWi4LnqJIhYsJm3OPxwO/3Cwp83etRLWbuwHSo5wh6PB6rWeFC\\n\",\n       \"o7qCC5XCS/pjsEEZjUbK3vN6vUilUjg9PYXH40EgEJDmkHo3m80mToTRaEStVpP5CUe+5F2Qgced\\n\",\n       \"nObo5C/QJNHhcKDdbuuUCIfDmtrRO45+db1eT+PrdDoNs9mMbDYrbPjdd9/F+fk5HA6H4ERqABnn\\n\",\n       \"QBuFRVuv8Xh8b+TNMisej8sb2u126zNhMPzGxgZ2dnbw6tUrtFotRCIRWK1WRbiRsOTxeOB2ux90\\n\",\n       \"/x9VmUFZDj/ITqcjfPj8/FzwWLfbVWhPNBoVrNbv9xEOh/Hy5UtcXV3J8w2AiOs8TjlqJtmHO3O1\\n\",\n       \"WsVgMEC320WtVlOHTvhuPp8rAIg7IZ30XS6XIDc2gJ1OB5999tk9tTYXD4lALIGoyuZono0Xifjj\\n\",\n       \"8RjPnz8Xjxm4I0iVy2VFU9TrdTXJlUoF8/kcVqtVA6RmsyknJvpnkGgEQJ95uVzWg/7RRx+hUqkg\\n\",\n       \"FoshEAig3++rIY9Go/Lrq1QqD7r/j2oxkxzU7Xbl5BMMBmVjRa83RpZReUxPYpLtCbsR5spkMuIf\\n\",\n       \"mM1muQjR7IU8Az4cnI4xLoKdv9PphMPh0E1mBh5trOjvRi0fvYxZBxM9yOfzikKmYTlH6wzTTKfT\\n\",\n       \"etjMZrPqd/qDcOqXzWbh8/mk2I7FYkKFEomETGUYJEQdod/v1+fBh4OQIA0TSZGNRqMiJDUaDSV7\\n\",\n       \"TadTYdc0jXzI9agWM+mcW1tb8hEmVDUYDPC1r31NZic8pjnOpa8zACEEnIrRe9hoNCIcDivAnJa2\\n\",\n       \"GxsbsqXiA0RxgMfjgcVigdfrFXuPMqR2uy0XT9oDMMC+UCjIPZONFydzW1tbqn25q7KMqdfr2Nra\\n\",\n       \"Eh2TnIy9vT2N6ykc4GsCd9PTvb09YeWj0UjRwqPR6F5uCYdKfB0AWFtbk5DVZDLB7XbL5ZQnCqmq\\n\",\n       \"PJW2trawsbEBh8MBv98vL5PXvR5VzcysO0p+6G7EHZvydtZ/l5eXwnkJ29XrddWJ+Xz+3uia+rlF\\n\",\n       \"IjvDZ2gMfnt7K484LqpKpaKalROvm5sbUSTp2dFut+WuRJ8PMvI42OCkjSbiLKs++eQTpFIpxTgQ\\n\",\n       \"LaAZziKDcDabyUuDuz+V7FdXV1haWoLf70cul0MkEtHPqNVqwqfJByHMuJj9N51OUSgU9HC3Wi3M\\n\",\n       \"53MUCgVxRsgt53ubzWZf+TMvXhaLBbPZDC9fvtQRbLFYEI1GEQgE8Omnn8p8hRpBSpr4tUQiIUIS\\n\",\n       \"j+xIJKKkJ9bS3GGYh8LhBWmSNCSfzWZ6IADIgZ4DBuoHTSYTksmk9IV0z+eDR9NFeiuHw2G43W4k\\n\",\n       
\"Egk4HA5sbGzA5/Nhe3tbEz/W60tLSzg5OUGz2dTQIhgMqo5ftNFiHW6325HJZNRE+3w+rK2tKd6C\\n\",\n       \"fA6ecicnJ4I8Wf/ToJyGi+RvMz/84uICn332mfzrHlpmPKqdOZ/Pa6RqtVpRr9cRjUa1Mzx58gRO\\n\",\n       \"p1Mezkxj6vf78pXgjeURzV2YsboANP4m0jAcDhVGwx2LxJ9IJHLPPJz6QafTqe8ht4G7FIWtfr8f\\n\",\n       \"2WwWwWAQpVIJ4XBYyozr62u43W5RXEnuZwPKARDJ9sCXQgTW+AaDQWNoojwcV/Mzodqb9l4sERbj\\n\",\n       \"2qhhZIlBfz9OX3lSVKtVqcuXl5cRDocxm30Zk0w/vte9HtVi5oKhkTjN/FZXV1Eul2G1WtFsNgVH\\n\",\n       \"sd70er149uwZXr58KdB/fX0dzWYTRqMR6XRagwm73a7jsNVqySiQdTUJTWzaOF5frJFpEcZyod1u\\n\",\n       \"w+PxiPM7mUzw/PlzJVMRGiMyUyqV1GgBEDrD5pYihdlshuPjYxQKBbz77rvS8jEDfD6f4/r6Wr4X\\n\",\n       \"6+vr8t8gykGEqNVqiStit9v1WdIbJBqNiqhF83JuEIydAO4mnMViUQOW0WiEg4MDxGIxfPjhhw+6\\n\",\n       \"/49qMdPhx+l0wm63I5FIaCjx7NkzURlJByX0tba2huPjY3g8Hjntt1otyZJYG4fDYZmUM8ODvGnG\\n\",\n       \"5hLLpvVVuVxWfAJ3P+aGcNcmh2QxsjgUCskB32azodVqyUt6c3PzXrNF9IYqkrfeegtms1nwGVEM\\n\",\n       \"s9mMt956S3U+g4y+973voVAoKFObdE+a6EynUzWyFosFyWQS9XodwN0JRqdQlmMrKys4Pj7G1dUV\\n\",\n       \"rFYrvv71r8sqzOVyYX9/X1PWDz/8EJFIBPF4HPF4HB988MFr3/9HVTMTfyUWSrokc+mInxLdILON\\n\",\n       \"trB09vz4449hsVhk+FKr1WRN2+l0lPZkNpvh8/nUVNESl7EL3KHJS+DxzEVLxQankGwgr66uhF0T\\n\",\n       \"/242m+h2u0JsWFZwh2UjS79nNoDM8qM0q1wu47PPPtPnsrS0pGhi8jg4dKJVASeG3W4Xw+EQL1++\\n\",\n       \"FCTJuGKPx4NmsylXqZWVFe3quVxO9rpEV0hA4glWr9e/ihtevKh0Hg6H+OlPfwq73Y4XL17INHE2\\n\",\n       \"m8lYkd5yFotF2YHM6/v8889V85I1RptZIiVWqxX5fB5PnjyRIzxjxoiKuN1u2O12fPzxx3LppEPR\\n\",\n       \"4eGhRKZnZ2f34D6OsTnYIbb7ySef4N1330Wv18Ph4SGSySQCgQCurq6Qz+fx3nvvod1u4/nz59jZ\\n\",\n       \"2VFqVDabxfe+9z3xRcLhMIbDIV69eoV2uy38+sMPP4TNZsPR0RG++c1v4vj4GH6/XxEQzWZTihvC\\n\",\n       \"iJVKRQgPDWaYTejxeNDtdoX9h8Nh6TP5vSaTCe12W/yOh1yPjjXHXRiAMqvJXOv1egiHw6jX64jF\\n\",\n       \"Yuh0Ouh2u1hbWxOmyrxpu90uk0LmcpCny1356uoKkUgE2WxWGPXl5SVSqRQqlYqGI5eXl+I0t9tt\\n\",\n       \"JJNJFAoFlRkAdFPD4bAyRRazr+kJTXU5BaeskYvFouRiXFjT6VSfx2w2g8vlgtvtltiAJZPNZkOl\\n\",\n       
\"UsHm5iZyuZySpchXMZlMKoNIE+BnnEql0Ov1JNalZKzRaAjxIMGIlrxUYwNQ6ef1evH8+XP8yZ/8\\n\",\n       \"yVfGicDdzhyLxcQ79vv9KJfLamwKhYIyqJ1Op/6bEqfFnJKNjQ188MEH2N3dRa1WQzAYlFrCZrNh\\n\",\n       \"Mpno39IJkzVuu91WihRxWmLEw+EQ2WxWnGhqDEn7vLq6QrVahdfrFbRFd/2trS3VuicnJ1hfXxcz\\n\",\n       \"sFqtYmVlRczBRVOaXC6HUCgky1zW2FycZ2dnsNlsImkZDAY5EhEeLBQKcDqdaDQaiMfjwoZZEpH+\\n\",\n       \"yfKl0WggEAig2+3qtCH8yUHV8vKykCZmiz/kelQ1c6/XQ7Va1Q5BthnwJTeY2jcS2yklMplMMnhh\\n\",\n       \"zEMqlRKnl69D9hm9l0mnpLg0n89rgEA4ixa19EB2uVxIp9Oy+6L0n+NfIiCMIiMSQOivVqthZ2dH\\n\",\n       \"jZbRaMTOzg6azaZqdk74AIizQdMZNqF0dGJjNxgMBElyUMNIC/5+JCgBQDgchsPhwPn5ueA1jslN\\n\",\n       \"JhOKxaL43X6/X4YwLH/o70Fs/qHXo1rMmUwGy8vLYmkxLZXTr1AodG9gQjSA7kCM/2Vo+enpKdLp\\n\",\n       \"tEjubrcbsVhM3Amfz6chwsXFhVTYdNGkGeHa2hqi0ShSqZTYbMzL8/l8aDabwrj5b6ngpmiU6upg\\n\",\n       \"MIhnz56hWCyK1G4ymVAqleDxeOR8enFxgXq9rodlZ2dHrLetrS1ZH3CXvr29xdramoj0tNpiAmsw\\n\",\n       \"GJQYlzIvNrWpVEq8DA5crq6uEAwGtfiZyBUKhcT6m81mSKVScv7f2dl50P1/VIuZux5NANlNc3HR\\n\",\n       \"gZNQ02w2QzqdxtXVlRACwlNMaO31emLOEa7jzez1esrr4M2hFo/5gMFgUAw2vpfb21sAd2QdSpvI\\n\",\n       \"cWCX73A4NIl0OBz38qmLxaLyQnw+n6inwWBQ9XQymdQiouMRJ5a0CyiVSoL2KD4lt5nQ32Qy0YNJ\\n\",\n       \"vJhTPH5m1EmyRr+8vFQf0ul05MtMiJCvSz/mZrOplICHXI+qZqY6+vr6Wjo0OnQOh0OUy2UEAgFc\\n\",\n       \"Xl7i4uICpVIJlUoFv/7rvy4nokgkgoODAzidTk2sSqUS9vf3NbYNh8NakIT4eIN41FNKxZqTxzj5\\n\",\n       \"u6VSCTs7O2om7XY7jo+PMZlM0O/3pZ/jcGU0GiEajYq7TFI9AHE06FrPcoW+cHQXomyKg5FIJCL6\\n\",\n       \"a7lcRjAYVJbi5eUlstks1tbWMB6PUSqVEAgEUKlUtPPzQaOq2+VyadN4+fIlNjY2ZFDJxRsKhYSe\\n\",\n       \"UDUfj8cxGAxQLBYfdP8f1c5sMBjQ6/Vk4UrWGtXBpF8GAgEsLy/j6dOnwl8jkYgC1kOhEDwej4YA\\n\",\n       \"NBm0Wq1wu90yM7FarQqcpzzK4/HAbrdjNpuh1WohnU6ryaNXBCPOrFarGjY2THRIojGKy+WScTmd\\n\",\n       \"96mAJhLldDpVBrGUYQorGXtUwBAloV5wNpshHo/D6/UKxWAtzxLG6/UqLJOjfZYQ/FxYjtzc3Kih\\n\",\n       \"I3YfjUYVskl6KaPbqGC32+2aaL7u9agW83A4hM/nQyQSwYsXL+D3+8VQIxm90+kgm83C4XDgb//2\\n\",\n       
\"b7G2tgYAcrt3OBw4PT2V3o6xwDy+3W63EA3eVMrlOX202WyIRCLY3t7GZDLB6ekpvF4vrFarFtpw\\n\",\n       \"OJTHB6VDlEt5vV5xQubzOUajkbJW6BzEUbDNZtPQhGjMwcGBrAgYO7G9vS3s1263w+PxCE2hbwZl\\n\",\n       \"VQyQZ+PKWIlcLidrXMq7iOrwwSRzMZlM3nNN4jCHLEH63pF6S/+Sh1yPqsxgrBibLh7hVBszg4Mw\\n\",\n       \"2d7eHprNpiZv4/EYq6urygnxeDxIp9M4PT2F2WxGOBwWLRSAwieLxSJWVlbk/O7xeJDL5eByuVCv\\n\",\n       \"17G3tyeqJKEvNj0sV4LBoBht8/lcLj9EFubzuRq3tbU1KbS5k29tbcnMkE0rw+MdDsc9ORcX5tbW\\n\",\n       \"ltTrVqsVTqdTeYBGoxFra2uw2+1CQ4xGIxqNhhbfcDiE3W5XbDDlTwaDAQcHB0IwWFM7nU6p1KlE\\n\",\n       \"Ae5ONdrfPsTR6FEtZhoLspli/cpxM2VJDNqhHRXLgclkgnK5DKPRiH6/j+vra5RKJZGEuCA5mr25\\n\",\n       \"ucHh4aFI+4PBQDAVYTnCTvTooPIjGAzKsJD1eqfTEXGn1WqJdUbMl7tZsVgUgy2fz99z6KdNAvCl\\n\",\n       \"eQvxcA5K6ApKI3FmrbDxJB7OHmQwGEhWxQaOCp75fC7dIHFs4K70YePX6/WwvLys+LhAIKByh54j\\n\",\n       \"5Eg/5HpUi5m8ZC4ujl45fWOsLlld7XZbsqjl5WVRJefzuWrHcDiM4+Nj2cVubGyoiaMglPgp60cu\\n\",\n       \"JvKQb25u4PV65dVMvJjcCI6XGUvMEEi6jvJ9LSIrfDASiYTw4WKxiHQ6LVJTvV5XaeFwONDtdvUe\\n\",\n       \"qL0jOYryLKZi0SmJPnKktjLKmWGda2trqFar6Ha7YtPd3t5FCweDQfE3yAbMZDLKBuT/aJfwlT3X\\n\",\n       \"wkWXeQDiVVCbRnegVqulepMfMgWgxKVZ21UqFTV4qVRK8BWtuMiqo7pjOp2q0eQpQViQkBwjkamA\\n\",\n       \"WVlZwcXFhZKq2Pg1Gg15JbPxY71psVgkeuXO2e12kUqltAgrlYqOd6ZLcZhjNpsVXNlqtVCpVMSr\\n\",\n       \"zmazauD40FGvR5SFjSeRE5qJm81mkZEajYagO74WuRu03QUgMtViJvjrXo9qZx4Oh/D7/crZm81m\\n\",\n       \"UpKQxEO5FLFTt9stmiUX3dOnT3F9fY1oNCqDGHoa83gkErBYCwJQxl0ikZAcPxKJKCaN/IlEIqFk\\n\",\n       \"rMWGj9wGu90uojwZdpyYcSJHFp/P5xNBnsHqHITQxoBBlZSOra+vw263IxAIYGVlRRFrpKsyJMhs\\n\",\n       \"Nktk0Ol0YDAYkEgkcH5+Dp/PJ2ydD3AoFILNZlOqFDkt9AbhVNJkMklIyzjob3/72/iLv/iL177/\\n\",\n       \"j2pnph8w2WqsNwOBgLi+wWBQ07Tvfve7ir7l4IHZJx6PB++99x5cLpesrkajkfwsyBXm5I7uRExR\\n\",\n       \"4q5LbjPhNS5QNo/8Ghs0EvSn06lQEQphR6ORIMdf+IVf0EPK9z8ajWSHy1g2ogWsT7lzU/PI8oLU\\n\",\n       \"TSIqhMrMZjPcbjcGg4GaN6Is6XRaO6vdblcUBnO7w+GwxuGE/Vh+uN1uwXNseB8anfaoFjOtrgDc\\n\",\n       
\"iy6jNcBiYzMej/HDH/5Q6mPgjsHFGIbhcCgCztnZmRomhjbS2KVSqcBgMMhbglwJmnDTrpVxBxTP\\n\",\n       \"srnkAqKjEZNKWdOXSiW0Wi2RhohsfPzxxxKQcldnZggx5GaziUKhINIRHZco3eIOztqaaEo2m8Vg\\n\",\n       \"MEC1Wr2XPcihDkuYv/mbv1GZdnh4eM+mlkaLtOfNZrOo1+uy7B0Oh6Kf0t+vUCg86P4/qjJjcaxa\\n\",\n       \"LBaxvr6Ofr+P+XyOy8tLWVldXFwgmUxKeEpjQTp5LhoQ2mw2IRIrKyu4vr5W112r1bQgiEtTyDqd\\n\",\n       \"TnFzc4NUKiXLrNFopM6d7LdqtSq3UO7a8/lccQuVSgWJREKEe46HOV2kCqbZbCIWi6FYLIrBBkB0\\n\",\n       \"S3qAjMdjjZxJuiedkyjQaDRSaZLP56UFtNvtqNVqyjQhOsRSA4CSa9lQs+5nUla73YbBYIDb7db4\\n\",\n       \"mp/zQ2vmR7Uz86ilRS1ruVQqpbqN7C/mhzCXj3gtJ15utxter1cwUiaTwXg8Fi7r8/kQCoWQyWTg\\n\",\n       \"cDhkVUXPDPKQOarmgMRoNCpSze/3w+FwiEfB5ClyJbxeL/b29oQ9c5Et5hDu7+/D5/OpPNjd3UUq\\n\",\n       \"lUIoFJInHEOLCL3Rf49oCq3HMpmMnOydTqcUJJFIREQhBgtx2sqp5fX1tSKPaRlMmRhjLRKJhMoX\\n\",\n       \"svjY/DFD/CHXo1rM1Lsx05nOmeQXb29vi3Nrt9tVBozHY1EtabMFQMJVugCxWSL/mLsdp2m0q41E\\n\",\n       \"IvfypFmnk6O8tbUFp9OJ09NTTRQp0VpbW5NwYDweo9PpiMQUDAaRTqclElhdXUWtVpMxY71eR6PR\\n\",\n       \"EBON1gkmkwnb29siVW1vb8ujeTKZaBzNTBLqCVdXV2VDtr29LaiTA5N4PK54CbqXMn6COzXxe5ZE\\n\",\n       \"TqcTwWBQ7qbpdFpw5Pr6+oPu/6MqMxiW2Ol0kPsihPHk5ESB5PRxq9frkgWZTCZUq1V15oPBAKen\\n\",\n       \"pxpeMB7C4/HI7XNpaUmmKH6/H7VaTcy55eVlHZmU3S8mmN7e3qLVauHk5ASbm5uo1+toNptKwWId\\n\",\n       \"yQDN5eVlUTfPzs7kz0y9nNfrVT37zjvvKGGW7LhyuYxcLodEIoFyuYxkMol8Pi8Vzc3NDQ4ODuBy\\n\",\n       \"ufDq1SvV/W+//bZMXwjtcfhBfJ0Wv5xSUuFCo51WqyVuM51VKQUjgZ/ezwCUGvu616PamVdWVkQn\\n\",\n       \"jEQionICdw0hlSg8TjOZDK6vrxUi2Ww2kUqlEAwGhQhQNkS1SK/XQ6vVUkdPcg1wdzO4k5LfzBKB\\n\",\n       \"msLhcKhygSR3WnsBUOwDJ2x8IJi4ShkX0RI2VA6HQ7Upd06askSjURQKBT2Y0+lUJt8A1JiyPPD7\\n\",\n       \"/YL9mIO9+HmFQiHFWDAAiI2cyWTCzc0NisWipqQkIbEs4USUjS0X91cTwIWLUBgNRRhDRq0d61iS\\n\",\n       \"w5lRQl4AVddMJ7VYLLJs5XSLdTBVxZTwW61WjclZkrARSiQS4hgTVVlfX5f7D3HrdDqNYrEopCEU\\n\",\n       \"CmkX5MJhCi3hLZJ7CInRBiCRSKBYLMJisSgbm/ki5GQT8aH1LQBZc5lMJuzt7amW5yKMx+MaY/Nh\\n\",\n       
\"4ENEPjM9rWlOEwgEUC6XxSkJBoMqlUg28nq9ePr06YOsBh7VYv5/2HuX2MbXNL3voSiRkijeREq8\\n\",\n       \"6a6Squqc03X6nD59mzFmpoE4i0Hs2SWzCbzILpuBYwdxso8RZJO17VUwNgLYm8DxwHA8iCcz4x50\\n\",\n       \"t/vUudRNUkmibhTFOylRlERJzKLO72mqPXaAUpzxCP0HDk5dVBTF//f/vvd93ufCbplMJvXy5Uub\\n\",\n       \"cePkzsCDsW+xWNTZ2Zn929rttq0I4GccHx+rXq9raWnJkB0ZJyhF2D0JvwQCxGLr5z//uZaWlmzO\\n\",\n       \"PTU1ZV4GJUE0GtX29vYdqwJ2O/gOuBmx60Ki73Q6zsPGJB2oD9ekxcVFM+Cogxk9l8tlN7DhcFhH\\n\",\n       \"R0c2A4cui/80pUa32/XDDb21UChY8IsYAYiQrJRAIOBhDqgMvG3I+u97PajFfHp6qtnZWTsSgRSM\\n\",\n       \"jo5qYWFBh4eHd8IooV0Sh4ad69ramh3vGUqgxJ6amtL29rZlVpFIxLXy1NSUkQs69VqtpvX1dXtJ\\n\",\n       \"gBMHAgHbeDEe7nQ6ppEiAmVogRsR9EwGEghxiVA4Pz83P+Pp06dOV8Xi4NNPP9Xe3p6J9ciYIFJh\\n\",\n       \"kTA5OakPP/zQKhWULaOjo3r06JG2trbMwVhaWrKPHV597Xbb2TEff/yxXr9+rXK5bG7zxcWFlpaW\\n\",\n       \"9NOf/tR86sXFRf3BH/zBe9//B1Uzs6OBOLRaLZVKJfV6PW1vb5sMhPdZq9XS1taWpf27u7uSpD/+\\n\",\n       \"4z82ToxzJ6UIfsaMYEl1Qmp/cHCgbrerVqulYDDoxpK0VY5fkATYdbD9Tk5OtLu7q/39fVvbbm1t\\n\",\n       \"WQECzZP8FUkOh6QJhSvCgIRxfSAQULFY1M7OjimvIyMjevv2rSYmJmzHcH19rWazaZIQEQ4YhG9s\\n\",\n       \"bLie5j0DSZ6cnGhvb0/VatWl15dffmnJFYJgfDgoRSKRiH7605/e6/4/qJ0ZoguezKAHNBhjY2Mq\\n\",\n       \"FosmteDX1mw2XTfSgBHsiC0WTp9MGKGCNhoNhcNhLSwsaGNjww0lk0ZUJ2dnZ+780SDirUHzOdwg\\n\",\n       \"ofeDyDMcbgl6kkqlnF3SaDQ8XKEePj4+to6R5Few8qOjI0WjUSMmmJEzOOLBweEIo3CIXKhQSMqC\\n\",\n       \"YDQ9PS1Jd/jW2C6QQ55MJu+ExmP7y7993+tBLWaEmnhNAOgDb7VaLePMlA4ctYyV8UCDAgp1kRKA\\n\",\n       \"aDByOeBKo6/jRsEoQ8SKjReDG9w+oZlK8k4/MTFhKwQeCMhR0WjU4tTR0VFzrJfUCdUAACAASURB\\n\",\n       \"VDm+ISNBlOdBokyg/MGPj0ZYeicqODk5MWeEMT28auig8Cn4zHkNXP/hmvR6PWUyGX8eJApAxwXB\\n\",\n       \"oQnkc3jf60GVGZlMxvG6sMAoO05PT7WxsWGft+3tbXvFQVHsdDreeTiuqYtxkz85OTHcxPes1Wo6\\n\",\n       \"Pz/3wmFBM8plzM6EkXp8mFyExRXyLnZZID2C2ilX0OXBa2bHZ3cbjg+emJiwPxyhm2DqU1NTHhJB\\n\",\n       \"faV86Xa7XpA42yMXgzZLM1mv1++M3FutlsbHx1UqlcxFQUYFPl0qlWwOAzx6n+tB7cz1et1sucPD\\n\",\n       
\"Qy0tLdkDbjiJ6vT0VPPz8yoWizo4OND3v/99L/zb21sVi0VTLTEyQdpDycGNh9vcbretwN7f31cw\\n\",\n       \"GFQkElE4HFaxWFQymdTe3p5mZmZsaQtBCHcjOvxer2fFCLvi5OSk7buoU4djzjCbwcARESuSsGEl\\n\",\n       \"diKRMDzHsKPf73vsLP2C03Fzc3MncxAnz06no7W1NS9SsHAkVVBZgQbHxsZMCWXTGB0d9YInw/s+\\n\",\n       \"14NazNAlu92u1tfX1ev1tLq6qoWFBcViMcvfi8WiRkZG9Lu/+7v6+3//73uXAycF3ltZWVGj0VA8\\n\",\n       \"HtfOzo4//MePH6vVapm/S5NXq9U0NTWlmZkZL3BJHu9+8skn6nQ6ury8NOQlyRYC+H4woKA8gIif\\n\",\n       \"z+e9ED/++GMnQEFawpeCdFTQGlxM4U4z0WOw9PjxY/Oiz8/PfSI8ffrUwUYMU4AA4U8zoMJ2C1UL\\n\",\n       \"WDRG64FAwPkmTC3D4bDevn2rlZUVU2V//OMfv/f9f1BlBjtMJBLR9va2ZmZmtLm5qePjY21sbGhk\\n\",\n       \"ZESbm5va3t7W9fW1/uiP/sh14v7+vvNMGOtubW1pYmJCf/Znf6bFxUUT5iENkWLKtI/FPD09rTdv\\n\",\n       \"3tjhBzYYabGBQEBv3771BLHT6SibzXohkDFCwhRDj3A4rJOTE83OzupP//RPzQhEywcXmjG8JPtu\\n\",\n       \"EEwEe29vb88ck6+++sp8aZyggsGgk2CxNsAjhIknuDH2vbjs93o9/Zt/82/8cxAkNDY2pkgkomKx\\n\",\n       \"qGq1qlKpZBLT+fm56/D3vR7UzsyMH+y4Vqvp6dOnPloZbYML09gtLS1Jkk0Ih9Oger2elpeXjTkj\\n\",\n       \"0BwfH1c2m3XE7/HxsSdxjUZDS0tLNjan3AGayuVyLn0WFhYsI6JBwsSFeAZ28Ovra62urno6yEgZ\\n\",\n       \"iA2JF6YqWO9SbszOzhpJYac9OjryuDqdTjsQk1obGBBSFdNBBAYEbi4uLjoACKN3ml+abxIAEomE\\n\",\n       \"ut2uCoWCSqWSJ7c0ou97Paid+fLyUrVaTb1ezxEPBwcHCgQC5lNQryG5J2CnVqvp7OzMIemYk2M6\\n\",\n       \"zo4WCAQ8vABL5e/QFmLq3el0dHp6qp2dHd3c3PgoZiFTEoCJYzvAQ4BRODl7wIiVSsViXB4Cfn5I\\n\",\n       \"7yzm8/NzJzsxqcPc/OjoyLnZ7Nq3t7ceTXe7XXOSydqmB6FnQGxQrVa9WDFlhOvBCcLDzESWr8P2\\n\",\n       \"lhi3970e1GLmOB0bG1OhUHDcWDgc9i7U7/cdYQABPhQKKZvNWjaFXIkunEBKbk6z2fRNn5iYUK1W\\n\",\n       \"UzqdNmwGJRJn+5GREbXbbd9UXDCR5+PEyeIHoyU7cHp62hYBiFuRd0E4wkgRUS8TymGBLAaN7XZb\\n\",\n       \"jx49crJrtVqV9IuHYXt7W5FIRNFo1Lg0wyLp3aS12+2qVCo5fg0SlqQ7uYTwm8fGxhwNEY1GPW1F\\n\",\n       \"nVKr1e7Fy5AeWJnBTYYsgzUAi3RsbMx8AnDoSqVi5x+OyMXFRScxFQoFx6VhWwD3F0cgTGFSqZTx\\n\",\n       \"ZSA3BAL4rREkiS6QhRAMBi0m5TRAtwgEB0e43+/ryZMnTkENBAIObOd98vNKsj0tZUAymfT4nWQo\\n\",\n       
\"BkszMzOS5GaRsufjjz/W/v6+x+98lsOEouEyBN3ksL0XpdPExISazabm5uYcAB+NRrW+vq5/9a/+\\n\",\n       \"1Xvf/we1mEEXkPIwWfre976n3d1d1et1TU9Pe9pUqVTUbDYt/4fv+/nnn2txcdFEI3aQi4sL7e3t\\n\",\n       \"WdKPi2a9Xr9TMszMzJjsP1zSMG6/vr7W8fGxj140gzxYyPtp7Jg2AsuNjo5qf39fhULBOO/19bVq\\n\",\n       \"tZpj1hi0oBiJxWKOHx4m+PR6PUej8XMmEgm1221tb29bRf6zn/1MhUJBr1+/1vr6uiPdsDoACWq3\\n\",\n       \"2xoMBoYhyTSR5CaUevzNmzfG5QOBgF68eHGv+/+gygwWAGJQjm+GA1hHzc7O6vr6WpFIRAsLC3Zt\\n\",\n       \"LxQK7txnZ2eVzWaNXLBDIt1nPA2XGChqWIEMmsHolmlYOp02rjwxMaHz83ONjIxofn7e2DTMNKZp\\n\",\n       \"pFCxg09NTfn4xok+n8/bkovmja9H1c176XQ6biCB7Six2FEvLi7cLDNc4nUgOoFQENLJ2D0ej7sE\\n\",\n       \"gjgFuQnoELMeyFmccO97PajFHIvFDMZzYzEVp6mizMBkEANAXIgYGaMmgXyPSSGdeCQSUTabVSQS\\n\",\n       \"sUQrkUj4e8RiMXOEUZ9QfpRKJU/POp2OxsfHbUYIY40RPJL9SCTicTwsNth/TPsoldAuwskm6/rm\\n\",\n       \"5sZ85EePHtnEhihkRAmUOysrKx7F83nwUKPC5qFiwBONRu1Vx4m1srJiO4PBYKC1tTXzzDc2NnR4\\n\",\n       \"eOh7dp/rQS1mkAlk/DRdgUDAmR/hcNilCB8eDR3TuKOjIzdvyITQ4fX7fR/3dPG4EQ0GAw8ukNUz\\n\",\n       \"VcRSdmRkxOmr8CDg/WJ9xTHPkby9vW0VNiNjMg+RfGHCSOlCUgB6x0QiYV/ndrutvb09x7PRBGMs\\n\",\n       \"g1UWE7tisWgkZXgoA/w5HLUBMsTPO6xCWVtbM6OPIRQ7OIOm+1wPajEz8YrFYspkMlpYWLhjUIIO\\n\",\n       \"7c2bN7bVyufzdg1KpVL+Wiy0YJrhw4aBCsSdR48eqd/vO8R9YWHBLj7hcNjEGsLooZb2+33Nzc0Z\\n\",\n       \"G5+dnVU+n1cgEDC3hFOARQEDDQOVq6srra6uGiLj/QN7dbtdGxjyUGE1m8lkTCiCGjs5Oem4YiaE\\n\",\n       \"YOG4f+JeSvPJZwWKg4f1wsKCJicnFY/HXa9XKhUlk0kVCgU/aExVgQrvcz2oBpAOm9BKjj0IPNVq\\n\",\n       \"VWtra3acbDab+slPfqLf+Z3fMU9CeqfoKBQKNhGkMaLJYmrIrokBzBdffKGlpSUnn0ajUYsCUIfc\\n\",\n       \"3t56isaUjigGjFjIGWFHYzFeXl4qFos5aw9zSExeGo2GLQnIrIZkRVN8cHCgq6srVSoVvybE/M8/\\n\",\n       \"/1zRaFSdTkdXV1fa2NjQt771LWPGkJBoImHUYSIDsnJzc6ONjQ19/PHH2tvbs5cIzqWMwMHLCc7c\\n\",\n       \"2dm51/1/UDtzo9FQNBq9M7qV5GYvkUh4d6NUYGdm52NnhjQz7C8My02SNYB8DcoRGhv8ncFaY7GY\\n\",\n       \"+dNM2XCyB7KC0QfkBQUU939y/7CmhZ7KbinJQ5B+v+9/gy8Fp8DCwoK1i9Tr2NQOy6emp6fthB+P\\n\",\n       
\"xzU7O2tYE0wb9TXO+5D5sXigtuczYyfu9/sKBoNulDOZzL35zH9hizkQCAQDgcDzQCDwf3zz++lA\\n\",\n       \"IPAvA4HAZiAQ+D8DgUBi6Gv/+0AgsBUIBN4EAoH/9N/zmncYbfF43PUdgxIUwagmIMeTeVcqlUz6\\n\",\n       \"QRERiURcdnCzmOpJ73BfdGwwykAFgsGgTwoeDEkWrZKvgjEMjDe0eRjbMGFE+c0pwvFOUzoYDFzz\\n\",\n       \"N5tNcx5ohuFugPgwyGF6R34J743mF4f9QCDgEwsCP+6e/X7fJ9jMzIw97piK8lkxNCIgk2EQm8/7\\n\",\n       \"Xn+RZcbvSXolKfrN7/+OpH85GAz+50Ag8N998/u/EwgEPpD0X0j6QFJB0h8GAoH1wWBw+8svyNz/\\n\",\n       \"+vraw465uTmHwrBbocP7zne+o+PjY+8oIA7r6+taXFxUt9t1HY68H7n/4uKiDg8PjTywUIH2hkk/\\n\",\n       \"S0tLhukgxvOay8vLikQibigZ/ExMTOjJkyeuddvttubm5jyYoBeA5TczM+Pp4K//+q+r2Wzqhz/8\\n\",\n       \"oUqlkl8fEtTLly8dAVev1xWNRr2DMkwhygFmnCStrKzo4OBAyWTSKVmkFcDNgMfS6XRMUWVAhDXY\\n\",\n       \"4uKiarWalSiJREKJREKFQuFeC+ovZDEHAoE5Sb8t6X+U9N9888d/XdJvfvPr/1XSH+ndgv4dSf/b\\n\",\n       \"YDDoSyoGAoG3kr4n6c+dfbLrFotFFQoFhyh2u12dnJwoGAzq5OREIyMj2tvbsz0U3IJ4PK7NzU1b\\n\",\n       \"37bbbTvaB4NBJ5XC38AbgjocUhApqwsLC3rx4sUd1yCQgnw+r2q1qsPDQz8c6XTaNer+/v4dU/Ji\\n\",\n       \"sXgnYSqVSrkhbTQabnh3dnY0Pj6uL774wgQkxu4w/TDJYcAxNTWl5eVlnZycqFwu69NPP7WY4ZfD\\n\",\n       \"3gnlhFGHri+fz2tra8sELgZT8Xhcx8fHxqNPTk4ccccJEwqF7h0E/xdVZvwvkv5bScO7a2YwGJx8\\n\",\n       \"8+sTSWho8pIOh77uUO926H/rormCIMQCQNqEjJ5mJJlMWscGPIcHBnUftgKMeamb4SWTz41ekPoY\\n\",\n       \"ST6RC9TLwGCUB3hGh8Nhzc/PW6pF3jSsNIYb2OaiDcTPjQaNh4VaF4sAJoe4k3L0Q0/FVJFhysTE\\n\",\n       \"hPkVwHAY12CzMDxm5/OSZOydzPGbmxv3F3zmw+HxvD9KsPe9/n/fmQOBwH8mqTIYDJ4HAoHf+vO+\\n\",\n       \"ZjAYDAKBwL+P3Prn/h3TulAo5FDFbDbr+DGoiIg78W1molar1dRut50dQjjNRx99pHa77boZwn4o\\n\",\n       \"FNLr1699XLKrXl1d6dGjRyarszAxER8ZGdGHH35oOKzZbDoEiIkbZcnY2Jg++eQTnZ2dqdVq2bgc\\n\",\n       \"tQrfA7YaDkZkpoAbgxqA+GCyPjIyopWVFQt/yefGM5nmGI7zN/fHglbKomQyaZsHuBnDAZl4ZpdK\\n\",\n       \"JQ9zMB4fHR01Ln6f6y+izPg1SX89EAj8tqRxSbFAIPD7kk4CgUB2MBiUA4FAThJxnUeS5of+/dw3\\n\",\n       \"f/ZvXV988YWB91gspu9973v2NJ6ennYgZavV8gcLPtxsNl2rknrKqHhjY0Orq6t3AiOldwoR6Z2v\\n\",\n       
\"M9M0VNns6MiuSH7FQLBcLltPB6aLMyn+yZI8esepnxwQJoGzs7M6PDz0wARNIEproLP19XXjzMBl\\n\",\n       \"NKiSnA3Ybrc1MzNjRiAC2WFbA+r9i4sLraysmFZL5gpWva1Wy9NWgjNxPo1EIup2u/qDP/gDR3L8\\n\",\n       \"pcsBHAwG/8NgMJgfDAbLkn5X0v81GAz+S0n/VNLf+ObL/oak//2bX/9TSb8bCARCgUBgWdKapD/X\\n\",\n       \"YOG3f/u39ezZM62trZn/S+cOBRJMFix32KkHqVG1WrXUX/oF6Z+Qeeo8vCzm5uYcPD/MVMOU5uTk\\n\",\n       \"xIMNJndAdvCEGSODIEjvTGGKxaIHGSyKer1uiwCUK6Ojo2bqVSoV+1ygpuZUOTo6Urvd1tHRkbF4\\n\",\n       \"+gqyTVDc7O7uuvkc9rkmLxHzRR5wBjAnJycW3TYaDdVqNVUqFY/JDw4O/P0eP36sH/zgB/r44481\\n\",\n       \"Pz+v+1z/MQxNKBn+J0n/OBAI/FeSipL+c0kaDAavAoHAP9Y75ONa0n89+Hfoa6rVqlUW+BOvr69r\\n\",\n       \"ZGREc3Nzkt5lBQ7nYuN8xL+5vr7WX/2rf9WOoWNjY/ZmhvM8jO3ifwz+G41GzZhDALq6umpeBOlT\\n\",\n       \"6ALHxsb06aef2pyGvG3qZWpqmlaiIPBQ5vtLMo0T9IQMFkbHy8vL+uijj3R8fKyTkxOl02lJ704x\\n\",\n       \"9IY0iRMTE/r000/NJ1lfX7dsbG5uzqUTPzfmktls1jwNKANra2va2dkx7rywsKDT01N9+OGH+vGP\\n\",\n       \"f2x+yO3trf7JP/kn772Q/kIX82Aw+L8l/d/f/Loh6T/5d3zd35X0d//fXo+SIpVKaWtry9RKIDLI\\n\",\n       \"RLgUgQEXCgVzjs/Oziyy5NhvNBpaXFyU9Av4j3oPDJejFr9jBiF4One73TvJVVjUfvDBB9ra2jLB\\n\",\n       \"KRaL6fj42Po8nP45NWCtYSADrRO/O7gn6A15n9lsVmdnZ9rY2PDUEAU2jeX8/LyazaaHJeDAQIZo\\n\",\n       \"9X72s595HI/iJpVK6fr6Fymr19fXymQyVspXq9U7tbn07sRbWVnx2P9XXnNDF5ZXjUZD9Xpd+Xze\\n\",\n       \"WDCSptHRUdVqNevwcOqBWIQh4dnZ2R19Xz6ft6kL/Irj42MtLS1pc3PTjvLUsQxOarWak6pAF2q1\\n\",\n       \"mtNih3dCxtE0i2C5mBFC3SQ0XpKnhLVaTdls1sMNdtRhr+RWq6V8Pq+XL19ay9hsNq173N3dNQNw\\n\",\n       \"fX3dD1Gj0TCKAc7OEIi+4ejoyIgN4e8IEnA9Oj4+dv0fCoW0ubnpxK2pqSltbGzc6/4/qHE2kBML\\n\",\n       \"SZLhKqxhWbT8Gd369fW1KpWKVlZWrELGlhbeLkEyTMskeVgCfsrNabVa3v1IfaJxBIeFpim9a6ym\\n\",\n       \"p6ft5wxScHt7q16v598zBgbOg81HhsnU1JTDMMGgqcrgh9DwAqsx+kf6xGcjyc3p2NiY3aJIASAM\\n\",\n       \"E/UNjS9mjZKcnXJxceFGmM8LBToOUQxZ3vd6UIsZzBJVMHXwcKQXU8CJiQnl83kNBgP/HgJRNBq1\\n\",\n       \"kSHwHpgsujiCexqNhon25AF2u10tLCzo+vraTR6eyjh2krEC6kBziCxpamrKRCJI/MQwMJ3L5XL+\\n\",\n       
\"3njcoSaHmE89/fbt2ztuTaurq8adZ2ZmjCfjTQ0vnOgKyoNQKKRkMqlMJmMrLiBMuBXBYNC+H/yM\\n\",\n       \"eO71+33lcjkbmu/v7zvy+S8dzvwf8uLoe/TokXZ3dz2sgHr50UcfmVvMQqUehA/BDkJWB4lK6Ao5\\n\",\n       \"OiEFnZ+fu65Op9MuN4ikQEzL8Q36US6XVS6X9eTJEw9xoGaWSiV7JrNDYudF7iDuP2DCjLiDwaAN\\n\",\n       \"x1moR0dH+vVf/3Uf91NTUyqXy8pkMpqamrLi5PHjxyoWi16U6PPGxsYsdkBdTX4gD2omk/HPEIlE\\n\",\n       \"7jR1o6Oj2tnZUS6X84l2c3Oj9fV1P6jcp/tcD2oxswt2Oh1PskgXhUZJTZpKpXR6emqvDEz8aGJ6\\n\",\n       \"vZ4bP5ohdHVYFJAKdXZ2pkaj4bixQCCgcrmsZ8+e6eTkRLVaTZFIxFRPLK/y+byPeHKpr6+v7xzB\\n\",\n       \"ktw4ogVkqHN+fu7d9urqyijKF198ofHxccdgEGjJRA5KKyaKfI9ms2mDxtvbW9XrdROOMK6B6Xd9\\n\",\n       \"fW0EhFMLnFuSU3Jvbm782SJawIcPshMptn9Zx9n/QS68JGCDochgVNrpdDw1g63GgKFcLlu9LMmE\\n\",\n       \"fbK3wWKr1ao9iMFx4WRcXl7q8PDQQ5nNzU03jJQRPBjSu4cPfw4eEtyLGJkz8pXkCWWv11Ov1zN+\\n\",\n       \"jckLdgW4+Pf7fZXLZYfPl8tltdtt48g0j5CwSqWSm8VQKKS9vT0vUpAd2If0J9TmBHIyqaSGHx0d\\n\",\n       \"tWd1sVg0w25qakrFYtF9y8HBwb1Zcw9qMdOoIF1iAgY1sVqtqlarmUNRLBZtHJPNZh1Kw9exACET\\n\",\n       \"AV9h3s2xyGtK8s5FibG0tKRKpeKdb9jjmUZwf39fp6en1tSR7AT7jLoXQ0eEAZwO4XBYjUbDtS9E\\n\",\n       \"I+DCSqVi6y+4JSxMml0e8MFgYI/rdDpt9GR8fFyvX7/2yTHc6LKbk60tyZg0D4Yk5fN5G8+g4Lm+\\n\",\n       \"vtbW1pYGg8G9yfmB+/p7/cdyBQKBwd/6W3/LcBcmK1NTU8rn89re3nb6KhDT4uKi/sW/+Bf64IMP\\n\",\n       \"fETiLs+ABMEpMn4GAbgFpVIpCzUlefFls1mjEOyCPBzU54y4u92u6aGSjLB0u1078w8rsHu9nmZn\\n\",\n       \"Z+0YynhZeuctx/eA4IMOUJKj5c7Pz/X48WM9f/5cCwsLarVanohOTU0plUrZOw94LxwOa29vT/Pz\\n\",\n       \"86rVavrOd76jg4MDD094DUkO3+E9sMlQFvV6PaVSKQcSZbNZ7ezs6B/8g3+gwWDwXmLAB1UzU0dS\\n\",\n       \"25JGen197d0NJXM4HNZPfvIT+09QJ6Ktu7q60uvXr5XL5e4ouDmquTHDOxVGiAw1ZmZm7gTWUK+v\\n\",\n       \"rKzo1atXd6LLRkdHdXp6qkQiYUZeOBxWv993qYFdLH7LS0tL3pXJa0mn0y4ppHcTT5pTHOwh+PNz\\n\",\n       \"VCoVnZ2d2dsDkhWLmAVIniFmjGQs4oREmREKhRzYSVJXqVRylgvfhxMUaJTm8H2vB1VmQCXEIYc8\\n\",\n       \"arwpKAtoxgqFgp4+feoINNzt8Y9IpVKmQ+bzeSuncR2CaA/PIh6P2+qViSHHLoR8giYhBA3j3DDq\\n\",\n       
\"JicnbTMG8R/jRpAPfDTw1cDeAAuEfD5vAn4wGLSIlnE7Uz5IT7FYzLAfsij+L71DTWKxmOkAnFjk\\n\",\n       \"s7DwcW0aHx93VPJw5AbMQGi2qFw4Ue5zPajFDIzEWBpDPsB4dkDwT5ovFhq1JX83Pj5uGAt4D24x\\n\",\n       \"N4yRMfU2JQNNFjgsN4whAaXJYDCwSJTXxMWfMTwPGLsfv+50OpYcgZkzGCHm7fz83GUN5pAgMPF4\\n\",\n       \"XOl0WrOzs4YcIWdJMrxHrU0ZMuzzTOkEdIiA+OrqStvb2+Zew0ch/4+HiXr6+vr63r4ZD6rMqFar\\n\",\n       \"Fna+ePFCq6urVkZUKhUNBgMzyrLZrBUgIBmJREJv3rzRl19+qd/8zd+0kpkdB/IP9loHBweu0VFS\\n\",\n       \"z8/P+wiHRvry5Uutrq7aVw6HfBYBHhNMLvGq6PV6zg8JBAKeUMJ6IxTz7OxML1++1NOnT1WtVrWx\\n\",\n       \"saFcLqdGo+Fx9He/+12XLZRF5XJZlUrFKhC+dmpqynazKGSCwaDZe+zoqVTK6BCsw7GxMafFUtqh\\n\",\n       \"cUQ4gCHlMFJzeXmpP/3TP73X/X9QOzOE71qtpt/4jd9wk5HJZLS8vKz5+XktLCw4n2R9fd2LZ3V1\\n\",\n       \"1Q3Pr/3ar1kvFwgE9OzZMw8k4FeMjIyYGx2LxZRIJFyyUDJwcxcWFix2xfyQsS4awm63q5WVFe++\\n\",\n       \"+H+k02nbdi0sLLiBXV1dNRV1fHzcKbCRSERra2t69uyZDWeSyeQdESmZ36izP/vsM4VCIS0vL3t6\\n\",\n       \"GA6HVSgUbPV1c3NjSy6U2ZwSlEoEuSMmpsYfGxuzmePk5KS51PjTIXxdX1+/1/1/UIuZumtqakrP\\n\",\n       \"nz93t9/r9eygj4UAuSH7+/v2v7i+fpd/t7+/72O63+/rxYsXHrdCUuc4Zxfudrva3d3V9va2nYko\\n\",\n       \"GYrFot3lsYPllMDPrd1u6/PPPzc/gl3/8PDQ/s3lcvnOe5qamtLbt289TKlUKup0Oh748GAwkgbx\\n\",\n       \"AKEAa9/c3FQoFHJ8G+VSs9n0/8GxW62W6vW6tra2XM8jRfviiy9MGsIUcbicwqPj4ODAvBA41ldX\\n\",\n       \"V7bWfd/rQS1mTMAZSRP+gjKZm/fixQt33YxhqXXJMSGeASXH+vq6RkdHHXbDhA1HI0xMpqenzbTD\\n\",\n       \"TT+RSNg8EHSiWCx6XBwMBrW+vq5CoaDx8XH/HnbZy5cvzXir1WpaWVmR9G5w8eTJEx/ZcB6IsIBH\\n\",\n       \"0mq1lMvl7vQM+/v79rogSmJ8fNzxFcOefPQRpGphKonYAUwZvgkPOicM7qlTU1NaXFw0TwbWIpHN\\n\",\n       \"1Orvez0onPn3fu/3lM1m1Wg0bKGFDIhmj6kXyuGXL1/qu9/9rrnAML6y2ayCwaDi8bj29/eVSCTs\\n\",\n       \"iTEzM+MdKxKJ2OGn0WioXC7rO9/5jqEtmkn8inu9niE7xursrMVi0bKpfD6vZrPpwcT4+LjlTJIs\\n\",\n       \"ph02RJekR48eeQeHttlqtbS2tqZqterQTGBIbHNBLeCVrK2tObx9b2/PAl1Jd0QJ8Xj8TlOMAQ99\\n\",\n       \"CerwUqlkHSbKHjz2EB+fnJzoH/7Df/grnFl6B8Rj6DI6OmquMkOBi4sLVSoVXV1d2dwF8jkZJoyW\\n\",\n       
\"k8mkyuWyHj16pFwu5zF0r9fzWJZUUtTfoVBIhULBhPTb21sT2OEqsDihmGLa0u12vdBisZgjgbHH\\n\",\n       \"pRTK5XIaGRmxOaQkP0xjY2Pa2dlxTBzvczgDEZ4GFFJOj06n41qW6d75+bk2NjYMqd3e3nqgRK1M\\n\",\n       \"gxoMBm0GyWeJxxw6wvHxcSd9QZ2FAgvP+z7XgyozUDTTXAD9QBCnXkYrB6Go3++rUqlYaAm8l0gk\\n\",\n       \"dHZ2ZiiPY5obgHEMSg+GKOx81MKQlKiVaaAIk0SkCm8auBDkBEU4wxygr2g0ekeyhI+HJDsNhcNh\\n\",\n       \"N4gYGzKs4LMCB0Z9g/8GJQCnDiVHNps1FCfJ8B3qF7IRsRgGlSEujQcBExu41ffNNHlQOzPdM/oz\\n\",\n       \"4n+BwOD+QpznmIdUxDABKie+w+xawWDQDDgWifSOzsjCQ6UNtkr+nSR7QqPQJpJ3eCBycXHhiSRk\\n\",\n       \"f45m/i02XKFQSGdnZz6+Dw4OzGADRmSxMHnDu4PanNIJS1sIWaAWCAFo4ubm5kzQh4fNa/E9rq+v\\n\",\n       \"9ejRIx0eHnpkz3QTigAnZ7lctgUDzfX7Xg9qMRNZxqBi2ELgW9/6lssISXbvubi4cLYJcNWTJ0/s\\n\",\n       \"0wa0hPE41FH0hkzxMFak7MArAtMWDF8Izzk/P1cmk7FQFp41R7gkE+zBmUElMIohPIf0J4Y/cJQh\\n\",\n       \"1SM0CIfDWl5e9tCEU2Ztbc1iU+y6mEZSK/OQ7+/v69mzZ2bJkeN3dXWlVCrl4QruqIFAwA0zlNtM\\n\",\n       \"JqNGo6FkMmlZWzweVyaT0e///u+/9/1/UIt5fHxcf/RHf+SdgiYtl8upVCr561KplDY3N8304liE\\n\",\n       \"51sqlfThhx8at0W5EolEjAJcXFz4RqCZ29/f19XVlZ48eeJBCjU5Rzz1KiGQjLKpF0Elfvazn+n2\\n\",\n       \"9tbE/16v52ObcT2j6WGrWTLDYdxRokC7HB0d1fPnzzU6OqqFhQX1ej0dHh5qamrKTenBwYEk6U/+\\n\",\n       \"5E/0gx/8QMViUbFYTMVi0RRRSqm1tTVtbm6az8Ewh6DON2/e+H0Pm8GjEaOx9AAAIABJREFUECcl\\n\",\n       \"izyY+1wPajGHw2F9+umnkqT9/X3lcjlH+FIijI6+C1b/9NNPdXh4qF6v52EENTfqEbK4i8WiPvnk\\n\",\n       \"E11fXzuJ6vLy0vG/sNAI5gmFQjYCDAaDajQavmEMEEqlkkWylAT4shGTDHRFRjVlEWFCt7e3WlhY\\n\",\n       \"MI2T3BFG52NjY55IPn782Gbei4uLtpLF40LSHfQnnU7r6dOnVsYgJ2OqSMOXz+fVbrdNbY1Go8rl\\n\",\n       \"csbBnzx5Ikku7dj15+bmLHIgGeu+QfAPqgEsl8sG4NvtttW/OOZj/8RiSqfTbrhIe4J0BCmJ8oDm\\n\",\n       \"BSQENQj/BpvXk5MTNRoNHR8fq9lsGhqTZG40rkl7e3u2O4DpRrjN0dGRzs7OdHx8rFKpZOwbo3Ho\\n\",\n       \"p5VKxRERjKlRcnPasDPi7onNAZ8NMil8+n7Zfw81CGN/eBUoyodFBDjg02CD7fd6PeVyOUeo8VAg\\n\",\n       \"USsUCsbP3/d6UDtzLBazgcrjx49teTU1NaVcLqfj42P/GVTJ2dlZzc/P6/j42LDXs2fPVK1WNTMz\\n\",\n       
\"o8nJSa2srDhhtVAoaHd31xmAwzawNzc3mp+f19XVlU1nzs7O9OGHH9ooER706Oio/fDm5+e9+MB7\\n\",\n       \"P/vsM52enro5ZKDA4lxcXDRaAc6MgSHSLcSxjN/Hx8f15MkTZ5VEo1ElEgmHvWNsODU1pWQyqU8/\\n\",\n       \"/VSnp6e2q/3ggw8s9wIbnpiY0OPHj90rMHqXZFbh48eP9ebNG4dwghrlcjk3iVdXV1pYWLjX/X9Q\\n\",\n       \"ixlrq0QioVKp5KiFfD5vFly5XHbkGTsHtlpnZ2e6ubnRj3/8Y33rW99y140XBrs3MiiI5+zK4+Pj\\n\",\n       \"trKlbgd7ZVedmJiwXS7/BqfQsbExk3tgvKFsBpID193Z2VE8HjeHGNNEyPws0FAopHQ67UnjxsaG\\n\",\n       \"jcvJ1YarcXJyckf6BK4NQjJcKyOjokxidD5sc0sZ9ZOf/MQCBfLIm82mNxNgxb29vXvd/wdVZpD4\\n\",\n       \"SdmAATecY+KFi8Wij2QaL8a54MQ48EMNZSEP482tVsuvD46K5cDZ2Zl97pBnUZOXy2XX76VSyW7+\\n\",\n       \"TNRg6gFpoUZhAXGEg1+zOHEm2tvbs6PR8fGxnj9/rmaz6TobSwPwZkl3yqJarWYjckSzfGa8Bgy8\\n\",\n       \"wWBgmdmwRS/4+sHBgYW7ePbh88dDge8civP3vR7UYgZu6na7XqTIh4CvWHwTExN2pZTe5W1gVM6R\\n\",\n       \"SeeOjSzjbuRNNzc3hvWwo8IwkdEv+Sa8FgMG6lQaJ/Bd/p5pIwR2VM9o9fh9t9s1J4PviVsoPwdN\\n\",\n       \"MAMe6Z2YltMFQe74+Lj51ihv0OqBNTMlROaFVW4wGLQotVarmaeBcJWfgYeU04cTcnZ2VvelVjyo\\n\",\n       \"xcwVDAa1vLzsIxuMNBqNan5+XolEQhcXF/rBD35g0L5arToMZ3l5WZOTk1pdXbUyYnd31/q5+fl5\\n\",\n       \"m5vgEQdRplgsuhwAviPgkqHBsJ0AsQg3NzdWVMMVpoaFBD8sB4MkBD85Eono8PDQY/GrqytlMhml\\n\",\n       \"02nrBkdHRz00IZwoGAzahJEHj1JmeXnZpCDYfcMBltTt+DWTghuPx+12RLYgp2QikdDs7KzLQWIz\\n\",\n       \"xsfHf0XOH76AeJD/4OmAqzvZeq1WyzxjuvxGo+ExLYgGGR4498AMw1YA8hGCzX6/rw8++ECRSMSO\\n\",\n       \"P9PT08Z4G42GE5Y4MZB6Db5JumL8zi5IWUFyKxNN3gMcYXY3CEQ8PCxEBkgoxzudjndwYtqYgiIN\\n\",\n       \"y+VyTg0AMcHIhmEIERMTExOeeGK3xZifBAISAcbGxiz1QvnCoOde9///o3X0H8XFBwdJBnlRs9n0\\n\",\n       \"MckY++bmRm/evDFSQOglu1YikdBPf/pThUIhdbtdR5/RjK2srFgIcHNz45p4bm5OoVBIW1tbxnFR\\n\",\n       \"WDB5gwAFU4/3x8PCIAZ+Nb9Gzzc5OWmHfwSleOUhcZJkGwBI/MFg0DZbfC6UEWDYjMnJScHyi4XN\\n\",\n       \"w5tOp/Xll18afYFIBOF+b2/P3Be8q1HehEIhxx8TYTEyMqLl5eV73f8HtZhpKIC/kEQRkUsULoR8\\n\",\n       \"hghgwSw4GrJhuRA7LyNxFhnecoPBQMVi0c0Nejtiz1BR09TRLKLxY1QOoR2CDzZdkPrxjCuVSob4\\n\",\n       
\"8NAALtzc3HR9jvMpNTLvqdFoSJIV7NI7E539/X1HOfT7fRspDlspDGsMKX8wlESHyImGPGs4l4XJ\\n\",\n       \"ZKVS8ci+0+loa2vrXvf/QZUZ7DRM1MCG4/G4Op2OPzRQBTRs2AOcnZ157I1C5erqSktLS2o2m961\\n\",\n       \"EYOen59rd3fXgllkVLjgD0f1Yn5CREI8HjdPpNfr6e3bt0omk05xrdfrxmmZstVqNaXTadXrdYXD\\n\",\n       \"YXU6HfvrIdXCaCabzRodwN0JXLjT6ejRo0cKh8M2aeH0mJubU71e19LSkmOHpXcLHY8QCEtIyaDX\\n\",\n       \"YhozOTnpQKPhQVE4HNbBwYEfPKaUvBaU1ve9HhQ5/2//7b9tE0BgNpwrhxUoxA4fHBzo6OhIH3/8\\n\",\n       \"sQF91MZ4NeOPHA6HLTfiuIYlBs46Pz9vUxOk+pCKbm5u3CTmcjmVy2XzKOBo4JzPgwOpabjuTyQS\\n\",\n       \"HsljoojhOIT4arWqqakps/p2d3f1rW99y6cA/GfKGfyq4TpDA221WndqaWrtRqOhVCqlsbExG7lQ\\n\",\n       \"KpBwi9KdySBIzXATGolEVC6XXcIUi0X9o3/0j35Fzpd+kZQKCb9QKOj169f66KOPdHR05A+YkoIj\\n\",\n       \"+vDw0IaLpVLJE7z9/X1ze6EukmmXz+e1s7PjmjUQCOjVq1caHR3V/Py8tre3TX9kFI3ItVKpWKOH\\n\",\n       \"O1KlUjGnmZwTckRCoZDK5bINWCQZV6aUgNfADkzONrsmKpiVlRV9/fXXzvUmIqL4TcYgDSS2Yclk\\n\",\n       \"Uq9fv9bl5aVPqNHRUX311VdaWlry5sH7hCW4u7vrhpoyazhVFjEDJdvs7KxLn/e9HtRipsumEWFh\\n\",\n       \"0T0DO0myjAlGHKPfhYUFvXnzxrBSOBx2OUANyKKEbgkMxTQPuf6wGgMOMv8el1Iu5PnDptx4SoC4\\n\",\n       \"5HI5fy+I/DwAkrzQh/8MNTe/ppfgfcPbgPuNcBc2HwaInEiSHF3MAmV83u12NTMzY9gvmUyagssO\\n\",\n       \"De+70+l4uMQD/Jc2O/s/xMXkid2KxYeJCqaEQG+4V4bD4TvHH+76sVjMyIj0iywU0AteH8MW5Eaj\\n\",\n       \"o6Mm+1BqYHGFcADlBgMTMFeIOPF43AMVoC3MbQiFpCSoVqu2t6L0IOUV4lK73VYkErEPxu3trfkg\\n\",\n       \"gUBAR0dHDrVkiMLPy4ibngFv5larZS43AxbEsbwOny0lCra7YMq9Xs96wPvuzA9qMe/t7dnkcHNz\\n\",\n       \"U5VKxSUFknZgqJGREdXrdauxSRUdGRlRIBDwv+t0OkqlUoaSJBlSqlar7uLh5lYqFW1tbdmNk1oX\\n\",\n       \"ByJqaRCPq6sr7e7uqt1uW+18fX2tvb09HR4eOvEKXBgDGqBHPI1BZo6Pj/XixQtVKhVP5N6+favx\\n\",\n       \"8XEz3GC3EbFMs4k2sdfreZrKZwcfZXt725O7eDyuvb09NZtN1Wq1O2XeMCWAzwlC0eHhoX9ebL2C\\n\",\n       \"waBr/Pe9HlSZMTwho7lCH5dKpRzzQPc+MTGhly9f6pNPPvGCxrQ7k8no6OjIaU/hcFiJREJbW1u+\\n\",\n       \"KdPT0x48gPXCJgPRYKiA38Xo6KinYcBSkjxgAPdNpVKq1+s2rMGAEKXK3t6ek6WwTYjH41a+MJYm\\n\",\n       
\"LTUcDvsYj8fjOjk5sSkkk8TR0VEnP0nya8K4I59bkss50m4RCtDoSvLnv7y8rE6n44EKYuDZ2VkP\\n\",\n       \"fXiv97ke1M48Pj5un+VIJGLTQxx5jo6ObG5IVBqxwEj+b25uXFfncjlP0jhyce1BG4gtF8c2Rze1\\n\",\n       \"5vz8vKFCxtnUrTDjSqWSDcYh6qAdZPLG8CYYDNriQJK97vg54UxQ06LwiEQiOjs7UyQSUSAQ0PT0\\n\",\n       \"tJl2PCDSu9qdBxSXfQhJ8XjccCF1NbtwvV53WYVYFU8Oml1KsNvbWy0vL/uERIh73xiIB7WYLy4u\\n\",\n       \"LF0iVbRcLqvb7drWFVVErVZTLBYzk+7k5MTG3NVqVVdXV84ShCtMSCVCTsbYku6w6SYnJx1ttrW1\\n\",\n       \"ZfhN0h2MGF0hN71eryuZTEqS3Y8Ir4fM1O12bWiDEBTK5eeff24PakmexjGoAUuvVCqmWxKPMezz\\n\",\n       \"DFR4enqqUCikZrMpSQ7WxAcDQhG5MMCQt7e32tracp2NrS5ErvPzc7169coPCyP7QqFwr/v/oHDm\\n\",\n       \"v/k3/6b9LSYnJ+11vLS0pGq1atEmVEQojESCkVaFofjr16/16aefegoGXgonAaMVjARZtMj2W62W\\n\",\n       \"ZmdnFYvFvAuyWIgwi0QiHm0z6ctms1aiSPJkjaYVPkUmk/EQY39/326k4ONMMG9ubuw7jUcIA5xK\\n\",\n       \"peLSAtRmdHTUwZhg5kBoEJYuLi5MKOKziEajHgZtbm4qlUrZZqDdbmt2dtYNOCUIFINEIqFyuay/\\n\",\n       \"9/f+3nvjzA9qZ6ZZIw4XDsBwOurNzY3H04yHaYBAAMhEIX+DoHWon5hu85rDQZVIpBDSBoNB77KI\\n\",\n       \"XBlWkHjFAwVPuF6v69WrVz4xkGNhDTA7O6tKpWJZE5gzxPmzszPvqrw/rGYpFchqGRsb83u/vb3V\\n\",\n       \"4eGhkRGaulqtZvydMT4kfCIyIBIhNID/wqQUpyRQCx4APq+rq6tf8ZmHLxTZuOdDjEGRgcyJ3Qby\\n\",\n       \"0OjoqL3dIpGIXYkIkCfeiykXHNzhXZlG6fr62goN0ILhmpUSCBgREj2LnFOFDBXYf5Cc+v2+9vf3\\n\",\n       \"NTs761OEKSW2VzSsCFWLxaJOTk78M1C+9Pt98zCGLQYkGXmBT4JBzPX1tWKxmBYWFty0MjBhx2+1\\n\",\n       \"WnY0pQGm9JmYmLDpSzgc1tu3b9XpdMwVv8/1oNCMTCZjUj2NHBMxEkZRUXS7XaVSKYenN5tNLS8v\\n\",\n       \"q9Vqmby/t7enxcVFRydMTk56tA2vIZvN3jFVgRXGAgZDjsfjPimgg05MTNx5ULCNpYxgVxsZGXFp\\n\",\n       \"0+v1lEwmPXCZn5932QRj8MmTJ0Y0UJ/DEUkmk3r79q0fZBCUpaWlO7pBFtbs7KyazaYJVUCf7LoM\\n\",\n       \"XvCPnp2ddXgRmwrWuVAF1tfXfVp+9tlnOjo68s91n+tB7czDlq/Yxe7u7jqS6+XLl+YrDwYDY8Ow\\n\",\n       \"w6rVqiYnJ/X8+XPvcBzJkH8YVvT7fUNyHM+lUkntdtsunxMTE34vBwcH2t/fN+cXQtPV1ZVH2dgc\\n\",\n       \"4Ex0fHysRqOho6MjW9sy+Hj16pXZcjSvTD1PTk7sqk85sbu76+YWyRK8FeRNaCgrlYqCwaDevn2r\\n\",\n       
\"s7MzZ6CAwQ+jQyhfGKcT6fb1118rEokYb2YMTloW+styuWxt4K9yAH/pIjt7GCojOJI/I4SSpgdI\\n\",\n       \"CIQBaQ8JSXTqGGzf3t7ao46bgyLl6OjIY2vk9LD0JicnPR0LBAKuw/H0GPa/oJwJBAKuMylVGGqA\\n\",\n       \"BtB8RiKROxM5xviSjF/jDEpwEfVxIpHw2Bk8m4cRghA2BmDplENMERnHU/dL8oAF/D8cDruZJYy+\\n\",\n       \"2+3ag+Q+14NazMSJQUHEBhZ39unpae9sKDaIGh4fH9fs7KxrQnBbeA0sOoz/CN6h+clms5qamtL8\\n\",\n       \"/LxisZjy+bxHyywscFmINpCUksmkuRt8fTQaVaFQsO0WC3xsbEzhcFjz8/Oul9kt8aKmHia+AsXJ\\n\",\n       \"zMyMm8KZmRkvUFANkgfAiYfH15QwsAQxv2FQg9Ib7jThQNFo1PcGrnQ2m7XmL5FIKBgMql6v6/Ly\\n\",\n       \"8l73/0Et5tPTU01OTt4xGMRwm6gCjmJUH+zA0i/Sqqh/WRRYyFILD4P9vB5BOZJct0tyXUrdiJE3\\n\",\n       \"O7kkZ1oz7GCSxi5HLcsDQSmBAyiNGpNHYEkmcwgU2OkZkPD+JHlHl+TPhZ9XkhOvKGcY4HDKwEXh\\n\",\n       \"52YXxraLyGIEwfwawS4ozX2uB9UA9no9k4kqlYpD0IkvYLGR2wwufHx8rEQi4SMPrLrZbLo0YKer\\n\",\n       \"1WqWWqGiGCYjATVdX197zAwP5Pr62to4BjeRSETVatXmiVggMBW8uLjQ0tKS5V63t7fmg/T7fW1v\\n\",\n       \"b3uHhf9MbY1O8PT0VIVCQRcX72IxGNUDPYJdk9gaCoX8ELNoQXIw0ME7D3gNTSO52PQEIDCcmBin\\n\",\n       \"Q7DCHAdnpvtcD2oxLy8vGzNGcZxMJq1CPjw81OPHj5VKpdTv9zU3N6dXr15pfn7ecBFyJYSfhULB\\n\",\n       \"Zts8LGNjY44qGxsbs3QIgSrdPHUxnAjqRY7ieDzu6Ao0dHjWjY+P6/DwUOl02lTSwWCgQqFgk5dE\\n\",\n       \"IqGlpSXn+EnSwsKCNjc3lc/n7VIkyelXDMkogchDZKJJ/mA0GtXs7Kyte1lok5OTVphjMcBGAIQI\\n\",\n       \"1RZsGzuvTCbjUT5+GwiJf0U0+qXr4ODApUWxWLTioVKpaHt7W4uLi25MksmkSqWSTk5OdHx87NIh\\n\",\n       \"FAqpVCrd8U0jtw6SfCKR0Pb2tjqdjhKJhBs3Fj2aw2w2q2Qyqc8//1zZbFYHBwd30qVglDWbTU1P\\n\",\n       \"T1shgh4OTjUG6ATb4LbPrthoNDwlRHaFSoSQeDDg4bhfGkkGOIVC4Y5K5+XLl1ag9Pt9RSIR7ezs\\n\",\n       \"aG5uTo1GQ6urq66nKUGgcw7bKVSrVYVCIW1sbCiRSHikT5gSGPe//tf/+l73/0Et5mg0asd7TALZ\\n\",\n       \"fZDhU16Ew2FFIhF7UCB5v7r6ReZ2qVTS3NycO/FoNKpHjx4Zw6aRHBsbU6PRMCowNTWlZrNpf2Ka\\n\",\n       \"QqZ0CFmHE5dwM6Khy+fzrpsZceP5wQ47MzPj0+Lt27cmWUny4uOBZleUZK+L4QwSGs12u22fan5m\\n\",\n       \"6V0/Uq1WvbiB59LptLrdrrLZrPsSppAkyaKuSaVSxt6Pjo4c0cY9+uSTT/TFF1+89/1/UA1gv9/X\\n\",\n       
\"zMyMlcXkgQwntwJnsQgjkYgbJoSXLGgoiuwkHJGQe7gJ0WjUQw4W88rKitlmOAdJMkmIGpQpXyAQ\\n\",\n       \"0MzMjGKxmBlmDC5WVlaMqkBmIu0qlUopmUwqHo9rdnbWdl1YEpyfn5vrfHt7a3Ny6KcMlnhIoMwS\\n\",\n       \"Jg/n4vb2Vul02uw5lC70EjyYpBA8fvzYmYeUQMOREisrK5qenvZgh//f53pQizkYDKpYLKpUKhnR\\n\",\n       \"OD4+NuQDrjucEzIctINjD5gsJBweBIB+OBksymKxeMfSCkrk4eGhPZ7xVy6Xy7a8BceGhbe3t6dS\\n\",\n       \"qeTG7fr6Wrlczuw/lDFo7oLBoBs8QnaGs/6gXvb7fR0cHBiRqVQqNrVhTI16He83vj/fo1aruYGr\\n\",\n       \"1+tqNBoO70FMgA0X0cvYDjBBxRZ4bm7O77vVajmo577Q3IMqM9LptJlaFxfvUptyuZzH18SmgTBk\\n\",\n       \"MhmVSqU71rTIq4aTVcfHx12KMHWjhOl2uyoUCo5JA6dOJpP2W0NIy0JbWFhwqYO5YiKRcCNZq9WU\\n\",\n       \"SCQsDqBUGQ7xkeSaE/gNm16a0lQqpYuLC83NzalQKNitlAdkuCxghx7edVFk39zcaGVlRZ1OR/Pz\\n\",\n       \"85JkhiENHgYygUDA42/qYh7m4XqdfgC52PDw6n2vB7UzQ9phuifJNShDABowPmiySKBBMsxgp8ag\\n\",\n       \"pVQqmYIJXgu2C8wE8RybW2AvKJPgymQSgt1KMlEfNhlKFSaScDTAxYEGsYOdnJzU/v6+J5X8/DSM\\n\",\n       \"cJLB1xl0BINB02IhYPEz8JnwEEgyR5mwTH4GRueXl5f2qoNiixSMB3PYRBGN5rAS5n2vB7WYkd5L\\n\",\n       \"UqlUUr1eNxmcupfGplKp3HEMRV3MqJdjEItZoDMUI1dXV1pcXDQzj10WMjujbB6gYRUy+X/Fb7JC\\n\",\n       \"ms2md2Wmfdvb22o0GpY04X9MzQuMeHx87IWJoSELCV7F6empVlZW/q0dm4eKRc1uC5TGAuckOjw8\\n\",\n       \"NP7c7/c1Ozvrmh9DRAZOu7u7pnpS+8/PzztCgxOC0yYQCCiXy93r/j+oxQwVkqgzpk2gAF988YVV\\n\",\n       \"JNTHWMjCRYabwDQMiI0F2Ww2Xe/y9fv7+3fGxzhlBoNBUzkvLy8t12fwMqzJA6GAqD83N+fdn4YT\\n\",\n       \"2T7Iy9jYmNbX1z1KluRAePLBcd7/+uuvbZEF/DY6Oqpqteoe4c2bN6a3UiqNjIwol8tpYmJCT58+\\n\",\n       \"teSMh5zT6M2bN45shlkIJRZBL1NIsPiXL19qa2tLvV7PlmX3uR5UzVwqlUzfpB5EECpJ3/72t5VM\\n\",\n       \"Jp1wFI1GzX9eWVmx+SHcCthmHPk4XTLsuLi4UDabdX0IFfL8/FxLS0va39/3eykUCg5oJ0Ma/jFl\\n\",\n       \"DST9er2uo6MjPX36VF999ZUNBcHJQQzYAaemprS0tGTTcYhOWM7WajXNzc2Z34GKPZlMKpfL2csC\\n\",\n       \"gxxcP7HYYlIJ+sFGgLi13W7rt37rt1yCDAYDPX782J55w9mI2Dc0m039xm/8hv75P//nSqfTymQy\\n\",\n       \"2tnZudf9f1CLGQfNy8tL7e7uan193aNWVAy9Xs8y+WKxKElOlBoMBspkMnrz5o1dkRYWFiy5oo5k\\n\",\n       
\"LHx5ealms+malPqXAQniWcoYasWxsTE9f/5cT58+VTAY1P7+vnOxEQUkk0ltb29renpaGxsbGhkZ\\n\",\n       \"8S6KgTcE/evra2vv4GdgUXBycmKIbGtrS4lEQuFw2AT+WCym3d3dO7yM29tbLS4u6mc/+5lWVlZM\\n\",\n       \"qJdk1IMyKZlMKhAI6MWLF4YtB4OBSqWSVlZWjOKAr8MrCYfD+slPfqKZmRlbKvAe3vd6UGVGo9Ew\\n\",\n       \"L4BdUJK9j+Ets4NVq1XX0wTq4IhJWurR0ZHd4rGgpU5sNpt3Xocmbmdnx4lWKDHgLWPyQh429SMl\\n\",\n       \"y97eno968kmgRkJ4Pz4+9micqIZyuWx6KqbjrVZL/X5fxW/yqiE6MZpGmAo0iRcdJjb4MdNUYzXG\\n\",\n       \"QwcVFHX1sEUtmwpTyuEkLHB0PhMU5m/evLnX/X9Qi3m4SwfQH5YtkXCEreqPfvQjHR0dKRKJ6O3b\\n\",\n       \"t5LeWVtls1lFo1Gtr6+bOwFycXR0ZC84OnCQA2AscGK8JiDdo+6QdGf4sry8rFQqZU820BEgLnas\\n\",\n       \"4Zhi8OlUKuWSgtiFRCLh4c34+LgFvcNMPPw7Tk9PlU6ndXNzo7W1NROFfnkEjv9GJBK54/rZ7/eV\\n\",\n       \"TqdVKBScIAs/Ay44TR42tpLMHuQU4IS7z/WgFjO0Q2pKfs0ABVgunU5rYWHBNrL8ORBRuVxWr9dT\\n\",\n       \"p9PR9PS0R9O4u+fzeeXz+TuUS+pv6sTb21tTGkOhkGZnZ73j8YCgXAYxQIRKWGUkEjHjjxIoEonY\\n\",\n       \"4wNvEPgOTOuowcF1IS/hyREKhRSLxXR5eXnHN2Rvb88NLGSphYUFE51I5Uqn0woGgyqVSpqamlKv\\n\",\n       \"19Pe3p7a7bYDRCuVihc2ppPBYFDz8/Oan5/3OB8l+zAJ6n2vB7WYQSdubm5MbpFkiAmFdL1etzUU\\n\",\n       \"Ub0wwJDic8RWq1Xb20LgOTo6Mk5aKpW82OLxuMWww87zt7e3llPhA4d8C686EABQEOpYBigc05eX\\n\",\n       \"lzo6OlKr1VKz2bScCuTg4ODAxCGwbyaGkvx/4o7fvn1rzJgUWUb62MyC/tDgMoBiweOfQfnBlJVJ\\n\",\n       \"I7xq+pCTkxMr3KmnR0ZG9OGHH97r/j+oxYw6hFKBiyYpHo87Xo1SAguqYWokBoUYfQ9LmbLZrCOH\\n\",\n       \"4/G4d2lQDMoWdmsGG+l02qgBJCdMt+GJ3N7eupRYWVnR6Oiocrmcxa9TU1OamJhQNptVLpdz+A7l\\n\",\n       \"y9TUlObm5pROpz3lxB9j2AIrFAo5LXV9fV3RaNS0WHR7DJbW1tb8QOLkORzaQ+kGR5zPPxKJ+GRC\\n\",\n       \"ZhaNRpXNZrWwsKBcLmdRLCY4NOTvez0oNAMjwlarpePjY3344Yfa3t62pB0ojV3t8vJSGxsbZrHB\\n\",\n       \"9nr16pUSiYRevXqlxcVFHR4e6rPPPrNJYC6XszsPwtNIJKK9vT1Fo1Ftb2+b6TY+Pq6DgwNPxpia\\n\",\n       \"bW5u6oMPPlC1WlWxWLR6Gbejw8NDLx6mfNVqVYlEwkMVSoLz83Pt7+/b3LvZbCqZTOr4+NiZgtI7\\n\",\n       \"4QDE/1gspnQ6rd3dXbMCZ2dnLQpIp9N69eqVwuGwPTOmpqb085//XB999JEdkmjc+Gw7nY6mpqa0\\n\",\n       
\"t7enTCajL7/80nU3zeBwhszOzo77mFevXt3r/j+oxQwPF4J5KBTS6uqq4vG4VlZWHGqDrzF1KHjr\\n\",\n       \"7u6uBoOBlpaWlEwm9fjxYy0sLHgYwq6CNVY6nVar1VIul3PjRkQE0cU3NzeOLSYkaHJyUh999JEG\\n\",\n       \"g4GHEpQwpFHlcjmXMzSGKysrFtKyMyPnxzqMmhUKLMY46AehrUJ/xUBcknd4pqOYiUvvUJ7b21s9\\n\",\n       \"efLEuj+GH4VCQZ1OR6FQyM3msCYS7nQoFFI0GvUDSv+xtrZmUevGxsZ73/8HVWZgmI0iA9wZZGJy\\n\",\n       \"ctIZf7lcTs+ePdPp6an/TS6XcyYdiopOp6NcLmfhar1e947b6/X05MkT8z+y2azm5ua0sLDgehAD\\n\",\n       \"lGEjQ7gT/DqVSrk5oxbHIxp6JzxsBLnPnj1TMPguzphgnqWlJTeF7JpLS0tuvhKJhCYmJhzhQBmS\\n\",\n       \"zWYViUQ0PT2tQCDg7zkzM6NoNGojR0ozAjLhdo+Pj2t+ft4G7vQctVpNIyMj5jED5RUKBU1MTGhu\\n\",\n       \"bs5wYSqVcsDo+14PajHTZAUCARWLRVM9z87O9Pz5cxv6Yc79xRdfeHeDnthoNDQ2NqZyuazNzU1d\\n\",\n       \"Xl7q5ORE09PTZr0dHx9bxEouyvARyv85HfCWgITPa1OfHx4emhkH6R/vOHZrYuGQXf385z83rl6t\\n\",\n       \"VvXy5Ut/b4KJcHhqNpuq1+uOOAZmBFEpFosql8u29cJa4OzsTLu7u44SrlarOj4+tpH45uamEaBS\\n\",\n       \"qaROp+NkXMx0sHmgyTw9PdXx8bFubm5Uq9XU6/X01VdfGWu+z/Vmi5xbAAAgAElEQVSgFjPuPufn\\n\",\n       \"5/qzP/szjY6O6uDgwDXbq1evbDyCbxyyIfDVRCKhr7/+Wqenp1ZWswhgjjWbTVMaOWa73a62tras\\n\",\n       \"zGDYIL0zAoetBkWTRXp5eelQH0QEqEe2trYcwdZqtcxXrlQqJgKdn5/r6upK6XTa9gmvX79WtVrV\\n\",\n       \"/v6+ud2zs7O6urqy1g8PO4ZI5+fnOjk5sWXY5eWltre3LQPb29sztMgIHNV1rVaz+oSHF2gQY5mj\\n\",\n       \"oyMtLCwYbx7OP4lGo2q1Wvrqq6/udf8fVM2Mp0S329Vf+2t/TTc3N1pdXVUymbTwM51OK5vNuqwA\\n\",\n       \"m4Vi2e/39cknnyiXy5lY/1f+yl+R9M7+i/Ew/sZAcLFYTI8ePXLJwjDl7OxMn332mdrttknpg8FA\\n\",\n       \"T5488ZELNRQIL5FI3DHehnXGQGQwGCiVSpmFNkzFpJxKp9OeFHa7XZ2cnFjCRN4gCMfq6qrVHsvL\\n\",\n       \"yzau+e53v6vJyUlls1nnhwN3UpsD4/HgxmIxW56xIaTTaT+olHqS7A6VyWQUjUb1wx/+8FeyKS6i\\n\",\n       \"G2i0kCZVq1XVajWb/cEfGB0dVT6ft1VrLBZzHASKCZw48dGIx+M2Gm+32zZMCYVCHnsnEgl7uzF+\\n\",\n       \"HuYJgy9j78VuhhcGBi0saKyvksmkjo6OfByfnp7aVkt6xzvhCEcihqPTo0ePrG1sNBoeyRNkn06n\\n\",\n       \"tbi4aNYh6hTUO8COGNnABMT1iIcJByOMXqLRqEZGRlxyQGcl/HN5eVmRSMQeG/e5HtTOzA41MjKi\\n\",\n       
\"g4MD+1kMa9ngAsTjcSeHRiIRzc3NmR4J3ZIanIFLIBAwDMdNGfaH6PV6HlpAA81kMqpWqx7jhkIh\\n\",\n       \"0zsxUcRNCLX08fGxJDk6DSiQIxlYEP8O6Z3NLjxl+A/QW4H2GGAkk0lVq1X1+33zVygjGLTADqxU\\n\",\n       \"KrYWuL29dckFFZQdGcadJH8O4XDYbkVwvYf1hpRK8GBAVd73elA7M4MPjPkwKSF+YGtry+6dmKnU\\n\",\n       \"ajW7WfKhV6tVx5uhhqBm5TUpBejGa7WaoT+kVTwMmLZIshigXC67QWNixn94KEMc6na7Oj091cbG\\n\",\n       \"hrHlVqulcrls1hqqDlhphPocHh5qb29Pt7e3Ojg4UKfT0dHRkUub6+tr1+ULCwuuo29vb23OyKST\\n\",\n       \"Gh35GA/y7e2tzXBarZabOd47dAL43GDbp6enOjk58evcN6H1Qe3MkUjE5BYmaOl0WpFIxFkky8vL\\n\",\n       \"CoVCymQympmZ0T/7Z//MdrFAXN1uV/F4XPPz86ZPXl5e2hL38ePHDrZZWFhwlARY8/z8vKeINzc3\\n\",\n       \"9rgbHx+30yg70/T0tB49euTuH4LU4uKiY4lpULPZrNXTa2trNm6EaA9feGVlRaFQSIVCQWNjYy49\\n\",\n       \"stmswuGwCoXCHQiNYQdQH43tt7/9baXTaY+hCT2anp5WrVbT4uKid/JMJmMLsqurK+/8YMsgOXz/\\n\",\n       \"drutxcVFvXz5Uo8fP5ake8dAPKidGf0eFyaFsNfW1tZULBZ1eXmper2ufr+vx48fO42KHWTY4xnC\\n\",\n       \"UCqV8g7/5ZdfWr1cLBatokAYgLcdWC0NHO8Ft3m8JqhhIfUMBgP7w11fX6tQKNh7mjp6a2tL5XLZ\\n\",\n       \"qAoPE0rnfD7vEwIbLHZ73O0xbxl+uCYnJ83rIBGA6R6CX8br8DcmJiZUr9edb0Itnclk/KDBz0D1\\n\",\n       \"Dkc6l8vp+vra2dz3uR7UYoYQE41GTU4ni6Ner2tra8seD0jucTN6/fq11SdgsGSDfP3115bNM4lj\\n\",\n       \"ocDvoFEkx3pkZMTaPpo+rArIy2PnpmY/Pj72Qs/n8/bG2N/fN5WUU6PT6SiTyVj9AhIivSsdSqWS\\n\",\n       \"2XTDVE6YhJRfk5OTOjo68klCs8tCRYMIFxufEMwVpV/0KsCgDITQVHY6HXU6HaMbpFIRiwG0R7nx\\n\",\n       \"vteDKjMgq+DbQJcNUiHJtSq7GRo9iEQ0gWj6qP/wRBv2bwP2YpLGwqRposnDt7jVat2hbCLkRCUO\\n\",\n       \"FRIvj3a7rWw2aztdGjHizIZ9pQn8wdeDxpDdEM4yu2a9Xrf9AaYvuC7xmWElgAfd0dGRxQXs5nwO\\n\",\n       \"qVTK2SUkVfF3nJYgIQyFyMvGz46I5Pe9HtTOjL8DKmwGHbgKUdtSa+I/zA4JjIYkfnT03bM+DPfF\\n\",\n       \"YjFDa8M7D7HAQE4ou6VfxDPE43Ef4SAVktwETk5OeudCns+uNzo6qkqlYuJQIBBwQxUOhy3jAn0B\\n\",\n       \"mqTxg8BEaQGycHFxYUISuylWDZw05+fnajabLr1wSqVR5AEafjCHjRVRyKPG4TNBcAukivPS+14P\\n\",\n       \"ajEPW8HSsLVaLY2NjdlDeHiHDoVCRjbYibChQqFC6lS1WvXRyhSPwJ2zszObvtze3nqU3O/3TSBi\\n\",\n       
\"hI6ae3JyUqVSydxg5E/sbJubmybZB4NBR0O0Wi37Y8AVYRd+8+aNjo+Pjaogeo3FYo5sYFemUW40\\n\",\n       \"GuZkQHyiyUSXiHoaM/ZoNKpGo+ExOzg9kCeWZqAwlDfHx8dGT4YTsEBJMpnMve7/g1rMOBChZxtW\\n\",\n       \"Otze3lr9wI2U5OmWJDPeyPTAulZ6J+GnURkMBvYjnpqaMg68v7+vqakplxXVatU1eD6f9xEOJyOX\\n\",\n       \"y9kSgBAfJFM/+tGPvHujSwQdGB8fVzweN7EpHo8bffn2t79tHw+ywyORiHkbOBm1222dnp5accKw\\n\",\n       \"BGf8WCzmhjCXyymXy7khjUQiWlhY0OjoqG0dGDRR2tHwYRXM0ArHqPX1de3v79vPBAHCfa4HVTNf\\n\",\n       \"XFxoZmbGtS81LDZTn3zyic7PzzU2Nqa5uTmdnJwok8mYq0sjx/E/7J8xMjJiKRZH+KNHjzykgPnF\\n\",\n       \"TcRY8fb21mYneEpwOhCxEAwGlU6nTXIC2sKMkRAg2HZzc3OWKeFGNDIy4qRTalS8Nfr9vubn510m\\n\",\n       \"ZDIZoweEaQJXnpyc2OIM32gQD+A58lpWV1c9jSR2jQYQZuHc3JwhS5z+8bRbXl5WPp/XxcWFy777\\n\",\n       \"XA9qZ+b4J5MOIWYsFjMzLh6P+6aOjIxYWZHJZFSr1TQ1NaXvf//7bhDZvSYnJ3VxceFSA8ssdnB4\\n\",\n       \"IZIc0l6r1WwEg6M9pcvs7KxqtZoajYZVypCGbm5unI+NF0ggELCDEIR2bHfhITMsgrjPxVSSBxw+\\n\",\n       \"BCcLDR9oDVNGeMoIejnhOJmIP+ZzB/HhgaLeZ3hECUbWYbfb1Zs3b4xr/yo6behClsRiBUKi7qS+\\n\",\n       \"Q7dHg3Vzc6ODgwMfc3/4h3/o6Vs6nb7jr4FWTpL1gpQUBwcHSiQSxrORUwEXwiG+vr5WuVxWLpez\\n\",\n       \"UeHc3JwpoMFgUAsLC3Ye2tzcVL1eN3S1urqqvb09n0SMg+Fw83PTA8zPzyuZTKrX63l48/r1ay88\\n\",\n       \"IosnJia0s7OjUqlk1Qk2teDltVrNBKphxAJbA1TjX3/9tYlc+PrBz8b+oNfraX5+3o1qtVq91/1/\\n\",\n       \"UIu53+8rk8nYlHA4nFGScU7MSIhE44hjEPLhhx+a9AJsB1yHmSHHMK6fMzMzSqVSOjw8dK2MTInd\\n\",\n       \"iJIlEAhYAApKgkcH418WMuoQUBR2RkmGzXgAGBKxAPlMQAwoc66urjQ3N2cWW7/fNy8EVcpwqBEN\\n\",\n       \"GooWampMbVC2876ur6+VTCZtMHl+fm6yfr/f1+TkpObm5swN4QFH3vW+14OqmcfGxrS9va1ut6v1\\n\",\n       \"9XW1222LO/lw0+m0+RMTExPK5/NKp9M6OjrS6uqqaY1Pnz7VH/7hH7q+pWm8urrS0tKSDVlSqZQb\\n\",\n       \"RtJJ0+m0yuWy4yQWFxdNsfxlb4zBYGA0g51/cXFR8Xhc7XZbpVJJS0tLarVapo/u7++rUCi43oQA\\n\",\n       \"BewVCATuGH3jj4zdQSwWs3i11+vp29/+ti1siTzr9/taXV3VYDBwJgxGjuQR0hwTsoMpJWaR+EPj\\n\",\n       \"AYi6O5lMqt1ua35+3v7Ok5OTtvt63+tBLWacgjqdjv74j/9YH374oY1UmKKxQ62srOiLL76wb0M+\\n\",\n       
\"n/ei3Nzc9OBhY2PDdTblwsbGhlKplN68eePdV5KxamRS1KTPnz/3AiADGyPFm5sbvXnzRoVC4U4Q\\n\",\n       \"/YsXL5xR0u/3nfsRi8XU6XRcH8M9xpUJke7c3JztDEqlkr7//e/r5OTEtbokl1kvXrywcQxj+Xg8\\n\",\n       \"rpcvX2p2dlbtdlt7e3v2jF5dXfVUMJFI3AkDBREJBAJKpVKqVqu2FeO0qlarmpmZ0cbGhnf9Z8+e\\n\",\n       \"6fnz5/e6/w9qMRcKBX9wy8vLVjUAh+FjAaUTdfT09LQHJ9Sr+XxeZ2dnWlxc9DGIrSuICYaJMOlQ\\n\",\n       \"ZGOezQQQOf38/LxVzVAiB4OB1tbWPGRh6LC+vm63ULBnFCEYM/LAcOJQk1L6HB0dKRQKaW1tzQYu\\n\",\n       \"fAYMhvr9vpaWliTJiAjvIZVK2QckkUjo5OTEQlVKFghXPOwY1GD7tb+/r6WlJX8/kA5Mxo+Pjx1G\\n\",\n       \"+qMf/ehX5Hyuw8NDCzorlcodUxKgNaiRHIO/vMNJcnqT9E5XuLOz446dhcCxj29dv983T4JjFWND\\n\",\n       \"eBPb29tqNpvmOMBt6HQ6LitCoZDq9br29/ctYRo2UgECxL8ZK9jz83NVq1W7McGVDgQCOjk5sWSJ\\n\",\n       \"KDbw55ubGyMdGExSu7bbbR0dHalYLOrg4MCIR6PRMC2AQM3h3MHb21v//4MPPrD0CtnY5eWl3r59\\n\",\n       \"69INMcHu7u697v+DWsyMR5m84QgEtMQOwoeeSCRUrVZtvI3hNzAeOCuqk1/2Nr65uTHri6HMsPsm\\n\",\n       \"i5YGk4UNG43mEzYauYHAhpisVKtV78qQ5nG5Z8qHhQJ1PQuEKGJOCppESaZk0pgyCkeMS0wyWS1o\\n\",\n       \"+ygrMDPHngBMGvRIksM8edgZxlByZDIZw5oHBwf3uv8PajGvrq6qXq+7XKDGhQ3X6XSMQ0vvatzF\\n\",\n       \"xUWl02nHeJHElMvlPEzI5/NOfo3H4zZNYYfLZDJ2KSoUCmo2mx6CAH0tLS356zBNhMMAzkt8wuzs\\n\",\n       \"rDKZjOMp4GywqHZ3d82RxkgRhyUsDdj5QXDgU1AOTExMWGXNZ4VHH+VONBpVIpFQLBa7Y8GwuLio\\n\",\n       \"TCZj9t34+LjLBsxlJNmHjocTQcDk5KS51LiXBgIBlzvvez2omnl3d9fH3fn5ufb29ky3pAbc39/3\\n\",\n       \"btPtdo0NE2HQbret2aNmBXVgB4rFYrYy2NvbM/m/Vqvp9evXvpnpdNrQ14sXL2xSA77MtI2pZKvV\\n\",\n       \"UiqVsgni5eWlR+E0c1iQHR4euieg9gyHwx5Tw1tGvEpNzVTy+PhY6XTaGSmUQiAu4NK8FwI9JyYm\\n\",\n       \"rH2Eu4EFA+GeTFKhsqIXpDSbnJz0hJNBCZzw+1wPamcGhqNhWVpa8s41Njamer1ujgFH/szMjNLp\\n\",\n       \"tK25njx54ro7m81qeXnZdlWIUlm8iURC3/3ud40m3N7e6tGjR3ry5ImpoSwERr/sZrVaTY8ePdLk\\n\",\n       \"5KRSqZSWl5f15MkTTU5Oanp6Wqurq8rn81Z1EP1weXmpubk5188EZgIrzs3NuWxZXFzU5OSkgzuH\\n\",\n       \"U5442ldXV72rT09Pm3jEzwv3BKW79Isdt1qt2jASE5tkMmlbBfoLSplsNqu1tTVb7cIrhxY7PBN4\\n\",\n       
\"n+tB7czNZtO7BbKeaDRqwxG692AwqEAgYG4E4D66t2HqJey6ra0t831xGu10Otrb25Mk7+KlUskN\\n\",\n       \"EWP1P/mTP7FUiRMhFoupWq2a2IRjJvzjvb09zczMqF6vK51OG47LZrPa2dnxoKVcLnsUD/H/5ubG\\n\",\n       \"Sm44y5VKxZg0LkTdbte6O8j6MzMzprfigAoCwpj77du3Ojs707Nnz3R9fe3Phh0eI8jhGOZ4PG4y\\n\",\n       \"FqPysbExtdttnZ2dmXx1n+tBLWb4s+Pj43r27JnGx8ftS7G4uGhTP3Yl6suZmRl39clk0se99C48\\n\",\n       \"5/Xr15qdndX09LRGR0ddGnAShMNhZTIZ7e3t2RKLB6bdbnswgx6P4QEnxMLCgs1gcMn/4Q9/qO3t\\n\",\n       \"bT+c2GQxoSuVSkqn0/Z6ZtKILo8dk5+B8HXEslhvTU5OOrASg3ASAjKZjAqFgsOJarWaUqmUc1cu\\n\",\n       \"Li40Ozvr9yy98xbBfw+vEkk+qRYXF525CPID7fRXCa1DF2Pdbrerr7/+WsFg0C6UlUpFh4eHOjw8\\n\",\n       \"1Pb2tq6urlQul81VZiJ3cHBg0k+xWNTp6amurq684EBI8BVmZz05OTGBiCkd/GluEgw1/C2w/Nre\\n\",\n       \"3la/39fTp0+Nxe7s7NihHnJQsVhUvV5Xs9m0OppBTSAQ0OLiog4ODryA6vW6Tk9PtbW15UGGJDsN\\n\",\n       \"YbfVaDSc2wJXhZobH+dhx9F2u+3FSUIANNZGo2HUhO/faDTUbrfvOJyCclxdXRntuG/a1INazBi+\\n\",\n       \"dDodffbZZ7q6urLJCP/P5/MqFAoaHR3VysqKj2IooaFQSOvr60omk5qZmfHflctlp5hisBKLxVQu\\n\",\n       \"l410wMmlDDk+PtbU1JSNZiS5Ycpmsx7EzMzM3IkNptEjjAcxKcR3VN2SPCYeDAY6PT1VJpNx44m/\\n\",\n       \"RTabtf4RVQzSL3jR0WhU6XTaeeH8HdBkrVYzOR81tiSLGOCRsODBkDn52PUlGTFBG0lZ8ivW3NB1\\n\",\n       \"dnbmm4F3MtBQvV5XJpNRt9u1l1qn01GhUPAuglfa/8Pem8W4mqb3ff+vdq7FnawiWfvZ+3TPTPcs\\n\",\n       \"PZIVQbDsXDibA8QOksALYASyIyMXUSLpOjGsIIiQQFGgwIHgC8NyvMSJBRuxJDua0YxmJtPdc06f\\n\",\n       \"pU7tVdyXIllkVbFIFvnlos7vaZY8stqn0NKoMB/QmDlbcfne732f5//8F4YfDA3gZ9Tr9Wv+Fycn\\n\",\n       \"J3aDLi4ujCUG8uHz+XRycmJ/j+iDUChkWK7X6zWVSzKZVLfbNZ4EF2NxtHIsDLB03juEJthxDI3g\\n\",\n       \"I1OToh6BXcdnA3kBEep0OqbS5kFFZAveLslw9PHJJgOZ8SYYk5jRaKRGo2GRbGR0jyvr3+S6VYuZ\\n\",\n       \"ZocvfLwOnJ6eVqFQUK/XUzKZtJ0DqIuufzgcqlwum98FmXh4x4HFMjjhxjO6ZljA+JZhBna2sNtg\\n\",\n       \"m0EjdRzHms+TkxOTYU1PTxtnpNvtmqMQMBZ4LU3VeA4KC4oHFJI+iAbMwG63q1gsZsoR2IAgEjAR\\n\",\n       \"WbRgz0xFWYy9Xk/BYNCQIWIiYAbyHYIwIezlYYco9abXrVrM3KS5uTnt7OxYfVYulw11IHdvbm5O\\n\",\n       
\"h4eHevXqlVqtlo1Sy+WyKpWKOp2OXr58qVKppP39fSP8MMqWPnFOAtuFuTYYDNRsNs2lf2try3Zr\\n\",\n       \"oK9ut2t1da1WM+U2D+Dh4aEajYbV0/ClIbWPuxSRa/Ls2TPt7OzYz2MgAQoiyVz6qXtxYqrX6yoU\\n\",\n       \"Cte85qrVqgqFgl69eqXBYKCjoyOdn59re3tbH3zwgT1U5XLZhK0Q7fnewc9xNCoUCup2u2o2mybB\\n\",\n       \"gvOChdmbXrcKzYAVNjs7a74T4LH4zY0f7cQmjBu/rK2tWWNHLcmYHKUFU7l0Oq1oNKp6vW6umODQ\\n\",\n       \"TNimp6ctkmw0GlmtSmorUB96ORrRQCBgp8rc3Jza7bYeP34sScavjkQitosi58JR6PLyUvfv3zf+\\n\",\n       \"MPV3KBQyVfX8/LxWV1cNIvT5fKrX68pms5Kkx48fG1TH6FqSiRNwkEIQwNic/G3yFwlCmp2dVTab\\n\",\n       \"NR0iOze5KD+MGx674CVAcqFLhqhPpw3uymJhyIARIHnQMNTgYbCAqtWqDTDq9bp6vZ6psTHfxlIA\\n\",\n       \"+T9+ERDlu92uyuWyuc1jik7IPELRcd5xrVaT3+83425qbcxbwIFHo5F5YfAZxk3EsQgjnYqdejAY\\n\",\n       \"WN0K0X44HBr5CKuu+fl5223JIWRDQFVCI4wxD7Ck3++3iWy/3zfvulAodGNo7lYt5mg0qkgkokKh\\n\",\n       \"YOLSYDBoqAO79cXFhSWyOo5jdV6/31c6nbbMkng8bhgtJB5MEweDgebm5rS4uGgkdOpwGqZSqaR7\\n\",\n       \"9+7ZZAs9IdwI8vTG6aSSzLd5MBgY3XR9fd0kSeOpTjSO8D1odpmEjjv+czrxsyHEwxqkIUQIzPSO\\n\",\n       \"XPHZ2Vk7RR4/fmykpHg8rkqlorW1NaMPJBIJC4zHcyOTyVgjiusSiVSoT25y3aqaeTgcWpAMPhXj\\n\",\n       \"Tu6O46hWqxkP4ejoSKenp2q323rw4IFc19Xu7q6VFb1ez3YTr9dreG6hULAaGuspWGWDwUB7e3u2\\n\",\n       \"29PQwf2l2UkkEub8Q/oVu9Q4rRQJPyHrkoxKiqAUfwpqZOwQ8K9AroT5DdNAtJDlctkoo3jZwbsm\\n\",\n       \"ExxDGOwWwPMhNL399tv2OXH+pxGn9EKWdnR0ZL6AYPS8xk2uW7WYoXpClSS4Em9iVMnoz7LZrJUe\\n\",\n       \"3/72t826ikRWnDeRYlH/xeNx88AA2oJcdHZ2ZrIq13XNfAV30GAwqOFwqEKhYF7PxPQiIAU9YMJG\\n\",\n       \"3ggS/3A4bAgMTReav06no5WVFWPxjdfpeEoD68H8Y7oJlEZp1mg0bFQ+MzOjXC5nKMrx8bEkmSh2\\n\",\n       \"b2/P/iwYDNqmgvcefUa5XLYyJRaL6ezszPqDu3fv3uj+/5EsZsdxQo7j/APHcV46jvPCcZwvO44T\\n\",\n       \"cRznNxzH2XIc5587jhMa+/s/5zjOtuM4m47j/Knf7+dOT08rEolci0qAAE8dV6lULKidkJzz83Pz\\n\",\n       \"njg7OzM+QrPZNOcecFdJ5itMJAR5KaSOttttOY6jeDxuDc/ExITtiDSmOPrAj8hkMnJd19yLqO0Z\\n\",\n       \"O1Mzw2mgnMFii52Qh+/Vq1fW2DJAcV1XqVRKk5OTNvk8OjpSu902FyIU1bgucbpxwd3gIfF6vTbh\\n\",\n       
\"G41Garfb9v+Hw6Hy+byazaZarZbm5uZ0fn5uU85MJmMWDs+fP7/Ruvqj2pn/J0n/1HXdB5LelrQp\\n\",\n       \"6Wcl/Ybruncl/dbrX8txnIeS/pykh5L+bUm/7DjO933fiEYxGp+dnbXpGR04+DBTKhzoKTNojODc\\n\",\n       \"wqzDgwPeQyAQUCKRMGNE+MVgtfPz8xYdxrROktXmNEaSzK4AxQf4Nzki4zHANHyu6xqxn2YTDNjv\\n\",\n       \"9ysQCBjagUKk3+9b2tXc3JzV79Fo1LJO+FmUCvy54zhmAIN4F4QHtTbNsuM418LqU6nUNTNGPpPH\\n\",\n       \"4zEagN/vv7Gl7R96A+g4zrykP+G67l+QJNd1LyWdOI7z70r6t17/tb8t6f/V1YL+9yT9Xdd1B5IO\\n\",\n       \"HMfZkfQlSd/6vT8b1hYkcelKkoRe7uzsTKFQyGAhqJ+INuF2EAgJNZGpHLIpNIXo++AZcCLQFBKA\\n\",\n       \"Oe4bDbTn8/mMAplMJm0Xm5iYsOEKY21qZfjQjH89Ho+Gw6H5QO/u7hqJfnZ21oYuLEpgPhpBHEZn\\n\",\n       \"ZmasVobMBNKDD914rHAikdDR0ZGFEUlXo+9+v28qGNxYx08EPid8ZxpX8GZEE296/VGgGauSao7j\\n\",\n       \"/KqkdyR9IOm/lJR0XZcOoCIJF71FXV+4eUnf12IdIsu4USCwFDgpjRZ0THI6KDP44hkTh8NhVSoV\\n\",\n       \"vffee9rd3bVGyXVdO2oZomATAC7Lw0GpI8ketlwup4cPHxoSEg6HbXdlR8c6Fs+K09NTexBprFzX\\n\",\n       \"ValUUq/X0+Liou3CTNbGbWcZ6AAjnp6e6vj42BTVGxsbOj4+VqPR0Be+8AWre0FoXr16ZQ0msROS\\n\",\n       \"rHbH5R/oMJPJWMPL0ARrAdydaFbpL25y/VGUGVOSviDpl13X/YKkM70uKbjcq2/J/df8jO/7Z1//\\n\",\n       \"+tf19OlT/c7v/I7K5bKx3eAFAC2NRiNFIhGr4Uajke7evWs1M+mq0lWA5dramh3zS0tLxkNgN0Wa\\n\",\n       \"v7S0ZE6kHPGUHkz/eGiWl5etdsb9k+YVhTOYLUJc6JfoB7GKzWQytthhrLE7k3GCpAo3VPjMTB3x\\n\",\n       \"uwiFQlZ2YCZD08hnmZ2dVSaTMd4HJ1W73TYc/f79+8aPdhzHyPgMdVKplPFfXrx4oWfPnv2xNBvP\\n\",\n       \"S8q7rvv/vf71P5D0c5LKjuOkXNctO46zIInZZkFSduzfZ17/3r9yvf/++woGg9rf39fjx4+Vz+dN\\n\",\n       \"lf1n/syf0Xe/+12l02kLbL97966Oj4+VSCSMHonuLR6Pa3Jy0tyPGE9DwifUZ3zI4vP5jFyPsoUA\\n\",\n       \"n36/ry996UsWSIn75vT0tBKJhIbDob785S9rNBppb2/POA/pdForKyv6xje+YUYrHo/HzBghwUME\\n\",\n       \"gqyUSCRUqVS0vr5u43z42DRulDmZTEa5XM4WNLXvvXv37Ncw/BAQSJ9MAgmUf/fdd61hHBcwzM/P\\n\",\n       \"q1Ao6K233lKhUDDbsoWFBSUSCX3hC1/QcDhUrVbTr/7qr77xwvpD35ld1y1LyjmOAw7zJyU9l/RP\\n\",\n       \"JP2F17/3FyT949f///+W9Ocdx5lxHGdV0h1J3/n9fr7f79edO3d0eHhoFrGdTkff+ta3bOdFqey6\\n\",\n       
\"rnGDd3Z2jOlGbIQkiy7GT2J6elrJZNI87KhRUX7s7u4aW02SjbQDgYCOjo4sCapWq5kbfiAQUCqV\\n\",\n       \"0pMnT8xCYHFxUdFoVGdnZ/rggw/k8/ns10RGcMyDHCCQxbETE5ZkMmkuoOz40pWYYX5+XgcHBza2\\n\",\n       \"H1fk5PN5q9Nx65d0zXCcaWUwGDRUZnZ2Vs1m81oWOK5L0F0nJydVLpetzKKxvcn1R4Vm/LSkv+M4\\n\",\n       \"zhNdoRn/naS/KeknHcfZkvQTr38t13VfSPo/JL2Q9M8k/VWXu/h7LqRI2FrhDo+vMPBQo9FQo9HQ\\n\",\n       \"2dmZNYGUG6gqkFOBeLA4YYjRCKJ3k65KksXFRfl8Pu3s7NhkkYeAcoMhCOYxDEhw9gyHwxbtwMBn\\n\",\n       \"3INuvAzC9w6E5fT0VI1GwxYMI2YWMEkB29vb14ZLlBUw/4gD5iSC+TaeMYhwgLoetQt4P98ffQLf\\n\",\n       \"seu62traMniS/oWH5U2vP5Jxtuu6TyR98fv80Z/8ff7+35D0N/6gn7u/v28+EjRBzWZTmUzG/Ihp\\n\",\n       \"dlgkH374od59913DodvttprNpu1IhUJBHo9H+/v7isViRpf9F/kAACAASURBVNLhz4DUQAoobfCP\\n\",\n       \"wOYKwg4wHrgxLp6MoPGdK5fLikajZiWLOsXv95sGEDtckJpqtWqvg3Ib1hp4NGT6YDCok5MTVSoV\\n\",\n       \"TUxM2E6PpAs9Yb/fV6FQULvd1v379+0hazQaeuutt8zRkxocJAYNIUIG/pzPBhGJh9l13Ruz5pzf\\n\",\n       \"Z5P7Y3c5juP+1E/9lHEvBoOBZWQTlwDXgBqQnZyJGlwHkpok2S5KXvbOzo5FKwCdkaFHA4ZxeaFQ\\n\",\n       \"UCqVMgMURtIYhKPLI9RSkqEgpMBCHOp0Okb+l64SWe/fv69CoWA539FoVK1WyzIF3dchl61Wy0wd\\n\",\n       \"JyYmzKAGywRilsHoCdyp1+smppWuBk/j/Gswe4j9Z2dnZopDucL0r9lsWrTb5OSkTk5OlEgklMvl\\n\",\n       \"lEgkND09rU6no1/6pV+S67rOm6yBW0U0kqREImHxB6SNxmIxM/wG/mK3LhaLZpoCL5luv1QqGeGf\\n\",\n       \"3ZFFxoAD0j0LgeOUUTDaOSxxwcCfPXumjY0N40+ABkSjUZs8MoZnEQK10RxiNH58fKxKpWLlDGN3\\n\",\n       \"8vcKhYLV2T6fT5KszJmamtLBwYHFSBCjkUgklM/nDc0YLyvGVTLjkCH6SPyaeah5IDmZKLXw/CCk\\n\",\n       \"fnt7+0b3/lZxM46Pjw0tQLlALciuMC56JZnK4/GoWq1aKA1JpixGlBydTsdEpBCCQD2QNkHex64L\\n\",\n       \"otH4EU+cAqNvuAnjJKBSqWTUU6it5XJZZ2dnajab8nq91zSJlBeSjOLJg5tMJnV0dGTfB4McShEM\\n\",\n       \"aeLxuJ1OExMTisViZuCCUyrNM+XD5OSkTUr5njAWx2EVSsC4IABeBu6lYPM3uW7VzgxhptFomJAU\\n\",\n       \"90t2HaynIpGIyuWyQqGQ6edCoZA6nY6Wl5dVr9fN1w3CfDAY1Oc+9znt7++bnhAbXerTSCRi3T0L\\n\",\n       \"m3EwgxYsajn6x03ACe3JZrNqNpsKhULGkSAUiBEzyIvjOEbFJA9wfn7eCEi7u7s2+XzvvfdULBZV\\n\",\n       
\"KBTshIrFYsaRwO2JYB7q52QyqcnJSS0vL2t5ednMJNPptObm5nRwcKCJiQkzsnnx4oUl0r7//vva\\n\",\n       \"3NzUxcWFotGoTQD5jJCn/vSf/tP6xje+8cb3/1YtZjr2QCCgjz/+WFNTU8rlcgoEAmbvSr7H0tKS\\n\",\n       \"GZY0m00jj09MTOg3f/M3zawchteDBw+uWbiSUbK0tKRqtWrSpuXlZUMF7t+/b5ZeOArhWt9sNk1j\\n\",\n       \"t7u7a4pxKKb9ft+wYaiseH34/X7t7+9rYWHB5P3tdtuGK1BJMX+UPrFheP78uS4uLoyuiaMnwtJO\\n\",\n       \"p2N9xatXr8yGd3xHHw6H2t3dNbencUFAsViUx+NRr9fT0tKSarWavvGNb9g0E5YdBpPYdw2HQ/3O\\n\",\n       \"7/zOje7/rSozUP0eHx8btivJ1MKFQsEcjvCE2NzcNLEl4lAWO+JOasTxcJ5kMqmlpSVFIhHL4kin\\n\",\n       \"09dU1B9++KGRa9iRp6enTbNHPR4Ohw2VYPH1ej1ls1lNTEyY6z9OoShX8NigfAGOoy5nYpjL5Yy3\\n\",\n       \"zMO7vLyslZUVyy6kcWU3x8ARwQInHPxrn89nJ1ClUrFyCG4LiA+e0aBDEI8WFxeVSqUsExFx8E2u\\n\",\n       \"W7WYwSnZMRk+zMzMKJvNmko6FosZqwyXHjgFo9FIb731lo6Pj7W0tKRut6vHjx+bKhomWKlUUrlc\\n\",\n       \"1t7enrHCkNofHx9b8wX1E8dRFtv9+/dtTAwPQ5Jl6VFrM21stVrGqCuVSkYGgi8cDoctg5rm98mT\\n\",\n       \"J+p2uzbw8fl85vMB6nF6emqEIyaIYMOgNYT9oErHYyObzaper2txcdE4HECGoBz5fN7gv5WVFTWb\\n\",\n       \"TRUKBWMKYugoychOb3z/b7Z8frCuWq1mjRhlxuHhoYbDoV69emXHOAOUXq9nNxKzbho3+L0XFxcW\\n\",\n       \"84tFFWUJNlOSTJpPAwdrDE+7mZkZPXv2zJAR1CqSzD8Ck/DLy6tQ+bOzM5uSgRNTElCPQ6Fk0AG2\\n\",\n       \"izkM6a0Er1MeNZtNy8YulUr2GXFwmpiYMDU2vwc6hCIbdAXsGUI/f7fX6ykSiZjC5OXLl+YuhdKF\\n\",\n       \"B50T9SbXraqZE4mETdXu3LmjYDCotbU1U4lwzDFChhoZCoVMgXJ+fq6lpSVrwkqlkpUE4/IpWG6x\\n\",\n       \"WEx7e3vmNgRvAlx1enraGqnFxUVLJKVkuLy8tAAcGsB8Pq8vf/nLJiWCggmCwQ7LLglBieEEdgU4\\n\",\n       \"gsIBSafTpslDezhOJ11cXLQsRDzwOIlIfcUzgzIDvvjS0pI1eHzXBHIS8wBCgjMSpye/f9Ohya3a\\n\",\n       \"mfGIqFar8vl8FlDOl39yciJJNr69vLzU9773PSPqP336VNVq1QYse3t7SiQSNnKVZAQaRrR7e3u2\\n\",\n       \"wPF1brfbKpfLyufzxvnlJJCu4hVAF6CtssjPzs5MQU2ADdxp/DxyuZypzDFZYZzMe52enlaxWDQM\\n\",\n       \"Op/P25gfqT/OozwsvF/G9+PKamwATk5OrOQBm8cnhNIBXd9wODSYFJErOkhOsl6vd213v8l1qyaA\\n\",\n       \"v/iLv2i1LTRDMN1SqXStpvN6vfJ4PKrVagqFQkZER1SZzWa1ublprvTUxXTePp9Pu7u7unfvng4O\\n\",\n       
\"DuTz+YwzHIlETHAK3r24uGgZJSAUeEXTXPLzedAQxKL65oHE1406HU42JwElByLWer1u3nDpdNrk\\n\",\n       \"SclkUtVq1RYsxKTZ2Vlr+miiiUKGXI8Ui1IGtYjX67UcGU6bbDZrPGl4IgxbyB+cn5/X4eGhfu3X\\n\",\n       \"fu2HE0Dpyq2HQcLBwYE8Ho8KhYKZDm5vb5tNAFG+jK57vZ458DD5oubb3983b2acORmXE8eGVdW4\\n\",\n       \"9wMu9ASh49V2cnKiZrNpGDbZJRcXF0Zc4lhmbE5tSUmCkBXBK3g3fwavo9fraX9/3xrPaDRqC5fp\\n\",\n       \"pCQzYwGaQ51OE8tirdfrCoVC5uCJvS9WBJwOPODwPhBFYCrOqPv58+dm27W8vHyj+3+rFnO/37dF\\n\",\n       \"gocx2je0euxseNJhtCLJaljsqbhJ8I1ZeDhuwl5DvoRChRvPlAwMFl4FuzxqcASio9FIhULBdIgs\\n\",\n       \"NuxomdjhSgQXA/MWIDZgOvJMHj58aJrBvb09SdLR0ZGSyaRhzYgVQGSIwcDsJhwOX3Mpvbi4sJwY\\n\",\n       \"EJmzszNr/i4vL80sHWV5LBazJvvly5cGb3LyYKH2ptetqpm73a6WlpasAfF4PEaux3gE21tEm0QZ\\n\",\n       \"jJsFokaGCFQsFo2CCU4NnAYDT5Lxdlm4kNfhNkgyPwseFOkTWinowGg0Mv4FMN3l5VUcMlg0Pso8\\n\",\n       \"kFBbYaDVajXT1T179syGNkicCCOirsW9E485ooY7nY45GgEbjqu4R6ORNjc3jSJAGq4kO8WA3mq1\\n\",\n       \"mj34Ho/HOCzAksjP3vS6VYt5ZWXFDLPBMon8PTs7M34wg4FxC9bLy0tTi9DggOHiFgRrbNz9R5I1\\n\",\n       \"YJiwcJNoinK5nHw+nz0Q9XpdjUbDbizN38LCgtFHJyYm7H3AQIMjQWwCTDX4zAsLC5Jku3U+nzeI\\n\",\n       \"sFKp6OTkxEhU2CEA62G2yAKfnZ1VpVLR3NycEomEDg4O1Ov1VKvVtLa2ZiP6fr+vZDKpWq1mFrc0\\n\",\n       \"gZyE9Xpd9XrdNhYyB/EWwc/jpov5VpUZmLycn5/r/fffl8/n0/Lyssnoo9Go2WuRQ51MJs0wcGLi\\n\",\n       \"KmotHo9fE43SkYNaxGIxnZ6e2uKZmZkxmTwcDpJfB4OBRQmHw2FVq1WlUin7M4wVYZzNzMyoXq9b\\n\",\n       \"iik163jshMfj0e7uriQpHo+bcWOpVDJbW04WdnvyrQnXYeKHfxyjcHjGo9FIqVTKyjCGJpRf1Pce\\n\",\n       \"j8caUSwdMJ5h9w2Hw4ZpI92KRCLK5XKmeD8/PzdF/Ztet2pnHo1GNgDAL5k8O6Z93Gik+Pl8Xr1e\\n\",\n       \"T48ePTILXOT5uLljGUBJAb0RrgZH7sTEhGWGIAtikXD802SxUCcnJ806llSsSCSii4sLe4jINcEq\\n\",\n       \"i9AeiE6YLbK4MAKv1+sW0QBNs9VqmZdevV7X9PS02QNgqcWonBRb2Hvsqixk6uSJiQmbrKLmgbFH\\n\",\n       \"rgpNKREax8fH8nq9kq4eSDw8bnLdqsWMDJ/AcrR4tVrNYgw4wmliEomEer2eqtWqstmsDTRQWjSb\\n\",\n       \"TUtiotyg5Bh3mQcSG+cUh8NhHRwcSJLtmI7jaGlpySZ9UC6DwaDtzKAm0WhUXq9Xx8fHdsowzEHF\\n\",\n       
\"jQwKrLpWq9lInVg213X16NEjs571+/2mIG+32+YoSo0MVElDSYZ4s9m0CGVG54FAwBiHPATs9ozf\\n\",\n       \"GdBQRiBNA6KjYU6nv6+DxKe+btViZlekloVvixQIEji46mAwsHoWbzl8MGZnZ21iiNIEPgYjaUmG\\n\",\n       \"Ow+HQ8OWaf5OT0+NWklYDaw49IaYLuLKiTggk8nYGHlyclKNRsPMwJF1EYaJajoajZpqhh2RyIvd\\n\",\n       \"3V1VKhW5rqt+v29TOJyfRqOR1e64djabTRvagGM3m03LWSRuzuv1KpfLqdFomHhgcnLSUgcwJefk\\n\",\n       \"OTk5UTqdNtNHHJtQtLzpdasWM1a2pCnBSwBK63a7Ojg4MKioUCjozp07VpaMRyDQWGFmiFkJTDbS\\n\",\n       \"S/GhGPcvrtVq1hzB+2C6x8KDSYdqGRy42+3K4/HYuBoGIAMJkAw0jDSd4MU44NdqNTu2MW2k/h4X\\n\",\n       \"FzAyH4+owMiRUgqWHv+fEoLGGWIVzSuE//Pzc/n9fh0cHJgiB14Jqh5Jpg6v1+s3uv+3ajHjZClJ\\n\",\n       \"X/3qV008GggE9JWvfEXhcFgbGxsGHf3Ij/yIDRJc11UoFJLP59OdO3cUDof1+c9/XtFo1PKyA4GA\\n\",\n       \"TQzZ1djVfT6fKbDxgcbhh1Li4cOHdnpgyILPHQMK9HDoBBOJhEGNmBOS9uT3+5XJZDQxMaG1tTVz\\n\",\n       \"EM1kMlaCLCws2M4eCASMhzE/P28DlHv37ikSiVhGNtL/bDZrTL9kMqlQKKSFhQXzqctkMrq8vNT6\\n\",\n       \"+rpZLpBUgFCA4RJjc5K2JicntbS0JL/fbxvEvXv3bnT/b9Vi5smemprSkydP1O/39e1vf1u1Wk1f\\n\",\n       \"//rX1Wq1VCwWjZH25MkT20k3Nze1vb2tqakpvXjxQu12Wx988IEmJyf17Nkzo4OenZ3ZlJEF3mq1\\n\",\n       \"1G63lcvlVCgULHuPYxqFyc7OjkqlkrHO0MK1220Tqx4cHFjdC3b79OlTC6jf2dnRwsKCNjc3NRp9\\n\",\n       \"ki4FJNloNFStVg0poDZmKMJg4/Dw0DSG+XzeTrMnT55Y2bO/v29KFrL8Njc3rcd4+fKlBRrhI42r\\n\",\n       \"0ZMnTyRdIT3U8jSEyL9evXplSM3x8bE2NzdvdP9v1WJm9Iv6gpwOEqAkGUke7wuv16u5uTlFIhEl\\n\",\n       \"k0mrCRlo0Cyii0ulUmo2m2Y9xYLAfmpjY8N2psXFRRugTE1NWXkBGZ9pYrlctp9DID01OM0V3GV2\\n\",\n       \"MYSlkPOpV/lcLN5+v28KFyy7ms2mTe0YM1cqFftMJycnchzHRAQ8SEwNIWwxhmfow9AIF1Lw9vn5\\n\",\n       \"eftM+Nf1ej1Fo1GdnJxYHARkrje9btVipg6s1+vmfQxFEZ4AQxByAKkDHzx4YM0b6gjw54cPH5rR\\n\",\n       \"X6/Xs1gFjBlRHqPJOzo6MolWPB43SA8rWjgVmCpi/wWjjPo5lUqZsyaN0+TkpHZ2doybDeFHkpmw\\n\",\n       \"dLvdaza+eH10u11LmYWWOe5QCjKDTnJ9fd3iLsrlslKplIbDoSlIQDLG7RMQD4AzozpHpYJ/HfU4\\n\",\n       \"aV2Li4tGanrT61YNTVgYUCsdx9H8/Lw1K3BtUTQsLi6aQThaQZosHOXn5uZMv5ZIJLS7u2s7PU0Z\\n\",\n       
\"qmdqZmpRFiA7FaPuyclJ86qD7ww/hIEGqabjkB8BnKhKxnfLy8tL87igJ4AZyOtA5GHRj9vzIlJg\\n\",\n       \"kWIOiXHO8vKywXfU/XiLzM3NmXqFwQeuUdJVPuN4LiGYfyqVMgHxcDjU2traje7/rdqZganm5+dt\\n\",\n       \"6kT2M0GPHN00c/l83gYY9XpdpVLJ0I5ut2v13Dh0BfmdLp8d/fT0VOVy2WrdcDis8/Nz40wwZOl2\\n\",\n       \"uzbuHvdwAxtnOrm3t6dyuWziV5Qf8DtOT08Vi8WMQffy5Ut7T9LVWBu73mazqVwuZycJtlrS1QNU\\n\",\n       \"KBTsBKnVauYeSkNNn8FpVq/Xbdw9GAz08uVLM1qv1WrGQDw5OTGolO+gVCoZlo76u9Pp6MWLFze6\\n\",\n       \"/7dqZ85ms+ZLAfk8nU4rkUiYCDUYDNpU66/9tb+mX/iFXzAPOISo3MAvfvGL2tra0uPHj+3mjEYj\\n\",\n       \"ZTIZFYtFOY5jtTCUURYE0BSu+Xja9ft9G6qM80NALiAynZ+fa3Fx0SZ3jKeJKtvY2DDVNAOOt99+\\n\",\n       \"20j6ExMTSiaT8nq9RtuEuca4GuNy3itjfxzx4WDgQx0Oh/Xq1Svz0FtZWTG+OEbiTAIzmcy17EBU\\n\",\n       \"OL1ez/4MLv14JMa//Jf/8o3v/63amTudjnkTY+iCGkOSTbTY/f7+3//7Njwpl8uSZBo7n8+n3/3d\\n\",\n       \"35XX67UdKBQKGdG/3+9fcxsCTsOHA981COnjOzjMNK/Xa7RR+A2UQ1BK5+fnNT8/bxBju922EwNO\\n\",\n       \"MUqZw8NDG/hQMvHaMzMz9vP4+5xkNIfspOSN06gNh0OFw2HlcjkbsmDZG4/HJclOn0AgYM75NJJw\\n\",\n       \"pw8PD42aSpoV43mcp25y3arFPA7sAxXRyRN7Vq/XrYGjAWLc2u12jVPMTcaSgOOcmwtxqd/vq1ar\\n\",\n       \"mYkLAxMW5sLCggVMnpycaDgcql6vG4MPqAxuhPRJklO/37dFzWcgP3B/f/8a+y+fzxuSAfrR7/ft\\n\",\n       \"825ubqrdbluYJycDn7tUKhlVMxgM2tQOR3xckngQEfdyYVhTq9VssQI3sslg5+DxeMxU5uzszKao\\n\",\n       \"P6SAjl00Y5KME0w9eXFxYTAUhHFJhhOfnp6qUChoaWnJCDxwPbjJeKhxU1GzwAkuFovXvJipsyXZ\\n\",\n       \"RBBOsqRr1lbBYNDkU5Cazs7O1Gq19PHHH1vpVCwWzROZiLNxx/tEIqHDw0Pt7+8bguH3+3X//n1D\\n\",\n       \"VC4vL/XRRx+p3W6r1+vp6OjI0mnBnycnJ40mKl31I6FQyEoJTjnG9B6Pxx526eqEw8qr0+mY7ApD\\n\",\n       \"yd3dXeXzeWvKmbLe5LpVi5nYBUa+uFHCM6hWqwbY+3w+22GweI1EInr69Kny+bxlfbDjjsvnwV5b\\n\",\n       \"rZaOj4+vmZgztmbBhkIh5fN5i3qQPgms58httVoqlUomuSIsEzEqENjLly/l9/tNUABBH5717Oys\\n\",\n       \"CXK9Xq92dnZUrVb13e9+1xQhcI6Xl5dt5N/v9413Eo1GrwXCY7dQq9V0cHBg9E94JtgAg5lLnyTR\\n\",\n       \"AjcC5w0GA21ubqpWqymTyWg0GllOYy6X++HQZPyCF3BxcWG0SZ/PZ2lMsVjMjmF4FkBnCESTyaTV\\n\",\n       
\"vjjpE3wDsR1FMbIgj8djP4Njc25uzvw1IpGIxZaBQnDMY6VLPQ2WjEYuHo/bBA/yEA5NlAjU3Ax3\\n\",\n       \"UqmUvSev16vV1VVDMkiUBesmBoLoMoZIlCx+v9+EqalUyqRc0WjUeCQ4IQWDwWtiWnD9ubk5eTwe\\n\",\n       \"G3kjgqXnIBXrj1102md5MV5mLAztEhgJVhaowscffyzpagyOLxrURXamg4MDQxMg9/AfWC8TMcj7\\n\",\n       \"R0dH8ng8xtRjV6TGjcViarVaJrlnJ4MoNQ4posZgMjk3N6disaiTkxOtrKxYElSr1bJhDdg5dXmh\\n\",\n       \"UND7779vOzluoUCVCAnGTxhKA04YnJl4LzTJUGl9Pp+2traUSqUM60bjiAL8448/Nm43jEHqZBTc\\n\",\n       \"N7lu1WLGzIXdmKw/dhGv12uKB6/Xq3fffVetVkupVEonJycKBAK6c+eOJFlIYywWU7FY1MrKis7O\\n\",\n       \"zozMRKkCRk19S22JEIAwdHLyGo2GWQZMTk4auYldeDgcKpfLaW1tTaVSyVCF+fl5ayYfPXqk7e1t\\n\",\n       \"LSws2EMDbkys2+TkpNLptLrdrpkyhkIhzc/Pa3Z21rw77t27Z4oQdsZQKGSKG1QyfHfsrNTMkJaY\\n\",\n       \"4qVSKfV6Pb148UIbGxsqlUomCGaIRdgRuzcKoLW1NT179uyN7/+tKjMcx9GLFy/UarX08uVL2xUx\\n\",\n       \"KkGDRuYGam74wXjIcQwnk0nDQMeDa3q9nvENKpWKKUCYLDJhG/euYDrJkUpJNDExYdyMUqmki4sL\\n\",\n       \"3b17V7lcznyXJVlmH9xov9+vi4sLvXz50thpqEowIOz1emYFlkgkJMlcQvHzODg4sHID1yKGLHjv\\n\",\n       \"lUoltVotbW5uWoMKDRU3JcbzEPTpHegPoAigAD84ODAfQPd1NPT6+vqN7v+t2pld19Xq6qqmpqZ0\\n\",\n       \"fHxsVlegBtgDXF5e6u7duzatw2qLUoCatFQqSZK58WAqjhPnaDRSOp22KR91KLo4ak90ewwriGAI\\n\",\n       \"BoM2WPD5fObNkcvlzCim2WxaMiociNnZWcPF79+/b6JXEBwI77xfGjDw93HUJxKJWD43vBHeDxku\\n\",\n       \"GNjgkMruih0Y5Qw7NjESjuMoHA4bp5nvdzgcKhaLWdlFljbf95tet2pnlmQDEnY+HInI3uPIh7zO\\n\",\n       \"eJcRMpgvjQ6TL+RUDCN4ABgUINSEtwFKMhgMrHOnVod8VK1WrUypVqvmu4aamsULhMZrM1WDozEY\\n\",\n       \"DJTNZo07cffuXUMtpCsOSTAYvGYrO276uLS0dK1koJSiacOEkcaU75WHA/4HwlfkVzSmNMxwpply\\n\",\n       \"BgIBQ5GIobjJdasWM7TJV69emZ4MXu/k5KRxG2ZmZmwUzPgZNhwLDiYbqgoom0+fPrVhCqE4z58/\\n\",\n       \"V6VSUb1eV7FYNA7C8+fPDXaC/zw7O6tCoWDSKLzlKHm2t7c1Nzen9fV1I9VTo+/v7+vly5cql8ua\\n\",\n       \"mpoyeytJ9nPm5uaMu91sNs0v+uDgwIYcNKTn5+cWK0GzWqvVlMvlzIc6n8/bAsYWAfgTHBoFD8Mg\\n\",\n       \"fPZ6vZ65e3Y6He3v7xunut1ua3d312YAw+Hwh9Dc+NXtdrW4uKhEIqG9vT1jiyHTka4w3EKhYImm\\n\",\n       
\"0BNpDEOhkCWxBgIBW4RwoHGgpw7ErsDr9SqZTJqk/+zszCISOPK73a6q1ap5w3HUExlMXdnpdFQo\\n\",\n       \"FIzrAYa8tLRksRCQ93E9hTxEHBsstXFjGxY+WDGG5PwaTH1+fv6aoxHoTTAYtAaTB7bdbst1XYPw\\n\",\n       \"CMoMhUJGzIItB0zJ+56bm7s2PcVM502vW7WYOdaxiR0MBkokEmaKCHAP9ZEjsdVq2YLD1gv3+NPT\\n\",\n       \"U6NEQtRBSMoIFlNuKKCtVkvxeNzIPSwGeMncYMoaBKODwUDdbteSnGgiPR6PsetYoJjQEEEsfeIg\\n\",\n       \"NBgMzHeO3EOSnhjeUBYh6yJmORKJ2HhZkrkiQaxnikmuN1Zo5JJzRSIRbWxs2MgaByWGK3BoGNn/\\n\",\n       \"0Dfj+1zj9E2spCCzMLSAOM6xSNMEdIZyYnyQwcJjLMuE8fLyUrlczky7yfTAplWS1Y2MiTGcwTJs\\n\",\n       \"NBqZ6mN8gcPHQKVN6TAcDpXP582yq9lsqlwu2zAG0hAXbMBxYny73bamEcSn0+moXC5f46x0Oh2z\\n\",\n       \"QHBd1/gu8EwY8ZNmBcEKXgg/C/iN/BWStXq9nsnOwL/f9LpVaAZfGPBbIpEwwlEwGLTdDb4uQlRJ\\n\",\n       \"tsij0ahyuZyxylKplNlhoaR2XVfhcNgmhjDfRqORiWA//PBDra6uWlOEhVWv11MymbT4CJh1i4uL\\n\",\n       \"isfjxogjAQBXJRYjNgHEp8F847RJJpP68MMPDb9m941EIjo9PTU7he3tbT169MgQD6BJDMj9fr/i\\n\",\n       \"8bipdIDpeNA4JYigazQaJmglb/wrX/mKMfYKhYJWVlbs5Op2u4pEIgqHw4buHB0d3ej+36qdGYyz\\n\",\n       \"2Wwqm81a182oFLfMaDRqZiePHj1SMBhUr9ezXSIcDisejxtRhiMfiT5cYEk2FQT2A79NJpPa2tqS\\n\",\n       \"JIMD4Q67rmtoADDf0dGR8vm8qajhSFMS8NonJyfmm0wphAMof8apcnx8bEc5C4UyLJVK2WdCKQ4N\\n\",\n       \"s9PpmIiBrGwml5IMkUEkwGmGb1yz2bSyDt+54XCoQqEg6eqkQB8Zj8fNBgFriDe9btVippZLpVLa\\n\",\n       \"39+Xx+Mxz7iDg4NrBoHsQjR4iUTCdl9UFoFAQPv7+1YPUlZAhzw8PNSLFy8sUD6dTl9TWaRSKSUS\\n\",\n       \"CXMSJccEUSsEJEoX13WVyWSUy+UsPJOpYjAY1OPHjxUKhUwlTi4IgxssdlnkeOKxq09PTxtezpSw\\n\",\n       \"0WiYqSQMQGwUUGnzOoVC4ZrrEpndKNa73a7ZGYCZz8/PW94h7vzjfiIHBwfa3t6+Zoj+ptetWsx0\\n\",\n       \"5gxKMMkOBoMWUwaJHg7E3t6ewuGwarWaDQSAoRzH0dramnmnnZ+f281qtVpaWlpSLBaT3+9XpVLR\\n\",\n       \"1taW6QWhW/IeQDgCgYCazaYqlYoWFhYUiURMI7i0tKTd3V1NT09bQtTS0pKazaYk6cmTJxoOh0ql\\n\",\n       \"UvL7/TYup5nNZrMKBALGGcZ9qdFo6PT0VLVaTUdHR9fQHFAOgjF5SKhpx8fcaAuhfhILhws+BCJC\\n\",\n       \"jjgdYBvOzc3ZePz8/NweOgY5PzSBGbtQLAcCAe3s7Ei6UkDAZa7X6zo+PtarV690fn6u7e1ta0qo\\n\",\n       
\"7RqNhmq1mhHZyScZt7yFoYbxIGNyCDcgAtS4eKnx0OAiBAGqWCxKkgVB4q5Uq9UstAaONcJRvCqm\\n\",\n       \"p6cNBwbNgPMMjoxam9ra6/UaRMhkkQcYxbn0yQCqXq+r1Wppd3fXsHkebumKeovyBRwfZILyi3qd\\n\",\n       \"soUafDzGeX9//2b3/0b/+gfsajQaVoNiLgjZxuPxWOhkNBo13wZwXJ/PZ0qOhw8fWpPjuq6Wl5eN\\n\",\n       \"dPT8+XMjzkB9pDZNp9OmQaR2xm6WJotdaH193bwr0um07d7Hx8c2Wt7Y2DDCEg0rECEq6Uqlovn5\\n\",\n       \"eSWTyWvuSaRo8X1MTExoYWHBYDxJRrx3HEfpdNoek5NwsQAAIABJREFUjFgspm63a9/PaDSykM9U\\n\",\n       \"KmWm4eOMOtAXhjQzMzMW7zw1NWUiCMdxjB9CU57JZCRJ9+7d07e+9a03vv+3ameu1WoKh8PGe2Ch\\n\",\n       \"wpUAtoO8TiY2scKhUEihUMiUw9Ink7V0Om0WAOxO7CrjtSi7EKUGIe8Q4ceHL/CeaZxAKrrdrpUI\\n\",\n       \"+/v79jnW19fN9wJcGPNx3IT4M/IHiS/DCgEvjYmJCQtsh6sBnIlvB7g4nngYO4KcwPlGzQKvgwEJ\\n\",\n       \"9gcXFxcmskXjOD58wo0VUtWbXrdqMQcCAZ2cnFjeBrUtO0UymbT85/Pzc9OopdNpQwPK5bIODw/t\\n\",\n       \"JpJGWqvV7FiVZDcHTLper9sNgt4I9Adsh9l3p9O5Zh5OgwnmCy7caDR09+5d833+6KOPjMTjuq4O\\n\",\n       \"Dg5s+NHr9Yzn4bqu4vG4/Tz4HdVq1ep/MgVZ4FBXCQjiIYBQNK6bPD8/V7fbVT6f12AwMHSILEDK\\n\",\n       \"BzLGyTCMRCJWUvR6PZ2dnVn9X6vVLAXrTa9btZhhniEZAj5jiEJKK/L9Uqmkzc1NG8uy63W7Xfu3\\n\",\n       \"s7OzJlfCHTQajdpQIZ1OW629u7trpwD0U3ZMpnsQ6eEu4PPGIsI6l9p2OLwKXS8UCgYBomaB7smk\\n\",\n       \"kkaNupnFmMvlrun2oMSCfBwcHKjdbqvRaFgdj1RKkjkaMUJnjM9IW/qE6M9ixuCcqWGr1TKy1LgL\\n\",\n       \"P/EUUGBvct2qmhnrWIB/eBPseqlUym4+Zt0/8RM/YclIuBnduXPHXCuLxaLVwpQO1WpV6+vryufz\\n\",\n       \"Ojo6MkgtmUyq0WgYR2RcnoRKA5kS07JIJKLd3V27yfgrs7MfHBzo7t276nQ6+tKXvqQnT57YeDge\\n\",\n       \"j5siZXJy0nBcxs8LCwuGb8NUwzcDD+lWq2WOSvQJ4M7Ly8v2oFM/VyoVQ1sgaNVqNTv1kKKBtGCn\\n\",\n       \"AKTIWB+ivyTD3H/yJ39S3/ve9974/t+qnRluBrXqYDCwY5XjFII9HIdyuWwumhB/yA4cjUZaWFiw\\n\",\n       \"UetwOFQ2mzVN38LCglZXV826Fgx5dXXVcFMWNzo+UJHDw0O72djF0gAuLi7ae00mk6pUKqpUKtrc\\n\",\n       \"3LT6nbqdYzqRSJjWEV8L2ILUt4hyYdsx3ZucnNTi4qJBbKQJ8MCjm2RDmJ2d1UcffWTTwPn5eesn\\n\",\n       \"4LKAJTNyH41GCofDtrlAf2X0HgwGb4xm3KrFLMkaM75MmiPgJ0k2qm6320ZkR3EC/5lJFna0cHnR\\n\",\n       
\"9zUaDdul4CJwk/L5vCYmJuyhQG2CL0e5XDbEgdIGsxZU3yAWNIZYEhAVPM65HjcER/FNSQJrDjUK\\n\",\n       \"3tBwLLBAoPw5Pz+3E47GTZINb8rlsvnFoVpnCgqRiMYaMQJqdt6767qmJOd7q9fr14hKb3LdqsWc\\n\",\n       \"TCbN7Jtuen193VhoyHSQLiWTSWN8ra6u2tBl3EQcw+7Z2VnNz8/bVC2dTmt1dfWaIWAoFDJuBibi\\n\",\n       \"uAWFw2H7/bt37xqCEAwGjTAfDAa1urqqxcVFmzbGYjFDKGZnZ7W+vm47siQbAnGSzM7OKplMWvg8\\n\",\n       \"zR3NJ+UMsOD5+blxQhYWFoxmymteXl7K6/Ua0pDJZDQcDg3dgQGYTqetROE1+PPV1VWl0+lrukoy\\n\",\n       \"WqSrzWVxcVFvvfXWje7/rVrMp6enOj8/1+HhoWKxmO2mSNyZMEFvhKnGDskOCqtubW3NFCk7OzvG\\n\",\n       \"a56fn7eoBdhpYKdM2MLhsF68eHEt62Q8nw9iE+GTuVzO5E8c1Vjo4jcN8R8/DBJlkfOz88KFoFnk\\n\",\n       \"/Y67FGHZO86F5iRC5YLcrNvtqt1uGxWVhKxxORenCbj24uKiJiYmdHJyot3dXcOdoXvOzMwYZAgv\\n\",\n       \"Jp/P3+j+36oG0HEc4+S+ePFCjx49MvdKQstZuDi4Z7NZGxZEIhH1ej2jjXY6HcViMVWrVS0vL5uS\\n\",\n       \"GB4xu56ka3wISWY8TlQvjRjhPpVKxUbflUpF8XjcGHwHr3OnKY8QnP5eNTS9AYstmUzKcRytrq5K\\n\",\n       \"kkmpdnZ29PnPf94miODBWGYRTs84OhAIWK3Nd0WTSZ0L24/3IslIVOgn5+bmNDU1pYWFBZVKJTuF\\n\",\n       \"MFHknuEIxbj8Ta9btTMTxLO3t2e1XrfbtfovEAhYbh1TwZOTE52dndn/ohVEnYKjPcJLjAX9fr8d\\n\",\n       \"17iKMtJGLgTnGRgNO1vG5RB7qLvhGAMfTkxMWJnBuB2/DeKK6Q0kqVgsGmLApA91SS6Xs0wUxtQs\\n\",\n       \"aIYvkqy8oWxh9MzDhzKbeDaMKOGlsPvTn0DMAkFCoY6/HdAemYM3uW7VYuamozSBQH5yciKv16tS\\n\",\n       \"qaRkMmk5Ht1uV8lk0iAiBgNwcy8uLhSLxa6pK2DEnZ+fG29iMLgKU6c5wrzl4ODA0BVGv8T3gofD\\n\",\n       \"dDs+Pla5XLagenDtnZ0dI7SfnJwYOoK6BQsD6mKI+6VSSUdHRwoEAjbJm56eNvgOohCLj50WwhE2\\n\",\n       \"CIPBwCDEZDJ5TdU9HnlG6m2n01G1WlW1WrV/2263zYeOBhhBwsrKilZXV63Rvsl1q8oM/CSwWyUr\\n\",\n       \"r9vtan9/37znUD0fHh6qXq/rwYMH5ubD1BD3TCiWCwsLqtVqlnPHYpienjbnH8dx9J3vfEdvv/22\\n\",\n       \"5ufnTdlBQA1wITX04eGh+W+gTcRNHuU1DkKoVBiklMtli1K4vLy0Ic3du3fNeIb6l6gI0Amcn/r9\\n\",\n       \"vpGoUGGT15JKpdRoNEzSRU1NTV4qlcyoEScoyjigNkk2Rk8kEiaURRne7/f16tUrxeNxi0G+yXWr\\n\",\n       \"dma/36/19XUjsGSzWYOvYHqdnZ1ZAwYpRpJZ1RJVRtjMuK8bdaUka76wGUCgmc1m5fV6TUENkoJ2\\n\",\n       
\"EM82PDfw0wCB4Ne8R+pmhjrg0tSjuGjG43G99dZb5kfHqTE7O6vJyUkj8M/MzBhfghoYWA1cGucj\\n\",\n       \"8r6hqSaTSc3OzioUCpkahc+zsLCgZDJpDw59CvYIxWLR4tbweJZkzXkikbBS502vW7WYJRleOt61\\n\",\n       \"x2Ixc72nSSSrJBqNanJyUuvr62ZMAv5KDU4q0nhCEtAYQevo4dDEUeb4/X7zhpZkYgHqdWpHhKe4\\n\",\n       \"EFFGoAph6IOnBlAZKVGUHjSwSMfwgwabHhee4v3M4GLcmwOCfqfTMTIWLkfY77IgUXdLsoaY7D8Q\\n\",\n       \"Daab+IcA3xHuCUR6k+tWLeb5+Xm9evXK5DgMHSDDhEIhJRIJU25/9atfleu6SqVSqlQqptAOhUKK\\n\",\n       \"x+NKJpNaW1vT5eWllQy7u7tGZVxaWjIEBGI7nfrFxYUNCwiDRzdHHjXTM+pjwoQkmT9cMpm0h4tJ\\n\",\n       \"JQ8jDvnU6ZjDrKysmKAWfBkBLIuJGp7QeIYv0idBR3A3Jicnlc1mFY/HTdGOMyjTRdhyDHfIaykW\\n\",\n       \"i1byABnCp5Zk6BNayZtct2oxS5906Oj9GCRAfEH9zM0olUp2nI/voHy57DQ0Rn6/35od/g3HJkQi\\n\",\n       \"VODIhjglxp3hKUvAiDFhxL+t3+9f88bgs7GYeGgoncgQpKHEHoydniBOdI4wCXnQ8PdAHQKRioFO\\n\",\n       \"sVg0PSETPH4+tTiuRxCfeEB5GMH+aaylTzD/cTeoN71uVQPY6/W0sbEhx3FsqMCxCiQ0GAyMnba3\\n\",\n       \"t2ewFLsgQ5JwOKy9vT2jPw4GA62urqpcLhuMxO/t7e0pFovZDWLRQXnEUouGNJlMmpMQixlS/8rK\\n\",\n       \"iilY4GTDU8YSDLMXIo2npqb0/PlzjUYjO31w/8QCIR6Pa35+3miyDJRGo5ERgYAfiRxmkoqAdXd3\\n\",\n       \"V+l02pQ5RBsjICYyIhqN6t69e8Z79ng8Oj09ValUMo5Kr9fT4uKiqeeBTm9y3aqdeTxSjIhc3Nzx\\n\",\n       \"O/N4PAaN0X1T89ZqNeuq9/f37bhmR0KxzOiZ+hU9XbPZ1NOnT23Awa7FzinJjmheh4bKdV0dHR2p\\n\",\n       \"2Wzaboq6GQ72+IgYfzYml8vLy8ZJZqLGCYUXHBO3eDxurqVwqsGtcXYiQiMejxvllN+LxWLG2Ybg\\n\",\n       \"32g07LOdnZ2pUqnYqXJ6eqpms2mqd8oSeN3g/j/cmccudhlqwng8bg0M1gMYm/T7/WtSqk6nY9ZR\\n\",\n       \"77//vh31k5OT1vDQKFEzY0RI4+Tz+bS0tGSqZrBdJmqQ1TmWV1dXr5USGCrC9SVokqkj74dFzTSR\\n\",\n       \"qRrcEhhsDD7QGPr9flOQc2LAM0FZAurg8/m0sLCgSqVi8W8MbYApEUCQsDWOEIEtI9TlhKP5HI1G\\n\",\n       \"unPnjnl8cL9uct2qnRkmHGlMTK0uLi6u8X4hi2P4R8TX+fm5SqWScWoxDUQ1AT6KCyjNFDvk8fGx\\n\",\n       \"7VRg0HT046y2TCajqakpPXv2TB6PR8Vi0cSpsVhM9XpdW1tb5n3MMEWSmc+wmCDQn52dmc1tKpUy\\n\",\n       \"lfo4BjwajQzrpUHDdQiBqyQTClDuIHQlXPPVq1eGRzMVnJ2dNUgOD2dqfcS5SM56vZ6mp6dNZ0jT\\n\",\n       
\"+8Ohydh1cnKiSCSifD5vhjDo/cZDKWlgtra2FI1G1Wq1TMSJETk7yatXr5RKpVQul82nji8dp6TD\\n\",\n       \"w0P1+/1r4TbRaFSlUkkLCwva29uz3R97g/PzcxOHEizEovX7/eZJgeJb+gQ7rtfrNvAAEmNUPRqN\\n\",\n       \"9I1vfEOJRMIguWq1aq+P4TcPRL/f19OnTxUMBlUul62kITGAU2QwGMjr9erk5MRq+UajYUKCcSMc\\n\",\n       \"pqUkTUFz3d7etrqcDQfr29FopO9+97s3uv+3ajEDHVWrVX35y182emM0GtWDBw/0W7/1W9Z0ZDIZ\\n\",\n       \"y62DmkkE2dzcnJLJpClMwKTZpVzXtQkWJtxAXNlsVv1+X4uLizo4ODBJlMfj0aNHj1QsFo0Aj08y\\n\",\n       \"tex7772nQCCgFy9eGCpx//59C9jMZDIKBAJqNBp6+PCh4drVatVKkuXlZbXbbWUyGbVaLb311lt6\\n\",\n       \"+vSpWYoxoACtmJ6eVjqdNjcj1DR+v1/vvvuuITaIAegVJCmRSKhSqWhjY0Pb29taWVmxvzs5OWkZ\\n\",\n       \"JfPz86rX63rnnXes6fb7/YpEIubpEQqF9M477+i3f/u33/j+36oyg0WGYz4k9LOzM33ta1/T7Oys\\n\",\n       \"7UbtdtsciZrNpvL5/DUTxZOTEz158sSy8SgzSJzCSPvu3btGvBkMBjo/P1coFNLm5qYtjkAgYOhJ\\n\",\n       \"u9223ZqckKmpKW1sbKharerjjz+2B2xxcVGVSkXPnz/XysqKYcEbGxs6ODgwLJsd2ePxKJfLKRQK\\n\",\n       \"qd1ua2VlxUg+IC2gI5Is0erly5cm6K1WqxYKVCqVrKwaFwFg7HJ0dKTZ2VnlcjklEgnLHSfMB0SG\\n\",\n       \"YQ/oED4fOzs7FhDUaDR+KGgdv4bDoblwVqtVNZtNE7biGwcTrNVqmUyeAPOpqSmVy2Vtb28bK0yS\\n\",\n       \"kXywERhPFKWenpiY0NHRkRmOV6tV4+2yoOBFoL+jXi8Wi9rc3DSPDORLjUZDpVJJ5+fnJnWCNce0\\n\",\n       \"jKy+jz76SJVKRfl83ohGW1tbNpIH48aajHE7JRYi13G1Ry6XU6VS0fb2tolqKdcgUVFKYAscDAZ1\\n\",\n       \"cXGhw8NDU8OQMUiPQbwzzqLhcFjT09NaWlq60f2/VWUGU6vt7W1z0KGjd11Xm5ubBkvxRcKYg7wP\\n\",\n       \"moAxNgYnkHZYyBydExMTZtAC0f7i4sLc6i8vL80bjoHH2dmZGo2GHjx4IOmKP8FwolQqGYRYLBZt\\n\",\n       \"CIFZIpHJoVBIkmwYEg6HbTgCLDYYDLS3t6dAIGCMPmpcmH7wIWhYSZVlYY371DH8yWazymQyOj8/\\n\",\n       \"N2dTuNJwLVjoL168sNE28CVOqPl8XsVi0RxQfxidNna5rqvFxUWz0fJ6vVpaWjLTvsePHxslFE85\\n\",\n       \"bFrhaDQaDUUiETODAcqLRqPKZDLm1Xx+fq7l5WUL2ZGk1dVV4x1g0u3xeOw9YU3r8/ns9TBXgeRD\\n\",\n       \"4A11tSSrz+FjU/JQpzNqRwmDRzMSr3w+r0ePHqnRaCidTmtnZ8dc/c/Pz22kjx8eE9GNjQ2bGsL/\\n\",\n       \"CIVC5iOXTCYNhx43YZ+ZmdH9+/d1fn6uhw8fGlaPNx0CAcI0JV2btr7pdasWM4YjuPOcnp5qd3fX\\n\",\n       
\"AhRxPCInBKgOWqLP5zNRp8/nM6wUFQlWWPV6XbFYTHt7e9f8JxhlZzIZbW1taXFxUbVa7Vr5gPcG\\n\",\n       \"devCwoLRRCWZoSMuptLV7pjP522BEsa+vr5upH8GLLDsGKVTwyJghUiFema83GL8j9UWAyG4yel0\\n\",\n       \"2qBH1NW4H6G+wWw8l8tpcXHRdmJ25nFSEqcEYgBEA2963aqamYEBypLxXZLaDG4FRzIG5EiE4Cww\\n\",\n       \"jUK2D6WRXYppI68xTrPkhrMLESYPnwP/NpTg0lXHDwRHUBC7NX4bYMzT09NmDcC4GBIRsiUWsNfr\\n\",\n       \"NVoowltG65KMoTfu98wuSTgoZRaZLQxqgC+xtB0MBlaiQayampqyMTx2YVgN+3w+k6zx729y3arF\\n\",\n       \"HI/HjZEG7bBardrCbjabOjg4MNJ5sVhUJpOxkTYO+Nvb26a6ZtfhhkLCIR4CV85er2dNFXKmdrtt\\n\",\n       \"XhLAeoxxJdnRDHoAAw4eBkHqPECVSsXgPKwOUJf4fD69/fbbSiaTyuVyCgaDyuVyqlarevr0qY23\\n\",\n       \"MVAkywUUA0cnRAOocUhT9Xq9JnCQZPxjsklwZmIYhVodOwLYfjTNDHZozpGk3eS6VWXGxMSEwU0r\\n\",\n       \"KyuKRqN6++23TZ6D5RXZfUtLSzo+PjZpVCKRsFIBvJkdlx0JP2HHccwbGUI9tTYNITwRZPZ4MwcC\\n\",\n       \"AStfJNlxzUPT6XTMGGY8FwWh6rjjJiPp6elpFYtFw2slaXl52XZ4HDk9Ho9KpZI8Ho9CoZAJDPr9\\n\",\n       \"vrLZrCEvvV5PqVTKyFIYvUhXPBdsAxYWFhSPx014gKIb43DeIwsd2RrUUUSx0WjUxttvfP/f+F/+\\n\",\n       \"AF4Q0sF5O52Ovve97xk/oVqtql6vW2xupVKxnDpJJj/66KOPbLQ9HA61s7Nj4e0sZBYinGLqWBQg\\n\",\n       \"uHsSwTBO6YSFxy6G0Tm4Mw7/lUrF4seoYfn3QG2gDHt7e1pYWNDZ2Zm+973vqd1uq1gsql6vG76L\\n\",\n       \"KxOG4L1eT6enpyoUCjo5OTECfrFY1OXlpQqFgvL5vH0OfOGgDcByY3JIuUWQD73BOCGrXq8b2gRe\\n\",\n       \"PzMzYxYNN7lu3WImYzqRSJiCGu0cR/3MzIy5djJpc11X5XLZ9Gho+uA1o9rArYd6OBqNand3V5J0\\n\",\n       \"584dDQaDa8LTi4sLpVIpMxAHzuLYnZqa0vz8vMLhsM7OziyBCYNC6n0QBwwXKXvOzs5sdx0OhwqH\\n\",\n       \"w8pms3ZyzM3NmZ3A8fGxDZZGo5EikYipPcbRDyKTSQUYjUba29szwQN8bVCcmZkZw6jZnXE9IrEA\\n\",\n       \"O9xUKqXV1VXNzs5aiTU5OaloNGrigje+//8mf9lxnElJPtd12zd61c/oopnr9Xp68OCBqtWq7t+/\\n\",\n       \"bzslLjqxWEyXl5eanZ21YMdyuaxwOKyZmRm9/fbbSqfThkhAQoejkclkVKvVzAfi3r17NiHDMJAY\\n\",\n       \"Bb/fr9XVVSsLgK/wiFhYWDCr2bm5OZ2cnGh9fd3QFDjAPHShUEgnJyeKRqPKZrPq9Xr2v51Ox+p/\\n\",\n       \"Rt3AdJREBGhyOgSDQSUSCUMTHjx4YIt9bW3NHv4/+2f/rIW4wyQEIqR8I/QSuzJQEHSQ8/Pz9jDg\\n\",\n       
\"WgrRaG5uTnfu3NHXvva1N77/f+Cj4DjO33UcJ+g4jk/Sx5JeOo7zX7/xK36GFwgBEnh8g6n1RqOR\\n\",\n       \"qZilT+KJIeBQd1J6IKfCzZOdo1arGaTHRG58V4FiCl+a477dbtvuzBEPEsDi8/v9FolAljZTQ/Dk\\n\",\n       \"hYUFKxcYyrA7MrAA2SgWi/a61WpVPp/PLAWoc3O5nPkw8744vcDZ9/b2VKlUJMkYfvl8Xl6v1+wK\\n\",\n       \"mHoCc2IGSSMNH3p6etpw62KxqE6n86/4X7/J9Wn29Yevd+J/X9I/k7Qi6T+70at+RheKEfi01MJg\\n\",\n       \"n/jHccQWi0Xt7e1Jkg0PsBxgIUBrhAbJkARuNFwHOnsoleTqAcUx5Rt3l2cBIkbF7gt4DGol4gDY\\n\",\n       \"bozF0fV1Oh21220zLWQaiFSp1WpZA0qTjGP+eMoUAyV4JuygXOOGMyhuKEMYlvD5eMjYib1er0Uo\\n\",\n       \"YyDp9/vt9UBJbnJ9msU85TjOtK4W8z9xXXcgyb3xK38GF+UEEncYc6hLJF07chcWFvTOO++YoBTB\\n\",\n       \"6Y//+I9rNBpZ/QzKgS0rZCPootls1hY2+XhQUDkFwK2B1hh5s7jAXAOBgNbW1q69d6xqGV0Ph0Pd\\n\",\n       \"uXPH/D7AwiHL83qhUMgUNYhPMXxkUfr9fq2trRksCcrh9XqNc8J3kc1mFQqF9KUvfcnyUeB2SLKY\\n\",\n       \"NOwP4HlsbGxYc0gpRynDgg+FQjc2Tvw0NfOvSDqQ9FTS1xzHWZF0M0DwM7pwITo/P9fu7q6CwaC+\\n\",\n       \"9a1v6Ud/9Ee1s7NjyVGNRkNf/epXdXBwYP4XENJrtZp+/dd/XV/5yldscre9va0f/dEfVbvd1v7+\\n\",\n       \"vkFZhUJBmUzGDAr39/f13nvvaWZmRrlcTplMRtlsVt/85jd1//59mxYi6sTIGyQgHA7r4uLCnOsR\\n\",\n       \"B1BrDwYDPXjwQJeXl/rggw+sbsachViyr3/963r8+LFFyOHk1Ol0tLu7aycLusO9vT0jy4NaLCws\\n\",\n       \"6Gtf+5ru378vj8ej58+fG2nq8vLSaKfn5+fa29uzkwtvup2dHX3uc59TPp/Xhx9+aDK0arWqRqOh\\n\",\n       \"5eVllctlzc/Pm4XBd77znRvdfweN3Kf+B1cF6NTrHfoH5nIcx/2Zn/kZG3KwG5dKJWUyGZXLZdVq\\n\",\n       \"NcViMcukgxA/btDCIiMBCWdOJEfVatXQEca0kHQocwimxMibB2Z6elqVSsWQEgQBHPdg4EwLOQGk\\n\",\n       \"q2M9Go1a/Y9hYjKZVKlU0vT0tOr1ujKZjHq9niEY+N01Gg2FQiE9ePBAh4eH18oMJpypVEpHR0eG\\n\",\n       \"L+fzeVOQFAoFK6lWV1dt3P7WW2/ZRoHxOY6ed+7cMSLSYDCw8ofvrNVqqVQqaXFx0T7Hr/zKr8h1\\n\",\n       \"XedN1sAfuDM7jrMr6VuSvi7p667rPpf0A7WQucYd8V+9emV2rPV63XYvgs3ZUWGP+Xw+80A+ODiw\\n\",\n       \"iSGEdLBe5PfYaY3Xw5QP7XbbRr0gHgTBY1VFmCZ0yng8rmKxaDUu0iZifsFjoazm83nFYjGLESaI\\n\",\n       \"h6FHvV43K4JWq6WNjY1rhurU4zShWAgMBgOVSiX5fD6DODGkGadvLiwsmAxL+iRQlFE3r4tTKqaU\\n\",\n       
\"ELtAiHBXgl56k+vT1MyPJP1vkqKS/gfHcXYdx/nHN3rVz+hijIq4k8YEISocAXYHJFWYgY9GI8vZ\\n\",\n       \"4z+OzfEAHbSAKysrymQyJqWiHq/X6wZZkfREcynJCPxwIaLR6LXBiHT1YFLnX1xcyOv1WkYgTRm2\\n\",\n       \"X9iDjUYjLS4umuZvPMxnOBxeS62lSWPnnZ2dVSKRsHQtvjeaVTwwELvyWYDnAoGATTah2PKQU6/j\\n\",\n       \"N02/QFnI93/T69Ms5ktd7cRDSSNJNUmVG7/yZ3BhLzAcDnV0dGRJpohJga2oXaPRqD766CODnsbh\\n\",\n       \"MgYY0WhUuVzOEIGzszNr8hCwglocHBxY/Ytx9vHxsdnXMinETRRjlXK5bLwLGr1ut2sWr6TGMv0D\\n\",\n       \"m6V8IOmp0+kYf6JWq5kRJA78LCgmjIy5MYDBEBI0A5Eru/yzZ890fHys4XCoFy9eWHxEv9+3GIfh\\n\",\n       \"cGjeHBD+KbNQgY9GI21vbxs2jdsopd2bXp+mAWzrCl/+HyX9Ldd1bxZw/BleGK/0ej198YtfVDqd\\n\",\n       \"NrPDRCKhcrmsVCplqMP5+bneeustxeNx4w0EAgEzCccHIhaLGQ6cyWRULBYVCATMwQdoKx6PG2cB\\n\",\n       \"HgX5eKAY1Lqzs7NaW1vTxcWF1tbWNBgMbHTNZBHXUZw4KWP6/b7eeecdY7/hjg93RJLtntTY4Mjv\\n\",\n       \"vPOOcrmcPVxM6nhwcB0KBoP64he/aMw7ThnU5YyyOVUQKqyvr5sPx9nZmXw+n+7evau9vT3t7e2Z\\n\",\n       \"sQxmOB9++KGSyaQZM97k+jSL+T+W9Cck/VVJf8VxnG9K+prrur95o1f+DC52AVhaHo9H29vbevjw\\n\",\n       \"oQ4PD804EecfLGPxlaD5+fa3v6379++bEvni4sJYbNSweE8sLS3p8PDQUAqiziqVih2fqL7JBOSI\\n\",\n       \"HzdEJ9i9XC7b0AfYamtry6y0Tk9Plclk9OTJEz18+NBGy/Pz82YVViqVDFk4OTkxK7BaraYPP/zQ\\n\",\n       \"6npOiidPnpiqptVqKRqN6uLiQltbW8pkMrZj+3w+1Wo183JmkYPIzM3NWaJsvV7X+vq6Tk5O9J3v\\n\",\n       \"fMdOQp/PZ8kAT58+NTKV3+/XN7/5zRvd/z+wzHBd9/9yXfe/kvSfS/qnkv6ipF+/0at+hheiUtTH\\n\",\n       \"1MnjvGK4vpOTkzo6OtLKyorhqhiTo22DTI4ok4YxHo8rlUoZooEyHK5yIBCwwcm4uxBORpB3OKaZ\\n\",\n       \"usE9Hsdfl5aWzNwGhh0DDZyO9vf3bVERKeHxeBSNRlUsFrW1tWU7+Gg00t27dw01YfHCN4GHDKtu\\n\",\n       \"cnJSzWZT5XLZegmGIoyyQXJwB6WcgArK73U6HS0sLMjn85lglzKQz/6m16cZZ//D14jG/yzJq6vp\\n\",\n       \"382r9c/g4sgl9heerHQ1HSRllAUZDAaVSqVUrVaNcQeRCPYd0zJ4zBi8zMzM2ESLnGomXTDHfD6f\\n\",\n       \"TQ/hNAyHQ6sTEZdyJCMuwNqAEwYUArZaoVCwYUa9XjfOBGQfTF3QREYiES0uLlrje3Z2ZoFAYM7I\\n\",\n       \"tcbVNxi57Ozs6Pz83EoLOB7AcODYPJCSzLXUcRzjsbCYSQmo1Wq20OFS3+T6NA3g35R013XdP+W6\\n\",\n       
\"7n/ruu5vu657M4vzz+jCOBvZOugExB8QhnF8GPyWKx6Pq1armamh1+u1rEBonFjLer1eSZ/kcdD0\\n\",\n       \"kd1H6hTlCfo6iOh4zGH7Ct+5Xq8rlUrZLogNeCw8AAAgAElEQVS5TaFQsIeLnzU1NWVc4ZWVFUMr\\n\",\n       \"BoOB9vf3Va/XDa5rNBqqVqu2YDm9xmN/m82mUVkrlYq63e61NNZ2u61wOGyeIixqWICIdxEUtFot\\n\",\n       \"RSIRHRwcWMwbRoyQp0Ch/jC4GU8k/Revd+h/6DjOT78eb//AXbj/xGIxrays2JGJ6iQYDJpDKI0S\\n\",\n       \"quDJyUlDHfr9vqmf2aV8Pp8SiYQNBxhPM+iQdC2x9OTkRPF43LDUcZsuJPlo/DY2NqwEikQi8vl8\\n\",\n       \"Bl2xW+EC5LquEomE8aXBx1FTp9Npe9+xWEzxeFzLy8tmWEieYbFYNH4IavN0Om1xE6hdKJUQBnM6\\n\",\n       \"0AeMfw/EOMzMzFzLZuH98LNQ80hXQoNIJKJoNGo9xJten2Yx/6+SviDpf5H0y5Leff17P3AX7kPk\\n\",\n       \"0bXbbQP6ic+VrjpwZEhYZ1Fro4JGesWuAZkebjFMNgYFXq9X6XRax8fHlgHCkYq6mlNhdnZWzWbT\\n\",\n       \"rAsQw+7t7dnRz3iYnZNAGwYelC88hOScYJlFP3B6eqqdnR3LYGG3h7M8PoxBqAuNFvErpQTWuuPJ\\n\",\n       \"t/Qgkoznzfssl8vGL5+cnLR8FcdxdPfuXbNQQKp1Uw3gp0Ezvui67ttjv/4tx3Ge3uhVP6OrUqlo\\n\",\n       \"aWnJGirAe3a6cDgsx3GUTqfl9XoNVqLOJKB9fHDgOI4x5SAiDYdDa5jgCxOnOx7T1uv1bHQNzg3f\\n\",\n       \"eH193RAM1BycBrwnhKEQjIimIIAH5GZ6elp+v99chOr1ugKBgLLZrNndQpLHUZ/xOoMTSQb1jYt1\\n\",\n       \"edCAPXFA9Xg8RtACu6fkoQEOhUIKBAKWq0hdf3FxYRtENps12PMPIzrt0nGcDX7hOM66rgYpP3AX\\n\",\n       \"ZQCIAmJTal+YYi9fvjQzFnbRO3fuWLAPI1oSq5rNpvkeM1XMZrOW/wFhKBAIKJlM2kMC/4HxNgrx\\n\",\n       \"drutSqVi1gZEl0FgBzWZnZ01jw+EuugNB4OB4cHEU6TTadPvMQGdmZnRs2fPbKfl4VpZWZF09eCM\\n\",\n       \"/yyEuKhXsBNj55ZkJwzNLLwP6nl415RomUzGQj1pSAOBgNbX1+0hpQe4yfVpduafkfQvHMchcn5F\\n\",\n       \"0l+60at+Rheq54uLC3OPJ7Ac21oUI+wU7Lj4nVFzM3LN5/NaX1/X1taWwuGwxYvRCBKbAA/5448/\\n\",\n       \"Nlck5PjIryDHY57SbDYNvcDUhZvKTk4TxaQRe15JFqg5MTFhYlRscSmTpqamtL6+btO8cVsu6RMn\\n\",\n       \"I4Sl+HtwCgQCAfOci8ViVgZRe/PvhsOhjfvhdvNaW1tblnTL7g0ve2dnRxsbG4rH4/awvOn1aXDm\\n\",\n       \"35J0V9Jfl/TTukI2/sWNXvUzuiYnJ6+hApj/4Q/XaDR0eXlpTQ7qj8vLSxu99vt9G4hgmHh4eGg8\\n\",\n       \"CZQS2OWizmaXx9YL8vp4/BmiAUSkXPh2jCtY4AqzKAnrxC631WrZwwvUxt+p1+v2HyNu+MNgv6Aa\\n\",\n       
\"/X7fKAAEc5IQUKlUjIBUq9VUKBSuiXJLpZKdeM1m00oHIMVAIHAtPZamm/4EvB5K6f7+vm5y/b47\\n\",\n       \"s+M4/6GuSPiOrpPxN14/tf/oRq/8GVyMbi8vL/X2229rMBhoaWlJ6XTamg4sVfv9vvlM4I8GC45S\\n\",\n       \"AcL+j/3Yj1lgfDqdVr1eVzabtag1auHxvDvc5hOJhGHR8/PzCoVC8vl8SqVSFmsMlLW2tmYnCF55\\n\",\n       \"aBaZHHLEJ5NJcwKC7ENdOzMzo0wmo+fPn19LsyI/vFqtanFx0Wr+xcVFtdttK03gayCMDQQCWllZ\\n\",\n       \"sbgLSfY6w+HQ6JvLy8tmfr6+vm7aRYYuWOAyGWX8jqrmvffe07Nnz974/v/rduZ/5/V/f1nS/y7p\\n\",\n       \"P3n93996/Xs/cBcWtgxP4Bng3bazs2OcW3ZNwPpWq2WTMxYNu+7e3p5BZVNTU3aT+/2+0TPZlWh6\\n\",\n       \"gsGgDWcQjmLYQtcOGoHhIMYzoVDIOMWXl5fK5/Mm8R9n5925c8e8PUAqGHS0220tLy/bEGM8FBPb\\n\",\n       \"LgzE9/b2bGGP8619Pp+9Byx8A4GAPB6PRTLDzgsEAsrlcpqenlYmkzGjSGifEPs9Ho9xQSYmJpTP\\n\",\n       \"5027+G/Krf+91++7M7uu+xclyXGc39CVDrD0+tcLkv72jV71M7oqlYqSyaSVECAHBM5IMoYZpoGN\\n\",\n       \"RkPJZFJTU1MWAww3gsECJQPBkAxUUGBLMltclB08WIhsqSNBRHAqwv4ABAYIDkhQuoISG42GxRrD\\n\",\n       \"UvP5fMbNQLoPOQrMnCnhxsaGBcjTsJHPAmIBUT+fz5utLn8nGo2arx0Kc3ysa7WaFhYWrLkDosNg\\n\",\n       \"cRzqk2RUgVQqZRAm9fdNrk/TPmYljbOmK5JuZqT7GV3pdNo4tHAvaM4wXEHrRo3YbDZtosZuDjYK\\n\",\n       \"LopLT7fb1dHRkR318HQhL6FgwQgR/gVKC8hAKEDOz8+ttse8cGJiwh5EYKyjoyPzNwZflmTlC2Y1\\n\",\n       \"+NAhTn358qU1i1A7cWaCv8L7Pzs7U6lUMtHtzMyMeULPz89re3vbhiRwj4EnM5mMhYLCN0H4wJS1\\n\",\n       \"3W7r7OzMMPbl5WUTQdB//GHEDf+mpP/HcZy/6DjOX9IV2eg3bvSqn9G1tbWlUqlkit+TkxO1Wi3b\\n\",\n       \"mVkMNHzJZNImhChRMPIGXmMkXSwWDYIjehi1Bg2jx+PR0dGR7V5wfyHTHBwcSJLBhghG2ZHZDaem\\n\",\n       \"psz1B1jP7/erUCiYRhAsmiYN3gcpAJJs4uf3+20xEXLfbDYlSYeHh2q329rb29PKyoo1qgx3Wq2W\\n\",\n       \"jo+PTUlDY0mJAg+EOpgGkvg2fPFg1x0eHqrT6dgw5vj42B4O3tObXn+gBvC15u8/kPRjumoEv+a6\\n\",\n       \"7v95oxd1nJ+T9J/qiuz/sa6gPp+kvydpWVcC2v/Idd3W2N//y7oSCPx113X/+ff5me5P/dRPaXFx\\n\",\n       \"UdKVjGhpaUnFYtG8NAqFgk3QyDM5Pj7W5z73OYPNyOWDJUZHjmvl7OysDg8PLUN73HCRJoudmweF\\n\",\n       \"cTYTQyiV9+/fN3YctExSVsdRF4hHNEosHCaDKEn4XNT7eGdsb28bHPn5z3/eRKXhcNhKs1qtplQq\\n\",\n       
\"pVKpZMGY7My8TjAYVKVSUTabVbFYNEUP/iHkpXi9Xm1vb9s0dWVlRYeHh6pUKuaSBEvu29/+tjKZ\\n\",\n       \"jJUnP//zP//ZaQDdq9X+j17/d+Prtbr7r0h64Lpuz3Gcvyfpz+tKnvUbruv+947j/DeSflbSzzqO\\n\",\n       \"81DSn5P0UFJa0m86jnPXdd3R7/3ZKC9Go5F1/+VyWaurq5Z5DUqB3xrEpMvLS+VyOUUiEf3u/8/e\\n\",\n       \"m8RGmqZ3fv+PjGBEkIx9j+C+JXOrrF4wXdOC1GrBGvukMWADvowxNnTTwePlYBuQL9bFbWAGXgAL\\n\",\n       \"sC0bmsPIkAFjoIPGGEEQ0FZ3Cb1VZlYWkzuDZARj3xgRDC5Bhg/M31PBbnWjkVRrWkR9QKOyM8lY\\n\",\n       \"3+99n+f//JePP9bS0pLVcqiwHccxmyp2H+rbwWCgZrOpRqOhubk5G5H7fD4dHBxYfYi5zNjYmB3N\\n\",\n       \"uF/m83kbN4fDYR0fH1vO9M3NjRYWFuwxX716pUePHlnjR24htSo3yPBd9C/ko52dHXPhZ1S+ublp\\n\",\n       \"ZU2j0dDR0ZE++ugjvXz5Ul/+8pe1tbVlnyelA3rG6elp5XI5czwdHx9XqVTS4eGh0um0Go2GhRJh\\n\",\n       \"w9VqtXR6emqNbTgc1tnZmba3t++1tn4eCui/5zjOjuM4p47jdN797z72XKe6lWFNOo7j0i2t9ETS\\n\",\n       \"b+nzxvIPdevTIUn/UNIfDYfDq+FwmJO0K+nv/XUP7HK5jPZIbDDcDGLIOGqpH6mXwYcZYsBpLpfL\\n\",\n       \"SiQSVh9PTEwYMZ1mklDLVCplahV4FRMTE0aCZyxMnAM7MDVwKBSy0qNQKFhkGfRNOn7KF6xoQTiw\\n\",\n       \"2GKCyEjZ5XIZ6WpsbEyRSOTOsIgbiM8PEev5+blOTk4sBQB7LSRQpGxNTEwok8nccUh1uVxGBZie\\n\",\n       \"njbSEoY1jOHpI3Bqvc/189TM/72k3xoOh4HhcOh/97/A+z7hcDhsSPqnko50u4hbw+HwzyQlh8Mh\\n\",\n       \"2sKyJDQ0GUn5kYfI63aH/olrNOcDDgURDNAWs9ms4akgGEBdmAjSFLlcLqVSKUkyGumTJ09MHvWl\\n\",\n       \"L31Jbrfb5D4TExNaXFw0LgW85S996Us2QUOuhA4wHA6b7xq+yVBDgel4rPPzc83NzRm5nXKC/Oxo\\n\",\n       \"NKqlpSX9+q//ur1u3lsymTS23agIIBQKaWFhQZFIRPPz88pms0qn05Zqtbi4qLGxMeMuM63Euovm\\n\",\n       \"lR6AhCmMeCBiBYNBra2tWVTx9PS0VlZWbCIai8XMVuF9r59nnF0aDodv7/UsI9c7bsd/qtuxeFvS\\n\",\n       \"/+04zj8a/ZnhcDh0HOdnFfN/7b/lcjlTSxQKBfN7WF9fty9iZ2fHdqdarabd3V3F43Gr/1qtlmGt\\n\",\n       \"b968MQ4HOxKPe3x8bMcukzTsvJDdY9n68ccfy+VyWWOaSqXMhRT8eGtryxTR7LY4ikYiEeNGcDoc\\n\",\n       \"Hx/bgiyXyyb/L5fL+t73vqdkMqnNzU3Nzc1ZPTwYDAwVKRQKev78ufnModxmihmNRvXZZ5/ZuH58\\n\",\n       \"fNy0j3x209PT8vv9JqAFyoTfjRNSt9vVycmJ5ufnbWqaz+dt6MK4Hqu0971+nsX8g3d17b+UBOF0\\n\",\n       
\"eI8J4FclfXc4HNYlyXGc/0fS35dUchwnNRwOS++w7Mq7ny/oFh7kmnn3dz9xofxgQvb06VNjZlUq\\n\",\n       \"FW1sbGhubk67u7uamZmRx+OxwUM8Hre86lKppLW1NZuKERLZaDTk8Xg0HA5NfjQ7O6vvf//7Jvkn\\n\",\n       \"ShgvuIWFBXMdlW5LoVwup7OzM62urhrSMjs7a+UDsv1RAWu/31cikbBdkpNkcXFR+/v7qlQq1mgW\\n\",\n       \"i0V5vV6trKyY9g8vurW1Nb169UorKytyu93mMkRcA7+bTCZNCbO0tKS3b99aRgkuS61WS9lsVvv7\\n\",\n       \"+xbtgHQsFApZNjgDmePjY62urlqkxfj4uF6/fm0sPKaH73v9PIs5KKkv6R/82N+/72LelPTfOI7j\\n\",\n       \"k3Qu6d+S9D1JPUn/WNK33v0Xb44/kfQvHMf5Z7otL1bf/fxPXN/85jdVqVRssibdTr4gvTNOfv78\\n\",\n       \"ucbGxjQ3N6fx8XEtLS0ZZIdwtFgsanZ2VuFwWLVazSRBREokk0ldXFyo1+uZZEi6zc6bn5+3o31p\\n\",\n       \"aUlbW1tG7mGxj3oSb25umrr7+PjYXgNum9fX10qn0+p0OjZZzGaz8vl8lgkeDoeVTqdVq9UUDAbt\\n\",\n       \"5kHVQZkC9ZObEY860rdG6304KOPj44pEIub8hA6RCR/E+kQiYQQsBLn9ft/MbpaWlgz5YKDzG7/x\\n\",\n       \"G4Z6FItF7e7uvuey+vnQjP/ovR/9r3+8V47j/HNJP9AtNPcj3ZrM+CX9seM4v6130Ny7n99wHOeP\\n\",\n       \"JW3olnr6O8OfgifCkzg/P9dnn32mpaUli0YoFApGsCkWi1pdXVWtVjPZDyoJLGWR+kMgf/TokarV\\n\",\n       \"qrHJ6vW6Go2GFhYWbHgBCgC2HAqFtLe3Z1wIkJFoNGrxZYRnJhIJ84vrdruqVCpG1ul0OkbKAZMG\\n\",\n       \"GpuamtLV1ZWZpXMiEOaDQIGPDGbg69evtbCwYDZZnU5Hy8vLpo2kOYQNSAkCBAjllWFNr9ez0mHU\\n\",\n       \"GHE4HFqjTAxFoVCwcpBGEGLTfa6fijM7jvNfDofDbzmO8z//Nf88HA6H/8m9nvlv+AJnBiP2er2W\\n\",\n       \"M724uGiTMHgYNFe7u7taXV01dtz4+LiZeUPHJGsalKHf71tWNf4a7HLdbtf4C+Pj40okEvrkk09M\\n\",\n       \"ZzgYDBQIBNRqtcx48OTkxAY1IC9MJnl+wi7x+MCTAt0j+LXf71etVtPS0pLevHljzyXdNnvo8SKR\\n\",\n       \"iN24PCZEIfwvXr58aQ79KFJIaJU+l2gRdcHOO+qPnc/n5ff7zY6XIdH8/Lyazab6/f6dsfy3vvWt\\n\",\n       \"XwjOPOE4zt/TrfvnqDjrx1l0vzTXqJXs7u6unjx5YpyHUQiKpNLPPvvMfM7wt7i8vFSpVDLvB4g0\\n\",\n       \"WFQh82+1Wmo0GsZ9htBEehOlB8c1Uy/Ce0YFnEBz4L+gKUi/kGmxWDFDhKtNc8hxTm1PY1YqlfT8\\n\",\n       \"+XNb9ChlENd++umnhgCBR/PYKEd4L5wWkPIjkYhlfR8cHFicBWJiXg8TRDgsjUbD6LDlclkLCwt6\\n\",\n       \"9erVvb7/nwXNhST9D7qF5n5b0rqkhqQ/GQ6Hv5REo0wmYx/awcGBJicnDXW4ublRsVg0eiVj3lar\\n\",\n       
\"pcnJSe3u7hrMVqlUDIdmQEHADf5wNGBgw71eT3t7e5qYmNDCwoLtlBzPfr/fund+FhSEkyCdTtuw\\n\",\n       \"BUoqo3HqWBYVvGS4DdTNOPOz642KTIlo43E4lSEpMarHibRSqdhkkzKt1+tpeXnZNo5R1iEsRZpO\\n\",\n       \"r9drPUw+n1coFDL/Ed4PAgBU6fe5fhZr7r+QJMdxPLpFIP6+bsfO/6vjOK3hcPj4Xs/8C7iYTEnS\\n\",\n       \"8+fPTaqDKSByonQ6LbfbrbW1Ndt9+QI9Ho+Wl5cVDofNRIVRMCJNiEeYwFCzosQIBAKanJw0S61k\\n\",\n       \"Mqmbm9tMasJpGCLAFgMhyb0LWE8kEtbI4teBWhtFNqcJCbHhcNgcO5H8w1uGjI8OL5lMGj6MBRe5\\n\",\n       \"K4RkRiIRM3ZJJpNGKGLSyEmDGh2qK6mrCGD5/V6vp3A4rEgkYoQpPluv12ul0fteP8/QxCcpoFtU\\n\",\n       \"I6jbQcdfvfcz/gKvWq2meDyuubk5Cz5H4FkqlWyQcnR0pMFgoJ2dnTuY7tXVlXK5nOr1utrttkWt\\n\",\n       \"sdt7vV4tLCzYIj4+PjbqJCJO+M21Wk2rq6uWk3Jzc6NEImEyI8bUOCZVq1XDji8vL7WxsaFOp6NW\\n\",\n       \"q2U85e3tbeN91Go1GxPDad7Y2DAiEXYB3W5XzWZTa2trCgaDljEofX7zAzeCSlD2sPtyopXLZQUC\\n\",\n       \"Ae3s7Ghzc9MQknK5bAxBmsV2u23BO5CgGOHzmmn8rq6urOm9z/VTF7PjOP+b4zjfkfR/6XZX/q6k\\n\",\n       \"f384HH5lOBz+UmoA2WVoKtxut03cqDX7/b55JqMSBttF8AlzjMYHeKzf7yuXy+nw8FC1Ws1G4YTG\\n\",\n       \"A0cR9jg6OIDX6/P5VC6XzSIWJTclAg0kMi2GJBMTE0qn08ZTZmDB1HM4HNp4fNSTg7H24eGhMfLY\\n\",\n       \"RUebZEnG/Gu320ZpdRzHxvzU1NlsVrOzs8YhwSxytE5OJBKKRCImmIWGC8JBw7e9vW0MPV7z+14/\\n\",\n       \"a2eek+TRLZe58O5/rXs92y/4wuSFLBCQAZqw5eVlI9l7vV5lMhn7kihJ4GjgjE+WXigUUjqd1uzs\\n\",\n       \"rLLZrJaWlhSNRrWysmK7NlwEMNt4PG4LJpPJmEUV6auEBkm3w5R4PG64ONFk3CTRaNRG0tFo1B6X\\n\",\n       \"GxBVOgKC09NThUIhPXr0SIlEQrOzs5qfn5fL5dLc3Jwx/iRpbm5OLpdLyWRSw+FQS0tLCgaDhstP\\n\",\n       \"T0+bMfmoAQ7vB6V1PB43/JhyLxAI2Ag8lUopm80qmUyaEfvz58/tPT169Ohe3/9PXczD4fDf1i2h\\n\",\n       \"55/qFr34z3U7DfzXjuP8t/d61l/QRb1YLBZNY0bjBYeZY5whCObheBS7XC5ls1mbCgK9wa6jbqX5\\n\",\n       \"AT0hMpcsPhZ0rVYzr2VixChtwMXj8bg1Z+yYNFCRSOQOBHZ2dqZKpWJcYgYbyWTSbmZ8j+fn5++M\\n\",\n       \"4tvttlZWVrS3t6dYLGbeHmNjY0qlUmbs2Gw2dXZ2ZkptjBLx3lhcXDRCFwY3YOGxWEzJZFIfffSR\\n\",\n       \"NaDT09Om/mF0DbUWES285/tcP7NmHg6HN8Ph8FPdRqb9K0nfkbQi6Z/c61l/QRfmgzgYjTptgr/i\\n\",\n       
\"gxwKhVSpVOTxeIxUhMHf0dGR7XZut9sk85i54HHBVIzamYw+mjUYbJDPCcUh4oEjHstcsGp2O+kW\\n\",\n       \"thsOh4Y7M1kEm2Z3Zrrn8/lMNABEdnx8bKbn2IXBCgRaq9VqdxIAaArZ4aGT3tzcKJfLqdvtqlar\\n\",\n       \"3Qm+9/l8qlQqFi40ekJKsiEP1rahUMgyGPHiu8/1s4Ym/0TS13VbLw90WzN/591/3wyHw+t7PfPf\\n\",\n       \"8OU4zvB3f/d3Jck6bgg6a2trKhaLajQaSiQShhsDSZEehUlLpVKxjh5KJvIpbABYlNiBwaOQZISl\\n\",\n       \"0dfAVA0tHPwR8kngXFSrVYXDYfOP5nkbjYbVmYhgM5mMOfoXCgW1Wi09evRI+/v7pphGroWiA4Ep\\n\",\n       \"3GPgNho1vEWy2axOTk6sJAD3pi8YDoeWYMXJNhgMzBQnn8+bwypZM+Pj44rH42a2zgQWJKjZbOoP\\n\",\n       \"/uAP3nto8rN25gVJfyzpo+FwuDQcDv/RcDj8/eFw+OqXbSFzEUbJUILdh66eQQVfHmppHI9goQE9\\n\",\n       \"TU9PGw+ahYpjEMoIVCxwFQjj8Xq9VkdTEuB3h4UXsiK0e2dnZ5bJR1Ks4zjmnwH1E1kTY2iIQpeX\\n\",\n       \"l8rn80omk8rn88pms3bz0oQ+evTIfC8g5yOoxZXT7/fbScWgZHd3VxcXF2anywib3bxYLFrDCNmI\\n\",\n       \"RhAJGjpHCP6SLIUqEAjcO9fkZ+HM/9m9HvnfwIV9QDAY1Pb2tpaXlw3yoXYEevL7/frOd76jQCBw\\n\",\n       \"R/rU6/XMOqtQKCibzaparRp6UK/XbUzLrsNjIkCdmppSu922mIfd3V1zHALFoKtn6oiAFJcl0AG4\\n\",\n       \"FX6/X2/evNHMzIzGxsaMbhqJRCxjm9ff6XQsiYrBDM1as9lUPB43Fh2vidH5qO7QcRydnJwYtk0Z\\n\",\n       \"QKj8ixcv7AaAmM+Usdlsmjzq5uZWFFQuly1KAl7G6Ibx8ccf3+v7v5+51y/ZValUzHkSGIvFygJi\\n\",\n       \"8oTJYqvVsgyRVqtlzu9+v98GLEiVLi8vNT8/b7s5gerj4+NGREdAyyABf7jz83OrVfv9voXY0PAR\\n\",\n       \"nM4OGYvF7OZgcSwuLtouGo/Hjdgvfe5NDQLCe4WHUq1WzdUU3BvzSEjxbrdbU1NT5pkMwZ7auV6v\\n\",\n       \"G087EAioWCwaJEgsHQ01Wkr8RdrttiFDICOodkCR4vH4vb7/B7WYGWrwBeKNAUoRDAZtiII0KhKJ\\n\",\n       \"2OiZ0gAiO7Xf6uqqHcmkOEky0jrWAktLS1pdXZXP51O327XYNaIggKvw0UilUjbKlaSPPvpI0q16\\n\",\n       \"u1qt2kJPJBIKh8P2vjAoxB2IiRuUUrgVwI08FnpFOBJAl1A7MU+PRqN287NrEiUMi47YtouLC+Mx\\n\",\n       \"Yx+AYSRIkM/ns4g2DGUkqVqtmicgNNP7XD8Pn/nvzOVy3b4dMGTQiNEMQBQShNaQucHuGI1G7e+g\\n\",\n       \"QJIDPYowsIMD9ENOpxkCQ/Z6vXZEA235/X6Vy2VrwIgtY9diRI5gdHp62hpWxs7Y0DIahzQfCoXs\\n\",\n       \"JsBay+12G6cEngpeHzwHI3IQEPB27L+on8lKoS/gfbbbbaVSKRu4MKhCUIu2Ei1lMBg0qBI73l/k\\n\",\n       
\"0OTv3NVqte7IkDAXqVQqyufzev36tQaDgWX9vXz5Un/1V39lmXws/N3dXZXLZaudX79+bRAY0iE4\\n\",\n       \"E9TR5+fn2tnZ0e7urprNpvb399XpdFStVnVzc2P4d7fbNfMZIK1YLKZwOKyNjQ1ThdBYVioVFQoF\\n\",\n       \"5fN59Xo9bW5uyu12a39/X2NjY6rX6+p0OiZurVQqOjk50Q9/+EPbpev1uk09p6amrPzAQvbt27cK\\n\",\n       \"BALK5XI6OjpSv9+3XgP0BK4F8GK73TbokRiLra0tG1BtbGwol8uZVAxCErs4sF6hULBk2vvwMqT3\\n\",\n       \"yM7+Zb0cxxn+3u/9nunbqJkxALy4uNDW1pbxIzAM/NM//VN97Wtf09jYmH3YLFhJtnNWKhXDRePx\\n\",\n       \"uHEWRt192E2heUI6qlQqluTabrdtWMJJ4nK5LMYNhAXkhb9DdQJ/YjRagjLl7OxMkUhEx8fHCoVC\\n\",\n       \"BschUCB+AkRh9GdHA3jOz8/tZMtkMuYw6vV6LeD+7OzMpGTU5PBPCOKhyQUxgYQEzRWjnEAgIL/f\\n\",\n       \"r2q1qt///d//xflm/F26cPicmpqyTD8avOPjY7OMAq/99NNPDVe9vLw0NcTBwYGWl5dVr9eVTqf1\\n\",\n       \"8uVLra6u2uLr9/t3jAxZxEQa4GcB3MZjs/BGx8+QdVwulw4ODowEFIlEdHJyYtq4wWBgiygUCunl\\n\",\n       \"y5d6/vy5TecKhYI5arKT4pMnyX4OxAOe8c3Njfb3902Eix9dMBi08Et8LqiDCXDHnBGuCqXKYDDQ\\n\",\n       \"ycmJKX1Aas7OzpRIJCy5C2FCJpNRr9f7W8nO/jtz8aHlcjnzktvf37fdFd+K0S5fktrttkFz+NDB\\n\",\n       \"FQbG83g8llHSarWMgYaTPVM3BiPNZlO5XO4n/Juhh4IKjPows6DAxbGDRYlBWHq9XrcEKI5xXI4g\\n\",\n       \"5CNIiEQidgMx2CmXy9ZEMjUcdS6itLi8vNTx8bH5fmAkjsv+xMSECR7A5cHkGdlDfuK9jGL9pNNS\\n\",\n       \"htEgv+/1oBaz4ziKxWIKhULmx0beCMaBlUrFuMv7+/t2BLKIgdP4kjD0ZjyO9wUjc9hl5+fn5i3B\\n\",\n       \"okokEgoEAkbqqdVq5l7E+FeSwWQoVjA8x/YA7JqAd8J4jo6OzLgF1Qp4LyXI4eGhmayw4xLJzO/i\\n\",\n       \"1olub2JiwmzM0Eey8CVZ2hQwIacbeDNeGpJsuMLv0l/wM+DoY2NjFr/xvteDKjMwDqTDR9WMixAJ\\n\",\n       \"rNRymUxGW1tbFgnGbkd3z+5IPU396Ti3gY1kdZDD3W63tbi4aKoLhgjxeNz8NKampoxJhqv+KCqC\\n\",\n       \"GpqGk8V7cXGhx48f224Ils5NR61+c3Njiauzs7MaHx83OzAMcaampqzOdpzbwCJUNKA8+HtA6A8E\\n\",\n       \"AsbTgJSF3wg8bkb9lHFYCMCak2RICfwYoobdbrc++OAD7e3tvff3/6B25m63q2w2a6QgiPEQhCDn\\n\",\n       \"YCkAdHdxcaHDw0OVSiV5PB7jXjDoIISKR+8AACAASURBVP4MTjM83WKxqHg8bgw4MlDOzs4MNaEe\\n\",\n       \"R+jZ6/VMj8jOzpGPgBZyETsXdq8MVsCD4WrTTMLfoBmVZFixJGvKGMOP2uzS3NbrdWPS8Z7ITpme\\n\",\n       
\"njb/PTBqSVbSMaaH4wFhqdVqmYsq5Z7P5zOZFOStL8qMkQtesqQ76gnqzmg0qomJCcsBXF1dNS+1\\n\",\n       \"6elpJRIJC8K5vLy04QnHsvS5TRfcA74gvjh83Xg8yheOXXbhmZkZW3A42UsyMg87NeR8Hj8QuHVG\\n\",\n       \"A5uG1zExMXGHtA8HWZJhu6N4s8/n09jYmDmcIh4AW8YmbGpqyvgko6QseoFRiy5eN+bnLGzouPBH\\n\",\n       \"oOYyykZPyXt73+tBlRnsvnTy6OP6/b4qlYq5AlFXAkuRyoQqm7iw4+Njm/q9ePHCMF2CfcbGbvOv\\n\",\n       \"EWtisj18F8jT7/dtkII97tXVlUqlkgk54UozDmbYwOugKS2VSgaP4VYPN5vHHz1p4GzAtwYfxg+D\\n\",\n       \"G7dareri4sJKoWq1ar54GKbDMoSmCd/i6upKy8vLVtdTruEtIt3WyAgm8NVg5wa3Z7O5b0Lrg1rM\\n\",\n       \"REAMh0ODjZjizc/P3wl1hCcBofzi4kKJREKtVktPnjzRcDjU8+fPLf200+loZmbGcFrG4D6fT5lM\\n\",\n       \"xrzqUGfQSN3c3NjfoY1Doziac+LxePT8+XPb5bG+GhWN8hiEAXFK4Gz6+PFjIz2hOEEGxY4JxxiF\\n\",\n       \"us/nMwOaZDJpzR67p+M4lvVCZMTc3JyOjo4kyf4dLw2QDqaY4PBsJLiR4tfBOB2F+32uB7WYGdHi\\n\",\n       \"kwZyQB2J4oEsu/HxcbVaLQvcoaMvlUpaWFhQLpfT6uqqNZJwEuAeU1unUimzJpiZmdFwODToDW4G\\n\",\n       \"amnCeMBrnzx5YqNoSpRKpaLT01MrCdjlILd7PB6zvUKLSKaI1+tVNpu18HlQkrm5OdtV8QJBTT4q\\n\",\n       \"G+t2u0ZWwo4WYpXH49HJyYn5P6P8pnTjJmKHHk3sOj091ezsrDXf8XhcyWTSft/v92t+fv5e3/+D\\n\",\n       \"WswsRkoKGkCO7r29PQWDQcvpYLxMXBkLhzHwcDg0SijNFpIi+BEYkDMqrlarymQydtOcnp5qYmLC\\n\",\n       \"jv/BYKDPPvtMlUpFL168MCwarR/BPYPBQIPBQLFYTNItfZJBEGSfXC5n/nS9Xs/QmL29vTuBOvv7\\n\",\n       \"+3ajP3nyREdHRzaAAY6E3DQaJwwBSLplJIKJQwjK5XL68MMPJX1uYIOdLS5KoCsTExPa2tqydNZY\\n\",\n       \"LKZGo6Ef/ehHZjqJJvF9rwe1mMmbGyX7QGMsFArK5XJaW1uzWtfn8ykYDCoQCMjtduvw8NByTWKx\\n\",\n       \"mMFVHMfUjrFYzLjGsMEwPMGxBzk9rv3AVaMDE5qh09PTO0lVNzc3NohggY1mmHASQBoiVIgb8vLy\\n\",\n       \"0gKHer2evfbz83PVajUdHh5auYJHteM4ymQyhuJcXl5qf3/fBKiSzBgdewVJhrbwWJIsT4aYCXBw\\n\",\n       \"pomoY3BspZHd2dm51/f/oBbz6ELhw4Mi6TiO1tfXNTExYTarkmzyhZ0tHhhMrEb/zAKg6WJqiIEL\\n\",\n       \"0iEMAqlnKW9gkIEQAI9NTk6aJhGnUeiczWbTuvxIJGI3KgMNYDEwW3IIkTWBwlAmcAOfn98G33c6\\n\",\n       \"HWMFYkHG0CUcDttJRjkGrXX0M85kMtYUYn2AS+nq6qrt5rVazWBNpqyYmTuOY+jO+14PCppjJAxm\\n\",\n       
\"C20T/zSO+1ETbI5YSQbr8Wev12uKbBo2GhvHcayxAi1gB0LVwgBhcnLSOviJiQmLUJBkkBvDklFH\\n\",\n       \"/H6/L5/PZ+JPSUZlpaRiwEJwvKQ7aAPvgd0VTw/qfbfbbYoPsHnKLRJUr6+vze0UygCpBIPBQGdn\\n\",\n       \"ZwqHwzalhGtCE4qtAhg/426c9omJ/oICOnLxoVEvI2eHW4FEiWOQJgb/NTjJTLX6/b5WV1fV7Xa1\\n\",\n       \"trZmeCtDjmAwaJgqTdT6+rpCoZBRLuGF0LihsoCGylhcuvWSxu0IKiX/hQ8ChXJ098TrGZ9l6mq8\\n\",\n       \"3y4vL3V+fm7Oo5Qo4+Pj6nQ6hgFLMiiRhQ1ezedIT4G1wKjF7+j3wJCJngEP5ng8rlgsplgsJo/H\\n\",\n       \"Y70HZdl9rgdVZhCySGoS7kIEk6OrQ2R6eHgot9ttYZI0RPgbT01NGTZ6eHgoj8cjt9utdrttnA+g\\n\",\n       \"MwwLRxXKjuMolUppY2PDBjBYX7Fj9/t97e/vm4s+ITg0sZJs4kYNS5lDg4krPZFmmIaXy2U7ibAE\\n\",\n       \"AEtvNBp2etTrdbMc4LGhBuCST+4hqATTQ6Z2GDFiGt5uty3YCPsBamZQF+ijpVLJLM/ucz2onZnj\\n\",\n       \"rdvtan193dha0WhUCwsLWltb09TUlHFsV1ZWbBeenJy0RmZ9fd3M/YbDoRYWFu5k/VE3D9+F6ExP\\n\",\n       \"T0uSVldXTcWCNo4Gje4eeA1oLxgMWuO1uLhoFrWQbyDuEy4Pg25packwXpyULi4uFAqFLKhnlFdB\\n\",\n       \"hDDsPU4hJqHkJuKkBI4NtDccDu35KRm4YTGZfPz4sZVnDH5GTyW86+hbeJxnz55ZuXSf60EtZmxb\\n\",\n       \"SQrt9XpGnSwWi1bLxmIxa85w1el0Ospms3K73crlckbpjEQiKhQKCgaDVs+Gw2EbWNBUeTwe7e3t\\n\",\n       \"aTAYGNsO931JBj/RvSN+pY7HVjcSiZgWkVKlUCjo7OxMyWTSKJvdbtc8KFCMsJMz3ZyYmDAUZGFh\\n\",\n       \"wU4bHJTghuBexO8ydBpl5Pl8Phv983553Zwo1WrVZFr4hoCfS7JaHU75zMyMRWlwo9znelCLmZ14\\n\",\n       \"YmLCdlF4COwMqJYh01A/0jxJMvokO/2ojzG7lCQbFHQ6HcuF9ng8hlxAeJJkzDuaQOwNwLTJ2qOz\\n\",\n       \"73Q6xoHgtTLoobbE3oCamBD4ycnJn9AW0nhR89P8IXTldWBiKMmEstxUNHCjo3HKJur40cdFLCDJ\\n\",\n       \"eM3Ak5FIxE440KXRBvx9rge3mOEpIObsdru2m7FzMkTY29tTsVg0g29yQlhQ4LpbW1uqVCpGm8TO\\n\",\n       \"CpkSzvMXFxfa2NhQr9dTMBjU/v6+8Q1ubm7MZ+OHP/yh9vb2TAFDHU6DiqqZBZhOp9Vut1Wr1ezG\\n\",\n       \"2d7eVqlUMjiS0oQ0V5z/qcn39vbM6pa6F7SlXq+r2Wyq2Wyq3W5bc8wJMPqZ5XI5M4bhxGq1Whbd\\n\",\n       \"TOkBR4OMlF6vp9evXxu9FkOYfD5vrLovlCYjF5AcujVGswwCiEvz+/2qVCqan5+3nZOj7uTkRJLs\\n\",\n       \"KAyFQpqfn7cR7+TkpMLhsMUHV6tVffWrXzVGWjabNRwXM29OA4hFX/7yl7WysqL9/X3FYjETozLo\\n\",\n       
\"oValmSQMh5RWslIg/jMGpx7nuKb+jsVievbsmYLBoGX+1Wo1K8PIOqHeBokAC7+8vDQbBIhavFew\\n\",\n       \"cXjao0gLCAbDpKdPnxpkSlj96uqqstmshVze53pQixk65vn5uVZWVkx2HwwGjcnGYmeKl81mjVoJ\\n\",\n       \"uM+XlclkLIaBoxlWHM0iI2/YeUdHRzaIoKzBeBvMNpfLSfrck25+fv5OFAL1tiSLbYPbQZ0LS216\\n\",\n       \"etr8KDqdji1UGj+sAMi/pgQA4qNmvrq6MqlTrVbT5eWl5SBizI4hDQOTcrlsJU2xWLTyg/g0HEUv\\n\",\n       \"Li7UaDS0sbFhv+t2u1UsFnV8fGwG5PeJTZMe4GLGUw1d39nZmQ0ykD7h6IMIs91uG04KZkwuCpM/\\n\",\n       \"6mRI5/i1wfa6vr62yAhSpxg8RKNRgwAvLy8tTjiVShl1NJlMmkceYerssKAm0EUh42O7xXOAIODQ\\n\",\n       \"iWzMcRzNzMyYATsEI2pavDKgeIKewDCEFMTrJU6DP8OhTqVStvPjZJRIJMy9FFxeko3iM5mMpqen\\n\",\n       \"TVRxn+tB4cyjnT87BJIiRtaYi4M3Hx4e6mtf+5qNpxuNhlnbTk9P2wgW8Wqv1zN8lEiJy8tLW9wM\\n\",\n       \"Q1BUwIIDFRl10a/VajZJ5FTh6G+1Wkb5pHnqdDpKpVLKvUt4RaolyULuYQeGw2EdHR0ZOoHODw9k\\n\",\n       \"EAfqXKIY0C/+uPAUSzE0kZRwbBx4bSwsLBgnBCf9er1uLk7oBkdZeCzw0cHL+1wPamdmgfX7fa2v\\n\",\n       \"r0uSZmdnFQqF9PWvf12SzM2eXTMej5vhSiAQUCaT0crKilKplO0aMzMzRirHg5laOxAIKJFIyOfz\\n\",\n       \"KZVKaeFdvPDMzIwRhqTbHf0b3/iG3G63KpWKotGoaejgE5N2Co1VktE7UXaDyszNzVmdy8IYHx9X\\n\",\n       \"Op22Zu7i4kLpdNokXLxfEAhMGJeWlgwSxC73+vpaa2trxmUmPQAkgtLI5/NZeHwqlTLR79zcnGWm\\n\",\n       \"QH4CR4/H41pYWLDT6NGjR0qn03rx4sW9vv8HtzPzgf/whz/U+vq6pYHu7+/L6/Xq5cuXdixSu21t\\n\",\n       \"bWl2dlb9fl/ValXlcllnZ2em2Mjn82YtQDbKzc2NmSJWq1Vr1CRZHUkuCZKjly9fyuu9zaWu1WoG\\n\",\n       \"g21vb1vpASJzfX1tte/ExITl8sEjoWaHHER9i3M/Xs+E+Kyvr9tujlh3a2tL19fX2tjYUDgctloa\\n\",\n       \"X2Zi1KTbEm5nZ8dKHaadJAFQw3NCADuO7sRMK+lDGNEfHh7aFPM+14PamScmJsxHYjRjA47B9fW1\\n\",\n       \"vvKVrxgpfH5+XuPj40YaJ8LA4/FodnbWUAKgrkAgYH501H/JZNJ4Bmjm0um0GZug/IhEIuYQj7AU\\n\",\n       \"zggZItSak5OTmpyc1MzMjNXk1NhYGJDWCl/j+vparVZLw+FQ6XRaLpdLjx8/ViqVstAhPECwJYhE\\n\",\n       \"IhYdgelko9FQJpMxPJpGGXN0dm9gykgkcsciAU/mdrutTCYjt/s2bB6uC/U/Kbf1el2xWMymhfe5\\n\",\n       \"HtRiLhaLCgaDlqgECsHuBgcZVhlN0iirzOPxaHFxUbVazRhi4KYME2B8TU1NmaHK4uKigf7NZtNy\\n\",\n       
\"rAm9hCtC5EMmk1G9Xr9DPGKo0ul0LEf6+PjYQntWV1dN7Qxr7ujoyPBu9H48XqvVsgEOEqbR4RFD\\n\",\n       \"C4S3xWLRSjBQC/BuSiIa2+XlZUuhvbq6MqkYUN74+Liq1apisZg5QI0aW2JojiF8IpG4NzT3oMqM\\n\",\n       \"yclJlctl09qNjY0pl8tZKUBDM7pwILfs7u6aOvnVq1daWVlRsVjU8vKySqWS+SXD8iJUJhqN6tNP\\n\",\n       \"PzVRAHTJo6MjhUIhnZycGIaMyLbVamlvb08vXrywqAakRiTGkkft9/v19u1bw4xBTY6Pj0313Ol0\\n\",\n       \"TEWOuSGYdqVSMVrnzs6OksmkyuWy4dkMRObn55VKpbS1taWFhQXF43G9fftWCwsLZmOAZQPEoXq9\\n\",\n       \"rmQyKcdxtLOzo1gspkKhIJ/Pp/39fQWDQZXLZZ2enmowGBgFYGdnR91u12Ik4I3f1zjxQS1mZPAc\\n\",\n       \"iRzvCF0rlYqpMoga7nQ6ev78uSRZvRkIBDQ1NaW5uTlLq6KRo95MJBK269FkMUJHps8NANYLujE9\\n\",\n       \"PW07OSoXsksoazCKATWAx4FAdHZ21soeMOP9/X1r6sj0Rp6EgGByctJeL2SqDz74wLjRH3zwgQKB\\n\",\n       \"gJVf4XBYY2NjKhaLmp2d1dbWllKplJ0YwIcLCwv2eV9cXGh2dtbsCKABBINBk4KlUikbArHzczq+\\n\",\n       \"7/WgFjN5eYeHh5adjWr4+PjYSgyGKtjCApnxb7VaTclkUvv7++amDw8Da67Ly0vt7e0pk8lYXcyk\\n\",\n       \"EQjw4OBA6+vryuVydpOhUQRCY1iwsLBgDWQoFFI+n7cBCc+NexHNV6FQsBJqY2NDMzMz6na7ZjuG\\n\",\n       \"9VWj0dCXv/xli46DjwLlc2dnR5FIxIwmgdgYXXs8HhUKBbNJCAaDJnqAT7K/v6/p6WlLEuB9ACPi\\n\",\n       \"3ES6bCaTsfzwbrerSCSiH/zgB/f6/h/UYgaam5mZsQUGFprJZEwtgWD0y1/+sl6/fm3kHVh3kUhE\\n\",\n       \"0WhU3W5Xjx8/VrFYNFEmtaTL5dKHH35oeDB16rNnz8yc8Pnz5xY7DN2x3W7r0aNH+uSTT9TpdLSw\\n\",\n       \"sGAu+vCfS6WSNZfEEgeDQc3NzUmSaRjj8bgNKdLptKTbcTN2tvCJA4GAnU7hcFiO46hQKBg/4qOP\\n\",\n       \"PlKtVtPc3Jz9zNjYmJ4/f65oNGpq9W63q+XlZY2NjVndi8F5Nps10tL09LRev36t09NTpVIpeTwe\\n\",\n       \"VatVzc7Omivr2NiYvvKVryiXyykajSoQCOib3/zmvey5HtRibjab5qQZiUQsEwQOA00QrpksTOxd\\n\",\n       \"cf7kS6Ih4oiE7I+F7fX1tcUEwwYjv9txHIubqFQqNg6nFpZkMFUikTAi/OHh4R1bLJThw+FQm5ub\\n\",\n       \"mpycNM0eNgYMRsbGxnR0dGT2AfV63f69VquZAAHjF4/Hc8fP+uTkxBz9WegMoPDV2N7eNhvbDz74\\n\",\n       \"wOimiAYCgYBqtZqJYsfGxlQqleR2u/XmzRvrB5gAsovj3nqf60GhGdSxYMHUc8ic4vG4TcKoMykb\\n\",\n       \"RvVvoA7EfoGQQLmUZIw6vrR2u23KDhzlp6amLC0Vq4JAIGCcCbBqrAJgyDHSptYm8AYzb+BGFg+w\\n\",\n       
\"nN/v183NjdLptEKhkAkI0PiBzqRSKcXj8Tuvnx1+fn7elOJoJHFJLRaLZoKDtZbH49HCwoKhI2DL\\n\",\n       \"qVTK+hMoq7FYzAxtMJ+knoZMdZ/rQe3M7JjwehmiQHb3eDymJJZkaEM4HLa6lB2DsgMGHNkmEIjG\\n\",\n       \"xsYUj8fVbrdN88YR3e/3jf/baDQMBwYH5+fPz8/t+J6dnbXAHV4XvAuQk+fPn9tuBmaMoz6vZXFx\\n\",\n       \"0WRjlFuw30AfwOITiYQhOPV6XalUSicnJ5qbm7MbJxqNmvkknnlwPiDaEwY6Gl386tUrey8ej8eM\\n\",\n       \"2+lbSJo9PDzU5OSkqW3ucz2oxQw+CpcBjjK1LhjpqL0qUFu9XjdmG8cxLDZMs7PZrN6+fWt2ruz+\\n\",\n       \"7Xbb1NtQIplyoVzBdAVPDRQm3FjFYtEEoC6Xy9QoQG2BQEA7OzuamZkxQhC499nZmQ4PD03lgdpm\\n\",\n       \"e3vbFjRhlHCqeU9kWUO6Z7IYiUTMWQlZGEJWIMazszNls1kbxVNq4Y2RSqXM4RM2IAKGQqFgr33U\\n\",\n       \"OPI+14MqM9gtXC6XTZjAcKl9B4OBYcA0K6AMxBhQXwYCAYO2EomEBoOBKaChfqKkmJqaUq1WUygU\\n\",\n       \"UjAYvKO6wPNZkh2pOC1RCkxOTt6B+6ampgw1Ga3bObZRnZCt53K5tLa2Zp4ULEosFjh5qMUxAOfU\\n\",\n       \"QK3daDR0dXVlhHqGG0CHozZn3Eyw61DPwM4bHeTAq6YU4qTCSRR05T7Xg1rMLNZqtWq45+rqqi1m\\n\",\n       \"0I2FhQUzPwyFQobdMnnLZDJyuVxKJpMqFApyHEdbW1s2DaQp4kYAV56ZmbF6MxKJGG9ieXnZBgdM\\n\",\n       \"JzEq56aD64x5ICKCTCaj2dnZO/zjSqVihizwt5eWloyHMjMzo9XVVc3NzVmJ9PLlS6tLz87O7qAS\\n\",\n       \"hE9KMpuEy8tLZTIZSZ/7L1MXo58EN2cKyM2LhzPPTVwdpVcymdT6+rqazab5WYON3+d6UIsZKiFx\\n\",\n       \"D9A5KRvYEUAiWq3WHbI7bvrk3NXrdfn9fj179sx2NqAoToBRC1t2a45iYo45guFtwGMgealWq6lW\\n\",\n       \"qxlhiJuDCSVu9+Vy2ST/7HSdTkezs7M2SgdRYdLISHp2dtZI+nAo4F+w+KrVqilo0C/Cm3j27JmZ\\n\",\n       \"PmK2E4lELG6DlCmErgyJCEWikf1xLSRuTZCq7vX932/5/HJdo0EwUDZHrVZHhatEJow2HRB9sH9F\\n\",\n       \"ncxNggIF05jRP/O7cHj5+R9/Dfw7/ybJNIc0dxB3EATwJUPzZHfkd0AQ4F2MEuAhyXMjjr5P/j9o\\n\",\n       \"A6+P5+S04L+S7PcoPUb/DjSDkoXXT8NKL4N7E802C/sLQeuPXbjHM75lUoW06PLyUqVSycg/+DdP\\n\",\n       \"Tk7aQoeIDxKwsbFhjK7p6WnV63Wdnp4aMgGOSugOJQpHNuPwRqNh2C/TOXb2dDqtRqNhdSZ0yF6v\\n\",\n       \"Z65B4XBYiURC6XRa+XzepnEnJyemrAFrJ0dbkp1O5+fnWl1dtRE9C5b4BzynR+VZRDVsb2/r/Pzc\\n\",\n       \"8PBut2vPBTQJrk9zS47MqF4SBfhov0LP8oWgdeTCbV6S3e2VSsWGFUBWqCiwnAL5wHEHl3pyOvBx\\n\",\n       
\"o8Yd9S0eRSWQ8uMmxM0D0YiGjp0OY0een9EuC8JxHEMVUFCjFJE+j7pAsU3ji4YPXna/37+zi4NS\\n\",\n       \"8H7wrqCJBd2ABIQG8eLiwmxyyT7hhgQZAh0iwwSko9/v35GBAdHRSPI673M9KGgODoAks4nlKB2V\\n\",\n       \"zxOPBuEGqigWtfjGMdVCQT0+Pm41IjvM8fGx7erBYNC8niEyUZfy2vBiZiFKssEFEWlgwiAYQItM\\n\",\n       \"DCkbgP+kzw0V0fKR5oROkUFQtVq1gQe1MeUTXAyU6UwzJRnxiRtidXVVjUZD7XbbnJJARjCU5LWB\\n\",\n       \"unBBmmIkj1HjFw3gyIUvBHc6pBf4tCgxrq6u1Ol05PV69emnn1p6ab1ety+o2WzaLnR+fq5CoWBi\\n\",\n       \"Uvwkzs/Prbms1WrWUPZ6PRWLRaXTaQUCAR0fH9ti4ngNhUKmAeTxkXydnp7q4ODAPPIajYZRNSXZ\\n\",\n       \"KYDdFQQefg6eNAMNIi64kSk5Wq2WjdpJnCVtC8bd6empqtWqms2mQYBnZ2d6/fq1wZtsAggALi4u\\n\",\n       \"tLu7awgH6I8kyzZsNBo2aqfZhPD/vteD2pmz2axp6Hq9npFrgOOgPQ6HQy0tLalUKimTyRg0Fg6H\\n\",\n       \"zWFneXlZR0dHlve8trZmvskMQ9ADspiJN/B4PIrFYkZ5pNYdtc+CHgnnwefzWY2bz+f11a9+1YZA\\n\",\n       \"8JQHg4GpPcrlsoLBoKLRqBqNhsmmWBiMsweDgRl++/1+zczMqFAoaGFhwT6zarWqxcVFYw5yw0Jh\\n\",\n       \"pZ7HNRXuhtfrtXF9JBKxps/r9epXfuVX7kwq8bq7uLiwcTr8FWis902belA7M5l75XL5Tozazc2N\\n\",\n       \"IpGI5f9xHPZ6PduBOdZxkAdSk27H5FBBOX4JvIS4g3bv8PDQ3IWazaa5/tC5M4kkWF6SJS3lcjmb\\n\",\n       \"7HFME+COQz8Z1IFAwOrVZrOpyclJk2lhVA7agPVBu922Mgp0Ay4HTEJODj4nbjjIRtTRGKGDU1er\\n\",\n       \"VSNwBQIBe5/wR+hfpqenFYvFjDOyv79vU1DiJd73elCLeTSo0efz3TEFHFVzwC2QbhXKkowmeXl5\\n\",\n       \"qXw+b+aKkF9w+4FEE4/HjSzPFxQKhZTJZGxaNjZ2m+4EFAXKgisSEzj88fCGBsLCgKXb7Rp5CUiP\\n\",\n       \"unxyclLBYNAGKCxyn89n+SG4juJFTW4JJHxODzzlaABpbkmWBba8vLy0x2C3ZRI5GgcRCATk9/st\\n\",\n       \"go3nkmSlycrKiiFA9BDvez2oxQykBJUSLwzcP9kBgMVohCTZTsNUjHEx42HUxKiPqfVAJyC6083D\\n\",\n       \"xKNZIyKYETZu/ldXVyoWi7bwCcZpNBrGJaFZBNbzer12CvHagCJZ/NLnXhrwMXBWmpqasgxB6m7+\\n\",\n       \"x46OExNj+W63a5pBSpDJyUl7TLBnRAj0CAxoUKHg6I8ImAHPaMza+14PajFzXJZKJYXDYeNFUGKg\\n\",\n       \"HIlGoyqXy+YVIckWG4gDkzhJevr0qU3k6vW6OfMTlomCgzJl1JCFGpQjHjQAhTK1InU+po/4bsCz\\n\",\n       \"AOkYHx83UhISLKwTut2uTezAgh3H0fLyst3cJycnVtKMYuXj4+MqFAoGnzF0gkgfDocttoFaGZSF\\n\",\n       
\"kTVID5sKnynqG3gYIEcHBwfqdruG2LDZvO/1oBbz6empURHn5+ctK+/m5kbr6+tKp9MmwMReACk+\\n\",\n       \"wTYYpWQyGWUyGVu47ObsnuDCkP+xteIGWV1dtXqx2Wwa644FwFAF/sb5+W1g/IsXL+RyuVQuly31\\n\",\n       \"am1tzeito8R8himSrLbFFUmSqU0ajYa9N9AKRvI0n9LtZgBJCOEBC0+S4d6UF1gfzM3NmaodNIcR\\n\",\n       \"O1QBhLg0pZOTk1pfX9eXvvQly0tkcvm+14NazGDGoVDIKIbgt1tbWyboHAwGZj6+vb2ts7Mz1et1\\n\",\n       \"jY+PWy3YarW0u7srj8djOyFNnCQbJMRiMQUCAQv8AZ7b39+3aRrO8fl83oxj8vm8Ybp42R0fH2t3\\n\",\n       \"d1exWMycji4vL1Wr1eymw1GT4HYGI8i2PvnkE2OpZbNZq7VDoZCR/5niBQIBOz1omIEC+/2+kayI\\n\",\n       \"jDs4OFCpVNLFxYWazabFNRNyiRIem1vpdpBFWfPJJ59IklmTlUolHRwc2HPeN274QS1majqmbTDl\\n\",\n       \"MB/BoQc2FzXdaJIqi4QYMOpNJnWQhuDl0gxRC/b7fYOtIpGI5ZFQhxJfzCLimGYggmwfM+7Dw0NT\\n\",\n       \"hFxcXMjr9SqVSunly5fGnR7lkYTDYWPGIc7FK4/QTQhMqMnPz8+Vy+Ws1mewg6+e4zgmYgCrJtyH\\n\",\n       \"KeeoSypTv5mZGY2Pj5sXHacItgOY29AAYyf83t//38Qi+mW5IBphtI2BH7yAcDhs2C3NIYhHuVw2\\n\",\n       \"mKtararf7+v4+Nh4DtwAeNVNT09bCYIdLoqOwWCgWq1mkBicYgIy5+bm5PV67TXxWrEWw9sC7jKs\\n\",\n       \"PG4iyot0Om0GhfAbINkD7bndbsORWUzU16P84VEkBSswSFbn5+daXFy00E8kY7FYzD5L4iVojK+u\\n\",\n       \"rowJB8pBzgxcDuwYcD9NpVL3+v4f1GJGutRoNIxEnsvl5DiOer2ednd3tbe3Z8Yo5XJZ09PTRiyS\\n\",\n       \"ZCVJt9s1N856vW5ec7u7uzo9PTWXecj2yPBh0JEXGAqFdHR0ZBitJO3v71uwPFNAdIo0ZS6XSycn\\n\",\n       \"JyY+bTab2trauiNIPTk5MaFtp9NRMpnU2NiYNjY21Gw2ValUlM/ntb+/bylUo3UplNNcLqdWq6Wd\\n\",\n       \"nR2dnZ1pc3NTLpdLe3t76na7CgQCVqbhkAqPhKQCHh+8GqPGUqmks7Mzy5U5Pz+3tIKTkxM1m02V\\n\",\n       \"y2W1Wi19/PHH9/v+7/Xbv2QXcF8LeQAAIABJREFUzR4exFNTU3ry5InBXuCoCFfn5+eVz+eNczA9\\n\",\n       \"Pa10Oq0PP/zQwnpIWUKgOkqFRKDJhXG51+s1yb8k81pjV2OiJ8nQA4xeuGlIfRplAI6G+Yw68Uej\\n\",\n       \"UcNyb25u9NWvflXX19daXl42MStIChiy2+02kS22WslkUsViUclk0pxIUaYggZqfn7ebVJKJASBP\\n\",\n       \"ob6em5uz7wEyFwgMwgXeM83pFxPAkQtegySzvYIDQFME3gpsBAsNxfMoF4HmCcwZCy60bAxHgJyA\\n\",\n       \"9dhlqR8pd3DKpKECwup0OuYkii8cXAlwXZzzwZmpvXlfqL3RPUoySOz4+PgOTEjkMX0A/BIWOT7J\\n\",\n       
\"9AB8jrxOOBU0bM1m02i23Bh4QeMfTSMMG5GQIsbvo+Yx73s9uMXMhImAx2QyaWNeosPI6wsEAsat\\n\",\n       \"oFFkx6KxQptXr9fNy5m6mYGKJKs3WYz8PKqM0Vo0Ho/blC+VSikYDCoUCplHxuTkpCTZ5A5eBLxg\\n\",\n       \"+ByRSMQYauSLkNDKa5qamtLy8rIZOJIuQI0/NjZm1gS4guKjHAwG7fPD0gCzGj4bzA8ZWUuyxhYU\\n\",\n       \"A/iSqerU1JQx5xDvZrNZM7l53+tBLWZUx/v7+6a88Hq9isVimp2dValU0sTEhDUwsNLAP6mdZ2Zm\\n\",\n       \"DC8ebXS63a4Rfvb3981BH2hrdnbWuL7pdNrsv/g7POVyuZzS6bRisZjq9bqZIlLvM7yhFs5kMpZw\\n\",\n       \"1ev1VC6XjfIJ2wxsmGaWSRzj50qloouLC5Nf3dzcaHZ2Vufn5za5I5OERnU08/rs7EwzMzNWzzPY\\n\",\n       \"wdsjGAyaQQ5jfiai1NJ8hmdnZ7YxoGqnhLrP9aAWM11zMBhUqVTS9PS0Xr16Zb5rHHG7u7sKhULG\\n\",\n       \"dwa+ikQiarVa5rrZaDQUj8f18ccf25cFiWh+fl6JRMI6d1AQiDflctnQA2C+fD4vt9utsbExffbZ\\n\",\n       \"Z1a/MlZm54L4z5QR0xXKgGw2q83NTQtgZwIXj8dtwog3R6PR0M7OjkVcwB1eXFw0828yXvDBy+Vy\\n\",\n       \"RpTis2m1WkZzRV+I7UIul7PSBlgSPw98peGXc0KA0OCctLGxce8YiAfVAKK4JqMPHLPdbuv4+NjU\\n\",\n       \"EORIj4ZEVqtV4zg7jmMSHoLSfT6fuYTe3Nyo1Wqp3W7bGNrtdiufz1tJ0Wq1LJEUyf5wODTDl4uL\\n\",\n       \"CxOcwm2AfHN2dqaJiQnlcjlNTU2pWCzaogc7x1IWOA6OtfS5nQGLw+VyGT7caDTk9XpVLBZNUtZu\\n\",\n       \"t80FCvEpGX4zMzM6OTkxuzM+E0n2mvELgQ2HvAqVN8aRYPDQQSXZTUb61H2uB7WY4UZMTU1pbW1N\\n\",\n       \"4+PjWltbk9vtNnYbx6fjOGbXBSoACQeHIzgPXq9X9XrdWGoQ/pmq4TyEIjsSiWhlZcWOek4MSoPh\\n\",\n       \"cKgnT57o8vJSU1NTSiaTxilJp9N3aJrgxG63W7Ozs5JkKAG/F4/HVa1W7QTAwDudTqvT6WhlZcVs\\n\",\n       \"sYAvYe7BJqRWT6VSJmAIhUJKJpOWkQK7DmEtpCq42dItooQFGB56P95XgIwQ2sP7wPzxfa8HVWbA\\n\",\n       \"xBpt+JjYkTON/gzvuFErW6ijfOEnJydGGR098uHtYisFod/tduvp06fm6sPQAv4DAxTq3kQiYTIj\\n\",\n       \"XIS4MUYVKTMzM/Z71Wr1Tvg8ODnKE0Si0WjUYDsITOQLor2TZIjI1dWVMpmMGo2GWZr5/X5dX18r\\n\",\n       \"EAhoOBwaZIcaHGN0Biaw7paWlswZNRAIWMQF8iwCffDgA68m+/B9rwe1M2NWwk7QbDbtAzw4ODC+\\n\",\n       \"AIudYxOaKEMSSUaIabfbNgABqgqHw6rX68ZSQ28HjEXnD3yH7g7ZPiNkmHTU4aOyo/Pzc2O9jToj\\n\",\n       \"+f1+M1ocjXSQZCPhyclJ7e3t3RHugpD0ej0b6XMjnZ6eWjAPtXI6nbYoCoZDg8HADBYh5jebTauj\\n\",\n       
\"JyYmTBB7eHio5eVlG7nzGY5KwyYnJy2ldm5uzvg073s9qJ0ZlhpxAwg7O52ODTcQoiI8pdMGMoL6\\n\",\n       \"eHPzeWA8jkgTExNaXFy0o5FmDpK79LmqBYdLSSZClW6nbhCe2JE47jFIgUAPaoC3HHYCOGfCgWCB\\n\",\n       \"wKkulUq2o5Limkql7nhiSDJzcAQCSLT4Wfge9BqBQMC8m8fHx41uOhwO5ff7zeuOTQJKZ71et5MM\\n\",\n       \"/BmO9MzMjPx+v66urkwA+77Xg1rMdN6jvGPqQ9h0kH9SqZQFzCCNL5VKmpmZsVICo5fR1KidnR3F\\n\",\n       \"43GLJKMOZeiAnwWG4Dc3N5aOyuID14aXcHl5aUrxZrNpX770eYD81NSUlQykTZHKivtoMBjU1NSU\\n\",\n       \"0UHxQeZxGJhEIhHDyFdWVgxVgNPMIsR/b1TOVKlU7Abxer3WUDOQYXA0iuuDtkiybEWwdkS32Dzc\\n\",\n       \"53pQizkYDBoODLeXZsbr9arVatk07vT0VEdHRwYfLS4u2gdaLBZNxsOiwpQc8/CJiQklEgmdnJxY\\n\",\n       \"XY7zJfUiuzn0RqZ3k5OTqlarlojV6/WswQQ7hhzFOPr6+lqFQsGwaHK6GVAwycRDA1X4xcWF0UbZ\\n\",\n       \"YVF048vM54UxJDg04UKSjBA1NTWlSqVi+DHOSqhLKIcuLi6MZsuGwusD36d5ZDD0haXtyMViubq6\\n\",\n       \"0vz8vCYmJrS+vi63263l5WXza/Z4PHd4GjDMUHJ8/etft90OhQS7h3R7AsA1JlDn/Pxcy8vLVluv\\n\",\n       \"ra2ZOTmcDRY03m6JRMIifmlc4TKDV7OzgWOn02l7HXA2er2eQWpAdyAvqFlOT0/l8/m0urqqo6Mj\\n\",\n       \"E/USydZsNjU7O6ujoyNls1mFQiE9efJEwWDQnisSiaharRqKAxeDphnVi9vt1tLSkg1jPvjgAx0c\\n\",\n       \"HFgoKI06rvsul0vxeFzf+MY37pVr8qAWM7ZPNFfhcFjNZlPT09MmmaLBwguZqSDddrlc1tXVlebm\\n\",\n       \"5sy9EoIQnGS4uOVy2WAm3DCBr3K5nFKplLrdrkqlktFGLy8vzRosn8+bsJUdENplLpczce7Y2Jjy\\n\",\n       \"+bwWFhZUKpXMxDwQCOjk5MRMHrkhvV6vIRy4I0UiEZVKJVWrVR0dHZkDKgy8ZDKpXC6nwWCg3d1d\\n\",\n       \"PX36VMVi0XB6ScbjwBxyenra2Hng6dAEsOydmJjQ9773PaOdSrelCo002syzszN99tln9/r+H9Ri\\n\",\n       \"TiQSVvPBUkP1DOwD3IW0Ci8I4tRcLpdOT0+1uLiodruthYUFY6r1+31dXl7agCGZTCocDt+xlmI3\\n\",\n       \"X1hYULFYVDgcViqVshExvsl4sGGeAqEIBUg0GjUJ0unpqUUtXF7eBkY+ffrURuWkn9JYYpNAA4ry\\n\",\n       \"gxKBaOXRmwXl93A4NO8RanpcjdA8RqNRm/ThsM8p8eMsQozWm82mMQkZqLA7O45j9NW//Mu/fO/v\\n\",\n       \"/0HVzNVqVSsrK0omk7ag0dhBkez3+8rlcvJ6vTo+PtbExIQZIQJrYbtVLpdVKBR0cHBg0BOlAx7F\\n\",\n       \"1Isc2TDbIN5D4u/3+6Ya4XE46m9ubrSysmKex+Sc1Go1VatVQz3Ia5mZmdH29rY1WWNjt8lUNF/U\\n\",\n       
\"2tgdQEzi1EBlg9kLcWlTU1MqlUoWR8GElIkn3nJAe3wOkuwzODk5MYSEBhMYFIQIrBoGHqqdL5Qm\\n\",\n       \"IxeulT9uV8vkCZiNThwHfEkWbEngDOyyTqej+fl5MzYhVAeH+Gq1alNFhhHHx8dmbTU2NmZQIIGX\\n\",\n       \"TP7Q1EH6R34EZMjNR1BlNpvVxcWF3r59a6GR+XzecHIiJigBYMUVCgXDh4HwwMIvLi5MZAvCglxs\\n\",\n       \"NAweigC8ZZAeDHLA1JlcEqrJaYZRZLfbNVVMKBTS/v6+Kbm/8JobuTDuZoExFmYKxSSPhYd9bDab\\n\",\n       \"VaFQMJbY0dGR7YLz8/N3DABxr6fpwUS72WzaUCWdThtNkhuGcTkj6VarpcXFRaM9RqNRJRIJq3XJ\\n\",\n       \"kV5eXjZYa39/X36/X0tLS2o2m6ZZBBFZWlqS2+1WJpOxGGU4IixcGkTG79Jtc8fQ5/LyUsViUbFY\\n\",\n       \"zFAdRvqpVEqNRsMmevw+pw8bxeXlpQ4ODsxQEfiQTQXPasoxMPovBK0jF2Z/LMirqyu9fv1a7XZb\\n\",\n       \"tVpN7XZbhUJBH3/8sRqNhg4ODmxadXx8bOSizz77zESep6enevXqlUUYYLd1dnZ2J2H07OxMOzs7\\n\",\n       \"km5Pgu9///vGt2AAcnV1pe3tbZ2entoUjxICJQgeGtiG5XI5vX37Vjs7O5YNPj09re3tbVvQQHtI\\n\",\n       \"sKrVqnkhU/rQoELT/O53v2ufGdwLl8ulzz77zDSG/B2+HYeHhwYLUo5cXV2Z9VexWDTI7eTkxEqo\\n\",\n       \"VqtljSoN39HRkXk6X19fq9ls3ivQUnpgixk+hN/vt2OWXQq5/nA4VDablcfj0ezsrFkMQGg/OjpS\\n\",\n       \"OBzW+fm5jVifPHmivb09tVoto0MyhKhUKlbPfulLX5IkFQoF89CYnp42jSBc4UgkomfPnqnRaFhN\\n\",\n       \"++bNG8OKMTinMVxcXJTf7zfp/5s3b4xjsbOzI7/fr2g0qmazadzqdDpt4TrsypQC0EjhekxOTlpJ\\n\",\n       \"8PjxY8v6wxeDMiOVSlkkca/XUzwetyaU5pWdn/BQGry5ubk7TkfAcfQgTFLvcz2oxYyvBBHAPp/P\\n\",\n       \"lL9LS0sWKLO8vGyYssfjMayUSR2cW8xKIColEomfSKaanJw0mAuVy9OnTw3vPjs7UyqVskYHQ/BR\\n\",\n       \"BXU4HNbs7KyJQFdWVvTo0SMjH7lcLgUCAc3Pz5tuDwd8vDHwwmC4g8qFcojnCwaDZh8wNTWlpaUl\\n\",\n       \"S30ilJ6JIPyL6elpBYNB5fP5O8oUYtUo3UBkoBL4/X5TpCC7wqgRbL5UKsnv98vn833Bmhu9SDIa\\n\",\n       \"NVnJ5/OWLgo68e1vf9tU2C6XS9VqVQcHB8rn80Y2b7VaOjw8lM/n08HBgR3T1M2tVst2tVKppEql\\n\",\n       \"oq2tLcvHps6mQYKeOsqjYCGAlGBsyASuXq/b85ycnBiWHAgE9IMf/MAQCfjKcI6Pjo60u7urfr9v\\n\",\n       \"HGHyChnMYBqDXzJKm/39fZVKJfN1JpKZ045dmoiM09NTxWIxHRwc2OdM8w1LEZ40wmBU2sB1NK7f\\n\",\n       \"//737/X9P6jFTI1brVYVjUaVz+eNrgmZBXUwC+Ht27d27HLkezwes2O9urrSzMyMDg4OVCwWTegq\\n\",\n       
\"fW62CDw1NTWlTz/9VIeHh7q4uFAulzNXfrBoFCyBQMCmYjc3N8rlcsrlcjo6OlK1Wr0zaIHWyciZ\\n\",\n       \"m67T6diNJ0lv3761G6fT6ajT6eji4kKbm5uqVCqWaY21LBAheDEY+sTEhOkpOUVevXolSaZHhFDv\\n\",\n       \"crm0u7trDa7P5zNVCaNykJnT01MrrSBnwZRjMHOf60ENTR4/fmxqjUKhYEc1/OXV1VV5vV4bCkxP\\n\",\n       \"T+vZs2fGGkun00bhjMViRuDnJqArJ1RmbW1NvV5PmUzGFCcffPCBTbqSyaQhDq1WS8vLyyoWi1pa\\n\",\n       \"WtLr16/NCJ2yaGFhwRbS5uammZYD1yWTSYsVxn6rUqlY6CVO9r/2a79mo2LMDj/88EML6gF9gP22\\n\",\n       \"tLSkTqej5eVlY9oNBgPNzMwYwvLixQsVCgXNzc2ZFArL3dXVVfOwo3Q7Pj42y6+nT5+qWq3acIpp\\n\",\n       \"57Nnz0w4IUnf/OY39ebNm/f+/h/Uznxzc2OLh8anWq3K7/eb1VS5XDZuAtpASUZu53fOz88Nt4bF\\n\",\n       \"ViwWdXx8rEKhoJubG+3u7trud35+buQheBEoPzY3Nw1/vrm50f7+vi4vL5VMJtVqtcxhnuFNrVZT\\n\",\n       \"MBjUzc2Njo6ODFLM5XKGV5+enqpQKCgajRpa4Pf75TiONjc3jTVYKBQMuYHNB1LCZ7C3t6fT01O9\\n\",\n       \"fftWzWZTm5ubmpycVLFYNFuFUqlkCxsODBBfo9GwxCtOIHoPRuBMD4vFovFEjo6OJEmlUkm1Wk3f\\n\",\n       \"/va37/X9P6idGcVEv9+35gycl24fD2MmZGCmQFyzs7OanZ01kxPk/B6PR+l0WoVCwcbTsVhMMzMz\\n\",\n       \"ZiSDOXksFlMul1MikVC1WtXS0pJ96YPBQOl02iaUcDFisZjpEqmlyfPGrZ5pWiwWM70jzSlmLnNz\\n\",\n       \"cwYxYhkQjUY1Oztr+dnRaFS7u7uam5tTMpk0u4Dp6Wnl83nNzs7K6/VqfX3dfEGWlpZsAVOWAOHN\\n\",\n       \"z8+r0WhYYzwcDlWpVLS+vm6mNo7jyO/3KxAIWI1Mg45d7tjYmJUz73M9qJ0ZWAquAWqGUaMRamIW\\n\",\n       \"KcB+OByW1+s1iy2mh5VKRfF4XIeHh+r1elpYWLhTl25ubt5p4mq1mnZ2dmzYkc1mLd0KtOHi4kKB\\n\",\n       \"QED1el29Xs92e2ptScbhuLi4UDabtckaihB4FESUnZ6eyu1227+53W6zLaBmZeSO4xAJqthpcVpw\\n\",\n       \"U9EE0izSkEoylALMmSkfhKy1tTV7L0RmAM3d3NxYwgDWXf1+/4tx9ujFlGlyclLb29vyeDzWFKK+\\n\",\n       \"ANGglsbdvlKpmLKY8gG3+L29PSUSCZ2fn6tcLtuX1uv1TDSKIhrLglH3TTjVqKpvbm7ucKYxZYzF\\n\",\n       \"YlY/klhFedRut3VwcGCcajgcmJJzE5I6heMQpt69Xs8Yb8QpwwVhetnpdNRoNMyilgYaKic4OJKu\\n\",\n       \"s7MzS7Nlp+ZxuNlG3aBwCcW0Ef60JJ2cnKhSqdzr+39QixkpEl8cDpTQEofDoRKJhO2O8XjcvNU4\\n\",\n       \"PvFcwwJgenraQH1qV/jGICSw35Dej8qQwKkRmDIClmTavsnJSUszHR8ft+QoiEBYDIDCILLFX1n6\\n\",\n       
\"3GwctTf5h5OTk5qfn7emdTTInlKKMT4nEnCfJEttZefv9XqmXIfYBE8D2RlCYqaeOB2NKmL4bmq1\\n\",\n       \"mo3L7xvQ86BqZmC4wWCgDz74QNJtAI/L5bJFzFCA4UM0Gr0TmgO1c25uzhoVFhYEJWRVdPHHx8dy\\n\",\n       \"u9168eKFscvYcbrdrubn541sj9KCBqnf72t9fd0W6XA41MzMjE5PTzU7O2uk95OTkztyJALlWSzo\\n\",\n       \"76Bk8viStLGxoYmJCQWDQWUyGR0eHiqRSMjv9xu9c3x8XIlEQgcHBwqHw3K5XHrx4oUkGSkJKwYY\\n\",\n       \"c8Cc6Ckpg9g8UOPwOpB8wWZE5Q2py+Px6C/+4i/e+/t/UIuZ6RwoAFyGUTX1zc2NHZcYrqCXa7fb\\n\",\n       \"Rg2F54v+T5Idx9SSRDfA1Ds+Pr5z7I96O8O4g4W3v79vo/RqtWokHpz6GS6Ew+E7Tkkc5d/5znf0\\n\",\n       \"+PFjs+2tVqtKJBKSbssAEgAwICdFitIF05ZarWboiSTjdIRCIf3whz9UJpMxfJu+YNSknShliFpX\\n\",\n       \"V1cql8sqFos2FqfcoT/BWQlDSIxgNjc37/X9P6gy4/r6Wvl83tx6+OJZSOjrVlZWDO1gp8QEBTSB\\n\",\n       \"IQJaPlx/qAfBUKFTYrJCuCU3CZTPUddPThB2O3b7vb09s8qFXzwYDMxckOFMt9tVJpORx+Mx+uX5\\n\",\n       \"+bnZC1CmsNtDC8UOjIYNZiD5KLD8IPUjKGAhUw8Tq4zR+dbWlinSuemwHBhtaGu1mrlCUbbw+cHK\\n\",\n       \"u8/1oBZzMBjU0tKSfD6f6fskWWkBTvvmzRu5XC5tb2+bnOfq6kqVSsUceZD9Q9ZnTM3OTPOYyWSM\\n\",\n       \"D3J+fq5UKiW/32/oCPAbbDUGL1BFaQLdbreePHliE0NeFzug4zjK5/OGzDDRQyw6NzdnYTfcMBMT\\n\",\n       \"E/aaqYu9Xq9mZ2etlGi327bAaCoLhYJFGEtSKpVSOBw2mRNJsdy0wWBQ5XJZ8XjcbH8XFxeN7skN\\n\",\n       \"m8lk7DsJh8Nqt9vKZrNG6L8PLCc9sDLj7OxMsVhMS0tL2tjYkMfjUTabNSzz0aNH8ng8BrcNBgMd\\n\",\n       \"HByY3B7WltfrtUUtyerS8fFxzc3NGZ8Zgerq6qp5PlMPhsNha/4kmVbQ5XJpYmJC6XTaHDUJkCRy\\n\",\n       \"+PDwUF6v16RbDFdoEhG5UufjWMTrJYSSm7pWq8nr9RqRCMcmdkPHccy+tt/vWxIXTkqQkyBt4ZvH\\n\",\n       \"e2y32xbySXPIzs/PwPGAtcfzlkolyxR88eKFNjY23vv7f1A7s8vlMhUzihBJpm4eLTXAcVlk7Ewk\\n\",\n       \"n2IcMyppwgT8/PzcRrmEl8diMRvM4I1RKpUswxuCPibiPOfExIQNccCMYawB/TFZZNwO1ZPoCRor\\n\",\n       \"pmuZTMbKIZfLpaWlJXMukmSliyRLXWUcj3H6+Pi49Rtkn1DOwKaDcgtXA1PGfr9vblIQpnD+9Pv9\\n\",\n       \"BllCaIJNSNP8vteDWsykgfp8PpviUTdDCSV1aVRS5PP5VC6XdXFxYWbhg8HARrHslOzQNzc3Ojg4\\n\",\n       \"MPd3bK5gjnG800TijUz9PnrzXFxcaHl52aZw9XpdpVLJeNMQcQaDgQqFwh0nJkSnHOUslmazKb/f\\n\",\n       
\"r4uLCxUKBV1eXt5JkJVkcjE4x5CsEPQiFIAqe3Z2ZiUPXiBXV1dmAIOdGa8D7gfvncELjSjREHC9\\n\",\n       \"y+WyiRve9/qFLWbHcf4Px3HKjuN8OvJ3Ecdx/sxxnG3Hcf614zihkX/7rx3H2XEcZ9NxnH8w8vdf\\n\",\n       \"cRzn03f/9j/+rOdMJBIWcoMCotvt3unAsZylAWMHi0QiFuIIjssuTvzZqLn30tKSDT/g8DKqxaEI\\n\",\n       \"ohKDA0nWmJGP53K57gxo4JZQj3u9XiP8uFwuPXv2zLDcUaOW6+trw8cZdYNf4+9BzAQ3KRNJSiaU\\n\",\n       \"KQTJw7CDsz2aQoXbfzQatbwXyjS425CVZmZmjKjFoudmSafTNuj6ZW4A/09J/86P/d1/JenPhsPh\\n\",\n       \"mqQ/f/f/5TjOE0n/gaQn737nf3FA/aXfl/Tbw+FwVdKq4zg//ph2wcvAV4JFjHIYSGpvb09XV1c6\\n\",\n       \"Pz/Xo0ePVKlUrNtGuDoYDEzmM6oiJpEUtcX8/LyZMlYqFdvNGE1Xq1VTVJA/QqgjUqnz83OD1RzH\\n\",\n       \"sUVHatbr169tKvejH/3Idv7RsHeGF4PBQL1ez9yOGB7F43EbnDDVxHKX9K1oNKqtrS1DZEabUxQl\\n\",\n       \"fC04J0HColSChE9J02w2VSwWTUYViUTU7Xa1ubmpTqdjXnm4mt7n+oUt5uFw+P9Jav7YX/+WpD98\\n\",\n       \"9+c/lPTvvvvzP5T0R8Ph8Go4HOYk7Ur6muM4aUn+4XD4vXc/989HfucnLjR+kMHBM9l1KC2I7yVk\\n\",\n       \"kp0CtTBEelAIBgXYuNL1X11d2dEbCAQsdpcpIDcJRzzdvSSDzAjhOTo6Mg0ftFDCHqlhR4Mqg8Gg\\n\",\n       \"rq+vVavVDHo7OjpSvV63RpHnoLZH5FssFjU+Pm78C25U5E1YI9TrdU1OTmpnZ0fX19eWzgrGDk4O\\n\",\n       \"dk3utyQjHAFPIh8DimPIgxSMEuk+1982mpEcDofld38uS0q++3NG0l+N/FxeUlbS1bs/cxXe/f1P\\n\",\n       \"veD/MqHiiAOKgg6ayWQ0Njam/f19410gCWKMzaiY+nI07J2Fge/z2dmZmZr4fD4lEglDDtgNuQGw\\n\",\n       \"OQCmSiQSloLKLriwsKB6va6rqyuTGfF7oxxo9HvY2wKpUTKQ5YKu0ev1an5+3hyf4EP3+33L2c5m\\n\",\n       \"szYGHx8fN61kIpEw+RWnCRM/FienDTa2fO6Y5BB+hESMTEF6kftc/8agueFwOHQcZ/g3+ZhwgMvl\\n\",\n       \"svL5vGVoY1RI/cdu+8knnxhJhklXpVLRzs6ONVkzMzPGuYD4D0qCOpoc7W63a6bbYNHLy8t6+/at\\n\",\n       \"pqenrYSA1skE7ujoSKlUyqA0jF+wGKDO3tra0uPHjw2XxYeDgM61tTVThayvrxuZp1QqaXl52WRe\\n\",\n       \"1Kh4YuCev7u7K7fbrZ2dHaXTadVqNRukXF5eWoNGTMSoOSJMQEItGf/n83lNTk6q2WxqZWXFXlM0\\n\",\n       \"GlWj0bAJZiqVsmzt973+thdz2XGc1HA4LL0rIaBJFSTNjvzcjG535MK7P4/+/U91pP7zP/9zO47h\\n\",\n       \"/i4sLBhBB89hFtTi4qLliaRSKRUKBblcLlNZBwIBOY6jpaUla9Aw+2ZHZpoH3CbJoLtOp6OzszM9\\n\",\n       
\"evTIGh/4C8QrYAMbCAQMWaAk+cpXvmJm3M1mUx9++KFZbMF/6PV6knRnyPPBBx/I7XbbTsv7xnUT\\n\",\n       \"jLjb7dqAp1AomJTq8ePH8nq9+s3f/E3zf6MJhBiEskSSlTRQa/GI5jl9Pp9pM1dWVu6MsA8ODlQo\\n\",\n       \"FHR6emon4Ptef9uL+U8k/WNJ33r333858vf/wnGcf6bbMmJV0vfe7d6njuN8TdL3JP2Hkv6nn/bg\\n\",\n       \"T58+NWI4I9P9/X396q/+qqEbyeRtZYPl7eHhodLptPb29swK9u3btzawyGazOjg4MOLR4eGhJYxe\\n\",\n       \"XV2Z0oM6s1qtam5uToeHhzo9PbVQ9lHD7k6no0qlYsc5BouO4xh0ODU1pd3dXYMPWbSSzCkUzjY7\\n\",\n       \"IST+4+NjRSIRbWxsaG5uTs1mU4VCwRphJEyM88vlsvGNT09PVSwW9fTpU21sbFisXKvVUiaTMZ84\\n\",\n       \"8GVu6kajoUgkYv0HFmg852Aw0OHhoQKBgPUy/X5fCwsLZgN8n4GJ9IuF5v5I0nclPXIc59hxnP9Y\\n\",\n       \"0n8n6Tcdx9mW9Bvv/r+Gw+GGpD+WtCHpX0n6neHnt+nvSPrfJe1I2h0Oh//vT3tOpEDBYNDsqPB+\\n\",\n       \"kGR16tjYmK6urvTkyRNtb29blANcBkbOp6enRjQiv2M4HJoOT5I1Z8Fg0LzlKEcw9Qa6op5Fmg+S\\n\",\n       \"4fF4lEqlzA63VqvpzZs35ssxmgWSSCQMv8atHjsBFgh8Z0br8XhcPp/PNIWUNzSXTPbi8bjloOBW\\n\",\n       \"StIsDTTwJE6pEP5h6t3c3CgYDN5R8IC0YK07GAyUSPz/7L1ZaKx9nt/3LS0llWpXrSqptEvnvDrL\\n\",\n       \"291Dz9uDjQcTE8LcJJCLkItAcO587YDN3BtMIJAJODdxArnwQMDBZGCmJ8MwNtP99uJ+l7NKOtpK\\n\",\n       \"qr1U+y6VVJULvZ/f+6g9Y4cjjz0R7wNNn1fnaKv6P//n9/+ucfl8PmWzWa2trcnr9VqG88def2U7\\n\",\n       \"82Qy+a//kr/6O3/Jv/8XpEKkAAAgAElEQVRHkv7RX/DxLyS9+P/yPUmjZM7FBYzDAW0ytOvr16/1\\n\",\n       \"8uVLFQoFJZNJG1EoaUc2eXV1ZTjtxsaGldlId1BaJBJRtVq1TAjiB549e6aLiwujsEOhkImgSqWS\\n\",\n       \"pZOCgZNl7KwWW15eNikqCxK4jYMlqUW0PPG7E9sFdY1eGbqZ2d/tdlsONQcyyJB4PK5Wq6VoNKp6\\n\",\n       \"va5UKmWZG7VaTdFo1HyB6L3Bop1tt51OR/F4XKVSSV6v955YqlqtKhKJPHgxPyoGEAKEkzOSROJX\\n\",\n       \"0QtwWKvX64rH41pcXDQXNrsIcBLB4lNTU6YppimVXR9bPtASckzczPRjE77S6/Vs12s0GrazStLh\\n\",\n       \"4aGV9bAIsOXz+chZ0UWzuKCKGaHI7CDls1wua2pq6l5MADUXPCFGo5HBkuzwvDabm5tGHuE5lGQz\\n\",\n       \"P85ybn7+HmY1m83a+xSLxdTv9+/1dCNs+tjrUS1mZJ9ER7G4sL+DFXMSJ+VzPB6blvnq6kqLi4u2\\n\",\n       \"ONxut9msIEui0ajcbrcd4GD80A47/W2YS/kZwK4hINjBOp2OksmkVT7wu5C5TKoR7B6VEcTDjsdj\\n\",\n       
\"9ft9sx5dXV3ZWBUMBo2sYffkBkfoRKUyvw8kBg4dbuzZ2VlziDtvimw2q4uLCytIgtZHvcjrOjMz\\n\",\n       \"o42NDVMiwiAS6viQ61EtZpqclpeXLREehRi7BztWKpUy7S//hgUD89dsNg2V4AZIJBL29WZnZ21h\\n\",\n       \"MWYgsCGhCC0DTmqaq3BZIORnxk6n04YdkwwE3Xx5eSmPx2Oh6HSukC6EfYuRJhAIKJFIqFAoaHl5\\n\",\n       \"2TTctEZxWHOKjqD/WeBUBPO9xuOx5V8vL99B/pNvqpKXl5ftKQEbiSOb6AcO5oS+o88gifQh16OS\\n\",\n       \"gHY6HZvTstmsUqmUVY5BB0Nr1+t1K01fX183NVun07G4rsFgYLsROcawWlNTU8pms2ZmZS4HLsvl\\n\",\n       \"ctrd3dXc3Jzevn2rjY0NY/D29vZUKpVsFkXPUavVrEJhMBiYTqTf75sPj2oLZmQoduSsHEBdLpfK\\n\",\n       \"5fI9qvv29lYnJyf3VHtgzwTE4MbB2BuJRLS/v286DmqM2+22Xrx4YX3aMH2RSESDwUD7+/va2dmR\\n\",\n       \"JMvSwPZFnjQoDKVJb968+Xe8w//261HtzHjiwEKhndl5ms2mzWgLCwtGHzN7NptN0/zOzs5axNT0\\n\",\n       \"9LQ5mUulksUMcLDDxu/1erW/v29aDoLE0Waww/NGYhsKh8Omcut0OhbEyMGOA1w8HrfdlO/HyMDv\\n\",\n       \"j5WfAG9cLshD+XeYX/1+v+r1+j2yAzaOzwXFYeHFYrF7+mxn5jImhMXFRaO+y+WyWc1wkCN4Ojk5\\n\",\n       \"MZ/jQw2tj2ox82il7pcmJk7+FMPzJlCHS8gh1cFUg83NzdnXQJ8BHc6bwmGITOInT54omUyaXmNp\\n\",\n       \"acnmZsgFDjrOxiXknEg6KcHB5ErOMbMuYwjqN8LT0TswErBrLy4uajweG0qC+o8FOz8/b/gvFLfL\\n\",\n       \"5TKordfrmQAJTyMtXOTRjcdju7HRiqOwu7q6Mve5JNNQJ5NJlctlk9A+5HpUixk70eXlpZaXl+3F\\n\",\n       \"QgpJbFQsFlM+n9fLly8tr5mTuNvtVjQaVTAYVCQSUSwWUygUMmf2eDxWOp02RwU7GMlEjUbD0jOZ\\n\",\n       \"n2OxmNrttkVx1Wq1e9AVB0aiuQh+JPPOSYwUCgWzICGjBIUIhUKKx+PmqKGbOxKJKJvNaji86wfP\\n\",\n       \"ZDLWeY27xe12m7OEHZKDH4vWaTLw+XzG8i0sLBgCBG6NiAkNCF5DJ2LBpsOZYm1t7UHv/6NazM7O\\n\",\n       \"EsQ29XrddlL8geSnMR82Gg0TyvOG4apmZ0NQTiALmmAwWrfbbcgBc6Pf79fp6alCoZCWlpaMOUyn\\n\",\n       \"0wYjTk1Nyev1GiMnyYwEkuxpIn3rxQsEAjbjczAlJkGS5TUT+8UNi1E2mUya4o2DGU8qxhiETcSW\\n\",\n       \"rays2BMNdAVyidZbcH1nsSWtsox4S0tLikajtgF4vV7bwR8qA31UB0BO9lNTU1pZWdHCwoKePn1q\\n\",\n       \"ZEmpVJLP57PK20AgcE955na7dX5+rlgsZuMKFiUOX0tLS6YDBtpaWFiQy+Uy79z09LQ2Njbk8/kM\\n\",\n       \"tZC+zUhutVpaWVkx5zb5d9PT03bydyIm2LUWFhaMjVxZWbF5tt1uW8p/IBDQ7u6u9buUy2UtLi7q\\n\",\n       
\"5OTEdBg0yeKUWV5etqaAbrer1dVVUxAyD1cqFfNSUkzE68c5xO12m0IunU4rGo3q8vLSkCSXy6Xp\\n\",\n       \"6WmtrKwYjAgj6na7re/lY69HtTMDtQUCAdMlv3v37l6xZKPRULPZtEXxh3/4h5YOKt3Be+VyWd1u\\n\",\n       \"V7/85S/NOMphi0c+Viko4Uajof39fdXrdWUyGZ2cnBjsdX5+brgz4wKkDa5xOlempqZUKpV0dnam\\n\",\n       \"y8tLffjwQaenp5ZzjH+v1WopEAioXC5rOBzqxz/+sVUQn52d6csvv9TBwYEikYhpTKLRqBEndL5c\\n\",\n       \"Xl7aCEQEAcgGehVm2mq1ahQ/6AQuF3Z4DoAHBweW5j8cDg09arfbyuVy+vLLL01wBUZ+fn7+oPf/\\n\",\n       \"US3m0WhkgS/Mz8x7kBzEUHH4onrA7/db1ACGVqq8oHbb7bbNhGRnEMvKLI7fDf0GGg4UdWCr4MeQ\\n\",\n       \"C/jpSqWS4df9fl/xeNyqkZ2MGrJT6e4g+fTpU0sInZqaUiqVktfrVaFQMHsVMBomWwgTEB/w36ur\\n\",\n       \"KxPz86Qg+UiSZV7ARIIiMZ6hNaHcB+0yBmHatIbDoQ4ODuwGwcn+sdejWsxOCSb+OmIBgMSwQaFR\\n\",\n       \"lmQkAxemTN40Sab8KhQKJhuFlga7hibnjebmcYqKCBwEJ2Z2BE3wer2mXgsGg9ab1+l0LHgRJlOS\\n\",\n       \"sZcgF6QulUole/xj2r2+vrYcPHyHtHNhdOXsACwIFd5oNIxAcjbJgkrgwOE1B8sHvcBWRWuX9O1N\\n\",\n       \"gaAfrcjHXo9qZg4EApaMeX5+roWFBS0vLxv2urW1ZUZR5Jvs4pzWYbxY3OCn6HK///3vq9Vq6erq\\n\",\n       \"ynKSWaxIIsPhsImDJJlSj7LI5eVlg/4ikYjOzs4sYyKRSBjpwgzNgk+n05YNcnh4aMlC4XBYxWJR\\n\",\n       \"U1NTRlm7XC4tLi5qaWnJ8u5wXaMbATdGw4IUFVES1RhIUr1er9bX1yXJFHsIjDY3Nw1LxmhATh89\\n\",\n       \"LYiYiFzg6YGQf25uTn/8x3/80e//o9qZOYWfn59reXnZMF1knThD0DjMz89rd3fXRhOCu5PJpJ4/\\n\",\n       \"f24RWW6327QQ1WpV6+vrikQiJtwH6QBpQG9wfn5u6USkzeP0xrkNxY2ajViDvb09pdNpM4HOz8/b\\n\",\n       \"fHtwcGC1wdQo4I6GQCH0nHRNdl4IJBKXkK5Go1F74kh3Yn+oe9qv5ufnrSgUhR3RDDwF0GvzNch0\\n\",\n       \"5r0BwuTwvLu7q0gkYibkh1yPamfu9/sGf/3Zn/2Zfvu3f1vtdlvxeFzn5+eWRN9qtfTixQv9+Z//\\n\",\n       \"ua6vr/Xpp59a6MnU1JS++uorcx9nMhnVajWrAh6Px/r6668VCoV0cXFhMyewFegH83csFtNPfvIT\\n\",\n       \"a1Uir65YLBpEd35+rq2tLZ2eniocDqtcLqtWq5nYnhDETqejSCQij8ejDx8+2LzKPJ9MJs3kinGV\\n\",\n       \"Cgaqz1ZXV1WtVuXxeKzMMpvNKhKJWMoS54U/+7M/06effqpisaijoyOzoa2vr1tGNTY0FjPoDTsu\\n\",\n       \"VPft7a3VGhPpdXR0pGw2q1qtpp2dnQclgEqS66FWlb8ul8vlmvz9v//3TdV2fn6u1dVVtVotJZNJ\\n\",\n       
\"QzM4fIHBvn79Wmtra3K5XLq5udFwOFShUNDu7q76/b4SiYQymYxpbweDgeG7xNeGw2GbaZkjMco6\\n\",\n       \"O7tpPV1fX1cul9PNzY22trZ0dnZmhgGqG77++muzMaH7WFm5c5Dxc6RSKYMbh8OhhTs6c6pJ0/+t\\n\",\n       \"3/otZbNZ+Xw+3dzcmEEVCSp+RZJIA4GAstmsksmkoRJTU1NqNpsGza2srBiRA75MItLh4aHS6bS9\\n\",\n       \"Xq1WS6urq6bDoDSTpyXk0D/+x/9Yk8nE9e94u//C61HtzLOzs5YFt76+boczxPDMd5VKxSJkR6OR\\n\",\n       \"fa7X67VETEgVpIuBQMDqFNBVBINBLS4uKpvNajKZaGlpyUwAoVDIEACYQHKggbH29vZsjAD/xTj7\\n\",\n       \"4sULe3yTzYYKjR7C0WikeDxuWmDwaqrZQEGIFvN4PAZPRiIRhcNhi/tiNmcHZWeFJr+5uVEoFLKx\\n\",\n       \"jbgAAtxxpNB6hY4EDJv4sUgkYn0wEEsQXN/R2Y6LmKd6va7PP/9czWZTBwcH6na7CoVCKhQKuri4\\n\",\n       \"UC6Xs/yJ4XBou8dkMlGn09Hr168l3YnsZ2dn9e7dOw0GA3U6Hfsc9AlkctRqNfu6qOCOjo5ULBZV\\n\",\n       \"KpUsoAY1WrlcVqPRUKFQsJ8LC9LU1JQ9gtGF5HI5w3vBnGEwa7WaxYVNJhNlMhnLqM7lcnr79q3K\\n\",\n       \"5bIqlYqRGs62LX43GNPXr19biQ7Y9vv37/Xq1Svl83n1+321Wi2LRqCThTl4OBxaLcX5+bkmk4kl\\n\",\n       \"i5I2RdA7X+/g4ODBUQOPajGjCeZxL8l2lcvLSwtPxG+WzWbNXsQLCRkAPlwul5VKpVQul02jjEmz\\n\",\n       \"UqmoVCoZlEcuBnit3+/X9va2uT0QNNGuhCyz2WxadRr5GxAryCddLpdptIvFos3Q/FzhcNic18Bu\\n\",\n       \"LMhwOKzl5WXTGHPYKhaLJkuFQJmenjaCCP/heDy2aF/ERODtlUrFKPVKpWJpRjyB2HERLhEeU6lU\\n\",\n       \"9OHDB7ndbrOtfRc27rg47YM+ODv9qOriEJfP5/XixQvTA1erVbNAJRIJo6ihpHnsMi+D14Kf4ggZ\\n\",\n       \"DoeGjni9Xh0cHFgpJW4Xcu5KpZJlSqOAA9cmeV66Q2kqlYo91kn/B+rqdru6vb3V+fm5hsOh0um0\\n\",\n       \"zeOBQEDxeNx6TJCvMl/DKKJXhkiCgPJ4PMrn8xZGw+9GxBkVGYjygfXK5bKp/Pg7dN/D4VAXFxem\\n\",\n       \"PFxcXDQi5yHXo1rMTmiqVCopFAoZrIVFCS9dOp3W+/fvzVgJrjw1NXWvBhgSggoDqHD8hOgTENf7\\n\",\n       \"fD4bDSaTiba2tiTJNBYQBEB2zWZTp6en9/DhyTe9hQjZnbMxRT/OMBvQjbW1Nfl8PjuwES9wcXFh\\n\",\n       \"NyKYLh4+n89nNDM3E0WaUP8IgMiRCwQCevLkiRmAuaHRUDMrh8Nh02Vw0BuPx5qdndXOzo7G47HZ\\n\",\n       \"wzAKPOR6VAdA0nqur6/NFoX6ixcWN/VwONQPf/hD0yjgvNjb21Oj0TB6OpVK6fXr15qdnbWU/LW1\\n\",\n       \"NRPYI2Mk4urq6korKyv62c9+ptnZWTWbTe3u7lpGBdEFaDRYnPgRqSlzLuZCoSCv16snT56oWCzq\\n\",\n       
\"5cuXajQa8vl8Zun//ve/bzfd+vq6xuOxIpGIwWROM4Db7VY2m1U8Htf19bUd2nw+n4rFora3t+3Q\\n\",\n       \"GY1GNR6PLf7LaaeCAEkmk6pUKjaeTE9P6/3794rFYra5cPMmEgkVi0VjBxlHQqGQPvvsMzuvfMz1\\n\",\n       \"qBazJKNikT4ym/r9fvP7BQIBjcdjXV5eqlwum2653++rUChocXFRiUTCymrQYnAzUDrDaR9FGN6+\\n\",\n       \"UqlkRZRcWJkkGfFAtNfMzIzq9bolGUkyZGEymejJkyf2+WRJo97jxiBNU/q2dhnbP27x6+trxWIx\\n\",\n       \"XVxcaHt725wpaENubm6s9ow5GSgTAb7H47ECIV7X4+NjS/eMRCKSpJ2dHYssIOGfoiGeDrxWOFSI\\n\",\n       \"I/vY61GNGbz5w+HQgr8vLi4saahcLlu2GwcZEAAklqANKMZQdI1GI2UyGXtTsejzKC+Xy8pkMrYT\\n\",\n       \"IgZyu906PDy8F17O45tDZLFYtJIgrqurK4vJrdVqarVaevfunSUIcXgjI7pUKlmf3tHRkYbDoYrF\\n\",\n       \"oiqVimXKESGLZczZBFCtVlWpVAwRGo1G2t/fV7/f12QyMaIlk8nY7E4z7XA4VC6XM2jQ7XZbqA6t\\n\",\n       \"q3QiQtW3222L7AUZIYrgY69HtZgZKQjxQ/rofJzhLiZsEJaKbAhJRl37fD61Wi2l02mdn58rFAqZ\\n\",\n       \"ZhnxuZPVIh3T6UFEA4yqDAwX0Q76iJubG1vULpfLVHX4/wikaTQaxgzOzMxYZdni4qLdmOx+zhAW\\n\",\n       \"xP/8G8Yjvg4pns5EJwypoD8conHJkBNNulKr1TIZKN/X7XZbEA4xB+g6+v2+RYylUikTJX3s9agW\\n\",\n       \"sySDpxDpYyJttVomyEfiuLm5qVqtpkgkoouLC9XrdS0tLdlBDCgqn8+b0+L4+Phekj4J+YiJgAVz\\n\",\n       \"uZxub29tAbCLIfZhIQWDQa2srCgYDJoNCxcISUE4y0FiEomEjTmrq6u245O77PQ+IpJCCI8LhYQh\\n\",\n       \"ZlrGCppVgTOB8cjpcx72CI10GhDIzEPBiHqx3W7b0w/MnXiGdDptf/eg9/5hS+ev10VANy4Nstuk\\n\",\n       \"bwU2ZC87A1KwRd3c3JgEUvq27BxblBOeA5FgXGCnhyrv9/vmO0TAQxYbdixJhm4gPqKNiR1s8k09\\n\",\n       \"GYJ50opI4GQ+5qnDzkuBJt19LGx02GiNITlAJqjOADajgJ5YW0lGGg0GA9Myc6BD3glGjRWLkYjD\\n\",\n       \"JN5Ep2vnodejWszQrU6xezQaNREQSaCc4IvFokFy6XRay8vL1pgkyYJW0um0CYicNWaMAM4QFiSP\\n\",\n       \"n3zyiVmPYN1wOuPCRvREjQQ/L6U/dHOvra3ZzxYKhYxdQ2Mh3S0wnh4gDjzOiaPFRY48k12W5Kfb\\n\",\n       \"21sz92JjqlQqtoNfXl4a6gGD1+v1NDc3ZzYwXo9nz54Zbh0IBJRMJrWysmJPDEa5UChkZ5LvGEDH\\n\",\n       \"hYAeGA6HNgk9lOLAiqHHALMlWBvEg+gpAgR5pIJXI/CJx+P30u35e8IG0XE4Y6t4DJO+ubS0ZIlE\\n\",\n       \"PDX6/b7pS25vb7W7u2s3AkQO8Be1Djc3N9rc3JR0d3Mjt4S1xPOIThstM9pqtBnslmhSGF+YddFO\\n\",\n       
\"+3w+Kw/iey0sLNjC5KzCE45dGU0KP+fs7Ox3HkDnRUUDEBQHJYTx8/PzJoJh/CCbmEZTZxMUj2eI\\n\",\n       \"AQ5yIBlY/hH4A085682urq7uLWBoa5RpuFzozmOXZ5dbXFy0XZRMC+xbUO+SLNUTxzc3IsmhLFLo\\n\",\n       \"ZEYmfm/CGQmYYSTg5iVEETYTSxiLnacdryuSAMRd09PTps9wRvJeX1/byPbQ61EtZubRfD6vg4MD\\n\",\n       \"o7MlGfSEUKjVapl9ajQaaWdnR4VCweSXtEVVq1WtrKwYUwcdzGO3UqlYLx7ifacWIxqN6ujoyJzS\\n\",\n       \"PNolWeBhqVTS7e2t8vm8zcbValW5XE65XM7kpefn57q9vTVIC1oaa5jP55PP59PPf/5zDQYDgxgz\\n\",\n       \"mYy9Liw6NNrD4fAeC8mMPTU1pS+++EK9Xk8ul0sfPnz4N3bp0WhkFDo+PgqJJJkbm3o3n8+ny8tL\\n\",\n       \"vX//3oy+pJPOzMx8p81wXuh8vV6vBRA6YS1sSQiOtra2DPnI5/M2z25ubioej5sQCLYMkgOpKLYk\\n\",\n       \"yBi+byQSsURNoD1ob/pMyO5wu92WHE/gSrPZ1OLiolZWVoyCnpmZ0c7OjlHbkozsmUwmJk91u93a\\n\",\n       \"29vT/Py8nj59qrW1NdNmuN1ug+dAdgiBIeSm0+lodXVVHo9Hz549kyQjcoD8CFJ0uVxaW1szjUci\\n\",\n       \"kbCwRij5m5sb04pcXV0plUrp2bNntttLspYuzMUfez0qBpAZEScD1iTeMMYGFs9kMlEwGLTFglIs\\n\",\n       \"mUyacJwUTTpCUIzNzc0ZS0dqEtpfzJ5EZwWDQXk8HhsJ/H7/PXYRyI2QdJqsKLxk9OFxDYXNYdF5\\n\",\n       \"c5Gl4URiIpGIWai40SaTiVmfeFIwg4NvExU2Ho+1u7tr9XI8CUCO6Cjn4Onz+SztH7EUyVC4ZRB0\\n\",\n       \"kY46NTVlN+nHXo9qMZ+enmplZUWlUkn1el3r6+s6Ojoyiw6QGtgxkNLy8rIJ1IPBoN69eye3261y\\n\",\n       \"uSyXy6Xj42NbXCwOOkv6/b5lTlBq+fTpU8OjFxcXdX5+bofSZrOp1dVVi9kKh8M6OTmxwxflNc5k\\n\",\n       \"e2fWXSgU0urqqt68eWM+RkwEr1+/1tLSknK5nNH5CHyIC9vf31ehUJDL5dLTp09tLODQVy6XbdH+\\n\",\n       \"7Gc/0/b2tpEhc3NzNgc3m02tr69rfn5eFxcXarVaWltbUy6Xs0q6733vexZXhkDf4/Ho66+/tswP\\n\",\n       \"TLrz8/M6OTl50Pv/qMaMXq9n9n7gMHbHcDhslCoah/X1ddP3BoNBSXdz7Pb2tubm5qxv78mTJ3YC\\n\",\n       \"v7i4MF0xGDDIBLamXC5n3x/cmIVMulI0GrU0Ig6CsITM6pAXy8vLdjMRjMgh1rlzoxXh90DhNhqN\\n\",\n       \"VKlUDPPmUHZ9fW1JqLjGnWmkqVTKdnFCJoHiqDMmviASiSgUClkvDKmq6Lv9fr8KhYJ1gS8sLGhh\\n\",\n       \"YcFKgJjdH3I9qp2ZIBRsSVtbW8acgVBAJiwvL5uplPmRGbndbiudTqter+vTTz/Vmzdv9Df/5t9U\\n\",\n       \"uVzWkydP7nV68OiE2v3hD3+o0WhkWRS3t7eG+YZCITsIFYvFe8WWTgq90WhoaWnJXB6cBfb29owl\\n\",\n       
\"e/v2rdbW1iwUkd8FjHg8Hmtzc9McKRsbGyqVSnry5ImOj48l3SVxxmIx3dzcGLSIRxA3OQuUcMXl\\n\",\n       \"5WVDc7gp0Vqj1Esmk8YoEmSD+AqSik0DpzxM6Ndff/3R7/+j2pnZAbEVwVRx0CDHAQYskUjo+PjY\\n\",\n       \"SAOv12u4KwU6pGuy46HZRfnW7XaNHcOlTZVYqVSyghuwWeJqSdiE5CEfOZfL6fLy0vBxyh/5upgN\\n\",\n       \"WFD8D+gOxwhh6FDXnU7HXhcOb5NvKn5hFsvlsv1+UO6QHpVKRb1eT+1221LyGYdCoZAtbq/Xq8Fg\\n\",\n       \"YMgN6Adjymg0Uq/Xs4Bzvhe/w0OuR7Uzoy9GSE9wCTsBxIIkOygSqcUjnV2Ckksn7RwMBs3oCoWN\\n\",\n       \"LmHyTVEk2gaSPwlZ8Xg8FkFF9BYGVeSdCwsLCoVCZl/iBlheXjaNCaQHOyXhNaSKDodDbW5u2u4K\\n\",\n       \"Vc54hUkBWJBIXvoI8fJB9XPjrqys6ObmxtoIyKqm5IfDIgKptbU1ixljLEE5KN1xAlS4Qd87VYMf\\n\",\n       \"cz2qnZkDDzZ2UoZgppBIoglAl4w2wxn44vP5jFFDx8ApHrgPFwW4bDgctsMOXxuiBBIEgT4qNRYo\\n\",\n       \"cQLsnIRzk13M4m42m3ZYhRzBMV0oFCxYfXZ21jTDkEKQN/gMCZ8hgQjVGuwgDmxmdUnGbkK5O9GY\\n\",\n       \"Vqtluy94uCQrRWIEhCEkUsypfXnI9ah25lKpZMQJwSrFYlHStznHzWbTWC3kinjxstmsms2mGV0R\\n\",\n       \"7IB6SDJzKWQJ8+dkMtHh4aHm5+cNy4aBdCIjVE68fv1aKysrCoVCJlDCtexyuQx9QYyPLR9Hy8XF\\n\",\n       \"hba2tmyBd7tdo9eBIFEKYq/qdDoqlUqaTCba39/X3t6eJTkBCbZaLUUiEZVKJWUyGe3s7Ojo6Eh+\\n\",\n       \"v1+ZTMYOfuVy2dpfUQIuLCwYovPFF19oe3vbDuPkbCDGv7y81NbWlikTodwfcj2qnRklGW+MpHtV\\n\",\n       \"CeCpnKTJunAKi3CNMELwZ07u7Dw0jxIcgy0IRALMG0EQpThQ0+DG7FqId9jJw+Gw6bD5H+ozHNTs\\n\",\n       \"dKR/8rN7PB5b+E7amEMpzg/pWxaS38vr9WpqaspaucjAQ4ONOo/dGz30eDy2EQvBEr8DYxznDj53\\n\",\n       \"OBzagROz60OuR7WY6c8AfPd4PGb4RNiyvLxsyjkamXBA8IKDelDYA3sWDAaNoIhEIrq6ujKRETT1\\n\",\n       \"2tqa5UMgZiLRCNIFoRCJmhx+UKpRPcaCxVyLrgSDKT0ljE348ljwkkywhFAKwoVRiI/B2Emyosxo\\n\",\n       \"NGpKQ4wKku4llwI9Enzj/Lm5ORYWFuzwzSbDCMNsvrCwcC8296Pe/wd99l+zi0Vzenpq1Q6c3Eul\\n\",\n       \"kk5OThQIBNRut5VIJPThwwfDYKm9hb5GHba/vy9JFkhOa+ri4qIlFaHJZdTA2nRycqKNjQ1DAgaD\\n\",\n       \"gTwej37605+q1WopkUgon88bhvv+/XuD8FqtlslRWRy1Ws2cHvV63XbU6elpFYtFra6uWrA3vzNC\\n\",\n       \"+JOTE52fn2t9fV2VSsVc5IPBQF9//bXVVAyHQx0dHek3f/M39eWXX+rly5dyu9362c9+plgspsPD\\n\",\n       
\"Q33yySeWdxGNRtVsNlUsFi1Mkq6T29vbe+9DvV63UWRxcVH7+/sW+bW6uqo/+qM/etD7/6gWMzkW\\n\",\n       \"c3NzltRJ78je3p6y2axmZ2etMGdjY8MWAY9hHCjb29uan59XNBrVxcWFMVXT03ddd91u14LA0RlX\\n\",\n       \"KhVFo1Ftb2/rw4cP5gghf4KILUic4XBo+opms2lRAdQ5RKNRuVwuVatVs0Whsjs4ONDTp08lydLr\\n\",\n       \"udGi0ajRzji8cZcnk0kr6wQ92N3dtVELLHwymejFixdKp9N2cw8GA21sbNx7nZG5RqNRLS4uanFx\\n\",\n       \"0ZLzmdc3NzfNYQ6hws/z4cMHbW5uyu/36+nTp9Yw+zHXoxoznJW6xEBRW4a6bGlpSVdXVzZXYo+C\\n\",\n       \"gdve3rbETh6ZpHuORiPlcjkT2UPP4t7mEMOBjqxjBDTg3l6v1/oFXS6XHVJJx3cGnIN2cADk9wTq\\n\",\n       \"arVaJn4i2IURyePxaGlpyRANMPi5uTkzA6BRhoyBpURNOBgMFI/HTYDFz8bXaTQakmRNU6A/qAE9\\n\",\n       \"Ho9h6B6PxxCU2dlZe/Iw538XAuO4ksmkOp2OLU5JCgaDNp/V63UL8f51GxQWoEKhoEajYaQEc3Ei\\n\",\n       \"kbCDGgudmZs3p9VqaWNjwxYJi4Y8DUQ5/X7f6sQ4ONLahLyUv3fKRZndCU+hpzsSidjuTWd4IpGw\\n\",\n       \"uVyS9aLwqGcRHR0dmRqw0+nY3yHc50DIbgyeDp2O6o/FT6cgemq0GVDokFgUXr579842nYdqmh/V\\n\",\n       \"YkaTiyKsVqup12xYVkYAACAASURBVOuZAyWVSpnegORJ0ADKLGHdRqORAoGA8vm8zYfoIKanpw3S\\n\",\n       \"gp7l0X1wcKBCoWD2fq/Xq3a7bYclbE3OqKpqtWqWfdCQt2/f6uTkRP1+31RozWbTvvbt7a3Ozs7s\\n\",\n       \"5kNznclkdHZ2pqurK/uZe72eAoGAjSLOUG/yojH24pa+vr5WvV5Xs9nU5eWlhTQ2m037ODgx+mgi\\n\",\n       \"DIi4xYBQr9c1NTVlxUdnZ2fy+/2q1Wra29uzhd7r9R70/j+qxRyJRMzr1uv1FA6H740Ik8nEpJde\\n\",\n       \"r9cOSiyYdDqt0WikfD5vRAP1aZPJxMgLREfkRWCShd1bXV01manH4zHUAVlkIpEw2z2PbZzReAPX\\n\",\n       \"19cNDoQdpEMPxzQECf0swHH4HtGdkIJPHC74LkGOku4Vb3I5dSPBYNDievkY8F6/379XEITgCeIF\\n\",\n       \"XTb5fOhOyHcmG/s7d/avXegcwGZ5FPKmUyID3IRIPpfLqdlsWuUwJToslMXFRbNHkRuHIRONBmgI\\n\",\n       \"8yKsHTuTy+VSt9tVpVKx3fH6+toanehkGQwGFpfLU8EZmgJFj37YWd4pyUghDK8o7VDxcXMiYMKK\\n\",\n       \"xc02Pz9vhBGzNjQ3LmvK4J0pTlD009PTBk0SVoMS0OfzWU4f3eQEW34XAuO4UGkBVWFkhVb+9cjZ\\n\",\n       \"TqdjiZfMhjQJIHuUdE+/AA1MWCJEBzsvQn9JJgji/6GFIXDQS7AIUffFYjH7t0RsQVRIshwK6N9A\\n\",\n       \"IGD493g8VrVaNXERMVosevoIabtCjO/3+41BlGSNVBgIGEFY7EQMMAODK6Nbcbvd5vohgDKVStmG\\n\",\n       
\"ws4NsUKR6EOuRwXNweoR5zoajbSysmI5coj1yWWjq5kFFYvF1Gg09P3vf992tqWlJcttQ5fA4r+6\\n\",\n       \"urIkz42NDSMhXC6Xdnd31Wq1NJlMLDiRmZOdHbENoTOURBJUs7GxYXJVr9ers7MzY85+53d+x0Ja\\n\",\n       \"qFCGUfv000/twIikdX193RCDr776ytw0uVzOrE8ItHgdv//975swiXR+pLJut1tLS0s2c6fTaV1d\\n\",\n       \"XdlrShOAz+ezm7/ZbCoYDGpra8u0G6lUSplMxsRJD7ke1WLO5/OG2WYyGWOtOp2Ocrmc3r9/bwTE\\n\",\n       \"zs6OfvrTn8rr9SoejyuZTOri4kIul0tv3rzR3t6eTk5ODF3AOo8pkyR+DpI0sgJZ8Qh/8uSJ3r17\\n\",\n       \"Z7sec+RwONRkMrHyG7THwFtISEejkWKxmI6OjiyEvNVq6Re/+IXJN2dmZkxc3+v19OWXX1qGxmQy\\n\",\n       \"UalUUjgctmB0dt1CoaBut6vz83NFo1FztKNf/vLLL7W6uqr5+XkdHx8rFAqpWCyaTpxEIhoDPB6P\\n\",\n       \"oSPchLVaTefn5za393o9q3RutVqGdsTjcf30pz990Pv/qBYz2giPx6O/8Tf+huGdMzMzSqVSGo/H\\n\",\n       \"djianp7W1taWvv76awtsAVYC1vL7/Xr+/LnNysBqpPwsLy9bKDcaZsgAxEvoHBANDYdDE75DZrx8\\n\",\n       \"+dJy77B2MTIg1OHfwwAS6gg5QkyXz+ezKFraWZeWllStVu1rgJ1juaI/BYMtGgvMAxxiKXAnGoCR\\n\",\n       \"C1yakWF6etoO2rOzsxbrJd2NSMQn4MPkiURm9sdej2pmBnxnHgYJYA4Mh8Nm0aFYkYZSxhAkoxQ6\\n\",\n       \"ttttnZ+f266YSqXMmHp7e6tUKmXjBHMn8s2ZmW/rg53lP8zlLFQWtRP3JhoLrTBMJmMKDCEppUB9\\n\",\n       \"zoKg1dVVc5AwG7OwGE+mp6eN/EGHwqi0ublpZlPOBsz+RAtweIW+9vl8ikajxnAC0dFzgkLOSTJB\\n\",\n       \"+nw3ZvzaRTG60xns8XgsGLHVapkrJBgMmriF/jyYuXq9bppliAqCTyBDrq6urCA9kUhYNx41wexS\\n\",\n       \"zMcQJ3weyjoe16RzcjMMBgMNBgOdnZ1pNBpZHzXlQFDcdBFCeDAKobIbjUZWk8wsi5GXxUR0F0gH\\n\",\n       \"mRj0HV5fX9uNCjnS7XZ1eXlp5xRnSEyj0dDu7q4ajYalGqGQA9WZmZlRsVi0ZCOETB97PaqdeTQa\\n\",\n       \"mWoOnfHNzV2fNEHWdJjMzMwol8vpiy++sDeCMhzyk+m/ZqckFJF4L2fJeavVMqSAvGGv1yu/36/T\\n\",\n       \"01OzEgGtQaIgwCGwkOaqk5MTu/lYBJlMxna5wWBgITHoLyAlXr16ZawaPxMZcoVCwQ5ks7OzRtiQ\\n\",\n       \"A40Yf35+XsVi0XyIRIxNJhOdnZ2pVCrZDXxxcWGvjySzYJF01G63TWNdq9XMHT4ejy2lv1qtmtn1\\n\",\n       \"Y69HtTNj1e/1evrRj36kmZkZy2L70Y9+pLOzs3syzh/96Ef60z/9U3k8HsXjcWWzWYVCIQUCAa2t\\n\",\n       \"rSmRSKjVatnogcUJIT1yTPTM4/FYyWTSHunValXxeNws+fF43HZ0Zs3r62utr6+b5SkUCqler2tr\\n\",\n       
\"a0snJyfa3t42gX4oFDLKNxgMKhQKWY1yMBjU1dWV9SAGg0HVajWrLD49PdXTp0+NrZPuyI6trS1L\\n\",\n       \"x19fX9fl5aXtkM+fP7cxBMx9fn5en332mbrdrnZ3d602Ar8jTz2nmTgej6vb7Zp9isMiysNwOGwQ\\n\",\n       \"4k9+8pOPfv8f1c5M2iUBgSRvMkN6vV6z+gMvpdNpIz+YHUEIMLPiY4O1Y94Lh8OqVCqW57awsGCL\\n\",\n       \"Cr0Hj31ERDgyFhcXLYC71+up3+9rY2PDIgMYSWDOeEzzSGdWRdsxHA4t0IZ6BixJ/B6NRsPCEBln\\n\",\n       \"nDM5ZAavjd/vNwzaGYzIYsSjyPdMJpOmbcZeJn2bphqLxUxpB9kCdOfMqfvY61EtZsiRlZUVvXv3\\n\",\n       \"znQZ3W5X796902QysSDver1u+XPtdlvlclmJREJut1v5fF6VSsUeyc1m03Irbm5uLGMDsB+XBrju\\n\",\n       \"zc2NKpWKnj17Zsowboipqbuy+VKppHa7bYs/kUjo6OjIAhnr9brh469fv7aid1AY6e7mrdfrxryh\\n\",\n       \"c/Z6vcrn83agmp+ftzEDsRNfH0c57B/5eGTAodWW7jKXcZTwmkmyjI1yuWzumHq9bo6ZTqdjyAjE\\n\",\n       \"FMQSWXrLy8sPprMf1ZjRbDaVTqdtpyOKCikjj1NKfLDkSzKxEaQBsBJzK0wcNCxO72AwqHa7LemO\\n\",\n       \"NUOYjpCGmwE/ICIlYr6QrTJ+FAoFs13RNxIKhYwuR1dcLBYth47gQunbInZgvampKVUqFSudn56e\\n\",\n       \"tlo5bj4SlQqFgon7nz17ZhLWy8tLi9T69R3U7XbbOEVGHjsuh2bEWPv7+xani8oPRWEwGDS57sde\\n\",\n       \"j2pnBvQHW56fnzeM+Nez0YDC8Ag6U/cR4CCfnJqaMjqXnTMcDmtubs7EO9Vq1WZRgllg8lqtlh36\\n\",\n       \"iBUAcYFhA6d10urO3Qu/IQE0zn4SFheoC1Q3PdzgwMg4ybpDa0H6EvkXjDDc4IifeB3xBwKzES1G\\n\",\n       \"tvVoNNLFxYWNSOhbnDsvqBFxw+122yxVH3s9qsXMQgCdIKKVncBZnshj0O12W+wAMk52UkpoWFxo\\n\",\n       \"k1HBgXQMh0ODvNAozMzMWOAih0SanogL4CnBjMvNgyjK2b9XLpcVDAZVrVat74SnB7UX0rc5zexy\\n\",\n       \"zOgQNpIMV5dkowtnBQgTbhYWLhlzmUzmnu4YgRLIDuIoRFPAhtwUzvDEUqlkoxBw50OuR7WY2+22\\n\",\n       \"QqGQpVJ6vV6lUil5PB6LaYXo8Pv9pvElsIXdjMoCdijCDBcWFrS+vm4G1HQ6rW63ayQBuDHySHZv\\n\",\n       \"dkWn8zsWi9nX4Y3HKU1GHOlHsVjMdCN0GHJQhBHkwDc/P28/Pzs31n7+/Wg00tOnTy1ViadDr9fT\\n\",\n       \"0tKShsOhGXyxQfE6kPbJRsFNQrQYPzepnhAiBCViLA4EAhZAyUj00OtRLWbcCzgm2ClRqi0tLZlG\\n\",\n       \"eDwe22OdBTAajYzxgn4lbw4Grl6vm1SR2RwGjhw24CYYMElmfJVkI87S0pLN5fV6/V6dWD6fN2fL\\n\",\n       \"0tKS4ec8km9ubnR6emrzvCRDHiQZmgPqQAoROmznrA55QnQZMQj0+aFDTiaTFppDTwkwILENVBfz\\n\",\n       
\"c4BcuN1uOww6O7WDwaDJbgme/NjrUS1mQvyur691cXFhWg1gsc8//9x0EdQU8PelUsnqyKhWI5P5\\n\",\n       \"7du3hgMjq8QdEQ6HbU7lZI4HkUgtmDEqJ5zVaUB4kBgwh8lk0hZdoVDQYDBQqVQyzbTf71c6ndZk\\n\",\n       \"MrGRI5VKWaVZJpO5F15DQCHnBW7YmZkZXVxcWK0x7QIul8vaqNBP032Ip5Fsul6vp+npaQvYQZvR\\n\",\n       \"arVUqVSMhGI+ZnYHJUE19x0D6Liw78Modbtd5fN5NRoNawotl8vW0XdwcKDRaGTRq/SQsMjpeEaz\\n\",\n       \"ixAJzJbQQnQUiHhwtGCrR2ONDJLx4fLy0hKSyLkjdoukIDQmGFcR+ZdKJZVKJatem52d1fn5uXq9\\n\",\n       \"nlVS8LnOInjYPKIR+H2B55zdJ8y75NhhFSsWi1b1wJmj2+3aE0OSYdwkK9FG2+/31el0LCqBA3u7\\n\",\n       \"3dbp6emD3v9HBc0hYPd6vfre976nubk5vXz5UjMzMxbp6qws6HQ6Ojk5Mes7J/FEImGWfFLoe72e\\n\",\n       \"WfMhGxYWFpRKpXR5eSmv13uvumxnZ8cWwd7enh0McYHjj6MaQZJFw4ICOKllnCuElsfjcUUiEZu3\\n\",\n       \"wbdnZmb0/PlzDYdDPXnyxHZ+SdZ4hZaEgEefz2dtU7RM+Xw+vXjxwlAXyCfIGTq3k8mkCoWCUqmU\\n\",\n       \"pfYHg0F7QnGQBFenKZaD69dff22E1ubmpr788suPfv8f1c4M6uD1elWtVu3FlO7mW2fVGVoCXlSn\\n\",\n       \"JQr9AzMirmlEPbz5UMPkInPK55EZCARMT+wMAcdVTZImhzJm/larpVqtpvn5eQtY4VDFXE/VBeIh\\n\",\n       \"mEksYhAog8HA1IDY/AlugZ6GTLm+vjYWcTQaqVqtKhAI3NvNGRMWFxcViUTUaDTMoMBODI6O6QGr\\n\",\n       \"FWcCnDntdluxWMwCK79zZzuuubk5ffjwwaC1TCZjrBSjwuzsrG5vb1UoFJROp+X3+1WpVMyRzbzM\\n\",\n       \"okXKSb4G1PHU1JQCgYBqtZrFFDgzI5wkDbs7uyxubkypqNIwdYKY4HJ25kUzOkBiZDIZY+awL4GH\\n\",\n       \"w1SilYber9frarfbBkcyqoC7wwRyYC2VSmo0GjZ2kcXc6XSsCoNFSR4fOmVkuKFQSBsbG/Zz8jPx\\n\",\n       \"/UKh0HcHQOdVrVa1tramy8tL+Xw+bW9vm9glEAioVCrdOyw542+RJPr9fkWjUQWDQWUyGdPgYnWq\\n\",\n       \"Vqsm6SQDGoyVnmwOQYSfoKS7urpSs9m0PhUSQHF8I8h/+/atHQ57vZ61vjpd4UdHR9Yjws1B7C5K\\n\",\n       \"PpfrrlC+2Wxqf3/fVHTdbtdGFJfLZQgDiaLYnaDV2YEJvuEgDOYcCoWsQo1DHXMyQeK5XE4fPnww\\n\",\n       \"Idji4qLdGPV63Wb7h1yPamZG6JJKpewASDBgPp+3kEHeuFAopJ2dHROVj0Yj272vr6+VSqWMlUJc\\n\",\n       \"g+vbyfKRIgTRQdo8VDS1vIw08XhcuVzOHu+wiqPRSD6fT8vLy0Ydh0Ihe9zDsN3e3mp7e9tiEKh8\\n\",\n       \"QxvsdruVy+WUTqetuoLwGoRG+CA5OyB6oj2An2t1ddXo7HA4bAxiIpEw1wx6aL/fr2AwaOMT1DUx\\n\",\n       
\"ZKjj0MzQiOXxeKyi+J//83/+0e//o9qZcWBzAEGAg9A+mUza6Ztynf39fUMxJpOJFfBcX1/bgREZ\\n\",\n       \"KOmfvNGEuEiy3ZkF2u1274nvcZigHqNtdW5u7l4gDfoN4C7sUbhMWIztdtvQievra2uMku4OqLFY\\n\",\n       \"zNCBarWqZDJph1/6U/BHIoDC7Y0znYRSMPSTkxPd3t6q1WqpXC4bgTMzM2MbCSMX/06SvSYLCwuq\\n\",\n       \"1Wr2++LIRmP+HTTnuBDOg+9ie7q+vraoKnZBDnBPnz615HuwY2odnDJHJ9RFDgR4bb1e19XVlSVr\\n\",\n       \"IhGlv4OcOxYmnSXxePxekidaCXZ44DNQFFRsMG6YU5lTIYRwi0AAgV9LssMXMzI9JSjdiKvFGuUc\\n\",\n       \"d1h4HKR5kjHmIOpyPqGAE4fD4b1K4mq1aiNJr9fT9fW1qfA++v1/2PL563UNh0PLWuZg02g0LBG+\\n\",\n       \"UCgok8lYKEsul1OhUDAvHnhrsVi0GZhoKjQYZFJw+GIxSHeqvXa7rVarZSJ+BP6o0YbDoSUsUcng\\n\",\n       \"VPaBXbMoGZcGg4HN+9yU9IFgU2KmPj4+thq5crmss7Mzi9RijmYnHA6H9rrAmBJJxgFPkiWoViqV\\n\",\n       \"e2eGbrdrbng2DnoJQS/q9boajYYqlYqurq5UKpW0vr6uXC6nUqlkAq2HmFmlR7aYwS/ZedBidLtd\\n\",\n       \"e0Gvrq50dHRkhkzcKYSysNtymmcHlGT/XpKpzfC89ft95fN5LS4u3mPYmFNRhXHjsKgvLy9tETnN\\n\",\n       \"AVwsDNqiRqORPR0k2ciESo2oAAgOoDwOb8yseP9AQHjNiGXAWOCMmG21Wga1URXHmMNZAzMArxOv\\n\",\n       \"HSo5RP50d0Nr06L1kOtRLWZYMvSx9GXMzMwYFhsKhbS1tWWdHEgZ2YVAD5hlWZSk83AgDIVC5jJG\\n\",\n       \"EQaS4gxIBGFwLlB2WA5v4N7sxNyIhM0w96fTaZvtna1N/PvDw0ONRiOtrq5ajjTin2KxaGMLvkG/\\n\",\n       \"32/IBloKn8+n9W/KPm9vb7W0tKSLiwsbD6hThjTh4IwGhQYAdnwSlgh2dLlcRnd7PB4jg7xer6LR\\n\",\n       \"6IPe/0eFZsBixWIxvXv3TrOzs9rc3JTP57M3mF1xe3tbxWLR2DRs99VqVdfX1/YxDk3smDRHXV9f\\n\",\n       \"a3l5WePxXec1LmTiacmzwANHAIyzZoIQ8m63a8U30l2YDYuLg2cymdTV1ZVWVlYMr0aVR+AiwY8c\\n\",\n       \"NmHvarWaYrGY2ZkIKV9YWFA6ndbl5aXdVLlczhqk4vG45ubmtLu7azcNIZM4wbGeSTIqPxAIWJ3F\\n\",\n       \"zc2NVldXVavVbOHjZWQnR2m4s7Ojzz///KPf/0e1M+ORo0fPiQAgeOFNxkFSKBQM3AfzJaiQAxCw\\n\",\n       \"HE5vbgpEPJze6aeem5tTqVSyXR4MlnBBxgoWPLs/cy15xuC25+fnhs6gDQYblmRhhbBonAP4Nzwh\\n\",\n       \"yuWyHUoJRgRZ6PV6qtVqJk7yer06OTmxUYA0JHQfmBiYnUGAiAZuNBrWNksjAQdVwhLJZCY64Tuc\\n\",\n       \"2XGVy2WFw2HLV6vX63b4a7Va8ng8Jj6XZD64VqulcDhsaAJZa+12W2dnZ5YIxOO01WoZ9Ypw/ubm\\n\",\n       
\"xpzVR0dHJsGkVswJ44GtMoMzr5MPHQqFzKsIZutM5eQQRh4yEtBcLmd2KKxUCPdBJTg3YAwgCkCS\\n\",\n       \"RQDTgYi+m5gGkvx5LWEO+Zk5dIOPowakJgMokKeTkzWFoXzI9ah25ng8br44Kgjy+bxRxbVazRg4\\n\",\n       \"qOFOp6Nut2u47vX1taEUaHWPj4/tjWMm59CCDmI8HlupeSKRsMcxlipYSC5O8CQmtdttRSIRa0Jt\\n\",\n       \"NBrKZrMWWlOr1VSpVEwQxS6Nmfbi4kKLi4uqVqu6uLiwRdPr9VQuly1SDMaOuAQ0FaAz3W5XJycn\\n\",\n       \"drOiyWYnBfa8vLy0cabVaimTydgTS5KJn/g9z8/Pza/Izdbv923Momb5Idej2plJfy+VSjYjr6ys\\n\",\n       \"WL6cJDvI3Nzc6LPPPtMXX3yhH/zgB0asBAIBffbZZ0qlUrarUSkGIkDGm9/vN6yU+ZJdkAXe7Xbv\\n\",\n       \"FdRD1rBTBgIB7ezsWCcJBk+CDMFx+/2+Xrx4YcL3TqdjcWC0rJ6dnSmVSulv/a2/ZXpnHCYXFxem\\n\",\n       \"5QZ18fv9ur29VTqdVrvd1tLSkmq1mhKJhDwej2klIDyQBEDwVCoV7ezsWIkRO+zi4qLevXtn4vte\\n\",\n       \"r6e1tTXNzMzYzO/xeHR2dmb4NV3cJycnH/3+P6qdeW5uzmztsVjMnBXAbPl8Xs1m0x6bmUzGegE5\\n\",\n       \"2DUaDZsdS6WSKdd45BPgQmqPJJtnKefhUINNn8c9eDIkAl8XvTK2J0m2izF7M+vzcUYFbrhms2m9\\n\",\n       \"K2ixEQixg1PGzhOJ1+X9+/f2hOp0OioWi4YXS3dPD84ZPLHY+amLgNZuNptG6sBoAk2SntRut3V8\\n\",\n       \"fKxoNGq/N6/TQ65HtTMji+z3+3aCR3WGO8PtdlvlApoN9BvIGSWZH45Cmn6/byU/XHjnCHRhN2bn\\n\",\n       \"BdelDIj5GCUeQh9nrjTxB9DClNlj/8LTBxtXqVQMbuSpU6/Xzeofj8c1NTWltbU1EwjBPM7Ozqrd\\n\",\n       \"blu7Kwwo9PLs7KztukCbkuyJwNwrydKMQqGQpDtnC2MRmhSv12tPSOS0QJy3t7f2BPzY61Et5nA4\\n\",\n       \"LJfLpcXFRW1sbNjBCvoV8+nl5aXZgXCOhMNhJZNJ5fN5G09OT0+tyw4xPRcQE2MA/w7smZ2cXZyg\\n\",\n       \"bg483W5X6+vr95RwpHpiu2LnRc23uLho+g8Sg4DmJBl2HIvFbGFyCKPnMBaL2cF3fn5eT58+tdHF\\n\",\n       \"5XIpmUyaFpuK44WFBRuDCDkMBALqdDpaW1uzBiuIFlKOMLSSeETjLdAfehmIFDaSj70e1WJuNpty\\n\",\n       \"u926vb1VJpO5lxUHAZLNZi17ghgC2Dh27Q8fPphz2ykeGg6HarVa5kjBjEoKPJoM3lBwZr43OxU7\\n\",\n       \"OTDeYDCwXZL8Z0YLHNZoj8mvKJfLVtADCQMaAbFCbwvMIT+v3+9XqVS6h9IQSUB/S7/fN80xwZBo\\n\",\n       \"tIPBoIrFohlYUe/Nzc2ZlDaTyZgzHjtVOBw2zTidhdwAzPIPuR7VzMwjFScxqe/RaNREQCwO5sdk\\n\",\n       \"Mqn5+XlzO4BCgIxgwSephxefGohAIGDfL5lMam1tzbLbgL8QMSEVrVar9rGbmxtrkp2fnzfxfyAQ\\n\",\n       
\"UCgUskMbkQI8/nF8JBIJxWIxExmhpGO0wW3CjE9/H/0sXHNzcxbTAK0ej8eNHOFjvA5EgPH9uPH5\\n\",\n       \"upubm/fy+5CL+nw+RSIR+/md5wvGmI+9HtVi5sXhDgfIBw3IZrPGqGGopN4BOpv6MzLYgsGgNjc3\\n\",\n       \"VavVbFxwluUQQTUej1WpVIzggFQBt2Xhh0Ih0zOMx2Nzq9CV3Wq1lEwm/41drFwu3xOx8xRwEjcz\\n\",\n       \"MzOKRCKmTUZQ5fF4bBbP5XLWXotHEXUdrhnID+j96elpra2tWZE9MzWifV43btZ+v2/wJgZW1H24\\n\",\n       \"uSFgODgvLi6aM+Zjr0e1mAuFgkXS/uIXv7BTfrVaVblc1mQy0dXVlT58+KDl5WW9evVKX375pVU4\\n\",\n       \"cBJ/9eqVGo2GcrmcMpmMXr16pVqtplqtZmgA1n92YP5cqVR0c3Ojk5MTC4chEPHw8NCQlFwuZ8RM\\n\",\n       \"Pp+3SgS3261KpaJXr14pn88btkv9mXTHdB4dHSmXy5kOpFwua2pqSqenp8b2oUU+OztTOBy2Gxg2\\n\",\n       \"k50wn8/r9PTUkvcRMaGEazQa+uKLL1StVrW/v2+51cQiDIdDy2gmpgyJLBsMgqpms6nT01PL9cP1\\n\",\n       \"3W639a/+1b960Pv/qBYzpYy4HkjZQSS/sbFhyZ29Xk8/+MEPTLWFLoLYgPF4rKdPn9oBEUMrJlC/\\n\",\n       \"32/sFoQKJ3b6Q3BiE8HFqBAOh/X06VMjDZgXp6en7ZCJdYlHujNQhlSgaDRqWuR4PG6LxO12m/uE\\n\",\n       \"wzAYOp9PfVuv11MymTRBEhG+19fXdkhk502n04Ybc+iTZBUa0l3O3vLy8j31IsmnHJhBl54+fSqv\\n\",\n       \"1yuPx6PxeKznz58/6P1/VIsZK73L5bIXVJKhDTRHsWOxA6Gsi8ViFrNFUeX8/F2v3+bmpgmPmP/c\\n\",\n       \"brei0aiRCk7jJrAVHkMWOoL2arVqijvMnGg8pqamtLu7azl1SErJvvN4PIbrJpNJ0wwnk0mzIlF1\\n\",\n       \"jO4jkUhoYWHBbmTmbhakMwgdMwKNt1D9l5eXCgQCFuMr3TnRWfTkSIP+IJN1YuFer1exWMzS+jmE\\n\",\n       \"S/qOznZeULvMlbBUKL5ub291fn6uy8tLzc7OWu0BWgMsTufn55Jk4ppsNmvBhM5AFEJaqO7F/QxF\\n\",\n       \"DNWN25tkTxJ+FhYWdHl5aXMnYiEe97g3bm5uDOEg563ZbNpj/vb2Vvl83tqeyuXyvdbZw8NDm8n7\\n\",\n       \"/b7Fg/Fz5/N5q2Zot9tWIXd5eWndKDc3Nzo4OLBxC/koC5cRjXMC2R/Ak6BFJDIBo5JgipHgIdej\\n\",\n       \"WswEHQK+k/5DUPj09LQleEKgQE3Pzs4qGAzargcEhYE0FouZe8Lj8RhGTQbG/Py8Op2OotGoEomE\\n\",\n       \"LSyQE3Y35lUc106BvzOQBY2JJBt7vF6vif/BvRkdnKo5SYbEOLMoqK3AbYIlCxiS34nF67wh+TsM\\n\",\n       \"uYxNXq/XUolAefg40CS0OcgP4xWSAEijh5ImLnac/79fLpdr8g/+wT+wEz96hX6/r2QyaSd1VGXY\\n\",\n       \"ltAkYOwk8Scej+vNmzd6/vy5ut2uCYUKhYKi0ajNnGCr4MnORy07GOwaVqTZ2Vm1Wi3FYjE72F1d\\n\",\n       
\"XSkcDqvdbisej8vv9yufz6vT6ZhQH38gdqx4PG7fA0SEYHJuQq/XazJYzLyQLYwO7KQcCsmjLhaL\\n\",\n       \"CofD1jRVLBZtgZIgOh6PbfZGk03RJcExHo9HlUrFCn1YvAicON/0+3393u/9niaTyUelwTwq0gQG\\n\",\n       \"6+rqSq9fvzafmd/vN6lkq9VSv9/X2tqafvKTnygYDJpA6OrqSpVKxaxCV1dXqtVqqlarhsFCOuDz\\n\",\n       \"YwaV7mbvYrFo3YBPnz5Vq9XS0dGR0um06alXV1ctpAWYDkwYGvj8/Nx0zRwgm82mNjc3rXAHOhg1\\n\",\n       \"ILkcw+FQqVTKRqzRaKTNzU2beYHhkJ1yqEXGyY3FXD8YDEwlR+dhtVrVD3/4QwUCAWv2Iq8OcokG\\n\",\n       \"WSSh5XLZ5KXOThUQnn/9r//1g97/RzVmEEqCKL7b7arRaNjjFlF4u922w5NTYkkxPOwd1DIh4ZLs\\n\",\n       \"UYtP0Kn9IEmJ9MzXr19rMBgoHA7bbglujKgGOI6F1Gw279HYwIbEBVSrVYsEYKcFHgO3vr29NaYT\\n\",\n       \"RAWRENVwVkvA8QAAIABJREFUfN7S0pKxg1QQgwlLsqyQer1uMtp4PG42KA7JjC0gH+zyvB/ENvh8\\n\",\n       \"PkmymF3iBYBUH3I9qsW8uLhoijd2JVgz52wK5MWjdnZ2VpFIxA5ACMWdSADhiSTvc8Ch5BKkgxAW\\n\",\n       \"DpV4AUFFsGeRXYEB9OTkxCJp+XuE78zfQIiI8Jn5CbEhzFH6VvjDeIWjptPp3KuDy2azpnzjTMHN\\n\",\n       \"TAQu/Yr8t9ONDvvH/wjJIXqLZCMkAI1Gw8J4JKlYLJorJpFIPOj9f3RjBrQoHSS8aIRe0ylCdFS7\\n\",\n       \"3Zbb7bYT/vT0tIrForxer8rl8j1GDFwXXJTqXoRM7Ix8P1LzeVSjaU4mk5bGD9THrE5PHjtXPB63\\n\",\n       \"WZ+EoFAoZPnLzOqNRsM6XUBqJFm9Mon3NMGizWDUAbuuVCqm5yA1n5w8Wgb4MwffwWBg2XdcvLaE\\n\",\n       \"koO8AP1dXl4aHIlW+qES0Ee1M7tcLgtD3NvbMxVaMBg02IeUIGJlUWwBhd3e3trBJZVKGbLBDMkb\\n\",\n       \"y7jAYYbPY1asVquWME+OMp1/tVrNZmXmSfDm0WhkPSHM5sCNhULBPIDMpxzsKNEJBoOm6GPMIqQR\\n\",\n       \"byQRWyAexAIQZ+sUHY1GI8OI6RPHTTIejw35yGQy9vry88bjccP0cfOgvkun0+b5cx6YH3I9qp0Z\\n\",\n       \"8Yvf79f79++VTCZNWBOLxUweWi6X5XK5lEgk7NDlTBFiMSBVBD8lbHA4HGp9fV2NRsMqJyh5h1Rg\\n\",\n       \"XJhMJpa/jNAGtVi73baDGZFeyWRS29vbevv2remsWQigAR6PR8+ePbODHAuTcJcXL15YqOL09LSy\\n\",\n       \"2aw2Nzd1fn5+zx0j3R3aqFXj9eAGpwWWtHxQG24AFIqj0cjkrPgVk8mkjRs8jci7W1paUqPR0N7e\\n\",\n       \"niqVijGYoVBIf/qnf/rR7/+j2pl5wyWZ4ByigYxloKVYLKZ8Pm87H24TSdrc3NTMzIyOj49NYMSh\\n\",\n       \"iMchCywej9/r3O71eqpUKqZrzmQylrhPvluhUNDl5aXi8bguLy8tnosZvVqtKhaLmY55a2vrXqYE\\n\",\n       
\"CAVRsJhUB4OBVS/g8CB/7uzszOIDOGxhlmWBIgIiUgsJLVplapTj8bglIlEGRFWFU9F3dHQkSVpa\\n\",\n       \"WtJgMNDXX39trCCQXzabNeIIVOhjr0e1M09PT1vZTb1e1/Pnz43xOj8/1+npqc1lBIAfHx9rb29P\\n\",\n       \"0p3gBjPp1tbWPSoW6AwvHfkapVLJZlQgvPn5eXvs/uAHP1A2mzW3NTAgVDZ6ZIIGqVxbXFy0iC2n\\n\",\n       \"3SoQCBgbiJ7C4/Ho4OBAq6urRmUTJYa7hEX91VdfqVgsKpFI2NhzcXFhkB1pR6urq/rlL39pGRvn\\n\",\n       \"5+eanp7WxcWF+v2+hYk3m02dn5+bxezk5ERzc3M6Pj7W7u6uBoOBjo6OrLSICggiGDhwl0olvXr1\\n\",\n       \"6kHv/6NazF6vV/1+33ZLQkl4rK6trZnJlF3rk08+MYOr2+3W9va2fv7znxuIv7q6ajt6NBo1JouG\\n\",\n       \"15WVFeVyOYPLnCbT1dVVEx1FIhFVKhXNzMxYKAzjAV0shULBXC6wlzRGoYHAcJrJZGy3Ho/HWl9f\\n\",\n       \"N5auWCxqbm7OPIG9Xs9Cb+bn57WxsWG2MuA9dt1yuayVlRXD7GEMd3Z2DO2IRqMWNjkYDJRIJGx0\\n\",\n       \"QgcOk0r4JK8NEQtUFK+vr9sTb3t7W7/61a8++v1/VGPG5eWlZcelUik76WMBIlEIdAHRvtfrVTwe\\n\",\n       \"t11RkiW+j0YjraysaDAY2GmdR2Kn09Hh4aEJlXw+nyEf6XTaDkHj8dgOgWRXcFAjZ+Lg4MDgsMlk\\n\",\n       \"onq9bnh1KpWyQktaVN1ut4UpDodDC11ETLWysmJS1V6vp/39fatj6Pf75nahv5sdGVsVoqZut2sB\\n\",\n       \"i+z2zogwmlyvrq60vr5u+mTcOWg3aMjNZDKm94DVhIl1+is/5npUixlrOzJDFiQHtWazaYvc6/Va\\n\",\n       \"VgMaXFg5aG1EPGdnZ4pGo9YJgjl1bm5O6+vr5pQej8dGoDjZPEkGwbVaLYP4gAQjkYi2trZMmgmR\\n\",\n       \"Qr9etVpVu922sG8OVaT8o4mQZHoOerbx4q2srNzDgKmyIEgRTQhPEuZXcuiA0IA8MfjC5C0uLhq2\\n\",\n       \"Dj4OdU6KaqvV0vLyspljnS6VeDz+4Pf/US1mBOiSjJzgQIhGAsex2+1WPB7XJ598oqWlJQsMHI/H\\n\",\n       \"Jtl8+fKlHRihXCElgsGg6TAk2cLk4AWVy8GPz5mdnbXFj6aB/7+5uVEsFjNIMBAIKBwOW0Ycrmyn\\n\",\n       \"OEqSkSOMWRwMCasBmgNyg11kFneOQNwMzPZ8HnAjgiiUe05tBi4cDpTs2iAt0p0cN5lM2muGis7l\\n\",\n       \"cln18Mdef2WL2eVy/W8ul6vscrneOD72P7hcrn2Xy/XK5XL9Xy6XK+j4u3/ocrmOXC7Xgcvl+k8d\\n\",\n       \"H/8Nl8v15pu/+5/+bd8TlVe9Xrf+jIuLC1Oq8diFkMC6wy4ObV0oFNRqtXR8fGzlOuPxWMPh0GA0\\n\",\n       \"sF7eQElGF8N6FQoFix+4vb01ZwXfi1YmaGEKKsF7OfyxGDn9QwPTBzgcDo12R0PtxIwxDpCNDK0P\\n\",\n       \"Y9jv920sYccmQJzMOyIJIH+mp+/6vmFF6WzhfQBhwYBLyLskk+fip+z3+/9eogb+Knfm/13Sf/Zr\\n\",\n       
\"H/t/JD2bTCafSvog6R9Kksvl2pP0X0na++Zz/onrW+3i/yLpv5tMJjuSdlwu169/TbvYMaamprS/\\n\",\n       \"v2/BfZPJRI1GQ6enp7q4uNDh4aEJdw4ODuzNx692eHioTqdjweJnZ2fyer1qNBoKhULW5soC4xH7\\n\",\n       \"6tUr25FwTPOzYIxFSE9kVqPR0MXFhT3COcA1m01dXFzo+PhYNzc35iAnu6NcLiufz5sBFX10t9tV\\n\",\n       \"oVBQqVRSJpPR+fm5eR+pJgaCa7fbKhQKyuVy+tWvfqV6va5ut6vj42PL0CO4/e3btyqXy8pms/bU\\n\",\n       \"Y3YGmoQwkWTjGK8hpBWkETcXeXdnZ2f6F//iXzxowf2VoRmTyeTPXS7X+q997E8c//kLSf/lN3/+\\n\",\n       \"zyX9/mQyGUnKuFyuY0mfuVyuc0n+yWTyy2/+3f8h6b+Q9OO/6HuSk+F2u/W3//bf1u3trT755BPN\\n\",\n       \"zMxY0TuObMYKbO6IiiTp5cuXSqfTBkdtbW3J7XZrfX1d2WxWKysrVgQECTM1NaUXL16YK3p2dlbr\\n\",\n       \"6+tyuVz63ve+Z07qbDardDptemVK0t1uty4vL+X3+xWPx00HDGGztrZmdRDr6+sqFApKJpOmiSYs\\n\",\n       \"MhKJmN4ZTUetVjNtNiTMYDCwSAG+z87Ojt68eaO9vT35fD49efLE6HRMrLu7u5aGtLy8bPkZ0PdA\\n\",\n       \"gCSyrq6u2rgBMrO1taVSqWQpqM4O84ODg49ec/8xobm/K+n3v/lzStLPHX+Xk7QsafTNn7ny33z8\\n\",\n       \"L7xKpZLi8biq1aqy2ayWl5dt10BRRwALLxyULimVKMygYqvVqrmbCVlBylgulw16IkR7NBpZeAyH\\n\",\n       \"QEYdSRYUiPMEkiOVShl7SS6Fs5EJRIGiHKfAiTkVbUomkzE9BEKnWCxmjhc0Iq1Wy0rn+VmI/4WG\\n\",\n       \"50xQq9UsTgsChQMfB1IUipgPoLqLxaKp96DrV1dXLYwccdhDYDnpP9IB0OVy/a6k68lk8s/+fX5d\\n\",\n       \"ZmMyKmDv8PeBBIB4rK2tWRMqM7XT/+dyubS6umrWH1hEqnrpsuOkz1wsyYJVoJ8hO9xut9UzoMxz\\n\",\n       \"u92q1WpaWVkxFABIb2lpycRBPNr9fr85OjhcohumgdUZASbdVRlz6MTU68y1SCaTFk0g3ek0wKoZ\\n\",\n       \"gYrFou2yzN5g9JgIQqGQotGostmsHRIp+OEACyoTiUQMPRmNRtrY2HjQ+/8ffGd2uVz/raTfkfSf\\n\",\n       \"OD6cl5R2/PeK7nbk/Dd/dn78L429+dWvfmVU72/8xm/cK14HnyWgpVqtyufzaWNjwwJi0D4sLy9r\\n\",\n       \"b29PP/3pT03WiZLs5uZGa2tryuVy1lZ1eHho1iLGi263q4ODAz179kyxWMyYRKA3dA8o2UhGcrJ8\\n\",\n       \"CNpjsdg92aXb7TYTKZgxBzev12s4Ljg6yAiQJLsz4iYSTImgJXv55cuXqtfrWlpaMhQim81am20w\\n\",\n       \"GDTtNYJ7SZbFQUg7cQwgMn6/X6PRSB6PR7u7u3rz5o1mZmZUKpUetLb+gy7mbw5v/72k355MJs42\\n\",\n       \"lv9b0j9zuVz/o+7GiB1Jv5xMJhOXy9V2uVyfSfqlpP9G0u/9ZV//t37rt6w+rVQqWTE7kkcyjQuF\\n\",\n       
\"ggUJQqWS7xAMBq3sxuVyqVgs2mGNKKpisWh0Nrsr6jBQhYuLCy0vL1uaJo9rbFGlUslkk91u10LS\\n\",\n       \"mX3b7baJcDjE0sXNzgdWWyqVrJuEeje+LwHmCwsLdhAEDkNgDxsYiURMK720tKS3b98qlUopm83a\\n\",\n       \"Tt3r9Ux8RSQYSI8ku+Elmd2Lf9vpdLS8vKzT01MTds3MzFjp5uzs7INm5r9KaO73JX0u6YnL5cq6\\n\",\n       \"XK6/K+l/luST9Ccul+srl8v1TyRpMpm8l/R/Snov6Y8k/b3Jt+bEvyfpf5V0JOl4Mpn8hYc/SRbR\\n\",\n       \"yu6Gg2IwGGg0Ghn9y04pyYoaoamRTALoo0nmoAhVzWjCIubR7XLdNU8lEgl7jDvz6njsY/ik4RWB\\n\",\n       \"Ok4M0uWBtnCNkNEhydhDyBooZp4+y8vLJoOdn583+prdEuaQ187j8RgjCgHFrAuawgJFF47UFEaT\\n\",\n       \"QzgYu/Stkdbj8RgsB1bOkwmDwYPW3GMytP7u7/6uUdIEaM/Ozlrw+Oeff67NzU1ls1nF43Elk0n9\\n\",\n       \"wR/8gT777DObBYmJhREDCy0UCiZ+j8fjljdM2QzwGXM5Yw3wFG+oM2fO7Xbbjl0qlTSZTMwQSnYb\\n\",\n       \"bBq/C7oTXNpQ0pKMweTgScJTqVS6V1fMmBGNRnV+fq5QKGQ4MomkHJ4jkYjy+byNDPl8XqlUSt1u\\n\",\n       \"V8vLy2q32ybCkmTudzYEbgTC1RnH8P+dnZ1ZeVChUNA//af/9KMNrY+KAaSiIJlM6sc//rFSqZQO\\n\",\n       \"Dg50cXGhP/mTP1G/39ebN2/05s0bjcdj/f7v/75Ze7DznJ6e6l/+y3+parWqn//852q1Wnr37p1V\\n\",\n       \"O+CHk2SHsVAoZITK2tqaIQHhcFiBQMAe04jrA4GADg8P1e/3VSqVdHBwYKMAMbK1Ws28f+RwkMo5\\n\",\n       \"NTWlX/ziF5Yf1+v19OWXXxrN/MUXX1hgeKlUUi6XMwqdw5b0bTffmzdvVK/XVSgUVKvVlM/n5XK5\\n\",\n       \"dHx8rC+++MIIHjDvarVqss2bmxtdXFwYG0igOOVBbAbFYlHSnU3q8PBQs7Ozhq93u11lMhl99dVX\\n\",\n       \"D3r/H9ViJgCl3+/ryZMnqtfrFsZHd3QoFNLu7q6ur6/15MkTo1LxqE1PT1vJeiKRMDwYaxNs1fT0\\n\",\n       \"tEkaLy4u5PP5lE6nLY+OVM5CoWBZ0fQJokXG0r++vm7QVrfb1Wg0sogxRobp6Wmtr6/r8vJSXq9X\\n\",\n       \"qVRKt7e31gOysrKig4MD3dzc6OXLl7YLLyws6MWLFzZz4wFEVVir1Sz9aX193Zg9SYYrEweAXgVF\\n\",\n       \"H/G3CI56vZ7djLwfyGiXl5etgWp9fd0MrrCyZII85HpUi5kTNwA8eQ9OJwnQG91zSEFRlzGSSN86\\n\",\n       \"k3FUIIqJRCKG1YJRS7L6taurK21tbanT6SgQCFiSfiAQMNE+nXjT09NGYaOqYwQBZnS57nrAsUnR\\n\",\n       \"XUgMGJQ0Og6n3xBcHdMAX5sx6vnz5yYLZTcFLw8Ggxb2ArkB9Mesz8+NrgP1IU8t4monk4lCoZBS\\n\",\n       \"qZS5W9goPB6PgsHggxfzo9Izx2IxE6RfXl4qFospk8koHA4rm80qFArp+PhY/X5fS0tLNsNho5Jk\\n\",\n       
\"GRE8sj0ej66vrxUMBlWpVNRsNhWLxTQej42Fe//+/b3uEyrXvF6vaXj5u6mpKZVKJRWLRQukubq6\\n\",\n       \"UiKR0NnZmZW2I5EcDof3lHfFYlErKyva39/X8+fPTaiPl7BQKJgznfkdzTF6Ehq3pqfvmlLb7bZp\\n\",\n       \"kU9PT+3gR+D4YDBQtVq1kWlhYcH8egiIIJdOT08N2qtUKmaOKJVK2tzcVLvdVqlUuhesjtPnu3gu\\n\",\n       \"x4VMkkwLshw4WNFfMjs7q3Q6rePjY3uss0uTtUzyJocw8h1wMFOV0G63bUTx+XxaXFy02K5gMGiP\\n\",\n       \"dnZ3AgnxDaKrAAGBoCGDAocG0bh+v99y6cjUIHOaoBrIDvTb/A5zc3OGcqTTadMo82QoFosKBoNG\\n\",\n       \"vpCvwe5/dXVltWtQ5mg9oPE5CLOjEwFMljQ9jKA76KVvb2+/W8zOy1lsg4yTiFpiVSXZYxBLEIgE\\n\",\n       \"lDMWfhan9K3kE3cy8zM1ZxRAokNGXgnNzfzqtDEBR8EsAmexWLFg8W+RdbK4xuOxQYXOMcVJjDBq\\n\",\n       \"OGFLYDUYRIJvwHrRqGBqIJODjDk0JYw9kmyBsykgRWWkgIUFeYGlhfzhd3nQ+/+gz/5rdlGzi4YX\\n\",\n       \"JRn4JzkOzJEQDGQIl8tlDYdDbW5umgwSWWOpVLKdfm5uTsFg0AISmc1RzzUaDWP0isWiOS7y+bwx\\n\",\n       \"YdL9EBVy31hwpH/ys7EIcKrAoKFrmJmZ0eHhoQaDgXw+n0qlkmKxmBXgwM6RPQdjSLkn4whxvaje\\n\",\n       \"pqamVCwWbd5GU4K2udPpGAGERoUAm1wuZ/S8dOcE4hBZq9V0fX2tXC5ndDkHx4+9HtViTiQS1uhE\\n\",\n       \"ChHJQ3SXOE/XqLXw9EUiEWMGEb4TYsIuWavVTL/ATIuoCMEPTVIul8t2ymAwqLW1Nbu5YO/Ynckz\\n\",\n       \"ZidmROEg6/V6tbq6KpfLpXA4bHUPOE3G47EVVUp3RgUcNjjHMbtCX8PgQaKggeYQOhgMjKxBGgAd\\n\",\n       \"vbKyYj/Xy5cvbf4lnsHn85keptVqmeMbNR8JUqQ31Wo1PXny5EHv/6NazOPxWJlMxmYyv99vmXMQ\\n\",\n       \"FdLd3Eu/39bWlonv0TVgNE0kEqpUKuae9nq9SiaTNkM70/HRcHi9Xi0tLVmGBiHk7HbOvAlQFhYA\\n\",\n       \"kkhgNeeiDgaDpv0YjUbm1pBkeRVouREO8XOQWkrcb6fTMRJjMBjY7rmzs6PRaGQMIAdkn89nLhny\\n\",\n       \"QQhSREXIARhWlKfNwsKC2bcQOZHMjxab1x5U6GOvR7WYnTnCHOzQ3rKLcfLv9XrK5/NWeSvJhDg0\\n\",\n       \"kubzeXk8HiuShD3DuUFkLoufpiocFhyOOASCmCQSCUMhwuGwhZTDCCL3JFmeGbrT6SgSicjr9erw\\n\",\n       \"8NBw65ubG6PMJRn+DNTm9/uVSqXMGUIeBv4/YmxZ8HNzc/b3mFoZ3ahCY25HVxKPxy1wB10L5wC/\\n\",\n       \"369EIqGVlRWNRiNVq1VzuyNaImnqIdejWswcYG5ubizIkN2ERYFM1OW6q4pAi4B+gbqGUChkownz\\n\",\n       \"KYsKurpYLBpFjQkVySmRXcy/GE6vrq50dnZmRAGsIgc5xpRoNKqNjQ01Gg37POZ+dkQSmtCMsBMy\\n\",\n       
\"OqyurtpOjisGBhOTKq8XcQC4P/AK4qTpdrva3t62UBhcMtxA0PugGsFg0MYKKopLpZLcbrcWFxeV\\n\",\n       \"SCQMD+dmeSia8ahwZtRl7KrQqOFw2BqQMGmGw2G9efNG5XLZ6Oj/l703iW00T9P8HmoXJVIiRZEi\\n\",\n       \"qV2KiIzIyD2nq9CDBvpiw30a32wffDB8m4MvBgzY1wF8NGD40BfDA8xlAJ8MH+zxwJgFVWigcmq6\\n\",\n       \"KisyMiO0UiLFfZcoiRIp+qD8PfmpptszCLnHM0J9QCIztVL8/t///77P+yzlclkvXrxQoVDQ/Py8\\n\",\n       \"Tk5OHDXMzgrvGC0hQwAGE+xmhOUsLi56pMvDABYtyRyMRCJhoevs7KzK5bLFotiGzc/P6+zsTLu7\\n\",\n       \"u1ZqYK8AFHl5ealCoWD1Nr8rk8k41mFiYkKNRsPkn3q9rkQi4TiKpaUljY+Pq1wu2zdjYmLCu/3K\\n\",\n       \"yoqHIggaaJTByA8PD7W7u2sz9nq97j6iVCoZvgueZD8p5T7selI7M40MkibpfrcG06UejMfjtodl\\n\",\n       \"58Gyip2SHbbX6xnCgojebDat8UN1Qpe+vLysy8tLvXz58gGRqN/vO0AHGKxer5sQX6vVPCkEFSB+\\n\",\n       \"jF0YZQc7ZxA5wMZAuh9tLyws2DGoVqs5UJ4jHfQA6ma73bbd2NnZmRGS29tbW3ZR14O40ADe3Nx4\\n\",\n       \"sBNkGA4GA1UqFcN2xWLR7xvlBfkpiCQecz2pnZlGDFvaeDxuU5WJiQmPUIkbY6GAXqTTaRUKBcXj\\n\",\n       \"cX388cfqdDo2+wuFQrZwDYfDnohtbGzo+PhYU1NT2t7e9i6Lxk6SHX6INeN3ElIDAQj+CJNCGkwY\\n\",\n       \"asik+FzQqDyfzztWAjTi66+/tnp8fX3dMRHHx8cPPKdJoZqcnFSlUlE2e69MI+EWjd/FxYX5FLe3\\n\",\n       \"t2YC3t7e6tmzZw80mBsbGy7XwPVBRxAToCGkgQXC+9DrSe3M/X5fb968US6Xc5PX7/d1cXHhiIdG\\n\",\n       \"o2GC0OLiov3fIPpIUrPZ1NnZmW2oMPWmFKnVat6hjo+PH5DqoVZiOoNrz3A4VL1e9y7H8czYmQkY\\n\",\n       \"sn/gtEql4mP67u5Op6enJtNTxxOoORgMrMKen59XtVr1xI3EKRpLbA6AIAn9xCByZmZGBwcH5kpT\\n\",\n       \"nkCBheRPuRU0LG+1WpZN1Wo1XVxcPNjt7+7u1Gq1VC6X9fbtW01NTRm/f8z1pBbz3NycXr9+rUQi\\n\",\n       \"YbiLXS9ot4XnGjcUqwEYbXhZYDoOUYadjWwTJmw4HOEOCiuNySDQHoMLnPsh1jN2Bm/e39+33wV1\\n\",\n       \"ftAOdn5+3gaGQesxSa7fiUoD1UmlUh7t7+7u2jar0WhYOQIGDwrDBA9iP68fe7JoNOqUKyBNsGp6\\n\",\n       \"F0hS4OIseiZ/6+vr9gZ5jMpEemKLmXqQWAU67aC8iAVOh12pVFzXQoqBa8DYlv+HrwBmDaoB7txu\\n\",\n       \"t01wl6T9/X0LaFutljHmfD5vsnuv17M5DMqPWq1mBGVubs67NJ4bwYcLTHc0GrkUQVgbjUZ1c3Oj\\n\",\n       \"Uqnkr6O+pTGliQyHw6pUKnb1pLaHrFQoFDw1pTcgonlpacmuR91u105KnI5TU1NqtVoWFDB04sHr\\n\",\n       
\"9/s21HnM9aQW883NjW8wzRq4MCSZVqvlWF285wiKZDqH29HExITOzs5cFw6HQ5sMgrt2u10nNyUS\\n\",\n       \"CZPp8edArIofMsOScDis6elpe1cgdyLRFRchfOBw2URYy2KT5IcLx/3z83OfCBMTE36Y4WmQdwLm\\n\",\n       \"CyZM47aysmJJFrszTS6LsdlsuicgcAivaWLp4vG4rRXga2ACA2RIk51IJLypfOj1pBYzDdH09LSn\\n\",\n       \"Sevr616gcHCDurOlpSWHVUqyiR/TKaT4DEqwBQiaDaITjEajzjeBiMP3Az3hSA+7DxsBYEQmjfwu\\n\",\n       \"ToDZ2VnH/7I7go+jwGZnY0weiUQcQsQED74Jjvd8nAd5ZmbGcjGGRJiIg8VzCsD5CIfD9rjGdiCb\\n\",\n       \"zer29tbQJR554PoIgWmOsWt4zPWkFjNvEFAcWjRG10ydGN9ubW2Z9A6sRRY2qAE3IxQKuUHZ3Nw0\\n\",\n       \"V4PuHa7wYDBwHYjzJfwJfjdcYW4uDD+8jSGyB0MimTRWq1VtbGzYqZTaG27J7OyszSLj8biWlpa0\\n\",\n       \"trZmuRVOQwyN4Elks1nrB6GgJpNJw5ZgzzxkTAPRkF5fX3tH39jYUKVSMcGL6Sjj/mg0+iDtFhNI\\n\",\n       \"NpQPvZ7UYkaKD/US2wHpnrhPt87ErNvtGvynDCHQkThidhoSXJH+M66+vb11WhLHOLxqTgQaPZQa\\n\",\n       \"+EPwc/k6dsi7uzu7DDGGZ0GDzoBrY0pIOQP8lUgkzL3ANy8Uuo+BCIfDymazDxybODnILJTkJCoo\\n\",\n       \"pPw39ma1Ws2KbiaUkKPw+RsOh1bAwANnSsgABTbhp59++qj7/6QWM+NoGsGpqSmPpKlzEWeCxXJE\\n\",\n       \"gwPPzs56QbBQZ2ZmrEiW5N2TkgDfDRyEGPeygGOxmPPzcPUJRolJ8uKnbtze3nZZgNIllUppZ2dH\\n\",\n       \"8XhcpVLJDRz8aV4XjkyYrkDqAUYLwnih0L1/dafTsUwqiGxQjlG2DIdD+1qjSgfSlOQhDKw5BAF8\\n\",\n       \"LhwOGxEiK5yN5ne/+92j7v+TGpqcnZ05944dFz5DvV73mz47O6tisejmjbw74hAODw/daNVqNZ2d\\n\",\n       \"nblMOTo60ubmpmU/uI12Oh17Qmxtbbn+vLq6UqFQsKEhlriMsefn59Vuty1+xeNufn5e+XzeHhnU\\n\",\n       \"ruFw2P52YNfn5+f+WzY2NswZAfcmsbXX62l5eVmFwr1934sXLxx/NhqNVCwWlcvl/PP/xb/4F/r8\\n\",\n       \"888djDk2NqbT01Pt7u76fUVsAFsuk8moUChob29Pr1+/VqlUcvNMuVGpVLS0tKRWq2Uvjkql4lLv\\n\",\n       \"Q68ntTPjesmUj1xn4hlCoZDtYBlns2hZJM1m07sLAthoNOpuGx5H0FMDDgURYpgWSlImk7FLJ8aL\\n\",\n       \"lBZoCXHdX1tbs5woeGPZwWAA0rg1Gg1TUIfDoT7//HPn/DGIACLD5w5jG7B3yD4sWMb+GCJeX1/b\\n\",\n       \"CRSUItjIAUNOTEwom82aLCX9xPfG9iyZTLoUopYmGFSSIzg+9HpSi7nT6fhI46hl12U0vLq6alus\\n\",\n       \"0Whk53r0bvF4XOl02kcsOX/tdtsU0VQqZbgMsxQaH/R03FBIPEF4LJVK2RkIBAQHJh4ylM5LS0sP\\n\",\n       
\"HJfgFt/d3Wl1ddUWV5ubmw67R8mSzWb17Nkzu/FDj6Vpm5+fN6V0a2vLpdba2ppDNWG9ZTKZBxKq\\n\",\n       \"y8tLy82C0i5kXcCZ2WzWxjfD4dC+IlgvzM3NGS7c2dl51P1/UouZHRi1RDCmYWtry8SfmZkZd/xI\\n\",\n       \"nhKJhBse0qFAK1is8AyQZ2WzWTO+2A0B/5m4wcfAgzkozwKvhTIqyTpDjv5YLKb19XWrmEOhkCFF\\n\",\n       \"ZGGQn6hzg0qQer2uarWqfD7vk4VkJ0orHvylpSULA2j0+B4oovBGyGuhLuZv4H2EHw2zEKTk9vZW\\n\",\n       \"a2vLh+SrAAAgAElEQVRr2t3dVSaT0Q8//ODhye7u7qPu/5NazGSKBGEggmnwDgYdGAwG2tnZ0ccf\\n\",\n       \"f2yL1cFgYBLScDjUzs6Out2uO3OaIzDkYLkQj8cde4ZcaHp62l27JEN4c3NzFnkiXAUGhLWHooMH\\n\",\n       \"hN+DopkgeNAEgtqx2mURLi4umodM00qtzYOP1o9TDVydciv4wMJIDD688KGDDeFwODTXGk4Isi5U\\n\",\n       \"Kby3CwsLJjc95npSixmnH4jfkIdALUAGTk9PNTc3Z34DA5WlpSWPp+PxuI6Pj7WysqLBYOBsPoJ+\\n\",\n       \"uOGkU0FYmp+ft+snNxeoimMVWIvas91um6jPTkdZgRQJVXg0GtXq6qrd8AeDgS158XLGwBtjxm63\\n\",\n       \"q88++8zZfCxy+gRI80CN5+fnfj8ZQgXLk2azaT0lpwR+0uDTjPWBAektqKnHxsaUTCaVTqcVjUbV\\n\",\n       \"6XTszvSh15NCM0qlkgWcLFqST0EvMLvGq+L4+Fizs7M2DCSjo1AoOHgGtQjoAKoIJml4Gg8GA52c\\n\",\n       \"nGhtbc3jcgxOqLdhx01OTnqELN2rv+nyR6ORlpaWVCwWLYliSNHr9VQul/0AQIrq9/v63e9+p62t\\n\",\n       \"LRP5QXWmp6dVLBY1Go10cnKidrttZcr+/r752/A+QFfq9bpisZgd9GOxmE1j2u22TRJ5nzF2rFar\\n\",\n       \"psIWi0VrGclIYZFjPXx+fq5YLGbBwodeT2ox4/7OwmFUHYvFHoD6LFDQg0wmY10bKAU8DJrFmZkZ\\n\",\n       \"Y629Xk9bW1vq9XqOGEM0S1g8N3dra0t7e3tOj1pdXdX4+LhyuZzr9nq9rlQq5ZtPOUPdT6YJ5VNw\\n\",\n       \"ZA8mXCgU9POf/9wLEsUHHOFwOKxer+fBENPDjY0Ntdtt/33s9uDk09PT2t3dVb1e98fA7kFEMK/Z\\n\",\n       \"2NhQt9t9IGplMokT093dnXq9nnnbQdEEviYfej2pMuP09FTpdFrS/U7HgAQmGvo6sM3r62vHpJ2d\\n\",\n       \"nXmHBY9GCRJUaLdaLT80hULBi5mSAlPzfD5vrwy8L3DWJJ6C6SDkHRh5MPCGw6FrS2xxiU0gFIhB\\n\",\n       \"xuLioo6Ojsw7QXUdFLnCyhsOh/bBmJmZMf2VRAEefuipvI/kw7DjM1y6uLiwsTmUUTBxYpn5evoG\\n\",\n       \"8lfgOY9GI/3www+Puv9PajGn02n1+30tLCzYXYjun1qQpk2S3r17Z1ZXNpu1ZAiNGtkjNHGor/f2\\n\",\n       \"9ozjFgoFN569Xk8bGxsWnM7NzSmRSKhUKrm2DdoPIDQ9Pj424yyRSGhlZcXlBB5w4+PjOj4+tuv+\\n\",\n       
\"5OSkI4klmWvd7/eVy+UUDod1enqq6+trFQoFL3rcQKempoy5o/CuVCqKxWIqlUpuCIHVIN3T9EGY\\n\",\n       \"wlCSMbskswShgzJtpZZnNH51daVYLGYPvz9wMwIXuXszMzNmbSHIzGQyxm0hDT179sxfDwMuHA5r\\n\",\n       \"Y2ND29vbD7JQsJeamZlROp3W9PS01tfXlc1mjUlnMhlzP2DT9Xo9ZbNZw4QsgKAjP4gLC50HaPPH\\n\",\n       \"cPdkMukSCNNHyENYyoJhLywsaG1tTSsrK/roo4+cFgBHYnl5WcPh0G5MPGDz8/N69uyZyVh40lHO\\n\",\n       \"8NqICoZRxwh8c3PTgfAQmIDwQJbgxsBKfPHiha6vr202/vz580fd/ydVM6O4gLcQDocdp5bP51Wt\\n\",\n       \"VhWJRFQoFOxnfHd3p1qt5mDzqakpvX//3i75HJ+vXr2yFRUmJldXV9rZ2XEzWC6X9dVXX2l6elr5\\n\",\n       \"fN5JTUQVQ0LCzX5nZ0f1et1DHRpKOMSorJF/oStsNBr6/vvvtbm5qdFo5L+B+ndvb88EfUlefCS7\\n\",\n       \"0rRFIhG1223j5ijMT09Ptb29rd/+9rfa2NhwNIYkB7f3+32HBd3d3enk5MQTTbjVmEySG7O7u+vd\\n\",\n       \"Hn4Mwt5QKKR//I//8aPu/5PamZEtzczMWG9HPToajbS8vGzjFaAiSgKaHVAD/Jw7nY42NzedVBqN\\n\",\n       \"Rm3R1Wg0HAXBtAy3e0oTBgo0YxzPxK5RI1NXAsuBRyNiTSQS5j0z6AFqk2SjchQfnASw/hjBd7td\\n\",\n       \"ux7hkYHkCpiNciXoOw0zDu4xzbB0P/YHzYEHTiP5+yruTqdjr2qgPBrotbW1v/rG/hteT2oxB7to\\n\",\n       \"RshEliWTSU1N3aeswlqDN8HORQJTPB73EYn5+Pz8vLLZrAaDgady29vbthaAXE5TRhNZrVa1srLi\\n\",\n       \"8oYHS5K5xOl02ho7poPLy8t+6BhybG9vG8aCB8zu+9FHH/nhxYMDpt3Lly9NLaWhq1arSqfTfm3x\\n\",\n       \"eFzJZFK1Wk2bm5uOpWAKGo/HjRdfXl56ETI+X1hY8AMWj8d1eHhoZiDsPTz/cEzi9a+ururVq1f/\\n\",\n       \"7gbB//9xMTomiJwJG7IgYsoYZBBAI8lDDmLLJNnJEky4VqvZ9RPaJDznoHdavV7X8vKyCUGlUknN\\n\",\n       \"ZtO7M0MHShtomdhxBX2i2VHhMxD4EzSGQQVCvY1VV7PZVKlUUqlUcuMFrRQcGv53sVh0UsDl5aX1\\n\",\n       \"hpQf4MWEADGYYsTOGJ8h1OaPUc6w8qamplymtVotS8wYZbfbbeVyuUfd/ye1mOFhBB3jkeGjWIbg\\n\",\n       \"AlR1eHiocrls2Twj3qArEh+jocPBHsYYxztWrclk0uNvOnyidrGsKhQKJuYwSKE55GuJFD45ObGT\\n\",\n       \"Phg52SmUCkCDGJoHd01YacGwScoT7G3n5uZ8KmBjxpDo5ubGVg1YFRweHvq9ZuCCypzQHiRVCFYp\\n\",\n       \"+XgNTBoJpX+so9GTagAjkYh1gKurq5qYmNCzZ888+4doxDEMkWZzc9OqbI6/TCajq6srRaNRp7lK\\n\",\n       \"8i4Inj05OWkuBAlTlBuhUMhk+WCGH6JVcv/W1tbMT4YmyTBhNBppZWXFWYAc41h9URf3ej0brc/P\\n\",\n       
\"z3vKiXvRaDSykz3EJz4H7RQi1cXFha3FEomEms2ma2yQDvwvglnilGCUbZg9kn0oyf3FcDjU1taW\\n\",\n       \"Dg8PHyQJPOZ6UjtztVp14/b+/Xu7AdGoIU/K5XLOKGF4wi5B1h0jYngLiFhZjMPhULlczmUCudJj\\n\",\n       \"Y2MPcF1cRWkmGWFjLzs2NqZisWiaKfg4CnNKEEl22Gw0GkYrKF1o0sbHx1WtVlWpVCy1gvfMqB6T\\n\",\n       \"cr63Wq36yGfUjfUBwxskVahlKG/QJoLDM/bPZDI6Pz9XJBJ5YJxer9dtOFOtVj3iH41Gj+ZmPKnF\\n\",\n       \"DHdhfHzc0WnAdRxtCwsL2tzcNJZbLpeNCSMyZayKFQBkdepRdj3MEGnKGCuPj4/b6w50hHp3ZmZG\\n\",\n       \"1WrVmSWMwLFIgGO9urrq5isUCjloE750UIiLqePe3p4kmbMNfgsJKEhRBW2hfl9dXbXEiQeb0Tml\\n\",\n       \"D2UDfiO4m4LEnJ+fm7ctyWqXu7s7ZbNZtdttP5jBhAOmin/YmQMX2rulpSUfjeyCxIMR7jg2NmZ9\\n\",\n       \"HrgpolRYcBgTUnem02nt7u46KxD93YsXL8xBfvnypebn57W/v++dEqI9nOSvv/5ak5OT5iODc29t\\n\",\n       \"bWl7e1uJROKBQz9eFvF4XMvLy+Zmw97DVD2TybhkYOHi67a2tub3Bg8+BiTZbNaKksXFRW1sbJjR\\n\",\n       \"B/F+e3tb5XLZ9Xg0GlUmk3G5tru7q+3tbRtTQuQClbm4uNCXX35pduLd3Z02Nzf17NkzpdNpv4+P\\n\",\n       \"uZ5Uzdzr9ZwrIt0LKNltiQdDwIldFFZTBGKORqMHnz89PfXnYrGYv2ZxcdGaQ1hx8/PzqlQqfhDY\\n\",\n       \"bWiWKBkoXSYmJixobbfburi4MMuvVqtpYWHBQlP82iKRiBqNhtrtthEV/JPZ+fj7g1azNLmULzSf\\n\",\n       \"Jycn7jUqlYqbvYWFBUfI9Xo91Wo1vXjxQnt7ey4lsDZgQCTJeDS5isCQg8FAR0dH5qCA7zPyTiQS\\n\",\n       \"Ojs7e9T9f1I7M1O5Vqv1IP4LkjsEI5wtQS+YjKEeCdoUhMNhdTodO3bCYYZhtr297RgDcu+Wl5fV\\n\",\n       \"aDS86Bit4ztxeXmpYrH4oKlMJBIP5FmZTMYNFrUtzDcsFWjQeO00c5gd4kcBDIhLKqXR0tKStra2\\n\",\n       \"NBqNHONGGQPrjxE5DxT4fDwed63LAzYxMeFgedThwI+UfIy1EeyiqAkmu37o9aQWcyqVslJEuseJ\\n\",\n       \"OdLga0QiEW1tbdlDglo3Go1qeXnZiwnzE9w2adhQkfD9+/v7Nlth+CHJQxuMBBnYwMGIRCLe+ajp\\n\",\n       \"Ly8vtbKyYhMZ6V4QC3UV9IG/dTAYmKBDnRoKhfTxxx/btBHYEF4Fi35paUm9Xs9cDxCgbDZrqiee\\n\",\n       \"ckB5jPJbrZYNcUBriIBg0IIqZmpqSolEwmUOpc/ExIRevXplstHU1NQfAnqCF2SY6+trnZycaGNj\\n\",\n       \"w1RHQuDz+bx+9atfKRKJ6OjoSJVKxXgto+ZOp2OjwF6vZ7Th7OxM5+fnTpY6PT01uaZcLuvXv/61\\n\",\n       \"g2mYKK6srJgAj+F3tVpVtVpVPB5XoVBQLpczsWd8fNxZKs1m0+mv5+fnRlNarZaD2MlL4b+Hw6H+\\n\",\n       
\"6T/9p6pUKtrb2zOzjqEPKVKUE4gYcrmc3r9/r9PTU719+9bUTeLPsCCA10JjCEJSq9VsME49TgLs\\n\",\n       \"5eWl9vf3HYtRr9c1MTGho6MjIxynp6f6i7/4i0fd/ydVMxeLRfMFvvzyyweGhdhjzc/Pe1dbWVmx\\n\",\n       \"HD5oIYWOD+ehtbU1T9uAm2ZmZtyA7e/va2xsTNvb23YvIv633W5rZ2fHHsbsWkB0kUjEYlD8L4C9\\n\",\n       \"UqnUA487mjqOfIJwsBKjXEBpzTEP6gCygys/tfbt7a3S6bRV1xCYMpmMWYdM9IDfUL7THDP+pi/g\\n\",\n       \"ayASJRIJRz8gbg2iHisrKzo4OHjU/X9SOzPlBESbSCTiGF5CYaampkzlZNHAv6A8WVpaeuDlhuIZ\\n\",\n       \"NyJ2SqAkyodQKKSVlRX7zyHgpEYMBviAmMD1JXoNmI6dnQUZDJIPuoiCyEh6ENWAbq/X62lubs4c\\n\",\n       \"ZUlW11ByQOOkrOB9BPHhtSCWHQ6Hjp+YnJy0IBdEhJBKavhMJuO/A0ejnZ0diwPwruPv+NDrSS1m\\n\",\n       \"SDTlctnHWqvVUr1eV7FYdFb20dGRZmZm9Ktf/UqFQsHDFrr9vb09R6vhMs8xn8/n3WAeHBzo5uZG\\n\",\n       \"79690/n5uWZnZ60kGQ6HPhkYGcNQq1QqajabDsPE/CSYKnt2dmaeB/yParVqZl65XFaxWHR4EFZd\\n\",\n       \"lEXsjJeXl3r37p1Za3hroJwBJTk5OVG/3/dwaHx83AkCEPf5mzGt4QSCjzEYDFStVu0dx7j/9PRU\\n\",\n       \"zWbTrymIS5PFMjEx8Wiz8SdVZgSPUnbf7e1traysqNvt6quvvrKKWpI+/fRTL+zJyUkjFp988oki\\n\",\n       \"kYjdPqPRqJLJpPM+4DXE43ElEglDVevr60okEjY6XFtb09XVlTKZjKGofr+vdDrtgEdcRwlgh4b6\\n\",\n       \"+vVrB72DB7PLMuAhFhnEgsEIxoaM7nO5nE1gcBJdXl52CZZOpxWJRDze5zThxMGylkEQu3gymXQ5\\n\",\n       \"sra2ZjlUOp22NGxzc1OZTMaw38LCgkW8/D6SYl+/fv2oUuNJLWZqMCAmYgfg1WKzGgyvZEq1vb3t\\n\",\n       \"FKlOp2PSC/U0imUWRzweVy6XswL77u5OBwcHdtpfXFzU6emptra2vAPhPoSw9OjoyGIA6I/T09Ma\\n\",\n       \"DAYmr3e7Xa2srNh5dGZmxkpr4hPwocAeiwgHQiURtQbjkfleHiDc+iFojY2NPeBct9ttNZtNdTod\\n\",\n       \"7e7uWk9IFB0lAzXy1dWVlpeX9e7dOydLra2t6eLiwppJyP2tVstw4GOuJ1VmEHrDmyX9xAqDlEOz\\n\",\n       \"xK50dnZmzWCQdhkc69ZqNVWrVfstY/iC3xrYKbsqWSEMAlBy8JAFgxxnZmbs5IMrEaR+xLCNRkMX\\n\",\n       \"FxembEajUev2GO5I0tu3b13OkCRF+VOr1Yxi4CWN6xLvDzzvq6srZ4zQH1Dbh0IhlctlHRwcWE1C\\n\",\n       \"EA/2uYgjoJyiuTw8PLTDPi5R1O1BhuGHXk9qMScSCbXbbZ2enrqUgHIZCoWUz+ddThwdHWlpaUnb\\n\",\n       \"29uSZBok7K1YLObBxPb2tpsvXC2DxzGG4gxSWKRHR0eSfrIGS6fTjheGU8xiQOmC+pvmFcYZIezE\\n\",\n       
\"TKysrBg3Jt96c3NT29vb9prDBxkpGeLTWq1mhqAkDz7gevOeMWzBsYlmOhaLKZ1Oa25uTs1m08gI\\n\",\n       \"DzhBnDx8QHiM9Cm7wMmx9eJefOj1pMqMs7MzcwPAmXHOhDDP7rGysqLT01MVCgV9/fXXkuRy4eTk\\n\",\n       \"xJL4XC7n8gV6aa1WU6fTsUyo3+8rn8/bAoCEK8jux8fHisfj+vbbb7W8vOzMEhqrVqtlSJEmjXRU\\n\",\n       \"JmSLi4uqVqsmDWFSw2Jh52ZBwgbENXRlZcVjZowXh8OhST6dTse0VhYhDDrCgebn51UsFrWwsGBD\\n\",\n       \"SU5AoDbscEFE0P/Nzs7aSSqYPx4Mvnzz5s2j7v+T2pnHx8d1enrqIw+yPMT2arXqBrFQKBjmoq7m\\n\",\n       \"OJ2ZmTFDDgta1Crtdlurq6tu8ra2tlwfcpNAMAiumZycdNIrdFMol9S4FxcX9jumJOKhuL6+Vq1W\\n\",\n       \"s1Po2NiYc0+IfWPhEA1HeYE4NhjGUywWTb8Ewkwmk5JkDSWDD3jNvG5OHdAIal8oACRhUeLUajVv\\n\",\n       \"KCQSUKdzoQ5/rN/ck9qZGYlOTk7qd7/7na1S5+bmtL6+rkgk4vru5z//ufL5vEWhDFBqtZpubm60\\n\",\n       \"urpqYnkul9Pu7q6dfoKZH9PT0/roo4+Uy+U81oVSSk4HCafdbtfiWQwPkVuBvEj3vGzG1gxd6vW6\\n\",\n       \"Li4utL6+7uYTt6Bms6mjoyO9fPnSihpKrvn5eR0dHemLL77Q7Oysstms5f6YshDAA8OOkHqosuDG\\n\",\n       \"lAu8X5FIxDrF9fV149iUPzwEm5ubhgevrq6USqUUj8c1PT2t4+Nj1+1/CLUMXEFXeWAkorzYeTEQ\\n\",\n       \"pwEjE6/T6ajVaimdTiuRSKharbrDDh6N1WrV6aLgsKhTmKC1Wi1r9JgEgiiAQUO8v7m50dLSkmKx\\n\",\n       \"mL755ht1u10f2dTDhUJB3W5XyWRS19fX5hJfX1/7n6mpKeegoA6pVCo+bcrlshqNhnMPcfAPRprl\\n\",\n       \"83lJ8kQSJIPvw7iG4Qc4NtYJ4NN48dFI5vN5+9zxM66vr3V0dKSTkxPNzs7q/Pzcjv4fej2pxYwq\\n\",\n       \"AvM/TAexucJzAl/km5sbpdNpFYtFnZ2d6eLiwg5D7BLX19denDc3N955GQtDRu90Ol5k8XjciaU4\\n\",\n       \"AbGjsbvDU2C0e3l5qU8//dTsuJubG0cNo/5GNxiU+NNoBR30MYgk6+T169fmKjM2r1QqZrMFx+Qw\\n\",\n       \"14AFUchQ58/M3CfbFgoFlxbY3SL0BZXAyByTcgQHeHUMBgM9f/5ct7e3xuwfc4Xgm/77foVCodHf\\n\",\n       \"+3t/z4w2Im07nY7W1taUz+f17Nkz7e/vmzOAjo2FwGCE7n5/f1/Pnz9XsVhUKpVyd399fa2lpSXt\\n\",\n       \"7e0pk8moVqtpbm5Oktx88QBFo1H1ej2tra2p3W4bY+52u1ayIErF544HcnFx0SIDHgoeJBznpfvG\\n\",\n       \"NJ/PO2SSr+E0KBQKevnypVNS37x5Y0SC3ZgouEKhYGd/mk8eVrByONIYKBJRFwqFzHEmiUCSc1ku\\n\",\n       \"Li7sWY3nM2FDWPv++Z//uUaj0QcpW59UzdxsNs2txbEnn89renpa5XJZp6enku538C+++MI3EtRh\\n\",\n       
\"b29P09PT+uGHH/TixYsHsp5Op2PEACNBvOAoYXASRawq3S+0q6srj4tZHCcnJ3r9+rVarZbev39v\\n\",\n       \"ZyTstorFohlrlAIMMgaDgYUBDFmur++zCnmdSLVQdON0j4Ch1+tZmIAYAIrm1dWVPvvsM/3617/W\\n\",\n       \"+vq6RQzo9DKZjMuQubk5S7c4+fgcihaGTvA6EDBcX197N7++vvb9+NDrSZUZcBgikYjS6bQDJhmW\\n\",\n       \"hMNhJZNJbW9vm2iPJ7Ikk4pYMNjfBqX5mIBLcgQvquebmxuLNyEXUTfzM/CFQ/U9OTmpzR/Tq/A9\\n\",\n       \"bjQaSqfTtjdgh2M3Xlxc9AiYWAYWONIvyO6w5xgUofpmV0fahWoaCi0nCLUxQxOQlrGxMfX7fSvR\\n\",\n       \"+X++hixEBL+UF0S2QWBimvnY5k96Yot5eXlZ+Xxe7969Mxnm+PjYZPOjoyPjwxBmgnAZuCfwWC6X\\n\",\n       \"s76Nm48UCKYZllNAVjQzl5eXJiIxgJFkcxZGx6hLUKvQvBL7hhcG3hLValVHR0e2zuK1MuhgVH17\\n\",\n       \"e+uJH8T6q6srR5RxkuAN0u/3PXbnocHABaI/ZRaELjjXjPppbPv9vur1um192X3RD0qyAp4Thbr6\\n\",\n       \"MdeTWsyMctlRsZDCYyKdTmt2dtalABRFJmEoQzgaaXoODw8Npy0uLmp2dtYEc2pN6kUWNNRImGiU\\n\",\n       \"C5JMh2Q3oyaWZOI7BH1w2dnZWTtrMm0EP2dEfnt7q7OzM8ORq6urHu3zgFCjg8ODtExPT2t1ddUo\\n\",\n       \"yu3trc1gjo+PdX197Y0BJ6NqtfrAJ4PhDQFIqE6YGuIlQvAosi6cmP5gaRu40MdFo1GTcsCRMekb\\n\",\n       \"Hx/3LhWM+oIWiUwK6AtJE8ckrpt4wM3Pz/vmEizJlBAcFrYcuxcoADufJJcUUCQpB5A9MbEjUKde\\n\",\n       \"rysSiTgDhTg3/KbhZzAkgliEzIlgeUm25EUniFFMLBZTs9nU4uKihsOhzXLYtZPJpAc4PPyE/QQ1\\n\",\n       \"gEwiKWPYaCA0kQmIGOJDryfVAJLn0W63tbm5acIOY1aOYdw0gbFSqZTNADn+KU0YGsA2k6RcLqcv\\n\",\n       \"v/xS1WpVx8fH2tnZsUUWRKGPPvpIY2NjWl5e1tnZmR2ScBhl4Y2NjdnqgJ8PbEcj+fnnn9sghUZx\\n\",\n       \"ZWXFY2AeMMJ/GEyweOFbkAfOsEWSxQxB8xa4Hufn50qlUqZ8TkxM2MUTlUkkElGxWHwQkzExMaHV\\n\",\n       \"1VXnAFJaMY1EUACDjjLvDxPAwIWTEDxgIsMYZNDYdDqdB5BaoVAwwwu6JdwNRrtwKc7Pz7W5uam9\\n\",\n       \"vT0NBgNtbGzo5OTEnfnNzY0ymYx50nAQ2GF7vZ5WVlZUKpUciEksWi6Xe2BRi9r5m2++sSrmzZs3\\n\",\n       \"SqVSqtVqev78uebn550shY0AJw9Z4UE8nbLj6OhIr1+/liQTs2ZmZvT+/Xul02lPJ7vdrhYWFnR8\\n\",\n       \"fGxIMRwOq9FomGrK68Q4cXJy0hI2Itqq1aoNFxuNhvsGSGDRaFTff//9o+7/k1rMU1NTRie63a47\\n\",\n       \"cnZCTEtACJ49e6Zvv/3WhirD4dAUxuFwqO3tbftiAItR30GW5/fRtUO+oWbGDYlBCUR8yFBbW1se\\n\",\n       
\"uLCTz8zM6M2bN8a7Z2ZmDIlBAqL+r1QqbhwpXZ49e2ZIjtAfVCjEMyB76na7NoFJpVLWFDIpZeEu\\n\",\n       \"LS2p0Wg4I3E0Gnn8zRSQJhT5FSUPjLqtrS1zQlqtlnZ2dtRoNJwUOzY2pn/+z//5B9//J7WYufGT\\n\",\n       \"k5N6/vy5ecVMvoDFrq6utLq6qkKhoK2tLZN2YIhFIhGtra3pu+++89FI84SDPC6ia2trtnulpLm+\\n\",\n       \"vtb29rYdQ4kdg8iPwBSlBiHzg8HggahUuocbQS54GKampmw2uLu7q3a7rUqlYh8N6lM8N3AXQjgK\\n\",\n       \"9LewsGDrXRQvWAHMzs7q+fPnTpcCjej1eq7vz8/PbReAWhvWYq/Xsx4RtIXdHWSEAQpNI/YKH3o9\\n\",\n       \"qQYQpUK327UTESR1tIEXFxdmrUn3xHtgPCIO7u7uTKXkZ05NTVkPCG+Dpo+RM00X/ASOeeA3YCjk\\n\",\n       \"9aAQ3W7XaABNHxM/eCWMw+GAnJ6emrWGsoUHNjhCl2QFdrVatbkLdFcULJxkWOiGQiFzoEulks7O\\n\",\n       \"zlzbA1/ynqFDvL29NU4O1gxjj3KLJAGijnlN/X7/DzmAwQszcGRHGFtDjcT+lYgESC9YSSWTyQck\\n\",\n       \"oEgk4nFyrVYzKoArKDv9wsKCer2eSqWSmyJUL8iyKHWC0BlxCjc3N/ZbZsrY7/dNOKIWZ3Gw2+E3\\n\",\n       \"jXs+i7NSqbg8IAxIup9GFotFc1fAskFY+NuBznK5nJUmLFyYhLFYzH7VIBQrKyvGlGu1mpvMarWq\\n\",\n       \"q6srMwiZdBYKBZ86DKcecz2pxby0tOSbvbm56eYH4g55fMPh0CoMOn8GFBcXF673grsY1gFBZUcQ\\n\",\n       \"62WBTUxM6OOPP9Zvf/tbB6KDX/N9QVgNNhzHNgsJGwOGHvxdwGJ4Xuzs7CiXy3mEvPljzAW7J6cB\\n\",\n       \"431MY7BBWF5ediZfIpFQq9XS8vKyHZhub2/NQwFjxyNvMBhobW1NjUbDmwjI0OzsrJLJpOttoDpS\\n\",\n       \"u1KplJlysVhMqVRK1Wr1Uff/SS3m4XDoHemXv/yl/uRP/kSHh4cKh8PK5XIPvOX+5E/+RL/4xS9c\\n\",\n       \"RuB42e/39c033+iP//iPdXh4qJ2dHVsGYF8gyZ7GOzs7zsFuNpv67LPPdHh46PF1IpHQ999//2Aw\\n\",\n       \"Mjc3px9++MHZHgRWMnLv9/s6ODgwXAh/+pe//KV+9rOfaTgc6vT0VJlMxlq7/f19ff7556rX63rz\\n\",\n       \"5o3TsW5ubnR4eKg/+7M/e3BSXVxc6ODgwEoXoLLBYKDDw0P96Z/+qfb39810m5ub08HBgT0/MDsk\\n\",\n       \"gFO6JyuVSiWNRiO/zyTmFgoF7e7uOuuFxhGeytjY2KPRjCdVMxPIAy2TuhW8mRIDGRTQkCTzJthd\\n\",\n       \"2YEl+SiGnwAJnZ/NRROEUaMknwT8TkodMGkGFpOTk8Zc+TpOkuDXUEPDOaYsYBIpyb0CvQCMPl4j\\n\",\n       \"P3NiYsJupMCIvGa+hr+D30lCV3A0Lckfp2zi74UiwP3gNQZptLxG2Hgfej2pnZlmhPq02+1qc3NT\\n\",\n       \"4+PjnlIFPTA+++wzlUolT7iIffjZz35mvJrygZEwqa4rKysPFN/cJMSm1L4cvUB6PCSYFRLZAFxW\\n\",\n       
\"q9UsosUtlAki5QbWYhzVi4uLuru7U7FY1PLysnZ3dz3pY6HiBUcDDBei2WwqmUzaHw8TSaIqss3d\\n\",\n       \"TJUAACAASURBVNmsSqWS+Rk0fsjEbm9vTW4iuWt+fl7Hx8c+GaC6AnsCWcLC297e1mAw0N/+23/7\\n\",\n       \"USE9T2pnJguEGpB/BxlZKEbgXaRSKVtLsYMxOcNsheYE9hej716v57Ev2Gu5XFa1WvXPZLeCWokr\\n\",\n       \"EbRSdkGwbXR+lUrFVElG4UiugMqwsg1i471ezyIDcGdJdvPk9Go0GpqZmfF0j4xCILWJiQlnhgPv\\n\",\n       \"URbNzs6aHCXJzlFwL0BSkE7ROGIHjM0wHHLQmsfuzE9qMW9ubhpPRYoEngnLjboYvJTFhdWs9NMR\\n\",\n       \"D3KBfzK0ToYJ3KhWq2ULg7W1NSUSCS0sLGh1ddWNJCPnyclJLxKyQNDYseviXIRmES4IWDjZehD4\\n\",\n       \"GctT266srDjCeDgcuinjdfF9nCpM4SAwscPShGKIKMk+diA4sVjMFr2QpsgEDH5tNpt1GQW+DsZ8\\n\",\n       \"fn5uEexjrie1mKnJwJQxLCEXsNFoqNvtql6vm6eLNAhYCp0fOjU0cCipcfeRZAUzwZaFQsE8CEa7\\n\",\n       \"DEsWFhYeHOM0SZDVQQYYXiCuZYGDBTM6hgeBMhsyvCQ7CpE9eHx87L+LiRyLB14E9FB2VngpPEwE\\n\",\n       \"6pTLZW8I8EDAmOfn511KsXOXSiVj1jDsrq6uzImm1gZ3fsz1pBYzYk92tvPzczcwmKywQ3EzkExB\\n\",\n       \"rL++vlY4HDbyMDU1pY2NDbO7cI9nYsfN5zhmOCHJgwjwaZh8DDtoVnHEr9VqqlQqD8zDsRoAamu1\\n\",\n       \"WuZLBOvwYG4frDXstGZnZ83jhio6HA5NLuLvB6mhyQTaQ2XOyRU0d8HVFBECECjDKHgtmL4QLIp2\\n\",\n       \"Ea8P+ofHXE9qMff7fTvlg0xQq7Go0Zo1Gg3vxKAGtVpN6XRazWbTbLrBYKBCoWDGHQoKjmL4B7Dx\\n\",\n       \"SH1FdXxxcWGhKGYrdPhQUsGpg2GPwREv6mkMYDKZjAqFgur1uutd6aeUWWrZbDarRCJhES0fv76+\\n\",\n       \"diIrGPfi4qJWVlYsb+JnsWvCJuRhgfYKagKygnaQoRGlCg8w2kumnLyvsAkfcz0pNAP+ctBG6tmz\\n\",\n       \"Z/ZdBthPp9PKZrPK5/NuwMbGxrS2tqZQKKTd3V0HvlODQn/c2try+BWUAoroZ599pkajodnZWW1t\\n\",\n       \"bSmdTuvu7s4nBMR4rAComVdXVx+4gDKEwAMPn4vhcKjl5WVzGwifZ+FR6ycSCfth0BRChGfs/erV\\n\",\n       \"K/OdIeHzHiCFSiQSDsREFfPy5UsPWCQZA0eIgA7yiy++8APICYb+kJOHzST4UD/melKLuVwuu4Q4\\n\",\n       \"PDz0tC1IXWSRwg9GkNrr9cwVYBwcNC2EI10oFBSPx3Vzc+PpFVev11Ov13OqEg3Nt99+6yHGYDCw\\n\",\n       \"TW6pVNLc3JyKxaLK5bJ39YmJCf32t791LR6NRnV0dKTRaKTt7W0dHx8bScEuC1uxZDKpcrmsV69e\\n\",\n       \"OWpibm7OYlF203q9rmg0augPRIQSAXd+0Bni28rlsr3rfv7znzuCAswedQlWYiRrYSr+7t07m5jP\\n\",\n       
\"zc25ESeh9jHXkyozCHthaEHjxxHJoiTMnJ0AnjKex5CIer2e7VhXVlY0NjZmXJjdPJPJaGxszHBa\\n\",\n       \"uVz2Tg35aHt72zIpeBc0oSQwsSPCcFtZWfGQplQqaWpqynTRra0t/6yVlRWP1RHYJhIJG0PCc8bJ\\n\",\n       \"iAVLkhVjf5pS+BIMXhKJhHHzer2uTqejZDKpVCqlTqdj4exoNDJ1lLII56Lr6/uMmdvb2wcmlCi5\\n\",\n       \"Jdnc/DHXk1rMZIXgI4E6mhqOWvHs7Ez9fl/7+/tuhlKplDKZjPb393V2dqbb21s3UCxEUI2DgwM3\\n\",\n       \"Ze12+4Fa+auvvvLuNDMzo6WlJRttw3oLh8OuiwuFgs7OzrxL7+zs2NaqXC6bfipJ33zzjeLxuPL5\\n\",\n       \"vNrttur1usuYt2/fGjl5+/atwuGwms2mSqWS3r9/7wWP2eL8/Lw6nY6KxaLTAEBE9vf3FQqFVKvV\\n\",\n       \"PMKfmppyhjgCVkoZGjvek4uLC2WzWe/aWOGCboCy8HO73a5KpZK1mB96PakyAwz16upKf/qnf6rL\\n\",\n       \"y0ulUikvKEmWD93c3Oirr77Sd9995yYJZ3e0bIlEQrOzs5ZgUZdKsgCgXq8rnU4bZoIRJslG4QsL\\n\",\n       \"C57e4bu8trbmRSvJqEe1WtVgMFAkEnH4ejKZVLfb1SeffOJmFZtcBAG7u7sevrx+/VrxeFynp6da\\n\",\n       \"WFjQ+vq61Si8P3jLDQYD7e7uuh4PyrcWFxe9uw+HQ+3s7Oj9+/cWEcAbCdoWsNPCf0aqBa58c3Nj\\n\",\n       \"XSaDHAZUq6urev/+/Qff/ye1M8NTGBsb8zCk1+uZX1utVi2NHx8f18HBgfm7qEbgJNCUgJHOzc15\\n\",\n       \"8bGrTU9Pa2trS6enpw982ihJWGzwgKFsLi8vW0YEP4MygCENKhAMYeAVE5/G7giaIN0/zEFTFngg\\n\",\n       \"2AXQIMNeGwwGLnn4GfQTePSBNqBU4YFh8IT9Ge8Xihegu2DZQSmCvUO1WjV3nGi5x1xPamdmV2FR\\n\",\n       \"TExMKJVKWbIkyXKf8/NzbW9v6+joSJubm27M2u22lpeXlUwmdX5+/iDgcXx8XDs7O47MhWQONbLT\\n\",\n       \"6XgxAkOBw/K9OCQlk0mdnJyY8wCbLBqNGgpcXl42cgIKA4WS188ImITYsbExh7pDxWSHB+kpl8uK\\n\",\n       \"xWI2mlxaWvL7FovF3DMQRElID4oUhklzc3NKp9MPLMWSyaTla/Pz8yqXy0omk/4bgkQjmmAQjp2d\\n\",\n       \"Hb19+/aD7/+TWszr6+s2AEdbhs1Wp9PR+vq6fSwghf/RH/2Rrq6uTAGlyYN4c3t7q9XVVUmykHQ4\\n\",\n       \"vA+rpIljBIxfRigU0tbWlkfM7OD1el0zMzNuisbGxmwaAy+E45dkKnZy6JLscs+fPzcLMBwOq9Vq\\n\",\n       \"edgDG44mstPpKJvNeleH2Tc9PW2no2g0ajx+eXnZej5orBMTE47MYJCCDnJjY8NoBM33p59+qna7\\n\",\n       \"bUiQkCRIX4y2d3Z2nFSF2fmHXk9qMe/t7SmZTKrT6ditvVwua3NzU41Gw8gGYs18Pq9KpaJPPvnE\\n\",\n       \"YTmpVErv3r3zsGRhYUEnJydWRoCYHB0dqdvtamdnR4PBwEc1ZCHKEYYIKLgJaz87O9P6+ro935aX\\n\",\n       
\"l7W/v2873aWlJUuKqIuj0aix4W+//VaffPLJgwFFtVrV+vq6qtWqyy3ITPQEcCkgy0tytASO+NI9\\n\",\n       \"flwsFk0uqlQqSqfTZtCVy2Xb8VIGQXY6Pz9XPp/Xy5cvjR5VKhXFYjF/PaaNCHwJOHrM9aRqZlwz\\n\",\n       \"yZqmucI1kxuG5Oju7s64KbwJsGEaMr6HsWu1WrVkX9KDOAigMnjAMO6Y/mF0CFQInCdJlUrFtTW1\\n\",\n       \"KGE5xBnX63VPJ4HmsAfj9+C0dHNzo/fv3xsWAyZklAwBCoUNY26wa9AP0BqGGzwYDD3gfmNRG5zC\\n\",\n       \"np2deROhNKNODoYfQStluPWh15PamWma0Oxx5PZ6PQ2HQ+3u7jrlCPohShI4wYzC5+bm9Pr1a+c/\\n\",\n       \"X1xc2LwEXBXQH6sCBJ3JZNK8ZGRTkmwdRm0+GAy0ubmpw8NDB8PH43F9+umnOj4+VjgcVjwet1VX\\n\",\n       \"IpHwgltaWtLk5KS94lA9Y1ozGo30/Plz/+6XL18qn89rcXFRxWLR2SaSzGhjyMGUkJNqf3/fFr9L\\n\",\n       \"S0vmVmNxMDU1ZdMbFnkwi5HIYpo8xK7r6+uKRqOejj7WPPFJLebhcKj19XUnhMZiMb17907xeNxB\\n\",\n       \"8MBBn3/+ub755hu1221LmrDqajabFoaura050AfXeUkPXO/L5bIZdWSaMEZmbD47O6tGo6FoNOq8\\n\",\n       \"wRcvXiifz6vb7brGrdVq5ikQBAQH+d27d1pdXXWAD0c7zRm77cHBgd2Q4F3ncjljvoQPYca4v7+v\\n\",\n       \"ZDJpQcFwOFQymTSunEqllMvllEwmdXZ2po2NDXuJUMc3Gg2trKzY1Aby/c3NjfL5vG5ubjQ/P6/p\\n\",\n       \"6WlPZbH5Jcrtn/yTf/Ko+/+kFjPHKME7kHx48pnMSfdkm/X1dcViMSUSCVvh4i5E5ggKFUmehN3d\\n\",\n       \"3YdTorwIBspTkrDjIOiEIRcKhZRKpWwpgEgWfzt2V1AXdnyaTsonfgdiA9h1k5OTymazrtV5Hfw9\\n\",\n       \"NJ2E0jNan5yc1NLSklO5+v2+zSXxGWm1WrbbmpmZ0cLCgmZnZ209FswfRxaFjTBja5rCm5sbbWxs\\n\",\n       \"mE4bDof1R3/0Rzo8PPzg+/+kauZareak08vLS/v/BhNGGWsDodEUnp6e2t8Ygg6RwOz0yP1XVlY0\\n\",\n       \"GAx0dHTkBXVxcaFer6ff/va3pmqCdYO9SnJWNmJV8FhQB6A7WH/dbtcTOlKyWDBTU1OmgoLYML4u\\n\",\n       \"Fos+RYrFovr9vhtKFh4+yWj2iMCAg4z7KKJYkqoYAOFzwZh7enra9l9MTmdmZpTP55XP53VxcaFo\\n\",\n       \"NKpQKKSFhQVVq1U77lNjP+Z6UjszMBcYLWhCkCLJsTwcDk0ikqRkMunSgXowOLkKh8PGrgnkgfZI\\n\",\n       \"3Qw0NjMzY9srhgxwhokpY0wNkw05Fq8VeuT5+bkymYx95HhNpJsCpwGJ0bjBk2DoAZF/cnLSLDaY\\n\",\n       \"bDRfg8HA9FYmpf1+385H1PBwnzn1hsOhVldXXT8H45zJ+qaJRM7W6XS0sLDgARHUg8dcT2oxMxgB\\n\",\n       \"DYjFYl6AwGJB82sUx2NjYybm09W3221dXFzo5uZGzWZTsVjMC5LMFBQi0WjUE75qtWouRblctkki\\n\",\n       
\"Nx4zGcxQ5ubm1Gg03N0TY0FAJnkn8D2CtrfJZNIjcEI1seeCSwFngqhjmra7u/ugeqC/k5MTxWIx\\n\",\n       \"uxoBH2KE02w27auBMhxord/v6/vvv9fU1JTW19e9y9K/cHW7XSUSCZVKJT179kyFQkHtdlupVEr5\\n\",\n       \"fP4PfObg9fz5cwejVyoV46EQfK6vr92QwZ8guHJ6evpBohJJTezITMYQb0K4X1hYMOWx3+9rfX1d\\n\",\n       \"6+vrqtVqNjjM5/Pa2Ngw+T8ajTosh92KAQfjZ7wp2O05iuFNAAHi7zYxMaEXL14YZqP04HUzzgYJ\\n\",\n       \"odxgZM2ODucavjTcC3oN8gopDzBgZ2QeNG9kEETjmE6nbfWAuQwIBp7Rj7meVM2MnRVOlNVq1VTQ\\n\",\n       \"wWDgRoQdm/IAJAADbG5yOBxWtVq1+6YkT7mkn0y6gbrW1tb8OuAqsCCx/EKhjA4vaKrSbrdtOdBq\\n\",\n       \"tRSNRt04IrcC3wYBYHSfSCT8eRpgHJLA3SmxGCkDRQK1zc/P23ZWkn3sGFNHIhEtLS2ZDYfcS5JP\\n\",\n       \"MzBuHPgZvQffA6IieC/RbiJV+9DrSS1mRsvdblej0cixvxBhisWiGx2mdTRftVpN7XbbuwtYNf5w\\n\",\n       \"HK1k8rEAMW+ZmJiwIJRygkYJmI4bDe+Z+AhuJrtas9n0IoTlhoxLkn833Ohut+uROo0lYZ1B7zw4\\n\",\n       \"29fX1zo7O3N9z0IcDAaG0VjEDHigZ/J1NKwMi1CeX11dSZJOTk40GAx8QrHokYxx8b1BA5wPvZ7U\\n\",\n       \"Ypbk+T+NEvxhTE/Gx8edecIYlYkYuw6UUPBnxsiE09AE4vMMNMZwAPJR0IKK6VnQ6BB8eHp62kw4\\n\",\n       \"FC9EFv++m1FwUWAQDnSHxAqbL6DAlZUVn0ixWMw0Vb53YWFBkUjEAl+IVXA8Li4uTCHl75+ZmXEu\\n\",\n       \"IaUWJRpe2JLMt1heXjYhifuApzUPDxDoh15PajEXi0XNz88beyVC9+bmxpOnZrNp2iJvbqVSUalU\\n\",\n       \"8lDk+PjYGSBATCizsQqg5gRzJX8Pdhg72Gg0cug7dE2kUexwSP6xCJuYmPDroVShgQUGC0JaDC6w\\n\",\n       \"PiBQiMaNRq1YLJqKKcmMPLgduNmjHOdEoAw5Pz/XcDjUycmJnaCwNMD1v1arOTKZRns4HNoscjgc\\n\",\n       \"6vj42KN3VC0IeR9zPanFjDMn/scYm0DBhOQDFAXEFg6HTbfkDaV+u7q6MtRGWZLP523kDYbMjgTv\\n\",\n       \"AH8LGhyw6JmZGXt5SLJ3NBM/iDdgyVjISvJgg9MEvgi8kmQyKeknewIWN+UColFcSXmdQGIMlvh6\\n\",\n       \"yrVgk3Z5ealsNutmmdMPR1U2EZAWGsugmSLpBfPz89ZNBgdaH3o9qcUsydEJOPVQJyYSCaXTaR+9\\n\",\n       \"UA5xrWSHJS4X/dvW1pZub2+VyWSUSqV0cXGhubk5R6XR2EBCD4fDymazxlgZaScSCSMDGxsbhgQR\\n\",\n       \"iobDYXOMKYHC4bBrUmpxQnOC2XxBiJHIBSaE8Iuz2aydh3hQ+H5UNb9vQbaxsWGUAe9lJpCEDSFC\\n\",\n       \"SKVSnrQuLy87JwYkCauuRCKhZ8+eeTq7vLyslZUV29o+5npSixn3HjI2UE/f3t7aY2J8fNw1ZDab\\n\",\n       
\"1fLyshEQdjSgLmRAGJKz4FFnQN8E3stkMkZOtra2bGnFbiTpASzFKJcj/vT0VBMTE/r88899CjD4\\n\",\n       \"YAeUpP39fdfM6BJTqZRPllQqpcXFRWWzWde3pKzSjAUHR1htSTKPBdMadHsw+KR76RnuSqFQSPV6\\n\",\n       \"3XwMGsyNjQ3L2DDWubu7c9nGrs19QbD7mOtJLealpSVls1k3TigxIpGIFhcXzasg6ou6k8EB2C9N\\n\",\n       \"GPUoUBc3a2lpyXHAmUzGlmA4zc/Oziqfz5s1hg5OknFkEq2CEcc///nPNTk5acYcjSl1LmR8Ysmw\\n\",\n       \"BqMMWVhYUDKZ9AgfP+p4PO5dnXwV7LeWl5d9zE9OTjodKngyrK+ve3LJTowFLc0u6nAabewW4Lb0\\n\",\n       \"+33F43ETmiDzI1RgY3jM9aQWc7fbdboUuxqRaYg4qQelexyVN5Vjs91uG5/lZ5RKJcNgklwXApsB\\n\",\n       \"lcGBZurINA20AfQAUhK7PrkfQQ87anfySBDLSnIDx+++uLiwmXq1WtXk5KR99HgAwXSpS9nx2aVr\\n\",\n       \"tZqzDPl9TBDZpdl1ee3YLYBvUxJxusEODLoYcVphJMnAptfrPVoD+De2mEOh0P8SCoUqoVDozV/x\\n\",\n       \"uf86FArdhUKheOBj/20oFNoPhULvQqHQfxj4+FehUOjNj5/7H//ffieNBQsnqDSGID81NeW6kURW\\n\",\n       \"Fji1M14Y2BOk02nnSkejUU+qgNXgKkxN3ccEdzod17IsRsxXQAuCukScjTj2oXKyq7F7swMD11EG\\n\",\n       \"xeNxbW5u2oqM5FOI8wxwpJ9EB0BjaP+oa5laXlxceKwd5CPDwAvKr9AZBi27WKiRSESlUslGkvxM\\n\",\n       \"EgIQGoTD4UfHDf9NjrP/vqT/SdI/CH4wFAqtSfoPJJ0EPvZK0n8i6ZWkrKT/OxQKPRvdb0F/Lum/\\n\",\n       \"HI1G34RCof8jFAr9R6PR6B/9Vb8QDV7QOBu7gBcvXphr0G63NT8/7x3tiy++cPN2c3OjnZ0d28xi\\n\",\n       \"qEh0cDQa1cHBgRYXF02+j8ViVoQgV0qn03bDBOcGQgvWjMBzHNl8H00cjWOz2fTpcnd3p9XVVS8+\\n\",\n       \"Ps4i2tjYMMwXiURUKBSsR3z9+rUKhYK/ptPpaHt72+VIcHG+fPnSr4WSi40ATJpMlGKxaC0h9ryE\\n\",\n       \"AH355ZfK5XJqt9uampqyv93V1ZWOjo58316/fq1/9I/+ylv7b3T9je3Mo9HoF5L+qiLof5D03/ze\\n\",\n       \"x/6OpH84Go1uR6NRTtKBpJ+FQqG0pMhoNPrmx6/7B5L+47/udw6HQ+VyOZXLZR0cHOj29lb5fN5y\\n\",\n       \"feLKRqORDbHJ7RsOfwoxf/funRNIcUgKh8O6urrSr3/9a83NzbmB4ohmaJHP5508hV3YaDSyLlGS\\n\",\n       \"DR3JF2GhBm1mwWibzaZ++OEH7e3tSZJlU7gJNZtNXV9fu5GamJhQuVx2XX5+fu6H5eLiQnt7eyqV\\n\",\n       \"Smq32y4rIPpXq1Uv/MFgoLdv36rX67lMgYRPU8dJVCqVTGTqdrvK5XI6OjryRPDw8FC5XM4Pf7lc\\n\",\n       \"VrFYfGAGA0X2Mde/1Zo5FAr9HUmF0Wj0u9/7VEZSIfD/Bd3v0L//8bMfP/5XXqVSyVAU0n+ok8H6\\n\",\n       
\"sV6v26EIsWWxWFShUHB9e3d3p0KhYCyXEoZjl10F2AwtHnUrg5ler+cByu3trT0tpJ8yUIDUKpWK\\n\",\n       \"arWaarXaA6ok43cmZAsLC5ZyYUqOLxwPQi6XcyQGHA5OLBY3fG2wbgYesAslGV7E3xrfZTYENIFM\\n\",\n       \"DXkogfiQnQXH1QyCIIVNTU2p1WrpF7/4xSNW179F1lwoFApL+u90X2L4w/9f/o6TkxOLTS8vL/XF\\n\",\n       \"F1+oUCiYS8uxSk2LLAqy/cTEhLrdriqVij766CPr8mB3YQZIGiqWBORBr6yseIwNUhAOh3V0dKSP\\n\",\n       \"PvpInU5H8XjcNE2aQ47jzc1NIyPFYtFmhisrKxqNRmo0GlpfX9f5+bmNGdfW1kxs2tnZUSgU0ps3\\n\",\n       \"b/TJJ59YUFupVCzpB16Ea8KOfXt7q1Qq5QFJcLpH/U4EBTs6WSZMIlnkNzc3evnypUsiEgAYXF1e\\n\",\n       \"Xmp9fd2nF8Oif5+i03YkbUr69kfW2aqkfxkKhX6m+x13LfC1q7rfkc9+/O/gx/9aOcLXX39tn4Zv\\n\",\n       \"v/1W4+Pj2tzcNOiP2HVhYUGj0cj+EEwAQQugh8IUC7LNCNOZmZnRzs6O5UeM0VF3gDlHo1GHBEEm\\n\",\n       \"4jWEQiH7VoB+BJEPxubgzOzONH00uWNjY5ZupdNpPXv2zBZk8/Pzpp22223TTLGoHQ6Hzs7e2tqy\\n\",\n       \"zx7+e5jO0G8w8YxGo5aNIb8C0UBBQ5wwDlIEGRFngfnM5uamUqmUvvvuOxUKhb/u9v5rr39rZcZo\\n\",\n       \"NHozGo1So9FoazQabel+sX45Go0qkv53Sf9pKBSaCoVCW5KeSfpmNBqVJXVDodDPQvdPwH8u6X/7\\n\",\n       \"635HIpHw7gB+2el0zIPApw36Ibatl5eXOjs7cwIqgwPiDZDvDwYDW2S1220Vi0VDf6g2KEew+uL4\\n\",\n       \"ZhcEJSFzj9o3yJlgigksRhlDLSvJO+twOLRhIT8b2RcRx/A0JJmOiWK90+mYlonMC3Sm0+lYiAAa\\n\",\n       \"wuvp9/sPXicJs+zE9XrdBH4ecMS1/D1AkFAEHkvO/5uE5v6hpL+Q9DwUCuVDodB/8Xtf4hC50Wj0\\n\",\n       \"vaT/VdL3kv5PSX93xJ2V/q6k/1nSvqSDvw7JkO4x2U6n452TOg0ICUNsOn3cheAzLy0tudve2Ngw\\n\",\n       \"t5jakjICRhgGhrVaTaFQSBsbG/apGxsbs4sogZVEU2CVNTMz49gyVCRg2RCSFhcXXVvPzc2pXC5r\\n\",\n       \"bm7Oo29chxDIIse6u7vT+vq6lpaW3Ggy5MG6YHZ29oHUHx9o6urd3V2Fw2E/9L1ez0JZYMdIJGLW\\n\",\n       \"HFPB6elpTyQZ0HB/KJtAj1jo6+vr/+4u5tFo9J+NRqPMaDSaHo1Ga6PR6O//3ue3R6NRM/D///1o\\n\",\n       \"NNodjUYfjUaj/yvw8X85Go0++fFz/9W/7vcy7aPJw3kSHBkFNvROgtoZC2OUiFKFBuXH12LnzGQy\\n\",\n       \"aS85vOqOj4+NrYJhg5JIMpdhd3fXgTnwptEerq+vP/A5hq03HA7NL1lcXFStVrMChOkadWkoFLKB\\n\",\n       \"zcTEhJaWlhSPx21YmM1mjXRQMoDJ47OHjRnDmo2NDQttqY8XFhYc0TY3N2cjSJh1GDvyur744gsN\\n\",\n       
\"h0Mz6vACWVpa0mAw8CbxodeTmgDyJrdaLZcJDCbYAQaDgXcy6I2gFyggJFmEyWIJQlIc6be3t9by\\n\",\n       \"BZtKTgI0g4ykg1M4/psmKcjvZYeldoXAj8o6GKADbs1ro6ZlN2QYA92TcgPcGJ0jPslM/ySZ8A91\\n\",\n       \"Nfha+NuCHs1AlVBW8cVjMcO3ZvJKYw4777H+zE9qMXOcjo2N+fhuNBpOOEIPSCpTv993DUzQI6yw\\n\",\n       \"4JuM++X4+LhyuZwXnST7SnCT2u22xbI0R9L9QwT1k0kdR3E8HtfCwoLq9bpyuZxlR5i1MPLGShcF\\n\",\n       \"Cjg5zD3pHva6uLgwFxrFNX8zP4NTAlgPn5FgjY8/CC5IsVjsgXedJItz4ZJwovD6UWnjZc3JAVkJ\\n\",\n       \"24fZ2Vnt7+8/6v4/qcWcyWTMzKJ5i8fjDyiIxO72+31lMhm9fv3a+O/d3Z13jYuLC/ODOeqpoQm1\\n\",\n       \"OT4+tniUr8O1Z35+3gaG4LxAfCwIGHf8O2jIkkgk1O12NT097XEwnAiYcMHpHLs8fQL+FbxueMu8\\n\",\n       \"Lzc3N5ZZYUnGg8fPZwyPsSInBuUDzk8TExNOv0KmBQQJVwPVSpCeWq1Wjah0Oh39rb/1tx51/5/U\\n\",\n       \"YuaG05RgHwUtkl2g3+8rnU7r7OxMP/zwgxYXF7WxsWFBKsy0YrEoSR6UAGthBfD5558/cM7ENWh6\\n\",\n       \"elqxWMyeciSSMkkjICcSiTxwQULaREQbY/CPP/7YjRUnhSQvYEhOEH2Wl5e1ublp/d7l5aVhR7jT\\n\",\n       \"6+vrFjMwEmcxNxoNQ3NY+galWJJsxEhqAGbsExMTWl1dfZALiJKdEmdtbU2RSES7u7vK/ZiVPRwO\\n\",\n       \"HZr0odeTWszAR7FYzGB/uVx2PRzESTlqgZ5qtZp5y5QcwHHwEBhtB1XE1Mq9Xs8Zf9PT0yqXy1pe\\n\",\n       \"XnYksfRTTt/a2pptBqLRqHq9npsvxty4Il1cXOjs7MxDDOIXUKcw1SR2AQITTv7Sva80dNRGo6Fw\\n\",\n       \"OKy9vT07cI5GI6MVg8HAtge5XM5DFTjPcKrj8bgymYwuLi5sezA/P6/Ly0sdHByo2WxaFAH5C1Zi\\n\",\n       \"Pp/X4eGhms2mNjc3fe/+EDccuIgmQwvIsSbJujyGKsHjcmpqylRKmicI44D8aAJptKA+gmuDjsCu\\n\",\n       \"I+gdmy2mcbe3t9bFUXuS9NTr9fwzQS84/hk8oCeUfqKihkIhe1lQswLFSfKE7vr62jpImHsMbyDS\\n\",\n       \"83dfXFxYdSPJfs68PuKcb29vlcvlXLfPzs56YHN1daV2u23eeCQSsdE4pybY+fT0tCmwH3o9qcUM\\n\",\n       \"ZkpOhiRjunTqcHQleRFDsqnX61pZWZH0kysSNW2QC42TPQ0nnXmj0VCj0bAKm8WP7o4mCQ4Ei4HT\\n\",\n       \"gAuqJeULfOrp6Wn/TgYaSJ8kWQQQ9KeAnw0XmeYMbgT4ND7NBLNPTEzYh4/yAoEC5Q9kEreghAAA\\n\",\n       \"IABJREFULGx3a7WaWq2WERR8+G5ublSv180lAeunrEAE/FgK6JNazDDE6NSpn4Gv2A3Y2YC4Zmdn\\n\",\n       \"LcM/Ojqy4gH+AzsS0QiwzEKhkA4PD90sgeeCwUajUU/U+H24FwFpQTEFSYDYgyEMNScoDOiJdF/i\\n\",\n       
\"NBoNM/KIQkNYCx8FoSnDkKB9Ag95oVDQcDi0gpoNod/vm4V3fHxstTmnHqcLfA9OuKDHH9knV1dX\\n\",\n       \"nqbSdNJk0qQ+5npSixkQ//z83JAboTngnBi7EB3GwgU6glR+fn6um5sbzc3NaWxszA0lgP/t7e0D\\n\",\n       \"gj1DCth31L6SvNuBcZdKpQeUR/BeFNoMNihFwHERml5dXRktQCyAcxL+09FoVOVyWcfHx+r3+/rL\\n\",\n       \"v/xLj/er1aoXDrg6FmP4d1CWoBKhxwjaijWbTT80wZE3KhXQo7OzM3sxByHLqan7xFbpJ5X6Y64n\\n\",\n       \"5TUHwB+LxfTJJ594ogRBiLqNbp1uHMdPjuhkMqn19XW/4cB1oVBImUzG4TadTsfYK538xMSE4vG4\\n\",\n       \"ms2mlSCUPxMTE6rX60ZFUGfg0dzpdJROp122vHr1SrVazcd70PKqVqvp/Pxc8Xhc3W7X30eK02g0\\n\",\n       \"0u7urvr9vi0G5ubmbNuF2//S0pIf0o8//lj5fN4j8mw26/cV4v329rZhTkoMxMFkaiNCQNhL6Dul\\n\",\n       \"0c7OjqT7pu+jjz5yTMcfEloDFyR3doPz83NzjqmJGW7QnOFOz3EKTgrXmQEFJPVyuexjkp2LocDN\\n\",\n       \"zX2edrVaVa1WU7PZNG8Yf2Qw3LGxMd/kYrFoDLfZbDpXhJjh8/NzG7Tc3NzYNTRI6JFkCwX8lYHx\\n\",\n       \"yuWySfHn5+cKhUJqtVoezgwGA2cTgpLwuuEsU7IwUEL53u12Va1WXcvjxl+pVMyN5t80pXhAd7td\\n\",\n       \"9wtAf4+5ntTOLMlURKiTMLnQz8HcSqVStpdi+ACXF0Em/hXdbtecDKAqRAB08NR+y8vLvsmUM+zw\\n\",\n       \"7K64jtLwLC4uWgfIMT89Pf2vGH6DvBD5S1MIGhCsXYOIzMLCgtLptBdv0GWfZhXpEtg0tltkvQC7\\n\",\n       \"SXLZw5gavxEs0ZjyXV5e2q2f4B7orezinHIwDR9zPamdGf5vkGyDKJR4BVCJZDJp7BOpP/4ZENX7\\n\",\n       \"/b6urq6867A7UU5gasLnw+Gwp41ra2vu9iORiGMjIKGDScNpAE3AJJ2RL40kZRAPBoQeCPU8PHd3\\n\",\n       \"d4rFYuY3j42NaX9/35yRdDrtfEQsZ+GkUB8zjsdlaGlpySUAHhjxeNwRF3jgIaSNRCJaW1vT1taW\\n\",\n       \"7YHBvjHByWazTn2V7h+QpaWlR93/J7UzczOoD0ejkbOr2+22fvaznznTr9PpaGtry94Ul5eXrkW/\\n\",\n       \"/vpr0xdnZ2etVoa2yQ4HF3l1ddW+xix0yP/hcNjB6OCuQF29Xs+eFhcXF3bIh42XyWTMfFtbW1O9\\n\",\n       \"Xnez9+rVKzdTZAciFEilUvZmvru706tXr7SxsaFms2lJP+pt+gl21uApgX7x5uZG6+vrGo1G9smL\\n\",\n       \"RCIOtXz9+rWxa3JOVldX3SDOzc0pk8n490syNPfixQs7pv7B0ShwEWnAMCEajVq3dnl5qV/+8peW\\n\",\n       \"4oN90uX3+329f/9ey8vL2tvbc0cPfIW7/g8//KBqtaqzszPzb6+vr7W3t6fj42PX6Pl83mPe29tb\\n\",\n       \"7e/v6+DgwDBhLpfzQ1WpVMysazabDuQ8OjpSLpdTsVg0aw/E4PT01IOfcrmspaUlnzblclmlUkmt\\n\",\n       
\"Vsv00lKppNPTU+caMu3r9/uq1WoqFouqVqvqdrt2Vnrz5o3x+larZWoruDY9ytHRkVl43W5XBwcH\\n\",\n       \"2t/f1+TkpBqNhsrlsqeYcLpPTk40HA51dHQk6R4B+f777x91/5/UYkblMTExocPDQzdy2GmNj487\\n\",\n       \"LGZ+fl6Hh4fWpcEpeP/+vfb39x3zAH4r3fMRgrVyqVTS5eWlMpmM5ufnHYcWtLm9vr7W4eGhfSGQ\\n\",\n       \"VtH4VSoVVSoVN3grKyuOhmAkL8kBQxzrBwcHloENBgPt7e3Zsek3v/mNbm5uLKz97rvvNDc3p/n5\\n\",\n       \"eZcsoAeUBxgYohaH/1EsFt2gvnnzxkLWVqv1wCyRoQiLnRMS3J9mGvX85uamfx8ql8dOAJ9UmcFo\\n\",\n       \"NRQK2SeDWpcwHDDOWq2mr776yrsLZUk6nXbIJf5ywFfLy8te+GC5Nzc3Ojo6cn349u1b7e7uamFh\\n\",\n       \"Qaenp5LuaaIcryyGeDyuubk5LS8vG4Ml1HJ3d9cNFI3d9fW1tre39Zvf/EaJRMKEoUKh4DIAoelX\\n\",\n       \"X33l180kE3YfCM/Y2Jg+//xzZ7BAjDo4ONDq6qp5JJRe+I0Ew4Du7u5c4jDeD7p7ZrNZlxoE+8Ae\\n\",\n       \"JKkLDw9IXt99990H3/8ntZjJ/6CZYuoUDoe1sbGhQqFg5CGVSqlQKDgxlSFGMKe6VCopEono7OzM\\n\",\n       \"kh+I6mDYmKdw5MNWY2dGAcJghHKBAE3MY3AaxUAlGo0aj8UPRLrvCwjWGRsb0+Lioo/y6+trY9Px\\n\",\n       \"eNwT0ampKQdrfvrppy5LwuGwhb6j0Ujr6+s6PT1VLBZzc5lMJhWPx5XL5fTRRx85T5vSand31+6i\\n\",\n       \"5XJZ29vbToglZm56elpnZ2caDAZ2RGWsju4SPsljridVZqA9w22IYQgeGpLs8IlcSJJ3WiAuRsfQ\\n\",\n       \"HZEzgVgg4iyVSra2YmwNpTMWi/k0QEJPp7+wsGDVMscsGrzr62v/3HA4rFar5RMGSZgkm6dLcvYJ\\n\",\n       \"xzmTUGrsTCaj7e1tIy3IuYDk2u22RqORKpWKR/AwChnpMwACsotGo3rx4oWFuQyJgBZbrZYymYzL\\n\",\n       \"D0or6K/D4VClUskml5jWPOZ6Uos5aO4HvISUh1pa0gNVc7lc9hEMqM/EC8wVAjwqEbjGmMLQYJL2\\n\",\n       \"ymgaw/Fnz55ZnR0cZiBUlWT2XTCMp9lseurGiB3/ZeRWwJGrq6u28gJ1YfHncjkrZ/idGItXKhVt\\n\",\n       \"b2+bHTcxMWEfDlAZPgecSMPM3wNTjs+RwgV1AG5J0JOj3+9rdXXV/I3x8XEtLy8/6v4/qTKDnQrj\\n\",\n       \"Fbrzdrttayx229nZWSudsZ2CjVapVBSLxXR5ealisahQKKSDgwMlk0mPrsGKeSiAxfL5vNbW1lQu\\n\",\n       \"l7Wzs6Pb21v95V/+pdbX151oSqQbkzbCe1i80EfZzcCAc7mcd7d8Pu8hBRNOVCl4S0PXROEdiUT8\\n\",\n       \"fSy0m5sbnZ6eampqyhg8Q45CofCvOO8jYG21Wvrss8/cZPPzMU88PT19kAkIxZX3Gi860J5er6ff\\n\",\n       \"/OY3j7r/T2oxI33H6QcjlXA4rGfPnunw8NDHtyQHr4NPS9Lq6qqlUalUSpFIxEORUCjkoMZYLOZd\\n\",\n       
\"nA6eUgMnUerBdDqt8fFx79rEDWMWTrQDQxeGPpLM+ajX695Nx8fH9fz5c0/TUGpjLnN1deUdG+0h\\n\",\n       \"C+jLL7/U0dGRSw2w9Ha77QcYRGV7e9tYdKlU+ldKokqlomQyaUrs+Pi435egH93W1pZ9n4M49nA4\\n\",\n       \"1K9+9StFo1GtrKzoj//4j/XrX//6g+//k1rMQQ9mYK1qteruGbPA6+trJZNJ1et1vXv3Tjs7OzZc\\n\",\n       \"mZycVD6f1+7urgqFgpEE1NXlctnG5eVyWdls1iJSiOhzc3NqtVpKJBJaXFz0CYAmMJPJeNIGYWl6\\n\",\n       \"etqsPZzrwaPhQY9GI2cdnp6e6vnz567ty+Wy+dfQVUEhUHpfXl7qzZs3mp6e1tHRkdLptDkolBWt\\n\",\n       \"VssSrLdv32pnZ8dC2/X1dYcYNRoNpVIp9Xo9nZ6eusSQ7pviw8NDbW1t6erqSr/73e88Xq9Wq+4J\\n\",\n       \"KO/gV/+zf/bPHnX/n9Rinp+fd94Io2kMuQmtnJqaMnIxNzdnHzdqZBACPDYWFhYkyRESTPWIZSBK\\n\",\n       \"DCckGkIQE6AxOBLgzDRjkpwGC6zFrnx1deWJGgSfRCKhdrttvJjaPJvNuoHr9XpGVSDvT0xMeAwt\\n\",\n       \"yYJWSeadcGKhvIY7Mj09rZ2dHUNulB6UQQx/+P6FhQVzMhjRIwKg9s9msy6lOG2SyaQpoR9yPakG\\n\",\n       \"EGJ7Pp/XDz/8YDSA3fjo6Ejdblfn5+c+dsmWhnMQDoe1v79vbvFwONTp6amSyaQGg4EtXxlQkCPS\\n\",\n       \"7/f17t0714ClUkmFQsGICnActeK7d+8eKMFpMlutlvL5vNl7V1dXOjs784ADGyuGKJQ533//vWKx\\n\",\n       \"mOr1uhl6eIccHR3ZKheL2cFgYE+Ld+/eqdfreQpIJjj85lAopL29Pb17987MOklWqmAWQ2nBCYG7\\n\",\n       \"53A4VLFYdANKGdLr9bS/v69Go6FKpfKohSw9sZ0Z3kM2m3V2NGNW6t0go47dEYIM0h0IQdSG1JWz\\n\",\n       \"s7NKJpNaXFxUq9XyTomsCoiOtKWFhQX1+31Fo1Hd3d0ZW56fn1c8Hlc6nbaUa2ZmxgaGqVTKeDkN\\n\",\n       \"K8gJLDXgPx5AnIiur69VqVTsvjQajbSxseEdnVSrYFTD+vq6a/pKpWKUhPp3enra9TOC2KBJ4sLC\\n\",\n       \"gtUrQYNJBk2gHeSZoFSfmprSZ5995teytrb2KCfQJ7UzS3LXTR2I85B0j/NyhEOAD4fDPlbD4bDJ\\n\",\n       \"8Hw9aantdluzs7NKpVIeYaM0AX/GUV76yXuCUwD1NRxofDWA6djR+FmowyX5a8F9JZl032g0vJDO\\n\",\n       \"z8/NpZB+0kQC0yGZAk5EmT36MW8bxuCzZ890fn5ulQ3OoPzO7v/T3pnERpqmef3/hR22Y3Hs4Vht\\n\",\n       \"RzjttKszO6u6ekEjulsaMXBEQgKEaAEHhIAbHJFAnODACSE0h9EIBiEhREsDamkaxIFlUEs9rVqy\\n\",\n       \"ypWb006v4XCsjnDYYUd4+TjYv6c+dx8G2V3drcCvVKoqp9MOO97vfZ/n//yXoyMbKDG69/p2oHOk\\n\",\n       \"jOr1egbvccC0Wi27Sfne6C/vukZqMzOJYtzq9/sVjUbl9/sVj8dv8X1hfCGfJ+KBGpjoYaQ+lAvI\\n\",\n       
\"h/j4YDAwpIPBArERl5eXFicBcZ4sDwxjKG8Y5EgyBUc4HLbQ+nK5bKebd3hCuhbWAjxENFgQe6h1\\n\",\n       \"GVA8fvzY8hIRGriua7RPHkpJpqPE8851XUNrMpmMGScyxYtGoxZLhykjcCPmjbOzswYtggLBl77r\\n\",\n       \"GqnNzKQPWT5XfCAQMOgKuibypv39fSMPSdfQHBuBNwkyDzwDJnd+v98cfSCXU+cmk0lzAGV4QpnA\\n\",\n       \"EEeS1Zk0n/CxyVxhLO4NtPcmvxJrwSDC64gkXTdzjuOYVAweCqoVyq3JyUn1+31rIDlx+b2GQiEj\\n\",\n       \"18/MzJhZDcoVoDdJxoGBocjCcpjUWU5u3p+Hk9mzmFBh3H11dZ0jTebHYDAwR3pkSouLi6rX61pf\\n\",\n       \"X1elUtHExISdQizGvcBPs7OzRtfM5/P2+Y7jmBfd1taW1a/o/TASZHrGxxlseC14oZ3CNJOuy429\\n\",\n       \"vT2b3LH5h8OhMpmMcaRPTk7UaDSshMJw3CsEYBjEz4sbE/wVoEBvXLIkU6gcHBzYz8MDw+tlmAQN\\n\",\n       \"9+DgwN4PvECazaba7baJihk+3WeN1GZG/o/RNtYASNoZdzOJwk+CUwiKqDckh1MZMSlEJca8dPJ4\\n\",\n       \"v0HSAVbDzguvCMbYruvq4ODATlo8JGiiyFzB+Pvo6EiHh4fGhKMPkK5Pd3L8uBn8fr8Fv0M4QnaF\\n\",\n       \"oz+bGoMcfo5Wq2VIS7/ft5iGg4MDTU5Oan9/3+pkPOy8v0NJVqK1Wi2jgWJnC4LEQAqUhQnpXddI\\n\",\n       \"oRlo9SYnJw1KKxQKpiaBRDQ2NqZgMKh6va5isaipqSnNzc1ZkA55JYxll5aWzFQlHA4b0TyXy1nD\\n\",\n       \"0+l0NDc3J+kaH3706JFxIHDtgYqKeoXJGajB9va2OWp2u13Nzs6q0+lodnbW3OYDgYDq9bqi0aiJ\\n\",\n       \"B6ifJVmtj1TMdV31ej0za6RUoRnmZKQvwMcZhQgJA1hp0czSLPJgwuMm229+ft7onicnJ6aK5/ef\\n\",\n       \"TCbNswM67H0FrSN1MnMK88Zj2CLJ0AJOqpOTE2WzWTMWJwaC6FyiwmZmZixfWpI5+khSvV5XKBSy\\n\",\n       \"k6vZbGpnZ8cyQM7Pz00SJH1phTAYDLSxsaGjoyO79mkaKQ0YFzuOc8sOtt1uG4VSkk0F8VLma7qu\\n\",\n       \"a01rPp832iYnKhseh9NwOGz+zeDePKhEx+EgCm+D7wMnhduHg0L6MmEWl1O88ra2tqysou5+ELR6\\n\",\n       \"FkoKTEuY+LHJaVBwFdra2tL4+Lg1evCLm82mGSwyUCE3EKcflNMHBwe36tX5+Xm5rmunLS7+sO/O\\n\",\n       \"z8+tGeV1bW5umtVWMpnU1NSUhasjEMD8Bf4EdrQgNNAsyUvBfuvs7Ezr6+vy+XxmSQA9lTpWktX3\\n\",\n       \"TBt58Hi9WDMcHh4a3EcIKEw5hkhgyJJM9U5ZRFNbKpWMVgB8GY/H7/X+j1SZIck80Z4+faqpqSml\\n\",\n       \"02nLI2GAwvi0UCjo4ODARtnpdFrValXLy8t6/PixVldXzRqALI5arabHjx8bXzeVSpmHBGGXUEiB\\n\",\n       \"6iAvAX9NTEyYO+bU1JQeP35sjSs15+Liol3pp6enGgwGhgbgLMq4HT8P7GwLhYIcx7EhinRthFMq\\n\",\n       
\"lRQMBlWpVMx7GluxVCplmxY0AsN2xLWNRsOMZbBWyGQyarfbymazury8vKXMASb1+/3m+BSLxVQu\\n\",\n       \"l3VycqJisWhxGRg33meN1GZGekSy6cTEhN68eaN4PH5L+MmVyBUNNLa6uqqVlRV98skn5nnBm7Cw\\n\",\n       \"sKBms6mNjQ1ls1nF43Ht7Ozc8l6rVCrmN/fpp5+qUChYWE6/3zee8MTEhN69e6dMJmPm6MViUUdH\\n\",\n       \"R9re3rbp3/Hx8S1y+/j4uDqdjh49eqTDw0N7CDFqYWixvr5uDZ8kazCRg52dnemTTz7Rs2fP7KYh\\n\",\n       \"U2R7e9sw4Ldv32ppacksgGOxmL744gs9e/bMyjYa14ODA5ueHh0d6ac//am+973vmdXu9va2lpaW\\n\",\n       \"FAwGremkLKK0evBn9iyutlwuZyQZZEvHx8eWDY0KJRKJWPYzENbBwYFtfm9cL/ZZ+FGgisB6lvEv\\n\",\n       \"sWuhUEgzMzNKpVJqtVo2MGEczfQMFQcRZhDUveVMMpm01wSpB6ycKR9NGX4aMzMz5iY0GAzsRgqF\\n\",\n       \"QgoEApqfnzcVCYT9wWCgx48fm+s+ZRGuQ8RhUDdjxHh5eWnTRfgnS0tLkmSYerFYNK+74XCo+fl5\\n\",\n       \"44D7/X5Tkd9njdRmppMGA0UdghoC7wvePO+Ym82dTCbtRCZEHkpnOBxWo9EwX2NYbl58G0IQAxk2\\n\",\n       \"AI0p7pcMDyAKwa7z+XwqlUrmTYE+EYYZymnMzxHNcpKfnX0Zf0wZhI0s2C+1NvyNqakp5XI5TUxM\\n\",\n       \"qF6vW7oAzkjE0IE5s9E5UamtgT8ZefNQg6gwtsaCADsvRujf+c537vX+j1yZQa16fn5uFliXl9cp\\n\",\n       \"pODDPp9PyWRSlUrFUA8k84hJo9Go+Vd0u10z3p6bm7MGB6IQDRhWVl46qVcxIsnQCUbtuI6GQiEj\\n\",\n       \"GDUaDdVqNWsGwa5xqfeyzkBkGOrAcUin0+Ze32w27ecvl8tqtVrGycY9lKkllFGI9vBNUJejdsdY\\n\",\n       \"Z2VlxWRUwI7AcJLMbRVWHYQmGs1arWZG7vcVtI7UZgZvbTQadhJ2Oh3l83nt7e2Zsrjb7ZrXnCS9\\n\",\n       \"e/fOLKjy+bw2NjZseME0C38NfukMKQi15OFhzLu7u6tQKKRUKmWmKq1Wy0qSra0tM6nBCZTanTiH\\n\",\n       \"vb09K51wPQLZaDQallaFoSEMvkqlYpDX+fm58a753bTbbbXbbUWjURv/0+RielgsFrW1taUnT56Y\\n\",\n       \"sSSmjdPT0+anjEkkJzRIEHAe6bfHx8dKp9PmsYFaPZfLqVqt6vT0VD/+8Y/v9f6P1GYmugAZFFJ6\\n\",\n       \"NhXX4szMjPr9vhYXF7W6uqrl5WXVajXlcjn5/X4tLS3Z38f6FQNzGq1+v28n8ZMnT9RqtdRsNs2o\\n\",\n       \"EbI+I2tOJIhNSPxh6JGvB+MsnU7fsiHodrtWZzIYmZ6etqHMycmJ1tfXlc/nVSgU5PP5bJqXSCTM\\n\",\n       \"vZTXgko9m82aAPbq6soCLa+urpTP5zUzM2NNMkIGbhPsc6EHUNpg6sLwB4td6ZpzjlgC/jZ9y9zc\\n\",\n       \"nN68eXPn93+kambpyyBGmjemX81m0zjI/BnWr41GwyTv1WrVuvt6va6LiwuTVFHHAiexiarVqqRr\\n\",\n       
\"TSHxZDxQ0CUxXYTny0AHkjwnJIy9k5MTS5elQQUSY8zNyeitf8lNoX5mQ+VyOeNE87oxPidTkBhm\\n\",\n       \"WIE8WODPRFdgWyvJmkWyUfDgANILhUJmuwuejBgYNTw1OEaOd10jtZk5dbrdrr773e+aG2Uul9MP\\n\",\n       \"fvADxWIxlUolzczMaHFxUd/+9rdVLBaVTqf16NEjMwtcWFhQsVjU06dPNT09rSdPnhgPASkS+Rx0\\n\",\n       \"5JKMdD81NaX5+Xm1Wi0FAgETASwsLBiCkM1mlc1mLd1Jkj744AMtLi5KkoX6kFWIN0a5XFa329Wz\\n\",\n       \"Z880Pj6umZkZM/CemJiw9Cb0d7Ozswaj4WtRKBTMYWhhYUHLy8vy+/3K5XJmyh4KhfT06VNlMhnL\\n\",\n       \"BE+n04pEIiqXy1peXtaHH36oi4sLLSws6OzsOnY5HA7r4uJCmUzGmtKlpSWdn59rdnZWkUjEoL/F\\n\",\n       \"xUV97Wtf0/T0tPL5/INxonfhZBQMBvWjH/1IR0dHpoT+4Q9/qFqtpv39fZMhvXnzRhsbGzo9PVWt\\n\",\n       \"VjNzwJcvX2pnZ0fValVjY2P6+OOPzWzw3bt3dr32+31zBhoOh1pfX7dQm+fPn0uS0TMDgYCeP39u\\n\",\n       \"JJ4XL17cIjfF43Gtrq5qa2vLTFKomz/66CNjrWFntb6+buwzWIG9Xk+VSkX9fl8vX75UIBCwJjca\\n\",\n       \"jZqtbb/fN3bc0dGR1tbW1Gw2Va1WtXWTEHtxcaHnz59rb2/PmHiDwUDb29vGMlxdXdXZ2Zlev35t\\n\",\n       \"rks00C9evDA7r3q9boJanI5c19XLly/t99Bqte4tmxqpzYyrDnIjsGKQCq5qYDRKiIuLC9PdQTL3\\n\",\n       \"bpTz83NVKhWjMKJIoTHi8xkOgE1LshEwZQMUT055xsX4Z6Acka6HHXjTMdYGPeFnREQLJOit6ykf\\n\",\n       \"CM2RZIw+0ATIRECMUDqBNkEi4Jmcnp4aZIcyhq8LzwRKZ7lcNjQIliK1MjAedgx4Yd9njdRmjkQi\\n\",\n       \"FhRD5AKnDMy1SqVi4T3o9Gq1mlKplOLxuF6+fGn8g16vZ/J/L6F9bW3NnDObzaZht/gwe2EyHg5o\\n\",\n       \"j5B/CAA6PDw0F1Bqcuk6Lm1/f98SXicnJ7W1tWW6PVxKyQV/9eqVksmkxbcR1lmtVs0oETMX6uKT\\n\",\n       \"kxNtbGxod3fX+N5+v99IRFBEId5vb29rOByaqQuEJep3fkZKMQhSbHh+d0Q7IwsjMcvLIb/LGik0\\n\",\n       \"A6nP5eWlnj17pnA4bN16NptVu9022RR/dnl5aXgudd/R0ZEKhYJ2d3eVzWZ1fHxsymcmd8lkUpFI\\n\",\n       \"RLlcTj/5yU+Uy+VsWgd7bG9vT5OTkyqVSnYLQGKfnp62CGRG581m08SvcB7Gx8fNHgGJlOM4hiWj\\n\",\n       \"K1xZWTFRbCQSscxwLBIQ8ebzeQugxLne5/NZ4urW1paVRZysExMT2tnZMcNwHEERrLIJaXrz+bzZ\\n\",\n       \"N2DxQPO6vLysYDCoZrNpSbqJRMKoqPfxaB6pzQx1k+katk8MDhqNxq0ygT8n9Ql5EYoUkA4wYuwA\\n\",\n       \"KC2wxGKke3Z2pu3tbXPLhCRUr9eNJITxSTgcNlFou9022RIO9I1Gw2AwTji6/snJSQu0JxjI5/Pp\\n\",\n       
\"3bt35h2yt7dndrSgGJKMpsogCWsw/gzO9eXlpTY2NrSwsGA+z1ADSJslk5ByCCemSqViGHmv11Oj\\n\",\n       \"0bAHC6NxuNno/1qtlsF3d10jtZnhOUiygcJ7771nXT+CV2/edb1etw7+8PDQPIX9fr+ePXtmm4RJ\\n\",\n       \"2WAwsGlaJBJRp9OxzD7UGoPBQKVSydJK+b5+v1+1Ws2+DtYDnNBc3f1+XwsLC9rc3JQks5dFHOrz\\n\",\n       \"+VQul20a6fP5zC9udnZWtVrN+MuE9FDy4H46OTlpATnAdCARR0dHhjF7hxqcwGgmIQvBdb66utLK\\n\",\n       \"yop2d3dvZbPw0GHXAF4OTOo1bbzPGqnNjAH25eWlnj9/ru9///t6/vy5VlZWjMPw7t07u+5evnyp\\n\",\n       \"YDCozc1NTUxMaGZmxtCEYrFohoftdluZTMZO+/X1devOHz9+rO3tbVNykIf35s0bxWIxJRIJra2t\\n\",\n       \"yXVdQySGw6FqtZpKpZLa7bYZfOMsOhgM9PHHHxu3AfLQ6uqq3n//fYsuQwaFO//jx4+1u7urt2/f\\n\",\n       \"amVlRVtbW5qamrplI0Y93el01Gq1rLlEZULjNzk5qY2NDSM7XV1dqVarGef75ORES0tLRmuFj7K5\\n\",\n       \"uWk6wEgkYhwNLzfEK5aNxWImaHj79u293v+R2sw+n89y+eLxuMmksKXiTYIY9PTpUyMQgXSUSiVD\\n\",\n       \"NpaXl1UoFPT5558rFovZaT4/P69ut2skdhKfOGXi8bgODg5MFpVIJGxYgkuQJAvxOT8/v5V7PTY2\\n\",\n       \"plwuZx50DF0ePXqk8fFxpdNpHR4e2mtqtVoqFAp2bZdKJQ2HQ/ue4Me9Xk+xWMxEtvl8XmNjY6b2\\n\",\n       \"zmQyevXqlfL5vAVXUmZgWQsSA1dbkj0IsVjMkrwqlYohHJlMxiRqZ2dnRvWMx+MWLBoIBPTkyZN7\\n\",\n       \"1cwjhWaAXEgygan35ICeCJOOcEu0dJQKdOetVsuuf6aJ0BjPzs6MTMP1OxwOb2XtDYdD4/iiViEu\\n\",\n       \"gaB0LGAJtOT/OcnxcPNa3cIRxugGWAvRKA8EPz+oCyNnb9Qxr5tJKfU4jSoNKp4YOHuSQoUaR/oy\\n\",\n       \"0J0bBUYcE0xvpDD2wZIMBcH7465rpDYzOjw2JJgyTdNgMNDi4qLxHiSZ/Ws0GrUOHGFqOBw2RTZ2\\n\",\n       \"A+jtcDDCoTMcDmthYcEoqKSasnGRWUky4g96PdTgkINQpcCdILfbi7rAicb5iDLULiPnAAAbvUlE\\n\",\n       \"QVSITYz7Etxtr2YQYQIU1mw2azkmk5OTFl2M0p26GjUNpCfyDZFUwYpD2we1ttlsSpI9CAgNODAm\\n\",\n       \"JyeVSCRULpfv9f6P1Gbm2gOrJQiSXBOSlCCbY91KPccbAd0RPwmaJeilJycnRkJC5Xx+fm6xZPCH\\n\",\n       \"IbRD8YScj2sSrykWi91yY2q32/ZnSPZ7vZ4kWUnC6Y27PfwMhiZY24IaYA5DApTXmgtivDdkh+8P\\n\",\n       \"6T4Wi1mTx0PDTRiNRo0HzuGBRQLjfgZSCAho1r3uU/dFM0ZqM7PpMAakJpWkWq1m5Bmc6PFqoGa+\\n\",\n       \"uroy1bDruup0OqrValY+EFMsyZw28VzGdBFLAmxrQ6GQ3rx5Y+6jCAIwGifWbWdnRycnJ0qlUuZs\\n\",\n       
\"71WihEIhc19CuYJFAIrvbrdrkGMgELA/R7iLRwf2sefn5yYgYITOx6Xr7D++Jmy/4XBoMCeYd7PZ\\n\",\n       \"NGdU4ERJRiRiotnv97W9vW1TRDSLDLD29vbu9f6P1GbGXhYd3WAwUKVSsQaFDA5q2Hw+r1wuZyfx\\n\",\n       \"8fGxEY4uLi5ULBatufNycyVZAiq2AEzOkP6gCfx5ISo6PgwaOU0JvUHpwVXPmFm6tg7rdrsKhUJG\\n\",\n       \"hAfL9fv9FsJDxLLXmKZWq5kaG/k/1lx4KYODcwuMj49bvBun8GAw0MLCghYXF43rUSgUTJzgDQHl\\n\",\n       \"gcKEMZFIqFQqyXVdO0wo9zBhv88aqc3MYAOTleHwOh/v4uLCNH6cmJyMOzs7doKEw2FDNxCiQldk\\n\",\n       \"EkdUA1o2x3EsR4Tw+KurK6XTafsauCJheXV6empKDzjEY2NjFlOBkoXNwegZPZ4kG9hALe10Onr5\\n\",\n       \"8qW5B3lLG/K2k8mkMpmMfZzohsFgYEMNbxPH5+I9glk4zaTP57M4ZeImBoOB4vG4/f75vfn9flUq\\n\",\n       \"FbVaLUUiETswdnZ2TEjhdWm6yxopaG5+ft42FL9EGqpsNmtdM6Sjcrl8yzWT0xtJPIw33kTqQklm\\n\",\n       \"gohLPMMU13W1uLiotbU1I50jYmVogqkiSUyYwFDb7u7uWoYgNTOGjtPT07ZRksmkZaiwsXBRokkj\\n\",\n       \"Ai2VSpm7J6bf5J9MT0+bC9RwOLQxdCgUMm0ktr7YF5yfnyuXy9nn4qyP3hL1OFFt+/v7yufzNkSB\\n\",\n       \"xJTJZDQYDJTJZO5NNBqpzVytVs2EZW9vT9lsVmtrawoGg5ZljaNmNpvVu3fv1O12Dd8NhUKKxWJq\\n\",\n       \"NBrK5XJaX1/X4uKiJSYxco5GowbVYdiC1RSEG9d1tbm5aWHulDZnZ2cKBAJqNpvGa+h0OkokEkYS\\n\",\n       \"8jox8WCyaS8uLuzvUPJgEEMNurW1ZRg7LkWJRMLIROgMiV8DWqxWq8a5GA6H2t3dNUdSEri2t7eV\\n\",\n       \"y+XU6XSsvMFugXq51Wppb2/PAjU3NzfNSQqvEhptDp1ut6v9/f17vf8jVWZ4nfCZMiEJQrRJM0JZ\\n\",\n       \"wcmDAjmRSNgJBUZMzQxWipSK65uUJ9QqXmgNj2TGyYTRgCyQ/ee6rimp2QTEkYFDUx7x+YyHXdf9\\n\",\n       \"BdgOU3HorpLMaN1rWg5C4/V6QyIGOoF/NHpELGips6nf+Tt4YvMzghJxkvM74+vyujCmuesaqZOZ\\n\",\n       \"rn58fFwHBwcWmtjtdk3Oj21XsVjU2tqaQqGQtre3LYH1k08+sYB48keurq7UbrclXf/CqQ1brZbB\\n\",\n       \"YHxdfDW81+fPfvYzG0sTwg4Bajgc6vXr1/ZGIxY9PT21FFaYe5999pmePn2qt2/fGtKB0WKlUtHK\\n\",\n       \"yorVzh988IH29vZuGceAI0NFrdfr2t/fN5oq9fnr16+N24LDvSR99NFHBqsB8WGfy4bEIhdfD7jN\\n\",\n       \"RDB/+umn9jPB8d7Y2FAqldKLFy/u9/7fb/v8Zi3Gp9SYnHLSlxJ5rKDIdR4fH1c8Hr8VOYbrOyRy\\n\",\n       \"Mvq8RHWc7BG1gnxgKg6xZnt7274/DDGvb0YsFrP86ePj41vaunQ6bUhIv9/Xe++9ZyR9It3Q3kHT\\n\",\n       
\"7Pf7evbsmS4uLlQqlawZgy8cCoUM2ZiZmbFbAwMbyEoYvRDgeXV1pVwup0KhYA8b0cYIZTFZhIVI\\n\",\n       \"g9nv9y0vBQ8+7BaOjo5ULBYVjUbvjWaM1MmMBhBCzenpqbLZrIX1lMtl49WSUw2TbHZ21poWyEpw\\n\",\n       \"H/DMoEGho2dChsccdTdwGE0S17yXu4EUX5IqlYoWFhaUSqXsFKO+x5mUm4N6vVAomLUCrDj8Orze\\n\",\n       \"0MRAcFLi5o8ns1eYGw6HLQg0FAppfn5e5+fnhttDV00mk5qenlYymTS1O+UECV9YLgD7MXghbSAe\\n\",\n       \"j2tra8u+P5HE91kjdTJ7hx107G/evLGmaWNjQ41Gw3i7yJQkmZz+5OREL1++NLYXCg82K40ZWjYe\\n\",\n       \"DnzhMIEhtcnn86lardoQR7r2ZUOKj0EhnBLGzRMTE2YW3ul0tL+/ry+++MKmcKenpwZFoiBJp9M2\\n\",\n       \"5SSAc3d3V1tbW6rX65Y0y7QP4hRhnycnJzo8PNRnn31mnnWXl5c2DKJcQ2GCsrvRaNgAqNfrme4P\\n\",\n       \"Xjh+Gqenp6YhBOFptVoWQQyz8a5rpDbz7u6uWXNNTU1pb2/PVM1ch5lMRqVSySiLfr//FgkmFArp\\n\",\n       \"61//ujUofAxL206nY40kmR5YvGKIuLu7q/HxcW1tbVk96NXuMczAooCNxGvnxAeqk2RstP39feM+\\n\",\n       \"YEwDTXN1dVWnp6eampoyLjVQHthxp9PR7u6uGUsyLcTyS5LZ3UK+wgAmFotZ3jcIije/G9szmm9Q\\n\",\n       \"HXyXd3d3DTt/9+6dBf5kMhlrEO+zRmozT05OWgbgxcWF+U8wwma4gDKZUwUFBGtzc9PqXsdxTD4F\\n\",\n       \"5xlqJ2oLRtjHx8dKpVKan5/X3t6exavhKQf3+OjoyGpGBjY4CtGwSTJyUqfTUb/ftzwRPJrxiGPz\\n\",\n       \"Ya1FnY9mkVsKrziGIXwNHmxJJjDFOxozdbw0sA/gRvOaJQJPRqNR01Eiz8JFlIMF0QDun8Q832eN\\n\",\n       \"1GZOJBJ2pVOnTU9PG0OMRmRmZsbcO4HUvKcSv2w2F9kbp6enarfb2tzctFgzJnCcRpxMiURCnU7H\\n\",\n       \"rGXJuwbWQ5PHZoCB1+/37b8xCPf7/cZXDgaD6nQ6xuHodDrmqVwqleznApkAO2+32zaA6ff7t6xk\\n\",\n       \"vXYAPp/Pmk4ePhygqMPRHoJHx2IxJZNJSTJCviRj6qExpLQBPqW/YbDjjbO4yxqpzcyAIRwO2xgX\\n\",\n       \"3R61n+M4Vm9SInDFQpDhKmU6SLfuOI5yuZx1+6SxIsPKZrO3BKSZTMbyAQeDgfL5vGG3NE+u65pI\\n\",\n       \"9vj42Nh4Xo8579WOe1Cn01E4HDa5UzKZNAsujMbxf87lcsbmY6N7rbp8Pp/i8bjm5+cNt0Z0K10z\\n\",\n       \"9bDdKhaLFnLE7xmlNTgx1rmgI6AtcFHoJeCC8HoeMk08C14ALLdGo2FTMghAMOey2ax8Pp9evHhh\\n\",\n       \"glTAfJoW1NG8AePj46pUKmYpRd0bjUZNQXF4eGh17enpqfk6U3/2+33z0SBlSZKN3iHpey3DqHXB\\n\",\n       \"eznJ4W1L11wNEBwSoqrVqiYnJ01LCFNNkim0A4GA3R6VSuVWrBxQ3tnZmVZXV9VsNrW5uWkPH1NA\\n\",\n       
\"IDjKE0QP9AcIcuE8g8ow/MHD5L5WAyO1mVEvQLKfmZm5ZQyIDxuNGK6a8Bxw3A8Gg1YvgzF7TU9g\\n\",\n       \"nUEckr7MU4FTAQ7MZI0oByZy3BySzOeDWhxCFLh3IpGwkTz4ND5wvMZ0Om3cEdKd8L1AXMBmYQLI\\n\",\n       \"BmJjBQIBJZPJW2lU2OhSSsTjcXMqoozhNXW7XbVaLbs1+H6RSMSaR3oARu2SjPvMzXjXNVKbGWrk\\n\",\n       \"xMSE2u22XePeuIHx8XGT8WNyPTU1pZ2dHWugQBEI60EpAQbLZqvX68rn83bieYN5/H6/NVyHh4cm\\n\",\n       \"FkWNAY86GAyq3W5rYWHByDdQLREWHB8fGxGKmrXX66lcLpuD089n8Q2HQ2PujY9fxyvTdPl8PkuW\\n\",\n       \"hXdBqXR4eGjOqWgmq9WqlWxEEtMcImpAkoV3M5pCSUZuAmkh1u3s7EzJZPKXUmJII7aZab6oKYGM\\n\",\n       \"IJZzNVOngmIg7iTsBigJBYg3S3pyctKw1lwuZ+NaThmGE964MngQqVTKBgcISfn+zWZTiURC2WzW\\n\",\n       \"XhvTQiiiPAC4bNJo0cAlk0m7acbHx42miWl6PB63PgB5P/AYpVYoFLoVwsmgx+fzGXmez+f05uf2\\n\",\n       \"2gnDT0E6xY0F9Hh6eqrFxUU7oTudjr2mu66RmgBiBA4H4ejoyPzTgLfgEMO5QKMWi8UMyqIM4R9M\\n\",\n       \"vNlQfr9f3W7XOnJgPST/uVzOqKachLwGMGmQClALTLglWQlwcHBgXwtFCRwKdHxYWwHFgdxwolM6\\n\",\n       \"VKtVo6BeXl6aPwY178TEhGHtyL3q9bqy2awODw/tNKaWpv7nlK3X64bQ0PCScksWIpK1i4sLU4jT\\n\",\n       \"eJMicJ81UiczDC58Gri6vKLTYrFozYrrulpbW1M4HLag9UwmY1ROBiGA/8PhUO12W5FIxK7iSCRi\\n\",\n       \"JxQTNpz1qUOpXwkI6vV6t7BkvhecjVarpbOzMzOXweET7Jdam5E4J3W73TYbBdJkgR0Jcef7oAzh\\n\",\n       \"9mEUjRoHZhx8i4ODA8OwvTcesikQGgwZUaLTUyAcTqVShq+DxMASRPFz1zVSJzNUT0k20s1kMgY3\\n\",\n       \"4UCUzWbtysMzmSndxcWFqbNTqZROT09tgkaDh5EKjSLwXyaTsSFELpezpnBjY8NOK+iWXrErJ/fP\\n\",\n       \"GycyXUulUpKuSwL4EkzSTk5OTOeIWxGlCbZfnOQQ6UlWhe1HBgwELWy24vG4PSiZTEaJRMLi1+C9\\n\",\n       \"8L1o/BKJhLnpT09Pm0mjJONKn52daX5+3hAdvt+DP7NnMahoNBomfQL+KRQKJtkBjQDEhyMcjUbt\\n\",\n       \"ZCEtKhaL6fDwUJlMRicnJyoUCobrBgIBLS8vG7sslUqpUCgol8uZW2etVlM6nb7F4CPYHX0fDwlB\\n\",\n       \"PoFAwDIEQRIkmX6Oh5P8bvjOPJjU6ODSJLXSiDLiDofDZkJDsA/ly+Xlpf0cOPJTViBvOjg4MANz\\n\",\n       \"qK1g1jxAPJzYCsdiMX344YdWVx8cHBhz775Eo5E6mff29kyA+vbtWz169MjGzo1Gw3jJEIi++OIL\\n\",\n       \"w1tRcqD7Q5SJvs9b4wH0E/gTDofVarXME5nmh1N4bW1N0WjU/JiDwaBt7HA4rJ2dHc3NzZmyGs6F\\n\",\n       
\"JPPFcF1Xu7u7yuVylhLV7XaNegmZh1IIWA3yz9zcnEFnvV5Pp6endhJSNzcaDRtxX15eam1tTeVy\\n\",\n       \"2YZK3FwXFxcmzvV6kwwGA/l8Pm1vb9vno67hhG40Gtrf37dJIU6kkgwPv+saqZMZQjmNBJvL5/MZ\\n\",\n       \"/xbCDSbegPmc0LjEw4WALcdVDD4tyYSq4LaoqIfDoSlNoDziUMqpCRHn8PBQh4eHJlAFWsRUkBvk\\n\",\n       \"/PzcsG82I03ocHidPsupeXh4eOtn7/V6NkJHKgWc1uv1zMkemy4eYJTaICOgHNBMr66udHV1ZQ8J\\n\",\n       \"AymclLzfy2v4jnMSDSMYOA/wXddIbWbHcZRIJKzpYsIVDAZtfDw1NWVKj1KppE6nY00UGSSwuILB\\n\",\n       \"oIrFomGpZJgUCgVLHgXb5mrGLBE/DLJCaL6wqOVEu7q60uzsrJ1y0jW7DL4whCDMVqhFQURAbgij\\n\",\n       \"Jx+EcgRBbb1el/Rl+CUj50gkYha7bE4Og2g0qng8rlgsZicrzSoPPqUDpCdc+KmHq9WqfW1SbLkR\\n\",\n       \"gDnxLXkI6PEsygjGsJOTk3a1UxcChXEaQaL3cmzR6yFLIvWUMoPrFuTg6OjIRs9gsVy9cIWpxTmx\\n\",\n       \"sa+dmZmx6xaUhSubk5TXSv3PwAEyPafxwcGBTQ2BGPkeuJYy4MG2FzU2p2Sv11M+nzeEBrSC7w9d\\n\",\n       \"liFUNBrV8fGxTk5OzMwdtTmCAQ4M/o03Hli2pFvY913XSG1m6tVUKmUYM/UfUzBCKrGe5d+lUsli\\n\",\n       \"0Rhbo9zgTffmccRiMdO7cZ1DPz08PDR2Hrgv0zfgME5NxtFs8kAgYLcL3nPIn6TrJheus+M4BnP5\\n\",\n       \"/X5T1XAzeTdTKpUyZhuvzcv0Y2oHXRPkBYbb7OyshV/yoDLwuLq60uPHjw3K8/l8WllZscYQf49Q\\n\",\n       \"KKROp2OhRuVy2dAWGtD7rJHazN43AukPJzJlAB5zIBS8MbVaTZ1OR3Nzc7cyo3GRR4HMlAyneDa4\\n\",\n       \"67p6/fq1EomE5ubmtLm5acQayhGgNDYvHGGu89nZWWOYMYaenJzU7u6u1cH7+/vG/qOuRZ0tyVKx\\n\",\n       \"4BtfXV1pfn7e7GO9rxUMmZtmampK1WpVwWDQsPFYLKZ8Pm8REyA3HBKo2JvNplFdJyYm9PnnnxsD\\n\",\n       \"kN95LBYzVGhra8tKEMx1aIrvukZqM4NIQG2k1mTiRaOWSCRsLAwiEYlElE6nTfrU7/eNdE+NTSnS\\n\",\n       \"bDYNl93d3b1FMAI9oRnFS5mOneAaXh8km0qlYr522OZSjszPzyscDqteryuTyajT6WhsbMywdG8T\\n\",\n       \"x2QzEomo0Wjo+PhYe3t72traso3sOI6KxaIikYgNl8ixLpfLOjs7s8FTr9czJIXXCh2Vm4T6n2mn\\n\",\n       \"JMOgu92u4vG48Um2buLVSqWSPXjD4VDVavWBnO9d2KRCjaSbZiM3Gg1TRLCRJVlqExuOpoYxNSNw\\n\",\n       \"jAPxK4YQj66QUwoyO9wMmjVq4lgsZjUvGHc8HjfvC5yJQE9IcoX/izUApQS+bY7jmAIcxUcoFFI8\\n\",\n       \"HjdPDn5HQHsYuOAxwgPrbUi9jkgMQrxZKgyAuHlAM3BsOj4+VrvdtqHO1NSU9vf3jTuCKujBnsuz\\n\",\n       
\"aLTwNhsMBvZmc9Wn02krD5iSzc3N2Ym2sLCg/f1984PDLEaS0T7p7lkkTI2NjalSqRj5hgYRrSG1\\n\",\n       \"O280D5xXQoTEiXGvd9OtrKyYqQwDHx6aJ0+eGOQIEw1NHnAlpyq5gFNTU5qbm/sFC1tsFkibhUI6\\n\",\n       \"OzurFy9eWNOMDpCyi6koTSDxyfBZsACDlgpNAHRmYWFBn3322Z3f/5HazJC8SVpl80iyN51aV7pO\\n\",\n       \"p9re3lY+n7eYhFevXplbveu6Rn88Pj42eO7ly5daWlqy0Et86VzXNVd9pnbBYFB/8id/osXFxVu8\\n\",\n       \"h+npaZu2kTIVi8Xsagd263a7mp2dtUy/Uqmkd+/eSbqewNEEVqtV4zPDQX7z5o01gMCKYOR8X5z4\\n\",\n       \"vfKnVqulYrGo/f19BYNBy+dutVo6Pz/X7u6uORbx+zg5OTG+B+qS8fFxsyFD2c7D/vMj73a7/YAz\\n\",\n       \"exeDDt5AQi45AagLa7WaudZjE7u2tqZ+v690Oq2LiwslEgnzjCByDUlSuVy2EuTJkycWX7y5ualk\\n\",\n       \"MmlBP5iFg5RQPoyNjWlnZ8dujEAgoJWVFcPDwVtbrZampqb0+eef26Bhc3PTcqkLhYJ561H3A0fi\\n\",\n       \"tTc2dp15zX8zCofBh9k6/tIbGxs2Wo7H4woGg/ZvXE+JWoaJSAmTTCYNVgRnhlaL8z7cj1qtZiUK\\n\",\n       \"2DSawbuukdrMmLGA7wITodr2+/12ivK5lCbf+ta3tLy8bE0ZmCcEdLgGJL1Go1EbqBA2ubS0pFgs\\n\",\n       \"pn6/b1AWWDHIBMJONgcyJzwwwI6xuiWgExvY+fl5Q1m46r3GijRpi4uLphd89eqV3QrAX9/85jeN\\n\",\n       \"gA/X23Ec5fN5NRoNOzVRrbTbbdNFcioDX9ZqNeNpx+NxRaNRlUolI0rBRGQSG4lE9N577+np06dW\\n\",\n       \"zrXbbSNU3XWNVJmBI7ska4b4BVUqFePjktFB/YgrpiQ7fY6Pj83vLRKJ2Cg8kUgY/fHNmzdGNg+H\\n\",\n       \"w6pUKjb9kmQNHU0WnA1Yc9SO3kAbrAEYPEgymVSr1VK5XFa32zWRgVeEQCk0Nzdn2sBWq2XNrNci\\n\",\n       \"gJMVMxt+fzRpl5eXajQaWl5eNldRYtWYbEoyOLPValldziLSgmkpURk0mAx5oNxCB7jrGqmTGegN\\n\",\n       \"wSpDAUnGoiOMJhAIKJ/Pm0SK63F+ft7G2fPz85qbm9PV1ZXJliDpF4tFyzABrUgkEqb4QPDKm4tr\\n\",\n       \"kVedUSgUlEgk7GuAyUqyiGSGJ9K1/KpSqSgQCGh2dlbSdVN6eXmplZWVW4oRtH3pdFrvv/++GZDD\\n\",\n       \"Vtvb21O5XFYoFLLvA+UzEAgoHA5rY2NDPp9Pc3NzdprGYjFls1mThPEQYr8VDoftAeXURZkdiUTs\\n\",\n       \"ZoH2SVZMKpXSBx98cK/3f6Q2M8gDLvk/H3PAyYfoE6yWLh5SEFg1VlkA+0BTjH2B/hCB8mAwNYMD\\n\",\n       \"7XX5YSQM6iLJ8FlIQahbQArQyAHz8TAwiKB5ZQLoDajEK5mfm+9BJh8PO8iIV2RLLe793QGfwQLk\\n\",\n       \"AWV4wykbCASUSqXs1oHY5PP5jEgF/Ie8amNj417v/0htZr/fbx5nXKHIplCgMDjwJiOB5+K9zJXN\\n\",\n       
\"NMsrMG2320axlK7JMvV6Xb1ezzKmJVlNy/gbDjP5f9T2MMoGg4GmpqZ0enpqUCC2BJKs5oT15/P5\\n\",\n       \"zNWfYBzGxlgAICBlosnmZQNx4vf7fbNEoImGigrxiIcJEQEwICUVcCKjfxAawn/Q+PE6+J0QxSbJ\\n\",\n       \"fu67Lue+cVW/KctxnNH4QR6WXNd17vL3RmYzP6yHNVJlxsP6/3s9bOaHNTLrYTM/rJFZD5v5V7Ac\\n\",\n       \"x8k6jvMfHcdZdxznI8dx/shxnCXHcVZ/3a9tlNZITQB/E5dzjeH9Z0n/1nXdv3bzsa9Lup9JxMP6\\n\",\n       \"hfVwMn/167clDV3X/T0+4LruqiRLPXccp+Q4zh87jvPxzT+/dfPx3M3HP3UcZ9VxnD/rOI7PcZw/\\n\",\n       \"uPn/zx3H+Qc3n/vIcZz/enPy/7HjOMs3H/8rN5/73HGc//2r/dF/tevhZP7q11NJH/8pn1OT9Odd\\n\",\n       \"1x04jrMk6T9I+rakvy7pv7mu+89vTviQpG9Iyruu+3VJchwncvM1fk/S33Vdd91xnD8j6Xcl/TlJ\\n\",\n       \"/0TSX3Bdt+r53JFcD5v5q1//L0D+hKR/7TjO+5IuJS3dfPxnkv6N4zh+Sf/Fdd3PHMfZkLTgOM6/\\n\",\n       \"kvRHkv674zhhSb8l6YdMJm++piT9RNK/cxznP0n6w1/KT/Qbuh7KjK9+vZD0zT/lc/6hpKrrus8k\\n\",\n       \"fUvSpCS5rvt/JH1PUkXSHziO8zdc1+1Iel/S/5L09yT9viRHUsd13W94/nly8zX+vqR/LGlW0seO\\n\",\n       \"4yR+2T/gb8p62Mxf8XJd939ImnQc5+/wMcdxnul6c7Eikg5u/vtvShq7+bw5SQ3XdX9f15v2Q8dx\\n\",\n       \"kpLGXNf9Q12XEN9wXbcnadNxnL988/ecm+8hx3Eeua77M9d1/6mkhqTiV/jj/lrXw2b+1ay/JOl3\\n\",\n       \"bqC5LyT9M0lVfVmC/K6kv+U4znNJy5II9/htSc8dx/lE0l+V9C8lFST9T8dxPpX07yX9o5vP/YGk\\n\",\n       \"v33zNb6Q9BdvPv4vbhrFVUk/cV3386/yB/11rgduxsMamfVwMj+skVkPm/lhjcx62MwPa2TWw2Z+\\n\",\n       \"WCOzHjbzwxqZ9bCZH9bIrIfN/LBGZj1s5oc1Muv/AuHZAPr9VeA9AAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x114254ad0>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"plt.gray()\\n\",\n    \"plt.matshow(predictions_df.values)\\n\",\n    \"plt.xlabel('Classes')\\n\",\n    \"plt.ylabel('Windows')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Now let's take max across all windows and plot the top classes.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": 
false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"name\\n\",\n      \"person          1.835771\\n\",\n      \"bicycle         0.866110\\n\",\n      \"unicycle        0.057080\\n\",\n      \"motorcycle     -0.006122\\n\",\n      \"banjo          -0.028209\\n\",\n      \"turtle         -0.189831\\n\",\n      \"electric fan   -0.206788\\n\",\n      \"cart           -0.214235\\n\",\n      \"lizard         -0.393519\\n\",\n      \"helmet         -0.477942\\n\",\n      \"dtype: float32\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"max_s = predictions_df.max(0)\\n\",\n    \"max_s.sort(ascending=False)\\n\",\n    \"print(max_s[:10])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The top detections are in fact a person and bicycle.\\n\",\n    \"Picking good localizations is a work in progress; we pick the top-scoring person and bicycle detections.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Top detection:\\n\",\n      \"name\\n\",\n      \"person             1.835771\\n\",\n      \"swimming trunks   -1.150371\\n\",\n      \"rubber eraser     -1.231106\\n\",\n      \"turtle            -1.266037\\n\",\n      \"plastic bag       -1.303265\\n\",\n      \"dtype: float32\\n\",\n      \"\\n\",\n      \"Second-best detection:\\n\",\n      \"name\\n\",\n      \"bicycle     0.866110\\n\",\n      \"unicycle   -0.359139\\n\",\n      \"scorpion   -0.811621\\n\",\n      \"lobster    -0.982891\\n\",\n      \"lamp       -1.096808\\n\",\n      \"dtype: float32\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.patches.Rectangle at 0x118576a90>\"\n      ]\n     },\n     \"execution_count\": 6,\n     \"metadata\": {},\n 
    \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAXAAAAEACAYAAACqOy3+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvdmPZVl23vfb0znnTjFHZGZlZmVWVdaQVd1FqmVSomjD\\n\",\n       \"Ei3BEiVYFgzD0LMBCzBEw4IN+C8wYMCCAL/IT/SDn/xkA6Rk0pxsUjRpkt0NsqeasirnITLmO51p\\n\",\n       \"D37Y+9x7IyurSRhsJxuM1Z3Iyhs3zrDP3mt/61vfWkeEELiwC7uwC7uwHz+Tr/oCLuzCLuzCLuz/\\n\",\n       \"m1048Au7sAu7sB9Tu3DgF3ZhF3ZhP6Z24cAv7MIu7MJ+TO3CgV/YhV3Yhf2Y2oUDv7ALu7AL+zG1\\n\",\n       \"H4kDF0L8XSHEx0KIz4QQ/82P4hwXdmEXdmF/2U38eevAhRAK+AT428Bj4I+AfxxC+OjP9UQXdmEX\\n\",\n       \"dmF/ye1HgcB/GrgTQrgXQmiB/wX4hz+C81zYhV3Yhf2lth+FA78KPFz596P02YVd2IVd2IX9OdqP\\n\",\n       \"woFf1OZf2IVd2IX9/2D6R3DMx8D1lX9fJ6LwhQkhLpz8hV3YhV3Yn9FCCOJln/8oHPg3gbeFEDeB\\n\",\n       \"J8B/AvzjF7/0P/7z/w4kKGOYVxWPnz6lalqUUmRZRq/XAwJVXRICVGWNc548L5BSUZYlbduilCTL\\n\",\n       \"DFJJsqzHZDJhMBiQZRlVVaG0wjpH0zRoYzBaUzY1OIuUEqUUSikIgRACvbygrmuEDwghcM4hlMSH\\n\",\n       \"AAKcdWidASCEQAhBCCEeA8iUxDsbB1ebRTgipcR7jzEGIQXBe4QQIOJxfvO3fpu//XN/CyHAWoeU\\n\",\n       \"AiFk/A4gECihQAqsc/jgkVrT1DVGaaSU4AMyQBCSFgEEjJJorZFCEghY2+AJ+BAQUlAUPTJj0EKi\\n\",\n       \"lUEEiW1a2rYlpDGxwRKT3QEQ8V4IeMAYQ5ZlhAB4DyEg0tg4D8ZkgMBaS2tbEJJ0afGaEbjg00SM\\n\",\n       \"9xrSuP7rX/lX/PzP/wf4NL5CgPcBKQTee0IISBnHSEqJlHJxzQAhBHwaZ+89Ll3fi9/rnqX3LOYE\\n\",\n       \"gHceCAghz50HwAe7OEc3DzpbPbYQgtZavPfpmkHK5fV219ddkwua1gWUlCgt8M4hEITgQQR+/Vd/\\n\",\n       \"mX/wD/4Rzlo8HqHicdrWLY/j47z13i/OoyQEb7HWEgQE4nwQQi5GXhGfRyAQREixtERqhZSKqqrQ\\n\",\n       \"KkNqRUBircVZiyRer1Rq8eyFWI5Bd5/OOYIU3SCdG3spBIIXfJRwyzH1Hu/FYl0sxj0egF/65f+N\\n\",\n       \"f//v/YcgQrobn54/+PR38PGWXPDx8AFC8AQXFmPn04oVUiCVjHM63kV87unfSmnEyn0JIWhsG387\\n\",\n       \"CEK8grh+fCAE4riuzJHVv5dz0H/p83/53/4zvsr+3B14CMEKIf4p8H8ACvjFlylQtgZ9qrZGZZrh\\n\",\n       \"YIPd3W1aa5nMZhweHzGZnDKdTvF1m5x2dDBaACGQaYVWEh/iQ3bOMZ9Pcd4ymZyxsblJ0S+YTqc4\\n\",\n       
\"59IC8TRtgwiB1rWooFBKIkWaqwHatsF7h5YSKQUBQeMafPBkWYHODEIE2tZijMFaj1KK1sbFIYsM\\n\",\n       \"kQbf2hYh08MQGiHB+SY6BgEEAWmj8N7hXBPntQg4LxaOQQiBRBG8AC8Wi034wKDo4b0j+IAQ0Tl4\\n\",\n       \"wCiFUSo6DecI0ifnE1Baxp8Fh/cttra0Pm4Swguc9WilEULig48bg5IordHaoHScNkEIlFYLh+Wt\\n\",\n       \"wzu3GEvvHZPJmBACWZaTGRPvNQDJiTnn8DYsFrn3HoQApYBACBZCWsDEBRSIjkLKtICDx1mPF9Ev\\n\",\n       \"xH0xOiLrk2MTEqWWm4d3Pjoplt9FgHUtTVsv5mnn0H33PBIQkmnjXv0jVj5bbLxCgFBIpZAKVn2U\\n\",\n       \"9wHnIeARPo2hUGnBW0JQeO9QUqClwBgNBNqmRAqB1goXLN5BZjQgcM6S5QXeS7wLacOQiOBASkxR\\n\",\n       \"RMfrPUIqnA908XAILu3RgiC6Tc4nh2fRWuO9xTUWhEBJickN0oO1Nm6OKl6ztXGDQ0CwadNSauHA\\n\",\n       \"F5tcemAibZCrjkt6t3g+cdLHDTl4SwdrSBuQEqBV+rYAj4Ag8CoAEoLAuc6hEsFRCHgvQK48O+/T\\n\",\n       \"tRCBmPcJSMXvaC2jM/Y2HRdkWndGKQIdcAhpicfNyRM3b+/jPTsh0x4WvxvnTLyu5AT4s7DRPwoE\\n\",\n       \"TgjhV4Bf+WHfqat5XPzWcjI+o+j3kEqxu73BtdcuUdctk8mEtbyPEIKqqTk9PeP09Ix5VaKFx3oH\\n\",\n       \"IaCSk/NIZrMKk2WMx6c45+j1ekgRyIyirmucc+R5QdbrR+fhHDaA0TohDUm/KDDGpE1hTi9PCDwI\\n\",\n       \"mqYB4k5praAoCvKsRwiB2WxGAExmsNZSNw1aK6y3mJRtkAJkQk1SaBZpCCHxCLy3rIC5hSk0Xq7s\\n\",\n       \"0iIiCB/cAm2p5GwCAt94vEwOz3mEAJObNEkjwgoh0PqW2gUkIIWKi1KbOH9EcpXKLJBcVddQ17Te\\n\",\n       \"LZBHnudorSMaNzmZNmitKasZ6+vraK1p25bZbMZsNsdai1ImRiM6RkBtU59HxbYlBIcUAR883gV+\\n\",\n       \"4b/8hT+fCfpjbM+A/+J//6VXeg3/4n/4xRSdWpyz4GOEJWyMlDrk2UU9QkTnLaWjbi1KyMV8kSqh\\n\",\n       \"WA9KRue1iGz9HFKk0M0/2S2XFC0sEavH2jbNn2VUE92gJBCjAohOWIr0My+QLCMo5/3i+l0AHdQy\\n\",\n       \"evIBH0RyvBF8pZPHtYBcuF2hRIIZpIi3u8+IwoOIhwhBIFNkGUJAipA+D+fXw1fYj8SB/1lM93Ia\\n\",\n       \"a1lfH1GMBiitaBuLrRqaxtHUFVQ1k/mMPC9QWnPt8i6vX72EdZ7JdMqsLHHeUVUVVd1QNg2Xd7Zw\\n\",\n       \"Pjq0LOtTJPRunWWYG5x3GG1QJku/V+GcjQPowAFN6ROVk9Pr5chMkuUZWZZHp9c4yrKibW0KpQKj\\n\",\n       \"wYjNzY2I3rWhbWpOz87IMkPd1IuJGkIMe633eNviXEAKyes3Xqf1Pu7YLFGKVgofAs47hAWl0wwO\\n\",\n       \"oBBUto3ICI8yktour0kIFVGq0ggBTdOgjKZtWkLaSKy1GK2QSsewLwRa4SA4pNAE0oR1DoFa7jeB\\n\",\n       
\"SF0l5Gxtm6KXSBPFjcKh1HyJrIKg6BUrC8KBjTSFDzYtujTpvefNmzdpmuocPfGX3f7mq74AQMno\\n\",\n       \"8JSMFKHzHqN0ijSTC4uPG6klUkZKynqfogjwwVPXFcCCnnLSoYVESBmdqs6Wjkyo+GcFtYcQ0vwM\\n\",\n       \"vPPu+ykKCckBR3BBECnS8RH9AkLKBcjtwJ8UiZp0ljZFbUopBCICjY6Gcy5FSkv6LEY+LkYz6d8y\\n\",\n       \"RQIxoI4UkweUjr/nkAu0DhCCXGx6HY1FOtYPs1fmwO8+eYgxhpPZGIB+3qPf64MPOOHJkCAVyqgY\\n\",\n       \"JfmW4BQBhZaCXq7Z2NiL4ZdQWGcjfxWgLEtms1l00GVJOS2pm4Z+r4cMgf6gh84yqvkMGRLP2RFk\\n\",\n       \"BEajIVJKZrN5QoAwm03ITBZRaBUdVV235HkPYzLmTJjPSwICY7IYpjpHVuRY51FEuqGuLVJpWucZ\\n\",\n       \"9EcMh+uE4Ni+dCnxiYLx+BSIznRWlQQfEmKJHJ11bdwk2pambcnynKapKWQvLhoJMgict5Tzkl5R\\n\",\n       \"4EPc+b216DzDOsfpeMysnLE2GrCxvo4UkrppUSncU8QQXCgVJ7+1BBc3N5FQSAz7wsIp13WNx2Nb\\n\",\n       \"S57rBbVDiNxlR2fpLC14Ik3V8c5KxU3H4bn9wXvIjn+WF04c/mI48IhEPRABh5cStIDkhCBGipo4\\n\",\n       \"X4WInLD0AhXAW7fIXyilovNzDu/Adw41oe2lvw7I4BM6j+hbIOKaF4Fbt95NGwiL/IpITrKLJhf5\\n\",\n       \"C+/xwa1QHNG5dseSMjpka1tynS/uqXPeEZDErWKZe2GxmXjv0qYkI33SUYDOEnzKl4kU1bPMpUGk\\n\",\n       \"omL+SCQ68i8oApfG0PrA6dExg16fZ0+fg/fsbu+ghSZYS24ysl5M3OV5Rt02aGEwKiLpumkSF6px\\n\",\n       \"wVMUOd47NtbX2FgbYVsLnRMSgiLPOTk5QQhBf7TB1voGdV1zfHJCXVUYkyGEIMszCCB8YDqZoguD\\n\",\n       \"lpFzdrbFNXWkU+ZzbF1T5AVmbZ35ZIwXku2dXYKX7O8/5emzxxR5zuUrV+gVfUIIjIYj1tZGnI5n\\n\",\n       \"3L1/D6UUg37ksgf9PqPRGk1T470jM4a6rhkMB5hM0+/1OXh+gA8OqSRa6JjA7fXiZFQK6x1VWaGU\\n\",\n       \"pugXWOcTWgEhNePJjFlV0rQts1lF0zQ0rWPQLwjWoZRGSYkQKiEViVAabz1aRppHa423lrrxiRtN\\n\",\n       \"jl0lTldB6yJ/HZ2vSIliFk65C0uVUmgdHbp3LgbHCWFZbxF+icwv7NWbtTY6pOAJBLxWyyTgIgeQ\\n\",\n       \"OPSUpAfouANtYlIdAt75yI8jIyWTcoZCCJQx5yiSjj4BFs64c/AS152AmC4UCOEjmkEsnTkCZWRK\\n\",\n       \"YAZEl7/pqBspSdkXrIsgjOSYSVy3EJKQonyR7k8qlegVv7j3LkHd3Y9WEjSLyEB0mLGb2gmpd5Go\\n\",\n       \"FyJtlF9tr8yB3/n0c/Jej6ZpGI2GaGWYTuZIeUaRF2xvbiKV5tHh04jahCArMtbX19IRBFpnSBHp\\n\",\n       \"iPm8ZntjyHQ2WyStgvdoo1MWP0TOTQiKLKOta0aDAUWWYZuGwd4eW1tbACglGY/HKWwKTMsZ09mM\\n\",\n       
\"qirJ85zRa1fRxiwSNQfPD5nN5wyKDKE0bTkjzzNu3Xx9wf3Ox2dMz06Yz0tOTMbW1hbWw/7jxwxH\\n\",\n       \"QzZH1wkSgmupbYvRmsZGNczW2oizyRl1LZicneBsVOM0TYsPgf2DI3wI2BAYra8z7PfIMo1Sirpp\\n\",\n       \"6BW9pKJRTOclx6cT9g8PUEazNlojhApCSBGQpy6rqMRJ0YQLFhV8jCoSxClETnAp6UjkAH1QOJ94\\n\",\n       \"qKRSiQnYGBL7NEEDHuscznmkkBidRYVISlB7F86ncMQy0XZhr94GvV6izSwueKyKfHAIAZwn+KiY\\n\",\n       \"6SKs1YcptMRbnxy7jPMhJbWVMEvKhCUP3CmOrGVBx6wifSEEjXMxlkt5IiEjUBBeEJIT7Rws51Qo\\n\",\n       \"AeETyAhRESWkQClD0dOEZpWu8bjEU3cRo/OOxrY45zB6kBKeccPxyclDjCw7QYO1FqlkJEpS5L/K\\n\",\n       \"dbu0iwkRE6M/zF6ZA1dIhr0hNot88sbmkCaMmdYNJ9M5z45P2Nvb43RWRbmglsiy4qysmM/nKKlx\\n\",\n       \"1rG+sYmzgbZtOT05xvmYuOz1eigZB81kBm8tZdkktNgSZnOkEFRVxXgyoZxOOTk8ZDgc0u/3kUbT\\n\",\n       \"NA1FUXBpd49ePkZKSVEU2LYlyzKKNJHX+gNm83mkTLKMyXhMWZYcHT1nOFxjb2M9OizvyHZ3qKsm\\n\",\n       \"KlQQ7O1sopTi9PAAIQW2qdnd3WFjY8jh4RRnG8bHE0yec3B0xO7OLoU2eOcxUnBwcExdNxhjaOuG\\n\",\n       \"mZxipEDkktYJyrKmamqcg42tLYTSFMMBGz7QtBbrPMMsY20wJFMG6xvyzJDpmPgZDUdM5/EZ4D1o\\n\",\n       \"hRJqkWdwbZuouigbtM4ilMB6T5ZlWGepk6rDGIOzAmddlG/KSH2Fao6UGiGXUrK4AFNomSRYL7N/\\n\",\n       \"8d//82UI2m0AISSEt0rJhhh0p7jcJwTp02KRSpEVvXPyRFg6kbZtF6oZ5xxKx/nh031mWbbgSVfl\\n\",\n       \"jR34iNLBSA91XKpPCgedEOzv/M7v8K1v/VFSP3mapo5jaNskmYsJuH6/x/b2Nof7R8zGc9Y3RmSZ\\n\",\n       \"xnqL1pK832M4HJHlfYaDNabTOe+8c5srl66ijMFah4dF5CSkQBH5ZUlUMTXeEQL81//sn3xpzDsa\\n\",\n       \"zBiDCh4pAlLFhFznUaIzDyn536HclPeITyY5uiQ7XURYMVoL3oMUeCEj39wl+ZRIScMkySSOXZAd\\n\",\n       \"6g34kCga1dEbguAjgAgehPILLl2k6FyHqBQSnZQwIe6QEHc8bTpHcDgXqQ5tNJmMsuLWrsgKA4BE\\n\",\n       \"6xXaJslSo2LLAmIxNyJqX+HYfUhc/V9QBP7h+x9ydHLM2XTK+vomSMWVa9fZ3Nzk4OiQd999l63N\\n\",\n       \"LT6/d5/T0xOUkjRNlWiVYRqowLxu+cFHH3P50mV2N0bx4FVL3bj00CMi6Phr7+KCM0TZU57n5CZb\\n\",\n       \"IPWqLGOyL0nwpmdjgoiJmm5RGmMYjUaUZbngu/LMoFRBP89RwdHPDWfHh/RzTV5kDAYDtDYAHB0d\\n\",\n       \"Y4whHw7J+gMGgz4ff/QDTk9P2d7d4uzkGN+U9Hs97t5/zPP9fbwU2BDItaEpK8qyRkrNdFZy/cYN\\n\",\n       
\"RuubTGdzGmvp93rkhaCcl0CgrmusA9t6xpMpk1lJVddIYzBZhhaWclZiqwatIiudZ1mkcYo+rqmR\\n\",\n       \"QjEaDKiDo5dHXjA4TytlUpV0gWdE0m3bJL1w1N12FIuUEhc8MkR1gfOO1raMRsU5tKVYOtAOxL3M\\n\",\n       \"PEknvZBjppA7LWgVzqM1GSLvajIdw+OoPVs4yMhj+kjlyCWHmZssOrqiwDuP0Iper7eShAoLx7bq\\n\",\n       \"/LXWeNdirQfaBTXUXaf3ntms5ezsjO98509iwt1GgNA2c5p6htZRgiqlxLeeMK+AU2zr2NnZIcs1\\n\",\n       \"1jasrQ3IcsPJ2Rnee3r96AzqynLn00/QUnPlyhWUjKojERxKkGiEtF6EpKs/+KrcsXPd2iLSCmqp\\n\",\n       \"mJBBghBoIQmSKC1MSTkfwGu/3HCJ3PDqeIkIvUHJ+KejL7oEaYjP2Cu1QPhCgBQFS9mjO3e9wQuC\\n\",\n       \"AhHi30r7qKNPDpU0R6Ni1tPaWPegM4N0umNhFjpupQza6LQJB5yz8f5cVG8ptZTWQqRdOlpJpJ+v\\n\",\n       \"5guilNDSSQm7Db0b6x9mr8yB52uGa+uvMRiPuX37fWZljZCas8mETz+9w/HxGbu7V3htVHBj63WE\\n\",\n       \"kpRlGUOQEDg9GdO2nt6lEU8fPCUXOW+9dQ3nXJQLtu3CsTjnKMuSpo0EwLycIZVBSsW8dAunEXWe\\n\",\n       \"esHndUU+qlNVJL2ysC3j8oSqrGjaltwUaGMSdmjRIvKEm9uXMVqjhEF6RTWes762xs5oA3ygVwzY\\n\",\n       \"3NpBKcHh/Ud88N5t9nZ3+dbBH3B9uE05n5NNW/6nf/mL/Of/2X+KkwF7fBg3FRfY3d5lfPiMWimG\\n\",\n       \"TctWXvDg0RNGW2uY3FNPauqTMU4anMkpdq6yXowoJxWzgxN2Nrc4fnbAdKjRGz0aHNQtb127gZUZ\\n\",\n       \"R1j2fcVJecLlfp9NAkJntLMZfa1Y21gj7KxRe4u0UM8atNlkLuD7D5+wtTlgOByQ9TOkc9hgKfKC\\n\",\n       \"fq9AS4UXEldF/fhMSlS9j6DByhE63yMPlrp8DIVgrrKXzqOzcozOM1SWx+KaRLeYEGVcFYbgohxR\\n\",\n       \"p8KMuPmQCjEkVduglaGVc+Y256CcMWmeo71jgx7X1nK2eoKq1JjBLmM3w9t+oltj0ZXSIGTAtk3k\\n\",\n       \"/YVGCo2UmqyfETXWUVWkRRSYTafTJEuF3/jNX+fk7BhhMryQ1LMKJbLoNFx0HlJFeaYIgWpc0gbL\\n\",\n       \"tJ6T5wbfVPSnBWvDPpmSzM9OOTs+wfR6HB6f8uYbb/Hk6X0uX96Nkj5hoqZfKRAktVMbkSWK3C2L\\n\",\n       \"yF40pR1Rn58KVNpOvx1ARSmq8zHqEnKZmFQibrR+Qb3FQi/n3KK4RkqJSnJeb9t0RkG3mwgRC5xk\\n\",\n       \"UBCWunEd2vS1KGrwgmWkk5LsHQoILkqPg1oWy3WFVK0jbRxRcCV0qpWwywSmQ56LsjoAEUSz4MOB\\n\",\n       \"JfUTQ4V4buJGpdItLcFFvLyoZlnWH4g/pdvJK3PgIQTKek7bNjjnePTwAddfv5miYM/dzz/nyuVL\\n\",\n       \"QCArctq2ZTgcRpQlwFrPb/7G/8X2zi5v3LzBX/vpv4Znshi04GKY3raRn2rbNhZvpIfqfJcACYvv\\n\",\n       
\"dCiqU1TYVJxDXaG1QiuNbT3Wxp9rndHPDMbE0FglIb+1LUFCVZaIfh8tDCfjUwgBe5qKinzgtJpz\\n\",\n       \"/+ljtjY32XvtCvcfP2KwNsARuHP/Ljevv05lG/6rX/invPHmm3z6xWfcuXOXrc0Ntre2OTw+ovWe\\n\",\n       \"h08fUQyHTGcl9x/c5+xswNpAcXx8Sp71yfojHJ7gG8pywsnxISZTVE3FpUuX2NldR2WSh48ecvPm\\n\",\n       \"Lc5OxvgskAvB7PkYdzxm1q/ZuzagtS1SQFO33H/wEDkoQGmE9eigcG7KSV3j24b21OMciKKHlnEO\\n\",\n       \"69CijUarVC3YOgwCqpqhyXCtY94KoiTB0reKsirp914+VdfyXkRBdYNUMe+hgkATkD7gjcTLSGOA\\n\",\n       \"S4hRxQQXAScceWEQQvL8aJ/D00CJwCpHY2sMniprGF3aZhgGHE2iJFSmCj4XHPiAt+BSdW8IAbyL\\n\",\n       \"cjbvcGXUBUsgWIeFRSFOlg34/d/8PR49esRwNEzzx+Fcu0i+x5A+RqBtE+L9SYnKNLnJMFLRW9sk\\n\",\n       \"uAYlVdRZS03b1jRlTT8veP50n3424ODwkM3NHYoiYz6fL5C29x6pQix2QhK8RuuX869aL59FzC+t\\n\",\n       \"RhUxYuqS0qsUQIysRKwziJ8QEEgVKYwFTRZlG+BWabMU6eCjPnHl/IsjifOfd8hehoATKYkOiXfv\\n\",\n       \"zras7IS4kaku8Zn4N8cKnUfnmBO9lqSJkkgbIsK5y15F2SGwGCPxgl9eOnKJUmLhl/40Ce2r04Fr\\n\",\n       \"jfeBTz/5lHv37mFt5Py++c1v45yjKPr88Tf/iL/x03+Foig4OztjfWMt8Yoe71uePH3Iw4cP+Kmf\\n\",\n       \"+mmePnvE3XsfcevWLW7cuEFTVTEMbdtFSJO2vxiWiGUW23u3CIdsSkh4l7LsIeCaesFPNU2D0/Fh\\n\",\n       \"WGvRSkBo8FYggkIXGW0QIBSOgM4zRKY5PT5kPpvT6/Xo9/vkJmPeVBwcHXJ0doIAdq/scTQ549nx\\n\",\n       \"IVJIBqdHHE/HaCG59+ghJ+MzFILpbE7bWuazko2NTUyW88ff+Q7v3v4apiiYzGfUTcv7793GOc/O\\n\",\n       \"3hWOz8Zg59y98xEqK3j79m0ePnhEVc3wZYE9KVlvYD0Ydl6/yayx5KYg1C31aAsXLOW0xEvHfDKn\\n\",\n       \"n/XJih4y7+GVomlLCqHRGnLXsjnsoWqLth5pU9sCEZKcymFlmyagQCrNqMgplEMNBpzUBUFvEWYT\\n\",\n       \"2pmimlaMtkYvnUdhXpMZg8VFXXEImCwnF5rgPVa0CBELpK2z1M6TZSYhv0BVRxlmXbccHj0hqB28\\n\",\n       \"Lahrj5EalUlcmCNUH6MKmsOKbG2IDAK0wLuQ5o8leIeWMWkWK4RjZWPwEucDtkMNsEB9VVXx/e99\\n\",\n       \"j15RUM1Lsizxyol+cN5hnSUr8nTHIlX6euqqZmdvl2o2Yzyfs7E2xLWO3cs77GrDoyfPcD7w4PFT\\n\",\n       \"fvbf/nd49/bX6Q+GqaBtqYs2iQ4QPiYKbWsRwrxktKNVVbVw/Eu6RSTkvESfy++sOCHnU4VoPLdS\\n\",\n       \"UX8NpLzFkjLJ9ZejLvclaV38by+WNMyCRksLXABGSISOGvPue6uJw+BlovnOO0yJwKWS/oUWPD2H\\n\",\n       
\"7j6740mZLWscVq5jWQcRFlXaQi3bcKyaD+LctXXH+Cp7ZQ6863fycz/3N7HWMZvPqMqav/Lh15iX\\n\",\n       \"c0CgVAxvH915zJMnT3jvvfdo25Yv7tzhD//wj/j6++/y8cef8t47b3Hz5jWq5pjZfMK9+1/Qy3L2\\n\",\n       \"9vYwpkMRUZkSd9IsDhQd97kMbWRSnoTkvBECHVp8Sl5BzOV134mI3C4eWOUdbu5wDvJBjswUTjj6\\n\",\n       \"60PyYY979+5yJb+CKjRVWzPaXKeal4TgOZuNMVKh8oy93V3Qmv76CKUUa6M1nh8dMlpfp5cXjCdj\\n\",\n       \"tnZ3aVuLtxaVZ+xeuUTQirt3P2Nja4fPvvicwhgePXqIA9752tfZXOvx5rvv84ff/DaZ6UcHv7nO\\n\",\n       \"2f4Br1+6jBEaF2C0sUk7rVFeMOhvMK7HPD854ObNy5i1LZwNeG2YBoe1AaVzJALpPdujEf1BjlY5\\n\",\n       \"goB3Fmdr2oRQREqQKSmRRiNE4NnsiFy1VB7GYRMz8PjWRUnpYJNsZ++l86joGVI1NJnWSCHo5QW5\\n\",\n       \"0gTrmFSTyDtqtQBu1jXMqxlCKqwLCC2oXcWsPMP5jJAZYgweUD3F6fiYzZ03ODoqKQY9pmVJL1NI\\n\",\n       \"IXBJ76+1QatY7YuIfTRU0q+HxKMLrSP69J6mboDYm2Y6HYMQ+GCRQaCANlha5whCkBUGKUlONzpe\\n\",\n       \"JwR53uPu3fu8/eZNgitQSjKbzchMwfbODltbe9y994BBf42333qXn/iJn0RJzXe/+12GQ4OUsYLS\\n\",\n       \"OrC2WaLxlCf6KvPWLtbOkraAtl3sTwuKIeZGujL1QNHp/ZNOOrILSz48/XI8zkv6N8Vq+aXji0g6\\n\",\n       \"Jiy7dd6h447iCF1UHjzCxRYCoTsHUf8tCCgpojpm5RghxCS97IqXVGrJEJb69uDj2rd0uZeYMI1P\\n\",\n       \"cjke5xQ1K45/1YkLqRdOu5Pa/jB7dSoUrfHOMRwMcM7R7xXkeY5rXdR81zVKa1rvuHr1KltbWwyH\\n\",\n       \"Q3ywFNl7vPXWG2xsbPL+7ds4WzE+O2Jvb4/9/X3m8znDXp+qqhLVEW9TJnmhEF11YvdQVKJuOq00\\n\",\n       \"KKmXIn2XUE/y9D4lO2JSNGlZE8r3siuJj4tNSon1kYO31pLlMaHpgc2dbcZnZ9jhgE8//ZQiz2lD\\n\",\n       \"w6XXLuFay6wu2bm0x+NHjymrijfefAul4nTPewUB2NrdpWpbTudzPrv7BTt7e/RGI3YuX0UKxddv\\n\",\n       \"v8vZyQnPjw54/vQhxwcH7F25Qts0tE3AhcAXTx8hyzlrTYmqZoy213h+esz0ZMrl0SZaSz782jfg\\n\",\n       \"Y8NaL5CLjLYRhKJHU5Zk2mBcoHAOX5WgQGeGoPIYxViB0xCCo2lqgoghaOMtobVxs8x7VN7hB2vc\\n\",\n       \"eTalmR9SeEvRNgx7BcG9nAu0KpY+ewFIhZdgdSAIh8dhjFoUZbRtG1GW1pg8i/rzEBdlJjQ3b9yi\\n\",\n       \"ths82p8QvMMUhrPJmM0Cpk3LR5/dIe+9jsiLhORCLJpKTU5ig7BODeFoXUMgoGREs0apWBQVYisw\\n\",\n       \"pTS7uzuyo8crAAAgAElEQVT87M/+De7c+ZymqanGY6RMtQjexerAhPAjcPALNU0zmzEYDnm2/zwm\\n\",\n       
\"MQc91tfWuHrtGscnpxyfnHH92ut8cucOt29/wKXLrzGdTNjY2EhghYWUrWtApZRC6ajeqqv6pWOO\\n\",\n       \"8AsJHimRuXDoyRkrmdQhzp9zXLOqjHUGWkeViXMgup4/sWpTahXX6ZcceFjozBErraHCl5N9AnBd\\n\",\n       \"P5YUeccNR5xrRNb9HV8kFvMnofsdEcGHkjIm3b1LDlzERHbnoGXsHSPFisLJ+3PovDukThuY8B7P\\n\",\n       \"ebQdUmS1SPCG87TQy+yVOXCTZXGStw1CQJEbBLE3B0GiJQTfYvI8FpBohfOWTCmGwyGbm5vYpuWd\\n\",\n       \"d95iPp9jjAEzYmNjI950QitZltE0CV2ElRArLMMbt5Kw6egW75ca09isRi4Qh9KSTEmsDWgjFlxl\\n\",\n       \"CAGbuPdoAusdxitG/T6tc2ysr6O0SqhNR822ybh5/Tp5nnN8dETbxIZXbdPSNA11VZNlObfefIv9\\n\",\n       \"/X0+v/MZV69do6oqTK9AGM3mziaXr73GpctX6PV7TI6PQGX8m9/9fX7y6++zNlyjcrEAYzqesLW5\\n\",\n       \"yXTWIJRkZ2+X7z75Nq/dusHW61f43kefMJnXvPPubeoA+8+fMLFTnj19RHZlyLwVZPmI1nkOjo64\\n\",\n       \"fPU6g6Kg7xwYhcxi2OmauDF6ZXCuiWXKSExmQHYZ/yivPLOeor/O2GU4J6iDwDUO1wJCUD2fvnQe\\n\",\n       \"lW1FY1tkZpITl9TeRVlca8G5hLTiIlU6i3MlfoKSAtu2qTHTJpOZw7sWIWp8cOQmw8sBT56XzBqJ\\n\",\n       \"0y3GwNHJDIFEG0NmCqyrmc3nqXVDTGTppMwR3uOtX3DbSkUZpvWOum34xje+QVEUPHnyBL+5jfeO\\n\",\n       \"2jscscVAlG8kpBeg9Z7GOfIs59qVK4yGfba3t8BZxmenfPfjjxlPpmysb/L8+Ii333mPrCioa0uv\\n\",\n       \"P0jAIqJKRNRPF5nBdsUpUhBw6Ozlm2bHLRNCqhZ0aQ0sUXDk1bsiGFJiV4CRtM7GjptK4v1yI4ib\\n\",\n       \"AAgVnb9K7iny2/G/VOKoFwx2OqURL+jDE98c13pY5EGFYFEB2c2L+H+XPhNprkR/oUXsX0RSJvmQ\\n\",\n       \"Igff9eNc+gmVNlaR1DNpkFIydakLDyE68i7qX+reI82n5FKO+qfZK3PgPrjUWXQpl1FKoLWgaerF\\n\",\n       \"g2pSJlrpOGht2tXauiHgmM2nUVsbWly7bAsp09+dEiUWlCx3Nr6isu8cL5YG1yLAdZ3a6Gi32LNB\\n\",\n       \"LDupxXqv+F0pZayQ1FF+5kPAGEVIDaU8Ub412N1baKqFEAx7/QVPX1c1UkquXL5MAIbDEcO1ERtb\\n\",\n       \"G4xGIx48fIjWGus9W0IwnpyhMgNESuPk9Akm63H33kO2tre58cZb7F67yXc+/gyC5Cc+/DqZyTk8\\n\",\n       \"3ifvFxzNxjx4+IC+lDgP+48fUfmWYBsefv4Rm0XOd55+yiAb8td/5t/le3fu0grBwfN9Lt16h2ef\\n\",\n       \"fcbRs0dcunqJeTNng3W2t7co8oKsP6AsZ/Q2thitjZA6ltOfHJ8ynk7ZdZpiMOS3/+QT7JmAXGFw\\n\",\n       \"vHPtBlvbe3xy/+GXHxaQpZUZrCckRCdlTls7XOtQSPqDPhsbG6l5UpxXDx7cZz6bQwgURcH45Iz9\\n\",\n       
\"45raarwrMXnDbHbGzqW3cGXg6X6FKXZiQljOKHp9pIwtBuZtTfAOoaBs5jRNg8nMIok+yDKCc5gs\\n\",\n       \"Q0pNWVXUbYNzniLvRRlqnscNWRk8JJle7EfTlZF7HykVlMYCo/U1JtMx4+kZeS+jV+Q83n8ae91I\\n\",\n       \"wWB9jf1nB/zD/+g/ZlbVqMmUzY21CDKI3LDEpcZKDimXKBVsSmh+2VxbEylORVggxhiNLOlHsUja\\n\",\n       \"dXy7FAKpRezroyQKEztCduqQAF4KCA48uCTci1LOqCLxbZ1o0LgWpRA479HCrHDKIRaPSZk206WC\\n\",\n       \"JXZmXKUz0loWJGoGFjw8JAosOnmRJLAiBJROrSW7XSdAcA5EbC+w4iaA2JJLSIlWkXprIOnS/VIr\\n\",\n       \"H0c1nj/RNP5PceKvzIEvvWH6V3A433XpWyYMpOh29vM9lyHE/thLhdG50GP1vjsqo+O3hBCL0tnu\\n\",\n       \"97r/XqALlg5cKXPuYZxz8mnCL4Ik78Cn1qcr31epiY8QYlFeHDn5yLMrGaVTXVRmrWI4GEY54uZm\\n\",\n       \"TPRIyc7uDm/zNt473nr7LcqyQiZK4Ps/+Ji816epGh7Xj7j9wfvMTk9iEbFWND6wvrnB9vYWs7Jm\\n\",\n       \"fHJEYTK28gJfNQzzjCwElHVsbG5QSUGjFW/ceIePfn/CdP85Nq+ZHY15cPdzjJQE23J6+JxHmeaj\\n\",\n       \"7/4xG4OCjz4+4tH+I/7WX/33GPo2th9VPWrb4kvPrCoZDodsbG/RH62RDwb01YjD41M+/87HNOtX\\n\",\n       \"MZsZWWjZGwx5/bUbfPtPPnvpLPrwrXfYP3jOtGmY1TW66HN4dEq/GKCF4Xg6Ybixw87eNSaTCUII\\n\",\n       \"Xr/xOru7lzk7OeLOZ58xPjlGCcHl3R1msxZdDAh6TL/3Gn2zi/TblOUpShoyM8Cplmoe56bSkqAC\\n\",\n       \"rasxucYohWwiH25UhjES6QFU7HWTK+o2Fl4pHamwyWTCr/3GrwMwPZujsgznY2SaSYlsLet5j6as\\n\",\n       \"yIZD5iJg1kbs7z9lrdcnBM+9L75ga2czOjkBvUE/VkkGywdf+4DnByfUVnB8fEwIsd1rwDIajXDO\\n\",\n       \"UrctKjU0szaCKx/al465SAlWUtK2dU0M9zGpI6ZO8l2Pd6CXLQQJWiJcp71PG5Xoeot0Fbix6Mf6\\n\",\n       \"WOyS5xm2cRFBJ4faUTZeRKo8UkxLOXDnABaR88JRn/cjMq27iICXvb+7X1mAtOSPFpHQgldPLYFT\\n\",\n       \"ncMi0lhZ+6vJSiFigZwSEhFiS+PYFjuC2NyoBaj1K4nQr7JX5sAXzlaQerAvG8Ikr50QbZcBJuLb\\n\",\n       \"lWxzLJvtnKlfhFmrm0M3eCF0qYpo4YWfdxZecOzxM784rxQihngsd9jV70vBgrvrGMF4RWFxrI46\\n\",\n       \"dGnnXY5JbMAjU99qH2Jv4rKtKYpiEWKFELu69XWP4WhIVVcEoXj33bc5ODhkd2uTQksypTl4WsTE\\n\",\n       \"3qBHf2ONwWjEz+zuIELcNJpZxeeffM6l9Q3eu3GTDInuj8iyHqflHNUzuLMT2ukZvVwxvHKF+qzk\\n\",\n       \"4NlTtl+7zs5wiBUSb2uuXb3MoFdweHbET735M7RGc9jOyYVmUjYEPIUUtE3DZOY4shVN06KMYavn\\n\",\n       
\"qYPHisCsOmM95BSZ4M7H3+b4bErd8Zkv2FZ/xN67u8zblqPplO99cgfXgMgzil6fvlDorM9kVlE1\\n\",\n       \"kc8cn014/PAe77z5BpPNI9xsSr/I8AZCWTPqD9HFCCEM5WzG8fER9+59hrOaYjAi67XUTaygG40K\\n\",\n       \"1jf6zOanEBoybRgMhmysbzHoDzg7qynnNf3+IPaa7vq+CIHSmoePH/Frv/prkRppGoZrI1rbooIB\\n\",\n       \"Z1Ftw4bJuX3tdfIs47OH9xlPx9S+InhBdXyCyTTTU8nRwTOUVgwGQ5xreXD/PkXe4/jwKNY9eCjr\\n\",\n       \"hsY2sUVyazk5fR6LaUyO8gapNL1ejs7UVzoPncrOIVGOwiwSrBG2RDVXpA+69rDpfR9WLjTVXbVs\\n\",\n       \"lz+K3HZEntEh6kRpplZTISBCTBJ2rWONMXR97mMUkNrIChYqm5D+F3NfAtd22B6Esx38Xq7lFFVD\\n\",\n       \"lCt3/Hxc5ilxK0W6fhkbdPlYZbr03V0PpsAqsIy0UvyZ87HzoDYGmX4W2YJUUOVZjNFX2avjwFfU\\n\",\n       \"ITFwSLxP+iSkhJBIbUY7iqJzpH4xMN3D7RoepaonVjjslFDpjhtP+3KNa4fWz6F9HwAXNwsRJ2/H\\n\",\n       \"X0GcOotdXJDCQnHOsasXduXu87CySHwIy4SY9zgcVRsVLvV0itIqopngkU4steapt3GR56yNhhwe\\n\",\n       \"HrK3t8Vbb77J8eExTdPG5lXOIozi+pVLkRJyDiU1vbU1RptDfuKD92nahvXNTU7OxlxFcHp0yJOH\\n\",\n       \"d8l6mp/88CcJWvHozj2asub61T2O65qty5d4eO8+WRZfkLGxs8XutdeYTZvoeAOcjmcIApnJCN6T\\n\",\n       \"W0ebXraR5znjecnx4SkuD9z+2puYfs7R5x/x5rWbnB49x9mXh5JP9vdZ29yAPOdPvv8D7j55Tq+/\\n\",\n       \"ztyOmU73eXLwiDfffCNKN3VsSlZNJ+xtbdDWJbdu3sC4ig9uv8f/+sv/Mz29xU5vC5FJ7t97ytbm\\n\",\n       \"dU5tRT0/Rul15qeSRw+e4aRib28TYxycTTk720crT12V2NbzjQ//LX7+7/wdjCrYf3bM84NDqqpi\\n\",\n       \"WlZMZnMyk1HWNb/zu/83Z9MphFioU9bziOylQPiGvdGIr1+/gRvP2B72Gb79FvLhF9w7PWLYG+Fq\\n\",\n       \"R6E1vX6BdZadrW3uP3jI1vYuTdWSScPnn33CzbffxyEwmcZ7F7tYajBoEJAXBfPaUtclLjj8vD0X\\n\",\n       \"jZ6zDhAl1NzJAFcrI0MI57TgnT7eB4mMgXPS40uqql42bRKpdwigRUDiadsW3a3/RI0YIxcJ3bCQ\\n\",\n       \"JXYVoURAmHqPrL6dSEuN8y1dZU3nI0LwSW+/ROsx+lbn1mv379gLXy0cuRAC4bqWyMuGVKzo0Vff\\n\",\n       \"yhTpqdT4KxV5idBVh35ZXvhV9uq6Ea5MDiGSg/XpEQm52Lk78iJmieXC8abbB2IGeDV7K6VErtz/\\n\",\n       \"atjU8V/xWOeR9up3V5G7EultGR2Nk2ibEJbfwccIwAuZmueEc865u99uI5Hdq8XkUs6IjzxujBQE\\n\",\n       \"DsiKfOX6Qkym+Fh62zQWrVTqy6AAz9bWJsNBj9xIjo4P0NpQ6ALVy2i8i53YVCA4i1IBhGP3xlWu\\n\",\n       
\"v3YJ0bQEDXMVWLu2i5tV7G2tcXl7xI03r1DZmksbu4jasbm2zvreLnu5QWQZO2tryNYzr0qOZxN6\\n\",\n       \"m2tc2srjZCUmZL1tEaHbdBTT6YSgDArNzLcM14d88MF7tJmkrk55bXvAds+wPthlQvXSefTk6IDj\\n\",\n       \"piSYDDHoQ55z0tTMZhNm0zkKwfODQ2bTCaPhkPlkyu76iL/+U9/gyvabHDx9zLf+8A842n/C7OwR\\n\",\n       \"P/PTX+f0tObJo8c8uveYja/vgZ0jqehlG7S1JtRAHuj1CvCWuqwY9DSvvbbN7vYWRwcnBFdx+Owx\\n\",\n       \"g/46W1vbrK2ts3flCk/3n/P9H/yAz764y6effsazZ89ikt2LBZrUStDLFNs761wdriHahus72+w/\\n\",\n       \"22fz2mVy4RnkCuEchdKs9fpkRazazKRiczRicnpCrzeknM/417/8r/gnv/A2rRO01ZyiyGirKVVb\\n\",\n       \"Q4i9g6xtIUlnm6biPIN73jrZbORtBTqPc9Rau6ipiJFsdFrWWYyJb3jCpQAbQXAO51uUFGQyj1Gp\\n\",\n       \"s8nBKbSWsa2FEOktVulVZYsXKZAURpast+xFY60lyKUj7d6wg4hFUfEVgKmfCrGMvqM0RUfn+Bix\\n\",\n       \"+0TnLDTdziY0nl6d6Fg4bZVecPJiVN/l4GAJ3rrEaIw2liAwfscvKZwVmvll9uoqMe0KCsVFgn/h\\n\",\n       \"5EDrDgXnEU2voOJVmmMpw1ki6iizOo94l+MZHXDXnvRFmc6ybWkXkpHeJ5mmdHLALnTtJ88je7n6\\n\",\n       \"AMXyc5Wa0XdNedxKBrt7RlIo8OnhdlSR+zJ10FFOWmdxJ3fLlq7ethR5HsdQ6AVakEAuBMEG6km5\\n\",\n       \"GCcpHaI9Y97p3ps41r5pY5N7JRns7TG6fBkIaKXZvf4mTV0vxgMBDBKH1wzYZCc2+krIp0NAq7Kz\\n\",\n       \"LqIKSQVgvUNKza3XZ8zrGu+gnFdMZ1N6esbbbvzSeXQ6ndL3HgFsZRnDq7vMyjnrt64gtUZUmiAF\\n\",\n       \"j588xkvBsZG8du0aAPPa8+DJETOb8/mDUxq7gVNDWjHm6LTBhpxnz59zcloBmwSZM7MTGgJCl1hf\\n\",\n       \"IssAdcPkbMJuu8PRwwmnp3NcKDm9Pqd3dZvJLPD5/Sf87MY7bO1twb1DwvCQO/v3cb1AW7X01RDV\\n\",\n       \"FjgjsKFkIBSvD3pczhWybjmZHpHtrtEOc4Z727jPjyj6fbQ3XN7c4LWNNQgtR+MTNl/b4rtf3OOs\\n\",\n       \"HBNkTnNwyLd/8Mf85F/9kIP9J8xnh+S5xnpwOqNRUE3OGPUKMgO1rWi1gJdT4JxwRiYNoo09vp2K\\n\",\n       \"Kq+6buj3BrR1g3eBqipRSpH3eoQQ3xnptEc6iUajpEovFgl4Uac+4SCcBOtoElevtY5No3KDVAYb\\n\",\n       \"YpVi01iUVvRHA5pmjAoRDGQii31YlKPyc7wEFTR15dFk2NBELj14Ek+BlqnRFQKZErAqCIzOQCqc\\n\",\n       \"j+/NRaQKYtcifIr808bS9ERKdsbXtzkbKRXfRCQvlYxpshAorIgOXASkl8TujfGNPbFFQESJS0Xb\\n\",\n       \"y+2VOfDudVywdMiLHVOslMF2jnslTHsx+bj6ey9+p/veqr2sAupFO/c7YrkPrpbnnk+GnA83X3bO\\n\",\n       
\"l5070j3nv7usYhMru/KXNaHLkPX8n3iM5QbZ9Xd5EQWsnu+FAy90tS41mF81rfWicVP31pLu8+56\\n\",\n       \"M2NwUn7pXN1xX9yAjffxNWvasBFSPwsfqJuWLM8Wb2950W7durXoT93RVq2zVFXFrJzTTj3XXr/O\\n\",\n       \"5vZtfAgcHB+ysbmJkor7D77g9OyYjc0Rg/4AFzTPDg6o5g3G5FFpIGKFsDEmvndRRFy2ZkbIWjIY\\n\",\n       \"9jg5LQHDvQePqcsZ/bxPb9Dj6OAJ2ztrfHLvAb/0q7+F2VznvQ8+5Pn+Kd/8/T9helQzP5iy1R/R\\n\",\n       \"jkuKbFnEMez12BqtkQMSyfPDp5SnY66MhnhrWR+OmM0r+v2cXq7RCowwrPf7HM6mfO32u3z+eJ/D\\n\",\n       \"sxnV/JTf+zf/J0eHj7m0sx0dos6wTYUU8Rn3exmEQN3YWLEY/EK19aKpEFUV1jmUzvA+vh9WKUHT\\n\",\n       \"VgQ8Usc+/iHE8ZOpSMbZNhYOOYUUCilBOAiptYFIL2yRCJDxrU9CRiRuXRv70jsX32sp4ztby6rF\\n\",\n       \"2opMxBeZRGoyYEOLw0YgJQRFlpOpHGfb9NaexHUr0N18dLFFAsQo3/rlO3ch5uE6cLaa4AwhvifX\\n\",\n       \"+6UcMb6HVcd2Cy52I+zWdPdCdiliS9qOQRCie9l0pF/+wiYxu+KaTs7TOYPu310YtpAArlAQLzqz\\n\",\n       \"1cqllylKVh1i97MXz79Ayp3Q/ksIf2kvbhQ/7FrOtRVdpVHEea3ni5nr1e8t6ZwvX0f32WpWfLWR\\n\",\n       \"/OoxV0t7VzeUJVd5/ryrY/Pi9a32i3nZWHQdHV8csxc3m8WGk8LqRes3YpFU18GwyIe8zLZ3tuK7\\n\",\n       \"OLWhaVta20a5l4/ISrZR5VA1dey82IPxeMK8bVEI1tZ7BF8xn51x+eo2bes4HZ9xcnbM4eFzqmqG\\n\",\n       \"yTRGBhSevZ0Ntrc24iL3gXZmqWYN9XzOoJdjPfTXBxTDgv3Jc57+wTO+OJgyqU7oD/t881t/zG//\\n\",\n       \"1v+DYZ1bN79OqDyTo+esDXuUszO0MmRGs9kfUJiMUNe0rWMwWqMwGf28YGu4zng8RfUEl7ZHrA0K\\n\",\n       \"FAEtoJ9lrPmCk5MT3rp6hQ+/tsXaxg5ZnrO+tUmvN2AyneN9rDjWzoFvaUuLRaOzPliPlssCpBdN\\n\",\n       \"O49QsUjKuhYlBGU5RwiJMbEniLXtIjqLFabLDodCgnct3ll0ov4WdRk4nND4oAmuTYm+aEblNE21\\n\",\n       \"yBXZpo495rXGZJpgPU1TIYVGKUEbapyM7+0MriGTjtY3IHzqLgo69c33IUZxCBAq9ZpBUbuIortK\\n\",\n       \"WOfjS2Kk1CnZGfNwIQRMEAgZOxLGVhwB6Vx8eXLi6YWMbyVSRifk3vm6lHuT3WaxbNfxw+yVNrNa\\n\",\n       \"ILiVxd39rLOXo0u/cIhf9fMQwuL1RKuOpHM0qxHAi074q5Dvi05xdWPoHODLHNWL537RWb547Jc5\\n\",\n       \"39XrWT3+i/0WVhOwf5YoY5mQeVGmef5azucsxLlS69VNZPW6VqvjVq9ldSNb/Lx7ozgSraKW3WSK\\n\",\n       \"XBpsa78ylFxfX4+hZ+vQRpHlJiajSDJOZVE69jQRUnJ5sMelS7sopbBNSzUvqcuSqqxAB4zKGAxG\\n\",\n       
\"TKdzdna2WN8YMpmMmc3n7Gxv8Natt5lOS+4/fYhtPPVkzru3brExWuNb3/wWbdOii3Vufe0D8rUe\\n\",\n       \"Tw+ecfXNTa6+cYvf/b3f5e69Q2Zzx+HRCRsbBeu71wlG8nz/AaPNAnc2Y2+4Td9k2KZFAZbA2++9\\n\",\n       \"y6Pnh5RlRTmZUQhF3i/YHOYYLMKBkTlG5zjjaV1Ae8fZs6f8o7//97jzxRdRpz6b0zaAUhA0wrdo\\n\",\n       \"aynyjMYZghrg2hrl3Fe+DSYHqrrFK09W9Ajtsv1yXcdmX+mdw7ExlIgFbd7G3uGxWVZsL+CdjQ4u\\n\",\n       \"vSokvZsHgUWSeo8nBUhZzZaFRiHmvbSSaOWZzUu0NEgREW9dW7xwmDxGAv8vc+8dY1uS3/d9qk66\\n\",\n       \"sXN4r1+eeTM7aWdmd2Y2cZdLchMpkivLf8iQLFsCJMiGAmQZBiQYsAzLVrAJC0q2JVAQTEqyTZES\\n\",\n       \"STGI3KWWK23eHc5OTi/NvNTdr+PN94Sq8h91zu26p8/tNxQlDOuh3733hMr1rd/vV7+glMKoBJUl\\n\",\n       \"hLUIjbb2F2kGGYBB5uIQIcQkZKAW1lI3CAKrwaIKM/scPyaHbZpQ+vncNyhpuQFyat06qMrXOhop\\n\",\n       \"rDFfmiWT8wL33MESVt7sg+Q8ve8UeFlUANOL3aWIy9R68awLYuU8XYBy77lg424ixfWy/maRl/vb\\n\",\n       \"fa+oSxUlDRAVPrSdv/eSJpouDki67Zi1uRR1rMqvDLhlrRt3YyjyKe67YF/uH3cTgWpuZla7Umty\\n\",\n       \"aRVHlZqwqkYdBZit7h+VT3aJlxtNqXyRa0zuZlajhWXrdV4/YwwEHvPzbYKlRQaDATIKSJOUpaVV\\n\",\n       \"Hrr8EOPxiMC3KpLD8RApPGpRk/3DLucvL9PtDRh2xoSizs13NnnsmY/QHyZ0hgO+9PWXiFo1tCdY\\n\",\n       \"XV9ke/sGvU5GGC4hhM/C3AL97j4CQT1ocuHcRe7t3OH08jKLjRbSwHA8QgQ+MgrYOewgo5BRkjDs\\n\",\n       \"j6iHEWEE840avjYEwkNqK7Ndqoc0Gi2GWcrTT36QbNhjrhbiRw26211u3N4hyTSXzq5x/uIpxoN9\\n\",\n       \"PD/k+u199kYJUinWWwLq9co+97SX+4IXDAcD5vLgFp7v4QcBqcpIktQGzXbOoqQorKEBT05ENMqk\\n\",\n       \"VrlAMPkTwnp1NUqTaWU3Agm+J8nSjEzFOeUsGSVj/FqIURqlY6uvH3mkSk08RHo55RsG0lp7A0Us\\n\",\n       \"1+JMxioQ5Fynto6ntBSWWzE28LnK8s/c4tqKjiyWWZVDmVun5mbzucqlkMV6sYeUfuAjPZEfdR3F\\n\",\n       \"1kyyFG0EWue6+veBiffRkGea7S4inRTAWDSoLAM6CbjKIFb126UoTwKWcj1dYHevuRSxC/KzgHXW\\n\",\n       \"RlUur+q3C45FqnL4XgBelcil3F9VXELVGUKZEi+3w90AyxzSrLKnvhf3zYSbRgpJMs6sPvAMfVhr\\n\",\n       \"tesjDCR55JrCylb6HlrZPgvDiDAKSdJ0ImNsNpqkcYwnJPPz82jfim2SYUyj0WDRzJNmQ4Y9Ta0+\\n\",\n       \"h1YQhnWajSZRI8EISUpASoMHn9Ps91J++h/9LIPBGJMakpuHoDVv8Dae7zEeZsy3enQPBzz3zHOY\\n\",\n       
\"pk+WjpHMcXBwD3/OMB8KaqFPphTDLEEpz4oI5JhUa4bjlJWVFcbxEE9mmCQFYZ1vaW0IfJ8wCJir\\n\",\n       \"h6hQ8NQHHqOXjlhemCdWPkr1WD91DhlEnNtYYXWtTtzzubO5zb3dAwamTSgkq5fOEi3OV/Z5vdZG\\n\",\n       \"hxla2MAfF1dW6fV6HHY7pFmKMdbthecHuaaINcjxAj/3xy4xntVKEcIQCg+RU+BKCIzwQUpUFk8i\\n\",\n       \"HSldmLIbvMCbWIlalUKfJLcDMUKTqRidaetAzAOjPetpUWUEfq4Z5XloxORQ0crqM2spYo78JmXG\\n\",\n       \"BiKeKATkBjuF/D1RKULnYsQ4m0SwB1BaToJ6CymPDHQUIGQO6po0zaw4x/etuMY78q8jZsz7Ir2v\\n\",\n       \"AF4ANdjKl0UdVfLXsiihuOZSgO79KtCYJYeFI6p0VnLfKT9bdl7jchZuRGs3ryoqeZYoo+q6u8Ed\\n\",\n       \"OxytEMVUlXM/oL9f3SYULUyNpzsW5XE5VoeJ+4gjlU2Mdf3KCcpUYZTLabUpMpgYTWmdIZQ3MZhI\\n\",\n       \"0xEFV2uMIR1n1kxb5CbZwlL+9UYdMEitCaMmSwvtnCLSebRwCSpGGTBBjau3tvjy17/H61ffYTjO\\n\",\n       \"aNaa9Pr7eJkmFB7+uIHvSdpCM9rfo+37PP/NL7O8sor0PZZWl+mnmrmVswy2b+DphGbkUw8DlCcY\\n\",\n       \"pskkRFdmFCbTRLWIUPhII4jCiMD41L06GMny3CKthQUWz6xyuNch9jTG8+in8Pa16/SykDhT3L3Z\\n\",\n       \"oP3pZ2gGAa+//jZbuyNMXZAMR1yNFHOnTlX2uR/W2d28w5s3b6KBO60Wa2trrK6vYYRgd/+A69eu\\n\",\n       \"Mjc3ZzfC+QXSJCFJNY1aSJLmaq/SQwqNVBqMYhzHiCBEiYyo3iBTY+IkzilRTRRad61RFBFEvj2o\\n\",\n       \"zFLCwEdmGUIYG2UqCKwriixD+D5GGdaWlphvtOju75MNB3ieIM61WKzfFEPgeda0PTvyZyS0ttal\\n\",\n       \"wvq3GWUjgtyWwYZ1s5F8pJR4UTDRlrPYpUBCpjS1oJb7ZLJy7nE8yOerQeXReIhzJS1hA4r/vj7E\\n\",\n       \"LAPXLGq0LC91Qdq9Xz6orKLSXfApDgfcDaE4NK1KJ8mHZ9Xfpdwn5vyOSKeIFlTV5irKtQpwy/Wt\\n\",\n       \"2rjKbS7fq+JyZo2HW5+ymMl9ZlaqclUABWDbbwaDcOSvJ+YnCg2bnP02JtcwsJ9FNCW3TFe90xhj\\n\",\n       \"Q74JQTK2Pj4KWagUYExGpzuiVqsRBBHI0Drxb59FK8Nvfem3+M53XmD/oENy0EPFCVsjq2s9Tsas\\n\",\n       \"Xt4QZZoAACAASURBVLzAeuM8t955l3HaJWxH9OIOC2fm6Aw7nD37AJ3RmNbyGqPRGB0EGM+QYPCM\\n\",\n       \"sQezqcJDYMMh5P5LjI23KIwkTlLqzQZgf6+srFFvNzEK0lQz0GNSAWF7gWef+TAimmcYJ8w17eaY\\n\",\n       \"acMwTqxIw2gevHSeRx97gJfferu6032fO7c3Cf0aRhj6vREPP7zM4UGPN956k/3DLqdPn+bM6fMc\\n\",\n       \"7B3w+itvMRqOqEUh+/v3CGoNPvmZz3Dl2nX2drdpBQEXz21wam2NQTzm9s4Oc4uChXadTqdDo9Fk\\n\",\n       
\"PBxRy0MpCiHYP7ReGz3PQwa+lWVjfawMhj2yTONHNYaDEfEopVVrcv70BhfOnuHfffc7gKFWsyqC\\n\",\n       \"CMGgP8LzfYxnDQgLjs/LtbyyLENpRRQEqDTJ1RvDIyOhICATVoNF5NS79KxWlhdacZ7xQOnMBiX3\\n\",\n       \"fEf8eyQG9sMg9wJpTwTS9PfpIeYs8Ub5kK58H0pGMbPYco6OBKoAZpa/41lU66y6lMFh+rBvcmfq\\n\",\n       \"+rR2xnT5BXVYBY7l32555U2rCoCDoBju44BYls2/F/FOUZ8qEQy4OvVMZHkup3SUl0FrMflePsw5\\n\",\n       \"KSmVW7GRq4QJJvJGIQBdsYEXbZ30hgABkQ0vAUbmsloFwqMeNBHCY5QohFAcHA741veusnnrLq9+\\n\",\n       \"53dQ+138UcqzZ87RbjcYm5iRzBj7iqAd0Vg8w4effJivffu3eHf7Gs35kMP+PWpejRs3rvPgmQeo\\n\",\n       \"GUk7atKLmmzde4eNtSWQHlmc4AcB2hh8X1pvhjJAeoYkSWxQDXseiRAQegHX373KqY2zyEadaHmB\\n\",\n       \"QAc2ilAQknYO6eztI6VkfeEcB7u7jEZ9VtdWyUwXjWZ/+xav0mN5udoH+6/9xq9TazRZXFplc3OT\\n\",\n       \"tVNrrK2t8a9/4zdI0gxPemzd3cYXITtb9/Cl5OzaBjpTBL5Hdzji+o13GY5HGGM3zqtvvs3lCxf5\\n\",\n       \"/ksvc3d3lwsPgU7qLC0tEccxtXqbUazY3d2lVqsThhFJlrK0sMhoPGYuatLvH6JUShQGgEIpgxQB\\n\",\n       \"d+/eZKk9z7e+9R021ldoNuvs7e2DkIzjlCCsobTGCNDKinK0p/GExJOKOE7xPSsa8YMAncu/LV/o\\n\",\n       \"59GeBNLjKFi1J0FoDIV64dGaDqOALE4mhn7W1ZNVM8wy6+WwUEH0ZoQSLNLvCcCFEO8AXUABqTHm\\n\",\n       \"I0KIJeDngAvAO8AfNsYcVrx77NNlxavY+PuBq3uwmX85BnLFZ3FqXqVaV87/mGk9s52tz8rHFblM\\n\",\n       \"H4hWA1T5oHMWVVwlwilfL5JS1cBYtTGeBOAnbWDuofJ0PafrWM7fw6dwzwky1wt2yp3hPXJSZ8NE\\n\",\n       \"9l3SPM83DzPZHCeiq1IdZCH79PJI5iZnaX2PJFXUWnP8lb/y1zl//kGGco7Xvv8iK1GDh5+6RDAc\\n\",\n       \"Ex92ePLiRZRQXL17k/1Rlywbs9AydO/t8MnHn+Ar+3foHnSYb8+hE4FUkv3NbZ576kPcvHaNs2c2\\n\",\n       \"6PbvkSljnUsJDx1nyDAgkBJpfALPRjeS0sp8hSfAU8ggQOmEO7fuctDrYOp1fuALX2BpfRUlDcKT\\n\",\n       \"nNnYwGjNaDRiHI8JF5qk7Tq+73Pm9FniQYzxBETS6nZWpJXVRbrdPp3tbc4uLdNqNdjcvIMQBqVS\\n\",\n       \"BNaIZ9DrMR4OWV1aYb45T6dzyHg0ZH//AK/VJMPQqNeYi0IWwoCN9TV6h4csLcxz9/Yt/DNnWV5a\\n\",\n       \"4/y5DQajMYPhkKVlj9t37rKxcQZBQKY9PvjUszywsszuziZb27e4t3uPyPMJG3O89sZVdnYPaTwz\\n\",\n       \"x6XzZ9nf3QYpSNOUWr2BkBlxHBPWImq1Or1eF50ppAe1Ws1GJ8KqEY7HMXGc2wTkroKjIMAYiOOY\\n\",\n       
\"KAitOWGu3FCc7YRRbaJyG8dx7kvJILzcH4wo/KaQ6757KGVIsxirbTM7/V4pcAP8kDFm37n2l4Ev\\n\",\n       \"G2P+NyHEX8p//+Wql6vEA/8+qcinbCRSLNSqzWJqIVMtW6+iul0AOwn0q9pT3mDcDauqTe5nOd9Z\\n\",\n       \"5c4Sa7j3yu10ZfnlzfCkVG5/AeBljZ2qepTrYKPUixzErRy8OGw0wvWvXE6uSGZ68zEGpCicnE1q\\n\",\n       \"nYux0uN9io1nqHWu52uVghkOR7TmFsi05E//13+Gr3/ju3zz536Fs6dOc35tEa2H6Br80Bc/ixqM\\n\",\n       \"ifD4A5/5ArtbO9x59xbD2yOMjBgOhjz6zOf57pVXePHGNUyjRTeJCaKIX//OV/j4xz/Cwd4WG6fP\\n\",\n       \"MOrs4wsfP6qTmFHupljTrEXYUG3aBn3A4IWSYTwkiYfUvRpnzq3z2BNPc/PePo12EzxJpjNMkiB1\\n\",\n       \"RiQMzVaIaoWMjUGGEadOnSKUHpEQHPQ6HKqUVtCo7PGoHnC2sc5CY57xaIzxBP1Bj4uXLrC7t0+/\\n\",\n       \"P6Req1GrRcy12tap2vY9Op1DtJ+xvLKIEIZGvU42HBA1a8y1W6TjMZfOn2Wr22WxvkC32+ell17h\\n\",\n       \"7LnzjJOY1dV11k+d4eCwz/dfeoXx2Hri7PbHNJ76IM1mjdXlVa5cu0KcKuptw6uvvg5GcOPGO9T9\\n\",\n       \"S8TxmEcefhylDf3BgHqtQZykHB526PX6NBp12nNtayykUjyTa5MYw/Lycn7oqanX6mRZRppkJGlK\\n\",\n       \"rVYjiYd5IHTr1VEUFIC0XhujWt0G3PZ80lADHlpnk/Xn+z6pShHSxtjVWlgu8IT0H0KEUl5ZXwQ+\\n\",\n       \"nX//GeCrzADwMohCBWWeE6ki/+c8aFkXYa8abSY1OekwbxZVX6b8p4FgGtRdirWKUq5qZ1Ubq35X\\n\",\n       \"abCcRBGX8yieLR+wniTfr7peJZt3yyr3Q1Wdq0RAbn2mDjn1EXBbIbVHPrxYaJ2RJBijc9/KR/6s\\n\",\n       \"rRzSGl9Qsla1HuvcOlgKKI56mNTDzyICFeATooWmVgsYp4rX3nqDv/v3/gE7e11+7IlP8eADD1Jv\\n\",\n       \"N7i1vcnK6VMMV1boyH0+ePEy33z1ZYKDDi2lefhcg+bCHAeDASEBj/hN9DDja4NN1HwIo5hFHfLi\\n\",\n       \"y68jSXnu0UdJBiOa9QbJ4IBxMkBEIamBmueDslo+Yb1OIH0gINGKtUuX2O92+dQXfoyVU6c5J33w\\n\",\n       \"fJB5FKDcaZLRhYsD6x1Sa4OuaVSmkAja3gJBHB/5Qy6lTz77LJ7nU49sVKhUZwyHI5RWXDq7ysH+\\n\",\n       \"gbXSlB5pssR4NKbb7SIjjztbY86un2d5bZUrV68SBj4H3Q5BFPLClbfRjTob83MsLS/nvlU0wvNZ\\n\",\n       \"a6ySKcPm9ibD4ch60ySAGG68cZ27bz3P0tIyly4/SKx9dvY7jDcPQCukMOwf3mOvM8fNd2/w3GMf\\n\",\n       \"ZM747B0MORiNuXrrLsM0ZWF5lblmRquuWF1e5tT6BbxWh/W1DTZvbjHsxqSjlIW5eR5/9DHOnt1A\\n\",\n       \"Gc3vvPAiL734EtIMrc9/37cHnQiMFiwvn2Z+YQ2VSXbudVhcXEFGinE8wvMEO/fu2jOTcR8z6CKN\\n\",\n       
\"wg8kBJAmM/wZ5Ok/BAX+W0IIBfxDY8xPA+vGmO38/jawXvWi57AZxcKu0gCpotKrAAOsU5rKSlZQ\\n\",\n       \"zS5AVclyy3mXr1VR81X3ivJPirAxi/s4iZo+KRW7+UkbUbmc8veTOICpw0chJs6NZrW/SK7YqQz0\\n\",\n       \"1rfYbC5sVt29XNyBN123InZkAd6uZaybn33efvoysk6QPIlQGkWK0dbf853NLf7+3/k/GA81Tz3y\\n\",\n       \"JGfPn6PT7TBKR6gkoV6rcefOXc6dPcfGxYv44zHe3h70etzr76FGI8JGg5V6g0dPf5Afahpe+tIv\\n\",\n       \"0xnHmMzH9yMCLdBCMooTWu221U3PMlrtJkYKMiEYjUZ4QhD5AWmmiRo+0guI45gHP/AIn3/kUQZJ\\n\",\n       \"wiBLrQVkHhrQqvIdEUw2YIpXbS2bk0qzNvGlxcUjQsEYIi+k1WxM+vjcmQ3SNJ0ai/F4TBwnjAZW\\n\",\n       \"3zuMQpr1kMFwyGG3S6vdRGnF9tYWg8GAxaVF1tdXWV8/lfvKTmhEDeLxmMO9HRZaDYL5EE/4jMdj\\n\",\n       \"PNkgU5Kd7S79bsa77+5Yl7TUGA16bN/ZY6G9SLu1ws7WNttbW7z6yssMtUDUm3jSZ9QfsLq8wtz8\\n\",\n       \"PI88+iiL7TbBch+hPG4pxRtvvcVCY57OQYeV5WWCMKBWr9FotFhbP80oPaDX61Fvt1mYXyBNrTdR\\n\",\n       \"FQb00oQ7t7a5ePEhbty4xa3Nm3z4mQ8TD0esnL6IEJqWTgkCASZje/suc+0m3V61D6Ai/V4B/AeM\\n\",\n       \"MZtCiFXgy0KIN92bxhgjjhx2T6Wf+ac/P5kwTz/5OE8/9XilhZ7rj6EMku5hn6W4quVFZbAQQhyX\\n\",\n       \"l3Ncvuted8ssf5YBrVxXOFKrKwNY1SHg7ybNovhdrZf7bQKz/K24bSj3tQuAZVHQSd/LG0C5DVX9\\n\",\n       \"VzUmRSqceJXrWSWecjmTKi5Kj61JuicM0ldAhhaCeq3F3/wbP4WnAj73A58E7TEY9RgN+9y7vsXq\\n\",\n       \"8grXX36FMxcusv3OO/zCa2+wGIU8uL7Kgw9fYhSc5bDfRWWKpLlAs7XAf/bkH+Kdrdv81ne/zdDT\\n\",\n       \"DGSCSD2CwFod+r6PUIIoqmHIrOFL7hPE833bbgTjTFGvS8J6g/nFRYTnUavX7AFoZij0iL38gPdo\\n\",\n       \"c9O5KElN9QUc6UDPSqdc9UJh/Vprra2/klzTqVDlK+Z7FEVW7W4OEII4SZi7/ADCsyHIDFaP//Kl\\n\",\n       \"c2RphhFYs3hPWu0cbRjFY9L+kPOnV/A9n163y9LiPE8/+QlqtYioVuflV15n++4e48EIgWA4GtJu\\n\",\n       \"thn2Rmzf3eGhBx9g9fQ6/STm4eGQzAvojWJSBPd29tnZvMO9OzeJ+x12d3e4+Og8C3NL7G13iHzP\\n\",\n       \"ujmYX+R73/su3/jG1zh3/jy1WoPr16+zcHqRuZV1uoMB927eYTgcceHCRcZG0e3tY+pQWwpY1Yv0\\n\",\n       \"zYCwVUeEHr/4r38dYwyf/sEf5KUXv4/QGdt37zA/P89oNJw5DvB7BHBjzGb+uSOE+EXgI8C2EOKU\\n\",\n       \"MWZLCHEauFf17p/8E39kJpVXJCGs5db9KLsi3c/stAqQpu9Xq7kV1OysPN4L6LqLpgCxQn+9qo6z\\n\",\n       
\"QLCqTVXXymKbk9Ksfi0Da9k6swyIVeNX1P+kNpQ3yPK1orxZqcpKtni3sPiF4/5iynWLsnmETDBy\\n\",\n       \"hBYjUpOiTECWJPyPf+V/4Td/+cvUdZ1GELG1/w7bm3do+AHjw13a7UW88RghPR5//BGuvfUWA0/w\\n\",\n       \"wrvXSBuSRx95mA8+9gSvv/kG17a22Dhs8Mc/9mme/+3fYlyPSMOQ5UabO3fvsDlX46nLF+htD0nT\\n\",\n       \"xPqvNh7jLMMPIoQXWtEFhjSJ8UyTP/cX/xtGSUqcpUjfByPwPAG5cTp5kBC3b8tzzO0LdQLHOBgN\\n\",\n       \"pw7kC8o7DEPq9foUgeTOGa01yXCM0ookTqzpfZrkB3yWY2jXAmrzLYLcgCdVGXFs9bWNgfF4jCXr\\n\",\n       \"BDozVgNJDdAqYfPuHd65/grjcZe15UZ+sBig9ZDTp+bpdTZptx5k52CfN6+8xWicsLC8RlSvce7c\\n\",\n       \"Rbbu3ePSgw/Q7/fwPMnh+gKJvodJE+bqdYYMeP673+bDT32I3Z0dBsM+aZby2ONP0m63afktvvSr\\n\",\n       \"X+bHf+InmN+Yp9FoWI7Jk2z2NpmrhfzOt7/K448/xsUL69y9e50rb1/hYx/9MPV6EykD5heWCMOI\\n\",\n       \"S5efoNlsIaTk137xX8yc+//eAC6EaACeMaYnhGgCnwf+J+BfAX8c+F/zz1+qer9q53fyBnK5rfSO\\n\",\n       \"AUlFXez1+4giiu/uNfev/Ows8YGbbzkf9145lU3LXQrFzdvdMI61saJtVanK0vS9bDSz3nFBumw0\\n\",\n       \"Va7fSX1dfr74K9j18jvF5yx2vjCQmtWGrMKbotsWt14+AUJotCetIyzpo02NwFvgrTdep+63SfY7\\n\",\n       \"9Lt3Gahdmj7MN0LS4Zhk0GHzzrvIZouX33qNZqvFvf17tGoh436Prdeu885LV/ihH/1hzHgEdzo0\\n\",\n       \"On3+2p/48/zpf/y3SPw6wyCi3rCimMXIo+1r5lttRuM+Xi0iHgwxniTLQ3rVGzVqzTrNxUUOej2i\\n\",\n       \"RhO0si5JsdaPkwhW1kh8qh+MMVNz0u2nKkveItVqtYl8GpgiTNI0nXB/5fGWUuIH1mtiq1UnTa18\\n\",\n       \"t/Bu6fpesT7vbX5JkGDjjyra9Rqj0YgwDBmPxzQbTXq9Pkl2wMJCwGd+5DmUMvR6ffqDAXt7e4xH\\n\",\n       \"Q8bjIUG4xMK8j/HhwoMXSZOM4TBmZ2ePr/zmr3HqzBk2b10nM4oHLz9ArVHn0pnLDPsxh1mfS+fP\\n\",\n       \"ceH0OdI0YTDo0et3WF9bs0GIleL221e5uLbBWy++SppkLCwsEgQ++we7LC7PM7/Q4vFLF1mfa+Lp\\n\",\n       \"DhuXN3jq4fP0e0Pm5hdRGq682mNhoU2vu8edd67/RzXkWQd+MR8kH/hnxpgvCSGeB/65EOJPkqsR\\n\",\n       \"Vr3saozMEl3ANIXkAkQZDIGJE6NyqtIwqTq4K5flLvRZk7kMZMXELXe8C3xlCrkM/JWcyIz+mTXA\\n\",\n       \"99OqKedfvl+1eVW9X2WQVR4v97yjqh8LIHHB+72KllwAP4kbKbeliqL3PIPxDUZoUiNITYD0Wnzr\\n\",\n       \"m69iTJ3Ir7NzcJ2st4sXZdQCH5kleCiSdMj50w8zFD5b71zn4x94iLlGg3OnTvGt//eXWX7gQX7n\\n\",\n       
\"V77Ey89/h49+4lmeXNzghddf5dmPfZw/8Nwn+JXrrzJSMa35ObZuvctoHLOxsYpWQ/67v/yXWNs4\\n\",\n       \"w63NTW68c5MbV65a7ZZ4iFev89Szz7Kwsspht4P0fOtn3RTnSspapEpvEtzX7UtrHThbq6oqFeNV\\n\",\n       \"9gVUNecnetH5c+NczFKvWdU63/fJkmSyHnyZr2tp322325P3oyjK5fmK4aiPlAskaUq9EaJ0g85h\\n\",\n       \"D+EJkiSlWauBykgbDR44f44oCvF9nzDySY1Pe34eDGwEIR+4/CBaP0e338WvhcRJzEG3w3y7zvrS\\n\",\n       \"MmIp4BvvfourV95lZXmNBx54AMQpanXrYrjbOaTVbNCq+ZxaP838/CK9/oj5hSV2d/dY7a3iB/DS\\n\",\n       \"Sy9w5eqbJFmMl6UMByOiKKLWaNLpDmi3F/CCkNv9Q7q9AT/wA58iiiL+7xPGQvxuZa7/IZIQwvyb\\n\",\n       \"f/1zlROm/NuXwbEFXQbJyYKUxyntgsU7CciqQNt9torSdPOsAq1yqro/i3Kv6pNZm8h7Ea+UOYhy\\n\",\n       \"favAzBVLlDfYsrjifptbmav5923Hk8995ti9F7/z5fcknilT9pVtzjyUTIkaPqMsJagt8rWvv8Kt\\n\",\n       \"dw5Yba9x8/WXObj5JjLrMhfZKDZR6NOPR7RWV1k6d5762in8WhNhBBvLq9x48y2e8Oe49dZbPPHM\\n\",\n       \"k7zy7lvc2rnF4xcv86GNB/i3X/oKanmBX3vzZV7v7dKeW2T3zibn1xd54sFzPPLQRX7iJ3+cBI2W\\n\",\n       \"HkJ4hNIn9DziLCZFI6UgU1bf3cvPBIpoUWAFDrmuzbG+vt98/cinfvzYve99/dfv+365HPLyVW4l\\n\",\n       \"izFWECKEFe0YY1UinLmlyV1JI3LjLPLDUX3kqS+X6wcitC4TjA1CnmUZSZzYDR5jvQFifZ+nvkGl\\n\",\n       \"mTWhT1Wu621VWIejIQqNF/gMRgOiQBB4EQKfLLE+xOfmWlx68BIIwd7+Ia+++gb9wZjMH9EbDFmY\\n\",\n       \"W8b3Q4zxaNRb9AZ94nhEe65Js1UnHo+oJ9ZQaP/wgIND60NmMIp599Yt2vPz9Pp9G/ih1uBf/vy/\\n\",\n       \"xBQ7cim9r6b0Lis+c/c305PEZc2qHDuV1fUKKqD4XWb/q1jHY1WYATguZVFl7FN+fhaF6NbVLa8M\\n\",\n       \"wO4772XhlDcG13hplhFSOY+qjcot/36+Y8r5Vv12r1XV4aT2FkEkXLl2OZX9xRSO08opE2N8z6PT\\n\",\n       \"HSFljX/1S7+KJxfwtc87V6+SjQcsr8yzv73HcCCp1QKG45h6q8nKqTWu3XmXp8+fQwnY3tzGz2Bn\\n\",\n       \"e4ffObzG+soyr777NhsXz3L+8Qf57gvP89yPfJo/+tRf4O/81N/m3MIyvUiyM4g5vXEWnQ1ZXT/N\\n\",\n       \"53/0x/DCAKEVRtjo5bHKrNMoYTASKzIRxaG8ACTGFUcUAQLyafRezkZmbbRFH5ZFieVUqRqMwIZP\\n\",\n       \"zN8ReTjB/D+L39YiEmMQvnVSZjcfENp6+xPC+hw3uXMrgyEr4pIJCLQBrJOrwkEUudooYch8q2Y9\\n\",\n       \"BmqNyTQqsw6rtNHMqzZJlpCpjGbDhibMUo3n1VCZXTf1eo1OZx8NjEdjlpcWaLcVCR3ObpxCyoAs\\n\",\n       
\"M8RxSrtdRxhF7AlUqmnVWjSiJk0RsbOzw/zyBuceeJRhPKLb6xMLq0PuN9vUGnUOO50Tx+l9NaUv\\n\",\n       \"g1cVK2fUyXLvKZAzx0UVLqXuyuqqNC+EEFM+Ulxqu+rZWSb973VCu4BYXhBVC6iqjJNEQVXAWbSr\\n\",\n       \"7ELXzV+I42qWbt+dJA6pSuV+q0onyfreKzifBPIFcAthY0+W8wcwniFLFauLp3jhe6/SpEY9avLa\\n\",\n       \"22/SPzykGRqiRsDS6hpbtw/wpDVxv3DxEvv9Pj6S3e17PPTYU9zbvEe92aA+N8fe3l2GnZi1U6cY\\n\",\n       \"Zgln2mv8qT/zZ9k6PODezl1+/I/9EWS9xn/+F/8stBcJ6zX6vUNeevlV/tyf/a/oDTrWuk9K62ND\\n\",\n       \"G6QGpMil2sfBtGzV/F5A1+lILGFc3Z/e5GA459D08bzc8o7GyDoOK8LoTew7pKDQVSui3ACYvB6Y\\n\",\n       \"3O22lPmhtPUKqLH6/wAeaU7IG5RKiurZOeFZ9VVjLzDopta+RkqrneNJpC/wDUivTlNYD4qeZyMy\\n\",\n       \"WW8N1lNloU+fpAlKZ2idMT/XIEkyUNayMk0V9VaTkRzjGc3iyhKdbpcwrCMy6PdH6IUmYa3FYeeQ\\n\",\n       \"g94mRsA4HbO0sobneyhjCMKQM2fP8/OzR+r9A/CyRgZMg2+VQyd3IbueDIt3Qz+sBEd/wlYen8Bl\\n\",\n       \"UKzSQqkSHZSv3w+g3OQCYdl8vYrKdetb/n0/4HPTLNez5ToXh1Bue8oiq/fa1qpnT6LAq66dpEPv\\n\",\n       \"RhyvyqPgPNw8y5vPpI6eTyto85Xf/G3efOkKpB7Nxj41NcCvQ5qMUGkNKVr4yx6HoyGPXb5MtzdC\\n\",\n       \"pRltWUP3Y9q1OqtLy4xVRm1+jl1S0tGQ3o0eWzdvsXf7HmvrZ3j4scf4yvbXObW8QDOs8yf+6H/B\\n\",\n       \"P/v1LzEYjohqdYyBr33jGzz+1GM2fiUGI5SN6KI0RguK2Km2/Uf+Z6b7y5B7i6kUoZWTNmYS6Lcq\\n\",\n       \"TSj6wkCq4uypPJ8n6zZ/L0dXW7s8uLAxZgLIQO6C1YZ2m8xyLfPNxSAxTHx560I92AL/JHsh7AFo\\n\",\n       \"LkASQuKluVM5bR1UWSoejNY2zicSlWRgQPuaIIhQyuD7AZ5vXf1GQYCUAcKA79VIUwXjhCAIiOOx\\n\",\n       \"DcgcWde1wvOp+fMoZRiPUhubNuky15DUgjbjOMYLAqS/RK8/oNVuIf2AXq9nQ9OdkN5XAIdpACnL\\n\",\n       \"VuGIAp8YH1RQjkU+Vb62q+TbJ4HRSaKM4tr9qM9ZQPJenz1JK6csPpolCplFvVfVY1aby3UrU3Un\\n\",\n       \"tcFNJ4FFkcqHmO67J4lQyi6Iy8nlJsrqhuWxN0LyzvVbfOPffosLq+dZXJjn2tUrDIZdllcW8X3B\\n\",\n       \"aJRQCxuESw1WGmcYZwkizWj4EUmmyfojku6A27duI6OQYTym29lDDwY0wgbJ2DC6vc/WO5v8qb/6\\n\",\n       \"P/ADn/0st7e3ePu1N3nqiSf50jdf4CAeIMaKC5ce5LDTJQgixunIBtLGUqtSWgMkivmYu8xFgDLK\\n\",\n       \"aacVs0gh8cRxo51Kd8ZgnXrdZ6wmZdxnLciCcjbWWVR+42jcPEdTxs1X6MKDtlUnBtA2HLg5kgih\\n\",\n       
\"hEFQJ1Xa+iIxZop30DJ305A3KcqDfQgp0cJuBJnW1nsgEl8IRBAhEYx0jBTexJeJUhnapGSZDYhs\\n\",\n       \"Mo0UYzAS32uQKQVBgPLACyOUsS6K2/PzqAwWpE+mDDLtHfVb7lRrNI5ZbS3ZOLC1gIVwgayCu3HT\\n\",\n       \"+xhSrZgILnDnrJs4Csor84HPjEalR5ErJixePoE8KVAimnSKEEfPGG0mhyASMSnHPlcGBlc0Uyzw\\n\",\n       \"ItvC3NvNQzCdhWvVmVMaFO8b5xmbr1LT4ZSK56Q8qkcZcGadA5SBbxZQTWo6AfWjNhZ1seXa7/bT\\n\",\n       \"rpgqn1KF5d6xJKctXE/aRIwpg7SZzAGqc69s0/HkQx59vKDcLBtsD6QsqEsQkppo843f+HlOz61T\\n\",\n       \"qzegIdG6S5MBMpbIxhyH45QFQs5/4AOszje58vy3aQcSEw/JhmP6e7tcf/55GuOE5LDLUhTQU4KF\\n\",\n       \"epuGF9Benyftj0k6e/zDv/HX+S//+/+WZhQx3N5kezxk7fQyi9kid25f5Ymnn+BDz3wQpRUhnvWN\\n\",\n       \"kbfB9ms+XjnlagrgEkd9bAwYZTDCoES1KuYsTm9W3wpD7oTJjpOpttWrpMCnRjiv+4RYY3quT767\\n\",\n       \"ZVPIxAuPkgaZf5pCpGLyM7bJIgeDdfWqjSHJw6CZ4lhX2EDN+TEnSuflCEGQR9vxAm9SnhBRvl60\\n\",\n       \"RVAhEEiSNJ3ULkkyyOX5UlqXv8bYjVEIQYB1aeDlIiE/9Jivz2EMNB38uN95xfsK4EodRbkpJqRN\\n\",\n       \"Rws+mwqHZm9pk+98UiLy97SR6Fyv1JVzC2FlbBgLCKoIh+T7CCkmLOd0PYqKFGIKF5Bt/QqOoChv\\n\",\n       \"UvMZYFnlhe+ozOP9o5wI1naj0xQRr8vGEVXA7P65ZU2PQbFpFuKS6XtHG5fDGXGcW2Ki3zCdXFg/\\n\",\n       \"yRcM2HBU5aVabJAnpVkbWJH0BLNsmKoi8osXeEjpgZFkmSLwA/75P/05djZ3Obdxkf5wyNUbb7IQ\\n\",\n       \"+XhKYOIhola3ZuytJhfXTnPtjVfYv3MH7SlCnVhvj9qzDpqCkCzLiPwQLT16nR6Ndpso9HnqfaY5\\n\",\n       \"hwAAIABJREFUmacQnZSbh/t89Rd+iY9/5rPUDvt4kaAdBXTVmI3T6/zsP/kZ/uAX/x92d+8RBiHG\\n\",\n       \"WABHGLRFPzuuTFOcx8REJxBxVYZYE1HiLMq6RCi45brj4uZXxV0VFIMoni1xf5N33KKPSsv/t+33\\n\",\n       \"PWttekx0VGCBMRhpw6RNZm2RvzbH56SwVLp0xE5grVStnN09E7J+vgVHfv7DwJ+0oRDzuesl01iP\\n\",\n       \"k3lA5DRNjwVBeS/pfT3EdHfbskx7kibAwaQzpJSTgxStFEZpUNo6gKEYWktpSQRSHOVtpDnyG+0M\\n\",\n       \"Wjk8miumKRs7uGqMLkAW16om9awBqaKmy5S0NUY5rrJ3kuy9vBnNEkPMImBn1cUTx9syq22uPPN+\\n\",\n       \"2ipSklOUVRv6/dNMMYsBKyfWWIcrBrQ9eIvjmDCsk8aK2zffZefOLssbZ+jEIwIN/iBlmPVZXKiT\\n\",\n       \"pSnJYZePPf1JGmtnuf3qK2y9/RreoG8DJwcetTBCa+iNu6ytX2T/3oCl5Tnm1pft3E0y9u5ucXUQ\\n\",\n       
\"86Hzj3CmPkcmAnZef4MlP+IX/79f4HBxgZu9Qx566DzPPvMhxvGIZrNOluViIOFhhPXzIoWcmLzP\\n\",\n       \"AsnytfJzs9wg3G+8TgoC4oqt3DxniTmLe+7nrFQlGizKLvIvcMK9X5bJV82XKo6huF6OBVuFHVWi\\n\",\n       \"Xbe+03Yg1nK1uB/m0YbKdblfet8AvHywdARUYsrwQ0o5Ae+iY4uYi5POzGVo8cSdY4AU0oY9Mtbx\\n\",\n       \"vR/4Ez++Nrr0ccu/oi4u6LplwpHVmFKKWq02NegwQ6boDHYVoBbvF06hXHn/0X1dECxTaRaYFi5o\\n\",\n       \"7rej2z4+vmiqxC9CCDD62CSeOdHMtLl6Oc9yPU6Sdc9K92MxPV9iUTx/TtivRsNcu03nsMf62ml+\\n\",\n       \"9Vd+g3ZzgfMPPUymDd/45V+jGaeEvsdhr0uzVidMFUn3kGh5hV7vHmnaJ82GJEbTrjVRKiX0ApJB\\n\",\n       \"j8AYxr0e49GIqF5j9dwFxnc22drZxIwTvnn3gLA1x5n1Z3n2Cz+M7Mf8ws//PKQpKhlz9/ZNfuZn\\n\",\n       \"f5qde3cJQquVYPlxkQupXY7wONFRNeeqUtlS9b1QgO5arXIHUQWy7r3y/WJtFfkUxlknuTS+X/sK\\n\",\n       \"g58yWBebS3m+HXG65th8nXAkpTaUN60jMeh0+1w116KtSh1tni7x59brvbh0fh9FKMflupWR5rWZ\\n\",\n       \"nHpP/UmZE1f5BDLge0cn7VpnE5AOghDIwyLlEaONduXp1QY1Rb2Kz/JEKazYXM2ZKkCdtdsXO7er\\n\",\n       \"2gd2QqdpOtNKrjzximQlDo6UsLRYqia65VyrZNLHKXgAydHEvR/Yzpr0Ve8pleXPlO/87iiyY28L\\n\",\n       \"2xcT5ttYPxq+7zMcxMy3F3nx+ZfYv3fAU49/lDGG/YMDlleWkbv7aD0gjRUDNUKNFFfefov6aMDC\\n\",\n       \"SotxssC+6qK1YqQzWvUmWWIQqSLuDqjJgOFej1pznquvvsGz5y/x7Cc+hsbgjxRhvcnOYo3Xe7uk\\n\",\n       \"3R6f/kNf5Mr2Fq//5q9Sq2mSeEQQeiRpjMz9nxiTcxCFKFEfH9fy2Ln9dGw8K+b1/UA8CIJK7an3\\n\",\n       \"Oi5VADdr/ZVTua6z6g7VHk+L8socieurqMhXSjml4VRFhEzP62njt+LZMrEohHCiYx3Ve1Z4xJPS\\n\",\n       \"+wrgLngVyQVEAE/61hzYFLElcx8PUtpDCCFQWAbZN4rCYMkTkjAMc5DOD0tFzkgLkQe6PSrTndhl\\n\",\n       \"0CtMft0Yli7L406CMshW7ebF90Js5N6btDunBmZN6nIdi02tzMbdj6oVAmYYeU2VVd7Q3HbNUuHL\\n\",\n       \"HEqnnF91PapA/uRJfD/ZehFzsJxPHKc0621Uanjhey+ysrjKuzvbzM3Pc/udm0ilWF5dRsU+XuKx\\n\",\n       \"s7eLkD71VkirWSMQ0GrU2YpThFH4Gjwvw3ghB70u7TShtrBMrT3HwoU1/Myws3OI2N7j4cceZiFo\\n\",\n       \"Eo9TzFKbyx94hNFhl91vvsHu1hZpMuYv/Pm/xKB3iBd6SM+zJueKfN7mQJArU5fnSVn8VT5/OYl6\\n\",\n       \"PYlLKlKWZZP5qYrwYiWAm7UxuHV1530ZPN9rKpfnrt+T1l1503HFokUdq7DJnffl5wvjqTKeTHy9\\n\",\n       
\"5JKD4nohDnbX0f18n5TT+yoDL6uAuR1XPFMEEzWmoJjzs2KlrF6sBOkHJFkGQqK1IgxD0jjFN4U2\\n\",\n       \"S35YmgN4UbbgOHtXrmPxV7beKzrfrW9VKh90VgGvu/sbYyYe9Kqoh/JCcb3Cufrb7kFwEY8PjruO\\n\",\n       \"tZzPbHewRTpaYEX+xWIrjimqFp2NpFNok1jNoOMb1jQLOb35ldU9y0mIk9nMsjdjYwwID09CmmY8\\n\",\n       \"/+0X2Nvd44lHn+Qg8unf2yfrdGgEPj01ZnFuHvqC9bMNhhKaq6fY3dtjPpCcWllhpz7P4PCAXpzg\\n\",\n       \"BQ38KGDl3DnOPvYBRHuBu4eH+CureMt7DPe67HQ6bH7z25xZWOW5xz9M3WshDxJWo3keO/Mg/+JX\\n\",\n       \"fonPf+4zPPvMs3T7e4AlFgwCX0iE8dBGgTBWG4RqMC7PNyGOu5QoOJ7ikn2/ANfZQBI4gaLdgzc4\\n\",\n       \"vsEfH49qcUfVO/ebk+VUpn6rABiOPIO675WtuI2xYqIgCKbyKNaBC9zFBuZi71H/W2Kh6Fsvj61Z\\n\",\n       \"1M/Nt6ov7tfu9w3Ai4E/abcrnhNKoI1CiNxQQYDWtrFvX7/B5YcfIag3GA8OqUc1PD9A4KGShEat\\n\",\n       \"ThLHVv1KCLyCHVLKHmbClOy9XH75upsKGVvx6VIQ5Y4vJsKsCexSBWXquxhsV/e7mECuu1R3YpX7\\n\",\n       \"tdgMC8B3KYgy11A1HpPFr6epqOJeVbsKXeJyv7j9c7QxTPv0fq/JzbuqDp7wcidnFqyEsLrURkg8\\n\",\n       \"JN9//gVMqunsH9K+fIadvR1WGnUykzHSmt1eh8gIMt/nwac/yJlLF3n5699jtL/DYX9Ic3GFODX4\\n\",\n       \"aMJmm4XlJWLfYzQe01z0GSUZg3FC5nsMjcZTBoXmdmeP5c3bPHPuHEtRi348ZG59jR/53Gd5+oc/\\n\",\n       \"SlaE6VIpSimiWsNySoY8YrpC5m5Vq4CrTH3avplm8afGaopYMJxECJY5TleW646zm7f7vTxPXc+k\\n\",\n       \"7jhWEUaz5oZbZplSLlP2VRbZVcRSOS/Xe2hV3abPrKo3tfsldy3NcnnhpvfdkGfWQBcpyyxwF0FE\\n\",\n       \"pRRIz7Mx6xB853vf4//8R/+YT/3gp/ncj3wakyqUkZBpamGNOEltp8h8oolC19yfiDBc8KzyqFYG\\n\",\n       \"NBdo3XpXAVMV5e3+LjYId7JVnXgniXXM40mPQoOmAGWRG3BYYDrurKlqsbl1KFtolhfDBHgBP9eL\\n\",\n       \"LYN41eSU3rSxkZtXVf2q+nhW3kWqWozT93Pd3QljIsBYz3Zvv3aFYX/Axvo5DvcOuLZ1HQ4HLId1\\n\",\n       \"/NDD9xscxockytBeWqW9fIreMGXj/EVWPvo0m3fvcvrRh3n1+e8jk4xs2CPwfITSjHf3WF07QzTO\\n\",\n       \"qGWGOAzpBT6hFOjQkPiCt/fvMP/uVeT1ZZYeuYh4aJ3z3UdYXFxGSkOaQa3eIAgD+oOxrb7tEMAg\\n\",\n       \"5FHAhvL4ueN4RFkflzO7/X10z66zWf1+EuHlluvWpVwvl8ssj3XVWnHLrEpl8HUJhTJR42p/HOdI\\n\",\n       \"pvGoTJwV7a2yaq4ihtwy3st8djezKjwqp/dVhFKWW5U7AIqDCEBYxXdj7ILMtCaoNwiiGk8+9TR+\\n\",\n       
\"FPGPf+afcubMBj/6mc+xONdGpakDgnmHYK27PI4s/zzPm1CnVVR4sRO6O6nWeuJTo2qAyru7S/mW\\n\",\n       \"WbDi0y2vDODWiCbXXcYBQ6YXSnGtAMmiHoUjK7c+7qQs6n2/xVlWmZo1maGQ0U7LKas4FftbTOKa\\n\",\n       \"GvsSiEKTfLYc3+VAqpLRGopysByEQKOV4rd/+6ssLy6zv7vH0sIyg6s3CQ0cej71Rh3jBzTrTQZa\\n\",\n       \"sHbpAeKxDXXVqtdpGkXmS05duMC1q+9wcPs2gdYMuh2iKESPBrR9j9OLc9SDEFGv0Q0lDIf0R0M6\\n\",\n       \"vmGofL7z2wfgCR5uB8j5BsqHU6c3OOjsoIR1tKR0ShhG9hBHmNywXKK1QZvj4FW1jqrEV+5GWh77\\n\",\n       \"Ki5yMq4Ol+bOicr+r6D0CyKlTCGXlQruZ61dzrf8OQv0Z20E5TLKYO6+V5aju/UtlzWLKKxKbuSu\\n\",\n       \"Kg2fY8+fePc/YkpzcC3AZRYLbGV3gCioDkmcpSwsLNAdjWi12ozSQ5ZWVlleWeHG9ev8rb/7d/mJ\\n\",\n       \"z3+Bjz3zLL70MSoDY6mwKploGWBsueLYdXcgpJQTh/TuxCuzUe4mUQXw7nMuR1Ck8uIoNppiAdTr\\n\",\n       \"daev8skrBZ4np54vNpwqqqDMDRSgXN60bIWOL7hZi8podWxBuZuH2zbfC9CWvHRzOAY05VTeKI/V\\n\",\n       \"wdrgTaz0tDH4wuflV17h7u07PHTxIbyWT7/TZVkJxmTgCXo7uxgkLC0TbJwhmJvDJIJsqFi6eJr9\\n\",\n       \"uzd45fsvMu/VObt+mtH2PTyd0esd0ppbJ2j4xGZMPxvRu9sjjgfs7tylvt9BR4JxHdq0aA0VyStX\\n\",\n       \"0A+cR15Y5+Mf+SgvvvgiZ8+dttHO6xG9YZ8giPKzWGtVKlEgfYQ8zgVWUb0mP6eoAsYyoFgOtbK7\\n\",\n       \"7VjlIFMGTfdalR64C0yFF8kyten+uW1xtcFmzYMqyr14x73vyrvdze53u4GV5335vMvt5/JmMCvd\\n\",\n       \"j9Mop/cNwIOgOJm1B5THnVPZT19oUm1QKkNkCuEL/DAk6fQJAo+l8+u8u7fNqheQGMOHH36Upx76\\n\",\n       \"AK++9hpXrl3lx77wBRYX5vClRChF4PsYpZBZan2Na33kB8ErJmauO1wCc1f+5Ype3FR4Myzeq6Im\\n\",\n       \"yvExywNWJV4qfhdlu9S/++eyi9Lz8LycChdySjRTgGOxqIs2lr0xloHeKJNbtjKJBJ+3Nm+fW9/j\\n\",\n       \"VH6Zsp9MdnX8sNJyE9bPx6z5XK5vud+ksEENijHwvYjRSPOVf/MtTp26QL/TZ6FeZ39/k2atznjY\\n\",\n       \"QZgUz8/ItKDT3eHspbMMuoeMuyPOn11HJCP23r5GNBzzwr/9Kj/+4z/B3mIbNRCkPozHGZ3xDiZ6\\n\",\n       \"i26SEnfHREGApyTGC6n7AWkywpgxMYK93W2GuwfIxQaNxgLbt+9y6tIZZCYhyQjqEZmAUIGvvdy3\\n\",\n       \"h9WsKix23Y3vOLgW433Uv2XZ7/H5djLrXgal8ly9HyHjEiFlsCoICTdPV3tjVn3gOIfopvI6Ka5V\\n\",\n       \"le+2cRaglwG8StbtEiDuBlZF5VfV5/etCEWprAQc09Rs8aeSlMS37Hjk+SDtYvcTEJ7HmUvn+fK/\\n\",\n       
\"+ypRnFJv1DnsdlGex/qpU7QXF/jpf/KzfOLjH+dHP/dZhp0uARLf85BG4UvITOGnTeT+JGxZRitr\\n\",\n       \"5ensiF6uzlVQD4WeNlQfglQNZAEkBZiWT6JdWXVhzCOlRClNlmZTeYHlZIq+OvKNfbSwXY4jUylZ\\n\",\n       \"luWgZ/C8gto/6u+i7DJ7bCeSmJThyqyn1SkdUDbHdWhnTcgsy6bKF8LV6T/ZKrCoQ5XoIIg80lHC\\n\",\n       \"8vISnc6ANFFsbx+gVAAyotWIGOzfI+7tkxpJvVYDHTPQCf0kYX79NHWjuXPtCr3+kGYIO1t3STfv\\n\",\n       \"sb4wRz8Z89orL9Bq1dja26V/2EelB/hBQC1qsLA0z9Z4hBeGRM0WWQbNhRbzPmTJkGE/JqtL6s0m\\n\",\n       \"KqwT+jX8VCOlRygDUDEmgBTrhdDXHgaByjWyPKcP3M/yHLT9f1xsVy2OgJPEVkU6icU/TtUfiU5m\\n\",\n       \"PV/mNqu4u7LqbhUX4BI6Vam8GVRRx+6G4yoNuOWV83c5WbfNVWH9ykScWzcXF3/filBcirt8yAC2\\n\",\n       \"IVmWIZUkA3zPy2WZGqUVSguUkiwvLZHFCaM0pu0vsr4xT9RosKEUg3jMD//QZ3n9tVf5G7/zU/yn\\n\",\n       \"P/mTPHz5MsNBn5rnE+eROIQnSVVqT/h965FMeB44VIQLzkkeAsoVdRRqhkW7ygMchiFw/IS8SFWD\\n\",\n       \"OcVOIibybze5vq2P+nGS6wRQi8lQr9cdKqgQk0xTRVrbCONFOgJVJtaiBVhWhbYrt6Gcj9u24neV\\n\",\n       \"V0F38zgp//Im4m6m3W6fRqPGzZt3aNTbRLUmL3z/qzzxwafYvnkHkSXc3ryLjyAA0JBmikZzHtnQ\\n\",\n       \"nDl3if3+kGFnwPryMru3bnHz2hWaKiPTKX6jxp3bt1laXKTX64JWJKMxUkr6nUNW1lfw2jXkXI1G\\n\",\n       \"tkgvSzAC1tdXGB4cIJRP2uvz0je/w4dO/wQ6U+zcusPNF17l8hMPM9KCIDWIwEP71rOeZyQmV/0U\\n\",\n       \"pfa6/VcGt+pD3uMbqh2baqB18zlpQy5zrSeJBoq6F4RBGQDdTcidP+67RSo2ieKzqrwqnyNVG6BL\\n\",\n       \"dBSgXxWsuZqDObrnipyqrD3dOlRZqJ+U3tdDTJiW8Zbl4cYYfAKCvE3CntjgiwBhDJ6UhEgW5+a5\\n\",\n       \"uXWXx1bOs384QPZGjJKERqtJ4EU898zHMEbzz//lL/Lxj36UT//gp0jSBBmFCGMAjZTWFaXWdoMA\\n\",\n       \"AdIeGiqVIcWRv5YCNAuDhuIgrdhtC3ByzY2TJJk6oHEp3pP6p5hAWukJMJcnWfmwsrjuTm7brmyK\\n\",\n       \"CpLSw/OmfaUX77iHg1OiIDUtv3frWAYSl3pxx7RKrOKC+6TNJ7DDRer3+5P6lsVKAFG9TrfbY2l5\\n\",\n       \"jSw1fP3ffYvdnT3qG3NcvHSRrRvXac0tMe4eoOIhsc7QGHrdAXOrawRhAz8esdCSRNqweecWYZLg\\n\",\n       \"SUEgJfUoItHKmsxHIVoZZL1GFIXEo7EV12lNFo/xPUmSxDYAsTCgEkb7h5iR4k6asfVvIj7x8U/y\\n\",\n       \"o5/9PH/tf/6r/NQ/+b84vNenKTy8DIYBJMIQKm1tIDBgjm+e5bGsuleeZ9Op2qrYfcelmE+KYuXO\\n\",\n       
\"eaj2SV9+3q1XFTiW61UAf/HdFdOU14FbB3duuRyJ++cSZOV+rVo37txz532ZY3UJnzIHcD+xiZve\\n\",\n       \"dwAvy5fKjZHSAy/f8SQ2Sr0RCGlItMLLJJ/86Me5/uJrdPo9rF9en7lGRKPWIDOag8N9kizmU5/8\\n\",\n       \"NC+8+Dx3tjf54k/+QWphiNAZgbBGEjoZE+TBHxA+wvNByNwJfUkWbMzkULCgvpVSE4tNl0L1fX8i\\n\",\n       \"q3UpkVksnHsCfWStiQ37pI+iC7mp+F1QyGX2FY78uBTPFODvsovFJCu4jHIdpZimlMuT3i03U9Mi\\n\",\n       \"lOLQ100TUVnJarPI634sZHGIW5Y/FotrnI6pN5qMhjHJWPGdbz/P+fOX0UoTJwmb9+5Rb7WIfEk9\\n\",\n       \"rbN/cMAw0Yhak9byKbb3+/QGQ85vbCCTMSIeExmF0IJhv0c/GSKjAKMNZ9dPMdCHZJjcDaKif3iI\\n\",\n       \"TBRZZqxaZaYxCrbu3uNzX/gM7165BkoybkREa8vc7uxx4aEP8vADD9GoNag1Gsg0I5CS1DNkwh5S\\n\",\n       \"B7nqqMdsgCtzefcTiUw/N/vZYq65FpjlNFMz6QQKvKhzFXiX53M5H3eeuBxHFWVdntNV3HDx54o/\\n\",\n       \"3LaWtXfca+XDy6r+cQmUqvv3o7yL9L7qgbtilDKrVHxmRqFyB72uu0jP95BoakgubJzlW1/9Gp87\\n\",\n       \"dZpOt0s9jOz7aUo6HtMMQhpRyFiPefrpp9na3uJv/u9/m//ki1/kuQ8/xbCzh6czWvWINIkt8PpF\\n\",\n       \"JA+7YATiWF3LwFU+WS9YL1fmVh6YYiDhaEEUeqrTlCj5uer0gJeNIKrqCHZyFZOxsKRzxRhunu4E\\n\",\n       \"LC9SrfQUBVNlNTppv5lebC77WWZBi03PDb7gUiwnsepuHscoqNBDpZrQq3HlnXdZmF9iZWkZreDm\\n\",\n       \"9esMhwOEFMw1W/gZzIc+jBLqC0t4zTn27m4y325jjOLGtStInRJFPnGSoZXi8KBLKgxhGLK6uESj\\n\",\n       \"2aQXp8TjGL8eMej2qC2ust3rsrRxirlTp/H7Yxqex6kzF7lw/kF8GfLa7Vt84EPPcGvzLkko+eBz\\n\",\n       \"zyCiBqNxTK3dJuv1ERp0CGMkNS0QBlQFzpap2N8tiNu+u/+z7jwpJ9das2pt36/8k8QHVdxalQbZ\\n\",\n       \"LO6j6rCxStXPJbSmuNB8nlaJgN3+mNV/xdx2KXiX8HDbcL/0PlLgHraurvhkWnZcgJOPzGPp5Q01\\n\",\n       \"hsQY/ChEZIbTy6uYmo8wCaiELDFIIAxCVk6t0+n3kKEkiJbojQesraxw+dGn+bVf/RUOD/b4zKc+\\n\",\n       \"QSgMo2HfWrlpG7Xayw82rbN3gUJPgUMURXlrHEDNlN1scsrbpQaOg3L1qfZ4PJ7I04Ep9qtMlfoV\\n\",\n       \"cvGCglJK5Sw2WNNra7lXuAbxZM4tGFWpYlhssNMTVOP5wdSpetkXxmQi6+Pm8AXQFg6RiucnboId\\n\",\n       \"HxtlJ0BVqegXpdRENXWqLtpgMkEg4caVG5xZO824P2BpaYleZx9pFKPhmFAYAk8RC0l9aYnLTzzF\\n\",\n       \"IMlYzDSB0XS6hyTJCJMkBKFPEISMsiTfGBNqQUCqFSgIowjf9+n2e+zvaZ77yMfobt/l9GOPYmoN\\n\",\n       
\"etfuoOOMl15+nQtnzrAyv8Tjlz/AamuZ1WfPcOPdG3zgE89x/c23CKOAzYN95j0fmYIRkPiKWmYj\\n\",\n       \"7Bj/+EGay2HdD0yqxB82mMhsaHCtqKFaa8rVxnLHqTzHqqjUYs5U1f1+FGtRRtUhYRXl7eZ5Ehfj\\n\",\n       \"rrmyqLJ4tpC9z6qbm1IndoH7eT+Os5zeNwCH476wy5NMCBsE1UNYdTABwggkgkxYK81QeCRaEbQa\\n\",\n       \"7Nzbot1qk8YxUa3BcDig0+kQ1Wu0ghadww79YR+kIPVq/Minf4i333qDv//3/wF/7I/8YU6treAJ\\n\",\n       \"GA36RLUa43hswcYPchn09A5bNXGFEFbNjmnqM03TKQ0T9z2X4nBFL7NOud2Du7KBjhtyrninyF9p\\n\",\n       \"dSxfw7SbSxdkq8bEXYjFvWKxVlErLqC69S76owzaLrVefJ+1MIApTaByP/i+T6oT6vU6Uvtcf/sa\\n\",\n       \"ly4+xGg4pLe/izQJq0ttzDhk1OvTNzEJknPnLtFYXKK3t8/Djz1C3RO89M2voY0irEfESUo9lOjU\\n\",\n       \"4NcivMTQWpgjVhn97oBWEFGLIuZ9jyCKkEKysXGWGFBhxOHw/2fuzYMsy+76zs85d39rvpd7bV3V\\n\",\n       \"1VW9V1f1pl0CWwIZiUUwgwwOzLAMWCwTJsYT4LGDiImZMbZngnHMGAwzDkAgIxAIkAwSIEC7hJaW\\n\",\n       \"1Gt1d+1b7svb737P/HHz5jvv5suWYDzRnIiMzLzvvnvP8ju/8/19z+/8fgFZprh2+w7dXo/HHzmP\\n\",\n       \"a0ria9c4duFBEstktb/LAyfuZntng9Dx8qw8SUYWJ8QC0iTLU4GVUp8V/VuWy7KC0/u2XITYC1lx\\n\",\n       \"SCnvd0wLDnWYxVQGLtOokWmWQ7kd5XJY/PEyki2/T1/kCipP/265HdMWnGlK97C664p6mlVy2N+H\\n\",\n       \"lVc9oUMxoIVZcaCzTQlZLlDKEJiZQCqxlwdP4GSS1DSYO3aEtbU1Fs4tEEYRW51dwiCiWquRotjc\\n\",\n       \"3iUIQ6q1GrWax64/Ymu3y733nMGUZ/mlX/m/+cmfeA+uZbK8vEC/s0uzOUMU+PkRcm1Ff6VOTrMM\\n\",\n       \"lR48SakrmWmKWleWBZIsm3BF/5RRffHcw0zHcZ3F1JjjhblbRhPl03Ll+hTtL9D0NOWgt1sf67LX\\n\",\n       \"SZlSKzaBi3ceFo9Gpwv0TdMwDPPJZ2RkoeJzH/84rWaLF55+lhPHj/HylYtYhsKouLSrDertFrcG\\n\",\n       \"OyzOLXHPfQ+wNRjRG/kszM/SW79NHAXML8wx2O0SRCmpH4BhMgp9KjMzzC4u0dnYwnIcvEqdmUYT\\n\",\n       \"y7LY7uyydvEK8/fdw9ZuH8+wcBwbI0lJSBgRcWNrhSPeCRbaVdrtGWaGXQhjNm/eoVKvsjPoYVVt\\n\",\n       \"lCExhMSREmlCmhyM4ldWOtOUwGE+85Myc7jy0GXhMESsy6j+vWnz6DD0O81q+JtYEsWzykp7Wv8U\\n\",\n       \"ddLBVbGnNa1e+sKgy3z53sOuTbMCvt73p5VXVYFPowR04ZBSkigQKrf6i+zRmcoQpoklJMLP3eCO\\n\",\n       \"3X0Xz33gLzlx8iRRmlKdmaHleEjDxDBM0iSjtedXLA3B0fkqi+0WW1s7hEnCo4+9hj/40J/w2IXz\\n\",\n       
\"VOt1LMclDAPiKMC1vDz3nsbJlhVaUQyZx2wpFEnZ/ahAltOQcnnXvDzgZaSr96XuTlhQOuNUaXto\\n\",\n       \"XZoTdc+yLE/CyqTgFkq5PFbl8StKEWe9fF2fLEUdy0i/aGN5AurIbhrdVJSyJ0shN/vcvJkiYos/\\n\",\n       \"+9M/5bGHnmRupsWws4uRxvjDHmZoM9rapO5VSV2PhaUjuF6V7eu3qdUrRIHPzWtXyeIQ07LxKjWi\\n\",\n       \"KCPwuygzRZgGi0eWCZKYzJA4rsPm7i5JnLC8vEymFP2VNU6dOU3omDQrMyQLswTb22Sk+PGIyzde\\n\",\n       \"4tmLX+PsyhW+c36OumGTCbhz6yrtSpXmbIMokwyJEAKsMCUoKIbs66O/shLT0ep0hZj7jB+mLMub\\n\",\n       \"0WV6RAixv99SdgGd5laqK87DrIOylVUu03jt4ntlUDjNw2aap0kZWU97xzT6pVx0i6Vcr3L9vt7i\\n\",\n       \"Wy5fV4ELIX4NeAewoZR6eO9aG/hd4C7gOvC9SqnO3mf/HPhhIAX+O6XUn097bhFPu0Bakx4XYwUu\\n\",\n       \"pYGVKhKRkbC3eZBBIpI8QWgKwpTcffYMf3LzN+j7Pl6ljrRdjEoF07S5c2uVYX+AFIKaV2Gm0aRZ\\n\",\n       \"sxGWweL997HTGzG/dJTFI8f5xKc/jldxuOeuY5CEuLYkTnLyUbcasizb9+0GXZGA2FMg+uAUnxeJ\\n\",\n       \"GsobQOWDB4XwF26L0+iEct+NBUSR/5mhx78Iw3BisqVpnifU9ewDKF+PvKZPTP09Oj94GNIp5wPU\\n\",\n       \"PXf0xarwkCmjL91KmVbK/LvuUimEIEpGrKysUK9UsQwT0/O4fOkGUiYYKsE2baIwZNTHoIa/AAAg\\n\",\n       \"AElEQVSLibCQ0uDOnRU812V+do6tO9fp7exAHBFmGa3WLF6lScc02Op1aM7MIIw8XnijUqNSa3Dz\\n\",\n       \"0jWGgyHVRgOExPEkqUgxHJuYhO3uFsONVQypyDIfK0kwMnj+U5/kypU7/PhP/FMW52ap33OWj/3h\\n\",\n       \"h3nHd307HTNhV4VUFBhBzECmGJaJmaqpikVfIMuW2Tdipgtx+GevtOldfm55PHV+vvjMcZypezzT\\n\",\n       \"nnuY6+1hXi/F9/X2l8Na6PeUrcRpHHrxvfK+lj4Oh/V1eSEq10N3Sf5GyjeCwH8d+L+A39Su/Rzw\\n\",\n       \"MaXUvxVC/Oze/z8nhHgAeDfwAHAU+AshxFk1JbiwrkzKm3V6ybKUbM8DxDVMTFNBRo4QgMzKE4ou\\n\",\n       \"OzVm7lpit9/Fsh2G/pCV1TUqVoVUGswszGMkKa6QdLq7bO/0EFISpxmG5VKp11GZ4g1veDOf/+sv\\n\",\n       \"kyQJD9x/BmUI1CiELCOVEO+dLjRMA5Wp/CScyLlkZQhQGXryb31gCiVdoFt9kKch3EL56RseZeGY\\n\",\n       \"JhBSFpMgP6xT3GM79l4CVxDSwLQEKJVHCylNKt3a0NGtjr50JSsYb+WWKRcdXelWhn5fGaVM4zSn\\n\",\n       \"lUyp/SzpYo+7lYYgTWOyTGHbFV54+gXiIKHWbPD0xS9ipRG2oag16hBFmBmM0oS64+RZ5VdWOXrq\\n\",\n       \"JPHQpreziSMltldHpBn+KESaFq0jR5DNOo35OXrBCMeqUHEqrK+uYbkmioSdzhZHjh0liiOev3OJ\\n\",\n       
\"0aZJxfCItnepphKVJkhp4WCAyOinIVt+n/e+9ze478QJvvkd38q5Jy5gBQkVx8ZSFikZmQTTYBzl\\n\",\n       \"JU1RKRiGSaLAsE0ykUdjNwBjL+hVwqTFeBgKLETxsG4fn/IFEHvx5Mt3FYqxkIdcPgvQoy/e36iX\\n\",\n       \"UdlS14vcs3z3vqW1YTodM83CzZ8zeTJav0+/X5fp8TvGIA4tEfh4wVD7vw+zGPRr/0UQuFLq00KI\\n\",\n       \"k6XL3wG8Ze/v9wKfIFfi3wm8XykVA9eFEJeBJ4G/Lj9XR3r6YJZNDdvOzUkyRZYkqL2GG3sD6psp\\n\",\n       \"AkG7F/Pm734bFz/xJe49dZqeH9Ku1KhJj53QZ31nnaZp0ag2WFpu4zZOkES5chsMBqxvbDIYDEiF\\n\",\n       \"Yn75GCs7fT75Wx/gx9/zY9TjDraAUOZH/i3XwUiBKIEkD5IUkxELsC0Dq+QZIqXcP/JecM57fTux\\n\",\n       \"sVkosyiKsCwr34SL4/3VHqbTGcUEGCMYPdPP+KCFflI0/944S4hOPejvKKMM/Z1lZV8W8GmeBPoi\\n\",\n       \"of8uZKE8gYrPDzPnM5XviWDIPPqkKYnimCSNcF0bVIWXn7vKvfc8yG63iz/os2AK2PNzN1NQwsSs\\n\",\n       \"2MzVqsT9DmY6Ih5scfHpG9y6co35+gwVt0aSJkRxTBpEhDWLxdOnOXrkBM8/9zw1zyEajhh0e6gs\\n\",\n       \"RlgGftij23OI+wGDLMGdncVtztJq1OjEXRIlMBOoKAszS0lUxsi2OP+ax3npS1+il/ksPHyG1Zfv\\n\",\n       \"YGRVGjMNNqNdTNuikiiyOEY4NiLJ4xPatotnuwyjgIwMVIpIU2SaooTYP1k8zfor5KP4f1IhTpY0\\n\",\n       \"PWiZlYu+WBf/T6MuvpGiL/S6PE6+S782XjzGf0+6AU6Ty6JfdM+m4jPdoplGUeXP0sHnwb29PJRB\\n\",\n       \"kdtWTCx6Xw+ZH1b+thz4olJqfe/vdWBx7+8jTCrr2+RI/EDRD27oZnmZDihOMBYdV3gXlIUhSRKe\\n\",\n       \"eOQRvvAnf8FmZ4MkBkdUUNUKC615FqsunmnQXVsn6PdZ2djMBz1TzM7Ocdfxk0hDIgzJIBjiRwFb\\n\",\n       \"W1u87zfex/e9823MVD0kAkulECQIQ2K6Nqi8E22Vx8uO0mTqIOjmWXFNV3y6kircE/Xd/WkKsOiv\\n\",\n       \"goIoK1Kdd1dKTbh2FZuUQoj95Mz6Bmv5YI3enqJME/6iTochqrJfefFb3+TU5aJsGRws+WRMkgTL\\n\",\n       \"tomigDRJqFaqpGnGH3/ooyzdfQ9mKgl3OhhKEagEVyjMJEUZBjgGx5eXmT1yjJevXqbVmoUsY3t1\\n\",\n       \"FcKQUHWxazVsaRGmKf0gRJmSertFLxhheB7zi/NcfvZZkiTBVAKRKpJRyO7aFu3UZsE0cdM8r2U2\\n\",\n       \"V2dzsI2TSdw0Q5g2tcTintocN3opz3/5Szz8xtdSa7TAdumY4EY+QZzCbIXR9jbHajOEoU+sFKEB\\n\",\n       \"hmsRqpTM72MpsZ+7NDUkgZG7wZocRI26d9Q0y2ha0ZVNMSf168VnutIuLEkpD1qSheyWn1FGxcUi\\n\",\n       \"P032dMQ9Vs5QoF69Pbp7r/6eslIvKJQyNVKW36/XT9PvyW1WvR66lfH15T4v/583MZVSSpRjtJZu\\n\",\n       
\"mXZRb9xBZHgQeekcma5gdMVjphK35hKLhPn2PKZwGPoRg9UOhmWBhEatSsWusHT0OOvrm/T7A7rd\\n\",\n       \"Hhtrm9iuQ61epVqrgWHz6EPn2ens8lt/8Pv82A/9IE4KFhLXtBhFISEKJQUGAlMJrDTDMU2UPRl+\\n\",\n       \"Vhf4aXGQi/bpDv5RFO0LWkGl6K59OtVQKLsiZ6ceoa94r45eClSvK0m9HtrYTozDQQSi9lFEuU6T\\n\",\n       \"E2k8hoe5uOmKfRqiO0yQreLsgGIv1ZjAsj1GQcqtm7fZ2RpSmZ3h+MIRnrvycU4sH6E/3AK/hwpj\\n\",\n       \"eiIiq1Q4Yjrs+gmxMFhYXGJj/Q6j3V2qhoEjMrqbG9SbLUzHwSGjdXQZu1bhxo3bVBpVDNvCcEzc\\n\",\n       \"SoW414W9BT1OYzYbJp6haPk+C7RpVRo0mzH+yiaWdFi4a5koiZhfPoZ65goPvektOG84x5W1bU4f\\n\",\n       \"rdNeXkRud7Btg4997pO85fHXcuX6Kovzs/hpQGJKlIgwpYFlgBmnGEqQCEUiFIHMqT5zrwv1DWM9\\n\",\n       \"Fn6Z4jqsTFBnJUpm2ngViqlQqGWlpcts8V1dzsYpyzLtWZPK0TQPi899ULbLc0//Thno6O3T3//1\\n\",\n       \"XAeFYGI+7N0xUZ/D5PwbVd7wt1fg60KIJaXUmhBiGdjYu34HOK7dd2zv2oHy67/5O/uVPH/uQc4/\\n\",\n       \"8tC+4OgovLzhoG90FQpuX1GNfO45cw+Xr1/FPeGAMnHacxxfmEcmAlyTYegz6A3oDW7jBwG27dBq\\n\",\n       \"zOA4LtVqlW63Q6fTpdfvYloGse/z6Gtew/t+7wP8t9/3A0TDkDCLcWybSCoyoUiVwshSskwRhckE\\n\",\n       \"11heVXVlq7dL3/ATQuyj8GKzVBdaXag8z9v3AilQbPH84vv6u3VkXEyccjo33a1RV976+/XrUw/Q\\n\",\n       \"TFnAXsmS0JGOjuzK1Ey5CKEQoviuRAiHYJTQbM7x9Fc+xrGjZ9gOenz+83/NzJ6l0l6YI941CHo9\\n\",\n       \"giTEa84hK00ur6/jORVWVjfYvHWLhmVTt0zSICDwRwRRiHIc5o/fxZmz9zIYjRBSUm3U2FpZJU4T\\n\",\n       \"ao06wjRRYUToh6goQcwbWJZDnKXIRFG3bJZOnKSrXC5fv4awLM6/9nFsBTVMvvCZT3Ky6XL2/guQ\\n\",\n       \"GbRPneTirb+kuR1yrm/y8T/4KI9929tZEQpHWKASjCiPE97v96h5HhGCBIgzSSYFpjTIOOjNY9v2\\n\",\n       \"xJwbI9qvn5GnbEUdZvbr89YwxlH2dEur/GxddsvvnK58xwGsxrI25vPLi4wu0+WFRwc85ToV9ZqO\\n\",\n       \"wHW6ZpK6+ZuULz/1NF966mvf0L1/WwX+YeAHgX+z9/uPtOu/LYT4RXLq5AzwxWkP+NEf+kcTQlLm\\n\",\n       \"toqGT8vcXF6BCyGw0owTx0/yxc9/hcfPPIofpvT8Ef4gxAwUvSwisSQNZeB6Jq7rkCQpQRQQxyG7\\n\",\n       \"nW0c22Z+tkW71SBNYoajAb0o4My9D/Brv/lb/OC7v59gMMQTApWmqL0MPwpFZghs08bRJkLRtvKB\\n\",\n       \"E90UnBbB0DAM4jjeD1+rT6ri2Tpa0umSopT7Ut8sKi8IRb2KZ5ZP0hX3Tdsl179fPLu8SaOP8zQl\\n\",\n       
\"Xp4Q5bYeVjKVQMErYmEaHvVala995Xmq1Va+ZxLG9Ld2cIXAVBEiU0ivRqps6p5LY+kYK6OYbhDi\\n\",\n       \"eh47nS1G/SFVElAphsqQjk0/DohVRiMK6Gxu0R+OOHX0GCjFauBTnH2p1OukTkRKftp43m4ikpRR\\n\",\n       \"GLDud9i1UkBSP1Jj0LXZ6HeJvvAUqjvi6NE5TJWxefEKteoCfnfA4pFZbNsifuEqr/FabIgh/+H9\\n\",\n       \"7+f+h+/nmy9cYN5tYAyGmEmMVamSioxYZKTkeT8dJSGFOIsPKKY4jvdpSb3vX0np6GOpj7s+hoW1\\n\",\n       \"dRBFJ/veU8Vn5XMExd/FhnlZbstyUlBoBxeOjMKfvfyZ/r7yYlWACSnlBKjSQdG0RWQyDd1B7xXd\\n\",\n       \"AjksabQQgiefuMCTT1zYv/Yr/89vTr0XvjE3wveTb1jOCSFuAT8P/GvgA0KIH2HPjXCvIS8IIT4A\\n\",\n       \"vAAkwE+oQ5bxYkXW3eB0BK4jAdu295VYUcq+qEopXNvl3nvuZWV1g5Ef4NXaCNeFUUIYDFjf3sJo\\n\",\n       \"VsmUSdtycFyHdrOFYZj0e32Ggz5bQUAax1SqLvNzsxxZXGTOUtxeW+Gh84/x73/91/jpH/8xIj/A\\n\",\n       \"NW1klgIKYUnCNCZLIojGpmGh9MomYBmhlo/d63G+dUVWVmjFBCz6VFem+uZU4b6ne8Dok7Tsc6s/\\n\",\n       \"p4ghnlNdYv95hV932XVvbGaOY4cXz9fpr2LcCmQmRHECcLJeRcKPaSX3N1YgJUJlCGFQcWr88Yc/\\n\",\n       \"wmMXnsRxTJLdXVqWiT/q45mCqOcjanV2DJdjx+/jvgfP84VnX+BUq0HY67CxukYNgWta2KZAOBY9\\n\",\n       \"PyBFMdNuUa3XePFrz2CYFkfa86ysruJ3urhCQByDZSBsG7dlIS0Pt+KysrGKU/G4tbXBa++5m/kj\\n\",\n       \"y1y8chlzqcXWZgcjAykhSALe9oY38ofPPMdxWxDtdBmtbnH+sXv5zPZnqM+7nDtxD8P5Rf78S1/k\\n\",\n       \"9uVrvOf7vw8bgWU5eBWPIPHJsgxLSGQqEVFKnKakMp0qh0UYgkLecmWrXkHJ6GFdJ/ljHQDoG/aF\\n\",\n       \"HBXIVKcMddnWg60VMqbz4zrdo8tPmub3x3E84aqahwWYzJQVx2OlrCvv8oa+74fYtr0vu6Y5yduP\\n\",\n       \"+2M83wrgU4TjLeS/+KzIlKXUpOtrUbfy/D3MfXb/3d8Iz/Jfuggh1F/96e8fQGU6daKvumXf4TIX\\n\",\n       \"XAxCGKWoWpU/+pOPEOyMuP/cBUJh4SmTmnCxGnW82SZJd0gYbZOkMWGYJzkwpIHnulRcFykFSRwR\\n\",\n       \"+EPSJEWaFtKxCEVKpGL8QZ+//8Y3IgYjZBwhhCKSKZHIsDAw1ME8l7rA6pbF9FgUcr+tOhrJsmxi\\n\",\n       \"w2cad6ej1+IenT4pK1odnZctIG289n8XCKp4tm4F6O3Lf4wDddXLJOpJJvpr/L5xwK37z7/5wDOe\\n\",\n       \"+8pfgswnRNVt0O/6fPFzT3PpxeuY0qE+W6d/9TpJvwcuEMTIzOSm78PSEc7c/SBJLOiKjPPH61x5\\n\",\n       \"4Xk2r11B9XdQwx411yQTGZGCyDC5++wDxAg2Nzao1euYhsXmxgauZeCaIo8HHodk0iRI4N5zj3Bk\\n\",\n       
\"eYHnL10kVgleonjwrnvIbMlOFvL88y/QSE28zCRWKUY65PGzD3FrtYPRXuTe+x8g7O6wdKzNiePL\\n\",\n       \"PPtnn2Gxucwt0+BSNeGlzVv0tjd59/d8F+2qh02GkWW550mi8tg+SiFMgzCLDyDrXElPykuBFJVS\\n\",\n       \"XHjttxzo8y995iP7NF0+7mP0fpg+Kd6bppMxQOBgJEHdStPlsgxeJuuczyM96mfuQjvpEpv/MFH/\\n\",\n       \"4n7dsi2K4zhEUbR/MKmQ9cJi0eVbV9blhapA3VIWwM6YUND6vNOtX8MwuPCat6GUmmoSvWonMceB\\n\",\n       \"oA56MehmF0xyrkWDixVe90Zp1JqM0oRve+u38O9/6Ve5zwAXwcbqOqFbo3v7Js25OVzXpd40ME2D\\n\",\n       \"enMW13EJwpBer8fmTgcBVCsOrdY81WqFnc1dYvLQpNv9PvNzbf77n/s5fuUXf5He+jpZHCFsA8M0\\n\",\n       \"JnzAi7YU9Z/Gremc/rRNRz0++rRA9dOiERb3wuSO++HCLzQhO+jaN14E2J8QxfN0ZFOMS/G/YVgH\\n\",\n       \"zNPyZuVYiP9mUdj2i8yTXhuGxaA/Ig4zbly9QRrFWLbJYGuFzVtXEVFIba5BFmegbGqtNo0TdzEK\\n\",\n       \"A7bXdzh1/mGuvPhVVu/cpG6bGI0GoyTGVwlhmhGkGQtHF2m029xZWaNaqWBLydbGOt31DWLLpLbQ\\n\",\n       \"xjIl1VqNhbtOMcwkr3vLW1icbfPg44/Smm/z8Q/9Z9ZW1zh7/iF6/W2smkfaC5GGxFCwFvh88GMf\\n\",\n       \"4R8cf5TNr36R5MgC9XMn6cYBfcvg6KnTdJ+6wj2PPUJUi1leXuTO7ib/8n/9BX7hF/5nWk4Ff6fD\\n\",\n       \"QrVGmgX4gU+11SCIwgNJuHP5meZjPbn5Vy6O45AkCVEUkaY5TaEDMMMwcBxnH3XrHk06oCmKPj90\\n\",\n       \"eqWY22Xwo5ex4ss38AvEnD/ToMhCVFgZud6wDuic8rPL1IxunRTPg0l51y1epdIJoAN5GknDkPuL\\n\",\n       \"iJ7dqgCmkHveeZ43ce2w8iqmVEsnlI1SasIsKQShfExbn/xFKf72/SGWaTHXbGBXbUbBEIIB87NN\\n\",\n       \"avUZGjMNurtdBipiOMr2Om8HJQRCSKqVCl61RdVzEUIxCkJ6gx3SICbJMgzH5PSRk8Qq5p3f8S7+\\n\",\n       \"3a/+Kv/we95Fxa2QxREyk5hSYpS4waJdhYAWXjfTlFWBFIr7i7jc+ue6wp6WxKD4TOfv9P4uI119\\n\",\n       \"E1Kvk5RyYgHIr2cTY1bcV3xP35zV/y4+LyZO2f/fsg7mUCxP1mlFShMhTYQStNttPvbRvySOIo4s\\n\",\n       \"LZJGKTevXMIwMxqeS9DvEqUZkfSoLyyyUKuxub3La197jqE/5PrKDUQSkgmBbVrYtRq7/Q6jVGF6\\n\",\n       \"Faxagxu3V1BJSrNdY9TvM+x3qbk2jlL01tdZWlrkrmPHmVlcwpiZpVZv8My1qxw/ukxnfRsbgySM\\n\",\n       \"WL12k+pMlbc88Rq+8MnPsrmzQ3eng20JZqpNZMVkN9jkzs1LnLq7hWtX6A58mK2xUROojRVk6LK7\\n\",\n       \"GWJ7Fv/kh36Mj3zkYxxdWuINj51nY+TjqAyn6hEnEVEcYlvOAeSrL8Z6f7/SIiqEwPM8TS4mPcqy\\n\",\n       
\"LGM4HE7Ixli5HfQ20TfOi+vljWyd8phWyrKYy5+FvhAV96TpGJzon5UDrBXv1Rcb3VlAl9FpMlt8\\n\",\n       \"b8ylFz8ZaTp+f0FfFUxD4To9LRVbubzqCR1get65okOKI/c6naCvnPqgWq5J5kcMdne4/9z93F69\\n\",\n       \"xYW7H2B7p4sfh8gw5f4z99GXCaaoEkUJa2trbO3sYEiD4SA/QDPbapGmUc5zOxYmgjRJyKKYcOQT\\n\",\n       \"GwkpMHvsKP/2V36JX/j5n4fRCDX0J2LDlbnkwvSCcXIF3WQrBkzP5KPzhMUzdSWn84NlJF58r+AA\\n\",\n       \"dW8THVmXn1HUoYzu87Ga9DMvng2TE69sUpetCCi7TqYH2qa36TCFEsUJArBMh7XVDT776c9y5q6z\\n\",\n       \"bK6vMdeeQw36pET4saJhmfhSMsgSahWH2y++QOZaBP4MweoGKvZxDZPI97EcF+G4eMYsSexTbTSI\\n\",\n       \"lWDQ7dLwKvSGfcJwRH6IMsOWEgsTEcVs3rqNnyhee//D2IYJ0iDsB3z1s59nvtXgdW9+M3/0R3/A\\n\",\n       \"6173GsLdDkIKgiigogyMfkStUeXZzWs88e638pnPfZ67zpxi5vhpdlc6LD54hva3OrR6MU6k6Gxv\\n\",\n       \"IaVDvxty/uzDdMIhv/y+3+Yff//3Ii2JkcRYcUzVcYmzybCoOrWm93n+9ytsHGu0VjH+eugFKSWu\\n\",\n       \"6x6w7PL5PN4E12W/PM5la7CQhbL31nhhUBPhK8YIe5qHjJz6bJ0W0a2Cct3KDgBFHYIg0GiUMf8d\\n\",\n       \"huHec3MaRQiBbTv786EYA90jLIqiibSGh5VXTYHrpkOB9HSFUzSq7FNcXvn0Do4Dn6rlEGUJd999\\n\",\n       \"F9evfYr5uTau65EIAxmmvPTSRULXwh+mWKaNV6nw8EMPIqRBlil2d7YJQp9erw9pipA1UimpVDxM\\n\",\n       \"KUEogjTAc6ssnzyGMCX/8b3v5d3f/h00bRuSlDhKUBIQYAhJtpdEGMNACSBTJFG+6gpjEnmUFy1d\\n\",\n       \"mcNkFvZpMYnLForen/mGcYF4C2GfvoGpRyjUr+dH1Ccnnj4Gel2LjZrD9jaK+/NxJO+j/Qzqegq5\\n\",\n       \"w2NdJFECUhJEIR/8wB9CarG106M/GGAg8Qc+tbkmKuiTphlRGlNvtXBtydbqNmazyvNf+RLhxgae\\n\",\n       \"Y2LbAstzMTGJghBpOcwuzHHsrpOs376DMkYYbo3BaJOt23doOx71iofMUirSJQsTLGXQ3d7lpRcv\\n\",\n       \"YjSazC8ucf25F2h6FV689BJHHzjJd//g9/Gxj3yE2XYb2zLIgpCWtLA9D5WkDIhZ6W8T9Po897kv\\n\",\n       \"Unv7HEvH7mJna5dTFx5i9emLDNZ3qdoW/cCn6nl0d/ookfLYI4/xf/zSL/OP3v1fc3p+DtepEo98\\n\",\n       \"LMcmVQphiHzBzBRk2V44CAOk2LNGBfIVNzGLk4/Z3lhN38/RwcJ4rh8Mv3rY/+V4PFmW7Z9h0N+R\\n\",\n       \"K3YLKcU+nTNWvgcpmvze8bzTZbdQqoW8NRpNsiwlifON/CQdz78sy0AVG7mKarWay2SSkGVF4Lni\\n\",\n       \"VGtufRSx/i0rmqBHi/YVFndhDUwGqTtYXjUFriuGokzjaPX/dZOiPIhSSiyREskYTMl8rY6RpNxa\\n\",\n       
\"uY1MDExhMzu3iHmsgnAc4mTIaDgkCHyuX30Ox3Fo1GeouiateoP52Tq9bp/haIifpPhZgGM7zNQa\\n\",\n       \"NCpVQCH6isfOXOCZ6Cs89eKLPPLoIzSSFNu0GKkYy83DhtpCIg2DUMIoiZFC4JLHjygrQ10B6yak\\n\",\n       \"LmRlt6oyn1f0iS4Y+fNy97XJPhQYhnmgv/X66OM0RswHIyceDEY1RjplDlMf93wCFl5GBdcuD7xv\\n\",\n       \"WrGxMNwaf/7JL3Ll5S1mvSa15jJ3djpsXb1KNhKEm0OUjNg1MoRQHK3XIPRxTMWMZbK1s0Ovu40t\\n\",\n       \"DNRsC6/RhGGGk1kMY4VRaWLMzOH1YyyryYiY0epN6qGgGgWYtZS5U0dRYUa01sPE5ejxk3T9AVG/\\n\",\n       \"Q9DZ4vadmwRRwNXb1/ihUz9KFIccPXmUnfUN2o0KnbrJIIxoRQZJkiIcm5cuXubU3HH6V1dY3V7H\\n\",\n       \"ePwMwncIr25jziyRBhF2OMCpSFIX7CSjoUz8nZC3PfxGXnj6Mp8bPMX3fse7aFcqJNEQYUuiLAKR\\n\",\n       \"YYoMA4WBBCVIMUgw8lybWcwUkdgbE3PCpc4wph9cG99fKEu1T6Ho1nSZ+ivKtIQi5U38sWdWQppO\\n\",\n       \"7pONAcG4DgUC1wFi8V5dqe/XL03JVIY0JI7hYCt7PxdpuudpojJFkhYJwidPkhsGSJnuswymae/x\\n\",\n       \"8dEEdZSm6T7iLiwa3Qo5rLyqCR3KB07KK2HZ3CsrfR3BSymxpEuYpDhVl5mGw4ljR9jcXOOJ84+z\\n\",\n       \"dmedly89h+V5hCql3ZzBsR3mjyxTqVQY+T6WadLt9li5fYNMKaqVCrOtOpXGTI7yRj5RELGzuUV/\\n\",\n       \"0Kc128QdObz+NW/gP/zHX2ZmZoYHT9yF7/vUKhVGoxHSEMTSIA5DhBTY7JmBe6c1JRwQyDLnr5uF\\n\",\n       \"OhVS3oTR+0bvn0LRFpNHf64QkmzPP1jv3/Ix/kLYdDNzvDAc9B3Or4sDil5fVHS6pMydw8GY5NNK\\n\",\n       \"vdUgigWf/dhfYjpVfBXRnqljRTGGY5M6NtJUhCkkUUy9NcMoTBkGHU6dPEkYjNhau40rMxKlGA36\\n\",\n       \"RElCo9IC16TqOZw6eRcbGxtEgc+5Bx8gTmM6JJjLQ5Q/YKe3RdwPmKvNELZNkoqFmK3RXVtnBosv\\n\",\n       \"P/0sJ+45xac++yl+6Ed/GN8PkAa84Y1v5IO/+wHcSpX7HniASy9dIiRGWpLAD0AKjCOLVEwTWwga\\n\",\n       \"WKQVm9WNDnefuQ83CxkIwfXVFSxb4Mw08awKSRjRkhlGb4e7T53kf/wffpb3/JMf4e57jmNFETUp\\n\",\n       \"ycIYUyowJYkQKAQqy3JUDqhXOFxdpjF1OmXMMx/cuITcstP3ZnQqozy/Jz041L4Vqd+nK1ydUtEP\\n\",\n       \"t5X56cK61PWI7hBRuM3qsqkzBDp42vfE2XOHLp5X9JFhGPt5W7Ms0zZ3Y4Q4yPcXNJBOK79SedXj\\n\",\n       \"gesNKJtSxWe6y08ZeeoTWwoLCInDmEQolpcWefbp5+h0t5mdbzC/3MZ0XMI0ZrTjE4URd25cp1qr\\n\",\n       \"YUiJZVuYQtJuVPA8D9M06Xa7bK35mLaNFAZVt0L72HEMQzAYDRkGfdZXNvjhf/zDfPpzn6FZqbDQ\\n\",\n       
\"bBD6Ea1KjX4wwlcJ0pTYWZ6MNlUQCoUUYGSTwYXKYWgLZFsUnZcu/i/36X5/aAsDsJ9rszwOBVIu\\n\",\n       \"0zD6z9gFMf9eGTmVaZJ803SM1PQxLdMyeh2L9pY/O0yBZ1nG777vdzjSaJI5VVIhuHb5IjOGgfBq\\n\",\n       \"mFUHYQp6/oBqq0GtMUOnM8BE0huN2Lh9jaolIQkRpksah8Rxgh/E2I0GJ+86jWMaEAU8cuFhXGmw\\n\",\n       \"e2MNzzJpzM3S9JZxbtmkSUx3t4OyXO47/yDXu9vY0mD9hcvcdfcxXrr0Mm6lwhve+AaCJCSTYFg2\\n\",\n       \"b3/nO/n0X32CGMX80SPcuHYNI81wHZtGo4Fs1rj33D1ce+ky7bvuImjPMFIweuYZ3nDuIZ67fZsZ\\n\",\n       \"LHZ3+/QtycAaIeM8wFqz4jHo9vmf/sXP86E/+zDb6ZDH77kX264w6g6xqx5RlpEZilTmLocy3fOJ\\n\",\n       \"ntrbedHPZOiLcdli0y2usRKdHvtdt+AKZVr4fOsyqMuYLkdhGO6/Uz/UVk7IoP8U8614tn6oqVDY\\n\",\n       \"Y68RnbotFpLcYsy57ZQkyQ68J4qiHMhpXixSShzHnlgoLMva98wrz8NXKq+aAkS5HNgAACAASURB\\n\",\n       \"VNcHUV/ZyhO3EBY9mQFMxiUo7t3tdPBq1bxDDMl9957l4nPPstPdoj+06Pf7WLaL5bnMuHOcOHFi\\n\",\n       \"//nDYZ9er0enu4thGIRRRrXW5sziKaLUIAgiOjsdtjfW8f38oMTsXJv2bBNhGWysbvLG17yeT33p\\n\",\n       \"03zvd34nchQxGI4wbItIxdiOhR1nGBmkKiNOMxzLwtI8bXTB0hWxvriVB7Sc+09X6LpJqHuhHFb0\\n\",\n       \"9xZ0VdlX1jAOUiBFvfXIiUKI/QlRRubTFojiQJeuAIpDHK+EQrbWN7j+8mXOnXkYe2GB2JTc+OLX\\n\",\n       \"WGjUSC3F5vYGSSpwmy3uu/AEvWHAWvcyzWoNjNxbIh4OmKmYpI4BoSLLYKPfwXIkS1lAxR8x47nM\\n\",\n       \"zTbxMLj65S021m/jex7u/EJubUUhV+7cYenUab7ywvNYnkP39joPLC1ybXeXrzz9Vf7Vv/kFUgGG\\n\",\n       \"ZZKoBMt1WDx6lNe++U18+Qtfwg9jjFYN5QdUDBfbMtno7XCqYqCiiOGNNbxWk6jiYfg+F59+hnDo\\n\",\n       \"YwUxs24Fs24RGIJsGLE0O8dApSxWHbrb23z3d7yL3/iD97Py8nW+861vZWlhiSQckiUxhiLP+yoU\\n\",\n       \"ytiTr8PDa0/w0GXXujJFNk1p6h5Yk+BgHKunyIajW9hlNK7LlOu6+3OoULg62NPBYnGYpvhMr2+Z\\n\",\n       \"A5dykhHQPUXK7a3V6hM0og5Mi99F231/OGHp6m6/Ot0z7US0Xl7VrPRlhK1P4DGKkxMN1AWjbKJZ\\n\",\n       \"lpX702YZSZxgSsnc3BxRFHP0yHGWl48jpcnA90l9xe07K/k7hMCruJimRXt2dk+gFGEYsbK6Shxn\\n\",\n       \"WJZDvV6hXvMQKqce/GDE7vYOwgTTMthcXePuk6f5hf/tf+efvecnkJnCSmJMUxIHEVmSYUlJJiWG\\n\",\n       \"FGRJSpglE9TEtA0+vZ06J1woSF1I9P4tI5WprmJjmnC/6II2qfBVzs9qyGCaJTWeqJPJb6dx58UE\\n\",\n       
\"0tGSboYXLo6Hlc9//FOoIKTX7yBUglVxyRIf0zUgjEgNSZTBTGuRUDms93osnboXU6VcefrLDP2Q\\n\",\n       \"pldBECOFgSQ/ZOF6Dk6jxvbmBtvXb3H+wgXsNOW5rz3FXbNtkoUm4e4ug34fx3MYJgGt+07jtGbp\\n\",\n       \"3t6AIKbuuGRVm6ee+io/8dM/xekz9xDFUR4/J8vAEmQqpdFuE6WKWnuW+5fnWL9zG9EZIlJBHIV8\\n\",\n       \"/JOf4O2PvRkvyejdWuFS0GMmMKkZktc/+QQ3vvgMfhjRiXoEromjDO7cuk1sQmxKZAaDwYBve9Pb\\n\",\n       \"2B10+MBHP8r3vOvbabouZiiwVIZUGYnKSEQK5CGJD1s4dVRbjFcZfB02zvk+DBP0RnFPIXN66Igw\\n\",\n       \"DPcVarHIF3pCf58OHsoblLp1l1MTTMhuIZv6/Br7aMfIvbMGOU89Rvrjwzi5v3mx8ai/twCfZZfF\\n\",\n       \"/OTxGKToNKLOLryS7MPfAQVeVLDsbQJ5QwvXHH1QivuKVXbfdLIFUmSYjo0jBEjJ448+yUc++meY\\n\",\n       \"dhUpHOqVOo1GE2/ewzTzHd4kSfBHI8IoJI0TXM+hVqvhOA69Xo+036HX22Fraw1TmjQaTRqNJvPz\\n\",\n       \"bZaPLDLyB2ztbLO7u41yLX7gB/4b/vCjH+Vd7/g2LNOGIEAqmSditvN3OjKPoKfEGL3C2DLRXbX0\\n\",\n       \"fimiFOrIepr5WvRT8VmOHCZjqIw5O/a/pwteManGJT9dmd+rm8nF8WoDPZdima8shHQaNSJlfhoz\\n\",\n       \"b28xrhLLKlzUppuS11+4RM326A66eFnCypVN/H6fykxGXZgoy6FRb9JaOMLV2+tsd0c8+MBxOhur\\n\",\n       \"pMLAdD3CUYjrOag05+0hR1PVmRZmBkuLy1QyuP7cc3gopO8TZBFexaM77LM7CIirNvede4gb124S\\n\",\n       \"hRHR0Me0JB/4iy/yT//5z/LgI4/kMbrJ+8q0DOIkIUxTbMejvbjAztY2p44ew7ZMbl68RLA7wjNN\\n\",\n       \"nIrFzu4WQdqj29nk9vY6cW2WdK7FVneH1sI8qrtLwwTDMRCpouq5KMtC2QahH1JzKgxHPkfnlzEr\\n\",\n       \"Hv/Lv/s/+Zmfeg+ztgMIPMOC2McAhBSk2eHKo2zJ6a5wxU85rnwx3kmS7W825h4aBcgQe/7ZxRFy\\n\",\n       \"c0+hFYgbLOtgncYKc3KDs5BxPVpn2TuqPFfK9KwQgiSJUSrav9c0zX1Fm7c1P/GZK2j9eP2khaAH\\n\",\n       \"pMvlPp367sk2iQOLZbm8qhRK0UllgdAn9jQuWF99LcvaVzJJmrvgqDQGBMowsW2bkR/Qai8Qh4qV\\n\",\n       \"Oxus3t7Bqlg5520YWLaNYUhM08BxPTBMOv0hYjAiSRJsx6I9e4SKVwNg0B/gj0b0eh2KCEa2ZXL8\\n\",\n       \"+AlGYciw20cZFi9eu87502eoCANDCqQliA3I4gQrynIUZk7uehc70OWi95XeP9M2CXXEO2k6TvqH\\n\",\n       \"688sJl7Zk0R/tv5e/drYJBbo2cz17CvFd6Z5KuTvPxhHHdS+4tbNXr00qzM4XoXV/g7J9haj1TUi\\n\",\n       \"A26HIW1hs1vxuHDvg8SpIvIj2q0Wd27dYuX6JcwkojHTIhYpveGAVI1wDYswjlk+eQKv1mDl2nW+\\n\",\n       
\"+ZseJ+j3uXjxRZrVCo5dpTXTYJBGbCQ+fhJxduFerjz3EsNRRK1eZ3s04uUbl/mpn/0ZHrrwCHGa\\n\",\n       \"5vlz8vVhr+G5XPQHQ/pDn6Wjx9lY3aJSq+E2GyT9gKrtkRqCi5de4k0PPU53a5v09hrD+Qy7YjII\\n\",\n       \"fY7PtTCrLjYxz9y8QioliQzJhCRVAiUgGIyo1aoE3QHRcMhP//hP8lef+CRvedPraHkeSoJUgrrl\\n\",\n       \"EMVx7vJ6CP+qnz7U5WzaqV997AsKQpefsnzr3ykCwI2twYMeW2WlrM+HsqdLId9jEDI9QYNeN8+r\\n\",\n       \"Tnw2poPyuDvje+We/MsJEKbTMfq74jjn0Mt1LffJ3+lNzKIUlSxPbn1QdQWl/+i7w7blIDJAQ2vN\\n\",\n       \"eo32bJvnL77A8aOnOH78GO3mHMPEp9vrsL29TdpLcRyHarWKQtBwHJIwIghGRFFMlgzZ7XSR0qBS\\n\",\n       \"qeA4DnbFpT4zg23b+L7PcDgi8CMcy2EQ+Jx/5Dy/8zv/idl3fz+n5uaxLYM4TcjzB4ElIBbkfrla\\n\",\n       \"e/Ud+vImUNEPxX069aLfU0bYOlooC0l+79j8LUc1LFMg5Wu69VAeM31Tuvx8fbzHpm02wWPqpbzB\\n\",\n       \"XZRBlDAz38Tod/C7PeZrdXwzYxBGdPsD3LkWuzs77HZ8Fo6coFavc/3aJSwVotKIFHBrTRLDZdDv\\n\",\n       \"sjkc0V5YwHQrbK9vMVNt0pppsTMc4WbQW1snsSzsLY+eAyMX2gvLXHr2IhYW/TDiwSce5eqdG/zk\\n\",\n       \"P/sZzj/5CGGc7qXbI49ameW/4zim2Zzh0ktXGPo+zTRja2sHuSs4e/Y+bgcZnTtrDEZDUPD0xWd4\\n\",\n       \"86OvY2dri5euXsadrXP95g3WjRXOnbmPU+0lNjbWWA+GxBKa1TrBKCCRAstxCJIIMzOZtapsXLrB\\n\",\n       \"27/pW3jfB3+bb3vHt+IsLeEBw94Ix7IQ5uGnHsunc8seRcV462NX/F9W/jrtUbYkx/I5edhGV8rl\\n\",\n       \"5wghJugWHaiUUbEuU7rFO/n+yQ318gJRvD9fZMZ8dnnelPvGNA2y7OA5ibKyP2wR3R+LV/z0/8cy\\n\",\n       \"rYJlgSmUlc5JFdcLZaCb+yqTqDSFNAORO9djWdz3wD388Uf/gieeeIz1m+tsrN9BmSaNZpPTp09R\\n\",\n       \"qVQIghDf9+n3B4xGI0ajIY7j0mg0cMwGAgiikDDM2Nnewvf9XJHbNtKQeK5LtVrDti0qQtLv9fn5\\n\",\n       \"n/uX/Ol//jDzb3kzFSUQhshP+gU+KWCY5j6CL/qk/Hd5AMuWSDkQftGHulvUeBEcH5CZRNIHlWN5\\n\",\n       \"gugLaZnzPCxegz7G5YVIr2/ejklvmnLWlMOQyMbIZzZT7KxsUktTUiKIFI5jE3qwNNtGxQF+Z5uh\\n\",\n       \"YbB2+UWGww41T2J4BoE/IhE20q7h1iWiMUPiufTCmDRW2FWHWyt32Lp5AxHF2AgGwYA0CMGu02y0\\n\",\n       \"WLt8nTnhMegMWDh5DGGZZAY8/uRjdPwu0nTz9grI0oxUFe5pJqORz82bN2k0mqSJYvn4Ce7cuEHY\\n\",\n       \"H2FVK+yEQ2wDRJqxtrnK5uoKDxw/wZ3eFs989SvMLC9x/JFzZKZEdgbMKou+YTAwDeIowkgVQkji\\n\",\n       
\"JMrpwVodU5ioNGX9+m3e+ff+AU8/9TUqT1rMOBbzM3WGIx95yMGpw+SjPDfLXip6uIxpSklXjNOs\\n\",\n       \"vOJHD9hWBnL6nomeDKWoX3G9ECWdZinL2Vi2J+WwSByht62g+8oOBHr9y3tbaRrvuzMWKF23eA3D\\n\",\n       \"wLbtv7uxUMocuNoXagN9A61YFcsdMk3ZZyo3ZQwpkAJSMtI05PSZkyQfGRHEPZaONJFZi+4wJoxj\\n\",\n       \"1lbvYNk2ju1g2w4L87OYhslwNCIIArqdXYSwsCybaqVCtepRqTWRQuIHPt1Oh1F/gO8mpKmJsHxs\\n\",\n       \"UyL9hM/9xadotds8d/US5x9+EBmGCD/CEQbCEsQqIz+yOUYEOm9XXC+nkNMHtYxGpm5UUnikFJno\\n\",\n       \"J8Np6ohZRxpFncp9L+U4TvI01FKUaXSP7iapb7CWg2LpQv9K3jP3P/k4l556Gk+YeLYBWUTc69Hr\\n\",\n       \"D7AW5zDJ6HW2qdsGZjQi3F4hGO5i1h2azSZutUIQQZqa2LUmS6eO0Vic5/aVq6T9gEqlzktXLrN7\\n\",\n       \"8xZzlok/GhC7ktqxBXaHfXqXb2DujkhlzFvf/q0cf/IClWOL/NVH/wTilMzM9zkked5KpMQUxb5P\\n\",\n       \"xnPPPIvjeNimg+u4DIIRp06e4tnPf4Hlk0ex203CnR3MNKVZq3Hx5ed55P6HWV6Y5dSx47ztbW8j\\n\",\n       \"rdpsPHeZFz//LMdPnKBRcemmPoa0saSJrxSuZYMpGfojTNNCCEnUHZIMfN7yyGv4wue+wH2P3Ieo\\n\",\n       \"unh1h2p4uAI/7ATwNDkswFcZbJURell2i+focYTKMYR0Ra7HEynu1/WEjmyLIGtF0R0Iygi7DBym\\n\",\n       \"WaN6nXUng2KxKW/OT9NfRR+WFf7fWQqlfALqsPCOZRJfiPyUXt7Igq/dix9iuEghMQwwUAiRkApF\\n\",\n       \"1XV54okLPPXlL3D25N0kQUyjeYSZeo3KQhXDMBmOhoxGAWvb23tK06LVanF0cQnbqzMcjugPBmxt\\n\",\n       \"bTMYDnBsm3qjzgMPPIRjO3Q6HXZ3dxj4ffpBQM2ycKTJ3WfP8v4P/x5ezePCqXuwRZon35WSLM3y\\n\",\n       \"pBAlVOo4zphO0cxBmDydVjYvCxcnHankPqrjPi845UnO7eCmoo6GdKpD9xoyXoEnnTZZ9RNmOlco\\n\",\n       \"pSSOwxJllMcGzy2rPa+NKcWr1wjCkKbpILKQGEAY1Gs1zNl5At+HNGN5fpFbV6+g/D4LTQ9FzOb6\\n\",\n       \"LdrzR7HsKo5wmF8+QuPoIt14xNFjR5k/fR+3Ll/mzu07MBzgWiaj0QinNkNfKZAGdpyx5Nb5ptd/\\n\",\n       \"ExuJ4ubaKnfX6zSdBsPdAXEzxTT2zOpiwdprY78/YGdnh35vRHtmljAcUJ9v0bl5m4cfPs+nv/wZ\\n\",\n       \"zj5wDytRSNId0B32SXzF1u42TqXC33vr32d2bo6rO2usrq3SkCZyFNKebdHLBPEopmE3UFKRkGEY\\n\",\n       \"JngGCQLP9nAMC88wWHv5Gv/VO7+LD33qT4nqNsfm56mIrOycpE3C/Pi4SjPQ/LoLgFCAjlw2CvTM\\n\",\n       \"AbnTFVYuF+wnRwF1IIx0mV/WQU4ZuY+BR3ECVN/sH88ffdPVNM2DHiPZWOewH1cFsjSvY16FfI7p\\n\",\n       
\"dSvaWsyFvF7s94OO3nWQtM8mlCyDw8qrehKz6Ggd9enuPLkiiibuLSKf6QOWd5xAiARE7sKaAioT\\n\",\n       \"SGGhfHjra7+J3/jN99J+Yo4kTemtZwz7Qzqmj+XkcSAMQ+JWKyRJiilNglFEv7OKaa9iWblJc/L4\\n\",\n       \"HErNopRiOOizvXZtvw6epWjXW8RxTBAnmFS4fnOVb3nLt3H71jWOzR9ltlFBmhlZlmDK/CBz0Z59\\n\",\n       \"tLAXG9vYS4WFUGQqQ6npOSWLMv4soziKYZr2Hkede5AYxkE6RFfaxU+Zf9evjzeDDnq+6HUrK3Dd\\n\",\n       \"PCzoH/2Z+Ribe88r0slZxFmMKaeLanjpNmdnFxEioTPs0PcNtmNBs30MkgZrw3Vcy+Pl67cx/R4L\\n\",\n       \"VUWW9NkJIkIMAlKMuEM06LM96MCXP8lg8yZZxWZoWEg8Zi2Xu5bn2A26MNPAqcxgjVo0Z+oYbspS\\n\",\n       \"ZHHl+Re5pgS3r17izm/9PqfcI9hGHWF3SOIICwu1F+MlS3NF3tna5tbNO8wvHCFMJK5XZXWjx9zy\\n\",\n       \"CdJuh3uXTpKtdjl24gRXb11neGsbA4/P3brF9/+rf4FcWuLGxho3r1wjGUS02m2sKMXZHrJspmxk\\n\",\n       \"CbtGyPoopt2egWhEnMUYBgyiIZ5h4AeKRsPj+aef48zx+3jmqStEp1OMo22axvQ4HKGKEZnAVBKp\\n\",\n       \"IEqDiTk9dkMcK80CcBWAYgK1ColAIIRCMFbySTYGAYUVqu8R6bInpTNh0Y29UQr5LnJyTgIVHSDq\\n\",\n       \"Hiv78ir35hTFO2Ec/4S9ebU/A4sZMDE/9bkxrsPkOQ19fhT116mow8qr6oVS5lOLDBx6AJcoivYH\\n\",\n       \"JkkmXeDGSr24Nhlj2jAMlBCYtkU8TKlUKly9epXmTJN7732UilslThI6vQ6dXockjUnThFqtSqvR\\n\",\n       \"wjJMer0+u51tut2cUjEMg1qtRqXi0pppUa1UUErh+z6DwYDdTic/VeV6zM40iZKY3W6HY8vH+PSn\\n\",\n       \"P8M//O53MUpiBCKPWW3JfQ4sTRMyle0FxcnRDUIiEUCGjomKNialI7zJXuAs0xxvipTN2zLHWEZF\\n\",\n       \"xeqvb66UUUXxfSiC9xxcfCf594OnLnWEUgTwKu4rDnKsrq5y9OhR1tfXp8pRd5gw2O6wsNDEEhLb\\n\",\n       \"EFw49xCWN8ONm+vUjlcIVzoMN28zP2uxNdolHirSyCIzJUGoqFYaNGuLGGca3PzSNl5ljsFoh9kl\\n\",\n       \"j83eFnFtmaudAbVWC9fJePHFL2CbS3g1iW2MGHlNor5kKzVZ3R7x7MWv8IYf+QH8yi6jCFxlkGYZ\\n\",\n       \"hhQYmnud67nUalV2dndZXq4jpaTlVagbJpESXH3pRRZqFWZlizMLR1mLJTcuXud7v/0HOFufpdsZ\\n\",\n       \"8MH/9DucXjjC6eXjRJ0ud3q7LPgZxxfn2Vy/g6h4nD17N5u37jDvVYlJyCyBaZlYpkSkGY7j4Scp\\n\",\n       \"iWVw+tRJXrz4HEdmHiWZnkOAcOhjSAOkhSEkxp6izxFjbr3q3kzj8TZI9/zsDcabiGM9ANnePC90\\n\",\n       \"RJk/niZL+iE1nWrUi26pFvpj8t0HwYZSaj/rlE4N5bTQwdSFuqtr2QNsWinTpoV1WxQ91Mhh5VVT\\n\",\n       
\"4OUj45B3RBiGhGG4/z9wQCnrRVdIxf8TG3tCoAIfr1bl3rP3cv3mDe6//37WV66RpCkKiet51Cou\\n\",\n       \"lm2Tphmj0YhOZxuBIo4jarUKi4sLSCno9/uEYUCSJKyurJAHr7exHZsgHOF6FRSCMAqId1KCIMCy\\n\",\n       \"bWzTxHMqfOLTn+LR8+cwTAtDmpApDFMiDYGFiVIZURTu0wfSMFAyD3ylo5lpPJkQRTLkws1JTPCG\\n\",\n       \"BfV0ODoYC++04P9lE6+8GBT3H8Zd68gcJieoEPnBiOK+AqEfOXKEMAxZWFicKkdxNMCtVdju91Gm\\n\",\n       \"xJlpcvK+e7iztsX5Jx/GT3a5ufoc7ZlF+uEOgyil4boY0sFqzlA7fZJ+t48XCTZuvUTX36XqeKRZ\\n\",\n       \"k2A75nRtmYXGMmm9yc1On6RvcNw+x5BVdjc7NNptNmybyEkYBUP667e5/+4jPPamx1A1GzOUmJmB\\n\",\n       \"IfeWXzVuv23ZdDpd2u0FPNchTWMMx2Q36tMPO5x+4hGufe2r9G71cVo1WGxw5vTrufv15+htrnP5\\n\",\n       \"xUu0I6h2A5JKl6EI2VVDOlfXWO7ucOzYEtdjn1s3L9OwPOLAJxaKOFZke0gyDAI810WaBkGWYFZc\\n\",\n       \"Hjv3CL//wQ/xjrd/69Q+N1QGGYTpHgVAAQT2YviTIASYhpXTLWmeFg8hMI08nEOaZcRJHjDLkJrP\\n\",\n       \"uBB7flp5Zx3GtxcyWFbauryWNyR1nlt3jCj+h8lzFQXlV363Tm+8Ekgpgxi9lOdCMV908HMYPamX\\n\",\n       \"V02B60pW94GG8aqaJAmu606srmVFoZscxfXis/0V0zTYWt/g3Llz/N4Hf58nnniC9myFWqVOnGTs\\n\",\n       \"7vYIRkMMaWCZJvOzs7iuTZyEdLtddnZ69Pt9sixH8QsLCzTrDeIkJPAD1tbX/l/m3jzYsuwq7/zt\\n\",\n       \"vc98xzfmy5dTZVZWZo1Zk1QSoAmNqFFrACEZEWAEmO42gQnb0RFtYUfTJhocwWTobsRgBMbYEkhI\\n\",\n       \"QsJCI5pAElKVSqpJlVU5VE4v8413PvPZu/8497x33stXQNiOEOefd4fz7r1nn73XXutb3/oW48mo\\n\",\n       \"NHoSAt/DdX1sy2U0HJae+WjE/Ow8o3DA1dUNjhw9BDrFkpVKYHXDDLY9TXrIckLrbdhod8KnfoN3\\n\",\n       \"IKdyXC1L1jxybpro9bHay2SpPOK6Ma4mZ3WP6ptv3XvZbyFV59Q/pzqq52ma7UrWSilxXZfxeIyU\\n\",\n       \"ijwL951H0oS4jSZoydpgg4W5BbYmW+Rmguul9Mc+Td+wZFmshB7W/GlUBn5eYM10mTlynKfii9iu\\n\",\n       \"w20HfILMpthKcRsW7/zhH+TCE49iWZJX/cBbOLu6yZULK1i9jFkfPvBnH6OXGfDnuXLlLFYx4VB3\\n\",\n       \"jv/rF34R3WkxHGiarg3swE1yO98g8FyPRqPB3OwMRue4jo8WGtd2CC3FzPISq5e6iNGEqDeisziL\\n\",\n       \"O9/hwOGDnP/YX7N17hLXnz3P/a9+DZicvEhJZc442qIdOzjr4LQ8jp08Tm8yobAcCm1ASIQWeI5D\\n\",\n       \"y22g8xzHd/F1BrYkHI1401u+n9/53d/mDfuMuSMUWGoqggW22aGeVhtvGE12YbtSSoQUGJ0jhcKS\\n\",\n       
\"EmEptBBTPHkHH96GXsyO9kk1p+rRYH2O7VfwUuWL9hbHwW6xq7pd2dt8pdxTdjshZa3DzeJxQtws\\n\",\n       \"hb33nOp4PhZP5bzuvb7nO77tWih1Q2zM7u4clmWRpTu96Oo7WjXQdaqcVLt3QikESgjiNGVmZobc\\n\",\n       \"GF70wod49OuPcmRpDsf28Nwmrtuk1Wjguj6TMCbPQgaDPtpkuK7NwaUDuK6P1gVxFNPb2mB97QaO\\n\",\n       \"4+D7HouLC/ieT5LGxFlOpguG62sUWYHvBviOR7PRIM5iojThG088ycyBRZwiJ8szlAQ1ZYlUTX31\\n\",\n       \"9tiAoUBSaovXvYM6hlb3LqpkS32cgZtw62qCVHSm6ti7YOpeRMX5rp9fh8PqR31x7GWZ7PWiqoRV\\n\",\n       \"udgByutxXRfLdhhPO7zsPbLxBKKEQhpmXIczp06z3t8qDYQU3Lh+DaXHDMwY1WjimCa2pTFRH993\\n\",\n       \"8bTLAa/LyfkW54aX8AqfgzML3Hpkia9+4eMIO8OfnePi+ioXN9Y585Lb6V8+x4FrXX755/4dX7v0\\n\",\n       \"LE9trrA1WKdjzfKPvu/NBI1ZEuGz1HSJ+jcwntq+n1UkxVROOI3LVn6tdhfPD9BpxmQ0ZrzRZ/bo\\n\",\n       \"QU7eeRef+eMP0nYcYvMcL7rlNj7yH9/Hi9tHUMMI37FYmWzhygbXV1foDdaJ0gHx6oQXzN3LDIa1\\n\",\n       \"SxdxlxZYH/dwbZ/AblCkBRKBMmVeKQ9jbE+RpynKGPq9Ma983f8E7/kvN425MIo006TKYEQpCSGE\\n\",\n       \"gDyj6jpjMKV2fFb2gbRtG0tYCG0w0mCQaCPQRpOlGUaUlL1tiM7cbJT3zrG93m59btej0/0M/n4F\\n\",\n       \"N3Uncccj3908vX5Ujma19ixrNx13r1Oz97295wkhththVL/r7zLi39ZCnjqeWseMKg9sh+kApYda\\n\",\n       \"/e8eFsV2gUQNa9Jltt9gCHy/rHyS8H1veQu/9du/zcte/IKSWTKMGI02aLfnkXg0g8aUlKHJdcpw\\n\",\n       \"1CfSKa6b4Ng2tmMxP18mMQeDPv1+D6FLMXfP8/AbPtK2CRyP0WCMzjOyIiOODcqzmZ2dJ+i0+OP3\\n\",\n       \"f4Cf+KEfRGXJ9qQXotQfllIgkOxAkPvzoevYWXWjdzRlbtYK34/KtR+EUn/+t2GC9d9S97Crc8ti\\n\",\n       \"hRIXreAcY3YL9lcejpRWjSssS3U5WWA7LoXWdDoz+84jK0qJx+toJWgvHaQIDbluEMwtMNQe/rUv\\n\",\n       \"kEQ5G6JBoRUUQ4yVEBYJXrvDpcEWW/2rHJ1t4YeSqNBsRBusP3GFmabLbUeOkfSgq7o8/Ncf492/\\n\",\n       \"+qu86r57eeXSnRBkWIHgxSfv5tMf/wBv+J/fxu0PPEjmQlZsYmcunlQkUiGmokjosrpUF6V4fxSF\\n\",\n       \"XL1yhcUDGZ7rkirYjIZECh6/epUiS/BO3sLGhavEF1cIHn6MF9x1L5lj0YtDxuR8c+UiQkK4vkFR\\n\",\n       \"JCQiZuy7XB9vcGL2BEWa8dhjj7ElwbF9fL+FpVza7Rl8N0Cbgk67yWTUoyhyMgSt2QWS55GUzbTB\\n\",\n       \"SIGybZASJXZyU2XOycGyFGmWbLcXLJkrBcqUazNNS40TxFQmQyiU0Mgpw0UXBZmpPOTKky7ndVmQ\\n\",\n       
\"Vq2LncYNO966QYidSLycm/W5vDuRX/9b542X37lTTbzjyJSJ2Dr0Uj7mpnVRt2d16KVeiV7//vqm\\n\",\n       \"8A/aA9994TuP68aheq9ecloPUfYaoOoQZscHlZTcnSLP8QIfy7F54P77+eojj7B88Agz3Xlm5wIw\\n\",\n       \"il6vx2g0oqBs/dTuNLBsm0bTn0IK5aJLp+FXu91iYXaWoBGgtabf77O+vs4kipBG0mm1mJstDY9l\\n\",\n       \"O4yiCZMkIQgauI7LU2ef4fTxYyih8AKPKBxjb4sIVcoZEgSIPRtcNUaO49SMY2XQ2TaM9QlUH/u9\\n\",\n       \"4wk751dhcPVa/X+q1/b+7/6ekd7l5ezge3rXIip/946XlGUZhjI3ACVfzhj0HQAAIABJREFUWj2v\\n\",\n       \"JkSO0wgYJhnDVPPIk88wyDULR3L6ozEL0SpHgw6uLkil5rq2eS4KiG2X5c4cgejj3jJLfvReVp74\\n\",\n       \"CxqO4NjxZRaXFhhu9Hnu/DUWLIf1r32eX3jn95P98Bv41f/7l1jvrHH+0bOIxQdZNBbKEbzmLa/l\\n\",\n       \"0kYf6YPJJmQ6ohCzFGaq80JpwI0BgSDwfMbjEb2tATdu3OCJxx/HWZxhPB4z22ghGz6ZMCyfuZvl\\n\",\n       \"uUMMLl9jtNJnc2aTD55/hCPHDrPQWGBMjrAFw7VVdBwh7YJRWvDZx/+GXBhOHTjGnUGbs70Nmm4T\\n\",\n       \"r+FDo8Gl9etcWVun2WySxQkN2+bEocNI5XLh/CW0tb8B8VszTJKYPC8oTFFWmlJgWzbCsikkGCQZ\\n\",\n       \"AqRFxc7IdYGHLKNJyyr56NMu7UYXpHFSFuJIWeaUonQqx1DmovJ8Jxlfcb4rjnlpL4qpw7Azzyvj\\n\",\n       \"XxrOHRphNdfqxrY+p6u1VBTZrnlefWY1f+vQZHXuftHmXoNcX8d7oaDn89r3O/5BtFSr46b15+X7\\n\",\n       \"dYW7ko+51+hUHmd1LyR7Qi0psS0LiSCJYk6fOsWffOCbPPjC7+T61eu4Tkqr0WVmps3igXniKCHJ\\n\",\n       \"EgpdMB5NiKOQRiPAsiziOCZNE/K8YDwaoqZYred5BIHP4VYTgyBLM8ajMePJGKkkJHHJLpGCSRLx\\n\",\n       \"0P0v4C8/8ylOnThJrgvCOEVKC8u2KPIcozWYUmgfBLrQ6FrJ+zbVap/S5NLL3es13CxVUI1/nT5V\\n\",\n       \"ffbenEJ9I3g+w733PhbFDjS2H05e34wdyybN06nmSdnaK8sypGMhlWQ83h8DfwaLUZwznOQsNzUz\\n\",\n       \"2YB8eJW8eIpG3OOCbGOiLdqDNTaziK8Xszw+OUbbDWgVj3Lb7CrXNlKevujSbWSsrfVpbUg2V59D\\n\",\n       \"yAmzS7OIYJGvbwLXUsT4WX7qn7wK/+A51odn+MsvtXn0kU3+5b/+RS71VylsH11IBAFaCQqvgTYT\\n\",\n       \"pJBQ3YtpaPXss88iDJw5cwYhFc1Gk2EypnX8Vg4vHmTu4EH8mQ4NN6CDwxc+8BE+82cfIeuHdObm\\n\",\n       \"ePB1r2bmyEGQksSkPPvs0/zFn7wXr8jwA4eQjL9+/GFGV25w9623c9/cAv0kIRn0OH7bce570f0M\\n\",\n       \"hcEoB1MYrMzQUh5CWEwMZPp5pEz9sppzNOjjegG5zEr8PS4V+ZI0AqDVauAGwdS50EjbwhQavX3f\\n\",\n       
\"C4q8KGmEUuJ4Fo7nlYa40Hiet91irK6hUs/d1D3m+jzbBa3WEoN1ymp9Du6d8zv4/e75XX5/Kci1\\n\",\n       \"bV+25/T+RW31pP5e6Yv6WoCbqYx/1/FtZaEAu7CnvZBA6Y2ltXBpx5Or47bV+bZdCqIrsZtfWaQZ\\n\",\n       \"rueiTemaz8/Oceb+B/jSl7/CC1/4EHmaEyUT1m9cpNls43ke3e4s7e7MNMwdkWYp4/GILMsIgoBO\\n\",\n       \"x6PdbE4xYc14POb8+WukucZ1XNqdDt1uF9d1y4rO4YD+cIsoilCWg+25vOIVr+Rd/+bn+He/8PM4\\n\",\n       \"UoDOiOMIW5W8WCVk2aMQgRYao3eaE1cTpRKyr9/wcoLlu/C/MhzcXYFWjXG9+06ddlhN7rqSWx3X\\n\",\n       \"rmOHe41yeYhd97c6vw6hVL8tiRMKiulCNSWfvyhI86yMKJ5nqn4ttMm1h+MHRMOIOx3N6TlDN76A\\n\",\n       \"Y1b5yDNLPC0WOHP65SQyZrCxwrIlEeGAp85dILhdcmrBwYm+Ss/PuZCPmDhLHGrPMZskHF1e5MNP\\n\",\n       \"ab7EYX75feu8WF3nV1/ikLSvMeOfxNcS11ris3/zDD/5T7+P6+eewc4LctVgkMVYrsY2GjPlsiul\\n\",\n       \"EKpsYfb1R7+OZVlceu455hcXWF9f5fCth/FbDrGJIM8xSUY/GbCZF7zoja/lysYKwzTj//i5f0Pq\\n\",\n       \"2Wz1+ywvHGSYhRw4cRRLGj713veSjEMSk2Eri7Mrz6F1zrETxzh96jRXNjb41pe/yKH+HXSOH8fu\\n\",\n       \"dIlzjZIeBQo0OMrabu6w9wi1IIw1nttBAJbj40gxlQQuW7LZtmJra4Ozz1xmOByyuLjIzGyHwPLI\\n\",\n       \"ixwpDFJaOK6HkqXqX5pnGFNgCQvLEaRpTjDdAKqGv/UkY9UQYS+FtW6Mq/Oq52XbspudwL3Gfy/r\\n\",\n       \"re6tV6/fDGfu36S57uRUv7me86k7ZHu/7+/igYu/r6v+P/IQQpi//uxHd5XM1w1APXud5/H2gNVD\\n\",\n       \"m/pN28Fn/RLg2s5mlwNg2zZJliKVQiqFkJLCc3jP772HUydPcvDAAdxpkiWJ07KaDEEQNImznCDw\\n\",\n       \"cJwdrqsxBlOUfe2MKWGMChsWqLKjSxQRZym2YyOVwvM8XNelmHrMk0nI6laf7sIia9ev8cqXfRcm\\n\",\n       \"TzBJhBJiSiOchmuURlnXsvLVeO3t0FOO504hzG72x/4qbPWJWXkZe2Vq9074vXBO9Vm7eeJmW5ui\\n\",\n       \"vknsFxFIYyGsalPI0KLUYA+jhGajzWiS8oKHvuumuXTy9T9DGKVkkxArGTFnjTk1azg+kzFjx2zO\\n\",\n       \"nea5G5or4xa9FJyix1EnpaEUT69u0Z0LeMlcyEM8xyVsImaI6KLznIOupoFN2DjJheA4W1HEXcUq\\n\",\n       \"b/IHtB9cp9mdISwO8anzLb52o82L73uQFx3M6VrrRM2ANeFhbB/LpEhjkEiEEdPCFcnWVo+nv3WW\\n\",\n       \"paWD5HmpVqhlQiYyokmCiMGTPldHfbaKlCxLOdadY9YLUK6i0Ba+06AQ0Ow28QKLpiv51Pvfx+a5\\n\",\n       \"Z+g4LoNoQigNWIZO4HLHsVs5deQE0TAiysGbX6R79DjN5aMYt8FwkiAtC8exKOKMe86cvmnMP/+N\\n\",\n       
\"p1CFwc4NSii04yJVxa+ujJdGKoGYJuYFMBwOKPKYMIxYOrCA7zlcu3IJx7bwPQclQRc5ejqnfWtn\\n\",\n       \"099P26Q+5/erc6gb+8pWlLRae9e5e52V6rzyu3ZDteXnlAnour0q5/X+trS+Rvc6Mc/ngdfX2j0P\\n\",\n       \"fjfG7E/K/7YmMZ/Pe6uD/UkS3WREqlAEdlMOs6w0vJZSJT4mFSgLIQWtRpM4TUrjrQvSLOMlL38Z\\n\",\n       \"f/Hnf85b3/xmnCk8MjPbptnokOeaKEzY7F/nypXncByHIAhoNBp02m2Cho/nzRDHMf1+n62tLSzL\\n\",\n       \"ot3q4Lke7VYLo8qmD1u9Hpcvr5ZFSsqi02qxOL9AZ+4A66MRG1s9Vlauc2CuS+AHFFmKFOVC14Ap\\n\",\n       \"NLkptuEkqNOc9stmlwa/jmlXk7w6qs9RquzZtxPxlML4nufVcMAdSUzY2cT24+bvhJkGpfbQyGpe\\n\",\n       \"SZm1361BEY9jhIRGIyDJEyxLMTc3R55rlpcP7juPZrSgbTlEjsUwdRmYeR4fap7ohTQbis7lT2MB\\n\",\n       \"ca9gYp1m6B4iG+e85MxxDroHefzpq4yfGdC+rcsLX3+azWdWefbRc+SdJSa3nuDq1nnmrvwxb16C\\n\",\n       \"WSfj+Kl76I0skq15FrxV7MYFTp9+CdmhV/PkV69zx50Jy8vX2Ag1y/e8jOtbIzwnIM9yTGFKGEVa\\n\",\n       \"WMpiYWGBA4tLRFGE47hobZB6gpQGZTkwzGngkTUD1j1D7iisSYIdxownI+JJzsWrq5y/cZUPffiD\\n\",\n       \"6CTEdQx33nqYRQ2NQoHlMfRyrqVbNOIR/adHjNY2WLLbzDbnmG8tEF5ZxbhdGkdmKRoObhBw6eI5\\n\",\n       \"bj10dN8xl46LyjQ6SYjiCOMKbMfBUDX1haARIIQgy2LStKDVatHuOGR5iN/MUa4LluLW03cSeB4r\\n\",\n       \"1y5x9fIlpNAsLCzQbAbkk/Guwr4KLqzPpcoWVPS7ugOYZdk2a6ruBE6mjKb97EplxKvcklI7kWaW\\n\",\n       \"ZVOPf6exSX0jsazdjbvreH2dHlt57/tFrdWa+PsmMv9OD1wI8R7ge4E1Y8w909d+DvgJYH162ruM\\n\",\n       \"MX8xfe9fAT9GWc3+z4wxn9znM82XPveR7cVu2/a2GE1954H9NVOqwalXCVYDUheS30le3Mw/DrTD\\n\",\n       \"RGh+9wPvxW01uf3oCdrapeH5OK0GozTFUhYNy0U5FpZjb2NuvV5v23halkWj0UBKSZqm5Hm6/Rvq\\n\",\n       \"N833ffI8J01T4jguf7dRjCZjfM/hr/7qs7zjH72VViPAEoYiy3FdjyIrm1q4rgcl4WbHc5gaacsu\\n\",\n       \"fxvTMFbnORJNicqZKYUNbKv0PCQgdJnc1QbyQmJMTqlIUCCELjm7xpDnIKSNUg6m0MhshJAOxnIo\\n\",\n       \"LJtMCwqdYymDMimOSqFIUGiMdjBQVtAag7AdbMsl1xJQSOlgTFnAkzkFnm2xcvUqF8+d58bqBoNR\\n\",\n       \"QlxILl6+QqvV4Q9/5zdump9n3vgzDCbjkoqWG0SmsbSgE7QZbvVJ/Ii80KAcbNulyFNUEXKgJVlu\\n\",\n       \"GczgCsdmHWYaipW0B2HKA0dP0hqE6LUttG+z5gaErVn8RoNbZl2OdeFQZ4vWgTFNW2GNOwhvlpWh\\n\",\n       
\"xGGelgePXF1l3buLUw+8GmVN8FKNUhmjICQWHgEB5AOEzHG1hdHQtxyktmlKB6M1uiiwpiXpFT1U\\n\",\n       \"UxbhKKWwRYAtJegR5889w+//7h9x6fwN5ma73HPmOIPhCrbj8uSTF0mlhwhcjnTbLNoWdhzSCTzu\\n\",\n       \"vvc+mgvLPHb+GlZnEdwWR47fwt33nSbNMm47dftNY/71b5wly1J832c0npDogsBvIoQgTVKEsKYl\\n\",\n       \"8Yq8yGg2S4ZWmmfouOx25fs+aZaR6wzXLTu9N9tNjCno9fpsbKzTbhYkSYw2Bc2mj5QChQGdYQFC\\n\",\n       \"56A1Skp0aiOkJM8zHM8hiiNc1y4rO800WqakzHrCpqjYQEKWj025uWpdoATIqZqpQe6yLVXStNIC\\n\",\n       \"rx9Vk4Y6JFP38Ku/e6VjK5u2V5GwOu66/+X/XR747wP/D/CH9d8K/Kox5lfrJwoh7gTeDtwJHAI+\\n\",\n       \"LYQ4ZUpxjj0Xu9uT3Js0qx5XJdbTz9+1o9WlJStjXeeUVzuZbe8Y3+pc17URAt7xtrfzy7/+73ng\\n\",\n       \"1F1YWiItVcrKjkekSVZ26VYSzyuLLhynbDbrui5RFDEYDJhMJnieR6fTYWFhHiHYfn0ymWxvNo1G\\n\",\n       \"g7m5OZrN5rTYISUIFlldW+EFL3iIj3z0Y7z9bW+lQNNutinyHKn0tNGEg8aQFzlFYaad0CSWskCA\\n\",\n       \"sncy8VJJlFAYUTJyFGXerMiLaSZBIpScKu5qhIwRWsN0UkphkWUFujBYysKSZWJVaw1+m1bDp7e5\\n\",\n       \"jq8cVJ5iOQ4aSSYdUtUCzyVodkiiaZcek5GmCUkakRcZQmpyXTAZrLOxuUkYThiMXOJwwjcefpgw\\n\",\n       \"DEkzg+M1sLw2wl1kZRDvnUIAFLmNMH5Z9ScLomSA7UhaM4JTd5zGUR7rm1ucu3CFNAEpPAyKQViQ\\n\",\n       \"JSky67I+nLAw47Ka3sZ4WLAatXjtqQ4teYNhf5MsPchhXzArL3BCJ7R7Ln6yhPAKhm4IicFrFMSN\\n\",\n       \"lEBsoMIRhxp3snKjydVzj3Hy1J1oVzOOh7TwMGmMZh4rn8GyniOyMoriCI0YUBNyXSaxjS62i1mk\\n\",\n       \"nC5yo8HkGFOQmpjRJCbwJMdPnuLt7/hBfukXfoX1jTUuX7I5sNRhc3MT17JJDIyjmPbx44w2V+nY\\n\",\n       \"isySpJZg/ugyb/mO7+QXf+O3ePjxs9x++x186Wvz3HffA9y2z5gbk6GUIElC2i2fTGuWDx5kdXUN\\n\",\n       \"R7plUxRjqDruRNGEcByDAaUcbNsiyyMajQbrWyO0TtFGkxcxQgoaDY8gOMxc10UpSRiFPPvsWdI0\\n\",\n       \"YXFuBkvZFGlCnuR0Wi0G/QHtVkAcxeQ6RWqBkJBPq5AtYSOcqcaOFJii8n5rzJLta5NTAw6gEXIn\\n\",\n       \"SiyKgiiKAIHn+ds2qjLaaboD99bzVHVnrrJtddtVx96r3/P3xcD/TgNujPmiEOKWfd7ab0d4E/Be\\n\",\n       \"Y0wGPCeEOAc8BHxl74lVaFSF7NXOVvveXZnj6rX6brjX6NcFb6pEZ91ThlrlX15QGINC8+qXvYyz\\n\",\n       \"Z7/FS1/8EjZurJHHGYcPHcJybIyUaG0Yj8eMx2M2NtaxbQfHcaZGvexgL0SZULx06RJQed0B8/PB\\n\",\n       
\"dnInjmOuXLlKUZTetWU7ODYsLhxgY3Od2fkDPHPhEqdOnmAYxegsxZt2HBqHY4RdTUKJrer0qRLn\\n\",\n       \"r8IuJVWpFKfLZKDGYLTBsu1dOYLSx9AImZfMAAxSWAgUgd8gS0uOuqDAsQUoi562EFmG41nYJsG3\\n\",\n       \"NZOwz2ovZiO2OLsy4PzKgEGYE0XjHXyR6UQ1GiFL5TolyxDVUhZYTeIwRLRP0JqxieKU3CgSU/5v\\n\",\n       \"4Ub7zk8lPQLPJU5i0mRMs9UiT3vMH+6Q6xFWnKHjCRQxwlhoYZNkECEJHR8lbGZmjhHNdTnlD1mf\\n\",\n       \"FJy7dIn5NY+XLB7iqNtjZT1i9foWA79D1Otx3N4kb80i7Qi/Ca04AZ3gdjTYK6iZMcPJGVqH3sxX\\n\",\n       \"n/ogB+aewmodJmjcihqvM6tyhiLEkpIgb+HplNikuEVOKgoSNQ3HhQPaTJ2VDKaVfqrKi0hNoxOU\\n\",\n       \"MCOS2+++m+96+XfymU9+ikkUAwdIE0ma5kRZjN1qMxiNuOXgMsPVqzQdG6/b5svffJjHPvinfP3p\\n\",\n       \"c/SShCfPPcX1a02OHDm275gfPXYEpSSWkuRpihCSyXjE8uIsWZptc/pHoyGWsGn5Np7vlc1Psnw7\\n\",\n       \"sbixucLMzAzD0YDuTIcwjHBslzSNmJufYzgYlTRCYXH7qTPkRcblS89BkWIrheu0GUcav7FAmPZw\\n\",\n       \"PBtl/NJpm0JSxpTMNbsyirkGVcpNa1MSBRBT5pMxpacz9cQxlBHb1CDbtj2lE5efWY/8YXcHqjrs\\n\",\n       \"WEJJ2bbBrtZt5WVXDcTrUAvcLHa13/Hfg4H/tBDiR4CHgX9pjOkDy+w21lcpPfGbjgrPqrzTCseu\\n\",\n       \"jnoYUU9qVka8So7VcdjKk68Pwu4k3k54YzkOMtc0lMND997Hex77Q85efIZbjx5HJBlFHCEkXN9a\\n\",\n       \"Z6Y7SxD4tNsttC47mY/HY8JwzHhcik8FQQCAUtaUuRIRx+n2tXS7XXzf58CBpe1rDccThOUwmYTM\\n\",\n       \"Lyxhez6f+PSnaXW7HFxcpNNqEg76eI6FzhXUMtRpmqKNQU538HybemgopEJPKXyoqlCi9DKMKAsd\\n\",\n       \"pqNcjpOWiCmsgVRoA1EWYVsSyxJokyEsg7RsHO0RRmMWuh2efPQRpBScOHkHbdfhg+//OJuRRWwa\\n\",\n       \"tGeXsfz1MiFsLCQWRkvyVGP0TgQmEWgNcRKj7BmM1kySjDAV2I6HcpyyAUbxPJS2PMZWHlFeYEkb\\n\",\n       \"iebEidt47rlLSJnTMAHrWz0m6QSjXFAejU6DQhss20HZio04ZvPGJtK6waGDbY7ePsfKhWs8Z5rM\\n\",\n       \"zhhOLMc8Moj4Uv8IyWiWF830ObQe48Q5x5dmOKUCsmxImnVxmzbOIZsJDp9/4gbXEotm+A2KsMfj\\n\",\n       \"mze4/8zdWLlFK9BMTI+YDsrkoDYYmQAhHYTIKPICPZV0LXQpUGa0KesBEGgMcTLBdh0syyNJCnzH\\n\",\n       \"4p0/+aPMzLX5i49+ku9cPsn6xohGOyUeDknGIdlghGiXNMADi0toDcPhmG+dfZpmo4sddMhTze23\\n\",\n       \"38HSweV9h1wIw7Vrl2k1AlzH4fq1a6yvb3DnnXfhBw08z2U0GrO8vIgu9HSDMWRJhO06uE5AnCYs\\n\",\n       
\"LNxKr9/n6OFlwiii3QxACALbI0sTGn6TySSi0WyQpSl5AYeWjxP4PkWWohBcW7nGKEwJWi45ZcMM\\n\",\n       \"pMKybGRhtj1rOR0zRGmcSzpyuQa0KcvjpZQIozCm2GZ/obNtQ70Dn4htwa26gc3zbNvW1N+r11RU\\n\",\n       \"a3cvAaOOOuwtzPvbjv9WA/5u4N9OH/888CvAjz/PufuC7KW2yFTUZ6q7UTfGdZyoTnfbq40Au/nM\\n\",\n       \"e3e3yvuuzhGibDA6yhMcJOkkxrJs3vzmN/Ebv/1bfP8b30y2NeLo0jJKuJw8dSuDrfK39nq97YTK\\n\",\n       \"7OzsNsVJ65JG2Ov1cByXRqNJt9ul2WwSxzFhGJIkCZubm9tY/8GDB3FsmzSJWZhf4OrKdZqzs7z2\\n\",\n       \"9W/gfe//U/7Jj72TKBzRcC3SLC27ueQaKUp2ilKgjJhK0k69gUrEygjcSoyqpJEDhiSMSrhFyWmI\\n\",\n       \"V3ogqmhMK39AKEluChCQSU2SRgihcZRNHsWgBStXr/OtsyH3PfCdfP2Jp/m1n383UQrHb72TNCmY\\n\",\n       \"6bpsXDmPP9vCUjZKOihhARK88nvSNCbOYyxLYdkKPS5xUy0EljQ0PLtsP2cMrpJId39p02hyg5mZ\\n\",\n       \"Jdq+pNCKfj/i61/9BktL84wnY+yGIhcubtdHSEmWFygLFIKiiEmjAqUstDY8Pj5FcnmLl5/w6R4q\\n\",\n       \"GKQbrCBZLGKWm0OuF1t8ddDlg1cP0kwT1KbHbRdHfN9yzm0HDYI2MmzQ33yC7ozhi1/5I04+dBw3\\n\",\n       \"S7D0ExTFSX7pj8f84NveyHz6DMZO6dkaowzKWBRKYecGuwqSCo2yFHmWoyyrxFi1YUomwnVsLCWI\\n\",\n       \"w5xOa444GlCYjDf/wPcznhi81jzaauB34a7lg+SjmAMzHYrhmMMLSxRxTjiKuHThMq7wKbTi6JET\\n\",\n       \"vO3tP8SD9z3A1ZXr+475+fPnmemWMGIJRRruv//e8v5iiOII2y6hhySJpswrzdziAv3RmM3NNRYX\\n\",\n       \"F0mTiNnZGZI4odstm6QMRyM8x0UbjYOLNxcQx3GZ4BUWtm0ThqV2vHJcbjt9F1obvvH4X+K5ZRm6\\n\",\n       \"JSVpbkpY0BgkBkzpUUspkJYqheQoa0cUU9lZrRGypHgKU9kivSva39EacnYZ3fK13dDHXi2Uvd70\\n\",\n       \"XjtVN957HdDnO/6bDLgxZq16LIT4D8BHp0+vAUdqpx6evnbT8b73f2Tb+N1/393cd+auXYyHela2\\n\",\n       \"IvNXF7pfuFENWFWdVTU7rg9odSPiOEZ4JTamitLILR84wJve/CbOnj3LG17+apLhmCxL2Vq5iqcC\\n\",\n       \"XNtDGAPakOcZRZYThxG+7+O6Dp1Wm06rRZxkhJOQUZKUGKuUWFLSmZ1jcX6BwWBAFEVkSUqeJOR5\\n\",\n       \"Tm9ri06nQ39aBXr7HXfx9DPPcOau20nyBMd30VmBq+wyuaXLFb69aUkJGmxlg5wWOWRpWUBENQkM\\n\",\n       \"jcAny1N0UVAUpb44WmIbv2S4WAVCUZZTK42wLDZ7KeNxxMzsIoHXRqUJy4dv45tf+Ar/9TffR3vh\\n\",\n       \"EAfvfikSSTYZoxjQEBNaCxabyG1hLiF1WbmHRjk20pGITCA9hbQtutIHDOE4xBRFWdAjd4oeKl34\\n\",\n       
\"vUcyWaNXTBDSxrF9mq6gc+gwr3vN6/jsZz/HpnGQMiNNYpQBW4IsQOcZ0ghs5WIyMCi2WgHn05DW\\n\",\n       \"lRu8+KSDGeQ8cVVw59xdHFZbvMJdwTRHfNm+nyezZRz7EMXgAuudMYdGIVL0CLMCu2XxzNnP0WnO\\n\",\n       \"ojc7JFmbQ0c2OdPVPDE+zR9+8RLvfNMS7fQ8rumRZ5KmZVHoq0Sph3DnabY6JEmCMRoKMIgyb1FR\\n\",\n       \"1gBLaookw5MBaZRS5AYtBIWQ/ORP/zM++5mv0E9Sch2SZxMOuD5JOKTpltoyc0tL9IYhW1sTXvXK\\n\",\n       \"13Pfi16C35wBIblw4QpRnOw75u12h4W5BXzf4aknn+TQ8mGSJMcLAjKtKXVeDKNwgGs7oBRJlFHk\\n\",\n       \"4DoeMzMWeV42BQ7HYZngn0pdtBpNbMsmy3PIDbYSuK0GpuETxsk2RJrnOeMwZByGIOD4iXuQAsbj\\n\",\n       \"Ef2tLXSR0m21KPIEdCnbrHWBFIY401hWiSYWucbostmFEGWTlR0YRSPUbknY0giLXQ5hZYuqSszK\\n\",\n       \"2axqVfaySerFPHsNtlKKrz38Db72yDf+x7BQAKYY+EdrLJSDxpjr08f/HHihMeYd0yTmf6HEvQ8B\\n\",\n       \"nwZOmj1fUrFQKhJ+XdZxL62wMt71waozTxzHwbIsNjY2WFhYYDKZbHvJURQhpcRxHIqi2MatLcvi\\n\",\n       \"mSsXmG/PMOe38QOfRMKYnL/8zGfxjeTYwUO0Ox3cTpvB1gRb2WURjlK4rrvd3HgymTAcDknTdNoY\\n\",\n       \"ubXD+S4KJpMxcVx6361Ws2zNNoVciizl+o3rGCGJi4JcG/xWg9W1NS6cO8trX/3dHF4+QJ7G6CSl\\n\",\n       \"7TWI43g7sauUIsuyXeX0UGFx5fuTyWT7fGmp7SjFCMjyHNdyEZmNUQWTZIIKFP1wwtPnLqKFR6F9\\n\",\n       \"wglo7TIehbTsiH6ccWVtROfgMXACwjQnnQxQeUgx3CQdbTLXbiBmD6JR5LnBdgK0tkgLQZwbjLLJ\\n\",\n       \"jGASJyjXxUuGmKkKne/6FBrSooTK0Bkmi/jwb/6Lm+bmT/3rX2Ort87m1ib9wYje1hglPU4cP8X1\\n\",\n       \"6+sMTUAUhlgSijxD6AJd5NMeiqWxUbYLBvpOiqVS/OgKD7Qj7j8wSx7bxFtDTrDJYX/IqtPiy85J\\n\",\n       \"wqigO3eKojfiYNHHs2N0N+WBky7zbPHlsxd5eK1Bp/FK/sXbZrh15hzCafBI/7v5r5sv4snNS/zT\\n\",\n       \"t97CYrTKjPTRJiQyCR//9MN87vNfodlscObMPdx5550cPXoMratilp3oKZ30WJifY2szAlyUC82O\\n\",\n       \"xxf+6q/58J99ku/93rfyx3/yXoyIGN64yJLvMtvu4CqLRqPJKMk4fvtd3P/il/LQS7+bi5dXSTPK\\n\",\n       \"sej3uOfM3Rw40L5pzK+trDMaDli9scLp06cQEoJGm/5whB8EaF3q41uWxWQ8ptNuY0lJHMW4DZ8g\\n\",\n       \"CDj37Dluu+0E66sbBEHJUsmyjDRNcWyboNEgTUIaDZ8kyYjjGCksXL/sLzqJ4tIzp6zIlJRsNtdx\\n\",\n       \"aAYeV648x6VL5+m2AhpNB0xKUaQ4tiTLd1hvnuOSpSm6qKLbaXgz7XIvbbY3jHourdhT5FQ6ljvy\\n\",\n       
\"t3WPunpeP7feCBx2YOIKDq1z0+976NXPy0L5+9AI3wu8HJgHVoH/E3gFcF95q7kI/C/GmNXp+e+i\\n\",\n       \"pBHmwM8YYz6xz2eaL37mQ7t2r3p4Ud+N9r5e7VjVRdYHoM5Prt6gTUDoAAAgAElEQVS3bXvb665o\\n\",\n       \"QEVRkJiCtt9ATptE5BJyS7C6vsEnPvZxXv6SlzLT6TAJQxy7iS7YBd9U3n+9MWmdpF/xU13XRQix\\n\",\n       \"XZFZJTMALKXQRmO7LsqyCZOYJMsYjAZIKXjiicd5zatfWdKwhICk1IZI0/JvXuRYlk1e5Agpt7+r\\n\",\n       \"unbHKbvx2LZNmk6TKEpuN4YojCZPcyajCL/lE+cpmTA0urMIK+DDH/4koxEkoUWaKBzXx9hDkrzg\\n\",\n       \"0JHjrG8OaLTaJEmKMDmBAzodk0YjkmiEzkKSOKPZbJcestvAclpg+WQ4xKkkFwotFJkIydOELI5x\\n\",\n       \"bZs4TlGOT65LHZs8jfnEu3/6pvn50Pf8BFk+JElGuK7D6o01PLeJEBaWcklTWTZTmIa3JVQkQFjk\\n\",\n       \"GoxQSNslyzSBtkgcjWVFzGSb3L/oc6wLM05Ia3ID2b+BcVzCuSVuS4cMOcBlcZinRk2+tJIRNVJu\\n\",\n       \"bU+4PS845AzQ+QVmgiW+71X34s/lTJoRwjnFf/7yGT56/RTM9JlNvskx1ce3Fd2ZOxmsnef61ScR\\n\",\n       \"UjA3P19Ge7ogSRI6nQ7dTpfA9wh8n27TpdNscvz4Kf7ma49w+doVHn/yMTb7fdbW+pw5cz8rK9fQ\\n\",\n       \"JsWRMQ1HE08iZruzFBpaM/P82P/2U/jtWYLWHAaF7/lsrfdoegF5oTl9+uYU1rVrN7hxY5XlgwdQ\\n\",\n       \"StJstdjY6tPudNjqD2i2O+U6EIZoEuE5DmrqdDlTpyfwfbTOyLJy/fmey2QSsrS0wOqNDfr9TbI8\\n\",\n       \"JGgEzM0ulFRdIVDKJskypBQYIUmzDCkVSVwQRxFKSvIso90J0EXBoL/OYLhOmkzwA5s8T9B5WQXp\\n\",\n       \"2FZJuRWCIi83dbldpFOtkXwb1t2J5iu5it3OpFK76x3qtqzuwdcRhrqXXX3WXmf1byvk+TZWYv7Z\\n\",\n       \"TQa48iL3Et/rHO7qgivjBNBoNHj88ce59957ieN4u+xWa102VJjCKfX/j7MUKSRFVpYbW5aF5XkI\\n\",\n       \"z2G9t8V//qM/4sd+6EcYb/XJitLozc6UwlVQ3pCrV69uZ5VL77tBp1NO3rW1NQaDAWmakiQJrVZr\\n\",\n       \"2sknwFIWhS62f3+WZWXI5dl4ngdSsrq2xmA85tz583z/234ADxBxVG5KlkVaFDiug1SS0WRCp9sl\\n\",\n       \"mUIylu2QpkXp1aQp9rStmmU5SEtx9doK/VGZ4dcGFpaPAJIbmxts9sc89a1n2OyNGfQjlg4coUgg\\n\",\n       \"8JtoJD09QgpJK2jR8BqE4zG+45KmKYXJ0FKTpDFRHOL2rjKejFBKE4ZjpBKkeUZ3bpGZhSWcoEOm\\n\",\n       \"IUlz1lnENpBMRsx02pjCYHsNtHTIp574e//tD9w0l17xtp8liXoYEzGZDHCm1XutVptxGGHGQ4yQ\\n\",\n       \"WLbNKAwxUrFwYBnL9dnqD4mTjDjOKAwYW5eslTBBSYGtCg75GQ/MaW6fSXHEgDBMGQxy+vYsHa+D\\n\",\n       
\"ZbfpqyNc0Qe4EI145vK3aMeK1x6zuI1HuPuQg3v4Aa4evIvNuVN4meLUkfv4lfdfZKNzF3HyNNba\\n\",\n       \"V5ErV7ljvsl4fAlUQV4UtDptMIZRGJZdojodAMbDEWmS4FoCQUGSJWSFptHosLHZI45jPNfFsgR5\\n\",\n       \"lqGERa5D3EAwGY2Jo4R3vetnWVnb4FOf/zz/+7/6WVbXepw8cQqTaxxp8D2f6zfWufeeUzeN+cc/\\n\",\n       \"/kkOHVrm8KFlfM8lL2ASJRghcFyXrNCMxyW11rUUo8EA3/VoBD5pnuP7PmEYsrGxzpHDh6cMpYIP\\n\",\n       \"fuhP+cQnPkG306G31QOVcePGDYos58SJW3njG9/Ea17zGhrNJlprtvoDut0Z0jzHlmUzFqNLIxgn\\n\",\n       \"MVIabFsiZNnc/JGvf42DBxdxLQdTFERRSKfZYDjoY1tTaHVaJbvNx1Z7dfatKaVztz0qCRT72jqy\\n\",\n       \"LNtGGipm3F5JisoW1kvsq+/9B2nAv/KFP9+VuKwupCqbrbzoOs5dHXXa4PTztnfILMtKI0gZ+nie\\n\",\n       \"t23YqpL3JEkg12glKYzGsRx0kpVetGsxkZpPffpT+Fpy+/Ix/M4sytlpNFxpGbiuux0hVDchzzNc\\n\",\n       \"10MpOYUyyl6bZZFPzmQy2b4uIRWWbZetrTAUeXkOQpAB4yjl8soK/eGYV7/8pRya7WLZFlmWlypu\\n\",\n       \"xlBQdrORSqFFyb3tD4akiSZPM2ZmZ1lf3cD3GwghabRa9IdjpG3hej5hkrOykXLp8jW2+mPyAowW\\n\",\n       \"2JYiCYdEkx5FNmZxrk2URGSNBSxp03BbuMonTwxFrpGWjbAE/ckI27fLoolJjzQZs75+DSVSICPL\\n\",\n       \"4pLJKG06sweIU83s/AF08xAWmng4oOW5ZGlGkhlSY5EbRVoYPvjLP3rTXHrNj/4iRZogTIakwHUU\\n\",\n       \"nW6LOJ6QpDFWmhMlCUmW4fgeQbOFMYZJGIGQeI6DzksJ381iQNsIfCRhYdgYhcxIi07Y50BQkJsR\\n\",\n       \"SMNwa8jnx03uPhBw30KTXj/m+saE1f6IseMjGzZ3tQwnrAlzXZ/hwjG+vhKwnt+BFoZOc5VXvOp7\\n\",\n       \"+dI3Umj6LHcybjz8cbrJN9AotOwwHI1KYTNKZyWOIyylkAKUVKhpRfE4HBDGA06cuAXf7ZBGkKcZ\\n\",\n       \"ppjQ8CUKRZFI/FaDftTn13/93/PENx/jzrvvJtUFTsPnd9/zHu44fQd333EXS3MLDEYjRtEE3w84\\n\",\n       \"dvhmJsqFixeY7bYREhSSPBPkusD2PHJT6vJPwlLTJ09yHKXQRblmMq3pdtsMBgNc1yPPUp566kme\\n\",\n       \"fPJJHNuiEQSAwXFtoiQiDiPW1ta4dvkKW1tbgGRmZoY3vOmNvPZ1ryObrkVdlJKxRguUpYiiMgoN\\n\",\n       \"4wjXczBCT9kzV8jCATrP8H0Xk2d4tsUkHJVqiqIs/y+F5EqYsW6fKuOqNbu85fK10sjXC3OAbVtW\\n\",\n       \"b0heQTh7S+rrSEJ1/j/IUvr9Or5UEEd1UTsDo286r/KwK2NYee5VCXhFlJ9MJtsCVJUBVUphG4VG\\n\",\n       \"l9KXAiwpsaUiMhqhJA++6CE+8Pv/iduXjuC5Hm6jsb259Pv9UqBqMMC27e0S+yAIMEYTxxGbm5vb\\n\",\n       
\"kEtVrRkEPs1mY5qcKiskoyhBWQrXEqAVRZGT5gXDfg/HbXLrrbdx8fI1PvGpT/PWN3wPw+GQ2fm5\\n\",\n       \"aekySNtiNJkQpymXLl+i1++xurrBaJggheR1r/0eFpaWKQpDu9WhNxiihY2yfZ4+/xwXLt9glM9T\\n\",\n       \"FBLXXYZUYwPoiE7HoRVoomhCll8lzyb0NiLmZw8wyQqasy2MEvhBg3EYE8cJrc4cURoyGA5x7BaL\\n\",\n       \"R48yc8tpAleQRhOMLmi3ZikKG4NPGGmSWKPtNVqeRyIysnCMbyvwPOJCERWCKN0/G58LiRYOFg6+\\n\",\n       \"53L48BKjcY/OTJswGuFqH0YjrDyn0AWFtum0WnhuQpGk5EmEyQuUMJwW9zHSa2TeEM9KefDYYbqe\\n\",\n       \"ZLZ7N82ZU4yygKWlWRy5wZsmIU98+WP0oz7hwWMstTrclzd47omnGYdn8c2YqHkrjwWHySZjbpcu\\n\",\n       \"36E2WWmlrDRbfO3LDzNvBUw2XVJcVDfE848iJxLfatDsdHE9b8osMkRRiO95GF3q8AggMyB8h1uX\\n\",\n       \"TqCkwGQOndYsOk7xnJTALej4LcKh5oGHvoMHX/5CPOHyuU/+Fffd8wKUpWm3ZnjDa76H3/6t32Tj\\n\",\n       \"0mXe+sY3lUShVpMo3r94qtn0SNKYTrtFf6tHw2sTtFqEcYxjO0RJgmfbYCBouggDaRzh2TaubXH5\\n\",\n       \"8mUOHlwizwsef/ybnDv3LCdPHqfVbCCEoN0u2VvKtjGFZjgYcHHpAlcuXS3X3WjIb/6//x+PPfYY\\n\",\n       \"P/KP/zGLBxZR0pRNo4VFnhW4blngZrsOaaZxXMjygsUDx2hYCVevXWFz7QadZplXch0HPTW4uiiN\\n\",\n       \"tzYVj3x3qb1l2VSJzN3aLGqXOmi9crwOpdQLF+uaQ3UcfC+d8PmObyuEMn18E7Zc7XQVvlyn2lSP\\n\",\n       \"dwkhbRtwa9cF1z+zwqir5x4WGigk5NKQ64IsS3CUTVFo3EbAn37oQyAtXvLQyzCFZjzq4zoK24JO\\n\",\n       \"uwmUwvZRnBEnGZMwRucJvudiWTa24xJFMZ3uLHGSMRyNsSynpENaNrooE2oGU3qLjk02Fbo3xpBm\\n\",\n       \"MYHr4/k+11c3+eRnv8APv+MdDAcDXvCCB9FZQZrlZLkhTjI+9ZnP0Zmd5wUveBHN9jzv/t3fwg0s\\n\",\n       \"Hrjvbl505m7WnrvC0vwS41SSN7p87tGn2JxEFHmC1jkCjetYZGmKJSRFZhBakiYGSzoM+1sU6UWy\\n\",\n       \"ImNp+Qhe0CErHHTuUxRlYUcU91AqIcsm3HPyDrI8L3U+jKbXHyKVTa41aZoTxgmtdgvHcQhHZQVi\\n\",\n       \"qx3g2DDorWIrTRpHZLlGS49fe9f/etNcevs/fzde4JKkCUHgMg4nOK5NGIUYrSmiIb7v47sOJs/o\\n\",\n       \"93p4rkOaxLheUPKqpyqBcVTioRLDLUcO02m3MHk6nY8wHI4QUyfhlsO3sLJymbXVFZTKiJMRpshw\\n\",\n       \"XJeVlTWCoItUPp7XINIFjueVSTIDpsiRxpTjLBVa52RxqSq5sOihlGZzfYMi1TiOj5IOUnoUWESp\\n\",\n       \"xnYbGGkj9ZDAKbn+S8uHcfyAOE2RgGtLjE55yxu/l8C1cRzBcDBmq7fFRz/yUX70nT/KJAxZXJzH\\n\",\n       
\"cVwcR/IHf/Cf6PcH3HX33XRmZ/A8h3vvuvOmMb9y6SLzs3MUeU6SpGgEzXaLMEqI4xQvaJCmKcqy\\n\",\n       \"aloiGa2gwWDQJ/ADpBDcuL7CF7/4Be66+87SDlhWqdSILJtFpCUuXeVx+v0+Fy5cYHV1leFwyKOP\\n\",\n       \"PsrrX/96fvzHfxzHc7edvZ2oveTQ1mEOgDDN8DyHyWTC1sYqUTgqIwRpStgpLSG0UsN9RzwujGOC\\n\",\n       \"oEExpRtukytkSZ+URu/yqut6RVX0vuON7whm1aswK0e7jo0/8OLX/sODUL76Vx/bNtawQwOsKxTW\\n\",\n       \"Odz1ctOdndDaFXpU5PrqqJekwm7lL5FqkBJhS7QUU4lmjTQlNIGS9Ecj/sMf/Efe/sYfwLUctM6I\\n\",\n       \"ogkCzWDQx7ItXN+n2eygEdv0vjzPy2TNeILfaJZJDwRZrhmNxsRJKbKTxBmWdEBA0PCJ4ghtynDe\\n\",\n       \"VorxeEwcRXQ7Hc48cD/tuTk+/KEPkoQhDzxwP77rcebMGQyS9fVNeoMxt50+xfpGD8fv8ju/93uE\\n\",\n       \"6YQsmXDqyBECIXnZd70Uy+9waWvMY89dY5Jp4iSj026RJBH/P3tvFmPZdt73/dba83DGqlNTd3X3\\n\",\n       \"HXgnXlIcREqGIpEaKFlEHCGRYweBjUgZESASkhdFGZ7jIAiSQInzYCcvNig5tmwlQpwAphRIDEVS\\n\",\n       \"A6/E+fKOPVbXcOY9jysPa5/Tp5uTX8JLA3cBja46VX266py9v/Wt//cfyjIn8LxuMAlSGJS5ltUX\\n\",\n       \"eYbIH4JsWCcRH/7wh0jSmjxXFGmL5/mUZUa/5zIYBLRl2anitHptuVoThD1WUQxCMhiOyIucVmnu\\n\",\n       \"LlJycXGG75mItqQuEoa9gOU6YrB3xH/5H/4b37dr9N31nde9u3cI/YB+r09elOSlphumWUnTKiYH\\n\",\n       \"E/KOPRbHEa7jYNsWdV1hSVNbBAj4rU99ipfe+xJOx/HfFHAhDAzzcTvbDaTQNA3n5+fMZjO+/vWv\\n\",\n       \"c3Fxwcsvv8yv/se/AoitB/3m3n8k2BFsZotVq3SgedNgmYI8T7i8uCBJVtDWeI5DEkdYtoVtaKMs\\n\",\n       \"x3V0qlfXWVfdCb9VO0rxpnqMUbepU0+Go+j1uBHdhp5YFOVjjwkheP8P/9QPHoSyu/tsfH53O+XN\\n\",\n       \"50VRPKZa2jAsNsUeHhHkbVtj309Odnez9TYbQ9MxNxR0RyetxjKlSVUVGIbB/sEBZdvQltpQxzRN\\n\",\n       \"er0elm1zdOMpyrJisVxxNV+zcSgzTY+mVdx/cAfHdYE5tqON6U3bwbQspLDJ6gbX30e1Bk3dkJaS\\n\",\n       \"WhnYtokUgiRN6A+uc3Bg8/xzz3K1nHPtqX2ee+H93Ltzhz/4w8+xNxrzxVe+wi//8i8xnc4J+30u\\n\",\n       \"zy+xHZ+mafnABz7AV77+VUR/wJ2zC9okxfeHnD79LPcvpqRRjNsb4btj7ZnhWhzsX2O5nCGEiWXb\\n\",\n       \"NE1N3qQoBa2pGAwOqZuEloqvfOkLPHXrKSbDISrU+Kxtj7cbq+lbCGmT5QWmZeH7JnWd0g9dpGmy\\n\",\n       \"XFzSonAchygpqJuG4SDUAb+mSWsKsiRhGAZcnN37fl6e767vsoS0MCyHvG54eK797Iqqoj8Ycng0\\n\",\n       
\"ZjpfYVoWRZ5pXYZl4roOWdYyny0YDYZ8/Rtf5QMf+ID2AS+1XF0BspVboRJKF7/+YEASx9vT+I0b\\n\",\n       \"Nxh1QrpXX32Vy8tLfuVXfoVf+7Vf4/T0lKIocF1/8yQdq2Q3klHiWCaN1JoHzw24cfMmX/nylzvY\\n\",\n       \"osTxA20+V1UoIE0z3Yh0sIbrdMPIpgudAehM7HYh390m0jCMLR1RqcdnfZt6ZprWY/XrezXY71gB\\n\",\n       \"3z3q7Bq7mDsvwgbo310beh48LrffGKXDI1hld5q7W9Cl1HmTrXhEsjcAo9VyWN91SaoSy7SYHB4Q\\n\",\n       \"xStOnnqWZRSjlCQrYBotqBtFUbbEGdBC00BSpBpzCyZYtk2SpkynMfvjfeKioEmLDhoyyMuStpHa\\n\",\n       \"JyVJuH79Oo5jEfoevuvqgmiaDMZ7lNLi7tmMxSpnMD7ipJXUZUGaxvzO//F/8oH3/xCB52E7DnGS\\n\",\n       \"UamGW7duMV/HXM6uGIxNYjXlz778FVrbZbpaczA5JCkLWmVgixbHMCmznLLQobRVU5BliR4CqRaU\\n\",\n       \"JEtqVGvg+z0so+Dq/G2ylY9nh7z4/A/RKJMyb5CmqTm0baNVlW2LZxs4nk8cJ6Bann3qlCzPSbOU\\n\",\n       \"0ahHVTesVzGe6+E7JpVQFNEKQ7Uc7w2/b9fmu+u7r6vZgm+8+jrXT2+glKLf69OzbK6mM6arNUVR\\n\",\n       \"4HkuJyfHJElEPJuTei5pmnK4p61iLctmvVxgmNovfBOFppTaJmoZlg1SEMXRY37gWZFjWiYn168R\\n\",\n       \"pwlvvfUWzarmlT//IsfHR9391ezUDvHY36rtao1hdvWjpakU73v/+4miNefn5ywWc1zPwzWB+lHU\\n\",\n       \"mezgkrLrlLV9hVY3t+KRSRV8exXlo9r1iDa4C6Ps2jb/86x3rIDD47mYu5/vUnZ2DZt2VU27RfrR\\n\",\n       \"sODx54PHu/HN522r1YCbj4UA2baIRuFYDmmaoUyT6WKOGwRcXl6SRQmO30M6IVfLmEoZpFmBtCwM\\n\",\n       \"BI7tsFquKSqDg8MT5osFgTAI+gec3nqB+WqJ0bZkWUYcx5ob7hp4PY/BYICUUockRxVL1yKLYixT\\n\",\n       \"cHpywu17b3JwdMJg/4h1UhKvV4xHQ6bROQqL2WzFnXv3uHHtGqvlgqOTU3qDkIvbdymKCtUapEVD\\n\",\n       \"0Uqeef4FLmZX7B8cce/BHXw/pCFjMhqxWsU4tknoWFStwnEdsiyh3+uxipYURYnrhpRpTpxWONLG\\n\",\n       \"UDXRYonyGrJ4gVIWQW+EECatUNA0WLZJFKc0VUktJK5lUrctZw/u4nkelmEwvTxDCTiYXKOtG+YX\\n\",\n       \"l5we7rHfDxgP+yzXy+/HJfnu+udYYX9IXjQUtbaFffhwyipac3zthHSVcnA4oVUt33ztNeq6oipL\\n\",\n       \"9kZDnn32aWYXM77xjW/QCwIiYwVsKHia2aHaFkMnIFOXuoELgoCq0iyxjS2zZVlEUcTBwQHL5ZKq\\n\",\n       \"zviTP/ljhsMBP/nxnySKI8JgI0LaNHpdEyf1CdwwTAxpAAaYgihe0+uPCHtDsiznm998ldbQIc2O\\n\",\n       \"45CnGdI0qcuyc1vUGLsQOsjCMK3HYF3YPfE329nWJmtzw4rbQEOaWfcoOEZnHDwu+HlyvWMF/BF3\\n\",\n       
\"Um7tXnf530+mrcOjHWt37VrIWpahPRPkbnrG4/6624FGd+xRSmEI7eAnJdRVhet4pG1Nv9cjz0ui\\n\",\n       \"WtOKskYSXa0RdkBSVpQ1mErgOR5FK3DCIa7wiNMSx+1xNVsQhg2rKOtEDDZNC54f4nQ8XWEKZosp\\n\",\n       \"dV3j2A5hP2S5mKNokC1UTcEzz95CWj6LxQrbcrl5ax/XdbEsmyyNEaohTUsWixUvv/wy6zgmjpbc\\n\",\n       \"vX0b1bT0e0OG/THiuGY8CPA8m7LIeOrG9c4jWXL//l2G4ZiDccjtew9pakWcZxwMh9A2OIZJf69H\\n\",\n       \"VYHjjPCNIbYqqdMVRluQJRGW4eB6AbWqMEztJ2F5HlmW0/MDAtfDtG2SJEM2Nbeu32C9XpHnBZNx\\n\",\n       \"j7oVxOs5prQ4vXaMbxkMQwdBTfAdvFDeXd//9Y1vvo5l29x7eM5oOEIqQd20PHx4TqMa5t+cMxzq\\n\",\n       \"SEFahaAhz3K+8IU/xbM9XnzhOb72ta92qks9xDU3Xi/ShLoGNB2wqmu+8MdfIElSmqbh6aef5umn\\n\",\n       \"nyJJUzzfw/Vcbt66yXR+wXyx4Etf/jLvffllJvsT3c2Lx0/wis5iWRoo1c3WpLYoGPRH2rfeEHiB\\n\",\n       \"yYvvfR+zizus12vMRoCUGKZJUzeapGBI7TPeKixTku0gBrski43x3i7TrmkejxXcfH0DGW9CJTaq\\n\",\n       \"8u+03tFQ483Os6tygke49WYYsCnQpmluZeObx54M9d2l4ABbkc0jvmaHS3Xe1w3VTpcusF0HpIFv\\n\",\n       \"2VSGxcOHZxw/8wLS8aiUQBha/KIUjEYjlNA2sUXV0DYNliHx/RDTkAT+EWEnOljHOlczS2P6/QHR\\n\",\n       \"esloPNJUK9cnSVLaRnF+doltmzimgxSKyf4hTQ1RtOBytmI0HDAYjkjTFMPyOLm2x2x6wcXFlNVi\\n\",\n       \"yfve+z7yvCDKcgwBP/5jP8adew+ZzueEYUASzTm/OmdvNKIqc6RowarpuYrJyKVM5wQ2UNcEgU+S\\n\",\n       \"pown+7imzWw2xR/26Y/2WU/nrNcNQ3+P2cV9UCYIi6ptaNqalpqmMomiCN/3SdK1VrOahlZGGibx\\n\",\n       \"aoVt2xjSoFExlimRlpZ5H+7vUacxRZ4zm55h2xb/6B//Ll4vYDwcMpvNtI/MYklRVKRpimlY+H7A\\n\",\n       \"wcGRtvr1A/JiSZZlpGnMfLmgKIptAEcvHDAYDlkuVpRlRdFIFnN9dI5jzV6hE5lYhk2apBRpzng8\\n\",\n       \"5uhoiG3bBGGPMq9plOLOnTukWYoQgjJP8AMXKQWn10+5mk5ZLFfkZYOSkiDsIQ0by3Kp85qr6ZyT\\n\",\n       \"4xMQOYahGSpFnuOYWm9gmzoRyjAs6qrFcTyEDWWTsl4sMERDHkfkeQqqoVUwW6y59cyzfPONN7h2\\n\",\n       \"/RaqlSB02lJTNxwfHeJaFqYpuXvnDrdu3aRpWpI4RSmJH3iUZcFv/Mb/wL/+1/8ah4eHuH6AIQ2i\\n\",\n       \"tbbujeOUMi8JQp+L6SVFWXB6eg3TNLl//z77e/s8uH+f48NDnnnmPdy/e5+3b9/j5o3r3LlzmyyN\\n\",\n       \"ME0NSyilDaXqusGybJarFZ/97GeZTqcIIZjP57zy56/wwgsv8Au/8AvUdc16vSbs9djb26MoCr7w\\n\",\n       
\"hS/w4//SjxOGIa7ja//0rqaIJxywBZ01787DEp2CJaWB5/kcn5wS9iLu3r2LFApDgmk5SFpq1aIQ\\n\",\n       \"2I7moe9SBneJGJvEq01HrZXZ9s5Mr9EECMRj9OofaBrhn3z2/wLYUms2w8zHyfKPGCrweCzSBjrZ\\n\",\n       \"SNM3viC7lJzu/3pMDCTQftpN3WjKkgCEwBKGtmKtWtwgIKkqSin4r//7/46//BM/hW1a2LZHmtdM\\n\",\n       \"Fysct0etBEHYRyHwPJ9GKUQriNdRd4roFIhSJ/J4ntfJ+02yPEMKk3WU4Do+cZrhe75+01pFWxfc\\n\",\n       \"vHHKZG/EajVn/+CAqmkoqwbX97n/8IKirIijCNFqqbljGVR5ynhvyNPPP8Nrr9+mPzikrCDJdVxZ\\n\",\n       \"XReMhj1m8xkGgn7okcXnxHHGapWwWKVcu/4Ufm/MfLnGdjwQJqsoZm9/zLqYE68S+s6Q0A5oipzD\\n\",\n       \"/QG22WLZLWm6xu952nSrNjXlSgjW6zX7+xPW6zVSGvoY27aURdkdiUtM28X1Qr1B1g2ha1EVGeNh\\n\",\n       \"SN00lE2DMEzyNOlOVi1BEFJV2lJAIGmbhrpqSJKEsqroBwF1U+O4zvZm1iremjTNEUjSNMM0LKQl\\n\",\n       \"qJua6eyKycGEu3fvMJlMWMznHBxoBd94vE+R5cTrGU2rA6yjNME0LUajUYe/ttiWSVnk2lxpuSAv\\n\",\n       \"ciZHRwz39snzCmnYLJdrVAPTqzmjwYhRf4jpNgjZYhpym2Rf1y112Wo+ddmQxBlZVhCMfPy+Q1vX\\n\",\n       \"mIYA1dJUFU1dUVba7KltoShL7p895Md/4uPM5jM8x+XgYIIhDbIkwXUdhoMB6/WqOxUbWJYNKF55\\n\",\n       \"5RW+9rWv8tOf+Gn6/QFB2CPLckzLJopiemFIvzfgajrF8z1aGtbrFZ7n4Fi62To9ucZ8PqeudBjJ\\n\",\n       \"5eVDbp6esF7NsExo20eEhKZRKCUQ0uCP//RPuLy83J6ioyhCCEGWZXziE5/ghRde2J7W79+/w1tv\\n\",\n       \"vcmdO3dxHJe/9V/9Lfr9DYTSnb7ZnLi/VTL5napg09Fr1+s189mUssi1EZroRHu2RdvZyGrk51GM\\n\",\n       \"2pMsk03sm/7a40PKzfdWVfNYfZNS8tIHfuIHj4WyKba7DJEnf/ENB3Q33QK+9UXaFfds1i5kspn8\\n\",\n       \"AlvWiWg1boXQtMFaB49h2Fr9BoIv/tkXOdg7QEhBkecI4Oa1a1w/miCEZL5YkWQxWVGSlGtapfBs\\n\",\n       \"l1HPwbFt7VNS94iiNVEck8fx9oL0XJcgHHLz5AjTclmvE/KioCxrEIqsKvjm175MfO0Y17FIHIOr\\n\",\n       \"2RQlDPrDfaoyJ04ypGEipIGhFIYUhOMJb91+nVJlBMEI2VZk64SyqvF7ATWKhxcXhEFIUzWcnV3h\\n\",\n       \"W+B5fVxvQMslbVsymz1kONLFxvUcTLOHKVt6ZsXx6RGidXBND1P0KYuIqq2osoqqKilzk7ZpKatc\\n\",\n       \"m3y5DoYpWa0XnaTfxFBm59diYlsGozBAWg6W4+F4LkkcMYcO+akAACAASURBVLs6px/43H/4gCAI\\n\",\n       \"MWwfIcD3dQpLHEeoVndmvuchpUEYBLiOQ+CNkVKQJhVlEXN2dsl4b0TY6+kEIVVw/fSQMq+J45Q7\\n\",\n       
\"t2/j+wZZnvPCc8/x+huv47kOWRqhVM2gF5DECRfnD+gFAddO9AC2qEpq1ZIVOet4RZokWJbOvNwb\\n\",\n       \"jvG9kHWUomTD/QfnvPHWXRYr3fH7fsDeaI9Rb0idr6lsyXodIU2B5/u4jrMdco3HE5I4xZES3/fJ\\n\",\n       \"shzbs1jFC0CQC4ijhLIs8DyPo4MJrl8iJbz99ltcOz6iLlJCz0EIqIocy/fp9UMc2yFJMwzDYjwe\\n\",\n       \"sFotSdOYui557bVv0O+H9Hs9XMeiyFMEAtsw2B+NSNOMe3fv4gYei8WUXi/AMg0EUBQFe6MRt9++\\n\",\n       \"w+HhIdiC+WJBr9enKDXTq25LUA1S6eKtbWMtLi8uuus0IC8KsiTh5No17t67hzQMXn3tNd7z3HP6\\n\",\n       \"FN40uK7XpdjrmVVV15RV3Q0fv7VQPrkE357xITBoWhgO93EcF9XWXF5eUOSZZkq1DYbjUOYFhnjU\\n\",\n       \"MO7GPW6JEh3bTj/2SMDzaBYntmjBk83rd1rvaKjx5u8NDLIp1k8avzyya3xEPXRdd/vxJnB0gxdt\\n\",\n       \"XpTN2o04gu6IUzUgJY3QeHgXEanZLEWB5/f5/Gf/iA9+9EfYH48RtCSrNVW6oOcHqKbhdN+najwM\\n\",\n       \"1wNpUTY1WVpQFgV1nSAaCxMY9x2eun6AYZqU5a3tYGaxWnB5OSWNtR+H6wYcT8b6dRB99vaHVEVG\\n\",\n       \"nqe01ZrAllieT6M0FBOGfeqq48ELgWVKlqslnuczX0yJVjH7oxMsaSIdE0MoBoMeffoslyuiVcrx\\n\",\n       \"4Q0cQ9E0NY1quH4jpKprnr9+jdu375AVCaZtEMUp/TCgZ4LVZAyGIa6jMybbQEuQi7xlMj4gTUo8\\n\",\n       \"L2CaTOn1Ndzjui7SkCRJg1KNVsG2NVXH3V/NFL3hkCROSIucXt/n6OiANFlz8+ZN8qJmFeeUWYUr\\n\",\n       \"WkxpMOgPGPQGXDs+RrUtZZHpMI1kTZIkKKXY2ztkcjBicjwmSROE2XJ+8QBpSO49uEtVVNRlw/HJ\\n\",\n       \"CYPAw7R1CPZHP/JhkiymrCru373L4eGEhWUw7A2YTqcs1gmz+RylWkb7Q5zAx3ZMxgf72rohr7l3\\n\",\n       \"do7OXTRRwiPoDeiNDQbjA6q64MH9+0ijpd93OD1+ijSOsfyxNhmrKqpGBzmYhsU6WlCWNXXVCdaA\\n\",\n       \"tqg4PjziwfkFnheyWMZ8/Kd+js9//vNgOPi+ji97/pn3cHZ+RhpH1LU2f1J1TRJHOK5LGPQoipJe\\n\",\n       \"r8c6iqibBtOSlGWDaRm8/4deRgitMPZcHyEkaRIhpYllmhwdHZLlCY7dYx2t9HstdUGu6xrLtIhX\\n\",\n       \"MXGSoKSg1+uDKjr8WTxmjdy2igbBxfkFVVWRdq6GtqMpuJuT9oMHDzC6IHKE9sjv9/us1zHz+ZIv\\n\",\n       \"fenLfPxjH0cgUErbQ4AeYH7bLDGlvgVgAf2zSGlSljWeF9I0FUfHJ9y5/RZRkuO6Gj4RpolqHg0c\\n\",\n       \"nyy8Gxrho4bT+hbYV6lHyszvteFs1jvKA9/sQHVdP0Yf3MWQHMd5LMj4SYaJbdtbfuVmEPpkGMRG\\n\",\n       \"cr/7YthKJ9S0QumOuws+MCyTsNfj1dfeZLlc0vMD6rpA1TWWAbOLh/jHR13oQJ9SNGT5ikZaIE0C\\n\",\n       
\"zyBwfYxuWBFHMVoNVhCvV/ieR5ZkiEYHAD//nkPiJEWhi2RZLPEchzheUBQthlTM5ndYXK2w7QG9\\n\",\n       \"kXYK7AUBqzTFtn2SJMM0BHGS4XoBhtWwTmL6gyGqrcnSDC/oIYUWVkwXK27deoYwaDClSdtK8irR\\n\",\n       \"1CvDoqlK3nr7TSaHB/hhoDm+MsAQcDjoEUU5si7Iqpyg71IUKY5rUZX6SJlFFXlaYlkGeZHiBzpP\\n\",\n       \"tCwLXHeE9od5lMzt+z7tMsJyHWohMOMVaZqQJksG/R5REtO0BmF/QFEpSNaUVUnbVhRZThprHHWy\\n\",\n       \"v4chdF6NEA2mYbJYnuP6HoZlISSsoxXhwCWKE1xfcnrjBgaSsqxYr9cYpkFe5NiuiR94mKbB6ek1\\n\",\n       \"zs7u0+/1ODt7wM0bN8gKC8f1yYqUKItQqsEoBbZlU5U1B5MjLNMjzwrOLubMlxGO5+D6LoYlmBwe\\n\",\n       \"c3xyxN6wTzSbcXn1AEsaFG1Di8QwtZeOkmxTl0yn0T7aSvOXi7zk3r17WLbHCy++hGF73H1wzsHR\\n\",\n       \"NYqyRDUlIk2IFgsc2+by4iGDwZBwFGJbNo7jajOoqmY4HJJlGU3TUpY5lqVTkoLA5/DwANDFxLK1\\n\",\n       \"F894NCRax6i25erqgslkwipaYtkm0uiwXNl2boQuhmHQD3tEecbDhw954fmnKbINtxvKqgTVeYw0\\n\",\n       \"Wui2sWvesLTyPN/OlI6Pj8mybEfIZ7BYrLZ15Ctf+Qo3b95if29CEPg7czaFNHaK4maqKbrB1u7j\\n\",\n       \"aCFbqxSmZVLXTVdfLG7depp7928znV3hOBaGkJg7qstdKf0uc26TxqMx72/lehuG+VjxfpJG/eR6\\n\",\n       \"xwp4EAQAWz/rTYdcVdVjg8mmaR7DjjaBDZsjyUZyL6VE6ix0XeiVQpr2I2+BDp/aTKUbQ2ipbqto\\n\",\n       \"pCQ39S5tVjVZmvB7f/Q5jm89hcorktYlTkosQ5FFJTDj+skhV/M5vWGfwPdASrKyJE0T2lawXq2p\\n\",\n       \"yprBQAsOBHCwt0/ZVBiWyXQ2RbSKq4dzzUixDPb6fbyJDwis66csF2uiKCFwDujfPEAaLctVxN7A\\n\",\n       \"Z7m+YGzZCFkhzZK6UTQSXM9m5B6wPx4wGAy094sjWSwv8ZSWOBfLOdk8xHY8ppczPfDxPNK0ZBll\\n\",\n       \"eLZNr9enjUuGjg0SaplRFAVJ6XPt1lMIaZKmGVla0TY2q6RidrXAPrJxXYHnSZapR0WFVAbJOt1u\\n\",\n       \"wE3TkOcFaapzLpum4drRgDZrUQomk30O9/ewLEs7PlowvTrHtnXREY7E8x0cK6BtKswWyiLl7GFC\\n\",\n       \"o1oEBkG/j+8HHN28QZ4XNEpwNZ1zeTXH9wWhP8ANbaLlkjyLGYQh+8d7WwvgLE1Zr1csZivqsuT5\\n\",\n       \"559nuVgQ+C5vvvk6IHE9j4PJhBvuGNu2WS6X2I7Dm2+/xeLqIcvVitVqRa8XcvMk7E6CkrqsSC4e\\n\",\n       \"0DQtVn3C4dEJlm0TJdorpiq14jdJdLEWQjDo96jyEkMIAj/AkJLCzHnz4UPG168zvfcGoVHTNgWq\\n\",\n       \"LemFDm2lKIocFUIjJO/78IdACFzXx7AcBAbCMAl8gzzLsCyHIo8xDZssK5jOFgSBh23qGMH98YS2\\n\",\n       
\"qVgv5mTxGtfTuZOjYch8doFpmrimQ5mXWK5JpSqqqiAIfPJSu4RatuS5555lNp9RNGBgIKRBWVZY\\n\",\n       \"tkFd5zRtjTQrxnt9ZvMLrEzbyMbrJb1ej8DzuHH9OqppKItKB4V0TeD5+RWW4zPaPyItGs6nc4Ks\\n\",\n       \"oBf2sG0Dx9w0gzvMtHbTMG6jjUFs9CE6pR6gy5VGmhbKMLl18zkCf8S9e/fAsmiFhmEtQ6LqGql0\\n\",\n       \"2Epba6FgXVYIpD4NiEe2IZv6rQkdzbYhhe/dgb9jQ8zP/j//+xYTesSBfLT77O5a8EjR9KR3wK4N\\n\",\n       \"bduyTcIQG8x8B07ZdOGGYeA0LdJ1WCQRgecjkWR1A47NH37mj3jlc3/MJ37mE5i+i2OGNFVNWaQk\\n\",\n       \"6zlFFnM4GbM/Gevsvl6PVimqtiEIevpNUlCVDUJpXLNuW0zL1Ck5hqBpGxzLoamabuMqiWM98Kyr\\n\",\n       \"Gi8Mmc+0Y9vBwRFVVegDnTSoaz3MLMqG5TrCcX3SXBt2hWGP1WqJ51j6IjEkCoVhaGXlYrHCsGyE\\n\",\n       \"sHBsh6ZRSFMSJwlh2CdNss4/vCD0XCQ1nmPR1AV1VWF6DllWIITE80Ok0E6LtmlSpDH9wCOOV8Tx\\n\",\n       \"inB0ojsr6GiaOs9RwQ61Sm/KQuU0TbsVQuhOBIJAH9k3fuqr1QrTtVGqwbUtaBoGgwDVtJqCZRg6\\n\",\n       \"AKCsKKsSKRVu58SItLh77wHXr58iEFhWt+W3DUq1lN1Ns/FVNzsXvSLLsG0bKUT3HgnKqkYISZIk\\n\",\n       \"2+93XbfrUm2m8zmr1ZrRaITn6cdty9JB00o3E3medy6S4PlBBwHuWEagKWiWaVIWxdaWtK4qirwg\\n\",\n       \"TxI81+WZ597DxeUVluNovjSKpiqBhsBzUaplulhwcHKK43iUVY1h2kipnTlNw9S+Jnmufz70gPns\\n\",\n       \"wW3miwt+9Ec+RJkX+K6PlrnblGUBou2+19j+zHleaNio1X70nud1ls66mWpUw2qx4Nq1E6q6okhj\\n\",\n       \"8iKDtsa0JFmSAHDv3l3yumG5XDJfzCnzcltg3/ve9/Hiiy9SFIUetirBxcVD3r79Nq+88hdMDo/4\\n\",\n       \"t37plzk5uUYcx5yenlLmOt1o2O/jd3MAfY1tjKUepxpvGGvfbrz5pK4kSRLOzs5YRRcEvqf9jeqa\\n\",\n       \"tq6wLRPVbKCiroa1IKS2pd9g35tOfQMZ7z7+3g9+7DsOMd+xAv7Fz+uch7Ist0PI3Z9lNxShqqot\\n\",\n       \"oX3TqT+Z5Aza2W93mLlLJdzlnQshCKTBosowBz1cTJxGkKP4f7/0Cn/7f/zb/Orf/Pfo+SH0PaJV\\n\",\n       \"RpGkHB0fYtAShh5Xl5eUecKNm9fJsozDw4nOv8xzjfW12mrz2rVTLMvGMEyKqiIvCxarJVEcY5sW\\n\",\n       \"nu2BgLAX4HleF9hQaxqTH/DqN1+jyEstTOiFnJ2dMR7vs7d/gGk7FEWFZTsslxFh2EOaGpe0TIOq\\n\",\n       \"KsnylChJyLKM5XKJ7/c4Pjkhy8pHrB1TYnUCJkOaGj9uWoRoaaoCU0JVZlRVwfs/+EFc1yXPK1ZR\\n\",\n       \"qgtu3WgXxSJjb9THNAx6PZ+8Nuj1esxmM2az2XaTzfNcC3gsnXLU7/dxHc3iCAKdzrJxc7x37x55\\n\",\n       
\"VuK6Lp7n0ev1cDyHLEuZXl2QJjGmIbl+csJyudRqv+MjxuNx19XBbDpnvoqoqobRaLL1aBcd3BKG\\n\",\n       \"PmEvIOz1Wa/XRFFEHEdbXLQfBqi2JQwDZjNNaTs4OCAMQ8Iw1HTG5ZJvvvZN3U0pwdXVFcfHx1tr\\n\",\n       \"46LQ12bebQaO44BSlHXN8ckJy/VKX9elFr5s4EEpJYPBQFNWu6am6Taa9WJBtF7hhyGO5zOZTLAd\\n\",\n       \"jySJKIqCqspJEz3YXK0jRvsHjPf3qeqWulHUTctgMKapa+7fu4/nelRFQdNC08Lrb3yd0cDn6aeu\\n\",\n       \"Mx4OUUoxn62o64YwDHTClG0ym0d6M5Ky4y7bBEG4pfltYhGllLi+h0R13tvQD33SNCaJ1yha6qpE\\n\",\n       \"AGdnZ0Rp2jUTZbdRehweHjKZHGB2mZab+/+1N17ni3/2Ckle8Eu/9MsIafDMM8+wv7/P22+9xWg4\\n\",\n       \"Ym9vqOGnumI4HJLnOYOBngeNRoNt7Xm8Vnz3Ar6Bbauq4ktf/VOklPR7AbQtvmOTJwmqrTuKZGfG\\n\",\n       \"Jx4xYnah5J36uP0/DMP4riyUd9SNcNdu8duZVQGPDTGfPE7sYuX6iXVxl90L0I0tOsL+RqqvjyeV\\n\",\n       \"bPXwQ0mqtkEaBp/5w8/y2c98lp/6qZ+m1xtyfHRElqRYlq1pWUmCNCRlUWAYkixLMA3YHw9BtRwd\\n\",\n       \"7CNMQasUtmkTRwlNrXm1Qgid+I7C6YpRURRYpk0cReRFhmlpw3fX9WlbODt7iOcFWLbdyfIzhsMh\\n\",\n       \"dd1y5+7drgPMuHHrFkVedvQvE4WezNd1zXw+5+jokPl8RlnVBH7AU7eeZr5YUtediMCWFHmJIQ1t\\n\",\n       \"yF9W9MOQ2ewKzza5ujrn+PAApVqatsG2HIRhIqWJaTmYhjYqUm1FHMfYtkmWJlRKn0Qsy9zCZHle\\n\",\n       \"cP36DeI41hxxz+PO3bvcvHZ9i2kWXfKQ3nh1gHRd18Sxpg/Wqt120LZhsFot6PVCPM+lrmvNyy9L\\n\",\n       \"hJSYRg1ImlYxHO5RlDXCsBDoay3LE534U1eIDpJzHJ0gY5gGTV1TFTlZnhEEHqPBkLZtWS6n2p62\\n\",\n       \"0alPAolhmNsuOY4THNvRG3Kl7ROazpPec9wOb25QAoJeoH1zJLiWS9ZBS5ZlkaV558WR7iiTFVVd\\n\",\n       \"k6cRR5M96B6TpkWRl+RFQds22kK5KnEcm6vplNObN8kyXaBH433SrKCsGsqyhC4YWyioG0VZNXz+\\n\",\n       \"85/h5fc+x+HBHqaAfm+gN0XLJopWlGWBaZvYTrA9VSilts2Wvh4tPTRVirrRkXBVWWI7NkKAbRo0\\n\",\n       \"dcV8MUN1X0coZrMZaZpun8uyLPo9HRSuIxJ3TexqPvf5L3B5dcXk8Ihf/MW/ymi8pxs71WBZjq4B\\n\",\n       \"VY3jOIyHQxYLDceYpok0wHU8hNxQlcUW1vhOCMaTXidN05DWCRfnD1nNp5imQVMVBK5L21RIdkNp\\n\",\n       \"Ou9y9cglcYMs6AyBR8VdCMHLH/r4Dx6N0HGc7qaMt7j2ruk5PBpSbi6KJ/11N7/8dmBgSVTT0ABC\\n\",\n       \"KU3UNwxs69GLIoTAsA1io0ZmFX3ToKLl7/7W3+etr73Of/Q3/x0cx2VeF5wvpth5g/B9PN8HAupW\\n\",\n       
\"Eac5Z/fuMZnsE/Z73L1/xs1rJ3z1K1/H7zl4gUfgBVimReAHDPp9jfU7LnlR6JQPAGxMw6Q/CDkJ\\n\",\n       \"jyjLgqIoubi8II4TTk6usVpFmIaD57l4foBSigcPHjAc9Lm6utLQRpHiuy6WZbJex3heQFlVnJ/d\\n\",\n       \"ZzAYELgOxy+8iJR6YHx1dY7vBShLIqTAC32SJCGJU4xWYphweXaHLEs5eOoGzz79wzRNqbnJUsdY\\n\",\n       \"LRdrLq9mZFmO5/rYrottWxwdTUjimMDfJ861Ne56vWY+n5IkCXmW83uf/mdUVcVqpRkLH/zgB6kO\\n\",\n       \"95GG4saNG9rmc7YkiiLSNGU+n3Lr1i0sSw+BAi/Ur1WSscwTjo8PkVKf4vr9HmmWUFU1VVVSJEui\\n\",\n       \"JMU0HaaXF4ChaaN1y95kn36vj2WZVFVJ1bQsl0uuLqYo1dAf9PA9D98PuHnzBnmRcf7wIft7e1y/\\n\",\n       \"fkxZlSyXK5pGcf/+faqqJvBD6u5a9R0bU0Aroa5yxqMRtGAYgnB/jGUbLFcrgtAnThKKsuDhYq09\\n\",\n       \"5j0f0et1xaTl+rUj6kZ1G1xBUZaU+ZqyynAtbW3sOgamaeP7DlmWaTtUqe+Bk5Nj0kif0lZRxGo5\\n\",\n       \"Rxombd1QFSVRkhCGPfK8wDRtTEvTYEfDEXle4LsOq9W6K8YwmRyQpgmraEmSxNt5lOtoa4jVatWd\\n\",\n       \"ArRRlecF2JaGvnzf5+LinJu3bhJHEafXTlgs50jLJk+TLizFIwiCraW0ZTnkmY4kzDKdcKW7V4GU\\n\",\n       \"BvP5gsPjYz760R+l7EJdbNvm1VdfZTQYcnLtGEsK3nrrNmVe8vTTN7m8nJLnOScnJ8xm804oJfF8\\n\",\n       \"d6dZ/M4N7oZ0sXVRlSanN24Shj1uv/UmlmmQ5gWoFtMQXfh402lBHOCRfcimEd2lR2+a2u+23rEO\\n\",\n       \"/DO/90+2roPbwroTCPqI2F5tf6GNDHWzdhkrUkqKDk82NPETKbQYQqCPImVHuDdMg9LVz1MVJZ/6\\n\",\n       \"1G8SrSP+lU/+FfIk0xmOQhAGPWhb8u6GUUISJRlBf4jtOETrFVkSoZoC6pLjwwluYGFaElOa1FVN\\n\",\n       \"nmU09ab7t3A8D9PS5k6WaWEZJnmpj/NN22wHsj/yYz/7/X1T3l3vrn8B19/7+7/JK198BWGZPPfc\\n\",\n       \"c7z88vswLR1evlqtuXnzxpap5Hsue+M9iqLqIDSYTPa4uLhif39/C8v6gccGE7ftR+ZTHWVl25Vv\\n\",\n       \"1Nsb645SbQIhKubzGfduv4Xr2drlTtV6GNrpNTaU5d3CvWHk7c71hPjuocbvWAeuj21sVYq79JtN\\n\",\n       \"0d7gUBtZvVJqi5lvTGDKjed018VvpfRNixKqy2GssaXO68vyDIEBaclZNOdv/y9/h/fefJaf/4mf\\n\",\n       \"JHRcvEGf2eUUr4KzsyvsvQHDQR/XdcgrnZazXCywXRfTkAwGQxxTEi0XvH37HqO9HkGo8d1hb8De\\n\",\n       \"fohQunPUhjzaPrOsa0xp6BgnKXBdB6SgqrSt6rvr3fXu+t7r/PySr37t6/y7/8G/r5OxwpCyrGhb\\n\",\n       \"sCyb27dvc3J0RF3XhGHIdHbF8dE19veHXF3NOT+/5Pj4kDTNybKM4+NDkjQlTZMOumtYrdaMx2OE\\n\",\n       
\"gKKoOphDdUNVpamPbYswtFxfGDb7exPapuHq8lzz5Q2Lusy1VbWQmDshNvCom9+V0j+pZ/l26x2l\\n\",\n       \"EW4MW+ARprQpxrsT4d3B5OaXK7tBz6aYQxeN1g3xUJq20yqlU73LgiKJccOASrXM5jN+8zc/xYde\\n\",\n       \"fB8ffOllQtvXDIfS5WhygCe1mjCtS1bLJXlRIA2LwXDIaDQm37ACmpo337rNZLxH2BuRFQV5uWY0\\n\",\n       \"GnJxNSfwbPIsx5AS23W6DStg6HnUZYVpmMRx1NEpTQxp6hPEu+vd9e76nuvhxRW//p//FzRtw2Aw\\n\",\n       \"YD5fYFk2q1XE008/zfRqSpFr0VASa3hmvV7z4EHE9evXCMOAPC90wPGwz9X0CtM0GI26oe18znA4\\n\",\n       \"ZLVaAdDrhTqYwtq1hVUa6toJOkaaHB9dwzJN3n7rTexegDQsAsuirErEDutll2m325XvKsu/0/ru\\n\",\n       \"LPH/H9eTplMbr4C6rnFdd2tctfneXRHPJlghDMNt162hk84JrOtgDcMAAf1Bn1opvF5IUZXceXCP\\n\",\n       \"//Z//p+4NjnkR59/H55pY/Z9wsGAoDWwy5aHyxmLKsM1LYKgx+TgEMe2eHD/Hg/v32G9mOKakmEv\\n\",\n       \"5MXnX8T1QzBsTMtHGi6XlwvOLy5plcTzA3qDgcb0bIuiKjXtaLUijRN8z8dzXUAfobIs+/6/Ie+u\\n\",\n       \"d9e/gOvn/vInSdIChCYPHB4ecXR0TBAE3L9/X0vx8xzLNEnTlPFoRBStCEKPt2+/jZCCs4cPaNpq\\n\",\n       \"y0oxDMl0ekVdl0hhEEWplri3sFysKcuGqmz1IFUp2gZUC7RAqzHspm4RSPb2DnjmmWdZRzFCSMpa\\n\",\n       \"h5DrP2LbjG6G30/ahvzAduAbiCRN060lrG3bmKZ2sNuA+xsZ/KZQb3jCG1fCDZRimibStDVFLM8x\\n\",\n       \"DAPXdjBNk9ligRv4mI6Naip+/zN/yE9/7OP8+Ic/SrFOqFDce/sutjBwkazjiPHN62BIVldziqrB\\n\",\n       \"dhx6/T7XT46pioI0TXn48Iy6hbpVuF7I5PAYpRryPCWO77FarIjjr3PzxilZEuukes/BthwGwyGD\\n\",\n       \"cEgWxxR5QVkV2LaFNOVW5PTuene9u777mq9WHB2dUBURdd1yfn5BFEW89NJLjMdD2lpx9iBnMh7T\\n\",\n       \"NDUX5xcUZcFheEhVVZyfP6Qoio4GnG1VpQcHB1xdXWKZHrZpkKW6ix8MBpRlQRTF20Lb64XdPE9o\\n\",\n       \"XYcwsC2TtgVawd54gmmavP7aq3i2RhcM+a0xkZsOfHf9wA4xP/cHv7v5GGA70NxIrJ8E8nf+7Xbn\\n\",\n       \"2sArG5+UjUWsUArLtHBsnTdZ1jV5XTFbL/mH/+QfszfZ52c+9JfI6wrDtQlNF0/azFYLpO9S1RWW\\n\",\n       \"0tP2yhKAQVPVqKrEsU2oKyzDxPMDqgamizU1BnlR4TgWgpaqKoijJQYNQugMzUHXhZuWRdgLKbOS\\n\",\n       \"dJ0wmexR1SVSCqpGc7P/yi/+0re8bn/8h/8UIbR5UxiGNG1LUZbcu3cfpRTvec97dB6nbdI2Ja7n\\n\",\n       \"k6YFrhtgWg6rSEuXTcfGtu2tMX5btQihsA1TR0Y1erLu+T5lVZJkGXfv3eXg8BBpW1imZi+4Hbun\\n\",\n       
\"aRocx6FVijQvdPQUCtU+fjw0TXu7WW844HVdc/bgnF5/jG07lGWB7wcdH76k3+9rD5BaS909z0Oo\\n\",\n       \"FsOyqeqG86spnq8dDIXQ4hfb0cNhANvTsxbb0MNsz/VIk4SqqiirSouH2lazFix3S1s1TYssyymr\\n\",\n       \"mrrz6MiyfDuMnkw0tbFRCikNirJCtTVSteTpGtmUGKpiMOjhhD5N2wWUCLPLVoSmbknzgjfefJu9\\n\",\n       \"/Qm9Xp9GCHr9HlIIzYmWYBkSRKuNqtxN42LQVApT6vdDCKirAim0W59h2VxOp0RxxvMv6nDpPC+1\\n\",\n       \"iIUWUwqaStP22lZp10jDJE5SXNvkt//Bp/j5T/4coLQqV0kQFnkFcZqCCZ5vU9U1nulRV42+Xzra\\n\",\n       \"n+M4HYMEmk5kpQBpGniet72vy6La+psEYYg0NGc8TVMcQ3J8fML5+SWWY/OzP/1j33JP/NPf/zxZ\\n\",\n       \"ltMPLAzDZDgckeU5bdNFliHohT7rxYLRaIhA4QU+0+mU09NTZrMrHekXRezv7xNFEYPBgKLIGI1G\\n\",\n       \"zGdrRqM9TFOS5+VOfdL5m67rbotu6NlbZo+wjI7rvW3Nmc0uefutN/BdB1M8boO9Wwd3bWSF+AHN\\n\",\n       \"xNzI4jed+GZtjhHQSUuVQEhJSxdEKh4VccuyUI1O7tATY4VRNkjXZlYk9FWDSiriwGAZGvyvf/dT\\n\",\n       \"/PDhM3zsw3+J0jUQnTnWOk1JzRwzdHBdG8fpkSQJ0+mUKiqwrIDxaI/ewX5XQGdajFMWOI7D0fEI\\n\",\n       \"13NYLOasFgmz2RrLsnnPUy8RpxlRFOP410iziLNphOvWOL0xg0mfycE+cRJjWzZlnnJ4sP8dY5X+\\n\",\n       \"/Et/Qdu2BEFAVuS8/vrrvPjiexmPx0wmE9q2ZW9vj/V6Rb83oqoqPNfBcU2SZM2w53Ln3j2eeeY9\\n\",\n       \"LJdLFIoWi2QdYRom67Lq5g/a43y+XPDpT3+amzee4vT0FNew8V3dQdRCkWURm+g7Q3TWnVVOlq21\\n\",\n       \"glAGnUCls8e0LKq6QilJAzS1TjF56b0/hO+2tI1ivV6zjjIuLy5RQnL7zpvcfOoG4/GQvcNr5GVG\\n\",\n       \"WWqBj21ZIEpGfavjCWs+sus4nF+csV6vmV2lRNGawNe0tMPJPpfnF+yP9xkHfQa9AWVZs1qtWMZx\\n\",\n       \"R32rCEMNm1mWxWw21xho0zI5OCAIAtq2YDDsUeQJUbREVrkOaO5OiYdHJ5iWq0VZTctw0KOqCoqi\\n\",\n       \"oGy0rPyN117nS1/6Ep/85L+MYUikyDkY9UGlmKaFnL5BRAAAIABJREFUETxKbEmTHKffI81SFssV\\n\",\n       \"pmmR5yUCSVlqDLfp0o/KsuBP//RP+Bt/49/khm1jGA1S1bih0YUrFCRFge14WswlDRzf05uv6xAM\\n\",\n       \"Ai6WV3iDPmmacbFYIaWBwGA4GnM6PqEoClariCqvqcUa13UZj8dbEsJ8PqcqH6VmDfoBYagNt9Jo\\n\",\n       \"rXnyjqtteh2HJM+IoojFfEkYhgRBQBD6nF+dIwzB5eX5t70nVF1wcrhHmWc4jsPDB/e186FpMBwO\\n\",\n       \"eeONN1Cq4eYzz1BVFd/4xjc4nEx48aWXSNOCppUoTEbjCXfv3WdyMEGaBrPzJes4YTgYM1/OWC4W\\n\",\n       
\"3Lhxk34/4M6d+/T7fQaDHovVGgyDMAh5eHHBYDDQzUxZd54rCtM0qKqGyeSYtpXcuXMHz+0cCdsW\\n\",\n       \"29Quj1VZYtuO3vTaFim/d3n+rh24EOIU+HvAAZpD83eUUr8hhBgD/xtwE7gN/DWl1LL7N/8Z8G8D\\n\",\n       \"DfCrSql/9m2eV33uD353Sw3c7Dq7CTpbQ5iOy6069RLykb+3aRh6t1NgSEmZ5wjbRDRgGQa1lGQG\\n\",\n       \"TNdLfvsf/iNOj6/zw+/7AOV8TWnodI3hcIjdGfRo4UhN2qkDVdvi+T5ZWpBl+ZZcb9s2Yehrj42m\\n\",\n       \"3qZoADhOQNtAXbcURc0qSpDSwHZsqroE2RDHa6q6pOc7DPshAsWgF9ILA/IsRUrJx372r37L+/GZ\\n\",\n       \"3/sd7dPRiUDKsmSxWLC3t4+UWm5eVTWGlNRV2QkeGuI04c6dO2RZhuXYfOQjH8W0LGzLZrVa4Tlu\\n\",\n       \"56uuTe7bVnHv3n08z2Nvb0JVVhRFSZqleJ5JGIbkeb5Vsz1pHgZaeKFaE7r3uKhKLaJqWx2eUFQ0\\n\",\n       \"LejkFQuJLkaO42A7PgpBXpb4vs9sPqXp1GxIgRAOZaUVi5P9Paqi7LwwdDNgGiZt0+B5Lo7nURRa\\n\",\n       \"IJQmMUHg47seaZqiGp1tuF5G7O3tg2vs+FMIyrIijrXN7+mp5qcniX4/TaNTVxoK17UxDIE09GtX\\n\",\n       \"Vw1xmiKkxXodYcsWgaLXCxFC0e/3uLy64OLhOR/5yEcpO4+MtlWYBmhWmqAVj0IBmlpfe4apFbrr\\n\",\n       \"daRpqba7FacZpsFqteThwwcURcHLL78XUFsRkJRSc5ERqKazKxBaRUhHs63qFtsS/M5v/wN+4V/9\\n\",\n       \"16iqmuFwTJ4VSGmQZTmW7VFVNW2r6PV6FEW2jTzbnIo3bqGbbnLjJOp0YcOmadG2mtOulKJFIQ0D\\n\",\n       \"39OY9eZEvjG1UgJ+/mc+9i33xP/9+38EQtDznG1DmCR6WNnv91FKMZ1OGY1GTKdTrl+/Tp4k9Hra\\n\",\n       \"cuL09DpJkmoTLiHIcn3Kcl27I1Q4nYrW6uT8gvFY2+hK08AwLYqOgCBqhWHIrT2IYQg83936nTdN\\n\",\n       \"jVItd+/eJV4/1DJ+oZWhnuPQ7sjoq7pGSvN7Sum/V4mvgP9EKfUXQogQ+KIQ4tPALwOfVkr9N0KI\\n\",\n       \"/xT4deDXhRAvAX8deAm4BvyeEOI5pdS3NbV9MoRht3BvCkHZ1JjowQCGvgA3fEmhFG3daiNBBbbn\\n\",\n       \"scoTBk6ASkuWqmQdmvzmb/0Wt9wxP/b+D+PtDak9n6pSxEnCm2++jW1bnJxcYzjsI6XAcTLyXHfX\\n\",\n       \"y0XEaKQVYGmakaYpeZ53A48Btu3gecFWNhytr5CGieNoB7zDwwlRnDKbzWhVg+2aeJ6PVZtkZc7q\\n\",\n       \"7gOuX7tGVlTcvvMqh5N9jO8wWn7ttde20IMQgouLC5bLJZ/85Cfp9Xqd4MHCsS2aRg9FvvKVVxkN\\n\",\n       \"9/jID38E1YkS6qamLBqSJMYyTZIkwvN8XfxNk9t3bvPsM+/pLkILKWE6u+wMqFp838eyLF0Eu3lE\\n\",\n       \"mqb0ej183yeKIt1VpTn/H3NvFmzZdd73/dae9z7zOXfsCY0GCIIASXAWKVKUSIqmqIROHMliUnKV\\n\",\n       
\"qxK5ErkU5ykPyYMrT67yS8rlylviSqpSZSu2wyjWQDGS6ISkJJIiQUwkpkajp9t95zPueVh5+Nbe\\n\",\n       \"3SRAKsMDuasaQF9033vOPnut9X3/7z/MFwtzUFsMhyMsSzMe94ljuceD4YiqrrFVICrCLCfNCtKs\\n\",\n       \"wPVcFos5Fy9dpKwKFouFebgLzo5PmE2nuLaD8mDn0iXWqwVFIQERq/WK88UcZSkx6wo8dvZ2DdRQ\\n\",\n       \"YtkWylEEYchkNuXg4AC39qgb2SS3trZwcsgyTc/2qMoYz1X0ticoy6IsBKpaLeYkyUbaYtfBD0I8\\n\",\n       \"z2enN2KxWtHvDVicHWIpuH79OoeH96hrga7+zm/+pnRnfkDTyH1EN9RVCWi09YCloJSkV0W9iOVy\\n\",\n       \"YQQfHlmWkmYJ3//+9wF417veheu6XL58Gd+XTXQ49LAsRVNXD+hptkA4cbwhCCNcz8VybCLbZjjo\\n\",\n       \"STc0X+B4PovFAsf2GAwiRqMJeVZyNp8TxwlJkjAe9+n3JbeyMDmWJycnRFHEcDg0vvg1VVWSJGL5\\n\",\n       \"W5QVoZH/13XNcrmkLAqO1zH9fp/hcEiSZKYKtTpCw49eaV5w6fIl8rUIh/b391mtVgwGA+7fvw/A\\n\",\n       \"e97zbu7ePcDzPI6Pj3nysce4desWtm2zXq65dGmfF1/8AVEUsbe/S78/4PsvfZ/dvV1sz2d9NicK\\n\",\n       \"Q3Z2tkCLQnp7d5eyqjg7P+fS5X3mixWqaugPBixWK6IwwPECTs/O2Nqa0WhNo0WdeuHSJV564QDf\\n\",\n       \"dQDNcNAnS+XQqR7KSdBavQUT/9Hr/xUGrpT6PeC/M79+UWt9pJTaA/5PrfWTpvputNb/2Pz5Pwb+\\n\",\n       \"G631N3/k++iv/emXOhZKK9Bp2094KOC4NpXxQ1iqZVlQN7iOI05iWr6eoVF1A7UmGA04zTb8wR9/\\n\",\n       \"mWKd8oVPfZYyzhhvbXO8WTD0RSnpuu5DasFzXNdjNBpSFAW27eD7Hmke43muwfXE10QpxXK5Mhif\\n\",\n       \"VOWe5xm/YUVRSFBA0ygsYyRVliVllVMUGVVT4zo26+WKOBaRwWQ0oBcFuI7Nv/vv/+Zb7v/f+49/\\n\",\n       \"nb29PVarFdeuXWVnZ4dbt25hWRYf+9hHWa1WsoGu1oSez3K5ZHt7u3MlbCuaKIpEEapMxWiYL2EY\\n\",\n       \"EgQhWVYY+CXqqpkkSdhsYu4e3Obpp58GFK7jMhgMumF0m7bT7w+kakQzHA2wbZvYJIs3NCwXK7TW\\n\",\n       \"jMdTkiQBLKpKEwahVKGeT1U12I6Dsizm8zlFmRP1IlHwlg29XiQ/z7ZJ4gTdNDiO1y30fr8PQF7X\\n\",\n       \"NE3NYjGnrgq0rrCUhW1buI4jSUl+KMpDW9bCaiWfq3iKw3A4MEVD/UBYZrkGN3fAtMJ1XbNYLlks\\n\",\n       \"18wXS7S2Wa7W7EwHDPoRk+mYqipZLOY0Tc3e7g5R1KPIBTeu6wcVuNaygXciN8vt5hZFVbJYLLh3\\n\",\n       \"TzaoCxcuMJ3KDKEs806S3ev1WK/XBEEgqkylqarazJocmkZjWw4NmtVGEoyaRihxf/XNv+RDH/ow\\n\",\n       \"YRhiuz55Xhh82yLPSrGDiHridqgr2ZSLQgoXQw8uy4osy7tZS0sPljxYh6IoSbO8M6CzLAs/CKTr\\n\",\n       
\"yTJCL6TSEq9WNzW/+rlfesua+P0/+Tp1U3N1b7fbO1oSRL/fl7CPSJ6VixcvYts26WbNdDpjOBrw\\n\",\n       \"2quvm8Fkwfb2Nnfv3mUwGIrPi+ty895dHn/8Guenc9Ca4XBoAjVSbEdM8xarJVEvIrBt4jjFDwLT\\n\",\n       \"dZRmhiN2Bq4ruZ+245Bu5hwd3idPE8oiQ+nGPJtGidkVtIr3fPBT//8xcKXUVeD9wLeAXa31kflf\\n\",\n       \"R8Cu+e8LwMOb9V2kEn/LVRlfCHm4HigqWxpN25ZTPfBB6aa2Shz9LGXRqAbLnFg+itBzKDyLY53z\\n\",\n       \"v//hH+ItMj798Y9TKNje28OppNJZx0sc5TIejyjLitFoyHQ6Fdz07JzpdCrmO0GIFzjE8YbDw0Oz\\n\",\n       \"0fWIoogo6jPoDymKnMViyfHRKbansGyLXtRnd2+b+XzFcrkmLzJcz6NpGsKwh2XBcrXGdgPCSGFZ\\n\",\n       \"mtPzBXfurpk+ZKzz8PWLv/hJTk9PmExGNE3D/fv3uXTpElVV8MorL7O9vY3rDtnb3SFLMmazGaPR\\n\",\n       \"SEzvoTMUakVUYoGuKcuCXi9is9nw1a9+lV/5lV/BdW2qqmQyHaKUTRgFoODpp5/m+PiYixcvkmUp\\n\",\n       \"Gs1oOMJ1Xfb29tBaglnDMCQtC+7dO8T1bJNN6uE6DluzWbe5OLYZVleKPC9kAMWawWCIqxyUBaPR\\n\",\n       \"gDSV4dZysWA8GnN4/x4XL16kqmpCP+gw2aqqSLOM2IRIBP0RKNi/cImiyCnylKoq0Lphs9mQZjmz\\n\",\n       \"mUO2jlnNT7h69SrjyQRbWXiOI4cIUiB4A4GclGWRZCWnJ8e4rhzyriXDOdt2ODo8ZntHpO/jyYws\\n\",\n       \"WbB/8RJ5moj/dlFw8eIFqkK8Y6IowvcD0ljyLNuw3bJ6KNhb52RZxs7ODi+++CKXLl3ine98wnSB\\n\",\n       \"HnEc43kOdS10OLGlcHBdH9t28FyHqsoRiUQbINCgLGiqmuFAbCLadv/evfvGXbDA17LpHh4eY9s2\\n\",\n       \"g/7owSDb92lMhXxycsLx8SFbWzumM/OZzUwm7GrNcrFGWZrJZNKlrruO1X0WLeRnW4rJeESVF0S+\\n\",\n       \"dHQ/7tJ1wd7OLuv1muFwaJwP3W4Td12XXq+HUorDw0O2trYo65JNsibJEp545+PEccbR0RHzxYLt\\n\",\n       \"nV0ODg4YjUZUVcMjjzzC7dt32d/fx9INt27d4amnnpSiJsvxfJ/Z1pTj0zMaLyDq91mv1531xf2j\\n\",\n       \"Q2azLTZJQhgEeL7HJk7w7JDZbJc3b1zH9wLyRN5/VWbUddlt4Jb1k7ng/482cAOf/K/Af6G1Xj/M\\n\",\n       \"CtFaa/V2ll0P3eO3+2IURV2STovzta2D+ZmymT+Ul9dRbvQDX5OHja5GeKx1xcrSfPs7z3L/4B6/\\n\",\n       \"/sm/wTjsYfshZ+slvTCi5/oEOxGWZYlhTlVS1WJnOugP6fUuYNsuWZaxXq/AknZvMBgYW9OCs7Mz\\n\",\n       \"5ufnHNw9wLYdZrMtLl26TFGnhlNa8PIrL+M4HqPRGNfxqZuaIBiwWa/IixzP89na3qMqc9arJZXn\\n\",\n       \"4vkOh0dHb3fLODi4y9bWFkdHR3znO99hPB5z6dIF4zURcufOHfb29njt1dfoh33+vb/5N5nPF2jd\\n\",\n       
\"POQpIYrVLEtwXI8kyYiTNa+9/gpxnPLudz9NnmfmvtodzVMpGA57bDYbrl69wsnJGQBvvvkmo+GY\\n\",\n       \"0Uj8x8GiNd+fr9Z4nlgOxMkGpWAxn7NerXj22e/x27/996XlVhZhNCIIAgbDIZskpmkqDo/uS0Xo\\n\",\n       \"OAz6fUajERcvXmS5mNOLtjk6PCQvCtbrmO3tbcLQZzAYUVaVwThXHB2eotEopbvqVKmG+fmZGa5a\\n\",\n       \"vPDSqziOyyhyeOmlH1BmOR/60IeYTid4nkZZkKYplmWxWW+YjCdsTac4uzs0TU2RZoZWesjJySnv\\n\",\n       \"fe8zFGVDGEbYrktVbIGCqNfj5PiEOE44Pz/Hc1xGoxGbzYbVckk/Egc/lCxc33dFsWcpPFcO0OvX\\n\",\n       \"r3P58mVzeJXk5kBqmob1es1gMOD87JzpbEpViqZifj6nP4iwVEMbJtCumyQVx7/cbPpJkqIeovA6\\n\",\n       \"jpiKrRdLLl26QFU1oC0TwFBg2xZFlaF1w+7ODlcfeYRNnFCWYk+7mAvsNRyNePTaIzR1yXq9Jksz\\n\",\n       \"8ey37A6Sk1Qt2xRRGRdmE0JXoQOHHxcsZqsazwbluUYpKcWKZ34/Gg1Zr1fG6dMlSWJ6/RA/DLn5\\n\",\n       \"5pucLxaMhiMee8djvPHGm+Rlyc7+HkVdEW9S+lbPJP2ssIEnn3wnR0cnFEXBZDbjfH4uthujIapq\\n\",\n       \"mM/n9HoiDspzOag3mwRQbOIEJy+kurcUnuvj+xF5FuMFATQVlm1jGTZdZeYGP+n6azdwpZSLbN7/\\n\",\n       \"s9b698yXj5RSe1rrQ6XUPnDc7jHA5Yf++iXztbdc/8P/+M87rPvDH3wfH/nw+7u8RN/3uxAH3/c7\\n\",\n       \"miBdIIPGN4O3dujZKE1T1FQ9hz//1l/y0jf/io+89/0UnkVZV/hxzmTcpwxc1DqjKDIa3YCCwTAi\\n\",\n       \"zwqqqmATrzpIR2CTPkWZo3XNcrHGdT18L2B3e4c0zZhNFXGcUBY5CWB78nrEGMehrjV1VZgAVB+a\\n\",\n       \"mrIocCwbXWvu3r5DELhEkU+v57FeNfj+2+N9vV6P2tCjLly4wHg85oknnuwqjtPTU55//nlGwzGe\\n\",\n       \"H3Bw/5Ct2YzA98mL7KHqZEVVldy/f58kTZhtbXH58uWuI5LK2O04+O2B3WLleZ4zmYxQSqxO0zQD\\n\",\n       \"DYvFgjDsce/ePUmKCQYEYYDr2uzubrNcLrl86Sr9Xo+PfvQXABgNBwCcz5diOas1US9CKdV1R0Uh\\n\",\n       \"9gPr9Zo0TQl8r6N71U3DZDJhvY7J89zMLgIqE3Y9mcxQyiLLEnLb4d69u/i+h6U8BoMBuzu7PPbY\\n\",\n       \"O4nCkDLdAHB2eoJj29y8eYcoinj66aewcHBdm17YI89z5vMzHNclNB3kYCDvIwwj4/PtoGlYr1b4\\n\",\n       \"niWbS1WxvbvDuBihlCbwPOqmYtDv45qwB01rBwF1WdMYA6s0TY3Aq2F3dxvLku7Jc93OslY3UtD0\\n\",\n       \"ez2UhqauWC2XzKYT4lhcIptGoruaRiCqNoNxMBATLt94eLeww9aW2CT3+hFZKvCA43q4nnFAVIrI\\n\",\n       \"FmOmqqpYrwVGC3wXFfgMBmLAVpUNeZrQ6BrHtnAiec2N1pSlFE+27ZDEawLfoxeFuDY0RYaui25+\\n\",\n       
\"86OXS02RrAh7YwmOyNIHnu1FjuPYOI5FksQoJdGBtu9wvjxnsr0lzpKbBavXNuzu7nHr5i3cwCeI\\n\",\n       \"Qvx+wHq9EujUhEyfnp6yvT0lTTPWmzWjwZDlekW8XrO3vdslgwlMmXFyckavJ4XPdDoRMkCS4Dse\\n\",\n       \"Smsm0y1efeW+CeDIgZpnn32e7z73ktkf3/Ztd9dP3MCVrNx/BvxAa/1PHvpf/wb4u8A/Nv/+vYe+\\n\",\n       \"/s+VUv8tAp28A/j2233vv/t3/vYPmbi0G3aSyGCkTaWwjGTV0q2yUv78ZrPB9/3OoMqyLI7qhMXx\\n\",\n       \"hu/926/xiY/8PPuXLuL2I+L5msX9E+IbGf72mMl0ytZkgNYNaZpxcnxM1IuYzWYAnedKmsXUhhM9\\n\",\n       \"Ho+ZTmfUdcPZ2RlxnHTGM/v7F0wayCFlWZPmKZayiXo9+j0fz5MW/+DgHr7vE/gug34fP+jheJ6h\\n\",\n       \"n5Wcn52wXC0Z/RgIRdzlhHq1tbVFkiQ8++yzRFFPKqDdffK8ZG/vArqGl195jatXr+LYiiiKmE6n\\n\",\n       \"JMkGx3HxPJ8rV64ym00oq4ogDLvWWoIh+ty6dYu8SPG9UJ4HS5GmcceEWa1WXL9+g+vXr2NbDmdn\\n\",\n       \"Z1y9eo0PfvCDfOELX0A5EcfHR9iOYOAvvPDn/PzHPkpRVNR1ycnJCePxhJ2dHWY72ywW4kBYlgXn\\n\",\n       \"5+coZbG9vS32rrZLv9/n8P4xaS5MneroiMceewytNVtbW51DnWDYmpOTE7IilhScuqDIM6Ig5D3v\\n\",\n       \"fhoN1FXFfD5nE8fcTVNGkXRlW9t7+J7Ho1cfx7I08/k5/Sji/PwcP3C5cuUKRV2x3mxYL1fkWYbj\\n\",\n       \"OJyfn7O9vcNkMsByxdPC9z2yNCHLUvI8Zbmsqcucu3fv8Mlf+DhlWpAYPvzQQFEtrVY9lOXqejYn\\n\",\n       \"JyekaWrmMzb93qDDsWXgLEKQvb098tZStidmcb0ooq5KNIq6EeYUBivXWrNcLJlOZniuT2KgHMuy\\n\",\n       \"SNPEKJ5d4jgGrZmfn+L5D9z0rPpBF+w4xjq2Kn5kM0FYOlWN7fzwPMu2Q8pKIJWmUR2M43gWdVWj\\n\",\n       \"LHB+DJQQuC51kXMU3yNOxJ99vakIgpA8z1iuzrl69SonJyc0jSaON1zqXTIznSUoi4986MM898Lz\\n\",\n       \"9PohO3tbNHXNa6+/ynQ6Iwo8krX4q1979FHyPOfGGzfY27/AztYWx6cnDPo9PN/nxo3r7OzsEvUi\\n\",\n       \"bt++zWQ8pt8LWa/XbO9sc3p6xnA4xFLgBw55VjKeTMiLgpnbR3k2dZnxkQ+/nw998Bnh5iubf/Y/\\n\",\n       \"/Yu3fe/w19MIPwF8DXiBB1DIf4Vsyv8SuMJbaYT/NUIjrBDI5Stv8331N74qe37bIrT0mYej1Vox\\n\",\n       \"h21LyrVSSihihnJouQ5lXXWGV/Eo4l/90/+eC8MJT77/GRrXwrNsbMdhGPWw4oLNes3N5BxVVExG\\n\",\n       \"YzHK54HrYetC1oqKlFKUldCjkiTB82SQKdRHoUEpFMpSEqTgiaClZUOI3aRwt2UaLxmVtmVTVBLc\\n\",\n       \"oK2GMAy4efsGSmkUDf/Zf/pfvuXz+Kf/5B+itWa9XktwilZkacrJyRlPPf00i/mcOElEnGISuW3L\\n\",\n       
\"YjAY0u9FNE3NY9cepWlqVqslj1y5wunZWacAq6qyU7VmWcZoPOzYNkmS0It6+H5IHAstcTKZ0TSa\\n\",\n       \"2WyL+/cO8X2f01MZGo3HYxpts7UtPuJlWRi/iRGNVuhG2uc4TlmuV7iew2KxwHVdrl17lC55Js/R\\n\",\n       \"GtJYMH3fD4lzqbJa++HWegHj+e4HPkVeYFsWcYYRhmyIehFh6BuLVafDSLUZgpd50dHd8jzFUlLF\\n\",\n       \"eq7NdDoh3qzFz7oscQIfBTi2jW05pGnKG9ev8/g73oFS4jGvbGN/rIUP7DqOHICew3q1wLYstiYT\\n\",\n       \"aZfLCm1w77oxLKyH4EHHleqv1xPvcMuy0A1Yyuo24daOwnEsqroiDEMzJNYSgKAVlsmMzMuigynr\\n\",\n       \"upZEoyxDAZ7v8Wf/9s/49Kc/zWK5JIqEZSUME9sEDHsdi8yy7M5WtWlqiThrA5mNJUbdCLbtOR5l\\n\",\n       \"WeB6PkEYmBSm2kBbFpZtGQ60RZ6mHQ++aRo+8Qufesua+ItvfJ08z/DCiDAMKYqCzUagJBmeekRR\\n\",\n       \"yHK5Mh7pPVw/6NKv2r1mMpl0PiS6aQjCkNVqxe7ONqvlGtuyxC/f84iiiMVSuPG6TfZRCt8RGCjL\\n\",\n       \"Mra3t431bUK/L92Z1kLddMxspalL0BWH92+znB/j+zZlniASToVCntH3ffSz/9+GmFrrb/Dj/VJ+\\n\",\n       \"+cf8nX8E/KOf9H2Bjj/c/mqpRP1+n+l0ahI7FL7rdfhtUzfYysJ25ZRPkwTlOkLRsRS/+7u/S5ln\\n\",\n       \"fOqXPkVGLYuiLDk8OeHQOWGgHfb6E565/BRlUXN8dMTrr71BWZbs7Oywu7uL68pJn+c5aSaJ2Mpw\\n\",\n       \"O7e3t8nznNVKhAu9XsRo1O+goMViwfxEKEyB7xMFoQQbbzbMz84oikJcCsdjer1+hzO/+vrL3Fwt\\n\",\n       \"mG1N8Dyb09Pjt71nh0eH9Ho9LNsijVPB4k/P6PeHvPbadYqioN8bUtcNaZrT6zkEYcC9w/uMhkP2\\n\",\n       \"dnf587/8Jm/cuM7v/PbfxzEWnBJN5XbRYO1ibIUhcbxha2sLgB98/wfcuXPAZz7zGVxXPN3v3z+i\\n\",\n       \"3++zt7fH448/QZKYAAssFotz8iKjNkzS7DjD86QrKaqazSam1+uz2ax4/PHHuXnzJi+++CLacIjf\\n\",\n       \"+973Mp1MqWu4fv0N3njjBpeuXSWKInb29gg86QbyLGF7e1uyLBcSbrtYbrD9CN+3ePTRd8hwyHRv\\n\",\n       \"dV2xmC8oy9oc2B6D4Zgg9GXRrlZYaJJ0w8nRIXcP1uzv7zMdj1BKsU4z1uu1VPlZxrA/YDqdEkUh\\n\",\n       \"682aRktlfHp6hucGpFlGLwwAje/1ZEhW5hRlQS+KaGqJywujAXWtaXRFwwPztryQJJ88z6Vbosbz\\n\",\n       \"Q+pSm/eg8X2fMAxZb8z7X4hiV3Jma/K0pKwqqqamQTJlB4M+tvGJn02nMpw06sGmqbAtxdnZKbPZ\\n\",\n       \"jPF4xGq1ZjKWDrEB6qYi8ANT0Mgz0+ga13WEbVVVNLowWH1FlVcPnDmbhloLa8TzPEajEbPZTOAo\\n\",\n       \"28YJI84Xi64zg7du4HlZk5cN2ClKtWpuRV2XrFYieCoK0XWcnZ2xWhaAY2CjAa7tkKxXVLmwZwLf\\n\",\n       
\"l+fe1kSBw+L8nDzPGfT7pqBrSOKYi/v7JFnKJo6ZbU2kEFIu08nYeOuv2d3dJUsTgYxM3kEQCDxc\\n\",\n       \"NWIfXabCST8+uiNiKQW252IrG9tygZ9RKX1bgZvf/xDL5OF4NVvJKSkRXwK51E1DXkr+YZJnKNfh\\n\",\n       \"u997lm9/8zt8+vOf4/LePmGhCZWNCjwqG0oLagvqrCBY5eROgGPwvqLIybKURgsvVuvGUAalWmlo\\n\",\n       \"xL/bFSpX69uyXq9pKe6d8Eh5aG0Z6t2mo0+JGqsEszm2uYPrzRqtGvzAw3Zt3rjxGv1Bn3/wO//w\\n\",\n       \"Lfft61/7Ei+//IoR8CzJ0pymEWn3crmiF/UF39SaIAqkoisKojCg0Q1FLrmWyhJq3PZsC9BE4QM5\\n\",\n       \"cFuF+OZBruuayvgbbzYbxuMxH/7wz1GVFWmaMxyOpF3NCsMkCLqqZjwaSnWmhVudFQV1oymKkiQr\\n\",\n       \"ZIAaJ6RZxmopm47I60sa04KXZU3gB4Rhn2vXrnHt2uPcvn+fNE0Yjoaslit8T4Q0tYl3a5qKN998\\n\",\n       \"k/FoxIUrVwTzthRZJlTJsixxXBfHcXFdr8v43MQS89VocSfyfU+k14GP0mKPsFot6UURXtjHsR1h\\n\",\n       \"YdQNumlYrVb0B32qusb1XRzXoawqHFvsGSwLyjw3Vgspru1Q5hlhEIocHkVeNDSNksxE+0GwSYsg\\n\",\n       \"xHEsdMS6QuGg6wcpLnmekucZQeiT5/JeNpsN+/t7ZEmBhYtp3MCS9ZamqSTUex5FmmIpidj7gy//\\n\",\n       \"G37jN36DPBean1gcWx3c2aY5BVGPusaQD4wNqiVDY4FHhe1SGqgzj3OUUeY6tovreSRZ2tE05X0I\\n\",\n       \"rKSDAZZlEYURVVnytz7/q29ZE7//R1/Bcz2qUii0QSBpR21gTPtMO47TeQwtTladfL7f75khv4R6\\n\",\n       \"CLGikBQsxyEpS6mFbYc8z9maTGUfMh1DlhdE/R4aTZlL9F9koLj1es3Ozi5Zlpvq26bXE7582Wiq\\n\",\n       \"KieJl9iq4uT4gH7k4RihotIKtMwZnvm5z/xzeg5nAAAgAElEQVTsSelbxVb90KS1xZRbXnIURaTk\\n\",\n       \"DMM+DtBkFY7nUaHRrgu+DN1ODo+48+ZdvvCZzzEdbeGUDa4fUDYNTVURL2M838MLfAZBn8byqauY\\n\",\n       \"qk7ZJGvCIGQyEwVY1JuwWW/ARG6FYSA8bsdluVpwenyC57qEUcRoMMR17a4q11oThi6O7aAcm15/\\n\",\n       \"ynodM1+cdWq0yXiM64uScL1ZEoQOlu2SZSl37xzy6U9+qhuI/eg1Gs746M99grqueP75F7h58yZV\\n\",\n       \"VfH+97+XW7duk2WZCFI8B7uSzdL1XbBrjo+O5UF0PCxtc//olCSreOSRKzz2rieRxVeB1vSiHmmS\\n\",\n       \"cHhwyLPPPsvHf/7necdjT3Bw94DRsE/P9/GGQ+bzOWm85OjwLmHUJ4p6lE3Beik83k226VzWoihE\\n\",\n       \"o2QDLWoWizXb27vkRUmAwrIDXMdhPwjxfI/ZdAvbkTSlo6Njrl+/znyd8Z3nX2Jrd5/+uIfjemzt\\n\",\n       \"jEnTlLOzM7I84fT0hOVyQZalWL0h6RvXeeaZZ+j1IlxXuPhJklFVclAHQUi/3ycKA7a3dkiThNpw\\n\",\n       
\"4xeLOUmcMGcNCpHzb1/EcR3ycoMduFRU1MgB5UUhWV6QxCmbTcxoOBZGT+RTVhX9qI/WDVHQw3VD\\n\",\n       \"As/BsQqUktAQrRvC/gjH8ygyGbI3jRz68UZyI13XQVc+riXh0HW74FH0ez6OJYVPL+jRAElac/36\\n\",\n       \"XS5fuYIb+BRlQZkXWNrCcT2GfoBl2dRVTX8cYikLTQ3KZrFaEscbdrZ38PwA1di4rk/oaWpdkmcp\\n\",\n       \"aZJS1gILVpXwyB1Hmc7aIgxCHNeRwy4ICCyZp8g8S37uMPLFJsPRlFWNa4V4dkijfIEUK3B+TCUq\\n\",\n       \"KfPSjSjLpSgbgp7MEnwDn9qWTZKnWK5AbRPHo65qpvvbrDdrST+yFL0g5O7dA/auPGJSkUJcu4dl\\n\",\n       \"KzP/2TCfnzMaDYmTGGVZ2JkUMVmWs7OzRZZl6FrjujZbW1PqWsKlt7a2yJIUmpq6LCjqlKos6Pc8\\n\",\n       \"bt08oBdEuJZQKptaqNGW9dcX1z/VCrwd2LwdlbBlTFROQ50V9L0Qq9aARaE0KgzIa8lJ/IuvfQPf\\n\",\n       \"cfjA0+/tKsd2EOkYsQbQGejkeQ5WZdgOluFJi6glCOTPOrbbyYM3m9hMg5sOc+28e00YQ0t/LAxe\\n\",\n       \"LgKGmsCQ+lscL8sy6romTVNGoz7L1dywN0KeeeaZDnd/3wff2i6+8NzXOiFOSwlcLBbcuHGD7e1t\\n\",\n       \"Tk5OuH37NnVTgdWIeZCyODo8pijEQEkb06eqqvFcl9nWjPnihDzLeOzao53owrEcelGPvd1dirxk\\n\",\n       \"OBiaMGLxfFktFyhgMpkw3ZqJ4VTd4EcRti0HXPu+0bCJxUgrzwuzgYtJkG279AdDai2dWFEUrFZr\\n\",\n       \"g6PW5HnOeDxl0O/TaE2WZri9Prdv3SZJEk5Pz80MAi5c2GcyGREEPpZtUZY5yeK0CwxpqzDf92mD\\n\",\n       \"ruU1Cce6ruTetJCK40hGqTbCmixLqKrSVFAZaZrg2gKtrVZrVoslO9s7bE1n2LZDkeVYtkVWF1R1\\n\",\n       \"TZ4VOI5PnmYdhdaxhCFjWTK8azDGUmiKPAcappMxYegxHgwIQp8iTwkCH40LloNlKapC7AREEOOQ\\n\",\n       \"FSVxklKU8qzduXfAYDhge3sH33XNM1AZzFZk8jZyj5QFYWjTaMHR43iDbizytMK2RALvejZBIGEk\\n\",\n       \"nrmf7TOudQP6QT6mQDiV+e+Kpm7MIN3DshyUaS9c1zOiOF9mII2wd2S4XvOZz372LWvij/7oywDY\\n\",\n       \"riUpXLaNZTvQ2rXywK5DKYeqrvGcpns2JcvUxmozUtMUhUC8k+kU1/LNYHgX27WM6dyDAWxRiklY\\n\",\n       \"WdVEvojK2j2ovRxjZet7gUDDgBvanJ+dMpuNuXdwl1EvQmmBDS31IOu30T+jiTw/GqXWvuEfdSVM\\n\",\n       \"45TI8+VBSzKCKMIKfKqmpqwrXn/1Ne7cusXnP/s5oRpZCteWyLImz4hTadGHw6EkoXsuXu6Rl6kJ\\n\",\n       \"RhW5uOcJXXExX7DZxAbjFsHOYDAQnNW0k+3wy7Zt4jhmMV+YpCDwggjP9xkOh12ittCr1kRRwHx+\\n\",\n       \"Rq/XZzwe8s1v/SVnZyd88YtfFErcQ14ib3dJ56vklDc+woHv89i1a7I5RREX9vfZxBvyIuPw8FAy\\n\",\n       
\"//YvdEZRWSaKuf5QZO/LxZyqLGjqmhs33qQsS55+6mkm0xmb9YbXrt+QAGAlLeF73v0UYPHYO96J\\n\",\n       \"Y1s0Tc3JiWQL7u7t41g2y+VKsE8xO5EDD0Vi0uhPj++yv3cRUJRlzf2DuziBqCjl0JiR5ZmoBJuG\\n\",\n       \"N998k+OjisFwyGQ8psoyzk6O2N7eYXDlElkuPjWB6wjVVGtUU0tgru93ifZJIpYGrW9M23JHUWQ2\\n\",\n       \"eUl4WsxXFGXcDRN9L6TXD7FtV+irdcnpSYmtety+dZtXXnkT17YYj0ccH3+fT37i41A3lE2GZzuE\\n\",\n       \"rkWlNFt728RxyqAXkmUlWDZ5VoDjU1UlVVFwcPdNgsAzMwmPrdkO8XpJozXLxYoLF/Zp6oq6EWZK\\n\",\n       \"GHnkeYrtSJh1rWV+U9caZdkcHt1DWQ5bs21cX0KP43ViKKM+lmUzmUxFRJIXRsXpkOWxWQc+4/EY\\n\",\n       \"hY2tfHQjHjFVXVDXsjFr6J5dz8CSSj1w2HPM4PThsAKFBF5XVU0DXYYpQJJsqKoa3w8leFwJZfLt\\n\",\n       \"LtsEJpe5wB5ZmgrRwPMpiwLXDykMvXQ0HkFZ09TiNqmbxvh8a3xPNt3hYCKK5nBAXWqyYo1SmuVq\\n\",\n       \"QZLGXLp4gaqRsPCmafB9US+XRUyaPBDKdU6fpmjrRUJxtG15P4eH90iTBM+ziYJAIFpL4Xkujm2Z\\n\",\n       \"2cdfH9jwU02lL8uyO7nhQcJzu3nbto2yFdpgWU0NludSATg2t+7c5o//8Ms88653EzkeFy9fksqm\\n\",\n       \"kqQb13UJwhDHtknSlKOjI4qiIAwCev1WTRlSlZWx+NSdzLfFsGSKD57nMRwOO550W1UMBgNTMQjt\\n\",\n       \"arneUJYVi+WCqqq4dOmSeV819+7d67qBr371z7hy+RJf/I2/LbxrM+RoceAP/fzn3nLfXnrua9R1\\n\",\n       \"YxR0wtopikLcCZMUy36QkN2+jyRJuHnzpqlsVxSFiJAaY6HqODYNFUrZzOdzrly5wuH9E87P550/\\n\",\n       \"exhGRnRh4SjF+9//fu4d3OXs9ITxSMRN29vbjCdTWXSWJUyduqZuKhotD7GlLJbLFf3+EDT4nnDl\\n\",\n       \"o16PtBQpflUJPc/3AizbZjgcMBwMaXRDmuS88cYbjCZ7ndWCUjaWshiORigFSRITxysRIDUVtrE5\\n\",\n       \"2DEugsvlkvF4TFVVXeXdmel7Ln7go5RNFA2QrgziTcJ8Picviw6eofHY29ul34+wLEW8WaF1hR84\\n\",\n       \"OBYEgct0OqEoM+q8oMgLslxa/aqBRlsox+PsfEGcJGzimO2dXcZDD9+1CPygY16NjN90VZXE6w11\\n\",\n       \"U+N7PptEBExFkRsYRYOW4dhqHXP79h36wxGD4YjRaGK6jZTKSObrqkIbpkgURWRxu7F7VHXO/fsH\\n\",\n       \"PP6OR42q0UU1NrYtQdy2oySFRmuUbXV88jzPheqYpx1brO10WqVlu9YdW5gsfuBT1Q/sU5taUrTQ\\n\",\n       \"bTp7Rd3UfPqzn3/LmviTr/yJdKOehef5Qo5wHDzfp24kF7fRigaNbhoswwqqyop+r9etodbiWAHx\\n\",\n       \"RqiynudR1ZkpqoQSOhiIU6nn+7ieRxwnZKmswbJIGI1GnZ7F81zOTk86Uy/bFpaN53pgaeLNBmiw\\n\",\n       
\"lcZWCs+xyLMUx7E7iMmyrJ+YSv9T28D/6s+/3H2YD1oc1X3oYPxQ6pq8KAh7PfKyoLEVbhBx49ZN\\n\",\n       \"nvvus+xMtxhHfXa3dtiYh6a9YS01MAxDwlDsRNshR5mVhukCUtvKfRDPZ/E8aD2Ny9IICbTqhDSt\\n\",\n       \"D3CeF9RmwOq6rjADGhmGSjp6SdNI+zgY9Dk+Oea5577Hr/3ar7GztUUUhvJ6ylI24lSc895uA//B\\n\",\n       \"c1/vKvS2uhHObdX5M8gDLyKNFjLyfaEv5kVBnuXE8YaXXnoRraE/6KNVw9nZOf3BkPVqw2K5pihL\\n\",\n       \"/CBEa1NBWYrxeEKy2XB0dMTFvV2m4zGDfoTnuWzWGxLzPobDkfBkt2cdHTRNE1OJSeWlkEq91xsI\\n\",\n       \"W8jRBkZoiEJJaS9yGYzOZrPuUDqfz2lUgOt4nM/n+J6P78mQcDKZIOyJGj/wqMqCLE6oDNS2NZsx\\n\",\n       \"GA4oChGs1E0tdEbbRjcNWV3Q+n5bygZsPC9gtdyw2qwJQ8HLfd9HNZ5RwAYUZUaRJbiezSZekCYb\\n\",\n       \"fM82JmjCMCnyAj/ssV7FaGWxSXJWq5hbd+7y2DuewA8C+oMBuoqxVfNAVq4sbNtF64YiL8QHvpS0\\n\",\n       \"eWUrwxUWOK0uxQwsyzJev36Di5cuEUZ9LNvG90N0I8+OpawfMrYqjdDMMxRd0DS65ODgNs+8793G\\n\",\n       \"gsHBswPqmm59QYM2A/6HO1OxuxA6nqb5oXXewprtWm8ajcbYDVu2eIzY0p3b6kG4OQo+9guffsua\\n\",\n       \"+NOv/B8G3pQCTALD5TO1LaHrad0yOdVDXYFLFInXj2d8WBQQJ4kQG/KcuqpBFT90CGnA9wM6f/tG\\n\",\n       \"KLvKsun3fIl6NPc1y5IOqgsClyRORIXp2CjH4eTkCNtS+K5DmYvXe12XYAQ8GjkgP/ixz/3sQSjL\\n\",\n       \"5fKHaGvqoQ+r/XfTNDjKFkN5W6GbGuU4LNYr8jznxRde4De/+B+xM5pS5kVn9iMeJgH9fh/P81gs\\n\",\n       \"FiyXS27cuCGihl6Pi/uXGQ7HeJ7HarXoLDdbf+I0jZnP551XS+u0l2VZJzZqGlFHDgbihZ3nOUcn\\n\",\n       \"R7ieT68X4rhi75kkGY5j88Ybb/Da66/wD37nPyeJYwLf5/T0tPOzKMvSYI7x296zVs7dHjctXt7S\\n\",\n       \"yyylcD0Py7bJsgLXdzuOvK1KPMfFAgLf5fO/8isCT9x8k9PzU2bTGUUpcFIcJ+xfuMSdg7v4vo+v\\n\",\n       \"FOvFmvuHR8II6fWYL5dYtnhRn54cd1DTcDhkd3cXz3V5+eWXmU6n7O7udOrSIi9oas1wOGb/iSdY\\n\",\n       \"rzcisCgSTk9PBUNtNI5lM9vf6w7KOI45ODgQ4Ydq6IUOjz/6PvJcMPMWzjo7OzEsEzFI2t/fx3VF\\n\",\n       \"BFRVFZvNBtCUVSk2Csau1LIsJlvCUZ/NZihlc3j/mLOzU/Ks7A508Z6pSGOp3E/PFuIlE4WAzd7u\\n\",\n       \"BYajPpvlEtAcHR0TDQY4XkStHPqTMacn54YW1/C+9z7N5ctXWK3X9KIeZWVjWdLtrFYrvCCgyMU6\\n\",\n       \"t98fcnBwr1svRZVTVjm6loLBtm2uPfooUa/PdDblytWrxl9GDsLCpLFXRWmqPKurLre2ZzRGeu95\\n\",\n       
\"LlVV4LoeR0cnhqIYENcprhvIMDLwRQ0q/yDPM5Ik7tSydVkZOXgpxmGGCtx2da35m28qZSkUbKIo\\n\",\n       \"fLAn0MYuapr67QvN0OTMYsT2rQiqQUzlRIMgeoGiKISqGfjkWcxyIcKaptGsje1zLwxBF1hWjbZr\\n\",\n       \"A1XKM7LZxFy5csVU5wGr9RqwKE3QehLH5iBShskmh3AY+OimJgo98izBN17iN2/eYDYZYyuF5YnF\\n\",\n       \"ReDL7K3RYmT1cFbC210/1SFmy3FtYZQWE3/YpVDV4AYBWVNR0VChOTg44Ev/6l/zNz79y+xtbROY\\n\",\n       \"qbLypIJo/bnbSj4IAlrflHaAKMP91oMcQCrZLMu6g6WlESqDOxfGn3pra6sTNrQHRmvAhW1TViXz\\n\",\n       \"+dwozwQve/3117j22KN87pd/mThujXa8zqymHWq19+EDH33rwOYHz37NYO0/HHjaCira16KwAMsY\\n\",\n       \"yrfvU8RKQegDLUwlYc8YUctiseF8PidJc+7cPWCxWorUeLlkuVoQRT18Yy86nYxQusF3HB65cpnh\\n\",\n       \"YEgUhuRlwdHRCevVmsp4j2RZSr/f6xbsxz/2CZIkFS54XuC5PpbrdNSvdtDbdhZtTmrbXaWFDAXr\\n\",\n       \"qsH3A4qipBf1zGcFjuuYBB9xBuz3+w8SYMxnKD7W8vqGw6Ek8bhSfadpRp5XJEnGdLoNyLBLo1GW\\n\",\n       \"CGc81wyxdavcFZpcYTxukiQhCkPiOEEFHus4ZjE/x3Mc0nTDxT3hlHueawJbTCK5I+2+OP/JM2tb\\n\",\n       \"LmmSinDHVH7L5ZKyysnLFM91sJTFarXk7p07jMZjnnnmfeRlgev6aBSu46Eb81qzHNd1yPPMrBWF\\n\",\n       \"49pQN93vPc8lTlYMRz3TwdXUZYNj+4h7ZCnrRkFjujTbthmPR1RViWPZHd9bWUq8VurKDCWFYFDX\\n\",\n       \"Nb4XYJvN/WHoFDB0y5IgkMSfX/rsW2mE3/rzbwihoBQltza5snXTYLtuJ9F3XHkPlmVh2Vr84oPQ\\n\",\n       \"4NVt8Vfj+y5ogeJae1fLEnhms4lxPXEjbRlV7XBcYFQJRMnzjCAIiDdrijw1XYeF53sEvqRO7V+8\\n\",\n       \"xCsvv8xoOCDPUqLApzH3R+YJdIPYn8lEnnaTLEsxt2k5ku3m3bmwKYv5akE4GlIWBX4U8o2vf50P\\n\",\n       \"f+CDPHHtMRqjdrRdm02Sdk54LXziOE6XsiJOgCGTyQQbqdha7+ooEv+NFmJZrVb4ftCxPfb29lFK\\n\",\n       \"cf/+fV566fvkec6FCxe4fPkyWmvOzk7I84LFeonlyAO4vb3NC88/z7Pf/S6/9fd+i8cevcpmvcb3\\n\",\n       \"A1OxJMYvpZbAY9N+uq77tvesqEoaNL7JA1VKIJ0Gjet7aPPhu65NUzXCLLBt4liEK77hLOdF1g2d\\n\",\n       \"bNvGtV3iOGU8HuF6Po7tcfXRa3iexzf+4uso1VCUKVpXlLUijHzieMPuzhYXdvc4PLzPnbu3qSuN\\n\",\n       \"7djs7e3jBwHvfeYZvvvsX/HEk09y+eIldna2ODs9pyxLcYUrK2xlcX5+TqWt7rkYDgf0+16HSy6X\\n\",\n       \"SxaLBWma0uv1mG5POyimrjRK2RwfH3eqwyAQTP7C/h6PXrsmg+bFgrIU86f1esnpqXQNOzsi3tK6\\n\",\n       
\"JtmsyIuCXtCnyFYM+n3KImVn5wL9ngg5Nol8r+P5qRF+iG/I1miM7wvckiQZtu2yWG5I05RkuSJO\\n\",\n       \"E2jEbnbUj7jyyCUcZaGamqYqqcsKypQ0r1AmRMC2bYqsMOtBnoF7B4e89NJLfOADH8CyLcIoIAoj\\n\",\n       \"ozeQIAvPDEAloEHCOOqqRjdis9tu3q7rMplMKKtcrCmCoCsGtFZEkfi+BIHHdDpFaQtLibhEGCUC\\n\",\n       \"16R5IeZl8znn5+cURY7r2NiWje/7xo897Nbjw5CEZVnUTU1Rtpa6DyjFbSeepTm5OSB/9Kp0TVOW\\n\",\n       \"9IOgU0C3ebmVbnBdD1OhdQeDrksUMiy1bRvPF1WkheQLSPVfi+e7I/sUGra3t7Atp2N/1UVOWcsQ\\n\",\n       \"VylFkkhXnRno1HVdoigENK7rkGw2eI7D8dExvX6f4aCPbVl4jiRgtTkAQSDU5bpufnYr8K//2f8G\\n\",\n       \"0H2g8GCS3TJU5CQ17UXokxQZX/njr3Dj1df4W//OF9iezqjriuPTU8JBn77x7GjpS71er/PBbjfF\\n\",\n       \"ltYkWX8DgkC8CirjxZDnRYdxt16+ICnT7WCs3+9hWUJHXK+Xnbw2zzPyuqLWwtt9/vnn2d/b59d/\\n\",\n       \"7T8ApamKUgYpRh6tmwdJRO2h1dqAvu/nPvOW+/bSs/9X1/K3XUZbWbYD2Ma0fIEn4p1Wzi+bFNR1\\n\",\n       \"afBkz+CfFVLrKCQBRHjpvV6PP/mzP+XLf/xHTGZjtnem3L9/nwaHyXiEhaQgDXqhcKbnC2azLSaT\\n\",\n       \"KWHU5+TkhNAP2N7aIopC5otz8jRjOBiwt7vHoD+kKk1Ki+vhBMJckE6i7N6XuFZWZtPJyfKcqN83\\n\",\n       \"c4OKPCuZzbZEvdiIIKOqSoLAZ7Ve0jRQFCV983c28UaqWS3V/HqzAS1r3PPh+PiU/f0LbG/v4jo+\\n\",\n       \"QdijadoJiRY5te/hBcLAEV+XmrrSxnSr5ODufcKoh6UsZrNtcmoxN7MUuinRdcVkNKDMMmgkn7L1\\n\",\n       \"+shpcMznaBmWiu/73L59m9deu86TT76Lra1t0iSVahYtSUR1jaXg9PSU4XDIZDoVdojWZh4ivvny\\n\",\n       \"Wquuw2mfo6Zp6AWhuYcCcxwc3CTq++zuSscZ+hF1pShLoRc2jUjnHc/v1nCLceumpjJd4Q9vQoo8\\n\",\n       \"z6Sb82Xo6Lji+x4EvsGztelGLWpj5tXUmo//4lsx8D/4fdlHqGtU58fiiIAMJKOyedDpaxSOJSEw\\n\",\n       \"Td3geXLw1caSoxMgId5DCtesKXEkBQRnr2sMSmP+fo1liZd8kggtdDDom7lWQmAOss1GtBG2IxAf\\n\",\n       \"WmMrTW2e2dKYsJlpBZZt8/T7PvmzV4G3VxuC2g5BgG4q7DiOJMnbFufzOSfnZ3zzL/6Sz3/mszRF\\n\",\n       \"RRYnOK7LpcuXyXRF0NislivhpVoWdVnRCyW8oKlrqQg8jygIWW4WHNw7YLlcMpvO6PcHDAZ9kuSI\\n\",\n       \"8/Nzw/vNuHr1UYaDCePxlF6vz+npCUVREoY+URQSRQGL5Zy7B3ek+o18kizj+e99j//wi1/k4sVL\\n\",\n       \"rDcrAt/HQtHIdAIzpcA1/iiu45qW0u+YLz96PQwLtTBKO+xq4Ye6rg0+qfCcgKLIu0pdMHK3kz03\\n\",\n       
\"WjyblVIox6HIa2azGefnC770pS/x+vVXufrIJTbxivnJMf3QZxFnnJ4d0VQNdVXwvmfeS5YlhFGI\\n\",\n       \"47mcnp/jJymXr1ymLioOj4/JDY1qMpVQ55dfeZU8y/n4xz4u841aU2mRe1u2xag/AA1ZnpGma9Is\\n\",\n       \"Nd2JRRSFxm9DmBRVXfLGG28YfDVgNpsYqCagrxs81+98U55/7jne+c530u+JlL3l8rdyc61THr/2\\n\",\n       \"GFleoJTNdDIW/NNw2tMsJU0TsixGxaqbi7RFSJrkvPbaa2xtj+n3xU6hbhpC28N15cDuDftslith\\n\",\n       \"mrgS5tA0DWWtqYuKkrYjtfD8kCzLuHPrTe4fnfDJT37C5HWGpGmM4zqdSlmjJcSiqam1fDZxmnem\\n\",\n       \"U22V6Pkubi8kTVIcpwdo6lo2zLooO0dH23bEC99zOqZTFIQ0tUKpNnyloG4EXmyj0QTyCHBtB8ex\\n\",\n       \"Os1EC4/IPEjYG8ulZKcanmDHxHrY5M73fWzXI4p6b7smxpMxRVniKgyZwKKqayzbRjWa2gRCgEIr\\n\",\n       \"qexDL5IO39ZYFuhG7Eceik7Hdh3QMm+ybAcrcHBdx3wvcAxVsqwL6roiTlYsl3O0htFoZKjRJfFm\\n\",\n       \"zWw260gV6/WaxWIhiUmeR1XmVGVFU1ckSU0bqm07LqCNA+WPv35qG3jbqrUv+OEPrf1l2zZZI94K\\n\",\n       \"uxf2+Rf/8n/hIx/5CO9+11PovGR+ekpjQXxQ4PYjhniMx+MfoieKpHvUYW51VVPUBaPRiOl0YgyA\\n\",\n       \"StI05eT0BNu2eeSRR5hOxTJys4m5f/+QW7duo7Wm3+8xnU6MTLfh7OwU27G4fPkiq9WK7z7/HNiK\\n\",\n       \"3/qt/0QUjWmC5/umKvZQysJ1PCMeetBtPFyl/DgIpdINjZKYuaKuKI1rm+u6rJN2gGJRVCXKeEy0\\n\",\n       \"ByHQwTPtz5MW1qbRDWWa4nsRJycnfOtbf8Xrr73GZDJmvVkQBR5VXeG4Nr2eDBVd22Zvd5eqrnE8\\n\",\n       \"l74foSybskq4svcoB/fuoWtFU5c89dRTWJbF6fExr92/wXQ8Zm93j1dffZUokirlwuVdPF+SX9br\\n\",\n       \"JSCdVBj5DIYRyrKoTTUXbzImkylNo0WIlbZtsy3inSTh9ddfN94c4h6ZZRkf+MAHOpiuqqqOctl6\\n\",\n       \"1DS1hReFYjSWl6zXS2zbI1/OO5OvremYMAzJq5qyqlgu1lR1RZYJ1XB/f5/ZbGo2ImFDqFLjeULp\\n\",\n       \"TJZz8izBUkM2G+GaW5aLZXko36XvhYR10Vk2+J7LSy++yFPvehdZFuN5AQcHd4zbXo5qNGEYYHse\\n\",\n       \"y/ncxJpJukzQi8TGtSw6TnKSbGQwZw4O12DDWZazM9syXRwEfkCSLlnH50TRlLOzM871OVpbeG5o\\n\",\n       \"OmXpRloPltZ8rmmEMQN0qupWcR0EPSzLxnUVrusxGDiiFjbPZ/u8Nk2D5bhmmLshSd5+I1uuVlgW\\n\",\n       \"2J6P1g1V3RibBIeirHCUZWyjrY5ajHFPbJq6Y8SIoMeideRU2CilaaoSS4nQp8hFiPSw8VutS5TS\\n\",\n       \"xMmGIPDNc1XTNJrpeILvBybgIWSxWFDkuUxULGVsDGwspQiiqHsOq6oylhc2gR/8xH30p1iBy5S2\\n\",\n       
\"UdK/tq5+TVHhux51VZKWBUUYEXg+f/H1b5CtYmbXxizOl0yGYy5ffRxtWRR1RZJnVGlKo2y0BctN\\n\",\n       \"SpKck+clnnfWJe64tlQMJ2cLwVsHA6oGev0RYSTm+sp2OLh/iOO4jEcjhuMJcRJTFoKll7rg/PzM\\n\",\n       \"eBvb5FXB8y+9aBLGf5X3vuc9ANRFiefYNHVtNm9F4AvbJIgC8iwjN+KFRgv3tdYNtvNjrDM9vxti\\n\",\n       \"Wh7GFdG0h8ruHsYai6ZqcF3fLMj2ZH9oQGyLRwlo6rI2D3/Jt7/zHZ77/gtMtsVD2g080DWu1jRl\\n\",\n       \"Tb1J2N3ZZm9v3yRwh0xmW/zJn34V2/Xwwx4vvvwKWmsG0ZDtrRm37hxQmir8nY8/ged73Lz5Jrdu\\n\",\n       \"3aLIc8Nfl8rl8uWLXLlyxbADpK3NsoyqFCjBsi2Go4HQrbC6NHbbEajEtm36fck2XK/XfO/b3+aR\\n\",\n       \"R66wY4zIPMuEA9uQJCmB66CrnDjdYNliKBXHMePRGMt2qYzDYd1UFFWDVSrKusQyEWfecIjWkCY+\\n\",\n       \"2SbF0hAvVgBYlsIPAgLXI4szhr0+qe3g2pZAKU2JasTDRNkORVERBQEKTZ4IJbaoS0bDHpOx/Bxl\\n\",\n       \"2ezv7kkwdBabTVhRlDWu54kDoutwenrK3t4O1DVZneMHPl4YkdkO/zdzbx5k2XXf933O3e/bX+/d\\n\",\n       \"07NjAAwwGCwSCZCgKEI0SYnRYomxaFouypFV5SQVJ7FViWOpUpWSaJWrZFGyLVmOo1RsWZYUaxct\\n\",\n       \"0lZICVxAgARB7Nvsa0/v3W+7793tnLgBUB8AACAASURBVPxxzrn9BhtdrnLRl4Wq4fR0v9fv3vu7\\n\",\n       \"v9/3911kKZGhrB5erieIWwH9/h5KKs1vHg+p1SKuXV/j+Ik76LSpsOCy1EyhJBkjy5JCJdWuyMKP\\n\",\n       \"nqdl7L7nUas3UCij0ch0x+wGKPS0XWYlEwM/+V6A7wdkRUm3pXUC9XqEEG99T3TbczrOLh3jOS5h\\n\",\n       \"oLv4fDIhCLVLY6lKFBKhJAIBruasC08iKaCUlDJHCEd7FakSJbVjpnA0k8YRWoWZZRkIgSxLU+w1\\n\",\n       \"DBN6ggJdiKMoJvAC0qwgzxOiKKYscnr7O2RpyuqhZeo1rcrU1EcPqaAwcZCe5+G4wjz03jkT89uq\\n\",\n       \"xCyNSskoFvRSKKiTTVLAoV6r4cYBN2/c4DN/8ic89l0f4O5Td0GpuHnrFllaIFyXRrNBrdHQvGUE\\n\",\n       \"aZYThCHdmdlK1trv97l85SpJMmR+fp6l5QWsrL7fH5BnBZ7nG8MbYXDVkvWNDWoNPb41203iOOa1\\n\",\n       \"11+lVosZjxOUgCefehLP8/i7P/VTRH5AadggnpGRuwbftF1fmqZ4pWZbWIMd+zXgbdWYmfFMtxJz\\n\",\n       \"JfUCz3phpCZfUL+eIApj8kIvmLRoSS9otfWqpMgywiDQVqOuw0uvvMbT33yamdk50jyjVBJPOJR5\\n\",\n       \"iSsE/f6AY0ePsby8zJEjRzl16k6uXb9BuzvDj/6VH+XZF19kY2uLKIrp9Xr43YCd3T2ySULo+YRB\\n\",\n       \"wOUrV5hMtG/G0aNHOX78mKH5NXXE2XjM+vo6X//61+l2O9x9150URcHKyjKTyZjJeEImdc6i5wYE\\n\",\n       
\"vo+UMBwO9EQhBKNkzMbmBoPBgHvPnNZ83yQhDAKGI50RmeUZrucQxwc2CEoUSAlxHBvK4YBkot0T\\n\",\n       \"a7UafhBVGapFkVeUUlmWrN1c48iRVea6Mxq+Mx2n67jkeYFwHMbphO3tbYTQN22r2aAoJDUBVfq5\\n\",\n       \"41HkOWWp4bHROOHw4VXQtwj9/Z6mijqu9rxx9H1T5AXC0+k9Yagf9Lu7u8zMdnEnOowjGSZG/aiL\\n\",\n       \"gu9r9a7Gf11qcaS5+EAuS8Ig4MKFCzz8ru9kOBxSrzcMzKCo1eoGx47IjExem1zlDAZ9ze82O5g0\\n\",\n       \"zUxnXhJFATg6kSuMIjxXIFyPuVYLlGA80Vz8IHLZ3d2uOOd6ufnom+6JnZ0tfX78kDxLjTWtvh8E\\n\",\n       \"ms7nux7CtYZzIFyFMBERSpY4jiJwdV5lkecolJ5004wwCphMEqQsK9KFffBZXD0MO9RrMcrX5An7\\n\",\n       \"gPI8H0fAZDJmd3sLz3VYPHyIRj1mYqYSOMj/PbDlvT1i8h3r6Dt+9T/zEUeR7gTNcihNDU/TZNhZ\\n\",\n       \"744L5y5w/9kHaNSbhFHEeDDk2LFj5HnBcDRikqak6aRiJGSpFjRsbGxUvhea/jfLeByjkJw7d67y\\n\",\n       \"xuh2u0RRTJbmFc0wSZJqg47Q3fHW1pYZjXXnoBkpL/G93/thzp49W+GR9oR4nsdoNKrG74ObX99A\\n\",\n       \"1kLAQj6WMfN2xzS90qaweJ5fLUIjwyIQmDgmWVTS5izLjEjCNR2rR+TUKKX2TCnSnCe/9hSdTqda\\n\",\n       \"wtSimGTYAynJy5zTd59mdnaOBx54gCiKGQ5HLMzNc+XadU6euoszd5/m+vXrZEnC6soyk8kEWUpW\\n\",\n       \"V1dpNxqMkxGDXsqhQyuVsvXmzTVwHOpxzOLiovHunvDggw/SbrUQQsMdFy5cZHZ2hkajQSw8XNdh\\n\",\n       \"a2uXV8+fN0VRT1YYpsV4nHDmzH14kU8/0VBSmqQ06g0muabROY6DErpDK4sSqXIcR9MZ7fkLfB3N\\n\",\n       \"JaU0BcLSRl1arRrtVp2bN28Shjoerd/rVfQye46FgcnKMufYsWOMJwmjZESZKBR61+J5AWEYMzGv\\n\",\n       \"IRyBB5TSZEf6Lo7j0mzVCfxIS9BVDkjKstAWrsLF9Xzj/CdxhMelCxfodDvU4jq1WgPPdRmORhpb\\n\",\n       \"VSVFmeIIhzRLcISDMCZZrueCyllZXuDK1cucPHlCd/tegDIWA2UpyYsRpczN9efjumFViKZFdeYC\\n\",\n       \"ZjzSC+Qsy+gneyj09BlHNYJQQzOu4xN4bhWwAgfakDceURRqHcHIRMONU+0+6ek6YF0OhdDnwvU8\\n\",\n       \"8kIn9SChLHIyWYCUGopX1stfw5CjROdx2lzZOI4rG43piUMIRztPep72iHcdgtBHlZL9/RFZlrGw\\n\",\n       \"MFdFxyl1oEC3DZuFjqah5P9iC7hmfhQ4dhnnR9r4JxkT1mJKJZEoJonkq088yUe/93s5tLTMcDAg\\n\",\n       \"n6RMJhonbHdazAchUklu3ryuucNxg6NHFyuq2cbGBltbG5piWAs5dOgQx44dZX9/n1u3bnH+/HmE\\n\",\n       \"0EupkydP4noCR+psvs3NTQ4fPoRwhNn0623+n/37/8Dq6iE+9bM/x87uFplZ4nRaLe3oZkQLQCUw\\n\",\n       
\"sic9iqLbaHz2/0sp6XQ6b7u4sKIdO3pZAYrtCizP2Rpe2TR5eyFYxZ5WhkVmMTchDCN+4Rd/kbmF\\n\",\n       \"ecaTCa6rp4d+f59Os8n+zjYnjx2jFtVotTosLi5rYUhWgOtw/31n+cY3v8nd99zDxz/2Mf7vf/n/\\n\",\n       \"UAsDsmxCv9/n1q2Cy5MJvuPy4P0PkOc5e70+Fy9fYWZmhnoU4wiXF194keFoyNmzZ+l2Z3nlpRd5\\n\",\n       \"7bXXmJuf4b3vfa9hLaQMhj0ju8948MH7GY/HdFpt8ixnMByQG/x1MBgQRCE7e7vVgyuSBckkOfA/\\n\",\n       \"VwIXF4mkHscURcloNDLdkaReq1EziTS2IGmztTGDvva4DsOA0ajP/GyXnZ29amlsYYXM2NcKx2Fn\\n\",\n       \"f48w1BYPSaKX8LNzC2SZpnmmk4lRE1qjtIgiTcmnLEmRBVHggfBI84k5t05Fz5NKEMdNZJmzsrLM\\n\",\n       \"yy+/zDe/+Sz9/pC77rqLI0eOaCqt6xqzsE5l8SCM+KUsc5LBkDP33M2f/ulnOXHsqPYZaQW4vo+U\\n\",\n       \"AuGijZ1cu9QsGE/GVYMCVPstu4Bv1Rt6WSl14IP1C0rGCYNBT5vA5SWlLPGCoGps3q6QNWqhlqgL\\n\",\n       \"jSF3ZrpkWcpoMCSOQ1zPxgJisOWCIHTNA7vQy1NHkJeSdDyhKHSAjCxLklGC4+l7ajLRexc7MVu2\\n\",\n       \"C2ilrOM4eEoyNHunVquF6wg9MZqdRBD4DIdDswyNKiol3F7EbQMIBySPtzu+fV4oj/8RDjoqqchz\\n\",\n       \"BC5eGJCVBaUjIPBwXJd/8ulfZX52jtOnTzPTahNHIb5z4HyWpikjE7zQ6WhDqMkkRZaS3AhBXNc1\\n\",\n       \"KrfMnIyxkV/brzuMjIqq3+9XjBhbHLXgQ3/Yr7/+Or7v89BDD3LmzBnTPblMUs1RLvO8KuC287AG\\n\",\n       \"N7aLCMOwojfaRdp01yal5N4H3/+mz+2V575cCX1st15BKGYBpJQWm9gUo2lxlA721c6PUimGyYgo\\n\",\n       \"jnn2+Rd4+utP0+522NvbM+nYGZ7jsLO5wd2nTnHnHSd5+N0PM0xzbly7zrFjxygLXXTyvCAvNd+4\\n\",\n       \"3mqwsbnJa6+/zqUb1zT7x3Xp7ffpNFscWjlE4AeUpTR8dk1hS4ZDfM9jcXGRvMh59dVXiKOI1dVD\\n\",\n       \"zM3Osru7w/7+PlEc0mrPYu0OyqIgNF40nrlRPVd7SO/v7XFla53V1VU9/RQlYzNZ+aYoKAXyDfCV\\n\",\n       \"MinseiFYGEMl7btuoT7Pc8gz3XnmWc7a2hqNepN2u43vB9V5CYIAZZWFjnbWk7LE8RztYNjr02rp\\n\",\n       \"MGiFMLmvwghg9Dkr8kLTNs0kqZ0aJY7nGBqhdgjUKT1aWSlliSoLNjY3ePXVl7n3zH0cPXrciJoK\\n\",\n       \"c+04hrpZGIjAsYQQyjKl3tBJPD/zM/87P/upn2M0HBOGMaUEPf0LfM/HcYuKaeK6XlUwLSXQxhJK\\n\",\n       \"KZEThXCUoRfqh5SUFkpwzXnQfHEpXBxjl6uU4uFHP/Cme+Lxz38Gz/dR8mCB6vte5SMjZYkwKVuu\\n\",\n       \"o3MtM3XQIJVFwWQyppQlvuGpC6E7cwBJSRhG2tDNQJNKYX5PTVH0DW3RjfwKpskz7X+jlCQOQ7rd\\n\",\n       
\"tglO0ZTFaWbltC2Gvf+r91eWPPjwf2Iiz3/Ow3VdPMfBEw4qkwRhqEUqYUwuoDcZ8bVnnmZrY4sf\\n\",\n       \"+aEfplarkSVj9vb2GPUHzMzM0O12abdbBKFPqSSDQd9Qj2I6nU6FUe7t7dLva7724uI89XpMkSsm\\n\",\n       \"+z3Wb21Qq0cV9afdbjMajdjc3DTCkRpBoPMOP/vZz/IjP/IjPProe/VirSgMXppXwaWR8diwFDXN\\n\",\n       \"l5WVcMl2cLbo2geRHc1Ho9Hb+oHbFKNpWpYVLdj/7OuOkoTA+KEoKQn8gKzQyTul0rBJrV4ny3PO\\n\",\n       \"nT9PXK/RHwzM5zbGc122NzZ44L77EAruv+8Bdnd2kV7AnXef5rVXXuGhBx5gfX2dWq0GyuPm1k06\\n\",\n       \"nSbzMzMc/Usf5IULr/HsM9/k4sXLmobWaNAfjWjEChBEUc3ALKVmHPgeN9dvsbu7y8LCIkeOHGZ9\\n\",\n       \"fZ2nvv519nZ3ec97HmF1dZWba2t0Ovph02w0CEKPItfqOseB3e1d9s3Dt1GL2d/dIU1TDq2sIIuM\\n\",\n       \"erdNnmVQasy8KHUhDqMaNpZsMpngOA61WkSSjHEcj3SiVXX1RoM8LZiZ6SIl5J7D8vISQmgmQ5ZN\\n\",\n       \"SFOjjEWQS72obzRr1BsNLaQa69SadrvDYDik3Z1BlpAVKZ6r/Xs83yHPs4qWd2ttTecrzi9SrzfI\\n\",\n       \"igypNF/ccQwVF71zKcuc4WjApUsXefe7383KyooWBqWJYVlIXMdcjy6UgOtafxItBivzjDAKadQj\\n\",\n       \"9ra36czO4XshwvEpS8VgMGQ0GpKlB4ZgjuNoeqNwqonTD7RHvuM4+IEPQgu+SmwzEiFLzeqZjHXI\\n\",\n       \"h35fkRHaBPj+W5eqZlMbUvUHY1zPQUhFGGqlaJ7l2pNFKVCSUjhIAaUozINUR5c167WKlWKVkCgN\\n\",\n       \"VZbK+M4Ijanr2mXphMJg6Pr3HBubYGs8VpbagqPZqFcTuYZRvQqSsfe+Ldp2IilN4/N20JE9vm0d\\n\",\n       \"+Fe/+McIqaCUCKWxt3GW49djMiTDLOXX/sX/yXvOvIvl5WXiMMRzHeZnZqvuNk0nFc85CANcP6zE\\n\",\n       \"BVmWaRpaHN/WTY/HY3QQbFB9rZR5RXfKsowoCqufm+c5Vy9fpiwLPvzhD7OwsMBwOKzk2QdCmvL2\\n\",\n       \"LlhoSa+lg9n38EaMy1IG7ULEjp93n33zwub5p//8tgKuT672MK7UZ0WB62lIQJkuV0lFnmV4nmYp\\n\",\n       \"ILRc2/FcnnjyKb7xzDM60Xs8xnEEeZoii4Juu8Xy4iL3n7kPWZQcPXKUvvGc6e3t4zlCR4OZ9+x4\\n\",\n       \"ggsXL7C6uopUikSV2iB/aYlnnnkWFIyTCYdWVrQxWKE9srUHMuR5VvnA1+ox29vb7Gxvs7S4wNLS\\n\",\n       \"EqPRUAuE4ohjx46ytLRcObr1ej18z8VzHZaWltjb26fTaTGcTLSFrBCEQcj8nDbd91ydoen7hrIm\\n\",\n       \"BJmRxetzYiTZpRZw7e/vM5lM6O33GA4HFMYA6q677mJ2do79/R6HV4+glKDZbDFOElzXR7gOwnU1\\n\",\n       \"DbLMyfKMuKYl9rrDz+n3h+zu7dPvD5FFSafTNr4gEcloRBD4JMkIm+MZxTWahmdeb1gLAcdIygsT\\n\",\n       
\"SCwYjQbs7Gzz0EMPUpjp0/5+02Zv0w9+Pb6bgmecHP/k332GpcVlTt112gimQDiaJ+37gdYXTGVX\\n\",\n       \"TlM0LeVOR60Zl0GUEZQVCJR5CB10tRZb9oVfdd8g+cBHfvhN98Tjf/b7pGlKJhXNZkMrTpXEM9OI\\n\",\n       \"/d30VFFQlgWeESpZuMgWc2n54wd1CsmBUM4R1lrDNf+Zz916q8uimpDzPCMMfRpT/PXpmjBt6FVx\\n\",\n       \"v123ahxsPXAch9P3f9d/eR04pgvE+NCUShE36qSGV/zqs88xPzvPu77jIZTU0U+3bt1i7cZNwiAw\\n\",\n       \"KeQNut3ZqvgWMq3I/0ppEvxgMGA00vaQMzOzzM7OU5Yl+3t9g0f5uJ5mo+ixEqxp/+c+9zkWFhb4\\n\",\n       \"vu/9CIdXVxkMBlXBtjl+9kI9KMjubU9PezFb6MLKh+3XLVY6/SDV6s83H1ZqP/2gsBFm9iFhT7pS\\n\",\n       \"CjcwnFI76uc5RVpqDFMoKB3+4vG/4MSJk+zu7hor2xGqKIjCgHarxSMPP8zS4hKTUcLzL7zAvQ8+\\n\",\n       \"hOM4tNstrl6+zNzcXMXi0IyJw4yShJMnT7LR79Pvv85jj93L7k6PT3/60ywsLJKmOcvLK+RZrgUf\\n\",\n       \"jgeOwgGaUUyjWWewr31Yao06e/0+61ublGXJiRMn6HZajMZjPvvvP8fK8hK1WlzBJAq4unaTsijZ\\n\",\n       \"2tsxwinF7MyMzoscDDRsJhwKVTBJTHBuEBBOLZEcR7C2dpOXXnqZ4XBAZBzoTp8+zZ2nTjI/P0tu\\n\",\n       \"PFliI2WXqmQ4GBKGgY5Qcx2zFMuNak8QBgHpWGsAdDTeHrfWNlhYWGR2ZoZBf2jS6TU+7Qc6xxLH\\n\",\n       \"JUk1TLV3+XoFU8zPzhCGEQsLC7rL9jyisM5gsE8ca98eWwwC362aEoTA93SAcJ4dXJvC9yuqIAgm\\n\",\n       \"k4xTJ+/kytWr3Ou5ZKqkXquxv9ej1miCKhHK15L/8sDPvig1pBX6AZGvE3ekLPHqPnmWalqf0MAR\\n\",\n       \"QkM+2i3QMQ+XAhdQ0vJe3lpS7rkabpV5iuuUuJ4JJXdd49QIvtkRSOlQli5KaStc0AIehUC4Lq5z\\n\",\n       \"EGFn72vHEyiJdh4sdbdtd1plKfE8HSYCepqTBhpq1utm+i5wnIPkMd2kSYpiUn3mFhK1E3m1y5AH\\n\",\n       \"hIi3O75tBbyUBbLQeK1nsLNJlmn6zjDliS99me/54Ifo9/YojCXpyuKCMTXSktUrV65q/DiKqdVi\\n\",\n       \"HN8sNByHPC8rBoYOI85IkjE3bqyhJJXfr8a4JyB0Yd3e3mJjYwMpSz7+8R/lnnvuAVmSZxnzc7Mm\\n\",\n       \"307SajZJDKvDEQLPWFLmWVp98NOduGWhTHfgVlBi/256jHqrY3prbUcr+/S2HVSWZTiug+PprsN3\\n\",\n       \"D9gvYRjiKl0Uk3TC5/7Df6DT6bK7s0tZ5CjpaM6y4a0fO3qUxaUlBv0BoR9w/ORJNjfXWVhYABSH\\n\",\n       \"jx7h/PnzHD58mFJpUYjEY7y3z2vnL7B46DCD/pBP/dw/4PkXX6TTnSXNcp59/gUajRZxVEcqgesI\\n\",\n       \"0iKvFrTj7V3yLCWIa3Tn5tje3mJ7dxdZFFy9eo2rVwt2d/fodDpkZcHpEye4fPky/UG/Wu62Wi1m\\n\",\n       
\"5+bpdtsMBwOuXb1GmqYcWV0ljGM8z8HlIDgkzTJGw6E2IRqNuH79Ojs7OywvL3Hk8LtMEHW98oIf\\n\",\n       \"DYd6JPY8At+j225RKkUUzQIlUeybsGCT2q40Nl0U2iwqGQw4//o5Wu0Od999J6Nhwmg4QgjY2tqg\\n\",\n       \"LPUyVZqRPssy8sIYMwURea7pb5s7u+zu7iPLFzlx/Dgnjh/HdQV7u9ucO/cax48dpdVuEYU+rkMV\\n\",\n       \"YG09qx2zjKsYUVlGFOkCXa/XybKU+flFnnzyKTY2NlhePkQUBMzOdkDppHad26GxY3t96kJuMmzL\\n\",\n       \"AlUUFEXG5s118iIn9DzCKEDL5g39VTgUpfYpB6gFmg0kpTSahTcfjtCTXD2KCM2C3jFQln4HUBYl\\n\",\n       \"RalhE4S20z3o7EX1c7RLZVE9wC3GXRQF9XpoKJiGO++6uK4wNMmJKbT6Z9XrddPcWZGiwjo/uq5v\\n\",\n       \"7tv8NijUptnbGmAbvW91fNsKuCxKXDPSlwqSyZhmu02a5nzl8S/h4rK6tETdc83iMaUodWcqhKhu\\n\",\n       \"KBQUueaaamWeIssK0333kbI0kucGruvRqDfJ85I8T6vkjNzkU/Z6PebmZnj44Yc5fHi1YnEEroOS\\n\",\n       \"esPseR6qLBkYTF0okFIngkgpkWjjdjsaaXeytBqtrNzddq1we0G2I+1bHVauDFRK0+mFh/anjhGu\\n\",\n       \"AEfhCo/SeCvYaSHPc/r9AZMi49r160RxzKg/xPcDlCyZTMZ0Wk2U8YnY2dmhUW+glNCCDNfh4sWL\\n\",\n       \"3Hv6Hh3aurRIYRRtjqdNpY4cPcq169f51V/5Z6xvbgIOy0uH2N3bo9XuMD87z/rGJqfuOIVrOi7h\\n\",\n       \"OBRSanxe2+4xSSdsXbvGcNBnkqXMzc4R12pEAczMdukPBly+coX+cMQ41dt+rQwN6bSHvPDa6zRq\\n\",\n       \"PgsLCywvLVPkBbuDHju9PRbm5omMh0c2SYnjiPn5efb29qqH4qlTp8yCyiFJEiaTMe1WC6QkHY9R\\n\",\n       \"RYkf+Gxvb2tec6izL/f29+h0ukilF3a+Z+TdjrbfLcqSSxfOcfjwKt2ZWfb3+pRlThwFTFLN397d\\n\",\n       \"3UV4Ltdv3EBKRac7w+bWDgqIa03a7Q47W+sIVVYuj6+9fo6r165Ri2PuvedufuAHfwhZ5ly7fpO7\\n\",\n       \"7zxJbhgr1q9k2kfHWtQGQcAXvvAX1GpdwtDXDZEL6SRjPJqwsb6G42iKXr3WQEoFjovjOtU1af1i\\n\",\n       \"tIeJLppBqKGfWjOikDmesGZsOZ7bJk0zlFTEcQNZlgwHA5qtCPPc4+2Q3jiyOZ4Sx3hoG3NRMxFY\\n\",\n       \"GwHt0KlQqFIile6+i6kJ2H4GjuOA0hGLOqwiYjLRtafRaGhPpNJ6nGs4RUpJWUi63TaOq+EyKQ8g\\n\",\n       \"En2vgpQZ3pRRlX1Ne9jzouPmnG/JQvm2FfBaXCMvJYVSSAVxo844Tdnd2eOVl1/mse9+DEfCzvYW\\n\",\n       \"QRjSbDaN3FkZwxhN14rCmHq9QbcbkJZaGDQea4bH0aPHCAKPW7fWuXHjBukkI44bpsPVT71z586x\\n\",\n       \"v7/Pmfvu4f3vfx8nTpwAVOVgGEY6H3JoQpbtU9MKcsCpiq4dgcqyeMvxCKhohMBti5/pTfTbc16j\\n\",\n       
\"ahQDpgQOB9a4vu/j+m4VeeUgKJVkf3+fer2pxz8l9ZShFFIpjWMiSXNtJl+La3zndz7ImXvOsL29\\n\",\n       \"w8VLFzm8epRSlsRxyNmzZ3j15VdYWTmE7/tcvHiRE3ecZDIeM7ewyJe+8gS/+Vu/Ras5S6PZ0osh\\n\",\n       \"pfjAY38JqwLd2drm6aefZraroY1ap2Xee6BtNR2HmivIshSpFLks6A/6NFtNkmSAVIpH3vMejp84\\n\",\n       \"SRhF/NIv/zKO67G9u4vn++wP+oySBFTOxWvXadTruI5Lo1ZjptNlkhbMz8/TbDQogP1hws2b62YK\\n\",\n       \"2+bYsWNsb28z0+3qJaLvkoxGnD93TusDanVqRj7u+wG7+3ta7NKsm65sXE09OrRAsyxq9YitLU07\\n\",\n       \"PXxohSSZMDenMzSfeeYZQKsoz5w5Q3umy4+srKAcVwcxS4Xnh9xc28D1fFQxoRZ6DIdDbq3d4tKl\\n\",\n       \"i2xubRMGPtevXeW55+a5//77WFpc4Oq1axw9vAIcsKLszmU4HJIkCRsbG1y6dImPfOT7cZ0GeZEa\\n\",\n       \"PrPLytIyL7/8EotLp2nU6iYfNSaMIjAQXlGUZprUIRPatvjAu0cIEBICJwAhUdKaj5U0azUc4ZKO\\n\",\n       \"M1whWJybZ1KMqvvh7bpRawdgGyKLtReFhoCE4+IIUMrClRJVAI7+XlVqurK9r+w9ZDvvuF5HypLZ\\n\",\n       \"2ZkKjrWvaVkv+r52CX1fC7eyDKkKXN8FITU2X0ryvGQ8ThmXE8LwIFJSO0n6twXb2ObsW7kRftsK\\n\",\n       \"+GQ0ogCE54HnkSQj0qzghReeZ3ZmhpXlRYo8p1Zr4DiCoTH+j+PY5FV6ZuOr6A97OEKQloJmq4lf\\n\",\n       \"arvHnZ1dTf8D4lpMGIWAYG9viysXL9JsNjh16hgPPvCA/j7fI5uMtVOY5+IailOaTypmiOV/WjaC\\n\",\n       \"VJr9oFVdumstS6fqlKcx+Wl+rL1Ypsn7053AWx2FKfjSLGeEo5VkruvhefpmHPQT3Q15LqqUCGOY\\n\",\n       \"1Wq1yc2CyQ18zp87Tz2IGCYj3bM4gnQyZqbTolmvc3hlleFwqEMa2m2GozFSKbY2t5kkE5ZXltnZ\\n\",\n       \"3WH18GHuuvdu1je3EI7Hv/w3v8O58xeI6x28uE13fonZmVnCKCBLU+JaDFJx9Mgx7rrzTjOCSwZp\\n\",\n       \"j73dHdY3NkmzlFa7RS2uEcYB22vXGAz2EUjas3W+/7EPs7i4iG8ZP7LkPd/5EE88+RRz7SbD0ZDR\\n\",\n       \"/p42vCpLxr0B/+RffZrrV6+SjEb8+Rf+nCdfeBElBLVGg6XlJWqNOh0vYOXQKuzucvnaNY4cPYrj\\n\",\n       \"uly7tUZRpMRByOLKovbKzlLW+lvkaYErBK1Gk4X5eTzHYTxK6N3aZtTXXuROPabV7SCUIs9ydjZ3\\n\",\n       \"eNd3vItkpOXyw+EeZSnptlusb27ywcfeR3dmhmScQp6T5WN8YVhbjuDE6hI4DkJIHCRZ1ubo4UM8\\n\",\n       \"+t6HAcX6+jq9/T1u3Vrjq089xfqtWxw/fpQ77zjJ/fefZWlpkd3dbU1ndAVe4LF5dRPHdfi+j36f\\n\",\n       \"gQKHWuxCQZ5BFGsVa6fd0YEbfkBc0wlSjqeFVZ7nIqVZgmIKtrDBxMY+1qiu7UMNBK7naRk5JW6o\\n\",\n       
\"cfpMZlVEIOrtEjGpdguFYX84BkKlFCB0xy0NtRKM2tHT0nqE7tcdBcLxdRgy4Lh6iV2WIEzA8P5+\\n\",\n       \"D6V0DGGaptUD0DJOwlArsEfJCNd19ERZ6GI8Hqem0RKGA041pVuoZHoCmG7g3g5Otce3rYDPzsyS\\n\",\n       \"Fjml45CXCkRBuzPDc88+x4c/9CFQJqF8NKTb7RqurKbS6TQT3UnEBs+M4xhSyauvvkIQ+LpjDzQm\\n\",\n       \"NknHSKXhgSeeeIIwDPn4xz7G6uqh6mmXpWOUNEn047L6QJVSxsjGeJbIqdDlLH1Tzt8b6UF28Whx\\n\",\n       \"8OkN8/Qychrzejs3wrI4EP/YnxUG+mejtE94FIYgMIIcD0cqkPoi1kyIgmatzksvvkS33Sb0fITv\\n\",\n       \"0u/1aLVbDPp9Dr/rXcY3PMF1M+1v7ugg2IX5BQaDARcuXeDs2bOcv3SB1cNHyKXiZ37673PX3fcS\\n\",\n       \"NdrcdedphFej1WrRH+wTexGR6+MIBy/0QEBeKEqz5Y+jCG9hgSPHjvHSy6/QHwzY2NqiWYtY37zF\\n\",\n       \"X/3R/5rTd50iyyYs1ef0BJQkun+P9wAAIABJREFUBIGnHSG3t/EFyDwlS7QQJ/B9lOtx+eo1Nm+u\\n\",\n       \"0Qxj5lsdXnvpZXAcGp0W/eGQ/nDEVm+fr507j+O4tLtd7n/wIc5dvMja+i1muh0W5udpzXYRnsu1\\n\",\n       \"m9dxRjmNWp3IcVlYWqLX6zORKfuDPl/+8hPUmg16g76G4fZ7pOMxjUaTRx99H1EUMb8wz2Aw0OKy\\n\",\n       \"qKuvmSJjbq7NzvY6WZqwsnKY/d1dZucW9VI+TUmHY4IoRHguSTLC+rzb68nzPGZnu3S7bY6fOE6t\\n\",\n       \"9iEuXLjAFx//C77y1af4s8/rLNYf+7G/ius4FHnG008/TRRFvOeRR0DBaJToYAMOpkMpFd/93e8n\\n\",\n       \"GSfmvtOWExbnthzuMPRus4W4neMstNLTsrCm9kSOf1CibWlXpV5q2iL9lofQalbfBJ/b+6s0y0NV\\n\",\n       \"vbbBlYWGF5VSVZKiQIDSwd0C/ZpKoaccdcCr1wKuSdWIWefFdrtz237L1gPfO7j3LdXQ3vP2vRZF\\n\",\n       \"YXIB3Cp9apoS/K1w8G8bjfCpL/wBSZYS1WokaUbcaPK7v/d77O/t8z2PfZDID81CJTAp22W1CLTj\\n\",\n       \"kv37fl9HrHleWHmfKKXpX2u31hiNBiglOXr0KHefvpsTx4+TjccV9Uln/jmVsAYOCrXnaZP8sjhw\\n\",\n       \"TQSjrjQmUXZxZlNxLN48/WeLZVloxb7GtBPjgSeCw+n73/emz+2lbzwOHFjv2uXWNKtFy7D1Re9o\\n\",\n       \"TzVAUALSEUgB/eGI3/yN36BmEkmcUGPljpKcvvOUNp06dYfuIkrN0W+021q0oczF57rcuLlGWpSc\\n\",\n       \"v3iZP/6Tf0e92aUzM8/S8iphVAPXULWkxHEEge9rvw0EspQ6SsrR00IuMzS9rGCSTQyXesT+7jat\\n\",\n       \"Rp2P/5WPMRr2aNTrjHd7+MYFbzAY4Idalv9L//gfs7G5UQmZBsMh6UTjvQ/e/wBHDq2CUnzjmWd0\\n\",\n       \"RqLjIny9PO/OzrK6OMurr7xOXhQ0mm2UgJWVFWZnZ80YXjLs7WsvnahFvVajkJLN3V0kilE2IclS\\n\",\n       
\"yHme59KkNIlj7luJKBWZk9hMNrgkyEKy0LESB+cjpW8KttS0hDQvaTW+9nXvbEJMLyRm4I/oye07\\n\",\n       \"Kc5mRR6Pk1l/OvBdR4tIqfkANhjaYPuIx1qDB9p+3AKrvQpl2GsuXSJJU86dv0DTtHg9xRnDkSNH\\n\",\n       \"OH1KmBUPDvaZ5hM2NzeZTuUEOJtN2NzaYDqdMJtMaVuPtblALlHcMeRpRucc1bIe9A07JaMyrFZS\\n\",\n       \"2ykLUUz3vsO1gi+eTmcYY4dOUNOvu36xa2pmHEWC4Pg752PhLsV1jqZuOLazzTd+453c/9BDwnpp\\n\",\n       \"otqV90wmU3b398iznIcf+Qr//u3v4K1veTM7R7a5eOkis42tWGtpWK5WBOOpY+fiqVOnSBLLbDJj\\n\",\n       \"f3+X1WpBmmZ458myhLYV0rJjx45y/vx5tvMdsixle+eI4Pmbht/77d/iS1/6MkePHcdYy8HBLuDZ\\n\",\n       \"2tqgays25xsUZcqxnR1e+tI7+trFs+GhFwvBSudl3iOKNCL2IYjylh3mizrVcR9AX0uLUF8lchuf\\n\",\n       \"HPoNIdIg6CahJ7S2bWli4KnR87hwqR24fUevE00DNXXurRv6PMYNZM9mz+nAjTA5fQq4A/j1EMJn\\n\",\n       \"jTEnQwjn40vOAyfj4zMcdtaPI5H4M0x3JSCqm5heDUMB7lrIjNeBSSxnz55FeIKlkLNYLXsMdBIH\\n\",\n       \"NslS9hcHdE4EBXDK52wEwhcCdSv41DQ6NmONSGpFUQhNq9g0xXSCUjeS4JUJYiI5k+/oOlkcUtjz\\n\",\n       \"cYdPSNKEpl5B15JQ4F3H9nzKva+7SyZK4yARx57lGdPJlCtXpLHlX/7qM8fsjhed5sqVK6wWS1Yr\\n\",\n       \"qXC3jTjJNM8oi5IueLxvSRPZmLS4kyQG57sYaQvTY2oEF24IgtrIE1Hyzg3WJtR1S/BQ5CK5Jrjg\\n\",\n       \"BJuJYw8+9Dlt+Y4SYWWMTjs1yZAuiffQR5DdkEoJIZApjwwy5sTOgCzNJTURhE8dDElexkgrOjIr\\n\",\n       \"fQRd5yXfbsEHS9dBVgjD3P6qocwKylKcN0G6cnd3H5YieS7SfCqEnKWppGmMZ2Njk8mkYHNzg+3t\\n\",\n       \"LW47c4bbXnSGyXRCFnO/nYcsLyU9Y6SDL2DASColTVI6F6idbmqePDfU9eHmIde2mFG+V7DMvpfP\\n\",\n       \"06akqqqwUU/T13JqoXO0bU2SWP7q97+F1WrJJz51PztHjwvkbyZScZPJhOVKmB53Fyv+22/9Dq98\\n\",\n       \"xcs5dfIE3/iyl9E0FcZarly5wvHjxyR48YHJZEqaZLRNC8EwnUhh1UShZ3BkWUFVNSyWNUd2UroA\\n\",\n       \"Jsn55Kce4rHHHmN3/4Djx49Tt8KUOJ9PKcsC1wiufzrZ5MW3385rX/MaXAdFMenFgq9nRSmbWhpP\\n\",\n       \"NYc6Hhl4ZoiPnXMUkb9GA40kSWK9o2UymTCZTA515g61pUHGUKkR1DfNZjPmI5STIlDGNAjKdmiM\\n\",\n       \"wZrQQwV1vRRFQesGWTU9hd3I/iQRuAfuMsZsAe81xrzpmt8HY8yNSqXX/d2/+vW3A3L8ves1r+Su\\n\",\n       \"176qbxNeLpf9ZN2vhIb0ypUrbG5v9Q62bQVnubGx2Tc/tG3HfC6pmfl8hncdV6+oiKvIX+ED1liS\\n\",\n       
\"JCdJkyg6G6PG4MmyIjqS2KThO0KnajiiqakiuDphgvdUdUW9XGBTkbRKbEGSZVJZzoq+Qy1LMqp2\\n\",\n       \"SQhQTApccFjr8Y3j6uqAjfkGTXN9Csnv/a7vILEJRZbjXEfrHMWk5JFHv8L/feRRLl+9wv7BPtVy\\n\",\n       \"RfAOEJ3LvChYLiq6VvJz5UQccttW5GlCYqSZyRiLsa3wNHYdk8xgk4wQPGkSmE1K6moBIm8h6ao0\\n\",\n       \"hVggdq2n7TqhyjVQ5tLaLwXJoTXbRAfqfSdKRQGC68jzRLq5YjcrKMQzgBfOGe89lRP0SJENGoME\\n\",\n       \"sEkuTTjWoLGNjSe7IpWcdd10/YkgsZGjOy05OFiQlTlJXkCAqnMkkRXywtUDkoMF5y7t4tyjJOln\\n\",\n       \"8K7CEDhz223cfdfdnDp5MnbAJuSZIJekEGrIkgRnXWR/sLFYHXrYoY6Jj2MhcUI81cRTSGqHHKtr\\n\",\n       \"5CQjqAsvJ0aCcN6XBc7VpAR+5Ef+Oj50fOqBhyjLCft7LbP5nLoOlOWMg+WSNEnY3Njgw3/4vzlz\\n\",\n       \"8jTGptx25jQGWFYti0WDMRJFlkXBaiU55Ukx42B3GVM7E7w3BJ8SvKWYlBw/foonnnqaxWLJe97/\\n\",\n       \"XggiY7izc4yLl55mtjljmmekWcDQkOUZmzNpALrzZS/HNZ5Vo927z06rqgFB5wc9Sk1B6YY45oDJ\\n\",\n       \"I30tMUrXKDmNr9cctiKUxtG0vr9wsGd9oOmcY3d3Nyp5DU1c3vs+hanSdEPNxVPEwnTfBZym2AQ+\\n\",\n       \"/okH+PgnH/wToVBuWMR8xouN+YfACvjbwHeHEM4ZY04DHwohvMIY8/NxQv5SfP17gH8UQvjYNe8T\\n\",\n       \"3vPu30RI6geKzs7LEVhpHYWhrIjV3Uza3uPN5rlEIdP5nP2DXUIIlIVUnPMYtRikmDQu+DnXxWhf\\n\",\n       \"dm5No8hxyFMUeX8Ul0ngadrYdhuE40LSNYGmrWUzsCP+5jTFhbirB6WpzMhjq2xi0z4CrNsV2NBD\\n\",\n       \"1nwQhsPEJtz7xh98xvh/5lMfJHQB34WolZgP+dckgcTQ+YBrG4xvyDNB6FR1y2OPP84TT56naVuq\\n\",\n       \"umFxsCQY2Nna4czRY7SdFFq8BTB4A3v7e1ENqWF3f1c21koiRDmYGaHuNbYv5vm4qHwn95VnA1QU\\n\",\n       \"JMohNkf4mAZT9AqGnstDH/cwLf20AB32UGSUWElb+c5z7TqvXYdlYIJTFsbgvdAbxFxmmqS4TDca\\n\",\n       \"4eYxxuDqRjYpE3Ct6xEGqfHIcUbSTqJI34loSBEFGeJxez4tIQjVwtbWNvP5lNlsxomTJxAVnIFj\\n\",\n       \"XKPEcRefbnzjln7nnGi7lgUhyIbqu4F4KxhDVTc4D7/7e7/Pgw89JMcxLJvbOyRpJrA/58jSTOia\\n\",\n       \"r+6SGOFbOXnyKHd8wzfwile8QgiuHn6Y206dYrFYgBbY1AFF9NbTFyRd9eS5czzy6Fd49LGvihpU\\n\",\n       \"lvYCzsE1YDpmGyWtq5hOcpqm4tTxU5w9c5Z77n49+JTQeTob+kgY4M5Xfesz1sQXH/qI3MNExlhP\\n\",\n       \"gDpn9LGe+qy1kKcEH7A6H2HIaUfnnUdx8kbppu1h/nc9CY1JqopIAztO02itSdPD/SZhhhSZzk3x\\n\",\n       
\"gfTBq2LZv+WNP/inK2IaY44BLoRw1RgzAd4M/GPgPuCngX8a/313/JP7gP9kjPnnSOrkTuCPr/fe\\n\",\n       \"EonEHGbMV9qkJEmG6vBkUpKl00gANHB+CIWnwnwOZGB8IE0MIlPWEjrJCyZJQhbTFHkuijyZTWhD\\n\",\n       \"Qhs/R3ZsHzGtpt88+hb7JBUOjojmaJqGrnXkhfCFd52DzpPmGSYdlHy6LjCxA3/GtCgjxGnAHxvT\\n\",\n       \"xZw/lOWUjdlcFsl1zDU1TdVSZBllllDmObVzNK4lGE+WlYIvxZOYgGtWJEnKtMi448Uv4uV33klR\\n\",\n       \"TAgY2k5EbG0Af7CS00chAs/eeIKFqqmxqSErMmH/w5ClEx599FG+/OWHOXfuPFVds1hVBBKMEa4Z\\n\",\n       \"Y8SxrxYrouJXPHKmZElCEkWEJbqJsmLSdkgwkFkL9nDHmxanEpuQR9X2mNEi0rXLQvGKuImRdlFi\\n\",\n       \"rYy/Dfq6gDUJSWpIioSmrvHBsKrVcYJrOrAS2fWsdcCqEjKq3Iv4QpalEY0ios/4QNvBwUGFj6im\\n\",\n       \"1bxgf2+X5WopdAnAxoZoVgphUmBzc4PJZCINVPM5J0+c4Mxtt7Fz5Ag7Ozvs7e2RJCmr1RLvRTfV\\n\",\n       \"B+k36FyH9w4fN+HlckmSplzd2yPNcn7qJ34U7z2fevBBDJYrVy4ym2+R5xPyvMT7wJXLu5KTNZAk\\n\",\n       \"GZ/73Bf5ymNP8Ad/8GGCDxw7usO33XsvJ06cYGM2Z7VY4B0sfMXjTzzBF7/4RR75ylfIilJSmrmU\\n\",\n       \"zKazCU3bcf7CZVGrN4HJdMpiscv2zoyua3jR7WdJk4SX33knbdtS7S85cfwET+9e7BWVNO9/rSl0\\n\",\n       \"r1L5NDMwbnadwG21qKgTpY0plgRFpUVfpLnziFIqy5J5vnFoDuprNUXSNE2fTlmtVr3z1tqY8sho\\n\",\n       \"2kUdP6E7pL0pcNY2wp9Nz+L5bPfd++jngBG+GilS2vj/O0MIvxJhhP8VeBHPhBH+AgIjdMDfCyG8\\n\",\n       \"9zrvG37/t3+jHzyIEawd+HjHFz7mTdZdUJENuruNcZ2aT9IdUB2yFjc0atMUyPXyTDoBdFcc58v6\\n\",\n       \"QtVoRx7nc68F/+sXb2KOHejTMJ0bBInHkKR7vuMtz7imBz76gT4CU129QHdIVFWKLHmEq7m+lpDn\\n\",\n       \"OU30plr1NsawWC7Iy3xIU+g4xuOhWugiYU85sK7pv3rvTSe0rJcvX6aqKpaNpHlc23JwcIDrOnav\\n\",\n       \"XuVgsej5nauqEmFoW8Zx0/SCx9i0XwSa3/Yh0NWxOcOKmIe+RvUp9bo67/G4/nvr/FB0SpKk1zCU\\n\",\n       \"G4TQDa+99vvVcde/z5Okl+AySRJx3kJqJvBD+npJaiTCk4ULrnNMJ1PquiKECFUNOq/Bd8LbLvPU\\n\",\n       \"YAMcO3qUM6dOc2xnh9lsJnn11Eoj1NYWR49sybXHk0vb1mR5BgZa1+JNykf+6GN8+A//iEW1Islz\\n\",\n       \"kqwgL6VjOcukBkAI4IX7XlMQNsoTXg9PHYInzeJJNxjapqOczMjyjKapwHQ0TcXm5oy2rdiaC6Sz\\n\",\n       \"LEt85zlz+jR3vuxl3H72rAQ6TTs6gQwCFiEEXvW6Q9lbAD79iQ/GOSvzsCwlSFLMvfehryP0RGKT\\n\",\n       
\"vH88dH4j0QND+soY0/OV6Pc/rvnoOtcNvg2DdqpG3vpeY14Ta21f89HPUXphY0x//5q6ef23f/+f\\n\",\n       \"LgIPIXwaeN11nr8MfO+z/M0vAr94o/fVix6rnFgrudqhNVYmy+XLl9nY2Oid+LUDr1+uOmrducbk\\n\",\n       \"+sulAGmkCj/pd1GFLSrWVojwh4JHHyXbgXZUaSN1Q9DP0N14uVz2jUjq7PUaVG1Dd+v95VKEjuMR\\n\",\n       \"dFy0vZ4pXlUn48bGRt8Fqbk6KW5ZibTDoPeoY6XXphvUbDYjmIBvHT5+JyEEFgcHfc4uz3PyLO8n\\n\",\n       \"YTfaXNURr1YrbJqwsbnJ8ePHY0ST9/nIxUJa+zciVlhhccp69+gTT1FXFefPP83jTzzO7t4urm0i\\n\",\n       \"U5ucgIxNKJIEXwjfubUBEAIjQ0qWiFyXa53UGPKM1osQtqAGhJzKAM41FGlClhjyvGBvb1cKjla4\\n\",\n       \"3L1GUnG8k1Rk8HSeYWzfTSybg8dF3Upj0n5TTpKE4LTRI/KgBMOqqsnzkq5zUkL2gTTNSRJDyPKo\\n\",\n       \"BIXg5ruOvf0lTz75IGWeRWrfJvKqREWoxDKbTjl2dEcoCU4d5+Spk+zs7JBPcgwJb33r9/Hyb3oV\\n\",\n       \"//rXfo2mavDecPXKHrP5BvO50BQkxoozjgyWWZy/+/v7HDlyBPwgJaaanZ2v6TqtDQXapiJ4x2q1\\n\",\n       \"pCgzygidzfMZu1cv9nTKx48d53V3383OkSPCpljKnN3e2qKuapLsucmc+jRavN6xIEddN31znDJN\\n\",\n       \"6tpWBkBtRtK+kh6CGH2UBka6zlXnUmmB7SgQbJ3rifi6VGpBWaTGSIyhiIRceZHTua4X94aB4nbc\\n\",\n       \"FKTXeiP7/8qB/3mZMSa8977/cMhZJ0lcPCPoFAxVeH2dLqCxkv2Y1a2HrY1er1H3+AjjYp5J30ML\\n\",\n       \"HRrV688hhNgOfZhm9FpKUv0bVa221rJYLPovWBuWdCOS475ElXoNGuV3Xce3fvcPPGPcHvjoB4Bh\\n\",\n       \"EzPGgPH9Y3XUwQecG04qep15jJ51UchmNqSwFKrmnIhkjNt5lSlNTwia+9QIQ78Pjxzrq6pikk8k\\n\",\n       \"pRHvryjyWLSU3EdTN/142xGda55ntK2TfL6TnPtyueTqlas452lDx5XLV6RLzYg2adO2BCMbyWq1\\n\",\n       \"kg16b4+9g4rpZMJqVcUClusjqSIvek4day2rCL3UzU2/17HCuJ5ecKG/t1gcGf4uPrbGCMS0c1J3\\n\",\n       \"UfR6YNS4NWgiFoVwi7hOajBSX5H6QmolOrZW8P0+jrWJqUQ6ke6zsWCMhaZtMIllZ+cIs8kM4wNn\\n\",\n       \"zt7O8RMn+P33vofzT18gzXO5fGvJEmmmy/JYzI1zTdOQ+l0rJ4ykKiDgoqCyAT+kIdq2Is8Tlqt9\\n\",\n       \"vHdMJiVHt2Zsbm7y8jvv5JWvehUmyH22TUOeZRzsH/QINZsOsL+u63j169/8jDXx6U+8/xA8UAOK\\n\",\n       \"IkI2lRd9DM0NSDdk50X0HIRwzMTx1s07jwVKddxd1/XSa7re9Nq0cWh8yrdm4ECxo/Xku446Bmvj\\n\",\n       \"rlBtaNQNRP3Z3d/ylj9dBP61tOVySdd1vQhBFVMCGplpC6keRw4ODnoV6cVi0SufjLGaqiVnre13\\n\",\n       
\"4TFESyNdoG9fHjt3zZWtVqu+xXk2m2HtwJfRV63T9NBmoV/omLBJu7Xquu6J8/M87++5i4gWnVy6\\n\",\n       \"GT0bDlxzbGOdvLwYRHT7jc+Ynnt6f184zbe2tqgamdzqkITIaEVZFn0UoZuhOnSdTFpY1o1VMKuD\\n\",\n       \"gGsVtQO9cxgM8/mcbtVQFnl/ymmdE9Yn3bQtZImlamuM6cgt1HXFal+k7bI8h64Db8ht4MSxbWbT\\n\",\n       \"WWzesriu6zlEtFt3/2AfFz9vUpY89eRFiqJguVriWkfd1IQAFy9eZH9/j6fPP83+/p445o0dDFKA\\n\",\n       \"JQTSJBFkSjKXnDtSBG/bFptIbYDYbSn9BLZ3wILHj/C6xJJmwwnSWkteFv0mMS5K2jwnp5ACfKyv\\n\",\n       \"+M7RdILCCa2Mn6g9BbwL1M6BD5KP74SStms70rwEA+cu7eLby8zLkiefvhRJkwzb2ztcvXp1SEHa\\n\",\n       \"QLVc4YGkLA81tUgRdoo1BmftQL1rDNbGlFeQFNKqWpIYaF1N03jSRBq7sgTuuftu7rrrLgmaYlC0\\n\",\n       \"WCyYllLkVMRYYizZRFODgaq6PsGbpKsGeDAwSkMEqqruGQE1yCgLWX+uc3FTFCRZQL5zvMe1LcEP\\n\",\n       \"HDVjLngNDseyacaY/vo1764nbXXMWZbJyckOKk0qsafjrKkXubevY1V6ZVYb72bdqFilDklJ1Ofz\\n\",\n       \"OWmasre318saaQ5Knc7Vq1ex1jKfzw85NF0kY5zohQsXeseWjHZHFRrQKNo5J3Jp0MOG+q4uhtyo\\n\",\n       \"Dv5qtWI2m6EcGypArPe0Wq16jheFiPU/h4FP4Xr2bW96JjJlbWt7Idm/+fU/3/fT9TVOJGhgZ21C\\n\",\n       \"nof+1Dnk701P1Tqfz2OrvIAc1DTwU1+iQZnCB1UQWaNway1FlmOCnAhb1+Gatm+H995TLVe9OHIW\\n\",\n       \"WUG1nqYpWT3p3qg+N7ab5sA117RcLiNRVcTORlNHOZvN+qOItZb5bDa0FANNjG4Ta4XQKMKbuq4j\\n\",\n       \"TZJ+F9X29KZt2NvbF7hXzId77/svVHdA5xzL5VIihUzyjrPZLKIABpXzPM9oGsGIz2YzNjc2Ygoi\\n\",\n       \"jeiXwGw6o6oEEVLkOcTqt4uq20mWkubSzRdMjAzWtra1PaepOEKSqNMeI5cGB6s1N4mYh4LkuBtT\\n\",\n       \"T+q6KYBobsoJS/R4pZGnIUkUciiIJ+c6vPM4Z0ZpFGkerOuKPBdIcpJI/ca1NdWq62lAmqaRGkQq\\n\",\n       \"sOcOHzeCr1NBh7HKtNBxpn0BoiiKPgKuIrPY+KiuRxdFUmTpoBC+t7cnzjgeU5xzHN3ZOYRUOXb0\\n\",\n       \"GCBHlOVy2Ufqe3t7PZHWODcakKaTxeKg16oD+t1bj1Dnzp1jPtsQXHAY8LxK7B5C6PHSmoPDSEt0\\n\",\n       \"kgjHBwxV8LWtbW03Nj3dggpqdCMAQ9Kf0lX/Uxy0II806pX3OFw81E0gy5JDiJA0TSiKnLquyfMM\\n\",\n       \"yPruTO0YlRM5vQNXkrI0FXZU6RdJSJJhAxHaj5ayHGoPLhZEb2Q3T5En7oxVVXH58mXSSKqvTlnp\\n\",\n       \"GrsIhdOEvxYUdLfUL8hay6VLl3p+ZS1AFkXRR9J6VBkXLZXe8ciRI33eXPPXA/xsgAfNZrNDOeiB\\n\",\n       
\"kyOmcpynyIt+MmmOfVyYFYy5i9zHg8wX0LOfrW1ta3tuS9PsULOTrjXJUx9Ooeq6FbqBQcwYBlTb\\n\",\n       \"GOU29heaPlH0SgihF15RdFpqD3Ot9A2K0UmPU7gmDHh1rS/pRiD3JcV9Rak86/3/+Q/pn8x0YHei\\n\",\n       \"nl5d14JMaJrRUUfag4ui4MKFC4QgCtd69OnTLzH1sbOzQ1VV1HXNqVOnWC6XrFarPlWiOWzNO3Vd\\n\",\n       \"1xc+VZG66zopxnltz29JRsUR/Tv9vfIbtG3LdDpleTAQ2uiXM0afKD5dmP1EFUR3eC10Durya1vb\\n\",\n       \"2m5kuobVhyjwQPDaAg/Uda+BkwlQr0TTsm7aCBNMaKqmr5V519F2DWme9ClTLeqOi/kwIhfLIqgi\\n\",\n       \"ClZXq6p/v851dG5oBFIfpoGoNC5O+tqZBoTPZTc1Ak+ShIODgz5lEBjQHYou8RG6s7293d+UKmVn\\n\",\n       \"Ixiboi40BbK/vy/dj9EZA30hUYuNQJRoaw4NnDLl9YPLQISjiJfxJhNihX65XFJOJhDoo+i+Cyya\\n\",\n       \"RgtpmrKqKlxdM5/PefDTn+Oeu1/T1wbu/9gH+pxeXdeURYHhcAONyHvR5/hAK/EGwoBhJY6rqtOP\\n\",\n       \"j5oQyKylbupIKyCF27woqKpKFoP3Mjmdo0yyvhMty/OoNDT8nUYYNkmEewYlt6Ifj6pa9ZulKs+Y\\n\",\n       \"oMRXArYzFj7+yYd43V2v7GsCPUontrR774V/IiJvNJpyneslslaLZX+s1fdQJIEuat1sXecJwZMX\\n\",\n       \"BdYY2lZk6rz3VCupYZRRLLhrZH6Uk5JqVfXdeABtN6T8rBUKXSlUpz3kMc9zadu3pk+5YaSW0jcy\\n\",\n       \"IRGaIURCpIoHH/o8b7jntYJDTiyNa7DGMinLiHtOpCPYGpL4/axib8Asn8hrTJSISxLKooiKUcKp\\n\",\n       \"0gXBvDvX9WNWFiVZRMqM4XGaZ/Y+RB7+ijQTCb9ghB9dT7pVVdHUNUnk1Ff2vTGpHUBRloQgY1mt\\n\",\n       \"Bl4SPXn3zjKuTecsi8WKj3/yfr7lDXfTNE3fyaxc/CAILu178DGvrGtY/IeM3/hELJQEvo+4q6rq\\n\",\n       \"r0X9w7hRT1Ou+jrttxhDD4c1OpimS8at9eOi6Y3s5qnSx0i3rmuWy6XID42ctTGGg4MDyhEiQyE9\\n\",\n       \"xpg+qlZnpovRWnsIvqfFCuAQ9FAVf9R5K5JFO6108PRvdTGqCrY+p85QP7upa1nofXV8OEppLk5h\\n\",\n       \"SXkxwLTlAAAI6UlEQVSek3iZBH/88ft5wz139c0Fy+VSmmxCYDabsYyY8rEIQZqKGHLfeRidcmIT\\n\",\n       \"0jQ/jA0PwvsxLtyE6DydFzIoTUcpvKksS3Z3d+WU4qUtOViRt1ssFmwmCU1sttnc3DzUQOW9p4gC\\n\",\n       \"szYRyJhzTuhriwmhC9hg0Rq7D77nyfZRXegTn7yfe1//6kNH4Wq1IMsn1DGnqVGRnsS8gcSOAoJC\\n\",\n       \"FmkXcdWegAuyuJq6oekck7IU6tQItV1Vckw21nKwWLJcLvsT3aXLV9jY2KDIpBnD1Q6V97PxCF3m\\n\",\n       \"pZBoIYuzrWrh4zGW1CSUce5KcBC7i51oa6ZZhvcOFyW31ImmNmValnzm81/me9/0xp72OFiBhaZJ\\n\",\n       
\"wmQyjdd8QJrE1u44T+k8TZxDRSHcM8oyuVwu2N8/IC1KkjShnE5xXUueCNugJbC/txs3FYgsI/of\\n\",\n       \"xsqmbEygc8pZ469xkIVAM83gOLUdXdeaJ1DVVR84+G6A6I07ofU5heoZY3jo01/gL7/pjb3DBKJu\\n\",\n       \"6bD2VquVnPDT5NBmAESo75Ba0YANe7gHYuxPdB3qdTz55JOcOHGi9zNFUbC3t9cHC+pjdPPSDUDf\\n\",\n       \"d9wcqA7/udKpN82BG2NYLBYURcHW1lY/CYv5nCbmmdJE+Ep6HGaa9oxfV65cYXNzs4+GNQVT1bUo\\n\",\n       \"TEcstLWWVSWF0aKc9lSRAREHDiFIVOU9W1tH4kIzffRrTSdyaZ0IGaxWdXSCoc+RqTOdTqfYTPhY\\n\",\n       \"Glf3jjvPcxZLKYoWE8G4ewKta3uayTaS7adpyqVLF8jznIODochio/KQ0sdqfr+Jx0YDTEcaos51\\n\",\n       \"vUYkcKjFWCOGEOQaiiJHlWoC0ETcqqBsBItrMGRFSdt27B4sKMoJBzHyLCczFsuqH3PFjwea/p4A\\n\",\n       \"bHQ2s3RGMFKB1wWZkOG8p6qldTkI3yFJKimnPDaHFEgXpTpu3SDVEYwXhJ4IjDH41pHHOkpmE+rl\\n\",\n       \"Sr6zosRg2JzNn9HcZYwhm5ZszAq6dkWZW4psBnh8sCSZdGcqdn/c/OObgZQ/mUSRDSOR/6qVrtTU\\n\",\n       \"ZgNKIo+RX2pxbSApZW5qe3awEKylcY5lREClNsUEQ5oWET3VkWWW6XRO13XMY7EuhIAzjrSMp9u4\\n\",\n       \"/vYaoRIoN2aUG7M+xwswm86pqorFYsFysc98Pqd1VR9FpmnKweJAggwva3cMrQ0hCmrH/oDOeyZR\\n\",\n       \"61W/F00VysY3+IUy6rmGbMTSSBRMifJ1485sDd60Bqbv07aL/u8FX27x3lG3owBGg5zQUUzyQ1Fy\\n\",\n       \"MB6RZUx6HyRBWC38+77DE+giH9Hp06eBIRugVA263jRgHGcFxigY9QPjefRcgIabmkI51LUUj5zq\\n\",\n       \"jPULriMuu2cnTIR0RtnDNPpVBz4ueGq060fdgPq8Oii9BqCPVsbdimmaktqBUF6du/6N5ql0Msi9\\n\",\n       \"JaRpFI7whyMAN9qQ9EvURaPHP5X0Uny6pG4KQpCUj/Kg6L3rlz3uRtXgQusD2hh1+fJldnZ2sFa6\\n\",\n       \"Gzc3t3BuSCNo5KBHSL12vY4QzKHvTsmTNE2hJyr97HEE1XW+b6lPUtOnRMbt/poz1HvslYP8wEsy\\n\",\n       \"m036fKY6bL32wzAy34/HtS3ZWmsZY2214UKPy8YYTJQt0/vpP8umhzZDvfbx0Vr/pnZNhJAlfcHL\\n\",\n       \"WkvX+j5iVLyw1nB0zqtp2nBc8NZ5p+tJf9a5NV5X13Yj61zVQEWP8fp6bfba3NzsP1dPtuNioX6m\\n\",\n       \"jvUYLaZObNzNqn0OepKU9STcMXoP+l663sZpQ4XujhVvuq7rkVz6fWvqUeegUlmM0Wvj9amp0TGP\\n\",\n       \"t5y23SHuI4jc4wyduSCw5y624+t4HfrdNVH7OO05dtjjNaPvcyO7aa30z/uHrm1ta1vbC9TCs7TS\\n\",\n       \"3xQHvra1rW1ta/uz2437NNe2trWtbW1ft7Z24Gtb29rW9gK1592BG2Peaoz5gjHmy8aYn3u+P/9m\\n\",\n       
\"mTHm7caY88aYT4+e2zHGvN8Y8yVjzPuMMduj370tjtEXjDF/5eZc9dfWjDG3G2M+ZIz5rDHmM8aY\\n\",\n       \"n4nP37LjYowpjTEfM8Y8YIz5nDHmn8Tnb9kxUTPGJMaY+40xvxt/vuXH5FAzytf6fyABHgZeAmTA\\n\",\n       \"A8A3PZ/XcLP+B94I3A18evTcLwP/ID7+OeCX4uNXxrHJ4lg9DNibfQ9fgzE5BdwVH8+BLwLftB4X\\n\",\n       \"pvHfFPgo8J23+pjEe/37wG8C98Wfb/kxeb4j8HuBh0MIj4YQWuA/Az/8PF/DTbEQwkeAK9c8/UOI\\n\",\n       \"ZB3x378WH/8w8K4QQhtCeBSZgPc+H9f5fFoI4VwI4YH4+AD4PKKlequPixJgiLCkzJtbekyMMWeB\\n\",\n       \"7wf+HUrneYuPCTz/KZTbgK+Ofn48Pner2skQwvn4+DxwMj4+g4yN2l/4cTLGvAQ5oXyMW3xcjDHW\\n\",\n       \"GPMAcu8fCiF8llt8TIBfBX4WGCsc3Opj8rw78DVm8VksyNnvRuPzF3bsjDFz4H8gItj749/diuMS\\n\",\n       \"QvAhhLuAs8BfMsa86Zrf31JjYoz5AeDpEML9DNH3IbvVxkTt+XbgTwC3j36+ncM75a1m540xpwCM\\n\",\n       \"MaeBp+Pz147T2fjcXzgzxmSI835nCOHd8elbflwAQgi7wP8E7uHWHpNvB37IGPMI8C7ge4wx7+TW\\n\",\n       \"HhPg+XfgnwDuNMa8xBiTAz8G3Pc8X8PXk90H/HR8/NPAu0fP/7gxJjfGvBS4E/jjm3B9X1Mz0kv8\\n\",\n       \"74HPhRD+xehXt+y4GGOOKZrCGDMB3gzczy08JiGEXwgh3B5CeCnw48AfhBD+BrfwmPR2EyrJ34eg\\n\",\n       \"DR4G3nazq7jP432/C3gSaJA6wN8EdoAPAF8C3gdsj17/C3GMvgC85WZf/9doTL4TyWk+gDip+4G3\\n\",\n       \"3srjArwa+FQck4eAn43P37Jjcs34fBcDCuWWH5N1K/3a1ra2tb1Abd2Juba1rW1tL1BbO/C1rW1t\\n\",\n       \"a3uB2tqBr21ta1vbC9TWDnxta1vb2l6gtnbga1vb2tb2ArW1A1/b2ta2theorR342ta2trW9QG3t\\n\",\n       \"wNe2trWt7QVq/w/Uvjt8hhUJzgAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x116bcb210>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# Find, print, and display the top detections: person and bicycle.\\n\",\n    \"i = predictions_df['person'].argmax()\\n\",\n    \"j = predictions_df['bicycle'].argmax()\\n\",\n    \"\\n\",\n    \"# Show top predictions for top detection.\\n\",\n    \"f = pd.Series(df['prediction'].iloc[i], index=labels_df['name'])\\n\",\n    \"print('Top detection:')\\n\",\n    \"print(f.order(ascending=False)[:5])\\n\",\n    \"print('')\\n\",\n   
 \"\\n\",\n    \"# Show top predictions for second-best detection.\\n\",\n    \"f = pd.Series(df['prediction'].iloc[j], index=labels_df['name'])\\n\",\n    \"print('Second-best detection:')\\n\",\n    \"print(f.order(ascending=False)[:5])\\n\",\n    \"\\n\",\n    \"# Show top detection in red, second-best top detection in blue.\\n\",\n    \"im = plt.imread('images/fish-bike.jpg')\\n\",\n    \"plt.imshow(im)\\n\",\n    \"currentAxis = plt.gca()\\n\",\n    \"\\n\",\n    \"det = df.iloc[i]\\n\",\n    \"coords = (det['xmin'], det['ymin']), det['xmax'] - det['xmin'], det['ymax'] - det['ymin']\\n\",\n    \"currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor='r', linewidth=5))\\n\",\n    \"\\n\",\n    \"det = df.iloc[j]\\n\",\n    \"coords = (det['xmin'], det['ymin']), det['xmax'] - det['xmin'], det['ymax'] - det['ymin']\\n\",\n    \"currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor='b', linewidth=5))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"That's cool. 
Let's take all 'bicycle' detections and NMS them to get rid of overlapping windows.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def nms_detections(dets, overlap=0.3):\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    Non-maximum suppression: Greedily select high-scoring detections and\\n\",\n    \"    skip detections that are significantly covered by a previously\\n\",\n    \"    selected detection.\\n\",\n    \"\\n\",\n    \"    This version is translated from Matlab code by Tomasz Malisiewicz,\\n\",\n    \"    who sped up Pedro Felzenszwalb's code.\\n\",\n    \"\\n\",\n    \"    Parameters\\n\",\n    \"    ----------\\n\",\n    \"    dets: ndarray\\n\",\n    \"        each row is ['xmin', 'ymin', 'xmax', 'ymax', 'score']\\n\",\n    \"    overlap: float\\n\",\n    \"        maximum allowed IoU overlap; boxes that overlap a kept box by\\n\",\n    \"        more than this ratio are suppressed (default 0.3)\\n\",\n    \"\\n\",\n    \"    Output\\n\",\n    \"    ------\\n\",\n    \"    dets: ndarray\\n\",\n    \"        detections remaining after suppression.\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    x1 = dets[:, 0]\\n\",\n    \"    y1 = dets[:, 1]\\n\",\n    \"    x2 = dets[:, 2]\\n\",\n    \"    y2 = dets[:, 3]\\n\",\n    \"    ind = np.argsort(dets[:, 4])\\n\",\n    \"\\n\",\n    \"    w = x2 - x1\\n\",\n    \"    h = y2 - y1\\n\",\n    \"    area = (w * h).astype(float)\\n\",\n    \"\\n\",\n    \"    pick = []\\n\",\n    \"    while len(ind) > 0:\\n\",\n    \"        i = ind[-1]\\n\",\n    \"        pick.append(i)\\n\",\n    \"        ind = ind[:-1]\\n\",\n    \"\\n\",\n    \"        xx1 = np.maximum(x1[i], x1[ind])\\n\",\n    \"        yy1 = np.maximum(y1[i], y1[ind])\\n\",\n    \"        xx2 = np.minimum(x2[i], x2[ind])\\n\",\n    \"        yy2 = np.minimum(y2[i], y2[ind])\\n\",\n    \"\\n\",\n    \"        w = np.maximum(0., xx2 - xx1)\\n\",\n    \"        h = np.maximum(0., yy2 - yy1)\\n\",\n    \"\\n\",\n    \"        wh = w * 
h\\n\",\n    \"        o = wh / (area[i] + area[ind] - wh)\\n\",\n    \"\\n\",\n    \"        ind = ind[np.nonzero(o <= overlap)[0]]\\n\",\n    \"\\n\",\n    \"    return dets[pick, :]\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"scores = predictions_df['bicycle']\\n\",\n    \"windows = df[['xmin', 'ymin', 'xmax', 'ymax']].values\\n\",\n    \"dets = np.hstack((windows, scores[:, np.newaxis]))\\n\",\n    \"nms_dets = nms_detections(dets)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Show top 3 NMS'd detections for 'bicycle' in the image and note the gap between the top scoring box (red) and the remaining boxes.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"scores: [ 0.86610985 -0.70051557 -1.34796357]\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAXAAAAEACAYAAACqOy3+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvUmMZll23/e7wxu+KeaInKuys+aq7ibdraZE0oIgU4Qt\\n\",\n       \"mrAsGISgrTfaWAa88tYbwzagnQEbhGUv5I1XNiBKIE3SNCi2SDfZbLC72TVmVWVV5RQZ8ze+9+7k\\n\",\n       \"xb3vfV9ERTYJg8Vim3G6oyLjG95w371n+J9z/leEELiSK7mSK7mSnzyRX/YFXMmVXMmVXMn/N7lS\\n\",\n       \"4FdyJVdyJT+hcqXAr+RKruRKfkLlSoFfyZVcyZX8hMqVAr+SK7mSK/kJlSsFfiVXciVX8hMqX4gC\\n\",\n       \"F0L8B0KId4UQHwgh/ssv4hxXciVXciV/3UX8RdeBCyEU8B7w94BHwB8B/ziE8M5f6Imu5Equ5Er+\\n\",\n       \"mssX4YH/DHA/hPAghGCA/w34B1/Aea7kSq7kSv5ayxehwG8Bn638/TC9diVXciVXciV/gfJFKPCr\\n\",\n       \"3vwruZIruZK/BNFfwDEfAXdW/r5D9MI7EUJcKfkruZIruZI/p4QQxGWvfxEK/LvAK0KIu8Bj4B8B\\n\",\n       
\"//jih/7Hf/bfggSVZcyrikdPnlA1BqUUeZ7T6/WAQFUvCAGqRY1znqIokVKxWCwwxqCUJM8zpJLk\\n\",\n       \"eY/JZMJgMCDPc6qqQmmFdY6madBZRqY1i6YGZ5FSopRCKQUhEEKgV5TUdY3wASEEzjmEkvgQQICz\\n\",\n       \"Dq1zAIQQCCEIIcRjALmSeGfj4OqsC0eklHjvybIMIQXBe4QQIOJx/q/f+V3+3r/3dxECrHVIKRBC\\n\",\n       \"xs8AAoESCqTAOocPHqk1TV2TKY2UEnxABghCYhBAIFMSrTVSSAIBaxs8AR8CQgrKskeeZWgh0SpD\\n\",\n       \"BIltDMYYQhoTGywx2R0AEe+FgAeyLCPPc0IAvIcQEGlsnIcsywGBtRZjDQhJurR4zQhc8GkixnsN\\n\",\n       \"aVz/9a//K37pl/4jfBpfIcD7gBQC7z0hBKSMYySlRErZXTNACAGfxtl7j0vXd/Fz7bP0nm5OAHjn\\n\",\n       \"gYAQ8tx5AHyw3TnaedDK6rGFEBhr8d6nawYpl9fbXl97TS5ojAsoKVFa4J1DIAjBgwj81m/8Gr/8\\n\",\n       \"y/8QZy0ej1DxOMa45XF8nLfe++48SkLwFmstQUAgzgchZDfyivg8AoEgQoqlJVIrpFRUVYVWOVIr\\n\",\n       \"AhJrLc5aJPF6pVLdsxdiOQbtfTrnCFK0g3Ru7KUQCC7oKOGWY+o93otuXXTjHg/Av/y1/4N//+//\\n\",\n       \"xyBCuhufnj/49Dv4eEsu+Hj4ACF4ggvd2Pm0YoUUSCXjnI53EZ97+lspjVi5LyEEjTXx20EQ4hXE\\n\",\n       \"9eMDIRDHdWWOrP5ezkH/udf/h//6v+B58heuwEMIVgjxnwH/J6CA//myCpStQZ/K1KhcMxxssLu7\\n\",\n       \"jbGWyWzG4fERk8kp0+kUX5uktKOC0QIIgVwrtJL4EB+yc475fIrzlsnkjI3NTcp+yXQ6xTmXFoin\\n\",\n       \"MQ0iBIwzqKBQSiJFmqsBjGnw3qGlREpBQNC4Bh88eV6i8wwhAsZYsizDWo9SCmPj4pBljkiDb61B\\n\",\n       \"yPQwhEZIcL6JikEAQUAyFN47nGvivBYB50WnGIQQSBTBC/CiW2zCBwZlD+8dwQeEiMrBA5lSZEpF\\n\",\n       \"peEcQfqkfAJKy/hecHhvsLXF+GgkhBc469FKI4TEBx8Ng5IordE6Q+k4bYIQKK06heWtwzvXjaX3\\n\",\n       \"jslkTAiBPC/IsyzeawCSEnPO4W3oFrn3HoQApYBACBZCWsDEBRSIikLKtICDx1mPF1EvRLsYFZH1\\n\",\n       \"SbEJiVJL4+Gdj0qK5WcRYJ2hMXU3T1uF7tvnkRwhmQz36o9Yea0zvEKAUEilkApWdZT3Aech4BE+\\n\",\n       \"jaFQacFbQlB471BSoKUgyzQQMM0CKQRaK1yweAd5pgGBc5a8KPFe4l1IBkMiggMpycoyKl7vEVLh\\n\",\n       \"fKCNh0NwyUYLgmiNnE8Kz6K1xnuLaywIgZKSrMiQHqy10TiqeM3WRgOHgGCT0VKqU+CdkUsPTCQD\\n\",\n       \"uaq4pHfd84mTPhrk4C2tW0MyQEqAVunTAjwCgsCrAEgIAudahUp0jkLAewFy5dl5n66F6Ih5nxyp\\n\",\n       \"+BmtZVTG3qbjgkzrLlOKQOs4hLTEo3HyROPtfbxnJ2SyYfGzcc7E60pKgD8PGv1FeOCEEH4d+PUf\\n\",\n       
\"95m6msfFby0n4zPKfg+pFLvbG9y+eY26NkwmE9aKPkIIqqbm9PSM09Mz5tUCLTzWOwgBlZScRzKb\\n\",\n       \"VWR5znh8inOOXq+HFIE8U9R1jXOOoijJe/2oPJzDBsi0Tp6GpF+WZFmWjMKcXpE88CBomgaIltJa\\n\",\n       \"QVmWFHmPEAKz2YwAZHmGtZa6adBaYb0lS9kGKUAmr0kKTZeGEBKPwHvLijPXiULj5YqVFtGD8MF1\\n\",\n       \"3pZKyiYg8I3Hy6TwnEcIyIosTdLoYYUQMN5Qu4AEpFBxUeoszh+RVKXKOk+uqmuoa4x3nedRFAVa\\n\",\n       \"6+iNZwW5ztBas6hmrK+vo7XGGMNsNmM2m2OtRaksRiM6RkCmqc97xdYQgkOKgA8e70Ln0QQfjWY0\\n\",\n       \"aq7zxtuf6K3HceoN+934hBCiBxoCWuu4ELuFI/AOtNadNxY/E2ia5nNe9mQyic8lRXDtOaOSW3r9\\n\",\n       \"Ukpqmxb0ikGGGKEJ6c/dtxAy6qTgECKQaYUUILzl4OAZjx8/5Pf/7b/h5s0bbO/uMFpboygKpBTU\\n\",\n       \"dcOgP6CqaqRQ0Zh72dkMqVT0HIVGeAdSUGhN8B68wFsXo4A2WknKsFVKxphuHsTo1OKcBR8jLGHj\\n\",\n       \"2LfPqY16hIjKW0pHbSxKyG6+SJW8WA9KRuXVRbZ+DilSaM8r2+WSooWlx+qx1qRxXEY1UQ1KAjEq\\n\",\n       \"gKiEpUjveYFk+Uyc9931uwA6qGX05AM+iKR4o/OVTh7XArJTu0KJ5GaQIt72PqMXHkQ8RAgCmSLL\\n\",\n       \"EAJShPR6OL8eniNfiAL/84juFTTWsr4+ohwNUFphGoutGprG0dQVVDWT+YyiKFFac/v6Li/cuoZ1\\n\",\n       \"nsl0ymyxwHlHVVVUdcOiabi+s4VLizrP+5TJe7fOMiwynHdkOkNlefpehXM2DqADBzQLn6Ccgl6v\\n\",\n       \"QOaSvMjJ8yJO5MaxWFQYY1MoFRgNRmxubkTvXWeYpub07Iw8z6ibupuoIcSw13qPtwbnAlJIXnjx\\n\",\n       \"BYz30WK3UEIIaKXwIeC8Q1hQOs3gAApBZU1UGnhUJqnt8ppEWsRCaYSApmlQmcY0hpAMibU2Kgml\\n\",\n       \"Y9gXAkY4CA4pNIE0YZ1DoJb2JhChq+Q5W2tS9BJhoqhoHUrNl55VEJS9cmVBOLARpvDBpkWXJr33\\n\",\n       \"3Lt7l6apusXVTmatMiDgnIEQw13VQRu+g3uECEynkw5q0Vqj1NKzPB/GClxc5tHjt4Ha1kuITEZY\\n\",\n       \"pYVQ1tbXY6De3kua103T4LxHrih7v6JsQjqHEAJjms95ncZbkDJ6mSHQ1AtMXdHUCyaTMWvDHtPp\\n\",\n       \"EQ8+OuPhw5KiLPEhMBwOCSGws7PH3rVrZFmJVjnON2hVRO9TKKxx+AAhQXLGtlBJQEqNALQKqBbq\\n\",\n       \"IHRoh7XReWkNnpIRInTekymdIs2kwuLjRmqJTGNnvU9RRHxOdV11YyqEwEmHFhIhZVSqOl8qMqHi\\n\",\n       \"z4rXHkJI8zPw6mtvpiikNfTRuSCIFOn46P0CQsrOyW2dPykSNOksJkVtSikEIjoaLQznXIqUlvBZ\\n\",\n       \"jHxcjGbS33LF+IkQISYPKB2/55CdYYxzUHZGr4WxSMf6sXr0x777BcrHjz8jyzJOZmMA+kWPfq8P\\n\",\n       
\"PuCEJ0eCVKhMRY/EG4JTBBRaCnqFZmNjL4ZfQmGdjfhVgMViwWw2iwp6sWAxXVA3Df1eDxkC/UEP\\n\",\n       \"nedU8xkyJJyzBcgIjEZDpJTMZvPkAcJsNiHP8uiFVlFR1bWhKHpkWc6cCfP5goAgy/IYpjpHXhZY\\n\",\n       \"51FEuKGuLVJpjPMM+iOGw3VCcGxfu5bwRMF4fApEZTqrFgQfkscSMTrrTDQSxtAYQ14UNE1NKXtx\\n\",\n       \"0UiQQeC8ZTFf0EuLPADeWnSRY53jdDxmtpixNhqwsb6OFJK6MagU7qm0eIVScfJbS3DRuInkhcTF\\n\",\n       \"HDpFVtc1Ho81lqLQHbRDiNhlC2fpXHeKMyA63FmpaHQcnjfeeh3Z4s8roTd2iYH7EAguEFrPMPjo\\n\",\n       \"UQoRz63ortE5E8fVe5zzKCUTvh2NlLcuYcLRoyJdl1IyGtt2ASdFoJJHq2SEZgQCpSVLnzeOj0R1\\n\",\n       \"SswYkxa76xTCYDBgsVgwHA4pZcaTJ4959923mU2n1PWcItM09YL5dIrzBqUcY9OwqAz3XnoVpSTP\\n\",\n       \"9p/gvOPDj97nhTt3uXnrDoPBiPX1LYxZ0BiPDkVcIwm6wvrOAIeQoq3WuImlJ6wS/CRbWMXHqAii\\n\",\n       \"w+GlBC0gKSGICl4T56sQEROWXqDSOLdGVSkVx8M5vAPfKtTkbS/1dUAGn64pGkSBiPcjAi+//Foy\\n\",\n       \"IHT5FZGUZBtNdvkL7/HBrUAcUbm2x5IyKmRrDYUuuntqlXc0utFULHMvdMbEe5eMkozwSQsBOkvw\\n\",\n       \"KV8mUlTPMpcGEYqK+SOR4Mi/oh64zDKMD5weHTPo9Xn65Bl4z+72DlpogrUUWU7ei4m7osipTYMW\\n\",\n       \"GZmKnnTdNAkL1bjgKcsC7x0b62tsrI2wxtJ6EEIIyqLg5OQEIQT90QZb6xvUdc3xyQl1VZFlOUII\\n\",\n       \"8iKPnp0PTCdTdJmhZcScnTW4po5wynyOrWvKoiRbW2c+GeOFZHtnl+Al+/tPePL0EWVRcP3GDXpl\\n\",\n       \"nxACo+GItbURp+MZH3/yAKUUg37Esgf9PqPRGk1T470jzzLqumYwHJDlmn6vz8GzA3xwSCXRQscE\\n\",\n       \"bq8XJ6NSWO+oFhVKacp+iXUpTAeE1IwnM2bVgsYYZrOKpmlojGPQLwnWdUpJCJU8FYlQGm89WkaY\\n\",\n       \"R2uNt5a68R1soJRCqITpKjDOdh4ynXKgU8ptWKqU6uAK71z0V5OHZb1F+NYzjwrRtUZBRG+K5NcG\\n\",\n       \"mcJ9SIuWmLniXM4MAeRZVKrOOUwTDbJUOayE6xAXe1VVKaLLY7JNy3TPy+uAgHUO4SM23X43Lm6L\\n\",\n       \"Cy7i8AlvJbTjmxG8oVdmzGZj/uT73+Wd937EYjan3y+xTUMIjjzTaCVRCqo6OhbT8Zgf/eD7jNbW\\n\",\n       \"yMsc5yI8dHDwFOctvXLIK6/mFEWfvNBYH6jrGp1lCQMPOBtxb5ESm7TzJNmgqNxdxNQTHNLCWD7E\\n\",\n       \"iMJrtUwCdjmApOyTtx4fAhAEOotJdQh45yM+joyQTMoZCiFQWXYOImnhE1hGQq2ClziWTz/FOcJH\\n\",\n       \"bwaxVOYIVCZTAjMg0vX6FrqRkpR9wTrbGfLWOHRJ1GSA22hEKpXgFd/dezte7f1oJUHTRQbCr8zV\\n\",\n       
\"dDOqS5YLvBDJUD5fvjQFfv/9Dyl6PZqmYTQaolXGdDJHyjPKomR7cxOpNA8Pn0SvTQjyMmd9fS0d\\n\",\n       \"QaB1jhQRjpjPa7Y3hkxnsy5pFbxHZzpl8UPE3ISgzHNMXTMaDCjzHNs0DPb22NraAkApyXg8TmFT\\n\",\n       \"YLqYMZ3NqKoFRVEwunkLnWVdoubg2SGz+ZxBmSOUxixmFEXOy3df6LDf+fiM6dkJ8/mCkyxna2sL\\n\",\n       \"62H/0SOGoyGboztRATlDbQ2Z1jQ2VsNsrY04m5xR14LJ2QnOxmqcpjH4ENg/OMKHgA2B0fo6w36P\\n\",\n       \"PNcopaibhl7ZS1U0iul8wfHphP3DA1SmWRutEUIFIaQIyFMvqliJk6IJFywq+BhVJBenFAXBpaRj\\n\",\n       \"8lh9UDifcChilUpMwEav1KcJGvBY53DOI4Uk03nEXFOC2rtwPoUjYqJNpaqcLkHWHr/DZJeQRrfA\\n\",\n       \"V7L6sWKh+09U1FqS6xKtNVUTvezgo+fjUrKPELDGopToEkztAo0RQ5a8aUubkG6vyRjbXYsNKZkn\\n\",\n       \"RIR9lKSua/K8QGvF2ekx999/l2o2ITjP+HQR4T3nyDIdYUVgbW3IjevXyYsa21iaukJryc72JmfT\\n\",\n       \"MXW9YLGYMRmPqeuaO3fusr29h1YF5BqlJcZEOC9qpdbWWUjedUgVObGCJRrY9j4kJNjM4oLHqogH\\n\",\n       \"hxDAtRFQi9+Kc/k4oSXe+qTYZZwPKamtRLaETFjiwMv8BV0ks+rpCyFonIuxXMoTCRkdBeEFISnR\\n\",\n       \"VsFyrgolIHxyMkKsiIrPJqPsaUKzCtd4XMKp24jReUdjY1SV6UFKeEaD45OSjzpFdQUN1lqkkhEo\\n\",\n       \"SZH/KozmkhUTIiZGf5z8hXOh/Hkk1oH/5Z/3Sq7kSv7/J//dP/vn2LBMBvuQIBrVwhuC4KMDETwI\\n\",\n       \"5TssXaToXMtYKYSgKyVsK2iWSnl5fJdwcq2X0ZqxK2WF6RBt1BDzHynf4z2CWDXUetvRa/dLOMkv\\n\",\n       \"S2T/m//qPyf8JdaBX8mVXMmV/KVJpgVSlF1y2gV37v3gBUGBCPG30j7W0ac6f4i14bFi1mPaSqU8\\n\",\n       \"QzrdojC0ddxKZehMd4rWORuhJBert5RaltYCKccSOkMQ+07kitcd8N7SVkTpBEkBCcJ5vlwp8Cu5\\n\",\n       \"kiv5iRalFDqYiNakogYv6Lxhn5LsLdYcXCw9DmrZLNc2UhkHqCVUJHTqlbDLBKZDdknY2EwkUmlu\\n\",\n       \"0+HhwBL6iSB9PDfRk1eCzquHlGiHlOBe9h+IP4Pt5EqBX8mVXMlPtIiUs1it0xcsu2ZlCDghugSt\\n\",\n       \"ajtx20+KpTeslUolqW1lSMARG7fa47cdtW1FDoBEYBP271bQ4VUvOwS6nJG4oJdX+wCUih67Mebc\\n\",\n       \"PV0mVwr8Sq7kSn6iRYmAF21VSlhWrsQXEEAmJELLVLXEOYgDIKQqG6XOK0yJwKWW/q4WPP4FsFIl\\n\",\n       \"E5AyX6FMWF7Hsg8idF3aQi1pOFYlJlqX19Ye43nypSvw//V/+e+x1jGbz6gWNXVVM1/MaWtw33zr\\n\",\n       \"DQ4PD3j8+DGvv/46xhg+un+fP/zDP+K1V1/l3Xff51d+5R9x9+5d3n7/BwD0+316ecHe3t45CyZT\\n\",\n       
\"XWZs5xapXEp0SfLYCSVSV1RKdAiBDgafkgsQ6RHaz1jju+4+7z2Vd0zmc5xz1E3NcDhCCJjPFzjn\\n\",\n       \"ePDgY27cuMFwMGQ8noCSVPMFIXhGgwGZVDz4+AF7u7vkWvPBB++jlGJttMbbP/oR6+vr9IqS8WTM\\n\",\n       \"xvoGxliEVJycnvGzP/+3eXZwyMcff8CN7RGzyZgyyyJOB7z61a/x4cPH3HvtTf7wu98jz/o457l3\\n\",\n       \"5xZnDx/zwrXrbN+8gRv2yAYjzLRGVg1FphjXYz47eMTdu9fJnMLZgNcZ0xA7WaUN9BFI7/GZZJEa\\n\",\n       \"SAQB7yzO1rFxR8RGliCiJySz2LJvjKdQhsrDOGySDW7ip2N0dUjwNddu3aQUksXTJzDqU+QZtm7w\\n\",\n       \"zuEFuOBwxOfXL3oUShOs41k16XDFWIsbS98aYxBSYV2gLPss6or7H/8Q53cJ+TbGV2TCc22Uk7uH\\n\",\n       \"/PIvfpOjI8PJuMe08fRyhRSCpomlZlrHZo+6rkFEHg2V6teDEF2ZWqYz8J6mbgA4m0741V/9VRAC\\n\",\n       \"6yz9PEMBxlqMc5GuINOJdiAqh5anpyh6HJ+c8sq9uwRnyZSkms944/U32N7ZwbrAxw8+xVjH3/y5\\n\",\n       \"v82/862fQUnND3/4Q4bDIRDrjpVWWNt2m0aagTzvAUul1ZZ9tq+1a2cJW8Q11uqj1Trvtmon+ECp\\n\",\n       \"lg1RQsiuEaj9Tvs7BMjk59XTP/mn/+nnXhPBR96S+FfnHbcQR+g6dT3CRQqB9hwQk5aC2LwUUj1/\\n\",\n       \"52EHE2v5ZaqJV4mSISzr24OPa98CISVNfQix8Y3leJyrqFm551UlLqTuxnl1zJ8nX7oCHw4GOOfo\\n\",\n       \"90qKosAZF2u+6xqlNcY7bt26xdbWFsPhEB8sZf46L730FTY2NnnzjTdwtmJ8dsTe3h77+/vM53OG\\n\",\n       \"vX5Xv6sTd4dM5YVCtN2J7UNRqYKtrZUGJfWySN+JaDmTpvcp2SFl5JvwzkeehxDwsm2JX5azWe9Y\\n\",\n       \"LBZYa8mLnMFggAc2d7YZn51hhwPef/99yqLAhIZrN6/hjGVWL9i5tsejh49YVBVfufdSbD4Bil5J\\n\",\n       \"ALZ2d6mM4XQ+54OPP2Jnb4/eaMTO9VtIofjaG69xdnLCs6MDnj35jOODA/Zu3MA0DaYJuBD46MlD\\n\",\n       \"5GLOWrNAVTNG22s8Oz1mejLl+mgTrSVf/+o34N2MtV6gEDmmEYSyR7NYkOuMzAVK5/DVAhToPCOo\\n\",\n       \"IpakWYHTEIKjaWqCiCFo4y3B2Ggsix6Vd/jBGvefTmnmh5TeUpqGYa8kOMlGoXFrJU01RVaBTGly\\n\",\n       \"HVu3vQCkwkuwOhCEw+PIMtU1ZRhjopelNVmRJ+w0LspcaO6++DK13eDh/oTgHVmZcTYZs1nCtDG8\\n\",\n       \"88F9it4LiKLs6s+lbEvb0qKVbQ2ww7iGQEDJDIj8NEKpVD/tUUqzu7vDz//8z3H//oc0TU01HiNl\\n\",\n       \"6kXwLnYHepe6hUkVDLFeuZnNGAyHPN1/hrUNa4Me62tr3Lp9m+OTU45Pzrhz+wXeu3+fN954i2vX\\n\",\n       \"bzKdTNjY2EjOCl0pW0tApZRC6VjyVld158j0er3ULZvWT1v6lioFO4WePFMlYyIuOH9Occ2qRewz\\n\",\n       
\"0BpkbFZBtBQDsWtTahXX6ecKL55TuSY/n+wTgGvLTaN3lgyOOEdE1v6OG4nFqpTQfkdE50NJiQwy\\n\",\n       \"dRZHiMM7v1TQMnLHyMRj0q79i8RUgQjRAAjv8Zz3tuPvZe34stPz+fKlK/DYTgxlkSGI3BwEiZYQ\\n\",\n       \"vCErithAohXOW3KlGA6HbG5uYhvDq6++xHw+J8syyEZsbGzEm07eSp7nSy6LsBJihaU34dyS9a/N\\n\",\n       \"IMdmvrbNdtnui4jt7LmSWBvQWbTWXeOJc12NKAisd2ReMer3Mc6xsb6O0ip5bTrWbGc5d+/coSgK\\n\",\n       \"jo+OYmOJ85jG0DQNdRVrhV++9xL7+/t8eP8Dbt2+TVVVZL0SkWk2dza5fvsm167foNfvMTk+ApXz\\n\",\n       \"e9/+A376a2+yNlyjcpbgPdPxhK3NTaazBqEkO3u7/PDx97j58otsvXCDP33nPSbzmldfe4M6wP6z\\n\",\n       \"x0zslKdPHpLfGDI3grwYYZzn4OiI67fuMChL+s5BppB5DDtdEw2jVxnONbFNGUmWZyDbjH8syTqz\\n\",\n       \"nrK/ztjlOCeog8A1DmcAIaieTTFbPYZZwVqpWMznBO9ZGENjDTLPkhKX1N6hUARjwbnkacVFqnQe\\n\",\n       \"50p8BSUF1phEzLTJZObwziBEjQ+OIsvxcsDjZwtmjcRpQ5bB0ckMgURnGXlWYl3NbD5P1A0xkaUT\\n\",\n       \"IZfwHm891jqci4ybzjusd9Sm4Rvf+AZlWfL48WP85jbeO2ofIwofYm02Pnl6AYz3NM5R5AW3b9xg\\n\",\n       \"NOyzvb0FzjI+O+WH777LeDJlY32TZ8dHvPLq6+RlSV1bev1BciyiV4mI9dNlnmHb5hQpCDh0Hkmg\\n\",\n       \"rLGEYJGSlblNqtl3BFxaA0svWIhIZtZ6xKQ1QiYxzkbGTSWJZfZRa7YNRCLxyqikniK+Hf91qWjI\\n\",\n       \"xIX68LDsCSA5X0l/dx2Q7byI/3fpNZHmStQXWkT+IoKP0V5IkUNXFrjUEyoZVqFS2Uq6+rbzt60L\\n\",\n       \"DyEq8tVa92RjsM6i5LLT+M+SL12Bt9wXzjmUEmgtaJo6egJS0tjY/qx0HDSTrJqpGwKO2Xwau5aC\\n\",\n       \"wZklLaRMvyNxUuoUY6UdW15u2c7hYmlwLQJcy9TGsikhnadlUhOJA4HULi6VJNc5QkbvLMsUIRFK\\n\",\n       \"eUCEwGB3r2urFkIw7PUhxMaYuqqRUnLj+nUCMByOGK6N2NjaYDQa8elnn6G1xnrPlhCMJ2eoPPKE\\n\",\n       \"KF1wcvqYLO/x8YPP2Nre5sWvvMTu7bv84N0PIEh+6utfI88KDo/3KfolR7Mxn372KX0pcR72Hz2k\\n\",\n       \"8oZgGz778B02y4IfPHmfQT7kb/3s3+FP73+MEYKDZ/tce/lVnn7wAUdPH3Lt1jXmzZwN1tne3qIs\\n\",\n       \"SvL+gMViRm9ji9HaCKljO/3J8Snj6ZRdpykHQ373++9hzwQUigzHq7dfZGt7j/c++YzpmWX31iZZ\\n\",\n       \"fciibnjy+Al3bt+KjRbWE5JHJ2WBqR3OOBSS/qDPxsZGIk+K8+rTTz9hPptDCJRlyfjkjP3jmtpq\\n\",\n       \"vFuQFQ2z2Rk7117CLQJP9iuycoe8P8LJGWWvj5SRYmBuIjwkFCyaOU3TkOVZJOkyhkGeE5wjy3Ok\\n\",\n       
\"1Cyqito0OOcpix5SSoqiiAZZZbFa2Ec+Dyk0bRu59xFSQWksMFpfYzIdM56eUfRyemXBo/0nketG\\n\",\n       \"Cgbra+w/PeAf/Ce/wqyqUZMpmxtr0ckgYsMSl4iVHFIuvVSwHVRA4vqIOT0RuzcTxBk6jzFGI0v4\\n\",\n       \"UXRJu6i8IrwltYg11EqiyCIj5Ar84qWA4MCDS4V7su1IfI4zOpnP0CJbwZRDbB5LjKJStd2QLd3C\\n\",\n       \"KpyR1rIgNexA13kJCQKLSl5IGRt9QkDpRC3ZWp0AwTkQkV7gXCMakZJLSIlWCXqDVJfuE1Feiv5J\\n\",\n       \"kFMae/9nKPEvXYG3EoLD+ZalbwV7E61lP88GByHyY7fzDVgNPVbvu2tpTviWWMEk2++1/24z18vj\\n\",\n       \"BJTKzj2Mc0o+TfguSPIOfKI+Xfm8SiQ+QoiuvThi8hFnVzIyIbZRmbWK4WCItTZGGy7Sge7s7vAK\\n\",\n       \"r+C946VXXmKxqJAJEvjR2+9S9Po0VcOj+iFvvPUms9OT2ESsFY0PrG9usL29xWxRMz45osxytooS\\n\",\n       \"XzUMi5w8BJR1bGxuUElBoxVfefFV3vmDCdP9Z9iiZnY05tOPPySTkmANp4fPeJhr3vnhn7AxKHnn\\n\",\n       \"3SMe7j/k737zFxh6E+lHVY/aGvzCM6si78fG9hb90RrFYEBfjTg8PuXDH7xLs36LbDMnD4a9wZAX\\n\",\n       \"br7I977/AVNVs7nR46XNDdSs4q2/+XOUZcH+wTOmTcOsrtFln8OjU/rlAC0yjqcThhs77OzdZjKZ\\n\",\n       \"IITghRdfYHf3OmcnR9z/4APGJ8coIbi+u8NsZtDlgKDH9Hs36We7SL/NYnGKkhl5NsApQzWPc1Np\\n\",\n       \"SVAB42ojtV96AAAgAElEQVSyQpMphWwytFJkKifLJNIDqMh1Uyhq05BlGUpHKGwymfCbv/1bAEzP\\n\",\n       \"5qg8klAJAbmUSGNZL3o0i4p8OGQuAtnaiP39J6z1+oTgefDRR2ztbMZGFAG9QT92SQbLW199i2cH\\n\",\n       \"J9RWcHx8TAiR7jVgGY1GOGepjUElQjNrHUKCDwZBwnzTvPTOJbbEONe1FBEqCgFBlhgxdeI48ZHh\\n\",\n       \"cUkhSNAS4RKMIJKhEi23SNuBG7tBrbdApNGwjUM8R4EHQYKY/BJ2CG0NdoqcO0V9Xo/ItO6iB7zk\\n\",\n       \"/m6/0jlpSR91kZBfNg45T8eQ2UUaK2t/NVkZcf/ILyMS50ykxY5ObJFFAi1oecz/imPgQZA42JeE\\n\",\n       \"MElrJ4+2zQAT/duVbHNsm22VqV8Js1pXeQlthNCmKtJ5L7zfXc8FxR5f8915pRAxxGNpYVc/LwUd\\n\",\n       \"dtcigvGKQnesFjp0yfLSHScS8MjEW+1D5CZemJqyLLsQK4TI6tbXPYajIVVdEYTitdde4eDgkN2t\\n\",\n       \"TUotyZXm4EmJFILeoEd/Y43BaMTP7u4gQjQazaziw/c+5Nr6Bq+/eJccie6PyPMep4s5qpfhzk4w\\n\",\n       \"0zN6hWJ44wb12YKDp0/YvnmHneEQKyTe1ty+dZ1Br+Tw7Ihv3ftZTKY5NHMKoZksGgKeUgpM0zCZ\\n\",\n       \"OY5sRdMYVJax1fPUwWNFYFadsR4Kylxw/93vcXw2jSyLvZxnsyly0GO96DE7OuXmKy+z99ouc2M4\\n\",\n       
\"mk750/fu4xoQRU7Z69MXCp33mcwqqibimeOzCY8+e8Cr977CZPMIN5vSL3N8BmFRM+oP0eUIITIW\\n\",\n       \"sxnHx0c8ePABzmrKwYi8Z6ib2EE3GpWsb/SZzU8hNOQ6YzAYsrG+xaA/4OysZjGv6fcHkWu65X0R\\n\",\n       \"AqU1nz16yG/+xm9GaKRpGK6NMNagQgbOokzDRlbwxu0XKPKcDz77hPF0TO0rghdUxydkuWZ6Kjk6\\n\",\n       \"eIrSisFgiHOGTz/5hLLocXx4hFSRt3tRNzS2iRTJxnJy+ozgPTIrUD5DKk2vV6Bz1XnPtjGARAmB\\n\",\n       \"8ZEOlrACOYqsS7CSug2tNUT4oCXFSvt9WNnVVMt2Ta02viQcOSpEnSDNRDX1HG80rVJiFNBSK9Cx\\n\",\n       \"D4b0v5j7EjjT+vYgnG3d7+VaTlE1gLVLAq24zFPiVop0/TISdCUKhqXubjmYAquOZYSV4nstZa/O\\n\",\n       \"MmR6L6IFgSAkwtON0fPkS1fgWZ4RA4eWbpMuoSikRCSa0RaiaBWp7wamfbhLwqOW96LDsFNCpT0u\\n\",\n       \"AOJyjoHWWz/n7fsARH7maJpFh19BnDqdFReksFCcU+zqglVuXw8rFtaHsEyIeY/DUZlY4VJPpyit\\n\",\n       \"ojcTPNJF8qZqsUiKH8qiYG005PDwkL29LV66d4/jw2OaxkTyKmcRmeLOjWsREnIOJTW9tTVGm0N+\\n\",\n       \"6q03aUzD+uYmJ2djbiE4PTrk8Wcfk/c0P/31nyZoxcP7D2gWNXdu7XFc12xdv8ZnDz4hzyNz38bO\\n\",\n       \"Fru3bzKbNtQ2ElqdjmcIAnmWE7ynsA6TNtsoioLxfMHx4SmuCLzx1Xtk/YKjD9/h3u27nB49w9nA\\n\",\n       \"7TsvsrWracaPOTg6QgCD/X3WNjegKPj+j97m48fP6PXXmdsx0+k+jw8ecu/eV+j3+xQ6kpJV0wl7\\n\",\n       \"WxuYesHLd18kcxVvvfE6//uv/Qt6eoud3hYil3zy4Albm3c4tRX1/Bil15mfSh5++hQnFXt7m2SZ\\n\",\n       \"g7MpZ2f7aOWpqwXWeL7x9b/BL/3iL5Kpkv2nxzw7OKSqKqaLislsTp7lLOqaf/Ptf8vZdAohIFXG\\n\",\n       \"op5Hz14KhG/YG4342p0XceMZ28M+w1deQn72EQ9Ojxj2RrjaUWpNr19inWVna5tPPv2Mre1dmsqQ\\n\",\n       \"y4wPP3iPu6+8iUOQ5RrvXWSx1JChQUBRlsxrS10vcMHh56ZrQ7dNZL8s87KdwCuOSMz7LOd1olEN\\n\",\n       \"oSMoC6El9Yq82DIGznEHHyGpqnpJ2iQSdwigRUDiMcagxSU79iQRrk0EthzaRIcwcY+s7k6kpcb5\\n\",\n       \"aJDaNRidOJ/YTJfeeoy+1bn12v4dISXVKXIhBMK1lMhLQipW6tFXd2VSSnbXK0Lc10CEtjv08+WF\\n\",\n       \"z5MvXYHH8pykYH3yuoXsLHcLXsQssewUb8tBBzEDvJq9lVIiV+5/NWxq8a94rPOe9upnVz13JRLp\\n\",\n       \"fwvjJNgmhOVnSMxkXshEnhPOKef2IbaGRLZbi8mVkiwfcdwYKQgckJfFyvWFmEzxsfW2aSxaqUSi\\n\",\n       \"pADP1tYmw0GPIpMcHR/EhadLVC+n8S4ysalAcBalAgjH7ou3uHPzGqIxBA1zFVi7vYubVextrXF9\\n\",\n       
\"e8SL925Q2ZprG7uI2rG5ts763i57RYbIc3bW1pDGM68WHM8m9DbXuLZVxMlKTMh6axChNToqcnWr\\n\",\n       \"DIVm5g3D9SFvvfU6JpfU1Sk3twds9zLWB7tMqDh68oyiGLJ/dsz+2TGjXp/HRwccNwtCliMGfSgK\\n\",\n       \"Tpqa2WzCbDpHIXh2cMhsOmE0HDKfTNldH/G3vvUNbmzf4+DJI/74D7/D0f5jZmcP+dmf+RqnpzWP\\n\",\n       \"Hz7i4YNHbHxtD+wcSUUv38DUmlADRaDXK8Fb6kXFoKe5eXOb3e0tjg5OCK7i8OkjBv11tra2WVtb\\n\",\n       \"Z+/GDZ7sP+NHb7/NBx99zPvvf8DTp09jkt1H3um485Sglyu2d9a5NVxDmIY7O9vsP91n8/Z1CuEZ\\n\",\n       \"FArhHKXSrPX65GVOIJBLxeZoxOT0hF5vyGI+41//2r/in/zTVzBOYKo5ZZljqimVqSHETU+sNZAq\\n\",\n       \"TpqmotVibdIy+LgtISlobOuaAwJdxDlqrU0VM23XYiK9cpYslYviUoCNIDgX6XGlIJdFjEqdTQpO\\n\",\n       \"obWMHPZCpF2sLlfgzaIh7+UdzGGtJcilIm132EFE+tm4BWDAOZ8iYd9BmqKFc3yM2H2Cc7oSStdW\\n\",\n       \"4qStEx2d0lZpg5OLUX2bg4Ol89YmRmO0sXQC42f8EsJ5HvCf5EtX4LiYARZSdliZEKB16wUX0Zte\\n\",\n       \"8YpXYY5lGc7So45lVuc93uV4RgXcJk8vlulIuaIs2zKxVK8a0sUJIvwRJ8d5z16uPkCxfF0lMvqQ\\n\",\n       \"WNvcSga7fUZSKPDp4bZQkVsy73V3kDwMrfNoyd2S0tVbQ1kUcQyF7rwFCRRCxI0KJotunKR0CHPG\\n\",\n       \"vK17b+JY+8ZEknslGeztMbp+HQhopdm9c4+mrrvxiK5wwvCaAZvsUPZ6MWvftjO3WfUUYnUtyKkK\\n\",\n       \"wHqHlJqXX5gxr2u8g8W8Yjqb0tMzXnFjtq/toFSD6fVY395mMZ9zOp3S9x4BbOU5w1u7zBZz1l++\\n\",\n       \"gdQaUWmCFDx6/AgvBceZ5Obt2wDMa8+nj4+Y2YIPPz2lsRs4NcSIMUenDTYUPH32jJPTCtgkyIKZ\\n\",\n       \"ndAQEHqB9QvkIkDdMDmbsGt2OPpswunpHBcWnN6Z07u1zWQW+PCTx/z8xqts7W3Bg0PC8JD7+5/g\\n\",\n       \"egFTGfpqiDIlLhPYsGAgFC8MelwvFLI2nEyPyHfXMMOC4d427sMjyn4f7TOub25wc2MNguFofMLm\\n\",\n       \"zS1++NEDzhZjgixoDg753tt/wk9/8+sc7D9mPjukKDTWg9M5jYJqcsaoV5JnUNsKowUY6PmcPCuY\\n\",\n       \"NwsqM6Ec9mjqhlxmCBM5vp2KVV513dDvDTB1g3eBqlqglKLo9Qgh7hnptEc6iUajpEobiwS8qBNP\\n\",\n       \"OAgnwTqahNVrrdGZJi+yS1VIf3NA04xRIToDucgJElCOys/xElTQ1JVHk2NDE5OTwZNwCrRMRFcI\\n\",\n       \"ZErAqiDIdA5S4XzcNxeR9ht1BuFT5J9yYE1PpGRn3L7N2Qip+CZ68lJFvmMfAqUVUYGLgPSSyN4Y\\n\",\n       \"d+whCEJMlJ2v+rlEvnQFvkqS3llMsdIG2yrulTDtYvJx9XsXP9N+blUuWsjL5Nx3xNIOrrbnnk+G\\n\",\n       
\"nMeqLjvnZeeOcM/5z7YVKavjcFlN6DJkPf8Tj7EM+9ptwi56Aavnu3Dgrq7WJYL5VdFadxsztLuW\\n\",\n       \"tK+315tnGU7Kz52rPe5FA5x5H7dZ0xkbIfFZ+EDdGPIi73ZvMaahGpbcunEjeluNiR5XrPtECIFx\\n\",\n       \"lqqqmC3mmKnn9gt32Nx+Ax8CB8eHbGxuoqTik08/4vTsmI3NEYP+ABc0Tw8OqOYNWVbESgPhaZo6\\n\",\n       \"Jh1FDOsFnrVshKwlg2GPk9MFkPHg00fUixn9ok9v0OPo4DHbO2u89+BT/uVv/A7Z5jqvv/V1nu2f\\n\",\n       \"8t0/+D7To5r5wZSt/ggzXlDmyyaOYa/H1miNApBInh0+YXE65sZoiLeW9eGI2byi3y/oFRqtIBMZ\\n\",\n       \"6/0+h7MpX33jNT58tM/h2Yxqfsrv/97/zdHhI67tbEeFqHNsUyFFfMb9Xg4hUDc2diwGj0TEbdRE\\n\",\n       \"LIncGJZUpkKFWFVhnUPpHO/j/rBKCRpTEfBIHXn8Q4jjJ1OTjLMG68A6lbZ9A+EgiNRinjZskQiQ\\n\",\n       \"cdcnIaMnbtOGHBdlUc2wtiIXcSOTCE0GbDA4bHSkhKDMC3JV4KxJu/YkrFuBbueji5scQ4zyrV/u\\n\",\n       \"uQsxD9c6Z6sJzhDiPrneL8sR4z6sOpJrudhE1q7pdkN2KSIlbYsgCNFuNh3hl7/yScy226hVBm15\\n\",\n       \"T6vYuxLAFQjiojJb7Vy6rKJkVSG277XNPatNBrAMGT/v4S/loqH4cdey5Kp252EUcb7W82LmevVz\\n\",\n       \"Szjn89fRvraaFV8lkl895mon3apBWWKV58+7OjYXr6/F81pu7otj0TTNOe/hotH9nMFJYXVH/UZs\\n\",\n       \"kop825KyGFJVFXmuWRvGxKw1cVf0LMvIdUZjDMaaWO7lo2clTaxyqJqaLM/RPRiPJ8yNQSFYW+8R\\n\",\n       \"fMV8dsb1W9sY4zgdn3Fydszh4TOqakaWazIZUHj2djbY3tqIi9wHzMxSzRrq+ZxBr8B66K8PKIcl\\n\",\n       \"+5NnPPnOUz46mDKpTugP+3z3j/+E3/2d/4eMdV6++zVC5ZkcPWNt2GMxO0OrjDzTbPYHlFlOqGuM\\n\",\n       \"cQxGa5RZTr8o2RquMx5PUT3Bte0Ra4MSRUAL6Oc5a77k5OSEl27d4Otf3WJtY4e8KFjf2qTXGzCZ\\n\",\n       \"zvE+dhxr58AbzMJi0ei8D9ajZWxAsiHukuOcw0wqMq1jN6OKTVLWGZQQLBZzhJBkWeQEsdZ00ZkP\\n\",\n       \"FhnatRmDS+8M3ll0gv66vgwcTmh80ARnui5NgEwVXCbO1mS5JlhP01RIoVFKYEKNk3HfzuAacukw\\n\",\n       \"vgER9+qUEnTizfchRnEIEEqgpUShqFP5pJSRp9v5uEmMlDolO2MeLoRAFgRCRkZC72J5oHQubp6c\\n\",\n       \"cHoh465EKtPJc291Xcq9dU1JETVY5b6/TL50Bd56h3BeMbZyuXfpO4X4vPdDCN32RKuKpFU07Ya8\\n\",\n       \"q+dYPedlnu9FpbhqGFoFeJmiunjui8ry4rEvU76r17N6/It8C6sJ2D9PlLFMyFws0zx/LasRhhCC\\n\",\n       \"PM/PXd/F8Wif0cXPrB7r3Ni0O4ojuz0vs1xRyCxtpqBQg9je7VOzg+iVcTu14HHGoTNFXmQdn3Os\\n\",\n       
\"3rMorVF55MG4Ptjj2rVdlFLYxlDNF9SLBdWiAh3IVM5gMGI6nbOzs8X6xpDJZMxsPmdne4OXXn6F\\n\",\n       \"6XTBJ08+wzaeejLntZdfZmO0xh9/948xjUGX67z81bco1no8OXjKrXub3PrKy3z797/Nxw8Omc0d\\n\",\n       \"h0cnbGyUrO/eIWSSZ/ufMtoscWcz9obb9LMc2xgUYAm88vprPHx2yGJRsZjMKIWi6JdsDgsyLMJB\\n\",\n       \"JgsyXeAyj3EB7R1nT5/wD//Dv8/9jz6KdeqzOaYBlIKgEd6graUschqXEdQAZ2qUc/jgWTQGEwKF\\n\",\n       \"UhRK4hZxi7eqNnjlycsewURj3W6pJ5VMVH5po2IRG9q89bFsFpn4ueOzFCHg01YhaW8eBBYJaTOX\\n\",\n       \"iGEvqtml8zhTntl8gZYZUkSPt64tXjiyIkYCzjmCa3C2IS8LPD72XxgLFiAgExwihOi2DPQidupm\\n\",\n       \"WRYrWFzbZp/0R5ds8+RSp7kfcGlbQ5K3Hgmq0lrHI0XaDco2Xb5gWdfWOlbqc5H9RfnSFfhFqADO\\n\",\n       \"L/ZVj7j1/JYcC5/3LC8ec1VBrb63qmxWkwyr20aterjtsVb/Xv1eey2XedIARVF037/Mm36edJUu\\n\",\n       \"K0py9T6eZ1zaa7zseBcV7sWqm1XD0B6nfX9V2V8cn1UjApdHM8+7LxNbLmPhaNrhO4RAcOc3mPXO\\n\",\n       \"ETR4J85RI2gtUalpyqVuQk9A5bGd34sY1vt0fSEEyBTr6yOyrU1msxmyyDCNYWtrl1defoWqWpDp\\n\",\n       \"WCI5r+ZIoSiLAcenY154eZvxZMb8rCIXPT598IQ3v/kzTOcNZ/MZv/nt71MMS7wS7F7bZH//YyZn\\n\",\n       \"ljzfQgjNxtoG0/ExAkEvG/Dinbs8O3jEje1tNvtDZIB5tUBkGllkHJyeIYucRdMwny7o5QV5Aev9\\n\",\n       \"Eu0DmVBIHzHbrV5Ovz9kbg0//fWvYecT1socXfQZ74/5+OEBjfV85fYeL9y9TjU7Rumcjx4ec7Ro\\n\",\n       \"kM5xbSig1yPrj5Ba44zBTaf0pEZ5RXAeIwXz2Yw1rWPEpRU6yzDO0jQmbpq9kouSou2GBtRy71AX\\n\",\n       \"TCwuEHQ/QkRW1+A81rtoCJ6jyxbzKbrMCc7jfB3r9QuFcZGCIO4qFD3fPJOx2xto93JtczKxgCBF\\n\",\n       \"nT4ST3kpYrQS4sbnzqbfqeM6QkdRjcaSQ5m6U1PbfNrpR8h2vcQkpc40UomU6lrurdlYgw9xV6fI\\n\",\n       \"sXT5PbfypStwoPOIV3HV9oYuYkA/TnFdVGKX/b3qUf44xbIqq7DEquK5CIGsKvnnKdbnGaqL57vs\\n\",\n       \"71Xl2MplhO+twrsMcrk4XpdFCZflEC564hfvY9UAXoyQnnfuc/9u3w9dNI0UkqaysR447VjSnoMQ\\n\",\n       \"IokWoJRGBGhsot9MSkNqhXdxzPK8IC9yGmM6jHHQH2DqGiUk6+vreB1hm2Ze0+/32QzrGDtnPvGU\\n\",\n       \"vTW8gzzvMegPKPoNQUgMGYY+L33Lczwx/E///F8wm1UEE2g+PQXveYf3UVpRzS3rwwnj0xnf+ua3\\n\",\n       \"CAONNRWSNU5OnqHXAuu5oMw11jnmtsE5FblbZIXxnnll2NnZoarnKGkJjQGRQYJEMq3Js4y1Xo7L\\n\",\n       
\"BT/12ptMzILtjXVqp3FuwrXrd5BZwZ2bO+zu9agnmkdP9nl2eMIsjMiFZPcrtyk213l4csLR6Zgb\\n\",\n       \"W5tsjbaYHTyjV/bxucWLQJHn3N3ZZTKZcDo+w1hDCJH2QukMQqwsgUjKhfeEIAkqVqUIEciFQiQP\\n\",\n       \"3AlBEDpuvGxr8jxWlzgfW9kvk7IsaVIfSBAe62q89fhA3KPVK4SPe1xmOlVGKYVHdEnFiNXb2CkS\\n\",\n       \"lrxJNsSNiLuCgNSwI6TEB0/jDMInGLG23Q72AM7LblNvIeWyQccBQial7tO2ey0aAVIt+XWW9ASX\\n\",\n       \"y5euwFsvTWv9OajjMvz1IpTQvrbqAa6+f5nSeB4OC0uv9Hmy+p2Ln71IXrMaWawma1ePdZmX/Dwo\\n\",\n       \"47LXVw3c55Kjl0Axl53nz1L0f9a1dR4ty+fT4urt8S8+l89dQ0cfsSzZJHjy1CcQUfFo3L1cblnV\\n\",\n       \"XY0P3V9t05T3FuFU1zBhzII2qg0hYCob27RFXLBORM+/1+8BAek9eTFga2OUPCKfdguX4GpcgJCV\\n\",\n       \"3P/sKb/17T/i7fsPmFeWQTlgMj1GWU8uFLrqo5VkJDyL4yNGWvPd3/8ttnd2kVqxtbvN1HjWdm4z\\n\",\n       \"2/8Y5RsGhaaXZzglmJsm3a/EBkewnqIsyIVGBkGRF2RB01M9CJLttU2GGxts3trl9OiMWnmCUkwN\\n\",\n       \"vP/hR0xsTm0djz/tM/o732SQZbz99vs8PVwQeoJmvuB+4Vi7fp0z2zCeznnt3it866tv8Ue/93so\\n\",\n       \"KTl88oh3P/0UDzwaDtnb22P32h5BCA6PT/jow/usra1FQ7i+gWkaGuPplzmNSWWvUiGFRzoPwVHV\\n\",\n       \"NSLLccJS9PpYV1E3dfJEPcUKbLcqSmuktQgRkFlGlmWRisJahNYEF9jb2mK9P2R8fIydz1BKUDeR\\n\",\n       \"iTHypgQylRgr7ZLPSHgfu0tF5LdZ2AVZ6mVwLu4r2u4fqoqsq5ZrN4NGgnWeMisTJ1PEuat6luZr\\n\",\n       \"wKXdeKhTkZaIG17/RCQxW3meN3oRL11V0qvvX0xUXualryqfNjmwahDapOll8uPw4edd/6rn3rXz\\n\",\n       \"r0A6sd34817F8zzXyxTuxeu9zHBdvOeL710W5Tzveaxez0WYafUzz5PLqAqgVdjxX4GAWNmNuzte\\n\",\n       \"937M/kPka47XlMLvEFKFQfytsuzzEZY4H9W44PFC0FQ1pDKy6IFBCJaz8YKyLMmyAmQeSfxHt/Eu\\n\",\n       \"8Nu/+dt85zvf4/jkjOZkgqsbni5irXXVVOzefZFr/Rf47MEnVGZMPiqY1Gds3FrjbH7G7dv3OFtU\\n\",\n       \"DLf3WCwqfJYRVKAhoEKkvTXGoRAEVDJkjhBkLLUNkrox9Ab9OBpBsrOzR280IDgwxjPzFUZAPtrg\\n\",\n       \"b3zzG4hinXndsDaIxtH6wLxuIqQRPC995QXeePMeP3jvfR4dHLAwNd+ZVYyfPGZzMAClePTwCbku\\n\",\n       \"CSIwnSx49dVtTk8mvPPeuxyfjrlx4wa3brzAydEJb//wPRbzBWWRc3z8jKzs8+/+wi/wwYcfcXS4\\n\",\n       \"zzDLuHvnJtf39pjVFQ8PDljbFGyMepydndHvD6jmC8re8NI5ZYOOWDaRY2U2n2CtRxcl89mCemEY\\n\",\n       
\"lgNeuHGTF2/f+n+Ze+9gy5L7vu/TfdI9N7wcZt7k3dnZiN0FdhcZBIlMAQQl/UGVZNliWSrSIiVL\\n\",\n       \"crmKtKtMy7IlymJZJVGUTZkumiAlywwimECRAAmABJGXi81p0k56YV6++cT2H336vr7nnftmIJJe\\n\",\n       \"9tbsve/cc/p0+PW3f79f/wJ/+I2vA4paTZsIIgS97gDHdVGOdiB0Cs7XKegrTVOyPCPwPLIkLswb\\n\",\n       \"/QMnIc8jFdqCRRTcu3S0VZbja3WeciDLU52U3HEt9e+BGtj1PaKCDjXT8ef8ENOUsk51krgPJaeY\\n\",\n       \"SWI5B0cCVQDjT9jNJ3Gtk9pSBofxw77RL2PXx60zxt9vuMMqcCz/bb+vvGlVAbDnmek+DLBl3fzd\\n\",\n       \"qHdMe6pUMGDb1DPS5dmS0kFdijwXo+/lw5xRHcI6LzH29IU6Rdv+a+coo28UAsgrNnDT19FoCBAQ\\n\",\n       \"4CKQoGShq81AOIReAyEcBnGGEBm7ez2++s1LrN1Y5cWv/zHZTht3kPDkiVO0WnWGKmIgU4ZuhtcK\\n\",\n       \"qM+e4G2PXuBLX/s9rm1cpjHts9e9Tc2pcfXqFe49cQ81JWkFDTpBg/Xbb7CyNAfSIY1iXM8jVwrX\\n\",\n       \"lTqaofSQjiKOY/xaiNTnkQgBvuNx5doljq2cRNZDgvkZvNzTCX49n2R/j/3tHaSULM+cYndri8Gg\\n\",\n       \"y+LSIqlqk5Ozs3GDF+kwP79Eoxmyt7vP3MwUge/x2d//HCrPqdUbzM4tsra2xtKxJZaWlviPv/M7\\n\",\n       \"xEmKIx3WVzdwhc/m+m1cKTm5tEKeZniuQ7s/4MrVa/SHA5TSG+elV1/n/JmzfOu551nd2uLMfZDH\\n\",\n       \"IXNzc0RRRC1sMYiqVSiDJGcqaNDt7pFlCYHvARlZppDCY3X1OnOtab761a+zsrxAoxGyvb0DQjKM\\n\",\n       \"Ejy/RpbnKFHEegFyJ8cREkdmRFGC62jViOt55IX+W8uFbsFMCKTDCNCFI0Ho2PPavPBgTfuBjmVv\\n\",\n       \"HP10qCdtZpimOsqhMUF0nGqcMuVPBOBCiDeANpABiVLq7UKIOeAXgTPAG8D3KaX2jqhj9GmL4lVi\\n\",\n       \"/J3A1T7YLL4cAjnzaU7Nq0zryvUfcq1ncrD1SfXYKpfxA9FqbrV80DmJK65S4ZSvm5Jl1cBYtTEe\\n\",\n       \"BeBHbWD2ofJ4O8fbWK7fwcWE5wRZ2AVb75UClHaocmURL0Kp0TPauMGoUMZ6XWwearQ5jjaBUhuk\\n\",\n       \"0X06RSZzVYi0rkOcZNSaU/zYj/0TTp++l76c4qVvPctCUOfCY+fw+kOivX0ePXuWTGRcWr3OzqBN\\n\",\n       \"mg6ZaSratzd578OP8PmdW7R395luTZHHAplJdtY2eOqxt3L98mVOnlih3b1NmikdXEo45FGK9D08\\n\",\n       \"KZHKxXMkjtBOZkrkCEeAkyE9jyyPuXVjld3OPioMec9HP8rc8iKZVAhHcmJlRYfhHQwYRkP8mQZJ\\n\",\n       \"K8R1XU4cP0nUi1COgEBCLnBzwdKJ46ycXCFKh4R1j3pYp93usr+xwcm5eZrNOmtrtxBCkWUJAu3E\\n\",\n       \"0+t0GPb7LM4tMN2YZn9/j+Ggz87OLk6zQYqiHtaYCnxmfI+V5SU6e3vMzUyzevMG7omTzM8tcfrU\\n\",\n       
\"Cr3BkF6/X0mTb3nsSe5ZmGdrc431jRvc3rpN4Lj49SleeuUSm1t71J+Y4tzpk+xsbYDU9u21sI6Q\\n\",\n       \"KVEU4dcCarWQTqdNnmZIR+vW0yTVUo+E4TAiigqfgCJUcOB5KAVRFBF4vnYnLIwbzNmOH9RGJrdR\\n\",\n       \"FBWxlBTCKeLBCBM3hcL23SHLFEkaodSfrQ5cAd+plNqxrv0o8Dml1D8TQvxI8fePTqqgSr3x7RYz\\n\",\n       \"WGUnEbNQq1QpYwuZat16FddtA9hRoF/Vn/IGY29YVX2yP8v1TnrvJLWG/Vu5n7Yuv7wZHlXK/TcA\\n\",\n       \"XrbYqWpHuQ1SySKEQOHQIRgdNiphFOTmoFMwLmOZesc3H6VAChPkbNTqQo2VHB5TdD7DPC/sfLVR\\n\",\n       \"MP3+gObUDGku+YH/6of4oy9/g6/84m9y8thxTi/Nkud98hp85yc/RNYbEuDwFz74UbbWN7l17Qb9\\n\",\n       \"mwOUDOj3+jz4xEf4xsUXePbqZVS9STuO8IKA3/7653nXu97O7vY6K8dPMNjfwRUubhASq0ERpjin\\n\",\n       \"UQvQ7su5TvqAwvEl/ahPHPUJnRonTi3z0COPc/32DvVWAxxJmqeoOEbmKYFQNJo+WdNnqBTSDzh2\\n\",\n       \"7PZhAhkAACAASURBVBi+dAiEYLezz16W0PTqDNp9XOEy6HXY7ewyNd3EcRxO1peZqU8zHAxRjqDb\\n\",\n       \"63D23Bm2tnfodvuEtRq1WsBUs6WDqm3cZn9/j9xNmV+YRQhFPQxJ+z2CRo2pVpNkOOTc6ZOst9vM\\n\",\n       \"hjO0212ee+4FTp46zTCOWFxcrqSnr3/zW9QfewuNRo3F+UUuXr5IlGSELcWLL74MSnD16huE7jmi\\n\",\n       \"aMgDFx4myxXdXo+wVieKE/b29ul0utTrIa2pFnmWaRt3VViTKMX8/Hxx6JkT1kLSNCWJU+Ik0Qep\\n\",\n       \"UV+buxZRHYXhAKSO2hjUQoQCz3FJ/BxwyPN0tP5c1yXJdKYoWVi/KPVnn5W+jFSfBN5ffP8U8EWO\\n\",\n       \"AHA4bGM89lkwqaL4z7pRiy5CX1W5GrXkqMO8SVx9mfMfB4JxULc51ipOuVwOq1YmSxZVFixHccTl\\n\",\n       \"Osy95QPWo/T7VderdPP2u8rjUNXmKhWQ3Z6xQ878ALi1ktqhmF40tFLQAeQ2UUtQKi9iKx/Es9Z6\\n\",\n       \"SO18YYDf/NMR6+w2aA4oCjqoxMFNA7zMw8UnFzm1mscwyXjptVf4yX/102xut/nuR97HvffcS9iq\\n\",\n       \"c2NjjYXjx+gvLLAvd3jL2fN85cXn8Xb3aWY5F07VacxMsdvr4ePxgNsg76d8qbdGNu3DIGI293n2\\n\",\n       \"+ZeRJDz14IPEvQGNsE7c22UY9xCBT6Kg5riQaSsfPwyLtGMecZ6xdO4cO+027/vod7Nw7DinpAuO\\n\",\n       \"C7LIAlQETVK5CXGgo0PmuSKv5WRphkTQcmbwoghyCOemEAKaeY3F5RlOn1gu7PJdwkBnhUryVKcM\\n\",\n       \"zDPOnVxkd2dXe2lKhySeYzgY0m63kYHDrfUhJ5dPM7+0yMVLl/A9l932Pl7g88zF18nrISvTU8zN\\n\",\n       \"zxexVXKE47JUXyTNqtfA1VeusPra08zNzXPu/L1Eucvmzj7DtV3IM6RQ7OzdZnt/iuvXrvLUQ29h\\n\",\n       
\"Srls7/bZHQy5dGOVfpIwM7/IVCOlGWYszs9zbPkMTnOf5aUV1q6v029HJIOEmalpHn7wIU6eXCFT\\n\",\n       \"OX/8zLM89+xzSNXXMf9dVx90IlC5YH7+ONMzS2SpZPP2PrOzC8ggYxgNcBzB5u1VfWYy7KJ6baTK\\n\",\n       \"cD0JHiRxtfepKX8aHPjvCSEy4N8opX4GWFZKbRS/bwDV22ZRqmyw7WIDxVGAATooTWUjK7hmG6Cq\\n\",\n       \"dLnlusvXqrj5qt/M+4/KsDFJ+jiKmz6qmN38qI2o/J7y96MkgLHDRyFGOUEn9d+Ucl5F8ymEKGKL\\n\",\n       \"TZbC7D64spxrUIAz3jatHtO7vy2JlSUtfb+pN9BBkByJyHIyElSu4z3fWlvnp/7lv2bYz3nsgUc5\\n\",\n       \"efoU++19BsmALI4JazVu3Vrl1MlTrJw9izsc4mxvQ6fD7e422WCAX6+zENZ58Phb+M6G4rnP/jr7\\n\",\n       \"wwiVurhugJcLciEZRDHNVkvbpqcpzVYDJQWpEAwGAxwhCFyPJM0J6i7S8YiiiHvvf4CPPPAgvTim\\n\",\n       \"lybaA1KafJrag9CMgU6Y4lR7yxaskk5Fl44sqJJEx9kxDnJK6YQDgePTbNRHY3zqxApJkozNxXA4\\n\",\n       \"JIpiBj1t7+0HPo3Qp9fvs9du02w1yPKMjfV1er0es3OzLC8vsrx8rIiVHVMP6pW0UXMljqyTZpLN\\n\",\n       \"jTbddsq1a5s6JC01Br0OG7e2mWnN0mousLm+wcb6Oi++8Dz9XCDCBo50GXR7LM4vMDU9zQMPPshs\\n\",\n       \"q4U330VkDjeyjFdee42Z+jT7u/sszM/j+R61sEa93mRp+TiDZJdOp0PYahX5anU00cz36CQxt25s\\n\",\n       \"cPbsfVy9eoMba9d52xNvI+oPWDh+FiFymnmC5wlQKRsbq0y1GrQ77YlrAv7kAP4epdSaEGIR+JwQ\\n\",\n       \"4lX7R6WUEgcBu0vlHwLwc7/wEo8/+jCPP/ZwpYeetLjuMkjah32a46rWF5XBQghxWF/OYf2ufd1+\\n\",\n       \"Z/mzDGjltsKBWV0ZwKoOAb+dMonjt61e7rQJTIq3YvehPNY2AJZVQUd9L28A5T5UjV/VnNj32ht/\\n\",\n       \"WY1TNS6TpKh8qF3SHaGQbgak5EIQ1pr80x//CZzM48PveS/kDr1Bh0G/y+0r6yzOL3Dl+Rc4ceYs\\n\",\n       \"G2+8wa+89Aqzgc+9y4vce+EcA+8ke902WZoRN2ZoNGf4K4/+Jd5Yv8nvfeNr9J2cnowRiYPnaa9D\\n\",\n       \"13URmSAIaihS7fhSxARxXFcHL0MwTDPCUOKHdaZnZxGOQy2s6QPQVGHsiJ3igPdgc8sLVVI2NhZw\\n\",\n       \"YANtxtjYNodheJh4hI5rnee5jldSWDoZUz5D70EQaLO7KUAIojhm6vw9CEenIFNoO/7z507pFG4C\\n\",\n       \"7RbvSG2dkysGRTyccvnkd3+AWi0gqIU8/8LLbKxuM+wNEAj6gz6tRot+Z8DG6ib33XsPi8eX6cYR\\n\",\n       \"F/p9UsejM4hIENze3GFz7Ra3b10n6u6ztbXJ2QenmZmaY3tjn8B1dJiD6Vm++c1v8OUvf4lTp09T\\n\",\n       \"q9W5cuUKM8dnmVpYpt3rcfv6Lfr9AWfOnGWoMtqdHVQItTmPxXyWrurhN0OE7/Dp//jbKKV4/3d8\\n\",\n       
\"B889+y1EnrKxeovp6WkGg2q9vyl/IgBXSq0Vn5tCiE8Dbwc2hBDHlFLrQojjwO3qp/8hAH/z+3/1\\n\",\n       \"EJdnihDac+tOnJ0pd3I7rQKk8d+rzdzK7v7lOu4GdO1FY0DM2K9XtXESCFb1qepaWW1zVJk0rmVg\\n\",\n       \"LXtnlgGxav5M+4/qQ3mDLF8z77PbMQmEy/eamDdwOF5MuW1BOo2QMUoOyMWARCVkyiONY/7HH/tf\\n\",\n       \"+N1f/xxhHlL3AtZ33mBj7RZ112O4t0WrNYszHCKkw8MPP8Dl116j5wieuXaZpC558IELvOWhR3j5\\n\",\n       \"1Ve4vL7Oyl6dv/HO9/P0F36PYRiQ+D7z9Ra3Vm+xNlXjsfNn6Gz0SZJYx69WDsM0xfUChONr1QWK\\n\",\n       \"JI5wVIO/8w/+PoM4IUoTpOuCEjiOPiuQSO3WWFIxlmnMHovMGitThsPh6ADPPpA3nLfv+4RhOMYg\\n\",\n       \"2TST5zlxf0iWZ8RRrF3vk7g44NMSQ6vmUZtu4hUOPEmWEkXaXnsSCYmsR57FrK3e4o0rLzActlma\\n\",\n       \"rxcHix553uf4sWk6+2u0mveyubvDqxdfYzCMmZlfIghrnDp1lvXbtzl37z10ux0cR7K3PEOc30Yl\\n\",\n       \"MVNhSJ8eT3/ja7ztsbeytblJr98lSRMeevhRWq0WTbfJZ3/rc3z8E59gemWaer2uJSZHstZZY6rm\\n\",\n       \"88df+yIPP/wQZ88ss7p6hYuvX+Sd73gbYdhASo/pmTl8P+Dc+UdoNJoIKfnMp//DxLXznwzgQog6\\n\",\n       \"4CilOkKIBvAR4H8CfgP4G8D/Wnz+2lH12EGerLqBQm8rnUNAUtEWff0Oqgjz3b5m/yvfO0l9YNdb\\n\",\n       \"rsf+rVzKruU2h2LXbW8Yh/pY0beqUuVpejcbzaRnbJAuO02V23fUWJfvN/+MSqv8jPk0dGJvGpP6\\n\",\n       \"Zb8/rYimaPfFbpeLhxA5uSN1ICzpkqsanjPDa6+8TOi2iHf26bZX6WVbNFyYrvsk/SFxb5+1W9eQ\\n\",\n       \"jSbPv/YSjWaT2zu3adZ8ht0O6y9d4Y3nLvKdH/su1HAAt/ap73f5x9//d/mBn/3nxG5I3wsI61oV\\n\",\n       \"Mxs4tNyc6WaLwbCLUwuIen2UI0mLlF5hvUatEdKYnWW30yGoNyDPdEhStPfjKIMVBxY7piilxmjS\\n\",\n       \"HqeyJ69SOo62WatGPw2MMSZJkoykv/J8SylxPR01sdkMSRKt3zXRLXNlg702LU2ShNiL0flHq89l\\n\",\n       \"Al8Sp7vMzHh88ANPkWWKTqdLt9dje3ub4aDPcNjH8+eYmXZRLpy59yxJnNLvR2xubvP53/0Mx06c\\n\",\n       \"YO3GFVKVce/5e6jVQ86dOE+/G7GXdjl3+hRnjp8iSWJ6vQ6d7j7LS0s6CXGWcfP1S5xdWuG1Z18k\\n\",\n       \"iVNmZmbxPJed3S1m56eZnmny8LmzLE81cPJ9Vs6v8NiF03Q7faamZ8lyuPhih5mZFp32NrfeuPJn\\n\",\n       \"6sizDHy6mCQX+HdKqc8KIZ4GfkkI8TcpzAiPqsT2UJwEUFXqiTIYAqMgRuVSZWFSdXBXfpe90CeB\\n\",\n       \"XxnIDOGWB94GvjKHXAb+SklkwvhMmuA7WdWU6y//XrV5VT1f3nyr5suxzKqqxtEAiQ3IVaqlsgni\\n\",\n       
\"JPVVuW9Vm1GVdOI4CuUqlMhJlCBRHtJp8tWvvIhSIYEbsrl7hbSzhROk1DwXmcY4ZMRJn9PHL9AX\\n\",\n       \"LutvXOFd99/HVL3OqWPH+Oq//3Xm77mXP/7Nz/L801/nHe9+kkdnV3jm5Rd58p3v4i889W5+88qL\\n\",\n       \"DLKI5vQU6zeuMRhGrKwskmd9/tsf/RGWVk5wY22Nq29c5+rFS9q6JerjhCGPPfkkMwuL7LX3kY6L\\n\",\n       \"4CBwks6EnmrLBuewpKu9AydbVZnxEkKMEoSPdOilWEBVND+yiy7uGxZqlrCmTetc1yWN49F6MOcb\\n\",\n       \"QupnW63W6PkgqI5GuLQ8R5bX2d/rIBxBHCc0ajXIUpJ6nXtOnyIIfFzXxQ9cEuXSmp4GBSuez/3n\\n\",\n       \"7yXPn6LdbePWfKI4Yre9z3QrZHluHjHn8eVrX+XSxWsszC9xzz33gDhGLdQhhtv7ezQbdZo1l2PL\\n\",\n       \"x5menqXTHTA9M8fW1jaLnUVcD5577hkuXnqVOI1w0oR+b0AQBNTqDfbbPVqtGRzP52Z3j3anx3ve\\n\",\n       \"8z6CIODnKntdzM23q3P90yhCFGmegS/87q+Ufxv725XeoQVdBsnRgpSHOW0j4h0FZFWgbd9bxWna\\n\",\n       \"dd4JRMrvKNdVBqoqjnHSJnI36pWyBFFubxWY2WqJ8gZbVlfcaXMrSzX/qf0w91VZ2EwqZfA2bans\\n\",\n       \"c+qQyYSg7jJIE7zaLF/6oxe48cYui60lrr/8PLvXX0WmbaYCncUm8F260YDm4iJzp04TLh3DrTUQ\\n\",\n       \"SrAyv8jVV1/jEXeKG6+9xiNPPMoL117jxuYNHj57nreu3MMffPbzZPMzfObV53m5s0VrapatW2uc\\n\",\n       \"Xp7lkXtP8cB9Z/nE93ycmJxcOgjh4EsX33GI0oiEHCkFaabt3Z0isYfJFgX6HKmwtTk0XneiV/ue\\n\",\n       \"surqqOer5kWhdKZ5pUCpkRetNiMrIMGirZwilDSicM6Ct7/3Y4fe8/U/+gye8EdOXY7jkaYpcRRr\\n\",\n       \"BhGlowGiY58nriJLUu1Cn2SFrbc2Ye0P+mTkOJ5Lb9Aj8ASeEyBwSWN9YD411eTcvedACLZ39njx\\n\",\n       \"xVfo9oak7oBOr8/M1Dyu66OUQz1s0ul1iaIBrakGjWZINBwQxtpRaGdvl909HUOmN4i4duMGrelp\\n\",\n       \"Ot2uTvxQq/Orv/yrKLMjl8qb7olpOK9Juz9qnEhs0awqsFPZXM9wAebvsvhfJToeasIEwLGBpMrZ\\n\",\n       \"p3z/JO7Gbqv9vjIA28/czcIpg5ztvDTJCalcR9VGZb//TrFjyvVW/W1fq2qDzb3neY7njWdmsVUr\\n\",\n       \"VaUcL8YETiuXVAxxHYf99gApa/zGr/0WjpzBzV3euHSJdNhjfmGanY1t+j1JrebRH0aEzQYLx5a4\\n\",\n       \"fOsaj58+RSZgY20DN4XNjU3+eO8yywvzvHjtdVbOnuT0w/fyjWee5qkPvJ+/9tjf41/+xL/g1Mw8\\n\",\n       \"nUCy2Ys4vnKSPO2zuHycj3zsu3F8D5FnKKGzl0dZqoNGCYWSaJWJMIfyAjgI/AVaMhVCGJ7prs5G\\n\",\n       \"qjbaqg3cnqfynJSfEwh0+sTiGVGkEyz+p/Fbe0SiFMLVQcr05gNiwvwO4pjU5CUT4OUKkHieOwoQ\\n\",\n       
\"RWE2iu8z3azpiIF5jkpzslQHrMpVznTWIk5j0iylUdepCdMkx3FqZKleN2FYY39/hxwYDobMz83Q\\n\",\n       \"amXE7HNy5RhSeqSpIooSWq0QoTIiR5AlOc1ak3rQoCECNjc3mZ5f4dQ9D9KPBrQ7XSKhbcjdRota\\n\",\n       \"PWRvf//IeXrTARzGrTvKXKnKjtZ7j4GcOqyqsDl1W1dXZXlhxEQYTw4x6d5JLv13S9A2IJYXxFEL\\n\",\n       \"yH7HUaqgKuA0/Sqbb9r1C3HYzNIeu6PUIVWlPG5V5Shdn71ZlqWIu+EGjQWQsbgobwCjDcpRpEnG\\n\",\n       \"4uwxnvnmizSoEQYNXnr9Vbp7ezR8RVD3mFtcYv3mLo7ULu5nzp5jp9vFRbK1cZv7HnqM22u3CRt1\\n\",\n       \"wqkptrdX6e9HLB07Rj+NOdFa4m/90A+zvrfL7c1VPv7X/yoyrPGf/YMfhtYsflij29njuedf5O/8\\n\",\n       \"8A/S6e1r7z4pdYyNXCFzQIpCq30YTMtezXcDutZAohljfQhqXS6eHdWi682rGZwRJz2aWx04zKTR\\n\",\n       \"G/l3SHEQnEwexLlRRTtQRdjtCRuP5/k4JAUjr8iy2DRP04SjzVeVvkCvnWj/Gim1dY4jka7AVSCd\\n\",\n       \"kIbQERQdR2dkyjJQSkeqNPb0cRKT5Sl5njI9VSeOU8i0Z2WSZITNBgM5xFE5swtz7Lfb+H6ISKHb\\n\",\n       \"HZDPNPBrTfb299jtrKEEDJMhcwtLOK5DphSe73Pi5Gl+efJMvfkAbttH2+BbFdDJ5rKMLs4UpRS+\\n\",\n       \"61eCozsSKw8TcBkUq6xQqjiP8vU7AZRdbCAsu69Xcbl2e8t/3wn47DIp9Gy5zeYQyu6PvRF+O32t\\n\",\n       \"uvcoDrzqWnnjLVuTVB2amWIkj3J/q1QEwnFpei0+/7tf4NXnLkLi0KjvUMt6uCEk8YAsqSFFE3fe\\n\",\n       \"YW/Q56Hz52l3BmRJSkvWyLsRrVrI4tw8wyylNj3FFgnJoE/naof16zfYvnmbpeUTXHjoIT6/8Ucc\\n\",\n       \"m5+h4Yd8/1/7z/l3v/1Zev0BQS1EKfjSl7/Mw489pPNXolAi0xldshyVC0zuVN3/g/gz43OvKKLF\\n\",\n       \"VKrQyiVXapToFw5oMS3F1cc4SFWcPZXpebRuhdkU1GhHUEVyYaW0XfmoDil18gMERx3lOVI7zGjG\\n\",\n       \"RAP/qHoh9AFooUASQuIkRVC5XAeoylXB/ee5zvOJJItTUJC7OZ4XkGUK1/VwXB3qN/A8pPQQClyn\\n\",\n       \"RpJkMIzxPI8oGuqEzIEOXSscl5o7TZYphoNE56aN20zVJTWvxTCKcDwP6c7R6fZotppI16PT6ejU\\n\",\n       \"dEeUNx3AbTvUsm4VDjjwqoOTcuyPqljbZTG8zJWXAeYoVYa5difucxKQ3O29R1nl2CobqAbwO4m/\\n\",\n       \"R90/SbVk3lk1JncC8qPAwpTyIab9bFllc9RBcVXfzMZVNjcsz70Skjeu3ODLf/BVziyeZnZmmsuX\\n\",\n       \"LtLrt5lfmMV1BYNBTM2v48/VWaifYJjGiCSl7gbEaU7aHRC3e9y8cRMZ+PSjIe39bfJej7pfJx4q\\n\",\n       \"Bjd3WH9jjb/1j/4H3vOhD3FzY53XX3qVxx55lM9+5Rl2ox5imHHm3L3s7bfxvIBhMtCJtNHcqpTa\\n\",\n       
\"AQlDj0XIXARkKrP6qdUsUkgccdhppzKcMeigXhX3ltVpRp89qZh16bquBkTDRNhryrEsZex6RW4i\\n\",\n       \"aGtz4gnv0PFXQpIs17FIlBq7N5dFmIaiS0GR7ENISS6Uzu+Z5zp6IBJXCIQXIBEM8ggpnFEskyxL\\n\",\n       \"yVVCmuqEyCrNkWIISuI6ddIsA88jc8DxAzKlQxS3pqfJUpiRLmmmkEnnYNyKoFqDYcRic07nga15\\n\",\n       \"zPgzpBXSjV3edACH8cBOQui5NUl5ZTHxqcrJkoPMFSMRryAgRwoyEYwGRYiDe1SuRocgchRTg+K+\\n\",\n       \"MlnYqhmzwE21xt3brkMwXoXt1VlwGpjnlXWPrjfLxtMpmfukPGhHGXAmnQOUgW8SUI1aOgL1gz6a\\n\",\n       \"tuj36u/6U3M1smIVGc+9Q0WOe7getYkoVd4w1IgGDpaubkt58z26uFBkH9f0YMRgfSClQUmCkNRE\\n\",\n       \"iy//zi9zfGqZWliHuiTP2zToISOJrE+xN0yYwef0/fezON3g4tNfo+VJVNQn7Q/pbm9x5emnqQ9j\\n\",\n       \"4r02c4FHJxPMhC3qjkdreZqkOyTe3+bf/Pg/4b/47/8bGkFAf2ONjWGfpePzzKaz3Lp5iUcef4S3\\n\",\n       \"PvEWsjzDx9GxMYo+6HEt5qvgXJUBLnEwxkqh084JRSaqzfAmSXqj68o4R4kDPbowduW67qpSxYGP\\n\",\n       \"zXDR9hGzxjitj77faYaFMzKTzJVWz0hLFaNJV4d6zZUiLtKgKXOsK8BRFJJERpYXK1cIvCLbjuNp\\n\",\n       \"BkPH5wmK9ZJrBBUCgSROEozGPo5TKPT5UuqQv0rpjVEIgYcOaeC4LqBwfYfpcAqloGHhx53OK950\\n\",\n       \"AB9XG5irBwt+XGzTP+Wq2PmkRBTP5UqSF3altrgthNaxoTQgZCYdkusipBiJnLbId0DQRk1hA7Ju\\n\",\n       \"n5EIzPtGLZ8AllVR+A7eeXhcMiuDteZO85EdbNk5ogqY7X/2u8pjj2UrbFdj5uMAxIsNlcPSEiP7\\n\",\n       \"hvFiw/pRsWBAp6MaX6oHG2T52iTn3qqN4cB0WKepMplfHM9BSgeUJE0zPNfjl/7tL7K5tsWplbN0\\n\",\n       \"+30uXX2VmcDFyQQq6iNqoXZjbzY4u3Scy6+8wM6tW+ROhp/HOtpj7ugATZ5PmqYErk8uHTr7Heqt\\n\",\n       \"FoHv8tgTjyH2E67v7fDFX/k13vXBD1Hb6+IEglbg0c6GrBxf5ud/4VN87yf/H7a2buN7PkqJIo6X\\n\",\n       \"IlcH6g2tILFGqawmOoKJq3LEGqkSx9ZCUc8EVZ55rymTrLTG5qjgGIS5tyT9jZ6Z3HzzNlxHe5se\\n\",\n       \"Uh0ZLFAKJXWatBHVmvpzdZgmhebSpaV2Au2lqvXs9pmQjvMtOIjz73vuqA/GlNJeL2mOjjhZJERO\\n\",\n       \"kuRQEpS7KX8uANx0unK3GQEHo8GQUhY7lxbJVJZDlusAMJglrzktiUCKg7qVVAdxo61JK6dHs0X2\\n\",\n       \"srODLcLbAGmuVRH1pAmp4qbLnLR2RjlssneU7r28GVWpRPT1ymZNbIsjDvdlUt9sfeadrFWkpABn\\n\",\n       \"eyOtvrfqdZP6pxWfEi096dCd5PrgLYoifD8kiTJuXr/G5q0t5ldOsB8N8HJwewn9tMvsTEiaJMR7\\n\",\n       
\"bd75+HupL53k5osvsP76Szi9rk6c7DnU/IA8h86wzdLyWXZu95ibn2JqeV7TbpyyvbrOpV7EW08/\\n\",\n       \"wIlwilR4bL78CnNuwKf/319hb3aG65097rvvNE8+8VaG0YBGIyRNCzWQcFBCx3mRQo5c3ieBZPla\\n\",\n       \"+b5JYRCq5kuV5rM8/vb6qUpoPUnNaX6zP7+dYoeMMN9tyzN7Q7DXxaQ1Y0sM5no5F2wVdlSpdu0+\\n\",\n       \"jvuBaM9V87vv+4eeuxsQf9MB3Oi0D4BKjDl+SClH4G0G1uRcHA1moUOLRuEcPaSQOu2R0oHvXc8d\\n\",\n       \"uQHr7NKHPf9gHAjK74QDr7Esy6jVamOTDhN0itZkVwGqed4EhbL1/Qe/54ZhGSuTwNRwqXfa0fUY\\n\",\n       \"H140VeoXIQSo/BARTyQ0Ne6uXq6z3I6JIFxx790Wx5VoFC/mReivKoepVov9vQ7LS8f5rd/8HVqN\\n\",\n       \"GU7fd4E0V3z51z9DI0rwXYe9TptGLcRPMuL2HsH8Ap3ObZKkS5L2iVVOq9YgyxJ8xyPudfCUYtjp\\n\",\n       \"MBwMCMIai6fOMLy1xvrmGmoY85XVXfzmFCeWn+TJj34XshvxK7/8y5AkZPGQ1ZvX+dTP/wybt1fx\\n\",\n       \"fG2VoOVxUSipbYnwMNNRRXNVpeypWqYXW11lH26b56rCQZTpoVx/+XfzDlOPce67m5DG9nPla1VO\\n\",\n       \"gmZzKdPbgaSrDtHrSCIp9eGwerN6rdlmrqavWabGNh6bITT/7qb/bzqAm1JWSYwIJ1cjO9axf1IW\\n\",\n       \"zFVBQApc5+CkPc/TEUh7ng+Ig8hqgMptfXq1Q41pl/ksLwTjxWZbzlQB6qTd3uzctmkfaIJOkmSi\\n\",\n       \"l1yZ8EzRGgdDSOrQYqlayFpyrdJJH+bgASQHhHsnIJ1E9FXPZVla3FP+ZfLmcjdFb2TmH6B0clnX\\n\",\n       \"den3IqZbszz79HPs3N7lsYffwRDFzu4u8wvzyK0d8rxHEmX0sgHZIOPi668RDnrMLDQZxjPsZG3y\\n\",\n       \"PGOQpzTDBmmsEElG1O5Rkx797Q61xjSXXnyFJ0+f48l3v5MchTvI8MMGm7M1Xu5skbQ7vP8vfZKL\\n\",\n       \"G+u8/Lu/Ra2WE0cDPN8hTiJkEf9EqUKCMKrE/PC8lufOHrND81lB11WbvhBiLMRDlfXU3c5RFcBN\\n\",\n       \"Wn/fbrGfsxlBu+4q9Y4dq8j0y1g82b4j5XEcp+tx5zdzb5lZFEJY2bEO2j0pPeJR5U0H8DKh2YAI\\n\",\n       \"4EhXuwMrk1uyiPEgpT6EEIIMLSC7KsM4LDlCjmI3SKkD4ShRCNJCFIluD95pE3aZcI3Lr53D0hZ5\\n\",\n       \"bCIog2zVbm6+G7WR/duo35aDkz1WVd/tTa0sxt2JqxUCJjh5jb2rvKHZ/ZpkwpdanE65vup2VIF8\\n\",\n       \"1b1HR520ix4HC8CLEkUJjbBFliie+eazLMwucm1zg6npaW6+cR2ZZcwvzpNFLk7ssLm9hZAuYdOn\\n\",\n       \"2ajhCWjWQ9ajBKEy3BwcJ0U5PrudNq0kpjYzT601xcyZJdxUsbm5h9jY5sJDF5jxGkTDBDXX4vz9\\n\",\n       \"DzDYa7P1lVfYWl8niYf8vb/7I/Q6ezi+g3Qc7XKeUdBtAQSFMXWZTsrqr/L5y1Hc+STJy9CyzR2b\\n\",\n       
\"v20p2qaTO20sNt2XwfPbKTbI2uv3qHVX3nRstahpY1U8F5vuy/cb56kynoxivRSaA3Pdju9j1tGd\\n\",\n       \"Yp+Uy5sO4GVAMAMHxSFmkUxUKcMxF2fFWabtYiVI1yNOUxCSPM/wfZ8kSnCVsWYpgh8VAG64XcHk\\n\",\n       \"E3jzfvOv7L1nBt9ub1UpH3RWAa+9+yulRhH0qriH8kKxo8LZIq59EGzy8ZXH2zznOJPDwZpysMBM\\n\",\n       \"/WaxGZ101aLTmXSMNYm2DDq8YY2LkOObX5XFifbmuztHovKBp1IKhIMjIUlSnv7aM2xvbfPIg4+y\\n\",\n       \"G7h0b++Q7u9T91w62ZDZqWnoCpZP1ulLaCweY2t7m2lPcmxhgc1wmt7eLp0oxvHquIHHwqlTH4X4\\n\",\n       \"VAAAIABJREFUnHzofkRrhtW9PdyFRZz5bfrbbTb391n7ytc4MbPIUw+/jdBpIndjFoNpHjpxL//h\\n\",\n       \"N3+Nj3z4gzz5xJO0u9uAZhYUAldIhHJ0QguhtDkh1WBcpjchDoeUMBKPuaSfN+B6+NDdpk1z3T54\\n\",\n       \"g8Pr+fB8HKaTSc/ciSbLpcz9VgEwHEQGtZ8re3ErpVWaJpSuKWYd2MBtNjAbew/GXzMLZmydIrem\\n\",\n       \"aZ9db9VY3KnfbzqAw2Gu21yDg7jIucoQonBUEJDnurOvX7nK+QsP4IV1hr09wqCG43oIHLI4pl4L\\n\",\n       \"iaNIm18JgWPEoSzTh5kwpnsvv7983S6GGzGfNgdRHnhDCJMI2OYKyty3mWzb9tsQkB0u1SasMhdh\\n\",\n       \"NkM7kt8BaB7mtsrzMVr8+TgXZX6r6pexJS6Piz0+BxvDeEzvqnI0qFePrSOcIsiZBishtC21EhIH\\n\",\n       \"ybeefgaV5Ozv7NE6f4LN7U0W6iGpShnkOVudfQIlSF2Xex9/CyfOneX5P/omg51N9rp9GrMLRInC\\n\",\n       \"JcdvtJiZnyNyHQbDIY1Zl0Gc0hvGpK5DX+U4mSIj5+b+NvNrN3ni1CnmgibdqM/U8hIf+PCHePy7\\n\",\n       \"3kFq0nRlCVmWEdTqWlJSFBnTM6SOElIJXGXuU4/NuIhfHtuDMVQjMLJpqTyPNvNg00SVFFs1h6Zu\\n\",\n       \"2xfEnsc7mdCN5thxKttX7qsN4FXMYhUN2XXZ0UOr2jZ+ZlW9qd2p2GtpUsgLu7zpAD5pok1JUw3c\\n\",\n       \"JomolALpODpnHYKvf/Ob/O//18/yvu94Px/+wPtRSUamJKQ5Nb9GFCd6UGRBaMKYorljkdXMoFc5\\n\",\n       \"h5QBzQZau91VwFTFedt/mw3CJraqE+841oF5HOlgLGgMKIvCgUMD0+FgTVWLzW5D2UOzvBhGwAu4\\n\",\n       \"zgEHdpT0AiCdcWcju66q9lWNsV234RjL95QXo13yvLDdHQkmApSObPf6Sxfpd3usLJ9ib3uXy+tX\\n\",\n       \"YK/HvB/i+g6uW2cv2iPOFK25RVrzx+j0E1ZOn2XhHY+ztrrK8Qcv8OLT30LGKWm/g+e4iCxnuLXN\\n\",\n       \"4tIJgmFKLVVEvk/Hc/GlIPcVsSt4fecW09cuIa/MM/fAWcR9y5xuP8Ds7DxSKpIUamEdz/fo9nQy\\n\",\n       \"A6E7DiiEPEjYUJ4/ex4POOvDemZ7LA9+0+usPL72c1Xrwn5/GTTL7bKlzPJcV62Vo7jRKvC1N50y\\n\",\n       
\"U2NbfxyWSMbxqMycmf5WeTVXMUP2OyZhgV3Kjo1/7gHccKY2yMDBZOvdFRDa8F0pvSDTPMcL63hB\\n\",\n       \"jUcfexw3CPjZT/1bTpxY4WMf/DCzUy2yJLEmtxgQtHeXw4Hnn+M4I+60igu33bXhYGBNTI2qCSrv\\n\",\n       \"7jbnWxbBzKf9vjKAayeawnYZCwwZXyjmmgFJ0w4TyMpuj02Upt13Wpxlk6lJxAxGRzt+IFzmiA7+\\n\",\n       \"FqO8pko/BMJYkhdjIIwK5O4PuFSeg3kPWoIQ5ORZxhe+8EXmZ+fZ2dpmbmae3qXr+Ar2HJewHqJc\\n\",\n       \"j0bYoJcLls7dQzTUqa6aYUhDZaSu5NiZM1y+9Aa7N2/i5Tm99j5B4JMPerRch+OzU4SejwhrtH0J\\n\",\n       \"/T7dQZ99V9HPXL7+hV1wBBdaHnK6TubCseMr7O5vkgkdaCnLE3w/0Ic4QhWO5ZI8V+TqsPRRtY6q\\n\",\n       \"1Ff2Rlqe+zKw2nUZkLElwkmgVMXpGyalDMxlo4K7PccZ7+f45lXefEybKiXGCsag3G4bl8q0XAW2\\n\",\n       \"RzGFVcXO3FVl4XPo/iN//f+hJEkydlpcLlp3R7FotQ13lCbMzMzQHgxoNlsMkj3mFhaZX1jg6pUr\\n\",\n       \"/POf/Ek+8ZGP8s4nnsSVLipLQWkurEonWgYY/V5x6Lo9EVLKUUB6m/DKYpS9SVQBvH2fLRGYUl4c\\n\",\n       \"ZqMxC8BOczUiXilwnPHA+2bDqeIKytKAAeXypqUbdHjBTeKOVJ4dWlD25mH3zXU8cs1e2jWMAc2o\\n\",\n       \"DlktalfRj9I+eCMvvVwpXOHy/AsvsHrzFvedvQ+n6dLdbzOfCYak4Ag6m1soJMzN462cwJuaQsWC\\n\",\n       \"tJ8xd/Y4O6tXeeFbzzLthJxcPs5g4zZOntLp7NGcWsaru0RqSDcd0FntEEU9tjZXCXf2yQPBMIQW\\n\",\n       \"TZr9jPiFi+T3nEaeWeZdb38Hzz77LCdPHdfZzsOATr+L5wXFWaz2KpVkIF2EPCwFVnG9qjinqALG\\n\",\n       \"MqBoCXV8HMv1lzdxQyM2yI/mwKI1Q1Ou6x6SuAydmn9Vc1xVjHVMFedunrV/t/Xd9mZXtYFVlUkb\\n\",\n       \"XPm8yx7nqo2wqkzadCb2/a7u+jMseXFAeTg4lf50RU6SK7IsRaQZwhW4vk+838XzHOZOL3Nte4NF\\n\",\n       \"xyNWirddeJDH7rufF196iYuXL/HdH/0oszNTuFIisgzPdVFZhkwTHWs8zw/iIDhuMXiF7XAJzG39\\n\",\n       \"l616sYuJZmieq+ImyvkxyxNW5bVp/jbvtrl/+58tLkrHwXEKLlzIMdWMAUezqE0fy9EYy0CvMlV4\\n\",\n       \"tjLKBF/0tuif3d7DXH6Zsx8Re1at11boOB+i0AKoPAc13i67veVxk0LTmJkD1wkYDHI+//tf5dix\\n\",\n       \"M3T3u8yEITs7azRqIcP+PkIlOG5Kmgv225ucPHeSXnuPYXvA6ZPLiHjA9uuXCfpDnvmDL/Lxj3+C\\n\",\n       \"7dkWWU+QuDAcpuwPN1HBa7TjhKg9JPA8nEyiHJ/Q9UjiAUoNiRBsb23Q39pFztap12fYuLnKsXMn\\n\",\n       \"kKmEOMULA1IBfgZu7hSxPbRllfHYtTe+w+Bq5vtgfMu638P0dng+yrGH7PvLtHonRsZmQspgZRgJ\\n\",\n       
\"u85JUp75zZZeJwFgeZ2Ux8p+v93HSYBeBvAqXbfNgNgbWBWXX9WeP/cqFKNrS9Nxbtb8y+KE2NXi\\n\",\n       \"eOC4IPVid2MQjsOJc6f53B9+kSBKCOshe+02meOwfOwYrdkZfuYXfp53v+tdfOzDH6K/38ZD4joO\\n\",\n       \"UmW4ElJl4rSJIp6EfpfKM+3lae2ITmHOZbgHY6cNk/Vw9nebAy2LocAY52E2CLOxZVlOmqRjdYGW\\n\",\n       \"YMxYmXbZC9uWONIsIU3TAqQVjmO4/YPxNu8ui8eakMToHbbOetyc0gJlddiGdhJBpmk69n4hbJv+\\n\",\n       \"A24sVzlSjUtKpg1VqgMvcEgGMfPzc+zv90jijI2NXbLMAxnQrAf0dm4TdXZIlCSs1SCP6OUx3Thm\\n\",\n       \"evk4ocq5dfkinW6fhg+b66ska7dZnpmiGw956YVnaDZrrG9v0d3rkiW7uJ5HLagzMzfN+nCA4/sE\\n\",\n       \"jSZpCo2ZJtMupHGffjciDSVho0Hmh/huDTfJkdLBlx5kEcqDBB2F0M0dFIKssMgyslq532Ua1ON/\\n\",\n       \"WG1XxWXqKg5z8nY8eXutVpXDXP2B6mTS/WVp8yjpzi5VAfGOapttymeesdtpX7P7XN4Yy/XbNGn3\\n\",\n       \"uSqtX5mJK/fFPPvnXoVih3oti2BpmiIzSQq4jlPoMnOyPCPLBVkmmZ+bI41iBklEy51leWWaoF5n\\n\",\n       \"JcvoRUO+6zs/xMsvvciP//FP8Je/53u4cP48/V6XmuMSFZk4hCNJskSf8Ls6IplwHLC4CBuc4yIF\\n\",\n       \"lK3qMGaGBnDLE+z7PnD4hNyUqskcEycRI/23XezY1gfjOKp1BKiGGMIwtLggoyYZ54ryXGcYN+UA\\n\",\n       \"VBl5ixqwrEptV+5DuR67b+ZveyxtEdR+l/1bFYCb6/Zm2m53qddrXL9+i3rYIqg1eOZbX+SRtzzG\\n\",\n       \"xvVbiDTm5toqLgIPIIckzag3ppH1nBOnzrHT7dPf77E8P8/WjRtcv3yRRpaS5gluvcatmzeZm52l\\n\",\n       \"02lDnhEPhkgp6e7vsbC8gNOqIadq1NNZOmmMErC8vEB/dxeRuSSdLs995eu89fgnyNOMzRu3uP7M\\n\",\n       \"i5x/5AKDXOAlCuE55K6OrOcoiSpMP0Wpv+WxsT+rD3kPb6h6bg67optSPguqKmWp9SjVgKnfMAZl\\n\",\n       \"ACzPebnYa8BsEuaz6n1VMUeqNkCb6TCgX5WsuVqCOfjNvM+WLMr/zL3lM7g7bV5vOoCXD9PK+RNd\\n\",\n       \"PLyiT0Kf2OAKD6EUjpT4SGanprm+vspDC6fZ2eshOwMGcUy92cBzAp564p0olfNLv/pp3vWOd/D+\\n\",\n       \"73gfcRIjAx+hFJAjpQ5Fmed6gwABUh8aZlmKFAfxWgzBmByBtm2srXO23Y3jOB47oLE53qpiE0We\\n\",\n       \"5+RZPgLmMpGVDyvNdXvh6X6lY1yQlA6OMx4r3TxjmyeOqYKycf293cbyYre5F3tOq9QqNriP+lzi\\n\",\n       \"esy1KIoOEb7ruofUSgBBGNJud5ibXyJNFH/0h19la3ObcGWKs+fOsn71Cs2pOYbtXbKoT5Sn5Cg6\\n\",\n       \"7R5Ti0t4fh03GjDTlAS5Yu3WDfw4xpECT0rCICDOM+0yH/jkmUKGNYLAJxoMtbouz0mjIa4jieNI\\n\",\n       
\"JyAWCrKYwc4eapBxK0lZ//2Ad7/rvXzsQx/hH//P/4if+IX/g73bXRrCwUmh70EsFH6Wax8IFKjD\\n\",\n       \"m2d5Lqt+K9NZifoqrZXsgzWbYz4qi5VN8zBZFVJFE/bnUUBmgN98t9U05XVgt8GmLVsisf/ZDFkV\\n\",\n       \"01C+ZtPemORYklhtxqcsAdxJbWKXNx3Ay/qlcmekdMApdjyJzlKvBEIq4jzDSSXvfce7uPLsS+x3\\n\",\n       \"O+i4vC5T9YB6rU6qcnb3dojTiPe99/088+zT3NpY45Pf873UfB+Rp3hCO0nk8RCvkAgQLsJxQcgi\\n\",\n       \"CH1JF6zU6FDQcN9Zlo08Nm0O1XXdkW7Z5kQmiXD2Qjnw1kSnfcoPsgvZxfxtOOSy+AoHcVzMPQb8\\n\",\n       \"bXHREJmRMsptlGKcUy4Tvf3eNBtXoZhDX7uMVGUlr01Tl8012gfFZc6prH80i2uYDAnrDQb9iHiY\\n\",\n       \"8fWvPc3p0+fJs5wojlm7fZuw2SRwJWESsrO7Sz/OEbUGzfljbOx06fT6nF5ZQcZDRDQkUBkiF/S7\\n\",\n       \"HbpxHxl4qFxxcvkYvXyPVAe+BjK6e3vIOCNNlTarTHNUBuurt/nwRz/ItYuXIZMM6wHB0jw397c5\\n\",\n       \"c99buHDPfdRrdWr1OjJJ8aQkcRSp0IfUXmE66nCYhsbVJnd3eGYXfd9hicfzvDGuumx/bZeJlklH\\n\",\n       \"cODmPVXgfVTby3Ri00QVZ10lzZXHz/yz1R92X8vWO/a18uFl1fjYDErV73fivE150wG8yvjd/kxV\\n\",\n       \"RlYE6LXDRTqugySnhuTMykm++sUv8eFjx9lvtwl9nb1aJQnJcEjD86kHPsN8yOOPP876xjr/9H/7\\n\",\n       \"F/zFT36Sp972GP39bZw8pRkGJHGkQcI1mTz0ghGIQ5NWBq7yyboRvWydW3libJ2iWRDGTnWcE6U4\\n\",\n       \"Vx2f8LITRFUbQROXIUZ7IdqHKnabTH3lRZpn+RgHU+U1Ouq/Gl+ctvhZFkHNpmcnX7A5FvtsoGph\\n\",\n       \"TBJPHd8hS3J8p8bFN64xMz3Hwtw8eQbXr1yh3+8hpGCq0cRNYdp3YRATzszhNKbYXl1jutVCqYyr\\n\",\n       \"ly8i84QgcInilDzL2NttkwiF7/sszs5RbzToRAnRMMINA3rtDrXZRTY6beZWjjF17Dhud0jdcTh2\\n\",\n       \"4ixnTt+LK31eunmD+9/6BDfWVol9yVueegIR1BkMI2qtFmmni8gh92GIpJYLhIKsAtfKXOy3C+J6\\n\",\n       \"7A7fWwZlm07KxdDYQX1Hq0HK778b9YFdqizIJkkfVYeNVaZ+NqM1JoUWdFo+JyozFZPG2tC2zcHb\\n\",\n       \"jIfdhzuVNx3As8xWn4zrjg04ucgil17RUaWIlcINfESqOD6/iKq5CBVDFpPGCgn4ns/CsWX2ux2k\\n\",\n       \"L/GCOTrDHksLC5x/8HE+81u/yd7uNh9837vxhWLQ72ovt1xnrXaKg00d7F2QkY+BQxAERS8sQE0z\\n\",\n       \"vdkUnLfNDRwG5epT7eFwONKnw+FYDza34VboxQ0HlWVZIWKDdr3WnnsmNIgjC2lBZZUmhoYjHyfQ\\n\",\n       \"HMf1xsTociyMESHnhz0nDdB6njc2JqMwwVaMjUkRI6u48izLSCy7/1FbcoVKBZ6EqxevcmLpOMNu\\n\",\n       
\"j7m5OTr7O0iVMegP8YXCczIiIQnn5jj/yGP04pTZNMdTOfvtPeJ4gIpjPN/F83wGaVxsjDE1zyPJ\\n\",\n       \"M8jADwJc16Xd7bCznfPU299Je2OV4w89iKrV6Vy+RR6lPPf8y5w5cYKF6TkePn8/i815Fp88wdVr\\n\",\n       \"V7n/3U9x5dXX8AOPtd0dph0XmYASELsZtVRn2FHu4YM0W8K6E5hUqT90MpFxl/lyKasKyvfa1lj2\\n\",\n       \"PJVprGozNjRzp7aX22PuNcyUTXt2PVWMYlUb7N/KNFfFeBrd+6S22SWxchfYn3c6tCyXNx3AzUBU\\n\",\n       \"6YOF0ElQHYQ2BxMglEAiSIX20vSFQ5xneM06m7fXaTVbJFFEUKvT7/fY398nCGs0vSb7e/t0+12Q\\n\",\n       \"gsSp8YH3fyevv/YKP/VTP81f/6vfx7GlBRwBg16XoFZjGA012LheoYMe32GrCFcIoc3sGOc+kyQZ\\n\",\n       \"4yLt52yOw1a9TDrlNvcY8KzimO1nTP1Znh2qVzEe5tIG2ao5sRei+c0s1ipuxQZUu91mPMqgbXPr\\n\",\n       \"5nvZG87+e4zjL42D67okeUwYhsjc5crrlzl39j4G/T6dnS2kilmca6GGPoNOl66KiJGcOnWO+uwc\\n\",\n       \"ne0dLjz0AKEjeO4rXyJXGX4YEMUJoS/JE4VbC3BiRXNmiihL6bZ7NL2AWhAw7Tp4QYAUkpWVk0RA\\n\",\n       \"5gfs9YfkueLqzVvst9s8+djj1FxJcvUqJ9/6MKnnstbZ5aHT97C9c5soCHVWnjQnT1ISAVma61Rg\\n\",\n       \"jNOIGd8yXZYBzh7bchGiCFnBYYCxQdWmlTIATtLjlhmXsmqjCuTvVIwKblJfj3qfvcnZ9djctl2q\\n\",\n       \"Npwq0J0E5DZQV0klk75PKm86gJvFaB+OmCKEAFdCXpgbOgI3F0glyIqYFkEuyVyHhZMrrK+vs/To\\n\",\n       \"ElEcs7W3SzSMaTSbZCg2t3cZRhGNZpNmM2R30Gdrd5/7z9+HKy/wr3/6/+SHf+hvU/Ncjh9forO3\\n\",\n       \"y/T0DPFwoF3IrR39qEHO8hyVHfaktEGmCqhtsDScZFmEM+NT5upNvZNEx4M2i8qY40bcLYNk2Vuu\\n\",\n       \"3B7Tf8NNV4GD3W/TDpuDMXWYzcgWS219vj1O5qC4TCv2oWkURXrxOTl5pPjKF77A7PQsLz/3AqdP\\n\",\n       \"neT1y6/gOQqnXmOuMUVrbpYb3R2WF45x/oGH2Or2afcHLC3O0964SRIPWVxaoLu7zzDOyAZDcFz6\\n\",\n       \"0YD6zAzzy8fYu72FFwSE9RYzU9N4nsf23i7rr1xm8YHzbO12CB2PIPBx0oyUlD4x17ZWWQlPszTX\\n\",\n       \"YG5uhpnePkQJm9dvUW812Om28Ro+ypE4QhJIiXQhSw9H8SuDThUI2LQ2CSQO0tkdFJu+7M+qOmwa\\n\",\n       \"tdtStY4mcb93w3mb+ybFKyqvh0njY9pkM1fmTKuqXfbGYNN8+d5J16qkgDs9X1XedAA3emNTbKCQ\\n\",\n       \"UpIqEEpL/SZ7dK5yhOviCYkYaDO4k/ec4cVf+n1Onz1LnGU0ZmaYDUKk4+I4LlmaM1vYFUtHcGKx\\n\",\n       \"wfLcLFtbO0RpytueeAe/+uuf4Ym3Pk6j1cILakTRkCQeUvNCFAfWHmWzR3vyHKljthggKZsfGc6y\\n\",\n       
\"ilMun5qXJ7zM6ZoihBgzpTIqnYNUaQW3LsdNNvM810lYGSdcA8p2/VXSkSkmznr5ur1YTBvLnL7p\\n\",\n       \"Y3kB2pydabPpv9kwqsDL0M1IN+9miMTjd3/nd3jikbezMDNLb28XJ0sY9Nq4kU9/a5NW2CCrhSwd\\n\",\n       \"W6EWNth+4ybNVp14OOD61SvkSYTr+YT1JnGcMxzso9wM4TosrxxnmCbkjiSoBWzu7pImKcePHydX\\n\",\n       \"is7qOufuu5cocJmuz5AuzTPc3iYnY5D0uXTtNV545VkurF7mexcXaDk+uYBbN64wV28wPT9FnEt6\\n\",\n       \"xAgBXpQxNCqG/M7cXxnEbG61GiS1zXgV81FVf1k9IoQYnbfYqhC7DnOf7VNgz+PdtXO8DVVtq+LA\\n\",\n       \"y4yi3QebnsqcddU7qtQv5WJbo5TbVW7fnTbfcrkjgAshfhb4OHBbKfWW4toc8IvAGeAN4PuUUnvF\\n\",\n       \"b/8d8F8CGfBfK6U+e6d3GCcOOz62GUApHbxMkYqclOLwIIdUpDpBaAbCldxz4T4+c/3n6AwGhPUW\\n\",\n       \"0q/h1Ou4rs+tG2v0Ol2kEDTDOjNT00w3fYTnsPzgA+y0+yweO8Hyyim++KUvENYDzp85CWlEzZck\\n\",\n       \"qVY+2qqAPM9Htt1gHxyBKADEnhzzu0nUUD4AKjseGOI3ZotldQIccKm2+aUuCv01x45/EUXR2GLL\\n\",\n       \"Mp0ntBb6h7h8O/KavTDt9xjCLUsGdvvL+QBtyx17szIWMlWius19Gw58Eidom1QKIYjTPqurq7Tq\\n\",\n       \"DTzHxQ1DLl28hpQpjkrxXZ84iui3E2I8pHS4dWuVsFZjcX6BrVtv0N7ZgSQmynNmZ+cJ69PsuQ5b\\n\",\n       \"7T2mZ2YQjo4XPlVvUm9Ocf3iVXrdHo2pKRCSIJRkIsMJfBJStve36N1ew5GKPB/gpSlODi/94R9w\\n\",\n       \"+fItfvCH/j7LC/O0zl/gc5/+DT7+F7+HPTdlV0XUFTjDhK7McDwXN1OVwGJvkGXJ7G7EdCHGmalx\\n\",\n       \"Gq8GnTJDYn+a32z9vPktCILKM56jDkntYsfpr+5LdRyTcvtsZsGsqSodunmufK5lz8OksZ60GZrP\\n\",\n       \"crjeO5W74cD/b+BfAT9vXftR4HNKqX8mhPiR4u8fFUI8BPwV4CHgBPB7QogLygQXnlDKh3V2yfOM\\n\",\n       \"vLAAqTkurqsgR3MIQO7phKLHgyYzZ46x29nH8wN6gx6ra+vUvTqZdJhZWsRJM2pCsre/y/ZOGyEl\\n\",\n       \"SZbjeDXqrRYqV7znPd/BV7/2NGma8tCD96EcgepHkOdkEpLCu9BxHVSutCec0Lpk5QhQOXbyb3ti\\n\",\n       \"DAAZ7tae5CoO14CffeBRJo4qgpDSLALtrGPu8QO/SOAKQjq4ngCldLSQ0qKypQ2bE7NNAW2QFRwc\\n\",\n       \"5ZZVLvbCt6UM+74yl1Kl0xy9a6SfLa4pkEI7GulolYIsS8hzhe/Xefm5l0mGKc3pKZ575Rt4WYzv\\n\",\n       \"KJpTLYhj3Bz6WUorCHRW+dU1Tpw7S9Lzae9sEkiJH7YQWc6gHyFdj9mVFeR0i6nFBdrDPoFXpx7U\\n\",\n       \"2Vhbx6u5KFJ29rZYOXmCOIl56dZF+psudSck3t6lkUlUliKlR4ADIqeTRWwNOnzqUz/HA6dP810f\\n\",\n       
\"/yiPPvVWvGFKPfDxlEdGTi7BdTiI8pJlqAwcxyVV4PguudDR2B3AKYJepYxLjJO4QEOK+pIaqVOU\\n\",\n       \"KvwjzDwITfw6njylokZ16TnW9GmYHnvzLlsl2cVu66QDPu2N7FgUKKw+VKtjqiRcGFf52GpJ+x77\\n\",\n       \"vnGu/YCJw0oEfrBhqNHnJInBvvanwoErpb4khDhbuvxJ4P3F908BX0SD+PcC/14plQBvCCEuAW8H\\n\",\n       \"vjapfpubKotsUGS+EUCuyNMUVXTcKSZ04GYIBHPthO/4yx/mlS9+k/vP3Ut7EDFXb9KUITvRgI2d\\n\",\n       \"DaZdj6nGFMeOz1GbOk0aa3Drdrts3N6k2+2SCcXi8ZOs7nT4g1/4JX7wb/8ArWQPX/x/1L15kG3J\\n\",\n       \"Xd/5yTz7Xeve2t/W7/Xr13u/3qVGEhIGCWQQYMEYGRyYATzYYDtiHOMZ22MHYcfMIHvGwTgmvMDY\\n\",\n       \"4TEYGxASBgzd2AIktNIttaTeu9Vv32qvuvvZT84fp7Ju3lP1WgwxE81kRMWte8655+TJ/OUvv7/v\\n\",\n       \"75e/hFiWS/4d38PKgSSDrEySlFKQCnAdC6cSGSKlPFjyrjnn/badcWxqZZYkCY7jlE64ND2Y7eFo\\n\",\n       \"OsMMsyvvZe70M11oYa4ULX833SXEpB7MZ1RRhvnMqrKvCvhRkQTmJGF+atRfHUD6vFmnrMgRolwP\\n\",\n       \"AKVPBEuW2SdtSZKmZHmC77uganz95Uvcc9cD7PX7hKMhS7aA/Th3OwclbOyay0KjTjrsYecT0tE2\\n\",\n       \"r71wlesXL7PYnKPmN8jyjCRNyaOEuOGwfPYsx4+d4pWXX6EReCTjCaP+AFWkCMcijAf0Bx7pMGJU\\n\",\n       \"ZPjz8/jteTqtBr20T6YEdgY15WAXOZkqmLgOj7zzCd740pcYFCFLD51j7es3sYo6rbkWW8ketutQ\\n\",\n       \"yxRFmiI8F5GV+Qld1ydwfcZJREEBKkfkOTLPUUIcrCw+yvrT8qG/mwpROzR1CoajrKRqMSdr/f0o\\n\",\n       \"6uKPU74RTVHKpnl+OnlM/58NAzxKLnW7mJFN+pxp0RxFUZX3MieYKZKfjqGC6d62YmbS+0bI/Hbl\\n\",\n       \"T8qBLyulNvb/3wCW9/8/xqyyvkGJxG9bqmZ5lQ7QKxh1w+nogqowZFnGkw8/zLO/83ts9TbJUvBE\\n\",\n       \"DVWvsdRZZLnuE9gW/fUNouGQW5tbZacXivn5Be44eRppSYQlGUVjwiRie3ubX/q3v8QPfugDzNUD\\n\",\n       \"JAJH5RBlCEti+y6oshFdVebLTvLsyE4wzTN9rBrTrN9Zhyea3v2jFKBuL01BVBWpybsrpWZCu7ST\\n\",\n       \"UghxsDmz6WCtLqwx38fsO33cVO5aeR9VqnHl+tN0curnHhXXXRQFSFnCw/0BKkTJuTquS5JE5FlG\\n\",\n       \"vVYnzwt++zefYeXOu7BzSbzbw1KKSGX4QmFnOcqywLM4ubrK/LETfP3SBTqdeSgKdtbWII6JVR+3\\n\",\n       \"0cCVDnGeM4xilC1pdjsMoglWELC4vMiFl14iyzJsJRC5IpvE7K1v081dlmwbPy/3tSwWmmyNdvAK\\n\",\n       \"iZ8XCNulkTnc1Vjg6iDnlS9/iYfe8xSNVgdcn54NfhISpTnM15js7HCiMUcch6RKEVtg+Q6xyinC\\n\",\n       
\"IY4SB3uX5pYksij3AeUwajSjo46yjEx5Owo46DFZPV8FJtqSLLtuVqZNP9hR8vZWaFXLsom4p8q5\\n\",\n       \"lI/qe5jhveZ9q0pdUyhVaqQqv7er1+3G7P5ZtOya11d9Uf9fKfCDopRSopqjtXLJN/gjOE/XAAAg\\n\",\n       \"AElEQVT9EcjwcKYwkyMzFYypeOxc4jd8UpGx2F3EFh7jMGG01sNyHJDQatSpuTVWjp9kY2OL4XBE\\n\",\n       \"vz9gc30L1/doNOvUGw2wXB578BF2e3v8u1//OD/xoz+Cl4ODxLcdJklMjEJJgYXAVgInL/BsG+XO\\n\",\n       \"pp81Z9ej8iDr9zMD/JMkORA0TaWYoX0m1aCVneYCzYyC+rl6wlBKHaB6U0ma9TD7xuyHwwhEHaCI\\n\",\n       \"ap1mB9K0D28X4mYq9qMQ3Wybif3FVWUdhBBIxf5WYwLHDZhEOdev3WB3e0xtfo6TS8d4+eKnOLV6\\n\",\n       \"jOF4G8IBKk4ZiISiVuOY7bEXZqTCYml5hc2Nm0z29qhbFp4o6G9t0mx3sD0Pj4LO8VXcRo2rV29Q\\n\",\n       \"a9WxXAfLs/FrNdJBH/Yn9DRP2WrZBJaiE4Ys0aVTa9Fup4S3tnCkx9IdqyRZwuLqCdSLF3nwm9+H\\n\",\n       \"9+7zXFzf4ezxJt3VZeROD9e1+OQX/pD3PfEUF6+ssbw4T5hHZLZEiQRbWjgW2GmOpQSZUGRCEcmS\\n\",\n       \"6rP3m9x0GJu58KsUl+47E5FWKZjq96rszN5j2l/mOVNmD/f1dNK/HSgoo0Vul5/7sGxXx575myrQ\\n\",\n       \"Md/PnJS+UeigEMyMh/0rZurzjeT8j2Ot/EkV+IYQYkUptS6EWAU294/fBE4a153YP3ZE+QcA/Jtf\\n\",\n       \"eI1Hzj/AIw8/eCA4JgqvOhxMR5dWcAeKahJy17m7uHDlEv4pD5SN113g5NIiMhPg24zjkNFgxGB0\\n\",\n       \"gzCKcF2PTmsOz/Op1+v0+z16vT6DYR/bsUjDkMfe+U5+6dc+xn/zgz9MMo6JixTPdUmkohCKXCms\\n\",\n       \"IqcoFEmczXCN1VnVVLbme5kOPyHEAQrXzlIz9MsUqiAIDqJANIrV99e/N59tDgI9cEykWw1rNJW3\\n\",\n       \"+Xzz+JELaI6YwN7KkjCRjonsTLRzoAiKkjNG6GgJ/VuJEB7RJKPdXuCFr3ySE8fPsRMN+OIX/4i5\\n\",\n       \"fUulu7RAumcRDQZEWUzQXkDW2lzY2CDwatxa22Tr+nVajkvTscmjiCicECUxyvNYPHkH5+6+h9Fk\\n\",\n       \"gpCSeqvB9q010jyj0WoibBsVJ8RhjEoyxKKF43ikRY7MFE3HZeXUafrK58KVywjH4ZGnnsBV0MDm\\n\",\n       \"2c/9IafbPnff9ygUFt0zp3nt+u/T3ok5P7T51K8/w+Pf+UFuCYUnHFAZVlLmCR8OBzSCgARBBqSF\\n\",\n       \"pJACW1oUHF485rruzJjT/XYUEtf9Vu2/233qYo5by5pm2TMtLfNa0yo76ni1lBPLNIHVVNamfH51\\n\",\n       \"kjFlujrxmIDndrJ6NAI36ZpZ6ub/Sfny8y/wpee/9se69k+qwH8L+BHgH+9//oZx/D8IIX6Wkjo5\\n\",\n       \"Bzx39C3+AQA/8eO/DRy9W4uenc1z1ZhpUyk6ecGpk6d57otf4YlzjxHGOYNwQjiKsSPFoEjIHElL\\n\",\n       
\"WfiBje97ZFlOlESkacxebwfPdVmc79DttMizlPFkxCCJOHfP/fybX/x3/MhHfohoNCYQApXnqP0d\\n\",\n       \"fhSKwhK4totnDAT9bmbqWTO8UNfdjJ6AEnWkaXqQvrY6qKqUg0mX6FJtS9NZVJ0QdL2qIXtm0VSN\\n\",\n       \"ee/q7/W9q2avaQ4fpcSrA6L6rjP3VfuObHIKkYHmFXGwrYBmo87XvvIK9Xqn9JnEKcPtXXwhsFWC\\n\",\n       \"KBQyaJArl2bg01o5wa1JSj+K8YOA3d42k+GYOhmoHEsVSM9lmEakqqCVRPS2thmOJ5w5fgKUYi0K\\n\",\n       \"2aeKqTWb5F5CjkVRwKLbRmQ5kzhiI+yx5+SApHmswajvsjnskzz7PKo/4fjxBWxVsPXaRRr1JcL+\\n\",\n       \"iOVj87iuQ/rqJd4ZdNgUY/7lL/8y9z10H3/m0UdZ9FtYozF2luLU6uSiIBUFOeW+n56SkENapIcU\\n\",\n       \"k95QRcdnmxPzUeUoa+0oyrAa5jlF0dlB9JQ+V11HoP/XDvO3cnJq+TPvN61rYThgZ8+ZzzNBgn6e\\n\",\n       \"ljUTVJmg6CgEr1Nj799lpn7lJ2gL5HZxHUII3vHko7zjyUcPjv3cv/rFI6+FP14Y4S9TOiwXhBDX\\n\",\n       \"gZ8G/hHwMSHEj7MfRrj/Iq8KIT4GvApkwE+pb2AHRFE0M/vrDjSRgOu6B0pMl2piJKUUvutzz133\\n\",\n       \"cGttk0kYETS6CN+HSUYcjdjY2cZq1ymUTdfx8HyPbruDZdkMB0PGoyHbUUSeptTqPosL8xxbXmbB\\n\",\n       \"UdxYv8WDjzzOP/u//g1/46/8BEkY4dsussgBhXAkcZ5SZAkkU9NQKz2tQKvvpzutuuzezPNtKjLz\\n\",\n       \"d7odTCelqUxN55QO3zMjYMxBWo25Ne+jc4iXVJc4uJ+O666G7k3NzGnucH1/k/7S/aaRmRDaYTZb\\n\",\n       \"L73hBxQkSYrMHZQoQJQKXAgFUiJUgRAWNa/Bb//W0zz+6DvwPJtsb4+OYxNOhgS2IBmEiEaTXcvn\\n\",\n       \"xMl7ufeBR3j2pVc502kRD3psrq3TQODbDq4tEJ7DIIzIUcx1O9SbDV7/2otYtsOx7iK31tYIe318\\n\",\n       \"ISBNwbEQrovfcZBOgF/zubW5hlcLuL69yVN33cnisVVeu3gBe6XD9lYPa5/aj7KID7z7PfzHF1/m\\n\",\n       \"pCtIdvtM1rZ55PF7+NzO52gu+pw/dRfjxWX+y5ee48aFy/zkD/0gLgLH8QhqAVEWUhQFjpDIXCKS\\n\",\n       \"nDTPyWV+pBzqNARa3kplO40cmcqCBh1mvnhVuWYKAEyHvZYjjUxNytCUbTPZmpaxo/hxs5T7xZbX\\n\",\n       \"p2k6E6papgWY3SkrTadK2VTeVYd+GMa4rnsgu7Y9y9tPx8t0vGngo9tKy78+p3fKUmo29FXXrTp+\\n\",\n       \"bxd5c/DsPw7P8v92EUIvzYHff+bjMwrA5Lu0oqrGDle5YN0JcZKjGnV+43eeJtqdcN/5R4mFQ6Bs\\n\",\n       \"GsLHaTUJ5ttk/TFxskOWp8RxucmBJS0C36fm+0gpyNKEKByTZznSdpCeQyxyEpUSjoZ823vegxhN\\n\",\n       \"kGmCEIpE5iSiwMHCUrMUA8wKrGlZHJ2LQs6sUNVopCiKGYfPUdydiV71NSZ9UlW0JjqvWkBGfx18\\n\",\n       
\"agSl721aAeb7lX/WobqaZRb1ZIcmFVMOzLBGMgukQokcRQ6yHBB1v8WwH/LcF17gzdevYEuP5nyT\\n\",\n       \"4aUrZMMB+ECUIguba2EIK8c4d+cDZKmgLwoeOdnk4quvsHX5Imq4ixoPaPg2hShIFCSWzZ1330+K\\n\",\n       \"YGtzk0aziW05bG1u4jsWvi3KfOBpTCFtogzuOf8wx1aXeOXN10hVRpApHrjjLgpXslvEvPLKq7Ry\\n\",\n       \"m6CwSVWOlY954u4Hub7Ww+ouc8999xP3d1k50eXUyVVe+s+fY7m9ynXb4s16xhtb1xnsbPGR7/9z\\n\",\n       \"dOsBLgVWUZSRJ5kqc/sohbAt4iI9pARLJT0rLxopmvKn0bqJqPVYlXKK3m+nT/Rz83w2B4iWd/M6\\n\",\n       \"00oz5VIpxWNPfeDQvZ/73DOI/SyZZtbPMoR2NiS2/Jutv77etGx18TyPJEkOFiZpWdcWiynfprKu\\n\",\n       \"TlQadUupgZ01o6DNcWdav5Zl8eg7P4BS6sjZ621fiWlGQOhiCgnMcq76hfUMb0ajtBptJnnGd77/\\n\",\n       \"2/ln//znudcCH8Hm2gax36B/4xrthQV836fZtrBti2Z7Ht/zieKYwWDA1m4PAdRrHp3OIvV6jd2t\\n\",\n       \"PVLK1KQ7wyGLC13+u7/zd/i5n/1ZBhsbFGmCcC0s25qJAdfvout/FLdmcvpHOR3N/OhHJao/Khuh\\n\",\n       \"vhZmPe5HtbHZrkcp/tm8IxwMCH0/E9noftHfLcuZuafu02p/lvf7xlnYDvrfEiAFqlTfCAGW5TAa\\n\",\n       \"TkjjgquXrpInKY5rM9q+xdb1S4gkprHQokgLUC6NTpfWqTuYxBE7G7uceeQhLr7+VdZuXqPp2lit\\n\",\n       \"FpMsJVQZcV4Q5QVLx5dpdbvcvLVOvVbDlZLtzQ36G5ukjk1jqYtjS+qNBkt3nGFcSL7pfe9jeb7L\\n\",\n       \"A088Rmexy6d+8z+xvrbO3Y88yGC4g9MIyAcx0pJYCtajkE988mn+7MnH2Prqc2THlmieP00/jRg6\\n\",\n       \"FsfPnKX//EXuevxhkkbK6uoyN/e2+Pv/y0f56Ef/JzpejXC3x1K9QV5EhFFIvdMiSuJDm3CX8nM4\\n\",\n       \"xrpU6rMUgRCCubk50jQlTUvQUyLfkqYwAZhlWXied4C6TeenCWjMfjWVp5nuwaRNb4dGSxkvHfga\\n\",\n       \"MZf3tNC7EGkro9QbziGdY04auk4mNWNaJ/p+VXk3LV6l8hmgA+A4NpYlDyYRc3crDUyhtCiCIJg5\\n\",\n       \"drvytitwmEYtmGaJFoTqMm1z8Oui/w/DMY7tsNBu4dZdJtEYohGL820azTlacy36e31GKmE8KfYb\\n\",\n       \"bxclBEJI6rUaQb1DPfARQjGJYgajXfIoJSsKLM/m7LHTpCrlQ9/zYf7pz/88f+H7P0zNr1GkCbKQ\\n\",\n       \"2FJiVbhB/V5aQHXUzVHK6gBl7l+v83Kb502FfdQmBvqcyd9pxF8dSMCME9Ksk5RyZgIoj0/vV+W1\\n\",\n       \"tcCZE3DVStADp7oS03EO76F4lMkshEDJMoRQCVnuniRthBJ0u10++czvkyYJx1aWyZOcaxffxLIL\\n\",\n       \"WoFPNOyT5AWJDGguLbPUaLC1s8dTT51nHI65cusqIosphMC1HdxGg71hj0musIMaTqPF1Ru3UFlO\\n\",\n       
\"u9tgMhwyHvZp+C6eUgw2NlhZWeaOEyeZW17Bmpun0Wzx4uVLnDy+Sm9jBxeLLE5Yu3yN+lyd9z35\\n\",\n       \"Tp79w8+ztbtLf7eH6wjm6m1kzWYv2uLmtTc5c2cH363RH4Uw32CzIVCbt5Cxz95WjBs4/NUf/Qme\\n\",\n       \"fvqTHF9Z4d2PP8LmJMRTBV49IM0SkjTGdbxDyNecjM32NidyXUaj0YGiDoLAkIupfGiZG4/HM7Ix\\n\",\n       \"VW6Ho01Mx7k+XnVkV+t3SCYqsljKn0N1IirrMAUn5rlqgjX9XHPiMIMFTBk9Smb176Zcuv4ryPPp\\n\",\n       \"8zV9pZkGHTp91FZs1fKnQoGbSYhMxKpUmWfZRKSmgoNZheb4NkWYMNrb5b7z93Fj7TqP3nk/O7t9\\n\",\n       \"wjRGxjn3nbuXocywRZ0kyVhfX2d7dxdLWoxH5QKa+U6HPE9KnttzsBHkWUaRpMSTkNTKyIH5E8f5\\n\",\n       \"X3/un/PRn/5pmExQ43AmN5xZNy2AWpnrpeOmyaY7zNzJx+QJ9T1NJWfyg1Ukrn+nOUAz2sRE1tV7\\n\",\n       \"6DpU0X3ZV7Nx5vreMDvwqiZ11YqAauhkfujdzHfSFBHsT/gClJKwH07o2B7ra5t8/rOf59wdd7O1\\n\",\n       \"sc5CdwE1GpKTEKaKlmMTSsmoyGjUPG68/iqF7xCFc0Rrm6g0xLdskjDE8XyE5xNY82RpSL3VIlWC\\n\",\n       \"Ub9PK6gxGA+J4wnlIsoCV0ocbESSsnX9BmGmeOq+h3AtG6RFPIz46ue/yGKnxTe99738xm/8Ot/0\\n\",\n       \"Te8k3ushpCBKImrKwhomNFp1Xtq6zJMfeT+f+8IXuePcGeZOnmXvVo/lB87R/Q6PziDFSxS9nW2k\\n\",\n       \"9Bj2Yx65+yF68Zh/8Uv/gb/0Qz+AdCRWluKkKXXPJy1m06Ka1JrZ5uX/s+ixmurApLV0+KrpbPZ9\\n\",\n       \"/5BlV47nqRPclP1ZkDCrlKuycJT+yHM1k75iirCPipCRR97bpEVMq6Bat2oAgG5T7c8r22bKf+sd\\n\",\n       \"pEonp/breQfjQfeBGRGWJMnMtoa3K2+7AteCYC7O0Q1aRWr6+up3/X8ahdQdj6TIuPPOO7hy+TMs\\n\",\n       \"LnTx/YBMWMg45403XiP2HcJxjmO7BLUaDz34AEJaFIVib3eHKA4ZDIaQ5wjZIJeSWi3AlhKEIsoj\\n\",\n       \"Ar/O6ukTCFvyr3/hF/jId38PbdeFLCdNMpQEBFhCUuyvYMOyUAIoFFlSzrrCmkUe1UnLVOYwNdlM\\n\",\n       \"/ryqKKvIwFTK2qnjOFrYj3ZgmhkKzePlEvXZgWf2gVlX7ai5nW9DX1/2I2UbHeygbm4hZyIfhWc5\\n\",\n       \"JBQoIE8ykJIoifnEx/4j5A7buwOGoxEWknAU0lhoo6IheV6Q5CnNTgfflWyv7WC367zylS8Rb24S\\n\",\n       \"eDauK3ACHxubJIqRjsf80gIn7jjNxo2bKGuC5TcYTbbYvnGTrhfQrAXIIqcmfYo4w1EW/Z093nj9\\n\",\n       \"NaxWm8XlFa68/CrtoMbrb77B8ftP830/8oN88umnme92cR2LIorpSAc3CFBZzoiUW8MdosGQl7/w\\n\",\n       \"HI0PLrBy4g52t/c48+iDrL3wGqONPequwzAKqQcB/d0hSuQ8/vDj/O///F/wFz/y5zm7uIDv1Ukn\\n\",\n       
\"IY7nkiuFsEQ5YRYKimI/HYRV0lL7/Sz3nZhmMjYhyom2VGxaySuUOtqfY4KFKQg7nH71dt+r+Xhu\\n\",\n       \"F4kSRRGW5SClOKBzpsr3MEVTXjsdd6bsaqWq37vValMUOVlaOvKzfDr+iqIApR25inq9DrAfV68T\\n\",\n       \"z+lVraX1oXP9O04yQ4/q99MWt7YGZpPUHS5vuwI3Z9WjOFrzu2lSmOe1knBETiJTsCWLjSZWlnP9\\n\",\n       \"1g1kZmELl/mFZewTNYTnkWZjJuMxURRy5dLLeJ5HqzlH3bfpNFsszjcZ9IeMJ2PCLCcsIjzXY67R\\n\",\n       \"olWrAwoxVDx+7lFeTL7C86+/zsOPPUwry3Fth4lKcfwybagrJNKyiCVMshQpBD5l/oiqMjQVsGlC\\n\",\n       \"mkJWDQes8nm6TUzBKO9Xhq/NtqHAsuxD7W3Wx+ynKWI+nDnxcDKqKdIxnTnVfi9RnI4y0ly7PPQ8\\n\",\n       \"jVZqmYuyc3JVYGUSy2/wX/7wOS5+fZv5oE2jvcrN3R7bly5RTATx1hglE/asAiEUx5sNiEM8WzHn\\n\",\n       \"2Gzv7jLo7+AKCzXfIWi1YVzgFQ7jVGHV2lhzCwTDFMdpMyFlsnaNZiyoJxF2I2fhzHFUXJCsD7Dx\\n\",\n       \"OX7yNP1wRDLsEfW2uXHzGlEScenGZX70zF8mSWOOnz7O7sYm3VaNXtNmFCd0EossyxGeyxuvXeDM\\n\",\n       \"wkmGl26xtrOB9cQ5ROgRX9rBnlshjxLceIRXk+Q+uFlBS9mEuzEfeOg9vPrCBb4wep4f+J4P063V\\n\",\n       \"yJIxwpUkRQKiwBYFFgoLCUqQY5FhlXttFilKgXYOKlVO6nrRjxlSZ1lHL1zTZaos1QGFYlrTVepP\\n\",\n       \"l6M3FDlcSgs+I89n/WRTQDCtg0bgJkDUzzWV+kH98pxCFUhL4lkernIP9iLN9yNNVKHIcr1B+OxK\\n\",\n       \"cssCKfMDlsG23X0+PpmhjvI8P0Dc2qIxrZDblbddgZthRtWZsGruzSqHWWeflBJH+sRZjlf3mWt5\\n\",\n       \"nDpxjK2tdZ585AnWb27w9TdfxgkCYpXTbc/huR6Lx1ap1WpMwhDHtun3B9y6cZVCKeq1GvOdJrXW\\n\",\n       \"XInyJiFJlLC7tc1wNKQz38afeLzrne/mX/7rf8Hc3BwPnLqDMAxp1GpMJhOkJUilRRrHCClw2V+c\\n\",\n       \"tL9aUzK7QqwqVKaC1ArdXHChSxVNmO2jFa0ePOZ9S1SVHmrf6jJ+LWymmTmdGA6buOVxcUjRm5OK\\n\",\n       \"SZdUuXM4nJNc93Oc53jCIhcKt9MiSQWf/+TvY3t1QpXQnWviJCmW55J7LtJWxDlkSUqzM8ckzhlH\\n\",\n       \"Pc6cPk0cTdhev4EvCzKlmIyGJFlGq9YB36YeeJw5fQebm5skUcj5B+4nzVN6ZNirY1Q4YnewTTqM\\n\",\n       \"WGjMEXdtspqDmG/QX99gDocvv/ASp+46w2c+/xl+9C//GGEYIS1493vewyd+9WP4tTr33n8/b77x\\n\",\n       \"JjEp0pFEYQRSYB1bpmbbuELQwiGvuaxt9rjz3L34RcxICK6s3cJxBd5cm8CpkcUJHVlgDXa588xp\\n\",\n       \"/sf//m/zk3/1x7nzrpM4SUJDSoo4xZYKbEkmBAqBKooSlQPKWIhiKhl9zJQ9k06Z8syHHZdQTgKm\\n\",\n       
\"b8akMqrjezaC42ifiNYhpsMTpimOq2NAX1+14s2ACB02a8qmyRCY4OkgEmc/K6ipy/R9gyA4OD51\\n\",\n       \"7qYIcZjv1zSQSSu/VXnbFbj5AlVTSp8zQ36qyNOcoaRwgJg0TsmEYnVlmZdeeJlef4f5xRaLq11s\\n\",\n       \"zyfOUya7IUmccPPqFeqNBpaUOK6DLSTdVo0gCLBtm36/z/Z6iO26SGFR92t0T5zEsgSjyZhxNGTj\\n\",\n       \"1iY/9pd+jM9+4XO0azWW2i3iMKFTazCMJoQqQ9oStyg3o80VxEIhBVjFbHKhahpajWx1MXlp/V2X\\n\",\n       \"qhKthijpvTbNYiLlKg1j/k1DEMvfVZFTlSYpnab2bc3qan2r71s9Z56PnIJ6ViCLgsIp+NVf+hWO\\n\",\n       \"tdoUXp1cCC5feI05y0IEDey6h7AFg3BEvdOi0Zqj1xthIxlMJmzeuEzdkZDFCNsnT2PSNCOMUtxW\\n\",\n       \"i9N3nMWzLUgiHn70IXxpsXd1ncCxaS3M0w5W8a675FlKf6+HcnzufeQBrvR3cKXFxqsXuOPOE7zx\\n\",\n       \"5tfxazXe/Z53E2UxhQTLcfnghz7EZ//g06QoFo8f4+rly1h5ge+5tFotZLvBPefv4vIbF+jecQdR\\n\",\n       \"d46JgsmLL/Lu8w/y8o0bzOGwtzdk6EhGzgSZlgnW2rWAUX/IP/x7P81v/uffYicf88Rd9+C6NSb9\\n\",\n       \"MW49ICkKCkuRyzLkUO5vcVhVG2bKhirark7S+rhpcU2V6NG5300LTitTc+OOo6xMU6biOD54prmo\\n\",\n       \"rbohg/mnx5u+t7moSSvsadSISd3q6JTSYiy57ZwsKw49J0mSEsgZUSxSSjzPnZkoHMc5WH1dHYdv\\n\",\n       \"Vd52BQ6zNEB14OoFPOZmBjCbl0Bfu9frETTqZYNYknvvuZvXXn6J3f42w7HDcDjEcX2cwGfOX+DU\\n\",\n       \"qVMH9x+PhwwGA3r9PSzLIk4K6o0u55bPkOQWUZTQ2+2xs7lBGJYLJeYXunTn2wjHYnNti/e88118\\n\",\n       \"5kuf5Qe+93uRk4TReILlOiQqxfUc3LTAKiBXBWle4DkOjhFpYwqWqYjNya3aodW9/0yFbpqEZhTK\\n\",\n       \"7Yr5XE1XVWNlLeswBaLrbWZOFEIcDIgqMj9qgtB5zU0FYG7cYBblCMgKonHIZDTmytcvcP7cQ7hL\\n\",\n       \"S6S25OpzX2Op1SB3FFs7m2S5wG93uPfRJxmMI9b7F2jXG2CV0RLpeMRczSb3LIgVRQGbwx6OJ1kp\\n\",\n       \"ImrhhLnAZ2G+TYDFpS9vs7lxgzAI8BeXSmsribl48yYrZ87ylVdfwQk8+jc2uH9lmct7e3zlha/y\\n\",\n       \"M//4o+QCLMcmUxmO77F8/DhPvfeb+fKzXyKMU6xOAxVG1Cwf17HZHOxypmahkoTx1XWCTpukFmCF\\n\",\n       \"Ia+98CLxOMSJUub9GnbTIbIExThhZX6BkcpZrnv0d3b4vu/5MP/213+ZW1+/wve+//2sLK2QxWOK\\n\",\n       \"LMVSlPu+CoWy9uUrn/aXLjpawpxQq6F1VYrsKKVpRmDNgoNprh69G45ped0ujFCIMiGbHkNa4Zpg\\n\",\n       \"z+TB9WIafc6sb5UDl3KWETAjRarv22g0Dzl6df31p373MBzPTIhm2K9J9xy1Itosb7sC14PcXJwD\\n\",\n       
\"GChOzrygKRhVE81xnDKetijI0gxbShYWFkiSlOPHTrK6ehIpbUZhSB4qbty8VT5DCIKaj207dOfn\\n\",\n       \"9wVKEccJt9bWSNMCx/FoNms0GwFCldRDGE3Y29lF2GA7Fltr69x5+iwf/d/+CX/rJ38KWSicLMW2\\n\",\n       \"JWmUUGQFjpQUUmJJQZHlxEU2Q00c5eAz39PkhHXbmUKii2nGmQJVVbwGTXhQTEGbVfiq5GcNZGAq\\n\",\n       \"5Kq5qrl1c7BX0ZoeQCZaMs1wHeJotkdm5cSqoNVq8vu/8Z9QUcxg2EOoDKfmU2Qhtm9BnJBbkqSA\\n\",\n       \"uc4ysfLYGAxYOXMPtsq5+MKXGYcx7aCGIEUKC0m5yMIPPLxWg52tTXauXOeRRx/FzXNe/trz3DHf\\n\",\n       \"JVtqE+/tMRoO8QKPcRbRufcsXmee/o1NiFKank9Rd3n++a/yU3/jr3P23F0kaVLmzykKcASFyml1\\n\",\n       \"uyS5otGd577VBTZu3kD0xohckCYxn/rDT/PBx99LkBUMrt/izWjAXGTTsCTveseTXH3uRcI4oZcM\\n\",\n       \"iHwbT1ncvH6D1IbUlsiiDAH8zm/+AHujHh975hm+/8PfTdv3sWOBowqkKshUQSbKnN9Szm5moJHx\\n\",\n       \"YZlgRglX6c1qP5d+GGboDX2Nvr+ZOiKO4wOF+lbKzAQPVQelab2V1MR0fJl0jTm+pjHaZa5xse9I\\n\",\n       \"T5Ip0p8uxinjzbXj0XyuBp/VkMVy5bGYoX+qY/mtaCNd3nYFritYjTaBfXN5PzTH7BR9nZ5lD0wn\\n\",\n       \"VyBFge25eEKAlDzx2Dt4+pn/jO3WkcKjWWvSarUJFgNsu+TfsywjnEyIk5g8zfADj0ajged5DAYD\\n\",\n       \"8mGPwWCX7e11bGnTarVptdosLnZZPbbMJByxvbvD3t4Oynf44R/+r/mPzzzDh7/rO3FsF6IIqWS5\\n\",\n       \"EbNbPtOTZQY9JaboFWY3t9DcotkuOkuhiayPMl91O+lzJXKYzaEy5eymfWEKnh5URm+hnVpCmGZy\\n\",\n       \"id7kfiSPLmZ+Cn3fo1a/QYl0iv3JrMxhUe4m5Dg6RE3tLx5JAQcrCBj2Iq68+iYNN6A/6hMUGbcu\\n\",\n       \"bhEOh9TmCprCRjkerWabztIxLt3YYKc/4YH7T9LbXCMXFrYfEE9i/MBD5SVvDyWaqs91sAtYWV6l\\n\",\n       \"VsCVl18mQCHDkKhICGoB/fGQvVFEWne59/yDXL18jSROSMYhtiP52O89x3/7d/82Dzz8cJmjm7Kt\\n\",\n       \"bMcizTLiPMf1ArrLS+xu73Dm+Alcx+baa28S7U0IbBuv5rC7t02UD+j3trixs0HamCdf6LDd36Wz\\n\",\n       \"tIjq79GywfIsRK6oBz7KcVCuRRzGNLwa40nI8cVV7FrA//xP/w/+5l//SeZdDxAElgNpiAUIKciL\\n\",\n       \"2bFoprqoKnEzFE7/VfPK6/7OsuLA2VhGaGiQIfbjs/UScnv/mZr/Bse53QKvcum/OS60jJvUTzU6\\n\",\n       \"qjpWqvSsEIIsS1EqObjWtm1MGqkoyhWfpYI2l9fPWp1mQrpS7vMjn63rbj7vrVNIecYAACAASURB\\n\",\n       \"VMrbrsCryBsOc5768yjnnuM4B0omy8sQHJWngEBZNq7rMgkjOt0l0lhx6+Ymazd2cWpOyXlbFo7r\\n\",\n       
\"YlkS27bw/AAsm95wjBhNyLIM13Pozh+jFjQAGA1HhJMJg0EPncHIdWxOnjzFJI4Z94coy+H1y1d4\\n\",\n       \"5Ow5asLCkgLpCFILijTDSYoShdmzXm/tga4WLVBVE/YoJ6GJeGdNx9n4cPOeeuBVI0nMe5vPNY9N\\n\",\n       \"TWKBTv4Ps7uv6N8cFalQPv9wHnWM3WDK/vFKB5RUhFnOaDCiXZ/DC2qsDXfJdraZrK2TWHAjjukK\\n\",\n       \"l71awKP3PECaK5IwodvpcPP6dW5deRM7S2jNdUhFzmA8IlcTfMshTlNWT58iaLS4dfkKf+ZbniAa\\n\",\n       \"Dnnttddp12t4bp3OXItRnrCZhYRZwt1L93Dx5TcYTxIazSY7kwlfv3qBv/63/yYPPvowaZ6X++eU\\n\",\n       \"88P+i5dyMRyNGY5DVo6fZHNtm1qjgd9ukQ0j6m5Abglee/MNvvnBJ+hv75DfWGe8WODWbEZxyMmF\\n\",\n       \"DnbdxyXlxWsXyaUkkzGFkORKoAREowmNRp2oPyIZj/kbf+Wv8Qef/kPe983fRCcIUBKkEjQdjyRN\\n\",\n       \"y5BXg+IwrTtT8ZhydtSqX7PvNQVhyk9Vvs3f6ARwU+R/NAdeWq+zVuAUXEyt0Gkk0+wK7yotadYt\\n\",\n       \"COoz56Z0ULE/cehr5b78yxkQZtIx5rPStOTQq3WttsmfeicmTDu7OrjNTjUVlPlneoddx0MUgIEC\\n\",\n       \"280G3fkur7z2KiePn+HkyRN02wuMs5D+oMfOzg75IMfzPOr1OgpBy/PI4oQompAkKUU2Zq/XR0qL\\n\",\n       \"Wq2G53m4NZ/m3Byu6xKGIePxhChM8ByPURTyyMOP8Cu/8u+Z/8gPcWZhEdexSPOMcv8gcASkgjIu\\n\",\n       \"13hf00NfdQLpdtDXmdSLeU0VYZtooSok5bVT87ea1bBKgVSPmdZDtc9MzrJ6f7O/p6ZtMcNjmsWk\\n\",\n       \"dexcYDcafOqrn2aUZMwttrGGPcL+gMVGk9AuGMUJ/eEIf6HD3u4ue72QpWOnaDSbXLn8Jo6KUXlC\\n\",\n       \"DviNNpnlMxr22RpP6C4tYfs1dja2mau36cx12B1P8AsYrG+QOQ7udsDAg4kP3aVV3nzpNRwchnHC\\n\",\n       \"A08+xqWbV/lrf+tv8sg7HiZO8/3t9iizVhblZ5qmtNtzvPnGRcZhSDsv2N7eRe4J7r77Xm5EBb2b\\n\",\n       \"64wmY1Dwwmsv8t7Hvond7W3euHQBf77JlWtX2bBucf7cvZzprrC5uc5GNCaV0K43iSYRmRQ4nkeU\\n\",\n       \"JdiFzbxTZ/PNq3zwW76dX/rEf+A7v+s78FZWCIDxYILnOAh7FgzoPqmGg1apsyqy1X13QH8ZS8/1\\n\",\n       \"pwkqjpbPb5whUd9HiGl+8SpQqaJiUz5Ni3f2+bMO9eoEod+jlM0pn10dN9W2sW2Loji8TqKq7KuT\\n\",\n       \"S7W87Qr8KM7X/K5fsLprh1YGprmvConKc8jLXNGgwHG49/67+O1nfo8nn3ycjWsbbG7cRNk2rXab\\n\",\n       \"s2fPUKvViKKYMAwZDkdMJhMmkzGe59NqtfDsFgKIkpg4Ltjd2SYMw1KRuy7SkgS+T73ewHUdakIy\\n\",\n       \"HAz56b/z9/nd//RbLL7vvdSUQFiiXOkXheSAZdsHCB4Ox7ZXj+nvpiVSTYSv29AMi5pOgtMFMrNI\\n\",\n       
\"+rBzqBp7a06kVc7zdvkaTCGsTkRmfcv3mI2mqe6aYnKJvnAJ90Z87YVXIIL5QrF7a4tGnpOTQKLw\\n\",\n       \"PJc4gJX5LiqNCHs7jC2L9QuvMx73aAQSK7CIwgmZcJFuA78pEa05ssBnEKfkqcKte1y/dZPta1cR\\n\",\n       \"SYqLYBSNyKMY3CbtVof1C1dYEAGj3oil0ycQjk1hwRPveJxe2Efafvm+Aoq8IFc6PM1mMgm5du0a\\n\",\n       \"rVabPFOsnjzFzatXiYcTnHqN3XiMa4HIC9a31thau8X9J09xc7DNi1/9CnOrK5x8+DyFLZG9EfPK\\n\",\n       \"YWhZjGyLNEmwcoUQkjRLSnqw0cQWNirP2bhygw9965/lhee/Ru0dDnOew+Jck/EkRFZ8Tlqmqoqs\\n\",\n       \"Ornr/qxGqZjpMo5SSqZiPMrK+0bKTK8U1T4TczMUXT99XKsck2Yx62E+27Jm5VBvHGG+m6b7qgEE\\n\",\n       \"Zv2rei7P04NwRo3STYvXsixc1/3/Ry4UmA52M2RQN5A565svaRYhBIUqTRlLCqSAnII8jzl77jTZ\\n\",\n       \"0xOidMDKsTay6NAfp8RpyvraTRzXxXM9XNdjaXEe27IZTyZEUUS/t4cQDo7jUq/VqNcDao02UkjC\\n\",\n       \"KKTf6zEZjgj9jDy3EU6Ia0tkmPGF3/sMnW6Xly+9ySMPPYCMY0SY4AkL4QhSVVAu2ZwiApO308er\\n\",\n       \"W8iZnWqaiKZiNdsVdESK3ol+Np2miZjNAarrVG17HRN8O9Siy1F0jxkmaZrm1aRYptCb9JJlWXjC\\n\",\n       \"59lPf4Fmvc2J8+d48/kXCIRN4FpQJKSDAYPhCGd5AZuCQW+HpmthJxPinVtE4z3spke73cav14gS\\n\",\n       \"yHMbt9Fm5cwJWsuL3Lh4iXwYUas1eePiBfauXWfBsQknI1Jf0jixxN54yODCVey9CblMef8Hv4OT\\n\",\n       \"73iU2oll/uCZ34E0p7BLP4ek3LcSKbGF9vsUvPziS3hegGt7+J7PKJpw5vQZXvris6yePo7bbRPv\\n\",\n       \"7mLnOe1Gg9e+/goP3/cQq0vznDlxkg984APkdZfNly/w+hdf4uSpU7RqPv08xJIujrQJlcJ3XLAl\\n\",\n       \"43CCbTsIIUn6Y7JRyPsefifPfuFZ7n34XkTdJ2h61OOjd3k35aU6wR8lhxp8VcFWFaFXZVffx8wj\\n\",\n       \"dBS1qJ9t5hPR15t6wkS2OsmaLmYAQRVhH4qAOsIaNetsBhloH1DVOX+U/jLb1pT9P/UUilYAt0vv\\n\",\n       \"WCXxhShX6ZUvqfna/fwhlo8UEssCC4UQGblQ1H2fJ598lOe//Cx3n76TLEpptY8x12xQW6pjWTbj\\n\",\n       \"yZjJJGJ9Z2dfaTp0Oh2OL6/gBk3G4wnD0Yjt7R1G4xGe69JsNbn//gfxXI9er8fe3i6jcMgwimg4\\n\",\n       \"Dp60ufPuu/nl3/o1gkbAo2fuwhV5ufmulBR5UW4KUUGlnudN6RTDHITZ1WlV81KHOCk1TS1bxqiW\\n\",\n       \"99bOQJhFCJqUrVIjVWHS/+u+MhffVMtRg9VcYWZyhVJK0jSumOZlbvDSstqP2thPhh/FMdevXGWh\\n\",\n       \"0yVoNojimLbtIYqYFEBYNBsN7PlFojCEvGB1cZnrly6iwiFL7QBFytbGdbqLx3HcOp7wWFw9Ruv4\\n\",\n       
\"Mv10wvETx1k8ey/XL1zg5o2bMB7hOzaTyQSvMcdQKZAWblqw4jf5lnd9C5uZ4tr6Gnc2m7S9FuO9\\n\",\n       \"EWk7x7b2zWo9Ye2/43A4Ynd3l+FgQndunjge0Vzs0Lt2g4ceeoTPfvlz3H3/XdxKYrL+iP54SBYq\\n\",\n       \"tvd28Go1vvX938b8wgKXdtdZW1+jJW3kJKY732FQCNJJSsttoaQio8CybAgsMgSBG+BZDoFlsf71\\n\",\n       \"y/xXH/pz/OZnfpek6XJicZGaKDcqEdPqIvZztat934TKCzDiujVA0KCjlA2Nnjkkd6bCKuWCg81R\\n\",\n       \"QB1KI/1WYYRV5D4FHnoFqOnsn44f0+lq2/bhiJFiqnM4yKsCRa7Qq4ahHGMm963fVY8Frbt0O5jo\\n\",\n       \"3QRJB2xCxTK4XXnb84F//lO/PdOZVbSnw47Mc/p7Fb1p54QuSimUKJ04cZbyb3/xF/jWb/8AWZ4z\\n\",\n       \"2CidDpYtcbwyD4Rlyf3g+hxH2jiWS5qk2C44TmnS1Ov1A0U3Hg1R+52llVng+6RpSpSWW6uFaUKz\\n\",\n       \"3eLG9cs8+djDzLdqyCxBFBnsL2TW73OAFvZzY6MKBAqEolCg1GFO/Ch+T+9+LaXEdV0jFO9wzGtV\\n\",\n       \"aVf5xOpx8zfmc49C/UfF9WskremfKaJP9yen6YRdFMVB/SVTzvLSXp/nP/cl/FQSbY0I+32EyOiN\\n\",\n       \"ewzCiJ1BSHv+BJ7fYjLewHd80tEIe7xFx0koipjdKGGARWd5BYuCZDQkdxehv8Vo6xpFzWVsOUgC\\n\",\n       \"5h2fO1YX2Iv64Dq0al1stUptLsNK1jiWODiTBpeV4EbL5ubaJme8Dv/D3/1J9ro9VGrhKKdU4MbE\\n\",\n       \"e+PGGp//wnMsLh3DdRv4QZ1+NGGhM4fq99h54w2KeIzVcrl0/Qrh9R06RUBar/FDP/P3mF9ZIe6P\\n\",\n       \"uHzxMtGVdc4qn0ZSIOcCNuyczSIl8utsxCnd7hwkE9IiLZd3FzmBZSEzheN6ZNJBtBq8eOkip8+e\\n\",\n       \"4tzxLm3LwQljrDwrdz9yJZktKYSFyAV2IZFAoqZJl0xazVSaGnBpJ/cMahX7iYNQCKbjPCtm5VtK\\n\",\n       \"yV33P3lIn3z95WcP5OUohPxWlI05KVRpy/LzMMWr1OxGFlPZn13IU/2dWb6R3jUV/GNPfTvqT2s+\\n\",\n       \"cJgOdL0Dh5nAJUmSg8bOstkQuCntoI/N5pi2LAslBLbrkI5zarUaly5doj3X5p57HqPm10mzjN6g\\n\",\n       \"R2/QI8tT8jyj0ajTaXVwLJvBYMheb4d+f7KfNMei0WhQq/l05jrUazWUUoRhyGg0Yq/XK1dV+QHz\\n\",\n       \"c22SLGWv3+PE6gk++9nP8Re+78NMshSBKHNWO/KAA8vzjEIV+0lxSnSDkEgEUOxv3lYW/Y5ZZQlv\\n\",\n       \"tp84y7anTpGqeVvlGKuoSM/+pnOliir070En7ykqfTLLfVcVuXkvpaYJvPR1eiHH2toax48fZ2Nj\\n\",\n       \"g263i8oL3PYprMZ1rDCnP+4x2umxtNTGERLXEjx6/kGcYI6r1zZonKwR3+ox3rrB4rzD9mSPdKzI\\n\",\n       \"E4fClkSxol5r0W4sY51rce1LOwS1BUaTXeZXArYG26SNVS71RjQ6HXyv4PXXn8W1VwgaEteaMAna\\n\",\n       
\"JEPJdm6ztjPhpde+wrt//IcJa3tMEvCVRV4UWFJgGeF1fuDTaNTZ3dtjdbWJlJJOUKNp2SRKcOmN\\n\",\n       \"11lq1JiXHc4tHWc9lVx97Qo/8N0/zN3Nefq9EZ/497/C2aVjnF09SdLrc3Owx1JYcHJ5ka2Nm4ha\\n\",\n       \"wN1338nW9ZssBnVSMgpHYDs2ji0ReYHnBYRZTuZYnD1zmtdfe5ljc4+RKcGc5+Lu72KXxjFZLFBC\\n\",\n       \"YkkLpIMlJGXaVk1TlNarGc007W+LfD/O3jIm5KmCh2J/nAMztMtbFXMRWHU9hC7mRGAqbtPKPIoe\\n\",\n       \"0rtOmdRQSQsd3rrQDKGtRoAdVaq0qbZuddFW7luVt12BV5e6xnFMHMcH34FDStks1dlO0wcH34VA\\n\",\n       \"RSFBo849d9/DlWtXue+++9i4dZksz1FI/CCgUfNxXJc8L5hMJvR6OwgUaZrQaNRYXl5CSsFwOCSO\\n\",\n       \"I7IsY+3WrZKT9VxczyWKJ/hBDYUgTiLS3ZwoinBcF9e2Cbwan/7sZ3jskfNYtoMlbSgUli2RlsDB\\n\",\n       \"RqmCJIkP6ANpWShZJr6qIuCjonfK5bg6zKlUhFOfwuH0AyY6MIX3qOT/VROvOhno66vctdk3Zn1n\\n\",\n       \"eVHJNHGSjsHNOHbsGHEcs7S0TDiZYNk21mgDO9wlkDXSZITfqLEzHKJsiTfX5vS9d3FzfZtH3vEQ\\n\",\n       \"YbbHtbWX6c4tM4x3GSU5Ld/Hkh5Oe47G2dMM+0OCRLB5/Q364R51LyAv2kQ7KWcbqyy1Vsmbba71\\n\",\n       \"hmRDi5PuecassbfVo9Xtsum6JF7GJBoz3LjBfXce4/FvfhzVcLFjiV1YWHJ/+lXT93cdl16vT7e7\\n\",\n       \"ROB7pRXi2ewlQ4Zxj7NPPszlr32VwfUhXqcByy3OnX0Xd77rPIOtDS68/ibdBOr9iKzWZyxi9tSY\\n\",\n       \"3qV1Vvu7nDixwpU05Pq1C7ScgDQKSYUiTRVFCXiJo4jA95G2RVRk2DWfx88/zMc/8Zt81we/A9d3\\n\",\n       \"GaUJNcfCth3sfVRIAXG+TwGggcB+Dn8yhADbckBAkZfb4iEEtlWmc8iLgjQrE2ZZ0ogZF2I/Tqts\\n\",\n       \"rKOioarFVNrmNVWHpMlzzy7Wmd1+0LQUqxElVXrjrUBKFcSYpToW9HuY4OePw4687QrcnD3NztKh\\n\",\n       \"QL7vz8yuVUWhv1dNJjP2U9oW2xubnD9/nl/7xMd58skn6c7XaNSapFnB3t6AaDLGkhaObbM4P4/v\\n\",\n       \"u6RZTL/fZ3d3wHA4pChKFL+0tES72SLNYqIwYn1jndF4WCo9CbXAx/MCHNtjOBiUyHw4ZKG7wHDS\\n\",\n       \"58bGNidPHYciwZY6S6DuMIXj7Ds9ZCnQhdICMuvwMTtYH9eTl21LA5HPzva66LaqRrJoRGwqY9M0\\n\",\n       \"1t+PokqOGkj6GvM+uujvSZLOOGullHiex2g0QkqLLJ3sp+F0ufzlT2MNBmzsRUg1was3oJBs9rdZ\\n\",\n       \"nF9kd7xLpsZ4fkJvFNAIFCu2za2Jj71wD1YKQZZjd+bonDzDq9FlHM/l3HJALXXIdxO8us2P/vAP\\n\",\n       \"cunlr2Lbkm/78x/mjY0drl+6hb2X0g3g47/5NHupgmCB69ffwM7HHJ+b5x/+zEcp2k0G/YKG5wDT\\n\",\n       
\"UDR54G8Q+J5PvV5nvttBFRmeG1CIAs9xmdgWnWMrbFydQwzHhHtD2ktdvIU2yydWufj059m9cJW1\\n\",\n       \"Ny/y6Ps/ACojyxMSmTEKd2lFLu4WuE2fO+46w954TG675IUCIRGFwHddml6dIstwA4+gSMGRTIZD\\n\",\n       \"vvfD38//+a9+np/8qZ+gXvOJsgy3KHALgS0ssK39JFjgqGnoqZ54J+F4htuVUiKkQBUZUljYUiJs\\n\",\n       \"i0KIfT55yg9r35bY3wu1arlVi+awq0X7i6qL42A22ZWpV6qbr5RzyiwIKWmgw8njhDicCrt6jVln\\n\",\n       \"U0lr/aXB61u9r1nedgWuZ0A9+M0Ui7ZtkybTvejMGU03tBkqJ63ZmVAKgSUEUZLQ6XTIlOKdT76D\\n\",\n       \"r37lq5xcmcd1fHyvgec1aNbreF7AeBKRpRP6/R6FSvE8h9WVZTwvoChyojBib3ebrc11XNclCHyW\\n\",\n       \"lhYJ/IA4iYjSjLTIGWxtkqc5gVcjcH0a9TpRGhEmMV97+RU6y0u4eUaapVgSrP0oEb2pb4GepECR\\n\",\n       \"Iylzi5voQLcTTCe0opjdDsu0WI5KEgXT7Gy6VDlrE0XomG/zejO3hVnMwVGNMjEVvqbPppQOQPk+\\n\",\n       \"nudhOy6j8ZhGe44/+qNnuXRpB0/YXLpwjZYfQBiTS0XHczl/9z1s9XZLBSEF62s3sYoRfTXCqjdw\\n\",\n       \"VQPHLlBhjyDw8AuPZX+OuxaaXBhcxc8DVjuLnD25wnOf+V2EkxJ057m8tcHl7S3Ov+deetcusHxz\\n\",\n       \"jn/yD/4RX7r6Jq/u3GK3v0Xb7vIXvu/PUat3iUXASsMj7K2jfOugP7UlxX464SQqt/JrtubwgxpF\\n\",\n       \"kjIejhht9+ieWuWu+x/g93/112m5LpG6wjtPn+O3fuFXeKp1EmsQErg2t8a7eLLO2sYt9vpbhEmf\\n\",\n       \"aGPME/MP00GxefUy3soiW6M9PCeg5tTJkxyJwFKlLyibRDi+RZYkWErR2xvxrd/xnfzupz7Nd/3Z\\n\",\n       \"76BV85BhjEQhioIkLUis0sdU7KdXIEvRu84oVJk7Pi33gXQcB1vYiEKhpEIhKZSgUAVpkqJEGbJ3\\n\",\n       \"QNEpjghgODoO3JRVfY1pnZpOeF2OWnBjgsQpIp/dPL36THPdgm3POlmroOZ272LSl3ojDF2vb6TE\\n\",\n       \"33YFbipiU1GbkSnlO0w92RqJHnTYwQIJg2sqSve5QlELgnLlk4Tv+/CH+bmf/3ne+9QTZWTJIGQ4\\n\",\n       \"3KbVWkDi06jV94MyCrIiYTDsERYJnhfjOg6Oa7OwMI9Sin6/R6+3hyjKZO6+7xPUA6TjUHN9hv0R\\n\",\n       \"RZaS5ilRpLB8h253gVq7ya/+2sf5y3/xB7HS+EDohSjzD0spEEimbovZeGhdTO5Md/R0ZevhXOFH\\n\",\n       \"hXIdRaGY39+KEzTrYiJsfW25WKHkRTWdo9Rswn6NcKSc7vhi27LMLidzHNcjLwra7Q5f+MIf8fTv\\n\",\n       \"PMOjZ99Fb3Mdv3Cxw4RotEVhCVorq+QTRVbUqc0vMih8gpufIQ4ztkWdvLAgH6DsmEke47faXO3v\\n\",\n       \"stu7waluk2AiCfOC7XCbrZev02l4nDt5B/EezFlzfPnzT/Mvf/Zn+bZHHuZbV+6HWopdEzx114P8\\n\",\n       
\"3u9+nA999w9w72OPk3qQ5js4qYcvLWJpIfaTIlGUERxFXibvD8MJN65fZ2k5xfc8Egt2wgGhBS/d\\n\",\n       \"uEGexvh3nWb70g2iy7eofflFnnjgYVLXZi+aMCLjhVuXERImW9vkeUwsIkaBx9pomzu7d5InKS++\\n\",\n       \"+CK7ElwnIAia2JZHq9Uh8GoUKqfdajAe7pHnGSmCZneRWCiCxhzPfPL3+MiHvps4j5GqfAclBZbj\\n\",\n       \"gJRYYuqbKn1OLrZtkaTxwfaCiBJZW6ocm0lS5jhhf4d7KSwsUSAF++2TkyqNkDWSPjoKRW9ePEXr\\n\",\n       \"6v+m7j2DLEmv88znM2mvLdtV1W66p6fHN3pmgAEIYOAIgqSAAEYERO2SIQXBXXFjpSDXKDZiRa1C\\n\",\n       \"DDGWYoRErqgfIhQUQS5FCqADQEI08I6CH7gxGNNm2lV32WvTZ37f/sh7q27V9JAMShHk5p+qupWV\\n\",\n       \"tzJv5vud8573vAch9jPx+t6cvZcPFvJnv87qxuv33O8m3g9katXJLPVSf89LnotZPJulXmatCWbf\\n\",\n       \"f3ZR+P9FBH74IsyCw/R3sy2nsynKYQCabvsaF+qU1VqqssQPA7Tr8PBDD/HVJ55gbfU4c91F5hdC\\n\",\n       \"sIper8doNKKiHv3U7jTQjkOjGUwohfqhyyfpV7vdYml+nrARYoyh3++ztbVFlCRIK+m0WizMzwGg\\n\",\n       \"HZdREhFlGWHYwHM9nnnuee4+dRIlFH7ok8RjnD0ToalzhgQB4tACN71GruvOgOMU0NkDxtkbaLod\\n\",\n       \"7nqbvYlm+efpa7N/M33t8N/ePoI/aPy/z++ZAw9R/X/vR0lFUWCpawNQ66WV1vzq+3+NTrtL1xNk\\n\",\n       \"IsdVJZgKtxEyzAqGueGJp59nUBqWjpf0R2OWkg1OhB08U5FLw03j8GISkjoea50FQtHHu2Oe8sQr\\n\",\n       \"WH/qT2i4gpOn1lheWWK43efFizdY0i5bX/scP/fed1P8vXfwi//3v2Krs8nFbz6HWH6EZatRruD7\\n\",\n       \"/vbbuLLdRwZgi4jCJFRinspOfF6YgJ8FgSD0A8bjEb3dAbdu3eKpJ5/EXZ5jPB4z32ghGwGFsKyd\\n\",\n       \"e4C1haMMrt5gtN5nZ26HD118guMnj7HUWGJMiXAEw80NTJognYpRXvGZJ79CKSxnj5zkvrDNc71t\\n\",\n       \"ml4TvxFAo8GVrZtc29yi2WxSpBkNx+H00WNI5XHp4hWMtigUzaDD5cvXWW13aIYNXCWJspSyrKhs\\n\",\n       \"VXeaUuFoB6EdKgkWSYEAqZmqM0pT4VMrV5TWtR59MqXdmoo8zWrFlJR1TSnJJ3YMdS2qLG8fgbuu\\n\",\n       \"M8GLahIw7N/nU/CvgXNfRji91w432MxGzbWwoDhwn0+POb1/Z6nJ6b63yzZvV6u7Hdd92E76L9r+\\n\",\n       \"2gF89gQOn1AN5rMOd7Ue8zDoTCPO6WchOXgBhJQ4WiMRZEnK3WfP8ju/920eedVruXn9Jp6b02p0\\n\",\n       \"mZtrs3xkkTTJyIqMylSMRxFpEtNohGitSdOUPM8oy4rxaIiacLW+7xOGAcdaTSyCIi8Yj8aMozFS\\n\",\n       \"ScjSWl0iBVGW8OhDr+TTn/oEZ0+foTQVcZojpUY7mqosa3mirY32QWAqg5lped+TWt2mNbmOcg9H\\n\",\n       
\"DS+1KpgulrOGP9NjH64pzC4ELwfchz/Hqtqnxm7Hk88uxq52yMt8EmXVo72KokC6Gqkk43HMm970\\n\",\n       \"Fp78zpP0yx7raY+B7zCMBaO0ZBiVrDUNc8WAcnidsnqGRtrjkmxjk13ag012ioRvVPM8GZ2k7YW0\\n\",\n       \"qm9y1/wGN7Zznr3s0W0UbG72aW1LdjZeRMiI+ZV5RLjMN3aAGzli/AL/6B98L8HqBbaG5/j0F9t8\\n\",\n       \"84kd/vH/9S+50t+gcgJMJRGEGCWo/AbGRrVUbvpZTFKrF154AWHh3LlzCKloNpoMszGtU3dybHmV\\n\",\n       \"hdVVgrkODS+kg8vnf+8P+dQf/CFFP6azsMAj3/9W5o6vgpRkNueFF57lT37nA/hVQRC6xBT8lye/\\n\",\n       \"zujaLR648x7OLyzRzzKyQY9Td53i/KsfYigsVrnYyqILS0v5CKGJLBSmpLL1pJhm6BNIB1NkoDVa\\n\",\n       \"aEaDPp4fUsqi5t/T2pEvyxMAWq0GXhhOgguDdDS2Mpi9z72iKiukkDVo+xrX92sgrgy+7++NGPvz\\n\",\n       \"RotNDe9m77MD1OpMYXCWsp29Bw/f8/v8/cH7u75/a0OuPXzZu6dv39Q2W9Q/bH0x+yzAQd/1v8z2\\n\",\n       \"1w7g0wjysJxmenJFkSMmYnnYj+Rmedvp/o5TG6IrcdA7uMoLPN/D2Do0X5xf4NxDD/PFL32ZV73q\\n\",\n       \"Ucq8JMkitm5dptls4/s+3e487e7cJM0dkRc54/GIoigIw5BOx6fdbE44YcN4PObixRvkpcFzPdqd\\n\",\n       \"Dt1uF8/z6o7O4YD+cJckSVDaxfE93vSmt/DT/+xn+Pmf+1lcKcAUpGmCoyQCUXPeoq7KG2GwZr8b\\n\",\n       \"cXqjTI3sZz/w+gYrD/B/dTp4sANteo1np+/Myg6nN/esk9ssrz3LHR4G5XoTB7jF6f6zFMr0f8vS\\n\",\n       \"jIpq8qBaEBP7zrKoMwo0r3nta7l4+QpP9Aue3crZHqWUMqA0Pm4QkgwT7nMNdy9YuuklXLvBHz6/\\n\",\n       \"wrNiiXN3v5FMpgy211nTEhEPeObCJcJ7JGeXXNzkq/SCkkvliMhd4Wh7gfks48TaMh95xvBFjvGv\\n\",\n       \"P7jFa9RNfvH1Lln7BnPBGQIj8fQKn/nK8/zEP/whbl54HqesKFWDQZGiPYNjDdYU6InDnlD1CLNv\\n\",\n       \"fPMbaK258uKLLC4vsbW1wbE7jxG0XFKbQFlis4J+NmCnrHj1O9/Gte11hnnB//kz/4zcd9jt91lb\\n\",\n       \"WmVYxBw5fQItLZ/4wAfIxjGZLXCU5rn1FzGm5OTpk9x99m6ubW/z3S99HoODBAAAIABJREFUgaP9\\n\",\n       \"e+mcOoXT6ZKWBiV9KhQYcJWuhztYiRYGbTVxlqO0S2kscWrwvQ4C0G6AK8XEErgeyeY4it3dbZ57\\n\",\n       \"/irD4ZDl5WXm5juE2qesSqSwSKlxPR8la9e/vCywtkILjXYFeV4SThaA6cDf221Ztt8ENgXew9La\\n\",\n       \"6c/12LKXBoGHwf+w6m02Wp++/lI68/ZDmmeDnOkQh9maz2xAdvj9/iIJ5V97I89XvvCnLwGA2ep1\\n\",\n       \"WaYH9MizIH+YN3ecoCa49qrZ9QVwHIesyJFKIZVCSEnlu7z/V9/P2TNnWD1yBG9SZMnSvO4mQxCG\\n\",\n       
\"TdKiJAx9XHdf62qtxVb1XDtraxpjyg0LVD3RJUlIixzHdZBK4fs+nudRTSLmKIrZ2O3TXVpm8+YN\\n\",\n       \"3vKG12HLDJslKCEmMsJJukYNymamKj+9Xocn9NQ3BJOM5PC8zdu7sM3emNMo47BN7eEb/jCdMz3W\\n\",\n       \"QZ243Wsiml0kbpcRSKsRerooFBhRe7DHSUaz0WYU5TQaLX71V9/Ph5+Map4UgeOGxElOEcXobMSC\\n\",\n       \"HnN23nJqrmDOSdlZuJsXbxmujVv0cnCrHifcnIZSPLuxS3ch5PULMY/yIldwSJgjoYspS1Y9QwOH\\n\",\n       \"uHGGS+EpdpOE+6sN3hUMaD+yRbM7R1wd5RMXW3ztVpvXnH+EV6+WdPUWSTNkU/hYJ0DbHGktEomw\\n\",\n       \"oo44kezu9nj2u8+xsrJKWdZuhUZmFKIgiTJECr4MuD7qs1vlFEXOye4C836I8hSV0QRug0pAs9vE\\n\",\n       \"DzVNT/KJ3/0gOxeep+N6DJKIWFrQlk7oce/JOzl7/DTJMCEpwV9cpnviFM21E1ivwTDKkFrjupoq\\n\",\n       \"LZAGHNclw1I5iijLajCtLE5pUUJhXA+ppvrqKXgZpBKISWFeAMPhgKpMieOElSNLBL7LjWtXcB1N\\n\",\n       \"4LsoCaYqMZN7OtD7i/6UWrjv/GMvwZNnvvWFA0Hf9F6cVZRMsaKW1ToH9j0crEz3qwOMg1RtfZy6\\n\",\n       \"AD2LV/V9fXssnX1GDwcxLxeBzz5rDz7yZuzf1Eaew/alsyANkGXJS0BkmorAQclhUdTAq5Wqq95S\\n\",\n       \"gdIIKWg1mqR5VoO3qciLgte/8Q38yX/+z7zn8cdxJ/TI3HybZqNDWRqSOGOnf5Nr117EdV3CMKTR\\n\",\n       \"aNBptwkbAb4/R5qm9Pt9dnd30VrTbnXwPZ92q4VV9dCH3V6Pq1c36iYlpem0WiwvLtFZOMLWaMT2\\n\",\n       \"bo/19ZscWegSBiFVkSNF/aAbwFaG0lZ7dBLMypxuV82uAX+W055GzNNtehyl6pl9+xlPbYzv+/4M\\n\",\n       \"D7hviTn7md1Om7+fZlqUOiQjm4lKptKv2UgjHacICY1GSFZmaK1YWFigLA1ra6vEccbp06dxvnkB\\n\",\n       \"TEGrEUJV0dYuiasZ5h4Du8iTQ8NTvZhmQ9G5+kk0kPYqIn03Q+8oxbjk9edOseqt8uSz1xk/P6B9\\n\",\n       \"V5dX/eDd7Dy/wQvfvEDZWSG68zTXdy+ycO23eXwF5t2CU2cfpDfSZLuLLPkbOI1L3H336ymOvpWn\\n\",\n       \"v3qTe+/LWFu7wXZsWHvwDdzcHeG7IWVRYitb0yhSo5VmaWmJI8srJEmC63oYY5EmQkqL0i4MSxr4\\n\",\n       \"FM2QLd9SugodZThxyjgakUYll69vcPHWdT78kQ9hshjPtdx35zGWDTQqBdpn6JfcyHdppCP6z44Y\\n\",\n       \"bW6z4rSZby6w2FoivraB9bo0js9TNVy8MOTK5QvcefQE494QX3ukacIgGSM8F+l6qMJgsowkTbCe\\n\",\n       \"wHFdLFNbDAgbIUIIiiIlzytarRbtjktRxgTNEuV5oBV33n0foe+zfuMK169eQQrD0tISzWZIGY0P\\n\",\n       \"NPa93EAH13X35HezAWBRFHuqqdkgMIqiQ8/LS6PgKTOg1H6mWRTFpGltf7DJbMCi9UGnxtmmw1l5\\n\",\n       
\"7DR6v13WOn0m/rKFzL8wAhdCvB94O7BprX1w8trPAP8jsDXZ7aettX8y+d0/AX6ceijTT1lrP36b\\n\",\n       \"Y+6FyV/87EdxHGfPjGZ25YH9NHt6sWaLYrNdgtMLMmskv1+8eKn+ODQukTD8yu99AK/V5J4Tp2kb\\n\",\n       \"j4Yf4LYajPIcrTQN7aFcjXadPe1or9fbA0+tNY1GAykleZ5Tlvne/zD7oQVBMBlIkJOmaf1/W8Uo\\n\",\n       \"GhP4Ln/2Z5/hR/6799BqhGhhqYoSz/Opinqohef5UAtu9iOHCUjrybRwJmmsKUskhpqVsxMJGzi6\\n\",\n       \"jjwkIExd3DUWykpibYlSgKgQwtSaXWspSxDSQSkXWxlkMUJIF6tdKu1QGEFlSrSyKJvjqhyqDIXB\\n\",\n       \"GhcLdQettQjHxdEepZGAQkqX2h5AUbgVvqNZv36dyxcucmtjm8EoI60kl69eo9XqIGTdDfjkDUVp\\n\",\n       \"KhzfBSkZRONailZaRGHQRtAJ2wx3+2RBQlkZUC6O41GVOaqKOdKSrLUsdnCNk/Mucw3Fet6DOOfh\\n\",\n       \"E2doDWLM5i4mcNj0QuLWPEGjwR3zHie7cLSzS+vImKaj0OMOwp9nfShxWaTlwxPXN9jy7+fsw29F\\n\",\n       \"6Qg/NyhVMApjUuETEkI5QMgSz2isgb52kcahKd265b6q0FIxNR8DW3+eE8WFI0IcKcGMuHjheX7t\\n\",\n       \"V36TKxdvsTDf5cFzpxgM13Fcj6efvkwufUTocbzbZtnROGlMJ/R54BXnaS6t8Z2LN9CdZfBaHD91\\n\",\n       \"Bw+cv5u8KBgMI+aXVhj0hxRVRSNskqUZRZETBAGjcURmKsKgiRCCPMsRQiOoOy/LqqDZrBVaeVlg\\n\",\n       \"0nraVRAE5EVBaQo8r5703mw3sbai1+uzvb1Fu1mRZSnGVjSbAVIKHrn/e16CUc9++1OY3EFISVkW\\n\",\n       \"uL5LkiZ4nlN3dtpJtkwtmfWFQzVVAwlZf2/rxdWYCiVATtxMLfIAtkyNuaZe4LPbdEjDLCUzG+FP\\n\",\n       \"vx62jp1i2mFHwul2/0NvfNkI/C8D4I8BY+A3ZgD8nwMja+0vHtr3PuA/Aa8CjgKfBM7aevrn7H57\\n\",\n       \"AP5nn/6Dl+iQZ0n+aYv15O8OAONsmi9l3VY+W8Wd7uM4++A73bfjtciEZSwM//qX/g3vfsfj+EYS\\n\",\n       \"aJ8CS288Is+Kekq3kvh+3XQxpUs8zyNJEgaDAUVR4Ps+nU6HbreDEDAYDIiiiCiK9habRqPBwsLC\\n\",\n       \"BBhL4iRHKc3G5jpCWJ797lP83R9+D4KKdtisi5mVIc9ywkYDg6Wsypprm1w/rXQN7LMcnaj17xPF\\n\",\n       \"V30tBVRlhQAUk0jBUutwbQZm36BeCkFR1IUkrTRa1x2qlTFIz6HVCOjtbBH4LlWRo10Xg6QQCqs8\\n\",\n       \"cDzCZoc0qaf0VLYgzzOyPKGsCoQ0lKYiGg3Y3tkhjiMGI480jvjW179OHMfkhcX1G2i/jVCaNEkZ\\n\",\n       \"j/qMR33wTlIag/Z8rJBEaYoxJdiKZDQgdCVHlxZYXVnGVT5bO7tcuHSN3EiMkGALAlURqhxZ9PHK\\n\",\n       \"iKW5Bht5i/Gw4vRSi7edNbTG32TQHzGSqxw5dox5fYvT7Yy28GiHKzgn+lReBlkTf6HFRp4zLwKa\\n\",\n       
\"8Ygb0X187db9dO4MOXP2PoQyxOkWrbbHuEgRHEOXHlq/SKYLquI4bgWoCCuY+KZU6EMZTmUNlS1r\\n\",\n       \"jgyHLEkJfYmrFE9/+7v8q5/7Baqi5MydRzmy0mFnZ4ftrQFDq4iwvOa+eyl2NuhoQSvwuP8Vr+Ds\\n\",\n       \"Kx6hc/QO/uW/fR9ff/I57rnnXo6uLnL+/MO89vWPkWQFrusTxQlRFNUqL1tneb7vUxjD2upRNjY2\\n\",\n       \"MZWth6JYy3TiTpJEtRmZBaVcHKfuOA7DBlu722glMdbgerUqZZr9LXQ9lJLEScwLLzxHnmd8/5t+\\n\",\n       \"4CUY9cQXP0a71SZNUsoqx/W9Pd5bKYVFUE2jXSnQlaEyFmsFRkyFAlPsMRMABzAIuZ8lTqNwEPh+\\n\",\n       \"sIdRU1zK83269+AEoINj0oCXYNfhTHo2Mz33yrf81SkUa+0XhBB33OZXtzvgu4APWGsL4EUhxAXg\\n\",\n       \"UeDLL3f8acffdGWbed8DlePpa7Or4WGlxPS1Kce0x0uL/cafvYtdVlTWojC89Q1v4Lnnvstjr3k9\\n\",\n       \"27c2KdOCY0ePol0HKyXGWMbjMePxmO3tLRzHxXXdCajXE+yFqAuKV65cAaZRd8jiYrhX3EnTlGvX\\n\",\n       \"rlNVdXStHRfXgeWlI2zvbDG/eITnL13h7JnTDJMUU+T4k4lD43iMmHRoCilxlN5LxaY8//SmUFLV\\n\",\n       \"TnGmLgYaLNZYtOMcqBHUMYZByLJWBmCRQiNQhEGDIq816oIK1xGgND2jEUWB62scmxE4hijus9FL\\n\",\n       \"2U41z60PuLg+YBCXJMl4n19kEoFYg5C1c52SdYqqlQbdJI1jRPs0rTmHJM0prSKz9d9WXoIrQnRV\\n\",\n       \"d2e6XoDreJRGEfoeaZaSZ2OarRZl3mPxWIfSjNBpgUkjqFKE1RjhkBWQIIndACUc5uZOkix0ORsM\\n\",\n       \"2YoqLly5wuKmz+uXj3LC67G+lbBxc5dB0CHp9Tjl7FC25pFOQtCEVpqByfA6Bpx11NyYYXSO1tHH\\n\",\n       \"+eozH+LIwjPo1jHCxp2o8RbzqmQoYrSUhGUL3+SkNserSnJRkalJOi5cMHYSnBQw6fRT07qINDQ6\\n\",\n       \"YU0zIrnngQd43Rtfy6c+/gmiJAWOkGeSPC9JihSn1WYwGnHH6hrDjes0XQe/2+ZL3/463/nQ7/ON\\n\",\n       \"Zy/QyzKevvAMN280OX78JMPhkGa7ixCC1dUjlMWk/V1JtJKUeY4Qkmg8Ym15niIv9jT9o9EQLRxa\\n\",\n       \"gYMf+PXwk6LcKyxu76wzNzfHcDSgO9chjhNcxyPPExYWFxgORrWMUGjuOXuO8mWaaoLGEnHew/Ud\\n\",\n       \"lA3qoG1CSVlbK9ecqUqlNKBqu2lja6EAEzMta20d6UwicSx1xjYBZMdxJnhVH3M2859i2RSPZmnH\\n\",\n       \"mkoq9gB7+txOQXs6QHyWaoGDQ8lfbvuv4cB/Ugjx94GvA//YWtsH1jgI1tepI/GX3abR6eGhpbNp\\n\",\n       \"xGxRcwri0+LYLA87LVbMXoTD0fv0GNp1kaWhoVwefcV53v+d3+C5y89z54lTiKygShOEhJu7W8x1\\n\",\n       \"5wnDgHa7hTH1JPPxeEwcjxmPa/OpMAwBUEpPlCsJaZrvnUu32yUIAo4cWdk713gcIbRLFMUsLq3g\\n\",\n       
\"+AEf++QnaXW7rC4v02k1iQd9fFdjSgUzFeo8zzHWIiepWbknPbRUUmEmEj7UtFGi1t9aUTc6TK5y\\n\",\n       \"fZ2MRExoDaTCWEiKBEdLtBYYWyC0RWoH1/jEyZilboenv/kEUgpOn7mXtufyod/9U3YSTWobtOfX\\n\",\n       \"0MFWXRC2GonGGkmZG6zZz7IkAmMgzVKUM4c1higriHOB4/oo160HYFQlVZHgKJcwSBmOt6mqHC+Y\\n\",\n       \"w1E+SVmhpYPEcPr0Xbz44hWkLGnYkK3dHlEe1dmB8ml0GlTGoh0X5Si205SdWztIfYujq21O3LPA\\n\",\n       \"+qUbvGibzM9ZTq+lPDFI+GL/ONlonlfP9Tm6leKmJadW5jirQopiSF508ZoO7lGHCJfPPXWLG5mm\\n\",\n       \"GX+LKu7x5M4tHjr3ALrUtEJDZHukdFC2BLXNyIYI6SJEQVVWmImla2VqgzJrbN0PgMBgSbMIx3PR\\n\",\n       \"2ifLKgJX896f+DHmFtr8yUc/zmvXzrC1PaLRzkmHQ7JxTDEYIdr1UIcjyysYA8PhmO8+9yzNRhcn\\n\",\n       \"7FDmhnvuuZeV1TVazTZSKXZ2drly9Srdbocjy8vcuHGVViPEc11u3rjB1tY29913P0HYwPc9RqMx\\n\",\n       \"a2vLmMpMFhhLkSU4novnhqR5xtLSnfT6fU4cWyNOEtrNEIQgdHyKPKMRNImihEazQZHnlC/jrjqK\\n\",\n       \"DWHLo6QemIGsfVtkZZnCn5xcM4SdNP7U0mOo6ZO6C1oirMLaak/9hSn2gHqfPhF7hluzAFuWxR7W\\n\",\n       \"zP5utqdi+uweFmDMsg6HG/P+vO2vCuC/DPyLyfc/C/wC8D+8zL5/LkfT6/X2ovBZMJ7liaYneDvP\\n\",\n       \"cDioZz68uk2j7+k+QtQDRkdlhoskj1K0dnj88Xfxb//9+3j3Ox+n2B1xYmUNJTzOnL2Twe4IYwy9\\n\",\n       \"Xm+voDI/P78ncTKmlhHW5+LRaDTpdrs0m03SNCWOY7IsY2dnZ4/rX11dxXUc8ixlaXGJ6+s3ac7P\\n\",\n       \"87YffAcf/N3f5x/8+HtJ4hENT5MXeU2HlAYpanWKUqCsQEu1x9OZqYmVFXhTM6paRg5Ysjih9m+W\\n\",\n       \"kxSvjkBU1Zh0/oBQktJWIKCQhixPEMLgKocyScEI1q/f5LvPxZx/+LV846ln+X9+9pdJcjh1533k\\n\",\n       \"WcVc12P72kWC+RZaOSjpooQGJPj1++R5SlqmaK3QjsKMa97UCIGWlobv1OPnrMVTEuk5COERFxGe\\n\",\n       \"Wxdqs3xEEmXMza3QDiSVUfT7Cd/46rdYWVlkHI1xGopSeHjdACElRVmhNCgEVZWSJxVKaYyxPDk+\\n\",\n       \"S3Z1lzeeDugerRjk26wjWa5S1ppDbla7fHXQ5UPXV2nmGWrH567LI35oreSuVYugjYwb9Heeojtn\\n\",\n       \"+cKXf5Mzj57CKzK0eYqqOsO/+u0x//0Pv5PF/Hmsk9NzDFZZlNVUSuGUFmeaJFUGpRVlUaK0rjlW\\n\",\n       \"Y5m6Jnuug1aCNC7ptBZIkwGVLXj877ybcWTxW4sY3SDowv1rq5SjlCNzHarhmGNLK1RpSTxKuHLp\\n\",\n       \"Kp4IqIzixPHT/PDf/VEeOf8w19dvsr6+SavTRQjN8vIRsJaLFy8y123jeR6eVw/LfuihV9SfL5Yk\\n\",\n       
\"TSY0iSXLkonyyrCwvER/NGZnZ5Pl5WXyLGF+fo4szeh26yEpw9EI3/VqSgUPfyEkTdO6wCtuD1f3\\n\",\n       \"PnCebz35aXyvbkPXUpKXdcs/1iKxYOuIWkqB1Ko2kqPuHVHU3Z/WGISsJZ7CTrHIHMj2972G3AOg\\n\",\n       \"W7+2r0WfUi5TDAJeEk0fxqlZ8D4cgL7c9lcCcGvt5vR7IcR/AD46+fEGcHxm12OT126z/QwAv/XB\\n\",\n       \"J3no/AOcP3f/AcXDbFV2KuafvN9t043pBXMcB6313rDj2Qs6/SDSNEX4Ci0cVFWD3NqRI7zr8Xfx\\n\",\n       \"3HPP8Y43vpVsOKYocnbXr+OrEM/xEdaCsZRlQVWUpHFCEAR4nkun1abTapFmBXEUM8oyiqxeabWU\\n\",\n       \"dOYXWF5cYjAYkCQJRZZTTiRZvd1dOp0O/UkX6D333s+zzz/PufvvISsz3MDDFBWeciZ+0vUTvrdo\\n\",\n       \"SQkGHOWAnBhUFXndQMT0JrA0woCizDFVRVUZrJVgJI4NaoWLrhCKup1aGYTW7PRyxuOEufllQr+N\\n\",\n       \"yjPWjt3Ftz//Zf7o332Q9tJRVh94DImkiMYoBjRERGtJs4PcM+YS0tSdexiU6yBdiSgE0ldIR9OV\\n\",\n       \"AWCJxzG2quqGHrnf9FCWOUqUNEOHca4IHEWZJcRRn14VIaSD6wQ0PUHn6DG+//u+n8985rPsWBcp\\n\",\n       \"C/IsRVlwJMgKTFkgrcBRHrYAi2K3FXIxj2ldu8VrzrjYQclT1wX3LdzPMbXLm7x1bHPEl5yHeLpY\\n\",\n       \"w3WOUg0usdUZc3QUI0WPuKhwWprnn/ssneY8ZqdDVrQ5enyHc13DU+O7+Y0vXOG971qhnV/Esz3K\\n\",\n       \"QtLUmspcJ8l9hLdIs9UhyzKsNVBRO2WruuHF2pq11dJQZQW+DMmTnKq0GCGohOQnfvKn+Mynvkw/\\n\",\n       \"yylNTFlEHPECsnhI06u9ZRZWVugNY3Z3I773LT/I+Ve/nqA5B0Jy6dI1kjSj0WqSpnUBUDuQJQnt\\n\",\n       \"doelhSWCwOWZp5/m6NoxsqzED0MKY6h9XiyjeIDnuKAUWVJQleC5PnNzmrKsay7xOK4L/BOri1aj\\n\",\n       \"iaMdirKE0uIogddqYBsBcbpv9DS7bfd2OXX6QaSA8XhEf3cXU+V0Wy2qMgNT2zYbUyGFJS0MWtds\\n\",\n       \"YlXWHu316LR6yMo+jWIQ6qAlbA3C4kBAOMWiaSfmNNic1vYOq0lmm3kOA7ZSiq99/Vt87Ylv/bdR\\n\",\n       \"oUxA8w7gozNFzFVr7c3J9/8b8Cpr7Y/MFDEfZb+IecYeepPZIuZnP/77B2wdpxdgst8eeB8m/Keg\\n\",\n       \"7LouWmu2t7dZWloiiqK9KDlJEqSshxpUVbXHW2utef7aJRbbcywEbYIwIJMwpuTTn/oMgZWcXD1K\\n\",\n       \"u9PB67QZ7EY4yqmbcJTC87y94cZRFDEcDsnzfDIYubWv+a4qomhMmtbRd6vVrEezTSiXqsi5eesm\\n\",\n       \"VkjSqqI0lqDVYGNzk0sXnuNtb30zx9aOUOYpJstp+w3SNN0r7CqlKIriQDs9TLm4+vdRFO3tL7Xa\\n\",\n       \"y1KsgKIs8bSHKBysqoiyCBUq+nHEsxcuY4RPZQLiCIzxGI9iWk5CPy24tjmis3oS3JA4L8mjAaqM\\n\",\n       
\"qYY75KMdFtoNxPwqBkVZWhw3xBhNXgnS0mKVQ2EFUZqhPA8/G2InLnSBF1AZyKuaKsMU2CKBIkJU\\n\",\n       \"ObaERjOgM9dGiILd3hY7uzv0ByN6u2OU9Dl96iw3b24xtCFJHKMlVGWBMBWmKiczFGuwUY4HFvpu\\n\",\n       \"jlY5QXKNh9sJDx2Zp0wd0t0hp9nhWDBkw23xJfcMcVLRXThL1RuxWvXxnRTTzXn4jMciu3zpuct8\\n\",\n       \"fbNBp/EW/vcfnuPOuQsIt8ET/TfzRzuv5umdK/zD99zBcrLBnAwwNiaxGX/6ya/z2c99mWazwblz\\n\",\n       \"D3Lfffdx4sRJjJk2s+xnT3nUY2lxgd2dBPBQHjQ7Pp//s//CR/7g47z97e/ht3/nA1iRMLx1mZXA\\n\",\n       \"Y77dwVOaRqPJKCs4dc/9PPSax3j0sTdz+eoGeUF9Lfo9Hjz3ABubO7TbbcoqY2dnlxMnVjCVZTQc\\n\",\n       \"sHFrnbvvPouQEDba9IcjgjDEmNofX2tNNB7TabfRUpImKV4jIAxDLrxwgbvuOs3WxjZhWKtUiqIg\\n\",\n       \"z3NcxyFsNMizmEYjIMuKuttSaLpzB6fEA1y6dgtJbZjluS7N0OfatRe5cuUi3VZIo+mCzamqHNeR\\n\",\n       \"FOW+6s13PYo8x1TT7HaS3kym3EuHPcpztpY224k5xSoh9u1vZyPq6c+z+84OAod9mnhKh85q088/\\n\",\n       \"+tb/KhXKB4A3AovABvDPgTcB5+uPmsvA/2St3Zjs/9PUMsIS+F+stR+7zTEPyAgPk/Wzq9Hh16cr\\n\",\n       \"1vQkZy/ArD55+nvHcfai7qkMqKoqMlvRDhrIyZCIUkKpBRtb23zsj/+UN77+MeY6HaI4xnWamOrg\\n\",\n       \"UNdp9D87mHRWpD/VqnuehxBiryNzWswA0EphrMHxPJR2iLOUrCgYjAZIKXjqqSf5vre+pZZh1ZwB\\n\",\n       \"ruvWN7nrUlYlWjuUVYmQcu+9puc+nWbjOA55PimiKLk3GKKyhjIviUYJQSsgLXMKYWl05xE65CMf\\n\",\n       \"+TijEWSxJs8UrhdgnSFZWXH0+Cm2dgY0Wm2yLEfYktAFk4/JkxFZMsIUMVla0Gy26wjZa6DdFuiA\\n\",\n       \"Apc0l5RCYYSiEDFlnlGkKZ7jkKY5yg0oTe1jU+YpVRZRlSk69YiTEVUVYUxEUQ7JshGe57JxaxPf\\n\",\n       \"ayKERiuPPJf1MIVJeltTRQKErie+CIV0PIrCEBpN5hq0TpgrdnhoOeBkF+bcmFZ0C9m/hXU94oUV\\n\",\n       \"7sqHDDnCVXGMZ0ZNvrhekDRy7mxH3FNWHHUHmPISc+EKP/S9ryBYKImaCcI9y2996RwfvXkW5vrM\\n\",\n       \"Z9/mpOoTOIru3H0MNi9y8/rTCClYWFyssz1TkWVZrXLqdAkDnzAI6DY9Os0mp06d5Stfe4KrN67x\\n\",\n       \"5NPfYaffZ3Ozz7lzD7G+fgNjc1yZ0nANaZQw352nMtCaW+TH/+d/RNCeJ2wtYFEEfsDuVo+mH1JW\\n\",\n       \"Btfz6k5HLel2mvT7fRqBw61bG6ytHkEpSbPVYnu3T7vTYbc/oNnu1M+BsCRRgu+6qEnQ5U6CnjAI\\n\",\n       \"MKagKOrnL/A9oihmZWWJjVvb9Ps7FGVM2AhZmF+qpbpC4AfBSzBqEKVkaUWaJCgpKYuCdifEVBWD\\n\",\n       
\"/haD4RZ5FhGEDmWZYcq6C9J1dC25FYJqomCTe00602dkRs2yl81P7SoOBpNKvVRRcpgKmeLULA08\\n\",\n       \"G7AejuyF+PMbef7aOzG/8KmPAPsXYxYAp2A8q+GenvAUnAAajQZPPvkkr3jFK0jTdK/t1hhTD1SY\\n\",\n       \"0Cmzf58WOVJIqqJuN9Zao30f4bts9Xb5rd/8TX78R/8+490+RVWD3vxcbVwF9Qdy/fr1vapyHX03\\n\",\n       \"6HTqm3dzc5PBYECe52RZRqvVmkzyCdFKU5lq7/8vinqcmOM7+L4PUrKxuclgPObCxYu8+4f/Dj4g\\n\",\n       \"0qRelLQmrypcz0UqySiK6HS7ZBNKRjsueV7VUU2e4+j6WmntIrXi+o11+qO6wm8sLK0dByS3drbZ\\n\",\n       \"6Y955rvPs9MbM+gnrBw5TpVBGDQxSHpmhBSSVtii4TeIx2MC1yPPcypbYKQhy1OSNMbrXWccjVDK\\n\",\n       \"EMdjpBLkZUF3YZm5pRXcsENhIMtLtljGsZBFI+Y6bWxlcfwGRrqUk0g8z1OKPKfol2ALhM2J4x2y\\n\",\n       \"pIe1CVE0wJ1077VabcZxgh0PsUKiHYdRHGOlYunIGtoL2O0PSbOCNC2oLFjH1KqVOENJgaMqjgYF\\n\",\n       \"Dy8Y7pnLccWAOM4ZDEr6zjwdv4N22vTVca6ZI1xKRjx/9bu0U8XbTmru4gkeOOriHXuY66v3s7Nw\\n\",\n       \"Fr9QnD1+nl/43ctsd+4nzZ5Fb34VuX6dexebjMdXQFWUVUWr0wZrGcVxPSWq0wFgPByRZxmeFggq\\n\",\n       \"siKjqAyNRoftnR5pmuJ7HloLyqJACU1pYrxQEI3GpEnGT//0P2V9c5tPfO5z/B//5J+ysdnjzOmz\\n\",\n       \"2NLgSkvgB9y8tUWr1WJze4tut0MzbJEkQy6+8DxHj65x7Ogage9RVhAlGVYIXM+jqAzjcVRnolox\\n\",\n       \"GgwIPJ9GGJCXJUEQEMcx29tbHD92bKJQqvjQh3+fj33sY3Q7HXq7PVAFt27doipKTp++k3e+8138\\n\",\n       \"0Lvf8xI8GacZjqylrtbUIJhmKVJaHEciZD3c/IlvfI3V1WU87WLKMNFkAAAgAElEQVSriiSJ6TQb\\n\",\n       \"DAd9HD2hViddsnt6bHXYZ18jpWIK8FM8qQUUt8U6iqLYo3CnyrjDlhRTgJ9tsZ++799oAP/KF/74\\n\",\n       \"wIlM22anUfQszz3dZmWDk+PtrZBTTTbUqY/v+3vANtVwZ1kGpcEoSWUNrnYxWVFH0Z4mkoZPfPIT\\n\",\n       \"BEZyz9pJgs48yt0fNDz1MvA8by9DmH4IZVngeT5KyQmVUc/arJt8SqIo2h9iIRXacerRVliqst4H\\n\",\n       \"ISiAcZJzdX2d/nDMW9/4GEfnu2hHUxRl7eJmLRX1NBupFEbU2tv+YEieGcq8YG5+nq2NbYKggRCS\\n\",\n       \"RqtFfzhGOhrPD4izkvXtnCtXb7DbH1NWYI3A0YosHpJEPapizPJCmyRLKBpLaOnQ8Fp4KqDMLFVp\\n\",\n       \"kNpBaEE/GuEETt00EfXIszFbWzdQIgcKiiKtlYzSoTN/hDQ3zC8ewTSPojGkwwEt36PIC7LCkltN\\n\",\n       \"aRV5ZSnKiTRLGOLxGGktypZUeYawBZIKz1V0ui3SNCLLU3RekmQZWVHgBj5hs4W1lihOQEh818WU\\n\",\n       
\"tYXvTjWgbQUBkriybI9i5qSmE/c5ElaUdgTSMtwd8rlxkweOhJxfatLrp9zcjtjojxi7AbLhcH/L\\n\",\n       \"clpHLHQDhksn+cZ6yFZ5L0ZYOs0N3vS9b+eL38qhGbDWKbj19T+lm30Lg8LIDsPRqDY2ow5W0jRB\\n\",\n       \"K4UUoKRCTTqKx/GAOB1w+vQdBF6HPIEyL7BVRCOQKBRVJglaDfpJn1/6pX/DU9/+Dvc98AC5qXAb\\n\",\n       \"Ab/y/vdz79338sC997OysMRgNGKURARBSJpm+EFQN6gpXQ+fxjLfbSNk3VNQFmLSWOVT2tqXP4pr\\n\",\n       \"T58yK3GVwlT1M1MYQ7fbZjAY4Hk+ZZHzzDNP8/TTT+M6mkYYAhbXc0iyhDRO2Nzc5MbVa+zu7vKf\\n\",\n       \"fuf3XoIn1cT8CkTtF6QVSVJnoXGa4PkuVtQNRDduXKOIB5iyIAg8bFngO5ooHtVuiqJu/6+N5Gqa\\n\",\n       \"cRafpuBqDAei5fq1GuRnG3OAPSybHUg+pXAOt9TPMgnT/f9Gt9JPT2DKL81WfPcvzEFPj6mcZ9ao\\n\",\n       \"Zhq5T5sApgNzoyjaM6CaAqhSCscqDKa2vhSgpcSRisQahJI88upH+b1f+4/cs3Ic3/PxGo29xaXf\\n\",\n       \"79cGVYMBjuPstdiHYYi1hjRN2NnZ2aNcpt2aYRjQbDYmxam6QzJJMpRWeFqAUVRVSV5WDPs9XK/J\\n\",\n       \"nXfexeWrN/jYJz7Je97xAwyHQ+YXFyatyyAdzSiKSPOcK1ev0Ov32NjYZjTMkELy/W/7AZZW1qgq\\n\",\n       \"S7vVoTcYYoSDcgKevfgil67eYlQuUlUSz1uD3OAAmIROx6UVGpIkoiivUxYRve2ExfkjREVFc76F\\n\",\n       \"VYIgbDCOU9I0o9VZIMljBsMhrtNi+cQJ5u64m9AT5EmENRXt1jxV5WAJiBNDlhqMs0nL98lEQRGP\\n\",\n       \"CRwFvk9aKZJKkOSGFCiFJVExxi0RRlIWEiNcNC6B73Hs2AqjcY/OXJs4GeGZAEYjdFlSmYrKOHRa\\n\",\n       \"LXwvo8pyyizBlhVKWO4W5xmZTQp/iK9zHjl5jK4vme8+QHPuLKMiZGVlHldu864o5qkv/TH9pE+8\\n\",\n       \"epKVVofzZYMXn3qWcfwcgR2TNO/kO+EximjMPdLje9QO662c9WaLr33p6yzqkGjHI8dDdWP84AQy\\n\",\n       \"kgS6QbPTxfP9ibLIkiQxge9jTe3DI4DCgghc7lw5jZICW7h0WvOYNMd3c0KvohO0iIeGhx/9Hh55\\n\",\n       \"46vwhcdnP/5nnH/wlShtaLfmeMf3/QD//n3/ju0rV3nPO99VC4VaTZI0xXEEaTbG93zWVlf4nd/9\\n\",\n       \"IN/3ljeR5Smddov+bo+G3yZstYjTFNdxSbIM33HAQtj0EBbyNMF3HDxHc/XqVVZXVyjLiief/DYX\\n\",\n       \"LrzAmTOnaDUbCCFot2v1lnIcbGUYDgZcXrnEtSvXXwZFLEpaLBYrNGVR4Xl1g5vjueSFwfWgKCuW\\n\",\n       \"j5ykoTOu37jGzuYtOs26ruS5LmYCuKaqwdvYqY78YKu91g7TQuYsSIM64A562AwLOBB9w0sbeqYA\\n\",\n       \"f1hO+HLbX3sE/sXPfvQl3PJ0pZvyy7NSm+n3B4yQ9gBcHzjh2WNOOerpzz4aA1QSSmkpTUVRZLjK\\n\",\n       
\"oaoMXiPk9z/8YZCa1z/6BmxlGI/6eK7C0dBpN4Ha2D5JC9KsIIpTTJkR+B5aOziuR5KkdLrzpFnB\\n\",\n       \"cDRGa7eWQ2oHU9UFNYuto0XXoZgY3VtryYuU0Avwg4CbGzt8/DOf5+/9yI8wHAx45SsfwRQVeVFS\\n\",\n       \"lJY0K/jEpz5LZ36RV77y1TTbi/zyr7wPL9Q8fP4BXn3uATZfvMbK4grjXFI2unz2m8+wEyVUZYYx\\n\",\n       \"JQKD52qKPEcLSVVYhJHkmUVLl2F/lyq/TFEVrKwdxw87FJWLKQOqqm7sSNIeSmUURcSDZ+6lKMva\\n\",\n       \"58Maev0hUjmUxpDnJXGa0Wq3cF2XeBRRmpJWO8R1YNDbwFGGPE0oSoORPrujDLRPkUgsdZHacx38\\n\",\n       \"0CPLM8LQYxxHuJ5DnMRYY6iSIUEQEHgutizo93r4nkuepXh+WOuqJy6BaVLzoRLLHceP0Wm3sGU+\\n\",\n       \"uR9hOBwhJkHCHcfuYH39Kpsb6yhVkGYjbFXgeh7r65uEYRepAny/QWIqXN+vi2QWbFUira2vs1QY\\n\",\n       \"U1Kktavk0rKPUoadrW2q3OC6AUq6SOlToUlyg+M1sNJBmiGhW2v9V9aO4QYhaZ4jAc+RWJPzt9/5\\n\",\n       \"dkLPwXUFw8GY3d4uH/3Dj/Jj7/0xojhmeXkR1/VwXcmv//p/pN8fcP8DD9CZn8P3XVxHo7UgiSI+\\n\",\n       \"/alP8tjrXsfy8gqL8wtUZUmW5RgEzXaLOMlI0xw/bJDneT2/VO0blLXCBoNBnzAIkUJw6+Y6X/jC\\n\",\n       \"57n/gfvqLFrr2qkRWQ+LyGteelrH6ff7PPbGl5pZJVGC63t7wd5+1l5raGdpDoA4L/B9lyiK2N3e\\n\",\n       \"IIlHdYYgbU075TWFVnu475vHxWlKGDaoJnLDPXGFrOWT0poDUfWsX9E0e9+PxvcNs6YYBjANtGe5\\n\",\n       \"8b/RU+lnHcNmB/ACE95W74Hv7Co1jWynJ+m67kSatH+esy33cLASnOU5SIlwaoc4LRXa85FWIGU9\\n\",\n       \"EeTNb34z/+HX/19e+eAjeNolDPw6Gs0NN2/eQjsaLwhoNjtYFL4fgq0jfSU9ev2IoNGkP0zqdl3p\\n\",\n       \"0RuMSbPaZCdLC7R0QUDYCEjSMcbW6byjFONxQprs0u10OPfwQ7zqdW/hIx/+EFkc4/khgedz7tw5\\n\",\n       \"LJKtrR0ee+wx7rr7LFvbPbSj8DyfOIn4zGc/z42LlwmF5A2v66CDDjc2blHkCQpDUUGnPUeWJeR5\\n\",\n       \"SiNokWU5ygEpFIY6RXVCD1d2CGTFrVs3eeSRVaK4JE1jstjgOyHSStqtOTqdYxR5WlNOso5kTJUT\\n\",\n       \"BB7pKEZKydG1ZdKsboNvtoKa+99YJww0wtYPZrfVoD8cMTfXpayGWOmQV4bKlBw9ucL6+jqiMrQD\\n\",\n       \"lyyPWJ5r1alnldVaXd+vvV8QdOYXcKWHAEZ2hJYaO3EIbDfnCf20thJutxkP+wz6BZ12i2Ji2dvp\\n\",\n       \"tImimLIouHrjGr7ncuz4ccLQwZiMfn+H3d0+i0srRHGOdh3CVhNf+URRjNKKubk54tGQuW6H/u4O\\n\",\n       \"VVXiuhoqQ6MRYmyM60oqoxn2h7SabbR2WFxaoTcYkxUW5YcgJCaDhisZjiPa7Sa9/gDHrZUYZZlx\\n\",\n       
\"9s5TfPe7zxCN+rSaTdrtLi+88DyD0ZBLly/TbDa5di1jeXmBsqx4/PHH6fX6XLh4ka9+9Wt4nsvK\\n\",\n       \"kdpYatDr8ZnPfo63/613gJCkWU671UaonDTPiOOEOMmpjKXrezDx0hmNR/ieh+t6RGlCEDQwpsIK\\n\",\n       \"+OQnP8l999+3D4RmfwalNbUqZBb0OpMawOHtfe97Hz/1v/4k1u4XAetnf0IvSgC5N/JMaEVZVQSe\\n\",\n       \"xx0nT5KmEZsbG0TRgCovCLyAaDzCcR2U0mRZhud7+Oz/P1MxgrEzneLV/lzXWY57Wus6uMCYvb6W\\n\",\n       \"KVaBIMvyvfOaxcKX2/7aI/Avf/6PDphS7Y/V0nsyudnK7nQlnYI97NMwruvv7TNb2Z2drTeN4qus\\n\",\n       \"Vm5YWasx6vZ3gSMdkjxDeS4q8PkXP//z/Ojfery2m9UalMRxXbxGkzwv6PUHDAdDpg5lWgdUxnL9\\n\",\n       \"xjqe7wMC16uN6bXroR0HKRRxmuC7LaxRtam9lpSTh1kKQRJHtBoNfM/l7rNn2Orvcu+D9/GVr3yZ\\n\",\n       \"a1eucPXFSyzMzaOU4r3v/TGeeeZZmu02ruvjeiFGBTzxrW/y5DNPISSMd7cxUcyrzj/C8dNneHFj\\n\",\n       \"m+v9MV5rDqka7Ozs4PkO83ML9Ps7E4WNS1WVDIejCQCXdJSlrCLieJs8G3LqjlO0W11sqZDCw3WD\\n\",\n       \"vYVVqwohBUmaobXDKIqojEUpF6k1/f4Ag534ymSUVUUYBvWAXw0mT8iTiCDw2djpU6Fw/Qari0fY\\n\",\n       \"2d4hL3Iqa1hdWSXLUlzXYbvXY3NzkxMnjjMYDBGEBL7P9tYmR9dWuHjhAljDqVOnGQ1HddQn6yJ0\\n\",\n       \"UkbkacLaygqtVoOG7+E5Lus3btBptVm/eZO5+QWKSaF4OBqQxCNcV2MmWUydNYJBIZWm1epS0SCO\\n\",\n       \"M7SWuE49asxWJVLVbptQYcq6duO6GmnB0ZIwCBgNhpRV3SugtMPSygoGQZxn+EIw12wwGo85euwY\\n\",\n       \"cRxhTEmWZcx12zTCAMfRLC8vAJJ+f8SVFy9hjGFjc5PHXv86kiRhcXGB0Pe4cuUKR48eZbfXJ2w0\\n\",\n       \"6PV6XLx0gUYj5MVLl1hZWeb8+fMcXTtGu9VCac3NG7cAyIqCdqfL4tI827sDtOOQpQmOo/C8afCT\\n\",\n       \"0N/pM9f5/9h70x9Ls/u+73POs293q7q1dnX37JzhkBIpkbEkS6JEKbIJJ0YiL4Agv3AiJ28CwwEc\\n\",\n       \"2foH4iAIEiBI8sKKAEGGZEtRbMgQ4oW0BUqkKFGmJC6jGc7We3Utd3/29eTFufdWdXNIOQkiSsAc\\n\",\n       \"YDCFW13VfZ/7PL/zO9/fdxnw+htfQyitqVBryqvc+NyvbSIMpRlVvX6fNEkQQvDR7/7oN9STn/l7\\n\",\n       \"P8MyXvDTP/3TnJycUJYlrut/U/ihExIptN6hbRtMKVG0fPUrX6HrKizDwLZNDc+ujcS6rsNawyya\\n\",\n       \"YbYWz7Xr0Jlr9eVqqCmfYMQZhrGlIyr15KxvU8+uW91u1nd87JN/ejvw63j1pthuivmmeF9fG3re\\n\",\n       \"5mev+wk8ncB+fZp7vaBLqfMmO3FFsjcAo9NyWN91SesKy7QY7+8RJ0uOnnmeRZyglCQvYRLPaVpF\\n\",\n       
\"WXUkOdBB20JaZhpzC8ZYtk2aZUwmCbujXZKypM3K9UNuUFQVXSu1T0qacuPGDRzHIvQ9fNdFCrBN\\n\",\n       \"k/5oh0pa3D+dMl8W9EcHHHWSpirJsoR/9mu/znd++DsIPA/bcUjSnFq13L59m9kq4WJ6SX9kkqgJ\\n\",\n       \"/+4rX6WzXSbLFXvjfdKqpFMGtuhwDJMqL6hKHUpbtyV5nuohkOpASfK0QXUGvh9hGSWXZ3fIlz6e\\n\",\n       \"HfLyS99Bq0yqokWapubQdq1WVXYdnm3geD5JkoLqeP6ZE/KiIMszhsOIumlZLRM818N3TGqhKOMl\\n\",\n       \"huo43BkwXcXUZczsLCFNEm7evMVyucIVOeP9Ab7ncbw3QHzgOZbLBX3HIC911NXN4yFVsWJ/rAOs\\n\",\n       \"pSg5PhyyWCywbAvLgiDq0TQeStXQNcxnMaZh0It8DEOwvzemaVqaquRyesH+/h5SdgS+T1VWVLXm\\n\",\n       \"6G+G10ma0TQlZQmGMDHX8V69Xo80S7AsA9uz19JuHSHXlC1tp6iKilUyI17MybKEXj+iakuarsS0\\n\",\n       \"LaokwQ8j0iIDWk4f3cGxTFzHZtTzCDyT4TAizTLeefdddnf3uXPvHmWli0UQRrz2+uucPnrEyx/4\\n\",\n       \"AB/96EfwfJ/JZAJIptMFaZpx+9az+L7LnTv32Ds45vxyhu0EvP7GW9w4uYlSil7UI7JsLidTJssV\\n\",\n       \"ZVnieS5HR4ekaUwynZF5LlmWsb+jO3rLslkt5him9gvfRKEppbaJWoZlgxTESbytC++1HM+lXTb8\\n\",\n       \"/h98icPDg/Xz1V6rHeKJ/6tuXWuMze/saGvFhz78YeJ4xdnZGfP5DNfzcE2guYo6k2u4pFp3ytq+\\n\",\n       \"QqubO8G2lsF7qyivahdPNK3XoeD/J+vbXsA3x43N13BdrvqkYdN1VdP1In01LLiuPNTr6d1sszsa\\n\",\n       \"trX9WgiQXYdoFY7lkGU5yjSZzGe4QcDFxQV5nOL4EdIJuVwk1Mogy0ukZWEgcGyH5WJFWRvs7R8x\\n\",\n       \"m88JhEHQ2+Pk9geYLRcYXUee5yTrbsJxDbzIo9/vI6XUIclxzcK1yOMEyxScHB1x98E77B0c0d89\\n\",\n       \"YJVWJKslo+GASXyGwmI6XXLvwQNuHh+zXMw5ODoh6oec371PWdaoziArW8pO8txLH+B8esnu3gEP\\n\",\n       \"Ht3D90NacsbDIctlgmObhI5F3Skc1yHPU3pRxDJeUJYVrhtSZQVJVuNIG0M1xPMFymvJkzlKWQTR\\n\",\n       \"ECFMOqGgbbFskzjJaOuKRkhcy6TpOk4f3cfzPCzDYHJxihKwNz6ma1pm5xec7O+w2wsYDXosVgt6\\n\",\n       \"PZ+oP6BI5tjWTYSQfOeHXiKKerz22h/RiJZ33noH23YYDkfc2Nuhv9PfekovVzPm0zmPH58CgtB3\\n\",\n       \"8b2IJIkp8iUQ0e/16IUhaRLjOCZFmvLowT0O9g6p64YkScizkvHxDlWZMhoO6NoW1wwZjW5SVtof\\n\",\n       \"fjKfcPPkBo9OH2IZUlvuCohXGa7nUNUVRaGNvVzX1UZJjo1jClaLJY8fnyHoONrf49atGwyHfSbT\\n\",\n       \"CU3b0I8i4tWS0PdxHZt+GBC4FkK1FHnCO2+/w4c+/CHuvvs2YX/I8c0TwESaNqYSWCjiOGY2W/I9\\n\",\n       
\"3/u93L55izt37mwHeUII2gZ8P0SphrfeepdPf/ozvPDii9R1w8uvfIiibCkbbQv7+PGEZbzi8PiI\\n\",\n       \"bJmxtz+mUx1ff/NNmqamrip2hgOef/5ZpudTXn/9daIgIDaWwKZb1cwO1XUYOgGZptINXBAE1HX9\\n\",\n       \"TYvbaDSibnK++MXfZTDo80Of+CHiJCYMepunfl2orxwJ27bBMEwMaQAGmII4WRH1hoTRgDwv+PrX\\n\",\n       \"36AzdEiz4zgUWY40TZqqWrstaoxdCB1kYZjWE7AuXDvxr/HuDaTbtvWWFbfxgtLIw1VwjM44eG8D\\n\",\n       \"r+3v/3ZDKL/7W//X9mixoc1s3ux7YUDXqTzXfU/00NPWMnOuKDlbPudTPHIl9SBMtR2GUJjrsNW2\\n\",\n       \"UUjbJusaZOTzs7/wj9gzHSLPx3RC4qJF2AFp1VE1YFoOnuNhGALVtQjhUdcae5sv5oRhuP132o69\\n\",\n       \"NiYCZ83TFabYctcd2yEMfBbzGaprsCQcH+5zcuMYafnMVyVvvvkmURTiui6L2SV5liBUSy/w2d8d\\n\",\n       \"8eqrr7BKEopW8MUvfYVVVumBl5AI1TDqB3ieTVXmtEqtPZIlDx+eEYYj9vZvcPfBY6pGUbYdQRSA\\n\",\n       \"gfZXdj3qGlRT4BsKW1U02RKjK8nTmFc/+CquF9AobRS1ETvkeYFYhzWbtk2a5jRtQ683YLVaUhQl\\n\",\n       \"hqtoOkFR1JjSYhhG+JbBIHQQNGRZguN5ZFVJka2QQuPJs9mcOE7o9wYURUUU6YI9ny9wXZesmuM4\\n\",\n       \"NnmeMuj38YOAuqlQSlAUJUmc4rs+WZ4TF9rASAiBbZtYprk+IiuyJCdLc4ajEbbpIIyKNE1o2g5T\\n\",\n       \"6oF1UZRbuKBqCvI8pWlqDOGQpgV+GGBYFlXTooSxpruV2LajO/iq1oITz2FnOMS1Ldpa/85uc7qU\\n\",\n       \"gniVYNqWFmI1Ff3Qo8hiLAGOY+Cv3TGlZXP/0RnRYMj5+Yy8rNkZjvB8lygMcWybuizwPIfRYHgV\\n\",\n       \"X6gEdQ2WZZKkMf/qX/1LWtXykz/5k9x/8JAgjLBsmzhOGA6GSCXWsJKpMzTrisGgz2DQg04haCny\\n\",\n       \"gtlshmd7vPyBF3jtta+RpStsSw9xN4VPyCuetWEatG3HG2+8TppmtG3LT/83f/cb6smv/fN/zpd+\\n\",\n       \"/4tcXl7y4osv8hM/8ROMd8f6/hNPnuAVsMkkVevNQ2xYbwjtCy51Z1yWJdPze6xWKzzHpa4LrcEo\\n\",\n       \"Cs1aM6T2Ge803zyvr6LbrpMsNsZ7Tyo5nzS+2qyquhIObWrX/yc72f+/1wYbelp2en0YcN0SdiMb\\n\",\n       \"37z2dKjvdQoOsBXZXPE118PNtfd1S32tSxfYrgPSwLdsasPi8eNTDp/7ANLxqJVAGFr8ohQMh0OU\\n\",\n       \"0DaxZd3StS2WIfH9ENOQBP4BYRjSdR2rROdq5llCr9cnXi0YjoaaauX6pGlG1yrOTi+wbRPHdJBC\\n\",\n       \"Md7dp20gjudcTJcMB336gyFZlmFYHkfHO0wn55yfT1jOF3zogx+iKErivMAQ8P3f933ce/CYyWxG\\n\",\n       \"GAak8YyzyzN2hkPqqkCKDqyGyFWMhy5VNiOwgaYhCHzSLGM03sU1babTCf6gR2+4y2oyY7VqGfg7\\n\",\n       
\"TM8fgjJBWNRdS9s1dDS0tUkcx/i+T5qttJrVNLQy0jBJlkts28aQBq1KsEyJtLTMe393hyZLKIuC\\n\",\n       \"6eRUJ4+rDj8KuHH4AtPplKqq1g+H5Oz8DNPQIom9vQOOj48I/ICijHTijTSYTmacPnq8DeCIwj63\\n\",\n       \"Tm6xmC/xHIE0JfOZPjqvFjM8z4O1yMS2bZpaMrs4YzQacXAwYH+nTxBGVEVDqxT37t0jS5cU2Yqq\\n\",\n       \"SPEDF8cQnNzY43IyYb5YkMQtSkqCMEJ0Nv3QoykaFssVR4dHIDwMQ1FXKfO4wDEtaB1sUydCGYaF\\n\",\n       \"ZwY4joewoWozVvM5nudRJDFJsmKqJnQKpvMVt597ntdee43jG7exbY+mralrg8vLCYcH+/iej2lK\\n\",\n       \"XnvtNW7fvkXbdqRJhlISP/CoqoqvfvUr/NW//tc4PT0ljCKkNIhX2ro3STKqoiIIfc4nF5RVycnJ\\n\",\n       \"MaZp8vDhQ3Z3dnn08CGH+/s899wLPLz/kDt3H3Dr5g3u3btLnsXr4eK6seq0Z45l2SyWSz73uc8x\\n\",\n       \"mUwQQjCbzd6zhoRRxM7ODmVZ8ju/8zt8/5//fsIwxHX89Xxr3Qk/5YAtWFvzXntZolOwpDTwPJ/D\\n\",\n       \"oxPCKOb+/ftIoTCkbtokHY3qUAhsx6Jruycog5uCLYTYJl5tOmqtzLa3r+nAjnX9WRv7PW1V+83W\\n\",\n       \"t70D/8Jnf33bKT/dUW+K8HWIZbMzbaCTzTR4M/C8TslZ/11PiIEE2k+7bVpNWRKAEFjC0FasdYcb\\n\",\n       \"BKR1TSUF/93/9D/yF37gh7FNC9v2yIqGyXyJ40Y0ShCEPRQCz/NplUJ0gmQVr49KawWi1Ik8nuet\\n\",\n       \"5f0meZEjhckqTnEdnyTL8T1ff2idomtKbt08YbwzZLmcsbu3R922VHWL6/s8fHxOWdUkcYzotNTc\\n\",\n       \"sQzqImO0M+DZl57jzbfu0uvvU9WQFjqurGlKhoOI6WyKgaAXeuTJGUmSs1ymzJcZxzeewY9GzBYr\\n\",\n       \"bMcDYbKME3Z2R6zKGckypecMCO2AtizY3+1jmx2W3ZFlK/zI06ZbjakpV0KwWq3Y3R2zWq2Q0tDH\\n\",\n       \"2E6HVWhhVIVpu7heqDfIpiV0LeoyZzQIadqWqm11uEOWrruTjiAIqWttKSCQdG1LU7ekaUpV1/SC\\n\",\n       \"gKZtcFxn+zBrFW9DlhUIJFmWYxoW0hI0bcNkesl4b8z9+/cYj8fMZzP29rSCbzTapcwLktWUttMB\\n\",\n       \"1nGWYpoWw+Fwjb922JZJVRbaXGkxpygLxgcHDHZ2KYoaadgsFitUC5PLGcP+kGFvgOm2CNlhGnKb\\n\",\n       \"ZN80HU3VaT511ZImOXleEgx9/J5D1zSYhgDV0dY1bVNT1Q1JltF1mnH18PQx3/8Dn2A6m+I5Lnt7\\n\",\n       \"YwxpkKcprusw6PdZrZZrOMPAsmxA8fu///u89trX+OSPfpJeT29YeV5gWroDj8KQXtTncjLB8z06\\n\",\n       \"WlarJZ6nB8BCCE6OjpnNZjR1Q9sqLi4ec+vkiNVyimVC110REtpWBy0IafC7v/dFLi4utp1oHMf8\\n\",\n       \"ws///DfUk9/47Gd5+PAe7777Dvfu3cdxXP7Bf/sP6PU2EMqm49bF1XiqK998771Wu6bXrlYrZtMJ\\n\",\n       
\"VVloIzSxFu3ZFt3aRlYjP1csmKcZcZvYN/29J50Mr5h37RP1TUrJK9/5A396O/CnQxyuv/ENB/R6\\n\",\n       \"Cjt840W6Lu7ZrOuQyWbyC/qD6lpty6mZO1rR2OjgMQxbq99A8KV/9yX2dvYQUlAWBQK4dXzMjYMx\\n\",\n       \"Qkhm8yVpnpCXFWm1olMKz3YZRg6ObWufkiYijlfESUKRJNsb0nNdgnDAraMDTMtltUopypKqakAo\\n\",\n       \"8rrk6699heT4ENexSB2Dy+kEJQx6g13qqiBJc6ShWRSGUhhSEI7GvHv3LSqVEwRDZFeTr1KqusGP\\n\",\n       \"AhoUj8/PCYOQtm45Pb3Et8Dzerhen44Luq5iOn3MYKiLjes5mGaEKTsis+bw5ADRObimhyl6VGVM\\n\",\n       \"3dXUeU1dV1SFhh6qutAmX66DYUqWq/mWGmooc+3XYmJbBsMwQFoOluPheC5pEjO9PKMX+Dx8/Igg\\n\",\n       \"CDFsHyHA9yOWiyVJEqM63Zn5noeUBmEQ4DoOgTdCSkGW1lRlwunpBaOdIWEU6QQhVXLjZJ+qaEiS\\n\",\n       \"jHt37+L7BnlR8IEXX+Stt9/Ccx3yLEaphn4UkCYp52ePiIKA4yM9gC3rikZ15GXBKlmSpSmWpTMv\\n\",\n       \"dwYjfC9kFWco2fLw0Rlvv3uf+XJJVdX4fsDOcIdhNKApVtS2ZLWKkabA831cx9kOuUajMWmS4UiJ\\n\",\n       \"7/vkeYHtWSyTOSAoBCRxSlWVeJ7Hwd4Y16+QEu7ceZfjwwOaMiP0HISAuiywfJ+oF+LYDmmWYxgW\\n\",\n       \"o1Gf5XJBliU0TcWbb75OrxfSiyJcx6IsMgQC2zDYHQ7JspK8cLYAACAASURBVJwH9+/jBh7z+YQo\\n\",\n       \"CrBMA4FOi98ZDrl75x77+/tgC2bzOVHUo6xKDZ12FagWqXTx1raxFhfn5+v7NKAoS/I05ej4vaMF\\n\",\n       \"mrbFdb11ir2eWdVNQ1VvErq+sVA+vQS8Z7crMGg7GAx2cRwX1TVcXJxTFjmOaerNx3GoihJDXDWM\\n\",\n       \"15l1W6KEYVyztr4S8FzN4sQWLXi6ef1m69tewK/DIJti/bTxy5Vd4xXx3XXd7debqKONC+HmomzW\\n\",\n       \"e0UciboFKWmFzpZcR0RqNktZ4vk9vvC5z/ORj/8H7I5GCDrS5Yo6mxP5AaptOdn1qVsPw/VAWlRt\\n\",\n       \"Q56VVGVJ06SI1sIERj2HZ27sYZgmVXV7O5iZL+dcXEzIEu3H4boBh+ORvg6ix87ugLrMKYqMrl4R\\n\",\n       \"2BLL82mVhmLCsEdTr/F+IbBMyWK5wPN8ZvMJ8TJhd3iEJU2kY2IIRb8f0aPHYrEkXmYc7t/EMZSm\\n\",\n       \"VKmWGzdD6qbhpRvH3L17j7xMMW2DOMnohQGRCVab0x+EuI7OmOwCLUEui47xaI8srfC8gEk6Iepp\\n\",\n       \"uMd1XaQhSdMWpVqtgu0a6rUx/nKqiAYD0iQlKwuins/BwR5ZuuLWrVsUZcMyKajyGld0mNKg3+vT\\n\",\n       \"j/ocHx6iuo6qzHWYRroiTVOUUuzs7DPeGzI+HJFmKcLsODt/hDQkDx7dpy5rmqrl8OiIfuBh2joE\\n\",\n       \"++Mf+y7SPKGqax7ev8/+/pi5ZTCI+kwmE+arlOlshlIdw90BTuBjOyajvV1t3VA0PDg9Q+cumijh\\n\",\n       
\"EUR9opFBf7RH3ZQ8evgQaXT0eg4nh8+QJQmWP9LYdl1TtzrIwTQsVvGcqmpo6rVgDejKmsP9Ax6d\\n\",\n       \"neN5IfNFwid++Mf4whe+AIaD7+v4speee4HTs1OyJKZpaqQ0UE2jB7WuSxho3n8URazimKZtMS1J\\n\",\n       \"VbWYlsGHv+NVhNAKY8/1EUKSpTFS6jnBwcE+eZHi2BGreKk/a6kLctM0WKZFskxI0hQlBVHUA1Wu\\n\",\n       \"8WfxhDVy1ylaBOdn59R1TbZ2NbQdTcH9JkUEITS7Z7VKmM0WfPnLX+ETP/gJBAKltD0E6BnCe2aJ\\n\",\n       \"KfUNAAvof4uUJlXV4HkhbVtzcHjEvbvvEqcFrqvhE2GaqGuJQU8X3g1UfNVwXsVIXhX5K2XmH7fh\\n\",\n       \"bNa3vYBv6H/X6YPXMSTHcbad9XVJPegP3bbt7RD0eg7eprBvdrPrXTiArfTQshNKd9zr4APDMgmj\\n\",\n       \"iDfefIfFYkHkBzRNiWoaLAOm54/xDw/WoQM9KtGSF0taaYE0CTyDwPUx1rYASZyg1WAlyWqJ73nk\\n\",\n       \"aY5odQDwSy/sk6QZCl0kq3KB5zgkyZyy7DCkYjq7x/xyiW33iYbaKTAKApZZhm37pGmOaQiSNMf1\\n\",\n       \"AgyrZZUm9PoDVNeQZzleECGFIkliJvMlt28/Rxi0mNKk6yRFnWrqlWHR1hXv3nmH8f4efhhojq8M\\n\",\n       \"MATs9yPiuEA2JXldEPRcyjLDcS3qSh8p87imyCosy6AoM/xA54lWVYnrDtH+MFfJ3L7v0y1iLNeh\\n\",\n       \"EQIzWZJlKVm6oN+LiNOEtjMIe33KWkG6oqoruq6mzAuyROOo490dDKHzaoRoMQ2T+eIM1/cwLAsh\\n\",\n       \"YRUvCfsucZLi+pKTmzcxkFRVzWq1wjANirLAdk38wMM0DU5Ojjk9fUgvijg9fcStmzfJSwvH9cnL\\n\",\n       \"jDiPUarFqAS2ZVNXDXvjAyzTo8hLTs9nzBYxjufg+i6GJRjvH3J4dMDOoEc8nXJx+QhLGpRdS4fE\\n\",\n       \"MLWXjpJsU5dMp9ViNaU5zGVR8eDBAyzb4wMvv4Jhe9x/dMbewTFlVaHaCpGlxPM5jm1zcf6Yfn9A\\n\",\n       \"OAyxLRvHcRkMhlR1w2AwIM9z2rajqgosS6ckBYHP/v4eoIuJZWsvntFwQLxKUF3H5eU54/GYZbzA\\n\",\n       \"sk2kscZyZbd2I3QxDINeGBEXOY8fP+YDLz1LmSfrZx2qugK19hhptdBtY9e8YWkVRfGeNWQD+8zn\\n\",\n       \"y20d+epXv8qtW7fZ3RkTBP61OZtCGteKogKEYhv0ef111kI2pTAtk6Zp1/XF4vbtZ3nw8C6T6SWO\\n\",\n       \"Y2EIiXlNdXldSn+dObdJ47lOtniy2TSfKN5P06ifXt/2Ar7x6950yHVdPzGYbNv2CexoE9iwOZJs\\n\",\n       \"JPdSSqTOQteFXimkaV95C6zxqc1UujWElup2ilZKClPv0mbdkGcpn/n8b3N4+xlUUZN2LklaYRmK\\n\",\n       \"PK6AKTeO9rmczYjWggmkJK+qtZhCsFquqKuGfl8n9whgb2eXqq0xLJPJdILoFJePZ5qRYhns9Hp4\\n\",\n       \"Yx8QWDdOWMxXxHFK4OzRu7WHNDoWy5idvs9idc7IshGyRpoVTatoJbiezdDdY3fUp9/va+8XRzJf\\n\",\n       
\"XOApLXEuFzPyWYjteEwupnrg43lkWcUizvFsmyjq0SUVA8cGCY3MKcuStPI5vv0MQppkWU6e1XSt\\n\",\n       \"zTKtmV7OsQ9sXFfgeZJF5lFTI5VBusq2G3DbthSFVu+B7jqOD/p0eYdSMB7vsr+7g2VZ2vHRgsnl\\n\",\n       \"Gbati45wJJ7v4FgBXVtjdlCVGaePU1rVITAIej18P+Dg1k2KoqRVgsvJjIvLGb4vCP0+bmgTLxYU\\n\",\n       \"eUI/DNk93NlaAOdZxmq1ZD5d0lQVL730Eov5nMB3eeedtwCJ63nsjcfcdEfYts1iscB2HN658y7z\\n\",\n       \"y8cslkuWyyVRFHLrKFyfBCVNVZOeP6JtO6zmiP2DI83qSLVXTF3V5HlKmupiLYTQitCiwhCCwA8w\\n\",\n       \"pKQ0C955/JjRjRtMHrxNaDR0bYnqKqLQoasVZVmgQmiF5EPf9VEQAtf1MSwHgYEwTALfoMhzLMuh\\n\",\n       \"LBJMwybPSybTOUHgYZs6RnB3NKZra1bzGXmywvV07uRwEDKbnmOaJq7pUBUVlmtSq5q6LgkCn6LS\\n\",\n       \"TCvLlrz44vNMZ1PKFgwMhDSoqhrLNmiagrZrkGbNaKfHdHaOlQsMwyJZLd6zhmRxtrWDPTu7xHJ8\\n\",\n       \"hrsHZGXL2WRGkJdEYYRtGzjmphm8xkzrNg3jpljqgq71ITqlHkCuEVppWijD5PatFwn8IQ8ePADL\\n\",\n       \"ohMahrUMiWoapNJhK13TAoqmqhFIfRoQVxTqTf3WzLt225DCn4EOfAONbLDRTUG+fozYMFKUUtsC\\n\",\n       \"f90MZtNtdx3bJAxzQ0EUV/4pTXflM+60HTIImKcxgefgIsmbFnybz//m51mcnvOjP/KjmL6LY4a6\\n\",\n       \"gywzRFexWCU4js3ueMRqlRJEkk4p6q7VqfRIRsMBddUilMY1m66jLAudkmOI9ZDHoQ020tyKy8tL\\n\",\n       \"rcyqG7wwZDbVjm17ewfUdYmiJYwGNE1Lv9ejrFoWqxjH9WmamijwCQOP5XKB51gsFwvkOnz2YG8P\\n\",\n       \"y7aZz5f0nr2NECaOKdnb3UWakiRN6fV6mOvrV1UloeciaPB8l7YROsLLc7g4P0MIieeH2JakUgI3\\n\",\n       \"8DHFPp7nkSRLptMLwuERSrU0TYshjTU2qo//lmnSi0KE0JuyUAVtq09K0+l83YlAEOg/89xzL2wN\\n\",\n       \"xAxMqrJGKqDt6PdDfNfXvtWGQVnWpEnGfL5ASoW7dmK0TAspBLs7OwgEliXxdnZQ3QClNEcfoKoq\\n\",\n       \"XTT7A6IwpMxz8jxfv7eEwaBPVTcIIXnw4P7Wh911XbIs5WBvj8lshgCeuX0bz9NRX7Zl6aBppZuJ\\n\",\n       \"oiiQhsF8PsXzAw0BGuDYJr1egERT0CzTpCrLrRVpHK8oi5IiTdnd3eHmzRucX1xiOc72OSoyrfAM\\n\",\n       \"g5DA95nM5zi2heN4VHUDXYeQBnmWanvjptFzHqlT2k3LxpAmhmkhDBPbdmmaFiEM9vb2qaoSRLf1\\n\",\n       \"9AjDcN0ll0gp9WZm24RhSJIkWJaGc1rV8s47b3N8fIRpmZRZQlHmerC9YZchsUwbD4ObN06YzWdk\\n\",\n       \"WXytwD65NENlTtM0nJ4+Yrx/wNHREYZhsFwu6fV6LOZzlOoY9Hr46zmAvseue6fodR0BeK/x5nXR\\n\",\n       
\"4cHBAVEUcXp6yjJOCHxv7W8kKOt6S1UWUqCTilraFoTUcW4b7HsD815HDf5MdOAbaONp46pNR74Z\\n\",\n       \"ZG4I7Ztue/Nz16e1eV5eqZmumVddpylKKddmNAazeIHZjwATqxW0luS3/uBL/NzP/xx/+2/8LawW\\n\",\n       \"EILpbEKZZhwc7uO7+4ThbS4vLrj/4DE3b91guUrY3x9TFAWr5UJjfZ222jw+PuHoaE97KtQ1RVUy\\n\",\n       \"Xy6IkwTbtPBsDwSEUcDu7mgd2NAgpcHzzz3PG19/k4uLx1qYEIWcnp4yGu2ys7tHr+/juC6W7bBY\\n\",\n       \"xIRhhDRNjg8PsEyDuq7Ii4w4TVnlSxaLBb4fcXh0RJ5XpHGur58p8R2HZDnFkCbL2RzVduSrjrYu\\n\",\n       \"MSXUVU5dl3z4Ix9hEI0oipplrK1xVdOStA1VmSPpYZomJycnFI3B3niH6XTKdDrdbrJFUWgBj6VT\\n\",\n       \"jnq9Hq5jMRyOCAKdzrJxc3zw4AFFXuG6Lp7nsTMa43gOeZ4xuTwnSxMml5IbR0csF4lW+x0ecHK8\\n\",\n       \"t+7qYDqZMVsuqeuWGweHVOuMUrGGW8LQJ4wCRmuWTBzHJEm8xUV7YUCWpmsnSX3N9vb2CMOQMAx1\\n\",\n       \"LN5iwdff/LruppTg8vKSw8NDRNdQpAllWerCmufrk4QDSpEXDYdHRyxWS0qpu/N6nUJl2/oE2e/3\\n\",\n       \"2dkZbpuadt2dreZz4tWSy8szPN9nPB5jOx5pGlOWJXVdkKV6sNk1LWWWEfg+Shk0bUNVVfT7I9qm\\n\",\n       \"4eHFBZ7rUZclbQdtB5ezBcP+kKKoGQ1GKKWYTZc0zYowDHTClG0yncVPpF/p1Cl9XVarlX7fhf6+\\n\",\n       \"63vs7u6wXC4RAnphiJSQJisdOoJGMw3DpCsrdkYjojAkTVMc5xvDHACk0J4r9+7ex/UDfvzH/4qm\\n\",\n       \"5XYdN2/e5M677zIcDNnZGdA0NfN5xmAwIMti+n09DxoOn/RZufIy+db1q21bfN/n1q1bfPlrF2RF\\n\",\n       \"RS8KoOvwHZsiTVFGs6ZIrvFvccWIgStxIVxpV647sn6r9W2nEX7xc/8C4Ikd5zrj5PoQ8+k38+RO\\n\",\n       \"CQhd9OVGvKNfRKlue2QyTX08qWWnu3MlqbsWaRj85mc/x+d+83P88A9/kigacHhwQJ5mWJataVlp\\n\",\n       \"ijQkVVliGJI8TzEN2B0NQHUc7O0iTC0GsE2bJE5pG82rFULoxHcUzroYlWWJZdokcUxR5piWNnx3\\n\",\n       \"XZ+ug9PTx3hegGXba1l+zmAwoGk67t2/jxCSNM25efs2ZVGt6V8mCkGzVnLNZjMODvaZzaZUdUPg\\n\",\n       \"Bzxz+1lm8wXNOuZb2JKyqDCkoQ35q5peGDKdXuLZJpeXZxzu76FUR9u12JaDMEykNDEtB9OwtKdF\\n\",\n       \"V5MkCbZtkmcptdLpJpZlbmGyoii5ceMmSZJojrjnce/+fW4d3yDPc5RSlOvkIb3xaqVd0zQkiaYP\\n\",\n       \"NqrbdtC2YbBczomiEM9zaZpG8/KrCiElptEAUpssDXYoqwZhWAj0vZYXqU78aWrE+tTmODpBxjAN\\n\",\n       \"2qahLgvyIicIPIb9AV3XsVhMtD1tq1OfBBLDMLfm/UmS4tiO3pBr3T23a096z3HXeLM2dQqiQPvm\\n\",\n       
\"SHAtl3wNLVmWRZ4VKCDLsmvKZEXdNBRZzMF4B9avSdOiLCqKsqTrdGFp6grHsbmcTDi5dYs81wV6\\n\",\n       \"ONoly0uquqWqKlgHYwsFTauo6pYvfOE3efWDL7K/t4MpoBf19aZo2cTxkqoqMW0T2wm2p4rNKRlY\\n\",\n       \"34+WHpoqRdPqSLi6qrAdGyHANg3apmY2n6LW30coptMpWZY90bj1ogE/9VN/6xvqyS//8q/w21/4\\n\",\n       \"HS4uLxnvH/DjP/5XGI52dBFULZbl6BpQNziOw2gwYD5fEEWRPt0b4DoeQm6oymILa3yz+nm94G6g\\n\",\n       \"3qxJOT97zHI2wTQN2rokcF26tkZyPZRm7V2urlwSN0iCtmG4go+FELz60U/86aUR6uOVxrWvqzHh\\n\",\n       \"aki5uSme9tfdvPntwMCSqLalBYRSmqhvGNjW1UURQmDYBonRIPOanmlQ0/Gz//gf8e5rb/Ff/Y3/\\n\",\n       \"HMdxmTUlZ/MJdtEifB/P94GAplMkWcHpgweMx7uEvYj7D0+5dXzE1776R/iRgxd4BF6AZVoEfkC/\\n\",\n       \"19PZlY5LUZY65QMAG9Mw6fVDjsIDqqqkLCvOL85JkpSjo2OWyxjTcPA8F88PUErx6NEjBv0el5eX\\n\",\n       \"tE1JU2b4rotlmaxWCZ4XUNU1Z6cP6ff7BK7D4QdeRkrN+Lm8PMP3ApQlEVLghT5pmpImGUYnMUy4\\n\",\n       \"OL1HnmfsPXOT55/9btq20txkKanqmsV8xcXllDwv8Fwf23WxbYuDgzFpkhD4uySFtsZdrVbMZhPS\\n\",\n       \"NKXICz7z6X9NXdcsl5qx8JGPfIR6fxdpKG7evKltPqcL4jgmyzJmswm3b9/GsvQQKPBCfa3SnEWR\\n\",\n       \"cni4j5T6SNvrRWR5Sl031HVFmS6I0wzTdJhcnAOGpo02HTvjXXpRD8syqeuKuu1YLBZcnk9QqqXX\\n\",\n       \"j/A9D98PuHXrJkWZc/b4Mbs7O9y4cUhVVywWS9pW8fDhQ+q6IfBDmvW96js2poBOQlMXjIZD6MAw\\n\",\n       \"BOHuCMs2WCyXBKFPkqaUVcnj+Up7zHs+IorWxaTjxvEBTavWG1xJWVVUxYqqznEtbW3sOgamaeP7\\n\",\n       \"Dnme03UNUupn4OjokCzWp7RlHLNczJCGSde01GVFnKaEYURRaBdH09I02OFgSFGU+K7DcrlaF2MY\\n\",\n       \"j/fIspRlvCBNk+08ynW0NcRyuVyfAioMw8DzAmzLRim9sZyfn3Hr9i2SOObk+Ij5Yoa0bIosXYel\\n\",\n       \"eARBsD2ZW5ZDkb/3EFNKg9lszv7hIR//+J+jqjfGYDZvvPEGw/6Ao+NDLCl49927VEXFs8/e4uJi\\n\",\n       \"QlEUHB0dMZ3O1kIpiee715rFb97gXvcyMQwDIU1Obt4iDCPuvvsOlmnoIGbVYRpiHT7errUgzvrf\\n\",\n       \"/mT2wXV69Kap/Vbr296Bf+7f/tpVYb0WCHpFbL9yI9wEPmzWdcaKlJKyXk+JNfETKbQYQh/JDKo1\\n\",\n       \"4d4wDSpX/566rPjFX/wl4lXMf/yp/4gizXWGoxCEQQRdR7F+YJSQxGlO0BtgOw7xakmexqi2hKbi\\n\",\n       \"cH+MG1iYlsSUJk3dUOQ5bbPp/i0cz8O0tLmTZVpYhklR6eN827VbiOjj3/djf4KfyPvr/fVnY4n3\\n\",\n       
\"KKh/5+/8XYRl8uKLL/Lqqx/CtHR4+XK54tatm1umku+57Ix2KMt6DaHBeLzD+fklu7u7W1jWDzw2\\n\",\n       \"/im2fWU+taasbLvyjXpbSv1CpTaBEDWz2ZQHd9/F9WztcqcaPQxd6zU2lOXrhXtDqd7Uu00H/q1C\\n\",\n       \"jb/tHXjXdXhr74ZNV70p2hsc6voQczNg2pjAVFVF27bbLn4rpW87lFDrHMYGW+q8vrzIERiQVZzG\\n\",\n       \"M/7X//0f8sFbz/MXf+CHCB0Xr99jejHBq+H09BJ7p8+g38N1HYpap+Us5nNs18U0JP3+AMeUxIs5\\n\",\n       \"d+4+YLgTEYQa3x1EfXZ2Q4QSW9l32+rk7appMKWhY5ykwF17KNe1tlV9f72/3l//futrr/0RP/Vf\\n\",\n       \"/hc6GSvUNs9dB5Zlc/fuXY4ODmiahjAMmUwvOTw4Znd3wOXljLOzCw4P98mygjzPOTzcJ80ysixd\\n\",\n       \"Q3cty+WK0WiEEFCW9RrmUOuhqroaRhprLx3DZndnTNe2XF6cab68YdFUBXVdasrhUz5Pm477upT+\\n\",\n       \"aT3Le61vewf+h1/8N8AVpnQ9Ygj0LnddZbl5cxsJ/fXOXRkmhpBYpqklyFUNSnNMpSEp6xo31DDE\\n\",\n       \"dDrl53/pF3nm+ef4yCuvEto+SZJgei5R1MeTWk2YNRXxckVRlkjDoj8YYNkeRVlqvLNtePTwAePR\\n\",\n       \"DnVV0qgSITqGwwF0EHg2RV5gSIntOmtZfYDjeTRVjWmYJEmMYRrb3b5tW773h/7yn+hn8v56f/1Z\\n\",\n       \"WO/VgX/mNz5P27X0+32SNMWybIqi5Nlnn2VyOSHwHLI0Ydjva3695RLHMTduHG8hkDRN6Q96pGmK\\n\",\n       \"aRpEUQQIVqvleuCp5xJRFG7tMDY49rYj5xrGLfQrk8k5d959h34U0DYVptT1yLj2Pq4X8utd+WZ9\\n\",\n       \"q0zMb81R+RNY14cBm/Bh13W3xlWbP3NdxGOa5jblfdN1a+hk7QS27mANwwABvX6PRim8KKSsK+49\\n\",\n       \"esD/8L/9LxyP9/lzL30Iz7Qxez5hv0/QGdhVx+PFlHmd45oWQRAx3tvHsS0ePXzA44f3WM0nuKZk\\n\",\n       \"EIW8/NLLuH4Iho1p+UjD5eJiztn5BZ2SeH5A1O9rTM+2KOtK046WS7Ikxfd06ADoI9SGzvb+en+9\\n\",\n       \"v/74lWYlCE0e2N8/4ODgkCAIePjwoZbiF9pFMMsyRsMhcbwkCD3u3L2DkILTx49ou5qiKOj3exiG\\n\",\n       \"ZDK5pGkqpDCI40xL3DtYzFdUVUtddbStFv90LagO6IBOY9hto+mQOzt7PPfc86ziBCEkVaOdSvV/\\n\",\n       \"uuvenM43as2nA2y+1fq2QyhZlm0j02zbxjS1g90G3N/wIzeFemOYvnEl3EAppqn9jquqoiwKDMPA\\n\",\n       \"tR1M02Q6n2uesmOj2pp/85uf5ZM/+Am+/7s+TrlKqVE8uHMfWxi4SFZJzOjWDTAky8sZZd1iOw5R\\n\",\n       \"r8eNo0PqsiTLMh4/PqXpNL/c9ULG+4co1VIUGUnygOV8SZL8EbdunpCniU6q9xxsy6E/GNAPB+RJ\\n\",\n       \"QlmUVHWJbVtIU8vs31/vr/fXv9+KBiPqMqZpOs7OzonjmFdeeYXRaEDXKE4fFYxHI9q24fzsnLIq\\n\",\n       
\"2Q/3qeuas7PHlGW5pgHnW1Xp3t4el5cXWKaHbRrkmZb+9/t9qqokjpNtod105aYpaOsWKQxsy6Tr\\n\",\n       \"gE6wMxpjmiZvvfkGnq1tJwx55bK6Ydlt6NHX15/6IeYXPvvrAOsLcBXg8DSQf+1ntzvXBl7ZcL83\\n\",\n       \"FrFCKSxTZwMioGoaiqZmulrwK//0/2RnvMuPfPR7KJoaw7UJTRdP2kyXc6TvUjc1ltLT9toSgEFb\\n\",\n       \"N6i6wrFNaGosw8TzA+oWJvMVDQZFWeM4FoKOui5J4gUGLUJ0OiFm3YWblkUYhVR5RbZKGY93qJsK\\n\",\n       \"KQV1qx0V/9KP/81vuG5f/Oyvr0ULM8IwpO06yqriwYOHKKV44YUXaBody9a1Fa7nk2UlrhtgWg7L\\n\",\n       \"WEuXTcdeZ4jqCLCu7hBCYRsmqutQrT5Wer5PVVekec79B/fZ299H2pb2yW5a3DW7p21bHMehU4qs\\n\",\n       \"KLFsWyeEd08GS5umvd2sNxzwpmk4fXRG1BtpX+yqxPeDNR++otfraQ+QRkvdPc9DqA7DsqmblrPL\\n\",\n       \"CZ6vHQyFkMi1vadl6N7E9nRyim3oYbbnemRpSl3XVHWtxUNdp1kLlrulrZqmRZ4XVHVDs/boyPNi\\n\",\n       \"O4wejzW1sVUKKQ3KqkZ1DVJ12q+8rTBUTb8f4YQ+bbcOKBFaiq4UtE1HVpS8/c4ddnbHRFGPVgii\\n\",\n       \"XoQUgqauEBIsQ4LotFGVu2lcDNpaaS/ytWFTU5c6ji/PMSybi8mEOMl56WUdLl0UlRax0GFKQVtr\\n\",\n       \"2l7XKa2NMEySNMO1TX71n/wif/FTPwYorcpVEoRFUUOSZWCC59vUTYNnejR1q5+XNe3PcZxtvFzb\\n\",\n       \"am9tBUjTwPO87XNdlfXWSTQIQ6RhUlUVWZbhGJLDwyPOzi6wHJsf/eSff08I5Vd//TfoBTq/cjAY\\n\",\n       \"khcFXbuOLEMQhT6r+ZzhcIBA4QU6eejk5ITp9BLHcYjjmN3dXeI4pt/vU5a59pufrhgOdzBNSVFU\\n\",\n       \"1+qToG31fbMpuqFnb5k9wjLWXO9ta850esGdd9/Gdx1M8aQN9nWjvus2skKIP91+4K7rPuFvsjlG\\n\",\n       \"wFpaqgRCSjoUzZo3uynilmWhWp3coSfGCqNqka7NtEzpqRaV1iSBwSI0+Lmf/UW+e/85fvC7vofK\\n\",\n       \"NRBrc6xVlpGZBWbo4Lo2jhORpimTyYQ6LrGsgNFwh2hvd11Ap1qMU5U4jsPB4RDXc5jPZyznKdPp\\n\",\n       \"CsuyeeGZV0iynDhOcPxjsjzmdBLjug1ONKI/7jHe2yVJE2zLpioy9vd2v2nyyB98+Q/puo4gCMjL\\n\",\n       \"grfeeouXX/4go9GI8XhM13Xs7OywWi3pRUPqusZzHRzXJE1XDCKXew8e8NxzL7BYLLShPRbpKsY0\\n\",\n       \"TFaVNgQTa4/z2WLOpz/9aW7dfIaTkxNcw8Z3dQfRCEWex9t5hCHW1p11QZ6vtIJQBixWS6RcC64s\\n\",\n       \"i7qpUUrSAm2jU0xe+eB34LsdXatYrVas4pyL8wuUkNy99w63nrnJaDRgZ/+YosqpKi3wsS0LRMWw\\n\",\n       \"Z615wpqP7DoOZ+enrFYrppcZcbwi8DUtbX+8y8XZObujXUZBj37Up6oalssliyRZU99qwlDDZpZl\\n\",\n       
\"MZ3OWC6X0HaM9/YIgoCuK+kPIsoiJY4XyLpYBzTrU+L+wRGm5WpRVtsx6EfUdUlZllStlpW//eZb\\n\",\n       \"fPnLX+ZTn/pLGIZEioK9YQ9UhmlaGMFVYkuWFji9iCzPmC+WmKZFUVQIJFVVkuf6JGvZJlVV8nu/\\n\",\n       \"90V+8id/gpu2jWG0SNXghsY6XKEkLUtsx9MWzNLA8T29+boOQT/gfHGJ1++RZTnn86VWSmIwGI44\\n\",\n       \"GR1RliXLZUxdNDRiheu6jEajLQlhNptRV1epWf1eQBhqw60sXmmevONqm15HBx7Hccx8tiAMtfI5\\n\",\n       \"CH3OLs8QhuDi4uyb1pCj/R2qIsdxHB4/eqidD02DwWDA22+/jVItt557jrquef3119kfj3n5lVfI\\n\",\n       \"spK2kyhMhqMx9x88ZLw3RpoG07MFqyRl0B8xW0xZzOfcvHmLXi/g3r2H9Ho9+v2I+XIFhkEYhDw+\\n\",\n       \"P6ff7+tmpmrWnisK0zSo65bx+JCuk9y7dw/PXTsSdh22qV0e66rCth296XUdUv7x5flbduBCiBPg\\n\",\n       \"F4A9dMv8D5VS/7MQYgT8MnALuAv8NaXUYv0zPwP8hqeQ+AAAIABJREFUZ0AL/G2l1L9+j9+77cB/\\n\",\n       \"7/P/crvrXE/Q2RrCrLncaq1eQl75e5uGoXc7BYaUVEWBsE1EC5Zh0EhJbsBkteBXf+X/4OTwBt/9\\n\",\n       \"oe+kmq2oDIlhmgwGA+y1QY8WjjRka3Wg6jo83yfPSvK82JLrtUTY1x4brQ6R3UA+jhPQtdA0HWXZ\\n\",\n       \"sIxTpDSwHZu6qUC2JMmKuqmIfIdBL0Sg6EchURhQ5Dqx/Qf+w7/6DZ/Hb33mn64DgLUIpKoq5vM5\\n\",\n       \"Ozu7SClxHEe/Dylpai2GaduWJEu5d+8eeZ5jOTYf+9jHMS0L27JZLpfrtJEapdapJJ3iwYOHWvm4\\n\",\n       \"M6auasqyIsszPM8kDEOKomAwGFCs4arr5mGghReqM2FN/yzrSououo6qqinLmrYDnbxiIdHFyHEc\\n\",\n       \"bMdHISiqCt/3mc4mtN06a1AKhHCoaq1YHO/uUJfV2gtDM5dMw6RrWzzPxfE8ylILhLI0IQh8fNcj\\n\",\n       \"yzJUq7MNV4uYnZ1dcI1r/hSCqqpJEm3ze3Ki+elpqj9P09B2qaahcF0bwxBIQ1+7pm5JsgwhLVar\\n\",\n       \"GFt2CBRRFCKEoteLuLg85/zxGR/72Mep1h4ZXacwDdCsNEEnrkIB2kbfe4Zp4PkBq1Wsaam2uxWn\\n\",\n       \"GabBcrng8eNHlGXJq69+EFBbEZCUUnOREahWaSm50P5DrGm2ddNhW4J/9qv/hL/8n/yn1HXDYDCi\\n\",\n       \"yEukNMjzAsv2qOuGrlNEUURZ5tvIs82peOMWuukmN06ijusDYJoWXac57UopOhTSMPA9jVlvTuQb\\n\",\n       \"Uysl4C/8yCfeswP/F//2t4k8Z0vBTVPNJe/1eiilmEwmDIdDJpMJN27coEhToihiuVxwcnKDNM20\\n\",\n       \"CZcQ5IU+ZbmuvWa3OWsVrUVVaAbcaKRtdKVpYJgWZV1j2yaiURiGZs1pbrjA892t33nbNijVcf/+\\n\",\n       \"fZLVYy3jF9A2NZ7jbAOTu66jbhqk1IjEBz/yg/+vO/Aa+K+VUn8ohAiBLwkhPg38TeDTSqn/Xgjx\\n\",\n       
\"94C/D/x9IcQrwF8HXgGOgc8IIV5USn1TU9vrIQzXC/emEFRtg4keDGDoG3DDlxRK0TWdNhJUYHse\\n\",\n       \"yyKl7wSorGKhKlahyS/943/MbXfE9334u/B2BjSeT10rkjTlnXfuYNsWR0fHDAY9pBQ4Tr6Ox3JY\\n\",\n       \"zGOGwwFhGJJlOVmWURTFeuDRx7YdPC/Yyobj1SXSMHEc7YC3vz8mTjKm0ymdarFdE8/zsRqTvCpY\\n\",\n       \"3n/EjeNj8rLm7r032B/vYnyT0fKbb765hR6EEJyfn7NYLPjUpz5FFEVrwYOFY1u0rR6KfPWrbzAc\\n\",\n       \"7PCx7/4Yan3zN21DVbakaYJlmqRpjOf5uvibJnfv3f2/2XuzWMuy877vt9aehzPfserW0N1kjySb\\n\",\n       \"pEhqoEyZlCiREuQMssUEEhIkVgLYyADEyEOeoofAiPMQBEGeDDhIEMCWFUZhNJGaaMlSRIpDk81u\\n\",\n       \"suehhlt15zPuea+18rD2OVVkVzft5IEEog1Ud9W999xzzj57r/V9/+8/8K5H3t1dhB5Swtn5SWdA\\n\",\n       \"pYnjGM/z7CLYzSPyPKfX6xHHMcvl0lZVecl0NutgB0m/P0BKw3CYkmX2HPf6A1qlcERoVYRlRVHW\\n\",\n       \"FGWN53vMZlMuH1ymaWtms1l3cdecn5wyGY/xHBfhw87BAcvFjLq2ARGL5YKL2RQhhTXrCn129nY7\\n\",\n       \"qKFBOhLhCsIoYjQZc3h4iKd8lLaL5NbWFm4FZWlIHJ+2yfA9QbI9QkhJU1uoatGFDsdhgOu5BGGE\\n\",\n       \"7wfsJANmiwVp0mN2foQU8Oqrr3J0dAelLHT1q7/yK7Y7C0K0tucRo1FtAxhMxy+2i7j1yYiTmPl8\\n\",\n       \"1gk+fMqyoChzvv3tbwPwxBNP4HkeV65cIQjsItrv+0gp0Kq9R09zLISTZSvCKMbzPaTrEDsO/V5i\\n\",\n       \"u6HpDNcPmM1muI5PrxczGIyoyobz6ZQsy8nznOEwJU1tbmXd5Vienp4Sx7H113E9GwbdNuS5tfyt\\n\",\n       \"m5YosvJ/pRTz+ZymrjlZZqRpSr/fJ8/LrgqVG0LDg47J9jbV0gqH9vf3WSwW9Ho97t69C8B73/se\\n\",\n       \"bt8+xPd9Tk5OePyRR7hx4waO47CcLzk42Oe5575DHMfs7e+Spj2+/fy32d3bxfEDludT4ihiZ2cL\\n\",\n       \"jFVIb+/u0rQt5xcXHFzZZzpbIFpN2usxWyyIoxDXDzk7P2dra4I2Bm2s6OjSwQHPf+uQwHMBQ7+X\\n\",\n       \"UhZ202m79dAa9om3YOLfe/xrYeBCiM8B/1P356eMMcdCiD3gT40xj3fVtzbG/KPu578A/Lox5svf\\n\",\n       \"83s2Ffi/+IPPbgQ66/YT7gs4Vl1lfB+WKqUEpfFc1zqJGfv1EoNQGpQhHPQ4K1f87hc+T70s+MWP\\n\",\n       \"f5ImKxlubXOymtEPrFLS87z71IIXeJ7PYNCnrmscx7V5ilWG73sdrufjdGnW8/miw/jUJo3c+g0L\\n\",\n       \"6toGBWgtkI6L5/vW06WtqOuSVis812E5X5BlVmQwGvRI4hDPdfiFf/NX33L+/+P/8JfY29tjsVjw\\n\",\n       \"8MPX2dnZ4caNG0gp+fEf/zEWi4VdQBdLIj9gPp+zvb29cSVcVzRxHFtFqOgqxo75EkURYRhRlnUH\\n\",\n       
\"v8SbaibPc1arjNuHN3nqqacAged69Ho9tNbkeb5J20nTnq0aMfQHPRzHIeuSxTWa+WyBMYbhcEye\\n\",\n       \"54CkbQ1RGNkq1A9oW43juggpmU6n1E1FnMQEQUDbaJIkts/nOORZjtEa1/U3N3qapgBUSqG1Yjab\\n\",\n       \"otoaY1qkkDiOpZtGUUQURFZ56NhrcrGwn6v1FId+v9cVDeqesEx6HW7uQtcKK6WYzefM5kumsznG\\n\",\n       \"OMwXS3bGPXppzGhsvThmsylaK/Z2d4jjhLqyuLFS9ypwY+wCvqHKSm8zt6jbhtlsxp07doG6dOkS\\n\",\n       \"47GdITRNtZFkJ0nCcrnsYMoWKQxtq7pZk4vWBke6aAyLlU0w0trgOIKvfvlLfOhDHyaKIhwvoKrq\\n\",\n       \"Dt+WVGVj7SDixLodGuurUte1LVwcZ5N6VJbVZtZi4TnR5cG61HVDUVYbAzopJUEY2q6nLIn8iNbY\\n\",\n       \"eDWlFZ/+uY8/sAL/3B/8Kdf3djdrx5oEkaapDfuI7bVy+fJlHMehWC0Zjyf0Bz1efumVbjBZs729\\n\",\n       \"ze3bt+n1+tbnxfN4885t3vWuh7k4m4Ix9Pv9LlCjsEZfUjJbzImTmNBxyLKCIAy7rqPpZjjWzsDz\\n\",\n       \"XIxROK5LsZpyfHSXqshp6hJhdHdtdkrMTUEr3pFG+K+MgQshrgMfAP4K2DXGHHffOgZ2u79fAu5f\\n\",\n       \"rG9jK/G3PeI4/i5F5ZpGswk0bu/5oGymtsI6+kkh0UIjux0rQBD5LrUvOTEV/9fv/R7+rOQTH/0o\\n\",\n       \"tYDtvT3c1lY6y2yOKzyGwwFN0zIY9BmPxxY3Pb9gPB5b850wwg9dsmzF0dFRt9AlxHFMHKf00j51\\n\",\n       \"XTGbzTk5PsPxBdKRJHHK7t420+mC+XxJVZd4vt8JlxKkhPliieOFRLFASsPZxYxbt5eMv8dYZ338\\n\",\n       \"1E99jLOzU0ajAVpr7t69y8HBAW1b8+KLL7C9vY3n9dnb3aHMSyaTCYPBYOOjvPYitt4Xawt0Q9PU\\n\",\n       \"JInlwX/xi1/kU5/6FJ7n0LYNo3EfIRyiOAQBTz31FCcnJ1y+fJmyLDAYBn0bJLy3t4cxgqqyqTBF\\n\",\n       \"U3PnzhGe73TZpD6e67I1mWwWF9fphtWtoKpqO4BiSa/XxxMuQsJg0KMo7HBrPpsxHAw5unuHy5cv\\n\",\n       \"07aKKAg3mGzbthRlSdaFSITpAATsXzqgrivqqqBta4zRNmG+rJhMXMplxmJ6yvXr1xmORjhC4ruu\\n\",\n       \"3UTobI97FnISUpKXDWenJ3ie3eQ9aYdzjuNyfHTC9o6Vvg9HE8p8xv7lA6oit/7bdc3ly5doa+sd\\n\",\n       \"E8cxQRBSZLndDIzuOo17EASmoixLdnZ2eO655zg4OOCxxx7tukCfLMvwfRelmk1ItuO4eF6A47j4\\n\",\n       \"nkvbVliJxDpAQFtXvFbR71mbiHW7f+fO3c5dsCYwdtE9OjrBcRx66eDeIDsI0F2FfHp6ysnJEVtb\\n\",\n       \"O11nFjCZdJmwiyXz2RIhDaPRaJO67rly81msIT9HCkbDAW1VEwe2o3unY2/bDh/7/b6lDHreZhH3\\n\",\n       \"PI8kSRBCcHR0xNbWFo1qWOVL8jLn0cfeRZaVHB8fM53N2N7Z5fDwkMFgQNtqrl27xs2bt9nf30ca\\n\",\n       
\"zY0bt3jyycdtUVNW+EHAZGvMydk52g+J05Tlcrmxvrh7fMRkssUqz4nCED/wWWU5vhMxmezyxuuv\\n\",\n       \"EvghVb7qcldLlGo2C7iUzju+93+lBbyDT/4P4D83xizvZ4UYY4xYB7w9+HjHEv/+xOZ169A9p13M\\n\",\n       \"78vL21BuzD1fk/uNrgb4LE3LQhq+8rVnuHt4h7/9sZ9lGCU4QcT5ck4SxSReQLgTI6W0hjltQ6us\\n\",\n       \"nWkv7ZMkl3Acj7IsWS4XIG271+v1CIKAsqw5Pz9nenHB4e1DHMdlMtni4OAKtSo6TmnNCy++gOv6\\n\",\n       \"DAZDPDdAaUUY9lgtF1S1TSPf2t6jbSqWizmt7+EHLkfHxw88V4eHt9na2uL4+Jivfe1rDIdDDg4u\\n\",\n       \"dV4TEbdu3WJvb4+XX3qZNEr5N/7W32I6nWGMvs9Twuvc4XJczyfPS7J8ycuvvEiWFbznPU9Z21sp\\n\",\n       \"kdLZ0DyFgH4/YbVacf36VU5PzwF44403GPSHDAbWfxwka/P96WKJ71vLgSxfIQTMplOWiwXPPPMN\\n\",\n       \"/t7f+/u25RaSKB4QhiG9fp9VnqF1y9HxXVsRui69NGUwGHD58mXmsylJvM3x0RFVXbNcZmxvbxNF\\n\",\n       \"Ab3egKZtO4xzwfHRGQaDEGZTnQqhmV6cd8NVybeefwnX9RjELs8//x2asuJDH/oQ4/EI3zcIycZt\\n\",\n       \"b9UlsW+Nx7i7O2itqIuyo5UecXp6xvve9zR1o4miGMfzaOstEBAnCacnp2RZzsXFBb7rWfHJasVi\\n\",\n       \"PieNrYMfwt64QdDZIkuB79kN9NVXX+XKlSvd5tVQdRuStZld0uv1uDi/YDwZ0zZWUzG9mJL2YqTQ\\n\",\n       \"rMME1vdNXuR4nkfVLfp5XmxS2i0915qKLWdzDg4u0bYajOwCGGocR1K3JcZodnd2uH7tGqssp2ms\\n\",\n       \"Pe1samGv/mDAQw9fQ6uG5XJJWZTWs186G0jOpmo5XRFVcmkyIvIEJnR5p2Ax3wHhe51S0hYrfvfv\\n\",\n       \"waDPcmnZS77vkecZSRoRRBFvvvEGF7MZg/6AR979CK+99gZV07Czv0etWrJVQSqTLulngQM8/vhj\\n\",\n       \"HB+fUtc1o8mEi+mFtd0Y9BGtZjqdkiQJZWl9jS5fvsRqlQOCVZbjVrWt7qXA9wKCIKYqM/wwBN0i\\n\",\n       \"HQfZsenabm7wTsf3XcCFEB528f7fjDGf6758LITYM8YcCSH2gZP1GgNcue/hB93XHnD8OgD/8//6\\n\",\n       \"Eh/+kffzkQ9/YONlHATBJsQhCIJ7FrGbQAZD0A3e1kNPLQy6VrSJy//9V1/i+S9/lY+87wPUvqRR\\n\",\n       \"LUFWMRqmNKGHWJbUdYnuFFO9fkxV1rRtzSpbbCAdC5uk1E2FMYr5bInn+QR+yO72DkVRMhkLsiyn\\n\",\n       \"qStywPHt67HGOC5KGVRbdwGoAWhFU9e40sEow+2btwhDjzgOSBKf5UITBA/G+5IkQXX0qEuXLjEc\\n\",\n       \"Dnn00cc3FcfZ2RnPPvssg/6Qn/75f8yq/ccQ2Ep7jaRl7b1/VLW9AvoT+2d9GOwEWnVXiOiukhYI\\n\",\n       \"+/br4z37tfX/gbfcZMPud8a9t76Xj30C4E8J7qO8190v8K3PD9vhWx+3WoHj2tc4Gtuv7e199894\\n\",\n       
\"LpQFBD4c7L3lVwBw7dK9v7/3iQf/zP1H3ANf/AlJlFBVFdPpOa7nEXUsKqvcgyiKO59vF4NmuVgQ\\n\",\n       \"+NaWtGpbtnd3GNYDhDCEvo/SLb00xevCHgxrOwhQjUJ3BlZF50kOmt3dbaS03ZPvWSgHbCCJlJI0\\n\",\n       \"SRAGtGpZzOdMxiOyzLpEam2ju7S2ENU6g7HXsyZcge8TRdEGdtjasjbJSRpTFhYecD0fz+8cEIUg\\n\",\n       \"duwH1rYty6WF0cLAQ4QBvZ5VPreNpipytFG4jsSN7WvWxtA0tnhyHJc8WxIGPkkc4Tmg6xKj6s38\\n\",\n       \"5kFHnS+IkqENjigLVqsVUgrrlug6uK4kzzOEsNGBTuByMb9gtL1lnSVXMxYvr9jd3ePGmzfwwoAw\\n\",\n       \"jgjSkOVyYaHTLmT67OyM7e0xRVGyXC0Z9PrMlwuy5ZK97d1NMpiFKUtOT89JElv4jMcjSwbIcwLX\\n\",\n       \"RxjDaLzFSy/e7QI4KkDxzDPP8vVvPt91Se98Xb7jAi5sqf1PgO8YY/6H+77128C/D/yj7v+fu+/r\\n\",\n       \"/1QI8d9joZN3A1958G//dQD+g3/vcwghNgt2ntvByDqVQnr2JUqzVlbaynu1WhEEwcagSkrJscqZ\\n\",\n       \"naz4xr/4l/zkR36C/YPLeGlMNl0yu3tK9npJsD1kNB6zNephTfxLTk9OiJOYycSuOGvPlaLMUB0n\\n\",\n       \"ejgcMh5PUEpzfn5OluUb45n9/UtdGsgRTaMoqgIpHOIkIU0CfN+2+IeHdwiCgDDw6KUpQZjg+n5H\\n\",\n       \"P2u4OD9lvpgzeBsIxbrLWerV1tYWeZ7zzDPPEMeJrYB296mqhr29Sw98/F8f/9+OxXxFEHpcvXqV\\n\",\n       \"WrUsVyuW8wVVWeK6LhcXF2xv7zAa9ZCe9bQIAp+yyCnLgqoqmM8Vqqm4ffsWH/sbH6UpavKOD9/v\\n\",\n       \"oKg1rVbcl+Xq+Q6np6cURdHNZxzSpLfBse3A2QpB9vb2qNaWsolNr0riGNU2GARKW+YUHVZujGE+\\n\",\n       \"mzMeTfC9gLyDcqzHft4pnj2yLANjmF6c4Qf33PSkutcFu25nHdvW333yBJal0yoc97vnWY4T0bQW\\n\",\n       \"UtFabGAc15eoViEkuO8AJai64ji7Q5Zbf/blqiUMI6qqZL644Pr165yenqK1IctWHCQH3UxnDkLy\\n\",\n       \"kQ99mG9+61mSNGJnbwutFC+/8hLj8YQ49MmX1l/94YceoqoqXn/tdfb2L7GztcXJ2Sm9NMEPAl5/\\n\",\n       \"/VV2dnaJk5ibN28yGg5Jk4jlcsn2zjZnZ+f0+32kgCB0qcqG4WhEVddMvBThO6im5CMf/gAf+pGn\\n\",\n       \"LTdfOPyT/+Wfve17/34V+EeBXwW+JYT4Rve1/wr4b4HfFEL8XToaIYAx5jtCiN8EvoMt2P6++T5T\\n\",\n       \"0vXFuvY4uT9abS3m2GRiKoXsEkCklChsnqWUEuE6iEmPP/4fP8uHHnqM6wcHaE/iNy3RsEf/0h4y\\n\",\n       \"q1ktl7x5dsTJnduMBkNGoxHDgQ0SzlYZaxcy13WJO5Otphte3blziO8HG5HCmgZ1fn6CkIIgcPH8\\n\",\n       \"mDRJN2yIvF6xVAuSJOHqwWWL0eYFqm6Y5WfWnlNqoihksZwSdUKLBx5CkOXWR1t3wQFhGHF8fMqT\\n\",\n       
\"Tz3FbDrF8yNuHd7lAx/+Pp/sXx//2ofr2e7vzTffwA0DKwyKAtIkoSgKstWK/f19qqqkKTKEY2c5\\n\",\n       \"nu/g+QnD4YCiyDaJO9PplK3RCBHHtE2L6ehjSncsrO46t+203GDgQWjtTvOi2EQErm8zSwdVtKol\\n\",\n       \"iqJuSGxwpQtGIF0Hzw3QZg1bOmitiMKY2XSKAPzAp9/vMxkPmc3nlKVD2y5RqiHLloShDRheC3DW\\n\",\n       \"1DfVDYzbVqHWgcydJYbSClM1+K5P01R4fkAYel0KU40UgsB3kY7sONCSVVHY4WccfZdW5HuPum1J\\n\",\n       \"4pgw8MmzFUa3OBJ8zyWJYzskxCDQ7GxvkS2W6KYlDiOklDz7zW8wHo04PzlGChvycf3qZctmSQcY\\n\",\n       \"1RKFPvPZFN/3Obi8z2w+I8+WCAFNVVLXFTtbWzRVyfF8xsH+Pk3TsJjNSdMe+XJF5PvopkW6LlVl\\n\",\n       \"z5U2LQ8//BDz6QlB4KB0C8bO+jzH35A53vaafKdvGmP+grf3S/mZt3nMPwT+4Ts+633HGuteU4nS\\n\",\n       \"NGU8HlN2fMzA8zf4rVYaR0gcz+7yRZ4jPNdSdKTgN37jN2iqko//zY9TouxN0TQcnZ5y5J7SMy57\\n\",\n       \"6YinrzxJUytOjo955eXXaJqGnZ0ddnd38Ty7WVRVRVHaRGzRcTu3t7epqorFwgoXkiRmMEg3tMfZ\\n\",\n       \"bMb01FKYwiAgDiMbbLxaMT0/p65r61I4HJIk6QZnfumVF3hzMWOyNcL3Hc7OTh54ro6Oj0iSBOlI\\n\",\n       \"iqywWPzZOWna5+WXX6Wua9Kkj1LvhBj+9fH/9gjjkPFwgBCCZVGyXC45PT2lLEv6aY/xeEwcRyxX\\n\",\n       \"S7SxlfHZ2Tm+F1KUJUkUAobAT+yQrKmom5okjtFKIaRDFPdQyqBNi+ZeYVPVNsmnqirCKMKg8IMI\\n\",\n       \"1RiaRiGEIQgCoihiubLhvrOZVezaGEJFVTQ0bUurFRqbKdvrpTidT/xkPLbDyU49qHWLIwXn52dM\\n\",\n       \"JhOGwwGLxZLR0HaIGlC6JQzCTsthMVttFJ7nWrZV26JN3WH1LW3V3nPm1BplLGvE9/0ueWhi4SjH\\n\",\n       \"wY1iLmYzZjPrDf/Rt/lcqkaDUyDEWs0tUKphsbCCp7q2uo7z83MWc4sbWtioh+e45MsFbWXZM2EQ\\n\",\n       \"WP63Y4hDl9nFBVVV0UvTjlihybOMy/v75GXBKsuYbI04Oz9HCo/xaNh56y/Z3d2lLHILGXV5B2Fo\\n\",\n       \"4eFWW/voprCc9JPjW1YsJcDxPRzh4EiPzhXrbY8fuBJzvcO4rttZNt6DU4wxm+obbXDW6ReNomqq\\n\",\n       \"jkKWITyXr3/5GY6+9RKf+PTPcdysiGpDJBxEGHIluUQjQUlYlDXh7SMqNyRNhmw/uU9dV5RlwXKZ\\n\",\n       \"kWUrjNFdlW2TqDXa+nd73oYu6Lou5+fnrCnu60U89H3qssEoyPPVhj41Gg0tDUtr6rKgLkuU1ixX\\n\",\n       \"S4Qw7O9u43gOr73+MmkvfeC5+vSnP8ULL7zYWegaMIL9/cu4rsd8vmA4GFt88wdgj/D/h6MsM158\\n\",\n       \"+Q5JHONHKX7Ht9ZKY7RmsVigtbUV8AIP13NJ0gjXsfYMUkJTVZ3VQoHnulRVgRQ2I1QgNqHYQoJw\\n\",\n       
\"7lXXYRgShiFZlhFFAa26R7ddUyfzPKeqSsIooOyKj7OzM/b39yjzmjRN6Ro3kJbVVRQFvucTRQnL\\n\",\n       \"xQopbMReXdcMh0OiKGJ3d2djcTwY9CmKbJPmFMb35jJO13EIaYfGrhsjhGW7RLGFXKqsQnTKXNfx\\n\",\n       \"8HyfvCw2NE2wNM68KDBhDykl4509+qP7hjTfcwjHJ88X2O457Fg2mji2EWxVVTKdtvS6+2p2anHt\\n\",\n       \"07tHpGmCMJrVbEbT1F0+b21TsFyXvGkQCBZdQMXWyK5RJyc2F7asai4uLjAYmspG/8WxJUh85zvP\\n\",\n       \"s7Oza7/frWVJYvnyjTa0bUWezXFES6sajBEbnYuFoayFwzsdP/AFHNhMWteY8pqXHMcxBRX9KMUF\\n\",\n       \"dNni+j4tBuN5EFjj9tOjY269cZtf/OmfYzzYwm00XhDSaI1uW7J5hh/4+GFAL0zRMkC1Ga0qWOVL\\n\",\n       \"ojBiNLEKsDgZsVquoIvciqLQ8rhdj/lixtnJKb7nEcUxg14fz3M2VbkxhijycB0X4Tok6ZjlMmM6\\n\",\n       \"O9+o0UbDIV5glYTL1ZwwcpGOR1kW3L51xCc+9vFuIPZfv+U8DfoTfuxHfxKlWp599lu8+eabtG3L\\n\",\n       \"Bz7wPm7cuElZllaQ4j/4Y/3sZ38V1/WR0qGpawaDIdeuXeW973sC62vcgjEkcUKR5xwdHvHMM8/w\\n\",\n       \"0Z/4CYIg5PD2IYN+yv7+Hn4QMJ1ObQ7ockkUp8RxQqM1y6Xl8cZxvHFZi+MIgyCKIppacTFbsr29\\n\",\n       \"y8npKU3TUjcGz3UtbTPwmYy3rMWu53N8fMKrr75qlXXA1u4+nm+VpFI4FEXB+fk5ZZVzdnbKfD6j\\n\",\n       \"LAuuP3QNzzQ8/fTTJElMUZS2rV0sOlM0TRhGpGlqvaSTIUWeo7Shad9a77XGY7J92YbxNiuc0KOl\\n\",\n       \"RWHFP34cUVY1eVawWmUM+kPL6IkDmrYljVO7sIQJnhcR+i6urBHChoYYo4nSAa7vU5d2yK613fSz\\n\",\n       \"1YpstbBc4jbAk5YRrTplskCQJgGutDd8EiZoIC8Ur756mytXr+KFAXVT01Q10khcz6cfhEjpoFpF\\n\",\n       \"OoyQQmJQIBxmizlZtmJnewc/CBHawfMCIt+gTENVFhR5QaMsLNi2lkfuuuuQXkkURriei+tYdWYo\\n\",\n       \"7aJq51n2eftxYG0yXEPTKjwZ4TsRWgQIYRAtuO9QiYquGxHSo240YWJnCYF0cFwXRzrkVYH0rMvp\\n\",\n       \"yPVRrWK8v81ytbTpR1KQhBG3bx+yd/Val4oU4TkJ0hEIKSiKFdPpBYNBnyzPEFLilBX9/oCyrNjZ\\n\",\n       \"2aIsS4wyeJ7D1tYYpWpAsbW1ZYOmtUI1NbUqaJuaNPG58eYhSRjjSUup1MpSo6X8/oXYD4WZ1YOo\\n\",\n       \"hGs+Z+tqVFmT+hFSGUBSC4OIQiplcxL/8l/+BYHr8sGn3kfQtUDrQaTbiTXgXoJ9VVUgW+Iktrto\\n\",\n       \"WUInIQ9D+7Ou423kwatV1k2DLUYfhuFmo1mHMazpj3Vrn9sKGBRhR+pfm+CUZYlSiqIoGAxS5osp\\n\",\n       \"s9mMKIp4+umnN/z3p3/kE285b8998882Qpw1JXA2m/H666+zvb3N6ekpN2/eROmWH/2x/+4tj//s\\n\",\n       
\"Z3/VyuWFpG0Vvucx2ZownZ1SlSWPPPzQRnThSpckTtjb3aWuGvq9fhdGbD1fFvMZAhiNRoy3JtZw\\n\",\n       \"SmmCOMZx7Aa36aAMrDJrpFVVNU2tmM2sSZDjeKS9PspYOK2uaxaLZYejKqqqYji0wbbaGMqixEtS\\n\",\n       \"bt64SZ7nnJ1dWBN9CZcu7TMaDQjDAOlImqYin51tAkNc1yVJEoIg2ARdryPKyrJEtfbcCCFIer/4\\n\",\n       \"lvNXlp+nbZuugiopihzPsdDaYrFkMZuzs73D1niC47jUZYV0JKWqaZWiKmtcN6Aqyq7Sa3ClpCxz\\n\",\n       \"q5TsoA2lrfS+ripAMx4NiSKfYa9HGAXUVUEGVtwGAAAgAElEQVQYBhg8kC5SCtra2glYQYxLWTdk\\n\",\n       \"eUHd2Gvt1p1Dev0e29s7BJ7XXQMtTdN0SfIODvYcCQlR5KCNxdGzbIXRkqpocaSd/Xi+QxjaMBK/\\n\",\n       \"O5/ra9wYDeZePqaFcNru7y1aaVzX60KbXUQ3oPQ8vxPFBVbUpy17x1bUik988mcfKOT5/d//PI4n\\n\",\n       \"bQqX4yAdF9Z2rd17shoSl1YpfFdvrk2bZeog1xmpRYHABm+PxmM8GXSD4V0cT3amc/cGsHVjTcKa\\n\",\n       \"VhEHVlS2XoPWh9tZ2QZ+aKFhwIscLs7PmEyG3Dm8zSCJEcby/qW4l/WrzQ95Is/9GDi81ZWwyApi\\n\",\n       \"P7AXWl4SxjEyDGi1olEtr7z0Mrdu3ODTn/w5SzWSAs+xkWW6KsmKnKIs6ff7Ngnd9/Arn6opLAyB\\n\",\n       \"lYv7vqUrzqYzVqusw7itYKfX61EUBVVV3WMFdEOcLMuYTWddUhD4YYwfBPT7fTzP2wwzl8slcRwy\\n\",\n       \"nZ6TJCnDYZ8v/9WXOD8/5TOf+QyDweC7vEQeeK4AKYTd5Tsf4TAIeOThh+3iFMdc2t9nla0e+PjR\\n\",\n       \"YEBZWsVc2rey9/lsStvUaKV4/fU3aJqGp558itF4wmq54uVXX7cBwEKyXC5573ueBCSPvPsxXEei\\n\",\n       \"teL01GYL7u7t40qH+dxWuLrD4sMwRCLIuzT6s5Pb7O9dBgRNo7h7eBs3tNCU3TQmlFVpVYJa88Yb\\n\",\n       \"b3By3NLr9xkNh7RlyfnpMdvbO/SuHlBW1qcm9FxLNTUGoZUNzA2CTaJ9nltLg7VvTBzHm5bXLvI2\\n\",\n       \"4Wk2XTzw/DmOZ+mrquHstMERCTdv3OTFF9/AcyTD4YCTk2/zsZ/8KChNo0t8xyXyJK0wbO1tk2UF\\n\",\n       \"vSSiLBuQDlVZgxvQtg1tXXN4+w3C0O9yIX22JjtkyznaGOazBZcu7aNVi9KWmRLFPlVV4Lg2zFoZ\\n\",\n       \"O79RyiCkw9HxHYR02Zps4wU29Dhb5vi+bzc66TAaja2IpKo7FadLWWXdfRAwHA4RODgiwGjrEdOq\\n\",\n       \"GqXswmy4F8TidzREIe457Lmue8/+Yl2oYQOvLUTJJsMULPTYtsqm0AsbYfaghXt9SCFpKgt7lEWB\\n\",\n       \"7wd4fkBT13hBRF1VlGXFYDiARqGVdZs0Wnc+34bAt4tuvzeyiuaoh2oMZW0hzvliRl5kHFy+RKtt\\n\",\n       \"WLjWmiCw6uWmzijye0K5jdNnV7QlsaU4Oo59P0dHdyjyHN93iMPQQrRS4PseriO72cf3D2z4gVfg\\n\",\n       
\"f/4nn9vg3Gs/lPXi7TgOwhGYTiKrFUjfszRm1+HGrZt84fc+z9NPvIfY9bl85cBWNq1NuvE8jzCK\\n\",\n       \"cB2HvCg4Pj6mrmuiMCRJ12rKiLZpO4tPs5H5Sik3oboAvm8n82Fo0zzWVUWv1+sqBssYmC9XNE3L\\n\",\n       \"bD6jbVsODg6696W4c+fOphv44hf/hKtXDvjML/8dq0jshhyu61IUBT/yE596y3n79jf/DKV0p6CT\\n\",\n       \"GzP4JEko8gLp3JeQXX36LY9/4fn/hrq2IiTdWai6roOmRQiH6XTK1atXObp7ysXFdIP1R1HciS4k\\n\",\n       \"rhB84AMf4M7hbc7PThkOrLhpe3ub4Whsbzop8TwfrRRKt2hjL2IpJPP5gjTtg4HAt1z5OEkomnwD\\n\",\n       \"bSxXKwI/RDoO/X6Pfq+PNpoir3jttdcYjPY2sxMhHKSQ9AcDhIA8z8iyhRUg6RansznY6VwE53Ob\\n\",\n       \"sNK27aby3pjp+x5BGCCEg+O8NZP05q1/uoFn0D57e7ukaYyUgmy1wJiWIHRxJYShx3g8om5KVFVT\\n\",\n       \"VzVlZVv9VoM2EuH6nF/MyPKcVZaxvbPLsO8TeJIwCK1SVggGnd902zZkyxVKKwI/YJVbAVNdVx2M\\n\",\n       \"YuciSisWy4ybN2+R9gf0+gMGg1HXbRS0nWRetS2mY4rEcUyZrRd2n1ZV3L17yLve/VCnavQQ2sFx\\n\",\n       \"bBC344oucNkgHLnhk1dVZamOVbExqFt3Omul5fpedx0rEgrCgFbds0/VyqC0ArNOZ29RWvHxT/78\\n\",\n       \"AxfyP/zCH+L5Et8PrLma6+IHAUrbXFxtBBqD0RrpOGAEbdOSJsnmHlpbHAsgW2XWVM33aVXZFVWW\\n\",\n       \"EtrrWadSPwjwfJ8syykLew82dc5gMNjoWXzf4/zsdGPq5TgS6Ugbsi4N2WoFaBxhcITAdyVVWeC6\\n\",\n       \"zgZiklL+cKfS359leb9cfpNO3yrquiVKEipqGhReGPP6jTd59hvf5On3vJdhnLK7tcOqu2iUUhSd\\n\",\n       \"LajFsSOiKOLKtauEYUhRFDRlQ5GXlEWFrW3thTGdzrGOcX2CILDS28YKCeq6piiKTSKQXQRq8twy\\n\",\n       \"ZjzPI4l7HQ3KIS+yjn9q28deL+Xk9IRvfvMb/NIv/ZKlHXXRcHme24X4HdJ4JKJTyXWJHdLBd1za\\n\",\n       \"umHY71MU1nVPPviz5sMf+hBVWZFlK55//jmMgbSXYoTm/PyCy5cPmE0XlFWN6/t4YYQx0GiDkIL+\\n\",\n       \"YEi+WvH7f/hHXN7btRa7aYzve5yennPj5m2apqHfH1ie7PaEtqODOoDrSLYndtAqcJjPpyRJj/Ns\\n\",\n       \"hXENniNwhMPo0j6rLKOuao4Pb9NOJhvP8X4aotoS6fpcTKcEfkDgh9xZzhiNRoAmCnyGg5S2qSmz\\n\",\n       \"nFa13Lx1k63JhF6/x3wx77zi7U0ZxRFGa0pV07QNRbFkPHrr+RNCcHBwhSAIENrvFLAhdVPiuy6e\\n\",\n       \"77DKZhT5Cq1bgiAgTWO049k4vknCcpHhCMkqr1icn3Pz1m0eefej7O7vk/Z6mDbDERrHtV2pFIKs\\n\",\n       \"sNL/uqpJ+yNUY9PmJ3FCozRy7enTWDOwsix57fU3uHxwQBSndtES1s4zCGKiUH6XsVXT1lSVNW+z\\n\",\n       
\"nNo6jS4rZKU4e3qL2WRKkRfs17DBoir4+X/6z3jf+97H2toab799hfl8zuHhIefPm585GAyI4hg/\\n\",\n       \"DIjjmDAMTQKUm1mAFII8W6CpdFlx4fgWEhglI4R8l1AlAxzHMKUNgcnsVSlcEIpCFShlUFwmSGZN\\n\",\n       \"5W8t06hx71prqpoZWpYlOs2bgOr7/gnJi+WWiIkj1QlCjk3U7CDaJnhaa3q9hcOYlccoioL19fUm\\n\",\n       \"6YQFJ+Vh6z0P4MsnzDKV3Wbmk8moviiWakuDk7ZZd1WZqXYQBM0Fsz0mO7y0bRcb2C1Dyy57ipr3\\n\",\n       \"kE3JYwenNsjbnpcteZYHjsvsSnvQ2N63I73aBJkm6Ge5gS0tZwG2//+gtcCoL8R7bNYSxS5SGPqx\\n\",\n       \"0hU8iMujTbAXUnB66zSvvv5GI+5VFAVh3GJ/f59ut8vNmzcpi8z0Ln23ISklSUqru8oju4/R6nTY\\n\",\n       \"37vP3Xt30Ag2T59hNJriuyFZqcm0aR3kaVq3NhSX336LZ595FgUkyZTPfPunKMuM3bO7FGVOlkzR\\n\",\n       \"ZYEr4fULL5khpRfiCchmU6bHx6ysDLh9+zaDwQDHdYnbbV599VVzCLuG9OPXqAGtFU899WTDN1BK\\n\",\n       \"sb9/0Gjh+L7PaGSesZ0zZ5nN5kjpMHzA5fvzf+7PgKWmq4rjg0M86RD6AVmaMhgMyLMM1zHGtmC0\\n\",\n       \"uhECx/fo9QbNPRbCYK0NVtkzZCchkNq6oM+ZTqesrKyyu7vLlStXkFLS7fQYDkekeY7j+Lx04RWU\\n\",\n       \"KuufpcmyHN8xSUZZaFxXMksyynKOrBOKPCsa0pjveTy6s8Px0ZC7d+8xnUwI4xDhebz++ut89+/8\\n\",\n       \"nbzy8gUuvvIacRSRlhUrfaP/ffX6NZIk4fHz5/nIBz/EB597juHxkI0VY9239ZkzhhXbBEtBkiXM\\n\",\n       \"ZnOS8dzMOeRCatURRsZBIDi6f4/ACSio1Qvlw9EYYatDv9vD88zerlQF2qmJM7oeTC7giVaIylTD\\n\",\n       \"FVVVGvcfYRjgllwE4NTuREBTTS8Lzy13ArIsPeEL8E5zF8sePtHGgqbKt8iV6XTaxJuH2Sva9Z4H\\n\",\n       \"8HfipbXWDTLEOF8HzYWwfV/7ywohGmlXm7na/rDFeVo8pVUdtAHZ84LmvZdvhg269hS0mE57Ii4P\\n\",\n       \"FWym/M6Bqj0kbO9rURqagG/75VKKOoib94/juDkYHrSWtRNM9eE3rSVzyCwGOQ9aYRBQVIYVubOz\\n\",\n       \"g+sarROrFyOEIC8Vh4eHfPnLX+aP/dd/lIP9Pa68fZNzj+3SbrWYpV2QLkprfuPFl+l2O4zHQx57\\n\",\n       \"bJfRZM69+4c8tvs4+wfHuG1JO26RTXO2d7a5ev0KW9unEI42eiZCks5TPvvpb0PlFXEr4itf/jd8\\n\",\n       \"5rOfYTIZG3u8UjEbH7O2sk5RlnjSmOSeO/cY16/f4PrNG7z2+usGxYCRVnWAsj60qyrn0d0dhsMj\\n\",\n       \"tLZ6HMaVxWZEvu/jSEkyTXClw2j04B74eDyiQtFqtanynFYcN/hm3/cb04/A85vnxnVcNIJkntXP\\n\",\n       \"iGEbxu1WHcgdXM+lqApjVKLqeYkXGoOF2Yhnn32Wnd1HOTg4YD6f8+hjuwxWVjk4HPGVr3+TVhSg\\n\",\n       
\"8oqqAoFDu9tnPp0ymyW0Yo8waOG3PZSuGA6H9R5xaiVIxyBP/IALFy6wsjLg9PYWmSo5c/o0WZJw\\n\",\n       \"5tRp3nz9DXpnWqy0e3jS4Xh6zDe+8RuEQcDZzdPEccyX/vUv8qlPfcrA73oDDu7uEQQhSVIQRiGj\\n\",\n       \"4YgwjvBwaUURaZqgVEW320JoqCqj2SKkZLXb5+DudYosrZEuD89Eu93VOlM1Tk+OY1uvugng1rbO\\n\",\n       \"7KOFOqBJ8hyEXnA0WAr2eVE2Q0abhS+CvzqReC0PI63QnM2+l3HkjuMwHA5PJGr2dfY9bAJqlFcf\\n\",\n       \"vt7zAO4HIVVZIaSgKHJUpXBccxO6XXOT89wILtkg6brGNdrzAlzXRwhJkqa4rjBtFKXQdpgo6qxM\\n\",\n       \"aSqlcJRGC2kgZNatQxotYyEX+sHLmVK1pIho9BOMC7fjyBPoElljdW0PPApDyso8VHl9oORosrTG\\n\",\n       \"kEuB0tYDz7iZpGlyYqq9vJLESHkuH0RWoMc+WBYH+6Dluh6OK41ht+uysbHO1WvXcYKAsqpASAaD\\n\",\n       \"Ffb2D3n/U0/xD/7BF/jkxz/G4+fOkyYzZlrjxDFFVnJ4eFQP5DSbm2vEccj9vQNOb21TVpq1tQ0q\\n\",\n       \"x9CTx6MRO4/s8NLNm2xvb3Pzxg3ObJ8xB5pjZh7Xb9zm7CNn2XnkUa5evcZgMCCO4lpbGm7cvEGr\\n\",\n       \"1abX6zGbzpjNUtbXN8iKkkuXr7C/f2jcxIWRS/Ucged46LTk6aeeZjyZEIURUkiCujdqrolDmqQU\\n\",\n       \"ypTQeZHT6bSZPiAF7/d7VLWejuP5pEmKdBzTWwUmkwmddqdGTJmD2/ONzEMUGwKLUkajYz6fN+1B\\n\",\n       \"hcnutAJVlLjSRWnFeDxisNLn+HiIHwSsr22YmYXvY2RbzaDeVo+u6zY4dM/zWRmsIHBwpFNj7z3W\\n\",\n       \"1lZxHJfRaEhRFniOy+W33mbrzBmee+5DBGHIaDKi02kzOh4SBCHPf+xjxFHE5cuXOb21hR/4DAYD\\n\",\n       \"1lZXObW5SbfTJc8LHj9/notvvsn6+hpaKdpxizRNEUIym04YDHokaY3uEEasLgoChBRIqzyvoVIl\\n\",\n       \"0+mYvFKkRYl0XKTzcDRGnhW0WyFpagK0rZQt+AAWJuOu6xqCkU3aqHXKCyMx63ou1plHCONCZBAy\\n\",\n       \"qtZSWbRNFuxOC/FdmDDYXrtFx9mATH2vbbK2nDw+DF33bus9D+Cu46NVbvpPjo8fehSlYVS5jslE\\n\",\n       \"jA5xTlnlhGFAXpa4jodKU/MrCEWea4LAZGeeY3WEDZjf9Xyk6+JKYaiywtDNtdKEYYCmbi04AofF\\n\",\n       \"RS3rVozrOA30qShKYNEqEYB2JLJ2rjaEo6q2GTE477yqe9y+azKf+rVaCDw/aMoyKcWSMP2Dlunb\\n\",\n       \"L2spm89ULAVvaSjFD1hZmqPQII2Yz/d+z3fx1/7aXzcloJAIx2Oe5Egn5LXX3+IjH/oga2sbhL6L\\n\",\n       \"K8BzHW7v3SMMQzqdAOlU3Ll9l1ObmxR5STpP6Hf6qLLEky6qqLh18yZbm5v4jkMUhMShIYCUxWLC\\n\",\n       \"LoWk1e0Txh1W1pwaG6uZaYOJbfdCvMhgdt++cYPTG+ugoSw1L770CmHcYzS9TavVNvdbmn59nqX8\\n\",\n       
\"8A9+njCMjMtQ3X80sw23ZpQWeJ7TVF5h/DAZAsjSxZxBIJHSRQpJnpnhlOcFKA2ivv6OdKgqY0tn\\n\",\n       \"IaPSte7tC915RwiE4xrUQrBoi8Xt2GT3QVwHC4nvmedWaGhFYU3aqrNHx8H1zBCwzHOKMiXyAzxH\\n\",\n       \"ICMPJQQ3b90weO4ootftozU88eRT9fPnMh4bopCDIApDw7Idj4m6bb7re7+bq1evkuU5WZryyPY2\\n\",\n       \"+/v7HE9HfOhDH0JKSX99wHQ65frtG/hLgcu0DDw81yeKI1qyZRAi0WKwKCTkaUan2+XK9esoN6CS\\n\",\n       \"PiUKqR8ewEPfJ00XUrZludDXfycfJM9zfM8wiW0AdhyHoizqfbDYW1VVMa1VRI0bUdxUq7Y3bVQk\\n\",\n       \"RU0WCk+wKW3wtlm3xf/bYL2scGgTReuhC7wrIs2u9zyATyfj5oE2SjOaMPRxXVOOFLk5SaM4qFmM\\n\",\n       \"BWVlyDxlVYIukNKts+8cz3PrrNxYI1VVSZnnlHntTuJ6+H5AlZdUWpHWkrBKKULPJS8KpGN6ydJx\\n\",\n       \"qNAUlcaRAscx5Z1WBmsdxbGB9umKSpvSuCgVWjgIabJ/PzAkpKqeQCV5PaiRDkot0CRWS9lCJx+0\\n\",\n       \"3knFFeKk1KV9YK2LzjvXYDBgNp+jhCYrC06tb/DE+fPsHY0YjSZ4Xk3j14K0KLh08RJUBZ//ns9R\\n\",\n       \"ZDmzWdH061ZWVrh96zZrq4O6B6hZWenjehLpuziOoEwNNM1xHK5evcrGxgZFUfDoo48ainndSprP\\n\",\n       \"58Z0Ok8IQo/9g30eOfsI9+/fJwhNwPJcl26ny8bGBpfevMTaxgbXbt7kcDjk9Tcv0u8PiGOjeJgk\\n\",\n       \"GUJo0IoXXniBw8PDE60w+75As6Fs6WrbZg9admM7jgNa4jgLUtcyGsHeq+ZvYeQN3olUsFCyZmBW\\n\",\n       \"l9jL6nx2oy/rRlu5Cc8P6+em3eD/qwpk4NBqtwxNvjcgJ+doNKLd7bK9vd387DwvcJ2F5V+eG9Nv\\n\",\n       \"xxXkpcFGTyYT3nzzTT7xbd/GL37xiyRJwu7uLvP5nPe///188pOf5Bvf+AY3b94kiiLW19eRUnJ2\\n\",\n       \"e5uiKGi32w2kzigqzpqqsSxL7ty5Y+JAjcrIsgyJYDqb4bdilNJ1Jv3wQFYUBUqfHBQ27RB7v1jw\\n\",\n       \"KOz32UBcFEUjB51lGVmWNRBA6yQENOAD++wYdyndAAqUUoxGowbssPxM2GfPylLb+20/nx18wgIZ\\n\",\n       \"95sJWcFvgwDebrdPDAnzPDtRgkQtowWeFwUIiFoxus58zdBB1w+8IqofkmQ+BUEdrM0AZT5PKEuF\\n\",\n       \"tUEKghYISZaVSEciHY9ZktXCOibgSsDoCTtG0EkLpKrdq4VheGkEQnoIbTYQdQZcKUVZt37MUMVA\\n\",\n       \"lzw3QKONO7syLZMgCClLhefZTfjwh3WZ9WVLLPs1+2A+DPw/Ho+N07yoWwezGT/0g7+bv/l3fqo2\\n\",\n       \"oSgQGC1whGSaJhwOh/zyl/8tO2e3OX/+MSpKZtMp9+7cpcjzJhju7e2xuXmKqipxg4BKmQxnY2OD\\n\",\n       \"OI6ZTCaUZdm4m1hyVBybLHM2mVHkGQLNqVPrHB7tc+r0Bnfv3iUMQwaDFUajEWmWsHF6i6/8+le5\\n\",\n       
\"f3DAcDwxwlFhQJImGC0LqIqC7/v893Dnzp3mPW2P0h6ayz3L2WzWbJp305+wrSvTlkgbjQ4rCBZF\\n\",\n       \"kZEx1RgJiKpCi8Wsxd5Dy+RdJoKZDG3hj+g4Es/zm9cahxtNFJngErbazfcbZnJOp9OiKitaUcyb\\n\",\n       \"b7zJCx9/ARyHU5unKJYU7yxEzpGLgOR5LlVVIB23OfTeeustzp07x8HeHt/5nd/J7du32dnZYX/f\\n\",\n       \"UOZv377N1taWgZwqxXA4bFjNRV4wmUyYTCY8+eSTzfxngZ9eGJJsbW2RJElzsH/1a7+O1oqiyHE9\\n\",\n       \"Q7F/2ArCgCxbNoJYzLIskMEOIF3XNXILSxBjoD5cJs3nWj6ILcxPa93o+ttrZp8tq8e0PJOy19lW\\n\",\n       \"xvZZs1m2/X/L+G+LYPv3NTV+d5T4f4A1m6fkRYWQ0qio+RFhEOP7EQKHojAwMaNl4JFlOWmaUVYV\\n\",\n       \"qh7IOVLie25TrnmehyONk3yapIyGI2PTFgREYcTKYKVha7bahnIdhiGtOKqzI9VscMdx8Go5TV1j\\n\",\n       \"sMvCeDemqTElyPOibn8EtFpt4jgmjiOiOCaIQoLQ/NFakxfGh7MojaWZ6y40VEAY/HL+YDnZPM+b\\n\",\n       \"ib0tt2ABV1zOyh+0NBVRGJKlKbpS6FIRByHPPvss+3v75n7UEqFxp42QLm++9Tb7x0OCuE1WKsIg\\n\",\n       \"4PTp0wgh2NjYIK01ZSaTCXmeAYrZbEJZFgyHQxzH4eDggNXVVSM+lSSMx+NmCm99TitVNggOtOLW\\n\",\n       \"rRukadIEhvFkjB/4tFox0yznYDTm1p27jRyxUY2UJPMpge/QikKe/+iHm+G1FUqzyyICliuaVqtF\\n\",\n       \"FEWN8cY7lxCikVbIixzHdWi1YqIoJIpC4lZkkDRZSl5kKFXhuEZrxoqzWZSVzdDs+7Xb7SYLjKKQ\\n\",\n       \"OI7qTWy18asGsTGdTmppYGqnJ8P01TVjMau5CZPZjHani9JQLmXySqkGtnY8PDSf03GagH10dMT7\\n\",\n       \"3/9+Xn31VT74wQ/yxBNPENb65hYW+xu/8Rs8/fTTTTD8tV/7tROgAc81rYOtrS2ef/55Xn/9dS5c\\n\",\n       \"uMC1a9eaZxYWGkOWhdhqtbhz5w5hFGElIUy2/PBQZcwpFqqB9t8W/WJ/bwsu6Pf7dDqdRm/GBnsL\\n\",\n       \"fFBKMZ/PGY/NMNse+HEcn4AKh2HYCIIdHx83v4v9GVYHyUIm7ed7J8NyuSrLsuyEJ+9v+xbKS6+8\\n\",\n       \"Rq/Xbcov13PR9SDSkRKnkhRlTlGYUzgM2gihKcq8GS6CKWN0LSXpYpEaFQhBGIUIxIkNK6SFWmWN\\n\",\n       \"GL7nefSDjpH+XCqnVGWJAF49pa5x5ZUCXRmYkjb2b8lsSlGWOK4J/FVlgr0pnTzQmsA3KA40eJFx\\n\",\n       \"bbEnfbcbPFTPe1m03pbbNjOwmQfw0FM7q7OvTsuUtdKTFFrzPb/zu3n5xZdRqqKsCjzPbTDwYavF\\n\",\n       \"W29fY55kPPP0+/j4889y6dXX2NjYwBAnfO7evcv29hmMfZlpsezv77O2ttY8jJPJpJEl6Ha73L59\\n\",\n       \"uzlEtda02y0O9g9qrQmXZ555ppFGbbXbTKczRqMxk+mUX/36BfIiZzKdNwzSUpVk4zmuFBzs7/Hf\\n\",\n       
\"/eiPkmVziqJqZF3DMGxYu7YUtoe+LWvtBnvYaiCAdTa1XPbaQLE8j5BSImp9DaB5T6DZoPYwq+qB\\n\",\n       \"t2n/LQwOLAw2TdOm1QOmovrsZz/Lv/t3v1r3WCtj+BC30BgETlEapqnhHEQnKjfb151MxzWBzufU\\n\",\n       \"6U3m8xkvvvgiL7zwAuvr6w0qzA4CL168yLd/+7cznU7Z3Nw0xhXtNkqpxuQBZVBV+/v7HB8f84EP\\n\",\n       \"fICDgwMQBmJ4cGCy3cD3KevWRhAEDFZWODg6Zp6mBKGxTPzNWgnm3uUnyHA2S7b3ehmq27Sgam0h\\n\",\n       \"mwlbDLlFhNlAbJ9R+3wsB1zbkrNuRBaNYiss+9/LAXs5y7dVgT38Gob1Es783dZ7HsC/+vUXUbqs\\n\",\n       \"yRA50nE4c+YMH/3oRzizdQZX+nheSKsVMx6PGU9ShNRIKfCkh+s6BnivJMLVOK5vxOQrjeO7uEI2\\n\",\n       \"WYFbZwVlUSAwveKitDR8E8zTpGxulOsanebF5sspC4NWMPAzg4ZRSuHWg6vA90hzky0r7ClaMpsk\\n\",\n       \"zUDDBoiiNDoP9sGygxD5ENKCzQAtEcRmkeazLYKH1W9451pZXaEqS1PeKw0KQs8jFYIf/x/+e/7i\\n\",\n       \"X/rLxGFMUZVmWIYmiCKyJOXNt96m0+tz7dolPv1tn6Ld6RtauuOSF0bm1DqoKKW4c+cO/f4qs1my\\n\",\n       \"1OaxWUZBv79CVZn7WBQVcRgwHk94dHeXeZLgeyFlNSNJc6PDMRhwcDTil3/l30LU5eq1G/QGXQLf\\n\",\n       \"YzI1+iZCQJok/P7f93tZXR2Q1LK5dkMopRpYp+3LWocYW24vK1a+c9nNZV87nU6ZTqcnsmo4mVHZ\\n\",\n       \"79fKbMbAd9HaIB3s9yrPQauKMFh4pkqx0MzwbHCvD+wmk0byue/6HL/yK7+C43gNdG4ymRLHLTwv\\n\",\n       \"4OhwyO5jO2YA/NZbzQyi0+k0JDWb+V6qpWCTBKajCVunTjM8PiYKI5KZcZ3au3efXq+H59SfyXG5\\n\",\n       \"e/cuzz33HG+//aMHrRYAACAASURBVDbnzp0zvfvab3ZlZYUkMYJQq2uraODu7Tu0223iuEVVVHiu\\n\",\n       \"xu8FZHnBr//613n94pukeYGiREjRtN4etkQNh7TZttYLUxOb6S9Ls2pVNq00G4TtIer7RpbWZtdJ\\n\",\n       \"mhLHcSNidnR0RL/fbzJxW9kVRcHx8XFdfccnDgb7Ho3NnTBOPfYzDwaDJjZY6LFlZf5mh9d7HsDP\\n\",\n       \"nDWT7LIscTyjWXDr9m329vfJ85zNVcMY29raotPuNNha6UikANf3CUNDby+qFI2m3WrXZgzguS5h\\n\",\n       \"EOAFZtqttMbxPMoiaXrfqlLMZxlFkSOtu0lZURX1cLVOyBzhINwFezPLzNAUBaWucKTTZJxBEBhD\\n\",\n       \"VCFx43ZzI6uyRFUVruMShS5eEDXMriAw8LCHsa9s/3VZLncZZ26/5kn/ga8fjcemFy8dcwghKKsS\\n\",\n       \"EYQEruRz3/WdfOEL/4jeyqB52JUyfb5Wu8Orr7/B9uk1vvbiBT7hR3Q6bYbjGX4QAw5B4DMeT8jz\\n\",\n       \"gqdqZEO/3+f69etsb28zHo9J05R+v49SiosXL/KBD3wAgKPhkFanQ5qWzJMcjcd0mhBEHYbjGRcv\\n\",\n       
\"XuIb3/wmAPdv3abd6YIyBgG+65EmMzwpePzcLk8+8QSqqgxOW+ulg3GhHGdISQvGq82grATog9ay\\n\",\n       \"kNhy+8NuZMsgXuYN2IzX/tuW43bD2p9nbfVsC2A+nzf32fqdWqauXXme4foe7XaL8XiC44TkuQl2\\n\",\n       \"eV7Q7vd5++oVxpMxp05t8uSTTzYVgM1Qh8NjQHB0dMTa2irz+ZR+v8f6+lpNHOo27Q17yJ05c6Z5\\n\",\n       \"3jzP44UXXuBLX/oSu7u7XLp0id3dXQPjqw+bOI45OjpiOBzSG/RpddpUZUWeZk0LMc9zHOmye/4c\\n\",\n       \"r775Bq12m2Q+PCGt8bBlD+HlNqIN2MtyrnZZ7L/lZiwPIpsKfaklYwX2gCazXrZWs8ne6dOnm6C8\\n\",\n       \"3AZZfl/7HLRarRPy0nmeN0NtK+Bnf493W+95AP/O7/isgfDUJ+Lx8TE3b95kb+8e89mE27dv4PsB\\n\",\n       \"w+ERCAcpXUMjroNcUZYG5icFji8p8hw/qE/HwmQkfo39REAcxayurFDmM6qqpNfv8eQTT7K5uYn0\\n\",\n       \"QqIwghrbqVTNUkyLuu/l1nhxgxsvi4IsNzRpx5bO0gMtqZRCSEmlocpKKlXhOMY9HSEpKg2VIitm\\n\",\n       \"TfukqhRai4fKydqHwaITyrJsJHVtdlhVVe0F+K2rgcAJq/hmWj9FmpLlOZ/4+POk6Zx/9a9/gTBu\\n\",\n       \"IR2XSiuiVkwyT3Fdj/uHQ0pcvvhLX+apJx+n3+uytblh8M55xurKJgcHe/S6A5IsXWK+LhhnFqO7\\n\",\n       \"trbWlJhf+MI/5kd+5Ed4++p1Ht3dJS9KVjdO8/M///NMZnOjczKeURQlUatNVRbo2oVICoUuKwYb\\n\",\n       \"q/zhP/SHSJM5oW9aUVrIE3INy9dieVBkg++7tVAsUsC2guzBuYx2aJi3S0HAZv+e5zVDX0NSi5p7\\n\",\n       \"uiyiZstyi+23mbJl6dlyv6oqptMpu7uPcvXqdaOkKCyTMyPPS46Ph3z+85/n6OjwxHW4desWruvS\\n\",\n       \"7Rp5VNPLhvX1da5cucLq6mpjym0157/+9a+zu7vbBEwhBEEYMB6P2dnZQUrJJz/5SV599VV6nW7T\\n\",\n       \"90+ShH6/33jHSinxQ48yL5hOZ3S7XUpV0R/0+MrXv0ZRlvQ6baaTw6Zd9LA9Ye5L1Tjs2ArKXlsb\\n\",\n       \"AJfx2WVhDoNlmrvNkJelXe0A1O5P2+ZZblsCzfsukxLtH3s4Lwvd2efDHjL2fWzlUBRFIz38255K\\n\",\n       \"H/kOkWc0gduRx6n1Ps8+/QRxbEom6UgOD4946aVXuLd3wPB4QlYURrHJcZD2b8chVyXSj8nKCrRC\\n\",\n       \"SoMLrbTGrzOX4XjO8WhGVsyRUlDduMNXvnEBUffPtdKEvk+71SZuxXi116LVWQhD086J4oiovplG\\n\",\n       \"d0HgeUbbpKo1vrEIBSHrMswI5UdRROD7hIGHK0Rz4nq+h+NIkuTBrvQ201s+yZeFdhZlpuZo+q2v\\n\",\n       \"NxrihqVYlTm6qhAapOfjOxJV5Hzf934v9+/v8eKFV3BcD8+PGA6N9kaRFwhHcvPOHhtrK3z5336F\\n\",\n       \"M2e26Hz623Bdn053wN79+3huSJ6Za3Djxg1Onz7dZBlWj6SqKlZWVho96uksMSbJeYVwPPbv7vPq\\n\",\n       
\"669zb98wKN+8dMXQ6oMIoSzbcU5VFAgJvW6HP/7H/hvKoqg9F42KnV7KhJeJWbBgttrNtNx/fNiy\\n\",\n       \"r4mioGmR2CDguhIhLCzPmjVAFIVU1eLAtQFjGb9vg7fxSJQ4jofrLhQurWZHEHhNFh5In6JUnHvs\\n\",\n       \"HNeu3TDtIm30s8vS/I6T2Zyvfe3rPLpzlv3JGKUM03ZnZ6cZ3NmKQ0rJnTt3OHt2m6paOMgEQcDB\\n\",\n       \"wQGf/vSnuXjxIrPZjHPnzpk+emp03WezGVEUcfHiRdbW1jg6OGxePxgMGA6HdDod9o8PiIKIKAgJ\\n\",\n       \"PJ9er8+tW7fp9vvcunObl15+mbDV4tr1m6x0feI4NtoynQcbTQP192SNqcryPMLi7ZeHgXaoaAOq\\n\",\n       \"/d1te9J+r6qhvzbhWNZJst9jM/bAVvhL72Wfs+V2y3Jfe3noa/e0rTjm83nz+d9tvecBXBQZrueg\\n\",\n       \"igKhFQKXqsyZpCZDJjCwtlNn1gnigDS7QjrKjMN8afrcCsjyjFIJfN9s3qosmw2VFSVZVisEuq4x\\n\",\n       \"gQ2kMcFVFW4U4dc3vypLhHSYZIpxOqlvdNlsVltyCalRhTE6NjdOE8cRUormMChyI0gfBL4RsVdG\\n\",\n       \"09y0eFykquh32qysDNjYWGfrzJb5+kMyQJttL0+0YcHYsrhU/ZDJdV5kCBxUVTVmzGiNrgqSLMfx\\n\",\n       \"PI6qgh/6oR8kbrf5hS/9Mp2eg1aCJEnp9fsU2hBHDo7HeI7gzp17/It/8a/otlus9LuEnsenP/Vt\\n\",\n       \"gMl49vb22NraarC1Vu8mz3Pa7TbD4ZCyLHlk5zGyvOT6zVt88+VX0EJy9949hqMRVQWDlTXS3EDc\\n\",\n       \"tCpJ8xRVlQitKfKCP/Enf4wwCJjPp4R+SFEYLYqiZs7ZzQc0GZTdQECDrrAl8IOW/X+2NAdOZODL\\n\",\n       \"WN5lSKc1z7WDUqDBHFtEw6JcPok+WC7x7XvawF8qw1HY2toyeuZxi1liMu9Oq839vT3WVld56cIr\\n\",\n       \"BL5Hv9/FdV0eeeSRGkbnMZuZwNfv9zk8PKyJZLrBZVtd6larxcWLF+l0Oty7d69BQdl+r9ZGoOn4\\n\",\n       \"+LgRmLMO7xa9kqYpg/6ALE2ZjCe0Wy3KsuLs2bOkWcGvfuUr9Ho90rLC8QzbuWlFqIdbqk0mE4we\\n\",\n       \"0mIQaO/hOwf6SilKvSBLNV97R2BdbrXZoD6fz5uAbRMRO4hcViVdbr/Yn7E82LYqpfbeW8LP8nC8\\n\",\n       \"2zVmG7/tM/CqSJHaqRUiDLOpKEujmNZqMS5mqEox6HfZOr3FmTPbTKYpb158i3t7e+SlQYF4vkM+\\n\",\n       \"yyly42Kvhbm4gR8QuKYf7sg6A1WKUgu8sEUoJdZjUigJwgMk0pUIjIawlC5IDRJ8P6zpvhWup80Q\\n\",\n       \"FINMmaYFvmvozUKYz1UUJbN53pTDWaGZzEwg9oTiDga2qHRlBowCNjc3+EsPuFZHwzFhEOK6AlUV\\n\",\n       \"6JpW77le0+t1PRcJTB6QRJZFBcIcfFJKpGtIJgZGJqlUhS4ULpIf+t0/QJbl/OqvfZVubwWNZDyZ\\n\",\n       \"4ngRvW6X2WyC1pK9/QP27is21lYZjyYks6lpg+zssHt+m/E04f7+EUoZIbGD/X3W19cpy5LZ/JD9\\n\",\n       
\"g2OGoymT2Zy/+bf+Nq7n0e50+eaLLzJYWUFKl/6gz6R2Vk+zDFGlRnJAVaz0O/zFv/CTHB0eME9S\\n\",\n       \"Wq02ydz02dM0pcqrBRqkzgiXA2QQBAtoZoMweXALypEOValwHRch7UBzubUlm9J9+T2aQZQ23AT7\\n\",\n       \"GosZNsHEfDalF1LJtj8spfGs1Frj1UgoR0ryuRmMn9k+w6OP7nDj5m1czzXO8FGIqqomGO8fHnLu\\n\",\n       \"sV2mMzN0VVXFZDpF1XOKNEnIs4xHdszA88yZLfLMPLdXr15la2uLnZ0dkmTOYNBHCMH6+jr7+3v0\\n\",\n       \"V1bMgDgImIzHdLtd+r1eU3FIxyEMA1rtFlmeE4UR03zCdDolimP29veYzRNu3brFPM3wo4hBr48n\\n\",\n       \"M2bTKZ1ulyiKHxpDzB4w3AtLgwersW+qIHsti3LhNwmLQ9ketk1LZan/vYwusfc2rVm5y9ol9gCw\\n\",\n       \"B7m9d7ZVYysDiyYqamKQkKJBI9kAbqW1f9v3wN3QI6/L19k8IYpiKiFQjsMsy3GFgysdQseBqmSl\\n\",\n       \"FbDajnjq0c8wm81IspTpNDGMvkozPD5mOJqwf3DA4cERWTnH80Nj0+T5FLbc9QVK55RFrWfgiHpj\\n\",\n       \"gXCF+UyOix+H6KqqBW4sS0oj3FqZ0BVIZ3Hiq7JAKEPXRwuE9I2kpRZoBApDFNJCoKVDYtEjVFTS\\n\",\n       \"ZGq3Rw8e2Pzdn/vXUJNDpDDY98D36PX7BGFgpAAciUTxsRe+9fWT7PtOfuFBs7oKKGA6h+/4jPnz\\n\",\n       \"W1mPnIOkToKyFIIOJ9yCVk6Zvz/3PSdf933f/+//Hq9d/MLiP2pDnTsPN9b5/74qidQSnWuUo6m0\\n\",\n       \"QTJZrLfZuLr5t8ZWYy6qMlZf0rFsQaP9I4SmKPIa+aKIo5iyJnIpDRozaO/2+k2lZecInudQVjlV\\n\",\n       \"lfGf/54f5i//5F/BFxGuF5HkOd12h3v7Bzz15ONcfvsqH/nQRwiDmMDzubd/m26nQzxYBeDeaMrq\\n\",\n       \"6ibzWYHrhkzHhpm7t7/Hma0tsiwF7eM6jrHfcyTTyYg4iqiKElcaSOCgPzBDy+MjVldX8TyP+3t7\\n\",\n       \"eNozWrhJRdAKKTyfrMxJ8oT1Mxv80j/4AvNkysrKOsPjEZHrU5UFURDSCiPOP7b70NviSs/Q46sK\\n\",\n       \"6YZNxmsgv1bATKGlwPVdZKnQ0khKWEJPlufGDEJryqKok5kcoRYH8XIWbVsl9qC2f9vvtUNJ6x+w\\n\",\n       \"3OrUWoOucKTVL4JkPiXNMoIgagK/vdfvtt7zAD4cDutyunNiym5OuTlB4OF5PrPZHM/3abVa5Hmx\\n\",\n       \"MPMVRqhdKVj3fbZPb6A1hEGE5wWMp1NefvkCV69cYzSZgIYwitEoykohVG1SrAxL0/d8qAoi10jY\\n\",\n       \"llmO40pUZYaSnnRJq8I8tJ6H53i1GFEtLIWlV1uHD4krXHS9qcuqMoeD5+EsZ284eJ5saPUPWrlS\\n\",\n       \"OBoc10egSLKcLC8YzUy7yfVcXM8DVT4wgP+n9VtbjmdQC7JmSFaVa4bpGlRVQzOBSljpY1kjk+qT\\n\",\n       \"Ulid6AK55M1pYYlZZgwdpFww+Gzpvjw/WJTxLbSuyPKCXq/LJz7+Ai++dIHxeIyUDqNqTBgG3L51\\n\",\n       
\"F991+Ps/+7P8yB/5I1y9dp31tRV836BETIvDIY5CHNdjPBqzurrGG2+8wZkzZ2i3DZ47mRvJ3Fa7\\n\",\n       \"S5YXuPVcRGqB74fkecXOzi6Hh4ekaU6SpNy7d5/Tp09z9+5d/HWflbVVRqMRQRShM8k8Tfi//6+f\\n\",\n       \"5uDgmLX1VQ4O9uh0epRVjitNRbOyssLu7sMDuIH9ipqZvVCZFEI04l5VVUEuyMuCVg2Rlc5C4tVZ\\n\",\n       \"GvDKOpC7oQ/17AIWrTfbbmnaqWIhKes4TsNEteiVxXxj4QEchf6J7NzzPBzXRSmagP9O9MyD1nse\\n\",\n       \"wAeDQX1iYqjcQjawIgt1Kopad7sZ+Bk5UHsB/Jqo4bsOQhgWnao0SuX0Wz6ffuEjfMenXkBVmoPD\\n\",\n       \"Aw6Ojsjysh70aJQyDMTxeNz8KZUwusDSMdBCAUI4aFXiC40IDQlAUQIGleLHgekx16atWgCORmFa\\n\",\n       \"LlI6SNfFsf10pYzrNjaIa4Oq8R+CQ/YDqqI0CBetQTrmPYRG45JrQZ4bbZb/tP7/X7N0UtvPBWTZ\\n\",\n       \"SbNaxzV6MGDutaoUKJNFS2lMNkTNSTDLBOEoihqkh1Kqft4kShuWcRAEeK5Lu9UyGV7d462qCl0Z\\n\",\n       \"LetWHDOdp3zihU/wxhuXEMIlSzMIBK7rM5zMWOn30cLjn/z8/8Nnv/3TOK4xhNBaMZnP2dzcADSz\\n\",\n       \"2Zgg8Lh+7SabG6cJg4j79w/wvYDD4xGr6xusrW2QJPMat1yhtYvr+DjSoywUeVayvXWW4XDI6c0z\\n\",\n       \"5EnOoLvK6GhMGhht+LwoaHUGvHbxMrN5QbfXZzgao9AIoeh0O5SZaTVsb2+TZQ93pjEtD9B6YY7u\\n\",\n       \"OA55jZZZnmuEQUBeQxLFMl6/KJqePdQeuUvQRPu15X62Dej2M1i2qsV4W/Gr6h0/x/O8Botufy4Y\\n\",\n       \"yLMfRM08xLZs3m295wHclha+HxJFcaOtbMHzFitpTyhTkoSMRuN6Kh+i0ezv77Paaddlqah7jYaC\\n\",\n       \"G0cRfuChVIXQbdYGLcrKZEe272X1DuyFC8O4QQ5M0hmT2ZQrV67y5sWLTOYzcCStuI0WAqWgqr04\\n\",\n       \"lXDw/KAZdlV1RuAhlyBLdZ+sLCjKHKQdvJjb8TAmpRJG+bAsKiPIT80ekwKhF3oKQrznt/U/ylVW\\n\",\n       \"Od2O0R8pC1XDPnWtR6KbYVZTatfVr+sZ6KkZbLl1RmY0do6PjxukhBnALSjgtpdq5Qo8z6PMc+ZZ\\n\",\n       \"ZnQ8HAObLUtjarK2ukq/1+PKtRv4fkhGgeuW9Ho9RpMpvXaH+/tHXL56g2effgrPlVSV4syZbcLQ\\n\",\n       \"GFu04pjJeEq/JvfkZclsOufMk9tcvnIFrTXdXo+ilkStqoo8LZohMGCctRSEXsjtG7c5deqUQXGF\\n\",\n       \"MfcP7hN3uiA8Lr99lS9+8ZeJ4pBut4OqoNWKcHzB/v5dNtc22djYZGfn0XfNRE1v2RymRWGqcysd\\n\",\n       \"GywhTuw+X0Z9vJPSvxwwXddlNps1Dl8m3gQNh+CdaBILUlhmWdr3XIb8BkFAkadNzFnW6jk8MqbZ\\n\",\n       \"/X6f9fX1h7Kym8/4bv9TmHT23wAB4AP/TGv9E0KIFeDngB3gGvB7tdbD+jU/AfxRTDf1x7TWX3zX\\n\",\n       
\"T8ACXuO6btPIXyY/WMdoS7lVKmnKHdcx2cr6+jqeqmi1YtI0w+j2qlozO7O/j3lHKfC9qC5haw9K\\n\",\n       \"XZImGUJKpHBIk0k9/HDptUJ67YjTm2t87nd8hqKsuHHzFjdu3mI6T5jP5qRZTl4U5EVFmuWkmSnL\\n\",\n       \"fNfDWC2BZ8y3UY4RxvLiNqoyD19VlVCZIQzqwRm0Qhtj5BrVYkdtSiuk8Ex/XGvUQ9QI/9P6rS3f\\n\",\n       \"s0bSGUKGpirDlrkaoQwTWAjqNohRxMzKOY6QONJDKVP5mcN2gTMuywrX9agq1aAPLMXfZumz2ayB\\n\",\n       \"9ZVlCWox7AxDj0oL/ts/8Sf4q3/1f2MyT0jTlDAMSRKDJJmlKZ7r841vvkRVVZw/9yh5OgcGRotf\\n\",\n       \"KxxHkmYJ/c6ASlWMxyPW19eYzsYYCoNgOp1wcGCG0db13Q98XNepvTsFrZpebltANgD2V1a4cvU6\\n\",\n       \"d+/v8aUv/TKr6xtEccRkMiSKQvr9FrPZhLW1FTY2Nvnwhz9cE5re5b74PlVVnECHKGXqWts+WVYM\\n\",\n       \"7HQ6J0g8VljNfwfk0LZpsyxjPp9/C05/mUFt/9j3s2xuWDjVLzN2pWg3pDHL0KRu36yvr6O1Zm9v\\n\",\n       \"77cGI9Rap0KI79Baz4VJ635VCPEp4AeAX9Ba/xUhxP8E/CngTwkhngb+C+Bp4Azwi0KIJ7QRLHng\\n\",\n       \"skG1LMvG8sriLH3fBy05PLhrHhDPxXEKOp02WZYgpYPjuQ1xRWgH4Qi8ICCQEdZ5xXXN1L55BrQx\\n\",\n       \"SpVOTWipSTvSMeiRvMjrhyEiTc1ncV0PjSCdljiez9bmGttbp8mLAo1s+m9l3SsvckOtvXfvHnv7\\n\",\n       \"++zv7Rt7Kd837E3AARBQ6RwpjBu3kKZkftCSAtJ0jtcIABkrOQDpugb/LAUg+al/8mPour/neV4j\\n\",\n       \"mC8khoFZZwhKa5LciEhpVeIIA0N0ENY8hrJUTTZYsKgiBNogHcqKIjciY1WZAUZvvKyvicXYSilx\\n\",\n       \"agLROwdCbs1iNcw548odhwFCazzpIIAiS/Ech8qpFRPra+I4Ti2CZWQOojikrMoauucSBCGnT20S\\n\",\n       \"+AGDlQHnHjtnqPDjSb2h23WG2KYoM1R9kJaVESkTQJ6neI5PnmZUZYUmQwhj6qFqnXbpmlaayfhq\\n\",\n       \"6z0B3dBs1rIocB0fv86wk3lKnmZoX+O5ZkBYLlWDNpGxWZyFs9myvyhK/MA4uedVDkLiIviDf/AP\\n\",\n       \"8NM//XfJ89IM+pOUlZUV8lqjut/r8evf+A3u3rvL7/99v4csTdBVie863Lx5i8FggBYK4WhcTxK3\\n\",\n       \"AoqqpCwzozliZZ8jAzlUqiAMPdJsxupan/39A2Zzg6JotSP29/dR2sA6Azfm0uXLvPTyBfqrawRh\\n\",\n       \"yHB4TKsVEcUe0+mMViuk2+1ydvssaIkUrlH0fMiK45i8zE5AfauqVgD1PJz6GTPDQ01aB1RTJTmm\\n\",\n       \"pZMZo4llvLcNsO1Wy7Q1yhLq+6Fsj7p+hrVSqJoMZMk6dpBqSUY2AxdCIFjo6NgeeFlWuLUtnO1/\\n\",\n       \"/5ZRKFpra8rm1zHnGBPALT7hp4FfwQTxHwR+VmtdANeEEJeBjwFfffjPp9kIyxhOu7nLQjWQMNf1\\n\",\n       
\"GI/HRv+kbr0oNFluprVCS2QiUaoijCKzIUrDjDPys2aDgfGNFFKaDSqp8dc1bKeG9SRZWsO6aDak\\n\",\n       \"xqMqckNBr6FFDRRPg+8ZSGEgXVqn1tjZ2mg0WJRSHBwccPXqVe7du09ijX9FuxmO2L8ftPqdiDQx\\n\",\n       \"Uqm6MFBLhTEQKFVpXEUcYzSgCpMZBK5EVQXCTrMryGrNc983JKWovuaqAIFokDmO9MwB52izkSRU\\n\",\n       \"JHWAq/N8F2Ro4VOGvGSzmExlTbDRdXvBEQv9iEbcvjLDWS/w63LSq1tbBnFTYKzUlAapBYWWTcYj\\n\",\n       \"pUABRXPdNMk4xSrs5YeHOK7Lvfv75HmBqjeLFILBYMDpU0ZZcTqdEPsR3W6XOI5wHIdWK6Lb6xDH\\n\",\n       \"Ea0opigUUvoGconFHYPvelRlSTrPFv6GlRWjEiSJme2EgXFMV6qiLAy+OI5bCCEXTFDR7Ltms9vW\\n\",\n       \"zLKYGRgcf1WqOgt0KauSPMtZW+nz5JOP8+KFC0b1UtRuV0oRRTGjyRjf87l89Tr/50/9NN/z3Z9j\\n\",\n       \"ZdDn4PCAVqdXz1py5kmCFoqsrmBPnTqF40haUYvJZESSzHBdD1UqPM+hKIxo2draKvfv36fvr+B5\\n\",\n       \"Lv2VgcHz5zn/4p/+Ey5deovVtXWElEynI0DR63WoipRuu0MQuqytrLC7e64Z+r0bHvro6BA/9Fk2\\n\",\n       \"THAcY6ad1Th2+7zYoLrMA2hmaTXU1wq52a81768XAnL2kLCdgqIoyOvE08IUlweXloFrEURVaTwN\\n\",\n       \"7LLBvSgXPA/bc3+39ZsGcGGUnL4JnAP+D631a0KITa31/fpb7gOb9b+3OBmsb2Ey8Ycuz/XrMlQ0\\n\",\n       \"bhgW4G4HmfXnQDiS7e1tgxUvzCBnlswbDLRTX1jHc5nMplSlMRSgtK4cwkD4tCYrDD7VrQObkMJY\\n\",\n       \"atWmEJbkIl0XUZlUVBhZOfOAiFqcSVVUldkcYRghpapPeAfHdcizBKoChwBVVvTbMR/78HPmQclL\\n\",\n       \"cExg93yPOIo5PjbEFv7a3/qWa3XukdMcHx+TzOYkiZlwF7kJkq7vEQYhlVYoVeA65mCywx3HEZTq\\n\",\n       \"/23vXGMly667/tt7n1fdqnu7+/a7Z8ZxkulhPMZ2t8cPktjETpzYMdhBiEAQQhaCzyAhhRBLCPgC\\n\",\n       \"ASQeEiEoQBRhwDwEOA5YdhzZseIgP+dpjz3xiOmZzIy7e2a6+z6q6jz35sPa65xze3p6TBL3nfat\\n\",\n       \"pW7dulV1q07tOmft9fiv/78jeAhWmB4T48F4EY3FYzInSt6ZwVpHVTUED3kmkmuCC3bYVBx78KFP\\n\",\n       \"F+U7csLKGJ12YlwfafflHo0gO98/FmKU3TSCbbdJgob/aZJJWSkInzoYXFbESCs6MitzBF3nhX/G\\n\",\n       \"gg+WroM0F4a5nWVNkeYUhThvgkzlbm09IU3yTKT5VAg5TRIZiDKe9fUNJpOcjY11Dh8+xB1nznDH\\n\",\n       \"a84wWZuQxtpv5yHNCmkwG5ngCxgwUkpJXELXBqpWNzVPlhmqqtnjXNqmwYzqvYJl1lJggw4llWWJ\\n\",\n       \"jXqavpKsha6laSqcs/yp97+X5XLBVx54kM2jxwXyN52xXIqU22IpTI9b8yX/7X/8Ovfdew+nTp7g\\n\",\n       
\"j919N3VdYqzl6tWrHD9+TIIXH5hM1khcSlM3EAxrEymRmCj0DC1pmlOWNfNFxZHNhC6AcRlffeAR\\n\",\n       \"nn76abZ2djl+/DhVI0yJs9kaRZHT1oLrX5ts8H133cWb3vhG2g7yfNKLBb+crU0nEmmPHF/btn2m\\n\",\n       \"pwNQGhzlkb9GAw3nYobXNEwmEyaTyZ7J3KG3NMgYKjWC+qbpdMrMDqyDikAZ0yAo26ExBmtCDxXU\\n\",\n       \"6yXPc5p2kFXTLOxm9p1E4B44Z4w5BHzKGPPu6x4PxpibtUpf5rG/B8Cv/OqjnHvjfZx70+v7MeHF\\n\",\n       \"YtGfrDul0JBevXqVjcOHegfbNIKzXF/f6IcfmqZjNhNSotlsim87rl1VEVeRv8IHrLE4l+ESF0Vn\\n\",\n       \"Y9QYPGmaR0cShzR8R+hUDUc0NVUEV0+Y4D1lVVIt5thEJK2czXFpKp3lNO8n1FKXUjYLQoB8ktOG\\n\",\n       \"Fms9vm65ttxlfbZOXd+4hv2eH/0RnHXkaUbbdjRtSz4pePLCU/zfJy9w5dpVdnZ3KBdLgm8B0bnM\\n\",\n       \"8pzFvKRrZBq1mIhDbpqSLHE4I8NMxliMbYSnseuYpAbrUkLwJC4wnRRU5RxE3kLKVUkCsUHcNp6m\\n\",\n       \"64Qq10CRyWi/MUZEOEY1RGFE7ESpKEBoO7LMyTRXnGYFhXgG8MI5472nbIWCM08HjUECWJdhrRH1\\n\",\n       \"pLhmNmZ2eVIQAlR112cEzkaO7qRgd3dOWmS4LIcAZdfiIivk89d2cbtzLr64RdtewCVfw7clhsCZ\\n\",\n       \"O+7g/LnzxgMkvQAAGwdJREFUnDp5kuA9iXNkqSCXCAGDIXWO1raR/UFFp0MPO9Q18XEtJE6IWU3M\\n\",\n       \"QhI71FjbWjIZQV14yRgJwnlf5LRtRULgZ37mz+JDxwMPPUJRTNjZbpjOZlRVoCim7C4WJM6xsb7O\\n\",\n       \"5z7/fzhz8jTGJtxx5jQGWJQN83mNMRJFFnnOcikEW5N8yu7WIpZ2JnhvCD4heEs+KTh+/BTPfvsy\\n\",\n       \"8/mCT376UxBExnBz8xgvvHiZ6caUtSwlSQOGmjRL2ZjKANDZu++hrT3LWqd3b06r2nVd3DiHCcge\\n\",\n       \"8hcdsTrgLNLXEqP0fhI3Pl/Jr6RsmuyJpvX1hYM97QPNtm3Z2tqKSl7DEJf3vi9hqjSdbjDgyWNj\\n\",\n       \"up8CThKsgy9/5SG+/NWHvyMUinmlJ+x5sjF/B1gCfw14VwjhojHmNPDZEMK9xpi/HU/IX4zP/yTw\\n\",\n       \"d0MIX7zudYL69U9+7D/0O1znJQVWWkdhKMtjdzeVsff4YbNMopC12Yyd3S1CCBS5dJyzGLUYpJk0\\n\",\n       \"bvi1bRejfUlHtYwi6ZAnz7M+FZeTwFM3cew2DNSVJgTqppLNwI74m5OENsRdPShNZUoWR2WdTfoI\\n\",\n       \"sGqWYKMmHkjKV9U463jrOz/4kvX/+gO/RegCvgtRKzEb6q/OgTN0PtA2NcbXZKkgdMqq4elnnuHZ\\n\",\n       \"5y5RNw1lVTPfXRAMbB7a5MzRYzSdNJG9BTB4A9s721ENqWZrZ0s21lIiREnMjFD3GqmwW2vxxGGn\\n\",\n       \"Tj5XFkntNRVsmgbicISPZbAkSYRi3dBzeejtAScf3y1Ah90TGTkrZSvfea6/zqu2wzIwwSkLY/Be\\n\",\n       
\"6A1iLTNxCW2qG41w8xhjaKtaNikTaJu2RxgkxiPpjJSdRJG+E9GQPAoyxHR7tlZAEKqFQ4cOM5ut\\n\",\n       \"MZ1OOXHyRFTBGTjGx1N/erHrxqe/6zEEI2RSIfjYj2iHaN4Yyqqm9fAb/+sTPPzII5KOYdk4vIlL\\n\",\n       \"UoH9tS1pkgpd87UtnBG+lZMnj/KDP/AD3HvvvUJw9cQT3HHqFPP5HKJT6x1QRG9dfl7KVc9dvMiT\\n\",\n       \"F57iwtO/jw/g0qQXcA5tDaZjul7QtCVrk4y6Ljl1/BR3nrmT+8+/BXxC6DydDX0kDHD3638onnHX\\n\",\n       \"XxefJZ3IGmsGqOeM3tasz1oLWULwAavnIww17ei8syhOXivdtN3L/66Z0JikKo80sOMyjfaatDzc\\n\",\n       \"bxJmKJHpuSk+kD54VSz729/5AUK4sSjoK6FQjgFtCOGaMWYC/ATw94GPAx8C/lH8+bH4Jx8H/pMx\\n\",\n       \"5p8ipZOzwJdu9h5pmvX1SusKnBu6w5NJQZqsRQKggfNDKDwV5rMrC+MDiTOITFlD6KQu6JwjjWWK\\n\",\n       \"LBNFntQ6muBo4vvIju0jptX0m0c/Yu8S4eAIoW8Mdk1LlgtfuCJIkizFJIOST9cFJnYQNF3Liwhx\\n\",\n       \"Eqcl9b0u1vyhKNZYn87kIrmBtXVFXTbkaUqROooso2pb6rYhGE+aFoIvxeNMoK2XOJewlqf84Pe9\\n\",\n       \"hnvOniXPJwQMTScitjaA311K9pGLwLM3nmChrCtsYkjzVNj/MKTJhAsXLvCtbz3BxYuXKKuK+bIk\\n\",\n       \"4DBGuGaMkctsOV8SFb9iypmQOoeLIsIS3URZMRk7JBhIrQW7d+JNm1POOrKo2h4rWkS6drlQvHxH\\n\",\n       \"IcRIOy+wVtbfBn1ewBqHSwwud9RVhQ+GZaWOE9q6Axvo4kXbRf6YZSlkVJkX8YU0Fe4dg4g+4wNN\\n\",\n       \"B7u7Jb6VPstylrOzvcViuRC6BGB9XTQrhTApsLGxzmQy4fSZM8xmM06eOMGZO+5g88gRNjc32d7e\\n\",\n       \"xrmE5XKB96Kb6oPMG3Rth/ctPm7Ci8UClyRc294mSTP+0l/883jveeDhhzFYrl59gensEFk2IcsK\\n\",\n       \"vA9cvbIlNVkDzqU89tjjPPX0s3zmM58j+MCxo5v80NvexokTJ1ifzljO5/gW5r7kmWef5fHHH+fJ\\n\",\n       \"p54izQspaWbSMlubTqibjkvPXxG1ehOYrK0xn29xeHNK19W85q47SZzjnrNnaZqGcmfBieMnuLz1\\n\",\n       \"Qq+oNCajut7yPKdU+TQz0Pl2ncBttamoJ0qjKBEUlRazaa2dezm3iqJglq3vOQf1uVoiqeu6L6cs\\n\",\n       \"l8veeWtvTHlktOyijp/Q7dHelIGiJsKfTc/iebPPDa8QgRtj3oA0KW38/5EQwj+JMML/CryGl8II\\n\",\n       \"P4zACFvgb4QQPnWD1x0i8F//90CMYO3Axzs+8DFvsu6CimzQ3W2M69R6ku6A6pC1uaFRm5ZAblRn\\n\",\n       \"GvMidH4QGlXugzG2VI9NndL14H/94k2ssQN9GaZrB0FifT/nHG/+kfe95Jge/sKn+whMdfUCeyFS\\n\",\n       \"0mTJYDTUoNj6OnpT7XobY5gv5mRFNpQpdB1jeqgWukjYUwysa/pTP3vdCS3rlStXKMuSRS1lnrZp\\n\",\n       
\"2N3dpe06tq5dY3c+7/mdy7IUYWhbxHXT8oLH2KS/CLS+7UOgq+JwhhUxD32O6lPqcXXe42n7763z\\n\",\n       \"Q9PJOddrGMoHhNANz73++9V117/PnOsluIxzEectpGYheEKg75ckRiI8uXCh7VrWJmtUVUkIUSgi\\n\",\n       \"6HkNvmsipbGUg2yAY0ePcubUaY5tbjKdTqWunlhmaxMOHTrE0SOH5Nhj5tI0FWmWgoGmbfAm4Xd+\\n\",\n       \"94t87vO/y7xc4rIMl+ZkhUwsp6n0AAgBvHDfawnCRnnCG+GpQ/Akacx0g6GpO4rJlDRLqesSTEdd\\n\",\n       \"l2xsTGmakkOznCb2tnznOXP6NGfvvpu77rxTAp26GWUgg4BFCIH73vxjLxuBg5yHAptc9ph775UT\\n\",\n       \"ZZiozCdZf3uY/EaiB4bylTGm5yvR73/c89HrXDf4JmivzfaRt77WmNfEWtv3fPR9lF5YIKVtj0AJ\\n\",\n       \"IfCWH37/HywCDyE8Crz5BvdfAd7zMn/zD+CGXEw3NN1prLU9DnPsxK9cucL6+nrvxK9feP1y1VHr\\n\",\n       \"643J9RcLAdJIF37S76IKW1TwvRDhDw2PPkq2A+2o0kbqhqDvobuxYl7X1tZ6Z6/HoGobulvvLBYi\\n\",\n       \"dBxT0HHT9kameFU9GdfX1+n8MLHVNE1sblmJtMOg96hrpcemG9R0OiWYgG9afNz5QwjMd3f7ml2W\\n\",\n       \"ZWRp1p+E3WhzVUe8XC6xiWN9Y4Pjx4/HiCbr65GKYV5fX+8n3lTVxlrLhWe/TVWWXLp0mWeefYat\\n\",\n       \"7S3apo5MbZIBGevIncPnhhA6rA2AQP4MCakTua62aaXHkKU0XoSwBTUg/PEGaNuaPHGkzpBlOdvb\\n\",\n       \"W9JwtMLl7jWSiuvtEpHB0/MMYyO6KOouek8bdSuNSfpN2TlHaHXQI/KgBMOyrMiygq5rpYXsA0mS\\n\",\n       \"4ZwhpFlUggK8J3Qd2zsLnnvuYYosjdS+deRViYpQzjJdW+PY0U2KouDkqeOcPHWSzc1NskmGwfG+\\n\",\n       \"9/0U97zu9fzLX/ol6rLGe8O1q9tMZ+vMZuskLsEZK844alCm8fzd2dnhyJEj4AcpMdXs7HxF12lv\\n\",\n       \"KNDUJcG3LJcL8iKliNDZLJuyde2Fnk75+LHjvPn8eTaPHBE2xULO2cOHDlGVFS59ZTInvVZVM1Oh\\n\",\n       \"fxIl1/1wnDJN6vOVAVCHkZqmHYbvRmUSDYz0OledS6UFtqNAsGnbnoivS6QXlEZqDGcMeSTkyvKM\\n\",\n       \"ru16cW8YKG7HQ0F6rDez/68a+B+VjSPwT/zPX+sjHjOq+ekH0N1TLwa9gMZK9mNWtx62Nnq+Rt3j\\n\",\n       \"FKaNdSZ9DW10aFSvv4cQ4jj0gBTQ4xpDkfRvVLXaWst8Pu+/YKXd1I1I0n2JKvUYNMrvuo63v+sD\\n\",\n       \"L1m3h7/waWDYxIwxYHx/Wx118IG2HTIVPc4sRs96UchmpmyEroeqta2IZIzHeXXASjMErX1qhKHf\\n\",\n       \"h0fS+rIsmWQTKWnEz5fnWWxaSu2jrup+ve2IzjXLUpqmlXp+KzX3xWLBtavXaFtPEzquXrkqU2pG\\n\",\n       \"tEnrpiEY2UiWy6Vs0NvbbO+WrE0mLJdlbGC1fSSVZ3k/rGGtZRmhl7q56fc6VhjX7IU29J8tNkeG\\n\",\n       
\"v4u3rYlsj10rfRdFrwdi3XvImmRCT7hF2k56MNJfkf5CYiU6ttYQYqPYEzCxlEgn0n02NoyxUDc1\\n\",\n       \"xlk2N48wnUwxPnDmzrs4fuIEn/jUJ7l0+XmSLJPDt5bUiQBvmsVmLvTXT+oGOl4XI0ydBwi0UVDZ\\n\",\n       \"9MNFsm4lWeZYLHfwvmUyKTh6aMrGxgb3nD3Lfa9/PSbI52zqmixN2d3Z7XHTNhlgf13X8cff8pM3\\n\",\n       \"jMAfe/Az/fM0oMgjZFN50cfQ3IBMQ3ZeRM9BCMdMXG/dvLPYoFTHrRS6PYZ81KfQwaFxlm/NwIFi\\n\",\n       \"R9eT7zqqGKyNp0KVklY3EPVn59/+3j9YBH4rTD9gGUsCGpnpCKmmI7u7u72K9Hw+75VPxlhN1ZJT\\n\",\n       \"LpXxhJTCctR56fjy2LlrrWy5XPYjztPpFGuTXt6q71onyZ7NQr9QHbXtuq6f1qqqqifOz7KsF17o\\n\",\n       \"IqJFTy7djF4OB641trFOXpYPIrr9xmdMP623syOc5ocOHaKs5eRWhyRERkuKIu+jCN0M1aHryaSN\\n\",\n       \"Zd1YBbM6CLiWUTvQty0GI9Ozy5oiz/osp2lbmTKNm6uzkDpL2VQY05FZqKqS5Y5I26VZBl0H3pDZ\\n\",\n       \"wIljh5muTePwlqWNI80C4RTenJ3dHdr4fpOi4NvPvUCe5yyWC9qmpaorQoAXXniBnZ1tLl+6zM7O\\n\",\n       \"tjjm9U0M0oAlBBLnBJniZlJzR5rgTdNgnfQGiNOWMk9gewcsePwIr3OWJB0ySGstWZH3m8S4KWmz\\n\",\n       \"jIxcGvCxv+K7lroTFE5oZP1E7Sng20DVtuCD1OM7oaTtmo4kK8DAxRe38M0VZkXBc5dfjKRJhsOH\\n\",\n       \"N7l27dpQgrSBcrHEA64o9gy1SBN2DWsMrbUD9a4xWBtLXkFKSMtygTPQtBV17UmcDHalDu4/f55z\\n\",\n       \"585J0BSDovl8zlohTU5FjDljSSdaGgyU5Y0J3sSHSMmqb172ZYhAWVY9I6AGGUUu11/btXFTFCRZ\\n\",\n       \"QL5zvKdtGoIfOGp0PkX7c0MJabj+9fi17q6ZtjrmNE0lc7KDSpNK7A0Tue2e6PuVauD77sBVoNdq\\n\",\n       \"6ur9aHc3PYn6bDYjSRK2t7d7WSOtQanTuXbtGtZaZrPZHoemF8kYJ/r888/3js2NdkcVGtAoum1b\\n\",\n       \"kUuDHjbUT3Ux1EZ18ZfLJdPpFGU508+nn2m5XPYcLwoR638PA5/CjexPvPulyJSVrey2sl/+L3/k\\n\",\n       \"Lyllv+F3DeysdWRZ6LPOoX5veqrW2WwWx98F5KCmgZ/6kjFroLWDILJG4dZa8jTDBMkIm7ajrZte\\n\",\n       \"wMR7TxnpDfI8J82zPf00Lclqpnuz/tzY9t2Bb29vR6KqiJ2Npo5yOp32qYi1ltl0OowUA3WMbp21\\n\",\n       \"oq0X4U1d18XxcNlFdTy9bmq2t3cE7hXr4d77/gvVHbBtWxaLhUQKqdQdp9NpRAEMKudZllLXghGf\\n\",\n       \"TqdsrK/HEkQS0S+B6dqUshRESJ5lELvfbVTddmlCksk0XzAxMljZylb2HZlk2+q0x8ilwcEqskMi\\n\",\n       \"5qEhOZ7G1Exdgy6ApqlihiV6vDLIU+OcQg4F8dS2Hb71tK0ZlVFkeLCqSrJMIMnOSf+mbSrK5aBf\\n\",\n       
\"W9e19CASgT13+LgRvMoFHfroOCIrlAhdI+AyMouNU3VNXRRJkSaDQvj29rY445imtG3L0c3NPUiV\\n\",\n       \"Y0eP9e+tJDXee7a3t5lMJj1CQWujARk6mc93e606oN+9NYW6ePEis+m64ILDgOdVEpsQQo+X1hoc\\n\",\n       \"RkainROODxi64Ctb2cpe2aTkqIIa3QjA4PosXfU/xUEL8kijXsmQ9zYPdRNIU7cHEZIkjjzPqKqK\\n\",\n       \"LEuBtJ/O1IlRycjpHXiSCJorSYQdVeZFHM4NG4jQfjQUxdB7aGND9Ga27w68bVuuXLlCkqY9ZaaW\\n\",\n       \"GowxdBEKpwV/bSjobqlfkLWWF198sWdu0wZknud9JK2pyrhpqfSOR44c6evmWr8e4GcDPGg6ne6p\\n\",\n       \"QQ+cHLGU03ryLO9PJq2xjxuzyj7mGaCHWpJRvciVrWxl35kNFL17BaIVoaN+Qq9boRsYxIzlNfZq\\n\",\n       \"aOp1rf5CyyeKXgkh9MIrik5L7F6ulX5AMTrpcQnXhAGvrv0l3QiAvrmvKJWXs3134JtRT6+qKkEm\\n\",\n       \"1PUo1ZHx4DzPef755wkh9AIQXdexWCx6Z962LZubm5RlSVVVnDp1isViwXK57EslWsPWulPXdX3j\\n\",\n       \"UxWpu66TZpz3fe3MjZoj+nf6uPIbNE3D2toai92B0Ea/nDH6RPHpXdf1qiC6w2ujc1CXX9nKVvZK\\n\",\n       \"piXVqqp64IHgtQUeqNe9Bk4mQLUUTcuqbiJM0FGXdd8r821H09UkmetLptrUHTfzYUQulkZQRaSg\\n\",\n       \"LZdl/3pd29G1wyCQ+jANREMIPeGVbhr9jMJNbN8d+O7ubl8yCAzoDkWX+AjdOXz4cP+hVCk7HcHY\\n\",\n       \"FHWhJZCdnR2ZfozOGOgbidpsBKJEW71n4ZQpr19cBiIcRbyMN5kQO/SLxYJiMoFAH0X3U2DRFCaZ\\n\",\n       \"JAnLsqStKmazGQ8/+hj3n39jj0N96Iuf7mt6VVVR5DmGvQM0Iu9FX+MD7cQbCAOGlbiuxu1VT1cV\\n\",\n       \"9NRaqjpyocfGbZbnlGUpF4P3cnK2LYVL+0m0NMui0tDwdxphWOeEewYlt6Jfj7Jc9pulKs+YoMRX\\n\",\n       \"ArYzFr781Ud487n7+p5Aj9KJI+3ee+GfiMgbjabaru0lspbzRZ/W6msokkAvat1s284TgifLc6wx\\n\",\n       \"NI3I1HnvKZfSwyiiWHBXy/lRTArKZdlP4wE03VDys1YodKVRnfSQxyzLZGzfmr7khpFeSj/IhERo\\n\",\n       \"hhAJkUoefuQbvPX+NwkO2VnqtsYay6QoIu7ZyUSwNbj4/SzjbMA0m8hzjNBCOOco8jwqRgmnShcE\\n\",\n       \"8y785LJmRV6QRqTMGB6ndWbvA0kq1BZJaimXpcwW+KHBX5YldVXhkGtF2fcUMqiWFwUhyFqWy4GX\\n\",\n       \"RDNv3vRSHzKfL/nyVx/k7W89T13X/SRzkqT9ebdcLvu5Bx/rynoNi/+Q9RtnxEJJ4PuIuyzL/ljU\\n\",\n       \"P4wH9bTkqs/TeYsx9HC4RgfTcsl4tH7cNL2Z7bsDd07Iq9bW1mhHztoYw+7uLsUIkaGQHmNMH1Wr\\n\",\n       \"M9OL0Vq7B76nzQpgD/RQFX/GhO+6aLqYY0ymXoxFUexxzuoM9b3rqpILPTZCxqmU1uIUlpRlGc7L\\n\",\n       
\"SfClLz/IW+8/1w8XLBYLGbIJgel0yiJiyrUL3k8GMuDm1Sk760iSbC82PAjvx7hxE6LzbL2QQWk5\\n\",\n       \"SuFNRVGwtbUlWYqXseRgRd5uPp+z4Rx1HLbZ2NjYM0DlvSePArPWCWSsbVuhr80nhC5gg0V77D74\\n\",\n       \"nifbR3Whr3z1Qd72ljfsSYXL5Zw0m1DFmqZGRZqJeQPOjgKCXC7SLuKqPYE2yMVVVzV11zIpCqFO\\n\",\n       \"jVDbZVlFtXDL7nzBYrHoM7oXr1xlfX2dPJVhjLZqUXk/G1PoIiuERAu5OJuyEj4eY0mMo4jnrgQH\\n\",\n       \"cbq4FW3NJI2iD1FyS51oYhPWioKvfeNbvOfd7+xpj4MVWGjiHJPJWjzmXRIXR7vjeUrnqeM5lOfC\\n\",\n       \"PaMsk4vFnJ2dXZK8wCWOYm2NtmvInLANWgI721txU4HIMqL/MFY2ZWMCXaucNf46B5kLNNMMjlPH\\n\",\n       \"0fVa8wTKquwDB98NEL2bjZWvr6/zyKPf5Mff/c7eYQJRt3S49pbLpWT4ietBEmoC9R1KKxqwYffO\\n\",\n       \"QIz9iV6HChl87rnnOHHiRO9n8jxne3u7DxbUx+jmpRuAvu54OFAd/iuVU/fdgX/gz/3V/T6EV4l9\\n\",\n       \"nn/9b9613wfxKrPf5l/9yo/v90G8yuy3+Re//I79PohXlVWVOGbtgUEcg2/mvfMXfLnF+5aqGQUw\\n\",\n       \"GuSEjnyS7YmSg/GILKPrgy4Jwirh3/cdnkAX+YhOnz4NDNUApWrQCF0DxnFVYIyC0WG/8RDZKwEa\\n\",\n       \"9t2Br2xlK1vZH8aU6kAd7XgqU7NkpbIYo9f0b7UkqtF7OwJO1HW7h/sIJEtyDJO5ILDnLo7jj6ef\\n\",\n       \"+8eui9rHZc+xw9bj1Y3llSbl93GUfmUrW9nKVvad2MuN0u+LA1/Zyla2spX94e3mc5orW9nKVray\\n\",\n       \"V62tHPjKVrayld2mdssduDHmfcaYbxpjvmWM+flb/f77ZcaYXzXGXDLGPDq6b9MY82ljzO8ZY37T\\n\",\n       \"GHN49NgvxDX6pjHmJ/fnqL+7Zoy5yxjzWWPM140xXzPG/PV4/4FdF2NMYYz5ojHmIWPMY8aYfxjv\\n\",\n       \"P7BromaMccaYB40xvxF/P/BrsmcY5bv9H3DAE8BrgRR4CHjdrTyG/foPvBM4Dzw6uu8fA38r3v55\\n\",\n       \"4Bfj7fvi2qRxrZ4A7H5/hu/CmpwCzsXbM+Bx4HWrdWEt/kyALwDvOOhrEj/r3wT+I/Dx+PuBX5Nb\\n\",\n       \"HYG/DXgihHAhhNAA/xn46Vt8DPtiIYTfAa5ed/cHEck64s8/E2//NPDREEITQriAnIBvuxXHeSst\\n\",\n       \"hHAxhPBQvL0LfAPRUj3o66IEGCIsKefNgV4TY8ydwPuBf4vSeR7wNYFbX0K5A/j90e/PxPsOqp0M\\n\",\n       \"IVyKty8BJ+PtM8jaqH3Pr5Mx5rVIhvJFDvi6GGOsMeYh5LN/NoTwdQ74mgD/DPg5YDyOedDX5JY7\\n\",\n       \"8BVm8WUsSO53s/X5nl07Y8wM+O+ICPbO+LGDuC4hBB9COAfcCfxJY8y7r3v8QK2JMeZPA5dDCA8y\\n\",\n       \"RN977KCtidqtduDPAneNfr+LvTvlQbNLxphTAMaY08DleP/163RnvO97zowxKeK8PxJC+Fi8+8Cv\\n\",\n       
\"C0AIYQv438D9HOw1+WHgg8aYJ4GPAj9mjPkIB3tNgFvvwL8CnDXGvNYYkwF/Afj4LT6GV5N9HPhQ\\n\",\n       \"vP0h4GOj+3/WGJMZY74fOAt8aR+O77tqRmaJ/x3wWAjhn48eOrDrYow5pmgKY8wE+AngQQ7wmoQQ\\n\",\n       \"PhxCuCuE8P3AzwKfCSH8ZQ7wmvS2D53kn0LQBk8Av7DfXdxb+Lk/CjwH1Egf4K8Am8BvAb8H/CZw\\n\",\n       \"ePT8D8c1+ibw3v0+/u/SmrwDqWk+hDipB4H3HeR1Ad4APBDX5BHg5+L9B3ZNrlufH2VAoRz4NVmN\\n\",\n       \"0q9sZStb2W1qq0nMla1sZSu7TW3lwFe2spWt7Da1lQNf2cpWtrLb1FYOfGUrW9nKblNbOfCVrWxl\\n\",\n       \"K7tNbeXAV7ayla3sNrWVA1/Zyla2stvUVg58ZStb2cpuU/t/6S2bnP6vZqYAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x114f15d10>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"plt.imshow(im)\\n\",\n    \"currentAxis = plt.gca()\\n\",\n    \"colors = ['r', 'b', 'y']\\n\",\n    \"for c, det in zip(colors, nms_dets[:3]):\\n\",\n    \"    currentAxis.add_patch(\\n\",\n    \"        plt.Rectangle((det[0], det[1]), det[2]-det[0], det[3]-det[1],\\n\",\n    \"        fill=False, edgecolor=c, linewidth=5)\\n\",\n    \"    )\\n\",\n    \"print 'scores:', nms_dets[:3, 4]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"This was an easy instance for bicycle as it was in the class's training set. 
However, the person result is a true detection since this was not in the set for that class.\\n\",\n    \"\\n\",\n    \"You should try out detection on an image of your own next!\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"(Remove the temp directory to clean up, and we're done.)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!rm -rf _temp\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Run a pretrained model as a detector in Python.\",\n  \"example_name\": \"R-CNN detection\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 6\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
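The notebook's final cell overlays the top-scoring boxes with `plt.Rectangle`, whose constructor takes a corner plus width and height rather than two corners. A minimal sketch of that coordinate conversion, assuming as in the notebook that each detection row is `[x1, y1, x2, y2, score]` (the helper names `rect_params` and `top_k` are ours, not part of the example):

```python
def rect_params(det):
    """Convert an [x1, y1, x2, y2, score] row into the ((x, y), width,
    height) triple that plt.Rectangle expects."""
    x1, y1, x2, y2 = det[:4]
    return (x1, y1), x2 - x1, y2 - y1

def top_k(dets, k=3):
    """Keep the k highest-scoring detections (score is column 4)."""
    return sorted(dets, key=lambda d: d[4], reverse=True)[:k]

# e.g. pick the two best boxes out of three candidates
dets = [[10, 20, 110, 220, 0.9], [0, 0, 50, 50, 0.4], [5, 5, 30, 30, 0.7]]
best = top_k(dets, k=2)
```

Each `rect_params(det)` result can be splatted straight into `plt.Rectangle(corner, w, h, fill=False)` as the notebook cell does.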
  {
    "path": "caffe-fpn/examples/feature_extraction/imagenet_val.prototxt",
    "content": "name: \"CaffeNet\"\nlayer {\n  name: \"data\"\n  type: \"ImageData\"\n  top: \"data\"\n  top: \"label\"\n  transform_param {\n    mirror: false\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  image_data_param {\n    source: \"examples/_temp/file_list.txt\"\n    batch_size: 50\n    new_height: 256\n    new_width: 256\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    
kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8\"\n  inner_product_param {\n    num_output: 1000\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"fc8\"\n  top: \"prob\"\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"prob\"\n  bottom: \"label\"\n  top: \"accuracy\"\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc8\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/feature_extraction/readme.md",
    "content": "---\ntitle: Feature extraction with Caffe C++ code.\ndescription: Extract CaffeNet / AlexNet features using the Caffe utility.\ncategory: example\ninclude_in_docs: true\npriority: 10\n---\n\nExtracting Features\n===================\n\nIn this tutorial, we will extract features using a pre-trained model with the included C++ utility.\nNote that we recommend using the Python interface for this task, as for example in the [filter visualization example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb).\n\nFollow the instructions for [installing Caffe](../../installation.html) and run `scripts/download_model_binary.py models/bvlc_reference_caffenet` from the caffe root directory.\nIf you need detailed information about the tools below, please consult their source code, in which additional documentation is usually provided.\n\nSelect data to run on\n---------------------\n\nWe'll make a temporary folder to store things in.\n\n    mkdir examples/_temp\n\nGenerate a list of the files to process.\nWe're going to use the images that ship with caffe.\n\n    find `pwd`/examples/images -type f -exec echo {} \\; > examples/_temp/temp.txt\n\nThe `ImageDataLayer` we'll use expects a label after each filename, so let's add a 0 to the end of each line.\n\n    sed \"s/$/ 0/\" examples/_temp/temp.txt > examples/_temp/file_list.txt\n\nDefine the Feature Extraction Network Architecture\n--------------------------------------------------\n\nIn practice, subtracting the mean image from a dataset significantly improves classification accuracy.\nDownload the mean image of the ILSVRC dataset.\n\n    ./data/ilsvrc12/get_ilsvrc_aux.sh\n\nWe will use `data/ilsvrc12/imagenet_mean.binaryproto` in the network definition prototxt.\n\nLet's copy and modify the network definition.\nWe'll be using the `ImageDataLayer`, which will load and resize images for us.\n\n    cp examples/feature_extraction/imagenet_val.prototxt examples/_temp\n\nExtract Features\n----------------\n\nNow everything necessary is in place.\n\n    ./build/tools/extract_features.bin models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10 leveldb\n\nThe name of the feature blob we extract is `fc7`, which represents the highest-level feature of the reference model.\nWe can use any other layer as well, such as `conv5` or `pool5`.\n\nThe last parameter above is the number of data mini-batches.\n\nThe features are stored to the LevelDB `examples/_temp/features`, ready for access by some other code.\n\nIf you encounter the error \"Check failed: status.ok() Failed to open leveldb examples/_temp/features\", it is because the directory examples/_temp/features was created the last time you ran the command. Remove it and run again.\n\n    rm -rf examples/_temp/features/\n\nIf you'd like to use the Python wrapper for extracting features, check out the [filter visualization notebook](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb).\n\nClean Up\n--------\n\nLet's remove the temporary directory now.\n\n    rm -r examples/_temp\n"
  },
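The `find` + `sed` pipeline in the readme above just writes one `<path> <label>` line per image, with a dummy label of 0. A Python equivalent may be easier to adapt; this is a sketch, with `image_dir` and `out_path` as placeholder arguments rather than anything the readme defines:

```python
import os

def write_file_list(image_dir, out_path, label=0):
    """List every file under image_dir and append a dummy label, matching
    the '<path> <label>' format that ImageDataLayer expects."""
    with open(out_path, "w") as out:
        for root, _, files in os.walk(image_dir):
            for name in sorted(files):
                out.write("%s %d\n" % (os.path.join(root, name), label))
```

Point `out_path` somewhere outside `image_dir`, otherwise the list file itself gets picked up by the walk.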
  {
    "path": "caffe-fpn/examples/finetune_flickr_style/assemble_data.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nForm a subset of the Flickr Style data, download images to dirname, and write\na Caffe ImageDataLayer training file.\n\"\"\"\nimport os\nimport urllib\nimport hashlib\nimport argparse\nimport numpy as np\nimport pandas as pd\nfrom skimage import io\nimport multiprocessing\n\n# Flickr returns a special image if the request is unavailable.\nMISSING_IMAGE_SHA1 = '6a92790b1c2a301c6e7ddef645dca1f53ea97ac2'\n\nexample_dirname = os.path.abspath(os.path.dirname(__file__))\ncaffe_dirname = os.path.abspath(os.path.join(example_dirname, '../..'))\ntraining_dirname = os.path.join(caffe_dirname, 'data/flickr_style')\n\n\ndef download_image(args_tuple):\n    \"For use with multiprocessing map. Returns True on success, False on failure.\"\n    try:\n        url, filename = args_tuple\n        if not os.path.exists(filename):\n            urllib.urlretrieve(url, filename)\n        # Read in binary mode so the SHA-1 check is byte-exact on all platforms.\n        with open(filename, 'rb') as f:\n            assert hashlib.sha1(f.read()).hexdigest() != MISSING_IMAGE_SHA1\n        test_read_image = io.imread(filename)\n        return True\n    except KeyboardInterrupt:\n        raise Exception()  # multiprocessing doesn't catch keyboard exceptions\n    except:\n        return False\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(\n        description='Download a subset of Flickr Style to a directory')\n    parser.add_argument(\n        '-s', '--seed', type=int, default=0,\n        help=\"random seed\")\n    parser.add_argument(\n        '-i', '--images', type=int, default=-1,\n        help=\"number of images to use (-1 for all [default])\",\n    )\n    parser.add_argument(\n        '-w', '--workers', type=int, default=-1,\n        help=\"num workers used to download images. 
-x uses (all - x) cores [-1 default].\"\n    )\n    parser.add_argument(\n        '-l', '--labels', type=int, default=0,\n        help=\"if set to a positive value, only sample images from the first number of labels.\"\n    )\n\n    args = parser.parse_args()\n    np.random.seed(args.seed)\n\n    # Read data, shuffle order, and subsample.\n    csv_filename = os.path.join(example_dirname, 'flickr_style.csv.gz')\n    df = pd.read_csv(csv_filename, index_col=0, compression='gzip')\n    df = df.iloc[np.random.permutation(df.shape[0])]\n    if args.labels > 0:\n        df = df.loc[df['label'] < args.labels]\n    if args.images > 0 and args.images < df.shape[0]:\n        df = df.iloc[:args.images]\n\n    # Make directory for images and get local filenames.\n    if training_dirname is None:\n        training_dirname = os.path.join(caffe_dirname, 'data/flickr_style')\n    images_dirname = os.path.join(training_dirname, 'images')\n    if not os.path.exists(images_dirname):\n        os.makedirs(images_dirname)\n    df['image_filename'] = [\n        os.path.join(images_dirname, _.split('/')[-1]) for _ in df['image_url']\n    ]\n\n    # Download images.\n    num_workers = args.workers\n    if num_workers <= 0:\n        num_workers = multiprocessing.cpu_count() + num_workers\n    print('Downloading {} images with {} workers...'.format(\n        df.shape[0], num_workers))\n    pool = multiprocessing.Pool(processes=num_workers)\n    map_args = zip(df['image_url'], df['image_filename'])\n    results = pool.map(download_image, map_args)\n\n    # Only keep rows with valid images, and write out training file lists.\n    df = df[results]\n    for split in ['train', 'test']:\n        split_df = df[df['_split'] == split]\n        filename = os.path.join(training_dirname, '{}.txt'.format(split))\n        split_df[['image_filename', 'label']].to_csv(\n            filename, sep=' ', header=None, index=None)\n    print('Writing train/val for {} successfully downloaded images.'.format(\n     
   df.shape[0]))\n"
  },
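The script above detects Flickr's "photo unavailable" placeholder by hashing the downloaded bytes and comparing against a known SHA-1 digest. The check in isolation (the constant is copied from the script; `image_bytes` and `is_placeholder` are our own names for illustration):

```python
import hashlib

# Digest of the fixed placeholder image Flickr serves for unavailable
# photos (value copied from assemble_data.py).
MISSING_IMAGE_SHA1 = '6a92790b1c2a301c6e7ddef645dca1f53ea97ac2'

def is_placeholder(image_bytes):
    """Return True if the raw bytes hash to the known placeholder digest."""
    return hashlib.sha1(image_bytes).hexdigest() == MISSING_IMAGE_SHA1
```

The script asserts the negation of this check after each download, so placeholder images are dropped from the training lists.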
  {
    "path": "caffe-fpn/examples/finetune_flickr_style/readme.md",
    "content": "---\ntitle: Fine-tuning for style recognition\ndescription: Fine-tune the ImageNet-trained CaffeNet on the \"Flickr Style\" dataset.\ncategory: example\ninclude_in_docs: true\npriority: 5\n---\n\n# Fine-tuning CaffeNet for Style Recognition on \"Flickr Style\" Data\n\nFine-tuning takes an already learned model, adapts the architecture, and resumes training from the already learned model weights.\nLet's fine-tune the BVLC-distributed CaffeNet model on a different dataset, [Flickr Style](http://sergeykarayev.com/files/1311.3715v3.pdf), to predict image style instead of object category.\n\n## Explanation\n\nThe Flickr-sourced images of the Style dataset are visually very similar to the ImageNet dataset, on which the `bvlc_reference_caffenet` was trained.\nSince that model works well for object category classification, we'd like to use its architecture for our style classifier.\nWe also only have 80,000 images to train on, so we'd like to start with the parameters learned on the 1,000,000 ImageNet images, and fine-tune as needed.\nIf we provide the `weights` argument to the `caffe train` command, the pretrained weights will be loaded into our model, matching layers by name.\n\nBecause we are predicting 20 classes instead of 1,000, we need to change the last layer in the model.\nTherefore, we change the name of the last layer from `fc8` to `fc8_flickr` in our prototxt.\nSince `bvlc_reference_caffenet` has no layer with that name, the renamed layer will begin training with random weights.\n\nWe will also decrease the overall learning rate `base_lr` in the solver prototxt, but boost the `lr_mult` on the newly introduced layer.\nThe idea is to have the rest of the model change very slowly with new data, but let the new layer learn fast.\nAdditionally, we set `stepsize` in the solver to a lower value than if we were training from scratch, since we're effectively far along in training and therefore want the learning rate to go down faster.\nNote that 
we could also entirely prevent fine-tuning of all layers other than `fc8_flickr` by setting their `lr_mult` to 0.\n\n## Procedure\n\nAll steps are to be done from the caffe root directory.\n\nThe dataset is distributed as a list of URLs with corresponding labels.\nUsing a script, we will download a small subset of the data and split it into train and val sets.\n\n    caffe % ./examples/finetune_flickr_style/assemble_data.py -h\n    usage: assemble_data.py [-h] [-s SEED] [-i IMAGES] [-w WORKERS]\n\n    Download a subset of Flickr Style to a directory\n\n    optional arguments:\n      -h, --help            show this help message and exit\n      -s SEED, --seed SEED  random seed\n      -i IMAGES, --images IMAGES\n                            number of images to use (-1 for all)\n      -w WORKERS, --workers WORKERS\n                            num workers used to download images. -x uses (all - x)\n                            cores.\n\n    caffe % python examples/finetune_flickr_style/assemble_data.py --workers=-1 --images=2000 --seed 831486\n    Downloading 2000 images with 7 workers...\n    Writing train/val for 1939 successfully downloaded images.\n\nThis script downloads images and writes train/val file lists into `data/flickr_style`.\nThe prototxts in this example assume this, and also assume the presence of the ImageNet mean file (run `get_ilsvrc_aux.sh` from `data/ilsvrc12` to obtain this if you haven't yet).\n\nWe'll also need the ImageNet-trained model, which you can obtain by running `./scripts/download_model_binary.py models/bvlc_reference_caffenet`.\n\nNow we can train! 
(You can fine-tune in CPU mode by leaving out the `-gpu` flag.)\n\n    caffe % ./build/tools/caffe train -solver models/finetune_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0\n\n    [...]\n\n    I0828 22:10:04.025378  9718 solver.cpp:46] Solver scaffolding done.\n    I0828 22:10:04.025388  9718 caffe.cpp:95] Use GPU with device ID 0\n    I0828 22:10:04.192004  9718 caffe.cpp:107] Finetuning from models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel\n\n    [...]\n\n    I0828 22:17:48.338963 11510 solver.cpp:165] Solving FlickrStyleCaffeNet\n    I0828 22:17:48.339010 11510 solver.cpp:251] Iteration 0, Testing net (#0)\n    I0828 22:18:14.313817 11510 solver.cpp:302]     Test net output #0: accuracy = 0.0308\n    I0828 22:18:14.476822 11510 solver.cpp:195] Iteration 0, loss = 3.78589\n    I0828 22:18:14.476878 11510 solver.cpp:397] Iteration 0, lr = 0.001\n    I0828 22:18:19.700408 11510 solver.cpp:195] Iteration 20, loss = 3.25728\n    I0828 22:18:19.700461 11510 solver.cpp:397] Iteration 20, lr = 0.001\n    I0828 22:18:24.924685 11510 solver.cpp:195] Iteration 40, loss = 2.18531\n    I0828 22:18:24.924741 11510 solver.cpp:397] Iteration 40, lr = 0.001\n    I0828 22:18:30.114858 11510 solver.cpp:195] Iteration 60, loss = 2.4915\n    I0828 22:18:30.114910 11510 solver.cpp:397] Iteration 60, lr = 0.001\n    I0828 22:18:35.328071 11510 solver.cpp:195] Iteration 80, loss = 2.04539\n    I0828 22:18:35.328127 11510 solver.cpp:397] Iteration 80, lr = 0.001\n    I0828 22:18:40.588317 11510 solver.cpp:195] Iteration 100, loss = 2.1924\n    I0828 22:18:40.588373 11510 solver.cpp:397] Iteration 100, lr = 0.001\n    I0828 22:18:46.171576 11510 solver.cpp:195] Iteration 120, loss = 2.25107\n    I0828 22:18:46.171669 11510 solver.cpp:397] Iteration 120, lr = 0.001\n    I0828 22:18:51.757809 11510 solver.cpp:195] Iteration 140, loss = 1.355\n    I0828 22:18:51.757863 11510 solver.cpp:397] Iteration 140, lr 
= 0.001\n    I0828 22:18:57.345080 11510 solver.cpp:195] Iteration 160, loss = 1.40815\n    I0828 22:18:57.345135 11510 solver.cpp:397] Iteration 160, lr = 0.001\n    I0828 22:19:02.928794 11510 solver.cpp:195] Iteration 180, loss = 1.6558\n    I0828 22:19:02.928850 11510 solver.cpp:397] Iteration 180, lr = 0.001\n    I0828 22:19:08.514497 11510 solver.cpp:195] Iteration 200, loss = 0.88126\n    I0828 22:19:08.514552 11510 solver.cpp:397] Iteration 200, lr = 0.001\n\n    [...]\n\n    I0828 22:22:40.789010 11510 solver.cpp:195] Iteration 960, loss = 0.112586\n    I0828 22:22:40.789175 11510 solver.cpp:397] Iteration 960, lr = 0.001\n    I0828 22:22:46.376626 11510 solver.cpp:195] Iteration 980, loss = 0.0959077\n    I0828 22:22:46.376682 11510 solver.cpp:397] Iteration 980, lr = 0.001\n    I0828 22:22:51.687258 11510 solver.cpp:251] Iteration 1000, Testing net (#0)\n    I0828 22:23:17.438894 11510 solver.cpp:302]     Test net output #0: accuracy = 0.2356\n\nNote how rapidly the loss went down. 
Although the 23.5% accuracy is only modest, it was achieved in only 1,000 iterations, and is evidence that the model is starting to learn quickly and well.\nOnce the model is fully fine-tuned on the whole training set over 100,000 iterations, the final validation accuracy is 39.16%.\nThis takes ~7 hours in Caffe on a K40 GPU.\n\nFor comparison, here is how the loss goes down when we do not start with a pre-trained model:\n\n    I0828 22:24:18.624004 12919 solver.cpp:165] Solving FlickrStyleCaffeNet\n    I0828 22:24:18.624099 12919 solver.cpp:251] Iteration 0, Testing net (#0)\n    I0828 22:24:44.520992 12919 solver.cpp:302]     Test net output #0: accuracy = 0.0366\n    I0828 22:24:44.676905 12919 solver.cpp:195] Iteration 0, loss = 3.47942\n    I0828 22:24:44.677120 12919 solver.cpp:397] Iteration 0, lr = 0.001\n    I0828 22:24:50.152454 12919 solver.cpp:195] Iteration 20, loss = 2.99694\n    I0828 22:24:50.152509 12919 solver.cpp:397] Iteration 20, lr = 0.001\n    I0828 22:24:55.736256 12919 solver.cpp:195] Iteration 40, loss = 3.0498\n    I0828 22:24:55.736311 12919 solver.cpp:397] Iteration 40, lr = 0.001\n    I0828 22:25:01.316514 12919 solver.cpp:195] Iteration 60, loss = 2.99549\n    I0828 22:25:01.316567 12919 solver.cpp:397] Iteration 60, lr = 0.001\n    I0828 22:25:06.899554 12919 solver.cpp:195] Iteration 80, loss = 3.00573\n    I0828 22:25:06.899610 12919 solver.cpp:397] Iteration 80, lr = 0.001\n    I0828 22:25:12.484624 12919 solver.cpp:195] Iteration 100, loss = 2.99094\n    I0828 22:25:12.484678 12919 solver.cpp:397] Iteration 100, lr = 0.001\n    I0828 22:25:18.069056 12919 solver.cpp:195] Iteration 120, loss = 3.01616\n    I0828 22:25:18.069149 12919 solver.cpp:397] Iteration 120, lr = 0.001\n    I0828 22:25:23.650928 12919 solver.cpp:195] Iteration 140, loss = 2.98786\n    I0828 22:25:23.650984 12919 solver.cpp:397] Iteration 140, lr = 0.001\n    I0828 22:25:29.235535 12919 solver.cpp:195] Iteration 160, loss = 3.00724\n    I0828 22:25:29.235589 12919 solver.cpp:397] Iteration 160, lr = 0.001\n    I0828 22:25:34.816898 12919 solver.cpp:195] Iteration 180, loss = 3.00099\n    I0828 22:25:34.816953 12919 solver.cpp:397] Iteration 180, lr = 0.001\n    I0828 22:25:40.396656 12919 solver.cpp:195] Iteration 200, loss = 2.99848\n    I0828 22:25:40.396711 12919 solver.cpp:397] Iteration 200, lr = 0.001\n\n    [...]\n\n    I0828 22:29:12.539094 12919 solver.cpp:195] Iteration 960, loss = 2.99203\n    I0828 22:29:12.539258 12919 solver.cpp:397] Iteration 960, lr = 0.001\n    I0828 22:29:18.123092 12919 solver.cpp:195] Iteration 980, loss = 2.99345\n    I0828 22:29:18.123147 12919 solver.cpp:397] Iteration 980, lr = 0.001\n    I0828 22:29:23.432059 12919 solver.cpp:251] Iteration 1000, Testing net (#0)\n    I0828 22:29:49.409044 12919 solver.cpp:302]     Test net output #0: accuracy = 0.0572\n\nThis model is only beginning to learn.\n\nFine-tuning can be feasible when training from scratch would not be, for lack of time or data.\nEven in CPU mode, each pass through the training set takes ~100 s. GPU fine-tuning is of course faster still and can learn a useful model in minutes or hours instead of days or weeks.\nFurthermore, note that the model has only trained on < 2,000 instances. Transfer learning a new task like style recognition from the ImageNet pretraining can require much less data than training from scratch.\n\nNow try fine-tuning on your own tasks and data!\n\n## Trained model\n\nWe provide a model trained on all 80K images, with final accuracy of 39%.\nSimply do `./scripts/download_model_binary.py models/finetune_flickr_style` to obtain it.\n\n## License\n\nThe Flickr Style dataset as distributed here contains only URLs to images.\nSome of the images may be copyrighted.\nTraining a category-recognition model for research/non-commercial use may constitute fair use of this data, but the result should not be used for commercial purposes.\n"
  },
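The readme's learning-rate advice boils down to one product: Caffe scales the solver's `base_lr` by each parameter's `lr_mult`. A sketch of the resulting per-layer rates, using the example's values (`base_lr` 0.001, boosted `lr_mult` on the new `fc8_flickr` layer, `lr_mult` 0 to freeze); the function name is ours, not Caffe's:

```python
def effective_lr(base_lr, lr_mult):
    """Per-parameter learning rate: Caffe multiplies the solver's base_lr
    by the layer's lr_mult."""
    return base_lr * lr_mult

# Pretrained layers crawl while the freshly initialized layer learns fast:
pretrained = effective_lr(0.001, 1)   # slow updates for transferred weights
new_layer = effective_lr(0.001, 10)   # 10x faster for fc8_flickr
frozen = effective_lr(0.001, 0)       # lr_mult 0 disables fine-tuning entirely
```

This is why setting `lr_mult: 0` on every layer except `fc8_flickr`, as the readme notes, trains only the new classifier.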
  {
    "path": "caffe-fpn/examples/finetune_flickr_style/style_names.txt",
    "content": "Detailed\nPastel\nMelancholy\nNoir\nHDR\nVintage\nLong Exposure\nHorror\nSunny\nBright\nHazy\nBokeh\nSerene\nTexture\nEthereal\nMacro\nDepth of Field\nGeometric Composition\nMinimal\nRomantic\n"
  },
  {
    "path": "caffe-fpn/examples/finetune_pascal_detection/pascal_finetune_solver.prototxt",
    "content": "net: \"examples/finetune_pascal_detection/pascal_finetune_trainval_test.prototxt\"\ntest_iter: 100\ntest_interval: 1000\nbase_lr: 0.001\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 20000\ndisplay: 20\nmax_iter: 100000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/finetune_pascal_detection/pascal_det_finetune\"\n"
  },
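The solver above uses the `step` policy, which drops the rate by `gamma` every `stepsize` iterations: lr(t) = base_lr * gamma^floor(t / stepsize). A sketch with the values from this file (the helper name is ours):

```python
def step_lr(it, base_lr=0.001, gamma=0.1, stepsize=20000):
    """'step' lr_policy: multiply base_lr by gamma once per stepsize
    iterations, matching the solver prototxt above."""
    return base_lr * gamma ** (it // stepsize)

# With stepsize 20000 and max_iter 100000, the rate is cut by 10x
# four times over the course of training.
```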
  {
    "path": "caffe-fpn/examples/finetune_pascal_detection/pascal_finetune_trainval_test.prototxt",
    "content": "name: \"CaffeNet\"\nlayer {\n  name: \"data\"\n  type: \"WindowData\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mirror: true\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  window_data_param {\n    source: \"examples/finetune_pascal_detection/window_file_2007_trainval.txt\"\n    batch_size: 128\n    fg_threshold: 0.5\n    bg_threshold: 0.5\n    fg_fraction: 0.25\n    context_pad: 16\n    crop_mode: \"warp\"\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"WindowData\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mirror: true\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  window_data_param {\n    source: \"examples/finetune_pascal_detection/window_file_2007_test.txt\"\n    batch_size: 128\n    fg_threshold: 0.5\n    bg_threshold: 0.5\n    fg_fraction: 0.25\n    context_pad: 16\n    crop_mode: \"warp\"\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  
param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    
group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8_pascal\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8_pascal\"\n  param {\n    lr_mult: 10\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 20\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 
0\n    }\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc8_pascal\"\n  bottom: \"label\"\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc8_pascal\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/nonlinear_auto_test.prototxt",
    "content": "layer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/test.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"data\"\n  top: \"ip1\"\n  inner_product_param {\n    num_output: 40\n    weight_filler {\n      type: \"xavier\"\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"accuracy\"\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/nonlinear_auto_train.prototxt",
    "content": "layer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/train.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"data\"\n  top: \"ip1\"\n  inner_product_param {\n    num_output: 40\n    weight_filler {\n      type: \"xavier\"\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"accuracy\"\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/nonlinear_solver.prototxt",
    "content": "train_net: \"examples/hdf5_classification/nonlinear_auto_train.prototxt\"\ntest_net: \"examples/hdf5_classification/nonlinear_auto_test.prototxt\"\ntest_iter: 250\ntest_interval: 1000\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 5000\ndisplay: 1000\nmax_iter: 10000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/hdf5_classification/data/train\"\nsolver_mode: CPU\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/nonlinear_train_val.prototxt",
    "content": "name: \"LogisticRegressionNet\"\nlayer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/train.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/test.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"fc1\"\n  type: \"InnerProduct\"\n  bottom: \"data\"\n  top: \"fc1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 40\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"fc1\"\n  top: \"fc1\"\n}\nlayer {\n  name: \"fc2\"\n  type: \"InnerProduct\"\n  bottom: \"fc1\"\n  top: \"fc2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc2\"\n  bottom: \"label\"\n  top: \"loss\"\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc2\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/solver.prototxt",
    "content": "train_net: \"examples/hdf5_classification/logreg_auto_train.prototxt\"\ntest_net: \"examples/hdf5_classification/logreg_auto_test.prototxt\"\ntest_iter: 250\ntest_interval: 1000\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 5000\ndisplay: 1000\nmax_iter: 10000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/hdf5_classification/data/train\"\nsolver_mode: CPU\n"
  },
  {
    "path": "caffe-fpn/examples/hdf5_classification/train_val.prototxt",
    "content": "name: \"LogisticRegressionNet\"\nlayer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/train.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"HDF5Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  hdf5_data_param {\n    source: \"examples/hdf5_classification/data/test.txt\"\n    batch_size: 10\n  }\n}\nlayer {\n  name: \"fc1\"\n  type: \"InnerProduct\"\n  bottom: \"data\"\n  top: \"fc1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc1\"\n  bottom: \"label\"\n  top: \"loss\"\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc1\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/imagenet/create_imagenet.sh",
    "content": "#!/usr/bin/env sh\n# Create the imagenet lmdb inputs\n# N.B. set the path to the imagenet train + val data dirs\n\nEXAMPLE=examples/imagenet\nDATA=data/ilsvrc12\nTOOLS=build/tools\n\nTRAIN_DATA_ROOT=/path/to/imagenet/train/\nVAL_DATA_ROOT=/path/to/imagenet/val/\n\n# Set RESIZE=true to resize the images to 256x256. Leave as false if images have\n# already been resized using another tool.\nRESIZE=false\nif $RESIZE; then\n  RESIZE_HEIGHT=256\n  RESIZE_WIDTH=256\nelse\n  RESIZE_HEIGHT=0\n  RESIZE_WIDTH=0\nfi\n\nif [ ! -d \"$TRAIN_DATA_ROOT\" ]; then\n  echo \"Error: TRAIN_DATA_ROOT is not a path to a directory: $TRAIN_DATA_ROOT\"\n  echo \"Set the TRAIN_DATA_ROOT variable in create_imagenet.sh to the path\" \\\n       \"where the ImageNet training data is stored.\"\n  exit 1\nfi\n\nif [ ! -d \"$VAL_DATA_ROOT\" ]; then\n  echo \"Error: VAL_DATA_ROOT is not a path to a directory: $VAL_DATA_ROOT\"\n  echo \"Set the VAL_DATA_ROOT variable in create_imagenet.sh to the path\" \\\n       \"where the ImageNet validation data is stored.\"\n  exit 1\nfi\n\necho \"Creating train lmdb...\"\n\nGLOG_logtostderr=1 $TOOLS/convert_imageset \\\n    --resize_height=$RESIZE_HEIGHT \\\n    --resize_width=$RESIZE_WIDTH \\\n    --shuffle \\\n    $TRAIN_DATA_ROOT \\\n    $DATA/train.txt \\\n    $EXAMPLE/ilsvrc12_train_lmdb\n\necho \"Creating val lmdb...\"\n\nGLOG_logtostderr=1 $TOOLS/convert_imageset \\\n    --resize_height=$RESIZE_HEIGHT \\\n    --resize_width=$RESIZE_WIDTH \\\n    --shuffle \\\n    $VAL_DATA_ROOT \\\n    $DATA/val.txt \\\n    $EXAMPLE/ilsvrc12_val_lmdb\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/examples/imagenet/make_imagenet_mean.sh",
    "content": "#!/usr/bin/env sh\n# Compute the mean image from the imagenet training lmdb\n# N.B. this is available in data/ilsvrc12\n\nEXAMPLE=examples/imagenet\nDATA=data/ilsvrc12\nTOOLS=build/tools\n\n$TOOLS/compute_image_mean $EXAMPLE/ilsvrc12_train_lmdb \\\n  $DATA/imagenet_mean.binaryproto\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/examples/imagenet/readme.md",
    "content": "---\ntitle: ImageNet tutorial\ndescription: Train and test \"CaffeNet\" on ImageNet data.\ncategory: example\ninclude_in_docs: true\npriority: 1\n---\n\nBrewing ImageNet\n================\n\nThis guide is meant to get you ready to train your own model on your own data.\nIf you just want an ImageNet-trained network, then note that since training takes a lot of energy and we hate global warming, we provide the CaffeNet model trained as described below in the [model zoo](/model_zoo.html).\n\nData Preparation\n----------------\n\n*The guide specifies all paths and assumes all commands are executed from the root caffe directory.*\n\n*By \"ImageNet\" we here mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.*\n\nWe assume that you already have downloaded the ImageNet training data and validation data, and they are stored on your disk like:\n\n    /path/to/imagenet/train/n01440764/n01440764_10026.JPEG\n    /path/to/imagenet/val/ILSVRC2012_val_00000001.JPEG\n\nYou will first need to prepare some auxiliary data for training. This data can be downloaded by:\n\n    ./data/ilsvrc12/get_ilsvrc_aux.sh\n\nThe training and validation input are described in `train.txt` and `val.txt` as text listing all the files and their labels. Note that we use a different indexing for labels than the ILSVRC devkit: we sort the synset names in their ASCII order, and then label them from 0 to 999. See `synset_words.txt` for the synset/name mapping.\n\nYou may want to resize the images to 256x256 in advance. By default, we do not explicitly do this because in a cluster environment, one may benefit from resizing images in a parallel fashion, using mapreduce. For example, Yangqing used his lightweight [mincepie](https://github.com/Yangqing/mincepie) package. 
If you prefer things to be simpler, you can also use shell commands, something like:\n\n    for name in /path/to/imagenet/val/*.JPEG; do\n        convert -resize 256x256\\! $name $name\n    done\n\nTake a look at `examples/imagenet/create_imagenet.sh`. Set the paths to the train and val dirs as needed, and set \"RESIZE=true\" to resize all images to 256x256 if you haven't resized the images in advance.\nNow simply create the lmdbs with `examples/imagenet/create_imagenet.sh`. Note that `examples/imagenet/ilsvrc12_train_lmdb` and `examples/imagenet/ilsvrc12_val_lmdb` should not exist before this execution. They will be created by the script. `GLOG_logtostderr=1` simply dumps more information for you to inspect, and you can safely ignore it.\n\nCompute Image Mean\n------------------\n\nThe model requires us to subtract the image mean from each image, so we have to compute the mean. `tools/compute_image_mean.cpp` implements that - it is also a good example to familiarize yourself with how to manipulate the various components, such as protocol buffers, databases, and logging, if you are not familiar with them. Anyway, the mean computation can be carried out as:\n\n    ./examples/imagenet/make_imagenet_mean.sh\n\nwhich will make `data/ilsvrc12/imagenet_mean.binaryproto`.\n\nModel Definition\n----------------\n\nWe are going to describe a reference implementation of the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf).\n\nThe network definition (`models/bvlc_reference_caffenet/train_val.prototxt`) follows the one in Krizhevsky et al.\nNote that if you deviated from the file paths suggested in this guide, you'll need to adjust the relevant paths in the `.prototxt` files.\n\nIf you look carefully at `models/bvlc_reference_caffenet/train_val.prototxt`, you will notice several `include` sections specifying either `phase: TRAIN` or `phase: TEST`. 
These sections allow us to define two closely related networks in one file: the network used for training and the network used for testing. These two networks are almost identical, sharing all layers except for those marked with `include { phase: TRAIN }` or `include { phase: TEST }`. In this case, only the input layers and one output layer are different.\n\n**Input layer differences:** The training network's `data` input layer draws its data from `examples/imagenet/ilsvrc12_train_lmdb` and randomly mirrors the input image. The testing network's `data` layer takes data from `examples/imagenet/ilsvrc12_val_lmdb` and does not perform random mirroring.\n\n**Output layer differences:** Both networks output the `softmax_loss` layer, which in training is used to compute the loss function and to initialize the backpropagation, while in validation this loss is simply reported. The testing network also has a second output layer, `accuracy`, which is used to report the accuracy on the test set. In the process of training, the test network will occasionally be instantiated and tested on the test set, producing lines like `Test score #0: xxx` and `Test score #1: xxx`. In this case score 0 is the accuracy (which will start around 1/1000 = 0.001 for an untrained network) and score 1 is the loss (which will start around 7 for an untrained network).\n\nWe will also lay out a protocol buffer for running the solver. Let's make a few plans:\n\n* We will run in batches of 256, and run a total of 450,000 iterations (about 90 epochs).\n* For every 1,000 iterations, we test the learned net on the validation data.\n* We set the initial learning rate to 0.01, and decrease it every 100,000 iterations (about 20 epochs).\n* Information will be displayed every 20 iterations.\n* The network will be trained with momentum 0.9 and a weight decay of 0.0005.\n* For every 10,000 iterations, we will take a snapshot of the current status.\n\nSound good? 
This is implemented in `models/bvlc_reference_caffenet/solver.prototxt`.\n\nTraining ImageNet\n-----------------\n\nReady? Let's train.\n\n    ./build/tools/caffe train --solver=models/bvlc_reference_caffenet/solver.prototxt\n\nSit back and enjoy!\n\nOn a K40 machine, every 20 iterations take about 26.5 seconds to run (while on a K20 this takes 36 seconds), so effectively about 5.2 ms per image for the full forward-backward pass. About 2 ms of this is on forward, and the rest is backward. If you are interested in dissecting the computation time, you can run\n\n    ./build/tools/caffe time --model=models/bvlc_reference_caffenet/train_val.prototxt\n\nResume Training?\n----------------\n\nWe all experience times when the power goes out, or we feel like rewarding ourselves a little by playing Battlefield (does anyone still remember Quake?). Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This can be done as easily as:\n\n    ./build/tools/caffe train --solver=models/bvlc_reference_caffenet/solver.prototxt --snapshot=models/bvlc_reference_caffenet/caffenet_train_iter_10000.solverstate\n\nwhere `caffenet_train_iter_10000.solverstate` is the solver state snapshot that stores all necessary information to recover the exact solver state (including the parameters, momentum history, etc.).\n\nParting Words\n-------------\n\nHope you liked this recipe!\nMany researchers have gone further since the ILSVRC 2012 challenge, changing the network architecture and/or fine-tuning the various parameters in the network to address new data and tasks.\n**Caffe lets you explore different network choices more easily by simply writing different prototxt files** - isn't that exciting?\n\nAnd now that you have a trained network, check out how to use it with the Python interface for [classifying ImageNet](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb).\n"
  },
  {
    "path": "caffe-fpn/examples/imagenet/resume_training.sh",
    "content": "#!/usr/bin/env sh\n\n./build/tools/caffe train \\\n    --solver=models/bvlc_reference_caffenet/solver.prototxt \\\n    --snapshot=models/bvlc_reference_caffenet/caffenet_train_10000.solverstate.h5\n"
  },
  {
    "path": "caffe-fpn/examples/imagenet/train_caffenet.sh",
    "content": "#!/usr/bin/env sh\n\n./build/tools/caffe train \\\n    --solver=models/bvlc_reference_caffenet/solver.prototxt\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/convert_mnist_data.cpp",
    "content": "// This script converts the MNIST dataset to a lmdb (default) or\n// leveldb (--backend=leveldb) format used by caffe to load data.\n// Usage:\n//    convert_mnist_data [FLAGS] input_image_file input_label_file\n//                        output_db_file\n// The MNIST dataset could be downloaded at\n//    http://yann.lecun.com/exdb/mnist/\n\n#include <gflags/gflags.h>\n#include <glog/logging.h>\n#include <google/protobuf/text_format.h>\n\n#if defined(USE_LEVELDB) && defined(USE_LMDB)\n#include <leveldb/db.h>\n#include <leveldb/write_batch.h>\n#include <lmdb.h>\n#endif\n\n#include <stdint.h>\n#include <sys/stat.h>\n\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/format.hpp\"\n\n#if defined(USE_LEVELDB) && defined(USE_LMDB)\n\nusing namespace caffe;  // NOLINT(build/namespaces)\nusing boost::scoped_ptr;\nusing std::string;\n\nDEFINE_string(backend, \"lmdb\", \"The backend for storing the result\");\n\nuint32_t swap_endian(uint32_t val) {\n    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0xFF00FF);\n    return (val << 16) | (val >> 16);\n}\n\nvoid convert_dataset(const char* image_filename, const char* label_filename,\n        const char* db_path, const string& db_backend) {\n  // Open files\n  std::ifstream image_file(image_filename, std::ios::in | std::ios::binary);\n  std::ifstream label_file(label_filename, std::ios::in | std::ios::binary);\n  CHECK(image_file) << \"Unable to open file \" << image_filename;\n  CHECK(label_file) << \"Unable to open file \" << label_filename;\n  // Read the magic and the meta data\n  uint32_t magic;\n  uint32_t num_items;\n  uint32_t num_labels;\n  uint32_t rows;\n  uint32_t cols;\n\n  image_file.read(reinterpret_cast<char*>(&magic), 4);\n  magic = swap_endian(magic);\n  CHECK_EQ(magic, 2051) << \"Incorrect image file magic.\";\n  
label_file.read(reinterpret_cast<char*>(&magic), 4);\n  magic = swap_endian(magic);\n  CHECK_EQ(magic, 2049) << \"Incorrect label file magic.\";\n  image_file.read(reinterpret_cast<char*>(&num_items), 4);\n  num_items = swap_endian(num_items);\n  label_file.read(reinterpret_cast<char*>(&num_labels), 4);\n  num_labels = swap_endian(num_labels);\n  CHECK_EQ(num_items, num_labels);\n  image_file.read(reinterpret_cast<char*>(&rows), 4);\n  rows = swap_endian(rows);\n  image_file.read(reinterpret_cast<char*>(&cols), 4);\n  cols = swap_endian(cols);\n\n\n  scoped_ptr<db::DB> db(db::GetDB(db_backend));\n  db->Open(db_path, db::NEW);\n  scoped_ptr<db::Transaction> txn(db->NewTransaction());\n\n  // Storing to db\n  char label;\n  char* pixels = new char[rows * cols];\n  int count = 0;\n  string value;\n\n  Datum datum;\n  datum.set_channels(1);\n  datum.set_height(rows);\n  datum.set_width(cols);\n  LOG(INFO) << \"A total of \" << num_items << \" items.\";\n  LOG(INFO) << \"Rows: \" << rows << \" Cols: \" << cols;\n  for (int item_id = 0; item_id < num_items; ++item_id) {\n    image_file.read(pixels, rows * cols);\n    label_file.read(&label, 1);\n    datum.set_data(pixels, rows*cols);\n    datum.set_label(label);\n    string key_str = caffe::format_int(item_id, 8);\n    datum.SerializeToString(&value);\n\n    txn->Put(key_str, value);\n\n    if (++count % 1000 == 0) {\n      txn->Commit();\n    }\n  }\n  // write the last batch\n  if (count % 1000 != 0) {\n      txn->Commit();\n  }\n  LOG(INFO) << \"Processed \" << count << \" files.\";\n  delete[] pixels;\n  db->Close();\n}\n\nint main(int argc, char** argv) {\n#ifndef GFLAGS_GFLAGS_H_\n  namespace gflags = google;\n#endif\n\n  FLAGS_alsologtostderr = 1;\n\n  gflags::SetUsageMessage(\"This script converts the MNIST dataset to\\n\"\n        \"the lmdb/leveldb format used by Caffe to load data.\\n\"\n        \"Usage:\\n\"\n        \"    convert_mnist_data [FLAGS] input_image_file input_label_file \"\n        
\"output_db_file\\n\"\n        \"The MNIST dataset could be downloaded at\\n\"\n        \"    http://yann.lecun.com/exdb/mnist/\\n\"\n        \"You should gunzip them after downloading,\"\n        \"or directly use data/mnist/get_mnist.sh\\n\");\n  gflags::ParseCommandLineFlags(&argc, &argv, true);\n\n  const string& db_backend = FLAGS_backend;\n\n  if (argc != 4) {\n    gflags::ShowUsageWithFlagsRestrict(argv[0],\n        \"examples/mnist/convert_mnist_data\");\n  } else {\n    google::InitGoogleLogging(argv[0]);\n    convert_dataset(argv[1], argv[2], argv[3], db_backend);\n  }\n  return 0;\n}\n#else\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"This example requires LevelDB and LMDB; \" <<\n  \"compile with USE_LEVELDB and USE_LMDB.\";\n}\n#endif  // USE_LEVELDB and USE_LMDB\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/create_mnist.sh",
    "content": "#!/usr/bin/env sh\n# This script converts the mnist data into lmdb/leveldb format,\n# depending on the value assigned to $BACKEND.\nset -e\n\nEXAMPLE=examples/mnist\nDATA=data/mnist\nBUILD=build/examples/mnist\n\nBACKEND=\"lmdb\"\n\necho \"Creating ${BACKEND}...\"\n\nrm -rf $EXAMPLE/mnist_train_${BACKEND}\nrm -rf $EXAMPLE/mnist_test_${BACKEND}\n\n$BUILD/convert_mnist_data.bin $DATA/train-images-idx3-ubyte \\\n  $DATA/train-labels-idx1-ubyte $EXAMPLE/mnist_train_${BACKEND} --backend=${BACKEND}\n$BUILD/convert_mnist_data.bin $DATA/t10k-images-idx3-ubyte \\\n  $DATA/t10k-labels-idx1-ubyte $EXAMPLE/mnist_test_${BACKEND} --backend=${BACKEND}\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet.prototxt",
    "content": "name: \"LeNet\"\nlayer {\n  name: \"data\"\n  type: \"Input\"\n  top: \"data\"\n  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 20\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 50\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool2\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"ip2\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_adadelta_solver.prototxt",
    "content": "# The train/test net protocol buffer definition\nnet: \"examples/mnist/lenet_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 1.0\nlr_policy: \"fixed\"\nmomentum: 0.95\nweight_decay: 0.0005\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet_adadelta\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\ntype: \"AdaDelta\"\ndelta: 1e-6\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_auto_solver.prototxt",
    "content": "# The train/test net protocol buffer definition\ntrain_net: \"mnist/lenet_auto_train.prototxt\"\ntest_net: \"mnist/lenet_auto_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.9\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"mnist/lenet\"\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_consolidated_solver.prototxt",
    "content": "# lenet_consolidated_solver.prototxt consolidates the lenet_solver, lenet_train,\n# and lenet_test prototxts into a single file.  It also adds an additional test\n# net which runs on the training set, e.g., for the purpose of comparing\n# train/test accuracy (accuracy is computed only on the test set in the included\n# LeNet example).  This is mainly included as an example of using these features\n# (specify NetParameters directly in the solver, specify multiple test nets)\n# if desired.\n#\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.9\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet\"\n# Set a random_seed for repeatable results.\n# (For results that vary due to random initialization, comment out the below\n# line, or set to a negative integer -- e.g. \"random_seed: -1\")\nrandom_seed: 1701\n# solver mode: CPU or GPU\nsolver_mode: GPU\n\n# We test on both the test and train set using \"stages\".  
The TEST DATA layers\n# each have a stage, either 'test-on-train-set' or 'test-on-test-set'.\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\ntest_state: { stage: \"test-on-test-set\" }\n# The train set has 60K images, so we run 600 test iters (600 * 100 = 60K).\ntest_iter: 600\ntest_state: { stage: \"test-on-train-set\" }\n\n# The net protocol buffer definition\nnet_param {\n  name: \"LeNet\"\n  layers {\n    name: \"mnist\"\n    type: DATA\n    top: \"data\"\n    top: \"label\"\n    data_param {\n      source: \"examples/mnist/mnist_train_lmdb\"\n      backend: LMDB\n      batch_size: 64\n    }\n    transform_param {\n      scale: 0.00390625\n    }\n    include: { phase: TRAIN }\n  }\n  layers {\n    name: \"mnist\"\n    type: DATA\n    top: \"data\"\n    top: \"label\"\n    data_param {\n      source: \"examples/mnist/mnist_test_lmdb\"\n      backend: LMDB\n      batch_size: 100\n    }\n    transform_param {\n      scale: 0.00390625\n    }\n    include: {\n      phase: TEST\n      stage: \"test-on-test-set\"\n    }\n  }\n  layers {\n    name: \"mnist\"\n    type: DATA\n    top: \"data\"\n    top: \"label\"\n    data_param {\n      source: \"examples/mnist/mnist_train_lmdb\"\n      backend: LMDB\n      batch_size: 100\n    }\n    transform_param {\n      scale: 0.00390625\n    }\n    include: {\n      phase: TEST\n      stage: \"test-on-train-set\"\n    }\n  }\n  layers {\n    name: \"conv1\"\n    type: CONVOLUTION\n    bottom: \"data\"\n    top: \"conv1\"\n    blobs_lr: 1\n    blobs_lr: 2\n    convolution_param {\n      num_output: 20\n      kernel_size: 5\n      stride: 1\n      weight_filler {\n        type: \"xavier\"\n      }\n      bias_filler {\n        type: \"constant\"\n      }\n    }\n  }\n  layers {\n    name: \"pool1\"\n    type: POOLING\n    bottom: \"conv1\"\n    top: \"pool1\"\n    
pooling_param {\n      pool: MAX\n      kernel_size: 2\n      stride: 2\n    }\n  }\n  layers {\n    name: \"conv2\"\n    type: CONVOLUTION\n    bottom: \"pool1\"\n    top: \"conv2\"\n    blobs_lr: 1\n    blobs_lr: 2\n    convolution_param {\n      num_output: 50\n      kernel_size: 5\n      stride: 1\n      weight_filler {\n        type: \"xavier\"\n      }\n      bias_filler {\n        type: \"constant\"\n      }\n    }\n  }\n  layers {\n    name: \"pool2\"\n    type: POOLING\n    bottom: \"conv2\"\n    top: \"pool2\"\n    pooling_param {\n      pool: MAX\n      kernel_size: 2\n      stride: 2\n    }\n  }\n  layers {\n    name: \"ip1\"\n    type: INNER_PRODUCT\n    bottom: \"pool2\"\n    top: \"ip1\"\n    blobs_lr: 1\n    blobs_lr: 2\n    inner_product_param {\n      num_output: 500\n      weight_filler {\n        type: \"xavier\"\n      }\n      bias_filler {\n        type: \"constant\"\n      }\n    }\n  }\n  layers {\n    name: \"relu1\"\n    type: RELU\n    bottom: \"ip1\"\n    top: \"ip1\"\n  }\n  layers {\n    name: \"ip2\"\n    type: INNER_PRODUCT\n    bottom: \"ip1\"\n    top: \"ip2\"\n    blobs_lr: 1\n    blobs_lr: 2\n    inner_product_param {\n      num_output: 10\n      weight_filler {\n        type: \"xavier\"\n      }\n      bias_filler {\n        type: \"constant\"\n      }\n    }\n  }\n  layers {\n    name: \"accuracy\"\n    type: ACCURACY\n    bottom: \"ip2\"\n    bottom: \"label\"\n    top: \"accuracy\"\n  }\n  layers {\n    name: \"loss\"\n    type: SOFTMAX_LOSS\n    bottom: \"ip2\"\n    bottom: \"label\"\n    top: \"loss\"\n  }\n}\n\n# Expected results for first and last 500 iterations:\n# (with portions of log omitted for brevity)\n#\n# Iteration 0, Testing net (#0)\n# Test score #0: 0.067\n# Test score #1: 2.30256\n# Iteration 0, Testing net (#1)\n# Test score #0: 0.0670334\n# Test score #1: 2.30258\n# Iteration 100, lr = 0.00992565\n# Iteration 100, loss = 0.280585\n# Iteration 200, lr = 0.00985258\n# Iteration 200, loss = 0.345601\n# 
Iteration 300, lr = 0.00978075\n# Iteration 300, loss = 0.172217\n# Iteration 400, lr = 0.00971013\n# Iteration 400, loss = 0.261836\n# Iteration 500, lr = 0.00964069\n# Iteration 500, loss = 0.157803\n# Iteration 500, Testing net (#0)\n# Test score #0: 0.968\n# Test score #1: 0.0993772\n# Iteration 500, Testing net (#1)\n# Test score #0: 0.965883\n# Test score #1: 0.109374\n#\n# [...]\n#\n# Iteration 9500, Testing net (#0)\n# Test score #0: 0.9899\n# Test score #1: 0.0308299\n# Iteration 9500, Testing net (#1)\n# Test score #0: 0.996816\n# Test score #1: 0.0118238\n# Iteration 9600, lr = 0.00603682\n# Iteration 9600, loss = 0.0126215\n# Iteration 9700, lr = 0.00601382\n# Iteration 9700, loss = 0.00579304\n# Iteration 9800, lr = 0.00599102\n# Iteration 9800, loss = 0.00500633\n# Iteration 9900, lr = 0.00596843\n# Iteration 9900, loss = 0.00796607\n# Iteration 10000, lr = 0.00594604\n# Iteration 10000, loss = 0.00271736\n# Iteration 10000, Testing net (#0)\n# Test score #0: 0.9914\n# Test score #1: 0.0276671\n# Iteration 10000, Testing net (#1)\n# Test score #0: 0.997782\n# Test score #1: 0.00908085\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_multistep_solver.prototxt",
    "content": "# The train/test net protocol buffer definition\nnet: \"examples/mnist/lenet_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.9\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"multistep\"\ngamma: 0.9\nstepvalue: 5000\nstepvalue: 7000\nstepvalue: 8000\nstepvalue: 9000\nstepvalue: 9500\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet_multistep\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_solver.prototxt",
    "content": "# The train/test net protocol buffer definition\nnet: \"examples/mnist/lenet_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.9\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_solver_adam.prototxt",
    "content": "# The train/test net protocol buffer definition\n# this follows \"ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION\"\nnet: \"examples/mnist/lenet_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# All parameters are from the cited paper above\nbase_lr: 0.001\nmomentum: 0.9\nmomentum2: 0.999\n# since Adam dynamically changes the learning rate, we set the base learning\n# rate to a fixed value\nlr_policy: \"fixed\"\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet\"\n# solver mode: CPU or GPU\ntype: \"Adam\"\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_solver_rmsprop.prototxt",
    "content": "# The train/test net protocol buffer definition\nnet: \"examples/mnist/lenet_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.0\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 10000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/mnist/lenet_rmsprop\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\ntype: \"RMSProp\"\nrms_decay: 0.98\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/lenet_train_test.prototxt",
    "content": "name: \"LeNet\"\nlayer {\n  name: \"mnist\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    scale: 0.00390625\n  }\n  data_param {\n    source: \"examples/mnist/mnist_train_lmdb\"\n    batch_size: 64\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"mnist\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    scale: 0.00390625\n  }\n  data_param {\n    source: \"examples/mnist/mnist_test_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 100\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\n\n\n\n\n\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 200\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\n\n\nlayer {\n  name: \"offset111\"\n  type: \"Convolution\"\n  bottom: \"conv2\"\n  top: \"offset111\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 72\n    kernel_size: 3\n    stride: 1\n    dilation: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"dec\"\n  type: \"DeformableConvolution\"\n  bottom: \"conv2\"\n  bottom: \"offset111\"\n  top: \"dec\"\n  param {\n    lr_mult: 1\n  }\n  param {\n   
 lr_mult: 2\n  }\n  deformable_convolution_param {\n    num_output: 100\n    kernel_size: 3\n    stride: 1\n    pad: 2\n    engine: 1\n    dilation: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\n\n\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"dec\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\n\n\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool2\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"ip2\"\n  bottom: \"label\"\n  top: \"loss\"\n}"
  },
  {
    "path": "caffe-fpn/examples/mnist/mnist_autoencoder.prototxt",
    "content": "name: \"MNISTAutoencoder\"\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    scale: 0.0039215684\n  }\n  data_param {\n    source: \"examples/mnist/mnist_train_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  include {\n    phase: TEST\n    stage: \"test-on-train\"\n  }\n  transform_param {\n    scale: 0.0039215684\n  }\n  data_param {\n    source: \"examples/mnist/mnist_train_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  include {\n    phase: TEST\n    stage: \"test-on-test\"\n  }\n  transform_param {\n    scale: 0.0039215684\n  }\n  data_param {\n    source: \"examples/mnist/mnist_test_lmdb\"\n    batch_size: 100\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"flatdata\"\n  type: \"Flatten\"\n  bottom: \"data\"\n  top: \"flatdata\"\n}\nlayer {\n  name: \"encode1\"\n  type: \"InnerProduct\"\n  bottom: \"data\"\n  top: \"encode1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"encode1neuron\"\n  type: \"Sigmoid\"\n  bottom: \"encode1\"\n  top: \"encode1neuron\"\n}\nlayer {\n  name: \"encode2\"\n  type: \"InnerProduct\"\n  bottom: \"encode1neuron\"\n  top: \"encode2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"encode2neuron\"\n  type: \"Sigmoid\"\n  bottom: \"encode2\"\n  top: 
\"encode2neuron\"\n}\nlayer {\n  name: \"encode3\"\n  type: \"InnerProduct\"\n  bottom: \"encode2neuron\"\n  top: \"encode3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 250\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"encode3neuron\"\n  type: \"Sigmoid\"\n  bottom: \"encode3\"\n  top: \"encode3neuron\"\n}\nlayer {\n  name: \"encode4\"\n  type: \"InnerProduct\"\n  bottom: \"encode3neuron\"\n  top: \"encode4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 30\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"decode4\"\n  type: \"InnerProduct\"\n  bottom: \"encode4\"\n  top: \"decode4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 250\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"decode4neuron\"\n  type: \"Sigmoid\"\n  bottom: \"decode4\"\n  top: \"decode4neuron\"\n}\nlayer {\n  name: \"decode3\"\n  type: \"InnerProduct\"\n  bottom: \"decode4neuron\"\n  top: \"decode3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"decode3neuron\"\n  type: \"Sigmoid\"\n  bottom: \"decode3\"\n  top: 
\"decode3neuron\"\n}\nlayer {\n  name: \"decode2\"\n  type: \"InnerProduct\"\n  bottom: \"decode3neuron\"\n  top: \"decode2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"decode2neuron\"\n  type: \"Sigmoid\"\n  bottom: \"decode2\"\n  top: \"decode2neuron\"\n}\nlayer {\n  name: \"decode1\"\n  type: \"InnerProduct\"\n  bottom: \"decode2neuron\"\n  top: \"decode1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 1\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 784\n    weight_filler {\n      type: \"gaussian\"\n      std: 1\n      sparse: 15\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SigmoidCrossEntropyLoss\"\n  bottom: \"decode1\"\n  bottom: \"flatdata\"\n  top: \"cross_entropy_loss\"\n  loss_weight: 1\n}\nlayer {\n  name: \"decode1neuron\"\n  type: \"Sigmoid\"\n  bottom: \"decode1\"\n  top: \"decode1neuron\"\n}\nlayer {\n  name: \"loss\"\n  type: \"EuclideanLoss\"\n  bottom: \"decode1neuron\"\n  bottom: \"flatdata\"\n  top: \"l2_error\"\n  loss_weight: 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/mnist_autoencoder_solver.prototxt",
    "content": "net: \"examples/mnist/mnist_autoencoder.prototxt\"\ntest_state: { stage: 'test-on-train' }\ntest_iter: 500\ntest_state: { stage: 'test-on-test' }\ntest_iter: 100\ntest_interval: 500\ntest_compute_loss: true\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 10000\ndisplay: 100\nmax_iter: 65000\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/mnist/mnist_autoencoder\"\nmomentum: 0.9\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/mnist_autoencoder_solver_adadelta.prototxt",
    "content": "net: \"examples/mnist/mnist_autoencoder.prototxt\"\ntest_state: { stage: 'test-on-train' }\ntest_iter: 500\ntest_state: { stage: 'test-on-test' }\ntest_iter: 100\ntest_interval: 500\ntest_compute_loss: true\nbase_lr: 1.0\nlr_policy: \"fixed\"\nmomentum: 0.95\ndelta: 1e-8\ndisplay: 100\nmax_iter: 65000\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/mnist/mnist_autoencoder_adadelta_train\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\ntype: \"AdaDelta\"\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/mnist_autoencoder_solver_adagrad.prototxt",
    "content": "net: \"examples/mnist/mnist_autoencoder.prototxt\"\ntest_state: { stage: 'test-on-train' }\ntest_iter: 500\ntest_state: { stage: 'test-on-test' }\ntest_iter: 100\ntest_interval: 500\ntest_compute_loss: true\nbase_lr: 0.01\nlr_policy: \"fixed\"\ndisplay: 100\nmax_iter: 65000\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/mnist/mnist_autoencoder_adagrad_train\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\ntype: \"AdaGrad\"\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/mnist_autoencoder_solver_nesterov.prototxt",
    "content": "net: \"examples/mnist/mnist_autoencoder.prototxt\"\ntest_state: { stage: 'test-on-train' }\ntest_iter: 500\ntest_state: { stage: 'test-on-test' }\ntest_iter: 100\ntest_interval: 500\ntest_compute_loss: true\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 10000\ndisplay: 100\nmax_iter: 65000\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"examples/mnist/mnist_autoencoder_nesterov_train\"\nmomentum: 0.95\n# solver mode: CPU or GPU\nsolver_mode: GPU\ntype: \"Nesterov\"\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/readme.md",
    "content": "---\ntitle: LeNet MNIST Tutorial\ndescription: Train and test \"LeNet\" on the MNIST handwritten digit data.\ncategory: example\ninclude_in_docs: true\npriority: 1\n---\n\n# Training LeNet on MNIST with Caffe\n\nWe will assume that you have Caffe successfully compiled. If not, please refer to the [Installation page](/installation.html). In this tutorial, we will assume that your Caffe installation is located at `CAFFE_ROOT`.\n\n## Prepare Datasets\n\nYou will first need to download the data from the MNIST website and convert it into the format Caffe expects. To do this, simply run the following commands:\n\n    cd $CAFFE_ROOT\n    ./data/mnist/get_mnist.sh\n    ./examples/mnist/create_mnist.sh\n\nIf it complains that `wget` or `gunzip` are not installed, you will need to install them first. After running the scripts there should be two datasets, `mnist_train_lmdb` and `mnist_test_lmdb`.\n\n## LeNet: the MNIST Classification Model\n\nBefore we actually run the training program, let's explain what will happen. We will use the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) network, which is known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with Rectified Linear Unit (ReLU) activations for the neurons.\n\nThe design of LeNet contains the essence of CNNs that are still used in larger models such as the ones in ImageNet. In general, it consists of a convolutional layer followed by a pooling layer, another convolution layer followed by a pooling layer, and then two fully connected layers similar to the conventional multilayer perceptrons. We have defined the layers in `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt`.\n\n## Define the MNIST Network\n\nThis section explains the `lenet_train_test.prototxt` model definition that specifies the LeNet model for MNIST handwritten digit classification. 
We assume that you are familiar with [Google Protobuf](https://developers.google.com/protocol-buffers/docs/overview), and assume that you have read the protobuf definitions used by Caffe, which can be found at `$CAFFE_ROOT/src/caffe/proto/caffe.proto`.\n\nSpecifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.caffe_pb2.NetParameter`) protobuf. We will start by giving the network a name:\n\n    name: \"LeNet\"\n\n### Writing the Data Layer\n\nCurrently, we will read the MNIST data from the lmdb we created earlier in the demo. This is defined by a data layer:\n\n    layer {\n      name: \"mnist\"\n      type: \"Data\"\n      transform_param {\n        scale: 0.00390625\n      }\n      data_param {\n        source: \"mnist_train_lmdb\"\n        backend: LMDB\n        batch_size: 64\n      }\n      top: \"data\"\n      top: \"label\"\n    }\n\nSpecifically, this layer has name `mnist`, type `Data`, and it reads the data from the given lmdb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \\[0,1\\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.\n\n### Writing the Convolution Layer\n\nLet's define the first convolution layer:\n\n    layer {\n      name: \"conv1\"\n      type: \"Convolution\"\n      param { lr_mult: 1 }\n      param { lr_mult: 2 }\n      convolution_param {\n        num_output: 20\n        kernel_size: 5\n        stride: 1\n        weight_filler {\n          type: \"xavier\"\n        }\n        bias_filler {\n          type: \"constant\"\n        }\n      }\n      bottom: \"data\"\n      top: \"conv1\"\n    }\n\nThis layer takes the `data` blob (it is provided by the data layer), and produces the `conv1` blob. 
It produces outputs of 20 channels, with the convolutional kernel size 5 and carried out with stride 1.\n\nThe fillers allow us to randomly initialize the value of the weights and bias. For the weight filler, we will use the `xavier` algorithm that automatically determines the scale of initialization based on the number of input and output neurons. For the bias filler, we will simply initialize it as constant, with the default filling value 0.\n\n`lr_mult`s are the learning rate adjustments for the layer's learnable parameters. In this case, we will set the weight learning rate to be the same as the learning rate given by the solver during runtime, and the bias learning rate to be twice as large as that - this usually leads to better convergence rates.\n\n### Writing the Pooling Layer\n\nPhew. Pooling layers are actually much easier to define:\n\n    layer {\n      name: \"pool1\"\n      type: \"Pooling\"\n      pooling_param {\n        kernel_size: 2\n        stride: 2\n        pool: MAX\n      }\n      bottom: \"conv1\"\n      top: \"pool1\"\n    }\n\nThis says we will perform max pooling with a pool kernel size 2 and a stride of 2 (so no overlapping between neighboring pooling regions).\n\nSimilarly, you can write up the second convolution and pooling layers. Check `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt` for details.\n\n### Writing the Fully Connected Layer\n\nWriting a fully connected layer is also simple:\n\n    layer {\n      name: \"ip1\"\n      type: \"InnerProduct\"\n      param { lr_mult: 1 }\n      param { lr_mult: 2 }\n      inner_product_param {\n        num_output: 500\n        weight_filler {\n          type: \"xavier\"\n        }\n        bias_filler {\n          type: \"constant\"\n        }\n      }\n      bottom: \"pool2\"\n      top: \"ip1\"\n    }\n\nThis defines a fully connected layer (known in Caffe as an `InnerProduct` layer) with 500 outputs. 
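\n\nIn other words (a standard description of the layer, not spelled out in the tutorial itself): `ip1` computes y = Wx + b, where x is the `pool2` blob flattened to a vector, W is a 500-row weight matrix initialized with the `xavier` filler, and b is a 500-dimensional bias initialized to the constant 0.\n\n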
All other lines look familiar, right?\n\n### Writing the ReLU Layer\n\nA ReLU Layer is also simple:\n\n    layer {\n      name: \"relu1\"\n      type: \"ReLU\"\n      bottom: \"ip1\"\n      top: \"ip1\"\n    }\n\nSince ReLU is an element-wise operation, we can do *in-place* operations to save some memory. This is achieved by simply giving the same name to the bottom and top blobs. Of course, do NOT use duplicated blob names for other layer types!\n\nAfter the ReLU layer, we will write another `InnerProduct` layer:\n\n    layer {\n      name: \"ip2\"\n      type: \"InnerProduct\"\n      param { lr_mult: 1 }\n      param { lr_mult: 2 }\n      inner_product_param {\n        num_output: 10\n        weight_filler {\n          type: \"xavier\"\n        }\n        bias_filler {\n          type: \"constant\"\n        }\n      }\n      bottom: \"ip1\"\n      top: \"ip2\"\n    }\n\n### Writing the Loss Layer\n\nFinally, we will write the loss!\n\n    layer {\n      name: \"loss\"\n      type: \"SoftmaxWithLoss\"\n      bottom: \"ip2\"\n      bottom: \"label\"\n    }\n\nThe `SoftmaxWithLoss` layer implements both the softmax and the multinomial logistic loss (which saves time and improves numerical stability). It takes two blobs, the first one being the prediction and the second one being the `label` provided by the data layer (remember it?). It does not produce any outputs - all it does is compute the loss function value, report it when backpropagation starts, and initiate the gradient with respect to `ip2`. 
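\n\nConcretely (the standard formulation, which the tutorial leaves implicit): given class scores z_0, ..., z_9 from `ip2` and the true label y, the layer computes the loss L = -log(exp(z_y) / sum_j exp(z_j)), fusing the softmax and the log loss into one numerically stable computation.\n\n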
This is where all magic starts.\n\n\n### Additional Notes: Writing Layer Rules\n\nLayer definitions can include rules for whether and when they are included in the network definition, like the one below:\n\n    layer {\n      // ...layer definition...\n      include: { phase: TRAIN }\n    }\n\nThis is a rule that controls layer inclusion in the network based on the current network state.\nYou can refer to `$CAFFE_ROOT/src/caffe/proto/caffe.proto` for more information about layer rules and model schema.\n\nIn the above example, this layer will be included only in the `TRAIN` phase.\nIf we change `TRAIN` to `TEST`, then this layer will be used only in the test phase.\nBy default, that is, without layer rules, a layer is always included in the network.\nThus, `lenet_train_test.prototxt` has two `Data` layers defined (with different `batch_size`), one for the training phase and one for the testing phase.\nAlso, there is an `Accuracy` layer which is included only in the `TEST` phase for reporting the model accuracy every 500 iterations, as defined by `test_interval` in `lenet_solver.prototxt`.\n\n## Define the MNIST Solver\n\nCheck out the comments explaining each line in the prototxt `$CAFFE_ROOT/examples/mnist/lenet_solver.prototxt`:\n\n    # The train/test net protocol buffer definition\n    net: \"examples/mnist/lenet_train_test.prototxt\"\n    # test_iter specifies how many forward passes the test should carry out.\n    # In the case of MNIST, we have test batch size 100 and 100 test iterations,\n    # covering the full 10,000 testing images.\n    test_iter: 100\n    # Carry out testing every 500 training iterations.\n    test_interval: 500\n    # The base learning rate, momentum and the weight decay of the network.\n    base_lr: 0.01\n    momentum: 0.9\n    weight_decay: 0.0005\n    # The learning rate policy\n    lr_policy: \"inv\"\n    gamma: 0.0001\n    power: 0.75\n    # Display every 100 iterations\n    display: 100\n    # The maximum number of iterations\n    max_iter: 10000\n    # 
snapshot intermediate results\n    snapshot: 5000\n    snapshot_prefix: \"examples/mnist/lenet\"\n    # solver mode: CPU or GPU\n    solver_mode: GPU\n\n\n## Training and Testing the Model\n\nTraining the model is simple after you have written the network definition protobuf and solver protobuf files. Simply run `train_lenet.sh`, or the following command directly:\n\n    cd $CAFFE_ROOT\n    ./examples/mnist/train_lenet.sh\n\n`train_lenet.sh` is a simple script, but here is a quick explanation: the main tool for training is `caffe` with action `train` and the solver protobuf text file as its argument.\n\nWhen you run the code, you will see a lot of messages flying by like this:\n\n    I1203 net.cpp:66] Creating Layer conv1\n    I1203 net.cpp:76] conv1 <- data\n    I1203 net.cpp:101] conv1 -> conv1\n    I1203 net.cpp:116] Top shape: 20 24 24\n    I1203 net.cpp:127] conv1 needs backward computation.\n\nThese messages tell you the details about each layer, its connections and its output shape, which may be helpful in debugging. After the initialization, the training will start:\n\n    I1203 net.cpp:142] Network initialization done.\n    I1203 solver.cpp:36] Solver scaffolding done.\n    I1203 solver.cpp:44] Solving LeNet\n\nBased on the solver setting, we will print the training loss function every 100 iterations, and test the network every 500 iterations. You will see messages like this:\n\n    I1203 solver.cpp:204] Iteration 100, lr = 0.00992565\n    I1203 solver.cpp:66] Iteration 100, loss = 0.26044\n    ...\n    I1203 solver.cpp:84] Testing net\n    I1203 solver.cpp:111] Test score #0: 0.9785\n    I1203 solver.cpp:111] Test score #1: 0.0606671\n\nFor each training iteration, `lr` is the learning rate of that iteration, and `loss` is the training function. 
For the output of the testing phase, score 0 is the accuracy, and score 1 is the testing loss function.\n\nAnd after a few minutes, you are done!\n\n    I1203 solver.cpp:84] Testing net\n    I1203 solver.cpp:111] Test score #0: 0.9897\n    I1203 solver.cpp:111] Test score #1: 0.0324599\n    I1203 solver.cpp:126] Snapshotting to lenet_iter_10000\n    I1203 solver.cpp:133] Snapshotting solver state to lenet_iter_10000.solverstate\n    I1203 solver.cpp:78] Optimization Done.\n\nThe final model, stored as a binary protobuf file, is stored at\n\n    lenet_iter_10000\n\nwhich you can deploy as a trained model in your application, if you are training on a real-world application dataset.\n\n### Um... How about GPU training?\n\nYou just did! All the training was carried out on the GPU. In fact, if you would like to do training on CPU, you can simply change one line in `lenet_solver.prototxt`:\n\n    # solver mode: CPU or GPU\n    solver_mode: CPU\n\nand you will be using CPU for training. Isn't that easy?\n\nMNIST is a small dataset, so training with GPU does not really introduce too much benefit due to communication overheads. On larger datasets with more complex models, such as ImageNet, the computation speed difference will be more significant.\n\n### How to reduce the learning rate at fixed steps?\nLook at lenet_multistep_solver.prototxt\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_lenet.sh",
    "content": "#!/usr/bin/env sh\nset -e\n\n./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_lenet_adam.sh",
    "content": "#!/usr/bin/env sh\nset -e\n\n./build/tools/caffe train --solver=examples/mnist/lenet_solver_adam.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_lenet_consolidated.sh",
    "content": "#!/usr/bin/env sh\nset -e\n\n./build/tools/caffe train \\\n  --solver=examples/mnist/lenet_consolidated_solver.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_lenet_docker.sh",
    "content": "#!/usr/bin/env sh\nset -e\n# The following example allows for the MNIST example (using LeNet) to be\n# trained using the caffe docker image instead of building from source.\n#\n# The GPU-enabled version of Caffe can be used, assuming that nvidia-docker\n# is installed, and the GPU-enabled Caffe image has been built.\n# Setting the GPU environment variable to 1 will enable the use of nvidia-docker.\n# e.g.\n#   GPU=1 ./examples/mnist/train_lenet_docker.sh [ADDITIONAL_CAFFE_ARGS]\n#\n# Any arguments following the script are passed directly to caffe\n# when training the network.\n#\n# The steps that are performed by the script are as follows:\n# 1. The MNIST data set is downloaded\n#    (see data/mnist/get_mnist.sh)\n# 2. An LMDB database is created from the downloaded data\n#    (see examples/mnist/create_mnist.sh)\n# 3. A caffe network based on the LeNet solver is trained.\n#    (see examples/mnist/lenet_solver.prototxt)\n#\n# For each of these, a step is executed to ensure that certain prerequisites\n# are available, after which a command that actually performs the work is\n# executed.\n#\n# In order to provide additional flexibility, the following shell (environment)\n# variables can be used to control the execution of each of the phases:\n#\n# DOWNLOAD_DATA: Enable (1) or disable (0) the downloading of the MNIST dataset\n# CREATE_LMDB: Enable (1) or disable (0) the creation of the LMDB database\n# TRAIN: Enable (1) or disable (0) the training of the LeNet network.\n#\n# As an example, assuming that the data set has been downloaded, and an LMDB\n# database created, the following command can be used to train the LeNet\n# network with GPU computing enabled.\n#\n# DOWNLOAD_DATA=0 CREATE_LMDB=0 GPU=1 ./examples/mnist/train_lenet_docker.sh\n#\n\n\nif [ x\"$(uname -s)\" != x\"Linux\" ]\nthen\necho \"\"\necho \"This script is designed to run on Linux.\"\necho \"There may be problems with the way Docker mounts host volumes on other\"\necho 
\"systems which will cause the docker commands to fail.\"\necho \"\"\nread -p \"Press [ENTER] to continue...\" key\necho \"\"\nfi\n\n\n# Check if GPU mode has been enabled and set the docker executable accordingly\nif [ ${GPU:-0} -eq 1 ]\nthen\nDOCKER_CMD=nvidia-docker\nIMAGE=caffe:gpu\nelse\nDOCKER_CMD=docker\nIMAGE=caffe:cpu\nfi\necho \"Using $DOCKER_CMD to launch $IMAGE\"\n\n# On non-Linux systems, the Docker host is typically a virtual machine.\n# This means that the user and group id's may be different.\n# On OS X, for example, the user and group are 1000 and 50, respectively.\nif [ x\"$(uname -s)\" != x\"Linux\" ]\nthen\nCUID=1000\nCGID=50\nelse\nCUID=$(id -u)\nCGID=$(id -g)\nfi\n\n# Define some helper variables to make the running of the actual docker\n# commands less verbose.\n# Note:\n#   -u $CUID:$CGID             runs the docker image as the current user to ensure\n#                              that the file permissions are compatible with the\n#                              host system. 
The variables CUID and CGID have been\n#                              set above depending on the host operating system.\n#   --volume $(pwd):/workspace mounts the current directory as the docker volume\n#                              /workspace\n#   --workdir /workspace       ensures that the docker container starts in the right\n#                              working directory\nDOCKER_OPTIONS=\"--rm -ti -u $CUID:$CGID --volume=$(pwd):/workspace --workdir=/workspace\"\nDOCKER_RUN=\"$DOCKER_CMD run $DOCKER_OPTIONS $IMAGE\"\n\n# Download the data\nif [ ${DOWNLOAD_DATA:-1} -eq 1 ]\nthen\n$DOCKER_RUN bash -c \"mkdir -p ./data/mnist;\n                     cp -ru \\$CAFFE_ROOT/data/mnist/get_mnist.sh ./data/mnist/\"\n$DOCKER_RUN ./data/mnist/get_mnist.sh\nfi\n\n# Create the LMDB database\nif [ ${CREATE_LMDB:-1} -eq 1 ]\nthen\n$DOCKER_RUN bash -c \"mkdir -p ./examples/mnist;\n                     cp -ru \\$CAFFE_ROOT/examples/mnist/create_mnist.sh ./examples/mnist/;\n                     sed -i s#BUILD=build#BUILD=\\$CAFFE_ROOT/build# ./examples/mnist/create_mnist.sh\"\n$DOCKER_RUN ./examples/mnist/create_mnist.sh\nfi\n\n# Train the network\nif [ ${TRAIN:-1} -eq 1 ]\nthen\n$DOCKER_RUN bash -c \"cp \\$CAFFE_ROOT/examples/mnist/lenet_solver.prototxt ./examples/mnist/;\n                     cp \\$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt ./examples/mnist/\"\n    # Ensure that the solver_mode is compatible with the desired GPU mode.\n    if [ ${GPU:-0} -eq 0 ]\n    then\n    $DOCKER_RUN sed -i 's#solver_mode: GPU#solver_mode: CPU#' ./examples/mnist/lenet_solver.prototxt\n    fi\n$DOCKER_RUN caffe train --solver=examples/mnist/lenet_solver.prototxt \"$@\"\nfi\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_lenet_rmsprop.sh",
    "content": "#!/usr/bin/env sh\nset -e\n\n./build/tools/caffe train \\\n    --solver=examples/mnist/lenet_solver_rmsprop.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_mnist_autoencoder.sh",
    "content": "#!/usr/bin/env sh\nset -e\n\n./build/tools/caffe train \\\n  --solver=examples/mnist/mnist_autoencoder_solver.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_mnist_autoencoder_adadelta.sh",
    "content": "#!/bin/bash\nset -e\n\n./build/tools/caffe train \\\n  --solver=examples/mnist/mnist_autoencoder_solver_adadelta.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_mnist_autoencoder_adagrad.sh",
    "content": "#!/bin/bash\nset -e\n\n./build/tools/caffe train \\\n  --solver=examples/mnist/mnist_autoencoder_solver_adagrad.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/mnist/train_mnist_autoencoder_nesterov.sh",
    "content": "#!/bin/bash\nset -e\n\n./build/tools/caffe train \\\n  --solver=examples/mnist/mnist_autoencoder_solver_nesterov.prototxt $@\n"
  },
  {
    "path": "caffe-fpn/examples/net_surgery/bvlc_caffenet_full_conv.prototxt",
    "content": "# Fully convolutional network version of CaffeNet.\nname: \"CaffeNetConv\"\ninput: \"data\"\ninput_shape {\n  dim: 1\n  dim: 3\n  dim: 451\n  dim: 451\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: 
\"conv5\"\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6-conv\"\n  type: \"Convolution\"\n  bottom: \"pool5\"\n  top: \"fc6-conv\"\n  convolution_param {\n    num_output: 4096\n    kernel_size: 6\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6-conv\"\n  top: \"fc6-conv\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6-conv\"\n  top: \"fc6-conv\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7-conv\"\n  type: \"Convolution\"\n  bottom: \"fc6-conv\"\n  top: \"fc7-conv\"\n  convolution_param {\n    num_output: 4096\n    kernel_size: 1\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7-conv\"\n  top: \"fc7-conv\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7-conv\"\n  top: \"fc7-conv\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8-conv\"\n  type: \"Convolution\"\n  bottom: \"fc7-conv\"\n  top: \"fc8-conv\"\n  convolution_param {\n    num_output: 1000\n    kernel_size: 1\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"fc8-conv\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/examples/net_surgery/conv.prototxt",
    "content": "# Simple single-layer network to showcase editing model parameters.\nname: \"convolution\"\ninput: \"data\"\ninput_shape {\n  dim: 1\n  dim: 1\n  dim: 100\n  dim: 100\n}\nlayer {\n  name: \"conv\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv\"\n  convolution_param {\n    num_output: 3\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/net_surgery.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Net Surgery\\n\",\n    \"\\n\",\n    \"Caffe networks can be transformed to your particular needs by editing the model parameters. The data, diffs, and parameters of a net are all exposed in pycaffe.\\n\",\n    \"\\n\",\n    \"Roll up your sleeves for net surgery with pycaffe!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"import Image\\n\",\n    \"\\n\",\n    \"# Make sure that caffe is on the python path:\\n\",\n    \"caffe_root = '../'  # this file is expected to be in {caffe_root}/examples\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, caffe_root + 'python')\\n\",\n    \"\\n\",\n    \"import caffe\\n\",\n    \"\\n\",\n    \"# configure plotting\\n\",\n    \"plt.rcParams['figure.figsize'] = (10, 10)\\n\",\n    \"plt.rcParams['image.interpolation'] = 'nearest'\\n\",\n    \"plt.rcParams['image.cmap'] = 'gray'\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Designer Filters\\n\",\n    \"\\n\",\n    \"To show how to load, manipulate, and save parameters we'll design our own filters into a simple network that's only a single convolution layer. 
This net has two blobs, `data` for the input and `conv` for the convolution output and one parameter `conv` for the convolution filter weights and biases.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"blobs ['data', 'conv']\\n\",\n      \"params ['conv']\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAlIAAAHNCAYAAADVB5V4AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvWuMZdl13/c/tx733np393T3PPkYDUccPsQZiaRkCYpE\\n\",\n       \"CYklOwYhfwjCIAEiJDLswAkQf3AQIEoC64OcIEDiIHESBAiCCAkkJ4GtJHCM+KHQjmGZtmxKJBVC\\n\",\n       \"wxkOZyac4Uz3dHe97q1bt+7Jh+r/rt/517499ER008xZQKGq7j1nn73XXns9/mvtfZq2bdVTTz31\\n\",\n       \"1FNPPfXU0z86DR52B3rqqaeeeuqpp57+SaXekeqpp5566qmnnnp6j9Q7Uj311FNPPfXUU0/vkXpH\\n\",\n       \"qqeeeuqpp5566uk9Uu9I9dRTTz311FNPPb1H6h2pnnrqqaeeeuqpp/dIvSPVU089/b5T0zT/RdM0\\n\",\n       \"/87v97Xv0s4HmqZZNE1T1WtN03y5aZp/6v/rc3rqqaeeSE1/jlRPPfX0vUBN03xA0suSVtu2XTzc\\n\",\n       \"3vTUU0//f6Eekeqpp55+X2kZItRTTz319L1IvcLrqaee3pWapnmuaZr/s2maO/dTZH8E3/2399Nz\\n\",\n       \"f7lpmkNJn7n/2S/hmj/dNM03m6Z5vWmaf/V+Cu5p3P9L9//+yfvX/Kmmab51/55/Ge384aZp/mHT\\n\",\n       \"NPeapnm1aZp/7x9hDK80TfNT9//+95um+R+bpvmVpmn2m6b5naZpPtQ0zb99/7nfaJrmn8a9P980\\n\",\n       \"ze/ev/alpmn+WLT9oPENm6b5j+63+eZ9Xo3+Ueegp556+u6k3pHqqaeeHkhN06xJ+l8l/RVJ1yX9\\n\",\n       \"65L++6ZpnsVln5P0S23bbkn6vyS193/UNM3PSPo3Jf20pA9J+sl4RLn2Pt2UtCPpcUn/iqT/vGma\\n\",\n       \"3fvfHUr6F9u23ZX0hyX9iaZpPvttDiXrGP5ZSf+dpCuS/qGkv3r/88cl/ZKk/wrXfkvSH27bdkfS\\n\",\n       \"z0v6j5umeeHbHN+flfSMpE/c//2EpH/32+xzTz319F1OvSPVU089vRv9iKTNtm3/bNu287Ztf0PS\\n\",\n       \"/6Zz58n0l9q2/TuS1LbtSdz/z0n6b9q2/b/btp1IqqFIDf4+lfRn2rY9a9v2f9e58/T999v+fNu2\\n\",\n       
\"X7n/95ck/aqkn3iP4/qbbdv+1bZtzyT9T5KuSfqz9///NUkfaJpm5/6z/nLbtl+///fflPR/SPrx\\n\",\n       \"dxtf0zSNpF+Q9Kfatr3btu2hpF+W9M+/xz731FNP32W0+rA70FNPPX3X0+OSXovPvnH/c+kc6Xn9\\n\",\n       \"Afc/JukL+P9B10rS7SgWP5a0JUlN0/ywzhGej0palzSU9Bfepb1l9Bb+nki61V7svpnc/70lab9p\\n\",\n       \"mp/VuYP0IZ0HoBuSfuf+NQ8a3/X71/7WuU8l6dxp7IPYnnr6HqF+MffUU0/vRt+U9FQDT0DS+yX9\\n\",\n       \"P9/m/W9Iegr/P1W55tvdPvw/SPpLkp5s23ZP0n+p77Aea5pmKOl/lvQfSrrRtu0VSX9ZFyjag8Z3\\n\",\n       \"S+dO2Ufatr1y/2fvfoqwp556+h6g3pHqqaee3o1+U+eo0J9ummataZqf1Hl90a/e/76p3NPg878g\\n\",\n       \"6eebpvlw0zQbkn7xAde+G21JutO27axpmk9L+hf07Tth75XW7//ckrS4j079M/h+6fjuI2v/taT/\\n\",\n       \"pGma65LUNM0TTdPw/p566umfYOodqZ566umB1LbtqaQ/IulnJb0t6T+T9C+1bft7vkSXnZnyWdu2\\n\",\n       \"f0XSfyrpNyT9nqS/c/+akyX3P8gx+tck/ZmmafZ17rD8WuW53w4t6/Ol/9u2PZD0b+jcYXpH57Vh\\n\",\n       \"v14uevfx/VuSvibpN5umuafzonYW6vfUU0//BFN/IGdPPfX0j5WapnlO0pckrX8vHpz5vT6+nnrq\\n\",\n       \"qUs9ItVTTz19x6lpmp+7f57SFUn/gaT/5XvJyfheH19PPfW0nHpHqqeeevrHQX9M52cxfU3nxxv8\\n\",\n       \"iYfbnd93+l4fX0899bSE+tReTz311FNPPfXU03ukh3KO1BNPPNE2TaPFYiHvqB4MBhoMBlosFuV/\\n\",\n       \"O3n+3TSN2rYVnb/FYqGVlZXqc05PT7VYLDQcDstnvLdpms5z3Jemacqz+FzT2dlZ55n+brFYlHH4\\n\",\n       \"Jx1VjpfP8/0cvz9L8rPdL/PS95DIV//tdufzebl/MBjo7OxMknRycqL5fK7FYqGzs7NOv9xX3ucx\\n\",\n       \"um3zb2VlpVzvfq+urmplZUVra2vl89XV1dI3f+/71tbWNBgMtLKyUr73vb7O/fAzJWk+n+vs7Eyn\\n\",\n       \"p6c6OTnR6empjo+PJUmz2Uyz2UxnZ2c6OzvTdDot43Pbq6urha85d23b6uzsrDM2X+f+sR32lfeb\\n\",\n       \"b6enp6W/Jycnl9bBYrFQ27YdnrkdEvtLWXAfSOvr6+U6yvLKykpZU+6Tn3N2dtaRi9PT0yIz5oHX\\n\",\n       \"p+cq57Bt2zLffObZ2dklHZAyzb7kGC2P8/m8fMd1nOs++8223EZtTS1bl9RVlo1cp+4jn+371tbW\\n\",\n       \"tLKyUtaI52gwGGg8Hms4HGo0GpU2LZ8ep/uc7fr5ptXV1fIzGo3KnGU/Z7OZjo6OdHJyXi9/fHys\\n\",\n       \"k5MTHR8fa39/X7PZrHOv1+dgMNBsNtPp6WlHH3D+qE/5v+eLvLWs+Huubz8v9bWfSX3n+9q2LfJt\\n\",\n       \"vtV4Zdvk/1dXV8sapL7ydysrK1pfX+/MI+ns7KzMK9e9pKKL5vO5ZrNZGZ919Gw203w+78ilx8Hx\\n\",\n       
\"5/z7/tXV1aodSVk+OzvT2tqatra2tL6+rvX1dY1G528yGo/HGo1GHVvHZ7m/bdtqZWVFk8mkyM3R\\n\",\n       \"0VFn/LQltD2eY/LIY1wsFh09NhqNihxT1wwGA62trZX7PW+cX//YP+AYTk5OOuvTbQ4GA62vr+tv\\n\",\n       \"/+2/Xd1d/FAP5OTkp3GpkYW7ptxSWbr9FOhUvlRwXNxcvOwrhY6LM9uuKWE6HxQMGxLyo+YUuR90\\n\",\n       \"xmqGM9twX7y4OEb/b8UkXRhezwevYz+TJzlGj82fe6HZQNvBPT09VdM0Rei9+Gu8oMHwuOikeBFQ\\n\",\n       \"yfNzf0cn0f3zdTZSVLocHwOAmhOZPHe/05Fi36yI3A5/WxH6s1zo6USQas6Vx2eniW36ev+4Xx6D\\n\",\n       \"nU+PwfflWkonM+Uz1yWNDOfE8lbTC/zMz2a77puNPOc428hgin9TFvkcyg+NOR0o3pPt+38avGXO\\n\",\n       \"oQ0Ency1tbXyXDvGkooTs76+Xox/ytNoNCpGkkZrNpsVZ97P9H1ux/JKPnrsdrBozNx2zQiTGBT6\\n\",\n       \"mXTO2VeOJYljTVmkITWPvG7szDMoonw/SD/bAbY+SpmhLPh5/JyBrXShE8kH/+9167mogQm5HpYF\\n\",\n       \"IPx7fX298MUOYwZZvp5OiHnlZ9jx87gsjx57ggnU2an/amtQurAD7hPXBfuba926NGWQPPZaok7k\\n\",\n       \"2GvU10j11FNPPfXUU089vUd6KIhUQrn+jPAeI5OE52sRrz8n2fs0qpJ9yL+XpfLyvloUZGLkW0sL\\n\",\n       \"Ealh5OroIlNUCb8yYuczE4FJpGRZipFtJIrk69KrTwQvowf2MXnrMRLpWBZZsm1GXgnVJkpkfrmv\\n\",\n       \"hMKTl0wFcYyJspHMH/LF0WimpXiN++7ok+1ndJ9RG9uuESNnPt9tE+1gSnR1dbWkGCR1/jZ/2KZR\\n\",\n       \"KCJTfC7TTSbKVkah7kvKlL/jmBMFznRAppm9/i1zNWKKJJ+Ra8Tk9ZmIOsdBhNd98N+1SNkoD++b\\n\",\n       \"zWZlXayurmo2m5Vo2eiU59XolPm2vr5evtvY2CjtNE2jtbU1DYfDS+j+6elp0Uuz2ayDco5Go/L/\\n\",\n       \"cDjU1tZWQZ88Bqd6c70wzWIko5YZyPVCvZI6jql3y1eiGWyH3zFd5LY4T5mKdB8TJed6MoLj/i7T\\n\",\n       \"r9PptIN0MIVHOaFey/Xk750GJI/5P/mW65F21qijZcufU8aJVhnlNLHf8/m8yJd0IcMea9oWzgOf\\n\",\n       \"R31qtNa6iXLpezKN7jWWKHUikkTkh8NhR7+kj/CgbNlDcaRcf/Fuxt1Uc6DSmDJ1lQ4RnYma8ksj\\n\",\n       \"RZg+IfmaEmV7tTSGKVMstVRDpm9qfaNgGKZcljpIZZFUUyx8tttIaPhBqRbynIqByiH7QMeC/6cS\\n\",\n       \"rEHUNn40DHRoamkBGi8aWjs7mWo1eRxe3CmH7EfNYGTagvfW/qahogzm9w+613VblhvWszgNQyVF\\n\",\n       \"3pBvNvRUnBybx03nz2Pl5zXFV1uHOXfpcHK8NbmiwiRRRjIwyrXJOc60L/ubTikdVxuTWuqVfXU9\\n\",\n       \"Uuq7tbU1zedznZ6eajweSzp3bE5PTzUcDotTZePlmir3Y21tTRsbG5LUSfNxLJwnp3S4Dj3no9Go\\n\",\n       
\"OFDr6+udtK9rVuxQUV6X6Xp/TqeZDom/s5OVASmvqzlSKQM1R9/EObKusZzSUTU/7EzQqWAKy8+z\\n\",\n       \"TuA95pn5armhfiDvycO0cymzXNt0UsgPyy9tyXw+13g8Lk6465DMG9dImS+cQ/d/Pp9rOp2WNl03\\n\",\n       \"5vor8t68qQEP7Bf7kGRniXWzGYBxjdqZo8NF8lpjepZB3zJ6KI6UF4SFS7pcgFYz1Mu+s2BZ4DhR\\n\",\n       \"ifRQGOmAJWqVTlQ6QdLFxFppsdbFPxTgzDcvMwrJq6RliJT7zfFln0kWuHQaaTxS4dXQoFq7OR5G\\n\",\n       \"eUYkzLc0IHSe2J4VP4lzV3MibYASAXJBPcfN+7wg0/GmE8L+nJ6elgXNaI/oVdb+pANtviRRcTEA\\n\",\n       \"YXTlfidfbNzIRyKAvocKfjablTXFfvK35SZlkQhBLSp3PyhXlI1a/RINajqWjLhrwQCdu3T0jejw\\n\",\n       \"Gdkn8sY1M0ROEo2lImbUTIch5c28MiKUhtE65fDwsGOwiFKxEN3y4s8530YW3Bfy1M82atS2baf4\\n\",\n       \"mePZ2Njo1MK4bW+YmEwmHcTEsmLjzfusbygfUleW+LnbY11dGjmui1zH+Xmi0v6faycdcmYUqAey\\n\",\n       \"XsnriN/T4bMT7Pmn82TKNUZnJvV+Xlfru5+dn5mnGxsbGo/HpehcunAkE6RwW9YVRjo9jpOTk0vB\\n\",\n       \"ittMJ4/rn85eAiJeX9ZDGSBbxiwDXjM5hpqudV8pLw9yoEwPxZGqIUy16CWNeP5tIrPTuFPYsu0H\\n\",\n       \"OWwPui6NgP/f2trSYrEou1aYJnFfahD0g1I2y/rIsdfuz+hpWXTO67NQl22z6JTOIgU8F1cqr+SX\\n\",\n       \"+WOlYCcrHRRCvORbRlN8Jp1rOgUe2zJidOO+1CJct51zyfHVEJRlz2SElPPNtOayaKtGq6urGo/H\\n\",\n       \"BWHgzhYTETj3n3NsR9TP4hzTQNBYZWFmyhwRwORVBjQcZxpSts/ou9Yuv2O0nw5YRtyZcq31xd9R\\n\",\n       \"4ZM36bRJ3eJ6ts35d8GxkbDFYlGifbc3nU5L0bh/M4jz3Cca6X6lvNJBbZqLtI/lwCksr8ssql5Z\\n\",\n       \"WSlIhh0pOoCZikkkcVn6jP30d6mj89osneC9lBcaaDpl5Jvn07KUc8ni+uRpkp9np9t8zLS626L9\\n\",\n       \"Yhsu3M6AmTwl+sI2OPe+zw6OHWXu2vPGBfMisyAMZtimHTMHZ3ROzUvPZdM0nfRibWMTx+Q5465E\\n\",\n       \"o6UMHIiaWi5qjiUdd+rEZYEh6aHXSOXkZzTo73xfMraWusnrKUzLnDQKbEaFqYRTCXJS1tbWyk4Q\\n\",\n       \"RnFMi+T2WaJiOU6OaZmx8PcP4lt64eY9lUrNQawZjexD7m6wMUklQqXExcBnZ/uG2wkNW+F4LrjA\\n\",\n       \"OXYa0ZzD4XCoweC8ZoEQt6FoO1E1VKo29uTRMgWakL+vpbKko2H5riGYbpPjZd9Go1ExfHQIrTCS\\n\",\n       \"7/4ud9xQdigbtUiQqY1Ee2ikUmbYb/fTz6OyzfVrfqX8p9NOxeiaFtaskei85Nbx/J9rP50nGyFJ\\n\",\n       \"HSRpWX1JGj1f73XIMR4dHRVEwwbKCK+PNXD7dL69ZigDdBz8HP6YWAs0HA4vbcl3W0YMmE6cTCaa\\n\",\n       
\"TCalTfaPfeLcEkFNBMW8zpS+7880UNqN1HWeW1IGbzT81CmWd/Yz26/ZN/PCa9N1aJI69WfZT/eF\\n\",\n       \"Nsv99fe0d+lEel2srq52HI3hcFhknulIP399fb2DJtLhp7NP/lt/s+bJ8s5Uam3Ocodd6o4sUSAP\\n\",\n       \"0kHjZ9mGx0J542/Td50jZcNVixRraT9peY2Pr2UxJwdsRcRoS7q8JdXPdxtZdFZLV9UMLRUwhdTn\\n\",\n       \"GTnq4aRyLKaag0dllf2mYUo+8HmpjDhu9sNKxHynQlldXdXJyUkR5OSplXsa2nRslzkhdBjIJ6N9\\n\",\n       \"jlqcguAC5pi4aDlOR5D+YbrM33FuyYNaFMb2H4SqWBYtI4mK0DAxuk6lxmtSeZq2trYKMmA55BiT\\n\",\n       \"T/7OxbBO0XC7Mucx5akW6XJeE33NNmtyaaVt+aUO8PNpGBOd5f+cRxrudKLosKaDXgsoMuBiSsHE\\n\",\n       \"SDjboQwlslBzEpb1M5HDZWlSo0Scd39HXrN/kgrCbid/ZWXl0hZ9/3bAkw6h+zibzTprkQ5IyhQd\\n\",\n       \"/pQny0Q6Xrkm2Re2l8FlIra8j+k+857nQBFdYmDGZ5qfnO+1tTWdnJyUtLHJ/5vfDMLolJlqtsPy\\n\",\n       \"yPVFJ8q8J/9cX+dn2jln4FFzUt2GA14Wn/se2xTKs9vMAIJjMVrHdDBlv7beag6S+0rbn8FpbcPA\\n\",\n       \"sr6R+uMPeuqpp5566qmnnt4jPbQDOTPKq6FG9KLtmdvDpIf4btub7ZHXoLlEpdgXEqOUjJoyIkp0\\n\",\n       \"ROqecJ31R0zfZX/ordfgz2XpUPcvd5cxQieMnMgSU5isr3G06wiMCBn5xkidffV15GFNFjJiIIJF\\n\",\n       \"apruDrq8z5EzecCxZHGk++7xJ4pX47XbSpSHfOD9teLFZeki3+OxMLpMlCZl0SdjJ1rj6NKpFp7s\\n\",\n       \"7qjRhcS5LjJNlM9mio98ItReGyPRkUzp5L2mRLRr7Zp/mW5wn31PDTFIMpJIPZTPtpzxwEq2lyla\\n\",\n       \"98upx5QvRvcei8fgnVTr6+saDoed0+qlCxQoU4mudTLKQTTfJzsbcUl0inUjHH+tEJwImVN90+m0\\n\",\n       \"s0s06+USTSIim8iK5TtloIbmE6mnPPD51mdMCfm+1Lvc1epTvLmzL/Uex8W++H/zk8+ZTqcFESKS\\n\",\n       \"46yN9UGuC46fetyUpSmsf2V7zLb42UQbieS5Xq+GEBFNTH3FbEnOI/mWtaFEnCgXtCGLxaKzucKf\\n\",\n       \"my+5zolAZSnEu9W6PrRi84RqOQF0qkz8nLAmHZvcsr4sx1z7nukOCkme8JoTkbuiMi1QG3M6WZzA\\n\",\n       \"WrqBcHS2nUJbG5/vpTDkIqnxhqmFWlrExtbPsQKuOYT+u5aarPWBCns2m5XCz6ZpSsFtykC26e/G\\n\",\n       \"43FV4dChS55L3WJ46SKnbwcjnWHKbebseS3Tekk5127DhjudePLSZ/ywz3aoTk9PL51kvb6+rslk\\n\",\n       \"0nE+BoNBqYtKntJJ4v/uS65p04MCnVwL6dSQz7mGmfbw9emcS11D4X5n6j7byDQ6yddwvec81NK+\\n\",\n       \"NVmgk5tOlmstaRhNbduW18bYmWIK2k6mj15gOtxOgHWC15P153Q6LXJuJ+Hk5KTU7XitppNlXZi7\\n\",\n       
\"YRk4DofDjp7mRgrztBbs8RmcS/Yhi4M5jzTgdog572y/dk8G0Jyr2k7dTDXZWWaKzs6mZYXrx8Gf\\n\",\n       \"5yIBhlpqjzqXa/Ps7OJsNPPXNi3H5P5m2pf6x33hphynNy3/eTZVHjPDfvs3bUDWidYCL96f7ZFP\\n\",\n       \"tM8eS8695SH9ELe5rLSo3P/Ab79DlB6xpEtOUCoNDrKm5KloKVC5GKmUiKC4DT4/nTbpcrFerd7h\\n\",\n       \"QQXINBKpMOgQJoJUUxxE8JaNnUqa9S6JHGRk6MJSP5vndNjL39zcLAiGpPJOO+/oSWfB/a1F6nkd\\n\",\n       \"58F9pAKSLuoWjLBwYbB+ywo6597jT4TTfXc9h/vrV2cQeWQ/+ZnHSmXj/qURrjkS5A3XCxWDDaT5\\n\",\n       \"sLW11eGjjdrGxoZ2d3d1dHRU+LZYLLS9va3RaKQ7d+6UNnwQI5V7IhxE/2p1UP6byF+2ybmmovfc\\n\",\n       \"+TvKbN7H+fVz0iG2A+57cq5oGDgGritS1riQWBz/IMoaEuoBFvOyP5ZJRuj+2zUtvM+8MAKVemrZ\\n\",\n       \"GUZ8rxvfpejaKq8VoiLJ76ZpOs+0HNZqmIhGZ7Evn+f7OH46Tpwr94sOXOqOZSgI+0RdlYdVeh7d\\n\",\n       \"pvVNDSXjLjW3a57O5+fvdyMSLKmzycOoejp9bIsyTEcg1xydS7bj+wgMJL/TiWWAaVnyRgR/58DO\\n\",\n       \"6FDKqNtLnuW851jz+mW2tMajmm6l/LHuzv1LPZD00IrNM8KkN5reZio+Ks9EWUheCF4ED0K5GPnx\\n\",\n       \"Oct2SjFVlEJaixj8PxUcBclCT0hRulC0NaPt8SfsmX3xdTXEh5T8JgpkgXMagUW1VAwnJyelSJlG\\n\",\n       \"wuN3P2qITE1RZBrUPPH1VmY1x6XWf99H5ctFxIgwn+d5SDl1mzxnJ5E5olC1uXI/mD7ytcsUsw/I\\n\",\n       \"cyR48+bNslPqnXfeKdecnp5qd3e33Pf222+X+bh69aomk4nu3r1b+O31lYqf823+1tApU64nUhrQ\\n\",\n       \"WuCSCs/r1G3TWJCX/Jv/p7OUn7vdmoLmdwyg3FemhdIxJw8sP4z+a333d9ztRCTNhs2yk/rB629l\\n\",\n       \"ZaUEOOYjdcpkMuk4S0Q/WLCeaF/bdl8+TSTH+pFrKA0j1zADOF9L3lMfk9/We6nbPC88r4j3mV/L\\n\",\n       \"HGXzO3Wk9Y1l0f30phXOB8fqteTfNYc3X0xs/UEdQseKzgvXpK/jeqJc0V6kbuHc+R7zkAief3t+\\n\",\n       \"fQ4g1yjn1A4mN5J5LrJ/tP2cm2WOTM0hTB1APcY0N51Gy6EdYT7T6+BBAdJDfWlxzdNLB8O/GdVy\\n\",\n       \"UVKhZhSZ3icdsHSq0qPP74gyeRGn05OMToXsyWB7fp5pPp93YHorJEPAtYh7mYfetm1nl072iahU\\n\",\n       \"wpzul3ex+XsrDSNWdChPTk7K2SNEp6QLYeQ81RQKHeokf8++LjPWnEsrvExt1QwrDYd5z+sodySO\\n\",\n       \"j4suFbhlmONINCFRHo7F50K5baKHbduWk69v3LihtbU17e7u6u2339bR0ZGuX78u6fxcn9lspsPD\\n\",\n       \"w9Ju1g3RGfQY6CTQkSYPa5Egx5Jkx9HP5trLtc01yr7WiP0xcS5qn9eIc+F55VpchnLVomvKKB1s\\n\",\n       
\"95MpbD+PazvrS7z+bBxtJPy/9VTuvGTazrszJXWcLc8j09uWbbfLQJMy7TXDl78S/aND7HvdL+kC\\n\",\n       \"xSD/0rmyM1ALWjPQoQxTj6ceomNSs0F2MqwP3ad83Q4dW8tv7nZkP2vItOeBvMv7arYz/2eQTr6Z\\n\",\n       \"31yvzEBYr9euTQczn8s1zN2H1il5vqLHknqYgAPlxfaT81rjnX8vC/D4TDtQPCDVtEyfdZ639Jvv\\n\",\n       \"IDG1k8rQSp1MIrOWwX1s10Tna1k/an2g8NGIuE0LU0ZQjAJM2feMVPK5NbTKW1LzoM+MElOBJyKW\\n\",\n       \"4/CPo41EAMwHKw/pomaMyBTn04Wk9u6Pj48ldbekppPBhcGo3v1gfj0dVyploj5OdzCKSx7UiBGO\\n\",\n       \"lQcXfipn/5+FuaZMP/jvjJrSQaHctW3bORiPxteo0GKx0ObmZvnuySef1M2bN3VwcKCjoyNNJpPy\\n\",\n       \"3Qc+8AG9/vrrZW6YSl5fXy8pQLfrvrKPHk8qHMt1Takn4snvOFeJEPmzNJZ0rLh2/L/vf7d70zmj\\n\",\n       \"M7wMOeWp3RwDU3OMvNk+ibojnVken8IaTqn7qhdTbrDwuiJvjXycnp6W2iiiIGybcmFq24vXgWQG\\n\",\n       \"4Ozs7FI9mvvuYGEwOD+7rZYez/FQzmvBLddIBpl0ZLwxxp9nDRd1dKIXWRtjJ2NlZaU4fDbAGQCw\\n\",\n       \"baLNXstOnRo1pI7w3JlPTrm7n3TkExRg8EEkTFKnRMPz4e8SRWf//Qw7+wwWrRcsczUEkGsqbSDt\\n\",\n       \"EJ/rdhPl8vW09ymnvt6orufJPE2AgKlyBjKmd6uR6o8/6KmnnnrqqaeeenqP9FAQKUag9AIdtWYx\\n\",\n       \"Mj1XwvzSBYxKDzyjHXvoNXTK9xiG5n2MhOid5q6mHIPbleq1SPa2s76Lz6WHzfTbcDgskWTuwMn0\\n\",\n       \"FseyjByhJTxuqNPeu/njQnOfAM7IzGmP09NTra+vazqdFt64borRPOeZEQERKc/bsnoHp0L9HWuc\\n\",\n       \"vHtkGQLFCGcZ8iFdrhEyn/muMcPXhLQpb45wKTNZNJ8pahMRCkZGa2tr2tzcLJGuEUvpPI25u7ur\\n\",\n       \"nZ2dMi9GoK5evarr16/r9ddfL+/CMgrld7OZL5kCYyEmUVWmUGuolGU607fmEefORP4k3J7P+XbQ\\n\",\n       \"xkT5/Dd1jqNfR8+JDpovNaTLfU20mXNtqqWOjDrVULbFYtE5+dqoIHcVU9aMmJ+cnHRQJKdWvD6I\\n\",\n       \"uvl+oyOca88do/WcX6fgBoNBKaD2PBoZMPKQNVIp8x5jTf/6+TU97756TnP9U3e4nUQ6vA5ZYM0U\\n\",\n       \"K1FZ6WKzi9vwvLh9rzGj26yt8jWuIaJObNu2HNZJniWyYz5ST/n7zJ4wxWbExqldv7DYesx/u6/U\\n\",\n       \"B4nUGnk6OzvTbDbrvMrI/aghzrkBoWYDPQ9ZT0hEjvcxE8W5kbqZllpfPO+ZPXJby+ihvSImoU8q\\n\",\n       \"p3R6qLBS4aZjlSkTTmItx8kUhZ9nITPTs0DQ189mMw2Hw046qJbek7q7c6y8MiXoSSTM6X7ZmeF3\\n\",\n       \"8/n5m7a5RXYZ7G7KflGJ0JhQwVhxmKy47US5P3agzs7OdHx83Jk3GlH/JJRL2JUK27tAEr63M2Ml\\n\",\n       
\"7q3e7qPbzfScx5MpGPOBcDl3z7iPi8WinL2USjyNZi5YK3grMc8VDRqdkpRFzqGV9COPPKKrV69q\\n\",\n       \"Y2Oj1D299tprOjs70/PPP6+bN2/qox/9qL72ta9Jkt566y09+uijms/nevPNN3Xjxg298cYbki7O\\n\",\n       \"rjk8PCxjSXifxj8dF65fKl1+nulAy3SmcNJBSsPhdUl4nqlUznemK2igm6ZbJ5PzWktJe3x0CMwf\\n\",\n       \"zzEdoizArfWtZjBMLIJ1X3JuPHYWmVueuRuMDhSdf6/PxWJR3q2Wzo0pa+TcB6+tmlNsObBDTt5Q\\n\",\n       \"b1JPMmXjdQ3XAAAgAElEQVTmsbEPNYeWupz1aVI3jc2Uln8zcGOtls+Isp5kP1nL5jmiDuBOQZ7Z\\n\",\n       \"Zp75eazH5JxmKpLymPV+Hn+m9vJ5Juph6zvrIzp9rN/L1B5tWwaUlEuvGcobbY774+eZbPMpF9SZ\\n\",\n       \"DLS5GaUmgwYjaIfcN65df0Y+1fyH0vbSb77DxHy+yYOhopa6RbX+30QFZsVSi158H+8lCkIB83OX\\n\",\n       \"1XI5ak1kidFMkoXmQYhHjtvfsSaFyJmjPBdFE5FIHqTCoaHLKNERlGugMjqwl88IVFJHQM1bCn8q\\n\",\n       \"nozS6VhwjshbzqHbc/TD9mgUJF3ajcGaq+QLo1YrFKn7mh+jgZxLjikNCuWMBsD9TiPNdsk7Opmj\\n\",\n       \"0Uj37t3TRz7yET3yyCNaX1/XU089JUm6c+eOptOprl69queee07r6+v65Cc/KUn6W3/rb+nWrVtq\\n\",\n       \"mka3bt3SzZs39fTTT0uSXn75ZY3H4yIDHveyeck1STmhMeN6ZzDAufF1NTSu5iDlOiNKkcYy17rv\\n\",\n       \"z8g+dQCfx3lLRMbkuaoVufp7yonXYU0vZD8oz+ms5/qiI85de9xR634QDUjDukyXmXIe+W5H8571\\n\",\n       \"g2ncs+CZ75hjQbudw1owkWvMn+drW0y0MenUSBfvS01ZYzt0zoiomoeJHvmeDOASkXMfvM4dsLNG\\n\",\n       \"yqgibSI3FNjpa5qm8xotPpPjJRJJuU0UyfNLdDGpFjjyu7zH9XzmH201bSllhvzKoI38tM1wm0bb\\n\",\n       \"bdusc8gXBkk1gGQZPTRHih2WLhu+7DQhVSo/MjMdErfFk3zTqcq2pK5xqDlHXCw80JCQYc2RqSn1\\n\",\n       \"GlHYapGfF8XKykpJ9+T7mqww6bXnWDk2evBGunxaMu+z8FEIs2DWO/c4TjuS7sva2lopuuQ1jpJN\\n\",\n       \"LIhM40g+5a5GojhWOHTUKHtZtGhDkzsPeSihr8u5TxkiWZ64lZjjoPznXFMeeH7NI488ojfeeENP\\n\",\n       \"PvmkmqbRk08+KUl6/vnny0nDu7u7Ojs706OPPipJ+rmf+zm98sor+o3f+A198Ytf1O3bt/Xxj39c\\n\",\n       \"0rkDNplMOjsy6cRkVFyT41RuVpJUrjRSuY0+EZlc69Ll1IKJfaWM0imwnHp9cK54v40FnUQGe+m8\\n\",\n       \"0FFLBIrPoXLnRo+a/jKvanog+WviWvOPZfjo6Kjzkln/li6/A9B6js+jHqYsrqysdDY7mAf+24Zw\\n\",\n       \"MplcQpXT4SS/2QfqgKZpHojy2SinbufYcnzmW81hzfVMx8X6xbzMAJJjoZw4ADYRYUxkxw6H26It\\n\",\n       
\"M3JXmyuuAT+j5uCbqAuJkltWuP54rBCRP+qMmu5gH2sInHmaPKwhssvsKG0ZgwofYJs6ymhwzX67\\n\",\n       \"L7UyHdNDc6QywmJ+WOoqzZo3SONrhUPFY2rbtqSGEnXy5GZEWPuM7fG5XIyMLjMdSGXi6/L0VypE\\n\",\n       \"pqjSM+dClFS2xHOXj3fPuR+LxaJzuGSOh5GdpOJI2JmiwNqBS0fKnzl62d3d7QgqPf5UwuY1Uw/u\\n\",\n       \"l6Mxzov5xV0knlf/pvLKM214GnGiSjQ+dKT8ORVM8sXPdrsZXZMYlVNZpKKzonQQwV0/TzzxhNbW\\n\",\n       \"1nT79m29733vK/UOH/jAB0r91PHxsTY2NjrG4YMf/KAef/xxrays6Dd/8zfLePb29rS6uqqdnR3t\\n\",\n       \"7+9rNpuVs6nsrNaMV40Pte8T5RgOh5d2PDEV4rXkwCWROj6vthY5B9Qn6RBzXXBe+Bz3z7+5Xmv9\\n\",\n       \"MjF4s8HLNu3ceczsB4POdAbT+PB5PteNuojrKxH6JKLfHpeNptEJ7iYdj8edde51s7a2puPj4xLQ\\n\",\n       \"+hTs5Fc6hBxL8p/6vuYseb0ZqefaZDvkAflGZM5EpI0vXmZg6d90Ij22WpqNMktngzVHadeIcllf\\n\",\n       \"0AlgDW/WBNlhIJ8YxJhPltNEssyvwWBQMgHZL64njt/31Wwoeez77Cj7ANuac+rnp27Ndk22UUYc\\n\",\n       \"aWct27ad+RqjWnul30u/+Q4SDSGVBhVeKuGMKGvtSZch17wuryVUz+tqk2XKiUul6EnnIqXR9iRT\\n\",\n       \"AVjgEy2hAq05fL7GyJRzy/bYLZBte/5ONX+fC90RBWk6nWp1dbVTB5ZGh4rPffC1VrLkjdtN9MZ/\\n\",\n       \"GzqmQvEb0lmv5HlwZJZpIc6b2+PBcUZc3A+Oz/PDE4fNU0fCXnSUX8uR26SSSgeJfzOiJQyf1/l5\\n\",\n       \"5vnOzo5u3bqlZ599VisrK5pOp3riiScKH40M3r17V2+99ZZ2d3dLm1tbWxqNRvr5n/95Pffcc/rq\\n\",\n       \"V78q6Vxp3L59W0dHR0WReR5v3brVUaxpxAi3M4IjUmpZdztZDMr167Vkw+S5JF+WOQK+z+uG/aEu\\n\",\n       \"qUWZnkPqJ/ZP6m5Q4PhJD0LXM7peJhs0ooni+blek+aj0Wkjy4lsUEdxzTCgyQjcBtCoizc6+DR9\\n\",\n       \"R/mM7JnaW11dLRsajJa7Xc/PdDq9NHbq8gySqS9zLqgbciwcR81JJg85/zxjiGvb93IsWXdF5zJR\\n\",\n       \"Ko7R9zEtad1Ane0atpoz74Brsbh41xyDb+otrkU6Oa53pa1hip/zS/3IwJ1jJAJMnZi1x/6Ozu+y\\n\",\n       \"miWvQY6PNpKOpHThRPlz6h7y07rdtLKycum4k6T++IOeeuqpp5566qmn90gPDZHKVBu99kQVMs1H\\n\",\n       \"BMFETzej5Myzsx0jNlmMyjQSUYAawkQiGuXn+HpHGPbQCSUTlchtljWkKvvg3+6n05ncESNdbLl1\\n\",\n       \"VObt0X4O+ekaMEYtUjeK4edMXWb9xXA41MbGhlZXVzWZTDroEusKHNUwtcd0G8fCCM1btjOn7zER\\n\",\n       \"6WHUnKkm/+9+sC98aSujKs6R5zxTCYz6/X/KSKYS3B/3aTablWMlpPPTy0ejkV5++WX9wA/8gK5f\\n\",\n       
\"v64rV65IOo+w3nzzzVJUvr+/X3izubmp2Wymxx9/XB/5yEf0Iz/yI/rhH/5hSecHeX7+85/Xiy++\\n\",\n       \"qJ2dHY3HY+3t7UmSjo+PC5roiJnjYoTNNcA14ojYKJf54/Rezhl1BaP3lFVTIsWMXIkmLSuWJd8f\\n\",\n       \"hFqx3exDjsPXU96IvhMNTj3k56RO5JrPvgwGg1KgnCUG7A8jcLfp9qTLB4kSOTci5ZP0R6NR2cGb\\n\",\n       \"KRwjwX4Gj+JwOUKmWvlc95fjp84wypBrMtM2kjpF2pkupq1omqaDkFlenBZiX1Ivs99sN1NZRP4S\\n\",\n       \"4WzbtqS0rAN5Unwt80JdQyTG9WzZN6NVlOFEizIDwCyH26wdrZEy5XE6vevrs9+JjmY2wGSEKNeF\\n\",\n       \"EUDLh0te/ByvfaNRfAUOUU+vYfL3QfTQTjbP1IB0udKfE2xlk4bP1zP1lQVzmUYxURh9rT9nOo1K\\n\",\n       \"2G3U+s3vqCjdZjo2CVtL3SLXGl+o6MwLwqYJYbN+RLq8s80GkS8o5Xis9OgAMD3jv/kdU1BeNIb+\\n\",\n       \"5/O5hsNhqZnwmN0f95njcF1Fjd80FLyGdQPpMPk7Kxumarl4CB1L3ddS1FJ1djBIGSi4nZRTGoja\\n\",\n       \"GK1YCZtL0s2bNzUej7VYLPThD3+4GKjf/d3f1dnZmd555x1NJhPt7OxoMplIkvb397W2tqZ33nlH\\n\",\n       \"r7/+un7oh35I165dkyR95jOf0ebmpn7lV36lGGKnb3Z2doojVXO+M/VMnjLd4rQj73EaplaLkE4U\\n\",\n       \"5yd5ZaoVv7JfXitMybjdZW1ynDXj5fVdKy2gHiCPqEcyrev7MkVZ60c+Lw1zli5Yx9RqTzwPdGra\\n\",\n       \"9rx2bTweazQald88X25zc7OzY4w8cnueZ6b9uIWdu9MexHePwXqVRpG1oi43MC/sHFnfDQaD4ugl\\n\",\n       \"r/k89zk3TfA3X+GSThHtE4N6yijtDNPiKSvWo9Szqd/YN9a6JSBBuch6Kq6ZlPn8339z3bnP1KWe\\n\",\n       \"f/JGurzjnZtO/LLjZQ5NtufnUMdLKvK6srJSdqSzTpf1aAQ0ci3V6KEhUu4wJ4DKiDloT2g6QVJ9\\n\",\n       \"NwCvs/GjsyXpkkKqCUoKHYmL40FjZD+WOYJEqzx2GlYbgqxjqil/fsa6CTujuZuitquJyv7s7KwT\\n\",\n       \"mbkIcTqdajweV2savFgyGnCU52ttNGxouehoUHgoHZU7ayD8ORVQOpzpyORWX5MVe62Y10q65kh5\\n\",\n       \"jMztZ4TlvtSCAUZvGV1KF2c8+d7t7W1tbGxoOBzqySef1Gg00sHBgSTplVdekSTt7u6WImC/a+/g\\n\",\n       \"4EBN02hnZ0f37t3T3/t7f0/PPfecJOn973+/fvRHf1Tj8Vh/7s/9Ob300kultmp7e1u3bt3ScDgs\\n\",\n       \"O/tqyttjqJ35MxgMyo5Of055Ic8sJ9QDiVYl5b3uT35HNK2mJNMwsX3qo6w19PzXiqRpYHLd1IrW\\n\",\n       \"+fzUUb6PNX35uREFGiE6OenweCcWDSgde4/JheV2pqRzI+X17ppKHgHQtm05HJY1l8fHx2W9Oyiq\\n\",\n       \"6dRcw4lisPCdSJTl0s5S256/j3I2m+n4+LjzPKIsdhwZGCXKaOImjET/GPzWAnDqmVwvPKIiA29m\\n\",\n       
\"FtwW73Vfc7NS8nQZyJBBtR3cxWJxaSe3+8YsRKKaNdCA9/qZ387mDY4xgRKuFdsM2yBnIszb4XBY\\n\",\n       \"zgo7O7s4FNXEXZL2I5bRQ3OkclspvXIbHTpZUtfA0wgnFGeqRZspyLnjgp9LFxNTK3JNRyq9bio+\\n\",\n       \"omqLxaIUYEoXsKLh7fl8XlUK3u1CA0DnLJ0398N9zyJAeuDkMyMS981C5VTfaDQq51ZZGG2cGD3T\\n\",\n       \"AWPERYcgF3TOIXlAZWMDYgeKBom7P7yguEuSz6PT5rlftvCtMNJ4eV7oRC07Cdd84nEMLHBm35yK\\n\",\n       \"GI/H2tzcVNu22t/flyTdu3dP73vf+7SxsaEPfehD2t/f1zvvvCPpfCcnnTOujUcffVS3b98uBaUn\\n\",\n       \"Jyd66aWXyjgef/xxvfDCC/qTf/JP6hd/8Rd17969wksrJZ/JQscgnUTuRrUhcfqHfPKasKJLhTWb\\n\",\n       \"zS6NoRYQpTHy2HNrOJ0IX2M0M9dAznPqnkQQTHQImRJhBC5dTj+lsU7jVou+ibrxc/Yj0RMaQV7r\\n\",\n       \"9HHy13z02jfCvLm5KUklbe8f8oaHJTqtzBS85cjpQgcDx8fHnefWguma/na/fR/XFI29EV7LN7MB\\n\",\n       \"RCvIB88PUS/pPMgxcs5+2tbZEU2AIOeH9pDBLG2X15LfaWr9UUMz6az5XpYscC3awXBK0denvNWC\\n\",\n       \"BzrBtIvD4bDoMKKPlgdSolWZHTDRuXefM4PhPo3H47Lr2HK7vr7eKTx3W3SYjL6aLy5HWUYPxZEi\\n\",\n       \"/EbHhJNH4a9F8bVJkOqKlWhA1qjUUCde475QELN/CYXaSBoVYlv0oi2IfAlxRjbT6bQ4VRlh+28v\\n\",\n       \"/vyc/SS8yv6YlzTs/p588ziMipycnJSddL7PzpIXI5W062LovDJKdt+86PmdFSIjBPKNn1H4rbgy\\n\",\n       \"H+5FxAiICB2VGCkVcsoAIfTc5s4goIY6sT98tuVyOp1qc3NTu7u7xei//fbbOjk50QsvvKDNzU29\\n\",\n       \"8sorZWfUeDzuGBJH4B7H5uamJpOJhsOh9vf3y3dvvPGGVldXdf36dX3yk5/UH//jf1y/9mu/VuTE\\n\",\n       \"Dv3m5mbn0M50jIhgmAdWZKzZYK1aIs9U+jT8nAvzOI2sece5JVHXZAqHCj0dfAYuNWJ/OId0OGtR\\n\",\n       \"NNtMncF2Ui44Hn9nw+LPuK551MQylCJ1BvlkJ2I8Hmt3d7fops3NzeLoJxopnc/XaDQqiBSP8ODu\\n\",\n       \"NPJhdXW1yCWdbfLUDlG+fsnG0zz02hwOh8Xo2/FhXyzTlsd0TrwGmBJ1/4zapLxZ/6S9oLPu+WLA\\n\",\n       \"Th2e99GhN69qQEAGJ+Qh+57tM4VpIqpG+bOcZLbD91C+yQeibqlvbT8Tacv2s4+eN6d0LVeSyi7m\\n\",\n       \"jY2NS3o29S9tg9TdRVijh+JI1RAiTj4jRv9vAUiUigVxTC+4TS6IjMw4yWyTBZMW5HTA7Mmnckvj\\n\",\n       \"SzTIDhKRJPLBjh6L9Tx2e9FS1wh4TFRG0uX6L1/PBZWpARqrmmPqZ9uBmk6n2t/f76CLnqM0Nv7O\\n\",\n       \"/Ga7jjw5HhoT95Nnq5AsLxsbG6Weh3UX/skCecoc+2o+1QxwLeVmYvRpPibaZR4SefB4E6Hk+H3N\\n\",\n       
\"yclJOb3cZzCNRqNy0CL57eMbtre3y3k60gXCY8XStq1u374tSbp9+7Z2dnbUNI0eeeQRffazny08\\n\",\n       \"+ut//a/r1q1beuONNwpvc55yzjwezz/XHakWLNkRoIEhb/09EWkikbw+5Zjrj/PKYGYZlF/7PNdL\\n\",\n       \"puG9tmsGg0Rlzb/tNCQST2cnDbT/rvGUupKIlJGcROL9HCLiDo6kc0RqY2NDbXvxmhG273SeX+NE\\n\",\n       \"dJiGu2masmFie3tb9+7d0+HhYXl/G9cI9Q7bdDqH43E//Zm/51pz4GGdyPokyhWRZPKPzn3Kac4R\\n\",\n       \"76s5tERu/Kxsk31IO0S9k1kZykTqMK4l1yaR6PQksGHUqYaqWp+ynwyc/f+yWljy1GvLjhJ1Ltfv\\n\",\n       \"YHBej+ngmnV9NX8gecYx0xmvUX/8QU899dRTTz311NN7pIee2kuyx8xIK3cCSN3TgumVp3fPIteE\\n\",\n       \"042QEAnxd0ahEuIk9J1pDf9v+NdtsU1He/S+7aUbceEOOkf2fB+Rn+M6o6zHcB8Y+WaemZA9UTJ+\\n\",\n       \"xzF5PIvFQkdHRyV1xKiUp50TRWRbTrExFTEajQoqZf4y1cK31fOt8szpr66uanNzszPHTCXx8DVC\\n\",\n       \"y0lG9/w3EUt/lvNO/vhzX8f0tH9nOol89r3ug1MRx8fHunLlis7OznTr1i1J56+B+b7v+z5NJhPd\\n\",\n       \"u3dPg8Ggc0DidDot8u+onfPQtq2Oj481HA4LyvXWW2/pm9/8Znlx8fb2tn7mZ35GkvSlL31Jw+Gw\\n\",\n       \"vAqEKSojYIxgKfs8AJXwvmUk6yNMRKWy4DNlheTvslDXfeVxC7X1kygJ5/DbodRDWbzL/jiqzoM1\\n\",\n       \"eX1NjrJeqsYDj9dEtDhTkP48i65zXOZNFnhbXyZaQV09mUw6qV2+SirRupWVFW1vb6tpGh0dHV1C\\n\",\n       \"g6yDbTPMI89tns7t+3ywZqagXcvDFBbLFvycGm+o9zIzQvQw58K8y9Qlvzffc8243bSP1HNcg+wz\\n\",\n       \"0UyiW6xzytTaMp0mdXdMJxLLdvLFzOYP54Xj5701ftTG7DlwnRNrpCgTiWQxm0L/hHV/y+ihvmuP\\n\",\n       \"TEoHZTAYVIu7crHVlJyZyxxzwtlmIhcajRknY1kaIuF7P8f3OB2V/VoGqXsCCQUzFeYxMK3pZ+WC\\n\",\n       \"Ia+YussCPvahVvxs/iXseXh4WNrxd94FYSNLwTOsT8eVRngwGHS2PvNskhTgNNBbW1udRSGdpxpc\\n\",\n       \"O8E0gPlCqJltpszw+Qn3kprm4ugDywblmw507mSh88zFa/Kuu/l8rmvXrpVjDO7du1fOeDo6Oiq8\\n\",\n       \"l85lw68O8jO4Q8XpDab5pPPjFKbTqU5PT/XWW29pPp+XZ3z2s5/VN77xDR0eHhbDz/Xk4lfzjfzK\\n\",\n       \"gIR8t4JmutkywFREprEywGEqxm3znpRTpyJc72fesG8pe3ROavorZYJj9L1Zp0THufYsqZuySKK8\\n\",\n       \"kuyEZtG05S0Nhv+nI2XyOVHuv+uceCaQ5YHjM9/p7NCRog5lIOr+uADdgRDfTpA6knzzZ6PRqKNP\\n\",\n       \"neZhKpmBE4NryoIDEf9PvcAAsTYPHCuJutfjYMCa9oN8YZCf/K6lCik36XgyXewaSPOP4/Uc2PGh\\n\",\n       
\"A+71n7qNfcqi+dS9Kdt0dnk9++T+8Dm2C9vb250dwizzsNwycJIuCtW9Rthmja+mh+ZISd0dUYwA\\n\",\n       \"zUAKXm3Bs51Uam7T7WZ9TBaP83lUPInImHhfLdedn7EgPCefB3IaqcozlvwcjoU1IYmq0YCmQ0PK\\n\",\n       \"XU7J67zeSptoh9tfX18vhXzHx8eduqRapFbjl8fPwkUqNO7cMwLjxULH24vFirEWlXuRUhFlzVjN\\n\",\n       \"OHFeajvz6IRSvnPREgG0o1RTtk3TaG9vr/TNr4F57LHHdHBwoMFgUN6rx7nwWpK679jKiHKxWJSi\\n\",\n       \"3qY534llBODu3bt6/vnnJUnPPfec/ugf/aN69dVXS7s+zsIKisqPjkPTNJ0dQrXAJ/+3c8KC1JSZ\\n\",\n       \"5HFuIaczYWeRhdJnZ2c6OjoqO8W8O2eZY0OekdyPWtS6WCwKsuj/ExXN3XIcQw1x8+c0TPmbR4+k\\n\",\n       \"0V5ZWSlzZ6Jc2rF3NO/59Vry2qJjaX1UQ2Rc32dZ5xk9RCJSLrzDb2VlpdQEShfvvaS9MCoyGAyK\\n\",\n       \"k0cbIF0UZbPfzAwQ7eH8UJ97HdYQ7XdDDjkHXCtN0xRHlXzhGniQ7ubaqq1B9s9y6uvZPx+BYd7z\\n\",\n       \"6ADKLeXO/XIfszA712wN/MisCANa8pXPTvnjs8xPZyXMI/Iq+2J+5TpKB7BGD82RIjQpXThDiSxI\\n\",\n       \"deYTkqNTQ3TG1xMFIZMYfSSSkyhPKjg+k/e5P+5zOiRMabF9pjDoyBB2Tg/cCsLIVKIudkxTCfiZ\\n\",\n       \"iRqR7x4HF6ev8TgN07qNO3fuaGNjo4yHi5/jcrt0lty33L1SQyXpEBh5srFMBIgRCndH+jOPp+bU\\n\",\n       \"ZeTGOZO6BaHpaPvamvPC1Jafs1ic72piFMR55HsLPSdXr14tzs729nZnZyPnKZFTO/WLxfnuPaMy\\n\",\n       \"bntjY6MUoR8eHuq1116TdP4i5J/+6Z/WX/trf01//+//fQ2Hw04Eyg0VngM/j1uOfX3yquZE04ki\\n\",\n       \"QpTBUo33vmZ19fx9kUYjNjc3S6G9nVCflWWnygXOGUHXdBPny5/VrnP/aZBqyrlWwL/sWj47fzP6\\n\",\n       \"Tj1iHhH1oIGvjcMOCueRc0X0PFM0fs+nn8/1QmeBzzeiTF7wXaE0tImAEikjcuZ1b91Hp86o+Orq\\n\",\n       \"annjQ65394W6hkF8Bpcco+Un9QvRJY+HtjB1VCKQLv3g/FO3Uw+nDSR5rLYni8Wi6AXysxbsWYd7\\n\",\n       \"HmqbInztsvVkOfB9DyIHZh6b2/RcOxPB3ZzcIZz2mTbY/OffecRC0kNN7ZGRhOvSm+bA+b8pvUUK\\n\",\n       \"DR2ZTC+QGLGnocv/0ws2WZC8rdaT7edzIaW3bQXhA+1yh5UdDC5aw7CO9mg86LC4rxmheNFTgMk/\\n\",\n       \"54XTyeR30oWCefvttztGiwgJHUGmmPzbfXA6h8o9HT5GFl40nuucE+6upJIiakR5omJNOcuFRqeB\\n\",\n       \"0b/b4XzkMRyMaJc5YX6ODztcXT0/w8epNu9COTo60tHRUUHnfJ+Vsf/muU48ysDX+buDg4OyTbhp\\n\",\n       \"mlKTdeXKFe3s7OgXfuEX9Morr2h/f/8S0mEZ4XpyKs0GLAOTRFY8T0aGlim3TD+0bXvJQJtvw+Gw\\n\",\n       
\"1I8Nh8PO29/NV/N0a2tLd+/eLTvGshaJ67eGkqeO4Vr3d4km2AmmrjEyRCPNdWpngvLiZ7hNH9jK\\n\",\n       \"wMzGZzqdltf+SOdonE+u39jYuLRLaTQaaXNzs5wK7Tny8y1PDpJshL3jji8e5zqx3nS/6NjYaZlO\\n\",\n       \"pwV99Tim02lHpknWw0agiCwxg8HvzDd+R1lMuaXj8aBAP4NtrvuU4zzA2DxKgKHm0JDm83kHzePu\\n\",\n       \"ceox98NkmTMaRbTJ/cgxEgBg/9g+5SFBCTrFtD1+pueMQWnbth0AgbrGZ5p5/lk/ZSSViGj2iXbJ\\n\",\n       \"/cua46SH4kjRqSBCYsOUkQAdq5rjlUyoRWi+n/VQVDjShUB5cVsAKPz8P59DuDEF3wLq79hvQ+gW\\n\",\n       \"KAohoef0sH0tD/JkxG7BdxtUxIwcLJQUOI49Bd9zyDSbpFK4fPXqVW1sbFxygPxcj4dG2FGVx0LF\\n\",\n       \"mIbZ/XFNEGHxmiPlRZpRo/lFRVKL3hKpq6EqrP/yew5JdKAsWx6H/0+5Yv99Bpd0ntKTpOvXr+v0\\n\",\n       \"9FRXrlzR0dFROQHePKKD3LZt5/123/rWtwpKd3JyUr6zw/nyyy/rySef1M7OTlHub7zxhnZ2dvR9\\n\",\n       \"3/d9+tznPqc//+f/fAcBTAeRqfJ0GhOp9bP5WfIwgwB+nvrEz3JtjzdHSBebIlzrw/XN9WXeZH+s\\n\",\n       \"p2ho340YDCZKTMeW7dmYzufz0s9lPGBQ6sDCr3JhytAp35OTE02nU+3s7JS05p07d0p7W1tbHWTB\\n\",\n       \"NUC7u7vlEM50wGm0vTFEOnfQZrNZqUWjUaqlZ8gDf2b0k2gdt+eTN/P5vByQaUokM3lm3vj5RolS\\n\",\n       \"f1E3PSjtxGcQnaE+9Zp3m5xP62X3MWXN/as9x/22TjU/PBeUOc6h+0hbTBvFYDSDTEkFzSNvmO7n\\n\",\n       \"mNxP88/ONFE3IvR0Bq0jqW/pgDrATDAhUSg76f6uht6THoRI9ccf9NRTTz311FNPPb1HeiiIFD1E\\n\",\n       \"eoeGOe3B1iJSe+iMBjINmIVn9koZgRDy8zUmXpcpMXrDGSEmesMiyoycMirNMTIScORs9Iw8My8c\\n\",\n       \"Yfo5rkfw27YdpdRSeO4fa0ocjTgSIILAnWFEegaD8xeA7u/va2dn51KkVsudmxw5JYyaUQqRKR6s\\n\",\n       \"xjRAzkcN4XR7WdPBOq1EG9ieEYREURxZZUSXc8Z5Z+SUtYOOkFwPtbKyUg4sXCwWOjw81NbWVgcC\\n\",\n       \"931N05RXEY1Go/L6mBs3bmhnZ0eHh4flOIPs087Ojt5+++2yU1A6j2Zff/11PfXUU/rZn/1Z/Y2/\\n\",\n       \"8Tf0la98pcgM35uW0bXnJ1MgHgdTwEYyrAcYIed8eM48ZpN56fTW5uZmQd34olLzyZ/x9RLb29s6\\n\",\n       \"Pj6+BPFzTDUZqaXDvd4T4TQxDZztEY1KXeFxUGc4micSR9R9dXVVu7u7Jd3ilOd4PNbx8XE53dtp\\n\",\n       \"EukcoeLmDacLqcM8bs+hd5c69Xx0dKTpdFrejmB+14q7PT6XBBhx2d7elnS+yaFt25K+IpJPBMTI\\n\",\n       \"CwuO3V6mjGg3skyAO1yznsb/ExWqyYQzCjn/RJf42+Pw2k67Rt1Lm0iZHI1GnfGPRiNNJpOCXrMe\\n\",\n       
\"1WO27kv0jfq+VlrD+2jj2/aifpXpS/+f2R6P3zymXXKbRPX4zjzXBPI1Rn4e++BnEH0mkkey7XpQ\\n\",\n       \"KjnPKggAACAASURBVPWhpfZsrCmMUtc5oKEhcRKpaBOay5qJGiN8L40iUywJxWc/c1HQYCQMT8XP\\n\",\n       \"GqOac8WCXQtL1mpJ6jg1fIfV0dGRDg8PO4oo+89UE5Uii1FNXGyZYjVf19fX1bbnBcqTyaSTjmIO\\n\",\n       \"3M4ZF5D7njl8G9larjxfvZBwN52brK9ZZpS4iGj8zGv3hw6hf7uP5gnrPehgpFNPma3VdKytreng\\n\",\n       \"4EAf+MAHdO3aNd24cUPSxXsPV1ZWOu9udH84N3aqJZXi9Dt37mixWJSXyXIcOzs75Vr38+mnn9bd\\n\",\n       \"u3d1584dXblyRT/+4z9e3tHHPngsTgExnVdL6dlgch5yLtwu55BknpJvliPWaEkXRcxMN7Iuh6nf\\n\",\n       \"jY2NToEz+5zk+2xQsg6FNTWcbzqXqVNYK1MzOqytoRH2uxntUNkh4vqxo82TnyeTiQ4ODnR0dKTV\\n\",\n       \"1dUyh1tbW9re3u7UJJpHnGMXldORmk6nRR9NJpNLtTQOiLJ+zvWS7jeDG+uS3NDi75qmqZ5tZIPr\\n\",\n       \"52RQnnNL5zSDHM6770sblAF56pxlujTrsUjWpZRVyluuET7Xc+e6M84HeVlLbXq+M4DJeaL+8tqy\\n\",\n       \"48Z3rHoNso419T5fNZbz4vlz2lk6l2E/3zaTfeHOSJa6UJ9btpgO5jqs0UMrNk+vj0KQyA6/l7qC\\n\",\n       \"kUzmwpAu76jI+yjEiaBIlwuBa0xN71zqFpiyfdYBpDKmgfd3VpBWuIz2M1fMgj575o4Ca7n7RPWo\\n\",\n       \"6ImI1ZAlf84jDoiCnJycaGdn55Iz6cVI48nC8tzVlPxlTZt5TMRjWR6bjkTWYBHJ4Ti4Y8zPM+/8\\n\",\n       \"PfnC+fNcL6ujSKc+60PoHLqW6c0339TTTz9dovLj4+Ny3IR32WWNnJVh0zQlKt3f3y+v03HBsR0J\\n\",\n       \"7tZ79NFH1bYXL0l+6aWX9PTTTxcD9ZnPfEZf+MIXJElf+cpXyjvYPA4GClxn6cRarr1TKpWax5OG\\n\",\n       \"iBF7KjnPh2Vxd3e340AQ3Uxk2nyaz+fa3t4u9SWJlNaM6jKqGWyPy455onXkoYO9Gio3GAwu7XAb\\n\",\n       \"j8elVsSvb5G6Dt3JycmlQMTHRPi9eeaZD2H12p9Op6Wu0+QNL0bHvd6MlFsXEzmnYfX6zfF7nnks\\n\",\n       \"hXVdrUaNZ4IZmSDiTP2R69AOO424eeMgg3Vgfg71fda/5nzTVvn/DDCI1qROsyNkPcJ15r76O/Oc\\n\",\n       \"gYp1aC3AJD/zuXS4zs7OOvq7NhbykwFAOoveYJU7IS0L1gt08hiwMiiyLrRdqqF9zCZxzZ2cnJR1\\n\",\n       \"neOrIVWkh+JI0VHg71RKNApETxLtyfY4aURcEgrNScuitGyb/2fEwbG4vVT6fv76+npnWz8XhK+x\\n\",\n       \"58x0ngWHHraNgRe/PXMW2h4eHurw8PCSICxD17wAiRrVlJ2VO2Fz99MFpjasHJv7SjTHz0vHmI6b\\n\",\n       \"+7Ys8mKxbkbvVHyeC8tEzWCbP+kAJVLKSIwy2rbnxd3cKJAF/OnUu082NtJ5im13d7fsrnr99ddL\\n\",\n       
\"KsbGxQaF27w9P8sCDKd0DPtbHr1LzTvX2vY8rShJ3/zmN7W2tqannnpK+/v7un79uj73uc9Jkn75\\n\",\n       \"l3+5nEVl5yNTJaREXYgOuM/mn2WDc06+8fOcL6e4KYt2HhghU76NJI7HY+3t7XWeeXBwcGmDQlIt\\n\",\n       \"+Mr5zbm3/sl5IvqQn3uMTA1LKs6QkSgfESKpUyJg9Mgyar5sbGx0Dr6U1HHGzs7OdOXKlY5jaWM3\\n\",\n       \"n89LetAy7LSeC9B5BMPJyYlms1k584cpXPPMYySKb7mgw8FDfKfTabmWeiERl0zB+rk1PlvP+dkk\\n\",\n       \"7pDLIIEOaepU/zBtlXLBYNnjM9Vkwp8zKKeuNzmDYKKO5C5rPsdrikS+pXPKlKf5TZTPx5BwZ7J0\\n\",\n       \"UV4yn8/LC9LZLsdFB2wwOD/OxHYiAyXzn84Y+WV7QBlt2+7ZjjV6aK+ISQdF6uaFU1AZkSbMx4ms\\n\",\n       \"tWuEhY4EhZnwKNusQbTsT418n710tsn0BCNDO1JUloyevZPIcCUFkc6LJ9zf+TMbhcPDw6IguTuy\\n\",\n       \"Bqsm6sa/ueXfi0G6gGrtbFH4iRxZEWVNQzpG5jOdI6kL4XMea9dauedCsJLKZ0vdLd0ZaVJB+RrP\\n\",\n       \"F99kL+nS4rdMZFpQuoxImccnJyel9ujatWva2dkpDkvW+vDZnDeiUtK5UZxOp7py5Uo5NdpGggbR\\n\",\n       \"NVR+3iOPPKJXXnlFkvToo49qa2tLP/iDPyhJ+uQnP6lf//Vf15UrV8q5RUQ6SJl+tazZUHKebATp\\n\",\n       \"jHhcNYNCR9IoyOHhYamTklRSzozMuaa820k6l3GnOf0MGlIiVL4/dZfXGuWfz/M1RksyeMkxkm+1\\n\",\n       \"dJT1pA/IdUrO/DSfnN5JI0vnn7U1Dtq2trZ0enqq0WjUOX/MaUE7MpYbIlRO41OvOhiYz+cdFMj9\\n\",\n       \"8o5jzgtlImWDc+Qxcfx01Kj3GOQlgs/gjz+c41ow5mfWgq90KhIQqAX7vta6hE40x0HggI4UUaPB\\n\",\n       \"YNBJdVHnMOjM7x3YpszbHtVsG2XSMmXnmSk46gYequy+cwxe47lm7FyZL5TvRPjoLDGgTvR7mb0v\\n\",\n       \"c/zAb79DRGHK/LLULTL1dxnFkdJrJ3NqxiUpUa6muXxCsimNbhLh0dp1hFtTgRlB4EK0YBqqtJBI\\n\",\n       \"l9+czsiHaT9GyHZ6eIJzIiSpwKXuIloW5Tgf7v5RSTP1NBgMOhGj+bGyslLQFTpVVlKpYP15LmyT\\n\",\n       \"c/eOotIQpdPlz83/B6EollfLhqO7RNdShvMMLbdrBMcnYNMZ83sIr169qg9+8IOlH8fHx8XZsXHj\\n\",\n       \"dmH+9tj8vIODA+3t7alpGr322mu6fv26JJX3FR4eHpZzenzfZDLR008/rVu3bpWCd8/JH/pDf0j/\\n\",\n       \"4B/8gwKDM9pN5JOyZnQza9PY35rjSiNB2UjndD6fazKZaH9/v6REfd6WI9laarAWsPlEdBdMs1+O\\n\",\n       \"3qmMTYlc1pBOOti1ddi27aW5JD/pKNtwO+3hQEpSOaOOAQ8DI+qf4XBYeObPHnvsscLTw8PDok/8\\n\",\n       \"2crKSiknsKwfHR110EGm6KizF4uFxuNxZ66ti/nj75zyJrJNnnvbvwuJOUbz12UP/s58qRnvNPQZ\\n\",\n       
\"2NUCh0RoMmD1/+4j59BE5I9tZPCYRLTOlGtpsVh0shj+3g4nA2/OE/tDx9C6r3aauH8c0PCQ3lyH\\n\",\n       \"tJF2qDLYdXBOu2eZoZ3MdWE7yzSybQk3G5GPtTdYkPrjD3rqqaeeeuqpp57eIz201F7CqvSsCd2Z\\n\",\n       \"GCHYm5S61faZ1mPqypHgsr64DT+LxXgZRRAurD3PkQCjRLaRkYuvZWozoWEjUtx95H44leI0nr+z\\n\",\n       \"p15DJ/w8w+xE/Lg7LfvOMTGSky5OsDaqwkjJJzS7qDhrpIjesG0fYkqkg2NwxJoIEcl9ycithjqY\\n\",\n       \"h0RNWL/A1FvC6cy953icsmKtBIua3Uby4vT0VFevXi01KbPZrPqeMm9ZJ2LjPhPql87lb2dnR2+8\\n\",\n       \"8Yb29vY0Go107949Sec1Ui7Y9An95Nd4PNbNmzf11ltvaWdnp5yy/gM/8AP6sR/7MX3+85/XeDzu\\n\",\n       \"FPUmypepW6ZmiEwxhZG7fKXL6VDWybi/5u90Oi0v2t7b29Pu7m4H4eb6NkpDxMdk1LUmb0ZULW+W\\n\",\n       \"7zz0j+05Emb9D9tj1E2+pZxTJ/r7rKeULlAAIgxc5y7Mns1m2tvb69Tj+aDd4+Pjcp3bPTg4KDK6\\n\",\n       \"v7+v9fX1zsu1XWDutczdfp73GoJOBJ96kbKSfOManc/PDzM9OjqSpLLJwvqA2QevedbPEL1x/410\\n\",\n       \"mizT7Lfvs462vuUYuLa4a9FjYEkB7yM66bXBZ/vafB6/8/wbneH8N01T3syRqS+uE/aV9Vsso7H9\\n\",\n       \"mc/n5a0BlEmm39Lm8busS+Q9HANLZJj29HfWI7QdHvsyf6TmWyQ9tGLzXBiZWuNvKjoyzERG5nOY\\n\",\n       \"6qNSNLxdWxg0hMvqO7i4KVy+Lx0iO2W11JH7wUllLQPz2ePxuJO+Y/7d4yJ5gVqxG25PA0VF8CAj\\n\",\n       \"zDGyv5LKrq2VlZVSP1WD6W1obPRy9555JV0cjZBC7fGzHfad8uScPw1rKqZ0lphL59j5N6Fo85Dp\\n\",\n       \"nZp8u26KTgqdQ9eJcDv+wcGBHn30UV27dq3jTHj+bAzdrr8zP1dWVjppVs+Nzxh75JFHdPfu3fI8\\n\",\n       \"pxn39vY0GAyKETo7O9OLL76oGzduaHNzUy+99JK+//u/X9J5uuynfuqn9Fu/9VulH56X7FMqb64z\\n\",\n       \"Ow1+HgOTTM2mE0wZpiI/OzsrDqKkcuK3HdlMdzhtYLlg3SGPUCC5H2tra5cMPOWmlqJin5M3/HxZ\\n\",\n       \"iQJl0uR0SNM0nZPNmSIxf53281EXjzzyiLa3tzUYDEoK98aNG9re3i4bZZzGM28Wi4Umk4leeeUV\\n\",\n       \"3bt3TwcHB7p9+7akcyfLcp2pZwckDjRynFzDdGz43r408uS79b+duoODA21sbHScZfbFMuR58HfW\\n\",\n       \"d54vrkPfY1lLvUd5oRzTTtQKuK3rrLsZ6DNgltTZaMLnZZlJreyEOsrjyzSm+UoHlm15HP4+Azq+\\n\",\n       \"OSFLQmjbsy+skcrNUgxk3Kb7bnua8k7HkDxggbl/aEseVGguPUREKhnN7xKtSuOYHjaNewoIJ5vG\\n\",\n       \"lJPp6Jl9SVQpc+ZepMwXu22+HysXTTorbIuLlWPm7jwWgLpGytdYEXEMZ2cXB3om+ubizVTQOTYq\\n\",\n       
\"MCstKg86du4Xc+P+zblh8TcNTxoTL0z3g44UUUH3mbJBhy0Vbcpg7hSqXeddSFY2vK9mABlF0SGw\\n\",\n       \"A0bZ5HXk5Xg8Vtu2evXVV/Xcc89dir7m83mnxoFIrevSjFy5rz7uwKgCjy0YjUba3d0tdUV855qN\\n\",\n       \"8ze/+U0988wz2t7e1te+9jVJ0vPPP6+Pf/zj+qEf+iH99m//dqeIlTUJKQvpBGQxqNcFgxBTLfhK\\n\",\n       \"ZctgwQ7h22+/XXbkOYJ9UCBGA+W16LlNVCz1jL9bLBadukD3k7UZeZaWg7UMukyscyJ6QCczkWEX\\n\",\n       \"mA+HQ929e7cU/EoXxuSZZ57RE088UWpZ3KZloWmaUjtjlM9y+MQTT2g2m+nOnTudnadt25ZCdD/L\\n\",\n       \"Y/W484wpz0s6Sh67P89gi8GjdK6P3RejLhsbG5eCcr9qhDLJfro/KRN0qmqoI4GDtDF+tsdBnWRZ\\n\",\n       \"cv0Ui8G9nrkDk4ec+twmB9ocP20Sdaafz01RdHzpQFK+ySsGDP7cbQ6Hw0u7eRlc0TFK3ri/Jjo8\\n\",\n       \"DETNY/fDgImJtpU6yXx2XRg3DyzzVUgPDZGykjClIlvmMS9DrR5EVmy1CNbMpLPg5xMGZV+o6BMt\\n\",\n       \"80QmFGiBIkqQSJZU3/XjiN3CSMPJXXtZrGcD6iiSkYzb8PjZn+wzHQb33f3JQl1+xqgmIyvyzYuT\\n\",\n       \"xeFUgu4jHU1+RyWVY+Di5gLiSzlrEVRtkwI/q6EDVKKeM0ZMNtpWLLWNAU1zcUaMdH6UgFOljOo8\\n\",\n       \"fvN8sVh0dgmapywg5X1OeRwcHGhtba1zCvXGxoYGg/PzgO7evVui5a2tLV27dk0rKyt688039eST\\n\",\n       \"T5Y233jjDd28eVN/4A/8AX31q1/tKH46vJYLosScIzquRGoyIkw5sZzSWSPKxbTA8fGxbt26paa5\\n\",\n       \"eDkvHWmmB1ZWLl6ybWTP17VtW9YRneNEKx1Ysdja/fS4ZrNZcfRIteCAc05jwzEyrcj3zjnNaaTq\\n\",\n       \"6tWrhYcf//jHNR6PyzEHPAvq4OCg6Jr5fF62mHuuvDPw7OxMx8fHevHFF8tux/X1dR0eHhZ0wIX6\\n\",\n       \"OZ/Sxa5RjtFGjevbTobT3Ua6PXY635z/yWSiO3fuFF1IXe5AjjqlllHwT6IbbIuBsK+xDqDTb0ff\\n\",\n       \"88M2vbvT64UbVZxCtt40uiypnGbPF5jXUpEpWyYGB5Qby5h5b0pEiQ6h5d46jw6S+8K0JrMNBDnI\\n\",\n       \"G8+Fx8xANEtGMuXJFGQ6fj4KxH/nLu8M0kgP7UDOGrIkLXeMUqjzvmXtSl0lx8nI692WjWzNgTNl\\n\",\n       \"xMG2fD2dojS87CMXLe+XumceJeTq2qlEcKSLt5XTkDONwL+zBoPoWI4/HYFEVjJyTEoF5DYdrVuZ\\n\",\n       \"UcAp8Ibq2S86ZRkRsS80zF68OU9ECog+8DMq6UTFEh1LRMrXEmXgCe3kq3Quizdv3iw1C1kjxZcl\\n\",\n       \"S906Pzr9RIGsFLxD0kZSUjnt3G0SXZ1Op+VU883NTX3rW98qp6xbeX7iE5/Q+9//fr366qvFIOdr\\n\",\n       \"PDzH/i4jdZK/YwrH91EmaJSki/Rlyj35Zscl0WErYM8ZX2lhw0wjTdnys7lm3OfRaHRp/N496zcY\\n\",\n       
\"8Dsew+G+UY/4+XZaGTDQkBtpki7Sd2dnZ9rc3NTe3l6nxlE6d5oWi0XHsTPPPB/7+/saDAYFtRqP\\n\",\n       \"x+UF10dHR3riiSfK8QdGQO2cE1XmWqTO8Xd2Aiw/lBsHrBwvx++AJJ2lyWSiyWRyKcVEBJq6mHzk\\n\",\n       \"Gqfek7oHRSYxWDIxPefvaAedVq7VgtlR4lEF5A3Tlk3TFMTZgQKdUup56jP3TVIJNvy7VotYmwu3\\n\",\n       \"5QCEjmTq7CQH+Nmm9cD6+volxNU6hvqYAR11Nx1MO06WNcoNAYBl9NAcKQsNPVAqykSkzBQLAiPv\\n\",\n       \"GqLk+/w7J89EtIqeMgWthoR5EXNh0KAnJE9KFIewcRp2OlE5PqJmvofnpbgOhwiS28iIlsaM11uI\\n\",\n       \"OG63beVIheLDD33eVUYRTClSUP3bCtufpdPE+pKcZ7bFv6m0pctQbUa6td/mGduuBQOUIzpPlAf/\\n\",\n       \"7evMRyKN/vv4+FgHBwfFmRoOhyWt4joVO6J8Xq3+w+RonA6Yi8Z5zMbJycmlV6RYeTs1dufOHUnn\\n\",\n       \"hvT27du6du2afuInfkK/+qu/Wgyz59XHONDBI4Kb8yd1z+5yCofzkvLhKDxRpVxzs9lMk8nkUkTL\\n\",\n       \"Z1rO6bAxXcqUjr/3WFibkc4WD8H0GrGD7Ejbc8+1R2rbtjwjr3EbrGUispDnS3l877zzTnm2ZdRy\\n\",\n       \"wwJ7vrbDmxSm06meeuopvf/979d8Ptfbb7+tN954o/DTZ0ml8+F0EcfFcTDYcr/8nWU8nR4iShnc\\n\",\n       \"ORg5OjrSysrFq5DMGyKSXgeSOnxhvzj37F9+R0TK97qGzc4wgxZf73f82RHx2P08llFk0E7kOWXQ\\n\",\n       \"iNcyJ8a6g+d2ea68rlkfx3mlHfU15hvTcP5sOBwW20A59fqrpbXJf9sd88P9sfymjqasZEBn+8Sz\\n\",\n       \"zuiYLaP++IOeeuqpp5566qmn90gPNbUndSMMIiLp9WfkQTQhIdiaB0qoT9IluJf3JPJAymsTlXC0\\n\",\n       \"UEtvJTTPNphXXhaFJp98jZGIjI4JzzJ9yHvIN+4scXTNvDL543otv9eIzzXiwAiC8LSjSSJSTK0x\\n\",\n       \"KneapZZeNdzKnRokIh+sb2IhZO5ASXTMCAfv804jzkHOT0aJ/t+RXqaGnd5zlOr7Njc39c477+ix\\n\",\n       \"xx7T1atX1bZtOSSxlupxv1w/xlohzgdTyOT3aDQqBy1aDswDoxyurfJrHaTzlJH/fuGFF/TFL35R\\n\",\n       \"L774oqSLOh1HdpwX84vpJ/LbP5nyJeTvcXOHD+WcSLH7Y+TTqZFMszLtkOuCaG7qKP4mMmk95XVj\\n\",\n       \"VNF8JcrOlB5TO3yeI2ZH7Jl2MPJiVNH3ra+va3t7W4899lipvfPuurZttbe3V1JyTJUahfIrXYhK\\n\",\n       \"Sedr8eWXX9bx8bHe97736amnniqI1MHBgWazWVmvbNd1kSkTHj9RfSKXTt2SP5wz6j+md4i0JNrO\\n\",\n       \"sVhP1bIcqb/8eSJinHuWZng8Ror8mZEZqVua4M1C/s6viyKaSznMbEIiWJYfp8ZYZ0d7zNIQ1jx6\\n\",\n       \"XFk/lWlBtuP5OD4+vrROfA/TaSyi91xThxqFMgLFNCX5n74Cn2fkyWN3loQHG5syhZj00F8Rs8wh\\n\",\n       
\"krpwKeE6MlzqnjFFyjRPLnw/w0zNepeawSTcnUVwNYcmFyn7kBNL45Y74Hhf3s8FQGVkQ7psgbN2\\n\",\n       \"jAbZ4/D3rK+xQLnonekk1rG4XS5MQqdURHSePM8J3abD6jYJtafTx8XJwkM+I40seWuZ4Pg4Z0yN\\n\",\n       \"OQ3AtCjnKCHlnE8rJhbq+5mj0ajU0qRR8P9N03ROjGbtjJ+dTkittm57e3tp6nV1dVWTyaSsCzoZ\\n\",\n       \"t2/f1mg0Kum/F154QV//+tfL/NJRYB2QeUpnOp3ZDIJ4jechAyz3l0ERee96pMPDw3J2jttlHRxl\\n\",\n       \"wkePsPYqg6haXyV10hdMCY7H43Latp06OlmuU7Pj5k0BqWtYsNy2F0Xwg8GgPENSOUZjfX29HCng\\n\",\n       \"WjaniF1/c3x8fCmotMHPtKXH5Xd67uzslB19k8mko69pkOkIcD3Vxsg5YV2VdUeWEdAusL/Wqaen\\n\",\n       \"p2VzBftjouNaW7/U+8sKnF3D6HtdC+f2WR7B4nemFdm2pPJuzIODAx0dHRVZpo1aWVkp7y5Mcn+8\\n\",\n       \"Pjxmy13N0aGeo041z1JH+3fWiKZ8+z47dix3kM7XDc8gdJseo3Wfr3dtMIEajoNpugzgrUPTznq9\\n\",\n       \"flcWm6eDQQSEkTOpZkxr6E86MqzHYnRDR4TPtHedhsREQ8I+0VDVIhn+zu+8aK0AuNg8Tu768vO4\\n\",\n       \"m4K1AHRqKOQ0DrUIgv0h8sXtvlZaPKvG13GXBZGZs7OzcijfdDrt7M5hFGCnyn1jcWBGRUTaPH4q\\n\",\n       \"HY/B8070iY5dGk06TMnTdBAzb+8fKnLfK3Vrl+gwWO7NT19vh7VtWx0dHWlvb68TmXpXGR0/z4Wf\\n\",\n       \"kw6/eeMt1E1z/soYSSX6tYNBnrImZW1tTUdHR7p586Yk6cknnyxnB924cUOf/OQn9du//duSpC9/\\n\",\n       \"+csF/fCOMcuT64WIDDEap9Odc0F5TVlm8JU1cVyzdgzzkFMHD+yP1731CZ1DKuZ0pIispHw1TVNe\\n\",\n       \"nOx6GAZvXvd2pkzus5/l89u4LuxM+6wwSXr88cfLLrq7d+9qNBqVQzfNaxcUz2az4mS48Nl9tYzb\\n\",\n       \"sTP/XMg9n89L/dTx8XHp7+HhYSeA9jg9L/zcTgiDaK5/FjBnYOI+Uh48F24z9buv9QYT2gXOXxpa\\n\",\n       \"6wXLS47NP5wP85TBF/Wlx5DHMUgqcuL31uWLoP0Mn5fFPuVL1KnLXPtkZ4LOIj/zM4gccr1kbSj5\\n\",\n       \"bjnzeGwPiNZ6jK4Rs8xxI4n1JFFok+ePdZJ+LgvJGdB5fEQqU5d8VzpS6Zyw01wIy76rOSTpEPgz\\n\",\n       \"e7SM9LmryTA1i5GpRBmB2WkhEsAIhM9t2/aSA+J+EzqksvazCX1vbGx0JpMQp3f8WBBYGGzv30WF\\n\",\n       \"XnAeR+7WoPPCBUBjRIifaTXpQjEsFouyy4xKyhGg0zw8kJNpnFq058VRi3rMl0T4fJZKfm/lRQPJ\\n\",\n       \"fhLF4UGWiXqRT0w/JprpPtYCAUkdRWLHx7uhbCxt3BeLRdmBY8fU8uPI3H1z6iCDCMv9/v5+MZg2\\n\",\n       \"Srdu3dKNGzd0eHioyWTSiYwXi4W2trbUNOdpo93d3YI63Lx5U5ubm7p9+7aGw6H29vb0Yz/2Y5Kk\\n\",\n       
\"b3zjG8VZMr/9N/vsYxgSWXCxMonri06Tiam/RF0t+0ylUUlyfdsBdL+Hw2Hnt7+zzNpJyt1J7kdG\\n\",\n       \"6nTMrE9YMM+XRtP4e/w2wr5WUjGgq6urunbtmp577rmyu9LOj78fj8edFw/7hdbz+Vy7u7udAnmi\\n\",\n       \"4zwSgzw9OjrS4eGhtre3O+82XCwW5Qwh8tv60WuVAWwi/5yj2WzWQWKo21PvkO/mEY02HRDziHIm\\n\",\n       \"dXddJ/qSG4H4HV/Ia96lA0J7Y6JOdpE39Y7bs+5nitO8YIrPtFgsinNCGeL35gnfUWg95H7wAFHL\\n\",\n       \"dg0AoBNpvUo7fXJyUnYPM303GAzK0SQuIWEq0Q5UHutj3tgpTTTK65OZEbeZoE6iqOmzkB7aOVLS\\n\",\n       \"csfH1zxIiGt583TAfC8njwbaz/H9Fg5H+L7GELifQeVMg2xhYr64lvYjDOsx1PhT+34ZqkSIW1J5\\n\",\n       \"sSoj/qwncF9SUVGY05HisyaTSWdHiL/zYmPEbqNjo03H1ULshcr5I6/pbLK/fmYaL85NKhOPPb8n\\n\",\n       \"gkc0i3Of6VL3hf1ynzLFJ3UjXukCUmfaIaNdG1k7pH62+WVHy0bX83pyclJeK0Q5tRJOtOr27dtq\\n\",\n       \"2/NambfeequDgNnAbm5uljHaWL766qvl9OvDw0NtbW3pmWeekSQ9++yzevnll3Xr1i1NJpMSAEjq\\n\",\n       \"IBSWEc6/x0fEmrJW0xecF8su15SVur+jfuDRJ1wHbsuBiwMFyg+DA0mdeUqEl/1msEAnxfpla2ur\\n\",\n       \"cyq426bjRQdkdXVVe3t7evbZZ/XMM8/okUceKam9w8PD8iJsG1Oi3l6bfIOC2yTysrKy0jl/y47N\\n\",\n       \"YDDQZDLRlStXCppFxDxrdqgfExmyPjdvanrR37NNrtM03EY5vHbYLp1a/8/vuBYTcSYaQ5mh02O0\\n\",\n       \"j4Eg7VAafn9u5IWIFPXN6upqOVyX/LN+YzDkPvl5nhepixQl32o1n/7bKUnPF/WXx2S+e90kETXy\\n\",\n       \"fUZnadfyHuoN3+dn0mHluDK952t4QLSBApORs2X00GqkpMsOAyNnevX8nI6TdPmwvzR8CfsSaaEC\\n\",\n       \"p+J0WzRSCeslzMwxME1QM7hExTgGIm6e4IQt3VeOgYaGxXPz+bykzHy4HpU0n01DX/uM47ZSsLAl\\n\",\n       \"muZ7jaR4/P4s0zR0XIhGuc00kukUWYHxPvIuoXH/zWcl+knFSH7bOWQNivmdTjnvraX52K75aaed\\n\",\n       \"9QCOnv1M3s+am5WVFV27dk3SeYrl4OBAOzs7JW3GQlMXDNtYeJ6uX7+u4+NjXblypRzWuLu7K0nl\\n\",\n       \"ZGIbYjtt0nndxp07d/Tkk08WRW9k5Q/+wT+ov/gX/6Jee+21gvJRTj0PXi+eA6JQRBaTiBBQ/jLt\\n\",\n       \"RmOQaUTODwu5a3LjfhNtdpDAVLvJCB4NLeeQssu0Hw2T58h8Yw2JHSmeQP+xj31MH/vYxy6dvrc/\\n\",\n       \"rwAAIABJREFUNeT5c1qIemw+nxe0k4Gi+U8EINO+liOiFXakxuNxObcpEZCcz0wL0R6YJ76OPLSj\\n\",\n       \"4zYzPWN++YBiox8ZtLRtW+SXNiiRQCJLNUcl0TFf75IIPs/rm/qDZQupS63zvH7tnKTusT1I/jGj\\n\",\n       
\"wbqspmnK6fKUCfeHmRnKomXI7xLl/LqPBhess0wM5ms1Ys7iJALIPrNNpvXSQczAlo6U59sBCkEH\\n\",\n       \"64ka8FPmfOk3PfXUU0899dRTTz09kB5qsXkNdeJ3JOZhE8LnvYlYMcqtwaZug78Tgq+llfxc5pF5\\n\",\n       \"n68l8sOomaiTP7PHTlTNEa6RB46PuWAiUJIK+sQj7xlB8pUCmZ92JJq8lrpQvGsi2A75lFEjkTMi\\n\",\n       \"Wb6XqZsH5aN5j3+bb5zLWhoz57kmF+4no06PwXPjiJGvafFziJ4wUuV883OmmbxzhQfhuZDY6R3e\\n\",\n       \"S2RnsViU2qoPfvCDun37tvb390sKz/M7Ho+1WJzXzB0dHWk0GpWDCV2k6jm9d+/epZSvC9L9XOmi\\n\",\n       \"ANZydevWLT3xxBOSzuunPvrRj+rrX/+6RqORvv71r1/a4u/0Bw9tzHS4PzMP+TvXPefc8kx5oewx\\n\",\n       \"neb2t7e3L6XgPVde71xPXGt+VY+f5znzHGQayykl7hiSuqiEr+F78c7OzrS1taXNzU2dnZ0V5PBD\\n\",\n       \"H/qQPvGJT5Q0GwuWXV/iOiqn5KSLHVKeY84va6pMREHOzs5Kql86r4t63/veJ0n60pe+VJBIZhY4\\n\",\n       \"fq+Z2roncsj5MBLhua6l6WtZisViUY5tIdJDquk+t5kpNPLBlKll2wmmkKQLdMVb+TNb4faJ1BEt\\n\",\n       \"N/JLefahsUZNyVMiPqx/8nNcx2Z9lkhc01ycks7UN9FMfud2PM5MJRo9slzTlvjHa4d8Z73Zsg0/\\n\",\n       \"HlOt1mnZ+1f9zLTBy2SzjHfpN99BSgeGn5m42HhN5m5r7TA1ZUbX8ut0bhIepDLnThovBvaP46FA\\n\",\n       \"53NTodfG7HaoDFicS6GxgNE5ch7XgmJl6bbIKxqKWmqvVpfC/rVtW2otpIttuXR4afgswKenpzo5\\n\",\n       \"ObmUQkwF5b/Tsa7l2HnysMfGugm2S6ctZTFrNlJJsn3D68mbmiPm9A7JbTntdnp6Wk7/9nyPx2Nd\\n\",\n       \"uXJFW1tbRTG6wLttz0/y5u4l75R68cUX9eyzz+rGjRv6vd/7vc66WVtb03g81vb2duGF03RUoDs7\\n\",\n       \"O5fqZFZXVzvOmR23vb09vfPOO3r77bf1+OOPa2trS3fv3i3ffepTn9KLL76or371q521Y7h9e3u7\\n\",\n       \"GP3kJ+dgWSrIay1T1myHae1MvdLp9ny4loY8cPDkMTBtMJvNykn0kkr9mNtyP3k2mbe/MwWeqS2m\\n\",\n       \"wi3nW1tbJf24urqqra0tPf/885KkH/zBH9T6+rr29/dLoOW5dyrQKRwaVAYJJpYf1D73b+vIzc3N\\n\",\n       \"kh65fv26pHMZtm7i7i7LFNdJzhsLgvk9U4TWK2kvfG06Jf5tOedYUr9mfQ31BuWCuiJrNX29HQ46\\n\",\n       \"GQw8Ux8nr/J0dbfB8fhvp9n4mZ/DoJ3OkqQim4PBQEdHRx1nQlJZE7zPMkTeeR0zRWqnjvPCE8S5\\n\",\n       \"HmtUq/NiDRo/J/8pw94tzlRr8i+DNa/1lE3SQ0OkpMuM4UKgAGW0mYZ12QBr9VNs320lYsXiRgs3\\n\",\n       \"PXMvIhtTCrgXiwududi9WOxMcXw03ByPc73Og9cKQL04lu1CqDlDHov7yMXEZyS//DcRuf39/fKd\\n\",\n       
\"x8KdGKamacpWaBohOjFZm5BzmMTIJZ9nZyqjFCqCZYuGTnWNd7PZrER+fB5rRei4Ek00OkgZtDGY\\n\",\n       \"TqflBa+SdO3aNY3H44IicZ7sxI7H44IG8fUNX/7yl/XpT39aP/qjP6ovfOELnV1z0+m07LjiqyLW\\n\",\n       \"1ta0t7enjY0N7e3taTgclgMbrbRu3rxZNjP4u+l0quvXr+vu3bu6ffu2nnrqqeLwra2taWtrS5/+\\n\",\n       \"9Kf1xS9+Uffu3SvPu3btmkajkW7fvl0KvRPFowOURa+pI/Je8pjyRmQv5d9zMB6POzUtNEBZQ2LZ\\n\",\n       \"nc/npX7M49/a2tJ4PO44UkSBiJwx8nX/WQNnx9UbE2ycXnjhBX3qU58q/Day0DRNeRWOx+pNA96B\\n\",\n       \"6bE70KOxJZ2dnRV01I4TdYwPap1MJjo8PCx9HY1GOjo6KhtTuHvZ+rOWOSCvuZPS47dRtA6nk+Qx\\n\",\n       \"eI3TcfNYjPARkTF5rhhYEY16kD0iscYs2yBSaR3vdliDSTTHcph94jwOBoOy25Q2iJS6h2M2r9fX\\n\",\n       \"1zsv6SaaSN3pAMvPotzQGfT8kXfc6ETUyXNG5yrXdr4OymNIZ4rO6bIA2faoZpf8Wa0+s4xj6Tff\\n\",\n       \"QaoNJCk9bE52DZ1IzzvbMtGY0nEjskO0ytcQzqfDlTvWfD/bYB8o8Bnp+p6Eov3MLJz3c3xCLXcv\\n\",\n       \"WNhtfGyk8l6Po8YbOpnkVRp0Gzqfa+Q+5qIx0uIIu/bcdJgcXaVzSbITkjCuF3A+h/cRQeIz2Qb/\\n\",\n       \"ZwSbznBen/d5oVrpUA7Mb74/TVLZReXv8rBG78qzcaMyXVtb09/9u39Xzz33nD71qU/pi1/8Ypmv\\n\",\n       \"lZUV3b59W+PxWMPhsKT2RqORRqORZrOZfud3fkfXrl0rZwUdHBx0ItWmaYqxPDk50d27d7W9va13\\n\",\n       \"3nlHq6urJdV0cnKizc1NffjDH9bVq1cLAiWdozY+qdmK00YkAwCmSskjGpZENVksSwfFha1EtDj/\\n\",\n       \"TpNTnqhrnI5JBW4ZPzg4KCjfbDbTzs5OcWyI4jolOhqNSh/ZTyKrGxsbHSSvbVvt7OzoIx/5iD72\\n\",\n       \"sY910Anzw8+kQW/btqQDiSS4/3zpsvlomamhPJYbO2+LxUL37t0rsmEn38XtNb3q5zC9af1Dg5pz\\n\",\n       \"7rESfaFT5nElb6SLwnMGcgxyOfd0lOmYkYwQUu8zsGqaizPD3E8H3rZx/s5rwO2lLuGp5H4Wgwzb\\n\",\n       \"DAYB/s1gNXV5on8ep4M195lrzbLMsRJcsENKR8z95f98Hn9nxsKfk9+cA17Hde8+ZGDtcfvHjj2P\\n\",\n       \"EkpdkPRQHKkak7jYPclUhnRslqFLKfyMcNJYppOTCFfNGSJZ4BhdSZePDiBs7udYIVGAPEZuF+XY\\n\",\n       \"6FARWXBkZiVDqJJ9snGkoWFemLzzXHiBU2nWFpL7yC3VNe/dnr2h/xpilouNjgkVninnkHUS3ObO\\n\",\n       \"OaYD5Pmlckv54hySx4TpM7XkvjG6sUz9v+y9aW+kx3X2f3U3t17Z3MlZNKNtZMmRBVmILTtAYiBI\\n\",\n       \"XuQDJB8zrwMDToAEcmAgtmF5kceWRtss3Ju9sJtLd/9f8PkVr/tM9fhBgD/4vGABBMnue6k6VXXq\\n\",\n       
\"Otc5dQrLDWsPIEWMhMea1Ot1bW1taXV1Vfv7+wXFwHsZgz5WUTK1Wk2/+93vdHZ2pnfffVeSUqLM\\n\",\n       \"1dXVl8YizBDMWblcTiCr1+up3W6neBhYEUnJlTQ/P6/t7W11Op3EeBGPs7CwoH/4h39Qp9NJbR+N\\n\",\n       \"Rur1emo2m1kWk4WORcMNBQAPMvG+oX9w0+X6lD7z+UYf46qLOY9inKLHelE/r6N0BSQ6nU4BCPE+\\n\",\n       \"AMjy8nJi43zsA9ZYFPmuWq2qVqvp/fff1w9+8ANNp9euSxaC6XSq4XCYwDHfwW5SvO2MO2ff+D9+\\n\",\n       \"7vPW9R19yEHYMKpnZ2eFDNTStTGK/nFDJWdw8h3sK+2OAJT7AYbO8JBihDEQXe48K8c6RWDEOGMe\\n\",\n       \"RXbMZYxud0Oc+jOu6CeMFGQZZYB+djlR3PXs7JfXBUPAgQfXe/yQvxdDjvr4uInrsRtDMW7W6009\\n\",\n       \"YdBcLzuL6sCH+YlxSR38N22JBhc6mXr5muAy8v89UfSscqOuvag0XTFKxbgFSYUJnANSPMPBGdfl\\n\",\n       \"6E2vx6xn8L8LnBIBhSNtH8S83wMmo5JCCcXB5lR0nJyOrKMFyXuweBcXF1+iXKH4I5PjFGickF5Y\\n\",\n       \"OBzMEFCa2y46a8HjXn+/09juZo2Fz5FZbmI4iKUAXN1K4X1en8iMIo+4zRfZO6vo48i307pyoS88\\n\",\n       \"v4orzuXlZa2urqb7/LeDKK8730lX42Zzc1NPnz5N8njnnXf01VdfaTKZFI6EkZTOZeO+Xq+XtrEv\\n\",\n       \"LCxob28vndN2cnKSWJeLi4sUH0OyzuPj49QGAPmPfvQjPXv2TJ988omkq7xGc3NzheBn6l6tVpMy\\n\",\n       \"d/aQ9jmgjf3vGZxJ6kc5OztLDBBHongBgPtGDy/UJbKdng/M5xtjkbbGvgRM8JmDDbKL48KD5bl7\\n\",\n       \"967eeustPXjwIMkN5pD0BuVyOeWDclckCxcyc2MA3RoNDNfBzElnQ3q9XtpoAOD0MQWIgEnxfowG\\n\",\n       \"pRf62RdtZEwfO+Ph/7Ogx74g4N6ZOJ7JQhvfFxlIXzPcwGf+AyRoswPPCAiYby4X3GWeEoAxzHPo\\n\",\n       \"Azemab/Ln+d5/6M3YzC6G3s5r4EHj0e9yLU+X3I6MgekfKy5TNGF/r64TkZQ5yW2gXnNOx1El0ql\\n\",\n       \"ZIS4N8VJlVnlNv3Bbbktt+W23Jbbcltuy/+y3Cgj5QVK190Tbr3lEKtU3CKdYxMokcVy68AZBO6L\\n\",\n       \"/uNc8HGMAXImJiJjrBwsFd9pAOLGao8umsjQuMycAYmWG9aA+9mxPthBA9qOFoYzVU7x5vzVbrU6\\n\",\n       \"MxNpXSyw/5uSe0/s38gI5O5Hlu4yiNf4Lhsv0QLxMRRZTHfHYPXEuByvp9cFFxrf+3ur1Wpyy47H\\n\",\n       \"43QauvTyIaMxNgHZnJ+fq9FoaH9/Pz13a2tLz549S3LxhIV3795N57GVStfn8O3s7Gg8Huv58+dq\\n\",\n       \"NpsqlUqJkapWqzo5OUnM0ubmZnLt9Xo9bW9vp3H093//93r+/Lkk6csvv9Tq6mo6u8/7olKppFQC\\n\",\n       \"jN/IVk8mk8RAeP8wT0lWiXUvqRBgDBvkO7fG43HBwo9sLX3oO8WcxXEWkn7h/1xMEC5x5En8GKwA\\n\",\n       
\"MtnZ2dH3vvc9SVeM1MbGRtrsERlgdGJMKsrRNq4/fRyii2i3x5j5XCqVihtfPEi5Xq8XmJRarZaO\\n\",\n       \"E8Jl4y4678/ofnW2LvaDMyuuc91FiFxpJ24+5orrBeass518B2PM974OePwP7iZPmOrF2wGrRhud\\n\",\n       \"OYPdh6113Z1zqcH08T0/rAk+7qhvZGLpGx8Tcb65ro/rqbvKXc/TV8xR6hnjqHLxxu7x8bAZ+hBW\\n\",\n       \"ytvn3h2fE+jmuLZTR4qvWcjA51Ku3DiQiguNC44SP/MF1EFWdN/4OzxAmBIX5lgnv4YSXUG54oPI\\n\",\n       \"FRGUN/V3xY7fnmf7QuHuzBiTxQTmJ4LB6I7ybdbIEqrT+yK6M2McgU8kjxVwGUZXm/eVKyvq4e/P\\n\",\n       \"9U2MD/J2eFvj85Bf9NXzd3Tf+ef+XQS4noMHJcKmBffNS8UjNrg/F8jsLimvOwHZUnFzADFAUP2u\\n\",\n       \"3NklxcK1srIiSWlH2srKivr9vlqtVsG9MR6Ptbu7q/F4rNXV1eRanJub09bWlr799tvCdn7qOTc3\\n\",\n       \"p16vl1IfkGVdUgpsr1arWl9f15tvvilJ+sUvfqGHDx/qyy+/TItwjPNzsIfy5WDcCJ5wb5HegfiZ\\n\",\n       \"ubnrHEgeNO1uLx83nqLAXYj+PhYGisdfeN+SQ8jBsy8mlUpF/X5fk8kkHRZNYbdbs9nUBx98kI7d\\n\",\n       \"wX2DHHxMMxZzAcnj8TgdJxJjupAxYy5uhnFjNbo26/V6ci+Wy1e7xtjNe3BwkPKD8R1GgMe4sGU/\\n\",\n       \"hnT4WIj6hL/jtQATXLu0JR6L5Pe5DJEb8xG9xSLuMqVdDsbdMGGuu37mmWSId93FuJCUYhHdwOK9\\n\",\n       \"gIlofDlgiK47ZOOGneeEc/eyGwVc60ZLNK75PoISrvcNNv69g3BfU5CZAygv/r/r8wj6vf5eZ5+n\\n\",\n       \"/O1B575+uIGRKzd2REwEL+5fzgXu+gSL7JE/NyJQrpkFfCL4kGbnM/K6xvZwH4Mp904GDR3jPnoU\\n\",\n       \"TByccau8A4ToF/a2OEBwAMC9npDTmQyuiTvLPGaLQZiLZYsTxRfaaMHFgenxBy7jyCZF68Pb5e1n\\n\",\n       \"cswaK7PGUQ6MesmNJZ7nIMrPIvPnUDePY5pOp2nnGjFtkhKjguyl4hlu0Tp1ObAIAHq4ttFo6PT0\\n\",\n       \"VA8fPtR4PFa/3y/s7un3+8maHw6HiXWCFZmfn9fz58+1sbFRCGKGdTg5OVG73U7vr9fr6na7SbFX\\n\",\n       \"KhW9/vrrkqS/+7u/U6VS0ZdffqlWq5UWFuRE3jFig6gLKSLK5XKy3GM/1mo1bWxspBQNyJRga2Jl\\n\",\n       \"SqVSAmAe3O2ypj459sPHADrAYyqiFc/BwNL1OZ6DwaAwXviu2+3qnXfe0d/8zd/o3XffLYyZ+A5n\\n\",\n       \"nRhHcWFj7AFmfHEhsSNxJxE0utHGO/0gWTf+FhYWUh6x4+PjpPt8Wz6FdtOmWQZu/Mxj5+L85V2c\\n\",\n       \"M+l6yA0AB7jR+HRD0BdzB1XIzUGox+xEo5Rn8B0gwjc0eJlMigeR8xtA44HuDoKk4maXyITybJfp\\n\",\n       \"3Nx1agoHP15vX99c3m44O0BBVm5I+3v5389qpNBvThRQDwAW4zc3ZsAD/pn3aW58zTLm2XE6q9x4\\n\",\n       
\"Qs6ceyUCohwL4YtdziLxvx0g5RZBlEROUHHB9et5Zlx4vV1u4cRdGxQGBZOK66TrbMN+iGJu4nN/\\n\",\n       \"DqCWSqWXrFY/Zy1aZi7zuIgzIWYpPL6jTUxEFnNn9OI7ojWE3Px3LA5KvC+c2s6BSd8pFJksCs/k\\n\",\n       \"M6w8V6QR7KOA+S4mWgRk5erqk5iFfTKZqNVqpfxbw+EwXVev19OCSf9HOno6nSYQg6uNpIyHh4dp\\n\",\n       \"Ud/d3X1JpixuruQrlYoePXqkn//859rb29P29nYaA6PRKD3Xd5hJ0vr6ui4vLxNoe+eddyRdWdy/\\n\",\n       \"/OUvtbm5mWTk4HMwGKher6dF/uDgQJKS27HZbOrw8FD9fl+j0aiQZ6nb7erOnTtaWlrS7u5uem6t\\n\",\n       \"VlOj0UjB8O7O6PV6BVDqIJ6+BWhEpoffkX1gXNBPjUYjyQYwIykxRfx/eXmphw8f6h//8R/1ne98\\n\",\n       \"R81ms5B5fDAY6PLyMuUzi6DKx6iDLPqGccf72MXldfe+976JOwwBLuRCajabevHiRRrDKysr6T2e\\n\",\n       \"eNUX0eiqpt7Mm5wB7nWNYGVhYSHNEWek4n0Un4/MfTfg0M38eAoLngcD7CcTVCqVQrJixgrXoQsd\\n\",\n       \"cHHd5eXLyZgByugR9Jm3x92akfn2dcTDQZyVcX3F2HBQ48+h/hg9zlTGtd6BIbJy8BVZNzcEvH9y\\n\",\n       \"bkBvk5MhnjLGQ1acKPD54kDV5RZDQ7zcKCPloCIHnFxATPwo1FeBKJ4bBc3n/LiVEe99FXrl82hx\\n\",\n       \"0inxHhRqXLRzlLJTwyhtrPHImDEAvLNd+fCOaJlGoOGyoQ0RwLgrINdObwvWi8uTxccBSqyffx5B\\n\",\n       \"l7ebvs/1H7KIoNyLy8bBsFvpPqEcpPg7KN6mONldfs5cSMUcOLjqyAoNK4QbYDKZpOzl1NsXSp5D\\n\",\n       \"7iYYLZcfTMf5+bna7bY2NzcLB5ASi0U6At9F9utf/1rNZlMff/yx/u3f/i0pxfX19SSDxcVFdTod\\n\",\n       \"bW1tSVJy8+Eqcbnt7Oyo3++r2WwmV1y0okkCeXJyosFgIElaWVnRYDDQ9vZ2WoTPz8+TG/L8/Fyj\\n\",\n       \"0Uinp6e6c+dOYRdTq9XSxsZGmleDwaAAhKiDsz/Ilv7zRcP7dhbgZ0HF1ei7utziXV5eTvVcXV3V\\n\",\n       \"v/zLv+jDDz9MC5AnJ/X4Ltg1xhA6gQXQD8rlWCcHiPyOQCrqYFy/MfbGGQyP95SuAD8u236/r+Fw\\n\",\n       \"mL5zN5P3Pe+LYMbnDEZQjEcslUopWanfEwvpE6KOJHcb4FG6BiYOopApfed6z2NaeQbAKc5VBy3O\\n\",\n       \"PvmC70apg3hYJ5e3M/7EYfn8zoW5eH1cN/m1TlqUSqXCWGQcuyuU9zmz5V4Lxr3rLx8XDvId1NFm\\n\",\n       \"d20i47iL2cGSr/kAVAxW6uceIF8PMVhmlRuLkcoxPd5RPnmiVeXFJ/2rlJgPEn675enfRRDiEzEC\\n\",\n       \"M+9gt/KYBD4wQPQM5rjQ+mT0giIslUoFZer5g+JAdMDiSsHb6NS/g5VoBfrgd8siV89o6Uag4axc\\n\",\n       \"lBtK0+XmblIHcF6c5ckpTW+HPzOCea7le4CmTyiPr/B7HdjHxcd/A5oi8HMrtVQqpZQDq6urhbgp\\n\",\n       
\"4mWk6w0DLGrRIkdJLC8vq16vF5L8Sdfb6z1+6ujoSKurq1paWlK329XR0VECbtVqVdvb2/r5z3+u\\n\",\n       \"73//+3rnnXf06aefSrpSRLjmkLG7KZ4/f6719fUU7IwcG42G3njjDR0dHaW6IO/hcKg7d+6oUrlK\\n\",\n       \"Hor7ULpanEejkba2trS0tKTj42PNzc0lQNftdjUajXRycqK1tTXdu3cvxeyQH6rZbCZXoh/Jg6w9\\n\",\n       \"BQLyctYslzrALVrfch7Hrh+FA/hksUaGP/nJT/TRRx+lxdQTrp6dnaUxQlySxzH6ppZqtZr6Yjgc\\n\",\n       \"prEdXUK0m/EYxxQ6hb6NBgDhAnNzcxoMBml8k1eMcx2dhUOfOgMa57dnAOc+mC83TNzQQYa+4Ho7\\n\",\n       \"AKbOnvhvdLT3E0HkDqi8Da7XeBbzk/a5nnWd5TqM+2BromuL692QdN0OG0ecjzPV1NHXyhhIHvWh\\n\",\n       \"/x1BepRbpVIpJK8sl8vJm+Jy4b0+h9z4ZL30+CnXkV63uB7nSATGEYYM49XHYfQUOIj+S+U2/cFt\\n\",\n       \"uS235bbclttyW27L/7LcWIyUU4QULBO37vk8d38skZbmGe7Ci35P7vPfkbmJriU+i/WI9ZaKO344\\n\",\n       \"YwtrA2Tt8UcxUN0pdtiKGMjHdTnXpLcHi1e6tkz9x60PD6qNLI/3ndcBVO80u9cBSz7uwnDLF3eF\\n\",\n       \"WxO+ldrlS8wK78n1UbR8uQ8Llrp7H2AZUdec5YNbIBa3GiNzF92XOVerM5q8BwaAA43dJePxJe5q\\n\",\n       \"WVhYUKPRSBS5W4kkhsS91e/3C1mou91ucsdtbGwkGRMDNT8/r88++0zvv/9+YSegJJ2cnGhpaSkl\\n\",\n       \"+pSuY96m06lOTk5SrJZ0xUh98MEH+vLLL7W3t5d2WUlXc+bNN9/UdDp96Vy49fV1DQYD9ft9ra+v\\n\",\n       \"68GDByqVSsmd+OLFi7Rt/Pz8XPfu3UuyYdciMT0cU+N9wbmHPv7ZIYcrh9QM0jVDAuvqTAZ9BOPm\\n\",\n       \"Jwz4Nm7k/d3vfleS9PHHH6cYIuYOLGNkij0mDVYAZpN7pesEpeiZuDvJYxhhc/iO96HHfHzzu16v\\n\",\n       \"J9chcwHrH6aoVCoVErnCftEfHrTONZS4yyo3ByNrHOeZs81+P890pi6GX9B217fuqor6hHFBH56e\\n\",\n       \"nhb0tG8cQeYUr0fc1ODB2Yw7Z85h24jNoh3oNR+r8Z2483Fx0n760IPcaT/rAN9TX1hU34zg7Yiu\\n\",\n       \"Z5eNB/C7bqMdET/4/XF+0L9SkSGO7FMMu+FZHg+ZKzd61p6XSM9KL+9K43tfFB3Q5J7B/fzEhU56\\n\",\n       \"eUeDu/yoK/e50uF3nIw5UAM1jyKJgMip2/g+BjfvdpcJ7iYW/5xbKbo1+SzStj6YiLtwCtn7gN9x\\n\",\n       \"oPr7HTS5C8Tlx/uoP0DKARLvdjeWv496xnFBO6K7gD5jcriLlXoRMO4FGSHn6EpEMXCNj7EccMr1\\n\",\n       \"U7lcTjEl0jX9Px6PX4qxQJ7T6TQtjr6dnLPEeBY7xarVqkajkV68eJEWBZ5JTNb8/Lz6/b7G43GK\\n\",\n       \"O+p2u+p2uylA/eLiIgWNf/LJJ5pMJmm3Xr/fT66nWq2WXA3n5+cp8zb1X1pa0vb2dprb1LdUKmlj\\n\",\n       
\"YyMFkQNeJOn+/ft69uxZajPKn/azK/H09DTFTtG39Xpdr7/+ejoTr1arpfE2GAw0HA7T0SoeG/H8\\n\",\n       \"+fPC4uKbO+IGg3q9np7pQCIu0Mi2Xq/r0aNH+vGPf6y7d++m9hOcT7wa7/PjOkir4gv7ZDJJ7fL3\\n\",\n       \"+bhkAXMXlYcC+DP5HEMMUO+yqFarKf6JnWy0g1xo0S0TF6cY3uC6OQKfaFC6geHueR/fDhSje348\\n\",\n       \"HqfjQHDxutzQNa5XqDNz1PWj1991lwc/O+hxfTGZTArxOw4wmevVarUQfkA/siuVthGHJF27ganj\\n\",\n       \"xcVFwXCJa6XrWtqIcRL7zOMHXUdFgzKSBOPxWLVarQBWfEzmwmDQ24BPH4fI0o938uKbmbyO1Cm2\\n\",\n       \"3denWeXGGCkmYQQccSB6ieDJ75GuFxX3z3pQ3iwAx/OibzjH1jh7kmO0fOI6s3B5eanRaFQ4ssGt\\n\",\n       \"1tjGHGKOSsGBI8jdFV+Mf/IB7YsuStrl6Qye3+fvjBaft3cWqOJ7X4Sk66DTGACbezeFfong2tvA\\n\",\n       \"fXFLsjM3sX+jn9yfQz/E9rm8aa9PRrfk+DzGFszPz2tpaUmtVkubm5vpPq5xS5H6oJgnk0kh+Z/H\\n\",\n       \"7GAhelzS3bt3NRqNdHx8rPPz87RrjzHx4sULtVotLS8vp/tarZb6/b5OT081Ho/1hz/8QR9++KGk\\n\",\n       \"q2Nndnd3NTc3p7W1tQT8GF+M9Wq1WrAIy+VyauvZ2Znm5uYKixiLPYwN+adWVlZ0cHCQ7iVuBfBW\\n\",\n       \"rVbTjr+VlZUCS3R+fp7ivV68eJEC05Hz6uqq+v2+ut1uYvYYC91uV9VqVXt7e6pWqylZ6eLiYprb\\n\",\n       \"EZwTy+a79ohzm5ub04MHD7Szs6Mf/OAHevToUQqo51w8jl/xnWIsypPJJC3MjLVqtVrIxzULSMXv\\n\",\n       \"nGHGGPBnELPiDFHOIAIY+DuRR6lUSjuRee4sHUyhjtHIjgHY/h1g0AGTlzjnkWluF5df4+tBbLs/\\n\",\n       \"O8rX2S4/z9CDs31xh7lyvcm4gGmGIcbopQBmAb7O6GMUjEajpBMcFCFT+j8GYUd96H3IZ1EvRk9K\\n\",\n       \"lKXPGV9TfY3wcQoAjOBTejlJNv3KdzC8MUYK4Orrohv/eJFmlRtjpKTZOR+YtDF4PLIOs57rHZUD\\n\",\n       \"TxQGOJ0dF0x/Z5x0fOef5awsB10XFxeJ9naWZJYC8Xq6fLx9HsD+lwAfVib1woLIMVbNoBA5AAAg\\n\",\n       \"AElEQVT8H9vktDsT2GXijFGuDX69W6xMiOj287r4s/iMCcrf3mYHz66IYwB5rAvyjPKj0AbfKeUB\\n\",\n       \"7MjWAZvLE6XqYLnVahUAA0yFM0ywTNEyAizB9khXSmA4HGo8HqvdbhdcLQSRz83NpUOIed/i4mLa\\n\",\n       \"7dbv9zWdThOQIMAZmQ6Hw7TFfXNzU0dHR9rZ2dHR0VHKui1dAYK1tbUEdnI0/fz8vFqtliaT62SY\\n\",\n       \"WJXSdWZpXHflclk7OztaWVnR6elpAl8csHz//n19++23qW6VSkVfffVV4Z1ra2taWlrSN998kxa3\\n\",\n       \"ra2txKhUq1UtLCwk0LO8vKxf/vKXkqR79+5JUsrQ3mq11Gw209i5vLxM7tLLy0vt7+8nQLG+vp6S\\n\",\n       
\"lU6nU/3kJz/R66+/rmazWdgphsvW5wxjhnmPLqG+PJOxG4PC3XXF/7NYZZ//LKIwcoxFZ3oAL+TG\\n\",\n       \"ikYbCzPsVHwH/eosDgA3GtgReHAv13JNTq/6LjhnzaOR6Bt7pGICTC/OwvgpEhQHZ36KAQYF48X7\\n\",\n       \"wTdA+C496QpkLS8vF9IveFvH43Fy9cZksLQRttrXIeonvRw+47o+rhes1bMYIOofx5p7UtCbMYCf\\n\",\n       \"9zkYRn5sbvA6OVhjjEQwyI5b3K6UXKZ36unzMldubNdeXPidpZKKYMQXRQdb0stb1x2URNbLQUjs\\n\",\n       \"1AjauJ7nxEnqVkmcWKDi2AYUERYqFrKja97hbJUj9bgIUeLgjjKKCsD/zsU7uEuJ334v1gz38hxn\\n\",\n       \"BN1aYNLE/vH2OdPjQIqYEj7PTdY48X0x8rHDd0xaxohbH0xaLHIHp66U/D2AC2el/J3l8tVOGg6K\\n\",\n       \"dgW+urqq5eXlQpZjwATgCCDuCTqh5d3a9fpcXl7q+PhYnU5HS0tLCYQ0Go0U34OVS+l2u9re3tb7\\n\",\n       \"77+vb775Rufn52nXnscEEbtFXqf79++rUqlob29P9+7d02g0SjvsOJqmXq/r9PQ0ue4YAyh7EoXS\\n\",\n       \"dsYeliPAhvsAeLguyJvDvYypxcXFgrsUpocUCp7QczKZaHd3V0tLS7p//74uLy8T67S9va2NjQ0N\\n\",\n       \"h0O9+eabOj4+Tu9bW1tLB+SWSqWUdkG6isk6Pj5OuwRXVlb02muvJRlsb29rc3OzwC7RhsFgkHSF\\n\",\n       \"L7YsyL64O7uB2+fi4qJw9AhWuRth0fD0xcf1jt+HHH18X1xcaDgcpiNicE3htmEeA7ioD7o4GqSS\\n\",\n       \"klvT6+7y8RL1irMqnlID4BKNwxzbDpBzYywaSFLxwGOvvzMgl5eXic3lOweiEdDCYJPdPMrHTyRw\\n\",\n       \"IxnjEqAUZZvbeUeh/nH9Qk86MRHXC5ez61M3VuNa5nL1tcP1aAwHYQ1x74X3oRvXDu54Pu/LxYf5\\n\",\n       \"nIhEx/9zrj13OVFyHRcBkX9HcRdaDoz5gI/0awRITuNG4BCZnhyTkwMKEYxNJlfHfYDSuQ/wFBdE\\n\",\n       \"961HEOXMSZSf5yOJbZGKE9cDq/06romWqU9qR/HuPnpVX0Uw7FZyToH7/dTX+8V9+7n3RoDj488X\\n\",\n       \"Xq6dZZnEz/xzmAGnlv0aFJQrGGc61tfXC3JlUaxWq6luvMOpZw/69PPhptOrRHQeIAojs7y8rFar\\n\",\n       \"pZWVldRnyOvk5EQnJyfa2dlJ7IMv7hz1Ua/X1e/3U3+Nx2P94Ac/0L/+67+q2WzqwYMHCfSsrq5q\\n\",\n       \"b29Px8fHybJ1sD2dXsd4EeTtckeBN5vNxLqwmI1Go8RkEawsXbk3Wq1WcoksLy+ne2GfUY5vv/12\\n\",\n       \"Ss5J8tJyuaytrS0dHBykDN3Ly8vprLvFxUX1er3kaiQf1tramkajkY6OjhLQ29ra0tbWltrtdqrr\\n\",\n       \"o0ePJEkPHjzQysqKut2uGo1GgVX09Ca+oPN+xtTi4mJyTfAd90XWARkAzOOCEQ2VaOGjc3KuL8IX\\n\",\n       \"YEvctUsdMb58caQ/Yhwlz3QdGHUG88H1jjNu/hljiXrnGHnGGqyF3w/wyQVG+4YAN74uLy9Tugfm\\n\",\n       
\"Y9Rx1CGCGt6HG8p1lQNpAICve9zLc/075rvHFzLe3F0GIOE719tuXPv4ikHZORIjpx9zfebXOHBy\\n\",\n       \"uUWm0kEqfZHzGNFu17cOsNyVR91yBEaq58xvbsttuS235bbclttyW27LK8uNxUhF/3V0jznDEl10\\n\",\n       \"UtFajbTjrOc6LRn9sfGeWeyY18198Xweg92iq2E6naadE46icduw84jv/IBRp0K9bjwr1ps2I6/I\\n\",\n       \"BHhf+H0xwNktDiwWrEqnPB25R9nwt/c9snE62etMiS49ZxGhi5G7x3A5W+fWR2SkIruUk633Z2TG\\n\",\n       \"vJ+x1tzS9oKrpd1uJ9cXiSyx4huNRiFOqFarJRePW8rIE8uScUUbCU4mlob6Xl5eptQBxBhw38rK\\n\",\n       \"SmLKWq2WPvvss0LsDZZ3v9/XyspKkvdPf/pT/dM//ZP++Z//WZ988ona7XY6YFdSCmzHNcb7iO9g\\n\",\n       \"DnBen3R1XAsJHM/Pz7WyslJweeO+4jnuNmg2m5qbm9P+/r5qtdpLAc9YpbgUiTdpNBrJyr93715h\\n\",\n       \"R9/Z2ZmWl5e1tbWlo6MjbW5upjMDcSG+9tprOj09LewgbLfbeu+993R2dpZioWCk7t27p+FwmHZI\\n\",\n       \"uruHDSndbldzc3Oq1+uFecVZcoxjZ5uIRXJmgvsY/9HCdgYkMuO569wVw5iDKfPn08eUONf4bmFh\\n\",\n       \"IW1G4H9cY7C1pE2AjYJB8PQPzmTFbfcwsLCfOV1D+3zuexJR3zSCvEulYqZvCmwS7cgFviPLqGvc\\n\",\n       \"JZZLH0Aak0ajUWBMiN10t7jrLMIxmPvOxqPfyIbuSTd9fXWGyF18zjrSBt9gE9dw5izPi54I17fO\\n\",\n       \"SLkrPvahrzPU3dtAXVzvwdwiIw/hYI3IzQXKjR0Rk/vbK5obUP5/BFIMpAgK3BXnricfNO4ik4rb\\n\",\n       \"anOgyoGAd7RPiAi2eC5t5JwsPmeL68LCQmG7KvVmsWTw0TYHQj7RUCbuWnLKmffy/Nh+H3A53zCK\\n\",\n       \"zb9HmTmt7TS2pylwBezuAeTj9H6ki2MbKQ4yaVsOIHk/RJDubq44IePCk3M7UmIwu4OrWq2mVquV\\n\",\n       \"gpHZhcOus3K5nOJryMe0sLCgfr+ftrxLV6Cn1+sVYlc86z11pA4eSLq0tJRSIJRKpbTl/vDwUIPB\\n\",\n       \"QIuLi3rvvfc0NzenP/7xj0k2CwsLuri40Onpqcrlsh4+fCjpKg7mD3/4g/7mb/5GP/nJT/T48eMU\\n\",\n       \"iP3aa6+l3Yaj0SgFuEtX8yDGUQDkOGPy9PRUlUol7aLjuk6nk44CKZVKKdUB3/d6vXS8Tr1eL+RQ\\n\",\n       \"YwGpVqs6Pz9PQeqNRiMBrJ2dnUIG8efPn6tWq6lWq+n58+d69OhR2jWI+5D3EA8nXc2DRqOh0Wik\\n\",\n       \"u3fvamVlJbn9mCPlcjmlnWA+cYTF4uJiih9jnNVqtcLxQb4zi52o6BPGvI9Zro9g4lUGqeuZ3Lwh\\n\",\n       \"TxY5z3zxRp/4pgzGov/tcW5kRwcQcai3z6+cEeSAEBkzvqiLH/HjOsRjV12nOnhE7/rOQ3SX52vj\\n\",\n       \"uxhG4bqd75Fl1BezAOjFxYX6/X7Sv37kD/d7oHkuhMFBDmNiNBoVdJdv9oi7OP1ZzGFPpUCdfYOJ\\n\",\n       
\"63gHUBFoIWMfM5FA8L99DXH554Ab19br9eRGjwefu8wwSnNrCeX/mYScCNXZHf5GqB4r5H59BxPO\\n\",\n       \"OknFAemLCgLnebNYp1f9HYFWfMcsZotO8h0pgCmpuGDTJiwMX+gdxMV3uYLMLfgRXcdB7GxNjMuK\\n\",\n       \"fmvfWeZH2Tiqd7AZg0QdtHmbeaYra1f63j4UucsGQIgF7s/hubTD5ZHbeOAy8wB2Z8d4N0yUB8fy\\n\",\n       \"PWfKOetUq9XS0RONRqMQBI51CAiTrpNfkqSSg3Ynk+sUCFjs0jVbQyEoGiC0u7ubxuLi4qL29/dT\\n\",\n       \"rNJHH32UgsVZtDmmhmM/JKVUAp9//rnW1tZ0//79FFv05MmTdN7dYDDQ6uqqjo6OJF2N05WVlcSa\\n\",\n       \"Ef8hXTE5n3/+uQaDQQKdFA5Hnk6n6azAk5OT1P5Op6Pj4+PEhngSTPoNgIRlLynlOgJYwWwxLhqN\\n\",\n       \"hg4PD7W9va2dnZ00Z2GGlpaWNBgMtLm5mdiTubk5nZycpP53UEdaA7ajOxvtfUVsmlvq5Lwi31Uu\\n\",\n       \"nQqpHpyN9E0Uvgghe58zbs3zG7DjDKBvkmDXo4N6AG88mBggyzPcMONZ8/Pzhdg3qWhEOosmFRNZ\\n\",\n       \"+rzkHc6y+K48wEBkQKQiG+vrUSzOZiFv6ueAC7nwHgenPMe9G9EoBJwBPn3XJmsLG1Pi7jT0FzFb\\n\",\n       \"sY9dH8aYo9zmHTeCWB9z8UxRz+aCyGNx/ZoDUoxVnxcY6nzmYx/miX6MORl9/Yj1jMaylxtz7TlT\\n\",\n       \"JBWDvyPI4joXWGQiCLp11CxdT87IELGwuuUewZlbynFi+KCIgYyzAJjXyf+HNqYNkTaN9CzFd6BE\\n\",\n       \"MOagxJknfwaLM5MtHnoqXU92X4jdredACgBBPiR2nEhK1mR0fXqJioa6OECNQLVcvj5XSlKBXmdh\\n\",\n       \"Rt5x4lJmWToAtwjMseLiThsfa5EBcPaOzN8s1ix0c3NXKQvW1taSUjw4OEgMTaPR0PLycnp2v99X\\n\",\n       \"o9FI9YhAEvkhV2dkyNLdbrd19+7dAogmPcKf//xnTafT5KL77LPPUsLNarWqdrudFsZms6nJZKK9\\n\",\n       \"vT1VKhX1+33dv39fkvTs2TMdHR0l4HJ6epr6C9bJ3XQRDBOAvbi4mNoOqzQcDvX06VNVKpXkAkQ2\\n\",\n       \"JDCdTqfq9XqFjOrj8VitVksnJyeJ8eKdnjuq2WymNt67dy/lxHrvvfd0enqaZMpuzGaz+RJwnZub\\n\",\n       \"U6vVSuBxc3OzkJhyNBoll1g0gAighxFAhufn50kG1NkX/hgADMhiDrprzMcKAAmd4Kw548i38FN4\\n\",\n       \"N8bAZDJJQNqZ8Og2kZSyhQPeXId5UL2zEvSZy8t1pq8hzjr5WpErDgR8HtNXtMcXYWfYo25Bpr5G\\n\",\n       \"uPHL+yJL78Wv5x3cBzMVE6tSf/SXyzSCUd6L69HXSGf+fL32NSzu1va1MQeE+M4/yxEkcYOQ6+ac\\n\",\n       \"68770L1TzrgRtM96621Adk4WuNxfVW4s/QHF0agzSpHpcUSbE1xE81JxJ0BkZ3IAx5/ntGPOzeiD\\n\",\n       \"JT4jx6b4NZHNYEIwyH3iUWcW2lynspDG7cFeF6eQkQ2f+zX+nQMtL+QD8ngDr+dkMnnpoFBJiTbG\\n\",\n       
\"VRn7IMrE2+9/u3wdODu1jPLMKThXsLHvXFnH+3zbMHKPIMzl76wQ7WNR80URGQOeut1uAk8wRMvL\\n\",\n       \"y2mM40765ptvUl4y3gtbBYhl+ziLvKTEiJ2fnycGCcZnc3MzuajW1tb0+eefp636b731lp49e6at\\n\",\n       \"ra20Q5B8SJPJRBsbG3r8+HGK9Xr27Jmkq4Xy4OBArVZL8/PzOjg4SIrLY17q9bp6vV5ix87Pz3V6\\n\",\n       \"epp237EDSroCUiS4PTw8TOObsUiM0tHRUdr5GN81HA7V6XTU6/UKOygBbMgQUAXwYoel97/3Wb1e\\n\",\n       \"T4CUvsAVurGxUdiqDkNydnaW+tF1FSASl68fZYPBkmNH3Z0ymUzSfaQiQDd6pnbmOzl2HJj7tnYM\\n\",\n       \"iWjwsQgvLy+n7Pb+nRtlnljUM7TD0lIf+hvWhkIfue53FsWNUi/oCGf/XScCGHKuNZ6NDnI2y0GR\\n\",\n       \"60w3DqM+LJfL2Tgtl7e7Fr14HCtydkDM0TC4mf3gc57nTA2/Ac/uPeG+VzEygHlnwakfMqHe0bWZ\\n\",\n       \"Y3xcp8d+BETlXJbUBZzgOjmGdPg7Z3leeF4uts3Ljbn24t+RzYm+TkpcfHM0Y6QcX1UPBzpxAQWd\\n\",\n       \"guy9XpEx8+flkLkruxy4inFSvuUcpQ1Qoi7u7nNWSnoZlLhcpGvXn3/ukwwQgO/dQZaff+TUqLMu\\n\",\n       \"TDwGJwoSK9KtgVmgxutOXSN4AUx5P9AGB1q+yLpcZk0s5DwroNzfkyuAXq+bM1goOOk6psVlR+4i\\n\",\n       \"4p5wRbEpQZLu3r2rg4ODQiyMA0lYDNxOjKm9vT3t7u4m96Iv7E+fPlW5XFaj0VCr1dIHH3yQsp5/\\n\",\n       \"/PHHKfblzp07hfQHJOF87bXXdHh4qHv37iUgQXwQGdidybm8vNTJyYmq1aqWlpYSuHHZkgRUuorh\\n\",\n       \"Qn5cQxZzADzy73Q6Gg6HWl9fV6lUSvVZWVlJQMzjtnjXYDBITFG3203f+fEh0+lVHqu9vT1JV3Nx\\n\",\n       \"ZWUlMWCueAkmb7fb6XzDmBsJ4O/noqEPms2mxuOxTk5OCpnbfUz6/AEcxAB7iusvNjDwOXLGxReN\\n\",\n       \"K+obj3rhXvQfAJV7ACjMgQj6kIEv+gSv49Z0dtjdePGZ7tpDNhQHY9Et5HPevRg8IxrlOUOQee9t\\n\",\n       \"8Lr7WkJxNscZGgBN/C664hiXXj9csLzb+xFdCIDN9bkDKorXjTWH+kc3mt9HuyOQ9M8iGHbihLbG\\n\",\n       \"a/06lyUGNG1xeSF/Z55oUw60eVti/b3cpj+4LbflttyW23Jbbstt+V+WGz9rzz9zZB9jU/w+t/Yc\\n\",\n       \"4ef8svHZXtyi8Ngbtx6im83r7e5Ify/X5dgTvoufY8mxiyfGOlF8G67/dutAKh6D4vEN8b1uKbql\\n\",\n       \"hKWCLHiW76BwFoXvoEj53PvJ73Nrz/s6bvF1tixagN6WuLPQ5RDZJi/eTq53H7l/h5xol1tDtAmX\\n\",\n       \"AnL17+MOKq+DBwq7TGEkCLx1BqFer6e4qtFolCh+6YqZGg6HGgwGKb6IY2Cm02nabt/v93VycpLi\\n\",\n       \"mVZXV9XtdvXs2bO0sxD249tvv9VHH32k//qv/0pB8wSbj0YjnZycaH5+XicnJ9rc3EwxYGQzx2UU\\n\",\n       
\"4zDm5uZSwPd4PE7PdHfQ0tJS2qVE+/r9fspaXi6XE5MkXbFuJIdkZ6AHeBM4zvNhndi0UCqVdHJy\\n\",\n       \"osFg8NKhrsyBXq+XXGacL+jZ351V3traKiQp5b1cs7S0lA4C9nHA4cfD4VDVajX1BSVa5BRYKbK4\\n\",\n       \"x7ntbimKu9LcVcf/uMR87rpudAaj1+uljPgwStQnZm93necsLjqLjSvO9uMuGo1GqS0xLMDnr8sV\\n\",\n       \"l6WzM7Sf+9zF58+mzh7n5euSewK4lrZEj4W77Vxf8Zn/758xT3x9I1Yq9gn9iEzZnYb7PDJyfiyO\\n\",\n       \"93OMp/O64bnw58VwGG+Hx4u5nH0NdhnPWr/xqsDoI1PWaHS/uw4jE+XF5ZbDHq8qN3pEjPun3f3E\\n\",\n       \"dz6IvXN8q2903zld6ILzv7kuuuR8wfVnuBDjzo5IK8b3xLpFwBjrzAGqEZz49fE+qNgczUm7oH5z\\n\",\n       \"rqp4X44ijyCGAc6ZTXyHK5CJkMsODNjIKSnqk6OUZ/nDXVFEZRgntFQ8UDjGUHkgql/n97vS9B1d\\n\",\n       \"sS0eixFpaV90AEiMG3fvlMtlbW9vpzQIfiwEu+iov6cVQKkuLS2pVqvp7OwsxSxVq1Wtra0VQDLf\\n\",\n       \"vXjxQtvb22o0GumsPgDR/v6+VlZW9KMf/Ui//vWv1W63k6xqtVoKIh+Px9rd3dU777wjSfr000/V\\n\",\n       \"aDRUrVbV7/c1GAzSM5eXlxMgKZfLyR1B+0qlkjqdjqrVqkajkdbX1yVdufZIf1CtVpPb0OXmO8U8\\n\",\n       \"SL/T6Wh9fV2tVkt7e3tqNpupHaenp1pYWFCn09H+/n4hoH4ymajdbmthYUGHh4fqdDppASmXy8nV\\n\",\n       \"iXLnSJpGo5GymhPk78HPABvGFAvZ8vJyWtzYoRmPLJGKqUgYT7jdoz5yoI9LzMeuv8MzYpdKpeQi\\n\",\n       \"nk6vM9H7/CIHF24fAHG/3087yOL5dcxfAJXPFeYkOy/dYMTtie7y+NAcwPG5y7z1sADeG12CLlPX\\n\",\n       \"DVEnMt5ybiDqGF2CXnJhA9GYm2WIo2/pNzc6yYPmIRjoYNyH/jmbJnzd8Ho5+ImAzceUgyXuibGx\\n\",\n       \"LovorvT+i/o3gpoYv+RGu6/VOdn52sp4zgE3xwe5ciNAKsfSxMBct0z43zvKOyAyQ5HpYtBHNA17\\n\",\n       \"Qp1y90cg5ZM9x6rxO8eexMU0AjcUwtnZWWp73GLs9QMYuV/YZexWU87CQTYeb8Dn0ece5T2ZTNIW\\n\",\n       \"emerqK+DXZe5x1J4wK3X260tnp+bRMjMA1EdBPl1OYYTWXrf+I4mZ5m8n/jbF7PY92z5jkwZz/Zn\\n\",\n       \"ETjLZ8T8UOr1ekoZ4IzHcDhMC2/c2dnv99NuMIKrHfCenZ2lNjabzZRYcjweJxas0Wi8xHL96U9/\\n\",\n       \"0ocffqjvfe97+t3vfqeNjY3ULoDX0tKSdnd30/EpDx480LfffquFhYWU6yla4+fn5+r1eoWFliSc\\n\",\n       \"l5eX2tvbSwHXFOK4Tk9PdXp6WkiTQODyZDJJweIebF6tVtXpdFQqlVSv11NcFuOh0+mo2+1qZWUl\\n\",\n       \"PRNZnJycpHgrz7NTqVQKgfLEsrVarRQXValUdHR0VOh7mCh0FIlap9NpYsSiweBxUNHQ8kLgsxuh\\n\",\n       
\"FMaLL+LEyw0GA43H45eO1XHwHVmG8XicAu4PDw8TI3V6eqrRaFR4htfVAZ0zNj7HAIWA00qlopOT\\n\",\n       \"k7QLK7e4unziAcBe58hI8W5nYCILhfz8ep7v4DTKPzJMDrCkIoj0Y2Fi2zwONbJYvnMZ2Xl6E2cX\\n\",\n       \"o25HbnE98fWK7xzY8T4/fskL66XLIm6eijsw4666CM6oO2kZ+M7X++hRoJ3udfD6OQDzdnnf5MqN\\n\",\n       \"ACka4gJ3cCS9zPRIxQU957abBWxmuQn93U4ruvspChUUz6CIuxAc5efa4J+76y0OKLaokiDOF163\\n\",\n       \"rgAsUbFRVxZoH3z+fe47B4IRhfv1Dpx4Hp9HgIEyy2X7pv3xN+/LKUGvD5alKxXkG9NkUBfaFunq\\n\",\n       \"aJFGCj0nZ68b9XHLN/YJ1rTn1OFzTjSHsQFknZ6eJpBBfT0BJYsf/7NhYTQaJYaL+2q1WmKTjo6O\\n\",\n       \"Up4o6XrXXq1WS2yW57uZn5/Xr3/9a/34xz/WgwcP9PjxY0lXLsFms5mykddqNX322WeSpHfeeUdb\\n\",\n       \"W1s6PDxUtVpNzIaktOtwPB4nBgr5kvxzOBwmMAjAnE6narVaajQa6vV6iSVhwTg7O9P6+rpGo5HW\\n\",\n       \"1tb0xRdfJNns7Ozo4uIiMU6VyvXhy2SY7/f7Gg6HSRa0fzgcpms9NxUuQRZ7LHuuI/AfEBcBGODS\\n\",\n       \"3Y24JH2MUdCdjDVP5AmoRj85WHB9xqYRd6XBXkS9B9gdj8cpU7zrb+YtTDWuXtrIgdy+GErXrj3c\\n\",\n       \"uJHl8va6cU2KCgwGX+SiPvWxn2PUXb5xIafkwjuiPkF+/n5n5HOAl7nv45428C4M5ly4R2StuAdd\\n\",\n       \"yLmu7vHwPHde3JXO3677Li8vC260yJC57vc2UrdYT38/oC6u2c42er7C+NychyquM85GRRAY1/FZ\\n\",\n       \"BMmscmN5pFAALpRIm8YFK+emcQCUE5yzBjlQwPc++HOupPjuyIJxPZMzUoRex1gPJpIzVQwyXAHR\\n\",\n       \"kpKKh9ZG1IzFEAdZHAw5CzEqsuhT5zeT1d1qruzZSs3fsT7uFstNagqLQW4iu8wcaPO90+LeB67Q\\n\",\n       \"3c0GMJzlKuDzCOhcbihkbxMWJM9nt5h0tWDW6/U0Xk5PTxPTs76+rkqlkoDU8fFx6mMWZIBE3GWD\\n\",\n       \"WygqNrKX3717V++++67G43ECPU+ePNH29nayvuk36Sq+olS6cvH84he/0A9/+MPEQJDZu1Qq6fj4\\n\",\n       \"WGtrawlwbGxsaHl5WZ999pnef/99DYfDxOgQq7S2tqbDw8PUTr7DeibBIDIlj1C73U79vrS0lJ7r\\n\",\n       \"QLRUutqx50zP4eGhFhcXk8uQHWZkDO90OikDOUAT9ypjsdfrJYYEcFyv19Mi7WARQEr2cgAhu6fO\\n\",\n       \"z89Vq9USy0jdfVy5geIWd07xs+jEWKiYxwlGjDYwj2CiuN/dRmxzx1UnXemdVquVdjQeHBzoxYsX\\n\",\n       \"kq4TwgKkHBTAqLreirrdx7XrBY5TYnz4fHNdF9l1xjR18bnhRxbFtcWZGP8OudBHrof8vXEdcjaf\\n\",\n       \"+nqKCPRWDBuIDBwAweOUGCscVwQ7ik5gjERGLK6l3kZY0EiCsH5hOMb1za9zOVCY53EtdYOduvvz\\n\",\n       
\"XMe6PvZxE+uRc+vFZ7uB4t+9qtx4ZnMq68Aqgo8cEpzFTPh9lEjR+vty11McrUa/uwM9f68v8v5e\\n\",\n       \"B1252CGvq1sG1NkT03G9M1X8OGp3atgBDP9T3+hqcjlRd7daqR/KyJUcn/E83El+xEClUinEDxEz\\n\",\n       \"Ehk03o2ijdZ1rj0RzDhT5f3p8oiWkPe1T1a3DmO/0W4K73TQ6xnFYZ+k6+SSyJkz1CSlWKbj4+OU\\n\",\n       \"/RoXU7/fT3ViAfZJ74uW1wVAcXh4qFKppAcPHujjjz+WJH311Vfq9/vp+IRarZb6kKzdFxcXGgwG\\n\",\n       \"+tOf/qTvf//7kqT/+I//ULfb1erqqiqV4nEun3/+uT788MOk2Gu1WmIr6B+Py8HNhpvT28N1uO2m\\n\",\n       \"02mKTSLbuqSUDbvVaqWxR7sZY+vr6+r1egl8SUouv/n5eT169CixMPR/r9dLR6D4+Ot2u4nFgFX0\\n\",\n       \"BKS4ttzNwndnZ2dqtVoFd5RUPIMyxxD5vHUm3DckxKNDYMcWFhYSuPNgesZ/Tic68EC3oTOYU8zR\\n\",\n       \"r776KqXNaDabBeDi9fF4QPSGMwPOWPFZLO7CkpSAqYMU1wvIl/e68edpW2AZeYfr3siuIXvYwMhy\\n\",\n       \"07bIcmF0U9/oofH+dmPZjWcMWHfd0z+AJdhv0ktQZ89BFw3I2BfULweGousuelsiGxX7LxdLyu+I\\n\",\n       \"A1iPXafTPuaCG9gRMM0Ct9Er5d/lvFqF9s/85rbclttyW27Lbbktt+W2vLLcCCOV84NH2s9RfbTK\\n\",\n       \"pJfPBZrFRkl/OUaK/ymvYsmcLXHmxe9zRO/Pd1QeGTF3F/q9ntk310be79aCdL2Tx5/p97q7EuvE\\n\",\n       \"LbMY60OJKD3Wh3uwUN2K4L0wUl5XLM/4TH93jH2Iso8smltApdJ1+oH4XLfA6V8fa96fbpG5tUMf\\n\",\n       \"YtESB5Qbd24V81xYClg8/77RaGgwGKQz9Zyx6Pf7Bfrft9Xzbqx+2nN5eXU0DNv/+NEAACAASURB\\n\",\n       \"VLv5Pvvss+RK/Ou//mu9ePFCv/nNb9Rutws7AwlY55iUP/3pT8m19f777+unP/2p5ufntbq6qr29\\n\",\n       \"vcQQvHjxQqPRSN/5znf0+PFjvfnmm4VUEH4gM+d/IQuez/xxFwW76y4vL1MyTwLDp9OpOp2O7ty5\\n\",\n       \"o5OTk8KzeD5xUCcnJ8lFSX0Zw6PRqBCsPhqN0u5EUkxISnFtHNexsbFRiO1jbFxeXhZciXNzc1pZ\\n\",\n       \"WUnsoo83DxFgfrubh0SMuNr8iCfGYHQPuWuOQHafF7QNViuyuDGmkLHIYc/I/euvvy4wb5eXlynj\\n\",\n       \"vVv73h9eN5cD7fTPYLDRXe6+jjrDC0wGzJjHllFc57vegrF5lYsuriuRifFnoveIRYrhFegw+j4y\\n\",\n       \"3vQFOoHvnYVEH/FsXzdhgmg/qXXQpT5unPmMupV6OKMY174Y8+WydsaKQqD5rF3m0W0X5eaeIR8D\\n\",\n       \"UR97G+J64mM/d6+XG01/EMGLL/qu+HONz4GK3MLl18RrEVr0d0dXTw6IuMuM4oPBO5L7ZlGE7Jbw\\n\",\n       \"2IroovJMuV5PlEKObuV6B6HRHeauPRSauy9zz+Td7pLzukXFIhUP8kVZOiBgkYnxaUxKJpaDRV8A\\n\",\n       
\"fOAjU+ofA1wd1MZ20g+4LqPCwN3CIufyceULre71c1eMj3XiMYiT8bgN6sd5ZJ7ZnN2dBNxyFArP\\n\",\n       \"RG7UEeVDgHe73U4uPOKAPvnkE/34xz/WeDzW48ePC1n2J5OJWq1WYXz+7Gc/kyT97d/+rT7++GP9\\n\",\n       \"53/+ZwKKjKfV1VV9+eWXeuONN5KL2uPjut1uOryWQ3wlpYznuNsGg0FyeQK+CDp+/vy5SqVSAoTj\\n\",\n       \"8Vj1el3T6TTFJVFYmDmrkENfpeucR2trayk3Dy466kVAvR/Ci/tpfn5e1WpVw+Ew1ZU6jMdj9Xo9\\n\",\n       \"HR8fp2eurq7q4OCgoB8cEDHucDF64D/uLOaTZ8PHJco9vquJcToajTQYDFLby+VyOv7JA+l5JmOb\\n\",\n       \"nX2ckShduUSp997envb29hLIRK68Z25urpA2wtvJLlPGm8vF9RfzEiBExn2+o63+29vvbkZ39cWY\\n\",\n       \"oXgWKz+5+BqKz21AbXSBUpdXHYTsYQQ8N7bHQWHuulyOLdf77qKkvnwenxU3UeXWtigPX/MiqKfN\\n\",\n       \"ubXU18hIPLixG6/1+nBtdB1j1Hhf+3XMj7hevKrcCJCKbBPFhR3LXwJZOSbqVWDAhR0X4Rwb4u/j\\n\",\n       \"J7d7y+vkHcDCnrvfF+b4DAdmOXYJ5ULOHJ/AXEPshreT9/tgoj7O0ETZuH+5VCql+JOc7GKOllm7\\n\",\n       \"HT12ivt8gDs4iLtpnNny4sCFNrkSnjUuXLEDcCi02Vk2f360RL3Qdx7z4XEVPGNhYUGvvfZaWoS6\\n\",\n       \"3a6WlpbUaDR0cnKS+ky63ogAMGDXG++LQNG/YxFtt9taXFxM+ZlOTk707//+7/rud7+rN998U0+f\\n\",\n       \"Pi0k1uz1eqltZ2dnaXH85JNP9JOf/ETvvfeeHj9+rLfffjvFQVUqFR0eHmp7e1vtdrvQf8T/efwT\\n\",\n       \"iv3s7EyDwSCBneFwmN5XKl3ll1pdXS0YACwcS0tL6Rw+gCjfVatVHR8fp5xcvnAAwFZXV7M7iZBj\\n\",\n       \"PJSbpKCTydUROLAX0vXuu/Pzc52cnGhhYaEQp8aGkbiQ53YzurHCoj4YDBLYQg4AeYCGz0tnjTkG\\n\",\n       \"B7l4igKPTYLRY74BtABEMOCMcQ+oJz6M+DL6h/vo8/F4nOLy+M5jhdyoof6VSiUlLfVzJt34cbnB\\n\",\n       \"TCMjZ8f8sxyz5HVx/YGs/FoK8qbdUcc7iIk6aRZ7ghxgbCIRwLNyejGuaaVSqXB+I3PBUw/4vdwT\\n\",\n       \"mUnaHdcSH2+lUqkQExfzlEXZ+H0RSHG950rz+/w53r/kP0PWcY3i+d6W3HiI5cYYqcgeOYiKSJXv\\n\",\n       \"vTFxh0ZO4H5fpPykIkqPwMgXzdyk4v/oTpo1ESJlGJkcf0cESx7E7c90Cw2ZudWAAnXZxOKDLLJr\\n\",\n       \"3o4I0FxOEehwTw7J85mDPs9cHCcoLgyXQ2RYfAL44M/JiHpGSjdn6VLf2CYOZEbGUtGV6u2Myozn\\n\",\n       \"O5CiXXNzVwcZLy8vJ3dSr9fTcDhM6Qg8WauzOsjJZSEp5aSKliesG4kw/Qy3fr+v3/zmN3r48GEh\\n\",\n       \"e3mlUklsCu0FxNRqNf33f/+3fvjDH+r4+FiHh4cJnJE4czQaqV6va39/P7Xv4uIiBYS7K0u6OhMP\\n\",\n       
\"ppSFF7AwnU5TmgG38gFjyARZT6fTAmDl3cfHx2q32wXGl0Dx8/PzQjbxvb099fv9BOaRnXQF6AeD\\n\",\n       \"gS4vL5PLlPd5biUysR8fH6f6EhhPADjtx40Ia+JuL85J8/HjDDvti/O1Wq0WNg9wuC0yLZVKaVcl\\n\",\n       \"6Upon+sBZLOysiLpWn+Xy2Xt7+/rD3/4Qxo3tI/rfBfwwsJC4aBiX8Cr1WoBJLixCVij3cvLy6lf\\n\",\n       \"B4NB2gzgoIPCAuqbiCjMcXeTIZtZBhj/+4acyMg7s+X9BLtHHzpzGg11b4Pry6gzI8ng/eZrIHKh\\n\",\n       \"bgSfA6Zc3r4zm3t97UA+zvTzXW7N8Hrm5OUGfJS3h7FED5DLgH7MuSf5cfZzFlDKAdJYbhxI5cAS\\n\",\n       \"f3tnOCDwweyf+//8nXtvfNer6ujv5D63TCMAis/09kj55F6vqrtfF60MR+xOO8f7uNcn8asKiN2f\\n\",\n       \"H4GGKw1KHNi5+jJR3RqIC0DOEgIkxFQFOTDF/65I+d9lG60l3s3kcmXlcgFE+ER010fsa28/MmH3\\n\",\n       \"FIX6AZhY7LDkWVA90Sk5lIg9OT4+LgAN2k1eJAABixPt8ASZsGylUknffvttSnopSQcHBwUL2McT\\n\",\n       \"7Ngf//hH7ezs6IsvvkgxSZIKC/Pl5WViQVjM6ANneZrNpvr9viaTSXL9eU4n3FbT6ZXrDIArXadV\\n\",\n       \"6Pf7aRcdfQWw63Q6mpub0+rqagGc8myMHZi1w8PDNKbYbk992IXqiw8A6uLiIoFBmEXGiOeBg7Vi\\n\",\n       \"YQNkYnS4BQ+w4jPfpQYAcZemg3Zip2JsGCwPdcoBCk5emE6nWl5eTiDIgfrjx491cHDwkoXP+Ped\\n\",\n       \"gs6kkXrAY6Y8/YbPRXdn0U/xIGf6zRdzxgJpMVy3M58B5jD9tG8Wy4PB6u2Nxrwv7BTkgpspMln+\\n\",\n       \"d3wvYyKn1328R1adfvb1wvUVLmQ+c93H/25oevt4ro8FXI8w57MMa36cdXNZ5tbVSJrEEtdQ3OAQ\\n\",\n       \"DuVyuTAOkRVj1Q2uWR4Myo259tx95MU7MS5IdFSOvXFGK4KzHGp1cBDBEt9Ht5a/j9/eDlcW/uP3\\n\",\n       \"+n2UHADJsVk5dsMRtS/s0rVF6otELlbC2+PFB7bL2SeuMz456nmWjN1SoN5Yq94uf2ccDzEFg/e1\\n\",\n       \"B3DGieHxVrHdLPTOOEUmh2dFxeegKrKHbgAQW+Z1ZSHAFePsCYkuWeR8PHi+GS+0ATZrbW3tpQWL\\n\",\n       \"3Ee0mffV6/WUUqDT6SSw8OjRI/3+979PR7y4S4ws5AcHB1pdXdX29rZ++9vfSrpKyPnWW2/pm2++\\n\",\n       \"SS43cgytr68XkpMuLCzo6Ogo/Q0bA2hyNxT9WKlU1O/3tbOzUwCgk8lEe3t7KpfLKQeU98/p6alW\\n\",\n       \"VlbU6/VSbNXFxUU6VoO0G34sC0BnMpmoVqsVguFZYKrVqlqtVqprt9tVo9FIwDfGazHWvH8kpWN1\\n\",\n       \"nFGLqTdgKXEn0vbpdFoINneWi884VodYrnK5rKOjIw2Hw5SGg7pwzXA4TGklGo1Ggammbl988UWB\\n\",\n       \"OY3jH+AQxzCLpqcxoF78jgHV7sql0KbFxcUCm039y+VyYuP8+VEfuJ7Luboo7gWgfa7P0RU5Q5a+\\n\",\n       
\"8jABCuPU3W2xvr5eevF1ahbYQL+78Ukf0Rf0P2CRseRMofclbfF3+LyL6zHPiW3n8wg+vU2RJYtt\\n\",\n       \"p66+5g0Gg5QPD4OD90W2zcNjcu8p1HfmN7flttyW23JbbsttuS235ZXlxs7aiy45qbjl05Eo1CwI\\n\",\n       \"0+nKiJBz1rn7cyND5MxERNZ+vTM3zkJFdyT1iCyNF0f9/r+70WL7Irviz3Y6lffCXBDP4Qh7Vp9E\\n\",\n       \"dsotIH8n/QFl61uTx+NxsuTdx4/FBXPjfeIxYlht7pKIQZ65MRNlwv/4+qPF5+xh7BtKjKOItG9k\\n\",\n       \"P93diwvMZe50+Xh8fcgorrVy+eoYHZI7eh80Go3CziTq6vFafvTIdDpN5+mdnp5qdXU1WZewVH4+\\n\",\n       \"n7sEkRf98/z581Tvv/qrv9Jvf/vbxLJQTxiX6XSqJ0+e6P3330+uld3dXdVqNT148CC51VzGjAt2\\n\",\n       \"E/Jetuefn5+r0+mksYUsSYwJ++IyJXM8FrufUUiMTr1eV61WU6/XK7hw/Pw2d7kgU+arxyzh8iNb\\n\",\n       \"Oi5FSYnt2t3dTQcQM86azWZKIIkrC/YHGUyn03TUj+90JZi8VqsVDl5mZyR95pnDYfjm5ubUaDRe\\n\",\n       \"0jVY+hzl47GCyI34KM803+/300aCzz//vHDWoOvZ6M733WG5mB2uja50nussD8/2eCTYHK7Bvcwu\\n\",\n       \"11lsDnojegNiLA/1ZE1wN60XxpE/09/HNTzX3cRRb8d0D9ENi070NcjlzHfI2tcRZwvRYd5+7wNv\\n\",\n       \"B+uWxyH5dzyLMA3azXPdC8C7WfMdL3gfuPzi+3xNoO0cW3R+fp7YXmSLq9xZOu8Xdznnyo3HSPnC\\n\",\n       \"5YMxN9h84vgOM3+eDzCnKqNwfeLOKgxOFnJKfPcs12HOt0vxyeH1jT5i6pkDOu5C83bxXQRl8b1O\\n\",\n       \"10a/eK5OFAeCXi9++0Sb9Z23sVK5znROfXzBQBky6aPvmzrlQBZtdFBHvXLyczrZY6GQl4MlL1zH\\n\",\n       \"ZIwuDL8vN24mk6szBQ8ODrS9va21tTVJV2Ot1+up0Wgkl59nRC+VSqrVappMJoXA4VLpamcZrqle\\n\",\n       \"r5dinXCF3blzJwVJUwAvvgGA2JMnT57o+PhYDx480N7enjqdTsrbtLS0pOl0qrW1NX311Vc6PDzU\\n\",\n       \"22+/LUn6/e9/r/39/RR8PhqNkrsQMEgQt8fPICMCyl0J12q1pAhZyPgtXcczLS8v6+LiojAPzs/P\\n\",\n       \"1e12E9B1fQKoJf5pPB4nQNjpdJJ7lTgij4cCfA4Gg7RxgL5/+vSpJpOrc/iI+aKfqPfCwoLq9Xrh\\n\",\n       \"iJzFxcW06CNj6Tr4m+t8LuB+Qx+6C5b7PcM+4BOg7ptU4qJHigXGLMAWF9XPfvYz/eIXvyjMfQfl\\n\",\n       \"xKjEkAx307iuBQSib3gm+sLnl8vA527UGeQGI5YoGliMQZcr/Z0zkh0k+G+Kt99dl1IxX5LrYAcS\\n\",\n       \"HrrgxXVWdI85gHRjNxqOHmLgcyjKk/7MuUEjkeCyQQ/6dZ4LDpDHdRE8x3f5cyjebjeAaF+Mj+O5\\n\",\n       \"fvIHxgt9HGNdGaezyo3FSDnAkfTS5PJdbc5QRLTI4HOrZ5aCiXXIoVj/n/f5/17HaAnwfbw23h9B\\n\",\n       
\"pC+yEUhxr6NyZ8vi5z4QmdBY/R6zwOSPu7n83T4ZKLQ5go/c79inKAQGpstnfn5e9Xo9WYIocBZP\\n\",\n       \"HyteFwdC8XsmDXLwAEhn7mLxseH9y+TKxR9g5ZfL17lwYEpcJihNB7T47OkPtxKr1aomk6st9eQu\\n\",\n       \"8tgy4og8jsfL6uqqer1eSnYpXSmyUqmUzoaL89APl4bNka62+B8cHKjT6ejh/9nRxzEg7XZbo9FI\\n\",\n       \"Kysr+uijj/SrX/0qtYHz8J49e6aFhYV0jIyktHWfmJxoffti1+l0EgAh2Ju+JUkodWUbPfmLYGZ4\\n\",\n       \"JgqzXC6nnY2SEgBhwRsMBinHFs+o1WppPLH7ENZoOp2q2WymYzh4X7/f1+uvv54OknZjDPah2WwW\\n\",\n       \"Fmr6pNVqvcRQXlxcJHYIoOnjGz2ZM8zYRQeocJBFDBvv8DkOw8kBxC5TjIb/+Z//0Wg0KuSDijFR\\n\",\n       \"cS57ey8vLxOQnk6vg+jdoGQsejwl7ZKUGCe/L74PMO9B+sjW3+9AivfHXcC0ywEo8o5y8A0RPCN6\\n\",\n       \"YCgOUKKxHAO6I5Ci/own5oV/F4sDCNoZGfrIzOWKr7sRADrL5aQJ+jKuHb62u072z+Oa64SLvw95\\n\",\n       \"U38/gzC3WzIG2Odklto185v/H8sslia6Lf5v7nEw4f/783Lo2d/B9/7OCILid3TmLKZo1jt8Usd3\\n\",\n       \"8LzYabPAoE+sOJGcYp1OrwPzInjMBWo7xctzc4MxFpdLtIAdFMf6475hMfBJSpbvyETmZBHZTVeC\\n\",\n       \"UlFhRyvPxwoTOwaMswPF5e7bwwH8OWofWXhQPbvTYHVYAE9PT3V4eChJKdXAwsKCTk5OCophaWkp\\n\",\n       \"HUrKOIQFKZVKaZFst9sFWhowNhqN0s4uZMN5egCTqIiazaZGo5GePHmihw8f6sGDB5KUDk4eDAZa\\n\",\n       \"WVnR22+/rW+++UaS9NZbb6W+63a7Oj09TYxbo9HQs2fPtLi4qFqtVkgQiTtrY2ND3W63YD2zm+/O\\n\",\n       \"nTtqNpvprEHaAWODK40M5rSBnEjIzXNeAebn5uaSK4Ax0mw21W63dXp6qmq1mvrZF3wHh9x37969\\n\",\n       \"5L4k672kJGvqOBqNCocrw4wRkM9iT9A4bfCAfT5nUwGg2cegu0ApjDV3+zDWAa4LCwvJJeb90Ww2\\n\",\n       \"dXh4qMePH6cx4uwK4zsyHa6/+d4Xa8aNt5P61Gq1wmLnQIh5mtsUIqmQnsJ3s+JKY065Tndg5oCX\\n\",\n       \"+vE+13+uRzC2XP+7PGIoRFyrouHphjLGAe2gzrTH74061HeuUWi760fX0dLLa6n/xA06sR8ovuZ5\\n\",\n       \"yAxj0Osxi5HzZ6MDeJ6z2xEgwyzzGc/if5dnlH8sN8pIRQTti3kEKJ5QMqJzR/VxMeV+f7dUzBvh\\n\",\n       \"CJdrIsKN9fe/fWDm6sX7/PscUONZkcl5VT1dls66xIGNBeM+aWepooKL7Y1AA/lGpcgPLJJbH7CM\\n\",\n       \"DMw4cZ3idUXkTEUcN3Ey8z/vyQFwv86vl/QSUPLfcTcKypFrsHLoo5gIkd9RwRPDtLy8nNg42kuu\\n\",\n       \"oqWlpcRmeH14Xr1eTwkfKQCtxcXFwjZ3j+/y3VaSkiXPmIFpkVQAH8PhUM+fP0+AaHt7W71eT51O\\n\",\n       
\"R5PJRFtbW3r06JEkpV1xx8fHWl9fTwf1Ikd2WE2n05RLSbpypeFOK5fL6RgS6frIEel6dxdJO6Vi\\n\",\n       \"hnrkRjuWl5c1NzdXcDGwQMMI7uzs6PDwsJCvaTKZJFchYxR537t3TxcXFzo4OEhuWMBwq9VKQI++\\n\",\n       \"o54wYIyZxcXFl2SDm1K6XgzcZc11PtaY39F16ZY+xwj5PHT3ui/OGA3Ulfgqxs7S0pJevHihwWCQ\\n\",\n       \"2Dj6h3Ea89rxXPoqzlfXa/QRdT07O0vPR67OWJCtHpYwB+oODw/VbrfTfaREmMWA+5x3wwRWmLa6\\n\",\n       \"3nM9lFvzaGduIWc9jADJgUJu115kI70f/bn0ZXRReh187Md1x3VnjMHiWg+HcD3Dc2gPrlpfj3yM\\n\",\n       \"eHtyxriDb9fhs3atOyj2eqObfU3PvTeWGwVS0svAwhfZiJzpeO+UyP64cLhnFqJ06yKHeON1uef5\\n\",\n       \"PQ4aaENEuLHjve0OinJsGiXe6+62Wf5ip/ula4WCTzwCy1z7JBUUBrLwNmHFovy8Pq5QXKnEtpbL\\n\",\n       \"1/k9PKgWl5wHHEcLNgJQ6Gh3Fc9iF70OswAY10RXabw39hvuPMaO5yDCxbK+vp6sfZdNuVxO8Svu\\n\",\n       \"lkRJ+3EkvtUX8BHbPD8/r+Xl5bQVfG7u+pR7z8nD4k+fE6sjKQUj4xJ78eKFHv4fd9/nn3+uWq2m\\n\",\n       \"ra0tSUrs1tramobDYXo27WPx6Xa7arfbhQBnWB+UI+0hfgSgSYA0rkaAFnpkNBolgFIqldK5hZXK\\n\",\n       \"VdZ1Tz2Aa+ji4iJtoZeuWBfG9NLSUkq5gNx3d3eTpdvpdNJ4gz1ifHKeHfcRXwWrR/txazK2PTga\\n\",\n       \"wJ0zPAEC5ATyBdENEs73gxEtlUopEJeUGFzP+ML1x5jwTPPkjiII3ecHLEXUP4yBuB7wd2RXuI/4\\n\",\n       \"QOLGXJ8AfsnHxoYE5Mj1zBt/B6DIA6f9Pmf6vS+8vq7znIVDJ3i6BF/0ud7/dtbvVSW6cX1ceCxf\\n\",\n       \"BHSxHyJIieuJ3xuD2ykuG/6OOtbf5zJ2Wfi6lGP4Zxm4Odl4/WcZ2G7Ax/v/Uh/cpj+4LbflttyW\\n\",\n       \"23Jbbstt+V+WG4uRihRg7nv/3691BBwtAK7nN6g3RynPctlFZO4WHc/IMQ/+/FdRuJGFi6xW/H9W\\n\",\n       \"PR3pe5yB3xfp8ZjUTSpSntJ1nEguWN/jAairU+peR7doYnxEdG1GOtzrQrwKrI7Lm+udiaFNLsP4\\n\",\n       \"TGfgIqsWrSQKrAL3ujylomXq7Yp18DgX6SpOqFwuJwqeQGLpiiHpdrtpezwuFuk6Lqler2swGKhU\\n\",\n       \"KqX4GjKnkxLB5w9xVT42fEyen5+nw2vZvYcs2NVG3BZsxvHxsb744gu99dZbunv3rnZ3dwtuz8Fg\\n\",\n       \"oI2NjXRWn8sb90ulUinErszNzenOnTuJ/SFRpnTFkBwcHKRjSk5PT3V8fJzOW1tZWUmuoFLp2o1F\\n\",\n       \"XxADc3x8rGfPniWmq1qtpizya2trevbsWRrDsBvNZlOl0tUByeyE3N3dVaVSUa1W08HBgZrNZpqL\\n\",\n       \"sAqkYBgOhymRJzLe399PrkN2xhGAz7Eyk8kkMXkwh6QkINaKvvedZLEMh8Pk9pyfny88s16vp2By\\n\",\n       
\"YsXoQ092yjzgiJjpdKovv/xStVpNm5ubOjw8LMjb52rOZebzJbIgzhI4kxePyYl6kp2TMX4M5oln\\n\",\n       \"xOOg2ADAM/jt9Y6bVKKXJecujQHV3p7IgsNGodNzCUFzepDfjDnWLpexrwHTaXFnZgyr8DahL5yd\\n\",\n       \"4ztfE3O6nXti7JS33XUy7BQyiOuxe314D/3L5zw3rmGwgjG9Q9zkEduRwyqUG0t/kBO4lE8JHxdo\\n\",\n       \"/yx2dBRu7HAvs1x+sZ6z6h1BgdcdheG+aQc2sU1xEsY2zHI3eT1iQWl4YHW81mUQwcirZDDr3T7Z\\n\",\n       \"AAVeUAg5ypjnRXed74gjZkq6jhXwieiBkF43fPDIxXeLvKrNsW+4z92ptIvJCfij/rwTdwlxRij3\\n\",\n       \"o6OjlFH78PCwENAJGADE+JEZAEtkRDslFVIhAHwBZ6SUYIz6zqVKpZIWUwLPARk8x2NS6Ceu/f3v\\n\",\n       \"f6979+5pZ2cnZSiv1WoppmZlZaUQW3V+fq5+v1/Y0YSbbXFxUcvLyzo5OVGtVtP8/HzKFs691J8+\\n\",\n       \"AdiUSiX1+/10BpvH81xeXqbnHh8fpx2R9DeB7wA4z09EsHmv11O73U47+o6OjrS9va2Dg4O0K47+\\n\",\n       \"dfe5dH3AMeOi3++nLOIARklpiz7uW8Y8fS9dxZEdHx8nN5F0DVYYk65PAP+lUinlEqMsLS1pcXEx\\n\",\n       \"7QScTK6znrN7FODNzjyeu7+/n0DmwsKCms1mQd4s7tQvtzDGOQyI8LUBGY5Go/RMP42AMcz4xMWb\\n\",\n       \"c+e7PJApoQmz1iDXGf48gGckAZgjtNHbE3VUDNlAl8Y1g7lLmzyujXul61MafDMJc9/1ZE429LOD\\n\",\n       \"WIwP1x08YxaYQu/52aouHzd4S6XrHGLueozuthgDhZy5L24q8uczT2IcmceMSUU3I3L0I71iufFd\\n\",\n       \"e7kYmhzQ8EGSA1ZxMEjF42Z4bw7A5CZNrEeOaZrFhOU+i+xQrDuK3n3Ksc4RgLjcpCJ74pOWCeug\\n\",\n       \"wLct89utowho/F0OyqJf2weoD+r43BjMx+TOAVvui4faslA5WM69i3fk2pHbXejxBV6wVD0Y0Rkw\\n\",\n       \"t54cTPM9DAJt9P7qdrvJqq7X64WdRDADsBi8k/iQ4XCYApy9eAoDrHDkxgISGTg/Uw2FBcvD4ghj\\n\",\n       \"5TE7zWZTrVZLe3t7+vbbb/Xw4cOUNwo5Hx0d6c6dO5pOpwlkNRqNFFNDu/xYknhsCjFJT548SXEy\\n\",\n       \"x8fHiWEBEMG29Pv9FMDuCnY8Hqvf7yd2DmA3nU61sbGRFO7y8nLqC1IUDAaDtEvtyZMnkqQ33nij\\n\",\n       \"wJDBMHl/cTizB6n7FmxA6+rqqiSlg3cZF6VSKQGbwWCg4XCY4s88ZQjy9nnlgMkXu+l0moAymzro\\n\",\n       \"U98VR86u6fR6QwBsmHR1DiPzghgq2k/us5gnSCoaNdTLWXVf1H2BZnx6OoE4/svl8kvj1AOvc7tr\\n\",\n       \"HfR43zkwoES97LtnfQ2Kuj+uVQ4wYn5B5BO9JP5/9B44cZALnAasxu9harwdvivS5RMBLjokgqUY\\n\",\n       \"fO/P4p2+frmcATye+oj7y+VyykkWAZEHlOc2RLFueeyYfxeJEq6fRVhINwSkfED5YPTspt5RPtii\\n\",\n       
\"K4LveUZOaLmJwSBx1E+JFsmsxTX3zFyd+Y6BkVv43Q3i9zkzw4DJWRBu8fn7Hb37wIkZdiN17O/1\\n\",\n       \"z93ag32h0Gc+MWPwJIrFJw3PdUsnBhACpFx2pE2Iz6H+uKV8h5P3xSwFFUEz32PNeQCoL5al0nUG\\n\",\n       \"7cjuucUWz/9ikrKzzHeYtdttnZ+fq9frpbP43K0wnU7T585yEczurtWYXLFcLqcdgoA0gqt5lp9h\\n\",\n       \"xnNwk7HFXrrOA4b1xvcUdtTt7+8nNoiyurpaOBDZmRUyfff7fTUajUKeKNyWa2tr2tvb02g0Su8E\\n\",\n       \"gJEtfW1tLcl0Op2q0+no6OhIlUpF7XY7MVmA00qlklg5ZONZ0BcXF/X48WO9/vrrqR3or6dPnxaY\\n\",\n       \"w/H46gBlEoE6eDk9PU1gD8DR6XTSdw52XMmPx+Pkfp2fn9fZ2VkK/G82m7q4uEjnM/pi5fmV4nZ2\\n\",\n       \"Ukq02+0EopA3Z+fV63UtLi6q3W6r3+/r008/lSQ9ffpUz549S2kqpGvwS9JXNy4cUPlGDN/pyvcU\\n\",\n       \"Z914Rr/fV6vVSrnCuMfnsB+ejewAhQ5Ambe+9Z7ncP6gj1Ha4GsV8nLGh7ERQygim+5sHC5v9JED\\n\",\n       \"pZgMljEdjTvXUw4OnFVyhg1DEcDuxp7rwmggUz+u9b+p19LSUpbJiyEYvtHC1wNfg319cIDsz6Rv\\n\",\n       \"c4QMsvNkrD7O4noRwXGu3NiuvRh/49RaZIn4zF0KFAZhrpGRaswxHfx4TwFGhwAAIABJREFUR8V3\\n\",\n       \"5wCa05l+X66O/h1/++/YbkmFTowskN/Pfb4TJL4j0pY8n8U0x9ZE+cUyC517O3KTfTKZFNiOXF08\\n\",\n       \"RoH/fZK7dcRCkdsGTHGFJb28y8Pfx3UoPt8lyL2ePdwXfe5nl5gr92jJuhUFyCHvj7sbut1uco0N\\n\",\n       \"BoPEiElKO9V8YWLxcmq8VCoVGAkSV6JM/H5yLF1cXOjk5ERnZ2cp7oodZvV6PcU6UZfDw0MtLS2l\\n\",\n       \"pJIcaispHajL4gu44btWq5UWpnq9ntxYzWYzKXRX8NIVqCGX03Q6LWyHR8bsIptMJtrY2Egs2MXF\\n\",\n       \"hV68eKHz83Otrq4mpklSyhE1Nzenfr9fAFKeFPWbb77Rd77znSRn4qI6nU7qI9/tNxqN0vOk6wXX\\n\",\n       \"E46i5AFEDnrYoYdsAHMcanx4eJjaT16ti4sLDYfDgpFUqVzn1mGhhHGEjQCcEQ/H+K7X68kQkKS9\\n\",\n       \"vT39+c9/liQ9e/ZMz58/T/mmfDHF1c0C7s/w+cxY9QU5lriwMzY855VUdEMBUmmjszCeANS/5x2u\\n\",\n       \"C6NR6+CE/6MBFY1b7wv+9/UuhoJ4mhreDwvpMo4pGZwl8ne6Low6nOc7U+3tYc7yXGfbnY13efI/\\n\",\n       \"Ojwayf5cYta4z+vlhjfPAtC5256x68asy5R+zK0Vvta7fo6pgnLlxoCUlA/qll52wVF8MMRANWee\\n\",\n       \"fAI6rejvcnYiMkQ+SF4lcO5zVOvM0V8qESjG9nhdIrijDdEycBk6tT8LVTt1THGmyQGdt93bn6Pp\\n\",\n       \"Y3ELgwU8WiOU8/Pzgq88x+xJxVPX/cfllusLlDkTKvYhn3m8hLfP7/dxxP/R2vbvqXe04ACYABz/\\n\",\n       
\"DkajWq0mECopATVXwrS/Xq8ni5Y2AQhZSFhk6vV6Yoj6/b5WVlbUbrc1nU7V7XZTX5DLqtlsanl5\\n\",\n       \"uZBpfHFxUXt7e0lxHh4eJtfe3bt39cUXX2h+fl6tVist3LSPWKbT09NCP3nQdMwWPp1OdXx8rHff\\n\",\n       \"fVenp6daWFhIrI90tbAzH8mvhdwGg4F6vZ6azabK5bJqtVqKS5pMJin/0NnZWSGZ6fn5uZrNprrd\\n\",\n       \"rjY3N1Uul9M5hH5kCcCQPhwMBhoMBqlPPE8YdSiVrpOpspisrKxoeXk5sYPOZLGwDodD7e7u6uTk\\n\",\n       \"JIE0smkfHx8ng4B+Ylx2u90CK8m4gMXsdruFc/+Wl5eTW5MA7idPnujZs2eSpK+//lrHx8eSrty1\\n\",\n       \"jD36Siq6jaNedcPKmQc3Lp0d9zAAnxO0D9kDUnOMCnJ3vepMdvR8wFTA2ObcgrQ3gkBAkbuI0ANu\\n\",\n       \"OPmz/HnRoKUNyCgaks6qeH24Ltc+13PIis+cTYt1hX0GmDtrjj5wDwhAy2N3+T0LrPjnrl+5h3oC\\n\",\n       \"2kql0kuhILTNvSr+3auAlMd45cpt+oPbcltuy225LbflttyW/2W5EUYK1iYyKJE5ir7yyAL4fc7k\\n\",\n       \"5Fx/0cXiSJT6OOvi90dGJJacHzbXPv6HcvZ7nIVw+pPngaKjdUWJMTsxritHc0pK1rFb0FCZzvL4\\n\",\n       \"u5z6zvnpvR/cEuJ6WKbY97zLGTK/nrbkfO3RWsj1U2SW3MqLTJ67Jn3sRKvT6X7+90Nn3dqNY84t\\n\",\n       \"day5paWlAo1PQOVwOEzuMncJnp+fF8a0M4icNUe9vK1xhxdpDIbDoY6OjlQqlXTv3j11u920Uw73\\n\",\n       \"I4k5t7e3U3vW1ta0ubmpr7/+Ol375Zdfpu8fPnyow8PDFP/jcsC9R0B4ZJ5pC+ySdMXytNttDQYD\\n\",\n       \"VSoVra6uajqdpucPBgPt7OxoMrlOsuhM3ng8TiwLLkTpipFrNpvpuJrpdJrYHDK3r6+vq1wu69NP\\n\",\n       \"P02xVbANGxsbmpub097eXiHJKQlQOdAYeRMsfXl5qV6vp4uLi/RMguVxKzSbzUKKA5glSVpfX08u\\n\",\n       \"2J2dHX3zzTeJKXBXFkyRu0KZTzCc/X5f4/HV0UI+ZkjkCuM1GAzS+2EL6TPfKBB1k3TNMDBXIjtA\\n\",\n       \"iUyLjw1SGLjrkPZ4SISvCc4I5dxMzhbFGBrXMzEkwtch1xMuv+jug5FCfq5r0E/+vwdRo6PcpZlj\\n\",\n       \"wSLLhHvcw1N812L0rvhuYK6P6yXXS9fZ4WFHeQb3+uYAnulhFc7gezs8VQGyQ4d6HxI3GHWe1zPG\\n\",\n       \"+VJPLzGeOHqtYrmx9Ac5msxpyjhond6MrjenLv077nFQE104PMMFGRfXXB2je8vb5fWMPu+cO4//\\n\",\n       \"o9uMetLx8TtoTad5o5JAcUSXoVOYuUHigy0CFXff/SWQ6c8DZMQ4ghg46FQ3sSHIMbpgXe5RplER\\n\",\n       \"OQiKSpD7omsSSt4LSsTdePyNYnAglGtjrg5LS0taW1vTysrKS4dpslvOFwXmideVwnfEL7iSvLy8\\n\",\n       \"TC6l6ErExYbrp9Fo6M6dO5KUdonNz8+rWq2mLejS1fb3+/fva2NjI7m76OejoyMtLi5qbW0tHdfi\\n\",\n       
\"7QYocpiyu0vn5+dVq9VSfJMbWx988IGm06sjbLrdbiE24969eylA/f79+wXXBGOBYPKtra2Ca8Az\\n\",\n       \"rzt4q1QqKYfUkydP1Gq1Ul0XFhb09ttv6/DwUE+fPi24KdgNt7CwoO3t7cIiNBgMUgyZL0CSkqsM\\n\",\n       \"l9vR0VGSHf2Fu3Fubi4BW462ITj+8vIy3YfbHBBFSg3+lpQ2Q3jwPkDq7OwsudIIaOcedE2cL/SX\\n\",\n       \"Awf/nODpCE5Y6Nyoc/cSICqmHMG1xFxkdx/1lK530cbFGdkChqJbiOvdFUm9GcuzAI0bkjzPQyti\\n\",\n       \"GINfm3Mj0r4oN+qUixt1d53Xjb+JL5L00noBsESWOYAbQa2fCQjIimttbhMWesANUJeFg+SYQgH5\\n\",\n       \"x91+Pi6p46xd68jK25aL2aPcCJBy364XX+jcYvfvGfzRj87fcXHyToiD1FmeHLqOFkKuxM53688X\\n\",\n       \"8pwfexbwiEyctyUu8pIK8UR+ve8yife6bDjV3QdVZG8cAPJ9ZFlif0Vrj+fE2AmXBbFUDGrAA/3n\\n\",\n       \"8kXeKP7Yxw6iHXSgPHJWcO45XqLypC4XFxfpwF2sLn8Gipd3+nfEBhHg6+OGOBW34KPVyPMiWCLR\\n\",\n       \"ZQShi4uLmkwmqlarqlarKf6G+9bX1zWdTnVycqLDw0NtbGxIumI6zs/P1el0UvyQA8P9/f20oAMo\\n\",\n       \"pKsYqadPn2p1dTXFMnEO3fn5uRYWFtJxNXHss5MNBoTFfnt7OyUcZUz6MTDr6+sajUbqdDqFJKK8\\n\",\n       \"c319PTFRvjhw7A1xMHt7ey8tYi9evEggjLl3584d7e/v6+uvv9b8/LwajUa6j12OvuuTPvdUErBH\\n\",\n       \"fDeZTFLKBMCLgwWPkVxeXk5B6hyTQ7vL5XICaHHRp89d5uyymk6nSd5bW1sp0H5ubk7Pnz/Xt99+\\n\",\n       \"W0gc64Ze9Br4HIxgy3VTLkbGwbPvBHTd5EYqC6TrCWekKBF8OFPO9TFInT6KAIT3UHIMObJ1ubtO\\n\",\n       \"9vscbM1i62gP7YyB5+icaIi6IRgJBQed/kyuYwz4e6Vr8Ersp7fD11NYRNrq60g0aDFMfSely8lJ\\n\",\n       \"FwwD2useAl+fXQfnxoIDuly/zio3mpDTBeeMQnR95QZTZJ18MMRreFdOqN7hOSbqL4GF+D11pSMi\\n\",\n       \"ePF2+vu87bENgEe/JrbdFaR0rUxA75EFkl4+P+ovtYvfrkT8ur8kN1dw3hcsckwMduHwnQ/+XB29\\n\",\n       \"PpGKz4F2d+vNYrH4PE4o6u27jngmu/+cuvadNFyLIoqgrFqtprPMottlFv2NFSldMwnS9YLkIMnr\\n\",\n       \"NZ1O07l3npiRxXN1dTVlB8d9c3R0pNXVVT18+FDn5+cpaaV0BQYbjYbq9bqOjo4KFvuTJ0+0sbGh\\n\",\n       \"/f39lG+KOvf7/SSzo6MjVavV5KKinwhO9vY1Gg0dHx+n3Uunp6eFvEeekZ30AsiEnFGNRkOLi4s6\\n\",\n       \"OTnR5uZmeifzgZ2JjKl2u62jo6NkXVer1fTM/f19dTod7ezspN1HAJpS6Srj/GQySekTkDeygAHi\\n\",\n       \"sN04xp3RpX9xP7G1HBat1WolNyRnEXqoAAA6MiTMGZiDfr+f+tDPLWRe+Fj3TRi8L+fSY5FzfcOC\\n\",\n       
\"jqEUd58xD6LrKRq/OSOTOZkzsHie3xeZKN9oMZ0WdxtG0J9ruzPcABpf8+LGKC/oPndVcp/rxOgO\\n\",\n       \"Q0dQF1KyIA8HQQ5OMbwBRQ6IeP/i4mJqkwMR5rsf4Iz80PvIyF2Jrr+jjo8g2ccw7fVdlnyGfnOg\\n\",\n       \"5f1OX/hvD0GJfeN9MKvcaPoDKZ+cK9KZjlgpzjRxbw64IGyu8XdxTWRXfEJQfDJEEBffGRWT9HK8\\n\",\n       \"TGRWqAcd5laGA6bcJOUnunpoAxMkKk2UjLt2vF3ezthfOTbJnxFp2UgZu/sO8MQzPWmey4G2e2oE\\n\",\n       \"nxA5+tstsRxwncU00g9eF1eE9JUzOa7sooJ21ysLIDuwAFXr6+t64403tL6+nurqOxMja8Nht6QT\\n\",\n       \"8LiUSG/7mMKdwnURLJbLV0enNBoNbW1tpQSRz54908HBgdrttpaWljQcDhPImk6nGgwGarVaWllZ\\n\",\n       \"SRmupavdZ3t7ezo8PNTe3l7haBVPM7C4uFjY8eOU/+XlpQaDQTqS5PT0VPfu3dP8/LyePHmi7e1t\\n\",\n       \"jUajBCbm5q6OV2k0GmmnndeHfEcsHN7HAB2+Y6EiLQI5s0qlUooJq1arunfvns7OzhKIoQ/b7bYu\\n\",\n       \"Ly9TXJK7t9F3MG++MJRK11nbkTHfjUYj1Wq1lJ9rd3c3AdDT01O9ePEipeAYj8eFQ6JhTJlTOWaF\\n\",\n       \"dzmT5Qu4J22kPrTB3VWMMfQkfZljA3yR5XPACzu//D76iXnncpvl9YgsSGTAoqsoMkNeT2f/3ZCL\\n\",\n       \"xm7Uy67TXDY59xzxaPzv33G9h3fwnR/6LhWPtHFQ5M/FMPA56O+gnbwvxlnSB143xpqzRN4XDvD8\\n\",\n       \"N/L1dcRBDr9ZA/jOCQPGm6/B3t4IYn0cxnq+yq0n3XCMlAtHyi/S/O8LofQy2+RsQRzgkYWKdWBg\\n\",\n       \"xIWYEpE59faO9Pf5AMm5seKE4V5AD+4G6eWtns5kAECov8fm0C6PDfEFwwdTjrqMiiNaVTw3IvUo\\n\",\n       \"C6flkQvWq7s3PLeUTwwHq5EN4kw0+sLb4vWgbx2gM54iaMTH7+3zc7pcBg7+vF1cc3l5mRYiLDIH\\n\",\n       \"MCxu6+vrWltb0927d9VqtQpxOTArvM+34RKYPD8/n7b3u9XmBoQrTrJrw2R4WgHaDuDb3d1NeZSI\\n\",\n       \"wWExJiGqdJ1p++DgQLVaTffv308Le6PRULvd1u9+9zv95je/0d7ent566y1J10wHBg8uG+pNIkT6\\n\",\n       \"he9gVQ4ODnT//n21Wi396le/SvInMJtUDp7G4ezsLKVmGI1Gunv3biFLfrl8fUwNzI10HYD8/7H3\\n\",\n       \"Jr2RJdcZ9psDp5yYHIs1s6pbPbpbguV2t2XBhlcybMswDHjjP+Bfoj9hLQwv7YU2hleGBcmyBQuW\\n\",\n       \"ujWrq4fqanYNJItDzskpM78FvyfyvYeX+gADH8oLBkCQzJv33ogTJ87wnhMnOELm+fPnunbtWppf\\n\",\n       \"5rXX62XQquFwmHgV+nkZg9FolMoODIfDjHyglpTnW9EvQsHUGdve3pZ0nlvFPLJemF8M1tFolAw6\\n\",\n       \"52EQqfn5eXU6ndRPakjxHa+6Lk2rsEej0NcNfEe9sLzGPNAfZFrMu8LA8qOY8iIH7nj493y9u0Mb\\n\",\n       
\"nW3+J18MY8EdZl9zUe5LU0PSc6/4ftRR3m/KYWD4eA4X33MHmXdjCLGuvOioy2r0jL/bHVZHcV0f\\n\",\n       \"RPQvT3/QkLNuYLpsHo1GF4w8aZpbSl2yKBd8fhwsoXnZC3cQ3Dj3vjugEJ9Hvl2e3k79vvTKVbtq\\n\",\n       \"V+2qXbWrdtWu2lX7re2FhfZiTBQvn8/ycm8us3zdMnUPK3oljjzkvdc9s+jhRE8DT8bvc+jb/6YP\\n\",\n       \"HhbKS8b0xOIIDXtIzhv9AK3KO/QRjzYvHOf/e5/IwYgoHda6JyV6rk+E7/E2yGVxyNmRFeYh5k9F\\n\",\n       \"7895hmt58+vzEceK5wV8DO0ZNyEmxu4oHp68lx2ABowbL8w9R57n26qZy3a7nRCMiHKenp6mzQBx\\n\",\n       \"p8l4PNbz58/VaDTUbDbTLi/6zI9D7tI034Px+hhHo1FCB4H5QQnIX2LnXa/XS8jN4eGh2u12Qis+\\n\",\n       \"/vhjbW5upv4sLy/rzTff1PHxsf77v/87FeV89dVXM4VGHVVrtVqpTADhS0827na7Go1Gaadhv99P\\n\",\n       \"+U+lUimF+hqNRgZd4UzAcrmcDib2eSwWzw8JJgne5Qhz0Gq1tLy8nEEj2IHHcS7saINGVKSuVCpp\\n\",\n       \"LsjfAl1yFODo6CjDl9IUkQINJFwI7aVzhJPiqCcnJ6lgKWOgsCqoMHRhbguFggaDgVZXV9OOTXjJ\\n\",\n       \"Q321Wi2Vm2A9sCvw+Pj4QhK7y9cYwsvLbyI5n2seTnMZGWW1py04isM4kGFRXvBc+gPyTp8IZ0EL\\n\",\n       \"R5cd4Xcki2f5mqbx/Lx0Ft6Zh9ZEZAu5m7eVn3yhvBMPGKOnD7jcjkg98iymvXjOKYiPo/eMg3fF\\n\",\n       \"yuesOUeP+Bv9R64f7/PfzluuS/NymgqFQioZAdrL+JiPiEySo5m3EYL2Qiub+0RFZZgXxvN7vTk8\\n\",\n       \"6kS9LByYd837gjL1PBoPsUVF6VClh/5iHz3c5de9pD2Lwhl/ZmYm1X+Ji9H74QvaG4ImCo4ovKLh\\n\",\n       \"yt8xXMq9LK7LaBgNKRcAvqsGgXd8fJxykrwPeVuVoRswLcnePhfQLxrP0ZCHhtI09MGzyuVpzR/p\\n\",\n       \"XLmxUH1buYeX83InPNRBKIZ8H86oQxF7Um0Mx7oQw3De3d3V8vKy5ubmUs6Sh0ViCLZQKKRjYhCc\\n\",\n       \"GOCEBfr9fjKgyGfqdrva2dnRcDhUt9tVu91OFdEHg0FGIEtKdaSq1apu3Liht956S++++6663a4+\\n\",\n       \"+OCDNBe/+7u/m+hNjRtoTV6UG48+9kajoXa7rW63q+vXrydDo9VqaTI5r8x+7do1NRqNFE4iVLSx\\n\",\n       \"saF2u62zs7M0RsJwGCrk4EjTXXDkt1Wr1Qy/nZycaH19XZ1OJ9WEct6gDpjLDK8eDq9ieKEA4VUP\\n\",\n       \"73h18Gi0HBwcpPAeBwjDQ7VaLSlMjDCMTww5Qm++K3MwGKTSD5PJRPv7+9ra2krhaY6iwSnyDSMo\\n\",\n       \"KF93zivufPm698R/1pobC8hiD81JyhgnGL+eM+gyNG+9IpuibMOBzHNm+U7si38HI86dF5+bPBmH\\n\",\n       \"/I+Gp5Q9DsodTO8jz3TZ5/LOaU5/XIchAwgzuoHiuoA+Rb3ndKNFh93H5M+hb8PhMHNMF9+LBhXP\\n\",\n       
\"i8ZfNMCgqYfMXaZ6eJMxuKGa116IIcXCcYZmwl3x5RlNlz0rzwDLQwz8f97B4nHvjwnweiVSdicB\\n\",\n       \"fY3GGoT3Fo3GaCzEBFC+S2yYvvg1mD3G+Z0u0JUfZz6PCUevEPrljSV6UZ4/xf9R6UNfvuu5EG6w\\n\",\n       \"8l0/c8nRPV/4Lizz6O3NFwbNhQ3KjIKYfM4p99K0TIHv4PFFSt9Go+mBsp4AC+JCfhHe/a1bt1St\\n\",\n       \"VtP/PkZ/ni98+gfNdnd3tbi4mIRrr9dLAjFv+zD5McPhMON50zyHBcTm5OREvV5Pg8EgGfa8n2NQ\\n\",\n       \"QCZmZ2fT+XW9Xk+7u7va3d3VO++8o69//etJAf/sZz/T+vq6ms1myvXy8/QajYb29/eT0QLK02g0\\n\",\n       \"1Gg0MsebzMzMpOdKSjWbTk9Ptb6+nnK2QBzhH/f6QbLckIwHLC8uLqZEbwzpo6Mjrays6PPPP9f+\\n\",\n       \"/r7G43EyzvDIQb9w0uClSqWiTqejk5OTlCMHT/nxJ2dnZ5mjXsi7QslCm263q0qloqWlJQ0GA43H\\n\",\n       \"43RftVpVv99Psuv27dvp/na7rVu3bqUcqclkkkE52RVZKBS0sLCQ8uLo69nZWeZ4p+goueyK69Ud\\n\",\n       \"zagXaKenp4nelL2gxXIK8H1EtZAD7uS5HGAMHnHgmTH6gHInh4bPHZHy8UQUzJ04v4f74jvdaHeU\\n\",\n       \"nmuelI7ecBnPNf73HDeeyz2+s4659PpcHiXxvvF3RIgYX5TD5fK0Ph5zLCmdFdnr9XR0dJThq3K5\\n\",\n       \"nDn4PebpOu1wJPk8LyGfZ/qZgP48jyxd1l54QU5XRLHD0RCKVn7ec3wSY2jPmxtWIAwIG0dA3GCg\\n\",\n       \"uVJyJc67Y7jJ3+fM5EzMJMbdML5DAi/UPSAWKko2IjKXIXL+WTRE8CJ4v9M30t7H4EnoGIbc77tB\\n\",\n       \"CoVCStiVsmfGcTCvJ3g7miVlC62x6NmdRPOQLf/TfGHxf9wCTL88JEphRTd0HBWAlnNzc6pWq7k7\\n\",\n       \"gkCjVlZW0m44BBhKcTgcZubYlYDzcqfTyQjU58+fJ/QM2hBm8uZnUFH6gO9Qe0qahlxdmZCYPplM\\n\",\n       \"tLW1lc5a29vbyyjVQmFaXHJtbU3r6+v6/PPP1ev19M477+i9995LY/joo4/0+uuva319PbPz7tat\\n\",\n       \"W4nH2GVHTSsgen6q1WqmmjjoCYVHZ2dnk0G4vLyc6metra2lZ9AODg6S4u/1eokX6/V6GhPolBsN\\n\",\n       \"jx8/VqfTSYg28gQeps6To06gJZPJ5EJtKuiIMejXUD6TySQV9GRdLC8vq1qtprAeuxelqYIaj8eJ\\n\",\n       \"V6DL4uKiSqVSQis9SZnDqOFJisYiJ1izzj+OADhq4vLGx+jKj4bB4uuB3474ufJz58vXOPd54nKU\\n\",\n       \"fW7E+qafKOejPCHsGGU7Y4qVy2PDOOE6eiAqf8bgqQk+bu+r6yEHCfh7OBxmdG4EMByRw/Fyg42G\\n\",\n       \"ER3RLZ4BAkoo3YujuhM/Go0yjgmHbiMTfC4uAyy8FAwFhGNqBjrTZT3Og28K8qjG/0lEKioh6eKO\\n\",\n       \"vTwDIFrs8XtReUYUKvbBITtnKPcY+a4r6Rg6dI8xD53y/x069cbkgWK4IYXgjQuL+1wRRzpGQ86f\\n\",\n       
\"63TLW1AufBwh4zs+bj5DaeWF9tglBs0Zhx9jUSwWNRgMEvrjyB8C67LcABeQ9Mtj9pE2eB+E86Sp\\n\",\n       \"Uef5OI6UORSN8PP5K5VKqbhiXHiEJ/CSab1eLx0Ey04zlLCjt3hhLqyOj48TD5ydnSWlyMGxfM/7\\n\",\n       \"ipD2XWhueJ+cnKSjZUBnaDMzM/rkk0/08ccfq91up/ltNptaWVlJBwR3u13t7u5KUipU+fLLL6tQ\\n\",\n       \"KOi73/2uvva1r0mS3n77bf34xz/O5M8xBs/LefLkiVZWVpJBACJWqVTS4byLi4uZStuHh4epLldE\\n\",\n       \"OqgGHnM2MIjn5uZSuNLzi1Bi9Xpdh4eHicbj8VhHR0eqVquJDhiE5AdijOzt7WVyvQhZYcDAN5VK\\n\",\n       \"JYUoaV7eASQjOhGEZlutljY2NjQ3N5eKo8JHlUolGVv0BZStVColQ5F1we5Q1ujW1lYy4JgrR2Th\\n\",\n       \"L+aC9XuZIQEPe9X734YoeJjPQ3fcVywW09qgf1L2yBKfU2kq0+C5mP6BDIGP3Mnk+b9Nf+UZPN4c\\n\",\n       \"wY7ymXEyFpe/7sBznSgL44sABQ6f6ygv4hlRGZ7jMuiyCI8jWh7xkZQxbAgjM1cuF3mO53rmyWHu\\n\",\n       \"h0as30JhehwN8oSadfQXR5LnnJ6eamFhQcViMcOHkRZ57YUhUlJ2e6kjNlI+9HvZ/x7LdCMEBssz\\n\",\n       \"plxxwFS80xPOohGH8OVaREriePIQNg+5eYNB/HMWz2WIHEzkXkikny90f5eUzZ9y2lzGOL/Nq4qG\\n\",\n       \"iqTkwWNYuEfkW7lRNggSlLd7K3nGCc+OxpmP0ekkTQtXYvR55XA8E2jgiA/3AS37O1C4hAJdUDnt\\n\",\n       \"4MlWq5XGuLKyorW1NXW73XRkB7TEyKMf5JHRH/rEewhDgSwQKot1bwhpoeTceAZRYS5JKB6Px3r0\\n\",\n       \"6JEePXqk1dVVvfPOO+mZGF+DwSAJS0J70vmxJT/60Y90+/Zt3bp1S//5n/8pSfr617+uW7duqdVq\\n\",\n       \"aTQ6TxynL9TNarVaSfG7x8r5dZw/eHh4mMJpHJ3C2LwcATWYSqVpFXFQJzxnjFQ3vqrVqqrVqkql\\n\",\n       \"kvb29iQpjRGkgxy6crmcQolSNjG20WgkgY6QL5fLKemeNeDH1BDm8/XN9v88VF6S7t69q1KppKdP\\n\",\n       \"n6Zx0n8PbRLGrdfrajabqWJ8THyHN7rdrj766CNtb28n3vDk7rhJhfe5Yo/IOTSYn5/PGKfQlnn0\\n\",\n       \"8wvpG0fe+FzxTi806++Dbi7PovyMCdXRAXaHBqM2bsN3Zw59EeV4RG/ynu/6CXkdoy8xlOp08Ocy\\n\",\n       \"P25EQn+nNXSg8VnUGc7bbigxNz7XbpxLU+c30gT0i/C56yScAfrAdfrGuNA1rMNqtarhcJjOksT5\\n\",\n       \"4X0e1o3AQp7u9XZV/uCqXbWrdtWu2lW7alftf9leaGgvxsOxTGNYioZlmBcuk3QBXuX6ZSiKow6e\\n\",\n       \"yItHGa11aZpwnLc7w3dm0A+3onkm34khNA/rQReS/AjNxHi3NPXOYsIl93uOgickYrF7aMPv9/F7\\n\",\n       \"WMjzFfx7oB6Mzz0sL3LJM+KuGI7RIL4tKW0/d8g5eoqEqEBhoI3nUPl8eg4KeVuOnPnWX0/mhWY+\\n\",\n       
\"B5E/Yq6df+4e29zcXPKUlpaWtLS0lLbF+8G+HrpkDhw98d03Hgahyvh4PE4Jm3yX890Ijzl0DX3J\\n\",\n       \"83Ke+eEPf6jHjx9rc3NT9+/f13g8TqharVbTvXv3NB6Ptb29rQ8//DAViGw0Grp586YODw/161//\\n\",\n       \"Wp1OR6+//rqk8519GxsbOjw8VLfbVbPZ1N27dyWdo1i9Xi+Ny+F2UEOOueEoHBALwqP9fj/xFfNP\\n\",\n       \"Aje5dRzNAo3L5bIGg4G63a7m5uZ08+ZNSeeIzWg0SiECCpTS13K5nBK8fU15Mj+8xPomz4jQ7tHR\\n\",\n       \"USZ3rlKppBBj3GzgoSA/mqNSqWh1dVXHx8c6PDzMhGZBNiaTSSoNQa4UoT7CIn5eoKMGBwcHevbs\\n\",\n       \"WQZ54JxDD+nAUxQP5Xnxe47Qe0jQkVb+j6gL93jyM5EJ1qnv+IKXyOmKyc+g1NDfdYK3y1D5PJkY\\n\",\n       \"9VhMGne9wXOZ7zw94zoNGefj9wOkY46rz2NMRaGhZxw5Qg64TPMTeIfZAAAgAElEQVQUiUKhkNn4\\n\",\n       \"wXPJDeNdcTccz/Z0DX5Dk5mZmUyJEs+Roq8+DsYdU2FYn41GQwcHB5mdxsgEohuMA3qTunFZeyGG\\n\",\n       \"FMrpMqOD/z38xvUYMosQZzQAYj6T3wdjeE6P95GJcKPHq0DHfvLMvFCfK3ZPTqTPl+UnMR5PRI2L\\n\",\n       \"zePgzuRudPhhm/78crmcyTuBVozDv8dvTxr3PhMKYes4Y3aa+o8rG3YkUaPJc0HcQPSQKELZ87Hc\\n\",\n       \"cKFPvguHZ7oBzmL1a7QoZKELMLXTF8FLnonPh4doKYWAUV6pVNJxHhhTHlpzge+hamBtpy3t6OhI\\n\",\n       \"tVpNMzMz6cw4F3RUDC+Xy5mzuNhtt7KykujKmXGtVksvv/xyygH65JNPUj//+q//Wt/85jc1mUz0\\n\",\n       \"7W9/Wz/72c/S+7a2trSxsaHr169rdnZWn376aXrf6uqqms2marVaypnAEF1YWEjG7vHxsRYXF1MY\\n\",\n       \"ajAYqNPppHDY8+fPUyiAeTs5OVG1WtXCwoKePHmSqpC7oGXnWwwZd7tdNRoN3blzJ/EpBomH3uAp\\n\",\n       \"cutI0vZdqXNzc+mYGkK3zl9UT2fsbiwQNiL07ZtAMIow0n2LOs5Ds9nMhDAItUwmE62urqaDpKEp\\n\",\n       \"/YG/WWvwt6Rk9B4dHaXnDgaD5MwQBvPQPXK2UqmoXq9nTifAWYoymudgXGHgMYe+/d/5P4bnvOQB\\n\",\n       \"OoS16u9zY47m8pK1F0M9Hg5i3vLe50Yj11y+eGM80RBivA4OxGcQBsbZyDNIXZZAN095iLlVPge8\\n\",\n       \"z/O5XPZQwZ5rrudxeumnb0iIvzGS3NihLxFEcAfa6/ThFDLuYrGYSs8cHR2lXansRqY/MfE/LyTr\\n\",\n       \"7YWVP5CmsWU+c2MoWuBuIPl33cJ2tOeyez0eDaFhnFj+wBcgzyQJ0S3heK6Rx8TdMCKZj8mPcWhf\\n\",\n       \"tO4JOnLkAt8RIJ7jaBFjcis7Wu4umJyJIxoXc6rcKHNDB6HIFnNPOK1UKpn6Hb6gyI8gNyXW0eIZ\\n\",\n       \"9IexkBzoBp40PYeLz/0QW77vgtvH54LGBS3KiyRs956ZGwS078LkXkfEXCnym+e6kVmpVDJCGCEq\\n\",\n       
\"TTdFYJi5cVgul7W9vZ05FobGdmKEjOcPNRoNXb9+PR0KPBwO9cUXX0ialkTo9/s6OjpSp9NJ1+7d\\n\",\n       \"u6evfe1r+uyzz/Rf//VfevDgQSYBFPRnc3NTZ2dn6Yy6L774QpubmwkB4+gVaXq4cKfTUbVa1eHh\\n\",\n       \"YernwsKCer1eMmYWFha0vLycFDv8B/38oGTnW+bX+ypJL730ktbX17W7u5sM3mLxPHEVXn769Gmi\\n\",\n       \"69raWkIbG41GJgF8PB6r1WppZmZGzWYzs2YpU0GJh9FoenSS57dAQ+dv1kChUMjk0LCearWaBoOB\\n\",\n       \"FhYWUu7c4eFhQhpJuPdSIyA4c3NzCZVi7unHRx99pF//+tfJGJfOdwqWy2W1Wq0L5T0iCu3rwvO/\\n\",\n       \"kN+OxtMvvutyEdnA+o1OKHI9yjzkjxtqPB/+QH66QeDvdkXrxmNMRI9Ik6N4bpAgV6MT50rckRzf\\n\",\n       \"wYlOcfni80g0g/FHnRbpTT5qdL64lmdw4pgwFjfc3RjyvkAXdKkbVjEq5QBFdCjcWIJezCEODs8g\\n\",\n       \"lxEnAzS20+loMBio3W4nECCijpcZvdILMqQQbl4dGqXmFnQ0iPw3zY2qy8J3Pil5VqUbINLUyMqD\\n\",\n       \"Q52JokdAX9x7YUF5COUyWNhDV+4J8ByMg7goHLXxcTBmxuf9QcAgkECSpIuwebT46SOM70JiPB6n\\n\",\n       \"itNUUJay9bAYX0wM59BdR8mkqeERz4byMGM81JR+etjTFx8L2o0f5oDfEYJGgLK7LhpZ0FVSQpbc\\n\",\n       \"S2auKpVKpj6V0xtjMHrJrJXRaJRJoHXUz2FsSaleEPzoyCOKhERdlB51jIDSt7a2MrA6zxiPx2q3\\n\",\n       \"2/r4448lSf/wD/+g999/X9vb2zo8PLwAxX/yySfa3t7WX/3VX+nOnTsp7Lezs5MMZ3amgqy4MUsi\\n\",\n       \"/r179yRNE0cRsLVaLe06o62traWk8MXFxTSPviOS/32HIyjZw4cPNRwOExK1vb2dDKnBYJCSsuE/\\n\",\n       \"aludnJxobW3tgsFPmNWVQqVS0XA4zChGxo+xzvNdLrC5wQ0FD+d7EV+MJp7dbreTMePoAcU74Tl3\\n\",\n       \"YpCjn3/+uZ48eaLZ2Vldv35dd+7ckTQ9KJn3sUNXUtrMwRp1p4KxoUzdWMI4gM99rKyZPAeTPvj9\\n\",\n       \"0fHOQ3s8zCZdlKWuk1wnuEEQ54JrbuRFHYSh5P30Z0QjC3q5LnFjxdd1XuiL70cnAoTb6Ro3C3na\\n\",\n       \"RtRjbvC50wK6Sd9iaR2fozz9nocGuc73uYevmV93LpGrDqLQz8XFRZ2dnWlvby85kI6AeXQpr70Q\\n\",\n       \"QworGQUhZb3DSNQ8WNIVTbRw/buOcknTxQETQtTotdBQwu5d5oWrGENcsL7wfBE5IzIekB9nHGcY\\n\",\n       \"qj5HONQ9qZjP44apI1NUTHakxoWU09f74yGFuPBR9qAEvssKo8qVZJxff5eHEjGGIqrGIuH7cUGB\\n\",\n       \"HlLMLQpI6O4GJgsQNAuEjO9HWrkwdto42sd15/E4RvjBjQefY//tUHepVEo09R0yGKSE7bygHcq7\\n\",\n       \"0WioUqmo2WxmtrljdJ+dnen58+cJIVpfX9f9+/f19OlTPXjwQF988UVCqbrdrr773e9mlCdCqtfr\\n\",\n       
\"aX19XX/yJ3+ibrer5eVl3b9/X9J5uHB7ezsTDnNjYX5+XoPBQKenp1pdXU3ywncyIRN6vV4y9Kgb\\n\",\n       \"hWEKX0rnQhM0HBkEbywuLmp1dVUffvihtra2tLe3l4wxPHz3bL2AYK/X0/b2tubn53Xt2jXdvn1b\\n\",\n       \"klKxVZDGWq2WUSz0BYONNcOBxI4MOArhvO/PPDs7SzV0vCSH8zwGOAcsQzPCUp6GQD/ho8XFRb31\\n\",\n       \"1lsqFosp7Ht4eJhqXVFSwQ0RR4fc+I9r3sfqDnUeCkLzQ5kZY8xDik65p1e4PPdr0RH250MTb76m\\n\",\n       \"aY42RV2U56y7Lotz7LqFZxP6d0cRmmBQudz3vkJj+kzJC0nJUXCZS6Qhponk6U9kH+MnxcKjHXnp\\n\",\n       \"ON6vmEfnY/P38CNNkTGMwIjSQ2MPl/uzrl27poWFBbVarVRKBRpFw9rbCwvtIQTpHLA4CEqeZR5D\\n\",\n       \"TnzGdy6D3pyJnInzlC/XWHwwsDOvCwKfKDxNlJ0vYO7xYmSeK+EG1mUT5go3b4x+rxsXLogiIsU9\\n\",\n       \"k8kko6QQYpFGLtjwIFGAIFInJycJteFeknqHw2E6Sd5hW0e3HCEhbIWX7bzB3EAT7qFRP+gyHuCd\\n\",\n       \"k8k0eRyli6HpwtJp6CFO5wueQf/8fkcb3SCiinS/38945v5OTwz1CuhOtyjs+RkOh6rX68kgWF9f\\n\",\n       \"19LSkubm5rS8vKzV1dX0TN5Nrlqr1UrXqIL9+PFjHR8fq1arZbbv00/6Tu2ir33ta/rnf/5nXb9+\\n\",\n       \"Xd/61rf0wx/+MOUr7e3taXt7W2tra6rVahnDnjPkMIYoh0A/q9Vq5oy9vb29JPxWVlbU7/e1vr6e\\n\",\n       \"BCcGg8P/rAPmaW1tTVtbW/rkk0+0tbWlk5OTBP8fHx9rf38/vcP5u9lspurr7XY7U3/r7t27mpmZ\\n\",\n       \"Ub1e19zcXKb6er/fTwavo42SEj0KhUIyqHi3r+3Z2fMzAb1ulTRF/t2Bm0wmqlQqiWcoc8A1UM9S\\n\",\n       \"6bxWlssitq5vbGyo1+vp0aNHevjwoSSliu2lUikpYS8O62vC1wUKDKXNPHMfSiwiNr5+ovHiKLEr\\n\",\n       \"Y5rL8+g054X3aK47ohGV1w8pq1Ni//yZ7tzRMHrduPNnIJ9cDvh7MaJ8vNzrBpwbGsgRZDL85jly\\n\",\n       \"3i9ohV5wB8XfxbWYfM+8eppFpE0e3fjfIz00dBkbHBxVjYac1wpkLeGY8pvNJXnzm/py6ZWrdtWu\\n\",\n       \"2lW7alftql21q/Zb2wtBpGIYRTr3EoHwpWyYzi3ePGuY7/O9COu6BexQpSMMjryAfLi34Ie6ekJk\\n\",\n       \"DHvRp+gh+PvjAYz+DPrnniDjImnav4MlHT2bGELKg7h5LiiEJ56DAGHNx/FyL3A/38HjYOs176PE\\n\",\n       \"AIm0bFmHbtwbkzNp9IW8J/rioTkPJ0lKOQI8O26rZl79fXg5EeHxuQHBil6mh1N9bqWpxzM3N5fQ\\n\",\n       \"SMJpq6uraVwk3Tp/gyzxOeE0kCLfbekhI77f6/VUr9fTdn3QJGjFOXHQrNPpJN4i1wKeAtUajUYp\\n\",\n       \"X8jnIo+XqdYuSd/4xjf0y1/+Ml0DceQ+RxUJNbAb1HOZOJvPER08UNqtW7cyyKgjJLybfuB9DgYD\\n\",\n       
\"PXjwQHt7e+n577//vqTzbf/j8TiFNwqFQipI+ejRI62srKQDmmdnZxOtoNft27fTGLwEByUH5ubm\\n\",\n       \"Moc2exXpwWCQCZl4jhDoIfME2g/qRC6gNEVx4fFGo5HhH/hyfn7+QkHRfr+v58+f69GjRyn06QUN\\n\",\n       \"QZ3Zru6IqyO5jojQN5DgvDwZR2Sct8bjcUJdPKzGu5ALLhNiuDFuTuJ95MS4fnG94iiP6zPChY7y\\n\",\n       \"ECL1SEecQ5c7NO5hw4HrLniDNY8c87Expx7qBIl1FNuTzPPSHqTz0DU5qjEq5KkXrGNPGifKwPOj\\n\",\n       \"XM3Tz9I0r5aiyR7S4zehYPrNOgDh96OaYi6xj4WixZ5o7ykdHvbMay+ssjlE9wUuKR1NEaE6Wh7T\\n\",\n       \"+zOjoRCNMY/d+gKI7+NZTFaEuJnkmAjnCXdepZi+8jyH9z0/wPN+fFw8mzwG+uGhIs87wvCAKaBp\\n\",\n       \"jEU77X0MMHDMEXADggXt4QgXFJyTJCmFH+iPjwNY340dF1okqca5QKD5dQ8lukHk4QVXpIzf+YT3\\n\",\n       \"RLjZ89ycd7jP5zEaYhHCdv7D6OHvvIZwZ2cfY4SWsb4ZCobkXg6klc75qdVqpcOSS6VS2gZ869Yt\\n\",\n       \"zc/Pa29vL13rdDpprJzfxgGyvrXYeX00GiVF/OMf/1i/+tWv9NWvflXvvPOO7t27p0ePHiUekabJ\\n\",\n       \"np7PUSyeJ0k/f/48JTcj3I6OjlLdKencAKrVaommCwsLqcYU+W5u2LlQ9TDUb37zGx0cHKhUKunj\\n\",\n       \"jz/WBx98kIysV155ReVyOdHl5OREL7/8sqRz+H9vb0+dTkfb29va/H9rbdHXR48e6fT0VNevX8+c\\n\",\n       \"N4bh7IYBfMORMfCTH6jteV3UrfJEe2QT8+LhDPKgeJevXwwPwu+04XCojz76SA8ePNDu7m46UxDD\\n\",\n       \"ldINVHf3texhcH77eoOPvNwD19yAiakL7iT6Mz30XigUMgc/j8djdTqdjKK/zMjEmXCa+t9OH2Rm\\n\",\n       \"dKaQ2R7ii4aIO/yuh/xeNz793Dv64sayPwPZFx1Txh7HwD1OX0kpzNdoNC4Yux5mhA7uQPIdZBH9\\n\",\n       \"9ntiaNNDdhhwOOzoi4WFhcRzHr7zvlHeRDrP/8TIQl94TnHcUAVtvL7fZe2FIVIQz2PVoEJ+sCbX\\n\",\n       \"sHLzkCiEUF7CmjOpeyckf/oCdwHuk4G3KCnFXrGCowfucVs/qd29n3K5rHq9njEanNHjDgHGHhOP\\n\",\n       \"/dnRO0JY461FoQADe66FX3N6oZSd3ixCf6YbC3hnMN/x8bG63e4F71SaJukTQ8+L2/vYnE5x518U\\n\",\n       \"ir67xQWy13rxFvnMkzw5kDVu3aZdlr/mz/YaJ7Tj4+N0JhzfcXTT+cHfwaGjlA4ALZHODQm8Y/7G\\n\",\n       \"WFpaWkrjQgixqwtEolqt6vj4WM1mMxk9JHNjsMzPz2dydpg/R6do3/nOd5Ln7GjkYDBISeHUfPKD\\n\",\n       \"h9kZRzI18wLyMTMzo4ODg4zBDW1wJFqtViqTAQ0lpdyc5eXllFD/ySefqFQq6YMPPtDW1pZ+53d+\\n\",\n       \"JxmprVZLtVpNrVYrGU0Yi3fv3tWtW7dUKpX05MkT/epXv0oG7+///u+nk+wPDg5Uq9USqsjcgrrF\\n\",\n       
\"/BKvSZbnXIGq4ZkzF752XUZ5DguKxJF4EE8SdWnPnj3T+++/r52dHXU6HVUqlQwC6geQ+1rmnf7b\\n\",\n       \"/3Y0HIPAnUFvLjOglTs70aElP4j3MOeTyUTdblf9fv+Cs+vJ3t4cwYnr1xEndILLLzf2XI77zjZ+\\n\",\n       \"oqHoxlV0zBijb/GnId/i55PJJIPEuoFGf6GpG0yg0/BpnpxDx4HqMA84C1HP+j3+Hh8jMr5cnu4c\\n\",\n       \"9/NQ0XMur73vp6enSUaNx+OUG+rGMu8tlUqpbInLeZ+zy9oL3bXnDMdEoWQxpqRpnR3PuvcWEwlp\\n\",\n       \"DpdicSI0CAPBOJcRih0o3IfgcYXoC9+VtBsEpdL57iqu+YGaLEL3XC5D3mIisu/gi54C48vbzgvt\\n\",\n       \"YWIXWg5rw2ARwcLjdIMXeh0dHWXqKkE3qh+XSiW12+0kfKXpAbwxJIrR5mEBNyiZW0cQI72id+ke\\n\",\n       \"eKR3Xo0veMoVU0yadQWIUnAlRf9coNBAE/CwIq25ZzQ6rzHkByyDUMVdRihqR2I9pLy8vJwEVL1e\\n\",\n       \"z2y5J4zW7/d1+/Zt/eQnP5EkffbZZ3r77be1sbGhg4MD1ev1lGwOneEdRyRefvll/eu//qt+8Ytf\\n\",\n       \"aG1tTWdnZ9rf309zgeCl7pivmcFgkNBFDFnoMjc3l6rAU16Ae09PT5NxiTfruxuhO8YZu89KpZLe\\n\",\n       \"f/99/fznP9d7772nhYUF3bhxQ9J50vzMzIz+5V/+RR9//LGePn2q3/zmN5Kkhw8f6t1339X6+rru\\n\",\n       \"3LmjV155Rf/zP/8jSfr5z3+uP/qjP0qCOu5Q9Lnz9e3hFb7POoXnMX4iz7MuCZU6MsAP6zvyqMtl\\n\",\n       \"5vfBgwd6/vx5CqXWajUVCoXMRpN+v5/ORCwWi6l2FQY4KCbvYoyOFGOUSNOyCcw5JUcYI84qitWN\\n\",\n       \"A75TLpfVaDQya351dVX1el39fj9zigLoBXTx8J3Txku50G8+Zx7c2XKjwneKeXg3hq0YXzQQ4QGc\\n\",\n       \"JJ7tzpbLGZqjfN5iArsjatHoOTk5UbfbTQgNvMVvZJTfE/vhOgq9R+kPd5Sgqzu8DoK4rPPGHDnC\\n\",\n       \"76Hr09NTNZvNJAuc3ugexu0OVwydxvbCdu05AiVllQLXvRAewsZDPvzO+0y6WGsCL1qahr48NMH3\\n\",\n       \"Hd1BIbjXRtjL86akbKVXR0P4DMFXqVQy9ztyEE/kjjuySqVp7YsINTqS4SEm6BENO98h5iG7PEHi\\n\",\n       \"hgVGm4cJ6Cuxe+8n9+EdcA0GR7HAyDHEF3Mr3HBzr8EXqUPTLI5ocLv36MYp70XZRGMLgcii5T6n\\n\",\n       \"l3+fvyOs75464QcMhHgf4/bDWd2QZY0QhuNoFOZydnY2ha8kJQPliy++0K1btxKywhw2m00Nh0O9\\n\",\n       \"9NJL6b6nT5/q/v37ajQaunHjRiasu7OzkxwM5pJnbmxsaHNzU5ubm2q32/rss89SHSlCZeQgDIfD\\n\",\n       \"dEQK9OI3/XLazczMaGVlRd1uV8fHxwkFZrfe2dmZlpeXMwUrWRPkUOzu7qb+7O7u6ic/+Ylefvll\\n\",\n       \"lctlHRwc6G//9m8lSffv39dnn32mTz/9VI8fP86EZD///HNJ0le+8hVtbm6qUCjoq1/9qiTpgw8+\\n\",\n       
\"0MOHD/X2229rfn5eS0tLF5xEZJ3PPeMDBfDDtTGMcfScLwj9ECZF/knnSJ2HbvydHimAr0Hqut2u\\n\",\n       \"ZmdnVa1WE6rIu7gX2Ug4zeuWeb4WcyBNHUBkgpdeoY84HnHdsw5xin1nYAz9uQFKIVHCieyuJJRK\\n\",\n       \"OYFYVsEjGB5tYP3RJ1fCjizjgPnuaF/bLqOcBp5H5n2JqJLrL4/CuA5ANzny7yFYL9/ihvh4PE5A\\n\",\n       \"APPr749OkOtQ5j2GHuN8Rn3J54wlpn3grLtByfy4AQzPtVqtBJwUi8WEqtKQlS7XvX/RCPX2Qgwp\\n\",\n       \"GI7fkpJlSjjKjQK207pXGhebM08MFzqC4JPpC8Ohcb4vXTw13EMYQH4OHzsjucVM3+i75zu4V8Di\\n\",\n       \"d0YE8kaZ+PcjeuNWNJ4aRmgst+BGhi9MDJIYuqM/0Jjxe4iO552enqper18IQRwfHydUB0OQd7li\\n\",\n       \"9sXtc+lz76HSaDT6onQjUFJKXHTj1BUUhlwU3tAGOju9oXOEgZ3fvN8+Ni+sGIXGZDJJStKNVGiK\\n\",\n       \"AUJoDCRqOBxqMBgkVMYRuWKxqKOjo1Tn65e//KWePXsm6bzEAfkxc3NzajQa+ou/+AtJ0re//W1t\\n\",\n       \"bW3pS1/6khqNhmq1mjY3NxNNd3Z2dHZ2pnq9rnq9nkoc3L9/X/fu3VOn09HTp0/V6/US3L60tKRr\\n\",\n       \"164lujv6OxqNVKvVEm+4MmEeQDmWl5cznigKazgcanFx8QKSDTpULBa1tbWV5uxHP/qRlpaWdP/+\\n\",\n       \"fe3u7urTTz/Vt771rTSODz/8UL/4xS/07NkzFYvTQpeDwUA///nP1Wg0VCyeh+ReeuklSdK9e/f0\\n\",\n       \"61//Wq+88ko6XsdzOlhPzH9EQz3M7kfkgA6VSqULqRDlcjnV7skLpzgdWIcoUt7p9zWbTRUKBT19\\n\",\n       \"+lTb29uJ/xk/RgnozmQySX+jzHy9xPXhoRiUImhVXiK6RxSQAdDUNwNEFNuPTWFzB/dVq1X1+/2k\\n\",\n       \"b7woI/IcerthjhFBGDY6Zr6BypU0PI+jAxrMMx0Zd32CIYAcI5KBzAAphF5Rz3K/yz/nRQ8pusxE\\n\",\n       \"X7KO4pEthP7iGvZ5j/wd5SjONXLO+d83Q9Fvfpw28EOkG7UM2+126qPraNflUa/6JpC8dlX+4Kpd\\n\",\n       \"tat21a7aVbtqV+1/2V4IIuWQs1d/9nCbw9g0UI3o9Tv8L13MJ+KZviPIUSrPF5KyOwccAaHF/BgP\\n\",\n       \"J+FF4Gl4c+TGUSU8Ct+Z5mPhWt72T/di3RJ3Sz8vEZ93eOl/9xa57igJNPWwnoc3vG+S0sGZ0NIt\\n\",\n       \"+5jkyfwyx44A4vlERAqP0hNy3fNx+sR58FCuezRxnvLi+/CLe3ru/TmdHBVzT5/vcM0RRK/Czvc8\\n\",\n       \"1OD8xvyCzDAOjioBWYq5JtJ5AvH6+romk0kKb2xtbaXdcXjvX/7ylyWdH0z8T//0Tzo5OdFrr72m\\n\",\n       \"V199NSFZjUZD165dS5WvFxYWtL6+Luk8Efv4+FhPnjzRs2fP0lEi0jlac+/evcwuGkcyCEWBfsQd\\n\",\n       \"i51OJ61tEuThPRLX+/2+6vV6Ji8JBGN7e1snJycp1+lXv/qV/viP/1gzM+fHpxwcHOj73/9+4gvy\\n\",\n       
\"M8gTefz4saTz5P5ms6nHjx/r9u3bmp2dTXlXN27cUKvV0uHhoTY3N3V4eJiQHPg77mBlPfDseGYi\\n\",\n       \"oT5HK10mgu6BbHhI38PxyIDI3/AqIaz5+Xk9evQo5eiVy2UtLy8nmvouPg//QG8PpUfe95BgRMsc\\n\",\n       \"aSKEJGWPlfFoBvcREuJvD4c7CgiaBL3n5+cTUutFVRkLGxg8dQGEKk8OcT3mPfpcez6lh/24j+9E\\n\",\n       \"BAnUDRkIj4COsf49GsH3Pf0ElJOwPvrA0z3Qoeg3T1iP8+zz63nEHvWIcxzTQTwqQ19i2BCkymU8\\n\",\n       \"vO5IHrLUUcR+v69isZhJaXA6Oy8yv//nyh9gMBUK091w3nFCETHmy9EjfkafQ5F5hhSMErfHe95N\\n\",\n       \"vIahRp9cCXk4iZBZzB/yyfSdK/Gdnn9EvN+ZTsqe+8e27XgfeRRugNDvaHjm5R9Eg5V4MCFMh4H5\\n\",\n       \"HAZ1hR93wnlyM+fLkQOWR1P+R6B5X3x+aZ7LFYVVNDDdaIcmUbj7vd588btAQHAyBr93NBplFq40\\n\",\n       \"FaCRb8jJKBTOD9eNMXrmAJ7zZGlyF2IomvpP8I7vpIH/hsOhDg8Pde/evZQLMxwO1W63tby8nLa0\\n\",\n       \"M4b33ntPZ2dn+vd//3d99NFHeuWVVzIlFZaWlpLBs7Kykvr57Nkz7e/v6/Hjx3ry5In6/b7eeuut\\n\",\n       \"9MylpaUUhpKm8L6HMqPxSSkNkvNRHnwPZUHYC2HoczUajbS7u6ter6f/+I//SOMYDAZaXFxMPMp9\\n\",\n       \"MVS2v7+vd999V5L093//9+p2u/rLv/zLlEtG3lO/31ej0dDDhw/10ksvZUoLHBwcpBCtK3v4pVQq\\n\",\n       \"pVIHHtZtNBrpvEFCQjEHxY/icp5GWZFAy/ji8U3+vrm5OdXr9ZTPhiJyxYTRxkYhngv9Wa9x3dDH\\n\",\n       \"qIRZYzzHQ0bu0Hi4SlKmz9CQZ2I4eWI79CZv1WVv3LnlxqafBoCBhbzyNAnvL84BdOEaITMvbRNz\\n\",\n       \"lLgG/VyfeIoFaSOlUinVLHPni+YOKc8tFqfnevq6Y7zu8PC+4+PjdPg2a5Fxs1EEg8flmstn3uU6\\n\",\n       \"3zf+uHMJ38VcP7/mNPV15Ruwjo+P01xwYHiU/d7XqCe9vRBDiol2hMgXE9Yh3hCM6YrbGc7jtj4Z\\n\",\n       \"fO5x2ohIEWuPgojFhzESvQGu8bc0tb6Pjo4y3oSkTN/jgpemQjNa1M4o7G5wA4Ux+f3+uQs70Atv\\n\",\n       \"LhgQGp6T4P3nN4gagjYib57jwPtWVlYyuzUdiYzootMIpU8phWjUFAqFzELNi2NftghiYdTYZ3jO\\n\",\n       \"G/wAvZwH3PvyhF/64M/wnTrulcUcQBfMeKmeiAmCM5lMMsUsOR4nb9wuXCaTiXZ2dlIS98HBgQ4P\\n\",\n       \"D9MhwZVKRTdv3kz9/tM//VPdunVL3/ve9/Sb3/wm9XN+fl6VSiUVnPzss8+SkHry5In29/d1cnKi\\n\",\n       \"27dv691339VXvvKV1B/QlFarpeXl5QsKHTnAUTfSeY7QYDBIGzcoxeA7Qbvdrmq1WkrWdidqNBrp\\n\",\n       \"4OBAJycn+sUvfpGOOnn99ddVqVTU7/cTyuJzgUGKwUUe1BtvvKHJZKIvf/nLevz4se7cuZM8fZQi\\n\",\n       
\"R8fcuHEjGYutVivxDgVOuc8RtF6vp2azmRK4WXfkwjgCTm0dFKLzP7vckE2+BqIB4M6ldL7Tk/wh\\n\",\n       \"Cp0yHxsbG4mnQUYZB0izI0H+Hl/HGCTwKfweZaYbQO4w8kwpm4zs73MdQk4P/IbxC784wuyRCuaF\\n\",\n       \"5lvyvbmBQ1QBWUCRSt8BTd+9vlF0yAqFaeFadCYOlXRuFKD7Ym4PRp/nULmMIwE75g9hJPouUd6H\\n\",\n       \"7Op0OsmA5l6KDzO/oEvMJfLS88Gk6Q5NRxldB7sjGuntxtSHDXsAACAASURBVJk7EY4oxQ1bbhi7\\n\",\n       \"0cr4fpuRJb1ARMo9CulixVk3INjaPBwOMwwmZRWYL0qu8TlM7tAh/18W2suz3uME+sJ0ryomoktT\\n\",\n       \"gcq7HT1yS9qZGOHD7+i18ONGGPR0dMsNQ677ova+02Agfyd9wphwpnODiPGw2EjC9Z0qjkB6crfX\\n\",\n       \"BXEvJm6P9fsvS6CELg7xEk6IgpdrLijj9mApm+gaF58bsD7H7oF7P/2dIK7+LgTfeDwtSMccelFX\\n\",\n       \"N4Jpk8kkCaRCYbpV3UNGo9EoY4DcuXNHn376qdrttk5OTrS/v5/CG3fu3ElFJf/8z/9cT58+1dbW\\n\",\n       \"lqRzo6XX66nf76dyDI1GQ5L05ptvSjpP1n7ttddSbSfpHMlYXV1Nhsv+/n5mmzNnyEFXFF0eAhnX\\n\",\n       \"BmfNYVR6An+/39fBwYE6nY4ePXp0wRFjzbgh57skUQhUPaf90R/9kf7xH/9RxWIx0W0ymSQkR1LG\\n\",\n       \"ADk5OdHe3l7iQUdquR8E6+xsWguMHYeOarrCdgXrCsnXILX0PHTmzkC73dbTp08Tz/iGB0cnpOkh\\n\",\n       \"0SBkvqYIwzm6HCvi8/fMzEzmnE2cEeSzjxHHMk9ex52wXhEe2cU73bByB8/TN5hz5Gh0SB0dc6PH\\n\",\n       \"E8BpjnK54xVlKWOIpWSYI+iKYQzdovHhOpF5B0XyNcOuV+q3uQHukSP6HY0Nkud9HN6Hy5LF+cwN\\n\",\n       \"d482EMb2ufVdgG6sOhLlfeNadBRopJqA8rtDze//c4gUSiYS0hefMz+xZI/7xu2sLFAnqiv7aGU7\\n\",\n       \"LMr3WNzsLMObyVPmjqA5oR25cc/fDUWHLaUs7Ivw9jFwD4vAFzdbi6OX5EYZ/fawIH3CSHMm9v7g\\n\",\n       \"1frzPVxALgn3OXM3m810NAjC1xeI9xXYlXH6YnMB5kYIf/v4ozFHv3wREd93oRg9PubQaYYScuM0\\n\",\n       \"hkn4wYhyzxEh7krTeQPkxPMBWAduYDqfQlfmw9cQ9IlGB8LTFQgGymQyScgH92P0/PKXv9TGxoaW\\n\",\n       \"l5dTHtTdu3fT+Hu9no6Pj1PoAeFHXaHxeJzqPnldJL5br9e1uLiYMfjw4ieTiWq1WmbHkPPJaDTS\\n\",\n       \"wsJCBlGoVCrpfz865vj4WP1+X8fHx+noFeaCMTSbTa2traVipDwTfigWz0OYBwcHkqTvf//7WllZ\\n\",\n       \"0RdffKHV1dULKAo7KJFhoDWSUj2jer2uyWSS6i+BwJXL5zui9vf3M2jG+vp6urawsJBZe6QBsKsL\\n\",\n       \"HoanWUuOdji/cw0Dm12RXvdrOBymOlPD4TCFSqJDRugZ5cy7oA18jbPgzi4/8LAr2slkklkbvr5x\\n\",\n       
\"HAhN815Hn0ajkSqVSlpPjs5hqHitp4hO0zzHh3v9OciEqLxdt0QDzA03xu10Jf2hXC6nuXc0n5Aw\\n\",\n       \"/EZzhNyPQGGMR0dHmbUfHVIvT8Fv5oS5cLDBU0EwpqIsykMZoYcb3YzPneMo23m284kbdYwLmvJu\\n\",\n       \"6O3z4EZtDCFe4IFLr/z/2BB6UlbxMwFucUtTaxGh64odr8qPJXBF5YiOv5fPHcXy0J6jHHzf+8tk\\n\",\n       \"upXqaAMLzwUHiIUngvIsxl0qZWuUuBeHAeoGT7lcTudaRYQvGlPOcBhJQOmEOaXp6dmSMh4a4/Dw\\n\",\n       \"q38H+o9Go5R87BWcCTe4MSZlK7tTediRHN7nc0Y/+U1/XNjkhXt5Jt+PixR6uWHngsLDCf5c984Z\\n\",\n       \"kyspV3L8RslzJIlvn3ZepD+VSiUjFOfn5zN5BG7U48kiiNyoo7I4Qmxubi6hHw8ePNDrr7+e0KTl\\n\",\n       \"5eWkLLe2tvT48WO1222tr69ncrNqtZpu376dnAnQCeaB5F3oSCixWCympN5qtaqDg4PE/zdv3kzX\\n\",\n       \"YvXm/f19zc7OamlpKVVuh57MB+9nbhgHhhly5fr166k/ID7UrFpZWUkJ5V5uAJSJd/zZn/2Z5ufn\\n\",\n       \"tbm5qddffz3jOC0tLenDDz9Mxmez2UzXZ2dntbGxkYx0H4PnQYE4gWRhmPua97p7jhxEBCbyyGUe\\n\",\n       \"9/z8fOKDVquls7OzlPPS7/d1dnaW5pRilh6miUoxbrDhnRhtLtelafkDlKajV55aQCK0ywMMtOhc\\n\",\n       \"R4TaDQP4pFAopDyvuA5dznj5FkdUXC64jKO5XGJu8kqycJ2+er0vngky6AYDxpmffZiH5HtaCDRl\\n\",\n       \"Tuv1esYodjmSF0WCj+AP3zwRUXJaBBOcT+EXN7ScNvBxdISjHcFapbmj4O/jOfQJPe28mJc2kmh6\\n\",\n       \"6ZWrdtWu2lW7alftql21q/Zb2wtBpNyLd4vXEQXgNGkK/wJnel4SrVAoJOjYw2uO4HhYiP95t4eH\\n\",\n       \"gMTzPDXPrfFkRN7n/XcvgebhnejteB6No1LQBuSId/h2V+D2mMgHbfCmHHnxcKKPw61vwlTe/6Oj\\n\",\n       \"o3S/h1l91wvHMNDHuG3Z4XYPzzE+35Lt0GqEjfGU3LuAhngdjN+PifD55FnQzcNh7s2BinmxPM/P\\n\",\n       \"gC/wjguF6Y4ovChyCDz05Vv62YXm+VCMHTrAU3iOc3NzKcE3zj88QEjNx8q1+fn5FPqan5/X7u6u\\n\",\n       \"bt++rZmZGbVarTTGlZUVraysaG9vL+UreliXM+SYM55ZLpfTLruYe3F0dJSQlGfPnqnRaKTk9u3t\\n\",\n       \"bZ2dnalaraaChYSaqLxO+KJarWowGGR2RFE2oFarpZQASemok2LxPIcPpEg6r9B+eHio69eva25u\\n\",\n       \"Trdv304oFaEsX3fuJd++fVs3b95MHi9zcXR0pPn5ed27d0/j8TjtCKQtLS2pVDo/ONqPEDk7O1Ov\\n\",\n       \"11OpVFKz2cygOu5Be94SDbTcEU34AATHQ/SMwcPo5NCwThiTo1p+BA3PJeXAk8ZBl0B5PORNdfKI\\n\",\n       \"EoCaOgpCA1FnPbLBh77Ck65H+Ix3kqzveWYRRfbjRRxB4TpzwTuRYS57I2ri9HYky/OgPAISc7I8\\n\",\n       
\"1IesdmSJ++hLDH3RJ89f4hp5WRy7xPh9HplLGvoZ/vTSNi4foJXrFubIdSp8RfFpaO85rdAgpi64\\n\",\n       \"TvFxQxMPBTpPOQ+A9EJz+vt/rvwBE+KJfvGaNwSBJw56UqkzkCsan5woNFxBwhi+APlcysKRMSE9\\n\",\n       \"LnCYwnOBeCafMR6vTcJCiBPMQuK6LypPjmRBusESBZkrVsbvRpw/18OeHj6g5o3npHneGefFoZyc\\n\",\n       \"pv7jdIZOMfbNd+ATD8MyL4RQPabOPPBdFlpefgbN4+EeSvV++iJEEUUB52FLF4wuXD3kJE2VEbzr\\n\",\n       \"ix+F5MLGt00TXoDWMVmdkADhb+hGWPD4+FitViuFYFGEz54908bGRjoYWVI6sLhWq6W54kiaSqWS\\n\",\n       \"hO/KykoKu0nniehPnjxJQoqwGOPDIJqfn9err76awoyE7xYXFy+EIfjbDyeem5vLGFLwJvzhByWf\\n\",\n       \"np4mo+wrX/lKSqr+zne+o1arpW63q6Ojo1TlnL4+evRI/X4/5QthgL3xxhtaXFxUpVLR+vq6Zmdn\\n\",\n       \"df36dUnSp59+qo2NDX3pS19KBz5D04WFBbXbbR0fH6vRaGg4HCaaHh8fpwOm2Q3nBj+hynK5nMm5\\n\",\n       \"cgXP77hLNtZr43PGxTwho1jTGHvNZlPz8/NprjCaJ5PznaeEqpgL6B3DzOTM4AS4HB4MBikP8DKd\\n\",\n       \"wNx6GgX9Zjen0wKZyfrwJGaX3WwO4JnkmmG0ut7x/zE08px3fjM+FDbhMN+xiLyOubw01zGkS8DP\\n\",\n       \"hOiga5xnjE8fL59jYLLBwx1ND6tiODNGjFNo6/XO3HiL4TF3FF1+uy6DFk5HPotpGy6r81JPPEXE\\n\",\n       \"+cLpEw1T13mXtRdiSEnTE+89iRtC+g+NwWNoOMFhJI+3SvlZ9s6M/M+EONPExEGP0zrRPTcGYsMY\\n\",\n       \"Mblbyu7mYwwoO09487G7co7GAnHpuPARSDCg7xiENhhIeUmSzvw8j8Y4o7c4mUzSoaW1Wi1T9yUa\\n\",\n       \"sR5Hj0aWG9gYj84fbsyiMKNQiDHtyE9unHlzRYWAjgY1dIlJvDFvzlFB5x9H2XjeeDxOxgAFJn0e\\n\",\n       \"HaUgb8XzFuhTXPyMIa4l6Ibhi9Lb399XtVpNO+bq9Xo6zoX6QeTr1Go1vfzyy5KmhlSj0UhePsKU\\n\",\n       \"HX3wC8YNdGMHGqUE9vb20jPZIo6hj5HBuieZmBwdRzLxrD2Phnv5DgbA3/zN3yT6/tu//ZsWFha0\\n\",\n       \"ubmpxcXF1C9ytXZ2dtTr9TIHGq+srGh5eVnNZjPVW6I213g81nvvvadicVpry8cPL0VnqFarpXUC\\n\",\n       \"audGRql0fhB6vV5PO7ecTx1Big1j2vOHHEXnbD2nmaMcOIKuvCmNgIx2WUM+EgaTI9WOivsczs/P\\n\",\n       \"azAYqNvtXuBvrh8dHaU8HHcEPRcGY0OanlHIs8ijYox+LmC5XL5gnFA6wOWwO3OsZX8mfWc9xh2L\\n\",\n       \"jN2RWuYlOoHch1zMyyFinlgzLnujg+rXyJMdjUbJmHZn3J1830HNnDJmR0ahhecMR+fSc9NcxjrI\\n\",\n       \"4u92PRk3AXj+a3T0Y3PdBZ3yDDmci8s2G0gvsLI5gtYFH2f1sBAcxpamXmhUkggbBCS/+dyNkDyh\\n\",\n       
\"4te5H2MNRvY+RIal0T9HnbwvhHMQntEgQsFFVCImC8YEcASJhxqkbH0VBI2PH0ECPf0au/W8j4zR\\n\",\n       \"oXH6Lk1DU6AjHq7MQ2YcYfECilL2/Cfo6kYTDQGVVyk9Gg/xkGdHoeKigV5Od+YCIeZhRgSJI2De\\n\",\n       \"3Dsaj8eZooTQi23e7hTwLt9Z5IiUG+Vs9+c9vAtl44Kf7eMYaTxzcXExVfvd39/XaDTS7du3JSmd\\n\",\n       \"UQUidXx8nIws+kPYLKLGJycn6nQ66dkYGZQCoCDe06dP030U3IQvBoNBJnzK2AqFaXKwIxCgB0dH\\n\",\n       \"R3r+/HlS3uPxOFOC4Pnz5+lcwL/7u7/T7du39YMf/EDb29uZxPg33nhDd+7c0dOnT9O7QN0ajYbW\\n\",\n       \"19c1MzOjnZ0dff7554nfvv71ryd6Ly4uJgeA+WQTAQfm0nzchK9cRp2eniZ0rFQqJVSKEIsrSd9I\\n\",\n       \"4oaGKxOuz87Oql6vZzaUgJYSembuuI7cOjs7S3PjssNLMlA+hDF6+gUIDc/k7263m4yxPP7m/dK5\\n\",\n       \"Ae6IQ55D7hsOeLeHHklRcNnG+3mnPzOG4P0a1z0iQd/c2fZdgr5bMG5k8bnCqCgUCqlPrBl38vKi\\n\",\n       \"Kqx9rrm+YLefyxPmEn5yGkNXdx7j+9xR9ve5QelyE4c9Jrl7n93h9eYy2sfHHEQHw/WTy3He4YZ6\\n\",\n       \"Xnth5Q9KpdKFwpIwqsOWUjaj38M/fq80NVbcqne40eE+t4S5FhET3h3hQ+6PiwbDyo0ynunvjXCi\\n\",\n       \"ozaSLjBFtJz9+xiKjhjFZ7iF7kyOwo4GmAsymhu1eILOzNDNi606OkPf/VkojV6vl4xrN/x4tiMo\\n\",\n       \"l4VLyf1wIeWGtKOYCB6Emgt9F4bRiHbh78oKWjpyxTuhJYKW/sZ3ouDwsBBcXqSP+fOwiHukEf73\\n\",\n       \"uffG3LA2+v1+ConBS+RAUWtJOq8jRWHNRqOhcrmc8ocI9ezt7SVPlrDXwsJCKpzJYcrQqt/va2lp\\n\",\n       \"Sc+fP0/0dJ7Z39/X0tKSOp1OhsaEik5OTjJGkh8vUygU1O/3U0gKeUPeFU5NpVLJKJdvfvObun//\\n\",\n       \"vt5//33t7e1pd3dXknTv3j3duHEjoRR+bMT8/Hw66HZ3d1czMzP6gz/4A0nS9evXtbi4mKrAu9OC\\n\",\n       \"gURdH4x45oK8OvJ3XCZSvsEROx87Ctx5v1QqJcMdJRTlkZfX8C3yGBnQyx0QLyQbUysmk0lCVmN4\\n\",\n       \"y5HXGNZ3RHs0Oq935sqUsaI7nDfQJdHBos+uZGO6Bu/Mmwuuz8xky74gbxivz5O/y3kYeeeos6M5\\n\",\n       \"tLyQkst3csloGCCEDiMigyGNXvS+QlOv8C5NjQl3uhwBdSfd+RvUk888jxdaxCiO08Zln/OT69po\\n\",\n       \"LDtgEiMKrI0YhXEQhPF6VMSN3Lz2wgpygiy414oCc+UhTZWpW/ZMIorVESC3JF3ASNm8p2h5RqOI\\n\",\n       \"97qC8smMC889BEfP/Nko4RhmcwvaDRunB8LGvUAWe0TjECJuSFwWxosWPd6nx7lpbt1DD0dbPKbv\\n\",\n       \"Rkg0grxCu9dEifkA9N29pwj5+uLLQxzpP9ecvyKky9iYHxcKPl9ReGKcYUzG8RMWkJQUJ7zf6/VS\\n\",\n       
\"iIZnOJ/hYYNIwPt40+5lu2d/fHycqg2DdjFPGDQgOghw7kFZNpvNJECePHmijY0NDQYDbW1tZeYe\\n\",\n       \"ZOr09FTLy8tqtVpJCbNVHgHvypYq4o5GraysSFKqk4TiOz4+TqG44XCoYvH8rCzQHPcyEfjwEyE5\\n\",\n       \"51PmOVaEPz4+1o0bN7S2tqbd3V09ePBA0nn5h36/r7W1Nc3OzqrT6aRxzMzMaGFhQd1uV+vr63r1\\n\",\n       \"1VcTQrS8vKw333wzoQbdbjfRhm3mEXFxvkXuueE+Go20tramcrmckEEPX3nivz8LXvRTBmigOhhv\\n\",\n       \"vAc+9FAT80l/UDY4Ai6joxL2xGHWPHlefo338dzT09NMiNPDVB7BcF6DLzzfhtAzazI6rhhzvr6L\\n\",\n       \"xWlNvUKhkElKd3QK+e3IymVRjEKhkJDbvBCs6yg3YFyGgzQjx5hHN3SjQeKGtiPHUVf6/9Bzfn4+\\n\",\n       \"JYI7vSLw4PIbY9aRJx8jY3FgwkOzMUeU5t/3CIbznzsR3i9/xmV/+xiY88vaVfmDq3bVrtpVu2pX\\n\",\n       \"7apdtf9le2GhPazkeHI0Vr+jFzHG7SEch57xXGOozGPQvsst7uCIUDSWvlv7niiN9+ZWdR7E7u9z\\n\",\n       \"BMWh0Jij5Pd5oqaHIfC4SU52KHoymaRdLx5j9nc7XT3M6t5iTOaLfXXP06HqiCwxnyAgoBT+zLzQ\\n\",\n       \"lHvi5DA4H7nnEPuCV4ZH4fPrf8fkZ7xkUCafE88B8Gsk+0JTP9yWcXg+w2QySWEoShMcHh5qZmZG\\n\",\n       \"1Wo15RLhMTqSyU6piJbG8CJrCVSHa6wZeP/o6CiF70BboLdXfiYE+PbbbyeUhjyjVqulZrOZPH6Q\\n\",\n       \"HfpZKBRSgja79OgDzwdl4L7RaKRarZaKY5IL5bzJTkHCG3HNg9SMx+PM2Z3j8TjlVfFu6Txct7y8\\n\",\n       \"rGKxqO3tbd26dUubm5uSzsszfPbZZ+r3+ynna2NjQ5LSurxz546Wlpa0tLSkO3fuSJLu3r2bvGd2\\n\",\n       \"usFvlJIYDAbJ4wV1mZubS+VeGJMj1YTT/ABneNFzq2LVc3gFnnQaELJCzvJM37zg/YBu8AnoU8wP\\n\",\n       \"jSg7c0zpDT+CxMN9nr5QKBRSmBVEknHFXXSRTq4TWBPIQEddPNRfLpcTv3nKiSMl8CIHSzNOR+49\\n\",\n       \"jyeGoNBbCwsLaecqYwA543sxBIkcgUYeMuO6pynwXD73VBau0U8/cJpxcJ0ctzwE0HWAz72P2xFH\\n\",\n       \"vhPDlz4unutzzTgdtWPMLs89n9jXEbrb0T34Im7M4t68aAfthe3ak7L5Qh7aizHPmBDnTIVyizkB\\n\",\n       \"0lRB+24Qh+c89IdB4u+MuVL+PhohFZ7n59e5AeaGBWGFmNvi4UoPGWAEIDDdyCJMgZCOyd2zs7Mp\\n\",\n       \"jyLmn9Avvu8LA0aGSV0A+/z5gmIO3AB25vUdZg5VM59Oa38mi5jdYv58X7D+2+nOHHkOhQsbTwBl\\n\",\n       \"3t1gdoHDnBIacQMMIes86bSCtz0USp9RhMDYKBoXTswDhhQ8i5FFjg7PBIanVhPPJEcIh4TDhqXp\\n\",\n       \"UR/VajUpY3ik2Wzq8ePH6vV6un79ura2tjIGEXk8sXYYIeilpaWMESGdVxJ3nlxcXMzMKWHHaDSy\\n\",\n       
\"65e5hAZeOoDf9Xo9HXHCZ4Q+l5eXM7zI92/evKmzszPt7++n+1566SVtbm7q4OAgbfGHT/v9vmZm\\n\",\n       \"ZlSv11Wr1VLld655MrRXjPZSAyRvsyszHtp+cnKSSh6g6DEgOfbG+cplDrxIrlretnaMfHa2ebKx\\n\",\n       \"O7cuF3xrOUnmbpAx/mKxmHbzEj72eWKN+LrBKKPf165dS7TZ39/X/v5+UoieK0gf4RHfXetJ+O54\\n\",\n       \"8H5P8vZ58jBYdPDJOxuNpqUIvKo/zcfl72T9u87zkJ479JJS7TRSFKKBwbvQbZ7LRp/43FNFnDZx\\n\",\n       \"fpHlOKHuJMZDlX2M0RmPG5Ccv/IMFX+fy1N/j+sc9Bx0dX5i7vicMKyk5LDgQLizDV/G0KK3F2pI\\n\",\n       \"uVESE/fi99wTcIJjQV6WPBm3lXKNnBGIHHNvILQLDe8PfXEmdSvWDQGuMcEYQM587j24cYYQQLl2\\n\",\n       \"Op2EAsQjEHwnGIoUb8HzaWg+Nvci4uKPiA33+Xj8+/7bcy1Y+NzvBihjcMPM54u5dM/W+YfF5tei\\n\",\n       \"9+xIHuN3LwSaci+L3sfA1m0UhSNkjlZFh8ARMObMx+bz4kUZOZ3ehYkn6iIASOB2Id9ut9Xtdi+c\\n\",\n       \"t+aJoAsLC1paWkr39Xq9dJwLXrgfsFsqlfT06VPNzs7qlVde0ccff5yZ52vXrmlubi4lpdO84KIr\\n\",\n       \"PeiLJ+65Lxh4vmPNjVA8Z3jYUU7qD/G+g4ODC7l0IFWNRiMZIdK54m80Gtrc3NTy8nI6mBmHpFar\\n\",\n       \"aW1tLd0vSbdv3047VkEXXJkOBgN1Oh212+3ER1wjGd/5h7nAAUCper0vDghm3WDAzs7OqlqtJofE\\n\",\n       \"eRTjGuXmCtHRdwxsPxhbyh4/43XicFRBOKNc8N1x7igNBgO1Wi2tra0lxNydIMbKO/mMQqrPnj1T\\n\",\n       \"r9fLlH/w5HNQHYzavOTjaARwzdeTO708PxpU9N1lOwn7OGFuuLrjhdHiaE2j0dB4PE5lPFwusm4c\\n\",\n       \"4Y7INHzmaBXvRR67E4NsgQ9cfqEPQSYjmOBghvcFJ9hlvRtn3Iej7cgp/fSiqcxhNLppPk7PCaW5\\n\",\n       \"4Q1tGTsFSBmrjz3SMLYXYki5weBKLn7mqIF7V27YYBx42Mq9DAwB/57fx3e8MVGXWcHc4wuC+7xf\\n\",\n       \"3lhkLJi4w8x3gvkY4k4BQhmS0inejnDF/pbL5YRauBCRpohcTA6FST3k5ve5gHGUi7njXR7mhJYg\\n\",\n       \"CA6VuvCkX96gD7uWomHkcxLhWIRCRMrcwHJe4HO/5vOM8ILWcZ49fJdn2Pnf7vGcnZ1pOBwmRe0V\\n\",\n       \"0V15ueBHWfMuR+vOzs7r+kwmk5TEToP+JCWfnU3PTPNwtyd2S+eKHaH405/+VN1uV6+99pqk8yrk\\n\",\n       \"pVJJOzs7mp2dVaPRSMYRAovQRblcTsiKJHU6HQ2HQzUajQyi5cUroSXjAP1wD9drvRBuQZlGBLhU\\n\",\n       \"Ot9R2W63Va1Wtbq6Kuk8FMh8YGRRK+vp06cqFM6TZ3d2dlSr1bS8vCxpuhbb7bYODg6SIUtfQYxO\\n\",\n       \"T0/TepSUhDdhMUrAOJ9AE1c08OHi4mIyiuA1SiJgPPuZeb6BAqXiiKukhKwQrqJ5gj686SEz5ATG\\n\",\n       
\"lKcRuIJ2HgZhbTQaSR5FpN6dZN95yc7R/f39lKzvdEPx+9rDoMpbhy7X/Fm8j3mLUQopmxjOnDF2\\n\",\n       \"R0L4Lu9wkMD7Ca1Ho1Faj466QwfWo+sqlz8eGpOyTpqPnf9dzvNZ/B9edR3gYcUoazEuYz9jgrh0\\n\",\n       \"sRI6z4jOPe+IwAV84oag09SdefhUOl8zbsz6+JibSBtvLyxHKi9kJk0VWYQAHQFy1EnKHiAcLXNH\\n\",\n       \"sBwqvsxAkrIeBlCtowe0PIXo3pR7EL51HkMqz8LFI6fPHC8B3Hh2dpZg8ajIo6EUDVanGcYTNPOc\\n\",\n       \"Bv72ReRM5ULGPTP3SpyxGZfDtB6iJJTJos9Ddng2z5GUqu/SPw9h8C4XPh66dFp7P+l7Xi4UC9MV\\n\",\n       \"kBuRKDc3RPNCopPJJOUfSVNkqVwuq1arXTDg4Xno7eEHDI3xeJwQCqebdG4cYCBJSmhFq9VSp9PJ\\n\",\n       \"5MIgpFHmbgwPh0PNz89rbm5OlUpFrVYrveOtt97SgwcPEvo1mUwyyrvT6ajT6aR8plarleaiUqmk\\n\",\n       \"HXClUint2qvVaolWPu/c52sIw9DlCoYViATzjiKAfhyYLJ0bNr1eLxnQrVYr0ebu3buSzsOR8CRz\\n\",\n       \"wa5FQm0YI5JSeLDZbGp1dVWTySQZks+fP09ozdLSUmaXJP0ndFQsFtN7OKjcUVU3+KEbx+vQ+J6v\\n\",\n       \"F6eZhzucD0FffY1G5YJ8hZaO8tEYkxuSyDQUsstaDsYmf4n7yG8rFovpmCTeF3PzYjqAh4MceXDa\\n\",\n       \"RQfSDXJfi1xzneMKHsV8fHycZFBUyMhUDA6eQ4t6BpmLfIm5t64/HAX2e/29jirSD67H6I+HF/ke\\n\",\n       \"+ol3gy5JyuUlR/8JU8aUFEekGEdEEpHREcn1eYifMca8cGGv10sGvfclomx57YWF9rCSo+WJcnJm\\n\",\n       \"8Obb4Pnfn4FhIE0RF9plMVhX6DQEtYd9+NzDSW6ouSCKW1URbCT+ej4WC8WtaDfm/DgHXxTdbjfl\\n\",\n       \"ungOkDRNrEeoRIPQtxnHxc3CdG/V0So8fZ4X0SpHnXyRuoHhuQaOvsXxQxc3WmKIyo0hD236nHiM\\n\",\n       \"O0LrvuBc8DD3LgAcvYoOgfOeP9/7g+Dz7zlqyHV/hzsVjspEDx0Uge85UgGfwDd4uvSZBGeUAAab\\n\",\n       \"oxI4AKAqfg4dob6dnR3Nzc1pZWUlw3uHh4cJNvcz3ECpNjY2knDDURgOh6pWq8kYALGTsoVR3YBl\\n\",\n       \"XZGXw2eOFg+Hw7QOMZKgL+u+3++rUqlkio4SEqvVarp165YODw/TOm+1WlpeXk5lEU5OThJaRaHU\\n\",\n       \"o6Mj7e7uZsofNBqNZEyAhMEbjka3Wq3kcNBfR04xJJkLHDme6xW64TNHQKQpkgGPuzHo/Op1mJzf\\n\",\n       \"xuNxJgEbRBMDx2tNxdxGShsUi9M6adFZcd7n/cgML+9BhfW8/CRPXHfnimc6XVxfFIvFdO4hSfDR\\n\",\n       \"GeczlxHoAhBCn1+/n7G6I45T4mvR54nP3fhgfqJMdFnvecbIDebMk61xRugrtPPQGPNKyRdfZ5Gu\\n\",\n       \"PjZpGn5nc4obTvydl5cEnTxE6A5VnFOXl5737IYiNkW/309OixvHEaSI7ar8wVW7alftql21q3bV\\n\",\n       
\"rtr/sr0QRAqL0eGyGOby+Lt0bvV60bKIZrgF7l6WpnlYRwAAIABJREFUw5ERLnWPztExLGiSa90b\\n\",\n       \"43sxd4O+eJzcty7H+7zPHp6JSB1eF/TwEBxhnHq9fgEB8rHiDbg35JZ+tLTxKkhU9O/i7UAbpy/0\\n\",\n       \"xGuLCYlA9XhDsS/s7oIm0AjvM4ZuQSfdW/KxxDE73O5zH3O+3BuLKBPfiV6W52F5zpmP0Z8TE9VB\\n\",\n       \"QvCu47lwjMPzgCibgJcYn0li6Onp6YWigXjwEQEkNEWBUA8lg4IQUjs8PExzvLu7m1CuVquVQW7r\\n\",\n       \"9bru3buXSbZmXP1+X9vb27pz546Gw6FarVZmLZMDc/36dVWr1QthBzzshYUFDQaDlM/FPHh+hoci\\n\",\n       \"8DzhJxA5r/rN2YEehvMkX6/83Ww2U8ju7OwsHVgsKSFXoHeFwrS4nyMglCpxXiQVgP44gjYanVf7\\n\",\n       \"dj6E1+bm5lSv11WpVFJ4zJvzaVxrjm47euu8GkP30JhwuydcU6iU8caQN9/znarMEzssvSyDNK0I\\n\",\n       \"T+5YPG7MkXN4BNqAUHvaAeNnPkAgXA6Dxnq0ItLUEXDG5+vLS7j4PPO/o+0gc6Br/j4PsXp+sfON\\n\",\n       \"I+YxGsNYR6NRWjOe1H9ZXpCvJ66NRudlSCgc7HPqiBCywyNGbExgjI4Qud6JuXrQir8jwu95ZP7+\\n\",\n       \"KIdjdIGxlEqljAxmfV7WXlhoD4HkQhMmdKOD5nF+h0AhMELXBYpDiRFOdQZ3uFPKLghnFr+P5/J9\\n\",\n       \"7xv9Q5FL2WTz2D/uY3HH2DC5WrzLc5TYEeK7UqTp4sNgcLiT5wDh+tjjeKCrM7JvKfYF7PlMHhbk\\n\",\n       \"mv9G2NJXDARqYUV4mAUQIVYWALT1vvh8xnExZu5zvvAcCJ8jz3vK29gQDS7PJ0PxewjXhdtoNEo7\\n\",\n       \"vuI5dQhe4HMvY+BHo2BUeV8IYdF/KXuUj9cHkqb5WhhNvkaHw2HKgeF8PM6hW1lZ0f7+vsrlstbX\\n\",\n       \"1/XFF1+kkM6NGze0sLCgarWacSQY93A41IMHD5KgJSTEmX67u7va29tTo9HI5F0xTgwFD4EeHR1p\\n\",\n       \"PB6nhHXPyUMpQ08EpzQ9p63b7aat+t7Xs7Mz1ev1FKaCh4fDoRYXF1Wr1VIY/fPPP5ekZIhxkLcr\\n\",\n       \"dniNWk+EO+AZjAHyUJzffKOJ8ypOAJXgnQ+LxWIKGeXlfDj/ezmVmGODonGe93URc+tmZmbUbDZT\\n\",\n       \"yMplKPlHfIbRPxgMLhxjkrfGfG4ZPzIc2eKyIO6o9jQMHDNoFp1rfij1IU03PtAXf4enUuSFhtxR\\n\",\n       \"jGkuHK+CU+Iyx8N4HlL0OWRMl+kcxsF9HuKLYTr4BjkWUxh6vV7iYz+Wx0uEcF+sHelz4Dt9XUe7\\n\",\n       \"USspY+xFeRnH6LwL7aKcdJuCcCONOY285+2FIVIoWzdUnACxhpNvHwWJ4FqMgeYRJ8aRaUySE4oE\\n\",\n       \"PBZUzK+JBoJ7AihLR1G8nxhMbix4P5lkzyFA4dP36A1yOrqfxcXY+C4Lwxk3eofeQHmgqfeP/0GQ\\n\",\n       \"8oxU/z7XnDbRw6C/3OPCDhpwj+/U8DnyuXHPKjaMU4SbC2Ke44nJvpvJ+x8NYUd3oJ0vWBcwzjd4\\n\",\n       
\"024Euwfr3hJ955ko99PT0xTbp98Yu9Hz9fGT5OuJ75zhJp2jCRQHxVA5OjpSs9nUnTt3UkFK6mCd\\n\",\n       \"nJxocXExIUnSuWIkgZvSEYyh0WhoOBxqZ2dH7XY7c5YdXjle/P7+vvb29iQp1bmamZlJRo/zEMaT\\n\",\n       \"F3d0/rp27VpS3l5jqt1uq9lsql6vJ/TBd2CBrrTbbTUajTR+0DAMv4ODgwwiA1oHmkd/oTXor+fm\\n\",\n       \"kP8yHo/VarUyCsNRY+QT95XL5VTmIDYcFhR3dLBYe6PR6ELNL3f0+D8mdjvq7oU+eaaXZYBuo9Eo\\n\",\n       \"c2ZgrC3F+nalTPK25zG6sQTtPe+Q53Edg9aNHnd8HBXBIHGHDkfBd/T5sUQ0N8xcJnk/oyPohm5E\\n\",\n       \"0FkPzEVE1uDXvIRq3umomOuEaHi5nnVD17/HmNgtGo3a+H7+j7miMUrgfB1lu+v1GOVhTIwz6ieu\\n\",\n       \"xYiJ63mQWX9HXo417YWVP2CSvXPA79HC9glD8XtyGYwNokFjEUVPTsomW8ekZfrg3pMjYlJWEcZJ\\n\",\n       \"dOQiImQIufg+/s9DSHzscSFijOL1OUyNgvV73HBFAMEkjtxgBDrCxjU+82J89NWFex6NXajSFzcM\\n\",\n       \"YpK+G5RR+DN2Ry4dUnceiIKNcUNDpzPj8ARgf7Y/x+nqhil9cx5xj9ZDdAhNVz4YLzMzM6lQY/TK\\n\",\n       \"xuOxBoOBzs7O6zB51ft+v5+ZTzfOR6ORWq2WisWims1mZkcbCgpjxsOCa2trWl9f1+LiopaXl3V2\\n\",\n       \"dpYqTVerVS0sLCRE5tq1a3rppZckTcN+MzMz2tvb0/HxsZaWltL4OCC53+/ryZMnevjwYaL322+/\\n\",\n       \"nalBFR2ier2e+LTb7aYQEn0hxOWhn1qtlgxEjF6eS82sYrGYUUiSdOvWrVSramVlRfV6PaNoQIY2\\n\",\n       \"NjYyPAdqzLMGg0EqjcC4QK1AIZxnkBuTySSF9nBwfF1ggGBEM1aMNmgaEWeXCePxOJVrcLq4IwTv\\n\",\n       \"erjJEVUKh8LLlG/o9/sJIfME/pOTE1WrVQ2Hw/R+nokxByLjicq0GNpiXB6Kjjt2oaevwxhNQK5y\\n\",\n       \"DQSOdcizjo6OkqEML+bJfcLv0bGJ6zo2n2saKRJuNESEjM/9+S6XcK5dvrkD6062n5JAnz0dh2cR\\n\",\n       \"Mne56O/IQ4yYQ9cJjqhFMMTnijl0fenrB+NWypao4DseVYImDoDQN/9uXnth5Q+kiyEkfrvnz/dg\\n\",\n       \"zsiEKJr4PClb5ZrnRw+M+9w74F6ER4TGvS/+Tmd4R2VoPvlu4fv1+Ey8D645EuLGGl6XMx39xkCN\\n\",\n       \"wobvIDB9YTP+iBa5IuO6LwYY2+FhaOPeZRQehFtAbdyw8bF680UQjWEXir5wfOzej4hG8nekNzQG\\n\",\n       \"xXKUw9FN9+SlaQ5JNGjpC1vg5+bmVK1W03VQExSDw+2gO9xP8U3o7R4dh+UyDmgzGAwyVbfdY+eI\\n\",\n       \"Ep+LmZkZ3bp1K5VpYO11Oh198cUXWl9fV7Va1bNnz1JtppWVlWRcg5RwX7FY1Guvvaaf/exnSQCT\\n\",\n       \"k/Xo0SNJ0le/+tWUS+F5IxzKu7e3l8lroq/Ly8t6/vy5ZmZmtLq6mikb4YUu3YuGP8mD8aNuWIvM\\n\",\n       
\"77NnzzJClu+vrq6m3XnM4Xg8Tjk+boCheEHYyFOBZ4rFYqZ6uedtTCbnRy4RSozoZ61Wu6BIL3Me\\n\",\n       \"nYd5nh8P5OEOz1V0Be1Oj4eicEbcwfS++hz4WsWYg+cdOfZ1hMPosg9ax13XvBtaRjnkSLKjfMgm\\n\",\n       \"6Bl5kdwa38rP2PNQZPriO089SoGR6norprf4uJze/O8Ii/fHHXGXYYyPH78XOnNPNHqi3Isy03WQ\\n\",\n       \"P5PfLjv9Gs+KRpQ/33PnHL3nur/P6ev2RHy+j8H122XthSFS0chwQyYaGRGS88/cI3MDgGsolMss\\n\",\n       \"TUfG/D4IDTP6pPN5XiKyNI0x+4T6IkHQRBr4oo+QL/f5Ncab56VhkLl1DsLm1z1fI2+B8S6Hvwk3\\n\",\n       \"YThFww6kxJNcHbFBUXHNi/X5Nl+nFfPl0LAbQTHp0FG9aNj4QomwMYYgfYn0duPMBa17xXmIYl6h\\n\",\n       \"Oeh9dnaWcnNAZ3guBgtz5/yNouj3+9rb28uc1VUul5NRsbi4qPX19YRm+JEwEdIej8fpSJlbt25l\\n\",\n       \"wsGdTkeTySTVpapUKplK23Nzczo4OEjHzGAIraysaDgcamlpSW+88Ybu3r2rnZ0dSefe/PLysh4+\\n\",\n       \"fKhKpaJarZbZ/j43N5feu7y8nMbeaDTU7Xb1gx/8QMPhUOvr67p582bqz82bN5MCu3btmiqVSqb8\\n\",\n       \"g5/RWavVEgKIHOE9cYMGZ/7t7e2pVDqv8s41jDlChSi+ZrOZQqU80z1ijm1xj5lroMyj0ShTTBK5\\n\",\n       \"5Xlu9NPRRWScG06eM+JhHvK+CPGQjA+fOMobHRt39Hxtcy/rGNTJEQcadYV83Z2cnCS6ttvtzLpC\\n\",\n       \"xvLj8gQnydcktGHduzFPX1xWRLnvTpWvcYwWp4HLG3eqHTmiWjm6ydMIMDw4ncLpDV9joHpfaPQ3\\n\",\n       \"Ikf+OTrMdYRHBdzw4R6X59FY8fvzDEeXt1yj72xggHf9xAJ3eJyn4GU3vrnf590NTje46bPzQDTS\\n\",\n       \"fFy/zZC6Kn9w1a7aVbtqV+2qXbWr9r9sLzS0Fy1MrMq4s0fKFr+MaBXJeDFkhPXru/ncUsaad4+F\\n\",\n       \"Z/JO/8019wzyIMdoPXMtjimiEzFBmc/xOt0z9vt5L8+gb3hkjNefGyF/p6l7sTEUhWeKNxP7Sh6A\\n\",\n       \"pMwZSR4y4tkewsBLBZaPuzdIFo5esHvakb5eAC/Gznl/pLc0RSH4gd6efE8ozufE8y/yaMp7QBAc\\n\",\n       \"lej1eims46FoPCveRSiHRjjppZdeUqVSyXhWJOeyRvCEgcMpXMj3Jen69eu6efNmJkzhOULkwTBe\\n\",\n       \"Em5B0zj3rFarZSrw1+t1HR8f6/3339fKykqmyvrp6aleeeUVvfrqq/rpT3+aEsrX19dTUjioJTRu\\n\",\n       \"NBpqt9uan5/Xl7/8ZZ2enmpjY0Nra2uSlBCg1157Le3cgzaLi4vqdDoJ5XPecPQABNDlFeGmcrmc\\n\",\n       \"kCRpyuuERzz3hjybUqmkfr+vdrudyQGs1+uZs+Kcx2dnZzM5bI4cUhYBOefP9NBODPnH6tlRniwu\\n\",\n       \"LmZQEmjBmgeduAyRKhQKF8JUhEVBUTxFww9e9yOC2HnJYe2O9Li8uCzMeHZ2lo7f8fF7qkRML3F9\\n\",\n       
\"4PzGeAm5OTrlOUGOULMu6JufFiBNiy27bPD3gVDFHDDkl48/6hzPJ3ZU2fWTy1/e7UfqRP0BH/A+\\n\",\n       \"L6kCiuwomfNU3i5RR6cI27JuXe7FKBR9dR3k9IsoMo359p3hUX/HcKd//tvaCzGkXAG7wvBtkXlG\\n\",\n       \"hw8o1p3Ky6WJho3HfTFQYtyWZ+T97c/knR7HxjDx6uvOUHmwob8nT5j7fQiqGD/mcw/deR8RCm4U\\n\",\n       \"RIXtfXVI1KuUx37mJekjOKLQIAEWmsd4P1vDI4zqELL3w+mdZwzzfU9I90RK+MHDdD4WX4get6cP\\n\",\n       \"hUIhkxjs8xgTWn0c8L7fh/Kl9IGPE2UymUxS2QFCPAgDkjzZgsx9nU4n5UB1u900F5VKJRk7lLIg\\n\",\n       \"1Lezs6OtrS3Nzs6mvCT6WqvV1Gw2Ux6UH1o7mZzn0NTrdTWbzZTzIZ0fZFyr1TQajVJYkGTrfr+v\\n\",\n       \"2dnZVAX83r17qWwA32F+Dw8Pk5Ld2dnRysqKvvGNb2h9fV2dTicl/DL+jY0NLS0tpRAY4yD5vVqt\\n\",\n       \"plAS97EumDs/gYCE9fX1dZVKJT158iTxzeHhYeLjw8PDzGG/3EdOFzvXpKmCYZ16uQvCugh+D6/B\\n\",\n       \"U3y33++nd87Pz6ter2tlZeVCjhC8xU4zDzXhsHlaAv2Efq6Y3chmzbrz4XKYH56T91w2NvBMQtWe\\n\",\n       \"OxlDNxiMLpfcyKHOFErf0zx8PUtT5Zw3BvgZ2rmT6degnctVrnsNKfriOWDQimuj0ShzDJYb+Hlh\\n\",\n       \"VsbhBpXTl/64DIrGhNPVZZTfE8OlkpLjxVqNG5D8XV7r0MPZlFZhDl0H+Pj9ndSWuywM53TBgKY/\\n\",\n       \"seyP0yLqmbjDMLYXhkh5zPj/q7myzLO+EUTRg+Q91FpxQ8aVKj/+mS+2SGAEQhxLRKpcCdEvR8jc\\n\",\n       \"WHJG94WIhc+z4sKPnqN7tQhRvgeNpGxNDZ7lSt8XdqzrhLCNxzz4vRiT8X1sAZemxrBvUV9cXNR4\\n\",\n       \"PE04ZsyMwenGc6NR7c0NV0c882LifM8VgiuhiAa4oI2Cxb1b7vW5cyOLuk7j8Vi7u7sqlUopURtj\\n\",\n       \"AMHmieGDwUAHBwfp0N8opClTcOPGjZRvQf89Afrg4CA9c35+Xjs7O+k4lnq9rhs3bqQ+j0Yj7e3t\\n\",\n       \"XdgNOj8/r+XlZa2srKQE7T/8wz9M4zs+Pk41oA4ODvT8+XNJ50jO4uKiCoWChsOhNjY29Hu/93uS\\n\",\n       \"pO9973tqt9u6du2ahsNh5nwvvOdGo5F2dvX7/eTRbm5upnPtfDerpHSe1sLCQjrj77IdqD7vXuD0\\n\",\n       \"yZMnOjw8TP3hfMLDw0M1Go1UM4n7MZiQRfQT55E+sjtKOs/lYr2g4P0IFlCw+fl5ra2tZRTt3Nyc\\n\",\n       \"arVaoj1j8YR5jDTPS5mbm9PJyUnGoJEuJuPCC87PvhvXZbQrIMpS+C4ul3kcFcM4MPSYb2/uCHpD\\n\",\n       \"Hrrij3KY8XteqctoFLAraMbFNZf7nj9E3/gNneJubUnp0HGe48f8QE8vVePNUTn+93f7/Ph3XGZF\\n\",\n       \"o8MjG1EuwhvIz7ixBfTQnQ/0rkeUXLdE5Mh36kd0z/vNNXJxvZ/8dqdIyj9jNfIFfO15he7MX9Ze\\n\",\n       
\"aGgvfuYejn8nWolu9HiiX9x9Fqtrx/c508SwEf+zeOIi8V0SUci4ceJojRtYeUiYe3A0klD9fp9g\\n\",\n       \"Whw7cDhhHEeu4vj43xPRoSmL0Q1H0AeMGEdzHN1hfrjGOykG6coNZYPiYzFSI8u/616bG9f0L9Iz\\n\",\n       \"/h0RxzyliRDzOYyhAPckfeG5F5YHZ/PjZ5AVi0VtbGyoXq+ncJ10zsP9fl+tVkuDweCCl1gsFtNZ\\n\",\n       \"dGdnZ6l6tiS9+uqrmpubS4U6MV4ODw/VbrdTeNEF040bN/T2229rcXFR9XpdhUIhKbB2u629vT3d\\n\",\n       \"u3dPMzMzevLkScaQmp+f19LSUkooBgVot9taXFzUycmJKpWKBoNBotvi4mIKG/V6Pc3MzOi1115L\\n\",\n       \"8/yjH/0onc3nybiFQiGdNUl4cG5uLhmgJO/Do76+KSIqKSFWNEcoozc/Go10eHiYDJ7hcJjmaW1t\\n\",\n       \"Taenp8mwdXSMit+sj1hKxStzgyZJ00Oi5+fnk2HrmwngVWpRYaguLi6mUC4omSssxsX8uAHmRReL\\n\",\n       \"xelZjhwuGxPNY/Ix97oc9MR4D3kyj1xHtsGL/js6KnlIiitHd4b9e6BJ/l3vT146B/c5Yu46iHWI\\n\",\n       \"go7Iuctrn3uMEeRuRD1w1kAaY9qG089lE2kgOErufGLwwOcxhMV4oJPrC3fc3ZH2sDN0YcygcU7z\\n\",\n       \"+L6oe6VsTauo/91gxCn0cLUbUW5MsksTfvJNEcgK31R1WUpPXnshhpTHd71zEVZ1uD2iQ/GIBYdz\\n\",\n       \"4+4NRwcgKu93RoxKGGPJr7mFy8J3YeJVkn18LnjyGAMhlGcscY+jJdALlMLDA5KSp8nnvMNRN++X\\n\",\n       \"/x29HxcMbtRFFEy6KIAQxHEO3DBCkTvaFz0p/+19dqHCWHlm5C3vI8aw84H3Jc5PHCd8E2H3+Ky4\\n\",\n       \"RVk6V5CTySQhPZQHoB0fH6dilpJSjg+KMdJkNBql42Lu3LmTnnl0dKQnT56o1Wpl6NHtdtXr9VSv\\n\",\n       \"1/Xmm29qc3Mz4wnjkXW7XT18+DDVOPp/2DuTH0mzq+w/MWRmzBE51VzuqnKVwe1u2thmYQMWyF6w\\n\",\n       \"QLDA8sIbNkjwB/AHIGGJHYIVYo+EkNhhscFCCATGlsCiy91tuqvb1V1DjpEZGWMOMXyL+H43nvdU\\n\",\n       \"lBeWvq+8yCuVsiLeeN/33nPPPcNzzj33F37hF/SlL31Jq6urqYAm/dzY2FC5XNaHH36oK1euZDzF\\n\",\n       \"er2uvb091Wq1lMuEAULJBsoZuKL5xV/8RQ2HQ73//vuaTCYJ9ZHm/L2xsZEOBC4UClpfX08hQnYO\\n\",\n       \"zWYzPXv2LB1hIWVDRqVSSY1GI9EfIwNDxY341dXVZPhtbGxof38/7UwsFova2tpSpVLR+fl5JtQm\\n\",\n       \"LarST6dTHR0dpbngeCcKi+7v72fyxwjB1ev1tBsSfsMoZbedoyteniKirsViMdHfETCcB+jnaJXn\\n\",\n       \"s7COfe3BX4TvY30m593xeJxBnTxUFHP5YlglIkTugPv7YijUZQHVvCPiAQ2WKU1kho8d2YaBBdoW\\n\",\n       \"ZQ9oJAaGyxdKYVxcXKT8O+dRaOe7mV3Xca8bjDhI6AqXQZ6H6boMvkA+x5AwNHR9Rx9cF2PEuXEO\\n\",\n       
\"ksp6ipEP5i8ada4TPUrjBhyOvId83Rn2fvJcvvPwpfMA7/u5N6QkvaBMXwZdSgs0x8NbXFuWFxTD\\n\",\n       \"Nq6o47OlbK0mKZt8iRHjCjMmuNEQbjAyyak8i78RDaI/XgkY5nQhuAy1c0TGFw0CCBrBxMvQomXh\\n\",\n       \"MfdoPDfBjR1JGW9gGWLj/WEcFG1bZtz4/EkLxYYR6d4V73PjOyJA0Dl6phGJ8rg9itT77v3x+YyL\\n\",\n       \"yz2xOD62+t+5cyezHf/s7EzPnz9Xr9dTuVxWo9FIyAN5UYRoff6Y48FgoGKxqPv376dnPnnyRLu7\\n\",\n       \"uwn5Ojw8TPkH9+/f1+uvv65Go6HpdKqTk5NkLD158kQffvihnjx5onv37unLX/6y7t+/L2muEHZ2\\n\",\n       \"dlI19OPjY925c0fSPBH90aNHunXrVqrUDa0wtnK5nI6Pj7W1taVOpyNpjsbVarVkeFDQU5qfbVev\\n\",\n       \"1/Xaa6/pv//7v1UsLorZlkol3bp1K/ULw5FxsKan06n6/b46nU4mFA0iihHCfHLGIHPp/IwSGgwG\\n\",\n       \"qtfr+pVf+RU9e/ZM0hzlY47q9XpGEDuqM51OVS6X0zgwgAjDFgqFlBs2nU5VrVbVbreTse2hcnLq\\n\",\n       \"6H9Md5hOF8U13Rlgw4I7DTRQlUKhkEFAkQcYLKxDV2oeuvH6Y6Di1FviPVyjeXV/xkCfXQb4HPr2\\n\",\n       \"f5fR/P5ljuMymYDMjmkOPk8YQq7YY0jd++IoDzLQHWE3XjC24BXQQ2rJeVjbK5s7qkdfHWFx9Gw2\\n\",\n       \"W+T9scnHmzujGKM+Du+HX3OH1HUbMgqa+LqIqGHUifH+6NDmcrkMiABN6SM868aiz7ejuq4nYuN7\\n\",\n       \"f05sl+UPLttlu2yX7bJdtst22X7G9kpDe26dT6fT5MViSTu64DAcno+UtYpBlRwhoUWPDQvUkRKP\\n\",\n       \"j3v/PLbtFq9XwKbl8/lMbpJDiR5KdCg6wot+HIB7HctCTo5ERTTId6q4FwQ9HEHyQ0OlRTJvDF2B\\n\",\n       \"xkHrZYX3JL2QC+G7uE5PT3VycpLCMHhMjjbGEgJOD0cOPXbv3owjf+7x8AxHtvg9tOdIlkIheyYi\\n\",\n       \"4Q7CmiQO0xeeC/+ANEjz3KObN2+muT08PEzb/GezmdbX17W1tZWB3aE7OUXc60m3FxcXKaTV6XT0\\n\",\n       \"v//7v5IW4Ya9vT3dunVLb731lq5cuSJJCTV9++239d5772k4HGaS+3/9139dv/u7v5sKJHJky7vv\\n\",\n       \"vqu1tbWU/P2lL31Jn/nMZyRJH3zwge7fv6/BYJBy4Ci6Se5Xr9fT1atX1W63E5JTrVa1vr6eksW7\\n\",\n       \"3W5CZCaTeXXyUqmUPGmOO8nn8zo8PNTW1lYKAcQzEx0NXl9fT7xEQiweP+ERflsqlZKH7+fVgUav\\n\",\n       \"r6+r0+mo1+slnqZvIAVnZ2cpfBbLlziy6Ghwp9PJhCdB1yqVik5PT9O5gzTWCYnzhC6LxWLKr5rN\\n\",\n       \"ZpkxeOh7OBxmKqmDiudyucy8QLOzs7OE8pGYzpg8ZSGivJ6rw3rx/CfQW0JunpRNLmbc3eboDQWC\\n\",\n       \"vTmCwO5EaY6Gx9MTXMYgg+AHR6T8N45Yx7AUuYWMD7lPOJmGjGJtR1SeMReL86K73hfkvW+coTnS\\n\",\n       
\"wljon/fVc9Ocn3h3TIXxXe4uh52OoF2MExnMrj7WKrRhXXiIT8rmspHL5PobuRw3Ifih79DHedTR\\n\",\n       \"SqcFctfDiVyLvLCsvdLQnpQ95sMhQBSrtFAKhUIhwb8x+ZvJdXjOE3sjZOehwhhLdQibWi0uaIHE\\n\",\n       \"HZZlDBgXCFRPOEXYAxM7zIjh4VtsvU8ueDx8xBhiXSPot2wbqn+m327YOYQew6YYuzCeL9IIIUfD\\n\",\n       \"xnPRvB6SH67L2B36J0fA4WJpsUXWF2dcGNGY8t/E3/FMfgvP+UKM74jXeMfR0ZGuXr2q119/XdI8\\n\",\n       \"FDUYDPTkyROdnp5qdXU1JQejCKCNCylCT4RaXGGdnZ1pY2NDa2trev78earhJM1zj+7fv69f/dVf\\n\",\n       \"1XQ6rzSPkfX06VOdnJykPJpcLqevfvWrkqTPf/7zms1mevr0qd5++22dnZ0l46XZbOrs7EwfffSR\\n\",\n       \"fu/3fk/NZlP/8A//IEn6whe+oE6nk0JOBwcHqaZTo9HQ0dFRRily1h6J0iTck0/BtW63m8Jh7O6T\\n\",\n       \"pN3d3RROKxaL6eBinjscDjUYDFIJAEnJsNnc3FQ+n1en00nV2TG6qMWFYvN16gqnXq9nQnT0OeZn\\n\",\n       \"8F25XM7MG8fHjMfjlEt1cXGhg4ODZEg1Gg3NZrPEL+7soEhWVlaSjIqGg4fEWB+Mp1KpJFlCvpKf\\n\",\n       \"6chZeYyH44I8/OJj9PUXw0HIUd9F7E7WaDRK8oAdrNxHf2NYhbIVvBfjnfuiLkCeUHrCFbfLNpdR\\n\",\n       \"0Wnh2fl8PmOc4TRLC8cHg4ISIR4SpJGDhkPmNMHo8qRv+IR3oadimskyYzOmNcTf+mefP5dr5IBJ\\n\",\n       \"2dQFaEZf/HeM4fT0NPGx76Jz+eppLM67MYyMUQXPY6BBE59znBjo4ZXj+T3Nx+o0cgPvZe2VGFJu\\n\",\n       \"KPhxEK7Io3XqExCRqlxucVaZx+F5xstivW5tR0TDUQxPgKVv0cCSlLGafXcE1zxnyN/HgliWz8PC\\n\",\n       \"5bu4CLgvjg/r2xnSx+zjcXQOertgdAZ3uvNbN7b4PcIyGoLQ5PT0NOXJ+JlgESX0RYMQdsOV/0cv\\n\",\n       \"0xEpWkwe5J+PNyKf0ViibkmM58Ob5OJ87nOfyxSlfP/991UsFlWv15Mz4CgnPFAozI/RQNGurq6m\\n\",\n       \"BNyoTDY3N1UozI8pYVfXa6+9JmmOgF1cXOjjjz/Ww4cPdXp6qs3NTUlzhGhlZUX7+/t688039cYb\\n\",\n       \"byQj4/Hjx/rud7+r6XSqO/83lwuaPn36VLdu3dI3v/lN7e/v6x//8R/1jW98Q9LcsGm323rw4IGe\\n\",\n       \"PXuWMU6fPn2aFMZsNlO3202Gy7Vr1zQajdTpdFSr1bS5uZnuY+MGByxTiwq6oPAprTAej9ORLRsb\\n\",\n       \"G1pZWdHBwUE6JNllTqPRyORCuWEH7yJzlhlCppsiAAAgAElEQVRHp6enaWesND+SZjgcZoxCFJ8n\\n\",\n       \"N5NT6CU/JKXiorncoihhq9VKNAPZJM9NWmz9hj94JgdWk7wPcg3dcIBAIDCUY86Rrx/y8OBfrnkC\\n\",\n       \"NDxMf1nzvtNvMsluGuA4Igzo2Sx7cDnyHX7yYr8gaM4T0DTumHaEyHWL6wuXc1EOkd8GasEuRu7D\\n\",\n       
\"kJhOF6VFuObRhOiUg+6DfjvKRV4VPOtOcsw34tnQG7kG/VyGOV19jMhT14duaKFH/H7oTdFc1z88\\n\",\n       \"E7nMvPuGERBR74e3QqGQduBGlM/H7GgVQAY8Fw0kDDDXwe4sxSgI8xeNTm+vxJDyRefGEwsOIsRE\\n\",\n       \"SAScewMOm7KwIQDKtVAoZGpbSItznJZNoDMU/YzJ3whYfidlk7dBFqLydsjVkRVHY3yxwfguDJyJ\\n\",\n       \"QKVisjj9w1pflnTpXpsLTf+Nf/YxuDHjY0LYMfaYfI1wyOVyGU8c796NMd7HYojoEYLU++EGpRtW\\n\",\n       \"bvDCY7lc7oWKu8wdxlUut9ghyli8rAY0Oj091WAw0LVr13Tjxg0NBgO9/fbbie4bGxtpmz50cDQW\\n\",\n       \"bw5aojDhXe8Hc12r1fTJJ5/o/Pxct2/fVrVaTdd2dnb08OFDTSYTXb9+XaVSKSnojz/+WDdu3NDX\\n\",\n       \"vvY1ra2t6dGjR/rP//zPNI5r166p1WolhUGI7jd+4zf0rW99S3//93+v7373u/qTP/kT7e7uSpL+\\n\",\n       \"53/+R7/1W7+ld955J6Ed+/v7iRd7vV5ao71eL5UpYIciCE+3281URB+NRnr27JmeP3+eCoLCa5xD\\n\",\n       \"RkI1nq80RwQ3Nzd1dHSUwtwo2lqtpp2dHTUajYSaOW+AjDC/8CL1jGazWapF5UphMBikAqNukGO0\\n\",\n       \"+G4q5p4xsJbX1tZSiI534Siys0uaG1nVajUpCUcBMJ4o0YCShm7w/draWjJEJaVaZMhi53NQEyqF\\n\",\n       \"QyuvTZbL5ZJR5sVqmRsfj69TKuXDb27YQB8pu8MQtAqZ6SHBqAAdxcUQc+XqSKEjPNHpXCYPJWXk\\n\",\n       \"KCEsGmvOFX5Ev/nnCJCXoaBPzGGlUkl99eZhQeYQw8+NTcaGDHLH0ekWoy3QOqblsBbhAV8zfM89\\n\",\n       \"vhOU+fRd6ZGu3Of6D7p4H/xengk/eckU6OHz7WOHHu5c857YP2+vLEcKoiwLq0WDgYFAvBgy4nvy\\n\",\n       \"VuKxF75N0mE/mD8iHR4qwrr1PANX2tGi90XonoiP0WP9/kynhU+ab8V2rwkjjlwW6Me1WD/HFx6M\\n\",\n       \"6WHRZdtdY3/IEfJxOZ1h3rgwHJGjyKDniTCH0cPA21kG3XrozpWTt2VehKN9IJnRMOL50+ni0M+4\\n\",\n       \"IJ3X1tbWdOvWLa2ururRo0fa39/PFGVkQeN1RzqzgKMB6AYkixtl9eTJExUKBX3uc59ToVDQ3t6e\\n\",\n       \"nj59mvrIrrZcLqf9/f2kMH/t135Nt2/f1s7Ojr73ve9pOp2m/CnCUCjMbrerb33rW5Kk3/7t39af\\n\",\n       \"/dmf6d///d/1V3/1V/r444/1N3/zN5KkP/iDP9CjR480HA61sbGhZ8+epRAV5R7y+bwGg4FarVai\\n\",\n       \"wf7+fkIk9/f3dXR0lJn7Xq+XQngHBwfpGjTAYMD5cAMUob++vq5+v58MtGazqZOTkwza4sYLeVCg\\n\",\n       \"Eq6EqUXDuue+2Wym27dvq9frqdvtpp190jxnZjAYaDweJ2QBg4/SBuwgdKeH/8OH7kB2u12trKyo\\n\",\n       \"1WqpXC6nqunch+HjRUzhaVc+VL1n7B5W8xpHKPLodNJXeHOZg0LlfMK4HrpHTniBV+aYtYK+cHnq\\n\",\n       
\"aFihUMgYWYwTGrgSZh25AcUzI52Q/VK2sCS0c+cUA8KNQP4SMuU3NJ7hzqwbM/QD3oi7z0A6vV/M\\n\",\n       \"HX1mbnxsjrAs251MP9zIis4ehjbXYmjS6Yh+ZSy+0xP5zbuiDOc7l+PIRw8lRpmOzPR8Yzcsya1y\\n\",\n       \"nec6OxpuPifL2isL7TEZLGoEYYxRSwvhxtlfJD1KWebHy3KPyT0c95JcIUe0xicFyxwlxLlKwPoR\\n\",\n       \"DnYl6KhaDAM6WgVKgzfoC4p+OJ2cjv7u+NcZwuF13unM6McOxMR1F2DRA4ihMxSPw9vMDQYL8+9j\\n\",\n       \"8WNS/LkonmXeAM9yGNdDdD5OFxhOP/rM4naaYaS6t+ILdDweJ8V9/fp1HRwc6OHDhyoUCtrY2Ejo\\n\",\n       \"EzTF0Ed5wFN4UCAAzvvkRfFdo9FI5QRarZZu3ryp4+Njvf/++8rlckmx8//pdKrd3V1tbGyksF+x\\n\",\n       \"WNQ///M/q9fraWNjI+M4tFqtlM8wnU71R3/0R3rw4IEk6Y//+I/1ox/9SH/xF3+hZ8+e6a//+q/1\\n\",\n       \"+7//+5KkDz/8UO+9956+8pWv6O2339bW1lYydp4/f65Go5EEHLlA0tzIGg6H6YgVVxgkom5sbGhj\\n\",\n       \"Y0P5fD55+L1eT+vr67p7926iLUYxvEgpA451oQDqeDzW3bt3M0fgYNhgTLhj4or24uIi5SvhKdPX\\n\",\n       \"fr+fkC/QHknp2e5A+FrL5/NJbrkAZ7s9hokbMl6Elf67AYKhByLtRgNhSWlxdBPjm06nKfQYz7v0\\n\",\n       \"dRBReBQhChX6QEOQiIh+gJaRguGKD345OztTv9/PGEu+5jGaoTdH/0A3KZuT69EEeIXmBkFUnBhC\\n\",\n       \"UbG7XIFHvHAqSCn9cb3DPw+1SXO5F+WNn20XoyLQmbF6eM7LSsBv9N91QnRSlzn76F9Pcse4ch51\\n\",\n       \"3c24HLSQFuUu6K+/zw1PnAgaBqvrX+83ujXqY5A5N6Kcdq7vI2DhdFrWLssfXLbLdtku22W7bJft\\n\",\n       \"sv2M7ZUgUliCbp3iNYFouAWI93F+fp62N2Kd492BuPhZbngwy5IJHdqN+TVYrnjPHtfmXVjYETIF\\n\",\n       \"PWMc7iFLCyTK0SqPOwOn+xjc+vZ3OgoTz9SCbn4gaYSjl4WzeG5EBX2M3tyL4V63+B2OxrMA/o9J\\n\",\n       \"ooRo4tEx7lG4Z+zXPCwW++noEvSO6JXTlOeBTrjXzfyTwEyS8s7Ojvb29jLFMz1/rFCY747ifuaE\\n\",\n       \"/uAF4ZVznyNlIBR+9t2jR490cnKiarWaQSXW19dTmOn+/fuq1WqpCvfTp0/VaDQySe940OThNBoN\\n\",\n       \"feMb31CpVNKf/umfSpqXOPj2t7+t4XCoP//zP9c3v/nNRLfvfe97+trXvqZ33nlH5+fnunXrVkKA\\n\",\n       \"HKE8Pz/X48ePE6pGAvx4PE67AkHqKEwLD5OHxNi73a5+8pOfaHt7OyGZ9Ofk5CQV+Mzl5gnc5GV5\\n\",\n       \"flIul1Oj0UgIzebmpkajUUKcCU1ISrk8vuPHk61Z05RXIJ+L9z958iSFOZFfoJTINZdX0gKVIiwM\\n\",\n       \"D29tbaVjacitol9eSBNe8iTtQqHwwi5AGmOH7r6OkDOeb0VzWRFRVc9dk5RBeZERyE5kIHRzOeso\\n\",\n       
\"PsiCI3ye+wId8/l8Ji3DUzG8f3x22kfaxFQNv854GQPP9Dyf09PTlNrA3PNe0FRkgqPkUnYjE9f5\\n\",\n       \"LSFhxgWa4/98bqABOiGmpiwLZblczOfzmXQLDkFnvTqaya5UEGKQbqc7esxRLr4HQZUWu279uiNv\\n\",\n       \"fg863PnCx4T+8eiVz3HcrPDT8qOkV2RIufJlkMPhME0GDBljtyROrq2tZeKzENNL1fu7mHQ3LNxQ\\n\",\n       \"wGjysFWcGD6fnp6mEIxPFs1/G5nbf+Phqphs579FUHreSIz3swBdSEh6IW/AjUdpkdPEAqL5+COM\\n\",\n       \"Da15tzMqxgl0cUOCcfpvvDHv8fwvp+WyPDeO2BgOh5n8khj/j2OAVvCiHx5LGJbvfacUeRMbGxtq\\n\",\n       \"NBop2XowGKjRaKT+YxQ6HTmjjbmPOXGE+1yIorhIZL97927amfbs2TOVy+VkvG1tbaWSCvzmzTff\\n\",\n       \"1Onpqd55553Ulxs3bqR+oqD5PBgMtL29rTfffFOdTkff+c539MEHH0iS/vAP/1ArKyv69re/rS98\\n\",\n       \"4Qu6deuW/u7v/k6S9JWvfEW7u7t6/PixfvM3f1NHR0fp3D/4YDwe691339XBwUHKycKIWFlZUbfb\\n\",\n       \"TYnqtEqlkpKcY2i+VCqp2+2q1+vp+vXrKdfIebHb7SaDgOT3arWaDEmMNuQC4eVer6dKpaJarZaZ\\n\",\n       \"07W1NY1GIx0fH2cMm36/r9lsfuTH2dmZnjx5koR0uVzW9va2Pv/5z2tnZ0fD4TDlJRGy4kxJ8p2k\\n\",\n       \"xWn1s9ksGZB+Dh/H/GCYMIbJZL5V3r9jnITJ4H3Pg2Ke2CEa1y5yK4aHoI3vMvPrxWJRlUolY9Dx\\n\",\n       \"bIxjHEZPZu/3+8mwi3k+KF3yc8iTYhzxnLooT1z2I08wSJCTHpr3vFf0k+8ixxDwv9yHwR5DRO4I\\n\",\n       \"e4gLesKXXuaBFhV+TMXAeVwWhowy3Pvq+UfRAPH8KE/LYcee3xv1Hc/w0C0lLxyYcJ2PY4EO5zmk\\n\",\n       \"LPBstyegacy387HDO17Z3Q3ISNNlhmVsr2zXnpStv+ELJSZ5xTONJGWYOE4awrRer6d8D1pMLoOB\\n\",\n       \"x+NxJmeFiUBxxnipnx0U4/7RG+A7FDv3eZ4AAsHf7+9zD8SvwdC5XC4lwtKcGZbllNEiquQGD/31\\n\",\n       \"EhAxjyIKBt7jVj39Jt7vi8YRMBdk9CUaQd5XDKx4FIYbmC4EoEukQ8yLw+tC2UiLwohXrlxRo9HQ\\n\",\n       \"wcFBJqeBfBMEsOemMEaMJTeynD88l4VxdLtd3b17V9euXdN7772XQXNASTBGHj16JGm+Vl5//XXt\\n\",\n       \"7u5qd3c3UyQQ5AoBPxqNUiL2nTt39ODBA/X7ff3Hf/yHnj17pt/5nd9J1/7yL/9S169f15tvvqnv\\n\",\n       \"fOc7+tznPpf6//DhQ73xxhs6OTnRzs5O8iA5Eqjdbms2mydls0ZJRMaAmE6nqSAnhTEvLi7UarW0\\n\",\n       \"traWntntdjUej1Wr1dTv99Vut9VutxMK5Nv0e72eNjc3kzGC0ba9va3pdJpKL0jz3X7Xrl1L/cbQ\\n\",\n       \"ZC7I4dra2tLe3l7GUD84OEhn+l25ciXN4fvvv68PPvhADx480L1793R2dpYMnM3NTe3u7qbSDhgT\\n\",\n       
\"9NNRCd8Qsrq6mg5HrlQqGfQon8+ng4pBqd2o4zid4+NjnZ2dZersYEBFNNqT3vk/yJb0Yg0eR5ZY\\n\",\n       \"Ry4naOS2cY8jwOTMYpThREtzxxsjwfMNpey5psj1iLogR1zWggxx3RVxjAa4HnG6k4PFvHFckiPL\\n\",\n       \"nm9K8j2ffe6X0Z3vkNluUDl9mUvmE1SOunTuTLreoUXD09HSaEiR2+g6ItbfQh56Aj/vH4/HydCO\\n\",\n       \"eg9ZHDdNoH+hgzvc3D+dTjMlStBryN9icVH6g+8w6N0Ai1GpZe2Vlj+I4RWuRUXnqAGLzWvwuAUu\\n\",\n       \"LWpTsf2XZ7higzFhRkdPllm3fOdIFCG4yMA8fxk0SovhQv/snoB7ThgmnlQak/d8sfEXYzPC1u41\\n\",\n       \"LoO8uceNHlAh3w0TaUMfPawHfV1A0JgHr3zuwg2FT58iDX1B+wJ2YeOhVH7rnlNMvuSze+ydTidV\\n\",\n       \"6X769GnGaEeQubfufY0GvNPGw7IYDyAW0+lUn/3sZ1UsFvWjH/1I0+lUzWYzzTe0nkwmevz4cUIz\\n\",\n       \"3nrrLb333nvq9/spMdlDJqur8wN4B4OBhsNhMoi2t7fVbrf16NEj7e3t6Ytf/KLeeOMNSdLf/u3f\\n\",\n       \"am1tTV//+tf1gx/8QJubmwlZ+qd/+ifdu3dPzWZTDx8+TMpaUjIcqMFFQry0qMItKRUXZQx4rCAk\\n\",\n       \"HhIC3cNgpjwA1eLxkCn7wMHP0hyRGg6HaSdlt9tN7yyXyzo+PtbVq1d1eHiok5OTNEaQO0K+t2/f\\n\",\n       \"TkbteDw/Y293d1ez2UytViuhg/fv31er1dInn3yijz76SJ/61KdSSLjb7WpzczOz08/TFigLUijM\\n\",\n       \"a10x/nK5rFarlRwZ5328fJwLr7JObS2cQd/peHp6mupcRQTEDauIRjlf41h6oV5kJE4U5+5J86Kj\\n\",\n       \"7BJlDL7rmjA6ss+RWl/7ntYA8oVeYFMB64n+8zvWBfzkCJOPzZvLa5ddpJa4MTgcDtVsNl9A/3Gc\\n\",\n       \"6JPTOK5VeIwxOMpLf6IRQbiU58R5ZAwxnOYOfkykx9gEZaQ/k8n8ZAJQ0KjPXb5GwGFlZXHWbTSy\\n\",\n       \"3CDEWGbeWPfuJMf3+Q5RjHtHv1zPoNfdLuEZUWbH9soqm0dL0hX9spikI0Su6JzY0ROKjOKK0xEC\\n\",\n       \"D7FIi91Cjib5IvVn0C9+xyQsM46WhQ5p7hksM3r4PsKV/r2jeuQp+S4VaOL3+dg9bOIonOcQeUgs\\n\",\n       \"Mq1/Zp6iUcg/wmfQ2+c0jh20TVKGmemLe2a+gOGxlwl+DKzoeUbPmRDVzZs3Va1W9cEHH6R3RQ+K\\n\",\n       \"RUeffEwYvRFVBfGDn3q9XurPF7/4Rc1mM73zzjva2trKIBSOmh0dHanVaqVK6j/84Q/V7XaT8mW3\\n\",\n       \"mzQvHYASPT4+1oMHDxKS0+/39fjxY3U6HX3qU5/SV7/6Vf3Lv/yLpHmpgq9+9aupQvqDBw/03e9+\\n\",\n       \"V9I8J+vq1av66KOPMiUBpDk6BGKCcQA6JM2VKYLSqxSPx2Ntb2+nWkJuEJDDw/b9brer0WiUlDCo\\n\",\n       \"FoaJO0r9fl+NRkPD4VCtViv1T1oYS4TN2FUoKRlm7NDL5Ra7JPFor1+/rk6nk3LmJOnq1asqFou6\\n\",\n       
\"evWqjo+PtbOzk/qC3Nrc3NT29rZ2dnYydANtpWYUeV4gV9DLHQjqDFG5HVROUqasBQg265ADo5FV\\n\",\n       \"jjCwnjwC4IYM9HHD3u9HniBzuIahkM/nU/6YI8cg3yC4zD8GCv33d6EDYkRAenF3lhtuoGPIYnda\\n\",\n       \"I8LMe7gPQ593sLbhFxwC1xcun5BD/szZbH6qA31y586d+dhXnsc15sr5zcfE/GPwujykP+gk3xEX\\n\",\n       \"5eR4PFan01G1Ws2EmT186cgZzwIJch3kBZZdn8Zn4rC6QQSvcJ/vlnd+nkwmyThznQa/RNQxpqN4\\n\",\n       \"e2XlDyCCw5yxPIDnO0Cs6XSa8iUkJRgVD8ULcEmLMvFskY6Wsof3YBp+70iHM4ujDTG2CqNGQ9DD\\n\",\n       \"hT7JjNUNymhgRu/DFzPCC7pwzfMFYojN+wSNovfinmVMAoyfPV/Ncx+cBj5u5t29S5KcHZWSlHJS\\n\",\n       \"MAhdkEe0zY3VZWGz+B1CPdLbjUEQA2leGuD999/P5DxEg9iFAzShRZ6Iwh0Pazwe60tf+pKkeS7M\\n\",\n       \"kydPtLW1lcJ4sTLwYDBQtVrVgwcP9MMf/lCSUuhqNpsl4UZ+DTRrt9u6d++eSqVS2jp+cnKi4XCo\\n\",\n       \"crmsr3/96/rggw/03nvvSZrXn5pOp3r27Jm+/vWv6/vf/34aAxXNz8/Pk3HBPHEGHWMcDocJrSHJ\\n\",\n       \"mi3VIEXS4igXkuFdkeEtr66uamtrS5VKRfv7+5nt7lTNxmjHWPTjSjDGIk2RB2tra8kIqdfrySBY\\n\",\n       \"XV1NOVf09fDwUO12W81mU81mM4VLyaeqVqup6CjPLJfLKhQKOjk5Ub1e12c+85lkZBJyrNfrqYK5\\n\",\n       \"G2Ae/nWnzfO7CP8hW6lnlc/Pa3p5xXzyqtzQcCSBNQOy5A6WK0hksstMR/ZdTrhMZDweappOp+lY\\n\",\n       \"FyrHS9lcUfghIite/HaZE8tadz3jjpAbRJ5CAY2cNm4cehgOQ5CNJg4SOAoS9QblVTwVJKJVjq7x\\n\",\n       \"GZryvRsb3lx/xc0DzI8bL9CYvkqLvDsMOXdefW6YR57reh4dwxp1MIUwOmkTPrf8defZxxYNVmlx\\n\",\n       \"QoqHdmkOHCwDCCL9Yrssf3DZLttlu2yX7bJdtsv2M7ZXVtl8GWrjiINbgVTFBaJ3OJaQAYnP7pnF\\n\",\n       \"EJtbqR4Lx2r30Foul8skn9PcYvZ3MC7QMjwNt3KxxuO4sY6BlB1B8uQ3fxcNZATvI3p6NJ4f7/dd\\n\",\n       \"HQ638z7y0TxEGUNmEblb5mW6hxI9JLy22WyWPE6Hos/PzzUYDDIesT8rzoc394aW3edQcPxMlW4K\\n\",\n       \"WT558iSTs+C8Bv1JwPV58PHQV/f2oif8xS9+MXOcy9bWVvJw3aMDEi+Xy7p586Z+/OMfp/fdvHlT\\n\",\n       \"hUIhJXrX6/V0rVKp6OTkRJubm6rVahoOh2lMnU5HpVJJv/zLv6xOp6Pvfe97mfypf/3Xf9WXv/xl\\n\",\n       \"ffLJJzo4OEjI2f7+fgoXMW+OVnCeGuE5Txrm+Bj6QLL3ZDLRcDhULpdLOUfOh6xfkJ5isZjQHBLq\\n\",\n       \"z87O0tE5eN4cIAxd2BXGc0GqSqWS1tbW0lxMJpPMkS6UmJDmSepbW1sqlUpqt9taXV3NHD4MsrCy\\n\",\n       
\"sqJyuZzkFzvTrl+/rna7rcPDQ92+fVvSHAE9OTlJfEGIS1qUqWAHs6+vUqmUDnFutVoajUYJjRuN\\n\",\n       \"Rjo4OEi7pOE/SWnrOghSDPGQq+K5avGsPZAFR4DjX/f2nffz+XwqeAwP+fu9ErWXGYj5jvAFqFQM\\n\",\n       \"7aND6AutUChk5sZD9/TH5aj3k7BgRHeQ2WxUqFarS1F7T0+QssnmHh3gt9AceevrLSLvUfbGtJoY\\n\",\n       \"xiPfzHW0R2u43xP8QdCYT57tRTddVvp9Ple+KSqmMUQd5FEBD+F6mDtGB3yXYNSPjjJ6WNT56mXt\\n\",\n       \"lZU/gDBO3Bh2g6h+ICeJyIR+/MDbmD8VQ3A82/sQ4VZvL1O8LNxlzEainYcguD/C5MsYnmd6EqFD\\n\",\n       \"zX4tQpH0j3e4Eo+hPR87zO+5O24oOUPD7AguLzHgRumyBUVfCYH63DBXlUolIzDJn+K5KEDnGQwf\\n\",\n       \"FrP3xQ0pp7/f799Np/NkShKU7927pydPnqSx+1lvMf7uysWhblrcAeSKhVyKN998U+fn53r33Xcl\\n\",\n       \"zUNG5+fnScE5bYC2b926pQ8//FDn5+cphwbnA+PDw2nklty/f18ffPCB1tfXdXR0lPp55coV1Wo1\\n\",\n       \"/du//ZsqlYo+/elPS5IePnyou3fvajKZ6Mc//nHaoSdJ7XZb5XJZs9niuAhXeoTa6DMHVhNGG41G\\n\",\n       \"GgwGunPnTqakwMXFhba3t9OOLYTb4eGhisViynUqFAqZ0gmNRiPRmWNYaLPZ/EBfHB/CB8wda2w0\\n\",\n       \"GqlarWaMEDe0kT3S/NBmwqmNRkOHh4eJNzwHaDqdqlKppDVVrVZTWPXevXvqdDqJptVqVdevX8/0\\n\",\n       \"x3NvCoVC4lMMTuaekLi0yH2Cv90YdMeA61F2Od3cGXTjhdCay0xf36w/N5r8uW4MeZ4Kn70MgrQI\\n\",\n       \"7eFM+Dp0w4O+esiXtYeydHmATGNMbgTCq/CwyxqMqWVhTYwCjC36wkYAdxBdTyH7kc2u5H1Ti4+b\\n\",\n       \"v8ucTil7ogXzj0GELEWu+fzTb88V4n2UF4I2uVwuc1KAO9yeG+ty2R1JrvkufU89ic6pAwhuC6DX\\n\",\n       \"Yx6w/3Vn1sPB6AGnWdyJmKHrS6/8P2wwckQsmCQG4szIonHjQXpx6340ijze7sYFBoDXIHFjhAXF\\n\",\n       \"PW61x91+rhCZnKi0Yx5PFCbu/TjK4wvK+0FjXCgwT5zkOh4L+QL+DsYRc4ig9TLjww1UN+IcTfO8\\n\",\n       \"Kf76HPr4PTmQPjoiR/9Qeo4IuZBywcH7lnktywS58wwIzb179/T8+fOk6La2thKdoreKkPBFHnPf\\n\",\n       \"aFEAcqbc66+/rkKhoB/84AdJebvh6YUqec7du3d1eHio8XismzdvpmeSE4ji8x1fBwcHeuONN1Ji\\n\",\n       \"OFvppbmCvnLlih4+fKher6e33noroS6TyUQ3btzQw4cP9dprr+n09DQZYHiWKysrKadtmSeHseve\\n\",\n       \"HjzMQcAIPpQgOUTNZjMhWSAXrLVicX4+niMN1J2aTCapxAJ8ShK285GkVDcKw2o0GqX/s7vMnTfy\\n\",\n       \"wVqtlnq9ng4PD7WysqKNjY00XycnJyoW5/WyUO6u2JvNZupLvV5PtCF513cAxvXN9m2vBcZ2f0/M\\n\",\n       
\"dqXA4dmerM77kBUYf4668Bd55LuzoKPLRZrnvsRcoLgjzWU0vH92dqbBYJCS0eGbXC6X+uD9cBSK\\n\",\n       \"Z7BmPB+Wa8wvypq/GO8+BpwEd8ygE0ZCPAbGnWaQWcbqDpfzpBs60MONE9/p7JuJGIcbi06faIDF\\n\",\n       \"XXkuN7z5OpXmfOhFfHluRMdcLtMvT4BnLPCeG98+p8yH85rTBqfBDVmPPjEGR8B8PP77ZXrWc+CW\\n\",\n       \"tVeGSEVvgAXkW0edGV3hRyKjLCCs38dnRzWkxena7k24QeCWa9z26My3bBeVlD1lm7G4Be5j94Xp\\n\",\n       \"wp37CFPAPP5MlDfM78wArdzDdCUMYhQRK198LlwYFwvXUQenB3PrNM3n52UrKpVK8phdgHk9Ga+H\\n\",\n       \"xftZIA7/updTrVYzULPvPooo309DLkejkVZWVnT//n3t7u7q8PAwGTXQHW/YFx+Cwo1Fn2P/nRtY\\n\",\n       \"0jw5eHt7W+vr6/qv//ovbW5upi3ppVLphSR7DJvbt2/r9PRU+/v7KREaYYvRCXKyurqaDjS+cuVK\\n\",\n       \"QoCuX7+uZ8+epZDYm2++qbOzMx0fH+v69etqNpv6/ve/n64dHByoXq+rUCgkowEa0uKacUMfGmAA\\n\",\n       \"sOWeMFi/33/hvEBq7TjcXiqVUlFK+Mnf4d5sqVTKOFigVawrL2EC33O/8wbPZoeRG4icrYhjcXJy\\n\",\n       \"ksbhSeoxZOGOXj6fz5SpINyJYRQNF+hHKBKaEkpl5yFrh2u9Xi8ZZigx+uBJ5DH04fNHX52/HVXH\\n\",\n       \"OZWyhy+jbL1+IPxBzS760+l00tgpdBpDcvTLjRU3eHh3RDPcEXeZ7YiYh39Ys/TXHUhkgRcVdsMN\\n\",\n       \"A4vnxffR2OggLeQo7/czYx3hczr47js3FrlO8wR8N5iQrchtvyfSIqLxrgscIEGHOGrkKB86Cx0T\\n\",\n       \"3+lRKnfM/LcYcjRkL7rRjTPGEcONcWefyzN3NF7WXokh5bUcfCHCrExU9PidgSEwQoJ7MVKkrDED\\n\",\n       \"8Vwo+kKIMK4rSW/AxChmRzdAYpYJWpjbJ9PREzfc/L0oAIQ0O7ekbL4Mz6E5I/g1/42UPc7BLXdn\\n\",\n       \"VheqvgjZeh9RN1AFjDXeG70NLHyUi0PKTgMgfebPCwhGQy1CvPSDvkM3N8jds5nNZvr0pz+tw8ND\\n\",\n       \"7e/vp5pN3BcNYV9cLuigqxfCg095N7h5EYgAACAASURBVAJ1Npvp/v37+vDDDzNFJ6UFbC5lC/FJ\\n\",\n       \"87DYxx9/nN5BCQBJaafQxsaGisWi2u12osPGxobeeecdffrTn9bR0ZGePn2awnflclnvvvuu6vW6\\n\",\n       \"bt26lXK0GNdgMFChUNDx8XEmhOEoXS6XyxTQY35Resw1fW61WklJ+nE1w+FQa2trCVFgVx80A1Fp\\n\",\n       \"Npsql8sZR4bPGOZxe/x0Ok15Y27w0r/JZFHh28Pl5Hyw9v1929vbOjk5SXPLfZubmzo6Okr3E7Zg\\n\",\n       \"DuEVvHx4Gjq1Wq1kFDgPu7Ls9XqZHLBGo6GjoyO12+2UIyYplT5ANnqxSj+6hHxTmq8XZFUMdUwm\\n\",\n       \"k4yj5buAMZDimnGEl3XiyIs7Tr57ixAlhqTLU69L5DmW3hwVZ57YNey5tl4MmvfyrmhE0hgn1wjf\\n\",\n       
\"ebqCpFTSwvWZI4AvMxzQhS4j/fnRWYPHoY3f4wYvOtL1pNPL3+3hO0/vIBfR14zfE1EgRyrd8Y7o\\n\",\n       \"k/OTG2mML6YR4GC7bIeOrmNdr7GmPELi9/209koMKZhw2YQxaIQZjQlwr0b66VYiSssZMoYb6EdM\\n\",\n       \"jsQTjedROVEjyuXP82R2vnfDMYYgY/NFkM8vtoL6AsLLZcGzCBi709SVF/1B0CBsXIBH9MuFoguh\\n\",\n       \"aGTF0KnDq87sLgQiIuCookOxCGoXfGw2YDyM2yvfYlBFqNi3NNNviizu7e2lc9pi0iH/j4YEig6e\\n\",\n       \"c6FRqVQS7fC8UQpvvfWWTk5OdHJyoq2trWQk8B4EXavV0sXFhe7cuZPmFRqBQvgRIlQ7n0zmSdv3\\n\",\n       \"7t2TNA/tEQra2dlJuTjSPNfn7OxMt2/f1mw20+HhoX7pl35J0hwhwHhAWbrgo5SAI5LSXGHMZrOE\\n\",\n       \"rk2n0xRKrNVqWltb0+7ubtrm7gUpye0CpeQaBoejDh72nc1m6XiZwWCgs7OzDGKzurqaSkdg4DCP\\n\",\n       \"lUolHaPi4XI3yN3JcF5cX19Xv99PeUjSHGVbX1/XYDBIxSF97YNMQBuUHc/wyt/OfygFEDaMTLz7\\n\",\n       \"arWqvb09HR0dJb7Y2NhIhUORCY7EYwCyNjyMzrPpI8Yt9zImHB+XffSZe11JTSaTdMTOZLKowo1S\\n\",\n       \"IxyGUc08YQT6OqOvhHXdEeU+D295or4jI8gLl3/+O6cJfXUEKeonrjs/zWaLcDbPdic56hVH1Tz6\\n\",\n       \"AO94OG1Z/7zvL4tGuA70cjRuzDnYIS3yWD0czDUHMBiDR1uYf3+u98V1lDd4BX0RK9vzzmhfeFgx\\n\",\n       \"jpfv/cgb6B2d+9guyx9ctst22S7bZbtsl+2y/YztlVU2jxCee0ae8ChlD98FtoyICxatW+7ErTmC\\n\",\n       \"w98LUuHImG//pzmUzX1StsK2X4vwqo8Ja53+R2iWPvlz/C/hBqzpGJaI4/PcIIctoQ395HvfXovn\\n\",\n       \"EMNi9NNDavwmhrYcoSKUhyflYUlCL+4BuKfg+R2eqOyJxqB9HtbFc4vevHs/oBnb29vpHQcHB0u3\\n\",\n       \"8ft5WXHuCWeBmhGmdJrwDEp13Lp1S9Icefjoo4/UarWSt+xhT1AuttQTbvzJT36icrmcEIBGo5HJ\\n\",\n       \"k9ja2tJgMND+/r7u3r2b+vrJJ5/ozp07yft68OBB4rPHjx9re3tbtVpNH374oT71qU+lZ3Y6nZSz\\n\",\n       \"E7fcxzCuo4NsCyfk6/Sr1+vq9XrK5/MpCdoriROWAXXx9QBKRYmDZWdNeuVveN5zU0DWfF2Anjlq\\n\",\n       \"wTvdo/UxwrP5/GILP3PY7XZTkjo08PCG78bz0hkXFxfp3fBWDLGAHvmZYp4bsrW1pclkcXjv0dFR\\n\",\n       \"Juxdq9WSbGR8oC9OTx+foz2OcEgvljOAN1xOsP59nnwnLs9knhwVhjfOz89TtCDKZvpWqVQyPMl9\\n\",\n       \"y/oIvaUsEsh1wssuXz3E5DlMzoPwu8u0uGvP86J4vufUkccHDVyuQEvnN8aFXPcQGjohFsWkr85f\\n\",\n       \"Thufb/76fVHWRxntNHfd47zq+sYjHvyW98XcMH+mp0wwdp8n15mOZMYcLQ8zgnzH/ERvr8SQ8h1d\\n\",\n       
\"cRFIC+jSt8y74OE7aWHoOKzq0CawJzF6h04hJIvCDRImF8HihoQrN4dG3QCKCtQhzBgW8LFFmFrK\\n\",\n       \"VgYmLwQ6OoTpBp/XQpEWoQeHpRmz00GaQ/OEA3kXjXF435ctcIzYmOdGDoX30YUJRkSEyLnmAgD6\\n\",\n       \"R6bntx66cKPWoenJZJIOuJWkZ8+eZcJGkUfjllzfYcUYWegxGZd+wVMksR8dHWXqKPEbxkjezunp\\n\",\n       \"qba2ttKRNZyLRm0iwlTSPAx4dHSUduzV63X95Cc/kTRPtkbJrKysqFaraXd3N/HU5uamRqORzs7O\\n\",\n       \"tLW1lZQXOSrQLcLt0MHHyBwyFxgSJNNj5PgxICh96j35jjP+3+v10vEZGC4YFdJi7RNK8zwpDHKc\\n\",\n       \"LMoy0Nh55ZtcmFsfpysMd+JwNLiv0Wik0BUHX8d1yvMuLi4y9Z2oiB0ru0fnLobYp9Oper2eisWi\\n\",\n       \"tre3085TjrdhPUAH5gLF48+D9h5Cmk6nGUOD8UYFSH+QNR6Sgaaz2SwdsoyM452e9xjrA/EvGgcx\\n\",\n       \"vQF6QmdoGEO2LwurOU0Zq8tL1gP3xfvduWYt8EyqydN884anH/h46KOHTz3XCefAU1RiiobLQ5dR\\n\",\n       \"UT/Q+J6wrBsd5Hl5LhLXyAEkdcT1NHzkaQIxYR3+WOYIw3PeF9dlnvLB+2JOqxvfDqp4SkVMi1nW\\n\",\n       \"XllBTppPJpPOAD2BDSJwDWF7cnKScitoy4jlRI8txpZ9YlDIEUFzgUJjDJ5UySJmXPw/9sMXZrSw\\n\",\n       \"fSzkSkmLAzLph48Pj8obAkpa1KGBrhg+Uva0dvdupGws2WlHczo5muRekRt+cVwkkMaF43zCczyP\\n\",\n       \"iMKEPj6nSUycxfjGsGMbvyfLgti4gRPzASIqw/MRLNzruV6j0Ug3b95M4z8+Pk5b9xmrJyMjgNip\\n\",\n       \"trOzk/oKz7I13Hdjce3q1avqdrsZgyiXy6Xjb4bDYRp/o9HQ6uqqRqORms1mJvkXI8gNKDd0oZuj\\n\",\n       \"eIzBd8JGpKNUKiX+j3mTXD87O0tFKOlLsTgvicF9nuCNAHdHweeH30JrLy3hAtw9feiKQeG86Dtk\\n\",\n       \"URaeqLy2tpbZYUfyMwYt8gyeoy8Y46PRKPGC96VQKGROrpfmMhEniPIJbBhot9tJRkUv3BW2O45c\\n\",\n       \"QyYPBoNk4CPfQP5YN45u0Ffe58aBK074wx0txo+hRJ6bNF9vw+Ew7fzkWezYov+e78J8npycpHw+\\n\",\n       \"rvmuO+acvjAud64iKs+8R+TCjVMvRcH7iDQgk2Jz3cD7MKIwUKKRwDhfZtS50euyHP70fDTGH9He\\n\",\n       \"SBvWPQgaY+Qd0MF1lDs/HulgvplLd2Kcv3wOpMWOc1+Xy5x+tyfoCzTxCBfXvIbdsvZKQ3tu6cXd\\n\",\n       \"BK6EmAg8TJ/g4XCYMXQgspQtyhhRHkcOHE7m/dy/bCJZ8PxzFIb+LjOy/H73ylwASNmKt7wbpnHF\\n\",\n       \"xbsi4sL30WB1SN3hVA8n0D83pnxuXDHRNz+VO9IuevHU6IlG1mg0SoUKfRzMk/fXkTNpAcV7UiJC\\n\",\n       \"3oWW0wIkBGHMOyPKiHJgLugTgtHnyeuXOALDc+k7ioqE6/X19QzShUEiLUI6w+EwheMcquaZ0+k0\\n\",\n       
\"HXIrzROca7WaxuN5HZ79/f2MwphMJnr8+LE++9nPZvh0ZWVFpVJJ+/v7L6CR7D6KiCT8Np1Ok8Bx\\n\",\n       \"A9SdDmjjYV0PZTnPjsdjVSqVZMBGfmJdIDh9nUIznCw3bHgnOykdJSA04LtI4e+oXFxuuFCPaxEn\\n\",\n       \"hT4Ui8VUGoECq6urqzo4OFCz2cwkW/NsP/8PmoIsxdpc1WpV+/v7ms1mKfSLAVIqlXRwcJAcRJeN\\n\",\n       \"jrS6rJAWu+TYVYkh62iOe/XR4KbP/jze5aGfaNjRT1BZaMM7MKCiQeTP9rICIJw44258L5PRjuzT\\n\",\n       \"R+R/lK9uGMT5j6g/9zha5rqEviOjXCYh1/ykj9hcTi4zxDwFwOcdvneeg97IIubI5wnDqlgsvuDU\\n\",\n       \"evgbow8a8xzGG41vB1aWNfiA++KcuFHtCJ3rWH7rPOm8T799TLG9EkMKgRhDGFL2AE4fjCskFoc0\\n\",\n       \"F+7Ao0yGGzYQE4+I5kwTIUf3ZD1u6/dy3YVDRE5cKLjRxjtdCHOPf/b3oTSXTaaH0HyxuTCEYXkH\\n\",\n       \"IRpXdm4EITToq3sN0NIFJ9d8oU0mLx5QyTEi5MPQN1AB3hm9OgS3P8/r9rA4oA87thxxcwMEmuLR\\n\",\n       \"eR4MNMUT8flw/nOkwo0anzdoQ60jDH88aUnp2AjWA8KB/pRKpRSe6/V66RBlpwUoE6hTPp9Xt9vV\\n\",\n       \"1taWLi4u1Ol0tL6+LmluEHS73bQGOaJFkprNptrtdjoepd1uL0UjoSW0ccdhNptljDDPYcPo9XxE\\n\",\n       \"+NBzqWgYp74e+X46XRwPE50dr6lVr9cz3j4CFT731AGQL5TCbDbLbJ3HSKTPUUn4Goc28F+5XE5H\\n\",\n       \"wmDYOILdbDY1Ho/1/Pnz1O9ms5mEPmFIaEwlefoCrw2HQ127dk29Xk/dble5XC6NgWNjjo+P03yx\\n\",\n       \"tqE1CBlHN0lKBpTvzIryxRVWlFWsaTcOeLcrSoyjOE8R5ZTmiHS/31+6sxraeuiGayhXdpnSQMBc\\n\",\n       \"jsXwliOnNJANL8Pj6wNDD55wOvGswWCQdglDT/gqAgHIK48wuG5DntJ/50WniecK0Z+IskeECF3j\\n\",\n       \"78aRWVtbU6PRSOgzzyT1wlMdoDe6gntcfsbUgEg315sxBSSO26/x1yM40UFzZ8hPqnhZe6XlD9wC\\n\",\n       \"dQPDPXIpm38SjY6Li4t0ijmEcwKw6FHEoBggMRgcLohiMqgbGQgaFJ+HYaKFLr2YyBe3k0rZHLDo\\n\",\n       \"DbrFjoewzDr3cBTPj6iYK2j3KmAoNzSWbTf1FvNHeD7IAO+KaN3p6akODw9TWBIa8Xs8dubZw1fQ\\n\",\n       \"2o1oQjoYEs5PnmPnAhMDCQ/Lx+rJxHHufeMCvOiekAtdFB/0phwBZQcoBkl/oNtoNMoYhBge165d\\n\",\n       \"02w2S2evMccIae8j/cEgoCo4vxkOhyoWi7px40ZSmih2aLK5uZnyetyD9NwiP1sMYe/oqKPNGJ7c\\n\",\n       \"B51d2KOkl9UXQzjG0DpJyswp/MZ6psq6o1nUawLddofHc29AxRkHiBzKxBFY5AKKOOZVephvOp2m\\n\",\n       \"Y2BKpVIqJkvtK96HcsV4435pgVqDTLlRXy6Xtbe3p83NTe3t7cnb6upqOoeRzQMuSz106coryiqQ\\n\",\n       
\"HEdrQY4ieuZORlRS0cF1g4z3ufNGMdp2u61Op6PRaJRxlnlmRCFidIB5jAoSJA0Z5nlH7oiC2sOn\\n\",\n       \"HtaK6A9/oYPzDHoE4xyk0hGTZeF0nsfv3HiLzq0DFjTPeVtWlw955AYRNOE7z0X2DUNuQCN7+W10\\n\",\n       \"MNwg8ve5PmOtRUTT5yKGbplfv+bhd3Sj0wlegHfd+I4GZmyX5Q8u22W7bJftsl22y3bZfsb2ShAp\\n\",\n       \"oLqYFOiwZ0ycc2/ALWyPdQNpxriztAjnYFX6MSigU25p8zy8ZYfwHRWhD9LCK/XYrKMnPt6Xhe88\\n\",\n       \"74P3ee7Fsti8o27QiERVYFCq9Triwnex7IN/F0OGeD+OaMUwnPctzkOlUklQNqEm3zkEZO0hSsYH\\n\",\n       \"yuR5GdDXUR3eTZ7U2dmZhsNhJpQYQ8Ee3iB0Bz0dPYE+8EnMI8Lrgda+fd3REZAp+u5ek3t0QN6z\\n\",\n       \"2Uz9fj+TX8Q6ICzmPMmOO0J8jUYjobEgIH4mW0Td+v2+Op1OJpwK3aiu7/0GFQNtdh6KIXPewTVH\\n\",\n       \"Tz3PjblwuoNAkZjPmuD9vr7IJSH/jrmCVnjY+Xw+0cY3bzgCxzyBHHGUkCevwksgadAND7nX66X+\\n\",\n       \"MffT6TTNE/PMbk4qvoPSeYmDtbW1TO7WaDRKOXf1ej3t2rxy5Yr29/cTXxwdHalaraYE/1jIkDE5\\n\",\n       \"MsG1QmFxYgHzDG3gQw8X+TVQIa+iDd8gN6EV1wj1n52d6eTkRN1uN4NwIxvIvfT7QHaWIfggHPTR\\n\",\n       \"5bcjzS6/YvgohsRIFvfIivO3o3sxRER/+v1+Zh0iYx0xlV48XHmZfqS/jvwxF/7OmM/lSI2fsIB8\\n\",\n       \"9f44Uh378LLQv+809L4yZteLjnw6AuXoZgzR5fP5FGb06IS/j40oZ2dnaR3CB45EMU/I5J+7HCl2\\n\",\n       \"RTj8D5GA9GLNjphQ5wbSeDxOYQhnWIgJ7OuGDIaVQ83OmD6hcSK4z5lLWoRFPIna4UHizDHPy0N+\\n\",\n       \"McxIc1iS5pPrDMvv3eDhfX64I4I/JgE6LWL+QTR4vbmx5cbdsntJBJaUhB6LwJU39HRFGUMm0NaF\\n\",\n       \"CwvD4V1PsPRF5H2Hbg4Ve/gC3vAwQhy750qg+Mrlctqqj6B2qNpp6sJlOp2ms+16vV4KD/JuD6O6\\n\",\n       \"QpIWtXbIR/PwFRXDNzc39cknn6QxEuI7OjrK5IpAN54Td8eMx/OSBqxtDxl4cjrzi7MzGAzS2J3v\\n\",\n       \"GHsUkl4ugDAh68XvHQ6HyWiF990gi+vS83I8h46cM8bPETC5XC45R9KiFIXzCvzlxr7ngklzWcjB\\n\",\n       \"yBhCXKvVaiqVSur3+5pOpyqXy2l+STR22UHb39/X7du3VSqV1Ol0UvkFv95sNtVoNDLGCXzLnLsc\\n\",\n       \"JGyKzImbMlxBY/jF/CJXok5/7keW+mYh/7uyspL4s1CYb9bo9/tpJxxzyCYCz6+j8Q4PQ3sC+7L0\\n\",\n       \"DJ9fl9sxrcHlko+d9Q6Pej4wziCGD/PkMnFZfk50QDx86e9nrG68uEHnSfp+v4c16asfneMJ3m5Q\\n\",\n       \"xYaco4+e7sDYPOzpciJuGvO5cWOXZzM2aIWD5SkG7vy4HGJM0MrlKDSJ4Ie3V2JIocRcAPjCcqXE\\n\",\n       
\"dwhmGCjG7skV8YXo9VhgYveSPLk5xmqjZezM4p5LNPBQ7iwYjwfDoHGBR2TImbtQKGTQBhecEWWL\\n\",\n       \"dHRh7n2jP57LED2u6I25UetM5YsnIjf+/5gzMR6Pk5JqNBov0MSfj2CGyb2kQMxXiomUHrN3Lxih\\n\",\n       \"iVKFvi7oUAq+gLkHgcIz/YgdjPZogJPn4Tk4Ths3lnnncDhUpVLJCBrnzdPT04yn6M/0hHgXTKA1\\n\",\n       \"pVJJw+FQx8fHyfu8du1aQntA3OBFDghGeHnuTqPRSOPb2trS6elpymdhNya08fPkoDfjzuVyGeMU\\n\",\n       \"wYgA5j6SwumHtMhDk+aKn0N7QV9pjImSEX5+5enpaabGVswJmUwmOj4+1vb2djr4mb6en58n1MsN\\n\",\n       \"8F6vp62trYSOuhJqNBo6ODjQzZs3VSqVtLe3lxAplECtVkuGgj+fPmJQ+zN3dnZ0+/ZttVotPXv2\\n\",\n       \"LJXFIKe00+mkg8PdwHb5OpksjmthxyLXWAtuSDuP5nK5jOxxxZ/LLXarYZiQ67gM+QGJdh7GMWKz\\n\",\n       \"kdP0/PxcvV5PtVotzYM7Zi4jPB/RDazYD4wol2M05A7PcWfax+y6gPfxGz67ocx6eFmeGs8GdHAe\\n\",\n       \"d2cg5ke5g02+IM375jIN2sOT7vQgn13murEGzd1A94bcch5GLjA2jClpsS5wkt2I9dyuuDPPHQD4\\n\",\n       \"hbEjW9BNTjdHFV/WXokhBdzui1jKLjbpxeJhy7wF32be6/XSdmLu9+fjvUrZM+PwFPmtJ8H5byRl\\n\",\n       \"mAjjLFqvTKQzjPfXGZlxw6jRK5eUUY6erOh082fxPh+jCwnGiJce4WM3fvy6tEBsXDi4R8f4HYWg\\n\",\n       \"Xxg18ZrvluFdKAyEK7zi97KLCMM1himZI/ruStiRB7xFSal+kgtXD28u+ywtFjfC2Xd9eX8wpNyL\\n\",\n       \"wnhgTXgjVI1g87nGcMHoiXyI8wE/+UkB9Gd/f1/5fD6dxTadTtXpdFKZinx+Efaijg/PGQwGmZDs\\n\",\n       \"eDxORh9JwJJSwjSGhq81+AgB5vSl0KYbbnjsfOcVwz1MxWYCykCAwsGf8AAK2qt7S0rhLxwZeIH/\\n\",\n       \"9/v9zDolJE3BXFcm5XI5hfXG47Hq9Xqm/tfFxYX29/d17do1NZvNTKjJlbSjqNT6OT8/13A4VK1W\\n\",\n       \"S2gNu5ifPn2qzc1NXblyJaFdhEdxAiPywmdkH/10z34Z8gHtHAHhd+74uBNKgy4oTXhqMBhkduau\\n\",\n       \"rKxk5IKHoXw9TafTtCmDNephb5Q78tbRDHiS8bjyhDej/JayDkFE4pAvrHNf9yh7N2y4z51p5zV3\\n\",\n       \"fn23myMtbkR55MLrWPk8Mj/u5FGfjHc6os9ccj9GOQ4j13AcXEY50oTxFOfQHWD6HA1JlyFRJmLc\\n\",\n       \"xbIyjrTGqICnvTgfuC59WXslhhQMMxgMEoTpuyJgDI/PuwXqCgNisBiBIKXF4nfUwKvmRqQlQpxu\\n\",\n       \"pXpD+HPNJ1FaCHnfbcFCwRiKhddYaNE4cW8zekku3OJ16BghXxo0xnJ3A9YZk2fRyFECJXMER1LK\\n\",\n       \"VWARuOGG8sbQ8mrmzIUjTtJCQYGgOPODqJF75EaGG4DLcgG83EVUJih8Dzf4/PpY3IDl/cViMeXl\\n\",\n       
\"+OJzIeOK1qv+giJh2OBpOr85n1E2IvKb51xFZXJxcZGOXGEOaaPRSMfHx2o2myoUCpmQEDQpFovJ\\n\",\n       \"iPKDgKfTacZo4RphzZhTJCmDTETlPZvNUj5iPp9P6BvPxMEZjUaqVquq1+upsGg+n0+5Qzha7rFX\\n\",\n       \"KpXMTjjeSSXx1dVV9Xq9NFfQhtw+cqRAwAqFQkJtQEC4Ro5Po9FIxU85HgjlheLf3NxMMsrzNzwc\\n\",\n       \"CG2Y37W1NR0cHCR61+v1NEftdlvXr1/PhJPgW5QhNEWZsT593cN7rBPWoaMwUS77kTVuPDgaKy1q\\n\",\n       \"ziGrmRfWkhum/g7kIs/3UDJyCEXqSEt0LB2hYh3jlHm/WW8Yd97cKfOCs47cwa/uYMfwFY2ixC5H\\n\",\n       \"o4L3yIwjbdDKIxfRCIhzTR8wrqFhNDIdJPC1DHrtckxaGEE8MxpSGDvIRm/IWHSGOzQ8x1Ej7sFe\\n\",\n       \"QCZ6mBSZHp0Cj4TRF19vP9eGlFeQZSAxP0palA7Awo6TCMNRh4OQAswCsSaTSYLiyTvBYHOhwXMj\\n\",\n       \"yhO/xyCKFr1Pki/gyWSSOVfLkR9fNBGxkbQU+YpxbM/9wFBwL93H4guc+L8LFH+/zwXv8JwWZ0bC\\n\",\n       \"PREG5zufLw+9IjThgYgigYa4VwqcDHJxdnaWKcCJoPctzNLCAMNbKpfLmQr0bly6kYERFI+qoJ94\\n\",\n       \"VuQSDYfDTMKml6/wZGQMXk8kduOfMYD2uIcFXbnHnQCEKciNC0OQXHdemCcM1uFwmEnu99IUuVxO\\n\",\n       \"1Wo10QLjk7wcSSmchLFHWNCdCJQ568jzGMfjcTLMHAWF7ryfJHxqdUU+HwwGqlaraS69rg31tZin\\n\",\n       \"wWCQ7u12u5pOp+kcRtBBDHdXQvAO/6bTxekLhUIhhT7X1tb07Nmz1JdGo5Hqqk2nUx0fHyderNfr\\n\",\n       \"GgwGGaSVvnmdp1KppHq9roODg0TTZrOpWq2mfr+v/f39NAaUGjSM6IkbTM77HiJGFruTOpvNEtrI\\n\",\n       \"d/7X0xBcLrjXHzcMQAfqRbEBgrkZjUaZUKSPYzqd16wrlUpJefoYWa+OgtCgj4egoR3z6WuG/jP3\\n\",\n       \"MV8PZ8XTSOhL/C7qC+gR3+UOEnPuERs3MFyfYOCSY+dGLX3H2IuOOfzip2JwzR1WB0E8jSIa29A0\\n\",\n       \"psHEecBJiZEap7uXICI1xulEX7Ah3N5wurtB7AjVshCpt8vyB5ftsl22y3bZLttlu2w/Y3sliBRe\\n\",\n       \"gO+0kBZbX6MV7TlKeNNuffN9hDB9VwLWJ3A5Xjx5Jx6iW5bI7qiWtIBVydHxPvgYPHTI/REWdm/T\\n\",\n       \"UTfGF70/moevsLCdLuSL4dXE4nt+5EcMYcatqIwxJgo7hOyeDP32Pjv0/DKY1HMWoBdhHWjl7weV\\n\",\n       \"wrPzHVKgMSRV837fnUSI0mFwD3sug8Qdqvex0Rc8t5jo72E0dng6rUHQ/Nlra2tqtVqZ3V2OrNBf\\n\",\n       \"iksyRiqT5/P5dOQJYyyVSqpUKol3IuLIswmpgOKyY3AyWRSq5BqI8MXFRTo30cP2nn8Sd5uCUJAH\\n\",\n       \"E+kqLZLAHcGQlBKxybPjPfSPhGkP33ruWK1Wy6QY1Ov1FAqqVCoajUYv8D6J7s6LJycnKhaLarVa\\n\",\n       
\"6na7mTAmsoudkq+99lpCViiWWiqVdHh4qPX19SQTDw4OVKlU0rsIG0pKhx8jc6rVahr706dP05mJ\\n\",\n       \"XOeZV69e1eHhYUItnPehvW+2iOkOPmcgbMyHh1P8OdzvKEoMeSMvHCUgrM/8np6epkKmg8Egg675\\n\",\n       \"/DoSS5jJ+x+jGTTPxQK18DF4WDLK4xjCYwzwPv1zPcP1ZZEPR82ibAaBQd5BTx8Lc+n5SdwbkWrX\\n\",\n       \"ey77PFWCcSM3HXXztBX0jaNuUVbGHFPfUOP84ii0I6TINdYDaKjzgYeLvaFjY7qOpGQLsK6ivv25\\n\",\n       \"C+3FGKa0UK6eMOehplxufvwF8LFDtcCsCGlCCoPBIC0OzwPwaw5B0iIM7TkULlxgSodlXan7ex36\\n\",\n       \"dYXuY2fBLEtW5Lozt+9UWQan03dnZn7v35Mo7MYbv/N58P4wB77Io0J0iNeVNXT1kGAUSjGHjFwY\\n\",\n       \"jkfhORGG9dIAHlJgx5TPdb1eTweeevgPvvB4O2N3g8WFF+En8gToj2+f59muVBgHApYkbXgYY9F3\\n\",\n       \"FXpI2JPXi8XFdl5Ch14/zBUkpwF4/See6YKVXCBo67TxHD12XnIYrNdzgx7wjdPUQxf8DpkQwyO9\\n\",\n       \"Xi/ljjGOXC6XzhR05c14PXXAeYlQKPLEE3AJA8Y8P9Z+rVZLfENeEnPELqjRaJSZe+7v9/spnCfN\\n\",\n       \"DaLz8/NUCoQdgZJ0eHiojY2N5Ei4bMARPDw8VLVaVaVSSeHJ9fV1dbvd9Pn4+Djxz61bt5TL5dTv\\n\",\n       \"91MI10PM/GWNunyGfzG+oowmpyY6oi7XPOeM5o6X50Exp+R8EY6XlGrDjUajlLvmTjnOEmvbZY3L\\n\",\n       \"QDd6uMfTLFwuYZygxD30487ksrFhfPn1ZcaXG2D8nv5CNzdeMWzcifbwFDLM5wSZ4RuopGwSdwQz\\n\",\n       \"4H1fFxHMcMPG86DizmKah9wY7wx+SQAAIABJREFUl++q5xr3MgZSBFZWVpKj4yUpXH/5fe7AR31J\\n\",\n       \"fx3kcN6PvBDbKzOkpOwuGBYRdYGcmX3LJcmgL1P0nkPiyjx6JeTHIDTiuUu+8GPekxtlnrPjkxhr\\n\",\n       \"QblBgYG1LHkQL3qZVx4T5bjm8XBHvlBQMLAnrFIXxlENV7RxzJ634h6sN+aM/i+bFxceNM+PirRh\\n\",\n       \"HnK5nLrdbsqVcrp4bg3KC0VG3hxJq4zPhbcLOEfpEFKMHYTPk03dqPYYPAaK57RgZCGwobcbsvTT\\n\",\n       \"BRi5IPCUe4l8T/I0yoT6TJ7PQp9Zc8tyLVZXVzM5Z9PpNClk3wQAT4DkrKysJHQqn8+r2WxmjkXB\\n\",\n       \"AHFBzrW4y8Z3zuXzi91lvta5JikZyZPJJPEGyi2XyyWEzNFhp3m1Wk1jA7mMNIE3ML4qlYqGw2EG\\n\",\n       \"kWN++I0rfWQF6KDTEsOsXq/ryZMnqVDt5uZmMtbK5XI6MkZa5N00m03t7e1ljGj4T1IyPNvttiTp\\n\",\n       \"k08+SUgaRnJ0zNit5aUvkAls5kGmeF06X7P+GUcJ5MmVqSu9WEqmUJjvMPRdYr6ZwlE/5Dnvg2/g\\n\",\n       \"gShvPJ8tlirwxPeYa0S+oiNLEUlytMbf65EKrsX6bm4AeH+9MU+uB6Nzwl+MLXe+3DF3HUxzpMkN\\n\",\n       
\"jhixcGQPGTMajTJG2nQ6zaDvEamkkVwfEU7klveF/FpQc0kZxI17ItDBvLpucoAk6qWov6KRnKHZ\\n\",\n       \"S6/8f2oMxFGZWI5AUkIV+v1+ggJpKESHh6WFx4FH4krIt2iyaNwzQcm4B8YznUncM/EE4ig0HKJ0\\n\",\n       \"b4FrrvSk7AKMi8sXhe9MdEaAWfw7XwiumDAWfFEiLFgUy4w+3w5Lv/BYfNsrz3SGxmjmmgtCn0Pn\\n\",\n       \"jdFolDz6+Ezm370kdmx5ojH9dAPI6c32dRCX6IWMx+NUksGVLQYS4cJlCZQo6Bi+5Z0kS/sWZVcK\\n\",\n       \"7F7knWwBPzs7S4nRCBZ2tXm/3OtCaLEVnvuYF++/hwTZxeeGCX2B30iqd14sl8sZoy0meVYqlYRW\\n\",\n       \"0RcMTujrCac8k3n3Q7AZBwn3IDaEWkulUlLM3O8bHxyRjggGic/senQEwWUGoUinD8qg0+mkWlHS\\n\",\n       \"3NjZ3d3VbDbT/fv3tbOzI0m6cuWKZrNZojk7CRlDr9dTo9HQxsaG9vb2kgFG+BejbWtrKxlgBwcH\\n\",\n       \"6axFjAZXvI4yu7zkr4dpMUi5Dt2QKS6n3ADC2OQZvvYcdWONMPcbGxvJAOVA53a7rcPDQ62srCRj\\n\",\n       \"cTgcZvjMDRR39DCMXA7TX490SFl5DGoa9YDLV99Zy7hdXjlN3QD19/g8uOHizv0yxN/nCVq70+nl\\n\",\n       \"Ynwton+Q226cYbxwn6PYLkPRl5407yUfPNrgG2TcePP+05/pdJoxeqITCA09fYK1HEGQSKtItzg/\\n\",\n       \"6MMIHGTue+mV/8cNQeXKhIUdyw645ctRCfFIB6oMe26PL14gxmgQuNXuXiJ9YzIc/vZQlgsbXzTk\\n\",\n       \"YdBceCDk8PRd2Xp/JC1V5jFMiWJ3w81/Ez1FWoSP3bDxMGS01KGVIzreVwQgNJdehEljWAx6uoHp\\n\",\n       \"42XhttvtpDAwXD3M5M/kO4wTF94uBCP6h2EJH7pQRMCAzMW+QuMonJ2uvBN+IwTFNb6Dbuz0xFjg\\n\",\n       \"Nxj1IBxxjNDTlSD3IRQpPgl6QpgJHq1WqynshLfJM30MhUIhra9Y9gHUFvQMhIY+wS8YN76O2I0H\\n\",\n       \"Es2acdRZWuzwoiFHQDMckaTvGKWuAFdXVxOS7Uguz8TYoJinOxHkHcYdRqCpoH3Hx8eJ3q1WS7lc\\n\",\n       \"TvV6PckE+lksFpNRUCqVkpHOGDgCh/wSP3amWq1qNpvp9PRUR0dHKSS+vr6e4a1YooW59TAufOwG\\n\",\n       \"V2wefnJji/44oo1xxDXWE0rXDX54mPu8LAxGfrlcTv8kpSKdONCu9EHumFt3CN2QQbbE3EkQJEel\\n\",\n       \"3UiIYSDeH1Ef7nNDwNdMTPFwow5UzGWypAzi7iF71yfMN7IR5JG+uuPsO3ahG2vSnV03mvyvzy8O\\n\",\n       \"Bv1j/CCX1WpVuVwurQv6Dh3cAPIwHjI/6m43imN40sOMEU2MMlrKRlpe1l6JIeXK1xU/lmycfP4P\\n\",\n       \"E4BOcB9QNZ6gG1BMeNweHpnWlZgXFvNQjLTwLN3zcmbzyXBhykSAvDgi48qEMUbPxQ2b6CWxEH2x\\n\",\n       \"+W9iAiA0RRHEfCYXoPzWn+dJrE43+gJNvfRAhPBdaHk4D2Xq84QhRyiBRN1Wq5UWBOEi91pIUpWU\\n\",\n       
\"EqGlrDCP6JcbHs5/PjZHIL1sAsYyOTh4kjwLtMYRU+6l/xhF9JuQIOGfZrOZoTVIEIrd+wzNmM8I\\n\",\n       \"+TMfrrzK5bKq1Wo6wmdlZSUTLoWPp9NppjgnlcwJxWD4SUp/XRn7GkVpw08uFFlH8KnnOyDo4TVJ\\n\",\n       \"GZTUESi+4zesa/rkfOMGrgtU+gji5l4q6xJj1uVLtVrVxsaG9vf3M4i4ND+updPp6M6dOymx+ubN\\n\",\n       \"m5Lm4dlcbh6a3NjYkLRQlqyhQmF+VEq9Xk8IGDWUSGAfDAYp72p9fT0ZLO5QwI84gIzHUQdHOOLa\\n\",\n       \"iblobjhgrDhCEGUYdPU1y/PJsfFrg8EgI/fW1taSg4XDRT9clzCvzoeONHAmZwzbuTLGMIgIPk5b\\n\",\n       \"DHO6U+eOmZTVbV5KBznEO9A78EA+v9iA4+gTz+T9OHvu7COLGBv9iY6Yr0UcEUdefV3wFwPPESJH\\n\",\n       \"nx3hHo/HyWmDnu7QwROu+7iGTmP+IoAA7ZGPPpfuOMQxLPsd8/bTEKnL8geX7bJdtst22S7bZbts\\n\",\n       \"P2N7JYgUHjH/5y/oSYzB+v/xTN0KBk4HGsTKBBWQsltipQVUipfkcDQokyNDHm4gqdg9EknJK8GL\\n\",\n       \"LRQKS3MvxuNxpmK0hyVizNif7+iMtEBWfBu209h3UWB9ex4JNC+VSpk4NgiG5625VQ+aViqVXoj7\\n\",\n       \"S9ljCNzDcm825o/xnBjXZi58PGyB9u31Drszv4yj0+lkvA88Grxsz3uLSKUfseJ5EA47S1rKW/48\\n\",\n       \"xgBE7iFozgoDbfDjR+r1etrizxhAYz1s4Eiu040QBXzCuNwb9VALiea+bZrfehI8eUfHx8eSlHbU\\n\",\n       \"El7xkCD9ogwFHih9cWTJ+8k4crlcQr/cS8Rj9nCDIyMcqUIyuXudyAue4+iuh3IdqSUUCv/5/FLI\\n\",\n       \"FP7yKuy1Wk0bGxtpLgjJwcO7u7uq1Wq6fv26Dg4OEuI6nU5VqVS0tram/f19VSqVTF+YPzYjkFhe\\n\",\n       \"KCx2W167dk2dTkdPnjyRND9LsdFo6OTkJK1xRyuQXR5i4X3QBXkTc+v8rLwoH32+l/1lHhxdZA6J\\n\",\n       \"VCDrnS848cALHZfL5RReBmGLaSKg376GCUsPh8OUauKyDVlKzpfvEuS3RBw87EW/YxiZMTv/uc5z\\n\",\n       \"OeKoKSH0KO88ooKcyOVymfWETGP+HGXxlI6IwjjS7XMlLVAnZImvs5iDxe9pKysrqlarGT0F3Xxe\\n\",\n       \"pAXKGVFG3zDhyJznbPE7Px7M+xTnxHmWOf+5C+0ty4ORsnkt0gIu9tAPBPSdNAzQt/T7NReO0UBx\\n\",\n       \"5ReNumVQnicNejxVyu6mQJF5zJw8EPrpApm6PlG4eZiTd8fdA4QjnEFYZD6u2Fxw+liBaGGmeD/z\\n\",\n       \"FMMtbEP15H7mwuclhtNeZsTQRw/DTKfTtFOq1+slujHHbmRiBCAY/UxAr4DuNAKyxuhxQepKnJ1L\\n\",\n       \"9JOcMeD4yWRxBE5s/M5LGpCbE+FvnoWy6HQ6SYB7KIF+A/djPBDmITEb2niYyoUG9Lq4uEh5Ni6I\\n\",\n       \"vczA3t5eorcf48GuNnc+SH53GktKSeh+/puHMPr9fgrNuyEFHQmzESbwUDJHuXgCqjQPCxWLRTUa\\n\",\n       
\"jZRb6flT1KSify7AJWV2SMK7pVJJBwcHKcdpZWUlnW/nmx4Io/phz81mU51ORwcHB7p69ap2d3cl\\n\",\n       \"zcsW5PN5tVotHR8fq91up35Wq9UU9sW4g1cbjYZms1l6no/96OgoGRrlcjnxB/zE3+h0MR9SNhzH\\n\",\n       \"OOAZtqa7sewGmPTicRueL+WGDc9uNptpo4krZXLSqLHG8UCc0cda9PG7gYCc9XVdKpXSDsnhcJg5\\n\",\n       \"box59pQP+k2/MCqiEUAozvOuWEse2o9pBp77CR29KrnT1+mIYeWywmnMmvcNQe5g0S9fs562Eg0Q\\n\",\n       \"D126TuR3LivdGWAd4czGtBzG5XSBv7y5rOEdkS5OG/rJc9zgjnMHryzLc6O90mTzuPXWDR4pG2fH\\n\",\n       \"w4GJ3fOGqRD6nguDIuL5TlQEeoyhwiwgTO4NYNTwTGcoR1RIzPMcAjfmPOmTmjRs54yJqjC0K0LG\\n\",\n       \"7tvmoyfA/f5bz+lxo89zBYg5+5gjbfB6Z7PsjifPt0HQ8X4fU/SEY86Fo1WMP9Zqgf4sYgwReIf+\\n\",\n       \"nZ9nz3X0vCHmw/OePPfC0U9frNFbjXF8aObCgO39GE6ef0Bx2uihgiyQByUpoT6+MYO8KlCg2WyW\\n\",\n       \"aqX5Thv6QpLuxcVFRmGMRiOVy2W1Wq0kaKANNC0Wi8mYhaYIIBACzy1y4344HKbDpqE3zgWoI/y0\\n\",\n       \"urqqbrebHAWKjNIXeAjB6qgTieascYxRaa4M2+12QgZB9Hgn84FCZJdot9tVv99PxoLvoiPHo9Pp\\n\",\n       \"pKNy4Nd+v5+UKIoWJO/s7Eybm5u6du2a9vb21Ol0Uo7UaDTS8+fPVSqVklHrMgSDYX19XblcLuWy\\n\",\n       \"YXCRP+U1vfDKJ5OJGo1GRsaCmIGauqz0HEyX2e4oMo8Yt3G9LHOgmRt36GJOGrIBtAia9vv91CeX\\n\",\n       \"O9SOQjY67/OZAqY+BpzRXC6nZrOZWRedTiedh4h8XJbcHQuAujPtiBbrkH6Px+PMXHgNtojgRxQZ\\n\",\n       \"PeXyGsfTaUB/XGe5IcncO/rt8+VoD7/nGeg+zyHzOfRIi/PFbDZTt9vV5uZmJtcpRgEimuybuXw8\\n\",\n       \"cWxuKHqL4AHothuKnieHM/iy9soQKSlb/A8C+4J16NQ9nejNO3waCelM5Ra/MyLMHqE792p8dw7/\\n\",\n       \"PITFc9yIiAmO/hnBIM0X/mAwUL/fT0aBG1N+GCpeOGNA4HOIrHuRjCuGeHim0xvPXFIKPYAwQQPG\\n\",\n       \"j1GGcojhBmmx2D1k4uiGL2D66ugh9HZUCKPNkwW51409voM+9B+0JgqVKNxZUI6AMk8YtvCLe/GO\\n\",\n       \"SlGLxwWqoy0ejoXPSbaVshX0p9NFUbtarZbmfzyeV98uFovJwHEDDFSJPkLTZrOZFO/JyYkGg0ES\\n\",\n       \"Ehg5hIpWV1dT4i5n2rHrx7fw5/P5lGgO4uKGryN9zsMgUigD3+aNUYPR53OOondDilAP72TcGHnQ\\n\",\n       \"FgOy2+2q0Wio2+2mOQBJG4/n5/wdHx9nPFPO4ltdXVW1Ws0Ydvl8XvV6Xe12W51OJ+2UW1lZ0fHx\\n\",\n       \"cQZ983PkDg4OVK1W1Wq1MsjSrVu3NB6PdXR0lMocEPbb399Pa5jq6Kyt/f39tFmg3W6rUqlk6LK6\\n\",\n       
\"uqrBYKDBYJCKXcJ/Kysr6vV6mfCUtNi2jtzEcHTHwefbP/MdssjloPMEz3Sd4Oi/8/Dq6vxgb0fB\\n\",\n       \"kb3T6VRHR0dJHziS6YhJDLHj3PCbZrOZ1tPNmzd1fHysw8PDF+QIhtrFxYUqlYoGg0HqC3ICZeyJ\\n\",\n       \"2NAIGRuTpplPxu9hPJfdIOz+e2SNI/qMkZAfutPlq6Pb9MPXk8vVmMqArHW9znwv07ukTVxcXKjf\\n\",\n       \"76eNELwXmhJdYI3GKEnUwf4XR5r/u33gupKIhTvJUb9EFMzbKzGkIAQCi8YgY4jPF/HLYpUgDD5g\\n\",\n       \"z8dhsh3K4zdMnuc4oEBjuXgEOOEAP3oEIenWuS+2aDnzvmq1qmazqV6vp06nk6nDgXKMuxHoy2g0\\n\",\n       \"Sn2M6JiPCUMFj458C6dZDBl6PJ3nubEgvbibhLnyujnMoSMI7iWw2MhVi8YwjOwhHK5hAHqoVVrA\\n\",\n       \"scu8ESlb3iIKMJ8zX1DOBzH0598zVje0YtiVIy68IcDc4HT0CCic+6iQzeGsoJrc50aXI0tra2s6\\n\",\n       \"OTlJlahBoJjPfD6fUI5er5cEGErm4uIiGVkuMJkXFI0LftYSv4EvvX9eYI9GYU/oHUOJjh572QWQ\\n\",\n       \"Dectp7OHElx5g+LBF6enp3r+/LmkRRHQ09PTNEavebWzs6P19fV07AxrrdVqZXZFOnK4sbGhp0+f\\n\",\n       \"psOJq9VqOnz44uJCN27cSEprPB6neep2u2q326meVb/fTzxYr9fV6/U0m83UarXU6/UyW/xbrVZ6\\n\",\n       \"jit2jAsMIb/2snCfI66OPLiB4msoOi08N4ZseTY8RmjPHRMMH/8nLQwi8mEwcrmG4UFZDjfqKVHB\\n\",\n       \"+mUOqWO1sbGho6OjVM8QfnLkGqSLsftY8/nFzjRH4h1NlxaIK0rd830xuqCdGzI8F752xAuecmTL\\n\",\n       \"6Y0+QFZ42kbUxT6PPB+eiXPsURynB/IRtN2dZJB4R/lorAU35J1PGTeARPyNj9vnbDKZJEfKkTv0\\n\",\n       \"yM9djhST7MRxDxxl5IrGjSDPhaEBWfs1FiiCiDCCtEB5IJIr71wulzGm4nZYJhzP3WOxDmnGpDqY\\n\",\n       \"31Ei7gO+r1QqyZiiRcVMX0lQBhWKi9Yha5jLoXGO1oiJyowD4RAhfBQasCkhJ4e3PQ9HWuRPYfVH\\n\",\n       \"i58xQlenNwIHujJOX+TRiOY55IBEYc2z3ZD2732R+n3D4VCz2eyFGmCed8AzvK/+1w1Kfyd5VVQq\\n\",\n       \"Z+6bzaZms1ky2kE6CoVCCiVBJwwUN7xIEmecFxcXac5qtVpGQVUqlVS8sd/vZ5AlQkLr6+tqNBov\\n\",\n       \"5EJgnNIPFBRhR5CoGOYFvcVodIEH71LTyoU+1cOhJ+UXnKdc0TAOeAVFQZkH3nl+fp5o6qgKYc9i\\n\",\n       \"saiTk5O0hqDbZDLfCNFoNLS+vp7Cn/1+X2tra4kX8/m89vb2JEk3btzQzZs39fjxY3U6HV2/fj0l\\n\",\n       \"jTO3IJWj0SgZCxsbGyoUCjo4ONDx8bGazWYqcXB6eqqtrS3l8/mUDwafQjP4z1MOMJqZy8ifHhFw\\n\",\n       \"NAReJNcF+YCx6bIPnqb5M3yN++eo6Jknz9dzY9kNAXiLZ3iVexwUl5PkjzGv6CCOPiqXy2o2m5nx\\n\",\n       
\"EK7O5/PJIIghZuS3K3b41Mfocvb09DTj5PlGCtaJy2k3pKLT7mE4eBoecN2L7HXHi2vISpcl0iIv\\n\",\n       \"zKMuzjv0B0fYZQbzNZ1O1W63E91Btd2I4ZkR+VwWVeC96BRo6sgaz6Bh0LEunJ+QIS9rl+UPLttl\\n\",\n       \"u2yX7bJdtst22X7G9kpzpDwvya1ELFCsXodmsWrd4sfLixWssdJ5llvYeB+gEsRheSY5Jngsy7bz\\n\",\n       \"Az37zg7PoXFvi/AbeSTAuTwLpIIkX3b0MbaYM+M083HF/B9P9CQZVFpsscd79G3+9A1vyMOXHiKI\\n\",\n       \"oUB+47C2Q67QGfSJRjI11z3h3p8B7UFTqIbrYTV+U6lU0vM8KRUe8pi9o2N8xpMBMfG+4hnzDnjN\\n\",\n       \"m4cbuc5nQqCMkZ2HhBocQSAMCM03NzczCEK1Wk27lnyMlExwpCgmvfpBtzHZ/vj4OIWqvcTB9va2\\n\",\n       \"tre3k2fn/AbsX6vVMonMIDyE7xwFYN5YS77DDvQXr9l5n3dyD0iXh2JAoxmHI1148iAT9JXQz3Q6\\n\",\n       \"zXjxtOFwmPG+KWMCPc/OzlLOErTFm87n86kUB+NHLlSrVXU6HXW7XW1tbaX5JndyY2MjMxcgH1ev\\n\",\n       \"XtXp6alOTk7S+xqNho6Pj9P6cH4vl8tJZlar1RQCpPnpBhGtZQzuoTtaxPeeH8Q1p0MM+UOPiM6A\\n\",\n       \"0iLTPbTL+6rVakIuaCDirGtHxkFnhsNh2nrvaF21Wk0hPtalNA9FHx0dKZfLZULi9AXZ7yVCoCdp\\n\",\n       \"Dsh735zjsssjH15IkzXsCA7zgN6bTrNlQ+I8eiPC4JtVoM2yXDI+cy9z7PPE/TGC4yHG2BfWGCiY\\n\",\n       \"hwx9Fy3NUy5Yo8idGK70MUTZDA966oXThmKv0GIZChfbKw3tRXiQheOTICkxL/DtsoXoytYJhyHl\\n\",\n       \"ORP8zkNInjhOTJbPnpDp7ywUCpm4rm+H9rCgtKg2TG6Vw6D0DRrk84ujMOgD43f6xHABz6ahiFgQ\\n\",\n       \"hLukeS7IycmJarVaUnBc4x62pJOgKGW3lcbjPpxpPWeCvjjMyuKSFiEcF8o0n1voxHXCEB4GY34R\\n\",\n       \"hvADCpBn+sLz8BXX4T8Ox2QOuE4OQ6zD4mFSD0V4/pYbbdJcKcLbhJY8QZKq1hcXF5nq2OVyOSUf\\n\",\n       \"N5vNTBVfNyIQnL6rqdFoqNVqpfXhQhrjoFQqvXCcye3bt1NYxQ0+nI21tTW1Wq0XDGDWQrE4r8UU\\n\",\n       \"6cZYPckWGlIXxw1Fz1tjXJQEkOY5RPQD3o73YLw6f/sxOIRW3KFjDDyLXCdoQgV6D8PBA27c+4HG\\n\",\n       \"bIm/evWqjo+P0xomv6ndbms6nWp7ezu9jx17udx8h5lvVMDR/Pjjj3X//n3l84tK39Cy1+upXq+r\\n\",\n       \"XC7r8PBQ0sLZ8ZIgMVzEPGD4uAxHkcawoCs3NwyYC3eOPZTF2kc2uGL3UguDweCFGmOeUiBlD1jn\\n\",\n       \"nUdHR6pWq5lD0Ak/k2bBfa1WS/V6XU+fPk3r2nOkMN7gWc/Xgmb0P4a0uG82W9Tvwtl25xQ5h2MM\\n\",\n       \"cMCzYniLMgluaBDq9PxCGjTk+S7rXSaTvxyNWt917noQR5nP0bBBPjmwwjy5/nCZCP/lcrlM6onr\\n\",\n       
\"yJin5uHFGIrkXujkub/sRP65M6Q8UdStWowCz12SFrFQj4W78I2eavSE3BuAcRCgeFGTySQJ00Kh\\n\",\n       \"kMkr4WR23he9KE/o9t11rtjJV3Cjx/MLXPi4scA4XAnH+DuoGEYP78N4cg/YE4exvGu1WiauD5OR\\n\",\n       \"D1KtVpNy7ff7L2yZdgZzIzkuVOjmNa6krLBbhjrgWXp+EeMnd4aFzDg9iXk6nWpnZyfNr3tgvkCZ\\n\",\n       \"Cww76Oa5bCwwhCnjw9vybceu+MkZId/H5xihiVDwhHqMY99Vxv/Z5o4RRa4G7wPlmUwmCdWS5krB\\n\",\n       \"k4o9H2JlZUWj0Ui1Wk29Xi9tTZbmeTnkEZBb41vAy+Vycjp8Vw9jn0zmW+7dqPHET8bl65fdSPCA\\n\",\n       \"K0hHEjF6oJsrAebNnaGY/8Y4jo+PkyxgV6rnSoA+uNEhLRwxch056w0+Y/yVSkWNRiOjIAaDQVoz\\n\",\n       \"d+7cSWj0cDhMOWIrKyva3d3N5MCR/9Rut1Wv19PYz8/P05w9f/5cN27c0PXr1yUtjhyazWY6PDzU\\n\",\n       \"xsaGrl27Jmm+2w/aMTeO0mNMIWfoG3yDvMMYijtQkZfuUPFsd2gj8uLN6wdypA7y3XkYeRELZIJA\\n\",\n       \"sc6Pj4+TbGfn1tnZWUJrvbwHBpc7UsxT1Gc0lxGeCwhfoNMwDlzWuG7DwZKU+os8BBTwHGJoiqzx\\n\",\n       \"eXJjyR1VdIUjUzGfDQMK0MObz2vMr6pWq2lzFAaozzXvjAY4SJYb4xhsroPdOYVW0N6NJc/txRnk\\n\",\n       \"/egnDD8cZ9erL2uvrI4UzVEIFIkbSdKCyBDWEwtRVjBEJJq0UHIxgcyTes/PzxMKRKVgdu240kMJ\\n\",\n       \"wEg+GUxqnEhpUaOl3++niefd7gVGhM3LC0jKKBPQD4eMPdnPBZdXv+b69P+w92bNcSTJubZXFdba\\n\",\n       \"CwBBsls8PdMtyWQmk270/3+HTBppemOTxF47tirUuajv8XwyAM4xmxt+FwgzGghUZWZkhIcvr7/h\\n\",\n       \"8fSUBQ9BXxhT/oaR93Wz2SyVhh0bI4U4o54DPscg2ZGwYbNT6TF8CYp9eHiI+Xwe/X6/FmEY/h6N\\n\",\n       \"RnF3d5eLESPsiNpRsN/Z0Q6GA6NoObTz6OJ0VkRcV5JcQQ5RzhDO6SvXHR4exv39faZ3cBym02mM\\n\",\n       \"x+M0pBFVpXFKHAwGg1rKCDmkFADPA+WcTCZxe3tb21aPs2qyqguAUnDSBFjmje97bCzDODuHh4f5\\n\",\n       \"fRwyI0fICagdRTg3m00SvekPuqR0epvNZiyXy0z3eX0Nh8N4fHyM33//PW5vb6PX69UMJrLKVnfO\\n\",\n       \"d8MIg1j1er0syEn/jTpjlCOihl7d3d3ljrrLy8v44YcfUg/d39/nPb2eKYtQBlHff/99XF5expcv\\n\",\n       \"X+KHH37I50HKJ7jj/XAiTIou0/Wk/HkXZw0IPrneiBx6qly/DprKdCHvYqfYz6NsDDQMnD7QIU4M\\n\",\n       \"cCBhvdtqtWI2m8Xnz58jotrtt7e3F91uN25vb2vfjYg4OTnJnbPc004z7+T3Mhpmp4bfccIc0KB3\\n\",\n       \"XL6FQPDg4KBGK0DPGHlBx9AX9892pqylZP3n+eE+zE9ZKwskz0GPn4dOsGNDoGJ7Y/DDSFfZ0L+l\\n\",\n       
\"Hudap+/op51U981/MxWGz0D6PVZl+2aHFkc8R1e8WDwANtr+PaJeUt/oBZ+Zf+PUF9G60QCUG8gR\\n\",\n       \"k7RarbLw3tHRUUbzTLDTG/SF7bW0p6enhGJvbm5itaq2MhsdK1EQSgmAPhmyNHqCk+g6TQituQk0\\n\",\n       \"c8eIfDkQF6HmPoxdxNZAobS43ve141im0Lif89u8I3ONo1qmd90XO9r0A36ZHRvkaG9vLw84jog4\\n\",\n       \"OzurFb90dOXUAs/wIqOPdpq4jvdx+sOQM04YP82V4D4UT+R3EJmDg4PcUo/DT8kMUhTv3r3L6s6k\\n\",\n       \"MjebTRwfH6czyVwwPhgdxnmxWMR0Ok1EkmiXMUVRLpfLWK1WGR3DM8KBJQDhM6fl7dAj61bQDgzs\\n\",\n       \"fHo8nRZBSeMA8kzqBCEj5XrFadjb28soudnc1oI6OjrKHZolmttsNrPK9j/8wz9ExBYl8K7hw8PD\\n\",\n       \"RIVwghjj2WxW01seX6rwR2yNxcePH2M0GuW7mJPF2DQa212SfIZczGazeP/+fbRarUzfwbVC3zIf\\n\",\n       \"PI/+WG7pJ/Lp+WKOmTtQCa995r00eNzXKR0jNjjApnxwPWsC1HQymeQc8l7WBbS7u7ta6vjp6SnX\\n\",\n       \"jJEhIx4RVX2xRqMR/X6/Nk93d3fx5cuXdFhKZ6m0S/x0yQOQetM9IiKrz9uh4B3M1/QzkTPeE/5k\\n\",\n       \"RD3NZXSROUT3Oo1G4znQlcQrAAAgAElEQVTofzsmDlL4x+/MG2AGzwNdh3Pm9zdfinnxuDEX6AE3\\n\",\n       \"vttsNp+VajByZRCg/NzoGrucX3LoaN/EkeJFHQl6sGn+nIWLcPt7pOe4j68xAhJRrwpNdMXCx8ki\\n\",\n       \"Kid6RhlHVIaN/jFZ3NMpNfcfoW+1tvV0Li8vcxFRCM/kVysy0DgXKPV9nb+1E8n1NDt/GCGihdvb\\n\",\n       \"27z/yclJonJE00aF6A+L206P58UOSukU21H2WKGk/TzGrYxmiRRIN/V6vVpKz5C30z5v3ryJq6ur\\n\",\n       \"Z2fWRUSWaLDTZg6Y03aOrpBN38upD/rj8TLS0263U1mTKmA+WdDr9TrJsBGRROS3b9/Gu3fv4vz8\\n\",\n       \"PB0JoHQqOF9cXKTiI9rmHo1Go3bsDvN9d3cX7XY7HX5k5fHxMZERnO/JZJKcGxxjnucjUuBg8RmE\\n\",\n       \"XgcKNIwaypA0JWOGPDsQ4v2n02k8PT3lVnUcRsbUKU8McERVvBMk8/r6ura+1+t19Hq9RHJIh5EO\\n\",\n       \"bbfb2S8/D0QB/mGZRgdh44gS2nK5jMViEf1+P4uv0hfSfefn5/Hu3btaBN3v9+Pu7i6ur6/j+Pg4\\n\",\n       \"n+f5BQUtaREOFLhnv9/PscZR6ff7tWCANf21lL4DjDJYcQDlOTYK4n4y38vlMkajUQ2RMpqPrXDt\\n\",\n       \"JgfsDtTOzs5qqS3P03A4zA0cm82mVrmedTYej58R7bkfSKsDQSNCfs+Iuq7B0fS2fwqu4owRUPO5\\n\",\n       \"y594TB3QYt9sa0iVYte4jrG13JpT62ft7e09qwcH2uWswXq9zlMACLyQC4IyB90voZ/o4ZKHZ3kx\\n\",\n       \"Il7Oq+WUAIPvORCkT19rr+UPXttre22v7bW9ttf22v7O9k0QqdI7jqgQG8OCL6V1yobXivdo5MJo\\n\",\n       
\"ARGhOTy+jpxyROSZTkTWTik8PDzEeDyObreb6JLRLqI9oinD1I7I1ut1QsqHh4f57yUeCv0tkRz6\\n\",\n       \"jAdtUp5TnUQwjnhIc8Jz2Gw2yb2BF0OEbFQEBI9rzCEqI0iQJBqRRZnXNnIHT8gRnSO4rxE67+7u\\n\",\n       \"Yj6fJ+pEpEVK1VBtt9vNcSTVxj2J0LnW/Xczt8fzY5JoCXHzXqSzjDQxxhC4TTYHOdjf369xdhqN\\n\",\n       \"RhwdHcXp6WlcXFzE1dVVTYYjIneQHR4e1sjIFF/kXDwQKeQHJPb4+LgWQW42m0ylnJ6eZjoJefLu\\n\",\n       \"Wqd5SV/v7e3lwbkRkegOCBa8loiopcPm8/mLu2cdKVufEBGTCt3Z2amRRyOqSNqVv3d2duLm5iba\\n\",\n       \"7XZu4LCOgB94dHRUK5uAfgKpNSeLNCD9PTo6yr7e3NzkGmPN8RlI4GQyScQKeSP19Pbt27i+vo7p\\n\",\n       \"dFrjTi6Xy+j1es/Ghf6hmyhaS4Pgz0+nQ50JACn02ofyALroNAprxUiUm9NeXnNePxH1qvmbzbYw\\n\",\n       \"LlX4fYAy6TnKh5R8HlAxo2etVisPjEZfcB2bCbrdbq5V1hgNFBKUxe9lZMU62iicsxv0z1xP68nH\\n\",\n       \"x8cYj8cp26ZDmIu0v78f/X4/7Ql2wNw3I4DwbOlvWe7GXDnbY5A6l12IiFoqj3krsy3oB1LpERVf\\n\",\n       \"j7EzkshGF9Bs22DWGciSx4W/I2NO7TEvzoS4OQX6UvsmjtRyuUxDXC5EG9eXUmMMhuE6Gy4z/0tS\\n\",\n       \"pHcM9Pv9GlmcFFlEJIF3uVzmoJvrQ5Vh0hhMOKRh18yx42K41FwnyuHjeDiNV+aHDeH2er0kHZKf\\n\",\n       \"N7fJabX1el1LDbpys53EiC3JdTQa1e5Z7myjMnxJFmWsmBf6jZOA8sChiqifS1gSI73Dq7yOOUVx\\n\",\n       \"w++J2KYn+X7Jw2COn5621XSBpCO2C4Zq4OxktMNTjmnJPSgNAQqM+SdVWpIoSQN2u90aERmHnrG+\\n\",\n       \"vr6uOQKdTid+//33+PXXX2vwt+uz7O/v1w58Zf1BOLYjwWdHR0d5BI2VIjV4UJDMzeHhYRow1raN\\n\",\n       \"rrkzTrVA4J5MJuls4dCzAWRvb3s4MNvcI6LGI5rP51nV+6VAAk6JHQ3k7PDwMPr9fo17BHcKZ4/3\\n\",\n       \"YPcunBX6yPMIAnCoPE84mKT9vIZJk5Q7ER8eHmI4HObzdnZ2stxFRMRoNMo0ignPHGGz2Wzi5OSk\\n\",\n       \"5hDg7Ji74p1wm80mdwOSiqYvOC5sImFnI7LhnXyUJ2ANWBeU3CqoEHzHzhayYl4b78hYoYPNYfTa\\n\",\n       \"LDfsoG9MWI6onLUvX75kCqokY9NXc2qPj49z8wb60ilYGlSEkn9qvqb5Spa9kgDeam0PjR+Px5n2\\n\",\n       \"tS3F+Wg2t+VhTHmgtlWz2cwdnNzX5Hqc6YhI6oedFHNccdzMt/NP8wBpHlvLDWPF707hWX5wPkt5\\n\",\n       \"4Tmkey1rdqJMLzJlp+wn41H+ze2bOFIoDNcO8ouVAo6j8NKAmfVfogcmOeLRGnWByGfUIyLSiOL5\\n\",\n       \"enGjhGezWU1pRlRRGwgROWeui6gfl2FkAWPJ9Xyf92bC2+12EnyJNBwJo9iIflA4fM9ePlGto5OI\\n\",\n       
\"alcP48P7cJ139ZhoiLAhwL63uV5G13h/R26MZUTFY/DiodnpZp4wiLu7uzEcDmsOjI1fs9lM3sf1\\n\",\n       \"9XUNWWCnXKmQnWOH62J0DDmDk2OFihIm4qNeEfLGuEIadt0ucyks49QA+u2336LZ3O7eAnXa399P\\n\",\n       \"5wbjyDuyBdx8CRzQ/f39OD4+zhIJds5ms1ltl54LFhKx9/v9DF5sSJFvxtDzDc8DJNZOHcHV/v5+\\n\",\n       \"zOfz2m5GO43j8TiOjo5qSBS6wdwWvg+azM4/b+4wVws0EDnF8TCiGxFZfgInGa5URHUsCQHgZrNJ\\n\",\n       \"NJqz8iaTSa5H5ALjjNMN18bvzHjjEEZE7uK0E4nOiIh8NjJgcjKGnkDBvBR0NgbTzhFGF46V61qZ\\n\",\n       \"/8QaKblAyKGdLAdQjLMLNYKQ0Ed0DbqV9VlykVxuxe8ACnt/fx8fP36scX3evXtXQ73MAXr//n2+\\n\",\n       \"P+gu64TSMgTrDrzpj5ESO2BweYwYMZ7YyMfHx+Qy0tA95lVxLbWxyr9H1DM4ZSbGtpi+GszA1pVZ\\n\",\n       \"Cl/LuDG/3vTCGL10sDyZIda+gYPy+Cv6jj9gW2o7w3f8O/JZcvicTfha+2aIFPCjvVKEvzRQbDfF\\n\",\n       \"GBmaRKBsVC0cEdXuQBuM8Xicu6Ps+NBAqIBCSyfu4eEhC1rSVqtVnquFI8Y9Oc+M/mIgI56ft+S/\\n\",\n       \"IbikE9g1yHjR77u7u1gulzXHjR0RXA+iEFGv3cTfrZAgpTpC57MSBbTTw9+NwHke/E52JB0VeT75\\n\",\n       \"DGVq4beA43iwaKbTaabIdnd3M2r3+D49PSVZmoKFKGbQG4qO8n4Yw3I7vjcmYIzKBYlBcXTHM/f3\\n\",\n       \"92M0GsVyuYzpdJqfsTuHMd3d3a0V5ru8vKyRkO2A/fHHH7mmdnZ20slqtVqJuu3u7sbHjx9Tpt69\\n\",\n       \"e1c7a4wUV0RFTG+1WjUZjKjScDYIGG/vnMH4ITOcUWcHwcVo7QiQUqRRKA8iL2e90bwxwrKIgcII\\n\",\n       \"NRrVGYXj8Tg/Zw4wmsi+yeNGejCS3NO7kwhiQHJo0+k0jo6OkjiNg4a8UawVp4fdw6DhIOPL5bI2\\n\",\n       \"phFVxH9/f18rqnp8fBzj8Th1kKN4jCznLJakcKPpNm5e8/6c64xUMpYR9a3qvKPXN7q5vCeBMYGe\\n\",\n       \"nQMCRxxop4xsP1izRhp2dnYy2P348WOO5eHhYQYWd3d3tcOON5tNHi4dEXFxcVFLcXHPfr8fq9Uq\\n\",\n       \"bRC0AtaFA90yW2CHwOPWam2r36PjGDdnUjyHvINtHboDOSO16+dhswjMnbpmDNHFpp80Go0Yj8c1\\n\",\n       \"Z8/Imfvg8j00AqKX0r22LfTVwArjVqaN0f12XO10EehZDg0YvNS+mSOFo1HWdiihuoh6yf+I52k/\\n\",\n       \"rmGBeNEwIHi13OPy8jIhUXbgGHK1cJQcA/rCAkWAfcgkCsZw83Q6TY/bSsXva0SDd2BR9fv9OD4+\\n\",\n       \"rm3fZUFTKNPoEnD/SzsfykqtpQG6u7uL2WyWKbwSdXLK0gqTMStzzM73G8ou71EufqdBQDoYb+65\\n\",\n       \"2WwSLaGuz2azyd1hOLSOjHiGDX7E1rCRCigjEaJw7u9xK51hnJ4yMnNEi7PWarUSmcCBpeFEAX0T\\n\",\n       
\"hUVsiy2SZnMJCT4DugfFNNcJZ5ZSB6enpxFRHVrsVIudVJAuZNw7COk3z/WYGFWcTqc1hA1DaLSW\\n\",\n       \"z5h/UtpO+2LQUIzUzoqokKTSKY/Y6h+UOsbYZVHgwkREreo96xv+htGNRqORTvt8Po/JZFKjCsCf\\n\",\n       \"nM1mNRSl2dzWcgIhd+FUZHS1WuVRKIw3qaxGo5FpHYIBdBm7BHEoIiL7hVxjFLknjsJLKAifIxfI\\n\",\n       \"ON/DGLK2S0NsQ+TUj1GA0mChU0BhcCRxWp1iZ06hbTiVylpzCYuI58e0WH4Xi0X89ttvOYd7e3tx\\n\",\n       \"enqaa8K79h4eHuL09DSf6xQsBpy1w5iy/gj0Pe78HzuJc887WKdHbNefHRUH2dZBrEHWRWn3LJsl\\n\",\n       \"dYMA0s5yRKWj2u126k7kbWdnJ4EQAioH0N6xbp1BORPWmNex5w29ZweUMfD3Pb92Mj0uZeqZvrAG\\n\",\n       \"PRZl+yaOlB0iT4YJ2eWCInoyehFRRUlfi4RQsiX/YDqd5nli9MOViLneqAF9tyPnqMKNCN31QFjU\\n\",\n       \"5GPt0ePskRpAsaM8gWQxOtyz3W5nVP709FQ7IgPl4cjTwsACsMJzIxq2ssWgGdp11MQYMU82mBCb\\n\",\n       \"PcY05omF7c9Br7inESKczL29baV2HAIW0nQ6Ta4PRsjwMcoEZ2p/fz/G43HWBLLxwnh47mlEMRgF\\n\",\n       \"5rfkSHiMGBsUyGKxyLl8KUrGqPP+pF+pE2OEzBw4jkpB9jl7cLFYxGw2i9FoVKshxjiD2pToJQ5i\\n\",\n       \"u92ucYvYdm1ieUQVCB0eHiaixD3H43Gcnp6mUTT5GafZ6Xka/BUI88gx/Tk8PIzpdJoy5+tbrVZM\\n\",\n       \"JpOsL0ZR14jIEgpsL/f823ihr2x8qR3X6/VqnCWcNcb58vKyljrcbDYxHo/j7du36Ux73jE0RmuQ\\n\",\n       \"d7huBwcHmdpDzlqtVgZvFGNFznBOmTtkEoNXcly4J3OPAeMeln/mztE+uofPuS+o0EsN54v7EtTR\\n\",\n       \"H28Isq4muFytVskjfIk0XRpT7A/oZavVirOzs5wnMg3dbjcDQsYPxInrnGJmvggoqC9GUNRut+Py\\n\",\n       \"8rI2DuhcEFlnTEgpI4/YFuQG3UM/SsTfa9z6ySCEx/5rnzm4Hg6HGeigNzzPPinA84S9I0h2cE2t\\n\",\n       \"MGrWOa2PLCGXzkiVjlIJzDjgLxEwnMASDSxtUtleyx+8ttf22l7ba3ttr+21/Z3tm5U/AE4vSV3A\\n\",\n       \"aEZ9TGorURV+N0TpnQMgJ2XxMaoge7eeIV6iE6NkEdXRDEZo8FS9VdxQLPcEYscrN4wJOkL0Tp96\\n\",\n       \"vV40m9tihiAXjoJBL+B5gMiwo8MRiMcNCJ/IvkSl6D/ImgmSRAdEyOYmfC3CJIIhumLnSETUECYi\\n\",\n       \"JafIDPn73iBEIC+bTbX9ttz5BkrCPU2iB02IqA47ns1mWQyR8QaZ8E42o3IgUEQ63tUGaoq8Okok\\n\",\n       \"VcDYeNs/HB/QSqMum80mi11CNId/0W63o9lsZuFAE2Cvr68zHUTaz++PrLhQHrJIfzudTiJaXOex\\n\",\n       \"KNEa0m6kf4w6lfJkJOv+/j5LjXjMKMTI2EJidRmDiCrl4fQd6Aay6rMM7+7uEuUrd6yCbEyn0yyh\\n\",\n       
\"Ye4gP0tKAPL1+fPnGA6HcXJykmm4+Xye5UY415B3cIqZ1BnIIf25u7vLOXdK+OlpewzN09N2ZyqF\\n\",\n       \"QweDQepf5MmlVowYGB2EgwVxHqTMZHSQt3INmxeKnka/+YxP7mPUmjXD+5G+YrfmZDLJzRnIovlJ\\n\",\n       \"pJxchRw74vQ13zfiaMTmt99+y/V8enr6jBvbarXi8PAw5QI9dX9/XztyzDqB9wFJHo/H+X4uwYOe\\n\",\n       \"duoJu0W67O7urpYWfvPmTXS73WeoozM76HiPt7M6RqGMyDgVH1Eh4yCnLt/DGsammuPY6XSi1+vV\\n\",\n       \"9KH1JRxHMifuJ4hdeV3pT+AbML9896XrvIZLpNJZopfaN6tsDunS3AQgY/NpIuqpL64v4bqIisNS\\n\",\n       \"OlJc59om6/U6rq6uotVq5WGsXlBwjJyH57py4fM8titzD3avRVRcGlIe5XbL8p3NHXMqE+PIdzA+\\n\",\n       \"ZYXmXq+XaQOnapyidArUTi3vaD6YF2TpkJnEa6i/5AeRf+c9Pb+kAvibn4dhsiPGMyIiFY3LW3Q6\\n\",\n       \"nRw3uB1eiOaAeBcZ0PxgMEgip40zysaKKWJrsHu9Xo5PuduO56P8Sr4W88OmCjvqfMamAhTmZrOJ\\n\",\n       \"Xq8XvV4vBoNB7kLjeRi66XQas9kseRtPT08ppygQVynm0FacNB9MDBeFQARZhMzPM73RAqNlo23Y\\n\",\n       \"HGcKSJ7PkF+nLklfff78OcbjcZycnORc8M6MGw4xir80Jjs72xpSe3t7ScTHKCGvTm0if/yfOmue\\n\",\n       \"X/7v9CwpnIeHhzg7O4v379/H999/HxERv//+e8xms3jz5k08PDzExcVFzg2OASR6+FsRkSl9+Dns\\n\",\n       \"tLVMR2x1xHfffZcpf9K6cOvsKJqrxDrAeTY1ATk1dcG80nIe0Sde75Z9gsjyc+tjpwkjtnp0NBpF\\n\",\n       \"u92Oi4uL+Pz5c82p44DccrcfPDYH5earUavLzg73/PnnnzO1f3p6mk4PdIPd3d3cBEDjOBlvNnJA\\n\",\n       \"t7Ozk84+toZGEFTWaLI98Lj6wPLxeFxzdmgmerP2vE5JzZV2zo0+IqdsdrFN9lyQZsbGIteXl5fJ\\n\",\n       \"3bTsMBfoG9vtiMo2YJvKoNTy5XXJvHJfBzu2Azs7OzVubglgvNS+iSO1WCyi2+3mqecRlYNQ5q0j\\n\",\n       \"qtonNjQl8Szi+eGNEVXESzOxDkK1OS18dn9/n9diqCMi0S2T5miz2SwVnL3biEjiJ3lrO2fkilGW\\n\",\n       \"nnyTxyHUoty63W7uyAFxcATpBRtR5ZZpLJaXOChGDNww7DgFNvrOjZfGi3cxP8zRAO/OvUsuhccJ\\n\",\n       \"hcFzcCIajcYzZcP7+XBWO3k4KeaeYPAZZ4yQeR+QX41MeYdVSdy0I9hqtWpOr7eJl3WSuNYKHYNJ\\n\",\n       \"+YtGoxHT6TQuLy9r5F/Q0svLy2fRKmR3uGPeNRdRIQWgSdzz4OAgLi4ukoeCA2K+Ds4X48YYIB/w\\n\",\n       \"ixh/b2N3gOHxZc1hEI+OjuL8/DzG43EiU3Y0eA6cJ7gy3BdHC+ffc0DfkRNkic0LvV4vDRyNaNdr\\n\",\n       \"zLv9QCx2dnaSmxmx3TpPLbNutxtXV1d5Peub8QJFoi84Beys9KG2GCbkyBsqeM/JZFLjlFoXY1TM\\n\",\n       
\"LWLtEUjQJ88xrTTE5pRhyHgWusC8Ke7JWkLv8xwQUHSpDyVfLBZJ1LaepR/ozJdKGURUMlnyWOfz\\n\",\n       \"efzlL3+JiO2affv2bW1s0aPwVWnU3mK8zQOy7nXJCAjW9K8s90FggSx7Mw3BAI6Kg6G/NUdc6+DB\\n\",\n       \"gTdj8/j4mJwxvz/zYV7heDzOABDEivfiAPbhcPgsu4FcMF7mfzIGOMIlclaid+aHlcR7fw8nys+I\\n\",\n       \"qIpQf82xjPiGdaSA1S04JrI2GlXNDtINwO2QwSLqSBYL1AaaKIsB5TMWGdtj2ZFAs1duh8AGytEY\\n\",\n       \"11xfX8f79+9zG7wjZCM3VtAoKXYuONWwWq2yBhDXcwAphoNxKdNvCAZCUO6cscJ0BPn09JR9YDdc\\n\",\n       \"iSQ4vUlzasVpjYioGUeMJvcy6ZvvlM1RiZUbCp3UDg4DCBHOgNG4kjRpZw8ZIVJzygoHl0jeuw1J\\n\",\n       \"g5IyczTt8SaC8n1BLFCMfv9Wq5U7hjabTa6FiLqhubi4iMFgkKTi8XicAQGpQjugJrY7AudcN2Tp\\n\",\n       \"6uqqtmY+ffqUaSP6HlGhAPzfxGjgfeScjQfMKwiQU7t85rVO8BNR1VMjdYIMowiNuJJWQgZx+pA7\\n\",\n       \"nGDmcTAYRK/Xi/F4XKslhMHiEGKnIXlf1oB3JqKXkJtms5nPYzcuaES3262hKAQkpSxFRNYiI/VB\\n\",\n       \"Pzn3j2dbTrnX7u5uppyMnJEmJO3reyLDTrcYuXEw6PXGdfyzsbJeZm34fU0k9z3v7+9jOp3GfD7P\\n\",\n       \"INKpZe6NETbST//L9B3zBBpZkpFJt/33f/93UjwiIt6+fRv9fr+WFkS+CfCo9wXhmnXhcSxRet4b\\n\",\n       \"h6IM/spyHh5rdgSuVqssMRIROT+murhaPJ+VZHGuZeeuz4QkZY9N8BzTN9YKp0wwT5QLQmegv12n\\n\",\n       \"r3T0WWPIh6kQ7HI0SODnObVp++xgmubf7ZC91L6JI0VU4BwukCIDGFEtCIwaEwlcT/OC9o4TrjH/\\n\",\n       \"yErKQmUIFEH1IncUgVEnJWDEgpQHBs3KxIrQPBv6ybMbjUZ6+YvFImvTcN3FxUVEbKPSfr+fEDYL\\n\",\n       \"w2NHH1AY9s4Zu/Jv5LkjIg+9LT9z+orxRmiJmGwEiSiJTO3U2VB+DVJmPnBYIup8nlIRYQRxXgxh\\n\",\n       \"42AaheI6Fibj0mw2n/EhnBb03FJ9m3e0kUKWzJHi+fAqQB8sDygGnCIcK8/xZDKJwWAQx8fHKRs3\\n\",\n       \"NzeJ0LGVGwd8NBrF0dFRzt3Ozk4t7Uf0XMrm5eVlFr6cz+c13kCZHnPKBMSHQ27hBzGGZYrFaxSn\\n\",\n       \"xQEJ3+FoFYIu6wHu65Qh6xRZeXp6ypIDyPdiscgdhoeHh7VyDCh7jDHpf97ffD7Pz3q9TlTPfWbu\\n\",\n       \"2u12HB8fx2QyqaVqCBBAI7w2QE1IL1qeneJpNpt5rE1E1OZoMBjUPiPtDBLPESQRUeNi2pGiMf9l\\n\",\n       \"mi6iChydDbD+dtBXUhkw7mQj0IO3t7exWCzi+vo6ZrNZ3NzcJHeQcSGrQP+4H84aHDCjLU4rmi+G\\n\",\n       \"c9Dr9WI2m8Uff/xRQ9Dev38fw+GwFoQhhw8PD3lEE44//bRjRzqSz0xRcbPxJw1lXcNzcdycwrLz\\n\",\n       
\"UKI8yLepHXxOAOD15RpbRnaMAjK+pPBMIUGGOJLJKHxEvShniSxGVHxlI/bYGOyrx9H0HeTMAT/2\\n\",\n       \"hb45GEYvfq19M0eKCWVx4Sx0u91EpgzxItQYHRe1sxCVSM9ms6lxhfge0C19QPlHVAsfQTWSRX/M\\n\",\n       \"a3EF54eHhzQUJQHXxtSCaCSM97EA47mv19s6KiiM8Xgc3333XS1Sd/TsiAalWiIk/M1OCNfh7FAf\\n\",\n       \"xOOGcff9LNQlQmZIn7pHJScCBe7owPMF3O2/MVe8L9eBpkG4xZGjL0Rt5fgzNyas2tB0Op2asXMk\\n\",\n       \"xHl/m80mick8A0XO4t5sqvpjT09PiargSHlMzGcZDocp36Cpj4+Pefo8lfsp7gjn6e7uLo6OjiJi\\n\",\n       \"uykC9PTw8DCPPYnYppoitk46zgTOGTVvxuNxDIfDGAwGNePNlnNkx2PocXZ6g/FAcTFXbqASNgIR\\n\",\n       \"9XpYjImLJPL/3d3d3DwQseV2OJrHufGY3t3dJRLochnIp+tX8U7oC1LQ5l9gXEkPYyQ4qmi9Xsdo\\n\",\n       \"NEoHNaKqiI7+wrHj3Y3GEYVz3dPTU4xGo5RDnGgoBryPA0GCANK68/m8ZtzYiAFFwc6UdSuOq3WD\\n\",\n       \"0W47SzbWpY7iu4xP6ejw/ZKaAMWBcSiReE4S4O+uQeTMAXKEXNCgpfz+++8REbk9v9FoJDezTN95\\n\",\n       \"ffMZRZRZO+YAEkQQJBtZKYNHnlHyIymZYi6fU2W2eTSjcoAb3JN/LiVDfxy42OGHG0bQYuSauSHo\\n\",\n       \"to4ukTCCYfrCdxgX6xP0D+vM64lrcKIcTGP3Sl4ZY/a3Unuv5Q9e22t7ba/ttb221/ba/s72TRAp\\n\",\n       \"V0AFlcEDhqjmHVx8F++SLZMRzwnUTmmRljCHwMToMp9ryJHo05wgGh44SJCPdOD75HrL9B3Pw3uP\\n\",\n       \"qLZj4xW7EfltNpuE4bn3bDbLgoRs68Wbxksnp00qgvubl0BzJEhU5bw/fYWXYb5TRHWUD1G5ycJw\\n\",\n       \"PZhn+sW4GUJ3lEoapNlsPktfUrCRSKkkxnNWGRGtU4lEsMD/JYGdOTcBkQjKRe6cxmEXC+jh7u5u\\n\",\n       \"LedPJE+EZs4WUSXjYggalJY0j6s7r1arTNd4LcABgYR7cnKSpOxms5kV3N+9e1eDsQ8PD+P333/P\\n\",\n       \"atvz+TzTfp4DCLXMhdNJ/B10DJnneqMXzBncs7JyOYgRXEDewZwu5o/iqxHV0TZw2Q4ODmpbr9l2\\n\",\n       \"PpvNaqn0RqORPA54m47m0UGz2axGQ0BuSb964wDrl/WBvHBPdnrB40L+XGyVeyBjy+Uy05mLxSI2\\n\",\n       \"m00WemQzw/X1dTw+Pubh5lxHdE9fWYf7+/tZeb3T6USn06khnMwTqIJRHvNI+Y7XKTsPnSaNqBAb\\n\",\n       \"6xyuM++pTAeScoeyQQqYe3JNibbTyrQNc0EzV5JmXWZu0ZcvX/JZ3333XWZNIipOEjtgjcjAXzQn\\n\",\n       \"y7u7fWSKU8wgn+i2EgE0gluiK4yxETyPK3YDGeE9QI2Rf3S5x8VZBLImnP5hhJdme0img2b+mGk2\\n\",\n       \"fMY8W5/SB8aynPeSMO57+vlkmoyo/620XsQ3cqTgffj0cNJgy+UyD3p0jhTBgA/w0q4C7waKqCox\\n\",\n       
\"M/mGlBFMLzTXmYFo7PPreI65EE4R2dBCBiy5AsCU7ouJcUyihZz7Qy61crm+vq4ZKj6DtI4DFvF8\\n\",\n       \"txqQJcLnviKcJiZyD67HSHFPDKGrIHvXGtA2Qu/UH/fBYfYCZvGXhFM4WIx36YSuVqtadXen13A0\\n\",\n       \"GE/SGyxMUsl26Hkm8uSFihyh2E1oZu6oQA2J1TvFzLtzmgaFwE4aFFQ5hxgrzz9jz84uuF7spHn/\\n\",\n       \"/n28f/8+rq+v01Eej8fx8ePH+Omnn2J/fz/++OOP2vPYmec5jYgsvcBzz8/Pc54IfEib9Pv9WorK\\n\",\n       \"81DW3oIUz+5Z9EWj0ci6PdPpNI0LzhL9wFBRwZw5pB8oTO+KfHh4yBSNS7QQAKF/LFPIgnfY2Tn3\\n\",\n       \"BoVymzvpPOTK+oS0IEaHtUw9qZubm0yN8O4nJyexv78f19fXcXl5Gbe3t+koHh0dpRzhoOLsI3/W\\n\",\n       \"TU418R3mxtyUiNIT4uAAACAASURBVPqZiqXjwtqwE807Wpas+1jX5uyYI4ccUuPKaRrGmmtMlShT\\n\",\n       \"eE4X8jsBgVNbJVWCuZ9Op/Hzzz9nKvH777+vkfTR+RDNTSlA19iZ5B2wSegrPrNDznt7V58dw3KO\\n\",\n       \"nPJysOY55R39TPSMuXJeMyUXzYAF/eE0AZ6Hc4atsCMLtae0V34/NveUTryDe2wbn9nxcvBt2cOB\\n\",\n       \"NMfT7/RS+6YcKZRjRCX8cFt8ECsDw8JAwURUpG3/4zOQLeeTza/xYnG0ZOfIzkFEfWssiuElcimI\\n\",\n       \"lksQmExdKiCfG2akg37wHAwmz+G5OAZ2ePb29qLb7cZsNkuF6kjQjlTZrPD8HROsebadPqIj3tXk\\n\",\n       \"d39mAqQXyUvRFU5NSeQkp47slA4I48czeQbOCTL1+PhY49yZy2KjgsIyqliO2cHBQQwGgxiPx7Vi\\n\",\n       \"nkRl3W43I2krqf39/ej3+9HpdGqEdrgSlisWPwRuo7m8NzsWaaPRqMaB6/f70Wptj8C4ublJwzyZ\\n\",\n       \"TJL/BKLhdQgyhsNII2hpNpvJEyy3RbN2HSThJMAH6fV6uWa4/2KxyKAGo9/pdJ5xbZjLiMjjYVCk\\n\",\n       \"m011Fhv3pvbS09NT7lajdg99tTGFd+UDmr3e+J3Cu6UM4/S3Wq0aemIOFX2IqLhOyATX8xm7C2ez\\n\",\n       \"WUwmk1qtqNFoFN1uN/UjyNLd3V3WmHOQwjwhl+hGxhc0gb+zc4u58jiXQZ3J0f5/RDwzlNZ9BLTs\\n\",\n       \"LiQ4Zf585pwNdkTUdIERCyMXfk/aS0ET1/l+pVFdLpfx6dOndIg+fPiQ3+X8TJwozz3oD/LrjTd2\\n\",\n       \"9Kx3HVRGVOcuGjFjLl3OhHHh/Wl2Qvx+Rsj4O8iSv49Tiw4y/3c4HCY/cblcxng8zoDOa6l03OxQ\\n\",\n       \"l05WRKWLIPI7iPZ7l/23g4cd4n4RVSBd8mbtAL/UvpkjhXLlxahU3Gg08rBcYHwcIe8Y88ChMCHm\\n\",\n       \"mZRH9F8So5kEDCOVjiPqOw9emkATX7l/RL1eDs6Zya8+T8zIEue6NRqN3KpaLiKiGveHseB0eSLK\\n\",\n       \"iK0h7Xa7MRwO4/T0NI6OjuLjx4/x6dOn7I/RljKdRtQCoueonEVpyJ7rQK+4v50Q1+1x1MC7WwE4\\n\",\n       
\"dQt6aHSHcSOqhiDrKIt+eZu4ZdCOL8+DZMt3mAM+s3PpqsxEMJCCcbQg+W42m9qZcJyxRXMK17tg\\n\",\n       \"vB3dMhMR6QxFVHA+le13dnbSAen3+9Hv9+OXX37JeSNt5nGmn8fHxzGbzeL6+roGaTPeRLXz+TzH\\n\",\n       \"+O3bt2nYKA1g2WLnICiCkYjDw8PatU55+7mDwSBRF8j0rDHWFmPKVn6Mk9NE3M+puDLap7iq09r0\\n\",\n       \"mX45GLC8+fBymoMKgh7eg6rQGCru3+1203kHzfAGFXQMxTI9T2dnZzEcDlOv0beHh4cYj8e1mnZ2\\n\",\n       \"bkj5euck7w6yB1JnJxuZ47ukHRkbjzFjh7w5ELXO5X7oKeuAzWZ7KDkZA3Qhn5UImNNCRpbskCDX\\n\",\n       \"LuRJXzy+nkveD1v26dOnrOsXsUUACSL5ng/MdkbA5GeQX+ocgpggf9gQgqzSSTDqQnDjsbS+s63h\\n\",\n       \"Pr6GBuIMiuvxdnbB5XIIjPb3t2eYejMF88DmI48pDVtQ2m/0frk5oEzzEahEPC/MXM5p+a72Ixi7\\n\",\n       \"r7Vv5khhoFlsi8Uibm9vYzgcxnq9rcxqZAkhQ2mWRQxBYxyVRtQPYeV3mp0JFwIs8+r+/+3tbUaH\\n\",\n       \"KGlzDIzAkIZ0X15KQUVE1p4pUS63l9Ajw8wYJcYM2PPg4CD+z//5P9Hr9fL63377rbZw7UwwtmyH\\n\",\n       \"7nQ6z1J0TpfSMGzsfIIr5ftznatw8/58r0RrSv6T+Q3A4nxuY8U8Mp7mp6CULRs8nz6wuMoImXn0\\n\",\n       \"bhgca3hM6/U6C81FbFMwLM5ytyBKkaNscIIiKqMABw4Dzzyt1+tEbz98+JDzf3Nzkwqs0WjEx48f\\n\",\n       \"0ymjvyAWcGoiqiNEPn/+nAiMq8Xj6OM4ukK6+YJ2Eh8eHmpVoMv16EBhNpulowxvkPWOgxoRGWwR\\n\",\n       \"/WJUmH/PA0ggYzOdTmMymeSOTo7T4TOnZm2EGQvvOrZSJjgEXSnRBYwPCKrHbWdnJ1E20negek6J\\n\",\n       \"I8MYK4JA70SEW8WRI5SN4XnsYsTBAmFoNps5lre3tzXqhWkMjKPLERA4OLBk3Fhv1oVGQaxv0Q80\\n\",\n       \"1j8GmbHBdrBTlh1wEfVijS+labATpeFG75jfWaZ+rBuMUDnQub6+rqWTkK+yZATBL7XDVquqMCzP\\n\",\n       \"YcydJcHZ5HrmwoiSUS9nRuyolnbG70WgyJgSKDgVztg4aOD/zC8OP+l9H9zO2vd4GDWnv6w1j53p\\n\",\n       \"FS5uzFwQfON00hfbEds/1qydVSPRphO91L6JI4Xxc54VztTu7m4MBoNYraozxVhgEZWxttMDxIz3\\n\",\n       \"WW67jqgKcFrxmXsFzyCiUho22I5aICgjbDQiFStgC6qjVgs7wsmCtxGKqIwP+WV7zwg/aSVQPB8h\\n\",\n       \"wdh1u9346aefImLrnV9fX6dht/OCUJX5dfqNUkBp2glicZRl9UuyngnVjAOwagm3c1/Gx6gTi4E5\\n\",\n       \"9LZcpxfNu+Iz8wz8dwwWkbnROBbb7u5ujXsCemAC5Gq1So4JKetGo5GGjsZY4oDCRYiIJFHD2+G4\\n\",\n       \"CcZlOBwmInt3d5elCnAqWCs4CIxzp9NJJ+jLly85z4wTqI25KJztxljjvEdUpFKu6/V6NXI3SAtK\\n\",\n       
\"3ak9z2WJ1JpXiEMUUSHDk8kk1yFHOiFfBwcHWazRpPx3797Fr7/+GpPJJA4PDzOdyVzQjzII47mP\\n\",\n       \"j48xGo1q8oQMErUTLEZUyh3U7fb2Ntc4SF2z2czaetSgm8/nuQ7hkJbIAqgQ1bO5DqRqOp1Gq9WK\\n\",\n       \"N2/e5FzjRONsgZru7u7GcDhMFNOlH5xGIx0DtyWiSqF4LZUpf3SHxw2HCGTExhWdxHOMvKFbQKWM\\n\",\n       \"1iMbjJObMwoue8H7o0OYZztE6Ogyw8E8YKdw/iMizs7OYjAYRL/fT7TLxpo1wPyyLthYhBPselfI\\n\",\n       \"ptPPdrIZu1JfMidOJ9q2svYAFYzGAjY4qOXnfD5PDibyaF1DUMPmFY+bwRTzA5EXv4vXE/d239zo\\n\",\n       \"50upYt+3HBuanVTG+yXELJ/31U9e22t7ba/ttb221/baXtvfbN8EkQKKM6ROtOjdTbSSiNxut2v5\\n\",\n       \"TbaKgkgRRRF9cE1ElbYi2ii9fK5z3p5nRzzfjeLcrmFEPP5yq2ZE5WE7v813F4tFonM8F/SGNEwJ\\n\",\n       \"xxIl2isnAiJ6IqKk/2/evMldUOW4ma9E2tRoHTs3HM3w/jzDxUhp5go48qSPRrYctRFxedMB80TU\\n\",\n       \"YXlhTBkfEB9D3/7n7zsq4xk0okyiYefbneJljLyLCcTUkDz3plimizPSnA6CbI2MjUajJJvTD0oV\\n\",\n       \"rNfr3P1KqgN0zMTw6+vrTA9GRI1nSPFGiOjr9bYg7GAwSETCZ7k1m82MTg2TszGDXTZeX57XMvJj\\n\",\n       \"HszT4TPu2Ww2M93OO3neSP9Np9P48uVLREScnp7G6elpnJ+fJ7p2dnYWEdtipaQjymNwWq1WdLvd\\n\",\n       \"rKZdbnN3BAySxHvs7OwkCd27Dw8ODmpn5a1Wq0Sk2u127nRiPTo1wc5ckEjmt9vtxs3NTdzf38fp\\n\",\n       \"6WmmeSO2aNzt7W3tXNDPnz/n3LNrmirlfgfSlWzbd2HGErkreThO+fi7rIWX0n6sXZPCzZvyhiMj\\n\",\n       \"9Tc3N7V0nrmMcM1A9FzklP6AtnqTERXvWYtlio7rSM0bbRqPx/Hw8JAHHTslxvuhA0GA2CQEKlva\\n\",\n       \"nVKX+TvWMUZWPE8eU+tK72b3GBphQhasv4wE2656w47l2J/RvHnDfK8SvXZDB9CQC/PzLGv+vtG1\\n\",\n       \"krLi76Gb/lb7Jo6Uc79eiHt7ezGfz/P8HZcjcOopokoVke6CK+C0gQmPJsnyGUoNY2WyuEmK8J0i\\n\",\n       \"qoNr7dwxyIbd6S8ChRBxv5eEIqKqKG2SekSVUvMuLjtm7EJid8779+9rBGyUlQn1kHCBcO0EsojI\\n\",\n       \"h9uhdB7d/TB5Ex6CU6nMO5/RcNY45sHj5gUFhMtckLu2wjCviTktOUkYJBZPSXAu03M0O1+bTVW9\\n\",\n       \"PCKS00ffmCufUQbEDZTPHDN3BBMR1aHFm80mIfPLy8tot9uZhvIOMI5egVuFMWTH03A4zPcnRbFc\\n\",\n       \"LuP29jYNMM/lvZl310QjWMDQuEr24+NjOhdwQ3h3O4iMH+/nTQOWJwcH7D7z/G42m1pQxc445hg+\\n\",\n       \"5d7eXh6lFBFxfn6e1y6Xyzg9Pc00JGOMM1Te//DwMIbDYe5CshzjgMFnImBk3tFNDpQeHh4yLQz5\\n\",\n       
\"HYfv7du3WZkeA877XV1dxXw+z40k8/k8rq+vIyLi+vo6hsNh3Nzc5Jicn59HxNbh7XQ6cXR0FJPJ\\n\",\n       \"JO7v7zM9jY7iMHdzwJgXBw7mgNqBLIOskjPl/5vrik3gnjaQcIa8gQQd1uv1ckMD482OU9az7QCG\\n\",\n       \"HhkpUz+kWZ1KJMBwSozmVBMbB8o00e3tbczn86wJhwyjd91XvzunZJiyAtfHQaavQdegH80v4r1Z\\n\",\n       \"27ZDUDrQiTs7OzUSOY10ozlbLjNSliPwmvBZkgRGTrGV69ubFPiJfBG42z4/PT0lxw1dZeespOv4\\n\",\n       \"npZtpxzNT/ta+yaOlCMdIz9MAJFZ6UmbNMxLW+lF1POeOGclxyqi8kbhTxhZstKCe1Q6LRhtGwV4\\n\",\n       \"HDzXdZTw9B19lVFGxDbyYSs8jf6ASpnESnRAH+DInJ2dxZ///OeIqKNm9u5RROYyRdSj/VJx4Cwx\\n\",\n       \"BnZ6PEdE5GVES0RjYry5co+Pj1nQknd3P02AtbCzMPx+kCPhgXn3lftqArhliPm3sXx8fEwekzco\\n\",\n       \"REQeVOv6MCa/866Mh40UxOCTk5M0yMwFCq3X69V2fEG23d/fj5ubm7i5uUmjGLGNzN+9exftdjvL\\n\",\n       \"MfAOnHPZ7Xaj3W6nEWY8QdSWy2U6bnCROp1OlhdwLR+IuuzOY2wODg7y7D94QOZGWX7Kmk44wiYB\\n\",\n       \"05gDnE0jmRidbrebO4W4L8URkcXxeJzrzXxEEGLvsqL+krlaljfWi/tqAixrzeRfrsVpYp4uLi7i\\n\",\n       \"xx9/zHpQ5kCenp7GbDaL8XicBoq5o4gqKCYOQ0TkeZD9fj+Ojo6Ss8V4QuhHLzpYctHViEhuF5+j\\n\",\n       \"L6xHPVfMqaN9xoyxLPUJP12ehuvgG8HDsU3Z3d2NyWSSx/3QIFC7bz5KCJ2F3rGjx4aP0sFGP1G/\\n\",\n       \"yw6RETj4jSCONvKsA6NVBwcH0e/38/BhGn1DhyOrHmtsA0i59Tc8Y+aNcSVQcNDrHX3Wl3ZC7Nw+\\n\",\n       \"Pj6mvvCceg4NWHgOsJ/0hXEpuV44SThaRn/R7eiv0nZZPiPqO02NJpZc3BI9K9s3caSIQDlDKqIy\\n\",\n       \"3nj0EVVUjjFx/Smu8w46tjgbqjWZ0QvRA9hoNGrbZ2kmDzMZbDlfLBZZVdnfZ2Jd04d+skAdRURU\\n\",\n       \"njKRhreP4nnTdzsdhndZaOxq+f333zMtQ7TgCIT7EiUTVTBm9JV3MOnSEZ3fEcSNcTOsDDyKovbu\\n\",\n       \"DZwkl7tghxmIAn00wmTFWUYNbPsGHSzJ5jakTichQyhmO8MRkc4TqI4RE8afcaU8BH3keqe0+Wy9\\n\",\n       \"XsfNzU0isTa09JsUK+k7vutzKvnuyclJ/PnPf47lchk///xzbhWOqB9EfXx8HJ8/f07jPRqNYjgc\\n\",\n       \"xq+//hrz+Tz++Z//uVYVm8NFHfkxbswrDqxRB2TBKQ/mmnQRP/kOzyIt7bQA+oJ7r9frRBsiIonU\\n\",\n       \"EVuDPxwOawocWWq1WrkJICJqKAuyhh7yzjmTxbknShvZZg7ZtMD9ymAKA8V72Rn8/PlzljGhDk9E\\n\",\n       \"5CHVpAmNLA2Hw3TIkDWnMJrNZr7z6elpzhP6zP1xw4DRP4zf175HY97soNiB8Bi4ryCg/N33BMkB\\n\",\n       
\"mXKwvL+/n7tbcTiQCxfxxa4Y4XbWwrvo+v3+s4DURHtqc5UoHn3l2aChEds0MmsIh8Zzwd/R29wX\\n\",\n       \"mTdoUAatzAGfG3lhNy/3N9JlPVrW0UJ2cIYcKNDu7u7i/Py8tmHqawAC9zOyxHqCkmD6i9+FTSH8\\n\",\n       \"3fbRqVv3zde/hJqCFtvu8+wShSzbN3Gk4D3YsUEw4BLYCOGEYORQ2hFVntXIhJ0bL+oSbiUatEce\\n\",\n       \"UUF5XsQ0Sgrg1ft6eCFlnpdnM4H26vkek8f2YxpCBkRv6JvnoaBd1+Xm5ib+53/+J969e5fCZQVp\\n\",\n       \"hYZAITgU7iQ6dMqMqAyj7pSox4n7enu0OQ3sooyoc6eOjo5qc2jDymIuI136X0bQ3mJeLgI7qDYO\\n\",\n       \"GGyQH1cIp+9sqfZOGuYVQ1vubCHqB+Wy88a7LRaLODs7i7dv36byhRO1s7OTqRhQIJxElJDTficn\\n\",\n       \"J/Hly5f4y1/+kv3zDsPNpqoX9fnz54ySj4+P4+zsLH799df4t3/7txiNRolWWX4xHjjuIFdOjzA2\\n\",\n       \"pBeRX6J65qaMcC0z/KS4KfNE1EgARbkC5Jt1NB6P4+rqKtN7jPV0Ok0E0WmKiCrVzPsa0eC5rENz\\n\",\n       \"ID0nBFPMuWuh2Tn22mQdeJcoyBAHSCNH7DRkTq+urmrlD0hZLZfLWvqK9XxwcBDz+TwajUYeLbNe\\n\",\n       \"r/NvZb0vHEXQcfpq9LB0dkp9a2e6XMN2tLxTmLHCqfKa9r1KRAZaAzqSNCsZA3Y6Wi+xhlxjzfOE\\n\",\n       \"LLDuGXsKozqQtGFnLnd3t0dGkbpdr9e5S5L+WbcbzbFT5NQZ3/OYGpHC4TH9gnHmsxJUMD3D92TO\\n\",\n       \"+d32koYjaX1HfwEXPN6lnJjr6rXC2mGMCFT4WdpaxuQlCg0yZkCDuUeH2PHj2S8FDbRv4kgRuboy\\n\",\n       \"LguFgXZ6A8ElNWAym6NQBJ/rgMxfguVs8BAmC1TJb7IzRokGCgnaaDrHbAWMAWXyjDI5V1ymgEyU\\n\",\n       \"JJowcR4BiKjOboqIVB7z+bzG5fEc8E7l+FDg0+ifieCPj9vCgERKJqLjZLJ4HQUwN1zv8WbR7u3t\\n\",\n       \"xWAwqD0P561cGDyj2Ww+g7Bfmu8ynWDj4kVq5edUKoYc5NBFXDGIZWqpJNAbCXPj3rPZrKYYSDc4\\n\",\n       \"rWsiPGvp3bt3cXx8nPP/yy+/xHQ6zaNOOp1OGszJZBLtdjvu7+/jl19+iVarlfys+/v7GI/H8eHD\\n\",\n       \"h3j37l3M5/PkaQyHw5r8N5vNTCX3er1ajTdqNDH3pOFKYjC/o8C95X61WmW6y2uAe2LQIbguFot8\\n\",\n       \"j263G7u7u3FxcRGtViuurq5qDjG8osViUdM1oGJOG7h0ByiYgynPhcm13JO0JH8j6kU+HBARFdPM\\n\",\n       \"2cH4R2x1InXnms1m7QgcUw9AjbzWlstlOsOTyaSGkJC+RO78fnakcX6c+kKOy9QcDgY6z4GN0WUC\\n\",\n       \"ytLJcurdDgrlLUD8cEYw2qw3r2mCddKY1lFlOYSIqK01+KmksZE10Cg7krZJ/GSMQH+vrq7i4eEh\\n\",\n       \"hsNhGuoyDYUzGBG1NWAebqNRnbhAs7Pg9394eKihcLYDlhs7aJ4fxsuoIt+jkLBlH33pFJznHBvy\\n\",\n       
\"UvoM5L/MKPEM7KVTm3DJsF32Byy7OHhG1bDRRspoZUarbK/lD17ba3ttr+21vbbX9tr+zvZNECm8\\n\",\n       \"XrZXR1TpHVAHIqaI6rgHvlOm0xzhOq1Cjtv5WrxhQ6cR9ZOg4QuZ4OocMz9NzIyoDtk00dqRJ5FE\\n\",\n       \"yZECojQnx6mCMlIFIen3+5lLh2zqCGq1WuVWbXviEdUuxvL96A8RD1GRUQLG0DslGEM3eCvM79PT\\n\",\n       \"U0aKoD00ECLuBWy+Wq3i06dPNZlwpE8EATrAeNNPoxCOMEjLGLrnJ5Esc2GEymkdkBe/O1EQiKej\\n\",\n       \"KKc2DRW7j6vVKrflR2yRE9AIdn8xx0TU8Jqurq4yKifVsF6vs/gmxwPNZrP48OFD/P7777XvMg/t\\n\",\n       \"djtLHPzXf/1XjTT++LitdN9sNuPs7Cxl4c2bN9FsNnPXJegC/VwsFnF/f5/8LG//h+dIpI7M3N7e\\n\",\n       \"JuLGdaQMXO4EVMtnSTKHoCx3d3fJL6JYJRtVqJJuuQDlckrFKACy7J3FToN5rTkt32w2a0flgBqA\\n\",\n       \"upacndlsVkPHQEH29rZVvklDNZvNODo6iogt4vH0tC04aY4JcxFRbVlvtVq1Y4g6nU5uvTfa3GxW\\n\",\n       \"h7qDrlhu6a+Rp5KozHecRrWclOkwo/boBtYr6x2S/nq9TvkZj8epv9kFDlJrkjVjZBSbOYB0bfSf\\n\",\n       \"MeC9jVSD1L6UAnKKESSNOQDd4h14X3N0sCfWr4w140wWhf4Y8fb/yRYwT6UtMupotMopwXJOjcSW\\n\",\n       \"5G9sEmvA/GXslncVWg85Q2MEDLQJm1pSe/icfjjzgNz67353xtlySfv/HUeKF/cRA6PRKCeuhH9R\\n\",\n       \"vHd3dzkQziV7EixQm80myxsgXObeuLq04UgTmq0sI+qnjuMQmE8AlAgPiPuQnvT7lVwonC/qCdFw\\n\",\n       \"sJzO4O/NZjMNCNWdIyKP2iAVEVFPUdrZLHPWKOYy1857wEkBtvUCN9HeHCAWC6RnUlgRVe0RHFvn\\n\",\n       \"w9m94maFTr9NhORvhoDtKHuueJbLQhhKN9yObABzo8QtM9SegdvCHOMIO13sBR5RGRAO8KU/du4a\\n\",\n       \"jUY6mcDTjcb2CJj7+/sk7GMIu91u9Hq9+Otf/5ppuH/4h3+I6+vr7AOORcR26/xyuYx/+qd/irOz\\n\",\n       \"s7i/v4/vv/8++4JcTyaTuL6+jj/96U8RsXX4qEeEUTG8z/U0r1FkCGWJXLCeSdm22+2s6u5UD/do\\n\",\n       \"tVq18g+TySSJx+bBwTNrtVq5Ziw3yBHv4DIVbJDxzuGIKi1kQ0Nj/drIUKZkMBjkPLCbkLno9/vp\\n\",\n       \"BN7e3uaOyIitQ9Tr9fI8QY8JKRGcTHZBR9SP68HBcKmV+XyevJ1Wqzo6yBwlAgzrTBuy0ijyuR1C\\n\",\n       \"ZB3jzBq2TrTTyucmXOPYOOhinnCsKDvA+u71etl3TgswSdsUE2p88RmnBUDdsAxzHf0qid+uis67\\n\",\n       \"E4ShKyIqLmV53JhpC6Qynford6cxFyXlgPuZjmJnhb+x7uzYoYuYF8ubbabnCVACp9XOGUEA82c6\\n\",\n       \"DQ4R48r6sSwtFosMLO0M02/e28G8OVpl+o7xNAWHd/9/tW/iSCFojlparVZuyTYxNaJaGNQ2Iaec\\n\",\n       
\"L6H8f0RlkDA0zp1bgKzwbbxNfkZImQy+78E1zyuiyqdagP0cFizfJyLHWHAwaERF7ja3onSkMO6t\\n\",\n       \"VnWqPErJx+V4MZHzNzfFAscYec4iKtStRFx4BouMe3vHE0YIUisN48+7lk4m6BmOFgufd2ShOUpy\\n\",\n       \"n5h7I44eEzvtzHnE81PVPdc4IEayMDL0uVRgzFXJ40IJ8hlISUTUtqYTQTI/OG0gKJxnxdwdHR1F\\n\",\n       \"o9GI//3f/62VMWi1Wrn79OrqKtrtdjpB0+k0/uM//iPG43FMJpP405/+VON2YNzn83kMBoN0sggU\\n\",\n       \"iPpANRjnZrOZu5ow8owTHI+SbE2Q4I0NyKiP02H++v1+EnnH43EiZPAIcUIcUCF/DixK3pbrW/Ee\\n\",\n       \"IJKus+N5K51GjAZGkXty5l+3242dnZ2YzWbp8D4+Pkav10sZIApHLkajUR4bRF+RW8uvOSQU5+Tc\\n\",\n       \"RIyRx3uxWGQRVgez3IuAzOuC57sfNtDmQZbcSX/uYMgoNU42fWVevIMYmaJILXO9XC5rXEWXJzFX\\n\",\n       \"Ep3NWjY66KKTBC9G3HBOGK9y3fNeluFms5m6H9vAO5Q7PyHJc2/GxFkVmjdyvBRE4pzj1JtfxHiU\\n\",\n       \"G4jgWuGk2H69NJfmKbNxjADGGwbIlpT8KPsBL70DcsguaaOcrVZVWsd/517oW29qMO8aOXVdSNb+\\n\",\n       \"19o3caSYFEogRFQkQG+pNgmw3W6nYvYL2XiWRE0GgAVgeJAokf7YQNuz32w2uaWbe7qhcLknjk7p\\n\",\n       \"nLFLDQ/aaBgCjxDv7+8nhE8dFJwMjwtRCv2zQOHBf/nyJRVEGUXbgTBaRZ9N2LOQY/DL+zlKIerk\\n\",\n       \"XovFIueY75FSWK/Xeb7i7e3ts8gU+N2pFfrvui1W0I6WIuopX+YbZVo6p3ZCgcLpJ3Lj/3NPGjt/\\n\",\n       \"jMp4zB0EMI+kPIlE6f90Ok2UBkTECoxgABTCjjSE5GazGZ1OJ43jYDCI/f39OD8/z5pd3PPf//3f\\n\",\n       \"Yzqdxs3NTbx9+zYJye4rNa9+/PHHnEPqez0+Psb79+9ryDBGwakNFx+ldhrIsaNg3os5NXpyd3cX\\n\",\n       \"nU4nlaBR0JubmxgMBnF5eRmj0ajmZHqDCGkVxoZ16ajc+sBrFJlg/hlHo5Zch0EhYOH9IUvz7tZf\\n\",\n       \"k8kkv3t2dlbrC87saDSqUSUYt3a7nboLRwuZmc/niXAdHh7WUD7Gu9y9ZxKxEfcyrU/gUeoZrgcB\\n\",\n       \"t/EtEYDyOtBEB4bM/3K5zFIPzCk7StF7dly5D3rE5x4yXhGRBh/75E0P9LcM6HDSyjQvc4Ls29nC\\n\",\n       \"QIPaYF9wPuyg0kB/SC8b4UMWsYu8p8EG+o4edzBgRBF55Zl2gMpd97x/SZnB1nFff8Y9WMcObgjI\\n\",\n       \"CKJs51kv5WYgnjebzTLA9o5Vnm09YZuHvbPTH1GVdvlb7Zs4Uru7u3kMjA9njaiiPhucdrudfCoW\\n\",\n       \"+0vfL1EXDCyD4PyseVYINM/jXkykBxXhKhd4RJWuMcJkhW1nwGkfvm+0A+EYDAZxcHBQ40PQ4ExQ\\n\",\n       \"HM4CBe9qsVjkifXcm/6U/xzJ0Fe+S7OBJCX3krPA4ud5KCHm0NfgLLIbzgubcWOePcceU3+/7AvO\\n\",\n       
\"UOn0WaEy9+zEw/Fy6oPrO51OLnBD/yxCO+WlU8DYGlYG5cAZ8Nig+Oinjfju7m5GyqQxnGrkmBC2\\n\",\n       \"/r979y7l5suXL3FxcZE7Z/mMQ2zfv3+fUT/je3R0FHd3d3F4eBhv3rxJ7mLEdm1NJpP48OFDNBqN\\n\",\n       \"mEwmKfukz3AEHh4ect3zDGS/THmyFghCGDMCBYw6yINT0IzV/f19HB0d1Zxq5AinlzH10T4YMqf2\\n\",\n       \"LQcg3jwPAE5hCwAAIABJREFUJwFuF89DP1EM0ty6vb29dOyYCwd+Z2dneXyQFTpcQZ4F0sX8Ird7\\n\",\n       \"e3vPnIxGoxGLxSJPkCC1wgHAIC42sow7awkn7KXGOkX2jcgQXDht4u8ZIUJ/oJuNLhCQPj4+5vEt\\n\",\n       \"Tl8a2bYRxrkiI+Lgw4gEuyl5b8pp8CwQScY7YmuL2NHpNJkDVgelOE7oRae2kF9SgiWPEn3vXWpG\\n\",\n       \"UMq5cMqs1OvuG84EffG6NE/L/2eesKnW0fSV59pRs+PJWqPhYGGLDZKUjqRtADsHoZ+UTqZ3QZfO\\n\",\n       \"J0h1GYwzfmUA7PZNHCkWLk5ARIVycO7US3WkUHoR9e3t5aQZAkUR41BZEFjcLxlZ/k4E4QgDZ25v\\n\",\n       \"b68GG3PfiEp5um9WEBYM5+TL55HTpdQCW8jd106nk3V0SpLf8fHxM3Ih/cHZiqhQg4gq522iu+F/\\n\",\n       \"uEAsNISYBY/w+/1xOIn0rRSbzS0Bt9vtZmTgaN78CCslX89Y+DPfAyeN3/lXKvanp6c8S5BoyqkK\\n\",\n       \"5h8EjWgex8oRr+UUeWPeS+XOfZ2q4h3m83kqCwxuRIWacL3R3C9fvsTT0/Z8PY4FMX/u+vo62u12\\n\",\n       \"vH37NlqtVhLRV6tVjEajNDj9fr+GvoCQUJuJfp6fn8fR0VF0u93aWW4RdUI5TjQ6gNpwpOncWPc4\\n\",\n       \"J4bXGQ+cEPhnyCn13g4PD9PZNCeP4An5dkFSlCny5Ll0WsVGH8cGI2QjjGOy2WyyZhByA0cKOWu1\\n\",\n       \"qlIUyNf5+fkzsj1oCRsRkAEaaE273Y7j4+M8IgbHeDAYxGQyqSHdh4eHacDMSWFdeCxIMzr4fMlx\\n\",\n       \"Kj+zbuY9+P0l9AXUGB2OA4qeaTS23DIQHdZFmWLld/cNWXHNI5OUbaAxvs5yGOVh3eOA81zkxMib\\n\",\n       \"ZZifOF8u8Iu+LtOFpfxRssOZnPKaMsA0guTUHsip0dWI6lxcxqG0lw4+nTovEX87Z5av0pnFcUMf\\n\",\n       \"e+5xlqbTaXKg/TwQfBxinmPqSGlnDCaUwECZxn6pvZY/eG2v7bW9ttf22l7ba/s72zdBpEajUcLU\\n\",\n       \"eLx44ERtRiyI7rwLjobn7Dw73rijVe8qiKg4UnACSvIcKTryyWWEY8Ia183n8/SE8d5fSo85vRNR\\n\",\n       \"oTygbsDKfBcEp4wCgURB9oCxI7Ze+9PTU0KcLrZGg+DL+72EANK4LykK+ASGPP0dw9f8NNLn1Kqj\\n\",\n       \"J1IP/szIAXPnVsLX/I2oDfnwThpSL07v0k++T3Tmwqr8PaIqLBtRFTEl3VtGMKAN5PSRL48XyILR\\n\",\n       \"KpAx754qd6yaQwWfaXd3e5grxNvpdJr3JAV4cnKSu1dJUfGsvb29ODo6itvb24yS4Uydn5/H+fl5\\n\",\n       
\"/Ou//muSu5fLZXz48CHu7+/zqBvI7WzxdukLk3+R75c4jUTcpLG9RknhuOQDRG3QwX6/H81mM5bL\\n\",\n       \"ZaJg+/v7MZ1OE/32bjgQpfV6neiaURNHvE5v0CdayRMBRWw0trsu6TNn9+3sVEdcse4oQ4E+cLqQ\\n\",\n       \"MfVmmpLW4PQIpRFms1mmUTgPkesXi0WmF42yMGZ7e3u5/b9E5Iy8Iqfl+nbqzH8rKQ4eR3Qlusop\\n\",\n       \"HOYNBJjyFug0Ut9GGczrRCa5rtfrJUrDeHqNIqtGuv2u/CvHxTQCnut3QBdYlmxX0G/Wf6Q2ndJl\\n\",\n       \"DYO2Oo1Jf8wlJV3qFNbj42MN9UQ2yh13nnOPATbTGRzktCz7UqJm9JvP6OvR0VFSgXxPTsG4v79P\\n\",\n       \"/vR4PM77QXlxdofM0EsoE/rbxw5FVPawTPG7fTOOVEQFaUZUC/D29jYrxZZnv5FOi6igQBoD6p0T\\n\",\n       \"XGuCop0pjFu5648+4sSYQ1OS9pyiW61WydPwYvLzEG5PMEqfrcdOa5owCJmXMSOdAFfHfBYWCie9\\n\",\n       \"25BFVAbMBDveA4fMZx2WitJpljJX/lLqAy4AzgsOB9fjPGAw+Iz58Th4btfrdRoGO4p8D+VgJeCx\\n\",\n       \"Z1F5yz0LB8K2nSzabDbLVAjX4WTZKTJUzZbykhPnvnItjflhF9r+/n7tOIl2u52E29vb20wLQeTE\\n\",\n       \"EYLPxvwOh8N4enqK2WwWnU4njQl9xYCbl0NZh0+fPsUPP/wQ+/v7cXl5GRHbqthwekgn8e6LxaKW\\n\",\n       \"0sTBpy+eAzvJpPuQTXMOcaSY12azmbvfIqrDu0nvcbxOxJZSQLqUtca4jcfjXL+c+4mTRfV2gjOn\\n\",\n       \"r1iLcFbKtPbd3V2NZI5Sht+GnJPG5DrkBFnhmJvd3d24ubmJx8fHGI/HMRqN8jtsIMDxMmmY47fg\\n\",\n       \"m+3u7qbjtlgsYjab1Wp0OdBAFk2JMK8ThwXdUKZIWL8lcZfxdnDFPfn8/v4+D5+m0Q8cPJxFKtZv\\n\",\n       \"Nps86BvZn81muduPPvPZZrN5xlWyjuP9GAtXkmcOv+ZcW/e7XA625SVnCV3h4Jl72W7B47Njx3g6\\n\",\n       \"nY2cEpzyPrwrOgSaQFkuh8DFqVOa72XqBoFjs1nVXfTYOnh0OhlnuNvtRr/fj+FwmOvCO3b5PjaR\\n\",\n       \"6vN7e3vZJwe+pFDNieZ5zBv3sK9gp/ql9s3KH5DnfMmrXS6Xz4htJRHNxpt6VPP5vEYAxji3Wq3o\\n\",\n       \"9XrPJp9m40YzEmSSekTdq+f3iOqEcHvldgjg6UBwtHOyWq3i5uYm+v1+HB8f15w/cuB2eiIqD346\\n\",\n       \"naaxwcj6Pfr9fvJQTJDEszefw9exNR1UgDHFyUIB2EB7F5sdD4wBC6fMj+O8lWRrE05ZqH+LM2Bl\\n\",\n       \"gnJizFx0kTEw0sX7gS4xXy5GimPC2PjdI6LmyL+0HdqLn35gdOwMlvwpHG3X1aLEguXE26XhdGFI\\n\",\n       \"mA8iOWoQcdRIxPaMvuPj47i9vU1ekhX/bDaLH374IT58+BAfP37MvnBg8tPTUwyHw+h2u1lSAYNr\\n\",\n       \"JWpngeOCvCOPuScQYi3ZAcEpJ9o1fwynDIeJDRuMG4fQQkZnHo1QTyaTLD4asTXCw+Ew9YvHNKIK\\n\",\n       
\"viyTyAF1veA2cU8MoI054w2SzK5GNt1EbJ3B9XqdZ+xdXV3lzkOcAwwOHEueNxqNYrlcxmQyyaDB\\n\",\n       \"c2iU3zoRuWQeza/h3c0DMv8S/VeuU3OCynHzWqH/5Zl5BC82dAThOMDwq5ApHy3jYA++TatVlTJw\\n\",\n       \"RgF9CZeJvoMOsjvNtovfzZGzjkKvY/i9KQgdav1Iw/EoESCutS4xj5U1ZSeS96CQK+jL3l51HFm3\\n\",\n       \"242Dg4Pc0cnYRlR6jHVjgMRrgrl0AImjz1zwGU4XZ0xSuJNxA7TAsYNz6B1+Jd8Jnc544eBFVPYQ\\n\",\n       \"Hqht9/+LHxXxDc/aM5kuooLrcDIQLD7jOgsEn2HsIGPbATMZknSXrwdJKCMoP8cKw9FxmbZD8bCg\\n\",\n       \"PAFMEg6NYUyUy2q1iouLi2fn0Hk3CgsronJIqeHT7XZr6TkWY6vVyoNqEVQrH0dlfueI+kLgmShU\\n\",\n       \"E/Y9h95JYqfBBFEraIoGEkWXAu77lOPNvf0ufIdxBbXw9w1D27FzGtCy4zFtt9uJQLwUdQMNO51M\\n\",\n       \"UUWQGRvhMvr1+/r/L6Xh+I7J18yb0zBOC+Fg7OzsxHg8joODg0RyTk5OotGo6rusVlUNtdVqW+l5\\n\",\n       \"NBpligiEBIfkzZs30W634+bmJpXb4eFhbm9eLpfR7XZr6xfD43pREfVzJnGkyzpeGL71el0bm8Fg\\n\",\n       \"kMZnNptFt9tNYz0ej/N9n56e4s2bNzUna2dnJ1EdIwiz2Sym02kiH05RsduNMXeg1G63c+fiZDKp\\n\",\n       \"obm8vw/Z9U6p5XKZQRYOCeMNujSdTms7KNlVulqtsnQK+ouU/f7+fqbGXOsNB5Q+uoYW6TICCqP9\\n\",\n       \"OFjonDJN85Khj6ijURi5MmhGL3gOJ5NJjMfjdAScgqWvh4eHiT5ZbhycujmtizNhG8R7gcw6eLIu\\n\",\n       \"sL2wYw2ibH2C7POOTkPZoS3BA+anJK7zLGTE9sDX4jiUtm13d1unsdPpxMHBQe2gc8CPk5OTmjM1\\n\",\n       \"mUyy6KkRQt7D+tRZkzIVuLOzk/ccDAa5RiHvs+7YMToej2M2m8VyuUwHm/WKHSkDGmcn7Jyie7Dt\\n\",\n       \"6BX6+VKGye2bIVIMnAWNhU9kWfJWIiKhca5DiBlMFg7Piah2BrFV0v0oHQHuzeQzgDx/PB4nKkB5\\n\",\n       \"BDsLPt7CSA9pBN4F+J/rIrYTNplM4vLyMgUYp8XRkB0pIF5H1VyH4eEdOMCUZ6LsEXBD0oayvWBx\\n\",\n       \"2jyPFkbSli+l4UB/XkoJehu455doHYcXeSjlyf3h/dmlQXTl6Bk0g0Jx3lpr9Mi5cuYd5Wf0D6O7\\n\",\n       \"Wq3yfu4nyr6s98W7olDMV/Bc0Mx1enp6yiKQIDJOlYN8PD4+Jl+AvqzXVYHbo6OjWuQ/Ho9TuQ+H\\n\",\n       \"w1rEyRZ+EElQECK54+PjuLm5iel0mn2hphHpFiJ+xtvOyO3tbXKrdnd34+zsLKNhR5gYHvpFGpax\\n\",\n       \"wXHzESl2el0qgfIQEVVNK8ogePzZ+Qj9wIYYhxVF7Xd0KQbk3Gv+/v6+dugtzgm7eClT4CrmrAfG\\n\",\n       \"ZjqdZn/u7u7S6eB+vAMIpE9a4J6grfTfc4NBQmdbvpBhr2enjNCRyFRJvbD+ceDgNc/cIVMgijc3\\n\",\n       
\"N6kj4Qcy9kbduQ9oFSgPcxZROTY8z2uUccSJBGHku+h1B+iMhdGl2WxWQ2boQxmQeecfes16AHk2\\n\",\n       \"CMCzmQsQd+tFEDBnTEqH/+TkJB0ngghKDlFU15mDyWQSFxcX8eXLl0TNjbZbfqj7xrh5N7jt/HQ6\\n\",\n       \"Td2DrqVw87t373ItgdRbXzqgtn13MVFnDdy8Tkod7GC+bN/EkQI2da4VQwSU7Tx1RP2sMje2MuMY\\n\",\n       \"4KxE1CvQAsWW6R0iMASSZ+EIlAt/tVrl1u7BYJDRD/dcrVapwHg2DQ8ZI4/iM1LTbDZrXAAbAD63\\n\",\n       \"h2042FESzgef43Q5p0wEQMqkVBwIpAWohJq9oFCWOAXmifCe5lvRUNAQbv0eVrLlAn5JPvhbyVPw\\n\",\n       \"IsXweu4cudlY2JHi+8yJiyc6jegjbTx2yAPGAiOPkraDamSNOXcV84jIjQk4py6Sh1LEYcfxYdwu\\n\",\n       \"Li6i2WzGn/70p0x3RGxRl8lkEu12O3q9XoxGo+wLHC/m3+nynZ2dODo6iouLi5hOp3F4eFhDbs7O\\n\",\n       \"ztIRfXqqOFInJyeJvJQI3/39fTq9yBuK9+7uLobDYRoC82AiIonpjJ1lkfXJfSeTSQYuGFqcKVBA\\n\",\n       \"3h8Hi3F1oUenFOBXRWyDr/l8HqvVKrlnyJvnDF3E86bTaTorOKomhiPTIIrML6kOc2l43nK5TN4K\\n\",\n       \"fCGaNwHwTPpGSgr9Rl+dirLDYn5VGQQZ8WaNOJApr2NcrQv39vbi+Pg4ms1mXFxcZLAQsTXsLgJp\\n\",\n       \"buhms8l6WWUpFqNRERW/LiJqWQb0rtNb9JE+lzoOFAj9xpihR5BTb2xAp30tyDVqwv24FufO+svf\\n\",\n       \"KYNYz816vT2fs9Pp5Pu32+3o9/u5PsqyGaenp7G/vx9XV1d51BHzyRhjSxwsUtgVmgLyDeL4p//v\\n\",\n       \"CKqIqNlLBxi+J6iYuVZG/rHRzJODIQc/zohZfr/WXssfvLbX9tpe22t7ba/ttf2d7ZshUnjbJiSa\\n\",\n       \"0FdG0EQ/L6FSRKrwHJz2wyMGHizRqpLgF1GPkEgdmo8CeRS40ygI7wTXgqjFRDzQFzxeUBATac11\\n\",\n       \"MTpjIp/TlbwL1xEhE0X5ufQHJAK0zrwl8yQcCXIfPP8S/jdq5pQoCN9L3r1TcyCTHm9kxTwN3tE7\\n\",\n       \"kIxaMqaeX1f+Bf3kXQ19g/LwLMshssR7esemPzf3B9kiMkOGyk0McBQcUXEvpzrpN3NNuujw8DCj\\n\",\n       \"cu7DmB4eHqYsXl1dxXA4jH6/n7sPzeWCHxGxRUW88wfonvsz3p1OJ6bTaczn8zg8PIx+v5/jPp1O\\n\",\n       \"k7S+t7cX19fX+e69Xi83aJTrwOgb/DTLN2sK9MnzX3KV1ut1DX00Z+b6+jplsNvtJmpwfHycCDl9\\n\",\n       \"6HQ6Od6gPfSHdJA5fhGR6BQ6ylxN1hbIsPlqFNXkHg8PD5n25FByZMSEcsjUpIE7nU6tn8gKRTlp\\n\",\n       \"ICnmOprnVPJyjIiUKSzTNlgL6HGnAXl/aAfmMpYcxZI7tbe3l7tPXSKk3+/H/f19pkJNGWAeeUdz\\n\",\n       \"hDjmaHd3NzqdzjPEgjWPHbKu8fp01sDjgAyASNlegLTzzu4jNsgFnEE1bRPMvcIucF/3weP4EoEd\\n\",\n       
\"efTmFdZlu92upWkjtutpMpkkSuq0O/PN2HlDCMgmyHC/36+lruHvnZycJDKHfG82m2i324mce10w\\n\",\n       \"V8yvqSQvcd4i6kdCoVdsZ5xyfql9E0cqImoGJ6LiCsDELw0uLwMR2/UtgCLLVjoShnip4RFRKQRv\\n\",\n       \"/0dBt1qtrKsTESmgKFgbWsimCA7OVMQW3rcjZ2WK4baTiJJkxwVQKoQ/PmMcymrZGGxvC3WDw4NC\\n\",\n       \"cjqJBY2xNMSPQWIhe9EwjzYS7g8OdJmiY+xw/nhfnuf0BlyTiOdbhCOeVxKnHygVPqMBt5fEWDuv\\n\",\n       \"NDtY5iDw3jYmfj73Ne8CnhJ9Nr+MHYHc105A6fQhN5xHaRnm3hgTZIXSBJPJJDlhvAelRyIiU1t+\\n\",\n       \"/+FwmAqu1+vlO04mk+TsQNTFAfHxMHBMUHwofQjqb968yXdAcQLhe4dRufOHs+EYJ2TtJUODs3B9\\n\",\n       \"fR1HR0fR6/WSGM9643NSD8gRTjvvwrhhaNgs463nfEb60QYTmSFVtFgs0iEidcpacOBJ3Ti4TlSj\\n\",\n       \"Z9zYgUddHe7poBMjxpiaIE8Kz5whO0AOsvjd6VNzJO1A2ajyGY5CaYQdhJS8V/NpGU8+Q3fiTMFl\\n\",\n       \"5Z6Hh4exXm83J3gnM2e0HRwc5Lt7V6EpBQ6CSse5lDWaAxsajq0Da77rVJt1B7KHLkCmmONut5u7\\n\",\n       \"TW0T6Lvvb3oEY4jceA5sr3EibfccBDsgd3r56al+EgYyjH5zYLa3tz2Q+NOnT9Hv92upctb509NT\\n\",\n       \"DAaD3LEfETVZKpvTdeUckj51CtU25W85URHf2JEynwnhoh6SvXwf6UDEXZaUh6lvTo/RAr5b5ovN\\n\",\n       \"rfEgRlR1RUo+ixcCzhR9iKgQj1arlSRWHEUiCyMlEXUis9+dhU2UZwFGKWA0/G7O/TsqMXEawjWL\\n\",\n       \"wxwLhBEhsmHG2D89PSWpnnmiMaZGB1G8cNUsnDhrnhP+bifI8+moECVvw8911BVBdnB2MXg20EaV\\n\",\n       \"kM8SBUXhY8j8fp1OJ4nfcP2YKxt5lE5E5dRybx93gbJk/I3KMkcYate1ohAr48WOlIhIRwgkh7pR\\n\",\n       \"3B+HiF1m8BTa7XZuK+/3+zVuAuOKscAp8NywPr0pA6I1xoTdpbwDRW4Hg0H88ccfz9AKnETOoWTn\\n\",\n       \"GvdBH3gNU6YEVHB3t6oVZQ4RjqePbMEhIEhxkUCQnv39/bi5ucmxwYFip5yRYwIjxsbIOhGynWOj\\n\",\n       \"IHA97u7uktcWUTkSg8EgHVT+RiAKR5Pgj3vyDAIbZNS8KFoZLDDHpUOEzkCWHbTZAfW93JAHjCdj\\n\",\n       \"MZ/P80xHry8QRcoZ0Ad+8rz5fJ71qSIiZQG55lgUnodjYieQcbCz4t9L0nKppxkTdvq6n9hIvufN\\n\",\n       \"D5ZDdBtIJqUCuLcdYNBfxqa0ew5KvduTsxlHo1He344lc/4SL8tghscJ+UAOncFoNBqxXC7jr3/9\\n\",\n       \"a3S73dpOdnSFuYXmN/udSn4v4+hjZbgOzuNLOoZx/1r7Zo5URLUdMaLa1dVsbgvrmXnPIjEpr0x1\\n\",\n       \"AA/7IE2ntPg/SondTjzbUQXXEFnbeKHcOBAxooITXX0VR8GHyK7X6zg7O0tDiSDyLAx/uf3fKQ4v\\n\",\n       
\"ZP6P02RBdASJILu4ohUdC9aKj7+h9LgvTiJGxM4LDh3v5oWDwjcK5uKEXMP1jvSdtijlwgRiRw6Q\\n\",\n       \"e3l/qjZH1HdR8f7e3cU1yJU3ExC1QFalLzx7d3c3d0yBakRUu8QYS6f9UGxOmdlZL1Msjio9vrPZ\\n\",\n       \"LO9pZWGSPPcYDodZc+3NmzfpSFBD6/z8PO7v7+Onn37Ksb+5uUmnlIgeWez3+wnH23GKiHS6GVOn\\n\",\n       \"w5fLZTpSw+GwdnYl6X4czfF4nDKD3JGKw7B4JxVjS/BjFAtj6iiV63hHHEOjk8iCHRPmn1IEvV4v\\n\",\n       \"jo+Pc+5vbm6SUEs07RQ7fWa7uY0//xgDy4GbN68wt7PZLPr9fpyenqaz4Cr9EVWtqojKSfduMho6\\n\",\n       \"0alCI2tln8pxIxgrEVCjxegwf8b13BvnFnSf3bDMtRvoJ+PHmOJksKvTwRCBK880Gsma3d3dTTQF\\n\",\n       \"mbG+c5BM4OC1YGcY+4Vd89w7Y+G0FBt2ms1mnJycxGg0yvM0I6qAnnfkLELmiB3eOO4+NcF63rLy\\n\",\n       \"9u3b3DSBzCM3nN3J2idgiKjOvESOLVM+H9YOFM8/ODiIq6urODs7i+Pj47zOB7y7dhdjit5HPplD\\n\",\n       \"9LptrQNUHFQcQaNjBBlfa9/EkcKgl3lH5599HAwGC2PValXHSKCIyM96m3fpmRoiJqL04LphpHDe\\n\",\n       \"zK9gCyi1j2g3Nze5Wwlnw5yN0WiUUVS5swBOR5mic6qp9JT5HAiaBc13vKWY7zki4HqUl6MhBBIH\\n\",\n       \"oOSC8E4ggYyNnS732TwJlH6ZejOCUDqZdgZ8JIajWws/ix2F+/j4mHA1KTA7Fu4nsgZqZbSI8QSx\\n\",\n       \"4h2QCcaj3NV0e3sb0+k0VqtVLkyUDeUErIzdN5QCn7nOEE400WU59kD8ZVoxYpviGwwGcXNzkzJ1\\n\",\n       \"d3eXu+7+5V/+JXZ2duLjx4+5Joge37x5U+sLjg5Ov2vJGIWF20DwwXgC3eOs0T92wOFgMU/IAcaD\\n\",\n       \"delUFGkE1hD9WSwWMRqNMqq2o+5AgMYcYrhZr/P5PI34cDhMWaBa/Nu3byNii2idn5/HbDbLsQE9\\n\",\n       \"IEA0j8cRsPkz1KKime7Q6XRqOhGe1OXlZS3A4z34HnwYZJh5QoYdtCCPBF1Gap3WQ19aFq17rL+d\\n\",\n       \"6isLspbUDHQ2Y0S1+sVikXWtPGe9Xq+GSPK8vb29Z0eO8BnrnUDbaJJ1aYmGe94c7PJ3nDfrBBA/\\n\",\n       \"bCFOX0S169rv7lRxt9uNXq8XJycn8ebNmyxXEFEhw+zeNGcrorKnrCHzDnkutov3R2bX63Vyahm3\\n\",\n       \"29vbuL6+TlS4PFrM8mAdtbe3l/QAOIK2ewTrFxcX8fnz57yOEgqMnxEwp5BZP0by+D/p3bIOmndV\\n\",\n       \"O/PDevha+2aIlJViRDXgX4NOidZRelYGl5eXtbowL8GK/LPBiKigVyMPEfEsAjDSBPy3u1s/dXu1\\n\",\n       \"WsVkMskT3bmWdxiNRmmE5/P5s8VtVMOGvSRK+p5GjEqvmYX48PDwTJmyOBGSElpnQRGlsRCJql/y\\n\",\n       \"+JvNZjoE9NP3xAlDgJ3ztsNYCjGKgMXFO2IAcAKtLOC2tNvtrHhvmcFZhDPibewvGVOPGYrHUTeG\\n\",\n       
\"jrkD2WAe2+12HlVCLRnky1W+TRKNiNqYlP3BkMJzKQMHIPj5fJ6IRMTW6FOj5fPnz8lRYixHo1ES\\n\",\n       \"rc/OzvI6qmh/+PAhlRXt/Pw8NptNPtM1hkC+4HFA6o2oUvKk3rwOcQR3d3fj/Pw8DVJExZtC1kAP\\n\",\n       \"eP/7+/vke4FAMf/U8+r3+7UjnRhveJcmqtNASymFAFnb9XNAdJCpdrsd3333XVxfX2etLNY0ZSqo\\n\",\n       \"i2SU3s4I+stIR6/Xi6urq6y5Y94Z66vUNUTzjFer1ao5w+iBMsB0msjGruRHlpwg/5/rLd92jP0M\\n\",\n       \"PmOdsP7p6+7u9hw9Ni4YHcWAsjGA+3NPOFOWPcYNtJnvligX34uoHCcHob4f98CJQh86o8B3+LtR\\n\",\n       \"avfZDgpHIR0fH8fx8XGMRqMa0uXMBmND/6iDBvhgvYiux6G07ub4Keut0unhPubaYTvshJR8T9LL\\n\",\n       \"FJ+NiOQEdzqdmM1m8eXLl1oxVmdg3CfI8GRyeCfLk8fJQZK5WvSdn6V/ULbX8gev7bW9ttf22l7b\\n\",\n       \"a3ttf2f7ZkfEEP3Q8LZNirbnC6cGb9yRxtXVVUK4jpCA/ZzWcCMyMyRN/0zms+dKhWwfKeFidxA/\\n\",\n       \"v//++0SJIioO2GAwyPcGiQDFMCzpaAC40QRC99epMb8bKAZcAUctvh+5eROjndp0lExRRj/H6UtQ\\n\",\n       \"Mg6VdPRFuhCeBOPmSAfI2OfJ0UjXODIwCRRZ4XkgI6RESn4IHBzzrnhXw7vIB7C/eUnMJVwk8wcc\\n\",\n       \"vdOXbrcbk8kktwtzHxdmNBoGpwjuUpmChptjNDOiqpoMB8SIBf3+9OlTNBqNWmXz4XCY79xobI/D\\n\",\n       \"+e233yJim7L6x3/8xzg6Okr0lfcHoXGBW6dFQAxAnoxwEpG+tA6bze0RLxcXF7WjZUBwkRe4Hy6S\\n\",\n       \"eHd3F/1+P49Q8Rl5lGTgOiMKpGzhiZQ8zogqfUaKDmQL5IS+8TwQ6VarFbPZLGWROfPGBsYNukK3\\n\",\n       \"2817mwO2v78fx8fH8fPPP8disYiTk5OIqPM/WRs8zzoXlNq6kY0+ROGgAGQEQBiMftIf5q1ETtHB\\n\",\n       \"rB3TGpBhj1MpB3zXOyEbjUa8ffu2htL5uZSHaDabcXx8nPNGMU7QQ/NRkdu7u7uYzWZJ0aBvfMfp\\n\",\n       \"f/5m7pr1kxE90si2Ud5IYKTOckmanNQdx6dAwGYuTP6GPkHaE1oDRzvd398nLcbVxUHOGo3GsywG\\n\",\n       \"RzAhH6YKlHxNyxm6H26ZU6KQ3km3IlOsa747mUwSHUcX8t4u8kza/eDgIMfAdBcoNKW8mF9b0mSc\\n\",\n       \"Qvxa+2ZHxJBSQVCddmHQnTIjL3x8fPxsgqfTaUK8hhAZjOVymXAlCoVjK1CIdnpQ4F40Jp4BObIr\\n\",\n       \"y1uL2S01n8/j5OSkRmREATHRJQ/IcKj5BU7vGHrEsJK2wEiV42zyOO/BgsGAe7cG6T4LpwWcbdMm\\n\",\n       \"YPIe5gdhCCIqDgWOoB1CeDU7OztJjCQVdXp6Wsu505+ISNL/09NTpoCsoHEG7ExFVAbKaRs7mGyJ\\n\",\n       \"RtF64aMwkJ0yzUjeP6I66JM+c583b97UlC0y4/Sc5x/DYz5aREWQJI1jhxLl3mq18uwsV8zGQcO5\\n\",\n       
\"Mmkhkt0rAAAgAElEQVTaHKBPnz6lXPz4448xHA5zfqbTaXz//fcRsd2qP5vNkh9kxx4lv9lsnjk1\\n\",\n       \"7XY7FotFchst+5BlF4tFEstpOKlwS2j8v9frxePjY+6apSQA98WYMJ/mOpEOGAwGqacsT+gJ6wWC\\n\",\n       \"OxNTS7I19+cYHuYeRc3zvL6pB0VZCesI6iQdHR3F1dVVHhLd6/VSR2JomcMyYMKR5XfWMPrQgRCO\\n\",\n       \"J30z36Xk8TmIYK3hpNqRMuWiTHmxFng2mzV4XrPZjHfv3iWV4uLiIiKqndQ+fxV5Y5ezd0mXwUmz\\n\",\n       \"ud39ae4kzrW5tuZysW7os+/pVJF3FZsP63sgM8gQARpOrY9HIQVHMG254d9yuYyrq6uIiORE+VgV\\n\",\n       \"7zonqME+OAVrWgNEfd6L5xLsWrez3gBD0InIBGNmXes0LqAL74Df4DReuQOeMbF9KmXRtpRSMuif\\n\",\n       \"ssaf/Y2X2jdxpDBgCEPENlLodrs1h8oC2Ov1otvtxmAweLaLrtfrJQfDEYa5P+v1OksrRFSCQKT1\\n\",\n       \"UqTPdXZOUBAonPKIBQZ8Pp9Hp9PJd7i7u0uP26eS01j4Jpry004Mz46oHByEAuMfUW0RRTFbYCOq\\n\",\n       \"Mg54/RgtmqNLowXeMs7C9+JnfECu2I5Ozt+L15wGBJnf6ctkMkkkwjn1iMhCkpAnqVWEXPj+cIb4\\n\",\n       \"HcSsRPkiqoJvLGZzIUz+Rh54P5w/5LBE71C+RJn0r9frpUOE019ypIzwWG4ajUY6w5YNR9soThck\\n\",\n       \"3dnZiV6vl+/nOmmdTifm83n89ttv0Wq14ocffkiZmM1m+fPDhw/x7t27iIj45Zdf4vHxMR2Yku9h\\n\",\n       \"vt16vc5dOBDwefbt7W1thw7bwr2jkbVAYNHpdGIymdTOokPeMYiHh4c1gjsOD/LE+9/c3KQjuFgs\\n\",\n       \"kksVUW0Hhw8C8Z7nEZCg12xgzEXp9/u1s/2oPUVJCq/DTqdTc9iN0j8+Pia68vbt29RTIBGLxeJZ\\n\",\n       \"fT2cdProulV8hgPW6/We8XRwvJAtI3nIKfrdXCcHUs4CRMSzde8xjah2hxmlZ7yQ3ePj4+Q2zWaz\\n\",\n       \"nNfHx8faMT+gfqwnb1CgD5vNJnlFXt/oPO7r/ltPWy4YN+yZSzIMBoMM4sssDI4fusubNwgeAAcg\\n\",\n       \"lLuMBRshOPeSsQFVp78GJQgEmGPPE0619aQRKH56juirHS4K0jK/XttuzmBgGxjz6+vruL+/Ty4n\\n\",\n       \"epl7An5g+x0o8BP7YE6fkaiXHF6/e9m+iSNlD9GQa0R1fpgJzgjSwcFBIg8M3N3dXRrQciLxSL3t\\n\",\n       \"GWO6s7OTqTi8ZUf7EXVkyIsNhc5EMOh8B0V7c3OTC4LnOP3C+3nRWfmWf4uoH2ZpR6bVqh8iasFi\\n\",\n       \"0TjCREiJhktCrYXGSAeLkJo5VnyOaJ1GjKgIgihTFFJEVQQRobcimkwmuRMSpWKUp9frJXkSo8J7\\n\",\n       \"ozDLVEO/34+9vb086BXHppxD+shnpHvYUuw5RA7Yudjv93NsI+rVeBl/pyy4/0v95fuO5HjHMpK1\\n\",\n       \"Y080zrg6LeICkc1mM2Hz09PTWK1WcX5+Hu12O46OjvJdHZV2u9346aef4j//8z8jIuLLly/x/fff\\n\",\n       
\"55g7JYaTiLM7GAzSYHz+/Dl+/PHHRHF3d3ezphUlLCIiKxjbgPEsEN7r6+taIMK7objRMSBuTg16\\n\",\n       \"XgguMCQYIeoKgUyQjmMOHdHaCFuBs4UcRwrjRSDlYIgxNBLqgqSs7Yioka2Nitl55N3t8BnBR77R\\n\",\n       \"F6Sc/dn/Ze/NfttKkjzcIClq46bNdpXdNd3ThcG8z8z//zbvszxMowvdXVWu8iJLJMVVEkXyPhBf\\n\",\n       \"nu+EWXMvGhjoPigBw7YonpNLZCy/+GUkjhB/m2aADHoOeHc+CGT9ApLF851KZ24oEWKn+8uXL8X5\\n\",\n       \"pC4Ua+j0k20Jz2G9nGoz2szY6IsPJ7muodce5y2ntXl2pp6QDuZ2CdsY0DcCIh9Acn9ADq3bHx8f\\n\",\n       \"yxhx1HPmB8fNwT9OKQVJjWT6uzyP/2MTcGLywSWc16enpxiPx2WvcdIVe+D0MPuBFKpTa+yBxWJR\\n\",\n       \"Ds0gF7bZgCXMKw6R9asP+jhdmIOEiK9Ljrg9iyOFoXReFUTFhp5FBGoEdnRkCvxoFr8dJ35nX979\\n\",\n       \"6uqqHPE3IsJ3zdfKCEREvRQA/+cZ+fJdPGyEzekqn9CwgPI+KyU7Ofyco98cX2bM3mygWeZlOVrw\\n\",\n       \"WvAOlIznBUVMmhDEI6KKonB2XI02IuLz58+l767ezfwQnWRFZJTAGwqZQOHhOERUl6yyUR0xNhqN\\n\",\n       \"gmJOJpOYTqdlI5pX5e/zPaf5zK3i+ZvNJobDYTnlwjvZ8EbNnMJxCi+nxVg/UEYaa4TSdzkGGwXk\\n\",\n       \"ysejKRrKZyBL/X4/3r9/HycnJ9Hr9eLm5qbICHPSbrfjj3/8Y3z48CF+/fXXiNjVmen1etFq7W4C\\n\",\n       \"IHWFLFrpNZvN8jlXyXz58iVOTk5qx/hns1lJGVLoFIQT5Utqo9FolAKUEXXFSGqHlCvGhdQA5Qci\\n\",\n       \"KmNiOUWm+I7RFaMQ/L6d5Iio8RNZJ8YBaomsgZSyLk6h2ZiCcNMH73eOoeP0g5AyPtARZDsHH+io\\n\",\n       \"jCDjuOIQ2Olx6svpE9YC3Yc8O/jCCTBixR7z3np8fCxyMxwOY7vd1U776aefagjCZDKp3SQB8kRf\\n\",\n       \"cDLYI+bd2TEEDeH/lhHkgf5hh3xaknlzZgV7gsxwMTVcVFqer5zuJ2ADmSOIo6/8Xg607JRnxwDZ\\n\",\n       \"R5/4e6D+rIOzNDirppA4S+Gxr9frGI1G5bsEVft4S0b/cxqZ/X97e/tVCpaWKQHIL/OaA1DQOnwI\\n\",\n       \"15/ah7bV5u43P/k/bJBNM78moiLp2rnAyOBQmSsQUU3C/f194T5F1AvamfMTUTkXh4eHcX5+XgTT\\n\",\n       \"ffGGt/OCcsLZQ4CdIgT+dRqPiJwog2gPw8uC+eipNzqC5X7a4XC66PT0NO7v72M8Htfy+a7fYvjd\\n\",\n       \"ggJ/i7WwMgAyJUoAQeJ7kAm9KVnzu7u7WtToHLiJmCbwHx7urv+4vb2Nb775poaAkQpmDhyZgDbi\\n\",\n       \"NHgTMI84TBR+i9ilBQw5k86IiBp0jvOeG4oKkiPOC7VPkDk7x0Sc/DxHdGx4Uo05vcP4HHmjzKik\\n\",\n       \"3el0as77ZDKJzabilqFsbm9vCyry/v37WCwWBT0hzfv999/Her2OX375paBHkNTZDzlNzd1+y+Wy\\n\",\n       
\"5pxTDmG1WsW3335bS+kfHR0VxJDvei88PT2VUiibza72D5wvUoYYZ6OujoI56OAUhh1G1oc55boo\\n\",\n       \"Uil2pHESMUTWQwQeFN1EvpElxujaVOhJy7D362q1u7KHAMlXJ6FDcL4cXcNHwzgbBffvOqjjWayd\\n\",\n       \"6QXIBnKDrs0omH9GY96sX7Jjw7sp+hoR8Ze//KWUdZnNZrWq94+Pj3F3dxej0WjvO3E60BE0HE76\\n\",\n       \"awd0s9mU4MxBDt/DGcyBEJ+xh0FCkYvNZhPT6bR25Q/vA2kFdTL6CJeU+xhtB1gndEJOb+EU0XJq\\n\",\n       \"zw67i53iQFFY2QgVeg0ZNRptp7LZbJZnsu/gf5liYRlj/DRz0VarVQyHw9qBAyOa+AQenwEPyz4p\\n\",\n       \"0+12G5eXl+V7Dqp+q72UP3hpL+2lvbSX9tJe2kv7O9uzkc3xsJ0qIXVDysL53Ih67tunB5bLZYlk\\n\",\n       \"fRWII8RMGMMTBZ4178oeaGb98128cPNlQC84yulrBHxCDvQBrxj0yDA+jdSVI1/Dpk4LZGI06RsX\\n\",\n       \"WvMxf3OfQBsidkgPiArpEx91hXsGIudCaaBSRHT0td/vx/n5ee24Os90Cq3RaNQKrrK20+m0kCB5\\n\",\n       \"Jgif8/JGwCi8eHFxUTudBBpHtH9+fl7m7fPnz6XyfLPZjF6vVyO3A20z10ReVKbOJFM4KqQMF4tF\\n\",\n       \"Qao8DlJMrC9yg+wR6eVjxUS8ufAgEV2j0SicB/q6WCwK+R5uzZ///Ocip1wcvtls4urqqqBqj4+P\\n\",\n       \"hZMGcsWpPTgN7LHlchkXFxdlDOfn5+VdFNnzWsCXcCoVOeWOP1Jk9IVn+UJvk/RBnFxgMaJ+d+fR\\n\",\n       \"0VG5EobvwVdh7xjl4y4+iqBa3s7OzopucqoNlIt97JOd6BDW0ukl5IoLhnOa3frOqaGcnrMOI5W3\\n\",\n       \"Xq9L9G19wv/z/XdGXMxDYQ8zJ6TNXAAVtNlrYHQwo2B8ZhoBKAmI47t37+JPf/pTOY15fHxcTi1C\\n\",\n       \"tAe9v76+LnMGgs6+MfnbiB/vN3pkjiHzynwbQXYajv0AsmYkGp3tAsG/deraGRquJyMTw/s8x+wr\\n\",\n       \"dPg+JNDUCGTRtIrtdls7dQ6CZ9pNRHXlljlGNBBPp25pli/G6cwP68WhCMtFRHWbAbo+Imo2Bf1u\\n\",\n       \"ZJz+OCXNZ+axIR8RFaUhnx53e7Y6Uvvys0B2FsKIiq/EJaZ2LIBoUdIWRlfhdh6b9xkeNK8h9y0r\\n\",\n       \"9oj6qZmc24aUahg3oroPDkVkweCkFA6NlT7v5TSG+wdcifLEkJsH43oa5lhYMbPRI6r7rJz3z2P0\\n\",\n       \"fBmqHY1GtbsSPX7nnd2YE893zk8/Pj7GfD6vEY4pXQG/zmuHHGGw+/1+MV7Ox6MQ+YzULWkBH7k3\\n\",\n       \"9M8fO7gYdOqaWE6n02kxyigik+3NPbHTYwXK85z2pQ+sh9NRrCm8JQcmOHWbzSZubm6+4hUeHByU\\n\",\n       \"K41YZ47ncyEvvDVkBsNwe3tbg+k7nU40GjsC+jfffFOrst7tdmuydXh4WOSJe/jgpzjFTPkG0kL0\\n\",\n       \"P9emg+/glCjryB42n+fg4KCWQnx8fCx7kHc1m82yp3wCibRw3i+ZrDqZTEq6lEuOfR+aDz64XlZO\\n\",\n       
\"l6Hw5/N54Y4yb+wNc3voA4aNAM6pYhtjX9rLHuQ0qoORiJ0+YT1szPiuqQKu+5MdKq8xP3eqlGd+\\n\",\n       \"//33MZ1O4+effy57mGdyKTdpqsfHx8LLIViPqNJVdlxt2E3E93jZd7YXDoqgNfh96HtSvH4mOjYf\\n\",\n       \"+OF7OEbMC79r0rdTf8wVTj22yH1FN1g+sJ/mvxmwADzIlBaew7p7LQj06Yc5nsw/e8Z2wGn57PBZ\\n\",\n       \"Xpgnp/74jHIL5qQR4LpeGu/DGXVql/nM78/tWRwp35HliCaiykNaESGAkOtM8iVfbKKueTJm7OOh\\n\",\n       \"RlSOAwrWgsiC47ka6fFkmrjod+wzbC4+aVQmokKWUELk/pkHPjNRlL7QB4yOyeYRUa6eoFaRyd8m\\n\",\n       \"v2dnEUcTwTI3w/NgxcRJOG6iz+Rnk6OtNMwLy84skS3v4Th7RIVs8jveTMzN09NT3N7e1hzefr9f\\n\",\n       \"5tf9YAyvX7+O9XpdSKusPQ6ueRM+DYPsse7mBzCHh4e7m9p9TYg5UHa23SdkgEtA6bsdYDuuPA/5\\n\",\n       \"wAgig1YioGX0BfmHAP7dd99FRMT79+9LuQF4OD4J980338Rf//rXgtjRF+48Qz6/fPlSHIl2e3e/\\n\",\n       \"Hg5Uq1XVQluv14UvB5cCzuHj42MMBoNy3Q7Iqfc7z8Dw55NX/r85REavfTzeAaBRK+YebmFGE82x\\n\",\n       \"Yl1w8EGocJ7t9KC8XZ6AqDiTmH2whbnHcTFnpdGoymUg03bUMSDZGcr8VAy8kTzrYOtJ81TR4+ZY\\n\",\n       \"8V3WwEiWUSC3VqsV3333XaxWq/jhhx9iPB6XS6LzNVWQ0plvo2lPT1UNIjss7DnztbLOsk7EmWMv\\n\",\n       \"ZSfSyBTOPmvBWB0g571sp3W7rV9jw3hMhqeved54nrlcRt2Yf3SZZdkHqH7rpDey5+CBAHKzqd+P\\n\",\n       \"yfdyPSrPKTIMQsozsZW2t3zGOuZx2dml8UzLApmynC3LqJ7bszlSEVGL9vedmjNZk0ldLpe10xTA\\n\",\n       \"jUYJeA5CbOWQoUR77vaWgQRttCPqpHicDBNHSRHiRLBJ8YJZlExGRZE6VcczgcbZqBYEPiOKR/Fg\\n\",\n       \"HEgvTiaTWoFMKzungZg3lB19zg6h4W0TGTebTXz8+DHOz8/LJsj9iqg7ITyPZxpiR1njGLkKNesA\\n\",\n       \"apCVDf0jIjVakZ1jK7eIKCkEn1BxypENZ5QDBwj0xOli+kZUjQLlHTmNQN+QWaJ9EAEa91PxnuzU\\n\",\n       \"s69AN1gDR7NOeyLDkMK/++67eP/+fUTs7tP7l3/5lzg4OIj/+q//ipOTk3j16lVE7NCqu7u7uLm5\\n\",\n       \"iYeHh9oJuuPj40KMHo1G0Ww2y95nP5+cnBQHjMb36QupNtYCB2swGNT2IWNcrVYFBXXFcxwrZNx7\\n\",\n       \"itTPyclJQW0dYPkAxsPDQ1nH5XJZCosif3a6+Bmpa/pDtG5Umd93LaN+v18zrugDn5BiDCaS814b\\n\",\n       \"K4KPiCoFyr9B63LAlJ0a9BljRN68j63fIio9472f98i+tHYOkCKq0hDdbjfOzs5iOBx+ha47lWZn\\n\",\n       \"kb2KfkT27QTiODozwHx7bvns6ekp5vN52btZlxIgWyc6Bcf/mWMHxYzB9AocGZAp20v+DeqaM0Am\\n\",\n       
\"1FuHIYPoQztLnGjDEbXcoa+dQrNjY/TecoGso989r6wZyBF6ivlA9jJKZPuBw8n7MiptB4y95YDP\\n\",\n       \"yPD/5kRFPCNHCpjQJ6EYCM6DlRtGgdM4hjkRSBbMgunN3GhUx/+d42YBUTa8z6mwfWMAIfCx44OD\\n\",\n       \"g1KBGmXDMxk3C2OjDG+C/rmuDUUa7TFH1IuWNZu7o9PwUlAk/M7Z2VlMp9PixFq48e4zPIpQMUf8\\n\",\n       \"nO9ZcTKnmbPGWuAIYJgcuUdUjit9M/TfbDbj4uIiNptNjMfjEl2ykdhUTiuguKxQmdvPnz/H09NT\\n\",\n       \"XFxclOs3aCim9XpdjtxbBtjw7i/f8+ZEdoz0oLhcUZtnGv1zetvy4XpbXot9R7lx9H1qC3kHwcTR\\n\",\n       \"6PV6ZS045XJ4eBhXV1dxfX0dP/30U0RE/PM//3NcXV3Fv//7v8dwOIx//dd/LcqGAp6r1SouLi6i\\n\",\n       \"1+uVyJNIdbVaxd3dXa3QIykh+m+nEM4NhsnGAM4R8+UghHeCxOB822CyD1GuDgY4IccfdJSjV97j\\n\",\n       \"9R2NRnF0dBSnp6dxe3tb5AM0A+fK6SRzwAiK+B5GBHQEh5Q+MH6CwGzIrL/sKA+Hwzg5OYlut1sL\\n\",\n       \"PpAT9o0DW+TNDpMdBpBYI6R2bBzY2CkAMaSBavF962bXykIv3t3dFZ7jt99+W+YUBDYHZk45s/7Z\\n\",\n       \"kWLNfNqXZ7AmDnZZH3QvThXy7XSaUR4KCmP7DAJQOgbd5SDCts+OhlEwvo/TlB0pI0UeI2vCz+0Q\\n\",\n       \"gsqwJ5yWRdcgG8wxKWaQZdKCyBvlQkDXLcNGvtx/mp0yo77oWcAMAiyP9elpd20aY/B7mTNnHv63\\n\",\n       \"tF7EMxbk3Gx2VYfhprD5DTd6EUlPAeHnaIXFiqgEwkRRCwKfkU4Cyt/Hs0FBZIcK4adYZESUu8eI\\n\",\n       \"sM0Hyn3K0WVERazjGfQhC7ohRzsr3CPFfDin3+l0ym33PCvPsfPFkEeNGvFOoiiUt4XThGZvRMPb\\n\",\n       \"fg7vxtBlB6vZbJZrBUBmMN6Qwp0eyAKPkXGqCbI0Ssx5dKJRNo8RoMFgUIPqGTPNKBKlJ0xUJ9WW\\n\",\n       \"o33PrbkRNL6DwvY7I+r3eTk4oC84YSgU114CjSEtAupGam80GpU73H7/+9/Hf/zHf8Tnz5/j3/7t\\n\",\n       \"3+Lg4KDcwweacHFxEZ1Op2bYIf2Px+NinO7u7iIiyvwSqXvclhUUW1ZuzClOodcR9IG0t6NKp7iz\\n\",\n       \"XsDQMEdGfChrAkqUkczr6+v4/e9/H4PBoMwp84qSdnXn9XpH+saozufz8j6QDiJ2owB8F+fEe8by\\n\",\n       \"wVF/p3GPjo5iOp2WquYucJs5S5YnHCfQIKdoPLfoAOstZIo96vQ0Tk+O/NELjH84HNaoIOjRX3/9\\n\",\n       \"NT58+FCrMYb+y6lG7ADy6WDI+jjrXPpNf7wORu3zdWNZ39uOIO/+mR1MO6I4h7yPdK5tofeGUT07\\n\",\n       \"xZZr5NzvJDDNDhGyYOfZhynM87Q8gIYSkFoWjV4yT0admHNssPc26+xglPHRRwcrXif+jUzzfZxB\\n\",\n       \"yke4EZT9Vnspf/DSXtpLe2kv7aW9tJf2d7ZnQaSIgrfbbYki8AINM2evkOOOERUxjWj31atXBc3I\\n\",\n       
\"RHWnBTKHxnCiv+Ncqb+XoU4QE8ZALhkYmsjT5F9QFEcYQMwHBwdxd3dXoGFQJ8ZgLxoEj0ji6emp\\n\",\n       \"nE5xocGcQuS7GbWyx79arQqXxH01zwfSrlMRoIdOtzF+5of3Eb3QR9bcXBcXkiSV6iPw4/G4Fn3k\\n\",\n       \"dAScj30RBkRpl3BABpHRRqNREMfBYFDSMhzH5b2WXZCu2WxW3uvrjUA1kFOjCVl+Sd1RcDFD8Ya8\\n\",\n       \"c/rLc+so2b97eHgYHz58qJ1aQ/4hf799+zYidoVjf/nll/inf/qnaDab8cMPP5Tn9Xq9glQis6RS\\n\",\n       \"KbrJGt/d3RUOWrfbraWSHHmDUBKpWp4gLVMtfR/Pj1QDe8R8Ra+XOR2gFIzB8kSVc1CJdrtd9BdF\\n\",\n       \"NofDYSlOCtrOaT5OnZpG0O12y54h1eQUItwxUFCnzeDjUWrFyAJyStrESHG73S5pPd9DR5kMo0PM\\n\",\n       \"p9FQo+smamfU38greiNzQPm+9YVRDfpwfHwc5+fncXNzExFRkI3VahWz2Syur69L1XPeY6TE6TD3\\n\",\n       \"LaNBIDXsT/oHdwiujrlNRpXg74L8I3eupo4Mm6vldDJzy+9EVFeMMXbSUqZYmH5BSswcNMZhBDvr\\n\",\n       \"HsaVv8ucMJ+et+l0WuNist8sF5R9MVrm8bP3nYJ1RsPvM9/Up2/ppw+BOP0H2gaH1adSmXuKg4Ke\\n\",\n       \"MWdkxH6rPetdezYKuYZFRAWr+pjzdDqtOVksFEezqSHi75uv44Xi+0wQv08/yAXnnK8JqoaiSYdx\\n\",\n       \"rBxnhGe3Wq2iGP0+w8ZsZHhAzAuGwmOgTz4tyPdGo1EMBoMiuIzfuWAElZQcY0FpzGazr4y335lh\\n\",\n       \"XOaAvjpd6E1hUj7PzFwEGu8xNI8jxf175tPxPvM1SCXQX5cnmEwm5ToXZM0pouPj41K9G9gew0Xl\\n\",\n       \"3IhqA6NsmQPWmtN/+5SbTzJlhwBlBx/CcsO7rODNyzGJ3ScoI6rLiVG4jJFgBeVsnsiPP/4Yg8Eg\\n\",\n       \"Dg4O4qeffqrtQ4wavBSud4mo7iEkvdftdsta4Fizp+BM8b31el3qkrmEw2KxKDwQFKpTNexNHGTL\\n\",\n       \"DYbCqXGXDeH3ebYPKTAnOOLmlcDjIWXGs0hduoQHqU0CHhP/aRj236I74HjhCPG+w8PDmM/n5YJp\\n\",\n       \"OwStVium02ms1+u4uLionYayc4BMe1+iJ3BqkVn6gx7ACWNPkpY2l4fP0G3sU1Me0AV8p9/vl0Dx\\n\",\n       \"06dPZf/haHGAgT3tZv1ASQS/x+PHaYUSEVHtNVNJkEWCcfZvpoGQpuMAQ05DZZ1Hc3radtHpKObX\\n\",\n       \"up13klKz/s7pzkwo93wZZDAvzCnXiCicYM8ffeP/7HM79TzTZHPr74io6T2aaSmtVr1avKkX1PZz\\n\",\n       \"ag/7jVwxBkAG9IjlEAcs98PtWRwpn8JyLhfuBpvfXIHtdhuz2eyr0194wiyE66kQBZjPY+8UD5p3\\n\",\n       \"o8Q48mzF4U1DTp8J9zMbjUatcNc+j98kWhqb1E4FY2DD2TGIqAieRol4H0iNDbEVscm1/pu+mIRn\\n\",\n       \"ATePANQhH1HO/Cs3b+j8MzshfE4EnSPkiChEcdZvPB4XYXd0heLwprGMmFt2dHQUl5eXpf8YGxoG\\n\",\n       
\"FufNXCnehTNhJGy9Xsd4PC6cFK8xRscbnHHC/WMNPW+sqb+Xlfs+hMAcFu5sMz+u1WqV8gzv3r0r\\n\",\n       \"BQ3n83m8efMmRqPRVzwYnHWcJDun0+m0zMnp6Wmcnp6W+W6323F2dlYrAGlnIvNYTLiF94Wz4b1j\\n\",\n       \"3eGCuTyDtTPi4sYzzTujzArzZMU6mUyKEV0ulzEYDGpGAQXOOqMjZrNZnJ+f1/a/+2KF7n2BU847\\n\",\n       \"4Z9FVLWpbm9va/IQsXPczs7OYrlcxmg0in6/X9bJKHuzWV0v47ljbtCN/NyBlZEd3m3UOQd0OIWZ\\n\",\n       \"f5iDDTiRrOtkMinXax0cVJfvbre7y6HNu2PeTHTGCTMyjqHH4SIQ9ljQHy78zNhAbjwGozmscZY1\\n\",\n       \"nr1PZxJoZztjMr6dU+wrJ44dmKEPQEF9yIh5wgm5v78vzim614RyNx+gMDqGXKPjzKNFJoyk0bDr\\n\",\n       \"BKoEhW6np6eFe+q5Qvf7PsKISkcTKOeA3air14gx/P8OkUIAXQzOpxqIThnMdDotkbEJ4jzDxjIj\\n\",\n       \"KxCNjUDxN9+xoaTRH28svscC5VNUPoaaIU76aCE0ugN0aCFmfETBRi1o9/f3JXLkhEREdRs8pyLy\\n\",\n       \"0VtHXRZ6GhsOpZg3yXq9rtWgYb7tvPAczztzYEK9YXcrWMZxdHRUijpmcreh2KOjo/jy5UttLdjg\\n\",\n       \"RIQRUUMvQF981x7RyT7EjY2P7Jocyd+OTP1ODDOy7RQeP8uHG9brdYn4qOPC9wxn4/zbKWUfGRX0\\n\",\n       \"3KHYSCdH7Gps3d/fx2QyiTdv3hRSckSUC4SROc8p+wQD7wurqUr/9PRULikFOb26uqqltI1yeX5B\\n\",\n       \"MbMz6HnPSIdTe/tSnqzJPuMVUZH37bw6qs4pMwzcarUqRTE9RqJyggOex7w1Go1YLBa1u8Gc3vIa\\n\",\n       \"YhScTkOGDw8PYzAYxOXlZQyHw1gsFjVndDAY1Iq+IudGVdin1jXMox1E61MjLXZemENSJH4na4HD\\n\",\n       \"4aDNcz6fz0s1/YgoqZn1elfaJdM5kIl8QtZrBvpLAy3ySTHS/ybH08+ctnfQk4MX1shzwNo76LRO\\n\",\n       \"zMG/aRL8n/2Sswa8D0fKFBXmw7YpoiKxozOQS2QKCgpyYEQMPZsDOmRjX4rut5xG5gFHDzn0HiBz\\n\",\n       \"4EwNDRTKJ8g9pzTsgufWiFruS5ZLt2dxpHL0SGMRnCKIqC6hHAwGZUCuRE3OE0OEMFJp2sgAk+MU\\n\",\n       \"GY6ZNyDRsbkdfGYUKC8GjhyolRUfAsz3Mupjz9kKmg1jQ0lf/LehdrhGhljzOJhzDHwWfiMlOdok\\n\",\n       \"YvA62Vm0gucz5p+xu6w/x40zUkZ0gJPVarVqhoZNxdzQz+FwWFIscAqIvPv9fkH5Wq1W9Pv9MhtC\\n\",\n       \"9csAACAASURBVPfj8ThGo1G5VgauUET9VCVpHRfUw3lpNBo1tIjvUtOIOXdqNxsiO6AogG63WzPQ\\n\",\n       \"nst9yCLFP70eNIIOInMbttvb2+j3+3F4eBjX19e1S025WcDHxiMqnpkRGXPn6Av9N7IAsgkny4YG\\n\",\n       \"pbxvfHZA8l4gzb/dbr+6mJi153lGSLzHnPrmd/luhvqREwzqarWqnYRkDY6Pj2vHrk9PT2sGLKJK\\n\",\n       
\"sdh4ZFTCOoM+Iac3NzexWCzi7du3cXJyEp8+fapF9uv1rrQHCIn1IIEhhtfp5Jx6+a3+sSbeNy4C\\n\",\n       \"6jnNgSF7ww2DuFwuy5w6C5FT3/BgnPFwIJ2RL2SGOUUGMxJPYIIsZS4jc2QZYl54b9bfvpjaOtiB\\n\",\n       \"KE6PP0OOWDOPlcY6ZIQsol5RPqfh0PsEBP6eHUP/DNTNzhXzho1AXrz2/l3LIv1gDbzvSWmDGNpR\\n\",\n       \"Zt32lU1ANul7Djz5PnNtR9N7b197tjpSTIaPUOI547wYNvYGv7u7i8+fP0fEblK5EoAjxI5aEGJD\\n\",\n       \"uHzPnrkFdTKZlPdtNjtODEaYRUBI891ILJ55DxFVJMMCYWwjohwl5nlGjpgbNkM2mnZ2ECDmjD4w\\n\",\n       \"VnN2UKKsg9NGKDMbNKNWQNsWdsZv/lHuq//vjcia0R8r18ViUYo4np6elhIEEVWFcqITOGoRu036\\n\",\n       \"6dOnaDabJXWVSbwoNztnIE1E/Mvlspb2g+fCMXWnNa2QQRDpa6vVKmkv0kom2rKGrLOLweFInp6e\\n\",\n       \"lut3WAvkFjn0++gXCtxpXRAp1pO1ABWK2Bnkk5OTgjrhBJ2entaQWBpV9I+OjqLX6xWEhLFThXy5\\n\",\n       \"XJaUAc/AMXdai3Ejj4bcQWDNZ8u8O5AxG3vmBLQHJyobLtI11guOaJEPv49CpHCUkJvJZFLumWQ8\\n\",\n       \"fMY8YhS979ATBCx2Fk5OTspeIxBhDDhxh4eHcXl5Ga9evarxXxyYGDkkELWsECjY6PAuIw04EZSV\\n\",\n       \"saNtZBuHynOKfDr1x2c8p9frxWg0KgjRaDQqe4mrgByIoivZG8gV/cQhcvCFEXbAuY+a4SrvEdVd\\n\",\n       \"bOxd6wKPFZ1pZ2a9rooME2Qx3xnFsSFHDvdlU0yQx07ZnmRagtfQNZc2m00tLc5tFYzP+x9b43Qz\\n\",\n       \"jWdZfnivnalMk4moHHkDHwYWWG90opHinPFgv/B31hf+/Qw6+B372kv5g5f20l7aS3tpL+2lvbS/\\n\",\n       \"sz0LIhVRoTf2ToGAiUrwJDudTrmvCxgUb3E4HBZ4m9QADSg9R7K8L0cPJtxyegjvnb4AJeLtG+Ll\\n\",\n       \"ma5ebk8ZZMnpvYiKqGfYme+Z3EckmcfoNF2upA4MTVTrKsJwvZjT3IjIHZU5ktvHsbA372ie+SL9\\n\",\n       \"5TQkaVqQSJPB4XJMp9Ov0qytVqugUPATQJaIrrgvzpEu/fGamQtA/0CKHBXBZYO/gzwxB/TfRFDG\\n\",\n       \"0el0ylp6/pxeZT0cQdNAF00AhfuWo0sTYA1le+zA1ZkAyrvOz8+j1WoV9ATCLKmr4+PjghAQTT8+\\n\",\n       \"Psbl5WVst9vCu3r9+nXc3NzUIkhOCSLrIDzmV4BiGC0yx8Gpa+bM4wB5Y0w+qs+cIY+ZX4L+MeeS\\n\",\n       \"SDYjiMhup9MpJ7PgNvLZ2dlZ7bQSn0G+59Jic9pIXbLfTD/g+piDg4OyPtxfyOXny+WyoIBG+tB7\\n\",\n       \"yJPTmv7/er2uFevkfXALM5ePueP53hMglcy/v+f0EjqQNeVz0Fini9FfyKOv5yHVSRrazXxF9ilj\\n\",\n       \"dDYgp5G9R9nnNL7jK7QYX5brfSn9XC4G+TLdxPNi7hConhGyiPq9kaZ8gEbxPc+p18LjQdfSV1A9\\n\",\n       
\"t30onNN4UEJymtH20FQQ1n+z2dRoHx4zupj3QeNYLBbl1B7rT6ke9DNzQV/Qp/TJ47Fc7mvP4kgh\\n\",\n       \"XDnHTPO1KxERl5eXNeVmSI4FYELyUUg3Q3MIl6Fl+tBq7a5dIH3jflp49zlg/DyTVHkuStHQ6sPD\\n\",\n       \"Q0lTkN7zpZQIN6kvK33Dj84V+7QfQmHnBW4RZDwLqlOShtw9DpzgbrdbOy3z+PhYq3HiOcjOgSsj\\n\",\n       \"t9vtUmHc78KxWa93FapJu7LWlCdwOoy1oPYKc+sN5ZOjHp8rImNg2VxWMN1uNzabTVknDDlyZEct\\n\",\n       \"ojpJZQVkp4d1xRC7BhPrinzSH3hT6/W6EGX5HStpxumUAo6lT7DRjo+Po9/vx3q9jru7u9p6NRqN\\n\",\n       \"mM/npW6bHeXhcBjdbjdOT0/j48ePNY7BbDaLt2/fxmKxiKurqzJ2k6uRGada2PPZcDHHrBHGhblB\\n\",\n       \"6ZNmyukkjD77g8b6oRc875Q8wfnabreldhE/e3x8jMlkUkuLcO8ka7BcLmsne5GFfLyalA8Oix2p\\n\",\n       \"x8fHwp/EqSAgefPmTdzd3ZV9mnki6DAcGr/35OSkpged4uPAj51zp5StQ3LQgs7L3CrG7r9zehdZ\\n\",\n       \"9R5HL8FF9clqKrfbuTHXyzowE5U9H6wp823ZcH9tQzglZtvAPOMY2DlCdnmO+5Fl1in9ZrNZLt7m\\n\",\n       \"3XZQPE95/dk7dnwjqjJD6E1/bz6fF71p3ULLoERucETN5aPlAzbMW54P5BK9g33x7y6Xy7i7u6vt\\n\",\n       \"txyI4RSi55kXvsPBFq+hSf/72rM4UkRk7pw3dzaKLDboko8i+tilnxNRGSgEMp/ow/HhcwutT1b5\\n\",\n       \"tJCjyexgRFROIv3KUYSRMCsQrjIBgfE77DQ6V8x3UbDmSHmzmkeSCbdcG2PUjZNsFjK+B4mTwpjm\\n\",\n       \"QjAfGGcLI3NHP3yShJ/zvHzM36dIjORERHE6KHZpx6Xf78f5+Xnc3t4WdJKx0wecRvME2LTInfP2\\n\",\n       \"/IHvxPhwXDE2djD8TkfBzK8NFHwOHDQ7EE9PT4V7FVFFyVzr4popILy824VTGZORMPp3enoajUYj\\n\",\n       \"JpNJQTUonkl/cN6enp4KOsbJvIuLixiNRjGZTMr7XQ6g0WhEt9utnThkDTNX0QGSOS+WH9bIDq9l\\n\",\n       \"ln5STJN5pKFzMv/EPCbPv3lOllN0GSccvYcPDw9LTSe4Scz34+NjDSX2HvHJPMsf/cZpIyBiXBym\\n\",\n       \"QP8YxfOpMiPVPJ/Cif1+v7Z3vT9wUO2E2gEBrTLXCZm37PE8DCm6LuvX7XZbC/iYbxwk86/y2kdU\\n\",\n       \"qJf7b32Y+XGWB+tL/nZGgrVgvDhc5pfaeWTd/Cz+YLDdT5rl00gd+tL2xDXzmPMcvNEXo5XsRes8\\n\",\n       \"vjeZTL7imjmgx1HKBU+t67D56IzM1fL3CP58ctCymJFsnLP7+/tasWTuDaXPHq/BG+ya0S0aeigj\\n\",\n       \"cG7P4khl5nxEFe0b2kYxLJfLWhTrNM1sNqvBjo44HH2tVquSBomoCi9CfrUj4X6iII0eRFTk2Kz4\\n\",\n       \"nTbxGOzx0kwIRpggnXoj5pMtdgaBN7OhMRLTau1OplF9mv6hwKjwahQK+B4HkOPqs9msEDyppmxj\\n\",\n       
\"YkXgzYZhMYnfhsYb3uNHQVjJG+2h72wOKz5QKCI3YOqzs7PyTKBek/Adydh44eTiwKA8IuqVh7vd\\n\",\n       \"bjnubhnmO6B5bMzBYFAzJC7ah2FmnC5Y6Sj17OysKB/mwcqVOeFv5N3KkHGgoNg7duSRN2q4/O1v\\n\",\n       \"fyvvf/XqVQyHwwL98z3WdDab1W4BoC/ed74zE0eGk4cumZERaeSK+Vqv18V5NCLLd3wowg44cmxn\\n\",\n       \"ne+BAmVEgf4Q3FHj6Pz8vMjpbDar1eui4QDjJHst0FutVis6nU7NWWIMpPEGg0FBxwj+fKDAhzB8\\n\",\n       \"0tSNgHGz2VXuN/kZmSf9yLr5WP1isahlDLLcgP7YUcnZAxfKZO2Y97u7u9rhJHQ6joWJ8RRB3Ww2\\n\",\n       \"pfAsDUeSPZ77YDI573NAbuSZz2zUnU3hO+i+HODYKfL+xTm0Pst2kfmzvmbeeKbfwXPQi0aimBcj\\n\",\n       \"kHYyQao43OOTvkaUmWcHqw5OszPCvBFkGHiwDUCGvE7YOFfnZ3+ia72+/r/T1/zNHzt2/1/bszlS\\n\",\n       \"CJ0XignDAGSjjxK3sCJEKIEcgRkFMofCG4SJZVJt8A2vRlSX4PJsvwshsZKmsUAWanNvaAhb5gyg\\n\",\n       \"EB2xue+kdeBJ0Afms9vt1hwOUlMooNPT02LkuJYFgzudTktF4Zubm5jNZsWwRdSjJUfQmXvCnLFu\\n\",\n       \"VkSkTbbbbTkSzvfIkV9dXcVwOKxFszgkjIP32Jgb/aGhuFByKAM7ShiyvK6srRFO5oHSCKQ881F+\\n\",\n       \"Ih87z61WqzitPi3GOKbTaTktd3JyUjuSTP+JmByJ2nnI6CiyRXMwYJTXaCwRW6/Xi8PDw/jxxx9L\\n\",\n       \"Py8uLuLLly8xnU5LZGp+Dc8EeYRbhdHjRCRONeOz07her2scFAwpKQkj3OawOeqPqAzlvlOpdp69\\n\",\n       \"Dp5DHC2nIXFAQZ9saFD0PBuuHGP0dSIeP+gXgUu73S6O4nw+L6dYr6+vYzAYlNSejQHOkCN9dMLx\\n\",\n       \"8XEtDcX4SaNz/Qw/d7CZeVfsocViEYvFIrrdbi3wRR4dlPKZjet6va6lS32adj6fl8/u7u5isVjU\\n\",\n       \"ys14j/N//tDYM9ng+3PrWJprGtlJ9Jx6DH5e5tPRsi7M+xE9470ZUe1t1sDoLA0aAXbNto3Peee+\\n\",\n       \"kjPMH8+ECoHddHCPHWde/D50tJFABx/sTzvEPNOBisfPvgQBNBXFJyfRK+bbunZcDjyxs/Q5gyIZ\\n\",\n       \"GXN71jpS/DuiXjMmok5IXK12VW1R8laoIEvm2xgVceVjIyYQ0djY+wSZln9uQ5sdAjsvhi1ZNH5m\\n\",\n       \"Hg1j8sJmiNNEbvfNYz09PY0//OEPERHxzTfflFpAKACuhmDeyHmjJHDCLi4uCnrlO94idh4/5SbY\\n\",\n       \"QEakiKAzIsV8OJ3qqIXfyfW3OE5NBGrCaUTUkJ19iog1cRQHomK5YA3NQ2EtfedSRP26BDcTGTFQ\\n\",\n       \"dpascB1EGPrfbDa1NBTcMQyUAwX33xErfWFc/HwfAsr/M6q6Xld1jRwZomB++eWXmE6n8Y//+I8R\\n\",\n       \"UdUuQhlxPD9ilxYAHXI6lebUjfe2jRafmScGoRvlbh3AOrGmTvs5LcdagWbY8cp7kTlpNBqlRpXn\\n\",\n       
\"nOfe3NzEdrstTsh0Oi10hG63G0dHR8V5coCUG7wUHMyzs7Ov6kH1+/3CCWL/sv+Qt4xms098PQ9j\\n\",\n       \"JyjJFALzM9lXzElEFQifnJyUK1qcsmSuLYP0x/rAKf8ff/yxHFggoKCOFI6rHSnXrcpVqL0PMyLm\\n\",\n       \"z2jej/QTR9loFg09jd7Z5xDk9zgr41Sbn5mRVPpMJsHBtZE82x2PFydhvV6XPeqK+Ov1rkbjzc1N\\n\",\n       \"DS02Gpf1DSgi8+IsRZ53+ue/XbvJBzKs53P6nb2HTWF/8x70DfXreAbBtm0/fSGNaB+EZzr7sK+9\\n\",\n       \"lD94aS/tpb20l/bSXtpL+zvbs15abC8TDzNHAfxtBAteQ0R1RNepQUeiJmI7kgKqzoRL3mEv39B/\\n\",\n       \"jvqJ4mhOqWVOQ/aE88kuIitDoxEVXGluS0SUO6aOjo6i2+3GH/7wh3j79m1ERLx9+zZev34dFxcX\\n\",\n       \"EbHjNs1ms9oN4qAfoAA+Xpr5NY4GOJmCB+/0EWkF0lB8xpyAZEEWzc8nSqYvPkoOkuESB+aYOIom\\n\",\n       \"0oQUbzRusViU+QfRyjJHis/jZr5BedxPTovmS2f9b6ORLvEA/8L9cfTj9JGRFbiB5ld5HI5M3R+i\\n\",\n       \"NaJV5Io5Zf5AB/M+fP/+fSyXy3j79m1J0a1Wq5jP53F4eFiuILHsk3o1GhQRBX0zkdVE9Ha7XfaR\\n\",\n       \"9y/cIFASyOqG4ZEzUCsib9bWvBcaew+emtFM9i0pN0fQpG05en12dlaTN/YGiBbfm81mZZ1ms1lB\\n\",\n       \"IPke6ZsvX74Urhj9BE0mmrZeZa4sE8gMc4IuMeeLeTd6zzPRdei2xWJR43WRwuGkKY15A7Fwf5A5\\n\",\n       \"+DXtdrsga91uN3744Yd4//592WsgebPZrKR12FvIIqgRpUqc9gM1ps/eHyBilllkkrW27XF60NQF\\n\",\n       \"/vAcf25kJu9xpxSdWja/lPVxWo7DJk67MlbQGo+Dz0A5mW+4T6A4vV4vPn78WOvPPp4Rssfesb3E\\n\",\n       \"rpj7+luZg8xTBpXaR0WIqC5mNhUB1MlpXacS2fvmyTEfLn3A7zOfWZfm9iyOlCczOxrAooZHgTEh\\n\",\n       \"8ObTOXmy/UzXDDGnATKdDbEn0NCtnbu8MBmyxanj/2x8uFUYMDZ6xNcXpnperCgMtfI7bLZXr14V\\n\",\n       \"QnlElIrUHAuGw+ATC9PpNCaTSTEOFn4MkY1DRJQUDMIL/yWiSgv5FBpzikLYx0tBHphvX6HhtWm3\\n\",\n       \"23F3d1fjlrHxTeZmzTBKrAnv5lQHp4tcD8qp2cxpILXs/iMXOECu++UUg1M4mYOB42QOEZ/jSNJP\\n\",\n       \"jjvTH4/VhiFzwPibvttomODrlCrf8RpmR8GptsViUaqbO53GuwzB40S7wniv16uNgXlErqgLRF98\\n\",\n       \"uXjm63kP+znMG2tFX/fxL9in2dGiwrv5gA8PD4Wzg9zB58GYEIBQ+Zx+ku4j/YcjtV7vyka8evWq\\n\",\n       \"lP7wZcd2RglSIqq0rnlpNHN2rDcZOzqSdbV8M9/cvmB9xL7BiO2rsZRTJl4v7zs+f/PmTfzDP/xD\\n\",\n       \"fPr0KX7++eeaY4Ne4m/vfV94i6OaCdUOALMD6n3itCe6n72fnS70pfUE62GOrNP25jRa1ngWcpaN\\n\",\n       
\"OHKNfeNQkZ/l9JqdZMZ9cHBQ+I4R1Z5Hh5+fn8dkMinyZhsRUbdRtpfWNU9PT6VMB7LH9xhDXgeP\\n\",\n       \"Mac7mRvey/tyqtY0C+s20zasA9G76Gm/E93hwDi3Z3GkuEjRCorOo8gwDhGVo2HByPlgC5qVG5sP\\n\",\n       \"z53fQ0idh7YC5/8Wdt6dOUCMA4XokwruC0oWAcpKDG+aSJoxwxHJRticqdvb27i7uytH1REIvG82\\n\",\n       \"JnMIMoQR4lQMfeX0w2KxiPF4HNfX1xER5eJQO0GOchzx4QTwbzgNKLKMAkEOZK4iKqWL8nGNMRwV\\n\",\n       \"jPF2u62dCkMpgo4Z1eT3MWw+1ZSNqBUUyFdWpqwN8mbHO/+NA2huD2uBrBoFYtMjd2xoHAHkBXSR\\n\",\n       \"/iAH9DnzkuxgGJVgzjGKVryQuClyyjrd3t7WDjA4aGFuTERnPZfLZRwfHxeZAS2MqE4TEUBlDiH7\\n\",\n       \"meKUliVH56BPNPPKspKOiHJpOL/Hszi5BFJnLshyuSxOLtwgDmiAAKFz7BCiP1jbjDqydw8ODsr9\\n\",\n       \"jqwNe/zgYHekHI7QfD6vRfmZH4bh8ykvr5M5e/4MZIf1MoLQbDZrwaKdbOSKsWYHzOvi1ul04urq\\n\",\n       \"Kl69ehWfP38uPDzWlz1EaQlzGJnv1WoVr1+/Lg5oNqz7goh9Db2GPgbd8BryPoJGf4YutjEngHIZ\\n\",\n       \"D88Xjvd2W5WPoOGsWu4zX8/rh2wYgYd/SnPpHPYwzYTx/PsEJoAY3lfojPl8XkMC6WPODNkm8Md6\\n\",\n       \"xO/2qWjmAd2JPFgX23ly4B4R5UCHA37z3DJCm9uzOFJAslmR2ZGKqBdwRMF6siOqUykIKAY7ImqK\\n\",\n       \"DoXjUygoLjtQEdWE5z8R1f197rOdOgTVhoxnm6DthtF2VGWvPY+B5k10d3cXv/76a0GkECjSA0S0\\n\",\n       \"PAsh5FTeeDwuCvf29rY4J5zYc2SCUiEStjJlg/NzNpuLdLrvfI/oiurkHjOONSfTjICB1jAe5pa6\\n\",\n       \"UtvtNnq9XvR6veK4OAWcU2nZObYDYifWp/toRlptsDxGZMtwtCs+ZyTATjvPsXFjHKCKtIzQOl1s\\n\",\n       \"xWL0xZ/xb6dFPAand5jvo6OjUmzSyHC32y0nPdfrdUwmk9phERwiyOpGajFOGBTWkEMmoIvcA+YT\\n\",\n       \"SDZq9Js1zWlBIygEBxgv5n69XhckifG56jvvbTZ3aTp+5nsFswOCPsEJc3QNAmkSLPN2enpanEvI\\n\",\n       \"8qzTarWK2WxWHGtkzvKMI2CEBCQCPbYvTY0MnJyc1MpR4Dz5cnA31iEj1TaYdrQiKoI5znu73a6l\\n\",\n       \"kh8eHmqnDhkHc/b4+BgXFxcFlWbd0FM5E0F/9jlWBPaM3zQCUwsIFu3U4AxYv0fU64IR0LD2yKj/\\n\",\n       \"uGwCiKzXJ6N7/J71EjaIvd9sNuPy8rLIooNYzw1pZWeETGsgzZyzK+gnp81+K6BDhrxODnqzTTBI\\n\",\n       \"YL2Zi7b6XTwDxyjTL+wLeA35/Lfas6X2iIhyNEREnL3ziMrQ2KAhDBg2BCWiyrFuNpty8ozvkS4k\\n\",\n       \"WmVj8Uw88OzZs6jeMK4ldH5+XsZhLxbPG0Vh1MFKOI+PPru+UebWsIE/f/5cvvfhw4fo9Xo1Lo5T\\n\",\n       
\"EaTfiL7H43ERlOl0WjbNcrmspREYF+vFc7wWFl7PN0JOn5364XcoxWDOAQaFSB7HAycQZMhoAI7U\\n\",\n       \"ZrMplbqJynhOLnXAM73OHguKxbJqfhiN73kunKrOCKv5LVaYPNcOljc7xh5ZArXkfcwd77Aj2WzW\\n\",\n       \"i97ug7+RDxu29XodnU4nnp6e4u7urqwFNbZms1ntOif6zJH42WxWLpyNiJJudo03o0qkBbbbqthl\\n\",\n       \"xM45cd0uZNsOsHkXeX1sWCiSyfdQ1Ow15sPFOBeLRY0DaQUNkgsKQloSFM9OJmm/09PTcprPaBpc\\n\",\n       \"Qp7F9/j9VqsVs9msoDK8jz6x1zIagM4kOrdsGbFzs0PtcgARVYV65txyFFGllDCkpk3we/yM/4/H\\n\",\n       \"4xgOhzEejwtPkjVHz+/jaq5Wq1gul9Hv9+Pbb7+NXq9XmxsbZ88XDogDnoxy2WDn1BJ2IZfIARWx\\n\",\n       \"/vX7jCx7LeyoYm+YHzvCOSWGg2bUzGthRNU6CocfJ5QK/bnlWlA8i2yL9RA6HwfMto2+OP1Og4u6\\n\",\n       \"b76ZD/4YbGCsyKKBDuYNnWlk1HzBzWZT6iVGVMj4PvS69Pc3P/k/bEymNzjohr1Q13rCkWAy+QyF\\n\",\n       \"yQS4mCHCizFwuhBPlujJi4SBtEIz5MjGQeAsrL1erxRH3CeEjNFRgo2yo7WIeg2WvLkRCMZ5f38f\\n\",\n       \"P/30U+lLTgu9evWqjJd0l9NYNCJ0ogzDyk7d0V8LGvO8b4z8nZUshgXFYRTEBGFD0vTl6ekplstl\\n\",\n       \"Oca7z7BRbweZcS2nzWZTyMD8vg1QNsI5HWBo3NwQK0Q+I4XH/NPs/Dv9yzgyguE0If1DBjL/wpwq\\n\",\n       \"r2FO69B8LYn7xJgwRBi2nMZCpiynnz59Kr93fX1djFzEbq/ZIeFqk4hdhW76MZ/Pa1XWIf7jZJPi\\n\",\n       \"5Tkc03bh3Jz2xOBmNI/5zqjqer07Mk4aLyOOrBXf974g9eGAjM+8DkazQdvRlQ48mX/SKqTzaK1W\\n\",\n       \"qzhuRrLMi+MdONSgqBhhGz2cWWTOSD7vc9rLzYEl85gDFjv1ds5ms1mRj9lsVvYwP+OP9yEBxOvX\\n\",\n       \"r6PX69U4kBh97EROw7OP8rpwbN5IjsdHP7J9MocUXZXBA6NA1sME3kavWSeXSOFZdnT9HT/T6Sr0\\n\",\n       \"gwvfdjqdstaAHTxzMpnUHB4jZNbpdl7QTfxxOhLZ9p+cyeFZDj6wldb1zCllfdB/OKm8L4MmzqZg\\n\",\n       \"f0A5vU5G7fe1l/IHL+2lvbSX9tJe2kt7aX9nexZECh4PKauI+nUAREQmFhJRkgrAO+Q0gSMiPHVz\\n\",\n       \"HfZFguZROJIy6gEilGFtPNSMVsBN4DnO+RKRZoSAZs5P9vrzBaaMgWf7RE1ERX5l/PAXvvvuu9JX\\n\",\n       \"OGOgM/AP8Pg5CeIoGUTCpEXe6agSNIfv5cq7hsmZe9ICvJv3Ad+SAjL5m2gb2aEvpJJAKUDgeCbz\\n\",\n       \"vF6vy/U3EbuojHcSkTtqcYqk3W7HYDAofWm32yVyBWkx3wUOivk3ERWXjz+MhzV2FOl5A0HwPDvN\\n\",\n       \"CroFquj8v7lnTjnkfZQROU6DzWaz6Pf7Ze6Wy2V88803ZQ06nU45vLBareLk5CSGw2FBDowac8ko\\n\",\n       
\"qS1QUw6lgEY4KmUeOEGH7JgjR9TtPW6Z4gLsfegwiMt8Pv8KjSZq9VrQWEuXMXAKgetjzC/h2fCS\\n\",\n       \"HM1nsiyNiu6gwp1Op/AYjWRAZjafJyIKSo/cek2MOBuRMPJJoV5H7ehN5HFf+s78JMZtVHSxWJSD\\n\",\n       \"LT/++GPc3NyU+SRrQV9ZcxAIp8EuLy/j1atXhVuVdQbr+FsUkswZBJ0BlfIBBqOdNMbkTAPvxj6h\\n\",\n       \"z9AlOWUGwpznk3XEZoKe5DQga+uMilFs0Ni8xhEV9QN96u9jO7K+dzqT+QC55XtGeninm+0e46a/\\n\",\n       \"RobJotAPI1KgrKSC6QtpevaH9Rr6Mp8gZ+ym4uxrz+JIuQ5Q6YhOugGl0XELXq/Xi06nU5TUwcHu\\n\",\n       \"1nmcLhsMjGur1YrJZFKu2fD7fA0Dgsh34AH4NJRz0iZsMx6qOJMW8+kNp2S8kCbPmSzHZwhuhhaZ\\n\",\n       \"KwQtp30Wi0XtuorxeFyc2IuLizLfOBZWUu43HBTGb8VmZzFzg5zeYf2c1vX7nBO3s8hzXQfFBFcT\\n\",\n       \"IDl9yLtcSTqiqnzd7XbLFRhOtUbUT6Iwl3bqbFRNqiUVyrgg1zudhlzCAcMJ4fd8EsUwOfNhBUEf\\n\",\n       \"zPewMnNqIh9ggPjtk4xuOCSso8fQaDSKg+6ThxcXF7Fe747rX15elis8InYpuvV6d/VHfmbEzglD\\n\",\n       \"8Tvd//j4WNtjNsBHR0cxHA7LaVyUK86ygxKUJ59tt9uSJsIZcaAFiZnx2rGBU2jj6DWMqBxu3ucT\\n\",\n       \"UyhxE4XZKwR8JrV6b5h7Yl7hPgcIvg7cOT5rt9vleXyGw+u6e8ynU3AYLIy/Tx9SJds63XNjMnUO\\n\",\n       \"SPiMNNft7W1ERAyHw/jy5Uvc3NyUfjCn7Pmjo6Na6i5it7+4VDyiCjhYi/l8Xpxrp0tJW7EGDlCc\\n\",\n       \"XkIPZHqCA+4cQNPsrFh3M347vDzP9AzWnL3CuzL/Kn83Uywi6ulk+or8kDbn9+ERIgfIkcdIn1ar\\n\",\n       \"VbEzzLlTxt4n9I858DOzXaEhR6QJj46Oio63Q+p38n8feMjyjZwyd56ffUGT27MV5KRjTKr5P0ag\\n\",\n       \"InYL2+l0Sr7bXKfLy8tyuoPfN2vfJ198XJJNDwKQvdNc98mKD6HNRhjF741KM/pGiQN7cuaZAgAA\\n\",\n       \"IABJREFU+0Zw9kVJzjPn5/K3NzInVY6Pj2MymZRo1YRrol0QJ96FAOLxm9th7sS+/ngtnU9H4aMQ\\n\",\n       \"LJR2HHL0YWQwol4HhfcTkRodQ2HacLCGg8GgnEZEDs0twunx/XcR1Y3z3mjmchHxgK40m80y38yd\\n\",\n       \"x+IoHfn0VSl8xkbGONuRYo585RGySO01F+20XIFYeZ38b6M/rCHPxvFzNHt9fR3v3r2Lx8fHuL29\\n\",\n       \"LcVgW61WQUDY10bH7u7uiqNkZ4kSHEblGN9oNIqnp921RhDfOegREQVtshHhu1wwjYEFYaGvrAEH\\n\",\n       \"IzK6AKfOhHcrYpBzrzt7DfK8uUrIE9w7O0kYKYI9notT52icv3GGMhpPX+gPjm5GG/hjRMrIBnw/\\n\",\n       \"rslBVlqtVgkiHXCiC3imZcpZAAj8vvOz0WjEdDqNu7u7mkOCTNsYw7vjeivGsVqtiv4bjUbFwfYf\\n\",\n       
\"+uRg0HPD5+bLZITCqG7mq+F4O1DgWXZK7Chnuc3vyjwkyw0OBXvA/DT+oL/4HgVzWU/khDmNqLiJ\\n\",\n       \"+4qJmvzOGLmmbLFY1AKiiCoww75YLzCH7Ll9iHJExYXMTibv8il7I1roGfsf9IF3+5nes/vas57a\\n\",\n       \"c2TiiMUQfkRFGPalnBZwECMMGMIAYuLFyKkFnuf3sThGmvievXwrWP5er9dF0WZ0jN/B83ZxTIwN\\n\",\n       \"EauFhu9ltAIHaF/EgYM0GAyKU+XSEHd3d+VyYMPWfldEfKUAbSTw8C2Mnh+iTfrKJsMpcIrT76Zk\\n\",\n       \"A435oB/8LlGjjYn7iUywvswbztfp6Wl5Hv10VVzkyugQlznTB9aQlKChfJNPeY+jX69tji5pJmIy\\n\",\n       \"D5yuQdmRxvUYGQsKgNRvfq5TmJZvO+VGDj0OR3+j0SgGg0EcHx/Hzc1NzUA9PDzU0micqEGGbm9v\\n\",\n       \"o91ux9XVVQ3lIaXHOhwdHdUKnOIocmLNaTCQYaOG/i6OIM4xDSSTvkXU7wPNhOF82ADlbQXOAQoc\\n\",\n       \"YgJC5pTvshd5dqfTKbKBYbChcRBH2pi1NaqELuAzBx9eZ+ZovV6X6tbIBQEuCADpdx9rd0rMe4Q1\\n\",\n       \"xcBaRjFcm80mPn78GB8/fqyhIOx75sDGjGet1+vodrvFPmADWK/5fF4cKcopoDN8uTT6hSAb+WDt\\n\",\n       \"s/PiMaAnmYeMzGU7wJxiA52moy/ekwYd6BcoEXonIz127GxPCKLYB3bY2C/0w4R21w9DtzLfRhit\\n\",\n       \"h5BBkN+cRuf9GXVjL6NLeRbvcDbFjib7hXkxoMHPQdWto+zcGeVHzvZlhNyetSBnxG974M1ms3bM\\n\",\n       \"nQ2XYT7gaRaVCDyiqprcbDYLKmWDzXuJtuwQdDqdGl/Eih+h9QZ0P1Fe3hg+RbRYLGppODsVhhv5\\n\",\n       \"LKLaDPboOZHE9xCSiKpOBv188+ZNOaXEGszn84K6tNvV1QwHBwelrhSCbmTICs0Kk4gUAbXwZcVj\\n\",\n       \"ZYPQ2oB6LegDa4/xyjLhuVsul19d6WIl2G63yyWwpM0Yw2ZTpVet+PicyMUKkrlmI8/n8xgOhzVj\\n\",\n       \"DLJgY4hMoTBAshg/6Rbk2gqaueNZv2XIcKic3jGaiNPPOOgnishjZD+gcBhfp9MptY2QVd53c3NT\\n\",\n       \"ngNSxjPH43GsVqt49epVbY6RUQwgR9iNHOH8b7fbePXqVRwcHJR0Kf1yqQmvP0EVDpX5JaxFs9ks\\n\",\n       \"6HLELq0E5+L09LRWK4o5Zy/ZOQaFAgFEL7EW/swpG+YQx4a54Hs4GMiU0xvZWPoUlNfRkT7yh6xY\\n\",\n       \"nzCudrtdUnsOWtl77B+nshkLv2ddT2s0GjEej+N//ud/alyv4XBYgg4bffYf6TmuguJd6Ajq5+Ec\\n\",\n       \"45h5r7oiPDo2IxDoc+su17Nj7tA1diKRUebF68ReyWkjO5vZgDvt7sDSmRh/17bKTibyz/epEcj+\\n\",\n       \"9HNwgAAYXE/KJ40dWNAX7MA+gMIOIHuDZrvgvhCws17WQ+wVZ7XyiXyekZE1z31GInOQmtuzpfZA\\n\",\n       \"V+x1I8Sk2kyMJL0HidQGGqV/dHQUJycnNSKnc8URX9dsMgRKY6FwpvZVGiY9kBf58fGxcDYspDzT\\n\",\n       
\"KZYcmTkKsKKxIsybG+VFxIZgoExxJNjs5onglCBUvB+j9eXLl6JsvGmazWZBFWz0czRmw29HcTab\\n\",\n       \"FePCmHxlT85He/1cBRujZ2XC3/BxVquqSByfHRwcFFTn5OSk5iw4PWPlx/f53AY3okpDrdc7ntB4\\n\",\n       \"PK5ddWP0Edkz0Zh1hLPB99jscNI838wVCF+73S6KjHQfht/H3B00MAZHiY707IBlHgHoC880tG/+\\n\",\n       \"Fc+hRIERzul0Gufn59FqtWI+n9f4eNQwOzzc3QmWUWoU5uXlZUEyaexpAq3tdlvmO9d/MlLdaDQK\\n\",\n       \"0mKibESUK0dwQFwviL3Ivttut7XinZ1OJ2azWTG2fMbeZOzHx8fl0Md8Pi+6pN1ux8XFRY0OYAR4\\n\",\n       \"X4rKQYYdKe99O7zdbrdwi9CjdrIODg7KtSy80/okI2o0B0n79jd9PT8/j81mEz/88ENERFkDHA1n\\n\",\n       \"Djy/V1dXtVQqKPVsNovJZFI7MICxNippZ5B5zKi5ETfmz0FMDqzcsjNqZ5AACf2HnsXBdSCZOTv0\\n\",\n       \"OZdMQQcb9bYNw+7hMBnpQS/jlDrAZD/wXdpsNis8O+xNTouZU+h9jKPDM22nT09PS61Hk8at90hj\\n\",\n       \"0+xcOpNB35AlAg/bWfY032cMDhB+q72UP3hpL+2lvbSX9tJe2kv7O9uzIFJ4fYY5ncckteDoa7lc\\n\",\n       \"xsePH+N3v/td4SFF1AskEj3zTLxhIganAEnvEB35Ql/4FkRX9Ie+m+Nl5MynWohmTIzm/4bhI+op\\n\",\n       \"wdVqVSumlnPgPqFANAliMZ/Py7xA+uQZRH1454Y0STkalQAuJ32R89P8ntN+pD2JZI0G4dEzp0Sw\\n\",\n       \"EfW0QYb+Gbu5YE5tIkv39/c1/oWfDZJjFILn7LsihnUAgTKXy2k4TnDxPdaXIndOERoFog/MI/07\\n\",\n       \"ODgo62ekw1GnUz+kYUCDvL6UKCAd5aPjcAUcsTqSZm9mNJF3wDHw+Ej5cBKOasZ8HxknCvbVMp1O\\n\",\n       \"J25ubgqnyVd9kCbNp+Q8P5BZ1+t1QUjMU2TPMR7k5uzsrJYG8vg5AZq5TkTDeZ1Mokb2+d5gMCjr\\n\",\n       \"a54X3wPJIso2Wsh63d/fR7/fL3uf/XpyclLG57W0jBuRcoqMvWuS+uHhYSkbYmTJZT32Veo+ODgo\\n\",\n       \"l0475U4DzUF+8v1/8/k8Wq1WvHv3Lv76179GRMRf/vKXMldGQJBFOLFOqyOLyE3mwvD7ZD7W66r8\\n\",\n       \"CYdzzImkWX9xwnkftYODIs5EGJHKtAh+x2lFv9Of79PZzLXRHNaI5zklStkM0LDMn2JtSFVmCgxp\\n\",\n       \"aCO8EVFOiIO6+io20HdTdLyGTm2a9oCu4BAOaX2nZVkrj93FYp2iZbzOhNh2weFifLaHeW1ye7bU\\n\",\n       \"Hik54Nl9REQb7O12d+/br7/++pXy8xUPNg4IHgbAKSIUD5PLwkVUqSSMFD/j7+zE0Xy6ACeC76EM\\n\",\n       \"UZqZm4DSwdmzEnLe3kqRNBdCDs8gYkcmRzHgVGUuFpsJ5Y+goqAMWTOOXCU455+Bh7mbyRuD57Va\\n\",\n       \"rej1erWNQVqS8bBOTi/kzeO0RualoEibzV1tL6fSSGUwt05HopQx3CaqIh/MiU+Bkj5CcXv96Ctz\\n\",\n       
\"h7IyRw5lgpNi54W5gOjs+SYNmZ1znL7pdFqD11lDxm5Z5x18hlNppx4Zx4jb2WIdUNQmd/NeHCx+\\n\",\n       \"9ubNm0KWJ8XnvQ38ztqavI9D6Jo53kco88yjIDXL/uWEntcWJ86pD9/7x/zntUDmHx4eSvqYtI2v\\n\",\n       \"ZWJNcXKdmnFwieyuVtX1OxFRHCjz++zwO3hysMNn5qv4vayRT3wxZ5YB7xnkm1Srj+7z3cxz9BH1\\n\",\n       \"L1++xPX1dTm9eX5+HhG7E1+sz2QyKfIYEV+l9eywsP7I2eHhYSFR41ijD1wtnc+tA/I1Sk7xmU5g\\n\",\n       \"59QBNLLDuNnfrCFEbZP2mWfsGjrOOop3npycFF1rh81XOzHmiPpVPXkd0dHor/v7+6/2EH30wYej\\n\",\n       \"o6M4Ozsr9eDMceX3Scs6aMXxJI0KVYZ+5sDJV9hkPeDmOUMnWQ5x6H0LiteH30Un4hhbTnJ7FkeK\\n\",\n       \"/LvvEuLOK5P17IRERHGmvKFQ6iwWiE7E/gt/zYWBmGYCZkR1CtAnmuyouViZERqe6xolfObTNe6D\\n\",\n       \"x0Cfs3PG99hAvvA1olp0bxgULwgJCBNzavRsNBoVZU8fbDC8Kc2DwmgYQeDdOGlshG63W5xnjKIj\\n\",\n       \"XjZqPo7P/DpHzffsYBgl4Xsce2Z9fS0GRGUXFuVZ8OPYvBQInE6nxXnAYNj5NCKFY44iOjk5KQqa\\n\",\n       \"KMyGLyKKExJRlalA1n1C08TKrHhsFJkDHKqM5vD7R0dHNZlDEeVIjDXDaTNP5OnpqTZG9hbvYY1w\\n\",\n       \"cJkX+sZccRiDhpJlfRj7fD6Ps7OzYtTyiS7ewZ7YbKoSJr6yw46X55H18T133PdoZI/G+JCzw8PD\\n\",\n       \"WgFYz5NPipn/0m63Y7lc1vQD1zUR0TM3Rt7Mn2PO7FQ4Kke3Mp8Y84j6lVOg6uarwfkDseZd/M1Y\\n\",\n       \"WHcHbehR8wojojg50+k0/vznPxf0IWIXuPT7/RgOh2VvWxbfvHkT/X6/hi6zlvP5vNiTfGKVPqGn\\n\",\n       \"zE3FHvCujNagoxzQ8Szz0RxgmnNjXqFrwjkI9Lp5Tfy9x8fHghzybJ+cs23KNsrBF9kefk5RWOvi\\n\",\n       \"iKqgMDaIE7MRVSYCGwu3EZnieaBk5pbRjJ4xBmqcAUBkJMu8RjugRvzQD36HMw40Aj+cqX08vyxD\\n\",\n       \"tbX6zU/+D9tgMIinp6cC2UdUG9E1nfZ5/Cb70fgMJZWNMIuybzHwkBGuiChRPBOfC74RJWcSGu/y\\n\",\n       \"ouSIzgrUhhSY1saKMZjg7MjAkRb9otK2U4pcagp5LyLKEeyHh4eYTqc1UjFwP0egQZkidoam0+nE\\n\",\n       \"4eFhSVOwaVzPC8PoU4Kkl3AADDc7mjXUi3PCGtqpIxXBnBjJsuIl8rFjiDE7OjqKbrdbDBOGm8j8\\n\",\n       \"9PQ03r59GxG7i6A/fvwY0+m0GFynEp26cLoyokKzDLX71I/RKUefRNXIi+UwHyW2I2Vj5tQn7wCZ\\n\",\n       \"QMaNMtIn0q92sDEoHLawEeZ7rFFWRkSBEZUjAEG52+1+lfLFqOFEWg5pmVhqMjLvY208V5BUiWjt\\n\",\n       \"UPF/jGx2vCy7GR0FcTWSgfE4PT0t83Z5eVm+z95zmg9ZZJ2NqCIXXHjcau1qX/n+RAwvz3IaGUOf\\n\",\n       
\"5cnpG1LD3k927iOqUiGsO303OuPPLCfMN87J0dFRPDw8xIcPHwoCTDDm0i085+rqKi4uLgrq4EAY\\n\",\n       \"fcfzbStYx31IBuvtAxg52GF+fa+na6Txe9bftO22XqqFuTZy58AZvQBC5LX03X9ZNph/9lpGB0Eo\\n\",\n       \"6Rvrip05ODgodRstH3bkcrYFHbJYLOLk5KScAMdeYEc8DkAH9oRJ9F5/DgyAQiKLOH527AFP0AXY\\n\",\n       \"eM8N+9GOm9OOzhCwFpaFfe3ZECk8SqfJUCZGnPjMnAh+FlEvkmZBjqgfjweWd6TAd4wiRVTHOfnD\\n\",\n       \"aSO/1wowIwTeHHzm3Kx5LhH1iMX5/Ih6Ne2np6dSE4pmRbPZbAr0z+Lf3d0VxTidTmunQhyxcxop\\n\",\n       \"IkpF8PPz8+h0OtHtdsupjHZ7VyZhMBjEwcFB3N7elv644vV8Po9Op1PWdz6fx/n5+VenLGi8H4Qo\\n\",\n       \"p1RRfrkuiIv/2ZFgXTKfLSIKqoDSd8oXtKrb7ZYNykW5r1+/jm+//TZ++OGH+Omnn2qOMgoB5QhU\\n\",\n       \"bcfEacucFvIm9rhZWzhARnEjvr740w4hcgDfh+85TZxlzkoj86BwdLiWxcqbz1DKRrlIkbCP+/1+\\n\",\n       \"kZmnp6fodrsxGAyKMbXDg6PbbrdraZiLi4vYbDYlGON5KFu4Vqy1nR8cVT7DeWbe7HSDgrNOpB55\\n\",\n       \"hmUZeTDyGxEF1To+Po7RaBTdbrcYmi9fvhTejiuFe16pbeV1Ql8S9IBO0A9+x+P1M5E50yIcyeNw\\n\",\n       \"8SxQepDNRqNRDCTzZLTBXBh0jNEr1mk8Hsd0Oi2lQqbTaaEnYNhJi67X66Lfrq6uyryxt5GN0WhU\\n\",\n       \"6uRhMJ3aJHXO3rdtITjBbiDDGFZslDlLPN+oDA0nHtk3hSSvVXbQWbd8GtIBFPbPzqplmsLLDqyw\\n\",\n       \"P9hEZzEYG+vFvFGyCCcJPcH6bja7WwXOzs6KXeEz9Ln3GfOGDMKvIigHEGk2m4Ub6MCCtSM16ufy\\n\",\n       \"bJ5vm8+c2yeIqCqiQ4Oxw8sa5L3p9iyOlBWz0zM4Ihg+p/Qi6l5+9rAj6gXaIqLGVzEUyLPYOEDW\\n\",\n       \"5oegLDFMjjxBxPyHts+o80xvTBtvzwnHPf08+spYzWchEjUxMGJnSF6/fl1+h42BoC6Xy1J1FofD\\n\",\n       \"qS+M/Wq1uyft+++/L+/E0YjYOcWOhDEkpAIMm/NdhNbKHUXovDzzwsZHRowCOS3CumUZIRLMUDyf\\n\",\n       \"+2gt88RVRL1erxi9VqsVg8Egvvvuu/jTn/4U//mf/xnD4bCsL0bW8DTrStSKI2VEEtTJt5bbgLHu\\n\",\n       \"KAejjTiEzIH5T0boLBvmXvBzFBGRPc+m0Cj9BOEBceT5RpNBHu3wub6YDUGn0ynOOd/xFVI4y6vV\\n\",\n       \"KhaLRRkf/BajypPJpOwd7u1j/3ivYWAc/duxg6SMM4GBtrJGvrLj4CryyBRR+XQ6jc1mE+fn5zU0\\n\",\n       \"g8Kbs9msoL3MKVE58m/n1U4VBomfo18w7Nad6CbW2PrV6Y8czDp1OZvN4vT0tFbAMafrbaQyOokj\\n\",\n       \"9de//jVubm7i5uYmvnz5EqPRqHbHqp2gi4uL4khh6OCKUeogoqp677XyGK1PPI/MicuheI6dqiZg\\n\",\n       
\"jqicTPa7qRrIBuiJUTxSdqyJMy12kIx2Mh70DX/b3uRg3ugRQQLvpaRBRNTmbD6fF3pKRJRUHraN\\n\",\n       \"qvo8v9lsxmAwKOUznCpnXo0wM6fISa/Xi6enpxKwj8fjGI/H5Wo3gxlXV1clqwV5nn6iW0Cb7ACZ\\n\",\n       \"e+xMB82p1Jze+9+cqIiX8gcv7aW9tJf20l7aS3tpf3d7FkSK9J0JmeTA8c5NArTnmKNLokJH8TRg\\n\",\n       \"PaNbjsyI7vHQDXHzBxItzeRW0kw5dwp8nI8QO5WYYU6e7Zw4P8PzNnxLP5fLZZycnESv16shK0RA\\n\",\n       \"g8GgHAflZFnELjIZj8clmjHScX9/X+654udEwhwZBynhpA9z6xTtdDqtpW45jt/r9Uqaj3nh+Zy0\\n\",\n       \"c7TCWuS1N3cORCYjYEbzvE5E0ETRPBeiJLwdrpKJiFIcsd/vl2j8v//7vyNix58immV9OZEVESXd\\n\",\n       \"Q9RDusMyBVpocmxOJxithCdAhEqqw+MzGZ1m+crROXsPJMipa6LXfr9fSyVFVAVJfQqJMZCaY67N\\n\",\n       \"2QBhcArHiCtzBtrE73uujci47IW5aKR5eC4IIRGs0UHGn1PJlq+cnvfedVorouJUNhqNODs7K6kM\\n\",\n       \"5DQiSuTtS7aREVLGRqMiKg6JTy7yPpel8PicyrJei6hO6zIvvjnBd+ixVk6BHh0dlfQWaAoNdGBf\\n\",\n       \"qnG5XMYvv/wSNzc3MR6PayejWAf2InLE+9DPy+UyptNp3NzcRESUA0kej/Ui68ffToM7HeR+ooc8\\n\",\n       \"NsuCT95ZT8GnMvLrkiiZo+jsijl/tm3m8NAnZ2accqMotVE+xpdTwNjZ+/v7ImvoPqPi2CKfggUh\\n\",\n       \"uru7q6FORgKZX5p1HPab752fn8doNIrb29u4vb2tpei2223JhDAe0wGcsTCdJcuBqRnOLpGVYH5A\\n\",\n       \"0X0gJ7dncaQMX9q4sehZUIEUUXImgtnostA53cBGtlGwYPq294jKkQKKNMTpMaBkrbzNE7Bid5/J\\n\",\n       \"PTtdyN84BZ4Xb0znyk9PT2O9XpfqzxjciArGxKHKRFE7hAiQlcuXL1/i1atXtes8IqLwWRBscwVI\\n\",\n       \"tZ2cnBRY1afafK0A9YNYdxOUzZNptapq3/w7H6unL05t9Xq9kkqh9pYNtMefa9Q8Pj7Ghw8fipJi\\n\",\n       \"cwMh4/Sfn5/HH//4xzJnP//8czHA9M1ke1KakJF9Qo/vLBaLWK1WNeeN9edIvlMSXjPzXZgXnIFM\\n\",\n       \"nrSxxfGPqO6OI3VLfRi+d3V1VQ5nWGbg6TC/BwcHZQw+4YWh9TVOELwhKzsdyd43p493MC/T6TS6\\n\",\n       \"3W7hbiFfOFOcfDMH0EECZG3Gj9NOanofcZh58/gxNjg4yDckXMZ/c3PzVdrMxHGnIni2dQfvpuU0\\n\",\n       \"hJ0Dp4b4zKevTLrm/zjlJlR7jUipuT/oGTsFTjUhY+bLsP7w37jSibWg7+fn57XDCBFVSQl4NfP5\\n\",\n       \"vAQivnEiOxnI0j5iOGuIo5ADV5PQ85wyfqetmQOCOQIJ82ztrDm4MoeL5+f+8gxsm/mVPjDhsbLe\\n\",\n       \"Xnf+9tyzf7PdIUC0TUQv4dS7Vhpj9z5yStDcJI9ru91Gr9crJzdxrCKinMhsNpvl5C6O4mKxKAEs\\n\",\n       
\"NoUDWOgA9JffBzjAHwfX2JBcb87t2RApIi07NgxwH3HRvBOUcUTl9CCEv3UaA+OVo90cqfAZKA7H\\n\",\n       \"5BGMXq9Xi/JyVIp3jBfrz/OxUnvYmd+VlTdzY0/ZxMDJZFK7RgOHk/eiyFyqwIiaBX29Xsd4PI5P\\n\",\n       \"nz6VInteJyKd4XBYnFA+I6ImmspFGVFuNiSNxq4WDgqMUxyM24RFbzhQCis8RxEoAtbdPAHeAVJH\\n\",\n       \"Y21Ho1F5rj/v9/tlXMhQxA6pe3h4iOFwWE4uOorCECHXJp0iWyjxZrNZfoZzimPsvWBFzJz6KDN/\\n\",\n       \"81k+vWIjQWOfwIcyIjsYDGqkbDtnzCVzYkeRnyP3j4+PBVmyobGjxztwrEHIMm+Sz7iXLyt371Hm\\n\",\n       \"lNIIrIk5HeYNsXZGJRy5Zh4MsoS88TmBgU/yOlCYTqe1k8Pwh3CiGWNGpPJdmzwTh5r5Mr+H3+Xd\\n\",\n       \"x8fHNZTT+xK0h59D0sYJ84lhAhk7GEY52eM4P+adcfqY+TWKPRgMCprrZ7NX0NVwelgL+owM2JEw\\n\",\n       \"Nw7Zo4GoIgdGI1lHZDyjSOwH633PtYM5non+Bi3JzyIzs4+jg244OzurodXsNetgB9jMPX8bPYXD\\n\",\n       \"yqEmH5Biv+D8eE5Bxwjmrb9AW3HQ2fsg271erwYmMG/sFUrRoC/JIFm3ucYfVwKhvwjwWN+M+vFe\\n\",\n       \"uF6MzXNuXb6vPYsjhaA5ZcciUYsmoo44+Hi5UShHQHzPzsvBwa7iro9UR1RwIv82pOyjtrlmEwoa\\n\",\n       \"RYkDwHvZtPzbDtH9/X0h7x4dHRWFmcmduQaJj8lamZg0mNEhb2afhLNTsNlsahdX+oRZRMTPP/8c\\n\",\n       \"h4eH8bvf/a4QOc/OzmKxWJRNul6va0eNqevCOjraYa0cDUdE2Ujr9boUdWPecCqog+Xostvtxng8\\n\",\n       \"LmlG+mC5oFKzDRupM5OYMVKGrUejUfk8YpcyePfuXTl16jlrt9txfn5eNud4PK45B5vNphRbpZ9O\\n\",\n       \"b1HzKh8tRqFzIMCKwAaCuWK+UfoYbTvpGE8Uh1E3DAunffr9fq0vjup86pZnsDfsuCJ7jMPGiwAK\\n\",\n       \"4w9hH7ll7/Ne3ud0wOHhYQ3Cj9idAmWd7UBH7BBAB17IAb/D2tLPjEowDqcw0GcuIEozCsEYjGI3\\n\",\n       \"Go0SgNhxPzk5KU4suggdh7OGgjdiAZkeo8d88Rnr2263a0VVI6qsAOiokWwcL6ejnBnImQajpfTj\\n\",\n       \"4eEhrq+v48OHDxGxu9B6u92WuwhJ4zFG3/lnfYIzZtQw2wEMO3/TN+8fp7iQJ88Hn7laNpkNIxY8\\n\",\n       \"L1M97HDxu5nEDJJnpNbUEz+LuXbakv3mQxrsa+qEeR1Jw6JnrBetz/LJNf4mEEQ2jfjzPAchLhzt\\n\",\n       \"vUFAy7Ndm8oEe4IyZw6MdkbUbT52EefeqXJsBX2xrCAbIJ9e1wya5PYsjhTXN1i5w6lwusLKDUcL\\n\",\n       \"A+S6ETgsmYXvWin27mn8m1M7OdpFkFHk7hMbzt68I4yIqI2PvrLRDg4OSkRHCmq73Zajzl5Ep1Ai\\n\",\n       \"6mgDEfBms4nxeFw7Een0m3PqfM4fjAf95mTJ09NTfPr0qbbByIHj4JjvwfyyoTi6zjiMlqCw3U5O\\n\",\n       
\"TuLi4qLmnD09VSf92ARG8nAOObWFXBi+Pjg4iMFgUFN8zM8+B9Och+FwWIz3x48f429/+1tcXV2V\\n\",\n       \"vlhm2u12dLvdchLUChwFtNlsas53RJRSCzhChtSdTsBo2CjyO0a8aHbOLft2uLMRQsmcnJwUR8bO\\n\",\n       \"MO+9v78vR8xZu/F4XL7PdS80X9KbT3AZ3bLRZb5Q0vP5vMgQSDFjmM/ntdQEjj9Ij1M/8PDy3EVU\\n\",\n       \"aVY7oMw3eySnaDwOR7s23kTQcBORfRxQAh87505pwm9kj+IAohvNAWPeQJBwcJFD5pqUGuPgmDnO\\n\",\n       \"XaPRqBkvn9g0AoBsZP2ZeTKbzSYmk0nc3NzE+/fvI2LHLZxOp+WkpGvj5bSQUQF+H7qAg1bQGfYf\\n\",\n       \"e5z5zs5OPkXOnDnNCvqDg2FEBj4pfbUNYxzIVj45bgfI82l9QeDptCZOFH0hOOdzbA160KnrvE6W\\n\",\n       \"WWeEvFcZE/bNgRJOFb/vsfMs+mL7i5whk0axj4+PaydCTZPhO7ZtjAmbTf/tyLKmzFd2vplj5sB2\\n\",\n       \"Ntuq3J7trr2IekSNgC8Wi5qXzO8xmdmjt4OTc8mZN+L3odgwxuYD8LtEnxGVgIPgIDSOYJw7t7Ph\\n\",\n       \"vlowvbldNfjk5KQYHsO9/G52qoBNMeARUbvviOjJhFRHMhjr/BlHi3/55ZfyPjYKzzMSgBA7heTr\\n\",\n       \"Lugj68mmGo/HpTZJp9MpRUJZJ37O77iiLZEsRofvMcfMj4nf1DhiDb3ZqLdENLfdbosBvrm5ievr\\n\",\n       \"6zg6Oiq1tLzx2ZxET8iy5xTFYmeKqJT0IGhZRBUJU+IiEx7t9Du9Y8csNyJxp1wcpbMXMWyuobZe\\n\",\n       \"r0tpCx80GI1G8fj4WI7vcww6Iso1O8fHx3F5eVnQtX3zZmdhu62KWObrGUBnSCuY94PJAAAgAElE\\n\",\n       \"QVSwnuZ44GA1m82vCvrRb2TJTqmd8OyYImc4Wk5fgSqxd3jm4eFh3N3dlSr7nU6nli5cr9c1nei+\\n\",\n       \"GBGwzuC59NXBHo3negyZk+Pv8Bz012AwKEGESfggFXYm0InIh9EMdMt8Po/RaBTX19dFp/z8889l\\n\",\n       \"XeCn7HNCqHhuJwD9wdwYeWCeKbZrpGNfHaKISlc5Tcn6Ukep3+/XSr4gO1nvOzuC3aLmGbIPqtnp\\n\",\n       \"dIoDa12a9y79w+mgYDDvdWrc+hzbyb+dDcIp4TOoECA5Rn6c9nbQwPtAY/cFagZO6FdOXZLGQxYY\\n\",\n       \"Hylz0zdAnAiGnSqnL6BgyEmmBjgrgL50cGZd6nna117KH7y0l/bSXtpLe2kv7aX9ne1ZECkiOnt9\\n\",\n       \"IA1E5M5dR1Qn8CIqz5xngdo4qqZlbxwPNJO6IyqIk1QZUYUJzuSusyfL951+dPRHdEmaarvd1nLM\\n\",\n       \"nKwDIs6pBiJhv9upJZNJI6I2f0QB5k0ZkQIVMCJnyPXx8TE+fvxYPnN/XOiUuTafIeffibR8pQVo\\n\",\n       \"A+lVUkMRUS5/7ff7cXZ2Fp1Op5zeoHgn7z08PCzRLRyC4+PjGrE4IgqR0vA5c5qRyXa7XQpykl4a\\n\",\n       \"DoclzcRnRqcajV1BTxNAiYDgpZkL6OrHHDGnEaGDHJGqolGdF/k3QmDo3PuA3yUyN/JKOmO73ZaU\\n\",\n       
\"I+tLZM/cRkRBAJfLZUENT09P4/GxupDcldDhZ3iOQZ2I7B01kk5gHzEGkGHkb1/qYD6fl+dxMCKi\\n\",\n       \"OilIP4xuEO16n2d9wt/ef4PBoBCjiXzNzeI+y8vLy3KCL6KOGu1LheX9nnmgPt5vHovTki6M+/T0\\n\",\n       \"FN9++21Zn1evXtWQGF8XQnqQ7202mxqnynJozo0RTr57f38f4/E4bm5uauly+mi03il4dBYIqfUX\\n\",\n       \"qInTmMgwXKter1e7TxCOGDqKdHxElbr23COnoNfw1rwPzVXi3x4DXC4u5zYRnfkCrTYCxhyB8PgC\\n\",\n       \"Yb6HHjP3yOhK5hIh8yBjtl+M0ffqGZXhO5nWwrtZR8+Nsz408wOZB2TYfDX/rnmc6C37AT4FbOI4\\n\",\n       \"v88YeB9/3E/rJGea3M/fas/iSJFnt5JCmbFghvJQdk5BeXAokXwKyXlOfteTYSfKBF/exxHqLDQ0\\n\",\n       \"E8D9PvrkmiEoChwKw+pWCjyLaxLgFiFkwM55TOv1unYCh7niGRkmxmjjKLnuj51baifRbm9vi1IF\\n\",\n       \"5vcpGxSH6y/Rz4gojs9qtSopM04GQgLmvRE7p2cwGES/34/Ly8saN8ZH8/meDS4OAQqSucGJg2Rs\\n\",\n       \"XgpkZxxCG0QcC5z66+vrr4jIPKfT6USr1SrOhDkO2XCSQrChpK9Uusc4+AAEyqXf79fSB6w5f5t/\\n\",\n       \"SF8YXyZdcu2HFbTLP0TsHBR+jqGlZlm32y1zk3lXcEmQG75HatPGjX7ijMORywEGBs9lTlgL0rNw\\n\",\n       \"aVgr+G/MleXTwYeNE59l5ZvvroTgbsVMgESq3Pucgw7oGe9tZA0Z9jrZ6Nop83dx2nEc+Z3RaFT6\\n\",\n       \"6TQnv8dcIXeWUeQok6gz/cBtu92VU7m+vi4pMo6kOz2ZHUf6i/PB6S3LFO8y+ZuggxS703AEyDgL\\n\",\n       \"rorN/HvvmANmI22Hn5+ht+ycIMP39/elrh7OKsE0e82cpOVyWTtV6nRZPiCQObQ4LgRMtl84Uug4\\n\",\n       \"02hciiBzGVn/fbYWGUWGLafIinl6lmGn/MxJcyqRMZijad6hHTUfdMmfmSKUU/c5Jcm8IoNOA+5r\\n\",\n       \"z4ZIkce0cmdBF4tFiWojqlN0VhgmAdrDjKgjUlmg7LVHVJGtT/9F7ISVSMWePXwM+uuoFGGirxZg\\n\",\n       \"nCeIeZC8eT4C4FNZjBXBN5LAZwgu9TMcsToK9ZHoiMoJzBFJRB2ZazabBWXguXDZIPtlwvV6vS53\\n\",\n       \"p3kc/L3dbmvcGxRoRBQukB2pXq9XDJUdglzozsfBKRgJyulrOYhy920OSLy0HIW02+1ymedisSik\\n\",\n       \"ZiNvdrB5FkYeZWO+mhUR/ATL9z4eg+fUXCWOFi+XyxiNRjVukeeb92I8/H1OEbo//D7HzHu9XtnD\\n\",\n       \"rBNF+a6vr2O5XNaOHXOlDPObo2ATRB0A4DxhYLn3sN1uF86REVLGa4SPUg6eL1AyuFRGDmmZC2Id\\n\",\n       \"wbyBSBIg4UDBzaMvzNf9/X25sDiiQs326Sbkx84s/UNGMBbZiex2u3F1dRXD4bAWUCK37D/zchz4\\n\",\n       \"8ZmdM0jdyI33Po19tc+AsQ7Hx8fxzTffFDlFpphro0cELaAojJH+cTDH/TBaYx4o/YPUTLCH7Ltk\\n\",\n       
\"QHZcCbaM+toxwJYhwz7NaWTDwQABBCimneNer1fj6nkf0qzHMn/H8uFj/qBQ7p9lw2vm/zM2n6i2\\n\",\n       \"3s+Ode5P5kcxfvalHXRkzXwzj8GIK33LfC36lfuAc4bM8Dsmydt583j+N47UszhSEdXmslHMpQZQ\\n\",\n       \"LK5ia2URUZ3QcLqQzzxwNoGFJqJeA8QeL+kfo1/8PpMOouZNyu/Td6d7GDffNbLCO4l0MELr9bqc\\n\",\n       \"UGETW2g8T2y8ffPM73lunN7LiAVz2e124927dyWCPDg4iMViER8+fIjr6+ty1NRziIL1vXMRO+eG\\n\",\n       \"Qmlee6pio0x4BvPGpsGhBEngokscUSt+iKTeAD7pSZ/pK30BqWKuXHsLInqr1SplLPxZLvppdAXD\\n\",\n       \"CjKBEaAPGWmFdOm0TE7rMF9GRmgcqaYir42knUWfaOIz1/ixQzCfz2O73Ua/3y9oCXJBSYjb29uY\\n\",\n       \"Tqdf3cGIXK3X69qJHyJ51jDXSkKZ8z2fArUDleePVC/pZxO3nY7OQZRT2tmJzk5UruoPQooDZ4cV\\n\",\n       \"J3KxWJQLvZlj5iaiIuCyzqvVqoYYOdVkXZNTEaQuz8/PizPFOlHM0vsmoqpLxR2DnK7mmegtB2me\\n\",\n       \"N36Wg1bQee5TYywRlW4nNejTWjhCOGEmm/Oex8fH2gER5tQHJFhz+hJR7U8/E6ceGbTu9NxmKgl/\\n\",\n       \"gxQ5jQ5SttlsirzwTJe5yXaDvWKk3X3nOw7a6SN72nKc0SDmyPbE5RDsDPKunG6zk4XTaxuS598Z\\n\",\n       \"n4ioIetGqegfwRzP9cELbI33gdfHQYazV54rk9SxMaxXdqL2Oatuz4ZIWVAiqguHSY84usTb57SP\\n\",\n       \"uT4oUzaVI3anL1CWjiZc3dn/NqQZUVUi59/2xP1/n7TIkZojZBSdDaRz196k8FIYZ04z4gAxpz5e\\n\",\n       \"mzemlRvPw9jbeUF5PT09xdXVVbx7965E0XDZUEKu0sxcoJydhuSKBxTV01N1QSXKzDwaR0bb7bZE\\n\",\n       \"rg8PD4UjxWWnpImbzfqlxYeHh8WZWiwWRUHb8BrVi9gpb4w1StqpS1If/K6LanJaCYXK/PJdw82O\\n\",\n       \"kkk1ohidFj0+Po7JZFIiea8pCginpd1u12qTRVTH/532pNbXPmeXsXAdB+OK2O3Rs7Ozks61zGw2\\n\",\n       \"mxiNRjGZTMoaY5C63e5XqKidN/YFStFyaDlHtiIqFA45Y12sNFk/5pp3mk/D33xmNMjPiqiuVLHO\\n\",\n       \"siPE6cJmsxm9Xq98n9Sd69tkPgaGwwbHCLfTJpZv+uZaWOiPyWQSl5eXcXV1VagCj4+PBRHPJSqc\\n\",\n       \"ZmGvWSfakBA0OkBEpnKgiB7nJgbv06enXcFXir0eHx/H+fl5RFQXWm82m5hOp185vBSX9PUnrA/7\\n\",\n       \"jHUxv4Y+YGP4HmMB/bZ843wakbPMYD9arVZJyzFPToWC6tAX9KxRPq8HqLBr0DF//Az9nU/XGm2y\\n\",\n       \"k8n/87iNKoFkGbhg3rBHNH7PKBeN72E3vE68y7yyjN4x3znNzvwbyOB7ZEroi/WJ05m23TxzHxrF\\n\",\n       \"c3Jg5fZsjlTOiRKZkitHOCPqV8Uw4c4zO41mD5QUAgLqlAK/7yObNnoYdoSZ75G3diTo5hQBSpyf\\n\",\n       
\"O9LJUR3OgmHKiIpfAYKR04UR9UiIfqIgMpRKy9F8RNTG32zujqOfnZ3Ft99+W5QbimA+n8fFxUWN\\n\",\n       \"xEsDnneEQiSGMnRfSYeYJ5TnYLFYxHa7LXVoInaOFFXE90UNfAcj5Tz6ZDKppVFdxgDndTab1coN\\n\",\n       \"EOkBz9tw46zbwWg0GjXEwiiA1xGD73U0UntwcBDj8bimNFlPIjWiQqclURLMC/LnO9bgLTn6zKgi\\n\",\n       \"xnYwGBQ5BVXzPpzNZsUxtzPIAQH2qvcTc0Pw5OBqX7rL+gIUDB6ZycGuep1T5uZeuA/MtxEeI0Q2\\n\",\n       \"FDhyTnMYMTZnx4VzncbhmSDSEXWUwgRukEnWkGCF9+V0NJXCHx4e4uLiogQH5iV2u92vUpcusmrS\\n\",\n       \"sB2pnO5FxtiH7EXvfdJ6yKJ1NH93u92SAmZuCDJ5nknFjUajxo3LurjVapX37iu4Sr9zyQGccHQB\\n\",\n       \"v2eZPTio6j/1+/1arb19NQkt40YVPV/W18wRAQjrijyxV4w+edz8HvrP8sZaOrhzX/m3gQEH5Pvs\\n\",\n       \"BfNjqk1E5YA43UtfkWHsutFRUoiM0/1jDf1er72DVQcfyA2ygpPKs60XbIP+39CoiJfyBy/tpb20\\n\",\n       \"l/bSXtpLe2l/d3sWRMq8JLxNkB48TUjZERUE6sg5F+jjuY7YHanjpRLR+Fl4tDyTyACY0MRgIEaQ\\n\",\n       \"hcxJ4r0RUXumI2O8crxhCOxEWY7YSE9Cms2et087MRaeCfzriMARBlE373AqA+Tl/Pw8zs/PC6mW\\n\",\n       \"MRGR93q9EsHyXubTcCwRkBE9omM4DnBAKAQZUXHnptNpjEajGsEb0uxmszutlhFHIF4QAtIboGPc\\n\",\n       \"ydRut786WkyF5cViUb7nQoMgIiawwzEhskGWmDfzuEA8WWOOR19eXpZIO6Lil9CXiApRAqbmea6I\\n\",\n       \"Tz/4Tq4kDzzPeJk3n2KKiHj9+nV5H3eaudI4awgyBEJGWobPQBtJhznVQLROFLnvdAzrnPkO7G+e\\n\",\n       \"YZ1BQw7yQQz+D+GXfyMHGeUwVwVUxgi00wTsd55Jior0tm9IAMEgtZWJ7+xVEGv6ws/pp6NrUr4g\\n\",\n       \"GhcXFxER5aAEXKmMyvl+P+s9kEanzbLOsO7KqASIFQiYUQnmB1n25cOkG41gR1TpXtBSI4DIGIgK\\n\",\n       \"KcCIqJ1KhGJhPhPIdZZTz5M5Qcw340ZOzQfy+M3PY//RF8s988d8k6nhfTldaKQLnh7fz5kJ7xmn\\n\",\n       \"+Iw25TUEoeJ5mROHns0ps2xT/DOjwvYJLN/IisfnPmcCPuNzCnYfny8j404L0y+vef5/bs921x4E\\n\",\n       \"WCuqTIpjkOZBIUSZxxQRX8GcTushjBYU/m8YMCJqBu7u7q4cteYdbHz3KaJevRvIOJ8iQsE1Go3a\\n\",\n       \"VQi87/HxMbrdbg1KBdL3JoyonBYfo+dZKCHmgLGa6Ggh8nzCZeEklisDNxqNcoy30djVSxoOh6Wv\\n\",\n       \"vIP19aYhXYTRd6VpGv2lkfZgQ1CLhTGgFEgb2hCycXHcmDvGjoK1U9vtdktKkFQRY3eqmMa84Xii\\n\",\n       \"nDMXxOuEcvW1DfP5PMbjcTSbza+u+0DhGvpmvlkrO6i8o9lslgMMVN2OiCJ/PjThqu849Dj9cNIe\\n\",\n       
\"Hh7i9PQ0Op1OjMfjuLu7K+M/Pz8vzh77zWlmO/sm9LJuKDCcFH7PTpiblSAOgOXIqSV+3/w5GyP3\\n\",\n       \"h7QkgQtX/dAfHNRWqzotyDNxnigZwglKZI7ncGkycoODzjvMWbEzQwqYMftwgxU9PCzW8ODgoHzv\\n\",\n       \"4uKiHJRwatDzjZPvNCvzgMNgg8P7bVhJcyM32VmhOe0KfcE6ykaPtCjr6YMS3pc4xvQNKgJzwx/2\\n\",\n       \"iw9TOJhGrhiTU+WZ5wa30Zcns2bMGfrdTkY+KJXT8ayRHRE7eKxBPjDiFJWDk+wg2kFxmo059QEV\\n\",\n       \"/rZzw/f4m+fZJtqJRp95juHcGgTJB0gcqJgaYeDAfzslSrPNxkG1DfS8ICu0HCjl9mx1pEA1zDGI\\n\",\n       \"qOpV2NP0orMomQti3pI3d/Y+feO1c8QINf1brVYFjbAy9ak3E9loOC9EdNlRBAnx9xCiw8PDUuoh\\n\",\n       \"EwfZ1CipiHokZOGIiK+EYh+HiD7tO1p8eHgYvV4v+v1+DV1Amd/c3BTlgcGAr2RkwdwWNia/Y+cM\\n\",\n       \"RUg/mTeUD3eCUXbBMmPFzmeQna3cnX+3HDIO5AJl02w24/9h7816G9uO8/0iRWrgrKlbPZwcx7Gd\\n\",\n       \"xBe5yvf/CrmKgQSGY5+xWwPFUdTA4X9BPMVnL6nzAwwE+l9oAwc6LYp777VWrRreeqvWbDbL59B+\\n\",\n       \"gWdaHhgr9wIhKBFA3tuKkUObiZTX63UFIeNdQSu4Z6vVSie72dydTu/1dlEFMudjOHgnV81w1Mtm\\n\",\n       \"s0mSr2VqNBrluY6sPf2jQHqMuiyXy9xDyBLybeJteSQNCtPor5W2ESj4LDinRoNZWxsDxsO6lRG7\\n\",\n       \"+T3WP8yrCcIRkc7m2dlZrhXjwDGv1XbnJfLeGBbzjqwbjKCbs+LWDQ4m+Fuc/hLlQ74YC+NizlwN\\n\",\n       \"bX1C0QGOD/e23ubffM8VrUaBeD+ew/2QbRBn7yHW3xWl3gu12o6PCPLptXPTUeaM90QGkR/0YRn8\\n\",\n       \"WFZKPYQzjB4uOWsEEzg3XAR1jJP3dXEEsm6bUNo47oXM2sFwUMk7IgPmFRoFM3rDZ3bYjA67kIR3\\n\",\n       \"eMnBfykY4vcEhdYZ7K9y3tAJyJsRN9aX35nj6MCgfE8j3gRLdpTtcL50vRrZHI/ekSnRHMrbhzAC\\n\",\n       \"yTJICziTRqqjREFsVDwZbHaTObnKqjYuOx4lyc2wtpUe74KhtyKO2LUUKEt2+QyFz0+MBU6NEQVH\\n\",\n       \"l5Ab/Xt+lv2NTH5/fHyMXq8XJycncXx8nEhSxK4/0WQyqVRxROwqYlqtViyXy0rkidLiXUtvn7ll\\n\",\n       \"bR2VktLg/nxmJMrjZlxl+sDpFH9vs9nEzc1Nju/4+LjSfqJMm7pSy6kkV1ExZiOLjppL2Biir5Gw\\n\",\n       \"iEgnCceOXj68mxEltw6o1WpJDjfszjjYLxg4R/MoRObIChO0gmfyPcr6QQ0cwd/d3cV4PI7T09Nn\\n\",\n       \"qbvlcpkHPS+XyxgMBhUSvpFjOzxOwfJOzWYzU3Q44KS2bXSNABg95WKeQTztZCF/q9Wq4rhtNpv4\\n\",\n       \"9OlTrvXBwUGmqJbLZeqhsqAEpIX96ojdxODlclmpsOO+rLXXd7OpUhgODg6yTQUo27eaDWMwmW/u\\n\",\n       
\"6SaQdsDsSFnO+X/eEyNV6tOyn5VRBJ7PWNw0GFTbwZQd1/v7+wxQPFcgXiCP3ofYH+TOqLEDPAoZ\\n\",\n       \"QDgh0e/t7eWJBg6wnJosg10yDUZb+Gl94u/ZObLT46q+Ui4sG8ivG1OWf29nhe+VqCdjtHOP01fu\\n\",\n       \"U/7fziK/s71zGg5ZYC1eyrzw3g6S7LCXqT+nij0e5tdj9/h4l48fP8ZL16s4UpTfGhpm8vf29ipQ\\n\",\n       \"b8R28CiAiGojRnvKbGxvbgQ/Ip4ZBd7B1QMRu8V1btbGG+XKhHtRcRJ5vo2eo3/SVDwvIjItYo5K\\n\",\n       \"t9tNw4TBszBTOo9Rs5E3j+Ilp8/polqtVulrhFFiM/PZcDjMXkERUUE6SDExjs1mUzl+wfCyv0cv\\n\",\n       \"KBwsp6gwJmx4O1lErHt7ewmps/bL5TIbiZaOratznNLleWx2KtOYb9YF2fHam7tiKNpGx6kWR/PT\\n\",\n       \"6TQajW2PJH5vp8cw9tHRUUWB8zdE5DyPPWY0xfPmVC97gHsxjyAlvCfOFYgw88CF01dykFgbd/3m\\n\",\n       \"YrwYE7fMwLjyPjZ8/lnyRSIinfnxeJz713uYOeEzR+VOx5RUAWQVx4TvnZycZAqJZsI2bGVazIEJ\\n\",\n       \"vCnm1I5brbbtEUagVzopDsqc9np4eEjnez6f5/vSBd9pYsZAdSzrZ0TnJU4NY+OZGCKOTyK15/1m\\n\",\n       \"Q8k4jOx4vzm7wJ5yqtyVya6EZK5IGRnhRodaT1uWMJqk+fgeDp9TWuV7rtfbNg1lypX0LO1YjDQZ\\n\",\n       \"TbRuL/meXien60BquFdEpIPpLIT3vmXEOpN7gBg6gC51q4NXZ16MSPOuZAf8rp4bAsgSEeX9S54X\\n\",\n       \"qBm/cxrPDlxpn0E9zVFzup99yrt5f9h2vnS9iiOFQrUBIx1mpe6XJ8IrkSwGD2xHuWvETuAwevy9\\n\",\n       \"L4x06Z3ymSN137NUaFx2zCKi4tXyfRSK/+alKI73tYMU8fwIF/+/HSkUgcnbfk+E3pAtYwWtG4/H\\n\",\n       \"FY98Pp/H1dVVxZHiXTG6jO3+/j6jcvrEcE5V6dgYKnfKxJuSsTvt4jmP2PHbvC4o6JKHQsRiWWy1\\n\",\n       \"WhkplqkWNqI3ael82+CU5FEcSIyf4WUceZQysmFiaEQ8i/aQXZxfv7Ojfz+vbNtgQ+PfEy2bM4jM\\n\",\n       \"YKRKbgKO0NHRUfYJe3x8TJJz+RwQAIz7YrGopFlRkvTS4XKKiN9b8ZNKIQVUIqA2gmVEzBqXaXue\\n\",\n       \"Qed9Chz43nA4jHa7nXMKgkTK9eDgINNWfk/4PG53YHl7qfyfMdspKNtbYCjr9XpcXV1FRMT5+Xki\\n\",\n       \"jpZP/t4yab1UBmJ2cjyvj4+PMRqNYjKZ5Fhd8GJDGVFFwTDWDjjtPDkwZQ9j/FmXiK2s0werdP5s\\n\",\n       \"L0rUxYgTMsI6cS+coYidQ848O2BjfUgxko2wrsFAOwiwA1JmU+ys2EkHWfJakQYt18jvWepKO3Ps\\n\",\n       \"uZe+R2BnFMiBg+WUlDV70ek/9j3tKdwAln1Arz4HhbbT5gIjgyXowUUgQ4sPZITLDqqfZ+f7W9db\\n\",\n       \"+4O36+16u96ut+vterverr/zehVEyhBmvog8QBAbIy9EbF+/fs0IIGLXFRVv8yWEiIjOeVe8fxCi\\n\",\n       
\"x8fHZxwSOBg+M8xpP3OlPA7gW+fYSRcQfZSQNvd2RBQRCY9TyfWSBw2079TH/v5+JRLyO0fsCLUg\\n\",\n       \"CEYlWJ/JZBKXl5cxn89zjIvFIptDgiAYfjdMXKvVErm6vLzMKJ1xOlrwmUu8f0Q1lVrmykEk+N3T\\n\",\n       \"01MlLdpqtXJOLHNGuZADIihIwuYGOPIyP8ERG6kpGlh6nZFToiy+zztAFPe6OBI2OuV0Ycl/sBzy\\n\",\n       \"PubRlGhQKUc8w+lAw9pEa6TpXF3Knux2u9Fut2M2myXvDITRcu/IezweJ6/IqMVLKTEjGozRqT1z\\n\",\n       \"/0C5PL989lLpP3JH9Oln80z0DWgL8jaZTHJsnPPo1NbBwUEi5bVaLT+zzuMZrD0I5WKxyK7gXE6h\\n\",\n       \"griA/q7X62y4WSJL7PdyP0Rs036k540UMc4yg+DP2WOkS+7v77NtyGQyybWwfuQ+INRO7XAxp6Cu\\n\",\n       \"Rl1AlCk0MOqE3ivXl3EYqXKatF6vV1LQyBP/b46uW7QwbtA3I9Xs9TIlxmeWeSO7RnzNmyRV6H1b\\n\",\n       \"Vtvxk33guWFe0R2MH/2FTfQ+BRUrW4lE7NKE2ALrT/QF/GYj4/P5PCaTSUyn0yxmYN7oLO+xlf6B\\n\",\n       \"EUX+1ilmrxHjY6z4AeZou9mt9aALN751vYojZTj8pTQVEJ8NBsb04eEhxuNxLobPGGPA3B8yrY1m\\n\",\n       \"aRRQjK4g5FnNZjPLxl1NYAHld9wfISuNt/+O3zFeFIgFmqtMQbD5eE/4KqVSsGLCGBviRticRuFA\\n\",\n       \"2Ha7HZ1OJ2F6Q9Wj0SghYypcmF9KxuEucGZXROT5a8y1nQW4TvAySGexvqQ+MG4mx+JAeQ24p1Nd\\n\",\n       \"3hg2zNwPmeEdgY1xnlgXjDYKGUfZaQH+zr2bHCggW4aODeWbS4Iswk2yDJMKsUL0+pPSBFb3OMwj\\n\",\n       \"fKnC06kzKykcdkr8kYu9vb04OzuLXq8XNzc38csvv1ScHj/74eEhq/0oJmCt9/b20uFgLcwh8bxQ\\n\",\n       \"5epAxHID38dGnr/jd8w53yv7PJXpKK9xs9ms9IlbrVYxGAxis9nEly9fct6Yq4eHhwpHLmJXFeie\\n\",\n       \"YE6R0LUd54Z34Z4ljwvZf3x8zM7vrgi0nvPfc7HvTZC2nPjvS74Uzs5LKWHSKdAPvBcptPF+9HPY\\n\",\n       \"R66EhFPI+Gy80ROuYmUt0D1OCzp4Q+59aD3ri8OHTjDdgj1tPpTniJS8gxsCATvETtkh4yV/iL93\\n\",\n       \"xZ/twsHBQQY5nAHq9J15U6QA+Qx7gcNnR4SghL1kpwQnsky/uYCAcbh9Tdn7rpQl3tNBtp2aUn8R\\n\",\n       \"oJdcLJ7Pfbrd7jMfw4Gm0/+M+3/jSb2KI1VykiJ2+WYLpD8jv8rkwL+4v7/PzYliLKMaR2V2qngX\\n\",\n       \"8rB2spi4b+WfEWRvGjYlQgCfJGKHKJngZm4RwoKwefwRO6cBEjHf5z3xzM1xGAwG8d1338X5+Xm0\\n\",\n       \"Wq2KMWEz2alAcdATqNvtJoeFPiyz2SyVXuloMI8oG5en7+3tZRNAjlOwMR8MBtFoNNJw4yBjeMoq\\n\",\n       \"TeQCRcUmsSNlZ4J59k9vLDsLlh9zKDC4oH2utkEWarVaOpLmGeAIehy8K+Pgd+UzI3ZOnJ0g/g5l\\n\",\n       
\"UvYcIhJFIdsB9Rwhr6xFKYMlz8pyDA/o+Pg46vV6/Pzzz3FzcxPr9boS5IDSULXGe5qXQ38lG2zP\\n\",\n       \"Q8lnMukXQ2J+I8a0dArMH3PJOmOFH4czYGVrJQ7iwjg4hxDd5BYH3Ofo6KjS84rAAq4HbWGYb8sJ\\n\",\n       \"+43vbTabLEbBGeWyLEVEBallv+Bkm8SLE4XzZSPEVRotPmce2ScOFAkkcJTKwg10ETLIOqHfCEyM\\n\",\n       \"UtiZcGBWOkasKfOG0cTJ9vid4SgzDZ5H26/FYlEJMGxn7LTd398/O07LSJP1jmU1YleswnsSfPKe\\n\",\n       \"dpY5K3C93h0VZD6Ugy7QPp6Pw4+Dilx5/AcHB5XeinzP728bhc1A1sp+buj9EiXF5iNfjKF0suxH\\n\",\n       \"MNcGElzNjGw6mOL3PAO9aZl3kPrS9SqOlHtqMBCiDgwKJMyI54eE2mGgDB/hsSHkJHEqWGww7JE7\\n\",\n       \"RcTzUKQ2WhE7Q4tBsYIGZrYidiRgheJIgPfFsBvCRkD5XlkeDBR9dHRUUVBEVN9//32OqUTIeA+n\\n\",\n       \"SSN2yABGzaRFHK7RaBSz2awStfD3rsC008K8OvXFGLk/CpjPkAEcUxSgv1c6H8wpBEi+42jWG8My\\n\",\n       \"46qxUjEzJ6yDDRsG2+OyYfeckzryOkK2LNEFO78gQE6fGrEp03yOqrzOL0r9hz4AACAASURBVP3O\\n\",\n       \"c4dcl+/C/sFB8f3puXZ7e5tIru9Zr9dzHx4eHqbs39zcpEOGs4Eclmiyf1oOHBRYEZNeQzbKlAqH\\n\",\n       \"Pbfb7WeNF7mXnSyu1Wp7qG8ZYBHlPjw8ZHEF42AvcLahURcQEAyyAzN0kBHCiKhE8uzX0qkhSCwJ\\n\",\n       \"uI7mPS92bko5tRyUTr5lir/f399PhLvT6cR8Pk+ytaN/5Iy9Y5kxisw82CjyDuwRrxMpOJBpp5Ih\\n\",\n       \"8GMsjSxxHwwyzye1aj1jNJYABwfArXt4P9a9dLDZ34vFohJkg046dc09jWyWjp9/8re8K/Ns3cgY\\n\",\n       \"sVugwk4XMk4cP9Au1ubu7i5Tey+thQn8pk3YGbIDip4BZHjJibGe5p6MCafY54qyruwr20KvRakT\\n\",\n       \"3bvsW9erOFI+zJcJAolAkRkSPDs7q6AB6/U6ERJY/44AHLWsVqvk5XhjIJj8zoYNQXGlnb1vIEnn\\n\",\n       \"Zvk7M/+tMCJ2R2WU71mmpdiQEZHpMZyp5XLXrNNdq2mc6YNinZZxGsP/5t44OqwFCIbTSoxhsVhk\\n\",\n       \"ry8rcO6JoLIGjJdndLvdilNrhIn343coIs+nq/0ceXgNidAeHh7SkNkIs4m5p40skTmGtIwOee58\\n\",\n       \"Pq8cj4OSQMm9xGGAK+IxOnePgfQzUQx3d3cVRxrkzsiUEYQyhWUFgbPQbDYrvA0+93whn8w9zoK5\\n\",\n       \"R3S7xzlZLnedrUnBHBwcRLvdjvV6nRwjl+m7WSDPc/TJe/szO9F2GIjYQczKI6VIa4OQ2rlE6ePY\\n\",\n       \"2TmDy0FQxNyw35gDZIh79vv9NHy0u4jYdZnnvc3/xAFgjNYl/J53LXtMkfqLiIrzgWzZmHndmW9+\\n\",\n       \"X6Z1mTsjwdyX4LAMQJhDH95cogteY6eHQEeQ17LlB/sJ2eK5HLtT6nf0kINgyxTpV+7tzwjcTDPx\\n\",\n       
\"vLFOBAsR1SOHarVapmr5HqgQ+ob5xlF3IO35BG2jythOqNsfcFnv22EwWIAdMOfMz8Qh4p2NcEN1\\n\",\n       \"KOeUABZns3TIrbOcvgP4YL6tj1g/ZN+ABWvG+hlZ5/vYb1/MM/cymlvazpeuV3GkHL1zTSaTNAoM\\n\",\n       \"ngm4ubmJ9+/fV9JbFxcXEbHdGNfX1xGxg7Md0TldZ8PGxOBNm+vTaDQqTdmswIigMEDm4fhyCsrP\\n\",\n       \"J+r0uXkIE0JuRypi1y6CCJNIr9frpQOFE+UNbJjdabOInWFHQbzETSBn73QSUCkG0Uq5zEVHRCUy\\n\",\n       \"Y1M4zcTziMR5lhW0N1KJALo8vHx35p1/28ii/MrvReyOsWDuzCNDHsr1ZQ4pjy55RyhzI6Ceb2Qe\\n\",\n       \"JWjH1albyzBjQg55Dz8Tg2+H11E66+/0ldEIo4VE1lZcRlXZC/QS42JPksIiamUNO51OOjYR1dJ+\\n\",\n       \"FDsOjtFbK+ASOWYezYHyOBhfrbZNxRrNQS/wOXvK84ETyTv3+/0cM58xDvf0Yp6McuIk8zfmZ3lf\\n\",\n       \"2pg4pYN8uImsZc6y5rUmALWuKVMlvr71e9Z/OBxm/yXLAAEXAbIROQw0PEgjsDgfNpZ2CsfjcUwm\\n\",\n       \"kwzOGBd0BBtExohcYMTt1BHcWhaQbzt0XiP+Dr0MbcGcLL5vhCxi197h6enpWcoXuX3pNIX9/f1n\\n\",\n       \"fFOey7NIETMHvofTe6ZN2KlhzxlM4KfllTESPGM3S7TSOpw5YT/xX1kcAIpdIuMeh5FF1rdE6nGw\\n\",\n       \"mW/aLRjYsD5hH3rflaBIeb21P3i73q636+16u96ut+vt+juvV2t/QITq/HBE1fN1ftxw+MHBQZyf\\n\",\n       \"n+ff1ev1jEyMDhF5ANGbC8G9iFzwZHk/R4dl8zpXiZRRGpGL78E9I3ZwvSvaHL3CsXqJbN7v9+P4\\n\",\n       \"+DgRqZOTkzg5OYlOp5P3dTlnibgYdSs5Fa5KAP0ADSovUEFzzJhv7odHz2eOyl9Ci0Cx+K75S8wf\\n\",\n       \"//kzZAli90uctBJBIqp0tUz5Gd9zJOQIjfUllUqqGeTDuX/e1ZwA39scn9VqFbPZLNtGkKJiv7hR\\n\",\n       \"KBG001tOi/C5Uyfl88rCDCNQ/J3HwJhANElvgP447cVe6/V6icKCGrvM2UhfCfXT0gKUmKtEHplf\\n\",\n       \"oytGJRyVU/RQq9WyZYD5J/V6PQ/HdkrU6XhQLMYxnU7zeBAqW09PT/OePAtOFu9vvpWRGd4flA/k\\n\",\n       \"kDEQXYNu8V3uCZ/I5f6MgfE6nek58Bw7feN5L/92tVrl8VF0f/dFA0SoCS5CgbdkYjlrityiE01V\\n\",\n       \"ME/V5fLORLD3ywtd5H3Id5jTkmwPr2ixWFT0M4U8JSUiYptpYZ+5KzvPs36zDsZeGTn3+Fh75MDo\\n\",\n       \"GegtYzDvzkiUES4+Q5eY5sGFHeVz3sfyVtpFo+geA+PAxrJXGT8Vty5MK3WG19JrZ/3mtK5pCeY5\\n\",\n       \"M3ZzqKwv+L3f/Zk8ffOT/8MLgaP6KaLK27BBjIiKgel0OhUYM2I3CfB2bKCB050LjqgeFAycW1Yd\\n\",\n       \"+Pe8D++LMDrdgECYTOhKMUOGFgR4PKR29vb2ctPxHoeHh9HtdmMwGORxD/1+P51DK1su4Gt3fmbe\\n\",\n       
\"ECIg8zJlxGfmCHEPBA3n16kENpBTcVwWTDs2QNBs/NJxZSy+v58LBF5+BoT7ktInpYtyNMRrgw4X\\n\",\n       \"is9QeDhSXE6fMu67u7sKb8NKsuQKoERQGK5429vbVizBh3AVHf9h9O1ksZeczvX6mmjrNCX/NmTu\\n\",\n       \"9eX7ZcGE95KfV3K0GEtEVNIlOAplhSbpIKeRS4fWijtiu9/m83mlw7TfCw4dh1bbyYzYVqdiBHG2\\n\",\n       \"Tk5OknfJmiAbe3t70ev10jiWfXAwpCWBN6KaHi2JunzHqbmIqOgm84UidmkZ5NH7wuvpg3x5vg1Q\\n\",\n       \"eTllw37kfjiIw+Ew01gm+k6n0xgOh0lPYF+wn9BVnh/0noMeOx4UGC2XyyTxezzWXXaGMZLs+9KR\\n\",\n       \"Ilhxyhzj/fT0FMfHx2noeQ56mKIpdON8Pq84dawJ33N6nnEx9pIK4fQVOsp8L2wGwQx2xHw587NK\\n\",\n       \"Hhx6sV6vP9OdOEqr1Sp7RlmfWM+UdADLnNtU4CTh1DlFT4BBmpgUdcS2KpE1MoeW5+E/MH92epGh\\n\",\n       \"Mv3Id5HB0jcpeaTl9SqOFIrB3q4/Y+HttHwrJ3p4eBjHx8c5kdPpNAfMwqNkOJcuokrYI+r3RJUI\\n\",\n       \"FJNO9YYjayt3Frd0FGkOyec8N6J6GCz/76gMQWi32zEYDJILQW4aB8oePQLKBkUZWWkbkXCE8RKf\\n\",\n       \"w0aR79vg8T2+WxJOXQ1j5c9nRBUoVBs2bxLPm9eL55pbZQNbVuYYBZvP54kAMa9uGuiqOeaUcbjK\\n\",\n       \"hujqJQVeGusSBSNyQ24wNETMNipGM3DAN5tNxXChDOBSlO9kx82KDkeWy9ElCt+RnflMrBtryBio\\n\",\n       \"KmO+zdlxBa8rJrmnnT4jWciyo0iPg+/aebVj4/5xfO4xwpskUkYuCOJoOsr6ttvt/B5z73Mmedf5\\n\",\n       \"fB7Hx8fPKuTm83kiM8gbetDcTz4DAWMP22Cw9+v1eiLHNvrw1Ox08xlrbqJxKQf+W/+u1WrF2dlZ\\n\",\n       \"pRcV74osLhaLRPsYP/qXefMasiceHx8rhHKjjA7qeFf2DQ4t62tSN/NipGhvby8mk0k6zC60oAlt\\n\",\n       \"xNapNtmc90YueR/zLZEh5NC/512M6ltfWrej890nq9SLRq/s2LAHy3YZETuHnwxNeSHDAB6WU/it\\n\",\n       \"OB3sReYR3e49bsTcup95BGBwdTRzaGfHAaR1RhlcMW/fQikJgLH3Rj/xJb51vYojRQrK8CLOAwrD\\n\",\n       \"jhQKpOwSHrEbJH2Ibm9vnx0GzN/b8FHeiUPHYZ4RUTH0LBZC6vQUxslwu2HIshEeqIU3dkQVkSgh\\n\",\n       \"8dlsFrPZLKNcHDKeV8LEVnCMA+E3cR4hMSQKSsBGxGgYSgXdw6GwsrXX7k0fUT3YMqLaKblU9GXK\\n\",\n       \"ibFx/5J8yf+bSItjiuIsz5djDkBH+Iy/x6Ep78n7oCCtvEBPTKwso31ko3x3fl8qAae5/OyISIXN\\n\",\n       \"nNgQsZ5EipYtPxuDY6cWRYpi9Pjr9XqldLpMFaH8+/1+9Hq9fJ7nn4g5IrLAxIqb5xHcuJKG98Sp\\n\",\n       \"BFFiznEmn56e8vDcx8fH6HQ6FZQPJInneYwgdXQrt+O6Wq2y0SXGnfk2quk0Ow0nI7YEaVAmPnPk\\n\",\n       
\"3Ov1KujB4eFh3N3dZbDIe4PSm7hsnYIBoyu6969TSHaEcKxLJNnz8i3ECv30D//wDxlIcb4fa06F\\n\",\n       \"3Xw+z15bOKyQtUHleCYBEfLswBCd8NJei9hWi56cnDxrqExAVDqLrBE6zvoN42rkhcvBvasumRfL\\n\",\n       \"rh3ziCqR2VmTcs96rdAfBG2mKHgty6pPfg8aD6rmTIEdJWcA2u12xV6bUM+cmIjv5/k+1us8z/Lo\\n\",\n       \"LIWzG8iDn1eiTRG7fVGigJYZO9LlM2xj7GOUgUR5vXpDTsOLrug4PDysNJFjkah6s7I1d4NGiRGR\\n\",\n       \"B+E6D2r4l/41RKgoWibN0W5p2Bzle8IjdgJi5WblWkZfhndLCBZFCZRa5v9LbgljtdOAk1GiYDy3\\n\",\n       \"FHKcKMZQjt9jdprSOe0SIWBMzEPpPLF5S5jdskIEZ0eyREts9EAh3ZgyYufU1mq15Ok5VeQmjd7k\\n\",\n       \"vA+yw5xylQqjTIUgVzb8yDAb10hZxK6CzakUfoeMgP6AmEREVnAig1burJODFz4Htkepl/LhVILR\\n\",\n       \"I5BkIjnzH+2Mkjbgu/v7+zEajWKz2SRKZLlAVh4fHytNc5EjHPqyxBkdQosQzw0VZYzT69TpdOLo\\n\",\n       \"6Cj6/X7SCcyLQk/ByXPTSebIFYqMA90C4meHAL31+PgY0+k0uVWknknPG80wVQC5cEAHgvISr4O/\\n\",\n       \"tZFnznAGSK97r5WpXu7FvFv+G41GnJ2dRcSWJ9RutyvHQKH7Hh8fU9czJuQb59oojeWYvktO1fl7\\n\",\n       \"6/U6kQR+h+OGDjbKGbFLNZPa4jOcZ3qb2YCjB9AXTsWi7325pxPPZJ+xXk5/od9LVA1g4FsIo9e6\\n\",\n       \"RFwJFo1WoZP4jN6EEbsejugEUn0ROyeTVCJ/53fgOy/xmZC1MlAyqmZ0GVnBttmu4TSWVZsRO32C\\n\",\n       \"3vcaYHvsK/gq5b68XsWRIlLyyyKEeLZGQVgEoHZ7vD4+BO/TpEujMm7Qxf1BXSzs3NvRhx2oiCr8\\n\",\n       \"XUblLIpz+ihroit3cDbkSMTA8/b39zOdiDKz0ua+Nj78tIPGRi3hSZQFyjpilxJljCgej5t3i6j2\\n\",\n       \"mmEM/J0Rx8VikeMoOVne/IbUuS//BoFgDdm0PmIj4nmDTMPDKHLWyLwFKx7Wq0yPls5TxC5d7b8x\\n\",\n       \"sub0FH9jTgFyhBLge6yd0wEocZQgcmFIHQOBsrSTaeTBqdqISKcTFPju7q6ShmHdUESMH+4i7/ZS\\n\",\n       \"6gCn3Kli1on5K+fX6S4jLqAbDiJKpMTGxdwyDOJoNMpxU6rf7Xbj48ePue9ms1l+RmoKBMVR+d7e\\n\",\n       \"7mgbHB4rd4ISCMTlWXtHR0fR7XbzKBXemaNJ3KIhYpfa6/V6z6Jq5JuAwagDsvgS+kTaDcfXaV4b\\n\",\n       \"1ZcCU/7tgIG5+fjxYzw8PMRkMsk14Ygg0pO8q/d6q9Wq7HenZF7ah9bhZDB8Tqgv97FCZ9B4mD3v\\n\",\n       \"ZxOUkWYk4OIeDhSs25FJt5owOsYzvDciqvu8RAedtsaZdMBjNAfdZkqLnSPf3+vKu5Ryw/1s2/xO\\n\",\n       \"6M5SPzDPJurzLp4/yyL7y8Vb/I2pIyWAYPv7UnoaPet7lrL0/0Kgyuut/cHb9Xa9XW/X2/V2vV1v\\n\",\n       
\"1995vQoiRdSNhxux8+SJSs2DMgrTbDZjNptV+Ex8D6jWnuRqta1EaLVa0W63s+IN6JBqC6eJeEci\\n\",\n       \"b0c0RsmMcPCTvy3z4URQjMvvSTQBauBUA+9qr53vAXUbPSrz3eTS7amX3zW8yZwSBdj791hJGTnq\\n\",\n       \"9/1KzpbhWSMSjMeImc+jMp9psVhUogh+wqugeZvvCdTsgoG9vd0p5D6Lje85Bed0AvczIuo14Tus\\n\",\n       \"o+ebOWc+WRfuQXQGImkkK6J6dh7vY9JnCet7L1jOvPaOQP08p2Udwfoz/h5koV6vx3w+r9zT1atG\\n\",\n       \"ijkuhHE5Ted7m0uETnCE67QUqcQypQ3fqWwCyzqR5kCfXF9fZ1UQ62V0DR6To2DGAZJepsBcNk+z\\n\",\n       \"SGScDtLr9TrTkOahWMaMTIAar1arRAiM4h4fH2c0D8rPvdiDPId5ubu7i59//jkRI9MInOIvES7W\\n\",\n       \"GFmzfkJGTk5OYjQaPePDwS1DVnkOa4TuYs5MqEfG0N2me3A00MPDQx58zjhcqWlOFrrFdBDrIjeT\\n\",\n       \"NW8WJA1kyLYEdNkpTC70DjzUzWZX7UeazEiTdSlotCv4LG/oEfad541CGqNIETuqAP9vHeGWISWH\\n\",\n       \"jndFr1pn8BxQvpJzauTfdt52zVXnrIPPRLScIofoe2d3/G9/h8/M5yrRqP8XOvUqjlREtS9HxA6C\\n\",\n       \"dG7YJ6tjmBEuSpIhYnJEgxcRRwjFcH5+nlUflNzDJbETVCozbwzSaCxI6SzYcBreR/DNbbCBdkrI\\n\",\n       \"m6fdbsdqtaocemwYsyQplkosYnc0jeF2ExlRHM4bG6b1JkWZojycu3Zqz+vM85yDt/HGCOFEbTab\\n\",\n       \"SqFBrVarVNaU/CwUpufe6VycPjs9GGZIw76nFaONIunoEj5mDOU8lXPn9zcvC4XKOvozy5G5BIzD\\n\",\n       \"KXGKESKe9+3y9xgbStXOcrmWJYfECg7Dz9w4RWlCPX3OaO9BKoN5I03HcSnu7owjxHqRTvERIHDO\\n\",\n       \"ms1mpUs1c8062MnC0eK7yD69kNbrdeWEBMbulLDnhXdjbY+OjlJHRew6apdOtOeKquKSb4U+KI0q\\n\",\n       \"fCIoC3YIeNfSCPN+dL72PmSfQQR3+op9XXJZuOysvxTwnZ6eZvXedDpNeePZ1ouWNfau0/YRUdnX\\n\",\n       \"flZE5JmVDib9XVMwXI3FeNFRbm/B2HAYLIvI6XQ6rZTsI084NAcHB9HpdCq8NRwUeLp8bzgcZqq1\\n\",\n       \"Xt+dp8p32NcOKk0xsb50/zVS9qUsW26cAuNijzL2krbAWpqu4HVFfsqiCJ7LfuR7OP+WQV+m7fgd\\n\",\n       \"DFJgG2yD+Ftk34GfbZd5bsyRdUJ5vZojhQIzh6YctDcbCpfIi0E+PDzEzc1NLJfLODk5qZDUjIBw\\n\",\n       \"nAqIlPuE+D24rIh9WZnxd3ZsbGQtiCi1knPF91CypXPGpnHU4Dm6v7+vKGkrTiIEHAcLDpwNR3tu\\n\",\n       \"roizxN+VeXVHJnYWI3aKqlRe3rSr1Sq5IDRDvbu7i4eHh2cl0DgRcHe8WT1PZWTC5xg9rxPjs/PO\\n\",\n       \"OqGIcGxMDvW8oHg8H1yes/JznAOeSwsA5vIlpW+Oj50eGzhImH4+37eSctNMoxWWS+btWygPChFn\\n\",\n       
\"ASfOCBCK/+DgICaTSdzd3cVms6kYE/rsOOhwBErFFc8teRwlWdZVbEZBPQ4bCxSqHQH4NSBwPiII\\n\",\n       \"2ej3+xVC/d7etiEmfXsspwQyPiLHFWKME91hvcc4eV8739wDbs/x8XGukx1P6yz2EOthOeTdMZKO\\n\",\n       \"5nE6/T6WF+aOvy2NFAgC+9vBVjleF1Ogf8rDlz2H7AEjciZS39/fVxwL/h4HxeiJuZvmMXpvlkES\\n\",\n       \"pOnj4+OKrPOerDF7mHvSRoN38BjgJ47H41ittsTuk5OTiKg6X+xDEGjemywGQSpzaoQeveJed0bN\\n\",\n       \"vYYgd0aMvd5Gs+xk2WHj37ZB7Fk7+tyTIK9Ef9n36CGADcue5cpZCtYfXWJdShaGgIL7uGnw/68c\\n\",\n       \"KSJMR7sIJ8JhyNWKgAm08DuCt9PRarWS+NfpdKLT6VTKw70IpbfJhsHQlsiKoycvsqFKDE/EzpHi\\n\",\n       \"dx5TGcWVqJqf53dhYRE4k5S5DFeWa0BF2mq1rVKhlxKC/1KqwlEFSoJ3L9MGRqv4LkalTOE5rUn0\\n\",\n       \"w+UKmnq9ngfeopzYjE5TPj4+xnw+TyKz54D1JrVXEtuJ4hyd+DPkr0zbMX7ey0bR8+X3QE55P9Ai\\n\",\n       \"y3OZ9nIbENYXcrRJno68mWPWAdlGzsoKUu5txcQYWd/5fJ7Po+qsXq8n8ZzP6GiNzJrETKoMOSNg\\n\",\n       \"4p57e3sVON5pNubZCKoVMpVWbhkQEYk0gvZ6bozg8b6lYu52u9mfiXsPBoN8frfbzdYLPA/nkf1G\\n\",\n       \"awhkEATRUT1/64CGq0SgV6tdbyDObavVahXjwBoyBigS3Pf+/j6urq7S0D49PVUQfN+n1CeeO+sL\\n\",\n       \"f9ZsNmM6nVbWCeeQUnwbVQJg9ken03nWBw8kj88jIh1FZzDsDDNe94XiM9ae/9wShv1QOv7IVqvV\\n\",\n       \"yoDdhHIj7g4wWCOj1A4MaZtB3y1kjc7x/X4/74cNQKaMRtqRBIHHvrbb7bxvq9VKXWPknbkh4C2z\\n\",\n       \"DuhzvlfqRuaPueC92HNUNpZZg4idzsGx9zpxeV9gX+wQ2uHl3yU6xj7BL0HfRuyyDS/JfL7DNz/5\\n\",\n       \"P7zgVdiZIOpwtFimvnBq7GFH7LqN4wjwbxaIA33tGXtTgIaVufLSEYjYRWZMtHO+CKEXiO87j196\\n\",\n       \"2SgEQ5yGbMt0lh0polkcQvfJcjoNBM7RkJ3RWq0Wo9Eo70seuoyuKRu2YHkcZQqJ8TO/ID12bOwQ\\n\",\n       \"AZ8breLvqWIqBbzT6VSUKrJgpWfHl02JQ1XKoQ2BS3lttMu14KfHUkatdkbLYMDzao4U70/AQJQe\\n\",\n       \"sUOW2u12lu3zWYkUGpFB2dEry9GXUT07/YyRMVOF5L28Wq2Sm1NGqSBLyCNzCoLMvnLHaI+XPWpU\\n\",\n       \"jd42/K3XDbSZ5pAOtsyNQnka6cABMeKH7DPPOP7oGoKyXq+XaJajZNZ8uVzGbDZLY9pqtRLB8Poh\\n\",\n       \"M94vOFx85pSRKyZBWjD2Jd+UdAmHnXstZrNZjMfj+PTpU7x//z4/43slks5lXYBuY97QFRcXF/HL\\n\",\n       \"L7/EZDKpIGt3d3eJbMJbjYhKaxrGUvIxSePhhPNZxI7n46o1Ak90vA9BtjPDOOxIGsVw8OVgCZ1s\\n\",\n       
\"dJW/Jw2LE03F4ksUg81mE4PBIPb39+P29jYzDxERV1dXcXBwEIvFIo6PjysBpNcKYIKAmPHbDs3n\\n\",\n       \"85Q5pyvLVjvWOWUGh7kkOAeB5Hu2oQ7M+DcZJv/eGSrskd8FfYk+9t72XHh9uJfpEB57xG7/Gzlz\\n\",\n       \"sGLH29erOFKUAFv4mVAm2oYPRWIHxz1hmBQWEWEk3WCinC9D/mXulsm0YeInC8IY/FlEtZeNvWWn\\n\",\n       \"RkoHzZFQqbDKCMAOJhsGo8jfuVGjkb7SeLvkl4teKygaFFrEThGhbOy5R+xIzn4/xr5eryvlryaA\\n\",\n       \"Gmms1+vpSBGlYkBwxnhPyKG8P99DiTKPjoQctSBvzAv/9tzZWbLxtRwyXygAjLHRLOYLhekNz/qx\\n\",\n       \"yZk3lAVOhlO7oD5HR0e5RjZodl4x/FwQX9l3VjZeh729vZxT5BQ0wKmter0eZ2dnKRtloDMej+P+\\n\",\n       \"/j76/X4lPWD5Ho1GGVCx9hg2ggK/hwOSRqNR4SSZY+GjLSwjpLRLw2jH0c5bt9uNfr+fjliv16ug\\n\",\n       \"fZB6F4tFdDqdvA9pmIeHh0QHMIroJkrr2ZO8B3KCIbFu855oNBqp5O1A7O3tPTsiBONeBkSDwSDO\\n\",\n       \"zs7i6uoqhsNhdLvddCTYY+zjUkdZNlgb65TNZtt5//T0tOJIU3hA/zJ3y0cn4IC4iS/B3cHBQTZh\\n\",\n       \"Zt6sYwm6S/SDe5VG3UHAS4G+G8va0BIA8rfm8rF3Sb+zTpzNGLHricV7gkahb6fTacot5xYOh8O4\\n\",\n       \"vr6OXq+XGRfGCMG+2+1Gq9XK3mQcm2QbwkVmAtsHCZ6/I5jFoXXwVdI/bDdJ6aOvcLJw8tEXTgki\\n\",\n       \"I+wVOH3smdJhdxNuZK8MvngmOtGpuvV6XeFm2jllj/xviNRb+4O36+16u96ut+vterverr/zerUj\\n\",\n       \"YpyrjagiPnzuPHqZV/e9QGWI5k24dXrFED4eJ6iT03gmpJE+chRC7t0oUsTz9ge+uIc9bv8/0VHJ\\n\",\n       \"xwD98vw4mmg0GpUT440clZV6TicZDiU95tQTYyO68PeIIohATUB39Zyf91LDUxAER+AgPY5mI6pH\\n\",\n       \"rHCRrpxOp8/I3bwzESbrybu4fNZRGd8zF6DkpvB+Xm8iT1IA7gzu9ycK87uSkqRCx6kII6Dm80VU\\n\",\n       \"KxOdgoyI5Nu4oWYZXdfr9SxUMFfA6JWP1/A5ZUZBI3aVeYyRueB7e3t7cXx8nKgNa8EcLhaLPOTZ\\n\",\n       \"8srflpwNokZS/uaXMDfMf5lqdFdzIlOj0SBUrD+f0VkdfpXPY9zb21YzzWazePfuXRZORET0er1M\\n\",\n       \"7Ww2m0zpMEZQzvl8XiH/Ov2IfjKJud/vZyr4/v6+kjJiHGUqharAyWQS19fXFVnb29tLQvPNzU0c\\n\",\n       \"Hx8ngR0OCUhQicow58yPOTqmFFxcXMT9/X1WBpJROD09TT4biBRpXloHkHVgjKQw4ROZqE3aeTab\\n\",\n       \"Vc7RRKch204HIwesu2XKyCNyZ3tkGgGHi/P3Tota7/vs14gqknN0dJScsPv7+zg9Pc15ub29zcOh\\n\",\n       \"n562x72QrWGMNHAFmTLCPZvNUtfw91ymOpTZFNJpjNk2x2uPjkM+jAwig9yj0WgkXWOz2VTmDR8B\\n\",\n       
\"moptt9+7TNuX2RxTQbBZ6Eae5+akoI6uAPbavHS9Wh8pJhdIDkUBdGiYz04GisWKDw4M3YZduYNB\\n\",\n       \"YUIt/NzX/CU+8zvxDvwkzQDUWRLmMXwWRnLFVtrepOZ6OD/rXiLcwwLFu2AoyiNFSOuRZrLgYODg\\n\",\n       \"jvEOjIs18HyzSXhHuGdcTqnZiXW6hE1Vcmj4W6d1MeQYPVI1EZFcFfd2spK0jJgL4TRR2d6CDe3/\\n\",\n       \"XiJAlukLp/DI+Tt9h+FHFuxoNBqNyuHDJrlioG1UbYT5m5IXwLOXy2VWBdqpZR2RN74PP83Kn5QB\\n\",\n       \"Bgpi9P7+fhrtw8PDJMTCLbFjfnFxkfLhvd1oNGI8HqdD7nQK6QOfVeegCGiesZE25bt7e3upFHFk\\n\",\n       \"LANWsrxP6XSTGkPe6DtX6gVSU5yJN5lMKlxNUu3wR5zWZp65h51zZty5IgAAIABJREFU5gPnCqcY\\n\",\n       \"5wkeE2ensb6sn3khXKSkN5tNFmREbB2+wWAQ5+fn8ec//zlub2/j06dP+TzmjHcxRcF6kfn1vmE+\\n\",\n       \"e71ezk9E5AHBR0dH6dyXQSMBBE5DxNYoci/SUcw3jhzjs55Bbzt9Z2I478tewz459YMDx1jZa3Z4\\n\",\n       \"uRfvjd7a29sVTEwmk9QTpOjNLdrf34/BYJD3gxR+enoa8/k8ZrNZchUPDw/TkTZxm0CDi/Sl9b+D\\n\",\n       \"AeaeQNH/RhZxRlyggp1lvbx/0K/MMwGGKQwR1cKOl7horD29wSKi4oiyTgSVJYVms9mk00YKnjEw\\n\",\n       \"V4+PjxUqA/OJ7fzuu+/ipetVHCmM4mQyqZC8yH1inGyEMDDk/b2BUaRE515ghMWC5av8bkS1HQGX\\n\",\n       \"nSs7e5AhuZdJch6D+Tg4Jy+dJehoKGIX0bzkDaO4XqqE87g3m03lfLmI6tlJKBQT3BFilw0zfnPE\\n\",\n       \"yKszBngnlN8aBWE88EScY4fcXvYCQ9HxO6MgzH2z2axwrzzv5irYOcVxRGF4fm3kjYQaGXELAeYH\\n\",\n       \"zhBrwv0Zo422KxaJeOzYgNa57Ydl1GtiJM/IktElc8Qwshhjj3k+n+d5YrwbChoEEKVP9VrE7lgK\\n\",\n       \"95vBYGDovbfZv6PRKLrdbnS73exFxcW6gpx5r3G5cMGE65ITVxp9B0LlfrPjYUIyjToxmsgtzzg+\\n\",\n       \"Po5arRZfv35NI8hYceTQEcy7jRxOPe95d3eX+wgOVnle4Hg8jnfv3kW3283v3d/fV/af0WfmhKo8\\n\",\n       \"Owv1+rby7J//+Z/znZFDxm7DbV1aoumuBqTQgmDQxSvIHA64nUzrByOEyM1kMsl97AADOUGnQKqP\\n\",\n       \"iCw8cjDvve9WJF4bileoiOv3+xlEWCaMoETsAivkxXLIGm42m0pbHr6HE8R+LZ1veHfwj3gf9Lz1\\n\",\n       \"TYnK8H3WhGc6qPa7urChtG0OwAmSbEtKuSt/Vxbi8DwCJYIM6yjujd7z941io7N4Ns9C97EPWTf2\\n\",\n       \"Pwic19BOXXm9miMFzM3EEXljoL3R2EQIvZUNG6hETfie008mlDu6x5Ba+B2VOJ3GpmZRTCzEMcEg\\n\",\n       \"sJBc3MPoD89zJFymCN0N3R42UYBhdkdXjIE5sxDzXgioFSH3NNnXwmgFbeIiY0dJ2Qnh/VlL3olx\\n\",\n       
\"OO3puXEaDfKjHeWIXeWXWxawVm4LgNFjzKQFXlonZMLpOUPzoHyeb77HvFkmUVCO1IwuUGGGo2mE\\n\",\n       \"xA6BI33GjOK2Axqxq3h8enrKrtIREefn5xkhllE5zhxOoJE92hogo14nnK+Tk5Not9uVoGU2m8WX\\n\",\n       \"L1/SKTTZejAYxGAwiNlslvc1wRljWDoeyIxlPiLSsVutVs+iahupTqeTMmw95GianzhSOJRHR0cx\\n\",\n       \"GAwqyh0jNp/Pk1TLvqDP3Xq9TjSrdPoJdpx6IoKG6mA0qNPp5P1ns1mcnZ1V0sFG4O3Ez+fz7Pfl\\n\",\n       \"c/UitqkmUmKnp6cxHo8rRh9HxwbFASYXDhjvY0OJw+t7kO6lp5gdAnQ21YRGemi/MRgMKtQE9k+t\\n\",\n       \"VsuSft7PvbXKjtkQ2tF/TpfjALNn5vN5fs9NJ/lbX/69U6F21Ph/3un+/j7u7u4qjnCJ4PO+Tqnx\\n\",\n       \"XfQBY/VnzE35GYE8Py0bUGYIwCzDpsmURS04gbxPidJbzxmpZkz1ej11tG2wD1dH5hiDnSD/P/vA\\n\",\n       \"tBzLLTad5t4GAco1La9XcaTgZZToEIuL0WdybKwxWChMO0OlJ44gvAQxR0Tl78o0nqMrvhuxi7wR\\n\",\n       \"rDJ3iuEHtvQCIIQ82/dGAEGK7JlzLxtyX077OOK384PRsfFjs7AeRiF4FgqKuUIpMV+eT1BFxmfn\\n\",\n       \"zP8PemOHyBvBUSLRFuMwQsAY4SUYdcKI4kjRw4bne60pr+Yq00dWNI7mQJ4itkbIfDrQIKInlBCy\\n\",\n       \"Y2eCjU0bh9IZZDwoPe8bFOxqtapEX+T4QUJc0eaO7ay/kRXkqByj0yk4OXyPeV8sFomQsE4Yebo+\\n\",\n       \"TyaTHCM9fzBw+/v7le7OtC9g7v085hKn1Nwr5IP9Z5mi6SSIolOjGFEiUNYlYme84DzZWQDZcSNE\\n\",\n       \"IwI2EsvlMt69e5f3vL29TfTKe5T3Qj/ZwSb9y5Ez4/G4QjFg/Pxng4i8vXv3LmazWQyHw3w2nEO6\\n\",\n       \"j/O84XAY+/v78f79+0rQYVn0c1xJy/xw+agUOoX3+/18H+sCG/DpdJopQRA1HAjLxnw+j06nk+vl\\n\",\n       \"lO90Oq0EgZZd0ousgYNkDCuUBHMVqR72XHu+cXrpeF6mz0Cbms1m6oTpdBrj8bgSYHDPdrsd3W43\\n\",\n       \"BoNBIvLsrYhIDtR6vc4UZGn37NzyE3QXygR6jjUD7GDflylDrwVygY1wWs+UBv+t6R4EFryT9V63\\n\",\n       \"262ker234fxhw6wT5vN5OuPouPIwZ9r9lClP3udb16s4UnjmHiQoTwm/R8SzyMAIkR2aMjoyIlMi\\n\",\n       \"Ol4YOx08A6NneJl7WhE42vE72bGL2B3L4fE7SjM6g+MTUeUm+Ds8z4RSc6CAmf18O4i8z3Q6zZO2\\n\",\n       \"jQ76dPsS1sRDd08jfm8nAqfDa+LowBvRzlnJaWA85cZg/HZczTGwk+s0Fs6YYWzek7/HCbFzimGB\\n\",\n       \"01Gr1dJRQpEBR7vFQsQ2FYGicT4+IioGAHTNRtfBQnkRWSPbzB0ybc6aI3YMD6iKo0s7Guv1rplh\\n\",\n       \"vV7tTO20gJEfDLs5Qhj+VquVTkNEJCkbuXl4eIivX7+mTADfk9Y0pxIn0YGGHSLGZaQ3YpeidBrR\\n\",\n       
\"QZtl0k42aSee69QexsmXCdaz2Syenp6y8zWoy97eloR/dnYW8/k8z7iL2BrEwWCQZxiSUo3YcdJs\\n\",\n       \"OLz2ZdqjROWm02keWcKFjJJaxnBGRHz9+jU6nU6cnJxUUh5lMMyYXiKk8/ePj49JYqeRK4Yd3g/j\\n\",\n       \"iIhMk3748CGfMZ/PKy1fHHARkDqjYCSe37t4gPfmHe/v7/NsRNYSR8sE74hdGgo7g/5kfDj6tHYw\\n\",\n       \"R4iiBdBH1tD6g2DRuhOOjx1NIzbsXTew5ZmkzJkXy7cdIqNgOJfQR0hxsj7YIWy1u9MvFosciwPh\\n\",\n       \"6XSa6DV/Z8eG/e9CAr8L7+jAkzYSs9ksgwZ09Hg8TpQPvqMDVZxd0ro8k33vwKC83tofvF1v19v1\\n\",\n       \"dr1db9fb9Xb9nderIFI0net2u5WUAHAw5FQfd+F8v73okuhWpvJMGHYEDWzp/K0vf6+Exg37ObI1\\n\",\n       \"98fRCZ+5SsH3NApH+uqltJzJ5L5HmT/mJ6kBR6VuWkeUDKfDc1Ov1xOVKvPz5OXLc8qYj4hdCtQo\\n\",\n       \"ChGLIz+/s+/h6Jq5dKqC9eVepLbKiM5oHnNkNJGUi9HB9XpXAgvMy2dElLwPqQY4DSVqSGS2WCwy\\n\",\n       \"2gbp4AK+BmEzodMIYhl5mi/C35lfYySiXq9XiJWr1SpLzS3Pnmfvs4hdStAkfaJSGoKC2O3t7cXl\\n\",\n       \"5WVEbGW41+vlGji6hne1WCxiPB7HZDLJyhgaDzKuUgYh7ZeFIVzNZjPu7u5ynzBfpFhANEq0FfSK\\n\",\n       \"dyyPweF+Lpgg9cPBvCAu3PPdu3dRq21PDzg8PMzP4YQxR+12O6uGTC7u9/uZxouIlBPWx80qQbxB\\n\",\n       \"eYzG8f/NZjOGw2EeJ8M9QQ+QPeR7NBrFYDDIPVA2OGZOeHf4ZP4MXQE/kfUH4aCKyqg632u32/Hx\\n\",\n       \"48eck19//TVub2/j8fExptNpHB0dJWGbPWsCOJf/TSf5Ml2KbrLe8xmJzWYzWq1WpZDIfCzzcthn\\n\",\n       \"k8kkDzW2HLN2EfHsMxAwPydiu99/+umnaLVa8fnz52i1WpUO9XDdSH+RrWAtQKE3m20bD+tT0CGI\\n\",\n       \"8MzNdDqt8EKhUzCnoMW8g9OMpPJB6vis2WwmcmS9zthNGuf9IiLT5mUxmmXYSBM2k7Y3/DS9hLnh\\n\",\n       \"vpZR9pFtb3m9iiPV7/efEaVZeKcIUKJAm6WR9uW0idsfRFTLLF9yiMoc8ktkcH9mx8RpL94ThWGO\\n\",\n       \"gR09vyvfsxPonK/5QEDI3li8e5m69P2YV48DJ2N/fz8VuVMzODMoD0iu5qGhaEziJu8MsdYVOHYG\\n\",\n       \"XjJezKVz6lwoRhO8cXR5l9LJgqjLmpSlrqS3mNeI3WZ7fHxMhcQ68V4mW9o5wXFeLneHantD2zjb\\n\",\n       \"8FnR8878HcrkpTlBLsw3tJwzJ/x0t/iDg4NMq6GIucy34DwvrxOOIDwUPiPlzVph2AgAlstdXyvP\\n\",\n       \"AyX3h4eHcX5+nob98vIyvn79mkqYd2KdPEfmhjHvpSNpIm8p915j9h96xEax5P255Hy1WmUBw4cP\\n\",\n       \"H/LZm80muYb1ej1OT08zvQkvg7PSHHyw3pzjt1qtMiVIwDkajZ5xNR2gMXe8M2kpAoYff/wxn3Nx\\n\",\n       
\"cRF7e3vJ+cEoRUSutx35cn5Jozw9PcXp6WklBch8Ybw9hxhSjJu5o+w/HF86dLOncdapqkOmcGbK\\n\",\n       \"tBjrhMNYr+8O3iZdt7+/n0UB5f5GT9mQ8z4Onpm3xWIR0+k0gyh/pySMs2a8C845vEG/J7YR58z0\\n\",\n       \"FOYNh8N6n98zH6bJIHukrk3mHo/HmX6MiMq5tegTU0NstyJ2nf/dl86yiW607ceeAU7wPeSB/VLa\\n\",\n       \"NQ6C5vsELcynA3cHnoAG3NfFBHAgv3W9iiMFv8QbnFLE2WyWSrdsjbBcLp8hHSh3lByGkM8c1Vg5\\n\",\n       \"IvA2qiYBskAIlhcfgw6HyGgQThjfc57fqJcFGI4P72CUy/wunmXBZxw2xtwfJwpCp4UfIiLfh2vD\\n\",\n       \"WhjpqdfraRS9EZbLZSoK3gfjSgk2Df6IDNgwRlFwEsh7LxaLigNmlMToiT9DNphvVw/S98ib2orL\\n\",\n       \"hwTjzOF4MBc8zwrGjhuInpVNyZGDn4ACRR7gTiBn7iVEWwSTwK0I7EDaocI4sf7srYht/5p2u51j\\n\",\n       \"8D7EUUJe4YNFRFZM4jAZPSjXZr1ep5Nxf3+fDQRZXwyBW4CUvb84+mIymeSxK1w446ztZrOpkFVd\\n\",\n       \"QUQ06R5TyD5G0bJBNGpdwT0ho242mzg+Ps53wvnCKWy1WhX+FEYCUrEDSNaQ9i8YxHK9MfCeAxqY\\n\",\n       \"GsXEEJgTyQU/ptlsxvn5eczn8/j5558jIrInHLLkgojT09PKWYA2XlzD4TCdhaenpzg/P8/7Gm3o\\n\",\n       \"9/s5pwQrIKB2KkAwyh5cyAqGcTweV46IabVa0el0cj+iX5lvI9F2bB4fH2MymWThio2py+XLAiRQ\\n\",\n       \"Vs5/5F6MD7lkndFtIGd2Rrkn2Rre1foLvtx6vY4vX77E1dVVdDqddDJBKuEq2Qnhwqk3Lwv5ALFG\\n\",\n       \"jvx71n+1WiVaOR6PMyDAiWK+W61WxbHCnkRE6mz+xs4g+9D6xM43z8D58fE8OFLYcfsK2AuDD7xX\\n\",\n       \"6QCaU8n+/Nb1Ko4UG8PnDi2Xy1yYiK2nXSo3lGaZSkLh4927Uy+OgjdQxC464vtl2i9ih2SV8CJR\\n\",\n       \"y0spKf/OhpaF4/dWmCWa4FRdSTT2GHCqcBBLlAujDOpgpI1IkPG5d1Cn00nFUaYMy54pEPj4Hqka\\n\",\n       \"0CiUBukUjJbTRhhunBejNVyeozKdC9HRf/f09BSz2SydH883/2bdDdOXFWwPDw+pFIhkUTTInNea\\n\",\n       \"En/elfsapmbjGz0ySdNKA0Iz8LmhaN6JMVlOnMqlWaMPgnZUaqSHMfV6vTg+Pq6k39mDdqK81/jJ\\n\",\n       \"+to5pfM5KW5XUD49PeV+9/52ZGiSqsfeaDSyms3IImO0EbOSZq5x7P09HEIqPm34WDMcKt4VJ+T4\\n\",\n       \"+DgPkWbv48RiOIjgGb8rOtFHjO/x8TH3khFnZMmHRrsTc4mMcpHq5Do7O4tff/01IiKur6+j2WzG\\n\",\n       \"aDTKTvNl2icisk+c5Y13gjjO3zE31qOcWcczifR5X1fzGiVx81/k+x/+4R/iy5cvMZ1Oc/yz2Szl\\n\",\n       \"iOIP3p1AG7mxjttsNnkfnDHvfewQOtepYhwk0CH2DM6ou4+zZgSi6FqjK65QLnWxEfzLy8u4vb2N\\n\",\n       
\"TqcT//iP/xgRW6f37u4uKy+dwbGzQoViGXwR0BN8+8Jhsm6bTqeZ9nexBPLd7XYzVYjjHLE7g9J0\\n\",\n       \"GKfvmBPmyfckwMMOs/bOWrGmzkyVtAWvPTbQBQl8xnO/db2aI+XNErHjCtgLR4iZ1Iid124DTzXL\\n\",\n       \"0dFRVjFE7BQf9/NCEa2+hEhF7Jyoer3aOJNFJ3KCE8JlR8e5WyNc/Nscl4hI6NwX0eFLjpc/Y+N5\\n\",\n       \"DAgNyFmZ52V8KG82OPNDVdfj42Pc3Nzks1Fu9PIwT8ZIhisqUJ7T6TQjfkeXKGbQKTcJRHGVKQxH\\n\",\n       \"k0YXGDu9glBELuPHEep0Ool2shb+Wz+PeSZlYNTCDpbljMtjMjISEdkHx+lkvk/FFvLksmvfG2eC\\n\",\n       \"9QX1JQ3A+0VsjSkGmujOfAAcJDeAjNhVQlr5sYZHR0f5rowFBMyBCdWc5nvgyIzH46zi4TOca4xs\\n\",\n       \"abjLd/Kc4FywtsgL1Wmnp6eZUuTIEpwdI7KmGDC20omwjKxW26o3o54475PJpIIcgsItl9u2A+X+\\n\",\n       \"Z1zoqjIY4//tfDutz2XklPeBk+Z+Zg8PD4lQ2ZiQlqWS8PHxscLfwSEwWoMscuAs8miHEAe/5LlE\\n\",\n       \"7OQdvXVwcFBJF8PZIhC7urrKzwgU+dzOGvdG97uFCVVim8321AY3ypzP5xUeJPdEN79UBet0J++P\\n\",\n       \"PD09PcX19XU2Ae12u4niYuOGw2HyK603bONwtli3m5ub5Dk5A8G7EpxYr0ZEBpS1Wi3Tr7bTzWYz\\n\",\n       \"UWKvE20aCAqcdvceKoNPbKhtiCsTcfSwX85QsW/YHw5oWXPGYn0CKu6/s1yw1wxKOEgmqCqvV3Gk\\n\",\n       \"2IQWRoQDEmmpGMo2ByguTih3p2WXXfO3GDffs3RMXOrJ37CgRkEQRj7333sBrMxwEF4qZydyXi6X\\n\",\n       \"KYR8RoqKVCeRdsSuf5QNq6Nu7gP6YljVJHLIo2yQ0og4ZcIYfLSE58bIAj2AIrZw/sePH19M3WL0\\n\",\n       \"cEyIbCOqKSPmzQ4U788clGRc1qtsqVCm9vw9b1Dn7RuNRoWYW6ZoWJunp6d06rkvcDNRIo5KRCTS\\n\",\n       \"an4RRoj3wTA5LYlcUTTg93FLhdlsFrPZLBEp0CBImXAXmJtWqxW9Xi96vV4FAUMuFotFpX8N74G8\\n\",\n       \"MUfwgHDkut1uclYs+/v7+3k0zXg8TjmEQGtyr1NK7GnLCM4pv38J5YXgihEy34V7gGSyPhGRARtr\\n\",\n       \"RCqbiz0FksTcgCQul8sYDAa5fyKqPZVAyMq+aaXTFvG8hYoRXkfZyL+dqna7nWmsxWKRzsL9/X1c\\n\",\n       \"X19X+EBGCGazWfK5cEDQ36PR6FmvMuTs8vIyzs7OMt1p7hUBD7zEktOzXq8zZbi/v5/zxhwMBoNn\\n\",\n       \"aMZoNEo6iA2p18IIr8ePM4hMIP+DwSB6vV7ynXD+uCfPQAcwPsYLFxBHLWKbCr29vY1arRYnJyfR\\n\",\n       \"6/XyXY6OjnJPuK0Ka02gzx7FyY2IdMj5e/c0sywQiFgPky6mNUAJWIAgO7CHqmCagBFXUp7ePzwP\\n\",\n       \"O+LsAt/D7uLEsLcJ4nge9obvkUWgeawDKZxI9jfvUrb7wcn0nL2Uzk5Z/OYnb9fb9Xa9XW/X2/V2\\n\",\n       
\"vV1v1/96vQoiFbFDb1zmTxoOUjQRBtE1HiIRc8Suyyu/A+aPqDZfK/Oe5hQAIePV4qkTdZdduM0d\\n\",\n       \"cCrP0KMrGiKqh6Jyf5+CTarPUTZjBloEPnbFmzkEnk/fn6hof38/yd9EMUDOpF24iHhOT08rCOD9\\n\",\n       \"/X3c3t7msRdOYQFBN5vNuLi4yFw68wB35P7+Pm5ubrLihkoi1sh8FhAJ5KNM1RC1MWaTjb22/D+y\\n\",\n       \"x5zzfe65Xq8T+SSSK/lZJkA6SgE52N/fz7QiqBPRD5ENURzf46wx5tpcHJNejbKAtPG5D1IFxr67\\n\",\n       \"u4vb29uYTqdxe3ub8rDZbOLu7i5arVYlgl6v1/Hhw4fkh7iBIBGlU8tccEQYt2WRKrFut5vRIvdg\\n\",\n       \"fagUBJmM2O7tyWSSyJHTRUSJruRxeq8krnodifo3m012OvbeBSWjczhrSLoDBNQoJ6gDJH7uxXwj\\n\",\n       \"w6ytkTMIwRzfwkW1WSlnXOZt+iqRca8VSDx7ZjweJ3K0Xq+zkzbNfE0S5mw7UNyIXWX1ZDLJcdJC\\n\",\n       \"wBwxUqkUGjgbYESURqDMN6gnc2YE1BwZ5I55I4UDr4r5RieBZJti8O7du9xTf/vb3+Ly8jLRSLiC\\n\",\n       \"ZA8Wi0UlXcj68LlTnEZWPHdwhU0hccYApPrg4KCS4kdekUPub2oAe4GWBdbtvJtToMwbmQlQS5/d\\n\",\n       \"iv1hXaGC9Pv9Co2A9DjyBk8MvWAkz4R4tyKhjQiFCS4icxEUGR5XjjOf6GrTWdBp2HCnGXlfUCn2\\n\",\n       \"CLLnOSyvV3GkDNFZ8WP8MTpOvwC5NpvNhFkjtpArsCGGpCSN45zYyUKYTGg1L8ZKyqkmFh5j7Od4\\n\",\n       \"AUkZlL0nSKeZX2MHjs3B2MsqpJIcyLsDR5ojw7sgJFQMcT8UP0oPnsh8Pk/eDrC7HUNg6OFwWHFA\\n\",\n       \"IXhzoOfp6WnOM1D/4eFhwtAUFxjqhwhqZwkeh40Pn8HD4vsoPhtzeGKG39nQ/I1TQFQXomic+qU9\\n\",\n       \"gavbWEOgYZTYarU7nNaON8qV78Kp4H7O+TMe5MJwe8TufKgylcU7oPycMiH9sFwuK5VOEbuzJCm/\\n\",\n       \"Nu+F1B6p2Ol0+ozX1Wq10jHFQMORcTDBuLjXbDaL//mf/4lPnz5VeB3mL5YOiA2LeYDMCzIC6Z65\\n\",\n       \"6Xa7eU9aQJiTR0Uba2UHHNIzhtXrw96m+7kvnGd+z/dIAeOAOD1JdaCDv5JD5fSd+Zhlaq50fn/8\\n\",\n       \"8cf45Zdf4pdffokffvghIiLXDb3h3kSs93g8juFwmPJrWkOZIqWCDQ7i9fV1DIfDuLq6yvf5/Plz\\n\",\n       \"rFarXIf7+/uURThDh4eHcXl5WTla6OnpKf/daDRiMBhU9v5oNIp6fXtOG5w1xsH7LJfL6Ha7cXZ2\\n\",\n       \"lu9Zq9Xi+vo6Wq1W9n6KiPjrX/8am80mixFM4KZikDS6U0a1Wi33m6t2mTMcdrh35nHyfdof8C4E\\n\",\n       \"SRRbsN7YRJO4S04xuo3fHR4e5vdwyPr9fnQ6nQp3ydXMvLsLoFjDm5ubrNhmHPSXYqx0qOcYKZxS\\n\",\n       \"+oIhPw5WsFURO1vIO9jusW/MozU/DOcQveiebcgd+5R7WuaojCyvVzsipiSZmaMB2dXcG7xqNjCL\\n\",\n       
\"3+12K1VEVrClsxRRRYbKxbJiRoFjcEy6NEnPRh/BZSGN2KDkMepsAr4HImXDx8W7sCnsIBKtosh4\\n\",\n       \"FyrWut1uEqrhRUTszvADzfM5XggzKJ+JfnB9Wq1WnJ2dJZoQEelYrdfrPO/KjgbGdTabJaLFZzg6\\n\",\n       \"VFawEak+eXp6ik6nE/1+Pw0NCh9jbyN0eHiY0bUbF/pdeK433tPTU9ze3maFGvwd5MgOrWUHJ5Uj\\n\",\n       \"JFD0boAKFw1FZySBfzebzcoZdjaSbsFh2cCg2VlAMUBwNZJ5d3eXCALfN8o1Go3i4uIi59XGm3nm\\n\",\n       \"M8Ywn8+j1+tVOCYl/xFZtVNLddvt7W1cXl7G999/n2sI4ReD7v1kbhTOsM9pA91lbzjwMRfJpd7c\\n\",\n       \"l7/B4XXgB9dlMBhUIv29vb0MRHDwuRgz/DFkj+cgZ0Tkrk7ECS0vO03+6eulz9Af3W43fvjhh/j6\\n\",\n       \"9WsF5YO4DErgyjT2Mghjp9NJPdzv95Pv1ev1KoRy3v/y8jKGw2H8/PPPFeeFvwP9M1KPk7xcLvNM\\n\",\n       \"xojI/m8cIG10EINIQOq+aIvFIvl833//fXz+/Dnfj+qzZrMZHz9+rHA8r6+vM0C28xKxa0PR7XZj\\n\",\n       \"Op1W+EpUTeOAuZgCfW3+qxFAG3DOlIuo9h5D/l0JisOGk27Ujf5ZzJ/RQPYUPM9Go1FBSGnLUga0\\n\",\n       \"kOmR61arVRm/Hd3Dw8NEhLl3q9WKH3/8MatHI7boIIEo47NzhF7CmfJa2HcwKMHvcbDN/eUe6BLr\\n\",\n       \"KObeRWXl9SqOFB4/qYKIXZrGREuUBZAnyAiebURktZ4hfSYOMi33QjlGVNM73vS8Cx47gmM4FAOO\\n\",\n       \"0eciSjHCZGXGIpYwPc83cdROHY6Vn8U98chx4Bx5Imh2AlEoJktDHvQGKTe7WwKYqOd3Xa+3J7dz\\n\",\n       \"OOT+frWBHmOgaswRTbPZzLSIoVrSCybSlmgdisXOCQoTAqMhbJNDnX5jDKB5QOtuD+BGfXYwOPvL\\n\",\n       \"zrdLdjebTbb7oEqtTF8C+1vxMzZkHIjcc0GUWcob84Fzhwz3er04Pz/PoMCEzL29vWzCiWww9xSI\\n\",\n       \"EFg0m7tDVnEuSYc2Go1ce6M9/OQ9HQT84Q9/iH/6p39KeUKZsUalM4mzZqTWjhSyj8zxmZ10fucW\\n\",\n       \"BzwLeXUUjtNN9Rb6CyfPzifzTVoCh7Hb7eac4KyQYrdcszYgp95rrDFz4ZSJr9Lh4vr8+XP88Y9/\\n\",\n       \"jEajEb/88ktEbJ26i4uLdOZB33kG+xAdQJuMiIiTk5NKisnoCo56u92OXq8X33//fTZJRO4JLF1B\\n\",\n       \"e3l5mfNNmgd5I3im99Z8Ps/xk1qnrUZEpOOGLJycnMR3330XjUYjq91++umnqNfrcXJyksEn3+eM\\n\",\n       \"SWTVpHfQu/39/TwzEaf6+vo65vN59vsiwEaeQP673W4iwKwXc469scNJ2pM0FX/Dd5F9uq1bt9Me\\n\",\n       \"giABpMdpx8vLyyx0QhbRX4yVtUAnIefD4bBS5X1xcRGnp6dxfHxcQeS4SM0eHx9XwA90OM6d28qw\\n\",\n       \"50tainVMxA6t5nc8m7Qun+Gs4jQZxTMq9q3rVRypn376KQ/AZHImk0luGLxKVwyB5BwcHGSVXsSu\\n\",\n       
\"EWLEzrDYKPgqnRUz+3EmuBDkiGr1HdEIC2whBeXiWf4en6HwXYEEYkZkAlQasSub9/f5HgqNyx2q\\n\",\n       \"2bR2QBy1wB0gHWeB63a7cXx8nM6uOQDAtFTzMVesBc4xBtV5ZuYYZIbnkbZCeTtFAGepXq9nbxsj\\n\",\n       \"K0QkODAuVQelLNM9Ebv0FhGiEQSMqnkGXkNXCNmpw5gy7+4UsmdeAAAgAElEQVSpwrrhDODEWN6I\\n\",\n       \"wGq1Xcdslzrj+JuP4UojpxvgMhilNWxObx93eGaOQOGM6DAGUvAEF9zz7OwsOQ9GYJgjy5dTVMjJ\\n\",\n       \"b37zmyynt8PrikxHmlRa4kSU6AdoIYaGCJzx48DwXSNk7Dnv/4hICgF71cYNjh9BQKvVyrUg2KEx\\n\",\n       \"pR0+UHlKwc2vKZ0j6zPzRMrP+NyyZYcKR/ff/u3f4vz8PP70pz9FRMTf/va3jMxPTk6iXq/nfgI5\\n\",\n       \"w7EldW+uar1ej8lkErPZLFNyjIMg+P3793F2dpb8Ghxz5s6ouRFy5h3ZgL+GY2I9jL5Arh2Ut1qt\\n\",\n       \"dPqoRByNRilTpGVJGzK/2BuqzLx3I3aVsDihXBzBwjtafzebzTg7O4uLi4vodDqV+xoNLZ0DgkYo\\n\",\n       \"Edy3lAF0nzuhl8BBibjCqYLX5s7upMDJZphislwu4+bmJn788ce4vr5OO/Tx48c4Pj6Oer2eus1Z\\n\",\n       \"Ggc8FxcX+Tz6eZlS47lBxzBGxg4tASTL9JpyT5hSApIIV5U0PfNOFuZ3v/tdvHS9Wh8pBAoBRdkA\\n\",\n       \"RcIZiKiSxolQmACUK0rdyA0RFBNuBMEGGLTIaRiUHRGqF8opBtIL3BPDY15FRKSBsZPgz8qGlb7Y\\n\",\n       \"nBgxnodBAHKGm8R3UMxwGxiP35Vn9Xq9inMKFPvw8FCB1P1Ow+GwQthzygtugCMs5oBxGBUiAjFE\\n\",\n       \"z2c4OiBzbg2AcmWjlRvPyrj8f55rpQj6CYfLaT8rDjapnXjWFwSl5AXZ8TRpnVQehvGlxoOklcqj\\n\",\n       \"ZTilHmXk1IdRjLLHD+sVEc8CFu4LAsU92UMYKUjZzBv/Zu4dtPR6vUSEnUrEwdjb20s5A62A81ii\\n\",\n       \"unwPeUPpGVUFLSOV7NQAwYAdDL43Go3i6Ogoer1e1Gq1CveGdzGHzbxKnndycpLcj4hdihxODo5I\\n\",\n       \"xM7xQxat7EEjmLPyYr1BO/kuMmjUzwEXOoriHMZ3fHwcP/zwQyVAtL5g7kGWnOrAOWu32xmYcoHU\\n\",\n       \"WuczfpOzkVXz22iJcXR0FJ1OJwMzsg04bEaVSXsh86vVqtIb6/Pnz/Hw8BCXl5fRarUSOcXxs5Na\\n\",\n       \"oiAgzZYBvkOvNgfQIDgEJp5TAk/k3I45xhv5cUoNZJo97gIQv6s5nqwdzzcSbfQbcGEwGFQI3rPZ\\n\",\n       \"LKbTaTYX5V4RkfQP+GHtdju72r979y7pGI+PjzEajZ6BDQ5Y0VEuggENs1OP3seOOZXnFKP5ik7r\\n\",\n       \"4lw70+LCHSNnZVbkpeut/cHb9Xa9XW/X2/V2vV1v1995vVr7A/LXPiOnTKW57BhSecSu8iBiV2WE\\n\",\n       \"Z+roixwqHnvptRslMcGZz0AJTAwnbQWSYfiXdykrhhgvkVeZ1omIZxEE98DrJgppNpsJp4Nq4D0b\\n\",\n       
\"kYrYdrgdj8eVqMrnBc1mszg4OIjz8/M4Pj7O+RmNRnF7e5tpv6urqxwHJeBPT09xfHxcicoobWZO\\n\",\n       \"zEkDigd9Ia3EBSLJd3keZEo4KU4ZQfY36lKmec1ZKc8+g9zrFAroz3K5jPF4nKlkv2ez2cxKHSOP\\n\",\n       \"pC+BsEEakVneGfny2XfwPfgPdIG0K3C+OXbA2jwbtAR5Iu9vLhrvCnRffg+5JkJzIzyiMlCr9Xqd\\n\",\n       \"c8NZeCcnJ5n2dfqAdyp5hbVaLStHN5tNBf11I8Fyv4Aqcq8yLdZoNLJQAaTTvAkjJlTY8r1Go5G6\\n\",\n       \"ibQE8xaxa1wIf433odksRFmjZ666fHp6ynmjaenBwUGlOMLvZX6m0Tn0RKlTQK2tm5za43c//fRT\\n\",\n       \"3N7eVhoY+2Bi0IWI3eHV3BfU2e8DQtRobFs5ONUGmowcG6k1b9IUC/ZQt9vNfW/dyl5jTfiMOaYU\\n\",\n       \"v9lsZuPUyWSSxHGQY6O/7AkoI6wheh1CvOeWfUl7D37H2Lm3ic58ZpsF6o48L5e7o5WwR8goVdDo\\n\",\n       \"Nmcq4KchS65II3VHwYCzAOg2yPHmItdq22bLVEN67Z+enmIwGKRtwlZFbNPF7969yxMEsCnsGeTO\\n\",\n       \"FYesL3YNuos75XPGK60xnE2JiCyEGI1GqSMajUbKMPdE9imS4HP7CmU7k5euV3GknNYxHO20gSFE\\n\",\n       \"Jh4yMpsuYqekKB+1AuM5CJoNNFCzKwAQejYKn1nxY8yBVc1TYCyukDLXCRjbpFnehffEULp3FmlH\\n\",\n       \"jL85G+ZWmQD59PQU4/E4SdGcgYfgkEOGq4QTELGtvpvP53F+fh79fj8FkrExJ+fn53F6eprjphIP\\n\",\n       \"he9UKmNibsxboNKPdJuNScSuvBiombnlfSE/WnlHRCpmO2X+CX/AFTERO/gbUqmNM4YrYpePj9ga\\n\",\n       \"WXqekNLs9Xr5txhSOA3wMCIibm9vM4WEQvUY4SqYtxMRWQlnA2p+IAaFTuSsPRwtZM18AO5FqiIi\\n\",\n       \"KsrUMmtyLIaVdOfNzU32rfr06VOlKhM+G3NIOshEb+Zlvd6e3k7lqGXGnEg4HIbq4UAwPubG6wxH\\n\",\n       \"h3lttVpZcUl6yOnC0pnlYo0wXE4FjEajuL+/j4uLi9QbdhA5YHc4HMbd3V38/ve/zznmvexc8NNp\\n\",\n       \"X1+l4+cUph2AiIhff/21shakmajEYn3h0tGhfLVapT7ivlQK2wGJiOSp8ZnPW8Owsycs41Qzsn4O\\n\",\n       \"rr3PXIwQEan3cMwbjUaSzd+9e5fONfLhAMs8QvY7Y8Bg8yz3WGK/8X52os0psj7abDZJQuff5hE2\\n\",\n       \"m81nnEDWyUUxpMC8F1ln0ocGF/hbFzogJzinFH69e/cuIrb74vz8PHmfOFURW71/fX2dsr2/v5+6\\n\",\n       \"7eHhIb58+RKfP3+O3/72t5WWGlQ3kvqM2AUWyB56db1ep32bTqdpo2q1WuVoNVLAFHfM5/PcH4Ax\\n\",\n       \"jLOsji/9Bj4jBfpSej2/+81P/g8v8pauFrLQIAAm7bnizYtvZ8ZRe8QuakOoysi7JJv6+Y7WbUB4\\n\",\n       \"X75vwhoT7aM8jEiZO+LokucQJfmAWQiuEdXjSzwvEZF9TJgXzoUyKdh8H8aHMbGRQqFdXV3luxkh\\n\",\n       
\"wUHBkHFPIiScO3MFUGxU5pTVfigj5huFxLuxdo+Pj1kRgwNNPh9SKhdzyvoSXeMIs0a0V+Az+A/N\\n\",\n       \"ZjMmk0muhY8cQLFBMkU5mC8GGZ41RkGVJG6M983NTUZJdsBQligpomsCA+8HG0nmjEDCzhKRMEEL\\n\",\n       \"ShqSPPJghIggBXmwIqLvDITddrudlTu3t7dZHYRM2JB6PxiNQ1lTseX9hDOCM4Excdm1uUIO3vg+\\n\",\n       \"ssG8sm4YgYuLi3j//n06oKAjROwPDw/Z4BY940OEicprtVpWJF1eXmY0HLFrDsraXV1dxcXFRc4N\\n\",\n       \"a8feL7lOPq7HqCqf2/HyxTEnrJsvdK3fjYOM2+12GtcSAaXJoVFLyyRkZZotRuzadCBXnU6n0rCS\\n\",\n       \"Y4WWy2UliGJPE5QSaPC8h4eHSpDI3v/06VNcXV3Fjz/+mH9vJ4PgaTKZVCpPHUwQNDDfkJRxvspC\\n\",\n       \"KS72hwPvo6OjdBr29/fzPV19yl5zxsSOEtkdZB99gHMLZ4h3KHsnOTihgq7Z3DY/paLz9PQ0UZ5m\\n\",\n       \"s5nBTcS2Jxm2ECAE2UHP0yeuXF/QSc4UZD9RHW1nz/J0dnZWCdC40D3YDXNRsYMEciXflmpcOGLI\\n\",\n       \"BUjd/8aRehVHqtfrZfUVAoeRR6ggUEZslTT9Rdg4DAohgnBtEqArrEqCNwtvhWxSLUqI71uwXWXm\\n\",\n       \"ijZHT3yf55WOG85ExI5UawKuy/95BgbIZGM2L4icET7DxlbsEZGQKIRODBIXzsavv/76DMlbLpeZ\\n\",\n       \"hjAigrMUEdmjyO0tVqtd1aBJpURyGFEjUihLkLjJZJLPoFM6UW7Z/NTpY1JmXKQFSf+xvm6QiLLE\\n\",\n       \"sELCxvnwe/L3pBIwLEYzkGGIklyuvnt4eIjb29scI4qBtfeJ9LRJsJxZ3phn5p8xvkRCxjljD+Hc\\n\",\n       \"4FRzT5dwu5JmPp9n5Vmj0YiLi4uMIC8vLxPZfHh4SKPJO7CnqMJCLjqdTnz58iVubm4SBfHl/cS7\\n\",\n       \"IgPj8Tg7R5OO4sKhx8AYWSEAGAwG8fHjx6jX62l4Op1OBitfv37NMnq+NxgMot/v53p6bgkMv379\\n\",\n       \"Gk9PT/Hb3/421wXjM5/PkyjNviC1amPhtWPOynSpf/rid3/5y1/i69evSX6PqBLmkTsCBc6WI5r3\\n\",\n       \"gbPIHc7GbDaL8XiccorRvbm5Sb3utbSDeHJykg7/aDRK4i/Ov59HyfzR0VEiZr5nxO4sv48fP0bE\\n\",\n       \"roUHAbD1ED33aFECshWx1QsnJyd5Xp4D7H6/nwcFWzZ5D/QsQSQXusAUBC47otgGf45MzufzyskD\\n\",\n       \"EVEx+OhPNwYm1erKNr7X7XZzb3BuasRW3i4vL7MIxRmMDx8+5Fhubm7i8fEx3r9/HxFbBJB7zWaz\\n\",\n       \"DCQiotLDirV030Gcrtvb23S6IrbOsPUgKDj/z1yCaPJ3puo4pcvzsPvoBFePImfful7FkcLbNC/J\\n\",\n       \"PTrgrhius1Iw94R7kB4gguOeEbs2ARZENpOREITf1QFGXriXK8Kc0+edHHWXXi2esPPCKHtSNVQc\\n\",\n       \"ROyiOJQNqE/E7qBcKhfN2Tk+Po5ut5tOG/NtYzqdTmM6nVacFuYXJKJUUEDFdujYUMw/EZs3v6tz\\n\",\n       
\"+En0hQNK0zrGG7FzMuGL0MuEd0Ehong830bpnKagfQYOE8aP77m7up1T+EY0uDMCRv8bokKUjbl1\\n\",\n       \"pJ7NiWCN6cCMgkSWcFxecvpIkRDRW7nxfBSt03esNUqjjLZA20i7oKhRJn4PDJtThKSMkW+qfVwp\\n\",\n       \"ZCcatKHZbMZ0Ok14/+LiImazWXY+dnNUno/zBKrG5yAoyCHOIWPHmWU9uZjHz58/Z4QKKkHAAc/D\\n\",\n       \"UTl8OuTf1ZU2lvf39xUuI++F03RxcZF73zSFMpXotSJ1478xEv/SdXZ2Fn/+858rUTkBKXwXc0je\\n\",\n       \"v3+fmYThcBjHx8eVSlgHHvV6vVLVh7NM1oCu8BE7ngxzY4dsMBgk3469ZNSNYA39D1JNzydoIEbW\\n\",\n       \"/uM//iP+9re/JQJqFAhH6eDgIA9wdpNVgiiCVqNjFxcXGWAh74wd2ggUBKPmpMOQSXQC6+dWMdgg\\n\",\n       \"Ag8jfebVEliwFiUPiipAjgdyS4mrq6u0ieaj/vrrr3F5eZn7xg4KegLbhTOGTLnNA9V/rP1sNsvK\\n\",\n       \"SYKSiN1egydFo9SIHXcQvW9aDgEujp3Ts656pNLTHEdQbeaQMczn8+SGfet6tdQeL1miR6AWODFc\\n\",\n       \"NCpE8ThfjIOCw2Pjzabi9ybkAY1yH6MpNr5lZIeQowAdXUfsjpkw9I0D4ny/eQ/mcTkqZ554ptMp\\n\",\n       \"tdquLHM+n2fkHhHxu9/9Ls7OzlLZ3d3dxWKxSMV4cnKSKUQrOq8Jl9N+KFenVu30mBtg5Q5s71Qk\\n\",\n       \"yoYGccDMTtE0m9vu68zL6elpKj7Wlw1vrgibCyccQxWxc0AgkptbZIgaJ9KRtZ0OOzVE509PTxkt\\n\",\n       \"YVAiIlMUoI0YgIgd6RRehOfNJcOgB3aWcbQYn5EJ7k1EbzSWZ5lLxboh1/BIUHDwFnHI4b2wTqQv\\n\",\n       \"acaKHHW73XRqHfn5e+xheDYRW+T64uIifvnllxiPx9mhPWLXTBNHj+eSTnRKlvVzKh1FaeSZ8eME\\n\",\n       \"jMfjXD/WolarpfL/8OFDBf2Bs8i+9BqQ3jg4OIjvvvuuYoRYb+TGTq05ayXnECQLZMpGmJ9GHfle\\n\",\n       \"rVaLDx8+xHfffRd/+tOfcl+s1+uM9klPMY5+v5+BAHqMwhLmFK4L+sIBH0R25NdkbNoGNBqNisGC\\n\",\n       \"F8dVNlBdr9dxdXUV4/E4ut1uXF1d5d9++vSpksr/7//+74jYpqHa7XY6TJ5XN1TudrvRbrcrfZSQ\\n\",\n       \"U5xijK6dwdlsVkH+4Sbyvm4Zgt5uNBo5d+xD7JMLNRzoghzbufIeNg/R6S36ZsGpfXx8TJtxd3eX\\n\",\n       \"AUir1UqHGlm6uLhIp84ZjM1mk73a3r17lzy0iIirq6tETWu1WnKbkKkPHz6kvqGfGrJIsQPAirvT\\n\",\n       \"w88iu8VagIb2er3UucwLndVxgh14LJfL1LHT6TQ2m02ll1/JTy2vt/YHb9fb9Xa9XW/X2/V2vV1/\\n\",\n       \"5/VqVXv9fv8ZikAUQ8qh5OxERPKjiLJo3mkkxCiBkShHZvw/qIJTGE5dAJmaiM47A3OaswT5GmjU\\n\",\n       \"sKKhSRNn+W6ZiuAivQWPjPE9PDxEr9eLwWAQs9ms0pQPGJr0C+gNUQrdyyFmkl5hfUjBEXkxX6CJ\\n\",\n       
\"vIcjOqBpyNucvxWxRU+ISIG/mQ84KXt7e9nh3pAr9wbeJzL59ddfk/sEWmLSN9wJkEevuSMyCP4R\\n\",\n       \"WySj3W5nBF6v17NBJJC40w1EQrwTCBc8EsYBYkWq1VVdoJ6gjfBPIrYVlHAZIHkyb05xR+w4UxE7\\n\",\n       \"zhPRoFMBEbt0KpEu72IEjlQLc+Mzp0B/TbilupKUmDkIFD+4fQZjIAWM7MJNAP3sdrt5aC3R83w+\\n\",\n       \"TzlbrVYxHo/j69ev8eXLl4iIbMLpZo+G9tm3oGtGkyO2lUgcqsp+A9kGGQWZYQ1Bl7g33yNVwnuU\\n\",\n       \"CCgoFrxEX6y7OYq+zDcyqlYWiXA5ffTp06f4z//8z/j69WtEbLtQU5FFusa6lEIH5PPk5CSfyfhd\\n\",\n       \"Ecp8s+YgLFSbITfozX6/n5WvjAPSb71ez/eMqHY2J51DupRnn56eRr/fj9FolCm63/zmN3lP9Dry\\n\",\n       \"FrFNRXGI88nJSaKxyB060aijK/u8xyMikRGqip0VsN0A6XXRCLaNLt9lk16eYbsVsTuuistI12g0\\n\",\n       \"SltxfHxcscHwzricMmOcvJf5kegEF22QDm+328kZfHh4yGdHRLbccUW4Dzv+y1/+Eo+Pj/Hu3bsK\\n\",\n       \"pxQ+LHrbPDeeR8aHTA4XSDq2hjHQtoH9ad4pTV+/lV6PeCVHCoIwXKKIHcHaUDSTymIDZXpiSF2g\\n\",\n       \"IJz6QqHhSNmZgluBUTRUyb9R5O4zQ2oRCNgOj50gv1fEjgQXseNQmcM1HA7j9vY2jo6O4uzsrMIF\\n\",\n       \"gW+B02LljXJ+9+5dDAaDLDnnMFIcxF6vVykzpSU+aUYrPgyQjygoOzDbGSyrGZrNZh4wbOPNvLqU\\n\",\n       \"mDXkflSN2JFy6sVVJ9fX16nQmRN3jPYGY+28TuamADePx+NUDDiNjI9UAwaYiilklZw+z3AKezKZ\\n\",\n       \"xGQyiffv3+ehruaXMHbS2nx2cHAQJycnCW2zlsiwKzLPzs4yJYqRwIA7XYwjAXfKF9/ZbDaZjsAI\\n\",\n       \"8Ty3DWCPNhqNVHDIuVMGw+Ew96DT2jb+vCvfpwUBzvloNKpUxV1fX2e65/LyMn744YeUDVIbTj07\\n\",\n       \"nWR+n7ll5h26fxjPpOoKjh1zM51On6XeGCOyzTjtNBLsNBqNbDOCc84zPAYrc5ws7lOm8JiH8t+M\\n\",\n       \"t9lsxsnJSVZmXVxcRLvdjtvb27i9va0Y2f39/ZjNZsmFwfFjXKTfn56eKkUVvCdkdSgRDkx9TFOp\\n\",\n       \"L1xp/C//8i8Vh+bo6Cg+fvwY4/E4uWLIHSn/6+vr+K//+q98HgUDi8UieaSlc0p/osVikfrUtgqe\\n\",\n       \"JrIPb5WO3m7Pg2Pp9J5TsI1Go1K0ZMcV3cT9kQvmCeerPOYJ2UN/m1B+fHwc5+fnqVOto8zDxB6a\\n\",\n       \"7oKsQlnwHnIHfMtNr9fLoB7Agz36888/Vzh64/E43+XLly9ZWdlut+PDhw/JdQOU4ExDgi3mhrQ0\\n\",\n       \"h7M7ONpsNnlW4V//+teU/cFgkNV+OPTsLfMUqagtr1dxpMjvll4tXAKicxaDc6vgdNhbRGiILB3R\\n\",\n       \"Ifw8y0LsEmYiDCMroAYvlT26Ms4KjHfDWDi/D1JhQ/GSMn16eqqcgk0+nzy0S5U5uBKSMgowIioV\\n\",\n       
\"SzhiJo1jmMy9MdHO5FAUE99jva6urrKiISIy0gS5MRF/MpmkU8icMd84RxDcjZwxTzQ75NkRu8rC\\n\",\n       \"+/v7bNpnQ4sCKvlaJlkjP3wPQvPt7W3c3d1lg8GIyL4lJiszBhwX/m3kBJn6/PlzNn0zQsSYuU9E\\n\",\n       \"VGQEZTkej/MMsIgtx6DValVIoW5iCzLGs8z3idg1nywDGivR5XKZUfnd3V1cX1+nvG42m6zOGY/H\\n\",\n       \"iW5yXxdTLBaL+Otf/5rvyBgIjnDQzGlAXnD44Yt5fYfDYfzyyy9Z/l46y+7NZXlCJpALO5n7+/vR\\n\",\n       \"6/Vivd6Wj1uhPj09VQoL2A8gIxgiE1kJPlyk4mIZgod+v18x3qvVtscUjkvJkzLS5gAUgjLyXiId\\n\",\n       \"zHOj0Yjf//73cX19HRFb9BMngfdkPs3fw7Fzs1f+DWJhfXN7e5tFA3zuRoinp6fR7Xbj6Wnb+45x\\n\",\n       \"wFEimIIvE7Fz3AheTEbGOR4Oh3F9fR2r1So+f/6c+4tiBypljQ7u7+/HH/7wh6jX68mVYX0jdsU/\\n\",\n       \"oOARO87O3t5eHrXFPiQQsWNqp9aBi4P59Xqdfcl6vV5mFXgXZBi75vWnsMOOmREbkPQShUXGCRQN\\n\",\n       \"WHDQNnbawQH6nMpTAsKIXQaDNg+Q1nkW3+cIM/Rwp9OJ7777Lm2BA2KyGlSOl4EQsjUajWK5XOY+\\n\",\n       \"RPYbjUbc3t5WirOwQWQyptNpIpVfvnxJbuO//uu/xkvXqzhSGGorN2A1elmQzop4XhHiRfa/rTAi\\n\",\n       \"qtEYRp1n4kRZsTh9BcRpUjn/Ngpl5wylhqE1ub1EVFarVTognU4nzs7O4unpKS4vL2M2m1VSMwg6\\n\",\n       \"iBJCOhgMYrVaZentcDjMzQYi5qpDN8KL2HWqxng41eQ5Yw2Yb0iSpCzc0I4Ig/n0mWRuzIeBiNgR\\n\",\n       \"JSFrU5USsUuXsoGdLgWaRolMJpOMMChHxyh44xNV2yEBirYThcPgA0kpjUZZo6hWq1Wl4Ruywbz1\\n\",\n       \"er04ODjIChDkLGKrpLrdbnbh9Zwyf/f393F6ehrj8TgV0XA4jF6vF/1+P8meyAo/SX3a6WONTMJE\\n\",\n       \"gbl1B8gYYyQQuLm5STQLgu9kMomjo6NMp9jQNBqNdISbzWZGft5POJcuCliv19nAD3njXZ6enuLq\\n\",\n       \"6ip+/fXXNL6Qmnnm/v5+7sMyILLzaDSW7xE5+zMj2qPRKNFDLirPQFGcJiEQwhHBwUa2QQBBKCIi\\n\",\n       \"x2wEzeR+UMCbm5tKew/3PeNykQ0FHfRp+vd///eIiPjzn/8c9/f38f79+6xAdGrdzjl72A1+F4tF\\n\",\n       \"XFxcxN7eXlxdXVXSJpwIQWDrVjMYRQJXO0uQ6NHRrD+2gvHa8NnB/OMf/5jBW8SuSo6siPfFbDaL\\n\",\n       \"er0ep6enqedcsdrpdOL6+jr76xmNbbVacXt7G4+PjzEYDDL4QHbH43FsNtsGnD6hgzYZ0FV43nA4\\n\",\n       \"zDUsq2fZowTAzWYzyfPIGvOIw++ihVqtlojLarXKeSNd2Ov1YrPZVA6md7NN1sL91QhiIXsbBQLp\\n\",\n       \"wvHD7rFP6MdmSgt945A1n6WIXf348WMS/HmXZrOZ6b6IXU/EiJ3TTqFVt9vNsYOaEVw53U/BgB3O\\n\",\n       
\"8noVR2o0GuVklGkTHBcizojIKoKyd0TEzilztRUbzCgEShBhxOiAjjl9BDLmlKCdOlAl7m+kh0ox\\n\",\n       \"IjpHCfA6rMS5Hh8f4/r6OkajUQwGg/jw4UOOgQZ2bH6X82L0cTKZT3coJvdMaoL7YvAwinxGrvzw\\n\",\n       \"8LAyT8w74yKv7Bw9qZiS74JTgbFjk3udWHfn//f392M0GqUid0oQ5OvhYXuwMohAxFb4z8/P8+gN\\n\",\n       \"K2FgZqMXLlf2cUSkynhexBaVc1TI83C6eE/Sg4yfZqblkSVUQ/EujgQZEzC+2wpQsQTChpFkjPRK\\n\",\n       \"YZ95/DjROOpOKcBls6xHbB2u9+/fZ7PW4XCYn83n80z9oNjMa6CSjxSIy9hJT/L/Tr+jwIbDYeVI\\n\",\n       \"Kar8kBVX/vBdl6lTNcsaI+dl+tcVbziOlmF+T/NAo6TsQ9AVyxQO6GKxyCNKeA6OPfwbBxFHR0eJ\\n\",\n       \"hP5/7Z3bT2PZ0cWXaWMa2/h+AWygQd1z0+RlpEjzNA9R/uZIUf6GSEmUzGS6e7qbuzE2Ngcb8AU7\\n\",\n       \"D9avXMc9Tzx8LX3aS4pmJmBz9j5776pataq2NyaDwcA0Qt1uV/1+386Fp6cn5fN5NRoNc+wZsy+D\\n\",\n       \"Zw/SY4keTugJaTnB2IfDoTG0s9lM5XLZfu6ZQh9YsjdoLMzvw7qhWUGz49cw84PzcXt7a/uRWwzG\\n\",\n       \"40WDXpxHSbEeQVw+jWNCdSB7yTughUIhlkb1aT/YmGRy0SMtiiIbM+mgYrFoQS3znEqlTCPFmcLY\\n\",\n       \"fcUx5yOfQz8E00c7EtYte4Bn8sFnIrFoAAsryhnP+/dOIMwkc9Pr9azSHUJDWuojsSFeO8j7A163\\n\",\n       \"R2YJFsy/X9bEdDqN2SLWMA4NAZQ/izc2Nqwz+tPTU+yKGGwCJAu6Sc4yxlWpVGLNrjc2NhRFkW5u\\n\",\n       \"bixwl6S9vb3PslKr+GKOFHlOf8UGBoiXhiMlLfqerN7qLcU95dXGljhZXpzOP3E8+JxPbXEg0pTO\\n\",\n       \"pwYw+Gh9VoWqvHwOFBYbEdNkMjFD7e+J6vf7ur+/j/V/YnywB3jXUPGMhwjf66d4HqJrFqSfN3/I\\n\",\n       \"ekeKUmUcH1/q64Wk5N854DFoLHif10+n06pUKhaBemeL78EZ8NojhMsYPH8Y4wASBUlLkSMCxX6/\\n\",\n       \"H2OUeC6cDy/gZHz5fN7WWSaTMUMaRZEd3slkUvl83lgHjAA9oXzKyn8WJxvjKS30bL4Tt3fcYXJo\\n\",\n       \"DUG5tLQwfHd3d7q4uLCDnO9kbBy2vmCAKBedQavVsvFXKhVlMpmYTocIkkN/Op2qXC5bBM+eoS8Z\\n\",\n       \"zgfr4P7+3kTorH3ffyuKIpt33/bk9vbWUl69Xu+znmf05vHOFfOdSqW0u7urXC5ne9TvYd4Nh60v\\n\",\n       \"RAB8xjsJXkbgtV44egcHB3Z3IoYtiiIlEglz6DudTswpIogcjUax9D3P9vj4GOsIzb749OmTOfTd\\n\",\n       \"btfE2AQ5iURCh4eHtsZY+1yjkU6nY+lQuq/joNLzin2B0ebM8OkmdGX8fXpAScs+aQQ8RPjSIm3y\\n\",\n       \"4sULFYtFY128EwLTSfoHNpbg4Pb21hjVarVq84Yjvds4flYAABpXSURBVCojuLy8VKVSMQbEBwqM\\n\",\n       
\"zeuEGD8sm9fq8jlK6nkXJycn+vjxo+3RSqUSS2Ph1LF/vH3yDBD7jRYFPoD2bDfnLnaBdw0DuLa2\\n\",\n       \"FkuZkW2AdfdOCM4cbCw6QXp/ITHAxjGm9fV1W+f5fN6ehQantLhJJBK2Lkhdsjc8G4m8Zjpd3DTg\\n\",\n       \"2ejxeKybmxtdX19bcMI53Gw2VS6XrR/dN998Y/vm9PRUNzc36na7xjwyPjRssMypVCqWpfAp/t9D\\n\",\n       \"aH8QEBAQEBAQEPBMfBFGisgSz15apoyIeL2omiiBiNAzPXzOs0M+XUgEwb8TmZDaQH/gWS5y2TTm\\n\",\n       \"8+JQIlHYJ88e8D1EKz5iI/r1FKFPtSDApE2+pzHxzCmt9VE3bA7RrNcekGZBFLqaZ26323YPmk8p\\n\",\n       \"Ua7qU5Bem4KQ1bM4kqxBny8/ZbykJom+faqCVImvuOK7vfgZ9o3IgL9HOs6zVfP5XFEUxS7s9XMP\\n\",\n       \"lcuaIKLhxnjP4nhNQyKRUKlUMrbOp6g8k0ZkzdqA7vaVekTXzGsURapUKrHOz2gWSKnQDkFaUOOU\\n\",\n       \"aHe7XQ0GA0v7oVMj/env/4JV444wIjRpQWP7zxBtsmdYZ8Vi0VJCfr6ZcxgWaZkW4bn81Tlra2vW\\n\",\n       \"yRwdG2uYVDjVUIVCIaZr8lE7TLAvXoGlbTQaFmWz3tifpCt888LVFCP7Ah2FL/HnPdF4sNVq2dzC\\n\",\n       \"km5ubhrbBotyfn4uaVmZxtodDodWfp/JZFQsFjUYDCwdwfhgXa6urmxvsZ9I+VarVV1dXcWqDz2r\\n\",\n       \"SVqeNcn4SI3V63U7hyaTiQqFgqV90MOwvmHuYWN99TSpTtJw6KukpS04PT3V5eWlVU9Jy6tAjo+P\\n\",\n       \"bd/5DtbpdFrffvutcrmc7u7uLIVDSh0dW6PRsGqr6XSqVqulXq+n3d1da8DIGGBIYBR9yj2dTqvT\\n\",\n       \"6RhzzPh6vZ5ub2+VTqfNDrG3T05OYvqaRqNhFyhvbGzEpABek3RxcaHxeKxisWjsmdfs+AKp8Xhs\\n\",\n       \"ejJpUX3Z7/f19u1bTadT1Wo1NRoNSUu92uPjo7GDvvCB4hWfqeBzSAzQXPobQPh39LPsb9KjsLGk\\n\",\n       \"8JjTfD6vp6cnE3/zOa5SWltbU7/fj2lqT09PrZ0KFYOcw71ez4oLuIMVZml7e1vv3r0zFnK1dQxn\\n\",\n       \"P4wqa41Kep8hW0XC56P/r/CHP/xh7juKS/HOvb6qQVrePF0qleyaAg5pShlJAUBPSktdFNVgXliI\\n\",\n       \"ocO5QTAnKdbrhJQXC4VUIY6d19r4sUAN+2oPxoTjxBxQiVUoFFQoFGJ6Hi9EXK2k8BUHOAYYCyp3\\n\",\n       \"vHaMPjg8D/obRKS+vHY+X5TPYpg5UL2zhR7M38pN1RYb1lfYcXijWfFCVRYwug1f0dfpdKwvjK+M\\n\",\n       \"89oSxsq88Tv0+2Fc/AwHGI0RhxsVHVT7+ENwfX1dg8HAel15fRhOFWlRUr9Q6rxn5hCaXJJpPxD6\\n\",\n       \"stalxdUc3nm+u7uLafJwmK+urnR+fm4GAz0S2qDVi1RxJEajkY6Pj23d7O3tqV6vm0aCdA7PjbOO\\n\",\n       \"QN47SzjfpDh8FRWaqcFgELsf7MWLF7aGWKf+3V9cXCiKIkvdMAacUu/o+3TLcDjU1dWVXr58qaOj\\n\",\n       
\"o5hg26exMZZ+TpELSPFu2uPx8hoJAhMOV+4lo/LHH9TVatXaXuBA4GSVSiWrTuNqDap2EeJj3Lmv\\n\",\n       \"kDn1lc/VatXu70NTRnBRrVZjmlKq1Xgf3rn3GstEImHPyTlJawPmwwcuqwVEjB8ZA2lFr6nhwu52\\n\",\n       \"u22OEHuxVqspkUiYpsj37aLgYWtrS+VyWY1Gw/7e4+Ojzs7O9P79e1UqFX399ddmvElNzedzC1p5\\n\",\n       \"11EU6e7uzopkstls7Cqf4XCof/zjH2q1WlZJzbyVy2VLaW9ubprGdTgc2rUrs9nMzkVpmYJFD+xT\\n\",\n       \"R/72hVwuFxObJ5NJczbb7bY2NjaUTqdj/dem06lVVfOM0kLE7SvLCYwl2XdSiOXtF1WO8/ncbNT7\\n\",\n       \"9+/tc999951+/vlnnZ+f69WrVyYHmEwmVrFYrVZjmkNsGmelT7NeXV2ZNhDNH2NAZ8zZtrW1ZWlP\\n\",\n       \"bFK9XtfNzY1++eUX2/dc+5RMJk1Ty3qChFkV1EsLG/LixQu122396U9/+vwCS30hRmptbc0Wva/4\\n\",\n       \"Wl9fj/Ur8oyHL/30bAaHBRvcfweOEwI0aekI0HuD7/Flk2xuX0Hm8/ZeaLyqn2KBIub0wlm0PkQX\\n\",\n       \"fCdOXDqdNgbJN5dDI5FMJnV3d2cGmJyyv27Fl3Hzt7i80Y+D/49oCPEq33tzc6Pb21tzGLy+SFoc\\n\",\n       \"avl83iJDPnd8fKwoisxArGpO+BuUOktLUS0OnXeqiXIxGNLygKbBo2986ZvjTaeLcuZUKmVVSowB\\n\",\n       \"bRxrw98Ej1OLRsUzefx//X5f8/nyMlA0DGhyJpOJNfXk/Xujg26En2HEiIo51HC40On4y3ARB6dS\\n\",\n       \"KTUajZgmDQ0U7CG6BklWyQhz5LUwtVrN5gxhOIfpcDhUNptVFEX6+PGjXr58GdOlsI8ox/fsEUYf\\n\",\n       \"FpDnxCHDEfHaCwpMBoOBGVPYK94xjrkXpjJvu7u71vx1Z2fHnBgcvnw+/5kDRrk9/02jWNYG0Tdl\\n\",\n       \"8J5Rf3p60qtXr5RKpXR1dRWr6GT+0ZH4829nZ8cYG9hHSeawZTIZbW1txYpzuMQ5n8/bO2Psm5ub\\n\",\n       \"+uMf/2hOt69ikpZnA/uR8U0mE9NEnp2dmZPC2KlalWSVxexJzksKMWhPwDhms5lVl/p1ivaRJrj+\\n\",\n       \"XEAjx88RiEuyd3t+fq7T01P9+uuvtk8JKJvNpumT/HnCBbgEYN5phhm+vLy0fSVJ+/v7kmRViVzS\\n\",\n       \"zXeyBrPZrAqFgmq1mj1Lo9GwSt9+vx/TjsFKYavY59ij+Xyu6+vrGFtDtfZgMFCn0zG2knfDfqvV\\n\",\n       \"auakMEaCx2KxqFwuF6sQz2Qyarfbury8tHXGvPG7vrCJYI++VNgwX0jVbrf1888/K5FIaH9/X6lU\\n\",\n       \"KuacszZWK1ZhqllfPhNBQOIzUfysXq+r2+3qw4cP5tijGbu/v9fXX39tLVXIPDHfZJl8wCgt+l35\\n\",\n       \"c/f38EUcKV6sd5a8ANRHCR7Q8CxWaekskbrxgjWMCYccDSH5GVVLMFK8RDxvnomDQVpS6kRglO1K\\n\",\n       \"MuMLwzSbzWKfI6UAo8VhQrNMIkSfaqI6hMh0MpnYRsRRQ8BNVAuurq40GAxULpftQPKVPbAgPL+f\\n\",\n       
\"c++g0UZAWkZK9OfxY+RAvr6+Nq8e8JzMi4/6+S5SDr6ij8ibCAm2h7mRllU6ns0gDeK7gnu2CoaG\\n\",\n       \"6MynYPl9L+Zk3aZSKe3s7KhQKNidVdLSIYBBHA6Hlu6QluJvbj/P5/O2hjlcMHoIM/2cEmV6VgmB\\n\",\n       \"Pv+sVCqWFoJRXF9f/6xhIVE3hg/hvCS7ccALSZlv3gtj9OwDQk0ON290cRx8xZhvnOqDFtLYkqwn\\n\",\n       \"Tb1eV7lc1vHxsfWiqtfrxprgTOHA8s5SqZQxdDQ2lRbCcIKZ1QDOV1nCkHpDy7/Dhnr2m4o9qjc9\\n\",\n       \"fOd932NqlWH1UTKXa1NE441JsVg0doGzjLU4GAz04cMHOxP93/BVU2trazGGiRYG9GvyDBWGnBYO\\n\",\n       \"7C//7Gtra1ZtNhqNzEg9PDxY6s2LwKWFker1etrZ2dHu7m5M1sHav7i4ULvd1uHhYayaud/vx6oM\\n\",\n       \"WVPMCeyXvzOR85ssw/v37038vbW1pUajYa0OfLuYx8dH5XI57e3t2VlzcHAgadE24uLiQgcHB/Zc\\n\",\n       \"f/3rX+3db25uqlqtqlwuq1ar2edgx7BhVIIy9qenJ/373/9Wt9vVV199Zc7Z4+OjOVc4jY+Pj3rz\\n\",\n       \"5k1sbp6ennR5eRlLpZKi7Xa76nQ6+uqrr+w5j4+PLcCjnx7I5XJKp9M6Pz+3NeADpVarZa0/fIFP\\n\",\n       \"p9MxZom0N2xVpVIxSQupQtjITCajfr9ve2U8HltWhH2Gffad67e2tpTNZu08ffPmjZ0BOLG8y3w+\\n\",\n       \"b5/DtsDCUpAgLc6LQqHwWR8+jy/iSElLw7BaTUeE6L0/r0WRZAyUpFhUwMHBYYdhJQLCuPjPScvI\\n\",\n       \"1lfu+FSA/11K2HnOVCplC2NjY0O9Xk9RFJlh8uwBDgRXl3iWLZVKxQ5aTzn6hURuXIrrQGARfOqO\\n\",\n       \"AwWD4Z0byvFvbm60v79vz8ff9NV62Ww21tuH3+PfMV48697enu7v79XtdmPMCvOcyWRMTyDFW/6T\\n\",\n       \"5vOUMiwfhhNnAmPqy/R9+wMOKFJ0bAR6LvE8/A3WAhEO0aq/uBSnpVAoxFI7OBZoJXDQvd5HkqXD\\n\",\n       \"RqORbWK0X8wpjirrTZKxlb4ykxQpTCjODHOKgUGz45me8Xis09NTiyz5WbfbVaVSsaaD3qmCydjc\\n\",\n       \"3NSrV69ipeI4it4I+opc5pFUnq9mJXIkKmW+ifRpnVAul+1S4tFopHq9bqkftFnMDbokjGkymYyl\\n\",\n       \"p5PJZKwa1AcgrEVYUNY3jiLVXL7FA3uPYI10DOPHMPr+PqwpHEbmh/3ty+ZJA/n9S4n34+OjisWi\\n\",\n       \"ORn9fl9nZ2exNKgPaljz3lGSZC1SZrOZKpWK5vO5rW+MDAEk/+N7PfOKDsr/rFgs2lUng8HAzsft\\n\",\n       \"7W2bS3SrrCmY0mw2qw8fPkiSVYNxsflsNrMGioyfliewO9PpVH//+99tHD/88ENMK8Waurq6UrFY\\n\",\n       \"VLFY1DfffGPMBXNKehVHAaaD88obaJha9FS3t7eW5oLl2t3dNe3PbLbo1M/az+fzxhaxdxnP/f29\\n\",\n       \"vQ+qoX/99Vf961//kiS9efNGe3t7lm73QeRgMLCbMzqdjtrtdkw2AAt8e3uru7s7c8B++eUXTSaL\\n\",\n       
\"ZrTdbteaFUvLimQyF94BOzw81Pb2tq6vr+1KMt/nrNlsmmNN5oWfkeYmJbxaOV6v11UqlfTbb79Z\\n\",\n       \"Wvjdu3c6OjpSOp3WxcWFOp2OzTeMKHKN1bYzrP9EYtGTkO9EcuOD2FV8EY3Umzdv7I96r9azVJ4d\\n\",\n       \"yWazajabajabdhXKKiNFPtl/bnNzM0bT+V4eHGb09PG0OX8TI8LhIC2vOaGkFO9dikcY5IN9SwGM\\n\",\n       \"KekkL+QkDUa6yIvicQRJOWH0cbwwpOSbpeXhiWCYnileIEg39GKxqHK5bAc/hobfY76AT9PA4PA5\\n\",\n       \"5nI2m8Uas2HwPdPCd2PQ6NQNCyjJ8tnj8ViVSiUmusTB4iDzuhxvCH27BsbkmcHZbBZrrocjiYbA\\n\",\n       \"dzDmbzDnvj8LzifsTyaTsUODd8PcIZDnXfnrUNDZSDJDAgNA2llaOrw4qYPBwETMk8lEzWbTnHDK\\n\",\n       \"haVFzn82m+nk5ESFQkHFYtHGkU6ndXR0ZEGIF2JDfTMOr4GDcUokEsZUcngzxxyI6+vrsdvacZIz\\n\",\n       \"mUyssaJPAxLt8g4vLy+Vz+dVKpVMlI5j7PcbLC3sjbRwGCjEYJ/jMGD4WNe+mSNGiBQsncj9+ycl\\n\",\n       \"6tPtOMY+bcm6YO1ykPvmqDC1fD6bzZp+6vr6WtVq1fYbjLUkffz4Uf1+X5VKxXRABJ6sJVh6f/6j\\n\",\n       \"b/RzBDCQPgDxzhntOXq9nvWfYm2Q1vPCZu8s8844N3CW7u/vTRf38PCgdrtt5xeSBa+VI/2Gg8V7\\n\",\n       \"297etu+czWb67rvvNBgM9N///tfmhGeCASSI5NnQ5uKcE4Qyhul0qlwup3q9rul0ao2BOUvp7l6p\\n\",\n       \"VIxZgsWhaMQLuHk3tIB4//69sc3FYlHNZtMC0kKhoMfHx1ivLNjLFy9eGLsmLdiV+Xxua+ef//yn\\n\",\n       \"rUlauLBPYCiZ70KhoGQyqX6/b20kpMX9hTBLPvCQZFf4UDCw2sKhWq3GdHO+8IFiGPYSNmhjY8Ma\\n\",\n       \"vh4dHalWq9m8+X1Fip3zq1Ao6P7+XgcHB0qlUjo9Pf3MzpF9effunc13rVYzfe6PP/74u/m90P4g\\n\",\n       \"ICAgICAgIOCZ+GKpPR/1A9gGSpuJaGi5f3h4aBGBb8znNRt48ZKMHYCN8lQeeWUvOvNVe6RQoBzx\\n\",\n       \"2onI0WNR0isty/FzuVysek5adv1GaEyULS3ywVDl5H6JoHzFIZczetbJ642iKIoxPgjCqTxCayUt\\n\",\n       \"rzuRljQqgl4odbQOiBn5XaI92ir4KMIzUC9fvoyJJ2FbEEaSvoI59NoRvyYon+U9ep0KrRRWq0Bh\\n\",\n       \"fohMYG2kZSsGnseLi3lHjMVXdMG2sG589QxMBu+e7t6rVY+k9Gq1mr0rWC60G76qqdFoxNpf+Co6\\n\",\n       \"UlCsL0+pUyk1Go2UzWZjFW++ioW55DlhsHhnPsXu0zXMtW86+fT0ZCJjr+fyjANj5Ge+mSzr2N/R\\n\",\n       \"xz9hBoloi8WihsOhMUFeD8TckJaGWeTZ0UCORiO7fJWIljWQSCRUqVS0vb1tUXm32zX2qFarxdIU\\n\",\n       \"rEvSML6CbW1tzS7D9oyetGAc2dPZbNZ0UcwvImXSf+g20JewFhEC8956vZ4mk4mq1aqlzhib36/s\\n\",\n       
\"E9Youin+vmfj8vn8ZzIBXyVJVV+pVDJmivfOheykvP0NE7DniURCJycnxpx+//33ur+/V6vVMlaN\\n\",\n       \"ux0zmYwxmA8PD1ZJJi3L4/f39+09UdGYSqXUarUURZEymYwqlUosnba+vq5Wq2XXbXnRNZeHYzNW\\n\",\n       \"dZwHBweaTqemS5IWtms8HhvjynnIvMCc3tzcqFQqxdJl7AW0WrQbqdVqenx81MXFhYnVacgrLbRO\\n\",\n       \"19fXlmbE1krLNgYwvWRBpOXdjsxrJpOJMTU02iSteHZ2JmlRFLC3t6fvv//e9pIvwkBPxt737Ws6\\n\",\n       \"nY7K5bKKxaJubm7MBlEhy92kPo3smdS3b9/GtIpktWazmYrFog4PD40dbLfbKpVKdu+fP4Npm0Cb\\n\",\n       \"lL29PdtrnU4n1ij29/BFHCn0PF4czMv1OUsqRprNpnZ3d1WtVm2heR0Uh+Jq7x5+zu9jlKVlB2MO\\n\",\n       \"AwyOtDhsqPJCk4Oh9v13cHa8PgD4tAv/jfKfNKJPM/qOsqSpJFnPHp+DZuzoiXheUiSMi3nx6VNA\\n\",\n       \"moV0kn8en9pjzrxjiFPKeHylpbSk6P1GJP+NaI+KM0km2vYlyjwrjiTl8b7s2Au5MZReNM6cYcAw\\n\",\n       \"3qQg0bj5d0cVG04oTiZrBiMZRZFGo1HMsHuRsNeU8IwYGsbAoVGpVLS/v6/xeGwpKl9FRtsI//x8\\n\",\n       \"J3uI9UOKCoeX52Wt8E7b7bZevnypV69exQwiDgn7yd8diGH1zhxGyAv4STXxOa/V8gJ/aXkZKul8\\n\",\n       \"r9sh2MBh8p/jviz0UxRvMP7NzU29fv3aCjF8FSHp+XQ6reFwqPl8bmuYzusIWqVlpWCz2bR0DC0r\\n\",\n       \"/NrAseVZ/ZnB2m00GrHziTmjapYUGWufefC6KRBFUawTOs56oVDQTz/9ZOl8f0m0tLy5IZvNmjMl\\n\",\n       \"LYwsvXS8/k1apso5f+hkzrPidBDclEolGz9BK+ezD5IfHh40HA5N3Hx9fW3iYAqESH1vb2/bM83n\\n\",\n       \"c/X7fX369En/+c9/lMlk9Oc//1mS9O233+pvf/ubvdurqyt7lv39fRO8U8HH/F1fX6vX61ngQQWY\\n\",\n       \"tDijRqORbm5uLDDlO0ulkg4ODpTL5ayS0t8GMJlMtL+/bxd38zPGiXgcHZYk7ezsmGHf2dlRs9mM\\n\",\n       \"Vd1GUWTONSloNGKlUklRFOnt27f68OGDisWiOaC0wuh2u/rLX/6iy8vLWNsM0Ov1VCqVLA25vr6u\\n\",\n       \"Xq9nleOz2Ux7e3uSZPo8HHgf4GF/0Ce/fv1ax8fHkqTffvtN+/v7pv3LZDIW0JBibbVadmOCdzLv\\n\",\n       \"7u6so/unT5/sZ1QSl8tl9fv92B6tVCrWfoKqeR+UTyYTs0tessJND6tBvscXcaQwAD6CRI+Bx5/P\\n\",\n       \"5+0FHx0dqVQqWdWbP/gwOl547V+i1wBgcKRlybkXY3v2yH+nd14kmcYDLYuvvmLRUEHkI3iqmWA6\\n\",\n       \"vBeNsUBfgdHMZrNmFEejkW1saeFhJxIJE/H5y2Dp6USeH20KefT19XVjD+7u7jQcDs1xLRQKiqLI\\n\",\n       \"hMVedOpF1Gg3mBsqZdDVJJNJE/pxsSq9YbiZW5IZNd+rZJV1KxaLarVaJnjk/cIgrYp42WgwEl4D\\n\",\n       
\"x5hon4DWSlocamhn6FPCoe/XmL8+iDFgfDDCvE/mBp1QOp02po/P9no90xkkk0ljOemngnbL9y3z\\n\",\n       \"wmzGxvNQHcPfZQ74e+gO9/f3zdBJS90Za9M7WRSH4LzxvKxhHL7VfegLQFj/vsEtBpZ97VkO1jzP\\n\",\n       \"7PuxPT0t7q/b2tqKNQCVZD2ZqIJj7nlWxjAajWJCUuZ3OBzq+Pg4psekMo3rb3BUWIu0i1htYYEm\\n\",\n       \"slarWSNLH3TBcMFqegaPv1Mul62VA2ux3W7b3+eqDJ6F99bv91WtVmPsJ3osKX4mSkstkGd7JVnA\\n\",\n       \"gGPD//gs+2w0GlkLBOaUwJnz2/dE4qwrlUqaz+dqNpuxYJerbjxjydpgDb5+/VrpdNrE3/P53Kq1\\n\",\n       \"YLN5/lwup9FopNPTU9uLvmGptHBwGo2G5vO5vfv5fG6OPWcgWj5fsIF2lv2Lxomg0velQ7TuGW/W\\n\",\n       \"fq1Ws7ONd4GTcXJyonq9bvde4qQwN9wZyN+cTqfmvO3s7GgwGKjdbhvb46uSh8Oh6Sa73a6tE/RT\\n\",\n       \"0rKdBU7W0dGRstms2RyaB/OdvEscSeaNZsJeowiZgQOcTCb1ww8/6OzszMb/8PBgVc8EugQY+Xze\\n\",\n       \"An2cTMbHuQNp4jWeBPIEXv4sqVarsV57v4cvIjYPCAgICAgICPj/gCA2DwgICAgICAh4JoIjFRAQ\\n\",\n       \"EBAQEBDwTARHKiAgICAgICDgmQiOVEBAQEBAQEDAMxEcqYCAgICAgICAZyI4UgEBAQEBAQEBz0Rw\\n\",\n       \"pAICAgICAgICnongSAUEBAQEBAQEPBPBkQoICAgICAgIeCaCIxUQEBAQEBAQ8EwERyogICAgICAg\\n\",\n       \"4JkIjlRAQEBAQEBAwDMRHKmAgICAgICAgGciOFIBAQEBAQEBAc9EcKQCAgICAgICAp6J4EgFBAQE\\n\",\n       \"BAQEBDwTwZEKCAgICAgICHgmgiMVEBAQEBAQEPBMBEcqICAgICAgIOCZCI5UQEBAQEBAQMAz8T90\\n\",\n       \"n59+FodZjgAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x112797f50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# Load the net, list its data and params, and filter an example image.\\n\",\n    \"caffe.set_mode_cpu()\\n\",\n    \"net = caffe.Net('net_surgery/conv.prototxt', caffe.TEST)\\n\",\n    \"print(\\\"blobs {}\\\\nparams {}\\\".format(net.blobs.keys(), net.params.keys()))\\n\",\n    \"\\n\",\n    \"# load image and prepare as a single input batch for Caffe\\n\",\n    \"im = np.array(Image.open('images/cat_gray.jpg'))\\n\",\n    \"plt.title(\\\"original image\\\")\\n\",\n    \"plt.imshow(im)\\n\",\n    \"plt.axis('off')\\n\",\n    \"\\n\",\n  
  \"im_input = im[np.newaxis, np.newaxis, :, :]\\n\",\n    \"net.blobs['data'].reshape(*im_input.shape)\\n\",\n    \"net.blobs['data'].data[...] = im_input\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The convolution weights are initialized from Gaussian noise while the biases are initialized to zero. These random filters give output somewhat like edge detections.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAicAAACbCAYAAAC5xzv6AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvVuMbVl2pvWvfb/FjkueW568VN5dXSUbl4sHbBBYbYRK\\n\",\n       \"jRqEJW7qfkD90MItN4gGgQC3QHYJiwdejJFfcNvgRtBuaBAPyA9gt5FBcrnc1bbLVemqPFmZlZdz\\n\",\n       \"TuaJc+KybxH7sniI8839rxFrx4lMU7mjKveQQhGx97rMNeeYY/zjH2POleV5ro1sZCMb2chGNrKR\\n\",\n       \"qyKVdTdgIxvZyEY2spGNbMRlA042spGNbGQjG9nIlZINONnIRjaykY1sZCNXSjbgZCMb2chGNrKR\\n\",\n       \"jVwp2YCTjWxkIxvZyEY2cqVkA042spGNbGQjG9nIlZJPDTjJsuyHsiz7x1mWHWVZ9jezLPuVLMt+\\n\",\n       \"7vF3P5ll2TvrbuNGNvJxZKPbG/lBlY1uf3rlUwNOJP2Hkv6vPM/7eZ7/13me/0ye518uOzDLsrey\\n\",\n       \"LPuL36uGZFn2lSzLXsmy7KUsy/4wfLeXZdn/mmXZ4HE7/s3vURv+8yzLfuOqXm8jH0m+X3T7Z7Ms\\n\",\n       \"+2qWZZMsy37te9iGjW7/4MiV1+0syxpZlv3q4/sfZVn2tSzLvvQ9asOnRrc/TeDkM5K+ccljc0nZ\\n\",\n       \"x7lJ9lgu+L4u6fk8z9+Q9EVJfxgO+W8kTSTdkPRXJP1KlmWf+zht2cinRr5fdPs9Sb8g6e98nPtv\\n\",\n       \"5FMp3w+6XZP0XUn/bJ7nfUk/J+k3syz7zMdpy0YeS57nP/A/kn5b0kzSWNKRpFcl/bqkX3j8/U9K\\n\",\n       \"eufx378haS5pJOlY0n/w+PN/StL/K+mRpH8s6Z+z6/9DSV+W9P88Pu+lC9ryBUm//fjv/1LSz9h3\\n\",\n       \"XUknkl6xz/47Sb+44lqZzibCW5LuPz62H5/Jjn9L0k9J+tLj+5w+fsav2XP8oqTfl3Qo6X+TtPtx\\n\",\n       \"r7f52ej2iuN+QdKvPeG5Nrr9Kf/5ftRtO/6PJP0rG93+c4z/uhvwCSr670j6a/b/r0n6+bIBlPQd\\n\",\n       
\"SX/R/n9G0gNJX3r8/z//+P+nTDnekvQXdMZG1Uru/289niDDxxPhkaTp40n3UGcRwhckDcN5f0vS\\n\",\n       \"/77imf6apG9LekFnwOZ/kfTfX6CU6bkk/Wcca9//Q0nvSvqcpI6k/1nSb3zc621+NrqNbofjv6wn\\n\",\n       \"g5ONbm9+vu90+/E5N3UGqF5b8Uwb3b7Ez6cprSOdp/wuSwH+VUn/R57nvyVJeZ7/n5K+KulffPx9\\n\",\n       \"LunX8zz/Zp7nizzPZ/ECeZ7/ep7nuzqjA39c0j8h6ev5WS51L8/ztyX1dKb0LseStla0669I+q/y\\n\",\n       \"PH8rz/OhpP9Y0r+RZdllxjXT+efPdaao38jzfCTpb0v61y6iO59wvY18cnLVdbtwyiXatdHtjSDf\\n\",\n       \"N7r9OP3zPzy+7rdWtGuj25eQTxs4uYxRLJPPSPpXsyx7xI+kf1rSLTtmZdX44yLXgyzLDiT9hM6Q\\n\",\n       \"7uuSfujx9f7dx4cOJPXD6ds6Ayhl8rQknxzf1Vn+8+blHqtU/Dm+K6ku6dqf43ob+WTkqut24bRL\\n\",\n       \"tGuj2xtBvi90+zG4+A2d1Qz+7AXt2uj2JaS27gasWVYpffz8uzqjyf76x7iW8jx/KGkny7J/XdJP\\n\",\n       \"5nn+M1mW/QNJv5zn+W/bod+SVMuy7JX8rPBKeozUV1z6fZ1Rg8jzOsvR3pf0rM4oPklSlmVVSdcv\\n\",\n       \"0d7nw99TnVGhw495vY2sR66abl/qeiYb3d7IKrlyuv2YpfhVnenNX8rzfH7BPTe6fQn5tDEnWfh7\\n\",\n       \"VQR3X9LL9v/flfSXsyz7F7Isq2ZZ1nq8xv6ZFddeJf+kpH/0+O8vKKxmeEzx/QNJP59lWSfLsn9G\\n\",\n       \"0l/WGRovk/9R0r+XZdkLWZb1JP0Xkv6nPM8XOgM6rSzL/tJjqvHnJDXt3HuSXgjUXybpr2ZZ9hey\\n\",\n       \"LOtI+nlJfz8/S05+nOtt5JOTK63b0plhzLKspbOgqJplWfOxsSyTjW5vBLnyui3pVyR9VtK/lOf5\\n\",\n       \"yROut9HtS8inDZzk4e/4P/KLkn7uMXX3t/I8f1fSvyzpP5H0gc4Q+b+vomJfBoH+mKR/lGXZU5Jm\\n\",\n       \"eZ4flhzzNyS1H9/n70r6t/M8/+aK6/0dnQGX/1vSmzor2PqbkvT42n9D0n+rs2KpgYrU399//Hs/\\n\",\n       \"y7Kv2jP8hs4q4u9Kakj6d/4c19vIJyffD7r9t3Wmo/+RzuoBxpL+0xXX2+j2RpArrduPlwz/dZ2x\\n\",\n       \"3PeyLDt+/LNqj6qNbl9CsscVuxvZiLIs+x2d0aCbfSg28gMlG93eyA+q/KDq9qeNOdnIk+XK0Xsb\\n\",\n       \"2cj/T7LR7Y38oMoPnG5vwMlGomyotI38oMpGtzfygyo/cLq9SetsZCMb2chGNrKRKyVrWUr85S9/\\n\",\n       \"OZekCIyyLEuf+Xfz+Vzz+VxZlomi4tlspkqlcu68LMs0ny9XcVWrVVUqFdXrdVWr1XTtxWKRrn1y\\n\",\n       \"cqLpdKrT01Odnp5qPp+rUqmo0+mo3+8ryzLNZrN0D87ld6VSUa227Eramee5ptNpOrder6efarWq\\n\",\n       \"PM81m83SPfM8T8/Eb4R7rRKes1KpFPpoNBrp8PBQh4eHGo1GmkwmkqRWq6V+v6+dnR11u13V63W1\\n\",\n       
\"Wi3V63VlWabFYpF+8jzXfD5Pz5HneXrGWq2m2WymyWSiyWSSnqNarardbqvVaqlWq6Wf2H+z2UyD\\n\",\n       \"wUCDwUCnp6eqVCpqNBrq9/tqNBqF56ZPFotF6jPuxzP/0i/90lrpzV/+5V9+Itr39kapVCpPHOt4\\n\",\n       \"jYuud9H5ktKceNI94/3iuWVz+Xsh3g7mELJYLAr9x/9IpVJJduSjSBwTtzVl7XKhjd622K4n3Q/5\\n\",\n       \"2Z/92bXp9q/+6q/m2LQyvfPvsixTtVpN8x374cKx3pd+jfl8rsViURgvPwZ7Uq/Xk82bz+eaTqea\\n\",\n       \"TqfJ5ktKfR6FNmHv8jxXs9lUrVZLdm48Hms2m6XjKpWKqtVqwZfQJv/b27xKoj9x27m3t6ednZ30\\n\",\n       \"HHmeazKZJBt+enoqScln4TvcB/A37XBbmee5Wq1W8hvIZDLRdDot9L/7JMa13++n/qXfR6NRei7v\\n\",\n       \"78ViUdCLarWa/pakn/7pny7tpLWAE1dIHsIH1X/TQThHHKJUBAeu6NVqNSkeA7JYLJIiRwDkk4iO\\n\",\n       \"5LjJZKJKpaLZbJaMigMAfrvB4brT6VSTySQpXLVaLYAkFMCvw/O4cG2fqP4MgKdqtZqO5X7dblfT\\n\",\n       \"6VTHx8eFezl4wtEvFgu1223V6/UCOKH/mcDcs1qtponLM/mz0GfxWfgNMGPSTafTdJ2Tk5PC9eKk\\n\",\n       \"bzQaaVLw+6rIk8DCRd9F57nqWv4Zf18W2JSdf9G5ZQ6pXq8XxiuOzyq5LJC66LndgWHoIjBxo4qU\\n\",\n       \"AZPLtCeCCZ8HZeevumbZXF91P5ePMq7fK8EeRr0ps9m1Wq0wZ7FLZcdK520ZgRzAbjqdnutvArvF\\n\",\n       \"YqFGo5HscrvdVqdztq0HNoa//VkcmEhLH9BoNJL/wLbQ/2778RMuEZSUjRuflQE7+rjZPFvpe3p6\\n\",\n       \"qmazmYJnAl138LPZTPV6PbWTfvdAzn0e7SSoZKzoW9oT/Ztfl/H0gBj/io9+EhB3gLdK1gZOVhkz\\n\",\n       \"V1SUCCVzpFn2wN6pDlBwgigyqDse32g0JJ05xvl8npwm1+ZcQI63l78BOKenpwUk2W63VavVkqJE\\n\",\n       \"R889eD4HMPzmud1ReLQSjW+WZUmpa7Wams1mgb0BOTP5K5WKTk5OzvV9jHrimM1mszQ5nOHiWb1f\\n\",\n       \"Yl8xrovFIhmDarWqk5OTwiSj78oM+3Q6XRkdfdKyKk16Wad8GWCyShys87dfo8xYOuux6l7+OYbp\\n\",\n       \"9PRUW1tbOjw8TON8ERtQdq3LHFfWB2VO0Z8FvYlOpOy8y/Y38yu2LwYL8RnLQJIf/yRgcxWASZmU\\n\",\n       \"AVYPPNzxuO2SljrkOkp/wELX63U1Gg01Gg2dnJzo9PRUs9ksARJ3qOieO7xKpaLT09Nzjpd7u45I\\n\",\n       \"Smxtp9PRcDhMPsCBQ5n9KdMld9jeXxEAcGxZH2APZ7OZxuNxsovYSreLbi+jTjMePkawHwBAAnhn\\n\",\n       \"x+O8oG3z+TyBwQjEvH1+Lvf38XeGfpWsDZz4gJYZT0fdPKArFY4Qx9hoNAoUn6QU2U8mk5Q6kKRO\\n\",\n       \"p6Otra10DgKFxjVRjizLEtXnk4BJ522ez+caj8c6ODjQu+++qzzPde3aNV2/fl27u7vn0iX+TGXU\\n\",\n       
\"cewvZy3cOPj1XHmhODudTkqz0KeuhDAPtMEpc48cmCTev0wg2A4HElGBXQBP3q/uaGIqD8PlkYBH\\n\",\n       \"FFdB4qS+rJQ522j8Vzkqp7WlYpRYq9XOgcuyNrtReVLKIcsytVotjcdjtdttNZvNRAf/eaUsOl/V\\n\",\n       \"5pjCicwg7fUo8qLrMqcjOPC+LRunVePs7Yu2bdVzl9nCqwBQyvTanRbHEM27vfbghPnbarVSSt0D\\n\",\n       \"T/qZQKrVaqnT6Wgymej4+Lgw7gQ3BCcEYNgp/IPbGA/wuFej0VCn09He3p729va0u7uru3fv6lvf\\n\",\n       \"+lYBxHiKxO1wTJdIRV2LrLrP5/l8nnwLwIE+nE6nqX9w6O7geVaCT+4V2X8HAbBSLm4jYGHo0zKW\\n\",\n       \"CyAzHo/TfX3uR9CBXjAGjUaj1B9EuVLb18fJyISI1BPoF6Wez+caDAZqNBqpbgKFAlkS7ZFGWCwW\\n\",\n       \"6vf7Ojk5SU6PHwbdmQRH85ERoO0MynQ61d27d3V8fKw7d+6o1Wrptdde087OTjrfjZ1UrKvxSYRE\\n\",\n       \"J1+WYuI6brABE51OJ01CFB82yettOp1Oaluk3SKIoH8iwPJnojYIVio6UcaI7xnvmNPlbyYq+kBE\\n\",\n       \"tSpCXaesAiqr2IB4jn+HnkfnBT3rn7luODApc3CrAoVVMpvNUkqtVqtpMBjorbfe0quvvprSPE8C\\n\",\n       \"Q2XyJPamTFzPpdWgCp2LYCY+r9sc7AzzJ9okjnlSn5UBi4vGn+8Z2wgA1iVRT8qYImcxnBXlWBwZ\\n\",\n       \"gQ/znDmM7s7n85TSkJapFmdxZ7NZsj1+H2cYCJy4p1RkNTwYqtfr2tra0u7urgaDga5du6Z+v6+v\\n\",\n       \"fvWrOj09TYGXAxO3ac7oRJ3it/cZ/9NWB0C03dPtMWW/vb2tWq2m6XSqo6Mj1Wq1VHsSWQq/N+1E\\n\",\n       \"t/BZHijzTA5u6DcPDGkn13Uf7deJ9gUfi/1eJWsBJ2URR9mAOgBAkRytukIvFosEOnB2gJp2u63h\\n\",\n       \"cJgAyng8LkTmlUpF3W5XnU5H9Xo9gRMv4qxUKmo2m+doMB9Q/j88PNR8Ptd7772XFPj1119XpVLR\\n\",\n       \"j/7oj6rVap1jI4j+YW98Mrlxc9Dlfed0JROW8+r1umq1WipO5RiM9sHBQUpB7e7upmO8FofnRdEA\\n\",\n       \"gEyK2Fan1RlTV+qYMuK5pSU4WZXC4nz6bTwep/OugpQ5H5fLOOA47tF5oSuec45swpPuA7BBd570\\n\",\n       \"LO5QpLMxxaB/61vf0o/8yI/o3r1758ahjBW8yDnH+z/pWdyOxGfxttBni8UyP17WRgciRJVlrN+T\\n\",\n       \"2hgDDI6Nx7lee/uvCjApk7JncGePrXb7TP8zt3k2D0QAGpPJRNVqVVtbW0nvtra2ku12Z0ywhW5i\\n\",\n       \"v6QlYI3sh7c7z3P1+33t7u7qzp076vV6Go1G2t3d1WuvvaY/+7M/03Q6PcfoxYJsrue/yySCOMTZ\\n\",\n       \"FQcMkgoByNbWllqtllqtVlqEcHx8nOYzz4ofAQB6mguB8fB+cWDjwMnH1ot30XPaXMZ+wtjAiNVq\\n\",\n       \"tQS6VsnarPmqCer/0wH+PYpdrVbVarWSg3MmAjbAacTt7W0dHBxoPB4rz3MNBgPl+VktCJG9MwMY\\n\",\n       
\"fgdHeb4szooUtg/u0dGRPvzwQ12/fr3Qtq997Wvq9/v63Oc+l5AmE9cLQF0hHAVHZfYJ6MoelcfT\\n\",\n       \"IdKyGKler+v4+Dj9gIxZJZPnZ8VZAMHJZJKcGSxVZLicQncj5Mg6gjoHOzxPjGzoF66d52eV9JPJ\\n\",\n       \"pJBvvQqyiv34KOITvuxaDhgwzBgmJj46EZ0wv8uKBS9qN846y7IEZmu1mkajkV599VX96Z/+qV54\\n\",\n       \"4QWNx+NzLAXyUZ1tBFyrjnHdiM/qc4h+jaDM+9vBAde5KMXyUcbWxy2yZdgDTx84wL8K4kDK7ZMH\\n\",\n       \"E94fMZCEiYYNKWOhOe/k5CQtSCAVlOd5Ykyk5Xh3u10tFot0DHaGa/mqFPwCz1Ov13Xjxg29++67\\n\",\n       \"6vf76ZkePHigl19+WScnJ/r617+eGENJKYD1e5WB8LL+879dD8rAidtM2I56va5er5f+h0Vqt9vn\\n\",\n       \"0usANoABLHNkEGMaKjInDsRoN230AFM6X/vJMVmWJQbfGaxVsjZwsspI8dBx6arTS16Y2uv1VKlU\\n\",\n       \"CstxpSV9RwdDhTmDMhqNCoWwDBJ1DL6s2BXaC7OkInXGtZ3+ZnBarZZ+53d+R9vb27px40YadBSG\\n\",\n       \"NsQIRFqmfWLNhRsCn3z+nSs9AmKdTqcaDoc6PDxUp9PRaDTScDhMFP3Ozk6KMukPnE+r1Up9FyMU\\n\",\n       \"V2D6gHbHHG1kg5xyjODQQd14PNZwOLwy9SZl8iQW5Unn+AQuG/PIapQxIe6cy6K9J4m3oVar6fj4\\n\",\n       \"OBW08dnJyYlee+013blzR7du3UqRZtRJb0vZfSKYiueW9eWqzz0d48EEn3n9lKfDykBVbIvfO0Z/\\n\",\n       \"0dCXtdXHN9a5OO1+VdKVcT6uAiZxoYGk1M8EjNgRZ6W5hzu6xWKhg4ODlN7xAAfWhBWJsGHScixd\\n\",\n       \"19BFZ2l4DtrMKh/sb7fb1f7+vl555RV98MEHOjo6KqT5ELdJDty2trZUqVRSAOX9dhkh0KDfZrNZ\\n\",\n       \"WlgR9YVAsd1uK8uylLL3gBOGibmAzXCGJj6b+1//oZbRAX0ZuOY3qRxA6WX6YW1pnbLIAeXE+Xva\\n\",\n       \"xsEJBbCgQtIxdLIPihubfr+vyWSiBw8eFFAvUTv0GPdm8HyZFuKpE58Ep6enGg6HiUrziKDVamk6\\n\",\n       \"neoP//AP9VM/9VPn2JgITBBWDh0dHanRaKTalWgovD9daWjrycmJ6vV6YovI7boBBOgx2Q8PD9Ne\\n\",\n       \"Lyg97WWyMOl9nBAin0j3emU99CMKHp2rK7IDwNFolIAg7MtVk4/D5jggpT8w5mV7zUhFJ4lOO+D1\\n\",\n       \"65RR2xe11Z02UZh/x/UfPXqk3d3d0rlS1h/cNxbVMedie1a1z/UdXYxRnt8vgodY4M6xq+7nNmVV\\n\",\n       \"SuxJUSHXdwaRz914XwVggkR2QzrvgMrASWREAa5ZliW75Owv18uys4J59AqbBLCsVCppXxIvJPW2\\n\",\n       \"OcDzFIQX7WOXHWQ6ADo8PNQP//AP6/d+7/cKwTHX9lU93BeGoNlsqtPpaH9//9z2CNzHWTmENDwB\\n\",\n       \"7uHh2bsGm81mYkAkpS0Y6NsI0PCVnkWgX13oE/ezPgdIoTN2+BS3KVy7bCybzWYqZ/Ag5KLAcq1L\\n\",\n       
\"iVehJx7cnbsPIP+jcKDySuVsVUpc0uQFkxQ8eWEQyBy2hOO5NgPEPWO+2qvOAREuXItamLfeekvv\\n\",\n       \"vPOOXnrppcIyZ681QbIs02Qy0cHBQapYn06nunXr1rlVNyhEBGW+YinPc21tbaW86nA4lHRWdNbt\\n\",\n       \"dgt0IaCIPCGrMlBO9nBptVrnDKwXvcYo3pUVwxLzm4yBF5pxzOnpqQaDQUoz4bivas3JR2FPykA7\\n\",\n       \"fTmfz9P+ItFpAdiazWZhXwhnL8qcs9/Hj4/HLBYLNZtNjUajwjgxF/is0WjoO9/5jl599dW04V98\\n\",\n       \"Nn9GB/+NRiPNhdimi8TBEPoSlyo6EPJVYJGd8H5dBS4uSvEgzPkI2Mv6N4I5b3ds/7olOmHXmfgc\\n\",\n       \"HIPNpJ/ZyoDjDw8Pk+PmGH9e3wQNB+k2we8LK+sRvKTkVNEL199Op5PASWTJmUfb29u6fv267t+/\\n\",\n       \"X7DRzqxH0Oo1H1tbW6XLoMv6F/vY6/XUarUkSe12W6PRSJ1OJ+ksAdpkMkmZBEAHARvp12azmZ6b\\n\",\n       \"tK+3x5/BWVZp6fdIZVHbSb8yn/Cl/DgAAqh5H/sqrTJZqzWPhqfMCDnIkJSQNx2G8wbZbW1taW9v\\n\",\n       \"L6FbV1KQGh2HAjlz4jUidKSnI/icamPoOtqJQfYVC6BZihdbrZa+8pWvpHbGCNcVnPMpaJrNZikF\\n\",\n       \"s729nZzCRVEW4Obg4CBVd3e73QSMqtWz3Vy3t7fTzn/0qRfH8hyHh4eFfQdA0J7S8bFzOpDnYh8B\\n\",\n       \"IgxQPUYBgAIThbHnc3ak9Wr8i4qrPilZxWRdFqCUHeOpEXbR9XtFiWm9+Pmqc5/UvhhdMlc85ZPn\\n\",\n       \"uV566SUdHx8X5k+kvLlfTK3Ezy4q1uX7sr99eaWPCY7Ha9T8ntHJuETgclG7uGYZExjnt9uKSKl7\\n\",\n       \"361TygJJt3s+z6MNkJbzE6aKbRxwjMfHxynY8Wt3Oh21Wi2dnp6mFIdUXDKLzYTNRTxlBiPMs/h4\\n\",\n       \"++IDxNkZ2JMvfvGL+t3f/d0CsF0F5ofDoba2tpIPIvijPs4Z0LJgIcuytLqStpElaDabyR/AZE8m\\n\",\n       \"k7RNBM/M3PQlvzGthb67/+Bzzxigf/gAgnEHNwQYgBDf0M6fId5/lVyNUNMEJXZnE42aU2mLxUKD\\n\",\n       \"wUCj0Uiz2Swhyd3d3YQ6XZmOj48LkbbXr0Sa2Vf8OICIjIu0rDRnJ9ZoVFBQPj8+Ptaf/Mmf6Id/\\n\",\n       \"+IcLhYw+mRBSOe+//356DlYE0RaYmbjXC0rHLrGj0UiDwUDtdlvHx8eJTen1enr++ecTc9JutxPQ\\n\",\n       \"8sIqKEKvDi+bpB6JxsjW0f1wONTp6WlaLeVGmPF0YOaOmijJXwmwbomsR/xcejLl71EM53q06sAc\\n\",\n       \"AbhBkT8JiFwkcQw5lw2x/BgvHsdYtdttPXr0SM8++2xiQlaxpW74ynS/DCz4/57Omk6nqQZqNBql\\n\",\n       \"+3r/cB8cV7wHz8hcjP3ubeBzZ17KWBX01schGu04Xv78H3X8vhcSwQbithEpq2vydFWlUkl7Lrl9\\n\",\n       \"9GthTyuVSmFvJt/XiPGmj5xhdJAEQKC9nuIAADjzTHs4D1A+GAz06quv6s0330w6X8ZM8kz7+/sp\\n\",\n       
\"JQ6r74XAnirx1D7PwHUbjYZ2d3dTkMozTadTHRwcJFs4HA4TewKj7PpJCkwqrsjxOci9OZ82tdvt\\n\",\n       \"1D+AJHyu22KCXeyTL8xwpsTLA1bJWpcSl004qDaQ7Sr6KyqxVyk3m820vNTpYwbWV/I4OHFjxiAA\\n\",\n       \"TkCMzgI4WvfNfvy+7mRQqkajoa2tLb3++uu6deuW+v1+ul803NKZcet0OnrqqafSUjqiDBRDWlJn\\n\",\n       \"bEGPOFW3WCz04MEDnZycpM1wFouF7t27p1deeSXlKD3qJf+JEd3e3k60oufH3WH4+nX2hwE48R4d\\n\",\n       \"H7/T09NEsUpKBV6+F4tPaqhPwIkX565TLkrp8PdFwEQq7o0AG8H5q4SUT5wzq0DRKsATgaY7UV/O\\n\",\n       \"yLnOJjIWJycneumll/Tw4cNzqbYyRwujiLgD53tvY+wrImCCBQASbWw0Gjo6OirQ+R51e99Sl+Wp\\n\",\n       \"hCjOEHrQMpvNEqiJ876MUYvpZ38mB5/rBiYuEUxI51dv8NsBXOxzfzaPvCWd00NAijN1HuC4TvC9\\n\",\n       \"38tTOlzfA1Lvf+w97cGpSmfg9/nnn9fdu3cLjtl/JBVekwJrwfJZxhVg1Gq1CiUBrh/cd3t7OwVp\\n\",\n       \"gJLpdKper1cAcWVsHs93cnKSGJdY08RvbDx6zSZwzI12u53OZ9556hI/DChxP8L96DPf22aVXLnt\\n\",\n       \"66UiQHGH51GEG9o4URgkf3g6BJoVFgCwUMY2oJwMJk7RC2ERFMyLr/xYR7A+Ed9++2392I/9WFJo\\n\",\n       \"ABGTxCu1O52OKpVK2s6Z1UZ8P5lMVK/XNR6P1ev1CuCD9h4cHKRnRrj3/v6+nn766TQGTq3TNz6x\\n\",\n       \"3QBHQwwy5jsKcaFyffURE86NuqN1trEm1+lRASm2uNvvumQVY/Jxo98IMhyo+GcOImKUGKN6b5tT\\n\",\n       \"u9L5FwF6VMw9ABOuV5JSpIkOHh8f69atW+eMbhlw8g3c+O3Ax9sICPD2NxqNwl4U7nRI/zEfvG/L\\n\",\n       \"AIAHKPRFBEURKHoBL4yi93e0d6tW4bi9uqoSmWGeDefjDGZkFaRioSo/Dgax7+48qesjHYy4j6Bt\\n\",\n       \"/oONQDexydPpNNXSRUDlLI37FnZFvn37tu7cuVNgDZwF85qmyWSSbC2+hPNgfTqdTgqsmSv1el3d\\n\",\n       \"ble3b9/Ww4cP0xJpQMRisUjpbhdf0eR9DuD2eeNz3/2cAyfPKMzn81SM66wR/ppnp46x3W4XFq64\\n\",\n       \"T8SGx/a7XLm0DsIDOzXkfzttRCdRiOObvDAIsTgny7ICzcY93Vj5oPG3G2RP/9Tr9UJxKYjfn4ff\\n\",\n       \"LL2sVqv69re/rdu3b+vll19ONTAMphsBT62gVBhjVu7AZjBZAQNHR0caj8e6d+9eyr060oZmRbiH\\n\",\n       \"52IdEKKYPE80PB7d+KQfj8eF/Uz8xVqknqD+nA2CeiUv7crtIPMqgJMyWQXGL5Pe4XykzJl6hBnv\\n\",\n       \"W+YAy66HlLUHZ4tRpWao2+1qPB4n44ue0vann35a+/v72traKgCmMom0fAw+vO/cAWCsvQg49h9t\\n\",\n       \"j8yaj4kDLP6OgRESGQ9vqwcnDq4cTHsA5e2MIOiqAZQYbTswkc4v+faatchaOePktRNeJ+Q6je45\\n\",\n       
\"W4E4YPe+5b5e4O/to/6j0WikGinpzC7B9jrIwOa88MILun//vg4ODs6x/FzfAwVYBHyXB7L4ljxf\\n\",\n       \"7inVbDa1u7ur559/XoeHh2l5sLOZ9F2z2Ux+h89cnOHwANzrSdwX4GM4lwDR0zP0ny9p9g00fczo\\n\",\n       \"ex8b/MOVBCceOV3EnjiSc5Tn1J2kROujBCA+SYU8FwrCeTi+sloFH7iYX3ZE7jQ6VB/sgkdSRAcY\\n\",\n       \"yXa7ndIZ3/72t/VDP/RDkooRWKQ8QbTOklDVTVThWzqPRiPN53MdHR3p9ddfT0oDWHNWhWv6+MS/\\n\",\n       \"HZzEfCbfuUGu1+upDZ5j9T5lzGjbYDAoGHtH+uRcPcp3pukqpHXKpAxQSE92Pm6I4rER2Di1G89d\\n\",\n       \"1aYyBiMpQdUAAAAgAElEQVRKHAvGuFKpaDQapT6fz+cJqKCzTmdftE21t8FX7EjLZdHoBzaA4IDi\\n\",\n       \"9gh+ot5GEH1Rf3hQ4Ncs+wyJKdnoqNwplPUvziPeOz7LuqQsRcX/rt8OJImSfb5zDvOYfndWzuc8\\n\",\n       \"OuWpg9gfDkZivwFwPGrnOrPZLNUlwmjgNwicSMXQvsFgoCzL9OKLL6b3taGzrj/cx5lt6kHm87n2\\n\",\n       \"9/fV7XZVrVbT3iWz2Uy9Xk97e3t68cUXdffuXXW73QKTT78yt2GkAQ8O+qL/igDK/Zr7VUBPzExE\\n\",\n       \"NorrkIp33XYw6j7c5wHB9CpZ+4v/yiI7JKZAYuEZD45BJO/u1LMXQ8WqbCJul4gIIyXujhgDjBFm\\n\",\n       \"mS+5716vp8FgUIg8fQK1221Np1O98847+vDDD7W7u5vaTsWztHQKTFhf3VKtni1T29nZSVvzHx8f\\n\",\n       \"K89zPXr0SKenp/rGN76RlNHBCG1xqg0nAOJ1YIiB8NwvCsh5fAfz1W63NRgMCikemB6vIXHnw1h5\\n\",\n       \"vtdRdqTLYc2uCnNykdO/bHrHDXd0WDGSLxO/x0XO5KJniMbeAwKPMom2+v1+Ai/V6tk+P+TLy+4f\\n\",\n       \"GSUciUdfnqpxB0kRNE4h2hFsA33HRl2rntX7w+/v93XmA/12QB7lMixIZFw8TepA6KqwKBH8+f+u\\n\",\n       \"n3yHfYm1fW4nqtVqAgkONKXivksXAcxoE/gd9Y3jvJ5jOBwWbJ37iUqlkuqouOZ8Ptf169eTrUfX\\n\",\n       \"CL4Wi0XaJsHv529Pr1arqT5jOp2mgPrll19Ws9nU3bt3Cyt1XAf5jX8jpRkDFPwshcduR5xJIUMB\\n\",\n       \"++gMPudFEBHBhjMiMDDebr837QJQrZK1bsLmNGyM5CJ1KC2LVBeLRVpKykP7Rl7QdSBSp8F8NYsb\\n\",\n       \"D1dkAIpUpOgioHJkyvlON2NonXImpcNkbLVaGgwGeuONN/TZz362sKKHIk/qSxaLRYpUccSOoKG4\\n\",\n       \"K5WK7t+/r4cPH+rtt99OBhXlYZK6YsGcOE3vRcAcByUZx8z3KaB+gD5sNpsaDAYFI+4Rtiu2KzXg\\n\",\n       \"j/YRdaAL/HaEfhXE9Tu26bLARFrSomUOq0w8aoq6LRXBwJPa5OeQOiMf7oC+VqsVGJTZbKatrS0d\\n\",\n       \"Hx+r0+no9PRUH374oW7cuJEcE2CWZ+FllDyDt6fMwWdZpt3dXR0cHBScSmy/Py+6VdZ/Hr0zTyI4\\n\",\n       
\"cFra+xsQ7f3vQUCZeFtjPU0ZEI39sA6JtvgiHYr97qwvgQnBCk5SWq6SJEjyKD8yBj4WHvn7/z5u\\n\",\n       \"nOPpNlZl+UoZT7VUKmepaGwiQPzk5ESj0Uif//zn9eabb6bnunfvXmJ5YPPzfPmqFBy8B7oA7Ha7\\n\",\n       \"rS9+8YvK81zD4TAxJqukVqulnWdJvWAn6QNP53hKB7DowTttJlB0hhq/6+Prfe/pe0/NegDt/hHm\\n\",\n       \"ijm58hlXfvMJSYzkfEBQSJ8QGKL5fJ5e0+479oEevXN9MxxJBQcdGRE+j4Y8IvM4MSJFyXf8Pjk5\\n\",\n       \"UafTObcklxc4feMb30jfZ1mmnZ0d9fv9VGtB5XSv11O32y0t4kUxxuOxPvzwQ925c6ewj4qkQkTo\\n\",\n       \"tB3KNBgMEmJ3JgLwEAvSJKV0kjuvGDWBzmezWWFVldO49JWPlwMo/5GWS5yvyjJil4sAyqrPOM/1\\n\",\n       \"CIYrpg1W3bNM3LhfxOqUtQ+DHR2In9/tdhMobbfbaVdh6PTnnnsuFW878OZ6MCtPYiCoc9nd3dWD\\n\",\n       \"Bw9KgUlZH1FM6cxLdKb8dqAgFanwstSaf+cAIwIWF783ztCvWcYCXDXxKNiBoAc9/p33PfOdBQpc\\n\",\n       \"z5lSaWmnfTxiH7s9iAFmBCnxtzPnvm0C1yWAOjk5KWzQSZp+b28v2avT09NUMEvw6YCYcwFlgB3p\\n\",\n       \"jEF/6aWXCqlRB3VefO7MPas+T09PE+jzVZA8j+vofD5Xt9st2EzGIII0fGmWZYXNHdFZZ7kARowL\\n\",\n       \"13f77cdh3y7S77W/ldjpprJjyj5zcACLQirFQQcTBcV3JOiV1g4wUEhvG2jPAYBPNraAZw8Vvuc8\\n\",\n       \"tnBn4rVaLY1Go5RzbLVaevDggb7+9a+nDdB2d3fV6/XUbreTIsxmM127dk1bW1tqt9vq9XqSzsDI\\n\",\n       \"1tZWymV+85vf1P3799Vut5OyQI26ktN/GODBYJB2t6W9zkr4JkYwI0xGJhrHepU2+xSwp8l4PE5M\\n\",\n       \"CjseSsV9ZRws+q690pItisDEt4e+KrJKj1exKjGl4MeUGWHpyamiaLQv2+YyQx/vhaFZLBapQHZv\\n\",\n       \"by/VGvV6veSEfI4DumDDKGr1wm8MPgWK3W5XzWZTH374YWqT0+QxWsaI+1b+0Un587pNkc7vRRJp\\n\",\n       \"aY4p6zsvGr9IcHS0fVVNyrqZE6l8pViZDvtxCEtZ3f4QWbte+1g4MPE+8L7y+7ueRlDiwAldhZ3w\\n\",\n       \"qJ40PIw2tvH09DTt8orNabVa6UWB7EUlKYEF7CXpTna13t7eTun8mzdv6vOf/7yOjo4KBcHurxys\\n\",\n       \"xr+r1WoqpPUVb85wRh9G4O7jQ0Eyxx4fH6vRaGh7e7uQPpKWQI7+xwZ4MXwZ0wJj5u27qE5wrW8l\\n\",\n       \"XmUsLzK2lUol0cV0pufb3LBG+imiT0eUfh6f8T8d65/TFpiAR48epTf7gmop2MMAAVLq9br6/X5a\\n\",\n       \"CsY1j46OUq7ywYMHevDgQWIQyNEeHBykXRO3t7fT81+/fl2SdOfOHd2/f1/dbje10YEJEz72P0rG\\n\",\n       \"ToP0qae3EGeh/D0PvnqJCnOcTa/X08HBgR49epQUlHvQTq7N2Hhxb3TMXggLEIzvi1i3lIGHyGJE\\n\",\n       
\"I48j9vyxg2K/9mVYkHj8ZQEN4gXftMVXnAAQK5WKHj16lPZtIFgYDoeFVCX39iDCgTBAHIDrugtw\\n\",\n       \"R38jHc08jlE8+ua1BJ4CKBsjaclq+BzlnAhCaKuzQ2UMTexzZ6ViWkhaOqCLVjV8EhIZ7chWxb5z\\n\",\n       \"ACcp7c+E7cLBe996MMkYx927nen2NFxkzOhDHyv0jF3EnSUhTeLMPLUg6OpoNNLOzo46nU56gSw6\\n\",\n       \"OZvNUl0TOsdqUFI8BwcHGo/Hab+Rev3sbchHR0eF9jq45n/3QV6r4/0FUPEl0PSXrw7ylBrP6nOT\\n\",\n       \"3ycnJxoMBtrZ2UnAif5kPBwwck3aHcGVA8wry5wgbpjjJEbhPXJgoOr1unZ3d1Wr1dIyWe9wp/Nw\\n\",\n       \"bjhfN3w4uGh8+Jx2uLLQPo/W7969q8FgkJwpqSYcNxR2o9FIK3qgC3n3Qa/XS89AAZazP7Az77//\\n\",\n       \"fkL2/X4/PQ/LbA8ODgrpK/rOWROcuhtxd4ZQmBRsuRIxoWNE7wW0gJc4eVDymHsFxERlZgLE8XGB\\n\",\n       \"smd8r7qsmowOWkiHRAcaj7/oeh+1HWWO06Mfj+59LwV0r1ar6fr166kYG0cN5Y3RIjXDZmdOfTOv\\n\",\n       \"/EVwXgzutDPHe+2VR8HofZkRjMEIDqqsb5yBuagfcTa+Gs1ZFq93KTPaZZ/Hvl+nRHbJ+5r/Oc7t\\n\",\n       \"t5/PuOLcPOXs58OqYBcYW16p4e2QVKh3iixJDFCr1eUW8JVKJTlfzqemDmACi02qGyDSbDZ1fHys\\n\",\n       \"nZ2dpMsAFrfZbGM/n8+1s7OTNjID9LRarXQe/YD+kGr3vsc2AERgaTxzEAGHVEwvwmDAeESw7+eR\\n\",\n       \"joUB9Wu5T3Cd8Pussk+XWVl5JWpOLkrpeNrB0THROIM1Ho8L18IwudP1LXMXi0Vh+2R+s7wJRfbv\\n\",\n       \"YtQonRklgAHtYldA0jYUUR0eHqpWq6U3THJtEDaK4G/JjHUfTPLBYJCQO9ElEUTsz7KoGwNAH52e\\n\",\n       \"nqrX66VJwcukKpVKWjkU2Rb6wHO3cZyYaDBGe3t7CaBgtGNeP0YKMfqJUZIX+l5FuYgddFDi4kWZ\\n\",\n       \"ZZFz2TVXOTc/J0bA3hb+xlnHdnlRrBt+xvDw8DBFkNKy4FBSMsLs3oxjYjwx1G64HRDx0kFJhcLa\\n\",\n       \"OObxfwcyZRE+fRuZJSQypav6lmNX1YysoubLzr+qgm0pAx9Ska3wc/gO0Opg1HXJbbczJw5Q2fnb\\n\",\n       \"ARu2OjpX7ulMMedi+6j/o8aCN8pLyxqn8Xic2Hr8xmg0Urvd1nA4TLuosq/U8fFx0mX07+HDh9rb\\n\",\n       \"20uF4tPpVDdu3NBkMkkpTKkYEABMYD1gL7AJ7veYi7G2kj7gu7Ixc9uLuC6T3nLgF1nQMvvlvjKC\\n\",\n       \"1jgPy+RK7hDrx0jFqI3vpOUab5TT9/Hw853aoqCTDkP5nSHxIs1IrcXrMYCgUBSY+1AIyq6trDBC\\n\",\n       \"ydnvBDDjRaJOczabzYTqfe07hpsUEbvIes6btnt9DUoMWzOfz/XUU0+lGhcYJU/vuDEo22/AgUlZ\\n\",\n       \"ZC6dpYF2dnbSpHIWqwykeiQR9aLsvldBLtJr5CL9jxP4spT+RY7T7xsdSAQpzlxR5MffjMd8Pk81\\n\",\n       
\"BLCDRJvHx8fJUPMZG0Wx/NLrBciBE7F2u920AohrA0x4TvonRuw8B6ADXY9sifetBygXjd2T+tf3\\n\",\n       \"ybjseCDObn6ce3+vJQJS16OyzxH/HsaEwMf3hHL76nrI+fRrDMY8gCxzkFyHe5AC5Lvt7W09evSo\\n\",\n       \"sI0COt7tdpPtARhMJpP0Zm4CLJ4DRoLXamAzAeXxeHSaa8d5Tjt4dtrNrq2NRkOHh4eFNCV9hs57\\n\",\n       \"utT3j/Hr099ufwEWznhGm+TnRabQWZkyPWLMrhw4cbS2yhCUfe6Dx0N3Op0ETBx4+HVitL+KGfBc\\n\",\n       \"XYww4yT0CB/jCZvB24aZeKenp5pMJglJ93q9lN6BOoQBAnF7kSnUHceDZllr74WEgCFHql5j48vE\\n\",\n       \"PNed57meeeaZ1EcODKiDAD37mn3GwiPUaLwAN0TDtJH2eZ5eOq/ADtQ8Gohg9aoAlCcBE5eLnOFH\\n\",\n       \"qTPw9GdkkWI/rWJMytpBv3e73VQsyPGMKztsbm1taTqdam9vTw8ePNDW1lZyJjgj0oaMv+95w28v\\n\",\n       \"bGZextVKOKYItjjGI7dovMtSxfTLRWP3JNCJrn8UiYCwTJ7E2HwS4vMxMiPuhJ4UcErLvUwcmOBM\\n\",\n       \"o47CcuCkfZmx1+hEhob7wS47WJCUmGI+w45mWZbS8u6U8zxP9YOj0Uj9fj8Fb7B9HvTyDNSxtNvt\\n\",\n       \"xLAA9LHx+C/u7WlGZ9NY9gzYIPWOfXYg7rZdUtoXCB11QOW/y/rUt7BnjGKhuAf3PLsDTtcjZ7Iu\\n\",\n       \"mk9r3SGWvy+KHpFIr/rDoTSkTzyq92ugiO6kWbZFDpJzy6Io/xsUy3UlJeWrVCoFxQe1eoFpp9PR\\n\",\n       \"wcFBUhKuQ3SaZZlarVZSWgAF92S7eYw3IIt2xpwubfZUjkcs9XpdN2/eVKWyLKD1zY+IgJlI9B+5\\n\",\n       \"T+7rrJKn43hOJrAj6viaAq7hNKRTvlCUPoGvGnsiXc75X/azy4g7Ywd68bplc6Psnm602JHY91Hw\\n\",\n       \"czudjsbjcVrl8NRTT2kymaTjOA+WLIIHxg8g6rl2dNyXRbrBdMfouhVZIOYiALmsRiGCPBfXxY8C\\n\",\n       \"Fvy6Zd+tEmc2r4I4o7cqOFh1jLS0Sxzrf2MvYhDoq0wcYPi13A54m3CcXvsmLW13v99P774hsHTW\\n\",\n       \"jWCSth4fH2t3dzcxKg5GHJg6U0ib40v08DOkOZ3BJ/h1W0cKp9vtajKZ6ObNm2lxgfs7MgnOtrhP\\n\",\n       \"i0xiBJ2eiikLCH1MnIkpAyc+Dv75YrHcr+vKMScurryeM3dlipM6Oi83RnyO4vi5rthScdc+BnQV\\n\",\n       \"q4OCMtherYxzxjgTITIQAAVSM7Sh3++nzdV4LpSRiNKZCNDybDZLG8yRs5RUoBQpFAW00S9Ohzpw\\n\",\n       \"6Pf7aWmyrzyAxYGy3traKuSMQfoRPfOdgzNH2/QvYxiBifc/Ch4NkhuIyBZcBYlRedn3Uvlqh3jM\\n\",\n       \"k8Rzyu4sypzEk67rjpSxAhz4d6RtML5ZliWdGwwGBRaPa0GrMzdOT0/TCgNnyDD06KB/54aUom2u\\n\",\n       \"T/uYG9S/wDgins7x/lgFTGJfP6nffNwvut5F3zmFv27mRDrPUMYoOR4XdTo6ObcV2DBnyNARt+Ow\\n\",\n       
\"LewT4sWZzoRxnAc12FRJqViVZb3UMXkAiW/wPakcsLhjH41GyQ9EIE/bKY7lM3SQ15BQaOsbEqLz\\n\",\n       \"6DI+ZzKZ6M6dO2m7CPoWf+DbLPCiWOaAsx0+l7zvHdTgT+IGb5H5cAadcY/gB9DHWFw55kQqN45P\\n\",\n       \"moD+MJFG9EIg74gIMDiOgZ/NZim/Tb2FVFQMJhMrZvJ8ufeGdAZ62u12KrCFVfD8M9flPQjSkmnx\\n\",\n       \"iQWLsb29nZbfwWYAAlA8FJxzcBL0I5PXKVQMHudyzvb2dgIqcXI7kyIt30nkII22OzCK6R4mBYDJ\\n\",\n       \"o5I4/vSxT4BoAHnusvqXqyBPmnzIk0BKvM5F142OzKPZeL/LCPrebrdTnRDX9b1tpGVQ4Pn3WKcC\\n\",\n       \"YOUaRFC+2stX8qCnOHEAPufgvAHcvuSRa/imWFFgAr2fvEgWiYAt9vEqoyxdbNcu+i469XXKKtC8\\n\",\n       \"irGMnzlQZpylYv0QOuLP6oGkM8ySUv0eQvDlRaWIA2VvB2kZbCvMntsdD5YJPvkcxh37SOEsjp5V\\n\",\n       \"Pezc7bpCyh8gDQDHXvL3cDhMby5mKXS1WtV7772Xns/7jDlLX+ELR6NR2qoCJt99Zhwjntn3BJN0\\n\",\n       \"jvWgn3xbB+ZkZGdYVu3nrZK1bsJWZpQR7+zIoETk5wgwrhpxY4rik3/DuVLkhLFzh+vsBfcmP8kx\\n\",\n       \"AJ5ut5v+n06nqeDJ834MNNvv53me3lBcqVTS2zEpbEVhQdQs6WLlhLTcCt7TIEwWFNEn3WJxtmaf\\n\",\n       \"5W/VajWxOBH88Vyj0Sg9U6fTSREAtTNuSJl4DkgiW+KTxkEef3uBWpZlhaVnPrb04WWBwDrko4KU\\n\",\n       \"i/6O/0dAs4p5ig7uojZFJo/8uBcxSstISSpuWEa0yVyLbAS/iRTRHWfYMHCkDQGiq5yPAxOeHdbu\\n\",\n       \"4OAgLRctEw9mmN+eNvQ2x7+9n1dJBDNl//t13J7F/ZvWLatYkzKWMDo7ByU+Z3H+XuDKddArB46S\\n\",\n       \"ErssFdM53ja+A4QAJObzs51YDw4OUt+SivegFXDBWJAmRyepccQmHx0daXt7O92j0+no3r17un37\\n\",\n       \"tqTlTso+T/EFgHgCxHgcz+8rO9l+wrePR2ByAFNem8jzMJ+kJfjzNqzyzz5OPqbMN77jGq7vHvyX\\n\",\n       \"zWOXK8GcXIZFKUPTHq04nYdT90nhk4Gljy5eKOoTMHa8t9UroXd3d7W/v5/W4g+HQw2HwwQMKpVK\\n\",\n       \"YkJms1kqlIKNYRBxuIAJIrvF4uz11ZPJpJAf5b0KPmkdXBEx+DMTvbIMjt1m2TAr9g1g7uTkJD2T\\n\",\n       \"MyBMJp4hKqOnG8qYgXg8EzNuj8x5XCfWr1wF6tslPjPycYFU2XkOJi9ygKuu59fwaAn9yrIs0d4U\\n\",\n       \"CqKPUjHF6uABnfS0ZbPZTICaSNJrqTqdzjnGArDCNXzpOfl2GEvmK20mMqVoHr33fsRIeqrzo4xN\\n\",\n       \"1OmyICuOC32MREaG/igbq3VJBCDIRcA5SgwkY82Gz3G3Gc4Ee32a109wnrQEK86IwQB4vQV2BhBN\\n\",\n       \"6poiWAJGQDOpCIJB7xsYPIpUt7e3dXR0pGvXrhXS+ZJS0Ij+NhqNtGzZ00gEtwSasB71el1f+cpX\\n\",\n       
\"Cr7OXwXgfVCtVtOKUPqY4NHbDyBxkOMAxD9jTDwQ8DQSx8Sx8jovr31cJWuvObms+ASWzqNjqfge\\n\",\n       \"Af7ne1dEvvPqZ2m5ix4GxH/8etXq8i2a7hwZTF8JxL29AHc0GiUFJU/INsgc78W6FGxVq1U9fPgw\\n\",\n       \"nVOr1Qq7DfqKCPoKpZ9MJundPc7gnJyc6KWXXtKNGze0WCxXUrix98nMe3RwLkx2B4kOXGKajGM8\\n\",\n       \"okIwJvQV+9JgaLyv+Zxr+9heFYnMHvLnASurQMeqlIOfdxG48b99nKTiC+58fjgFj0Nx1gFwzByD\\n\",\n       \"IcSQslvmfD5PdSc+rr5TbtRFj6K5J22nLVDisKGRoUA8beTR/GWkrP+ik/Bcvh8bGa0I+K4a6C4D\\n\",\n       \"SB4suK2M33GuB3vSsnCeMfaaDw8U6UPArL9Kg3v5+EXWfDqdpqCPdna73WSjfFsIGAq/frPZTMzJ\\n\",\n       \"0dGRut1ugUXMskxbW1uJbcnzs4JRtq5Hr7HnnItuVqvVVEDO/XkdCiuBTk9PC6uE6J+oY8wNB1je\\n\",\n       \"3+6vmMdS0UZ7n8eUEcEqNoF7SCqAHnQ7gkjaGEFSlLWmdZ4k0ZnxmS9L5TOUyhEgiNdzaxxPkarn\\n\",\n       \"p924cLwbLYwyaJQIbTabpSVdZR3uu6JS40I6qVarpRQOk4Tc4mQySe9zkM62t+cNlr46idoWVzzS\\n\",\n       \"QJXK2S6I/X4/0aCAB87p9/uFCNwnNhMLNobPcABc0wuLnamJURH3ROgHJhy/OY7zPFqJxtrH/ipI\\n\",\n       \"ZCSepO8XgZVVTvMiQOMGK9K1q9icVeLREzVKDgYd9PiSxvl8nhg+38tHUgF0Y8SZp1mWFXLavtTU\\n\",\n       \"GQV36g6YYEwAUkTDHkFf5vk/Lrvl4NCDk7LUtB/jjCO/rwooQdxRrZprZdF2DBhdB9FTZ8M4x1N4\\n\",\n       \"eZ6nFAVsHPrAvTy9g+33PVHYdZv3NO3t7Wk8Hms2myXWDp1DCLZYgXNycpKCRWqqnMVGz2u1WtrX\\n\",\n       \"ylND/loHBxBcA+Z6OBymYI1AFtDCPiv4ntgXzvhwXR877ycPNPhfUiEdy+cOQtyX0E8OLl2HncXx\\n\",\n       \"Y580v64cc+LG2HONvrTQaSgGzjvfd9tDKdyQSUpvNN7a2kosCPd3IEM7UEoUj/zlaDTSZDLRw4cP\\n\",\n       \"U1vYbEdabjmM4Cyazaa2t7dTzcbOzk6qQwGhc42jo6N0HRArUaFTZIAVBx9Q8bQ7bnglLTezI/2E\\n\",\n       \"0A++PDv++CSICk/7vT+YbL6XBeMYqWyPhnmmVVHRVTPkZRNv1YS8iFlZlapxwy0VV6gxHpHGXgWW\\n\",\n       \"IgDieg5AB4NBum+lsqzLiECM/+fzedoskPomSYWiPwclTnsTSeJUInihnZ7W4xlhYnguj8ovAxbj\\n\",\n       \"80Qm6knfl6Vo4jh6P3LN2P84m48DkL5X4kyGVEyroytebO/Pgi4CXklre2TPGNI3/uwU4XPt8Xhc\\n\",\n       \"GNfI3viSWuwX86VWq6UFBzAdXA/QzDVhPL773e/qww8/TGDjpZdeSky7gwdPYfhnMB29Xk+DwSC9\\n\",\n       \"c8fZM3SAII2XosKMw6awGg4d8XmIX4S98RoQZ+s8PeO2nLZEW+DpVOw540nw7eO4WCw0Ho+TL2Y8\\n\",\n       
\"8SOM50VyJcCJd4Qr+3g8LtQsRNoe6l9aTgBf5hqjO67Npmg46Nu3b6fB9UIg2sObd8fjsdrtdmFn\\n\",\n       \"TElpTTsG1NMZPpnzPFe/30+vrH7mmWeS4X/77bfPpVycBfJ184AIPtva2kpLKh2hSkuAxhIz6gAA\\n\",\n       \"B61WS71eL1GcPgYxveVOg+MYu0jX8n1kNHgmLzhD6HtnsRjDVqulra2tc3Qw4814XgVZ5VAu62i8\\n\",\n       \"T70g8iLWJEayGC4MEQ47nkv/lZ3rKbSYI+Y6vuwR4+VV/b4HD0YaHZpMJmq324Xoy1NH/rlfP+qc\\n\",\n       \"z3mYRNrBSjwPUlYBvstI7PdV59Ie/i7r5+iEow0k8Lgqeh2BWaTpoftd7zgPR+3Le+N4eGTO9+5g\\n\",\n       \"2SyzVqvpvffeS9fzY/mbtDN2L8/zlKpZLJZbsnc6neQHsEfs6IrOHB8f6+7du2m7hfv37+uP//iP\\n\",\n       \"9dprr6ler6etF2ifB7j1ej0x5dvb26mehQ0LY+EpOsGqzqOjI7Xb7UI9yuHhYSpQBwhhDwGFnoql\\n\",\n       \"b5k3/rcHmnGOeHG6v7xzFUiNjDhz0Oc03w2HwwQIV8nawcmqaJJNa3BMdAAdSt7R99mIOXBfT01n\\n\",\n       \"Yhhx0MPhUL1eLympOwOuNR6Pk5ITAbLja7VaTecDlDzSY2ICiNrttp555pn0Tp7r16/r2rVrmkwm\\n\",\n       \"+s53vpNQLytnWKNOesXz5+QiDw8PUx945MizMglQVq4Jc0TKJ8/zlPd0gOMvIkRZMUDObnku08Gg\\n\",\n       \"VCzQ4v84/u6AmQyMG5PQoxPuzTLuq2TEPyoQcXEn7RGNf+8GpUwwNN4nGPoYqbkzdEeBzsQVI+ga\\n\",\n       \"bCJtxOiwm+y7776rF198UdevX9f+/n7STSJBzvWi2Pl8rna7XajLQhecKfE5DdChpspTIjgqvwfP\\n\",\n       \"5IbYDbvLZYCLsyM+Vt6fMU3jUWaZHjgI59UA6xR3XogDAkCD70Lq5zmblWVZYkH4H/DiG08ylgBc\\n\",\n       \"D1xY+cJxpLG9MJ9xrVQqaZ8plu26ffO+JcVMbcWNGzf01ltv6fbt23rjjTf03HPP6eHDh+r3+3r4\\n\",\n       \"8KF6vV4BRJ2cnCQQ5Kzlzs6OHj16lEAMthpdRtc5j+fZ3d1N/cfxb775pnZ3dzUYDNTtdlMQQNvp\\n\",\n       \"S57d7UkEc4xfzD4wPtw7BkfYBNh8AmBshjOoi8UilSLgQ1khe5GdvBLgRDpPZUMVoZDecZ5KIOXi\\n\",\n       \"UT7OHHCQZVlSQOiuo6Oj9NnBwUFqA/uXgFIlJVTtkSPpHKebmSgOFKTlAKE8eZ7r1q1bunv3rp5+\\n\",\n       \"+mm9++67eu655/T2228n2hBFopaFNtBPtVpNn/nMZ3R6eqqDg4OU+sG4O4iAYaIynPbwHhNyo3me\\n\",\n       \"azAYaD4/2z3R33SJcXVQUubcpCUDwuf0mdfxOCDxFRmkthh3vvNxB7iyz8H+/r76/X4hPbdOicVf\\n\",\n       \"UXj+mMZySjTWPMVjPOLw6BqDzrWjM3FnKZ1n9kif8T15dahmZ1PcKANWtra2dHx8rKefflrb29sa\\n\",\n       \"DodpGSdOixefsVkVhpPrEyT0er1CatB1340gTs2fqdFopP0hKGCMYxMNLp99FHAZAUYZ4PAl2D5n\\n\",\n       
\"vJ8d4DN+6MBwOLwwwvwkxB0tvyOY4ofniCCS60hnzs8jZ9e76Fh5fk/N+EaRnEtRtYMeBzr4Ed+b\\n\",\n       \"BL0m6CMAbLVa2tvbS6vLHjx4oPv376clvF/60pf0W7/1W3rxxRcLgB8g7Xat2Wzq61//evIXtVpN\\n\",\n       \"O7tsnBgAACAASURBVDs7unbtWkGXnfEg6OY53WYSrPtKTEmp/ZGlcxbSg3fsjAeWUnFDRw9IPbD0\\n\",\n       \"NB1tYMsBSeeuSWDMpnHb29sp6F0lawEnh4eH6Y29TmlJZw9OjhADy+C4USbXhXKhfOTDYDF8tUee\\n\",\n       \"50nx/B6wGjgG1q+DtGkbqN5Xk3hBZ7PZTO1i0pAWIl+5u7uryWSip556Srdu3VKe5/rGN76hV199\\n\",\n       \"NTEhGEicsG9RvL29reeee05PPfVU2iL84cOHOjg40DvvvJNYGi9clJQAGn06nU7V6XS0s7OT6l7o\\n\",\n       \"P69TcSqdAkP6pSzKZMJzz0qlkgxGs9lMfQQAZCKgtFyHQjUHOCcnJ2l1Eud6cTFb+q9bYgSN4Mil\\n\",\n       \"8++J8sjaDQmfx4nu7BjAZLFYLjnEgMT8sUf50SHyN4YLA+ibSmFAY5tY4k4kR8rxxRdf1B//8R/r\\n\",\n       \"+eefT5sOOqMYgRjOodfr6eTkRKPRSLdu3Upvo+W5HESg4x7FM0/j/j7+3FzPo2g+9/SE9xvnlElZ\\n\",\n       \"H0tKqV7GyW2e972L2x2PRNclOzs7CWhGUIJN9b6LKSv6nufyMWDuR+BAWkY606+jo6MEWEl90AYi\\n\",\n       \"fVgWZ3F9/B1E4VjRl8VikcCyAxnY2Z2dnWSb3n777bQNA/bLyxB4Zkn65je/qdPTU/3ET/yE/t7f\\n\",\n       \"+3u6detW6qNnn322MPeY217/B0hpNBr60z/907QCk5dy0q+k1bgOwV2/30/z2Df29BU39LkXtLo+\\n\",\n       \"e/qH7/GdtN0XQdAmlj870Gb7C16HskrWAk4Gg4FOTk4K0bm0ZExgC6SlU3KEiZMEnKCY5Jw5l85j\\n\",\n       \"kOmkavVsKTDvVvCqbpBirPp3qtKdPk7CC0zpcAykU8kffvihXnjhBd27d09PPfWUsizTF7/4xULa\\n\",\n       \"hAnJczJxms2mrl27phdffFH37t3TM888oxs3bqT2fPDBBwkY0RaPEpm49OnNmzd148aNtINhr9cr\\n\",\n       \"MBNeGAZoIZ3FVsxx1ZKnxqjNcYbLAYhTmp6qAhy6k6xWq0kvYIFY6QRrRL+tUzzyLUvXSOdXmrhE\\n\",\n       \"x8k1/douXksiKekyRgrn65FPZAn8mk6FM6e4j7fHDSnXbrfbevrpp3Xnzh3t7e2pWj1b+s7qCF+m\\n\",\n       \"7sCHe3o+/s0331Sj0UhzBeDuTtpXAEUn79vnOxPhgM/ZO4yug3f6M/a5j0MZAI2sGHOOKNiP9fnp\\n\",\n       \"TjWCkcsyOd8rabVaqtfrOj4+PrdZI4GMs9e+ygqnzzi7HvEdNsOvQf/AbDiwdSAdGXW+YwxwwlKx\\n\",\n       \"9kJa6rUDAvTy6OhI/X5fr7zyij744AM999xzOjk50Y0bN3Tnzh299tprGo1GhTe5+7Pmea7Dw0O9\\n\",\n       \"++67unnzpn7zN39TjUZDOzs7SQ+w8bQNn0NJAv6KawLO8Ce+pBow4i+dBbQA5lh15N87MHFGPLKo\\n\",\n       
\"+AY+A0iyRQD3935mXBgTFkPwve+iHmUt4GQ4HCZFu379emHpFvtoOI3ra8Sl5YubcIiSEhrDsXGc\\n\",\n       \"Kz3Rvnc+CuBK71G9O1sMPwrBdWLxEeyJ5y4p5vzggw/U7/d1eHiowWCQHOvt27fTLrXxWaWzAe71\\n\",\n       \"enr55Zc1nU71kz/5k6pUKnrnnXdS5TdKAENBPzJhvU30zXA41OnpaaK9URb2MolAkLTLeDxOb+ck\\n\",\n       \"5+n7VDDJPDVE7Q59DPihXUQibDaH4/IIObIJTAqe6SpJWR2B051OkyORSXSJDiqCFa4VDUMZMFnV\\n\",\n       \"Vj/ewQ3O2hkEDBXOd3t7W2+88Yb29vaSsWcDtP39/TRHfG5xLwBvv9/XvXv31Gg09Nxzz+n+/ftp\\n\",\n       \"jwneiuz9ik4Q1TGXmQ8+F2OqzOt6Yt+UgZgI5Lw/0EdPffnqOMCX0/dRygCPB2XrFBwPAaTXMuFU\\n\",\n       \"HRB4wEefABKwCW5b6ZcsyxKIc9tdqVTSHh9E5d6HjD22wPXfN5ZE//J8udmls/bOumVZpsPDQ3U6\\n\",\n       \"He3u7qZnPzk50ec+97kUMHmNIcEVYOrevXtpB1l05Pnnn9e3v/1tzefztPs2/s7BBH3sATzXp68J\\n\",\n       \"ADkX2x/HzmsVve98paf7Bc6hPV73A6gA/JOF4NrOurhvl4rbgpQFWy5rASdEuFmWqdfrJVQuLfNm\\n\",\n       \"vkTYKWyUeDKZJOcLymTSewEP/7M9MMp9eHhYeHcLRs4jQo9ypOWyYlJADDaoVVoCKUfznh7Jskxv\\n\",\n       \"vPGG+v2+jo+PU3+88847kpTe7grFTd1JvX72cqi9vb20E+3W1pb+6I/+SC+++GIaZIpcSZ94eqTZ\\n\",\n       \"bOro6KhQIU50QF9KS+p8PB4XwAypJgwudTA4Mq8bQNygwWp5HRB7A/AzHo/V6/VS9EKbvAiLZ/XU\\n\",\n       \"0kepE/ikJLbJUxBScQlvrFPx1BDXKQM6fh83Ur40153mqjQGOhvb68CbucP/njqp1c5eJb+zs5Mi\\n\",\n       \"OgD/cDgs1ELhgHz+MEcBQq1WS7u7u3r48GEynnEHWq878P7yfo4MqEfmHj171B6Bg+t0HBPEU20I\\n\",\n       \"KVDf6BHH6mwWoCfeAwdVdu1PWtxGEpljRxz0uZ65TSA1ApChaDSu2PCUEA6+Xq/r2rVr6V4OSLEt\\n\",\n       \"3D/+po85F7uCA+Y47DTPStuuX7+u8Xic3meDHt69ezc9J2CEZ6fG6tGjR9rd3dU777yja9eupdTP\\n\",\n       \"7//+7+uFF17QYnFW18hqS1+4gH7GeU4hOS+ZrVQqhb+jfkYGhGt5RoD54+DOQYikQhDCcxIQYMPd\\n\",\n       \"fpWlr2P9EZ+tkrWAE2coACpQtDj9GEWQi3bDyOez2UyPHj1KqM/BCdfFiI1Go7SMyZXcgQmgyGlt\\n\",\n       \"lBCA4imjWENB2gRFc6qc9elHR0c6ODhIwGNnZyetNvDVOkQagK8HDx6o3W7r/fff1+3bt/XjP/7j\\n\",\n       \"ev/991NOdjAYKMuyVOXtyyo9lZJlZ8W/g8EgLYEjvQMAo/LcJwoK6n0DuHQAwSTFMPlvV243OIwr\\n\",\n       \"+wAgtN0NeJk+rTsvLxXTaDEi98/8WSJ7EqlnruHGir50sOZO39sirY5SympfPLWAcfJnwYFznLMD\\n\",\n       
\"fo0sy9JSSNKCFHsDdKncr1arOjo60tNPP63xeKxGo6G7d++eCy78uZi3MYVGP3hhtqcSor5EnXIw\\n\",\n       \"EMeJzznfr+MADhDiu9cSnfLcTplHBsb70VMX6xJAgDtx1ynG3qNi9NidEscDcHwlJud7bU6tVtPe\\n\",\n       \"3l4BTDrA9nbxP/3lNgv7AsPR6/VSG7HTjI8X2mIfSdfgY2Dy+v1+quPwFDXB3s7Ojj772c9qMBjo\\n\",\n       \"+eefV7VaTanoz3/+8ymI4EdSAvn0G5/DzMCOHx8fazabpTovfAdsPMDcwTf+JAbf2GGvtyGAdH/o\\n\",\n       \"LAzj5sCFQILxYX6UpSk9iC+TtYATBh7QgEL6qg06F4X0CIKHAmGChH0vD2db6CAKX0kr8T3O1pc6\\n\",\n       \"+YRxx8jkigW7pKN8MnB9KN1a7WzXQFYKsawMNAy74bsK8pykTv7gD/5AL7/8csrns4x4f38/7emw\\n\",\n       \"WJwVbXU6nbSMMoItwANMyt7eXlIi9nSRzvZQ2dnZKeRDmeBEyETFHhECcHxnV4CcGxlfukzbvDYF\\n\",\n       \"J8bfkd5m/KKDX5d4e5yVkFYXyl4kXkgZNz6L1/I0hTs39Bew4QyUzxXO8+8jEHHn4REqlfcYSKem\\n\",\n       \"qYHxqJe0j9PxN2/e1Lvvvps2BORzVv0Afn0Vnb9i3qM5jwi9H2BmGCdSiHGJpfdtGauBrAI6ztT4\\n\",\n       \"MQ5guJ+nbiNIiX+vSzwFE/eNwcZKSzbTwYY7X5/XOEoK5r0ImFUcDnqd8fDI3NORzjL6XPE5OJ/P\\n\",\n       \"UyCGnjrDg83BrhFwYl99bjx48EC7u7vpGjwLYz+fz/WZz3xG+/v7mk6XL4P9whe+oDzPE8AAiNE/\\n\",\n       \"vAzQnw2bzbt4YJmcjeRZASj+PePEPRxs4acoMuY85pzbfknn5rKPNWPp9i8GWj4/VslawInvnueG\\n\",\n       \"xYFJGXXqA056wEEOSND3+kCR3aFyHQAAhXh+X2lJZflqmZjPzLIsGVNpSdsSFUhKSNcpN/ZlQMko\\n\",\n       \"ZKI/eDZPZQEa/uRP/kSdTke3bt1KTMP9+/dTbQsRG+DAl4L69UHGrIrgGTDcgDloWJxFrMZnHKEp\\n\",\n       \"uf7JyUkyAgAsj064B32M+OR3xF3GZhHZew58neJRfFma6UnAxIEt/eJOCpr6IpqfsfHUgVO1fkx0\\n\",\n       \"xDggDwrQZ8CGM13OdFYqZzsmV6vVtJLAa46cSYyMB/rNNuLsA1SpVBIAB+TTfoCGR2fohzNN6BI1\\n\",\n       \"Aqy6q1arhSJqgIKPW4zI0X2eqQxExH6WlrUOOFokAs1o5CMYX5f4/MLmOiNEu2PahWfF/knLmkEc\\n\",\n       \"PfVxbu+d0ZBUGNcY+EnLlEEMSr0WBpBB+3G0jCO+AYaClYGVytmGlzdv3iwEWKPRSN1uN6UrmXOw\\n\",\n       \"4YzjdDrV7u6uJKXNMg8PD5MP8PS2z1X6LabK8jxPzA/PDcPTaDS0vb2d7uX1MzwrYzifzwt9IC33\\n\",\n       \"BcLfuT1wEoH/GUfGlWs7uPG6k2q1mlhUGJpVsva3ErtSoFxMBI8u3EjCVETamS3pSWc4clxVVBYN\\n\",\n       \"s1NdKLFHrtwrFvr48lnAENFfrKEhHzkYDJTnZxXdvV5P0nLrfSYTO2iiYL4Py3vvvZeQ9M2bN5Mj\\n\",\n       
\"kpZFuR6NlTl3BxEYhF6vlybmaDTS/fv3tb29XQAobiAc2HlkSyrMDZFTrCg3zoPr+P4X3tdetEVf\\n\",\n       \"Y9x4tqsi0WHFtNSTGJRI9/s5MZ0jFd9p4oYsRiZlxsANl+sKgrF0YOtACbbOHQIggLdYR8AFa+JR\\n\",\n       \"lxt01wFfAk/huzMS3NeBHM8VozYMa+w/N7re/3EMo5P2Yz1S538PqHxO+nX529Mg9DMOZN3g22l6\\n\",\n       \"xt37SyruCM24eMqEMaNPsHXoj5/vTpPxhpGDVYhMatRtricVC5xdfxCCOWfZ8SXT6VR7e3uJvWZM\\n\",\n       \"FotFes8PO2wDZggQfT76LtaAVH+1gz87eoHNi6k9mBJqFKln4d6wW9hfn9c8H3PV3xbOPKxUKikQ\\n\",\n       \"8HStM1OrBEDjY4feeLqU41bJWsAJnc2EhGrG6WI8fJJiDCWd25QMFO+Rm6cPnC3BAHA/irToLCYV\\n\",\n       \"9wdBOoKXllXgEZHSdugx2tjr9dK1BoOBjo+PE8KG/bh586Y6nY4ePnxYOBZAEukzwA8Tl36JhhUA\\n\",\n       \"gALGaJBCVO/b7e1t7e3t6eDgQPP5PBXHQoU6TY0TAdy5UXDAhxFytgxWx8GJR0keaXp7eX6nES9C\\n\",\n       \"4Z+UuBFELprIUSJbJJ1/s+0qwOKfScs3iK5ybPRbDAa4Btf2JbBu5AAtrEIBqABG/T1P0PEYdliQ\\n\",\n       \"Xq9XqE3ytIu03IAMBsaZEGeDPKqkfRzvDp95QHvd4HqUGMfRDaz3v38fI0dnr6Rlca6PR9QNT/M4\\n\",\n       \"UFo3g+JpLwcW0vll3P4bJ+6BCc/o6S4Hv4wNfcr40gb00VkWqViXJRXfVs33UnG3agIb9JR24qBP\\n\",\n       \"T0+1t7eXtlngOo1GQ7dv39brr7+u/f39VPB6dHSkGzduqFo9qy1xANrpdDQYDBJL7bUd9BUCs+CB\\n\",\n       \"M0wxesk2DrVaLe1Bc3BwUNjw0H0b+o3v9D5xPeWa3u/O6tCvzD/muzM+HhhH9tv16coxJzjLSL/i\\n\",\n       \"nGIEIhVrD1xhMWieciCSosNw7K6UODnqPpgYbiS5L2jWQZTTrjAbOHDOZ4DzPE9AZDAY6IMPPijs\\n\",\n       \"epjnuR49eqR79+7ps5/9rCQlULK9va379++nTa24P2iZ/SN8Z77FYlmvg1NwFoq+caBANIRytVqt\\n\",\n       \"tEmb061MEKdtuT6GiB8fR0CTT1ZJ54wubfA+lnTu7zIHcVVkVdtWRQn+bBGARKcbrxmdm/c5feuG\\n\",\n       \"J7anjKGJ90GnAKZHR0ep4I8NnlgOyXujYCJ4Jwg7DwOo0V/moVR8W3GsL0BnKVb06BtjylxyGpr/\\n\",\n       \"eX6Oc3DtDnMVEPC8vDNXXM/ZFe8/Z8oceDqA9PtFVsyPW6fEqJ05T6rWAa7bRbcdnBtBh7O5UjHy\\n\",\n       \"duZsNjtb/dRqtXR0dFQAks6GeK1RTEtij2EPSN/Qxvl8+cJKnmkwGBRqMQiMRqORXnvtNb3xxhtJ\\n\",\n       \"927cuKFWq5U2t/SxJRUEsOd+tVotMYX0dSyYxm85AHAbycaLgBQfg8iWOwHA3OZ/dNnvKS2DEfoQ\\n\",\n       \"8OH2nftFW+L1pD7O3HuVrAWc+EYukRKlvsNzmihZZFK8M1FAf5WzU74cB9BAIdjVj9xejJZQIpCu\\n\",\n       
\"TzofePLsvkS51Wqp1WppNBoltOr7L6BsTJZ3331X9Xo9FVgdHBwoz8+KpmiT/87zs132vEAQEMVq\\n\",\n       \"IZ4dytInoeeH3TjPZrNEx+OQMA5EgyihOwJfigzoi5PAxwaaDxTN94A6Z4uYuF6z4LR9BD1XRdAh\\n\",\n       \"f2Y+p71OPUcmQDq/gsMlRj5+fcakbHUF142fex9iqGOhYLVaTcvP2cyvWq3q3r176vf7Ojo6SjrJ\\n\",\n       \"Pjq9Xi+9UG2xWKQ3rjLGtNXtwNbWVnp7baz5oo+IfD2v7oAOI+0sCc/moA1b43Pe+8aPpY9wys6y\\n\",\n       \"uPGNQMZtEp9H4FlWa8GcW6fQZrd/6IKDQ0kFQMnYOVsCoAF0RLbUI3ie28fJAUbZnOA8Uv9ScYVX\\n\",\n       \"s9ksFEKz1B0fwsaBW1tbGg6HevbZZzWdTgsgxdP0165d0/7+frKXnU4nvRIFPwbIcp8Vi2djKpvg\\n\",\n       \"2hkKntltB8ujfUECfYY98eDE2RrYfx8vjvc+4/oRfNBeBzi0i/s7K+ZgO86XKGtjTtyxz+fzQtW2\\n\",\n       \"VESKjpy9aphruJOD6pJUMEpc2wfG84uei445tYg2uS9g4OHDh2kSOfU4mUzU7XZ169YtvfHGG2mv\\n\",\n       \"Ekn6zGc+k9iRbrebjPC9e/fS1sS1Wi1tXuXbM/tkd+fjL5Ki3UwI0jYYB/o3rohiZcVsNkvUom+l\\n\",\n       \"7JMD5XblZFkdq3hwUp4C86JZ71s3Sp739PF0AyktAVkcs3VJdPKuz25M/bhoQCNo4btV0bl0nh5l\\n\",\n       \"bDnGjRbihsIpeNJ3vlcJ+XY3thRwP3z4UJK0t7en09NTdTqdlDqpVCppTpIfJ5XnYBrwyfPgPHyT\\n\",\n       \"LEmJCseAx1oOB9/SMmdf1keMh3/nDKC3hz6NEagzI+hxGfsbGZCY3vEINdbo+NxYl7hDon/c4fDs\\n\",\n       \"/I/d9a0MfBx4Lq8n4hxsvbPP9CkLCLyuzZ2wdH7HX28XAIAl7thdtm9gkUKj0dD+/r4k6dGjR6rX\\n\",\n       \"6wmQE6xRb+JgzVffUKOHTtRqNe3u7iaGnT4YjUYFfwg75G33ue86689JcI5fYe54Gh4g7zrqdjwC\\n\",\n       \"brdBTg5gbxkzacnqe8rG/XScKzzXKlkLOHGnifHDwdCB0hJ9oYBS8eVkXiCJkSK941E4g+ab9XgE\\n\",\n       \"JamwBS/3ccVgUBlY8ovNZjNVXAOQ+Pv09FT7+/vq9Xp6/vnndXh4qHq9rldffVX3798vOKlr166d\\n\",\n       \"c2i8kA/Q4I45y7JElbfb7aSATFwHH4eHhwVFZsLwjBgC/uYdNlSTO4JfLBZpySh9xfcYLU/heH7S\\n\",\n       \"iyDdALtyM76wWTESiCk9ZxWugpSxHU5rXnRsBB9lkWFkVFaJ1zfECB7x6IbrAVIYY/TGgTuAFkYE\\n\",\n       \"VnBrayuttpGW0VG1Wk10tjMisB6ka6Ix9JUKDl4crMb+jfrggYszbG4cI5CINTrOfnkEHEEmx0WQ\\n\",\n       \"HO/BuT4HYnqHc5wRWqe4vZCWO8A6ve/94cDP0+jOlKJTETjH1IuDE2dfsCWeUpCKK85YUFCr1dLL\\n\",\n       \"Wre3twsBEuc1Gg0dHx+rXq8X3gR9+/Ztvf/++3r06FEBCACw0V1W4jA/RqNRWvlCehMg5PtgbW1t\\n\",\n       
\"JRYSVsmDLfeX0tJ/YbO5Hz6xWq0WXtsCWxkzD9gH37CR8cOHon8UIMcgC10ASPKeOXwRUsYyogOr\\n\",\n       \"ZG37nPiD4vCgkF25XBkciNAhoFhp2bFujJwa9E3c3MgNh8M0CHQqiukGg1VC0lmn7uzsFKqc+Z3n\\n\",\n       \"edrpFGaFHQ7v3r2rra0tLRaLtMxXUlJaUkHsporx95c8eRTi+wGs6ueYSqDGhueNNCAgBYDlG+/4\\n\",\n       \"e3C8z1E6JqOn7ubzeWJAIgXuQJRaBCYxoIu2eWqOcUFHrgpzQt+5I+SZ6cMYNayKJDwalIqbfEXx\\n\",\n       \"z9xISyqMb2QSYrqC6JT5546BOUg+3PPTpBiHw6GGw2GqLYFBAWiiP6Q7+S0pGVfvv/F4rH6/n95h\\n\",\n       \"gpPziG1VSoZ2OTChH/jfdTCCOAdlZaxIHDdnbvyzOM5x/CJb48/gAGyd4u8gw+ETwDAenl5jLB18\\n\",\n       \"8JuUr1TcFM/HCGE+sacN4JAxdb30FJCDKXSM4+PeSp1OJ62aIbj88MMPtbu7q62tLX3ta1/TrVu3\\n\",\n       \"0rmw4pJSMMg29Owe3u1203j7bsjUB2IXqtWqhsNh6jMWaNB+T/14qs9tOyDX57mkBOzcVjtjg067\\n\",\n       \"jvoSXx8Tgvsy4M1LCambgc1nVV7Ue59zF8lawInvM+Bpm0ajUaie9sLLsvfZkKP2yCvStC44CC+U\\n\",\n       \"hZr2NeG+PTMKxKQAuaPcPMvJyUnaVZUUFe8LoYBrPB7rxo0b6e2tktJeEN1uN+Xnh8NhMug7Ozs6\\n\",\n       \"Pj5OeVFpiWpB+L7k0tNhtElaGnAmOIjb2QsUDwTvyl6v19XpdAoOwDcwos/5ATgywbiH9yPiY0VE\\n\",\n       \"Tr97FMwKEHcEPsGugrgRZkLzf5zYZcyJS3SAq6Jov04EM4ALqbgMNBoymDf0JdKwgHbmXHzeVqul\\n\",\n       \"Dz74IL1SHjDqG3b5VvPT6TSlf3ye43iYn/V6PRXPkmZicyrmIYYvpgf4G/EAIj6jU/8OTOh3ZwEc\\n\",\n       \"lHlfenTv4xL7mZo0HIuDHwf+Mehap9C/9A1stKf+vL0cTx9iE7Ar9JWkBGCRCMqjA/UFAdjCyWRy\\n\",\n       \"Lk1HX1IDRSBHaob9R3hJJMCh1WolJvvOnTt69tlnNZ/PdXh4mBgVxo3/YZjRDYI66Txopm0wLfg8\\n\",\n       \"Ns2MfYg9Rkcig0UfMReYK/Szv0gWYOh22hkVabnSxus8AdBxPJ3d8nHDtwBY6Qc+KwPpUdYCTqDZ\\n\",\n       \"JKVcHyCFKB2D6KkWAIhU3OwKB+YdhtGMBt1pas7tdrvKsixVOns1P9dhMvlyL2n5ojxPGfkSNopT\\n\",\n       \"JWl7ezvlOgEH0OC+pT4reaSzHVo93eXGAWPPM/veJExgULojXGdImDhcH6ViddBkMkm5WIq04hj6\\n\",\n       \"WABkHGUTRftEig6AZ4IxidG8O3oU2yNZANdVkRgtSuep+yhlKZwYua86DvE+8VQEuhxpWfTMUznR\\n\",\n       \"2fr3jJFveLhYLNKS806nk/LdUnEHVc5lHCeTSdqBk/s5a0nE65/hAHwjQHQJ48lz+1x3wwlL4cDN\\n\",\n       \"wa6ncbxvPUjiHHeiDubKxgZg7sytAypnspxxuwriAQK1dLQTe1nGbkvFeitsss8NZ6h8IUQMeqRi\\n\",\n       
\"ioi0s9sQrjObzdLeH6xo9IAzyzJ1u109ePBAe3t7qlQq6R06bg9feeWVtIs2tXgAHXS92WwmsEKK\\n\",\n       \"pl6vF5hfdM7rIt1e8a4hxNlhDyI9+HQd53t8J+3xNwAzv7wfuY6ngRijWMTq7XW2DIbTx9uZc/df\\n\",\n       \"zNPoh8tkbe/WibQpoABE7lQW7ABRB+I5LxQZWhjUCPJkwJzKRcG9AIlIgJUmGHTPlznNCqhibxBW\\n\",\n       \"6zDwi8WiMNC0FWfNJGYwKVyl2Orhw4fpGWBjuHae5+lzvx4/kbFwSjwiWJTIlYX7zefz1E84E4/W\\n\",\n       \"PYqEzkVR6TeiXQdGPqG4BnsDuPPwqAFGiWdkEse6gXUJho+2EaWUAQOpCABo/yomxK/v/3Ms14hR\\n\",\n       \"KDpW5ugwth7N+zWYCzgg3/PEi/Yo4KZNkWL3dgJoJKW5yT1dt5zFZM44W+JGjzw+OuL9A1PjdW2S\\n\",\n       \"ClF8ZEK8HoR2O5DzovR4jKfDfIz82Jju4V5loOgqiI8Pv9FlZ00iw+ERt4N0T62hz85QY68BzFyH\\n\",\n       \"H+wS4IAxZgxarZYGg0EK7tBdauV4v40Hj9h1d8osB+Y3vsB1GBsPoOR1Hw64uJ7Pf+pQXNccIPtq\\n\",\n       \"Jvc9jIEHKwjLo6lpwUc4+0eb+d83yvQNNj3odFvk4A0ggg13sOl2niADPbiMfq91h1iPpjwv5XlE\\n\",\n       \"R7t0nDtTouroYOkofz03n3M9nL60fK00gMVTFgy4p3VgGRaLRTKwvmLA8+DVajVRbTgBR6m+Ex/f\\n\",\n       \"ocikijAAfn8iOc7BaDD4kcZG+RAHiLSZ+9Bfk8kk9QfGn70saI9PHNrRbDYLbxyFvvT35nik6Xl8\\n\",\n       \"3/DLz3fnTQTtOdSrIPSDRxXO8pQdy3HovQMN71eu5UbGgbK0HGscLN+5YXO2yYtmnVlwY0X0NZ/P\\n\",\n       \"NRgMUlQGeJWUisK97RhmB2c+T7y93g/+bMxXH3//m2LY0WhUqC1xI++6Tduk4kZRzsgSLETG1QOT\\n\",\n       \"uKLHGVVPzfhY8T8GOqaEfJykYsooXueTllj7h02i/7yo3leHEPU7uyGp8L4lAlNsrgNP9MVX7bju\\n\",\n       \"uG/w8SY9E+0S7aDGER3FefIc6CO6AJvr15OKWxugq7GepowlcJCBfnuQQiDpab+YHo6AnTQ/fQNo\\n\",\n       \"gBViVZBvN+/Pw/3cV8KEO1sjLQNKxsrnegw0fK56DSHtXKlzH19dP754wV2WZYnmipMRxZSWyL0s\\n\",\n       \"QqZjSItADeK8iJDcOLkT8S2Hm81mSrG4gfFIwRkRZx74n8EkDeWokgGMlcuR+WEQXRH9f8Rpc56R\\n\",\n       \"6Nap1LhduAM5Z1K4FudT4EU/kB/mfUKeanOnkGXFF1kxfl5T5BEWz04/k7LCAMUlpYwN11238Ubc\\n\",\n       \"kESn5SmasvZG50xfer846EXnoqDzzB1pmTLhezc+buydHWCZZdQbfmq1mvr9fgGUuPPl/1V95IDa\\n\",\n       \"v3M99M/cGdIGdBDw7xGbp3hoB/dCP7FF3g5/YWUE3/7b03T8HR1QZFcisKKdPiel4ovVvN3raEGS\\n\",\n       \"pgAAIABJREFUEuYfOoWjY+7hFLFxMaXIc7vN8fPdPsLUuf1H1+NCBwIplu3CqMR30jhw4l7MHfaK\\n\",\n       
\"4jgKZrvdbqon9OJenyN5nqe0oxfs4icc8MegDNvmPs1ZE0mpXoQ5SDtInTqL5Wwec6GMnfZaEvqF\\n\",\n       \"a/OsnONMErruQNPtugP7arV6Ljhl7Dz4vojtXtsmbBhQImqMDBMZA+FpgIi+fVAADCBmIj6AB6DF\\n\",\n       \"Iylyhp4CYrAoZGXwcdbSkipHwZm4IFSWXcEuOP3pyuD35TiUAQVjsnqE6cyPVKTOPZJ2cFWpVAoT\\n\",\n       \"FmTNMzu4AszQZnKqCCkj+seNsOeJiaAcvHj6KU5OnsmL2zyCANjE/iPHexUAikfgGBFfpRP7wh2c\\n\",\n       \"gxY3ZO6YHKwg8RjG1j/HaDImGChPM3r6h4jJ2+eGEPBf1k7GmPb6/PXjI0j36MxXDXCver2eVo24\\n\",\n       \"rrdarcTuuX57ZB3bGZkVdC8CyAgm/VoYbx8bd8gRUHgwJC1XEjEHGLtVY79OcWYPXUJfcHxeS8PY\\n\",\n       \"8DzuyLxQlN9uw6NDo3+4L6llUjXMCZhexJkIB03YUk89S8tdrLkvb8L2FSfoFHtKAQDKarK8VlIq\\n\",\n       \"roZ0fVsF5F2/ou9zJsn/pw95fxvPCHhykIjQj/ge/57r8lzYA/wm53JdHyufzx50xA1IV8naCmLd\\n\",\n       \"ueR5nhz6YrFIqNEpTwwSkmVZWmeP0vmD+rIwruP0oOfGJKVrcV+Mna8SKGMayiIbb6+DBaedI2vi\\n\",\n       \"LIykhEZjJbz3gVNmOATahIPneXgmz5W6o/eoBkq7Uqmk1RL9fr/w9s08z1OKhvMYPxQYw4XS4yic\\n\",\n       \"uYpLAgeDQRpfn/yc646ZtkoqGPh1igM1npHCT09XuQNy9sJZCeYGn3sfl9GhTqdKq2sjPJqi3zyF\\n\",\n       \"Kp1/10aZvsc2u+PiHjgPJBpH12HqB4h8eU5nZnxHWHSGueoAnWvHokwXn2+AaS+gj88YgYrn8f0Z\\n\",\n       \"GSPvQ9oQmRhnsSLLEHVlnRJTVWyoh91y5xMdkqTCSkJ0xAE0feHpIbf5zt5Jy8JQCmIJUNyeuZ9B\\n\",\n       \"fBWig3P+Z+yxvXxPigib5n4gjqvrBXMqgk/XG19aTTvoczZDlIqr75xB4rquv6RiB4PBOcaDWjHv\\n\",\n       \"a+y2tFzN5Asg0EkCU56XdlDX4sGur4p1RpB+ATCukrUtJfbohY4hx+1gwzsFhfV8l0d50nJDGAwc\\n\",\n       \"gpIy4DG/V6vVNBgMCq95j4Yalod2OdvAPaRiLUe8B4ZVWg44QMKRsCNyd9DOyHBPJgvHMbH8PlzX\\n\",\n       \"t12ODs8/Y4L5tsVUmvNMTh/6TpCMQzQuDv5wJEQlzgShA05h8uxMTO9n2uxGaF0SGQSPNBy4SOVv\\n\",\n       \"UeUa8Vox4nc63A2Ui4N/9IiIDgDh0Zm329uBOEvAvQFA7jSkJSiBAfHAwJ895viZNwASoiyKILk3\\n\",\n       \"x0Clk250IOXsm7fN54Pbm1g0uEqnIliIgQnX8HP9uh69u464M4y6sm7g7eOEbQag+J5Q7vwdHAOc\\n\",\n       \"eU7pvL3kPogHMP45fzsTjaDPnlrxrd1h5Pk/6hU/vgmhrxQ8OTnR9vZ2wT5LZ2kuavF8rLmH2zi3\\n\",\n       \"afV6Xd1uV4PBIPlFr3dy4O9MC3rt98MudDqdtGqIOj8YqzhPOY/+5B7uv7DVkgrZDQelPA/j6syS\\n\",\n       
\"B7xcz9N1q2Qt4ITB9j0TcD7tdjtF6OwqGY2Eb5kund811iMPp3opPHUWxB0dvwEwoMR6vZ4oMe5N\\n\",\n       \"u9iYDeOGOHCJkZgrlztekCfRo2+6xoTjuVFEEDxGnEI1p1g5hr5zytqNdyw8zPOzzeQePnyoTqeT\\n\",\n       \"HAHH0g6MPffxqJjP3KA46IvgSzqLWlBu0mZs1OW0phv46EjXJZHWjnSrR/YxdSAVNyh0NjBOYhya\\n\",\n       \"j188zh1bo9FI+yq4XnhaLTpWxo/reg0Q18Wh8wzM6bgB0//X3rsst5UkWbsLAEmRBAHwJqWUWdWV\\n\",\n       \"1tWTfv8n6KfoQVn34M+bUuIFd1IkAfwDnC/2t0PMLLNj5xQ0QJjJKInA3nHxcF++3MPDFSedP2SA\\n\",\n       \"ZS/aawrz1O/3SxzbwGiz2bRqqyRt4MdaeK5gpkxPO/ZfP8PsksHFH7ErfrcdjtqLtJPh9ah13q6B\\n\",\n       \"NwYeNhp9C8NVMyteU+bfOjtpwLMBODqM37umTdIYa2TIR2UpZlY7XDyXd3gPAhD4LH848eIj8bDZ\\n\",\n       \"DlkQ0mB+fHmlHVL67nckKaXsOfbc7XZbzqVrkliOXhunGWhOoM5ms9Lvbre5kbjeJ3X4iDVljMwX\\n\",\n       \"JIDzfBySS1JspfNvWCPbSDM/r7WdaHQvMoM3q1CfisEzB3ERTsAAsxHIjUAQ2TAUtzGFZMPJBJ2d\\n\",\n       \"nZX8l36/Xyoh2vMyg2H2wErKeSwgYRKr8PD4ztPTU6mySAljPkfNEo4L2xOmP4yXOeBGTeaUeaDP\\n\",\n       \"Dw8PBRB5bGYzrGxfXl5KP3w6iWqIZk34bg06WHMSz9wvmBfmAaXHJgNUoqyRAZR4zYLtutVMRu35\\n\",\n       \"Whnz7yRfzXud52HPmTnCGwGovDZ+5Nx1CFCeNCs++vAa4PNaJm2AlDT33hj0o+A8J8iKn2slxak2\\n\",\n       \"Pn9yclL2AUmAlrsk5Qh+rXx5fu1ho18Mcl+rgHl8fNyioL0W9kLt3Lj58wbodlK8tn6WDem3AL7R\\n\",\n       \"WYBCQjM4R97vSZPDAYiBMcDJS/KV0TXQdTi4Ds8kzUlHDOlisSg6CSDFu10MkD2BXMDq8Zk6x8S2\\n\",\n       \"ijASjjQ2BEMNMOD72AuD76enpzw8PBRZTtp6AXC22Wwyn89LH2zMnVeIDPH7N2/elL1hvXx0dJTR\\n\",\n       \"aFTGAJAycDR7yPqxzn/E0hpw+N9mtW0b2aP/TK53VueEY08gK9e3cF0SAAAG6+npKdPpNOv1ulQs\\n\",\n       \"ZaPUNSUwZISLzs7OSpYzjAqTzh+U1dHRUTmJwIQiyPZoLXQII5vYpdwpve1iQNQyeXl5KQAFRoLv\\n\",\n       \"wcrwHkJOvIt5dFIXmw+BhP1J0nqH45ZG1QgUz5xOpzk/P28J2ePjY2azWZ6enjIcDnN+fp71et1K\\n\",\n       \"nvXcYGS73W7rWB7CbwOCEUC54MXYCyGc5Qx5NvUuG2trg1MbLCsuFL2ZBeTc+TU1yIGVIV7Nv2tF\\n\",\n       \"mDSxdeYfZez+ITNWenVYhL/zDgMiG56kfW8PFZOdkGgDTR/t/dkTwxDQDLapMkuf2Nf2DP13jwnZ\\n\",\n       \"r6lz5srsJ+NzSJVx+99mof6o1QCIPtWsgvfOrpmTJEVP2dAAdCkmSbVVnEFARb/fLwwo+x02D6BS\\n\",\n       
\"s77r9bqUnTfbAEuctMOIhDCS9rUFNfBFv6NLkS2D6i9fvpRrRpbLZa6uropM8hnkhPpbNfOHTXO9\\n\",\n       \"KOTW4aSk7dSwp7vdbgHHrgdk1sT7BjmCnTw5Ocl8Pi+OLxXHT05OMplMcnNz05qzpH3akJAnDBU6\\n\",\n       \"hvA+TiLfc80YrrBIGt3FvDCPh4eHLSepbjsDJ574ZCsYGNnxeJzNZpPLy8si3NQxAG1x3Hc0GhXg\\n\",\n       \"YqDB0baHh4c8Pj6Wd/GThQbM2Iv58uVLKXnsDYbi4nsg7ZeXpvy9lQiZ4wcHB+XoHUqOI7oGFe6P\\n\",\n       \"6WcMCieGzs7OWhRqTf8yBhQlmdgYDJ6JoNTMkNfEXiVzBNC7u7vLr7/+mul0mm53W3wIBcM4UCIG\\n\",\n       \"WoAK1gsFhDJxnxxSY3xsRjaRQeyu2x9R+vzbhgZAggwhn8yhPXkrINbQ+Sz1SYQ6xGAlyl4x2OTz\\n\",\n       \"gBTLup9Jf/27mtIF9PtZHrdpaOaFOXD+FM9Ejvm+HRFocVdqNjjmfYy1VubIE+927ke32xydd3jC\\n\",\n       \"xpN1dI6D2SSzZx6rw2ZmpAA2NnQY7F02+mWnxmDu5OSk6EHmnRwHHLGknXsDSLCjuNlsL5CELbPx\\n\",\n       \"s2fPHwz+yclJzs/P8+nTp9Yx26Rx2JIU55c5RzfiGCFbHCN+fn4upzp7vV4rfGjGwCyecy+QRd/F\\n\",\n       \"gzx6DyDvPAP9bfmAYa4TcutIgUMosNMAlZOTk1xdXSVJZrNZqVtkB9IOfpLC0iKP3l/IA7YQW8X8\\n\",\n       \"AF49X0RLPFevtZ1whWx+aHl7cSic8XhcBskC4SENh8MMh8PWEd7VqrlLBmXJxXs8D8UEkkya0uos\\n\",\n       \"MKeGxuNxJpNJoQWTtJAifcWzB+myeCjPpIkr1hvcFFfSgAH6b5qTRGDTlXjK9rIAPwgtxeWMuu2h\\n\",\n       \"ooTdN97NZ0H7SQozcnx8nLdv36bb7eb29jY3Nzet9XXIis2MgDKPfKauycL6cImc69DYUHhc9jx2\\n\",\n       \"2ezt160GkihaU7t16IJmNgQ5drGnOixDcwgCI85nbPBQTO67jSR9wuO0MeY7PplVZ+GbvXN+AuNE\\n\",\n       \"OdrD5DPIH33BmD08PJS9S9+d7O3QsD3vmvVM2iEpzykOk/NsalYEQGSWqw4duBmk1YCtDg/VRmtX\\n\",\n       \"DX2RNKG6pM2wOVmefY18T6fT3N3dFbYFAzUYDFqsno3ja/KNgfZckWuC00Qfa5as291eVUKpCYdf\\n\",\n       \"er1eOQxh9psTm85XxBjzbOtwO5zot6QNPM0O8sz6WgSveT0OO5BmL+28UwWXiwTZU58/f87t7W26\\n\",\n       \"3W2pfYAZc227UTvE2Ft0sKMF7NE3b95kMBgUAGXAn3wdOv0zhnFngcw6KZJNDyqHQkxSyqZbiK+v\\n\",\n       \"r3N+ft6iwyeTSSsfBG+Iz2AUvdEx+izq8fFxCTsQwwQUWPGv19v7BObzeau0PIvFcSwotjphlf/D\\n\",\n       \"IDmkwR/6zhxxN06tBJlPPsdcoezt7dGshO1N0hBmfgIUHh4eSp8uLi7y17/+Nf1+P9PpNLPZLN1u\\n\",\n       \"t1C3vAcgAohbLBaZz+ctRsreODHT1WpVnlsjeoOUbwWYJG1vvQYMhAbxLpxMZm8cpW5g6rwaDJYr\\n\",\n       
\"nFo50uqN79CQmZmaQUNZ0+ekDRAwBniT3W63dd2D9xNj94kL9oUNM0oXAFMfc6bPprMJlRq48HuP\\n\",\n       \"gfnzvvfnaGYjn56ecnp6WpISTbVbZ/Fun2pjvgzI6+e7ua9189h22XxhqHNBki04oKYUexjG2vkF\\n\",\n       \"DpWdnp7m6uqqsBKsPaULADfowqRhnWqQjy6mUQeFPplhYAzIO8zYbDYrdznhdD48POTi4qKAT/ar\\n\",\n       \"QxHdbrfkvbhMPEwQDig2xHuXk0113pMdSjMr7MWkCakh8wYz1jvuN3N/f3+fp6enDAaDXF9fp9PZ\\n\",\n       \"5nWxh60XsFcHBwctsJM0ABBnEr3f6XQK6ERecLitr+wcvNZ2FtbxZgd0MLmj0agMCEOOUJHwZPT4\\n\",\n       \"/Pyc33//PUnyl7/8pQgjSt7xTzMHbAh7qfwffSIhyRUQnaTFIq7X67LR7DX5hMFkMslqtc1AZzNZ\\n\",\n       \"uSKwPMuCwTw4PJOkZexq0IJyt7CZ+UFIeL49EtOJvOf5+bnUSkHw/vrXv+bx8TH39/f56aefihJa\\n\",\n       \"LBaFnnW+CRubSwl5tt+TbC88PDo6yu3tbbkIsd/vFwWDwtq1R1m31zxrxs66oBBZC5SUq7rWBjZp\\n\",\n       \"19XguXg4VtoYdCt1+gZFmzQlyfnsa7kkSTs0MZ/Pi3d5cHDQOiHDM90Ps4T0l/+jnw41GiyjHLlo\\n\",\n       \"jZMh9M2gFgVt1oI5qvd8zSYxDu+Hbreb2WyWwWBQQryLxSL9fr8YIxwI1sWAlHf/Eagw2GI9DF7M\\n\",\n       \"JDhMtMtWM3cGl7PZrMiFQ8BJw1Cx9uSokMxeh/3MRtghxQ54jxAaShoGi/fBwjHXAN+kAcuup4Ms\\n\",\n       \"wcqNRqOiE2FPfEKF77BnDIScFIr8O1xq+4EsJU3Yh/6anWD+HRZhHSzzjCFpcnHMIn769CmHh4cZ\\n\",\n       \"jUa5vLwsfSbBl0rrOBrL5bLMk2WA9R0MBuWCWIAn+8x2nLlCJjzu19pOwAkCtlqtChVYK1KEyqEJ\\n\",\n       \"FIizt1erVUHonHCBirWyQchNpTlWzqQjYBYSUDXvBlUT22SSqVdC4hdCMhwOC1PAM8zusMFchZZn\\n\",\n       \"gtRr78tGi88yHtOKjq0naWWkW9hqmpaGEAKA5vN568TN8fFxfvzxxyTbGOZvv/1WhPPp6SlXV1dl\\n\",\n       \"7lgLCgsx90bi3FA9GAwKQMMLYS7qPiavF9naRbNhdm6Bc4HwxGmAUtbfe8EsB+vld7mQFYaMdXdy\\n\",\n       \"LH0hodEx8qShq53Ma08naY5v813H6tmL7FPT2O43cg9I4nkoXCrAoszYh0lal8CZ2fFeZfxmEpN2\\n\",\n       \"PlXNsKArzMowXupXAJABSxzzr/UHCYZepzpU5PwiJxE7pOPvfwtyzdpyoZ4Ba7fbLbkmBlroHdab\\n\",\n       \"BEiMf5KiQ2t2CF1qjxsmPGlYSEINOIVJU5CMPpo5MHimn91uN6PRqOQNPjw8lMv7aNPptLAB6Fdk\\n\",\n       \"DPkzqERODLjq/WkHgn4jxzWg9sknjDzA2HmQvMtrQRgLFufNmzf59ddf8/LykvPz8/T7/cJU4SCZ\\n\",\n       \"mWL8rjDr8Azrt1gs8ubNm8I+Yb9xYLBjzJEdhdfaTuA4tzGCtGoKuNfbZjo7KxvlhXCs1+tyKR1J\\n\",\n       
\"Pu/fv0+SknvCZUcusgMrYsYBAwLNt9lsyoLybgAQzInZGIQNYZzNZkX4QJyM8+joqJzdrz1eK05A\\n\",\n       \"EoKIYDAn3vw1GjVt6dAAdKLDIg7dAFIQchtH/r1arXJ/f19O6qzX29ye//iP/8j19XW+fPlSlBdM\\n\",\n       \"F54mG2g0GuXs7CzHx8cl1MNczmazkjeEkYNFcSjQuSrIyLcQ2sHjAVBZGRlwJmltUN/fYUVDjk7y\\n\",\n       \"tdFiTu1d83dTxQ578T4rfZqVrZWjARaghH2C0UE2AVo29PboanbPgAJgYuVlT5HfMZeAAffdgAej\\n\",\n       \"RJ/dH36SH2adYG8Vuv7s7KzsJxLsWSMrWIdoaWZJvL/QSzUL6BDft8CYJE3VUMoBMJ66fAMyzFyy\\n\",\n       \"/y2PvV4v9/f3+eWXX3J7e5vk6zouHP81m5akZS+QQ/QXIWxkG5aGfpMAagCF/uUE6GazKZdYbjbb\\n\",\n       \"2inoc4fhsQ3ezzgd3tPoXN5lwG9GPPn6vieHzniX9zfyyNzwfOsX7AN2z3P422+/5fPnz8WJAUSc\\n\",\n       \"np629kXNShoE+bb5WrckjY5CHmx32PN/1HZ2K3Gvty2ty7XWGHbH1RgwE85mcAiDCbq+vs7T01MW\\n\",\n       \"i0VeXl5ydnZWkPZ6vS7I3hQpiogFgNZKUsI4fM4erD176i8YEYI+UWCgd55PbNO0lgEAfQDcoJyd\\n\",\n       \"L2Mq0nkbPMv0m2lNCzBKmz+vGXcAnecBpeDEsOFwmL///e/5+eefi5fJBqX5JAZCbqrdiL1mHmoP\\n\",\n       \"OGkbU/q362aAmTS5B2xanyZB0bB2zrFgLK5uiaeE3LmxT6wETT/72TXbQvPvDRhNufNsji9D+fp7\\n\",\n       \"zIPZSwClT9kAYpAlxgFwZ+/WHiJePPLCPrOSRkH6Ejnvj6R9/xF99r6xgl0ulzk7OyuyDQh7zcu0\\n\",\n       \"o4EMADj4WXuRXgMzJX8UFtpFA5TZaQOU1Dk26BvkdrPZFF3A3HAP2nA4LMma7O/Dw8PixLK2lll0\\n\",\n       \"GBVfsQvIDwYedp71n06nJXcCGYLNRM8AqqbTabEjgAYzNYzV7DaOJb/DtrA3vObsK8t2p7OtVYVM\\n\",\n       \"ERKCkQNsmY1L2owke5F9jk0F1HBSdL1e5+PHj6VvsOlmZer5d4jGa4F981w6fIxMOILxz8D3zi7+\\n\",\n       \"464RkBMUEAYbwQHJUonVnv/5+Xnm83kR5sPDw3z8+LHcJ3BxcVEW/e7uLsvlssU6bDabkuhFVna/\\n\",\n       \"3y9on4kjsxtFCJq+vr7O5eVlxuNxQcmENJ6fn7NYLAoLZAqPvBkLq8M7pp0RRv6OssJztZCYQTGI\\n\",\n       \"MVXufBo2kGPHfJ9/0z/Ago3F4+NjBoNBQc8kLt/d3eX29rZlBJ0M6vABf1B01L6ZzWYFwXvu2Mhc\\n\",\n       \"9mZU/i0ocp9KcTjNRr6m8W2QDAIMUpJm7NPptMwH62pDyk+vVR3br8MRKG+MDUqfz/sIZNKEWBzf\\n\",\n       \"BiQgVwaoVm44DYAnn/7BwCFrHovfjw6pCxvCaKJw/R2eZcXOvJOAiJya9qcfPJM1BmjWeVwAsz8K\\n\",\n       \"bWH88JgNUh2O8nN33WysYH/pL/k4nHY5ONheBYL8JU1CJ04b84gskHBpRpTvO9Thk1rIFM+wswYL\\n\",\n       
\"3uv1slgscn5+3rIjyC3POzw8zN3dXQaDQZKUPCeO3yaNkTUjUQNZO1wG96xvzQwyN94fSQPuGRtH\\n\",\n       \"fgmHmblzw04AqqbTaZJtjh85exTQJCxGBOPi4uKrkPNr4X7GzN4ghaHuk3WAdRnj/LN8k2RH4MQo\\n\",\n       \"CiOXNPUCfNwLAANASJp4I4ILc0DZc46hgrwBLlZ4AAiy8k29omAQWhQ2i4nC5cw5qBMljEANBoNy\\n\",\n       \"Xw+KHNSKQPM+/i9p0LJzD2xoWPQ6b8ZGxkLCGHq95g4SmCGAg5u/v9lsslwuy7Hqi4uLcpppPp+X\\n\",\n       \"SrpJE/46OTkpNCog0JuW9TJK513O3jc74FCDWR7mraZAd9WIN9f5BEnjsZs9cX4Km5c1wUtjnQeD\\n\",\n       \"QSaTSYslqVkyyzh5LA6JAUCsaGgYGrNryAo5FuRz4SFyooU9wx5gXIAOG3z+Tt9gUSyzjMWKnn4x\\n\",\n       \"t4Q+GQP9JAm7VviM3z+9r/z/zCmspT1IanA4NEvzfPq9rC3zbwDH/zssVyfJ7rqx/0gypa6JQ1Do\\n\",\n       \"QyeNouvt3OCAouc+f/5c1skhQcLGzA3OG4ADPeJwh+cRY0y4x3uE3BJYlcVikYuLiwJeuJuGd7KH\\n\",\n       \"ALGMNWnCLWb/AJ8+Zo48GvAawPLv15ws7jECTDEOmllB9g1sBn3APmIDYKeYm8ViUcZEfx2OYy7Y\\n\",\n       \"uwbiDpPa+Wftea5DYHX4s247ASdQegio2RAmhUVYLBa5u7trHY+lmbbiRMxf/vKXzGazIrg25mdn\\n\",\n       \"Z0XhsCBfvnzJ/f19rq+vW/kB/LG3RCG1Xq+X4XDYyv6GmnQoaT6f5+LiIvf39y3602yACwQxNwYY\\n\",\n       \"zIU3OayOF9YCwb95n71e2BqEq45b0oekEXjmC+YKAXcs+fDwMOPxuLzb9QscxkGh1YwPc9/tNsdS\\n\",\n       \"HS7DiHiDwxpgqGp6fReNsJ6TW+u59Xw79GJwXDfWHY/VANeeNw1gnzRg3sCFviBTDkEkaa3dw8ND\\n\",\n       \"YbRMzQJGkpTwDvvWILMGy8gVwAS2EoDCs63QfZzTuSkYBZwNF8xij9XhHBpj8RFYx9XZqzWb4vmo\\n\",\n       \"PWav8WvsVG2IWHuPhfd5Lr6FxtyzF50/Y10MC83eRQ5xPliTfr+fN2/e5O7uLt3u9jRep9Mpzo/3\\n\",\n       \"NMm0yOBgMMh4PC65SN4/nrNer1ee63oonU6nJPc7ZI9jQCI0uo5+Pz4+luex3sxF0oTlLeNmdy0n\\n\",\n       \"yBQ63uCJ7zgM62P27HlaDebpB7reoK/f7xcbhO5lvs2UJ+2aNowvafJjzFbxfdbEibrYUsbBuF9j\\n\",\n       \"f2g7AydsyvPz88xms21n/h+FQx4DKOzp6amEakCA/D+5JLPZLF++fMnFxUW+++67LJfLkg9CApNp\\n\",\n       \"1IeHh5yenibJV7FzI0F7M0lKYixH4VxA7OzsrNRaIQnr8fExFxcXmc1mRcFyAyWVVZP2aSEzIRjp\\n\",\n       \"+hSAE14t1LVSR+klTbKmvdTX8lVq5gR0nKQ1h0nKunCjJgq20+m0CuwBEO1tIKSMkXeDtE3D4o0A\\n\",\n       \"Dm1YvBm/hcbcODZPA1glbQBgBozPGbihpNgbSVNDxrQp84BnV+cTmZ3hGZYbhyCs4E1jm85mz6Cw\\n\",\n       
\"GRN0OM4A64aCQjly4Rmy4P7ZmAP4kLV63JYHvFyHaWg1UDGDC/sKMDF97wYoYxyeMzsXnlOPC8Nj\\n\",\n       \"x4N9w7o4n8PrtatG6BYHwd55khJi8/wyPwBr1pFTWRwl3mw2JTRuBhsmPWkcWnQn76yNtMOi6Abe\\n\",\n       \"YzYNRw0QAjB9enoqpyt5hpOuT05OCpCuWbhku6dns1mRZxwtwpc03pc0Cf7IIQ4I3wM4sb8sP+6D\\n\",\n       \"dTfAjn6iP1gDPzdp8gkBZ3Z+vC/tEPlyVpxXO0/YGu8Jvksf/6ztBJxwkgaQYu9ovW5yJJxklbRR\\n\",\n       \"HII5m80KNTeZTPL4+JgffvihKE0mwFQYhjPZnhzivaakvdjEsvv9flFYRvYoQICTz9vD/mC8fUTO\\n\",\n       \"tUxQ+owZxYZx5x3O43CyZdIkRdlj9700SQMueJa98JomtJJgY+E1Esqxt2RmA6FHCeC5G/igMLy+\\n\",\n       \"9NEljzmeSpjEdCNzhwx8C835A/aW+Il8edz2zB3qcZVNA1XnkNgw2itPGian9ubdHLrAwzF4xcOv\\n\",\n       \"k7JZF8CuT5+5qqaPA9dKCnYiaZJtzRog34At64zXGBEDhKQNAGn8nvmmD+wDlD/j90++X/+s38v/\\n\",\n       \"A0CSpqAdY6XvHu9rOUe1Ad5FY10BZi8vL2XfY/yRac+Hx4ze22w2Bdz1er1SqZQwO7rRjpbnxbf3\\n\",\n       \"mvVjLkmQpaJ4XZsHPYXugxkBMKHjFotF3r17V5zXJCVUUoe3GDtOnNkQ+s0e8lzUDsrBwUHJVcTA\\n\",\n       \"W4/w3VqneA+gg3kG6QWLxaIFprFdyKLnBl0LQeC+Mv8AE+wnzigy0O/3c3p6mtlsVsZe5yH+Geje\\n\",\n       \"CThxLJ4wAZckkafAImP0+ZwZFehD0DnCfXd3l+FwmG63W042+MiwjQKby4wFz+d5bCaOvjoHBZqT\\n\",\n       \"CT87OysABAFAufN8WBAnlNWgiGfWeRQWJH7PRnVRMyvLfr+fpJ1RXytihNsCbzBB8hoesalBknYB\\n\",\n       \"FPQRlgkQgkA6fOexf/nypYCMugLily9f8unTp3KHBgarXstdN9butTAGgAJjY48EEMZPK/I6ORLW\\n\",\n       \"IGnnRtQGuDZqBgj8G8XjMtt1SMI5JrWxZB1IhERpYzgMovCskZvValVAO8rZ/UShO/GWd3N6ok64\\n\",\n       \"dKsBcf2TseHIeJ55Xx0qqMGHGaV6TnkO/XutiBjfOz09LTF/+vCacd5VW61WxUAfHm4Lc3HJH+tL\\n\",\n       \"f2E4yUfiz3q9LqcpMWrIAU6ic/KSRh8RjkAmYLpYD55F+fokBTh0u93y3KSdRA7YIW9fRaVLAAAg\\n\",\n       \"AElEQVSQ/LzlcpnRaJTxeFyqqCIjTvB2CI69ioPq0EfNQvAcGiGjGihTm8ThSebF85O0i2qyD52Y\\n\",\n       \"zZxjTxeLRetZ7DM32wLARLfbzbt370rkgfIPdtyxT+S8jMfjVt4g4O3P5HpnR4nPzs5aniBC5QJH\\n\",\n       \"SVpn1KG8EKLXBsfmnk6nJanWCgVjTJ4KaDlpZ03jSSJkhBUQZJ9QsbFIGkWTpLVQ/JuNxLMcrzV4\\n\",\n       \"wsCwqGxIxgFAQ/AQQgTE5/WTtMIsDl/VSNyNTUU2NmACkIbBOjs7a3kKrxno2gMCzMzn89zc3GQy\\n\",\n       
\"mWQwGOTt27ctg87zHh4eSoEk5+2wKf8ZTfivaA63Je1L7vhjEMXcWnGYQUIRO+RphgSP1qybjWzN\\n\",\n       \"xKFAauCAkfV+sjGn8WzABuDUIIKGl83zUVTsQUAoa8j7DX4I57AHTPX7OC4nBRwOShovk797XIzH\\n\",\n       \"x5qTJhfEAMXryZ6rGZmkCeUBfNgjNEJTPKvX65UrHTBQvpeL/u66kU9wcHBQQALXSiyXy+JUsF6M\\n\",\n       \"gzXnfjGAHLoCxvrk5KTlqNTsFH9H58HYmWE22EM3oBORdwMgwCb6czgctsrmv7y8lCTZzWZTLv3j\\n\",\n       \"vRhskkSZH4fqXGTOsuhkaN7P9+i7c7aQFduHmi1BHs24UweGvz89PWU0GrWAjAEeesOOMk65w0AA\\n\",\n       \"abNXziVinpOUFAjGTB5M8ucJ3zsBJ46bY8Sh1pgE8kkQCo6o4i0xSCdFMll4ecQ5kzbNjQK3EFkx\\n\",\n       \"W0GS/5A09xmQ6c1zCDEx4ShS2BZT9PaMTHMlTUVKKqryPhuTpO0Vm9qn7yhwV7O0ENoztsKm1cqQ\\n\",\n       \"TbxerwtaPjw8zHA4zP39fUuRAIxQIqwV/w8bRe4AoKfe+KwL8dvhcJizs7NMp9NMp9PWJWM2LN9C\\n\",\n       \"AwywLovFohwnN/NQU6pWXjXbhtz5CK5ZIzMSKGP/vVb2KD/+rzaYKGj3mXchmz6NQHM9B4wV/49M\\n\",\n       \"1Bf2MR7mjv1cy6wT562MGYt/x1j+SJnbg7ODwd6xPDl2bwXtUBzjqZU788T3PM/M22QyycXFRTl+\\n\",\n       \"axaH8e9avjudTs7Ozop+4WfNusIckDD/5s2bjEajVj0SA0N0tMEE82ZGPEmxB+iNuoYNewT5BDgg\\n\",\n       \"Y4PBoAAn9hZsFiALMMAxdIe9AQ7IHseikWf0nJ0JH2YwE8begnVk7pK07APzw/o7NFSz3F4HLkxl\\n\",\n       \"TxwfH+fTp09J2kd50VNm4n1K6e3btxkMBlkulxmPxyWPM2lCV+PxuLV29PXu7q7koBjcM78121u3\\n\",\n       \"nYETJv3+/r5sRAwRwrbZbMrFb6enp3nz5k0mk0nu7u5yfn5ekB4CzHc5qcAkAkT4N6EgNk2v1yuX\\n\",\n       \"VxEeccEhTiEYyd7c3JQNdH5+XsJI1DCBWeHZ9ma5KM//h2KDxcE7Sxoa38JnWh2aESBkqtGVZtlU\\n\",\n       \"9kBgqSzkyddeJnkgLlPN/5kNYnPSJ451Pj4+5vb2tgjy27dvy2VTbM6Tk5OyzlwkhZLp9Xr529/+\\n\",\n       \"VvqLEdxsNuWc/beQc+LcD+hTQhdJ+yivE1aTJg+oHoeVSb1OBqcoUmQV8OB+8dMXU9ahHFPWNuh4\\n\",\n       \"W8gOBgejAIPhEJEZNN6LwjJrw15ARjE8VuTuZw2meQZgAaXLe0191+Fdym47ZICD4bmjOaeI9aQP\\n\",\n       \"PqX12ndZAzNQh4dNHQvH4+vQ0i6bnbj379+Xu7Mw2knDGpKXARtoet9z4bW1E2l2CueP8hDsjeVy\\n\",\n       \"WULJrl6L48g6I5voeeoy3d3dZbFY5Pn5OcPhsPTJIcPz8/PC/CWN00HIBFk/OTkptsvsGzowaVeD\\n\",\n       \"TtosEM+mhgngjHlHLvj8a7JQs4PsMdh5bNvj42PG43GRW54L2HQOEbZrsVhkOp2WPKHRaFTqoWDv\\n\",\n       
\"CPlTvoO8xF9//bXcVJyk5F9anv6o7QScIKyr1So3Nzfpdrs5Pz8vCh1kbsqbDcDv7L3Y4wE5M+F1\\n\",\n       \"vJaS9tfX160kURSnwyYk26LEQbnHx8e5uLgoAIWaKp50noERJ/sbpQ5qB4w4Ucgeq4+lMi7mg7Gh\\n\",\n       \"aAFfvMcGkX8zn9CX9ljd/G824cHBQekPioey3gChmnrlrpz1el1qopydnRUwN5/PS80IQCDgjrAA\\n\",\n       \"d16QYEWpacZX5wLssmHYam/BFK7zY+rEb8BeHa7EINZ5Nc5dQalDqWOUUU42AE5oNFOALFnRE66p\\n\",\n       \"84ZgBBwqAlgk7evRWdc6Dp40OWhmkQyqHeJ1bROvOc830LYhpKEQ6Xev1yvhT37PT8uX2Zo6P8X7\\n\",\n       \"0IUSAZP+nJ/N3GO07HHXa/zaWP6VDSPPvjQD4BMayEeSwhQDHgBvBpgwF/P5PMnXYUpygi4vL1ul\\n\",\n       \"zm3MeRdyi6dPTuL9/X1eXl6KUeb76H/LIbrLOSDr9Trn5+eZTCY5OzvLbDbLcDgsa8iFgDC/fr7X\\n\",\n       \"zkCf+at1ugEL+525Muv/mmFnHybJp0+fMp/PMx6Pc3p62tpbdujdJ+wbtogaXbbLyCpy7VweGBvW\\n\",\n       \"F/v7/PycyWRSdAPOLev1R21ntxI7CzhpCrBxXw4NdIey5OdyucxgMCiIFKV+c3OT6+vrEsNk07AR\\n\",\n       \"ENLJZJKrq6skKcKVpCjjTmdbVpubhLldeLPZJmadn58naWJvCDZ9ZtOS2Y7QJimnf0DfPDdplABC\\n\",\n       \"TaKRKV7ewf85Ju4cEQtIHf5h0zmmbg8zaViTk5OT/Pjjj3n79m0BWgA2U4q8s/YSki0ABMD5lBDv\\n\",\n       \"PzzcFgTC4LEZh8NhS6H0er2iGJKUo4UHBwdFwe2ysVY0jyVpjKOTHvl9/T0bYf6Nwsdw8516zS0v\\n\",\n       \"i8Wi1B4w6wTIR+E5QY93+hZeg8A63o2TgOJhPC5G533quXGYludbQfNdxkr/6VM9B47R1wygmUEc\\n\",\n       \"EHujzhlJ2vfiuLFXTNvzf95nNRPpUBZ70kDRoSj6+Boo/Vc3OyWcisTweK87/OCj/4wZ1gigcnh4\\n\",\n       \"WMAEpwrRATwf58S5HehNGMKkSYq9uLhohd4xxBwdRkbQRewj9iFsg/MLyauhpD17AVlz3aikufaj\\n\",\n       \"BhgwOOjL2sHk/8zA8lzn17mZNUEf9/v9UqV7Op2W+Tw6OipgBZl1/hhrRk0r+oCsYutgdxi3c34I\\n\",\n       \"Y9MX5qcuKgkI/KO2E3CCB9Xr9TIajcqxMdcvcUweBWZPI2kqbcJCHB0d5fPnzzk6Osrl5WURIMeq\\n\",\n       \"nc3NJK9WqyJ8gJ+Dg6ac8nK5bCVrsRimwDabTQEiSXN8kskHiXNMmveafkSgjYpZTN5pDwpa0zQ6\\n\",\n       \"Y+Vn7SlaaQIMkvYJiZrKx+gz1qurq5Zw1rFDMt95LkJNOI21A4SRaMuJD9aYNfDmY62TFIbgNW9z\\n\",\n       \"l405NPCiOXSDgVqtVq0Mf5RyHYNOXk/w9O+Yz5omJ7wENetEPj4LeHec36EgK0uHUuhTt9s+HQeA\\n\",\n       \"4CfA254zMlon5nrMZiMwILA3KEf2t/vn/tKs7DebTTlCyvxYlmsQVQMUgIXBC+/lHQ7LmOVinvmO\\n\",\n       
\"AY0NkB2PXTYcxm63W8LUm82m0Po+8p40Ce+Mg3HDMFhnDofD9Pv9Uu8KUJA0BySo5sq6dzqdXF5e\\n\",\n       \"Zj6fF1Z3vW5KqZOT4svreC4sM/NqJoI9RO4fn0U+ABDURzGw4BkOTfkdzAtHm9nvZoutk1erVetE\\n\",\n       \"GvPL+F4D26wFNsBgnbHDgNA3l9BAXwEeASs+pZo0trMGFzj3SftaBuaYxGjWxvu8bjsBJyjPzWZb\\n\",\n       \"6Y/YFpQhVI9DMi58wySRF4KBBzCAtEHhpnvX63VGo9FXSUqcxBkMBsUzXa1Wuby8LKd6iGWywCBv\\n\",\n       \"b0gEpKZq6R/lglerpgAVVBuK0YKO0sZgma5mk+IFmxUxI4JCJ9kURYPRchjBSpgxUAbazUesMS5O\\n\",\n       \"DgRU8Lmjo6OSfwKFyXhZb7wXQJ3Xj2fbwJFF7hyXXTcbWPeZubQXjGJjHn0S5bWNS3gGZez3Je3q\\n\",\n       \"rrXn1ev1Sv0Gx7MpxW6PJmnAtRkTP5P3YGQZ7+3tbZG15OuifnXo0r+v6WszL1bGBtR4wyhe7xe/\\n\",\n       \"L0nrPcx7XajL4MTPq2n4PwI9zImpfINVAzOU92vNc1yzartozq+pveCkAcGeb7Nm1imsF59D/gwa\\n\",\n       \"0c8OH74G2M7OzsrhidVqVUo5rNfrcvO5nT3XvUoaxp7cEQNlmEROpKDLeQb949kGA8iwwcp6vS5F\\n\",\n       \"QQHFZop5LzqZOUiaECHvrfdCDY7m83nRyci8WXk+x5idgM34GBd2ot4z2FucSdgXOxyev8fHxwJs\\n\",\n       \"0S91+NptZzknLPZwOCxIdL1ujgQ6wZFFgFFhoyTNwBH04XDYin+aXvMCARaMcslwJmzDZ8mkH4/H\\n\",\n       \"X93vw79Z1E6nk9FoVBK4eL69McANFF/SZj2S5iQP40aY7KXCSDB/VhA29DBTSb4CBt48tXGnL8Ph\\n\",\n       \"sMSGARJQtKPRqCStcmytNjQ+6cDY2AAcC2eu+Q5KCc+FsZMke3R0VPJWxuNxWc9dt3qz4TGY+uYz\\n\",\n       \"KHKzG/y/4+32tpmbOgznsJH7gPzzbIP81WpVvLg696WmlpN2PgBrTEPuXA7czgXf5Xm1MXKioOfH\\n\",\n       \"/ajZBhRiv98vSa3Mr2lzGu/h/YASktNfAzTMtefQHmG9Dih6xoF3CWBxWOw1JsbhTjzvb4EVZN2f\\n\",\n       \"n5/LHrSjyO9ms1krL4X9yzOQc7632WyK08c8OefOyf0YWwzlYrHIcDjM1dVVFotFbm5u0ul0cnp6\\n\",\n       \"mru7u8KIo9ORJQNjwCZhIPaGk7DtdL3mCBtoGczbOUTu2MPj8bgAK/aiTx9Np9NSS4p38Szm0vuV\\n\",\n       \"/sG2UHUZ1sesFnaHP8ik5ZXPG+h77wKccLQs28yRc2g4PYrtYy7+zKHc2cV/LN6bN29Khm/STkQj\\n\",\n       \"DOCbi5O0QizE0lEc0IYog1pYHGbxM8ni/vnnn8sdMTAjxEY5XeLnI9CEpFg8qsk6y7nf75fjgsRv\\n\",\n       \"TW/buJpm5N9JO3YNdQk9DeJO0jJ0zBkAEBDhvBPmx4aITTUajVoxcjaTGQ4E7zU6ns8CtuxtORRB\\n\",\n       \"gSYAqr01GxVAIYDQRnHXDRlzSMBMSR0CsPKjsT/MvvB55sThEcB3kq++hwdIkUPytQCZBjVJE9ox\\n\",\n       
\"64NiJAaNguH5Tua2gYUxg2qu995rXh+KzeOm3DlAjX4hi1D+ptiZR4Mt5sXgy9dWeD7quamBxJ+F\\n\",\n       \"W9A7jMf5RV5L//Tf2We851sAJ4Qe0L3j8bgwqhg79rqPoaOn7dXjYPIdEklxVJLmxIoveSSPjzWm\\n\",\n       \"+FeSXFxclBwTnNwkBZiQm2ZjDONqJ9WeP418F4d8krRyMpJ2CfgkX4EfO9s4dIASf+f5+bnYRDOQ\\n\",\n       \"SVqVs5Ft5BkZ73a7xS5RHZb+sS5mwWwja/adubEdOTra3kLtCwRZH+bODjDpDKwF9rpOB6jbTsAJ\\n\",\n       \"AmyqExCC4TFlhgCAIFFuNYXr0zYsPr/j0j8W3HQehpOjw9PpNOPxuBxXNuJz38mbAM1zmgTlykZM\\n\",\n       \"Gi/w5OSk1HbBsENDsmgIitkNhM+gxSEmFCDjZ1PDLPm6ANN1r8XYaZwmQkDn83lBwvTFng1eS705\\n\",\n       \"raSZK5QSf7+8vCxIH7kAePBObqd2PhBskzf4LlsdzjGFn6QFXJB9mj0Pb1pkgVBikuKN4inVe8Js\\n\",\n       \"Q6ezzaKnUJY/WxtYclFsUJFnswcvL+1TSabtMSpWUJ4fsyrev8yTQYGZFL5XG2z64T7YSEC3I6M1\\n\",\n       \"yHsN/DI/Bkx/tNZ2evh/K3uvR31U3PNs5oY59X7dZXP/0XVmm+inwz0YUoeO0ck2dj/++GMxlA41\\n\",\n       \"/PTTTyXxlpLxOC/dbnNBKO8HxJJcb0YLucGIUr12PB6X/CiHaJIGePtE283NTS4uLpK0QauZgtp2\\n\",\n       \"AdjrOZhMJjk/Py/MkcEFwJ659zvMRNfRhfF4nN9//711whPHF0fJDiI6wpEK6w3mmjk8PT0tegTm\\n\",\n       \"g/3oongAReacsJmZliStfVO3nYETBmGEy6QATpwYSWIqyZZ1YhAK5B//+MdXl/o9Pj7mw4cPef/+\\n\",\n       \"fasPGEMUoMEMGyVJOZ3iWD7Ik8v9MNy+J8JGGg80aW7KTFIQLuOndbvdcsmUq+kh+PSFXA6O1bHJ\\n\",\n       \"HPtkU5iWs6JOvmZqkq3S//vf/15OPzFvGBCE0LVZGCPALWkE0HONR09hH7MhSTuUR1iKTeZ7imC1\\n\",\n       \"mOdvqaGYrCBrqtifsTJN2qEQvE3WGS/NNDdejJkp3luDHcCwlaa9Hj7HOry8vLSO8dJ3/0Qhwzii\\n\",\n       \"CD02xod8Grj57/5JBdEaCKAjrDw99zyz9kx5BrUf2FMwsZbvOizk/eJ1qoEx64W8m9mxh8q7DHTM\\n\",\n       \"TmIYd80KAkrJA3OIgpoX9JMxHhxsa5RQRgH9hRO5Xm9DzhRgZE8TipnNZi3g6DvXHMKGOTs7Oyt6\\n\",\n       \"GO8eveokcbOHw+GwMCE0h1iQF3IDGevBwUE5lcK+TlJkyswZdgU54d0AMj6TNBcs1o5kt9tthcuS\\n\",\n       \"tPQenzs5OSmXKGKD+v1+kSPWxc4ln7P+sGNbyzeRAW55RmZtp6w77GTyrDoZ/7W2E3BitAQaxrA/\\n\",\n       \"PDxkPp+X41oGDPP5PI+Pj3n79m0ZGEg62RrmH374IYvFooCBwWCQTqfJA0EBEA7qdrvluPB6vU2W\\n\",\n       \"Xa/Xubu7y9nZWU5PT3N7e9tCkWwO+nV6eloWfzabtSrgEgayUibL2wwLWel4rTYgKEkWlfnjWFjS\\n\",\n       
\"JDBipMjLQDBcPhhWwsJUhwoMvi4vL4uXwp0aCJtPQpGgyhzB1lCQablctvJKSF4jNmwDCUKvlbir\\n\",\n       \"59JflIiPa++qOW6Ol+PTVIAJnzCycTQgMUPAnJi9SBrji+K3geP3BkJJkx+FrDhJLkmLGbM3Zwqb\\n\",\n       \"Z7vZi/b72ItmA01z41wYxHk+/e71et1yHPwMK/LaY3YYhd/55EfSJFfWSe4GOjQDTFP3ngvCReQP\\n\",\n       \"vRYGslfMO9z/Ogl+V409TCVvDh2sVquyJ22k0S8YW4f6ABaU7f+v//qv9Hq9vH//Pr1erzDLPv4O\\n\",\n       \"UEu28sVRVY7K9nrbAmvD4fArJgJ2mTmFSZ5Op0madWXdyXHju4AR1pe9c3Z2Vu5xe3x8LGwNehEA\\n\",\n       \"bCeAED9A2KEfxsb+pK/0n9AaMu1wC2P7+eef0+l0ynsHg0GxQWZbfDQc4GcdZGBkub+/v8/Z2Vlx\\n\",\n       \"sK2DDfJ8mitJOZHJ8znl+WdH5Hd2KzEThGGjABf0fdJ4fklasUcm2YmuKIC//e1vpZLs77//XhgP\\n\",\n       \"19ZImrAP7wdVQglOJpOStFQfhURw8BDxEBA6jFKS4lkAUAA1pjVrYIDQ2ICgtGAPMEpJY4AYj4ur\\n\",\n       \"2Ztljgxc2CQYPDb0mzdvcnV1lfPz8yJUT09PmU6nBX2jbOy9stEYvw0fY+X/Dw621XWn02nr4igQ\\n\",\n       \"PIALQ8lmfHp6+up69Tr5cVeNeSfsQWlrU6woBDMmNdPlcB7KfLFYZLPZFG8PltGJmskf5y7QJydo\\n\",\n       \"WmFgWFy7woCmNsgoYmSBMZldYE/5HT6uD9BwX2t2ws/lXfz0HjYLQeO7BoJJWvkJyCw6Ablz43Nu\\n\",\n       \"rwG2+v/sELwmy2ZpkubUi9fyNVCzi4ZM+oQJMu4+moWlKBe/96kPchoosOacBDN2yJBDEg4bdLvd\\n\",\n       \"wmKQW9fpdArbi/OHc4t8oEMoJIksole95hhS5Isxc5CCPlAtFtZ7s9km8dvZJMfG+XXMpfWfnRUa\\n\",\n       \"jik2yGtze3ubxWJRWJ7T09McHBwU/cqdSMy/dRIy6WRkQCE2zPkqjN25OmaMWGv2LPuL39e5LK+1\\n\",\n       \"nR1vYNCc8mAxT09PW7dZMmlJWrfr0jBKeIDL5bIYBMI2KD8WwwlWKF0r8Tdv3pRLoJjg1WpVEL2Z\\n\",\n       \"BaNYgBV1UQ4PDwvFRr//iMLFM4a6dOU9NrfpUnsFVmyAILM7q9WqVXYeJQmar9E3LMh3332X0WhU\\n\",\n       \"gM90Oi2nZJKtgrcX7zFCx9M/0/HUlXDVUmKksAC+0RiF7qRO5pHnM5+7bsiaDSfr7Lo9fK6m+R32\\n\",\n       \"qY0860fhMIy7jWzNNBkQ06c6edugo+4nzYra/alDSIyJfwPeUbaUG2eN6xAX7+D7Bi6uaUJugR0U\\n\",\n       \"U8n2+gz2aDaA6A47DXWrGRqzWu67f+K9Mp9ea/4YjLJe3s8e2y6b8yVms1lWq+3Fosxdt9ttXfDG\\n\",\n       \"/sUDR48DHLx/X15eyr09m832BBVJnITdAAacgHT46Pl5W6jt5uamgI2kCaPaLsDkuAQFoIFnJe3a\\n\",\n       \"M96PrI3B8dNTc5ke80G4BFnBJgHOYJWYW8JOyL0rPHe73VKDq9vttu6Xo+G83N/fJ9kCr6urq6Kj\\n\",\n       
\"+/1+2TOuP4N8Ol0A+waAcp6cw2r0FZsDeATIuTI1wM9lOlxA77W2E3AC+Dg8PCxHiaGW8NrJ41iv\\n\",\n       \"t/FCMsMRLhaNRcSTpJAN5YsREt+DwOZJmnPuKABO3rx7964V/+T39MPC+fLyUrLBWVhOSJCTwsaz\\n\",\n       \"p5Y08c2aumRzgzCNNvmML9FC2aGA7R2b7cFDmE6nRch8OiPZbszT09NcX18XtI1SIDGN00sodYMQ\\n\",\n       \"+uNcCCdS4snwXl8Uxtx6w9BH1itpjlrbc67DDLtoVmTOF3C+hQ2ywZyNEXNqehlGDuAOa3VyctIC\\n\",\n       \"Ckk7J8JABGNtTxYlPBqNMpvNinzacLsvGCpCfVaivLs23Ky7nQGeC3BlXKbg8a6TNqBAwdpwm4F0\\n\",\n       \"iInwCuMh7Ov+MBeMz3H518Ju/DTA8HgNQgwybYj8O3/PToWdn102mIJut1sMrQtKIluu8oqeRW9Z\\n\",\n       \"PpOmiCT3p71586YAHwzxwcFBbm5uiuPK2tr7h/0jN5DwMmEHvk+1V9YZHYKuAiyQs8j77ehhqAEz\\n\",\n       \"vjMIefa6scbOQ0Q3ADRcfI1cF/IpkW3kkXoutVF/enrK3d1dut3tCbR3797l+fk58/m8HB5g39dO\\n\",\n       \"Me92agDrSmidEgSU9OckEPaSvjNfgEGckJOTk6KvsJlOaH6t7azOCZML9cZEoLgQMiaVBfHGTtpX\\n\",\n       \"T+NZDYfDVmweqmu5XObs7Czv3r3LfD4vTAiKh4mlL1CEm80mt7e3mc/n+f7770tsdDweF8ADs0Ll\\n\",\n       \"u9VqVSoeAlBYYECH47NsEDYB3wEEobQstD51YaXMhqUZRAFOjG4BQqzD8fFx3r9/3zp5tF6vCzCz\\n\",\n       \"d4wn7FAW68Ia1YXaSGp23PXu7i7L5TL39/eF8YI94r4MU7vr9bqEAgE/NdW5iwZbBMuRpKXMknYd\\n\",\n       \"DRtSGziavWg2N4ZqvV4XeXI5d472JQ1ITZq7adg/yBZrOh6Pyxoa7AAWHfo0M9Dtdlsgy8weDS+K\\n\",\n       \"z6IY+cm7MHQ0lKi9M8+NqWaHKGFaANBeH3vdGFEfdTaAM71tI2yGw142fbYz4jli3uh/7bHyOdqf\\n\",\n       \"nWb4Vzb0CuOHZWBvr9fbPD30X9IGacwfYQzGvtlsT4CgS5Ptnri4uChl7ZOtDl8ul+XaEDPi1kNJ\\n\",\n       \"AzL7/X7+7d/+LUdHR/nll19aFaXn83lxdjH42CLnObKm6BkYAJhxHGISSPmugb3HnzRhS+fQWK7Z\\n\",\n       \"E3VtE8CRGRl+9/PPP5fcSWp0Mac1W8uJPOfFuW/oGv+byIZPqCKzXNaKHDA3R0dHmUwmhVAgvAR4\\n\",\n       \"NYPzWtv53TpJAzAwvMk2hENGcJ3tmzRHHK3goP2Ojo4yGo1asa9ff/01z8/POT8/z/Hxcd69e5f7\\n\",\n       \"+/vWZU6wE/Ymid2t19sblP/7v/876/U6//7v/96qLeIy2NyiybhMVTMWUD4hGJA1G6Tb7ZakYFfk\\n\",\n       \"IyZqpgCjD6gDgNCsZBEue8EWzMPDw1xeXubDhw+ty99IjCUkxvwAMhiDDTNKiA3OXDFOxgEljPH1\\n\",\n       \"0eqnp6eMx+Ny/xDvOzk5yWg0SpIWw7XrVlPwKBEbHMBp0lDPNuT+vsMyAAZ74tx9Azvn8AnPNaC0\\n\",\n       
\"MUcOVqvt3VHj8bjE7k1Rw2aaAncYyt4+wMTG3UDHgCppco/s1RkcOB5vb9RHVpFF5M5MlNk8vs/p\\n\",\n       \"DowLfXBoir7WjIXBsdcIT5zfYVgMZvAw+T3yzv5xmNlMlIHPrprXHXBiIJikhFfq8RPCQH48fozn\\n\",\n       \"6elp5vN5AffT6bSE1mFAyO04OjrK7e1trq6uCoNjxmS5XObx8TE//fRTye/gRBA5RT4skDS62Sww\\n\",\n       \"Hr9P6cCe83eSVNGTPAfnEjtl5xE7Q+4chp1TUMg/z/HJNqqi26j71KLBl0O1Blnsm263W5z92rFA\\n\",\n       \"/ngm/aI/dSLrarUq4PDk5CQnJyd5+/ZtyXt5eXnJu3fvis53GsUftZ2AE5BajebcMFj21j15TDYL\\n\",\n       \"mjR3ukAnUhr/+vo6s9ks0+m0CD6K03TzbDbLaDQqz4AJOTo6yvX1dY6PjzMej/Pysr38yuf4bZh5\\n\",\n       \"JgbItPrLy0vrfpMkJXHK4SEWmL7gpaKoURCcQGKxXQ+AMYL0if3xXgML+nl0dJSLi4t0Op3iUbA5\\n\",\n       \"8TApwYzw41WTQ0DfADHMyWq1at1jwsZwAi+XeZkd6vV6mUwmJeRgj4E+7Vp50wwA7H3UoAWAYhYF\\n\",\n       \"IFJ/1h55kpZ8PTw8FOXtuUZWFotFicED3pAfU72Pj4+lEjIhSYAFMW76YWXm/rOPzNr4plbWDuNk\\n\",\n       \"JsXxfa8lex85RGc41EQ/7N3jjda5Oy8vLxkMBgVA0U/mhWd5/g1yDJbYdw77OBxjZqXum/OmzKok\\n\",\n       \"bQaNPrs/u2ibTVOdmfkw8EvSMjoAZYwuupKQu4GBQ3sYwfl8nl9++aXldBGe4B1c8GdWGtaZI/c/\\n\",\n       \"//xzer1ezs/PW7VPyLuyDYJZRk+ZuQC4cFIJmYYp+vLlS7nFl+fU689ewy6ROEufHBZN2se3yWc0\\n\",\n       \"s8F8/vbbb5nNZkUPECoCCJCrSWQC547nEqojnMX+5fsOowH+WU/6ip62PcFusK99ei35up5T3XYC\\n\",\n       \"TqDqoeXxlOv4K4thpYbnDrVFAi3AgEUDZYJQR6NRzs/Pi0HnOCyeJyEFjudi1EiSfa8AACAASURB\\n\",\n       \"VDHCnKO/vr4ux2lZNIBM0tBgIE5QMoiesSUpRoES2syDj6oRWnmtzgMLPZ/Py2YhMzxJy0ggHIvF\\n\",\n       \"4quYpY3BcDjMxcVFnp+fc3d3Vyg6AACK5fn5uVScRUEAMnq9Xuv+IjxUxue4a5ISx3S4wmCU9bTB\\n\",\n       \"R1mgEM3Q7LLVRq/Oi0iaUAVjxMB7szuUSaufiwH48uVLOZaNPNb5ESgW1hKamoZB4H0oe6+J85gY\\n\",\n       \"h40Q8sizeYZzXKzgWNt6Hth7AHhCePTb4NR5AM4jqBvvYe5qw8gYUZzOQ2OsdijsMAGWeK4VukNA\\n\",\n       \"rLG9U5prsfjk2bdSw4f5Zw/CZDIvjJN9jyyQP8J8wUJzZUCSMnfkhKD7zs7O8ssvv5TwCyG6+Xye\\n\",\n       \"4XCYXq9XShCQIE4jwRZWdTab5ezsrADDpMlhYg0JL9nBShqn4fDwMPP5vMWmsD/QQThLXn+zpDBI\\n\",\n       \"ZsmT9u3kyJOPGicNs22W5enpKWdnZ+Xfw+GwsHE4tOxZ1sqX35IzQ8KtIwWbzaY4F94HXnMDMOtg\\n\",\n       
\"gKYLxmGP0GN/BkySHYETFs+5A0mjuFhkKyGEg88lW6GeTqflCBkxZqO82WyWT58+5f/8n/+T//zP\\n\",\n       \"/8xgMEiSkg9CbQMWltMp0It4p5PJpCw8TASTjWBMp9NCuVGrxUfKfLKChSPEg2GHhWGRYTEQ6IeH\\n\",\n       \"h9bpn6enp5JgnLTvbUnaxdAAATaEzHfSCNSvv/5avO46zwBFwXq4P7PZrBgLvGLWimdYaQFubOz4\\n\",\n       \"PJsAT9OeAs9DhpyAtutm78fy62ampO6z859gLZApswBWhm/evMnnz58zHA5byXvkohggeV4NCGBH\\n\",\n       \"YK1sfGuPB8DPPgB80D9T9zBEZh2QsxrkGKQl7Vwd53y4/6aok6bsNrLGu0xV86euX0FfcDDMCnmO\\n\",\n       \"6IvL3vPd5Ouj0PZ2a2+a+bIxNMjmd/9Mkf//3egreRW14U22sn93d9eqb9LpdHJ3d9cKpbhx9BZn\\n\",\n       \"Eq/83/7t39Lr9fLbb7+1gB0s7ZcvX/L7778XJxOGhdMp19fXSVJYE8CzD19QhI2TjMlWtn17uoEF\\n\",\n       \"zAL5LbAesNtcxcI1HEkK++95JFyEPq1BOo4tgIk9hq5w+MdMUK/Xy3Q6LeUwcNwdmfARbPQLDbBj\\n\",\n       \"9tDF61wegs8zJphMA6mkyQO1LcDu2TF4re0EnKAg8Q4ZGIaTDiM8PpWB8OC9Y3CdK8LGMJvAGfea\\n\",\n       \"OgedkqOC1+eMafcNj8FKmLL3bA4MAhSfE7WShskgPou3S3P45/HxsSBdh1VYWAwA36efbBrQLz/N\\n\",\n       \"rPAulDaeMmvCpgKkEM5hrDBfjIHicgYkfgdUqzeikycxnklz4gPa0bkWPNPPr3+/q4airL0uy7Vz\\n\",\n       \"JJg/fm9Dz3zAeNAMLJyMZ2aBuXXuBobVzbWGkGmHjQA5vIe+Wrm436zVH8mYDTTNjAb/9nN4lgEp\\n\",\n       \"SX027H424+ffda6OmTr2OR4nc+4QE2EAJ63TT+bN4JG+oEu87vzdfTYw5xlej12HdZhj5/kxX2Y4\\n\",\n       \"WS/rNeaNeTg7O8tgMGhd4wHgob158ya///57qXhKCIIkWfY8a4jsM5fUHDFgZ47RmQ6zoO8BMk9P\\n\",\n       \"TyWPizUYDAYF2Jgt9EEKZIm9gt5lDgEe/BtnDp3qvetogNlAy5vvzQH4mZlyWQCHgp1AzN6DyQE8\\n\",\n       \"mJUkB8YOCn1l7XBasZNms2z7cGjZnwDJuu0sIZYBseBJWh4iAm4lzWKzkCS1LZfLVu0EX5+OYA2H\\n\",\n       \"wxJrRtFgqK1kOYLMYtugcLSZvtXJdCyUT98gvFRyRUAAGNDb9rq8oGZnmDMfPXbSI2NwXgm/h5JN\\n\",\n       \"2hUtrdwpuFavFaj86OioVcb+5aW558LrlzT3HBl8mvaHSQHIOUGLGCjKgSQvh3KsAJnbb4E5Sdpl\\n\",\n       \"59ncZgf8b7NC/p4NMvuD79vTwhvlmB8nmnyCyh4QILQGzEkb8PlEGSEay2gNqF8LW7E3bHQNGmx0\\n\",\n       \"zaoYrKL42fc8g1AhPz2HNkTur3Ma6j+vGTAD/DqB0CDIFD5j593+Wcvua8+0DJutqEHlv7rBYKFv\\n\",\n       \"2cMOH6LbACPPz8+tmlXodp9qZF0Ya6+3rVPy6dOnTCaTUjMK7x+QPhqNyj5I2ndKwerCohHyZD19\\n\",\n       
\"GMOna2CLkQHrddbg4GBb1Ozq6qrFHJjdQkbRs4CRJCVVgfnjj8NLDqHQN4dJkLenp6dysMJsnp1s\\n\",\n       \"9q33Ms7u8fFxsZu9Xq8w+wAa9i9JrXwfe4uuwY4C6gGPdXjSziPMkZm3r2Tu/xPJ/X/RmIDlcpl+\\n\",\n       \"v9/K+kaIEWRiYb5nxiyKQxCcrWazQ2Nx8Z+LABG/5D2155I0RpiFMRUHyHJcznF8H6mzx4miWa1W\\n\",\n       \"GQwGhdZ04R17TbyTvmBQDKLot5WhPTOe5TCJQwUYeU4d4fGhaBxjxih1Op1y/wXvNYB0QhS/e3h4\\n\",\n       \"yGKxKJsHBgYjxHFxnkdCMnRtnW+Dcf4WqO+kWQN757UhB2i+BkoMRGqalFNgBhmwgev1ulV+HaoZ\\n\",\n       \"Ga1Bda0AUUT8zrKbtK9op5ltqWUNA2C2o2YV+J3XzvPEv82esHf4npNpeY/77eewj+p18hg9PkA9\\n\",\n       \"78MI1ODRTKFPfBhcJE11VN7Dd2vWlHcz1po520VDhg4PD3N+fl6OndvIek7JY0AOanZzvd4eQSWU\\n\",\n       \"zj4xE8GJD+Ydufzw4UOSdrI04IV5Y3+5sjiMOv3w7cIYSzONDvWgY8bjcVnPxWJR2GKMO/3CFsDu\\n\",\n       \"eU+ORqNi7wyazUIbUPNMg2vmk8hC0gAAHMNOp1McN++bJAUwnpyclDGgS11LBTCHvUUHOyGWvUBY\\n\",\n       \"zqFkAx3WnLnyd19rOwEnRmyfP38uA2JSXJqcxUgaJoEkK9PAj4+Pub+/z93dXf7nf/6nRUGb6mOR\\n\",\n       \"6gRNK1Umn36xMIQsMC6msqG0UH7kU5jmRPgxyPaI+DtGwIiSRXYSMMJudsc0HQjdIMfeFwrRyJY8\\n\",\n       \"Avrg3x0fHxcAmKS1Pg7T+eg34JECPKyvjbWBEkKMR0z/GDfJZKwHsuC8hG+hISvINKcEnMdRb0qU\\n\",\n       \"kQ21P2MK2Zudz5lC5/c0e/O1ATbI43nIDN6P2UHWnLyOmhkwnc+znNthJsgGwMAaAIz3y3fZW2Zb\\n\",\n       \"mFOHavkdzyX3hjE5Zm6QTD/tHGAU6JNZQAMgGxjez3gd1/f46U8N8vic5aAGW//qhuwlKXl7GPfD\\n\",\n       \"w8Ny2zqshfPLjo+Pi3OIbjg6OipHgdEBTjDFsfEJM9bGsgTbwncAHcvlsoB5O4027qwh+nS9XhdH\\n\",\n       \"mM+ancap4uQiSbe1zLHPqCHV7XZLDiIGnn4Q5vD9WDBUdnRfc5xJGyCEUx+N5hQR8mXG1s49dhEw\\n\",\n       \"SZkH9CkX/LH/AVnYtOFw2JoH5hUiAAcTmeCzTrh/re3sVuIkBY3++uuvefv2bQuRJs1RVz6XbA3w\\n\",\n       \"zc1NASgIyu3tbWazWWsToyAw7jAcfMY1P8wu1Emjjlvyf0m7ZPd6vS65MCwg5+2TJiHURsJ9eO2E\\n\",\n       \"QdJmXQ4ODnJ2dlYUIEdrUcAYBdPTgBL+JM0dGY6R867BYFAMPXPBM3zMzMaI76L0B4NBAXLMOxsD\\n\",\n       \"oOON5DDeYrEo68/vicnCFJCBT3P9lF03xkk7ODgo7B8G3UesARL2tg2yzD7Q6vCBQTC/N4PgdTZd\\n\",\n       \"+xowtvJOvr7ML2m8OP+ecdMf11PAcJthcz/piz1MKztk38/i72aXaAZtBu7sP7xkgyje4X1P/7g1\\n\",\n       
\"m/U0kKidKOdg4EB5jeo9mDSshMMNfv8/8zD/FQ3n4ODgoLDdm82mhA87nU7Oz89bYAFHp9PpFA98\\n\",\n       \"tVqVat528Lx/qTyaNGuIHfCRdkLjABzXg8LrX62aqztsGHu9XmHsefZgMCjHcXGIcIAIqWBQzXDX\\n\",\n       \"joENOwmqgAcDm6R9bJxnWNaYe/YGMtLpdPLhw4dSHHS5XJY8HmSFAwqj0aicXEUXw5ZQ9DJJiSIc\\n\",\n       \"HR2V5zhXZLPZtHSXw3JmSLDjHOZwHpz3q+3ga21nOScvLy8l+zlJxuNxRqNRUSaLxSLj8fgr42eg\\n\",\n       \"kWwVKwlNFlyEm9LIbCAEwMmuCJ6peNA/DAt9cJKPY3mbzaZFa1HREM8hSXkmmwZDW7MVTpxz3slw\\n\",\n       \"OEy32y2Cg8K2N+6Yueun1BsTQ290TF+vrq6SpChYNsXh4WEmk0lZQ/6fRKk3b96U43qOIaN4AGje\\n\",\n       \"yDSU3GuGOUlr7VH4GNM/ovJ31Uzzkk/g/AvnDzFe+o7HxJpi1A2Ka+NrA8nvMbQGDcylEweRXX/W\\n\",\n       \"1HLSPk3jMJQNrcfN352QSv/5jD04Pt/pdEqBK37PPgbM0l8Uqve9wbnZOPa5w3/Ik0Mu1McwhY7c\\n\",\n       \"Q8PX4TbWyawp4Z26VoYdBIdCoMENRgziasC6i8baPT09Fb1G4cSLi4ui28ziWU7MxvlOGXSbHVDq\\n\",\n       \"cfhwgkEPup938Xz0KBffUXTNBtDOnNkv1o/3WVeaLYZlTJpwEP2z04DMOPdmtVoVloE5RcbsEDAu\\n\",\n       \"yxiOqI8Bv7y85Pz8vNQyQhe4+dAFz16tVplOp8WZvru7K6zRcDgsZS04rm1WxkeDCTeSc4IcW18B\\n\",\n       \"YJO0dJLX44/aTsAJx9Gge05OTjKdTjOZTPL8vL1g7suXLyU0Y3TsBpK7vb1tAQrAg9kJKDDTw8lW\\n\",\n       \"UbtGRK34fILFl0Xx+6RRzgizvUBaTdXzjF6v16Ie6SPAw/+G6kT5+bkWCAw9v6cvbMyTk5NcXFzk\\n\",\n       \"06dPrRozX758yf39fYvGxkCgRKEh/S5/niNsoPlut5u3b98W+pF5hklwkiuGhDUDxDh8ZvapNqbf\\n\",\n       \"Ajih/ygsFBj0qgFH0sgD6+gxYGQBZK7IiTGzwfNzUHR8x4rB1LhDQABHg2OaZRuQxRq4DLWBDcCY\\n\",\n       \"33s/0GeHUBhPDdDpk0MLgC2zIwYOyI4ZIBtU1goFy1iQZYM2DI+TC5FLwgKcrMMwAZjsecIgsNeQ\\n\",\n       \"AYdQzXDV87nL5vwO5KTT6ZTilDhRPkHFOlpeAHusB/Wf0C+Pj4+ZTqetMvgOsxB68y3v1puACRtj\\n\",\n       \"5r4Ob6Dn0WOwCbWcueFMU6jQjBwAC9lF9jlhyf/zf87hsDwDWJgzfkfVXAMZ5ufx8bFUPsZZ9x1I\\n\",\n       \"w+GwOMXUBasdysFg0Dr5R2gMJxWwZ/sKI2OWxfoY0I8tYS+9xhLXbSfgBOFhstjk0+m0RemjQJMG\\n\",\n       \"ZTIB9eDW66bcsCvQvnnzJu/fvy+VR/0s3k8SJrRh3RAE0CNelz2zOnwzn89bXpE9fzYZYSjeDXXK\\n\",\n       \"gidNTgCl7Dudbc7J4eFhPn78WASQd5v65vuuoGoQMRgMSr9RfvP5vFRThN2B1rZiYZ34DgLJuFer\\n\",\n       
\"Vdm4HIeG+bm9vW3FmJkPBBow4pwDxk1irpXPrhW3G/Rm0jYwyJqBCc0eZm2M+Ls3P7LvBDzmnO/A\\n\",\n       \"KFhRYxzNeNhDA5g7pOKcDjyg2nDTLwCMmRrHvW2MDbT4ye+RMd7B/nKOCyymvXIciLu7u3z48KEV\\n\",\n       \"+ttsNqXgFHJGHspyuSwKmdMlljH0jC+NYz9tNptMp9MkTeEvvEU8f8snesO5KayF6XEML47IrhNi\\n\",\n       \"mX/6iaOSpDhXLnOetAvLGRADRizrOKo1s9HrbU9Jmv0yo4KMozu73W4xvkla8unQGWwuurHf75dS\\n\",\n       \"B9gRnLzBYFBSCBgHxhog471pBytpnIx+v9/af3WBM5pzdmCCkGvfrMweuby8zP/+7/8WRodIQa/X\\n\",\n       \"KzklPAdQslqtWiUqut1uxuNxYU8cwq+jDOxz9AlADOYsSZlfwm1mUrHVq9WqdZKpbju7lZhTB6Ax\\n\",\n       \"BuBEqjqkwyQmTc4DwubENT5vhsG0k424gcN6vS5389CHpNl8LAxCYDaHeCSbBQYHhZWkJEg+PDxk\\n\",\n       \"Npu1jnuS68HnWMzaG6QPVEa0wgclJ41woFjtQZIHwVFssxd4JDxzvV7nu+++K7cGI6BO2GVzLpfL\\n\",\n       \"AmBQYNRGQeE7nGBPHmPrNTVdyvtIXrNnzTiceLirxlrZ+0GR8DszBQYezqEw+HXIBkXHXjFl6zwW\\n\",\n       \"/pC8lrSP8CdNMSfTywZIhDRN69a1UljjpM0KIpckSdI/My98lvlgTgxOeJaPHpKXgqzwDE51AWLN\\n\",\n       \"uPEOAAh7HObUoB4ZRTegdA3eAEckzDNX7pdlnXGwLg4TeU39e/d/12Ed+oUcms3wsWLGyHolaSVQ\\n\",\n       \"+u8OY/Ms7pqB4WIu2Qsu2AlgA6zA6mK8zVpRD2S5XBYHkXcQgmXNALXIkp0BqodTMXw6nRYH0bko\\n\",\n       \"6/W6nCRlXQ8PD0v+Gf1NUhxPnEY7D4TB6PPnz5/z448/lvfQ5/Pz81Y4HZYFUI0NY2/BXPd6vbJf\\n\",\n       \"kNebm5skTZ0xWG+zYOgRKtQmyWQyKfk+7K1Op1NO0OKwAozqpN+67Yw5scLivhU2qBcYL4QFc0JO\\n\",\n       \"TcXWIRdimy6PjsGz0fYEdTqdElNN0gIlzjWp7yZA8fF+cg36/X5B8aB01zhhfNBp3LvgvhntJk21\\n\",\n       \"Qhrje3x8LDcwY/AYL2MzQOAkDV5esjUWVKxlUyDgMCnMNYARJYA3vF5vj7US+3XxovF4nOvr6zL/\\n\",\n       \"9TFoh5KGw2G53t5AldM/yRaYUGDvWwAnSdtIO7Rg4GHvGAWfpDXW2kM3I8H3AA4opZp1oD+mjU2D\\n\",\n       \"W76ReYNuvCYzmA6/8NN0Nv2Greh2uwWgAjwZj49U+h29Xq8Y+k6nU+6ccnVk56Qw32dnZ7m7u0uS\\n\",\n       \"Vn0Le7x2KDgOyvcZP8bIFVEd6nJdGeaU7xh08EzXhEhSEiMdRuLd9op5758p8X9FGw6Hmc1mJUy5\\n\",\n       \"2WxKkqvZMhto1gfd69Cs5dyfJYeCtaQmCrqcvVXX5EC/wDyjV/l/dDcgBEYXgE4fkNXlcllkpd/v\\n\",\n       \"l/L5sDfkkgyHw1aelKteU0UWUNXr9UrBzrdv35YwIDbC+sEMFHqXMAxgZzqdlitZzs7OSv0Z9Kjl\\n\",\n       
\"E3uArvHlqpxuury8LDbr4OAg9/f3JVTnveB57/W2dwS9efOmBUjJZ+HvvHcwGLwKcl9rOwEnnz9/\\n\",\n       \"Tr/fL2iPP6BaAwyDlKShrI12ASgsJhsCOtDPYlFhSEzvooiTtJQSRWhcCdbxThe/SppLr1DO5+fn\\n\",\n       \"pdwyZ+5ns1kBEaenp2Ujfvz4sVCmLCZAwgmuxLjn83kmk0mZIwStjgEiEMfHxxmNRuW4GPPF3T3c\\n\",\n       \"O3R2dlYE8Keffiq3FFMcDU8Vj9pJbvydPB+El9COAUntTQN+mCeML5s3acJIq9WqJHMxzl032Cd7\\n\",\n       \"GVbYSfs4oA1RDVpQlvbqa1Dz9PRUkvKShir2RY3OR0raV7S79oYZMbMF9ioBI6w962+Wx8wO4MPH\\n\",\n       \"3h2GYXzsQQx90lT9pLAiYT7mzewac8++ZmwYiX6/X/aPy3ATggWgmBkaDodFhwBuksZhcSVNAwh+\\n\",\n       \"8hzXj2Bf2UGzoq91nR2UXTbWh3Lx3PtVh9/xnM0+ITsvLy9FVuv8FQ4KPD1t70FjvgAcAG+cV3IW\\n\",\n       \"0Q/sDYq2wWZ0Op3i/QP2AA3kglCUkLmfzWaFJfnhhx+KnGIrGM98Ps90Ok2/3y/63g7acrlslX1w\\n\",\n       \"QUTfccZ+MTto4Pb4+Jibm5tiG3HYnp6eyjFiElw7nW0+0+XlZcnhZG7Ze+hnbBB26NOnTzk8PMz3\\n\",\n       \"33+fu7u7HB8fF0cVVopnUUmX577G0HNaDeaSz7FnHeJ6re2MORmPx0lSQMR4PC5F1rxRQXiOjZvG\\n\",\n       \"ht5DWSXNLb945kwu8WQMt5Pb8ITYXH6uQydsQsCL2RnXRGHTQf8dHx/n48ePxXNC0blEMpsd4UZA\\n\",\n       \"EQjmzoyFQRKgzciYhtBQcwCUfH19XWKNeCv2EjFyHz9+zPn5eZImj4XNxm3Px8fHuby8LEqm3+/n\\n\",\n       \"l19+KYoVNgvwiFKj0qGPBvJ+EDveDO9PmvhunQS3y4YiQ/EmTQjSYQoa4AEZ9+ZmQ/Nv55JguAlz\\n\",\n       \"EB6ELQAI1AnEVob2ZpN2QTbei8dXn1hg/9gr9NUIeMfsE+hf9iYyYYbH7JCBFP0E0Fg+aFDNPl2W\\n\",\n       \"NLebG5TwDkAJ7Bs5KXyGXJSaZsewEqZFfxjAoFdQ6hh0M0WMDwfLYUz6jmLfNSuI8cTYj0ajYswZ\\n\",\n       \"J/92bRIcDfQD7LLHjXMEeFssFnn//n055cKNu93uti4WenY+n7cYKBwjJ2EDihxKplihw5jonV6v\\n\",\n       \"l6urq6KDAVKcvgLIEBq8ubnJxcVFK+zOn9PT0wIeAPuAIhwIZMrH570/Op1OPn/+nPF4XJ738PCQ\\n\",\n       \"4XCY4XCYzWaTjx8/5t27d+U4NKCF/YYz++HDh9zf37dOz7BPqeHS7/cLwHt5eSml5X1IBObMR7sn\\n\",\n       \"k0k6nW2CdK/Xy6dPn1q5nIzdJ19J6fijthNwMhgMSt7F09NTbm5uCvrlj+lr0DHAAAEETePZIHAk\\n\",\n       \"cHKSxLUxKCDExPEs2BEjffrB7/HwQLPkzjjc4mS/zWaT+/v7Et4ALSYp5+kxNMvlMm/fvs3nz5/L\\n\",\n       \"AkKJs4kAHpQPpnQx2esYqTpxkr4lWyF7//59AWPX19c5PT0tawCFCWLebDZ5+/ZtZrNZJpNJTk9P\\n\",\n       
\"yy3K1DsYDocFyXNjKP9OmsQwMvHxWBw2QkE5qZlj4N1ut8RPURjOjUEJ7pr6TtoFwjCSDnuhMPlc\\n\",\n       \"HXKjAaIdiknSAiYAcudEGZSaIsZYOmQEPVwrGjMUzDnvR4adz4VhN8vJZ5zUx0kznu2wBrLGH/pn\\n\",\n       \"8PRa+Mvz1u1uk60vLi6KHDnEQj4CMkSuCUaNPlIKoAa9zAWGxSd96I9ZKEIAyRbkkZPlOeCEBXOF\\n\",\n       \"IXe4Gcp8l+3Tp0+FPUYfAdDQw4BadJ3HgV4y0HOeHpemrtfrsn7owX6/n/l8Xhi4pDkpCWvtyuDo\\n\",\n       \"AoA/evng4KB1uzuAK0lhO9mz2AUDVBg/O8Xv378v8orMJimMG+9dLpelRIMdu6TJ2XM+DXOUJD/+\\n\",\n       \"+GMrD/Ho6KgcTe52u7m+vi6y9+nTp7JGT09PJRyXbCMWRAfMmFIlFnZytVqVUNN0Om2dwOL3OBn3\\n\",\n       \"9/fFeXz//n0JeY1Go9ze3rZyFZ0bhP3+M4dyJ+Dku+++K7TYr7/+WjqL0kgaj9LCTAO4cConScnb\\n\",\n       \"ICSBkpvP50UR2asajUZJ2ln/UL/2+Jy7AbpF+AEdJH+SBwL9hbInDJI09CheII2wj+OMbHKus2aT\\n\",\n       \"OyuevroUM58zAqdNJpP84x//KHkfk8mkABs2J8d9uYob2pnEKI4Ls2G92Z6envLx48eSCIuRhgHB\\n\",\n       \"0Jo+RxmQCAf4OTg4KN5Bp9Mp8UpAJCeG2Ji7ThpMGjk1aE6aM/7IJYbNORxJmxVE9lA0DpMgQ47z\\n\",\n       \"OhHbMV3nhdigO//EBQSde1IzBKbwUdJJu0gbYyC5EfBppwOljVK2PHNywgAC+XPisOccY0FxMIcc\\n\",\n       \"iZGzHvZQ+Sz5AA6vAeBRog6T4iwBTviMw2f8ZDzkaBBenUwmLZ1E+BZmgXlCXnbdmAtqIpHUiPw6\\n\",\n       \"FMnet4PnUzaEN9brdcbjcZHjs7OzfP78uYR/Tk9PMx6PWyA1SWGH6UfSsKxm28m9MPuHLHW73cIS\\n\",\n       \"mLGm/0lzMs2hO4AHSfiLxSKnp6dFpwNIkVvrcoy6k2/Pz89LMbTXjPUvv/xS9udqtSoOdpISImQP\\n\",\n       \"MKfsOeqHPT8/l9OhLy8vubi4KGF8n8Kibz6B9do9XHzeckG43Swm15sYXGNz7QS91nZWhA1BPDo6\\n\",\n       \"ysXFRW5vb1sdRQmy4fGYLKQYxaSpEooX4qIxxMdM3dm7TBqBtQDxfxyrYvFJuqqpexI/AS0IOooL\\n\",\n       \"dsBsAcoZANTtdkvlQk4S1OEBjJSFg8UG6fomTNPmg8Ego9GoeIar1aqAKTw1gEeSFoBIthvu8+fP\\n\",\n       \"ubq6KgqVvBLi9ggx3hWKHqEEpGCcaYvFIhcXF0lSlHTSnNYidstJJYcn/hlF+K9qyAwN42dPHw/P\\n\",\n       \"XrkTWVkXvGg2sY2ewyWsncEPtCxzxHdsKMy0AIJhEgz26pwXJ2w77OmTdvwfYzXLBYhHNvjpEAaG\\n\",\n       \"hr4sl8uMRqNScdPzTZtMJrm4uCh7ybk4zJvDSfSJtcDR2Gw2OT09zWQyKdU1ofQZD/uZvzNG1pk5\\n\",\n       \"A4DhicPOcPoBz3U2m5VcBOfcOMS1yzYYDAowpl/MDSCXELqdsSQlodWMUtLkFMGUJMnNzU0Bw0dH\\n\",\n       
\"R5nP562SDxjT2WxWTjlyMov5dWiEhHz2EuNItvoGubYjYUehrqjqPDj2HQw6DDi3KBP2ZI/A5qH/\\n\",\n       \"2XOE0y2bBilHR0e5u7vLd99918rXSVLsEUCH7wFmcGpJnrXNcZ4abDe6wuweugDGmj7DRB4dHRXd\\n\",\n       \"P5/PS07jeDzOu3fvWhV6Aeg885tLiP348WNRMHjNzupmEEwkE5Y0J17wPhDMJC0PxJOQpEXRouz4\\n\",\n       \"O54TeSMu9wuNdnCwPY5GXJTEJP7/+Pi4MAoOQVlJIYwO19ze3rb62utt70R49+5dRqNRPnz4UKh8\\n\",\n       \"DIKVHQ3l7rABPxk7Xs+HDx9KaAUBwRsEYZOEljTH9mArhsNhbm5uWsi609leioXxQTGAmA3WACn0\\n\",\n       \"lY2aJPf39y1QyO9B/z7yZgDrmgq7bDXrUBvnpDmNkjTGFeBlWh+ZQZm8dk8GzwNY21t1sjYso3Mq\\n\",\n       \"6BO5Tj5tZrmh1TkUdUiKPpghq8t3A9wdtqjngmdgAJLm8jV7sRg6gPxgMMh4PM7FxUUxaABjmFiO\\n\",\n       \"c8IoEXIyo4G+waj1+/3MZrMis/SFNbChMmAz2+Kj2OghJ3QCqiwDzLe91102xojedM4HoMsAbrPZ\\n\",\n       \"FIBJGBwASSjNdZTI5eBZSaM/ky34xOg6rw7bwXoljZPAmlBuALnjQAAev/P1zDwjH+PxOC8vL6Ua\\n\",\n       \"K+NBFjnhOR6PW3dpJSmOw3Q6bbFi7DUcXeRms9km5cIU+7Z4HBWiA9ieTqdT7u6pWeRud5t35Rvl\\n\",\n       \"YYmQL4clfaACZ55QO/ucd+Dk+D48F+D01SWbzSa3t7e5urpq1V75/vvv1OWGhQAAB0pJREFUX5W3\\n\",\n       \"nYATx43tUUEvYby5cMjUdz35hDTY2CBFBDFJC2UmKQuOADrhFDorSVFahIpQEiBkP5cKfYeHhyVx\\n\",\n       \"9MuXL61kOLyAwWCQ09PTfPjwoSR64Tk4J+Xy8rIIvFmE1WrVqvg3GAxyc3NTKGIy3AFyKF7m8/ff\\n\",\n       \"fy+KmYRUo1o2MJRcst1A5+fneX7eVvDF2wQ9A/LwDii0hDJzKGKz2ZT5s5Ew0+R7G0DpTjhcr9et\\n\",\n       \"BOdut1tYl103gwN7ZfbW+R39xyuELqY6Ixsb4A34cTIfQAjDh4G18nC4wkYWeWLdYFoM+vCE8fr6\\n\",\n       \"/X7ranlACUod40GoDoWKh2baHcCdtG8ato7g+Un75l+AweXlZYmrQ7UnDciBYTKNj9FjfIC3yWRS\\n\",\n       \"ZI81AHQD2jkh5fViHv1+jDN7xeARGWAPcdoCZy1pEvAnk0nryPQu2nw+L3OO7rSjQWgFXYisEVbB\\n\",\n       \"aTk/Py96EvYoad9Sj1zBUqCTLy4uCvhwAiynYtBhsM/cNO8QJmEQ2DuSe7nUFF1yenqaT58+5d27\\n\",\n       \"dyU/0CEtFymDaR8MBmWPAAR4H2kMLy/bAyA4pTA/7FHk6/T0tFWXi6qvJAkjQ5wGAnDAFpL3AThg\\n\",\n       \"j7E35/N5FotFAfTMD3PEQQ7W4e7uriR0w5ACKNnH2DnmfDKZZLPZlr0YjUY5Pz//in0i4fa1thNw\\n\",\n       \"cnV1VSaM4jhJQ7ViqJ2sljQhHbwLNq+pLZgTFtWeFsbw4OCgCG7SgBc2DcYaQ4+yxgvgvfQJQHJ3\\n\",\n       
\"d1c2MXFBAACb9OHhIYvFosTjfWw4Sen/8/NzPn782Ep2xQNFgfk+BeoDQCPzPd7NnxoYYFxgZCz4\\n\",\n       \"rrY7n88zn89zcXFRjt8ZvEDrnZ+fp9fbFvIhhHN0dJTb29uyxtfX1616AEmK0XTYjjoUvV6v3POA\\n\",\n       \"Yb25uSnH3fg89/7sutUhPQy+ATbKjo3uvAVySepYuZtzTvA0ydZH2fEee58wDSgj5ARP1EYXdi1J\\n\",\n       \"Ab5JinfI3vLeSFJqLgCOXTDw/v6+gC4YIcsgbGmv1yvAnT3W6WwLOhmooexrFpS97QrK7I3n5+ec\\n\",\n       \"nZ2VfIHRaFQ8Z/bAly9fSrgBDxanpPbGyX0ajUa5v79vlTj/8uVL2af0t9vtFiaQ5FrCITBT5N5g\\n\",\n       \"/P+M/v5XNHTjwcH21CNrw3ohT9TaQP5IyiQnDpkGPAPcmDP2RX0a7fT0tIBIdD/rBWipLyZl3gk9\\n\",\n       \"OS/GuYKwIVS3dn4VsgPwHI1GrRQAZIk54EQN4MRHj7FB3333Xfk8cmpW/fDwsFzpgj7GmYHlZP84\\n\",\n       \"Xw02nWdiW9EvHz9+TK/Xy+XlZbrdbs7Pz8spOoOZ4+Pj/PLLL7m8vCxsliuXA3jQbTAqnz9/bpXb\\n\",\n       \"4HQXoR1OZDqfaDKZfFvMCYlnKLD7+/s8Pz+3PB6ED48JQUSg7UmxaLVnjreBR8+kOp/CoRDeSzIX\\n\",\n       \"iwyz4eqDgA6EebPZFBTpOKVDTHiBJBk6Ix0vL9l6f6enp1kul63jYHim9JvNnzQ5ISh2F2QzM5E0\\n\",\n       \"FLNDKhgqnuXwEMfS+Dx06A8//FCOgHNyiPlmDh4fH/Pdd98VT4BTRskWyPB3DC1z6hg13/Emd0Ip\\n\",\n       \"NQr+rBTyv7Ixz05GTRoQ7JguSof8KR/DZHwOESUN64IHBFMC24hS9f6pQ172dmAnqQ/B+mPweZeP\\n\",\n       \"hLKPzF45yZCGbAHCkHufdKvDr8zXcrksoA2vEqAH43lwcFBAlStR8j3+wN5gbCaTSdlnzLlP+5HM\\n\",\n       \"BwgnTOky5svlMrPZrCTX393dZTAYZDqdlpu5fS0GLOJ6vS61fyaTSVkrG3fndlgudtW8fpeXl2X9\\n\",\n       \"fcqF36PHAQUYNWQQZhemD6AL6+cCk+RNwMSyXrwHMIcXTz/ZYzCEDnOik5ElwvROCH16esrV1VXZ\\n\",\n       \"G5YT7wdYOGSfvtEIj3D6hfkARMPS+fQN7DQ6HrtBCM3MInIOuwxYBGytVqtSa4s9CghbLpf5/vvv\\n\",\n       \"W3k6zMn19XU6nU7r9ul+v5+PHz+W00Aw5AbzgCaYxufn59ze3ubgYHsIBSLBl9f+Ues4Fr5v+7Zv\\n\",\n       \"+7Zv+7Zv+7brtvuqVfu2b/u2b/u2b/u2b2p7cLJv+7Zv+7Zv+7Zv31Tbg5N927d927d927d9+6ba\\n\",\n       \"Hpzs277t277t277t2zfV9uBk3/Zt3/Zt3/Zt376ptgcn+7Zv+7Zv+7Zv+/ZNtT042bd927d927d9\\n\",\n       \"27dvqu3Byb7t277t277t2759U20PTvZt3/Zt3/Zt3/btm2p7cLJv+7Zv+7Zv+7Zv31Tbg5N927d9\\n\",\n       \"27d927d9+6baHpzs277t277t277t2zfV9uBk3/Zt3/Zt3/Zt376ptgcn+7Zv+7Zv+7Zv+/ZNtT04\\n\",\n       
\"2bd927d927d927dvqu3Byb7t277t277t2759U20PTvZt3/Zt3/Zt3/btm2p7cLJv+7Zv+7Zv+7Zv\\n\",\n       \"31Tbg5N927d927d927d9+6baHpzs277t277t277t2zfV/i+IAQDEy/wsagAAAABJRU5ErkJggg==\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x113e04090>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# helper show filter outputs\\n\",\n    \"def show_filters(net):\\n\",\n    \"    net.forward()\\n\",\n    \"    plt.figure()\\n\",\n    \"    filt_min, filt_max = net.blobs['conv'].data.min(), net.blobs['conv'].data.max()\\n\",\n    \"    for i in range(3):\\n\",\n    \"        plt.subplot(1,4,i+2)\\n\",\n    \"        plt.title(\\\"filter #{} output\\\".format(i))\\n\",\n    \"        plt.imshow(net.blobs['conv'].data[0, i], vmin=filt_min, vmax=filt_max)\\n\",\n    \"        plt.tight_layout()\\n\",\n    \"        plt.axis('off')\\n\",\n    \"\\n\",\n    \"# filter the image with initial \\n\",\n    \"show_filters(net)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Raising the bias of a filter will correspondingly raise its output:\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"pre-surgery output mean -12.93\\n\",\n      \"post-surgery output mean -11.93\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# pick first filter output\\n\",\n    \"conv0 = net.blobs['conv'].data[0, 0]\\n\",\n    \"print(\\\"pre-surgery output mean {:.2f}\\\".format(conv0.mean()))\\n\",\n    \"# set first filter bias to 10\\n\",\n    \"net.params['conv'][1].data[0] = 1.\\n\",\n    \"net.forward()\\n\",\n    \"print(\\\"post-surgery output mean {:.2f}\\\".format(conv0.mean()))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   
\"source\": [\n    \"Altering the filter weights is more exciting since we can assign any kernel like Gaussian blur, the Sobel operator for edges, and so on. The following surgery turns the 0th filter into a Gaussian blur and the 1st and 2nd filters into the horizontal and vertical gradient parts of the Sobel operator.\\n\",\n    \"\\n\",\n    \"See how the 0th output is blurred, the 1st picks up horizontal edges, and the 2nd picks up vertical edges.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAicAAACbCAYAAAC5xzv6AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvWuMbNl13/c/9eh6V7/uvT1zHzNDzgw5HNIWNInpMCEi\\n\",\n       \"2wkCwYElBFASBTLg2DCM2LATSAkSJ5GlWDJi5EMAA0ngL/EjkQPFcuIQgREEcCIbAkJD9JhDgdJ4\\n\",\n       \"yOFjHnfuq2/fflV1VXc9Tj7U/e3+1+pTfe+MqOkmWQtodHfVOfvsvfbaa/3XY++T5XmuJS1pSUta\\n\",\n       \"0pKWtKTLQqWL7sCSlrSkJS1pSUtaktMSnCxpSUta0pKWtKRLRUtwsqQlLWlJS1rSki4VLcHJkpa0\\n\",\n       \"pCUtaUlLulS0BCdLWtKSlrSkJS3pUtESnCxpSUta0pKWtKRLRT804CTLsk9nWfa1LMsOsiz7C1mW\\n\",\n       \"/fUsy37+8Xd/KMuy9y+6j0ta0kehpWwv6QeVlrL9w0s/NOBE0n8q6f/N87yb5/l/l+f5n83z/K8U\\n\",\n       \"XZhl2TtZlv2R36uOZFn2lSzLXsqy7JNZlv2z8N1GlmX/R5Zlvcf9+Pd+j/rwX2VZ9iuXtb0lfSj6\\n\",\n       \"fpHtP59l2etZlg2zLPtbv4d9WMr2Dw5detnOsmwly7K/8fj5B1mWvZFl2Y//HvXhh0a2f5jAyfOS\\n\",\n       \"3nzKa3NJ2Ud5SPaYzvm+Kum5PM+/JelfkPTPwiX/g6ShpGuSfkbSX8+y7NWP0pcl/dDQ94tsfyDp\\n\",\n       \"lyX9zY/y/CX9UNL3g2xXJL0n6V/N87wr6ecl/VqWZc9/lL4s6THlef4D/yPp1yWNJQ0kHUh6WdLf\\n\",\n       \"lvTLj7//Q5Lef/z3r0iaSDqSdCjpP3n8+b8k6cuSdiV9TdKPWfv/WNJfkfT/Pb7vk+f05Ucl/frj\\n\",\n       \"v/8bSX/WvmtJOpb0kn32P0n6qwvayjRbCO9Iuv/42m4ck13/jqR/TdKPP37OyeMxvmHj+KuSflPS\\n\",\n       \"vqQvSVr/qO0tf5ayveC6X5b0t54wrqVs/5D/fD/Ktl3/W5L+raVs/y7m/6I78DEK+j+S9Kfs/78l\\n\",\n       
\"6ZeKJlDSdyX9Efv/hqSHkn788f//+uP/N0043pH0Gc2iUZWC5//7jxdI//FC2JU0erzoHmnmIfyo\\n\",\n       \"pH647+ck/Z8LxvSnJL0t6QXNgM3/Lul/Pkco07gk/SLX2vf/WNJtSa9Kakr63yT9ykdtb/mzlG1k\\n\",\n       \"O1z/V/RkcLKU7eXP951sP75nSzNA9akFY1rK9lP8/DCldaSzIb+nDQH+cUn/V57n/7ck5Xn+/0h6\\n\",\n       \"XdK/+fj7XNLfzvP8n+d5Ps3zfBwbyPP8b+d5vq5ZOPALkn5E0m/ns1zqRp7n70pqayb0ToeSOgv6\\n\",\n       \"9TOS/ts8z9/J87wv6T+X9NNZlj3NvGY6O/5cM0F9M8/zI0l/SdK/c1648wntLenjo8su23O3PEW/\\n\",\n       \"lrK9JOj7RrYfp3/+l8ftfnNBv5ay/RT0wwZOnkYpFtHzkv7tLMt2+ZH0r0h6xq5ZWDX+uMh1L8uy\\n\",\n       \"PUn/smZI9y1Jn37c3n/0+NKepG64fVUzgFJEz0ryxfGeZvnPracbViH5ON6TVJV05XfR3pI+Hrrs\\n\",\n       \"sj1321P0aynbS4K+L2T7Mbj4Fc1qBv/8Of1ayvZTUOWiO3DBtEjo4+fvaRYm+zMfoS3lef5I0lqW\\n\",\n       \"Zf+upD+U5/mfzbLs70v67/M8/3W79JuSKlmWvZTPCq+kx0h9QdN3NAsNQs9plqO9L+mmZiE+SVKW\\n\",\n       \"ZWVJV5+iv8+Fv0eahUL7H7G9JV0MXTbZfqr2jJayvaRFdOlk+3GU4m9oJjd/NM/zyTnPXMr2U9AP\\n\",\n       \"W+QkC38v8uDuS3rR/v87kv5YlmX/RpZl5SzL6o/32N9Y0PYi+hclffXx3z+qsJvhcYjv70v6pSzL\\n\",\n       \"mlmWfVHSH9MMjRfRr0r62SzLXsiyrC3pv5b0v+Z5PtUM6NSzLPujj0ONPy+pZvfek/RCCP1lkv54\\n\",\n       \"lmWfybKsKemXJP29fJac/CjtLenjo0st29JMMWZZVtfMKSpnWVZ7rCyLaCnbS4IuvWxL+uuSXpH0\\n\",\n       \"E3meHz+hvaVsPwX9sIGTPPwd/4f+qqSffxy6+7k8z29L+klJ/4WkB5oh8v9Y84L9NAj0NUlfzbJs\\n\",\n       \"U9I4z/P9gmv+nKTG4+f8HUn/QZ7n/3xBe39TM+DyG5K+o1nB1l+QpMdt/zlJ/6NmxVI9zYf+/t7j\\n\",\n       \"3ztZlr1uY/gVzSri70pakfQf/i7aW9LHR98Psv2XNJPR/0yzeoCBpP9yQXtL2V4SdKll+/GW4T+j\\n\",\n       \"WZT7XpZlh49/Fp1RtZTtp6DsccXukpakLMv+kWZh0OU5FEv6gaKlbC/pB5V+UGX7hy1ysqQn06UL\\n\",\n       \"7y1pSd8jWsr2kn5Q6QdOtpfgZEmRlqG0Jf2g0lK2l/SDSj9wsr1M6yxpSUta0pKWtKRLRReylfiX\\n\",\n       \"f/mXPxQi+l4UEtvpeGfazbJMpVIp/Yam06kmk4nyPP/QfeBZ3uZ0Oj3TjyzLvifjK3o27VcqFVWr\\n\",\n       \"VZXL5bm+TCaT9JPnuabTqabT6VxbRf0rlUpnriuVSiqXy+kexjkej1OfYr/4XS6XValU5u6jX4uI\\n\",\n       \"ccT+/cW/+BcvNLz5l//yX84jbyAfX6lUSr+n06mazaam06mGw+Ecb5gX/5+/ua5cLhfyyuVtOp3O\\n\",\n       
\"zRs8529Jab6yLEvXweeiflerVY1Go9Q+c12tVueeNZ1O0zrgb6fJZKJqtTonUz6+eN9oNJIk9Xo9\\n\",\n       \"VatV1Wq1M/KDXMGDRqOho6OjOf75syLf+Iz7K5XKXNv8TCaTuXmhvVKplHglSZVKRePxeE6XwDe/\\n\",\n       \"P89znZycpHbK5bLK5bKyLNMv/uIvXphs/8Iv/EJOP73/UV9K8/qUuapUKmd4xby6jDk/aB9eO4/h\\n\",\n       \"iaQk+ycnJ6l/9Xo9fRfn3NumH1wHv1038rxoI9w2xPXha3g6napcLs/pQO9TlJ1qtapGo5H0Af0Y\\n\",\n       \"j8caj8eaTCaq1+uqVCo6OTnRysrKmfF4H+Ap+n1lZUXj8ViNRiPxjOuOj4/nxsrYfc6yLFOr1dLx\\n\",\n       \"8bGGw2Fq9+TkRK1Wa279MD7ajDIwnU71S7/0S4Vy/X11zokb1g9LRQLuFBUjxMT4Z24wznse90fl\\n\",\n       \"U/Sc7yXFcUZgsAgouJJY1EdfAJEHvlD9+vOMNb/9B+H2RSHNz5ErtUV9vSiKRpIx+f+TyWQOHAwG\\n\",\n       \"A00mE9VqtWTEJpNJUgZx/viNMq3X6xqNRkmZwp88z5OBQElmWaaVlZX0MxwOdXJyMqfEHDi5UpdO\\n\",\n       \"ZWA8HqtcLms0Gmk8Hqtarc6NnWdyL226IZCklZWVubl1OULxOQ8AHZubmyqVShoMBmnso9EoAWVA\\n\",\n       \"eZZlGgwGid+AKJ+LaPjgQbVa1Xg81mg0OgNQHNRFw8146E/kp7cT9Uu9Xk/f05eLlu0iYCKdGr4I\\n\",\n       \"lv0+QIobTV8PyGrUA7QPOT/8ueVyWePxOM2PdCqb8LEI6LAGef5kMlGj0dBgMJiTi2q1KumsfqQd\\n\",\n       \"N7r+OeCBeV8EZGgzGm3nn+uOWq02txbH43EhiItrlzbQKycnJwmIwYNoY+Gp95U2mGvmAz4w//45\\n\",\n       \"jo9/F/V7pO8bcBIN0Ye9F8TrCq+IXAjcO+e+85gZnxeVl3u3LsgfZhxPC8yiZ+jjdo86Kka/flHf\\n\",\n       \"IjCIisXbLvL645giL3yeoKJ7fUFdtPKGFoFc/x/ejkajM3JG5AQlRGSiSKETLeDzer2uyWSi4+Pj\\n\",\n       \"pLSyLEtG2b0YPPTj4+M5Beny796YG55KpaJOp5NADWDHjYwrT1fgrgD5PkZ9sixLAMLXkCtI/qb/\\n\",\n       \"o9FIx8fHGo/HWllZSUo3rv0I8uCR95HvmE/4SkQD2UQRF4ESnkVECM/SjaGvEY8yMT50iBvdi6Ki\\n\",\n       \"56PTisBIlFf/znkcwQ5tMf7xeKzhcJgAZwR7gADvAzxENlwXE+2jP/C/Wq2q3W7r0aNHajQa2t3d\\n\",\n       \"1crKyplx+frjcwf19AmjHQGNrwPGALjwSJKP8/h4dmxKjHRCHkGBt75WHMg56GCN8dvnhGf5unA9\\n\",\n       \"wryz/llj/iyupQ+sFXfYz7NnlwacROF3xc4gFqUTuDYqFtqBca7Mx+NxIUiJIaloYCJDI9JEyEij\\n\",\n       \"YGCGw+HcuD6KMX1aYOJ88v6hUKEIYFjE5wlNVKLc6wClqC9xfn3huAHi99Mq4+hpXTRFUBdBm3TK\\n\",\n       \"EwwoAMRD1e5t1ut1TadTHR0dzS1wT8dhaF2pYFS5ZjQazaVfUPjIBe3iKUbPB2OxtbWl27dv6w/8\\n\",\n       
\"gT+g/f19PXjwQL1e70xkwI2+K6ToxbkCc7BZJIfOH9YwkQ1JarVac6mQqDjpD8DQAXEE3bVaTZPJ\\n\",\n       \"REdHR0k5cx26RFLqh4M5riPszbPwHh3ESPNpCj5nrhnbRdKiSKkbX8YICHMg6voB+RoOh8rzfA4E\\n\",\n       \"wFt0AtEv0hu0U61WU7rD9UdMcbrM+Xc+nlqtlu7f3NzU7u7uGcAcwWSUG5d3f6ZHKhzIuEwCPhmL\\n\",\n       \"r1kAMSDV1yL30Sb3+bwAvIbDYeI7AHllZSVFBI+Pj1OUMc414+M5AG70E5FeB94Q8o7Me8rX136h\\n\",\n       \"zC385mOmIhAQ/3ePP97LZBV50QiagxSurVaraTF4LtGjDPGniOJCLJfLajabqtfr6nQ6arfbSaEt\\n\",\n       \"mhDa+N1EiWJbruQJfcdxRJ48LUUvtIgWjdX5EOf2w3iJboAuC0CJER9XiDGU6/ld8skoGa7t9/ua\\n\",\n       \"TCZqt9tqtVqqVqvp++Pj47k5PTk5SXJHRMO9UffQAM1e41BUW8GzWC8ffPCBrl69qq985Sv65je/\\n\",\n       \"qS9+8Yu6du3amVqTouhJBOcobE9HkVuPawpyw4DiJ5oT8+WNRkOtVkvNZjOtP4xATEm44pekfr8/\\n\",\n       \"xxf66j/0GyPK355Wqtfrc+Fs5ou2I3h13qCfAJ0XRRF0S2droKIT6Y6MO4fMJTJ6cnIyx5PRaJTk\\n\",\n       \"Ns9z1Wo1VavVVA+Bxw7Ydu/d142DEgyvdLYmg3sajUbS1Z1OR1/72tfORJv9N8bYdVcEJi5j8X4c\\n\",\n       \"lLgmWP84EZVKRbVabU7HAehYKw5OfL7grxNy6vchn57mYd5YU643fC24fuHZ/PYoD+OmRgadt4gu\\n\",\n       \"DTiRnhxNcIEr8qgQfPecIPfeWShENxyY8ByUJMaiyKAvMuT0gZAcqHR1dXXOS3CKSvV7Qb74ouDy\\n\",\n       \"G8ULP6LHUzSuou/8WfwuimB5lCXSkwBKfK6H7L8XgO57QUVK3EGiyyB9RvnU6/Ukk0QDfBH3+30d\\n\",\n       \"HR1Jmnl7ROaoueAHZTQej3VycpI8G+TAUyYYAQpLiZ5BrjBRYleuXNEbb7yhwWCg8Xisv/bX/ppe\\n\",\n       \"eumlBCji+GL0zuc2epIuY+5U+Lp3xc+1tVot3cvYAVrwxQ09hp9n1mo1NZtNtdttVavVVJToBcIo\\n\",\n       \"ZXQLfXej616wh83RHcwv97txdwPjYAj+XyRF+ZXmo9YOYD0S5pE3vHSIQmj/DJk/PDxUpVJJha0r\\n\",\n       \"KyspCh2vJfpImzyfuXbADbmHPx6Ptbm5qevXr+u73/2uBoOByuWyfuInfkKvv/66arVaKqZ2AOS0\\n\",\n       \"qPAWPvG/f+eRFo+sMKaYCeDaRqMxB95wUJBVl1F3ODzSQYGy9xlZc7DhEZMIPH1uvZ9Q/Jy16jJ9\\n\",\n       \"Xjbgcmj0xxQXatGPC0VEred53M4IhMsRYRQoyNFqUYgyeuz8T9iR9ih2JIISDWpEzx+Wbx8m2gAh\\n\",\n       \"dB4yBKzxA7nXzfUOVNwbiF535A3t+e8i4/Mk8jmMsnFRFCNCEaw5Hx0YYticp+PxWMfHxwlgoAjI\\n\",\n       \"w2MsG41G8iLxJD0SwU+WzarsW62Wut2ums1m8uxj/5ljB/yMYXd3Vzdu3NDq6qqm06l+3+/7ffrS\\n\",\n       
\"l76kP/yH/3AyvihYV7juVHgaxPkUDbb3Cz56SDiCXZdRrqfO4Pj4OAGabrc7F3mZTCYaDAbp+na7\\n\",\n       \"nfjAs6h5iLIe++QGhz4xx0Qvfe59/Hjj0qkXGz3QiyDkx42RpxGgqIsjsMNALtKdrg/6/X7y8svl\\n\",\n       \"shqNhiSlQk6cR7xxUm8RyDkQwIgzDgBSs9nU66+/rmazmcDh7du39VM/9VP6B//gH+jg4GBOn/Mc\\n\",\n       \"N7RRlqX52g8fK3PqNinqDbcJDvzr9XqSWeTy+PhY9Xo9OS+kU+A1axKnh/56dMbnBZ5Hfcb10e6y\\n\",\n       \"7rmOsdMmc0Yxb1wjRXTx2tyoyOC7Mo+KXZoPAxcpQSeE2ZkXQYkrPE/DeH+ipxf/RuH4fQhYvV5P\\n\",\n       \"4UwoCsV5qaOnoZhWcP7ymY+3aBwOQiJf+HxRGm1R+4yhCJjEuYsetvOgSBaiorsoikrFveEIpJx3\\n\",\n       \"Ll947fCeYkwACUoRXmH0SP8MBgMdHR0lEIMiajabKpVKKXrHNkIiK6whgI506ulzP+sNw9lsNjUc\\n\",\n       \"DnXt2jX93b/7d/XKK69I0lz0kv56CNejKbVaTVtbW8n4eMrDr0X5OeCBx3iSfMe1k8lE+/v7CZzA\\n\",\n       \"s16vlyJHRJgcPFCYzNiJLnnks16vq9FozIH5CK4whi4XGAieG9cl0QSPeF103Un0+KOedPAa16Lv\\n\",\n       \"DvE0BPPleoBnYXxHo5FWV1fT+gHgABJLpVKqraIWwo2mry/66s8C5Gxubia559nValWDwUA//dM/\\n\",\n       \"ra985SsJ+EvzoCMWlPo8ukGnHw5MIqCJvPAx8X8cBymmPJ/V7xwfH5/RMURamQ9sITyNO9ikU0Aa\\n\",\n       \"+xejOi4bbjt8jdLfCNDPc6ovBThZ5Pm78EYD6r+5H6YvMs5efOio168vOhOkqL9Q7J8XcxVFQ6bT\\n\",\n       \"acqNR8Md23aiXy5kTkWGOSL4WFfiHmbkhQtZfJ5714uiSBHMoNyLUm6+oGO0oYiKnn3e9R8nORhx\\n\",\n       \"BYnMRa8u5rQdyFAr0W63VavVknL3KAnPbLfbyQhIOmMUXQFhGDHkHoHiGnb9uDKeTqdqNBqpPfo8\\n\",\n       \"Go3U6/W0ubmpt99+W61Wa86ro09FnjJnlOzv76tSqWhzc/OMTEYD7V5qlmUJ8G9sbKjT6SSgUq/X\\n\",\n       \"UxTp+Pg4pbyYBzeQHnXy3RE+f+5les4fuVtZWVGz2Uw7l2K0AWNADYWDGp5Tq9XmtpNfBsANFUVF\\n\",\n       \"3MD4mof4HzkrlUpzdU5EUtx4uZywAwuj1m63UzsrKytz9Ti+o0qa9+5p1+Ud+apUKrp//746nc6c\\n\",\n       \"HOT5bEfb9va2vvCFL+jRo0eprw4YptPpXB0YbTqY3d/fPwOYuJdxR7vCWSYUZUun9SKDwUDSaYSE\\n\",\n       \"owQAL6R6vYgbOYvR6Qj0pZksu/MUASF6ivn3eUcH+v3IuQcHiDYuoksBTqLCks4KaYxWuHHy3KIr\\n\",\n       \"GhdOCIYUGcMYFTmPzgMo0fhyPYKRZZmazebcuPyaSD5ejHxUAEX3OWDwugKvbfB2HNxFYLMoolEU\\n\",\n       \"OYlgpSjSEZ/NvAAaYxGzp+BcqT9tNOnjpKKIjisCohNey+QG04EMhnd9fV3PPfecms1m2l3gSlmS\\n\",\n       
\"Njc3U/rQFZJ0qjC8NsXlMXqm/C8p9fXo6CidFwK4IT+PPO/v7+vGjRtpvD5Gl2FoOBwmQ0y0J9Zl\\n\",\n       \"uZGPvPXoHbtE1tfXkwfpEaY8z5Ox43mHh4dJ7tmVw3W+U8Lnj3Sbg8Q8zzUYDHR4eKjj4+ME4rhf\\n\",\n       \"Oq0Bcg8dAMpvQKQb/uhwXSQVOVJRZ3vdR9ThjJu0d5ZlOjw8TPdHvT+ZTNTv97W2tpZ4Q4oIAOAy\\n\",\n       \"7ODaQQggyHUIstNoNPT222+nKBZpP3azVKtVPffcc/qH//AfJqDkZxHRF8YuSe12O60j0qb9fv9M\\n\",\n       \"3QWG3deqpARSGeNkMtHBwUEaG2uS59Xr9bnUFnLpEZEYNUXv0CfmM25dz/N8Li0J/yPQKZVKyXl2\\n\",\n       \"HsHvPM/TDi2cjfMcykuxlTh6youiAItQuv/2XOQiQ+/PiwDFnxc/9+cjFBEknNd3aDweq9VqpZB7\\n\",\n       \"BGcuvNLZA3lA7+4lnOdhwRdH0R49YTzON/f4fMz+nRve6FGd5/XxvSuzIl65QYpeWgRFi4Ddx03O\\n\",\n       \"M5SOg0+POERvie9dSeEZkbPtdrsajUbJc3J+oPQxbu4teorAFQrrhXoKvBuPCGRZlkLFHp3Jslk6\\n\",\n       \"BS8P5fn2229ra2tL29vbSbF5JCE6C71eT61WS5J0eHioer2eUkgOLvhxvhEJIbSPfAPQWCNey9Hv\\n\",\n       \"95Nn6NsqUZxxB0FMM/nfbgBcVo+Pj1MEBcPoUSd4xzM7nU5h5Pey1JxALs940IsKTn2dMy95ns+d\\n\",\n       \"aAovqC+J5+VwiikGkfM8oqfutRCSEljwg78g11flcjlt0XcQSRQGh246nepP/sk/qTfffFMvvvhi\\n\",\n       \"enae53Nrjn4dHx+r0+mo3++nPnvE0vvsa8r1qzsB7DZy5wJHoVKpaDQapXoOn6OjoyO12+20zqXT\\n\",\n       \"g0U9Aun1Iq6fIHdeAEWAJo/++RogyhN1huuW85zLSwFOnIqUuit3yBWFpwq4Lxo9v9YFwZVffM6T\\n\",\n       \"DF5RiLMokhDvkWYTtrq6qoODg4UAyckP7EII/EROH09E5d4/DJLn6KXZoqYvfj/fedqHau/oFRUB\\n\",\n       \"PO5xPkVA50K/qN8u3P7b5+EygJMiQMtPTMXE8Cq8ByRg2LmeuokYnYNvk8lkziA6CPVnS6cnO7oH\\n\",\n       \"CDDBu/J+UXTna4o+4TmenJxobW1NR0dHc+uQ53hdAu3WajUdHx9rZ2cnpTvoCxFCV6wnJydzx5QD\\n\",\n       \"Jg4PD1UqlbS5uamNjQ1tb29re3tb7XZbm5ub6vV6yvM8GRwMheublZUVHR0dpV1LhMVjvz36w7gG\\n\",\n       \"g4FOTk4SKHLDjHcP8EC5D4fDpLv29vbS9bF439fkRZGvX+SKOS46RIwx+LqF7254vSDZD1uDV6TB\\n\",\n       \"AL4eWfNn8T/8d17SZ98t5LpwY2NDt2/fTp+z7pBlIl6NRkM7Ozv65Cc/mebR0zKu54jisc7ol9se\\n\",\n       \"6RR8Ai7oe1zL5XJZ169f187OTroPYNtoNObAFHwk6kPdzu7u7plIU0xrQh7tBGQcHx8nvZ/neYqa\\n\",\n       \"wudSaVb/w5le6BOADOOg/9JpmqqILkVMPBqiGEkpMj5FBiB62R7Gk86erOftegoogpain6J+F0Ub\\n\",\n       
\"ivrL/z7xT2NcY2QDwfCtwPw4cPBxudJnESBYLihxvN6Gh+r9uwhWPITqCtsNl/MvzouneWKKzucp\\n\",\n       \"zstFUwSnkT8+fucByszlNM9noVDSESghH6sbQNINGHg3ENLZd+owN767LK43PGPkhvbH47GazWba\\n\",\n       \"9bO5uZmA67vvvqutra1U7yGdPeMDD5pUI9dw/WAwUK/XU6/XS/NLvQcRD9cX6+vrunPnjt58800d\\n\",\n       \"Hh6q2WxqMpnom9/8prrdblKmWZalrcMYBJQuqRUHdRGQ0YdYF3TlypXULsXHgCfWnusowObR0dHc\\n\",\n       \"WR/uibo3epHkMu2AQJqvDfJ+xj57ZCtuKeb6aAN8h5U7O74d1q93z57/+XHD6ilLP5wQfeXgcTKZ\\n\",\n       \"qNvt6uTkRD/+4z+uN998c64WC4BGP5FrT+MQgXFdiw6nloo23HGRpCtXrkiS7t27Nwf0X3rpJT18\\n\",\n       \"+DCBEQextE27h4eH6nQ6CRz6kfxuT5gX17kArXq9Pgfo3dbx7PF4nNqG7x5BcSB5nhMvXZLIiRsc\\n\",\n       \"F7SIvIvuO2+AcUFFA+gTEJ/H/7GPi/qyyMDGBRqNa7PZfOJOI0fbCHdE4TzbkWz0zN3wubH0diMV\\n\",\n       \"8YB7/Hf8PipUlFKMiPl8u6cDwRsfcxHw8nm8aCoCovQ31kV5usFlIypkf0cHCtT5RtqFe5l/B5wo\\n\",\n       \"Wv52Im3k/UFp4r36rjiupf4Cj5aQOy8AIySOjNHfLJs/HKpUOi3GzfPTY+lRaO6ds8MGg0PE5N69\\n\",\n       \"e+m9NJ7b39raSpEZ2vFIBn3Gw4N/bsyguIUUXrTb7eRxHx0dndlN4YoanlDzwjjgp8s5vLloQpch\\n\",\n       \"y16X4PojAm7IQXnRSx7hd9FnyEmUF+msjXBHwKMmvu7oFwe78T9zyX3UdlSr1XS4IeCfefUUo9ed\\n\",\n       \"IGMHBwcpAkKUkAgCab1Go5E+J/qAnK2urqYic+l0DY/HYz148EC3b9/W1tZWeh59j4AtHtvPvDnQ\\n\",\n       \"AGRlWZbW2HA4TGkjb8f1yHg8TnVwOADMkRd7e9TkPLue5vg8gfy4KHbQ/4/hzUURhuhFx+tccBdF\\n\",\n       \"Qnhe/Oxpn+nPiQYnEn1ESD1SE+/zz2IUo8gwI2x4iVRzg+i5xqM90Vv035Gi0vH+RpDmz4g7oRZd\\n\",\n       \"6/fQDwxJ0RwXzfdFURE/XVlHhRBD/ShPL3aNZwsURbJ8J4qkFF2Q5j3aWESMFwWwoJ/MPZES8vIo\\n\",\n       \"XyIRXH/t2rUULaCY9N1339X169eTMiLiUBTZ41n0kVA6hxiylfrk5CQZ/0ajoVqtphdffFH3799P\\n\",\n       \"0RdP22BAut1uMvxRHqfT6VxxHuOMCr5UKqX6BwzldDrb2dHv9zUYDBLYcZ6j1DmfAmo0Gtra2tLa\\n\",\n       \"2loqfC7ayXNZIoLS4vOMfC7ROXENu2MU1yv6ynnvxsyNrl/DHBU5lTyfmhB/NxLAh2jgyclJ2t2F\\n\",\n       \"Qa3Vamq1WslQA8i/+MUvpq3FXmvh43HekKJxsAT5Rg523NC/Gzdu6N69e+p2u6ltr9fAKfCXdhIx\\n\",\n       \"ZfwOIvjOgRXXMQYH9dimSqUyd64S68Vf2On3eaG46zzn1dPQpQAn0uJiUunsdkvprPFeBEyiwERD\\n\",\n       
\"sGjhn2f4i54bjQi/mUxftDHF4Tt3/PtFPPDaDIQBpemFwBg5ThR0PsfoBc9GeJ4kQDFKVPSZG+Ii\\n\",\n       \"MBQ/cwXlB4Dxcx74uCzgBIpKyOc78pbxudx4sZyDVk7KRIE5yPW0j3v3TnweUziS5jwi+sj25aOj\\n\",\n       \"ozSGRqOhR48eaW1tLX0/GAzU7/c1HA5TcepwOEx1H3meJ0VH2x7hIzrj/SHqc3h4mIw/r2nHC3zh\\n\",\n       \"hRf09ttvz+W9pdPIEGum1+vNnc7qz8GAeWjcQ9YeZfLDpsitRxBNG6SnmBs/R8bn1SNO/n4TX4dF\\n\",\n       \"9XMfJ8UIUgQj/p3rJwfi6Jaow1jrtOv6gdRIUbqAtiNQihEvl2lfSxSUAkyGw6FWVlbU7/cTKK9U\\n\",\n       \"Kur1ehrW5XRPAAAgAElEQVSPx+r1eumVBq+++qq63W46D8fXGvYFYz4ej1MxNPO+v7+f1oNHuxuN\\n\",\n       \"hqrVqj71qU/pq1/9amE0xNO/nU5Hq6ur6dkxosV683QKbbL9mWsdaHhKmDVA/z3KRxlB1Fk8qyha\\n\",\n       \"7wXn58n1haV1okAXUdFigGHREDpKQ8DPi0Kc542cZ+hoPz6b53vo3Ptb9Ey/p16vz70h1vt/Hm9c\\n\",\n       \"qFDOCBJ5QhdCVwJOrhAZf1EfXHl4P4rADuPz633eI5/d03I++5xGHsQIwkWTh43jWKFoRH0OpdN3\\n\",\n       \"s6AckS88Pdp2hVAqlVJOG2XiYXjuQem6d+XrhjHQNxQb6Uc/7Gk4HKbi0U6noyzL5nYkdDodHRwc\\n\",\n       \"6ObNm9rb20uFrMgDyhjgjOIjIjOdTtVqtdLaAHwBuJ9//nn903/6T1Wr1eY8U49AuTy5EvXP+fGX\\n\",\n       \"ybmsOf88rVMqlZJBc17FCEwEO8iqp46IclHwzPh9Li+aXM9KxccgeK2TdPY9Ng5IvSjSPfJYc+V1\\n\",\n       \"Dp4yiMDat6BHpwj+kTqhn1evXtU3v/nNuWuQt0ajkXa7IHsrKys6OTnRzZs39frrr+uVV15JkRn6\\n\",\n       \"Op1O0y4j1szBwUGK6CFDnEjsa3djY0ONRkNf+9rXEqDhmuiolEolra+vJ7mODhw8ZltzUVrXHSPA\\n\",\n       \"NLt7aMPllXucXz6n2B8+90hUUTDgPBt3YRIfjfzTXO/RjKIohBuxmLeN0ZUPS4v6+jRj8IXi9/iE\\n\",\n       \"8WIyogaOOPkMpO1eeARgLkQUKjqAKIpQOZgoSrcUXRvBm0c9YjqjKAR83o/zbRH4KeLxZaMir1I6\\n\",\n       \"+xoEUgzRCPGel0qlks4PcCXgQNRBCafLesG1NF+g5usjAr8YwcJzRYkxL+TrO52O7ty5o4cPHyrP\\n\",\n       \"85RH530kAOROp6P19fX0MsxGo6F2u53qaYoMMAa81WppdXVVrVZL165d05UrV/Tmm2+mAw09ZeMA\\n\",\n       \"gXngDAsUfVTyzg8Hbhi8RqORdkWUSqW5tJmnn2I6A6OGJ03qRzrd3ux9JoLiHvNlASZRZqJzIJ11\\n\",\n       \"OpFXZDjLsgQ0pfnDHpkTj/ph+Fjzrv/8OR4diA4S17mMcF7K1atX1ev1Utv0tdFoaDAYzPWl2+3q\\n\",\n       \"6OgoFVr3ej3dvn07FYYfHBxoOp0mkMvrHTjLh0gKMsazWCdbW1tqNpv67ne/m6KIyA9zgKyUSqVU\\n\",\n       
\"d4Je8JOdnbd8RlqLqInz0qMZfjpyBCE+V/AaZwQeuwMSoya0g+1a9K456YLTOk9jUIquiUo1eub8\\n\",\n       \"/jDg57xnezvntRk9iaJ7QKEOLpikWq2WlCCH8OCV4TUTpuP+IuNNuC8KUVFfI2iKAMOvj8oJcqGN\\n\",\n       \"URVXIg4gFwGM+BNTI0XXuyK7aCrqY5GiLIrOOeDm1FX47qAAxeRnl/hceoE1oVeUoSs6nsmPH2jm\\n\",\n       \"USjeNQMoIv+OsalWq+p2u3rhhRdUKpW0sbGhnZ2d9OK8d999V+vr66rValpdXU31FZubm+r3+2q1\\n\",\n       \"WmkcACrm0iv9pRlg63a7+sY3vpFSqRh1D0W7LACQ4AWASzpNxWRZloADaVIHj4T02WHDlmbfHRdT\\n\",\n       \"S3k+/wK/6XSainIxen4Sp89P1GexgPkiyKMmEfxFUFK0vvmMuSCKwXceMYopyxiB4bn+WTTi/I1+\\n\",\n       \"cKeOZ+HAxSjXeDxWu91Wo9HQ/v6+JOnb3/62Op2O8jxXq9XSa6+9ps3NTeV5nuqgqBva3NzU/v5+\\n\",\n       \"KqZFpgDJABgOC+x0Oup0Ovr2t7+dapMAx0RcfJ3zd6PRSLuKWO8uOzzHSww43A2ibZ9H+EBqxiOG\\n\",\n       \"zLPz1aOtFHxzjTvQ7lx5Oq+ILsVunSdRFHoY5p8tMr5PIldiRSmMqCTOa+e8PdtS8YL2v3npFH0p\\n\",\n       \"Mr6er0YxuwDW6/VU2R1DsHEcUal4SK5IEUTQwT2LFBY/GBGMnxe+RYMZjbYXv0U+LgJoF00xxege\\n\",\n       \"u6Qzf0MoOQeB7u2xqJl/VwoOYmiLfvgr5d17gSIYcY8ryzKtrq6mLZ3SbE6oLyHkvbm5qbt376Yc\\n\",\n       \"/bVr15JnmOe5PvjgA5VKJd25c0dra2upePbGjRuSlA4k9L4hL+wIWl1dVa/X03e+8x01Go1k3ON6\\n\",\n       \"YPx4iHyPUXT+MX4Al/MhplggT29xwixGJAKkWO/AgVxe8Bwjix758fV9keQA2yMXvvadf1FnezE+\\n\",\n       \"12JIoxfuhhPiHsjr+5AzqfgEW08B+W6cSqWi3d3duV1S3s7x8bH6/b6uXLmivb09ffrTn9adO3e0\\n\",\n       \"sbGhjY0Ndbtd3b17N0U5Op2OJKWibV7pgLNA9IgDBt999139/t//+/XZz35W+/v7+u3f/u3EJ/gT\\n\",\n       \"X5ToRa0e4QOgwBPptP4E2cahwWnwaCvpFwcTlUpFzWZTo9Eo7YLz4tsIVvmbscbyBtfPHpVdRJca\\n\",\n       \"nBRFTfiMgrgYOSny0p8mQnMePcnoUSAUc6VuIPCKHWzgIbohkzS32GiL7wg9YvC5H3544dV5/Xc+\\n\",\n       \"OblR8ghNETigr35t/OF6H3NRBKVojtyrWhQ18fFdBnAC8HAA4t9FEBG/q1QqqegOmUIxFM2r16AA\\n\",\n       \"5LyCH9Dsnkw0HG4kAAO0t7m5qcFgkDw8j+Ksra1pNBrp7t27Oj4+1tramvr9fgIopCIBXOyGODw8\\n\",\n       \"TFEKP1SLtM10OlW/358z1Ovr63r48KEePXqU1hmnbjIOz31zDbUIKFaIHD98oZAYHjH+uFOK+YC3\\n\",\n       \"7Fhg3IAi6RRYk+9n94M/w4Fn9OrRK7EG5qLoSXo0ppil02hFr9dLL3b03VpehySdvpLB6xe8wJLr\\n\",\n       
\"mBtPdfqOFAfu6Arko1KpqNvtqt/v6/DwMNV3wX/08mQy0bPPPqt33nlHq6urOjo60gsvvKA8z/Wt\\n\",\n       \"b31La2trarfbOjg4OFM3+PDhw7TLplKpaGtrS++8845Go5E2Nzd1eHiYouXb29sp8uL6nK3ObvO8\\n\",\n       \"Pge+OPjx9c24PApCfYx0un3fnV/kEh3AuT882wt9mV946uvOHQB3uD0yC6hbRJcanCwihI3iokWe\\n\",\n       \"qBv9mI7wlIrn1NwzeBpC+Xvoscjbkk4LIYuMludVvU8uOIyB0DRj95Coj4G2FxnvaNz9f8YCuXFz\\n\",\n       \"QiFEfkWQyPjPi4JA/v2i6FURELkM4CT2MRoW5tprRTxaVS7PzsygfgGj6vJQFBL1iCKGlTbxihyo\\n\",\n       \"uIfrbfs5G6VSKR0QRr0EtVGHh4c6ODhIAOT555/XN77xDT377LOpaHB/fz+BFHLrtVotHRrFzh5q\\n\",\n       \"Vw4PD9Vut7WysqJGo6Ht7e3k5XEYGzwplU6L7QAWrAd2YLhOGAwGc++U8vCyA0PuZx2hwKX5NzTT\\n\",\n       \"LnKLkfIcOoaWMHcE4w4KXVY8MoFR8JqAi6AYCZTmXwUSx1aka46OjhL4RI+5nHvxtztzGFl4ESMH\\n\",\n       \"ADiXe/rkfefv1dXVFGV79OiROp1OknPp1Dms1+v64IMP9JnPfEbb29spRdlsNnX16lU9evRI6+vr\\n\",\n       \"aZs9RbBZlqW6r8lkovX1dR0fH6dtyScnJ1pdXU1Ag+gNssXuIYA1AIL0Dueg+FooOgbAX9QHT5Fx\\n\",\n       \"1+MAZ7dBzO3h4WFyQpgj7oenXjsVo2vu9PiaZE6/byInRUJ9HlCITD7vWhjn3ngRFQGUon55uyhy\\n\",\n       \"R6zSfL7NF07c7iXNv2mVe/k+9qMI6eJlRPBQ5HHFyAb984Iq+PW0Bv+8KJePB4V8XjrN+1TUtivv\\n\",\n       \"8+bmIol5iukT6XT+ixQyvMGz9+hBBGkxouVpBBSj70yJIXNASpFCl07fXgzguXr1qo6Pj1WtVlOq\\n\",\n       \"h/6/8847KYqwu7ubvCvAjG9FrtVqGo1GarVaSW7xVvf29hL4wqvc2tqae9mYe4+sK093+Qv7PDXE\\n\",\n       \"OSnw19Ok8N4jXyhzNwSACnd2uJYQONd64aJ0NprrwJ4x+Prjt6erLoq8765/FgGROE7677s3Yo2T\\n\",\n       \"A2v0hPMBHeVOoPPR0zX87YA9ptmm02kCDb1eL8k7RdqVSkWdTkcPHz5M87q6uqp2u63d3V11Op0E\\n\",\n       \"1G/cuJEOajs4ONDq6mra+n7r1i29++672tzc1IMHD3Tt2rUE1Pf29jSdTlO6Tzp1+Oijgw1qWIim\\n\",\n       \"AlRcTohGcr3bJS8XiM6wyxipyvF4drpspVKZO5TR5QDe0lfa9+exzqNNOS9defGxQp01wNCTIhiA\\n\",\n       \"E5Ss52+9zSjggIkIIiLQKeqj9yca/hhxWNQHNyr+zDi5Hg3xNj3MyWdFudaisRQBDj7zQr0oeEUU\\n\",\n       \"+xUpGk7a8blyBF7khfnnEbhdRvJ8bJEc+DUoEul02y5Kg8XuEQ4PmfpcIRNe2Q8xr34GgRuXIs/F\\n\",\n       \"PVN2VjSbTe3s7Gh/f1/3799PzxsOh+nY9itXrqjX66XCbtIYk8ns0Db30Djbh1DxcDjU3t6eqtWq\\n\",\n       
\"Wq1War/T6Wh7e1sHBwdz50YAmtABnFLpB5nBN2n+3VSSUj4fDxmZo4+ef3c9g7Hzc3jgIUXsXO/f\\n\",\n       \"AcB8pxRtY5zj8xyEXzZy4xQ/j/pWOtXVXgTru5HifQCMWEuCIXf95GvDvXr0ImCS38PhMJ32Wi6X\\n\",\n       \"tbOzo+l0muYOmTg5OVGn00kRvt3dXfX7fR0cHGhvb0+S9OjRI1WrVT169EiS5opTKTy9ffu29vb2\\n\",\n       \"NBgMUj/yfHY6OClPf5N4BL6sb+QV4EFxroMaZBMggby6foGfDjb52yMn9MXfZeUAMkZzXQY8MglF\\n\",\n       \"/fYk+34pwEmkCFDOIwcmfgCSg4kiAxoXhLe36NmLDDDPY4G4MXBh8Jw4HlNUPrEYzp/D2Lyv8f/Y\\n\",\n       \"x9h+EViifxyh7AdGuSFzKuLfIkErAkdR4S+KvEQD7wAlem2XIYISCyJ90cNL9ySl+UiQdDqvflKo\\n\",\n       \"e5duOB2cuGFDMXtEzdeKA1/IQS73EOUYDAZqt9tqtVqpgM6jHVevXtXdu3clSbdv3067UDjOmjoa\\n\",\n       \"Uku9Xi8p2UajoTyfFRSym8W9YbZuNhqNubfTxl1vR0dHc4WBPhf0gR+iKOy8cb5JSkaAOYxpMD+N\\n\",\n       \"NxpVThb1GgCMalxXDnJIJ/h8XBZ6UhTVDU0EGtEgeRTNgbBfx7XS+duNixzEGHWGpxhVQFKj0dDG\\n\",\n       \"xoYePnwoSXOGn9cMfPDBB6rVajo5OdGLL76ow8NDHR0d6eWXX04vI5Rm9U+DwWCu8J8i6StXruiz\\n\",\n       \"n/1s6kuWZSlaA6hGlrFjTkTvqtVqAjiM88qVK3PRR8bqLw2Evw7MPTLFevHaNJ9P6s2k+WitrzNf\\n\",\n       \"i1Hn4VC4bMQIYRFdCnCyqIMx/LToWlfsi4x+UXvuSS0y4ovaKopU+IIqMqxMiEcOpPmdOd6eLzL/\\n\",\n       \"7cas6CeOORqyRUrPt7vF02adT867+Hlsu8iLiv1b1O8iKrqWti8DOFkkr274fQHHcKqHoFEcnlbw\\n\",\n       \"NICfVuoy4VuJoyfk6TWPBKysrKQQsXQaYeFtqZ1OJ9WPdLvd1FdqSO7cuZN2K2xtben9999PefjB\\n\",\n       \"YJC2xWdZltJBHELlbzt2Rea7awiTk9JpNptz4X/4wuFw7iHjMbfbbdXr9RTVIdKysrKidrs9B/BQ\\n\",\n       \"xvFlmvAR4wWvYkqGfpOi8iiK9xfZyPN87mVsDv5p4yIp6k3vu4OE6JnH610HFTmCRW2zJqRTwCnN\\n\",\n       \"p4ijTme9RaCEnCFT3/nOd1IhN8+aTCapDqrb7aZC8Pv37+vVV1/V6uqq+v1+OlW2VCqlc3sAJBw6\\n\",\n       \"mGWZer1eems1ckTqhQgOaRppvobGdQNpo9FopI2NDQ2HQ7VarbndY35elsuZF867I89n/qwYBYlz\\n\",\n       \"hmx7eifaTuagSFd71OQ8AH4pwMmTyJHYede4sY8gJTIvKm5vI3q1/I6LIAIRb889hggmYhTEF/J5\\n\",\n       \"nrA/w+sQioyhG+yivjoCXiRI8RkeOnUqmhtXsJFvRfNS5GktIh9TUQTlIqlIzqTzAbhTTKn5fbEu\\n\",\n       \"ATmJ9/s9ePIxhcN8RkXB/XhqpdKsKHZ7e3vOgyLsjbePDOF9Pffcc5pOZ8fGk6bJ8zwdPe/e73Q6\\n\",\n       
\"TVuDvX2MXqUyO2GTQ+m8OPLKlSvKsixtoffCWEDG0dFRqhXAgFQqlXRwGump+J4d38kAEflxeXXg\\n\",\n       \"yLZn37HjxJxyvobPq7c5nZ4e3oVBuGiKssX/nqIq0p2u2xi/R7ljhCTyIepVCH7HKJRHyxyUAzzg\\n\",\n       \"JXNAZAIZ5kRVj7CwY21nZyc9KxpsUjakVJDvWq2mwWCQZIzD3dBbRTIiKTmJjJs0zv7+fvru6tWr\\n\",\n       \"iZ9ZNr8zjD4SEUfeiorp0QFuC3CiIe+v82uRQ1bkhOV5nmpYnkSXApwUoadFnvR5SEsqTm3Edvx7\\n\",\n       \"L9RZ9OxoJKLRdSF1Y74IGPiE8TmTR1/8gK0IvBAcF6oIYOKzPXriIWrGGt9YCbmH5+Mo+ryIV/Fv\\n\",\n       \"H8+T5ioqq6L+xe8vA7msMseLgGzRPX4tcuLjdOXioVWXhQhaUNQo4ghivXjW2yiVSsl7ROlzD8Ye\\n\",\n       \"gEI0gpf0cWAaY+BE2GazmeozJKWzF3hXDRS3le7v76edESjcfr+feEf+HrDhp2y61ww/SCG50gWA\\n\",\n       \"+dpwj9ZTOniQXmTJdUS2mJOi9cIzfY0y15VKRRsbG+mguyfpvd9r8r7HtCXfF+nJeK80X3Pl/9N2\\n\",\n       \"kV7wKBTy66k9AHXR+gGYeJu8o4niV98R5dvwXS+zc4waqGq1qvX19bkaD3bvcJZJtVrV/v6+Op1O\\n\",\n       \"Aiu+TgHHRFLyPE/gGzmiiJgxX7lyRcPhUF/60pfmgCG8lJTOEcKWeGG7g7bo7ERn0mtOWEcAuuho\\n\",\n       \"+y43lwGf/yzLks5wGSmiCwcnH8br/bALNHo9LrQ+oT5hRdcXUZHxBzV71XiRV+T38Bl98fwqffX+\\n\",\n       \"RiUXoxBxbA4EihY+/TlvW5eHAKMR9H5HIOb9L5o7n59oCJy/RR4Vvy8jOIHiwuez8/rrCtXBb0yh\\n\",\n       \"OT+j8kcJujx67Yo/w71Pwtm0hRG4c+dOUkhEU0gBofQIZ5dKJa2trSnLZgdDDYfD1DYeI6+Op5Cv\\n\",\n       \"2Wxqd3d37oh6SekMCt5zkmVZMtal0unbjxkz/ZhOp+n9IKVSKaVviiJF0ingx/D5IYa+3Rd+Mh6P\\n\",\n       \"/PB3rBPyNe6yEL1KijABZOVyOdXFeI3QRZHLFmMpchijA1PkuCxyBF0PO7D3M0gcvKG3nDzS4k6d\\n\",\n       \"ryFOXyYt6O9GIm3EGEktOgDpdrtzZ5jUajX1+33t7++r1WqlNGCn09G9e/e0tbWlLMu0trY2B/4l\\n\",\n       \"pTeC8xlgws8NAUhTd8U9bMOPgAfQk2VZqteaTCYpYuF89jcN+7zl+emuONfDrjd4lqegXbdEZ5k5\\n\",\n       \"KJfLC51hp0tzfP33wsBEIy2dLRr16yJA4XdRX6Kxj8bBQYNXR7tn5BEXvyYqniJDVCqdHiBEWM2/\\n\",\n       \"X/S3C1Y0anzuJxEWgTTaK/L0igxljI5EAx2ByiLQ6cCEvoHOPRR7Xr8/bloUOYqRr6ioHTRI86k2\\n\",\n       \"vH7kIPK/KLzqgAMjGz0d/9+VjR/Pnuezw9ZQiLQ9HA7V7/dTGoU0iTQzYEdHR5Jminw8Hs+ladg2\\n\",\n       \"6UWN1Wo15fAhL2hlCzOeV6lUSufBcFBUls2KLdkBBLDa29tL70/B0CMrfhgbz4L/yFUsfKWduDuK\\n\",\n       
\"NVpkLLx2yAsQpdOCR+aEtFOlUlG73Van01G9Xv/IMvm9IDfuyFuR/Bat7Sfpdl8bXOu1V65HPOoX\\n\",\n       \"t5Tz7FjP52ke+iQpvbRRUjqXBH3o6wfwwk+n09Hu7q6Ojo706NGjJNt7e3sJxE4mE927d09XrlzR\\n\",\n       \"ysqK9vf3leenKRb4SD+JtOX57K3fvV5Ph4eHCYB5lGc0Gumf/JN/os985jMJELBWkTnebwVgpzYG\\n\",\n       \"XnkkyfV41E3Rlsb5cZvnTqTrY/RLPN6gCFw6Xbw2N4pCXIS0z7sv/j4v0lIUaYhebRHQiUY3evpu\\n\",\n       \"IIuMlAOUGIY8bzwOap401gjSokHz36DY6OW4AvfURAQoi7wm+OS1M5GXsV0foytC/4mRFF8IlwGc\\n\",\n       \"RPDqHpF70UXki93nF1lxBemK1tslUoCxo8gZ3rvH7/1AOaLAULC+RRelN52ebuekMBTDHD3Ou3fv\\n\",\n       \"JtBAf/M8T9EJPNJOp5N2s3CCLCFy6kl4IRk596Ojo3QYFm1LSvIszYzOs88+qyzLEniCV74LAsXp\\n\",\n       \"9QKctBvnkX77tmAP/wOq8jyfe3cL0SWAJrUp/X5fR0dHiedHR0fpjIzhcKher5f4d1HkhsrXLt/F\\n\",\n       \"a6Ri3eBrHfl0EEIRqXR6CizgDrklYse8xfXFfMR0mzt3AGIOOotbzev1+txamUwmCbAfHh5qdXU1\\n\",\n       \"RUNoYzqd6t69e6lwVZoVhz948EAHBwc6OTlRv9/X5uZm2rGEgUYesmwWcXz22WcTSCJFc3BwoE6n\\n\",\n       \"k05lJr1TpEuJinhqBx4gqw7GpLM1Iu4cYEdcl8NLnksU1a/jXn/FCvqMeV1EF/pWYmkx4o50nkKP\\n\",\n       \"vx3QuDEsui+GZYuiL0UUowLRc4+G3o2KG1JfgDHCEMfmExk9F7+Pv/1/9yai0iiKgEReLeJ/EfCI\\n\",\n       \"zygCIQCcGB2J6aE4nqL5eRr5+bjIPUzp7M6LOLZYpOq8imHVmOpxkMi9fI6njcyhXOmT98HPBYke\\n\",\n       \"U6VS0bVr1yTNe5ooGIwrO2kwuJz/8Nxzz6WTZAHCKODhcKjNzU2VSqW0DZOICHUuKL3BYKBut5sA\\n\",\n       \"RKVSSR4m/UbxEvLmnJSjo6O5l5sRHaI4t2gNxiJJjzK5zLJrCIIv7AJizjC8Hl0imuURKQAa65Ww\\n\",\n       \"/WWQb9c5UX/zO8pW0f3cxz3+Sgs8fOTdjR9RN4xp3HYNOHbdCu+Yfz6v1WrpLcPUiEiaq81gLkul\\n\",\n       \"UtruTirlgw8+SLUTgM/r16/r8PAw1Zdcu3YtHW0/mUxUr9e1vr6ufr8/dzgiO8okpWfxpmLfYlyr\\n\",\n       \"1bSxsSHpNGoEeYGuO3ExKoWti2Auzi19dpvG36xlj8byDJ4D6HHZcH1FuvQ8upD9aYuMb1yA0fDG\\n\",\n       \"6xalH/z+okXtBiJWJBdRNCoeQZFODQDfETKL4+R7vDhfZCixaMiLnk/bHlpDKIv4WMRXAAJ9cvDi\\n\",\n       \"fY33FfGzaJxQkbLyMfqiWAS0fK74zq8tuv+iKBp3yD0QDBJKx+fNlTz3uyLzKIl7J3yHwqZYD+PN\\n\",\n       \"Z76LBBnAw/cIGm12u11tb2/r5OQkpUD6/f5cmofnjkYjvf/++2lNkU75g3/wD2p/f3/OwPD2bXbh\\n\",\n       
\"YIQBLRsbG8nT9cgMb3ClhgVetlotHR4eJi9zf39ftVpNh4eHqfDWPUw3bBgkpyIdw9p1WaN4Ns4h\\n\",\n       \"7aInACruzQO82OETI4vU0fhbmi+SXK4jSHH9BQCl/66vkGu2k2dZNlfPAb94hhtBapXYAeYGlvbp\\n\",\n       \"CwXWGEkcROaKLbzMzWAwSP2gTiPPZ+mSVquVonW8WfuFF17Qm2++qfX1dR0eHiYwSpqTNN7e3t5c\\n\",\n       \"SqNer6fzUTxq4zxCrr32qtvtqlQq6eHDh4nHjDcWjkunwIx5ijbOdZHLrvPRwZzrHX+NhUe/IlAl\\n\",\n       \"AuvyQ1v0+bxdaBe2eb7IM47fu9ItQuvS6YFW8RpHjUXPZiHBLPdwInlUxA2FKwz64OHEeI10CoZ8\\n\",\n       \"uyKLxT07N+DufcOP6Mm5Eigyju6NuJeNgXSvMfY5RmnO84yehiLQKmrHPVT66HxljNHYXjS5B+9j\\n\",\n       \"9PmM3mD0KF0GIyCMBtJBjD9rOp2mN6CywyXLsrl3vyB7pVIphYclpZ0Gw+FQb731VjrnhFoTFC9p\\n\",\n       \"FvrIAX7PPfecbty4oYcPH+q73/2uvvGNb+iFF15Iio0UEUdxYxjYNVGr1dTr9RLA4pnc67spqF3h\\n\",\n       \"SG/kmJ0VhNg93eK89uPUnefS6XkTTu5VuqKNkUnalWay6UYL75/5Y3uzyzP1BYPBYO61EhdNLqsx\\n\",\n       \"FUy6RTqVraiTJKVI2erq6lxaQZrXbfDVgTRRil6vlw4g8+gAOp2oR7/fTwDHU4PT6VQHBwfpfBHA\\n\",\n       \"IYcOEokYj8d69OiR+v1+SuXs7++rXC7r5ZdfVrVa1c7Ojvb29tI7dNABFDNzjgkRkclkkl7dcHR0\\n\",\n       \"lNJ9nETrB6iNx2Otr69rNBqldtyZZA34sf5Eo1yvu+53W+k2gLVEhDOmZwDTfk8En6wJd3QoYve0\\n\",\n       \"nUdgF9GFp3UWkRtl/ywqZj73ayA3EB4Oh9kIMALPAiCkG9Mwblhi7p8xxdSJF855uCvLsrSjgDyz\\n\",\n       \"g4w4hggOPGRZlBbx+1iwRX8jSB7OPs/QL4qCFM3tIkBzXnSrKELk98fP/EVul+EsCF/I/O/E+NwT\\n\",\n       \"iTzykCgy6n8zh4R28UJGo1GqV2BHAsaP9rmPMPF0Ok1pD+YCw7KysqLNzU2trq4mxUXhINcAGlut\\n\",\n       \"lmq1mj73uc/pzp07ajQaun//froXWWs0GnO7glBSKHUMFtuE9/f3kzLloDTGgVIHxHBOCYdmoeyb\\n\",\n       \"zWby9FZXV+dkiJRWkW5xAwtoIAXB5zzT15MbS/hD2og30cbIDb9rtZra7bakWb3Myy+/rI2NjXTw\\n\",\n       \"3UVRlNEIqF238L0Db9cvEWi5gwVI5TN457qY9CGAD6BLtAO96G+hRk9Vq9W5N14jy76uPBp25coV\\n\",\n       \"7ezs6BOf+IR6vZ4+9alP6Rvf+IbK5bLW1tb06NGj5AgAbjgYDd1KnwAApFw9YkEfkG+Koj0dBZh/\\n\",\n       \"//339dJLL6larardbieeSfORPPhCNJ+2PZLC/+604Fy5HAOK3BnBVtE2KV23y8g2c0f92t7eXgKB\\n\",\n       \"i+hCwEmR4StS0i700QieB1SikfY2PVLgxsB/IkCJz/britIWHup0bwEEXyrNdh/s7OzMHbftk180\\n\",\n       
\"Fv9uOBymfCmvn48RlsijojMuvEgsArHz5s/nJF4bn+s/MaLDuJ1XPm/xejcEjCeebnqRxCmoHs2L\\n\",\n       \"88l4ve7AIyA+fjd20nxBG4rB594L96ir8LeB4rW7lxu3MuLhoHT5XzpNgfC8RqOhW7duaTAYpBTS\\n\",\n       \"1eWWozkAACAASURBVKtX9Zu/+Zvqdrv62Z/9Wb300ktzR33neZ68SJQ4qYuvf/3runPnTjpR8733\\n\",\n       \"3tPDhw81Ho9THt69MdYBRbHOt263q2eeeSalTkhJURMC7+gDPCiKpLrhdZ3gesRPVfb14S91Y9cR\\n\",\n       \"xZy+tvkfoEbhY6PR0Orq6vdIQj8a4fG6M+E61XWke/VcBy8dpAIipeID2CSllFej0Zh7942ne4g0\\n\",\n       \"+Bt8MepZlqV0DGeZlEqlBFDoL/2iAFtSSufcvXtX/X5fu7u7euedd/TZz35Wt27d0pe//GVtbW2p\\n\",\n       \"Uqmo1+uluhWijr5u7969m87pqVar6na76VBBZJNzd+hLlmUpagJYLpfLevDggZ555hkdHBzMOTl+\\n\",\n       \"ngm8jRFo191FTiQ6hO9ZHz43RP5xOtx2YM9wcAFBDv6IYh0eHs6lnSNdCDiho0UEo2LBpBtXX9Qx\\n\",\n       \"QhCNZszZAk58u5g0D1CKDHUEQtHg0PYiTzmO2Y1OPHHSjb0/n/v6/f6Zo+Y5TTIaOG/LxwDqBdg4\\n\",\n       \"aCkCX9E7iv3yZ3h0Jj7XFVpM08WIUXxOnN8I/CjqvEgql8spZOvbvaXT00UdjBbx3fmEsXOD554l\\n\",\n       \"baPA2FYLCBkMBnOyg4fjBaHsfHAlBnDJ8zwpUn+Tb7PZTICQN6s+88wzOj4+1tramq5evapKpaI/\\n\",\n       \"8Sf+hO7evZuOtncAhVfHmiRK8oUvfEH7+/va2dnR5z//eVUqlXRGCooTuUWWnQe1Wk0HBwf64IMP\\n\",\n       \"5p7JmFHk8JmxU0+wsbGR7uF737YO3x2Elkqzw+oATfSVehj6j+GUZjtCeHa5XE4Rrkpl9qZb6hWe\\n\",\n       \"pMQ/DiqXyyklEQE36xT947sQ4RFGyg02c+Db0Nm2iwzD24ODg2TAJaVIGH3w57tBBbR4Kh1wQ92f\\n\",\n       \"R7/pO8/Z29vT0dGRDg4OEgjKskw/+qM/qvfff183btxIYJ/TY0nBoOPff/99lUolbW1taW9vT51O\\n\",\n       \"R3fv3lWz2UyHuBH9oOAW/cFYJpOJ2u22vvOd7+jWrVspWgg/vS4E3npND/0CQLPuPFKOrongm7Xg\\n\",\n       \"soDt8KiuR75cLkgTMT8UEhMhXEQXAk4IycYiNGle2KHzAEI0kE/y2iH3knwhFYGEJ4ET/03bEaQw\\n\",\n       \"8Xip9Xo9CSDG4LxnQyhl0LkrvachN/Iu2M5zT034b36IVng43EFdEahhHh10+vxyfYygFCnCaLzp\\n\",\n       \"cwRKF0Eo71arNZdvl+ZPW4zRIOYQUMA9Mc2H8uBZbiT5vbKykiI4XBcVNsDItxP6czwiQ4rIdxmg\\n\",\n       \"zIiYbG9vq9FoaHd3VxsbG7p586Y+9alP6Sd/8ifTibDuIcInjHie59rd3dWP/MiP6Bd+4Re0tram\\n\",\n       \"a9eu6Xd+53e0ubmZ+pRlWdoZ5Ip7OBym39PpVGtra3rmmWcknb4tGDDgRM0BkSQKcldXV5PhKJdn\\n\",\n       
\"u5+Itvi8MSfw1aOu/q6ca9euJWBC6J25xnnylEi9Xk/bq10uLooA1aQMIQC3e+YYxaLUDrKPbHn0\\n\",\n       \"z42c1+T5EfJ8X6vV0n3+GgCvUUHOmTeOcPc0CQabSICkVHi7v7+vq1ev6uWXX9YnPvGJFBlYW1vT\\n\",\n       \"r/3ar+nVV1/V3t5ekhGijAAnANeDBw/02c9+Vmtra6rVatrZ2dHLL7+st99+O609BxJem+G6dTAY\\n\",\n       \"6NGjR0nHsq05rmXpFGQRpcLeElFFPt258dQ4fISXHs3zdDIv9kTuvQaFfnhJA5FcrjvPobwQcOIo\\n\",\n       \"KxqU6B1LZw20G8sisFAEKiCPlETA4u0vut/BkxtgJiRGBNxAsKjJ2+N9TqfTM9Xn3g/+RvlKpyAF\\n\",\n       \"9Oy5VhS+K083lBEYOKjwsXsEJQKMeEZEETDxZ/vc+vkQcQG68Szqh6faIsi6DOQL0M8BkDRn2N3A\\n\",\n       \"8517FxhMN3Iuq26s/H0ylcrsxXwU4BF6paYEGXRvKqYiMESsQQAwnwOMiT5m2exgtK9+9av63Oc+\\n\",\n       \"p+PjY929e1df/vKXNZlM9MUvfnEOVBNRINxeLpeTR/no0SP96T/9p9XtdnXz5k3duHEjhYoxJuPx\\n\",\n       \"OIElFDMeLcrw4OBAWZalaxzExh1S8JC/8zxPdThEi1DqRFuZL0+ZwTvfygyf+/1+2kkkzYNqDLSf\\n\",\n       \"tYGecYV/0cRYom6J6535jbIMePA1TjvutXOd83YwGKjT6cztsPG0EO0iC64TAEHU1rEeAJPIFsaS\\n\",\n       \"qCPv2PnMZz6j3/qt39KVK1fUarW0u7urvb09vfDCC3rrrbfmrqUAHb22vb2dzjr56le/qkqlop/5\\n\",\n       \"mZ/R0dGR1tbW9OKLL6b0vOvP6NgAxpvNZqpbgvy0VeQG+eRz5o417rwAPCDXRDWZM2p84CvtSzNA\\n\",\n       \"SHppUaTbeY+M0J9LV3PiCtoNHxQBhhNGmO+iAV3049e7IS66zhXBouhJBEVF0RLvc8zTNhoNtVot\\n\",\n       \"tVotdbvduaLB6Cnz2z1ljB6nCMJX0DPXR4ASxwQfPB8Z+VQEFiOoWZQGi6AoAotFcxR5Ge+Jvy9D\\n\",\n       \"1EQ6relAMXi6iTnxhepzIJ3m2B1g+ntouA5gPx7PTl/Nsiy9XXc4HKZ1Eg1CBK/wlIgeJ1jyLC/i\\n\",\n       \"4zpOaC2VSmq1WukkyldeeUW3b9/WBx98oJWVFX3iE59Qv9/XG2+8kUC17yqg/5x+ef36dVUqs1NR\\n\",\n       \"S6WSfvVXfzWNtdPpqFwu6+rVq+kcFC/0ZXcOfKUNxoScEPGjfsGP//bCVebJj5B3J4C6EbxtT7Oy\\n\",\n       \"y6Zarerw8DDN6XQ6VafTmZs7wJ0bcgdx7oFfJLnuQH6YS4pRMXTwOkbjvBAUEOnvbIoRKOk05b2+\\n\",\n       \"vp54yxxxj58RQl9dL0SnCXANf7nHU6ij0UiDwUBra2va29vTzZs3UwSGqMIHH3yQdtAwlna7nVJF\\n\",\n       \"yMB4PNbW1pYODw/VaDT0G7/xG2l3jgMwaVaTxPuopNPzdlg/8HB/f1+lUiltmXYg69FInAoveGW8\\n\",\n       \"RYdvOliWlNaVz/1wOFS3203rAWDNsxx8+NjclhGpOk+2L3y3ThGwgKKH7/fG6ARGgLbckHG/X7co\\n\",\n       
\"2uIKoogWgZUiY4txhmJOD6+YsXnY08eIssqy03QB+X5vw0ODvhDjGBkn18Y3c7o34REmH4vzMwLM\\n\",\n       \"RQAu8nERuHAwtAgUxrm4TGkdV6DwzV8CB4+lU3lDyfMOGa9NGI/H6Z0apNR8Kx4RGVITyBkKwz0x\\n\",\n       \"FJbXXOX56fHXblgIiXe7XW1sbKQtuVyLES6VZrUupH3YuXP//n0dHBzo4OBA6+vraRzULjhwqFQq\\n\",\n       \"euWVV5Ky/vSnP62f+7mfU6vV0sbGhvI8V7/fT4daAeAwGC4X8IVDqwAxLtP+t9cjsEuIufGws+fd\\n\",\n       \"Jc0ZNue3R2HW1tYkKdU9jMenL3gD7MV172tWUoqAXSRh8OKpuA4IkQkIAAKA8HXLbhTakE6PfyCF\\n\",\n       \"AH/ZPs7ntOeyGx0cwAuf+9t6qe1gXBBRkzzPU1qUF0hmWZZASLPZ1P7+vo6Pj3X16lXdu3dP7XY7\\n\",\n       \"pQi5ZjAY6ObNm2lXCjuujo+Ptbe3p+eeey69t4o1h/H3LcPIICkl5JJ0lNsfSXO7kTyq59GWmA7H\\n\",\n       \"vnikgwgLgN7TRx6RxDlhTXqBPVE/LyFgbTxpt86FbG9wYXZwIRWnUvzaaIRcoRaFw3ieRwYcYXM/\\n\",\n       \"//tvfy59g6JhdIpgxz8n1Otj9+vcmDivWGws6hhOxpsFyLinHBdq5J3zyHlJn/2ApNivyBvnl8+H\\n\",\n       \"3+/9iLyP/HV+xnqUIn5fNLHoUSrwEgXjvJROFbLz0UPinF7JwWe+u4t7UAK+o4a8MvxFMaNg3Et0\\n\",\n       \"kIhXiMFgyx+RChQfHv/Kyor29vbSoWoUndJfFBDvFsnzPBXt+vba8Xisr3/967p165aeffZZvfHG\\n\",\n       \"G8qyTDdv3lSWzcLj/X4/RW6QdwoVY3gfJQ94gue9Xi+dGkuhKnPjRjKerYKihgA4eZ4nrxbD5DUX\\n\",\n       \"w+Fwbr3DDzcCzE+RAr8sKcto1D0lEl8LIJ3qAebD66QAWhg9drBISsXZpBYoBKUWSNLc26aJaHGQ\\n\",\n       \"mhcYs67QlR71idFExuf9Hw6HqV+sPa5fXV3VaDTS/fv3dePGjQRue71ekmfk5vOf/7x+53d+J9Vd\\n\",\n       \"7e3tpcje7du35wrDXRfQDv2GH51OJ70UE74y3nq9rv39fTUajZQ29XXucsZ8+P2ARUC02848nx0M\\n\",\n       \"B588Pcuac2AjaU7efX55zqUDJ1Jx9CMCk0UARTobSXGv2g1nBAfx+ecZRf8+euuxX0zsovbdCHmF\\n\",\n       \"OH30RbIIWPE94TtHu5LmlKf3tQhgRT575Xusy0Gp+pgiqOK6+H302ov6tyiysug5Rfy/LOAEQxSV\\n\",\n       \"AEbSgQFGKPadOcXb46Am0gl4kM5jP6eEfhBiBZAAPh3wkqN3BUnaiXQK52+gcDFGKJe1tbUEiCiQ\\n\",\n       \"Iy+ObJ6cnKQCT/rhnrg0MwD3799PRmZzc1N3797Vzs6OxuOxut2uqtWqDg4OtLq6qnK5nA4rc8Pu\\n\",\n       \"awaPF7DHCZ0ArY2NjdR3UqQu/36qLqeKOo8ARn7GA1ECgIqnIfzcCfhC5MYLS+kD25AvWr4BWZLm\\n\",\n       \"TkL1vro8QQ5+Me7oE04lpn2ihsg9ckk7yBtpjlqtluQt1re4MfY5ybIsgWYKszHyDk7L5XIC21k2\\n\",\n       
\"ezdUo9FQu93Wo0ePEnBoNBpzu3OyLEt9Aij1ej09++yzc0C41+vpwYMHiTfoBUk6Ojqa2x02GAwk\\n\",\n       \"aW69Az4AJA8fPtTW1lYC3f1+P8klER3uRW6RV0Anc0f0TzotjKUvjJE17E6yrz1kwKOTkfe+1oro\\n\",\n       \"Qs9EdjAS0yKLrpfmd3cUGT43aNFTj8/2vxelIGK73nb0PB1sgOr9b1CpA5GYgor9W8QD90YweB4i\\n\",\n       \"9nsieIpteqjWx8BzvLgzth+f5UayaK7iffH+2M84x0VANgKfiyIMkEcq3Fh5TQOeH+SREOl094F0\\n\",\n       \"asTY/st2YPiLt040Y21tLSncKLcobZQnoXSPnHnf8/z0BE1PWUizmifSLSidWq2W8snT6TQpsPfe\\n\",\n       \"e087OztzhhsvejKZHURYKpV0//59jcdj3blzR6PRSFtbW3OGnXfssDuC8UenBD5xjDnPrFar6vV6\\n\",\n       \"Go1GOjw8TMWIDqaQdULn/M0c+xrx4kCiAQBGalGYQ+acwkP3XH2N8DlA4KJ360jzp0zHrefwnnlw\\n\",\n       \"HnGvpCRrfuYIJxLHHYsQOg2Qxv1cB8h3HnkE2OucMLYAE4iCbdp0PU0dyLVr17Szs6Nut5te5Ac/\\n\",\n       \"OOPEoy/0o9VqqVKpJDDFYYmeKgEcIKPwlIMHIecncwCYYf3BJwChn3nCGof38MKjX0QlkU/nr9eV\\n\",\n       \"QMyfrweXDcCNg9jj4+O5iGehvD1BHn9PKIKBaJSlxSdr+j1+LwrfDRn/S2fPzeAzD+35c4oiDG4g\\n\",\n       \"+RvwEfvroCgaHdC/gwIWNcLECYg+0TH1BYp1JRB544s0GqnIV69hoX0vmoogg7E7+CoCCQ58Yj+K\\n\",\n       \"8u3R83LlF2VkEUi6KPK59ggBxh1eOpjwNKc0v0WY00TdCMRthxHEcr4JCoXIgAMTftNnvHdfMw5O\\n\",\n       \"fYfZ6uqqarVa+syff+XKleTJ5Xmua9euqdvtqtvtpvfroGwPDg5S+ocQ+GQy0fr6evIYV1dX07t4\\n\",\n       \"UNzscsPDKzr3xQFiuVyeAyikhg4ODtI5Fr1eb24tuC4BaHndBHNBBAFQ5uvK61WIlrILyIGOA/9o\\n\",\n       \"EKTT9MdFEoALWfXjzeOaRVb523WXdFoQ6cXARKiQ66J6IIwi/PHICjKFfOX56TkckK/JPM/TTjHa\\n\",\n       \"xKhHp7LZbKrVaqX05cnJiTY2NlSv1/XOO+/o4OAgGVvmql6vq1arpTOp/KwS1hkAGwBLZJLoKGDG\\n\",\n       \"gbdH3ugv0bnt7e0E8IlMUFDtUSFPFSF/yLM7h0ShmG90lad8yuVy2okGj7PsNPrFmnSHgbXpEc8i\\n\",\n       \"ulBwEv/2/xdFDRaBGSh62UUGyw2iT0Q02N5+UVTC23bDzGLkB6PvR1cz0b5rAgXNhDko8LSL99FB\\n\",\n       \"wXnRhUXA5LxrvY9utCIoWwQknzR3RXNWNI5F1/P3eW1/3OSgkb4x33gpKOqihelz77wgCoK8EnGI\\n\",\n       \"xpkdJsfHxyqXyyml4Uo78s5BET/SqTePEWZXzdramtrttgaDQTr1stlsajgcpkLB1dXV9C6Td955\\n\",\n       \"R/V6Xa+99po2NjZ0fHysl19+WS+88IJu3ryZABTKinqZtbU1bW9va3t7O0Vt9vf39eDBA/V6PR0c\\n\",\n       
\"HCTDRpQGoOCHTVG854dw1ev19PqIRQ4RXiKesSt2+MghiPCN9eLgUTo9CI4CXgwZax4gidKPBdWx\\n\",\n       \"GPcyECDBAbbXc3i0wmVKmt8KD5BzB9OBN/z0KAPy7R46upPdI/THHTRpPmKLbuVlf/TF3yjtu7B4\\n\",\n       \"jw7plFarpXv37iUZYS2QbmQsgCtOSiZKd3x8nI679xNi4ZHLlO+KcvvlBh9ZdBnzKK2DRa4nzeOR\\n\",\n       \"DYrvoxPqgJR1QDveLvPt10P020FsEV2aF//x2XlGtIhcwbqHXWQ0i+5zI78I+LAwPFIRDXtM6/Dj\\n\",\n       \"nqJ7Q9wbIzlsDUNgYsTEyQGC9yn2M0Y9nD+umP160HgRD4uiFbFvMSJ13j1P+r9IJrzP8fOLpEWg\\n\",\n       \"2MPYfO+7qvgM7wKljpLw8zWk+ZQPyh+P09MKcdcVis5DrgAn5ozryuVy2iq8s7Oj/f19SUpbeYkA\\n\",\n       \"rK6u6uWXX9aDBw+U57m+/e1vp+3Mk8lEzz33nL71rW/p5ZdfTuHsb33rW1pbW9Pdu3fTZ+12O8nM\\n\",\n       \"ycnJXFSFAkmeT7/39/fT+3IIbVPfE8PPFG8CArxQrwhoO7/9HkAfR537gWFRDkg9SUovaEPXRD2B\\n\",\n       \"IWRsGIfLkNJxsI3xOj4+TsXNDtx8Z1qRc+fn5Xh9CfqRa2kHOSMFRH3QxsbGnE6lD/F5UX+4LiyV\\n\",\n       \"Skn+RqOR1tfX1e/31ev1tLGxIUkJLB8cHGhra0uj0SgVnlJHxcF9zzzzjCaTiXZ3d1PfSKtwsNut\\n\",\n       \"W7dS3+JWYKLngGG3BfDAZYe1y/hpy9Ml7oSjJwDLtVpt7kRXaRbZjzVyABN3ZCLYi5Et6dQWwV/X\\n\",\n       \"MzGKHulCwEmRQfTw9CKDyWf+A4OYsCiQRdEE/o/P9O9if4uAUxH684XihY9uqPH2JM15Bb5gYjuQ\\n\",\n       \"g5W4ldH/9ol/GtBXFEWJ+eQoSEVeuIM0R/j+3NjHJ4GQ2OcIxi4LMJGU5ts9NmkeMHq0y5WyKx3p\\n\",\n       \"bK1Qls3v9qJd2nDgQl4ejw4P39v1Cn28H9rZ3NzU/v6+7ty5o2vXriVFCgC4c+eOXnvtNb3xxhv6\\n\",\n       \"+te/rldffVVvvfWWhsOhXnvttXTcerVa1b1795Rlmd566y11Oh2trKzo4cOHqe+3bt2aiwAR/uYs\\n\",\n       \"BWTp0aNHSZFSdHjt2rW0gwhjh1cbPTzAP8bCdzehsPFeAQWsvTzP0xZPzrHx+gqAJP1gHh1oYBhI\\n\",\n       \"72AQfAt/rNnwGoSLJE9Jx7SXgwIP5QPo/PA66dQpg//u+aMPPUIlnUZpANfw0wG1A37klD4Actrt\\n\",\n       \"dkqbS0rR7Dyfpe92d3e1tramlZUV3b9/X594fDLs7u6ubt26pXv37qX+MJ5Op6N3331X165dS7uK\\n\",\n       \"SIOgo0kbAX7oE7JCGoa/PfqDrJHmQV49mg1fWfukGeEV7yfyKCrPJ2pCX4hKYadcN7FT1O0TfOC5\\n\",\n       \"DjK53+uE4AfzvIgu9E1pRcAgGqhoEBd5/xFYnBchiIwtMqTenwhuojcQEaR7pY463bMo8tIgFh1K\\n\",\n       \"kv45avYIhxs7FxYMkIeuiwxgHKPzOqJ0+kL78KDICHt7Hkk5b079Pgd0RQBx0TxdJPli8/mm6DJG\\n\",\n       
\"k1CazK90GhXxPLqf2OgF1Q54pfmXruGBsQXTyb105Mp3CLz33nvqdrtaW1tLioXajWvXrunrX/+6\\n\",\n       \"7t+/r+eff17ValXf/va30yFqHgafTqfpUKkHDx6kMDYnG2dZpnfffVfT6TS93A3ZR8FLSt83Go20\\n\",\n       \"qybLsuTV0kciLByyhUJtNptzfKNGgAgPfPSdUF7ULJ2+9dm3ZxKZAgj6mqTfkubOy3Dj6anbmH7L\\n\",\n       \"sizVtJznYX4chOfrY4R30UsHYAFIWMsAOOkUxGC0OU9DOjVcRAzdiaHoU1KKRhAlc6NLv3zbcaVS\\n\",\n       \"SeuQ97v4561WS71eT71eT61WK23ZBQwTMcGQE+kj8kI9EXUy1E71ej3V6/W5HU9e8Mzc03fO5aHe\\n\",\n       \"BLlG1hyYOUCD74ABwEi9Xp97NxWggahJ3NLtYA9g5CDddY/rNPjEGqZ/bjOYF+532Yl0ITUnUTlL\\n\",\n       \"OmPwIrhwigDGFT5tFRkyro0/tFlkjKMhh+neTwcWcYyek/VQM4uVPngVP9cXhZr5P47XozhusF0o\\n\",\n       \"fPyxTedV5J/fuwhcxL45f523DnCKIh8YBTfWkeL8F8nIRZEfxOVy4SctSqdnYpASQB7gLwvcZYT2\\n\",\n       \"uV86lVsvAIV31F5IszmlvgH+OQji2RTOkjNniyx9H41G2t3d1Y0bNxKwWFtbSwCFKAsRm1KppO3t\\n\",\n       \"bZVKJV2/fl2f/OQn1e12tbOzk57Xbrd15cqVOcDCy+6Ojo4S4Njd3dX29nZS0rw4DIPjWzU50K7Z\\n\",\n       \"bKZcvUc5ABrk5sfj2VuP3QtEscLnlZUVdbvdM/LN2sV4u3PghpJIAd/7WiUS5A6GR2MumhiLH2YG\\n\",\n       \"QKGmjrEjL4BpN26+K8yjwPz2KLHrEk8JwBdkkigIsu3nbLhOxMAD8PkNCD46OlK73dbKyooePXqU\\n\",\n       \"1uCdO3f0uc99LoEGTitmnLVaTc8//7wODg4SeNnY2NBkMntpo+tnJ4C8p6IAXgA/PwfH02XOH0+/\\n\",\n       \"kM5FJr1wtshuoosB+YA/TwOTYiNqCT/9O58PACVRGfrru6YASF4oHenCwEn0umP0IhrEaIDc8Ppn\\n\",\n       \"izzo+Bn/u1HwNrxNro+G3z93kBO/p6+gzAhsYuErCzrLTo+ALgIqtB9DrFLx7qTz+BB56IspRoqK\\n\",\n       \"gF0R7+IcxmfEn8hnb8v5ViQTlwWgsBg9PE9/PUTqcy2djVJxvXvNXgNUJH8AEPeuJpNJUt4oC+4n\\n\",\n       \"VEze/ejoKMmTK/XpdFYQ2+v10q6b0Wikg4ODVOD67rvv6sd+7Mf04MGDlP44OTlRq9XSzZs3kxJD\\n\",\n       \"WX7605/WeDxO2yzpKx46L74bDodaWVnRO++8o2q1quvXr6ter8/VA0hKQKrVas05AM5vQs6dTkfN\\n\",\n       \"ZlOdTidt4fQXo8EHxs468nSMFyCyG8o9YCcv3GQLqr8XBqPZ6/Xm0m+SUvsXXXdCesBTZkQ8Iuh2\\n\",\n       \"z9r1CH+ztRxjS7QKcr0ToyHSKbhjcwHANBpUogkeZUCWHcADxh2McvDhgwcP9PnPf17vv/++Dg4O\\n\",\n       \"1G63dXh4mPrCSbCs452dHbXb7QR0m81memkhMsbhboPBIB13j8zDH+ehR5yQazfqHu3AnkinQJDo\\n\",\n       
\"G/qDrcIANGpPkNF4pL5H4P3ZtOF1UpKSY4b9Qpd4DZFft4guNK0jnU2FFIGLaASh6HkX3Uu78Xlu\\n\",\n       \"UBGAWLlf9LzIzCJjEfviUaHYTjTCDiaYXD6PQGPRGN17jtd7CqYIREWQFnOb7g1G3kSe+P9F4LKI\\n\",\n       \"HLBG4AHfiiJiRc+9CPKwLZ4Zn7HwUai+w0M6BQyM3UFYBL1ed5Rl2ZynBSAiWkPBnhdnAmbyfBby\\n\",\n       \"brfbSd4wCP4ulN3d3RRNaTQaWltbS4a8Uqnok5/8pN566y21Wi1JSodWYcw5sGpvby+FzzmYyj0t\\n\",\n       \"DufK81mNx82bN5XnuW7dupVy6BToMt+8MdhfUIjMwg838AAePoOXeZ6nz1HWzIvXj8E/Qu7OS//e\\n\",\n       \"gSNeMsWGjUZjzsCR3nD9wmfu/FwUMT4MbbPZ1O7uriaTSTK2AAB3Lly/OQhlfTA/1NV4vYIXyyIf\\n\",\n       \"pCIAu34oIPd7xJaoG9GReGR6r9fT6upqAg8uO91uV9evX9frr7+uzc3NZCdqtVrauTYajdTpdNJn\\n\",\n       \"vJm72+2m55DWAbCsrq6m+gwHah51lU51MM4qtSDOt2gjkElpJoOkdpFBP5rC01r+Ek2iL/5s/o58\\n\",\n       \"BXSwpgAqpIt4jgceOFeFsSyiCwEnLrzRYC36P6YHitpzA+jILxpd/9tzZjCRRebXu6JyilEfnr3o\\n\",\n       \"efS3yLh7H5g0XuoWx+5GKoaJ4zVuvIvGswgocE9MuUVFWRQtiW27kfW+0G78PLbp/Y5g9TKAEgjv\\n\",\n       \"xFNjeBvR2OCFFoX7fSeOG74YLVpUf4JMssMExUfkhGvK5dk5BZwbwlwTfalUKnr48KFarVbanTEe\\n\",\n       \"z47glmZzsL+/n5Q7yp/zINjNUiqVUuHq3t5eWmMocA8Ru2L0k1f5nDNcMGhZlun9999Xq9VKBZjO\\n\",\n       \"D5S5F3PSFp4vY+33+3NKnDHCMzxT+MBz3HMtcq6YY+aD+XMPlvs468T10Hm5+Y+LSqVSihKQ2pBO\\n\",\n       \"gbinsL1wm7nkb9IhXn+CjDPPyCkRphj1y/M8ySM7XphjjONoNErvInPZZn739vbSCwXRo6TmpFkU\\n\",\n       \"5OHDhymK0mq1EmDHIGMvABt+/o6nVvwsF9fx6ADWsUeeiAQSyfNiY+e9R2C95kOayR4OhKSUNoLX\\n\",\n       \"rmtcB/CZp+Qc0DBn8MzBpV+P49Hv97W+vq7hcJjkgfsXytvvQlY/MhUZzUUGx41ykQfu3skibz5G\\n\",\n       \"QaIxxpC4AfVajUUGPeZNFxn6ImO96O84rvOASex/jLoUUeRFUUSDtoqKaV3oY3olzkWRko7GtQhY\\n\",\n       \"LAJuPp+Lnn/RRLgfZeMH6rlXh7LNstnx6ngZzmPpFHy6N+TFlkQe/FwMvJ2joyNJp2emoCxJY0gz\\n\",\n       \"Q7i6ulrYxng8Tgeh8fbswWCQ+thqtVQqlbSxsZE8RT/Hh5B5tVpNxtzXOZEXjLenMzD2nNyJwe73\\n\",\n       \"+wl84IH3+31dv349bWv1gjyIiIgrc4CTg5XV1dW5VC/f4Z1Sy+Lbhz1a5lG/LMsSUIu7QtwbAAYo\\n\",\n       \"7AAAIABJREFU9TVDv2q12tycX3RaR1J6VwtGJctmLyItSvGSAgKUObBmXaBjIt8isPHdOdPpNAFl\\n\",\n       
\"gAzRAuaQPhDF43/qjySlU5Rpv9frJf6vra2p2Wzq6tWreumll9KBa0RepNOXMZIW5D1NpJccgLis\\n\",\n       \"ui3waI5Hnfg7y7IEgHxt+DX+tvIsy9K5Jg6GJaWieNqFJ8gx/eB+L4D2aIp0GjX09CU6wUEgn7Oe\\n\",\n       \"ACrww0FWEV0IOIkph1j0xzXS2foEpyIjFkHGk6ICRQaU5/v9RZ5LBCb+u+hZ/rtoTP4MB0VFbSwC\\n\",\n       \"You+d+8uRje87diWgzS+cwR/HiApAg1F9SI+bzyzaN6LAGgc40VTlmUp3OzFjMhVuTw7O8TD/75Q\\n\",\n       \"4a0reRSqe0gocLb3ugI4OTlJkQ08FVIGeHL0BSPgZ09Mp6dvhh0Oh+r1ejo8PEwnq0pKCh3jDYjw\\n\",\n       \"uhiveSqVZmdU+Iv7JKXj5ZEHV+IoP89Rk/LxWpBms6kHDx4k4w94ATSQHqDmhBx/v9/X3t5eeg7p\\n\",\n       \"L3jqc0rI3Ne8A3XqCeCfRxFQ8hiTuEUZoObgCR3JQVoX/VZir3/gRFN4gU5gfbpxdeAhnaYekHe8\\n\",\n       \"a9aHnx7Kc5EhUooeGaMfGGn6R7qNPjvIHwwG6dC+0WiktbW1BLqGw6Fu376d0ofb29va29ubS82S\\n\",\n       \"LpGU2jk8PExABdDmYIQUiMsVET36jPz6+if1g0FHFj2tgxwS0YB/jBee7+/vp3uQPyKktEekEp3A\\n\",\n       \"+Ogvcx3nm4JYIj6eegKQF4GR80D3pTi+XjoNh3tEYhEokeYjAEWGrCgKUtRGvM4X0aJoS4w+xHHF\\n\",\n       \"Z/j9iwBVfG6MEjgIgBZFLuL951FRVKfoe55X1GYR8Ip9j58XtXFeXwAvtOd8izy7SMLwSUrhfwAL\\n\",\n       \"yoI6DK+LQP75mwWPMpA0p2ylU6+Hdvmp1+vprAaPfG1sbMzVEPlbVv14aun0QCe2HKLUPRwb+e+h\\n\",\n       \"atr0KA/RkcFgoP+fuTf5cezKrr0XyWB07IOMNlNKlUqlaqBCGTDgkQHP/Wd74omBMuxB+ZWsLlUZ\\n\",\n       \"TbJvoyX5BoHf5uLJy5D8fX5iHiCRDDb3nnuavddeuzmj0ShiCGhYmzQPNuU6k8lkLVgcC+/4+Fi9\\n\",\n       \"Xi/AHOsJhfHw8BCsD5/n83m1Wi1Jq+A8gMTh4WFcy8fRgwJhcxgzZ46k1fpnTSC4vaGwuAdji0JF\\n\",\n       \"8bpS20bDWkZRUumUZ3Z2A4XEbzzmQFqPDWGePBbLlSX3llYuRMAbSg9Xxv7+viaTSawr7geQZ8+5\\n\",\n       \"+yKXy2k0GgWo5ViGx8dH9Xq9iK9xcIjidlkM40OfYTyYe/rq9WBYX8TNYEjQl0qlEicMO5ijOdPE\\n\",\n       \"enZw7q5JScGKsBYdLACaiQ1z4AngdtcboMVBu7tz0iMfGD93r/6UbtoKOEmVHILNFVT6vzf/zEHE\\n\",\n       \"S1a7C9KUufHPGDjeS5VrFhuQ1c8s0OCbzNkC71f6e4/z8Pc2ARPGNwUE0odsVFZ/NzFVLmhdgW0C\\n\",\n       \"kll9TYGEf+6BdOn4+fyk453O6bYbyg0hvlyujlfH/wvA4NmJd/DxcBfL4+PqoD8EI58DEvL5VaZO\\n\",\n       \"ur4J8ET4Iyyl1YnGgBHmAtqcPZDLrU6mdsDkz+JWFPPrbgsEHsoYgMF16BtWIn0mkyIriJW6I5PJ\\n\",\n       
\"JFgVaT3Wi+d2qhoLnZRR3GCwF54i6+sL4Y4y9rWZpnpS2TRVxsRHOE3O3KEQuBfg7CXf/C/RGEP6\\n\",\n       \"xenOxFGkDCeK2fc8gMtZPMBZypZIq/N8pBVjwTUA4tLq0EjWGSn0WP6S1jJg/CA87uuyB8DFeUvE\\n\",\n       \"Tjkr4bEwLuNYn647iPl4enoKJZ3P54PBGY1Gajaba8xJsVjUeDyO1HWPXUFm8F1nVFOXzGg0CuAL\\n\",\n       \"S+L9dgDHXJFiz5r273h8FK5ongXWE3DHMyM/YBg9g+ejc+uk9HyqeFzo+nd+juWdotms+6TC25Wg\\n\",\n       \"o9OfUrbuisoCO+lr74d/tokleokNyAIP6TNvep32IwsAZvWH72aBrJeYlyyAuIld8fd8nPn9x8SS\\n\",\n       \"bGoACJQ9EfNOiUsrIcx6g6VAWaYK390+KDvuh8UE2wCgwcqfTqcaj8eSVtYLlWNx43At7uunrqKo\\n\",\n       \"AQkIO6wlL6DlwbfME8Cf/gBQDg8P43mxWnHV8Ky4XnZ2dkKJu3uK2BqEPS4S3DRQ8KwbwNTBwYHO\\n\",\n       \"zs6iT41GI1KRPXiRceZvdzXQRwCdAzKeEQsZRQf4pM8IdxSBU91eQ2WbDSCWz+ejgFmlUolzZgAc\\n\",\n       \"AGeAKM9DUTAHGYAamlvorG/XDcw54MAzZSaTSSh3H0t+h8xHqTp75syctIqTYL0UCs8p5Nzz4uJi\\n\",\n       \"rd+AI1jAQqEQrkpnlqgVwrosFouq1Wo6PDzUzc1NxMiQir+3txdrm/GRVnVEvM/S+qnR9I/ibzA0\\n\",\n       \"qcx3A9kZGElrzC1j6IfRMlfMJ0CJMeYaboAwD27kblxzP7Em/580t6T4m0WYBVyyXmf97da2X9ut\\n\",\n       \"RGc/+JxreZ98caZKwH/n101bVt+z/mfTOKOQXncTm5EFsHzBpfdPQUDWOPr1oORcaTl4S3/j/c+6\\n\",\n       \"V/rsWe87K/QSSEr7+jE0D34jvZU5JqVPWs9QQng4IHFr210HLvA44E9a1VdxYeDWFNk0KGqsI3eB\\n\",\n       \"ACCcDuZ5SB10Xz5pv+wbshrI2pHW9xUgi+dgHOinsya4mebzuer1umazma6urqKuCoqIANbhcKhG\\n\",\n       \"oxGZN57x4YwHio+YGRRrsViMs3xQgF5BF3CUji3P4LUhHLxx/g4AhDGWVhY62VKz2SysUUAM13zJ\\n\",\n       \"wvwlGqnoxeLz4XWVSiVYN8AHqeq+flm7gBtpvQK2xwYxb6wJ1jzzARCEgcDlxfWp+gsL6LLQgZID\\n\",\n       \"F4AA/fJ96fE+xCJRGBDwSlYagePISWqdpHFPMBeSwlio1+uxhufzeZTYf3h4UKvV0ng8XmMq0Rce\\n\",\n       \"lMp93XXDXidwHVDsYM/dvO5q4TNpxZB7are0iqNZLBaxXx2YkFoMKHMXrj/PprY1HtyVT2o9Zylh\\n\",\n       \"/74DD77vf6ef+e9fUmRusUOp+mLdxHRsUtIvsSe0NGjQX/8UqyJ9ePgfLXX90K8sl1ZW2/ScPgbp\\n\",\n       \"PPjrLPCQ9uelMdoEUP3vrHWz7caGhgJFGd/e3oYVtFyuAj9Rlu4q8FN1CeREqSFkvWiTAxLKtkvr\\n\",\n       \"gZy1Wk3SiuEYj8drNSZgbRB60opKRymQXSOtZyW5MkAhE/CKNQ2ocVcG/XFB+vS0OjYeS24wGGh3\\n\",\n       
\"d1etViviVvDR397eajweh5VJ312RueXo7M3BwYGq1apms1kcWsjY7e/v6+TkJCzCNEWZeBPWs48Z\\n\",\n       \"jBWAhf6yPrwyJooO5Z5awwCUbYNvntULiaEMPZ7DlZuPcy6XC9cLay0N9JRWAaCMG4yMMykOaN+/\\n\",\n       \"fx8W/t3dXTApLldh/dIUWGkVh+EuPNYpAJF/0+lUlUol+spawvVIv7yysq83gsu5N4cEEtcC4HPg\\n\",\n       \"TnVlaRXcTuN77gp0Q8PZIq6Xy+XCJUW2mjN9ktYMEH7DNbPACf1iLJwB4/eMP3sDY+yjc+tIHyqW\\n\",\n       \"1DJnUJ3BiE7nV/UbpOwsnU33c6o2tczT32UBJv+XAh4WCIvFFby0bv2Aov0zX1AODPzv9Dn4m2fk\\n\",\n       \"/7SPaXsJMKXf82fyZ8U1gND3PvJbv5d/7myA9yUFWg4Ys4BIuo623dh40ipgFVbDA12d3i+Xy9rf\\n\",\n       \"31+by+Pj47DolstluD588+MuYs+gAD3YjT2CBZPP51Wv13V0dBQAAEWLkIEpwY+MgnH/MusVgQ+1\\n\",\n       \"ns/nVa1W4364JbgP15cUp7/mcrkopIaiRvkVCoW1k18RgFDi0PmM1d3dnabTaYAg1uVsNltLZ1ws\\n\",\n       \"nv3xw+EwzlHBiiRYl3vjxqBc/v7+fgRJMjeHh4cxplDby+Vz0TJn0zxYlvcAZvQZZoBr4QbcZkMJ\\n\",\n       \"+TMAHt0N6e95kDVriJRd1idMG2MPGzeZTLS7uxvuEq7hLAhrQVK4XtyFwP8AS1gZroU8Zn48Xfz2\\n\",\n       \"9jbqmsxmM9VqtbXUc+K4iFdhbNx95UybtHKNEfwLG+Muv1wuF8xUuVwOoAWAQ9axjp1tdXkLs+PB\\n\",\n       \"sdVqVeVyWTs7z4UAqTcEY4gx7nuE78KmetCyH/EAU4hr05kcgEoul4vxQ056OEfatl7nxBWLDzIL\\n\",\n       \"MDpqljoC3N/Lsuidrvq5bAv394Hz97MYHQcHLH5vfn9pHemyQbJYEb+Hf5YFirKUcwr+NrFR3vz7\\n\",\n       \"6YL3e6NgiAPgmfy7znLxfroYmesU+Ph3/yfM17YbFp+07uZCoUnPwg8LjEBMhGy/39d4PNZgMAih\\n\",\n       \"4Osd4Uw2gLTKwimXy3F4Ht/xgl5YpLlcTuVyOeYR0OMBbLy+vb2NsuowBgCAu7u7ACLSSsiTPQE4\\n\",\n       \"g+nxuV8sFlGFFUbm4eFBo9EoBGqpVFrz1x8eHgaQuLu703A4DEt+Op0GzY9gvr29DZDEmAAosBQZ\\n\",\n       \"fxQMmR29Xk+dTmctVRsXHQoHyp/7UGSO/Y6bgrEB0BUKhVA8KHYUHn0jZkFaFcfaZgNMMxcAMBg8\\n\",\n       \"YqpwdeTz+QiMfHp6iqw1AlJTme9xG4VCQdVqVaPRKFxqKHCPNeL+yOpyuaxerxfgA8sedxpGgbMJ\\n\",\n       \"yCzXLbe3t6pWq5rP56G4vZIr5ev9AEoYQBgz9hAgBODOXALac7nnDCTWPrKCQwTZR1RcZX0yBqT8\\n\",\n       \"+zixXhkv3F2ML+7PyWQSQJjvpG5pADYAmjFAntDHdE3QH5gy/macAEwvta27dbKsYbfU/bssBGi3\\n\",\n       \"9OGcypM+BDvcw0FGqvAc0HgGBS0LRKT995YK5JRd2PS79Nop25PF6PhzpOzDTynudFz8dQowuB7B\\n\",\n       
\"USxA/34WQHGLKes+6ZgwB+n9/VofAyDxhoBwZi/1CzslDoBA8VWr1agrgvDyKH/Gm2BWFDtjlBVw\\n\",\n       \"6+uGoFHcCE5jexGr/f19nZ6eRoE15oLiVADSyWSi+/v7KMmO5eXCFUYHVxcCjLEBVBCfsbe3p3K5\\n\",\n       \"rOl0GuOay+U0mUw0nU5jrBqNRghTt9x5dgJsPY7FXWcI45OTE5XLZTUajbV6Jf1+X2/fvl1jiljb\\n\",\n       \"gJ/FYhHKhJORSf3E2nTA5+MFsGdsCVgEOCGH2DPbbE7T0x83DB00My5ulN3c3MTYeFo9Lh9A9P39\\n\",\n       \"ver1uqSVe8FlJmvIZQzVR4k3ccaGFFnkBkyA6wiYQgANwez0odFoBAjzAGfAF4CKPe2l3TnYEuVM\\n\",\n       \"qq60yuzzNQyYwkB29xMy0d0n3Je1kuoI12G4VfL5vE5PT1UoFMJlCmjCYFoul5G2D2CiX2msi68F\\n\",\n       \"mBK+6/FkkgJkuR7f1LbGnDiISAEASsfRrLsBeM8VtrtJUFjug+N6qQLmft7c3eA0bKoMAVH+LP4M\\n\",\n       \"fi3+d0bIAYa7aJyByWIN/Hc/xX6kbM+m7/2UayR1LfEbp/bSACpvKTPibqAUoKRAzN1fad8/toai\\n\",\n       \"4ZmcKl0un+MnOKnUhaxbk61WKwq1LRYLnZychC/b/b2cnYPF5cKA77A3EJy5XC6qspJNAljxfUGR\\n\",\n       \"suFwqLu7O/V6vQA+ngqLsD4+Pg7XTaVSCeGFtcSaIKCS+UegTqfTYBdwZeAGQBATNwIA7PV6wYrQ\\n\",\n       \"f7fUAGMp3Y8y4r3BYKDpdBq0+uPjYwQlUlr89PQ0ADiAApk0n8+jUirsC1k/HiiLBQ/zhKsJoJey\\n\",\n       \"ZC7PXqK/f4kG8PSaPMzVdDoN0HF3d7fG2mF9o3QZw2KxGIXKYOVYr71eL76DbGEcWS809s5isYg+\\n\",\n       \"EvPhwbIuS0i1Zd1QGNCDw2G9Go1GsIQYZO5mkhRsDXuJfYSRwHqAQZJWMVuwLtJKX7VarbWYNEnB\\n\",\n       \"OPl9AFOpPoNdYtzYr+z9arWqYrGoZrMZaxBQ5wY/BgHHQtAH1qnHmrjLGjYQIJS6NJ0B26SbpC2e\\n\",\n       \"SpxlAaOUvHgT38lyl9DYyExyymQgQPy+mxgUvx+fu4/Tr526irKek/tLilgE+pyOg/fNr+n9yrqX\\n\",\n       \"K32u7ddxZiW9Jm0TUPPrZzFaKCyEC9Y4NN+mMWH8stxtbiWlACUFsh9bc8oWAY5lxGYlBsKBKwod\\n\",\n       \"xYrg4RTg5XJVV8BZK0lrygAlTiwKa8796whXMnjq9XooGoQM4AnLE9cRgnt/f1/ValXdbleSAsRU\\n\",\n       \"q9UIQPVAX2ciid3w4DhcAZ5pgNKn5gjrDvdMpVJZqyuBMiC+wKlqru3gjXXtpxBDxS+XSx0fH+v6\\n\",\n       \"+loPDw+6ublRrVYLWVCr1aL+CjQ24wr48PTsnZ0d1et11ev1OC6AgnQeU+JAhDX0kuz7pRoMhwev\\n\",\n       \"Qs9j/RIvAguCPCYeCLAuKbKscrmc+v3+mrzwefLMGg/8hlGRFPE5pMiXSiU9Pj6fms0+Ys0AXgCu\\n\",\n       \"ZJAAgmezmQ4PDyOeazqdajqdxtENhUIhGC5nMwGze3t7qlara/sNgIZLw10szkYAEABCDkxdhzib\\n\",\n       
\"5EYAspPYD5ieQuH51Gba999/H2D7/PxcvV4vZA4l+B0wTSaTNTCVxtFwPhhMZ71eD3aFfzxjuv8+\\n\",\n       \"upgTZyakbIDiAZ8pcHBgkQIMBCuTl7oUHGBkMSnuz/TvbYoNcWXqin3T+7RNrIYDiixmI4vBSa+Z\\n\",\n       \"Ah3+dyYqfZaXmv+WMfK+k/aGEOZzFHN6H1feqeDl9yhhXwc/t7/bbAAQBxnu6kGhSiultb+/r/F4\\n\",\n       \"HODAi6b1+32NRiO9e/cu3B77+/sqlUoqlUoR4IblDWjBx00DfKBAPcvm8fFRBwcHoVBQnG7lknp8\\n\",\n       \"dHQUKY+9Xk/NZnOtqNZgMIh55zmxUB0AeCwI848/3dcO7g+Uy3w+j0BEYnWg4FH6WO8oRQQ/v6dh\\n\",\n       \"CAGMJEVcC5b0xcWFrq6uNJvNIoBwNBqp3+8HRQ27Ij0rOQA7c+nF67g/YA/WivWAZSyt19vYNlOI\\n\",\n       \"QnbGCEW1u7urbrcbVXV9/t2t4sYZsSleG0ZaGSkEJbMeYUX8pG1JEbd0c3Ojp6enCGz2M2ZYv9J6\\n\",\n       \"bInHZtRqtcj6urm5UbPZjBRcaZV6D4MJs8iegpmgSu1wOIx1AehBUcPa4ErCJQgrg2JnvKgoDVAD\\n\",\n       \"XDGOyBH3JuC+Amzn8/lYazs7O/ruu+90cnIS690BCACOvjN/fqAgWXXs+3w+r+FwGPf3lG3PCPI5\\n\",\n       \"duMpq22NOZE+jBFJmQQam8KVslveuBec9ubzTQyJsyNpQyj4tdLrORuT9peWukq4JwLKn2mTANp0\\n\",\n       \"ff9+FhDx36eAJgVNDubS8fA++7ylsSBewdSfy+fX5yAFW8wFffHfpi69tM+bwNo2WrlcjgwWBCpW\\n\",\n       \"ITVBqtVqHByGICIAEKvR008nk0mACOIVZrNZWN1kq6AomS/PQOC7TvO6ewNlAfXLScNOJc/nc717\\n\",\n       \"9y4sP5SR08YpcEW4QUUzTwhnLFisbk/XJWaAbB2EMBYqboT5/PmYgE6no93dXdVqtaDlPWbDrX5p\\n\",\n       \"VcsBa3Q2m0WmD8BisVjo7OxMNzc3Go/HmkwmUdiKuBVpdUbJ0dFRBLrihnK27P7+XuPxOMbEA27z\\n\",\n       \"+XyUMncrFRZxm42MFeYLZg3wAHMhrWf49fv9UNrI1FwuFwHH7IvUoPI4oVxudeAewB+5wO9gLwCc\\n\",\n       \"zrK5CwGZAQsDGJ5MJrHOyMrZ399Xr9cLWcTcwEoUCoVw0wCG+/1+BP8yBnd3dyqVSnHfNHaHGCWC\\n\",\n       \"tcfjcTzLdDoN8MPYALqlVXqwZ/NIq9g3+u01Sbj3zc2Nzs7OIi5rMBjEYYhkFHHtyWQiaZ39Bxgy\\n\",\n       \"PzwDwNLdbABG1jLz89K63mpAbKrwXeFI2ZZ9VmwKD5wqxzR2IY0RSZU4wtzjPiR9oBxRygixFORk\\n\",\n       \"KX36nuWuyuoL13CA48+Ttqy4l5TxyGJ2sgBJVtv0HQcXFAVDaKTMU3qvTa+zxiHrs/TvbVPfkkLp\\n\",\n       \"k32C0sJyQVjhu8XKf3p6Uq1WC8DgAaWNRkONRkO1Wk2np6e6vb2NeBC+JymUJamtrHtAO1kBh4eH\\n\",\n       \"axYwwIUMoZOTkwAT0qpSpGcx1Go11Wo1jUYjjUajuObh4eGaJQigwEoCIOVyuQA5WKhkIHlwYC6X\\n\",\n       
\"i3RTrx0DLV6tVmOdHxwchGJ5enqKwEqUQUohU8iOrAsKahGo6GD+/PxcNzc3Go1GETtALAJ7i+wm\\n\",\n       \"B98wUICop6cnHR8fBw3u6a3Q+ihl4hq2nakjKZ4P0OVp68ViMdwizKOzErj0CBjO5/MR8H14eKjz\\n\",\n       \"83NNp9M1Iw3glrKmKDbkNXEsFLNzxpU1hMXfbrcjQHW5XAazAQswHo91cHCgRqOh0Wik9+/f6/T0\\n\",\n       \"NIAKDAAybjwe6/7+PornwVZ4TBSGHM8E2GRMYIAc5NIXd415PB/K3uOZGBPe9zLxHtvmOmg+n+vq\\n\",\n       \"6kpnZ2cqFArhQvWibXyXeXSASdo06xTAB7hEHqfuOgczL63trZ1K7HQQEwhtlrIkWTEjbBYHBkwO\\n\",\n       \"EdcpU8G1/T5ueWe5k3jtFC2WsQt4acW40Hjtz8HzZ73vv3+J/UhBRtqcbUktEu73c9kGB5D8nQJL\\n\",\n       \"HzOC0diAKWhyEJHVjzRWI2W80r/T8dl2Y+0RzMfrnZ3VCcLSqjIqqbIAFT5zQeDFw66urlSpVFSp\\n\",\n       \"VHRycrIWqMh18PljfeFSqtVqa64RFAXXAFAAQqDIEUSHh4dhKQ6HQw2Hwyi/PRqN4nuwOE7rOjD1\\n\",\n       \"NYFARAjDmiCApQ8VEkBnPB7HAXQoluVyGSmSlFZ3xsT34HK5VKlU0uvXryO1m6BhSuvTp0KhoN/8\\n\",\n       \"5jeh6FA2rH0swDStns9gip6enjQej6OOhfvykV8wUlyDGKJtNj+SgPmVFHEYruQlRYAz1jVrFEWJ\\n\",\n       \"e0ZSBMd2Op3YB4xj6pZfLpeRDcTaxjWX6gyYM2KhDg8PA6C78sYIYMz7/b5KpZJms5n6/b5OT0/X\\n\",\n       \"apB4n2DQlsvlWir009OTTk5OIn6ENe8l7l0/sOb9sE2MGUAY7zljwe+llWz2SqywQ9PpNLKGptNp\\n\",\n       \"PPt8Ple73Q55UCqVwh3k3gPPusHFiosLtokMO/qF/CMTj/ly3fBS22q2jrQ5hsStCd+8zh64sGJA\\n\",\n       \"2ESpAuVeaT/82t78b++jpLVJ4/O0H/Q9vc7PaZv68tK1NgEWf52CI373U/1iHF2Q5nKrACfG2AHF\\n\",\n       \"JgDkc/ISg+ULOAU4WWzPz1nsv0RD8QNkR6NRFBtLgyQXi0UEVgJgGGOYCMD3zs6OLi8vI9sHUOjp\\n\",\n       \"h55xgiA7OjqKuWE9ovQxCnZ3d3V0dKTXr1+rXC5rPB6v+ej7/b6k573Vbre1s7Oj8/NzdTqdtbLm\\n\",\n       \"pEgi8D0DhfXgQm9n57meSLlcjvRqrC83LhCCWJtYyKVSKbJ6Dg8PI27HC6dJ625MX3P0rd1uB0DC\\n\",\n       \"VYEl2Gq1dHBwoNPT03DbdDodzWazsG7Z62k8A6CCPQGzAGtCf1Do9I2AXgcD226kkgKAUYB+VAHx\\n\",\n       \"DAQKo7gYG5iFYrEY7k+qrnrWUi6XWzt8zo0g5n00GkXMCnPpzDigfrlcnTkFqwhIgiEAGBDbQiA1\\n\",\n       \"jNf+/n64YUulUrgfmWdcs/xP3weDQewvZ+QlrYFs1nmlUgmw5YYz7mAK0/Gs3lzHIX8BtGQHMgfz\\n\",\n       \"+epgy8PDw3AX49bCFUvDYHD3HZ/73vKUZGnlsnE9gaHxc2T11mJO6Jy7AFIkyUQ6s+KfO5jxhydz\\n\",\n       
\"wT/jXm7lS1oTGlluDmcMvB9Ob6V0Gc/nz5HlcshiAbieAx1p/SyWrOukCt5f+3ilcTQ/p3mfUtCH\\n\",\n       \"EHbFyn3cb5+2rDlM2Zm0r+n3eM5N8TLbaARDEvMgac26xipB2HKo3f7+viqVSlwHoYTi39vb09nZ\\n\",\n       \"WYw97AjWDu9zz16vF2nGzEs+vwoKdX8+1DQCClcJ/mUvtFapVPT4+KirqytdXFysxRT0er3wOSN0\\n\",\n       \"eVbALWwNbq7j42NNp9NgZaDQU0DjLGW1Wg3l7dT83t6erq6uQgG51c7+cZct4wXIub6+DkubAEZA\\n\",\n       \"FgDm9vZWp6en4cbE/YJbGSXHa/fdM5/MAVYmJey5N/LKD7KDVdtWI/CTDJh0D6JQUdhkzCwWizWg\\n\",\n       \"C8Phhsx3330Xqa0EiM/ncw2Hww9kDe6Vg4ODNbeJz7G7OZHbMIIwJMvlUu12W7VaLWKtSqWSbm5u\\n\",\n       \"IpUXEPX9998rn8/r5uYmMlJg49hL6Jvlchnpx9VqNZQx96SPsO+s6WKxqOFwGN+TFOzrbDbT0dFR\\n\",\n       \"6B0MH28pI0gNFhpxJRgmXjSPuJB2ux3zgsyg0Sf0H2AsZXl5VuQxe4igYFhJD5DfuOb+P67V/1+N\\n\",\n       \"BQcac79Z6mNMFVjWQLjCZDBoKevCNfmuKzgHCDRX6lhxaR9o7mNN75VOBH3yZ0/vze83fe6MT/o6\\n\",\n       \"67cOVLKYpU2NOUndYPx+U7wL92RcHISlr1PQkQXYXgJnP4cB+iUaAimfz8dBYAhjWhpLAH2bz+fX\\n\",\n       \"SmLDfuBmKZfLkcHC+sP/DWhAaRYKBfV6vUj5Zey4N6CE115NFgHi6xsrVFpR0g8PDzo9PVW321Wh\\n\",\n       \"8FzVc7FYqNlsqtfrrVVlhTFyhoHYA863IXMHYITVS1+xIEejUQQn4h4AFJJ5ICnqlbgxQmPt3t7e\\n\",\n       \"xomwjUYjxj6XywVo8tIGrLVqtSpJUUOF+UW4M9Yu41BK7hZmrlAU7r4hm2PbwbDSqvbNcrmMmi68\\n\",\n       \"v1gsAhACQgHCPD/zj9uAAFbcb+VyOcZ0Z2cnWDAHGpLWzrdhrtwo9MBZwDcgG6AprVwOgC32ArV1\\n\",\n       \"Dg8PAxywrk9PT4M18/oixWIxjI6DgwNdXV1FcGylUlk7HZyS9awFCjE+Pj6GEVAqlTQej6P6MLFR\\n\",\n       \"3BPdxfi7HkHewHLs7OyEsfD09BR1TmAqXRcDIKRn8OLxMakeBqQw/riT2Y/5fD4AkOtP1yWuf7Pa\\n\",\n       \"VsCJA4Y0/iKNN/GWZWmnQpff8+AgO+lDhSmtKlT6595SpekDysR68/vR3ywl7Ao76/mDW9nQAAAg\\n\",\n       \"AElEQVSyhGlWS4HIS6/T936uQvcxcICSRZungOKn7uNzlLJN/h02hLT5UMOPAZxQnAyKGSAhrao8\\n\",\n       \"kh3CuKEI3bJZLBbhTsnn81GqHTcBcRUwBE6XkvYJPetl6z3+gmqs3P/29jZqTvh5KcvlKsaDfUYt\\n\",\n       \"CIIBPdiVzB/SKgkKdcODfuMuwLqVVkrZ/dXcj8A9Mpomk0kAAmJ63KID3LDn+Iwx3tnZiQqgjAVB\\n\",\n       \"xjwzAMGtQU+9Zh4BMhgngBqvTSEpsnlwH+zs7ETNGJ4XCh7Fj3LfVuM5sOR5ZndnIBsZAz89GmXK\\n\",\n       
\"39fX12q1WuGmGA6H4QYjaNtTZgnqdiYG5cfe9zAAmBdq07A+fC1Iz2sAlgewm8vlIq240Wio3W7H\\n\",\n       \"+nj16lW4aLmuV8SFMcMooC/oOhgNwADsHzVvAOLsXZhBlxWSIijZdYUbPbCr7p4qFAoB1mnuQgJM\\n\",\n       \"elFGD7xl7QM+PF4OGcG1GS+CpYl9ISiW/eHxMWnbGnPiSsxdIh7TkC46ab0UOJPg3+MafId/afxH\\n\",\n       \"qjy9OTPAPwaU+6TX8gXi7h4HTy/52TYxKylT8VJLwY2zPlnX3XTvrOv6tZ1id6Di4+LzmMXw+Liw\\n\",\n       \"CdwiSGOFENbp+POdnxqbX6rhz0WJkqWwWCzCGpJWligxG+VyOVJqpVXAN8IWoYxfnO9LK6HE2GC5\\n\",\n       \"4hvHwuS+xFfk8/moh1IsFlWv16OYGvcDNCLEPQDQBae0yjRAKHY6HdXr9TUrj/7i2vDgOxQ+Cg6/\\n\",\n       \"OKyNW+XdbjeEP2OJT97dtIwL93d5gJ+fIlRY1s5kMF5ck9+STutWJcoUNqtYLOr09FTSqlDY9fV1\\n\",\n       \"KB7fl51ORycnJ3r//n0oJZgwYgO22fzcIU9vJh7D9yTfA5yyXjgc0vc88w/jSNYPFrhnllDozEug\\n\",\n       \"S+sy9+DgIOrfOPPg4A45AggmPdzlGSxFu91Ws9nU/f19VCTu9/ux/wgy9/kk9orrs5ZQzM7wM8/o\\n\",\n       \"s8PDw4jLce8CwJk16UYi/7OPdnaej0mgH5xFRWA8wB+d5CQB1+CejAX7kixDXJsYP8TveE0fjIub\\n\",\n       \"m5voB/MNm/WSQbm1mJM0kAzBmga0+nedqXD6yq1oz/ZwMJA1CJtcEln9ZQLT6wKsfHO6/y9Vxs7S\\n\",\n       \"pACB9/k7BThZQjYFSemz+r1dsfv7P/X8PAfPynUQHg5OfFxShmMTI8K4QkO6PzpVMOn4+3h/DA2B\\n\",\n       \"RL8PDw8jU4D6ByhBT/ldLp991QhmSkZzPgxCAqsGBY+QcaGAi2M6na5Zk/QL0IQFisChgBrBf5LC\\n\",\n       \"3eNCzFlHZ0G4TqFQCD85+xbQhKWFYeGl2zn0D+XnhaoQvrA3zWYzrgUgoEAdY+/rM907PAOZM3t7\\n\",\n       \"e3GgG0CDOapUKlH0rlQqSVK4tegrcQfEAJFxc3Nzo3fv3uny8lI//PCD7u7uIn6kXq9rf38/Yg16\\n\",\n       \"vV5kP/kYe6G2bTVYscPDw1iPBPUSVwPbRio8MTbEHLi7EtcCTCHl/lHiyHXmHjcRABW2AuCOwscF\\n\",\n       \"huJ3Vsf1ibt42Fs8F7KN92azmWazmY6PjzWZTHR8fKzFYqGjo6NwMXqqdKPRiNTxQuH5ZOu0lgtu\\n\",\n       \"U+Qj4AP3JQwe4yqt13hxQznLbYlrCMDMOiVdn/nwWE7XN8hlxphxhP3hc4+fkrQGwIjl2t/fjwwh\\n\",\n       \"5hhm8iWDfavl63lA90PzubQKQnWB6ErLgUUWg+LXdbDD36mLZhMYSAEIgtIXd9YC8b44EEgVtH/u\\n\",\n       \"/dv0efos9DEFKllAZxNweak5s+XWBe+l90xdPz4ffo6Fu+AQfq5Q/FousDb1+WNgT9iUtVotrH+o\\n\",\n       \"byq7OvPQbDbDZQENzriSscN3AW8IGhiGXG6Vosj8UrgNgUCUPcFqHniZz+ejLsvt7a329/fVaDTW\\n\",\n       
\"4lVQADAjZAwBFIbDoer1etQBWS6XAX5ggLrd7loRKmf1KEzn4wi9vVwuI3DYs3CwwLCqOYBP0pqf\\n\",\n       \"X1qtDYQna3WxWGgwGOjh4SFiGRDAFFAj/oXnqtfrcaowMqbVaukPf/hDBC7O53ONRqMoZNfv9/Xj\\n\",\n       \"jz+q1+sF4CN1tlqt6vLyUu12W+12O/bCYvGcccG9ttkIcsXNJK0OoUTRu7zF/UOslLtBmKNcblWA\\n\",\n       \"DyXs4BDXBNflmAHuhbvC5QigGpnlgdmcrkuwMy7AfH5VCwS20M+HAUjAamL9z2YzHRwcaDQaqVwu\\n\",\n       \"xzEQMJMAq8lkEvdz45uGrIS94Tlx8VF1OmX9XD8CQFi/BOSyVyhS6CnTjDXzcn9/H6wpoI9qyQTz\\n\",\n       \"M1/EscC0kGnkzOPXX3+t4+PjMLCYZ+YQlmfjmvvfX8Y/3Vwp0lk6zmAyeU5rO13kFfNSq8iBQKqQ\\n\",\n       \"nf52sOEgCMrRQYffnwXCwBLkBrJ0xOl9cX9g2md/nQVMsoBP+ltpvUaGC+D0en6dn2pp35mTlJpO\\n\",\n       \"WSF/HhdeCBL3xSL8fbwZYzYcBbLcr5zO6bYbFs9i8RxAh6Bwf720iiPBZUHxMqx1D8J0NwiBbbwP\\n\",\n       \"Pcrrg4MD9fv9KOXOukSQME4cm+4WH4Dg5uZGR0dHITCxdAjgw3WSy+WimuXBwYG63a4ajUYISkkR\\n\",\n       \"JHd/f69qtRr3WiwW4RqRtMYMMBbEFJDV4zEguLiq1WqwHygSrzXhbiesa99fy+VSn376aQAUhCvZ\\n\",\n       \"Rygq+ozywJ2B0L2+vtY333wTypI6NFSTHY/HqtfroQhvb2+jGNhy+XyOz2AwiL4j8xaLVdDzNptn\\n\",\n       \"iuzu7ur9+/d6/fp1GG2uZDgPyt1S0spVyZ5HNqXBtMgAXDPScyAse6VWq625/AgyzefzGgwG0Q9n\\n\",\n       \"27kmv+V7xGF4GvRyuQyQvr+/H5VU2Uf0m7lvNpv65ptvdHZ2FnILlxJgmj3hx0fgDkLf4M6RVgdm\\n\",\n       \"3t3dqVKprAHy1IPAPfkbdpT1RGaTFxcEgPs1AGH0zfuKTGNM8vnnEgMAGK5BWQCO6vj222+1s7MT\\n\",\n       \"xewwYvw8q01ta+AEAYHQppOe1SB9GKvBd7MUubsA/DPpw5RghLazACmY4btu+TMJULgOQLyxgNOF\\n\",\n       \"lLb02j/X+k/7mgVwnDFJFXcW+7SpIeRTMOL0Yso6pffgOg4IpXU3TfoZwjllqPwZfU4+BubEQdx0\\n\",\n       \"Ol1LpcPSL5VKQeX70ewoIKwyBMZyuYyaJQh9ytYzPgjrQqGgZrOpfr8fcSfD4TBiVVB2i8UimA6s\\n\",\n       \"KazHnZ0dtdvtAERuXcLUULW1Wq2q3+/Hcy6Xyyhc1W63I5ZAUliDCORGoxGBvr5e3Vr19TYYDNZ8\\n\",\n       \"8oAtTrf13wAgAFyMVSrEp9Op/vM//zPWcqVSCQFcrVbX4gOQLWQKUTQLvzsgA1CFC8c/90qg/X5/\\n\",\n       \"zbWHmyCXew5O9HohR0dHv+Qy/qBR7wPXijPZ/E1AqbSqiwIwATwiD1xBUmUX5gpggouDNfn27Vt9\\n\",\n       \"8sknwUgShE2sA4odoMF9YDYWi+e6QtKq0qqn4cNQ5HK5AB7sj4eHhwBkvV4vlD/rtV6vazKZRMaR\\n\",\n       
\"pKggm+4h1h5AGcaPezvDgovPDdrlchnB48wB44keQ74gi3AxszfSEAWuSeVjjCxncdGHsKcE8MJ0\\n\",\n       \"cZ3Dw8NgbnE940ZirwKAmLOsttWAWP4hUFKXgH/XrWRpZQXR3NJ2JZoFZJyN8Ws5SHJ2xBkZX1R+\\n\",\n       \"H9+wriz9WVIAxLU8L9yfmetmgZYUfHn/0s9dIPgc/Fxlzqby1L0UlHDNtH/e99SNln7XY4vSefeG\\n\",\n       \"svONxXxtu7E5sRIkhQBeLp/rD/C+pMhSgfkoFothJUrPzwjVSultDtiisBQsIsqQrB4UI6m9pGfu\\n\",\n       \"7DwfKEh/cEEQr0EtDqhed2EiWGBwmEcCNu/u7lSr1TSdTsOHz/4ikJEGNcxa4jVgi7VArA0CDkuX\\n\",\n       \"2AOvnYKlxxxIq5Nccem45f7rX/86XCmwP5Iik8iZPvYBLrFcblVTA0AjKZ6T4l+sZ34DU1AulyO7\\n\",\n       \"C/DEHACAyOaAcdtWA2zwXF6/hXFgbGHZmFPfwz5/Hk9xc3MT68xrdHjGCsCW/tzc3IQ7j2wXzjbq\\n\",\n       \"9XrhinIwj5uk1WqFiwLZ74wsCpcCf/P5XCcnJ5pOp+HeIgi03+8HKMNw9aBPZ5Wo0QIAd73D/mbO\\n\",\n       \"PY6MuA/Gmf66/vH/CfJlHeFOJ5CY37u7ibgxZ2gB1Bg36AHXdQ5YyV4i/RkjHpDnpRKQd5vaVg/+\\n\",\n       \"k7TGYEgfnh+TKuAUZCBkstwt/hsaQt1ZE0lrwpc+eLS4uywAH1ngwMFV+s+BA8/uzEUW48M903+p\\n\",\n       \"Yua19yvr2ik78XObK/7U5w8CBiHzHbd6HYT6c6Vj5N9HKaSMTdY1/ydg6/9lIxiQvkNNo+xIU/Wx\\n\",\n       \"4IhxgAbCAeEB1Y8w9dLwbpGjkIlfkVZr59WrVyHIpedTcQeDQWRPYPEhuNxnj0JmfAEY0M4ooMFg\\n\",\n       \"EEqEg9QI5iRyH7CBhYg17a5arn14eBisCs1ZDBTcYrEIqh6GD6sv3eOpLPjb3/4Wlm+hUNDJyYlO\\n\",\n       \"T0/XXEMoLa9+ydwR0OsAiz55bASKgecEgKHIYbMAhhR8k/STgYO/VAOM4PKAbfC96jE6HuMEGwFD\\n\",\n       \"5W5a3FiAXSxyxo+6NScnJ2HgoGx7vd6aq4IUYAf3vq88VsUrGvuhlPShVqupWCyq2+1qMpkEW+Lp\\n\",\n       \"uE9PTwFW6C9rgvWSMtej0WjNPcPzo/xd7klai0NzPUffkROMw+PjY5zf5HK10WisudbpFwwv1wYg\\n\",\n       \"OcuTpm17kVMYzFwuF3PF34wV90X2URQuLSbnbSsr3gUzDwztJX1odTMZKUDwICGaT6Jfi/eyfiMp\\n\",\n       \"QE0qBHyROFXIc2SBkPTa6Wf+/K6gsxiILDdJyrD4okmv5X3w/joI/Cmg4vEhWWAoBRwpCPN7ZzEu\\n\",\n       \"m1xDKMgUsLq/OmVRtt08IA+h46mg1OgoFArxXTJNACJkCBC8R4Go8Xi8llGCUPB/HPXuAuz29laD\\n\",\n       \"wUDNZnNN6J+ensZhf9KKISOmBLbHmZLJZBL0d+qnptrmaDQKQc6zko2FUmYuy+VyWN4oddYbIMuf\\n\",\n       \"k/vCNAGaUAoE5iH8vVaGGxCsZ0+rpBLrYDBQv9/X3/72N3W73agMSjVcwKa0MjA8UwX3F8oB9mRT\\n\",\n       
\"qXeeCcbED3sjSJR1vq3GesbSJbaAZ+RzACdrfjAYxNohoBmXg/Q8fmRD+fWYM8aGeKD0b4JtuRaV\\n\",\n       \"VtNqvMR0cE3WH3EsXJv5lBQxFa9fv1apVFK73dbR0VEcE8EzEPtCn2DDPBjbAdR8Pg/mhnXD/WFS\\n\",\n       \"2X++dgEQACze93FCjwH8fC8wRj62MGEOGj1+krmAHQG4AaQxmthr3ieeiecjoBZmiHttaltx69AY\\n\",\n       \"cG+pUnUl7r/jM/6xoJyF8e9nKX82FQuGzYdA8cUEgEoVa9p3LE/pw7LqHheR9TxZ72cpewc0KRjb\\n\",\n       \"NIap8v+5TAP3YB7S+XDWIgWQWc/gzJNfw8fvJYDnAiprPLbdoO0BGghcfL7SszAjtY6qkCh4LDgE\\n\",\n       \"G/+7heZjARvglpOn8y6Xy6g1sb+/H0IRv/KbN280HA7jaPhGoxFuFKhlD1ymmqunOXJPQIUH2tE3\\n\",\n       \"gn3dtYL/G9DPZ25NEZ/j6bX8lvidbrerX/3qV3GGiI8Zc4Hy5PestcFgoLOzsygshsAmXgda2un3\\n\",\n       \"fD4fisXpbJhExgwXM2wP90bOwB7QcOF5PBtjtG1W0BUmlq9b4SgaWDK+T0CzK02ChpkrAsYZA9ht\\n\",\n       \"nhngjkuBAGXYht3d3Si9zlxdX1/rzZs30adGoxF9ocEMMl+4OCWtncHT6XTimXG/cShmt9uNzDyu\\n\",\n       \"ybPhBsMgwB0mPQf4NpvNOCfKjT/pue7NmzdvdHl5Gc8qKfa2tJK9i8Ui9hDrhWBtWE836LzyMKUO\\n\",\n       \"PFUckAcjybh7yARMaKFQCPY1NRB5loODA43H45hTCklS6mBT2+rBfw4ooJC8ucUOCOG7bHoXWKll\\n\",\n       \"nrp9UuXHd3jPfdQe7Eo/HQihGFIA4nSZswqbmJWUIXpJyaaAImWMfKz8mt4v/5f2fdM9nZ3wBZgF\\n\",\n       \"Jl5ik9L7cg3/XRZTxOc01oF/n+fednOLjIOwUOL4xVH8XscD4QDVj5Xn1pnXG6FWiVcYJVMGId1q\\n\",\n       \"tXR2dqYvvvgisngIQAWE397ehvukVCpFJolbw2REAJxIBcflgv99Op0GGCDd2EGBzzkWrbQKlE4F\\n\",\n       \"7c7OTpQSdzD79PQUQcOz2SyybebzeVSklVaH0mFY+BwBkM7PzzWfz9VoNFStVoMRYe2hXGB1UGQO\\n\",\n       \"ynyPM1ekIrsrjWdjjnyto9Bhb7i/P/c2G2NQLpeDhSBo11kCl6EoYjJxcElStt6tddgHfo8iWy6X\\n\",\n       \"UXUYxoCxef/+vXZ3d1Wv19VsNjUajdayyjgzygsDSgpWAVCFQQkDgKW/WKwOzzs4ONDt7a0eHh70\\n\",\n       \"7t27tRg85h+w6mAY+QnTAKPEOVR3d3ehrImpYT/3+/0oYoisA0x5zA9Aw8FAr9dTq9VaY7Jo9I1Y\\n\",\n       \"kuVyFe/F/f2ZMCrQA/SBE8HZS8h25kDSWlzNcrmM88OoZ/TS2t5aQKy31AJOYwiyPk/f5/8UkPji\\n\",\n       \"y2IZ/LvO4nhglCvh9DXXkFYAIVXim57TNyefu9spvY//1lt6Dd5z4JAFRNJrp83dKVzPwRx/p/dK\\n\",\n       \"+5mCEx83roew8Of0+ffxcJCYPvfH0BBO4/E4/NZsRKwSBCBWZblclrQKzgaceKbL3/3d36lQKMTJ\\n\",\n       
\"wFiOg8FA7969Uy6XCzAEa0J/qtVqWPC9Xk+S4vvSOgtJH1EmKFmyD3jP0yA5NJDnXi6Xa4oL1nF3\\n\",\n       \"dzdAGMGf3N+BLPvAA3BdydH/YrEYVUW9FDfZSxgZKErW1Ww20/v379VoNLS7u6tutxtjx72RAfTD\\n\",\n       \"gzs9+JN94EIcgQ8j4IwI+yVlx3iNhc71PSB5Ww0lg9vK00xZJ4yBpwVLK8DCOHrlUa7nAJaiXe12\\n\",\n       \"W2dnZ3Fd9grjTAG7k5MTSYpA2vl8HooXF9xsNotAXGmVPdTr9VSv19eU8N7eXhypQAYOIGmxeC4R\\n\",\n       \"gDHB86XMhJ+fA+PkMUyPj4969+5duMekldxwhp7vL5fLyIaCuQAguvwnEwc56UGurGncOP4ZrBHP\\n\",\n       \"wDrH/ULfYLEmk4lyuZzq9bokRYwZMSVkatFv2CbG2NmzTW2rbh0pm/Z3xZYCj/S3qUDDqvNYBfdl\\n\",\n       \"S+spwalyS+/rVlaqPCWtKckUWGSBCr8vCPalfvj9U3bFFTXP5a/93mlf0mfIYlByudxaRDv9SZmP\\n\",\n       \"LPbCv+Nj6s+JcuFzR/9ZIMfvkQIlf55tNq/DQq0Q6FwKauF7JVASdwaCQ1oVYOPsmPv7e3399dex\\n\",\n       \"VohFYbypXAp1DWDwuBO+32g0tFgs4tqSgrKFMmd9jUYj3d3dqdFoqNlsBuhxgMj/uVwu6pnAbGJl\\n\",\n       \"QzFzP7IoHLR4MDBKnswfPzgQBc/4YdFKq7NbsJIZjzSwtlKpqNls6vr6ei0bqtFoSNIHz+fWqwMd\\n\",\n       \"VzbuapzPnzOg6C8yCFACO+W+fGdWuB7MGHEV22qpocG6Zd0ABjHyYEMAgrh4pBWDxPyWSiV9++23\\n\",\n       \"a0wV1jfX4iA9Ulr5h/wsFp9PEB4MBjo/P9fd3V24DTzFmPUFI0gcFUGgzLfvHwA3VYmZ0/l8HjFT\\n\",\n       \"GBOAy1RnuBuL1PDZbBaMI24e2CMv+4+MA3ABInDBIPfo2w8//KA3b95Ef9jT7AN31bpBzjjB9HgG\\n\",\n       \"EtemfguAnnpKzt4AUGFrYHy5Pr93eZDVPgpwIq3T+iwIpzRd4XsQTcouULnSYxMQPlkK0wWCtAqo\\n\",\n       \"hbb2Q4/cbeD39UlOmQlXpOn73g8sTPrpz5QqbK6Z+il9nHjtQMw3tTMpm1gHru2shlf4pPk9eN/H\\n\",\n       \"m/fSDYvicPeZsyNZTIz/7QDJx3SbDYsJ94yzSoABXB8Ezs3n80gfdmsd6hXQ/d///d8aDAZqtVpq\\n\",\n       \"NptRhyNNnweoLJfLtawEhAyFne7v7yPlEEsWAYSVhr+43W6vgSlcq1j1rMeUGSBFkbgJUkM9JVLS\\n\",\n       \"GlDy9Nzb29sI8JW0JgMeHh50f38fwg5g6AYDAtD3TqFQiGqZBE0Wi8W4n7vPABWAKBSD19IAnLBP\\n\",\n       \"6Etawh2WCreeM4jufvJgSErE89ttNTKwYDZgTIrFYsQhsa4A03t7e6rX65HCzrNC6e/t7WkwGGg4\\n\",\n       \"HEbJ/+FwGPVDzs7O1gKxl8tlKGPiue7v7zUcDlWpVDSfz6Omz+7ubqSvshZZ4wBRMn6YO+YZ5sVT\\n\",\n       \"9InLcLa7UqloMBhEufpGo6HxeBylAxgH4mJwJ+HOgaFhrVGUkb3schX2ETewA2FJ8WzUYEEuLJfL\\n\",\n       
\"yMxjLyAvCE5ljJ2VonEvntuBXKvVCtACq4ZRgEyBFYKB4bkZ75faVmNOUJ4IFDa/B/c53cz7rlSz\\n\",\n       \"2AwEv3+WZWUDYFLLm3t7rAnXcEvI3RJQnL6oUrZDygYa3I/fg0AdwKRjkQKllGHKAjZ+nbRPKUAB\\n\",\n       \"lDhtjb/XY3183NLX6dz4M6VMjbNPqWsuZWf4Ox2Lj4E5QXFj2VD0qVwuB42NJSkpAmWxPBC8jLW0\\n\",\n       \"cgl88cUX+qd/+ie1Wq1gndK9IX3oBhuNRiHU6AM1RlDOpVIp4iPYF36mC/2Bzsdq9APTEGTua5dW\\n\",\n       \"h4nhk0epsdddQLJm+A1ABmUNa8Hz7OzsBC3PIWysVUBduo4AB3yOZT4cDvX+/fu1UgPOGhIvwFgB\\n\",\n       \"anzNwtweHByo0Wjo4OAgik+5C8QZE54ZS5lsCL5Lhtc2G3NJGjGHWKLkAFUen4b8QGEBaN09BoAc\\n\",\n       \"j8dxBhVxGJIiriOfz6+tH2dEAIK1Wi3S9QkAx/AkuBlQCeCRFO5HQClj77rBi6Q9PT2fA8XexV2H\\n\",\n       \"OwcmVFpl72GU1Go1LZfLyE5yBtNZG9Y6ip97M+YeR8Jev729jfHl3nt7exqNRhqPx2sg3wEO93AZ\\n\",\n       \"y+f8o+4R8+s6mO8CUJDN6RlgxKwhI16qcSJtsUJs2lLGBBTH9936z7qWC+RNqYcpc5AiT3/tSJL+\\n\",\n       \"+b0duW5iJFIXS3rPTfd2Ab0JRDjY4O9NIAHFkY41/cpS6ghJrHi+70rQ3VFZ983yJzq4yOqj59an\\n\",\n       \"v0+BKb97if35pRtMEBtVenYh5PP5UDKkCyOAKJzm7AfMQD6fjwqiHORVq9Ui8HN3dzdYF8YHC2s+\\n\",\n       \"n8c9Eeqz2UyTyUTFYlEnJyfhIvI+cw8AELUKoO9hflACnp0EW7JcLsMq9cqQktasWd8THrTnmTaA\\n\",\n       \"EFxhZCM4g0rQYy73XHmWYEiYDE9zxF/vIJIU6dPT03hO3zdY1MwPgtWtbGQN4BNFSd+k1UGDrFmY\\n\",\n       \"KtaOK3Dq1my7AJukYCoAbACxQqGg0WgUh9tJK/eJB3HDbsGEuUzmkMTLy0uVy+VgJAj4Rg4/Pj5q\\n\",\n       \"Op3GuLgrHUCDAmZvoNilZ6DDWVGsPW+AnkqlotFo9IHRw/ol7qPb7QazeHFxoU6nE+AXdoA1zLrz\\n\",\n       \"1HjcvHwPVxagB3ewsyAOcL0YmqSQCa9evdLDw4Pq9bra7XbEvnm6PnKWv53h5zmdQaVfuVxOo9Fo\\n\",\n       \"7dRo9qob6R4wXigU1tKikTUYKZvaVsCJU92eNphaOG5ZeCaKNwSbMzAgM2cJuD4I0ZV1lssFwcWE\\n\",\n       \"uOXvAIrN4ddz5erfow8pU5P1PCnocIDm4+hj569TMObf8f5kuZ0khYJxdM3fWYDP++8AhPt5H5jP\\n\",\n       \"lLZkE6bsioMwd/850k/B2bYaQhyFlcutykBjZeNOAajwTNCiKF18ygiVSqWidrsd6YcINeJVptOp\\n\",\n       \"ZrNZBKzSB5QFJyNLz2xMr9eLVMJaraa3b9+GO9PvjRCjj3t7e2o0GppOp0FLQ1F7jQ4XuoCKUqmk\\n\",\n       \"2Wy2JpTc0sSqRYCxdgigRQ5AkXNtBGGx+Hz+DrFSBEH6PgUEHB0dxfoiawmmxNlPAgelFbhBuDN/\\n\",\n       
\"KCHcMMQNADYoh88hfmQg4UbiRGbO7Lm7uwsFSbbXNhvZIex9mCDfo8wxQALlieJCkfEe4zgYDAIM\\n\",\n       \"E4tD7AZjBAPpLnh3Ly8Wi3DjLJfLMBL6/X4Ec3M0AfEcAHECl5k31loa9+LPiFtHUuwT2BxcOs4i\\n\",\n       \"ciwBOgXQ0ev14m8H0hgZPCfj5rE9DiQkRYD3bDbT6empJpOJptNpAJPHx0c1Go1YkzT2jfede7Af\\n\",\n       \"ceEir5En7KXHx8co1AazA4BkXUsKA5R1/ZJrZ2tunSxLN3XneAqSlJ2OmwIaJpSFxaRDO0orq9QX\\n\",\n       \"dupy4H4eCOclqlOAwYZyxsSpOleeKRhAADtLQz8dIHh/N7l1vG/+dxZQ8XFM54HFJWltTL0/L82D\\n\",\n       \"P5/f0/vA/LoQwFqHkuQfY+kuH4998Ptss6X0rJdBZ85gQO7v76OgExYkjAc0KoAGCrjZbK4FVO7s\\n\",\n       \"7Oj4+DjAy8nJSaSkLpfLOCCPOAbp2SKq1WqhdDudjmazmf7+7/9ep6enkhQH4S2XS02n0wAaktaE\\n\",\n       \"v1dS3d3djZNoHZTzDDBApVIp1pfT/J76m84lhekQdMSI+FrwmhUEJKNAnZ6/vr4O5URm0/39fZwg\\n\",\n       \"LSmYKAdAsEjul0cws1cbjUb4/AGDjFen09FkMlG73dYPP/ygm5ubADjL5VIXFxeRtYWSZPzciNtG\\n\",\n       \"c+UKeOT8J0+zBsCg7JkbD2yVFPIXRUgaMHvk5OQk7tHv9yNuBVBEbAYMBNf04PJGo6GzszPd3NwE\\n\",\n       \"68aa4eDFYrEYgbOk5zso9Mwfz4LjPY8hgSnkvB5irWAVJUXfWB/cI00GgC3K5/NrtXZYt6QYs/7f\\n\",\n       \"vn2rcrkc6485c28DzJW0SntOQYfrHrwH6EsAp7vkDg8PI5vPjUx+A4PGXDJ+7KWX2tYDYqV1Bc/f\\n\",\n       \"WEIo7ZRRSH/rn6EIXGm5j5fBk7QWDOQUs8eSLBaLUCAoTfzCLhxTHzLK260wru39T10h/vtNgbbp\\n\",\n       \"ePnfLBJvzuakLEvaAIaS1hYb4CFlgXxRpyAl7TPvZ8WKOABjvLi/F6XyZ/Q5/RyrqEwAACAASURB\\n\",\n       \"VBgaQtAtSmha1rIfxrdYLMKKZtNiSSEYeebpdKpKpRJVRaXnYk0eT1KpVPT69euwgNg7AAesQs73\\n\",\n       \"GI1Gms/n+rd/+ze9fftWrVYrghiXy2UIc5QlWS+j0Sj6xj5YLpdRsRPhyjryIk6e5SEpmA2eQ9Ka\\n\",\n       \"0PQ4BrKYqEnB7wFIAB9PkXT25/b2Vl9++WXsS2o11Gq1tb0H2EEoSwqZ4srX05v9cEGYK8YDpbBc\\n\",\n       \"LoN+LxQKEWB4fHwcmUknJydRTp9x8WJt22oem8A8eXXjyWQSrB2yCzAD8GROPU4BGcxRCADL29tb\\n\",\n       \"XV5eBpswHA51fn6uxeK55DqnHy8Wq9NwYV92dnYic6fT6UTgLWufjDkPfJVWp3sDtnDJYThIKxdi\\n\",\n       \"pVKJWj7D4TDqlPhZP8g6YrSY93w+H8yaH8iH7PD1BIDCPeJndbn8ZU4oXEiWHP1y17yzI8SFuIz3\\n\",\n       \"fcB3mR/YUUA54JQA8FKpFDVdYFZJX/bzqkgj39S25tZJUWKqJFOF5cI+pft5L8vNgXBDODoq5XO+\\n\",\n       
\"m2Y98D36SzomQt+VK6xJFhhx4f0SMKFPaVxH6pZxgJYq5tTl4d93MJU11twb4IVic6Tsvlq/L8/v\\n\",\n       \"1/F54vv010FICnjS3/nYpCAn7f+2G9So+1NRjDQsQBgqgC3CCKBBsKHvFwJZOVTv4uJCP/74YwSG\\n\",\n       \"cvZHLpeLIEPiXUhJxYLB572/v69araa//vWvenh40KtXr0Joe+wHghOhh9Chb25tMf9+ABlMEmX5\\n\",\n       \"3f/te4rXuJeGw6GKxWKkXQLipBXoeXh4CKscEOWsp6Qo1MYhh7hqqtVqGB300ZkVSWtF87xuBGBP\\n\",\n       \"UsTsMDfMe7PZDEEMMMNifnx8VLvdDmVJo6YMwv0l+vuXau6WSCl/FA8xNrBqzpowFx4gDMMA4ISJ\\n\",\n       \"ePfunY6Pj1Uul2OvkJ1TLBZ1dXWlzz77TKPRKNyTBwcHsRboxzfffKOjoyPV6/VIG9/Z2QlwjkzL\\n\",\n       \"5XJrqffSSuawTlHSkgJMeTgBVYNxpXjQKsqfZAqehzgUSeFWxJgAiDBuKHJiQZiTXq+nh4cH/frX\\n\",\n       \"v5akSDtnj7L+eTaMHi80yrOXSqUIdmYfOiD36/ncSor94fLMA969xIe7tja1rYATj1lwt430YexE\\n\",\n       \"2nm3/gEkDiiklQJDuXJNj/zmunzuAiOrD1zfB9s3nP9L4zKklaWfXjtlNHjt/lkfFwcojpb5LIsV\\n\",\n       \"SZmctPlvEBhejMp/x/3cveJ98ziStK9Z/UrBafrddG2kQMTB6ccAUsiAkVYCjHXAmsEiQzg4XYq1\\n\",\n       \"DgXNepNW51DxvAiDf/iHf9C3334bawX2hriP/f19tdvt6ANpxFg91WpVo9FIX331lSaTSQAXrBv3\\n\",\n       \"pTsd3el01hQx9wYc3d3dqV6vhxXK3iKNFoYFS4zxk1br7Pr6WoeHh6FsfJ7z+dUhnyg7LGqAgLRa\\n\",\n       \"Q/V6PWIXyAyhIit9H4/Hse889Zc9uVgsdHR0FGCDeg9Y7cgQD5hlvljr9C39HsD/+vr6A1Z229k6\\n\",\n       \"WM7u7pBWGUbS8/4kGBnAiRuPViwW1+q/YDzO5/NYl1jyFBvM5XKqVCrB2BUKhWCWYMlIb0UGeTB1\\n\",\n       \"v9/X0dFRlIt3txtgCAMAdpKibbTpdBrA9fHxUbVaLeaQe9J3QHy1Wv1g7SCzWS97e3sRsOqynmdi\\n\",\n       \"LzqYcR3FPFxcXEQsDwHysFoewwJwwPDwSs0eX8j3MCJcD7ph7gwZsh/WB1bIwwHQ+ezdl0D31tw6\\n\",\n       \"rpDTDroASj9LlZFPkvvOpJWljtU1m83WAAq/ow/4wX2g+S6L0pUv12BSPYDJla0H1aaKmXu5snfa\\n\",\n       \"k+s7cHFQlLp2fJy8L6nV4wvRf8OiSZWA3yMrONnnk9+81LcUWPn7/tqBj49ZulY+BmAirTIVcONg\\n\",\n       \"FXq2B0KWAD7qkMzn8zgojTWA0EIw4mKk5kSlUgnL0dNeiRNZLpex7huNRvyWWAq3YJ6enkKgeuEs\\n\",\n       \"qmQCljjQz0915R9pls1mM6xhd025yxBAAUNBPQj6PR6Pw4XFfsAy5jfOWCLM3UXg1vpyudRgMNDe\\n\",\n       \"3p7evn0bzwvVT1o1QngymQQIYYwKhULEijjgJAbC4wPcTZWyOPyPq8MpdbdWGZeX6O9fonlGEuyF\\n\",\n       
\"9GGdKQ/+Rp7ALnn59ul0GinxMFme9XF2dqbLy0t9//33+sMf/hDKejgcRqBzv9+PIE/mjXk+Pj5W\\n\",\n       \"t9uNOiScVox75u7uTq1WS8ViUYPBIIwFAHValp/4kZ2d5xL+nU4n0oKR+yhk4pd4z90bkuJ+HiQK\\n\",\n       \"kIIxlZ7XBuDfD8xzoxcWBObPARB9J86DfUUBRMYMdvPu7m6NDeL5keNe/t/dQ4AtXnvJiZSNcp3v\\n\",\n       \"+iurbTWVOO1klqWPJe9AwMGHsyBck8b3sLj8unw3CwSlLhkUSpYSTJVmyoakuehZLhl3ofj104Da\\n\",\n       \"9L5Zr1M2w/uVunTSZ0mf09Ex1/ZFL62Qu89VKoD9dTrXzrD42Gf1LWu8/Rk/hgYTQJYMdC4Cgvk8\\n\",\n       \"ODgIIeEguFarhfJlrIlNAIBAmcOQcGw8wrFYLEYMBW4Ip7xxOcGsPD4+hvuHPmDdsAb9YDXYl93d\\n\",\n       \"3cgIABgREMghfJ5iK2nt3g60F4tFZKwQZEs2gINo3D3ud6cRoOllvlP5AmAisBU3F+ABMElBNgfz\\n\",\n       \"7rYBlHjsDPE4ABoUghtCKDNiUrBqeTaCLAF6Xjxrm40xdMXD2ABGYMjcmPO4E9YfqfXT6TSClmlk\\n\",\n       \"rJ2cnOgvf/mLPv/8cw0Gg2BKAKgAQQAl65Vr4WKh4i8MJaAfxqJSqURtksViVT2ZfcFz5HK5qOpK\\n\",\n       \"1hmZVqQvs748sJc96QatuwF9f8CYeI0jGCtn3Uh9z+VyUXeGmDIYP0AB8ge3J64xlynEaaUl7N2F\\n\",\n       \"xdi7XGI9s6c9xsxjFmFX+NsNe97Pals9Wyd1M3hHnW2Q1svEuxXGIKZWPs2pJqevsHpSRQfVlDIa\\n\",\n       \"Wb4xBxdZz8f9s1xOL4GiLFfGJjdJlmLeBBCymn+eplFm3YuNml7XmSaumwI2bz6H6Xh43IGDniwX\\n\",\n       \"ltOKH0PL5XKxHrHEsFDoJzQtQsozCRC4WGBUfETIuNI7PDxUu93WZDKJVMydnR31+/2ImyAqHoVB\\n\",\n       \"lL+7GZ+enuJ8mvl8rm63G25OrCVSKBlnCrsdHh6GYELwLpfPgbSUt2aOUP4If2mVxlgoPJcTh93x\\n\",\n       \"eizOjiG8YUYADSgoQI1n6cCccP4HLMty+Vw9ExCHAmUPnJ2dhZLyNObJZLJWAyiXy8V3OIeF50yz\\n\",\n       \"NVCeADpJEaPBWHgV3k8++WTNRbWt5q6c8XisVqsVGTweuIucxB2I0nTrmvFwA8hjrSSp2+3Ge5Ji\\n\",\n       \"TCljj2vNg289jgWrHTnvQAP2Z39/P5g0ZwkADfSL9UWwa7vd1ps3b2J9AiDQHfyeNSCt5BX1der1\\n\",\n       \"+hojAWjGYGDfkUmEnvOq5YwxDCZyBzlDBV5q5mAsAQAB+IBl1j2y3DPj+B5yDCOFNQ4D+fDwEMxq\\n\",\n       \"vV5fS8VmDIil8RiezDX3v7qCf2ZzS4iFwyJPLX1HY5uYDoQKloyDFb+uRyg79eXg4yUg4e+7snUA\\n\",\n       \"kirnVKikDI5b05uAht/H2Q+/p7NLKaBI3TJcK32mFHS4Ukj/9nv6WKRuIxfgjDnuN2ennFlKI8bT\\n\",\n       \"Z0sb8/sSRfhLNVKDl8ulTk5OIsuF01yXy2X47vnuzs7Omi+ZeYWCXi6fT/OsVCpBhUvPTAHpgq1W\\n\",\n       
\"KwA8qcRck+A5XE4oU0/hhomBPgcIEKToaxZKGmHkYJ/r+lkkzmTQAEWwTAi6fr8fghlqXVrNMeuf\\n\",\n       \"eANeu08fEMffDw8POj09VaHwXDDMLe9+v698Ph/z46npnmGD779YLIZ1fXt7G8IfC90L4j09PcX7\\n\",\n       \"WL8on1wut+ayIn6BGBgYgX6//1HEnJB2TtExrG4HA5wbhQVfKpWCYcLiBhzAXuCCgSGDWfnxxx8l\\n\",\n       \"KdJmHx8fY23A8mG4IMMBGswf4+3uBcAPgah+QCBz6udUSSt3hvScHUcc1Wg0itfu/gBosl48nmw+\\n\",\n       \"n+v4+FjD4TCYjWKxGOX5YafcZeiBtZ5qn8vl1kAzz0Z/ACH8TlIYTV5iA0bFgSLvM/e+L/gtHgnk\\n\",\n       \"N6wfhR/ZkzA+/AZg6s+V1baWrUNzP5srMGc0UvbBGRQmAgHPJk6DT5kIBitVtJI+uDdUHL9P2QpX\\n\",\n       \"JA5EssCAN79+Cgb4P1W07qvz370EZvg7BRlZjetmATLeTzNm6AfzkMac8L9fE+WRgke/Xhrz4iDH\\n\",\n       \"7+Eg7aee75dquGVgDs7OzjQYDGLzkx1DWiJBd8PhMKwNrJz5fB6xEIvFQtfX1yG0GQeEEkAH2tTn\\n\",\n       \"xil4fPsIUKx1qF5OFiYWAncR/u9cbpWBg58bxcp9AFP0kd+j/H09Eo9AITKAAGm1uVwuMiuYX0/r\\n\",\n       \"T/ekgxQ+29nZ0XA4XEt/JnaAOg24lKQVEKJ+htd7aLfba0fcf/bZZ+G3x0XEPFCKHaFOfBGuJ/r6\\n\",\n       \"/v37CMSsVqs6OTmJQE9Sc7ddhI2A0Lu7Ox0dHenq6ipOA6YUe6/XizXi2Uh+2JzLBVwjHhyK24R1\\n\",\n       \"hysBmc9rP4IgNVRTgwi5AVPO/AAYyQZCFmXJF+YKhvL+/j7AGcwKlXwdWFA75+HhIcA2fzebzTBS\\n\",\n       \"YD9IPwYMejyKr/Xl8jl+6vj4OM7+kVbsN3FvPC9yGMOEa7ZarQBGHu/FuFLbhfguZ7WRE4AXdDUg\\n\",\n       \"RVoFTLOnXFf81HlRW8vWcWsY5eSxFY5E0996EBwPDHp2IcX3+U0KBrLYAulDhsHZD+9TChLS5/D4\\n\",\n       \"DPrpit37t6ltYosAW95PR/DOmKTXyWqwTk5p8n2eAYbKr+mAwp89C8il7EYW0PIYnZ8aDxr9+xjA\\n\",\n       \"iWddoHio0FgoFNYOOcPtsru7GyWn/bTixWKhwWCgwWCwxl543Q6sb66PYPCsKwJEocQXi0UcVIZV\\n\",\n       \"iiIlkI/vEMCIAQGzgtDEIkqLHfIdLESEmbRiwjAqsHrz+byazeaav19a+bxRIIC2LOPB3WduXVar\\n\",\n       \"1TigjmsOh0PlcquTtzncLZ9/Ls3ebrdjfBHUrVYrFCjZQdyb+AOuT+EwytWnqdPSs1X+5Zdfxu+4\\n\",\n       \"brFYjBRn5nybDWANqD4+Po60dWIwCKYkboffeQwKTGIu95ytxfrB8nYA7ewX4ABWgXF0d7K7X1D+\\n\",\n       \"1CFxtxTzQHo7MWGNRkO1Wi3WA6DdS9ATCE7NFGJbeCae1dk3+uSGA/EruEyIP4PFRJbTd1fqPHOj\\n\",\n       \"0VCn04lzoVibjB+yvFarBeuEnK9Wq5pMJlFN2TNnYDYkhSsOo/L+/j7cZ8wFz89cOpDyjCBneLje\\n\",\n       
\"S4zg1tw6CA4HEbyPUk8VkTMKDgRSl0oW6+DXTpWk/85dLH7frGfwZ+G1AwN3cfjvHJh4SxmTlPUg\\n\",\n       \"0MrdIj5G/pzpNfxeKR3K93wD8x5CgeYpgmnfHYCkzFQWWPG+p39vcuP8FJjcdsMK6vf7kQmAlUDt\\n\",\n       \"ABQNUf2eaocAl56FBMoxBbesK1wPUMa8pqIq/l1YBizIRqOhwWAQ1CruDoQUKdEUasOg8NRDt1K9\\n\",\n       \"SN5yuYx6LIAeZ994Bn4rrRgkzisBDOEGgbImawn2hH75tTillt/c39+r2+2qXq+H6ymfz4cwR5mg\\n\",\n       \"SFjz5XI5DocrFotqt9sajUaqVqs6Pj7WcrlUv9+P8a3X6yGkPXsCgMYc+pEDXKPT6YSV7ZYpv3uJ\\n\",\n       \"/v4lGrJhPn8u0e9W/9nZ2RqT4NY+INeroXrqPGvC63Z4wTZnWlGIHjuIvHalDQsAkGJ9AmaQLaz1\\n\",\n       \"5XJ16jPuEGS41xshoBe2P5fLrWWBejgBwGY6nWo+n+vk5ESTyUT1el2j0Shcgyh7AtUBTAAW2EMv\\n\",\n       \"4Y98KBaLajaburm50fn5+VosCvoRo8VdNhgrBNYyVgAvWD1YRsbOSx8g173AG1lJqV5hTny8/bON\\n\",\n       \"a+5/ae3+jxodoqOgY6yolyxhV7QsSqeM3JriO6BQvzcD48qXhY6wywIqm9gWWrpZfqo548H10zgO\\n\",\n       \"n0T6lvbBn+ElZe1WRlb/WMg8twcN+xi7a8ybsyHuF5YUwWObruNrIWV9sp4hZcE+hkZAJ6yAB+7h\\n\",\n       \"XsDC5JmdYfNCR9DXgA2CMKXneep2uwECcDMwf9DI+MYp3EQdBAI1UQR8lz0oKRgKFAtgqFAoRF8Q\\n\",\n       \"SlDfMBewQghMF1wONsiawIXi1D1KbrlcrlXrRBhK64HVBEdKq+BAB0Kz2Swsxf39fTWbzRgPWCdc\\n\",\n       \"KChEAlLv7u50cXERgpx4nEajoXa7HQATBU0ALm4Zd/WgINyNhBLhux7UuO1gWBrujJ2d52J+9Xo9\\n\",\n       \"1mlqOLH33W2PnPOgaOYPNwQFxFiX7qLzOCNcQLjYiFnxfQRw93omHjuBjGbNHR4eajweS1oFLqOU\\n\",\n       \"naFfLp/T3DE4nDWXVqwj7KjLtUKhEDVZWAMYAs4+k3EDK0l/KUro8nMymcSeRxawRpfLZRyWiAwA\\n\",\n       \"JPJ99iUxQwTtHh0dBTsE4wXoB0jDliA3nG1nHQNy3BD9OXFUW8vWSRed/4/iSRVQqkxZKM6+AAy8\\n\",\n       \"pdZ/VsuiiPl70yCm13Sr3wGT38Of2Tcw10uZFq7LMzhIyWImuM5LLp0UoPhnWKr+fK5I0+ulTEgK\\n\",\n       \"rFDSCGTeS4sJcd2UMXFryIGLA6ZNjNk2Wq/XW2O2UEgwENStALQBRhCazmBxwBifQxmzdk5OTuJ7\\n\",\n       \"7AEUHkKLM3Tm87mOjo6iCJWXxnYBjFsHwPT4+BjxGg8PD2q1WmsBh7gzyJaoVqtrdTnu7u7Cp43A\\n\",\n       \"hb0gEM/ZFFxCs9kslJOnetJP4kQIGkawTyYT/f73v9cPP/wQSow9hZAvFouq1+u6vb1Vu93W7u7u\\n\",\n       \"WqAxwl+Srq6uVCqVVKlUdHl5KUnhbjs5OdH+/r7evHkTIMfL/hPUiuJDmJN1wZziwoLmLxQKQcEP\\n\",\n       
\"h8Ngd7bZvMgce/Dp6UkHBwehHFlLgErWHd8FOOMKkRSH8rnimkwmobxgIJhfDmj0wxy5HjIFAIjC\\n\",\n       \"BAzzPfoGWIf1GQwGIWM5+RtgTUwjawPXBsXaWF/EbC0Wi7Vqube3txEXwvOVSqUwWtwgZE17Zl6t\\n\",\n       \"VtOf//xn/fM///NaNeFcLqfPP/9c7XZb5+fnMd6Hh4fqdrtxWOjnn38ewdWwg5QCIPYHME2to8vL\\n\",\n       \"y2AYiT2BGfYwCcAdTAqynjk/ODiIU6YxNnZ2diLAelPb6qnEtCzl5/9L6wBhkzJKYxj4TpblneUa\\n\",\n       \"8H6gbLNYFwdE6XX5rVuR3j+UtYMUaf3QQwdnfo20ZkrWOG1yl6TNGZI0IJVN71aFWxlZY+h/O8DC\\n\",\n       \"opa0ViMhC0ikNF8KPpzt4l5pHMO2G8Iml8sFQ8IYYEGgJBG+7trweBEsLNaSp5i6tUN6LPPmcS8e\\n\",\n       \"JDscDlWr1bS7uxspiqw7ikQVCoVw95AGncvlAuSkwITfE3w3GAzU7XbDUj47O9OrV6/07t27YHBQ\\n\",\n       \"Fh7Y6lajjxFKJ5fLheLGmnSFzm+Ojo70/ffffwD6+R/XTrfbjbNvCoXneidYhC5r3rx5o/v7ew0G\\n\",\n       \"A5VKJZVKJXU6HX366acaj8d6+/ZtxLQQi4DlS6Ay92csOWsHFgHQVq1WQ1HPZjP1ej198sknkYm1\\n\",\n       \"zYZsfHp6CmXMmKcup8ViETVf/ERaruPGBkGfXAOWke+5oZTLrbJTkEUO5mEGiH9h70haS0vf2dmJ\\n\",\n       \"tHKKs0mrUgmz2UzHx8fBOEgK5o+znUjxf3x8DPbSi6DBRsIgIJs4gwfXln8OaCI+DEAIW1KpVAJw\\n\",\n       \"YESUSqUoFog7i2DqV69eqd/vRyl91iVz51lKrpPG47EKhULIClytHk8DaCNry3UlZQf8qAr2Li4j\\n\",\n       \"YrGQJ1ltK+AkK+7DLe9UmWaxAKkyZBFmgQ7uKX2Ydst7/n8KnJxRcfbCrfn0d/7dtG+u+P2+WNz+\\n\",\n       \"/RQEpQwFr3k+rIV0fLwx9vh1vQ+MlbNbXrUwZU1S3yKbzClePnMLPQU3jqi9j1lAMp3HFJBtsx0d\\n\",\n       \"HWk8HmswGITCdqHg6w7Wg/GhUiYC9OnpKYSB09x7e3sRLzEcDiNldzQaqVKpRC0T9+mjCFG0lUpF\\n\",\n       \"j4+PkQo7Ho8D1FSrVdVqNZ2dnQWFjyXocyophBBWEC4ghGe73dZf//pXXVxc6Msvv9RisYgDz7D+\\n\",\n       \"oMBZgzANWHEECRP/QcMq9do73333nY6OjiQpDv1zS53rN5tN3d3d6eTkRJeXl5GBBLVNBsTNzU24\\n\",\n       \"XCaTibrdbox3qVQKN950OtW3336r169fx5hTeK9QeM4MIYC02Wzq7OwshPZwOAxlDnja29vT73//\\n\",\n       \"+yi2BVjbVgOIeDwMY+4ucNxjACxcKq68WNvuKuHvarUa7gP2ByySZ/nhpvHD8nBTEKcBIwIIhj3x\\n\",\n       \"Gh2sO+aYf6RzM8/OuuDS8ZgmSQH6fbxINfbzcCRFsC5MJowa+8zl2tPTcyXj6+trDYfDqNHDs0uK\\n\",\n       \"DC/W2/v37yOQ3c8kIr4M9olaJ9PpNOKoCJAfjUY6PT2NoorVajX0kdc9caa3VqutBcvyTMwba5tr\\n\",\n       
\"eJp22rYCTlKa3xVsVpDoJqXjIMVBhLQOaNx6BxQ4LeX+u9Rd4BZilgvBWQxX8B67wqbyw5Q84Dd1\\n\",\n       \"J2UxNamLxccMheHtpdgT/I1c29kIZ48YQ4+udjDibhbQONd3PzrXg4KV1ovscQ2fM5/PFJBuYkg+\\n\",\n       \"BnDS6/XCioaNcJ8rQph4Dg/ckxSvsQTxgTM+BLrOZjPV63XV63VdXV1FiifpuMRJEMSHAPQKrtIq\\n\",\n       \"ELVSqQSAQshDaxP0JyliTVLgWq/Xw7pE4RwcHKher6vRaGg0Gumvf/1rHCdPaiL7CuAD2wPgefv2\\n\",\n       \"7VrMCowSgcO+NnBd4c6iSBgKBcVVrVY1nU5DaOOeQaGR8nt/f683b95oNBrp6elJvV5PhUJB4/E4\\n\",\n       \"qPJOpxNj9Omnn0atDVwzuHdS9x3P0u12Va1WQ8AjE+/v73V9fR2gctvZOoeHh+Gy5EgF9rC0OoDR\\n\",\n       \"DSVpxQjjXkFJA3B2d3cjBkhSuPsA6ovFcw2fXq8XMUH5/HN1WeSpx+DBWHAGD/dmfPleuVxWr9cL\\n\",\n       \"1xsxFcQtwQgQFE0GjPcdNwgKHtnJ3BLU2u12Y33xTNSD2dvbU6lUisJl8/l8rToxcrXT6ejo6CjO\\n\",\n       \"H2ItMZ6j0UjHx8cBPur1ui4vL6O+j8e7nZ+fx2GhBOQ703V+fq52u629vT1dXV2pXq+vMVboacax\\n\",\n       \"0+lEgC/PhPwH3HmsD2Powc1ZbWs8OANGJ1HYTtdlWcbOBmQpb67t10FIMNn8HmXBAvB/ruydyXBw\\n\",\n       \"k7o6UpDiAMcngb+dqXF/XKqsX+qLK35Xzi8BEw7mInCLDIPFYhF0o48/izIN1iTYy5+dZ8ByT8EE\\n\",\n       \"8+3X8YBQd004NZ8FTHh2B3cfQ/OUSihwrHFfC6PRKPzADr6wcDgYz4NLAYrlclmDwUC3t7c6OzvT\\n\",\n       \"n/70p/ju7e1tlLQHbLhf/+npKep7IHQXi0WcRZLP59XpdMLXTroshZlarZZarZZ+85vf6Kuvvopq\\n\",\n       \"maPRKGjx2Wymq6uroJwBDAAyZ/d4Pmjsg4MDNRqNteDfQqEQTIVboG4wTCYT/fjjj8rlnl0nw+Ew\\n\",\n       \"hPrFxYUkRQbT4eGhzs7O9P79e3377bcBAPr9vqRVTZL/+I//CN95rVYLmfX5558H68VaPjg40Bdf\\n\",\n       \"fKHf/e53uri40NnZWSjw2Wym9+/f6+bmRt9//30E4ZLRdX19HbVWcHdgFQMMt90I1iX+AorejRNn\\n\",\n       \"irCK2cueKeUxKVTVxdKeTCY6OjqKa/7444+6uLiI2kFY5s4w4jJjHj1biv56HCCFEXERSgogT5+l\\n\",\n       \"Z4aD+Ke7u7tIdYcRefv2rY6Pj6OEPXPHeOHKBATxOQBcUmTqeFq165G3b98ql8vFsRKMYaPR0HQ6\\n\",\n       \"1XQ61W9+8xstFgtVKpUANhcXF+p2uyFDOYOo3++vudGkZ7kLY+W6kdL2zDdsO2wSbEmn09HT01P8\\n\",\n       \"3gvmET/j7BGuXN/LadsKc8LgwhI4kyF9mJYqfRgMK61b/XyH67pVz2/9tVv6fi2aU/BZwa300/vN\\n\",\n       \"e/4cXhjOn8mZGpRGCsx4rrRvrqjpVxovs8kd4jEv6XP461Tx81tnmWBIfEN5vrz7MRFU6cZzMEJD\\n\",\n       
\"WfvvU7eS9x8q8mMAJ/1+P9wSrVYrzt8gMBMhvVyul3KHlfJzeTz1bnd3V69fv9a7d+8ivbHRaCiX\\n\",\n       \"y4XCI5J+MBjo4OBA+/v7kVUBhYy7wdcY1iL+9cfHR7169SrcL4+PjwEKYDRms1lkTsAOMIdQ3vyT\\n\",\n       \"pC+++EK9Xi/YNAQejAZKBGZjMBiEUnMWkNoh9N3X+unpqf7xH/9Rf/nLX0Lh5fN59fv9YCg+/fTT\\n\",\n       \"OHSRM1twnR0cHKhSqYTVPRqNwn1GX87Pz+OZ8MMfHByo1+uFUH779m3EC2EF39zcxLx9+eWX6nQ6\\n\",\n       \"YZWyfofDYViv1BSBweGe22wAWlfw7H9iJCStzbu0XpSSNePZHZ6BtVgsQtFjYR8cHOj6+nqtjIHH\\n\",\n       \"mQCaYFIA2/l8PpSfByJLK6UMI4ALSFply8xmM3U6ndiTXq2ZPsDUcQ8qJj88PB+0ScAswbLj8Vj3\\n\",\n       \"9/dxDZhIDAFYT3ednZ2dqd1uq9lsBqNUqVTimAqP/bm8vNTR0VGwIrBOvIdby41swC9Gw2g0iirR\\n\",\n       \"uNVcxzw9PQWI9/pInIPkxlej0Yj78Jywj8zXprYV5gRKlsnG7eJgwpVk6m7xQD4EEA/N4nVWw906\\n\",\n       \"0oq54PceMCd9eGZOmu3D7/htGoeSgiC/b8rKpIrd36Nxf/8/C3hgtbiPL6s5WwGC5fspG7XJtYSw\\n\",\n       \"Tt1NMCxc1+fTx8/nweeGuAqeFWoQJUSgJtaGuxheWui/VGs0Gnr9+nX4wweDQXzGeCKgsaAAIxRc\\n\",\n       \"g+YGoBA09+c//zmCSbGAfvjhh0g1Zu5JYSRmBaq8UHiuQAvzUa1Wg+2YsFi22wAADWlJREFUTqca\\n\",\n       \"jUbqdru6v7/X+/fv1ev14gyYq6srdbtd9fv9oLMlBQWOi4U5gCXis36/r6en5zN8EIaTySQoYGqb\\n\",\n       \"0DzgD5eKp0f63mdtjUYj/eu//qsKhUIAEtYLFvl//dd/RezE8fFxKH+Yi1/96leRXYCV7qmo7XZb\\n\",\n       \"3377rTqdjhaLhcbjsa6ursKydBcQMT29Xk9fffVVgM93796p0+moUHg+4RiL9/j4WL/97W/1pz/9\\n\",\n       \"Sa1WS+fn58EGVCqVX2T9vtS80qikqGPjChUlCShg78IKAGSJayBWCtnBKdsYraenp+p2u3GWEi49\\n\",\n       \"gi6x7mHscGfSl2q1GvLm4eEhWGHqhgDUAdcwPtwP0AMrub+/r5ubmxiTu7u7AAnIfACcMzruOiHm\\n\",\n       \"A7fi3t6e3r9/v1b7xWXew8ODLi4uAtTQb+5RrVZDhpAJyDEKBIkTmFooFIIxRXeRNbS3t6fhcBhh\\n\",\n       \"CG5ISCsjHYaT6rLz+TyK8h0eHmo2m0XW0mAwiGBz9AxxJx9lhVi3zKVVFTnpw+DUTZS9MyQpFeaM\\n\",\n       \"Qvrbl6zrrDgNp23dFZMKRmdiUus+VfDeP6xjty5YmMTl8Gz8QwGkFfZApi81V+ZZAMb77KDQWR7G\\n\",\n       \"OWVbUsBIywJLPm9cw/uV3tvfp39Z7227Ufthd3c3UoclBQPBZ71eT5LCYsrlclG6HgFaLpcjFTeX\\n\",\n       \"y+n169fa39/XZDJRr9fTfD7X6elpVOiEyWKs/QTS09PTAAfEddzf36vZbIbVCM0Ne+FxApSX393d\\n\",\n       
\"jb+hwWFTiAOAHeJZvXR9pVKJyqCk0GJBISjJoEBIk/rrtD1z7UC92Wzqm2++Ub1ej4yG6+vrKKB1\\n\",\n       \"eHioi4uLqJJLHIefZfMv//Iv+uMf/6hqtao//vGPury81O3trY6Pj6NIG4qFsZzP5xFrQ0zE7e2t\\n\",\n       \"ptOp+v2+Wq2W3r17F0phMBjoV7/6lRaLhU5PTzWfz9VsNiPgcTabxanVpHW+lNXwSzRfCyhGGDkP\\n\",\n       \"ZoeFICbCrWZiDPxMG2JYUK65XC6YM9Z6tVoNQCEpXIAEzRLbRRCzr5t+v69KpRKsBHFHFEr0gwM9\\n\",\n       \"xdtdant7e3GIXj7/XFARUFutVgNk4eKCrUQu+fpZLJ6L+1G8j/TltL4Q7Goul9P79+/12WefRVow\\n\",\n       \"z0BtFFL1cZUtl88l+bne3t6e3r59q4uLC7VaLV1eXkbRNggCmD5pVWPGSQJJwfDiqvHxA7jc3Nyo\\n\",\n       \"XC6HO5tn8Zon6C3A5Ka2FXDi6ZNMgFvgmwJbXaG628PjFVIAkbZNjEIamOMLRVov+OYUV+qC2hTg\\n\",\n       \"432C+oMuywJSLAQWT/rsLAgUkt8ndev46zQQlbHPcofR3K3Ftdi86fy5cnTwkZV1lILB1DUF4Evj\\n\",\n       \"bAAsCIO03P42G1aUpFCKT09PoUQRsKenp8ECuD8WxqFWq+nm5kaVSkWlUilcEWSfIHyI2O92u0Hx\\n\",\n       \"Ytl4ZkSn09Hr168j/gWLsl6vR+VTBLRXpnSL99NPP9XXX38dwj6fX52PQiT/eDyO4nD7+/uhyNk7\\n\",\n       \"xNFMJhN9+umnEYfgRdY8HsOVEJYj5bax0HnObrerP/3pT+Fa29nZ0fn5eVTwvL+/V6fTUaVSWStY\\n\",\n       \"12g01O/3Va/X9dVXX8UcYOFyCrFnrXlmBcIXt9rt7a3q9bomk4larVYoDdjixWKhXq8XVvWrV6/i\\n\",\n       \"mVAsxMWwZrrd7tbWtLQ68wVGgrmRFFa2pKgxgkzCzSIp3BiAbsAL9UBcxrIfxuNxVEmGxaAWEwpU\\n\",\n       \"UoBC9hQn5Z6cnES2WS73nCZLphouV0AzbjT6y29wb1IBuNls6ttvv430+sfHR71//z4y9WC8GCti\\n\",\n       \"cAB0gKSHhwcdHx+H0k5dH7Crr1+/1mg00sHBQZSdpz4ILKYXg0MWIxOurq6igCCxIP1+X9VqVf1+\\n\",\n       \"X+Vyee0cH68l49eFQYRZcQAHI8O65VTp4+Pj6AcpzsTLHR0d6fr6euOa2wo4cWbALWRJHwCTLPbB\\n\",\n       \"FbLTjJ5RwwZJGQAHAmm8iLtiUjeBMwNpbItfA2Xgv3Ol72NAcJQzJ/6cAJAUfEirA57obxpM68+N\\n\",\n       \"EseqRXk6IOTaXkCM/noKn7uN/He8lz5nyibx/fR5s+aZ7/m8YKGgHF0QfgzghAP4cK84W4JVKK3O\\n\",\n       \"rECY4uJxVowsGQIEPf0WAUVWEAF6i8UiWA9Ja/e/ublRqVRaYzzI5EHxelYPcRdv374NVxCR/Chn\\n\",\n       \"r34qPR98+Ic//EGFQkHff/+97u7u1G63w8J6fHzU8fFxpNyizKi4ijXtacOdTidSJXHpQcGfnJyE\\n\",\n       \"P75YLOry8jLie7gmbq5erxeWLe6mYrEYfcFaB0y12+1gL/Grt9ttffLJJ/EsuVxOx8fHkp73xr//\\n\",\n       
\"+7/r/Px8jRFjHlAkJycnAXZms5n+z//5P9rZ2VG/31cu91wS/Xe/+52Gw6G+++47lUol/fa3v/1l\\n\",\n       \"FvCGhiyBoveyALhy2ZMEyuKuIw3XwbKf3IzCZf0SQ/H4+BhxPShygCfxSpzwS00RScGAPDw8BPAh\\n\",\n       \"rklaxUIAvKlSzN7EpdTv99VsNiN+YzKZRDGxcrkc1xyPx2q1WgHW5/PnQnC3t7dhfIzHYz09PUUK\\n\",\n       \"OewDWTycrQTr9vDwoM8++0zfffedarVaGBMpkEHmjkajuAbjQNwIGTXHx8fBXj0+PqrdbqvVaqnd\\n\",\n       \"bqter6vX6wUAy+efT+qm0ByB7GQuSc8kA/Ep6DBcRLjGOp1OABp0C0zv9fW1Wq3WxjW3tYBYaaUU\\n\",\n       \"WQy8RkE58EibB2O6i4j//Tog9JRtcSYCpYqC5n3/nMWIsqehnL1YlIMgSWufp0yI99vZF5Svu3/8\\n\",\n       \"u84s0CevnAiwQUg6COS5UpDg93A3C699Q3g/nd1wYOMti0lJmTKfo/Q56QuWhYM67/s2mxcNKxSe\\n\",\n       \"T/2EhaCeAIGqOzs7QQ/DMmDJSSsQR0YN1gen1+Jnr1QqYT2NRiNJq7gurBssc9wXgChX3jAmrNX7\\n\",\n       \"+3udnZ3p/Pw8zn85ODhQq9WKYmOklxLV//j4+H/bu5ue1LUoDMDvFTgqWFMkCEIwAXVi/P9Dh/4I\\n\",\n       \"YqiJaEqjhWoVP1J7BifvctN7z/S6B+8zMTEqUNu9115rf+Dq6srKFWVZYjQabax+mM1m1kkxc1KW\\n\",\n       \"pS2p5cTEVqtlO4hGUWSTFFlX5zVI09TmCARBgDzP7fnmCJAd1tfX10aa/uTkBNPpFLVazZZis2yR\\n\",\n       \"5zm63S6GwyHiOLZM0/HxMZIkAQDEcYzVaoU8z1Gv13FxcWGBCBtvNso8ZPHm5gZ5nuPw8BDNZhOT\\n\",\n       \"yQQAMB6PrTFfLpdI0xSDwQDb29v/2uPl/8bMFu9rlm3YTjCY5QRPPgPsiHivMhhlW8XSAveX4fPM\\n\",\n       \"ElG9XsfDwwPOzs4se+Q+829vb9ah8vX5/2WwzeDFfYbSNMXe3h7CMMR8Psf+/j6SJLFdVtmOMRAr\\n\",\n       \"isLue95HRVHYEngeHMh7lJu1MUhmWxzHMXZ2duz1mVXjnBVmjhgcu2VeTrTn5+Z9zdVILy8vNieH\\n\",\n       \"g471em2bnTEw6HQ6GweSdrtdCzr4laWp5XKJwWBgc6QYJHFwxLYoTVPMZjOcn5/j+fnZ2nVm/5gt\\n\",\n       \"zfPcdqvlBnJ/86OnSXHU687cdoMNtyNzsyBu4OGO8quBDP++W4JxV5fw9/l9t9ThcrMQ7ByrnTw7\\n\",\n       \"SzewYWdZ7XyrWYIqN+gCNiebVssg7nt0rx8/t3sNq+UaVzWoqAZs7s9Ur4v7XqrZE14XBojV12dq\\n\",\n       \"1v271WyTG6Qx6OJruV99CE6YIfj4+LDDvjhSAb63Pg/D0HYK5ZJDjhIB2Kiv2Wyi0+nYrq08iTQI\\n\",\n       \"AmRZhjAMbT4IJ1YyK8Dab1EU9j5YvgG+Z9qz7MFOhUE4/2+slXMZJScsukuD2aE3Gg30+31kWWaT\\n\",\n       \"dFmmKYoCcRyj3W7bKO3u7s5q54vFAo3Gn8PMtra2EEURgO8SJ1dAMbOSZRmSJMFyucTR0RHKskQc\\n\",\n       
\"xzZ/jW0FU+gsD7y/v2OxWNiW5aPRCFEU4devPycEJ0mCwWCAy8tLALBSQL1eR7vdxnQ6xWq1Qq/X\\n\",\n       \"w+npqa3E6ff7uL6+xnA4tH0mmPpvtVq4vb3FZDJBu90GANuwi/uZcE5LrVazn+H15qFvP4VbnvOz\\n\",\n       \"cGUR2wTOdeA2/rwX+Bn5DLME4GYBWBbic+IGLNyXZz6f22ohzm1ie8jyEjtkZlYY1PF7LK8wU8Hs\\n\",\n       \"DOdBuFkaljFZVuPnYhmZmbMgCGxSLU8qZtnIDeA4x6LX61l5lgNR7jDLVUYciLJcyu9z1Q8DRban\\n\",\n       \"HARwPhcACwhZvmIWpt/v2yq0x8dHHBwc4P7+Hru7u7ZUmXufrNdrdDodO/n46enJBjd85jnp+PPz\\n\",\n       \"E+Px2PaKeX193TgLrNFoWNvIoNTdb+m//PO3DlJERETkJ/z8YSQiIiIiDgUnIiIi4hUFJyIiIuIV\\n\",\n       \"BSciIiLiFQUnIiIi4hUFJyIiIuIVBSciIiLiFQUnIiIi4hUFJyIiIuIVBSciIiLiFQUnIiIi4hUF\\n\",\n       \"JyIiIuIVBSciIiLiFQUnIiIi4hUFJyIiIuIVBSciIiLiFQUnIiIi4hUFJyIiIuIVBSciIiLiFQUn\\n\",\n       \"IiIi4pXfPRZNtgyLF3IAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x112bc2190>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"ksize = net.params['conv'][0].data.shape[2:]\\n\",\n    \"# make Gaussian blur\\n\",\n    \"sigma = 1.\\n\",\n    \"y, x = np.mgrid[-ksize[0]//2 + 1:ksize[0]//2 + 1, -ksize[1]//2 + 1:ksize[1]//2 + 1]\\n\",\n    \"g = np.exp(-((x**2 + y**2)/(2.0*sigma**2)))\\n\",\n    \"gaussian = (g / g.sum()).astype(np.float32)\\n\",\n    \"net.params['conv'][0].data[0] = gaussian\\n\",\n    \"# make Sobel operator for edge detection\\n\",\n    \"net.params['conv'][0].data[1:] = 0.\\n\",\n    \"sobel = np.array((-1, -2, -1, 0, 0, 0, 1, 2, 1), dtype=np.float32).reshape((3,3))\\n\",\n    \"net.params['conv'][0].data[1, 0, 1:-1, 1:-1] = sobel  # horizontal\\n\",\n    \"net.params['conv'][0].data[2, 0, 1:-1, 1:-1] = sobel.T  # vertical\\n\",\n    \"show_filters(net)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"With net surgery, parameters can be transplanted across nets, regularized by custom per-parameter operations, and transformed according to your 
schemes.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Casting a Classifier into a Fully Convolutional Network\\n\",\n    \"\\n\",\n    \"Let's take the standard Caffe Reference ImageNet model \\\"CaffeNet\\\" and transform it into a fully convolutional net for efficient, dense inference on large inputs. This model generates a classification map that covers a given input size instead of a single classification. In particular, an 8 $\\\\times$ 8 classification map on a 451 $\\\\times$ 451 input gives 64x the output in only 3x the time. The computation exploits a natural efficiency of convolutional network (convnet) structure by amortizing the computation of overlapping receptive fields.\\n\",\n    \"\\n\",\n    \"To do so we translate the `InnerProduct` matrix multiplication layers of CaffeNet into `Convolution` layers. This is the only change: the other layer types are agnostic to spatial size. Convolution is translation-invariant, activations are elementwise operations, and so on. The `fc6` inner product, when carried out as convolution by `fc6-conv`, turns into a 6 $\\\\times$ 6 filter with stride 1 on `pool5`. Back in image space this gives a classification for each 227 $\\\\times$ 227 box with stride 32 in pixels. 
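As a worked example (a sketch chaining the per-layer output sizes, using the layer parameters from the CaffeNet deploy prototxt; the padded conv2-conv5 layers preserve spatial size), a 451 $\\\\times$ 451 input shrinks as:\\n\",\n    \"\\n\",\n    \"    451 -> conv1 (11, stride 4) -> 111 -> pool1 (3, stride 2) -> 55 -> pool2 (3, stride 2) -> 27 -> pool5 (3, stride 2) -> 13 -> fc6-conv (6, stride 1) -> 8\\n\",\n    \"\\n\",\n    \"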
Remember the equation for output map / receptive field size, output = (input - kernel_size) / stride + 1, and work out the indexing details for a clear understanding.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"1,2c1\\r\\n\",\n      \"< # Fully convolutional network version of CaffeNet.\\r\\n\",\n      \"< name: \\\"CaffeNetConv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \"> name: \\\"CaffeNet\\\"\\r\\n\",\n      \"4c3\\r\\n\",\n      \"< input_dim: 1\\r\\n\",\n      \"---\\r\\n\",\n      \"> input_dim: 10\\r\\n\",\n      \"6,7c5,6\\r\\n\",\n      \"< input_dim: 451\\r\\n\",\n      \"< input_dim: 451\\r\\n\",\n      \"---\\r\\n\",\n      \"> input_dim: 227\\r\\n\",\n      \"> input_dim: 227\\r\\n\",\n      \"152,153c151,152\\r\\n\",\n      \"<   name: \\\"fc6-conv\\\"\\r\\n\",\n      \"<   type: \\\"Convolution\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   name: \\\"fc6\\\"\\r\\n\",\n      \">   type: \\\"InnerProduct\\\"\\r\\n\",\n      \"155,156c154,155\\r\\n\",\n      \"<   top: \\\"fc6-conv\\\"\\r\\n\",\n      \"<   convolution_param {\\r\\n\",\n      \"---\\r\\n\",\n      \">   top: \\\"fc6\\\"\\r\\n\",\n      \">   inner_product_param {\\r\\n\",\n      \"158d156\\r\\n\",\n      \"<     kernel_size: 6\\r\\n\",\n      \"164,165c162,163\\r\\n\",\n      \"<   bottom: \\\"fc6-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc6-conv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   bottom: \\\"fc6\\\"\\r\\n\",\n      \">   top: \\\"fc6\\\"\\r\\n\",\n      \"170,171c168,169\\r\\n\",\n      \"<   bottom: \\\"fc6-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc6-conv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   bottom: \\\"fc6\\\"\\r\\n\",\n      \">   top: \\\"fc6\\\"\\r\\n\",\n      \"177,181c175,179\\r\\n\",\n      \"<   name: \\\"fc7-conv\\\"\\r\\n\",\n      \"<   type: 
\\\"Convolution\\\"\\r\\n\",\n      \"<   bottom: \\\"fc6-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc7-conv\\\"\\r\\n\",\n      \"<   convolution_param {\\r\\n\",\n      \"---\\r\\n\",\n      \">   name: \\\"fc7\\\"\\r\\n\",\n      \">   type: \\\"InnerProduct\\\"\\r\\n\",\n      \">   bottom: \\\"fc6\\\"\\r\\n\",\n      \">   top: \\\"fc7\\\"\\r\\n\",\n      \">   inner_product_param {\\r\\n\",\n      \"183d180\\r\\n\",\n      \"<     kernel_size: 1\\r\\n\",\n      \"189,190c186,187\\r\\n\",\n      \"<   bottom: \\\"fc7-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc7-conv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   bottom: \\\"fc7\\\"\\r\\n\",\n      \">   top: \\\"fc7\\\"\\r\\n\",\n      \"195,196c192,193\\r\\n\",\n      \"<   bottom: \\\"fc7-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc7-conv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   bottom: \\\"fc7\\\"\\r\\n\",\n      \">   top: \\\"fc7\\\"\\r\\n\",\n      \"202,206c199,203\\r\\n\",\n      \"<   name: \\\"fc8-conv\\\"\\r\\n\",\n      \"<   type: \\\"Convolution\\\"\\r\\n\",\n      \"<   bottom: \\\"fc7-conv\\\"\\r\\n\",\n      \"<   top: \\\"fc8-conv\\\"\\r\\n\",\n      \"<   convolution_param {\\r\\n\",\n      \"---\\r\\n\",\n      \">   name: \\\"fc8\\\"\\r\\n\",\n      \">   type: \\\"InnerProduct\\\"\\r\\n\",\n      \">   bottom: \\\"fc7\\\"\\r\\n\",\n      \">   top: \\\"fc8\\\"\\r\\n\",\n      \">   inner_product_param {\\r\\n\",\n      \"208d204\\r\\n\",\n      \"<     kernel_size: 1\\r\\n\",\n      \"214c210\\r\\n\",\n      \"<   bottom: \\\"fc8-conv\\\"\\r\\n\",\n      \"---\\r\\n\",\n      \">   bottom: \\\"fc8\\\"\\r\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!diff net_surgery/bvlc_caffenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The only differences needed in the architecture are to change the fully connected classifier inner product layers into convolutional 
layers with the right filter size -- 6 $\\\\times$ 6, since the reference model classifiers take the 6 $\\\\times$ 6 spatial grid of `pool5` as input -- and stride 1 for dense classification. Note that the layers are renamed so that Caffe does not blindly load the old fully connected parameters when it matches layer names against the pretrained model.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"fc6 weights are (4096, 9216) dimensional and biases are (4096,) dimensional\\n\",\n      \"fc7 weights are (4096, 4096) dimensional and biases are (4096,) dimensional\\n\",\n      \"fc8 weights are (1000, 4096) dimensional and biases are (1000,) dimensional\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Make sure that caffe is on the python path:\\n\",\n    \"caffe_root = '../'  # this file is expected to be in {caffe_root}/examples\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, caffe_root + 'python')\\n\",\n    \"\\n\",\n    \"import caffe\\n\",\n    \"\\n\",\n    \"# Load the original network and extract the fully connected layers' parameters.\\n\",\n    \"net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', \\n\",\n    \"                '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel', \\n\",\n    \"                caffe.TEST)\\n\",\n    \"params = ['fc6', 'fc7', 'fc8']\\n\",\n    \"# fc_params = {name: (weights, biases)}\\n\",\n    \"fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}\\n\",\n    \"\\n\",\n    \"for fc in params:\\n\",\n    \"    print('{} weights are {} dimensional and biases are {} dimensional'.format(fc, fc_params[fc][0].shape, fc_params[fc][1].shape))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Consider the shapes of the inner product parameters. 
The weight dimensions are the output and input sizes, while the bias dimension is the output size.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"fc6-conv weights are (4096, 256, 6, 6) dimensional and biases are (4096,) dimensional\\n\",\n      \"fc7-conv weights are (4096, 4096, 1, 1) dimensional and biases are (4096,) dimensional\\n\",\n      \"fc8-conv weights are (1000, 4096, 1, 1) dimensional and biases are (1000,) dimensional\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Load the fully convolutional network to transplant the parameters.\\n\",\n    \"net_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt', \\n\",\n    \"                          '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',\\n\",\n    \"                          caffe.TEST)\\n\",\n    \"params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']\\n\",\n    \"# conv_params = {name: (weights, biases)}\\n\",\n    \"conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}\\n\",\n    \"\\n\",\n    \"for conv in params_full_conv:\\n\",\n    \"    print('{} weights are {} dimensional and biases are {} dimensional'.format(conv, conv_params[conv][0].shape, conv_params[conv][1].shape))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The convolution weights are arranged in output $\\\\times$ input $\\\\times$ height $\\\\times$ width dimensions. 
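Concretely, the `fc6` inner product weights are (4096, 9216) while `fc6-conv` holds (4096, 256, 6, 6), and the element counts agree because:\\n\",\n    \"\\n\",\n    \"    9216 = 256 * 6 * 6    (pool5 channels * height * width)\\n\",\n    \"\\n\",\n    \"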
To map the inner product weights to convolution filters, we could roll the flat inner product vectors into channel $\\\\times$ height $\\\\times$ width filter matrices, but actually these are identical in memory (as row-major arrays) so we can assign them directly.\\n\",\n    \"\\n\",\n    \"The biases are identical to those of the inner product.\\n\",\n    \"\\n\",\n    \"Let's transplant!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"for pr, pr_conv in zip(params, params_full_conv):\\n\",\n    \"    conv_params[pr_conv][0].flat = fc_params[pr][0].flat  # flat unrolls the arrays\\n\",\n    \"    conv_params[pr_conv][1][...] = fc_params[pr][1]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"Next, save the new model weights.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"To conclude, let's make a classification map from the example cat image and visualize the confidence of \\\"tiger cat\\\" as a probability heatmap. 
This gives an 8-by-8 prediction on overlapping regions of the 451 $\\\\times$ 451 input.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"[[282 282 281 281 281 281 277 282]\\n\",\n      \" [281 283 283 281 281 281 281 282]\\n\",\n      \" [283 283 283 283 283 283 287 282]\\n\",\n      \" [283 283 283 281 283 283 283 259]\\n\",\n      \" [283 283 283 283 283 283 283 259]\\n\",\n      \" [283 283 283 283 283 283 259 259]\\n\",\n      \" [283 283 283 283 259 259 259 277]\\n\",\n      \" [335 335 283 259 263 263 263 277]]\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<matplotlib.image.AxesImage at 0x12379a690>\"\n      ]\n     },\n     \"execution_count\": 11,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    },\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAAXMAAAC5CAYAAADavt/0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvWnMbWl23/Vbz7T3PtM73aHGrqrurrbdbbttuulYVhqT\\n\",\n       \"gcgycQwIGVlYFiCQhSCRAUu2IyHExxAhWSIKcsRoRSJBfLAi5AgMIU4CiWxIYqc73XZPRVXXcOu+\\n\",\n       \"953OtPczLT48b7dN1O6qxHVz7e7z+/See/Z99nnPfe561l7Df4mqcuDAgQMHfn9jnvQHOHDgwIED\\n\",\n       \"v3sOxvzAgQMHvgE4GPMDBw4c+AbgYMwPHDhw4BuAgzE/cODAgW8ADsb8wIEDB74BeCzGXES+X0Q+\\n\",\n       \"KyKfE5Gfehz3OHDgwIEDv4W813XmImKB3wD+KPA68KvAj6jqZ97TGx04cODAga/yODzzTwCfV9VX\\n\",\n       \"VDUBfwn4ocdwnwMHDhw4cMvjMObPAq/9ttdfvv2zAwcOHDjwmHgcxvygD3DgwIED/5Rxj2HN14Hn\\n\",\n       \"f9vr52ne+VcRkYPBP/BYUVV5Evc97O0Dj5vfaW8/DmP+fwMvi8iLwBvAvwb8yD960dPf889gMO3Z\\n\",\n       \"wCjGG7QX1IM4gxGhqiJiEFMINdMZIe935H3ECRgpqCkoiqpiUgUUjGCtRUSotVJrJidlt1NSugEs\\n\",\n       \"xhi2byaWzw7YXlgOM+YLz2J2zKyb4Z1jHEfOLy94eH4BCMMM7GBx3qA2Y5zFONCijGOmZAUpCFBq\\n\",\n       
\"pVah1szqeGC2MtRJmR5Zbq4i41bJSTHGMF2NPPOtc3zvEWvJ2aLVUKtBFFQV1YK9/X1s8IgD64Qa\\n\",\n       \"KkpGfaWkSiqFGoWyVXStCLC6f8S954/50Hc/z73nnmV5Z4Hzls1mw9tvvcErX3yDh2894pW//oDh\\n\",\n       \"2TnbhxNmctSaMQiOijWK85XQO5wFJ+0zxpTJxdAd9/izwN17c/ysYK0jJxi3mcu3R3YXyngZmTax\\n\",\n       \"rest4iriwBnLyQt3+MC3PM9v/p3P8x3/wrcwTYnz1895+PoFXgQ1ghqotQKGUguhh5mtuOAIXQUs\\n\",\n       \"IPy1v/Cbj2Fbv3t++Id/+Ou+/6lPfYpv//Zvf8d17t69+47X/Mqv/Aqf+MQn3vG6H/qhd05b/fzP\\n\",\n       \"/zw/9mM/9o7XffCDH3zHa372Z3+Wn/iJn3jH6y4vL9/xmp/7uZ/jx3/8x9/xOoBf+IVf+Lrv//Iv\\n\",\n       \"/zLf933f967W+qVf+qWv+/5rr73G888//3Wv+Qpf+tKXvu77m82GxWLxjus8ePDgd3zvPTfmqppF\\n\",\n       \"5N8H/hfa/67/+mtVsli5jfCIIKIoFVUB45rxEgGpGGPbBzWCFWGxGtA+cLO+xqpSKxgxxJpbfEcV\\n\",\n       \"LQKqON+MOlSMqfTDiNoeTQVnLGIi3nswlZgifoI9O5wK2VpSSogIq6M5OSeWqx58QZwSKVQBtUqt\\n\",\n       \"GRcsWjIUoYpB1YKOrFaeo/tbqnSIeqaq7DaVkgzWGayR26/BtN8FEHGUmhEsIgDtUCsoYdGjoiCK\\n\",\n       \"mgLGYL1CqLi5IkmpYyEboVgLO8fFo0v8mSNZwQwGO/dIFYyxqAW/Uvyo2F45fb6naGH3+oQ3gnMG\\n\",\n       \"q5ngPc4YnBSMCtYKuRScM4g35FQwpVBToVbBSDtgfTB0vSW6hHUVqF/dA2IF7yzzbmC1OKLrB4IP\\n\",\n       \"HB0ds9tu2Q0di9WcqoVaK6qZUoVCRXMmRcu2E7pswLcDepi/1zv6wIHfHzwOzxxV/avAX/161xhv\\n\",\n       \"0aqogGIRCoJQVBERVBUrjpQzM2dxttI7Q60RkcIw88QxIhVyyph2XxADBapCpmCMQcXRzZSjowDG\\n\",\n       \"sL0ZyVHZPkq4AM1/hf2UyGWHsY7ZfMD2hpNhgbVHiBiyTkSNxLpDCqhJZAtiLMQC1qElgSjOVUzX\\n\",\n       \"0d2PDKce3SnXo7BPE0LFOoOxig2CiJC0YtVg0fawYhxFBWpBMAiF0DmcETKgCrV4XNnBzFLMhErF\\n\",\n       \"DoI6wRmD6SCZyiCeo9MFpWbiOLE53xBLZXP5iPXVmpyu6GYV1yvhbmERlY7A9jziLVh1WGswGtFi\\n\",\n       \"ECeots0zJaWaQph5LIZJMr3pySkDQh4rVjzGZ5SKmAzVoFWRJJjOYhfhq09n1js67yhDR7+cEbY7\\n\",\n       \"8tgO+lIMIhWyImLbvimCCWCtoeulHc4HDnwT8liM+btCoBq5zcDehhlFUASp7bE6N4tFTIXBCkJB\\n\",\n       \"gVoLaYrUknDWEmYdu92eqgpqUKNUMpotnRe6vtLNPMuzGasjTy7Ko4eRMgmut8SoWBVUKzllilZE\\n\",\n       \"DH0fmM07jDFYa5ninov9I0wVSAYlItZhXKVWgzUV7S0qMD+Z03fK++4McASP5JLVdsXDPAGCCPhg\\n\",\n       
\"sFaZn4XmgVdBjKVYEOMgtsOIWvHBIEZQzYhx1KqIFDRZyiYjnUE8MGS8GIorlB3Y5OkXHgZhSpWr\\n\",\n       \"yw16fsPF+Zr99hz8SLEJFzJ3PtIR5tDdUbwVclbMFqQCVERamEPEkWLBqqfWQqmC2J7l2YzF3GOs\\n\",\n       \"IghxqoxjZLPJpCkiGIxxVNHbEFih7JWyKoTeYZ3wvo88hwSPLwXvDLNZz75mam0HbvPQlZQStRR8\\n\",\n       \"aJ9TzMBsPsf7J7el3y337t17z9Z69tn3rlDsox/96Hu21vd8z/e8Z2t97GMfe8/WeuGFF96ztVar\\n\",\n       \"1Xu2Vgjhd73GE9v5VhyqGWMMSRWVSikZqR1GDDVXrAqCRWtkjyEMilQl50wpFS0VrMEFx+n8lKuL\\n\",\n       \"G1KKgFDVgShjivQnM/yicrSA49MeGxxh2OGs5fpiIt/sMcYg6qilkFLz0AsDIh3GCVUnqikYa6gx\\n\",\n       \"MdWJSW4IdU5vA84pyVvUKt3Ms5jB0dnAP/ttH+F8+xZ3ujt86vwKTR3iMkEc1uptvFdQgWocWVyL\\n\",\n       \"96sgXtCoCAqYduY5jxGYxh0yQSqKDmC6FmZxwaALcNViUYoU+sUMYxLjuOHBw8T64Y7t248oLtLP\\n\",\n       \"Cgwb8gxOP9SRdc+wEqKFcGmQWDFVKFOkytDCYbVCgTFWijp0Jkjn6JxnOczBF/b7PdhE11kmImNp\\n\",\n       \"YZWcKrXqba6k7YV9zhgrIMLT33KXlCcymbHuESKqGbRAVsiZFEdyrNTJQhb6IeC9IziPd/ax7lsR\\n\",\n       \"+X7gZ2khxP9KVf/MP+4aB2P+j8fHP/7x92ytF1988T1b6+jo6D1b6/e1MVdVrLGU2sIrBsGIJedC\\n\",\n       \"Ni3UQKmoFKRC2k5siyA1oTmSU8JQ29+UwtB3DM+c8uh6y2azQycBqxh1bDd7Fkcr7GDpF5750ZL5\\n\",\n       \"6oT9/g3GGJmvjsmpUEplGiP7fcLulSxK6IQQAiKVsYys95fs40RmxPeCtS1ujRpMZ6imYoJhNu/5\\n\",\n       \"nhe/lSN/n3/3j/8kf/vv/R+sv/Q/8WgO67qjWnDOIaI4X8g1YJylH3rEQsnajCgVQdBSMM5DqaRc\\n\",\n       \"sDmAVNQKOSayGjCKhoLrCmIVMxN832F6pdrKzc3I9uaKR6+vsVHp+ogLFqrBaWXoF5g+sWZL0Up3\\n\",\n       \"BuMuY64CBYuUiHeOXKTZ1ipkKRytThhCTymFKUaCNRhjWvzfecLSMZsiV+sbKpWqisuGTMXOPNYL\\n\",\n       \"u7RnO22RfcZ7Ty4JzbcJcBFyqeRcQAw+BFQTtQpKwrkZw9BjTUuAPi5uu5v/HL+tu1lE/sqhu/nA\\n\",\n       \"7wWemDGvNA8aFDGGioGiWNviqaRMAUQrVSoinu31FsqeNG6ASt8ZhsHhXQ9iCH3HU13P9dAxjiNX\\n\",\n       \"m4laM2lSri5HnnrfDNcJR7NTzHJgfN5g5SHee6YpsdmM1EfCuLlknBzGCo92kYX1FAO7ccc+ZvZp\\n\",\n       \"fxvr7rHWY51BxFGL0puAx7GczdnulT/+iR/k+guv873f8oeQ7tvYvvqn+Zz1rKcrTAsqgfUYFXxv\\n\",\n       \"mS0ctbZvJ08FRKAAtwa+ilIqqFEwhhozUgyahWwyMhdAcN5iqYgWnAuMaQe7yvlbG/YPY4uBG6WU\\n\",\n       
\"DFV5+vh93Dk9wc48b9RXGMcrfDAkDFEyWg1ilUrCucA0ZWoRUipcXWyYnwzU6igUjPGoEYrpqBpx\\n\",\n       \"RtFQMcHhbQCrFAOzIdAvAqG3LRGuEKfYnrxyJpYJlYL1jhQzIhXNFckgYvFBSDninGWxmDPvbfP6\\n\",\n       \"Hx9f7W4GEJGvdDcfjPmBJ84TM+ZSFRVt//G1NrMmUGMBUUouOBXEpmaUbg1YTQI44n6iTpk+FOZz\\n\",\n       \"xTlLk5nJdL1HELwTdrvEOI5sb/acvzVy5+4xetoRwqw97npHjLHFYM8rm+sNKSW2+4ldvsbkxEIE\\n\",\n       \"cY4coRiwPuCD4vr2TGFsRZTbckhPqpljf8SJnvDrf+tvc3P+AHHCR77z4/zMT/7n/Cd/9mf43AWI\\n\",\n       \"te3pxLYY8mI1ox8CKWa0Vvaqt0ldoaZMLrQKH+dQa6i1VQGBoeSC5hZeqVVRlRbr9xYjmYphTJG4\\n\",\n       \"GTHFUHImRc+0zZzcPeXe0XM8+/Tz2M4iainxdd68fpNynJG1QWzFiLA8mmOXjgG4fG3PtANjDSlN\\n\",\n       \"LIMHDCllKhWt5TYkBhaDteCXLfzVGUNYuHYoBsF6ZUwbyujw3lOrUmrLYeQYAUvXOSKZnEdUlJKF\\n\",\n       \"2bBkPp8xny+YBUeO+XFu26/V3fwHHucNDxx4tzy5bFFJqLW0gjvBqFC0Ai0RaSg4cVhjMOa2TDE7\\n\",\n       \"sq9EIi5ZJCZ2u5FlmjXj7xwI7XDQRNWKdY7QdaSUub6cGHeGFJUuwHy5YJ4n2BZC77m63IC0KpiS\\n\",\n       \"JorCMBN8V2+NmSPngNaKOI/3ihiBWlvBnSglR0LXk/Ke7376O3n0a/8Q38+ZdwOf/X9+lR/86T9H\\n\",\n       \"Sn+SMBvAteIbKaAWul7AZQLCGAUwGK1ULRQFi6BiKa2UvoV2xJPqRDUeam6lgcW0Wm7rEIFcEik7\\n\",\n       \"0l7R6Kk6UqtBkyEnxVXD7OiI+WKF8cJyfsrRfMPN8ZZ6ueZmscPtPFaEcGSZny1IVTFhRn17i6On\\n\",\n       \"kFEVxrwjiuCdx1iLxbVSSpcwTvChPYmIE1xooSkR15KhGTCFWpVaC8E5sJmorX+gZEOtFmPAZ08m\\n\",\n       \"Ybww6wJWDcEHRB5rr9C7cvs/9alPffXne/fuvacx8gPfXMQYiTG+q2ufnDGvGcRSpMW9KxFTDUkz\\n\",\n       \"zkJwCjoSPFgPBovthJwNLhrGKoxF2F5FLswNUi39ImOtxVRFSqHGQsotPCEus9vtePRwzbPPVGqs\\n\",\n       \"LBdHpGMl5RHjA+Jyq2HWyt17K6Tbc3wi2LlSipDUNi/TWWzvGfoOY1vZ4LTPVBTjPFITi7JCrze4\\n\",\n       \"rmPoO0BZnt7jH/7FP8Nf+rm/xvf/5B+iGKGKEqeKN4YuCMYaCo6cb24972a5BUvRSlVBChTbnmZU\\n\",\n       \"AGORotSk6OjJU6F0CefbwSYVSorN221dSIgYUs7ECfpwxJ2zezz33HNsxjX78YybmxuWyy1H7zvi\\n\",\n       \"jf3bXHxhTT/v6JceNxhWqyNWdyKLM8fmagTjSbrHpEoGYpxweJRCqe1gNcHgnGAcOO9wwbdyRdeS\\n\",\n       \"P1VbYrVVK02UMaGlYD04K+ziCDjsVyp7KvSD5+zsmNVqhnO21dw/Pt6xuxl4Vw1BBw68G0II/7/k\\n\",\n       
\"6Ha7/R2vfXIx85RBLRXbaocVqihOlOAqnVeMhy4IzjZPVFSheqaxIyAYI0zrxPpmonLFfBNah2Jn\\n\",\n       \"yLmSs1JiJmvGWoeYwlsPL3nmrUvs0wus3eI6h+06ttsbprhDidy/v8LayvJsRlgWJt3jgydmENdq\\n\",\n       \"2YMPGGvpuo7gAr3fMu0mjKm4rBzNe24entP5GUYqRiymZG7qwPWX/y6f/MC38n+dfwESJN3R9wHj\\n\",\n       \"KgZHLIUSa+tsNRZRqKqE3vPMM0/x1oMH7HZbEItoRqQlP41aarG4CSgeVaXeVv20sFZlfuTJ1jHe\\n\",\n       \"RNKYGWqHdZ75YoUag/vq7zVwtFhRpHLn2US9UnS/RzvHsOjpVoG5n2GGgJt35F2iACZknLXkDLUU\\n\",\n       \"6BRvelwQfMiUVMHS6v+rRcSgKNMmoqKEmUOAuB/JcUJSpuTW/Wo9FNOcADGVzlnOzo5YrVYMQw/W\\n\",\n       \"tPUfH++qu/nAgSfBk6tmqRVyppaKWIsaizEF68BowQUIoTIbHM4ZQLHib2PT7tbDM0QXWd8kNtcT\\n\",\n       \"NSvx0R7rhdC12rdaMniLDQklsr2GT3/uM5ycnPAwXbE4XqApsltvKRl8B8enCzqv+Hkmm20LWyCI\\n\",\n       \"MRipVIFMoqPHuYCIYdZ3zIZmgJfO8cE7z5FfiQzHAzWNmOBQEeI0svn0b/Azf/K/5F/+T/8wMXhc\\n\",\n       \"MayGHmOEmoUYEzlWqCDavFBjhHtP3eXO3WOeun/GF175Eg/evrh97hf0tktUkpJGMLsO6xJqCyIR\\n\",\n       \"wdLPA3fvnRL3HW9/+Zy3395gZnB2dgcqpNIe6a5vLsh5x2oxZ6uR+0/fI75duX5zT14X7P3AbD6A\\n\",\n       \"s9xddFxd7lhfbyklotVhRNBSsUDv5y1x6QLHqwDO4kQopXJ9ecN6MyKikJRxP5Gl4BDiLmOKorWg\\n\",\n       \"WimSwRuMozVdUZkNMxbHgTBz+L4DI5S4f3x79l12Nx848CR4YsbclKZfYu1XGoUUNYAWnC94I/gA\\n\",\n       \"3hWsLxjjMCimGuzCs3WAqzgbqEwM8xOCc+zGxMO3r4lrwFSMEexQEPFY40ET2/2eXVrjcOzHDbv1\\n\",\n       \"FZvdGiOW2SIwzD3DUDBB2UZBEGpsIQ/jhJoqacpElwhTjwkZ2xmGWYcw8ezpKXfK81T/BqKCsZ5h\\n\",\n       \"eUTcXGEqbEXJn/s1nrfHfCFu6X2rU7dGSEUZp4zxYKppnrVahlXgzumKWedZLgbG+BTnDy8p6jCS\\n\",\n       \"2ncnSlWPTBUtLQlqb8uu++BYLu/w1J2nWPhjNk8/w+de/QKzU8/Z/adACtNuz8XFFW+9+YCb3Tmr\\n\",\n       \"+QnOCRPK6Z0j1o8u2F5s2G6OCGPCL2HoAovjQNHIzdUepFXZ9J0juIDSId4RjG+JAeG2+UkJ907p\\n\",\n       \"hh1XVxtKzZRcyfuIxsK0jzjN9J2wWPR0/RysMKbUmsW8cPfoDvefuks/C1jXEqulPtYE6Lvqbj5w\\n\",\n       \"4Enw5GLmClag5Nza4GtFrFC0oMpt7bmCqcitfgt6G/P10ElL7hU1zBZ3GYYFs9kMb3pOTx/yud/8\\n\",\n       \"InlXUTFgKsWBmQxVhPmsZz2+zZ3lHaa6Z8qRNE5oNfjOYAdzW/pnGXOhqCVmi6ug2u4bY2GT13jp\\n\",\n       
\"8LYDMZSS6OcdN48uKekGEY9oa/nZ3WywoWec9sxqYfvoDX7qR/40f+rP/yRuPuCsAwzGtQoe1zVv\\n\",\n       \"u0aFquzHiWE2Y5gFQh94+pkTXntjzvmDK1CPaV8PSivnlNw6I0PXPm/nLad3Vzz37HOcnJxx/eCa\\n\",\n       \"cGzJLuONJcXIozf3vPXm69xcPuR6d0GeCst+RTKVyWQyhjoaHrxyRb+a0816MGBdYJh59ntH3GVS\\n\",\n       \"rhjJGBMwtWBtQKV10dr2r0qtlVIqi8VANZWHFxeUXKmxMO4m0r7SB0vvLN4tuH/nHvPFjKnsidM1\\n\",\n       \"Yg1DF+iHHj/zGG/J45YxPz7P/MCB38s8QWNeqUURNdQ6UkWwxpEnpXihdIlqleybYFapE025BFDw\\n\",\n       \"YglDILuMdXOOV6esliusCbjOk3LhzVffYooRcRatQk2GMLccHc8ZmchmR0wT1ELOlRgjxllKVSCz\\n\",\n       \"3+8Yd4liLLUaHJ5SKjWCE89uiqxvdnjx+FCxYtnvM8/YpwkV/Lwj54oYYbfboGPg6adO2U8Tj87f\\n\",\n       \"5qOf+D4++f5P8A+2X8B4xTlDHismFBxKqQ4yaDGUmLjYrHnq3nOEmWXcVJ5+7g7VavNoS6JGSBVy\\n\",\n       \"ghRbW79qxYhBpeDVcud9d1nOZ7gQiDaxmgWeCTOuLrdsdxt2l1dcnp+zK2vWmw359FlInl2ZULGk\\n\",\n       \"KbE/j1yf7+kXjuJ6fAC8UC2kW2EcY7rWH+CESsGZgSlljAjeW0CxpnWCzmaBUz3m4uqKNCXqBDaD\\n\",\n       \"N47QzTk7vk8XFlgJHM1XzO4+w1jXWCksjga64FAtxBQp7zLzf+DANxpPLmb+FcEl6q0nbimxIAbi\\n\",\n       \"pEyjwTrFdJXswRghaWsW4VbO1ztBcHReODqac+fuXazxWB/YryO73YbdbgQFpVI0kZJhOTvheNkS\\n\",\n       \"k/v9yH5XIFd2+x2hs+x3GbHbZuCzRdVRi0XwxF1EFYrRVnanypgjPnYEP7Cwx9y/8zK8HolR6Pqe\\n\",\n       \"WpX5fM7q7B67zQ29D1xfXvDF//2v8B/9O/8h/8Ff+GmSm1rFeJ5wvQHNpH37blTAiONLr7/Ciy+e\\n\",\n       \"tQNGCt3ScsesMMaS4khVZbct7LeRXblpf/dWuKyUStd3+HnHYjkjRGHVH/Hhco9n4oydFP7f6S3+\\n\",\n       \"uwe/RoqJEjPRRG52F/g6UERIktgXRafCl3/jTfpjxyCV+aqn1fc70uQg5qZKiZBrZZx2bPJESorm\\n\",\n       \"QugDi1mgc5YpRay1zAdHSUt21xM6ZfKYmZKlFkE0INrhTEfvPEpL+voQyLliyagqcUrE8WDMD3xz\\n\",\n       \"8gTb+cttw4vHUFsiVE0ro1OhDJmcwEYlmYoNIDiK5qaqiMWIQ8ShBoZFz/JkQQg9+5QZlgOL5RIx\\n\",\n       \"TQCkCsQ0MQxzpMzw5gjRTE6P2F5PbG+2pP0ExuCcJcdIzg4VzzRB2kVEM3Uq9F0HHlRbq791nq4/\\n\",\n       \"InSn9HpEX+6R6hprCqVmnBnY7jfkiytyGrnRkTurI/YpEt7e0psN2nWkPJJ0RKRJvpavaklWhoXQ\\n\",\n       \"dYFXH77G2ckx3sHR2QI7twTnCf6p25K+wtXlhqttzzpdAptWd38r6bu9uGKohg+/scC9Hbm6fJXP\\n\",\n       
\"WIvUxLd910d55tWBq+rYlALGcb2/gTQy3Sj7apj2rZlre17ZXkdMB5WmsVNzi1dbY1GthG7AiGBn\\n\",\n       \"c3KEm/WOqWQuby7YJcu9u3fxvaOMBecsvQvMfMdms4Fo2ZuR8zd3HM12eAkMDpIprXzRgTphuxnJ\\n\",\n       \"3lJK4uZyw3b7eGPmBw78XuWJSswZo9SSbzW75VaeTykqxKSYMWG9xxulSquCEGmlaxhDTc1jx0ir\\n\",\n       \"mNARYwPOg+scznlC6NpjvbOsd4aT1Sk1CXnblA/31y0uvd+OaCmkpOypWOfJsVKyJcVK3NIagpxn\\n\",\n       \"s53wnSC9aRozt1oky+EI63oWXYeyYRoz2UaGobbSOWcY5ieQR9I0Erzh9S/9Xf7Ih/8o//MX/yZj\\n\",\n       \"Sa2ccl+Jm8J4AxVhNnhc77C+cPXoEiMwW3gWbsbJ6Qm5ZAyW3g/YI8swX9DfePobyz5BqiNSK2Wa\\n\",\n       \"0HXlfa9nysNXicXgwpycEpoyn/u1z/Kddz/Mr169QlVHKqklnL0g3lJMJlfQJKgUNtdruuMlk1Gc\\n\",\n       \"GEquaLVNz9w6+sXAarG81dypLE9mnD+6wOwzxkGVieXpXcbNRNyOTXjMwTB07NNIRZm217z95pcx\\n\",\n       \"FGJcEDphftzh+ogTxUhlt5vaQbXZsd2OT3JLA/DJT37yPVnnC1/4wnuyDsDV1dV7ttbp6el7ttZL\\n\",\n       \"L730nq0FcH5+/p6t9fnPf/49W+vdNv68E/9Uh1P846AIYpp8bEvdtUYYK4U6VZKBFEpr0lFuq1cM\\n\",\n       \"znr01mutmpn5OSJKrhPjuCWXiawTSsZbh/MerDDvZ3R9jxRP3CrjLnFznllfXZGnTEHREUgKXqA0\\n\",\n       \"TzRHJU+QciXFiJiKDwOmKlorU2wyvcYFFrMT9g+ukN1ErULQgkigimNzc8Pp8phdjMyCBevxYc6/\\n\",\n       \"9L1/jL/xpf+Tqwj7zch+W9lvlZIqNvQU07pAq1imXDi/OOcsz/FWWHQrrHXspoxzinGGfnDc6+8S\\n\",\n       \"OsPDR5Gr7Z7OzrBm4NvfEI5zItkeCZ5geiiRuN9wtdtQUsGXQtKCLRZNlegjxRowrkn05kqYWYy2\\n\",\n       \"A4JgMC6QKmhuE6I63xFmPf1sBhR6ZxDjWNzruLi8IseEw1NJdDOHFM/65gY0Y13Lk4TO4zrDZnfD\\n\",\n       \"5z9/w7AKHB+vOEsL5suOOHX4rnW77raR7XpL3h8SoAe+OXly2iy3E3QUEClN8U8VY2yrXKlCqUJO\\n\",\n       \"hXE0hKJkJ+AsWEUdhOAxpiCqVC3s91uSj1yvL7i6ecB+v8aZBWib5OO84/rmguViyWadMN6xudmS\\n\",\n       \"UmEay22oQClScf52wMWopLGQpkzVinOG0Jk2vmwspD6iIuxD4PLqkiO3pE89qBKnDH1ls73Cd3PO\\n\",\n       \"7jxNzCN+CGzHDfEiUqaRk8//fV7uPsBn3vpV4tZQYkRLAflK4tZTklBtRyqJdLNn1c+pkzJuI1PK\\n\",\n       \"bGNElzNk2aY1GRQjlvt3n2Ffthz3Z/zznPGMOeImb+ntDGsD1jhqyfSzjpBG7vcrKg5RS9aKKRYJ\\n\",\n       \"0kbT0UYhaYFuFrC9Ilaa+qMRco4YCTjvsbaNuCsmMswHOuPIonhZ4DpHLhNpn5AEg5vzKEWqKazO\\n\",\n       
\"ujb8winDzOKcIeVCmipKYbO7Ydg5xGaETC8dpSTimMil9RscOPDNyJPTMze302P+ke5r1TYEAVEk\\n\",\n       \"V1JyZDI1O4xX1CneWyyeJJEQDFkTN/sr1C9gB/v9DTFuSbWVG1o3x0hrF5/yyG7a3Hq9le00UvaZ\\n\",\n       \"XDK2WLBtis5+bI1NOSk55ta4UgpGWtGk1Saxm2NkihMOx0PewEzCxxbfyk2aSFOb/rM6OWPcrVmb\\n\",\n       \"wNHZPY5PT7i5eJtxfc6wOCVvEz/6J/4N/vJ//LdYzRZsRpB+B7G1A4lCHgt5vyNNe2qKzBc9Z8tT\\n\",\n       \"pu3Idp/4wmtv0s8sJ2dLju8cs1rMsGpIJfH86lleckd81+l3sN9eM3NzbOiwxmGswbm+jbQbE8/e\\n\",\n       \"fz+7X29PNLHGplVTmvSsOJqSThVs57C9w3qPrQasRbUpH1oxIJUp75mSxUyGsPIEH4il0AffFBe9\\n\",\n       \"hyxIqsyWA7OjnnE3MnOFMGvt+VpaYnPaT1AtIqU9Gflwqz1jMDhKuiGnCuV3rwt94MDvR55c0xC0\\n\",\n       \"JhenaDK/FTen1Zhzq9mSUmkj4UqGDCUEiiqehNiOUkA1M047ylXCGU+piWEW6OaWvImkbDDVotag\\n\",\n       \"OfHw4k0kKCqFcb8lj2PzRLPAVFBxjFMEdZTaDh2RipiKSGnDmo3BuExR2G0Labwgb5V5GuiOPUUC\\n\",\n       \"xhbyuGOVply1AAAgAElEQVS9vqEAc2d48OABcZrwpuK6gQcP3mIcL/iu+8/y7afP8xuPXicEh3U9\\n\",\n       \"mERMlZInxm3Gux5jlG5u2Kw3aLKMWtGqXL19wRS3bE6PePvNt3jppZeoBjotnIVjfvSpP8jN+gpj\\n\",\n       \"O/phTjWewXvEAnmkjBlnHPMwYxEWFANjviFKoro23i0sCvtQKc7RHVtC72+rhG6nIZmAOIezFuc8\\n\",\n       \"vQ3ETWWzPmc265nPFswGSzQDcdxjaiJqZNqO5DrhByFVpTcWOkvNLbzlS8XMeozO8KFQa8QYxVgl\\n\",\n       \"pZFaLIXAfjexPyRAD3yT8gT1zNsUewu3nYvNm5PbSexVviIwJeTSNEaktvhozoWZOFansFh61NBa\\n\",\n       \"4FPF2YgxjtVyzjQW1mlHzolCATVtRGiaiGmkilJVm2pfNkgBiqFqRZMlpYqUgjrFhtqGH0imUCjW\\n\",\n       \"olKhNjGoaW9ZT1u6E9jvtjiEJC0RaNzA2ck9osILz99lt9+z3W+Qkig5M4Qz4sUF/94P/9v82f/x\\n\",\n       \"z/PWuCbRBL2u1iNxLMy7gdO7S1x/TayVPCX2aU/d0xKgUiFXLh5ew5VjZpa4heUD/ogf/Y6Pc755\\n\",\n       \"hNWmIFmNbx21Yql5QtQQuo4+TUzjjsWwIuuIStd0bWhPPyVZjPf4U0NYCNUIEtpAjrhLLOwCNUrw\\n\",\n       \"nuVyRZHCfrvHmMDVw5HF3OG6wLIb2BmDlh1pyjgfqL7gvKOfGdRbSi5MO5jGxH4smGro+5bwns1a\\n\",\n       \"jqSURAiekjN5LNRJmabHqs1y4MDvWZ5caaK06UJNgISW/9Q23LhSEdMqRbjtGCyqTRmQTCmWYSmE\\n\",\n       \"LnB2eowIrDcbgNY09JVJ7ctAHTO7bSZVJaepraMVNS2xqgC21a6nUpDqMFqxpTKlBKXivcfYirG5\\n\",\n       
\"iTlZQ9GCcc0Q+t5TimEct9T1RC4VrYJiqKrEGCkF5ssVDx68zTSueekDH+ThW68yLBc8urrkbH3E\\n\",\n       \"yx/7A7w4W9E7T5HCjd/R24EH0yXPP3uH0V5wcrpgo2t2NxO7eEnaBIpmck2I0MZBj4V0tSbfRP7N\\n\",\n       \"H/pB/DZRaNrpVS2LoVWwlJKQmls5pjGIs0QnLO6ckKaRsq/sNyOaU5MyMIJxyvFywbB0mK4Zc2ol\\n\",\n       \"BEeH42g2Z9YfkV1iTBPDbCCnSpoS66sNp/0puWT6oSOOE6Hz1FQRbZvAWEGrZbsfyUlJ+4mahBIr\\n\",\n       \"voc+dDjf1Beda7NQ06RNhrh0VN09tj0rIv8N8C8Cb6vqdzy2Gx048E/AE+wAbYMVWvu5Ymit9vW2\\n\",\n       \"J11ra5YRade12nIDVdsw5L4nhA4xhvmsw1jL9fUNIq32O+fMEAamPkGc0KmQNREzGCq+s1Aqwd0e\\n\",\n       \"GlZw3hNLpaR2f8nSkn42I0bwg6UGcCHjvLk9ZhQjhiIViuXZe0/T9zMu315T04Q1luXxGWE+xwfP\\n\",\n       \"6f07PHgj8ulf+/soGWeVxczz6PqGu3HLR+7e530pc391wt/44mf40FHHpybDVXzEC0/Nee6Z5/jS\\n\",\n       \"zWehGHbbPXmXMaFiDISuY7ed6HTGEOFf/94/gouJbRwRcYT5jH6+JE0T3a06oqb23brOotuKdIan\\n\",\n       \"Tp/mYrdGnLIeR7bpEoMFDG5Q3EwwvUFsIZsmNXzs55x2A7N+hnOtbFSppDhRSm3H9jRRpja4ousC\\n\",\n       \"i8UKTbeKibf/7nlS4rQnx0TceqZtJe4y89Wcvhe8N2htyeFaKzXDtB9Z30zs9hV1jzXM8t8C/wXw\\n\",\n       \"84/zJgcO/JPw5Dxz4Ldc8qbDUm6n4yja9K9v29Fb9NygtbaKkzYxFME1I6/mdpivp7MdNUPNipE2\\n\",\n       \"JFiMQV2hxISKoVZIU8YHwXnB945aDLkUrIFYaxv8gEU7xYUm1sUg2FDb0OCQqVgY25AHSRC6wKo/\\n\",\n       \"wTiDoBQMq6Mj+vkRPnRcXV2RcmS5WLG+Ouel97+fz3zmHxD6BVUr4+U5n/zgyzy43nDsLOPd55i8\\n\",\n       \"5fn+iHPd0R05Xrj3ItXu6cY3GesxD9aP2I4TT89nJCwPs/JMWPCvfvx7+ehzL7G5ugYRrLGI75hi\\n\",\n       \"ZN71eGcoKSICwXt2+zWd73CrJbN4hiLE3Y5gBrxsyVOk5IL3Htu3XEISQeIOj6VzrVSxVbSAcYW8\\n\",\n       \"n4hxpNTMbr9BUYo1LM4G3MyyWh3h1FF2kNcRSyCnG0oGEd/Et8jYbqCbO5wXrCsYZ4BKViHHQi5Q\\n\",\n       \"s6HoSJibx7dnVf/mrfztgQO/53iCHaBCkWbGkeahG2kmsCVAv3JhS4i26WlC1owgPHp4zb27M1Ly\\n\",\n       \"BN/i2Z117VgoisNjS25ecynkkkm3E+G9WFJO2CxUZ5FesKJMouBashNpioPVCSYoYQnGQ7WK9K1Z\\n\",\n       \"RUQoMVCjRbXwwnNPcbI8olRHzi28cnn+iGm/4cWXXiYMHc/de47f/PRnMc7x6pff4N5z72fpEq7v\\n\",\n       \"CBjmyzPuZ4jjjo+9+AJZhJqU6xTx88CdxV3yzZYXTp+i3O/4crdg3I3MuxnGDfydz32eT9x5jj/8\\n\",\n       
\"ke/ien2Bn80Y1zuG4yXDMGs5gJSompGi5JrZx4xxQswJMBwv5xAz/dUaa8OtTjrtew0W60urKU9A\\n\",\n       \"TiiF6DKp20MS1BamHLmZLoj5mjxW9mPk0eacXR15n3sW41vdv/eWzndtdqcYqIY8RfZjRFxgtupx\\n\",\n       \"KjifcN424TUytdo2yDop+3FLjC05bb19Ulv6wIEnyhNsGmrW+ithleajl1uFRGmva21zP2tLfOWq\\n\",\n       \"GNOmtU9j4fJyRzdzBO+wnSd4j1Ylp+bLN4ldQ+c7YlGoCe8szt5qbmtCq1BKgeox4XaEnTMw1jbw\\n\",\n       \"ISi4NtjBWI/6iliwOGLKOOnIBp5/5g7P3L2HMxVvHNY7TBTG7Z79+pppv2V5csZ+v+N6uyaNW9DC\\n\",\n       \"h7/zO8lXb5J2G/bba6xp5ZkpV+7fPSPXgpPASUkYKg7lu4+e59IPfO7BQ77t5CnsMkEYuN5HPvrc\\n\",\n       \"ff6Vj/8x6rhjv5+oueB9U3XUWkGVoesoeaLU29r52++61NrCQeqxzuKCx7n2XVkxiCj2dphEG3zR\\n\",\n       \"GpvUwtZFgilkK+T9xCau2cYdWjMlCzVX9nHk3D3i/v071FiZ9olh6OmHpgmfU6bWdnBrbUJc1nms\\n\",\n       \"LfjgMU4x5nawhdYmX5BaPM55A0no5/7JbelbfvEXf/GrP7/88su8/PLLT/DTHPj9zHq9Zr1ev6tr\\n\",\n       \"39GYf62kj4icAn8ZeAF4BfhhVb26fe9ngH+LNlP+T6nq//q1120iUKqKoUmiqmlGvNbaHtURklS0\\n\",\n       \"tjn2apRSBaltcOZrr15TTaaqcrQIDNaTcxsibIwFVWoBsHhjW3ghBOb9gGplO67bMOdscNbQ+0CU\\n\",\n       \"TMrAREtymtvhFs5SXMYYRcggDqk9McPgPR944f08e3KPo9WAicLTzz/LG69liusInWd5NGe73fDZ\\n\",\n       \"T3+GP/jPfZLPfvrXoez4/Kf+Ht/1bS8x9D3T1Rpbt3TzGV/60is8/cwzxJjY18r65oJ7p6fMjzru\\n\",\n       \"18SLd7+FYwZeu3yL02GJdp7dSeBZt2S1KFy9fYHmdghaaTNQVZWhm0PJ1FoYpw3iPON+YjZvQmHJ\\n\",\n       \"RkJncZ2lm3uOFwsqRzzcRrwIVWDMCRsL+WZCRZiHjvW0xVeH2jVFCpFCLZmSWyVSKRWNwuZqy/n5\\n\",\n       \"FcvlMfubLVKVaRyxJjT5g1RbD4Cx7bO7Shc8zoH3BmtNm06kELwjThljPP1QICzx3ZMX2vqBH/iB\\n\",\n       \"J/0RDnyDsFwuWS6XX3391ltv/Y7XvhvP/GslfX4a+CVV/c9E5KduX/+0iHyYNkrrw7RJ5v+biHxI\\n\",\n       \"Vb9mvZhIG+LcRpoF5LZ+W1QpVVFuJVwNZCquGrJmjG3VMDEWHj3csOh886xnM+IUqdU2ZcNSsFrp\\n\",\n       \"HFQxGO/ou8DMdnQhUEUo5abVkIshm4JxhfncUVKmFIeIYkwF7ZCYKLYgxVN9RUWwtdANc56+c8Ti\\n\",\n       \"ZEVyCc2KEVgsl3zxy79BHNccn54yLOZ86EMv85lPf5rTu3dZBDiee8acePD515ktPcuuxztHP1/x\\n\",\n       \"aLOj7HdcXl2Sa221+VbZTRP9ouel9z3P2ekxr375FZ45u8ubF2v6O3fQ4nBujh33iHdt5Jz36JSI\\n\",\n       
\"cUO/mFOqwZqezc0GMZk4OVzX8+buglIStoP5vGOxmrMbO2bDEuYTeW8pgKYmk2aDQ6uScuEygYaM\\n\",\n       \"CwlsxXolRaEUocQ2o9SUxKtffJXlbIVm4eZyhymV/X7TSlMtiBX6rsdgEWrLlUhoM1FvK55aDkVA\\n\",\n       \"K2IyxQp9b7B29i629IED33i8ozH/HZI+fwL4vtuf/3vgr9MM+g8B/4Pq/8fem/7alt53Xp9nXtMe\\n\",\n       \"znDHGm6Vy44dD22bzJ3QoiGgRrSEFAGiBShITV40gqCWQLxA4g2oheg3/AMRYpAQYQpE3YLutJIm\\n\",\n       \"Ucc2bsdxbJeryi5XuaruPXc40x7W8Iy8WKcqgcTB7VT1DfH5Svecfc7ZWmvfc579W7/1e75DCcAb\\n\",\n       \"QohvAj8OfO67HBslFFy5BM4jhjS/YfM8ly5FIoogZzH7Y4t5NCOFhJLpLz3bpaeIiCyF5P08Yomz\\n\",\n       \"ShA5b/6pPHd1lTF0dYs2mgOuNs/SRPABKRMg5mLSFIJXjMWTsmDYZYzVIBRBClQ1Qgk4bchqQC8k\\n\",\n       \"dinx20DnGqp1hRSW+4sHeN8z9QMpZU6ePGZhHT/0mY9y8eornG4vqfcFbRsuLy+IdpwVn3k2kFJW\\n\",\n       \"opQhiEzKUIpivTzkO995g8bU1G3LUbfi8vIMKzTPfuhFHj16RAgR17aEUlDFotBoFLZZEoc9KUR2\\n\",\n       \"+x3j2GPcvJGsjeM7FydcpHP2456UJ4wtNJ3l4kJSroqykJkoI/IqqENKhYyKZDy6KJAK0iyoKiVR\\n\",\n       \"ooIiyTkggHHneevbb1JCwjpLDondbkMucY75A3IOV/NxiVGKECNFZKyes0198MiiiGU2IzPOIq5G\\n\",\n       \"MB8UhBD/3dW6PxJCvAX8x6WU//IDO+E1rvGPgO93Zn6rlPKufddD4NbV47v8Pwv328wd+h/GleJT\\n\",\n       \"5PKepH8OhZg7dinnxBySJpOuPL1n+qIoBZULMSfiJNid9xAkqR9QzqKFolCIeaCU2TNdKcHkA845\\n\",\n       \"nDM4ZyiuZcyBizNPCoVUEqbJCJFRToMZEd5BP2+IhkEjlUeYgCgSbQvWZkyViER82HIwLaEKnJ5u\\n\",\n       \"2G08U7+hZEm9PGbvt6yXjtZZXvvC51neWFIPYWa9LBpi2iJMxfbyjJAi9XLFc0c3uTzdUleOxxdn\\n\",\n       \"+OQRRXBy/zv8xE/+ed566ztUVYXEUjvNNO4QpczjJj9QVR1SGrSpZmEWAtsuyDnhrCP4kTAF6OYx\\n\",\n       \"0+uvv84TcYbSkiH2hOSRKBbNkt5GvJ9IWlNQGF0wWYEX5BzRUoKGGGa+UQ4RJoHMEpJg3puUlATb\\n\",\n       \"3cDp+RlOzOOUadox+YEsQFkFZDRz159SIqUMRiLSVXZszsiSEcVQciKLjJHmXZrUB4JSynV48zX+\\n\",\n       \"1OJPvAFaSilCiD/uLfRH/+yqKBcpoEhI5SqkYuYn5wRFFkRJiKAwenZINBrmSN/CVTonpw/3hF4z\\n\",\n       \"dJK6yVSVRmlFFABpns0LgRSCcZpYdBlhJK2s2FU1Wk8MlxNjyFSiYGwmkZBaIMdCHt7lk8NYEq6S\\n\",\n       \"CKnQKhBK5sBZtuMZrTa8sH6W7ZMnPHr7EbvdQImRYb9nHC557oW7xOjZ+ku0hKU54rXtJTZlbty4\\n\",\n       
\"Rxs35AyPHz9ktboBRTKkgLGWpq6onKGu15ydPeb41i36aeTy8pLdfsdifYCTkml7yeb8AmNaZDEI\\n\",\n       \"ocg54v0c6pytJhZFu1wxTHsQZbbnRbC4ccT/9etfQnaKuqlJbkI7yRRHYvbUdYM3mT4ElCxYYaiu\\n\",\n       \"2CMxR3IEIwTSFYScBVlON/gU0RSKkuQExhpa1xJ2E4GCUhDjSIgRpRS5JIxWiJzIcV5BIUZEkRSV\\n\",\n       \"SSW9F7ghESDnBiDnjNIfXGd+jWv8acb3W8wfCiFul1JOhBB3gEdX338HeO4PPO/Zq+/9IVzcvwQE\\n\",\n       \"FLCNw9QzCyFfRetoCSJpkogUIRBJgpSUlFBKEMu8sUbO5CEwpEjMjlICOYFzBVSmMBfeUmaVod/3\\n\",\n       \"5NUaiUAJRV1pnLP0QhC2mZQCzVJT1EyflBSKnzfcpuxBaIIq2CSQosIiCVIQ/AUh3GHq9yzqJafm\\n\",\n       \"gsfvvHo1909sNxu0vMu23yHCHi3hc5/7LT716U/Tn2/46tc+x4v3XiKHAWSiCIFtHNvLPTlM+Elg\\n\",\n       \"rGHTb9DG0hysyWLutMdpovaBoC2XTx6jnGGKA9ZV7EaPoqBzJluJVhWCic3ZjrZpOL84o23XjKHH\\n\",\n       \"VjXvvPaYndxjGkV3w1EtJbYzhCljZUPfjyQhkAqMTLSmgDBsp4EpztRSgwYlMaUiEmeVgM6YYphK\\n\",\n       \"QAmFKpLoZ9FUjIVhjIQYMKaglEJETchpDmm+8rzPSaKkIqWrIPAiyEQuTkYu39wjr0Ktr3GNH0R8\\n\",\n       \"v23M/wb8/NXjnwd+5Q98/18VQlghxIvAR4Av/FEHWN9dsH5mwfpOR7UwvCcjEgCzOAhRKFlCyoiU\\n\",\n       \"ETmjEIgiUDNzkBLT7OwXBGGX6DeJ/Tax30f8qEiToHggCMKYSQlCeFf6XjBCY4RECkGYAn4r6C8z\\n\",\n       \"434ihgIkcsxXFD9Ju9B0rcLZCqtmlktJgr3PmElze3GMVJH1esHy4AAfPEoqtDHsdjvG7TlKGHbb\\n\",\n       \"CyiZ3/nyl7jYbzk4us3p2flMuxxmp8D95cBmvyflSLiS3z98+BCpJEVoHpycsO+3tG1DLpkYI916\\n\",\n       \"xbjZIUshxJH1wRIpLdvNgEgVl/ffZvfoCTpH+n2Psy2jH7l98zaqa3j7ySXbx3OB3Dy8ZLOZ4+eq\\n\",\n       \"yuJMzZ2jO0xjZAoeY0GIidoWKmswolyFhcwxfCpnKlFhdY0zDikztXVz3JuPiChIQTGOCT8mZBLI\\n\",\n       \"olEoUinIAt5HYsxwdUEOKc0boRkEBorm5r0jXvrJYz75s8/zmX/u3ve5pK9xjf9/43uhJr676XP8\\n\",\n       \"7qYP8J8BvyyE+KtcURMBSilfF0L8MvB1IAL/dpkNVv4QCrMl7cyJEAgpKHFmiIAkhXkzslx1Wu/R\\n\",\n       \"FaMkqTILjXJB5pmTHFMg5ky+OnhJkGuwlfx9LruQRC9Jscz0RS2wyuKkpq4cVWXph8zUJwgCIQNa\\n\",\n       \"SaQVlKJwnaRZGKSWaFWwtcPWGVVJ+rGwLz3TzRGRC34c0XWFULPXTE6eYRzQJbPdnXLz+Dbn5/c5\\n\",\n       \"bhas1musBp8tIRZQmmHa89bbr7Jc36BzsDnbsj44QElF1bZMY8/l6YOZIonCth1KSk4fP2F5sGCI\\n\",\n       
\"iqpp8WOkaY/oFpo4bZHaQolMXrC93HFweMBmd0Fda07DiJAJZy3aitnAKhQudzuWbYtVULkKKzXD\\n\",\n       \"lQFYrQTZeqSKqJwQelbnGglO1ShpyGiy0UgfmaZIjHHOek2C4DMhjSgBunZoqZFFXPncS2pniGmm\\n\",\n       \"mxqnr9bBfA4pDELC5eNLKIK2m1fTNa7xg4jvhc3y3TZ9fva7PP9vAH/jezl5ARSKrGZGgpDlXXX/\\n\",\n       \"ux+ujjl37EJIMrNIJCcBUaCKZsoTqkhKTKQBhpgRncUIyFIj1FWwMQnl5sCHMAVk0Egk1jiOVmvO\\n\",\n       \"z3fsNxMhgKg0VTPz1YXLQMbUEmkKSmeMTTS1w7QZYWenxCF7tvs9cb+lqmcDrKquWK5qku8xorDb\\n\",\n       \"7ehqxaOTt3nm7rOM2y3FR7CKrr3D5cWbHN24hVGWB/ff5vadhmG3IYSJRw+fcHzrBg9PHrBuF5w8\\n\",\n       \"uM+tOy+CqRhGD2HCykLvM1XbzZz6gwMuTneMu0zse6TwpDCx3Q8cHd5ic3pGTCMhZ37rt/4+9cJh\\n\",\n       \"xWwvW4qiRME4DQgdWeqbOCuxqmLfj2wfRepnBTZNGAu+JHLJWFvhsqYSmpwKgjmwW4iMUgqZIfae\\n\",\n       \"GApj70EWmoUlF0ixoKydXTKlwFlFLQo+RKY4Z40qaWc2U8pIpTg4OkCpgJIOIZ6+Be4fFA39SfC9\\n\",\n       \"ikW+F1xeXr5vx/rqV7/6vh3rwYMH79uxAL5L7/h94f18be/n7/+74Sm7JkISQC5IBEXNRlo5ZdJ7\\n\",\n       \"z7xyVkSQYkZq5uqOuGKsxHkDFTXznaeMKQo/gbMFVQo6z/PVuaNWpBSJsZBTwlUGbRWCmtVyxfbx\\n\",\n       \"I0IpSCdwtZtZFHJ2HBRzagYlC4SyyFqhHKAiVlgOaNnvdoybS/wUqauKVAmk8FSHDVJGCh6fJc40\\n\",\n       \"7Pc7lssDtv2eo+M79OM5xi6onES4itW4ZQwJnyMZwebynOM7dzl9+3XknReoq4aq6ajbA/rpglQg\\n\",\n       \"hjmQYz88wrgLDqnQRjDtAv1uRyIggsfHCFIwklgerFC24u3tlrqarYYhk0RBFIEQiTFJ9lwikmFV\\n\",\n       \"WbYbw0UfaLYGrcBqjbVi5t7HecM5CIlM891XDPPdUCnzPkkIHu/nODxBQSgHZb5byzlfpU5FtOa9\\n\",\n       \"jFW4unOhUMocISjQKKVxlZ03xMVTTUK8xjWeGp5iMS9XDoiaLBPEmc1SeFeuPV9h5zf1/LUQkPPs\\n\",\n       \"2xJzpCDnwGHmhBtKQaPJvpBtJASDniI4QUETk0ApBcx2AClFpKjJUSGFxFpHu9TspoBUAqkEOUMO\\n\",\n       \"iSwLJUpSgpwEky9IEa6YLR3TbsOrm9e4197BCY11iTZLVs0RMXi2u3NSkjRtg9UVi+WK1eKQNO1Y\\n\",\n       \"dQd4HzHasrl8AtQYXbNarYlhD2XujlXV8ej+CbZtWR4fcn62plus6fsBowTb3ZbKtuz8iCqBdnmD\\n\",\n       \"4LeMfWG76ZliIA8TMUds25BEoD06wFYas1ziG8ntW7fZbp8QQ8Yz56CmWMgq0HNOW61ZHiq6nSAV\\n\",\n       \"TUgKj0Z5ha6vWEZ5LtyUERE1oghCzkzTNDNq/Py3N0Ix+h7X6j+g2oUY4xxFBzgnkCohKkGRYLMj\\n\",\n       
\"54hzljAmhFBYqxCyMHkwWnz3RXeNa/wZxlP1ZskIREmzmo9ZhSmuAiNkEVcFXF45IxaKuJL/C4ks\\n\",\n       \"cu6aAZCg5tR4nxJCFUxSTFNAazlHx6mEdVeba1HgS8RPCVKPEHNIhkZQ1RU+J0hzN0mcuc0pCeIo\\n\",\n       \"mcIsU2+ipe8zdVchlSAVy4aefugRSrGoairb4P0WazUxRfw0cbi6xfZyw2qxQOIJpdA1a5qjG0yX\\n\",\n       \"D+mqjpBGbh8s2TKho2K/Gbi4uOCHf/izvPrK7/Ezf+FnoWRu377Ldrshjj0hekxdsx3OiT4gmwNO\\n\",\n       \"Ly44vtHiqgZpPMfr27zzyjcxbU3O8+9Zi0xXN/zek0e8fPYKqlZ06nDumvc7tI5EaVAls/UTnfW0\\n\",\n       \"jWK10jy+zEypEHJGIqnRlJKIJNIUScUTxkBOc/h2ygmjBKRy1X3PF26BIJSApKC0nT3rZcBqR73Q\\n\",\n       \"OJtRyuGsgOyQai7YZQFd113F2imEkFcX62tc4wcPT5WUKxG/zxcWAqETvDtguRIQSSlARKTK75lw\\n\",\n       \"5fxuEZ/zMaUq2KqgXJkDh7NA29n9cBgCk0/4KSPkHGYw+2vPkXSbzY5pyuz3PSFklNTMIxwog8Dv\\n\",\n       \"M1MPaYRpFxg3iXGT2Z4H9heR3TYx7DJ5MHRuha0cdV1jlCbngFGGlNKcvtN1DH3kzu3niKHQtSs+\\n\",\n       \"/vFP8eTshLt37xCLxocJU6/oug5KRsoOKQvt4oC2bbn7/Ieo2262GRAGbRwYg65W+HH+fzmzYL/b\\n\",\n       \"YYwgxMg4bGk6hzECYQ1nTx5TSqLfXGKcY337Fq+cvYXXAeMa0IoiBcrZWYGbwShHi8LoiKsC3Vph\\n\",\n       \"qkKRkXGMc6ZnyPPehZ+To6YxMewGNpc7dhcbxt1E9DMjpZQ0K0zNfKelVEFZiaslbatoF5b1UYWr\\n\",\n       \"A+1CUNWSZiGpF4a2M7hKsFrVWCNQVaZZGA5u1DSr6878Gj+YeIoWuOVqUxKUnh0RBSBkIReBLAJK\\n\",\n       \"Qcjy3px77tTnW/HEzBsXMqKMRpmMk5HoM04bjJvVJv02Y6Qia5hSpMoWskAKjRKGcexJk2CaBqZx\\n\",\n       \"P2/CpitPkRwRk0AISBoUmZIEOQp8KGyfJLROVHWhVh2VrGi7Q+Swo5SErQyxj8QQqauaplkyek/T\\n\",\n       \"NRwcHrNarUEann/pY3zx81/kxq0V/aMRy+FsAFYstZ3VnMM4sNlteeaZe5iq4uLsnXlerCP16hid\\n\",\n       \"Rt7Z71BCIaXHOYdEEvxIKYKj2wc8euOMzfacupL4aaKylu2jE7p/9l/g7d/7W4isZnfFNPvKK52R\\n\",\n       \"xuKTx6ERImGUQsV+7o5VISrJJMGRMb4QzEwdLUkSpjw7WJZCTAmnLSKBkYpiJabMNrZFZbquoesM\\n\",\n       \"zgmsmQNGtJJoLRFkwJOCIqSJLOdgCqkL6Nm7HpGJJZJK+uMX3jWu8WcUT60zL2kenSCYrWZV+f2o\\n\",\n       \"ilJAFIQUaAPGKKw1V9FizOpRId9TL7YLiTHgGo2SBu0SVS1wTkEupJBm462Yrtz3NClFlssF6/WK\\n\",\n       \"i+0F+8stIYQ54m3MpK2g7AQxFHKQaAHrlcE5NY94JkH/uHD+zsT2NBIn+PSNjyLDQD9uKCKj5TwK\\n\",\n       
\"ss4ilcGHgBQgqiW2XXC53fHqN77C2aMTnnnuBX7tN36dJB26bohZUDWOnA1n5+fce+6HyHnmko/j\\n\",\n       \"SCkNWcJ6cYPDG7e4uLikkokY92g1R6vttqfst2e4WpH9wMXZI6bdFpTDDzuQkmefvYdeHCGiYjvs\\n\",\n       \"6YceHzzDNCKAPvmrsZPAFoMWiVTilUAnozQIImPwM5tllOAN3gtSkEihAEXjOhrTzN70uVBbQ20y\\n\",\n       \"ba1YrWua2tBUDmcNRjlkkRQPaZJEn9BYnJ59XKYpMQxhfr1jTwgj/bBl3++ZfPjA1qwQ4jkhxK8L\\n\",\n       \"Ib4mhPiqEOIXP7CTXeMa/4h4ap25YKYaCpmvnPDkrNgsCkS+UhkWFitDjorgM1rPAqGSr5SiQlF1\\n\",\n       \"sFg5zi8DSkVMLUEIrIUwe8BSCoScqEpE6oCQEWNa1l1FWdVs+55HDx6iXCFPfvbqiwmUREmJFJn1\\n\",\n       \"DUx/IjEAACAASURBVEO1iNSLBbtTwfZih99E6soSleT20YIcJcFPWG3Q0nF5ekoIsx9MVdd471HC\\n\",\n       \"Mm4u+NK3v807b53wnXdOiD5x6+Zv89P/1M/y8PEFK6XZb3eM48Sqrjg8XKGM4M7t27SrI6QRhGI4\\n\",\n       \"vzjhY596kTRN5BjRKnO8PCJFxeVuQ+ssGMO0n4hTZIw7Fsc1Qhpu3V3SHqzRFUDkzTff5uzynP3l\\n\",\n       \"lhQ8VgtEbcllB0nT+5HaVYiS0cphrUBUEl3EbE8cExlDZv4sUiKjUUIgmSmFRRSUnC/IdWuRWWMr\\n\",\n       \"w1QiOQXiIOiqg9m6OCZ8mC+8gYAKnl0vudwmzvc7gvdoZRBKIaREykzbVjT1B+pnHoC/Xkr5shCi\\n\",\n       \"A/6hEOLvllJe/iBPeo1rfC94imMW5u4OgZBAmcMRZgikTNR1R7cQ5GTYbnfYpIk5gypYofCTpzt0\\n\",\n       \"1F3mcueRahYJJS8QOmJEmSPjckEWiTEWZSYymkV3wHK1oGtramsZ04YxnqMryCiUEQTkHPzsBKtn\\n\",\n       \"BetVy+5UkoZC9AqtLKvqiCwD/S7y+OItDo6eQ0vH2fkTQr+nbVtyKWTfUxvLlCUPnox8+2Tg62+c\\n\",\n       \"07bHfOvh63ztrXOE+QKrO7dZbT3LznD74CaTv8QohTOWUEAbS1KKMSZM3WLqms2TR9y+fcyYC1ZX\\n\",\n       \"7PuBQzWbku03l5RqwJmKo5tHqGy4f/8xybQsDhYEDDlcMEw9+7Md425EmZliWMjMHxM+Q4gjUneI\\n\",\n       \"NMvztVZIK1CVnlOLUpij9kqAcrUnIgVSSYydPWKEsOQ8kUukriTWZsiFKQikVMQY0KJm2sPQB/wY\\n\",\n       \"WC8DsfT03rDdeDabhLpSwQoKj8/OmKae5YGj69oPcM2WE+Dk6vFOCPEys7ncdTG/xlPH0xuzFK5S\\n\",\n       \"ZWbDLBBz5idXc3KjaLqBeqFp1gphFaYGYwXWKqrKoI3ANhLpIqqefVRMJZCygNW42rI8NkiTkGSE\\n\",\n       \"zQhd8HGLTxNNW7FYrXjmuTs88/xNbFeoDwT1KnPzw4I7H5HcfNFx9JxjeVNiXcLUBbfUVIeCG8/d\\n\",\n       \"4PD2khs3jqGDumkZQmIIeza7S3q/53Jzzsmj13nw5re4/+bLlHHLzXuf4rTP3L73Mf76f/SfcPve\\n\",\n       
\"j9Pdfo5f/fUv0rUrXvrQR9mkQiqBNHlyhLqtaJoWt1igzIptSNx65gUePtmiukO6Wx/m4MZzuGaN\\n\",\n       \"vOLhay1pGk1lFU1bce+5Y4a0o6oNt28foESgbiVEwcmjx+xPd4gouLm6zc31XfSVyCeL2SelFEOh\\n\",\n       \"UFVzrqpUAqFn62JjJULKmbXi0+ycqCNCBJTKQMFaTSkTMQr6foeQI4lxvttKIIpEBYUOmrpUKCnw\\n\",\n       \"fWLyA0VItNpjZCD0e/w+ESePypo7Bze5d/vD7Dbw5GL8x7J+r2yhPwt8/h/LCa9xjf8PPFVqokAi\\n\",\n       \"ZHmPepjzTE8suWCMxtiAqebuUCuF0LNLohCSLMXMzpAzP7lpKnyMSFVQpuCqubjXqwJhzug0misB\\n\",\n       \"S2C/mQAwTqOsZH1wyPngSN6j15p2UcjK0o8JbWpMNZB8wDQdevLoArqG5bomqUKYdsSYUEaSUsAo\\n\",\n       \"zRA80zQRfWSzH1kdLnj2pU/xe2/3VOsFv/mbv8NfevCQ3335y+B33Di8RRozw+QRORNjxgfPrVvH\\n\",\n       \"dE1HVTuKWZJT4BOf+AQP33yDy7MzXKtZr45xpsG0EVlusdme46eBAmilqLsFRle89JEPcf/Nd1Bq\\n\",\n       \"9hpfLI44+84blNHTqArrLE4ZFlXDflqwLwOlgBEOLQuyFJQxs6DHaLQps5d8nv9uhVnipfWsDZBa\\n\",\n       \"zX47JWCUReVCibOMH5XnvZGi8FPA4BmyQtkaZxtsHJl8IgSPkhMpRXLhalzlOOw6rFC01YpaV3Su\\n\",\n       \"4c3zNz/4lTuPWP5H4N8rpez+3z9/7bXX3nt8eHjI0dHRB/6arvFnE8MwMAzD9/TcpzczlwXkPEN9\\n\",\n       \"d7yilJo7dQWmEigrkMbPGZRWYFxFCuGKmjirNp0DY+fZ+uP9Hi38nFWJZLmQIDUpjchc4axGC5hC\\n\",\n       \"ZH/6kHvP3+Ho5hFSypm6V0kaUyM6S7t0uGrJdjdQRMFWI2TLZAeyKzircI2gsZbJjmhqkvLENJL3\\n\",\n       \"I+M04oxh22+RxbE4avnYJ3+U+uhZfvf/+NscH3c8PH3Ev/VX/xrKSv7mf/43+V//+/+Btx4+4p90\\n\",\n       \"Fu80phhk44l+RNuK7Cz1Yk1Ie377N/4eb72x4fHZEyoyTRv5+J97iU9+8rPk0sD2DJEK6+UxiYBV\\n\",\n       \"ljFusFXL8y89wzQFchFUqwVvvvk6n7r3MT73yjeoXUulDV1jaahJacl53qHyQMFBsbP2VoCWCSUN\\n\",\n       \"oObov5JIRWC0QaqEzHP82xxMUagbGHeJEAOJkRAFlZ2pj8NlYBCJZ247pBVokVnUjogm5YGSHaJE\\n\",\n       \"pIiUFJAYrIwcrI5Z1YfkAqvlIXVt+F1e++OW3p9s3QphgP8J+G9LKb/yRz3nOvPzGu8X6rqmruv3\\n\",\n       \"vj4/P/+uz31qxfzmnYazs2FmtMgrKiIzZdHognURYzSuBiESbTu7E+q2wXuPlIrtpqC0x1qHJGOc\\n\",\n       \"JHmJNRIhPNlUVG1GW02lW7S0KBmJvnD+YOLRozNu3tlDkWgtaOoOFyxK1KwXFaqai9boRwyWojMq\\n\",\n       \"FZSdMJXCmoJwYCporzY493FCJY9VmnGaCDGhhESaBbZZsugafBg4eRgIQcwOgjny5/6Jz/LFz30R\\n\",\n       
\"GZ9giqTf7UnrI6qqRtYVi/Ux1eoIDygheO31ntf7yHMf+Qz7y0te/drLNO6SxfId1ouaqu7mqDgE\\n\",\n       \"hsK+P2PZHSG1xo8TSmba5QpVN/yD3/ttvv3Wm9w9vMOTzQnGLbBO0RXD1BtU1FSqhZSwUlK7hoVN\\n\",\n       \"KLnDpJn/n8WcBqWFQSlJUye0TIyDR1ChBagyYFRi8hFtLSEktMzkNLNicnL46ABBYh7xLGyDLy1C\\n\",\n       \"9ahSqJSirgO1AKH37Mf73Dq6hZwcojbcEOsPbM2K2c/hl4Cvl1L+iw/sRNe4xveBpzYzv3XrJkdH\\n\",\n       \"7Wy8VOBdvw1BYbGsuHFjQWUdUhaUKtRdwVYZ12TapWOx6FisLFW1QBsBQiB1IaPp1hbjLLJI6lqy\\n\",\n       \"PnTYOqIriLFQpowymfOHPScnJ1xcPpqtZN2C2lmcW6JlgzKGwogU6mojz6BRNK1m2TUYbSjSY/Q8\\n\",\n       \"6z+oOiolcUhyyYRhS1N1rA8WHB0uaaxCpolPfuyjPHv3eQ5aSQqeGCb+w1/8a3zrtW9AKkgjWC8X\\n\",\n       \"ZBUIKWCqFm0NarVAS8eXP/cNXrnY8wv/zi/ya7/1Ff6Xv/V3+PjP/Bjf2GlyXlMwswpUaiBhbYPV\\n\",\n       \"DTFGnFA4K+mahvXqgPMnZ7xx/1VuH92iqizdomb0Txj9ntrOkvulckgm6txhjKZpHXfvHPPx5z9C\\n\",\n       \"69azJW3M1LqiUi3domHR1CxqTW1riKBFQcmAtRU3Dw5YOIcUiSL3ZCJV3dDWx+z3ge04h4MoEzE6\\n\",\n       \"s+hWtHWLkAXlMstO0q4VdS2wVeJbD76CdJmcB6z7QNksPw3868BfFEL8ztW/v/RBnvAa1/he8dQ6\\n\",\n       \"84ODlraqeb28wZPHe+awIomrLE0XqVuFM7M/SikDTbNgFwNVbfE+YlVmIRuqOqCNZRz6WRGpE21r\\n\",\n       \"QQ5oA0p1WFdISgKBfjfMxQdNTIVxFyBFJr9DGD139bJGC4vOBsUJqMLCLRmypzBhbcWqPiCNFm0K\\n\",\n       \"Wk9IZiHSOASKH8k+EjNUtkYZy+VmxyuvfImXdM1PfeJ5/tNf+mX+mb/407z88jdZHSz5zGc+y3/z\\n\",\n       \"X//P/OU//y9DAKMN1gj82ROqm88QU0ZEjVOO3/jCV7hz6xb/0l/5BYb9jt3mlNe+9TovfviTiO55\\n\",\n       \"LPcZ/Ejoe1arAzKZAvjQE0KLUgYpE+29j3H68AGNMjS3V5xf7NHNim1+CLKnMhlnDUJHEBVOKBZV\\n\",\n       \"RdUaOtfhuiXLgwUPn5zx6OFjFtbQNWvapaaULSJJor8kjbPLYSoB5yQq16TSIOQeazNTibSdRcSG\\n\",\n       \"9eqIfurZRU/jwNYCZQq2lrM2WCuObuhZ6eoKmoqkCg8u3uR4fUjK/Qe2Zkspv8VTVk1f4xrfDU+t\\n\",\n       \"mK+7jugEQzhm8JFhIyhMFEZcbWma2TQpCD17ZauEGgQhDnAVAt3UlqYGZxTJO870lhw1tpKU7JAW\\n\",\n       \"rBA0rmVkYgg7Yu4Jk6Xkkefv3sVlS0mJFKB2hpQTMvdQWpQaaCvJ6CFHkNIgSqZSFeuqJasKY0Aq\\n\",\n       \"Schb9sOGKmnksEcrS9Jqnh2XQJ4mii5882tfoD6+w7//b/wc/9Wv/Cqf+sxHGS96fv1//z/5d//N\\n\",\n       
\"n6O1mu3mMbURKGvoDtYoaXBVja4b7n/1Tf7OF77IT/74j7C7eDgHQZ8/4qd+5Ee5fedDvP7OW7RH\\n\",\n       \"O95+9XfQtePi4oIXP/RRLraPWS2P6Ydzbty4OxP9nUGWyIdv3uWN0xNuHt1i2hum6QwpQQqN1QGn\\n\",\n       \"KiqnERmECiyWNVV7TJcs7XAIpmNRdaSpp7KOdpFIpaWkicsLR4iZaTdyeKuwWtVMynK8foExHRHV\\n\",\n       \"G1zEDU1zQBxgvezohOPNh2+AtNh1oqo1rlbEHNCqxlIjxYqcJZpu9upJI6VMFK4VoNf4wcRTK+au\\n\",\n       \"UmhVaDtD22mII/v9TEusW4F1nqoS9JeeuqooaqKIQko13o9EFTk8XKCNZdGtEUi2yx2ncY8xcwqN\\n\",\n       \"tBlNhVIOa6EPp+gqoUzi8HjB+mBJSZkkRiqtMVoSQiamCR/3ODFH1GmTISiEsDiTsVpRm4osa5xd\\n\",\n       \"kuVjfBJs44RTFmsdQz9g644cC75MTOPIZnvGxz9+g0ZmvvP2N/jX/vmfmROBkiD+01vGfWIYMxdn\\n\",\n       \"D5iMQq0aVnWFFwXhE+LJCSdPzlCq5vDoGGsUOXh+7l/5Kzz/4rOk4lm4hhd/+Bn6J69z/8Hb+NJz\\n\",\n       \"cvKAtm3Zbs9plwtSyXRVA+NE3j6hqRx1lXkUnqCMoC0dRe9o2jXrKy947QS6gJQDPlccryqKXmMu\\n\",\n       \"BnwyVHJB9hZBxlWRlHp8XzAqIbMkhow2hcyeu3c+hUZRpprl4hYh9KR8TtPVZHYoA8uFYkobsgCh\\n\",\n       \"PUp52qXEDxIpJ8gDUqxnewUpUEiK8NTNtTfLNX4w8dSKuRSRIDOucrSdRBQ9Gy4JiSgjyEyREm2a\\n\",\n       \"OShBBZrO0G8nvI+MKVK3kuWhwlQVK9Fy6yiR8n3qNlFEjyxH1HpBSCMFxWKxIIYti8PCvZvPY51F\\n\",\n       \"icK+97TNksSIEBNDv8HompB6IGKkJMmMVYbKgTWSLDLaKGrTUuxE3gdMcYyTRwtJRHNwsEThsCIi\\n\",\n       \"lGLzZMfDh29xQyheOlgSxOzF/vHPfIr733mLh2+8yjuP3+LG4Q8hSqGuDP1+ZFVPhJSIMVE3lh9+\\n\",\n       \"4S4Rz2c+8XE+98Uv8/jkm/zYj/4Fmrbjo/fuUh8sef6lD7MdtiipGIctWkvatkZri59G5PEhOQyk\\n\",\n       \"YaRbrDj2NzE5cx5GRLBEt2LR3ODgqOLh+XfIGKbtFmMjRQpKtWC5WJAZuYFFlsDU7/FjAenRMlKU\\n\",\n       \"pTKOysKoa0qZZm65vI8WL9A0HdJMNE3Ffjui3ewiqRIcLisutiOIRBEOsLRtwuiEsZnsJTkFsk8o\\n\",\n       \"E1CiRjIRy/UU5Bo/mHhqxTykgXHqESJSN4I4FdrWkMuEMRKlZq5yZRVCFUoWWKfwg8SYRNGKnAMh\\n\",\n       \"eEBQV45bt55n0R6R9RuUolCyQ8oWXyTjtEUZT91K6grW3RKpBrRr8MNAY5coJRn8E8Z0QZ1aNpsJ\\n\",\n       \"1EAOFdY2IBJSFIwqyJKwTqKNQ6oDhBMcHd3B5UzY7GkXgVIEpq64ODshTz0Hh0fU7QH7aUsXaiqx\\n\",\n       \"5+j4Ng9e/Qq77SUxDtSu8PD+m9y5c5NHb++puwXy/BSpDb4/w1R3+Q9+4S/zS7/89/ipn/4xXnjp\\n\",\n       
\"Fs8982E22wvW+5FnP/sc0/YNchg4PDjidLNnHD3tyiB1g5AapRXSLCmP38EAd82CannM65sTQiOJ\\n\",\n       \"MiKaIw6WN7Gmw1UNb73zMlY7dttTUjKkpJmip20awjBwdGg5ixbSOSVOGD0QZY+rC3UrGQaL0gUh\\n\",\n       \"EyGeYu2KVDzeP6btKhp7wDjM7KScPMZYVk1NCBfovKRxHdYcQThHuj2eQg4RISW5RLSKkA2iXFvg\\n\",\n       \"XuMHE0+tmPfDJfvpEqE0TQM5OGoTMe4Q7UaUHRBa4mpNChasIDMhpcJYjSwWa+b8T8QcH1a5ioVb\\n\",\n       \"44VhH75B626TkfTTSMieKQqqRqOpcN1jvNdsLs8hZ6ysQM92u0papBLEFJGAjxNKWaxpcZWgspoi\\n\",\n       \"IkbPdxOowAP/Dba7S55Rd9AxEEPCWss0DmQMRSgenDxGKmi7Q4QynF5ecrHbk/2AwLDf7Nj5wsX2\\n\",\n       \"lJtHNyhtjRQK70f8xSVn5zvuPjNi3B1+/l/8LG/eH/jK+Rq76fns7UN+/Cc+TT1e8tZrX+Xi4hKh\\n\",\n       \"LYuVxWjP5eUZXVcjVYdUhrI+oGwfkaOnNYreVFTtAUZc0MojdmLLsruJUZqD9W3OH53Qp1NClvhh\\n\",\n       \"iyeh/Z6SEipfMHiFVY5iWrb9OVZNKG1oOiDVTNMZMUpCiOzSOePwNs5VaFNom4qsEkYvyamg5BE+\\n\",\n       \"PqatD+n9JUoJpFBo0WBUJqWAUgktWko2mDSQgqeuFsTx6aeAvl+xaovF4n05DvC++rzfu/f+hWb/\\n\",\n       \"cbzp7wef//z7J8g9OTl534612/0hbdn7jqdWzKdpJDNCKlhjcJXE1g1GW5yJWAwiBipaQlGgIOsJ\\n\",\n       \"4wIxaoyU1E0FAkLa44dvc2P1I1gcXX2TdH6BkjVGKQafUDoi0ohUEik86MTYP2Cc9kjRIXVE6Qpt\\n\",\n       \"E00ryGJEFMPoBWlMGAWrZYUYJ+rKMYUzfJk3Wg09UgqcrdhPAzZHiHMYRZhGLnfnuKrm+eef48nj\\n\",\n       \"J5ydnnJ+esYLH3qJySeOD28y7C9IRChwcHiTdnkXciAWRX95wXZzRt119OenrG7d4N6P/gziH/x9\\n\",\n       \"XnrhkywOj1gfdGzOT/mHf/fXSP1Ad7AkxoQWgsWy5fRiR9M0UBIHd+6BnS1ynZLEXFg0HbeZndy3\\n\",\n       \"YmQhF2zHM+4e3aPuOj790Z/hm298nvPdY0b/DtP2Q8j1ElMkIdaI7NFoKAfYzrDtXwMZMcrQLBLH\\n\",\n       \"YcVuSJSUGUNAiz0gUVpBVjhnibkhZYHUiVY+h5Q9wt3AGojJMwWPDwMxBYzuEMkgRIWsA33uKWJE\\n\",\n       \"q+ZpLelrXOOp4inGxgFItARkoW01hgqjJaZyUMIcMQYIYygloqVEqoy2ic41LJyhiMI4PqE2R/TT\\n\",\n       \"Q3T9LEaC1oYx7FBKgInY4q6KhSfR4/PEYnkHRGCcLjEKsvQsmmNyeYiiYkwjMRqymhPhU8wY2ZHi\\n\",\n       \"nqrSpHJJCTsQE3VpWVVrtFToEtluHxCGkc1FT4iBEuCtb72OVIp+v2PXB1586UPce+4OzimeZE/T\\n\",\n       \"RoYycXjzGZrlISkEtJEkAmV3gZQaKT2rpmb/xivcPuqYpsDDb3+Zt78xYbWiajqenD9A95CpsG3N\\n\",\n       
\"o0ePoRTOTk6495GPIg4PYL8hl0KYBnLwVM6xkJqhtIxJMurCfveI/aLjWByij+7SD5/A1hO7+E38\\n\",\n       \"7hTrHEVacmnwuzOEvzH7uKuGTj/L5fQWuXhKtrSdQrJiCAHvN5hWzZF0ITFNPUquKGJEihY/RW4c\\n\",\n       \"tvT7fnZljJcgFVPOxBTIscK6A6RJJF9ANCB6NttHrJrnn9aSvsY1niqeHpvFVBgkQkVyziThca5G\\n\",\n       \"a4GQhpQsOZer8GaNcgKtK5QYMarGGIvWFUVuKezxCcKkZl53gJgz07RFKYgx4eyCXKCEQsgj1rRo\\n\",\n       \"IVgf3eTB42+QxSXQopTDWEHyPQWNFookKpRKTGPAmRqEAenJKRHjAEiCz0wmgBCU7On3I9N+zzBM\\n\",\n       \"FG2onGNPprYGkSW6XaHrQ/ZJc/K45/atFwjuki/92ud48s7L3P+a4/kXP4lt1rQHN3HNEW2d0WrJ\\n\",\n       \"fvOY6vmP8PhLLyOtpnhPUzlO3nkH6wTt8piqrtntdmhabhyt8DFSYqLplqRhjx52kKFtWvbbCxyZ\\n\",\n       \"ThtyCEhZYaeJLGqenH6Zo8VtKnFIVTWsOSD3Nwn9Ewa7wgpPCJGkMpv+DTpzE6EUOSW0aAiMsyLU\\n\",\n       \"dSTfs91PKNlhdY0UhlIKu/0pUnaQO1IvUUXiB4+1ljwZkoc+epKYQHugINQaUQy59Axjf0Vf7dlM\\n\",\n       \"rzytJX2NazxVPLVibmyFwKK0x8c9U8ngPFJpyIWSAwKHFHr2OC+FgsVYgUgdlbEok0FmpH031WbL\\n\",\n       \"dn+C0GsoE1OaEOM8AhFR0Jo1wlj2+y2qWAoRZQztQrPzG5wtc06ltCinKaMkaovKc2pRyp7oIyoV\\n\",\n       \"hErEVCAn9vs9Y+6JSXNoVySVaBer2X9GOVIR9P2Wul1wvu+xpuPjn/40BzduoZ3m2Rc/RtVUxHce\\n\",\n       \"8OwLH+I3f/Xr/MSnbvLmK7/Dzbsv8ODBt7j97A9ha8HBukFbSXV8zDOf/kne+fqXWa4XxGlg3bWE\\n\",\n       \"DALJPkS6w5tM00RVLSn7LbazsyLTtpQnJ6QwMI4Di3pOvV9rwzhOLIqmq44I5ZSTOPLm219jUT1D\\n\",\n       \"8IkYJbrUDLsdyk54uUNh2W335HLKxXTBWi9QFsomQp55+MpZmmUilBU5RtqmYZriXJCBaRwQosxe\\n\",\n       \"6qLm7PRtlgcLfLwgy4hMs++LSA7jFIKANjUxjnPEHx5TSRAfXDjFNa7xpxlPrZhroUA6hACjIyp6\\n\",\n       \"Yh5x1vB/s/dmMbel+XnX7x3XtMdvPGOdquqq6m7b7cR2xwkRgcTECkJOyBVESDgi3KBcwBUi5o6b\\n\",\n       \"CHGDhJBAkBvCRUQgggARIQkOKDZOHMvtod3uqbq6qs45dYZv2NOa3pGLddzpeOiuNn36ROnvJy3p\\n\",\n       \"095rr7Wl793vfvf/ff7PU5g1/TggpcCPHhjJIWNNSRBTlmfOCiE90giUVrghgRjw6YphUMDkihhi\\n\",\n       \"QIaGsrCkGChMg5SOruupqoDImdXijIvLRySOyEkhlUJLQTKCEDKlVSAc47CnS46qrCjGTIgJIzI5\\n\",\n       \"Q0gte/chjZCU5bThOT++xe3X1pTVjKZesR899157E10oVk2BNYphGHn0fEc3POfu65/gR/9wxVc/\\n\",\n       
\"/2UuNh9x+2TOL3zuIX/oj/4Ey9Ml9x+csd8dmN1/B/QdzK0z9Be/yuX1BUpkQjUjHHZ0uy1nD94i\\n\",\n       \"Ks1sdkSKgUoVVJXBzSx22BOcZ+j7yRdcKXwIJJGZaUvwA5u+wyhHaSyb4Uvs2itwDcaClDU5dfTb\\n\",\n       \"R0itydkyuAMGS1EKIpcUpsCWEt8KggdyR8oDdW3wftpoRgRillirGcYDSo6EtGCMmbKyDO4CKWcY\\n\",\n       \"u0DIwK57is4KAaQ0fQF470k4hLBoaZHypmnohu9PXtlkPmm1BVJpfMws6xl92GNtQWVmCBlJYkNO\\n\",\n       \"EH3EyBVGT5O9zAKtLTJDTAEjSpQKuHFgHA2FPeDHSM6J6CUhjbTRUxQrfDwgVWDsJH14xok9Z12/\\n\",\n       \"xVAp9oc91lQIkREiYbRhkB6lzbQST5mu3eFDTyMEkmIKuxAD5JGkPXbVcFbdp7JP+Mqv/xJPP3zE\\n\",\n       \"/bc/wdXmORcX1/za53+J+XzFj/7Yj3Hn5ITF/JSybnj79h3arsMWlp/8s3+WX/yf/grbq5Gf/NM/\\n\",\n       \"zZAuePDgLuUnPoHe9iS1QjXn+Kv3ORjN9eUjaC+p56fY2QKODWk8sN/vCfMFhbUUtsBYTe4DY+zJ\\n\",\n       \"WWKLhsPmCToHEJObocgSGWApJKmQHLLCFbBvd8goUdEiRcPQP0X5Eec9QgiCj+ScUaohFz1RHSiK\\n\",\n       \"BYdeIoLB6IJ+vCZnhbYJYsCNLaYsEKkgiQ0xOIyeQ1K4w4iRFaYSVLYgREdt75Dlc0LydO6C2t4m\\n\",\n       \"JkuKEaUTwUPCvrQxK4Qogf8HmITv8Ddzzj/z0m54ww3fAd92MhdC3Af+KnDGFNH53+Sc/wshxBHw\\n\",\n       \"PwAPgK8D/0bOefPiNT8D/AUgAv9+zvnv/PbrWltOqgZtUMoSwogXCq3mCBSFbRidAzFQ2obSLrCV\\n\",\n       \"RWtDLzNKgHcO5zVWzrBa4NWO4D2X189AQsiZ6DPeRZLShHg5BSkYCURyVhi1IKUDVbFks9uTUaRo\\n\",\n       \"QB6IcTL+knIyq9qxJyUPWRJiREvD6LdUleWTb/4oh6uK4TpxefiI5BLLW29hxMiv/8qvcue1u/zw\\n\",\n       \"Z/8YX/3yuxzfvs3y+Jwvfu2rzOdbiArz7ld47f6bDIcD61v3mX/iRxiurymPSm7P7xB85gv/+PPo\\n\",\n       \"ouL2/RXzlLDVnHVd8bUgMM0JV0OPdD0iJPqyIRM5u3MPKwTbzSVv/MBnCYsCuekIpsENI94HrJrk\\n\",\n       \"i0SQZUkpE30YISVquaILiZSvyFkT4hItK0JI+LTHjx3BKZLUWCGJLZw0DbbaIfXIub1FtzV4p5Fi\\n\",\n       \"wRCuyDhiMHQuUylBYwXG1FOotwRtDFloRudRtiJFQ06JkJ4TfUTZRPCZgStSWlIWS8gQ8jCV5F4S\\n\",\n       \"OedBCPEncs6dEEIDPyeE+BdfeLbccMMr5eOszH/X3EPg3wH+bs75PxNC/EfAXwL+khDiB4B/E/gB\\n\",\n       \"4C7w94QQ7+Sc/ykBsNE1WfYoHZCiptAVQz+ihcZai4tTCSQ5gZQzlLYoJZgXjsyenASHgySEgmjt\\n\",\n       \"FLQsDhwGz0g3lWxMSc6C0Qu6MWGDnNLfEbiQSVHw/PIRt8/eJsbMqjjDhRaXA1mG35LcIHJC5gIt\\n\",\n       
\"R2yRSHKPkhbouXPrExy2l1xuHlLK1zArhdsLApFyvuLyo8f8sZ/4U/z9n/27fOU3/wr/1r/3F/k/\\n\",\n       \"/vb/ydc/eMgf+fHPcrQ+Yb8bWCzmfPj+B3gfCYcPmJ/dw9Qrnm9a5rMlF7sN83nNGMGNI6nbkIzg\\n\",\n       \"7O3P8Cml8NtrfPLIGHj88AOunz/h+PiYdncB9YJbr91HCI/dedxuj3d7unZHMT8nFxJx2E6ek7zd\\n\",\n       \"vQAAIABJREFU5TLTaIlCsY+SBsOzXKFGza694nhxgvCZZXGLznkQHp8Dw9DhMYg+UZY1s8UcoTK4\\n\",\n       \"TFlUSGp8nxjdBdaC0ooYPCFMKVNaSdCZupgjo6XvAloUXF0/JlC/sAIYSTli5YxKL3HhCqUkRtdE\\n\",\n       \"p9Amkbz7bn02fldyzr/l5GUBBVy91BvecMPH5NtO5r9H7uFd4M8A//KL0/474P9mmtD/deCv5Zw9\\n\",\n       \"8HUhxFeBHwf+4TdfVyg9deslkFqirYbdZtqws5GUPCIWxNwjpcDayfcj6wIzOzDuHTFlhs4y2kxR\\n\",\n       \"CJQUSFnRKEXvRrKMpMA0QYYwrbpFYvAZN2YUFVLC1e6rlHKB1gtSXLDvR6Tp0XYkIYnREoJHCk1T\\n\",\n       \"zmlmx2R1xXL2Cb7y3q9wa7WmLE9JOeByz6I+Z6WPMcqQgufDhx/wyU99ioTi7/3vfxNtTvjxP/Qj\\n\",\n       \"FHWFCyO9a9k93E6hGwHGYeRLv/yr3H3wGtfbLV9jpFAe5I5bd17n3JbE0KOSQSzX3HrwKZ49fkj3\\n\",\n       \"0XsMuy3HpytkcCAz1xdbxDHce/0tRNkQ+x3F6Zr2/SuG/YZs95SuQpsSYabSlROCIA6IMCK9YJZW\\n\",\n       \"XIw79ocPWJQRjKQ0pwgsMT7Emkv2bUsUidLO2G08hW44OQuE4JBIZDJU6pyZibj8HsZoZnWF1Bql\\n\",\n       \"4otQkkBIG2pdMG9WkC1ZRvb7D7F1h9JQ2hlalwgpKNQRkBDSIZREOE9WL3cDVAghgV8GPgH8Vznn\\n\",\n       \"L7zUG95ww8fkOzKy+G25h+c556cvnnoKnL/4+w7w8Jte9pBp8v9tF4uQBSGBQCOlxdo1resZxj05\\n\",\n       \"Z2LypAAuOsZxQAqDRL/Qm0eUFvgUCWFyU8xkkBayxuoCrQRSTvFoOWcgkHJmGDyH/cgwDgQHfddB\\n\",\n       \"rtBak5PGj5ExOCBATnifiBGktpTaorXj/OjTfPDwS2hZklPA+xalLZEdu/4JIY+89967yGxAzVDF\\n\",\n       \"jM3zh1TljH/7p/8CX/rSl/jw/Uf86q9+nsNhh7WG4D2PP3rEe1/9ClppLp8+pS5LYhCUzYx6vkap\\n\",\n       \"kovLZwTnCb4nuBEJNCfHLI5us5wtkEmzWCwJIVHYTFkVU85qIcjVnHa7wQ09ViuS84SxI7o9pIAP\\n\",\n       \"PdJqpFAU0lKLgjLXpChw3nI47CFP4csxRkQyCNLkKS8zIbQMXaTvPH0/acj7bkRIhdElhZ6hVUPK\\n\",\n       \"ibK0iBeB3inlKbuVDmOgrmbMZ3MKOaOQK2QyyJwJsUcpi1KgtCclT06Brj/QtdekOH4nQ/o7Juec\\n\",\n       \"cs5/ELgH/EtCiD/+28/ZbrffOIbhe5NJesM/n3jv6bruG8e34mNvgL4osfwNptzD/RS6MpFzzmIy\\n\",\n       
\"JP+9+B3P/e2/8Y/RWhOi44f+wAPe+qEHHM3O+Wizx+VrVJoRosOFESk3k2OfnlPqhJEGp93UJGMk\\n\",\n       \"/TBi/RaEpLKa0Wmk0JOcMY/kDELGqVsQQQwjQz+QgqEqAtZrBr/B6hJTCGZVw8ZJYk5oAzEkBjcy\\n\",\n       \"+kQzW1JYy777MnVxzKE/sO87BjGwSmvq4pST0/tcPnrCanWEP0QWR8e0+2vWZ69xdOtN/tv/+r/k\\n\",\n       \"zU9/kqHfU1iFIHFx+YRf+9zneeetTzObNVTHZwwu8oXPf4HXX7/LmAqOTwrKeUE9L2n7jkU949CN\\n\",\n       \"HK4vif2eduy4/enP8vXP/X0unz2hLA0xGVar21T33sY/fRcdIuP2wGG3JfueujAIIZG6RAoQUuMO\\n\",\n       \"W2wKpPFAlJneO2RskSgO+xYhNPPZKVLUdK0gGoXSieQCCINWJ8yK27huzzg+wQ9PmRcrpLAooYgh\\n\",\n       \"oa3EVgW596TcoZQlpUnRNPgdi/Ub6FTRl9DtHdFP/0stAylfgTgmxsw4JD744nO+9htXeD/A98ho\\n\",\n       \"K+e8FUL8LeCzTL9Kv8FyufyevIcb/vnHGIMx/yRw5VstDj7WZP5NuYf//TflHj4VQtzKOT8RQtwG\\n\",\n       \"nr14/BFw/5tefu/FY/8UP/LHTyhtwW645uxIoVWJKWtkqhnHPVqBD4kYIj4dGA3oUWIKhbKT2kTJ\\n\",\n       \"RF1ZxjgSc6DUkiwzIQaE1ggRyEmSsybFgegTxihgKtskn0FkNts9IQ0YUVDpYwprsGkGaYc2AaJC\\n\",\n       \"kklhJGSHSxklMyiHlHBwGypW9OaSpV6xufqI+WLN5vkeIypySCzXpwQX2e+2/Kmf+ld5/Ojp1FFq\\n\",\n       \"DNfXVyilefONexQ2cXz/NqWeGmrGYc0HH77P2z/0Q8SYaduW9WpFURQILSnLil5KHn30NXSCz//8\\n\",\n       \"Fzm6dcKDz/wBDo8/4M7rn2R97x3i/orU3Gb30Zfo2y1aakKevmO990jrKbUlC6gXDd4bblcWediy\\n\",\n       \"p+DarNhpGBMM454YHTlpDu0W6i2mEAiZII1UtWTXXrPUkpQzQnfs+w+p7F2G0CKkIOcEMlKWNVIp\\n\",\n       \"cnIo6wheYK1ms33E0eIeKI9UAu8Eh25AhzQ1iZXPiAjwd3njU2fcecvS9x3Rl/y/f+vxxxnW3zFC\\n\",\n       \"iBMg5Jw3QogK+EngP3kpN7vhhu+Qb7uM+Ra5h/8r8Odf/P3ngf/lmx7/c0IIK4R4A3gb+MXfcWOR\\n\",\n       \"SXlkXs/ZtdeUxQJbNlhT4X0gREdKAYMhB0Hbt/Rji3Oe4ANudCidKMoMMuBSAikJYcT5EYRESkWl\\n\",\n       \"T1iVpyyLezR6jnDT495lBBYtZ8zqOW07cLW/QFqPKkZyTqSoUaLCWk2IPaEV9NuA77e4cUSryV4g\\n\",\n       \"5ohSls49x42ZUpa0mwvu37pPYdQUjSc0RydHhJj4+X/wCwz9gPM9T589QWtN3/domYlppCklSmdW\\n\",\n       \"s5Lz20veeft1RrdhtZpTVxXDMBBDoNvvMU1JSgMnJ69ztFxy79YxYuyI3vH2v/AnObn3AHl2glo/\\n\",\n       \"YNQK25zRXT0l9heUtpgyV41Ba0Xf98SU8D4CGtcPNFlxbCpKsUBZQ1ll+mFLjJG222ELMzVfSSiN\\n\",\n       
\"QqiWEK9QNpGZJIsgMXqkHR8x5gtC7IkxAwKR5qh0hJKzyTRLLdjvRoah42LzPiHvyThAEqMi9BI3\\n\",\n       \"QnvwxCCJIdJ1B2II+HBgdPvf72fh43Ab+FkhxK8wlRr/t5zz//Uyb3jDDR+Xj7My/63cw18TQnzu\\n\",\n       \"xWM/A/ynwF8XQvy7vJAmAuScvyCE+OvAF4AA/MWc8+8osyit0CqRsiTFGVkKRNYoKfCDJxOIWaG1\\n\",\n       \"wDvFOIwIu0UcMnUaCM4hpUQYS+oGfB+I0pCJU61YSCp9wjBoNJGiGEk4xqRQuZic+NIMLSyFUlBU\\n\",\n       \"9F1kv99SzgvK1uIjpJCRClLy3L/zGZx7ipVrfLpE5B4lA9YYUg6889of5vqjxwxSUKpznj57jMyW\\n\",\n       \"YtZQ1w3ee954/RPcuXOH3W7HxeVHCCHphwM5C1IeSNEhRKAwmqOzBePYs17fxczmzJolTdMwX66w\\n\",\n       \"ViK1QY4Hjt94i+7h18EesX38LovidcqmRCRN9dqnyD4RUo9NmaI5Zn1yj93VQ3wYsEWBVBpjLGVR\\n\",\n       \"AoJ2mBqhlLGoJBiGkXFMLGbnDCmw2z5lHGeM7oAuHQKmPRAySieyPND2gr4FoQ9IpnKVG7a0+0zZ\\n\",\n       \"SBCRGDusytOvr5hJckSlGTkYnHeMzmELEKJkGB1CVpP9cQikBLtuZF5lQh5Ad6QUgJeXAZpz/nXg\\n\",\n       \"R1/aDW644f8HH0fN8q1yD//k7/Gavwz85W91XR8FRtWkmJFJoKUlpTi170dJ1BmtSmJMhCwwtmEc\\n\",\n       \"WwQjIgnAMpmeSzKJofcUyuBCi1BAVohU0DQzunhAG0HEIgA/JKwVhAGMqDlaHLFvN+zDgXEYqSrJ\\n\",\n       \"vKl5cjlS6QVKeRaLFc8uPmBZLzCqAhGQKVJlQZPfQuSKh1//ImVR0uUDJ0efYnj/Q7rkyFpyOAy0\\n\",\n       \"3TVam2mjNSeEkBhjGEdHVTbILAk+EFNgcJHt7hoh4OTsLuvzO+jCUhaWopwTskMqCcUcFSJls0QI\\n\",\n       \"kHc/jY6BZn0LkRLh+TPM3fvIrUfuWt794j9iVlrG0SGlQFlNTJkYE0pp+mEAIckhYLWh8x6k4qie\\n\",\n       \"E1VHDImqathuL0gkQufJhSRZTRIj5IZhDMRhQ0wj67VB6QR6T0wZHzPKl6QskHKHVDsIkbEfCTpi\\n\",\n       \"5JY8CsgWpStCmhRFQhqULEB6cpRkYLPtiXGPUhGpPMErkn9lfXA33PBKeWUjX8lE30WUUuQM3a4l\\n\",\n       \"5kmjrNoZznV4HFYarKhRWiO1YQyB4VAgVaRsJDFGfMhsDx11UyBlJsaBoYdlWUDQHNqRohjQZURo\\n\",\n       \"PzkwaoPQipQsVjZY5TA60nYtzRysLCgLi0grcr4mxB60ROsKhMSqY8a4Y2HvgoDbpz/A5bMvUY0C\\n\",\n       \"PZzz5GsfoBDMmob5cokLidt3jhFC4pzj8vISRCanhLISKacvtz7uud5cYZSmrCqkzOjCIKXCaoOU\\n\",\n       \"GiEFRhVkkUlRI2OPPX2APrTohSW1W0bXUTSnqNUZ8cmXSboits+4fes2Tz74GmVZkMgIIRBC4Jyb\\n\",\n       \"4uGUREpJlBIdBad1QYh7LtOeGDw+HljOz/D9no+ef4QyHTpopMxIHRFY3KBI2SGEZr93LEQF4oDW\\n\",\n       
\"K/o2kGNFNVMIIn1/jRaGJAJWNuQ8kJMneMOsLhAoYnRY62jHjIoGqQpyigTv6YeBsupQUTG0Nbx6\\n\",\n       \"O/MbbnglvLLJfF2dMaiETJHdMHC5efxC7uYRuaGwEKMDDbW05GRRRhNSC9EgxOSqKKRBK41AobTC\\n\",\n       \"qojpIA6JcQwsmorlcsU4XGK1Ycg9wY+IrAhxJKXEZrOlampgD1Gw3e45PXqdxTwhXElQk1qjLmvQ\\n\",\n       \"keAFpalQSDRLFtWazaPHzOwRLgXOTtd0Tzt8ijjnuXp+QTmfsd8HmmbGbrcDwOjJmyalhNISoS2L\\n\",\n       \"ukDlTFNVxBxoFie4lOnHESEF4zgihMCUhjA60A1KJnSM0MzI4zNkveDw/ldAlVi3J519EvneP6Kf\\n\",\n       \"rdj+xs8hRMI5hy4LfICZMQgFmclvxiqDVImcJV0/+bMsy5q4eQgiYeya9cyw2e65OrQUMaOUxccR\\n\",\n       \"IcYpPEQpUgKRZkSnyXkkxoTVmr4PZJlRUiNFRKgSLRR10dB2F8TUUdfltDkqMzkOxNyTc2YYph6F\\n\",\n       \"LJi+2ERAyEAaC3IoyPEmNu6G709e2cg3VCzLM1IWFMbw9OJ9Nt1DQkiUZsWqvMfZ8nWshnLmsFag\\n\",\n       \"pEHJRJYJIRtELlFCgEjUsxqix5hIXRX4EBhdACL1TFLYAiGmlXxInpgg+8Dl1XNikBy6jtJamvkx\\n\",\n       \"47BA65Kz4zcABzljjMKWBmMkQllyBmuOqMoTiuqYpDNu7DFxhFaR8tRyLjKEGOm7gRAmvXtKCaUU\\n\",\n       \"QghyhqqqMMZQlhVVuaRZHVMuT1ge32W2OmN08YWroCDnTNd3DJ0jDgPD0JNGBykRxp7+omd78ZjZ\\n\",\n       \"fE6VA5mEvPoqYz/CsGcXIkenayLTrwIkU5KRiyhtKZvZpJSRGh8CxhS4FNj2e2RODENPzhpyRCLJ\\n\",\n       \"0aCocGNNiIaYFAiPzJ5aL4ljxeWzRGxL4pAolCClSPCJvsuEEDFGIk0gpQHve0LaYapIU88gKVKa\\n\",\n       \"zLWasiZ4zaHPdEPG6oqyNCjRsFq8QVWeYMxNOMUN35+8ssk8pIiSlqJcIJKgqY/YH64RoqMplgQ/\\n\",\n       \"OSMqIcniElVcMboDKQfQnkRk9AkpFLNyhpaKUhlEFtPmphQ4v32hYQZVSqQQGLmgtDWFVRSFJocB\\n\",\n       \"ox1aOIy2lHbN8fIB1hxTmxOivCbEHT612EqSUQglp6SklJB6xtj1yOwQfiCOGe9alFSkMND3B2Ca\\n\",\n       \"wK+urhiG4Ru18pTSJJFMmcJWlGWFtVOTUMyS7aFlu93SdR3jOLLZbBiGnvZwIAMuQh4O9LsDXTuw\\n\",\n       \"v7okSYlqe8TiHN91sGhI0mKrms3D9zk9u8XF80uKogAh0FIiyCgpCCGSfCaGgDGG+foIIRUxw3p1\\n\",\n       \"RIoZPzoOhwt2/SWyGDhaawor0Ri0rJEyIZPCyhnJWdxQk/yC611mGBVJRAqrUUpSFACBffchPl1z\\n\",\n       \"vb+iHQekiRSFBDESYph+rWlFWUiq0hOiJ0dL9Bo/ZqrilKwMy9kZ8+ZG433D9yevMNA5YSXEkFGi\\n\",\n       \"pC5KhmGJj5FDd01dNoQph4AQHda0KBswo8XTE7xGCUuIisJOdd6+T5gYGF0CBS62dH5DZRoQgbKa\\n\",\n       
\"kYaRM2NpD442RSyGPF5DWeOd5s6916iKkn13jRINpmzYXF1ibUIXkWE/MjqHEoFcW/r+mtLUKCvw\\n\",\n       \"bqTKFnKe4tDCSBghScO8qBik5Orqapoo57OpaSoENpsN8/lyKp9YCw6kkDjnUWqkruopQzAm5Kwm\\n\",\n       \"JdhuN9P7fPYRq7O7RNdhNDgMhYRn736e88UctetxfUuSitW8pB8dZWVJCYIPSKlJOVMYjdEKJTPa\\n\",\n       \"GJIAkTNl02DjyOHxR+QccJ3j8dW7WGOADmU9Si2x0jBmBbmikJbsSlwvGcf+hV7esVppCmvRxiKV\\n\",\n       \"QctIyokhXNN76A8GkRJN9U+Sg9qDYnSBWVUQRKQuJVJa2p3CSE30AyJXVHZFAAIv15vl4/Ddyo68\\n\",\n       \"vLz8rlwHmCIDv0tUVfVdu9Z3Ky/1t3j27Nm3P+ljcnX13bPd8f7l++y/ssm8d8+oytUUnlwmcg6c\\n\",\n       \"r25xfXhK7wLO75AWmrokp4IQHGVZ4bJjHCNZbVDpCC2rydcjS4bekVG4YSQEgy0io+9QWaKUIBIo\\n\",\n       \"yxqEJ4VEjBnhBVEMCBSZxKyuqYoGFxwCQRFPWViBH97jEPeQNIfWI8UAtsB1T1iUK8qypFCO+eyc\\n\",\n       \"HC2KCDIzq2dkW0DMSClRSpFSImeB95Hlcon3nqIwVFU9+dCYqWGoLiuWq9W0+WnsFLKBQAootEGk\\n\",\n       \"wHy+YPADxvccNldk7wlaMjx5l8fjGefDJebeG/jYs/OZcb9DGzkpX6Sc3AjDNNhihLIsMVkhqwKQ\\n\",\n       \"2LihEBJrS8YAgxRUakY7XFHbjNaGqtAoWeJdh5AVfkzoUCKyIOWOEBxKKFKIFHVJlJGisJA0WRVc\\n\",\n       \"7yRJ9Hg3ZcEiAn5oGYYdh93UF7DZRNbrmihH6jqRhgofHIYjZNbI7InCk+RNOMUN35+8sjLLYXzG\\n\",\n       \"OOzRsmG3vyCFgJGKUlpiHGiHDSEMSCzkFZIlQjmUDS9WkRKjBVooclIIKqIQ9H0meEdyCZUbok+0\\n\",\n       \"7YFu3NENLSkFRFqDtiTtUAXsx4RLI7aS7LpLQgBlDNebZ6hUoZLh4Bb0PXTDQNt1jC7g0xYpB4a+\\n\",\n       \"w2JYlifEMEL0jH2HmvrjsVYjlGa1WFKW9sUE7qiqAudGZrM56/UapRTWWubzBdYWlGVJ0zTUdUVV\\n\",\n       \"F9RVRVkWrFZLpABNmgI3hIQYsELQP/sS3gea0/vUp8cENzJ+7TdwlxcUJnF8dkyKAiE0+UUNXghB\\n\",\n       \"CB5lLdLUZKOIo2PY7RjGQFPWfOrsDT5z+oOs5BrnI5U9xsgGKzUpRVLwSCwhOqQY8akj5oEYw+SZ\\n\",\n       \"IxI5Z3rXEn1Pzh2FVZAMUBKGGikMs6rCao2LnkN7jQsdPjpUnqPyOU31AJEMzSy/UL5IgpOM447E\\n\",\n       \"FiHCqxrSN9zwSnllk/k4Kno3TE03RcXBP2fXXWLLBbWdUYiSIhlkNhjd4J1g6AJJHJjNFIWp6HxL\\n\",\n       \"58ULP5CK3gl8zoxjnjTHTjPT53RuoO1Htt2OlAMxJlKqSAR8bolqai8Pfs+jx+/RjR3DuOfJ5W/S\\n\",\n       \"5z2BPYUq0OqUnAuqas5idTJtAspIbSWFMIwxgBBYI9HCIIUE5fDBo01BcJGmaV6EOEx182EYODo6\\n\",\n       
\"pmkajLEsFguWyxVVVVLXNU3TsFqtSSlR1yVVWU1yRiUJfiSnkfawYf/8Qw6bJ1SzI1qXWaws/cMv\\n\",\n       \"cbh4DBL8cI01K9yw4/btuxijKasaLyGQMbokJ8847nDDgMiZommo50sgc/A9lcicL1fMixmFKWjq\\n\",\n       \"M+ryDm03MroRckFOCp9HhtjTdR1aSyKJbAZ6d8AnR+96nN8R04CSsLT3EWFFJStqs6As5wxDIsSA\\n\",\n       \"i5Ex9tRVRX8AFVeEkMliT1Fpcta0B08/DlMqVbpZmd/w/ckrm8xVsad1zxlDj1GWMQ54RgbvULmh\\n\",\n       \"eWHMFNpIHAUql/ggGHuB0Zp5vWa9vEM/PEVLS9PMUGJO8BCTQaaIlmekJCjNMX0/pQ5tdlf4MDJ0\\n\",\n       \"U0BzlOGFkgJk1hRa4n2P8z3b/j2e7n6ZNj2bYtB0whaR5dpitGZWNQgGlBix2SKSIoSRvmvRlWS1\\n\",\n       \"PqYsaqqyoa4MKU97BKenpy/KLZnTk3O0NpRFTVlWzOfLb9Q3F8vFi/r6nPPzWxhb4GPADQO+7ygK\\n\",\n       \"i1aSHHo2F4/oN89xQ0cWOy6fXyK1RC5WbLdbZvMVfXdFTop9e6CsGqqqokAiU8LHEWKitnN0WaFM\\n\",\n       \"Tc4SKTIzW/HayTGiUGy7A4dwjdSS9foep+u3+OF3fopV/YPkqKjNbYYw9QREIRl6wXDIGK1JMnHo\\n\",\n       \"Pe3YMfiWbfsEFx05VKg0J0tDQuHGxNinySVTJJQSDGNHqZe4PkI4meIGbUYg6Hs4bEeyAyVeXgfo\\n\",\n       \"DTf8s8wrq5lnRkJoaccrpJrc+vbdASUENs/RcobMEXe4IAwF8/UMkRy2qPFOcLw84nx9i97d52r3\\n\",\n       \"LhlHXdZcXO9Qac6isIzdNaZYYJSG1BDdji73RPGYQzfifUKbnqqYMatWCD9iTCDHHiMrUk6I3EMS\\n\",\n       \"JDWnEGuETrh0TcUSIzJd6PB2wd7v0C5wtLpNERua9ZJu3yL0gC0X9D5QlCXHx6fs9xuEUJyfn2N0\\n\",\n       \"MXnRBKjrCq0nL5emaUCISUoZJsuA0hqCj+zGHjcc2F/uifsrTFNTmcyYEjkllmXF5uIZsh9Yv30L\\n\",\n       \"u0k8fPgh50crDruIUXpS02SPKaYUH2KcJIOEacMYIERyCpRVxdX1BQs7Z1lVqEMmBkfXD9y5/w4P\\n\",\n       \"7n6aW0fP+fDZEU+vv8zobtP555R1QRg0KVn6Q089t1RVjaXA9zt6OWnPiQ5rNGUx6d5DFORcQ9Yo\\n\",\n       \"lXGuY8tTvEsIPMiI1gKpAqaIBDI+wHa/4fTo9Zc6boUQCvgl4GHO+U+/1JvdcMN3wCtbmUspGeKO\\n\",\n       \"Xf+cIbRkCVFkrg9PEVlydnSfVXPOzgMUiFBx+/gtrD4mh4YQE01dsqxOqMolhTKUBVhtmBU1RsL2\\n\",\n       \"4n367oBQkvOjO4hsEUrRDVcIlVGyIqcarStyBpQmpoGYO7QSvHn+rxBCJkbNOB6IyZHZItgi2ZHT\\n\",\n       \"gIuO1reMwWNkxW57YPQju+sNUkJVlSQSVVVxdLSmbVv6vufBa69T2MlD/ezsDCkzy+W02SmEoGlm\\n\",\n       \"CCG/0fo/jiPBeYQAlTPdboPbPsMdrtg9fUS/n3TgspC0uz3zWc2tB/cZdwd6N7CY1XRdO72fHEgC\\n\",\n       
\"2sOBoe8JIaCUQSoLCIauxfmBnNJkpxAipa04PjrBKkN0jhh7nl9+hFaak9NjfvwP/hgnxyuEcNiy\\n\",\n       \"Qss5xhiqQjKvS2o9ZXWWuWaml6zMOTkavEsoqTlaLybjLy/oekhpgUgFRhXEOJLoyHJDCIHRO1JO\\n\",\n       \"ZKboOm0SznuUiuy7r77sofsfMPkOfSvL5xtu+J7zCnXmCp8cXXjKvn9CSiO6iEjrqErN2eltzm+/\\n\",\n       \"RlPPGGKLyBUiW+bNKdrWfPjofS62Fy9kiglkj9QH5tWc+bxCK4cQkcvr50gKZmbN3aN3kHGJUTV1\\n\",\n       \"XVAXlugs2+sW7xJQkUXmcvtl6rLi3q03acwD3BgZxhGpYNmcU4gFOb8IZ7CSy27H9fgR+9hO8WdC\\n\",\n       \"kFLi+fOP2G0vUVpRlPZFt+ckE8tZ4tzUEWqtJeWIkhol7YsgDUFV1YgM+90U1tH3Pe2hRYpAe/mM\\n\",\n       \"w9UTri+fUBWGan6KqRt812NFRMXM5vo5dVHQ7w+MXUs/uBe5q5oQM9Mic1KyuOBRymJsQ1E12KpB\\n\",\n       \"KEsQGo9EyZKtcwxa0HvPZnNJzh3vfvgP2e4umDUNb7/5gygaYtpTlwadNVYbqsrQlAu0kFhqGBQ2\\n\",\n       \"zyjDAuEsZVmhSzBFQYqG0AUqPWNZ3mJdvc7p8g209qTUkeWWKA7EKMhoQkx470hEhFFYG1/amBVC\\n\",\n       \"3AP+NeCvAOLbnH7DDd9TXt3KPBVIMgSP8y0xDUBCG48yEWs1R+sjbNNQNoE+bthteiQFOiuSMHzh\\n\",\n       \"vZ/j0bNfRxAZ3BZjBbdunzOrTvAIhJLEsaWQhpQis3pJqWsK2ZCBnAM5KMYh411i9IkUM/vhCdpK\\n\",\n       \"Kltze/1Zcqjou5HoMwRLyhVSzEEI5vOau6enrOx95nLNfL5CSoOxllt3H9AsTol5UtQAOOe4fese\\n\",\n       \"MXqGoSfGSEqJ+WzJ84tnDEOPcw7vPdZO1rjTRN4xDCPGZLa7LfP1GmMNq6NbdKOnWi0RerLHvXr+\\n\",\n       \"hN3+CbOyZL+5Yr1c4NxkKVyXc4SySD2FUsQQURKqwk5NVDmSsoAsUVpTVQvK1THVcsbx8hbnJ29y\\n\",\n       \"tpiTGMgy8OjZe3zhvV/kqx/+Ko2cM1sek8JApQxVXaFziUwag6Exc642T+h9h46GW8V9TC7xIVGX\\n\",\n       \"S1IUeN+Tg6bb7PBuwKiKs6O3OFt8CmUUQQS0hnpWMgyB4GEcPcpmlEoI81KH9H8O/IfcOMDc8M8g\\n\",\n       \"r2wyt2JBoWqkfuEB0gZi8IgXpZbej8SUaeolUsOuvWR/uMYNnhQhB/BxRxi3HPpniNhwfvo282aF\\n\",\n       \"1gKsRZeCwiouth/h04gLPUZrNAoCGKMxKiJyouv3CD19Rl10XGyeopSFLDlZPSDmERlhdAdkhhw1\\n\",\n       \"Ri8orGRW1+ScsErT9h1d1yKN5b33v0ZZ1hRlgbGGrt9jjcaamrbbU1UVbdsSQkBKRYqJmDxt22KM\\n\",\n       \"YbPZ0HUdUkq8c+TkuXj2mPKFCsb5ESkU0Xu6riUliZCK2XJJDtB1HSpB13fMFgs6NyC0JCTIMeBj\\n\",\n       \"ImVB8CP7/Z7t9SXRj0iREClBUZCtJPQdYezQEpRUoDKZjPcOhOALv/k5ujFCCtw/fgMmjwlCAAAg\\n\",\n       
\"AElEQVQfBM18gS4MQQ7E7BAqIynRlWbMgSwypSk5qtfMyjVGlBhq/JBJXhHdyL67JsjJXXI5f5PK\\n\",\n       \"HqPEFFEXQyAGOUXO5Txp08MUmPEyEEL8FPAs5/w5vs2qPMb4jSOlm3n/ht8/KSVCCN84vhWvbDIv\\n\",\n       \"OKNWJ/ihwLtEcILdxpG84tn+Me144PHFh1S1IeWBftgypAPX2w1aLhiHKwxryNOm4P3br/Pm3c9w\\n\",\n       \"5/wBBz/ikkcvC5p1pPVb9uMFl/sniBwxKWNSRCpB1j3WTi6MOW0IeaQ0xyQ5MMaEFJJVfU6t7vH0\\n\",\n       \"4hH73QV99xxyhDinLs5IemAQB3LyxOgp64qL5x+hlODLX/48w34PUnJoO9548w3KqkApxXK5JMaI\\n\",\n       \"tZayLNhsNpTl9Jx80S2qlJrCqGPEDyN9u8fvr7BWU9gGqSWg2D7/CNduqNcn+P2W1eoIVKZqLO04\\n\",\n       \"0MznrFfHpBgxRpNzRiqJEBkpBCm+sBbICbIgEXHbDbnr0RK67RXD1RWLouD89C5alTjfMw4j7fCU\\n\",\n       \"rz/+In1IvPPgR/jkG3+C0tzmnTf/CMdHdymakjZcYEtJlrAZ9qCgsIb1fMV6vkC8kJLmYOj2HiVr\\n\",\n       \"Dl3P+4/eZUyaWXmbu+sfZWHuIbLm0Pb0w4YYHdqAVAIhFDnalzVk/yjwZ4QQ7wF/DfgJIcRf/d1O\\n\",\n       \"VEp945Dyxvjrht8/Uk77Zr91fMtzv0fv6Xcg8oKU5qRYEcaSsZO4Fq6fJ/re8+HTX2fTfkjXbXHR\\n\",\n       \"M6bI9vAMGQ1hsDTFLSpdEpJmuTilqdYcr844Xt5GSIWWDTJbslkgdeIwXLFpL2nbPZ6A0DVlOePs\\n\",\n       \"+Daq9BgrCB6kKClNxdht2R0uqGcNOWvurD+DMceT85/UjONzNEfUxetkVaN1phdTLmmKmbFrOT+/\\n\",\n       \"zdHREUJknj95zOn6lJyh7w809ZzNZoMQAmtL2nZ44XVuqKqpq1VrTYoRP47sN9dT4k7MjDkx7K5B\\n\",\n       \"CEJyiHSgmS3xoedw/ZQoLd4HYsjknDg5OqE9DJRljU+w71qCd4RxAATtGDBGE7wnuoG+3xN9RGsL\\n\",\n       \"pQapWR/dxY8d7fU1plyzWCwRXpBTIPrIP/j5/5nnmyeENPLpN36Yo9WbNPUxb73+Y5yf3GYxP6IN\\n\",\n       \"HdrMaMqCp/0VwxjwOJI/4NNIZjIgWx8dc7L+AY7nn2Z/2PPkySNc1yJINMUxMpZEp8hJInUg4xDC\\n\",\n       \"I03mJS3MyTn/xznn+znnN4A/B/xszvmnX87dbrjhO+eVTeZ9H6jKGcvZEa4zyGRxo2bop9b+Rxfv\\n\",\n       \"cnn4kI37Iv24JYZEJiLNVI4pzRpBxdHqwbRSdyN953j67BKjJcfLOcezY1b1kpP1Gq0zMTo8PV3o\\n\",\n       \"6XNHGjNGNTQzizYFMUhmsmFVntEeLgjjNSLvQUSsaThdf5IQSpANQp5OP+99Q6PvUpQ1STl2fosQ\\n\",\n       \"nrOzW1xf7TBqxocffB1b2Bf68UzXdTx/foG1lqIoKcuSEDxVXXE4HF4oWDJVU7PZbtEvfFP80KJk\\n\",\n       \"ZOhbYkr0Y0tOClOv6MeBLAKlEizPTqCyLFZr3BAZhumLIqeEVpr18Qn9GLne9+hy+lLLGWxdImyF\\n\",\n       
\"EAqpFFIr0uiJYSTGQGUthbBUpuTurXOOV0csdQVKYIj87C/8j2z2GxbVkgf3HuDbiCoM9fwUW2qs\\n\",\n       \"Vfg0lVj+P/berMeWLD3Pe9YQc+whd04nzzlV1dVd1d2kSYomRdm0RIm0LAK+sOwbD/CNLnznP2D5\\n\",\n       \"D8iA/4Bh+IoQDAGCIdoCfGGRht2WmzIpm02y1VPNdaY8Oe0p5jX6Yh8SbbPJ7ha7dEh0PkAiA7Ej\\n\",\n       \"YgGZa3+x4ovve98xOJ75Lb0bGdxIP3b0pkfmCWen77I4WnF68phEL7B2y7r7mG1/hVeQZUdYZ0my\\n\",\n       \"HCWzP1r9BgzR/Sub0vfVLPf8ueK1BfNmu2FRHHG8qFnU6SFdEqCsSiQpeZETfECIQKIUZaFZLo55\\n\",\n       \"evcd9vYGGydmVY0Mkln9gCeXT/nN3/p1Pn7yB4TY4BOHSgSJTtBCUZcZVTlHqBwTPMZ2DM4zToZM\\n\",\n       \"VBjXIEJKP/UgAlJFxvGavrsk9Fu0DAQjWS6+SFF8gfniEVd3z7m6eULfOZAFg5goypooEjyCo6MV\\n\",\n       \"L68+wLsGHQUhHnLMiMB68xKlFMfHJwA4ZwivcqzTNGGNIREK4QNt16ATzeXVC8zYI7zH2wktC+Bw\\n\",\n       \"c6vnC/Jiho+CYA11XjBNE3mZE5RgMB37tsM4R79vKaqSqihYbw/CX6vTc+rlQ4oqoSgrRJIyDSMx\\n\",\n       \"gG17unFi01t2zQ3Nbk9dzTm7WKLKGViBo+Du5RW/9bX/jX3T4yaHLiRXm0uKWcby9A2klCRxROcD\\n\",\n       \"Mh3pxw19GDDOYLqBfbsjyTXVckFRzdn3a+q0Zlas2LV79v01LvTEYHBGo/yco/IhuZzjncQNKWH6\\n\",\n       \"7Kd0jPErMca//ZkPdM89PwSvrWlIRHGoFS9OWR3vGKeRFEmVZaR5RhgNOp2RqTmNNWilcM5S1hVW\\n\",\n       \"3DHLF1g7IZXHO8vd7pZd+4KiUDw4epu+nUB6YtCHR3AhqHMN6mBZNowtWboHmxPjhHY5/djSj46J\\n\",\n       \"PTqL3G4+QMkZaVKy3n1CsDWZyiiygkLVLGrPpy8+YDm/YFbknM6OwWbYqWe92XNUH7NaLhAyBQH7\\n\",\n       \"7Ybt7R1BK/IsJ4RA1/as12tWqyVpluK9PbT5W0dnerIso2l2yDyh0AX77R3CTZSZRMoEaw+iVVpG\\n\",\n       \"lExJT1YE29L3PUIryDQn84dcPf+Ui7MH3G02LOoZu2YHwPHxMUk83NWd61Ayw8mArkoylRKmgegD\\n\",\n       \"U7PlqK5IhCJxEZDEPKAyQ+wlbrR4H/l/fv+rLGcnPFid0fsOaQzjDqpsxdHynIYNk9kz+IBKAplW\\n\",\n       \"iGlARUeZKYJwKOkBQ4iW7f6W+VFCnip8dPg4YK1DxCVhkBgdQUB0GWMniPqzK028554/z7w+oa3m\\n\",\n       \"Uy5fbPBeoXTN0XJBXZRk2QwhDlZhxhlCzICK29sBYxwET51qrBuZYsfN7gVtt4OQMHYGXOT8+A3+\\n\",\n       \"xi/8p1zfTdysd3z64pbru0uyNOF4seRoccZ8CfMqkBc90zBhfMpgJ3rbcbNf0w8TIRE05pbWtHRN\\n\",\n       \"w93mBfv9jvXtHW2/Zp7UpPqI6+srjA/EWBKUpxnuyNICIaGcrajqOev1M+zQsVguyZWmns0I4bDq\\n\",\n       
\"VkqgtCBJDp2f2+2aptvjnKOY1fjJsds1ZLnE+YAWka7rSbMEKROm0eC9R+clxlmi0CRFztiNTP1A\\n\",\n       \"s2u5ePwuzTCxWC5p+gbcxGxeMnQdR0dHBKUOphtKo5MCbMRlCfLRQ9TqhHRW8/T2im7siCZhnCaU\\n\",\n       \"FGRlgrOOwUXadkSh+T9++3/ho8v3sOOeenlMXq4QISf1RyiT46YZfR8YjMd4S5ACEwMqVRjTMdoN\\n\",\n       \"Xkbm5YrT1RHjtGNwLT4qhmkCJSlUwjhZumFCB42fAlMvaLf32Y97fjx5bcF8OZ8jYosdJYv6TY6q\\n\",\n       \"Yx6dnFLIyKp6RHAZkxmx1iCUwobAvm+xYcCMB02Otu242nxEPzbU5QnEAjM5ThZfwkwjP/H458jz\\n\",\n       \"GdFJoq+ZjKd41R1al4KssszmnrKscdYc1BeVpe9HgsvxzpFmMDqDC45m13D38pLnN+9zc/chOY6F\\n\",\n       \"TOiHgX17Q5KXJNkSKzPQEnTGy6sn7JuW46PHtO2Wq5fPXpUbHQSh0lSTpinDMBBC+CNPTji093dt\\n\",\n       \"e6jA0XB9c8M09uw2a2JUmMljraWsC6TMAUGR5wQi+13H6vSMtKhxpmfYXpPVM4a+Y7FcYaPixcuX\\n\",\n       \"aKG4Xd8gpEblKaosIJFEJRH7hrjdo7DUWc67bz1kVZ6S5Ss0pyg1YzZbsDjK2O1u6DvDsBuo0pSP\\n\",\n       \"nn+D0U0EZ6nzHEIgBkUkw1tNDOWhQUpbRjfRu5EgPFpG+nHNvn+OkAPHq5w8tyRKAZYQxEFoTEpk\\n\",\n       \"4okYopQQC4iWNFOva0rfc89r5bUF8/myRhYWT8c4OOpqgQQWs3MyPQeRs2tb2vGORa05PSkJwbDf\\n\",\n       \"Duxaw3bXgZCMdmC3vyHBkec5u13k8urbNP0W53rKDOpFBlIiDDS7Dm9HcIoYPTLRPHr0eZROEHLC\\n\",\n       \"jAdl83FscYODqEjTCRcldaKZZyAj7NuOm6tPCK4n4CFabtefIqSmPlpiR8ftzXNOTx7ivCFGQ1kV\\n\",\n       \"VNWhq3NWzymrHOct1jqG4WALF0IgSTLGccQ7jzc9Wnpc21JkOYnQJGWNkgprJ4oyw3mJcQYfHZMZ\\n\",\n       \"ETJlvlxyd3fHNHbMV8e4NMWPDcY6dtsdq+UR9WLJECLeetxk8SEQzQSyIEqNzBJie4e3jnR+jJgC\\n\",\n       \"75wseTCboUWOcnPOTt/m4uKCxWmFUIqTk3OO6hPmecnLqytkSJkGhw4eFwa8CKRZTpVUlJkmkxWz\\n\",\n       \"uaaelcToDl6ipuPm7mM6d4MuBIvlA+bLJVW+QCEJwaC1RWtHmhdIIUAY0pmmWLx+c4p77nkdvLac\\n\",\n       \"edAJJ8fHbHZrJqeQ81NGEUiDJwP6bsILxTgZlsuRNy9KtoWk38EwdeQk4AR1fnpozEkjRVEwtpKv\\n\",\n       \"/s5XODk9Ic0DuYbjxcFrcjf2iFbj3EheOYryiNn8DfI058uf/3n+72/+7wgSkgSS1CPkHJ1EYhhB\\n\",\n       \"GOqy4qhSLDhiDBnNtGN0gUwFlPS83H1KNTuljBXTdMvF6UP6fiC4ESHmXD5/gk7uOLt4m8Ia1us1\\n\",\n       \"x6sTpJQYHwjhUCIYfCDPS5x1ZDKhGbdgR2i3xCTHdD3l6SkhRIwxZLmmyA5130pqlFa0TU9RzhB4\\n\",\n       
\"ttstVZHhjSN4j3EjKnqkSJjVOfiR6AM6nRFxxOgRWkMxA5WjgiVYS7o8ZXf7gjJPEX1kMX+LZb5E\\n\",\n       \"n0fW7Q0b3fPw/HNkWYUxI/10ySfPfp8qnyGFQ0pPXiQY16OTQGTE0VHlFd54Ji9QSYYPllzD1K0R\\n\",\n       \"ZUKqZkghKbMVRIvxHVFZZtLShR2jCcyyCqSnrH507jz33PMXidcWzMuyJApHMw1MnWVRLbAY3OTp\\n\",\n       \"GYgiJbrI1DmsbcnzOUdlTRUVPY7JWcZ+pC7PyKQkTQWPT89ZZoKPPvqQp09uWZ1Jjo9T5lVK9ClG\\n\",\n       \"S+6aFi0EBkWWZ1xdP+PR2SNibJiVC5x0qKwhLwTBHRzmbQgIBU6MCErm1YqT5Iypu+Hj5x9Qleog\\n\",\n       \"gKUmds0T8uqLFCczNs0e8OybAeg5OzshSyom05Nnj5nP5kgpKYoCszN4J5B5Ql2VCH9YZacJuLEl\\n\",\n       \"15Ht0LEqSkQ1R7iA1uJQi+48MRUkukQpQdM05EVKkS/YbjcUuaZvenSao/ICaSSb3RaUIEZNVc+R\\n\",\n       \"WhCFAJUjgiBIDTGCMvhmT4yeAsHRyUOePn+ODYLJBfCKMj/ltK7I3okIPyJiBVgQnslMdH2HcwN1\\n\",\n       \"nWOMZxpHgoBgHKnrSMaEIDR5ek5eBpyfwI5EobnZr5n5llX2JsYPqNRRxhMMd4zGE6ykTAs0Oc55\\n\",\n       \"Jvf6V+be/2hewn6/jr8fht/93d/9kV3ra1/72o/sWj9q/jBF+aPgoJH0F4c/Nc0ihMiFEL8thPg9\\n\",\n       \"IcQ3hRD/1av9KyHEbwgh3hNC/BMhxPK7zvkvhRDvCyG+LYT41T/p2jmWRbpkkZ+TqYgWEm0CVZaT\\n\",\n       \"ak2dJCREtBeYJkPFJYKEUsMs08wqDQqULHn7jX+LYBXzRc0bjx7xM1/+BdApfSuxRqFVitYpgoSi\\n\",\n       \"1AQtMK7m5m5kvd/z5PLrrDfPmKeKsi6Y1Uck+qArbk1gGix53dGFls14h/WGKgsUKkEFRaZyJBKp\\n\",\n       \"crr2hilOzFYrjO2w3iC1R0hD2+y5ur6hKmd47xBC8OzZM/q+J8ZIlhWM03RoHKpnByu6PCdJEvbN\\n\",\n       \"lsXqjGGwzGc1xluUTCiLGQDjOEKMtN1AmmdkacXV1cGLsp8ceVURMHRty3a7xRpLu75jGvZ03ZrZ\\n\",\n       \"8QVCa/w4wbIikiJ0CkWOms+YvORyf8XYtCyqORGFnw4t62MfWZ2+TZYKvFzjxXO8vKaoAlnZkRcZ\\n\",\n       \"/eTZtZZ2GhkGh3cJUOCMABKM9QcvUr1gPjtF6Tn9CF0fMK6n6y4Zp5dIkZPqc2R4g5PFuyRJwTQp\\n\",\n       \"pFTEKBibz6wD9J57/lzzp67MY4yjEOJXYoy9EEID/6cQ4q8Bfxv4jRjjfy2E+C+Avwv8XSHETwL/\\n\",\n       \"MfCTwCPgN4UQX4wx/jGBijqVFIVGiYBQOV3bM/aGqAbqcs58ucRKA2FAiwJrcqIQdFNDVitEVKRZ\\n\",\n       \"ZF7mrx7RPcq2LBYXFMmKo5cz+u4O0xbc+sis1GQhoRM7MpkTY4qZBM527PsPKFRBLgqy5BgRC6IP\\n\",\n       \"jHZHkiSU5emh/nu+oe894/rbGB+QUYFSzKpjlHCMcodMZ+ztNekEZJJp11LVx7T9HiUyiqoAIt55\\n\",\n       
\"rm9eUlYliIiSMA4N1gUytSDTiiDiwZIuLciWxwTnyPOMvh9IEoUPHust+axiGAzt0FHkGmsCu3F/\\n\",\n       \"0EQH+rZFRIn3UM9qtuOEEa+qX4xlVmeYsUU/fAgx4AeHPDkh2B758o5oRnRRcBHPuY0bapdT5S8Y\\n\",\n       \"x47NYPHeEJVHxhTXdUzBQKzwk+Xs+BhnBXOTY6yFwVAFiQsBLw1JkjKODZtmIJEnnK48IVjG0TFF\\n\",\n       \"Q1lm6DTBMpHJh5T6Td48/yLjquPZ5QecloEp3aBUgjGBTXcfzO/58eT7vgCNMfavNlNAARsOwfzX\\n\",\n       \"Xu3/NeA/eLX97wP/IMZoY4yfAB8Af+V7XjcEmt0NUvlXQlp7nry84ur6jt61IC2pztBFTj/uGMcJ\\n\",\n       \"pUo6M3G9tew7MD7j6dUHbJtLurbD+Y522tJ0az7/4ILTozOmaWLoe4TNyHRCCrgwYIynLo85rj5P\\n\",\n       \"oY/QgIwOH9cEJtpmpGsNSuY4K8jTisenJywWc9LsmHby3PQ9vkhQWUImFMoNxNhxe/sMT8RHhUoK\\n\",\n       \"RIzM6iXDuOX05BTrD23rf/g4XpU1RVng7EQqIwSDGRuyRJJISLTEBU/X9/hwWNG3bc8wjoQQGMeR\\n\",\n       \"qsjJ05ymaVFKveoqdWw2G2azGS9fPMGMDS+evyArUqy1SJ2SFBlBCcahRQ4GLUDpSNzdEXY9ZAnI\\n\",\n       \"BJTm5e6aXEmib6nSCZt8yN3uG+yGb9OuG/rbGZvbhLtrz83VjmiXTINAKk9WGnzoSKVCaU+SjCQq\\n\",\n       \"oINj6EecL3l0+pc4Pf5ZrJc09payFhwtJEWSEEbBsjpnUZzy+Oxd7BBpui0yHQhMCOkpq5S6mv9L\\n\",\n       \"fRHuuecvOt83Zy6EkMDvAl8A/psY4zeEEOcxxqtXh1wB56+2HwL/13ed/ozDCv2PMcaUu6tLdFFx\\n\",\n       \"NF+x2XrWTaQb13htqKoMhEIqGAbBuvmEL7z1JZanb7De3OBjxn67JpUjlzffIShD0zuMeUK/V5RJ\\n\",\n       \"wenROXfS46PFhAkVC2SAVEW6yZKmOXWeU+c1m923maIhkynB90ilaXaOOvEsVkt6e02RK+aLU2RV\\n\",\n       \"0OwtfWwQRaB1G8ZxwqQeySU+KdmZW46KI+bzE6x1vHz+hEcP3+H2bsPnv/RlpBCslqfMF3M2myu8\\n\",\n       \"DaQqIa0O/5I0FbS3axKpuGtuiVHjw0SiMsqypMhSnDE45w6ljq9MLfLskMIpq4y2bfHOcX19xYPz\\n\",\n       \"x2x3WzKt2G+29F2LmSR1PUeKjKJcQrREWQAp+BF9dIp40RIXCxLjWa3Oubn5lM53qLSj3TynHywi\\n\",\n       \"rsil5G4zsNl6lBJIJLc3WxI1J7F7rLuBtMBMEUGOk4YwOZpQEaXgeP6YL3zup3nn7bd4fPIlPl3+\\n\",\n       \"c26uv4r1OaPNOT1+gxeXT/jcX/5FqiTnwdEF33yvJdAgdY91FhsFaX7fNHTPjyc/yMo8xBh/FngM\\n\",\n       \"/HUhxK/8/z6P/Ok6Fd/zsxaHSSqCdDxYvs3p4jFSOaJXbO76g8Sra0icJlcnmKGgabYURcHDs3cI\\n\",\n       \"qidN58zUnMAeokCKFC80+2HPaCNFekpVn5AWc6QusEHgRQ4cNEzHYcA6i5IFSXaMTkq0PCLVR0gZ\\n\",\n       
\"kUKTkDJsBdNWsl13ODEhdECmltlMkmYGY3uuh1u6YYsPlijvaJqXr6zZcsZxIC8ymrbj9PSU3d0d\\n\",\n       \"eEeSwDB2zOo5hZaMQ0PEMKsy2v2WrlmjhEDGESUMaTrj9u6Ou/X1IWUhJfaV2cQ4jK90XQ5qfbtt\\n\",\n       \"Q5ZkzGYVWgS6fkeWJOy6icEY0qwAIEmTQ0lkkkGUkBeQp8hiSdi0+EwQxwHcRDe1SJ1wefcBSufI\\n\",\n       \"bs7TT3v6vTpIz8qJJDlU4pTlDJ0kdO3EZtvQdgHrDU71uNJyMa946/SCVVWQMSPENU9e/B5X1zco\\n\",\n       \"Ldg3W4SNFDLyePkupaqoy5yvf/gVggocH81YzTMUCVoUDF3Hvtkz2e0PNPH/ZRFCfCKE+AMhxNeE\\n\",\n       \"EL/zmQ52zz0/BD9wNUuMcSeE+J+BnweuhBAPYowvhRAXwPWrw54Db3zXaY9f7ftj/PZX3idP5php\\n\",\n       \"QvzcHcw8i0WKaUELgQw5WeqR1vLg+JS6fIPN+D5FMaNUFzzfv0eaWGblCRJP1P5QZ97cEuIr9wCZ\\n\",\n       \"kZIgk4rBjGQqp8ofsG+f4OkZzMTm+Q0XR4+JQVPmR2g5Y3KOurxgt/uAbX/HMj+nsxLb7/HG4pKR\\n\",\n       \"yQrOT96mLAs++eQbZHJO144oFSlqQVJNOGHo7j4i0wUIwWKxZLtd8+47b/P82VPKxRLfN2zHnrpM\\n\",\n       \"qQrN2DV80m45P14xJhGhInkxYxgnyjJBMgcChIDOMqQ61MtHAbPFgk+fPeHB8RExKpyPlNXs0B0a\\n\",\n       \"QArBos6ZppTb62dUqaZrt9SzGUIKQhQIAlRzEHMkLxFjTkg1od9xtbtj6wekWiBjIFFLBvMpVbJA\\n\",\n       \"o6nSgk57hBbkaUYljnBiYHI13bSHxHFaPODfeOttHpxVXO17Pr5uudze0N7dsW3uiGywoeE7z36L\\n\",\n       \"i/mMk0wz2mvK/CFZ2fHs8kM+fPpXCGMDWmGd4cX7Lc8+2OKjJIofXQXIn0AEfjnGuP6sB7rnnh+G\\n\",\n       \"71fNcvKHlSpCiAL4W8DXgH8M/J1Xh/0d4H98tf2Pgf9ECJEKId4G3gW+5+rlF//WY/7Nv/kFfvnf\\n\",\n       \"/df5t3/lb6ISwdFpYH7sAEfwAtNIlCwp05TT5ZLz5RtcXz9BxsiXL/4aVSnQcjxUfOgCKVKiP6ww\\n\",\n       \"+2mLCI5cLRFBIIMkmg68ZLV4kzSpQAxYE3FecrvbIMmp8xWRhlTUvPvWL5CXJbt4wzB07HeGrnXc\\n\",\n       \"3t6wvr0Gd/Am/Zkv/CKzVBMmz+464EZPVD17d0OxWKGznOXynIjnc5/7HJeXTzk+eczN9TPssKPM\\n\",\n       \"JG4aaZsdZZEhvKfvtiQU9H2HsZblckGWJehEIqUkOMN+e8vY7djt9ugkZ7SBs7MLbm5uSZKEbuwZ\\n\",\n       \"pgmd5Fze3WJlYDSert1wcnKKzjO8DwTvkd4gMMTZMX6I+H6H0AUxSkQcUHnBO4/eAVmS6zc5W7zF\\n\",\n       \"oj5Gq5zN/pb1bsPgLLM8ZV4IssSTSMGquuBk9pPM1BkPZMW7eck8AdN17LZ7rtZXdJs7Uhuodc7g\\n\",\n       \"XrJv70hUznpcczvtUaVEyZbgJk7mS/7gW1/lvauPKOrHLOt3uHh8wi//jS/x13/p8/z8X/+eWb0f\\n\",\n       
\"NfeWcff8ueP7rcwvgF97lTeXwN+PMf6vQoivAf9QCPGfAZ8A/xFAjPGbQoh/yMHw1gH/efwTijXX\\n\",\n       \"t3fkqWBZPuTN8y/z1sU1m83vUAWNSTN6Y5Fpgczm7NuWxWqFQCNlwmgMZpzI9QIXLXjHZAJJGqnS\\n\",\n       \"ilHtDsYTsUfJcKjGIBDZovQxeV7xbvk237j8lCTLCTHBOkkzWhbVQJYIEBNSzDl/8A4vn3+LNDQE\\n\",\n       \"k5HrE7we6doN337/D/jpn/h5vJqoU40fU0yIRJsd2sqdBAvWWGQiqOdHh5eTRUXT3PH4rTe5ub4k\\n\",\n       \"2W/ph4HlrMANPVUmGdsWhEfJhE3ToJKSYEemcTwYiSaa2XwFMaJFJIgAeIa24eHjN+m6w3vrbt8g\\n\",\n       \"lWJWz9he3SGSgLUT1mqScs7i+ORgHl3NEUISEMiqRHYW129QyxNoGsbNJVfNlixqiBB8JEkFqVJs\\n\",\n       \"mitud56izJnVEF0ORU6DpCpXpFPGw+N3Kc0HNI3l9977mNFHnjYdbdgzLxIezBXFyjGanmlyCFfg\\n\",\n       \"/cimmXh44kiTQIqlC7C7e87R6jHFbIkdJdpLGB1WGEad/wi+Fn8qkUOVlgf+2xjjf/dZD3jPPT8I\\n\",\n       \"36808evAz32P/Wvg3/kTzvl7wN/7fgNLKozpkUWkqmqk0AiRkEiN1inL1SPa7mCF5pRk6vfYyZIW\\n\",\n       \"OevdBkVOKhb0/hlh0kQELljybM5RlrENl0y2x7sUEonCIULG5O6oxZLj4zeo11v2ZkRIiZYLrm6v\\n\",\n       \"WGQFeZGx3X/CYlahUTw8+gle3H2F4/KYqjyn8ze4cE2/XvPkybdI3/wCCIGSCcINmK4k2ADsae2C\\n\",\n       \"VAryMqefRqarK4Zhx9HinKfPn3NxdsLL50/IspwYM0bTk1U11jpCPFSunByf4Z2hmtVM1mHGgSxJ\\n\",\n       \"sMFTFYfyQmcswXu0VDTdIV++3x7yzirTpHmGlxE3TAipQEryPMObEXV8jkoSXAQlNVhBKOdIYcE6\\n\",\n       \"xPKU9sn7XK1v+ejqBe9d3fFTb7/JZAfSzBOiw40BZx1tHyhyULoi1Ybe3lFlp7jeEoWgdY6rbYNT\\n\",\n       \"KV5qFBIhICsiUgXGYWLoIt5rXJBECm5uPyJdnSEJeBPx1tA0t2hK2mHDXEISAqPwJHH5/aben5W/\\n\",\n       \"GmO8FEKcAr8hhPh2jPGffvcB320V991aO/fc88MSY/yBm5demzZLlc/xNsULxba54Xb3IU1jCUlC\\n\",\n       \"PT9HyJFymaA0SFHRj5a72yuaQRGRZIliWb+J7RVydDg30po78kzyuTc/hzeOl3cfsBu36DSSZJ4p\\n\",\n       \"Svqp5aZ5RphyHh6/RSo6QvAcLc7x3nO9bdi2O7wf+PjJV4gIyuyUB4/+Kt2+Y08GwLgAACAASURB\\n\",\n       \"VD6fU89OcF7gRsvtzR3vvfxd1lwyO4a60khyhm6k9dd4NmR5ydAb9u2OfduQZjXlvGQxW+CdYL/f\\n\",\n       \"cXZ+jlSaNM2IMlLVBVmWMZvPqWdH5EWFj5osL6nmK6JOMCGyaSaskPT9oSM2Ks28WiJ1QqlSxv4l\\n\",\n       \"0XRIIcmKGXW9ZL46IRz+sBSpRkaPHSZkPkeIFD/uQVhEWUFaEqcNpz/1l9l2e9778Ftcb/4Fv/f+\\n\",\n       
\"P+PFzXOsG9CFZX5UkeucQs9RaoYfJZ4G55+wMx/RuJ6trtlEReMlXipcsKQa0lLQKcmmH7m53HPz\\n\",\n       \"4pZ+PeKninEU7Pd37JqnuCmQG42zA8ZseXb9TfrxhihaTDXilEGkn23XXozx8tXvG+DX+R6lt1LK\\n\",\n       \"P/q5D+T3/FkQQvx/5tOfxmsL5tZOKCUxZuR685z17SVKZ+z9xF7dEvIt7fSSIAxCeYie3jQ4uyXK\\n\",\n       \"HVY0RDyz+k0UjrJyWD/S2VsWy1MeHL8LQYD3jP2E9QFZjpikZxgaPrn9Dt34jDxN6c0tMfYs5iU3\\n\",\n       \"m5dcXu3Y7HtGY/n2x7+DU5GsekR5+nl8IkhJGbtIN0RwGdqc4myGLAPJkaeoL5iJt7G+wbhD52cI\\n\",\n       \"BiUiZVlytDqirudMbiTEkfl8jhSCKBSTjQyTp2l7usnSND3GdiitCTEyOUte1NTVMbPlGSqf0XYG\\n\",\n       \"i8KQ0E8T682WYAxGRo5OHzJfniCkRGmF856xH5EonPGgcqTQxNEgvSAIgZq/gdjfICmI6Qwax/Th\\n\",\n       \"N/j8xZfpgiV6uO7uWHdwfvzTvHn+s7z71pf53MOf5q2HP8VJUpGGwH4baJqGfnqGLhOsqNl2HX1v\\n\",\n       \"2TcGJSuKvCZKQQSGfmS/G5gmRxAOISB4yWY/crW27NxEGw15polOYaYJfGBynt4LjBAUdfOZzVkh\\n\",\n       \"RCmEmL3aroBfBb7+mQ14zz0/BK9Nm6Xb34Je4oLn8uqaRAiq8phdt8a5Pa25IQ0PcDHiwgiq5MHZ\\n\",\n       \"Qzq/R8qa0R8c5o/rx+yixcdbEnLaYU3QE2dnp7xYf5NoI007sQgZoSrQWNKs4NnVhyyKHIRCSEWU\\n\",\n       \"hovztzhdRD78+D10NqMfNqQ0fPDsn/Nw8TPM8oTz1UPWdzv8lBCMYb1tOD5Z4EIk5hYnJrJa4UKE\\n\",\n       \"yeGkQyaRpuvIyxrnDE8+fcrzZ08oypy9t5yfXjCMIwCLRY2zIzKryPDoVEOIjKMjTRIW82O8kKis\\n\",\n       \"RAiF8iO1THCmZ7IG4SQ2GkwSSZMUyHAcKlmcc2y214TBIpUnzObo4KjTlOrhW8QsQYSDJjyzC3x7\\n\",\n       \"hZzNEQ8uuHz2MXfjHo9HJhUIR1GckicleaopkuSgFmksy6Xi08vvYPqRKTOkcSLGlmmsDgG7H8gL\\n\",\n       \"zypPyKscMYVXDVCCVEMsJWmVESxEC7e7HTF2HHmJt54YKqSLpC5idh3XvcRJT3WmKOafaZ35OfDr\\n\",\n       \"r1bbGvjvY4z/5LMc8J57flBeXzDfNaiyQK00m90LkqTgLH/MMN6B2pH5Gd1uR6GWZIs547hHhznz\\n\",\n       \"VODdSFHUOK+RMXCz3pFXHi89sjJcrr8JPiNNI84YwjijjR7tIUmX6LKmzHu0kljTIvMBKRVpXlBl\\n\",\n       \"mkePTnnx/A4TFcfVCWHa853nX+VULVlUF3iTMKkRmSis7Vmvn5EvxKEVv1zRTh/T2ZQkS6jKHi8m\\n\",\n       \"nLVMg0IjmS1WEEbGYUBpuNtuOFbH5HlBURS4JEEIaPY7hM7QQpJlCmTCMAxkOifqhBhAKsdmu6dr\\n\",\n       \"tpweLdjv7yi1QJqUXMdDA48U7Pd7lNaM/UhCQlHnxBgwbsJaR9OsWVRLYi3x+0tUqRH1Ejvu0VNA\\n\",\n       
\"FSXP3r+lSGruts8Bg0HQq5zhpWe1yEhUzbysEa5Hy4lEeRTxIBrmdoxG4iO4wWCVwDpL9JJxmOid\\n\",\n       \"JTrI9ZIsTZjchHeC7WakaSPLVc44eMxgSFTOMo0sVc9tC9d3nsUDxTzXxGn6zOZsjPFj4Gc/swHu\\n\",\n       \"uefPwGsL5jJX+Gi5fHFF8ILR3ZCqkixPUFlOkBKxiEyTZNe0SDGQaInpU4y3SNXhPQxmz6Z7SW0h\\n\",\n       \"yTweyXr/AkVFxKCzAWdnOCkI3hC6FOIeERVWJExC0myfMqtz2mcdF/MHvHVxwWy+4l98+E2IkmV1\\n\",\n       \"zt1H38SeV4ymYz9sODlOYCUJXaBaPsC5PXY/kBVz5nXF85efILsKIS45lSVlesQ07dmv18z7iTSN\\n\",\n       \"r/w/A6uTE4QQbLc7Ts4fEmQgEpFphkcio8J6T1FVKOsRSUqaFnTdhLGR9eaWTMGTp09ZLmeoPDm8\\n\",\n       \"1M0SovMgFEppnn70bWbzFUV6UGZM0gIlPEFCkhY4M+A//BbpW2/jmg26qtHaErodHz79FCN6erFl\\n\",\n       \"NGuwBUJ1qGgYxh1958hLzW6dcJyXeBlIUkWzH5mJDBlGlJSUuqZPW6KI9F1HqhOCF/RNZOzhqJIU\\n\",\n       \"IqW1EzEcGqCyLIPgGYYR02fE3DE7Elg7MtxGgrCoIiEEQzskr2tK33PPa+W1BfNsUaFsxvHyIVIn\\n\",\n       \"fHp3SZ2uyLM51gV0luBsz8v1R+zGipPlkt44zHhoOE3agx9kWUhUIjDeEE2KlIK+ddR14PjoiG5r\\n\",\n       \"UXnChCYKg1CRIlW0zQAiI00Ux/oxcVyy6TZ8uH+fx4++xF96tOL67iWZSkjSkvnxKU5Zrm4+oekH\\n\",\n       \"klSQF55kVbAqj2m3kjAZZF/g9hWL5Cd5evcROpNUxUgJmHEgSQuII1pXWONJkgTnLG3bsjo6x3iQ\\n\",\n       \"UeOdI0kzohAInaITgVQ5OpMImYFOSVOJcxv6fmTAM00TWV6gZMAwQqxYzWc0TcM4jATnKfKErJgh\\n\",\n       \"M025PCZJNLOTBxTVgqg1Ion46yckjz9HHG4Q+QnS70gzxWbzlGV+Qls9Y7OFKi2QUTEMBePUIBXM\\n\",\n       \"ZjUDAouBKPGjZkSRLmucCMRXkgn5IlKkmhg0UuekiWA/NhgNqZBEn+CDpSgLQgw4GzEe3ASzWWQU\\n\",\n       \"hnbwtN4yP8opUnuQ17X3Lxzv+fHktQXzKp8TtSJPJciUaeeYnQuUmnO7u2NWBLqhJykkIU5M8pBA\\n\",\n       \"dQLavWe4ukEKSVJ4gnSkmSChxDHy8rbhNAiWVc7ZaU2avkOZ5jy9+RZVrsBEprbHu8NqV0+QF59j\\n\",\n       \"kT3ik91XsVNKmmoenp7TOYPSmrzO0IlARIEYBal6CO4SkQhQE3USKY4e8Oxmz9h3FNUZJ4sL/NCj\\n\",\n       \"coHHk0rFOOzxdqSoj5mcZ3V8xtXVC07OLqgXC6ZhZFHXhBDph55ZmeF9RGnF6AM6Kw83AAsueIRS\\n\",\n       \"qCTDjj1VuWQaGzJKkjRhbHvi7IgYLXjLxaO3kBKqIiHEwLC+Qh8tMWNDnuVMw0R1dAz9lrBfE6oF\\n\",\n       \"/tnvo+ojpnZk1zQU+ozj5QWIHu8tWuasRM1m15AmKSLNWbctQhjKPCGRBcEJ3JiBaonRkCYjZZUi\\n\",\n       
\"ZSDPZgeRsCSjbSb6PpKqhHmSsZ9G8mTFbHnMMF3j7IidFFHCzky8uLaQ5eRzSFVg6MC/vil9zz2v\\n\",\n       \"ldc48xURz25omcYNq6NzsrQAOZIlKW23J0lShDZEPOO4YV6cUFc5SvdkyQwzeoapxWGJviYKy0V9\\n\",\n       \"wr/3q/8hZZ2zbl+g9cSj88+hkop/+tsTH3z4dQiODI3z0HYdp+kxQgyY2DBL5uTZChFmFNJy3TZU\\n\",\n       \"RxlnJ1/AuGt6uyEmCRKN6JZ45WiTW1Yqo1ys2FjB9fNrvLyjKEqyrCC4HJVoRJIx9AMxBHwwyKD5\\n\",\n       \"8OP3OTo6YZo83kW8dEzWgAg4H9h3E2VZkacpMYA1gUQrpqnHGMs0OVyQPH32jHfe/gLWWEY14sYd\\n\",\n       \"3hiUCvS7O4iK5dHiUPoYI0RIyxxPgpQKR0a1OsJ3O+R8jjAD6ugtxMLzP/2jf8Rl9xH7bkNEYKUm\\n\",\n       \"nV8TpiO0LBC6pDcVUmTYyePs4aYrBSRFQaITfLQMdiKfSVZHJ6hZgReQJhmPHr5LkZ5ihq9wfbkH\\n\",\n       \"BdZbpmFgXmVoUtpO0HeRRKbEGGg7h8dzVDrmicCOEYLmIOx5zz0/fry2YD4OLcFL6kVNUhsYHCIK\\n\",\n       \"QrDMqoKt39G6G5TKyFJNKj1ajZwefQklX9LpDjMWZL5i32wZpw7nc7TMOZudki8SHD3XN8+55IoQ\\n\",\n       \"M54/v2W3NzAGTpOUbX+LX+bspxHlPmRIAqZp2d3dMq8CZ8u3+eDFP6M3AxcnX2S93zCGK2woyMqI\\n\",\n       \"pj6o9U2CKwUP7J6z8xwhjpnaW6RUSGa4GBBKE0VE6xyRRG6fP+Ps4Vu0Tc/iWCOFZrPbkeY5xEhd\\n\",\n       \"15jRUpYlfTdS1wusd2R5hnOBvu+RUjFNhs32hqIseHH5glkaGLc31HlONDs2MpKWGaujFSFahmHA\\n\",\n       \"+5QqLxAyR+cKpWcM3R60JNf6IGxTzohmg5qfcHS24je/8g/YTDsm+5KuG3j7i6dQe+KYoIqCYpjh\\n\",\n       \"nMMER9f1eCEhSqpZQpqkdMPEMPUs6opidYZSKWbaIrVkvniT6AQXp49p+/eZjCXRKUrk7Pe3CAp8\\n\",\n       \"jEQvkamCCH3r0CqlLCRSCVIpSVXOYO6D+T0/nry+nLkQGCMZxxGtHbiRTbslyoTjkxllVjE1LfhA\\n\",\n       \"qhV+mBjTjnldM7k5l+uPkDGHuEQpgSClLEs+efKCX/sf/j71yYzFbEE3XtMOVwzbwE2z4ez4hLLW\\n\",\n       \"BDMxTRE/tYTB84VViZUJd3LBi6fvUc/+NR6cHjFPa6beIlUgxMDYTwgKjB1xWoHy6LxAGMHWWXKV\\n\",\n       \"oArQusZMIy4acgWDH1HB402HGTxFOWOzvkZKjfee9d2afhz44pe+yNX1FUodFBG994QAZjKEEIgx\\n\",\n       \"ZZoM1jqE8IzjwGRH7GSYFSVFpRk3e8rZDDHNURrqcsH6+gqpBDrRYDOkP2jgHB8/RvoBnVbQNoSj\\n\",\n       \"GQRJdI5Ypahxy4sXVzRbw93aYqNhtThnHt7EpH+ALa9hqkmynCgNU9fTTi0yKqZB4k2FzSRD19K5\\n\",\n       \"hizLGcaBIPa4cIN1PV9/r+PR6U8ymDvKKiUUCqJlMop2P6DlIWBnSUkiFV3n6ftAkjjSNMdrQ54X\\n\",\n       
\"jENP13zmQlvflx+V3diP0rbsL5oF2p8HfpQNX/8qmsdeWzCvS8GIYLe/5M1HX+Tl9TOsteyGPUoZ\\n\",\n       \"lIpgJIMxSD8RRo0NA5vTK8y0P1Sz0DINPSos0CqSVTNknPGdjz9h/bsbfumXfpbFbEFvJGkpObI5\\n\",\n       \"pRQEKaGqmMeUqMCnYDNFmqXkFnrXsGnWfP7sjAfLR3xw/R369inSQR5WOOd4cPQWWb3ibvOEIghs\\n\",\n       \"qdhtb7kxd+RVTaYUk4QgDKMdGFNDGR3BDgQXCEJhjadeLJn65qA5M/Y8f/oMnWiuhETrlGEc0DrF\\n\",\n       \"GEuMkcvLlxRFQd/3JEnCbrcjenB2YnF+Riot1CW5zvERYvRMU0OSaK5fvke9OMHrnGmQyLBkamdk\\n\",\n       \"1RlN21AXCbqJiHqOWh4RXeTl+x/zye1ToisQtmdeFLx18RCFpGktWW6J7CjKithD8HtEEEy9xw4W\\n\",\n       \"MUV8ZfHOgIRxWtO5lihbUqVIY2C7uWKePub49CHb7jssVgrweFcwNB1KWNT/y96bxdq2pfddv9HN\\n\",\n       \"fq52d2effbpbdetW7zhuExuch7wQkfCCAi/IIrwhIBICxUSCNyLIAxCekECKIkTAAQnEQ4QAS7YV\\n\",\n       \"B6fsclW5XL63bn/6vc/uVjP7ORoe1vZVxS7KVaaury3v38tZa+251tj7aKxvjPmN//f9SYijBBdr\\n\",\n       \"mn6N1Jr5zIOxeATbqqbvDcjbnPktfzb5xGa+lrsy1TzSGFGymD7kyfNvkkk4e/mKOM3pWo+KNF3n\\n\",\n       \"kMHjreH09AVFMUGPnijy9L5Fygl+6HH9gIoTjo7uECcZr148Y/GZEgZFiCLiaYWvR0ahiFxElpdo\\n\",\n       \"ZamlZetG5DDg05RYONbtmm2zIc8SkGu8vGJvfh/nI6SSxElMFJUMzRWh3zCEhqbbUG02SF9xcnDI\\n\",\n       \"B5ct1jmO70yQTmKDJE5KXHdN13dkxa7l7eAsUsVkacbjx0947bXXuLy8IssyDosjjImomwYpJV3f\\n\",\n       \"07QtXduSpinee7y3xGmC946quWBSTmitZzlZcH3+kjgZieOY6XRvd2iZZGit6J1ExgUynSDbS7yX\\n\",\n       \"yChCZCnepMhR8M6Tx1TXHf1YkeaaR4/uo5XgnQ/fwieBvX2Jjq8gaAIjfrCY3qONZhgFkTIYpZEi\\n\",\n       \"gBKMotlVscaBNDIYEdMGR1HGeN9z5zhn2z8m1nPi1LLYjxnbgAgQa8U29OAscQqmzLGu2xlkaE2i\\n\",\n       \"MqLydgd6y59NPrFgbmvP6CyXlxfMJp9ie33NLEtI04inlx4dYo4XE7JFRpSmrNcXbFcX9OECaQe8\\n\",\n       \"0yjrsK1kCDUhCMbBIsWASmIO9vfRwWIbyb3l67y4fMFmK7hzcB/COeVsRRokr4ZX5Dk43WNUiQwZ\\n\",\n       \"61XLph157/QJKpbcf/SQKHZY/wR0gfeSqh3oKsX66pLMSPJFQ9ELVBIxSxOIRuKoZXtu0MJQTEq8\\n\",\n       \"Gnbqk0jincX5kTiJyIs5V+tLQlIymczxUnDx6pLDQ4WSO+OJKIoIITAMA03TYMfxJpA79K4F1a43\\n\",\n       \"eZwggkArRTMMu26Tcmd+vVje3x0W2wGC5+69R6TTA0ZriZMEhUckBSGKEaPDVyPfeP9D3j37GgHY\\n\",\n       
\"ti2R0ZxfXRM2Dtopa1VTLmuMrAg+xriROIoYrSXKoCwycq24ajrQkthH9JuIPPV4FLKAg4MF2/YD\\n\",\n       \"FrM7OB9TtxlPHr8kVxmFgSEonDTEdiQ1hotYoXRE3w708UBaahIj8aFhe/0H7GZvueXPBJ9YMD/f\\n\",\n       \"rBEoQhh45+3fYHA1ZZ6jQoxRKVEUUajAJFfEWYGwirGGzeYSqQeIOlAxY7DU65pJPifJJPP5kg8+\\n\",\n       \"fIfZYs6yXFDZDlNDcAqjInRUsFq9jTAVeZKRKo83ZyQ6Io6gHzSZueDCWd59ecZsep/5coKUjjKf\\n\",\n       \"8d673yTLFBfX19S9Yy4OmZgpvfQcHEw4fWmwTtA6TVrmzPXAy/qUJJ+TxzHYgThJaOuaWEcYAb7f\\n\",\n       \"oAOsry+I05LttSRgqesaQvioyU7f94ibsvy6rhmHkc1mg3WegKcoJsRipG7XLCZ32W6usMIzKyfE\\n\",\n       \"MsZkKUU0wVrPdDrZ6cuzBRerFbFRZJMZQkeIfIIdPZcvHmPtiPMxl9eXfPq1Y7QSbFY1s3RG5Rqu\\n\",\n       \"XjniLBBPK6Sx+MTRd9D0u+rTOAiGqkF5iyoEkSiJ9BR7vWa97bGzgSZao5TBDS9YzqfcmR+SDAWr\\n\",\n       \"iwsmkwVxOUcEGNY1V3VFZiNWmy06DxSTgCJgm45upQnX2Sc1pW+55RPlEwvmPREMDUIN2LHBaMW6\\n\",\n       \"bRiDoh8cYazpgqK96sjHDi0zBJrQlmzOO6K0YAwjyiiyuWZot0QqJdYpJycnfPjsfertFUmS4rzj\\n\",\n       \"/smnKItHfPrR5zj7ypt86xvnbKbPmCxi+nSfQnikkaz9NbnU5GJgZVuuqnNG13N8cMLB8if40b/8\\n\",\n       \"1/nNt/4xz1/8En3vIHakpDSjR+jA0f4hV+sVm8qhdcL+QtF3gWaoSbUiS3KaTYdWCmEHxOgQzlHE\\n\",\n       \"GV5KZsuCdVWxnOyC7OXVFVEcU223GB2zrTcIIej7jr5v8bZDYCmzFO9GrHDEImK73SIFjM2WIdJo\\n\",\n       \"bdBKE8UpSS4IdqeM6YaGavuK8vABQimCiQlBo4eWD86f8a0n34DIUM4Kyizi4uwVzgq61NA7Dy6i\\n\",\n       \"q3vSvCJNYBOBkQHf7n4nOQRkUESRxEcabVKi0TEO0PU9UhWs64p267j3yGP9hpO9fe4dlnzptc+R\\n\",\n       \"7e8RxZar6zXvv/8By3xGf14RpS2TzLBZrdk0I2OvEBvFUZp/rPP2xqzlvwO+wK63+d8IIfz6937X\\n\",\n       \"Lbd8/Hxyp0ViiUgzgt0pWKIsxdcdVTOCU2wZuXxRka8MyzuBLOsZnN1VFQ4JXe8IkSDNPEIKetOh\\n\",\n       \"shUmHpmqPRTvc3F+TeASrRLuH6Z84bN/kUkxQQZDbFK++a1rPv9wyvxwwYXdsrd/gpQCN1ZM8wc0\\n\",\n       \"7bsY5RiGns11x+XsFX/hx36Oo8Ofp21qvvLbv8ZlX7F/0CC6CZfDisP5kjSOWdc9UR8T6RgXHALw\\n\",\n       \"kaTfdMgAcZpSJBOq7ZoQLJPcMDrFxEiILLFqWZQRm9UFQkUYbShLydD3SClpmhqlBd5bhFAEtzOE\\n\",\n       \"1q5jvXqF9zMODpaEsDs4Dd5igkcKQRZnuCjgvcWHiCyd0rYNyXyBKEtsvSFSMf/st3+L9foCgefB\\n\",\n       
\"/SWjdLz1wVPcWHJwoBgRJBk0a0WcefK8IzJgHBwtJ6jBE+yACIE4kXgMclSIrmEyj4n1jMtNy3gW\\n\",\n       \"sw0V225FMcScXZxzkD6gHh3t5hVFHrNYHDGd3OXyckUnvsZR+jp91bA6FTRXklikpEaRlXPgzY9z\\n\",\n       \"5v494B+HEP5VIYQGPt7V45Zbvk8+OZ15PTKbTBkGgZEtkckwxRQte5zzFHZkS8z1ao31DXuHChki\\n\",\n       \"MIG+t+TaMjqFwzMqT1EaBllxsX4LM5xwZ+8BVfUm1+cOKTrefOd3+OyjHyGOYlAjD04WbF90uK5n\\n\",\n       \"L2ie1JqklBzuP+D85W9QyJL97BHz6YRX2zOquufXf+tXMTrjU/de42D/AfeO3+X0+RUuDEzzY17V\\n\",\n       \"G+p2S9M6pBSgDZebFolCxTWrBibeU5qESVEw9BvSLEcpSRxp1OgRviHXLZvrDXUfYW1g0DEyOLx1\\n\",\n       \"1E0NgHUdwxjoRosioLVHigrnR4auYn5yF6MTetNCcNhxYLu5QCeGyOXM50uUTri8WpMmKVmkQCq8\\n\",\n       \"d0Qy5v233iKWEbkWFKUhFoqXlxVWGJS0CBmjXEBqgTaWph2Rgp31nDZICUbHRE7hGfFBQW/wsSLf\\n\",\n       \"P6BMY5pxZHN9RTbTHO4tKA80OsSEvsSbh5xf9Kzfu6ZcOh6+5jk8OOBYFZTZIZICN8R8+e7ILM3J\\n\",\n       \"spw0itBa8d/+/V/5WOasEGIK/AshhJ8HCCFYYP2xDHbLLT8gn1gwH901Zxc1tmkZx5bFvkcoRd9Y\\n\",\n       \"UDFhcJRE+CJhGAeqVUuexqRJiXcVOlfMEsPmvKUbBvq0J9jAQI1y53iVE5t4p5oRnqbZ8o33vsKd\\n\",\n       \"zT26cU2u5xztLRBOUg3X7AXIo30e3P08q+ff5NXpKdl8n9kkpogKtr3FaM3/8av/M/iBz75+l8Xy\\n\",\n       \"iGrbYN2IEXBv/0tIzinMyNg7rtsGoTxhbGmtRasAskQJtfPdlJKqqjg5ucfLF0/Zn0/JE4Uu75Cm\\n\",\n       \"HW8/eYKvFd0AVgSqTU0cxyil6Lpd/jyEgIkVy9mURFouN69I4pz19QVGatJyinKWoW+Rg2Jzcckk\\n\",\n       \"n6BEwAdPlme0TcMkWyCiCDlarp6+5Otvvs1X3/k1pguwoef0wwaXxszKJWVccLx/B+Ele8cHIGCw\\n\",\n       \"I3cPDsijJXlc0DUj5+dPQAryLOd6dY0xMZPpnPlkSpxEhCCpNzXj0BGlGbOlwY0KERRxnFAkOSEE\\n\",\n       \"xjDigiOLc/JZip0PdH1Pmkwo4xKRCMZhJDIR3n+sapZHwLkQ4u8DPwJ8FfibIYTm4xz0llu+Hz65\\n\",\n       \"oiEtuK4rVpcbNAItL/FSIKTAiQhawRgkKpG7Ev4wYrRldCNFNqEoDYXRJMUrxkvBeuUZ+oEodozd\\n\",\n       \"OaPo0FpRzlKGZkuWG56ePuZyfYr3DbUzGCkI3tLZiCjuOdy7y/HyhDeDJkci2kBbO9Iyw40tbW1Z\\n\",\n       \"TnJOX17y/OV7nNx5gyw1hKFFRQOSOVKsSLzEjReUU9isKryUdDWEsOXOcon0Chc8bVMxnc45PX2J\\n\",\n       \"CG4nvzMZWZmhVcz+4pCXlxu22w6VJHRtR6UkZTHBOcvOmhX2lksmWUpdXZIVGcQBjQU3oqXGO4cg\\n\",\n       
\"4MeRwa1ot1fIgyP60TFaTxRplIkQWoFQfPD+Bww+8PDgdbqwZRwh3u8wcYydG5blHpN8wmQyZ3+5\\n\",\n       \"RAqFlIFJPiXKNLNyhnOWD58WGCNJopSu65FaMp9NcYMHLTBKM84GmqamLOYUkxQ3SpzvcM6SJDFp\\n\",\n       \"WtB76NoaNw7Y0ZEkBUlaEMUxSRwxDgNd1zCOI0nysXqAanY2iv9OCOE3hBD/FfALwH/ynRf9/gKd\\n\",\n       \"W7ehW/6o/CC2cZ9YMC8nBUEHGGc0m4aqtowOIh0whUYJSVAwX+YoExi3nrHZ0rmWPH+ADBO6MTBR\\n\",\n       \"nvv7kt9+GuMHTWUbxlYy+i1FkjFbGMQcismcn/nJf4lf/covUW1iosgzyaeYwXJ+vWF6pMgzyfq6\\n\",\n       \"obpqIfYs5lOmxZyRa2aFoSgSnp9/wKwoqFdXVPkpeRIxdoHN9hkqKskmJ4z+BVEUcbgouRIFT16t\\n\",\n       \"maYFyuYIbxB6Zzgxn+6hlKDyjv3lkiwpiFJz09MlxgtJO1heXW4Jg0UbAwSGOEHpBILDGEWsY86v\\n\",\n       \"TtFK0/cjRZrvrN+waO8x+ZQOCONAHCfgBmSwpFHGiIGhR2U5XhqE9aTTJYvBw8mXWOxN2V8ec3V2\\n\",\n       \"Th8c221FHAnaukUZRb/dsNjbI0sKEIHM5MQmoR23PLp7l36wxFGEdRYpFV3X4qylzAqkUhitiZOY\\n\",\n       \"JM1I0hTvLH3ryPKc4D29HRjGEaUEWid474mimDiO8N4x9CMhOJQSoMDzsVaAPgOehRB+4+b5/8Iu\\n\",\n       \"mP9z3AbvW35Y/H4P2e/0l/39fGLBXDiJkoY80Wiv2fQdRkGaRUQxJJMUKQRD31Ove9za4hlRWlNV\\n\",\n       \"H9A2SzITYbUg6A11K8hCwiJfcmU7Rueo/QDKURYj9x494M995kd5/OQxX391ho8MeZ7R25r19YiM\\n\",\n       \"LJdXT3l6fsplf8VUzmjqDdYFAi33D+8xPzjk7Opd9g5Sxr6g70/JsteI51OabUPfr9nbO8EPNVoK\\n\",\n       \"lDB42yOD5WBaIIaCWKfkWc7YbqmrLc717O3tI/yA1hFZkiOlJBBIkphJWVIWW07PL5iYBd57JuUc\\n\",\n       \"k6YI71FS4IaOtukxsSMv5kgG0DHDMLLZXrIXG7K8JDYxaZqyd3hENDugNLqqXgAAIABJREFUbsB2\\n\",\n       \"NYv5dOe4JAJh8BhtyCKNnM+Yl0sUgcOjA8pygrUWrTWXlxdsNhuur6/ZrLdIGYiimNlsijGGrtvl\\n\",\n       \"9kPwdH1304ogYO2IlIJhGJBKkec56STFAzY4xn7XpTKKIrZNjRACYwxRFCGlxDlH37eMY78z/zYG\\n\",\n       \"GSeUN56mf5hP4v8fQginQoinQojPhBDeZmdq/q2PbcBbbvkB+OSkiU2LDz0iRKADkZBIBVGqsENN\\n\",\n       \"LyASiusXLU/O1izKgjSR2MSx3WzZrteU6YwsUmR7CcuDlvY6oOMJOgHVb3GjZLvq0Uby2r3PkZYl\\n\",\n       \"1q1x9Aw+RUUGO/PEY4LzcH5V040XqIlnYCBIyXZ4Tj0E/AB1fcmkzNBKYNMYhMZ7hSdG+Ix2NWKO\\n\",\n       \"NWV0jA4LosGwiJbMDzOKJGLQYISiqSuU79GpIY9nxEawWm85vjPF2hGlA2PvwI9kSUyWxIzjQF1X\\n\",\n       
\"xHFClCRMspzReYTvuT67ZH86RceKsamYz0pErlHDgJDQNQ3FdIbSknw+JZ8tCAiC7dE4PBavHaFt\\n\",\n       \"qFYdfd/jXGBvecDl5RXjODCbTWjbljTNePDgAXFsuHv3mKbpuLq6QsrAOI689947lGWJMTuTCO8t\\n\",\n       \"Su2aXymlKYoCpRQhBLTWH7UmCAi01gghGIeBYewRWiOUpu9avLW40YIQ6DhCCBiHjrapUPEu0GdJ\\n\",\n       \"uqs0/Xj5d4H/QQgRAe8B/+bHPeAtt3w/fHLl/C4mZI6h7aiua5I0YbPu6CsPwhElFY0KJCIn8RGE\\n\",\n       \"gEXhRosYwdmEy65iJRSHY8pkHtPpmihJKG3OZrVGKtAyR0jPJCl4+vQxZ6u3UNKzrTqkiTjZf8ho\\n\",\n       \"HhO6EadGlBNk8ynr1TULjpDE5D5BhCnG7XE3SxnDiDGaJEqQ1hCrkiF1DPM1qkvIKOmTHonm5O6n\\n\",\n       \"MFKzqTecnp4j8hgrBzbrLYvpguAFKs6wHqQSDENPbCRNvUFHMWm06/meZzmbzRopoWkauqYlTmKK\\n\",\n       \"JGYymyBkYOx7cANVdcn+7ACpY4zRu3a6dmBSThgGi4pTvId1XRH7gLA9YXlEOLvm+fMP2VZ2Zyk3\\n\",\n       \"9Fg3ghRUVY31jknf81ZbIaQgyxIImqapGUdLnme0bXvTPE1TliVZliGEYDabfHRgG0UxbdvuPEmv\\n\",\n       \"r6mbmuADeVHu0jAhsNluyLKc4zvHRFm2W1TsSJqmmGAQQDt0aGVQIXD6/Bl1XZOm6cc6b0MI3wB+\\n\",\n       \"4mMd5JZb/gh8cuYUE0UfT0nihhdPV8TakAjJer1FKcW0zBACdKI4OlnQD47KbtkrU6yGetuhdUQ/\\n\",\n       \"9FxeDXghySe7isokiqhWHSLSlJFnkd7hg+ffQvmSzaonySXrq2sev3jJp+MHBKeI0wX72QE6naOV\\n\",\n       \"wmaQmCkaRVRqZosDpNY01YbSRPjRs73eUBQp6TSiTDXDmNB3HYGAkBJCYL1a7W79hSDPU4wxeB+R\\n\",\n       \"FXsE70mimA8+eJ+9xYzBWqrNNVrMMEpircf7QJJq3NjjrUNLSZZNSBNDCJ6m2rJdnXJnb5+h2yKD\\n\",\n       \"RxDR91umkxnee4pyikQQlCbLM1ScEkTCcq6xtiWez/EhIRRHPL/8Oh9+8DYHBwc0TUuSpjtFSlRC\\n\",\n       \"BFmcUDcViYkRGKSSHB0d0Q093nkmkwlVVX20M6/rXbql6zp0ZGibljzLODi8Q7WtkDqQpDFnZ2c0\\n\",\n       \"XU2S5SAEry7Omc8dZVnQVBWnp6coE7G3t0ee7XLraZpgjN4tEIkhTmZ4/7EaOt9yy59YPjlpoteM\\n\",\n       \"nWVSBr7wxWO++ZvXLBeG6MQgBgitZbpYomJBc9VSVS3SejJv8EWAkFCtepxXjMbRdYG95YRJNucL\\n\",\n       \"91/jL7zxIxgdMQbHdJaTFYpVHfiR+3+JMjdEIiXRmuViyZce/DSpUZRlgbcwDD1j37EdOrDgvWMc\\n\",\n       \"WqQVZHmMiWPOXp3jhdhJ4YQgBM/qJnCP40g39CymM7q+x0uBDND1A0JKTJTj+xHvKy6uLohis9tR\\n\",\n       \"BsMwdPgAUkUoBKJ3aKmJtEFqxWy+JEtjlNoFscv6GUIE6rqmyDPKNCaMA8Eroiii7Rqq1RWzxR4E\\n\",\n       
\"jxKBkCWE4ph4mRHhIS6QQjCOF+zvL5B8mkAgBFAqQimFkz3aGuTUMInmJNpQ1zVSQZ6kaG3w3tP2\\n\",\n       \"PQdHR8ymM9arFW3X0jYNp69e4b1nuVygIkVR5sxmM4ax5+LynEk7wboBqQKb7Zq22WCU4oV3hODJ\\n\",\n       \"spjJfEaSaLquRmtF17YIIZjOFuRZyjCMNM34SU3pW275RPm+grkQQgG/ye4k/68KIRbALwIPgA+B\\n\",\n       \"vx5CWN1c+x8BfwNwwL8XQvg/v9tnnj4/JSkduJyjoynn97YEJDayHGYZSmuurjckaYrsepLeIdOI\\n\",\n       \"bqwxIiLNot0OeNshMku6N+WLd36Kh/deJ89ykjRmudhDCokgYN3IPGm4V2SMzoMKTIspxmiUVvRd\\n\",\n       \"RxSlRJFmtI662mL6nZWa9W4XoLuGOI4Z2p0DfJ5nRNrQNA1JktD3PY+fPaUoCmazGU6A0IokTolM\\n\",\n       \"TD7ZyYzGvscJy9jXuL7GOY8Umm11RZ4XWA/TIuN6swHAO49SgSTNkFHC+fkFgwuUeYq1I5O4wDmL\\n\",\n       \"syNDL8ijhKA9bd+ilcEkKUoqotiQ5zNENsFve8RiD0kEAsDxwTvvcb2uqKoNAcd8vkeRpzjnGK3k\\n\",\n       \"5dUlJyohzfTOws4Yuq6jLEvOzs85OztjsphxePcOIgSMMaR5iptNidOEEAJpmpJmMS+fP8eYCG0U\\n\",\n       \"Sgr29hYIYLSWse85PrqDd4E03aWK1qsVl5fnDMNAmqSYKCKMFhNFAKw3G5zzH7c08ZZb/sTy/e7M\\n\",\n       \"/ybwu0B58/wXgP8rhPB3hRB/6+b5LwghPg/8a8DngbvA/31z8v8H9DS5yphmESYzSBfx5T/3ab75\\n\",\n       \"7feJrWLQA/eXE6Q0uLaBhaCYFFifkuQF+5M5B3uHoKBuO0bbY7Ti+PgBUaaw9DS9RW12u+qymCOF\\n\",\n       \"YL5Y0LUR9XqFQODdgBeOLCmRaYaUOxmQVoLl3h7r1TWTyZTLyyuePnvGwcE+683mow6GAXZ53CTF\\n\",\n       \"eY+JIk5O7pKmKX0/cPrqjHv3HjCbLHHWY51jGDp63+MdrDdrYhnIsgJjIpIIvB3QRjGOA1prQj8g\\n\",\n       \"pWIcPUU2R+mEdvC8OnvJJi24e5gSXI2Rjmk+Z311wfwoo2rWWKPIpwV22C0+WhuivSVjMOjlQ2jX\\n\",\n       \"kEYQJEIKDu/eJyjJi+cfcH15yTvvvs98PiEyMdO8IC4Krtan6MqQ58WuJsA62qYhS1O+9MUvMjpL\\n\",\n       \"33Y8Oz0jyzKyPGXoB7RW5FmOHUf6qqHtO5x3hHqkrRukVvR9T5QkaKm4d3yX0Tq88yACy71DUJK6\\n\",\n       \"rhidYzKbk98EcmkUOkkQUmAi80f+Mtxyy59m/tBgLoQ4Af4K8J8C//7Ny38N+Lmbx/8A+GV2Af1f\\n\",\n       \"Af7HEMIIfCiEeBf4SeAPNCLSkSM3C4wZeXXZsr835Xh+yNOnL5AyRcklD/dTgotJ8gKpUoxJiOMc\\n\",\n       \"KSBLExbLfUZruTg/53J1hvcjXXOjyxSCvqkRUjJJMtI8J4kURmUksWEYRrquwwXJYB0hQNeNWGfJ\\n\",\n       \"85J2W7FabxiGgclkxny+ICtyiixjW1d457B2ZPSBbhh2O848w9qOYWjZbhuKYsJssk+apmzWK5QU\\n\",\n       
\"VNuKartmbAfSbMo8T4mUpm073OiJpMTZQB9GggsI4YhMYDqfkmYL3n7/A4be8elPvcb11SWr65r9\\n\",\n       \"Ymet5+xInuV0o0OJgBtHvB9R2hClMVFkEHGJzA6gu2Tz7Ixock1y+ACCZnHygNnRMY+ffZtxcGhT\\n\",\n       \"k6QZq+srvHc0Zy8QUhBHKbP5HOctWZJydvqcKErIsgylFHleEBvJ2Lc8OT9FCEFRFDTVFqUkl1eX\\n\",\n       \"u/8vm6KFxHoHg2cyW5BlGVLKmwWx36VumgYhFVJJ7uwfEicp5XQCAvq+R0pDFGWIxMCto84tf0b5\\n\",\n       \"fnbm/yXwHwKT73jtMIRwdvP4DDi8eXzMPx+4n7Hbof8BqnbLrJa0Q0+9rgg2MM3nzLKeNx58mcXy\\n\",\n       \"mMLkaBPI0gLnHYP1RLFgGEYEAmsbvLMURYyODhmGgaqqiKKIpm2RwROs42i6wEhFLBVJkaFkhJQ9\\n\",\n       \"WmuUUFR1Q13XlIvZRyL9YehRSjGfz7m6ukbJQLfZUHct4UbDP3YDWkukUNTbCk+g7Sqc9YBEaklR\\n\",\n       \"lECg67qbQ0GBVBJrHWWeUbc1RAn7ywlNtUFqjx17MhXhlSQSEi0VUu7SGnXd0tQ9r16d0/ctyzKj\\n\",\n       \"7TqyuMA6MErSdy2zyQSjY6wTxEaSpgV5URD2DnEvniOzKb/97pt85t5DksVdRGQI7HL1r9/7HMbv\\n\",\n       \"cuJZYpjlE642aw4P74KA1fqKNElompq+H8nznNVqzfX1isPDA373d9/k4uoVh4f7LOZLDvYPePud\\n\",\n       \"t/Desre3B+za+VbVhmk5JU1yRudo2w6lFEop6rreadDTFGUMWu4WB7yjbxuGrkMIQZZl9KLHjz10\\n\",\n       \"Cik+Pp3598sPS+v+w7R6+2EWMv1JtqD7Yf6dvyep/WHwvYp9flif8z2DuRDiXwZehRC+JoT4S9/t\\n\",\n       \"mhBCEOJ7inu/689iPSUIT191jG3LZTWw98ZD3vjUfU72HwCewe4KSJwfgYDSisjEpGmGMQbrLATB\\n\",\n       \"Ym9GbCK6vmVbVVRVRdu2JElC0zScXrziUAlUbKDTWDd+JJNbbVcM48j1+optvaEsS4LddRqsqhrr\\n\",\n       \"LH3f4pxnCBbLrn/2pChQQhBFGu/BOkccJbR9RpTEXFysmE53OXkfPEIIoigiigybjcWYmL7fkErD\\n\",\n       \"wWLJdnNNHhdkuWazOmea3mfd1hgdE8c5ZZKw3taUeYmShrZtUSpQZDFFGpEWBU21QsWCSO8mtA8W\\n\",\n       \"Sbgp1hnQOsaPCn14QnN2yvbqGndyD8JICAl4h1eC6cFdhm9/k65v6HrHvXsPme3t0dTVTk54BV3X\\n\",\n       \"M5nOmRRTtFEcH9/FWov3ji996UtsNteIINk72MO6EaVjnn74gjQt+OxnPgNSsF6vubpaM19o5rPZ\\n\",\n       \"TTfIBufcR8VCUkmG0aPSlCTftQno2oY0TRmGge22IoojlBLYMPJdMnq33PJngj9sZ/4Xgb8mhPgr\\n\",\n       \"QAJMhBD/PXAmhDi6qYi7A7y6uf45cO873n9y89of4K03zzFhTRAje4cZh4tDxGg4PDxA6UDT9Bht\\n\",\n       \"doHIO+xo0VGMiTTGmN0XHkGapeAtVTMwjiPL+YK+78myBKM01juuVyvatmU+n6O1IopipJREUcRq\\n\",\n       
\"syaWO2VI0zSEEDg7PUUbQ2DXP0VKuXuf1MTBgYDUaIQyNzt4g8ChleTk8JiqqSgmJUcH93DecnV5\\n\",\n       \"QVEU9MOAtRYTJfRdQ55NWOQxbd/grcUav6uwlJp+6AnOIyO5e4/WbDdXrNdr2n6g71tSLfnSawvi\\n\",\n       \"2NA3G1w/sh1G9mYl69WK+XROW23ZPzpCCo2e3AUjsFdX2LGhbRs22zVHbkAI8M4ihWDv5B6Hxyf0\\n\",\n       \"T97j4mLNs2dP2JstOTw85Fu/8y1UgLHrmR3dwclA27bk+e7MwTlJluXkeYJzjuPjE/q+Z1LOmS2W\\n\",\n       \"fPvtN6n7lk89/BTHR/fIsyl931JVFXmef1TpGUUR3ns2my3L/X3qbU0Sx6zXG7x3GBPTtjsrvV/6\\n\",\n       \"lV/l//nK14kis8ux33LLn0G+ZzAPIfxt4G8DCCF+DvgPQgj/hhDi7wI/D/znN//+bzdv+d+BfyiE\\n\",\n       \"+C/YpVdeB77y3T774NOCqU9Rg6QsZ+h8RplkDLZD9AJjFElyU8JNwNyUtjs3ArvS8DhOsdZSVRtM\\n\",\n       \"HO1y4OOAHQfiOKYsS6LKoPLAMI40TY21ljzPGceRKIpYr3cHmmVZ4vG7cedzttstZbE7mJRS3BSj\\n\",\n       \"+F2pfQj0/YjWu0VFKUldt4zjwPtPPqRtGtLpjKZtcN7dHKTO6LuWy8tLZtMJx5/6NCd3lmRJSpLG\\n\",\n       \"vPzwd6hfvSA4h1QprbVExtD2PUkWI6RiGBu00URYlEw4mOeYKKJvamy9Zrm/oN62yChDS0XQGXmZ\\n\",\n       \"MdjAfHlESA2SfUxZc/r0CT/+4z/N8+fvcv3qJcsHS6RWdOfPSA4eUs73WH/ztyjLKVkWc359wenV\\n\",\n       \"OWkWk8S7BXWwu6KirmkZXvRoLXnw4DVWqxV932FMxDe+8Q329/eZLxe89vAhi1lJ2zYs5nOk3O2+\\n\",\n       \"Q3AkSYLWmjiJGMZh12vFyhvJpkdLqKoNUsLV1Zo4jhnHESklP/vTP8mPfunznJ29ZBwtf++/+Qc/\\n\",\n       \"hK/GLbf86eIH1Zn/XsrkPwP+kRDi3+JGmggQQvhdIcQ/Yqd8scC/Hf4/EmzeKlRuGHxHPXoOohyh\\n\",\n       \"5U0Xv51Vmve74CmFYDopyPN8VyEZxx/toqWUZFm2U0aEQD8OEAJxkhBpw3ldk0cxeZrupIKw20Ha\\n\",\n       \"AXxgMimJ4xjnPGWckqcpnkBZ3qEsJzvHHimYz+eMY0/f9ygh6PuR69UVfdviBfQ9JIkiSEmcpwTv\\n\",\n       \"2Gw2pGnK1eUV42Cpmi1CeyIjOVjO6UcYfMf92T6f+/M/x+rijGfvvk19/ZyhrklMhI4MwQswMVGc\\n\",\n       \"cxhHCAl9X7M3yzFCoGRMMpvTjxE626OcP0SlCbEC27e7Hbp1lMkE7wXj9orl/gEvnz+n7zrGvgM8\\n\",\n       \"CEH94pL44AF5Pufw8ICqasiykg/e/xDrR47v3CMy6saLtEYaQ5oWLOZLhsGy3W4wRrFcnvDkyfvs\\n\",\n       \"7+8OgN9+802qZsOnHj7ijdc/w7vvvcu5e4XREWmaMo4O7wNRrLH9iJUO7xxVtWVSTNCR2Uk4pSBJ\\n\",\n       \"NRcX54QQKMuSx48/wLqdlHI2m/0Rvga33PKnn+87mIcQfgX4lZvHV+yaDH236/4O8Hf+sM+bRkv6\\n\",\n       
\"2uEAFadoJdDakKU5xmjyPP/odjsEdxPAa4wxCCEoyxJrHVIqIKEbetqhBx92QX8c2VYbFpMpzo6k\\n\",\n       \"ecZiucR7z9XFKybFPrPJDJNESKVp+w7b7UrRFQJnR1ary5vbfsX5+cWuWCUyNFXNarXaybOV5MWL\\n\",\n       \"F2TZhNUmMJ/PEUIhgG11hZL7OOd5/vQdeu8xUcqdgwNW6zX3Hz2kLGbEScLV9RnXFy/4whe+iDc/\\n\",\n       \"xre+/lVcv2G0Oxmj84Khd+TzGC0U5WxCpiS1dUyLknxxwMHRPbTSPHjwgOlsTr3ZkhU5GQ7bbAlK\\n\",\n       \"I6Vn3DY8u7hkbLe8OD3ns1+wBCTV2WPmJzMEAq1GPv3oU1xcX6CV4qd+/Mf55X/ya4AnzSaI4HDj\\n\",\n       \"SNtb7t69Q1VVlOUUHwbmiz2ePXvO9eqag4NjlNKkWUpWJHzrrTe5c3KXOEq4eHVOkgV0pPDB8vTZ\\n\",\n       \"c6pqy/2Hj9ibL+m6Yafy8Q6ld3cDhN87A3BY70hGy/0HjzDaEN8sxB8XQog3gP/pO156DfiPQwj/\\n\",\n       \"9cc26C23fJ98YhWgn/2xL/PWm29y8eqcxSQCJZnNZ8wmC9IswVqLUgJjdk2VtNZovft1x3GnoGi7\\n\",\n       \"jtm8oGs6ijxns92gI03fddi2ZX9vgQ/Qti2r7RVaBiwCpQ13Dg9JspwgoCinXJyfESJDnuc473He\\n\",\n       \"c319zTiOhGBxbrfTbvqOqqmJpaYd+p1+fOjxUY80MVVVfXS3sK22SKE5v3jF4/c/ZLaccfd+wWK5\\n\",\n       \"5Gh/ycHRCVEUoYBuLFFBcvrsW8wPHnF4chchH9FWWz547y2azlJ3DeNV2Jldi5x8f4/ZfMFy74DP\\n\",\n       \"fvZz9P1AtVmhjMHEMXE+Mp0vCPMluh8gbABF0Boxej58+oTV9TUheMS44eLsBc1kwuHCsjef8Pid\\n\",\n       \"bxJFu51zmsT8zM/8LF/72lcYBsukSImMYbNZEUWa/YN9zi+es16vefHiKRcXl6TJhNPTF4QQmM2n\\n\",\n       \"rFZX7O0t+cVf/EU+97nPUzcNg+0Zx5Y0y5mUe9w5ekQ2KTi/uGB/b07wAR8Ce3tLhJC7Xj7dwP2D\\n\",\n       \"uwitUEpjlMK5XXDPbrTnHwchhG8DPwogds3knwP/68c24C23/AB8YsHciIT7Jw8YnCWJCu6dnLB/\\n\",\n       \"cIciT0jilL5vCSEghGBvbw/nHN77jwwIhmG4yV/vjJg3mw3jMCIMREoh0l2Ofb3dcnx8BwKM3vH2\\n\",\n       \"229xeOcuNjgOjg6oqi2Xl6fYocOGQBwnrC6vGYaeKIro+xHnRoZhIEkShJLMZjO89WyaijzJef48\\n\",\n       \"UJQTjImomhaA7XbL9fWaatvw/pP3efzhYxbbAz73+S+TJCnWw2Z1wWKxx3J/D+8HkgcPOX33q1w8\\n\",\n       \"+4B0NkdEBclkQpxNcPYJzllC2MkcizwjSnIcAqUUTdPS1B15UbKta6wP3Dm5C8UEGRSYKYQFQXii\\n\",\n       \"ZMrZ6Vd58t5zQPDhh08ZrefZ0ydMZiXj+hWbas1kMmfbbLHWsq0H7hwfUNdfpG0rlvNdOmN5cMjT\\n\",\n       \"ly949PAR773f0zQN5+cXNE1HZFK+9vWv8tqj13n06FNcXp4Tgufk5GR3N5PmDG2HHQJluWQyKdHa\\n\",\n       
\"kJiIh/cf3LS7dUxn0xupqWMxWVLO2QV5AXmaU7c7BdM0L1B/fNLEvwy8F0J4+sc14C23fC8+sWA+\\n\",\n       \"9JZ5sc/DezH7asLDk9fIEoUfPaYwBL8rygnwkVytKIqbsvmBEHrKsmAcBpqm2h16mgg3DvRdx2df\\n\",\n       \"f51Xr15x994Jp6dnfOFzXyBNYg4Pj3j/6ZNd06e25dnzJwxDR6QSnp8+486dOyD4qMdKmiaMo8SY\\n\",\n       \"Xdm+8IFuUyGjiOVswcuXp+R5SZblWLuT1HnnGUfLbDbDxBFf/tKX2V8cMPQ9680FVbXFaM1ZXe36\\n\",\n       \"cd9IY/PZknXdc3j8iOXxXZyy9K1EqIIQHAJFuLljODm+R5rkfOb1N+i6gb4bQQiSJMVozdGdI7TQ\\n\",\n       \"BASCGEQA4RFBErIpv/xrv8xgBwzw9d/+TZ6+eJc3PvNFZtMpV5sNv/rr/4Sf+6mfZT5fsFwu+Wdf\\n\",\n       \"+adY+yH3773GaqW5OD/n3r37HN45Js4zvva1bzCZlMyme3St5+zsbcaxJ00zlNKcn58jiIgiRZaV\\n\",\n       \"PLz3iP3DO4zBgXUYs7sDGMaeJElYr654/ORD9vYPWB68wfj8JderC9brNQd7B3jn0LFBxQGjDZGJ\\n\",\n       \"qKqaovxes+6Hyr8O/MM/ttFuueUP4RML5i8uXjJJU3SSUCQT8jhitCPeOfpm53XprWX0DmstIgSU\\n\",\n       \"UmitsdbuJIkBxmHEjY6x6/HjTgGyf7zEB0GaZ1SbDYvFHBNr1tWG7XrF/mRKU7d06Rbbj2hpEAKK\\n\",\n       \"YkLTdDjnPiryURrqeiCKIiaTKVfWYaKYvh9o6pr5fI6UgaZtGQPIsJNLbuqGut5SypIiS3n48IS2\\n\",\n       \"bZlOSuwYsDbgrEVpjQi7tgNRnjGZ3yEuFwgdU6Rz6vYKEWms9dRNzTCOxHFE19dE0e4g2HvP6EZM\\n\",\n       \"pLF2pGsbgoNqdU4WxQjtAI24yScnScZP/tS/yNOn7xLC7m+7c3SfPM8Yx5FyUvDnv/zjqDjG+YH3\\n\",\n       \"3n+bSTmnLAuc80zKCRLPwcEd3nzrTQ4PDpjPp1RVhdaag4M9NpsNIXju33/A4eEhxkTM5wvm8xlX\\n\",\n       \"V1e7gi0l6Zue6XSCkprHTz5EysDh4RH1tmGzqfjMG18gjJ4sT7hcQd20mPX1rpgoaJq+pm07vPOc\\n\",\n       \"nT3nnXevP/a5e9PL/K8Cf+u7/fw7Czt+v1PMLbf8IPyeqcv3wycWzN/+9jfBC+7dOeTktUPW22uk\\n\",\n       \"UBwcHpCk8UfXlXEMN6oW5xwaj/SedrNh6DvGwZKkGU0IHB8fs1gsSJKED588Js0zlosl4zjStu2N\\n\",\n       \"iYLeBeu25eLykmEYEEJ8lMbp+548zxFil7rZbDZkac44Oq6uzkiShDRN6Loe6yyR2vUUaeqGOM0p\\n\",\n       \"JxlSG/bzhLLLuLy4wrmd0iZJEtp2p6ku8pw4NozDQB12Pp5KGu698WUcBhnHeLFbuJwXvDxd07Yd\\n\",\n       \"m82WNE05OzslyzKMiYDAaEeUhDgyPHjwgGa9ZrG/T4hTBAHXrpBpgUAihOT47glDv0EIwfHxMWma\\n\",\n       \"UZYTNpsNruk4WOxhreXs4gznHJHJePnyJcvlEuk922bg2++/xXSWE8SuU2GapkRRxDAM/MRP/OSN\\n\",\n       
\"5LMAwP2/7Z3Zj2T3Vcc/v7tW3dqrq7urt9k9nvEksRN7Ria2WSIICUJxhJBYJAiLeEICCSlA8g+A\\n\",\n       \"eIEnXoAgCCgPBIjCIhRHSUQestnj8T6x25merbuqu6tru/v24+HWOBNrxp7p7ukyk/uRSn37Vtf5\\n\",\n       \"favu6VO/+1vOSVJmZmYYj8dvXVtNU6nVqgxHQ5BQLlf4waU3uHptjWatjqpm1Y/eeO1FKqUaVaPI\\n\",\n       \"wN1ird9jdXWVSqXM0tJSVii6WsU0ixyuz3MAfBx4Tkq5dasn72W1o5wfL97uS3F8+7KIUxwzB88P\\n\",\n       \"Cd2EkT/g8rrggaMnUDUFTc0KHgBYho7j+QS+j23bk8nIFM/zME2TUqXMcDBCUxRM08DzXDqdTtY7\\n\",\n       \"ktmbX19fR1UUkjgmSiN0zcQPfLzAzcbFw2xduhIq+J5LpWxQr2XZ91qtJkEQ0els0GrN0el22BkM\\n\",\n       \"GQ2G1Go1fM8nmJRA6212iaVAVRWazSaOk62+yXZGplnKgEqFRE6WAY7HjC2L2WPH6ff7zLdnUY0i\\n\",\n       \"BbPA2uUrtGZmiMKQ3nBAEPqkabamXVVVFDXLStjr92g2GhiaQZpAd3ObRrPF0HFozNZBMZBpgmJO\\n\",\n       \"kiMCnmczHG9SNAuoujZ5ny1c18V1fUAyGF6jXq9nm52CkCROKBaL+L4PEp564km+9dy38dyISz+4\\n\",\n       \"ysLCIsvLy3S7XaRM8XyfIImpNxpEYdb7H49Hk9wtJYbDIY7jYDsOxaKJqqoMB0Pac4u49piR41Aq\\n\",\n       \"N1ldvUTgj9jRbWqNKqVKGdsb8/DDH8Qwsk1lqqoRxwlJKigUD2Sr+a8BXziIhnJy7pSpBfOlY3P4\\n\",\n       \"ToqmlnBdhzBIOXnsBJ7r4LnZEIIEOusbuFEAKZP6kWBZFuXyHIaIn/ryAAAQTUlEQVSuU65U0DWd\\n\",\n       \"8XiE49j4fsCZM+9nbI+JPJ9Ll9YoFEyEqpJEEbquE4URqqKSpNFkvbRLr9ejUCyxsrKM7/v4foTr\\n\",\n       \"+Kz21qg1G5QLNaIoQjcMqrUGpaKF7TrMzc1hlUvEcYzr2lzvbJJKiRTQmmvhuh5RFDEaDLEsC9/3\\n\",\n       \"uXr5KvVqBW80oF6vI1DZ2e5hlbI85cNxnygMGQ6HDIYDNrodwiDCskpomoppGszPzeH5HiN7QKFQ\\n\",\n       \"QDcMSlaJjY1rFM0iw3gHoVVIhY8iVNIYUEARUCgUePPiRY4cPU5rdhaEpLvZpV6ro2lZAYuZ1jy+\\n\",\n       \"Z7O1vUWaSsIoJApC0hSWVlY4/+xzNGdmqJebPProWfo7A8rlCorQuXZtHV3XsWoV7PH4ra33QmS3\\n\",\n       \"jWEYvnX76Hs+l9cuE8Uxs60WjuNjmCZRLJip12lU62z3NpiprFCrFVD1mPZCm9HAwfFHFMwixWKB\\n\",\n       \"UqlMoVjGtu176rdCiBLZ5Ofv3dOGcnLukqkF80pjHtWIsCIdmUKxoGGPxygCTNNgY+Mavp/lSGk0\\n\",\n       \"myiKwLJKQLZb0DB0FKnx/R+8TqVQpl6vIaXk5MlTvPzSy5CGGKUiiBTP9yhYRWIhEanEKhWQEnTd\\n\",\n       \"IgxjZmdn8TyXwIvpbW3TG/RpNRsULRPDNImDgEsb67TqdQ4fXpnUyAxoV+aoVMqoqqBYK2PrGkIq\\n\",\n       
\"9Ho95mdmiKMEvZQNDzWrVcaOjUAhSGLOv3A+u5uolFHVV+h0NihUSoDEsX0Mq8irL13kjUurPP/d\\n\",\n       \"7+GHPsVSgYKqE0QRcaIQRgGL7WV6wz6WVcAPFSTgBQ7zK8skSoDqxUjNAD8CvYSUkIYjCkWLMPTY\\n\",\n       \"3lzHKlZQBMQli1qtzE6vR63epFKpTm7zBK1mE7NYxB6NcG0PpZXt3kzigAsXzlOtNfCDrBTc4tIc\\n\",\n       \"pmkgZTZm7DjO5M4ky+1yo2RcqZQVqMiGSvS3vlhnZmYYDseUrTpxGmPbPp3NZxFxyvr6Omfe9wHc\\n\",\n       \"0YjWbJvR0GZrawtVVVnvXOfw4SP31G+llA7QuqeN5OTsAjGNDGhCCPnrv3WOOFAwU4tapY4hVI4c\\n\",\n       \"OUKtVidNJGHkoSoG1Vo5mwxNU6IopNmsE4Yho5HNwvwsb169zqHFRUql4mQseQuhqoydIWkQYJUq\\n\",\n       \"BH6QFaFQNQajHQwj2/ofBgnVWok0kdl4tOMxM9OiWquyfv06J06coN/rkQYuaAVG9gjT1FEUBd8P\\n\",\n       \"aLWaWEUTz/MxTRPTsIiiGMdxuHR5jUYjS7RVq2WFmp2xS9Es4EYeg8GQi6+/ThBFPHD0JL7jc2hl\\n\",\n       \"hSQRhNLHMAq88eqrXLl+BUjRFIVCvYIqwSwUUUVKpdygUW9hWAWKps7h5RUura1x9rGzLLTnGY6z\\n\",\n       \"sflCscjqm5eYaTZYOn2WqL9Gb2MTNwhIZIKhF2m3l3DtEY5nUy5ZmJbFdrdLo9ZAKApB6ANwfX2d\\n\",\n       \"2XYbkSRZeb96C103GI16uI6DlJI0TWk0GkjIvqAVhY2NDRzHfmvuoFyuUKtVcV3vrQRbpVKJIPCp\\n\",\n       \"VLJ6od3ONjKN8UOfcq2GORmyGjsuIo7RdJPxeEynu87Zs2fZ6Q/Z2enxsV/+XaSUU5l1FELI/cq2\\n\",\n       \"l2dNvHvu96yJcRzf1renFsyf+PAySlmnpJY4MnMEqQgeOnmaWrVKYTKGmuVfMTANC9e1ieIQyyqw\\n\",\n       \"szMgjmMajSalkkUSRkRpjKpmvWBF0ehtbRGmAZVyGU3TaDab9HZ2svqecYTvRVzd6PK+UycRQuC6\\n\",\n       \"Lv3hiAceOMlmr4euawgE1VoJXdMomgU2utvs9Dcpl0tUihalcpkkjvGikG63y3BnxPHjR1hcXKTT\\n\",\n       \"6XL58hqVSiXTaZm8/NJFHjv7CK5jkwpBzaowdl2e+cY3WVhYYLjTx9R00BWGg/FkJ2uY5Q23ijQq\\n\",\n       \"dWzbxvbGRJHCznaH2E9RNEGpVkcVKnPNeY6dPEx7rsX1a1ewSmVURScII0bjHk9/8pewhzaKIjBV\\n\",\n       \"hUSCF0SYpskLL1/ksUfeTxC6jEYjFEXFdWxKeoEwjpiZbzMej5Fpwvxim+2tbdoLi+hatkN17dIl\\n\",\n       \"PNvBdmy2elscOnSEGxkgVlff5PTpU5P0CUU2NjbY2Njg0Ucfpd/vZ/lywoTvnn+eB08cQdN0zFIx\\n\",\n       \"e860SOIE1x4TBB6u5/HG62u05mY4d+4cxaLF6xe/j2oqVCs1zn7kk+/pYH5j/8S7cSf/m3dq607b\\n\",\n       \"O2hdd8Ld2Hq3v7sbW+92HW+kG7kT3i2Y36mtdwrmUxtm6Yx8KkmAXjEpFgosrqxQq9ao12okMpsw\\n\",\n       
\"NAwDVdVw3DGappKkWR7whYV5rl/fwHWzWpB6sUBZs+j3tomiBEVRaM5WUYQOStY7XFtbo9lsUq1W\\n\",\n       \"s4k4xeDl197kkfedIUli5ubmiJIU13WoVirU6w3SNCGIPApGdmfQmq2zuDRLr9djq9OlaFkYZoGR\\n\",\n       \"69Cen0dTFBYWFhkOhzSaNZrND9LpdEiSBNt2SCe92UZzBtu22e5vUSqV6G4OePrjH8H3fQqWheeF\\n\",\n       \"jIZOVtezXsUej+lv73DixFFc1yNMYjwvRIgH6A9GjMdjqtUqg+EYL7K5fOUS29tbWeKwYonZ2SZL\\n\",\n       \"y4ewqgViL6A4ySgZeC6abrAwM4OMY5752jf40MMPEsdxVj/U89FUHS8MWFxZ5vKVKyDANExs28EL\\n\",\n       \"PF568QLD4ZiHzpzGtm38MKHWbFGp1mkvLOB6DvZ4xJkzD7G1leVT2dxc5djRk/hewuW1q7Rm6nQ6\\n\",\n       \"HYLQ55mv/y8njq5gj12UQZ9XXn2NuYUFDFNH07J0woZV4tyHH6fVbNDv91ldXWV5eQWhSpLkvV/Q\\n\",\n       \"eVqBLre1v7buJpi/G/vRqZ5aMFeVENcXzFgpC4eWWWgtULQMUCRKIhCqQhgGFIsF0jRhMBiyON9m\\n\",\n       \"Z9gnTWFubi7LkpjEpK5Lb5xlP1RFQprGCFEiCD2iKKVolTHMYpYbfVJ0IvBDojAk9G0GdpZKtVAo\\n\",\n       \"ZDlhNB1JSCpjqrUqoZ+tnCloBltbXUzTZLY9T384pGgWUVMFTQiqlSqbm13m5udQFEkSZ9WLymUN\\n\",\n       \"xwmYW2ijmwYCQalUojXTwHF9VF0nihKsUgXbGRL4KSoppXKRoqHT9z0KBR3f9wgDD1XTKBRUoiDk\\n\",\n       \"A6dOopsFVE3FHu9g6EVs16WgFzh06DCNRoPu5iZJGHH+Wy9w6vQZlg6fIRluo2s6iq7juR6qIjB0\\n\",\n       \"gziIaLVmJ2lrJVHgougmq2++iSJMlg4fpt/bZGfgoBsmiply4vQh4jjh2PFjrHc7HDp8mI3L6wxG\\n\",\n       \"faLAp92eYzzOvpw0TePkyQfZ2dlmcWmWzc0tLjx/geZMnbE9RqYpa1ev8cgHHkamKYmiU6nUmGvN\\n\",\n       \"E4cenc51Qt+FJMAPPYqWSV3WKFgGWzs7iPdAcYqcnGkwtWBeqht4TsLDDz7MkYUVUpEwGAwAMDQN\\n\",\n       \"FEEcx9mSQUUllZLVtUsUzCz3hqZLhKJmOa8HfVqtVvZ6TUchJU2zhEzlSo0gCJBSEoYhCTqqVCmV\\n\",\n       \"KkipsLJyiLLrkKYSzTBxHAdVNybFETRMM0HTtEl5MoV2u43vB1QqFUghDLJVHp4/5tDKCmPbpmCa\\n\",\n       \"WUEGM8WyClne9GGfU6dOQypJNUGqSBzXBQG6pjE33ybwszXv1YrgytoVZtrzrF+9RhAEHD9+nCDM\\n\",\n       \"inVomkarXmdpaZFr16/gOEMSoFQos7K08lZecMuyGI1GLC0s0tnc4MFTZ5hdWMbtbaGbBaIoQAlT\\n\",\n       \"LKuIYugohk51ps5oOMLzXLa3t4l9h3KjxcJCm972Dooe4acRzXoVRSTYtouqgBemdNcuc3j5KDLV\\n\",\n       \"WD60TLezwcrKUa5fvcLs3Dw/8XgTz/NIZczy8iGee/Y5Hjt3jv/8n//micefZH7hGEXrOaqlGrVq\\n\",\n       
\"i/F4xJFDx9jobtBolOl0xmi6he0FjIYOplWlVqtRa7Wp1aoI3UDbx3HOnJz/T0xtzPzAG835sWKa\\n\",\n       \"Y+bTaDfnx4f31ARoTk5OTs7+kg8w5uTk5NwH5ME8Jycn5z7gwIO5EOJjQoiLQog3hBC3zDq3T+18\\n\",\n       \"TgjRFUK8dNO5phDiGSHE60KIrwgh6jc995mJpotCiI/uo44VIcTXhRCvCCFeFkL8wTS0CCEKQojv\\n\",\n       \"CCEuCCFeFUL82TR0TOyqQojnhRD/MS0N02K//P9W/r0HW7f00V3auqWf7VHfj/jLHuysCSFenNi6\\n\",\n       \"ZW3iu7BVF0J8UQjx2uR9Pr5LOw9O9Nx4DHf9+ctJGa6DeAAqsAocgSyVNnD6HrX1FFlVmJduOvcX\\n\",\n       \"wB9Pjv8E+PPJ8UMTLfpE2yqg7JOONvDI5LgMfB84PSUt1uSnBnwbeHJKOv4I+Gfgy9O6LtN47Kf/\\n\",\n       \"38q/99tH99PP9tNf9mDnEtDcp2v5D8Dv3PQ+a/tgUwE2gJXdvP6ge+bngFUp5ZqUMiKrp/j0vWhI\\n\",\n       \"SvlN4O3JrT9BdhGY/Pzk5Php4AtSykhKuUb2D3dun3R0pJQXJsc28BqwNCUt7uTQIAss/YPWIYRY\\n\",\n       \"Bn4B+Ft+mMjxwD+LKbFv/n8b/94Vt/HRxT3Ye7uf7ezW1m38ZS/s2YYQogY8JaX8HICUMpZSDves\\n\",\n       \"bI/Vqw46mC8BNwu9Njl3UMxLKbuT4y5wI/n14kTLPdUlhDhC1pv6zjS0CCEUIcSFSXtfl1K+MgUd\\n\",\n       \"fwl8Grh5f/NUr8sBMm3/f1fe5qO7tfF2P3t1D5Ju5S+7RQJfFUI8K4TYS9bLo8CWEOLvhRDnhRB/\\n\",\n       \"I4Sw9kHfnqpXHXQwf8+sg5TZfc076dlXrUKIMvCvwB9KKcc3P3dQWqSUqZTyEWAZ+EkhxM8cpA4h\\n\",\n       \"xC8Cm1LK57lND+mgr8sB857WPvHRL5L56K5zCd/Cz356l3re1V/ukieklB8kKy7y+0KIp3ZpRwM+\\n\",\n       \"BPy1lPJDgAP86V6EiR9Wr/qX3do46GB+HVi56fcVfrTnda/pCiHaAEKIBWDzNrqWJ+f2BSGEThbI\\n\",\n       \"Py+l/NI0tQBMbgn/C3j0gHV8GPiEEOISWXGHjwghPn/AGqbJtP3/ttzko/90k4/uiZv87LFdmriV\\n\",\n       \"v/zjHvRsTH5uAf/O7ofsrgHXpJTfm/z+RbLgvhfesXrVnXDQwfxZ4AEhxJHJN9GvAF8+wPa/DHxq\\n\",\n       \"cvwp4Es3nf9VIYQhhDgKPADsabb7BkIIAfwd8KqU8q+mpUUI0bqxSkQIUQR+Dnj+IHVIKT8rpVyR\\n\",\n       \"Uh4lu6X8mpTyNw5Sw5SZtv/fknfw0d3Yup2f3TW38Zff3KUuSwhRmRyXgI8Cu1oJJKXsAFeFECcn\\n\",\n       \"p34WeGU3tm5i79Wr9mNm9y5nbD9ONlu+CnzmHrbzBWAdCMnGKX8baAJfBV4HvgLUb/r7z040XQR+\\n\",\n       \"fh91PEk23neBzKmfBz520FqA9wPnJzpeBD49OX/gn8nE9k/xw9UsU9Ewjcd++f9N/h3c8O/99tH9\\n\",\n       \"9LP99Jddvv7oRNMF4OW9xh7gYeB7wAvAv7GH1SxACdgGKnvRlG/nz8nJybkPyHeA5uTk5NwH5ME8\\n\",\n       
\"Jycn5z4gD+Y5OTk59wF5MM/Jycm5D8iDeU5OTs59QB7Mc3Jycu4D8mCek5OTcx+QB/OcnJyc+4D/\\n\",\n       \"A43ph1xlbAoPAAAAAElFTkSuQmCC\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x12666be50>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"# load input and configure preprocessing\\n\",\n    \"im = caffe.io.load_image('images/cat.jpg')\\n\",\n    \"transformer = caffe.io.Transformer({'data': net_full_conv.blobs['data'].data.shape})\\n\",\n    \"transformer.set_mean('data', np.load('../python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1))\\n\",\n    \"transformer.set_transpose('data', (2,0,1))\\n\",\n    \"transformer.set_channel_swap('data', (2,1,0))\\n\",\n    \"transformer.set_raw_scale('data', 255.0)\\n\",\n    \"# make classification map by forward and print prediction indices at each location\\n\",\n    \"out = net_full_conv.forward_all(data=np.asarray([transformer.preprocess('data', im)]))\\n\",\n    \"print out['prob'][0].argmax(axis=0)\\n\",\n    \"# show net input and confidence map (probability of the top prediction at each location)\\n\",\n    \"plt.subplot(1, 2, 1)\\n\",\n    \"plt.imshow(transformer.deprocess('data', net_full_conv.blobs['data'].data[0]))\\n\",\n    \"plt.subplot(1, 2, 2)\\n\",\n    \"plt.imshow(out['prob'][0,281])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"The classifications include various cats -- 282 = tiger cat, 281 = tabby, 283 = persian -- and foxes and other mammals.\\n\",\n    \"\\n\",\n    \"In this way the fully connected layers can be extracted as dense features across an image (see `net_full_conv.blobs['fc6'].data` for instance), which is perhaps more useful than the classification map itself.\\n\",\n    \"\\n\",\n    \"Note that this model 
isn't totally appropriate for sliding-window detection since it was trained for whole-image classification. Nevertheless it can work just fine. Sliding-window training and finetuning can be done by defining a sliding-window ground truth and loss such that a loss map is made for every location and solving as usual. (This is an exercise for the reader.)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"*A thank you to Rowland Depp for first suggesting this trick.*\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"How to do net surgery and manually change model parameters for custom use.\",\n  \"example_name\": \"Editing model parameters\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 5\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/pycaffe/caffenet.py",
    "content": "from __future__ import print_function\nfrom caffe import layers as L, params as P, to_proto\nfrom caffe.proto import caffe_pb2\n\n# helper function for common structures\n\ndef conv_relu(bottom, ks, nout, stride=1, pad=0, group=1):\n    conv = L.Convolution(bottom, kernel_size=ks, stride=stride,\n                                num_output=nout, pad=pad, group=group)\n    return conv, L.ReLU(conv, in_place=True)\n\ndef fc_relu(bottom, nout):\n    fc = L.InnerProduct(bottom, num_output=nout)\n    return fc, L.ReLU(fc, in_place=True)\n\ndef max_pool(bottom, ks, stride=1):\n    return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)\n\ndef caffenet(lmdb, batch_size=256, include_acc=False):\n    data, label = L.Data(source=lmdb, backend=P.Data.LMDB, batch_size=batch_size, ntop=2,\n        transform_param=dict(crop_size=227, mean_value=[104, 117, 123], mirror=True))\n\n    # the net itself\n    conv1, relu1 = conv_relu(data, 11, 96, stride=4)\n    pool1 = max_pool(relu1, 3, stride=2)\n    norm1 = L.LRN(pool1, local_size=5, alpha=1e-4, beta=0.75)\n    conv2, relu2 = conv_relu(norm1, 5, 256, pad=2, group=2)\n    pool2 = max_pool(relu2, 3, stride=2)\n    norm2 = L.LRN(pool2, local_size=5, alpha=1e-4, beta=0.75)\n    conv3, relu3 = conv_relu(norm2, 3, 384, pad=1)\n    conv4, relu4 = conv_relu(relu3, 3, 384, pad=1, group=2)\n    conv5, relu5 = conv_relu(relu4, 3, 256, pad=1, group=2)\n    pool5 = max_pool(relu5, 3, stride=2)\n    fc6, relu6 = fc_relu(pool5, 4096)\n    drop6 = L.Dropout(relu6, in_place=True)\n    fc7, relu7 = fc_relu(drop6, 4096)\n    drop7 = L.Dropout(relu7, in_place=True)\n    fc8 = L.InnerProduct(drop7, num_output=1000)\n    loss = L.SoftmaxWithLoss(fc8, label)\n\n    if include_acc:\n        acc = L.Accuracy(fc8, label)\n        return to_proto(loss, acc)\n    else:\n        return to_proto(loss)\n\ndef make_net():\n    with open('train.prototxt', 'w') as f:\n        print(caffenet('/path/to/caffe-train-lmdb'), 
file=f)\n\n    with open('test.prototxt', 'w') as f:\n        print(caffenet('/path/to/caffe-val-lmdb', batch_size=50, include_acc=True), file=f)\n\nif __name__ == '__main__':\n    make_net()\n"
  },
  {
    "path": "caffe-fpn/examples/pycaffe/layers/pyloss.py",
    "content": "import caffe\nimport numpy as np\n\n\nclass EuclideanLossLayer(caffe.Layer):\n    \"\"\"\n    Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer\n    to demonstrate the class interface for developing layers in Python.\n    \"\"\"\n\n    def setup(self, bottom, top):\n        # check input pair\n        if len(bottom) != 2:\n            raise Exception(\"Need two inputs to compute distance.\")\n\n    def reshape(self, bottom, top):\n        # check input dimensions match\n        if bottom[0].count != bottom[1].count:\n            raise Exception(\"Inputs must have the same dimension.\")\n        # difference is shape of inputs\n        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)\n        # loss output is scalar\n        top[0].reshape(1)\n\n    def forward(self, bottom, top):\n        self.diff[...] = bottom[0].data - bottom[1].data\n        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.\n\n    def backward(self, top, propagate_down, bottom):\n        for i in range(2):\n            if not propagate_down[i]:\n                continue\n            if i == 0:\n                sign = 1\n            else:\n                sign = -1\n            bottom[i].diff[...] = sign * self.diff / bottom[i].num\n"
  },
  {
    "path": "caffe-fpn/examples/pycaffe/linreg.prototxt",
    "content": "name: 'LinearRegressionExample'\n# define a simple network for linear regression on dummy data\n# that computes the loss by a PythonLayer.\nlayer {\n  type: 'DummyData'\n  name: 'x'\n  top: 'x'\n  dummy_data_param {\n    shape: { dim: 10 dim: 3 dim: 2 }\n    data_filler: { type: 'gaussian' }\n  }\n}\nlayer {\n  type: 'DummyData'\n  name: 'y'\n  top: 'y'\n  dummy_data_param {\n    shape: { dim: 10 dim: 3 dim: 2 }\n    data_filler: { type: 'gaussian' }\n  }\n}\n# include InnerProduct layers for parameters\n# so the net will need backward\nlayer {\n  type: 'InnerProduct'\n  name: 'ipx'\n  top: 'ipx'\n  bottom: 'x'\n  inner_product_param {\n    num_output: 10\n    weight_filler { type: 'xavier' }\n  }\n}\nlayer {\n  type: 'InnerProduct'\n  name: 'ipy'\n  top: 'ipy'\n  bottom: 'y'\n  inner_product_param {\n    num_output: 10\n    weight_filler { type: 'xavier' }\n  }\n}\nlayer {\n  type: 'Python'\n  name: 'loss'\n  top: 'loss'\n  bottom: 'ipx'\n  bottom: 'ipy'\n  python_param {\n    # the module name -- usually the filename -- that needs to be in $PYTHONPATH\n    module: 'pyloss'\n    # the layer name -- the class name in the module\n    layer: 'EuclideanLossLayer'\n  }\n  # set loss weight so Caffe knows this is a loss layer.\n  # since PythonLayer inherits directly from Layer, this isn't automatically\n  # known to Caffe\n  loss_weight: 1\n}\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/convert_mnist_siamese_data.cpp",
    "content": "//\n// This script converts the MNIST dataset to the leveldb format used\n// by caffe to train siamese network.\n// Usage:\n//    convert_mnist_data input_image_file input_label_file output_db_file\n// The MNIST dataset could be downloaded at\n//    http://yann.lecun.com/exdb/mnist/\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"glog/logging.h\"\n#include \"google/protobuf/text_format.h\"\n#include \"stdint.h\"\n\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/format.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#ifdef USE_LEVELDB\n#include \"leveldb/db.h\"\n\nuint32_t swap_endian(uint32_t val) {\n    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0xFF00FF);\n    return (val << 16) | (val >> 16);\n}\n\nvoid read_image(std::ifstream* image_file, std::ifstream* label_file,\n        uint32_t index, uint32_t rows, uint32_t cols,\n        char* pixels, char* label) {\n  image_file->seekg(index * rows * cols + 16);\n  image_file->read(pixels, rows * cols);\n  label_file->seekg(index + 8);\n  label_file->read(label, 1);\n}\n\nvoid convert_dataset(const char* image_filename, const char* label_filename,\n        const char* db_filename) {\n  // Open files\n  std::ifstream image_file(image_filename, std::ios::in | std::ios::binary);\n  std::ifstream label_file(label_filename, std::ios::in | std::ios::binary);\n  CHECK(image_file) << \"Unable to open file \" << image_filename;\n  CHECK(label_file) << \"Unable to open file \" << label_filename;\n  // Read the magic and the meta data\n  uint32_t magic;\n  uint32_t num_items;\n  uint32_t num_labels;\n  uint32_t rows;\n  uint32_t cols;\n\n  image_file.read(reinterpret_cast<char*>(&magic), 4);\n  magic = swap_endian(magic);\n  CHECK_EQ(magic, 2051) << \"Incorrect image file magic.\";\n  label_file.read(reinterpret_cast<char*>(&magic), 4);\n  magic = swap_endian(magic);\n  CHECK_EQ(magic, 2049) << \"Incorrect label file magic.\";\n  
image_file.read(reinterpret_cast<char*>(&num_items), 4);\n  num_items = swap_endian(num_items);\n  label_file.read(reinterpret_cast<char*>(&num_labels), 4);\n  num_labels = swap_endian(num_labels);\n  CHECK_EQ(num_items, num_labels);\n  image_file.read(reinterpret_cast<char*>(&rows), 4);\n  rows = swap_endian(rows);\n  image_file.read(reinterpret_cast<char*>(&cols), 4);\n  cols = swap_endian(cols);\n\n  // Open leveldb\n  leveldb::DB* db;\n  leveldb::Options options;\n  options.create_if_missing = true;\n  options.error_if_exists = true;\n  leveldb::Status status = leveldb::DB::Open(\n      options, db_filename, &db);\n  CHECK(status.ok()) << \"Failed to open leveldb \" << db_filename\n      << \". Is it already existing?\";\n\n  char label_i;\n  char label_j;\n  char* pixels = new char[2 * rows * cols];\n  std::string value;\n\n  caffe::Datum datum;\n  datum.set_channels(2);  // one channel for each image in the pair\n  datum.set_height(rows);\n  datum.set_width(cols);\n  LOG(INFO) << \"A total of \" << num_items << \" items.\";\n  LOG(INFO) << \"Rows: \" << rows << \" Cols: \" << cols;\n  for (int itemid = 0; itemid < num_items; ++itemid) {\n    int i = caffe::caffe_rng_rand() % num_items;  // pick a random  pair\n    int j = caffe::caffe_rng_rand() % num_items;\n    read_image(&image_file, &label_file, i, rows, cols,\n        pixels, &label_i);\n    read_image(&image_file, &label_file, j, rows, cols,\n        pixels + (rows * cols), &label_j);\n    datum.set_data(pixels, 2*rows*cols);\n    if (label_i  == label_j) {\n      datum.set_label(1);\n    } else {\n      datum.set_label(0);\n    }\n    datum.SerializeToString(&value);\n    std::string key_str = caffe::format_int(itemid, 8);\n    db->Put(leveldb::WriteOptions(), key_str, value);\n  }\n\n  delete db;\n  delete [] pixels;\n}\n\nint main(int argc, char** argv) {\n  if (argc != 4) {\n    printf(\"This script converts the MNIST dataset to the leveldb format used\\n\"\n           \"by caffe to train a siamese 
network.\\n\"\n           \"Usage:\\n\"\n           \"    convert_mnist_data input_image_file input_label_file \"\n           \"output_db_file\\n\"\n           \"The MNIST dataset could be downloaded at\\n\"\n           \"    http://yann.lecun.com/exdb/mnist/\\n\"\n           \"You should gunzip them after downloading.\\n\");\n  } else {\n    google::InitGoogleLogging(argv[0]);\n    convert_dataset(argv[1], argv[2], argv[3]);\n  }\n  return 0;\n}\n#else\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"This example requires LevelDB; compile with USE_LEVELDB.\";\n}\n#endif  // USE_LEVELDB\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/create_mnist_siamese.sh",
    "content": "#!/usr/bin/env sh\n# This script converts the mnist data into leveldb format.\n\nEXAMPLES=./build/examples/siamese\nDATA=./data/mnist\n\necho \"Creating leveldb...\"\n\nrm -rf ./examples/siamese/mnist_siamese_train_leveldb\nrm -rf ./examples/siamese/mnist_siamese_test_leveldb\n\n$EXAMPLES/convert_mnist_siamese_data.bin \\\n    $DATA/train-images-idx3-ubyte \\\n    $DATA/train-labels-idx1-ubyte \\\n    ./examples/siamese/mnist_siamese_train_leveldb\n$EXAMPLES/convert_mnist_siamese_data.bin \\\n    $DATA/t10k-images-idx3-ubyte \\\n    $DATA/t10k-labels-idx1-ubyte \\\n    ./examples/siamese/mnist_siamese_test_leveldb\n\necho \"Done.\"\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/mnist_siamese.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Setup\\n\",\n    \"\\n\",\n    \"Import Caffe and the usual modules.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"%matplotlib inline\\n\",\n    \"\\n\",\n    \"# Make sure that caffe is on the python path:\\n\",\n    \"caffe_root = '../../'  # this file is expected to be in {caffe_root}/examples/siamese\\n\",\n    \"import sys\\n\",\n    \"sys.path.insert(0, caffe_root + 'python')\\n\",\n    \"\\n\",\n    \"import caffe\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Load the trained net\\n\",\n    \"\\n\",\n    \"Load the model definition and weights and set to CPU mode TEST phase computation with input scaling.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"MODEL_FILE = 'mnist_siamese.prototxt'\\n\",\n    \"# decrease if you want to preview during training\\n\",\n    \"PRETRAINED_FILE = 'mnist_siamese_iter_50000.caffemodel' \\n\",\n    \"caffe.set_mode_cpu()\\n\",\n    \"net = caffe.Net(MODEL_FILE, PRETRAINED_FILE, caffe.TEST)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Load some MNIST test data\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"TEST_DATA_FILE = '../../data/mnist/t10k-images-idx3-ubyte'\\n\",\n    \"TEST_LABEL_FILE = '../../data/mnist/t10k-labels-idx1-ubyte'\\n\",\n    \"n = 10000\\n\",\n    \"\\n\",\n    \"with open(TEST_DATA_FILE, 'rb') as f:\\n\",\n    \"    
f.read(16) # skip the header\\n\",\n    \"    raw_data = np.fromstring(f.read(n * 28*28), dtype=np.uint8)\\n\",\n    \"\\n\",\n    \"with open(TEST_LABEL_FILE, 'rb') as f:\\n\",\n    \"    f.read(8) # skip the header\\n\",\n    \"    labels = np.fromstring(f.read(n), dtype=np.uint8)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Generate the Siamese features\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# reshape and preprocess\\n\",\n    \"caffe_in = raw_data.reshape(n, 1, 28, 28) * 0.00390625 # manually scale data instead of using `caffe.io.Transformer`\\n\",\n    \"out = net.forward_all(data=caffe_in)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Visualize the learned Siamese embedding\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {\n    \"collapsed\": false\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"image/png\": [\n       \"iVBORw0KGgoAAAANSUhEUgAAA54AAAIXCAYAAAD0R4FDAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\\n\",\n       \"AAALEgAACxIB0t1+/AAAIABJREFUeJzsvXtwXOWZr/usvurWUktqGdmxaawEHEMuthGXITiIyMaJ\\n\",\n       \"wbEMFmCTDMkkoyqSyTnZMwdqpmYyzEyS2ruKue2ZqSTHO/vYGQbhCxdjwI637ViWMEEEMJhgB4MB\\n\",\n       \"gSRLsizJkiypuyX1+WP1Wlp971YvSd3y+1S5rF69Lt/6+lOrf/2+v/dVgsEggiAIgiAIgiAIgjBT\\n\",\n       \"WOZ6AIIgCIIgCIIgCML8RoSnIAiCIAiCIAiCMKOI8BQEQRAEQRAEQRBmFBGegiAIgiAIgiAIwowi\\n\",\n       \"wlMQBEEQBEEQBEGYUUR4CoIgCIIgCIIgCDNKRsJTUZQ8RVFaFUV5U1GUU4qi/HezBiYIgiAIgiAI\\n\",\n       \"giDMD5RM+3gqilIQDAZHFEWxAS8B/08wGHzJlNEJgiAIgiAIgiAIOU/GqbbBYHAk9KMDsAJ9mZ5T\\n\",\n       \"EARBEARBEARBmD9kLDwVRbEoivIm0A0cDQaDpzIfliAIgiAIgiAIgjBfMCPiORkMBlcAi4EvK4pS\\n\",\n       \"k/GoBEEQBEEQBEEQhHmDzawTBYPBi4qivAhUA03adkVRMjORCoIgCIIgCIIgCFlNMBhUEj2fkfBU\\n\",\n       
\"FMUDjAeDwQFFUfKBtcDfxxhEJpcRhDC+9a1vsWPHjrkehjCPkDUlmImsJ8FsZE0JZiNrSjAbRUmo\\n\",\n       \"OYHMI54LgV8pimJBTdt9PBgMHsnwnIIgCIIgCIIgCMI8IiPhGQwG3wZWmTQWQUiJq666aq6HIMwz\\n\",\n       \"ZE0JZiLrSTAbWVOC2ciaEuaCjIsLCcJsU1NTM9dDEOYZsqYEM5H1JJiNrCnBbGRNCXOBCE9BEARB\\n\",\n       \"EARBEARhRjGtqq0gCIIgCIIgCIIQTSrFd3KF6RaOVWa64qyiKEGpaisIgiAIgiAIwuWKoijzotNH\\n\",\n       \"vPsIbU+oriXVVhAEQRAEQRAEQZhRRHgKOUdTU9NcD0GYZ8iaEsxE1pNgNrKmBLORNSXMBSI8BUEQ\\n\",\n       \"BEEQBEEQhBlFPJ6CIAiCIAiCIAgziHg8JeIpCIIgCIIgCIJwWdPX18emTZsoKiriqquu4sknnzT9\\n\",\n       \"GiI8hZxDfAmC2ciaEsxE1pNgNrKmBLORNSVE8v3vf5+8vDx6enp44okneOihhzh16pSp1xDhKQiC\\n\",\n       \"IAiCIAiCcJly6dIlnnnmGX784x9TUFDAl770JTZu3Mjjjz9u6nXE4ykIgiAIgiAIgjCDJPV4NjTA\\n\",\n       \"mTNQUACNjeB2p3eBDI4/ceIEt956K5cuXdK3/fM//zNNTU3s27cvpfsQj6cgCIIgCIIgCEK2c+YM\\n\",\n       \"HDsGBw6oInIWjx8eHqa4uDhsm8vlYmhoKP1xJECEp5BziC9BMBtZU4KZyHoSzEbWlGA2sqaykIIC\\n\",\n       \"9f/qati2bVaPLyoqYnBwMGzbxYsXcblc6Y8jASI8BUEQBEEQBEEQ5pLGRqivh0OH0k+zzfD4a665\\n\",\n       \"hvHxcd5//31921tvvcXnPve59MeRAPF4CoIgCIIgCIIgzCDZ3sdzy5YtKIrCL3/5S9544w3uuusu\\n\",\n       \"fvvb37J8+fKw/cTjKQiCIAiCIAiCIEyLn/3sZ4yOjrJgwQK+8Y1v8Itf/CJKdGaKCE8h5xBfgmA2\\n\",\n       \"sqYEM5H1JJiNrCnBbGRNCZGUlpby7LPPMjw8zEcffcT9999v+jVEeAqCIAiCIAiCIAgzing8BUEQ\\n\",\n       \"BEEQBEEQZpBs93iming8BUEQBEEQBEEQhKxFhKeQc4gvQTAbWVOCmch6EsxG1pRgNrKmhLlAhKcg\\n\",\n       \"CIIgCIIgCIIwo4jHUxAEQRAEQRAEYQYRj6dEPAVBEARBEARBEIQZRoSnkHOIL0EwG1lTgpnIehLM\\n\",\n       \"RtaUYDaypoS5QISnIAiCIAiCIAiCMKOIx1MQBEEQBEEQBGEGyWaP53/8x3+wY8cOfv/737Nlyxa2\\n\",\n       \"b98ed99MPJ62zIcqCIIgCIIgCIIg5CKf+tSn+NGPfsTBgwcZHR2dsetIqq2Qc4gvQTAbWVOCmch6\\n\",\n       \"EsxG1pRgNrKmBCObNm1i48aNlJeXz+h1RHgKgiAIgiAIgiDMKQ1ADbAeGJiD45nxVGDxeAqCIAiC\\n\",\n       \"IAiCIMwgyT2eNcCx0M/1wO40r5Dp8fCjH/2I9vZ28XgKgiAIgiBoNDc3MDBwBputgNraRpxO91wP\\n\",\n       \"SRAEIQMKQv9XA9vm4PiZj3hKqq2Qc4gvQTAbWVOCmch6mh0GBs7Q1XWM9vYDtLQ0zPVwZhRZU4LZ\\n\",\n       
\"yJrKRhpRI5WHgOl8kZbp8WrUciaRiKcgCIIgCDmHzaZ+u+/xVLN69fS+3RcEQcge3EwnPdaM4ycm\\n\",\n       \"JggEAoyPjzMxMYHP58Nms2G1WjMYTzTi8RQEQRAEIefw+QZoaWlg9eptkmYrCELWk819PP/u7/6O\\n\",\n       \"f/iHf4ja9rd/+7dR+2bi8RThKQiCIAiCIAiCMINks/BMh0yEp3g8hZxDfAmC2ciaEswkcj01Nzew\\n\",\n       \"b18N+/evx+ebXon7TMmGMQjTR96jBLORNSXMBSI8BUEQBGEGyYYiONkwBkEQBOHyRlJtBUEQBGEG\\n\",\n       \"2b9/Pe3tB/B4qrnzzkNz4kfMhjEIgiBczkiqrQhPQRAEQZhRsqEITjaMQRAE4XJGhKek2go5iPgS\\n\",\n       \"BLORNSWYSeR6cjrdrFmze04FnxljEJ/o3CHvUYLZyJoS5gLp4ykIgiAIOUZzcwMDA2ew2QqorW2c\\n\",\n       \"FVGr+UQBWloaWLMmk35zU8zFvQiCIAizj6TaCoIgCEKOsW9fjS4Cq6rqTROBiUjHJ5qOmJyLexEE\\n\",\n       \"QZhtJNVWUm0FQRAEIeew2QoA8HiqWb16W9hzM5USW1vbSFVVfUrFidKpopvoXgRBEIT5gwhPIecQ\\n\",\n       \"X4JgNrKmBDOZjfWUSATOVOuUdHyi6YjJdATt5Yq8RwlmI2tKmAtEeAqCIAhCDmCMZAJxRWA2RBDz\\n\",\n       \"8ytwOj0pCclsKL4kCIJwOeP3+/nOd77DVVddRXFxMStXruTXv/616dcRj6cgCIJw2ZMLBW5S9UJm\\n\",\n       \"Q+sU8W0KgiCEk80ez5GRER577DG+/e1vc+WVV/Liiy+yZcsW3n77bbxeb9i+mXg8paqtIAiCcNkz\\n\",\n       \"UxVbzSTVSKYWQYyFWQI72XmyIeqqkQtfKgiCIMwlBQUFPProo/rjO++8k6VLl/LGG29ECc9MkFRb\\n\",\n       \"IecQX4JgNrKmBDOF0kytJzO8kGb5P5OdJ5t8mzPleZ1N5D1KMBtZU9lHAw3UUMN61jNA+oXhMj3e\\n\",\n       \"SHd3N2fOnOG6667L6DyRSMRTEARBuOyprW2c8/TUZLS2PsLISA9HjmxNO3KnRf36+98BUhPYiSKF\\n\",\n       \"yYR6oqhrJuza9VlGRrqwWOzcffdruFzJv4nPpuirIAhCPM5whmOomTcNNLCb9N5DMz1eIxAI8MAD\\n\",\n       \"D/Ctb32La665ZlrniId4PAVBEAQhy4gl+tLxTUYef/BgnX5sYeFiNm9+O6lwTXS9ufKRbt/uJhC4\\n\",\n       \"CKj38cADnyQ9Jhs8r4IgCMk8nutZzwEOUE01hziEm/TerzI9HmBycpKtW7cyPDzMc889h9VqTfk+\\n\",\n       \"xOMpCIIgCFlMvKhiLM9prMhdqscbj001/TVRpHCmIprJsFjsAFitBXz96y/F3U+bl6GhsxQWenE4\\n\",\n       \"imdriIIgCNOikUYaaGAb26YlGjM9PhgM8p3vfIfz58+zf//+mKIzU8TjKeQc4ksQzEbWlADh7Up8\\n\",\n       \"vun7Y9JZT/H8h4ODZ0M/WRkd7cHnG4jpm4x3fKRonI7nMpt8mhp33/0ahYWLuffeUwnTbLV5uXSp\\n\",\n       \"nZ6e4znt7wR5jxLMR9ZU9uHGzW52T0s0mnH8Qw89xB/+8Af27duH0+mc1jmSIcJTEARBuGxIJC7n\\n\",\n       
\"ogiNJjDt9mJuuukxffvkpD/00wTnzh2jpaUhZr/LeFHJSNE4nV6Z2dhf0+Xy8sADnyT1dmrzYreX\\n\",\n       \"AOLvFARBSERbWxvbtm3jrbfeorKyEpfLhcvl4sknnzT1OuLxFARBEC4bEvkW9+9fT3v7gbTSUdNF\\n\",\n       \"SwEdHDxLMBhgdLQXmIgaz44dZfj9/YAqnrZu/SjmeDL1L2baaiRbW5Vo83LTTY/R2vqw+DsFQZhz\\n\",\n       \"srmPZzpk4vEU4SkIgiBcNiQSl7NRhMYofI1YLA48nhtwOIqprW3k0KF6OjsPY7eXsGTJ1xgZOYfN\\n\",\n       \"VsDg4HuMjp5PqaprKqIwlhBvbm6gre15JiZ8eDzXs3btnrjzkU7BI0EQhMsZEZ6SaivkIOJLEMxG\\n\",\n       \"1tTlQyLfolmppYnW01QK6FSxG4fDTVnZyjAv4tq1e6iqqmfr1o84d65JTwEeHPyIQOAiPl8vu3Zd\\n\",\n       \"E9eP2tzcwNmzu5OmDsdK1R0YOMPoaBd+fz+dnYdpaWngf/0vB9u2KWzbZuHcuZcSHp8uZnlr5zPy\\n\",\n       \"HiWYjawpYS4Q4SkIgiBcNqQjLmdCEGnCd/Pmk3i9G/F669iy5UPy8soAVcBZrfns3r2c9vbDHDpU\\n\",\n       \"z8TEmOEM4/pPk5P+uKJyYOCM3nYELHz88UF+9asKhobawu7twoW3cDrLcTiifaMAZWUrWL16G8Fg\\n\",\n       \"ILQlyPPP36afIxAYJD+/krVrn5q2YJ8Lb60gCIIw+0iqrSAIgpDVzJWPcDbTSI8efZCPP96Px7OC\\n\",\n       \"CxdOMjbWoz+nKHaD8FOxWBxMTvqx20vYvPktPeVWm6uenleYnPShKDYsljwmJoaBqd6XsVJ+tXv0\\n\",\n       \"+QZoavo2EKSmZgdOp5tt2yyA+rd8w4YWFi681bT5mQ1vrSAIwlwjqbbSx1MQBEHIcmL1tJwNUk0j\\n\",\n       \"zUQYa8f29Z3E7++no+MwimIP20cVnQqa8AO1yTdAIHCRZ56ppqLiBmprG8PmSj12XK+Qa7UWYLE4\\n\",\n       \"2L7dzfj4SNg1HI5SrNZ89u2riXkfGzY08/zzt7FhwzEWLrw1rflJRm1t44x7awVBEIS5RyKeQs7R\\n\",\n       \"1NRETU3NXA9DmEfImspuzI6IpSoUUy02ZIz8uVxL+eCDYlauXER+vprammpxH1BFnN1eyLlz0QWI\\n\",\n       \"NCyWPK644ibOnTuGzVbE+LgazayqqsfvH6a9/YC+r93uprj40/T3/55Nm15l374vG1JwVRTFxt13\\n\",\n       \"v87x4/+XPpaioqUUFV2ZcOyx5idbq9zmOvIeJZiNrKnZRyKe4vEUBEEQspxEBYESEc+jmaqnMFU/\\n\",\n       \"qDHyV1CwiL6+t2hvP8DHHx9IubhPWdkKvN46Skuvpb//dNR+ZWVf4FOfWktBwSKuuqqOyclx8vMr\\n\",\n       \"qai4PnSeIsbG+lm9+udhxwUCA1y48DqTkz5eeGGNIdJpwWp1AWpU9MUX12G1qpFWRbExPNyWdOxO\\n\",\n       \"pxuHw83Bg3X6HItfUxAEQYiHRDwFQRCEeYkxmpifX8m9957G6XSbHkE1Rv6OHNmqn9vhcNPZeRib\\n\",\n       \"rYgrrriZNWvUtiS7dn2WkZEuLBY7d911hBMnfqJHDR9/fCGjo11h53c4SrHZ8pmcHGdyMkAgMGzw\\n\",\n       
\"fNqwWCx6Oq3XW0db296oMRojo/HwejfS1fUyPt/5sGtv2fJBVERzaOgsExMBxsbC+5BqEdd05lai\\n\",\n       \"pIIgXA5IxFOEpyAIgjBP0QSmhrF4jlmeQqNoys+v4OLFs1y48DplZV/Ebndx4cIJfL4LAOTlVeJw\\n\",\n       \"uBgcfE8/vrBwMUuWfE0/R0/Pb/H7p19B1+vdSFvb88Ckvq2gYBFu97V0dh4O29fpLMfvv0gwOE5Z\\n\",\n       \"2Qo2bDiqC2dQ27zcc8+bnDjxU318vb2vhxU+0tAEKpD23EovUEEQLgdEeIrwFHIQ8SUIZiNran7i\\n\",\n       \"8w2we/dyRke7oiJwZkXZjKLJ6fTg8/Xy7ruwbJn6fH5+ZVgEU1FsBINTLVEWLryNiYlxenqOT/Mu\\n\",\n       \"w1Er4E6iRSGt1gIqK2/hy1/+Jbt3f5aJiTHsdheVlas5f/41XUTabIWhCrg2PJ5V2GyF9Pf/ntHR\\n\",\n       \"84yPXzKMObzIkXbNpUvvYWTk3LTmczaq2uZ6VFXeowSzkTU1+2S78PzGN77BkSNHuHTpEh6Ph+98\\n\",\n       \"5zv89V//ddR+4vEUBEEQhAicTjf33ns6pj/ULC+i0d/p8awIe87hKGXhwhq9Sm1Z2QocjpKwfc6d\\n\",\n       \"O0ZPz8sJrpDen2k1BVcVnYpiZ2JihI6Ow7S2Poz2OSEQGAJgcnKqRcv4+BiBwEV8vgt0dBzmllv+\\n\",\n       \"ldHR8wQCF8OEcqTotNuLuf/+9xgZOTft+dQ8vKWl14b5Rc1EvKeCIAiJ+au/+is+/PBDBgcHOXDg\\n\",\n       \"AP/+7//Or3/9a1OvIcJTyDnkGzrBbGRNzV/iFQjKtBWIVrgoGAxQVOTFanUCasXZZcvUyOaGDU2M\\n\",\n       \"jJzT/ZiXLn1CWdnnKChYRH5+ZehMViLF3BQKxpRZu70ktC05ijLVLc1ud3HTTY8xOTmmb+vtfQOP\\n\",\n       \"Ry1MVFa2ImJ+guzbdysWiyqYrdYCnE4PAOXlK1myZD1LlqynqMhLWdnnaWl5SN83cj7jFXgyor1G\\n\",\n       \"Q0PJCxpNF7Nav8wV8h4lmI2sKSGS6667jry8PP2xzWZjwYIFpl5DhKcgCIJw2THdSrkaWgSto+Mw\\n\",\n       \"gcAluruP09FxGLu9EFArxf6f/1NHX99J/Rif7wLnzh2jouIGios/Hdo6keAqU4JUUewsWfI1Uv2z\\n\",\n       \"HQyO64I3EBjihRduD3ve41nF2rV7qKqqZ8OGo9x99+toolZRrIyP+5icDKAoDurqfsv9979HUdFS\\n\",\n       \"rNYC+vpO4vNdxO8fpLv7OO3tB7DbizKOLM+kOMz09RYEQZhpGoAaYD0wnZyPTI8H+N73vkdhYSHX\\n\",\n       \"XXcdf/M3f8OqVaumeabYiMdTyDnElyCYjawpIV2MvkSn001Hx2G9ku3Ro4f50peqsVqddHdHezeX\\n\",\n       \"LFnPhQsnGRlpT+laDoebxYu/yocfPm2oZps6dnsJZWWf08eiKHYqKm7k/PlXCAaDKIqVu+9+DYej\\n\",\n       \"hH37bqWg4FOcP9+qH+90VlBRUY3fPxjTi5rIm5mOfzPdok/NzQ20tT3PxIQPj+d61q7dM29FpbxH\\n\",\n       \"CWYja2r2SebxrAG0Ds71QLpl1jI9XiMYDHLs2DE2b97M/v37ufHGG8Oez8TjaUv0pCAIgiBc7kRW\\n\",\n       
\"rh0aasNqteP1bqSmZgcwVckV4N1367jzzr0cObI15vk6OtQWK6mgKDbKy79IZ+dvpiU6QSEYnKSn\\n\",\n       \"51VA9Z0WF18TJiCDwUmefnolDz54gQce+IT9+9eHHe/znae9/QB5eZVEUlCwKGHRptraRp55ZhVW\\n\",\n       \"q5MjR7YmLOyjpdymysDAGb1wU2fnYVpaGnA43DldREgQhMuXgtD/1cB0cj4yPV5DURRqamqor6/n\\n\",\n       \"ySefjBKemSART0EQBEEwoImnwcGzuFxe+vtP4ff3A1rVWFUAxmr9YRReq1f/nGefvZmxsa6oa4Rj\\n\",\n       \"BSawWBx6P04zyMtbwPj4GOPjg/p1Fiy4md7e15ic9EWPwppPefkqbLZ8rFYH3d2t+P1qKxiHo5R7\\n\",\n       \"7jnBM8/coPf5dDjcbNnyYZi4i9UaxbitqGgpRUVXRgnDVKrORu5jbP2itYM5eLBOWrMIgpCVJIt4\\n\",\n       \"DqCmy24DpvOVWabHR/Ld736XyspKfvKTn4Rtl6q2giAIgmASmi9xZKSd7u7juugEdNEZz4do9DS2\\n\",\n       \"tj7Mffed1gvzxGPDhiZcrqVYrXkJ90uXsbEeg+i0UF6+gp6e4zFFJ8DExCg9Pcfp7DxMd/crKIr6\\n\",\n       \"+cFicVBScg0tLQ9RXv5FQBOib6ZUtMm4raBgUUzPZype0Mh9amsb8Xrr8Ho3smHDUZxOd84XERLm\\n\",\n       \"K2a474T5jhs1PXa6ojGT48+fP8/OnTu5dOkSExMTHDx4kD179rBx48ZpjiY2IjyFnKOpqWmuhyDM\\n\",\n       \"M2RNXd5EVl7VxIvdXhy1b1nZCrzeurh+RZutgHffnRI+TqebioobEl7/nXf+jcLCKwkEBhPulxmT\\n\",\n       \"9PefSnlvv78Pn68Xi8VJeflKzp9v1YsIuVxLKS29lpaWh6Iq1cYq4mPc5nCocxopDCMFo/aaPPHE\\n\",\n       \"EvbuvZX9+9dHVc51Ot2sW/cs69btjXmt+ZRmK+9Ruc4ZVPfdAVQROvfImhKMKIrCL37xCxYvXkx5\\n\",\n       \"eTk/+tGPePzxx7nhhsR/v9JFPJ6CIAhCzpFKamaqaJE0QI+ktbQ0cNNNj7F793ImJkax211UVq7m\\n\",\n       \"K195Iupakem1p08/SGmpl4MH6/Rte/fezOhoFzZbEePjw/qximJncPBjhobOTnv8qRLej1NDTfMF\\n\",\n       \"sNkKGR+/hN3uMvT69DE01AaA0+lhdPQ8Pt8AQ0MfAvDMM6soLAxPnXU43Pq9a55YTVhqcxtZQChy\\n\",\n       \"u/E1uXRJLcLk9dZRVLQUiyW+XzRdn6ggzA5mue8EYWbweDyz8mVERh5PRVGWAP8JLECt+74tGAz+\\n\",\n       \"W8Q+4vEUBEEQTCWWl3C6JKq8unfvrXohHqfTQ0XFDdTWNtLa+oguNgOBQb1irOZh7Os7qafoVlXV\\n\",\n       \"Y7Xm8/HHBwgEBuOmuloszrjPzSRWax6LF3+VW275V1pbH+ammx4LE8oVFdczOPghPt+AIXUXbLYi\\n\",\n       \"JiZ8evqx11vHunXPhr02TmeF7gnNy6vkvvtOh81dvC8NtNfEbi8hELiovzbi4RRyE7Pdd0Iukszj\\n\",\n       \"mSvMZVXbAPDfgsHgm4qiFAGvK4pyKBgMns7wvIIgCIIQFzO9fPn5FTidnpgCSEsNtdmK8Pl6dX/h\\n\",\n       
\"yEiPLoDy8yv1sVgsTn07qGJ1eLiTgYFTYV7RWMy86JyKbhqpqLiJmprtYdHCe+89zeOPVzI+Psy5\\n\",\n       \"c8dwOsvDRCcQFrkF9MJIxtfG4XDT2XkYgLGxrqi5i1eJ1hh1bm19GKs1n927lzM21guoKc/i4RRy\\n\",\n       \"B819l000AM8DPuB6YA8iioWZJiOPZzAY7AoGg2+Gfh4GTgOLzBiYIMRDfAmC2ciayj3M9PINDbXh\\n\",\n       \"8/XS0XE4qrCNdp0rrrgZmBK6mrjS2qIoioOLF9/j4sU/8O67YLUWkpe3AL9/iJ6e44aquOr3vQ6H\\n\",\n       \"O2nRIQ2bzWV4pKAKyPQxFt8x0tV1TL9vzVt55MhWrNb8qasqqV/T+NqsXbtHb8MSOXfaY2PRoJ07\\n\",\n       \"r9HbuaxZsxuXy8uaNbsZGmpjdLRLTxd2ua6aVx7OZMh7lGA2TU2vAl1AP3CYbPGeCvMb0zyeiqJc\\n\",\n       \"BawEWhPvKQiCIAiZYaaXL1H0VLuOzzcQ5kHMz69AUWyMjw/rkb9AQI34KYqViYlLTExcirpWMDiO\\n\",\n       \"zVaA232d7pNMRFnZ9fT1nTCegVhRy1Tw+S7E3K4oNnp732T7dneowJGaQuVweEJjWMG6dXtpbX2Y\\n\",\n       \"jo7f4PNdwGrNZ9Gi22lvP6Sn2p4//zt8voGo1+a++06HzV2kn9Mo4rWeoTt3XkNFRbUeATV6YMvK\\n\",\n       \"vkBNzfZpzYEgCBpOw88rEe+pMBuY0sczlGbbBPwkGAzujXgu+OCDD3LVVVcB4Ha7WbFiBTU1NcDU\\n\",\n       \"t3jyWB7LY3ksj+XxXDz+oz9aQUtLA5OTf4zDURT1vMXSyMDAGVpb3yE/v5JVq5YQCAzS3Kz6Opct\\n\",\n       \"A4B331X//+IXFzA21kNX1zWMjHRSVTWsP2+x5HH11T4gqO8febz2+IMPigkEBuM+b9bjlSsXMTLS\\n\",\n       \"zbvvToQ9399/EzZbHn/2Z2rV2KamJkZGuujvf5ivf/0lXn/9Q158cS1XXz2un2/Rotv4i79omtb8\\n\",\n       \"/+53Z+ntfYNlyyzAZNj59u69lZYWdb7vuGMj69btzZr1I4/ny+O7gHZqahYBjTQ1vZll44v1+B+p\\n\",\n       \"qRkGCmhq+h4Q/f4V//ELwP+gpqYC2J4j95vbj2+//fZ54/E8evQob775JgMDamXzjz76iF/96ldJ\\n\",\n       \"PZ4ZC09FUezAC8CBYDD4rzGel+JCgiAIQk7S3NzA2bO7CQQuhm3Pz69kdLQLh6OU0tJr9eJCpaWf\\n\",\n       \"Z3z8UiiaacHjqaav74Tuf0wVq1Ut6mP0i84csb2fDkcpixevY2TkXNxCQGqU9GJofzdbtnw47RRY\\n\",\n       \"n2+AnTuv0YsRORylbNnyAU6nO2EBqHQwsxqyMF9oQG13chI17RSgnuzzZMaiBrVNC+TOmC9fpLhQ\\n\",\n       \"hh5PRe0u/b+BU7FEpyDMBNq3SIJgFrKmhHgMDJzRhZXmz/R4qqmre4Wqqnq2bPmAr371BbzeOgoL\\n\",\n       \"r2R0tIfXXvsALSW2t7cVu10VN1N9QRP+XQZgYmKYrq7mNEdr0ceYHrFTd/3+fj74YLfuv2xq+nbU\\n\",\n       \"PgsWqD3eHA4399zzZlwhF9krNRZq2q2W/qdQXFzFkSNb8fkGTPP0Gv2kkX7ebEbeo2YSrcemJjpz\\n\",\n       
\"qeXJ9Nu0TG9NNaCK3fWolXoFIT0yEp7Al4BvALcrinIi9O+rJoxLEARBEOYEo0gaHHwPUEXnXXcd\\n\",\n       \"1cWPVvTG6XTjdLpZt+5ZJif9jI11R51vwYJqqqrq2bz5JFVV9TgcpWHPxxeL6XwzbmHLlg/Iz1+Q\\n\",\n       \"xjGxiBTFxsfh42lubiAQGAWsTE5O8Mwz1XrPz0hSFXyFhV79Wr29r+v7a77RTCOUZlZDFuYLmnhb\\n\",\n       \"AdQBh8id6q6NqJHO2RqzJtIPIMWIhOlgiscz4QUk1VYQBEGYYcxMoQzvQ1luKMqjsHDhl7njjr0x\\n\",\n       \"z//LXzqjUmq1VNzh4TYKC704HMV0d78clbprJP1+nqrodLm8YX1H06WsbAWjo12MjnZFPWe3u9i8\\n\",\n       \"+W1OnPgpbW3P4/P1MTk5QWS0tLBwMQ888EnU8ammyk7171T9rZmm1kYSWSRKEOa+x6aW6luAKiTN\\n\",\n       \"HoOZ51+PKjqryS2Bnh1Iqm3mEU9BEARBmBOMkcm+vlOmpVAao2Iez0rDM0HOnTsW9/zGViWKYiMv\\n\",\n       \"bwElJdfQ3X2cS5fa6ek5Tnv7AaxWZ8zjVSw4naWk2jJFUWxcdVUdR48+yP7967HZ8pMfFPGn32Yr\\n\",\n       \"ZMmS9ZSXf0Hvk2l8DiAQGKK19WEGBs4wOtoVEtjhotNqLeDrX38p5hXVXqkVOByJP6hqKbVadDhS\\n\",\n       \"dKaSspsIsyKnwnxC67E5V2vCGEW8mvTTWJOlv5oZpZztCKswF7z33nvk5eXxzW9+0/Rzi/AUcg7x\\n\",\n       \"ughmI2u1e9m8AAAgAElEQVQqNzGmbw4Oqu02pptCaRQ0q1f/XBc9a9bsQVEc+n6lpZ9n9eptMQVQ\\n\",\n       \"RUU1AO+/n0cwOM7YWA+9vW8AasQQ1JYhbvdyLJZ44nMyFHFM1jJFFbb33/8+Y2MX9Hm4cOFE2Hhj\\n\",\n       \"oSjhf/rHxy/R0XGE999v1Ptkqvs59HtSW530Y7Xaw46120vYsKGFwsLF3HvvKVwuL7FQe6Wep7Mz\\n\",\n       \"vFdq5DxqwtCYymwkVz2amZJb71HiA0wP7QurIqCX9AWiUViuInruY/tAp7em5lqkC7PB97//fW68\\n\",\n       \"8UbUUj7mYlofT0EQBEGYTYyRybVrn6K19eGUUygjU3M1QQPQ2vpwWB/KpUvv5sMPn8HhKOarX30e\\n\",\n       \"p9Mdtn9LSwMOh5tAYJS8vEqCwT79WK3HpcXixOFw4PerIlEtCK+hkJ6fE0AVti+//H9H9MGM3avT\\n\",\n       \"iCoup65ptRYwMTESYz8/Docbp9ODz9dLR8dhFMXOpz61FovFjsVip6ZmB06nmwce+CRhunM8b2Xk\\n\",\n       \"PCbrzTpdj6ZUs51NNCEEqoCajUqrM52uaiaRY20MbesHDqMKxHxUAXkW8ALFxL8vTVh6gAuA1h94\\n\",\n       \"OXDacP65SiUWcomdO3dSWlrKtddey/vvv2/6+SXiKeQcWk8kQTCLy3VNZZq2ONcYK52eOPFTRkZ6\\n\",\n       \"9Cqoye4tMnKWSNCMjJwjGPTj8/XS2vowEC2ABgbO0NNznLGxLq65ZjLqej5fLxbLlNjUBGnoERZL\\n\",\n       \"4ihlOMY/3Qr5+RVYLE7Gx0dTOtpmK+aee97EYsnH4SiLm57rcJRSU7ODioobwsbd3/8OX/vai+Tn\\n\",\n       
\"L+DgwTp9jo1zunPnNWFzH68qbbpCcrrVbXM9Uppb71HTr7Q6fXKp6M3zTI3128AjQE/oOa24UVto\\n\",\n       \"n3bgOInvS0t/tQCDhu1doWNiRymj15REqueahgaoqYH162FgGi9BpscPDg7y6KOP8i//8i8z5kUV\\n\",\n       \"4SkIwmVBQ3MzNfv2sX7/fgZ86RRumb9k84fxVNtvaOmYkfeiPf7v7eXcuPP/jXrdjYLHas3H7x/E\\n\",\n       \"as1DUay6eI21ryaOIgWQcR+HowQARbGiJRaVl69k06ZXyM+vDJ3VmMJkxWo1ir/o9Can04PDUUpe\\n\",\n       \"3gKuuOKPQtvK6e5+mfffbwwVI0qWnhu6mtVJUdGVLFhwI35/X8woqcNRyj33nMDpdFNb2xj2XHn5\\n\",\n       \"CiC+eFcjr+fD1lWkt1J7fScnA3i9dSkLyel6NKWa7WwyFz7AuRC706EBVRBq+JkSzYcBO+qcafej\\n\",\n       \"fVlVAjwWca7PAg6gAlW49kc8n+5c5JJ4n5+cOQPHjsGBA6qInO3jf/SjH/Hd736XRYsWzUiaLYjw\\n\",\n       \"FHKQ3PK6CNnCmYEBjnV1caC9nYaWlrDnLtc1lc0fxtMVxUbRMzY25UXss32ak75S/XXXBE8wGMDr\\n\",\n       \"3ciddx5iaKiNnp7jTEyMcf58a9Q1a2sbcbmWYrU6dVEaKYCMQrSi4j8oLFxMeXk1oHomh4ba2Lfv\\n\",\n       \"1lC7kMjU2gm9yq0qQKMLC/l8vfj9/YyN9dDd/dvQtgHGxnrCfJnxcDrLDec6T1PTt8KKIRlRFDsV\\n\",\n       \"FdfrAtrpdLNw4W2A6nHNy/Owb18N/f3vAFPrR5uDBQtu1rdbrfkxv0DQXt/OzsNYrfYZT301qw/o\\n\",\n       \"XJFb71Fz4QPMlaI3ZyIe24ktmrX7WRV6fBG4nfCIZBcQQH2POUb4e8oiEs9FA01NK0jFCyrMHgWh\\n\",\n       \"l6C6GrZN4yXI5Pg333yTI0eO8MMf/hBAIp6CIAiZUGBTI0/VHg/bVq+e49FkB9n8YTxSFCeLgKq+\\n\",\n       \"vQrGx4fp7DyMzVZIVVU9i69QP7hpr7smeDo6DmO1OsKilXZ7cdg1NZxON+Pjo3R3q1Vpn3zy01Hj\\n\",\n       \"MArRgoJKHnjgE/LyykL3UoTf38elS+309rYS6ee0Wl36ddUiRPGFpM1WBGipvKlFODdsaGFiIhCx\\n\",\n       \"VdFf/8g+osFggI6O8CJAd9yxl6qqejyelXz00XN0dR3D5+ulsHCxvn60OVi7do++roaG2vQvEHbv\\n\",\n       \"Xq7P2Wx/6SHVbLMZM1I8s7nojfH+jN7uzwE7mBKZF4Ey1C+mqlAjnGWhfatRxaQxImk81xeAL4V+\\n\",\n       \"XgGsQU3bjTWnDaHrvhU617LQPrki3ucvjY1QXw+HDoF7Gi9BJscfO3aMjz76iCuvvJKFCxfyT//0\\n\",\n       \"Tzz99NNUV1enP5AESB9PQRBynobmZs4MDFBgs9FYW4vbGV0xdMDno6GlhW2rV8d8XsguIvstGntr\\n\",\n       \"VlXVxyxCE6tXZOTrHmsf7Vo33fRYVIEirShNd/fxqMhiVVU9Doc7btEa7bw+X79emEf1dlpRRaON\\n\",\n       \"goIrmJjw4/cPUFl5C11dL0f4P9V9S0quxe2+hkBAFdbxsNmKGB8fDtvmci1laOjDsG2LFq1h7do9\\n\",\n       
\"UXOrEa9/ZuS+DkcZ99zzBi6XV5+roaGzFBZ6GR5uY3x8GL9/6oOv9tql0k9TCgLlItMp8lPDVDGi\\n\",\n       \"emanGNFsUsPU/S1AFZG/B5agFg2qQPV0vkT4l0mLgHdQo56LgNcAX+iYk6F9bkEVmk+EHmtFhOoM\\n\",\n       \"16xELTKkvRbG8WjMx3nPPrK5j+fo6ChDQ0OAGu38x3/8Rz766CN+8YtfUF5eHrZvJn08RXgKgpDz\\n\",\n       \"1Ozbx7Eu1TdTX1XF7jVr5nhEgtnEEoyRpCJmUtnHSKTQsttdBAJD+jgOHqxLKIibmxvo6zvF4OBZ\\n\",\n       \"Cgs/xYULrwNgsThYuPDLnDt3nMnJ5EWBvN6NrFu3F59vgJ07r8bn68VudxMIDGH8sLpkyXo++WR/\\n\",\n       \"xNFWYkVHi4qupKhoKRaLnXPnjhEMBrBa81m06Ha+8pUnosS3zVbA5GQgSvhaLE6++c2usLmIRaLX\\n\",\n       \"LhapfNkQOb54AtW4T35+BUNDbSJoZ4Qa0heR61Ejb9XkVrQtmcjWnn8FVTBqeFCLAPlDj50Rz2tY\\n\",\n       \"UFusjBCdBVEJ3IEqWGNdX5tTDeNrEfncCuBojPELZpPNwjOSv//7v+fs2bP853/+Z9RzmQhPSbUV\\n\",\n       \"co7c8roIs0GmabSyprKfVNKCW1sfCatsG4t0Uy61lNCyshV4vXVs3vx23KJCWsqocT0Zq90ODJwK\\n\",\n       \"bbUyOemno+NwSqLTYrHT3/8O27e72bnzaiorvxxKK76EUVDa7S5uvfVnRBcnMorOqeeGhz/WfZYO\\n\",\n       \"RwlWawF2exHd3b/l8OF6fQ6Nflu7vYiqqvowz+jkpI9du5brVXvt9pLQ/2rqcnn5St1Pm2zejSnV\\n\",\n       \"2vmSpeOm4gc27vPxxweytqhWPHLnPWo6PsFcTfE0FuOpRE2LXctUaqtWvdYoKq2ovTr9hm2xfz/V\\n\",\n       \"lPpBYqfedwFPEr8YUGNoTBD9WjQCG2lqugnYiIhOIRaPPvpoTNGZKSI8BUHIeRpra6mvquLQnXdK\\n\",\n       \"Gu08JRXBmG5Boli+0chtmuAtL/8CPl8/LS0PhUVLkwliTZg6nR6Dl3IixjYj4cWFNm16jdHR8wQC\\n\",\n       \"F/H5emlrexaf73xESi4EAkO0tj4c8oFGoyhqlDUWPl8vExOjjI2dx+/vD/N4GsV1Tc121qzZzd13\\n\",\n       \"v47FMvW7NjbWpYvSzZvfCv1/kqqqeu666zesW7c3JbEfKXIjizrFIhW/qHGf8vIvJt1fmC7TEZHZ\\n\",\n       \"7M+MJJZf04YqLrU+nNp7T6woZnSrpaltVuBOwr2bidB+/2NVvHUDrtDYPkT1jxqf2wv8D9TU30Re\\n\",\n       \"0Jo4zwnC9JBUW0EQBGFekEo6LkylXfb1ncTvV1sQaKmc8dI7jdudzgoqKqqTpmka02xdLi/nz7cC\\n\",\n       \"4HCoVWLHxnrp7j6u719evpKioisZH79ER8dhLBY7mza9Rnn5F/jVryrw+XrDzq+l/Uam/2qpuEYs\\n\",\n       \"lnzuu+80DkcJO3deg893Puz5SG+oxeLA47kBh6OY1at/zvPP305BwSIcjmL9vn2+AXbtWs7YWJd+\\n\",\n       \"7dbWRzLyZUa+hslSmSH9FGsgrXRr4XLls6iRxTHUdNQy4HWmem6WoqbJjjDVP9OKWn12D6oAj+/H\\n\",\n       
\"VlkGdAJDMZ6zMyUujdiIjoIuCe3rA64PXf8qpgTnYuCTGOeqIX5qdKLnhOmQS6m2iRCPpyAIOUMq\\n\",\n       \"hYCE7CJXiryk6t+M9G0aRdMHHzyF399PeflKyso+r3sBNW+jUaAl8h1GXic/v5LR0S69P6bL5dVF\\n\",\n       \"liY4a2p2hBU7Mt7H0FAbu3YtY3LSp+9/yy3/kxdeuJ28vAUMDbWxadMruFxehobaeO65W1AUK4HA\\n\",\n       \"CH5/H1dccQvFxZ9maKiN/v538Pl6sVjs3Hnnb3jnnX9jbKxf927a7SWUlHw2VIFXjcwGgxNRIj3W\\n\",\n       \"nCfzZSZbS5HnS/XLhFjkyroVspEG4P8jtcrRsTzUTtRIZjD0L955FFRBG91LN1pg5qOmMmuCNNYx\\n\",\n       \"GvWoKbS9ofFVAx2AF7U4keYJTeSvzVXvbfYiwlNSbYUcJHe8LkIsEvXTnCuycU01NDdTs28f6/fv\\n\",\n       \"Z8AXK2Vr9kg3hXWuSNW/Genb1CvgDpzRxVVR0ZVhrUD6+k7i9W7Ue1Rq/UJjpX9q68mY3llX9wpV\\n\",\n       \"VfVs2fIBLpcXmErTjUxFjXUfJ078FIejBEWx43AU43CUcPTog/h8A5w/38rYWBetrQ8D4HJ5+cY3\\n\",\n       \"OnC5qvD7LwBBuruP09b2ot4GxWJxct9977Fw4a16CxSvdyNebx1bt36kt4IBi95DVLuXyFYzxrEm\\n\",\n       \"S3uNtZaM6c1A3P6o6QrHXFm3qZCN71HzD2Nq6SkSi04tHd5D7PRZH1M9NhOdJxjneIiOavpRxWYX\\n\",\n       \"iUXnClRP52uokc5qoBVoB46jeULVNZUoNTpXvbdCNiPCUxCEWUX6aaZGNgn02e65ONNoYmbDhqOs\\n\",\n       \"W/dsTNFUU7NDfwwwNtaD1eoItSCZ6heaSNDU1jbici3l0qVPePrplfh8/WHPp1PoaGDgDGNjPQSD\\n\",\n       \"Ac6dO8b77z9JV9exuIJQTfM9GXYOi2XKOzo56dOFqjaWdev2sm7ds7S2PkIgMIii2NE+FCuKjSVL\\n\",\n       \"1idNYfb7B8nLq2Tt2qcSel6N400kEDPpvznf1q0w0xiLBZ1Nsm9F6N9FIvvypk9/jG2xgkbJoq/F\\n\",\n       \"qKL5C6i+zYeAt5nqBVoS+t9YbCiRvzaXvLdCriCptoIgzCrSTzM11u/fz4H2dqo9njkvmpRuC5Jc\\n\",\n       \"ITIVE6a8f62tj9DXd4re3t8xOekPS/VMJ/0zMq03WXpuvPRQ7ZqRlJWtwO/vZ3x8lMnJABUV11NQ\\n\",\n       \"sIiPPnqOQGCqoEh5+UruuONZdu1azuTkKIpix+NZhdNZFtVeJF5bFK2lS7wxx/LMRhJrLaXrzU01\\n\",\n       \"dXa+rlshVRK1O9GeO8tU+mkA1ZPpCe0T7pMOJ57/ci7ZiFo0qIZwb+Y21Pt9DHg49Fh+H+YCSbUV\\n\",\n       \"4SkIgpCViECfeeL5EZubGzh7drcu3AoLF7N589u6eElH0BgFY3n5Su666zcpC9W8vEruu++07vts\\n\",\n       \"avo23d0vMzbWE+YLjRSKTqdHLy5kt5ewaNHt1NRsp7X1Ec6e3UUgMBh2TaezQi825HRWAMGo4kQA\\n\",\n       \"Xm8d69Y9m3DMkH6/zul4c5MJeCPi9cxmkvXCzIQawgWY23CtQdS0UyN1qIKyM8ZzELuoT7bgQo1u\\n\",\n       
\"/hR4CjWKWgTcjFpoSNZ8NiDCU1JthRxEvC5CLDLxRJq5pszyZrqdTnavWSOic4YwpqKWla0IS8Uc\\n\",\n       \"GDiji06HozRMdELy9E9tPWmpp07nApYsWZ9UdAIR6b1d7Nz5Gd37uG7ds9x337u4XEux2QqYmPBH\\n\",\n       \"HVNevhKPZ4Vh7G/p6cTqfamiU2vjYrMVhQlRn++87gEFtXKudt6amu0Jx2zs19na+khUq5p4pOvN\\n\",\n       \"TTd1dj54Pefv3z1jeqtZr43m1Xwn9FhLLTVe69XQcy7DPsWoFWtfi3FOK9kpOrXWK0Oo0cwzTKXu\\n\",\n       \"DhPe3iWc+bumhGxGhKcgCPOCbPFEZss4hMQYCwmNjHSGPacJHK0C7XQjZAMDZ+jpOY7P14PdXpjS\\n\",\n       \"eWprG0PeShWf7wLt7QfYufMaXYAWFl5Jd/dxfXswGGDJkvV4vXXcdddvWLNmj17IaP/+dWzf7uZX\\n\",\n       \"v6pACX0P7XCUcvfdr+N0ehgfH2ZyMvwLEoejlPvuezfUi/NtvQBSvPFrntmyss/j8w1w5MhW+vtP\\n\",\n       \"ZdxTNd510i00NJ+8nqnMU26hfWli9B1miiYwe1GL62jFcbRrFTGVJrsaNRp6LfBc6LhYXximUt12\\n\",\n       \"JlmAGnE14gZuC/2szZ92j8UR2wUhO5BUW0EQ5gXZ4onMlnEIiYn0TBYVLaWo6EpstgJWr/45ra0P\\n\",\n       \"Z+wNnG4rkBdfXEtHx2G9P2dkCxe/f5j29gMptXbZvt2tR28VxU5eXjl1da+EtXMxoig27r//fb3y\\n\",\n       \"bjoYU2Gt1nwmJkax24vZvPlk0vNNN402FeaT13Mm52luGECNyJnpO4zXBkS7Vj9qJND4fA1TabnZ\\n\",\n       \"SDmq8OwOPbYCbwBXEj5/2j2KnzMbyfZU25qaGlpbW7GFikAuXryY06dPR+0nHk9BEC57ssUTmS3j\\n\",\n       \"EBLj8w2wa9dyxsa68HiqsVic9PSovi6zPtD7fAM888wqCgoWYbcXR/kLY3kPm5sb6O8/RW/vG5SU\\n\",\n       \"XMvISAelpcs4d+6YLmBBLYKk9d50Oj1YLFYmJvxUVFzPmjV7aG19hIGBM3R3v0wwGECtkhkMuz/j\\n\",\n       \"HIDqB928+a24IjFyvNo1tMdHjmzVhbaiWDl/vjXl+cykX+flhMyTRiJvaDIxG/l8A1O+yGxm6ndY\\n\",\n       \"ZSmq8JwJf6wwE2S78Lz99tv55je/yZ/8yZ8k3E88nsJlhfgShFhk4onMZE1FejrFm5kbOJ1u7rvv\\n\",\n       \"tJ666XCoqWlmpGNq68npdIelxUamnMbyHqpi8TgTE6P09b3O2FgXfX2/Jz+/kuLiz3DwYB1Hjmxl\\n\",\n       \"9eptrF2rptS63csYHe3G7++no0Nt8aKdOxgMYLXmsXDhl6PuT5sDr3cjRUVeyso+R0vLQ3FTOCPH\\n\",\n       \"G/k4PBW2LK35zKRfpxGzUlGzqY8uTK0ps+YpOzH20Uz22kV6Q43HQuI2IMY2IQ2hn7NddFoJF502\\n\",\n       \"1F6eifyxiedTPksJsZhpYRyZMC4IgnDZ8Y9vvcXfDQ5SYLPRWFsbJRobmps5MzAQ8/nn29roGh0F\\n\",\n       \"4NtNTTy7bt2sjj1byYVKolpRG1A/0E8nHVO7z8HBs7hcXuz2Ymy27+nPJ/IXDg6qvQLt9mJuuumx\\n\",\n       
\"iN6bFmASq7UQn09tFt/RcUSvPtvS0oDD4ebcuRbGxqYq0JaXr2T16m0cObJVv64xShp5f1r/TmMK\\n\",\n       \"Z0tLQ8wIZeS97Nnz+bDxZzKfxmMj5zadNaSJ4UT3kQqaVxugoaWF3WvWTOs8ZhNrnsxhJqvLpoom\\n\",\n       \"JrXxJLrPSG9oXZJjY7VPaQSeR+3Fma0UovbfXAK0GrYXMSUmS4nt40xnPoWsINNfQxN+jf/qr/6K\\n\",\n       \"v/zLv2TZsmX89Kc/5bbbbkt+UBpIqq0gCFlJPLGXSATGe17bdnZwEK/LRbHdHnZszb59+ofM+qoq\\n\",\n       \"dq9ZE3aewUCA492qt6YyP5/T996rH1u2Ywf9frW66JVFRfgnJvBNTHC9x8OetWsv28inUcg4nRVU\\n\",\n       \"VFRnrQBNF6Mg+tfea/loLIgDP9/lf1PAaFhqqeYvtFrzw3plOp1unnvuVrq7p9J7R0Z6ovpn5uUt\\n\",\n       \"YGysB4+nGofDTWfnYTyeakpLr43q1VlQsIj6+nf09iuJhF+kqDOmyRqjacb9Ir2vkeM3QxAZr+f3\\n\",\n       \"D6ad/qylojqdHkpKluFwRKc4p8L892pHfkI1Crd65kakxPNmQvR4tW1aumyyY3cQ3XfTgxo1zObP\\n\",\n       \"qG7gj4C3UNu8aCxArcBbCpxAFdORaHPiAZYxJbZz/z04V0maaltDZr+GGR7/6quvct111+FwOHjy\\n\",\n       \"ySf5sz/7M958802qqqrC9pNUW0EQ5h3xqsMat6965pmodLhYx2nb2kdGON7dHXXOgpCRvtrjYdvq\\n\",\n       \"1VHnOTs41W6ia3Q07NjrPWqz8UKrlUG/n67RUfr9fg53dl7WVW216JjNVoTPd35GWllkklaZybHG\\n\",\n       \"FNOPfXbeYxnv8Hn+i29ERTa1CNXQUFtUWq3dHp7eq82Z3V6ib9+06VU9tVJLrb3zzkMMDbWFic7y\\n\",\n       \"8pW66DReN57oPHt2d4I02aljjPfa2vpw2Dkjx28GxutpEeF0zq/dR0nJMnp6Yqc4p0JjbS31VVXz\\n\",\n       \"VHRCdKpq8uqyxt+Zo0cfnIHquo2on5YjhWOs8WrpsjeHfn4VVWjFEp27iRadCmrV27kUnZH3GOvz\\n\",\n       \"+gDqPY8ZtpWg3m898AGxRSdMzacVtS/pAeBb0x+uMPNkWuQ5w+NvvPFGCgsLsdvt/PEf/zFf+tKX\\n\",\n       \"2L9//zQGEh8RnkLOIb6E+UmkpyqWGIRwkbiooCBKZGrPF9ls9Pt8YefSKHU4ws75PZst6kOm8Tqv\\n\",\n       \"1NVRmZ8fczx71q7F43RyaWKCgVDkE2BFWVnYfmbMyVwxnXFoAmDBgpuBmWllkUl/xkyONaacXll5\\n\",\n       \"AwCryor5G+8wd955iN/+9s2Ex2jzECn2tMebN79FX1U9P7/zEPe5vFSHxJ5RTE61fHGn3CPUeO/G\\n\",\n       \"PqVaBDOWUE2UKjwTfkPj9TZteiXt82v3kalnd7a92sm+CDH/717kJ9REok8l7AuXj/fPQG/UR1Cj\\n\",\n       \"eFuZSiON15NTows1VfYCcDLG2OOl0mZDlDPydS6JeKwJ0ULDz6XA14AHUft0JkIT537DtilxK5+l\\n\",\n       \"spDkv4Yze/wsIMJTEISsIDJSGS/iYNxebFf7HRrFYGNtLR6nk+HxcQ53dOjnyrdaAbApCk0bNoSd\\n\",\n       
\"s8jhiPqQabyO1+Xi9L33xhyP2+nkhooKAFaWl7OksJBypxNPSKiaNSfLd+9OKPpmUqROpzepJgCM\\n\",\n       \"UTqz02wz6c+YybFGwbX7jjupr6riyIZN1K2Ln9IZS6RFij3tscvl5ddrdnPY6Y5bNkQ735YtH/K1\\n\",\n       \"r704rb6Wxj6l8YSPcdw/aD0Ztsa08ba2PmJa9Cs/vwKnswKHw43DURI3apuMXCvCk8kXIdMj8hOq\\n\",\n       \"seBObIy/Mx7PCv1n875QioxqGrdF9uTU0HreFgAvhY5bCJQBawmPFEZ+5E2YETgHXIp4HDRsv4B6\\n\",\n       \"/xtQ5ydRUaFIrg/9vxLYnvkwhZkj+a/hjB1/8eJFDh48yNjYGOPj4zzxxBO0tLTw1a9+dZqDiY14\\n\",\n       \"PAVByAqm46mK17ok1rlu3buX4z09wJSPM1WS+UoHfD5WPf00iwoKODUwoHs+66uqcDsc+rEV+fm0\\n\",\n       \"DQ3FPU+8OdGIHHeYD9Xvj7q/ZONOlWz1u2XSn9Hs3o6ZFlOKPH5TSHTGcqxlSqx7T6U/ZCwvdKrH\\n\",\n       \"psr861OZGpm1SZmdwkDGdQOxi1VlRiyfZiLvJkAbcCuq6Pwp6qduY4RT80LagMnQv2zAguq97Elx\\n\",\n       \"/xLgI8K9uA7gBuJ7N7V1YUctRrQ9xj7CbJLN7VR6e3tZv349f/jDH7BarSxfvpwf//jH1NbWRu2b\\n\",\n       \"icdTqtoKgjCrxBNDjbW1afW/NJ4nkljnKnY4gOhU2VTGF6/CpXHfRQUFuvAzXqfu4EH9WKfFgm9S\\n\",\n       \"/eCTSgXcxtpalu/eTdfoaMxxG8dVmZcXdX9mVeZM97WZLTKp8JnKsemIyUyrqUYe/98cbm4bOMNy\\n\",\n       \"WwH5tY2Q5MN9OmONde+pRIDjpb9nEj2OxMxz5RLTraqsMjvVSyPXjXlfChgFUh3hAqmR+D05teM+\\n\",\n       \"jyrMzhAuOhXgfOjncZPGahZfBl5JY/8bUe9fS5EuBa5B9W5C7NfduC48qCnMUlxIiI3H4+HVV1+d\\n\",\n       \"8etIqq2Qc4gvIbeJl7aZyFMVK4001nm0/bYeORIlkhIVCzGuqcjzNjQ3c7KvD4Ayh4NjnZ2U7djB\\n\",\n       \"2hde4FR/f1QBohVlZdR5vfp1jB/W8w0iOZXvPB9pbeXTxcVU5ufzVIwKuWE+1E2b4vpUPU4nncPD\\n\",\n       \"007DNb422eI7nQ1STX80tkEpL1/J5OQfJ9w3VlpqpOAaHThDadcxulJMvcw0VTOV1NR4v0NmprXm\\n\",\n       \"WoqsWSQqBgXJ/u5lWpFkrtEE0mFU8Wmcg8jcQWNvylPELpCkESQ7vJyxOA7Ee/+0od5fsWGbdm9a\\n\",\n       \"ivQHqOnEMPW6R/bt1I4pQk1VDk/Nlc9SwlwgEU9BEGaMWNHDWFGTRC1QItuZLN+1i9P33ZewEi3A\\n\",\n       \"oscfZ1VFhd465ZHWVnpGRth65EjCtNMwsXbpEofb2/XUWUVR6BlTPUOHOzv1gkMepxOvywWKwt51\\n\",\n       \"69SfQxijhfWHDnG4s5MVZWXsqKlJOn/GHqE/fPnlqAhpZCQyMhJrt1rZ6PXSOzqqR2Mz7Uk4nSiq\\n\",\n       \"WSm/s02q0beBgTP4/WoD+qKiK3E4ihLuGysyGhnxSjfyl2mkMJUIsHGNpXusmeMQIkkUFcwFEgln\\n\",\n       
\"YxpxBfAcU1HNyhjHLUctOFQNvBbjWjayI/oZWWW3HNXHaWyPshZVjK9AbQcDU0Icol/3yMi39nx/\\n\",\n       \"6Dy5+sWEMJ8Qj6cgCDNGLE9YLF9mrP2M2yrz83UBBqrQW+HxUGizsaOmRj9PpCdSo76qip6RkZj+\\n\",\n       \"tEi08XVeuqSLXVAFrtvh4HCn2kttRVkZe9et4/bnn+fC2BiD4+Nxz20UgpFjbmhu5vm2NrX3Z0UF\\n\",\n       \"ewxRX2OP0I1eL3sTpOYm8nsO+/2meTSn4/eM5w1Mh0w9lNMhVR9oOv68VPdN14Nqhmc1kzmei9fH\\n\",\n       \"bF/t5RRhnXuMgvLnwMPEFs41TImpCqZSZzWBZjzus8A51IJCdwH7yA6RCWqCoZUpwWlFFYKtwBdQ\\n\",\n       \"xxo5BwOk94WC5octQm0zsyd0XLrnEWaKbPZ4poP08RQEISuJFZWMlVIba7+odiYhD2ORzUavz8fh\\n\",\n       \"jg4cVmtUOq22n1bxVotcvtPfrx+vtVmJhTY+7fhyp5NyhwO3w8Evb7uNOq+XjV4vRzds4KcnTtDn\\n\",\n       \"8ySAV+QAACAASURBVOmiM7JNi4YWJTSOWUtZfeqDD6Z6f3Z0cPXOnXoaq9YjNJUIaay+o9p8mtmT\\n\",\n       \"cDrniucNTIfZr/qZPP1RI5300FT3TfXaxv0dDjcHD9ZNu7rsXLWnmS7Ga+7ceU3a9z0XY85+ItM1\\n\",\n       \"ZwpjBduHiV+K0xgN/WLoZ2NU0HhcFzCI2j7kWbJHdK5FjWYaP48XAi6migVF3gukX6K0EdXLOYwa\\n\",\n       \"4dTWdKalUgXBPCTVVsg5mpqaqEkhTVGYe2IVpdEic2eHhvAWFlLscFDicFDhdOIOFQAyHptvtfLg\\n\",\n       \"0aN8rqyMm+12vU1KpIjRzvu58nJustn4n7fcwsOtrWGRSwvox3/xqadY6nJRYLPxPZuNu+64I+bY\\n\",\n       \"O4eHOd7Tw+HOTh5ubQ1Ldz0zMMDFwFTK1BNf+UpMMZYsLVijMCSqNX/pnrVrUy7qY7zGU2vX8nBr\\n\",\n       \"K9tWr+aR1ta4RZimQ7x0y0SYUZwom4vORKaHJnqPmslU0kwLHM1Vexoj6UQhtWvabEX4fOd1AZnq\\n\",\n       \"fWfzmorErL97yed3dgoVpe5LNaaTamPKR43o+VBbhWiRPWNrlQDR6azTwYIqwl+I87xWNTeSItQ2\\n\",\n       \"KM2oVXcJjVvrqTmIKg4row+dNm7UKrdaFeDEa1o+SwlzgaTaCjmHvFnmNsa0Sw2P00lvKAIZmYoZ\\n\",\n       \"maa5bfXqmCImXjqnMTX0/YsXGQgJxXKnkwuha942NETTX/wFn921i66REcYmJii22xkPBrEoChd8\\n\",\n       \"PpwWC5PBIEHgS5WV7L3jDrYeORKW2msFbq2sjPIyxkov1sa1oqyMRYWFOCyWMFGdbnQyXmsZM9Jc\\n\",\n       \"s4FEqaTZljI5V+9RqabxxpuveHOcyvymkuqbynnSaaeiXXNsrJ/OzsNptyIxu6XOTGLWmko+v8na\\n\",\n       \"l5hFvPTPVFrD1DAljkEttrObqdYqt4Yem9U6xYNanCcSK3AWeDBiPEa0scGUZ9MFDMXZJ5J0W+Wk\\n\",\n       \"nlYrn6VmH0m1FeEpCMIsowmuErudi4GA6p10OuMKLm3/IpuNm6+4gkUFBTF7YUbut2fNGh5pbeVU\\n\",\n       
\"Xx9nBwd5ZdMmvtvczOGODlaWl9M9MkLn6CjFdjsnN2/G63Lh3r49LIKpoQBWRWHc8F5W5/VS7HDw\\n\",\n       \"n++9p2+zMPVRR+vhqfs3PR72rF3LzXv30jUygs1i4aYFC8KipPHEoxnznaqYzcVCQJdr78dIjELq\\n\",\n       \"B0533I+r6c5XuvvHE5jJztPc3MAHHzyF399PeflK7rrrN7Pmb71cSP7lhFl+wOn2Fq1hSsTFE2Sa\\n\",\n       \"OAa1sutypnpZPkJ0L89MsKPeQ+T5lNC1J1E9mu8DHRH7lKJWn53ybDawijMsoIATNOLHnVTg15B8\\n\",\n       \"PoRcQYSneDwFQZhlNI/gW5s3617BPWvWxPUNNtbW4nE69Wjgk++/H7MdS0V+PjZF0fdraGnhzMAA\\n\",\n       \"x3t66Bob4+HWVv06v7nrLpYWq6XqBwMBlu3aRdmOHYyMx/YEBSFMdAI0nzvHzrNnw7ZpolNLqT0z\\n\",\n       \"MDDl3+zspKGlha6RES4GAlwI+VS3Hjmi+01j+V8zbV+SriczXrubbCaXUiZnEqMv1Oigi3Qvpjtf\\n\",\n       \"Q0PqOrfbS7jppseS7h/PO5nsupHVgdPxt6bjh72cSe4xTtS+JB3PZ6IVqBHr3Mkq3NagptCuBzai\\n\",\n       \"OsaOh67zbcJ7edpRo4vTQUvbDRAtOhcBt6D6NvtR7zNWRHQQ+AxqJBbAzRmu5BitHMBPA4tJHlU2\\n\",\n       \"o1XObPl2BSE5IjyFnEN6T+U2mrjyuly6yErUw9PtdHJDRYX+OBASgJq404TZ821tujjUivyE9dC0\\n\",\n       \"Wqk7eJDhUJXYtiE11ckK+E6fpt/vJxAM4rRYuL68POl99Pn9+CfDU7lcdjvrlyzh2tJS6g4e1Asa\\n\",\n       \"gVogaNvq1dgt6tuuBfBPTnKgvZ2rd+5kyX/9F7c+91yUwMxUCCaa21iYUQhotsm23o+pvEfN9EfB\\n\",\n       \"RB9X052vwkIvAIHARVpbH066fzyBmey6xuNqanYkvc7lhFl/96JFerKVmIqAjEUqginWubU+lbEE\\n\",\n       \"mbHf5+9Q/ZLGL+N+DbwV+tkOrCLc95kMLVCzErgt9LM1Yp+VwDuE99gsI7yQkXbNCVRxugxtbgtC\\n\",\n       \"46jGwza8wFYSvwMkmg8jiV7H2K+hfJYS5gIRnoIgZDUNzc0MBgI4QoJtZXk5VxYW4rRY2HrkCPva\\n\",\n       \"2jjW1aW3HSl1ODhxzz24nU4q8vPxhITt2YsXdQG36umnGQztPxFxva8tWcKCUH/OVLEp6geWodA4\\n\",\n       \"24aGONbVRa/Px6KCAr0Krtvp5LW772ZxYSGrFy7Uj+31+WgfGeF4d/eUEH3iCW7du1cXr5p4ziT6\\n\",\n       \"mQpmVsCdLcyKeM1mXGC6H+dTJdHH1XTny+FQP2SnGiGNJzCTXTfbvkC4PEi2EqcbcYu3Ao2/ZZpA\\n\",\n       \"M547UQVWbSxFqG1VDgDG96gxpn5zi1FblfQBi1EjlLEwCkstq+VK1KimhymB+TnUCrS/CY2tMfR4\\n\",\n       \"I2qqr/aXpIQp0arhQ5vbRhqpp55DLMOtR2qvJv67TqoVaRO9jmZETYXLhZ07d7J8+XKKior4zGc+\\n\",\n       \"w0svvWTq+cXjKQhCVhHpMVy+e7few3NRQQHv1NdTd/CgXjDHrih6FNSCWjRoPBjkeo+H0YkJvaJt\\n\",\n       
\"ZV4eXWNjVHs8OC0Wvc+l2+HQxd/K8nJ+c9ddACzftYuusTF9XEU2G8MxUnEVpj6uACxwOvmCx8Ph\\n\",\n       \"jg48TifLSkoodjioyM8P86YCNLS0cKi9nQG/Xz++0Grl0kS4HF5cWMjbmzeH3XesQkHZVmQn16hh\\n\",\n       \"9txUs1XCxQw0D6XVms/QUFvY+pI1l+skW4lm94CsYeq3rA5VfCY7dwPwPDCKKjSXh85RDTyFWuG2\\n\",\n       \"K3SuAKqYLEZNg60GrkUtAvQuagQyiCrGalArzx4L7T/I1DxobU5AFa5vJxijNodam5cS4AHgCKro\\n\",\n       \"jDW3xp6bw6FtmbzrJHodpY9ntpDtHs9Dhw7xp3/6p+zevZsbb7yRc+fOEQwGWbRoUdh+4vEUBGHe\\n\",\n       \"EJla6jOIsPFQaqvWp7LYbufGBQv05yeB8z6f7qk09rOsWbRI7ek5MsKr59Um5FbgKwsXsqykhDyr\\n\",\n       \"FYfFwuefeoq7Dhzgc+XlrF+yhCWh6Gq8N8vIt94en49jnZ2sX7IEi6JwvKeHA+3tvBCKzGr3paXA\\n\",\n       \"VhvSiAHyQqmuJaE+otUeD29v3ozb6UyaBit9CTNjNuMCqSbQZQNapHJoqC1qfcmay3WSrcRMekAm\\n\",\n       \"83BuT/HcZ1CF5UXU1iWnUYXhIdT+l6dD97AqtP8EqujUPJRtqD7QXuBroe2ngBeBvaFjTxI+D8Zx\\n\",\n       \"JhKdMDWHH4TG4w6du4v4c6sdc7PhOpm86yR6HaWPp5Aajz76KI8++ig33ngjAAsXLowSnZkiwlPI\\n\",\n       \"OcSXML+JFFfXG4RZz9gYDS0teF1qwYjBQID3Ll5kQV4eoApRjRVlZbxSV6enjZ4bGaHX56NzZESP\\n\",\n       \"kE4A+z7+mOMtLYxNTNB6/jztly6pfTs7Oii026lyufBNTjIYp/BQLALBIEc6OsI8oFq01ON00jk8\\n\",\n       \"rKfL7omIWg4FAtR5vWHFl7SUV2Ma7COtrVFpt5pHbo/zh/zDpftZv38/Dx49OuPpufMFs8RgKu9R\\n\",\n       \"ufhRMJZ383Iq7NTc3MC+fTXs378eny+zZOx0zzVzf/fMWomxRKYx/XMVU4WBNpLeb1mkP7MHNbr5\\n\",\n       \"CLAQqEIVlu8a9lnJlGDUisAVAz8DPkEViDB1/17C5yGdd4N4c5hobrXn9qRxnemMIT7yWSr7aG5o\\n\",\n       \"YF9NDfvXr8c3kP57TCbHT0xM8Prrr9PT08PVV1/NkiVL+MEPfsCYIfPLDER4CoKQVUR6DPesWUNl\\n\",\n       \"yHOpidG24WF9/56xMW654grqq6o4uXkzdV6v7qnUChjdvHcvL4VSVItCwhbUiGdktVoNh8XCsc5O\\n\",\n       \"PVU35j5K/IwS3+QkYwax6p+cxGGxMD45qUdBtchnucFL6Z+cxG618tMTJ+gZGYlb9dYYGb5m507W\\n\",\n       \"79/P9at3UFVVj8+9mpbuXg60t3Pg449zrkrtXHGyuYFv7KvhZROERTLMFDGxMd+xGsuDORu+zJmf\\n\",\n       \"q9QwRnd37rw6o/HMv0hxLI+hMWqopbQeRjUopLNWGlHFqpbdokUH/ws1qtgPdDK1ziuZ8mLClMgc\\n\",\n       \"BD6L2ucz2e/FbH01lItfQQkzxcCZM3QdO0b7gQO0NKT/vpDJ8d3d3QQCAZ5++mleeukl3nzzTU6c\\n\",\n       
\"OMFPfvKTtMeRCBGeQs4hDY/nB/HahERWYXU7nZy+994wMeotLNT3L3U42F5Tg9vh4Oa9ezl27hyX\\n\",\n       \"DL04G5qbef/iRb30Q5HdrotTlxYhXbYsanzjk5P0jI3pwjRSYlqBP6qsxB5HfHqcTr2Crba/f3KS\\n\",\n       \"gdDYtCq3AK/ffTfO0L42ReHQJ5+w++xZXTBeHRKWxnnSIsNFNhvnfT4OtLfzg9aTrFmzmyK7GgGu\\n\",\n       \"9nj4YqhC70xUqc201Uu2YZYYSOU9amDgDI91LeGH7ctZvftnMzB/5pcvilUcaDZamWSLSNOiuzZb\\n\",\n       \"ET5fb0bjSTdSnP1/92IlqmtRw2tRi/xopOtxc6OmxL5LeHQwuueyyirChZyxAu0YpFTUZ/6T/Wvq\\n\",\n       \"8sNWEHpfqK5m9bb0M0gyOT4/9AX/D37wA6644grKy8v58z//c/bv35/2OBIhwlMQhDkhnTYhRjHa\\n\",\n       \"0NzMqVAKiRX4QkhYRfbM/OLTT1Ozbx9PffCBLjqtwCt1dTy7bh17162jwJCaG8lkxOMg6OLQqihM\\n\",\n       \"AMfOnWPt4sUsLiykZcMGFhcWcrfXi8fp5ILPx1Ao4mlTFL0qr8ZVLpcurr0uF13f/CaeUGGkgUCA\\n\",\n       \"iwbx3BsSlvcePqxvq8jPp8Lp1M9rFJbGqPGetWtnrEptLvb8TMRspo3abAV0s4D3WMbvRj0zMH/Z\\n\",\n       \"Vckyk6il8XWxWvPnLPqpRXevuOJmfTzTXSfzr4JvrNRULZrXxlS7kRJgxzSvERkd1ASlhfB+nY4Y\\n\",\n       \"Y6s0XB/Uoj69qAJ0OdHiM52MAemTKZhDbWMjVfX13HnoEE53+u8LmRxfWlrK4sWL075mukhVWyHn\\n\",\n       \"aGpqkm/q5gHr9+/nQHs71R5PWqKoZt8+vbKrxtL/n723D2/ivNNGb1lf/rZsy8QhBgU3hKYfCU7c\\n\",\n       \"0ha81tZOKSbUboKSJu1F0rO1djdtt/tuN+w53bNnu233fa/T9Lq63Z7Tbjh9NyRN/YKTNIEU3BQT\\n\",\n       \"/FGSOk1DIF+NuyTQGjDGIGHjD9mY3/lj5hk9Gs1IM9JIlsxzc+nCmo9nnueZkTT33L/f/SstRXhu\\n\",\n       \"TimpAsS7zTptNmxZuRI9IyOYW1iAx+3G9SUl+N2FC8A772iqnjxYmZaHhoYQmp1F7+nTcX0PDgyg\\n\",\n       \"+/jxGNIIALVFRTg/O6vkljptNtx7ww0xLrcetxsrfvYzjExNAZBupdTk111QgNkvfSluHpjrrVli\\n\",\n       \"uXv3+zE9PYqCAifuvPMVlJX5lHVqd+FjQ1+Ncy9N9RzmKph7a1PTjrTIgJHvqEgkjKbuH+G3M15l\\n\",\n       \"/rTmOHXklpPl3r1+jI5KLqH19QG0thp37+TPy/PPd6TcjlWw6joxg/z+3VO7vvoSb24YJyGFzf4a\\n\",\n       \"wJcghfE2IDbMFpA+B29ByvV8HsB3IIXn9nLbqB1l/TDucW1m29xBfl9T+Ylcd7X953/+Z/T09GDf\\n\",\n       \"vn1wOBz4zGc+g09+8pP4l3/5l5jt0nG1dSRaKSAgIGA1GKFx2u1o9/mw0+83RViKOdfXi/Pzkro4\\n\",\n       \"Oxtn/sN/JfJlWGZkl9zzkQguGAxvdNhs6ONyRsORCIKDg9jR1KSosMPhMI5duBBHOovtdvymowM3\\n\",\n       
\"7NqlLFtWVIQ9J04o2960ezfevuce+EpKFOKpJp0A8PJnPxs3D+mQvunpUczPXwQA7N27AZ///J+U\\n\",\n       \"dUzNBKSyL1+YHlZu+AcHg2ht7UZXS0vMPKihJq+5TkxZ2Gi2jvWru78eM38spBSIznHqkNShXDkH\\n\",\n       \"6ajJ/HnJBTMjq66TIIIYxjCKUYwudMGTAw8IMoMuZOYhiA+SURAgmfSojxGEFHJ+DBLRBCTS2Q3p\\n\",\n       \"wcxNkHJEtaICzEQM5FZ0gYBAqvinf/onjI+P48Ybb0RhYSHuuece/OM//qOlxxCKp4CAQFbBK3Va\\n\",\n       \"tSjVUN84f+3FF9Hzxz/iA5WVqHS7cW52VjEAsgMokOt68ornNYWFIADnZmeVZWpFlEeVy4Wpy5cR\\n\",\n       \"4Vxp230+PLtxY1x/tg8NYec77yhqJoPDZsNlIqnkS02NUlP05qoqjE5PY0zlFBeor8dLZ88qxPOD\\n\",\n       \"Hg9WV1Tg6zffjNv378dQR4cSVgwgjvxqzZWaZKjX79lVh0hkHHZ7Me6++60YxVOtZr548LMYGemB\\n\",\n       \"19toODzQ7LnOZaRD4NjtbzGkW3C9mdu/v830HCdDrpwDq1TCxVAbMwU//OiXlbIAAujOE6Usf+BH\\n\",\n       \"VIkE4mtcakUFsE+rE0AJpLDgZNdZutEFRr8hjG4nkKvIdcXTKITiKSAgkJPQullPVotSDbXyNjY9\\n\",\n       \"jXORCPpHRxGor1dKqNhtNiwQYYEIdSUluMjlWJ7VsAP3ut04F4koyimP8NxcnOI4cOYM2vbvx8Tc\\n\",\n       \"HA6PjSn9+cXJk3GkE4i65U7Mz6P39GksKyyEr7QUZQ4H3lIprWwu7ujpUYjnDRUVeGbjRgDAzF/8\\n\",\n       \"Rcz2/Lzy749duKCEGwcHB+NIhnou/+edr2Dv3g04cM0j+Omhoyh2vKmcJ7Wa2dLSZfqG3+y5zmWo\\n\",\n       \"584MgWM2P4B066hHL1KZ42TIlXNglUqYTVU60yiWlbJGNGLHoiplmSY0VrQf28bAwHYDYelMiVwL\\n\",\n       \"4HpIdUP57VjOKA/+0xow2FetdszA6DeE0e0EBHIXwlxIIO8gak/lD7TMZ9TlUpJBfeN8fGICgFSz\\n\",\n       \"8+F165T2/vzaa5XtXt+6NaaGpvqLrqG6Gr+9804E6utxdOtWOP7wB2WdQ8elNjQ3h56REYV0Mlfa\\n\",\n       \"uYUFze3VGJudxclLl3B4bEwhpaUOB9pWrFDmotzlUsZQ4XLpOsby83pTdzfeunAB/aOjCumskOdG\\n\",\n       \"DfXclZX58PnP/wknpi/HnSe1u3Aq7qVmz3UuwwyBU39HGQ3Ey4RD7FI6B0sNXehCAAEcwIGkYbaZ\\n\",\n       \"/d1L7ICcfjkbKxyWY9sw5nTMDI8OAXgGxkjkYoTNGj2mke2MGx2JeymBxYAgngICAhmD1s26mtAk\\n\",\n       \"A3Nv9cikzFcmuRdOzM/joaEhpb0nb78dq0pL4S4owH0HD+JDVVVKG7x6aYNEwD6xZw/6T5/Gjbt3\\n\",\n       \"4zJHUi8TaeZXqnHswgXUPP44ktHOKpfaYVEKCQaAS5cvo/fUKVyUCWNXSwtWlZXBbbfjuZMnFTJ4\\n\",\n       \"689/HkNCi7lapKMzMwqhLLFLLV+U50YN9dwx6JEqM+VStLY1e65zGekQODOl6K1GqueAkY3v7m/D\\n\",\n       
\"7ZFw3vh15krNTyPwwINudOdAbmdiQpN+ORsryFxsG8ZyfVOpkbkYn1ajxzSynfVllAQErITI8RQQ\\n\",\n       \"EMgYtPIQjUIrfDRQX49LsvKoZarD57PVFhVhdGYG5U4nJubnUepw4JLKgCgTYLmdDO0+H4bOnsWo\\n\",\n       \"HO5rB1DucsU48BYVFOCjy5bh+MQEJubnMcGF/jZUV6PY4VDyWGvcbtzi9eLY+fMYm51Fo9eLp26/\\n\",\n       \"XXLbjUTQe+oUGr1eXJybw9jMDJwFBXjlzjvhKyvTdaHVO09m8gNzJZdQwBrwLrSv1Afw/7V254Vf\\n\",\n       \"ZzruuVcvEucopp97bIXDcmwbkcj9GBzsQVPTLXC7n0yj3aUG5iCszmcVyAWIHE+heAoICGQQ6She\\n\",\n       \"LJyUEbRGrxdFdjsm5uZQW1iIp26/Pa5dXrn7TUcHAvX1OLZ1KwL19fjYsmUx25Y6MpPi/ufLl6NI\\n\",\n       \"Vh6dNhtGp6bwoepqtK1YAXdBARaAGNIJQKoJOjqKkenpGNIJSPMwJIf3ljgcOCeTy49fc42i8G7Y\\n\",\n       \"swcDZ87gt2NjWFZYiBvKy/HuxAQuzs9jPBLBhr17AeirdnrnyUx4KdvW63bj9NSUIZVUIHfBFKWQ\\n\",\n       \"txFPNO3IG7/OXHC9XSykrvYmVgbN1xxVh3smat9oaGhsG273SbS2noPb3Quh7PFYzPgKAYHkEIqn\\n\",\n       \"QN5B1J66OsDUuYbqalyIRFBXUoK3QiGFtBXZ7bjV60W506kY4oQjEdz6859jeXExyp1O1BQVKbUy\\n\",\n       \"f9zUhBt37cKc/H10+3XXoffUKcnZ1kAdTyP4QEUFGpctw7PvvRdX3qXa7cbFubkYNRSQyKmWOREg\\n\",\n       \"helqGR1Vu914v8cTMx88vG43xmXSZwdw/N57lTBbI1BK3hQUoNTpxKMGSt4w1fT01JSizl6tyudS\\n\",\n       \"+I5i7rE3N+3AV9weU1rVwEDQwnqk5pCK620+lDUxck3ljtrrR+ZrYAplL10she+pfINQPIWrrYCA\\n\",\n       \"QI6Cd1XteP55JYyTgZUnAYA1u3fjnXvuwfahIVyYncV7k5MApLDUczIB+8jPfx5D4EocDlQ4nQir\\n\",\n       \"FMZUUVtUhMMdHVj+xBNKrVAe5zn1rwBS3qmroADuggLM64QAT12+rJtvysYOQAknBqTQ3OrCQvSe\\n\",\n       \"OgWnzaaE2ZoB7+AaqK/H9qGhpKVEmGratn8/gFiV1EgpksUkKwLx4N1jzdIXa+uRmkMqrrfDGFbK\\n\",\n       \"mgQRzNuyJrmj9maqBibvbPtjAA/B+tqgySBKmggIpAMRaiuQdxBP6K4O8OGfLIyz2u1WnpbZuW3H\\n\",\n       \"ZmcRHBzEcydPKqVRKpxO3CLXvSyVQ1SZ2thQXY1ylytKOtNUO20A3r77bmwfGtIknWowMrlw5YpS\\n\",\n       \"8oX1k2FtVRWucE8UPXLZGK/bjQV5+YcrK9Hu8+HY1q1o9/nQ4fPhhTvuwJOtrQjU12Ns27aY2p9G\\n\",\n       \"oQ6x1XIn1oNWOK+R/dM3MEkPVprSsO+ofDK6sRKJCJBxz03rEEQQfvjRhjaENY6aO2VN9GHkd898\\n\",\n       \"SGymYCbc08y2vHHOhwD8CsBqACfT6axJLB3zHnEvJbAYEMRTQEAg58HIjMNmA6NpFTIRA4AyhwMP\\n\",\n       
\"r1uHCEf67HJZlA6fT8nvLLHbsaywEM986lM4KauiPHgyW66TA1pss6FtxYqYZZUuF+7o6cFT775r\\n\",\n       \"eEwFQIwj7u3XXYc3AgF0+Hxo9/lwaMsWlHB9KHO54HW7cZkIYTm8tnj6GIILP8C3XvkNwpGIMv7t\\n\",\n       \"Q0MYm57GfQcPKnmWZhxq1eTRTK6nVr6okf0XW62xjPgGg4DfD7S1ITz+VrTNbwWzy7YWEYkI0GLc\\n\",\n       \"tjNFswc9CGoc1UxZk1xGJsrxpAYzbrJmtuXV0QIAFwGMA9iA7D3SyGa5lcV4TCMgkFmIHE+BvIPI\\n\",\n       \"S8gv8GGWfM6lXshlonb+8/e/V8ia02ZDicOhqJZetxsEKaSVd7AN1NdjR1MTan/6U0TksikdPh9e\\n\",\n       \"OXcOI9PTUmOqHM+bystxXWkpek+fjutHAYCm2lq8eu4cJg3W8DSClSUlmLtyBZGFBdxWU4PlxcXY\\n\",\n       \"e/IkwnNzuLmqCmUOh1JDFAAKMYfr8Qd4cBG/wcch6a5A24oVmJqfj3OY5V1na9xuNNbUxJyDROGw\\n\",\n       \"6bgTG90/ldw8K5G+c6cMvx99/f3wA9j/r7UY8Y7Ce74Rm799AO4Zj7k0tiWIxcjMa0MbetCDRjTm\\n\",\n       \"Lbm8un739MJZeWfb1QDGMTBgRzjcCIdjGC0tIUhfL5n8kFnh0GsUfqSW/2oMV9c1lRvI5RzP0tJS\\n\",\n       \"2Lg65jMzM3jwwQfx7//+73HbihxPAQGBnMVzJ09idGYGAFDtcuG8rNYFBwcV4xkjOYDD4XCMQjhP\\n\",\n       \"hGmZXJY6HIqZTl1JCd5fUYHe06cVh9X7Dh7EHFers+/MmYR9/v3EBMpdLk3jnytAXL6pHZJ6yXI3\\n\",\n       \"U8HU5ctKHmjvqVOocbsVZfOtCxdQLtcDXVtVhT9NTeF8BPg9PogyzICRTgB47fx53CLXMOUVRqY6\\n\",\n       \"srBjFvbKzgGf18kvB6IqZqowsn8quXlWoqWlyxriWywrIo2NaPnSUxg89hCa9u2QSGe+WMNaAD3q\\n\",\n       \"0IXs3bazPjixB+3oxE78W16SzqsPTBcHpLPIvhc83N+vANiAcPg6jI5KNYkHB4HW1kx/yPg+ZBrZ\\n\",\n       \"VFcFrnZcunRJ+Xtqagq1tbW4++67LT+OUDwFBAQsg5pAbh8ailEplxUWKrUn+RxAIzUgmcstj2q3\\n\",\n       \"Gxtqa/Hbc+dwenoa5U4njm3digqXC7c+/TTOz87GucuqcXNVFd4OhXSdZY2gyuXCBQ13WQZXQUEM\\n\",\n       \"8dXcxmZTHHdtALR6U1dSgte3bsV9Bw+iZ2QEN7ou4rrq1Th0RlJCCwsK8Pt77kGFy6UojMwYyGm3\\n\",\n       \"o8ThwNT8PHpPn445B+/fvRv/dfEiFgB8yOPBYHt7QmXTyIOCqxbhsBRuu2MH4JFJTjZFEg0shnGT\\n\",\n       \"H/FaTbZtWbT6IKCP3DH4Mq6LRyMVGrB580q43TsTbp8vGAgGER5+C47i42jp+g3cHt9id0nAAuSy\\n\",\n       \"4snjsccew7e//W3813/9l+Z6oXgKCAjkBNSq2dj0tEI6PS4XXv7sZ/HQ0FBcyKVWDqCa3DCXW75c\\n\",\n       \"x/lIBIdHRxXToIn5edy4ezeG77kHK0tL8Z78BM9hs8WVMWGoKynBssJCzbBaI3DYbLipshKHz55F\\n\",\n       
\"id2OKY3wW5fNBp6Waimpc9x7rZ6udk3hmyVP4sWDP0OV629Q43ZjZfVN+ElzMx789a/x2vnzeLG9\\n\",\n       \"XXGw1VIyA/X12On3x4W9jnLn6cLcXFIiqT7PHpdLEFEGjwfoVlGcbIokGlgMl1ktrUZPx7IKauJU\\n\",\n       \"LBMnoRcZw2/Dz6FM/lwfHPwi2lqfWaSeGNfFLYtUyDGEh4cx2n8YADAYfAit6u8UgSWJdB/+WPXw\\n\",\n       \"6LHHHsO2bdtS2jcZhLmQQN6hr69vsbsgoAM1gWTvK10uvHbXXfCVlcUZzwCxZjbbh4bg37sXMEr3\\n\",\n       \"VgAAIABJREFUT737ruKEeuOuXbjv4EHsaGrCLz79aRTZJRsgO4DxSEQJSQWAuStXsGHv3phjr7/m\\n\",\n       \"GmV9md0ec2xXQQEK/vCHpGMrQNRZlsdlIrw6Pg4boEk6AeCSarndFv9AkPWKN00qtNlwXXExql0u\\n\",\n       \"FNFFjI0dxv8YqcYz7x3HuUgEvadP46GhIezbtAmnvvCFmLIpzEzozVAIQPScaJn/zMr9KwDQs2lT\\n\",\n       \"0rlIx/X2akCufUcthnGTlldppgMH1QZR6j4EB4Lw7/WjbX8bwnnmMpyNa2rCIYX6v+cFnmhaTFXG\\n\",\n       \"uOHQ4hoqZc78xyGH7HsbG9G0IzOf2Vz7nhJI3+TOCpO8kydPYmBgAPfff39K+yeDIJ4CAgKWQe2G\\n\",\n       \"yt6/e++9CWtJ8mSIkZiQTCbVOYketxu3yiVCGJ1rqK6GUyZzxXY7fv2ZzyjHri4sxJHxcYk4ulyw\\n\",\n       \"q4hnKBLBb8+di+vT8uJiLCssVN5fAXRrfi5cuaKpUlbJeZnqZc6C+K/eBUjq69GtW9G2YgWK7Hbc\\n\",\n       \"4vVi+vJlnJ+bw7H55XgCX8AFx/swTRI5rXS5dF1i2TyORyKoKymJIfVqZ9u1ck7oFQDfOXJEsz0g\\n\",\n       \"Smbnr1xBh8+XkuutQPZRVFQDt9ub1ZtzLepgpnBGKlATbHUfhsPD6B/tR89ID4KLULJnMZCslAyP\\n\",\n       \"11puwyv1wMDmtfiRe2fGerR0nFqfQ9Sj+QFLW27p6kJ9IIDNBw7A7Vk6Sq5AYqT7kNCKh4w//elP\\n\",\n       \"0dTUBJ8vM+HdIsdTQEBgUcBCaY9PTsJXUoKTly7BV1aGd8JhjEciWFtVhevLynBJzklkxPHVu+7C\\n\",\n       \"Xw8OomdkBCV2O0qcTrz82c8CADbs3YsN11yDM9PTStjn9V1dSm1PPahDcddWVeHQli0AgJt278bo\\n\",\n       \"7KyyzobYUigFAApU+zttNthtNly+cgV8hul1xcWYnJuLyTtlDrylDgc+ds01eFIm4HzeKwDcVl2J\\n\",\n       \"/7P0GXzl3Cacmp6Bw2bD7+68U7dOJ8uJ5XM59XJptbbVgt7+6breZgq5k7O2uNi716+E2tbXBxbV\\n\",\n       \"xCmTSOaM3La/DT0jPWj0NuLA5gPwXAXXgx9+9MsBzgEE0J0gwDmMMIIIYgd2ZNCEyY/sZ95mKru4\\n\",\n       \"CkBI/rsDwGKFJgvkC5LleKbr7m6FO/yNN96Ib3zjG3jggQd0t0knx1MQTwEBgZQQDALDw5KJZ1dX\\n\",\n       \"1Ecl6X4y4Tx24YKiaqrBTHQ8bjfCkQhqHn9cIXZs3epduxQnW54EqcnRk6ramg3V1Tg1NYUxjkzW\\n\",\n       
\"FBbiHPfeV1qK60tLcXxyEtcVFWFofFxZV2a3JyyjoudsawdQ6nQqJNhhs+FTdXX40YYNuGX347h4\\n\",\n       \"RVIx7/TV4emNbQoZbKiuxsrSUuz0++Fxu7Fhzx4lx1XPiAnQJoN6BJPflpkRaeVrGiWouQKjhGup\\n\",\n       \"E1TLSsXkOcKRMIKDQexo2nFVkE4gF0vJpFtQJxUS6UdmyO7tAHoBNAB4wWBfBK5m5Lq50IsvvohP\\n\",\n       \"fepTOHv2LEpKSnS3E+ZCAlcVRO2p3MDwMNAv/5YHg/F+Knrgy6sAUk7jxfl5lDudmJifR6nDgfd7\\n\",\n       \"PPjaiy/iVyMjiCwsKMVCWBitx+3GR2pqFBLEh3eysE+v243TnD04Q3VhIZ751Kfw0WeewdjsLNZW\\n\",\n       \"VeHs0aPAihUAJHLI18EcmZqK2Z+RTlZChY2hwGZDaG5Ot5zKAqCQzgqnE0e3blXCj9/nGMOrc9fB\\n\",\n       \"h/cwef4U/Hsv4w8XL6La7Ua1262QTgAol3NA2bjVynG5y6UQRjUpZQZNjIxqudMmKqui3j/XYTTs\\n\",\n       \"yGrzHfYdlcj9NxnZtZIMLyUDlnS0K4/bg+48VXuN/e7Fz04XurKgYppBugV1UrGoylR28ZNYVLvq\\n\",\n       \"NCHupQTUePzxx3HXXXclJJ3pQuR4CggIpASuXCHMeB9EOLXQXVCAgc98BoH6ehzbuhVetxuXLl9G\\n\",\n       \"76lT6PnjHzE6M4PQ3BzmiVBot+Otu+/Gvx45ouQZ+kpL4S4owH0HDyo5i10tLVhVWoq5hQUcHhuL\\n\",\n       \"O37vqVN4aGgI79xzDwL19Ti0ZQvGOCJ8aX4eFxOURgGAQrsdf37ddQAkc6L3V1QoNUWdNptiFGTX\\n\",\n       \"2f/Ply+PyXn9B+9ruA2v4NtVAzi2UI/+0VGcnpnBedlAqObxx1H4k5/gY888g3kitHP5lYwojkxN\\n\",\n       \"4fDYWEKDH4/bDY/LhY7nn0fb/v1468KFOFOgRPmaamMilvOpzhnNFbS0dKG+PpBU5cuU+U4i06Vk\\n\",\n       \"JhBWmEQwLK4Bi7VgtKMH0i1/MqgzCpdShmE84mfHAw+60Z0C6czUTBk3DtJGKiQyU9nF6Y5FQCC3\\n\",\n       \"8B//8R947LHHMnoMoXgK5B3EE7rcQFdXfLlCPfDKz83V1eg/cwYAELlyBd85ckRR1XgV0+NyKSVO\\n\",\n       \"Gqqr8cIdd8Qpcl63WyGXK372M9htNjgLCvC+8vJoKRVIamOFy4Xw3BwavV4U2e3oeP55hWTZ1qwB\\n\",\n       \"5LARp82GIocD8/PzUt1LjTqgQx0dWFlaiuDgIPpPn44JxeXLpGgF5DZ6vXhUdQ0/X/IVzLtfxRNF\\n\",\n       \"dyBy6ULcPpeJcJkIQ7IJUm1RkbKOjYEpx8kMfvj5q5XNk7xuNwZOn0bVzp24uboa7T5fjMpqpC2m\\n\",\n       \"juZSjU9GuJLBakWQfUclIvHJyO5iONFaiUxl1ZmlHWp9bAyZLemSKah/97QVcSuVvUwXv0n1CklF\\n\",\n       \"MV3kekY5CnEvJbAYEIqngMASwGIoT6xcoZHcTl758bhcCnFS35DzrrhP3n472n0+dPh8CukEYm/m\\n\",\n       \"XbJDrdNmw/Tly7g4P4/xSASvnT8PQFIjFyDVxQzPzaG2qAgHNm/GycnJGCXKKxMwO4Cbq6owkcSM\\n\",\n       
\"aMsvf4mburvROzKCC6r5ZuVQtNTO5cXFmrmRxydncCxSiV+dGoVLdry9uaoK7T4fHBqlV0ZnZhQF\\n\",\n       \"jc3Z0a1bYxyF9a6J4xMTAKSQ3ec3b0agvh5rKipwdnYWobk59J85A5fdbogwahGrfCytkilFUO3y\\n\",\n       \"zCOZGmtUreWRS+VCzCqTRmFWu1JTsePy+woAD1vYr+xBUiLD4ac0FHErlb3MFb+RnHZ3og39CJt2\\n\",\n       \"hBUqo4BAPkMQT4G8g6g9FY9cv9nnCcpOvx9v33235g05H8rpcbvx7MaNeGbjxpht+Jv5VXK46jyR\\n\",\n       \"kltpB/DyZz+LQH09PlJTE1PmpMBmiyn/wfJAJ994A4CkUL4qk1YboKl2AsDpqSklDJiZHhXb7VhW\\n\",\n       \"WKiEDm+49lrpmNx+H6mp0SxpwvpT6nCg4Gwlqv/kw7KfbsHOdRtjapDyGJueRjgSwfahIYxNT+Ov\\n\",\n       \"VbmXz508qVwTD/T1KUT0kkyqJ+bn0fqLX+DS3ByKHNHgl4bqasMlUbSIlSitEv2O0qqZypCM7KZC\\n\",\n       \"hnOpXEimaItZ2qGmYj4AXxgI4i/2+vGz/W2I5Ek9z+jvnkTpHQ7JTTVWEbeSlGWu+M0whtGPee6h\\n\",\n       \"REIvEoEMQdxLCSwGBPEUEEiAYBDw+4G2NiCcw/cnuX6zryYoiW7Ik4HflxntMNgAvHrXXfh/3nwT\\n\",\n       \"Y9PTeIc7aYUFBXixvT2mP2sqKnB4bCyGYJLqfy3wdJQplNMLCxibncV3jhzBsfPnldqh7Eu21OHA\\n\",\n       \"Dz7xCc2HBF0tLUp+62jFGZw/a0fvXjeCQeDZjRtjQmsZ+kdHERwc1H3owCuxg2fO4Kl330X/6KhS\\n\",\n       \"i5Svj1rqdGqqy8mgdR4TqXwCEjIVoVAsh+c2ehuxY5HDczNds9Mo1FSsHMCy8DDWjPbDa0H+rFUY\\n\",\n       \"GAhi714/9uuQ4e/he3I9zjcRBtDSshb19R0ZdCnOnLJYLD+WkB5KfBjAo5YfwzyWdvavgECuQJRT\\n\",\n       \"ERBIAL8/6twaCBh3bs02crWOYqYRjkRwU3c3RmdmUOly4chdd8FXVhZTUqW2qAgFNhtebG+PMfQB\\n\",\n       \"ouVB1lZV4Y1QKKYWp91mw4LGdxdzs61wOrG+tha/PnNGqcvptNlw7w034Gd/+INmfqdDru8ZuXIF\\n\",\n       \"dgAbamvx7MaN2D40hKfefRehuTmUhasx+c074P3LIaxpCqO80IF3Ll7Eu5OTMW1VOJ04cd99uO/g\\n\",\n       \"Qc0SJ5WPPqqQTB7q+qj5UhplKUGvHmq6SKVciFamXabyM5P3JYhhDKMYxehCV0ZcWMMAfrS/Dd4c\\n\",\n       \"Ky+TrPRPbD3OOnTjdeRruKlUL/SL2AGCBzuRG+PwI/v1RQWuNuR6ORWjEHU8BQQyhLY2oKdHcm49\\n\",\n       \"cMB4rUqB5EhmQmPUpMZMvcpE+wZ6e9F76hQKIJHOPRs34rO/+hUiV2ILpNx+3XXwuN3K8Woeewzj\\n\",\n       \"kQhsAG71evHuxIRufVItOGSCy74l3QUFKL1Qg4WaEMLzc8o2PCkuANB07bV49lOfkuZK46HD7b/4\\n\",\n       \"BXpPn0a5w4GJy5fj6oHqPawIDgzg5ZO/hX1hAh77AubLb0Wps9BSo6BUa8DmMsyYKuVSPVQ/4m+3\\n\",\n       
\"tZZlpy88uQqgO0NHNlNkPVskPFmt1dyrx7nUkG590Vhk4yGKQP5BEE9BPAXyENmsPRUOG3duFTCH\\n\",\n       \"ZKrPtT/9qVLvs8PnwzMbNxpum5GqIrsdJycnY8gATxBqiopwcnISM2+9he4vfxk37NqlEDxXQQH+\\n\",\n       \"7Npr4SoowMFTpxC5cgVlTide5+pvAsDJyUls2LsX1xUXK66zDOu83hjHW0AijXq1PrXQ6PXivclJ\\n\",\n       \"nJdDMvn6oYnUMjYHD69bh4eGhgyr4fx5KcUELqE86bHMIt1IArWj51eHji26ky4/b82Tk+j7+td1\\n\",\n       \"t82lCAWt221rb8GNIYggnsJTCCGEBjTgBbyQlZv1ZMTSj+yQ8GRk+Bd9v8Dj/sdzqB7nUgMrtmNN\\n\",\n       \"Tc5sPURJB6KOZ/YhiKfI8RQQSAgzzq0C2tDLZ0uWl8rX+zT7Nc1yD9XutUCsEVPPH/+I/tFRvHzu\\n\",\n       \"HB4aGoKdc5Cdu3IFvadOocTpRGNNDQBgcn4eDw0NxRxr4/79mJybw6sywWyorka1ywUAGBofh1Nu\\n\",\n       \"0wbA43Jpkk69L2Lmgvu7O+9Esd2OMrtdIZ0Omw0Pr1uXdA58ZWWm8mkVoyNM4gqkcVS5XDg9NZVW\\n\",\n       \"TiJ/HTgrpDbM1oBlUNe4zAVzLf56/vsPfzjhtunkOFsNrVxMftn2LDlmD2MYIUiGOSuxMmvkKpn7\\n\",\n       \"rtUmSXqZhMnMpEpRmmI9TgFjsDanNZrH2ogdFrsCCwjkM9Imnjab7T9tNttZm832uhUdEhBIBvGE\\n\",\n       \"LreQzChFjxQkM6G5zesFIOUkVrhcMccwas6iRW75ZbdUV0t/r1+PHU1NWCu/Z/C4XNjR1KSYGGmR\\n\",\n       \"5NHpaVycn8c8EWwAqt1uNMh997rduLm6Gu6CArx21134+LJlAKIlVz7k8WB5cTE6fL64osoN1dV4\\n\",\n       \"MxCAx+2Gr6wMH6mpwSRHxi8TxZFgK9DV0oI7fXXwuRYwDanMzNTlyzh89ix6RkbwRZNOiOxcMXOj\\n\",\n       \"npERlP7lIAKB1MPX1TUu2Tm9wXEeW2d/uChOpfz1fIccAp0LSGaZonW7zS/LFqnnb9R3YmfGjhN/\\n\",\n       \"XMjHjRJLfs5+DGtNklItM5Pq755UusSPNrQhLExzsoYudCGAQE6HRYt7KYHFgBWK56MAPm1BOwIC\\n\",\n       \"AnmIZDemespmMtXnydtvR6C+Hoe2bIlRLmsefxyPvvOO8n71rl26BFSL3Kprha4qLYW7oAAffuop\\n\",\n       \"/F5lXfyJa66R8jiLilDjdsMjK5k8nLKrbQEkZbb39GmUOp0I1NejwGbD78bHEblyBf/8yitKO2u9\\n\",\n       \"XrT7fBhsb8epL3wB5yMRxSm3wunUdJdl88jqelrlYKwm8R63G09vbMPKZR9SjsOXW0mmPqvbY9cH\\n\",\n       \"y3ttqK7Goy1NSiRBKg6v6hqXXS0tWO8+iS9f/jbCp/cuilNpLqmYPNKtp5kJx2yteqOJbtQz6Teq\\n\",\n       \"VnyDkEg3m7NGAJcsOA4bw5vye+urY2pDKl3Sjx70IGhpRVUjiD1zi0WCkzkGZwIeeIRCLZB3GBkZ\\n\",\n       \"wZYtW1BdXY1rr70WX/3qV7GwoGWVmDrSJp5ENAjI8TECAlmAqD2VW0h2Y5pqeQ3+Rp4dowCS0sfy\\n\",\n       
\"MO0AxuWSIFpKnBYZUNcKXVlaisODgxiZmsJFzgV2bVUVfvbJTwKQ8jjPRSLoPX06jly/cuedqCsp\\n\",\n       \"wTK55Em504lCux1j09MxJU2Ia6f/zBm47Pa42peVLhc2rViBUCSC+w4ejCFibB7/63Ofs7RcCV/v\\n\",\n       \"c83u3cox+fPWyKnPO5M8JecfRGzZ/d/htseuX1laGtNvvQcXiQipOizR43bjGzVHUIwZVV3DxUEu\\n\",\n       \"fUelGyqayuc3GVHUqjea6EY9XfKcCEzd3S73+SkAF+V1dgDjFh2XjWEcQB3iFdRkc5bqNbW4IZ+x\\n\",\n       \"Zy4bJFiL3KpD8wUk5NL3lEBu4G/+5m/g9Xpx5swZvPbaa+jv78ePfvQjS48hcjwFBATSQrIbUyuU\\n\",\n       \"IHaMSlUbBVxOJq/E6ZEWreWM9LHw17VVVejw+XBoy5Y4YqhFrn1lZfjT5z+P95VLJjwT8/PoPXUK\\n\",\n       \"/aOjCkEusdsxdfmyoo6q22Hje/fee3FmelohYjdxRNBszqZ6rHpzwufSjs3OKuSPHW/70BBmLl9G\\n\",\n       \"bWEhnt24Melx2Vz58B7umn0Yf+3YhdrCQmXcjLiy/rwZCmnOidkQT7UKKiAh3XqaqXx+k+ZNquqN\\n\",\n       \"apEutmwFgKPysrXInErI+syeolcCqJH/rgDwcJrt8w8AtAqhZIpcL27IZ+xjj2yQYC1yqw7NF+Ah\\n\",\n       \"6pcKRPHmm2/innvugcvlwjXXXINPf/rTePPNN5PvaALqtKKM4IEHHsD1118PAPB4PFi7dq0SW86e\\n\",\n       \"uIj34r2Z9wy50p9cfX/HD36AkUuXsLyhAV0tLXjtpZcycjzmdmpm/+DAAF4eHITbbsfz/+2/weN2\\n\",\n       \"J9ze43Jh2cmTOH/xIrBmDRq9Xhz/7W+l2pcf/CB+8IlPKNsPT0xIDqPvvIOOt99WHEZfHhzE0QsX\\n\",\n       \"gDVrEBwcxIMOBxbeeQc1N98Me0EB6J13UHD+PB79u7+L6U9XSwuCg4O4+Prr8H/ve3Hz2VVQgLdC\\n\",\n       \"IeCdd/C+8nKsamxE76lTKHv3XUxdvoypG29E76lTWB8Oo9lux7P33x833u7WVvT19WHmrbeAqioA\\n\",\n       \"wOjRo+g4d07pv5n5HQ6H0S9bxwZdLoxNT8e8Z8dbdeYMQnJu6w1nzmCb/F3N2nv58GEclc2V7t+x\\n\",\n       \"A9+87TZ0FRRgOBzGzFtv4Z9uvVXJaezr68ODDgcmC0/hrtkfYGJ0BYqvvw9v33M7goOD2HblCl57\\n\",\n       \"6aW4/tXdeisObN4cc30WOxzAO+/gxooK7Lj//qTjdbs9cDgexEsvvZYznz+t998DcMnvRzGAB/v6\\n\",\n       \"UJqF43dnuP0uvx/DAGb6+vBPAIrl9Tf29WGbtEPs9i1dCA4Gse3KNrz20msY9vsl/8++PnQA6JPb\\n\",\n       \"62ff9/L+JX19+IKB+VP35w4D4ymWj38DgA/6/dgJoKmvD6MALvr9eEg+Xqrz1QWgo68Pfw/Ak+D4\\n\",\n       \"NwLYobHe7/encf67TffXmvcPApiG3/8sAA8e7HsQ05jGs/5n4YEnI8efwQzgl8jttr5t6EMfWlq6\\n\",\n       \"MDgYxJUr23L++yGb76VlL8PvPyr/3QHgmznTv6X6PhGCA0EMh4dR7ChGV0uX4XrMVu2/ceNGdHV1\\n\",\n       
\"obm5GRcuXEBPTw++853vaG7b19eH1157DWE5RenEiROGjmFJORWbzXY9gOeIKM7KT5RTERBYPGSq\\n\",\n       \"UL0VMNs3fvu6khK8vnUr7vjlL3H47Nm4NvTqJGot59tl0OsPv+2q0lKsLC1FscOBifl5pR9Omw2f\\n\",\n       \"uOYaVLrdODczg8NjYwCkMNp37703qXIUjkRw0+7dGJ2djet/OrUi7zt4UHNOwpEIHujrgw3Ao35/\\n\",\n       \"XJusHa97Cmsq9qPc5cTE/Jdw+Oy47lwZqZOYrJalVsmR4MAAnjt5EpGFBdxWU4MnczCnMhn8yE55\\n\",\n       \"jiCsqz/Jt1UD4KSqXT+iY6oF8BsADwEoUm27XadPiUq6MNgB/DmAGQCH5WXq+WP9PIaocml0jrWK\\n\",\n       \"aWSzrIy1xTz0YOVVkZsIy7mkouyMUSxG8aSrF8nKqfj3+tE/KpfhqQ+gu9XcL0S6+1+4cAGtra14\\n\",\n       \"/fXXsbCwgAceeAD/+Z//GbedKKcicFXByFMjAQks7NE75cXp/7sJbW1SbdJcgFnTEn7717duhcft\\n\",\n       \"1nWbVYf/srDO+StX0OHzxRAdpqwlcq5lOD45CUAKy11WVKSEgh6fmFC2mSdC/+gonHY7jly4oCzf\\n\",\n       \"u3Ejtg8NJTXS8bjdePueezTDl82En6rnQC8k2uN249mNG/GMThgt229NxX4cHutFz0gPjk/8Xneu\\n\",\n       \"3r97N67pegb3ntqM0Tl7XHt6/dOaB3WI53A4jNGZGYTm5tB76tSilU5JhkTfUVaX59CDlaGbfFv/\\n\",\n       \"S6PdYm7bUUiksxsS6eS3fQJh5f1qXFEC+7qgXdKllmt3AUAvgOPyey+A04gNEFSHy5bDeIislruv\\n\",\n       \"Vr9SgRFTnWTFPKz53ctktmxuQJj6GId0TVl1lQtYAXUaQjb3JyJs3LgRgUAA09PTGB8fx4ULF/AP\\n\",\n       \"//APpvuRCGkTT5vN9r8AvAjgRpvN9iebzfbF9LslICBgBRTSsH8zDve60dMDBDN0vxEMShFxK74x\\n\",\n       \"gA0/T+5S2tXSojjKqo109LZP5FDLq2I3dXejd2QEgQMHEI5EFAOd3tOnQUAMmelqaUHz8uVoW7Ei\\n\",\n       \"zrlWnRfpKykBAFycn8fL584BkBTO64qL4SqIfp2W2O0IRSKwc08E733hhRgjnwcS3Ejq5dWZIese\\n\",\n       \"eSw3dXejaudOBA4cUNRDM06yrC/lLkbMG/Gbji/pkkZWXmY8EsGGvXtNjzERijl33YbqastcVrOJ\\n\",\n       \"bN3mpUtwg5CUzDYATm45s98qhUTwwogliV5I1KYKUi4j34c57pZjHAW4Sd5fi3RtB/A+1bFdkAio\\n\",\n       \"TT72YWgTYHaUCfnYibLX+HGqt7GqsuPiOsvyyNxjD2scaxOdDYHMwNr6pQLpoaulC4H6AA5sPmA6\\n\",\n       \"TDbd/cfHx/G73/0OX/nKV+B0OlFVVYUHHngA+/fvN92PRLAk1DbhAUSorYDAoqOtDejpARobU6+d\\n\",\n       \"mAx+P9DfD+Dv9gJrjIXQZiIUWB06G6ivR+/IiFLOo8PnwzMbNxrqi3rZpbk5qQ6lw4FLly/HtbG8\\n\",\n       \"uBiRhQWcl8mcDZLpUbHdjrfuvhsNTz+dtB+AfkitVvip2bnwuFzoPn5ccfA1Ou/hSBjBwSB2NO1I\\n\",\n       
\"+INW89hjGI9ElDH7ysqStm0UycKC9TAwEEQ4PAyHoxgtLV15bT5kNFgy1dBNrXDVlQDOQCKdHkjl\\n\",\n       \"RdjVz0JZ2fFOIxoKC0gOrsxMx4tXcR63xhyvBhINqgHwKwARALchNqQWkEinG8Ckqr+VAKoBnIMU\\n\",\n       \"jgsALM7ADomo8v1Uw4/Mhz63oQ096EEjGhe5rmPmAnr98KNfnskAAuhOaSb9yE4guoDA4iBZqO1i\\n\",\n       \"gohQV1eHr33ta/j617+OyclJfPGLX0RJSQmeeOKJmG1FqK2AgEBCdHUBgUB6pDOZSnb8EwPA3+2F\\n\",\n       \"faW2S6kWMlEjkFfF1lZVYUdTE26TzXEaqqvxKGeskKwv6mVMYf3YNdco+5VxIbpvBgL4aE2Nso4g\\n\",\n       \"fcm+1NEBX1mZoX4A+iG1TMXseP75mPOgd2605mI4HFZIZ6XLZXjePW4Pulu7kz5FZeVlrCadUh8S\\n\",\n       \"hwXrYSmVUzAaLJmqjqEOV22E5CzLlE47oqSzElHdjB3vJNfWhxDr4Po7vA/FGIND9qAuhUQYewDs\\n\",\n       \"hxSmG0JsSG0DgHa5DTXp9AA4IrdxERLhnOL6toEbA+snr6ndD4lgA8Ycc1PV46xwlrWmFmXm1C1r\\n\",\n       \"HGuzFYguICCghs1mw89//nM899xz8Hq9WL16NdxuN77//e9behxBPAXyDiLH0zw8HqC7Oz2lM1l+\\n\",\n       \"oa8xDKwZxUJRBHUlJYbq/tUUFcWFt6aLrpYWdPh8aOdKojzZ2opAfT1euOMOzT7d8YMfYGJ+HrVF\\n\",\n       \"RXjq9tt1Q3m3f9WNse+0Ajta0bbchw6fD5tXrIBXrgnK9mHlQwDgCoDvHDkCAEn7wZCIkGudB71z\\n\",\n       \"ozUXfM3QI3fdZbk5DysvYzXpTAfZLqeQye+oTN+aHx8YAPbuRdn+/VgRicAN4B3umA3y35WQSJ+6\\n\",\n       \"FuUE9/4GROtjtgGoQAXKsQyXIT0QvyRv9yFIxI+hDMAnIKmg1QB2Ikp8AWAZAB8kFbQBwLS8vBjA\\n\",\n       \"y5C0sncBPItoWDNfp5PPV2UE+3qNsfghke4Ncv/fQmoZklbkHQ4Ovqz78MSaMNf0YE3ZFpFvmE2I\\n\",\n       \"eykBNdatW4fBwUGEQiGcO3cOu3btQg33MN0KZKWcioCAQP4jmTpZXhhdb7TY/MnJSZyLRNB7+jSC\\n\",\n       \"g4OWhNp63O64EFaWT8iDD2f90+Qk3pBdaR8aGlK2Ve83PCyHE8ONVbcUYWVjGMcuXIgxu+lubcXb\\n\",\n       \"99wT40zL5otXLFkY7fahIQyHwzg+MQFfWRnKnU78uKkJDw0NaYbUJlJmSx0OhCIRhCMR6Vgac8FK\\n\",\n       \"w+iF65pxzs0XsHIKiVx28wVdsDZYUh266wuHMTI6ikkAhYODOCxf/3UAPgBJiWTOtT5VO92IEs9K\\n\",\n       \"AI8C6EA0ePJWSOqkGhcADEIiquchKZuD8ra98n6MpJZCIpf3c+0yPA/gZkQDNIMAxgDcB+B38t88\\n\",\n       \"GJllobor5DGVy+Ngob4j8v8sjzUR6TcSCp2Kt6zdLn0OtR6esBxSqe1gimGuZnoZv46R6/TAFFkB\\n\",\n       \"AYGlCpHjKSAgYAjJ8gvN5h8CyUtqWA1Gqo5PTmIiEsGEnKdZW1SE0ZmZpP3gc2Xd/8deHB6P5k/y\\n\",\n       
\"+wYHBvBWKITjExP4jRxmy6DOGx2bnjZczgXQnudwJILVu3ZhXA6zTSdfVi/vNhiUiHfnNT18AAAg\\n\",\n       \"AElEQVRxsRS6nYk8YYHsw4/YrLpL3GfSs3kzet1updDCTZDCYQGJUD4D7ZxQJ4A/QCJxrFhDKaLk\\n\",\n       \"kUcxJCXxXwE8BmBOXs7yM9cCKEFsvqcbUiQBr4Kytj4CiRz75HZZn1i+NSAppXcPBLEsPIw5RzEe\\n\",\n       \"a+nCpOqBRK081gpIYbyNkNTSh5CY9PuRPEvRyDZqJCpRlJkc0kS9TLRucRBEEMMYRjGK0YUu4Wor\\n\",\n       \"kJPI5RxPMxA5ngICArow42Cqub/sVnvfZ93Y0ajvQpqKS6le2ZNU+5oMLCR1ZGpKIZ2VLhd+09GR\\n\",\n       \"sLSHUo7lr/aj/d4IDhyIKrxrq6riSrQMh8M4fPYsRmdm8NDQUExbasWSvTdSzgXQnmeP242PyOEw\\n\",\n       \"6ebL6inbTO3NpDPyUkC++XKqQ3f5z+STbjcCkJROnnQCURKnzgmtBHAXJEWyDcCPIRFFLdIJSGZC\\n\",\n       \"HwOwG1HSCURNgV5HfGhWBPGks0DuYz8khfIw16dSxN7stAKoCw9jzWg/PjzSg8/JoavMnKgBkqIb\\n\",\n       \"AHAU0eBPH5JnSBoJhU4lXNrt9qC1tRtDQ9vjcj2tCXM108vcy8XMHedgAQGBRBDEUyDvIPISzMFM\\n\",\n       \"7UfN/TNIONQkKt2+JgMjVRUyybMDcNvtuOMHP8CluTnd/Vi/ekdH4PrfBuHxRG/QD23ZgmdUNTr/\\n\",\n       \"INf1rHA68fC6dTFtsf0+UFmJjuefxzwRfKWluMnjicsxNQOj5WkSkfvgwIBmrisgKZ2ApPbuyI17\\n\",\n       \"zZzEMID+vr6crJSoJsUsJ7MWkprnAbDd7cZYayvuk889MwziSacTwDhiS600QHK//QCkkFeWC7kG\\n\",\n       \"ElHUw4Lc9kSC9YMAEj5Ch6SAHtFYboNEehmRXQvgZwA+Luf9vudtxM+bdsDBbbMSUZJphGzyMJKl\\n\",\n       \"mEomI/vd0zLKykztykS9zJ1cTJbf+ibeBKBtbpQLObC5CHEvJbAYEMRTQGCJI13n2GwSjuOTkm+l\\n\",\n       \"FmEzAz1yxUjf0a1b4XW7pZvemRm8EQolJLtac5iINEcWpFvYi/PzcYon2+/k5KREZk+dwtT8PIbO\\n\",\n       \"ndNUSOPGJivQbW1AmLuH8rjdWFlaisNjYwnHwvdz9a5dMXOkp9QGg8DEBFBbCzz1lAiz5aG+1qys\\n\",\n       \"n2n1LbLaEXcYkjI4CimEVGsbIKpvARLN8CBaQ9MFYBWAU4iWUuHDW62IW7iCqMKqhwpVP/n+AlF3\\n\",\n       \"3EOQjIZ+2NKFN+oD+H83H8B5t0dx6m2U2/IjtXNgxDc2HW/Z7BllJepl7tR+ZErnOMZRhzpN1Tdf\\n\",\n       \"1VBBmAWWIgTxFMg7+BOUoRCIhzqc1fT+cimWD/z3AXQMZC4MFgB8JSUAtAmbGTz32yi5+uLB2HIk\\n\",\n       \"3a2t8JWVKaGpFU4nsGZNQmJuZA55ctpQXa38rdcmv/1arzfp9gyJFGi+zaKnmzQJKm9ENB6JxJDU\\n\",\n       \"RGG2hw8Do6PAQw9BgINape8CEPD7U9aCjJZLSQVqUqx+H0S0vEgVgAH5/2lI5kLV8rbjsvMt9u/H\\n\",\n       
\"85EILkAy7lFXtk01k6k0hX0uIlpKhUcIQCEkYjwA4IOQwnp73R78sLVbye30QCKmByApvJk6B6mC\\n\",\n       \"/e61tHShvj6AzZsP5L1RFo9USRZfxuV1vK6p+lpT6iX9vppFpgmzuJcSWAwIcyEBgTxHtkxf9Exn\\n\",\n       \"rIQRsyEjrqtV/7wfoetGgPe8aD+5Gc926ZshPbxuna6DrBnwpj8Akhotmd2egTc4Utdl5dvs2OiW\\n\",\n       \"HXilBwfd3bHbhCIR9J46FTPXauMiNtdvHnVg/HgRSq+fxMdudeDJjUvD7dYKWG2Qxcx4mKGPlR9n\\n\",\n       \"Fl7LzHHY+yJIZIs3CHIglkjy5jzYu1d6CgEA9fWAxd8FyxDvQJsMMf0zCVYahrn0ZvIcLDYGBoII\\n\",\n       \"h4fhcBSjpaUrZ8irH37FmTeAgGGH3DDCCCKIHdihG2psZJts9NUsMmMaJbCYEOZCgngK5CH6+vrE\\n\",\n       \"kzoOfj80CYbVyIYDrRFnXCME+Pb2CHqvGcTaN5twaJ87KRnPp2sqHJYeNuzYkfghQyKCCsTPtRah\\n\",\n       \"5+caBCXRrsPniyvTcrVC65pN53pSk0OjYKGzxyGZ9MwDuA3AckikMlHpDj/iS5PwKIAU7qpg/35g\\n\",\n       \"ZATweoHNmwELvwtskGp4HtZZH9eXNFEHycCInxc1ITdT9sQKaBUyseo7au9eP0ZHpbNdXx9Aa+vi\\n\",\n       \"O9IC2SFZVjnfZosQahFmK9178+l3b6lAEE8RaisgkPfIVg5muiG7RmDEGddIzuqTj7kRCLcaIp35\\n\",\n       \"Bo9HeriQbFwsRFqLdALGjJ3YXAOIcXex+mcz027GmUQqbs4J20Nq2XMsRHcEkloYglQDczeiYaMP\\n\",\n       \"aOzHh9d+GJIDrRpxRK+lRVI6LSadgHRt6ZFOzb4YAH+jUw5JUQWksU4AWA2JYDKwc7BYIbeZDLfO\\n\",\n       \"Xo6oOWTGmTcWRkJXjYTRZqOvgLZplJnwW5EjKpCLEIqngECew6gClktIJzw4lXqhAsmhpWiHIxHc\\n\",\n       \"1N2N0ZkZlDmdmJyfx9qqKhzassXSuc9GGPdSBVPH3oTkNFuOqENsFaTcR+bWympv8vAjqnZ2ILY0\\n\",\n       \"ihFUAbhgttMyqgGcT3HfVOCEVOrlT5CU4SkAk/K6Onk5j8UKuc3kcRPVAzUKI6pbLtbVNKJUZiuM\\n\",\n       \"NlWYUVtzfSxXI3Jd8Xz77bfx5S9/Ga+++ipqamrw8MMPo6OjI247EWorICCQV8hUeLCR/M+rCWbm\\n\",\n       \"Q4/QW50Lq4VMhnFrhS0m3SdPrqMgJGXuovy+DsCvAfwtJOVwHFH10APgPcSPX01yApCUUjuihFUP\\n\",\n       \"tZAUSLP5mJmGOj+VwQ7JuIjNlwtSWHIxgLcQzfFk14wTQAmAnchunmeq4dbZghFCs5ikR4/0Gsn1\\n\",\n       \"zPW8SjP5qrk+lqsRuUw8L1++jA984AN48MEH8bWvfQ19fX3YsmULjhw5gtWrV8dsK4inwFWFxcxL\\n\",\n       \"yJcb0lSQ7tjM7J8s/zBVpKKcBYPAyy/3YflyvyXmTGbmIdG26ZyP4MAAnjt5EudmZhTykMtKohbp\\n\",\n       \"teqz5kdU0QsAhm5/01Vgs/Ud5Ud0bJUA3kUsUWGkUm2ew4PPZ/wVogpkCRKXErHJLyvzLdOBHUAZ\\n\",\n       
\"JDJ5AMBnEBs+q0YFgF8AuBcSWefnxg/9a4aRmuP4B/hwO8rhQBekEi1mH3CYQS7l4xkhNItJetIh\\n\",\n       \"vVYbES0G2DXqhBMlKMFO7NQcSy5dU1cLcpl4vvHGG/j4xz+OyclJZdnGjRuxbt06fOtb34rZVuR4\\n\",\n       \"CghkCVp5cEsF/Nhu/f6gZikOo/snm5tk+YepIpWapcPDwNGj2uVJUsFzJ08q83DL008nzF1MNGda\\n\",\n       \"64zmQg6HwxjlSGely5VSDddsQStP0qrPWip1NdOtfZstsLExYqn+KHVBIk7vAvhXaNem5PMZRyGZ\\n\",\n       \"Es0jef1KQmZIZ8I7FhXs3N8FkPo8BuD/AnB9kvYvAvh3AJsAfAxSyPB1ADYA+I28TTmAh1X7sxy7\\n\",\n       \"ERThMBwxNVHVeZmsJusKud1M1GZdDBjJccxkHuTAQBB79/qxf38bIpH4GU2nfIpWXmW+gV2jveiF\\n\",\n       \"Cy7dsXwP3xM5oDmGdP0OrPZLuHLlCt5444202+HhSL6JgEBuIZNP6JLlHubLDalR8ON1/lV0bO4n\\n\",\n       \"m5RQ2GDQWCismblhBjnpgil7kYUF3Ob14ifNzYbCQXk1zVnRAsBvmTlTZCEaoDg1P68oZ8HBwTjl\\n\",\n       \"LNGcaa1jZEyvPfW+AOBxuXDkrrvyTp3XmxuzSmgXzIctdrW0pJVHbPV3lDpcmKlrTki1J3dCe2yM\\n\",\n       \"VAJRYgQAtwJYKbdXA4l0vmlpj1PH5weCWBYexpyjGP+zpQszCfIQ+VBgngTbECXlgERK/wySey1T\\n\",\n       \"dB2QSOXHIBFuQMptPc3tNyGvfxvR+WWkphxOTCD6QOM+eT3/gIOf8xH5fxYebRaZ+N1LJQwdiJKz\\n\",\n       \"dLdJFeHwsOLMOzgYjHPm7UJXTqmWeqG/mcqDNUq8L/kvKcpwEEGRA5oDMPobn4n916xZg2XLluHh\\n\",\n       \"hx/G3/7t3+LQoUMYGBjAJz/5SVN9SAaheAoIcBgelnIP9dQvI86uwSBMqYVmt7cS/HhLdkXHVu6U\\n\",\n       \"xmaUjAWDwMT3W1B7qh5Pbcic660aTNkLzc2h9/RpPDQ0ZMhhlFfTSv9y0FL19baaGgBAQ3U1Gqqr\\n\",\n       \"AeiT8a6WFqwqK4Pbbsd9Bw/GPKFUX2vBIHDsdxIZa6hMTO67WlrQ7vOhw+fDe/feC19ZWfoDyzL0\\n\",\n       \"PmtmldBUXGKtdqpNB0FIxJJ3pmWEphdSaKne2Jji1gbgD/Iy5urK2ntc/nscUrhtJcypjkk7zzpg\\n\",\n       \"8LttWXgYa0b78eGRHnxhsNPwodgcVAM4B0m1vQ4SOW+CZKr0UW77ywAeAsBrAuxxTTm3bBSxzrJM\\n\",\n       \"yTuGDyGAqPkPU5d5MyBGfp1cu2oFdTFh1j03V1xSkznz5ppqqedEa8ah1gyMqs3pKMMCmUG64kY6\\n\",\n       \"+zudTjz77LPYt28frr32Wnz/+9/H3Xffjbq6OtP9SAgiyuhLOoSAgHU4dOhQxtretIkIIGpsJAqF\\n\",\n       \"UmujuVlqAyAKBKzf3gqsWUNUUUHkdMaOt7NT6k9rK1FHh/E5WIwxEBFt2reP8MgjhEceobVPPkmh\\n\",\n       \"2VlT+zU+/TSFZmctvaZCs7MUOHCAQrOzMX/roXnPHmUMgQMH9LdrJkLRLKHzALXfa2yci4nO/n5q\\n\",\n       
\"3rOHNu3bZ/i8GIH63OUirLyemin2R7WDiDbJfzcSEfuIdsrbbuKW8fs6ub9dlPzHu8DANklfzdHv\\n\",\n       \"BQSM7fOVfZvokUdA//vTjVQ0GzJ8LJc8xnKd9W3yvNSq5q6Vm5/b5PVHNbZLBSEiChDROq4fqX49\\n\",\n       \"Gr2mtK4DPWhdR4nQTM0E+V8g5ZGkhk7qpFqqpUqqpE2zzbTvQAfNzsb3upM6qZmaaRNtolDKZ858\\n\",\n       \"3xIdcxNtIhCokRpj1ustzxaeO/QcBSiwKMe+WpGMExm5Z8jk/mp8/OMfpx07dsQt1xuHvDwxL0y2\\n\",\n       \"QbovQTwFzIARn02b9ElPJolnKCQRp1RJJ5F58mp2eyNzlAwVFdEbwsLCaDupEkgrCHsqCM3OUscv\\n\",\n       \"f0ntv/xl0i9angidmJiI+XLe/G//lhGSpHX8uieeoPXPPKMcyyiRWqw5ThXJCHWqxNTqH9ZMwMrv\\n\",\n       \"KEYOQEQfJokgMELDXwbN3HYBjX3NEMsGIjpBREXcstVEVJVgH82XfM2ikQghY/sUzYao80DAFOks\\n\",\n       \"0VhWqnrvlOfjBDd3nUS0jOIJK1uvnmMz6O/vpD17mmnfvk30GXks6ZBYo9dUMxknuUbG2N/ZSXua\\n\",\n       \"m2nfpk30mVBr1oiSmszxpDcR8V0McpzsmCEKaRI8veXZQibvpQS0keuc6NixYzQzM0NTU1P08MMP\\n\",\n       \"U319Pc3NzcVtlw7xFK62AjmFTJXZyCbM1tU0u70Vc1RTA4yPS7mdb70F+GRLx1TdZtOtJZpOXU+j\\n\",\n       \"SORUmo06kvwxGAL19djR1GQonzDf6rUmK5EiancaQxjAFyGZ+eyEflitVu3HMICbIIWLNgA4Bcl8\\n\",\n       \"pxGSMc+QTltVANZBqs/JtukA8AqiuYqGO5+huiCrAcwCmIY0N6yWaDGkkik3IZpfyWMVovmtE4iW\\n\",\n       \"m2EwUzszUY7k3r1+JQ+xrj6Ana3dWSmPYnUN0L1+P0blH5y6QAd2djtjciczlaeodqa9hEvoQQ8A\\n\",\n       \"oAENeAEv5IybrihbImAUuexqCwDbt2/HT37yE8zPz+PP/uzP8MMf/hD19fVx26XjaivMhQRyCsVy\\n\",\n       \"UoxVRi9WwwhBSmaco9WGGfJoxRy98gqwYQPw619HSScg9UeL3AwEgwgPD8NRXIyWri645ZV682GW\\n\",\n       \"SLJcU7ZvJh44mDXyydTxK5xOXJyfV47F8gmTwSpDJjNIp6RJMoOepWbUlQqMmLt4IOUnJgMzUSqC\\n\",\n       \"RBKZcdD75PXPQCohwnggb4YzD+Ao19YFSOSlVn7vALBvIIgr4WHAUQy0dAEq058KROtjxnQ+Q9fs\\n\",\n       \"JIA1iCeX0/LrEwC8kHJXSwFcgjRWN7dPLbffhwHUQyL3Rkuj6Bk2dSE2D7GlaQfazA8xJfOfVMy0\\n\",\n       \"EsEh/+B4GxvRsuNRtKlaZXmKUn9TM6jRIq9a+YcP4AHYYMOjeDSG3PH7/xg/xkN4KGvGQgMDQXwp\\n\",\n       \"PIENjlp8qeUpeFSfi1SJeaL9MkX2BQS++93v4rvf/W5mD5JMEk33hRyXlQVyC0ZCXQ3nuqQZkqre\\n\",\n       \"v7MzNkQ11VxGo+Gsev1PNkdWhOKqsae5mR4B6BGADnCd5seyalX0uOvXm5unbISRbnvhBfLu3Emt\\n\",\n       
\"v/hFXJjmc88/n/HwTRYiqg7zzWUYzT9NBfkQMpsqMhEWaRR8m16N9jtJCqG1ycvXUzTPkX9VkhSW\\n\",\n       \"WsOW7WkmPALpdSCQ2RsHAy87Nwb1y8uNq4Niw2v5vMYT8vp2Sh62rEanPEcgKTR5vWqf2dkQHTgQ\\n\",\n       \"0MxD5NtoJv18TL4fzYsUFjkbCtGBQIBmdb6YrchT1ApVNROGyu9fS7VZDV3ds6eZHnkE9MgjoP9x\\n\",\n       \"YFVcrmeyMFy9/NBE+1kVTixCbbOPpcKJ9MYBA6G2QvEUyClYqeqkq6Kp9x8bAy7Kj/QrK1NXG5Mp\\n\",\n       \"lkwtPHYMCIXi+59sjsyM26iixT/1buI6zY/F7Y4et7Y28RjV0FNarcTJyUmMRyLoPXUqzma81OVC\\n\",\n       \"d4YLafPKZr6ElWZSlTSq9OYjvnf0KL45MZH0c5WsxqhRxYvfjjmoNsrb96rafwLADLfvYQBc0IOC\\n\",\n       \"myGpmI2Q1E/ICh68jYCGk2i2saCxzAWgFZLyykJonQBehhRiex+AH0Nys2WKoJaabKT26zCk8iuA\\n\",\n       \"pHTOqfZxuz1xZT602mCKabTMSvRsFmMPACcaAfx9wpYyB7fHg9YEPyJWlC7RUjfNlGMpVs4YMIpR\\n\",\n       \"3IpbsRIrs6II8sr2k03uOPVXzzmWqZYv4SXMyVfPA3gAz+LZmDFpOc7mixutUGYFNJGMmab7whJh\\n\",\n       \"9wL5h3RVNPX+7H1lJdGJE6n3S0ux5FVKXi1Mpf9mxm1U0dJ76s2PhT/uiRPpmzRZjXxwQ801LGVV\\n\",\n       \"MpMw+rlKZu7STMYUUX67Dq5NdfudFP8jXUX6TrBs33YiqpwNSUqnCdMfvZfX5PZOInJTcqfdFfJc\\n\",\n       \"uIkI/Z2SSrtvU0yfjehDRkx31I6wqZgR8W1sI6Z+vkQhqiAiUIjuT8vgKF9gVN3UUgc7qZPW03py\\n\",\n       \"kUtRXtfTeksUQSPglW0t9VdvbGqzJBCogzqU9Wy/bbQtbsyLbUpkFEaU2cVwIV5MLBVOpDcOCFdb\\n\",\n       \"gaWIRKGk/Lp0yY+aIOqFuFoR2sqHrNbWSv83NBC1t5tv04wzr5VkzApH4EziaiNRmQi5FjAGs58r\\n\",\n       \"PYdfo+Uu1NutIaIKkgge/4ysmWJ/oGsonnR6KEoO11M0DJQd40OUfqmVExRbYiTRS8uxFiSF2fLr\\n\",\n       \"Kik23JUPDXbIocGJ5tFMGRKi9F1v1W00c30P0C6d3prtZXaRaRKhRWT4ZXVURyHSJoCZ6iPf3gk6\\n\",\n       \"oUsI1cdlfSyjMgKBGqhBcz9+fF7y5hVBMxKGfbWR06XCiQTxFFhSSHbDfMsth3TzB9X5k9m4+bai\\n\",\n       \"huViqYWMjG37q1nNedKbv6VGapZirsti1Va1ApmqAZotJMoZ1qy3qaOQbiOJALZSYpqhJkGSXia9\\n\",\n       \"3BRV09RKo149z+UUS+K88rIquS/JSKNe7iXkNowS1zqK5p8yglxBUk1Ovn9OksgsI8d2IknpfARU\\n\",\n       \"8XQjHZ0NJSWJzVx7i/FxiT48mKcQ3U/q3krfUc20uL1MjEyVMmHEw0veOCJjRmXMRB+NtqfejvUx\\n\",\n       \"EVnlx1dKpaYJWjLCls7vnhEyaESZtYqc5guWCicSxFNgSSHZDfNHP3pIN5RUHWaq15aVxKmuTmq/\\n\",\n       
\"2Kl4IoGxfft8QycgoOTYyWNYWVvBB576gOe4QKdSHyrMPvrCo5ZtJZQmm5L4wb/4QQnsvudr78Hj\\n\",\n       \"//A4Tq+L+EkZ3yj3645348z6GfyoIGoTjXSPYO3ymuGC+6EDHwIAfOUfvoKFlQWcWj+FKKJGDdFX\\n\",\n       \"Tr2CRJPZ5wsnX8DCyoIBta2RVly8ehHfeuBb2rk3ohFvrb+FU+un0NTQhIltE/j2A99GIp7A7L2z\\n\",\n       \"mByYRHYga7zXHms32o01xIw2fn7+51hYWcCZN84AJ4Gt2Io/KZ74drDodk7pYnvLSYy12coHhSpf\\n\",\n       \"YYxnqI1QCJ6hrhnVg5UuaF2Lc/Kqepp72ZbnWQCDMC2bvP6mtHB2wATEhuL7FyHgKV58fwJmXKPU\\n\",\n       \"CoB3AXgOouYl/zZPQ2TCLcBZV2CfnEeqCSLm1KF2qDH2H8BMBvRjmPAmYy3vhEgkJGNZ+wAssjYu\\n\",\n       \"w5qQCDCB80l4r79ZITj6TUBUkfwAdYDiNSlX1lYMyJHgse0r2/DhAx8uTYyjAaOZ52dw45dvxF/8\\n\",\n       \"/V8YMPvIC49Ytn3fX70PZy+dRXOkGdGGKIb3D2PX13dZ2l55ZwVXinc1Yo0xS2zo1K1T2NGxA8dO\\n\",\n       \"HsOp9VPob+3H3P1zlvM30hDBjV++EReumCfrFXaX5OS6KLMCCJfa9ybfC0CAIQCcv3IeR14/gt9/\\n\",\n       \"8feNGFUubrm8SBfx49UfI/tcFhPPTgAAnr7vaTx131MG/M3eO4vueDfOXzmPS3TJaMNSsuUC8PbX\\n\",\n       \"3sbvrv+u5bi4waIXQCwnMVZoJQ0VKlQ1FYJnqE2nTCaz0UMI5UMblSDJj9zOKd/WVwkTD0HAlbRs\\n\",\n       \"xjXbnoGAxH4A0hNwBCKm8hgEoLVCuON2KvtegbAWnoIZFwoIq+QxuGekbURpjU6md1reAW0hEbN5\\n\",\n       \"yaWt4zDrac5AWGSPQABgd/F5A8S8pC7AClvDELGmGZggJt1mJUR7sWJK+M2i5kAH+PyOUt2xayBp\\n\",\n       \"mWxqLE0AJMHjxDsncOzNYyUAwsGoOdJsAOfKBRMak01J7Llnj2XbvpY+HHvzGC5cuYC31t8S2V/f\\n\",\n       \"OILbv3a7AU4y2VAEEbz02y8ZcYrS9bQ5JrJfNaIRFy5fwL8++q/REhF9vLfjvbh49SJWLqxY5krK\\n\",\n       \"CX71qoDHM5fO4Kdv/xTRhijOXzlv2eb7p76P4e5hy3vSeik11DmEvpY+R0hLxBP41Z5fLXm/OdKM\\n\",\n       \"ni095htrQMNRQdCz985isG0Q8UaRgEmujTynJHA++dMnXQHRT4Zjqc1WPihU+QqvpUJthELwDBUq\\n\",\n       \"VFW14aVZApCr9VW1WnGY4Flaf0Oz7wgElL0LZu3KOZhWUAlaCQB3F98zKjAXL6ob1oBDZ0yw9epC\\n\",\n       \"Kw04sr1GGBaky42XEb0YRcPZhtI22TUzUBzrv4EJeEsQFtkCBHwegYDjIxButnblYG6GcL3lICYN\\n\",\n       \"c8MAJuHdirkBQFeWNiCeVLrZXrx60bAcqjGajcXLg6HOIQuA8My0B187aAFOAEg0JQw3Wm5xk+Cm\\n\",\n       \"AtzK2gre/dV3Y+LZCXzrgW+hv7UfP/n4T3BX111GMiIJWPNvzAMQVsPT66fxV8t/ZSQB2p7YjsK6\\n\",\n       
\"s4k/2hDFFTLH+sb5N6zWx6Le1/0+dMbFXZ7hrmFMDkzilY++gp64eeL/ePXHeOHkC8Y2KqRJQLx0\\n\",\n       \"9RJ6twh3hc6mTsQaYnh/7/vxN7/9N8b7w93D2HuPSM+ciCdwc9vNOHayFPoB88ZA4aKYa9CAWI6V\\n\",\n       \"NFSoUKG8KgTPUJtOYVzC5tJmSPpU8TnFIWc7gB8V3x8B8D2Ybp+/YPu0wwQpCVs8GY7OXbQHAlJH\\n\",\n       \"UYTFIhTSL4G9OQFugD4GU01SJNUB4OViX6cB/BK4HL2M6NUomi4X3Q2TMGEzBtFPb/F9QFhdea1M\\n\",\n       \"XmtzqPiU67EXwhUYEBl6e9lnX0ApiMl1+DaAp+Hd/XUDEwT5Op8qdQv2INUtk1u1fqPvNwx30dX1\\n\",\n       \"VcM9VLqV3rL1FguAJOIJ3Nx6M469ecyAH0AA5cS2Cfzs4z8zkupIi9ujLzyKl0+9jFhDDHcm78S2\\n\",\n       \"1m2W8Z2+KBL33HfoPvzwoz/EwNaBkgy4ACyQ2BJpMeCwPdaOP/3QnxrWTztdpssGJDei0QLMHTFR\\n\",\n       \"biWCCL6z8h384NQPEG+M42dnf4bzl8+jo6kD8YjpsrB+dd0Yz9LqEm6ZvQVb/vsWfOCpD2Db49vw\\n\",\n       \"tX/8GhZWFnDkjSP44A0fxNStU7g9ebtRJmb7X27H7cnbRUzo/d92jc2U55T8TAJxJYCoc9ctx0oa\\n\",\n       \"anMqvJYKtREKwTNUqFBV1XWRIEle77ZBWPZOQbjOzkG4n15tJGsAACAASURBVMp4QbldEsI6+gKA\\n\",\n       \"WyGyxQJWSNLFGX6z2PYCGFyeA+4olpQ5aTO+LRAlWwABmlH2WSuAu4p9PQogC0QuC/MmgYQl96cQ\\n\",\n       \"FllAuNy+DNEXNzANsbHPinYwUGwfMC25CQBPQMDWUQgXYg5eKoiVG29ZA6ALRDWIJ1Xj9pzqP3L3\\n\",\n       \"0JHuEezNlBZK5fAzsW0C2YEsXnvoNRwaP2QBFu4WKmtzHjt5DG9eeFM7zpW1Fdy+T7je8gy4XM2R\\n\",\n       \"ZqSaU/jIwEcMt9qzl84i93zOErfqpq54l+W17OsKrmD10ipW1lawfnUdq5dWceT1I+j5Uk+JGy8g\\n\",\n       \"gDXSEMGZS2ewfnUdL7z1Ak6cP2FYYiOI4Pyl89hzzx68du41Yz8JoAAMl2IJgX92z59Z6p3yGE55\\n\",\n       \"7L79wLeNrL/lKoznDBUqVK0V1vEMFSpUIJqZEW61LS0CNq8pyJR1IdV6nFKyrqR0LZX1OdXttgNY\\n\",\n       \"hrB2bocAT6l+CAB1WrdOWGEvfhW491PA4/9eLHgPBJhyNUMk+umAWftye3G7SHGsncV53Q7hIsvV\\n\",\n       \"CGFl/SVEzOhWiHqfjTBddbdAWHPVsWdgrYcpS8z8BAJKXyv+bId+XUMFIqd6nbrP3OpIrq6v4n37\\n\",\n       \"34e+lj60N7Xb1ojkNT25Yg0xI9mOTk0NTfhg6oMGmAFAAxoQa4zhgW0P4PT6abzyy1csFtcGNKCj\\n\",\n       \"qQOrF70F9EYQwVVcLYkB9aNIQwTff/D7uPfr9+LUuvmHF22I4jJdNn4C+lqfANC7pRevfuxVZJ/L\\n\",\n       \"Gms1desUTr5z0ngdb4yjJdqCnd078cSuJyx1USup0xlEHddQoUKFkvJSxzMEz1ChQgWiTEbEcgLC\\n\",\n       
\"wrlv34YOJ1hlYAUou7lJAJUxmaoSEMmEABGPuV78fQiiJIrbdd8uCLCVygJ4qvj7DIBXIJL8HAHw\\n\",\n       \"H4q/fwfA5wB8BSKZTxTC4roKAdJyPFPF/dwy4U5AWDG/DeGaG4EA18sAdkJYMxNs28MwQTwLcx1V\\n\",\n       \"Oa1rHer5mRmsLi0h2tKCe2dnyyppUitpQbJ4p2i1I4aZ/6UNe+7d6ws8bvzyjYYFMDuQxVP3PVWy\\n\",\n       \"jQSboc4hvP7O63hr7S0Mdw2jPdaOhZUFNKDBFvzijXG0xdpwev205X0Oc3aKNkSRfyCPe79+Ly6S\\n\",\n       \"l2K25Wvq1im8/NbL+OnbPy2ZT7IpicLFAhrRiGQ8iVhjTGs1HWwbxOrFVRQuFtAWbcMHej8ANABH\\n\",\n       \"Xj+Ctmgbzl0+Z+lv35j4Q+Fgz9/3KrcbDKH0msEMlrCEFrRgFrNIhHfMQoUC4A08Q1fbUJtOYVxC\\n\",\n       \"fWozxHLayfWc8hov6OY2KZPqtAB4EQLEbv4B0PoA8JBN2l+euOi/w6yJuRXAf2bbLUFYUNcB/D8A\\n\",\n       \"DkHUCh0ofiYrTFwG8BaEy6yETjmvncXXt8NMVMRzwQxBWD9PQsRn9kFA51swrb3cY091d5Xr2K78\\n\",\n       \"9BKHuUFlR+y0urSElYUFnDh8GEeVrFn19h2ljdsrZv1KPHME+74Q04KHU8mO9Svrxu928CjdQvO/\\n\",\n       \"lcfSx5YsLqLd8W7LfhElI9b61XX8cv2Xlvca0egKnYCI5dz17C7c2XknGlzrBZWvtmgbCmsF/NO5\\n\",\n       \"fwJQug7SIiuTIZ1eMyFajqs10orCesHY9tzlczjyhqg5mh3I4gM3FH3kXxWJnpojzcYxiUXEF0q5\\n\",\n       \"CYbCeM7ytIQlLGABh3EYM3WducxZ9fY9Fer6UAieoUKFCkTXdCxnUPGCL0G41P4dRFzlUwAungCO\\n\",\n       \"fRo4/Cngd/730n144qJHYMLh28XXEsh4QiP1GtQu9K0NAuTU2MvvAPgYTPiMQ9TzXAbwTHE8CxCu\\n\",\n       \"ttxjskPpWwVxuY6vABh8Cbj9Y0DqeeDJM+7rapeldoOANFq809I9MoJ7NtudFsDTnSK1DieH0J3d\\n\",\n       \"4kRsjbTi/OXzFjCVwPrQtx4yrGmJeAKJpgSyz2Xx0LcewlC3yDiVaEpgYtsEIg1W8GyLtpWAXKOP\\n\",\n       \"S5a1K2t4+fTLIBCaI83oineVwK1Ue6wdu2/d7bntCCKINkQNSOSZcrk6Yh1GPdBGNOL9ve8HYJ3b\\n\",\n       \"+SvnsXpp1RiH1Mn1k4hFYnhi7AlkB7IYvWEU+d/K45snvmkck6bGJl8ZaL3U/gzlrpbiF+oIRrCn\\n\",\n       \"1pnLQoXa5ApdbUOFClXfulaDR2Xc6LGzwOXiBWf2ItDTZI2DfBUiHlO6q94J4ASEtfAVAJ+E6b7a\\n\",\n       \"BFELlLu7AgLI/kcA34ewiHJ3WuniqsaxOrnFcm0BsAbhcvt9CKB2mq/Rfsafb7bqtivnl4E3N+iA\\n\",\n       \"tb66iqMzM7hnz566crP1HPe3uir+tvbssfxN8f0v0SUcef0IRrpHEI/EcexNEZ+Yak7he9nvYeSv\\n\",\n       \"RozYxsG2QVy4cgHr/z97bx8U13nne377HZoGGmhkhJBakkvWSyIZJBzJsRS1IyleEyd0XshcM3cs\\n\",\n       
\"u2rdU8luJffurrh3tu7O3Jqb3Joqp27NTO2uK9pkxEzingQpkWLZZhRhCSThGFu2XhxJMQ6KiRBC\\n\",\n       \"vIgWIKBpoPePp5/T55w+p885/QIN+n1UlOjz8pznvAD97d/Ldy6MKKJCraVaami9tx5XR64KdaKd\\n\",\n       \"dzsxFmENeRxmB3Y9sktS4+k0OzE5P8nWmxwIR+MRVy2SpeceWHUALftbJDWWRpBfG47NZMN2z3a8\\n\",\n       \"N/SeIDJXOVcJ12ckPIIiWxHGImMoc5QhGo1ia9lW3Bi9gcHpQaHusqmrSXI/1//reiE66vf6UZ5f\\n\",\n       \"Llkv3158/9NNzSUYIYQQQACHcZjSbAlCBNV4EgSx9FmuxaM+SIVdzSxwxqos+KrAPEB7AVxBPLLX\\n\",\n       \"AGACTJC5Yt/z5fLLxIXfNcQbENWA1Wq6AaxEvLGQH0ygtsZe84ZCcmrBBCdvkpRM+InPtwHARB3Q\\n\",\n       \"6gdcO4Bd24CjtuRRT7X6WTVB+pCSrriQi0O7xY7Dew6j8e1GtPa1Cts1rG/AxMwEWvtaYYYZZlNi\\n\",\n       \"GqxcPF0bvYbh8DBcVheK7EWYmJkQur+uyFuBwel4W+byvHJE5iJCNJCLx+rSajgsDnQNdUGJUnsp\\n\",\n       \"7s3cU1wnF6Al9hLcfP4m3A43Vr+2Gn0P+gxdKwB43P04yp3luDBwAdPz03BZXfjCyi/g/sz9BDFq\\n\",\n       \"N9sxM89qTqsKqnDhqxdwqOuQpOmQ+Jq7HW7J/eDpySPhEaEWdGp2SthX3pTI4/DgifInBAFKzYQI\\n\",\n       \"gsgmVONJLEuoLkGFQICJtDqVWsGlygIUjy7YMyVOC+X1ntVgQu+MlYmmnthy/qu7CMAFxL0++a2V\\n\",\n       \"+1zuki2Xw1NVh8FqM/2Ii04g3ugIAKKIRz3rwbrt1iPuucnnfBqsxlN8XLXU1wR/ziDgeQqY2AG0\\n\",\n       \"2aBZKqVWP7tQtikGUnoX/HeUaG5CGqBK3Z9WuqXYQ7LZ1yzUAAb3BVGRXyEZO7gvCKvJinnMSwTd\\n\",\n       \"ttJtEo9JnrI7HB6G3WzHxOwE+if7BdFZbCvGFyu/KJnH0PQQ8iysoNlldQnj90/249bELdVLoSQ6\\n\",\n       \"nWYnVuStSPD5HJ0ZxSP/8gieb3seY+Ex1TGTcSV0BW39bZienwbAajQvDl1EvjU/YdsnVzwJgF0/\\n\",\n       \"7lfasr8FRfYiYbn4mgPx+5H3hzwMh4cxEh4RrmFbfxt6xnqEfQ/vOSxs77K6MBwelliliG10SHQS\\n\",\n       \"9F6KWAxIeBLEciHWLAStrUyELheUikeXqsgW1ym6AKwD6xDLRV8AAH//yxNFxsBqOfl75howESj3\\n\",\n       \"ueT1mWoCTCz8roHVl4q347Wj1QCaY+uOAzgBlvJ7AnHPzbOi/eXCT3yOm8FSgX1gtaCSebuBJz4T\\n\",\n       \"n5PaZwpagq8JrNlRo8p6AyQVZWo1ppkYO11Ecwv+PLm40PJuVBMnbocbN751Q7KO120CgNPixIFV\\n\",\n       \"B+D3+tHxlQ6Jx6RYzH5h5RcSjllkL0LrrVbJspqyGvgqfXCYHZianRKWD04PYnpuWtdlMcMME0yY\\n\",\n       \"nJ/E4PSgYofbmegMfnHzF4IIlrPKuUry2mlOLJiWNzAanB5Ed6g7YS4wAWsK1sBhZv6cB88ehPMn\\n\",\n       
\"Tvym7zewmWz4yd6foKmrSfKc8PuxpWQLgNg1rPiC8P27X3tXck/49rwpkfgDCLVmQlT7SRDEQkGp\\n\",\n       \"tgSxXKirY6KztnaZdvgRsVTTb3laqAssQjkFgGfjNYAJKJ6OagXrQMtTSL8X2/dxJNZwqiGuq3wV\\n\",\n       \"TMCqWb1oWcHohZ8jx4N4aq88FVfPMX1IXsOptd4ASdNU00zpzWp9nYG58XRLj8ODje6NKLIp+3Dq\\n\",\n       \"rRXtHe/F7td348JXL8Bb6FUcw2axocBagGJ7MXrGevDe4HuCj6fVZMUOzw5J6mx5Xjne/9r72HVi\\n\",\n       \"l6L9iBVWzEK7u61RtKxaHGYHyvLK0D/Zn3ScbaXbUGgtTPDs5GOE59knTfLzqHRWYkPRBsXnRGx9\\n\",\n       \"AkDTBsWIVQrVfhIEkQn0pNpaF2oyBEFkmWBQsVnIsmSpercEATwGZj/SBqAitpxH/Bpjr0sAtAP4\\n\",\n       \"PuKirFe0XwDKDYHkt51HwgAmOpO9n3RrrBeT7LhBsEjnQOy83LE5K0U19RxTy8pGr9WNDsSRuYQ0\\n\",\n       \"1SDSEuZJx04XA3ML7gsicD6A/gf9Qg1i4HxAEBvcn/S3X7qK33lGE9bL8RZ6cevP46mvYsE6FhkT\\n\",\n       \"jtGwvgG9470JdY+z0Vl8OPwhAMBismAuOoeh6SEc6joksWyR7JNh0WmGOSFdWIlCWyEGJhOFsJx1\\n\",\n       \"heswM5cYXXVZXZLorfw8qsuqcXXkKgDW4faVna8I65q6mjA4OYjGtxvRebcTDyIPcOKPJ3Dx6xex\\n\",\n       \"rSyxoxePbuohlWdTdxMrgiAIEZRqSyw5qC5BBbebRf6Wu+gEMu7dsmDPlBtMICH2/7uQpqnytNWb\\n\",\n       \"YN1hxTWNSgJLK/1TTZSla0GS7LhuADcQPy+tFGAttGo4y2NfGXjsk9bAqdWYKqD0PKVaXxc4dw6+\\n\",\n       \"119H3VtvIRRW6eRqYG5ckHDrDrnY4P6k0Tujiuu1ONl7Ukjl/eT+J5IxuMApc5RJ9olEI6gqqMLT\\n\",\n       \"K5+WbM8tW7LNPOZ1bTccHtbc1mayodnXjOC+IErtpZJ1E7MTmEPceoVbrQBAHvIwMz+DVQUstXcs\\n\",\n       \"MoZDXYeE9e+df0+4rmORMcxhDpFoBDtP7ExIlRW/Pnj2oGYabSrPplbKNpFZAgjABx/qUIdQhnyj\\n\",\n       \"6L0UsRhQxJMgiKUHF9nZJADgJFj95Q7oT2/VQh6dEp9GsgigUlSLC0sPgH4wISmOQKpFwsSRUB49\\n\",\n       \"1WITmCCeRbzxUQ2Uo4zy80jnVmlFRZUiwRyDVjy6okRaUeZ0xlagOxRCxwCLsgXOn0fL/v2Gx1CC\\n\",\n       \"Rz7lqZjcn/Q/fViNE8+uxU/2HTEklMVRytryWhTYCnB4z2E0dTVhLDKGivwKeF1ejAyNCNtZYMFE\\n\",\n       \"ZAKRaAT13no0+5rhdrhRWVAJj8ODueicYCHisrowMTuRcFwxWimz2cICCy5+/aLQxddsSvxs3wIL\\n\",\n       \"iu3FsJqsmJqbwswsi4xOYxptt9sSGjhxHBaHsPzKyBVEohGYYEKXvwvf7fyukCq74ecbJNer3FGO\\n\",\n       \"ofAQAPXIdSrPZlYj+EQC3ehGR+yXdgABtCyUbxRBZBiKeBJLDp/Pt9hTIBaTdKN1Cig+U91g6aKj\\n\",\n       
\"iIuaTGAgOqW5H48GbgSrFZVHINWOlUp66gBYg6AoIAR9/gRpUx899ybT9y/ZuWSj4ZaOJkOZ/B3l\\n\",\n       \"tLLPh2s9Hhzes8fw/mqNY9QazewLBrG+oQHfevMsfll33HAK5Y5yFqWsKavBa198DS37W9DU1YSW\\n\",\n       \"nhZ03u3EwNQArt5j6aRWkxXFtmLMYQ6hmZDg28mP2Tvei+HwMEZnRuEwO7C6YDXyLdJusfJmP2ZI\\n\",\n       \"bV1K7aUosZdItrEqfOZugSWhSVDCNiYLDqw6gJ3lOxPWmWBCz/M92Fa2TdLFVz7mPOZxb+YeBsPx\\n\",\n       \"Jknm2FuxWk8t3vW/KzQT8p/yC/dt085N8Dg8cDvcOPPcGeRZ8nD5G5exrWxbQidbLjprPbV4vOxx\\n\",\n       \"4ftMCkS9UVJqXJQZhG7VqMXhdGsKYtB7KWIxoOZCBEEsLXzIWDOZpIib5Ij9LnMRo41vUmkkVI54\\n\",\n       \"kyBA2TfUB+17o2cbI8jORVJ79qMI3K+3Zbbh1gL7hobCYQTOn8fhPXvgdjgM778QjWPE1/zVPa/i\\n\",\n       \"UNch5Fvy0Tvey2o9Z8aERjsl9hI8VvQYuoZZM6GK/ApJA6EyRxk+V/45BPcFE7xDxdE7Tt3qOpzu\\n\",\n       \"Oy00K+LbiCOjJpgQRfx9yFv/01v4Hx/9D7TdbjN8rmL/Uo4JJkEEBs4FcOzmMYzOjMICiyS11hz7\\n\",\n       \"x2s7rbDCbDLj7efexj9e+0dJ9HnlT1cK16XeW49QOKR6H3kjodHwKNput6G6tBprC9fiiO8Iuz86\\n\",\n       \"mwxlA2pclBlCCCGAAA7jMNw5+4eIeNghH09iWUJ1CQ85GWwmw1F8poJg9h9+5LboBIx7WaYSdb0I\\n\",\n       \"5v95AOyaKPmGiu9NPvT5eRpBKVoqOxdJ7dlfujJaCwxA17XO5O8ot8OBlv37UxKdANAzznwei23F\\n\",\n       \"kmY1mSJwLoCWnhbhmh/qOoSW/S3oHe8VlnGvyRJ7CS594xJK81jtI4/wrchjBrEuqwsj4RG09rVi\\n\",\n       \"+6+2Y2xmDHazXdiWR+84BZYCXB65LLx22Vxoe65NYicCQCI6ASbEju4/mlBrqoXL6sJoeBSv7nkV\\n\",\n       \"DesbsKN0hzD+9y99HwB7/njEUZ5qO495mEzx92SzmMVMdAY/vPrDhOizOGX5VN8pXOy8CIBFkuWR\\n\",\n       \"Sx69Prr/KBrWN+DsV87i+DPHBcsbpcj2QkEpuZnBDTda0JJR0UnvpYjFgGo8CYJYWqTZXVQ3bjDv\\n\",\n       \"yoXEaP1gKvWGKdYowgvgtui1UtRUfG/8UK4jTef+6ahNlbzR3XcEqMvwQ5Ks5pRf2ykAp5CR55N3\\n\",\n       \"mbU6ndgXDMJhUEB7C7zoe9CH+5H7gijUQq1jqdLy7lA37kfuA2DpqqPTowiFQ4LYLLIV4dSXT+H7\\n\",\n       \"l74vRN3k9aUf/9nH2P7L7bgXvgcAqC6tRoGtQOiAazfbcX30OiwmC2wmG56qeApXR67i3sw9PJh8\\n\",\n       \"IMx7IjKBr576KsJzYdybvqd6fo8WPYrvvfM9zEf1NRUqsBQgYolgYmYCbbfbsPZf16Iiv0LoUFtT\\n\",\n       \"VoPLw5fhPuLG5OwkAPb89T3ow8DUgBBxLbIVYWvp1oTOvkopvjvKdwgR2em5afAGuLcf3E7YlpNq\\n\",\n       
\"HXE2UaslJgji4YRSbQmCIFIlVRGnhg/G0lCNbp/qPqmQjZRUHWMa8S/MOD6kdW2VhN3rPh8GYp61\\n\",\n       \"6xsasN9gUy3u21nrqcWWki1C+msyCwy19Eil5Xx8cUOfhvUNEruWdYXrsKZgTdLjisf2e/0Iz4XR\\n\",\n       \"2teq2EyoqqAKW0u2orWvFcW2YkH4AqxT7Ew00cpEjlLabqrUe+vR3t8umcfeir2YnpuW+JMC7Nyi\\n\",\n       \"iOKdu+9gaHoIDrMDDosD4bkwqsuqUeooRXBfEACw+RebMTA9AJvJJqQSA5SyShBEbkKptgRBENlE\\n\",\n       \"R6MZQxhNQ00lbTULqcoJBACMgfmUHkPmItM60lwXNbUwzWurZFHBu8x6amuxJwXPWnETGHH6azIL\\n\",\n       \"DLX0SKXlfHzfSp9kndiupdJZqXlc8dhHfEeEcXetYCmzVhNL0HJanNj9yG6hQ+6+yn1Cs6CtJVuF\\n\",\n       \"cZJRYi9JSNtNlc+6P4tmXzNsZptkecdAh+BPajOxdTazDXce3MHM3Aze/9r7aFjfAIfFgbHIGMLz\\n\",\n       \"YXQNdQnXyO1w48af3UDD+gZs92yXzJ1SVgmCWKqQ8CSWHFSXQGSalJ+pTIs4o7WaRrfX2idTHWe7\\n\",\n       \"wbrsDgA4pLGtEZRqU7PQ5ThlYte2/W/aUxLbSsKOd5n98unThtNsAakQ11tvp9axNLgvCJfVhe5Q\\n\",\n       \"Nzb8fAN6x3vj9YUHjkr2EY+h5Bkq9yWUH1M+7gdf/wBVBVW4/q3ruDN5R+iQe37gvNCsZ33RetSu\\n\",\n       \"YEa5ltg/JbaVbcOP9/5YELNizAbfFl0PXcf6f12Pje6NcJildbjcn3R7GROOkfkIuoa7JLWwvIZV\\n\",\n       \"3NmWXyN+DW4/uA18zMR3+1fa0/5QhTrNEgC9lyIWB0q1JZYc7e3t1Aac0I8OL8eUn6lUusPmMj4k\\n\",\n       \"pIp+5Qeb8Mf5AeTBhjf/8iIeWeHVHmchO7/6sDCpwwZI9XnKdppwJsZ3H3ELKaVVBVW49ee3Ujqu\\n\",\n       \"Dz50nOsAQkCFtQI39t2QzClZbas4fdhtd6Otvw21nlqc/vJpAMBjP39MkkZbZCvCWGQMFpMFc1HW\\n\",\n       \"Zdbj8OD+zH1JCmuRtQhOm1PSZVeMFVZB5Crh9/rxzt13MDg9CIDVqj6YfYCbYzcl3W1L7CW4+fxN\\n\",\n       \"NHU14cQfT2A4PIzPrfgc7jy4g9UFq1FkL5KkJO/+9W50nusENmYmzZY6zRIAvZciMo+eVFsSngRB\\n\",\n       \"LG98PublCLAOpwZr5B4qFATjZ/+bG9ceYULjC3er0PF/aQuNjAvyZLW0C2xv8rBT/s/lGA4Pw2lx\\n\",\n       \"4vq3rsNb6FVtRpSM1edWo6+nj3nDIlEA8drWn/57YOyzHqza+oQwtljIAol2IVyY1pTVYI1rDfIt\\n\",\n       \"+Wi52YL5mAGt0+LE5BxrAmSCCSX2EhTYCrDGtQbXRq8hNJMYBXRanHhixRPouNORYM8CANtKt6Hj\\n\",\n       \"K+z3zOrXVmNqdgpuhxszczMYnx0XtjPBhO2l27HCuQJjkTFJoyFx3an4eoiFtpZvph4yPR5BEARA\\n\",\n       \"wpMgCAKoqwNaWzPr5bhcURCMtf+tHB88MoxHB53oDFzXF/HMND6oRzWXWNQ53S61mSQVwdg73ovd\\n\",\n       
\"r+/Gha9egLeQPQvyCJrb7lYdlx/zyr0rgsDjEUDxdm/V1aGvtRX/8DcuXK+cEMbWE52TR1jF8xMj\\n\",\n       \"blwkbo6khAUWFNmL8OQjT6JrsAsj4RGYYRbEbL23HivyV6A71I3Ou53CWEoilSP2MK0pq0GZo0wS\\n\",\n       \"vXU73AicC+D6vevoGevBu197V7jm6bCoDbgIgli2UHMhYllCdQnLiIWozwsGNb0cl/UzZeQaK9RQ\\n\",\n       \"vvmXF/GFu1UZFZ2Ga8yS1dKm4kmaZZI9T6Hubgx0dKCvtRXnA5noSJU6Ss2MtPAWenHrz29JBJC8\\n\",\n       \"djTZuCd7T6JjoEMiOi9941KCAOK1rVXbWXMh7qEpfl7UniN5gymlhkNOixMWE6sBLbAWCEKx2FYM\\n\",\n       \"v9ePYluxZPs5zGF0ZhRX710V6k0dlnhN5/DUMF7vfR0dAx3CWE6LE+e+ck6o4xRTXVqNd/3vot5b\\n\",\n       \"D7/XjzPPnUmokwXYPeoc7MTAlQEc6ooXTBv5GZJvu9jenkRusKz/7hE5C/l4EgSxeIh9GbdfBNb8\\n\",\n       \"H0lrMVPC7V6c9FodtaULgg7vy2SprI+s8OpLrzUypZgwAViapGYUa6G8WxeAdLvUZhK9zYa0kHs1\\n\",\n       \"Jhs3PBcWvq90VuJawzVFAeRwu9Hyv7rxYLQfNpMNE7PMQ1P8vIifo+2/2o6p2SmE58LY4dmByoJK\\n\",\n       \"wTrm1T2v4onjT2BomqWxVpdWo8BagM5BluZaYC3Ag9kHggj2Fnpx4M0Dgo+mmP4H/dj4i42oLqvG\\n\",\n       \"nQd3hOWdg50SP06H2SGkIu+r3IfWvlbJOKMzozh49mBCVLhlf4skEm2zsI64RbYi9E/0o+6tOgT3\\n\",\n       \"BQ39DOnZNpXoN0EQhFEo1ZYgiMVDXJ/neA7ofJMtXw61mLlSW6qnBtKHBW3Q8zDXmIVDIZwPBLDn\\n\",\n       \"8GHNNFuxGPhfTpZj7kYvrE4nfvkfy9Ezpe3HqTXmq3texaGuQxlPueSpnPmW/ATfUC7oaspqcOa5\\n\",\n       \"M0mPK0+RlT8v4ufIYXFI6iXlvqKH9xzGS+0vIYoomn3N2Hp0K/om+1BkK8L5r57H9y99XzLfV/e8\\n\",\n       \"isd+/pguT1CA2arcenBLaLzk9/px/JnjwvXgnpwAS6t1WpyC8F3nWoc1rrjPqf+UXzjvem897BY7\\n\",\n       \"+if6he0b1jdgYmZC8WdISUDq+XmjhkMEQaQL1XgSBJHbiOvzGpdZLebq1UBfH1BUBFy9CnhFaarJ\\n\",\n       \"muVkGn6N8wH0qhwz0w16NM7vYasxSzWaJBYDT/WW44X/ziJ2/8/feXC1ZBiAMZEQOBdAS0+LII6y\\n\",\n       \"LTCUxIyRey9vEtTsa5bsIx6r8e1GIapYYCnAg7kHAJTrR4FYp9i7cSHXsr8lYb6v7HwFW45uQWQu\\n\",\n       \"Iul+y9lctBmjkVFYTVZ4XV78/v7vMRIeURTVoXAIL7a/CBNMOOI7Isy31lMLh9mhKSpX/2y1IJSv\\n\",\n       \"fvMqiu3FitfRyDUXP5eRaARtt9seyg+DCILIDFTjSSxLqC5hGSGuz9NRi5ktsvJMcaE5NgYckplZ\\n\",\n       \"8vTXVjCRlk34Ne5NcsxU/ECToXF+y73GTP48adVSqtXriVNWv/u7xwGwFN2KzdXCciMpst2hbkF0\\n\",\n       
\"lthLDO2bivejUsptsnsvPwb39txauhWhcAiNbzeq1nIG9wXh9/pR761HsZ3VZybzveTeouLaUfl8\\n\",\n       \"vYVePOF5QlF0AsDGko248xd38GjRo+gc7MRIeARVBVWKkVy3w40Tz5zA8WeOY9eJXegc6ITdbMdP\\n\",\n       \"9v4ERXapz6mSj6r7U/b/WGQMh7oOqV5HI9dc/FwWWAsUvVuJ5Qu9lyIWAxKeBEHkBrwWc6lHOjlF\\n\",\n       \"7M0kamsBeS1fsmY52WIhG/QsxvnlMFq1lGrCVCxA6v/5KNY3NODLp0/jF88kNqExMg+1hj7JSKUR\\n\",\n       \"kZKAMnIMLph6x3s1j+12uHH8meM48cwJrCtaBwCYjc7i+5e+rzo3j8MjqR1Vmq9SYyKA1Yke8R2R\\n\",\n       \"bFPrqcVH3/xI81wHJgcwNjuGmfkZfPnfvpxwXCWhqLce18g1F4/Z7GvW/DAolQ8fCIIgxFCqLUEs\\n\",\n       \"dXKliQ0hJRRi9+bw4cR7YtQCJBP3eCFtR5aYxUm2EOlmbQAAIABJREFU0UovTaXmNZX03XRSnNOp\\n\",\n       \"yw0ggG50wwknggjCrfJQqB3D6LH1bq9nu1A4hJfaX8KD2Qf46N5H2Fq6FU6rU5L2K76uTV1NmlYy\\n\",\n       \"79x9B5FoROKFqoXeYxjB6PNAdaAEQSSDajwJ4mEgV5rY5DpLWaDTPRbIJR/MTJGKIFxoEaCnTlBN\\n\",\n       \"BPngQ0ese1UDGtCi0r1K7RjJmhUZGUc+32w0V0p2X8Tr8ix5+P23fp+SL+diCcCHuSkYQRDaUI0n\\n\",\n       \"sSyhugQZMXsGxZROIk53NxNvra1MhIrI+WdqOd9jg16uRn0wzwUCeN3nw1t1dQiHFiY90OjzlErN\\n\",\n       \"a6asUPSip05QLQ3WGcu9rkUtDifJvVY7hpGU22TjyOd7qOtQwnbidNKDZw9mpK5Vad2df3/HkOgU\\n\",\n       \"P1MLfe85RlOnidwm5//uEcsS8vEkiKVOMKie0vmwI45y2pgfnm7xZrTzbDYjqnrvsWwOgSZ37gd5\\n\",\n       \"9fiMijDqg8mFKgCcDwSwfwGjxdn0RpR7Zy4kSj6TyURQEEEEEMBhHFZNs00WyebHuzZ6TfNYWuit\\n\",\n       \"twWAckc5hsKsk7Auv1kkvy+ZumeLde+5oCcIgkgVSrUlCCKz5FJKqzhF1e9n4lOvQPfBmLdlLqTD\\n\",\n       \"yubgG2xZ9ClpYtDKxYgPJgC8VVeHvtZWeGpr8eXTpxc0NTeXa+LSEcXi8+I+k+mKoNd9PuEDgvUN\\n\",\n       \"DZIPCMTHqyqo0tXARw0j9bZuuxtt/Zm3GMnmBxIEQRCLhZ5UW4p4EgSRWXhKK8BE6GKqHXGK6pEj\\n\",\n       \"xkSw0c6suZAOK5uDs1FlSgEAJwGEAewAcBRAExbOW1RMEIYaETncbkNRy33BoCGhyslELelipUTq\\n\",\n       \"QRzZ0xvN48i7oaoJJyMCK1kkW3w8w42NzgVwsvckwnNh7PDswNEDR5OeqziaCCArkcV0rj1BEMRS\\n\",\n       \"hiKexJKjvb0dPp9vsadBqFFXx+ooa2sXxZNTQrLOsiIUnymjnVl1HiuryOagOiUf4tFcgEV0B2Es\\n\",\n       \"wrvMSRaB04I/T+l0kVUjU9GydBrF6D0vIxHfZJHsdK6jeA565pEJ+D3qGe+Bt8CLInsRyveVo9fR\\n\",\n       
\"CyeciLwVQVtfGzwODza6N6LIVqR5L+nvHpFp6JkiMg1FPAmCWHhyqeaUe4OmtC+Mia90jpUpZHNQ\\n\",\n       \"nZLYmrAaTFzHoqPkvckwWkuqRDZq4lKNlsktTdKpE9R7XkoRX7VIcrJIdjrXUezDWV1avSCRZ/E9\\n\",\n       \"6nvQBwAoP1+Oof2sXrR+Xz0azjeg/0E/Ou92AqDIJ0EQDwcU8SQIgnjYCAF4CUAUQDOYyF6i3pvZ\\n\",\n       \"slcxWkuaCfScS6qRSr2WJplEKVKZTiQ51Tm81P4SoogmTQtOF3EkOhKNoO12G2xmGyLzERTbilH9\\n\",\n       \"zWp0FHagFrU4jdNww032JARBLCvIx5MgiMyRS02DlhpGO+QSulloIZNN9JxLqmmndahDK1olwidb\\n\",\n       \"JEsHXsxmT3pINZVZqeHSn8b+hK7hLgCAf70ftv02SWffbKRiEwRBLBbk40ksS8h7apFI4oO5JAkE\\n\",\n       \"WBfYujq0v/FGWvtDyx+SW4a0gonQ5YxBX850yURKbDLEvo56vRyN/o7ix/jbfdcwmZ/8XJJ5VCbz\\n\",\n       \"LA0iiAY0ZF10Asm9PfcFg1jf0JCTohPQ50uqhLzhUsv+FpTmlQrLjuw5gha0SK69Ef9W+rtHZBp6\\n\",\n       \"pojFgIQnQRD6yIWurZlELKR/+MP09tcS4kY75C4gRvSzLhZYZGdbyKQqRFI5xgePDOP4X1WlfC7c\\n\",\n       \"s7SvtRXnZc+kG+4E4ZMtknXz5bWciyk6AwjABx/qUIeQ7NORVDsRB/cF0bC+QZIyq7SMIAjiYYaE\\n\",\n       \"J7HkoC5si0QwyMwgF7tTbaYQCWnfiRNp7a8pxINgnWJ1+FQuNBkPZC+wyM62kElFiBj9HSU+xq+b\\n\",\n       \"Pkr5XLId/dVLrguubnSjAx1oRSsCsk9HUp17U1cTBicH0fh2oxAZNxLR1IL+7hGZhp4pYjGgGk+C\\n\",\n       \"IB5O0rU/yaB9SiYa5KRagptx9xuNJkXZagaUjHSOuRB1eJk6hpGGSJmyZVmKZKPe1Yh1DEEQxHKE\\n\",\n       \"ajyJZQnVJRC6SZZH6nazL78f7Tt3Gs8z5V4lGRBOqimSBvJgU41cZjyQzW1oVMZKlg6aLdI5ZipR\\n\",\n       \"K6O/ozIVGTMS/V2IFOJcJRv1rqmm6OqF/u4RmYaeKWIxIOFJEMTyYNM5wH0ZKH8f6L3PlmmpMb7+\\n\",\n       \"vfcWtWGSaoqkATWZagluBvWzLvSmg6bS1CfdYz5MZFso5TJ6612NPIO5nl5MEASRC1CqLUEQywP3\\n\",\n       \"ZeB+Nfu+6h3g1ue180i11i+QhYxqiqSBPNgMZv5mFb3poJlMXVwMT85ch6w8tKH0WYIgCP2QjydB\\n\",\n       \"EA8P5e8Dw08AzmvA9SrAW6ytxrTW+3ws4giwfFS3e2G9TJeKmswCdW/VobWvFbWeWooiEYsCPYME\\n\",\n       \"QRD6oRpPYllCdQmEIhcfY5HOr/7fwMF6Fi0EkueRxvJM2y9fVl7P81c9HqC/Hzh2bGG9TBc6DzaH\\n\",\n       \"WMqpi/Q7anmQS88gPVNEpqFnilgMUhaeJpOpwWQyXTOZTHMmk2l7JidFEMRDRKaMJL3FLL32zg1t\\n\",\n       \"caj3mLzzzsaNQGcnMDrKli8XL9McJp2GO+cCAbzu8+GtujqEM2JOSjyMZNIOhSAIgkgj1dZkMm0C\\n\",\n       
\"MA/gRwD+92g0+qHKdpRqSxBEHHndpN8fT2etqABu3EgvwqenLlKeQtuiUbvFx6ypAdasAZqbtee4\\n\",\n       \"QPWhRBxum3Lv6lXMxD4kWN/QgP1a93eRWAxrmUUnAKAbzO81iJzztSUIgiBSQ0+qrTXVwaPR6O/5\\n\",\n       \"QQiCWMIstEDinVr5sXk6KwAMDLBl6QiFYFC7LtJoC1i1MZNdO/l55qj4WYqoCTZum8LJ9S624vme\\n\",\n       \"DwQUBfKy89vsBsBvUQDMeocgCIJ4KKAaT2LJQXUJKqSaspqqAWSy4x88qD4XuegLBlmkU7wsHfTU\\n\",\n       \"RQaDwLp1gMMBNDai/Y03Uhsz2bVL1d+E0ETNl5PbppRWV8Pr9+PLp08vShRR7+8oPTYvy85vk3/O\\n\",\n       \"VAuAfix0Q3/3iExDzxSxGCSNeJpMptMAKhRW/Z/RaPSk3oO8+OKLWLt2LQDA7XajuroaPp8PQPzB\\n\",\n       \"p9f0Wu/ry5cv59R8cuZ1dzfaY9ETXyzCpmv/qSn4AKC2Fu0vvAC0t6d2/JMn0T4wwF6XlQEjI2gH\\n\",\n       \"AL8fvth27e3twHe+A5/LBRw+LDT18X3pS0BrK9rn54ELF+B77rnsX681a4TrhclJ4LnnjI83NcVe\\n\",\n       \"x8Rle3s78MMfwjcxAdhsaH/kEWB6Gr7GRiAYjJ9vLjwvS/g1F2wDjz2GtS+8AI71O9/B+OQkDp44\\n\",\n       \"AYfbveDz+4fnnsNEXx8sDgeimzbhnStXYHE48B9PnVKcj575Tl2fAkqZ3+YL8y+gPdWfz1x5/R3A\\n\",\n       \"5/IBh4H2yzkwH3pNrx/S15fp7xG9TvP15cuXEYoFFz799FPoIW07FZPJdBZU40kQi48Bz0cJmbLs\\n\",\n       \"KC2NN99ZsQIYHIzPpakpeTqvz2es5jITpHq9xChdO/G5eDzA8DD7fqHO6yEgV305X/f5hNRZR3k5\\n\",\n       \"wkNDANKrMyW/TYIgCGIpsJB2KlToSRCLDe/AalREpWrZEQgAK1cywXngALBtG1teXQ289550Llrp\\n\",\n       \"vIuRlprq9RLDr11TU/xa/O53bF1tLbsW/HtKt9WNVldah9uN/S0tOSU6AWnqbNnjjwvfp1NnSp1V\\n\",\n       \"CYIgiOVCOl1tvwbgHwF4ANwHcCkajT6rsB1FPImM0i5KNSMWGHEznbExZjHC8fsBm005cqoVXcxU\\n\",\n       \"1DVF0nqmeOOg+/fjy6qqgI8+iq9fpPNaqogjh7nclVYOj8TOv/AC9u7enZNRWWJpQn/3iExDzxSR\\n\",\n       \"abLd1fY4gOOp7k8QxBJg0ybg5k0gGgWeegqYnY2LzQpR+XdNDXDkiLq40uo0yyOHegkEgJMngXAY\\n\",\n       \"2LEDOHpUeVx511mtlF+dh5YM0d0tFZ01NcCZM/Gxl4hoyiUSmu4EkLYFx0JYl/BIbHt7u/D9YqN1\\n\",\n       \"3g+lpQtBEASxKKRd46l5AIp4EsTSxe2WiiqbDYhEWArppk3Ab34DWK0stdbrTdxfrNLKy4He3tRF\\n\",\n       \"X7Joq1r9pLx2dHBQu5ZUw14moRx1IhbNdbuBz38eeO21hYtuZkCQsXGk53yuqWlRxUhCDacPcQuO\\n\",\n       \"BqRkwZHrUdRkAvBcIIDekycxFw7Ds2MHDhw9qvueaJ13rl8XgiAIYmmQ1YgnQRDLlE2bmJ+mzQaY\\n\",\n       
\"RWXgBQXAgwfs+7VrgTt3gHv32Otdu4AbN9TtRuRs3qy8vRrydFZ5tFVWQye8ib92DfsAOHiNZWMj\\n\",\n       \"20Ct5lJ+HAX/zcRyVI1objbJlCeizHM0NDio6S+pRTqRNIfbDbvbjVN+P9vfFoQD7rQsOPRYlywm\\n\",\n       \"Sp6e/Breu3oVM7HGXf1tbYbuidZ5q61fdv6hBEEQxKKTqeZCBLFg8JbORJYYGGDCa3iY+VxWVgKr\\n\",\n       \"V7PIJhBPq+UKjO+TrGmQ0jGMeIaK01lLSoB33wXq61ldqTitNYbg8zg8jPMOB3DsGNtGpaGQ8EzJ\\n\",\n       \"j6PwRj1hiFSbM2WCTHkiytR0JkSamtdmSvu7AizSeRopR3X3BYNY39CwIN6een5HyRsoKV1zfg24\\n\",\n       \"6ASYR6mRe6J13mrrl51/6BKH/u4RmYaeKWIxoIgnQTwsJEshFa/jAtPpZALP65Xml65ZExdxmzcz\\n\",\n       \"EcnDf/JjBIPAI48AMzNsX7MZmJ833uWVC6OSEvb1+OMsInvxoqLgE97EA9gTDgOHDsXFoVKk6Ic/\\n\",\n       \"BP7rfwWuXYsf59IlxbGNlqNK0EjjNUwQLNJ5GMqCTG8qrqwGd18wmHZjnHTFq2T/I4dTTyOOsRg1\\n\",\n       \"l8mivvIIp9I159egrKYGzpUrYbbZ4GtuNhw9TnbeauudVnbsWk8tDu/JvQgxQRAEsQSJRqNZ/WKH\\n\",\n       \"IIglzssvR6N790ajzz4bjY6OLvz+mWDv3miUtQmKRhsapOsqKuLrDhyIRquqotFPP42vf/ZZtq62\\n\",\n       \"Vjr/0VE21ugoO8fi4sRjfPppNFpZGY3W1bHv+fZKbNzIxvB4pMfnx3nhhWjUYokfo6pKcZjp0dHo\\n\",\n       \"6YqK6LTSnJWOI742VVXZu0fJ7kFWjheN/zZegMOJmR4djZ5uaIhOp3gt090/F/j13r3RHwHRHwHR\\n\",\n       \"07L7/eazz0Z/BER/WVureo4LfQ06Xn45+uu9e6NvPvtsdODup9GG0w3R0emle/0JgiCIhSOm+ZLq\\n\",\n       \"QmouRBB6SOgoYzByku7+mSCZpUlpKcDT+errgRMnpPvqsTsRn2NJCeuGqxWZkUcA166Np7pWVQG3\\n\",\n       \"bqkfw2IBenpYRFYpksjnnP9ToNchjfqJmyZVVQFbt8avzZYt6k2Q9EQsk22jZSuTaeoAtIKl4hpN\\n\",\n       \"U00zOqsW7XuYuqi+VVeHvtZWeGprE1JZExoo5QDUaIggCIJIFT3NhajGk1hyLEpdQmJHmYXd3wiB\\n\",\n       \"ABNodXVMfHFU6hsBMEsSgHWrLS5O3F9cx6g2fk9P/PstW/TN6+RJJiRbW4GXXmLpswC7XhcuJO4j\\n\",\n       \"PkZBQfx73hyntTVeO8rn3OtgDXhawVJPgYTjtH/nO/Fr09ubOFay48hJtk2ye5ANgki9NlLPuSZB\\n\",\n       \"rcYz3drPdJDXVWYL/jtKrX7yXCCAU34/ZiYmMjrPdM9PnN5syc9fkGtF6IPq8YhMQ88UsRhQjSdB\\n\",\n       \"6EHLhzLb+2uhZjUi7sra1MTsRBobEyNYR4/GooP5wK9/HY8GirvP8mNcvRqPjm7YADzxBBvP6wX6\\n\",\n       \"+tjyzk7gsceY0ObHOnmS1YMCwIsvsqhqOByfwzvvAG+/DXz5y8Du3axT7vAw8w7l5yI+xtgY2+7W\\n\",\n       
\"reTCXqkBz8WLwO7dOLd7N0IHD+L61BSePHWKiYOkY+n4ACHZNmkViKaAG6l3uk3zwxK1Gs/F7C6r\\n\",\n       \"1Dk2E8ijuBy1+kmteYjX/3zDBpQ/8QTyy8sx3tsriRTLj5vu+YnrTE/5/Vm5VgRBEMTDC6XaEsRy\\n\",\n       \"QJyCWlERb/gjjqzJ033d7sRUSvE2HL6t2GZETkMD8NvfxkWh1RoXjGVlTNDydQBw4ABLq5WP6fEw\\n\",\n       \"ISv36eSpu1u3xsfJywN+/3smRg8eZJG5xx9nIlosqkNgkc5LTwBDn8SbEnm9yqmFydKKldbJU1L5\\n\",\n       \"ssWwV8kketKrk6CWSprNFFOtNN5kqa/pYDRFVWsefL3V5cJsLCrq8HgQHh6WHEN+3JmJiYydX7au\\n\",\n       \"FUEQBLE8oVRbglguqKW3csTRqXffVU7nFG+Tn89EnzyVkm/DO9vyaNfJk3GBaLFIj81tR7ze+DIu\\n\",\n       \"OgFgZEQqOgHWPVZsXQKwjrfDw2w+4pRaiwVob2fnIj7GF78Yf93bCwwNAW1tiWmhPOo39EncJmb3\\n\",\n       \"bnaaPPrmcmHP6Ci7tsnsUZTWyVNSNexVFirdMxm65pCmTQyP9skFi9ryTKCVxptfXg6zw4GRK1cQ\\n\",\n       \"XLcObxw4IJx/OvdFLYrLx3xt9Wqc2L1bGFuvxckju3YJ43qqqxOOwY/r8Hgw0d+P+UgEXr8/I0JR\\n\",\n       \"j/1MLjzLBEEQxNKBhCex5Hgo6xK06u3EtYNer7JgEG/T2yv1q8zPB1auZFHLFSuADz4A1q1jPp6N\\n\",\n       \"jUw8cubm4t8XF8dtR3p79Z1Lfj5Lq+Uit6aGRUXn59lru52dAxe/c3PA/v1MdOfns2W1tcBrr8XH\\n\",\n       \"FAvm06dZRFX+Rnh6Ov691wtwAVBeDtfEBBxKolUPBlNSF7PGMZfmkA200njHe3sxHw4jGokgEgqh\\n\",\n       \"v61NOP+k1yQAwAfWrElBX8lFGv8dxcd80NeHwc5OYWwuvruamhSFG1+//+hRYVzx91wI8uMWb9yI\\n\",\n       \"wc5O9Le1wWKzZUTU6/mAYLk+R7nIQ/l3j8gq9EwRiwEJT4JYCmiJGz3RKfE2fDy7HZicBP7lX1h6\\n\",\n       \"bijE6kD/6q+YX2dnJxO78nR5sxlwudjy2lomOsXRSCXsdpYGvHIlS4k9c4bNpayMiU++jcMBdHXF\\n\",\n       \"o6ZmM4tmtrYykVtRARw7Jj3XYJCl6c7OsnNQEpGxiBEAdl5cANTWwp7s2mphsGHQYtY4pjqHpRLZ\\n\",\n       \"0orS8fPm2AoLsfOVVyTrFK9JNxIbVIlQE2l8TFtxseLYWsJNPK7SMRxuN+xuN0LXrwNgfp/yuWfz\\n\",\n       \"3uXCs0wQBEEsHajGkyAyRZr2E0nHtNlYF9fmZmlt4cmTrEHPjh3x2kaleciXfe97wC9+wYSaOILJ\\n\",\n       \"sdlYNHN4mAm6SESaFnvlCvCFL0iXlZTEmw6p0dDAmgpFItLlK1YATz7Jjieu7RRjscTnqmRJw61K\\n\",\n       \"ACYyz55VtjKRr1erZczG/URu2GgYncNSsNlIVt/J11lsNpjtdtz97W8xE3tW8yoq8Gc3bgCA+jVJ\\n\",\n       \"0ZaGX+edr7yCrkOHEsbORB2l+N546+vxjMgK6VwggJ6WFkRiP6da986o1U0uPMsEQRBEbqCnxpOE\\n\",\n       
\"J0Fkimx4dSYbU94IaN06FqUUd53l+8jHOX8+3mE2GZWVbFwuBk0m4PJlYNs21txH3JVWiVWrWAQ1\\n\",\n       \"EmFRzTNngPJyaQ0op6EBmJhg4pA3JyoqYo2GLBb2/eiougdmKMQsWaJRJtB37WLnyJsJFRdL12u9\\n\",\n       \"Uc4F79UcIZcbzXCxdO/qVUFMygWWWhMejqaY5g2qDsO4LU0SMiHckt0b8XnDYkHl00/jwNGjqsda\\n\",\n       \"Ch8wEARBELkJNRciliU5W5eQDa9OPdYeABN1lZVMKHHR6fEA/f0s0sd9K/k4WoKR87nPMcHHx/v8\\n\",\n       \"54H//J+ZyBOnrirhcrHj8OjmypVM7D31FHv9mc+wSKd4XsEgE7o7drCU2vPnmVCdm2PnVVWlntLq\\n\",\n       \"dgPHj7OIqtvNRKe4mZB8PSA0bWrfuTOxJnQhvVdzHD2NZhYLnq7KRadS2qfcnzIyNgaT3a66fQK8\\n\",\n       \"QZXo1M8FAvjpypVoLi3Fm6ImRYD+31GZaLSU7N5IUovn5iQ1rUrkYursUknzzjY5+3ePWLLQM0Us\\n\",\n       \"BiQ8CSJTGKz1S3vMYBCorwf8fhZJ5AKxpoYt37gxXqPpcknH2bFD/ZguV3wcHnHMz2ciko+3eTNQ\\n\",\n       \"WJh87hMT0qZEnBMn2FwuXAA+/pgJzT/9CVi/nonRkRFW4zkwwIRvzEICRUVsH73Xlottp5PtpwRv\\n\",\n       \"2vTee4k1odm4n6mi1dU4y8d0AFnrRJsuXCyVVlerdnQVi7Px3l7c7exEdGYGBVVVKYvpUHc3pgYG\\n\",\n       \"MDM6itttbfjl9u2CQJqJWaAsBGLxKhdp+4JBOMrLhW3tJSVJBWUufsBADYwIgiCWD5RqSxALQZbq\\n\",\n       \"BYVxe3pYWuuVK6xxT2kpizS2tbGI3ZYtrAGQ08kiiP/2b6xhj/xnc/VqlhobDrOmPkC826wSNhtb\\n\",\n       \"z2svS0uBe/fY99XVzHtzbIy9NplYumttLYvO8vnIPTs54ppOjpGU195eFum8cIE1PlK6B7zuUy19\\n\",\n       \"N1dYjLTfJZJqLE9XPRcIoPfkScyFw/Ds2JGQWpqptGE+DsCa+licTgzGnuNkaao8NXispweFXi9s\\n\",\n       \"RUXYFwyiq6nJUH2lEkqpsnye9pISfOPSJRRqNQHLMXI5zZsgCIKIQzWeBJErZPpNPBdR4npOJXh9\\n\",\n       \"43e/Gz++xxOPIsrhtZVGsNuBmRkm2jZuZOI3P599TUzEhaeY8nImfAGWUiuvNzWZWPMicQ1raSmL\\n\",\n       \"tBYVGRPvSteK3wO1xkK5xmII5KUiymVI6hqRKAL11lUqNdoRL9vz6qt453vfA0wm+I4cwduNjboE\\n\",\n       \"knx+fI6Tg4OK9ZVGGv6IRVrJli0Y7+2FxWaDtaAAvubmJSnatO6X0YZIBEEQRHYg4UksS9rb2+Hz\\n\",\n       \"+RZ7GsaimJl+Ey9vLKQUHeRUVbH/+/qYaKupie9rVGh+5jMsPVa8j9vN0m7v31cWmXK2bWPCt7+f\\n\",\n       \"RUDPnQP+5m+AN99kUVqLBfjwQ9Yo6cUX2TKbTdrx1oh4l18rhXvQ/txz8E1MZD4inSkWQyAvsihP\\n\",\n       \"VVCII5Gl1dX4ytmzhsRIsmZFyZrvcIFkyc/HO1euoKayUlGwRiMR3G5rg62oCJGxMUGoqglXpWOq\\n\",\n       
\"XRuxSDvl9z8UjYIeloZIOfN3j1g20DNFZBo9wtO6UJMhiGUHrw8E2Bv0ZG94gkFjb+LVRC1ffu0a\\n\",\n       \"e22N/Qi7XEwoOJ1MqPGGPvn5LNV00yb2emyMRSi9XuDWLePRzU8+SdwnFFKuOzSZElN5AVbTWVjI\\n\",\n       \"hOf9+8AzzwA3brDvt2wBtm5lDYzKy+Pn1NwMNDay/fU0+xFfP17rWV0NrF0LHDmSeA/6+liklu+b\\n\",\n       \"7TevRlOvuQfrQqJxzGxHmnhtH8BsTvQKin3BINpj3YvVonzJ5i4+LgDYioo0vT7F6b2IRnEvFELf\\n\",\n       \"lSv45fbtcK1ZIxGx3vp6rG9okFisdDU1ITI2hvyKChw4dkwiVkdjP+viY6pdG17vmWyuelhKUcRc\\n\",\n       \"bIhEEARBqBCNRrP6xQ5BEMuQZ5+NRoFotLY2Gh0dTVz/8svR6N69bDul9cnYu5eNDUSjDQ3xsUpK\\n\",\n       \"4svlX3Z7/PuKimi0sjIa/fRTNp7JFF9nNkejFov6OJn4qqqKRgsLE5d7PNHoU09Fow6HdHlDQ+J5\\n\",\n       \"l5dL14+Oxv/Xus7icerrlfczci8zjfz+LkF+vXdv9EdA9EdA9HQWzuHNZ5+N/giI/rK2Njpt8J50\\n\",\n       \"vPxy9F8qKqJHSkqiJ/fvT9g/2dz5cX9kNidsMz06Gj3d0JB0PP71E5cr+k/FxZJlaueiNB/xsp9V\\n\",\n       \"VUn203Nt1Oaqh2zf20ySznkSBEEQmSOm+ZLqQop4EkSqaEUx5RFRt1t/lKunh/1vsbBmP/39yg14\\n\",\n       \"OG43iwTyZkI8lXTTJlY/KY48JmsWlAm4X+eGDcD4OFu2cyezU/ntbxPPw2pl5+nzxSO5tbVs/vx8\\n\",\n       \"+DUWR72Uajd5tFJshaLHs9NoRDpdloFVS7YjTfuCwZRrMXnHWQCChYg4Ypps7vuCQfx8wwaEY3XQ\\n\",\n       \"JosF06OjCIdCQkRRfswx/vPKMZsxK+psW1ZTA9eaNaoRWKX5iJd9+fRpSfOhPa++KkRL1a6NOPpp\\n\",\n       \"lGSR3VQjofJ9M9FMCUjvPAmCIIgFRkuZpvsFingSGebs2bOLOwG9kUweReNRPnG0ct06NkZVFVsn\\n\",\n       \"H+upp5SjmTU1iVFEkyka3bEjGt2/n0X3xOPYbOlFLs1m9XVWq/Jyj4fN4dNPpVHYhgb1iK04ullV\\n\",\n       \"xfZXi3ByxFFDebRydJRdY6Vrq0DCM5VOtFoPWue2BMiFSFPHyy9Looo8OidELYHo0erqhDlqzV3Y\\n\",\n       \"32JRjPyJI4L/XFER/dXOncLr/89uj/5vJpPw+qeVlZrXSGk+8mXZikJ2vPxy9Nd790bffPZZ4Vh6\\n\",\n       \"IrtKc1AaS23fpRRVzQUW/e8eseygZ4rINNAR8SQfT4IwCo9ktrYmej+K4T6Q3E+TR+W4nUhHB6st\\n\",\n       \"5N6Y4rG4JydnZoY1CTpzJl7XyYlGgQ8+YNHBq1eZr+fq1cxKhNd6pkqy6KhafejwMIt2fvvbrDMt\\n\",\n       \"EI/sKfmHFhay2k6+3Ucfsagj//L7lf0redSwupptI24Y1NTEbF2Urq0WPGqq5x6nCo/eLnLtnNz3\\n\",\n       \"0Qhi/8jFItTdjcj9+wCkHpX7gkF4/X546+sTmgudCwRwyu9P6rXJ/SxXPf00gMTI37gowjk9MICJ\\n\",\n       
\"3l5hDmU1NZIMg+INGzTPw+F2w+5245TfL9wL+fXNVoRZySdT7d5qzSGZ56Z8X6rNJAiCeAjRUqbp\\n\",\n       \"foEinsRyQ289II+aiaOGxcUsMrl/P3vNay1raqLRF16IR9k+/ZRFL/Py4tvt3cu2KS6W7iv+2rEj\\n\",\n       \"vQhnJr7E8yovj0b9/vh1euGFaLSsLDFa6vcrRwArKuLb1NdL1yWLGoqjoSUlxiKL6ey7hFCLFi4l\\n\",\n       \"eGTySElJdIzXM2sgjrQ1ezyK0TmOOPInjuZCKrKxAAAgAElEQVSJI5z82Hw7cbRVHBXVinpqRQCz\\n\",\n       \"FWE2UkurN1KsNJZ831yImBMEQRCZAzoinmSnQhBG0WszIbfxEOP3s26z3E+zvp6NK/f6fOQRVuPJ\\n\",\n       \"sdniUUwlKxTuqcntVZLZrKSLy8W65g4Nsbns3s3sUTo6pNFJsfWJ0jXZto1FLXt7E+tfS0vjkWK/\\n\",\n       \"Hzh+XN/cuH1NSQlw6RLr4quXdPZdQohtKOwlJXj+5s2Uo5eZ6IJqdIxzgQBGr1/HWE8P/O++i0KN\\n\",\n       \"+yTuEhseHobV5RLqMPMrKvCtGzeSHlN8vfIrKjA1MAB7SQmqnnkGk3fuwOp0Ir+8HPd7ejD0/vuI\\n\",\n       \"zsxI9tey+hB7cCbzAc0Ecj9SrXrRhRqLIAiCWLrosVOhVFtiydHe3r64E9CbJslTQS0W6fKaGmbp\\n\",\n       \"8cQT7DVvgCNuOJOfz0TavXvSfbnoNJmUU13n59k6LjazJTpNJpY2+/77TFgODbH02lAIMIt+rRQU\\n\",\n       \"MOHIhai8CQvA7FV6e5VTW/Pz2f+FhcDf/33ivoEAu07yVFye5nzzpi7hKDxTgQCznKmoWNaiE4in\\n\",\n       \"PtpLSvCNS5fSEgrJUiy14Om+N48dSxgjWSpwqLsbdzs7MTUwgK5Dh3TPMTw8jIKqKjyya5ewbmpg\\n\",\n       \"QHPe/HpZXS64N26Et74ez9+8ick7d4R5/+Ff/xWDnZ34/cwMnJWVMDscAKSWLGrw9F7eSCjVFGgl\\n\",\n       \"5NdRfL9+vmEDpvmHOykgHqvr0KFFT79eriz63z1i2UHPFLEYkPAkiGzBxc+HH7JIJGfNGiZa+Xpe\\n\",\n       \"myh+zYWYWh2lWhbB7Kz6OjW4f6URolE2v8ceA155Jd6xt6ODiWWnkwnuBw9Y7WkgEBd1YqxW4B/+\\n\",\n       \"Qb3LKxfO4+PA976XOA+1etvYhwPnjL6B7+5mdaEDA4AOMbOU4ULn+Zs3NaOFWqRTr8eFC/e5FI/R\\n\",\n       \"e/KkIGraX3oprWOKt//mRx9h/9GjyK+oUBxDSfDuCwbh8HgwOzGBOx0dGHjnHbzd2Agz94kFEI19\\n\",\n       \"MFT82GNouHYN5bW1AIDI2JimOBbXVeoR8mqiXGm5fDx+LficeeffVKBaTYIgCEIvJDyJJYfP51vs\\n\",\n       \"KeiDR0a3bQP27WPLeHRTvJ5HB8Sv+RvDmhqWbqqF1crScI1gsQD/7t8BzzxjbD8xMzMsxTYQYFYp\\n\",\n       \"AItObt0aF40lJSxy2dKSKDxnZ5nAk4tw8fgck0L2hoYtid5InPBMLQObE72k2hxITZTxaJ3R8bhw\\n\",\n       \"Ka2uhtfvl4wxFw7HN5R9oKJ1TPk85ds73G5868YNxTHUGu5Y8/KEbcJDQ+hrbYXN5YJJ9LPnrKzE\\n\",\n       
\"f+rqgsPtxnis6ZCtqAiwWJJ+CMLn+7PVqzES+zCotLpaVcypPdtKy+XicF8wKIhureMoXUsx6dx7\\n\",\n       \"Qj9L5u8esWSgZ4pYDEh4EsRCoCasxIjTRouLgfJyoKwM2L6drd+4Mb5tYaF03y99KS6a9GC1srTX\\n\",\n       \"O3dYdM8oXAQ6nUzw/tM/xUXi+DiL2ALxOsneXiDWfVQ4PpDo0Sm/NrwLLk9PlqNxXQ1HY/Tcp4cc\\n\",\n       \"I11QAe3OuVy4fOXsWTxz/LhkDE/s/pdWV8NeXCwZR0s4y+fZ1dSEycFBvN3YKMzDSPfWc4EAwvIP\\n\",\n       \"TiwWRCYmUPH5zwNgfp0N164J4/FIcmRsDLfb2pJ+CMLnO9nXh0hsfoVr10rm9otNm3DE7cY/l5eD\\n\",\n       \"fwwjf7aV5q4mutU6/2pdSzG50N2YIAiCWBpQcyFiydHe3r60P6kLBFhKp7yRzsqVcRFYVgaMjLDv\\n\",\n       \"6+uZTUplJYscfvIJS2HljYkA1njnvfeA/n59c9i5k0VSOzqAyUlj83e7gS9+EXjjDeDJJ5mwFL8h\\n\",\n       \"t1qZcB4bAz7/eeDECaCxkaXDihsiVVXFrVPU0NvISYVwKITzgYBms5OMPFNq93UByUSTHy2MNsER\\n\",\n       \"N+XRarAjR3z/Tvn9muOIz38+lkLK56lnf6Xj8vMTn4ccl9eLyIMHsNjtcK1bh99HIti5aRN6T57E\\n\",\n       \"zOgoSqurke/x4LZoPvLrxq+rragIkbExxe2OuN2CfYyzshIVTz2FPYcPo6upKasNfhay8RGhzJL/\\n\",\n       \"u0fkHPRMEZlGT3Mha7KVBEGkiZIY4XWJAItmrlnD1k9Px/fjzT6sVuBv/xb47nfj+4g72wIsZfbU\\n\",\n       \"KUCclqhFV1d8fD3w7rglJezr+PF4nac8xXd2Ni6aOzqAF19k5x4IsPNqa2ORTj1RRR4JTREejVkQ\\n\",\n       \"xPeVe4EuMDwyBQDnA4G0z11JyO4LBnWJeY44AmfJz8frPp9uYSy+f3qi1+Lz9/r9WN/QIMxTvr/8\\n\",\n       \"3MTibV8wmHDtrCoZBVaXCzP372MmFqWc7O/HEIBP3ntP2KZw7Vr4jhwRrpv8WOLruvOVVwTh2NXU\\n\",\n       \"hN6TJzEXDqN8xw6YYo3KLE4n6t95R4iois/7V088IdSWZgqj95wgCIIglKCIJ0Gkgt7oltg+pKGB\\n\",\n       \"bXfsGBNgSnYoAEujjUYBbnBvtwNFRdIIZyawWlkHWpntAwAgL48dl0cyy8pYLWdzM7B2rTRt9sAB\\n\",\n       \"do5K4wCsM+zatSy1d9Uqlnb77rvGO8bmQEQxKdyGRa+ozgKZjkylE63kGI1aJhvnl9u3w1lZCXtR\\n\",\n       \"kaJwVTp/LjDvf/IJ5sNhWBwOFK5bh9Hr14WGRo7yckRnZ4XXJquV+Y2Zzfj6xYso27YN4VAIv9i8\\n\",\n       \"GdOxrAT3Zz+LqTt3EOYfsqhhsaDy6adRUFmJ8d5eWJ1ODH/wAaZjNkkurxeutWsFOxa+zb5gUHK9\\n\",\n       \"AGB1XR3uXb2Kr164IGkIxc9bbBGT6v0iCIIgiFTQE/Ek4UkQRuHRLC6+xD6VcrgY8XhY1HB4WJ/F\\n\",\n       \"icnExCf/P1uYzcyCRUxBAUuhjUSknpvl5Uz4bdgQF8EWC/PztFji1i9btjB7laGh+DqxUAWSXzM1\\n\",\n       
\"5CI+195Up5kWrIaR9NlwKIRfxcSZTUWcGUGPkFWbn9Jy8XglW7ZIRJaeeWoJYaUU2Z84nZibmlId\\n\",\n       \"UyzWABYRHf7wQ+HnwpKXhw1/8RcIdXfDbLPBYrfDbLPB19yMtxsb0dfaitLqajy4dStBhJosFkRj\\n\",\n       \"P++O8nKEh4bYcptN6IDrKCsT9hNv4ygvR2RsDPOxTIbSbdvwlY4OxevEz3t6dFSSXkzRSYIgCGKh\\n\",\n       \"IB9PYlmyKN5TgQCrwSwtlYpOi0XqUynfh3tCPvoocPductHpcsW/56JzxYr4cZKxebOx8+FjykWn\\n\",\n       \"2Ry3QCkokK4bGmLndPEiqze129n53L/PRGdFBas17exkArW8nEVt+bUqLmb/p9oxNosdZzPyTOn1\\n\",\n       \"dzWIWmMXpaY9DrcbBWvW4G5np2YnX62mP4C+jqVGuquKxxvv7ZWs1zMftXRbvu/bjY0J6aBzski8\\n\",\n       \"ragIAJjHpsWC2ViKO++qW7Jli+TnwrNjB+5dv46Bjg70t7XBVlCAZ06cENJjC9etg62gQNJ1+Q9O\\n\",\n       \"J1bX1WHl008L8y17/HHh+0dizYhKq6vhqalJ2MbqciE8NCSITgAoXLdOiOAq3ff9LS0oiHmH3v/D\\n\",\n       \"H3C6oUFYr+fayklln6XCUjw38lwkMg09U8RiQMKTIPTQ3c0a/4yOSkXn3Fzcp1JpH+4JKar3SsBu\\n\",\n       \"Z+mqzz0nXR6NxlNxuWC125VF6I0bxs7HYmHpu0C8ztPlkgrRU6ek+xQVMcHn9QK3bycK0127WO2n\\n\",\n       \"282+HA62vLCQRX6vXEmvY+xD2nFWTWypCT69nXz1WM3o6Viqdryxnh4ATOjtfOWVhPHk++mZj5oQ\\n\",\n       \"1uq6CgDmvDysrqvDN69exfqGBiY85+aA2VlY8vKErrrcAoVzt7MTY598IpkrFy7Htm7F1MgI7nZ2\\n\",\n       \"Ijw8DJPdjrwVK+D7yU9QsGoVZqemkFdRgQPHjuHA0aMoXLcOD27dwr0rV5C3YgXcmzYhIttmfUMD\\n\",\n       \"VuzaJZmDvaQEvuZmnAsE8HFzs6q36XhvL+bDYURCIfS3taFl82aEQyHdtkJiUtlnqbCcz40gCCKX\\n\",\n       \"IeFJLDkWpQubuLHI1q2s02wsmpEQgeO2KNeuxZclS5edmWEi7s4d6fKaGvYlJhLRl6qrhLgJ0Nxc\\n\",\n       \"vIGRy8UaBonSDYVtOEVFTDz6/ez/UChudQIwr1K53QmvQRsfZ+fn9TLBuHkzixwfOBBPT+U2Msmi\\n\",\n       \"D1mKKAK57WemJrbUBJ9eX0XDVjMa8yvZsgUtmzejubQUbxw4gIJVqwAwK5GuQ4c0z0vPfMTCVRy1\\n\",\n       \"ssSebaV9v/7BByioqsKf/f73ePbNN1Ho9WJ/SwssdjsAlg5b+vjjgs2KUhOhsscfl8yVC5cHfX2Y\\n\",\n       \"FXV0js7MYHpwEJaf/xyh7m4MdnZiemAAXYcOCdHoqbt3MRMKYXpwELfffluyDbd8MQFwxLIdzHY7\\n\",\n       \"ih97DG83NmL0+nUhRZcdUPp7RT73qYEBnA8EUrrXmXo+cpGleG65/DuKWJrQM0UsBlTjSRB6CIWA\\n\",\n       \"l15ib/Sam5n4UavpE9cickwmZi3yySdArKmIhKoqZoUijjiaTCwyqdSAKBWS1Ys6HKwrrrzhkdnM\\n\",\n       
\"ROLFiyyiye1e/H4mNF98kY175EiiIFRqtiO/NuXl7HhcBOdi7WaOotcqJp39jdSXyu1G8isqMDUw\\n\",\n       \"oLvekM/Hkp+vWvspns/M2BgGOzsBAN76eljs9qT7yml7/nl8+qtfwZKfL1iU8C645wMB/OnUKUFU\\n\",\n       \"euvrkb9iBULd3Rjv6cHM+LiwjxJevx/DFy/iQV8fYLFg5e7d+NKJE0JNKMDSaedmZhCdmYGtuBjf\\n\",\n       \"vHIFZw8elHTlvd3WJqk/zVuxQmhK5P7sZ1F//rzkHMOhENpfegl333kH04ODcHg8KN64Edb8fNhc\\n\",\n       \"LviOHNH9rKT7fOUyy/ncCIIgFgtqLkQsS3Lee4oLLpcrMYro97N1cusTLvz0UlwMrF4N/O536c/X\\n\",\n       \"bGbNhPr6WK3m+HjiNuvWAX/8Y/y11cpE5NGj6hFIJWHOrw3AoqAPHsS3V+sGuwDdbHP+mVokjHS1\\n\",\n       \"5Y2DAFa7+MyJEyn5SSY7pnhdXkUFpmXCVu98zwUC6GlpkYhHuUB+48AB9Le1wVJQAEdxMaYGBxHV\\n\",\n       \"8SFQ6bZtKPrBDzD5d38nCGM+nz2HD6P9xRcxcOFCQiOi9Q0NmJmYkDRzCq5dK5nj6ro6mO12IBqF\\n\",\n       \"r7k5QZT3njyJ8L17sOTnwxzr3jscs06iLrdLG/odRWQaeqaITEM+ngRhlEyIHLlnJaekhKWsyhv6\\n\",\n       \"FBayqKER4enzAR98YHxuQLyTLY+Azs+zWtSyMmXRWVLCmgmJhefsLDu3DRuAJ55QvlZKHpzBYDxy\\n\",\n       \"zJsYVVczuxWlqCmQE/6YDytGUhL3BYOs5lAkivQKHXEk05wkbVY8nwPHjqHr0CFY8vNxyu9nkcjY\\n\",\n       \"Bz1WlwufvvEGjhQXw2y34+sXL+LSD34giZZyQWcrLkbl008L0UA+F4vNBntpKWbu3cOk+AOSGLai\\n\",\n       \"IkTGxmCy2WArKGAdb/PyMDkwgIvPP49NsVReACirqREE+DMnTkhEOgDAYkF4dBRf+PGPcfLpp2F2\\n\",\n       \"OPB2YyPMIp9dW3Exwvfvw15UhPzycpzy+yWR3d6TJzEVy0iYjzVUMpnNqteSIAiCIBYaingShBgt\\n\",\n       \"yw49wpRvY7MBV6+y1NqSEuDSJVbfqGTtoGRrokZNDXDmDGtGJIqoaKJlzbJiBZurON3W7QYuXwa+\\n\",\n       \"/e14pJIjjlimkiKr134kB/wxH1YynY6rtq0kkrliBR558smEiB4AnD14EH9qbUXZ44/jwNGjCVFO\\n\",\n       \"NQqqqjA9MiLYqpgdDsyHwzBZrfj6Bx+gbNs2YdufrlwpCDie2spFJsAEoMlqRelnPwtHSQmmh4Zw\\n\",\n       \"N/ZzaLJaJVFRS34+rE4nympqhPny69qyeTOmBgYklivrGxowOTgonI/JaoXJbMbKvXsRmZwUIqgO\\n\",\n       \"jwfhmKURj2Q2l5YKPqQAizq7N23C7bY2eKqrsV90fIIgCILINGSnQhBG0bLs4NG31lblTrYAcPIk\\n\",\n       \"26atjY2zbh3ztvz2t1kjoXT53e9YZ13elZajZbnCRSfvNiumpoZ13m1oAP7wB9Y8ye9nUU6vl4ls\\n\",\n       \"bu1iszEhnZfHXqdqb6K3WZBWN1u9zYkIw+jpamukQ6hWJ14ArDmP3a54zL7f/AbhoSH0t7Wh/cUX\\n\",\n       
\"AQDjse65ptjzb5P/XAB40Ncn8fLkNiXR2Vlc/Ou/lmwb5n60iDcVWl1XB0dZGfJWrEDxpk2YGRnB\\n\",\n       \"QEcHbre14e4777CNzWaJ6LQVFqJ02zaER0bQ39YmOV+H241v3biB9Q0NEsuVPYcPS65FdHYW8zMz\\n\",\n       \"cLjdsMfOy1NbC091tWQfACiPNfsy2Wywud3I93iYt+jwMG7Ljp8NlqJFCUEQBLGwkPAklhwZ8Z5S\\n\",\n       \"Eyvl5eyLv+mVb6fHS1KcMtvWBty6xSKTra0sssnhvp1ud/JIpJxIBHjsMfZ/XR378npZ859kIq6m\\n\",\n       \"hglKufCsq2Odeg8eZDWpxcXAiRNxaxQ+x48/ZgLwc59jacQjI6wpUrajkFoCVc+HARqQn1nqGEnH\\n\",\n       \"TdaJ15KfD4BFFGGx4KcrV6K5tBQ/W7UKJ3bvxlt1dYLnJgDBN7Mg1j05OjeHgqoqwS7FKvbFTcLA\\n\",\n       \"hQv42erV+PXu3Xht9WohTRUAzDYb9re0YPLOHYRHRjA9OIiJmN2Kp7YWc+Fw/GdXlLHwMYDI+Dju\\n\",\n       \"XbkiLLv1m9/gzQMHEA6F8ItNm/AvK1bgj8ePY25qCt76eqG+dF8wiLyKivg1KyhAeHQUe159Veis\\n\",\n       \"uz9muyKuSXVWVsJRXg6r04lIKITbbW2CpY3V5UJ4dFRTEKYjHsmiJLvQ7ygi09AzRSwGJDyJhxM1\\n\",\n       \"sdLbCwwNxb055dvp8ZLkNiMFBSzCyaMgJSWsO2xVFbBzZ7zx0NSUMeFpNrNx29rYMVatYqK4s5P9\\n\",\n       \"z43sucjlgvPMGSYoRbVnOH8eePNNdt5a4o0LQB5Rqq0FPvoo+6mvWhFNPR8GEFlDr31Lsm0dbjfK\\n\",\n       \"tm8HAETu38fttjZMDQxgZnQUk/39GOzsRF9rq2CBUlZTA3tREV73+TD4298K40zevYtjjz+O6dFR\\n\",\n       \"WJQi+3LMZoRHRjDZ14e7nZ2sC62IW7/5DX62apUkqln86KOCUNT6uXV/5jPC9/y82l98EZMDA4hG\\n\",\n       \"IojOzuJuZ6ckwutwu7H6S1+CKXausw8e4HZbG7oOHYLd7cYpvx9vNzYK6c9cLPaePInw0JBQu2p1\\n\",\n       \"ueDeuBGOsjLMTkzoinqmIx6XokUJQRAEsbBQjSfxcKJWNyhf3thovL6Q1y52djKLFIClwX74IfO7\\n\",\n       \"lB/nD38wliJaUsIijnxOmzfHbU4A4K232PHffBP4/vcTayh7e4Hdu4ELF4Af/ICJ62vXgOFhfZ1l\\n\",\n       \"X30VOHRIuzbTKGr1s1p1t3prRYkFwUjNpxjecMdTWwuH243b4sZcgGA5wjvl8hpJNRxlZZidnITZ\\n\",\n       \"bsfc1JQkkmkpLMScqJGWvDZTC0teHsp27EDo+nVWV2mxKPrrmmw2qe8mgILVqzF5545wPJPVCmtB\\n\",\n       \"AeYjEZisVljsdhQ9+iiGYt1oAcBeUoLnb97EKb8/oWuvvMbV5nLBZLMJ9Z6Ttgo4IwMoqanFV88k\\n\",\n       \"/3BAfA/0fJAghixKCIIgHm7IToUg1FASK4EAcP060NMDvPsuE2Xi17GUPmFbuUjiy3p62La8FpPj\\n\",\n       \"9bLurVy8Pf00a85z7x6Liqq8eVWkspKJRbebpc6Ka0erqlh6rx7Eoq6qSj2CqSX+MoHaMai5kCp6\\n\",\n       
\"RF6qQjDVfY1YsIgRCxcAaH/pJfS+8YaQMbC6rg7PvvmmsL28mY4SjrKyuG2JqGkW9xi1FBRgTqFj\\n\",\n       \"rRgtUWp1OmEpKEB4aEjzHIF4Y6PkB403AjPZbPBs3w5HaSnmIxH0t7UJwrCrqQk3jx1LuA7cambE\\n\",\n       \"VYsfThzDN3AIE/WHETyhz0uVxCNBEARhFGouRCxLMlKXoFQ32N3NopQDAyyiJ38tRilVly/r62P7\\n\",\n       \"yQ3mx8fj+xw6BKxZw7rI8je1zzyj3PhHic99Lj73WG2cgOjNuSbiNNVkabPZSmcVp9HGbDQSjqEn\\n\",\n       \"vTlNlmqti57UyHTSJ1PZVy3lUqt+UNzIyOF245njx7Eq5jFXVlODL772mmR7TyylvWjzZpjz82Er\\n\",\n       \"KYGJP0MxTOKGW7GfM09tLfzvvov1DQ2oePJJzfOp2L0bXr9f+VwLC1GydWuC6PwYTLACLOXVHEub\\n\",\n       \"NVmtmJdFQBWJRuGsrIS1oADRSARDXV1CqvH6hgaUbNmCU36/ougsq6nB12Ln9+6u07gHLy7VtuD/\\n\",\n       \"bU7ebfh1nw9vNzYK9jTUJCi3WKq/o4jchZ4pYjEgH09i+WLUk1Murhobpa/FY167xl57PCyddvXq\\n\",\n       \"eM2mEtXVbFve6VY8Pl//2mvA+vXafp5mM0u1fewxJlwnJ6Xr//qvWS2nEvJrwj1HtdJU9W5nBO7J\\n\",\n       \"yQV6fT0TmPJjKPmBLkNSiS7qqatLp/YulX33BYOKUTMuYgHgfCCgGAnl12CspweFXi+s+fnw1tcr\\n\",\n       \"WqscOHoUv9q+HfmlpZjo6UGEC7BYtNBRVobCdesQDoUQnZlBWU0NXGvWwNfcjK6mJtw5fx6zU1OK\\n\",\n       \"6bBiBi5cQP6KFYp2RLPj47h39ariftHZWZjtdsyKfi8ki5yu9Plw97e/xXw4LEQ0g2vXSra529UF\\n\",\n       \"a34+ImNjgr0LwDrorti1C9aCAsGP1O5242C/HzsrnPifjwXhFnmUyp8x8b0RW7Wo3SeCIAiCSAWK\\n\",\n       \"eBJLDl8sCqIJtzVpbQVi1gtJkUfWlCJtPKo5PMxSUzduZNHNvj7lOk2Xi0Xztm1jTYQqKoBjx+Lj\\n\",\n       \"+/1McJ09y5bxxkQAi2TyRkDV1ay2E2DdMzs6WG3o/fusu60YU5IsB3mkVq+lid7tdHIuEMDrLS14\\n\",\n       \"6/59hAF2bs3NGT2GEczB4KJbQaQSXdTT2MdI8x+1fXmETc/14aJHvr0eEcuvAW/2c7utTdVaxeF2\\n\",\n       \"o2DNGtzt7JTUbyIaRUFVFdybNmGoqwvRmRlYnU5YnU7MxbYLdXdjamAAkfv3EY1EYLLZhAilnOjs\\n\",\n       \"LCb7+1UbCc2Ju+Da7XCUl2MjWKSTd9a1FRezDVQsjyxOJ8xWK/7s44+Fe9XV1CQRl6b8fMzEGiEJ\\n\",\n       \"y2PjRcbHhSixWEwOd3bAM9CKq4cCkuurZmejZtVCLD66/+4RhE7omSIWAxKexPJFHDlMJsY4bjf7\\n\",\n       \"8vuZWOTL+Gu5ncpHH8U7vPL/a2rYtqWl7PXEBOs829ubmLbrdjPLkhUr4sf48Y/Z93Y7E6ozM6ye\\n\",\n       \"8+xZZpciPh/xG+Gysvjxi4rUu8DmSAfYUHc3Bu7fRx+A8zYbcOnSotZu5oIVRCrRRT0+m3q20dp3\\n\",\n       
\"vLfX0PVRup77gkEUrlsHi8OBtxsbFQUsvwbci9Ph8aC/owPNpaV4I2ZFwjkXCMQ72ooEXWl1Nb75\\n\",\n       \"0UfCGJ7aWpTV1OBurDPuzyorMXL5srC9yWoVOsymhKgue35mBiaTSegkO3PvHqxOJ9ybNiG/ogJ5\\n\",\n       \"/OdUPsTkJG63teGd//AfsL+lBV1NTehpaZH8jDsKCyXXxl5SgpW7d7Pr5nJhWmaXovQ8cXsVW1ER\\n\",\n       \"dr7yirCt+MMJJasWgiAIgsgEJDyJJYfuugRe+1hYCPz93+vbRx4R1LJT4a+vXmX/nznDaiy5wCsu\\n\",\n       \"Bl55Jf7a5WJpsuI33eJjHDrExGhBQXw9r+cMBuMNjsSi0+Vix+XHF1ujbN8uFaELUC+pB+FNcUkJ\\n\",\n       \"9nzyibRxUwYw6kd4fWqKzWcRozzpRCazjVFRrLS9OEKpJmD5NeBenCazGdODg5gZHUW/zA4k1N0d\\n\",\n       \"j3TOzcFst2N1XR2+cvas4IfpWrcOZocDoY8/Fvabm5oSLEdgMglzNYJadBQApgcHcZU3NAIwGw5j\\n\",\n       \"qKsLUwMDmB4cTD5G7Oc61N0dnyOYsCzZvBkurxfuzZuRX1GBb1y6hC+dOAFHeTlmJyYSro/S81QY\\n\",\n       \"+zmLjI2hS1S3Lq+vTfWDCiJ7UD0ekWnomSIWAxKexPJl3Tr2//h4YnMgNd5/n/1vtQL/5b9II4T/\\n\",\n       \"f3v3HhzVeeZ5/PdKfdENqYUkLMsYGceY4AQb2fgaKGvWJo4xDp148SSe3eCdyqomrtp1qiZ4s5PL\\n\",\n       \"TtXEtalJpWaSmirXpioLGSfEBmKIMSYuZK7GNg4bcBJDjA22bAxCCCSEuLRuZ/84fY5Ot7p1aZ1W\\n\",\n       \"q8X3U0WZVp8+5+3Tr4Ueve/zPMXF9mqkN5fzqafsPMtFi+xcz8ceswM8J5A6d86+9tq1Uk2N/drm\\n\",\n       \"ZmnOnNSrqM4P9c6W24YGafVq+++RiF0VN1l3t902xdmm6g1yz57NbGttlrk/FB87prDPQac09hXM\\n\",\n       \"W7/3vZwHfZP5B/7kIGakwD5dED1SAOvcg2n19bp/3bqEQjzBioqE1yQHjAM9PTr9hz8knKts1iy1\\n\",\n       \"7d3r5iwOYVmD21a9uyIKhv+ncUyro2kqVYeSPufK+fPVuGaNJM/Kb0WFVFCgvu5undy1S93Hj7tB\\n\",\n       \"7Kb4DoiahQsl2avDF06ccD+T5Pm0u6lJHYcOSbILELGNFgAw0WingsllrAWBhpNJG46KCsn5QdRp\\n\",\n       \"L+IU1YlGB9t91NZKhw8nfs2xYoUdDCZf2xlPWdlg8BoMSvfcYz+/Zs3gGJPbvXiLGrW326+74w57\\n\",\n       \"+27y++vstAsPeSttLlwo3XSTvRrqx72d5MbTjxAj86Nlymg+k5eWLNGJ5mbJGLdCbe3nPqfPx4tn\\n\",\n       \"PTd3rmKeVUTJbj9SXFOjstmz1b5//8itS5IU19YqMneuTib/f+2nggLNuPtuheO5nwWhkFsUSJJ2\\n\",\n       \"rFypo88/r8LiYvUOs2I/bfZsldTVqevoUQ309bkBdn00qgc2bkw49tmrr3b7nia3pgEAYLxop4L8\\n\",\n       \"k6pNSaa820qfeip93qOXU8ynpER67bXEFULvCktrq11YyGnf4OR4OquWqba0Ol/z5mr29trvNxRK\\n\",\n       
\"XcnVCTrXrUssatTWJr30Uupts5GIPffB4sIAACAASURBVA7JXjFdvtw+xrsFN/neetuaTIEWCpls\\n\",\n       \"Wx3r9tx8Nt736qzIhaur1e1ZZRvJWFd1S+vqFK6pkQoKZPX1yerr08ldu7SnqUnhSERfefddlc2a\\n\",\n       \"lbBt1ert1cUTJ9S2d2/KoNMEg27Rn1Rm3HmnHdiOJi98rJyV1IEBte3dq/Y//EHn3ntPJ3bs0HNz\\n\",\n       \"5uh8S4sk6XxLiwZisZRBp/NeqxcuVO+FCzq1d68utbaqx3PsqddfV6yzM+Fz7otvJ5fktncZyZX0\\n\",\n       \"/wQAIPtY8cTkMopVyp07d469Gltj4+DK5IoV6dtztLTY22Zfe21o3mFnp10IyFtFNhCwCwlt22Zv\\n\",\n       \"d03VbiR5FVeS5s2zg1fJDg63b098nfc1XV32yqZkV389diz9sc5KZvKKqTT8vR3t/Zmidu7cqa5/\\n\",\n       \"/MeMVvHy0XArlqNp6+KsXHbHA7zk84ylNUy6Y3c3NenounUJuY6SvSW1uqFB51taFCgpUW9Xl045\\n\",\n       \"/384Cgrs6s/Ofx2FhTIFBWnbpwQjEVV+5jPqbmnRxZMn026TTSdQXq6+ri69K2muJBMKSZalUHm5\\n\",\n       \"Ztx5p/p7euwVXA8TCLhbdwMlJaq+/XZ1vPPO0O3BhYUyxmjGnXeqqLpajWvW6NfXX+/28fS2QZHs\\n\",\n       \"1dDLZ8+6969oxgxdbmtTVUODlm3fPqrgP9OVbfgvo3/3gGEwp+A3VjyRf7JV/Ga01Vzr66WPPx4a\\n\",\n       \"dDY12dtq45UlXX199uqjN8cyWaoWJocP2yuR0ejQoDP5NfFKlKqsTF39NdUqcXIuZ1OTHcB627lk\\n\",\n       \"cn+msPH0u8w3w73X0eTHOiuXIU/lWO95xpJjm+5Yb4GdYHm5CouLFaqsVMlVV+nc0aPua5xKrdMX\\n\",\n       \"LNC1S5cqVFXlBptOFdnpN99sf72/P2XQaYJBFc2YoYJgUG179+ri8eNu0BmKRNxKsl4F4fCQr/Wd\\n\",\n       \"P5/w2OrpkdXbq9iZMwqWlmrJ+vUKJ1W2teLXKSwpUaC0VK27dtkBZHzFtaCkxF7l7O+X1denU3v3\\n\",\n       \"ui1mquO54NMXLNCX9+9XcW2tJPvzKKmrc+9fqLJSX3rrLV2/YsWog04p9TxhFRQAkCkCT0wuoyh+\\n\",\n       \"k9Fv6JyA9qabEtujjJYT3J09a2+vdbbYSnaPzeEClVRBXSRir552dAwWJHI0NdlVciV7NbW+3g4Y\\n\",\n       \"DxxIXf11NEHjkSND27l4TZJqt7nS2Ng4qavK+m249zqWADzTIkKjud75eEBpAgF9cc8e1dxxh3o6\\n\",\n       \"OvRJc7O7yjp9wQJF33xT169YoYd37NCDW7YoGK9mHayo0EPNzfZzu3YpEP+6kyvqLSBk9fbqclub\\n\",\n       \"Yt686Pi1p99yi6obGoaMO5hqu258d8/cFO+z4bvfVTgSUc0ddwx5TWFRkR49dEgD3qJF8XMNXLyY\\n\",\n       \"UMzIWxhoSbz1ycM7dmhafb0ePXzY/Ty649t2TSCgh3fudAs2jWVup/p8J0ProSsRK1PwG3MKuUDg\\n\",\n       \"iSuDE9AOl+c4HG+l2N5e+09dnb1quWPH8MFauqAuXT7rkSN2QCrZqx779qUNGHc3NenFri69XFur\\n\",\n       
\"WKqVzOTxpwtOJ0m121yazFVl/Zbuve5ualJvV5eKa2u1ZMOGEe9FuvOk69mZarUsXfBaGv8li9XX\\n\",\n       \"pwM/+EHKticXT5xQqKJCoUhEr0SjennpUhVfc40kqffcOR34wQ/c8V08ccI+X3+/CouKFHT6YsYD\\n\",\n       \"SG/epyksVKiyUlZfn1p37VIoEhmywhnztEwZjd899JB7fwuKitxczekLFug/nTypA08/LSctxRlL\\n\",\n       \"MF58SIWFUiCggnBYJhRy72ny/Q9HIgpFIlo3b54uOO83fv9SGWn1MtXneyXtDAAA+Ct9MzJgkso4\\n\",\n       \"L8G7ktjQMLYtpWvX2q/v6LDboYylUq4T1HnH4VSolYYGg94gMRIZvF6K8XYeOaLW+OrPnlWr0udg\\n\",\n       \"OeNPlYMKcl3iOo8ccfMl9w03n1JIztUsnTXLzQ/c09Sk+9etc1fLvF9zgptkqbbxPj9vni47udGy\\n\",\n       \"e2b+e02NpMEWJ0We7aaLf/Yzd1zeXM/+y5fVf/myJNnVZSMRxeKrqZIdnDqBZvXChWpcs0a/W7Zs\\n\",\n       \"aC5pGk6Op1fJNdeo49ChIeeYdt11Ckci9tbiePAXKC7WNffdp3t+8hO9sHChm7s50Nen9n37Eu6f\\n\",\n       \"974X19Tow9/+NiEvNlRZOSRAdF5z9o9/dHNEnfOl4r3G4mee0b5Vq9zKxGPJ50Xm+B4FvzGnkAsE\\n\",\n       \"nrhyeFcSZ80aWwDmBI+pivZIY2sD46x0SnaF2uQA1hskOudOEzAG4tsRqysqtPhHPxp5/JOFn21z\\n\",\n       \"4JvxrGYlB5WpzjXS+Xc3Nall82b1x2Kquvlm1Uejaly9WvueekqdR46o6rOfVcGtt+r0/v263Nbm\\n\",\n       \"Vrt1FRaq4lOf0lV33qlQRYVeiUbV9sYbGujpGfY9hyIRdZ84oYJQSAM9PQpXV2tafb2MpPIbbtAr\\n\",\n       \"0ag633039QmSCxilcXrfPhV6tvta/f12UBvv0+td0b18+rQ+2rJFH//udxrwFDgKlJWpr7tbgbIy\\n\",\n       \"xTo6FOvsTLjv4ZqahKAzWFGhRw4cGBIMel8jjfx5e49P/oVEql8mAACQClVtceXIpK/naHmrwobD\\n\",\n       \"dkB1223S+vVDr+PjOGKLFmnP3r1aLCk8nmq0Ex0IXuFVdCcrb59NJ9gb7UpWcu9USQk9O3c3Nanj\\n\",\n       \"0CF1HT2q6JtvalpSvnKqKrZOJdXk6qrtBw+q6/333TzI5ODv+hUrdLGtLSG4SiUYiWggFlO/p9VI\\n\",\n       \"6cyZKquvd1cmvVVnTTA4WJyooEChigrJGPWcPatQZaWqbr45ff/PggIFy8rUG+8TXDpzpv7jn/7k\\n\",\n       \"3tdYZ6fWzZvn9tpMJVRZqd7ubncM4epq9Z4/b7eNKSxUqLxcPR0dCkUiuuqee/QffvWrlJ+b81lV\\n\",\n       \"NTSobNYsNa5ZM+znm/zZeufGQG+vTjQ30zMXAK5wVLXFlWe4fpRr10qzZ9uBYXJBn/FyVisCASkW\\n\",\n       \"G9ySmyqP1MdCPuHyct0vKTzaarTp7o+f/VNHgyq6WTHeiqPenL6xFpFJztVMzg90tvFeam3VvhT5\\n\",\n       \"yt4qtlJiEZ3kvqFdx44NBp2SwtOnu3+fvmCBCouLddbZVl+Q+p+5wqIiTf/MZxKCTkn64muvuf00\\n\",\n       
\"vSuqocpK1d177+CBAwPq6ehQz9mzCpSUKHLTTTLBYPoemQMDbtAZLC/XF197LSFIC0cievTwYYWr\\n\",\n       \"q1O+vKCkRD0dHW7QGSgrU6y9fbBXaX+/ejo6VFJXp69+8IEe3LLFDfjT5dUu275dD2zaNGKwmPzZ\\n\",\n       \"eudGsKzsiinKBQAYHwJP5J2dO3emf3K4ACoSsbfY7t3rT4DlDeKeecYOJr2VLisqsl/IZ6xBbLr7\\n\",\n       \"M9GB4CSrojvsnMojflYcHeu225GKM410Puf5YEWFrl26NKHtR9f778sEAurp7LQr2nq2n1bcdJO+\\n\",\n       \"vH+/yurrFaqqUlF1tbqOHh3sb+kJSh2FJSV69C9/Sdkm5fUnnxxcjY2vooYqK/XIgQO6f/16t2WJ\\n\",\n       \"iVe2DpaXq3L+fLXt3asTzc0a6O1Vmk25dpEgSb1dXUOC791NTVo3b5560vzCIOQUQSotlQoK1BcP\\n\",\n       \"mANJ1XVrbr894TNINSfGUkhrd1OTXolG1dPd7X7N+1k2rl59xRTlyqWp8j0KkwdzCrlA4ImpJVUA\\n\",\n       \"5Q0QnTYofgRYmzcPBnFPPmkHkwsX2s8VFtptVrJtrEFsugBzogNBquhmhZ8VR/1uLzPS+ZznH/vw\\n\",\n       \"Q3e1znGprU1WX5+7+ljozYc8dUp7vvENlcycqZ4zZ3SiuVltb70labC/pVNwqHL+fJXU1enRQ4c0\\n\",\n       \"rb7eLoI0c2biQIxxA9LC0lIVzZihRw4c0LT6endV8voVK1R9662S7CDSWSFNDgK9XwtFIiqOF0IK\\n\",\n       \"VlRIhYXuSuSOlSt1dN06XWptdd9jcW2tTHy11gQCWvKb3yhcXa2+CxfsgDgefAfLyhSeMcN9v41r\\n\",\n       \"1iRef5xzIlXgeiW1HgIA+IccT0wNTo5iMCiVlkpr1gwGNd58wuXLpVDIn+qu06cPFiuKRqWNG+3t\\n\",\n       \"q3PmSPEqlJMufzFdcSRMCd4czYkOCLJZ3fQXNTWKtbersKREdY2NGujp0SfNzW6xHUl26yHLSsj3\\n\",\n       \"rI9G9cDGjdqxcqU+2rpVVbfcoiXr1yeMzZs/agIBfeX997X/+9/Xe88+627nrV++XA9s2pTwPjve\\n\",\n       \"eUex9nZ7a6wxQ3qASlLRjBn60ltvuVVgty5b5vYg9eaOhmtqEl5f1dCgZdu361f19erz5IRWzp+v\\n\",\n       \"41u3uq8tLClR/Re/qAsff5wyd3Z3U5POxvNqvxR/bqyfU3J+J4EmACAVcjxx5XC2kDY324Gl94cj\\n\",\n       \"7yrfmjX+rbTddpv934YGKV6ZUpGIdPvtg9cb6wrDcDmqfmClcUrLZS9Sv7b5pspJ/PL+/SqdOVOP\\n\",\n       \"HjqkB7ds0f3r16ts9myZ+NbVwtLSwZzPeNBZvXChQuXlerGxUe/98peKnT6tE83N+vWcOQn5r95q\\n\",\n       \"slZfn15/8kl7BdPzC9OBeF6lUwCpddcuxdrbVRAKyerrSxl0StJVd9+tA08/rYttbXr1scd05sCB\\n\",\n       \"hGtJ9kpo1S23SBq6zbgwni9aWFKiqxYtUk9Xl4pqa7Vsxw73flw8edLNnX3h9tsT7lvnkSNq27tX\\n\",\n       \"lz15tePN3QUAIFMEnsg7KfMShstRzNY20vXr7fNu3z60HUqm15voIj+QRK7LcEZbsCiTLZ3ec+9Y\\n\",\n       
\"uVIvNjbq2IYNQwKjafX1+puPP3ZX7F6JRtXT2ekWIwqWlrrnrJw/X/XRqB7atk3nW1rs1UxPxdue\\n\",\n       \"9nY9W1en3y5apJeXLtXiZ56xV0vjBnp7E4JRSTr7xz+6Y3MLIBUWaqCnJyEnM1hRoaIZM/SupOC0\\n\",\n       \"abrnJz9JCPRStXW56p57VFpXZ/cNNUb9nmO8AffFkyfdIPKdn/7UvR/OWANlZYqdPq3jW7dq3bx5\\n\",\n       \"inV2ZtTSJlkuf5mBQXyPgt+YU8gF+nhiavD2vkz+ASlbPSzTnXc816PaKyaZ0fZpvG/t2jFv803o\\n\",\n       \"QVldrZizRV3pA6NUPSiXbNig1598UjJGjatXu9d3giynb6Zj4NIlt13KvlWrFKqocAPIglBIjatX\\n\",\n       \"6xc1NVJ8VfLC8eO6cPy4+3pv65SqhgZdbm9X78WLqm5oUO/581Jbm3rPn9e+VasSAr3zH3yg2Jkz\\n\",\n       \"g9uCJX28dav794FYTCeam/X83Ln663ffdQNuSeqK9+wNlpfrTk/PXue+X+7o0InmZknSpdZW7Wlq\\n\",\n       \"SvmZZPI5AQDgB3I8gVxK7p/pfM3vHMyJ7tOJKSObOX7ec4cjEX3S3Dykt2RyTuKG+fN14fhxBadN\\n\",\n       \"U+3ixWl7VUqDOa8N3/2utj74oPp6etTjCW5DlZX66rFjal6xwr32su3bte+pp3Rs/fohFWZDkYiu\\n\",\n       \"vvdet4CPE8C9Eo26wXBxba0utbYqXF0tU1CggZ4eFYRC+lK84NGLixbpC1u26KX77ksItJM5PUwd\\n\",\n       \"v120yA2Wk59z3qvTB9Tbb7Nl82b1x2Kqvu22IfmtAAD4ZTQ5ngSeQC55Cx9lsxDRRF0HEy6bRX2k\\n\",\n       \"7BYs8p5bUsrreIv/XL9ihbpPnHAL9JTNnq2yWbNG/d5jnZ16ft48XW5ttStPO/82FRTomr/6K3dL\\n\",\n       \"qfeajlAkokcOHkwo3iPZ9//Yhg3q6eiQCQQUKClRYVGRps2erdP79rnHeYNF72tchYVupVonAPa+\\n\",\n       \"n9H8AiD5s0p+H6kCVgAA/EBxIUxJ485LyHYBn7EYaWutX2NlC++w8jnXxc/enamMJ8dvpPxQ77nT\\n\",\n       \"XSc5JzEUb3VSvXChSurqRvXedzc16dmrr9avr79elXPnKlxVZQd5AwP2n74+ffLqq0OuWdXQoGuX\\n\",\n       \"LlV9NKqvfvCBDjz99JD303nkiBtAWn196u3q0tttbeqOt1iR7DYn3m3D3tdI0jVLluirR4+qfvly\\n\",\n       \"1UejQ4JOyd4iO232bBWGw3r1sccU6+wccn+T76E3V7WwtFSxjo5h83QxeeXz9yhMTswp5AKBJ648\\n\",\n       \"k6mAz0iFiPwa60T36byCjLb4Trb42bvTT94KsOMJipOrqnofe4PQ4d5755EjutTaqp6ODp3ctUsF\\n\",\n       \"Tj9fr4GBIX0ql23frge3bNEDGzcqHIkkBPnP3XijXl661D2X0/tTkspvuEHRN99UWX29QlVVKqqu\\n\",\n       \"du/Ji42Nat2zJ+HS4UhE0+rr9cCmTe61vMe/vHSpJKl01iyd2rvXvZ8j/dLhvrVrVR+NKlxVpf4L\\n\",\n       \"F/RJc3NWfjkBAMBosNUWV56lS+1AbuHCkQOxXOdGjmWsyInkraATvZUxl707h+O9L04uZfL4xrtN\\n\",\n       
\"eLTv3dmmKtmrmJ/fuFH7Vq3S+Y8+GtwOW1CgqxcvVll9vY6tX6/+S5dkAgFd9bnP6YFNmxSORNzz\\n\",\n       \"ePuHmkBA4enT9dC2bdr//e8nFDjy3oPi2lpZlqXLp04ljM0Eg/paW1vK8SfPrZ7ubvf6M+66S5J0\\n\",\n       \"orl5xPxbenECALKNHE8glc7O0RfwyXVu5FjGipzgh/rUnPsSqqzUIwcODMmNlMYftI82cI11dmrn\\n\",\n       \"448PqXob6+zUr2+4QT1nzrjHhquq7MqzHsW1tXr08GFJGlJB1nvMzM9/XudbWhQoKVFxTY1aNm9O\\n\",\n       \"2FJrgkFZ8Z6gjof37NHVixalHHfy3JKk52680e0bWh+NqjAYHDHwnqy/nAAATB3keGJKGndegtPu\\n\",\n       \"ZDQ/gOU6N3IsY0XGxjOnkreCwlZcU6NwTY2qb7tNoYqKlMckbxMe67bl0ea3hiMRFc+YobY339Sz\\n\",\n       \"V12l1ZGIXlqyRJI04447Eo41hYVDXn+ptVXPz5snSbp/3TotWb9exbW1Q475aOtWte7apVe3btVH\\n\",\n       \"W7cmBJ3B8nIVTZ8ev8jgv8vv/PSnacedPLfCkYhqFi6UZN+zxtWrR5V/Sy/O/Ec+HvzGnEIuZBx4\\n\",\n       \"GmN+ZIw5bIx52xjzgjEm9U8WQD4jNxIj4If61M63tCh2+rRODJNXmBxYjbVQ0ljyW508z4GeHvWe\\n\",\n       \"O+eO6761a1VQVGSfb9o0PbRtm0qvvVYmGJQCg62uL8d7Y0r2Z/7o4cOqX75cRTNmuGOouuUWSVLF\\n\",\n       \"jTeq0MkjLbD/me3t6tKltjb7a/FdQN5xpwq6U80tftEBAMhXGW+1NcYskfSqZVkDxpgfSpJlWd9O\\n\",\n       \"cRxbbQHgCpPJFuSxvmbHypX66OWXVb1ggUrq6txtrqm23XrzPCUpNH26KufNU7C8XKd//3u3p2Z9\\n\",\n       \"NKpYR4e7BbggHNZALJZ2TOlawmxdtsxt+5JKSV2dVrzzjnu+Z6++WpdaWyXZ231r7rgjK+1xAADI\\n\",\n       \"htFstQ0M9+RwLMva5nm4T9IjmZ4LADC13Ld27ajyCr15moufeUb7Vq1yXzNSDuf5lhbF2tv1SXOz\\n\",\n       \"wjU1bu7jnqYm3b9u3ZBzv/7Nb2qgp0cFwaAut7frlBMYera+DvT0JKyklt9wg33+NO/BWZV0OH93\\n\",\n       \"Ku4Wlpaq/8KFhNdMX7BAVTffrFeiUfe99cdi7vOxM2fcVV/6bgIApgq/cjz/VtLLPp0LGNao8hIm\\n\",\n       \"U69OTHrkuvhvtFuQvdtr961alfCakbbenj96VJIUrKjQ9JtukpS4fbVl82b39a9/85t6YONGuz3K\\n\",\n       \"pk1u+5NwdXVC4FkQDCZsZ7148qQb3I62FcnOnTvdc9TefXfCc6UzZ+rhHTt0vqUl4b1V33abJCkw\\n\",\n       \"TIuYXLfuSWeyjmsq4XsU/MacQi4Mu+JpjNkmqTbFU/9gWdbm+DHfkdRjWdbadOd5/PHHdd1110mS\\n\",\n       \"IpGIFixYoMbGRkmDE5/HPB7t44MHD458fLz/5U5JikbVGP/6ZBg/jyffY8dkGc+V9PjQpUuaLjvQ\\n\",\n       \"Gvja17Rz5073+UOXLum0pM/Fg7Dk138Qiajj+HHNPXdOoUhE5++9V9d961tu4PpOd7d6Jc2VdHL3\\n\",\n       
\"bv3wzjt16/e+p88vW6b71q7Vv0WjajtzRjPi22yPlpbquq9/3Q2aveMLlJXp90ePauCll1T04ovq\\n\",\n       \"PHJEhy5dcs/nfX+SHXgHnnhCA93dKvrzn3W5tVWtN96ou378Y/u5khK9KzsfdGU8wPy3aFTz/u7v\\n\",\n       \"FHrhBS3+2c/0xsGDCe93z1tv6ezbb2uu7FXdwBNP5Pzzk6Su+C8I3pV0OBrV3/P9lsc8nvSPDyZ9\\n\",\n       \"f8n1eHicf48PHjyozvgvGz/88EONxrjaqRhjHpf0XyXdZ1nW5TTHkOOJiUf/y7HJdb9SXLGGa/Ux\\n\",\n       \"UhuQkXJCX1qyRCeamxO2u16/YoVC8UJGgZISDfT26kRzs0KVlZr5wAO6ePJkwtbeWGennpszx80B\\n\",\n       \"LZs9WxeOH3fbotQvX64HNm3S85/+tC62tqogGNSX9+9PaB+T6n1k0uJksrbumazjAgBMnKz28TTG\\n\",\n       \"fEHSjyXda1lW+zDHEXhi4tH/cmwaG3PbrxTwGEt/zuGCN+f5WEeHPmluVqCsTDPuukv9ly65+Z3e\\n\",\n       \"XpivRKMp+4p6A6uCcDihaFB9NKoHNm7U6khEvefOSbK30/7Nxx/7ek9G835zZbKOCwAwcbIdeL4n\\n\",\n       \"KSTpbPxLb1iW9USK4wg84audnq148MkVvkLMnJpcXmxsTBkAjiRdwBrr7NRzN97oFh8qrq3VpdbW\\n\",\n       \"ISt06VbuvIHVq4895lbHnX7zzXp41y6FIxH9oqZGsfZ2FZaU6Oqf/1xLv/KVMY9zrMeM5/zIL3yP\\n\",\n       \"gt+YU/DbaALPgkxPblnWHMuy6i3Laoj/GRJ0AsgT9CvFJDKW/pxe6YoRhSMR1Sxc6J4z+uabKXth\\n\",\n       \"puuR6S2UdN/atapfvlz10agbdO5uatK0T31KBeGwom+8oZLaVKURRh7nWI8Zz/kBAJho48rxHNUF\\n\",\n       \"WPEEAIxBpls3h8s1zOZ20LGu0CaPc99TTw1ZoRxP3iQ5lwCAiZbVFU8AALJhtK1YkqVbsRzPOVNJ\\n\",\n       \"bh8y1hXa5HGmWqEc7r2MpLimRuHqagJOAMCkQuCJvOOUdAb8wpyaGvwMLoeTHCgmB4kjzafkcaYK\\n\",\n       \"XMfzXs63tIy59ygmN75HwW/MKeQCgScAAGOQHCiON+Adz+rmaMYHAMBkQI4nAABjMNnbh0z28QEA\\n\",\n       \"pp6stlMZwyAIPAEAAABgiqK4EKYk8hLgN+YU/MR8gt+YU/Abcwq5QOAJAAAAAMgqttoCAAAAADLG\\n\",\n       \"VlsAAAAAQM4ReCLvkJcAvzGn4CfmE/zGnILfmFPIBQJPAAAAAEBWkeMJAAAAAMgYOZ4AAAAAgJwj\\n\",\n       \"8ETeIS8BfmNOwU/MJ/iNOQW/MaeQCwSeAAAAAICsIscTAAAAAJAxcjwBAAAAADlH4Im8Q14C/Mac\\n\",\n       \"gp+YT/Abcwp+Y04hFwg8AQAAAABZRY4nAAAAACBj5HgCAAAAAHKOwBN5h7wE+I05BT8xn+A35hT8\\n\",\n       \"xpxCLhB4AgAAAACyihxPAAAAAEDGyPEEAAAAAOQcgSfyDnkJ8BtzCn5iPsFvzCn4jTmFXCDwBAAA\\n\",\n       \"AABkFTmeAAAAAICMkeMJAAAAAMg5Ak/kHfIS4DfmFPzEfILfmFPwG3MKuUDgCQAAAADIKnI8AQAA\\n\",\n       
\"AAAZI8cTAAAAAJBzBJ7IO+QlwG/MKfiJ+QS/MafgN+YUcoHAEwAAAACQVeR4AgAAAAAyRo4nAAAA\\n\",\n       \"ACDnCDyRd8hLgN+YU/AT8wl+Y07Bb8wp5AKBJwAAAAAgq8jxBAAAAABkjBxPAAAAAEDOEXgi75CX\\n\",\n       \"AL8xp+An5hP8xpyC35hTyAUCTwAAAABAVpHjCQAAAADIGDmeAAAAAICcI/BE3iEvAX5jTsFPzCf4\\n\",\n       \"jTkFvzGnkAsEngAAAACArCLHEwAAAACQMXI8AQAAAAA5R+CJvENeAvzGnIKfmE/wG3MKfmNOIRcI\\n\",\n       \"PAEAAAAAWUWOJwAAAAAgY+R4AgAAAAByjsATeYe8BPiNOQU/MZ/gN+YU/MacQi4QeAIAAAAAsooc\\n\",\n       \"TwAAAABAxsjxBAAAAADkHIEn8g55CfAbcwp+Yj7Bb8wp+I05hVwg8AQAAAAAZBU5ngAAAACAjJHj\\n\",\n       \"CQAAAADIOQJP5B3yEuA35hT8xHyC35hT8BtzCrlA4AkAAAAAyCpyPAEAAAAAGSPHEwAAAACQcwSe\\n\",\n       \"yDvkJcBvzCn4ifkEvzGn4DfmFHKBwBMAAAAAkFXkeAIAAAAAMkaOJwAAAAAg5wg8kXfIS4DfmFPw\\n\",\n       \"E/MJfmNOwW/MKeQCgScAAAAAIKvI8QQAAAAAZIwcTwAAAABAzmUceBpj/skY87Yx5qAx5lVjzLV+\\n\",\n       \"DgxIh7wE+I05BT8xn+A35hT8xpxCLoxnxfOfLcu6xbKsBZI2SfpfPo0JGNbBgwdzPQRMMcwp+In5\\n\",\n       \"BL8xp+A35hRyIePA07Ks856HZZLaxz8cYGSdnZ25HgKmGOYU/MR8gt+YU/Abcwq5EBjPi40xT0v6\\n\",\n       \"z5IuSrrLlxEBAAAAAKaUYVc8jTHbjDF/SvHnYUmyLOs7lmXNkrRG0r9MwHgBffjhh7keAqYY5hT8\\n\",\n       \"xHyC35hT8BtzCrngSzsVY8wsSS9blvXZFM/RSwUAAAAAprCR2qlkvNXWGDPHsqz34g+XSzqQyQAA\\n\",\n       \"AAAAAFNbxiuexpgNkuZK6pd0VNI3LMtq83FsAAAAAIApwJettgAAAAAApDOePp6jZoz5J2PM28aY\\n\",\n       \"g8aYV40x107EdTE1GWN+ZIw5HJ9TLxhjKnI9JuQ3Y8wKY8w7xph+Y8ytuR4P8pcx5gvGmL8YY94z\\n\",\n       \"xvyPXI8H+c0Y83+NMaeMMX/K9VgwNRhjrjXG7Ij/m/dnY8x/z/WYkL+MMUXGmH3xGO+QMeZ/D3v8\\n\",\n       \"RKx4GmOmOX0/jTH/TdItlmV9PesXxpRkjFki6VXLsgaMMT+UJMuyvp3jYSGPGWM+LWlA0v+R9PeW\\n\",\n       \"Zf0hx0NCHjLGFEp6V9L9kj6R9HtJX7Us63BOB4a8ZYxZLKlb0r9bljU/1+NB/jPG1EqqtSzroDGm\\n\",\n       \"TNL/kxTl+xQyZYwpsSzrojEm47bo4wAAAphJREFUIOk1Sd+yLOu1VMdOyIqnE3TGlUlqn4jrYmqy\\n\",\n       \"LGubZVkD8Yf7JM3M5XiQ/yzL+otlWUdyPQ7kvTskvW9Z1oeWZfVKek528T0gI5Zl7ZHUketxYOqw\\n\",\n       \"LKvVsqyD8b93SzosqS63o0I+syzrYvyvIUmFks6mO3ZCAk9JMsY8bYz5SNJKST+cqOtiyvtbSS/n\\n\",\n       
\"ehAAIOkaSR97Hh+Pfw0AJh1jzHWSGmT/Eh/IiDGmwBhzUNIpSTssyzqU7tiM26mkuOg2SbUpnvoH\\n\",\n       \"y7I2W5b1HUnfMcZ8W9K/SPovfl0bU89I8yl+zHck9ViWtXZCB4e8NJo5BYwT1foA5IX4NtsNkp6M\\n\",\n       \"r3wCGYnvQlwQr7nyijGm0bKsnamO9S3wtCxrySgPXStWqDCCkeaTMeZxSUsl3TchA0LeG8P3KCBT\\n\",\n       \"n0jyFs+7VvaqJwBMGsaYoKTfSPqlZVmbcj0eTA2WZZ0zxmyRtFDSzlTHTFRV2zmeh8slHZiI62Jq\\n\",\n       \"MsZ8QdIqScsty7qc6/FgyjG5HgDy1n5Jc4wx1xljQpL+WtKLOR4TALiMMUbSzyUdsizrX3M9HuQ3\\n\",\n       \"Y0y1MSYS/3uxpCUaJs6bqKq2GyTNldQv6aikb1iW1Zb1C2NKMsa8JzuB2UlefsOyrCdyOCTkOWPM\\n\",\n       \"lyT9VFK1pHOSDliW9WBuR4V8ZIx5UNK/yi6w8HPLsoYtLQ8Mxxjza0n3SqqS1Cbp+5Zlrc7tqJDP\\n\",\n       \"jDGLJO2W9EcNpgf8T8uyfpe7USFfGWPmS/qF7MXMAknPWpb1o7THT0TgCQAAAAC4ck1YVVsAAAAA\\n\",\n       \"wJWJwBMAAAAAkFUEngAAAACArCLwBAAAAABkFYEnAAAAACCrCDwBAAAAAFlF4AkAAAAAyCoCTwAA\\n\",\n       \"AABAVv1/lzHCzGUnjVoAAAAASUVORK5CYII=\\n\"\n      ],\n      \"text/plain\": [\n       \"<matplotlib.figure.Figure at 0x7fadf4552a90>\"\n      ]\n     },\n     \"metadata\": {},\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"feat = out['feat']\\n\",\n    \"f = plt.figure(figsize=(16,9))\\n\",\n    \"c = ['#ff0000', '#ffff00', '#00ff00', '#00ffff', '#0000ff', \\n\",\n    \"     '#ff00ff', '#990000', '#999900', '#009900', '#009999']\\n\",\n    \"for i in range(10):\\n\",\n    \"    plt.plot(feat[labels==i,0].flatten(), feat[labels==i,1].flatten(), '.', c=c[i])\\n\",\n    \"plt.legend(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'])\\n\",\n    \"plt.grid()\\n\",\n    \"plt.show()\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"description\": \"Extracting features and plotting the Siamese network embedding.\",\n  \"example_name\": \"Siamese network embedding\",\n  \"include_in_docs\": true,\n  \"kernelspec\": {\n   \"display_name\": \"Python 2\",\n   \"language\": \"python\",\n   \"name\": \"python2\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 2\n   },\n   
\"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython2\",\n   \"version\": \"2.7.9\"\n  },\n  \"priority\": 7\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/mnist_siamese.prototxt",
    "content": "name: \"mnist_siamese\"\ninput: \"data\"\ninput_shape {\n  dim: 10000\n  dim: 1\n  dim: 28\n  dim: 28\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 20\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 50\n    kernel_size: 5\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool2\"\n  top: \"ip1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 500\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n  }\n}\nlayer {\n  name: \"feat\"\n  type: \"InnerProduct\"\n  bottom: \"ip2\"\n  top: \"feat\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 2\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/mnist_siamese_solver.prototxt",
    "content": "# The train/test net protocol buffer definition\nnet: \"examples/siamese/mnist_siamese_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\n# covering the full 10,000 testing images.\ntest_iter: 100\n# Carry out testing every 500 training iterations.\ntest_interval: 500\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.01\nmomentum: 0.9\nweight_decay: 0.0000\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 100\n# The maximum number of iterations\nmax_iter: 50000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"examples/siamese/mnist_siamese\"\n# solver mode: CPU or GPU\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/mnist_siamese_train_test.prototxt",
    "content": "name: \"mnist_siamese_train_test\"\nlayer {\n  name: \"pair_data\"\n  type: \"Data\"\n  top: \"pair_data\"\n  top: \"sim\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    scale: 0.00390625\n  }\n  data_param {\n    source: \"examples/siamese/mnist_siamese_train_leveldb\"\n    batch_size: 64\n  }\n}\nlayer {\n  name: \"pair_data\"\n  type: \"Data\"\n  top: \"pair_data\"\n  top: \"sim\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    scale: 0.00390625\n  }\n  data_param {\n    source: \"examples/siamese/mnist_siamese_test_leveldb\"\n    batch_size: 100\n  }\n}\nlayer {\n  name: \"slice_pair\"\n  type: \"Slice\"\n  bottom: \"pair_data\"\n  top: \"data\"\n  top: \"data_p\"\n  slice_param {\n    slice_dim: 1\n    slice_point: 1\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    name: \"conv1_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"conv1_b\"\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 20\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    name: \"conv2_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"conv2_b\"\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 50\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1\"\n  type: \"InnerProduct\"\n  bottom: \"pool2\"\n  top: \"ip1\"\n  param {\n  
  name: \"ip1_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"ip1_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"ip1\"\n  top: \"ip1\"\n}\nlayer {\n  name: \"ip2\"\n  type: \"InnerProduct\"\n  bottom: \"ip1\"\n  top: \"ip2\"\n  param {\n    name: \"ip2_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"ip2_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"feat\"\n  type: \"InnerProduct\"\n  bottom: \"ip2\"\n  top: \"feat\"\n  param {\n    name: \"feat_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"feat_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"conv1_p\"\n  type: \"Convolution\"\n  bottom: \"data_p\"\n  top: \"conv1_p\"\n  param {\n    name: \"conv1_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"conv1_b\"\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 20\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"pool1_p\"\n  type: \"Pooling\"\n  bottom: \"conv1_p\"\n  top: \"pool1_p\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2_p\"\n  type: \"Convolution\"\n  bottom: \"pool1_p\"\n  top: \"conv2_p\"\n  param {\n    name: \"conv2_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"conv2_b\"\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 50\n    kernel_size: 5\n    stride: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  
}\n}\nlayer {\n  name: \"pool2_p\"\n  type: \"Pooling\"\n  bottom: \"conv2_p\"\n  top: \"pool2_p\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"ip1_p\"\n  type: \"InnerProduct\"\n  bottom: \"pool2_p\"\n  top: \"ip1_p\"\n  param {\n    name: \"ip1_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"ip1_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 500\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu1_p\"\n  type: \"ReLU\"\n  bottom: \"ip1_p\"\n  top: \"ip1_p\"\n}\nlayer {\n  name: \"ip2_p\"\n  type: \"InnerProduct\"\n  bottom: \"ip1_p\"\n  top: \"ip2_p\"\n  param {\n    name: \"ip2_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"ip2_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 10\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"feat_p\"\n  type: \"InnerProduct\"\n  bottom: \"ip2_p\"\n  top: \"feat_p\"\n  param {\n    name: \"feat_w\"\n    lr_mult: 1\n  }\n  param {\n    name: \"feat_b\"\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"ContrastiveLoss\"\n  bottom: \"feat\"\n  bottom: \"feat_p\"\n  bottom: \"sim\"\n  top: \"loss\"\n  contrastive_loss_param {\n    margin: 1\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/readme.md",
    "content": "---\ntitle: Siamese Network Tutorial\ndescription: Train and test a siamese network on MNIST data.\ncategory: example\ninclude_in_docs: true\nlayout: default\npriority: 100\n---\n\n# Siamese Network Training with Caffe\nThis example shows how you can use weight sharing and a contrastive loss\nfunction to learn a model using a siamese network in Caffe.\n\nWe will assume that you have caffe successfully compiled. If not, please refer\nto the [Installation page](../../installation.html). This example builds on the\n[MNIST tutorial](mnist.html) so it would be a good idea to read that before\ncontinuing.\n\n*The guide specifies all paths and assumes all commands are executed from the\nroot caffe directory*\n\n## Prepare Datasets\n\nYou will first need to download and convert the data from the MNIST\nwebsite. To do this, simply run the following commands:\n\n    ./data/mnist/get_mnist.sh\n    ./examples/siamese/create_mnist_siamese.sh\n\nAfter running the script there should be two datasets,\n`./examples/siamese/mnist_siamese_train_leveldb`, and\n`./examples/siamese/mnist_siamese_test_leveldb`.\n\n## The Model\nFirst, we will define the model that we want to train using the siamese network.\nWe will use the convolutional net defined in\n`./examples/siamese/mnist_siamese.prototxt`. This model is almost\nexactly the same as the [LeNet model](mnist.html), the only difference is that\nwe have replaced the top layers that produced probabilities over the 10 digit\nclasses with a linear \"feature\" layer that produces a 2 dimensional vector.\n\n    layer {\n      name: \"feat\"\n      type: \"InnerProduct\"\n      bottom: \"ip2\"\n      top: \"feat\"\n      param {\n        name: \"feat_w\"\n        lr_mult: 1\n      }\n      param {\n        name: \"feat_b\"\n        lr_mult: 2\n      }\n      inner_product_param {\n        num_output: 2\n      }\n    }\n\n## Define the Siamese Network\n\nIn this section we will define the siamese network used for training. 
The\nresulting network is defined in\n`./examples/siamese/mnist_siamese_train_test.prototxt`.\n\n### Reading in the Pair Data\n\nWe start with a data layer that reads from the LevelDB database we created\nearlier. Each entry in this database contains the image data for a pair of\nimages (`pair_data`) and a binary label saying if they belong to the same class\nor different classes (`sim`).\n\n    layer {\n      name: \"pair_data\"\n      type: \"Data\"\n      top: \"pair_data\"\n      top: \"sim\"\n      include { phase: TRAIN }\n      transform_param {\n        scale: 0.00390625\n      }\n      data_param {\n        source: \"examples/siamese/mnist_siamese_train_leveldb\"\n        batch_size: 64\n      }\n    }\n\nIn order to pack a pair of images into the same blob in the database we pack one\nimage per channel. We want to be able to work with these two images separately,\nso we add a slice layer after the data layer. This takes the `pair_data` and\nslices it along the channel dimension so that we have a single image in `data`\nand its paired image in `data_p`.\n\n    layer {\n      name: \"slice_pair\"\n      type: \"Slice\"\n      bottom: \"pair_data\"\n      top: \"data\"\n      top: \"data_p\"\n      slice_param {\n        slice_dim: 1\n        slice_point: 1\n      }\n    }\n\n### Building the First Side of the Siamese Net\n\nNow we can specify the first side of the siamese net. This side operates on\n`data` and produces `feat`. Starting from the net in\n`./examples/siamese/mnist_siamese.prototxt` we add default weight fillers. Then\nwe name the parameters of the convolutional and inner product layers. Naming the\nparameters allows Caffe to share the parameters between layers on both sides of\nthe siamese net. In the definition this looks like:\n\n    ...\n    param { name: \"conv1_w\" ...  }\n    param { name: \"conv1_b\" ...  }\n    ...\n    param { name: \"conv2_w\" ...  }\n    param { name: \"conv2_b\" ...  }\n    ...\n    param { name: \"ip1_w\" ...  }\n    param { name: \"ip1_b\" ...  }\n    ...\n    param { name: \"ip2_w\" ...  }\n    param { name: \"ip2_b\" ...  }\n    ...\n\n### Building the Second Side of the Siamese Net\n\nNow we need to create the second path that operates on `data_p` and produces\n`feat_p`. This path is exactly the same as the first. So we can just copy and\npaste it. Then we change the name of each layer, input, and output by appending\n`_p` to differentiate the \"paired\" layers from the originals.\n\n### Adding the Contrastive Loss Function\n\nTo train the network we will optimize a contrastive loss function proposed in:\nRaia Hadsell, Sumit Chopra, and Yann LeCun \"Dimensionality Reduction by Learning\nan Invariant Mapping\". This loss function encourages matching pairs to be close\ntogether in feature space while pushing non-matching pairs apart. This cost\nfunction is implemented with the `ContrastiveLoss` layer:\n\n    layer {\n        name: \"loss\"\n        type: \"ContrastiveLoss\"\n        contrastive_loss_param {\n            margin: 1.0\n        }\n        bottom: \"feat\"\n        bottom: \"feat_p\"\n        bottom: \"sim\"\n        top: \"loss\"\n    }\n\n## Define the Solver\n\nNothing special needs to be done to the solver besides pointing it at the\ncorrect model file. The solver is defined in\n`./examples/siamese/mnist_siamese_solver.prototxt`.\n\n## Training and Testing the Model\n\nTraining the model is simple after you have written the network definition\nprotobuf and solver protobuf files. Simply run\n`./examples/siamese/train_mnist_siamese.sh`:\n\n    ./examples/siamese/train_mnist_siamese.sh\n\n## Plotting the results\n\nFirst, we can draw the model and siamese networks by running the following\ncommands that draw the DAGs defined in the .prototxt files:\n\n    ./python/draw_net.py \\\n        ./examples/siamese/mnist_siamese.prototxt \\\n        ./examples/siamese/mnist_siamese.png\n\n    ./python/draw_net.py \\\n        ./examples/siamese/mnist_siamese_train_test.prototxt \\\n        ./examples/siamese/mnist_siamese_train_test.png\n\nSecond, we can load the learned model and plot the features using the IPython\nnotebook:\n\n    ipython notebook ./examples/siamese/mnist_siamese.ipynb\n\n"
  },
  {
    "path": "caffe-fpn/examples/siamese/train_mnist_siamese.sh",
    "content": "#!/usr/bin/env sh\n\nTOOLS=./build/tools\n\n$TOOLS/caffe train --solver=examples/siamese/mnist_siamese_solver.prototxt\n"
  },
  {
    "path": "caffe-fpn/examples/web_demo/app.py",
    "content": "import os\nimport time\nimport cPickle\nimport datetime\nimport logging\nimport flask\nimport werkzeug\nimport optparse\nimport tornado.wsgi\nimport tornado.httpserver\nimport numpy as np\nimport pandas as pd\nfrom PIL import Image\nimport cStringIO as StringIO\nimport urllib\nimport exifutil\n\nimport caffe\n\nREPO_DIRNAME = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + '/../..')\nUPLOAD_FOLDER = '/tmp/caffe_demos_uploads'\nALLOWED_IMAGE_EXTENSIONS = set(['png', 'bmp', 'jpg', 'jpe', 'jpeg', 'gif'])\n\n# Obtain the flask app object\napp = flask.Flask(__name__)\n\n\n@app.route('/')\ndef index():\n    return flask.render_template('index.html', has_result=False)\n\n\n@app.route('/classify_url', methods=['GET'])\ndef classify_url():\n    imageurl = flask.request.args.get('imageurl', '')\n    try:\n        string_buffer = StringIO.StringIO(\n            urllib.urlopen(imageurl).read())\n        image = caffe.io.load_image(string_buffer)\n\n    except Exception as err:\n        # For any exception we encounter in reading the image, we will just\n        # not continue.\n        logging.info('URL Image open error: %s', err)\n        return flask.render_template(\n            'index.html', has_result=True,\n            result=(False, 'Cannot open image from URL.')\n        )\n\n    logging.info('Image: %s', imageurl)\n    result = app.clf.classify_image(image)\n    return flask.render_template(\n        'index.html', has_result=True, result=result, imagesrc=imageurl)\n\n\n@app.route('/classify_upload', methods=['POST'])\ndef classify_upload():\n    try:\n        # We will save the file to disk for possible data collection.\n        imagefile = flask.request.files['imagefile']\n        filename_ = str(datetime.datetime.now()).replace(' ', '_') + \\\n            werkzeug.secure_filename(imagefile.filename)\n        filename = os.path.join(UPLOAD_FOLDER, filename_)\n        imagefile.save(filename)\n        logging.info('Saving to %s.', 
filename)\n        image = exifutil.open_oriented_im(filename)\n\n    except Exception as err:\n        logging.info('Uploaded image open error: %s', err)\n        return flask.render_template(\n            'index.html', has_result=True,\n            result=(False, 'Cannot open uploaded image.')\n        )\n\n    result = app.clf.classify_image(image)\n    return flask.render_template(\n        'index.html', has_result=True, result=result,\n        imagesrc=embed_image_html(image)\n    )\n\n\ndef embed_image_html(image):\n    \"\"\"Creates an image embedded in HTML base64 format.\"\"\"\n    image_pil = Image.fromarray((255 * image).astype('uint8'))\n    image_pil = image_pil.resize((256, 256))\n    string_buf = StringIO.StringIO()\n    image_pil.save(string_buf, format='png')\n    data = string_buf.getvalue().encode('base64').replace('\\n', '')\n    return 'data:image/png;base64,' + data\n\n\ndef allowed_file(filename):\n    return (\n        '.' in filename and\n        filename.rsplit('.', 1)[1] in ALLOWED_IMAGE_EXTENSIONS\n    )\n\n\nclass ImagenetClassifier(object):\n    default_args = {\n        'model_def_file': (\n            '{}/models/bvlc_reference_caffenet/deploy.prototxt'.format(REPO_DIRNAME)),\n        'pretrained_model_file': (\n            '{}/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'.format(REPO_DIRNAME)),\n        'mean_file': (\n            '{}/python/caffe/imagenet/ilsvrc_2012_mean.npy'.format(REPO_DIRNAME)),\n        'class_labels_file': (\n            '{}/data/ilsvrc12/synset_words.txt'.format(REPO_DIRNAME)),\n        'bet_file': (\n            '{}/data/ilsvrc12/imagenet.bet.pickle'.format(REPO_DIRNAME)),\n    }\n    for key, val in default_args.iteritems():\n        if not os.path.exists(val):\n            raise Exception(\n                \"File for {} is missing. 
Should be at: {}\".format(key, val))\n    default_args['image_dim'] = 256\n    default_args['raw_scale'] = 255.\n\n    def __init__(self, model_def_file, pretrained_model_file, mean_file,\n                 raw_scale, class_labels_file, bet_file, image_dim, gpu_mode):\n        logging.info('Loading net and associated files...')\n        if gpu_mode:\n            caffe.set_mode_gpu()\n        else:\n            caffe.set_mode_cpu()\n        self.net = caffe.Classifier(\n            model_def_file, pretrained_model_file,\n            image_dims=(image_dim, image_dim), raw_scale=raw_scale,\n            mean=np.load(mean_file).mean(1).mean(1), channel_swap=(2, 1, 0)\n        )\n\n        with open(class_labels_file) as f:\n            labels_df = pd.DataFrame([\n                {\n                    'synset_id': l.strip().split(' ')[0],\n                    'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]\n                }\n                for l in f.readlines()\n            ])\n        self.labels = labels_df.sort('synset_id')['name'].values\n\n        self.bet = cPickle.load(open(bet_file))\n        # A bias to prefer children nodes in single-chain paths\n        # I am setting the value to 0.1 as a quick, simple model.\n        # We could use better psychological models here...\n        self.bet['infogain'] -= np.array(self.bet['preferences']) * 0.1\n\n    def classify_image(self, image):\n        try:\n            starttime = time.time()\n            scores = self.net.predict([image], oversample=True).flatten()\n            endtime = time.time()\n\n            indices = (-scores).argsort()[:5]\n            predictions = self.labels[indices]\n\n            # In addition to the prediction text, we will also produce\n            # the length for the progress bar visualization.\n            meta = [\n                (p, '%.5f' % scores[i])\n                for i, p in zip(indices, predictions)\n            ]\n            logging.info('result: %s', 
str(meta))\n\n            # Compute expected information gain\n            expected_infogain = np.dot(\n                self.bet['probmat'], scores[self.bet['idmapping']])\n            expected_infogain *= self.bet['infogain']\n\n            # sort the scores\n            infogain_sort = expected_infogain.argsort()[::-1]\n            bet_result = [(self.bet['words'][v], '%.5f' % expected_infogain[v])\n                          for v in infogain_sort[:5]]\n            logging.info('bet result: %s', str(bet_result))\n\n            return (True, meta, bet_result, '%.3f' % (endtime - starttime))\n\n        except Exception as err:\n            logging.info('Classification error: %s', err)\n            return (False, 'Something went wrong when classifying the '\n                           'image. Maybe try another one?')\n\n\ndef start_tornado(app, port=5000):\n    http_server = tornado.httpserver.HTTPServer(\n        tornado.wsgi.WSGIContainer(app))\n    http_server.listen(port)\n    print(\"Tornado server starting on port {}\".format(port))\n    tornado.ioloop.IOLoop.instance().start()\n\n\ndef start_from_terminal(app):\n    \"\"\"\n    Parse command line options and start the server.\n    \"\"\"\n    parser = optparse.OptionParser()\n    parser.add_option(\n        '-d', '--debug',\n        help=\"enable debug mode\",\n        action=\"store_true\", default=False)\n    parser.add_option(\n        '-p', '--port',\n        help=\"which port to serve content on\",\n        type='int', default=5000)\n    parser.add_option(\n        '-g', '--gpu',\n        help=\"use gpu mode\",\n        action='store_true', default=False)\n\n    opts, args = parser.parse_args()\n    ImagenetClassifier.default_args.update({'gpu_mode': opts.gpu})\n\n    # Initialize classifier + warm start by forward for allocation\n    app.clf = ImagenetClassifier(**ImagenetClassifier.default_args)\n    app.clf.net.forward()\n\n    if opts.debug:\n        app.run(debug=True, host='0.0.0.0', 
port=opts.port)\n    else:\n        start_tornado(app, opts.port)\n\n\nif __name__ == '__main__':\n    logging.getLogger().setLevel(logging.INFO)\n    if not os.path.exists(UPLOAD_FOLDER):\n        os.makedirs(UPLOAD_FOLDER)\n    start_from_terminal(app)\n"
  },
  {
    "path": "caffe-fpn/examples/web_demo/exifutil.py",
    "content": "\"\"\"\nThis script handles the skimage exif problem.\n\"\"\"\n\nfrom PIL import Image\nimport numpy as np\n\nORIENTATIONS = {   # used in apply_orientation\n    2: (Image.FLIP_LEFT_RIGHT,),\n    3: (Image.ROTATE_180,),\n    4: (Image.FLIP_TOP_BOTTOM,),\n    5: (Image.FLIP_LEFT_RIGHT, Image.ROTATE_90),\n    6: (Image.ROTATE_270,),\n    7: (Image.FLIP_LEFT_RIGHT, Image.ROTATE_270),\n    8: (Image.ROTATE_90,)\n}\n\n\ndef open_oriented_im(im_path):\n    im = Image.open(im_path)\n    if hasattr(im, '_getexif'):\n        exif = im._getexif()\n        if exif is not None and 274 in exif:\n            orientation = exif[274]\n            im = apply_orientation(im, orientation)\n    img = np.asarray(im).astype(np.float32) / 255.\n    if img.ndim == 2:\n        img = img[:, :, np.newaxis]\n        img = np.tile(img, (1, 1, 3))\n    elif img.shape[2] == 4:\n        img = img[:, :, :3]\n    return img\n\n\ndef apply_orientation(im, orientation):\n    if orientation in ORIENTATIONS:\n        for method in ORIENTATIONS[orientation]:\n            im = im.transpose(method)\n    return im\n"
  },
  {
    "path": "caffe-fpn/examples/web_demo/readme.md",
    "content": "---\ntitle: Web demo\ndescription: Image classification demo running as a Flask web server.\ncategory: example\ninclude_in_docs: true\npriority: 10\n---\n\n# Web Demo\n\n## Requirements\n\nThe demo server requires Python with some dependencies.\nTo make sure you have the dependencies, please run `pip install -r examples/web_demo/requirements.txt`, and also make sure that you've compiled the Python Caffe interface and that it is on your `PYTHONPATH` (see [installation instructions](/installation.html)).\n\nMake sure that you have obtained the Reference CaffeNet Model and the ImageNet Auxiliary Data:\n\n    ./scripts/download_model_binary.py models/bvlc_reference_caffenet\n    ./data/ilsvrc12/get_ilsvrc_aux.sh\n\nNOTE: if you run into trouble, try re-downloading the auxiliary files.\n\n## Run\n\nRunning `python examples/web_demo/app.py` will bring up the demo server, accessible at `http://0.0.0.0:5000`.\nYou can enable debug mode of the web server, or switch to a different port:\n\n    % python examples/web_demo/app.py -h\n    Usage: app.py [options]\n\n    Options:\n      -h, --help            show this help message and exit\n      -d, --debug           enable debug mode\n      -p PORT, --port=PORT  which port to serve content on\n\n## How are the \"maximally accurate\" results generated?\n\nIn a nutshell: ImageNet predictions are made at the leaf nodes, but the organization of the project allows leaf nodes to be united via more general parent nodes, with 'entity' at the very top.\nTo give \"maximally accurate\" results, we \"back off\" from maximally specific predictions to maintain a high accuracy.\nThe `bet_file` that is loaded in the demo provides the graph structure and names of all relevant ImageNet nodes as well as measures of information gain between them.\nPlease see the \"Hedging your bets\" paper from [CVPR 2012](http://www.image-net.org/projects/hedging/) for further information.\n"
  },
  {
    "path": "caffe-fpn/examples/web_demo/requirements.txt",
    "content": "werkzeug\nflask\ntornado\nnumpy\npandas\npillow\npyyaml\n"
  },
  {
    "path": "caffe-fpn/examples/web_demo/templates/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"utf-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <meta name=\"description\" content=\"Caffe demos\">\n    <meta name=\"author\" content=\"BVLC (http://bvlc.eecs.berkeley.edu/)\">\n\n    <title>Caffe Demos</title>\n\n    <link href=\"//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css\" rel=\"stylesheet\">\n\n    <script type=\"text/javascript\" src=\"//code.jquery.com/jquery-2.1.1.js\"></script>\n    <script src=\"//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js\"></script>\n\n    <!-- Script to instantly classify an image once it is uploaded. -->\n    <script type=\"text/javascript\">\n      $(document).ready(\n        function(){\n          $('#classifyfile').attr('disabled',true);\n          $('#imagefile').change(\n            function(){\n              if ($(this).val()){\n                $('#formupload').submit();\n              }\n            }\n          );\n        }\n      );\n    </script>\n\n    <style>\n    body {\n      font-family: \"Helvetica Neue\", Helvetica, Arial, sans-serif;\n      line-height:1.5em;\n      color: #232323;\n      -webkit-font-smoothing: antialiased;\n    }\n\n    h1, h2, h3 {\n      font-family: Times, serif;\n      line-height:1.5em;\n      border-bottom: 1px solid #ccc;\n    }\n    </style>\n  </head>\n\n  <body>\n    <!-- Begin page content -->\n    <div class=\"container\">\n      <div class=\"page-header\">\n        <h1><a href=\"/\">Caffe Demos</a></h1>\n        <p>\n          The <a href=\"http://caffe.berkeleyvision.org\">Caffe</a> neural network library makes implementing state-of-the-art computer vision systems easy.\n        </p>\n      </div>\n\n      <div>\n        <h2>Classification</h2>\n        <a href=\"/classify_url?imageurl=http%3A%2F%2Fi.telegraph.co.uk%2Fmultimedia%2Farchive%2F02351%2Fcross-eyed-cat_2351472k.jpg\">Click for a Quick Example</a>\n      
</div>\n\n      {% if has_result %}\n      {% if not result[0] %}\n      <!-- we have error in the result. -->\n      <div class=\"alert alert-danger\">{{ result[1] }} Did you provide a valid URL or a valid image file? </div>\n      {% else %}\n      <div class=\"media\">\n        <a class=\"pull-left\" href=\"#\"><img class=\"media-object\" width=\"192\" height=\"192\" src={{ imagesrc }}></a>\n        <div class=\"media-body\">\n          <div class=\"bs-example bs-example-tabs\">\n            <ul id=\"myTab\" class=\"nav nav-tabs\">\n              <li class=\"active\"><a href=\"#infopred\" data-toggle=\"tab\">Maximally accurate</a></li>\n              <li><a href=\"#flatpred\" data-toggle=\"tab\">Maximally specific</a></li>\n            </ul>\n            <div id=\"myTabContent\" class=\"tab-content\">\n              <div class=\"tab-pane fade in active\" id=\"infopred\">\n                <ul class=\"list-group\">\n                  {% for single_pred in result[2] %}\n                  <li class=\"list-group-item\">\n                  <span class=\"badge\">{{ single_pred[1] }}</span>\n                  <h4 class=\"list-group-item-heading\">\n                    <a href=\"https://www.google.com/#q={{ single_pred[0] }}\" target=\"_blank\">{{ single_pred[0] }}</a>\n                  </h4>\n                  </li>\n                  {% endfor %}\n                </ul>\n              </div>\n              <div class=\"tab-pane fade\" id=\"flatpred\">\n                <ul class=\"list-group\">\n                  {% for single_pred in result[1] %}\n                  <li class=\"list-group-item\">\n                  <span class=\"badge\">{{ single_pred[1] }}</span>\n                  <h4 class=\"list-group-item-heading\">\n                    <a href=\"https://www.google.com/#q={{ single_pred[0] }}\" target=\"_blank\">{{ single_pred[0] }}</a>\n                  </h4>\n                  </li>\n                  {% endfor %}\n                </ul>\n              </div>\n   
         </div>\n          </div>\n\n        </div>\n      </div>\n      <p> CNN took {{ result[3] }} seconds. </p>\n      {% endif %}\n      <hr>\n      {% endif %}\n\n      <form role=\"form\" action=\"classify_url\" method=\"get\">\n        <div class=\"form-group\">\n          <div class=\"input-group\">\n            <input type=\"text\" class=\"form-control\" name=\"imageurl\" id=\"imageurl\" placeholder=\"Provide an image URL\">\n            <span class=\"input-group-btn\">\n              <input class=\"btn btn-primary\" value=\"Classify URL\" type=\"submit\" id=\"classifyurl\"></input>\n            </span>\n          </div><!-- /input-group -->\n        </div>\n      </form>\n\n      <form id=\"formupload\" class=\"form-inline\" role=\"form\" action=\"classify_upload\" method=\"post\" enctype=\"multipart/form-data\">\n        <div class=\"form-group\">\n          <label for=\"imagefile\">Or upload an image:</label>\n          <input type=\"file\" name=\"imagefile\" id=\"imagefile\">\n        </div>\n        <!--<input type=\"submit\" class=\"btn btn-primary\" value=\"Classify File\" id=\"classifyfile\"></input>-->\n      </form>\n    </div>\n\n    <hr>\n    <div id=\"footer\">\n      <div class=\"container\">\n        <p>&copy; BVLC 2014</p>\n      </div>\n   </div>\n </body>\n</html>\n"
  },
  {
    "path": "caffe-fpn/include/caffe/blob.hpp",
    "content": "#ifndef CAFFE_BLOB_HPP_\n#define CAFFE_BLOB_HPP_\n\n#include <algorithm>\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/syncedmem.hpp\"\n\nconst int kMaxBlobAxes = 32;\n\nnamespace caffe {\n\n/**\n * @brief A wrapper around SyncedMemory holders serving as the basic\n *        computational unit through which Layer%s, Net%s, and Solver%s\n *        interact.\n *\n * TODO(dox): more thorough description.\n */\ntemplate <typename Dtype>\nclass Blob {\n public:\n  Blob()\n       : data_(), diff_(), count_(0), capacity_(0) {}\n\n  /// @brief Deprecated; use <code>Blob(const vector<int>& shape)</code>.\n  explicit Blob(const int num, const int channels, const int height,\n      const int width);\n  explicit Blob(const vector<int>& shape);\n\n  /// @brief Deprecated; use <code>Reshape(const vector<int>& shape)</code>.\n  void Reshape(const int num, const int channels, const int height,\n      const int width);\n  /**\n   * @brief Change the dimensions of the blob, allocating new memory if\n   *        necessary.\n   *\n   * This function can be called both to create an initial allocation\n   * of memory, and to adjust the dimensions of a top blob during Layer::Reshape\n   * or Layer::Forward. 
When changing the size of blob, memory will only be\n   * reallocated if sufficient memory does not already exist, and excess memory\n   * will never be freed.\n   *\n   * Note that reshaping an input blob and immediately calling Net::Backward is\n   * an error; either Net::Forward or Net::Reshape need to be called to\n   * propagate the new input shape to higher layers.\n   */\n  void Reshape(const vector<int>& shape);\n  void Reshape(const BlobShape& shape);\n  void ReshapeLike(const Blob& other);\n  inline string shape_string() const {\n    ostringstream stream;\n    for (int i = 0; i < shape_.size(); ++i) {\n      stream << shape_[i] << \" \";\n    }\n    stream << \"(\" << count_ << \")\";\n    return stream.str();\n  }\n  inline const vector<int>& shape() const { return shape_; }\n  /**\n   * @brief Returns the dimension of the index-th axis (or the negative index-th\n   *        axis from the end, if index is negative).\n   *\n   * @param index the axis index, which may be negative as it will be\n   *        \"canonicalized\" using CanonicalAxisIndex.\n   *        Dies on out of range index.\n   */\n  inline int shape(int index) const {\n    return shape_[CanonicalAxisIndex(index)];\n  }\n  inline int num_axes() const { return shape_.size(); }\n  inline int count() const { return count_; }\n\n  /**\n   * @brief Compute the volume of a slice; i.e., the product of dimensions\n   *        among a range of axes.\n   *\n   * @param start_axis The first axis to include in the slice.\n   *\n   * @param end_axis The first axis to exclude from the slice.\n   */\n  inline int count(int start_axis, int end_axis) const {\n    CHECK_LE(start_axis, end_axis);\n    CHECK_GE(start_axis, 0);\n    CHECK_GE(end_axis, 0);\n    CHECK_LE(start_axis, num_axes());\n    CHECK_LE(end_axis, num_axes());\n    int count = 1;\n    for (int i = start_axis; i < end_axis; ++i) {\n      count *= shape(i);\n    }\n    return count;\n  }\n  /**\n   * @brief Compute the volume of a slice 
spanning from a particular first\n   *        axis to the final axis.\n   *\n   * @param start_axis The first axis to include in the slice.\n   */\n  inline int count(int start_axis) const {\n    return count(start_axis, num_axes());\n  }\n\n  /**\n   * @brief Returns the 'canonical' version of a (usually) user-specified axis,\n   *        allowing for negative indexing (e.g., -1 for the last axis).\n   *\n   * @param axis_index the axis index.\n   *        If 0 <= index < num_axes(), return index.\n   *        If -num_axes <= index <= -1, return (num_axes() - (-index)),\n   *        e.g., the last axis index (num_axes() - 1) if index == -1,\n   *        the second to last if index == -2, etc.\n   *        Dies on out of range index.\n   */\n  inline int CanonicalAxisIndex(int axis_index) const {\n    CHECK_GE(axis_index, -num_axes())\n        << \"axis \" << axis_index << \" out of range for \" << num_axes()\n        << \"-D Blob with shape \" << shape_string();\n    CHECK_LT(axis_index, num_axes())\n        << \"axis \" << axis_index << \" out of range for \" << num_axes()\n        << \"-D Blob with shape \" << shape_string();\n    if (axis_index < 0) {\n      return axis_index + num_axes();\n    }\n    return axis_index;\n  }\n\n  /// @brief Deprecated legacy shape accessor num: use shape(0) instead.\n  inline int num() const { return LegacyShape(0); }\n  /// @brief Deprecated legacy shape accessor channels: use shape(1) instead.\n  inline int channels() const { return LegacyShape(1); }\n  /// @brief Deprecated legacy shape accessor height: use shape(2) instead.\n  inline int height() const { return LegacyShape(2); }\n  /// @brief Deprecated legacy shape accessor width: use shape(3) instead.\n  inline int width() const { return LegacyShape(3); }\n  inline int LegacyShape(int index) const {\n    CHECK_LE(num_axes(), 4)\n        << \"Cannot use legacy accessors on Blobs with > 4 axes.\";\n    CHECK_LT(index, 4);\n    CHECK_GE(index, -4);\n    if (index >= 
num_axes() || index < -num_axes()) {\n      // Axis is out of range, but still in [0, 3] (or [-4, -1] for reverse\n      // indexing) -- this special case simulates the one-padding used to fill\n      // extraneous axes of legacy blobs.\n      return 1;\n    }\n    return shape(index);\n  }\n\n  inline int offset(const int n, const int c = 0, const int h = 0,\n      const int w = 0) const {\n    CHECK_GE(n, 0);\n    CHECK_LE(n, num());\n    CHECK_GE(channels(), 0);\n    CHECK_LE(c, channels());\n    CHECK_GE(height(), 0);\n    CHECK_LE(h, height());\n    CHECK_GE(width(), 0);\n    CHECK_LE(w, width());\n    return ((n * channels() + c) * height() + h) * width() + w;\n  }\n\n  inline int offset(const vector<int>& indices) const {\n    CHECK_LE(indices.size(), num_axes());\n    int offset = 0;\n    for (int i = 0; i < num_axes(); ++i) {\n      offset *= shape(i);\n      if (indices.size() > i) {\n        CHECK_GE(indices[i], 0);\n        CHECK_LT(indices[i], shape(i));\n        offset += indices[i];\n      }\n    }\n    return offset;\n  }\n  /**\n   * @brief Copy from a source Blob.\n   *\n   * @param source the Blob to copy from\n   * @param copy_diff if false, copy the data; if true, copy the diff\n   * @param reshape if false, require this Blob to be pre-shaped to the shape\n   *        of other (and die otherwise); if true, Reshape this Blob to other's\n   *        shape if necessary\n   */\n  void CopyFrom(const Blob<Dtype>& source, bool copy_diff = false,\n      bool reshape = false);\n\n  inline Dtype data_at(const int n, const int c, const int h,\n      const int w) const {\n    return cpu_data()[offset(n, c, h, w)];\n  }\n\n  inline Dtype diff_at(const int n, const int c, const int h,\n      const int w) const {\n    return cpu_diff()[offset(n, c, h, w)];\n  }\n\n  inline Dtype data_at(const vector<int>& index) const {\n    return cpu_data()[offset(index)];\n  }\n\n  inline Dtype diff_at(const vector<int>& index) const {\n    return 
cpu_diff()[offset(index)];\n  }\n\n  inline const shared_ptr<SyncedMemory>& data() const {\n    CHECK(data_);\n    return data_;\n  }\n\n  inline const shared_ptr<SyncedMemory>& diff() const {\n    CHECK(diff_);\n    return diff_;\n  }\n\n  const Dtype* cpu_data() const;\n  void set_cpu_data(Dtype* data);\n  const int* gpu_shape() const;\n  const Dtype* gpu_data() const;\n  const Dtype* cpu_diff() const;\n  const Dtype* gpu_diff() const;\n  Dtype* mutable_cpu_data();\n  Dtype* mutable_gpu_data();\n  Dtype* mutable_cpu_diff();\n  Dtype* mutable_gpu_diff();\n  void Update();\n  void FromProto(const BlobProto& proto, bool reshape = true);\n  void ToProto(BlobProto* proto, bool write_diff = false) const;\n\n  /// @brief Compute the sum of absolute values (L1 norm) of the data.\n  Dtype asum_data() const;\n  /// @brief Compute the sum of absolute values (L1 norm) of the diff.\n  Dtype asum_diff() const;\n  /// @brief Compute the sum of squares (L2 norm squared) of the data.\n  Dtype sumsq_data() const;\n  /// @brief Compute the sum of squares (L2 norm squared) of the diff.\n  Dtype sumsq_diff() const;\n\n  /// @brief Scale the blob data by a constant factor.\n  void scale_data(Dtype scale_factor);\n  /// @brief Scale the blob diff by a constant factor.\n  void scale_diff(Dtype scale_factor);\n\n  /**\n   * @brief Set the data_ shared_ptr to point to the SyncedMemory holding the\n   *        data_ of Blob other -- useful in Layer%s which simply perform a copy\n   *        in their Forward pass.\n   *\n   * This deallocates the SyncedMemory holding this Blob's data_, as\n   * shared_ptr calls its destructor when reset with the \"=\" operator.\n   */\n  void ShareData(const Blob& other);\n  /**\n   * @brief Set the diff_ shared_ptr to point to the SyncedMemory holding the\n   *        diff_ of Blob other -- useful in Layer%s which simply perform a copy\n   *        in their Forward pass.\n   *\n   * This deallocates the SyncedMemory holding this Blob's diff_, as\n   * 
shared_ptr calls its destructor when reset with the \"=\" operator.\n   */\n  void ShareDiff(const Blob& other);\n\n  bool ShapeEquals(const BlobProto& other);\n\n protected:\n  shared_ptr<SyncedMemory> data_;\n  shared_ptr<SyncedMemory> diff_;\n  shared_ptr<SyncedMemory> shape_data_;\n  vector<int> shape_;\n  int count_;\n  int capacity_;\n\n  DISABLE_COPY_AND_ASSIGN(Blob);\n};  // class Blob\n\n}  // namespace caffe\n\n#endif  // CAFFE_BLOB_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/caffe.hpp",
    "content": "// caffe.hpp is the header file that you need to include in your code. It wraps\n// all the internal caffe header files into one for simpler inclusion.\n\n#ifndef CAFFE_CAFFE_HPP_\n#define CAFFE_CAFFE_HPP_\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layer_factory.hpp\"\n#include \"caffe/net.hpp\"\n#include \"caffe/parallel.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/solver.hpp\"\n#include \"caffe/solver_factory.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\n#endif  // CAFFE_CAFFE_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/common.hpp",
    "content": "#ifndef CAFFE_COMMON_HPP_\n#define CAFFE_COMMON_HPP_\n\n#include <boost/shared_ptr.hpp>\n#include <gflags/gflags.h>\n#include <glog/logging.h>\n\n#include <climits>\n#include <cmath>\n#include <fstream>  // NOLINT(readability/streams)\n#include <iostream>  // NOLINT(readability/streams)\n#include <map>\n#include <set>\n#include <sstream>\n#include <string>\n#include <utility>  // pair\n#include <vector>\n\n#include \"caffe/util/device_alternate.hpp\"\n\n// Convert macro to string\n#define STRINGIFY(m) #m\n#define AS_STRING(m) STRINGIFY(m)\n\n// gflags 2.1 issue: namespace google was changed to gflags without warning.\n// Luckily we will be able to use GFLAGS_GFLAGS_H_ to detect if it is version\n// 2.1. If yes, we will add a temporary solution to redirect the namespace.\n// TODO(Yangqing): Once gflags solves the problem in a more elegant way, let's\n// remove the following hack.\n#ifndef GFLAGS_GFLAGS_H_\nnamespace gflags = google;\n#endif  // GFLAGS_GFLAGS_H_\n\n// Disable the copy and assignment operator for a class.\n#define DISABLE_COPY_AND_ASSIGN(classname) \\\nprivate:\\\n  classname(const classname&);\\\n  classname& operator=(const classname&)\n\n// Instantiate a class with float and double specifications.\n#define INSTANTIATE_CLASS(classname) \\\n  char gInstantiationGuard##classname; \\\n  template class classname<float>; \\\n  template class classname<double>\n\n#define INSTANTIATE_LAYER_GPU_FORWARD(classname) \\\n  template void classname<float>::Forward_gpu( \\\n      const std::vector<Blob<float>*>& bottom, \\\n      const std::vector<Blob<float>*>& top); \\\n  template void classname<double>::Forward_gpu( \\\n      const std::vector<Blob<double>*>& bottom, \\\n      const std::vector<Blob<double>*>& top);\n\n#define INSTANTIATE_LAYER_GPU_BACKWARD(classname) \\\n  template void classname<float>::Backward_gpu( \\\n      const std::vector<Blob<float>*>& top, \\\n      const std::vector<bool>& propagate_down, \\\n      const 
std::vector<Blob<float>*>& bottom); \\\n  template void classname<double>::Backward_gpu( \\\n      const std::vector<Blob<double>*>& top, \\\n      const std::vector<bool>& propagate_down, \\\n      const std::vector<Blob<double>*>& bottom)\n\n#define INSTANTIATE_LAYER_GPU_FUNCS(classname) \\\n  INSTANTIATE_LAYER_GPU_FORWARD(classname); \\\n  INSTANTIATE_LAYER_GPU_BACKWARD(classname)\n\n// A simple macro to mark codes that are not implemented, so that when the code\n// is executed we will see a fatal log.\n#define NOT_IMPLEMENTED LOG(FATAL) << \"Not Implemented Yet\"\n\n// See PR #1236\nnamespace cv { class Mat; }\n\nnamespace caffe {\n\n// We will use the boost shared_ptr instead of the new C++11 one mainly\n// because cuda does not work (at least now) well with C++11 features.\nusing boost::shared_ptr;\n\n// Common functions and classes from std that caffe often uses.\nusing std::fstream;\nusing std::ios;\nusing std::isnan;\nusing std::isinf;\nusing std::iterator;\nusing std::make_pair;\nusing std::map;\nusing std::ostringstream;\nusing std::pair;\nusing std::set;\nusing std::string;\nusing std::stringstream;\nusing std::vector;\n\n// A global initialization function that you should call in your main function.\n// Currently it initializes google flags and google logging.\nvoid GlobalInit(int* pargc, char*** pargv);\n\n// A singleton class to hold common caffe stuff, such as the handler that\n// caffe is going to use for cublas, curand, etc.\nclass Caffe {\n public:\n  ~Caffe();\n\n  // Thread local context for Caffe. Moved to common.cpp instead of\n  // including boost/thread.hpp to avoid a boost/NVCC issues (#1009, #1010)\n  // on OSX. 
Also fails on Linux with CUDA 7.0.18.\n  static Caffe& Get();\n\n  enum Brew { CPU, GPU };\n\n  // This random number generator facade hides boost and CUDA rng\n  // implementation from one another (for cross-platform compatibility).\n  class RNG {\n   public:\n    RNG();\n    explicit RNG(unsigned int seed);\n    explicit RNG(const RNG&);\n    RNG& operator=(const RNG&);\n    void* generator();\n   private:\n    class Generator;\n    shared_ptr<Generator> generator_;\n  };\n\n  // Getters for boost rng, curand, and cublas handles\n  inline static RNG& rng_stream() {\n    if (!Get().random_generator_) {\n      Get().random_generator_.reset(new RNG());\n    }\n    return *(Get().random_generator_);\n  }\n#ifndef CPU_ONLY\n  inline static cublasHandle_t cublas_handle() { return Get().cublas_handle_; }\n  inline static curandGenerator_t curand_generator() {\n    return Get().curand_generator_;\n  }\n#endif\n\n  // Returns the mode: running on CPU or GPU.\n  inline static Brew mode() { return Get().mode_; }\n  // The setters for the variables\n  // Sets the mode. It is recommended that you don't change the mode halfway\n  // into the program since that may cause allocation of pinned memory being\n  // freed in a non-pinned way, which may cause problems - I haven't verified\n  // it personally but better to note it here in the header file.\n  inline static void set_mode(Brew mode) { Get().mode_ = mode; }\n  // Sets the random seed of both boost and curand\n  static void set_random_seed(const unsigned int seed);\n  // Sets the device. 
Since we have cublas and curand stuff, set device also\n  // requires us to reset those values.\n  static void SetDevice(const int device_id);\n  // Prints the current GPU status.\n  static void DeviceQuery();\n  // Parallel training info\n  inline static int solver_count() { return Get().solver_count_; }\n  inline static void set_solver_count(int val) { Get().solver_count_ = val; }\n  inline static bool root_solver() { return Get().root_solver_; }\n  inline static void set_root_solver(bool val) { Get().root_solver_ = val; }\n\n protected:\n#ifndef CPU_ONLY\n  cublasHandle_t cublas_handle_;\n  curandGenerator_t curand_generator_;\n#endif\n  shared_ptr<RNG> random_generator_;\n\n  Brew mode_;\n  int solver_count_;\n  bool root_solver_;\n\n private:\n  // The private constructor to avoid duplicate instantiation.\n  Caffe();\n\n  DISABLE_COPY_AND_ASSIGN(Caffe);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_COMMON_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/data_reader.hpp",
    "content": "#ifndef CAFFE_DATA_READER_HPP_\n#define CAFFE_DATA_READER_HPP_\n\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/util/blocking_queue.hpp\"\n#include \"caffe/util/db.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Reads data from a source to queues available to data layers.\n * A single reading thread is created per source, even if multiple solvers\n * are running in parallel, e.g. for multi-GPU training. This makes sure\n * databases are read sequentially, and that each solver accesses a different\n * subset of the database. Data is distributed to solvers in a round-robin\n * way to keep parallel training deterministic.\n */\nclass DataReader {\n public:\n  explicit DataReader(const LayerParameter& param);\n  ~DataReader();\n\n  inline BlockingQueue<Datum*>& free() const {\n    return queue_pair_->free_;\n  }\n  inline BlockingQueue<Datum*>& full() const {\n    return queue_pair_->full_;\n  }\n\n protected:\n  // Queue pairs are shared between a body and its readers\n  class QueuePair {\n   public:\n    explicit QueuePair(int size);\n    ~QueuePair();\n\n    BlockingQueue<Datum*> free_;\n    BlockingQueue<Datum*> full_;\n\n  DISABLE_COPY_AND_ASSIGN(QueuePair);\n  };\n\n  // A single body is created per source\n  class Body : public InternalThread {\n   public:\n    explicit Body(const LayerParameter& param);\n    virtual ~Body();\n\n   protected:\n    void InternalThreadEntry();\n    void read_one(db::Cursor* cursor, QueuePair* qp);\n\n    const LayerParameter param_;\n    BlockingQueue<shared_ptr<QueuePair> > new_queue_pairs_;\n\n    friend class DataReader;\n\n  DISABLE_COPY_AND_ASSIGN(Body);\n  };\n\n  // A source is uniquely identified by its layer name + path, in case\n  // the same database is read from two different locations in the net.\n  static inline string source_key(const LayerParameter& param) {\n    return param.name() + \":\" + 
param.data_param().source();\n  }\n\n  const shared_ptr<QueuePair> queue_pair_;\n  shared_ptr<Body> body_;\n\n  static map<const string, boost::weak_ptr<DataReader::Body> > bodies_;\n\nDISABLE_COPY_AND_ASSIGN(DataReader);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_DATA_READER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/data_transformer.hpp",
    "content": "#ifndef CAFFE_DATA_TRANSFORMER_HPP\n#define CAFFE_DATA_TRANSFORMER_HPP\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Applies common transformations to the input data, such as\n * scaling, mirroring, subtracting the image mean...\n */\ntemplate <typename Dtype>\nclass DataTransformer {\n public:\n  explicit DataTransformer(const TransformationParameter& param, Phase phase);\n  virtual ~DataTransformer() {}\n\n  /**\n   * @brief Initialize the random number generator if needed by the\n   *    transformation.\n   */\n  void InitRand();\n\n  /**\n   * @brief Applies the transformation defined in the data layer's\n   * transform_param block to the data.\n   *\n   * @param datum\n   *    Datum containing the data to be transformed.\n   * @param transformed_blob\n   *    This is the destination blob. It can be part of top blob's data if\n   *    set_cpu_data() is used. See data_layer.cpp for an example.\n   */\n  void Transform(const Datum& datum, Blob<Dtype>* transformed_blob);\n\n  /**\n   * @brief Applies the transformation defined in the data layer's\n   * transform_param block to a vector of Datum.\n   *\n   * @param datum_vector\n   *    A vector of Datum containing the data to be transformed.\n   * @param transformed_blob\n   *    This is the destination blob. It can be part of top blob's data if\n   *    set_cpu_data() is used. See memory_layer.cpp for an example.\n   */\n  void Transform(const vector<Datum> & datum_vector,\n                Blob<Dtype>* transformed_blob);\n\n#ifdef USE_OPENCV\n  /**\n   * @brief Applies the transformation defined in the data layer's\n   * transform_param block to a vector of Mat.\n   *\n   * @param mat_vector\n   *    A vector of Mat containing the data to be transformed.\n   * @param transformed_blob\n   *    This is the destination blob. It can be part of top blob's data if\n   *    set_cpu_data() is used. 
See memory_layer.cpp for an example.\n   */\n  void Transform(const vector<cv::Mat> & mat_vector,\n                Blob<Dtype>* transformed_blob);\n\n  /**\n   * @brief Applies the transformation defined in the data layer's\n   * transform_param block to a cv::Mat.\n   *\n   * @param cv_img\n   *    cv::Mat containing the data to be transformed.\n   * @param transformed_blob\n   *    This is the destination blob. It can be part of top blob's data if\n   *    set_cpu_data() is used. See image_data_layer.cpp for an example.\n   */\n  void Transform(const cv::Mat& cv_img, Blob<Dtype>* transformed_blob);\n#endif  // USE_OPENCV\n\n  /**\n   * @brief Applies the same transformation defined in the data layer's\n   * transform_param block to all the num images in an input_blob.\n   *\n   * @param input_blob\n   *    A Blob containing the data to be transformed. It applies the same\n   *    transformation to all the num images in the blob.\n   * @param transformed_blob\n   *    This is the destination blob; it will contain as many images as the\n   *    input blob. 
It can be part of top blob's data.\n   */\n  void Transform(Blob<Dtype>* input_blob, Blob<Dtype>* transformed_blob);\n\n  /**\n   * @brief Infers the shape that transformed_blob will have when\n   *    the transformation is applied to the data.\n   *\n   * @param datum\n   *    Datum containing the data to be transformed.\n   */\n  vector<int> InferBlobShape(const Datum& datum);\n  /**\n   * @brief Infers the shape that transformed_blob will have when\n   *    the transformation is applied to the data.\n   *    It uses the first element to infer the shape of the blob.\n   *\n   * @param datum_vector\n   *    A vector of Datum containing the data to be transformed.\n   */\n  vector<int> InferBlobShape(const vector<Datum> & datum_vector);\n  /**\n   * @brief Infers the shape that transformed_blob will have when\n   *    the transformation is applied to the data.\n   *    It uses the first element to infer the shape of the blob.\n   *\n   * @param mat_vector\n   *    A vector of Mat containing the data to be transformed.\n   */\n#ifdef USE_OPENCV\n  vector<int> InferBlobShape(const vector<cv::Mat> & mat_vector);\n  /**\n   * @brief Infers the shape that transformed_blob will have when\n   *    the transformation is applied to the data.\n   *\n   * @param cv_img\n   *    cv::Mat containing the data to be transformed.\n   */\n  vector<int> InferBlobShape(const cv::Mat& cv_img);\n#endif  // USE_OPENCV\n\n protected:\n   /**\n   * @brief Generates a random integer from Uniform({0, 1, ..., n-1}).\n   *\n   * @param n\n   *    The upper bound (exclusive) value of the random number.\n   * @return\n   *    A uniformly random integer value from ({0, 1, ..., n-1}).\n   */\n  virtual int Rand(int n);\n\n  void Transform(const Datum& datum, Dtype* transformed_data);\n  // Transformation parameters\n  TransformationParameter param_;\n\n\n  shared_ptr<Caffe::RNG> rng_;\n  Phase phase_;\n  Blob<Dtype> data_mean_;\n  vector<Dtype> mean_values_;\n};\n\n}  // namespace caffe\n\n#endif  // 
CAFFE_DATA_TRANSFORMER_HPP\n"
  },
  {
    "path": "caffe-fpn/include/caffe/fast_rcnn_layers.hpp",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#ifndef CAFFE_FAST_RCNN_LAYERS_HPP_\n#define CAFFE_FAST_RCNN_LAYERS_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/* ROIPoolingLayer - Region of Interest Pooling Layer\n*/\ntemplate <typename Dtype>\nclass ROIPoolingLayer : public Layer<Dtype> {\n public:\n  explicit ROIPoolingLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"ROIPooling\"; }\n\n  virtual inline int MinBottomBlobs() const { return 2; }\n  virtual inline int MaxBottomBlobs() const { return 2; }\n  virtual inline int MinTopBlobs() const { return 1; }\n  virtual inline int MaxTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int channels_;\n  int height_;\n  int width_;\n  int pooled_height_;\n  int pooled_width_;\n  Dtype spatial_scale_;\n  Blob<int> 
max_idx_;\n};\n\ntemplate <typename Dtype>\nclass SmoothL1LossLayer : public LossLayer<Dtype> {\n public:\n  explicit SmoothL1LossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param), diff_() {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"SmoothL1Loss\"; }\n\n  virtual inline int ExactNumBottomBlobs() const { return -1; }\n  virtual inline int MinBottomBlobs() const { return 2; }\n  virtual inline int MaxBottomBlobs() const { return 4; }\n\n  /**\n   * Unlike most loss layers, in the SmoothL1LossLayer we can backpropagate\n   * to both inputs -- override to return true and always allow force_backward.\n   */\n  virtual inline bool AllowForceBackward(const int bottom_index) const {\n    return true;\n  }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> diff_;\n  Blob<Dtype> errors_;\n  Blob<Dtype> ones_;\n  bool has_weights_;\n  Dtype sigma2_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_FAST_RCNN_LAYERS_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/filler.hpp",
    "content": "// Fillers are random number generators that fill a blob using the specified\n// algorithm. The expectation is that they are only going to be used during\n// initialization time and will not involve any GPUs.\n\n#ifndef CAFFE_FILLER_HPP\n#define CAFFE_FILLER_HPP\n\n#include <string>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\n/// @brief Fills a Blob with constant or randomly-generated data.\ntemplate <typename Dtype>\nclass Filler {\n public:\n  explicit Filler(const FillerParameter& param) : filler_param_(param) {}\n  virtual ~Filler() {}\n  virtual void Fill(Blob<Dtype>* blob) = 0;\n protected:\n  FillerParameter filler_param_;\n};  // class Filler\n\n\n/// @brief Fills a Blob with constant values @f$ x = 0 @f$.\ntemplate <typename Dtype>\nclass ConstantFiller : public Filler<Dtype> {\n public:\n  explicit ConstantFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    Dtype* data = blob->mutable_cpu_data();\n    const int count = blob->count();\n    const Dtype value = this->filler_param_.value();\n    CHECK(count);\n    for (int i = 0; i < count; ++i) {\n      data[i] = value;\n    }\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this Filler.\";\n  }\n};\n\n/// @brief Fills a Blob with uniformly distributed values @f$ x\\sim U(a, b) @f$.\ntemplate <typename Dtype>\nclass UniformFiller : public Filler<Dtype> {\n public:\n  explicit UniformFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    CHECK(blob->count());\n    caffe_rng_uniform<Dtype>(blob->count(), Dtype(this->filler_param_.min()),\n        Dtype(this->filler_param_.max()), blob->mutable_cpu_data());\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this 
Filler.\";\n  }\n};\n\n/// @brief Fills a Blob with Gaussian-distributed values @f$ x \\sim N(\\mu, \\sigma^2) @f$.\ntemplate <typename Dtype>\nclass GaussianFiller : public Filler<Dtype> {\n public:\n  explicit GaussianFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    Dtype* data = blob->mutable_cpu_data();\n    CHECK(blob->count());\n    caffe_rng_gaussian<Dtype>(blob->count(), Dtype(this->filler_param_.mean()),\n        Dtype(this->filler_param_.std()), blob->mutable_cpu_data());\n    int sparse = this->filler_param_.sparse();\n    CHECK_GE(sparse, -1);\n    if (sparse >= 0) {\n      // Sparse initialization is implemented for \"weight\" blobs; i.e. matrices.\n      // These have num == channels == 1; width is number of inputs; height is\n      // number of outputs.  The 'sparse' variable specifies the mean number\n      // of non-zero input weights for a given output.\n      CHECK_GE(blob->num_axes(), 1);\n      const int num_outputs = blob->shape(0);\n      Dtype non_zero_probability = Dtype(sparse) / Dtype(num_outputs);\n      rand_vec_.reset(new SyncedMemory(blob->count() * sizeof(int)));\n      int* mask = reinterpret_cast<int*>(rand_vec_->mutable_cpu_data());\n      caffe_rng_bernoulli(blob->count(), non_zero_probability, mask);\n      for (int i = 0; i < blob->count(); ++i) {\n        data[i] *= mask[i];\n      }\n    }\n  }\n\n protected:\n  shared_ptr<SyncedMemory> rand_vec_;\n};\n\n/** @brief Fills a Blob with values @f$ x \\in [0, 1] @f$\n *         such that @f$ \\forall i \\sum_j x_{ij} = 1 @f$.\n */\ntemplate <typename Dtype>\nclass PositiveUnitballFiller : public Filler<Dtype> {\n public:\n  explicit PositiveUnitballFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    Dtype* data = blob->mutable_cpu_data();\n    DCHECK(blob->count());\n    caffe_rng_uniform<Dtype>(blob->count(), 0, 1, blob->mutable_cpu_data());\n    // We expect the filler 
to not be called very frequently, so we will\n    // just use a simple implementation\n    int dim = blob->count() / blob->num();\n    CHECK(dim);\n    for (int i = 0; i < blob->num(); ++i) {\n      Dtype sum = 0;\n      for (int j = 0; j < dim; ++j) {\n        sum += data[i * dim + j];\n      }\n      for (int j = 0; j < dim; ++j) {\n        data[i * dim + j] /= sum;\n      }\n    }\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this Filler.\";\n  }\n};\n\n/**\n * @brief Fills a Blob with values @f$ x \\sim U(-a, +a) @f$ where @f$ a @f$ is\n *        set inversely proportional to number of incoming nodes, outgoing\n *        nodes, or their average.\n *\n * A Filler based on the paper [Glorot and Bengio 2010]: Understanding\n * the difficulty of training deep feedforward neural networks.\n *\n * It fills the incoming matrix by randomly sampling uniform data from [-scale,\n * scale] where scale = sqrt(3 / n) where n is the fan_in, fan_out, or their\n * average, depending on the variance_norm option. You should make sure the\n * input blob has shape (num, a, b, c) where a * b * c = fan_in and num * b * c\n * = fan_out. 
Note that this is currently not the case for inner product layers.\n *\n * TODO(dox): make notation in above comment consistent with rest & use LaTeX.\n */\ntemplate <typename Dtype>\nclass XavierFiller : public Filler<Dtype> {\n public:\n  explicit XavierFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    CHECK(blob->count());\n    int fan_in = blob->count() / blob->num();\n    int fan_out = blob->count() / blob->channels();\n    Dtype n = fan_in;  // default to fan_in\n    if (this->filler_param_.variance_norm() ==\n        FillerParameter_VarianceNorm_AVERAGE) {\n      n = (fan_in + fan_out) / Dtype(2);\n    } else if (this->filler_param_.variance_norm() ==\n        FillerParameter_VarianceNorm_FAN_OUT) {\n      n = fan_out;\n    }\n    Dtype scale = sqrt(Dtype(3) / n);\n    caffe_rng_uniform<Dtype>(blob->count(), -scale, scale,\n        blob->mutable_cpu_data());\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this Filler.\";\n  }\n};\n\n/**\n * @brief Fills a Blob with values @f$ x \\sim N(0, \\sigma^2) @f$ where\n *        @f$ \\sigma^2 @f$ is set inversely proportional to number of incoming\n *        nodes, outgoing nodes, or their average.\n *\n * A Filler based on the paper [He, Zhang, Ren and Sun 2015]: Specifically\n * accounts for ReLU nonlinearities.\n *\n * Aside: for another perspective on the scaling factor, see the derivation of\n * [Saxe, McClelland, and Ganguli 2013 (v3)].\n *\n * It fills the incoming matrix by randomly sampling Gaussian data with std =\n * sqrt(2 / n) where n is the fan_in, fan_out, or their average, depending on\n * the variance_norm option. You should make sure the input blob has shape (num,\n * a, b, c) where a * b * c = fan_in and num * b * c = fan_out. 
Note that this\n * is currently not the case for inner product layers.\n */\ntemplate <typename Dtype>\nclass MSRAFiller : public Filler<Dtype> {\n public:\n  explicit MSRAFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    CHECK(blob->count());\n    int fan_in = blob->count() / blob->num();\n    int fan_out = blob->count() / blob->channels();\n    Dtype n = fan_in;  // default to fan_in\n    if (this->filler_param_.variance_norm() ==\n        FillerParameter_VarianceNorm_AVERAGE) {\n      n = (fan_in + fan_out) / Dtype(2);\n    } else if (this->filler_param_.variance_norm() ==\n        FillerParameter_VarianceNorm_FAN_OUT) {\n      n = fan_out;\n    }\n    Dtype std = sqrt(Dtype(2) / n);\n    caffe_rng_gaussian<Dtype>(blob->count(), Dtype(0), std,\n        blob->mutable_cpu_data());\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this Filler.\";\n  }\n};\n\n/*!\n@brief Fills a Blob with coefficients for bilinear interpolation.\n\nA common use case is with the DeconvolutionLayer acting as upsampling.\nYou can upsample a feature map with shape of (B, C, H, W) by any integer factor\nusing the following proto.\n\\code\nlayer {\n  name: \"upsample\", type: \"Deconvolution\"\n  bottom: \"{{bottom_name}}\" top: \"{{top_name}}\"\n  convolution_param {\n    kernel_size: {{2 * factor - factor % 2}} stride: {{factor}}\n    num_output: {{C}} group: {{C}}\n    pad: {{ceil((factor - 1) / 2.)}}\n    weight_filler: { type: \"bilinear\" } bias_term: false\n  }\n  param { lr_mult: 0 decay_mult: 0 }\n}\n\\endcode\nPlease use this by replacing `{{}}` with your values. By specifying\n`num_output: {{C}} group: {{C}}`, it behaves as\nchannel-wise convolution. The filter shape of this deconvolution layer will be\n(C, 1, K, K) where K is `kernel_size`, and this filler will set a (K, K)\ninterpolation kernel for every channel of the filter identically. 
The resulting\nshape of the top feature map will be (B, C, factor * H, factor * W).\nNote that the learning rate and the\nweight decay are set to 0 in order to keep coefficient values of bilinear\ninterpolation unchanged during training. If you apply this to an image, this\noperation is equivalent to the following call in Python with Scikit.Image.\n\\code{.py}\nout = skimage.transform.rescale(img, factor, mode='constant', cval=0)\n\\endcode\n */\ntemplate <typename Dtype>\nclass BilinearFiller : public Filler<Dtype> {\n public:\n  explicit BilinearFiller(const FillerParameter& param)\n      : Filler<Dtype>(param) {}\n  virtual void Fill(Blob<Dtype>* blob) {\n    CHECK_EQ(blob->num_axes(), 4) << \"Blob must be 4 dim.\";\n    //CHECK_EQ(blob->width(), blob->height()) << \"Filter must be square\";\n    Dtype* data = blob->mutable_cpu_data();\n    int f = ceil(blob->width() / 2.);\n    float c = (2 * f - 1 - f % 2) / (2. * f);\n    for (int i = 0; i < blob->count(); ++i) {\n      float x = i % blob->width();\n      float y = (i / blob->width()) % blob->height();\n      data[i] = (1 - fabs(x / f - c)) * (1 - fabs(y / f - c));\n    }\n    CHECK_EQ(this->filler_param_.sparse(), -1)\n         << \"Sparsity not supported by this Filler.\";\n  }\n};\n\n/**\n * @brief Get a specific filler from the specification given in FillerParameter.\n *\n * Ideally this would be replaced by a factory pattern, but we will leave it\n * this way for now.\n */\ntemplate <typename Dtype>\nFiller<Dtype>* GetFiller(const FillerParameter& param) {\n  const std::string& type = param.type();\n  if (type == \"constant\") {\n    return new ConstantFiller<Dtype>(param);\n  } else if (type == \"gaussian\") {\n    return new GaussianFiller<Dtype>(param);\n  } else if (type == \"positive_unitball\") {\n    return new PositiveUnitballFiller<Dtype>(param);\n  } else if (type == \"uniform\") {\n    return new UniformFiller<Dtype>(param);\n  } else if (type == \"xavier\") {\n    return new 
XavierFiller<Dtype>(param);\n  } else if (type == \"msra\") {\n    return new MSRAFiller<Dtype>(param);\n  } else if (type == \"bilinear\") {\n    return new BilinearFiller<Dtype>(param);\n  } else {\n    CHECK(false) << \"Unknown filler name: \" << param.type();\n  }\n  return (Filler<Dtype>*)(NULL);\n}\n\n}  // namespace caffe\n\n#endif  // CAFFE_FILLER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/internal_thread.hpp",
    "content": "#ifndef CAFFE_INTERNAL_THREAD_HPP_\n#define CAFFE_INTERNAL_THREAD_HPP_\n\n#include \"caffe/common.hpp\"\n\n/**\n Forward declare boost::thread instead of including boost/thread.hpp\n to avoid a boost/NVCC issues (#1009, #1010) on OSX.\n */\nnamespace boost { class thread; }\n\nnamespace caffe {\n\n/**\n * Virtual class encapsulate boost::thread for use in base class\n * The child class will acquire the ability to run a single thread,\n * by reimplementing the virtual function InternalThreadEntry.\n */\nclass InternalThread {\n public:\n  InternalThread() : thread_() {}\n  virtual ~InternalThread();\n\n  /**\n   * Caffe's thread local state will be initialized using the current\n   * thread values, e.g. device id, solver index etc. The random seed\n   * is initialized using caffe_rng_rand.\n   */\n  void StartInternalThread();\n\n  /** Will not return until the internal thread has exited. */\n  void StopInternalThread();\n\n  bool is_started() const;\n\n protected:\n  /* Implement this method in your subclass\n      with the code you want your thread to run. */\n  virtual void InternalThreadEntry() {}\n\n  /* Should be tested when running loops to exit when requested. */\n  bool must_stop();\n\n private:\n  void entry(int device, Caffe::Brew mode, int rand_seed, int solver_count,\n      bool root_solver);\n\n  shared_ptr<boost::thread> thread_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_INTERNAL_THREAD_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layer.hpp",
    "content": "#ifndef CAFFE_LAYER_H_\n#define CAFFE_LAYER_H_\n\n#include <algorithm>\n#include <string>\n#include <vector>\n#include <iostream> \n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layer_factory.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/math_functions.hpp\"\n\n/**\n Forward declare boost::thread instead of including boost/thread.hpp\n to avoid a boost/NVCC issues (#1009, #1010) on OSX.\n */\n using namespace std;\nnamespace boost { class mutex; }\n\nnamespace caffe {\n\n/**\n * @brief An interface for the units of computation which can be composed into a\n *        Net.\n *\n * Layer%s must implement a Forward function, in which they take their input\n * (bottom) Blob%s (if any) and compute their output Blob%s (if any).\n * They may also implement a Backward function, in which they compute the error\n * gradients with respect to their input Blob%s, given the error gradients with\n * their output Blob%s.\n */\ntemplate <typename Dtype>\nclass Layer {\n public:\n  /**\n   * You should not implement your own constructor. 
Any set up code should go\n   * to SetUp(), where the dimensions of the bottom blobs are provided to the\n   * layer.\n   */\n  explicit Layer(const LayerParameter& param)\n    : layer_param_(param), is_shared_(false) {\n      // Set phase and copy blobs (if there are any).\n      phase_ = param.phase();\n      if (layer_param_.blobs_size() > 0) {\n        blobs_.resize(layer_param_.blobs_size());\n        for (int i = 0; i < layer_param_.blobs_size(); ++i) {\n          blobs_[i].reset(new Blob<Dtype>());\n          blobs_[i]->FromProto(layer_param_.blobs(i));\n        }\n      }\n    }\n  virtual ~Layer() {}\n\n  /**\n   * @brief Implements common layer setup functionality.\n   *\n   * @param bottom the preshaped input blobs\n   * @param top\n   *     the allocated but unshaped output blobs, to be shaped by Reshape\n   *\n   * Checks that the number of bottom and top blobs is correct.\n   * Calls LayerSetUp to do special layer setup for individual layer types,\n   * followed by Reshape to set up sizes of top blobs and internal buffers.\n   * Sets up the loss weight multiplier blobs for any non-zero loss weights.\n   * This method may not be overridden.\n   */\n  void SetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n\n\n    InitMutex();\n    CheckBlobCounts(bottom, top);\n    LayerSetUp(bottom, top);\n    Reshape(bottom, top);\n    SetLossWeights(top);\n\n  }\n\n  /**\n   * @brief Does layer-specific setup: your layer should implement this function\n   *        as well as Reshape.\n   *\n   * @param bottom\n   *     the preshaped input blobs, whose data fields store the input data for\n   *     this layer\n   * @param top\n   *     the allocated but unshaped output blobs\n   *\n   * This method should do one-time layer specific setup. 
This includes reading\n   * and processing relevant parameters from the <code>layer_param_</code>.\n   * Setting up the shapes of top blobs and internal buffers should be done in\n   * <code>Reshape</code>, which will be called before the forward pass to\n   * adjust the top blob sizes.\n   */\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  /**\n   * @brief Whether a layer should be shared by multiple nets during data\n   *        parallelism. By default, all layers except for data layers should\n   *        not be shared. Data layers should be shared to ensure each worker\n   *        solver accesses data sequentially during data parallelism.\n   */\n  virtual inline bool ShareInParallel() const { return false; }\n\n  /** @brief Return whether this layer is actually shared by other nets.\n   *         If ShareInParallel() is true and using more than one GPU and the\n   *         net has TRAIN phase, then this function is expected to return true.\n   */\n  inline bool IsShared() const { return is_shared_; }\n\n  /** @brief Set whether this layer is actually shared by other nets.\n   *         If ShareInParallel() is true and using more than one GPU and the\n   *         net has TRAIN phase, then is_shared should be set to true.\n   */\n  inline void SetShared(bool is_shared) {\n    CHECK(ShareInParallel() || !is_shared)\n        << type() << \"Layer does not support sharing.\";\n    is_shared_ = is_shared;\n  }\n\n  /**\n   * @brief Adjust the shapes of top blobs and internal buffers to accommodate\n   *        the shapes of the bottom blobs.\n   *\n   * @param bottom the input blobs, with the requested input shapes\n   * @param top the top blobs, which should be reshaped as needed\n   *\n   * This method should reshape top blobs as needed according to the shapes\n   * of the bottom (input) blobs, as well as reshaping any internal buffers\n   * and making any other necessary adjustments so that the layer can\n   
* accommodate the bottom blobs.\n   */\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) = 0;\n\n  /**\n   * @brief Given the bottom blobs, compute the top blobs and the loss.\n   *\n   * @param bottom\n   *     the input blobs, whose data fields store the input data for this layer\n   * @param top\n   *     the preshaped output blobs, whose data fields will store this layers'\n   *     outputs\n   * \\return The total loss from the layer.\n   *\n   * The Forward wrapper calls the relevant device wrapper function\n   * (Forward_cpu or Forward_gpu) to compute the top blob values given the\n   * bottom blobs.  If the layer has any non-zero loss_weights, the wrapper\n   * then computes and returns the loss.\n   *\n   * Your layer should implement Forward_cpu and (optionally) Forward_gpu.\n   */\n  inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Given the top blob error gradients, compute the bottom blob error\n   *        gradients.\n   *\n   * @param top\n   *     the output blobs, whose diff fields store the gradient of the error\n   *     with respect to themselves\n   * @param propagate_down\n   *     a vector with equal length to bottom, with each index indicating\n   *     whether to propagate the error gradients down to the bottom blob at\n   *     the corresponding index\n   * @param bottom\n   *     the input blobs, whose diff fields will store the gradient of the error\n   *     with respect to themselves after Backward is run\n   *\n   * The Backward wrapper calls the relevant device wrapper function\n   * (Backward_cpu or Backward_gpu) to compute the bottom blob diffs given the\n   * top blob diffs.\n   *\n   * Your layer should implement Backward_cpu and (optionally) Backward_gpu.\n   */\n  inline void Backward(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down,\n      const vector<Blob<Dtype>*>& bottom);\n\n  
/**\n   * @brief Returns the vector of learnable parameter blobs.\n   */\n  vector<shared_ptr<Blob<Dtype> > >& blobs() {\n    return blobs_;\n  }\n\n  /**\n   * @brief Returns the layer parameter.\n   */\n  const LayerParameter& layer_param() const { return layer_param_; }\n\n  /**\n   * @brief Writes the layer parameter to a protocol buffer\n   */\n  virtual void ToProto(LayerParameter* param, bool write_diff = false);\n\n  /**\n   * @brief Returns the scalar loss associated with a top blob at a given index.\n   */\n  inline Dtype loss(const int top_index) const {\n    return (loss_.size() > top_index) ? loss_[top_index] : Dtype(0);\n  }\n\n  /**\n   * @brief Sets the loss associated with a top blob at a given index.\n   */\n  inline void set_loss(const int top_index, const Dtype value) {\n    if (loss_.size() <= top_index) {\n      loss_.resize(top_index + 1, Dtype(0));\n    }\n    loss_[top_index] = value;\n  }\n\n  /**\n   * @brief Returns the layer type.\n   */\n  virtual inline const char* type() const { return \"\"; }\n\n  /**\n   * @brief Returns the exact number of bottom blobs required by the layer,\n   *        or -1 if no exact number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some exact number of bottom blobs.\n   */\n  virtual inline int ExactNumBottomBlobs() const { return -1; }\n  /**\n   * @brief Returns the minimum number of bottom blobs required by the layer,\n   *        or -1 if no minimum number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some minimum number of bottom blobs.\n   */\n  virtual inline int MinBottomBlobs() const { return -1; }\n  /**\n   * @brief Returns the maximum number of bottom blobs required by the layer,\n   *        or -1 if no maximum number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some maximum number 
of bottom blobs.\n   */\n  virtual inline int MaxBottomBlobs() const { return -1; }\n  /**\n   * @brief Returns the exact number of top blobs required by the layer,\n   *        or -1 if no exact number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some exact number of top blobs.\n   */\n  virtual inline int ExactNumTopBlobs() const { return -1; }\n  /**\n   * @brief Returns the minimum number of top blobs required by the layer,\n   *        or -1 if no minimum number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some minimum number of top blobs.\n   */\n  virtual inline int MinTopBlobs() const { return -1; }\n  /**\n   * @brief Returns the maximum number of top blobs required by the layer,\n   *        or -1 if no maximum number is required.\n   *\n   * This method should be overridden to return a non-negative value if your\n   * layer expects some maximum number of top blobs.\n   */\n  virtual inline int MaxTopBlobs() const { return -1; }\n  /**\n   * @brief Returns true if the layer requires an equal number of bottom and\n   *        top blobs.\n   *\n   * This method should be overridden to return true if your layer expects an\n   * equal number of bottom and top blobs.\n   */\n  virtual inline bool EqualNumBottomTopBlobs() const { return false; }\n\n  /**\n   * @brief Return whether \"anonymous\" top blobs are created automatically\n   *        by the layer.\n   *\n   * If this method returns true, Net::Init will create enough \"anonymous\" top\n   * blobs to fulfill the requirement specified by ExactNumTopBlobs() or\n   * MinTopBlobs().\n   */\n  virtual inline bool AutoTopBlobs() const { return false; }\n\n  /**\n   * @brief Return whether to allow force_backward for a given bottom blob\n   *        index.\n   *\n   * If AllowForceBackward(i) == false, we will ignore the force_backward\n   * setting and backpropagate 
to blob i only if it needs gradient information\n   * (as is done when force_backward == false).\n   */\n  virtual inline bool AllowForceBackward(const int bottom_index) const {\n    return true;\n  }\n\n  /**\n   * @brief Specifies whether the layer should compute gradients w.r.t. a\n   *        parameter at a particular index given by param_id.\n   *\n   * You can safely ignore false values and always compute gradients\n   * for all parameters, but possibly with wasteful computation.\n   */\n  inline bool param_propagate_down(const int param_id) {\n    return (param_propagate_down_.size() > param_id) ?\n        param_propagate_down_[param_id] : false;\n  }\n  /**\n   * @brief Sets whether the layer should compute gradients w.r.t. a\n   *        parameter at a particular index given by param_id.\n   */\n  inline void set_param_propagate_down(const int param_id, const bool value) {\n    if (param_propagate_down_.size() <= param_id) {\n      param_propagate_down_.resize(param_id + 1, true);\n    }\n    param_propagate_down_[param_id] = value;\n  }\n\n  inline Phase phase() { return phase_; }\n\n protected:\n  /** The protobuf that stores the layer parameters */\n  LayerParameter layer_param_;\n  /** The phase: TRAIN or TEST */\n  Phase phase_;\n  /** The vector that stores the learnable parameters as a set of blobs. */\n  vector<shared_ptr<Blob<Dtype> > > blobs_;\n  /** Vector indicating whether to compute the diff of each param blob. */\n  vector<bool> param_propagate_down_;\n\n  /** The vector that indicates whether each top blob has a non-zero weight in\n   *  the objective function. */\n  vector<Dtype> loss_;\n\n  /** @brief Using the CPU device, compute the layer output. 
*/\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) = 0;\n  /**\n   * @brief Using the GPU device, compute the layer output.\n   *        Fall back to Forward_cpu() if unavailable.\n   */\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n    // LOG(WARNING) << \"Using CPU code as backup.\";\n    return Forward_cpu(bottom, top);\n  }\n\n  /**\n   * @brief Using the CPU device, compute the gradients for any parameters and\n   *        for the bottom blobs if propagate_down is true.\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down,\n      const vector<Blob<Dtype>*>& bottom) = 0;\n  /**\n   * @brief Using the GPU device, compute the gradients for any parameters and\n   *        for the bottom blobs if propagate_down is true.\n   *        Fall back to Backward_cpu() if unavailable.\n   */\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down,\n      const vector<Blob<Dtype>*>& bottom) {\n    // LOG(WARNING) << \"Using CPU code as backup.\";\n    Backward_cpu(top, propagate_down, bottom);\n  }\n\n  /**\n   * Called by the parent Layer's SetUp to check that the number of bottom\n   * and top Blobs provided as input match the expected numbers specified by\n   * the {ExactNum,Min,Max}{Bottom,Top}Blobs() functions.\n   */\n  virtual void CheckBlobCounts(const vector<Blob<Dtype>*>& bottom,\n                               const vector<Blob<Dtype>*>& top) {\n    if (ExactNumBottomBlobs() >= 0) {\n      CHECK_EQ(ExactNumBottomBlobs(), bottom.size())\n          << type() << \" Layer takes \" << ExactNumBottomBlobs()\n          << \" bottom blob(s) as input.\";\n    }\n    if (MinBottomBlobs() >= 0) {\n      CHECK_LE(MinBottomBlobs(), bottom.size())\n          << type() << \" Layer takes at least \" << MinBottomBlobs()\n          << \" bottom blob(s) as 
input.\";\n    }\n    if (MaxBottomBlobs() >= 0) {\n      CHECK_GE(MaxBottomBlobs(), bottom.size())\n          << type() << \" Layer takes at most \" << MaxBottomBlobs()\n          << \" bottom blob(s) as input.\";\n    }\n    if (ExactNumTopBlobs() >= 0) {\n      CHECK_EQ(ExactNumTopBlobs(), top.size())\n          << type() << \" Layer produces \" << ExactNumTopBlobs()\n          << \" top blob(s) as output.\";\n    }\n    if (MinTopBlobs() >= 0) {\n      CHECK_LE(MinTopBlobs(), top.size())\n          << type() << \" Layer produces at least \" << MinTopBlobs()\n          << \" top blob(s) as output.\";\n    }\n    if (MaxTopBlobs() >= 0) {\n      CHECK_GE(MaxTopBlobs(), top.size())\n          << type() << \" Layer produces at most \" << MaxTopBlobs()\n          << \" top blob(s) as output.\";\n    }\n    if (EqualNumBottomTopBlobs()) {\n      CHECK_EQ(bottom.size(), top.size())\n          << type() << \" Layer produces one top blob as output for each \"\n          << \"bottom blob input.\";\n    }\n  }\n\n  /**\n   * Called by SetUp to initialize the weights associated with any top blobs in\n   * the loss function. 
Store non-zero loss weights in the diff blob.\n   */\n  inline void SetLossWeights(const vector<Blob<Dtype>*>& top) {\n    const int num_loss_weights = layer_param_.loss_weight_size();\n    if (num_loss_weights) {\n      CHECK_EQ(top.size(), num_loss_weights) << \"loss_weight must be \"\n          \"unspecified or specified once per top blob.\";\n      for (int top_id = 0; top_id < top.size(); ++top_id) {\n        const Dtype loss_weight = layer_param_.loss_weight(top_id);\n        if (loss_weight == Dtype(0)) { continue; }\n        this->set_loss(top_id, loss_weight);\n        const int count = top[top_id]->count();\n        Dtype* loss_multiplier = top[top_id]->mutable_cpu_diff();\n        caffe_set(count, loss_weight, loss_multiplier);\n      }\n    }\n  }\n\n private:\n  /** Whether this layer is actually shared by other nets*/\n  bool is_shared_;\n\n  /** The mutex for sequential forward if this layer is shared */\n  shared_ptr<boost::mutex> forward_mutex_;\n\n  /** Initialize forward_mutex_ */\n  void InitMutex();\n  /** Lock forward_mutex_ if this layer is shared */\n  void Lock();\n  /** Unlock forward_mutex_ if this layer is shared */\n  void Unlock();\n\n  DISABLE_COPY_AND_ASSIGN(Layer);\n};  // class Layer\n\n// Forward and backward wrappers. 
You should implement the cpu and\n// gpu specific implementations instead, and should not change these\n// functions.\ntemplate <typename Dtype>\ninline Dtype Layer<Dtype>::Forward(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  // Lock during forward to ensure sequential forward\n  Lock();\n  Dtype loss = 0;\n  Reshape(bottom, top);\n  switch (Caffe::mode()) {\n  case Caffe::CPU:\n    Forward_cpu(bottom, top);\n    for (int top_id = 0; top_id < top.size(); ++top_id) {\n      if (!this->loss(top_id)) { continue; }\n      const int count = top[top_id]->count();\n      const Dtype* data = top[top_id]->cpu_data();\n      const Dtype* loss_weights = top[top_id]->cpu_diff();\n      loss += caffe_cpu_dot(count, data, loss_weights);\n    }\n    break;\n  case Caffe::GPU:\n    Forward_gpu(bottom, top);\n#ifndef CPU_ONLY\n    for (int top_id = 0; top_id < top.size(); ++top_id) {\n      if (!this->loss(top_id)) { continue; }\n      const int count = top[top_id]->count();\n      const Dtype* data = top[top_id]->gpu_data();\n      const Dtype* loss_weights = top[top_id]->gpu_diff();\n      Dtype blob_loss = 0;\n      caffe_gpu_dot(count, data, loss_weights, &blob_loss);\n      loss += blob_loss;\n    }\n#endif\n    break;\n  default:\n    LOG(FATAL) << \"Unknown caffe mode.\";\n  }\n  Unlock();\n  return loss;\n}\n\ntemplate <typename Dtype>\ninline void Layer<Dtype>::Backward(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  switch (Caffe::mode()) {\n  case Caffe::CPU:\n    Backward_cpu(top, propagate_down, bottom);\n    break;\n  case Caffe::GPU:\n    Backward_gpu(top, propagate_down, bottom);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown caffe mode.\";\n  }\n}\n\n// Serialize LayerParameter to protocol buffer\ntemplate <typename Dtype>\nvoid Layer<Dtype>::ToProto(LayerParameter* param, bool write_diff) {\n  param->Clear();\n  param->CopyFrom(layer_param_);\n  
param->clear_blobs();\n  for (int i = 0; i < blobs_.size(); ++i) {\n    blobs_[i]->ToProto(param->add_blobs(), write_diff);\n  }\n}\n\n}  // namespace caffe\n\n#endif  // CAFFE_LAYER_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layer_factory.hpp",
    "content": "/**\n * @brief A layer factory that allows one to register layers.\n * During runtime, registered layers could be called by passing a LayerParameter\n * protobuffer to the CreateLayer function:\n *\n *     LayerRegistry<Dtype>::CreateLayer(param);\n *\n * There are two ways to register a layer. Assuming that we have a layer like:\n *\n *   template <typename Dtype>\n *   class MyAwesomeLayer : public Layer<Dtype> {\n *     // your implementations\n *   };\n *\n * and its type is its C++ class name, but without the \"Layer\" at the end\n * (\"MyAwesomeLayer\" -> \"MyAwesome\").\n *\n * If the layer is going to be created simply by its constructor, in your c++\n * file, add the following line:\n *\n *    REGISTER_LAYER_CLASS(MyAwesome);\n *\n * Or, if the layer is going to be created by another creator function, in the\n * format of:\n *\n *    template <typename Dtype>\n *    Layer<Dtype*> GetMyAwesomeLayer(const LayerParameter& param) {\n *      // your implementation\n *    }\n *\n * (for example, when your layer has multiple backends, see GetConvolutionLayer\n * for a use case), then you can register the creator function instead, like\n *\n * REGISTER_LAYER_CREATOR(MyAwesome, GetMyAwesomeLayer)\n *\n * Note that each layer type should only be registered once.\n */\n\n#ifndef CAFFE_LAYER_FACTORY_H_\n#define CAFFE_LAYER_FACTORY_H_\n\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass Layer;\n\ntemplate <typename Dtype>\nclass LayerRegistry {\n public:\n  typedef shared_ptr<Layer<Dtype> > (*Creator)(const LayerParameter&);\n  typedef std::map<string, Creator> CreatorRegistry;\n\n  static CreatorRegistry& Registry() {\n    static CreatorRegistry* g_registry_ = new CreatorRegistry();\n    return *g_registry_;\n  }\n\n  // Adds a creator.\n  static void AddCreator(const string& type, Creator 
creator) {\n    CreatorRegistry& registry = Registry();\n    CHECK_EQ(registry.count(type), 0)\n        << \"Layer type \" << type << \" already registered.\";\n    registry[type] = creator;\n  }\n\n  // Get a layer using a LayerParameter.\n  static shared_ptr<Layer<Dtype> > CreateLayer(const LayerParameter& param) {\n    if (Caffe::root_solver()) {\n      LOG(INFO) << \"Creating layer \" << param.name();\n    }\n    const string& type = param.type();\n    CreatorRegistry& registry = Registry();\n    CHECK_EQ(registry.count(type), 1) << \"Unknown layer type: \" << type\n        << \" (known types: \" << LayerTypeListString() << \")\";\n    return registry[type](param);\n  }\n\n  static vector<string> LayerTypeList() {\n    CreatorRegistry& registry = Registry();\n    vector<string> layer_types;\n    for (typename CreatorRegistry::iterator iter = registry.begin();\n         iter != registry.end(); ++iter) {\n      layer_types.push_back(iter->first);\n    }\n    return layer_types;\n  }\n\n private:\n  // Layer registry should never be instantiated - everything is done with its\n  // static variables.\n  LayerRegistry() {}\n\n  static string LayerTypeListString() {\n    vector<string> layer_types = LayerTypeList();\n    string layer_types_str;\n    for (vector<string>::iterator iter = layer_types.begin();\n         iter != layer_types.end(); ++iter) {\n      if (iter != layer_types.begin()) {\n        layer_types_str += \", \";\n      }\n      layer_types_str += *iter;\n    }\n    return layer_types_str;\n  }\n};\n\n\ntemplate <typename Dtype>\nclass LayerRegisterer {\n public:\n  LayerRegisterer(const string& type,\n                  shared_ptr<Layer<Dtype> > (*creator)(const LayerParameter&)) {\n    // LOG(INFO) << \"Registering layer type: \" << type;\n    LayerRegistry<Dtype>::AddCreator(type, creator);\n  }\n};\n\n\n#define REGISTER_LAYER_CREATOR(type, creator)                                  \\\n  static LayerRegisterer<float> g_creator_f_##type(#type, 
creator<float>);     \\\n  static LayerRegisterer<double> g_creator_d_##type(#type, creator<double>)    \\\n\n#define REGISTER_LAYER_CLASS(type)                                             \\\n  template <typename Dtype>                                                    \\\n  shared_ptr<Layer<Dtype> > Creator_##type##Layer(const LayerParameter& param) \\\n  {                                                                            \\\n    return shared_ptr<Layer<Dtype> >(new type##Layer<Dtype>(param));           \\\n  }                                                                            \\\n  REGISTER_LAYER_CREATOR(type, Creator_##type##Layer)\n\n}  // namespace caffe\n\n#endif  // CAFFE_LAYER_FACTORY_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/absval_layer.hpp",
    "content": "#ifndef CAFFE_ABSVAL_LAYER_HPP_\n#define CAFFE_ABSVAL_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes @f$ y = |x| @f$\n *\n * @param bottom input Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the inputs @f$ x @f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the computed outputs @f$ y = |x| @f$\n */\ntemplate <typename Dtype>\nclass AbsValLayer : public NeuronLayer<Dtype> {\n public:\n  explicit AbsValLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"AbsVal\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /// @copydoc AbsValLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the absolute value inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} =\n   *            \\mathrm{sign}(x) \\frac{\\partial E}{\\partial y}\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_ABSVAL_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/accuracy_layer.hpp",
    "content": "#ifndef CAFFE_ACCURACY_LAYER_HPP_\n#define CAFFE_ACCURACY_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the classification accuracy for a one-of-many\n *        classification task.\n */\ntemplate <typename Dtype>\nclass AccuracyLayer : public Layer<Dtype> {\n public:\n  /**\n   * @param param provides AccuracyParameter accuracy_param,\n   *     with AccuracyLayer options:\n   *   - top_k (\\b optional, default 1).\n   *     Sets the maximum rank @f$ k @f$ at which a prediction is considered\n   *     correct.  For example, if @f$ k = 5 @f$, a prediction is counted\n   *     correct if the correct label is among the top 5 predicted labels.\n   */\n  explicit AccuracyLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Accuracy\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 2; }\n\n  // If there are two top blobs, then the second blob will contain\n  // accuracies per class.\n  virtual inline int MinTopBlobs() const { return 1; }\n  virtual inline int MaxTopBlos() const { return 2; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$ x @f$, a Blob with values in\n   *      @f$ [-\\infty, +\\infty] @f$ indicating the predicted score for each of\n   *      the @f$ K = CHW @f$ classes. 
Each @f$ x_n @f$ is mapped to a predicted\n   *      label @f$ \\hat{l}_n @f$ given by its maximal index:\n   *      @f$ \\hat{l}_n = \\arg\\max\\limits_k x_{nk} @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels @f$ l @f$, an integer-valued Blob with values\n   *      @f$ l_n \\in [0, 1, 2, ..., K - 1] @f$\n   *      indicating the correct class label among the @f$ K @f$ classes\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      the computed accuracy: @f$\n   *        \\frac{1}{N} \\sum\\limits_{n=1}^N \\delta\\{ \\hat{l}_n = l_n \\}\n   *      @f$, where @f$\n   *      \\delta\\{\\mathrm{condition}\\} = \\left\\{\n   *         \\begin{array}{lr}\n   *            1 & \\mbox{if condition} \\\\\n   *            0 & \\mbox{otherwise}\n   *         \\end{array} \\right.\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n\n  /// @brief Not implemented -- AccuracyLayer cannot be used as a loss.\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n    for (int i = 0; i < propagate_down.size(); ++i) {\n      if (propagate_down[i]) { NOT_IMPLEMENTED; }\n    }\n  }\n\n  int label_axis_, outer_num_, inner_num_;\n\n  int top_k_;\n\n  /// Whether to ignore instances with a certain label.\n  bool has_ignore_label_;\n  /// The label indicating that an instance should be ignored.\n  int ignore_label_;\n  /// Keeps counts of the number of samples per class.\n  Blob<Dtype> nums_buffer_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_ACCURACY_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/argmax_layer.hpp",
    "content": "#ifndef CAFFE_ARGMAX_LAYER_HPP_\n#define CAFFE_ARGMAX_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Compute the index of the @f$ K @f$ max values for each datum across\n *        all dimensions @f$ (C \\times H \\times W) @f$.\n *\n * Intended for use after a classification layer to produce a prediction.\n * If parameter out_max_val is set to true, output is a vector of pairs\n * (max_ind, max_val) for each image. The axis parameter specifies an axis\n * along which to maximise.\n *\n * NOTE: does not implement Backwards operation.\n */\ntemplate <typename Dtype>\nclass ArgMaxLayer : public Layer<Dtype> {\n public:\n  /**\n   * @param param provides ArgMaxParameter argmax_param,\n   *     with ArgMaxLayer options:\n   *   - top_k (\\b optional uint, default 1).\n   *     the number @f$ K @f$ of maximal items to output.\n   *   - out_max_val (\\b optional bool, default false).\n   *     if set, output a vector of pairs (max_ind, max_val) unless axis is set then\n   *     output max_val along the specified axis.\n   *   - axis (\\b optional int).\n   *     if set, maximise along the specified axis else maximise the flattened\n   *     trailing dimensions for each index of the first / num dimension.\n   */\n  explicit ArgMaxLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"ArgMax\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x 
@f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times 1 \\times K) @f$ or, if out_max_val\n   *      @f$ (N \\times 2 \\times K) @f$ unless axis is set, then e.g.\n   *      @f$ (N \\times K \\times H \\times W) @f$ if axis == 1\n   *      the computed outputs @f$\n   *       y_n = \\arg\\max\\limits_i x_{ni}\n   *      @f$ (for @f$ K = 1 @f$).\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  /// @brief Not implemented (non-differentiable function)\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n    NOT_IMPLEMENTED;\n  }\n  bool out_max_val_;\n  size_t top_k_;\n  bool has_axis_;\n  int axis_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_ARGMAX_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/base_conv_layer.hpp",
    "content": "#ifndef CAFFE_BASE_CONVOLUTION_LAYER_HPP_\n#define CAFFE_BASE_CONVOLUTION_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/im2col.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Abstract base class that factors out the BLAS code common to\n *        ConvolutionLayer and DeconvolutionLayer.\n */\ntemplate <typename Dtype>\nclass BaseConvolutionLayer : public Layer<Dtype> {\n public:\n  explicit BaseConvolutionLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int MinTopBlobs() const { return 1; }\n  virtual inline bool EqualNumBottomTopBlobs() const { return true; }\n\n protected:\n  // Helper functions that abstract away the column buffer and gemm arguments.\n  // The last argument in forward_cpu_gemm is so that we can skip the im2col if\n  // we just called weight_cpu_gemm with the same input.\n  void forward_cpu_gemm(const Dtype* input, const Dtype* weights,\n      Dtype* output, bool skip_im2col = false);\n  void forward_cpu_bias(Dtype* output, const Dtype* bias);\n  void backward_cpu_gemm(const Dtype* input, const Dtype* weights,\n      Dtype* output);\n  void weight_cpu_gemm(const Dtype* input, const Dtype* output, Dtype*\n      weights);\n  void backward_cpu_bias(Dtype* bias, const Dtype* input);\n\n#ifndef CPU_ONLY\n  void forward_gpu_gemm(const Dtype* col_input, const Dtype* weights,\n      Dtype* output, bool skip_im2col = false);\n  void forward_gpu_bias(Dtype* output, const Dtype* bias);\n  void backward_gpu_gemm(const Dtype* input, const Dtype* weights,\n      Dtype* col_output);\n  void weight_gpu_gemm(const Dtype* col_input, const Dtype* 
output, Dtype*\n      weights);\n  void backward_gpu_bias(Dtype* bias, const Dtype* input);\n#endif\n\n  /// @brief The spatial dimensions of the input.\n  inline int input_shape(int i) {\n    return (*bottom_shape_)[channel_axis_ + i];\n  }\n  // reverse_dimensions should return true iff we are implementing deconv, so\n  // that conv helpers know which dimensions are which.\n  virtual bool reverse_dimensions() = 0;\n  // Compute height_out_ and width_out_ from other parameters.\n  virtual void compute_output_shape() = 0;\n\n  /// @brief The spatial dimensions of a filter kernel.\n  Blob<int> kernel_shape_;\n  /// @brief The spatial dimensions of the stride.\n  Blob<int> stride_;\n  /// @brief The spatial dimensions of the padding.\n  Blob<int> pad_;\n  /// @brief The spatial dimensions of the dilation.\n  Blob<int> dilation_;\n  /// @brief The spatial dimensions of the convolution input.\n  Blob<int> conv_input_shape_;\n  /// @brief The spatial dimensions of the col_buffer.\n  vector<int> col_buffer_shape_;\n  /// @brief The spatial dimensions of the output.\n  vector<int> output_shape_;\n  const vector<int>* bottom_shape_;\n\n  int num_spatial_axes_;\n  int bottom_dim_;\n  int top_dim_;\n\n  int channel_axis_;\n  int num_;\n  int channels_;\n  int group_;\n  int out_spatial_dim_;\n  int weight_offset_;\n  int num_output_;\n  bool bias_term_;\n  bool is_1x1_;\n  bool force_nd_im2col_;\n\n private:\n  // wrap im2col/col2im so we don't have to remember the (long) argument lists\n  inline void conv_im2col_cpu(const Dtype* data, Dtype* col_buff) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      im2col_cpu(data, conv_in_channels_,\n          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);\n   
 } else {\n      im2col_nd_cpu(data, num_spatial_axes_, conv_input_shape_.cpu_data(),\n          col_buffer_shape_.data(), kernel_shape_.cpu_data(),\n          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), col_buff);\n    }\n  }\n  inline void conv_col2im_cpu(const Dtype* col_buff, Dtype* data) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      col2im_cpu(col_buff, conv_in_channels_,\n          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);\n    } else {\n      col2im_nd_cpu(col_buff, num_spatial_axes_, conv_input_shape_.cpu_data(),\n          col_buffer_shape_.data(), kernel_shape_.cpu_data(),\n          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), data);\n    }\n  }\n#ifndef CPU_ONLY\n  inline void conv_im2col_gpu(const Dtype* data, Dtype* col_buff) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      im2col_gpu(data, conv_in_channels_,\n          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);\n    } else {\n      im2col_nd_gpu(data, num_spatial_axes_, num_kernels_im2col_,\n          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),\n          kernel_shape_.gpu_data(), pad_.gpu_data(),\n          stride_.gpu_data(), dilation_.gpu_data(), col_buff);\n    }\n  }\n  inline void conv_col2im_gpu(const Dtype* col_buff, Dtype* data) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      col2im_gpu(col_buff, conv_in_channels_,\n          conv_input_shape_.cpu_data()[1], 
conv_input_shape_.cpu_data()[2],\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);\n    } else {\n      col2im_nd_gpu(col_buff, num_spatial_axes_, num_kernels_col2im_,\n          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),\n          kernel_shape_.gpu_data(), pad_.gpu_data(), stride_.gpu_data(),\n          dilation_.gpu_data(), data);\n    }\n  }\n#endif\n\n  int num_kernels_im2col_;\n  int num_kernels_col2im_;\n  int conv_out_channels_;\n  int conv_in_channels_;\n  int conv_out_spatial_dim_;\n  int kernel_dim_;\n  int col_offset_;\n  int output_offset_;\n\n  Blob<Dtype> col_buffer_;\n  Blob<Dtype> bias_multiplier_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_BASE_CONVOLUTION_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/base_data_layer.hpp",
    "content": "#ifndef CAFFE_DATA_LAYERS_HPP_\n#define CAFFE_DATA_LAYERS_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/blocking_queue.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Provides base for data layers that feed blobs to the Net.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass BaseDataLayer : public Layer<Dtype> {\n public:\n  explicit BaseDataLayer(const LayerParameter& param);\n  // LayerSetUp: implements common data layer setup functionality, and calls\n  // DataLayerSetUp to do special data layer setup for individual layer types.\n  // This method may not be overridden except by the BasePrefetchingDataLayer.\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  // Data layers should be shared by multiple solvers in parallel\n  virtual inline bool ShareInParallel() const { return true; }\n  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n  // Data layers have no bottoms, so reshaping is trivial.\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n\n protected:\n  TransformationParameter transform_param_;\n  shared_ptr<DataTransformer<Dtype> > data_transformer_;\n  bool output_labels_;\n};\n\ntemplate <typename Dtype>\nclass Batch {\n public:\n  Blob<Dtype> data_, label_;\n};\n\ntemplate <typename Dtype>\nclass BasePrefetchingDataLayer :\n    public 
BaseDataLayer<Dtype>, public InternalThread {\n public:\n  explicit BasePrefetchingDataLayer(const LayerParameter& param);\n  // LayerSetUp: implements common data layer setup functionality, and calls\n  // DataLayerSetUp to do special data layer setup for individual layer types.\n  // This method may not be overridden.\n  void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  // Prefetches batches (asynchronously if to GPU memory)\n  static const int PREFETCH_COUNT = 3;\n\n protected:\n  virtual void InternalThreadEntry();\n  virtual void load_batch(Batch<Dtype>* batch) = 0;\n\n  Batch<Dtype> prefetch_[PREFETCH_COUNT];\n  BlockingQueue<Batch<Dtype>*> prefetch_free_;\n  BlockingQueue<Batch<Dtype>*> prefetch_full_;\n\n  Blob<Dtype> transformed_data_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_DATA_LAYERS_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/batch_norm_layer.hpp",
    "content": "#ifndef CAFFE_BATCHNORM_LAYER_HPP_\n#define CAFFE_BATCHNORM_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Normalizes the input to have 0-mean and/or unit (1) variance across\n *        the batch.\n *\n * This layer computes Batch Normalization described in [1].  For\n * each channel in the data (i.e. axis 1), it subtracts the mean and divides\n * by the variance, where both statistics are computed across both spatial\n * dimensions and across the different examples in the batch.\n *\n * By default, during training time, the network is computing global mean/\n * variance statistics via a running average, which is then used at test\n * time to allow deterministic outputs for each input.  You can manually\n * toggle whether the network is accumulating or using the statistics via the\n * use_global_stats option.  IMPORTANT: for this feature to work, you MUST\n * set the learning rate to zero for all three parameter blobs, i.e.,\n * param {lr_mult: 0} three times in the layer definition.\n *\n * Note that the original paper also included a per-channel learned bias and\n * scaling factor.  It is possible (though a bit cumbersome) to implement\n * this in caffe using a single-channel DummyDataLayer filled with zeros,\n * followed by a Convolution layer with output the same size as the current.\n * This produces a channel-specific value that can be added or multiplied by\n * the BatchNorm layer's output.\n *\n * [1] S. Ioffe and C. 
Szegedy, \"Batch Normalization: Accelerating Deep Network\n *     Training by Reducing Internal Covariate Shift.\" arXiv preprint\n *     arXiv:1502.03167 (2015).\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass BatchNormLayer : public Layer<Dtype> {\n public:\n  explicit BatchNormLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"BatchNorm\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n     const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> mean_, variance_, temp_, x_norm_;\n  bool use_global_stats_;\n  Dtype moving_average_fraction_;\n  int channels_;\n  Dtype eps_;\n\n  // extra temporarary variables is used to carry out sums/broadcasting\n  // using BLAS\n  Blob<Dtype> batch_sum_multiplier_;\n  Blob<Dtype> num_by_chans_;\n  Blob<Dtype> spatial_sum_multiplier_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_BATCHNORM_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/batch_reindex_layer.hpp",
    "content": "#ifndef CAFFE_BATCHREINDEX_LAYER_HPP_\n#define CAFFE_BATCHREINDEX_LAYER_HPP_\n\n#include <utility>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Index into the input blob along its first axis.\n *\n * This layer can be used to select, reorder, and even replicate examples in a\n * batch.  The second blob is cast to int and treated as an index into the\n * first axis of the first blob.\n */\ntemplate <typename Dtype>\nclass BatchReindexLayer : public Layer<Dtype> {\n public:\n  explicit BatchReindexLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"BatchReindex\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 2+)\n   *   -# @f$ (N \\times ...) @f$\n   *      the inputs @f$ x_1 @f$\n   *   -# @f$ (M) @f$\n   *      the inputs @f$ x_2 @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (M \\times ...) @f$:\n   *      the reindexed array @f$\n   *        y = x_1[x_2]\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. the reordered input.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient\n   *        with respect to the outputs\n   *   -# @f$ (M \\times ...) 
@f$:\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to concatenated outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2):\n   *   - @f$ \\frac{\\partial E}{\\partial y} @f$ is de-indexed (summing where\n   *     required) back to the input x_1\n   *   - This layer cannot backprop to x_2, i.e. propagate_down[1] must be\n   *     false.\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n private:\n  struct pair_sort_first {\n    bool operator()(const std::pair<int, int> &left,\n                    const std::pair<int, int> &right) {\n      return left.first < right.first;\n    }\n  };\n  void check_batch_reindex(int initial_num, int final_num,\n                           const Dtype* ridx_data);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_BATCHREINDEX_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/bias_layer.hpp",
    "content": "#ifndef CAFFE_BIAS_LAYER_HPP_\n#define CAFFE_BIAS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Computes a sum of two input Blobs, with the shape of the\n *        latter Blob \"broadcast\" to match the shape of the former.\n *        Equivalent to tiling the latter Blob, then computing the elementwise\n *        sum.\n *\n * The second input may be omitted, in which case it's learned as a parameter\n * of the layer.\n */\ntemplate <typename Dtype>\nclass BiasLayer : public Layer<Dtype> {\n public:\n  explicit BiasLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Bias\"; }\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int MaxBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n private:\n  Blob<Dtype> bias_multiplier_;\n  int outer_dim_, bias_dim_, inner_dim_, dim_;\n};\n\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_BIAS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/bnll_layer.hpp",
    "content": "#ifndef CAFFE_BNLL_LAYER_HPP_\n#define CAFFE_BNLL_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes @f$ y = x + \\log(1 + \\exp(-x)) @f$ if @f$ x > 0 @f$;\n *        @f$ y = \\log(1 + \\exp(x)) @f$ otherwise.\n *\n * @param bottom input Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the inputs @f$ x @f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the computed outputs @f$\n *      y = \\left\\{\n *         \\begin{array}{ll}\n *            x + \\log(1 + \\exp(-x)) & \\mbox{if } x > 0 \\\\\n *            \\log(1 + \\exp(x)) & \\mbox{otherwise}\n *         \\end{array} \\right.\n *      @f$\n */\ntemplate <typename Dtype>\nclass BNLLLayer : public NeuronLayer<Dtype> {\n public:\n  explicit BNLLLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"BNLL\"; }\n\n protected:\n  /// @copydoc BNLLLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the BNLL inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x}\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_BNLL_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/conadd_layer.hpp",
    "content": "#ifndef CAFFE_CONADD_LAYER_HPP_\n#define CAFFE_CONADD_LAYER_HPP_\n\n#include <vector>\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Takes at least two Blob%s and concatenates them along either the num\n *        or channel dimension, outputting the result.\n */\ntemplate <typename Dtype>\nclass ConaddLayer : public Layer<Dtype> {\n public:\n  explicit ConaddLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Conadd\"; }\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 2+)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_1 @f$\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_2 @f$\n   *   -# ...\n   *   - K @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_K @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (KN \\times C \\times H \\times W) @f$ if axis == 0, or\n   *      @f$ (N \\times KC \\times H \\times W) @f$ if axis == 1:\n   *      the concatenated output @f$\n   *        y = [\\begin{array}{cccc} x_1 & x_2 & ... & x_K \\end{array}]\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the concatenate inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *        respect to the outputs\n   *   -# @f$ (KN \\times C \\times H \\times W) @f$ if axis == 0, or\n   *      @f$ (N \\times KC \\times H \\times W) @f$ if axis == 1:\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to concatenated outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length K), into which the top gradient\n   *        @f$ \\frac{\\partial E}{\\partial y} @f$ is deconcatenated back to the\n   *        inputs @f$\n   *        \\left[ \\begin{array}{cccc}\n   *          \\frac{\\partial E}{\\partial x_1} &\n   *          \\frac{\\partial E}{\\partial x_2} &\n   *          ... &\n   *          \\frac{\\partial E}{\\partial x_K}\n   *        \\end{array} \\right] =\n   *        \\frac{\\partial E}{\\partial y}\n   *        @f$\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_CONADD_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/concat_layer.hpp",
    "content": "#ifndef CAFFE_CONCAT_LAYER_HPP_\n#define CAFFE_CONCAT_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Takes at least two Blob%s and concatenates them along either the num\n *        or channel dimension, outputting the result.\n */\ntemplate <typename Dtype>\nclass ConcatLayer : public Layer<Dtype> {\n public:\n  explicit ConcatLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Concat\"; }\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 2+)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_1 @f$\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_2 @f$\n   *   -# ...\n   *   - K @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x_K @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (KN \\times C \\times H \\times W) @f$ if axis == 0, or\n   *      @f$ (N \\times KC \\times H \\times W) @f$ if axis == 1:\n   *      the concatenated output @f$\n   *        y = [\\begin{array}{cccc} x_1 & x_2 & ... & x_K \\end{array}]\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the concatenated inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *        respect to the outputs\n   *   -# @f$ (KN \\times C \\times H \\times W) @f$ if axis == 0, or\n   *      @f$ (N \\times KC \\times H \\times W) @f$ if axis == 1:\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to concatenated outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length K), into which the top gradient\n   *        @f$ \\frac{\\partial E}{\\partial y} @f$ is deconcatenated back to the\n   *        inputs @f$\n   *        \\left[ \\begin{array}{cccc}\n   *          \\frac{\\partial E}{\\partial x_1} &\n   *          \\frac{\\partial E}{\\partial x_2} &\n   *          ... &\n   *          \\frac{\\partial E}{\\partial x_K}\n   *        \\end{array} \\right] =\n   *        \\frac{\\partial E}{\\partial y}\n   *        @f$\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int count_;\n  int num_concats_;\n  int concat_input_size_;\n  int concat_axis_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_CONCAT_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/contrastive_loss_layer.hpp",
    "content": "#ifndef CAFFE_CONTRASTIVE_LOSS_LAYER_HPP_\n#define CAFFE_CONTRASTIVE_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the contrastive loss @f$\n *          E = \\frac{1}{2N} \\sum\\limits_{n=1}^N \\left(y\\right) d^2 +\n *              \\left(1-y\\right) \\max \\left(margin-d, 0\\right)^2\n *          @f$ where @f$\n *          d = \\left| \\left| a_n - b_n \\right| \\right|_2 @f$. This can be\n *          used to train siamese networks.\n *\n * @param bottom input Blob vector (length 3)\n *   -# @f$ (N \\times C \\times 1 \\times 1) @f$\n *      the features @f$ a \\in [-\\infty, +\\infty]@f$\n *   -# @f$ (N \\times C \\times 1 \\times 1) @f$\n *      the features @f$ b \\in [-\\infty, +\\infty]@f$\n *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n *      the binary similarity @f$ s \\in [0, 1]@f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed contrastive loss: @f$ E =\n *          \\frac{1}{2N} \\sum\\limits_{n=1}^N \\left(y\\right) d^2 +\n *          \\left(1-y\\right) \\max \\left(margin-d, 0\\right)^2\n *          @f$ where @f$\n *          d = \\left| \\left| a_n - b_n \\right| \\right|_2 @f$.\n * This can be used to train siamese networks.\n */\ntemplate <typename Dtype>\nclass ContrastiveLossLayer : public LossLayer<Dtype> {\n public:\n  explicit ContrastiveLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param), diff_() {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline int ExactNumBottomBlobs() const { return 3; }\n  virtual inline const char* type() const { return \"ContrastiveLoss\"; }\n  /**\n   * Unlike most loss layers, in the ContrastiveLossLayer we can backpropagate\n   * to the first two 
inputs.\n   */\n  virtual inline bool AllowForceBackward(const int bottom_index) const {\n    return bottom_index != 2;\n  }\n\n protected:\n  /// @copydoc ContrastiveLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the Contrastive error gradient w.r.t. the inputs.\n   *\n   * Computes the gradients with respect to the two input vectors (bottom[0] and\n   * bottom[1]), but not the similarity label (bottom[2]).\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times 1 \\times 1) @f$\n   *      the features @f$a@f$; Backward fills their diff with\n   *      gradients if propagate_down[0]\n   *   -# @f$ (N \\times C \\times 1 \\times 1) @f$\n   *      the features @f$b@f$; Backward fills their diff with gradients if\n   *      propagate_down[1]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> diff_;  // 
cached for backward pass\n  Blob<Dtype> dist_sq_;  // cached for backward pass\n  Blob<Dtype> diff_sq_;  // tmp storage for gpu forward pass\n  Blob<Dtype> summer_vec_;  // tmp storage for gpu forward pass\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_CONTRASTIVE_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/conv_layer.hpp",
    "content": "#ifndef CAFFE_CONV_LAYER_HPP_\n#define CAFFE_CONV_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/base_conv_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Convolves the input image with a bank of learned filters,\n *        and (optionally) adds biases.\n *\n *   Caffe convolves by reduction to matrix multiplication. This achieves\n *   high-throughput and generality of input and filter dimensions but comes at\n *   the cost of memory for matrices. This makes use of efficiency in BLAS.\n *\n *   The input is \"im2col\" transformed to a channel K' x H x W data matrix\n *   for multiplication with the N x K' x H x W filter matrix to yield a\n *   N' x H x W output matrix that is then \"col2im\" restored. K' is the\n *   input channel * kernel height * kernel width dimension of the unrolled\n *   inputs so that the im2col matrix has a column for each input region to\n *   be filtered. col2im restores the output spatial structure by rolling up\n *   the output channel N' columns of the output matrix.\n */\ntemplate <typename Dtype>\nclass ConvolutionLayer : public BaseConvolutionLayer<Dtype> {\n public:\n  /**\n   * @param param provides ConvolutionParameter convolution_param,\n   *    with ConvolutionLayer options:\n   *  - num_output. The number of filters.\n   *  - kernel_size / kernel_h / kernel_w. The filter dimensions, given by\n   *  kernel_size for square filters or kernel_h and kernel_w for rectangular\n   *  filters.\n   *  - stride / stride_h / stride_w (\\b optional, default 1). The filter\n   *  stride, given by stride_size for equal dimensions or stride_h and stride_w\n   *  for different strides. By default the convolution is dense with stride 1.\n   *  - pad / pad_h / pad_w (\\b optional, default 0). The zero-padding for\n   *  convolution, given by pad for equal dimensions or pad_h and pad_w for\n   *  different padding. 
Input padding is computed implicitly instead of\n   *  actually padding.\n   *  - dilation (\\b optional, default 1). The filter\n   *  dilation, given by dilation_size for equal dimensions for different\n   *  dilation. By default the convolution has dilation 1.\n   *  - group (\\b optional, default 1). The number of filter groups. Group\n   *  convolution is a method for reducing parameterization by selectively\n   *  connecting input and output channels. The input and output channel dimensions must be divisible\n   *  by the number of groups. For group @f$ \\geq 1 @f$, the\n   *  convolutional filters' input and output channels are separated s.t. each\n   *  group takes 1 / group of the input channels and makes 1 / group of the\n   *  output channels. Concretely 4 input channels, 8 output channels, and\n   *  2 groups separate input channels 1-2 and output channels 1-4 into the\n   *  first group and input channels 3-4 and output channels 5-8 into the second\n   *  group.\n   *  - bias_term (\\b optional, default true). 
Whether to have a bias.\n   *  - engine: convolution has CAFFE (matrix multiplication) and CUDNN (library\n   *    kernels + stream parallelism) engines.\n   */\n  explicit ConvolutionLayer(const LayerParameter& param)\n      : BaseConvolutionLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"Convolution\"; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual inline bool reverse_dimensions() { return false; }\n  virtual void compute_output_shape();\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_CONV_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/crop_layer.hpp",
"content": "#ifndef CAFFE_CROP_LAYER_HPP_\n#define CAFFE_CROP_LAYER_HPP_\n\n#include <utility>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Takes a Blob and crops it to the shape specified by the second input\n *  Blob, across all dimensions after the specified axis.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\n\ntemplate <typename Dtype>\nclass CropLayer : public Layer<Dtype> {\n public:\n  explicit CropLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Crop\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<int> offsets;\n  Blob<int> src_strides_;\n  Blob<int> dest_strides_;\n\n private:\n  // Recursive copy function.\n  void crop_copy(const vector<Blob<Dtype>*>& bottom,\n               const vector<Blob<Dtype>*>& top,\n               const int* offsets,\n               vector<int> indices,\n               int cur_dim,\n               const Dtype* src_data,\n               Dtype* dest_data,\n               bool is_forward);\n\n  // Recursive copy
function: this is similar to crop_copy() but loops over all\n  // but the last two dimensions to allow for ND cropping while still relying on\n  // a CUDA kernel for the innermost two dimensions for performance reasons.  An\n  // alternative implementation could rely on the kernel more by passing\n  // offsets, but this is problematic because of its variable length.\n  // Since in the standard (N,C,W,H) case N,C are usually not cropped a speedup\n  // could be achieved by not looping the application of the copy_kernel around\n  // these dimensions.\n  void crop_copy_gpu(const vector<Blob<Dtype>*>& bottom,\n                const vector<Blob<Dtype>*>& top,\n                const vector<int>& offsets,\n                vector<int> indices,\n                int cur_dim,\n                const Dtype* src_data,\n                Dtype* dest_data,\n                bool is_forward);\n};\n}  // namespace caffe\n\n#endif  // CAFFE_CROP_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_conv_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_CONV_LAYER_HPP_\n#define CAFFE_CUDNN_CONV_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/conv_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/*\n * @brief cuDNN implementation of ConvolutionLayer.\n *        Fallback to ConvolutionLayer for CPU mode.\n *\n * cuDNN accelerates convolution through forward kernels for filtering and bias\n * plus backward kernels for the gradient w.r.t. the filters, biases, and\n * inputs. Caffe + cuDNN further speeds up the computation through forward\n * parallelism across groups and backward parallelism across gradients.\n *\n * The CUDNN engine does not have memory overhead for matrix buffers. For many\n * input and filter regimes the CUDNN engine is faster than the CAFFE engine,\n * but for fully-convolutional models and large inputs the CAFFE engine can be\n * faster as long as it fits in memory.\n*/\ntemplate <typename Dtype>\nclass CuDNNConvolutionLayer : public ConvolutionLayer<Dtype> {\n public:\n  explicit CuDNNConvolutionLayer(const LayerParameter& param)\n      : ConvolutionLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNConvolutionLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t* handle_;\n  cudaStream_t*  stream_;\n\n  // algorithms for forward and backwards convolutions\n  cudnnConvolutionFwdAlgo_t *fwd_algo_;\n  cudnnConvolutionBwdFilterAlgo_t *bwd_filter_algo_;\n  
cudnnConvolutionBwdDataAlgo_t *bwd_data_algo_;\n\n  vector<cudnnTensorDescriptor_t> bottom_descs_, top_descs_;\n  cudnnTensorDescriptor_t    bias_desc_;\n  cudnnFilterDescriptor_t      filter_desc_;\n  vector<cudnnConvolutionDescriptor_t> conv_descs_;\n  int bottom_offset_, top_offset_, bias_offset_;\n\n  size_t *workspace_fwd_sizes_;\n  size_t *workspace_bwd_data_sizes_;\n  size_t *workspace_bwd_filter_sizes_;\n  size_t workspaceSizeInBytes;  // size of underlying storage\n  void *workspaceData;  // underlying storage\n  void **workspace;  // aliases into workspaceData\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_CONV_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_lcn_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_LCN_LAYER_HPP_\n#define CAFFE_CUDNN_LCN_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/lrn_layer.hpp\"\n#include \"caffe/layers/power_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNLCNLayer : public LRNLayer<Dtype> {\n public:\n  explicit CuDNNLCNLayer(const LayerParameter& param)\n      : LRNLayer<Dtype>(param), handles_setup_(false), tempDataSize(0),\n        tempData1(NULL), tempData2(NULL) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNLCNLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnLRNDescriptor_t norm_desc_;\n  cudnnTensorDescriptor_t bottom_desc_, top_desc_;\n\n  int size_, pre_pad_;\n  Dtype alpha_, beta_, k_;\n\n  size_t tempDataSize;\n  void *tempData1, *tempData2;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_LCN_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_lrn_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_LRN_LAYER_HPP_\n#define CAFFE_CUDNN_LRN_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/lrn_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNLRNLayer : public LRNLayer<Dtype> {\n public:\n  explicit CuDNNLRNLayer(const LayerParameter& param)\n      : LRNLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNLRNLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnLRNDescriptor_t norm_desc_;\n  cudnnTensorDescriptor_t bottom_desc_, top_desc_;\n\n  int size_;\n  Dtype alpha_, beta_, k_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_LRN_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_pooling_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_POOLING_LAYER_HPP_\n#define CAFFE_CUDNN_POOLING_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/pooling_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/*\n * @brief cuDNN implementation of PoolingLayer.\n *        Fallback to PoolingLayer for CPU mode.\n*/\ntemplate <typename Dtype>\nclass CuDNNPoolingLayer : public PoolingLayer<Dtype> {\n public:\n  explicit CuDNNPoolingLayer(const LayerParameter& param)\n      : PoolingLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNPoolingLayer();\n  // Currently, cuDNN does not support the extra top blob.\n  virtual inline int MinTopBlobs() const { return -1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnTensorDescriptor_t bottom_desc_, top_desc_;\n  cudnnPoolingDescriptor_t  pooling_desc_;\n  cudnnPoolingMode_t        mode_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_POOLING_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_relu_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_RELU_LAYER_HPP_\n#define CAFFE_CUDNN_RELU_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n#include \"caffe/layers/relu_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/**\n * @brief CuDNN acceleration of ReLULayer.\n */\ntemplate <typename Dtype>\nclass CuDNNReLULayer : public ReLULayer<Dtype> {\n public:\n  explicit CuDNNReLULayer(const LayerParameter& param)\n      : ReLULayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNReLULayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnTensorDescriptor_t bottom_desc_;\n  cudnnTensorDescriptor_t top_desc_;\n  cudnnActivationDescriptor_t activ_desc_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_RELU_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_sigmoid_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_SIGMOID_LAYER_HPP_\n#define CAFFE_CUDNN_SIGMOID_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n#include \"caffe/layers/sigmoid_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/**\n * @brief CuDNN acceleration of SigmoidLayer.\n */\ntemplate <typename Dtype>\nclass CuDNNSigmoidLayer : public SigmoidLayer<Dtype> {\n public:\n  explicit CuDNNSigmoidLayer(const LayerParameter& param)\n      : SigmoidLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNSigmoidLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnTensorDescriptor_t bottom_desc_;\n  cudnnTensorDescriptor_t top_desc_;\n  cudnnActivationDescriptor_t activ_desc_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_SIGMOID_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_softmax_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_SOFTMAX_LAYER_HPP_\n#define CAFFE_CUDNN_SOFTMAX_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/softmax_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/**\n * @brief cuDNN implementation of SoftmaxLayer.\n *        Fallback to SoftmaxLayer for CPU mode.\n */\ntemplate <typename Dtype>\nclass CuDNNSoftmaxLayer : public SoftmaxLayer<Dtype> {\n public:\n  explicit CuDNNSoftmaxLayer(const LayerParameter& param)\n      : SoftmaxLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNSoftmaxLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n     const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnTensorDescriptor_t bottom_desc_;\n  cudnnTensorDescriptor_t top_desc_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_SOFTMAX_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/cudnn_tanh_layer.hpp",
    "content": "#ifndef CAFFE_CUDNN_TANH_LAYER_HPP_\n#define CAFFE_CUDNN_TANH_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n#include \"caffe/layers/tanh_layer.hpp\"\n\nnamespace caffe {\n\n#ifdef USE_CUDNN\n/**\n * @brief CuDNN acceleration of TanHLayer.\n */\ntemplate <typename Dtype>\nclass CuDNNTanHLayer : public TanHLayer<Dtype> {\n public:\n  explicit CuDNNTanHLayer(const LayerParameter& param)\n      : TanHLayer<Dtype>(param), handles_setup_(false) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual ~CuDNNTanHLayer();\n\n protected:\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool handles_setup_;\n  cudnnHandle_t             handle_;\n  cudnnTensorDescriptor_t bottom_desc_;\n  cudnnTensorDescriptor_t top_desc_;\n  cudnnActivationDescriptor_t activ_desc_;\n};\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_CUDNN_TANH_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/data_layer.hpp",
    "content": "#ifndef CAFFE_DATA_LAYER_HPP_\n#define CAFFE_DATA_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/data_reader.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass DataLayer : public BasePrefetchingDataLayer<Dtype> {\n public:\n  explicit DataLayer(const LayerParameter& param);\n  virtual ~DataLayer();\n  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  // DataLayer uses DataReader instead for sharing for parallelism\n  virtual inline bool ShareInParallel() const { return false; }\n  virtual inline const char* type() const { return \"Data\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int MinTopBlobs() const { return 1; }\n  virtual inline int MaxTopBlobs() const { return 2; }\n\n protected:\n  virtual void load_batch(Batch<Dtype>* batch);\n\n  DataReader reader_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/deconv_layer.hpp",
    "content": "#ifndef CAFFE_DECONV_LAYER_HPP_\n#define CAFFE_DECONV_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/base_conv_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Convolve the input with a bank of learned filters, and (optionally)\n *        add biases, treating filters and convolution parameters in the\n *        opposite sense as ConvolutionLayer.\n *\n *   ConvolutionLayer computes each output value by dotting an input window with\n *   a filter; DeconvolutionLayer multiplies each input value by a filter\n *   elementwise, and sums over the resulting output windows. In other words,\n *   DeconvolutionLayer is ConvolutionLayer with the forward and backward passes\n *   reversed. DeconvolutionLayer reuses ConvolutionParameter for its\n *   parameters, but they take the opposite sense as in ConvolutionLayer (so\n *   padding is removed from the output rather than added to the input, and\n *   stride results in upsampling rather than downsampling).\n */\ntemplate <typename Dtype>\nclass DeconvolutionLayer : public BaseConvolutionLayer<Dtype> {\n public:\n  explicit DeconvolutionLayer(const LayerParameter& param)\n      : BaseConvolutionLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"Deconvolution\"; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual inline bool reverse_dimensions() { return true; }\n  virtual void compute_output_shape();\n};\n\n}  // 
namespace caffe\n\n#endif  // CAFFE_DECONV_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/deformable_conv_layer.hpp",
"content": "#ifndef CAFFE_DEFORMABLE_CONV_LAYER_HPP_\r\n#define CAFFE_DEFORMABLE_CONV_LAYER_HPP_\r\n\r\n#include <vector>\r\n\r\n#include \"caffe/blob.hpp\"\r\n#include \"caffe/layer.hpp\"\r\n#include \"caffe/proto/caffe.pb.h\"\r\n#include \"caffe/util/deformable_im2col.hpp\"\r\n#include \"caffe/layers/base_conv_layer.hpp\"\r\n\r\nnamespace caffe {\r\n\r\n/**\r\n * @brief Convolves the input image with a bank of learned filters,\r\n *        and (optionally) adds biases.\r\n *\r\n *   Caffe convolves by reduction to matrix multiplication. This achieves\r\n *   high-throughput and generality of input and filter dimensions but comes at\r\n *   the cost of memory for matrices. This makes use of efficiency in BLAS.\r\n *\r\n *   The input is \"im2col\" transformed to a channel K' x H x W data matrix\r\n *   for multiplication with the N x K' x H x W filter matrix to yield a\r\n *   N' x H x W output matrix that is then \"col2im\" restored. K' is the\r\n *   input channel * kernel height * kernel width dimension of the unrolled\r\n *   inputs so that the im2col matrix has a column for each input region to\r\n *   be filtered. col2im restores the output spatial structure by rolling up\r\n *   the output channel N' columns of the output matrix.\r\n */\r\ntemplate <typename Dtype>\r\nclass DeformableConvolutionLayer : public BaseConvolutionLayer<Dtype> {\r\n public:\r\n  /**\r\n   * @param param provides ConvolutionParameter convolution_param,\r\n   *    with ConvolutionLayer options:\r\n   *  - num_output. The number of filters.\r\n   *  - kernel_size / kernel_h / kernel_w. The filter dimensions, given by\r\n   *  kernel_size for square filters or kernel_h and kernel_w for rectangular\r\n   *  filters.\r\n   *  - stride / stride_h / stride_w (\\b optional, default 1). The filter\r\n   *  stride, given by stride_size for equal dimensions or stride_h and stride_w\r\n   *  for different strides.
By default the convolution is dense with stride 1.\r\n   *  - pad / pad_h / pad_w (\\b optional, default 0). The zero-padding for\r\n   *  convolution, given by pad for equal dimensions or pad_h and pad_w for\r\n   *  different padding. Input padding is computed implicitly instead of\r\n   *  actually padding.\r\n   *  - dilation (\\b optional, default 1). The filter\r\n   *  dilation, given by dilation_size for equal dimensions for different\r\n   *  dilation. By default the convolution has dilation 1.\r\n   *  - group (\\b optional, default 1). The number of filter groups. Group\r\n   *  convolution is a method for reducing parameterization by selectively\r\n   *  connecting input and output channels. The input and output channel dimensions must be divisible\r\n   *  by the number of groups. For group @f$ \\geq 1 @f$, the\r\n   *  convolutional filters' input and output channels are separated s.t. each\r\n   *  group takes 1 / group of the input channels and makes 1 / group of the\r\n   *  output channels. Concretely 4 input channels, 8 output channels, and\r\n   *  2 groups separate input channels 1-2 and output channels 1-4 into the\r\n   *  first group and input channels 3-4 and output channels 5-8 into the second\r\n   *  group.\r\n   *  - bias_term (\\b optional, default true). 
Whether to have a bias.\r\n   *  - engine: convolution has CAFFE (matrix multiplication) and CUDNN (library\r\n   *    kernels + stream parallelism) engines.\r\n   */\r\n  explicit DeformableConvolutionLayer(const LayerParameter& param)\r\n      : BaseConvolutionLayer<Dtype>(param) {}\r\n\r\n  virtual inline const char* type() const { return \"DeformableConvolution\"; }\r\n  virtual inline bool EqualNumBottomTopBlobs() const { return false; }\r\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top);\r\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top);\r\n\r\n protected:\r\n  int input_shape(int i);\r\n  // reverse_dimensions should return true iff we are implementing deconv, so\r\n  // that conv helpers know which dimensions are which.\r\n\r\n  // CPU paths are intentionally left unimplemented; this layer is GPU-only.\r\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top) {}\r\n\r\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\r\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\r\n\r\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top);\r\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\r\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\r\n  virtual inline bool reverse_dimensions() { return false; }\r\n  virtual void compute_output_shape();\r\n private:\r\n  int num_kernels_im2col_;\r\n  int num_kernels_col2im_;\r\n  int conv_out_channels_;\r\n  int conv_in_channels_;\r\n  int conv_out_spatial_dim_;\r\n  int kernel_dim_;\r\n  int col_offset_;\r\n  int output_offset_;\r\n  int input_offset_dim_;\r\n  int deformable_group_;\r\n\r\n  Blob<Dtype> col_buffer_;\r\n  Blob<Dtype> bias_multiplier_;\r\n};\r\n\r\n}  // namespace caffe\r\n\r\n#endif  // CAFFE_DEFORMABLE_CONV_LAYER_HPP_\r\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/dropout_layer.hpp",
    "content": "#ifndef CAFFE_DROPOUT_LAYER_HPP_\n#define CAFFE_DROPOUT_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief During training only, sets a random portion of @f$x@f$ to 0, adjusting\n *        the rest of the vector magnitude accordingly.\n *\n * @param bottom input Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the inputs @f$ x @f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the computed outputs @f$ y @f$\n */\ntemplate <typename Dtype>\nclass DropoutLayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides DropoutParameter dropout_param,\n   *     with DropoutLayer options:\n   *   - dropout_ratio (\\b optional, default 0.5).\n   *     Sets the probability @f$ p @f$ that any given unit is dropped.\n   */\n  explicit DropoutLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Dropout\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs. 
At training time, we have @f$\n   *      y_{\\mbox{train}} = \\left\\{\n   *         \\begin{array}{ll}\n   *            \\frac{x}{1 - p} & \\mbox{if } u > p \\\\\n   *            0 & \\mbox{otherwise}\n   *         \\end{array} \\right.\n   *      @f$, where @f$ u \\sim U(0, 1)@f$ is generated independently for each\n   *      input at each iteration. At test time, we simply have\n   *      @f$ y_{\\mbox{test}} = \\mathbb{E}[y_{\\mbox{train}}] = x @f$.\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// when divided by UINT_MAX, the randomly generated values @f$u\\sim U(0,1)@f$\n  Blob<unsigned int> rand_vec_;\n  /// the probability @f$ p @f$ of dropping any input\n  Dtype threshold_;\n  /// the scale for undropped inputs at train time @f$ 1 / (1 - p) @f$\n  Dtype scale_;\n  unsigned int uint_thres_;\n  bool scale_train_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_DROPOUT_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/dummy_data_layer.hpp",
    "content": "#ifndef CAFFE_DUMMY_DATA_LAYER_HPP_\n#define CAFFE_DUMMY_DATA_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Provides data to the Net generated by a Filler.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass DummyDataLayer : public Layer<Dtype> {\n public:\n  explicit DummyDataLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  // Data layers should be shared by multiple solvers in parallel\n  virtual inline bool ShareInParallel() const { return true; }\n  // Data layers have no bottoms, so reshaping is trivial.\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  virtual inline const char* type() const { return \"DummyData\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int MinTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n\n  vector<shared_ptr<Filler<Dtype> > > fillers_;\n  vector<bool> refill_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_DUMMY_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/eltwise_layer.hpp",
    "content": "#ifndef CAFFE_ELTWISE_LAYER_HPP_\n#define CAFFE_ELTWISE_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Compute elementwise operations, such as product and sum,\n *        along multiple input Blobs.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass EltwiseLayer : public Layer<Dtype> {\n public:\n  explicit EltwiseLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Eltwise\"; }\n  virtual inline int MinBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  EltwiseParameter_EltwiseOp op_;\n  vector<Dtype> coeffs_;\n  Blob<int> max_idx_;\n\n  bool stable_prod_grad_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_ELTWISE_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/elu_layer.hpp",
    "content": "#ifndef CAFFE_ELU_LAYER_HPP_\n#define CAFFE_ELU_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Exponential Linear Unit non-linearity @f$\n *        y = \\left\\{\n *        \\begin{array}{lr}\n *            x                  & \\mathrm{if} \\; x > 0 \\\\\n *            \\alpha (\\exp(x)-1) & \\mathrm{if} \\; x \\le 0\n *        \\end{array} \\right.\n *      @f$.  \n */\ntemplate <typename Dtype>\nclass ELULayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides ELUParameter elu_param,\n   *     with ELULayer options:\n   *   - alpha (\\b optional, default 1).\n   *     the value @f$ \\alpha @f$ by which controls saturation for negative inputs.\n   */\n  explicit ELULayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"ELU\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = \\left\\{\n   *        \\begin{array}{lr}\n   *            x                  & \\mathrm{if} \\; x > 0 \\\\\n   *            \\alpha (\\exp(x)-1) & \\mathrm{if} \\; x \\le 0\n   *        \\end{array} \\right.\n   *      @f$.  \n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the ELU inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} = \\frac{\\partial E}{\\partial y} \\cdot \\left\\{\n   *        \\begin{array}{lr}\n   *            1           & \\mathrm{if} \\; x > 0 \\\\\n   *            y + \\alpha  & \\mathrm{if} \\; x \\le 0\n   *        \\end{array} \\right.\n   *      @f$ if propagate_down[0].\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_ELU_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/embed_layer.hpp",
    "content": "#ifndef CAFFE_EMBED_LAYER_HPP_\n#define CAFFE_EMBED_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief A layer for learning \"embeddings\" of one-hot vector input.\n *        Equivalent to an InnerProductLayer with one-hot vectors as input, but\n *        for efficiency the input is the \"hot\" index of each column itself.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass EmbedLayer : public Layer<Dtype> {\n public:\n  explicit EmbedLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Embed\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int M_;\n  int K_;\n  int N_;\n  bool bias_term_;\n  Blob<Dtype> bias_multiplier_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_EMBED_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/euclidean_loss_layer.hpp",
    "content": "#ifndef CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_\n#define CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the Euclidean (L2) loss @f$\n *          E = \\frac{1}{2N} \\sum\\limits_{n=1}^N \\left| \\left| \\hat{y}_n - y_n\n *        \\right| \\right|_2^2 @f$ for real-valued regression tasks.\n *\n * @param bottom input Blob vector (length 2)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the predictions @f$ \\hat{y} \\in [-\\infty, +\\infty]@f$\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the targets @f$ y \\in [-\\infty, +\\infty]@f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed Euclidean loss: @f$ E =\n *          \\frac{1}{2n} \\sum\\limits_{n=1}^N \\left| \\left| \\hat{y}_n - y_n\n *        \\right| \\right|_2^2 @f$\n *\n * This can be used for least-squares regression tasks.  An InnerProductLayer\n * input to a EuclideanLossLayer exactly formulates a linear least squares\n * regression problem. With non-zero weight decay the problem becomes one of\n * ridge regression -- see src/caffe/test/test_sgd_solver.cpp for a concrete\n * example wherein we check that the gradients computed for a Net with exactly\n * this structure match hand-computed gradient formulas for ridge regression.\n *\n * (Note: Caffe, and SGD in general, is certainly \\b not the best way to solve\n * linear least squares problems! 
We use it only as an instructive example.)\n */\ntemplate <typename Dtype>\nclass EuclideanLossLayer : public LossLayer<Dtype> {\n public:\n  explicit EuclideanLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param), diff_() {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"EuclideanLoss\"; }\n  /**\n   * Unlike most loss layers, in the EuclideanLossLayer we can backpropagate\n   * to both inputs -- override to return true and always allow force_backward.\n   */\n  virtual inline bool AllowForceBackward(const int bottom_index) const {\n    return true;\n  }\n\n protected:\n  /// @copydoc EuclideanLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the Euclidean error gradient w.r.t. the inputs.\n   *\n   * Unlike other children of LossLayer, EuclideanLossLayer \\b can compute\n   * gradients with respect to the label inputs bottom[1] (but still only will\n   * if propagate_down[1] is set, due to being produced by learnable parameters\n   * or if force_backward is set). 
In fact, this layer is \"commutative\" -- the\n   * result is the same regardless of the order of the two bottoms.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$\\hat{y}@f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial \\hat{y}} =\n   *            \\frac{1}{n} \\sum\\limits_{n=1}^N (\\hat{y}_n - y_n)\n   *      @f$ if propagate_down[0]\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the targets @f$y@f$; Backward fills their diff with gradients\n   *      @f$ \\frac{\\partial E}{\\partial y} =\n   *          \\frac{1}{n} \\sum\\limits_{n=1}^N (y_n - \\hat{y}_n)\n   *      @f$ if propagate_down[1]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> diff_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/exp_layer.hpp",
    "content": "#ifndef CAFFE_EXP_LAYER_HPP_\n#define CAFFE_EXP_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes @f$ y = \\gamma ^ {\\alpha x + \\beta} @f$,\n *        as specified by the scale @f$ \\alpha @f$, shift @f$ \\beta @f$,\n *        and base @f$ \\gamma @f$.\n */\ntemplate <typename Dtype>\nclass ExpLayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides ExpParameter exp_param,\n   *     with ExpLayer options:\n   *   - scale (\\b optional, default 1) the scale @f$ \\alpha @f$\n   *   - shift (\\b optional, default 0) the shift @f$ \\beta @f$\n   *   - base (\\b optional, default -1 for a value of @f$ e \\approx 2.718 @f$)\n   *         the base @f$ \\gamma @f$\n   */\n  explicit ExpLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Exp\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = \\gamma ^ {\\alpha x + \\beta}\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the exp inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} =\n   *            \\frac{\\partial E}{\\partial y} y \\alpha \\log_e(\\gamma)\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Dtype inner_scale_, outer_scale_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_EXP_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/filter_layer.hpp",
    "content": "#ifndef CAFFE_FILTER_LAYER_HPP_\n#define CAFFE_FILTER_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Takes two or more Blobs, interprets the last Blob as a selector, and\n *  filters the remaining Blobs according to the selector data (0 means the\n *  corresponding item is filtered out, non-zero means it is kept).\n */\ntemplate <typename Dtype>\nclass FilterLayer : public Layer<Dtype> {\n public:\n  explicit FilterLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Filter\"; }\n  virtual inline int MinBottomBlobs() const { return 2; }\n  virtual inline int MinTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 2+)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs to be filtered @f$ x_1 @f$\n   *   -# ...\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs to be filtered @f$ x_K @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the selector blob\n   * @param top output Blob vector (length 1+)\n   *   -# @f$ (S \\times C \\times H \\times W) @f$\n   *        the filtered output @f$ x_1 @f$,\n   *        where S is the number of items that haven't been filtered\n   *   -# ...\n   *   -# @f$ (S \\times C \\times H \\times W) @f$\n   *        the filtered output @f$ x_K @f$,\n   *        where S is the number of items that haven't been filtered\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n  
  const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. the forwarded inputs.\n   *\n   * @param top output Blob vector (length 1+), providing the error gradient with\n   *        respect to the outputs\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 2+), into which the top error\n   *        gradient is copied\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool first_reshape_;\n  vector<int> indices_to_forward_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_FILTER_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/flatten_layer.hpp",
    "content": "#ifndef CAFFE_FLATTEN_LAYER_HPP_\n#define CAFFE_FLATTEN_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Reshapes the input Blob into flat vectors.\n *\n * Note: because this layer does not change the input values -- merely the\n * dimensions -- it can simply copy the input. The copy happens \"virtually\"\n * (thus taking effectively 0 real time) by setting, in Forward, the data\n * pointer of the top Blob to that of the bottom Blob (see Blob::ShareData),\n * and in Backward, the diff pointer of the bottom Blob to that of the top Blob\n * (see Blob::ShareDiff).\n */\ntemplate <typename Dtype>\nclass FlattenLayer : public Layer<Dtype> {\n public:\n  explicit FlattenLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Flatten\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times CHW \\times 1 \\times 1) @f$\n   *      the outputs -- i.e., the (virtually) copied, flattened inputs\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the flattened inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *        respect to the outputs\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1), into which the top error\n   *        gradient is (virtually) copied\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_FLATTEN_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/hdf5_data_layer.hpp",
    "content": "#ifndef CAFFE_HDF5_DATA_LAYER_HPP_\n#define CAFFE_HDF5_DATA_LAYER_HPP_\n\n#include \"hdf5.h\"\n\n#include <string>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/base_data_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Provides data to the Net from HDF5 files.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass HDF5DataLayer : public Layer<Dtype> {\n public:\n  explicit HDF5DataLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual ~HDF5DataLayer();\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  // Data layers should be shared by multiple solvers in parallel\n  virtual inline bool ShareInParallel() const { return true; }\n  // Data layers have no bottoms, so reshaping is trivial.\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  virtual inline const char* type() const { return \"HDF5Data\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int MinTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n  virtual void LoadHDF5FileData(const char* filename);\n\n  std::vector<std::string> hdf_filenames_;\n  unsigned int num_files_;\n  unsigned int current_file_;\n  hsize_t current_row_;\n  std::vector<shared_ptr<Blob<Dtype> > > hdf_blobs_;\n  
std::vector<unsigned int> data_permutation_;\n  std::vector<unsigned int> file_permutation_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_HDF5_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/hdf5_output_layer.hpp",
    "content": "#ifndef CAFFE_HDF5_OUTPUT_LAYER_HPP_\n#define CAFFE_HDF5_OUTPUT_LAYER_HPP_\n\n#include \"hdf5.h\"\n\n#include <string>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n#define HDF5_DATA_DATASET_NAME \"data\"\n#define HDF5_DATA_LABEL_NAME \"label\"\n\n/**\n * @brief Write blobs to disk as HDF5 files.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass HDF5OutputLayer : public Layer<Dtype> {\n public:\n  explicit HDF5OutputLayer(const LayerParameter& param)\n      : Layer<Dtype>(param), file_opened_(false) {}\n  virtual ~HDF5OutputLayer();\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  // Data layers should be shared by multiple solvers in parallel\n  virtual inline bool ShareInParallel() const { return true; }\n  // Data layers have no bottoms, so reshaping is trivial.\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  virtual inline const char* type() const { return \"HDF5Output\"; }\n  // TODO: no limit on the number of blobs\n  virtual inline int ExactNumBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 0; }\n\n  inline std::string file_name() const { return file_name_; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void SaveBlobs();\n\n  bool file_opened_;\n  std::string 
file_name_;\n  hid_t file_id_;\n  Blob<Dtype> data_blob_;\n  Blob<Dtype> label_blob_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_HDF5_OUTPUT_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/hinge_loss_layer.hpp",
    "content": "#ifndef CAFFE_HINGE_LOSS_LAYER_HPP_\n#define CAFFE_HINGE_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the hinge loss for a one-of-many classification task.\n *\n * @param bottom input Blob vector (length 2)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the predictions @f$ t @f$, a Blob with values in\n *      @f$ [-\\infty, +\\infty] @f$ indicating the predicted score for each of\n *      the @f$ K = CHW @f$ classes. In an SVM, @f$ t @f$ is the result of\n *      taking the inner product @f$ X^T W @f$ of the D-dimensional features\n *      @f$ X \\in \\mathcal{R}^{D \\times N} @f$ and the learned hyperplane\n *      parameters @f$ W \\in \\mathcal{R}^{D \\times K} @f$, so a Net with just\n *      an InnerProductLayer (with num_output = D) providing predictions to a\n *      HingeLossLayer and no other learnable parameters or losses is\n *      equivalent to an SVM.\n *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n *      the labels @f$ l @f$, an integer-valued Blob with values\n *      @f$ l_n \\in [0, 1, 2, ..., K - 1] @f$\n *      indicating the correct class label among the @f$ K @f$ classes\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed hinge loss: @f$ E =\n *        \\frac{1}{N} \\sum\\limits_{n=1}^N \\sum\\limits_{k=1}^K\n *        [\\max(0, 1 - \\delta\\{l_n = k\\} t_{nk})] ^ p\n *      @f$, for the @f$ L^p @f$ norm\n *      (defaults to @f$ p = 1 @f$, the L1 norm; L2 norm, as in L2-SVM,\n *      is also available), and @f$\n *      \\delta\\{\\mathrm{condition}\\} = \\left\\{\n *         \\begin{array}{lr}\n *            1 & \\mbox{if condition} \\\\\n *           -1 & \\mbox{otherwise}\n *         \\end{array} \\right.\n *      @f$\n *\n * In an SVM, @f$ t \\in 
\\mathcal{R}^{N \\times K} @f$ is the result of taking\n * the inner product @f$ X^T W @f$ of the features\n * @f$ X \\in \\mathcal{R}^{D \\times N} @f$\n * and the learned hyperplane parameters\n * @f$ W \\in \\mathcal{R}^{D \\times K} @f$. So, a Net with just an\n * InnerProductLayer (with num_output = @f$k@f$) providing predictions to a\n * HingeLossLayer is equivalent to an SVM (assuming it has no other learned\n * parameters outside the InnerProductLayer and no other losses outside the\n * HingeLossLayer).\n */\ntemplate <typename Dtype>\nclass HingeLossLayer : public LossLayer<Dtype> {\n public:\n  explicit HingeLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"HingeLoss\"; }\n\n protected:\n  /// @copydoc HingeLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the hinge loss error gradient w.r.t. the predictions.\n   *\n   * Gradients cannot be computed with respect to the label inputs (bottom[1]),\n   * so this method ignores bottom[1] and requires !propagate_down[1], crashing\n   * if propagate_down[1] is set.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   *      propagate_down[1] must be false as we can't compute gradients with\n   *      respect to the 
labels.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$t@f$; Backward computes diff\n   *      @f$ \\frac{\\partial E}{\\partial t} @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels -- ignored as we can't compute their error gradients\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_HINGE_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/im2col_layer.hpp",
    "content": "#ifndef CAFFE_IM2COL_LAYER_HPP_\n#define CAFFE_IM2COL_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief A helper for image operations that rearranges image regions into\n *        column vectors.  Used by ConvolutionLayer to perform convolution\n *        by matrix multiplication.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass Im2colLayer : public Layer<Dtype> {\n public:\n  explicit Im2colLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Im2col\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// @brief The spatial dimensions of a filter kernel.\n  Blob<int> kernel_shape_;\n  /// @brief The spatial dimensions of the stride.\n  Blob<int> stride_;\n  /// @brief The spatial dimensions of the padding.\n  Blob<int> pad_;\n  /// @brief The spatial dimensions of the dilation.\n  Blob<int> dilation_;\n\n  int num_spatial_axes_;\n  int bottom_dim_;\n  int top_dim_;\n\n  int channel_axis_;\n  int num_;\n  int 
channels_;\n\n  bool force_nd_im2col_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_IM2COL_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/image_data_layer.hpp",
    "content": "#ifndef CAFFE_IMAGE_DATA_LAYER_HPP_\n#define CAFFE_IMAGE_DATA_LAYER_HPP_\n\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Provides data to the Net from image files.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass ImageDataLayer : public BasePrefetchingDataLayer<Dtype> {\n public:\n  explicit ImageDataLayer(const LayerParameter& param)\n      : BasePrefetchingDataLayer<Dtype>(param) {}\n  virtual ~ImageDataLayer();\n  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"ImageData\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int ExactNumTopBlobs() const { return 2; }\n\n protected:\n  shared_ptr<Caffe::RNG> prefetch_rng_;\n  virtual void ShuffleImages();\n  virtual void load_batch(Batch<Dtype>* batch);\n\n  vector<std::pair<std::string, int> > lines_;\n  int lines_id_;\n};\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_IMAGE_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/infogain_loss_layer.hpp",
    "content": "#ifndef CAFFE_INFOGAIN_LOSS_LAYER_HPP_\n#define CAFFE_INFOGAIN_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief A generalization of MultinomialLogisticLossLayer that takes an\n *        \"information gain\" (infogain) matrix specifying the \"value\" of all label\n *        pairs.\n *\n * Equivalent to the MultinomialLogisticLossLayer if the infogain matrix is the\n * identity.\n *\n * @param bottom input Blob vector (length 2-3)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the predictions @f$ \\hat{p} @f$, a Blob with values in\n *      @f$ [0, 1] @f$ indicating the predicted probability of each of the\n *      @f$ K = CHW @f$ classes.  Each prediction vector @f$ \\hat{p}_n @f$\n *      should sum to 1 as in a probability distribution: @f$\n *      \\forall n \\sum\\limits_{k=1}^K \\hat{p}_{nk} = 1 @f$.\n *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n *      the labels @f$ l @f$, an integer-valued Blob with values\n *      @f$ l_n \\in [0, 1, 2, ..., K - 1] @f$\n *      indicating the correct class label among the @f$ K @f$ classes\n *   -# @f$ (1 \\times 1 \\times K \\times K) @f$\n *      (\\b optional) the infogain matrix @f$ H @f$.  This must be provided as\n *      the third bottom blob input if not provided as the infogain_mat in the\n *      InfogainLossParameter. 
If @f$ H = I @f$, this layer is equivalent to the\n *      MultinomialLogisticLossLayer.\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed infogain multinomial logistic loss: @f$ E =\n *        \\frac{-1}{N} \\sum\\limits_{n=1}^N H_{l_n} \\log(\\hat{p}_n) =\n *        \\frac{-1}{N} \\sum\\limits_{n=1}^N \\sum\\limits_{k=1}^{K} H_{l_n,k}\n *        \\log(\\hat{p}_{n,k})\n *      @f$, where @f$ H_{l_n} @f$ denotes row @f$l_n@f$ of @f$H@f$.\n */\ntemplate <typename Dtype>\nclass InfogainLossLayer : public LossLayer<Dtype> {\n public:\n  explicit InfogainLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param), infogain_() {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  // InfogainLossLayer takes 2-3 bottom Blobs; if there are 3 the third should\n  // be the infogain matrix.  (Otherwise the infogain matrix is loaded from a\n  // file specified by LayerParameter.)\n  virtual inline int ExactNumBottomBlobs() const { return -1; }\n  virtual inline int MinBottomBlobs() const { return 2; }\n  virtual inline int MaxBottomBlobs() const { return 3; }\n\n  virtual inline const char* type() const { return \"InfogainLoss\"; }\n\n protected:\n  /// @copydoc InfogainLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the infogain loss error gradient w.r.t. the predictions.\n   *\n   * Gradients cannot be computed with respect to the label inputs (bottom[1]),\n   * so this method ignores bottom[1] and requires !propagate_down[1], crashing\n   * if propagate_down[1] is set. 
(The same applies to the infogain matrix, if\n   * provided as bottom[2] rather than in the layer_param.)\n   *\n   * @param top output Blob vector (length 1), providing the error gradient\n   *      with respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   *      propagate_down[1] must be false as we can't compute gradients with\n   *      respect to the labels (similarly for propagate_down[2] and the\n   *      infogain matrix, if provided as bottom[2])\n   * @param bottom input Blob vector (length 2-3)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$ \\hat{p} @f$; Backward computes diff\n   *      @f$ \\frac{\\partial E}{\\partial \\hat{p}} @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels -- ignored as we can't compute their error gradients\n   *   -# @f$ (1 \\times 1 \\times K \\times K) @f$\n   *      (\\b optional) the information gain matrix -- ignored as its error\n   *      gradient computation is not implemented.\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> infogain_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_INFOGAIN_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/inner_product_layer.hpp",
    "content": "#ifndef CAFFE_INNER_PRODUCT_LAYER_HPP_\n#define CAFFE_INNER_PRODUCT_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Also known as a \"fully-connected\" layer, computes an inner product\n *        with a set of learned weights, and (optionally) adds biases.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass InnerProductLayer : public Layer<Dtype> {\n public:\n  explicit InnerProductLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"InnerProduct\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int M_;\n  int K_;\n  int N_;\n  bool bias_term_;\n  Blob<Dtype> bias_multiplier_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_INNER_PRODUCT_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/log_layer.hpp",
    "content": "#ifndef CAFFE_LOG_LAYER_HPP_\n#define CAFFE_LOG_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes @f$ y = log_{\\gamma}(\\alpha x + \\beta) @f$,\n *        as specified by the scale @f$ \\alpha @f$, shift @f$ \\beta @f$,\n *        and base @f$ \\gamma @f$.\n */\ntemplate <typename Dtype>\nclass LogLayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides LogParameter log_param,\n   *     with LogLayer options:\n   *   - scale (\\b optional, default 1) the scale @f$ \\alpha @f$\n   *   - shift (\\b optional, default 0) the shift @f$ \\beta @f$\n   *   - base (\\b optional, default -1 for a value of @f$ e \\approx 2.718 @f$)\n   *         the base @f$ \\gamma @f$\n   */\n  explicit LogLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Log\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = log_{\\gamma}(\\alpha x + \\beta)\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the log inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} =\n   *            \\frac{\\partial E}{\\partial y}\n   *            \\frac{\\alpha}{(\\alpha x + \\beta) \\log_e(\\gamma)}\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Dtype base_scale_;\n  Dtype input_scale_, input_shift_;\n  Dtype backward_num_scale_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_LOG_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/loss_layer.hpp",
    "content": "#ifndef CAFFE_LOSS_LAYER_HPP_\n#define CAFFE_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\nconst float kLOG_THRESHOLD = 1e-20;\n\n/**\n * @brief An interface for Layer%s that take two Blob%s as input -- usually\n *        (1) predictions and (2) ground-truth labels -- and output a\n *        singleton Blob representing the loss.\n *\n * LossLayers are typically only capable of backpropagating to their first input\n * -- the predictions.\n */\ntemplate <typename Dtype>\nclass LossLayer : public Layer<Dtype> {\n public:\n  explicit LossLayer(const LayerParameter& param)\n     : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(\n      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(\n      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);\n\n  virtual inline int ExactNumBottomBlobs() const { return 2; }\n\n  /**\n   * @brief For convenience and backwards compatibility, instruct the Net to\n   *        automatically allocate a single top Blob for LossLayers, into which\n   *        they output their singleton loss (even if the user didn't specify\n   *        one in the prototxt, etc.).\n   */\n  virtual inline bool AutoTopBlobs() const { return true; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n  /**\n   * We usually cannot backpropagate to the labels; ignore force_backward for\n   * these inputs.\n   */\n  virtual inline bool AllowForceBackward(const int bottom_index) const {\n    return bottom_index != 1;\n  }\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/lrn_layer.hpp",
    "content": "#ifndef CAFFE_LRN_LAYER_HPP_\n#define CAFFE_LRN_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/eltwise_layer.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/layers/power_layer.hpp\"\n#include \"caffe/layers/split_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Normalize the input in a local region across or within feature maps.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass LRNLayer : public Layer<Dtype> {\n public:\n  explicit LRNLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"LRN\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  virtual void CrossChannelForward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void CrossChannelForward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void WithinChannelForward(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void 
CrossChannelBackward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void CrossChannelBackward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void WithinChannelBackward(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int size_;\n  int pre_pad_;\n  Dtype alpha_;\n  Dtype beta_;\n  Dtype k_;\n  int num_;\n  int channels_;\n  int height_;\n  int width_;\n\n  // Fields used for normalization ACROSS_CHANNELS\n  // scale_ stores the intermediate summing results\n  Blob<Dtype> scale_;\n\n  // Fields used for normalization WITHIN_CHANNEL\n  shared_ptr<SplitLayer<Dtype> > split_layer_;\n  vector<Blob<Dtype>*> split_top_vec_;\n  shared_ptr<PowerLayer<Dtype> > square_layer_;\n  Blob<Dtype> square_input_;\n  Blob<Dtype> square_output_;\n  vector<Blob<Dtype>*> square_bottom_vec_;\n  vector<Blob<Dtype>*> square_top_vec_;\n  shared_ptr<PoolingLayer<Dtype> > pool_layer_;\n  Blob<Dtype> pool_output_;\n  vector<Blob<Dtype>*> pool_top_vec_;\n  shared_ptr<PowerLayer<Dtype> > power_layer_;\n  Blob<Dtype> power_output_;\n  vector<Blob<Dtype>*> power_top_vec_;\n  shared_ptr<EltwiseLayer<Dtype> > product_layer_;\n  Blob<Dtype> product_input_;\n  vector<Blob<Dtype>*> product_bottom_vec_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_LRN_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/memory_data_layer.hpp",
    "content": "#ifndef CAFFE_MEMORY_DATA_LAYER_HPP_\n#define CAFFE_MEMORY_DATA_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/base_data_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Provides data to the Net from memory.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass MemoryDataLayer : public BaseDataLayer<Dtype> {\n public:\n  explicit MemoryDataLayer(const LayerParameter& param)\n      : BaseDataLayer<Dtype>(param), has_new_data_(false) {}\n  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"MemoryData\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int ExactNumTopBlobs() const { return 2; }\n\n  virtual void AddDatumVector(const vector<Datum>& datum_vector);\n#ifdef USE_OPENCV\n  virtual void AddMatVector(const vector<cv::Mat>& mat_vector,\n      const vector<int>& labels);\n#endif  // USE_OPENCV\n\n  // Reset should accept const pointers, but can't, because the memory\n  //  will be given to Blob, which is mutable\n  void Reset(Dtype* data, Dtype* label, int n);\n  void set_batch_size(int new_size);\n\n  int batch_size() { return batch_size_; }\n  int channels() { return channels_; }\n  int height() { return height_; }\n  int width() { return width_; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  int batch_size_, channels_, height_, width_, size_;\n  Dtype* data_;\n  Dtype* labels_;\n  int n_;\n  size_t pos_;\n  Blob<Dtype> added_data_;\n  Blob<Dtype> added_label_;\n  bool has_new_data_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_MEMORY_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/multinomial_logistic_loss_layer.hpp",
    "content": "#ifndef CAFFE_MULTINOMIAL_LOGISTIC_LOSS_LAYER_HPP_\n#define CAFFE_MULTINOMIAL_LOGISTIC_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the multinomial logistic loss for a one-of-many\n *        classification task, directly taking a predicted probability\n *        distribution as input.\n *\n * When predictions are not already a probability distribution, you should\n * instead use the SoftmaxWithLossLayer, which maps predictions to a\n * distribution using the SoftmaxLayer, before computing the multinomial\n * logistic loss. The SoftmaxWithLossLayer should be preferred over separate\n * SoftmaxLayer + MultinomialLogisticLossLayer\n * as its gradient computation is more numerically stable.\n *\n * @param bottom input Blob vector (length 2)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the predictions @f$ \\hat{p} @f$, a Blob with values in\n *      @f$ [0, 1] @f$ indicating the predicted probability of each of the\n *      @f$ K = CHW @f$ classes.  
Each prediction vector @f$ \\hat{p}_n @f$\n *      should sum to 1 as in a probability distribution: @f$\n *      \\forall n \\sum\\limits_{k=1}^K \\hat{p}_{nk} = 1 @f$.\n *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n *      the labels @f$ l @f$, an integer-valued Blob with values\n *      @f$ l_n \\in [0, 1, 2, ..., K - 1] @f$\n *      indicating the correct class label among the @f$ K @f$ classes\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed multinomial logistic loss: @f$ E =\n *        \\frac{-1}{N} \\sum\\limits_{n=1}^N \\log(\\hat{p}_{n,l_n})\n *      @f$\n */\ntemplate <typename Dtype>\nclass MultinomialLogisticLossLayer : public LossLayer<Dtype> {\n public:\n  explicit MultinomialLogisticLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"MultinomialLogisticLoss\"; }\n\n protected:\n  /// @copydoc MultinomialLogisticLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the multinomial logistic loss error gradient w.r.t. 
the\n   *        predictions.\n   *\n   * Gradients cannot be computed with respect to the label inputs (bottom[1]),\n   * so this method ignores bottom[1] and requires !propagate_down[1], crashing\n   * if propagate_down[1] is set.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   *      propagate_down[1] must be false as we can't compute gradients with\n   *      respect to the labels.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$ \\hat{p} @f$; Backward computes diff\n   *      @f$ \\frac{\\partial E}{\\partial \\hat{p}} @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels -- ignored as we can't compute their error gradients\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_MULTINOMIAL_LOGISTIC_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/mvn_layer.hpp",
    "content": "#ifndef CAFFE_MVN_LAYER_HPP_\n#define CAFFE_MVN_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Normalizes the input to have 0-mean and/or unit (1) variance.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass MVNLayer : public Layer<Dtype> {\n public:\n  explicit MVNLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"MVN\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n     const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  Blob<Dtype> mean_, variance_, temp_;\n\n  /// sum_multiplier is used to carry out sum using BLAS\n  Blob<Dtype> sum_multiplier_;\n  Dtype eps_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_MVN_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/neuron_layer.hpp",
    "content": "#ifndef CAFFE_NEURON_LAYER_HPP_\n#define CAFFE_NEURON_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief An interface for layers that take one blob as input (@f$ x @f$)\n *        and produce one equally-sized blob as output (@f$ y @f$), where\n *        each element of the output depends only on the corresponding input\n *        element.\n */\ntemplate <typename Dtype>\nclass NeuronLayer : public Layer<Dtype> {\n public:\n  explicit NeuronLayer(const LayerParameter& param)\n     : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_NEURON_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/pooling_layer.hpp",
    "content": "#ifndef CAFFE_POOLING_LAYER_HPP_\n#define CAFFE_POOLING_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Pools the input image by taking the max, average, etc. within regions.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass PoolingLayer : public Layer<Dtype> {\n public:\n  explicit PoolingLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Pooling\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int MinTopBlobs() const { return 1; }\n  // MAX POOL layers can output an extra top blob for the mask;\n  // others can only output the pooled inputs.\n  virtual inline int MaxTopBlobs() const {\n    return (this->layer_param_.pooling_param().pool() ==\n            PoolingParameter_PoolMethod_MAX) ? 
2 : 1;\n  }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int kernel_h_, kernel_w_;\n  int stride_h_, stride_w_;\n  int pad_h_, pad_w_;\n  int channels_;\n  int height_, width_;\n  int pooled_height_, pooled_width_;\n  bool global_pooling_;\n  Blob<Dtype> rand_idx_;\n  Blob<int> max_idx_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_POOLING_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/power_layer.hpp",
    "content": "#ifndef CAFFE_POWER_LAYER_HPP_\n#define CAFFE_POWER_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes @f$ y = (\\alpha x + \\beta) ^ \\gamma @f$,\n *        as specified by the scale @f$ \\alpha @f$, shift @f$ \\beta @f$,\n *        and power @f$ \\gamma @f$.\n */\ntemplate <typename Dtype>\nclass PowerLayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides PowerParameter power_param,\n   *     with PowerLayer options:\n   *   - scale (\\b optional, default 1) the scale @f$ \\alpha @f$\n   *   - shift (\\b optional, default 0) the shift @f$ \\beta @f$\n   *   - power (\\b optional, default 1) the power @f$ \\gamma @f$\n   */\n  explicit PowerLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Power\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = (\\alpha x + \\beta) ^ \\gamma\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the power inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} =\n   *            \\frac{\\partial E}{\\partial y}\n   *            \\alpha \\gamma (\\alpha x + \\beta) ^ {\\gamma - 1} =\n   *            \\frac{\\partial E}{\\partial y}\n   *            \\frac{\\alpha \\gamma y}{\\alpha x + \\beta}\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// @brief @f$ \\gamma @f$ from layer_param_.power_param()\n  Dtype power_;\n  /// @brief @f$ \\alpha @f$ from layer_param_.power_param()\n  Dtype scale_;\n  /// @brief @f$ \\beta @f$ from layer_param_.power_param()\n  Dtype shift_;\n  /// @brief Result of @f$ \\alpha \\gamma @f$\n  Dtype diff_scale_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_POWER_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/prelu_layer.hpp",
    "content": "#ifndef CAFFE_PRELU_LAYER_HPP_\n#define CAFFE_PRELU_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Parameterized Rectified Linear Unit non-linearity @f$\n *        y_i = \\max(0, x_i) + a_i \\min(0, x_i)\n *        @f$. The differences from ReLULayer are 1) negative slopes are\n *        learnable though backprop and 2) negative slopes can vary across\n *        channels. The number of axes of input blob should be greater than or\n *        equal to 2. The 1st axis (0-based) is seen as channels.\n */\ntemplate <typename Dtype>\nclass PReLULayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides PReLUParameter prelu_param,\n   *     with PReLULayer options:\n   *   - filler (\\b optional, FillerParameter,\n   *     default {'type': constant 'value':0.25}).\n   *   - channel_shared (\\b optional, default false).\n   *     negative slopes are shared across channels.\n   */\n  explicit PReLULayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"PReLU\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times ...) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times ...) 
@f$\n   *      the computed outputs for each channel @f$i@f$ @f$\n   *        y_i = \\max(0, x_i) + a_i \\min(0, x_i)\n   *      @f$.\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. the PReLU inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times ...) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times ...) @f$\n   *      the inputs @f$ x @f$; For each channel @f$i@f$, backward fills their\n   *      diff with gradients @f$\n   *        \\frac{\\partial E}{\\partial x_i} = \\left\\{\n   *        \\begin{array}{lr}\n   *            a_i \\frac{\\partial E}{\\partial y_i} & \\mathrm{if} \\; x_i \\le 0 \\\\\n   *            \\frac{\\partial E}{\\partial y_i} & \\mathrm{if} \\; x_i > 0\n   *        \\end{array} \\right.\n   *      @f$.\n   *      If param_propagate_down_[0] is true, it fills the diff with gradients\n   *      @f$\n   *        \\frac{\\partial E}{\\partial a_i} = \\left\\{\n   *        \\begin{array}{lr}\n   *            \\sum_{x_i} x_i \\frac{\\partial E}{\\partial y_i} & \\mathrm{if} \\; x_i \\le 0 \\\\\n   *            0 & \\mathrm{if} \\; x_i > 0\n   *        \\end{array} \\right.\n   *      @f$.\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  bool 
channel_shared_;\n  Blob<Dtype> multiplier_;  // dot multiplier for backward computation of params\n  Blob<Dtype> backward_buff_;  // temporary buffer for backward computation\n  Blob<Dtype> bottom_memory_;  // memory for in-place computation\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_PRELU_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/python_layer.hpp",
    "content": "#ifndef CAFFE_PYTHON_LAYER_HPP_\n#define CAFFE_PYTHON_LAYER_HPP_\n\n#include <boost/python.hpp>\n#include <vector>\n\n#include \"caffe/layer.hpp\"\n\nnamespace bp = boost::python;\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass PythonLayer : public Layer<Dtype> {\n public:\n  PythonLayer(PyObject* self, const LayerParameter& param)\n      : Layer<Dtype>(param), self_(bp::handle<>(bp::borrowed(self))) { }\n\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n    // Disallow PythonLayer in MultiGPU training stage, due to GIL issues\n    // Details: https://github.com/BVLC/caffe/issues/2936\n    if (this->phase_ == TRAIN && Caffe::solver_count() > 1\n        && !ShareInParallel()) {\n      LOG(FATAL) << \"PythonLayer is not implemented in Multi-GPU training\";\n    }\n    self_.attr(\"param_str_\") = bp::str(\n        this->layer_param_.python_param().param_str());\n    self_.attr(\"setup\")(bottom, top);\n  }\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n    self_.attr(\"reshape\")(bottom, top);\n  }\n\n  virtual inline bool ShareInParallel() const {\n    return this->layer_param_.python_param().share_in_parallel();\n  }\n\n  virtual inline const char* type() const { return \"Python\"; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n    self_.attr(\"forward\")(bottom, top);\n  }\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n    self_.attr(\"backward\")(top, propagate_down, bottom);\n  }\n\n private:\n  bp::object self_;\n};\n\n}  // namespace caffe\n\n#endif\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/reduction_layer.hpp",
    "content": "#ifndef CAFFE_REDUCTION_LAYER_HPP_\n#define CAFFE_REDUCTION_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Compute \"reductions\" -- operations that return a scalar output Blob\n *        for an input Blob of arbitrary size, such as the sum, absolute sum,\n *        and sum of squares.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass ReductionLayer : public Layer<Dtype> {\n public:\n  explicit ReductionLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Reduction\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// @brief the reduction operation performed by the layer\n  ReductionParameter_ReductionOp op_;\n  /// @brief a scalar coefficient applied to all outputs\n  Dtype coeff_;\n  /// @brief the index of the first input axis to reduce\n  int axis_;\n  /// @brief the number of reductions performed\n  int num_;\n  /// @brief the input size of each reduction\n  int dim_;\n  /// @brief a helper Blob used 
for summation (op_ == SUM)\n  Blob<Dtype> sum_multiplier_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_REDUCTION_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/relu_layer.hpp",
    "content": "#ifndef CAFFE_RELU_LAYER_HPP_\n#define CAFFE_RELU_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Rectified Linear Unit non-linearity @f$ y = \\max(0, x) @f$.\n *        The simple max is fast to compute, and the function does not saturate.\n */\ntemplate <typename Dtype>\nclass ReLULayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides ReLUParameter relu_param,\n   *     with ReLULayer options:\n   *   - negative_slope (\\b optional, default 0).\n   *     the value @f$ \\nu @f$ by which negative values are multiplied.\n   */\n  explicit ReLULayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"ReLU\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = \\max(0, x)\n   *      @f$ by default.  If a non-zero negative_slope @f$ \\nu @f$ is provided,\n   *      the computed outputs are @f$ y = \\max(0, x) + \\nu \\min(0, x) @f$.\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the ReLU inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x} = \\left\\{\n   *        \\begin{array}{lr}\n   *            0 & \\mathrm{if} \\; x \\le 0 \\\\\n   *            \\frac{\\partial E}{\\partial y} & \\mathrm{if} \\; x > 0\n   *        \\end{array} \\right.\n   *      @f$ if propagate_down[0], by default.\n   *      If a non-zero negative_slope @f$ \\nu @f$ is provided,\n   *      the computed gradients are @f$\n   *        \\frac{\\partial E}{\\partial x} = \\left\\{\n   *        \\begin{array}{lr}\n   *            \\nu \\frac{\\partial E}{\\partial y} & \\mathrm{if} \\; x \\le 0 \\\\\n   *            \\frac{\\partial E}{\\partial y} & \\mathrm{if} \\; x > 0\n   *        \\end{array} \\right.\n   *      @f$.\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_RELU_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/reshape_layer.hpp",
    "content": "#ifndef CAFFE_XXX_LAYER_HPP_\n#define CAFFE_XXX_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/*\n * @brief Reshapes the input Blob into an arbitrary-sized output Blob.\n *\n * Note: similarly to FlattenLayer, this layer does not change the input values\n * (see FlattenLayer, Blob::ShareData and Blob::ShareDiff).\n */\ntemplate <typename Dtype>\nclass ReshapeLayer : public Layer<Dtype> {\n public:\n  explicit ReshapeLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Reshape\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {}\n\n  /// @brief vector of axes indices whose dimensions we'll copy from the bottom\n  vector<int> copy_axes_;\n  /// @brief the index of the axis whose dimension we infer, or -1 if none\n  int inferred_axis_;\n  /// @brief the product of the \"constant\" output dimensions\n  int constant_count_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_XXX_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/scale_layer.hpp",
    "content": "#ifndef CAFFE_SCALE_LAYER_HPP_\n#define CAFFE_SCALE_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/bias_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes a product of two input Blobs, with the shape of the\n *        latter Blob \"broadcast\" to match the shape of the former.\n *        Equivalent to tiling the latter Blob, then computing the elementwise\n *        product.\n *\n * The second input may be omitted, in which case it's learned as a parameter\n * of the layer.\n */\ntemplate <typename Dtype>\nclass ScaleLayer: public Layer<Dtype> {\n public:\n  explicit ScaleLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Scale\"; }\n  // Scale\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int MaxBottomBlobs() const { return 2; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  /**\n   * In the below shape specifications, @f$ i @f$ denotes the value of the\n   * `axis` field given by `this->layer_param_.scale_param().axis()`, after\n   * canonicalization (i.e., conversion from negative to positive index,\n   * if applicable).\n   *\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (d_0 \\times ... \\times\n   *           d_i \\times ... \\times d_j \\times ... \\times d_n) @f$\n   *      the first factor @f$ x @f$\n   *   -# @f$ (d_i \\times ... \\times d_j) @f$\n   *      the second factor @f$ y @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (d_0 \\times ... \\times\n   *           d_i \\times ... \\times d_j \\times ... 
\\times d_n) @f$\n   *      the product @f$ z = x y @f$ computed after \"broadcasting\" y.\n   *      Equivalent to tiling @f$ y @f$ to have the same shape as @f$ x @f$,\n   *      then computing the elementwise product.\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  shared_ptr<Layer<Dtype> > bias_layer_;\n  vector<Blob<Dtype>*> bias_bottom_vec_;\n  vector<bool> bias_propagate_down_;\n  int bias_param_id_;\n\n  Blob<Dtype> sum_multiplier_;\n  Blob<Dtype> sum_result_;\n  Blob<Dtype> temp_;\n  int axis_;\n  int outer_dim_, scale_dim_, inner_dim_;\n};\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_SCALE_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/sigmoid_cross_entropy_loss_layer.hpp",
    "content": "#ifndef CAFFE_SIGMOID_CROSS_ENTROPY_LOSS_LAYER_HPP_\n#define CAFFE_SIGMOID_CROSS_ENTROPY_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n#include \"caffe/layers/sigmoid_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the cross-entropy (logistic) loss @f$\n *          E = \\frac{-1}{n} \\sum\\limits_{n=1}^N \\left[\n *                  p_n \\log \\hat{p}_n +\n *                  (1 - p_n) \\log(1 - \\hat{p}_n)\n *              \\right]\n *        @f$, often used for predicting targets interpreted as probabilities.\n *\n * This layer is implemented rather than separate\n * SigmoidLayer + CrossEntropyLayer\n * as its gradient computation is more numerically stable.\n * At test time, this layer can be replaced simply by a SigmoidLayer.\n *\n * @param bottom input Blob vector (length 2)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the scores @f$ x \\in [-\\infty, +\\infty]@f$,\n *      which this layer maps to probability predictions\n *      @f$ \\hat{p}_n = \\sigma(x_n) \\in [0, 1] @f$\n *      using the sigmoid function @f$ \\sigma(.) 
@f$ (see SigmoidLayer).\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the targets @f$ y \\in [0, 1] @f$\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed cross-entropy loss: @f$\n *          E = \\frac{-1}{n} \\sum\\limits_{n=1}^N \\left[\n *                  p_n \\log \\hat{p}_n + (1 - p_n) \\log(1 - \\hat{p}_n)\n *              \\right]\n *      @f$\n */\ntemplate <typename Dtype>\nclass SigmoidCrossEntropyLossLayer : public LossLayer<Dtype> {\n public:\n  explicit SigmoidCrossEntropyLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param),\n          sigmoid_layer_(new SigmoidLayer<Dtype>(param)),\n          sigmoid_output_(new Blob<Dtype>()) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"SigmoidCrossEntropyLoss\"; }\n\n protected:\n  /// @copydoc SigmoidCrossEntropyLossLayer\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the sigmoid cross-entropy loss error gradient w.r.t. 
the\n   *        predictions.\n   *\n   * Gradients cannot be computed with respect to the target inputs (bottom[1]),\n   * so this method ignores bottom[1] and requires !propagate_down[1], crashing\n   * if propagate_down[1] is set.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   *      propagate_down[1] must be false as gradient computation with respect\n   *      to the targets is not implemented.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$x@f$; Backward computes diff\n   *      @f$ \\frac{\\partial E}{\\partial x} =\n   *          \\frac{1}{n} \\sum\\limits_{n=1}^N (\\hat{p}_n - p_n)\n   *      @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels -- ignored as we can't compute their error gradients\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// The internal SigmoidLayer used to map predictions to probabilities.\n  shared_ptr<SigmoidLayer<Dtype> > sigmoid_layer_;\n  /// sigmoid_output stores the output of the SigmoidLayer.\n  shared_ptr<Blob<Dtype> > 
sigmoid_output_;\n  /// bottom vector holder to call the underlying SigmoidLayer::Forward\n  vector<Blob<Dtype>*> sigmoid_bottom_vec_;\n  /// top vector holder to call the underlying SigmoidLayer::Forward\n  vector<Blob<Dtype>*> sigmoid_top_vec_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SIGMOID_CROSS_ENTROPY_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/sigmoid_layer.hpp",
    "content": "#ifndef CAFFE_SIGMOID_LAYER_HPP_\n#define CAFFE_SIGMOID_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Sigmoid function non-linearity @f$\n *         y = (1 + \\exp(-x))^{-1}\n *     @f$, a classic choice in neural networks.\n *\n * Note that the gradient vanishes as the values move away from 0.\n * The ReLULayer is often a better choice for this reason.\n */\ntemplate <typename Dtype>\nclass SigmoidLayer : public NeuronLayer<Dtype> {\n public:\n  explicit SigmoidLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"Sigmoid\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = (1 + \\exp(-x))^{-1}\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the sigmoid inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x}\n   *            = \\frac{\\partial E}{\\partial y} y (1 - y)\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SIGMOID_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/silence_layer.hpp",
    "content": "#ifndef CAFFE_SILENCE_LAYER_HPP_\n#define CAFFE_SILENCE_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Ignores bottom blobs while producing no top blobs. (This is useful\n *        to suppress outputs during testing.)\n */\ntemplate <typename Dtype>\nclass SilenceLayer : public Layer<Dtype> {\n public:\n  explicit SilenceLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n\n  virtual inline const char* type() const { return \"Silence\"; }\n  virtual inline int MinBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 0; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {}\n  // We can't define Forward_gpu here, since STUB_GPU will provide\n  // its own definition for CPU_ONLY mode.\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SILENCE_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/slice_layer.hpp",
    "content": "#ifndef CAFFE_SLICE_LAYER_HPP_\n#define CAFFE_SLICE_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Takes a Blob and slices it along either the num or channel dimension,\n *        outputting multiple sliced Blob results.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass SliceLayer : public Layer<Dtype> {\n public:\n  explicit SliceLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Slice\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int MinTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int count_;\n  int num_slices_;\n  int slice_size_;\n  int slice_axis_;\n  vector<int> slice_point_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SLICE_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/softmax_layer.hpp",
    "content": "#ifndef CAFFE_SOFTMAX_LAYER_HPP_\n#define CAFFE_SOFTMAX_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the softmax function.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass SoftmaxLayer : public Layer<Dtype> {\n public:\n  explicit SoftmaxLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Softmax\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n     const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int outer_num_;\n  int inner_num_;\n  int softmax_axis_;\n  /// sum_multiplier is used to carry out sum using BLAS\n  Blob<Dtype> sum_multiplier_;\n  /// scale is an intermediate Blob to hold temporary results.\n  Blob<Dtype> scale_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SOFTMAX_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/softmax_loss_layer.hpp",
    "content": "#ifndef CAFFE_SOFTMAX_WITH_LOSS_LAYER_HPP_\n#define CAFFE_SOFTMAX_WITH_LOSS_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/loss_layer.hpp\"\n#include \"caffe/layers/softmax_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Computes the multinomial logistic loss for a one-of-many\n *        classification task, passing real-valued predictions through a\n *        softmax to get a probability distribution over classes.\n *\n * This layer should be preferred over separate\n * SoftmaxLayer + MultinomialLogisticLossLayer\n * as its gradient computation is more numerically stable.\n * At test time, this layer can be replaced simply by a SoftmaxLayer.\n *\n * @param bottom input Blob vector (length 2)\n *   -# @f$ (N \\times C \\times H \\times W) @f$\n *      the predictions @f$ x @f$, a Blob with values in\n *      @f$ [-\\infty, +\\infty] @f$ indicating the predicted score for each of\n *      the @f$ K = CHW @f$ classes. 
This layer maps these scores to a\n *      probability distribution over classes using the softmax function\n *      @f$ \\hat{p}_{nk} = \\exp(x_{nk}) /\n *      \\left[\\sum_{k'} \\exp(x_{nk'})\\right] @f$ (see SoftmaxLayer).\n *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n *      the labels @f$ l @f$, an integer-valued Blob with values\n *      @f$ l_n \\in [0, 1, 2, ..., K - 1] @f$\n *      indicating the correct class label among the @f$ K @f$ classes\n * @param top output Blob vector (length 1)\n *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n *      the computed cross-entropy classification loss: @f$ E =\n *        \\frac{-1}{N} \\sum\\limits_{n=1}^N \\log(\\hat{p}_{n,l_n})\n *      @f$, for softmax output class probabilites @f$ \\hat{p} @f$\n */\ntemplate <typename Dtype>\nclass SoftmaxWithLossLayer : public LossLayer<Dtype> {\n public:\n   /**\n    * @param param provides LossParameter loss_param, with options:\n    *  - ignore_label (optional)\n    *    Specify a label value that should be ignored when computing the loss.\n    *  - normalize (optional, default true)\n    *    If true, the loss is normalized by the number of (nonignored) labels\n    *    present; otherwise the loss is simply summed over spatial locations.\n    */\n  explicit SoftmaxWithLossLayer(const LayerParameter& param)\n      : LossLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"SoftmaxWithLoss\"; }\n  virtual inline int ExactNumTopBlobs() const { return -1; }\n  virtual inline int MinTopBlobs() const { return 1; }\n  virtual inline int MaxTopBlobs() const { return 2; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& 
bottom,\n      const vector<Blob<Dtype>*>& top);\n  /**\n   * @brief Computes the softmax loss error gradient w.r.t. the predictions.\n   *\n   * Gradients cannot be computed with respect to the label inputs (bottom[1]),\n   * so this method ignores bottom[1] and requires !propagate_down[1], crashing\n   * if propagate_down[1] is set.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (1 \\times 1 \\times 1 \\times 1) @f$\n   *      This Blob's diff will simply contain the loss_weight* @f$ \\lambda @f$,\n   *      as @f$ \\lambda @f$ is the coefficient of this layer's output\n   *      @f$\\ell_i@f$ in the overall Net loss\n   *      @f$ E = \\lambda_i \\ell_i + \\mbox{other loss terms}@f$; hence\n   *      @f$ \\frac{\\partial E}{\\partial \\ell_i} = \\lambda_i @f$.\n   *      (*Assuming that this top Blob is not used as a bottom (input) by any\n   *      other layer of the Net.)\n   * @param propagate_down see Layer::Backward.\n   *      propagate_down[1] must be false as we can't compute gradients with\n   *      respect to the labels.\n   * @param bottom input Blob vector (length 2)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the predictions @f$ x @f$; Backward computes diff\n   *      @f$ \\frac{\\partial E}{\\partial x} @f$\n   *   -# @f$ (N \\times 1 \\times 1 \\times 1) @f$\n   *      the labels -- ignored as we can't compute their error gradients\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  /// Read the normalization mode parameter and compute the normalizer based\n  /// on the blob size.  
If normalization_mode is VALID, the count of valid\n  /// outputs will be read from valid_count, unless it is -1 in which case\n  /// all outputs are assumed to be valid.\n  virtual Dtype get_normalizer(\n      LossParameter_NormalizationMode normalization_mode, int valid_count);\n\n  /// The internal SoftmaxLayer used to map predictions to a distribution.\n  shared_ptr<Layer<Dtype> > softmax_layer_;\n  /// prob stores the output probability predictions from the SoftmaxLayer.\n  Blob<Dtype> prob_;\n  /// bottom vector holder used in call to the underlying SoftmaxLayer::Forward\n  vector<Blob<Dtype>*> softmax_bottom_vec_;\n  /// top vector holder used in call to the underlying SoftmaxLayer::Forward\n  vector<Blob<Dtype>*> softmax_top_vec_;\n  /// Whether to ignore instances with a certain label.\n  bool has_ignore_label_;\n  /// The label indicating that an instance should be ignored.\n  int ignore_label_;\n  /// How to normalize the output loss.\n  LossParameter_NormalizationMode normalization_;\n\n  int softmax_axis_, outer_num_, inner_num_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SOFTMAX_WITH_LOSS_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/split_layer.hpp",
    "content": "#ifndef CAFFE_SPLIT_LAYER_HPP_\n#define CAFFE_SPLIT_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Creates a \"split\" path in the network by copying the bottom Blob\n *        into multiple top Blob%s to be used by multiple consuming layers.\n *\n * TODO(dox): thorough documentation for Forward, Backward, and proto params.\n */\ntemplate <typename Dtype>\nclass SplitLayer : public Layer<Dtype> {\n public:\n  explicit SplitLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Split\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int MinTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  int count_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SPLIT_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/spp_layer.hpp",
    "content": "#ifndef CAFFE_SPP_LAYER_HPP_\n#define CAFFE_SPP_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Does spatial pyramid pooling on the input image\n *        by taking the max, average, etc. within regions\n *        so that the result vector of different sized\n *        images are of the same size.\n */\ntemplate <typename Dtype>\nclass SPPLayer : public Layer<Dtype> {\n public:\n  explicit SPPLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"SPP\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  // calculates the kernel and stride dimensions for the pooling layer,\n  // returns a correctly configured LayerParameter for a PoolingLayer\n  virtual LayerParameter GetPoolingParam(const int pyramid_level,\n      const int bottom_h, const int bottom_w, const SPPParameter spp_param);\n\n  int pyramid_height_;\n  int bottom_h_, bottom_w_;\n  int num_;\n  int channels_;\n  int kernel_h_, kernel_w_;\n  int pad_h_, pad_w_;\n  bool reshaped_first_time_;\n\n  /// the internal Split layer that feeds the pooling layers\n  shared_ptr<SplitLayer<Dtype> > split_layer_;\n  /// top vector holder used in call to the underlying SplitLayer::Forward\n  vector<Blob<Dtype>*> split_top_vec_;\n  /// bottom vector holder used in call to the 
underlying PoolingLayer::Forward\n  vector<vector<Blob<Dtype>*>*> pooling_bottom_vecs_;\n  /// the internal Pooling layers of different kernel sizes\n  vector<shared_ptr<PoolingLayer<Dtype> > > pooling_layers_;\n  /// top vector holders used in call to the underlying PoolingLayer::Forward\n  vector<vector<Blob<Dtype>*>*> pooling_top_vecs_;\n  /// pooling_outputs stores the outputs of the PoolingLayers\n  vector<Blob<Dtype>*> pooling_outputs_;\n  /// the internal Flatten layers that the Pooling layers feed into\n  vector<FlattenLayer<Dtype>*> flatten_layers_;\n  /// top vector holders used in call to the underlying FlattenLayer::Forward\n  vector<vector<Blob<Dtype>*>*> flatten_top_vecs_;\n  /// flatten_outputs stores the outputs of the FlattenLayers\n  vector<Blob<Dtype>*> flatten_outputs_;\n  /// bottom vector holder used in call to the underlying ConcatLayer::Forward\n  vector<Blob<Dtype>*> concat_bottom_vec_;\n  /// the internal Concat layers that the Flatten layers feed into\n  shared_ptr<ConcatLayer<Dtype> > concat_layer_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SPP_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/tanh_layer.hpp",
    "content": "#ifndef CAFFE_TANH_LAYER_HPP_\n#define CAFFE_TANH_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief TanH hyperbolic tangent non-linearity @f$\n *         y = \\frac{\\exp(2x) - 1}{\\exp(2x) + 1}\n *     @f$, popular in auto-encoders.\n *\n * Note that the gradient vanishes as the values move away from 0.\n * The ReLULayer is often a better choice for this reason.\n */\ntemplate <typename Dtype>\nclass TanHLayer : public NeuronLayer<Dtype> {\n public:\n  explicit TanHLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n\n  virtual inline const char* type() const { return \"TanH\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *        y = \\frac{\\exp(2x) - 1}{\\exp(2x) + 1}\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  /**\n   * @brief Computes the error gradient w.r.t. 
the TanH inputs.\n   *\n   * @param top output Blob vector (length 1), providing the error gradient with\n   *      respect to the outputs\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      containing error gradients @f$ \\frac{\\partial E}{\\partial y} @f$\n   *      with respect to computed outputs @f$ y @f$\n   * @param propagate_down see Layer::Backward.\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$; Backward fills their diff with\n   *      gradients @f$\n   *        \\frac{\\partial E}{\\partial x}\n   *            = \\frac{\\partial E}{\\partial y}\n   *              \\left(1 - \\left[\\frac{\\exp(2x) - 1}{\\exp(2x) + 1} \\right]^2 \\right)\n   *            = \\frac{\\partial E}{\\partial y} (1 - y^2)\n   *      @f$ if propagate_down[0]\n   */\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_TANH_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/threshold_layer.hpp",
    "content": "#ifndef CAFFE_THRESHOLD_LAYER_HPP_\n#define CAFFE_THRESHOLD_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Tests whether the input exceeds a threshold: outputs 1 for inputs\n *        above threshold; 0 otherwise.\n */\ntemplate <typename Dtype>\nclass ThresholdLayer : public NeuronLayer<Dtype> {\n public:\n  /**\n   * @param param provides ThresholdParameter threshold_param,\n   *     with ThresholdLayer options:\n   *   - threshold (\\b optional, default 0).\n   *     the threshold value @f$ t @f$ to which the input values are compared.\n   */\n  explicit ThresholdLayer(const LayerParameter& param)\n      : NeuronLayer<Dtype>(param) {}\n  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Threshold\"; }\n\n protected:\n  /**\n   * @param bottom input Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the inputs @f$ x @f$\n   * @param top output Blob vector (length 1)\n   *   -# @f$ (N \\times C \\times H \\times W) @f$\n   *      the computed outputs @f$\n   *       y = \\left\\{\n   *       \\begin{array}{lr}\n   *         0 & \\mathrm{if} \\; x \\le t \\\\\n   *         1 & \\mathrm{if} \\; x > t\n   *       \\end{array} \\right.\n   *      @f$\n   */\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  /// @brief Not implemented (non-differentiable function)\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n    NOT_IMPLEMENTED;\n  }\n\n  Dtype threshold_;\n};\n\n}  // namespace 
caffe\n\n#endif  // CAFFE_THRESHOLD_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/tile_layer.hpp",
    "content": "#ifndef CAFFE_TILE_LAYER_HPP_\n#define CAFFE_TILE_LAYER_HPP_\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Copy a Blob along specified dimensions.\n */\ntemplate <typename Dtype>\nclass TileLayer : public Layer<Dtype> {\n public:\n  explicit TileLayer(const LayerParameter& param)\n      : Layer<Dtype>(param) {}\n  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"Tile\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 1; }\n  virtual inline int ExactNumTopBlobs() const { return 1; }\n\n protected:\n  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);\n\n  unsigned int axis_, tiles_, outer_dim_, inner_dim_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_TILE_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/layers/window_data_layer.hpp",
"content": "#ifndef CAFFE_WINDOW_DATA_LAYER_HPP_\n#define CAFFE_WINDOW_DATA_LAYER_HPP_\n\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Provides data to the Net from windows of image files, specified\n *        by a window data file.\n *\n * TODO(dox): thorough documentation for Forward and proto params.\n */\ntemplate <typename Dtype>\nclass WindowDataLayer : public BasePrefetchingDataLayer<Dtype> {\n public:\n  explicit WindowDataLayer(const LayerParameter& param)\n      : BasePrefetchingDataLayer<Dtype>(param) {}\n  virtual ~WindowDataLayer();\n  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top);\n\n  virtual inline const char* type() const { return \"WindowData\"; }\n  virtual inline int ExactNumBottomBlobs() const { return 0; }\n  virtual inline int ExactNumTopBlobs() const { return 2; }\n\n protected:\n  virtual unsigned int PrefetchRand();\n  virtual void load_batch(Batch<Dtype>* batch);\n\n  shared_ptr<Caffe::RNG> prefetch_rng_;\n  vector<std::pair<std::string, vector<int> > > image_database_;\n  enum WindowField { IMAGE_INDEX, LABEL, OVERLAP, X1, Y1, X2, Y2, NUM };\n  vector<vector<float> > fg_windows_;\n  vector<vector<float> > bg_windows_;\n  Blob<Dtype> data_mean_;\n  vector<Dtype> mean_values_;\n  bool has_mean_file_;\n  bool has_mean_values_;\n  bool cache_images_;\n  vector<std::pair<std::string, Datum > > image_database_cache_;\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_WINDOW_DATA_LAYER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/net.hpp",
    "content": "#ifndef CAFFE_NET_HPP_\n#define CAFFE_NET_HPP_\n\n#include <map>\n#include <set>\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n/**\n * @brief Connects Layer%s together into a directed acyclic graph (DAG)\n *        specified by a NetParameter.\n *\n * TODO(dox): more thorough description.\n */\ntemplate <typename Dtype>\nclass Net {\n public:\n  explicit Net(const NetParameter& param, const Net* root_net = NULL);\n  explicit Net(const string& param_file, Phase phase,\n      const Net* root_net = NULL);\n  virtual ~Net() {}\n\n  /// @brief Initialize a network with a NetParameter.\n  void Init(const NetParameter& param);\n\n  /**\n   * @brief Run Forward with the input Blob%s already fed separately.\n   *\n   * You can get the input blobs using input_blobs().\n   */\n  const vector<Blob<Dtype>*>& ForwardPrefilled(Dtype* loss = NULL);\n\n  /**\n   * The From and To variants of Forward and Backward operate on the\n   * (topological) ordering by which the net is specified. 
For general DAG\n   * networks, note that (1) computing from one layer to another might entail\n   * extra computation on unrelated branches, and (2) computation starting in\n   * the middle may be incorrect if all of the layers of a fan-in are not\n   * included.\n   */\n  Dtype ForwardFromTo(int start, int end);\n  Dtype ForwardFrom(int start);\n  Dtype ForwardTo(int end);\n  /// @brief Run forward using a set of bottom blobs, and return the result.\n  const vector<Blob<Dtype>*>& Forward(const vector<Blob<Dtype>* > & bottom,\n      Dtype* loss = NULL);\n  /**\n   * @brief Run forward using a serialized BlobProtoVector and return the\n   *        result as a serialized BlobProtoVector\n   */\n  string Forward(const string& input_blob_protos, Dtype* loss = NULL);\n\n  /**\n   * @brief Zeroes out the diffs of all net parameters.\n   *        Should be run before Backward.\n   */\n  void ClearParamDiffs();\n\n  /**\n   * The network backward should take no input and output, since it solely\n   * computes the gradient w.r.t the parameters, and the data has already been\n   * provided during the forward pass.\n   */\n  void Backward();\n  void BackwardFromTo(int start, int end);\n  void BackwardFrom(int start);\n  void BackwardTo(int end);\n\n  /**\n   * @brief Reshape all layers from bottom to top.\n   *\n   * This is useful to propagate changes to layer sizes without running\n   * a forward pass, e.g. 
to compute output feature size.\n   */\n  void Reshape();\n\n  Dtype ForwardBackward(const vector<Blob<Dtype>* > & bottom) {\n    Dtype loss;\n    Forward(bottom, &loss);\n    Backward();\n    return loss;\n  }\n\n  /// @brief Updates the network weights based on the diff values computed.\n  void Update();\n  /**\n   * @brief Shares weight data of owner blobs with shared blobs.\n   *\n   * Note: this is called by Net::Init, and thus should normally not be\n   * called manually.\n   */\n  void ShareWeights();\n\n  /**\n   * @brief For an already initialized net, implicitly copies (i.e., using no\n   *        additional memory) the pre-trained layers from another Net.\n   */\n  void ShareTrainedLayersWith(const Net* other);\n  // For an already initialized net, CopyTrainedLayersFrom() copies the already\n  // trained layers from another net parameter instance.\n  /**\n   * @brief For an already initialized net, copies the pre-trained layers from\n   *        another Net.\n   */\n  void CopyTrainedLayersFrom(const NetParameter& param);\n  void CopyTrainedLayersFrom(const string trained_filename);\n  void CopyTrainedLayersFromBinaryProto(const string trained_filename);\n  void CopyTrainedLayersFromHDF5(const string trained_filename);\n  /// @brief Writes the net to a proto.\n  void ToProto(NetParameter* param, bool write_diff = false) const;\n  /// @brief Writes the net to an HDF5 file.\n  void ToHDF5(const string& filename, bool write_diff = false) const;\n\n  /// @brief returns the network name.\n  inline const string& name() const { return name_; }\n  /// @brief returns the layer names\n  inline const vector<string>& layer_names() const { return layer_names_; }\n  /// @brief returns the blob names\n  inline const vector<string>& blob_names() const { return blob_names_; }\n  /// @brief returns the blobs\n  inline const vector<shared_ptr<Blob<Dtype> > >& blobs() const {\n    return blobs_;\n  }\n  /// @brief returns the layers\n  inline const 
vector<shared_ptr<Layer<Dtype> > >& layers() const {\n    return layers_;\n  }\n  /// @brief returns the phase: TRAIN or TEST\n  inline Phase phase() const { return phase_; }\n  /**\n   * @brief returns the bottom vecs for each layer -- usually you won't\n   *        need this unless you do per-layer checks such as gradients.\n   */\n  inline const vector<vector<Blob<Dtype>*> >& bottom_vecs() const {\n    return bottom_vecs_;\n  }\n  /**\n   * @brief returns the top vecs for each layer -- usually you won't\n   *        need this unless you do per-layer checks such as gradients.\n   */\n  inline const vector<vector<Blob<Dtype>*> >& top_vecs() const {\n    return top_vecs_;\n  }\n  /// @brief returns the ids of the top blobs of layer i\n  inline const vector<int> & top_ids(int i) const {\n    CHECK_GE(i, 0) << \"Invalid layer id\";\n    CHECK_LT(i, top_id_vecs_.size()) << \"Invalid layer id\";\n    return top_id_vecs_[i];\n  }\n  /// @brief returns the ids of the bottom blobs of layer i\n  inline const vector<int> & bottom_ids(int i) const {\n    CHECK_GE(i, 0) << \"Invalid layer id\";\n    CHECK_LT(i, bottom_id_vecs_.size()) << \"Invalid layer id\";\n    return bottom_id_vecs_[i];\n  }\n  inline const vector<vector<bool> >& bottom_need_backward() const {\n    return bottom_need_backward_;\n  }\n  inline const vector<Dtype>& blob_loss_weights() const {\n    return blob_loss_weights_;\n  }\n  inline const vector<bool>& layer_need_backward() const {\n    return layer_need_backward_;\n  }\n  /// @brief returns the parameters\n  inline const vector<shared_ptr<Blob<Dtype> > >& params() const {\n    return params_;\n  }\n  inline const vector<Blob<Dtype>*>& learnable_params() const {\n    return learnable_params_;\n  }\n  /// @brief returns the learnable parameter learning rate multipliers\n  inline const vector<float>& params_lr() const { return params_lr_; }\n  inline const vector<bool>& has_params_lr() const { return has_params_lr_; }\n  /// @brief returns the learnable 
parameter decay multipliers\n  inline const vector<float>& params_weight_decay() const {\n    return params_weight_decay_;\n  }\n  inline const vector<bool>& has_params_decay() const {\n    return has_params_decay_;\n  }\n  const map<string, int>& param_names_index() const {\n    return param_names_index_;\n  }\n  inline const vector<int>& param_owners() const { return param_owners_; }\n  inline const vector<string>& param_display_names() const {\n    return param_display_names_;\n  }\n  /// @brief Input and output blob numbers\n  inline int num_inputs() const { return net_input_blobs_.size(); }\n  inline int num_outputs() const { return net_output_blobs_.size(); }\n  inline const vector<Blob<Dtype>*>& input_blobs() const {\n    return net_input_blobs_;\n  }\n  inline const vector<Blob<Dtype>*>& output_blobs() const {\n    return net_output_blobs_;\n  }\n  inline const vector<int>& input_blob_indices() const {\n    return net_input_blob_indices_;\n  }\n  inline const vector<int>& output_blob_indices() const {\n    return net_output_blob_indices_;\n  }\n  bool has_blob(const string& blob_name) const;\n  const shared_ptr<Blob<Dtype> > blob_by_name(const string& blob_name) const;\n  bool has_layer(const string& layer_name) const;\n  const shared_ptr<Layer<Dtype> > layer_by_name(const string& layer_name) const;\n\n  void set_debug_info(const bool value) { debug_info_ = value; }\n\n  // Helpers for Init.\n  /**\n   * @brief Remove layers that the user specified should be excluded given the current\n   *        phase, level, and stage.\n   */\n  static void FilterNet(const NetParameter& param,\n      NetParameter* param_filtered);\n  /// @brief return whether NetState state meets NetStateRule rule\n  static bool StateMeetsRule(const NetState& state, const NetStateRule& rule,\n      const string& layer_name);\n\n protected:\n  // Helpers for Init.\n  /// @brief Append a new input or top blob to the net.\n  void AppendTop(const NetParameter& param, const int layer_id,\n    
             const int top_id, set<string>* available_blobs,\n                 map<string, int>* blob_name_to_idx);\n  /// @brief Append a new bottom blob to the net.\n  int AppendBottom(const NetParameter& param, const int layer_id,\n                   const int bottom_id, set<string>* available_blobs,\n                   map<string, int>* blob_name_to_idx);\n  /// @brief Append a new parameter blob to the net.\n  void AppendParam(const NetParameter& param, const int layer_id,\n                   const int param_id);\n\n  /// @brief Helper for displaying debug info in Forward about input Blobs.\n  void InputDebugInfo(const int layer_id);\n  /// @brief Helper for displaying debug info in Forward.\n  void ForwardDebugInfo(const int layer_id);\n  /// @brief Helper for displaying debug info in Backward.\n  void BackwardDebugInfo(const int layer_id);\n  /// @brief Helper for displaying debug info in Update.\n  void UpdateDebugInfo(const int param_id);\n\n  /// @brief The network name\n  string name_;\n  /// @brief The phase: TRAIN or TEST\n  Phase phase_;\n  /// @brief Individual layers in the net\n  vector<shared_ptr<Layer<Dtype> > > layers_;\n  vector<string> layer_names_;\n  map<string, int> layer_names_index_;\n  vector<bool> layer_need_backward_;\n  /// @brief the blobs storing intermediate results between the layer.\n  vector<shared_ptr<Blob<Dtype> > > blobs_;\n  vector<string> blob_names_;\n  map<string, int> blob_names_index_;\n  vector<bool> blob_need_backward_;\n  /// bottom_vecs stores the vectors containing the input for each layer.\n  /// They don't actually host the blobs (blobs_ does), so we simply store\n  /// pointers.\n  vector<vector<Blob<Dtype>*> > bottom_vecs_;\n  vector<vector<int> > bottom_id_vecs_;\n  vector<vector<bool> > bottom_need_backward_;\n  /// top_vecs stores the vectors containing the output for each layer\n  vector<vector<Blob<Dtype>*> > top_vecs_;\n  vector<vector<int> > top_id_vecs_;\n  /// Vector of weight in the loss (or 
objective) function of each net blob,\n  /// indexed by blob_id.\n  vector<Dtype> blob_loss_weights_;\n  vector<vector<int> > param_id_vecs_;\n  vector<int> param_owners_;\n  vector<string> param_display_names_;\n  vector<pair<int, int> > param_layer_indices_;\n  map<string, int> param_names_index_;\n  /// blob indices for the input and the output of the net\n  vector<int> net_input_blob_indices_;\n  vector<int> net_output_blob_indices_;\n  vector<Blob<Dtype>*> net_input_blobs_;\n  vector<Blob<Dtype>*> net_output_blobs_;\n  /// The parameters in the network.\n  vector<shared_ptr<Blob<Dtype> > > params_;\n  vector<Blob<Dtype>*> learnable_params_;\n  /**\n   * The mapping from params_ -> learnable_params_: we have\n   * learnable_param_ids_.size() == params_.size(),\n   * and learnable_params_[learnable_param_ids_[i]] == params_[i].get()\n   * if and only if params_[i] is an \"owner\"; otherwise, params_[i] is a sharer\n   * and learnable_params_[learnable_param_ids_[i]] gives its owner.\n   */\n  vector<int> learnable_param_ids_;\n  /// the learning rate multipliers for learnable_params_\n  vector<float> params_lr_;\n  vector<bool> has_params_lr_;\n  /// the weight decay multipliers for learnable_params_\n  vector<float> params_weight_decay_;\n  vector<bool> has_params_decay_;\n  /// The bytes of memory used by this net\n  size_t memory_used_;\n  /// Whether to compute and display debug info for the net.\n  bool debug_info_;\n  /// The root net that actually holds the shared layers in data parallelism\n  const Net* const root_net_;\n  DISABLE_COPY_AND_ASSIGN(Net);\n};\n\n\n}  // namespace caffe\n\n#endif  // CAFFE_NET_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/parallel.hpp",
"content": "#ifndef CAFFE_PARALLEL_HPP_\n#define CAFFE_PARALLEL_HPP_\n\n#include <boost/date_time/posix_time/posix_time.hpp>\n\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/solver.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/blocking_queue.hpp\"\n\nnamespace caffe {\n\n// Represents a net's parameters. Once a net is created, its parameter buffers can\n// be replaced by ones from Params, to allow parallelization. Params ensures\n// parameters are allocated in one consecutive array.\ntemplate<typename Dtype>\nclass Params {\n public:\n  explicit Params(shared_ptr<Solver<Dtype> > root_solver);\n  virtual ~Params() {\n  }\n\n  inline size_t size() const {\n    return size_;\n  }\n  inline Dtype* data() const {\n    return data_;\n  }\n  inline Dtype* diff() const {\n    return diff_;\n  }\n\n protected:\n  const size_t size_;           // Size of buffers\n  Dtype* data_;                 // Network parameters\n  Dtype* diff_;                 // Gradient\n\nDISABLE_COPY_AND_ASSIGN(Params);\n};\n\n// Params stored in GPU memory.\ntemplate<typename Dtype>\nclass GPUParams : public Params<Dtype> {\n public:\n  GPUParams(shared_ptr<Solver<Dtype> > root_solver, int device);\n  virtual ~GPUParams();\n\n  void configure(Solver<Dtype>* solver) const;\n\n protected:\n  using Params<Dtype>::size_;\n  using Params<Dtype>::data_;\n  using Params<Dtype>::diff_;\n};\n\nclass DevicePair {\n public:\n  DevicePair(int parent, int device)\n      : parent_(parent),\n        device_(device) {\n  }\n  inline int parent() {\n    return parent_;\n  }\n  inline int device() {\n    return device_;\n  }\n\n  // Group GPUs in pairs, by proximity depending on machine's topology\n  static void compute(const vector<int> devices, vector<DevicePair>* pairs);\n\n protected:\n  int parent_;\n  int device_;\n};\n\n// Synchronous 
data parallelism using map-reduce between local GPUs.\ntemplate<typename Dtype>\nclass P2PSync : public GPUParams<Dtype>, public Solver<Dtype>::Callback,\n    public InternalThread {\n public:\n  explicit P2PSync(shared_ptr<Solver<Dtype> > root_solver,\n                   P2PSync<Dtype>* parent, const SolverParameter& param);\n  virtual ~P2PSync();\n\n  inline const shared_ptr<Solver<Dtype> >& solver() const {\n    return solver_;\n  }\n\n  void run(const vector<int>& gpus);\n\n protected:\n  void on_start();\n  void on_gradients_ready();\n\n  void InternalThreadEntry();\n\n  P2PSync<Dtype>* parent_;\n  vector<P2PSync<Dtype>*> children_;\n  BlockingQueue<P2PSync<Dtype>*> queue_;\n  const int initial_iter_;\n  Dtype* parent_grads_;\n  shared_ptr<Solver<Dtype> > solver_;\n\n  using Params<Dtype>::size_;\n  using Params<Dtype>::data_;\n  using Params<Dtype>::diff_;\n};\n\n}  // namespace caffe\n\n#endif\n"
  },
  {
    "path": "caffe-fpn/include/caffe/sgd_solvers.hpp",
    "content": "#ifndef CAFFE_SGD_SOLVERS_HPP_\n#define CAFFE_SGD_SOLVERS_HPP_\n\n#include <string>\n#include <vector>\n\n#include \"caffe/solver.hpp\"\n\nnamespace caffe {\n\n/**\n * @brief Optimizes the parameters of a Net using\n *        stochastic gradient descent (SGD) with momentum.\n */\ntemplate <typename Dtype>\nclass SGDSolver : public Solver<Dtype> {\n public:\n  explicit SGDSolver(const SolverParameter& param)\n      : Solver<Dtype>(param) { PreSolve(); }\n  explicit SGDSolver(const string& param_file)\n      : Solver<Dtype>(param_file) { PreSolve(); }\n  virtual inline const char* type() const { return \"SGD\"; }\n\n  const vector<shared_ptr<Blob<Dtype> > >& history() { return history_; }\n\n protected:\n  void PreSolve();\n  Dtype GetLearningRate();\n  virtual void ApplyUpdate();\n  virtual void Normalize(int param_id);\n  virtual void Regularize(int param_id);\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n  virtual void ClipGradients();\n  virtual void SnapshotSolverState(const string& model_filename);\n  virtual void SnapshotSolverStateToBinaryProto(const string& model_filename);\n  virtual void SnapshotSolverStateToHDF5(const string& model_filename);\n  virtual void RestoreSolverStateFromHDF5(const string& state_file);\n  virtual void RestoreSolverStateFromBinaryProto(const string& state_file);\n  // history maintains the historical momentum data.\n  // update maintains update related data and is not needed in snapshots.\n  // temp maintains other information that might be needed in computation\n  //   of gradients/updates and is not needed in snapshots\n  vector<shared_ptr<Blob<Dtype> > > history_, update_, temp_;\n\n  DISABLE_COPY_AND_ASSIGN(SGDSolver);\n};\n\ntemplate <typename Dtype>\nclass NesterovSolver : public SGDSolver<Dtype> {\n public:\n  explicit NesterovSolver(const SolverParameter& param)\n      : SGDSolver<Dtype>(param) {}\n  explicit NesterovSolver(const string& param_file)\n      : SGDSolver<Dtype>(param_file) 
{}\n  virtual inline const char* type() const { return \"Nesterov\"; }\n\n protected:\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n\n  DISABLE_COPY_AND_ASSIGN(NesterovSolver);\n};\n\ntemplate <typename Dtype>\nclass AdaGradSolver : public SGDSolver<Dtype> {\n public:\n  explicit AdaGradSolver(const SolverParameter& param)\n      : SGDSolver<Dtype>(param) { constructor_sanity_check(); }\n  explicit AdaGradSolver(const string& param_file)\n      : SGDSolver<Dtype>(param_file) { constructor_sanity_check(); }\n  virtual inline const char* type() const { return \"AdaGrad\"; }\n\n protected:\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n  void constructor_sanity_check() {\n    CHECK_EQ(0, this->param_.momentum())\n        << \"Momentum cannot be used with AdaGrad.\";\n  }\n\n  DISABLE_COPY_AND_ASSIGN(AdaGradSolver);\n};\n\n\ntemplate <typename Dtype>\nclass RMSPropSolver : public SGDSolver<Dtype> {\n public:\n  explicit RMSPropSolver(const SolverParameter& param)\n      : SGDSolver<Dtype>(param) { constructor_sanity_check(); }\n  explicit RMSPropSolver(const string& param_file)\n      : SGDSolver<Dtype>(param_file) { constructor_sanity_check(); }\n  virtual inline const char* type() const { return \"RMSProp\"; }\n\n protected:\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n  void constructor_sanity_check() {\n    CHECK_EQ(0, this->param_.momentum())\n        << \"Momentum cannot be used with RMSProp.\";\n    CHECK_GE(this->param_.rms_decay(), 0)\n        << \"rms_decay should lie between 0 and 1.\";\n    CHECK_LT(this->param_.rms_decay(), 1)\n        << \"rms_decay should lie between 0 and 1.\";\n  }\n\n  DISABLE_COPY_AND_ASSIGN(RMSPropSolver);\n};\n\ntemplate <typename Dtype>\nclass AdaDeltaSolver : public SGDSolver<Dtype> {\n public:\n  explicit AdaDeltaSolver(const SolverParameter& param)\n      : SGDSolver<Dtype>(param) { AdaDeltaPreSolve(); }\n  explicit AdaDeltaSolver(const string& param_file)\n      : 
SGDSolver<Dtype>(param_file) { AdaDeltaPreSolve(); }\n  virtual inline const char* type() const { return \"AdaDelta\"; }\n\n protected:\n  void AdaDeltaPreSolve();\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n\n  DISABLE_COPY_AND_ASSIGN(AdaDeltaSolver);\n};\n\n/**\n * @brief AdamSolver, an algorithm for first-order gradient-based optimization\n *        of stochastic objective functions, based on adaptive estimates of\n *        lower-order moments. Described in [1].\n *\n * [1] D. P. Kingma and J. L. Ba, \"ADAM: A Method for Stochastic Optimization.\"\n *     arXiv preprint arXiv:1412.6980v8 (2014).\n */\ntemplate <typename Dtype>\nclass AdamSolver : public SGDSolver<Dtype> {\n public:\n  explicit AdamSolver(const SolverParameter& param)\n      : SGDSolver<Dtype>(param) { AdamPreSolve();}\n  explicit AdamSolver(const string& param_file)\n      : SGDSolver<Dtype>(param_file) { AdamPreSolve(); }\n  virtual inline const char* type() const { return \"Adam\"; }\n\n protected:\n  void AdamPreSolve();\n  virtual void ComputeUpdateValue(int param_id, Dtype rate);\n\n  DISABLE_COPY_AND_ASSIGN(AdamSolver);\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SGD_SOLVERS_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/solver.hpp",
    "content": "#ifndef CAFFE_SOLVER_HPP_\n#define CAFFE_SOLVER_HPP_\n#include <boost/function.hpp>\n#include <string>\n#include <vector>\n\n#include \"caffe/net.hpp\"\n#include \"caffe/solver_factory.hpp\"\n\nnamespace caffe {\n\n/**\n  * @brief Enumeration of actions that a client of the Solver may request by\n  * implementing the Solver's action request function, which a\n  * a client may optionally provide in order to request early termination\n  * or saving a snapshot without exiting. In the executable caffe, this\n  * mechanism is used to allow the snapshot to be saved when stopping\n  * execution with a SIGINT (Ctrl-C).\n  */\n  namespace SolverAction {\n    enum Enum {\n      NONE = 0,  // Take no special action.\n      STOP = 1,  // Stop training. snapshot_after_train controls whether a\n                 // snapshot is created.\n      SNAPSHOT = 2  // Take a snapshot, and keep training.\n    };\n  }\n\n/**\n * @brief Type of a function that returns a Solver Action enumeration.\n */\ntypedef boost::function<SolverAction::Enum()> ActionCallback;\n\n/**\n * @brief An interface for classes that perform optimization on Net%s.\n *\n * Requires implementation of ApplyUpdate to compute a parameter update\n * given the current state of the Net parameters.\n */\ntemplate <typename Dtype>\nclass Solver {\n public:\n  explicit Solver(const SolverParameter& param,\n      const Solver* root_solver = NULL);\n  explicit Solver(const string& param_file, const Solver* root_solver = NULL);\n  void Init(const SolverParameter& param);\n  void InitTrainNet();\n  void InitTestNets();\n\n  // Client of the Solver optionally may call this in order to set the function\n  // that the solver uses to see what action it should take (e.g. snapshot or\n  // exit training early).\n  void SetActionFunction(ActionCallback func);\n  SolverAction::Enum GetRequestedAction();\n  // The main entry of the solver function. In default, iter will be zero. 
Pass\n  // in a non-zero iter number to resume training for a pre-trained net.\n  virtual void Solve(const char* resume_file = NULL);\n  inline void Solve(const string resume_file) { Solve(resume_file.c_str()); }\n  void Step(int iters);\n  // The Restore method simply dispatches to one of the\n  // RestoreSolverStateFrom___ protected methods. You should implement these\n  // methods to restore the state from the appropriate snapshot type.\n  void Restore(const char* resume_file);\n  // The Solver::Snapshot function implements the basic snapshotting utility\n  // that stores the learned net. You should implement the SnapshotSolverState()\n  // function that produces a SolverState protocol buffer that needs to be\n  // written to disk together with the learned net.\n  void Snapshot();\n  virtual ~Solver() {}\n  inline const SolverParameter& param() const { return param_; }\n  inline shared_ptr<Net<Dtype> > net() { return net_; }\n  inline const vector<shared_ptr<Net<Dtype> > >& test_nets() {\n    return test_nets_;\n  }\n  int iter() { return iter_; }\n\n  // Invoked at specific points during an iteration\n  class Callback {\n   protected:\n    virtual void on_start() = 0;\n    virtual void on_gradients_ready() = 0;\n\n    template <typename T>\n    friend class Solver;\n  };\n  const vector<Callback*>& callbacks() const { return callbacks_; }\n  void add_callback(Callback* value) {\n    callbacks_.push_back(value);\n  }\n\n  void CheckSnapshotWritePermissions();\n  /**\n   * @brief Returns the solver type.\n   */\n  virtual inline const char* type() const { return \"\"; }\n\n protected:\n  // Make and apply the update value for the current iteration.\n  virtual void ApplyUpdate() = 0;\n  string SnapshotFilename(const string extension);\n  string SnapshotToBinaryProto();\n  string SnapshotToHDF5();\n  // The test routine\n  void TestAll();\n  void Test(const int test_net_id = 0);\n  virtual void SnapshotSolverState(const string& model_filename) = 0;\n  virtual void 
RestoreSolverStateFromHDF5(const string& state_file) = 0;\n  virtual void RestoreSolverStateFromBinaryProto(const string& state_file) = 0;\n  void DisplayOutputBlobs(const int net_id);\n  void UpdateSmoothedLoss(Dtype loss, int start_iter, int average_loss);\n\n  SolverParameter param_;\n  int iter_;\n  int current_step_;\n  shared_ptr<Net<Dtype> > net_;\n  vector<shared_ptr<Net<Dtype> > > test_nets_;\n  vector<Callback*> callbacks_;\n  vector<Dtype> losses_;\n  Dtype smoothed_loss_;\n\n  // The root solver that holds root nets (actually containing shared layers)\n  // in data parallelism\n  const Solver* const root_solver_;\n\n  // A function that can be set by a client of the Solver to provide indication\n  // that it wants a snapshot saved and/or to exit early.\n  ActionCallback action_request_function_;\n\n  // True iff a request to stop early was received.\n  bool requested_early_exit_;\n\n  DISABLE_COPY_AND_ASSIGN(Solver);\n};\n\n/**\n * @brief Solver that only computes gradients, used as worker\n *        for multi-GPU training.\n */\ntemplate <typename Dtype>\nclass WorkerSolver : public Solver<Dtype> {\n public:\n  explicit WorkerSolver(const SolverParameter& param,\n      const Solver<Dtype>* root_solver = NULL)\n      : Solver<Dtype>(param, root_solver) {}\n\n protected:\n  void ApplyUpdate() {}\n  void SnapshotSolverState(const string& model_filename) {\n    LOG(FATAL) << \"Should not be called on worker solver.\";\n  }\n  void RestoreSolverStateFromBinaryProto(const string& state_file) {\n    LOG(FATAL) << \"Should not be called on worker solver.\";\n  }\n  void RestoreSolverStateFromHDF5(const string& state_file) {\n    LOG(FATAL) << \"Should not be called on worker solver.\";\n  }\n};\n\n}  // namespace caffe\n\n#endif  // CAFFE_SOLVER_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/solver_factory.hpp",
    "content": "/**\n * @brief A solver factory that allows one to register solvers, similar to\n * layer factory. During runtime, registered solvers could be called by passing\n * a SolverParameter protobuffer to the CreateSolver function:\n *\n *     SolverRegistry<Dtype>::CreateSolver(param);\n *\n * There are two ways to register a solver. Assuming that we have a solver like:\n *\n *   template <typename Dtype>\n *   class MyAwesomeSolver : public Solver<Dtype> {\n *     // your implementations\n *   };\n *\n * and its type is its C++ class name, but without the \"Solver\" at the end\n * (\"MyAwesomeSolver\" -> \"MyAwesome\").\n *\n * If the solver is going to be created simply by its constructor, in your c++\n * file, add the following line:\n *\n *    REGISTER_SOLVER_CLASS(MyAwesome);\n *\n * Or, if the solver is going to be created by another creator function, in the\n * format of:\n *\n *    template <typename Dtype>\n *    Solver<Dtype*> GetMyAwesomeSolver(const SolverParameter& param) {\n *      // your implementation\n *    }\n *\n * then you can register the creator function instead, like\n *\n * REGISTER_SOLVER_CREATOR(MyAwesome, GetMyAwesomeSolver)\n *\n * Note that each solver type should only be registered once.\n */\n\n#ifndef CAFFE_SOLVER_FACTORY_H_\n#define CAFFE_SOLVER_FACTORY_H_\n\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass Solver;\n\ntemplate <typename Dtype>\nclass SolverRegistry {\n public:\n  typedef Solver<Dtype>* (*Creator)(const SolverParameter&);\n  typedef std::map<string, Creator> CreatorRegistry;\n\n  static CreatorRegistry& Registry() {\n    static CreatorRegistry* g_registry_ = new CreatorRegistry();\n    return *g_registry_;\n  }\n\n  // Adds a creator.\n  static void AddCreator(const string& type, Creator creator) {\n    CreatorRegistry& registry = Registry();\n    CHECK_EQ(registry.count(type), 
0)\n        << \"Solver type \" << type << \" already registered.\";\n    registry[type] = creator;\n  }\n\n  // Get a solver using a SolverParameter.\n  static Solver<Dtype>* CreateSolver(const SolverParameter& param) {\n    const string& type = param.type();\n    CreatorRegistry& registry = Registry();\n    CHECK_EQ(registry.count(type), 1) << \"Unknown solver type: \" << type\n        << \" (known types: \" << SolverTypeListString() << \")\";\n    return registry[type](param);\n  }\n\n  static vector<string> SolverTypeList() {\n    CreatorRegistry& registry = Registry();\n    vector<string> solver_types;\n    for (typename CreatorRegistry::iterator iter = registry.begin();\n         iter != registry.end(); ++iter) {\n      solver_types.push_back(iter->first);\n    }\n    return solver_types;\n  }\n\n private:\n  // Solver registry should never be instantiated - everything is done with its\n  // static variables.\n  SolverRegistry() {}\n\n  static string SolverTypeListString() {\n    vector<string> solver_types = SolverTypeList();\n    string solver_types_str;\n    for (vector<string>::iterator iter = solver_types.begin();\n         iter != solver_types.end(); ++iter) {\n      if (iter != solver_types.begin()) {\n        solver_types_str += \", \";\n      }\n      solver_types_str += *iter;\n    }\n    return solver_types_str;\n  }\n};\n\n\ntemplate <typename Dtype>\nclass SolverRegisterer {\n public:\n  SolverRegisterer(const string& type,\n      Solver<Dtype>* (*creator)(const SolverParameter&)) {\n    // LOG(INFO) << \"Registering solver type: \" << type;\n    SolverRegistry<Dtype>::AddCreator(type, creator);\n  }\n};\n\n\n#define REGISTER_SOLVER_CREATOR(type, creator)                                 \\\n  static SolverRegisterer<float> g_creator_f_##type(#type, creator<float>);    \\\n  static SolverRegisterer<double> g_creator_d_##type(#type, creator<double>)   \\\n\n#define REGISTER_SOLVER_CLASS(type)                                            \\\n  
template <typename Dtype>                                                    \\\n  Solver<Dtype>* Creator_##type##Solver(                                       \\\n      const SolverParameter& param)                                            \\\n  {                                                                            \\\n    return new type##Solver<Dtype>(param);                                     \\\n  }                                                                            \\\n  REGISTER_SOLVER_CREATOR(type, Creator_##type##Solver)\n\n}  // namespace caffe\n\n#endif  // CAFFE_SOLVER_FACTORY_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/syncedmem.hpp",
    "content": "#ifndef CAFFE_SYNCEDMEM_HPP_\n#define CAFFE_SYNCEDMEM_HPP_\n\n#include <cstdlib>\n\n#include \"caffe/common.hpp\"\n\nnamespace caffe {\n\n// If CUDA is available and in GPU mode, host memory will be allocated pinned,\n// using cudaMallocHost. It avoids dynamic pinning for transfers (DMA).\n// The improvement in performance seems negligible in the single GPU case,\n// but might be more significant for parallel training. Most importantly,\n// it improved stability for large models on many GPUs.\ninline void CaffeMallocHost(void** ptr, size_t size, bool* use_cuda) {\n#ifndef CPU_ONLY\n  if (Caffe::mode() == Caffe::GPU) {\n    CUDA_CHECK(cudaMallocHost(ptr, size));\n    *use_cuda = true;\n    return;\n  }\n#endif\n  *ptr = malloc(size);\n  *use_cuda = false;\n  CHECK(*ptr) << \"host allocation of size \" << size << \" failed\";\n}\n\ninline void CaffeFreeHost(void* ptr, bool use_cuda) {\n#ifndef CPU_ONLY\n  if (use_cuda) {\n    CUDA_CHECK(cudaFreeHost(ptr));\n    return;\n  }\n#endif\n  free(ptr);\n}\n\n\n/**\n * @brief Manages memory allocation and synchronization between the host (CPU)\n *        and device (GPU).\n *\n * TODO(dox): more thorough description.\n */\nclass SyncedMemory {\n public:\n  SyncedMemory()\n      : cpu_ptr_(NULL), gpu_ptr_(NULL), size_(0), head_(UNINITIALIZED),\n        own_cpu_data_(false), cpu_malloc_use_cuda_(false), own_gpu_data_(false),\n        gpu_device_(-1) {}\n  explicit SyncedMemory(size_t size)\n      : cpu_ptr_(NULL), gpu_ptr_(NULL), size_(size), head_(UNINITIALIZED),\n        own_cpu_data_(false), cpu_malloc_use_cuda_(false), own_gpu_data_(false),\n        gpu_device_(-1) {}\n  ~SyncedMemory();\n  const void* cpu_data();\n  void set_cpu_data(void* data);\n  const void* gpu_data();\n  void set_gpu_data(void* data);\n  void* mutable_cpu_data();\n  void* mutable_gpu_data();\n  enum SyncedHead { UNINITIALIZED, HEAD_AT_CPU, HEAD_AT_GPU, SYNCED };\n  SyncedHead head() { return head_; }\n  size_t size() { return size_; 
}\n\n#ifndef CPU_ONLY\n  void async_gpu_push(const cudaStream_t& stream);\n#endif\n\n private:\n  void to_cpu();\n  void to_gpu();\n  void* cpu_ptr_;\n  void* gpu_ptr_;\n  size_t size_;\n  SyncedHead head_;\n  bool own_cpu_data_;\n  bool cpu_malloc_use_cuda_;\n  bool own_gpu_data_;\n  int gpu_device_;\n\n  DISABLE_COPY_AND_ASSIGN(SyncedMemory);\n};  // class SyncedMemory\n\n}  // namespace caffe\n\n#endif  // CAFFE_SYNCEDMEM_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/test/test_caffe_main.hpp",
    "content": "// The main caffe test code. Your test cpp code should include this hpp\n// to allow a main function to be compiled into the binary.\n#ifndef CAFFE_TEST_TEST_CAFFE_MAIN_HPP_\n#define CAFFE_TEST_TEST_CAFFE_MAIN_HPP_\n\n#include <glog/logging.h>\n#include <gtest/gtest.h>\n\n#include <cstdio>\n#include <cstdlib>\n\n#include \"caffe/common.hpp\"\n\nusing std::cout;\nusing std::endl;\n\n#ifdef CMAKE_BUILD\n  #include \"caffe_config.h\"\n#else\n  #define CUDA_TEST_DEVICE -1\n  #define CMAKE_SOURCE_DIR \"src/\"\n  #define EXAMPLES_SOURCE_DIR \"examples/\"\n  #define CMAKE_EXT \"\"\n#endif\n\nint main(int argc, char** argv);\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass MultiDeviceTest : public ::testing::Test {\n public:\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  MultiDeviceTest() {\n    Caffe::set_mode(TypeParam::device);\n  }\n  virtual ~MultiDeviceTest() {}\n};\n\ntypedef ::testing::Types<float, double> TestDtypes;\n\ntemplate <typename TypeParam>\nstruct CPUDevice {\n  typedef TypeParam Dtype;\n  static const Caffe::Brew device = Caffe::CPU;\n};\n\ntemplate <typename Dtype>\nclass CPUDeviceTest : public MultiDeviceTest<CPUDevice<Dtype> > {\n};\n\n#ifdef CPU_ONLY\n\ntypedef ::testing::Types<CPUDevice<float>,\n                         CPUDevice<double> > TestDtypesAndDevices;\n\n#else\n\ntemplate <typename TypeParam>\nstruct GPUDevice {\n  typedef TypeParam Dtype;\n  static const Caffe::Brew device = Caffe::GPU;\n};\n\ntemplate <typename Dtype>\nclass GPUDeviceTest : public MultiDeviceTest<GPUDevice<Dtype> > {\n};\n\ntypedef ::testing::Types<CPUDevice<float>, CPUDevice<double>,\n                         GPUDevice<float>, GPUDevice<double> >\n                         TestDtypesAndDevices;\n\n#endif\n\n}  // namespace caffe\n\n#endif  // CAFFE_TEST_TEST_CAFFE_MAIN_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/test/test_gradient_check_util.hpp",
    "content": "#ifndef CAFFE_TEST_GRADIENT_CHECK_UTIL_H_\n#define CAFFE_TEST_GRADIENT_CHECK_UTIL_H_\n\n#include <glog/logging.h>\n#include <gtest/gtest.h>\n\n#include <algorithm>\n#include <cmath>\n#include <vector>\n\n#include \"caffe/layer.hpp\"\n#include \"caffe/net.hpp\"\n\nnamespace caffe {\n\n// The gradient checker adds a L2 normalization loss function on top of the\n// top blobs, and checks the gradient.\ntemplate <typename Dtype>\nclass GradientChecker {\n public:\n  // kink and kink_range specify an ignored nonsmooth region of the form\n  // kink - kink_range <= |feature value| <= kink + kink_range,\n  // which accounts for all nonsmoothness in use by caffe\n  GradientChecker(const Dtype stepsize, const Dtype threshold,\n      const unsigned int seed = 1701, const Dtype kink = 0.,\n      const Dtype kink_range = -1)\n      : stepsize_(stepsize), threshold_(threshold), seed_(seed),\n        kink_(kink), kink_range_(kink_range) {}\n  // Checks the gradient of a layer, with provided bottom layers and top\n  // layers.\n  // Note that after the gradient check, we do not guarantee that the data\n  // stored in the layer parameters and the blobs are unchanged.\n  void CheckGradient(Layer<Dtype>* layer, const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top, int check_bottom = -1) {\n      layer->SetUp(bottom, top);\n      CheckGradientSingle(layer, bottom, top, check_bottom, -1, -1);\n  }\n  void CheckGradientExhaustive(Layer<Dtype>* layer,\n      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top,\n      int check_bottom = -1);\n\n  // CheckGradientEltwise can be used to test layers that perform element-wise\n  // computation only (e.g., neuron layers) -- where (d y_i) / (d x_j) = 0 when\n  // i != j.\n  void CheckGradientEltwise(Layer<Dtype>* layer,\n      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top);\n\n  // Checks the gradient of a single output with respect to particular input\n  // blob(s).  
If check_bottom = i >= 0, check only the ith bottom Blob.\n  // If check_bottom == -1, check everything -- all bottom Blobs and all\n  // param Blobs.  Otherwise (if check_bottom < -1), check only param Blobs.\n  void CheckGradientSingle(Layer<Dtype>* layer,\n      const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top,\n      int check_bottom, int top_id, int top_data_id, bool element_wise = false);\n\n  // Checks the gradient of a network. This network should not have any data\n  // layers or loss layers, since the function does not explicitly deal with\n  // such cases yet. All input blobs and parameter blobs are going to be\n  // checked, layer-by-layer to avoid numerical problems to accumulate.\n  void CheckGradientNet(const Net<Dtype>& net,\n      const vector<Blob<Dtype>*>& input);\n\n protected:\n  Dtype GetObjAndGradient(const Layer<Dtype>& layer,\n      const vector<Blob<Dtype>*>& top, int top_id = -1, int top_data_id = -1);\n  Dtype stepsize_;\n  Dtype threshold_;\n  unsigned int seed_;\n  Dtype kink_;\n  Dtype kink_range_;\n};\n\n\ntemplate <typename Dtype>\nvoid GradientChecker<Dtype>::CheckGradientSingle(Layer<Dtype>* layer,\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top,\n    int check_bottom, int top_id, int top_data_id, bool element_wise) {\n  if (element_wise) {\n    CHECK_EQ(0, layer->blobs().size());\n    CHECK_LE(0, top_id);\n    CHECK_LE(0, top_data_id);\n    const int top_count = top[top_id]->count();\n    for (int blob_id = 0; blob_id < bottom.size(); ++blob_id) {\n      CHECK_EQ(top_count, bottom[blob_id]->count());\n    }\n  }\n  // First, figure out what blobs we need to check against, and zero init\n  // parameter blobs.\n  vector<Blob<Dtype>*> blobs_to_check;\n  vector<bool> propagate_down(bottom.size(), check_bottom == -1);\n  for (int i = 0; i < layer->blobs().size(); ++i) {\n    Blob<Dtype>* blob = layer->blobs()[i].get();\n    caffe_set(blob->count(), static_cast<Dtype>(0), 
blob->mutable_cpu_diff());\n    blobs_to_check.push_back(blob);\n  }\n  if (check_bottom == -1) {\n    for (int i = 0; i < bottom.size(); ++i) {\n      blobs_to_check.push_back(bottom[i]);\n    }\n  } else if (check_bottom >= 0) {\n    CHECK_LT(check_bottom, bottom.size());\n    blobs_to_check.push_back(bottom[check_bottom]);\n    propagate_down[check_bottom] = true;\n  }\n  CHECK_GT(blobs_to_check.size(), 0) << \"No blobs to check.\";\n  // Compute the gradient analytically using Backward\n  Caffe::set_random_seed(seed_);\n  // Ignore the loss from the layer (it's just the weighted sum of the losses\n  // from the top blobs, whose gradients we may want to test individually).\n  layer->Forward(bottom, top);\n  // Get additional loss from the objective\n  GetObjAndGradient(*layer, top, top_id, top_data_id);\n  layer->Backward(top, propagate_down, bottom);\n  // Store computed gradients for all checked blobs\n  vector<shared_ptr<Blob<Dtype> > >\n      computed_gradient_blobs(blobs_to_check.size());\n  for (int blob_id = 0; blob_id < blobs_to_check.size(); ++blob_id) {\n    Blob<Dtype>* current_blob = blobs_to_check[blob_id];\n    computed_gradient_blobs[blob_id].reset(new Blob<Dtype>());\n    computed_gradient_blobs[blob_id]->ReshapeLike(*current_blob);\n    const int count = blobs_to_check[blob_id]->count();\n    const Dtype* diff = blobs_to_check[blob_id]->cpu_diff();\n    Dtype* computed_gradients =\n        computed_gradient_blobs[blob_id]->mutable_cpu_data();\n    caffe_copy(count, diff, computed_gradients);\n  }\n  // Compute derivative of top w.r.t. 
each bottom and parameter input using\n  // finite differencing.\n  // LOG(ERROR) << \"Checking \" << blobs_to_check.size() << \" blobs.\";\n  for (int blob_id = 0; blob_id < blobs_to_check.size(); ++blob_id) {\n    Blob<Dtype>* current_blob = blobs_to_check[blob_id];\n    const Dtype* computed_gradients =\n        computed_gradient_blobs[blob_id]->cpu_data();\n    // LOG(ERROR) << \"Blob \" << blob_id << \": checking \"\n    //     << current_blob->count() << \" parameters.\";\n    for (int feat_id = 0; feat_id < current_blob->count(); ++feat_id) {\n      // For an element-wise layer, we only need to do finite differencing to\n      // compute the derivative of top[top_id][top_data_id] w.r.t.\n      // bottom[blob_id][i] only for i == top_data_id.  For any other\n      // i != top_data_id, we know the derivative is 0 by definition, and simply\n      // check that that's true.\n      Dtype estimated_gradient = 0;\n      Dtype positive_objective = 0;\n      Dtype negative_objective = 0;\n      if (!element_wise || (feat_id == top_data_id)) {\n        // Do finite differencing.\n        // Compute loss with stepsize_ added to input.\n        current_blob->mutable_cpu_data()[feat_id] += stepsize_;\n        Caffe::set_random_seed(seed_);\n        layer->Forward(bottom, top);\n        positive_objective =\n            GetObjAndGradient(*layer, top, top_id, top_data_id);\n        // Compute loss with stepsize_ subtracted from input.\n        current_blob->mutable_cpu_data()[feat_id] -= stepsize_ * 2;\n        Caffe::set_random_seed(seed_);\n        layer->Forward(bottom, top);\n        negative_objective =\n            GetObjAndGradient(*layer, top, top_id, top_data_id);\n        // Recover original input value.\n        current_blob->mutable_cpu_data()[feat_id] += stepsize_;\n        estimated_gradient = (positive_objective - negative_objective) /\n            stepsize_ / 2.;\n      }\n      Dtype computed_gradient = computed_gradients[feat_id];\n      Dtype feature = 
current_blob->cpu_data()[feat_id];\n      // LOG(ERROR) << \"debug: \" << current_blob->cpu_data()[feat_id] << \" \"\n      //     << current_blob->cpu_diff()[feat_id];\n      if (kink_ - kink_range_ > fabs(feature)\n          || fabs(feature) > kink_ + kink_range_) {\n        // We check relative accuracy, but for too small values, we threshold\n        // the scale factor by 1.\n        Dtype scale = std::max<Dtype>(\n            std::max(fabs(computed_gradient), fabs(estimated_gradient)),\n            Dtype(1.));\n        EXPECT_NEAR(computed_gradient, estimated_gradient, threshold_ * scale)\n          << \"debug: (top_id, top_data_id, blob_id, feat_id)=\"\n          << top_id << \",\" << top_data_id << \",\" << blob_id << \",\" << feat_id\n          << \"; feat = \" << feature\n          << \"; objective+ = \" << positive_objective\n          << \"; objective- = \" << negative_objective;\n      }\n      // LOG(ERROR) << \"Feature: \" << current_blob->cpu_data()[feat_id];\n      // LOG(ERROR) << \"computed gradient: \" << computed_gradient\n      //    << \" estimated_gradient: \" << estimated_gradient;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid GradientChecker<Dtype>::CheckGradientExhaustive(Layer<Dtype>* layer,\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top,\n    int check_bottom) {\n  layer->SetUp(bottom, top);\n  CHECK_GT(top.size(), 0) << \"Exhaustive mode requires at least one top blob.\";\n  // LOG(ERROR) << \"Exhaustive Mode.\";\n  for (int i = 0; i < top.size(); ++i) {\n    // LOG(ERROR) << \"Exhaustive: blob \" << i << \" size \" << top[i]->count();\n    for (int j = 0; j < top[i]->count(); ++j) {\n      // LOG(ERROR) << \"Exhaustive: blob \" << i << \" data \" << j;\n      CheckGradientSingle(layer, bottom, top, check_bottom, i, j);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid GradientChecker<Dtype>::CheckGradientEltwise(Layer<Dtype>* layer,\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) 
{\n  layer->SetUp(bottom, top);\n  CHECK_GT(top.size(), 0) << \"Eltwise mode requires at least one top blob.\";\n  const int check_bottom = -1;\n  const bool element_wise = true;\n  for (int i = 0; i < top.size(); ++i) {\n    for (int j = 0; j < top[i]->count(); ++j) {\n      CheckGradientSingle(layer, bottom, top, check_bottom, i, j, element_wise);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid GradientChecker<Dtype>::CheckGradientNet(\n    const Net<Dtype>& net, const vector<Blob<Dtype>*>& input) {\n  const vector<shared_ptr<Layer<Dtype> > >& layers = net.layers();\n  vector<vector<Blob<Dtype>*> >& bottom_vecs = net.bottom_vecs();\n  vector<vector<Blob<Dtype>*> >& top_vecs = net.top_vecs();\n  for (int i = 0; i < layers.size(); ++i) {\n    net.Forward(input);\n    LOG(ERROR) << \"Checking gradient for \" << layers[i]->layer_param().name();\n    CheckGradientExhaustive(*(layers[i].get()), bottom_vecs[i], top_vecs[i]);\n  }\n}\n\ntemplate <typename Dtype>\nDtype GradientChecker<Dtype>::GetObjAndGradient(const Layer<Dtype>& layer,\n    const vector<Blob<Dtype>*>& top, int top_id, int top_data_id) {\n  Dtype loss = 0;\n  if (top_id < 0) {\n    // the loss will be half of the sum of squares of all outputs\n    for (int i = 0; i < top.size(); ++i) {\n      Blob<Dtype>* top_blob = top[i];\n      const Dtype* top_blob_data = top_blob->cpu_data();\n      Dtype* top_blob_diff = top_blob->mutable_cpu_diff();\n      int count = top_blob->count();\n      for (int j = 0; j < count; ++j) {\n        loss += top_blob_data[j] * top_blob_data[j];\n      }\n      // set the diff: simply the data.\n      caffe_copy(top_blob->count(), top_blob_data, top_blob_diff);\n    }\n    loss /= 2.;\n  } else {\n    // the loss will be the top_data_id-th element in the top_id-th blob.\n    for (int i = 0; i < top.size(); ++i) {\n      Blob<Dtype>* top_blob = top[i];\n      Dtype* top_blob_diff = top_blob->mutable_cpu_diff();\n      caffe_set(top_blob->count(), Dtype(0), top_blob_diff);\n    
}\n    const Dtype loss_weight = 2;\n    loss = top[top_id]->cpu_data()[top_data_id] * loss_weight;\n    top[top_id]->mutable_cpu_diff()[top_data_id] = loss_weight;\n  }\n  return loss;\n}\n\n}  // namespace caffe\n\n#endif  // CAFFE_TEST_GRADIENT_CHECK_UTIL_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/benchmark.hpp",
    "content": "#ifndef CAFFE_UTIL_BENCHMARK_H_\n#define CAFFE_UTIL_BENCHMARK_H_\n\n#include <boost/date_time/posix_time/posix_time.hpp>\n\n#include \"caffe/util/device_alternate.hpp\"\n\nnamespace caffe {\n\nclass Timer {\n public:\n  Timer();\n  virtual ~Timer();\n  virtual void Start();\n  virtual void Stop();\n  virtual float MilliSeconds();\n  virtual float MicroSeconds();\n  virtual float Seconds();\n\n  inline bool initted() { return initted_; }\n  inline bool running() { return running_; }\n  inline bool has_run_at_least_once() { return has_run_at_least_once_; }\n\n protected:\n  void Init();\n\n  bool initted_;\n  bool running_;\n  bool has_run_at_least_once_;\n#ifndef CPU_ONLY\n  cudaEvent_t start_gpu_;\n  cudaEvent_t stop_gpu_;\n#endif\n  boost::posix_time::ptime start_cpu_;\n  boost::posix_time::ptime stop_cpu_;\n  float elapsed_milliseconds_;\n  float elapsed_microseconds_;\n};\n\nclass CPUTimer : public Timer {\n public:\n  explicit CPUTimer();\n  virtual ~CPUTimer() {}\n  virtual void Start();\n  virtual void Stop();\n  virtual float MilliSeconds();\n  virtual float MicroSeconds();\n};\n\n}  // namespace caffe\n\n#endif   // CAFFE_UTIL_BENCHMARK_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/blocking_queue.hpp",
    "content": "#ifndef CAFFE_UTIL_BLOCKING_QUEUE_HPP_\n#define CAFFE_UTIL_BLOCKING_QUEUE_HPP_\n\n#include <queue>\n#include <string>\n\nnamespace caffe {\n\ntemplate<typename T>\nclass BlockingQueue {\n public:\n  explicit BlockingQueue();\n\n  void push(const T& t);\n\n  bool try_pop(T* t);\n\n  // This logs a message if the threads needs to be blocked\n  // useful for detecting e.g. when data feeding is too slow\n  T pop(const string& log_on_wait = \"\");\n\n  bool try_peek(T* t);\n\n  // Return element without removing it\n  T peek();\n\n  size_t size() const;\n\n protected:\n  /**\n   Move synchronization fields out instead of including boost/thread.hpp\n   to avoid a boost/NVCC issues (#1009, #1010) on OSX. Also fails on\n   Linux CUDA 7.0.18.\n   */\n  class sync;\n\n  std::queue<T> queue_;\n  shared_ptr<sync> sync_;\n\nDISABLE_COPY_AND_ASSIGN(BlockingQueue);\n};\n\n}  // namespace caffe\n\n#endif\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/cudnn.hpp",
    "content": "#ifndef CAFFE_UTIL_CUDNN_H_\n#define CAFFE_UTIL_CUDNN_H_\n#ifdef USE_CUDNN\n\n#include <cudnn.h>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#define CUDNN_VERSION_MIN(major, minor, patch) \\\n    (CUDNN_VERSION >= (major * 1000 + minor * 100 + patch))\n\n#define CUDNN_CHECK(condition) \\\n  do { \\\n    cudnnStatus_t status = condition; \\\n    CHECK_EQ(status, CUDNN_STATUS_SUCCESS) << \" \"\\\n      << cudnnGetErrorString(status); \\\n  } while (0)\n\ninline const char* cudnnGetErrorString(cudnnStatus_t status) {\n  switch (status) {\n    case CUDNN_STATUS_SUCCESS:\n      return \"CUDNN_STATUS_SUCCESS\";\n    case CUDNN_STATUS_NOT_INITIALIZED:\n      return \"CUDNN_STATUS_NOT_INITIALIZED\";\n    case CUDNN_STATUS_ALLOC_FAILED:\n      return \"CUDNN_STATUS_ALLOC_FAILED\";\n    case CUDNN_STATUS_BAD_PARAM:\n      return \"CUDNN_STATUS_BAD_PARAM\";\n    case CUDNN_STATUS_INTERNAL_ERROR:\n      return \"CUDNN_STATUS_INTERNAL_ERROR\";\n    case CUDNN_STATUS_INVALID_VALUE:\n      return \"CUDNN_STATUS_INVALID_VALUE\";\n    case CUDNN_STATUS_ARCH_MISMATCH:\n      return \"CUDNN_STATUS_ARCH_MISMATCH\";\n    case CUDNN_STATUS_MAPPING_ERROR:\n      return \"CUDNN_STATUS_MAPPING_ERROR\";\n    case CUDNN_STATUS_EXECUTION_FAILED:\n      return \"CUDNN_STATUS_EXECUTION_FAILED\";\n    case CUDNN_STATUS_NOT_SUPPORTED:\n      return \"CUDNN_STATUS_NOT_SUPPORTED\";\n    case CUDNN_STATUS_LICENSE_ERROR:\n      return \"CUDNN_STATUS_LICENSE_ERROR\";\n#if CUDNN_VERSION_MIN(6, 0, 0)\n    case CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING:\n      return \"CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING\";\n#endif\n  }\n  return \"Unknown cudnn status\";\n}\n\nnamespace caffe {\n\nnamespace cudnn {\n\ntemplate <typename Dtype> class dataType;\ntemplate<> class dataType<float>  {\n public:\n  static const cudnnDataType_t type = CUDNN_DATA_FLOAT;\n  static float oneval, zeroval;\n  static const void *one, *zero;\n};\ntemplate<> class dataType<double> {\n 
public:\n  static const cudnnDataType_t type = CUDNN_DATA_DOUBLE;\n  static double oneval, zeroval;\n  static const void *one, *zero;\n};\n\ntemplate <typename Dtype>\ninline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {\n  CUDNN_CHECK(cudnnCreateTensorDescriptor(desc));\n}\n\ntemplate <typename Dtype>\ninline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,\n    int n, int c, int h, int w,\n    int stride_n, int stride_c, int stride_h, int stride_w) {\n  CUDNN_CHECK(cudnnSetTensor4dDescriptorEx(*desc, dataType<Dtype>::type,\n        n, c, h, w, stride_n, stride_c, stride_h, stride_w));\n}\n\ntemplate <typename Dtype>\ninline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,\n    int n, int c, int h, int w) {\n  const int stride_w = 1;\n  const int stride_h = w * stride_w;\n  const int stride_c = h * stride_h;\n  const int stride_n = c * stride_c;\n  setTensor4dDesc<Dtype>(desc, n, c, h, w,\n                         stride_n, stride_c, stride_h, stride_w);\n}\n\ntemplate <typename Dtype>\ninline void createFilterDesc(cudnnFilterDescriptor_t* desc,\n    int n, int c, int h, int w) {\n  CUDNN_CHECK(cudnnCreateFilterDescriptor(desc));\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnSetFilter4dDescriptor(*desc, dataType<Dtype>::type,\n      CUDNN_TENSOR_NCHW, n, c, h, w));\n#else\n  CUDNN_CHECK(cudnnSetFilter4dDescriptor_v4(*desc, dataType<Dtype>::type,\n      CUDNN_TENSOR_NCHW, n, c, h, w));\n#endif\n}\n\ntemplate <typename Dtype>\ninline void createConvolutionDesc(cudnnConvolutionDescriptor_t* conv) {\n  CUDNN_CHECK(cudnnCreateConvolutionDescriptor(conv));\n}\n\ntemplate <typename Dtype>\ninline void setConvolutionDesc(cudnnConvolutionDescriptor_t* conv,\n    cudnnTensorDescriptor_t bottom, cudnnFilterDescriptor_t filter,\n    int pad_h, int pad_w, int stride_h, int stride_w) {\n#if CUDNN_VERSION_MIN(6, 0, 0)\n  CUDNN_CHECK(cudnnSetConvolution2dDescriptor(*conv,\n      pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION,\n      
dataType<Dtype>::type));\n#else\n    CUDNN_CHECK(cudnnSetConvolution2dDescriptor(*conv,\n      pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));\n#endif\n}\n\ntemplate <typename Dtype>\ninline void createPoolingDesc(cudnnPoolingDescriptor_t* pool_desc,\n    PoolingParameter_PoolMethod poolmethod, cudnnPoolingMode_t* mode,\n    int h, int w, int pad_h, int pad_w, int stride_h, int stride_w) {\n  switch (poolmethod) {\n  case PoolingParameter_PoolMethod_MAX:\n    *mode = CUDNN_POOLING_MAX;\n    break;\n  case PoolingParameter_PoolMethod_AVE:\n    *mode = CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING;\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n  CUDNN_CHECK(cudnnCreatePoolingDescriptor(pool_desc));\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnSetPooling2dDescriptor(*pool_desc, *mode,\n        CUDNN_PROPAGATE_NAN, h, w, pad_h, pad_w, stride_h, stride_w));\n#else\n  CUDNN_CHECK(cudnnSetPooling2dDescriptor_v4(*pool_desc, *mode,\n        CUDNN_PROPAGATE_NAN, h, w, pad_h, pad_w, stride_h, stride_w));\n#endif\n}\n\ntemplate <typename Dtype>\ninline void createActivationDescriptor(cudnnActivationDescriptor_t* activ_desc,\n    cudnnActivationMode_t mode) {\n  CUDNN_CHECK(cudnnCreateActivationDescriptor(activ_desc));\n  CUDNN_CHECK(cudnnSetActivationDescriptor(*activ_desc, mode,\n                                           CUDNN_PROPAGATE_NAN, Dtype(0)));\n}\n\n}  // namespace cudnn\n\n}  // namespace caffe\n\n#endif  // USE_CUDNN\n#endif  // CAFFE_UTIL_CUDNN_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/db.hpp",
    "content": "#ifndef CAFFE_UTIL_DB_HPP\n#define CAFFE_UTIL_DB_HPP\n\n#include <string>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe { namespace db {\n\nenum Mode { READ, WRITE, NEW };\n\nclass Cursor {\n public:\n  Cursor() { }\n  virtual ~Cursor() { }\n  virtual void SeekToFirst() = 0;\n  virtual void Next() = 0;\n  virtual string key() = 0;\n  virtual string value() = 0;\n  virtual bool valid() = 0;\n\n  DISABLE_COPY_AND_ASSIGN(Cursor);\n};\n\nclass Transaction {\n public:\n  Transaction() { }\n  virtual ~Transaction() { }\n  virtual void Put(const string& key, const string& value) = 0;\n  virtual void Commit() = 0;\n\n  DISABLE_COPY_AND_ASSIGN(Transaction);\n};\n\nclass DB {\n public:\n  DB() { }\n  virtual ~DB() { }\n  virtual void Open(const string& source, Mode mode) = 0;\n  virtual void Close() = 0;\n  virtual Cursor* NewCursor() = 0;\n  virtual Transaction* NewTransaction() = 0;\n\n  DISABLE_COPY_AND_ASSIGN(DB);\n};\n\nDB* GetDB(DataParameter::DB backend);\nDB* GetDB(const string& backend);\n\n}  // namespace db\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_DB_HPP\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/db_leveldb.hpp",
    "content": "#ifdef USE_LEVELDB\n#ifndef CAFFE_UTIL_DB_LEVELDB_HPP\n#define CAFFE_UTIL_DB_LEVELDB_HPP\n\n#include <string>\n\n#include \"leveldb/db.h\"\n#include \"leveldb/write_batch.h\"\n\n#include \"caffe/util/db.hpp\"\n\nnamespace caffe { namespace db {\n\nclass LevelDBCursor : public Cursor {\n public:\n  explicit LevelDBCursor(leveldb::Iterator* iter)\n    : iter_(iter) { SeekToFirst(); }\n  ~LevelDBCursor() { delete iter_; }\n  virtual void SeekToFirst() { iter_->SeekToFirst(); }\n  virtual void Next() { iter_->Next(); }\n  virtual string key() { return iter_->key().ToString(); }\n  virtual string value() { return iter_->value().ToString(); }\n  virtual bool valid() { return iter_->Valid(); }\n\n private:\n  leveldb::Iterator* iter_;\n};\n\nclass LevelDBTransaction : public Transaction {\n public:\n  explicit LevelDBTransaction(leveldb::DB* db) : db_(db) { CHECK_NOTNULL(db_); }\n  virtual void Put(const string& key, const string& value) {\n    batch_.Put(key, value);\n  }\n  virtual void Commit() {\n    leveldb::Status status = db_->Write(leveldb::WriteOptions(), &batch_);\n    CHECK(status.ok()) << \"Failed to write batch to leveldb \"\n                       << std::endl << status.ToString();\n  }\n\n private:\n  leveldb::DB* db_;\n  leveldb::WriteBatch batch_;\n\n  DISABLE_COPY_AND_ASSIGN(LevelDBTransaction);\n};\n\nclass LevelDB : public DB {\n public:\n  LevelDB() : db_(NULL) { }\n  virtual ~LevelDB() { Close(); }\n  virtual void Open(const string& source, Mode mode);\n  virtual void Close() {\n    if (db_ != NULL) {\n      delete db_;\n      db_ = NULL;\n    }\n  }\n  virtual LevelDBCursor* NewCursor() {\n    return new LevelDBCursor(db_->NewIterator(leveldb::ReadOptions()));\n  }\n  virtual LevelDBTransaction* NewTransaction() {\n    return new LevelDBTransaction(db_);\n  }\n\n private:\n  leveldb::DB* db_;\n};\n\n\n}  // namespace db\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_DB_LEVELDB_HPP\n#endif  // USE_LEVELDB\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/db_lmdb.hpp",
    "content": "#ifdef USE_LMDB\n#ifndef CAFFE_UTIL_DB_LMDB_HPP\n#define CAFFE_UTIL_DB_LMDB_HPP\n\n#include <string>\n\n#include \"lmdb.h\"\n\n#include \"caffe/util/db.hpp\"\n\nnamespace caffe { namespace db {\n\ninline void MDB_CHECK(int mdb_status) {\n  CHECK_EQ(mdb_status, MDB_SUCCESS) << mdb_strerror(mdb_status);\n}\n\nclass LMDBCursor : public Cursor {\n public:\n  explicit LMDBCursor(MDB_txn* mdb_txn, MDB_cursor* mdb_cursor)\n    : mdb_txn_(mdb_txn), mdb_cursor_(mdb_cursor), valid_(false) {\n    SeekToFirst();\n  }\n  virtual ~LMDBCursor() {\n    mdb_cursor_close(mdb_cursor_);\n    mdb_txn_abort(mdb_txn_);\n  }\n  virtual void SeekToFirst() { Seek(MDB_FIRST); }\n  virtual void Next() { Seek(MDB_NEXT); }\n  virtual string key() {\n    return string(static_cast<const char*>(mdb_key_.mv_data), mdb_key_.mv_size);\n  }\n  virtual string value() {\n    return string(static_cast<const char*>(mdb_value_.mv_data),\n        mdb_value_.mv_size);\n  }\n  virtual bool valid() { return valid_; }\n\n private:\n  void Seek(MDB_cursor_op op) {\n    int mdb_status = mdb_cursor_get(mdb_cursor_, &mdb_key_, &mdb_value_, op);\n    if (mdb_status == MDB_NOTFOUND) {\n      valid_ = false;\n    } else {\n      MDB_CHECK(mdb_status);\n      valid_ = true;\n    }\n  }\n\n  MDB_txn* mdb_txn_;\n  MDB_cursor* mdb_cursor_;\n  MDB_val mdb_key_, mdb_value_;\n  bool valid_;\n};\n\nclass LMDBTransaction : public Transaction {\n public:\n  explicit LMDBTransaction(MDB_dbi* mdb_dbi, MDB_txn* mdb_txn)\n    : mdb_dbi_(mdb_dbi), mdb_txn_(mdb_txn) { }\n  virtual void Put(const string& key, const string& value);\n  virtual void Commit() { MDB_CHECK(mdb_txn_commit(mdb_txn_)); }\n\n private:\n  MDB_dbi* mdb_dbi_;\n  MDB_txn* mdb_txn_;\n\n  DISABLE_COPY_AND_ASSIGN(LMDBTransaction);\n};\n\nclass LMDB : public DB {\n public:\n  LMDB() : mdb_env_(NULL) { }\n  virtual ~LMDB() { Close(); }\n  virtual void Open(const string& source, Mode mode);\n  virtual void Close() {\n    if (mdb_env_ != NULL) {\n      
mdb_dbi_close(mdb_env_, mdb_dbi_);\n      mdb_env_close(mdb_env_);\n      mdb_env_ = NULL;\n    }\n  }\n  virtual LMDBCursor* NewCursor();\n  virtual LMDBTransaction* NewTransaction();\n\n private:\n  MDB_env* mdb_env_;\n  MDB_dbi mdb_dbi_;\n};\n\n}  // namespace db\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_DB_LMDB_HPP\n#endif  // USE_LMDB\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/deformable_im2col.hpp",
    "content": "#ifndef _CAFFE_UTIL_DEFORMABLE_IM2COL_HPP_\n#define _CAFFE_UTIL_DEFORMABLE_IM2COL_HPP_\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid deformable_im2col_gpu(const Dtype* data_im, const Dtype* data_offset, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int deformable_group,\n    Dtype* data_col);\nvoid ooop();\n\ntemplate <typename Dtype>\nvoid deformable_col2im_gpu(const Dtype* data_col, const Dtype* data_offset, const int channels,\n    const int height, const int width, const int num_kernels,\n    const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, Dtype* grad_im);\n\ntemplate <typename Dtype>\nvoid deformable_col2im_coord_gpu(const Dtype* data_col,const Dtype* data_im,  const Dtype* data_offset, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, Dtype* grad_offset);\n\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_IM2COL_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/device_alternate.hpp",
    "content": "#ifndef CAFFE_UTIL_DEVICE_ALTERNATE_H_\n#define CAFFE_UTIL_DEVICE_ALTERNATE_H_\n\n#ifdef CPU_ONLY  // CPU-only Caffe.\n\n#include <vector>\n\n// Stub out GPU calls as unavailable.\n\n#define NO_GPU LOG(FATAL) << \"Cannot use GPU in CPU-only Caffe: check mode.\"\n\n#define STUB_GPU(classname) \\\ntemplate <typename Dtype> \\\nvoid classname<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom, \\\n    const vector<Blob<Dtype>*>& top) { NO_GPU; } \\\ntemplate <typename Dtype> \\\nvoid classname<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top, \\\n    const vector<bool>& propagate_down, \\\n    const vector<Blob<Dtype>*>& bottom) { NO_GPU; } \\\n\n#define STUB_GPU_FORWARD(classname, funcname) \\\ntemplate <typename Dtype> \\\nvoid classname<Dtype>::funcname##_##gpu(const vector<Blob<Dtype>*>& bottom, \\\n    const vector<Blob<Dtype>*>& top) { NO_GPU; } \\\n\n#define STUB_GPU_BACKWARD(classname, funcname) \\\ntemplate <typename Dtype> \\\nvoid classname<Dtype>::funcname##_##gpu(const vector<Blob<Dtype>*>& top, \\\n    const vector<bool>& propagate_down, \\\n    const vector<Blob<Dtype>*>& bottom) { NO_GPU; } \\\n\n#else  // Normal GPU + CPU Caffe.\n\n#include <cublas_v2.h>\n#include <cuda.h>\n#include <cuda_runtime.h>\n#include <curand.h>\n#include <driver_types.h>  // cuda driver types\n#ifdef USE_CUDNN  // cuDNN acceleration library.\n#include \"caffe/util/cudnn.hpp\"\n#endif\n\n//\n// CUDA macros\n//\n\n// CUDA: various checks for different function calls.\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    CHECK_EQ(error, cudaSuccess) << \" \" << cudaGetErrorString(error); \\\n  } while (0)\n\n#define CUBLAS_CHECK(condition) \\\n  do { \\\n    cublasStatus_t status = condition; \\\n    CHECK_EQ(status, CUBLAS_STATUS_SUCCESS) << \" \" \\\n      << caffe::cublasGetErrorString(status); \\\n  } while (0)\n\n#define CURAND_CHECK(condition) \\\n  do 
{ \\\n    curandStatus_t status = condition; \\\n    CHECK_EQ(status, CURAND_STATUS_SUCCESS) << \" \" \\\n      << caffe::curandGetErrorString(status); \\\n  } while (0)\n\n// CUDA: grid stride looping\n#define CUDA_KERNEL_LOOP(i, n) \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; \\\n       i < (n); \\\n       i += blockDim.x * gridDim.x)\n\n// CUDA: check for error after kernel execution and exit loudly if there is one.\n#define CUDA_POST_KERNEL_CHECK CUDA_CHECK(cudaPeekAtLastError())\n\nnamespace caffe {\n\n// CUDA: library error reporting.\nconst char* cublasGetErrorString(cublasStatus_t error);\nconst char* curandGetErrorString(curandStatus_t error);\n\n// CUDA: use 512 threads per block\nconst int CAFFE_CUDA_NUM_THREADS = 512;\n\n// CUDA: number of blocks for threads.\ninline int CAFFE_GET_BLOCKS(const int N) {\n  return (N + CAFFE_CUDA_NUM_THREADS - 1) / CAFFE_CUDA_NUM_THREADS;\n}\n\n}  // namespace caffe\n\n#endif  // CPU_ONLY\n\n#endif  // CAFFE_UTIL_DEVICE_ALTERNATE_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/format.hpp",
    "content": "#ifndef CAFFE_UTIL_FORMAT_H_\n#define CAFFE_UTIL_FORMAT_H_\n\n#include <iomanip>  // NOLINT(readability/streams)\n#include <sstream>  // NOLINT(readability/streams)\n#include <string>\n\nnamespace caffe {\n\ninline std::string format_int(int n, int numberOfLeadingZeros = 0 ) {\n  std::ostringstream s;\n  s << std::setw(numberOfLeadingZeros) << std::setfill('0') << n;\n  return s.str();\n}\n\n}\n\n#endif   // CAFFE_UTIL_FORMAT_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/gpu_util.cuh",
    "content": "#ifndef CAFFE_UTIL_GPU_UTIL_H_\n#define CAFFE_UTIL_GPU_UTIL_H_\n\nnamespace caffe {\n\ntemplate <typename Dtype>\ninline __device__ Dtype caffe_gpu_atomic_add(const Dtype val, Dtype* address);\n\ntemplate <>\ninline __device__\nfloat caffe_gpu_atomic_add(const float val, float* address) {\n  return atomicAdd(address, val);\n}\n\n// double atomicAdd implementation taken from:\n// http://docs.nvidia.com/cuda/cuda-c-programming-guide/#axzz3PVCpVsEG\ntemplate <>\ninline __device__\ndouble caffe_gpu_atomic_add(const double val, double* address) {\n  unsigned long long int* address_as_ull =  // NOLINT(runtime/int)\n      // NOLINT_NEXT_LINE(runtime/int)\n      reinterpret_cast<unsigned long long int*>(address);\n  unsigned long long int old = *address_as_ull;  // NOLINT(runtime/int)\n  unsigned long long int assumed;  // NOLINT(runtime/int)\n  do {\n    assumed = old;\n    old = atomicCAS(address_as_ull, assumed,\n        __double_as_longlong(val + __longlong_as_double(assumed)));\n  } while (assumed != old);\n  return __longlong_as_double(old);\n}\n\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_GPU_UTIL_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/hdf5.hpp",
    "content": "#ifndef CAFFE_UTIL_HDF5_H_\n#define CAFFE_UTIL_HDF5_H_\n\n#include <string>\n\n#include \"hdf5.h\"\n#include \"hdf5_hl.h\"\n\n#include \"caffe/blob.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid hdf5_load_nd_dataset_helper(\n    hid_t file_id, const char* dataset_name_, int min_dim, int max_dim,\n    Blob<Dtype>* blob);\n\ntemplate <typename Dtype>\nvoid hdf5_load_nd_dataset(\n    hid_t file_id, const char* dataset_name_, int min_dim, int max_dim,\n    Blob<Dtype>* blob);\n\ntemplate <typename Dtype>\nvoid hdf5_save_nd_dataset(\n    const hid_t file_id, const string& dataset_name, const Blob<Dtype>& blob,\n    bool write_diff = false);\n\nint hdf5_load_int(hid_t loc_id, const string& dataset_name);\nvoid hdf5_save_int(hid_t loc_id, const string& dataset_name, int i);\nstring hdf5_load_string(hid_t loc_id, const string& dataset_name);\nvoid hdf5_save_string(hid_t loc_id, const string& dataset_name,\n                      const string& s);\n\nint hdf5_get_num_links(hid_t loc_id);\nstring hdf5_get_name_by_idx(hid_t loc_id, int idx);\n\n}  // namespace caffe\n\n#endif   // CAFFE_UTIL_HDF5_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/im2col.hpp",
    "content": "#ifndef _CAFFE_UTIL_IM2COL_HPP_\n#define _CAFFE_UTIL_IM2COL_HPP_\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid im2col_nd_cpu(const Dtype* data_im, const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col);\n\ntemplate <typename Dtype>\nvoid im2col_cpu(const Dtype* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    Dtype* data_col);\n\ntemplate <typename Dtype>\nvoid col2im_nd_cpu(const Dtype* data_col, const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_im);\n\ntemplate <typename Dtype>\nvoid col2im_cpu(const Dtype* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    Dtype* data_im);\n\ntemplate <typename Dtype>\nvoid im2col_nd_gpu(const Dtype* data_im, const int num_spatial_axes,\n    const int col_size, const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col);\n\ntemplate <typename Dtype>\nvoid im2col_gpu(const Dtype* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    Dtype* data_col);\n\ntemplate <typename Dtype>\nvoid col2im_nd_gpu(const Dtype* data_col, const int num_spatial_axes,\n    const int im_size, const int* im_shape, const 
int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_im);\n\ntemplate <typename Dtype>\nvoid col2im_gpu(const Dtype* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    Dtype* data_im);\n\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_IM2COL_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/insert_splits.hpp",
    "content": "#ifndef _CAFFE_UTIL_INSERT_SPLITS_HPP_\n#define _CAFFE_UTIL_INSERT_SPLITS_HPP_\n\n#include <string>\n\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n// Copy NetParameters with SplitLayers added to replace any shared bottom\n// blobs with unique bottom blobs provided by the SplitLayer.\nvoid InsertSplits(const NetParameter& param, NetParameter* param_split);\n\nvoid ConfigureSplitLayer(const string& layer_name, const string& blob_name,\n    const int blob_idx, const int split_count, const float loss_weight,\n    LayerParameter* split_layer_param);\n\nstring SplitLayerName(const string& layer_name, const string& blob_name,\n    const int blob_idx);\n\nstring SplitBlobName(const string& layer_name, const string& blob_name,\n    const int blob_idx, const int split_idx);\n\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_INSERT_SPLITS_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/io.hpp",
    "content": "#ifndef CAFFE_UTIL_IO_H_\n#define CAFFE_UTIL_IO_H_\n\n#include <boost/filesystem.hpp>\n#include <iomanip>\n#include <iostream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"google/protobuf/message.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/format.hpp\"\n\n#ifndef CAFFE_TMP_DIR_RETRIES\n#define CAFFE_TMP_DIR_RETRIES 100\n#endif\n\nnamespace caffe {\n\nusing ::google::protobuf::Message;\nusing ::boost::filesystem::path;\n\ninline void MakeTempDir(string* temp_dirname) {\n  temp_dirname->clear();\n  const path& model =\n    boost::filesystem::temp_directory_path()/\"caffe_test.%%%%-%%%%\";\n  for ( int i = 0; i < CAFFE_TMP_DIR_RETRIES; i++ ) {\n    const path& dir = boost::filesystem::unique_path(model).string();\n    bool done = boost::filesystem::create_directory(dir);\n    if ( done ) {\n      *temp_dirname = dir.string();\n      return;\n    }\n  }\n  LOG(FATAL) << \"Failed to create a temporary directory.\";\n}\n\ninline void MakeTempFilename(string* temp_filename) {\n  static path temp_files_subpath;\n  static uint64_t next_temp_file = 0;\n  temp_filename->clear();\n  if ( temp_files_subpath.empty() ) {\n    string path_string=\"\";\n    MakeTempDir(&path_string);\n    temp_files_subpath = path_string;\n  }\n  *temp_filename =\n    (temp_files_subpath/caffe::format_int(next_temp_file++, 9)).string();\n}\n\nbool ReadProtoFromTextFile(const char* filename, Message* proto);\n\ninline bool ReadProtoFromTextFile(const string& filename, Message* proto) {\n  return ReadProtoFromTextFile(filename.c_str(), proto);\n}\n\ninline void ReadProtoFromTextFileOrDie(const char* filename, Message* proto) {\n  CHECK(ReadProtoFromTextFile(filename, proto));\n}\n\ninline void ReadProtoFromTextFileOrDie(const string& filename, Message* proto) {\n  ReadProtoFromTextFileOrDie(filename.c_str(), proto);\n}\n\nvoid WriteProtoToTextFile(const Message& proto, const char* filename);\ninline void 
WriteProtoToTextFile(const Message& proto, const string& filename) {\n  WriteProtoToTextFile(proto, filename.c_str());\n}\n\nbool ReadProtoFromBinaryFile(const char* filename, Message* proto);\n\ninline bool ReadProtoFromBinaryFile(const string& filename, Message* proto) {\n  return ReadProtoFromBinaryFile(filename.c_str(), proto);\n}\n\ninline void ReadProtoFromBinaryFileOrDie(const char* filename, Message* proto) {\n  CHECK(ReadProtoFromBinaryFile(filename, proto));\n}\n\ninline void ReadProtoFromBinaryFileOrDie(const string& filename,\n                                         Message* proto) {\n  ReadProtoFromBinaryFileOrDie(filename.c_str(), proto);\n}\n\n\nvoid WriteProtoToBinaryFile(const Message& proto, const char* filename);\ninline void WriteProtoToBinaryFile(\n    const Message& proto, const string& filename) {\n  WriteProtoToBinaryFile(proto, filename.c_str());\n}\n\nbool ReadFileToDatum(const string& filename, const int label, Datum* datum);\n\ninline bool ReadFileToDatum(const string& filename, Datum* datum) {\n  return ReadFileToDatum(filename, -1, datum);\n}\n\nbool ReadImageToDatum(const string& filename, const int label,\n    const int height, const int width, const bool is_color,\n    const std::string & encoding, Datum* datum);\n\ninline bool ReadImageToDatum(const string& filename, const int label,\n    const int height, const int width, const bool is_color, Datum* datum) {\n  return ReadImageToDatum(filename, label, height, width, is_color,\n                          \"\", datum);\n}\n\ninline bool ReadImageToDatum(const string& filename, const int label,\n    const int height, const int width, Datum* datum) {\n  return ReadImageToDatum(filename, label, height, width, true, datum);\n}\n\ninline bool ReadImageToDatum(const string& filename, const int label,\n    const bool is_color, Datum* datum) {\n  return ReadImageToDatum(filename, label, 0, 0, is_color, datum);\n}\n\ninline bool ReadImageToDatum(const string& filename, const int label,\n    
Datum* datum) {\n  return ReadImageToDatum(filename, label, 0, 0, true, datum);\n}\n\ninline bool ReadImageToDatum(const string& filename, const int label,\n    const std::string & encoding, Datum* datum) {\n  return ReadImageToDatum(filename, label, 0, 0, true, encoding, datum);\n}\n\nbool DecodeDatumNative(Datum* datum);\nbool DecodeDatum(Datum* datum, bool is_color);\n\n#ifdef USE_OPENCV\ncv::Mat ReadImageToCVMat(const string& filename,\n    const int height, const int width, const bool is_color);\n\ncv::Mat ReadImageToCVMat(const string& filename,\n    const int height, const int width);\n\ncv::Mat ReadImageToCVMat(const string& filename,\n    const bool is_color);\n\ncv::Mat ReadImageToCVMat(const string& filename);\n\ncv::Mat DecodeDatumToCVMatNative(const Datum& datum);\ncv::Mat DecodeDatumToCVMat(const Datum& datum, bool is_color);\n\nvoid CVMatToDatum(const cv::Mat& cv_img, Datum* datum);\n#endif  // USE_OPENCV\n\n}  // namespace caffe\n\n#endif   // CAFFE_UTIL_IO_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/math_functions.hpp",
    "content": "#ifndef CAFFE_UTIL_MATH_FUNCTIONS_H_\n#define CAFFE_UTIL_MATH_FUNCTIONS_H_\n\n#include <stdint.h>\n#include <cmath>  // for std::fabs and std::signbit\n\n#include \"glog/logging.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/device_alternate.hpp\"\n#include \"caffe/util/mkl_alternate.hpp\"\n\nnamespace caffe {\n\n// Caffe gemm provides a simpler interface to the gemm functions, with the\n// limitation that the data has to be contiguous in memory.\ntemplate <typename Dtype>\nvoid caffe_cpu_gemm(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const Dtype alpha, const Dtype* A, const Dtype* B, const Dtype beta,\n    Dtype* C);\n\ntemplate <typename Dtype>\nvoid caffe_cpu_gemv(const CBLAS_TRANSPOSE TransA, const int M, const int N,\n    const Dtype alpha, const Dtype* A, const Dtype* x, const Dtype beta,\n    Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_axpy(const int N, const Dtype alpha, const Dtype* X,\n    Dtype* Y);\n\ntemplate <typename Dtype>\nvoid caffe_cpu_axpby(const int N, const Dtype alpha, const Dtype* X,\n    const Dtype beta, Dtype* Y);\n\ntemplate <typename Dtype>\nvoid caffe_copy(const int N, const Dtype *X, Dtype *Y);\n\ntemplate <typename Dtype>\nvoid caffe_set(const int N, const Dtype alpha, Dtype *X);\n\ninline void caffe_memset(const size_t N, const int alpha, void* X) {\n  memset(X, alpha, N);  // NOLINT(caffe/alt_fn)\n}\n\ntemplate <typename Dtype>\nvoid caffe_add_scalar(const int N, const Dtype alpha, Dtype *X);\n\ntemplate <typename Dtype>\nvoid caffe_scal(const int N, const Dtype alpha, Dtype *X);\n\ntemplate <typename Dtype>\nvoid caffe_sqr(const int N, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_add(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_sub(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_mul(const int N, const 
Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_div(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_powx(const int n, const Dtype* a, const Dtype b, Dtype* y);\n\nunsigned int caffe_rng_rand();\n\ntemplate <typename Dtype>\nDtype caffe_nextafter(const Dtype b);\n\ntemplate <typename Dtype>\nvoid caffe_rng_uniform(const int n, const Dtype a, const Dtype b, Dtype* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_gaussian(const int n, const Dtype mu, const Dtype sigma,\n                        Dtype* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_bernoulli(const int n, const Dtype p, int* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_bernoulli(const int n, const Dtype p, unsigned int* r);\n\ntemplate <typename Dtype>\nvoid caffe_exp(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_log(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_abs(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nDtype caffe_cpu_dot(const int n, const Dtype* x, const Dtype* y);\n\ntemplate <typename Dtype>\nDtype caffe_cpu_strided_dot(const int n, const Dtype* x, const int incx,\n    const Dtype* y, const int incy);\n\n// Returns the sum of the absolute values of the elements of vector x\ntemplate <typename Dtype>\nDtype caffe_cpu_asum(const int n, const Dtype* x);\n\n// the branchless, type-safe version from\n// http://stackoverflow.com/questions/1903954/is-there-a-standard-sign-function-signum-sgn-in-c-c\ntemplate<typename Dtype>\ninline int8_t caffe_sign(Dtype val) {\n  return (Dtype(0) < val) - (val < Dtype(0));\n}\n\n// The following two macros are modifications of DEFINE_VSL_UNARY_FUNC\n//   in include/caffe/util/mkl_alternate.hpp authored by @Rowland Depp.\n// Please refer to commit 7e8ef25c7 of the boost-eigen branch.\n// Git cherry picking that commit caused a conflict hard to resolve and\n//   copying that file in 
convenient for code reviewing.\n// So they have to be pasted here temporarily.\n#define DEFINE_CAFFE_CPU_UNARY_FUNC(name, operation) \\\n  template<typename Dtype> \\\n  void caffe_cpu_##name(const int n, const Dtype* x, Dtype* y) { \\\n    CHECK_GT(n, 0); CHECK(x); CHECK(y); \\\n    for (int i = 0; i < n; ++i) { \\\n      operation; \\\n    } \\\n  }\n\n// output is 1 for the positives, 0 for zero, and -1 for the negatives\nDEFINE_CAFFE_CPU_UNARY_FUNC(sign, y[i] = caffe_sign<Dtype>(x[i]));\n\n// This returns a nonzero value if the input has its sign bit set.\n// The name sngbit is meant to avoid conflicts with std::signbit in the macro.\n// The extra parens are needed because CUDA < 6.5 defines signbit as a macro,\n// and we don't want that to expand here when CUDA headers are also included.\nDEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, \\\n    y[i] = static_cast<bool>((std::signbit)(x[i])));\n\nDEFINE_CAFFE_CPU_UNARY_FUNC(fabs, y[i] = std::fabs(x[i]));\n\ntemplate <typename Dtype>\nvoid caffe_cpu_scale(const int n, const Dtype alpha, const Dtype *x, Dtype* y);\n\n#ifndef CPU_ONLY  // GPU\n\n// Decaf gpu gemm provides an interface that is almost the same as the cpu\n// gemm function - following the c convention and calling the fortran-order\n// gpu code under the hood.\ntemplate <typename Dtype>\nvoid caffe_gpu_gemm(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const Dtype alpha, const Dtype* A, const Dtype* B, const Dtype beta,\n    Dtype* C);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_gemv(const CBLAS_TRANSPOSE TransA, const int M, const int N,\n    const Dtype alpha, const Dtype* A, const Dtype* x, const Dtype beta,\n    Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_axpy(const int N, const Dtype alpha, const Dtype* X,\n    Dtype* Y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_axpby(const int N, const Dtype alpha, const Dtype* X,\n    const Dtype beta, Dtype* Y);\n\nvoid caffe_gpu_memcpy(const 
size_t N, const void *X, void *Y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_set(const int N, const Dtype alpha, Dtype *X);\n\ninline void caffe_gpu_memset(const size_t N, const int alpha, void* X) {\n#ifndef CPU_ONLY\n  CUDA_CHECK(cudaMemset(X, alpha, N));  // NOLINT(caffe/alt_fn)\n#else\n  NO_GPU;\n#endif\n}\n\ntemplate <typename Dtype>\nvoid caffe_gpu_add_scalar(const int N, const Dtype alpha, Dtype *X);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_scal(const int N, const Dtype alpha, Dtype *X);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_add(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_sub(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_mul(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_div(const int N, const Dtype* a, const Dtype* b, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_abs(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_exp(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_log(const int n, const Dtype* a, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_powx(const int n, const Dtype* a, const Dtype b, Dtype* y);\n\n// caffe_gpu_rng_uniform with two arguments generates integers in the range\n// [0, UINT_MAX].\nvoid caffe_gpu_rng_uniform(const int n, unsigned int* r);\n\n// caffe_gpu_rng_uniform with four arguments generates floats in the range\n// (a, b] (strictly greater than a, less than or equal to b) due to the\n// specification of curandGenerateUniform.  
With a = 0, b = 1, just calls\n// curandGenerateUniform; with other limits will shift and scale the outputs\n// appropriately after calling curandGenerateUniform.\ntemplate <typename Dtype>\nvoid caffe_gpu_rng_uniform(const int n, const Dtype a, const Dtype b, Dtype* r);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_rng_gaussian(const int n, const Dtype mu, const Dtype sigma,\n                            Dtype* r);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_rng_bernoulli(const int n, const Dtype p, int* r);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_dot(const int n, const Dtype* x, const Dtype* y, Dtype* out);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_asum(const int n, const Dtype* x, Dtype* y);\n\ntemplate<typename Dtype>\nvoid caffe_gpu_sign(const int n, const Dtype* x, Dtype* y);\n\ntemplate<typename Dtype>\nvoid caffe_gpu_sgnbit(const int n, const Dtype* x, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_fabs(const int n, const Dtype* x, Dtype* y);\n\ntemplate <typename Dtype>\nvoid caffe_gpu_scale(const int n, const Dtype alpha, const Dtype *x, Dtype* y);\n\n#define DEFINE_AND_INSTANTIATE_GPU_UNARY_FUNC(name, operation) \\\ntemplate<typename Dtype> \\\n__global__ void name##_kernel(const int n, const Dtype* x, Dtype* y) { \\\n  CUDA_KERNEL_LOOP(index, n) { \\\n    operation; \\\n  } \\\n} \\\ntemplate <> \\\nvoid caffe_gpu_##name<float>(const int n, const float* x, float* y) { \\\n  /* NOLINT_NEXT_LINE(whitespace/operators) */ \\\n  name##_kernel<float><<<CAFFE_GET_BLOCKS(n), CAFFE_CUDA_NUM_THREADS>>>( \\\n      n, x, y); \\\n} \\\ntemplate <> \\\nvoid caffe_gpu_##name<double>(const int n, const double* x, double* y) { \\\n  /* NOLINT_NEXT_LINE(whitespace/operators) */ \\\n  name##_kernel<double><<<CAFFE_GET_BLOCKS(n), CAFFE_CUDA_NUM_THREADS>>>( \\\n      n, x, y); \\\n}\n\n#endif  // !CPU_ONLY\n\n}  // namespace caffe\n\n#endif  // CAFFE_UTIL_MATH_FUNCTIONS_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/mkl_alternate.hpp",
    "content": "#ifndef CAFFE_UTIL_MKL_ALTERNATE_H_\n#define CAFFE_UTIL_MKL_ALTERNATE_H_\n\n#ifdef USE_MKL\n\n#include <mkl.h>\n\n#else  // If use MKL, simply include the MKL header\n\nextern \"C\" {\n#include <cblas.h>\n}\n#include <math.h>\n\n// Functions that caffe uses but are not present if MKL is not linked.\n\n// A simple way to define the vsl unary functions. The operation should\n// be in the form e.g. y[i] = sqrt(a[i])\n#define DEFINE_VSL_UNARY_FUNC(name, operation) \\\n  template<typename Dtype> \\\n  void v##name(const int n, const Dtype* a, Dtype* y) { \\\n    CHECK_GT(n, 0); CHECK(a); CHECK(y); \\\n    for (int i = 0; i < n; ++i) { operation; } \\\n  } \\\n  inline void vs##name( \\\n    const int n, const float* a, float* y) { \\\n    v##name<float>(n, a, y); \\\n  } \\\n  inline void vd##name( \\\n      const int n, const double* a, double* y) { \\\n    v##name<double>(n, a, y); \\\n  }\n\nDEFINE_VSL_UNARY_FUNC(Sqr, y[i] = a[i] * a[i]);\nDEFINE_VSL_UNARY_FUNC(Exp, y[i] = exp(a[i]));\nDEFINE_VSL_UNARY_FUNC(Ln, y[i] = log(a[i]));\nDEFINE_VSL_UNARY_FUNC(Abs, y[i] = fabs(a[i]));\n\n// A simple way to define the vsl unary functions with singular parameter b.\n// The operation should be in the form e.g. y[i] = pow(a[i], b)\n#define DEFINE_VSL_UNARY_FUNC_WITH_PARAM(name, operation) \\\n  template<typename Dtype> \\\n  void v##name(const int n, const Dtype* a, const Dtype b, Dtype* y) { \\\n    CHECK_GT(n, 0); CHECK(a); CHECK(y); \\\n    for (int i = 0; i < n; ++i) { operation; } \\\n  } \\\n  inline void vs##name( \\\n    const int n, const float* a, const float b, float* y) { \\\n    v##name<float>(n, a, b, y); \\\n  } \\\n  inline void vd##name( \\\n      const int n, const double* a, const float b, double* y) { \\\n    v##name<double>(n, a, b, y); \\\n  }\n\nDEFINE_VSL_UNARY_FUNC_WITH_PARAM(Powx, y[i] = pow(a[i], b));\n\n// A simple way to define the vsl binary functions. The operation should\n// be in the form e.g. 
y[i] = a[i] + b[i]\n#define DEFINE_VSL_BINARY_FUNC(name, operation) \\\n  template<typename Dtype> \\\n  void v##name(const int n, const Dtype* a, const Dtype* b, Dtype* y) { \\\n    CHECK_GT(n, 0); CHECK(a); CHECK(b); CHECK(y); \\\n    for (int i = 0; i < n; ++i) { operation; } \\\n  } \\\n  inline void vs##name( \\\n    const int n, const float* a, const float* b, float* y) { \\\n    v##name<float>(n, a, b, y); \\\n  } \\\n  inline void vd##name( \\\n      const int n, const double* a, const double* b, double* y) { \\\n    v##name<double>(n, a, b, y); \\\n  }\n\nDEFINE_VSL_BINARY_FUNC(Add, y[i] = a[i] + b[i]);\nDEFINE_VSL_BINARY_FUNC(Sub, y[i] = a[i] - b[i]);\nDEFINE_VSL_BINARY_FUNC(Mul, y[i] = a[i] * b[i]);\nDEFINE_VSL_BINARY_FUNC(Div, y[i] = a[i] / b[i]);\n\n// In addition, MKL comes with an additional function axpby that is not present\n// in standard blas. We will simply use a two-step (inefficient, of course) way\n// to mimic that.\ninline void cblas_saxpby(const int N, const float alpha, const float* X,\n                         const int incX, const float beta, float* Y,\n                         const int incY) {\n  cblas_sscal(N, beta, Y, incY);\n  cblas_saxpy(N, alpha, X, incX, Y, incY);\n}\ninline void cblas_daxpby(const int N, const double alpha, const double* X,\n                         const int incX, const double beta, double* Y,\n                         const int incY) {\n  cblas_dscal(N, beta, Y, incY);\n  cblas_daxpy(N, alpha, X, incX, Y, incY);\n}\n\n#endif  // USE_MKL\n#endif  // CAFFE_UTIL_MKL_ALTERNATE_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/rng.hpp",
    "content": "#ifndef CAFFE_RNG_CPP_HPP_\n#define CAFFE_RNG_CPP_HPP_\n\n#include <algorithm>\n#include <iterator>\n\n#include \"boost/random/mersenne_twister.hpp\"\n#include \"boost/random/uniform_int.hpp\"\n\n#include \"caffe/common.hpp\"\n\nnamespace caffe {\n\ntypedef boost::mt19937 rng_t;\n\ninline rng_t* caffe_rng() {\n  return static_cast<caffe::rng_t*>(Caffe::rng_stream().generator());\n}\n\n// Fisher–Yates algorithm\ntemplate <class RandomAccessIterator, class RandomGenerator>\ninline void shuffle(RandomAccessIterator begin, RandomAccessIterator end,\n                    RandomGenerator* gen) {\n  typedef typename std::iterator_traits<RandomAccessIterator>::difference_type\n      difference_type;\n  typedef typename boost::uniform_int<difference_type> dist_type;\n\n  difference_type length = std::distance(begin, end);\n  if (length <= 0) return;\n\n  for (difference_type i = length - 1; i > 0; --i) {\n    dist_type dist(0, i);\n    std::iter_swap(begin + i, begin + dist(*gen));\n  }\n}\n\ntemplate <class RandomAccessIterator>\ninline void shuffle(RandomAccessIterator begin, RandomAccessIterator end) {\n  shuffle(begin, end, caffe_rng());\n}\n}  // namespace caffe\n\n#endif  // CAFFE_RNG_HPP_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/signal_handler.h",
    "content": "#ifndef INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_\n#define INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_\n\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/solver.hpp\"\n\nnamespace caffe {\n\nclass SignalHandler {\n public:\n  // Contructor. Specify what action to take when a signal is received.\n  SignalHandler(SolverAction::Enum SIGINT_action,\n                SolverAction::Enum SIGHUP_action);\n  ~SignalHandler();\n  ActionCallback GetActionFunction();\n private:\n  SolverAction::Enum CheckForSignals() const;\n  SolverAction::Enum SIGINT_action_;\n  SolverAction::Enum SIGHUP_action_;\n};\n\n}  // namespace caffe\n\n#endif  // INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_\n"
  },
  {
    "path": "caffe-fpn/include/caffe/util/upgrade_proto.hpp",
    "content": "#ifndef CAFFE_UTIL_UPGRADE_PROTO_H_\n#define CAFFE_UTIL_UPGRADE_PROTO_H_\n\n#include <string>\n\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\n// Return true iff the net is not the current version.\nbool NetNeedsUpgrade(const NetParameter& net_param);\n\n// Check for deprecations and upgrade the NetParameter as needed.\nbool UpgradeNetAsNeeded(const string& param_file, NetParameter* param);\n\n// Read parameters from a file into a NetParameter proto message.\nvoid ReadNetParamsFromTextFileOrDie(const string& param_file,\n                                    NetParameter* param);\nvoid ReadNetParamsFromBinaryFileOrDie(const string& param_file,\n                                      NetParameter* param);\n\n// Return true iff any layer contains parameters specified using\n// deprecated V0LayerParameter.\nbool NetNeedsV0ToV1Upgrade(const NetParameter& net_param);\n\n// Perform all necessary transformations to upgrade a V0NetParameter into a\n// NetParameter (including upgrading padding layers and LayerParameters).\nbool UpgradeV0Net(const NetParameter& v0_net_param, NetParameter* net_param);\n\n// Upgrade NetParameter with padding layers to pad-aware conv layers.\n// For any padding layer, remove it and put its pad parameter in any layers\n// taking its top blob as input.\n// Error if any of these above layers are not-conv layers.\nvoid UpgradeV0PaddingLayers(const NetParameter& param,\n                            NetParameter* param_upgraded_pad);\n\n// Upgrade a single V0LayerConnection to the V1LayerParameter format.\nbool UpgradeV0LayerParameter(const V1LayerParameter& v0_layer_connection,\n                             V1LayerParameter* layer_param);\n\nV1LayerParameter_LayerType UpgradeV0LayerType(const string& type);\n\n// Return true iff any layer contains deprecated data transformation parameters.\nbool NetNeedsDataUpgrade(const NetParameter& net_param);\n\n// Perform all necessary transformations to upgrade old transformation 
fields\n// into a TransformationParameter.\nvoid UpgradeNetDataTransformation(NetParameter* net_param);\n\n// Return true iff the Net contains any layers specified as V1LayerParameters.\nbool NetNeedsV1ToV2Upgrade(const NetParameter& net_param);\n\n// Perform all necessary transformations to upgrade a NetParameter with\n// deprecated V1LayerParameters.\nbool UpgradeV1Net(const NetParameter& v1_net_param, NetParameter* net_param);\n\nbool UpgradeV1LayerParameter(const V1LayerParameter& v1_layer_param,\n                             LayerParameter* layer_param);\n\nconst char* UpgradeV1LayerType(const V1LayerParameter_LayerType type);\n\n// Return true iff the solver contains any old solver_type specified as enums\nbool SolverNeedsTypeUpgrade(const SolverParameter& solver_param);\n\nbool UpgradeSolverType(SolverParameter* solver_param);\n\n// Check for deprecations and upgrade the SolverParameter as needed.\nbool UpgradeSolverAsNeeded(const string& param_file, SolverParameter* param);\n\n// Read parameters from a file into a SolverParameter proto message.\nvoid ReadSolverParamsFromTextFileOrDie(const string& param_file,\n                                       SolverParameter* param);\n\n}  // namespace caffe\n\n#endif   // CAFFE_UTIL_UPGRADE_PROTO_H_\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/+test/test_io.m",
    "content": "classdef test_io < matlab.unittest.TestCase\n  methods (Test)\n    function test_read_write_mean(self)\n      % randomly generate mean data\n      width = 200;\n      height = 300;\n      channels = 3;\n      mean_data_write = 255 * rand(width, height, channels, 'single');\n      % write mean data to binary proto\n      mean_proto_file = tempname();\n      caffe.io.write_mean(mean_data_write, mean_proto_file);\n      % read mean data from saved binary proto and test whether they are equal\n      mean_data_read = caffe.io.read_mean(mean_proto_file);\n      self.verifyEqual(mean_data_write, mean_data_read)\n      delete(mean_proto_file);\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/+test/test_net.m",
    "content": "classdef test_net < matlab.unittest.TestCase\n  \n  properties\n    num_output\n    model_file\n    net\n  end\n  \n  methods (Static)\n    function model_file = simple_net_file(num_output)\n      model_file = tempname();\n      fid = fopen(model_file, 'w');\n      fprintf(fid, [ ...\n        'name: \"testnet\" force_backward: true\\n' ...\n        'layer { type: \"DummyData\" name: \"data\" top: \"data\" top: \"label\"\\n' ...\n        'dummy_data_param { num: 5 channels: 2 height: 3 width: 4\\n' ...\n        '    num: 5 channels: 1 height: 1 width: 1\\n' ...\n        '    data_filler { type: \"gaussian\" std: 1 }\\n' ...\n        '    data_filler { type: \"constant\" } } }\\n' ...\n        'layer { type: \"Convolution\" name: \"conv\" bottom: \"data\" top: \"conv\"\\n' ...\n        '  convolution_param { num_output: 11 kernel_size: 2 pad: 3\\n' ...\n        '    weight_filler { type: \"gaussian\" std: 1 }\\n' ...\n        '    bias_filler { type: \"constant\" value: 2 } }\\n' ...\n        '    param { decay_mult: 1 } param { decay_mult: 0 }\\n' ...\n        '    }\\n' ...\n        'layer { type: \"InnerProduct\" name: \"ip\" bottom: \"conv\" top: \"ip\"\\n' ...\n        '  inner_product_param { num_output: ' num2str(num_output) ...\n        '    weight_filler { type: \"gaussian\" std: 2.5 }\\n' ...\n        '    bias_filler { type: \"constant\" value: -3 } } }\\n' ...\n        'layer { type: \"SoftmaxWithLoss\" name: \"loss\" bottom: \"ip\" bottom: \"label\"\\n' ...\n        '  top: \"loss\" }' ]);\n      fclose(fid);\n    end\n  end\n  methods\n    function self = test_net()\n      self.num_output = 13;\n      self.model_file = caffe.test.test_net.simple_net_file(self.num_output);\n      self.net = caffe.Net(self.model_file, 'train');\n      % also make sure get_solver runs\n      caffe.get_net(self.model_file, 'train');\n      \n      % fill in valid labels\n      self.net.blobs('label').set_data(randi( ...\n        self.num_output - 1, 
self.net.blobs('label').shape));\n      \n      delete(self.model_file);\n    end\n  end\n  methods (Test)\n    function self = test_blob(self)\n      self.net.blobs('data').set_data(10 * ones(self.net.blobs('data').shape));\n      self.verifyEqual(self.net.blobs('data').get_data(), ...\n        10 * ones(self.net.blobs('data').shape, 'single'));\n      self.net.blobs('data').set_diff(-2 * ones(self.net.blobs('data').shape));\n      self.verifyEqual(self.net.blobs('data').get_diff(), ...\n        -2 * ones(self.net.blobs('data').shape, 'single'));\n      original_shape = self.net.blobs('data').shape;\n      self.net.blobs('data').reshape([6 5 4 3 2 1]);\n      self.verifyEqual(self.net.blobs('data').shape, [6 5 4 3 2 1]);\n      self.net.blobs('data').reshape(original_shape);\n      self.net.reshape();\n    end\n    function self = test_layer(self)\n      self.verifyEqual(self.net.params('conv', 1).shape, [2 2 2 11]);\n      self.verifyEqual(self.net.layers('conv').params(2).shape, 11);\n      self.verifyEqual(self.net.layers('conv').type(), 'Convolution');\n    end\n    function test_forward_backward(self)\n      self.net.forward_prefilled();\n      self.net.backward_prefilled();\n    end\n    function test_inputs_outputs(self)\n      self.verifyEqual(self.net.inputs, cell(0, 1))\n      self.verifyEqual(self.net.outputs, {'loss'});\n    end\n    function test_save_and_read(self)\n      weights_file = tempname();\n      self.net.save(weights_file);\n      model_file2 = caffe.test.test_net.simple_net_file(self.num_output);\n      net2 = caffe.Net(model_file2, 'train');\n      net2.copy_from(weights_file);\n      net3 = caffe.Net(model_file2, weights_file, 'train');\n      delete(model_file2);\n      delete(weights_file);\n      for l = 1:length(self.net.layer_vec)\n        for i = 1:length(self.net.layer_vec(l).params)\n          self.verifyEqual(self.net.layer_vec(l).params(i).get_data(), ...\n            net2.layer_vec(l).params(i).get_data());\n          
self.verifyEqual(self.net.layer_vec(l).params(i).get_data(), ...\n            net3.layer_vec(l).params(i).get_data());\n        end\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/+test/test_solver.m",
    "content": "classdef test_solver < matlab.unittest.TestCase\n  \n  properties\n    num_output\n    solver\n  end\n  \n  methods\n    function self = test_solver()\n      self.num_output = 13;\n      model_file = caffe.test.test_net.simple_net_file(self.num_output);\n      solver_file = tempname();\n      \n      fid = fopen(solver_file, 'w');\n      fprintf(fid, [ ...\n        'net: \"'  model_file  '\"\\n' ...\n        'test_iter: 10 test_interval: 10 base_lr: 0.01 momentum: 0.9\\n' ...\n        'weight_decay: 0.0005 lr_policy: \"inv\" gamma: 0.0001 power: 0.75\\n' ...\n        'display: 100 max_iter: 100 snapshot_after_train: false\\n' ]);\n      fclose(fid);\n      \n      self.solver = caffe.Solver(solver_file);\n      % also make sure get_solver runs\n      caffe.get_solver(solver_file);\n      caffe.set_mode_cpu();\n      % fill in valid labels\n      self.solver.net.blobs('label').set_data(randi( ...\n        self.num_output - 1, self.solver.net.blobs('label').shape));\n      self.solver.test_nets(1).blobs('label').set_data(randi( ...\n        self.num_output - 1, self.solver.test_nets(1).blobs('label').shape));\n      \n      delete(solver_file);\n      delete(model_file);\n    end\n  end\n  methods (Test)\n    function test_solve(self)\n      self.verifyEqual(self.solver.iter(), 0)\n      self.solver.step(30);\n      self.verifyEqual(self.solver.iter(), 30)\n      self.solver.solve()\n      self.verifyEqual(self.solver.iter(), 100)\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/Blob.m",
    "content": "classdef Blob < handle\n  % Wrapper class of caffe::Blob in matlab\n  \n  properties (Access = private)\n    hBlob_self\n  end\n  \n  methods\n    function self = Blob(hBlob_blob)\n      CHECK(is_valid_handle(hBlob_blob), 'invalid Blob handle');\n      \n      % setup self handle\n      self.hBlob_self = hBlob_blob;\n    end\n    function shape = shape(self)\n      shape = caffe_('blob_get_shape', self.hBlob_self);\n    end\n    function reshape(self, shape)\n      shape = self.check_and_preprocess_shape(shape);\n      caffe_('blob_reshape', self.hBlob_self, shape);\n    end\n    function data = get_data(self)\n      data = caffe_('blob_get_data', self.hBlob_self);\n    end\n    function set_data(self, data)\n      data = self.check_and_preprocess_data(data);\n      caffe_('blob_set_data', self.hBlob_self, data);\n    end\n    function diff = get_diff(self)\n      diff = caffe_('blob_get_diff', self.hBlob_self);\n    end\n    function set_diff(self, diff)\n      diff = self.check_and_preprocess_data(diff);\n      caffe_('blob_set_diff', self.hBlob_self, diff);\n    end\n  end\n  \n  methods (Access = private)\n    function shape = check_and_preprocess_shape(~, shape)\n      CHECK(isempty(shape) || (isnumeric(shape) && isrow(shape)), ...\n        'shape must be a integer row vector');\n      shape = double(shape);\n    end\n    function data = check_and_preprocess_data(self, data)\n      CHECK(isnumeric(data), 'data or diff must be numeric types');\n      self.check_data_size_matches(data);\n      if ~isa(data, 'single')\n        data = single(data);\n      end\n    end\n    function check_data_size_matches(self, data)\n      % check whether size of data matches shape of this blob\n      % note: matlab arrays always have at least 2 dimensions. 
To compare\n      % shape between size of data and shape of this blob, extend shape of\n      % this blob to have at least 2 dimensions\n      self_shape_extended = self.shape;\n      if isempty(self_shape_extended)\n        % target blob is a scalar (0 dim)\n        self_shape_extended = [1, 1];\n      elseif isscalar(self_shape_extended)\n        % target blob is a vector (1 dim)\n        self_shape_extended = [self_shape_extended, 1];\n      end\n      % Also, matlab cannot have a trailing dimension 1 for ndim > 2, so you\n      % cannot create a 20 x 10 x 1 x 1 array in matlab as it becomes 20 x 10\n      % Extend matlab arrays to have trailing dimension 1 during shape match\n      data_size_extended = ...\n        [size(data), ones(1, length(self_shape_extended) - ndims(data))];\n      is_matched = ...\n        (length(self_shape_extended) == length(data_size_extended)) ...\n        && all(self_shape_extended == data_size_extended);\n      CHECK(is_matched, ...\n        sprintf('%s, input data/diff size: [ %s] vs target blob shape: [ %s]', ...\n        'input data/diff size does not match target blob shape', ...\n        sprintf('%d ', data_size_extended), sprintf('%d ', self_shape_extended)));\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/Layer.m",
    "content": "classdef Layer < handle\n  % Wrapper class of caffe::Layer in matlab\n  \n  properties (Access = private)\n    hLayer_self\n    attributes\n    % attributes fields:\n    %     hBlob_blobs\n  end\n  properties (SetAccess = private)\n    params\n  end\n  \n  methods\n    function self = Layer(hLayer_layer)\n      CHECK(is_valid_handle(hLayer_layer), 'invalid Layer handle');\n      \n      % setup self handle and attributes\n      self.hLayer_self = hLayer_layer;\n      self.attributes = caffe_('layer_get_attr', self.hLayer_self);\n      \n      % setup weights\n      self.params = caffe.Blob.empty();\n      for n = 1:length(self.attributes.hBlob_blobs)\n        self.params(n) = caffe.Blob(self.attributes.hBlob_blobs(n));\n      end\n    end\n    function layer_type = type(self)\n      layer_type = caffe_('layer_get_type', self.hLayer_self);\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/Net.m",
    "content": "classdef Net < handle\n  % Wrapper class of caffe::Net in matlab\n  \n  properties (Access = private)\n    hNet_self\n    attributes\n    % attribute fields\n    %     hLayer_layers\n    %     hBlob_blobs\n    %     input_blob_indices\n    %     output_blob_indices\n    %     layer_names\n    %     blob_names\n  end\n  properties (SetAccess = private)\n    layer_vec\n    blob_vec\n    inputs\n    outputs\n    name2layer_index\n    name2blob_index\n    layer_names\n    blob_names\n  end\n  \n  methods\n    function self = Net(varargin)\n      % decide whether to construct a net from model_file or handle\n      if ~(nargin == 1 && isstruct(varargin{1}))\n        % construct a net from model_file\n        self = caffe.get_net(varargin{:});\n        return\n      end\n      % construct a net from handle\n      hNet_net = varargin{1};\n      CHECK(is_valid_handle(hNet_net), 'invalid Net handle');\n      \n      % setup self handle and attributes\n      self.hNet_self = hNet_net;\n      self.attributes = caffe_('net_get_attr', self.hNet_self);\n      \n      % setup layer_vec\n      self.layer_vec = caffe.Layer.empty();\n      for n = 1:length(self.attributes.hLayer_layers)\n        self.layer_vec(n) = caffe.Layer(self.attributes.hLayer_layers(n));\n      end\n      \n      % setup blob_vec\n      self.blob_vec = caffe.Blob.empty();\n      for n = 1:length(self.attributes.hBlob_blobs);\n        self.blob_vec(n) = caffe.Blob(self.attributes.hBlob_blobs(n));\n      end\n      \n      % setup input and output blob and their names\n      % note: add 1 to indices as matlab is 1-indexed while C++ is 0-indexed\n      self.inputs = ...\n        self.attributes.blob_names(self.attributes.input_blob_indices + 1);\n      self.outputs = ...\n        self.attributes.blob_names(self.attributes.output_blob_indices + 1);\n      \n      % create map objects to map from name to layers and blobs\n      self.name2layer_index = containers.Map(self.attributes.layer_names, 
...\n        1:length(self.attributes.layer_names));\n      self.name2blob_index = containers.Map(self.attributes.blob_names, ...\n        1:length(self.attributes.blob_names));\n      \n      % expose layer_names and blob_names for public read access\n      self.layer_names = self.attributes.layer_names;\n      self.blob_names = self.attributes.blob_names;\n    end\n    function layer = layers(self, layer_name)\n      CHECK(ischar(layer_name), 'layer_name must be a string');\n      layer = self.layer_vec(self.name2layer_index(layer_name));\n    end\n    function blob = blobs(self, blob_name)\n      CHECK(ischar(blob_name), 'blob_name must be a string');\n      blob = self.blob_vec(self.name2blob_index(blob_name));\n    end\n    function blob = params(self, layer_name, blob_index)\n      CHECK(ischar(layer_name), 'layer_name must be a string');\n      CHECK(isscalar(blob_index), 'blob_index must be a scalar');\n      blob = self.layer_vec(self.name2layer_index(layer_name)).params(blob_index);\n    end\n    function forward_prefilled(self)\n      caffe_('net_forward', self.hNet_self);\n    end\n    function backward_prefilled(self)\n      caffe_('net_backward', self.hNet_self);\n    end\n    function res = forward(self, input_data)\n      CHECK(iscell(input_data), 'input_data must be a cell array');\n      CHECK(length(input_data) == length(self.inputs), ...\n        'input data cell length must match input blob number');\n      % copy data to input blobs\n      for n = 1:length(self.inputs)\n        self.blobs(self.inputs{n}).set_data(input_data{n});\n      end\n      self.forward_prefilled();\n      % retrieve data from output blobs\n      res = cell(length(self.outputs), 1);\n      for n = 1:length(self.outputs)\n        res{n} = self.blobs(self.outputs{n}).get_data();\n      end\n    end\n    function res = backward(self, output_diff)\n      CHECK(iscell(output_diff), 'output_diff must be a cell array');\n      CHECK(length(output_diff) == length(self.outputs), 
...\n        'output diff cell length must match output blob number');\n      % copy diff to output blobs\n      for n = 1:length(self.outputs)\n        self.blobs(self.outputs{n}).set_diff(output_diff{n});\n      end\n      self.backward_prefilled();\n      % retrieve diff from input blobs\n      res = cell(length(self.inputs), 1);\n      for n = 1:length(self.inputs)\n        res{n} = self.blobs(self.inputs{n}).get_diff();\n      end\n    end\n    function copy_from(self, weights_file)\n      CHECK(ischar(weights_file), 'weights_file must be a string');\n      CHECK_FILE_EXIST(weights_file);\n      caffe_('net_copy_from', self.hNet_self, weights_file);\n    end\n    function reshape(self)\n      caffe_('net_reshape', self.hNet_self);\n    end\n    function save(self, weights_file)\n      CHECK(ischar(weights_file), 'weights_file must be a string');\n      caffe_('net_save', self.hNet_self, weights_file);\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/Solver.m",
    "content": "classdef Solver < handle\n  % Wrapper class of caffe::SGDSolver in matlab\n  \n  properties (Access = private)\n    hSolver_self\n    attributes\n    % attribute fields\n    %     hNet_net\n    %     hNet_test_nets\n  end\n  properties (SetAccess = private)\n    net\n    test_nets\n  end\n  \n  methods\n    function self = Solver(varargin)\n      % decide whether to construct a solver from solver_file or handle\n      if ~(nargin == 1 && isstruct(varargin{1}))\n        % construct a solver from solver_file\n        self = caffe.get_solver(varargin{:});\n        return\n      end\n      % construct a solver from handle\n      hSolver_solver = varargin{1};\n      CHECK(is_valid_handle(hSolver_solver), 'invalid Solver handle');\n      \n      % setup self handle and attributes\n      self.hSolver_self = hSolver_solver;\n      self.attributes = caffe_('solver_get_attr', self.hSolver_self);\n      \n      % setup net and test_nets\n      self.net = caffe.Net(self.attributes.hNet_net);\n      self.test_nets = caffe.Net.empty();\n      for n = 1:length(self.attributes.hNet_test_nets)\n        self.test_nets(n) = caffe.Net(self.attributes.hNet_test_nets(n));\n      end\n    end\n    function iter = iter(self)\n      iter = caffe_('solver_get_iter', self.hSolver_self);\n    end\n    function restore(self, snapshot_filename)\n      CHECK(ischar(snapshot_filename), 'snapshot_filename must be a string');\n      CHECK_FILE_EXIST(snapshot_filename);\n      caffe_('solver_restore', self.hSolver_self, snapshot_filename);\n    end\n    function solve(self)\n      caffe_('solver_solve', self.hSolver_self);\n    end\n    function step(self, iters)\n      CHECK(isscalar(iters) && iters > 0, 'iters must be positive integer');\n      iters = double(iters);\n      caffe_('solver_step', self.hSolver_self, iters);\n    end\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/get_net.m",
    "content": "function net = get_net(varargin)\n% net = get_net(model_file, phase_name) or\n% net = get_net(model_file, weights_file, phase_name)\n%   Construct a net from model_file, and load weights from weights_file\n%   phase_name can only be 'train' or 'test'\n\nCHECK(nargin == 2 || nargin == 3, ['usage: ' ...\n  'net = get_net(model_file, phase_name) or ' ...\n  'net = get_net(model_file, weights_file, phase_name)']);\nif nargin == 3\n  model_file = varargin{1};\n  weights_file = varargin{2};\n  phase_name = varargin{3};\nelseif nargin == 2\n  model_file = varargin{1};\n  phase_name = varargin{2};\nend\n\nCHECK(ischar(model_file), 'model_file must be a string');\nCHECK(ischar(phase_name), 'phase_name must be a string');\nCHECK_FILE_EXIST(model_file);\nCHECK(strcmp(phase_name, 'train') || strcmp(phase_name, 'test'), ...\n  sprintf('phase_name can only be %strain%s or %stest%s', ...\n  char(39), char(39), char(39), char(39)));\n\n% construct caffe net from model_file\nhNet = caffe_('get_net', model_file, phase_name);\nnet = caffe.Net(hNet);\n\n% load weights from weights_file\nif nargin == 3\n  CHECK(ischar(weights_file), 'weights_file must be a string');\n  CHECK_FILE_EXIST(weights_file);\n  net.copy_from(weights_file);\nend\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/get_solver.m",
    "content": "function solver = get_solver(solver_file)\n% solver = get_solver(solver_file)\n%   Construct a Solver object from solver_file\n\nCHECK(ischar(solver_file), 'solver_file must be a string');\nCHECK_FILE_EXIST(solver_file);\npSolver = caffe_('get_solver', solver_file);\nsolver = caffe.Solver(pSolver);\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/io.m",
    "content": "classdef io\n  % a class for input and output functions\n  \n  methods (Static)\n    function im_data = load_image(im_file)\n      % im_data = load_image(im_file)\n      %   load an image from disk into Caffe-supported data format\n      %   switch channels from RGB to BGR, make width the fastest dimension\n      %   and convert to single\n      %   returns im_data in W x H x C. For colored images, C = 3 in BGR\n      %   channels, and for grayscale images, C = 1\n      CHECK(ischar(im_file), 'im_file must be a string');\n      CHECK_FILE_EXIST(im_file);\n      im_data = imread(im_file);\n      % permute channels from RGB to BGR for colored images\n      if size(im_data, 3) == 3\n        im_data = im_data(:, :, [3, 2, 1]);\n      end\n      % flip width and height to make width the fastest dimension\n      im_data = permute(im_data, [2, 1, 3]);\n      % convert from uint8 to single\n      im_data = single(im_data);\n    end\n    function mean_data = read_mean(mean_proto_file)\n      % mean_data = read_mean(mean_proto_file)\n      %   read image mean data from binaryproto file\n      %   returns mean_data in W x H x C with BGR channels\n      CHECK(ischar(mean_proto_file), 'mean_proto_file must be a string');\n      CHECK_FILE_EXIST(mean_proto_file);\n      mean_data = caffe_('read_mean', mean_proto_file);\n    end\n    function write_mean(mean_data, mean_proto_file)\n      % write_mean(mean_data, mean_proto_file)\n      %   write image mean data to binaryproto file\n      %   mean_data should be W x H x C with BGR channels\n      CHECK(ischar(mean_proto_file), 'mean_proto_file must be a string');\n      CHECK(isa(mean_data, 'single'), 'mean_data must be a SINGLE matrix');\n      caffe_('write_mean', mean_data, mean_proto_file);\n    end   \n  end\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/private/CHECK.m",
    "content": "function CHECK(expr, error_msg)\n\nif ~expr\n  error(error_msg);\nend\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/private/CHECK_FILE_EXIST.m",
    "content": "function CHECK_FILE_EXIST(filename)\n\nif exist(filename, 'file') == 0\n  error('%s does not exist', filename);\nend\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/private/caffe_.cpp",
    "content": "//\n// caffe_.cpp provides wrappers of the caffe::Solver class, caffe::Net class,\n// caffe::Layer class and caffe::Blob class and some caffe::Caffe functions,\n// so that one could easily use Caffe from matlab.\n// Note that for matlab, we will simply use float as the data type.\n\n// Internally, data is stored with dimensions reversed from Caffe's:\n// e.g., if the Caffe blob axes are (num, channels, height, width),\n// the matcaffe data is stored as (width, height, channels, num)\n// where width is the fastest dimension.\n\n#include <fstream>\n#include <sstream>\n#include <string>\n#include <vector>\n\n#include \"mex.h\"\n\n#include \"caffe/caffe.hpp\"\n\n#define MEX_ARGS int nlhs, mxArray **plhs, int nrhs, const mxArray **prhs\n\nusing namespace caffe;  // NOLINT(build/namespaces)\n\n// Do CHECK and throw a Mex error if check fails\ninline void mxCHECK(bool expr, const char* msg) {\n  if (!expr) {\n    mexErrMsgTxt(msg);\n  }\n}\ninline void mxERROR(const char* msg) { mexErrMsgTxt(msg); }\n\n// Check if a file exists and can be opened\nvoid mxCHECK_FILE_EXIST(const char* file) {\n  std::ifstream f(file);\n  if (!f.good()) {\n    f.close();\n    std::string msg(\"Could not open file \");\n    msg += file;\n    mxERROR(msg.c_str());\n  }\n  f.close();\n}\n\n// The pointers to caffe::Solver and caffe::Net instances\nstatic vector<shared_ptr<Solver<float> > > solvers_;\nstatic vector<shared_ptr<Net<float> > > nets_;\n// init_key is generated at the beginning and every time you call reset\nstatic double init_key = static_cast<double>(caffe_rng_rand());\n\n/** -----------------------------------------------------------------\n ** data conversion functions\n **/\n// Enum indicates which blob memory to use\nenum WhichMemory { DATA, DIFF };\n\n// Copy matlab array to Blob data or diff\nstatic void mx_mat_to_blob(const mxArray* mx_mat, Blob<float>* blob,\n    WhichMemory data_or_diff) {\n  mxCHECK(blob->count() == mxGetNumberOfElements(mx_mat),\n      \"number of elements in 
target blob doesn't match that in input mxArray\");\n  const float* mat_mem_ptr = reinterpret_cast<const float*>(mxGetData(mx_mat));\n  float* blob_mem_ptr = NULL;\n  switch (Caffe::mode()) {\n  case Caffe::CPU:\n    blob_mem_ptr = (data_or_diff == DATA ?\n        blob->mutable_cpu_data() : blob->mutable_cpu_diff());\n    break;\n  case Caffe::GPU:\n    blob_mem_ptr = (data_or_diff == DATA ?\n        blob->mutable_gpu_data() : blob->mutable_gpu_diff());\n    break;\n  default:\n    mxERROR(\"Unknown Caffe mode\");\n  }\n  caffe_copy(blob->count(), mat_mem_ptr, blob_mem_ptr);\n}\n\n// Copy Blob data or diff to matlab array\nstatic mxArray* blob_to_mx_mat(const Blob<float>* blob,\n    WhichMemory data_or_diff) {\n  const int num_axes = blob->num_axes();\n  vector<mwSize> dims(num_axes);\n  for (int blob_axis = 0, mat_axis = num_axes - 1; blob_axis < num_axes;\n       ++blob_axis, --mat_axis) {\n    dims[mat_axis] = static_cast<mwSize>(blob->shape(blob_axis));\n  }\n  // matlab array needs to have at least one dimension, convert scalar to 1-dim\n  if (num_axes == 0) {\n    dims.push_back(1);\n  }\n  mxArray* mx_mat =\n      mxCreateNumericArray(dims.size(), dims.data(), mxSINGLE_CLASS, mxREAL);\n  float* mat_mem_ptr = reinterpret_cast<float*>(mxGetData(mx_mat));\n  const float* blob_mem_ptr = NULL;\n  switch (Caffe::mode()) {\n  case Caffe::CPU:\n    blob_mem_ptr = (data_or_diff == DATA ? blob->cpu_data() : blob->cpu_diff());\n    break;\n  case Caffe::GPU:\n    blob_mem_ptr = (data_or_diff == DATA ? 
blob->gpu_data() : blob->gpu_diff());\n    break;\n  default:\n    mxERROR(\"Unknown Caffe mode\");\n  }\n  caffe_copy(blob->count(), blob_mem_ptr, mat_mem_ptr);\n  return mx_mat;\n}\n\n// Convert vector<int> to matlab row vector\nstatic mxArray* int_vec_to_mx_vec(const vector<int>& int_vec) {\n  mxArray* mx_vec = mxCreateDoubleMatrix(int_vec.size(), 1, mxREAL);\n  double* vec_mem_ptr = mxGetPr(mx_vec);\n  for (int i = 0; i < int_vec.size(); i++) {\n    vec_mem_ptr[i] = static_cast<double>(int_vec[i]);\n  }\n  return mx_vec;\n}\n\n// Convert vector<string> to matlab cell vector of strings\nstatic mxArray* str_vec_to_mx_strcell(const vector<std::string>& str_vec) {\n  mxArray* mx_strcell = mxCreateCellMatrix(str_vec.size(), 1);\n  for (int i = 0; i < str_vec.size(); i++) {\n    mxSetCell(mx_strcell, i, mxCreateString(str_vec[i].c_str()));\n  }\n  return mx_strcell;\n}\n\n/** -----------------------------------------------------------------\n ** handle and pointer conversion functions\n ** a handle is a struct array with the following fields\n **   (uint64) ptr      : the pointer to the C++ object\n **   (double) init_key : caffe initialization key\n **/\n// Convert a handle in matlab to a pointer in C++. Check if init_key matches\ntemplate <typename T>\nstatic T* handle_to_ptr(const mxArray* mx_handle) {\n  mxArray* mx_ptr = mxGetField(mx_handle, 0, \"ptr\");\n  mxArray* mx_init_key = mxGetField(mx_handle, 0, \"init_key\");\n  mxCHECK(mxIsUint64(mx_ptr), \"pointer type must be uint64\");\n  mxCHECK(mxGetScalar(mx_init_key) == init_key,\n      \"Could not convert handle to pointer due to invalid init_key. 
\"\n      \"The object might have been cleared.\");\n  return reinterpret_cast<T*>(*reinterpret_cast<uint64_t*>(mxGetData(mx_ptr)));\n}\n\n// Create a handle struct vector, without setting up each handle in it\ntemplate <typename T>\nstatic mxArray* create_handle_vec(int ptr_num) {\n  const int handle_field_num = 2;\n  const char* handle_fields[handle_field_num] = { \"ptr\", \"init_key\" };\n  return mxCreateStructMatrix(ptr_num, 1, handle_field_num, handle_fields);\n}\n\n// Set up a handle in a handle struct vector by its index\ntemplate <typename T>\nstatic void setup_handle(const T* ptr, int index, mxArray* mx_handle_vec) {\n  mxArray* mx_ptr = mxCreateNumericMatrix(1, 1, mxUINT64_CLASS, mxREAL);\n  *reinterpret_cast<uint64_t*>(mxGetData(mx_ptr)) =\n      reinterpret_cast<uint64_t>(ptr);\n  mxSetField(mx_handle_vec, index, \"ptr\", mx_ptr);\n  mxSetField(mx_handle_vec, index, \"init_key\", mxCreateDoubleScalar(init_key));\n}\n\n// Convert a pointer in C++ to a handle in matlab\ntemplate <typename T>\nstatic mxArray* ptr_to_handle(const T* ptr) {\n  mxArray* mx_handle = create_handle_vec<T>(1);\n  setup_handle(ptr, 0, mx_handle);\n  return mx_handle;\n}\n\n// Convert a vector of shared_ptr in C++ to handle struct vector\ntemplate <typename T>\nstatic mxArray* ptr_vec_to_handle_vec(const vector<shared_ptr<T> >& ptr_vec) {\n  mxArray* mx_handle_vec = create_handle_vec<T>(ptr_vec.size());\n  for (int i = 0; i < ptr_vec.size(); i++) {\n    setup_handle(ptr_vec[i].get(), i, mx_handle_vec);\n  }\n  return mx_handle_vec;\n}\n\n/** -----------------------------------------------------------------\n ** matlab command functions: caffe_(api_command, arg1, arg2, ...)\n **/\n// Usage: caffe_('get_solver', solver_file);\nstatic void get_solver(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsChar(prhs[0]),\n      \"Usage: caffe_('get_solver', solver_file)\");\n  char* solver_file = mxArrayToString(prhs[0]);\n  mxCHECK_FILE_EXIST(solver_file);\n  SolverParameter solver_param;\n  
ReadSolverParamsFromTextFileOrDie(solver_file, &solver_param);\n  shared_ptr<Solver<float> > solver(\n      SolverRegistry<float>::CreateSolver(solver_param));\n  solvers_.push_back(solver);\n  plhs[0] = ptr_to_handle<Solver<float> >(solver.get());\n  mxFree(solver_file);\n}\n\n// Usage: caffe_('solver_get_attr', hSolver)\nstatic void solver_get_attr(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('solver_get_attr', hSolver)\");\n  Solver<float>* solver = handle_to_ptr<Solver<float> >(prhs[0]);\n  const int solver_attr_num = 2;\n  const char* solver_attrs[solver_attr_num] = { \"hNet_net\", \"hNet_test_nets\" };\n  mxArray* mx_solver_attr = mxCreateStructMatrix(1, 1, solver_attr_num,\n      solver_attrs);\n  mxSetField(mx_solver_attr, 0, \"hNet_net\",\n      ptr_to_handle<Net<float> >(solver->net().get()));\n  mxSetField(mx_solver_attr, 0, \"hNet_test_nets\",\n      ptr_vec_to_handle_vec<Net<float> >(solver->test_nets()));\n  plhs[0] = mx_solver_attr;\n}\n\n// Usage: caffe_('solver_get_iter', hSolver)\nstatic void solver_get_iter(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('solver_get_iter', hSolver)\");\n  Solver<float>* solver = handle_to_ptr<Solver<float> >(prhs[0]);\n  plhs[0] = mxCreateDoubleScalar(solver->iter());\n}\n\n// Usage: caffe_('solver_restore', hSolver, snapshot_file)\nstatic void solver_restore(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsChar(prhs[1]),\n      \"Usage: caffe_('solver_restore', hSolver, snapshot_file)\");\n  Solver<float>* solver = handle_to_ptr<Solver<float> >(prhs[0]);\n  char* snapshot_file = mxArrayToString(prhs[1]);\n  mxCHECK_FILE_EXIST(snapshot_file);\n  solver->Restore(snapshot_file);\n  mxFree(snapshot_file);\n}\n\n// Usage: caffe_('solver_solve', hSolver)\nstatic void solver_solve(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('solver_solve', hSolver)\");\n  Solver<float>* solver = 
handle_to_ptr<Solver<float> >(prhs[0]);\n  solver->Solve();\n}\n\n// Usage: caffe_('solver_step', hSolver, iters)\nstatic void solver_step(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsDouble(prhs[1]),\n      \"Usage: caffe_('solver_step', hSolver, iters)\");\n  Solver<float>* solver = handle_to_ptr<Solver<float> >(prhs[0]);\n  int iters = mxGetScalar(prhs[1]);\n  solver->Step(iters);\n}\n\n// Usage: caffe_('get_net', model_file, phase_name)\nstatic void get_net(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsChar(prhs[0]) && mxIsChar(prhs[1]),\n      \"Usage: caffe_('get_net', model_file, phase_name)\");\n  char* model_file = mxArrayToString(prhs[0]);\n  char* phase_name = mxArrayToString(prhs[1]);\n  mxCHECK_FILE_EXIST(model_file);\n  Phase phase;\n  if (strcmp(phase_name, \"train\") == 0) {\n      phase = TRAIN;\n  } else if (strcmp(phase_name, \"test\") == 0) {\n      phase = TEST;\n  } else {\n    mxERROR(\"Unknown phase\");\n  }\n  shared_ptr<Net<float> > net(new caffe::Net<float>(model_file, phase));\n  nets_.push_back(net);\n  plhs[0] = ptr_to_handle<Net<float> >(net.get());\n  mxFree(model_file);\n  mxFree(phase_name);\n}\n\n// Usage: caffe_('net_get_attr', hNet)\nstatic void net_get_attr(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('net_get_attr', hNet)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  const int net_attr_num = 6;\n  const char* net_attrs[net_attr_num] = { \"hLayer_layers\", \"hBlob_blobs\",\n      \"input_blob_indices\", \"output_blob_indices\", \"layer_names\", \"blob_names\"};\n  mxArray* mx_net_attr = mxCreateStructMatrix(1, 1, net_attr_num,\n      net_attrs);\n  mxSetField(mx_net_attr, 0, \"hLayer_layers\",\n      ptr_vec_to_handle_vec<Layer<float> >(net->layers()));\n  mxSetField(mx_net_attr, 0, \"hBlob_blobs\",\n      ptr_vec_to_handle_vec<Blob<float> >(net->blobs()));\n  mxSetField(mx_net_attr, 0, \"input_blob_indices\",\n      
int_vec_to_mx_vec(net->input_blob_indices()));\n  mxSetField(mx_net_attr, 0, \"output_blob_indices\",\n      int_vec_to_mx_vec(net->output_blob_indices()));\n  mxSetField(mx_net_attr, 0, \"layer_names\",\n      str_vec_to_mx_strcell(net->layer_names()));\n  mxSetField(mx_net_attr, 0, \"blob_names\",\n      str_vec_to_mx_strcell(net->blob_names()));\n  plhs[0] = mx_net_attr;\n}\n\n// Usage: caffe_('net_forward', hNet)\nstatic void net_forward(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('net_forward', hNet)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  net->ForwardPrefilled();\n}\n\n// Usage: caffe_('net_backward', hNet)\nstatic void net_backward(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('net_backward', hNet)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  net->Backward();\n}\n\n// Usage: caffe_('net_copy_from', hNet, weights_file)\nstatic void net_copy_from(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsChar(prhs[1]),\n      \"Usage: caffe_('net_copy_from', hNet, weights_file)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  char* weights_file = mxArrayToString(prhs[1]);\n  mxCHECK_FILE_EXIST(weights_file);\n  net->CopyTrainedLayersFrom(weights_file);\n  mxFree(weights_file);\n}\n\n// Usage: caffe_('net_reshape', hNet)\nstatic void net_reshape(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('net_reshape', hNet)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  net->Reshape();\n}\n\n// Usage: caffe_('net_save', hNet, save_file)\nstatic void net_save(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsChar(prhs[1]),\n      \"Usage: caffe_('net_save', hNet, save_file)\");\n  Net<float>* net = handle_to_ptr<Net<float> >(prhs[0]);\n  char* weights_file = mxArrayToString(prhs[1]);\n  NetParameter net_param;\n  net->ToProto(&net_param, false);\n  
WriteProtoToBinaryFile(net_param, weights_file);\n  mxFree(weights_file);\n}\n\n// Usage: caffe_('layer_get_attr', hLayer)\nstatic void layer_get_attr(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('layer_get_attr', hLayer)\");\n  Layer<float>* layer = handle_to_ptr<Layer<float> >(prhs[0]);\n  const int layer_attr_num = 1;\n  const char* layer_attrs[layer_attr_num] = { \"hBlob_blobs\" };\n  mxArray* mx_layer_attr = mxCreateStructMatrix(1, 1, layer_attr_num,\n      layer_attrs);\n  mxSetField(mx_layer_attr, 0, \"hBlob_blobs\",\n      ptr_vec_to_handle_vec<Blob<float> >(layer->blobs()));\n  plhs[0] = mx_layer_attr;\n}\n\n// Usage: caffe_('layer_get_type', hLayer)\nstatic void layer_get_type(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('layer_get_type', hLayer)\");\n  Layer<float>* layer = handle_to_ptr<Layer<float> >(prhs[0]);\n  plhs[0] = mxCreateString(layer->type());\n}\n\n// Usage: caffe_('blob_get_shape', hBlob)\nstatic void blob_get_shape(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('blob_get_shape', hBlob)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  const int num_axes = blob->num_axes();\n  mxArray* mx_shape = mxCreateDoubleMatrix(1, num_axes, mxREAL);\n  double* shape_mem_mtr = mxGetPr(mx_shape);\n  for (int blob_axis = 0, mat_axis = num_axes - 1; blob_axis < num_axes;\n       ++blob_axis, --mat_axis) {\n    shape_mem_mtr[mat_axis] = static_cast<double>(blob->shape(blob_axis));\n  }\n  plhs[0] = mx_shape;\n}\n\n// Usage: caffe_('blob_reshape', hBlob, new_shape)\nstatic void blob_reshape(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsDouble(prhs[1]),\n      \"Usage: caffe_('blob_reshape', hBlob, new_shape)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  const mxArray* mx_shape = prhs[1];\n  double* shape_mem_mtr = mxGetPr(mx_shape);\n  const int num_axes = mxGetNumberOfElements(mx_shape);\n  
vector<int> blob_shape(num_axes);\n  for (int blob_axis = 0, mat_axis = num_axes - 1; blob_axis < num_axes;\n       ++blob_axis, --mat_axis) {\n    blob_shape[blob_axis] = static_cast<int>(shape_mem_mtr[mat_axis]);\n  }\n  blob->Reshape(blob_shape);\n}\n\n// Usage: caffe_('blob_get_data', hBlob)\nstatic void blob_get_data(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('blob_get_data', hBlob)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  plhs[0] = blob_to_mx_mat(blob, DATA);\n}\n\n// Usage: caffe_('blob_set_data', hBlob, new_data)\nstatic void blob_set_data(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsSingle(prhs[1]),\n      \"Usage: caffe_('blob_set_data', hBlob, new_data)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  mx_mat_to_blob(prhs[1], blob, DATA);\n}\n\n// Usage: caffe_('blob_get_diff', hBlob)\nstatic void blob_get_diff(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsStruct(prhs[0]),\n      \"Usage: caffe_('blob_get_diff', hBlob)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  plhs[0] = blob_to_mx_mat(blob, DIFF);\n}\n\n// Usage: caffe_('blob_set_diff', hBlob, new_diff)\nstatic void blob_set_diff(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsStruct(prhs[0]) && mxIsSingle(prhs[1]),\n      \"Usage: caffe_('blob_set_diff', hBlob, new_diff)\");\n  Blob<float>* blob = handle_to_ptr<Blob<float> >(prhs[0]);\n  mx_mat_to_blob(prhs[1], blob, DIFF);\n}\n\n// Usage: caffe_('set_mode_cpu')\nstatic void set_mode_cpu(MEX_ARGS) {\n  mxCHECK(nrhs == 0, \"Usage: caffe_('set_mode_cpu')\");\n  Caffe::set_mode(Caffe::CPU);\n}\n\n// Usage: caffe_('set_mode_gpu')\nstatic void set_mode_gpu(MEX_ARGS) {\n  mxCHECK(nrhs == 0, \"Usage: caffe_('set_mode_gpu')\");\n  Caffe::set_mode(Caffe::GPU);\n}\n\n// Usage: caffe_('set_device', device_id)\nstatic void set_device(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsDouble(prhs[0]),\n      \"Usage: caffe_('set_device', device_id)\");\n  int 
device_id = static_cast<int>(mxGetScalar(prhs[0]));\n  Caffe::SetDevice(device_id);\n}\n\n// Usage: caffe_('get_init_key')\nstatic void get_init_key(MEX_ARGS) {\n  mxCHECK(nrhs == 0, \"Usage: caffe_('get_init_key')\");\n  plhs[0] = mxCreateDoubleScalar(init_key);\n}\n\n// Usage: caffe_('reset')\nstatic void reset(MEX_ARGS) {\n  mxCHECK(nrhs == 0, \"Usage: caffe_('reset')\");\n  // Clear solvers and stand-alone nets\n  mexPrintf(\"Cleared %d solvers and %d stand-alone nets\\n\",\n      solvers_.size(), nets_.size());\n  solvers_.clear();\n  nets_.clear();\n  // Generate a new init_key, so that handles created before become invalid\n  init_key = static_cast<double>(caffe_rng_rand());\n}\n\n// Usage: caffe_('read_mean', mean_proto_file)\nstatic void read_mean(MEX_ARGS) {\n  mxCHECK(nrhs == 1 && mxIsChar(prhs[0]),\n      \"Usage: caffe_('read_mean', mean_proto_file)\");\n  char* mean_proto_file = mxArrayToString(prhs[0]);\n  mxCHECK_FILE_EXIST(mean_proto_file);\n  Blob<float> data_mean;\n  BlobProto blob_proto;\n  bool result = ReadProtoFromBinaryFile(mean_proto_file, &blob_proto);\n  mxCHECK(result, \"Could not read your mean file\");\n  data_mean.FromProto(blob_proto);\n  plhs[0] = blob_to_mx_mat(&data_mean, DATA);\n  mxFree(mean_proto_file);\n}\n\n// Usage: caffe_('write_mean', mean_data, mean_proto_file)\nstatic void write_mean(MEX_ARGS) {\n  mxCHECK(nrhs == 2 && mxIsSingle(prhs[0]) && mxIsChar(prhs[1]),\n      \"Usage: caffe_('write_mean', mean_data, mean_proto_file)\");\n  char* mean_proto_file = mxArrayToString(prhs[1]);\n  int ndims = mxGetNumberOfDimensions(prhs[0]);\n  mxCHECK(ndims >= 2 && ndims <= 3, \"mean_data must have 2 or 3 dimensions\");\n  const mwSize *dims = mxGetDimensions(prhs[0]);\n  int width = dims[0];\n  int height = dims[1];\n  int channels;\n  if (ndims == 3)\n    channels = dims[2];\n  else\n    channels = 1;\n  Blob<float> data_mean(1, channels, height, width);\n  mx_mat_to_blob(prhs[0], &data_mean, DATA);\n  BlobProto blob_proto;\n  
data_mean.ToProto(&blob_proto, false);\n  WriteProtoToBinaryFile(blob_proto, mean_proto_file);\n  mxFree(mean_proto_file);\n}\n\n// Usage: caffe_('version')\nstatic void version(MEX_ARGS) {\n  mxCHECK(nrhs == 0, \"Usage: caffe_('version')\");\n  // Return version string\n  plhs[0] = mxCreateString(AS_STRING(CAFFE_VERSION));\n}\n\n/** -----------------------------------------------------------------\n ** Available commands.\n **/\nstruct handler_registry {\n  string cmd;\n  void (*func)(MEX_ARGS);\n};\n\nstatic handler_registry handlers[] = {\n  // Public API functions\n  { \"get_solver\",         get_solver      },\n  { \"solver_get_attr\",    solver_get_attr },\n  { \"solver_get_iter\",    solver_get_iter },\n  { \"solver_restore\",     solver_restore  },\n  { \"solver_solve\",       solver_solve    },\n  { \"solver_step\",        solver_step     },\n  { \"get_net\",            get_net         },\n  { \"net_get_attr\",       net_get_attr    },\n  { \"net_forward\",        net_forward     },\n  { \"net_backward\",       net_backward    },\n  { \"net_copy_from\",      net_copy_from   },\n  { \"net_reshape\",        net_reshape     },\n  { \"net_save\",           net_save        },\n  { \"layer_get_attr\",     layer_get_attr  },\n  { \"layer_get_type\",     layer_get_type  },\n  { \"blob_get_shape\",     blob_get_shape  },\n  { \"blob_reshape\",       blob_reshape    },\n  { \"blob_get_data\",      blob_get_data   },\n  { \"blob_set_data\",      blob_set_data   },\n  { \"blob_get_diff\",      blob_get_diff   },\n  { \"blob_set_diff\",      blob_set_diff   },\n  { \"set_mode_cpu\",       set_mode_cpu    },\n  { \"set_mode_gpu\",       set_mode_gpu    },\n  { \"set_device\",         set_device      },\n  { \"get_init_key\",       get_init_key    },\n  { \"reset\",              reset           },\n  { \"read_mean\",          read_mean       },\n  { \"write_mean\",         write_mean      },\n  { \"version\",            version         },\n  // The end.\n  { \"END\",     
           NULL            },\n};\n\n/** -----------------------------------------------------------------\n ** matlab entry point.\n **/\n// Usage: caffe_(api_command, arg1, arg2, ...)\nvoid mexFunction(MEX_ARGS) {\n  mexLock();  // Avoid clearing the mex file.\n  mxCHECK(nrhs > 0, \"Usage: caffe_(api_command, arg1, arg2, ...)\");\n  // Handle input command\n  char* cmd = mxArrayToString(prhs[0]);\n  bool dispatched = false;\n  // Dispatch to cmd handler\n  for (int i = 0; handlers[i].func != NULL; i++) {\n    if (handlers[i].cmd.compare(cmd) == 0) {\n      handlers[i].func(nlhs, plhs, nrhs-1, prhs+1);\n      dispatched = true;\n      break;\n    }\n  }\n  if (!dispatched) {\n    ostringstream error_msg;\n    error_msg << \"Unknown command '\" << cmd << \"'\";\n    mxERROR(error_msg.str().c_str());\n  }\n  mxFree(cmd);\n}\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/private/is_valid_handle.m",
    "content": "function valid = is_valid_handle(hObj)\n% valid = is_valid_handle(hObj) or is_valid_handle('get_new_init_key')\n%   Check if a handle is valid (has the right data type and init_key matches)\n%   Use is_valid_handle('get_new_init_key') to get new init_key from C++;\n\n% a handle is a struct array with the following fields\n%   (uint64) ptr      : the pointer to the C++ object\n%   (double) init_key : caffe initialization key\n\npersistent init_key;\nif isempty(init_key)\n  init_key = caffe_('get_init_key');\nend\n\n% is_valid_handle('get_new_init_key') to get new init_key from C++;\nif ischar(hObj) && strcmp(hObj, 'get_new_init_key')\n  init_key = caffe_('get_init_key');\n  return\nelse\n  % check whether data types are correct and init_key matches\n  valid = isstruct(hObj) ...\n    && isscalar(hObj.ptr) && isa(hObj.ptr, 'uint64') ...\n    && isscalar(hObj.init_key) && isa(hObj.init_key, 'double') ...\n    && hObj.init_key == init_key;\nend\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/reset_all.m",
    "content": "function reset_all()\n% reset_all()\n%   clear all solvers and stand-alone nets and reset Caffe to initial status\n\ncaffe_('reset');\nis_valid_handle('get_new_init_key');\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/run_tests.m",
    "content": "function results = run_tests()\n% results = run_tests()\n%   run all tests in this caffe matlab wrapper package\n\n% use CPU for testing\ncaffe.set_mode_cpu();\n\n% reset caffe before testing\ncaffe.reset_all();\n\n% put all test cases here\nresults = [...\n  run(caffe.test.test_net) ...\n  run(caffe.test.test_solver) ...\n  run(caffe.test.test_io) ];\n\n% reset caffe after testing\ncaffe.reset_all();\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/set_device.m",
    "content": "function set_device(device_id)\n% set_device(device_id)\n%   set Caffe's GPU device ID\n\nCHECK(isscalar(device_id) && device_id >= 0, ...\n  'device_id must be a non-negative integer');\ndevice_id = double(device_id);\n\ncaffe_('set_device', device_id);\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/set_mode_cpu.m",
    "content": "function set_mode_cpu()\n% set_mode_cpu()\n%   set Caffe to CPU mode\n\ncaffe_('set_mode_cpu');\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/set_mode_gpu.m",
    "content": "function set_mode_gpu()\n% set_mode_gpu()\n%   set Caffe to GPU mode\n\ncaffe_('set_mode_gpu');\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/+caffe/version.m",
    "content": "function version_str = version()\n% version()\n%   show Caffe's version.\n\nversion_str = caffe_('version');\n\nend\n"
  },
  {
    "path": "caffe-fpn/matlab/CMakeLists.txt",
    "content": "# Builds Matlab (or Octave) interface. In case of Matlab, caffe must be\n# compiled as a shared library. Octave can link a static or shared caffe library.\n# To install Octave run: sudo apt-get install liboctave-dev\n\nif(NOT BUILD_matlab)\n  return()\nendif()\n\nif(HAVE_MATLAB AND Octave_compiler)\n  set(build_using ${Matlab_build_mex_using})\nelseif(HAVE_MATLAB AND NOT Octave_compiler)\n  set(build_using \"Matlab\")\nelseif(NOT HAVE_MATLAB AND Octave_compiler)\n  set(build_using \"Octave\")\nelse()\n  return()\nendif()\n\nif(NOT BUILD_SHARED_LIBS AND build_using MATCHES Matlab)\n  message(FATAL_ERROR \"Matlab MEX interface (with default mex options file) can only be built if caffe is compiled as a shared library. Please enable 'BUILD_SHARED_LIBS' in CMake. Alternatively, you can switch to the Octave compiler.\")\nendif()\n\n# helper function to set the proper mex file extension\nfunction(caffe_fetch_and_set_proper_mexext mexfile_variable)\n  execute_process(COMMAND ${Matlab_mexext} OUTPUT_STRIP_TRAILING_WHITESPACE RESULT_VARIABLE res OUTPUT_VARIABLE ext)\n  if(res MATCHES 0)\n    get_filename_component(folder  ${${mexfile_variable}} PATH)\n    get_filename_component(name_we ${${mexfile_variable}} NAME_WE)\n    set(${mexfile_variable} ${folder}/${name_we}.${ext} PARENT_SCOPE)\n  endif()\nendfunction()\n\n# global settings\nfile(GLOB Matlab_srcs +caffe/private/caffe_.cpp)\nset(Matlab_caffe_mex ${PROJECT_SOURCE_DIR}/matlab/+caffe/private/caffe_.mex)\n\ncaffe_get_current_cflags(cflags)\ncaffe_parse_linker_libs(Caffe_LINKER_LIBS folders libflags macos_frameworks)\nset(folders $<TARGET_LINKER_FILE_DIR:caffe> ${folders})\n\n# prepare linker flag lists\nstring(REPLACE \";\" \";-L\" link_folders \"-L${folders}\")\nstring(REPLACE \";\" \":\"  rpath_folders   \"${folders}\")\n\nif(build_using MATCHES \"Matlab\")\n  set(libflags -lcaffe${Caffe_POSTFIX} ${libflags}) # Matlab R2014a complains about -Wl,--whole-archive\n\n  caffe_fetch_and_set_proper_mexext(Matlab_caffe_mex)\n  
add_custom_command(OUTPUT ${Matlab_caffe_mex} COMMAND ${Matlab_mex}\n      ARGS -output ${Matlab_caffe_mex} ${Matlab_srcs} ${cflags} ${link_folders} ${libflags}\n      DEPENDS caffe COMMENT \"Building Matlab interface: ${Matlab_caffe_mex}\" VERBATIM)\n  add_custom_target(matlab ALL DEPENDS ${Matlab_caffe_mex} SOURCES ${Matlab_srcs})\n\nelseif(build_using MATCHES \"Octave\")\n\n  if(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"Clang\")\n    set(libflags -Wl,-force_load,$<TARGET_LINKER_FILE:caffe> ${libflags})\n  elseif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    set(libflags -Wl,--whole-archive -lcaffe${Caffe_POSTFIX} -Wl,--no-whole-archive ${libflags})\n  endif()\n\n  add_custom_command(OUTPUT ${Matlab_caffe_mex} COMMAND ${Octave_compiler}\n      ARGS --mex -o ${Matlab_caffe_mex} ${Matlab_srcs} ${cflags} ${link_folders} ${libflags} -Wl,-rpath,${rpath_folders}\n      DEPENDS caffe COMMENT \"Building Octave interface: ${Matlab_caffe_mex}\" VERBATIM)\n\n  add_custom_target(octave ALL DEPENDS ${Matlab_caffe_mex} SOURCES ${Matlab_srcs})\nendif()\n\n# ---[ Install\nfile(GLOB mfiles caffe/*.m)\ninstall(FILES ${mfiles} ${Matlab_caffe_mex} DESTINATION matlab)\n\n"
  },
  {
    "path": "caffe-fpn/matlab/demo/classification_demo.m",
    "content": "function [scores, maxlabel] = classification_demo(im, use_gpu)\n% [scores, maxlabel] = classification_demo(im, use_gpu)\n%\n% Image classification demo using BVLC CaffeNet.\n%\n% IMPORTANT: before you run this demo, you should download BVLC CaffeNet\n% from Model Zoo (http://caffe.berkeleyvision.org/model_zoo.html)\n%\n% ****************************************************************************\n% For detailed documentation and usage on Caffe's Matlab interface, please\n% refer to Caffe Interface Tutorial at\n% http://caffe.berkeleyvision.org/tutorial/interfaces.html#matlab\n% ****************************************************************************\n%\n% input\n%   im       color image as uint8 HxWx3\n%   use_gpu  1 to use the GPU, 0 to use the CPU\n%\n% output\n%   scores   1000-dimensional ILSVRC score vector\n%   maxlabel the label of the highest score\n%\n% You may need to do the following before you start matlab:\n%  $ export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:/usr/local/cuda-5.5/lib64\n%  $ export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6\n% Or the equivalent based on where things are installed on your system\n%\n% Usage:\n%  im = imread('../../examples/images/cat.jpg');\n%  scores = classification_demo(im, 1);\n%  [score, class] = max(scores);\n% Five things to be aware of:\n%   caffe uses row-major order\n%   matlab uses column-major order\n%   caffe uses BGR color channel order\n%   matlab uses RGB color channel order\n%   images need to have the data mean subtracted\n\n% Data coming in from matlab needs to be in the order\n%   [width, height, channels, images]\n% where width is the fastest dimension.\n% Here is the rough matlab for putting image data into the correct\n% format in W x H x C with BGR channels:\n%   % permute channels from RGB to BGR\n%   im_data = im(:, :, [3, 2, 1]);\n%   % flip width and height to make width the fastest dimension\n%   im_data = permute(im_data, [2, 1, 3]);\n%   % convert from uint8 
to single\n%   im_data = single(im_data);\n%   % reshape to a fixed size (e.g., 227x227).\n%   im_data = imresize(im_data, [IMAGE_DIM IMAGE_DIM], 'bilinear');\n%   % subtract mean_data (already in W x H x C with BGR channels)\n%   im_data = im_data - mean_data;\n\n% If you have multiple images, cat them with cat(4, ...)\n\n% Add caffe/matlab to your Matlab search PATH to use matcaffe\nif exist('../+caffe', 'dir')\n  addpath('..');\nelse\n  error('Please run this demo from caffe/matlab/demo');\nend\n\n% Set caffe mode\nif exist('use_gpu', 'var') && use_gpu\n  caffe.set_mode_gpu();\n  gpu_id = 0;  % we will use the first gpu in this demo\n  caffe.set_device(gpu_id);\nelse\n  caffe.set_mode_cpu();\nend\n\n% Initialize the network using BVLC CaffeNet for image classification\n% Weights (parameter) file needs to be downloaded from Model Zoo.\nmodel_dir = '../../models/bvlc_reference_caffenet/';\nnet_model = [model_dir 'deploy.prototxt'];\nnet_weights = [model_dir 'bvlc_reference_caffenet.caffemodel'];\nphase = 'test'; % run with phase test (so that dropout isn't applied)\nif ~exist(net_weights, 'file')\n  error('Please download CaffeNet from Model Zoo before you run this demo');\nend\n\n% Initialize a network\nnet = caffe.Net(net_model, net_weights, phase);\n\nif nargin < 1\n  % For demo purposes we will use the cat image\n  fprintf('using caffe/examples/images/cat.jpg as input image\\n');\n  im = imread('../../examples/images/cat.jpg');\nend\n\n% prepare oversampled input\n% input_data is Height x Width x Channel x Num\ntic;\ninput_data = {prepare_image(im)};\ntoc;\n\n% do forward pass to get scores\n% scores are now Channels x Num, where Channels == 1000\ntic;\n% The net forward function. 
It takes in a cell array of N-D arrays\n% (where N == 4 here) containing data of input blob(s) and outputs a cell\n% array containing data from output blob(s)\nscores = net.forward(input_data);\ntoc;\n\nscores = scores{1};\nscores = mean(scores, 2);  % take average scores over 10 crops\n\n[~, maxlabel] = max(scores);\n\n% call caffe.reset_all() to reset caffe\ncaffe.reset_all();\n\n% ------------------------------------------------------------------------\nfunction crops_data = prepare_image(im)\n% ------------------------------------------------------------------------\n% caffe/matlab/+caffe/imagenet/ilsvrc_2012_mean.mat contains mean_data that\n% is already in W x H x C with BGR channels\nd = load('../+caffe/imagenet/ilsvrc_2012_mean.mat');\nmean_data = d.mean_data;\nIMAGE_DIM = 256;\nCROPPED_DIM = 227;\n\n% Convert an image returned by Matlab's imread to im_data in caffe's data\n% format: W x H x C with BGR channels\nim_data = im(:, :, [3, 2, 1]);  % permute channels from RGB to BGR\nim_data = permute(im_data, [2, 1, 3]);  % flip width and height\nim_data = single(im_data);  % convert from uint8 to single\nim_data = imresize(im_data, [IMAGE_DIM IMAGE_DIM], 'bilinear');  % resize im_data\nim_data = im_data - mean_data;  % subtract mean_data (already in W x H x C, BGR)\n\n% oversample (4 corners, center, and their x-axis flips)\ncrops_data = zeros(CROPPED_DIM, CROPPED_DIM, 3, 10, 'single');\nindices = [0 IMAGE_DIM-CROPPED_DIM] + 1;\nn = 1;\nfor i = indices\n  for j = indices\n    crops_data(:, :, :, n) = im_data(i:i+CROPPED_DIM-1, j:j+CROPPED_DIM-1, :);\n    crops_data(:, :, :, n+5) = crops_data(end:-1:1, :, :, n);\n    n = n + 1;\n  end\nend\ncenter = floor(indices(2) / 2) + 1;\ncrops_data(:,:,:,5) = ...\n  im_data(center:center+CROPPED_DIM-1,center:center+CROPPED_DIM-1,:);\ncrops_data(:,:,:,10) = crops_data(end:-1:1, :, :, 5);\n"
  },
  {
    "path": "caffe-fpn/matlab/hdf5creation/.gitignore",
    "content": "*.h5\nlist.txt\n"
  },
  {
    "path": "caffe-fpn/matlab/hdf5creation/demo.m",
    "content": "%% WRITING TO HDF5\nfilename='trial.h5';\n\nnum_total_samples=10000;\n% to simulate data being read from disk / generated etc.\ndata_disk=rand(5,5,1,num_total_samples); \nlabel_disk=rand(10,num_total_samples); \n\nchunksz=100;\ncreated_flag=false;\ntotalct=0;\nfor batchno=1:num_total_samples/chunksz\n  fprintf('batch no. %d\\n', batchno);\n  last_read=(batchno-1)*chunksz;\n\n  % to simulate maximum data to be held in memory before dumping to hdf5 file \n  batchdata=data_disk(:,:,1,last_read+1:last_read+chunksz); \n  batchlabs=label_disk(:,last_read+1:last_read+chunksz);\n\n  % store to hdf5\n  startloc=struct('dat',[1,1,1,totalct+1], 'lab', [1,totalct+1]);\n  curr_dat_sz=store2hdf5(filename, batchdata, batchlabs, ~created_flag, startloc, chunksz); \n  created_flag=true;% flag set so that file is created only once\n  totalct=curr_dat_sz(end);% updated dataset size (#samples)\nend\n\n% display structure of the stored HDF5 file\nh5disp(filename);\n\n%% READING FROM HDF5\n\n% Read data and labels for samples #1000 to 1999\ndata_rd=h5read(filename, '/data', [1 1 1 1000], [5, 5, 1, 1000]);\nlabel_rd=h5read(filename, '/label', [1 1000], [10, 1000]);\nfprintf('Testing ...\\n');\ntry \n  assert(isequal(data_rd, single(data_disk(:,:,:,1000:1999))), 'Data do not match');\n  assert(isequal(label_rd, single(label_disk(:,1000:1999))), 'Labels do not match');\n\n  fprintf('Success!\\n');\ncatch err\n  fprintf('Test failed ...\\n');\n  getReport(err)\nend\n\n%delete(filename);\n\n% CREATE list.txt containing filename, to be used as source for HDF5_DATA_LAYER\nFILE=fopen('list.txt', 'w');\nfprintf(FILE, '%s', filename);\nfclose(FILE);\nfprintf('HDF5 filename listed in %s \\n', 'list.txt');\n\n% NOTE: In net definition prototxt, use list.txt as input to HDF5_DATA as: \n% layer {\n%   name: \"data\"\n%   type: \"HDF5Data\"\n%   top: \"data\"\n%   top: \"labelvec\"\n%   hdf5_data_param {\n%     source: \"/path/to/list.txt\"\n%     batch_size: 64\n%   }\n% }\n"
  },
  {
    "path": "caffe-fpn/matlab/hdf5creation/store2hdf5.m",
    "content": "function [curr_dat_sz, curr_lab_sz] = store2hdf5(filename, data, labels, create, startloc, chunksz)  \n  % *data* is W*H*C*N matrix of images should be normalized (e.g. to lie between 0 and 1) beforehand\n  % *label* is D*N matrix of labels (D labels per sample) \n  % *create* [0/1] specifies whether to create file newly or to append to previously created file, useful to store information in batches when a dataset is too big to be held in memory  (default: 1)\n  % *startloc* (point at which to start writing data). By default, \n  % if create=1 (create mode), startloc.data=[1 1 1 1], and startloc.lab=[1 1]; \n  % if create=0 (append mode), startloc.data=[1 1 1 K+1], and startloc.lab = [1 K+1]; where K is the current number of samples stored in the HDF\n  % chunksz (used only in create mode), specifies number of samples to be stored per chunk (see HDF5 documentation on chunking) for creating HDF5 files with unbounded maximum size - TLDR; higher chunk sizes allow faster read-write operations \n\n  % verify that format is right\n  dat_dims=size(data);\n  lab_dims=size(labels);\n  num_samples=dat_dims(end);\n\n  assert(lab_dims(end)==num_samples, 'Number of samples should be matched between data and labels');\n\n  if ~exist('create','var')\n    create=true;\n  end\n\n  \n  if create\n    %fprintf('Creating dataset with %d samples\\n', num_samples);\n    if ~exist('chunksz', 'var')\n      chunksz=1000;\n    end\n    if exist(filename, 'file')\n      fprintf('Warning: replacing existing file %s \\n', filename);\n      delete(filename);\n    end      \n    h5create(filename, '/data', [dat_dims(1:end-1) Inf], 'Datatype', 'single', 'ChunkSize', [dat_dims(1:end-1) chunksz]); % width, height, channels, number \n    h5create(filename, '/label', [lab_dims(1:end-1) Inf], 'Datatype', 'single', 'ChunkSize', [lab_dims(1:end-1) chunksz]); % width, height, channels, number \n    if ~exist('startloc','var') \n      startloc.dat=[ones(1,length(dat_dims)-1), 1];\n      
startloc.lab=[ones(1,length(lab_dims)-1), 1];\n    end \n  else  % append mode\n    if ~exist('startloc','var')\n      info=h5info(filename);\n      prev_dat_sz=info.Datasets(1).Dataspace.Size;\n      prev_lab_sz=info.Datasets(2).Dataspace.Size;\n      assert(all(prev_dat_sz(1:end-1)==dat_dims(1:end-1)), 'Data dimensions must match existing dimensions in dataset');\n      assert(all(prev_lab_sz(1:end-1)==lab_dims(1:end-1)), 'Label dimensions must match existing dimensions in dataset');\n      startloc.dat=[ones(1,length(dat_dims)-1), prev_dat_sz(end)+1];\n      startloc.lab=[ones(1,length(lab_dims)-1), prev_lab_sz(end)+1];\n    end\n  end\n\n  if ~isempty(data)\n    h5write(filename, '/data', single(data), startloc.dat, size(data));\n    h5write(filename, '/label', single(labels), startloc.lab, size(labels));  \n  end\n\n  if nargout\n    info=h5info(filename);\n    curr_dat_sz=info.Datasets(1).Dataspace.Size;\n    curr_lab_sz=info.Datasets(2).Dataspace.Size;\n  end\nend\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_alexnet/deploy.prototxt",
    "content": "name: \"AlexNet\"\ninput: \"data\"\ninput_shape {\n  dim: 10\n  dim: 3\n  dim: 227\n  dim: 227\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"conv1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"norm1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"conv2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"norm2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 
1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n  }\n}\nlayer {\n  name: 
\"prob\"\n  type: \"Softmax\"\n  bottom: \"fc8\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_alexnet/readme.md",
    "content": "---\nname: BVLC AlexNet Model\ncaffemodel: bvlc_alexnet.caffemodel\ncaffemodel_url: http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel\nlicense: unrestricted\nsha1: 9116a64c0fbe4459d18f4bb6b56d647b63920377\ncaffe_commit: 709dc15af4a06bebda027c1eb2b3f3e3375d5077\n---\n\nThis model is a replication of the model described in the [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) publication.\n\nDifferences:\n- not training with the relighting data-augmentation;\n- initializing non-zero biases to 0.1 instead of 1 (found necessary for training, as initialization to 1 gave flat loss).\n\nThe bundled model is the iteration 360,000 snapshot.\nThe best validation performance during training was iteration 358,000 with validation accuracy 57.258% and loss 1.83948.\nThis model obtains a top-1 accuracy 57.1% and a top-5 accuracy 80.2% on the validation set, using just the center crop.\n(Using the average of 10 crops, (4 + 1 center) * 2 mirror, should obtain a bit higher accuracy.)\n\nThis model was trained by Evan Shelhamer @shelhamer\n\n## License\n\nThis model is released for unrestricted use.\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_alexnet/solver.prototxt",
    "content": "net: \"models/bvlc_alexnet/train_val.prototxt\"\ntest_iter: 1000\ntest_interval: 1000\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 100000\ndisplay: 20\nmax_iter: 450000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"models/bvlc_alexnet/caffe_alexnet_train\"\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_alexnet/train_val.prototxt",
    "content": "name: \"AlexNet\"\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mirror: true\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_train_lmdb\"\n    batch_size: 256\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mirror: false\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_val_lmdb\"\n    batch_size: 50\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"conv1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"norm1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 
0.1\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"conv2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"norm2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.1\n    }\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.1\n    }\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: 
\"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.1\n    }\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.1\n    }\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc8\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc8\"\n  bottom: \"label\"\n  top: 
\"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_googlenet/deploy.prototxt",
    "content": "name: \"GoogleNet\"\ninput: \"data\"\ninput_shape {\n  dim: 10\n  dim: 3\n  dim: 224\n  dim: 224\n}\nlayer {\n  name: \"conv1/7x7_s2\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1/7x7_s2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 3\n    kernel_size: 7\n    stride: 2\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv1/relu_7x7\"\n  type: \"ReLU\"\n  bottom: \"conv1/7x7_s2\"\n  top: \"conv1/7x7_s2\"\n}\nlayer {\n  name: \"pool1/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"conv1/7x7_s2\"\n  top: \"pool1/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"pool1/norm1\"\n  type: \"LRN\"\n  bottom: \"pool1/3x3_s2\"\n  top: \"pool1/norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool1/norm1\"\n  top: \"conv2/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv2/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"conv2/3x3_reduce\"\n  top: \"conv2/3x3_reduce\"\n}\nlayer {\n  name: \"conv2/3x3\"\n  type: \"Convolution\"\n  bottom: \"conv2/3x3_reduce\"\n  top: \"conv2/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n   
   value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv2/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"conv2/3x3\"\n  top: \"conv2/3x3\"\n}\nlayer {\n  name: \"conv2/norm2\"\n  type: \"LRN\"\n  bottom: \"conv2/3x3\"\n  top: \"conv2/norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool2/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"conv2/norm2\"\n  top: \"pool2/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_3a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/1x1\"\n  top: \"inception_3a/1x1\"\n}\nlayer {\n  name: \"inception_3a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/3x3_reduce\"\n  top: \"inception_3a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_3a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/3x3_reduce\"\n  top: \"inception_3a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: 
\"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/3x3\"\n  top: \"inception_3a/3x3\"\n}\nlayer {\n  name: \"inception_3a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 16\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/5x5_reduce\"\n  top: \"inception_3a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_3a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/5x5_reduce\"\n  top: \"inception_3a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/5x5\"\n  top: \"inception_3a/5x5\"\n}\nlayer {\n  name: \"inception_3a/pool\"\n  type: \"Pooling\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_3a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/pool\"\n  top: \"inception_3a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    
bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/pool_proj\"\n  top: \"inception_3a/pool_proj\"\n}\nlayer {\n  name: \"inception_3a/output\"\n  type: \"Concat\"\n  bottom: \"inception_3a/1x1\"\n  bottom: \"inception_3a/3x3\"\n  bottom: \"inception_3a/5x5\"\n  bottom: \"inception_3a/pool_proj\"\n  top: \"inception_3a/output\"\n}\nlayer {\n  name: \"inception_3b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/1x1\"\n  top: \"inception_3b/1x1\"\n}\nlayer {\n  name: \"inception_3b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/3x3_reduce\"\n  top: \"inception_3b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_3b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/3x3_reduce\"\n  top: \"inception_3b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n   
 bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/3x3\"\n  top: \"inception_3b/3x3\"\n}\nlayer {\n  name: \"inception_3b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/5x5_reduce\"\n  top: \"inception_3b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_3b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/5x5_reduce\"\n  top: \"inception_3b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/5x5\"\n  top: \"inception_3b/5x5\"\n}\nlayer {\n  name: \"inception_3b/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_3b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/pool\"\n  top: \"inception_3b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: 
\"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/pool_proj\"\n  top: \"inception_3b/pool_proj\"\n}\nlayer {\n  name: \"inception_3b/output\"\n  type: \"Concat\"\n  bottom: \"inception_3b/1x1\"\n  bottom: \"inception_3b/3x3\"\n  bottom: \"inception_3b/5x5\"\n  bottom: \"inception_3b/pool_proj\"\n  top: \"inception_3b/output\"\n}\nlayer {\n  name: \"pool3/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"inception_3b/output\"\n  top: \"pool3/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_4a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/1x1\"\n  top: \"inception_4a/1x1\"\n}\nlayer {\n  name: \"inception_4a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/3x3_reduce\"\n  top: \"inception_4a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/3x3_reduce\"\n  top: \"inception_4a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  
}\n  convolution_param {\n    num_output: 208\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/3x3\"\n  top: \"inception_4a/3x3\"\n}\nlayer {\n  name: \"inception_4a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 16\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/5x5_reduce\"\n  top: \"inception_4a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/5x5_reduce\"\n  top: \"inception_4a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 48\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/5x5\"\n  top: \"inception_4a/5x5\"\n}\nlayer {\n  name: \"inception_4a/pool\"\n  type: \"Pooling\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/pool\"\n  top: \"inception_4a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n  
  num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/pool_proj\"\n  top: \"inception_4a/pool_proj\"\n}\nlayer {\n  name: \"inception_4a/output\"\n  type: \"Concat\"\n  bottom: \"inception_4a/1x1\"\n  bottom: \"inception_4a/3x3\"\n  bottom: \"inception_4a/5x5\"\n  bottom: \"inception_4a/pool_proj\"\n  top: \"inception_4a/output\"\n}\nlayer {\n  name: \"inception_4b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/1x1\"\n  top: \"inception_4b/1x1\"\n}\nlayer {\n  name: \"inception_4b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 112\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/3x3_reduce\"\n  top: \"inception_4b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/3x3_reduce\"\n  top: \"inception_4b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    
num_output: 224\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/3x3\"\n  top: \"inception_4b/3x3\"\n}\nlayer {\n  name: \"inception_4b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 24\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/5x5_reduce\"\n  top: \"inception_4b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/5x5_reduce\"\n  top: \"inception_4b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/5x5\"\n  top: \"inception_4b/5x5\"\n}\nlayer {\n  name: \"inception_4b/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/pool\"\n  top: \"inception_4b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 
64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/pool_proj\"\n  top: \"inception_4b/pool_proj\"\n}\nlayer {\n  name: \"inception_4b/output\"\n  type: \"Concat\"\n  bottom: \"inception_4b/1x1\"\n  bottom: \"inception_4b/3x3\"\n  bottom: \"inception_4b/5x5\"\n  bottom: \"inception_4b/pool_proj\"\n  top: \"inception_4b/output\"\n}\nlayer {\n  name: \"inception_4c/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/1x1\"\n  top: \"inception_4c/1x1\"\n}\nlayer {\n  name: \"inception_4c/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/3x3_reduce\"\n  top: \"inception_4c/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4c/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/3x3_reduce\"\n  top: \"inception_4c/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    
pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/3x3\"\n  top: \"inception_4c/3x3\"\n}\nlayer {\n  name: \"inception_4c/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 24\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/5x5_reduce\"\n  top: \"inception_4c/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4c/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/5x5_reduce\"\n  top: \"inception_4c/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/5x5\"\n  top: \"inception_4c/5x5\"\n}\nlayer {\n  name: \"inception_4c/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4c/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/pool\"\n  top: \"inception_4c/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 
1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/pool_proj\"\n  top: \"inception_4c/pool_proj\"\n}\nlayer {\n  name: \"inception_4c/output\"\n  type: \"Concat\"\n  bottom: \"inception_4c/1x1\"\n  bottom: \"inception_4c/3x3\"\n  bottom: \"inception_4c/5x5\"\n  bottom: \"inception_4c/pool_proj\"\n  top: \"inception_4c/output\"\n}\nlayer {\n  name: \"inception_4d/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 112\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/1x1\"\n  top: \"inception_4d/1x1\"\n}\nlayer {\n  name: \"inception_4d/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 144\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/3x3_reduce\"\n  top: \"inception_4d/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4d/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/3x3_reduce\"\n  top: \"inception_4d/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 288\n    pad: 1\n    kernel_size: 
3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/3x3\"\n  top: \"inception_4d/3x3\"\n}\nlayer {\n  name: \"inception_4d/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/5x5_reduce\"\n  top: \"inception_4d/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4d/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/5x5_reduce\"\n  top: \"inception_4d/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/5x5\"\n  top: \"inception_4d/5x5\"\n}\nlayer {\n  name: \"inception_4d/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4d/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/pool\"\n  top: \"inception_4d/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n   
   type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/pool_proj\"\n  top: \"inception_4d/pool_proj\"\n}\nlayer {\n  name: \"inception_4d/output\"\n  type: \"Concat\"\n  bottom: \"inception_4d/1x1\"\n  bottom: \"inception_4d/3x3\"\n  bottom: \"inception_4d/5x5\"\n  bottom: \"inception_4d/pool_proj\"\n  top: \"inception_4d/output\"\n}\nlayer {\n  name: \"inception_4e/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/1x1\"\n  top: \"inception_4e/1x1\"\n}\nlayer {\n  name: \"inception_4e/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/3x3_reduce\"\n  top: \"inception_4e/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4e/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/3x3_reduce\"\n  top: \"inception_4e/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 320\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n   
   type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/3x3\"\n  top: \"inception_4e/3x3\"\n}\nlayer {\n  name: \"inception_4e/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/5x5_reduce\"\n  top: \"inception_4e/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4e/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/5x5_reduce\"\n  top: \"inception_4e/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/5x5\"\n  top: \"inception_4e/5x5\"\n}\nlayer {\n  name: \"inception_4e/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4e/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/pool\"\n  top: \"inception_4e/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    
  std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/pool_proj\"\n  top: \"inception_4e/pool_proj\"\n}\nlayer {\n  name: \"inception_4e/output\"\n  type: \"Concat\"\n  bottom: \"inception_4e/1x1\"\n  bottom: \"inception_4e/3x3\"\n  bottom: \"inception_4e/5x5\"\n  bottom: \"inception_4e/pool_proj\"\n  top: \"inception_4e/output\"\n}\nlayer {\n  name: \"pool4/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"inception_4e/output\"\n  top: \"pool4/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_5a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/1x1\"\n  top: \"inception_5a/1x1\"\n}\nlayer {\n  name: \"inception_5a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/3x3_reduce\"\n  top: \"inception_5a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_5a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/3x3_reduce\"\n  top: \"inception_5a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  
}\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 320\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/3x3\"\n  top: \"inception_5a/3x3\"\n}\nlayer {\n  name: \"inception_5a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/5x5_reduce\"\n  top: \"inception_5a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_5a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/5x5_reduce\"\n  top: \"inception_5a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/5x5\"\n  top: \"inception_5a/5x5\"\n}\nlayer {\n  name: \"inception_5a/pool\"\n  type: \"Pooling\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_5a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/pool\"\n  top: \"inception_5a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 
2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/pool_proj\"\n  top: \"inception_5a/pool_proj\"\n}\nlayer {\n  name: \"inception_5a/output\"\n  type: \"Concat\"\n  bottom: \"inception_5a/1x1\"\n  bottom: \"inception_5a/3x3\"\n  bottom: \"inception_5a/5x5\"\n  bottom: \"inception_5a/pool_proj\"\n  top: \"inception_5a/output\"\n}\nlayer {\n  name: \"inception_5b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/1x1\"\n  top: \"inception_5b/1x1\"\n}\nlayer {\n  name: \"inception_5b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.09\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/3x3_reduce\"\n  top: \"inception_5b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_5b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/3x3_reduce\"\n  top: \"inception_5b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    
decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/3x3\"\n  top: \"inception_5b/3x3\"\n}\nlayer {\n  name: \"inception_5b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 48\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.2\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/5x5_reduce\"\n  top: \"inception_5b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_5b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/5x5_reduce\"\n  top: \"inception_5b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n      std: 0.03\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/5x5\"\n  top: \"inception_5b/5x5\"\n}\nlayer {\n  name: \"inception_5b/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_5b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/pool\"\n  top: \"inception_5b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 
0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n      std: 0.1\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/pool_proj\"\n  top: \"inception_5b/pool_proj\"\n}\nlayer {\n  name: \"inception_5b/output\"\n  type: \"Concat\"\n  bottom: \"inception_5b/1x1\"\n  bottom: \"inception_5b/3x3\"\n  bottom: \"inception_5b/5x5\"\n  bottom: \"inception_5b/pool_proj\"\n  top: \"inception_5b/output\"\n}\nlayer {\n  name: \"pool5/7x7_s1\"\n  type: \"Pooling\"\n  bottom: \"inception_5b/output\"\n  top: \"pool5/7x7_s1\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 7\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool5/drop_7x7_s1\"\n  type: \"Dropout\"\n  bottom: \"pool5/7x7_s1\"\n  top: \"pool5/7x7_s1\"\n  dropout_param {\n    dropout_ratio: 0.4\n  }\n}\nlayer {\n  name: \"loss3/classifier\"\n  type: \"InnerProduct\"\n  bottom: \"pool5/7x7_s1\"\n  top: \"loss3/classifier\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"loss3/classifier\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_googlenet/quick_solver.prototxt",
    "content": "net: \"models/bvlc_googlenet/train_val.prototxt\"\ntest_iter: 1000\ntest_interval: 4000\ntest_initialization: false\ndisplay: 40\naverage_loss: 40\nbase_lr: 0.01\nlr_policy: \"poly\"\npower: 0.5\nmax_iter: 2400000\nmomentum: 0.9\nweight_decay: 0.0002\nsnapshot: 40000\nsnapshot_prefix: \"models/bvlc_googlenet/bvlc_googlenet_quick\"\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_googlenet/readme.md",
    "content": "---\nname: BVLC GoogleNet Model\ncaffemodel: bvlc_googlenet.caffemodel\ncaffemodel_url: http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel\nlicense: unrestricted\nsha1: 405fc5acd08a3bb12de8ee5e23a96bec22f08204\ncaffe_commit: bc614d1bd91896e3faceaf40b23b72dab47d44f5\n---\n\nThis model is a replication of the model described in the [GoogleNet](http://arxiv.org/abs/1409.4842) publication. We would like to thank Christian Szegedy for all his help in the replication of the GoogleNet model.\n\nDifferences:\n- not training with the relighting data-augmentation;\n- not training with the scale or aspect-ratio data-augmentation;\n- uses \"xavier\" to initialize the weights instead of \"gaussian\";\n- quick_solver.prototxt uses a different learning rate decay policy than the original solver.prototxt, which allows much faster training (60 epochs vs. 250 epochs);\n\nThe bundled model is the iteration 2,400,000 snapshot (60 epochs) using quick_solver.prototxt.\n\nThis bundled model obtains a top-1 accuracy of 68.7% (31.3% error) and a top-5 accuracy of 88.9% (11.1% error) on the validation set, using just the center crop.\n(Using the average of 10 crops, (4 + 1 center) * 2 mirror, should obtain slightly higher accuracy.)\n\nTimings for bvlc_googlenet with cuDNN using batch_size: 128 on a K40c:\n - Average Forward pass: 562.841 ms.\n - Average Backward pass: 1123.84 ms.\n - Average Forward-Backward: 1688.8 ms.\n\nThis model was trained by Sergio Guadarrama @sguada.\n\n## License\n\nThis model is released for unrestricted use.\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_googlenet/solver.prototxt",
    "content": "net: \"models/bvlc_googlenet/train_val.prototxt\"\ntest_iter: 1000\ntest_interval: 4000\ntest_initialization: false\ndisplay: 40\naverage_loss: 40\nbase_lr: 0.01\nlr_policy: \"step\"\nstepsize: 320000\ngamma: 0.96\nmax_iter: 10000000\nmomentum: 0.9\nweight_decay: 0.0002\nsnapshot: 40000\nsnapshot_prefix: \"models/bvlc_googlenet/bvlc_googlenet\"\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_googlenet/train_val.prototxt",
    "content": "name: \"GoogleNet\"\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mirror: true\n    crop_size: 224\n    mean_value: 104\n    mean_value: 117\n    mean_value: 123\n  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_train_lmdb\"\n    batch_size: 32\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mirror: false\n    crop_size: 224\n    mean_value: 104\n    mean_value: 117\n    mean_value: 123\n  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_val_lmdb\"\n    batch_size: 50\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1/7x7_s2\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1/7x7_s2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 3\n    kernel_size: 7\n    stride: 2\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv1/relu_7x7\"\n  type: \"ReLU\"\n  bottom: \"conv1/7x7_s2\"\n  top: \"conv1/7x7_s2\"\n}\nlayer {\n  name: \"pool1/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"conv1/7x7_s2\"\n  top: \"pool1/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"pool1/norm1\"\n  type: \"LRN\"\n  bottom: \"pool1/3x3_s2\"\n  top: \"pool1/norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool1/norm1\"\n  top: \"conv2/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n  
  bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv2/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"conv2/3x3_reduce\"\n  top: \"conv2/3x3_reduce\"\n}\nlayer {\n  name: \"conv2/3x3\"\n  type: \"Convolution\"\n  bottom: \"conv2/3x3_reduce\"\n  top: \"conv2/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"conv2/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"conv2/3x3\"\n  top: \"conv2/3x3\"\n}\nlayer {\n  name: \"conv2/norm2\"\n  type: \"LRN\"\n  bottom: \"conv2/3x3\"\n  top: \"conv2/norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"pool2/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"conv2/norm2\"\n  top: \"pool2/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_3a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/1x1\"\n  top: \"inception_3a/1x1\"\n}\nlayer {\n  name: \"inception_3a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler 
{\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/3x3_reduce\"\n  top: \"inception_3a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_3a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/3x3_reduce\"\n  top: \"inception_3a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/3x3\"\n  top: \"inception_3a/3x3\"\n}\nlayer {\n  name: \"inception_3a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool2/3x3_s2\"\n  top: \"inception_3a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 16\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/5x5_reduce\"\n  top: \"inception_3a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_3a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/5x5_reduce\"\n  top: \"inception_3a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/5x5\"\n  top: \"inception_3a/5x5\"\n}\nlayer {\n  name: \"inception_3a/pool\"\n  type: \"Pooling\"\n  bottom: 
\"pool2/3x3_s2\"\n  top: \"inception_3a/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_3a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/pool\"\n  top: \"inception_3a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_3a/pool_proj\"\n  top: \"inception_3a/pool_proj\"\n}\nlayer {\n  name: \"inception_3a/output\"\n  type: \"Concat\"\n  bottom: \"inception_3a/1x1\"\n  bottom: \"inception_3a/3x3\"\n  bottom: \"inception_3a/5x5\"\n  bottom: \"inception_3a/pool_proj\"\n  top: \"inception_3a/output\"\n}\nlayer {\n  name: \"inception_3b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/1x1\"\n  top: \"inception_3b/1x1\"\n}\nlayer {\n  name: \"inception_3b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_3x3_reduce\"\n  type: 
\"ReLU\"\n  bottom: \"inception_3b/3x3_reduce\"\n  top: \"inception_3b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_3b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/3x3_reduce\"\n  top: \"inception_3b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/3x3\"\n  top: \"inception_3b/3x3\"\n}\nlayer {\n  name: \"inception_3b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/5x5_reduce\"\n  top: \"inception_3b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_3b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/5x5_reduce\"\n  top: \"inception_3b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/5x5\"\n  top: \"inception_3b/5x5\"\n}\nlayer {\n  name: \"inception_3b/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_3a/output\"\n  top: \"inception_3b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n  
  pad: 1\n  }\n}\nlayer {\n  name: \"inception_3b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_3b/pool\"\n  top: \"inception_3b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_3b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_3b/pool_proj\"\n  top: \"inception_3b/pool_proj\"\n}\nlayer {\n  name: \"inception_3b/output\"\n  type: \"Concat\"\n  bottom: \"inception_3b/1x1\"\n  bottom: \"inception_3b/3x3\"\n  bottom: \"inception_3b/5x5\"\n  bottom: \"inception_3b/pool_proj\"\n  top: \"inception_3b/output\"\n}\nlayer {\n  name: \"pool3/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"inception_3b/output\"\n  top: \"pool3/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_4a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/1x1\"\n  top: \"inception_4a/1x1\"\n}\nlayer {\n  name: \"inception_4a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer 
{\n  name: \"inception_4a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/3x3_reduce\"\n  top: \"inception_4a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/3x3_reduce\"\n  top: \"inception_4a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 208\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/3x3\"\n  top: \"inception_4a/3x3\"\n}\nlayer {\n  name: \"inception_4a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 16\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/5x5_reduce\"\n  top: \"inception_4a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/5x5_reduce\"\n  top: \"inception_4a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 48\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/5x5\"\n  top: \"inception_4a/5x5\"\n}\nlayer {\n  name: \"inception_4a/pool\"\n  type: \"Pooling\"\n  bottom: \"pool3/3x3_s2\"\n  top: \"inception_4a/pool\"\n  pooling_param {\n    pool: 
MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/pool\"\n  top: \"inception_4a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4a/pool_proj\"\n  top: \"inception_4a/pool_proj\"\n}\nlayer {\n  name: \"inception_4a/output\"\n  type: \"Concat\"\n  bottom: \"inception_4a/1x1\"\n  bottom: \"inception_4a/3x3\"\n  bottom: \"inception_4a/5x5\"\n  bottom: \"inception_4a/pool_proj\"\n  top: \"inception_4a/output\"\n}\nlayer {\n  name: \"loss1/ave_pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4a/output\"\n  top: \"loss1/ave_pool\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 5\n    stride: 3\n  }\n}\nlayer {\n  name: \"loss1/conv\"\n  type: \"Convolution\"\n  bottom: \"loss1/ave_pool\"\n  top: \"loss1/conv\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"loss1/relu_conv\"\n  type: \"ReLU\"\n  bottom: \"loss1/conv\"\n  top: \"loss1/conv\"\n}\nlayer {\n  name: \"loss1/fc\"\n  type: \"InnerProduct\"\n  bottom: \"loss1/conv\"\n  top: \"loss1/fc\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1024\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"loss1/relu_fc\"\n 
 type: \"ReLU\"\n  bottom: \"loss1/fc\"\n  top: \"loss1/fc\"\n}\nlayer {\n  name: \"loss1/drop_fc\"\n  type: \"Dropout\"\n  bottom: \"loss1/fc\"\n  top: \"loss1/fc\"\n  dropout_param {\n    dropout_ratio: 0.7\n  }\n}\nlayer {\n  name: \"loss1/classifier\"\n  type: \"InnerProduct\"\n  bottom: \"loss1/fc\"\n  top: \"loss1/classifier\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss1/loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"loss1/classifier\"\n  bottom: \"label\"\n  top: \"loss1/loss1\"\n  loss_weight: 0.3\n}\nlayer {\n  name: \"loss1/top-1\"\n  type: \"Accuracy\"\n  bottom: \"loss1/classifier\"\n  bottom: \"label\"\n  top: \"loss1/top-1\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss1/top-5\"\n  type: \"Accuracy\"\n  bottom: \"loss1/classifier\"\n  bottom: \"label\"\n  top: \"loss1/top-5\"\n  include {\n    phase: TEST\n  }\n  accuracy_param {\n    top_k: 5\n  }\n}\nlayer {\n  name: \"inception_4b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/1x1\"\n  top: \"inception_4b/1x1\"\n}\nlayer {\n  name: \"inception_4b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 112\n    
kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/3x3_reduce\"\n  top: \"inception_4b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/3x3_reduce\"\n  top: \"inception_4b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 224\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/3x3\"\n  top: \"inception_4b/3x3\"\n}\nlayer {\n  name: \"inception_4b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 24\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/5x5_reduce\"\n  top: \"inception_4b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/5x5_reduce\"\n  top: \"inception_4b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/5x5\"\n  top: 
\"inception_4b/5x5\"\n}\nlayer {\n  name: \"inception_4b/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4a/output\"\n  top: \"inception_4b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/pool\"\n  top: \"inception_4b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4b/pool_proj\"\n  top: \"inception_4b/pool_proj\"\n}\nlayer {\n  name: \"inception_4b/output\"\n  type: \"Concat\"\n  bottom: \"inception_4b/1x1\"\n  bottom: \"inception_4b/3x3\"\n  bottom: \"inception_4b/5x5\"\n  bottom: \"inception_4b/pool_proj\"\n  top: \"inception_4b/output\"\n}\nlayer {\n  name: \"inception_4c/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/1x1\"\n  top: \"inception_4c/1x1\"\n}\nlayer {\n  name: \"inception_4c/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: 
\"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/3x3_reduce\"\n  top: \"inception_4c/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4c/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/3x3_reduce\"\n  top: \"inception_4c/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/3x3\"\n  top: \"inception_4c/3x3\"\n}\nlayer {\n  name: \"inception_4c/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4b/output\"\n  top: \"inception_4c/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 24\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/5x5_reduce\"\n  top: \"inception_4c/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4c/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/5x5_reduce\"\n  top: \"inception_4c/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/5x5\"\n  top: \"inception_4c/5x5\"\n}\nlayer {\n  name: \"inception_4c/pool\"\n  type: \"Pooling\"\n  bottom: 
\"inception_4b/output\"\n  top: \"inception_4c/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4c/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/pool\"\n  top: \"inception_4c/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4c/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4c/pool_proj\"\n  top: \"inception_4c/pool_proj\"\n}\nlayer {\n  name: \"inception_4c/output\"\n  type: \"Concat\"\n  bottom: \"inception_4c/1x1\"\n  bottom: \"inception_4c/3x3\"\n  bottom: \"inception_4c/5x5\"\n  bottom: \"inception_4c/pool_proj\"\n  top: \"inception_4c/output\"\n}\nlayer {\n  name: \"inception_4d/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 112\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/1x1\"\n  top: \"inception_4d/1x1\"\n}\nlayer {\n  name: \"inception_4d/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 144\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_3x3_reduce\"\n  type: 
\"ReLU\"\n  bottom: \"inception_4d/3x3_reduce\"\n  top: \"inception_4d/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4d/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/3x3_reduce\"\n  top: \"inception_4d/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 288\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/3x3\"\n  top: \"inception_4d/3x3\"\n}\nlayer {\n  name: \"inception_4d/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/5x5_reduce\"\n  top: \"inception_4d/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4d/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/5x5_reduce\"\n  top: \"inception_4d/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/5x5\"\n  top: \"inception_4d/5x5\"\n}\nlayer {\n  name: \"inception_4d/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4c/output\"\n  top: \"inception_4d/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n  
  pad: 1\n  }\n}\nlayer {\n  name: \"inception_4d/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/pool\"\n  top: \"inception_4d/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4d/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4d/pool_proj\"\n  top: \"inception_4d/pool_proj\"\n}\nlayer {\n  name: \"inception_4d/output\"\n  type: \"Concat\"\n  bottom: \"inception_4d/1x1\"\n  bottom: \"inception_4d/3x3\"\n  bottom: \"inception_4d/5x5\"\n  bottom: \"inception_4d/pool_proj\"\n  top: \"inception_4d/output\"\n}\nlayer {\n  name: \"loss2/ave_pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4d/output\"\n  top: \"loss2/ave_pool\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 5\n    stride: 3\n  }\n}\nlayer {\n  name: \"loss2/conv\"\n  type: \"Convolution\"\n  bottom: \"loss2/ave_pool\"\n  top: \"loss2/conv\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"loss2/relu_conv\"\n  type: \"ReLU\"\n  bottom: \"loss2/conv\"\n  top: \"loss2/conv\"\n}\nlayer {\n  name: \"loss2/fc\"\n  type: \"InnerProduct\"\n  bottom: \"loss2/conv\"\n  top: \"loss2/fc\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1024\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"loss2/relu_fc\"\n  type: \"ReLU\"\n  bottom: \"loss2/fc\"\n 
 top: \"loss2/fc\"\n}\nlayer {\n  name: \"loss2/drop_fc\"\n  type: \"Dropout\"\n  bottom: \"loss2/fc\"\n  top: \"loss2/fc\"\n  dropout_param {\n    dropout_ratio: 0.7\n  }\n}\nlayer {\n  name: \"loss2/classifier\"\n  type: \"InnerProduct\"\n  bottom: \"loss2/fc\"\n  top: \"loss2/classifier\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss2/loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"loss2/classifier\"\n  bottom: \"label\"\n  top: \"loss2/loss1\"\n  loss_weight: 0.3\n}\nlayer {\n  name: \"loss2/top-1\"\n  type: \"Accuracy\"\n  bottom: \"loss2/classifier\"\n  bottom: \"label\"\n  top: \"loss2/top-1\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss2/top-5\"\n  type: \"Accuracy\"\n  bottom: \"loss2/classifier\"\n  bottom: \"label\"\n  top: \"loss2/top-5\"\n  include {\n    phase: TEST\n  }\n  accuracy_param {\n    top_k: 5\n  }\n}\nlayer {\n  name: \"inception_4e/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/1x1\"\n  top: \"inception_4e/1x1\"\n}\nlayer {\n  name: \"inception_4e/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      
type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/3x3_reduce\"\n  top: \"inception_4e/3x3_reduce\"\n}\nlayer {\n  name: \"inception_4e/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/3x3_reduce\"\n  top: \"inception_4e/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 320\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/3x3\"\n  top: \"inception_4e/3x3\"\n}\nlayer {\n  name: \"inception_4e/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/5x5_reduce\"\n  top: \"inception_4e/5x5_reduce\"\n}\nlayer {\n  name: \"inception_4e/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/5x5_reduce\"\n  top: \"inception_4e/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/5x5\"\n  top: \"inception_4e/5x5\"\n}\nlayer {\n  name: 
\"inception_4e/pool\"\n  type: \"Pooling\"\n  bottom: \"inception_4d/output\"\n  top: \"inception_4e/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_4e/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_4e/pool\"\n  top: \"inception_4e/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_4e/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_4e/pool_proj\"\n  top: \"inception_4e/pool_proj\"\n}\nlayer {\n  name: \"inception_4e/output\"\n  type: \"Concat\"\n  bottom: \"inception_4e/1x1\"\n  bottom: \"inception_4e/3x3\"\n  bottom: \"inception_4e/5x5\"\n  bottom: \"inception_4e/pool_proj\"\n  top: \"inception_4e/output\"\n}\nlayer {\n  name: \"pool4/3x3_s2\"\n  type: \"Pooling\"\n  bottom: \"inception_4e/output\"\n  top: \"pool4/3x3_s2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"inception_5a/1x1\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/1x1\"\n  top: \"inception_5a/1x1\"\n}\nlayer {\n  name: \"inception_5a/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param 
{\n    num_output: 160\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/3x3_reduce\"\n  top: \"inception_5a/3x3_reduce\"\n}\nlayer {\n  name: \"inception_5a/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/3x3_reduce\"\n  top: \"inception_5a/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 320\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/3x3\"\n  top: \"inception_5a/3x3\"\n}\nlayer {\n  name: \"inception_5a/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 32\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/5x5_reduce\"\n  top: \"inception_5a/5x5_reduce\"\n}\nlayer {\n  name: \"inception_5a/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/5x5_reduce\"\n  top: \"inception_5a/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/5x5\"\n  
top: \"inception_5a/5x5\"\n}\nlayer {\n  name: \"inception_5a/pool\"\n  type: \"Pooling\"\n  bottom: \"pool4/3x3_s2\"\n  top: \"inception_5a/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_5a/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/pool\"\n  top: \"inception_5a/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5a/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_5a/pool_proj\"\n  top: \"inception_5a/pool_proj\"\n}\nlayer {\n  name: \"inception_5a/output\"\n  type: \"Concat\"\n  bottom: \"inception_5a/1x1\"\n  bottom: \"inception_5a/3x3\"\n  bottom: \"inception_5a/5x5\"\n  bottom: \"inception_5a/pool_proj\"\n  top: \"inception_5a/output\"\n}\nlayer {\n  name: \"inception_5b/1x1\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/1x1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_1x1\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/1x1\"\n  top: \"inception_5b/1x1\"\n}\nlayer {\n  name: \"inception_5b/3x3_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/3x3_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 192\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: 
\"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_3x3_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/3x3_reduce\"\n  top: \"inception_5b/3x3_reduce\"\n}\nlayer {\n  name: \"inception_5b/3x3\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/3x3_reduce\"\n  top: \"inception_5b/3x3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_3x3\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/3x3\"\n  top: \"inception_5b/3x3\"\n}\nlayer {\n  name: \"inception_5b/5x5_reduce\"\n  type: \"Convolution\"\n  bottom: \"inception_5a/output\"\n  top: \"inception_5b/5x5_reduce\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 48\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_5x5_reduce\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/5x5_reduce\"\n  top: \"inception_5b/5x5_reduce\"\n}\nlayer {\n  name: \"inception_5b/5x5\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/5x5_reduce\"\n  top: \"inception_5b/5x5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 2\n    kernel_size: 5\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_5x5\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/5x5\"\n  top: \"inception_5b/5x5\"\n}\nlayer {\n  name: \"inception_5b/pool\"\n  type: \"Pooling\"\n  bottom: 
\"inception_5a/output\"\n  top: \"inception_5b/pool\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 1\n    pad: 1\n  }\n}\nlayer {\n  name: \"inception_5b/pool_proj\"\n  type: \"Convolution\"\n  bottom: \"inception_5b/pool\"\n  top: \"inception_5b/pool_proj\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    kernel_size: 1\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0.2\n    }\n  }\n}\nlayer {\n  name: \"inception_5b/relu_pool_proj\"\n  type: \"ReLU\"\n  bottom: \"inception_5b/pool_proj\"\n  top: \"inception_5b/pool_proj\"\n}\nlayer {\n  name: \"inception_5b/output\"\n  type: \"Concat\"\n  bottom: \"inception_5b/1x1\"\n  bottom: \"inception_5b/3x3\"\n  bottom: \"inception_5b/5x5\"\n  bottom: \"inception_5b/pool_proj\"\n  top: \"inception_5b/output\"\n}\nlayer {\n  name: \"pool5/7x7_s1\"\n  type: \"Pooling\"\n  bottom: \"inception_5b/output\"\n  top: \"pool5/7x7_s1\"\n  pooling_param {\n    pool: AVE\n    kernel_size: 7\n    stride: 1\n  }\n}\nlayer {\n  name: \"pool5/drop_7x7_s1\"\n  type: \"Dropout\"\n  bottom: \"pool5/7x7_s1\"\n  top: \"pool5/7x7_s1\"\n  dropout_param {\n    dropout_ratio: 0.4\n  }\n}\nlayer {\n  name: \"loss3/classifier\"\n  type: \"InnerProduct\"\n  bottom: \"pool5/7x7_s1\"\n  top: \"loss3/classifier\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss3/loss3\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"loss3/classifier\"\n  bottom: \"label\"\n  top: \"loss3/loss3\"\n  loss_weight: 1\n}\nlayer {\n  name: \"loss3/top-1\"\n  type: \"Accuracy\"\n  bottom: \"loss3/classifier\"\n  bottom: \"label\"\n  top: 
\"loss3/top-1\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss3/top-5\"\n  type: \"Accuracy\"\n  bottom: \"loss3/classifier\"\n  bottom: \"label\"\n  top: \"loss3/top-5\"\n  include {\n    phase: TEST\n  }\n  accuracy_param {\n    top_k: 5\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_caffenet/deploy.prototxt",
    "content": "name: \"CaffeNet\"\ninput: \"data\"\ninput_shape {\n  dim: 10\n  dim: 3\n  dim: 227\n  dim: 227\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  convolution_param {\n    num_output: 256\n    pad: 
1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8\"\n  inner_product_param {\n    num_output: 1000\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"fc8\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_caffenet/readme.md",
    "content": "---\nname: BVLC CaffeNet Model\ncaffemodel: bvlc_reference_caffenet.caffemodel\ncaffemodel_url: http://dl.caffe.berkeleyvision.org/bvlc_reference_caffenet.caffemodel\nlicense: unrestricted\nsha1: 4c8d77deb20ea792f84eb5e6d0a11ca0a8660a46\ncaffe_commit: 709dc15af4a06bebda027c1eb2b3f3e3375d5077\n---\n\nThis model is the result of following the Caffe [ImageNet model training instructions](http://caffe.berkeleyvision.org/gathered/examples/imagenet.html).\nIt is a replication of the model described in the [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) publication with some differences:\n\n- not training with the relighting data-augmentation;\n- the order of pooling and normalization layers is switched (in CaffeNet, pooling is done before normalization).\n\nThis model is snapshot of iteration 310,000.\nThe best validation performance during training was iteration 313,000 with validation accuracy 57.412% and loss 1.82328.\nThis model obtains a top-1 accuracy 57.4% and a top-5 accuracy 80.4% on the validation set, using just the center crop.\n(Using the average of 10 crops, (4 + 1 center) * 2 mirror, should obtain a bit higher accuracy still.)\n\nThis model was trained by Jeff Donahue @jeffdonahue\n\n## License\n\nThis model is released for unrestricted use.\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_caffenet/solver.prototxt",
    "content": "net: \"models/bvlc_reference_caffenet/train_val.prototxt\"\ntest_iter: 1000\ntest_interval: 1000\nbase_lr: 0.01\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 100000\ndisplay: 20\nmax_iter: 450000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"models/bvlc_reference_caffenet/caffenet_train\"\nsolver_mode: GPU\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_caffenet/train_val.prototxt",
    "content": "name: \"CaffeNet\"\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mirror: true\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n# mean pixel / channel-wise mean instead of mean image\n#  transform_param {\n#    crop_size: 227\n#    mean_value: 104\n#    mean_value: 117\n#    mean_value: 123\n#    mirror: true\n#  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_train_lmdb\"\n    batch_size: 256\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"Data\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mirror: false\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n# mean pixel / channel-wise mean instead of mean image\n#  transform_param {\n#    crop_size: 227\n#    mean_value: 104\n#    mean_value: 117\n#    mean_value: 123\n#    mirror: false\n#  }\n  data_param {\n    source: \"examples/imagenet/ilsvrc12_val_lmdb\"\n    batch_size: 50\n    backend: LMDB\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  
type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    
lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 1000\n    weight_filler {\n      
type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc8\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc8\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt",
    "content": "name: \"R-CNN-ilsvrc13\"\ninput: \"data\"\ninput_shape {\n  dim: 10\n  dim: 3\n  dim: 227\n  dim: 227\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  convolution_param {\n    num_output: 256\n    
pad: 1\n    kernel_size: 3\n    group: 2\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\n# R-CNN classification layer made from R-CNN ILSVRC13 SVMs.\nlayer {\n  name: \"fc-rcnn\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc-rcnn\"\n  inner_product_param {\n    num_output: 200\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/models/bvlc_reference_rcnn_ilsvrc13/readme.md",
    "content": "---\nname: BVLC Reference RCNN ILSVRC13 Model\ncaffemodel: bvlc_reference_rcnn_ilsvrc13.caffemodel\ncaffemodel_url: http://dl.caffe.berkeleyvision.org/bvlc_reference_rcnn_ilsvrc13.caffemodel\nlicense: unrestricted\nsha1: bdd8abb885819cba5e2fe1eb36235f2319477e64\ncaffe_commit: a7e397abbda52c0b90323c23ab95bdeabee90a98\n---\n\nThe pure Caffe instantiation of the [R-CNN](https://github.com/rbgirshick/rcnn) model for ILSVRC13 detection.\nThis model was made by transplanting the R-CNN SVM classifiers into a `fc-rcnn` classification layer, provided here as an off-the-shelf Caffe detector.\nTry the [detection example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/detection.ipynb) to see it in action.\n\n*N.B. For research purposes, make use of the official R-CNN package and not this example.*\n\nThis model was trained by Ross Girshick @rbgirshick\n\n## License\n\nThis model is released for unrestricted use.\n"
  },
  {
    "path": "caffe-fpn/models/finetune_flickr_style/deploy.prototxt",
    "content": "name: \"FlickrStyleCaffeNet\"\ninput: \"data\"\ninput_shape {\n  dim: 10\n  dim: 3\n  dim: 227\n  dim: 227\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n 
 convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: \"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: 
\"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  # Note that lr_mult can be set to 0 to disable any fine-tuning of this, and any other, layer\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8_flickr\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8_flickr\"\n  # lr_mult is set to higher than for other layers, because this layer is starting from random while the others are already trained\n  param {\n    lr_mult: 10\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 20\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 20\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"prob\"\n  type: \"Softmax\"\n  bottom: \"fc8_flickr\"\n  top: \"prob\"\n}\n"
  },
  {
    "path": "caffe-fpn/models/finetune_flickr_style/readme.md",
    "content": "---\nname: Finetuning CaffeNet on Flickr Style\ncaffemodel: finetune_flickr_style.caffemodel\ncaffemodel_url: http://dl.caffe.berkeleyvision.org/finetune_flickr_style.caffemodel\nlicense: non-commercial\nsha1: b61b5cef7d771b53b0c488e78d35ccadc073e9cf\ncaffe_commit: 737ea5e936821b5c69f9c3952d72693ae5843370\ngist_id: 034c6ac3865563b69e60\n---\n\nThis model is trained exactly as described in `docs/finetune_flickr_style/readme.md`, using all 80000 images.\nThe final performance:\n\n    I1017 07:36:17.370688 31333 solver.cpp:228] Iteration 100000, loss = 0.757952\n    I1017 07:36:17.370730 31333 solver.cpp:247] Iteration 100000, Testing net (#0)\n    I1017 07:36:34.248730 31333 solver.cpp:298]     Test net output #0: accuracy = 0.3916\n\nThis model was trained by Sergey Karayev @sergeyk\n\n## License\n\nThe Flickr Style dataset contains only URLs to images.\nSome of the images may have copyright.\nTraining a category-recognition model for research/non-commercial use may constitute fair use of this data, but the result should not be used for commercial purposes.\n"
  },
  {
    "path": "caffe-fpn/models/finetune_flickr_style/solver.prototxt",
    "content": "net: \"models/finetune_flickr_style/train_val.prototxt\"\ntest_iter: 100\ntest_interval: 1000\n# lr for fine-tuning should be lower than when starting from scratch\nbase_lr: 0.001\nlr_policy: \"step\"\ngamma: 0.1\n# stepsize should also be lower, as we're closer to being done\nstepsize: 20000\ndisplay: 20\nmax_iter: 100000\nmomentum: 0.9\nweight_decay: 0.0005\nsnapshot: 10000\nsnapshot_prefix: \"models/finetune_flickr_style/finetune_flickr_style\"\n# uncomment the following to default to CPU mode solving\n# solver_mode: CPU\n"
  },
  {
    "path": "caffe-fpn/models/finetune_flickr_style/train_val.prototxt",
    "content": "name: \"FlickrStyleCaffeNet\"\nlayer {\n  name: \"data\"\n  type: \"ImageData\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TRAIN\n  }\n  transform_param {\n    mirror: true\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  image_data_param {\n    source: \"data/flickr_style/train.txt\"\n    batch_size: 50\n    new_height: 256\n    new_width: 256\n  }\n}\nlayer {\n  name: \"data\"\n  type: \"ImageData\"\n  top: \"data\"\n  top: \"label\"\n  include {\n    phase: TEST\n  }\n  transform_param {\n    mirror: false\n    crop_size: 227\n    mean_file: \"data/ilsvrc12/imagenet_mean.binaryproto\"\n  }\n  image_data_param {\n    source: \"data/flickr_style/test.txt\"\n    batch_size: 50\n    new_height: 256\n    new_width: 256\n  }\n}\nlayer {\n  name: \"conv1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 96\n    kernel_size: 11\n    stride: 4\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu1\"\n  type: \"ReLU\"\n  bottom: \"conv1\"\n  top: \"conv1\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm1\"\n  type: \"LRN\"\n  bottom: \"pool1\"\n  top: \"norm1\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv2\"\n  type: \"Convolution\"\n  bottom: \"norm1\"\n  top: \"conv2\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 2\n    kernel_size: 5\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    
bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu2\"\n  type: \"ReLU\"\n  bottom: \"conv2\"\n  top: \"conv2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"norm2\"\n  type: \"LRN\"\n  bottom: \"pool2\"\n  top: \"norm2\"\n  lrn_param {\n    local_size: 5\n    alpha: 0.0001\n    beta: 0.75\n  }\n}\nlayer {\n  name: \"conv3\"\n  type: \"Convolution\"\n  bottom: \"norm2\"\n  top: \"conv3\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"relu3\"\n  type: \"ReLU\"\n  bottom: \"conv3\"\n  top: \"conv3\"\n}\nlayer {\n  name: \"conv4\"\n  type: \"Convolution\"\n  bottom: \"conv3\"\n  top: \"conv4\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 384\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu4\"\n  type: \"ReLU\"\n  bottom: \"conv4\"\n  top: \"conv4\"\n}\nlayer {\n  name: \"conv5\"\n  type: \"Convolution\"\n  bottom: \"conv4\"\n  top: \"conv5\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n    group: 2\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu5\"\n  type: \"ReLU\"\n  bottom: 
\"conv5\"\n  top: \"conv5\"\n}\nlayer {\n  name: \"pool5\"\n  type: \"Pooling\"\n  bottom: \"conv5\"\n  top: \"pool5\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 3\n    stride: 2\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  # Note that lr_mult can be set to 0 to disable any fine-tuning of this, and any other, layer\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.005\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 1\n    }\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc8_flickr\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"fc8_flickr\"\n  # lr_mult is set to higher than for other layers, because this layer is starting from random while the others are already trained\n  param {\n    lr_mult: 10\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 20\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 20\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: 
\"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"accuracy\"\n  type: \"Accuracy\"\n  bottom: \"fc8_flickr\"\n  bottom: \"label\"\n  top: \"accuracy\"\n  include {\n    phase: TEST\n  }\n}\nlayer {\n  name: \"loss\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"fc8_flickr\"\n  bottom: \"label\"\n  top: \"loss\"\n}\n"
  },
  {
    "path": "caffe-fpn/python/CMakeLists.txt",
    "content": "if(NOT HAVE_PYTHON)\n  message(STATUS \"Python interface is disabled or not all required dependencies found. Building without it...\")\n  return()\nendif()\n\ninclude_directories(${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} ${Boost_INCLUDE_DIRS})\nfile(GLOB_RECURSE python_srcs ${PROJECT_SOURCE_DIR}/python/*.cpp)\n\nadd_library(pycaffe SHARED ${python_srcs})\ntarget_link_libraries(pycaffe ${Caffe_LINK} ${PYTHON_LIBRARIES} ${Boost_LIBRARIES})\nset_target_properties(pycaffe PROPERTIES PREFIX \"\" OUTPUT_NAME \"_caffe\")\ncaffe_default_properties(pycaffe)\n\nif(UNIX OR APPLE)\n    set(__linkname \"${PROJECT_SOURCE_DIR}/python/caffe/_caffe.so\")\n    add_custom_command(TARGET pycaffe POST_BUILD\n                       COMMAND ln -sf $<TARGET_LINKER_FILE:pycaffe> \"${__linkname}\"\n                       COMMAND ${CMAKE_COMMAND} -E make_directory ${PROJECT_SOURCE_DIR}/python/caffe/proto\n                       COMMAND touch ${PROJECT_SOURCE_DIR}/python/caffe/proto/__init__.py\n                       COMMAND cp ${proto_gen_folder}/*.py ${PROJECT_SOURCE_DIR}/python/caffe/proto/\n                       COMMENT \"Creating symlink ${__linkname} -> ${PROJECT_BINARY_DIR}/lib/_caffe${Caffe_POSTFIX}.so\")\nendif()\n\n# ---[ Install\nfile(GLOB files1 *.py requirements.txt)\ninstall(FILES ${files1} DESTINATION python)\n\nfile(GLOB files2 caffe/*.py)\ninstall(FILES  ${files2} DESTINATION python/caffe)\ninstall(TARGETS pycaffe  DESTINATION python/caffe)\ninstall(DIRECTORY caffe/imagenet caffe/proto caffe/test DESTINATION python/caffe)\n\n\n\n"
  },
  {
    "path": "caffe-fpn/python/caffe/__init__.py",
    "content": "from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver\nfrom ._caffe import set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver, layer_type_list, set_random_seed\nfrom ._caffe import __version__\nfrom .proto.caffe_pb2 import TRAIN, TEST\nfrom .classifier import Classifier\nfrom .detector import Detector\nfrom . import io\nfrom .net_spec import layers, params, NetSpec, to_proto\n"
  },
  {
    "path": "caffe-fpn/python/caffe/_caffe.cpp",
    "content": "#include <Python.h>  // NOLINT(build/include_alpha)\n\n// Produce deprecation warnings (needs to come before arrayobject.h inclusion).\n#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION\n\n#include <boost/make_shared.hpp>\n#include <boost/python.hpp>\n#include <boost/python/raw_function.hpp>\n#include <boost/python/suite/indexing/vector_indexing_suite.hpp>\n#include <boost/python/enum.hpp>\n#include <numpy/arrayobject.h>\n\n// these need to be included after boost on OS X\n#include <string>  // NOLINT(build/include_order)\n#include <vector>  // NOLINT(build/include_order)\n#include <fstream>  // NOLINT\n\n#include \"caffe/caffe.hpp\"\n#include \"caffe/layers/memory_data_layer.hpp\"\n#include \"caffe/layers/python_layer.hpp\"\n#include \"caffe/sgd_solvers.hpp\"\n\n// Temporary solution for numpy < 1.7 versions: old macro, no promises.\n// You're strongly advised to upgrade to >= 1.7.\n#ifndef NPY_ARRAY_C_CONTIGUOUS\n#define NPY_ARRAY_C_CONTIGUOUS NPY_C_CONTIGUOUS\n#define PyArray_SetBaseObject(arr, x) (PyArray_BASE(arr) = (x))\n#endif\n\nnamespace bp = boost::python;\n\nnamespace caffe {\n\n// For Python, for now, we'll just always use float as the type.\ntypedef float Dtype;\nconst int NPY_DTYPE = NPY_FLOAT32;\n\n// Selecting mode.\nvoid set_mode_cpu() { Caffe::set_mode(Caffe::CPU); }\nvoid set_mode_gpu() { Caffe::set_mode(Caffe::GPU); }\n\n// For convenience, check that input files can be opened, and raise an\n// exception that boost will send to Python if not (caffe could still crash\n// later if the input files are disturbed before they are actually used, but\n// this saves frustration in most cases).\nstatic void CheckFile(const string& filename) {\n    std::ifstream f(filename.c_str());\n    if (!f.good()) {\n      f.close();\n      throw std::runtime_error(\"Could not open file \" + filename);\n    }\n    f.close();\n}\n\nvoid CheckContiguousArray(PyArrayObject* arr, string name,\n    int channels, int height, int width) {\n  if 
(!(PyArray_FLAGS(arr) & NPY_ARRAY_C_CONTIGUOUS)) {\n    throw std::runtime_error(name + \" must be C contiguous\");\n  }\n  if (PyArray_NDIM(arr) != 4) {\n    throw std::runtime_error(name + \" must be 4-d\");\n  }\n  if (PyArray_TYPE(arr) != NPY_FLOAT32) {\n    throw std::runtime_error(name + \" must be float32\");\n  }\n  if (PyArray_DIMS(arr)[1] != channels) {\n    throw std::runtime_error(name + \" has wrong number of channels\");\n  }\n  if (PyArray_DIMS(arr)[2] != height) {\n    throw std::runtime_error(name + \" has wrong height\");\n  }\n  if (PyArray_DIMS(arr)[3] != width) {\n    throw std::runtime_error(name + \" has wrong width\");\n  }\n}\n\n// Net constructor for passing phase as int\nshared_ptr<Net<Dtype> > Net_Init(\n    string param_file, int phase) {\n  CheckFile(param_file);\n\n  shared_ptr<Net<Dtype> > net(new Net<Dtype>(param_file,\n      static_cast<Phase>(phase)));\n  return net;\n}\n\n// Net construct-and-load convenience constructor\nshared_ptr<Net<Dtype> > Net_Init_Load(\n    string param_file, string pretrained_param_file, int phase) {\n  CheckFile(param_file);\n  CheckFile(pretrained_param_file);\n\n  shared_ptr<Net<Dtype> > net(new Net<Dtype>(param_file,\n      static_cast<Phase>(phase)));\n  net->CopyTrainedLayersFrom(pretrained_param_file);\n  return net;\n}\n\nvoid Net_Save(const Net<Dtype>& net, string filename) {\n  NetParameter net_param;\n  net.ToProto(&net_param, false);\n  WriteProtoToBinaryFile(net_param, filename.c_str());\n}\n\nvoid Net_SetInputArrays(Net<Dtype>* net, bp::object data_obj,\n    bp::object labels_obj) {\n  // check that this network has an input MemoryDataLayer\n  shared_ptr<MemoryDataLayer<Dtype> > md_layer =\n    boost::dynamic_pointer_cast<MemoryDataLayer<Dtype> >(net->layers()[0]);\n  if (!md_layer) {\n    throw std::runtime_error(\"set_input_arrays may only be called if the\"\n        \" first layer is a MemoryDataLayer\");\n  }\n\n  // check that we were passed appropriately-sized contiguous memory\n  
PyArrayObject* data_arr =\n      reinterpret_cast<PyArrayObject*>(data_obj.ptr());\n  PyArrayObject* labels_arr =\n      reinterpret_cast<PyArrayObject*>(labels_obj.ptr());\n  CheckContiguousArray(data_arr, \"data array\", md_layer->channels(),\n      md_layer->height(), md_layer->width());\n  CheckContiguousArray(labels_arr, \"labels array\", 1, 1, 1);\n  if (PyArray_DIMS(data_arr)[0] != PyArray_DIMS(labels_arr)[0]) {\n    throw std::runtime_error(\"data and labels must have the same first\"\n        \" dimension\");\n  }\n  if (PyArray_DIMS(data_arr)[0] % md_layer->batch_size() != 0) {\n    throw std::runtime_error(\"first dimensions of input arrays must be a\"\n        \" multiple of batch size\");\n  }\n\n  md_layer->Reset(static_cast<Dtype*>(PyArray_DATA(data_arr)),\n      static_cast<Dtype*>(PyArray_DATA(labels_arr)),\n      PyArray_DIMS(data_arr)[0]);\n}\n\nSolver<Dtype>* GetSolverFromFile(const string& filename) {\n  SolverParameter param;\n  ReadSolverParamsFromTextFileOrDie(filename, &param);\n  return SolverRegistry<Dtype>::CreateSolver(param);\n}\n\nstruct NdarrayConverterGenerator {\n  template <typename T> struct apply;\n};\n\ntemplate <>\nstruct NdarrayConverterGenerator::apply<Dtype*> {\n  struct type {\n    PyObject* operator() (Dtype* data) const {\n      // Just store the data pointer, and add the shape information in postcall.\n      return PyArray_SimpleNewFromData(0, NULL, NPY_DTYPE, data);\n    }\n    const PyTypeObject* get_pytype() {\n      return &PyArray_Type;\n    }\n  };\n};\n\nstruct NdarrayCallPolicies : public bp::default_call_policies {\n  typedef NdarrayConverterGenerator result_converter;\n  PyObject* postcall(PyObject* pyargs, PyObject* result) {\n    bp::object pyblob = bp::extract<bp::tuple>(pyargs)()[0];\n    shared_ptr<Blob<Dtype> > blob =\n      bp::extract<shared_ptr<Blob<Dtype> > >(pyblob);\n    // Free the temporary pointer-holding array, and construct a new one with\n    // the shape information from the blob.\n    void* 
data = PyArray_DATA(reinterpret_cast<PyArrayObject*>(result));\n    Py_DECREF(result);\n    const int num_axes = blob->num_axes();\n    vector<npy_intp> dims(blob->shape().begin(), blob->shape().end());\n    PyObject *arr_obj = PyArray_SimpleNewFromData(num_axes, dims.data(),\n                                                  NPY_FLOAT32, data);\n    // SetBaseObject steals a ref, so we need to INCREF.\n    Py_INCREF(pyblob.ptr());\n    PyArray_SetBaseObject(reinterpret_cast<PyArrayObject*>(arr_obj),\n        pyblob.ptr());\n    return arr_obj;\n  }\n};\n\nbp::object Blob_Reshape(bp::tuple args, bp::dict kwargs) {\n  if (bp::len(kwargs) > 0) {\n    throw std::runtime_error(\"Blob.reshape takes no kwargs\");\n  }\n  Blob<Dtype>* self = bp::extract<Blob<Dtype>*>(args[0]);\n  vector<int> shape(bp::len(args) - 1);\n  for (int i = 1; i < bp::len(args); ++i) {\n    shape[i - 1] = bp::extract<int>(args[i]);\n  }\n  self->Reshape(shape);\n  // We need to explicitly return None to use bp::raw_function.\n  return bp::object();\n}\n\nbp::object BlobVec_add_blob(bp::tuple args, bp::dict kwargs) {\n  if (bp::len(kwargs) > 0) {\n    throw std::runtime_error(\"BlobVec.add_blob takes no kwargs\");\n  }\n  typedef vector<shared_ptr<Blob<Dtype> > > BlobVec;\n  BlobVec* self = bp::extract<BlobVec*>(args[0]);\n  vector<int> shape(bp::len(args) - 1);\n  for (int i = 1; i < bp::len(args); ++i) {\n    shape[i - 1] = bp::extract<int>(args[i]);\n  }\n  self->push_back(shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n  // We need to explicitly return None to use bp::raw_function.\n  return bp::object();\n}\n\nBOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS(SolveOverloads, Solve, 0, 1);\n\nBOOST_PYTHON_MODULE(_caffe) {\n  // below, we prepend an underscore to methods that will be replaced\n  // in Python\n\n  bp::scope().attr(\"__version__\") = AS_STRING(CAFFE_VERSION);\n\n  // Caffe utility functions\n  bp::def(\"set_mode_cpu\", &set_mode_cpu);\n  bp::def(\"set_mode_gpu\", &set_mode_gpu);\n  
bp::def(\"set_device\", &Caffe::SetDevice);\n  bp::def(\"set_random_seed\", &Caffe::set_random_seed);\n\n  bp::def(\"layer_type_list\", &LayerRegistry<Dtype>::LayerTypeList);\n\n  bp::enum_<Phase>(\"Phase\")\n    .value(\"TRAIN\", caffe::TRAIN)\n    .value(\"TEST\", caffe::TEST)\n    .export_values();\n\n  bp::class_<Net<Dtype>, shared_ptr<Net<Dtype> >, boost::noncopyable >(\"Net\",\n    bp::no_init)\n    .def(\"__init__\", bp::make_constructor(&Net_Init))\n    .def(\"__init__\", bp::make_constructor(&Net_Init_Load))\n    .def(\"_forward\", &Net<Dtype>::ForwardFromTo)\n    .def(\"_backward\", &Net<Dtype>::BackwardFromTo)\n    .def(\"reshape\", &Net<Dtype>::Reshape)\n    // The cast is to select a particular overload.\n    .def(\"copy_from\", static_cast<void (Net<Dtype>::*)(const string)>(\n        &Net<Dtype>::CopyTrainedLayersFrom))\n    .def(\"share_with\", &Net<Dtype>::ShareTrainedLayersWith)\n    .add_property(\"_blob_loss_weights\", bp::make_function(\n        &Net<Dtype>::blob_loss_weights, bp::return_internal_reference<>()))\n    .def(\"_bottom_ids\", bp::make_function(&Net<Dtype>::bottom_ids,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    .def(\"_top_ids\", bp::make_function(&Net<Dtype>::top_ids,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    .add_property(\"_blobs\", bp::make_function(&Net<Dtype>::blobs,\n        bp::return_internal_reference<>()))\n    .add_property(\"layers\", bp::make_function(&Net<Dtype>::layers,\n        bp::return_internal_reference<>()))\n    .add_property(\"_blob_names\", bp::make_function(&Net<Dtype>::blob_names,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    .add_property(\"_layer_names\", bp::make_function(&Net<Dtype>::layer_names,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    .add_property(\"_inputs\", bp::make_function(&Net<Dtype>::input_blob_indices,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    
.add_property(\"_outputs\",\n        bp::make_function(&Net<Dtype>::output_blob_indices,\n        bp::return_value_policy<bp::copy_const_reference>()))\n    .def(\"_set_input_arrays\", &Net_SetInputArrays,\n        bp::with_custodian_and_ward<1, 2, bp::with_custodian_and_ward<1, 3> >())\n    .def(\"save\", &Net_Save);\n\n  bp::class_<Blob<Dtype>, shared_ptr<Blob<Dtype> >, boost::noncopyable>(\n    \"Blob\", bp::no_init)\n    .add_property(\"shape\",\n        bp::make_function(\n            static_cast<const vector<int>& (Blob<Dtype>::*)() const>(\n                &Blob<Dtype>::shape),\n            bp::return_value_policy<bp::copy_const_reference>()))\n    .add_property(\"num\",      &Blob<Dtype>::num)\n    .add_property(\"channels\", &Blob<Dtype>::channels)\n    .add_property(\"height\",   &Blob<Dtype>::height)\n    .add_property(\"width\",    &Blob<Dtype>::width)\n    .add_property(\"count\",    static_cast<int (Blob<Dtype>::*)() const>(\n        &Blob<Dtype>::count))\n    .def(\"reshape\",           bp::raw_function(&Blob_Reshape))\n    .add_property(\"data\",     bp::make_function(&Blob<Dtype>::mutable_cpu_data,\n          NdarrayCallPolicies()))\n    .add_property(\"diff\",     bp::make_function(&Blob<Dtype>::mutable_cpu_diff,\n          NdarrayCallPolicies()));\n\n  bp::class_<Layer<Dtype>, shared_ptr<PythonLayer<Dtype> >,\n    boost::noncopyable>(\"Layer\", bp::init<const LayerParameter&>())\n    .add_property(\"blobs\", bp::make_function(&Layer<Dtype>::blobs,\n          bp::return_internal_reference<>()))\n    .def(\"setup\", &Layer<Dtype>::LayerSetUp)\n    .def(\"reshape\", &Layer<Dtype>::Reshape)\n    .add_property(\"phase\", bp::make_function(&Layer<Dtype>::phase))\n    .add_property(\"type\", bp::make_function(&Layer<Dtype>::type));\n  bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();\n\n  bp::class_<LayerParameter>(\"LayerParameter\", bp::no_init);\n\n  bp::class_<Solver<Dtype>, shared_ptr<Solver<Dtype> >, boost::noncopyable>(\n    \"Solver\", 
bp::no_init)\n    .add_property(\"net\", &Solver<Dtype>::net)\n    .add_property(\"test_nets\", bp::make_function(&Solver<Dtype>::test_nets,\n          bp::return_internal_reference<>()))\n    .add_property(\"iter\", &Solver<Dtype>::iter)\n    .def(\"solve\", static_cast<void (Solver<Dtype>::*)(const char*)>(\n          &Solver<Dtype>::Solve), SolveOverloads())\n    .def(\"step\", &Solver<Dtype>::Step)\n    .def(\"restore\", &Solver<Dtype>::Restore)\n    .def(\"snapshot\", &Solver<Dtype>::Snapshot);\n\n  bp::class_<SGDSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<SGDSolver<Dtype> >, boost::noncopyable>(\n        \"SGDSolver\", bp::init<string>());\n  bp::class_<NesterovSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<NesterovSolver<Dtype> >, boost::noncopyable>(\n        \"NesterovSolver\", bp::init<string>());\n  bp::class_<AdaGradSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<AdaGradSolver<Dtype> >, boost::noncopyable>(\n        \"AdaGradSolver\", bp::init<string>());\n  bp::class_<RMSPropSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<RMSPropSolver<Dtype> >, boost::noncopyable>(\n        \"RMSPropSolver\", bp::init<string>());\n  bp::class_<AdaDeltaSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<AdaDeltaSolver<Dtype> >, boost::noncopyable>(\n        \"AdaDeltaSolver\", bp::init<string>());\n  bp::class_<AdamSolver<Dtype>, bp::bases<Solver<Dtype> >,\n    shared_ptr<AdamSolver<Dtype> >, boost::noncopyable>(\n        \"AdamSolver\", bp::init<string>());\n\n  bp::def(\"get_solver\", &GetSolverFromFile,\n      bp::return_value_policy<bp::manage_new_object>());\n\n  // vector wrappers for all the vector types we use\n  bp::class_<vector<shared_ptr<Blob<Dtype> > > >(\"BlobVec\")\n    .def(bp::vector_indexing_suite<vector<shared_ptr<Blob<Dtype> > >, true>())\n    .def(\"add_blob\", bp::raw_function(&BlobVec_add_blob));\n  bp::class_<vector<Blob<Dtype>*> >(\"RawBlobVec\")\n    
.def(bp::vector_indexing_suite<vector<Blob<Dtype>*>, true>());\n  bp::class_<vector<shared_ptr<Layer<Dtype> > > >(\"LayerVec\")\n    .def(bp::vector_indexing_suite<vector<shared_ptr<Layer<Dtype> > >, true>());\n  bp::class_<vector<string> >(\"StringVec\")\n    .def(bp::vector_indexing_suite<vector<string> >());\n  bp::class_<vector<int> >(\"IntVec\")\n    .def(bp::vector_indexing_suite<vector<int> >());\n  bp::class_<vector<Dtype> >(\"DtypeVec\")\n    .def(bp::vector_indexing_suite<vector<Dtype> >());\n  bp::class_<vector<shared_ptr<Net<Dtype> > > >(\"NetVec\")\n    .def(bp::vector_indexing_suite<vector<shared_ptr<Net<Dtype> > >, true>());\n  bp::class_<vector<bool> >(\"BoolVec\")\n    .def(bp::vector_indexing_suite<vector<bool> >());\n\n  // boost python expects a void (missing) return value, while import_array\n  // returns NULL for python3. import_array1() forces a void return value.\n  import_array1();\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/python/caffe/classifier.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nClassifier is an image classifier specialization of Net.\n\"\"\"\n\nimport numpy as np\n\nimport caffe\n\n\nclass Classifier(caffe.Net):\n    \"\"\"\n    Classifier extends Net for image class prediction\n    by scaling, center cropping, or oversampling.\n\n    Parameters\n    ----------\n    image_dims : dimensions to scale input for cropping/sampling.\n        Default is to scale to net input size for whole-image crop.\n    mean, input_scale, raw_scale, channel_swap: params for\n        preprocessing options.\n    \"\"\"\n    def __init__(self, model_file, pretrained_file, image_dims=None,\n                 mean=None, input_scale=None, raw_scale=None,\n                 channel_swap=None):\n        caffe.Net.__init__(self, model_file, pretrained_file, caffe.TEST)\n\n        # configure pre-processing\n        in_ = self.inputs[0]\n        self.transformer = caffe.io.Transformer(\n            {in_: self.blobs[in_].data.shape})\n        self.transformer.set_transpose(in_, (2, 0, 1))\n        if mean is not None:\n            self.transformer.set_mean(in_, mean)\n        if input_scale is not None:\n            self.transformer.set_input_scale(in_, input_scale)\n        if raw_scale is not None:\n            self.transformer.set_raw_scale(in_, raw_scale)\n        if channel_swap is not None:\n            self.transformer.set_channel_swap(in_, channel_swap)\n\n        self.crop_dims = np.array(self.blobs[in_].data.shape[2:])\n        if not image_dims:\n            image_dims = self.crop_dims\n        self.image_dims = image_dims\n\n    def predict(self, inputs, oversample=True):\n        \"\"\"\n        Predict classification probabilities of inputs.\n\n        Parameters\n        ----------\n        inputs : iterable of (H x W x K) input ndarrays.\n        oversample : boolean\n            average predictions across center, corners, and mirrors\n            when True (default). 
Center-only prediction when False.\n\n        Returns\n        -------\n        predictions: (N x C) ndarray of class probabilities for N images and C\n            classes.\n        \"\"\"\n        # Scale to standardize input dimensions.\n        input_ = np.zeros((len(inputs),\n                           self.image_dims[0],\n                           self.image_dims[1],\n                           inputs[0].shape[2]),\n                          dtype=np.float32)\n        for ix, in_ in enumerate(inputs):\n            input_[ix] = caffe.io.resize_image(in_, self.image_dims)\n\n        if oversample:\n            # Generate center, corner, and mirrored crops.\n            input_ = caffe.io.oversample(input_, self.crop_dims)\n        else:\n            # Take center crop.\n            center = np.array(self.image_dims) / 2.0\n            crop = np.tile(center, (1, 2))[0] + np.concatenate([\n                -self.crop_dims / 2.0,\n                self.crop_dims / 2.0\n            ])\n            crop = crop.astype(int)  # slice indices must be integers\n            input_ = input_[:, crop[0]:crop[2], crop[1]:crop[3], :]\n\n        # Classify\n        caffe_in = np.zeros(np.array(input_.shape)[[0, 3, 1, 2]],\n                            dtype=np.float32)\n        for ix, in_ in enumerate(input_):\n            caffe_in[ix] = self.transformer.preprocess(self.inputs[0], in_)\n        out = self.forward_all(**{self.inputs[0]: caffe_in})\n        predictions = out[self.outputs[0]]\n\n        # For oversampling, average predictions across crops.\n        if oversample:\n            predictions = predictions.reshape((len(predictions) // 10, 10, -1))\n            predictions = predictions.mean(1)\n\n        return predictions\n"
  },
  {
    "path": "caffe-fpn/python/caffe/detector.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nDo windowed detection by classifying a number of images/crops at once,\noptionally using the selective search window proposal method.\n\nThis implementation follows ideas in\n    Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik.\n    Rich feature hierarchies for accurate object detection and semantic\n    segmentation.\n    http://arxiv.org/abs/1311.2524\n\nThe selective_search_ijcv_with_python code required for the selective search\nproposal mode is available at\n    https://github.com/sergeyk/selective_search_ijcv_with_python\n\"\"\"\nimport numpy as np\nimport os\n\nimport caffe\n\n\nclass Detector(caffe.Net):\n    \"\"\"\n    Detector extends Net for windowed detection by a list of crops or\n    selective search proposals.\n\n    Parameters\n    ----------\n    mean, input_scale, raw_scale, channel_swap : params for preprocessing\n        options.\n    context_pad : amount of surrounding context to take s.t. a `context_pad`\n        sized border of pixels in the network input image is context, as in\n        R-CNN feature extraction.\n    \"\"\"\n    def __init__(self, model_file, pretrained_file, mean=None,\n                 input_scale=None, raw_scale=None, channel_swap=None,\n                 context_pad=None):\n        caffe.Net.__init__(self, model_file, pretrained_file, caffe.TEST)\n\n        # configure pre-processing\n        in_ = self.inputs[0]\n        self.transformer = caffe.io.Transformer(\n            {in_: self.blobs[in_].data.shape})\n        self.transformer.set_transpose(in_, (2, 0, 1))\n        if mean is not None:\n            self.transformer.set_mean(in_, mean)\n        if input_scale is not None:\n            self.transformer.set_input_scale(in_, input_scale)\n        if raw_scale is not None:\n            self.transformer.set_raw_scale(in_, raw_scale)\n        if channel_swap is not None:\n            self.transformer.set_channel_swap(in_, channel_swap)\n\n        
self.configure_crop(context_pad)\n\n    def detect_windows(self, images_windows):\n        \"\"\"\n        Do windowed detection over given images and windows. Windows are\n        extracted then warped to the input dimensions of the net.\n\n        Parameters\n        ----------\n        images_windows: (image filename, window list) iterable.\n        context_crop: size of context border to crop in pixels.\n\n        Returns\n        -------\n        detections: list of {filename: image filename, window: crop coordinates,\n            predictions: prediction vector} dicts.\n        \"\"\"\n        # Extract windows.\n        window_inputs = []\n        for image_fname, windows in images_windows:\n            image = caffe.io.load_image(image_fname).astype(np.float32)\n            for window in windows:\n                window_inputs.append(self.crop(image, window))\n\n        # Run through the net (warping windows to input dimensions).\n        in_ = self.inputs[0]\n        caffe_in = np.zeros((len(window_inputs), window_inputs[0].shape[2])\n                            + self.blobs[in_].data.shape[2:],\n                            dtype=np.float32)\n        for ix, window_in in enumerate(window_inputs):\n            caffe_in[ix] = self.transformer.preprocess(in_, window_in)\n        out = self.forward_all(**{in_: caffe_in})\n        predictions = out[self.outputs[0]].squeeze(axis=(2, 3))\n\n        # Package predictions with images and windows.\n        detections = []\n        ix = 0\n        for image_fname, windows in images_windows:\n            for window in windows:\n                detections.append({\n                    'window': window,\n                    'prediction': predictions[ix],\n                    'filename': image_fname\n                })\n                ix += 1\n        return detections\n\n    def detect_selective_search(self, image_fnames):\n        \"\"\"\n        Do windowed detection over Selective Search proposals by extracting\n     
   the crop and warping to the input dimensions of the net.\n\n        Parameters\n        ----------\n        image_fnames: list\n\n        Returns\n        -------\n        detections: list of {filename: image filename, window: crop coordinates,\n            predictions: prediction vector} dicts.\n        \"\"\"\n        import selective_search_ijcv_with_python as selective_search\n        # Make absolute paths so MATLAB can find the files.\n        image_fnames = [os.path.abspath(f) for f in image_fnames]\n        windows_list = selective_search.get_windows(\n            image_fnames,\n            cmd='selective_search_rcnn'\n        )\n        # Run windowed detection on the selective search list.\n        return self.detect_windows(zip(image_fnames, windows_list))\n\n    def crop(self, im, window):\n        \"\"\"\n        Crop a window from the image for detection. Include surrounding context\n        according to the `context_pad` configuration.\n\n        Parameters\n        ----------\n        im: H x W x K image ndarray to crop.\n        window: bounding box coordinates as ymin, xmin, ymax, xmax.\n\n        Returns\n        -------\n        crop: cropped window.\n        \"\"\"\n        # Crop window from the image.\n        crop = im[window[0]:window[2], window[1]:window[3]]\n\n        if self.context_pad:\n            box = window.copy()\n            crop_size = self.blobs[self.inputs[0]].width  # assumes square\n            scale = crop_size / (1. 
* crop_size - self.context_pad * 2)\n            # Crop a box + surrounding context.\n            half_h = (box[2] - box[0] + 1) / 2.\n            half_w = (box[3] - box[1] + 1) / 2.\n            center = (box[0] + half_h, box[1] + half_w)\n            scaled_dims = scale * np.array((-half_h, -half_w, half_h, half_w))\n            box = np.round(np.tile(center, 2) + scaled_dims)\n            full_h = box[2] - box[0] + 1\n            full_w = box[3] - box[1] + 1\n            scale_h = crop_size / full_h\n            scale_w = crop_size / full_w\n            pad_y = int(round(max(0, -box[0]) * scale_h))  # amount out-of-bounds\n            pad_x = int(round(max(0, -box[1]) * scale_w))\n\n            # Clip box to image dimensions and cast to int for slicing.\n            im_h, im_w = im.shape[:2]\n            box = np.clip(box, 0., [im_h, im_w, im_h, im_w]).astype(int)\n            clip_h = box[2] - box[0] + 1\n            clip_w = box[3] - box[1] + 1\n            assert(clip_h > 0 and clip_w > 0)\n            crop_h = int(round(clip_h * scale_h))\n            crop_w = int(round(clip_w * scale_w))\n            if pad_y + crop_h > crop_size:\n                crop_h = crop_size - pad_y\n            if pad_x + crop_w > crop_size:\n                crop_w = crop_size - pad_x\n\n            # collect with context padding and place in input\n            # with mean padding\n            context_crop = im[box[0]:box[2], box[1]:box[3]]\n            context_crop = caffe.io.resize_image(context_crop, (crop_h, crop_w))\n            crop = np.ones(self.crop_dims, dtype=np.float32) * self.crop_mean\n            crop[pad_y:(pad_y + crop_h), pad_x:(pad_x + crop_w)] = context_crop\n\n        return crop\n\n    def configure_crop(self, context_pad):\n        \"\"\"\n        Configure crop dimensions and amount of context for cropping.\n        If context is included, make the special input mean for context padding.\n\n        Parameters\n        ----------\n        context_pad : amount of context for cropping.\n        \"\"\"\n        # crop dimensions\n        in_ = self.inputs[0]\n        tpose = self.transformer.transpose[in_]\n        inv_tpose = [tpose[t] for t in tpose]\n        self.crop_dims = np.array(self.blobs[in_].data.shape[1:])[inv_tpose]\n        #.transpose(inv_tpose)\n        # context padding\n        self.context_pad = context_pad\n        if self.context_pad:\n            in_ = self.inputs[0]\n            transpose = self.transformer.transpose.get(in_)\n            channel_order = self.transformer.channel_swap.get(in_)\n            raw_scale = self.transformer.raw_scale.get(in_)\n            # Padding context crops needs the mean in unprocessed input space.\n            mean = self.transformer.mean.get(in_)\n            if mean is not None:\n                inv_transpose = [transpose[t] for t in transpose]\n                crop_mean = mean.copy().transpose(inv_transpose)\n                if channel_order is not None:\n                    channel_order_inverse = [channel_order.index(i)\n                                             for i in range(crop_mean.shape[2])]\n                    crop_mean = crop_mean[:, :, channel_order_inverse]\n                if raw_scale is not None:\n                    crop_mean /= raw_scale\n                self.crop_mean = crop_mean\n            else:\n                self.crop_mean = np.zeros(self.crop_dims, dtype=np.float32)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/draw.py",
    "content": "\"\"\"\nCaffe network visualization: draw the NetParameter protobuffer.\n\n\n.. note::\n\n    This requires pydot>=1.0.2, which is not included in requirements.txt since\n    it requires graphviz and other prerequisites outside the scope of the\n    Caffe.\n\"\"\"\n\nfrom caffe.proto import caffe_pb2\n\n\"\"\"\npydot is not supported under python 3 and pydot2 doesn't work properly.\npydotplus works nicely (pip install pydotplus)\n\"\"\"\ntry:\n    # Try to load pydotplus\n    import pydotplus as pydot\nexcept ImportError:\n    import pydot\n\n# Internal layer and blob styles.\nLAYER_STYLE_DEFAULT = {'shape': 'record',\n                       'fillcolor': '#6495ED',\n                       'style': 'filled'}\nNEURON_LAYER_STYLE = {'shape': 'record',\n                      'fillcolor': '#90EE90',\n                      'style': 'filled'}\nBLOB_STYLE = {'shape': 'octagon',\n              'fillcolor': '#E0E0E0',\n              'style': 'filled'}\n\n\ndef get_pooling_types_dict():\n    \"\"\"Get dictionary mapping pooling type number to type name\n    \"\"\"\n    desc = caffe_pb2.PoolingParameter.PoolMethod.DESCRIPTOR\n    d = {}\n    for k, v in desc.values_by_name.items():\n        d[v.number] = k\n    return d\n\n\ndef get_edge_label(layer):\n    \"\"\"Define edge label based on layer type.\n    \"\"\"\n\n    if layer.type == 'Data':\n        edge_label = 'Batch ' + str(layer.data_param.batch_size)\n    elif layer.type == 'Convolution' or layer.type == 'Deconvolution':\n        edge_label = str(layer.convolution_param.num_output)\n    elif layer.type == 'InnerProduct':\n        edge_label = str(layer.inner_product_param.num_output)\n    else:\n        edge_label = '\"\"'\n\n    return edge_label\n\n\ndef get_layer_label(layer, rankdir):\n    \"\"\"Define node label based on layer type.\n\n    Parameters\n    ----------\n    layer : ?\n    rankdir : {'LR', 'TB', 'BT'}\n        Direction of graph layout.\n\n    Returns\n    -------\n    string :\n       
 A label for the current layer\n    \"\"\"\n\n    if rankdir in ('TB', 'BT'):\n        # If graph orientation is vertical, horizontal space is free and\n        # vertical space is not; separate words with spaces\n        separator = ' '\n    else:\n        # If graph orientation is horizontal, vertical space is free and\n        # horizontal space is not; separate words with newlines\n        separator = '\\\\n'\n\n    if layer.type == 'Convolution' or layer.type == 'Deconvolution':\n        # Outer double quotes needed or else colon characters don't parse\n        # properly\n        node_label = '\"%s%s(%s)%skernel size: %d%sstride: %d%spad: %d\"' %\\\n                     (layer.name,\n                      separator,\n                      layer.type,\n                      separator,\n                      layer.convolution_param.kernel_size[0] if len(layer.convolution_param.kernel_size._values) else 1,\n                      separator,\n                      layer.convolution_param.stride[0] if len(layer.convolution_param.stride._values) else 1,\n                      separator,\n                      layer.convolution_param.pad[0] if len(layer.convolution_param.pad._values) else 0)\n    elif layer.type == 'Pooling':\n        pooling_types_dict = get_pooling_types_dict()\n        node_label = '\"%s%s(%s %s)%skernel size: %d%sstride: %d%spad: %d\"' %\\\n                     (layer.name,\n                      separator,\n                      pooling_types_dict[layer.pooling_param.pool],\n                      layer.type,\n                      separator,\n                      layer.pooling_param.kernel_size,\n                      separator,\n                      layer.pooling_param.stride,\n                      separator,\n                      layer.pooling_param.pad)\n    else:\n        node_label = '\"%s%s(%s)\"' % (layer.name, separator, layer.type)\n    return node_label\n\n\ndef choose_color_by_layertype(layertype):\n    \"\"\"Define colors for 
nodes based on the layer type.\n    \"\"\"\n    color = '#6495ED'  # Default\n    if layertype == 'Convolution' or layertype == 'Deconvolution':\n        color = '#FF5050'\n    elif layertype == 'Pooling':\n        color = '#FF9900'\n    elif layertype == 'InnerProduct':\n        color = '#CC33FF'\n    return color\n\n\ndef get_pydot_graph(caffe_net, rankdir, label_edges=True):\n    \"\"\"Create a data structure which represents the `caffe_net`.\n\n    Parameters\n    ----------\n    caffe_net : object\n    rankdir : {'LR', 'TB', 'BT'}\n        Direction of graph layout.\n    label_edges : boolean, optional\n        Label the edges (default is True).\n\n    Returns\n    -------\n    pydot graph object\n    \"\"\"\n    pydot_graph = pydot.Dot(caffe_net.name,\n                            graph_type='digraph',\n                            rankdir=rankdir)\n    pydot_nodes = {}\n    pydot_edges = []\n    for layer in caffe_net.layer:\n        node_label = get_layer_label(layer, rankdir)\n        node_name = \"%s_%s\" % (layer.name, layer.type)\n        if (len(layer.bottom) == 1 and len(layer.top) == 1 and\n           layer.bottom[0] == layer.top[0]):\n            # We have an in-place neuron layer.\n            pydot_nodes[node_name] = pydot.Node(node_label,\n                                                **NEURON_LAYER_STYLE)\n        else:\n            layer_style = LAYER_STYLE_DEFAULT\n            layer_style['fillcolor'] = choose_color_by_layertype(layer.type)\n            pydot_nodes[node_name] = pydot.Node(node_label, **layer_style)\n        for bottom_blob in layer.bottom:\n            pydot_nodes[bottom_blob + '_blob'] = pydot.Node('%s' % bottom_blob,\n                                                            **BLOB_STYLE)\n            edge_label = '\"\"'\n            pydot_edges.append({'src': bottom_blob + '_blob',\n                                'dst': node_name,\n                                'label': edge_label})\n        for top_blob in 
layer.top:\n            pydot_nodes[top_blob + '_blob'] = pydot.Node('%s' % (top_blob))\n            if label_edges:\n                edge_label = get_edge_label(layer)\n            else:\n                edge_label = '\"\"'\n            pydot_edges.append({'src': node_name,\n                                'dst': top_blob + '_blob',\n                                'label': edge_label})\n    # Now, add the nodes and edges to the graph.\n    for node in pydot_nodes.values():\n        pydot_graph.add_node(node)\n    for edge in pydot_edges:\n        pydot_graph.add_edge(\n            pydot.Edge(pydot_nodes[edge['src']],\n                       pydot_nodes[edge['dst']],\n                       label=edge['label']))\n    return pydot_graph\n\n\ndef draw_net(caffe_net, rankdir, ext='png'):\n    \"\"\"Draws a caffe net and returns the image string encoded using the given\n    extension.\n\n    Parameters\n    ----------\n    caffe_net : a caffe.proto.caffe_pb2.NetParameter protocol buffer.\n    ext : string, optional\n        The image extension (the default is 'png').\n\n    Returns\n    -------\n    string :\n        Postscript representation of the graph.\n    \"\"\"\n    return get_pydot_graph(caffe_net, rankdir).create(format=ext)\n\n\ndef draw_net_to_file(caffe_net, filename, rankdir='LR'):\n    \"\"\"Draws a caffe net, and saves it to file using the format given as the\n    file extension. Use '.raw' to output raw text that you can manually feed\n    to graphviz to draw graphs.\n\n    Parameters\n    ----------\n    caffe_net : a caffe.proto.caffe_pb2.NetParameter protocol buffer.\n    filename : string\n        The path to a file where the networks visualization will be stored.\n    rankdir : {'LR', 'TB', 'BT'}\n        Direction of graph layout.\n    \"\"\"\n    ext = filename[filename.rfind('.')+1:]\n    with open(filename, 'wb') as fid:\n        fid.write(draw_net(caffe_net, rankdir, ext))\n"
  },
  {
    "path": "caffe-fpn/python/caffe/io.py",
    "content": "import numpy as np\nimport skimage.io\nfrom scipy.ndimage import zoom\nfrom skimage.transform import resize\n\ntry:\n    # Python3 will most likely not be able to load protobuf\n    from caffe.proto import caffe_pb2\nexcept:\n    import sys\n    if sys.version_info >= (3, 0):\n        print(\"Failed to include caffe_pb2, things might go wrong!\")\n    else:\n        raise\n\n\n## proto / datum / ndarray conversion\ndef blobproto_to_array(blob, return_diff=False):\n    \"\"\"\n    Convert a blob proto to an array. By default, we will just return the data,\n    unless return_diff is True, in which case we will return the diff.\n    \"\"\"\n    # Read the data into an array\n    if return_diff:\n        data = np.array(blob.diff)\n    else:\n        data = np.array(blob.data)\n\n    # Reshape the array\n    if blob.HasField('num') or blob.HasField('channels') or blob.HasField('height') or blob.HasField('width'):\n        # Use legacy 4D shape\n        return data.reshape(blob.num, blob.channels, blob.height, blob.width)\n    else:\n        return data.reshape(blob.shape.dim)\n\ndef array_to_blobproto(arr, diff=None):\n    \"\"\"Converts an N-dimensional array to a blob proto. If diff is given, also\n    convert the diff. 
You need to make sure that arr and diff have the same\n    shape, and this function does not perform a sanity check.\n    \"\"\"\n    blob = caffe_pb2.BlobProto()\n    blob.shape.dim.extend(arr.shape)\n    blob.data.extend(arr.astype(float).flat)\n    if diff is not None:\n        blob.diff.extend(diff.astype(float).flat)\n    return blob\n\n\ndef arraylist_to_blobprotovecor_str(arraylist):\n    \"\"\"Converts a list of arrays to a serialized blobprotovec, which can then\n    be passed to a network for processing.\n    \"\"\"\n    vec = caffe_pb2.BlobProtoVector()\n    vec.blobs.extend([array_to_blobproto(arr) for arr in arraylist])\n    return vec.SerializeToString()\n\n\ndef blobprotovector_str_to_arraylist(str):\n    \"\"\"Converts a serialized blobprotovec to a list of arrays.\n    \"\"\"\n    vec = caffe_pb2.BlobProtoVector()\n    vec.ParseFromString(str)\n    return [blobproto_to_array(blob) for blob in vec.blobs]\n\n\ndef array_to_datum(arr, label=0):\n    \"\"\"Converts a 3-dimensional array to a datum. If the array has dtype uint8,\n    the output data will be encoded as a string. Otherwise, the output data\n    will be stored in float format.\n    \"\"\"\n    if arr.ndim != 3:\n        raise ValueError('Incorrect array shape.')\n    datum = caffe_pb2.Datum()\n    datum.channels, datum.height, datum.width = arr.shape\n    if arr.dtype == np.uint8:\n        datum.data = arr.tostring()\n    else:\n        datum.float_data.extend(arr.flat)\n    datum.label = label\n    return datum\n\n\ndef datum_to_array(datum):\n    \"\"\"Converts a datum to an array. 
Note that the label is not returned,\n    as one can easily get it by calling datum.label.\n    \"\"\"\n    if len(datum.data):\n        return np.fromstring(datum.data, dtype=np.uint8).reshape(\n            datum.channels, datum.height, datum.width)\n    else:\n        return np.array(datum.float_data).astype(float).reshape(\n            datum.channels, datum.height, datum.width)\n\n\n## Pre-processing\n\nclass Transformer:\n    \"\"\"\n    Transform input for feeding into a Net.\n\n    Note: this is mostly for illustrative purposes and it is likely better\n    to define your own input preprocessing routine for your needs.\n\n    Parameters\n    ----------\n    net : a Net for which the input should be prepared\n    \"\"\"\n    def __init__(self, inputs):\n        self.inputs = inputs\n        self.transpose = {}\n        self.channel_swap = {}\n        self.raw_scale = {}\n        self.mean = {}\n        self.input_scale = {}\n\n    def __check_input(self, in_):\n        if in_ not in self.inputs:\n            raise Exception('{} is not one of the net inputs: {}'.format(\n                in_, self.inputs))\n\n    def preprocess(self, in_, data):\n        \"\"\"\n        Format input for Caffe:\n        - convert to single\n        - resize to input dimensions (preserving number of channels)\n        - transpose dimensions to K x H x W\n        - reorder channels (for instance color to BGR)\n        - scale raw input (e.g. 
from [0, 1] to [0, 255] for ImageNet models)\n        - subtract mean\n        - scale feature\n\n        Parameters\n        ----------\n        in_ : name of input blob to preprocess for\n        data : (H' x W' x K) ndarray\n\n        Returns\n        -------\n        caffe_in : (K x H x W) ndarray for input to a Net\n        \"\"\"\n        self.__check_input(in_)\n        caffe_in = data.astype(np.float32, copy=False)\n        transpose = self.transpose.get(in_)\n        channel_swap = self.channel_swap.get(in_)\n        raw_scale = self.raw_scale.get(in_)\n        mean = self.mean.get(in_)\n        input_scale = self.input_scale.get(in_)\n        in_dims = self.inputs[in_][2:]\n        if caffe_in.shape[:2] != in_dims:\n            caffe_in = resize_image(caffe_in, in_dims)\n        if transpose is not None:\n            caffe_in = caffe_in.transpose(transpose)\n        if channel_swap is not None:\n            caffe_in = caffe_in[channel_swap, :, :]\n        if raw_scale is not None:\n            caffe_in *= raw_scale\n        if mean is not None:\n            caffe_in -= mean\n        if input_scale is not None:\n            caffe_in *= input_scale\n        return caffe_in\n\n    def deprocess(self, in_, data):\n        \"\"\"\n        Invert Caffe formatting; see preprocess().\n        \"\"\"\n        self.__check_input(in_)\n        decaf_in = data.copy().squeeze()\n        transpose = self.transpose.get(in_)\n        channel_swap = self.channel_swap.get(in_)\n        raw_scale = self.raw_scale.get(in_)\n        mean = self.mean.get(in_)\n        input_scale = self.input_scale.get(in_)\n        if input_scale is not None:\n            decaf_in /= input_scale\n        if mean is not None:\n            decaf_in += mean\n        if raw_scale is not None:\n            decaf_in /= raw_scale\n        if channel_swap is not None:\n            decaf_in = decaf_in[np.argsort(channel_swap), :, :]\n        if transpose is not None:\n            decaf_in = 
decaf_in.transpose(np.argsort(transpose))\n        return decaf_in\n\n    def set_transpose(self, in_, order):\n        \"\"\"\n        Set the order of dimensions, e.g. to convert an (H x W x K) image\n        into the (K x H x W) layout that the Net expects.\n\n        Parameters\n        ----------\n        in_ : which input to assign this dimension order\n        order : the order to transpose the dimensions,\n            e.g. (2, 0, 1) maps H x W x K to K x H x W\n        \"\"\"\n        self.__check_input(in_)\n        if len(order) != len(self.inputs[in_]) - 1:\n            raise Exception('Transpose order needs to have the same number of '\n                            'dimensions as the input.')\n        self.transpose[in_] = order\n\n    def set_channel_swap(self, in_, order):\n        \"\"\"\n        Set the input channel order for e.g. RGB to BGR conversion\n        as needed for the reference ImageNet model.\n        N.B. this assumes the channels are the first dimension AFTER transpose.\n\n        Parameters\n        ----------\n        in_ : which input to assign this channel order\n        order : the order to take the channels.\n            (2,1,0) maps RGB to BGR for example.\n        \"\"\"\n        self.__check_input(in_)\n        if len(order) != self.inputs[in_][1]:\n            raise Exception('Channel swap needs to have the same number of '\n                            'dimensions as the input channels.')\n        self.channel_swap[in_] = order\n\n    def set_raw_scale(self, in_, scale):\n        \"\"\"\n        Set the scale of raw features s.t. 
the input blob = input * scale.\n        While Python represents images in [0, 1], certain Caffe models\n        like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale\n        of these models must be 255.\n\n        Parameters\n        ----------\n        in_ : which input to assign this scale factor\n        scale : scale coefficient\n        \"\"\"\n        self.__check_input(in_)\n        self.raw_scale[in_] = scale\n\n    def set_mean(self, in_, mean):\n        \"\"\"\n        Set the mean to subtract for centering the data.\n\n        Parameters\n        ----------\n        in_ : which input to assign this mean.\n        mean : mean ndarray (input dimensional or broadcastable)\n        \"\"\"\n        self.__check_input(in_)\n        ms = mean.shape\n        if mean.ndim == 1:\n            # broadcast channels\n            if ms[0] != self.inputs[in_][1]:\n                raise ValueError('Mean channels incompatible with input.')\n            mean = mean[:, np.newaxis, np.newaxis]\n        else:\n            # elementwise mean\n            if len(ms) == 2:\n                ms = (1,) + ms\n            if len(ms) != 3:\n                raise ValueError('Mean shape invalid')\n            if ms != self.inputs[in_][1:]:\n                raise ValueError('Mean shape incompatible with input shape.')\n        self.mean[in_] = mean\n\n    def set_input_scale(self, in_, scale):\n        \"\"\"\n        Set the scale of preprocessed inputs s.t. the blob = blob * scale.\n        N.B. 
input_scale is done AFTER mean subtraction and other preprocessing\n        while raw_scale is done BEFORE.\n\n        Parameters\n        ----------\n        in_ : which input to assign this scale factor\n        scale : scale coefficient\n        \"\"\"\n        self.__check_input(in_)\n        self.input_scale[in_] = scale\n\n\n## Image IO\n\ndef load_image(filename, color=True):\n    \"\"\"\n    Load an image converting from grayscale or alpha as needed.\n\n    Parameters\n    ----------\n    filename : string\n    color : boolean\n        flag for color format. True (default) loads as RGB while False\n        loads as intensity (if image is already grayscale).\n\n    Returns\n    -------\n    image : an image with type np.float32 in range [0, 1]\n        of size (H x W x 3) in RGB or\n        of size (H x W x 1) in grayscale.\n    \"\"\"\n    img = skimage.img_as_float(skimage.io.imread(filename, as_grey=not color)).astype(np.float32)\n    if img.ndim == 2:\n        img = img[:, :, np.newaxis]\n        if color:\n            img = np.tile(img, (1, 1, 3))\n    elif img.shape[2] == 4:\n        img = img[:, :, :3]\n    return img\n\n\ndef resize_image(im, new_dims, interp_order=1):\n    \"\"\"\n    Resize an image array with interpolation.\n\n    Parameters\n    ----------\n    im : (H x W x K) ndarray\n    new_dims : (height, width) tuple of new dimensions.\n    interp_order : interpolation order, default is linear.\n\n    Returns\n    -------\n    im : resized ndarray with shape (new_dims[0], new_dims[1], K)\n    \"\"\"\n    if im.shape[-1] == 1 or im.shape[-1] == 3:\n        im_min, im_max = im.min(), im.max()\n        if im_max > im_min:\n            # skimage is fast but only understands {1,3} channel images\n            # in [0, 1].\n            im_std = (im - im_min) / (im_max - im_min)\n            resized_std = resize(im_std, new_dims, order=interp_order)\n            resized_im = resized_std * (im_max - im_min) + im_min\n        else:\n            # the 
image is a constant -- avoid divide by 0\n            ret = np.empty((new_dims[0], new_dims[1], im.shape[-1]),\n                           dtype=np.float32)\n            ret.fill(im_min)\n            return ret\n    else:\n        # ndimage interpolates anything but more slowly.\n        scale = tuple(np.array(new_dims, dtype=float) / np.array(im.shape[:2]))\n        resized_im = zoom(im, scale + (1,), order=interp_order)\n    return resized_im.astype(np.float32)\n\n\ndef oversample(images, crop_dims):\n    \"\"\"\n    Crop images into the four corners, center, and their mirrored versions.\n\n    Parameters\n    ----------\n    image : iterable of (H x W x K) ndarrays\n    crop_dims : (height, width) tuple for the crops.\n\n    Returns\n    -------\n    crops : (10*N x H x W x K) ndarray of crops for number of inputs N.\n    \"\"\"\n    # Dimensions and center.\n    im_shape = np.array(images[0].shape)\n    crop_dims = np.array(crop_dims)\n    im_center = im_shape[:2] / 2.0\n\n    # Make crop coordinates\n    h_indices = (0, im_shape[0] - crop_dims[0])\n    w_indices = (0, im_shape[1] - crop_dims[1])\n    crops_ix = np.empty((5, 4), dtype=int)\n    curr = 0\n    for i in h_indices:\n        for j in w_indices:\n            crops_ix[curr] = (i, j, i + crop_dims[0], j + crop_dims[1])\n            curr += 1\n    crops_ix[4] = np.tile(im_center, (1, 2)) + np.concatenate([\n        -crop_dims / 2.0,\n         crop_dims / 2.0\n    ])\n    crops_ix = np.tile(crops_ix, (2, 1))\n\n    # Extract crops\n    crops = np.empty((10 * len(images), crop_dims[0], crop_dims[1],\n                      im_shape[-1]), dtype=np.float32)\n    ix = 0\n    for im in images:\n        for crop in crops_ix:\n            crops[ix] = im[crop[0]:crop[2], crop[1]:crop[3], :]\n            ix += 1\n        crops[ix-5:ix] = crops[ix-5:ix, :, ::-1, :]  # flip for mirrors\n    return crops\n"
  },
  {
    "path": "caffe-fpn/python/caffe/net_spec.py",
    "content": "\"\"\"Python net specification.\n\nThis module provides a way to write nets directly in Python, using a natural,\nfunctional style. See examples/pycaffe/caffenet.py for an example.\n\nCurrently this works as a thin wrapper around the Python protobuf interface,\nwith layers and parameters automatically generated for the \"layers\" and\n\"params\" pseudo-modules, which are actually objects using __getattr__ magic\nto generate protobuf messages.\n\nNote that when using to_proto or Top.to_proto, names of intermediate blobs will\nbe automatically generated. To explicitly specify blob names, use the NetSpec\nclass -- assign to its attributes directly to name layers, and call\nNetSpec.to_proto to serialize all assigned layers.\n\nThis interface is expected to continue to evolve as Caffe gains new capabilities\nfor specifying nets. In particular, the automatically generated layer names\nare not guaranteed to be forward-compatible.\n\"\"\"\n\nfrom collections import OrderedDict, Counter\n\nfrom .proto import caffe_pb2\nfrom google import protobuf\nimport six\n\n\ndef param_name_dict():\n    \"\"\"Find out the correspondence between layer names and parameter names.\"\"\"\n\n    layer = caffe_pb2.LayerParameter()\n    # get all parameter names (typically underscore case) and corresponding\n    # type names (typically camel case), which contain the layer names\n    # (note that not all parameters correspond to layers, but we'll ignore that)\n    param_names = [s for s in dir(layer) if s.endswith('_param')]\n    param_type_names = [type(getattr(layer, s)).__name__ for s in param_names]\n    # strip the final '_param' or 'Parameter'\n    param_names = [s[:-len('_param')] for s in param_names]\n    param_type_names = [s[:-len('Parameter')] for s in param_type_names]\n    return dict(zip(param_type_names, param_names))\n\n\ndef to_proto(*tops):\n    \"\"\"Generate a NetParameter that contains all layers needed to compute\n    all arguments.\"\"\"\n\n    layers = 
OrderedDict()\n    autonames = Counter()\n    for top in tops:\n        top.fn._to_proto(layers, {}, autonames)\n    net = caffe_pb2.NetParameter()\n    net.layer.extend(layers.values())\n    return net\n\n\ndef assign_proto(proto, name, val):\n    \"\"\"Assign a Python object to a protobuf message, based on the Python\n    type (in recursive fashion). Lists become repeated fields/messages, dicts\n    become messages, and other types are assigned directly. For convenience,\n    repeated fields whose values are not lists are converted to single-element\n    lists; e.g., `my_repeated_int_field=3` is converted to\n    `my_repeated_int_field=[3]`.\"\"\"\n\n    is_repeated_field = hasattr(getattr(proto, name), 'extend')\n    if is_repeated_field and not isinstance(val, list):\n        val = [val]\n    if isinstance(val, list):\n        if isinstance(val[0], dict):\n            for item in val:\n                proto_item = getattr(proto, name).add()\n                for k, v in six.iteritems(item):\n                    assign_proto(proto_item, k, v)\n        else:\n            getattr(proto, name).extend(val)\n    elif isinstance(val, dict):\n        for k, v in six.iteritems(val):\n            assign_proto(getattr(proto, name), k, v)\n    else:\n        setattr(proto, name, val)\n\n\nclass Top(object):\n    \"\"\"A Top specifies a single output blob (which could be one of several\n    produced by a layer.)\"\"\"\n\n    def __init__(self, fn, n):\n        self.fn = fn\n        self.n = n\n\n    def to_proto(self):\n        \"\"\"Generate a NetParameter that contains all layers needed to compute\n        this top.\"\"\"\n\n        return to_proto(self)\n\n    def _to_proto(self, layers, names, autonames):\n        return self.fn._to_proto(layers, names, autonames)\n\n\nclass Function(object):\n    \"\"\"A Function specifies a layer, its parameters, and its inputs (which\n    are Tops from other layers).\"\"\"\n\n    def __init__(self, type_name, inputs, params):\n        
self.type_name = type_name\n        self.inputs = inputs\n        self.params = params\n        self.ntop = self.params.get('ntop', 1)\n        # use del to make sure kwargs are not double-processed as layer params\n        if 'ntop' in self.params:\n            del self.params['ntop']\n        self.in_place = self.params.get('in_place', False)\n        if 'in_place' in self.params:\n            del self.params['in_place']\n        self.tops = tuple(Top(self, n) for n in range(self.ntop))\n\n    def _get_name(self, names, autonames):\n        if self not in names and self.ntop > 0:\n            names[self] = self._get_top_name(self.tops[0], names, autonames)\n        elif self not in names:\n            autonames[self.type_name] += 1\n            names[self] = self.type_name + str(autonames[self.type_name])\n        return names[self]\n\n    def _get_top_name(self, top, names, autonames):\n        if top not in names:\n            autonames[top.fn.type_name] += 1\n            names[top] = top.fn.type_name + str(autonames[top.fn.type_name])\n        return names[top]\n\n    def _to_proto(self, layers, names, autonames):\n        if self in layers:\n            return\n        bottom_names = []\n        for inp in self.inputs:\n            inp._to_proto(layers, names, autonames)\n            bottom_names.append(layers[inp.fn].top[inp.n])\n        layer = caffe_pb2.LayerParameter()\n        layer.type = self.type_name\n        layer.bottom.extend(bottom_names)\n\n        if self.in_place:\n            layer.top.extend(layer.bottom)\n        else:\n            for top in self.tops:\n                layer.top.append(self._get_top_name(top, names, autonames))\n        layer.name = self._get_name(names, autonames)\n\n        for k, v in six.iteritems(self.params):\n            # special case to handle generic *params\n            if k.endswith('param'):\n                assign_proto(layer, k, v)\n            else:\n                try:\n                    
assign_proto(getattr(layer,\n                        _param_names[self.type_name] + '_param'), k, v)\n                except (AttributeError, KeyError):\n                    assign_proto(layer, k, v)\n\n        layers[self] = layer\n\n\nclass NetSpec(object):\n    \"\"\"A NetSpec contains a set of Tops (assigned directly as attributes).\n    Calling NetSpec.to_proto generates a NetParameter containing all of the\n    layers needed to produce all of the assigned Tops, using the assigned\n    names.\"\"\"\n\n    def __init__(self):\n        super(NetSpec, self).__setattr__('tops', OrderedDict())\n\n    def __setattr__(self, name, value):\n        self.tops[name] = value\n\n    def __getattr__(self, name):\n        return self.tops[name]\n\n    def to_proto(self):\n        names = {v: k for k, v in six.iteritems(self.tops)}\n        autonames = Counter()\n        layers = OrderedDict()\n        for name, top in six.iteritems(self.tops):\n            top._to_proto(layers, names, autonames)\n        net = caffe_pb2.NetParameter()\n        net.layer.extend(layers.values())\n        return net\n\n\nclass Layers(object):\n    \"\"\"A Layers object is a pseudo-module which generates functions that specify\n    layers; e.g., Layers().Convolution(bottom, kernel_size=3) will produce a Top\n    specifying a 3x3 convolution applied to bottom.\"\"\"\n\n    def __getattr__(self, name):\n        def layer_fn(*args, **kwargs):\n            fn = Function(name, args, kwargs)\n            if fn.ntop == 0:\n                return fn\n            elif fn.ntop == 1:\n                return fn.tops[0]\n            else:\n                return fn.tops\n        return layer_fn\n\n\nclass Parameters(object):\n    \"\"\"A Parameters object is a pseudo-module which generates constants used\n    in layer parameters; e.g., Parameters().Pooling.MAX is the value used\n    to specify max pooling.\"\"\"\n\n    def __getattr__(self, name):\n       class Param:\n            def __getattr__(self, 
param_name):\n                return getattr(getattr(caffe_pb2, name + 'Parameter'), param_name)\n       return Param()\n\n\n_param_names = param_name_dict()\nlayers = Layers()\nparams = Parameters()\n"
  },
  {
    "path": "caffe-fpn/python/caffe/proto/__init__.py",
    "content": ""
  },
  {
    "path": "caffe-fpn/python/caffe/proto/caffe_pb2.py",
    "content": "# Generated by the protocol buffer compiler.  DO NOT EDIT!\n# source: caffe.proto\n\nimport sys\n_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))\nfrom google.protobuf.internal import enum_type_wrapper\nfrom google.protobuf import descriptor as _descriptor\nfrom google.protobuf import message as _message\nfrom google.protobuf import reflection as _reflection\nfrom google.protobuf import symbol_database as _symbol_database\nfrom google.protobuf import descriptor_pb2\n# @@protoc_insertion_point(imports)\n\n_sym_db = _symbol_database.Default()\n\n\n\n\nDESCRIPTOR = _descriptor.FileDescriptor(\n  name='caffe.proto',\n  package='caffe',\n  serialized_pb=_b('\\n\\x0b\\x63\\x61\\x66\\x66\\x65.proto\\x12\\x05\\x63\\x61\\x66\\x66\\x65\\\"\\x1c\\n\\tBlobShape\\x12\\x0f\\n\\x03\\x64im\\x18\\x01 \\x03(\\x03\\x42\\x02\\x10\\x01\\\"\\xcc\\x01\\n\\tBlobProto\\x12\\x1f\\n\\x05shape\\x18\\x07 \\x01(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x10\\n\\x04\\x64\\x61ta\\x18\\x05 \\x03(\\x02\\x42\\x02\\x10\\x01\\x12\\x10\\n\\x04\\x64iff\\x18\\x06 \\x03(\\x02\\x42\\x02\\x10\\x01\\x12\\x17\\n\\x0b\\x64ouble_data\\x18\\x08 \\x03(\\x01\\x42\\x02\\x10\\x01\\x12\\x17\\n\\x0b\\x64ouble_diff\\x18\\t \\x03(\\x01\\x42\\x02\\x10\\x01\\x12\\x0e\\n\\x03num\\x18\\x01 \\x01(\\x05:\\x01\\x30\\x12\\x13\\n\\x08\\x63hannels\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x11\\n\\x06height\\x18\\x03 \\x01(\\x05:\\x01\\x30\\x12\\x10\\n\\x05width\\x18\\x04 \\x01(\\x05:\\x01\\x30\\\"2\\n\\x0f\\x42lobProtoVector\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x01 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\\"\\x81\\x01\\n\\x05\\x44\\x61tum\\x12\\x10\\n\\x08\\x63hannels\\x18\\x01 \\x01(\\x05\\x12\\x0e\\n\\x06height\\x18\\x02 \\x01(\\x05\\x12\\r\\n\\x05width\\x18\\x03 \\x01(\\x05\\x12\\x0c\\n\\x04\\x64\\x61ta\\x18\\x04 \\x01(\\x0c\\x12\\r\\n\\x05label\\x18\\x05 \\x01(\\x05\\x12\\x12\\n\\nfloat_data\\x18\\x06 \\x03(\\x02\\x12\\x16\\n\\x07\\x65ncoded\\x18\\x07 
\\x01(\\x08:\\x05\\x66\\x61lse\\\"\\x8a\\x02\\n\\x0f\\x46illerParameter\\x12\\x16\\n\\x04type\\x18\\x01 \\x01(\\t:\\x08\\x63onstant\\x12\\x10\\n\\x05value\\x18\\x02 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03min\\x18\\x03 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03max\\x18\\x04 \\x01(\\x02:\\x01\\x31\\x12\\x0f\\n\\x04mean\\x18\\x05 \\x01(\\x02:\\x01\\x30\\x12\\x0e\\n\\x03std\\x18\\x06 \\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x06sparse\\x18\\x07 \\x01(\\x05:\\x02-1\\x12\\x42\\n\\rvariance_norm\\x18\\x08 \\x01(\\x0e\\x32#.caffe.FillerParameter.VarianceNorm:\\x06\\x46\\x41N_IN\\\"4\\n\\x0cVarianceNorm\\x12\\n\\n\\x06\\x46\\x41N_IN\\x10\\x00\\x12\\x0b\\n\\x07\\x46\\x41N_OUT\\x10\\x01\\x12\\x0b\\n\\x07\\x41VERAGE\\x10\\x02\\\"\\x8e\\x02\\n\\x0cNetParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\r\\n\\x05input\\x18\\x03 \\x03(\\t\\x12%\\n\\x0binput_shape\\x18\\x08 \\x03(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x11\\n\\tinput_dim\\x18\\x04 \\x03(\\x05\\x12\\x1d\\n\\x0e\\x66orce_backward\\x18\\x05 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x1e\\n\\x05state\\x18\\x06 \\x01(\\x0b\\x32\\x0f.caffe.NetState\\x12\\x19\\n\\ndebug_info\\x18\\x07 \\x01(\\x08:\\x05\\x66\\x61lse\\x12$\\n\\x05layer\\x18\\x64 \\x03(\\x0b\\x32\\x15.caffe.LayerParameter\\x12\\'\\n\\x06layers\\x18\\x02 \\x03(\\x0b\\x32\\x17.caffe.V1LayerParameter\\\"\\x9c\\n\\n\\x0fSolverParameter\\x12\\x0b\\n\\x03net\\x18\\x18 \\x01(\\t\\x12&\\n\\tnet_param\\x18\\x19 \\x01(\\x0b\\x32\\x13.caffe.NetParameter\\x12\\x11\\n\\ttrain_net\\x18\\x01 \\x01(\\t\\x12\\x10\\n\\x08test_net\\x18\\x02 \\x03(\\t\\x12,\\n\\x0ftrain_net_param\\x18\\x15 \\x01(\\x0b\\x32\\x13.caffe.NetParameter\\x12+\\n\\x0etest_net_param\\x18\\x16 \\x03(\\x0b\\x32\\x13.caffe.NetParameter\\x12$\\n\\x0btrain_state\\x18\\x1a \\x01(\\x0b\\x32\\x0f.caffe.NetState\\x12#\\n\\ntest_state\\x18\\x1b \\x03(\\x0b\\x32\\x0f.caffe.NetState\\x12\\x11\\n\\ttest_iter\\x18\\x03 \\x03(\\x05\\x12\\x18\\n\\rtest_interval\\x18\\x04 \\x01(\\x05:\\x01\\x30\\x12 
\\n\\x11test_compute_loss\\x18\\x13 \\x01(\\x08:\\x05\\x66\\x61lse\\x12!\\n\\x13test_initialization\\x18  \\x01(\\x08:\\x04true\\x12\\x0f\\n\\x07\\x62\\x61se_lr\\x18\\x05 \\x01(\\x02\\x12\\x0f\\n\\x07\\x64isplay\\x18\\x06 \\x01(\\x05\\x12\\x17\\n\\x0c\\x61verage_loss\\x18! \\x01(\\x05:\\x01\\x31\\x12\\x10\\n\\x08max_iter\\x18\\x07 \\x01(\\x05\\x12\\x14\\n\\titer_size\\x18$ \\x01(\\x05:\\x01\\x31\\x12\\x11\\n\\tlr_policy\\x18\\x08 \\x01(\\t\\x12\\r\\n\\x05gamma\\x18\\t \\x01(\\x02\\x12\\r\\n\\x05power\\x18\\n \\x01(\\x02\\x12\\x10\\n\\x08momentum\\x18\\x0b \\x01(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x0c \\x01(\\x02\\x12\\x1f\\n\\x13regularization_type\\x18\\x1d \\x01(\\t:\\x02L2\\x12\\x10\\n\\x08stepsize\\x18\\r \\x01(\\x05\\x12\\x11\\n\\tstepvalue\\x18\\\" \\x03(\\x05\\x12\\x1a\\n\\x0e\\x63lip_gradients\\x18# \\x01(\\x02:\\x02-1\\x12\\x13\\n\\x08snapshot\\x18\\x0e \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0fsnapshot_prefix\\x18\\x0f \\x01(\\t\\x12\\x1c\\n\\rsnapshot_diff\\x18\\x10 \\x01(\\x08:\\x05\\x66\\x61lse\\x12K\\n\\x0fsnapshot_format\\x18% \\x01(\\x0e\\x32%.caffe.SolverParameter.SnapshotFormat:\\x0b\\x42INARYPROTO\\x12;\\n\\x0bsolver_mode\\x18\\x11 \\x01(\\x0e\\x32!.caffe.SolverParameter.SolverMode:\\x03GPU\\x12\\x14\\n\\tdevice_id\\x18\\x12 \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0brandom_seed\\x18\\x14 \\x01(\\x03:\\x02-1\\x12\\x11\\n\\x04type\\x18( \\x01(\\t:\\x03SGD\\x12\\x14\\n\\x05\\x64\\x65lta\\x18\\x1f \\x01(\\x02:\\x05\\x31\\x65-08\\x12\\x18\\n\\tmomentum2\\x18\\' \\x01(\\x02:\\x05\\x30.999\\x12\\x11\\n\\trms_decay\\x18& \\x01(\\x02\\x12\\x19\\n\\ndebug_info\\x18\\x17 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\\"\\n\\x14snapshot_after_train\\x18\\x1c \\x01(\\x08:\\x04true\\x12;\\n\\x0bsolver_type\\x18\\x1e 
\\x01(\\x0e\\x32!.caffe.SolverParameter.SolverType:\\x03SGD\\\"+\\n\\x0eSnapshotFormat\\x12\\x08\\n\\x04HDF5\\x10\\x00\\x12\\x0f\\n\\x0b\\x42INARYPROTO\\x10\\x01\\\"\\x1e\\n\\nSolverMode\\x12\\x07\\n\\x03\\x43PU\\x10\\x00\\x12\\x07\\n\\x03GPU\\x10\\x01\\\"U\\n\\nSolverType\\x12\\x07\\n\\x03SGD\\x10\\x00\\x12\\x0c\\n\\x08NESTEROV\\x10\\x01\\x12\\x0b\\n\\x07\\x41\\x44\\x41GRAD\\x10\\x02\\x12\\x0b\\n\\x07RMSPROP\\x10\\x03\\x12\\x0c\\n\\x08\\x41\\x44\\x41\\x44\\x45LTA\\x10\\x04\\x12\\x08\\n\\x04\\x41\\x44\\x41M\\x10\\x05\\\"l\\n\\x0bSolverState\\x12\\x0c\\n\\x04iter\\x18\\x01 \\x01(\\x05\\x12\\x13\\n\\x0blearned_net\\x18\\x02 \\x01(\\t\\x12!\\n\\x07history\\x18\\x03 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x17\\n\\x0c\\x63urrent_step\\x18\\x04 \\x01(\\x05:\\x01\\x30\\\"N\\n\\x08NetState\\x12!\\n\\x05phase\\x18\\x01 \\x01(\\x0e\\x32\\x0c.caffe.Phase:\\x04TEST\\x12\\x10\\n\\x05level\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\r\\n\\x05stage\\x18\\x03 \\x03(\\t\\\"s\\n\\x0cNetStateRule\\x12\\x1b\\n\\x05phase\\x18\\x01 \\x01(\\x0e\\x32\\x0c.caffe.Phase\\x12\\x11\\n\\tmin_level\\x18\\x02 \\x01(\\x05\\x12\\x11\\n\\tmax_level\\x18\\x03 \\x01(\\x05\\x12\\r\\n\\x05stage\\x18\\x04 \\x03(\\t\\x12\\x11\\n\\tnot_stage\\x18\\x05 \\x03(\\t\\\"\\xa3\\x01\\n\\tParamSpec\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x31\\n\\nshare_mode\\x18\\x02 \\x01(\\x0e\\x32\\x1d.caffe.ParamSpec.DimCheckMode\\x12\\x12\\n\\x07lr_mult\\x18\\x03 \\x01(\\x02:\\x01\\x31\\x12\\x15\\n\\ndecay_mult\\x18\\x04 \\x01(\\x02:\\x01\\x31\\\"*\\n\\x0c\\x44imCheckMode\\x12\\n\\n\\x06STRICT\\x10\\x00\\x12\\x0e\\n\\nPERMISSIVE\\x10\\x01\\\"\\xe1\\x14\\n\\x0eLayerParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x0c\\n\\x04type\\x18\\x02 \\x01(\\t\\x12\\x0e\\n\\x06\\x62ottom\\x18\\x03 \\x03(\\t\\x12\\x0b\\n\\x03top\\x18\\x04 \\x03(\\t\\x12\\x1b\\n\\x05phase\\x18\\n \\x01(\\x0e\\x32\\x0c.caffe.Phase\\x12\\x13\\n\\x0bloss_weight\\x18\\x05 \\x03(\\x02\\x12\\x1f\\n\\x05param\\x18\\x06 
\\x03(\\x0b\\x32\\x10.caffe.ParamSpec\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x07 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x16\\n\\x0epropagate_down\\x18\\x0b \\x03(\\x08\\x12$\\n\\x07include\\x18\\x08 \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12$\\n\\x07\\x65xclude\\x18\\t \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12\\x37\\n\\x0ftransform_param\\x18\\x64 \\x01(\\x0b\\x32\\x1e.caffe.TransformationParameter\\x12(\\n\\nloss_param\\x18\\x65 \\x01(\\x0b\\x32\\x14.caffe.LossParameter\\x12\\x30\\n\\x0e\\x61\\x63\\x63uracy_param\\x18\\x66 \\x01(\\x0b\\x32\\x18.caffe.AccuracyParameter\\x12,\\n\\x0c\\x61rgmax_param\\x18g \\x01(\\x0b\\x32\\x16.caffe.ArgMaxParameter\\x12\\x34\\n\\x10\\x62\\x61tch_norm_param\\x18\\x8b\\x01 \\x01(\\x0b\\x32\\x19.caffe.BatchNormParameter\\x12)\\n\\nbias_param\\x18\\x8d\\x01 \\x01(\\x0b\\x32\\x14.caffe.BiasParameter\\x12,\\n\\x0c\\x63oncat_param\\x18h \\x01(\\x0b\\x32\\x16.caffe.ConcatParameter\\x12-\\n\\x0c\\x63onadd_param\\x18\\xc8\\x01 \\x01(\\x0b\\x32\\x16.caffe.ConaddParameter\\x12?\\n\\x16\\x63ontrastive_loss_param\\x18i \\x01(\\x0b\\x32\\x1f.caffe.ContrastiveLossParameter\\x12\\x36\\n\\x11\\x63onvolution_param\\x18j \\x01(\\x0b\\x32\\x1b.caffe.ConvolutionParameter\\x12L\\n\\x1c\\x64\\x65\\x66ormable_convolution_param\\x18\\xe7\\x07 \\x01(\\x0b\\x32%.caffe.DeformableConvolutionParameter\\x12(\\n\\ndata_param\\x18k \\x01(\\x0b\\x32\\x14.caffe.DataParameter\\x12.\\n\\rdropout_param\\x18l \\x01(\\x0b\\x32\\x17.caffe.DropoutParameter\\x12\\x33\\n\\x10\\x64ummy_data_param\\x18m \\x01(\\x0b\\x32\\x19.caffe.DummyDataParameter\\x12.\\n\\reltwise_param\\x18n \\x01(\\x0b\\x32\\x17.caffe.EltwiseParameter\\x12\\'\\n\\telu_param\\x18\\x8c\\x01 \\x01(\\x0b\\x32\\x13.caffe.ELUParameter\\x12+\\n\\x0b\\x65mbed_param\\x18\\x89\\x01 \\x01(\\x0b\\x32\\x15.caffe.EmbedParameter\\x12&\\n\\texp_param\\x18o \\x01(\\x0b\\x32\\x13.caffe.ExpParameter\\x12/\\n\\rflatten_param\\x18\\x87\\x01 
\\x01(\\x0b\\x32\\x17.caffe.FlattenParameter\\x12\\x31\\n\\x0fhdf5_data_param\\x18p \\x01(\\x0b\\x32\\x18.caffe.HDF5DataParameter\\x12\\x35\\n\\x11hdf5_output_param\\x18q \\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\x12\\x33\\n\\x10hinge_loss_param\\x18r \\x01(\\x0b\\x32\\x19.caffe.HingeLossParameter\\x12\\x33\\n\\x10image_data_param\\x18s \\x01(\\x0b\\x32\\x19.caffe.ImageDataParameter\\x12\\x39\\n\\x13infogain_loss_param\\x18t \\x01(\\x0b\\x32\\x1c.caffe.InfogainLossParameter\\x12\\x39\\n\\x13inner_product_param\\x18u \\x01(\\x0b\\x32\\x1c.caffe.InnerProductParameter\\x12\\'\\n\\tlog_param\\x18\\x86\\x01 \\x01(\\x0b\\x32\\x13.caffe.LogParameter\\x12&\\n\\tlrn_param\\x18v \\x01(\\x0b\\x32\\x13.caffe.LRNParameter\\x12\\x35\\n\\x11memory_data_param\\x18w \\x01(\\x0b\\x32\\x1a.caffe.MemoryDataParameter\\x12&\\n\\tmvn_param\\x18x \\x01(\\x0b\\x32\\x13.caffe.MVNParameter\\x12.\\n\\rpooling_param\\x18y \\x01(\\x0b\\x32\\x17.caffe.PoolingParameter\\x12*\\n\\x0bpower_param\\x18z \\x01(\\x0b\\x32\\x15.caffe.PowerParameter\\x12+\\n\\x0bprelu_param\\x18\\x83\\x01 \\x01(\\x0b\\x32\\x15.caffe.PReLUParameter\\x12-\\n\\x0cpython_param\\x18\\x82\\x01 \\x01(\\x0b\\x32\\x16.caffe.PythonParameter\\x12\\x33\\n\\x0freduction_param\\x18\\x88\\x01 \\x01(\\x0b\\x32\\x19.caffe.ReductionParameter\\x12(\\n\\nrelu_param\\x18{ \\x01(\\x0b\\x32\\x14.caffe.ReLUParameter\\x12/\\n\\rreshape_param\\x18\\x85\\x01 \\x01(\\x0b\\x32\\x17.caffe.ReshapeParameter\\x12\\x38\\n\\x11roi_pooling_param\\x18\\xd7\\xc7\\xf8\\x03 \\x01(\\x0b\\x32\\x1a.caffe.ROIPoolingParameter\\x12+\\n\\x0bscale_param\\x18\\x8e\\x01 \\x01(\\x0b\\x32\\x15.caffe.ScaleParameter\\x12.\\n\\rsigmoid_param\\x18| \\x01(\\x0b\\x32\\x17.caffe.SigmoidParameter\\x12=\\n\\x14smooth_l1_loss_param\\x18\\xd8\\xc7\\xf8\\x03 \\x01(\\x0b\\x32\\x1c.caffe.SmoothL1LossParameter\\x12.\\n\\rsoftmax_param\\x18} \\x01(\\x0b\\x32\\x17.caffe.SoftmaxParameter\\x12\\'\\n\\tspp_param\\x18\\x84\\x01 
\\x01(\\x0b\\x32\\x13.caffe.SPPParameter\\x12*\\n\\x0bslice_param\\x18~ \\x01(\\x0b\\x32\\x15.caffe.SliceParameter\\x12(\\n\\ntanh_param\\x18\\x7f \\x01(\\x0b\\x32\\x14.caffe.TanHParameter\\x12\\x33\\n\\x0fthreshold_param\\x18\\x80\\x01 \\x01(\\x0b\\x32\\x19.caffe.ThresholdParameter\\x12)\\n\\ntile_param\\x18\\x8a\\x01 \\x01(\\x0b\\x32\\x14.caffe.TileParameter\\x12\\x36\\n\\x11window_data_param\\x18\\x81\\x01 \\x01(\\x0b\\x32\\x1a.caffe.WindowDataParameter\\x12)\\n\\ncrop_param\\x18\\x90\\x01 \\x01(\\x0b\\x32\\x14.caffe.CropParameter\\\"\\xb6\\x01\\n\\x17TransformationParameter\\x12\\x10\\n\\x05scale\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x15\\n\\x06mirror\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x14\\n\\tcrop_size\\x18\\x03 \\x01(\\r:\\x01\\x30\\x12\\x11\\n\\tmean_file\\x18\\x04 \\x01(\\t\\x12\\x12\\n\\nmean_value\\x18\\x05 \\x03(\\x02\\x12\\x1a\\n\\x0b\\x66orce_color\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x19\\n\\nforce_gray\\x18\\x07 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xc2\\x01\\n\\rLossParameter\\x12\\x14\\n\\x0cignore_label\\x18\\x01 \\x01(\\x05\\x12\\x44\\n\\rnormalization\\x18\\x03 \\x01(\\x0e\\x32&.caffe.LossParameter.NormalizationMode:\\x05VALID\\x12\\x11\\n\\tnormalize\\x18\\x02 \\x01(\\x08\\\"B\\n\\x11NormalizationMode\\x12\\x08\\n\\x04\\x46ULL\\x10\\x00\\x12\\t\\n\\x05VALID\\x10\\x01\\x12\\x0e\\n\\nBATCH_SIZE\\x10\\x02\\x12\\x08\\n\\x04NONE\\x10\\x03\\\"L\\n\\x11\\x41\\x63\\x63uracyParameter\\x12\\x10\\n\\x05top_k\\x18\\x01 \\x01(\\r:\\x01\\x31\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12\\x14\\n\\x0cignore_label\\x18\\x03 \\x01(\\x05\\\"M\\n\\x0f\\x41rgMaxParameter\\x12\\x1a\\n\\x0bout_max_val\\x18\\x01 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x10\\n\\x05top_k\\x18\\x02 \\x01(\\r:\\x01\\x31\\x12\\x0c\\n\\x04\\x61xis\\x18\\x03 \\x01(\\x05\\\"9\\n\\x0f\\x43oncatParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12\\x15\\n\\nconcat_dim\\x18\\x01 
\\x01(\\r:\\x01\\x31\\\"\\x11\\n\\x0f\\x43onaddParameter\\\"j\\n\\x12\\x42\\x61tchNormParameter\\x12\\x18\\n\\x10use_global_stats\\x18\\x01 \\x01(\\x08\\x12&\\n\\x17moving_average_fraction\\x18\\x02 \\x01(\\x02:\\x05\\x30.999\\x12\\x12\\n\\x03\\x65ps\\x18\\x03 \\x01(\\x02:\\x05\\x31\\x65-05\\\"]\\n\\rBiasParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x08num_axes\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12&\\n\\x06\\x66iller\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"L\\n\\x18\\x43ontrastiveLossParameter\\x12\\x11\\n\\x06margin\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x1d\\n\\x0elegacy_version\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xfc\\x03\\n\\x14\\x43onvolutionParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x02 \\x01(\\x08:\\x04true\\x12\\x0b\\n\\x03pad\\x18\\x03 \\x03(\\r\\x12\\x13\\n\\x0bkernel_size\\x18\\x04 \\x03(\\r\\x12\\x0e\\n\\x06stride\\x18\\x06 \\x03(\\r\\x12\\x10\\n\\x08\\x64ilation\\x18\\x12 \\x03(\\r\\x12\\x10\\n\\x05pad_h\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_w\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x08kernel_h\\x18\\x0b \\x01(\\r\\x12\\x10\\n\\x08kernel_w\\x18\\x0c \\x01(\\r\\x12\\x10\\n\\x08stride_h\\x18\\r \\x01(\\r\\x12\\x10\\n\\x08stride_w\\x18\\x0e \\x01(\\r\\x12\\x10\\n\\x05group\\x18\\x05 \\x01(\\r:\\x01\\x31\\x12-\\n\\rweight_filler\\x18\\x07 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x08 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12;\\n\\x06\\x65ngine\\x18\\x0f \\x01(\\x0e\\x32\\\".caffe.ConvolutionParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x0f\\n\\x04\\x61xis\\x18\\x10 \\x01(\\x05:\\x01\\x31\\x12\\x1e\\n\\x0f\\x66orce_nd_im2col\\x18\\x11 
\\x01(\\x08:\\x05\\x66\\x61lse\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"\\xad\\x04\\n\\x1e\\x44\\x65\\x66ormableConvolutionParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x02 \\x01(\\x08:\\x04true\\x12\\x0b\\n\\x03pad\\x18\\x03 \\x03(\\r\\x12\\x13\\n\\x0bkernel_size\\x18\\x04 \\x03(\\r\\x12\\x0e\\n\\x06stride\\x18\\x06 \\x03(\\r\\x12\\x10\\n\\x08\\x64ilation\\x18\\x12 \\x03(\\r\\x12\\x10\\n\\x05pad_h\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_w\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x08kernel_h\\x18\\x0b \\x01(\\r\\x12\\x10\\n\\x08kernel_w\\x18\\x0c \\x01(\\r\\x12\\x10\\n\\x08stride_h\\x18\\r \\x01(\\r\\x12\\x10\\n\\x08stride_w\\x18\\x0e \\x01(\\r\\x12\\x10\\n\\x05group\\x18\\x05 \\x01(\\r:\\x01\\x31\\x12\\x1b\\n\\x10\\x64\\x65\\x66ormable_group\\x18\\x13 \\x01(\\r:\\x01\\x34\\x12-\\n\\rweight_filler\\x18\\x07 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x08 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x45\\n\\x06\\x65ngine\\x18\\x0f \\x01(\\x0e\\x32,.caffe.DeformableConvolutionParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x0f\\n\\x04\\x61xis\\x18\\x10 \\x01(\\x05:\\x01\\x31\\x12\\x1e\\n\\x0f\\x66orce_nd_im2col\\x18\\x11 \\x01(\\x08:\\x05\\x66\\x61lse\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"\\xa4\\x02\\n\\rDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x04 \\x01(\\r\\x12\\x14\\n\\trand_skip\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x31\\n\\x07\\x62\\x61\\x63kend\\x18\\x08 \\x01(\\x0e\\x32\\x17.caffe.DataParameter.DB:\\x07LEVELDB\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x14\\n\\tcrop_size\\x18\\x05 
\\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\\"\\n\\x13\\x66orce_encoded_color\\x18\\t \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x13\\n\\x08prefetch\\x18\\n \\x01(\\r:\\x01\\x34\\\"\\x1b\\n\\x02\\x44\\x42\\x12\\x0b\\n\\x07LEVELDB\\x10\\x00\\x12\\x08\\n\\x04LMDB\\x10\\x01\\\"0\\n\\rCropParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x32\\x12\\x0e\\n\\x06offset\\x18\\x02 \\x03(\\r\\\"I\\n\\x10\\x44ropoutParameter\\x12\\x1a\\n\\rdropout_ratio\\x18\\x01 \\x01(\\x02:\\x03\\x30.5\\x12\\x19\\n\\x0bscale_train\\x18\\x02 \\x01(\\x08:\\x04true\\\"\\xa0\\x01\\n\\x12\\x44ummyDataParameter\\x12+\\n\\x0b\\x64\\x61ta_filler\\x18\\x01 \\x03(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x1f\\n\\x05shape\\x18\\x06 \\x03(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x0b\\n\\x03num\\x18\\x02 \\x03(\\r\\x12\\x10\\n\\x08\\x63hannels\\x18\\x03 \\x03(\\r\\x12\\x0e\\n\\x06height\\x18\\x04 \\x03(\\r\\x12\\r\\n\\x05width\\x18\\x05 \\x03(\\r\\\"\\xa5\\x01\\n\\x10\\x45ltwiseParameter\\x12\\x39\\n\\toperation\\x18\\x01 \\x01(\\x0e\\x32!.caffe.EltwiseParameter.EltwiseOp:\\x03SUM\\x12\\r\\n\\x05\\x63oeff\\x18\\x02 \\x03(\\x02\\x12\\x1e\\n\\x10stable_prod_grad\\x18\\x03 \\x01(\\x08:\\x04true\\\"\\'\\n\\tEltwiseOp\\x12\\x08\\n\\x04PROD\\x10\\x00\\x12\\x07\\n\\x03SUM\\x10\\x01\\x12\\x07\\n\\x03MAX\\x10\\x02\\\" \\n\\x0c\\x45LUParameter\\x12\\x10\\n\\x05\\x61lpha\\x18\\x01 \\x01(\\x02:\\x01\\x31\\\"\\xac\\x01\\n\\x0e\\x45mbedParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x11\\n\\tinput_dim\\x18\\x02 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x03 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x04 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x05 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"D\\n\\x0c\\x45xpParameter\\x12\\x10\\n\\x04\\x62\\x61se\\x18\\x01 \\x01(\\x02:\\x02-1\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 
\\x01(\\x02:\\x01\\x30\\\"9\\n\\x10\\x46lattenParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x14\\n\\x08\\x65nd_axis\\x18\\x02 \\x01(\\x05:\\x02-1\\\"O\\n\\x11HDF5DataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x02 \\x01(\\r\\x12\\x16\\n\\x07shuffle\\x18\\x03 \\x01(\\x08:\\x05\\x66\\x61lse\\\"(\\n\\x13HDF5OutputParameter\\x12\\x11\\n\\tfile_name\\x18\\x01 \\x01(\\t\\\"^\\n\\x12HingeLossParameter\\x12\\x30\\n\\x04norm\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.HingeLossParameter.Norm:\\x02L1\\\"\\x16\\n\\x04Norm\\x12\\x06\\n\\x02L1\\x10\\x01\\x12\\x06\\n\\x02L2\\x10\\x02\\\"\\x97\\x02\\n\\x12ImageDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\x12\\x15\\n\\nbatch_size\\x18\\x04 \\x01(\\r:\\x01\\x31\\x12\\x14\\n\\trand_skip\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x16\\n\\x07shuffle\\x18\\x08 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\nnew_height\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x14\\n\\tnew_width\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x16\\n\\x08is_color\\x18\\x0b \\x01(\\x08:\\x04true\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x14\\n\\tcrop_size\\x18\\x05 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\x0broot_folder\\x18\\x0c \\x01(\\t:\\x00\\\"\\'\\n\\x15InfogainLossParameter\\x12\\x0e\\n\\x06source\\x18\\x01 \\x01(\\t\\\"\\xb1\\x01\\n\\x15InnerProductParameter\\x12\\x12\\n\\nnum_output\\x18\\x01 \\x01(\\r\\x12\\x17\\n\\tbias_term\\x18\\x02 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x04 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x05 \\x01(\\x05:\\x01\\x31\\\"D\\n\\x0cLogParameter\\x12\\x10\\n\\x04\\x62\\x61se\\x18\\x01 \\x01(\\x02:\\x02-1\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 
\\x01(\\x02:\\x01\\x30\\\"\\xb8\\x02\\n\\x0cLRNParameter\\x12\\x15\\n\\nlocal_size\\x18\\x01 \\x01(\\r:\\x01\\x35\\x12\\x10\\n\\x05\\x61lpha\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x04\\x62\\x65ta\\x18\\x03 \\x01(\\x02:\\x04\\x30.75\\x12\\x44\\n\\x0bnorm_region\\x18\\x04 \\x01(\\x0e\\x32\\x1e.caffe.LRNParameter.NormRegion:\\x0f\\x41\\x43ROSS_CHANNELS\\x12\\x0c\\n\\x01k\\x18\\x05 \\x01(\\x02:\\x01\\x31\\x12\\x33\\n\\x06\\x65ngine\\x18\\x06 \\x01(\\x0e\\x32\\x1a.caffe.LRNParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"5\\n\\nNormRegion\\x12\\x13\\n\\x0f\\x41\\x43ROSS_CHANNELS\\x10\\x00\\x12\\x12\\n\\x0eWITHIN_CHANNEL\\x10\\x01\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"Z\\n\\x13MemoryDataParameter\\x12\\x12\\n\\nbatch_size\\x18\\x01 \\x01(\\r\\x12\\x10\\n\\x08\\x63hannels\\x18\\x02 \\x01(\\r\\x12\\x0e\\n\\x06height\\x18\\x03 \\x01(\\r\\x12\\r\\n\\x05width\\x18\\x04 \\x01(\\r\\\"d\\n\\x0cMVNParameter\\x12 \\n\\x12normalize_variance\\x18\\x01 \\x01(\\x08:\\x04true\\x12\\x1e\\n\\x0f\\x61\\x63ross_channels\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x12\\n\\x03\\x65ps\\x18\\x03 \\x01(\\x02:\\x05\\x31\\x65-09\\\"\\xa2\\x03\\n\\x10PoolingParameter\\x12\\x35\\n\\x04pool\\x18\\x01 \\x01(\\x0e\\x32\\\".caffe.PoolingParameter.PoolMethod:\\x03MAX\\x12\\x0e\\n\\x03pad\\x18\\x04 \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_h\\x18\\t \\x01(\\r:\\x01\\x30\\x12\\x10\\n\\x05pad_w\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x13\\n\\x0bkernel_size\\x18\\x02 \\x01(\\r\\x12\\x10\\n\\x08kernel_h\\x18\\x05 \\x01(\\r\\x12\\x10\\n\\x08kernel_w\\x18\\x06 \\x01(\\r\\x12\\x11\\n\\x06stride\\x18\\x03 \\x01(\\r:\\x01\\x31\\x12\\x10\\n\\x08stride_h\\x18\\x07 \\x01(\\r\\x12\\x10\\n\\x08stride_w\\x18\\x08 \\x01(\\r\\x12\\x37\\n\\x06\\x65ngine\\x18\\x0b \\x01(\\x0e\\x32\\x1e.caffe.PoolingParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x1d\\n\\x0eglobal_pooling\\x18\\x0c 
\\x01(\\x08:\\x05\\x66\\x61lse\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"F\\n\\x0ePowerParameter\\x12\\x10\\n\\x05power\\x18\\x01 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x05shift\\x18\\x03 \\x01(\\x02:\\x01\\x30\\\"g\\n\\x0fPythonParameter\\x12\\x0e\\n\\x06module\\x18\\x01 \\x01(\\t\\x12\\r\\n\\x05layer\\x18\\x02 \\x01(\\t\\x12\\x13\\n\\tparam_str\\x18\\x03 \\x01(\\t:\\x00\\x12 \\n\\x11share_in_parallel\\x18\\x04 \\x01(\\x08:\\x05\\x66\\x61lse\\\"\\xad\\x01\\n\\x12ReductionParameter\\x12=\\n\\toperation\\x18\\x01 \\x01(\\x0e\\x32%.caffe.ReductionParameter.ReductionOp:\\x03SUM\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x10\\n\\x05\\x63oeff\\x18\\x03 \\x01(\\x02:\\x01\\x31\\\"5\\n\\x0bReductionOp\\x12\\x07\\n\\x03SUM\\x10\\x01\\x12\\x08\\n\\x04\\x41SUM\\x10\\x02\\x12\\t\\n\\x05SUMSQ\\x10\\x03\\x12\\x08\\n\\x04MEAN\\x10\\x04\\\"\\x8d\\x01\\n\\rReLUParameter\\x12\\x19\\n\\x0enegative_slope\\x18\\x01 \\x01(\\x02:\\x01\\x30\\x12\\x34\\n\\x06\\x65ngine\\x18\\x02 \\x01(\\x0e\\x32\\x1b.caffe.ReLUParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"Z\\n\\x10ReshapeParameter\\x12\\x1f\\n\\x05shape\\x18\\x01 \\x01(\\x0b\\x32\\x10.caffe.BlobShape\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x30\\x12\\x14\\n\\x08num_axes\\x18\\x03 \\x01(\\x05:\\x02-1\\\"Y\\n\\x13ROIPoolingParameter\\x12\\x13\\n\\x08pooled_h\\x18\\x01 \\x01(\\r:\\x01\\x30\\x12\\x13\\n\\x08pooled_w\\x18\\x02 \\x01(\\r:\\x01\\x30\\x12\\x18\\n\\rspatial_scale\\x18\\x03 
\\x01(\\x02:\\x01\\x31\\\"\\xa5\\x01\\n\\x0eScaleParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x08num_axes\\x18\\x02 \\x01(\\x05:\\x01\\x31\\x12&\\n\\x06\\x66iller\\x18\\x03 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x18\\n\\tbias_term\\x18\\x04 \\x01(\\x08:\\x05\\x66\\x61lse\\x12+\\n\\x0b\\x62ias_filler\\x18\\x05 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\\"x\\n\\x10SigmoidParameter\\x12\\x37\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.SigmoidParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\")\\n\\x15SmoothL1LossParameter\\x12\\x10\\n\\x05sigma\\x18\\x01 \\x01(\\x02:\\x01\\x31\\\"L\\n\\x0eSliceParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x03 \\x01(\\x05:\\x01\\x31\\x12\\x13\\n\\x0bslice_point\\x18\\x02 \\x03(\\r\\x12\\x14\\n\\tslice_dim\\x18\\x01 \\x01(\\r:\\x01\\x31\\\"\\x89\\x01\\n\\x10SoftmaxParameter\\x12\\x37\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1e.caffe.SoftmaxParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\x12\\x0f\\n\\x04\\x61xis\\x18\\x02 \\x01(\\x05:\\x01\\x31\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"r\\n\\rTanHParameter\\x12\\x34\\n\\x06\\x65ngine\\x18\\x01 \\x01(\\x0e\\x32\\x1b.caffe.TanHParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"/\\n\\rTileParameter\\x12\\x0f\\n\\x04\\x61xis\\x18\\x01 \\x01(\\x05:\\x01\\x31\\x12\\r\\n\\x05tiles\\x18\\x02 \\x01(\\x05\\\"*\\n\\x12ThresholdParameter\\x12\\x14\\n\\tthreshold\\x18\\x01 \\x01(\\x02:\\x01\\x30\\\"\\xc1\\x02\\n\\x13WindowDataParameter\\x12\\x0e\\n\\x06source\\x18\\x01 
\\x01(\\t\\x12\\x10\\n\\x05scale\\x18\\x02 \\x01(\\x02:\\x01\\x31\\x12\\x11\\n\\tmean_file\\x18\\x03 \\x01(\\t\\x12\\x12\\n\\nbatch_size\\x18\\x04 \\x01(\\r\\x12\\x14\\n\\tcrop_size\\x18\\x05 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x06 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x19\\n\\x0c\\x66g_threshold\\x18\\x07 \\x01(\\x02:\\x03\\x30.5\\x12\\x19\\n\\x0c\\x62g_threshold\\x18\\x08 \\x01(\\x02:\\x03\\x30.5\\x12\\x19\\n\\x0b\\x66g_fraction\\x18\\t \\x01(\\x02:\\x04\\x30.25\\x12\\x16\\n\\x0b\\x63ontext_pad\\x18\\n \\x01(\\r:\\x01\\x30\\x12\\x17\\n\\tcrop_mode\\x18\\x0b \\x01(\\t:\\x04warp\\x12\\x1b\\n\\x0c\\x63\\x61\\x63he_images\\x18\\x0c \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\x0broot_folder\\x18\\r \\x01(\\t:\\x00\\\"\\xeb\\x01\\n\\x0cSPPParameter\\x12\\x16\\n\\x0epyramid_height\\x18\\x01 \\x01(\\r\\x12\\x31\\n\\x04pool\\x18\\x02 \\x01(\\x0e\\x32\\x1e.caffe.SPPParameter.PoolMethod:\\x03MAX\\x12\\x33\\n\\x06\\x65ngine\\x18\\x06 \\x01(\\x0e\\x32\\x1a.caffe.SPPParameter.Engine:\\x07\\x44\\x45\\x46\\x41ULT\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"+\\n\\x06\\x45ngine\\x12\\x0b\\n\\x07\\x44\\x45\\x46\\x41ULT\\x10\\x00\\x12\\t\\n\\x05\\x43\\x41\\x46\\x46\\x45\\x10\\x01\\x12\\t\\n\\x05\\x43UDNN\\x10\\x02\\\"\\x83\\x15\\n\\x10V1LayerParameter\\x12\\x0e\\n\\x06\\x62ottom\\x18\\x02 \\x03(\\t\\x12\\x0b\\n\\x03top\\x18\\x03 \\x03(\\t\\x12\\x0c\\n\\x04name\\x18\\x04 \\x01(\\t\\x12$\\n\\x07include\\x18  \\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12$\\n\\x07\\x65xclude\\x18! 
\\x03(\\x0b\\x32\\x13.caffe.NetStateRule\\x12/\\n\\x04type\\x18\\x05 \\x01(\\x0e\\x32!.caffe.V1LayerParameter.LayerType\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x06 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x0e\\n\\x05param\\x18\\xe9\\x07 \\x03(\\t\\x12>\\n\\x0f\\x62lob_share_mode\\x18\\xea\\x07 \\x03(\\x0e\\x32$.caffe.V1LayerParameter.DimCheckMode\\x12\\x10\\n\\x08\\x62lobs_lr\\x18\\x07 \\x03(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x08 \\x03(\\x02\\x12\\x13\\n\\x0bloss_weight\\x18# \\x03(\\x02\\x12\\x30\\n\\x0e\\x61\\x63\\x63uracy_param\\x18\\x1b \\x01(\\x0b\\x32\\x18.caffe.AccuracyParameter\\x12,\\n\\x0c\\x61rgmax_param\\x18\\x17 \\x01(\\x0b\\x32\\x16.caffe.ArgMaxParameter\\x12,\\n\\x0c\\x63oncat_param\\x18\\t \\x01(\\x0b\\x32\\x16.caffe.ConcatParameter\\x12,\\n\\x0c\\x63onadd_param\\x18\\x63 \\x01(\\x0b\\x32\\x16.caffe.ConaddParameter\\x12?\\n\\x16\\x63ontrastive_loss_param\\x18( \\x01(\\x0b\\x32\\x1f.caffe.ContrastiveLossParameter\\x12\\x36\\n\\x11\\x63onvolution_param\\x18\\n \\x01(\\x0b\\x32\\x1b.caffe.ConvolutionParameter\\x12L\\n\\x1c\\x64\\x65\\x66ormable_convolution_param\\x18\\xe7\\x07 \\x01(\\x0b\\x32%.caffe.DeformableConvolutionParameter\\x12(\\n\\ndata_param\\x18\\x0b \\x01(\\x0b\\x32\\x14.caffe.DataParameter\\x12.\\n\\rdropout_param\\x18\\x0c \\x01(\\x0b\\x32\\x17.caffe.DropoutParameter\\x12\\x33\\n\\x10\\x64ummy_data_param\\x18\\x1a \\x01(\\x0b\\x32\\x19.caffe.DummyDataParameter\\x12.\\n\\reltwise_param\\x18\\x18 \\x01(\\x0b\\x32\\x17.caffe.EltwiseParameter\\x12&\\n\\texp_param\\x18) \\x01(\\x0b\\x32\\x13.caffe.ExpParameter\\x12\\x31\\n\\x0fhdf5_data_param\\x18\\r \\x01(\\x0b\\x32\\x18.caffe.HDF5DataParameter\\x12\\x35\\n\\x11hdf5_output_param\\x18\\x0e \\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\x12\\x33\\n\\x10hinge_loss_param\\x18\\x1d \\x01(\\x0b\\x32\\x19.caffe.HingeLossParameter\\x12\\x33\\n\\x10image_data_param\\x18\\x0f \\x01(\\x0b\\x32\\x19.caffe.ImageDataParameter\\x12\\x39\\n\\x13infogain_loss_param\\x18\\x10 
\\x01(\\x0b\\x32\\x1c.caffe.InfogainLossParameter\\x12\\x39\\n\\x13inner_product_param\\x18\\x11 \\x01(\\x0b\\x32\\x1c.caffe.InnerProductParameter\\x12&\\n\\tlrn_param\\x18\\x12 \\x01(\\x0b\\x32\\x13.caffe.LRNParameter\\x12\\x35\\n\\x11memory_data_param\\x18\\x16 \\x01(\\x0b\\x32\\x1a.caffe.MemoryDataParameter\\x12&\\n\\tmvn_param\\x18\\\" \\x01(\\x0b\\x32\\x13.caffe.MVNParameter\\x12.\\n\\rpooling_param\\x18\\x13 \\x01(\\x0b\\x32\\x17.caffe.PoolingParameter\\x12*\\n\\x0bpower_param\\x18\\x15 \\x01(\\x0b\\x32\\x15.caffe.PowerParameter\\x12(\\n\\nrelu_param\\x18\\x1e \\x01(\\x0b\\x32\\x14.caffe.ReLUParameter\\x12.\\n\\rsigmoid_param\\x18& \\x01(\\x0b\\x32\\x17.caffe.SigmoidParameter\\x12.\\n\\rsoftmax_param\\x18\\' \\x01(\\x0b\\x32\\x17.caffe.SoftmaxParameter\\x12*\\n\\x0bslice_param\\x18\\x1f \\x01(\\x0b\\x32\\x15.caffe.SliceParameter\\x12(\\n\\ntanh_param\\x18% \\x01(\\x0b\\x32\\x14.caffe.TanHParameter\\x12\\x32\\n\\x0fthreshold_param\\x18\\x19 \\x01(\\x0b\\x32\\x19.caffe.ThresholdParameter\\x12\\x35\\n\\x11window_data_param\\x18\\x14 \\x01(\\x0b\\x32\\x1a.caffe.WindowDataParameter\\x12\\x37\\n\\x0ftransform_param\\x18$ \\x01(\\x0b\\x32\\x1e.caffe.TransformationParameter\\x12(\\n\\nloss_param\\x18* \\x01(\\x0b\\x32\\x14.caffe.LossParameter\\x12&\\n\\x05layer\\x18\\x01 \\x01(\\x0b\\x32\\x17.caffe.V0LayerParameter\\\"\\xff\\x04\\n\\tLayerType\\x12\\x08\\n\\x04NONE\\x10\\x00\\x12\\n\\n\\x06\\x41\\x42SVAL\\x10#\\x12\\x0c\\n\\x08\\x41\\x43\\x43URACY\\x10\\x01\\x12\\n\\n\\x06\\x41RGMAX\\x10\\x1e\\x12\\x08\\n\\x04\\x42NLL\\x10\\x02\\x12\\n\\n\\x06\\x43ONCAT\\x10\\x03\\x12\\n\\n\\x06\\x43ONADD\\x10\\x32\\x12\\x14\\n\\x10\\x43ONTRASTIVE_LOSS\\x10%\\x12\\x0f\\n\\x0b\\x43ONVOLUTION\\x10\\x04\\x12\\x19\\n\\x15\\x44\\x45\\x46ORMABLECONVOLUTION\\x10\\x63\\x12\\x08\\n\\x04\\x44\\x41TA\\x10\\x05\\x12\\x11\\n\\rDECONVOLUTION\\x10\\'\\x12\\x0b\\n\\x07\\x44ROPOUT\\x10\\x06\\x12\\x0e\\n\\nDUMMY_DATA\\x10 
\\x12\\x12\\n\\x0e\\x45UCLIDEAN_LOSS\\x10\\x07\\x12\\x0b\\n\\x07\\x45LTWISE\\x10\\x19\\x12\\x07\\n\\x03\\x45XP\\x10&\\x12\\x0b\\n\\x07\\x46LATTEN\\x10\\x08\\x12\\r\\n\\tHDF5_DATA\\x10\\t\\x12\\x0f\\n\\x0bHDF5_OUTPUT\\x10\\n\\x12\\x0e\\n\\nHINGE_LOSS\\x10\\x1c\\x12\\n\\n\\x06IM2COL\\x10\\x0b\\x12\\x0e\\n\\nIMAGE_DATA\\x10\\x0c\\x12\\x11\\n\\rINFOGAIN_LOSS\\x10\\r\\x12\\x11\\n\\rINNER_PRODUCT\\x10\\x0e\\x12\\x07\\n\\x03LRN\\x10\\x0f\\x12\\x0f\\n\\x0bMEMORY_DATA\\x10\\x1d\\x12\\x1d\\n\\x19MULTINOMIAL_LOGISTIC_LOSS\\x10\\x10\\x12\\x07\\n\\x03MVN\\x10\\\"\\x12\\x0b\\n\\x07POOLING\\x10\\x11\\x12\\t\\n\\x05POWER\\x10\\x1a\\x12\\x08\\n\\x04RELU\\x10\\x12\\x12\\x0b\\n\\x07SIGMOID\\x10\\x13\\x12\\x1e\\n\\x1aSIGMOID_CROSS_ENTROPY_LOSS\\x10\\x1b\\x12\\x0b\\n\\x07SILENCE\\x10$\\x12\\x0b\\n\\x07SOFTMAX\\x10\\x14\\x12\\x10\\n\\x0cSOFTMAX_LOSS\\x10\\x15\\x12\\t\\n\\x05SPLIT\\x10\\x16\\x12\\t\\n\\x05SLICE\\x10!\\x12\\x08\\n\\x04TANH\\x10\\x17\\x12\\x0f\\n\\x0bWINDOW_DATA\\x10\\x18\\x12\\r\\n\\tTHRESHOLD\\x10\\x1f\\\"*\\n\\x0c\\x44imCheckMode\\x12\\n\\n\\x06STRICT\\x10\\x00\\x12\\x0e\\n\\nPERMISSIVE\\x10\\x01\\\"\\xfd\\x07\\n\\x10V0LayerParameter\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\x0c\\n\\x04type\\x18\\x02 \\x01(\\t\\x12\\x12\\n\\nnum_output\\x18\\x03 \\x01(\\r\\x12\\x16\\n\\x08\\x62iasterm\\x18\\x04 \\x01(\\x08:\\x04true\\x12-\\n\\rweight_filler\\x18\\x05 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12+\\n\\x0b\\x62ias_filler\\x18\\x06 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x0e\\n\\x03pad\\x18\\x07 \\x01(\\r:\\x01\\x30\\x12\\x12\\n\\nkernelsize\\x18\\x08 \\x01(\\r\\x12\\x10\\n\\x05group\\x18\\t \\x01(\\r:\\x01\\x31\\x12\\x11\\n\\x06stride\\x18\\n \\x01(\\r:\\x01\\x31\\x12\\x35\\n\\x04pool\\x18\\x0b \\x01(\\x0e\\x32\\\".caffe.V0LayerParameter.PoolMethod:\\x03MAX\\x12\\x1a\\n\\rdropout_ratio\\x18\\x0c \\x01(\\x02:\\x03\\x30.5\\x12\\x15\\n\\nlocal_size\\x18\\r \\x01(\\r:\\x01\\x35\\x12\\x10\\n\\x05\\x61lpha\\x18\\x0e 
\\x01(\\x02:\\x01\\x31\\x12\\x12\\n\\x04\\x62\\x65ta\\x18\\x0f \\x01(\\x02:\\x04\\x30.75\\x12\\x0c\\n\\x01k\\x18\\x16 \\x01(\\x02:\\x01\\x31\\x12\\x0e\\n\\x06source\\x18\\x10 \\x01(\\t\\x12\\x10\\n\\x05scale\\x18\\x11 \\x01(\\x02:\\x01\\x31\\x12\\x10\\n\\x08meanfile\\x18\\x12 \\x01(\\t\\x12\\x11\\n\\tbatchsize\\x18\\x13 \\x01(\\r\\x12\\x13\\n\\x08\\x63ropsize\\x18\\x14 \\x01(\\r:\\x01\\x30\\x12\\x15\\n\\x06mirror\\x18\\x15 \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x1f\\n\\x05\\x62lobs\\x18\\x32 \\x03(\\x0b\\x32\\x10.caffe.BlobProto\\x12\\x10\\n\\x08\\x62lobs_lr\\x18\\x33 \\x03(\\x02\\x12\\x14\\n\\x0cweight_decay\\x18\\x34 \\x03(\\x02\\x12\\x14\\n\\trand_skip\\x18\\x35 \\x01(\\r:\\x01\\x30\\x12\\x1d\\n\\x10\\x64\\x65t_fg_threshold\\x18\\x36 \\x01(\\x02:\\x03\\x30.5\\x12\\x1d\\n\\x10\\x64\\x65t_bg_threshold\\x18\\x37 \\x01(\\x02:\\x03\\x30.5\\x12\\x1d\\n\\x0f\\x64\\x65t_fg_fraction\\x18\\x38 \\x01(\\x02:\\x04\\x30.25\\x12\\x1a\\n\\x0f\\x64\\x65t_context_pad\\x18: \\x01(\\r:\\x01\\x30\\x12\\x1b\\n\\rdet_crop_mode\\x18; \\x01(\\t:\\x04warp\\x12\\x12\\n\\x07new_num\\x18< \\x01(\\x05:\\x01\\x30\\x12\\x17\\n\\x0cnew_channels\\x18= \\x01(\\x05:\\x01\\x30\\x12\\x15\\n\\nnew_height\\x18> \\x01(\\x05:\\x01\\x30\\x12\\x14\\n\\tnew_width\\x18? 
\\x01(\\x05:\\x01\\x30\\x12\\x1d\\n\\x0eshuffle_images\\x18@ \\x01(\\x08:\\x05\\x66\\x61lse\\x12\\x15\\n\\nconcat_dim\\x18\\x41 \\x01(\\r:\\x01\\x31\\x12\\x36\\n\\x11hdf5_output_param\\x18\\xe9\\x07 \\x01(\\x0b\\x32\\x1a.caffe.HDF5OutputParameter\\\".\\n\\nPoolMethod\\x12\\x07\\n\\x03MAX\\x10\\x00\\x12\\x07\\n\\x03\\x41VE\\x10\\x01\\x12\\x0e\\n\\nSTOCHASTIC\\x10\\x02\\\"W\\n\\x0ePReLUParameter\\x12&\\n\\x06\\x66iller\\x18\\x01 \\x01(\\x0b\\x32\\x16.caffe.FillerParameter\\x12\\x1d\\n\\x0e\\x63hannel_shared\\x18\\x02 \\x01(\\x08:\\x05\\x66\\x61lse*\\x1c\\n\\x05Phase\\x12\\t\\n\\x05TRAIN\\x10\\x00\\x12\\x08\\n\\x04TEST\\x10\\x01')\n)\n_sym_db.RegisterFileDescriptor(DESCRIPTOR)\n\n_PHASE = _descriptor.EnumDescriptor(\n  name='Phase',\n  full_name='caffe.Phase',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='TRAIN', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='TEST', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=16018,\n  serialized_end=16046,\n)\n_sym_db.RegisterEnumDescriptor(_PHASE)\n\nPhase = enum_type_wrapper.EnumTypeWrapper(_PHASE)\nTRAIN = 0\nTEST = 1\n\n\n_FILLERPARAMETER_VARIANCENORM = _descriptor.EnumDescriptor(\n  name='VarianceNorm',\n  full_name='caffe.FillerParameter.VarianceNorm',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='FAN_IN', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='FAN_OUT', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='AVERAGE', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=658,\n  
serialized_end=710,\n)\n_sym_db.RegisterEnumDescriptor(_FILLERPARAMETER_VARIANCENORM)\n\n_SOLVERPARAMETER_SNAPSHOTFORMAT = _descriptor.EnumDescriptor(\n  name='SnapshotFormat',\n  full_name='caffe.SolverParameter.SnapshotFormat',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='HDF5', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='BINARYPROTO', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=2132,\n  serialized_end=2175,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SNAPSHOTFORMAT)\n\n_SOLVERPARAMETER_SOLVERMODE = _descriptor.EnumDescriptor(\n  name='SolverMode',\n  full_name='caffe.SolverParameter.SolverMode',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='CPU', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='GPU', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=2177,\n  serialized_end=2207,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SOLVERMODE)\n\n_SOLVERPARAMETER_SOLVERTYPE = _descriptor.EnumDescriptor(\n  name='SolverType',\n  full_name='caffe.SolverParameter.SolverType',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='SGD', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='NESTEROV', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ADAGRAD', index=2, number=2,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='RMSPROP', index=3, number=3,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ADADELTA', index=4, 
number=4,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ADAM', index=5, number=5,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=2209,\n  serialized_end=2294,\n)\n_sym_db.RegisterEnumDescriptor(_SOLVERPARAMETER_SOLVERTYPE)\n\n_PARAMSPEC_DIMCHECKMODE = _descriptor.EnumDescriptor(\n  name='DimCheckMode',\n  full_name='caffe.ParamSpec.DimCheckMode',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='STRICT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='PERMISSIVE', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=2725,\n  serialized_end=2767,\n)\n_sym_db.RegisterEnumDescriptor(_PARAMSPEC_DIMCHECKMODE)\n\n_LOSSPARAMETER_NORMALIZATIONMODE = _descriptor.EnumDescriptor(\n  name='NormalizationMode',\n  full_name='caffe.LossParameter.NormalizationMode',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='FULL', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='VALID', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='BATCH_SIZE', index=2, number=2,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='NONE', index=3, number=3,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=5743,\n  serialized_end=5809,\n)\n_sym_db.RegisterEnumDescriptor(_LOSSPARAMETER_NORMALIZATIONMODE)\n\n_CONVOLUTIONPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.ConvolutionParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, 
number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_CONVOLUTIONPARAMETER_ENGINE)\n\n_DEFORMABLECONVOLUTIONPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.DeformableConvolutionParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_DEFORMABLECONVOLUTIONPARAMETER_ENGINE)\n\n_DATAPARAMETER_DB = _descriptor.EnumDescriptor(\n  name='DB',\n  full_name='caffe.DataParameter.DB',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='LEVELDB', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='LMDB', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=7664,\n  serialized_end=7691,\n)\n_sym_db.RegisterEnumDescriptor(_DATAPARAMETER_DB)\n\n_ELTWISEPARAMETER_ELTWISEOP = _descriptor.EnumDescriptor(\n  name='EltwiseOp',\n  full_name='caffe.EltwiseParameter.EltwiseOp',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='PROD', index=0, number=0,\n      
options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SUM', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=8108,\n  serialized_end=8147,\n)\n_sym_db.RegisterEnumDescriptor(_ELTWISEPARAMETER_ELTWISEOP)\n\n_HINGELOSSPARAMETER_NORM = _descriptor.EnumDescriptor(\n  name='Norm',\n  full_name='caffe.HingeLossParameter.Norm',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='L1', index=0, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='L2', index=1, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=8682,\n  serialized_end=8704,\n)\n_sym_db.RegisterEnumDescriptor(_HINGELOSSPARAMETER_NORM)\n\n_LRNPARAMETER_NORMREGION = _descriptor.EnumDescriptor(\n  name='NormRegion',\n  full_name='caffe.LRNParameter.NormRegion',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='ACROSS_CHANNELS', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='WITHIN_CHANNEL', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=9494,\n  serialized_end=9547,\n)\n_sym_db.RegisterEnumDescriptor(_LRNPARAMETER_NORMREGION)\n\n_LRNPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.LRNParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    
_descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_LRNPARAMETER_ENGINE)\n\n_POOLINGPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.PoolingParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=10116,\n  serialized_end=10162,\n)\n_sym_db.RegisterEnumDescriptor(_POOLINGPARAMETER_POOLMETHOD)\n\n_POOLINGPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.PoolingParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_POOLINGPARAMETER_ENGINE)\n\n_REDUCTIONPARAMETER_REDUCTIONOP = _descriptor.EnumDescriptor(\n  name='ReductionOp',\n  full_name='caffe.ReductionParameter.ReductionOp',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='SUM', index=0, number=1,\n      options=None,\n      type=None),\n    
_descriptor.EnumValueDescriptor(\n      name='ASUM', index=1, number=2,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SUMSQ', index=2, number=3,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='MEAN', index=3, number=4,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=10507,\n  serialized_end=10560,\n)\n_sym_db.RegisterEnumDescriptor(_REDUCTIONPARAMETER_REDUCTIONOP)\n\n_RELUPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.ReLUParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_RELUPARAMETER_ENGINE)\n\n_SIGMOIDPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.SigmoidParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_SIGMOIDPARAMETER_ENGINE)\n\n_SOFTMAXPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  
full_name='caffe.SoftmaxParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_SOFTMAXPARAMETER_ENGINE)\n\n_TANHPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.TanHParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_TANHPARAMETER_ENGINE)\n\n_SPPPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.SPPParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=10116,\n  
serialized_end=10162,\n)\n_sym_db.RegisterEnumDescriptor(_SPPPARAMETER_POOLMETHOD)\n\n_SPPPARAMETER_ENGINE = _descriptor.EnumDescriptor(\n  name='Engine',\n  full_name='caffe.SPPParameter.Engine',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='DEFAULT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CAFFE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CUDNN', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=6793,\n  serialized_end=6836,\n)\n_sym_db.RegisterEnumDescriptor(_SPPPARAMETER_ENGINE)\n\n_V1LAYERPARAMETER_LAYERTYPE = _descriptor.EnumDescriptor(\n  name='LayerType',\n  full_name='caffe.V1LayerParameter.LayerType',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='NONE', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ABSVAL', index=1, number=35,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ACCURACY', index=2, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ARGMAX', index=3, number=30,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='BNLL', index=4, number=2,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CONCAT', index=5, number=3,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CONADD', index=6, number=50,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CONTRASTIVE_LOSS', index=7, number=37,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='CONVOLUTION', index=8, number=4,\n      
options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='DEFORMABLECONVOLUTION', index=9, number=99,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='DATA', index=10, number=5,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='DECONVOLUTION', index=11, number=39,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='DROPOUT', index=12, number=6,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='DUMMY_DATA', index=13, number=32,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='EUCLIDEAN_LOSS', index=14, number=7,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='ELTWISE', index=15, number=25,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='EXP', index=16, number=38,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='FLATTEN', index=17, number=8,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='HDF5_DATA', index=18, number=9,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='HDF5_OUTPUT', index=19, number=10,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='HINGE_LOSS', index=20, number=28,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='IM2COL', index=21, number=11,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='IMAGE_DATA', index=22, number=12,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='INFOGAIN_LOSS', index=23, number=13,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='INNER_PRODUCT', index=24, number=14,\n      
options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='LRN', index=25, number=15,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='MEMORY_DATA', index=26, number=29,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='MULTINOMIAL_LOGISTIC_LOSS', index=27, number=16,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='MVN', index=28, number=34,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='POOLING', index=29, number=17,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='POWER', index=30, number=26,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='RELU', index=31, number=18,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SIGMOID', index=32, number=19,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SIGMOID_CROSS_ENTROPY_LOSS', index=33, number=27,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SILENCE', index=34, number=36,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SOFTMAX', index=35, number=20,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SOFTMAX_LOSS', index=36, number=21,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SPLIT', index=37, number=22,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='SLICE', index=38, number=33,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='TANH', index=39, number=23,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='WINDOW_DATA', index=40, number=24,\n      options=None,\n   
   type=None),\n    _descriptor.EnumValueDescriptor(\n      name='THRESHOLD', index=41, number=31,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=14220,\n  serialized_end=14859,\n)\n_sym_db.RegisterEnumDescriptor(_V1LAYERPARAMETER_LAYERTYPE)\n\n_V1LAYERPARAMETER_DIMCHECKMODE = _descriptor.EnumDescriptor(\n  name='DimCheckMode',\n  full_name='caffe.V1LayerParameter.DimCheckMode',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='STRICT', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='PERMISSIVE', index=1, number=1,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=2725,\n  serialized_end=2767,\n)\n_sym_db.RegisterEnumDescriptor(_V1LAYERPARAMETER_DIMCHECKMODE)\n\n_V0LAYERPARAMETER_POOLMETHOD = _descriptor.EnumDescriptor(\n  name='PoolMethod',\n  full_name='caffe.V0LayerParameter.PoolMethod',\n  filename=None,\n  file=DESCRIPTOR,\n  values=[\n    _descriptor.EnumValueDescriptor(\n      name='MAX', index=0, number=0,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='AVE', index=1, number=1,\n      options=None,\n      type=None),\n    _descriptor.EnumValueDescriptor(\n      name='STOCHASTIC', index=2, number=2,\n      options=None,\n      type=None),\n  ],\n  containing_type=None,\n  options=None,\n  serialized_start=10116,\n  serialized_end=10162,\n)\n_sym_db.RegisterEnumDescriptor(_V0LAYERPARAMETER_POOLMETHOD)\n\n\n_BLOBSHAPE = _descriptor.Descriptor(\n  name='BlobShape',\n  full_name='caffe.BlobShape',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='dim', full_name='caffe.BlobShape.dim', index=0,\n      number=1, type=3, cpp_type=2, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, 
enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=22,\n  serialized_end=50,\n)\n\n\n_BLOBPROTO = _descriptor.Descriptor(\n  name='BlobProto',\n  full_name='caffe.BlobProto',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.BlobProto.shape', index=0,\n      number=7, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='data', full_name='caffe.BlobProto.data', index=1,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))),\n    _descriptor.FieldDescriptor(\n      name='diff', full_name='caffe.BlobProto.diff', index=2,\n      number=6, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))),\n    _descriptor.FieldDescriptor(\n      name='double_data', full_name='caffe.BlobProto.double_data', index=3,\n      number=8, type=1, cpp_type=5, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))),\n    _descriptor.FieldDescriptor(\n      name='double_diff', full_name='caffe.BlobProto.double_diff', index=4,\n      number=9, type=1, cpp_type=5, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))),\n    _descriptor.FieldDescriptor(\n      name='num', full_name='caffe.BlobProto.num', index=5,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.BlobProto.channels', index=6,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.BlobProto.height', index=7,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.BlobProto.width', index=8,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  
serialized_start=53,\n  serialized_end=257,\n)\n\n\n_BLOBPROTOVECTOR = _descriptor.Descriptor(\n  name='BlobProtoVector',\n  full_name='caffe.BlobProtoVector',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.BlobProtoVector.blobs', index=0,\n      number=1, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=259,\n  serialized_end=309,\n)\n\n\n_DATUM = _descriptor.Descriptor(\n  name='Datum',\n  full_name='caffe.Datum',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.Datum.channels', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.Datum.height', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.Datum.width', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='data', 
full_name='caffe.Datum.data', index=3,\n      number=4, type=12, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\"),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='label', full_name='caffe.Datum.label', index=4,\n      number=5, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='float_data', full_name='caffe.Datum.float_data', index=5,\n      number=6, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='encoded', full_name='caffe.Datum.encoded', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=312,\n  serialized_end=441,\n)\n\n\n_FILLERPARAMETER = _descriptor.Descriptor(\n  name='FillerParameter',\n  full_name='caffe.FillerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.FillerParameter.type', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"constant\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='value', full_name='caffe.FillerParameter.value', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='min', full_name='caffe.FillerParameter.min', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='max', full_name='caffe.FillerParameter.max', index=3,\n      number=4, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mean', full_name='caffe.FillerParameter.mean', index=4,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='std', full_name='caffe.FillerParameter.std', index=5,\n      number=6, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='sparse', full_name='caffe.FillerParameter.sparse', index=6,\n      number=7, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='variance_norm', full_name='caffe.FillerParameter.variance_norm', index=7,\n      number=8, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _FILLERPARAMETER_VARIANCENORM,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=444,\n  serialized_end=710,\n)\n\n\n_NETPARAMETER = _descriptor.Descriptor(\n  name='NetParameter',\n  full_name='caffe.NetParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.NetParameter.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='input', full_name='caffe.NetParameter.input', index=1,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='input_shape', full_name='caffe.NetParameter.input_shape', index=2,\n      number=8, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='input_dim', full_name='caffe.NetParameter.input_dim', index=3,\n      number=4, type=5, cpp_type=1, 
label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_backward', full_name='caffe.NetParameter.force_backward', index=4,\n      number=5, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='state', full_name='caffe.NetParameter.state', index=5,\n      number=6, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='debug_info', full_name='caffe.NetParameter.debug_info', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.NetParameter.layer', index=7,\n      number=100, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='layers', full_name='caffe.NetParameter.layers', index=8,\n      number=2, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  
is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=713,\n  serialized_end=983,\n)\n\n\n_SOLVERPARAMETER = _descriptor.Descriptor(\n  name='SolverParameter',\n  full_name='caffe.SolverParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='net', full_name='caffe.SolverParameter.net', index=0,\n      number=24, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='net_param', full_name='caffe.SolverParameter.net_param', index=1,\n      number=25, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='train_net', full_name='caffe.SolverParameter.train_net', index=2,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_net', full_name='caffe.SolverParameter.test_net', index=3,\n      number=2, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='train_net_param', full_name='caffe.SolverParameter.train_net_param', index=4,\n      number=21, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, 
containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_net_param', full_name='caffe.SolverParameter.test_net_param', index=5,\n      number=22, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='train_state', full_name='caffe.SolverParameter.train_state', index=6,\n      number=26, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_state', full_name='caffe.SolverParameter.test_state', index=7,\n      number=27, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_iter', full_name='caffe.SolverParameter.test_iter', index=8,\n      number=3, type=5, cpp_type=1, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_interval', full_name='caffe.SolverParameter.test_interval', index=9,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_compute_loss', full_name='caffe.SolverParameter.test_compute_loss', index=10,\n      number=19, 
type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='test_initialization', full_name='caffe.SolverParameter.test_initialization', index=11,\n      number=32, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='base_lr', full_name='caffe.SolverParameter.base_lr', index=12,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='display', full_name='caffe.SolverParameter.display', index=13,\n      number=6, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='average_loss', full_name='caffe.SolverParameter.average_loss', index=14,\n      number=33, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='max_iter', full_name='caffe.SolverParameter.max_iter', index=15,\n      number=7, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      
name='iter_size', full_name='caffe.SolverParameter.iter_size', index=16,\n      number=36, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='lr_policy', full_name='caffe.SolverParameter.lr_policy', index=17,\n      number=8, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='gamma', full_name='caffe.SolverParameter.gamma', index=18,\n      number=9, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='power', full_name='caffe.SolverParameter.power', index=19,\n      number=10, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='momentum', full_name='caffe.SolverParameter.momentum', index=20,\n      number=11, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.SolverParameter.weight_decay', index=21,\n      number=12, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='regularization_type', full_name='caffe.SolverParameter.regularization_type', index=22,\n      number=29, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"L2\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stepsize', full_name='caffe.SolverParameter.stepsize', index=23,\n      number=13, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stepvalue', full_name='caffe.SolverParameter.stepvalue', index=24,\n      number=34, type=5, cpp_type=1, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='clip_gradients', full_name='caffe.SolverParameter.clip_gradients', index=25,\n      number=35, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='snapshot', full_name='caffe.SolverParameter.snapshot', index=26,\n      number=14, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='snapshot_prefix', full_name='caffe.SolverParameter.snapshot_prefix', index=27,\n      number=15, type=9, cpp_type=9, label=1,\n      
has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='snapshot_diff', full_name='caffe.SolverParameter.snapshot_diff', index=28,\n      number=16, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='snapshot_format', full_name='caffe.SolverParameter.snapshot_format', index=29,\n      number=37, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='solver_mode', full_name='caffe.SolverParameter.solver_mode', index=30,\n      number=17, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='device_id', full_name='caffe.SolverParameter.device_id', index=31,\n      number=18, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='random_seed', full_name='caffe.SolverParameter.random_seed', index=32,\n      number=20, type=3, cpp_type=2, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      
name='type', full_name='caffe.SolverParameter.type', index=33,\n      number=40, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"SGD\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='delta', full_name='caffe.SolverParameter.delta', index=34,\n      number=31, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1e-08,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='momentum2', full_name='caffe.SolverParameter.momentum2', index=35,\n      number=39, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.999,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='rms_decay', full_name='caffe.SolverParameter.rms_decay', index=36,\n      number=38, type=2, cpp_type=6, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='debug_info', full_name='caffe.SolverParameter.debug_info', index=37,\n      number=23, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='snapshot_after_train', full_name='caffe.SolverParameter.snapshot_after_train', index=38,\n      number=28, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, 
containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='solver_type', full_name='caffe.SolverParameter.solver_type', index=39,\n      number=30, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SOLVERPARAMETER_SNAPSHOTFORMAT,\n    _SOLVERPARAMETER_SOLVERMODE,\n    _SOLVERPARAMETER_SOLVERTYPE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=986,\n  serialized_end=2294,\n)\n\n\n_SOLVERSTATE = _descriptor.Descriptor(\n  name='SolverState',\n  full_name='caffe.SolverState',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='iter', full_name='caffe.SolverState.iter', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='learned_net', full_name='caffe.SolverState.learned_net', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='history', full_name='caffe.SolverState.history', index=2,\n      number=3, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n   
   name='current_step', full_name='caffe.SolverState.current_step', index=3,\n      number=4, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2296,\n  serialized_end=2404,\n)\n\n\n_NETSTATE = _descriptor.Descriptor(\n  name='NetState',\n  full_name='caffe.NetState',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.NetState.phase', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='level', full_name='caffe.NetState.level', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stage', full_name='caffe.NetState.stage', index=2,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2406,\n  serialized_end=2484,\n)\n\n\n_NETSTATERULE = _descriptor.Descriptor(\n  name='NetStateRule',\n  full_name='caffe.NetStateRule',\n  filename=None,\n  
file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.NetStateRule.phase', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='min_level', full_name='caffe.NetStateRule.min_level', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='max_level', full_name='caffe.NetStateRule.max_level', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stage', full_name='caffe.NetStateRule.stage', index=3,\n      number=4, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='not_stage', full_name='caffe.NetStateRule.not_stage', index=4,\n      number=5, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2486,\n  serialized_end=2601,\n)\n\n\n_PARAMSPEC = _descriptor.Descriptor(\n  
name='ParamSpec',\n  full_name='caffe.ParamSpec',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.ParamSpec.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='share_mode', full_name='caffe.ParamSpec.share_mode', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='lr_mult', full_name='caffe.ParamSpec.lr_mult', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='decay_mult', full_name='caffe.ParamSpec.decay_mult', index=3,\n      number=4, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _PARAMSPEC_DIMCHECKMODE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2604,\n  serialized_end=2767,\n)\n\n\n_LAYERPARAMETER = _descriptor.Descriptor(\n  name='LayerParameter',\n  full_name='caffe.LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.LayerParameter.name', 
index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.LayerParameter.type', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bottom', full_name='caffe.LayerParameter.bottom', index=2,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='top', full_name='caffe.LayerParameter.top', index=3,\n      number=4, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='phase', full_name='caffe.LayerParameter.phase', index=4,\n      number=10, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='loss_weight', full_name='caffe.LayerParameter.loss_weight', index=5,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='param', full_name='caffe.LayerParameter.param', index=6,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.LayerParameter.blobs', index=7,\n      number=7, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='propagate_down', full_name='caffe.LayerParameter.propagate_down', index=8,\n      number=11, type=8, cpp_type=7, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='include', full_name='caffe.LayerParameter.include', index=9,\n      number=8, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='exclude', full_name='caffe.LayerParameter.exclude', index=10,\n      number=9, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='transform_param', full_name='caffe.LayerParameter.transform_param', index=11,\n      number=100, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n  
    is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='loss_param', full_name='caffe.LayerParameter.loss_param', index=12,\n      number=101, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='accuracy_param', full_name='caffe.LayerParameter.accuracy_param', index=13,\n      number=102, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='argmax_param', full_name='caffe.LayerParameter.argmax_param', index=14,\n      number=103, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='batch_norm_param', full_name='caffe.LayerParameter.batch_norm_param', index=15,\n      number=139, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_param', full_name='caffe.LayerParameter.bias_param', index=16,\n      number=141, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='concat_param', full_name='caffe.LayerParameter.concat_param', index=17,\n      number=104, type=11, 
cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='conadd_param', full_name='caffe.LayerParameter.conadd_param', index=18,\n      number=200, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='contrastive_loss_param', full_name='caffe.LayerParameter.contrastive_loss_param', index=19,\n      number=105, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='convolution_param', full_name='caffe.LayerParameter.convolution_param', index=20,\n      number=106, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='deformable_convolution_param', full_name='caffe.LayerParameter.deformable_convolution_param', index=21,\n      number=999, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='data_param', full_name='caffe.LayerParameter.data_param', index=22,\n      number=107, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dropout_param', full_name='caffe.LayerParameter.dropout_param', index=23,\n      number=108, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dummy_data_param', full_name='caffe.LayerParameter.dummy_data_param', index=24,\n      number=109, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='eltwise_param', full_name='caffe.LayerParameter.eltwise_param', index=25,\n      number=110, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='elu_param', full_name='caffe.LayerParameter.elu_param', index=26,\n      number=140, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='embed_param', full_name='caffe.LayerParameter.embed_param', index=27,\n      number=137, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='exp_param', full_name='caffe.LayerParameter.exp_param', index=28,\n      number=111, type=11, 
cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='flatten_param', full_name='caffe.LayerParameter.flatten_param', index=29,\n      number=135, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hdf5_data_param', full_name='caffe.LayerParameter.hdf5_data_param', index=30,\n      number=112, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.LayerParameter.hdf5_output_param', index=31,\n      number=113, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hinge_loss_param', full_name='caffe.LayerParameter.hinge_loss_param', index=32,\n      number=114, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='image_data_param', full_name='caffe.LayerParameter.image_data_param', index=33,\n      number=115, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='infogain_loss_param', full_name='caffe.LayerParameter.infogain_loss_param', index=34,\n      number=116, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='inner_product_param', full_name='caffe.LayerParameter.inner_product_param', index=35,\n      number=117, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='log_param', full_name='caffe.LayerParameter.log_param', index=36,\n      number=134, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='lrn_param', full_name='caffe.LayerParameter.lrn_param', index=37,\n      number=118, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='memory_data_param', full_name='caffe.LayerParameter.memory_data_param', index=38,\n      number=119, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mvn_param', full_name='caffe.LayerParameter.mvn_param', index=39,\n      number=120, type=11, 
cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pooling_param', full_name='caffe.LayerParameter.pooling_param', index=40,\n      number=121, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='power_param', full_name='caffe.LayerParameter.power_param', index=41,\n      number=122, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='prelu_param', full_name='caffe.LayerParameter.prelu_param', index=42,\n      number=131, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='python_param', full_name='caffe.LayerParameter.python_param', index=43,\n      number=130, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='reduction_param', full_name='caffe.LayerParameter.reduction_param', index=44,\n      number=136, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='relu_param', full_name='caffe.LayerParameter.relu_param', index=45,\n      number=123, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='reshape_param', full_name='caffe.LayerParameter.reshape_param', index=46,\n      number=133, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='roi_pooling_param', full_name='caffe.LayerParameter.roi_pooling_param', index=47,\n      number=8266711, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale_param', full_name='caffe.LayerParameter.scale_param', index=48,\n      number=142, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='sigmoid_param', full_name='caffe.LayerParameter.sigmoid_param', index=49,\n      number=124, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='smooth_l1_loss_param', full_name='caffe.LayerParameter.smooth_l1_loss_param', index=50,\n      number=8266712, type=11, cpp_type=10, label=1,\n      
has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='softmax_param', full_name='caffe.LayerParameter.softmax_param', index=51,\n      number=125, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='spp_param', full_name='caffe.LayerParameter.spp_param', index=52,\n      number=132, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='slice_param', full_name='caffe.LayerParameter.slice_param', index=53,\n      number=126, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='tanh_param', full_name='caffe.LayerParameter.tanh_param', index=54,\n      number=127, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='threshold_param', full_name='caffe.LayerParameter.threshold_param', index=55,\n      number=128, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      
name='tile_param', full_name='caffe.LayerParameter.tile_param', index=56,\n      number=138, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='window_data_param', full_name='caffe.LayerParameter.window_data_param', index=57,\n      number=129, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_param', full_name='caffe.LayerParameter.crop_param', index=58,\n      number=144, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=2770,\n  serialized_end=5427,\n)\n\n\n_TRANSFORMATIONPARAMETER = _descriptor.Descriptor(\n  name='TransformationParameter',\n  full_name='caffe.TransformationParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.TransformationParameter.scale', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.TransformationParameter.mirror', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, 
default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.TransformationParameter.crop_size', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.TransformationParameter.mean_file', index=3,\n      number=4, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mean_value', full_name='caffe.TransformationParameter.mean_value', index=4,\n      number=5, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_color', full_name='caffe.TransformationParameter.force_color', index=5,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_gray', full_name='caffe.TransformationParameter.force_gray', index=6,\n      number=7, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  
enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5430,\n  serialized_end=5612,\n)\n\n\n_LOSSPARAMETER = _descriptor.Descriptor(\n  name='LossParameter',\n  full_name='caffe.LossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='ignore_label', full_name='caffe.LossParameter.ignore_label', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='normalization', full_name='caffe.LossParameter.normalization', index=1,\n      number=3, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='normalize', full_name='caffe.LossParameter.normalize', index=2,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=False, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _LOSSPARAMETER_NORMALIZATIONMODE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5615,\n  serialized_end=5809,\n)\n\n\n_ACCURACYPARAMETER = _descriptor.Descriptor(\n  name='AccuracyParameter',\n  full_name='caffe.AccuracyParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='top_k', full_name='caffe.AccuracyParameter.top_k', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.AccuracyParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='ignore_label', full_name='caffe.AccuracyParameter.ignore_label', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5811,\n  serialized_end=5887,\n)\n\n\n_ARGMAXPARAMETER = _descriptor.Descriptor(\n  name='ArgMaxParameter',\n  full_name='caffe.ArgMaxParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='out_max_val', full_name='caffe.ArgMaxParameter.out_max_val', index=0,\n      number=1, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='top_k', full_name='caffe.ArgMaxParameter.top_k', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', 
full_name='caffe.ArgMaxParameter.axis', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5889,\n  serialized_end=5966,\n)\n\n\n_CONCATPARAMETER = _descriptor.Descriptor(\n  name='ConcatParameter',\n  full_name='caffe.ConcatParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ConcatParameter.axis', index=0,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='concat_dim', full_name='caffe.ConcatParameter.concat_dim', index=1,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=5968,\n  serialized_end=6025,\n)\n\n\n_CONADDPARAMETER = _descriptor.Descriptor(\n  name='ConaddParameter',\n  full_name='caffe.ConaddParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6027,\n  serialized_end=6044,\n)\n\n\n_BATCHNORMPARAMETER = 
_descriptor.Descriptor(\n  name='BatchNormParameter',\n  full_name='caffe.BatchNormParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='use_global_stats', full_name='caffe.BatchNormParameter.use_global_stats', index=0,\n      number=1, type=8, cpp_type=7, label=1,\n      has_default_value=False, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='moving_average_fraction', full_name='caffe.BatchNormParameter.moving_average_fraction', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.999,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='eps', full_name='caffe.BatchNormParameter.eps', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1e-05,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6046,\n  serialized_end=6152,\n)\n\n\n_BIASPARAMETER = _descriptor.Descriptor(\n  name='BiasParameter',\n  full_name='caffe.BiasParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.BiasParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.BiasParameter.num_axes', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='filler', full_name='caffe.BiasParameter.filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6154,\n  serialized_end=6247,\n)\n\n\n_CONTRASTIVELOSSPARAMETER = _descriptor.Descriptor(\n  name='ContrastiveLossParameter',\n  full_name='caffe.ContrastiveLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='margin', full_name='caffe.ContrastiveLossParameter.margin', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='legacy_version', full_name='caffe.ContrastiveLossParameter.legacy_version', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6249,\n  
serialized_end=6325,\n)\n\n\n_CONVOLUTIONPARAMETER = _descriptor.Descriptor(\n  name='ConvolutionParameter',\n  full_name='caffe.ConvolutionParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.ConvolutionParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.ConvolutionParameter.bias_term', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.ConvolutionParameter.pad', index=2,\n      number=3, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_size', full_name='caffe.ConvolutionParameter.kernel_size', index=3,\n      number=4, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.ConvolutionParameter.stride', index=4,\n      number=6, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n 
     name='dilation', full_name='caffe.ConvolutionParameter.dilation', index=5,\n      number=18, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad_h', full_name='caffe.ConvolutionParameter.pad_h', index=6,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad_w', full_name='caffe.ConvolutionParameter.pad_w', index=7,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_h', full_name='caffe.ConvolutionParameter.kernel_h', index=8,\n      number=11, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_w', full_name='caffe.ConvolutionParameter.kernel_w', index=9,\n      number=12, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_h', full_name='caffe.ConvolutionParameter.stride_h', index=10,\n      number=13, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_w', full_name='caffe.ConvolutionParameter.stride_w', index=11,\n      number=14, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='group', full_name='caffe.ConvolutionParameter.group', index=12,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.ConvolutionParameter.weight_filler', index=13,\n      number=7, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.ConvolutionParameter.bias_filler', index=14,\n      number=8, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.ConvolutionParameter.engine', index=15,\n      number=15, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ConvolutionParameter.axis', index=16,\n      number=16, type=5, cpp_type=1, label=1,\n      has_default_value=True, 
default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_nd_im2col', full_name='caffe.ConvolutionParameter.force_nd_im2col', index=17,\n      number=17, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _CONVOLUTIONPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6328,\n  serialized_end=6836,\n)\n\n\n_DEFORMABLECONVOLUTIONPARAMETER = _descriptor.Descriptor(\n  name='DeformableConvolutionParameter',\n  full_name='caffe.DeformableConvolutionParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.DeformableConvolutionParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.DeformableConvolutionParameter.bias_term', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.DeformableConvolutionParameter.pad', index=2,\n      number=3, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_size', full_name='caffe.DeformableConvolutionParameter.kernel_size', index=3,\n      number=4, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.DeformableConvolutionParameter.stride', index=4,\n      number=6, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dilation', full_name='caffe.DeformableConvolutionParameter.dilation', index=5,\n      number=18, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad_h', full_name='caffe.DeformableConvolutionParameter.pad_h', index=6,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad_w', full_name='caffe.DeformableConvolutionParameter.pad_w', index=7,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_h', full_name='caffe.DeformableConvolutionParameter.kernel_h', index=8,\n      number=11, type=13, 
cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_w', full_name='caffe.DeformableConvolutionParameter.kernel_w', index=9,\n      number=12, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_h', full_name='caffe.DeformableConvolutionParameter.stride_h', index=10,\n      number=13, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_w', full_name='caffe.DeformableConvolutionParameter.stride_w', index=11,\n      number=14, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='group', full_name='caffe.DeformableConvolutionParameter.group', index=12,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='deformable_group', full_name='caffe.DeformableConvolutionParameter.deformable_group', index=13,\n      number=19, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=4,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.DeformableConvolutionParameter.weight_filler', index=14,\n      number=7, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.DeformableConvolutionParameter.bias_filler', index=15,\n      number=8, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.DeformableConvolutionParameter.engine', index=16,\n      number=15, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.DeformableConvolutionParameter.axis', index=17,\n      number=16, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_nd_im2col', full_name='caffe.DeformableConvolutionParameter.force_nd_im2col', index=18,\n      number=17, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _DEFORMABLECONVOLUTIONPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  
extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=6839,\n  serialized_end=7396,\n)\n\n\n_DATAPARAMETER = _descriptor.Descriptor(\n  name='DataParameter',\n  full_name='caffe.DataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.DataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.DataParameter.batch_size', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.DataParameter.rand_skip', index=2,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='backend', full_name='caffe.DataParameter.backend', index=3,\n      number=8, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.DataParameter.scale', index=4,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.DataParameter.mean_file', index=5,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.DataParameter.crop_size', index=6,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.DataParameter.mirror', index=7,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='force_encoded_color', full_name='caffe.DataParameter.force_encoded_color', index=8,\n      number=9, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='prefetch', full_name='caffe.DataParameter.prefetch', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=4,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _DATAPARAMETER_DB,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7399,\n  serialized_end=7691,\n)\n\n\n_CROPPARAMETER = 
_descriptor.Descriptor(\n  name='CropParameter',\n  full_name='caffe.CropParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.CropParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=2,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='offset', full_name='caffe.CropParameter.offset', index=1,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7693,\n  serialized_end=7741,\n)\n\n\n_DROPOUTPARAMETER = _descriptor.Descriptor(\n  name='DropoutParameter',\n  full_name='caffe.DropoutParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='dropout_ratio', full_name='caffe.DropoutParameter.dropout_ratio', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale_train', full_name='caffe.DropoutParameter.scale_train', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  
options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7743,\n  serialized_end=7816,\n)\n\n\n_DUMMYDATAPARAMETER = _descriptor.Descriptor(\n  name='DummyDataParameter',\n  full_name='caffe.DummyDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='data_filler', full_name='caffe.DummyDataParameter.data_filler', index=0,\n      number=1, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.DummyDataParameter.shape', index=1,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='num', full_name='caffe.DummyDataParameter.num', index=2,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.DummyDataParameter.channels', index=3,\n      number=3, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.DummyDataParameter.height', index=4,\n      number=4, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.DummyDataParameter.width', index=5,\n      number=5, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7819,\n  serialized_end=7979,\n)\n\n\n_ELTWISEPARAMETER = _descriptor.Descriptor(\n  name='EltwiseParameter',\n  full_name='caffe.EltwiseParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='operation', full_name='caffe.EltwiseParameter.operation', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='coeff', full_name='caffe.EltwiseParameter.coeff', index=1,\n      number=2, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stable_prod_grad', full_name='caffe.EltwiseParameter.stable_prod_grad', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _ELTWISEPARAMETER_ELTWISEOP,\n  ],\n  options=None,\n  is_extendable=False,\n  
extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=7982,\n  serialized_end=8147,\n)\n\n\n_ELUPARAMETER = _descriptor.Descriptor(\n  name='ELUParameter',\n  full_name='caffe.ELUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='alpha', full_name='caffe.ELUParameter.alpha', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8149,\n  serialized_end=8181,\n)\n\n\n_EMBEDPARAMETER = _descriptor.Descriptor(\n  name='EmbedParameter',\n  full_name='caffe.EmbedParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.EmbedParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='input_dim', full_name='caffe.EmbedParameter.input_dim', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.EmbedParameter.bias_term', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.EmbedParameter.weight_filler', index=3,\n      number=4, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.EmbedParameter.bias_filler', index=4,\n      number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8184,\n  serialized_end=8356,\n)\n\n\n_EXPPARAMETER = _descriptor.Descriptor(\n  name='ExpParameter',\n  full_name='caffe.ExpParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='base', full_name='caffe.ExpParameter.base', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.ExpParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.ExpParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      
message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8358,\n  serialized_end=8426,\n)\n\n\n_FLATTENPARAMETER = _descriptor.Descriptor(\n  name='FlattenParameter',\n  full_name='caffe.FlattenParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.FlattenParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='end_axis', full_name='caffe.FlattenParameter.end_axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8428,\n  serialized_end=8485,\n)\n\n\n_HDF5DATAPARAMETER = _descriptor.Descriptor(\n  name='HDF5DataParameter',\n  full_name='caffe.HDF5DataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.HDF5DataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n   
   name='batch_size', full_name='caffe.HDF5DataParameter.batch_size', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shuffle', full_name='caffe.HDF5DataParameter.shuffle', index=2,\n      number=3, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8487,\n  serialized_end=8566,\n)\n\n\n_HDF5OUTPUTPARAMETER = _descriptor.Descriptor(\n  name='HDF5OutputParameter',\n  full_name='caffe.HDF5OutputParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='file_name', full_name='caffe.HDF5OutputParameter.file_name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8568,\n  serialized_end=8608,\n)\n\n\n_HINGELOSSPARAMETER = _descriptor.Descriptor(\n  name='HingeLossParameter',\n  full_name='caffe.HingeLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='norm', full_name='caffe.HingeLossParameter.norm', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _HINGELOSSPARAMETER_NORM,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8610,\n  serialized_end=8704,\n)\n\n\n_IMAGEDATAPARAMETER = _descriptor.Descriptor(\n  name='ImageDataParameter',\n  full_name='caffe.ImageDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.ImageDataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.ImageDataParameter.batch_size', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.ImageDataParameter.rand_skip', index=2,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shuffle', full_name='caffe.ImageDataParameter.shuffle', index=3,\n      number=8, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
options=None),\n    _descriptor.FieldDescriptor(\n      name='new_height', full_name='caffe.ImageDataParameter.new_height', index=4,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='new_width', full_name='caffe.ImageDataParameter.new_width', index=5,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='is_color', full_name='caffe.ImageDataParameter.is_color', index=6,\n      number=11, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.ImageDataParameter.scale', index=7,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.ImageDataParameter.mean_file', index=8,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.ImageDataParameter.crop_size', index=9,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, 
enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.ImageDataParameter.mirror', index=10,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='root_folder', full_name='caffe.ImageDataParameter.root_folder', index=11,\n      number=12, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8707,\n  serialized_end=8986,\n)\n\n\n_INFOGAINLOSSPARAMETER = _descriptor.Descriptor(\n  name='InfogainLossParameter',\n  full_name='caffe.InfogainLossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.InfogainLossParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=8988,\n  serialized_end=9027,\n)\n\n\n_INNERPRODUCTPARAMETER = _descriptor.Descriptor(\n  name='InnerProductParameter',\n  full_name='caffe.InnerProductParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  
containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.InnerProductParameter.num_output', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.InnerProductParameter.bias_term', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.InnerProductParameter.weight_filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.InnerProductParameter.bias_filler', index=3,\n      number=4, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.InnerProductParameter.axis', index=4,\n      number=5, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9030,\n  
serialized_end=9207,\n)\n\n\n_LOGPARAMETER = _descriptor.Descriptor(\n  name='LogParameter',\n  full_name='caffe.LogParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='base', full_name='caffe.LogParameter.base', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.LogParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.LogParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9209,\n  serialized_end=9277,\n)\n\n\n_LRNPARAMETER = _descriptor.Descriptor(\n  name='LRNParameter',\n  full_name='caffe.LRNParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='local_size', full_name='caffe.LRNParameter.local_size', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='alpha', 
full_name='caffe.LRNParameter.alpha', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='beta', full_name='caffe.LRNParameter.beta', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.75,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='norm_region', full_name='caffe.LRNParameter.norm_region', index=3,\n      number=4, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='k', full_name='caffe.LRNParameter.k', index=4,\n      number=5, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.LRNParameter.engine', index=5,\n      number=6, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _LRNPARAMETER_NORMREGION,\n    _LRNPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9280,\n  serialized_end=9592,\n)\n\n\n_MEMORYDATAPARAMETER = _descriptor.Descriptor(\n  name='MemoryDataParameter',\n  full_name='caffe.MemoryDataParameter',\n 
 filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.MemoryDataParameter.batch_size', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='channels', full_name='caffe.MemoryDataParameter.channels', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='height', full_name='caffe.MemoryDataParameter.height', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='width', full_name='caffe.MemoryDataParameter.width', index=3,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9594,\n  serialized_end=9684,\n)\n\n\n_MVNPARAMETER = _descriptor.Descriptor(\n  name='MVNParameter',\n  full_name='caffe.MVNParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='normalize_variance', full_name='caffe.MVNParameter.normalize_variance', index=0,\n      number=1, type=8, 
cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='across_channels', full_name='caffe.MVNParameter.across_channels', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='eps', full_name='caffe.MVNParameter.eps', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1e-09,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9686,\n  serialized_end=9786,\n)\n\n\n_POOLINGPARAMETER = _descriptor.Descriptor(\n  name='PoolingParameter',\n  full_name='caffe.PoolingParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='pool', full_name='caffe.PoolingParameter.pool', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.PoolingParameter.pad', index=1,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      
name='pad_h', full_name='caffe.PoolingParameter.pad_h', index=2,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad_w', full_name='caffe.PoolingParameter.pad_w', index=3,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_size', full_name='caffe.PoolingParameter.kernel_size', index=4,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_h', full_name='caffe.PoolingParameter.kernel_h', index=5,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernel_w', full_name='caffe.PoolingParameter.kernel_w', index=6,\n      number=6, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.PoolingParameter.stride', index=7,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_h', full_name='caffe.PoolingParameter.stride_h', index=8,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride_w', full_name='caffe.PoolingParameter.stride_w', index=9,\n      number=8, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.PoolingParameter.engine', index=10,\n      number=11, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='global_pooling', full_name='caffe.PoolingParameter.global_pooling', index=11,\n      number=12, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _POOLINGPARAMETER_POOLMETHOD,\n    _POOLINGPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=9789,\n  serialized_end=10207,\n)\n\n\n_POWERPARAMETER = _descriptor.Descriptor(\n  name='PowerParameter',\n  full_name='caffe.PowerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='power', full_name='caffe.PowerParameter.power', index=0,\n      number=1, type=2, cpp_type=6, 
label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.PowerParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shift', full_name='caffe.PowerParameter.shift', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10209,\n  serialized_end=10279,\n)\n\n\n_PYTHONPARAMETER = _descriptor.Descriptor(\n  name='PythonParameter',\n  full_name='caffe.PythonParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='module', full_name='caffe.PythonParameter.module', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.PythonParameter.layer', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='param_str', full_name='caffe.PythonParameter.param_str', index=2,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='share_in_parallel', full_name='caffe.PythonParameter.share_in_parallel', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10281,\n  serialized_end=10384,\n)\n\n\n_REDUCTIONPARAMETER = _descriptor.Descriptor(\n  name='ReductionParameter',\n  full_name='caffe.ReductionParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='operation', full_name='caffe.ReductionParameter.operation', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ReductionParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='coeff', full_name='caffe.ReductionParameter.coeff', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      has_default_value=True, 
default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _REDUCTIONPARAMETER_REDUCTIONOP,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10387,\n  serialized_end=10560,\n)\n\n\n_RELUPARAMETER = _descriptor.Descriptor(\n  name='ReLUParameter',\n  full_name='caffe.ReLUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='negative_slope', full_name='caffe.ReLUParameter.negative_slope', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.ReLUParameter.engine', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _RELUPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10563,\n  serialized_end=10704,\n)\n\n\n_RESHAPEPARAMETER = _descriptor.Descriptor(\n  name='ReshapeParameter',\n  full_name='caffe.ReshapeParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='shape', full_name='caffe.ReshapeParameter.shape', index=0,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ReshapeParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.ReshapeParameter.num_axes', index=2,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=-1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10706,\n  serialized_end=10796,\n)\n\n\n_ROIPOOLINGPARAMETER = _descriptor.Descriptor(\n  name='ROIPoolingParameter',\n  full_name='caffe.ROIPoolingParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='pooled_h', full_name='caffe.ROIPoolingParameter.pooled_h', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pooled_w', full_name='caffe.ROIPoolingParameter.pooled_w', index=1,\n      number=2, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='spatial_scale', full_name='caffe.ROIPoolingParameter.spatial_scale', index=2,\n      number=3, type=2, cpp_type=6, label=1,\n      
has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10798,\n  serialized_end=10887,\n)\n\n\n_SCALEPARAMETER = _descriptor.Descriptor(\n  name='ScaleParameter',\n  full_name='caffe.ScaleParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.ScaleParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='num_axes', full_name='caffe.ScaleParameter.num_axes', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='filler', full_name='caffe.ScaleParameter.filler', index=2,\n      number=3, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_term', full_name='caffe.ScaleParameter.bias_term', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', 
full_name='caffe.ScaleParameter.bias_filler', index=4,\n      number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=10890,\n  serialized_end=11055,\n)\n\n\n_SIGMOIDPARAMETER = _descriptor.Descriptor(\n  name='SigmoidParameter',\n  full_name='caffe.SigmoidParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SigmoidParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SIGMOIDPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11057,\n  serialized_end=11177,\n)\n\n\n_SMOOTHL1LOSSPARAMETER = _descriptor.Descriptor(\n  name='SmoothL1LossParameter',\n  full_name='caffe.SmoothL1LossParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='sigma', full_name='caffe.SmoothL1LossParameter.sigma', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11179,\n  
serialized_end=11220,\n)\n\n\n_SLICEPARAMETER = _descriptor.Descriptor(\n  name='SliceParameter',\n  full_name='caffe.SliceParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.SliceParameter.axis', index=0,\n      number=3, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='slice_point', full_name='caffe.SliceParameter.slice_point', index=1,\n      number=2, type=13, cpp_type=3, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='slice_dim', full_name='caffe.SliceParameter.slice_dim', index=2,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11222,\n  serialized_end=11298,\n)\n\n\n_SOFTMAXPARAMETER = _descriptor.Descriptor(\n  name='SoftmaxParameter',\n  full_name='caffe.SoftmaxParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SoftmaxParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.SoftmaxParameter.axis', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SOFTMAXPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11301,\n  serialized_end=11438,\n)\n\n\n_TANHPARAMETER = _descriptor.Descriptor(\n  name='TanHParameter',\n  full_name='caffe.TanHParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.TanHParameter.engine', index=0,\n      number=1, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _TANHPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11440,\n  serialized_end=11554,\n)\n\n\n_TILEPARAMETER = _descriptor.Descriptor(\n  name='TileParameter',\n  full_name='caffe.TileParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='axis', full_name='caffe.TileParameter.axis', index=0,\n      number=1, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='tiles', full_name='caffe.TileParameter.tiles', index=1,\n      number=2, type=5, cpp_type=1, label=1,\n      
has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11556,\n  serialized_end=11603,\n)\n\n\n_THRESHOLDPARAMETER = _descriptor.Descriptor(\n  name='ThresholdParameter',\n  full_name='caffe.ThresholdParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='threshold', full_name='caffe.ThresholdParameter.threshold', index=0,\n      number=1, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11605,\n  serialized_end=11647,\n)\n\n\n_WINDOWDATAPARAMETER = _descriptor.Descriptor(\n  name='WindowDataParameter',\n  full_name='caffe.WindowDataParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='source', full_name='caffe.WindowDataParameter.source', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.WindowDataParameter.scale', index=1,\n      number=2, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mean_file', full_name='caffe.WindowDataParameter.mean_file', index=2,\n      number=3, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='batch_size', full_name='caffe.WindowDataParameter.batch_size', index=3,\n      number=4, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_size', full_name='caffe.WindowDataParameter.crop_size', index=4,\n      number=5, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.WindowDataParameter.mirror', index=5,\n      number=6, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='fg_threshold', full_name='caffe.WindowDataParameter.fg_threshold', index=6,\n      number=7, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bg_threshold', full_name='caffe.WindowDataParameter.bg_threshold', index=7,\n      number=8, type=2, cpp_type=6, label=1,\n      
has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='fg_fraction', full_name='caffe.WindowDataParameter.fg_fraction', index=8,\n      number=9, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.25,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='context_pad', full_name='caffe.WindowDataParameter.context_pad', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='crop_mode', full_name='caffe.WindowDataParameter.crop_mode', index=10,\n      number=11, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"warp\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='cache_images', full_name='caffe.WindowDataParameter.cache_images', index=11,\n      number=12, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='root_folder', full_name='caffe.WindowDataParameter.root_folder', index=12,\n      number=13, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  
extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11650,\n  serialized_end=11971,\n)\n\n\n_SPPPARAMETER = _descriptor.Descriptor(\n  name='SPPParameter',\n  full_name='caffe.SPPParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='pyramid_height', full_name='caffe.SPPParameter.pyramid_height', index=0,\n      number=1, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pool', full_name='caffe.SPPParameter.pool', index=1,\n      number=2, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='engine', full_name='caffe.SPPParameter.engine', index=2,\n      number=6, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _SPPPARAMETER_POOLMETHOD,\n    _SPPPARAMETER_ENGINE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=11974,\n  serialized_end=12209,\n)\n\n\n_V1LAYERPARAMETER = _descriptor.Descriptor(\n  name='V1LayerParameter',\n  full_name='caffe.V1LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='bottom', full_name='caffe.V1LayerParameter.bottom', index=0,\n      number=2, type=9, cpp_type=9, 
label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='top', full_name='caffe.V1LayerParameter.top', index=1,\n      number=3, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.V1LayerParameter.name', index=2,\n      number=4, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='include', full_name='caffe.V1LayerParameter.include', index=3,\n      number=32, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='exclude', full_name='caffe.V1LayerParameter.exclude', index=4,\n      number=33, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.V1LayerParameter.type', index=5,\n      number=5, type=14, cpp_type=8, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blobs', 
full_name='caffe.V1LayerParameter.blobs', index=6,\n      number=6, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='param', full_name='caffe.V1LayerParameter.param', index=7,\n      number=1001, type=9, cpp_type=9, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blob_share_mode', full_name='caffe.V1LayerParameter.blob_share_mode', index=8,\n      number=1002, type=14, cpp_type=8, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blobs_lr', full_name='caffe.V1LayerParameter.blobs_lr', index=9,\n      number=7, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.V1LayerParameter.weight_decay', index=10,\n      number=8, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='loss_weight', full_name='caffe.V1LayerParameter.loss_weight', index=11,\n      number=35, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, 
extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='accuracy_param', full_name='caffe.V1LayerParameter.accuracy_param', index=12,\n      number=27, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='argmax_param', full_name='caffe.V1LayerParameter.argmax_param', index=13,\n      number=23, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='concat_param', full_name='caffe.V1LayerParameter.concat_param', index=14,\n      number=9, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='conadd_param', full_name='caffe.V1LayerParameter.conadd_param', index=15,\n      number=99, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='contrastive_loss_param', full_name='caffe.V1LayerParameter.contrastive_loss_param', index=16,\n      number=40, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='convolution_param', full_name='caffe.V1LayerParameter.convolution_param', index=17,\n      
number=10, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='deformable_convolution_param', full_name='caffe.V1LayerParameter.deformable_convolution_param', index=18,\n      number=999, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='data_param', full_name='caffe.V1LayerParameter.data_param', index=19,\n      number=11, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dropout_param', full_name='caffe.V1LayerParameter.dropout_param', index=20,\n      number=12, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dummy_data_param', full_name='caffe.V1LayerParameter.dummy_data_param', index=21,\n      number=26, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='eltwise_param', full_name='caffe.V1LayerParameter.eltwise_param', index=22,\n      number=24, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='exp_param', full_name='caffe.V1LayerParameter.exp_param', index=23,\n      number=41, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hdf5_data_param', full_name='caffe.V1LayerParameter.hdf5_data_param', index=24,\n      number=13, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.V1LayerParameter.hdf5_output_param', index=25,\n      number=14, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hinge_loss_param', full_name='caffe.V1LayerParameter.hinge_loss_param', index=26,\n      number=29, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='image_data_param', full_name='caffe.V1LayerParameter.image_data_param', index=27,\n      number=15, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='infogain_loss_param', full_name='caffe.V1LayerParameter.infogain_loss_param', 
index=28,\n      number=16, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='inner_product_param', full_name='caffe.V1LayerParameter.inner_product_param', index=29,\n      number=17, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='lrn_param', full_name='caffe.V1LayerParameter.lrn_param', index=30,\n      number=18, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='memory_data_param', full_name='caffe.V1LayerParameter.memory_data_param', index=31,\n      number=22, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mvn_param', full_name='caffe.V1LayerParameter.mvn_param', index=32,\n      number=34, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pooling_param', full_name='caffe.V1LayerParameter.pooling_param', index=33,\n      number=19, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      
is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='power_param', full_name='caffe.V1LayerParameter.power_param', index=34,\n      number=21, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='relu_param', full_name='caffe.V1LayerParameter.relu_param', index=35,\n      number=30, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='sigmoid_param', full_name='caffe.V1LayerParameter.sigmoid_param', index=36,\n      number=38, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='softmax_param', full_name='caffe.V1LayerParameter.softmax_param', index=37,\n      number=39, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='slice_param', full_name='caffe.V1LayerParameter.slice_param', index=38,\n      number=31, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='tanh_param', full_name='caffe.V1LayerParameter.tanh_param', index=39,\n      number=37, type=11, 
cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='threshold_param', full_name='caffe.V1LayerParameter.threshold_param', index=40,\n      number=25, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='window_data_param', full_name='caffe.V1LayerParameter.window_data_param', index=41,\n      number=20, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='transform_param', full_name='caffe.V1LayerParameter.transform_param', index=42,\n      number=36, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='loss_param', full_name='caffe.V1LayerParameter.loss_param', index=43,\n      number=42, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='layer', full_name='caffe.V1LayerParameter.layer', index=44,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  
],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _V1LAYERPARAMETER_LAYERTYPE,\n    _V1LAYERPARAMETER_DIMCHECKMODE,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=12212,\n  serialized_end=14903,\n)\n\n\n_V0LAYERPARAMETER = _descriptor.Descriptor(\n  name='V0LayerParameter',\n  full_name='caffe.V0LayerParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='name', full_name='caffe.V0LayerParameter.name', index=0,\n      number=1, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='type', full_name='caffe.V0LayerParameter.type', index=1,\n      number=2, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='num_output', full_name='caffe.V0LayerParameter.num_output', index=2,\n      number=3, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='biasterm', full_name='caffe.V0LayerParameter.biasterm', index=3,\n      number=4, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=True,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_filler', full_name='caffe.V0LayerParameter.weight_filler', index=4,\n     
 number=5, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='bias_filler', full_name='caffe.V0LayerParameter.bias_filler', index=5,\n      number=6, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pad', full_name='caffe.V0LayerParameter.pad', index=6,\n      number=7, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='kernelsize', full_name='caffe.V0LayerParameter.kernelsize', index=7,\n      number=8, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='group', full_name='caffe.V0LayerParameter.group', index=8,\n      number=9, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='stride', full_name='caffe.V0LayerParameter.stride', index=9,\n      number=10, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='pool', 
full_name='caffe.V0LayerParameter.pool', index=10,\n      number=11, type=14, cpp_type=8, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='dropout_ratio', full_name='caffe.V0LayerParameter.dropout_ratio', index=11,\n      number=12, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='local_size', full_name='caffe.V0LayerParameter.local_size', index=12,\n      number=13, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='alpha', full_name='caffe.V0LayerParameter.alpha', index=13,\n      number=14, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='beta', full_name='caffe.V0LayerParameter.beta', index=14,\n      number=15, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.75,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='k', full_name='caffe.V0LayerParameter.k', index=15,\n      number=22, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    
_descriptor.FieldDescriptor(\n      name='source', full_name='caffe.V0LayerParameter.source', index=16,\n      number=16, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='scale', full_name='caffe.V0LayerParameter.scale', index=17,\n      number=17, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='meanfile', full_name='caffe.V0LayerParameter.meanfile', index=18,\n      number=18, type=9, cpp_type=9, label=1,\n      has_default_value=False, default_value=_b(\"\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='batchsize', full_name='caffe.V0LayerParameter.batchsize', index=19,\n      number=19, type=13, cpp_type=3, label=1,\n      has_default_value=False, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='cropsize', full_name='caffe.V0LayerParameter.cropsize', index=20,\n      number=20, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='mirror', full_name='caffe.V0LayerParameter.mirror', index=21,\n      number=21, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, 
enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blobs', full_name='caffe.V0LayerParameter.blobs', index=22,\n      number=50, type=11, cpp_type=10, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='blobs_lr', full_name='caffe.V0LayerParameter.blobs_lr', index=23,\n      number=51, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='weight_decay', full_name='caffe.V0LayerParameter.weight_decay', index=24,\n      number=52, type=2, cpp_type=6, label=3,\n      has_default_value=False, default_value=[],\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='rand_skip', full_name='caffe.V0LayerParameter.rand_skip', index=25,\n      number=53, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='det_fg_threshold', full_name='caffe.V0LayerParameter.det_fg_threshold', index=26,\n      number=54, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='det_bg_threshold', full_name='caffe.V0LayerParameter.det_bg_threshold', index=27,\n      
number=55, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.5,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='det_fg_fraction', full_name='caffe.V0LayerParameter.det_fg_fraction', index=28,\n      number=56, type=2, cpp_type=6, label=1,\n      has_default_value=True, default_value=0.25,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='det_context_pad', full_name='caffe.V0LayerParameter.det_context_pad', index=29,\n      number=58, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='det_crop_mode', full_name='caffe.V0LayerParameter.det_crop_mode', index=30,\n      number=59, type=9, cpp_type=9, label=1,\n      has_default_value=True, default_value=_b(\"warp\").decode('utf-8'),\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='new_num', full_name='caffe.V0LayerParameter.new_num', index=31,\n      number=60, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='new_channels', full_name='caffe.V0LayerParameter.new_channels', index=32,\n      number=61, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      
options=None),\n    _descriptor.FieldDescriptor(\n      name='new_height', full_name='caffe.V0LayerParameter.new_height', index=33,\n      number=62, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='new_width', full_name='caffe.V0LayerParameter.new_width', index=34,\n      number=63, type=5, cpp_type=1, label=1,\n      has_default_value=True, default_value=0,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='shuffle_images', full_name='caffe.V0LayerParameter.shuffle_images', index=35,\n      number=64, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='concat_dim', full_name='caffe.V0LayerParameter.concat_dim', index=36,\n      number=65, type=13, cpp_type=3, label=1,\n      has_default_value=True, default_value=1,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='hdf5_output_param', full_name='caffe.V0LayerParameter.hdf5_output_param', index=37,\n      number=1001, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n    _V0LAYERPARAMETER_POOLMETHOD,\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=14906,\n 
 serialized_end=15927,\n)\n\n\n_PRELUPARAMETER = _descriptor.Descriptor(\n  name='PReLUParameter',\n  full_name='caffe.PReLUParameter',\n  filename=None,\n  file=DESCRIPTOR,\n  containing_type=None,\n  fields=[\n    _descriptor.FieldDescriptor(\n      name='filler', full_name='caffe.PReLUParameter.filler', index=0,\n      number=1, type=11, cpp_type=10, label=1,\n      has_default_value=False, default_value=None,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n    _descriptor.FieldDescriptor(\n      name='channel_shared', full_name='caffe.PReLUParameter.channel_shared', index=1,\n      number=2, type=8, cpp_type=7, label=1,\n      has_default_value=True, default_value=False,\n      message_type=None, enum_type=None, containing_type=None,\n      is_extension=False, extension_scope=None,\n      options=None),\n  ],\n  extensions=[\n  ],\n  nested_types=[],\n  enum_types=[\n  ],\n  options=None,\n  is_extendable=False,\n  extension_ranges=[],\n  oneofs=[\n  ],\n  serialized_start=15929,\n  serialized_end=16016,\n)\n\n_BLOBPROTO.fields_by_name['shape'].message_type = _BLOBSHAPE\n_BLOBPROTOVECTOR.fields_by_name['blobs'].message_type = _BLOBPROTO\n_FILLERPARAMETER.fields_by_name['variance_norm'].enum_type = _FILLERPARAMETER_VARIANCENORM\n_FILLERPARAMETER_VARIANCENORM.containing_type = _FILLERPARAMETER\n_NETPARAMETER.fields_by_name['input_shape'].message_type = _BLOBSHAPE\n_NETPARAMETER.fields_by_name['state'].message_type = _NETSTATE\n_NETPARAMETER.fields_by_name['layer'].message_type = _LAYERPARAMETER\n_NETPARAMETER.fields_by_name['layers'].message_type = _V1LAYERPARAMETER\n_SOLVERPARAMETER.fields_by_name['net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['train_net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['test_net_param'].message_type = _NETPARAMETER\n_SOLVERPARAMETER.fields_by_name['train_state'].message_type = 
_NETSTATE\n_SOLVERPARAMETER.fields_by_name['test_state'].message_type = _NETSTATE\n_SOLVERPARAMETER.fields_by_name['snapshot_format'].enum_type = _SOLVERPARAMETER_SNAPSHOTFORMAT\n_SOLVERPARAMETER.fields_by_name['solver_mode'].enum_type = _SOLVERPARAMETER_SOLVERMODE\n_SOLVERPARAMETER.fields_by_name['solver_type'].enum_type = _SOLVERPARAMETER_SOLVERTYPE\n_SOLVERPARAMETER_SNAPSHOTFORMAT.containing_type = _SOLVERPARAMETER\n_SOLVERPARAMETER_SOLVERMODE.containing_type = _SOLVERPARAMETER\n_SOLVERPARAMETER_SOLVERTYPE.containing_type = _SOLVERPARAMETER\n_SOLVERSTATE.fields_by_name['history'].message_type = _BLOBPROTO\n_NETSTATE.fields_by_name['phase'].enum_type = _PHASE\n_NETSTATERULE.fields_by_name['phase'].enum_type = _PHASE\n_PARAMSPEC.fields_by_name['share_mode'].enum_type = _PARAMSPEC_DIMCHECKMODE\n_PARAMSPEC_DIMCHECKMODE.containing_type = _PARAMSPEC\n_LAYERPARAMETER.fields_by_name['phase'].enum_type = _PHASE\n_LAYERPARAMETER.fields_by_name['param'].message_type = _PARAMSPEC\n_LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_LAYERPARAMETER.fields_by_name['include'].message_type = _NETSTATERULE\n_LAYERPARAMETER.fields_by_name['exclude'].message_type = _NETSTATERULE\n_LAYERPARAMETER.fields_by_name['transform_param'].message_type = _TRANSFORMATIONPARAMETER\n_LAYERPARAMETER.fields_by_name['loss_param'].message_type = _LOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['accuracy_param'].message_type = _ACCURACYPARAMETER\n_LAYERPARAMETER.fields_by_name['argmax_param'].message_type = _ARGMAXPARAMETER\n_LAYERPARAMETER.fields_by_name['batch_norm_param'].message_type = _BATCHNORMPARAMETER\n_LAYERPARAMETER.fields_by_name['bias_param'].message_type = _BIASPARAMETER\n_LAYERPARAMETER.fields_by_name['concat_param'].message_type = _CONCATPARAMETER\n_LAYERPARAMETER.fields_by_name['conadd_param'].message_type = _CONADDPARAMETER\n_LAYERPARAMETER.fields_by_name['contrastive_loss_param'].message_type = 
_CONTRASTIVELOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['convolution_param'].message_type = _CONVOLUTIONPARAMETER\n_LAYERPARAMETER.fields_by_name['deformable_convolution_param'].message_type = _DEFORMABLECONVOLUTIONPARAMETER\n_LAYERPARAMETER.fields_by_name['data_param'].message_type = _DATAPARAMETER\n_LAYERPARAMETER.fields_by_name['dropout_param'].message_type = _DROPOUTPARAMETER\n_LAYERPARAMETER.fields_by_name['dummy_data_param'].message_type = _DUMMYDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['eltwise_param'].message_type = _ELTWISEPARAMETER\n_LAYERPARAMETER.fields_by_name['elu_param'].message_type = _ELUPARAMETER\n_LAYERPARAMETER.fields_by_name['embed_param'].message_type = _EMBEDPARAMETER\n_LAYERPARAMETER.fields_by_name['exp_param'].message_type = _EXPPARAMETER\n_LAYERPARAMETER.fields_by_name['flatten_param'].message_type = _FLATTENPARAMETER\n_LAYERPARAMETER.fields_by_name['hdf5_data_param'].message_type = _HDF5DATAPARAMETER\n_LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = _HDF5OUTPUTPARAMETER\n_LAYERPARAMETER.fields_by_name['hinge_loss_param'].message_type = _HINGELOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['image_data_param'].message_type = _IMAGEDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['infogain_loss_param'].message_type = _INFOGAINLOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['inner_product_param'].message_type = _INNERPRODUCTPARAMETER\n_LAYERPARAMETER.fields_by_name['log_param'].message_type = _LOGPARAMETER\n_LAYERPARAMETER.fields_by_name['lrn_param'].message_type = _LRNPARAMETER\n_LAYERPARAMETER.fields_by_name['memory_data_param'].message_type = _MEMORYDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['mvn_param'].message_type = _MVNPARAMETER\n_LAYERPARAMETER.fields_by_name['pooling_param'].message_type = _POOLINGPARAMETER\n_LAYERPARAMETER.fields_by_name['power_param'].message_type = _POWERPARAMETER\n_LAYERPARAMETER.fields_by_name['prelu_param'].message_type = 
_PRELUPARAMETER\n_LAYERPARAMETER.fields_by_name['python_param'].message_type = _PYTHONPARAMETER\n_LAYERPARAMETER.fields_by_name['reduction_param'].message_type = _REDUCTIONPARAMETER\n_LAYERPARAMETER.fields_by_name['relu_param'].message_type = _RELUPARAMETER\n_LAYERPARAMETER.fields_by_name['reshape_param'].message_type = _RESHAPEPARAMETER\n_LAYERPARAMETER.fields_by_name['roi_pooling_param'].message_type = _ROIPOOLINGPARAMETER\n_LAYERPARAMETER.fields_by_name['scale_param'].message_type = _SCALEPARAMETER\n_LAYERPARAMETER.fields_by_name['sigmoid_param'].message_type = _SIGMOIDPARAMETER\n_LAYERPARAMETER.fields_by_name['smooth_l1_loss_param'].message_type = _SMOOTHL1LOSSPARAMETER\n_LAYERPARAMETER.fields_by_name['softmax_param'].message_type = _SOFTMAXPARAMETER\n_LAYERPARAMETER.fields_by_name['spp_param'].message_type = _SPPPARAMETER\n_LAYERPARAMETER.fields_by_name['slice_param'].message_type = _SLICEPARAMETER\n_LAYERPARAMETER.fields_by_name['tanh_param'].message_type = _TANHPARAMETER\n_LAYERPARAMETER.fields_by_name['threshold_param'].message_type = _THRESHOLDPARAMETER\n_LAYERPARAMETER.fields_by_name['tile_param'].message_type = _TILEPARAMETER\n_LAYERPARAMETER.fields_by_name['window_data_param'].message_type = _WINDOWDATAPARAMETER\n_LAYERPARAMETER.fields_by_name['crop_param'].message_type = _CROPPARAMETER\n_LOSSPARAMETER.fields_by_name['normalization'].enum_type = _LOSSPARAMETER_NORMALIZATIONMODE\n_LOSSPARAMETER_NORMALIZATIONMODE.containing_type = _LOSSPARAMETER\n_BIASPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_CONVOLUTIONPARAMETER.fields_by_name['engine'].enum_type = _CONVOLUTIONPARAMETER_ENGINE\n_CONVOLUTIONPARAMETER_ENGINE.containing_type = _CONVOLUTIONPARAMETER\n_DEFORMABLECONVOLUTIONPARAMETER.fields_by_name['weight_filler'].message_type = 
_FILLERPARAMETER\n_DEFORMABLECONVOLUTIONPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_DEFORMABLECONVOLUTIONPARAMETER.fields_by_name['engine'].enum_type = _DEFORMABLECONVOLUTIONPARAMETER_ENGINE\n_DEFORMABLECONVOLUTIONPARAMETER_ENGINE.containing_type = _DEFORMABLECONVOLUTIONPARAMETER\n_DATAPARAMETER.fields_by_name['backend'].enum_type = _DATAPARAMETER_DB\n_DATAPARAMETER_DB.containing_type = _DATAPARAMETER\n_DUMMYDATAPARAMETER.fields_by_name['data_filler'].message_type = _FILLERPARAMETER\n_DUMMYDATAPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_ELTWISEPARAMETER.fields_by_name['operation'].enum_type = _ELTWISEPARAMETER_ELTWISEOP\n_ELTWISEPARAMETER_ELTWISEOP.containing_type = _ELTWISEPARAMETER\n_EMBEDPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_EMBEDPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_HINGELOSSPARAMETER.fields_by_name['norm'].enum_type = _HINGELOSSPARAMETER_NORM\n_HINGELOSSPARAMETER_NORM.containing_type = _HINGELOSSPARAMETER\n_INNERPRODUCTPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_INNERPRODUCTPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_LRNPARAMETER.fields_by_name['norm_region'].enum_type = _LRNPARAMETER_NORMREGION\n_LRNPARAMETER.fields_by_name['engine'].enum_type = _LRNPARAMETER_ENGINE\n_LRNPARAMETER_NORMREGION.containing_type = _LRNPARAMETER\n_LRNPARAMETER_ENGINE.containing_type = _LRNPARAMETER\n_POOLINGPARAMETER.fields_by_name['pool'].enum_type = _POOLINGPARAMETER_POOLMETHOD\n_POOLINGPARAMETER.fields_by_name['engine'].enum_type = _POOLINGPARAMETER_ENGINE\n_POOLINGPARAMETER_POOLMETHOD.containing_type = _POOLINGPARAMETER\n_POOLINGPARAMETER_ENGINE.containing_type = _POOLINGPARAMETER\n_REDUCTIONPARAMETER.fields_by_name['operation'].enum_type = _REDUCTIONPARAMETER_REDUCTIONOP\n_REDUCTIONPARAMETER_REDUCTIONOP.containing_type = _REDUCTIONPARAMETER\n_RELUPARAMETER.fields_by_name['engine'].enum_type = 
_RELUPARAMETER_ENGINE\n_RELUPARAMETER_ENGINE.containing_type = _RELUPARAMETER\n_RESHAPEPARAMETER.fields_by_name['shape'].message_type = _BLOBSHAPE\n_SCALEPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\n_SCALEPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_SIGMOIDPARAMETER.fields_by_name['engine'].enum_type = _SIGMOIDPARAMETER_ENGINE\n_SIGMOIDPARAMETER_ENGINE.containing_type = _SIGMOIDPARAMETER\n_SOFTMAXPARAMETER.fields_by_name['engine'].enum_type = _SOFTMAXPARAMETER_ENGINE\n_SOFTMAXPARAMETER_ENGINE.containing_type = _SOFTMAXPARAMETER\n_TANHPARAMETER.fields_by_name['engine'].enum_type = _TANHPARAMETER_ENGINE\n_TANHPARAMETER_ENGINE.containing_type = _TANHPARAMETER\n_SPPPARAMETER.fields_by_name['pool'].enum_type = _SPPPARAMETER_POOLMETHOD\n_SPPPARAMETER.fields_by_name['engine'].enum_type = _SPPPARAMETER_ENGINE\n_SPPPARAMETER_POOLMETHOD.containing_type = _SPPPARAMETER\n_SPPPARAMETER_ENGINE.containing_type = _SPPPARAMETER\n_V1LAYERPARAMETER.fields_by_name['include'].message_type = _NETSTATERULE\n_V1LAYERPARAMETER.fields_by_name['exclude'].message_type = _NETSTATERULE\n_V1LAYERPARAMETER.fields_by_name['type'].enum_type = _V1LAYERPARAMETER_LAYERTYPE\n_V1LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_V1LAYERPARAMETER.fields_by_name['blob_share_mode'].enum_type = _V1LAYERPARAMETER_DIMCHECKMODE\n_V1LAYERPARAMETER.fields_by_name['accuracy_param'].message_type = _ACCURACYPARAMETER\n_V1LAYERPARAMETER.fields_by_name['argmax_param'].message_type = _ARGMAXPARAMETER\n_V1LAYERPARAMETER.fields_by_name['concat_param'].message_type = _CONCATPARAMETER\n_V1LAYERPARAMETER.fields_by_name['conadd_param'].message_type = _CONADDPARAMETER\n_V1LAYERPARAMETER.fields_by_name['contrastive_loss_param'].message_type = _CONTRASTIVELOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['convolution_param'].message_type = _CONVOLUTIONPARAMETER\n_V1LAYERPARAMETER.fields_by_name['deformable_convolution_param'].message_type = 
_DEFORMABLECONVOLUTIONPARAMETER\n_V1LAYERPARAMETER.fields_by_name['data_param'].message_type = _DATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['dropout_param'].message_type = _DROPOUTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['dummy_data_param'].message_type = _DUMMYDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['eltwise_param'].message_type = _ELTWISEPARAMETER\n_V1LAYERPARAMETER.fields_by_name['exp_param'].message_type = _EXPPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hdf5_data_param'].message_type = _HDF5DATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = _HDF5OUTPUTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['hinge_loss_param'].message_type = _HINGELOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['image_data_param'].message_type = _IMAGEDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['infogain_loss_param'].message_type = _INFOGAINLOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['inner_product_param'].message_type = _INNERPRODUCTPARAMETER\n_V1LAYERPARAMETER.fields_by_name['lrn_param'].message_type = _LRNPARAMETER\n_V1LAYERPARAMETER.fields_by_name['memory_data_param'].message_type = _MEMORYDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['mvn_param'].message_type = _MVNPARAMETER\n_V1LAYERPARAMETER.fields_by_name['pooling_param'].message_type = _POOLINGPARAMETER\n_V1LAYERPARAMETER.fields_by_name['power_param'].message_type = _POWERPARAMETER\n_V1LAYERPARAMETER.fields_by_name['relu_param'].message_type = _RELUPARAMETER\n_V1LAYERPARAMETER.fields_by_name['sigmoid_param'].message_type = _SIGMOIDPARAMETER\n_V1LAYERPARAMETER.fields_by_name['softmax_param'].message_type = _SOFTMAXPARAMETER\n_V1LAYERPARAMETER.fields_by_name['slice_param'].message_type = _SLICEPARAMETER\n_V1LAYERPARAMETER.fields_by_name['tanh_param'].message_type = _TANHPARAMETER\n_V1LAYERPARAMETER.fields_by_name['threshold_param'].message_type = _THRESHOLDPARAMETER\n_V1LAYERPARAMETER.fields_by_name['window_data_param'].message_type = 
_WINDOWDATAPARAMETER\n_V1LAYERPARAMETER.fields_by_name['transform_param'].message_type = _TRANSFORMATIONPARAMETER\n_V1LAYERPARAMETER.fields_by_name['loss_param'].message_type = _LOSSPARAMETER\n_V1LAYERPARAMETER.fields_by_name['layer'].message_type = _V0LAYERPARAMETER\n_V1LAYERPARAMETER_LAYERTYPE.containing_type = _V1LAYERPARAMETER\n_V1LAYERPARAMETER_DIMCHECKMODE.containing_type = _V1LAYERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['weight_filler'].message_type = _FILLERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['bias_filler'].message_type = _FILLERPARAMETER\n_V0LAYERPARAMETER.fields_by_name['pool'].enum_type = _V0LAYERPARAMETER_POOLMETHOD\n_V0LAYERPARAMETER.fields_by_name['blobs'].message_type = _BLOBPROTO\n_V0LAYERPARAMETER.fields_by_name['hdf5_output_param'].message_type = _HDF5OUTPUTPARAMETER\n_V0LAYERPARAMETER_POOLMETHOD.containing_type = _V0LAYERPARAMETER\n_PRELUPARAMETER.fields_by_name['filler'].message_type = _FILLERPARAMETER\nDESCRIPTOR.message_types_by_name['BlobShape'] = _BLOBSHAPE\nDESCRIPTOR.message_types_by_name['BlobProto'] = _BLOBPROTO\nDESCRIPTOR.message_types_by_name['BlobProtoVector'] = _BLOBPROTOVECTOR\nDESCRIPTOR.message_types_by_name['Datum'] = _DATUM\nDESCRIPTOR.message_types_by_name['FillerParameter'] = _FILLERPARAMETER\nDESCRIPTOR.message_types_by_name['NetParameter'] = _NETPARAMETER\nDESCRIPTOR.message_types_by_name['SolverParameter'] = _SOLVERPARAMETER\nDESCRIPTOR.message_types_by_name['SolverState'] = _SOLVERSTATE\nDESCRIPTOR.message_types_by_name['NetState'] = _NETSTATE\nDESCRIPTOR.message_types_by_name['NetStateRule'] = _NETSTATERULE\nDESCRIPTOR.message_types_by_name['ParamSpec'] = _PARAMSPEC\nDESCRIPTOR.message_types_by_name['LayerParameter'] = _LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['TransformationParameter'] = _TRANSFORMATIONPARAMETER\nDESCRIPTOR.message_types_by_name['LossParameter'] = _LOSSPARAMETER\nDESCRIPTOR.message_types_by_name['AccuracyParameter'] = 
_ACCURACYPARAMETER\nDESCRIPTOR.message_types_by_name['ArgMaxParameter'] = _ARGMAXPARAMETER\nDESCRIPTOR.message_types_by_name['ConcatParameter'] = _CONCATPARAMETER\nDESCRIPTOR.message_types_by_name['ConaddParameter'] = _CONADDPARAMETER\nDESCRIPTOR.message_types_by_name['BatchNormParameter'] = _BATCHNORMPARAMETER\nDESCRIPTOR.message_types_by_name['BiasParameter'] = _BIASPARAMETER\nDESCRIPTOR.message_types_by_name['ContrastiveLossParameter'] = _CONTRASTIVELOSSPARAMETER\nDESCRIPTOR.message_types_by_name['ConvolutionParameter'] = _CONVOLUTIONPARAMETER\nDESCRIPTOR.message_types_by_name['DeformableConvolutionParameter'] = _DEFORMABLECONVOLUTIONPARAMETER\nDESCRIPTOR.message_types_by_name['DataParameter'] = _DATAPARAMETER\nDESCRIPTOR.message_types_by_name['CropParameter'] = _CROPPARAMETER\nDESCRIPTOR.message_types_by_name['DropoutParameter'] = _DROPOUTPARAMETER\nDESCRIPTOR.message_types_by_name['DummyDataParameter'] = _DUMMYDATAPARAMETER\nDESCRIPTOR.message_types_by_name['EltwiseParameter'] = _ELTWISEPARAMETER\nDESCRIPTOR.message_types_by_name['ELUParameter'] = _ELUPARAMETER\nDESCRIPTOR.message_types_by_name['EmbedParameter'] = _EMBEDPARAMETER\nDESCRIPTOR.message_types_by_name['ExpParameter'] = _EXPPARAMETER\nDESCRIPTOR.message_types_by_name['FlattenParameter'] = _FLATTENPARAMETER\nDESCRIPTOR.message_types_by_name['HDF5DataParameter'] = _HDF5DATAPARAMETER\nDESCRIPTOR.message_types_by_name['HDF5OutputParameter'] = _HDF5OUTPUTPARAMETER\nDESCRIPTOR.message_types_by_name['HingeLossParameter'] = _HINGELOSSPARAMETER\nDESCRIPTOR.message_types_by_name['ImageDataParameter'] = _IMAGEDATAPARAMETER\nDESCRIPTOR.message_types_by_name['InfogainLossParameter'] = _INFOGAINLOSSPARAMETER\nDESCRIPTOR.message_types_by_name['InnerProductParameter'] = _INNERPRODUCTPARAMETER\nDESCRIPTOR.message_types_by_name['LogParameter'] = _LOGPARAMETER\nDESCRIPTOR.message_types_by_name['LRNParameter'] = _LRNPARAMETER\nDESCRIPTOR.message_types_by_name['MemoryDataParameter'] = 
_MEMORYDATAPARAMETER\nDESCRIPTOR.message_types_by_name['MVNParameter'] = _MVNPARAMETER\nDESCRIPTOR.message_types_by_name['PoolingParameter'] = _POOLINGPARAMETER\nDESCRIPTOR.message_types_by_name['PowerParameter'] = _POWERPARAMETER\nDESCRIPTOR.message_types_by_name['PythonParameter'] = _PYTHONPARAMETER\nDESCRIPTOR.message_types_by_name['ReductionParameter'] = _REDUCTIONPARAMETER\nDESCRIPTOR.message_types_by_name['ReLUParameter'] = _RELUPARAMETER\nDESCRIPTOR.message_types_by_name['ReshapeParameter'] = _RESHAPEPARAMETER\nDESCRIPTOR.message_types_by_name['ROIPoolingParameter'] = _ROIPOOLINGPARAMETER\nDESCRIPTOR.message_types_by_name['ScaleParameter'] = _SCALEPARAMETER\nDESCRIPTOR.message_types_by_name['SigmoidParameter'] = _SIGMOIDPARAMETER\nDESCRIPTOR.message_types_by_name['SmoothL1LossParameter'] = _SMOOTHL1LOSSPARAMETER\nDESCRIPTOR.message_types_by_name['SliceParameter'] = _SLICEPARAMETER\nDESCRIPTOR.message_types_by_name['SoftmaxParameter'] = _SOFTMAXPARAMETER\nDESCRIPTOR.message_types_by_name['TanHParameter'] = _TANHPARAMETER\nDESCRIPTOR.message_types_by_name['TileParameter'] = _TILEPARAMETER\nDESCRIPTOR.message_types_by_name['ThresholdParameter'] = _THRESHOLDPARAMETER\nDESCRIPTOR.message_types_by_name['WindowDataParameter'] = _WINDOWDATAPARAMETER\nDESCRIPTOR.message_types_by_name['SPPParameter'] = _SPPPARAMETER\nDESCRIPTOR.message_types_by_name['V1LayerParameter'] = _V1LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['V0LayerParameter'] = _V0LAYERPARAMETER\nDESCRIPTOR.message_types_by_name['PReLUParameter'] = _PRELUPARAMETER\nDESCRIPTOR.enum_types_by_name['Phase'] = _PHASE\n\nBlobShape = _reflection.GeneratedProtocolMessageType('BlobShape', (_message.Message,), dict(\n  DESCRIPTOR = _BLOBSHAPE,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobShape)\n  ))\n_sym_db.RegisterMessage(BlobShape)\n\nBlobProto = _reflection.GeneratedProtocolMessageType('BlobProto', (_message.Message,), dict(\n  DESCRIPTOR = _BLOBPROTO,\n  __module__ = 
'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobProto)\n  ))\n_sym_db.RegisterMessage(BlobProto)\n\nBlobProtoVector = _reflection.GeneratedProtocolMessageType('BlobProtoVector', (_message.Message,), dict(\n  DESCRIPTOR = _BLOBPROTOVECTOR,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BlobProtoVector)\n  ))\n_sym_db.RegisterMessage(BlobProtoVector)\n\nDatum = _reflection.GeneratedProtocolMessageType('Datum', (_message.Message,), dict(\n  DESCRIPTOR = _DATUM,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.Datum)\n  ))\n_sym_db.RegisterMessage(Datum)\n\nFillerParameter = _reflection.GeneratedProtocolMessageType('FillerParameter', (_message.Message,), dict(\n  DESCRIPTOR = _FILLERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.FillerParameter)\n  ))\n_sym_db.RegisterMessage(FillerParameter)\n\nNetParameter = _reflection.GeneratedProtocolMessageType('NetParameter', (_message.Message,), dict(\n  DESCRIPTOR = _NETPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetParameter)\n  ))\n_sym_db.RegisterMessage(NetParameter)\n\nSolverParameter = _reflection.GeneratedProtocolMessageType('SolverParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SOLVERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SolverParameter)\n  ))\n_sym_db.RegisterMessage(SolverParameter)\n\nSolverState = _reflection.GeneratedProtocolMessageType('SolverState', (_message.Message,), dict(\n  DESCRIPTOR = _SOLVERSTATE,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SolverState)\n  ))\n_sym_db.RegisterMessage(SolverState)\n\nNetState = _reflection.GeneratedProtocolMessageType('NetState', (_message.Message,), dict(\n  DESCRIPTOR = _NETSTATE,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetState)\n  ))\n_sym_db.RegisterMessage(NetState)\n\nNetStateRule = 
_reflection.GeneratedProtocolMessageType('NetStateRule', (_message.Message,), dict(\n  DESCRIPTOR = _NETSTATERULE,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.NetStateRule)\n  ))\n_sym_db.RegisterMessage(NetStateRule)\n\nParamSpec = _reflection.GeneratedProtocolMessageType('ParamSpec', (_message.Message,), dict(\n  DESCRIPTOR = _PARAMSPEC,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ParamSpec)\n  ))\n_sym_db.RegisterMessage(ParamSpec)\n\nLayerParameter = _reflection.GeneratedProtocolMessageType('LayerParameter', (_message.Message,), dict(\n  DESCRIPTOR = _LAYERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LayerParameter)\n  ))\n_sym_db.RegisterMessage(LayerParameter)\n\nTransformationParameter = _reflection.GeneratedProtocolMessageType('TransformationParameter', (_message.Message,), dict(\n  DESCRIPTOR = _TRANSFORMATIONPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TransformationParameter)\n  ))\n_sym_db.RegisterMessage(TransformationParameter)\n\nLossParameter = _reflection.GeneratedProtocolMessageType('LossParameter', (_message.Message,), dict(\n  DESCRIPTOR = _LOSSPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LossParameter)\n  ))\n_sym_db.RegisterMessage(LossParameter)\n\nAccuracyParameter = _reflection.GeneratedProtocolMessageType('AccuracyParameter', (_message.Message,), dict(\n  DESCRIPTOR = _ACCURACYPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.AccuracyParameter)\n  ))\n_sym_db.RegisterMessage(AccuracyParameter)\n\nArgMaxParameter = _reflection.GeneratedProtocolMessageType('ArgMaxParameter', (_message.Message,), dict(\n  DESCRIPTOR = _ARGMAXPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ArgMaxParameter)\n  ))\n_sym_db.RegisterMessage(ArgMaxParameter)\n\nConcatParameter = 
_reflection.GeneratedProtocolMessageType('ConcatParameter', (_message.Message,), dict(\n  DESCRIPTOR = _CONCATPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ConcatParameter)\n  ))\n_sym_db.RegisterMessage(ConcatParameter)\n\nConaddParameter = _reflection.GeneratedProtocolMessageType('ConaddParameter', (_message.Message,), dict(\n  DESCRIPTOR = _CONADDPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ConaddParameter)\n  ))\n_sym_db.RegisterMessage(ConaddParameter)\n\nBatchNormParameter = _reflection.GeneratedProtocolMessageType('BatchNormParameter', (_message.Message,), dict(\n  DESCRIPTOR = _BATCHNORMPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BatchNormParameter)\n  ))\n_sym_db.RegisterMessage(BatchNormParameter)\n\nBiasParameter = _reflection.GeneratedProtocolMessageType('BiasParameter', (_message.Message,), dict(\n  DESCRIPTOR = _BIASPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.BiasParameter)\n  ))\n_sym_db.RegisterMessage(BiasParameter)\n\nContrastiveLossParameter = _reflection.GeneratedProtocolMessageType('ContrastiveLossParameter', (_message.Message,), dict(\n  DESCRIPTOR = _CONTRASTIVELOSSPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ContrastiveLossParameter)\n  ))\n_sym_db.RegisterMessage(ContrastiveLossParameter)\n\nConvolutionParameter = _reflection.GeneratedProtocolMessageType('ConvolutionParameter', (_message.Message,), dict(\n  DESCRIPTOR = _CONVOLUTIONPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ConvolutionParameter)\n  ))\n_sym_db.RegisterMessage(ConvolutionParameter)\n\nDeformableConvolutionParameter = _reflection.GeneratedProtocolMessageType('DeformableConvolutionParameter', (_message.Message,), dict(\n  DESCRIPTOR = _DEFORMABLECONVOLUTIONPARAMETER,\n  __module__ = 'caffe_pb2'\n  # 
@@protoc_insertion_point(class_scope:caffe.DeformableConvolutionParameter)\n  ))\n_sym_db.RegisterMessage(DeformableConvolutionParameter)\n\nDataParameter = _reflection.GeneratedProtocolMessageType('DataParameter', (_message.Message,), dict(\n  DESCRIPTOR = _DATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DataParameter)\n  ))\n_sym_db.RegisterMessage(DataParameter)\n\nCropParameter = _reflection.GeneratedProtocolMessageType('CropParameter', (_message.Message,), dict(\n  DESCRIPTOR = _CROPPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.CropParameter)\n  ))\n_sym_db.RegisterMessage(CropParameter)\n\nDropoutParameter = _reflection.GeneratedProtocolMessageType('DropoutParameter', (_message.Message,), dict(\n  DESCRIPTOR = _DROPOUTPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DropoutParameter)\n  ))\n_sym_db.RegisterMessage(DropoutParameter)\n\nDummyDataParameter = _reflection.GeneratedProtocolMessageType('DummyDataParameter', (_message.Message,), dict(\n  DESCRIPTOR = _DUMMYDATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.DummyDataParameter)\n  ))\n_sym_db.RegisterMessage(DummyDataParameter)\n\nEltwiseParameter = _reflection.GeneratedProtocolMessageType('EltwiseParameter', (_message.Message,), dict(\n  DESCRIPTOR = _ELTWISEPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.EltwiseParameter)\n  ))\n_sym_db.RegisterMessage(EltwiseParameter)\n\nELUParameter = _reflection.GeneratedProtocolMessageType('ELUParameter', (_message.Message,), dict(\n  DESCRIPTOR = _ELUPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ELUParameter)\n  ))\n_sym_db.RegisterMessage(ELUParameter)\n\nEmbedParameter = _reflection.GeneratedProtocolMessageType('EmbedParameter', (_message.Message,), dict(\n  DESCRIPTOR = _EMBEDPARAMETER,\n  __module__ = 'caffe_pb2'\n  # 
@@protoc_insertion_point(class_scope:caffe.EmbedParameter)\n  ))\n_sym_db.RegisterMessage(EmbedParameter)\n\nExpParameter = _reflection.GeneratedProtocolMessageType('ExpParameter', (_message.Message,), dict(\n  DESCRIPTOR = _EXPPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ExpParameter)\n  ))\n_sym_db.RegisterMessage(ExpParameter)\n\nFlattenParameter = _reflection.GeneratedProtocolMessageType('FlattenParameter', (_message.Message,), dict(\n  DESCRIPTOR = _FLATTENPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.FlattenParameter)\n  ))\n_sym_db.RegisterMessage(FlattenParameter)\n\nHDF5DataParameter = _reflection.GeneratedProtocolMessageType('HDF5DataParameter', (_message.Message,), dict(\n  DESCRIPTOR = _HDF5DATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HDF5DataParameter)\n  ))\n_sym_db.RegisterMessage(HDF5DataParameter)\n\nHDF5OutputParameter = _reflection.GeneratedProtocolMessageType('HDF5OutputParameter', (_message.Message,), dict(\n  DESCRIPTOR = _HDF5OUTPUTPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HDF5OutputParameter)\n  ))\n_sym_db.RegisterMessage(HDF5OutputParameter)\n\nHingeLossParameter = _reflection.GeneratedProtocolMessageType('HingeLossParameter', (_message.Message,), dict(\n  DESCRIPTOR = _HINGELOSSPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.HingeLossParameter)\n  ))\n_sym_db.RegisterMessage(HingeLossParameter)\n\nImageDataParameter = _reflection.GeneratedProtocolMessageType('ImageDataParameter', (_message.Message,), dict(\n  DESCRIPTOR = _IMAGEDATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ImageDataParameter)\n  ))\n_sym_db.RegisterMessage(ImageDataParameter)\n\nInfogainLossParameter = _reflection.GeneratedProtocolMessageType('InfogainLossParameter', (_message.Message,), dict(\n  DESCRIPTOR = 
_INFOGAINLOSSPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.InfogainLossParameter)\n  ))\n_sym_db.RegisterMessage(InfogainLossParameter)\n\nInnerProductParameter = _reflection.GeneratedProtocolMessageType('InnerProductParameter', (_message.Message,), dict(\n  DESCRIPTOR = _INNERPRODUCTPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.InnerProductParameter)\n  ))\n_sym_db.RegisterMessage(InnerProductParameter)\n\nLogParameter = _reflection.GeneratedProtocolMessageType('LogParameter', (_message.Message,), dict(\n  DESCRIPTOR = _LOGPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LogParameter)\n  ))\n_sym_db.RegisterMessage(LogParameter)\n\nLRNParameter = _reflection.GeneratedProtocolMessageType('LRNParameter', (_message.Message,), dict(\n  DESCRIPTOR = _LRNPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.LRNParameter)\n  ))\n_sym_db.RegisterMessage(LRNParameter)\n\nMemoryDataParameter = _reflection.GeneratedProtocolMessageType('MemoryDataParameter', (_message.Message,), dict(\n  DESCRIPTOR = _MEMORYDATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.MemoryDataParameter)\n  ))\n_sym_db.RegisterMessage(MemoryDataParameter)\n\nMVNParameter = _reflection.GeneratedProtocolMessageType('MVNParameter', (_message.Message,), dict(\n  DESCRIPTOR = _MVNPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.MVNParameter)\n  ))\n_sym_db.RegisterMessage(MVNParameter)\n\nPoolingParameter = _reflection.GeneratedProtocolMessageType('PoolingParameter', (_message.Message,), dict(\n  DESCRIPTOR = _POOLINGPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PoolingParameter)\n  ))\n_sym_db.RegisterMessage(PoolingParameter)\n\nPowerParameter = _reflection.GeneratedProtocolMessageType('PowerParameter', (_message.Message,), dict(\n  
DESCRIPTOR = _POWERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PowerParameter)\n  ))\n_sym_db.RegisterMessage(PowerParameter)\n\nPythonParameter = _reflection.GeneratedProtocolMessageType('PythonParameter', (_message.Message,), dict(\n  DESCRIPTOR = _PYTHONPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PythonParameter)\n  ))\n_sym_db.RegisterMessage(PythonParameter)\n\nReductionParameter = _reflection.GeneratedProtocolMessageType('ReductionParameter', (_message.Message,), dict(\n  DESCRIPTOR = _REDUCTIONPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReductionParameter)\n  ))\n_sym_db.RegisterMessage(ReductionParameter)\n\nReLUParameter = _reflection.GeneratedProtocolMessageType('ReLUParameter', (_message.Message,), dict(\n  DESCRIPTOR = _RELUPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReLUParameter)\n  ))\n_sym_db.RegisterMessage(ReLUParameter)\n\nReshapeParameter = _reflection.GeneratedProtocolMessageType('ReshapeParameter', (_message.Message,), dict(\n  DESCRIPTOR = _RESHAPEPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ReshapeParameter)\n  ))\n_sym_db.RegisterMessage(ReshapeParameter)\n\nROIPoolingParameter = _reflection.GeneratedProtocolMessageType('ROIPoolingParameter', (_message.Message,), dict(\n  DESCRIPTOR = _ROIPOOLINGPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ROIPoolingParameter)\n  ))\n_sym_db.RegisterMessage(ROIPoolingParameter)\n\nScaleParameter = _reflection.GeneratedProtocolMessageType('ScaleParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SCALEPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ScaleParameter)\n  ))\n_sym_db.RegisterMessage(ScaleParameter)\n\nSigmoidParameter = _reflection.GeneratedProtocolMessageType('SigmoidParameter', (_message.Message,), 
dict(\n  DESCRIPTOR = _SIGMOIDPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SigmoidParameter)\n  ))\n_sym_db.RegisterMessage(SigmoidParameter)\n\nSmoothL1LossParameter = _reflection.GeneratedProtocolMessageType('SmoothL1LossParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SMOOTHL1LOSSPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SmoothL1LossParameter)\n  ))\n_sym_db.RegisterMessage(SmoothL1LossParameter)\n\nSliceParameter = _reflection.GeneratedProtocolMessageType('SliceParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SLICEPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SliceParameter)\n  ))\n_sym_db.RegisterMessage(SliceParameter)\n\nSoftmaxParameter = _reflection.GeneratedProtocolMessageType('SoftmaxParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SOFTMAXPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SoftmaxParameter)\n  ))\n_sym_db.RegisterMessage(SoftmaxParameter)\n\nTanHParameter = _reflection.GeneratedProtocolMessageType('TanHParameter', (_message.Message,), dict(\n  DESCRIPTOR = _TANHPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TanHParameter)\n  ))\n_sym_db.RegisterMessage(TanHParameter)\n\nTileParameter = _reflection.GeneratedProtocolMessageType('TileParameter', (_message.Message,), dict(\n  DESCRIPTOR = _TILEPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.TileParameter)\n  ))\n_sym_db.RegisterMessage(TileParameter)\n\nThresholdParameter = _reflection.GeneratedProtocolMessageType('ThresholdParameter', (_message.Message,), dict(\n  DESCRIPTOR = _THRESHOLDPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.ThresholdParameter)\n  ))\n_sym_db.RegisterMessage(ThresholdParameter)\n\nWindowDataParameter = _reflection.GeneratedProtocolMessageType('WindowDataParameter', 
(_message.Message,), dict(\n  DESCRIPTOR = _WINDOWDATAPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.WindowDataParameter)\n  ))\n_sym_db.RegisterMessage(WindowDataParameter)\n\nSPPParameter = _reflection.GeneratedProtocolMessageType('SPPParameter', (_message.Message,), dict(\n  DESCRIPTOR = _SPPPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.SPPParameter)\n  ))\n_sym_db.RegisterMessage(SPPParameter)\n\nV1LayerParameter = _reflection.GeneratedProtocolMessageType('V1LayerParameter', (_message.Message,), dict(\n  DESCRIPTOR = _V1LAYERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.V1LayerParameter)\n  ))\n_sym_db.RegisterMessage(V1LayerParameter)\n\nV0LayerParameter = _reflection.GeneratedProtocolMessageType('V0LayerParameter', (_message.Message,), dict(\n  DESCRIPTOR = _V0LAYERPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.V0LayerParameter)\n  ))\n_sym_db.RegisterMessage(V0LayerParameter)\n\nPReLUParameter = _reflection.GeneratedProtocolMessageType('PReLUParameter', (_message.Message,), dict(\n  DESCRIPTOR = _PRELUPARAMETER,\n  __module__ = 'caffe_pb2'\n  # @@protoc_insertion_point(class_scope:caffe.PReLUParameter)\n  ))\n_sym_db.RegisterMessage(PReLUParameter)\n\n\n_BLOBSHAPE.fields_by_name['dim'].has_options = True\n_BLOBSHAPE.fields_by_name['dim']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))\n_BLOBPROTO.fields_by_name['data'].has_options = True\n_BLOBPROTO.fields_by_name['data']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))\n_BLOBPROTO.fields_by_name['diff'].has_options = True\n_BLOBPROTO.fields_by_name['diff']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))\n_BLOBPROTO.fields_by_name['double_data'].has_options = True\n_BLOBPROTO.fields_by_name['double_data']._options = 
_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))\n_BLOBPROTO.fields_by_name['double_diff'].has_options = True\n_BLOBPROTO.fields_by_name['double_diff']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\\020\\001'))\n# @@protoc_insertion_point(module_scope)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/pycaffe.py",
    "content": "\"\"\"\nWrap the internal caffe C++ module (_caffe.so) with a clean, Pythonic\ninterface.\n\"\"\"\n\nfrom collections import OrderedDict\ntry:\n    from itertools import izip_longest\nexcept:\n    from itertools import zip_longest as izip_longest\nimport numpy as np\n\nfrom ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \\\n        RMSPropSolver, AdaDeltaSolver, AdamSolver\nimport caffe.io\n\n# We directly update methods from Net here (rather than using composition or\n# inheritance) so that nets created by caffe (e.g., by SGDSolver) will\n# automatically have the improved interface.\n\n\n@property\ndef _Net_blobs(self):\n    \"\"\"\n    An OrderedDict (bottom to top, i.e., input to output) of network\n    blobs indexed by name\n    \"\"\"\n    return OrderedDict(zip(self._blob_names, self._blobs))\n\n\n@property\ndef _Net_blob_loss_weights(self):\n    \"\"\"\n    An OrderedDict (bottom to top, i.e., input to output) of network\n    blob loss weights indexed by name\n    \"\"\"\n    return OrderedDict(zip(self._blob_names, self._blob_loss_weights))\n\n\n@property\ndef _Net_params(self):\n    \"\"\"\n    An OrderedDict (bottom to top, i.e., input to output) of network\n    parameters indexed by name; each is a list of multiple blobs (e.g.,\n    weights and biases)\n    \"\"\"\n    return OrderedDict([(name, lr.blobs)\n                        for name, lr in zip(self._layer_names, self.layers)\n                        if len(lr.blobs) > 0])\n\n\n@property\ndef _Net_inputs(self):\n    return [list(self.blobs.keys())[i] for i in self._inputs]\n\n\n@property\ndef _Net_outputs(self):\n    return [list(self.blobs.keys())[i] for i in self._outputs]\n\n\ndef _Net_forward(self, blobs=None, start=None, end=None, **kwargs):\n    \"\"\"\n    Forward pass: prepare inputs and run the net forward.\n\n    Parameters\n    ----------\n    blobs : list of blobs to return in addition to output blobs.\n    kwargs : Keys are input blob names and values are 
blob ndarrays.\n             For formatting inputs for Caffe, see Net.preprocess().\n             If None, input is taken from data layers.\n    start : optional name of layer at which to begin the forward pass\n    end : optional name of layer at which to finish the forward pass\n          (inclusive)\n\n    Returns\n    -------\n    outs : {blob name: blob ndarray} dict.\n    \"\"\"\n    if blobs is None:\n        blobs = []\n\n    if start is not None:\n        start_ind = list(self._layer_names).index(start)\n    else:\n        start_ind = 0\n\n    if end is not None:\n        end_ind = list(self._layer_names).index(end)\n        outputs = set([end] + blobs)\n    else:\n        end_ind = len(self.layers) - 1\n        outputs = set(self.outputs + blobs)\n\n    if kwargs:\n        if set(kwargs.keys()) != set(self.inputs):\n            raise Exception('Input blob arguments do not match net inputs.')\n        # Set input according to defined shapes and make arrays single and\n        # C-contiguous as Caffe expects.\n        for in_, blob in kwargs.iteritems():\n            if blob.shape[0] != self.blobs[in_].num:\n                raise Exception('Input is not batch sized')\n            self.blobs[in_].data[...] 
= blob\n\n    self._forward(start_ind, end_ind)\n\n    # Unpack blobs to extract\n    return {out: self.blobs[out].data for out in outputs}\n\n\ndef _Net_backward(self, diffs=None, start=None, end=None, **kwargs):\n    \"\"\"\n    Backward pass: prepare diffs and run the net backward.\n\n    Parameters\n    ----------\n    diffs : list of diffs to return in addition to bottom diffs.\n    kwargs : Keys are output blob names and values are diff ndarrays.\n            If None, top diffs are taken from forward loss.\n    start : optional name of layer at which to begin the backward pass\n    end : optional name of layer at which to finish the backward pass\n        (inclusive)\n\n    Returns\n    -------\n    outs: {blob name: diff ndarray} dict.\n    \"\"\"\n    if diffs is None:\n        diffs = []\n\n    if start is not None:\n        start_ind = list(self._layer_names).index(start)\n    else:\n        start_ind = len(self.layers) - 1\n\n    if end is not None:\n        end_ind = list(self._layer_names).index(end)\n        outputs = set([end] + diffs)\n    else:\n        end_ind = 0\n        outputs = set(self.inputs + diffs)\n\n    if kwargs:\n        if set(kwargs.keys()) != set(self.outputs):\n            raise Exception('Top diff arguments do not match net outputs.')\n        # Set top diffs according to defined shapes and make arrays single and\n        # C-contiguous as Caffe expects.\n        for top, diff in kwargs.iteritems():\n            if diff.shape[0] != self.blobs[top].num:\n                raise Exception('Diff is not batch sized')\n            self.blobs[top].diff[...] 
= diff\n\n    self._backward(start_ind, end_ind)\n\n    # Unpack diffs to extract\n    return {out: self.blobs[out].diff for out in outputs}\n\n\ndef _Net_forward_all(self, blobs=None, **kwargs):\n    \"\"\"\n    Run net forward in batches.\n\n    Parameters\n    ----------\n    blobs : list of blobs to extract as in forward()\n    kwargs : Keys are input blob names and values are blob ndarrays.\n             Refer to forward().\n\n    Returns\n    -------\n    all_outs : {blob name: list of blobs} dict.\n    \"\"\"\n    # Collect outputs from batches\n    all_outs = {out: [] for out in set(self.outputs + (blobs or []))}\n    for batch in self._batch(kwargs):\n        outs = self.forward(blobs=blobs, **batch)\n        for out, out_blob in outs.iteritems():\n            all_outs[out].extend(out_blob.copy())\n    # Package in ndarray.\n    for out in all_outs:\n        all_outs[out] = np.asarray(all_outs[out])\n    # Discard padding.\n    pad = len(all_outs.itervalues().next()) - len(kwargs.itervalues().next())\n    if pad:\n        for out in all_outs:\n            all_outs[out] = all_outs[out][:-pad]\n    return all_outs\n\n\ndef _Net_forward_backward_all(self, blobs=None, diffs=None, **kwargs):\n    \"\"\"\n    Run net forward + backward in batches.\n\n    Parameters\n    ----------\n    blobs: list of blobs to extract as in forward()\n    diffs: list of diffs to extract as in backward()\n    kwargs: Keys are input (for forward) and output (for backward) blob names\n            and values are ndarrays. 
Refer to forward() and backward().\n            Prefilled variants are called for lack of input or output blobs.\n\n    Returns\n    -------\n    all_blobs: {blob name: blob ndarray} dict.\n    all_diffs: {blob name: diff ndarray} dict.\n    \"\"\"\n    # Batch blobs and diffs.\n    all_outs = {out: [] for out in set(self.outputs + (blobs or []))}\n    all_diffs = {diff: [] for diff in set(self.inputs + (diffs or []))}\n    forward_batches = self._batch({in_: kwargs[in_]\n                                   for in_ in self.inputs if in_ in kwargs})\n    backward_batches = self._batch({out: kwargs[out]\n                                    for out in self.outputs if out in kwargs})\n    # Collect outputs from batches (and heed lack of forward/backward batches).\n    for fb, bb in izip_longest(forward_batches, backward_batches, fillvalue={}):\n        batch_blobs = self.forward(blobs=blobs, **fb)\n        batch_diffs = self.backward(diffs=diffs, **bb)\n        for out, out_blobs in batch_blobs.iteritems():\n            all_outs[out].extend(out_blobs.copy())\n        for diff, out_diffs in batch_diffs.iteritems():\n            all_diffs[diff].extend(out_diffs.copy())\n    # Package in ndarray.\n    for out, diff in zip(all_outs, all_diffs):\n        all_outs[out] = np.asarray(all_outs[out])\n        all_diffs[diff] = np.asarray(all_diffs[diff])\n    # Discard padding at the end and package in ndarray.\n    pad = len(all_outs.itervalues().next()) - len(kwargs.itervalues().next())\n    if pad:\n        for out, diff in zip(all_outs, all_diffs):\n            all_outs[out] = all_outs[out][:-pad]\n            all_diffs[diff] = all_diffs[diff][:-pad]\n    return all_outs, all_diffs\n\n\ndef _Net_set_input_arrays(self, data, labels):\n    \"\"\"\n    Set input arrays of the in-memory MemoryDataLayer.\n    (Note: this is only for networks declared with the memory data layer.)\n    \"\"\"\n    if labels.ndim == 1:\n        labels = np.ascontiguousarray(labels[:, np.newaxis, 
np.newaxis,\n                                             np.newaxis])\n    return self._set_input_arrays(data, labels)\n\n\ndef _Net_batch(self, blobs):\n    \"\"\"\n    Batch blob lists according to net's batch size.\n\n    Parameters\n    ----------\n    blobs: Keys blob names and values are lists of blobs (of any length).\n           Naturally, all the lists should have the same length.\n\n    Yields\n    ------\n    batch: {blob name: list of blobs} dict for a single batch.\n    \"\"\"\n    num = len(blobs.itervalues().next())\n    batch_size = self.blobs.itervalues().next().num\n    remainder = num % batch_size\n    num_batches = num / batch_size\n\n    # Yield full batches.\n    for b in range(num_batches):\n        i = b * batch_size\n        yield {name: blobs[name][i:i + batch_size] for name in blobs}\n\n    # Yield last padded batch, if any.\n    if remainder > 0:\n        padded_batch = {}\n        for name in blobs:\n            padding = np.zeros((batch_size - remainder,)\n                               + blobs[name].shape[1:])\n            padded_batch[name] = np.concatenate([blobs[name][-remainder:],\n                                                 padding])\n        yield padded_batch\n\n\nclass _Net_IdNameWrapper:\n    \"\"\"\n    A simple wrapper that allows the ids propery to be accessed as a dict\n    indexed by names. 
Used for top and bottom names\n    \"\"\"\n    def __init__(self, net, func):\n        self.net, self.func = net, func\n\n    def __getitem__(self, name):\n        # Map the layer name to id\n        ids = self.func(self.net, list(self.net._layer_names).index(name))\n        # Map the blob id to name\n        id_to_name = list(self.net.blobs)\n        return [id_to_name[i] for i in ids]\n\n# Attach methods to Net.\nNet.blobs = _Net_blobs\nNet.blob_loss_weights = _Net_blob_loss_weights\nNet.params = _Net_params\nNet.forward = _Net_forward\nNet.backward = _Net_backward\nNet.forward_all = _Net_forward_all\nNet.forward_backward_all = _Net_forward_backward_all\nNet.set_input_arrays = _Net_set_input_arrays\nNet._batch = _Net_batch\nNet.inputs = _Net_inputs\nNet.outputs = _Net_outputs\nNet.top_names = property(lambda n: _Net_IdNameWrapper(n, Net._top_ids))\nNet.bottom_names = property(lambda n: _Net_IdNameWrapper(n, Net._bottom_ids))\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_io.py",
    "content": "import numpy as np\nimport unittest\n\nimport caffe\n\nclass TestBlobProtoToArray(unittest.TestCase):\n\n    def test_old_format(self):\n        data = np.zeros((10,10))\n        blob = caffe.proto.caffe_pb2.BlobProto()\n        blob.data.extend(list(data.flatten()))\n        shape = (1,1,10,10)\n        blob.num, blob.channels, blob.height, blob.width = shape\n\n        arr = caffe.io.blobproto_to_array(blob)\n        self.assertEqual(arr.shape, shape)\n\n    def test_new_format(self):\n        data = np.zeros((10,10))\n        blob = caffe.proto.caffe_pb2.BlobProto()\n        blob.data.extend(list(data.flatten()))\n        blob.shape.dim.extend(list(data.shape))\n\n        arr = caffe.io.blobproto_to_array(blob)\n        self.assertEqual(arr.shape, data.shape)\n\n    def test_no_shape(self):\n        data = np.zeros((10,10))\n        blob = caffe.proto.caffe_pb2.BlobProto()\n        blob.data.extend(list(data.flatten()))\n\n        with self.assertRaises(ValueError):\n            caffe.io.blobproto_to_array(blob)\n\n    def test_scalar(self):\n        data = np.ones((1)) * 123\n        blob = caffe.proto.caffe_pb2.BlobProto()\n        blob.data.extend(list(data.flatten()))\n\n        arr = caffe.io.blobproto_to_array(blob)\n        self.assertEqual(arr, 123)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_layer_type_list.py",
    "content": "import unittest\n\nimport caffe\n\nclass TestLayerTypeList(unittest.TestCase):\n\n    def test_standard_types(self):\n        # check that a few standard layer types are registered\n        for type_name in ['Data', 'Convolution', 'InnerProduct']:\n            self.assertIn(type_name, caffe.layer_type_list(),\n                    '%s not in layer_type_list()' % type_name)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_net.py",
    "content": "import unittest\nimport tempfile\nimport os\nimport numpy as np\nimport six\n\nimport caffe\n\n\ndef simple_net_file(num_output):\n    \"\"\"Make a simple net prototxt, based on test_net.cpp, returning the name\n    of the (temporary) file.\"\"\"\n\n    f = tempfile.NamedTemporaryFile(mode='w+', delete=False)\n    f.write(\"\"\"name: 'testnet' force_backward: true\n    layer { type: 'DummyData' name: 'data' top: 'data' top: 'label'\n      dummy_data_param { num: 5 channels: 2 height: 3 width: 4\n        num: 5 channels: 1 height: 1 width: 1\n        data_filler { type: 'gaussian' std: 1 }\n        data_filler { type: 'constant' } } }\n    layer { type: 'Convolution' name: 'conv' bottom: 'data' top: 'conv'\n      convolution_param { num_output: 11 kernel_size: 2 pad: 3\n        weight_filler { type: 'gaussian' std: 1 }\n        bias_filler { type: 'constant' value: 2 } }\n        param { decay_mult: 1 } param { decay_mult: 0 }\n        }\n    layer { type: 'InnerProduct' name: 'ip' bottom: 'conv' top: 'ip'\n      inner_product_param { num_output: \"\"\" + str(num_output) + \"\"\"\n        weight_filler { type: 'gaussian' std: 2.5 }\n        bias_filler { type: 'constant' value: -3 } } }\n    layer { type: 'SoftmaxWithLoss' name: 'loss' bottom: 'ip' bottom: 'label'\n      top: 'loss' }\"\"\")\n    f.close()\n    return f.name\n\n\nclass TestNet(unittest.TestCase):\n    def setUp(self):\n        self.num_output = 13\n        net_file = simple_net_file(self.num_output)\n        self.net = caffe.Net(net_file, caffe.TRAIN)\n        # fill in valid labels\n        self.net.blobs['label'].data[...] 
= \\\n                np.random.randint(self.num_output,\n                    size=self.net.blobs['label'].data.shape)\n        os.remove(net_file)\n\n    def test_memory(self):\n        \"\"\"Check that holding onto blob data beyond the life of a Net is OK\"\"\"\n\n        params = sum(map(list, six.itervalues(self.net.params)), [])\n        blobs = self.net.blobs.values()\n        del self.net\n\n        # now sum everything (forcing all memory to be read)\n        total = 0\n        for p in params:\n            total += p.data.sum() + p.diff.sum()\n        for bl in blobs:\n            total += bl.data.sum() + bl.diff.sum()\n\n    def test_forward_backward(self):\n        self.net.forward()\n        self.net.backward()\n\n    def test_inputs_outputs(self):\n        self.assertEqual(self.net.inputs, [])\n        self.assertEqual(self.net.outputs, ['loss'])\n\n    def test_save_and_read(self):\n        f = tempfile.NamedTemporaryFile(mode='w+', delete=False)\n        f.close()\n        self.net.save(f.name)\n        net_file = simple_net_file(self.num_output)\n        net2 = caffe.Net(net_file, f.name, caffe.TRAIN)\n        os.remove(net_file)\n        os.remove(f.name)\n        for name in self.net.params:\n            for i in range(len(self.net.params[name])):\n                self.assertEqual(abs(self.net.params[name][i].data\n                    - net2.params[name][i].data).sum(), 0)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_net_spec.py",
    "content": "import unittest\nimport tempfile\nimport caffe\nfrom caffe import layers as L\nfrom caffe import params as P\n\ndef lenet(batch_size):\n    n = caffe.NetSpec()\n    n.data, n.label = L.DummyData(shape=[dict(dim=[batch_size, 1, 28, 28]),\n                                         dict(dim=[batch_size, 1, 1, 1])],\n                                  transform_param=dict(scale=1./255), ntop=2)\n    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,\n        weight_filler=dict(type='xavier'))\n    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50,\n        weight_filler=dict(type='xavier'))\n    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n    n.ip1 = L.InnerProduct(n.pool2, num_output=500,\n        weight_filler=dict(type='xavier'))\n    n.relu1 = L.ReLU(n.ip1, in_place=True)\n    n.ip2 = L.InnerProduct(n.relu1, num_output=10,\n        weight_filler=dict(type='xavier'))\n    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\n    return n.to_proto()\n\ndef anon_lenet(batch_size):\n    data, label = L.DummyData(shape=[dict(dim=[batch_size, 1, 28, 28]),\n                                     dict(dim=[batch_size, 1, 1, 1])],\n                              transform_param=dict(scale=1./255), ntop=2)\n    conv1 = L.Convolution(data, kernel_size=5, num_output=20,\n        weight_filler=dict(type='xavier'))\n    pool1 = L.Pooling(conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n    conv2 = L.Convolution(pool1, kernel_size=5, num_output=50,\n        weight_filler=dict(type='xavier'))\n    pool2 = L.Pooling(conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n    ip1 = L.InnerProduct(pool2, num_output=500,\n        weight_filler=dict(type='xavier'))\n    relu1 = L.ReLU(ip1, in_place=True)\n    ip2 = L.InnerProduct(relu1, num_output=10,\n        weight_filler=dict(type='xavier'))\n    loss = L.SoftmaxWithLoss(ip2, label)\n    return 
loss.to_proto()\n\ndef silent_net():\n    n = caffe.NetSpec()\n    n.data, n.data2 = L.DummyData(shape=dict(dim=3), ntop=2)\n    n.silence_data = L.Silence(n.data, ntop=0)\n    n.silence_data2 = L.Silence(n.data2, ntop=0)\n    return n.to_proto()\n\nclass TestNetSpec(unittest.TestCase):\n    def load_net(self, net_proto):\n        f = tempfile.NamedTemporaryFile(mode='w+', delete=False)\n        f.write(str(net_proto))\n        f.close()\n        return caffe.Net(f.name, caffe.TEST)\n\n    def test_lenet(self):\n        \"\"\"Construct and build the Caffe version of LeNet.\"\"\"\n\n        net_proto = lenet(50)\n        # check that relu is in-place\n        self.assertEqual(net_proto.layer[6].bottom,\n                net_proto.layer[6].top)\n        net = self.load_net(net_proto)\n        # check that all layers are present\n        self.assertEqual(len(net.layers), 9)\n\n        # now the check the version with automatically-generated layer names\n        net_proto = anon_lenet(50)\n        self.assertEqual(net_proto.layer[6].bottom,\n                net_proto.layer[6].top)\n        net = self.load_net(net_proto)\n        self.assertEqual(len(net.layers), 9)\n\n    def test_zero_tops(self):\n        \"\"\"Test net construction for top-less layers.\"\"\"\n\n        net_proto = silent_net()\n        net = self.load_net(net_proto)\n        self.assertEqual(len(net.forward()), 0)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_python_layer.py",
    "content": "import unittest\nimport tempfile\nimport os\nimport six\n\nimport caffe\n\n\nclass SimpleLayer(caffe.Layer):\n    \"\"\"A layer that just multiplies by ten\"\"\"\n\n    def setup(self, bottom, top):\n        pass\n\n    def reshape(self, bottom, top):\n        top[0].reshape(*bottom[0].data.shape)\n\n    def forward(self, bottom, top):\n        top[0].data[...] = 10 * bottom[0].data\n\n    def backward(self, top, propagate_down, bottom):\n        bottom[0].diff[...] = 10 * top[0].diff\n\n\nclass ExceptionLayer(caffe.Layer):\n    \"\"\"A layer for checking exceptions from Python\"\"\"\n\n    def setup(self, bottom, top):\n        raise RuntimeError\n\nclass ParameterLayer(caffe.Layer):\n    \"\"\"A layer that just multiplies by ten\"\"\"\n\n    def setup(self, bottom, top):\n        self.blobs.add_blob(1)\n        self.blobs[0].data[0] = 0\n\n    def reshape(self, bottom, top):\n        top[0].reshape(*bottom[0].data.shape)\n\n    def forward(self, bottom, top):\n        pass\n\n    def backward(self, top, propagate_down, bottom):\n        self.blobs[0].diff[0] = 1\n\ndef python_net_file():\n    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as f:\n        f.write(\"\"\"name: 'pythonnet' force_backward: true\n        input: 'data' input_shape { dim: 10 dim: 9 dim: 8 }\n        layer { type: 'Python' name: 'one' bottom: 'data' top: 'one'\n          python_param { module: 'test_python_layer' layer: 'SimpleLayer' } }\n        layer { type: 'Python' name: 'two' bottom: 'one' top: 'two'\n          python_param { module: 'test_python_layer' layer: 'SimpleLayer' } }\n        layer { type: 'Python' name: 'three' bottom: 'two' top: 'three'\n          python_param { module: 'test_python_layer' layer: 'SimpleLayer' } }\"\"\")\n        return f.name\n\n\ndef exception_net_file():\n    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as f:\n        f.write(\"\"\"name: 'pythonnet' force_backward: true\n        input: 'data' input_shape { dim: 
10 dim: 9 dim: 8 }\n        layer { type: 'Python' name: 'layer' bottom: 'data' top: 'top'\n          python_param { module: 'test_python_layer' layer: 'ExceptionLayer' } }\n          \"\"\")\n        return f.name\n\n\ndef parameter_net_file():\n    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as f:\n        f.write(\"\"\"name: 'pythonnet' force_backward: true\n        input: 'data' input_shape { dim: 10 dim: 9 dim: 8 }\n        layer { type: 'Python' name: 'layer' bottom: 'data' top: 'top'\n          python_param { module: 'test_python_layer' layer: 'ParameterLayer' } }\n          \"\"\")\n        return f.name\n\n\n@unittest.skipIf('Python' not in caffe.layer_type_list(),\n    'Caffe built without Python layer support')\nclass TestPythonLayer(unittest.TestCase):\n    def setUp(self):\n        net_file = python_net_file()\n        self.net = caffe.Net(net_file, caffe.TRAIN)\n        os.remove(net_file)\n\n    def test_forward(self):\n        x = 8\n        self.net.blobs['data'].data[...] = x\n        self.net.forward()\n        for y in self.net.blobs['three'].data.flat:\n            self.assertEqual(y, 10**3 * x)\n\n    def test_backward(self):\n        x = 7\n        self.net.blobs['three'].diff[...] 
= x\n        self.net.backward()\n        for y in self.net.blobs['data'].diff.flat:\n            self.assertEqual(y, 10**3 * x)\n\n    def test_reshape(self):\n        s = 4\n        self.net.blobs['data'].reshape(s, s, s, s)\n        self.net.forward()\n        for blob in six.itervalues(self.net.blobs):\n            for d in blob.data.shape:\n                self.assertEqual(s, d)\n\n    def test_exception(self):\n        net_file = exception_net_file()\n        self.assertRaises(RuntimeError, caffe.Net, net_file, caffe.TEST)\n        os.remove(net_file)\n\n    def test_parameter(self):\n        net_file = parameter_net_file()\n        net = caffe.Net(net_file, caffe.TRAIN)\n        # Test forward and backward\n        net.forward()\n        net.backward()\n        layer = net.layers[list(net._layer_names).index('layer')]\n        self.assertEqual(layer.blobs[0].data[0], 0)\n        self.assertEqual(layer.blobs[0].diff[0], 1)\n        layer.blobs[0].data[0] += layer.blobs[0].diff[0]\n        self.assertEqual(layer.blobs[0].data[0], 1)\n\n        # Test saving and loading\n        h, caffemodel_file = tempfile.mkstemp()\n        net.save(caffemodel_file)\n        layer.blobs[0].data[0] = -1\n        self.assertEqual(layer.blobs[0].data[0], -1)\n        net.copy_from(caffemodel_file)\n        self.assertEqual(layer.blobs[0].data[0], 1)\n        os.remove(caffemodel_file)\n\n        # Test weight sharing: inspect the shared layer through net2,\n        # not net, so the assertion actually exercises share_with()\n        net2 = caffe.Net(net_file, caffe.TRAIN)\n        net2.share_with(net)\n        layer = net2.layers[list(net2._layer_names).index('layer')]\n        self.assertEqual(layer.blobs[0].data[0], 1)\n\n        os.remove(net_file)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_python_layer_with_param_str.py",
    "content": "import unittest\nimport tempfile\nimport os\nimport six\n\nimport caffe\n\n\nclass SimpleParamLayer(caffe.Layer):\n    \"\"\"A layer that just multiplies by the numeric value of its param string\"\"\"\n\n    def setup(self, bottom, top):\n        try:\n            self.value = float(self.param_str)\n        except ValueError:\n            raise ValueError(\"Parameter string must be a legible float\")\n\n    def reshape(self, bottom, top):\n        top[0].reshape(*bottom[0].data.shape)\n\n    def forward(self, bottom, top):\n        top[0].data[...] = self.value * bottom[0].data\n\n    def backward(self, top, propagate_down, bottom):\n        bottom[0].diff[...] = self.value * top[0].diff\n\n\ndef python_param_net_file():\n    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as f:\n        f.write(\"\"\"name: 'pythonnet' force_backward: true\n        input: 'data' input_shape { dim: 10 dim: 9 dim: 8 }\n        layer { type: 'Python' name: 'mul10' bottom: 'data' top: 'mul10'\n          python_param { module: 'test_python_layer_with_param_str'\n                layer: 'SimpleParamLayer' param_str: '10' } }\n        layer { type: 'Python' name: 'mul2' bottom: 'mul10' top: 'mul2'\n          python_param { module: 'test_python_layer_with_param_str'\n                layer: 'SimpleParamLayer' param_str: '2' } }\"\"\")\n        return f.name\n\n\n@unittest.skipIf('Python' not in caffe.layer_type_list(),\n    'Caffe built without Python layer support')\nclass TestLayerWithParam(unittest.TestCase):\n    def setUp(self):\n        net_file = python_param_net_file()\n        self.net = caffe.Net(net_file, caffe.TRAIN)\n        os.remove(net_file)\n\n    def test_forward(self):\n        x = 8\n        self.net.blobs['data'].data[...] = x\n        self.net.forward()\n        for y in self.net.blobs['mul2'].data.flat:\n            self.assertEqual(y, 2 * 10 * x)\n\n    def test_backward(self):\n        x = 7\n        self.net.blobs['mul2'].diff[...] 
= x\n        self.net.backward()\n        for y in self.net.blobs['data'].diff.flat:\n            self.assertEqual(y, 2 * 10 * x)\n"
  },
  {
    "path": "caffe-fpn/python/caffe/test/test_solver.py",
    "content": "import unittest\nimport tempfile\nimport os\nimport numpy as np\nimport six\n\nimport caffe\nfrom test_net import simple_net_file\n\n\nclass TestSolver(unittest.TestCase):\n    def setUp(self):\n        self.num_output = 13\n        net_f = simple_net_file(self.num_output)\n        f = tempfile.NamedTemporaryFile(mode='w+', delete=False)\n        f.write(\"\"\"net: '\"\"\" + net_f + \"\"\"'\n        test_iter: 10 test_interval: 10 base_lr: 0.01 momentum: 0.9\n        weight_decay: 0.0005 lr_policy: 'inv' gamma: 0.0001 power: 0.75\n        display: 100 max_iter: 100 snapshot_after_train: false\n        snapshot_prefix: \"model\" \"\"\")\n        f.close()\n        self.solver = caffe.SGDSolver(f.name)\n        # also make sure get_solver runs\n        caffe.get_solver(f.name)\n        caffe.set_mode_cpu()\n        # fill in valid labels\n        self.solver.net.blobs['label'].data[...] = \\\n                np.random.randint(self.num_output,\n                    size=self.solver.net.blobs['label'].data.shape)\n        self.solver.test_nets[0].blobs['label'].data[...] 
= \\\n                np.random.randint(self.num_output,\n                    size=self.solver.test_nets[0].blobs['label'].data.shape)\n        os.remove(f.name)\n        os.remove(net_f)\n\n    def test_solve(self):\n        self.assertEqual(self.solver.iter, 0)\n        self.solver.solve()\n        self.assertEqual(self.solver.iter, 100)\n\n    def test_net_memory(self):\n        \"\"\"Check that nets survive after the solver is destroyed.\"\"\"\n\n        nets = [self.solver.net] + list(self.solver.test_nets)\n        self.assertEqual(len(nets), 2)\n        del self.solver\n\n        total = 0\n        for net in nets:\n            for ps in six.itervalues(net.params):\n                for p in ps:\n                    total += p.data.sum() + p.diff.sum()\n            for bl in six.itervalues(net.blobs):\n                total += bl.data.sum() + bl.diff.sum()\n\n    def test_snapshot(self):\n        self.solver.snapshot()\n        # Check that these files exist and then remove them\n        files = ['model_iter_0.caffemodel', 'model_iter_0.solverstate']\n        for fn in files:\n            assert os.path.isfile(fn)\n            os.remove(fn)\n"
  },
  {
    "path": "caffe-fpn/python/classify.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nclassify.py is an out-of-the-box image classifer callable from the command line.\n\nBy default it configures and runs the Caffe reference ImageNet model.\n\"\"\"\nimport numpy as np\nimport os\nimport sys\nimport argparse\nimport glob\nimport time\n\nimport caffe\n\n\ndef main(argv):\n    pycaffe_dir = os.path.dirname(__file__)\n\n    parser = argparse.ArgumentParser()\n    # Required arguments: input and output files.\n    parser.add_argument(\n        \"input_file\",\n        help=\"Input image, directory, or npy.\"\n    )\n    parser.add_argument(\n        \"output_file\",\n        help=\"Output npy filename.\"\n    )\n    # Optional arguments.\n    parser.add_argument(\n        \"--model_def\",\n        default=os.path.join(pycaffe_dir,\n                \"../models/bvlc_reference_caffenet/deploy.prototxt\"),\n        help=\"Model definition file.\"\n    )\n    parser.add_argument(\n        \"--pretrained_model\",\n        default=os.path.join(pycaffe_dir,\n                \"../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel\"),\n        help=\"Trained model weights file.\"\n    )\n    parser.add_argument(\n        \"--gpu\",\n        action='store_true',\n        help=\"Switch for gpu computation.\"\n    )\n    parser.add_argument(\n        \"--center_only\",\n        action='store_true',\n        help=\"Switch for prediction from center crop alone instead of \" +\n             \"averaging predictions across crops (default).\"\n    )\n    parser.add_argument(\n        \"--images_dim\",\n        default='256,256',\n        help=\"Canonical 'height,width' dimensions of input images.\"\n    )\n    parser.add_argument(\n        \"--mean_file\",\n        default=os.path.join(pycaffe_dir,\n                             'caffe/imagenet/ilsvrc_2012_mean.npy'),\n        help=\"Data set image mean of [Channels x Height x Width] dimensions \" +\n             \"(numpy array). 
Set to '' for no mean subtraction.\"\n    )\n    parser.add_argument(\n        \"--input_scale\",\n        type=float,\n        help=\"Multiply input features by this scale to finish preprocessing.\"\n    )\n    parser.add_argument(\n        \"--raw_scale\",\n        type=float,\n        default=255.0,\n        help=\"Multiply raw input by this scale before preprocessing.\"\n    )\n    parser.add_argument(\n        \"--channel_swap\",\n        default='2,1,0',\n        help=\"Order to permute input channels. The default converts \" +\n             \"RGB -> BGR since BGR is the Caffe default by way of OpenCV.\"\n    )\n    parser.add_argument(\n        \"--ext\",\n        default='jpg',\n        help=\"Image file extension to take as input when a directory \" +\n             \"is given as the input file.\"\n    )\n    args = parser.parse_args()\n\n    image_dims = [int(s) for s in args.images_dim.split(',')]\n\n    mean, channel_swap = None, None\n    if args.mean_file:\n        mean = np.load(args.mean_file)\n    if args.channel_swap:\n        channel_swap = [int(s) for s in args.channel_swap.split(',')]\n\n    if args.gpu:\n        caffe.set_mode_gpu()\n        print(\"GPU mode\")\n    else:\n        caffe.set_mode_cpu()\n        print(\"CPU mode\")\n\n    # Make classifier.\n    classifier = caffe.Classifier(args.model_def, args.pretrained_model,\n            image_dims=image_dims, mean=mean,\n            input_scale=args.input_scale, raw_scale=args.raw_scale,\n            channel_swap=channel_swap)\n\n    # Load numpy array (.npy), directory glob (*.jpg), or image file.\n    args.input_file = os.path.expanduser(args.input_file)\n    if args.input_file.endswith('npy'):\n        print(\"Loading file: %s\" % args.input_file)\n        inputs = np.load(args.input_file)\n    elif os.path.isdir(args.input_file):\n        print(\"Loading folder: %s\" % args.input_file)\n        inputs =[caffe.io.load_image(im_f)\n                 for im_f in glob.glob(args.input_file + 
'/*.' + args.ext)]\n    else:\n        print(\"Loading file: %s\" % args.input_file)\n        inputs = [caffe.io.load_image(args.input_file)]\n\n    print(\"Classifying %d inputs.\" % len(inputs))\n\n    # Classify.\n    start = time.time()\n    predictions = classifier.predict(inputs, not args.center_only)\n    print(\"Done in %.2f s.\" % (time.time() - start))\n\n    # Save\n    print(\"Saving results into %s\" % args.output_file)\n    np.save(args.output_file, predictions)\n\n\nif __name__ == '__main__':\n    main(sys.argv)\n"
  },
  {
    "path": "caffe-fpn/python/detect.py",
    "content": "#!/usr/bin/env python\n\"\"\"\ndetect.py is an out-of-the-box windowed detector\ncallable from the command line.\n\nBy default it configures and runs the Caffe reference ImageNet model.\nNote that this model was trained for image classification and not detection,\nand finetuning for detection can be expected to improve results.\n\nThe selective_search_ijcv_with_python code required for the selective search\nproposal mode is available at\n    https://github.com/sergeyk/selective_search_ijcv_with_python\n\nTODO:\n- batch up image filenames as well: don't want to load all of them into memory\n- come up with a batching scheme that preserves order / keeps a unique ID\n\"\"\"\nimport numpy as np\nimport pandas as pd\nimport os\nimport argparse\nimport time\n\nimport caffe\n\nCROP_MODES = ['list', 'selective_search']\nCOORD_COLS = ['ymin', 'xmin', 'ymax', 'xmax']\n\n\ndef main(argv):\n    pycaffe_dir = os.path.dirname(__file__)\n\n    parser = argparse.ArgumentParser()\n    # Required arguments: input and output.\n    parser.add_argument(\n        \"input_file\",\n        help=\"Input txt/csv filename. If .txt, must be list of filenames.\\\n        If .csv, must be comma-separated file with header\\\n        'filename, xmin, ymin, xmax, ymax'\"\n    )\n    parser.add_argument(\n        \"output_file\",\n        help=\"Output h5/csv filename. 
Format depends on extension.\"\n    )\n    # Optional arguments.\n    parser.add_argument(\n        \"--model_def\",\n        default=os.path.join(pycaffe_dir,\n                \"../models/bvlc_reference_caffenet/deploy.prototxt\"),\n        help=\"Model definition file.\"\n    )\n    parser.add_argument(\n        \"--pretrained_model\",\n        default=os.path.join(pycaffe_dir,\n                \"../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel\"),\n        help=\"Trained model weights file.\"\n    )\n    parser.add_argument(\n        \"--crop_mode\",\n        default=\"selective_search\",\n        choices=CROP_MODES,\n        help=\"How to generate windows for detection.\"\n    )\n    parser.add_argument(\n        \"--gpu\",\n        action='store_true',\n        help=\"Switch for gpu computation.\"\n    )\n    parser.add_argument(\n        \"--mean_file\",\n        default=os.path.join(pycaffe_dir,\n                             'caffe/imagenet/ilsvrc_2012_mean.npy'),\n        help=\"Data set image mean of H x W x K dimensions (numpy array). \" +\n             \"Set to '' for no mean subtraction.\"\n    )\n    parser.add_argument(\n        \"--input_scale\",\n        type=float,\n        help=\"Multiply input features by this scale to finish preprocessing.\"\n    )\n    parser.add_argument(\n        \"--raw_scale\",\n        type=float,\n        default=255.0,\n        help=\"Multiply raw input by this scale before preprocessing.\"\n    )\n    parser.add_argument(\n        \"--channel_swap\",\n        default='2,1,0',\n        help=\"Order to permute input channels. 
The default converts \" +\n             \"RGB -> BGR since BGR is the Caffe default by way of OpenCV.\"\n\n    )\n    parser.add_argument(\n        \"--context_pad\",\n        type=int,\n        default='16',\n        help=\"Amount of surrounding context to collect in input window.\"\n    )\n    args = parser.parse_args()\n\n    mean, channel_swap = None, None\n    if args.mean_file:\n        mean = np.load(args.mean_file)\n        if mean.shape[1:] != (1, 1):\n            mean = mean.mean(1).mean(1)\n    if args.channel_swap:\n        channel_swap = [int(s) for s in args.channel_swap.split(',')]\n\n    if args.gpu:\n        caffe.set_mode_gpu()\n        print(\"GPU mode\")\n    else:\n        caffe.set_mode_cpu()\n        print(\"CPU mode\")\n\n    # Make detector.\n    detector = caffe.Detector(args.model_def, args.pretrained_model, mean=mean,\n            input_scale=args.input_scale, raw_scale=args.raw_scale,\n            channel_swap=channel_swap,\n            context_pad=args.context_pad)\n\n    # Load input.\n    t = time.time()\n    print(\"Loading input...\")\n    if args.input_file.lower().endswith('txt'):\n        with open(args.input_file) as f:\n            inputs = [_.strip() for _ in f.readlines()]\n    elif args.input_file.lower().endswith('csv'):\n        inputs = pd.read_csv(args.input_file, sep=',', dtype={'filename': str})\n        inputs.set_index('filename', inplace=True)\n    else:\n        raise Exception(\"Unknown input file type: not in txt or csv.\")\n\n    # Detect.\n    if args.crop_mode == 'list':\n        # Unpack sequence of (image filename, windows).\n        images_windows = [\n            (ix, inputs.iloc[np.where(inputs.index == ix)][COORD_COLS].values)\n            for ix in inputs.index.unique()\n        ]\n        detections = detector.detect_windows(images_windows)\n    else:\n        detections = detector.detect_selective_search(inputs)\n    print(\"Processed {} windows in {:.3f} s.\".format(len(detections),\n                
                                     time.time() - t))\n\n    # Collect into dataframe with labeled fields.\n    df = pd.DataFrame(detections)\n    df.set_index('filename', inplace=True)\n    df[COORD_COLS] = pd.DataFrame(\n        data=np.vstack(df['window']), index=df.index, columns=COORD_COLS)\n    del(df['window'])\n\n    # Save results.\n    t = time.time()\n    if args.output_file.lower().endswith('csv'):\n        # csv\n        # Enumerate the class probabilities, inferring the number of\n        # classes from the length of the first feature vector.\n        num_output = len(df['feat'].iloc[0])\n        class_cols = ['class{}'.format(x) for x in range(num_output)]\n        df[class_cols] = pd.DataFrame(\n            data=np.vstack(df['feat']), index=df.index, columns=class_cols)\n        df.to_csv(args.output_file, columns=COORD_COLS + class_cols)\n    else:\n        # h5\n        df.to_hdf(args.output_file, 'df', mode='w')\n    print(\"Saved to {} in {:.3f} s.\".format(args.output_file,\n                                            time.time() - t))\n\n\nif __name__ == \"__main__\":\n    import sys\n    main(sys.argv)\n"
  },
  {
    "path": "caffe-fpn/python/draw_net.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nDraw a graph of the net architecture.\n\"\"\"\nfrom argparse import ArgumentParser, ArgumentDefaultsHelpFormatter\nfrom google.protobuf import text_format\n\nimport caffe\nimport caffe.draw\nfrom caffe.proto import caffe_pb2\n\n\ndef parse_args():\n    \"\"\"Parse input arguments\n    \"\"\"\n\n    parser = ArgumentParser(description=__doc__,\n                            formatter_class=ArgumentDefaultsHelpFormatter)\n\n    parser.add_argument('input_net_proto_file',\n                        help='Input network prototxt file')\n    parser.add_argument('output_image_file',\n                        help='Output image file')\n    parser.add_argument('--rankdir',\n                        help=('One of TB (top-bottom, i.e., vertical), '\n                              'RL (right-left, i.e., horizontal), or another '\n                              'valid dot option; see '\n                              'http://www.graphviz.org/doc/info/'\n                              'attrs.html#k:rankdir'),\n                        default='LR')\n\n    args = parser.parse_args()\n    return args\n\n\ndef main():\n    args = parse_args()\n    net = caffe_pb2.NetParameter()\n    text_format.Merge(open(args.input_net_proto_file).read(), net)\n    print('Drawing net to %s' % args.output_image_file)\n    caffe.draw.draw_net_to_file(net, args.output_image_file, args.rankdir)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "caffe-fpn/python/requirements.txt",
    "content": "Cython>=0.19.2\nnumpy>=1.7.1\nscipy>=0.13.2\nscikit-image>=0.9.3\nmatplotlib>=1.3.1\nipython>=3.0.0\nh5py>=2.2.0\nleveldb>=0.191\nnetworkx>=1.8.1\nnose>=1.3.0\npandas>=0.12.0\npython-dateutil>=1.4,<2\nprotobuf>=2.5.0\npython-gflags>=2.0\npyyaml>=3.10\nPillow>=2.3.0\nsix>=1.1.0"
  },
  {
    "path": "caffe-fpn/scripts/build_docs.sh",
    "content": "#!/bin/bash\n# Build documentation for display in web browser.\n\nPORT=${1:-4000}\n\necho \"usage: build_docs.sh [port]\"\n\n# Find the docs dir, no matter where the script is called\nROOT_DIR=\"$( cd \"$(dirname \"$0\")\"/.. ; pwd -P )\"\ncd \"$ROOT_DIR\"\n\n# Gather docs.\nscripts/gather_examples.sh\n\n# Generate developer docs.\nmake docs\n\n# Display docs using web server.\ncd docs\njekyll serve -w -s . -d _site --port=$PORT\n"
  },
  {
    "path": "caffe-fpn/scripts/copy_notebook.py",
    "content": "#!/usr/bin/env python\n\"\"\"\nTakes as arguments:\n1. the path to a JSON file (such as an IPython notebook).\n2. the path to the output file\n\nIf the 'metadata' dict in the JSON file contains 'include_in_docs': true,\nthen copies the file to the output file, appending the 'metadata' property\nas YAML front-matter, adding the field 'category' with value 'notebook'.\n\"\"\"\nimport os\nimport sys\nimport json\n\nfilename = sys.argv[1]\noutput_filename = sys.argv[2]\ncontent = json.load(open(filename))\n\nif 'include_in_docs' in content['metadata'] and content['metadata']['include_in_docs']:\n    yaml_frontmatter = ['---']\n    for key, val in content['metadata'].items():\n        if key == 'example_name':\n            key = 'title'\n            if val == '':\n                val = os.path.basename(filename)\n        yaml_frontmatter.append('{}: {}'.format(key, val))\n    yaml_frontmatter += ['category: notebook']\n    yaml_frontmatter += ['original_path: ' + filename]\n\n    with open(output_filename, 'w') as fo:\n        fo.write('\\n'.join(yaml_frontmatter + ['---']) + '\\n')\n        fo.write(open(filename).read())\n"
  },
  {
    "path": "caffe-fpn/scripts/cpp_lint.py",
    "content": "#!/usr/bin/python2\n#\n# Copyright (c) 2009 Google Inc. All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n#    * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#    * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following disclaimer\n# in the documentation and/or other materials provided with the\n# distribution.\n#    * Neither the name of Google Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"Does google-lint on c++ files.\n\nThe goal of this script is to identify places in the code that *may*\nbe in non-compliance with google style.  It does not attempt to fix\nup these problems -- the point is to educate.  
It does also not\nattempt to find all problems, or to ensure that everything it does\nfind is legitimately a problem.\n\nIn particular, we can get very confused by /* and // inside strings!\nWe do a small hack, which is to ignore //'s with \"'s after them on the\nsame line, but it is far from perfect (in either direction).\n\"\"\"\n\nimport codecs\nimport copy\nimport getopt\nimport math  # for log\nimport os\nimport re\nimport sre_compile\nimport string\nimport sys\nimport unicodedata\n\n\n_USAGE = \"\"\"\nSyntax: cpp_lint.py [--verbose=#] [--output=vs7] [--filter=-x,+y,...]\n                   [--counting=total|toplevel|detailed] [--root=subdir]\n                   [--linelength=digits]\n        <file> [file] ...\n\n  The style guidelines this tries to follow are those in\n    http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml\n\n  Every problem is given a confidence score from 1-5, with 5 meaning we are\n  certain of the problem, and 1 meaning it could be a legitimate construct.\n  This will miss some errors, and is not a substitute for a code review.\n\n  To suppress false-positive errors of a certain category, add a\n  'NOLINT(category)' comment to the line.  NOLINT or NOLINT(*)\n  suppresses errors of all categories on that line.\n\n  The files passed in will be linted; at least one file must be provided.\n  Default linted extensions are .cc, .cpp, .cu, .cuh and .h.  Change the\n  extensions with the --extensions flag.\n\n  Flags:\n\n    output=vs7\n      By default, the output is formatted to ease emacs parsing.  Visual Studio\n      compatible output (vs7) may also be used.  
Other formats are unsupported.\n\n    verbose=#\n      Specify a number 0-5 to restrict errors to certain verbosity levels.\n\n    filter=-x,+y,...\n      Specify a comma-separated list of category-filters to apply: only\n      error messages whose category names pass the filters will be printed.\n      (Category names are printed with the message and look like\n      \"[whitespace/indent]\".)  Filters are evaluated left to right.\n      \"-FOO\" and \"FOO\" means \"do not print categories that start with FOO\".\n      \"+FOO\" means \"do print categories that start with FOO\".\n\n      Examples: --filter=-whitespace,+whitespace/braces\n                --filter=whitespace,runtime/printf,+runtime/printf_format\n                --filter=-,+build/include_what_you_use\n\n      To see a list of all the categories used in cpplint, pass no arg:\n         --filter=\n\n    counting=total|toplevel|detailed\n      The total number of errors found is always printed. If\n      'toplevel' is provided, then the count of errors in each of\n      the top-level categories like 'build' and 'whitespace' will\n      also be printed. If 'detailed' is provided, then a count\n      is provided for each category like 'build/class'.\n\n    root=subdir\n      The root directory used for deriving header guard CPP variable.\n      By default, the header guard CPP variable is calculated as the relative\n      path to the directory that contains .git, .hg, or .svn.  When this flag\n      is specified, the relative path is calculated from the specified\n      directory. 
If the specified directory does not exist, this flag is\n      ignored.\n\n      Examples:\n        Assuing that src/.git exists, the header guard CPP variables for\n        src/chrome/browser/ui/browser.h are:\n\n        No flag => CHROME_BROWSER_UI_BROWSER_H_\n        --root=chrome => BROWSER_UI_BROWSER_H_\n        --root=chrome/browser => UI_BROWSER_H_\n\n    linelength=digits\n      This is the allowed line length for the project. The default value is\n      80 characters.\n\n      Examples:\n        --linelength=120\n\n    extensions=extension,extension,...\n      The allowed file extensions that cpplint will check\n\n      Examples:\n        --extensions=hpp,cpp\n\"\"\"\n\n# We categorize each error message we print.  Here are the categories.\n# We want an explicit list so we can list them all in cpplint --filter=.\n# If you add a new error message with a new category, add it to the list\n# here!  cpplint_unittest.py should tell you if you forget to do this.\n_ERROR_CATEGORIES = [\n  'build/class',\n  'build/deprecated',\n  'build/endif_comment',\n  'build/explicit_make_pair',\n  'build/forward_decl',\n  'build/header_guard',\n  'build/include',\n  'build/include_alpha',\n  'build/include_dir',\n  'build/include_order',\n  'build/include_what_you_use',\n  'build/namespaces',\n  'build/printf_format',\n  'build/storage_class',\n  'caffe/alt_fn',\n  'caffe/data_layer_setup',\n  'caffe/random_fn',\n  'legal/copyright',\n  'readability/alt_tokens',\n  'readability/braces',\n  'readability/casting',\n  'readability/check',\n  'readability/constructors',\n  'readability/fn_size',\n  'readability/function',\n  'readability/multiline_comment',\n  'readability/multiline_string',\n  'readability/namespace',\n  'readability/nolint',\n  'readability/nul',\n  'readability/streams',\n  'readability/todo',\n  'readability/utf8',\n  'runtime/arrays',\n  'runtime/casting',\n  'runtime/explicit',\n  'runtime/int',\n  'runtime/init',\n  'runtime/invalid_increment',\n  
'runtime/member_string_references',\n  'runtime/memset',\n  'runtime/operator',\n  'runtime/printf',\n  'runtime/printf_format',\n  'runtime/references',\n  'runtime/string',\n  'runtime/threadsafe_fn',\n  'runtime/vlog',\n  'whitespace/blank_line',\n  'whitespace/braces',\n  'whitespace/comma',\n  'whitespace/comments',\n  'whitespace/empty_conditional_body',\n  'whitespace/empty_loop_body',\n  'whitespace/end_of_line',\n  'whitespace/ending_newline',\n  'whitespace/forcolon',\n  'whitespace/indent',\n  'whitespace/line_length',\n  'whitespace/newline',\n  'whitespace/operators',\n  'whitespace/parens',\n  'whitespace/semicolon',\n  'whitespace/tab',\n  'whitespace/todo'\n  ]\n\n# The default state of the category filter. This is overrided by the --filter=\n# flag. By default all errors are on, so only add here categories that should be\n# off by default (i.e., categories that must be enabled by the --filter= flags).\n# All entries here should start with a '-' or '+', as in the --filter= flag.\n_DEFAULT_FILTERS = [\n  '-build/include_dir',\n  '-readability/todo',\n  ]\n\n# We used to check for high-bit characters, but after much discussion we\n# decided those were OK, as long as they were in UTF-8 and didn't represent\n# hard-coded international strings, which belong in a separate i18n file.\n\n\n# C++ headers\n_CPP_HEADERS = frozenset([\n    # Legacy\n    'algobase.h',\n    'algo.h',\n    'alloc.h',\n    'builtinbuf.h',\n    'bvector.h',\n    'complex.h',\n    'defalloc.h',\n    'deque.h',\n    'editbuf.h',\n    'fstream.h',\n    'function.h',\n    'hash_map',\n    'hash_map.h',\n    'hash_set',\n    'hash_set.h',\n    'hashtable.h',\n    'heap.h',\n    'indstream.h',\n    'iomanip.h',\n    'iostream.h',\n    'istream.h',\n    'iterator.h',\n    'list.h',\n    'map.h',\n    'multimap.h',\n    'multiset.h',\n    'ostream.h',\n    'pair.h',\n    'parsestream.h',\n    'pfstream.h',\n    'procbuf.h',\n    'pthread_alloc',\n    'pthread_alloc.h',\n    'rope',\n    
'rope.h',\n    'ropeimpl.h',\n    'set.h',\n    'slist',\n    'slist.h',\n    'stack.h',\n    'stdiostream.h',\n    'stl_alloc.h',\n    'stl_relops.h',\n    'streambuf.h',\n    'stream.h',\n    'strfile.h',\n    'strstream.h',\n    'tempbuf.h',\n    'tree.h',\n    'type_traits.h',\n    'vector.h',\n    # 17.6.1.2 C++ library headers\n    'algorithm',\n    'array',\n    'atomic',\n    'bitset',\n    'chrono',\n    'codecvt',\n    'complex',\n    'condition_variable',\n    'deque',\n    'exception',\n    'forward_list',\n    'fstream',\n    'functional',\n    'future',\n    'initializer_list',\n    'iomanip',\n    'ios',\n    'iosfwd',\n    'iostream',\n    'istream',\n    'iterator',\n    'limits',\n    'list',\n    'locale',\n    'map',\n    'memory',\n    'mutex',\n    'new',\n    'numeric',\n    'ostream',\n    'queue',\n    'random',\n    'ratio',\n    'regex',\n    'set',\n    'sstream',\n    'stack',\n    'stdexcept',\n    'streambuf',\n    'string',\n    'strstream',\n    'system_error',\n    'thread',\n    'tuple',\n    'typeindex',\n    'typeinfo',\n    'type_traits',\n    'unordered_map',\n    'unordered_set',\n    'utility',\n    'valarray',\n    'vector',\n    # 17.6.1.2 C++ headers for C library facilities\n    'cassert',\n    'ccomplex',\n    'cctype',\n    'cerrno',\n    'cfenv',\n    'cfloat',\n    'cinttypes',\n    'ciso646',\n    'climits',\n    'clocale',\n    'cmath',\n    'csetjmp',\n    'csignal',\n    'cstdalign',\n    'cstdarg',\n    'cstdbool',\n    'cstddef',\n    'cstdint',\n    'cstdio',\n    'cstdlib',\n    'cstring',\n    'ctgmath',\n    'ctime',\n    'cuchar',\n    'cwchar',\n    'cwctype',\n    ])\n\n# Assertion macros.  These are defined in base/logging.h and\n# testing/base/gunit.h.  
Note that the _M versions need to come first\n# for substring matching to work.\n_CHECK_MACROS = [\n    'DCHECK', 'CHECK',\n    'EXPECT_TRUE_M', 'EXPECT_TRUE',\n    'ASSERT_TRUE_M', 'ASSERT_TRUE',\n    'EXPECT_FALSE_M', 'EXPECT_FALSE',\n    'ASSERT_FALSE_M', 'ASSERT_FALSE',\n    ]\n\n# Replacement macros for CHECK/DCHECK/EXPECT_TRUE/EXPECT_FALSE\n_CHECK_REPLACEMENT = dict([(m, {}) for m in _CHECK_MACROS])\n\nfor op, replacement in [('==', 'EQ'), ('!=', 'NE'),\n                        ('>=', 'GE'), ('>', 'GT'),\n                        ('<=', 'LE'), ('<', 'LT')]:\n  _CHECK_REPLACEMENT['DCHECK'][op] = 'DCHECK_%s' % replacement\n  _CHECK_REPLACEMENT['CHECK'][op] = 'CHECK_%s' % replacement\n  _CHECK_REPLACEMENT['EXPECT_TRUE'][op] = 'EXPECT_%s' % replacement\n  _CHECK_REPLACEMENT['ASSERT_TRUE'][op] = 'ASSERT_%s' % replacement\n  _CHECK_REPLACEMENT['EXPECT_TRUE_M'][op] = 'EXPECT_%s_M' % replacement\n  _CHECK_REPLACEMENT['ASSERT_TRUE_M'][op] = 'ASSERT_%s_M' % replacement\n\nfor op, inv_replacement in [('==', 'NE'), ('!=', 'EQ'),\n                            ('>=', 'LT'), ('>', 'LE'),\n                            ('<=', 'GT'), ('<', 'GE')]:\n  _CHECK_REPLACEMENT['EXPECT_FALSE'][op] = 'EXPECT_%s' % inv_replacement\n  _CHECK_REPLACEMENT['ASSERT_FALSE'][op] = 'ASSERT_%s' % inv_replacement\n  _CHECK_REPLACEMENT['EXPECT_FALSE_M'][op] = 'EXPECT_%s_M' % inv_replacement\n  _CHECK_REPLACEMENT['ASSERT_FALSE_M'][op] = 'ASSERT_%s_M' % inv_replacement\n\n# Alternative tokens and their replacements.  
For full list, see section 2.5\n# Alternative tokens [lex.digraph] in the C++ standard.\n#\n# Digraphs (such as '%:') are not included here since it's a mess to\n# match those on a word boundary.\n_ALT_TOKEN_REPLACEMENT = {\n    'and': '&&',\n    'bitor': '|',\n    'or': '||',\n    'xor': '^',\n    'compl': '~',\n    'bitand': '&',\n    'and_eq': '&=',\n    'or_eq': '|=',\n    'xor_eq': '^=',\n    'not': '!',\n    'not_eq': '!='\n    }\n\n# Compile regular expression that matches all the above keywords.  The \"[ =()]\"\n# bit is meant to avoid matching these keywords outside of boolean expressions.\n#\n# False positives include C-style multi-line comments and multi-line strings\n# but those have always been troublesome for cpplint.\n_ALT_TOKEN_REPLACEMENT_PATTERN = re.compile(\n    r'[ =()](' + ('|'.join(_ALT_TOKEN_REPLACEMENT.keys())) + r')(?=[ (]|$)')\n\n\n# These constants define types of headers for use with\n# _IncludeState.CheckNextIncludeOrder().\n_C_SYS_HEADER = 1\n_CPP_SYS_HEADER = 2\n_LIKELY_MY_HEADER = 3\n_POSSIBLE_MY_HEADER = 4\n_OTHER_HEADER = 5\n\n# These constants define the current inline assembly state\n_NO_ASM = 0       # Outside of inline assembly block\n_INSIDE_ASM = 1   # Inside inline assembly block\n_END_ASM = 2      # Last line of inline assembly block\n_BLOCK_ASM = 3    # The whole block is an inline assembly block\n\n# Match start of assembly blocks\n_MATCH_ASM = re.compile(r'^\\s*(?:asm|_asm|__asm|__asm__)'\n                        r'(?:\\s+(volatile|__volatile__))?'\n                        r'\\s*[{(]')\n\n\n_regexp_compile_cache = {}\n\n# Finds occurrences of NOLINT[_NEXT_LINE] or NOLINT[_NEXT_LINE](...).\n_RE_SUPPRESSION = re.compile(r'\\bNOLINT(_NEXT_LINE)?\\b(\\([^)]*\\))?')\n\n# {str, set(int)}: a map from error categories to sets of linenumbers\n# on which those errors are expected and should be suppressed.\n_error_suppressions = {}\n\n# Finds Copyright.\n_RE_COPYRIGHT = re.compile(r'Copyright')\n\n# The root directory used for 
deriving header guard CPP variable.\n# This is set by --root flag.\n_root = None\n\n# The allowed line length of files.\n# This is set by --linelength flag.\n_line_length = 80\n\n# The allowed extensions for file names\n# This is set by --extensions flag.\n_valid_extensions = set(['cc', 'h', 'cpp', 'hpp', 'cu', 'cuh'])\n\ndef ParseNolintSuppressions(filename, raw_line, linenum, error):\n  \"\"\"Updates the global list of error-suppressions.\n\n  Parses any NOLINT comments on the current line, updating the global\n  error_suppressions store.  Reports an error if the NOLINT comment\n  was malformed.\n\n  Args:\n    filename: str, the name of the input file.\n    raw_line: str, the line of input text, with comments.\n    linenum: int, the number of the current line.\n    error: function, an error handler.\n  \"\"\"\n  # FIXME(adonovan): \"NOLINT(\" is misparsed as NOLINT(*).\n  matched = _RE_SUPPRESSION.search(raw_line)\n  if matched:\n    if matched.group(1) == '_NEXT_LINE':\n      linenum += 1\n    category = matched.group(2)\n    if category in (None, '(*)'):  # => \"suppress all\"\n      _error_suppressions.setdefault(None, set()).add(linenum)\n    else:\n      if category.startswith('(') and category.endswith(')'):\n        category = category[1:-1]\n        if category in _ERROR_CATEGORIES:\n          _error_suppressions.setdefault(category, set()).add(linenum)\n        else:\n          error(filename, linenum, 'readability/nolint', 5,\n                'Unknown NOLINT error category: %s' % category)\n\n\ndef ResetNolintSuppressions():\n  \"Resets the set of NOLINT suppressions to empty.\"\n  _error_suppressions.clear()\n\n\ndef IsErrorSuppressedByNolint(category, linenum):\n  \"\"\"Returns true if the specified error category is suppressed on this line.\n\n  Consults the global error_suppressions map populated by\n  ParseNolintSuppressions/ResetNolintSuppressions.\n\n  Args:\n    category: str, the category of the error.\n    linenum: int, the current line 
number.\n  Returns:\n    bool, True iff the error should be suppressed due to a NOLINT comment.\n  \"\"\"\n  return (linenum in _error_suppressions.get(category, set()) or\n          linenum in _error_suppressions.get(None, set()))\n\ndef Match(pattern, s):\n  \"\"\"Matches the string with the pattern, caching the compiled regexp.\"\"\"\n  # The regexp compilation caching is inlined in both Match and Search for\n  # performance reasons; factoring it out into a separate function turns out\n  # to be noticeably expensive.\n  if pattern not in _regexp_compile_cache:\n    _regexp_compile_cache[pattern] = sre_compile.compile(pattern)\n  return _regexp_compile_cache[pattern].match(s)\n\n\ndef ReplaceAll(pattern, rep, s):\n  \"\"\"Replaces instances of pattern in a string with a replacement.\n\n  The compiled regex is kept in a cache shared by Match and Search.\n\n  Args:\n    pattern: regex pattern\n    rep: replacement text\n    s: search string\n\n  Returns:\n    string with replacements made (or original string if no replacements)\n  \"\"\"\n  if pattern not in _regexp_compile_cache:\n    _regexp_compile_cache[pattern] = sre_compile.compile(pattern)\n  return _regexp_compile_cache[pattern].sub(rep, s)\n\n\ndef Search(pattern, s):\n  \"\"\"Searches the string for the pattern, caching the compiled regexp.\"\"\"\n  if pattern not in _regexp_compile_cache:\n    _regexp_compile_cache[pattern] = sre_compile.compile(pattern)\n  return _regexp_compile_cache[pattern].search(s)\n\n\nclass _IncludeState(dict):\n  \"\"\"Tracks line numbers for includes, and the order in which includes appear.\n\n  As a dict, an _IncludeState object serves as a mapping between include\n  filename and line number on which that file was included.\n\n  Call CheckNextIncludeOrder() once for each header in the file, passing\n  in the type constants defined above. 
Calls in an illegal order will\n  raise an _IncludeError with an appropriate error message.\n\n  \"\"\"\n  # self._section will move monotonically through this set. If it ever\n  # needs to move backwards, CheckNextIncludeOrder will raise an error.\n  _INITIAL_SECTION = 0\n  _MY_H_SECTION = 1\n  _C_SECTION = 2\n  _CPP_SECTION = 3\n  _OTHER_H_SECTION = 4\n\n  _TYPE_NAMES = {\n      _C_SYS_HEADER: 'C system header',\n      _CPP_SYS_HEADER: 'C++ system header',\n      _LIKELY_MY_HEADER: 'header this file implements',\n      _POSSIBLE_MY_HEADER: 'header this file may implement',\n      _OTHER_HEADER: 'other header',\n      }\n  _SECTION_NAMES = {\n      _INITIAL_SECTION: \"... nothing. (This can't be an error.)\",\n      _MY_H_SECTION: 'a header this file implements',\n      _C_SECTION: 'C system header',\n      _CPP_SECTION: 'C++ system header',\n      _OTHER_H_SECTION: 'other header',\n      }\n\n  def __init__(self):\n    dict.__init__(self)\n    self.ResetSection()\n\n  def ResetSection(self):\n    # The name of the current section.\n    self._section = self._INITIAL_SECTION\n    # The path of last found header.\n    self._last_header = ''\n\n  def SetLastHeader(self, header_path):\n    self._last_header = header_path\n\n  def CanonicalizeAlphabeticalOrder(self, header_path):\n    \"\"\"Returns a path canonicalized for alphabetical comparison.\n\n    - replaces \"-\" with \"_\" so they both cmp the same.\n    - removes '-inl' since we don't require them to be after the main header.\n    - lowercase everything, just in case.\n\n    Args:\n      header_path: Path to be canonicalized.\n\n    Returns:\n      Canonicalized path.\n    \"\"\"\n    return header_path.replace('-inl.h', '.h').replace('-', '_').lower()\n\n  def IsInAlphabeticalOrder(self, clean_lines, linenum, header_path):\n    \"\"\"Check if a header is in alphabetical order with the previous header.\n\n    Args:\n      clean_lines: A CleansedLines instance containing the file.\n      linenum: The number of 
the line to check.\n      header_path: Canonicalized header to be checked.\n\n    Returns:\n      Returns true if the header is in alphabetical order.\n    \"\"\"\n    # If previous section is different from current section, _last_header will\n    # be reset to empty string, so it's always less than current header.\n    #\n    # If previous line was a blank line, assume that the headers are\n    # intentionally sorted the way they are.\n    if (self._last_header > header_path and\n        not Match(r'^\\s*$', clean_lines.elided[linenum - 1])):\n      return False\n    return True\n\n  def CheckNextIncludeOrder(self, header_type):\n    \"\"\"Returns a non-empty error message if the next header is out of order.\n\n    This function also updates the internal state to be ready to check\n    the next include.\n\n    Args:\n      header_type: One of the _XXX_HEADER constants defined above.\n\n    Returns:\n      The empty string if the header is in the right order, or an\n      error message describing what's wrong.\n\n    \"\"\"\n    error_message = ('Found %s after %s' %\n                     (self._TYPE_NAMES[header_type],\n                      self._SECTION_NAMES[self._section]))\n\n    last_section = self._section\n\n    if header_type == _C_SYS_HEADER:\n      if self._section <= self._C_SECTION:\n        self._section = self._C_SECTION\n      else:\n        self._last_header = ''\n        return error_message\n    elif header_type == _CPP_SYS_HEADER:\n      if self._section <= self._CPP_SECTION:\n        self._section = self._CPP_SECTION\n      else:\n        self._last_header = ''\n        return error_message\n    elif header_type == _LIKELY_MY_HEADER:\n      if self._section <= self._MY_H_SECTION:\n        self._section = self._MY_H_SECTION\n      else:\n        self._section = self._OTHER_H_SECTION\n    elif header_type == _POSSIBLE_MY_HEADER:\n      if self._section <= self._MY_H_SECTION:\n        self._section = self._MY_H_SECTION\n      else:\n        # 
This will always be the fallback because we're not sure\n        # enough that the header is associated with this file.\n        self._section = self._OTHER_H_SECTION\n    else:\n      assert header_type == _OTHER_HEADER\n      self._section = self._OTHER_H_SECTION\n\n    if last_section != self._section:\n      self._last_header = ''\n\n    return ''\n\n\nclass _CppLintState(object):\n  \"\"\"Maintains module-wide state.\"\"\"\n\n  def __init__(self):\n    self.verbose_level = 1  # global setting.\n    self.error_count = 0    # global count of reported errors\n    # filters to apply when emitting error messages\n    self.filters = _DEFAULT_FILTERS[:]\n    self.counting = 'total'  # In what way are we counting errors?\n    self.errors_by_category = {}  # string to int dict storing error counts\n\n    # output format:\n    # \"emacs\" - format that emacs can parse (default)\n    # \"vs7\" - format that Microsoft Visual Studio 7 can parse\n    self.output_format = 'emacs'\n\n  def SetOutputFormat(self, output_format):\n    \"\"\"Sets the output format for errors.\"\"\"\n    self.output_format = output_format\n\n  def SetVerboseLevel(self, level):\n    \"\"\"Sets the module's verbosity, and returns the previous setting.\"\"\"\n    last_verbose_level = self.verbose_level\n    self.verbose_level = level\n    return last_verbose_level\n\n  def SetCountingStyle(self, counting_style):\n    \"\"\"Sets the module's counting options.\"\"\"\n    self.counting = counting_style\n\n  def SetFilters(self, filters):\n    \"\"\"Sets the error-message filters.\n\n    These filters are applied when deciding whether to emit a given\n    error message.\n\n    Args:\n      filters: A string of comma-separated filters (e.g. \"+whitespace/indent\").\n               Each filter should start with + or -; else we die.\n\n    Raises:\n      ValueError: The comma-separated filters did not all start with '+' or '-'.\n                  E.g. 
\"-,+whitespace,-whitespace/indent,whitespace/badfilter\"\n    \"\"\"\n    # Default filters always have less priority than the flag ones.\n    self.filters = _DEFAULT_FILTERS[:]\n    for filt in filters.split(','):\n      clean_filt = filt.strip()\n      if clean_filt:\n        self.filters.append(clean_filt)\n    for filt in self.filters:\n      if not (filt.startswith('+') or filt.startswith('-')):\n        raise ValueError('Every filter in --filters must start with + or -'\n                         ' (%s does not)' % filt)\n\n  def ResetErrorCounts(self):\n    \"\"\"Sets the module's error statistic back to zero.\"\"\"\n    self.error_count = 0\n    self.errors_by_category = {}\n\n  def IncrementErrorCount(self, category):\n    \"\"\"Bumps the module's error statistic.\"\"\"\n    self.error_count += 1\n    if self.counting in ('toplevel', 'detailed'):\n      if self.counting != 'detailed':\n        category = category.split('/')[0]\n      if category not in self.errors_by_category:\n        self.errors_by_category[category] = 0\n      self.errors_by_category[category] += 1\n\n  def PrintErrorCounts(self):\n    \"\"\"Print a summary of errors by category, and the total.\"\"\"\n    for category, count in self.errors_by_category.iteritems():\n      sys.stderr.write('Category \\'%s\\' errors found: %d\\n' %\n                       (category, count))\n    sys.stderr.write('Total errors found: %d\\n' % self.error_count)\n\n_cpplint_state = _CppLintState()\n\n\ndef _OutputFormat():\n  \"\"\"Gets the module's output format.\"\"\"\n  return _cpplint_state.output_format\n\n\ndef _SetOutputFormat(output_format):\n  \"\"\"Sets the module's output format.\"\"\"\n  _cpplint_state.SetOutputFormat(output_format)\n\n\ndef _VerboseLevel():\n  \"\"\"Returns the module's verbosity setting.\"\"\"\n  return _cpplint_state.verbose_level\n\n\ndef _SetVerboseLevel(level):\n  \"\"\"Sets the module's verbosity, and returns the previous setting.\"\"\"\n  return 
_cpplint_state.SetVerboseLevel(level)\n\n\ndef _SetCountingStyle(level):\n  \"\"\"Sets the module's counting options.\"\"\"\n  _cpplint_state.SetCountingStyle(level)\n\n\ndef _Filters():\n  \"\"\"Returns the module's list of output filters, as a list.\"\"\"\n  return _cpplint_state.filters\n\n\ndef _SetFilters(filters):\n  \"\"\"Sets the module's error-message filters.\n\n  These filters are applied when deciding whether to emit a given\n  error message.\n\n  Args:\n    filters: A string of comma-separated filters (eg \"whitespace/indent\").\n             Each filter should start with + or -; else we die.\n  \"\"\"\n  _cpplint_state.SetFilters(filters)\n\n\nclass _FunctionState(object):\n  \"\"\"Tracks current function name and the number of lines in its body.\"\"\"\n\n  _NORMAL_TRIGGER = 250  # for --v=0, 500 for --v=1, etc.\n  _TEST_TRIGGER = 400    # about 50% more than _NORMAL_TRIGGER.\n\n  def __init__(self):\n    self.in_a_function = False\n    self.lines_in_function = 0\n    self.current_function = ''\n\n  def Begin(self, function_name):\n    \"\"\"Start analyzing function body.\n\n    Args:\n      function_name: The name of the function being tracked.\n    \"\"\"\n    self.in_a_function = True\n    self.lines_in_function = 0\n    self.current_function = function_name\n\n  def Count(self):\n    \"\"\"Count line in current function body.\"\"\"\n    if self.in_a_function:\n      self.lines_in_function += 1\n\n  def Check(self, error, filename, linenum):\n    \"\"\"Report if too many lines in function body.\n\n    Args:\n      error: The function to call with any errors found.\n      filename: The name of the current file.\n      linenum: The number of the line to check.\n    \"\"\"\n    if Match(r'T(EST|est)', self.current_function):\n      base_trigger = self._TEST_TRIGGER\n    else:\n      base_trigger = self._NORMAL_TRIGGER\n    trigger = base_trigger * 2**_VerboseLevel()\n\n    if self.lines_in_function > trigger:\n      error_level = 
int(math.log(self.lines_in_function / base_trigger, 2))\n      # 50 => 0, 100 => 1, 200 => 2, 400 => 3, 800 => 4, 1600 => 5, ...\n      if error_level > 5:\n        error_level = 5\n      error(filename, linenum, 'readability/fn_size', error_level,\n            'Small and focused functions are preferred:'\n            ' %s has %d non-comment lines'\n            ' (error triggered by exceeding %d lines).'  % (\n                self.current_function, self.lines_in_function, trigger))\n\n  def End(self):\n    \"\"\"Stop analyzing function body.\"\"\"\n    self.in_a_function = False\n\n\nclass _IncludeError(Exception):\n  \"\"\"Indicates a problem with the include order in a file.\"\"\"\n  pass\n\n\nclass FileInfo:\n  \"\"\"Provides utility functions for filenames.\n\n  FileInfo provides easy access to the components of a file's path\n  relative to the project root.\n  \"\"\"\n\n  def __init__(self, filename):\n    self._filename = filename\n\n  def FullName(self):\n    \"\"\"Make Windows paths like Unix.\"\"\"\n    return os.path.abspath(self._filename).replace('\\\\', '/')\n\n  def RepositoryName(self):\n    \"\"\"FullName after removing the local path to the repository.\n\n    If we have a real absolute path name here we can try to do something smart:\n    detecting the root of the checkout and truncating /path/to/checkout from\n    the name so that we get header guards that don't include things like\n    \"C:\\Documents and Settings\\...\" or \"/home/username/...\" in them and thus\n    people on different computers who have checked the source out to different\n    locations won't see bogus errors.\n    \"\"\"\n    fullname = self.FullName()\n\n    if os.path.exists(fullname):\n      project_dir = os.path.dirname(fullname)\n\n      if os.path.exists(os.path.join(project_dir, \".svn\")):\n        # If there's a .svn file in the current directory, we recursively look\n        # up the directory tree for the top of the SVN checkout\n        root_dir = project_dir\n    
    one_up_dir = os.path.dirname(root_dir)\n        while os.path.exists(os.path.join(one_up_dir, \".svn\")):\n          root_dir = os.path.dirname(root_dir)\n          one_up_dir = os.path.dirname(one_up_dir)\n\n        prefix = os.path.commonprefix([root_dir, project_dir])\n        return fullname[len(prefix) + 1:]\n\n      # Not SVN <= 1.6? Try to find a git, hg, or svn top level directory by\n      # searching up from the current path.\n      root_dir = os.path.dirname(fullname)\n      while (root_dir != os.path.dirname(root_dir) and\n             not os.path.exists(os.path.join(root_dir, \".git\")) and\n             not os.path.exists(os.path.join(root_dir, \".hg\")) and\n             not os.path.exists(os.path.join(root_dir, \".svn\"))):\n        root_dir = os.path.dirname(root_dir)\n\n      if (os.path.exists(os.path.join(root_dir, \".git\")) or\n          os.path.exists(os.path.join(root_dir, \".hg\")) or\n          os.path.exists(os.path.join(root_dir, \".svn\"))):\n        prefix = os.path.commonprefix([root_dir, project_dir])\n        return fullname[len(prefix) + 1:]\n\n    # Don't know what to do; header guard warnings may be wrong...\n    return fullname\n\n  def Split(self):\n    \"\"\"Splits the file into the directory, basename, and extension.\n\n    For 'chrome/browser/browser.cc', Split() would\n    return ('chrome/browser', 'browser', '.cc')\n\n    Returns:\n      A tuple of (directory, basename, extension).\n    \"\"\"\n\n    googlename = self.RepositoryName()\n    project, rest = os.path.split(googlename)\n    return (project,) + os.path.splitext(rest)\n\n  def BaseName(self):\n    \"\"\"File base name - text after the final slash, before the final period.\"\"\"\n    return self.Split()[1]\n\n  def Extension(self):\n    \"\"\"File extension - text following the final period.\"\"\"\n    return self.Split()[2]\n\n  def NoExtension(self):\n    \"\"\"File has no source file extension.\"\"\"\n    return '/'.join(self.Split()[0:2])\n\n  def 
IsSource(self):\n    \"\"\"File has a source file extension.\"\"\"\n    return self.Extension()[1:] in ('c', 'cc', 'cpp', 'cxx')\n\n\ndef _ShouldPrintError(category, confidence, linenum):\n  \"\"\"If confidence >= verbose, category passes filter and is not suppressed.\"\"\"\n\n  # There are three ways we might decide not to print an error message:\n  # a \"NOLINT(category)\" comment appears in the source,\n  # the verbosity level isn't high enough, or the filters filter it out.\n  if IsErrorSuppressedByNolint(category, linenum):\n    return False\n  if confidence < _cpplint_state.verbose_level:\n    return False\n\n  is_filtered = False\n  for one_filter in _Filters():\n    if one_filter.startswith('-'):\n      if category.startswith(one_filter[1:]):\n        is_filtered = True\n    elif one_filter.startswith('+'):\n      if category.startswith(one_filter[1:]):\n        is_filtered = False\n    else:\n      assert False  # should have been checked for in SetFilter.\n  if is_filtered:\n    return False\n\n  return True\n\n\ndef Error(filename, linenum, category, confidence, message):\n  \"\"\"Logs the fact we've found a lint error.\n\n  We log where the error was found, and also our confidence in the error,\n  that is, how certain we are this is a legitimate style regression, and\n  not a misidentification or a use that's sometimes justified.\n\n  False positives can be suppressed by the use of\n  \"cpplint(category)\"  comments on the offending line.  These are\n  parsed into _error_suppressions.\n\n  Args:\n    filename: The name of the file containing the error.\n    linenum: The number of the line containing the error.\n    category: A string used to describe the \"category\" this bug\n      falls under: \"whitespace\", say, or \"runtime\".  
Categories\n      may have a hierarchy separated by slashes: \"whitespace/indent\".\n    confidence: A number from 1-5 representing a confidence score for\n      the error, with 5 meaning that we are certain of the problem,\n      and 1 meaning that it could be a legitimate construct.\n    message: The error message.\n  \"\"\"\n  if _ShouldPrintError(category, confidence, linenum):\n    _cpplint_state.IncrementErrorCount(category)\n    if _cpplint_state.output_format == 'vs7':\n      sys.stderr.write('%s(%s):  %s  [%s] [%d]\\n' % (\n          filename, linenum, message, category, confidence))\n    elif _cpplint_state.output_format == 'eclipse':\n      sys.stderr.write('%s:%s: warning: %s  [%s] [%d]\\n' % (\n          filename, linenum, message, category, confidence))\n    else:\n      sys.stderr.write('%s:%s:  %s  [%s] [%d]\\n' % (\n          filename, linenum, message, category, confidence))\n\n\n# Matches standard C++ escape sequences per 2.13.2.3 of the C++ standard.\n_RE_PATTERN_CLEANSE_LINE_ESCAPES = re.compile(\n    r'\\\\([abfnrtv?\"\\\\\\']|\\d+|x[0-9a-fA-F]+)')\n# Matches strings.  Escape codes should already be removed by ESCAPES.\n_RE_PATTERN_CLEANSE_LINE_DOUBLE_QUOTES = re.compile(r'\"[^\"]*\"')\n# Matches characters.  Escape codes should already be removed by ESCAPES.\n_RE_PATTERN_CLEANSE_LINE_SINGLE_QUOTES = re.compile(r\"'.'\")\n# Matches multi-line C++ comments.\n# This RE is a little bit more complicated than one might expect, because we\n# have to take care of space removal so that we can handle comments inside\n# statements better.\n# The current rule is: We only clear spaces from both sides when we're at the\n# end of the line. 
Otherwise, we try to remove spaces from the right side,\n# if this doesn't work we try the left side, but only if there's a\n# non-word character on the right.\n_RE_PATTERN_CLEANSE_LINE_C_COMMENTS = re.compile(\n    r\"\"\"(\\s*/\\*.*\\*/\\s*$|\n            /\\*.*\\*/\\s+|\n         \\s+/\\*.*\\*/(?=\\W)|\n            /\\*.*\\*/)\"\"\", re.VERBOSE)\n\n\ndef IsCppString(line):\n  \"\"\"Checks whether line ends inside a string constant.\n\n  This function does not consider single-line nor multi-line comments.\n\n  Args:\n    line: A partial line of code, from character 0 up to some position n.\n\n  Returns:\n    True if the next character appended to 'line' would be inside a\n    string constant.\n  \"\"\"\n\n  line = line.replace(r'\\\\', 'XX')  # after this, \\\\\" does not match to \\\"\n  return ((line.count('\"') - line.count(r'\\\"') - line.count(\"'\\\"'\")) & 1) == 1\n\n\ndef CleanseRawStrings(raw_lines):\n  \"\"\"Removes C++11 raw strings from lines.\n\n    Before:\n      static const char kData[] = R\"(\n          multi-line string\n          )\";\n\n    After:\n      static const char kData[] = \"\"\n          (replaced by blank line)\n          \"\";\n\n  Args:\n    raw_lines: list of raw lines.\n\n  Returns:\n    list of lines with C++11 raw strings replaced by empty strings.\n  \"\"\"\n\n  delimiter = None\n  lines_without_raw_strings = []\n  for line in raw_lines:\n    if delimiter:\n      # Inside a raw string, look for the end\n      end = line.find(delimiter)\n      if end >= 0:\n        # Found the end of the string, match leading space for this\n        # line and resume copying the original lines, and also insert\n        # a \"\" on the last line.\n        leading_space = Match(r'^(\\s*)\\S', line)\n        line = leading_space.group(1) + '\"\"' + line[end + len(delimiter):]\n        delimiter = None\n      else:\n        # Haven't found the end yet, append a blank line.\n        line = ''\n\n    else:\n      # Look for beginning of a raw string.\n      # 
See 2.14.15 [lex.string] for syntax.\n      matched = Match(r'^(.*)\\b(?:R|u8R|uR|UR|LR)\"([^\\s\\\\()]*)\\((.*)$', line)\n      if matched:\n        delimiter = ')' + matched.group(2) + '\"'\n\n        end = matched.group(3).find(delimiter)\n        if end >= 0:\n          # Raw string ended on same line\n          line = (matched.group(1) + '\"\"' +\n                  matched.group(3)[end + len(delimiter):])\n          delimiter = None\n        else:\n          # Start of a multi-line raw string\n          line = matched.group(1) + '\"\"'\n\n    lines_without_raw_strings.append(line)\n\n  # TODO(unknown): if delimiter is not None here, we might want to\n  # emit a warning for unterminated string.\n  return lines_without_raw_strings\n\n\ndef FindNextMultiLineCommentStart(lines, lineix):\n  \"\"\"Find the beginning marker for a multiline comment.\"\"\"\n  while lineix < len(lines):\n    if lines[lineix].strip().startswith('/*'):\n      # Only return this marker if the comment goes beyond this line\n      if lines[lineix].strip().find('*/', 2) < 0:\n        return lineix\n    lineix += 1\n  return len(lines)\n\n\ndef FindNextMultiLineCommentEnd(lines, lineix):\n  \"\"\"We are inside a comment, find the end marker.\"\"\"\n  while lineix < len(lines):\n    if lines[lineix].strip().endswith('*/'):\n      return lineix\n    lineix += 1\n  return len(lines)\n\n\ndef RemoveMultiLineCommentsFromRange(lines, begin, end):\n  \"\"\"Clears a range of lines for multi-line comments.\"\"\"\n  # Having // dummy comments makes the lines non-empty, so we will not get\n  # unnecessary blank line warnings later in the code.\n  for i in range(begin, end):\n    lines[i] = '// dummy'\n\n\ndef RemoveMultiLineComments(filename, lines, error):\n  \"\"\"Removes multiline (c-style) comments from lines.\"\"\"\n  lineix = 0\n  while lineix < len(lines):\n    lineix_begin = FindNextMultiLineCommentStart(lines, lineix)\n    if lineix_begin >= len(lines):\n      return\n    lineix_end = 
FindNextMultiLineCommentEnd(lines, lineix_begin)\n    if lineix_end >= len(lines):\n      error(filename, lineix_begin + 1, 'readability/multiline_comment', 5,\n            'Could not find end of multi-line comment')\n      return\n    RemoveMultiLineCommentsFromRange(lines, lineix_begin, lineix_end + 1)\n    lineix = lineix_end + 1\n\n\ndef CleanseComments(line):\n  \"\"\"Removes //-comments and single-line C-style /* */ comments.\n\n  Args:\n    line: A line of C++ source.\n\n  Returns:\n    The line with single-line comments removed.\n  \"\"\"\n  commentpos = line.find('//')\n  if commentpos != -1 and not IsCppString(line[:commentpos]):\n    line = line[:commentpos].rstrip()\n  # get rid of /* ... */\n  return _RE_PATTERN_CLEANSE_LINE_C_COMMENTS.sub('', line)\n\n\nclass CleansedLines(object):\n  \"\"\"Holds 3 copies of all lines with different preprocessing applied to them.\n\n  1) elided member contains lines without strings and comments,\n  2) lines member contains lines without comments, and\n  3) raw_lines member contains all the lines without processing.\n  All these three members are of <type 'list'>, and of the same length.\n  \"\"\"\n\n  def __init__(self, lines):\n    self.elided = []\n    self.lines = []\n    self.raw_lines = lines\n    self.num_lines = len(lines)\n    self.lines_without_raw_strings = CleanseRawStrings(lines)\n    for linenum in range(len(self.lines_without_raw_strings)):\n      self.lines.append(CleanseComments(\n          self.lines_without_raw_strings[linenum]))\n      elided = self._CollapseStrings(self.lines_without_raw_strings[linenum])\n      self.elided.append(CleanseComments(elided))\n\n  def NumLines(self):\n    \"\"\"Returns the number of lines represented.\"\"\"\n    return self.num_lines\n\n  @staticmethod\n  def _CollapseStrings(elided):\n    \"\"\"Collapses strings and chars on a line to simple \"\" or '' blocks.\n\n    We nix strings first so we're not fooled by text like '\"http://\"'\n\n    Args:\n      elided: The 
line being processed.\n\n    Returns:\n      The line with collapsed strings.\n    \"\"\"\n    if not _RE_PATTERN_INCLUDE.match(elided):\n      # Remove escaped characters first to make quote/single quote collapsing\n      # basic.  Things that look like escaped characters shouldn't occur\n      # outside of strings and chars.\n      elided = _RE_PATTERN_CLEANSE_LINE_ESCAPES.sub('', elided)\n      elided = _RE_PATTERN_CLEANSE_LINE_SINGLE_QUOTES.sub(\"''\", elided)\n      elided = _RE_PATTERN_CLEANSE_LINE_DOUBLE_QUOTES.sub('\"\"', elided)\n    return elided\n\n\ndef FindEndOfExpressionInLine(line, startpos, depth, startchar, endchar):\n  \"\"\"Find the position just after the matching endchar.\n\n  Args:\n    line: a CleansedLines line.\n    startpos: start searching at this position.\n    depth: nesting level at startpos.\n    startchar: expression opening character.\n    endchar: expression closing character.\n\n  Returns:\n    On finding matching endchar: (index just after matching endchar, 0)\n    Otherwise: (-1, new depth at end of this line)\n  \"\"\"\n  for i in xrange(startpos, len(line)):\n    if line[i] == startchar:\n      depth += 1\n    elif line[i] == endchar:\n      depth -= 1\n      if depth == 0:\n        return (i + 1, 0)\n  return (-1, depth)\n\n\ndef CloseExpression(clean_lines, linenum, pos):\n  \"\"\"If input points to ( or { or [ or <, finds the position that closes it.\n\n  If lines[linenum][pos] points to a '(' or '{' or '[' or '<', finds the\n  linenum/pos that correspond to the closing of the expression.\n\n  Args:\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    pos: A position on the line.\n\n  Returns:\n    A tuple (line, linenum, pos) pointer *past* the closing brace, or\n    (line, len(lines), -1) if we never find a close.  
Note we ignore\n    strings and comments when matching; and the line we return is the\n    'cleansed' line at linenum.\n  \"\"\"\n\n  line = clean_lines.elided[linenum]\n  startchar = line[pos]\n  if startchar not in '({[<':\n    return (line, clean_lines.NumLines(), -1)\n  if startchar == '(': endchar = ')'\n  if startchar == '[': endchar = ']'\n  if startchar == '{': endchar = '}'\n  if startchar == '<': endchar = '>'\n\n  # Check first line\n  (end_pos, num_open) = FindEndOfExpressionInLine(\n      line, pos, 0, startchar, endchar)\n  if end_pos > -1:\n    return (line, linenum, end_pos)\n\n  # Continue scanning forward\n  while linenum < clean_lines.NumLines() - 1:\n    linenum += 1\n    line = clean_lines.elided[linenum]\n    (end_pos, num_open) = FindEndOfExpressionInLine(\n        line, 0, num_open, startchar, endchar)\n    if end_pos > -1:\n      return (line, linenum, end_pos)\n\n  # Did not find endchar before end of file, give up\n  return (line, clean_lines.NumLines(), -1)\n\n\ndef FindStartOfExpressionInLine(line, endpos, depth, startchar, endchar):\n  \"\"\"Find position at the matching startchar.\n\n  This is almost the reverse of FindEndOfExpressionInLine, but note\n  that the input position and returned position differs by 1.\n\n  Args:\n    line: a CleansedLines line.\n    endpos: start searching at this position.\n    depth: nesting level at endpos.\n    startchar: expression opening character.\n    endchar: expression closing character.\n\n  Returns:\n    On finding matching startchar: (index at matching startchar, 0)\n    Otherwise: (-1, new depth at beginning of this line)\n  \"\"\"\n  for i in xrange(endpos, -1, -1):\n    if line[i] == endchar:\n      depth += 1\n    elif line[i] == startchar:\n      depth -= 1\n      if depth == 0:\n        return (i, 0)\n  return (-1, depth)\n\n\ndef ReverseCloseExpression(clean_lines, linenum, pos):\n  \"\"\"If input points to ) or } or ] or >, finds the position that opens it.\n\n  If lines[linenum][pos] 
points to a ')' or '}' or ']' or '>', finds the\n  linenum/pos that correspond to the opening of the expression.\n\n  Args:\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    pos: A position on the line.\n\n  Returns:\n    A tuple (line, linenum, pos) pointer *at* the opening brace, or\n    (line, 0, -1) if we never find the matching opening brace.  Note\n    we ignore strings and comments when matching; and the line we\n    return is the 'cleansed' line at linenum.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  endchar = line[pos]\n  if endchar not in ')}]>':\n    return (line, 0, -1)\n  if endchar == ')': startchar = '('\n  if endchar == ']': startchar = '['\n  if endchar == '}': startchar = '{'\n  if endchar == '>': startchar = '<'\n\n  # Check last line\n  (start_pos, num_open) = FindStartOfExpressionInLine(\n      line, pos, 0, startchar, endchar)\n  if start_pos > -1:\n    return (line, linenum, start_pos)\n\n  # Continue scanning backward\n  while linenum > 0:\n    linenum -= 1\n    line = clean_lines.elided[linenum]\n    (start_pos, num_open) = FindStartOfExpressionInLine(\n        line, len(line) - 1, num_open, startchar, endchar)\n    if start_pos > -1:\n      return (line, linenum, start_pos)\n\n  # Did not find startchar before beginning of file, give up\n  return (line, 0, -1)\n\n\ndef CheckForCopyright(filename, lines, error):\n  \"\"\"Logs an error if a Copyright message appears at the top of the file.\"\"\"\n\n  # We'll check up to line 10. Don't forget there's a\n  # dummy line at the front.\n  for line in xrange(1, min(len(lines), 11)):\n    # Note: a compiled pattern's search() takes a start position as its\n    # second argument, not flags; case-insensitivity must be compiled into\n    # _RE_COPYRIGHT itself.\n    if _RE_COPYRIGHT.search(lines[line]):\n      error(filename, 0, 'legal/copyright', 5,\n            'Copyright message found.  
'\n            'You should not include a copyright line.')\n\n\ndef GetHeaderGuardCPPVariable(filename):\n  \"\"\"Returns the CPP variable that should be used as a header guard.\n\n  Args:\n    filename: The name of a C++ header file.\n\n  Returns:\n    The CPP variable that should be used as a header guard in the\n    named file.\n\n  \"\"\"\n\n  # Restores original filename in case that cpplint is invoked from Emacs's\n  # flymake.\n  filename = re.sub(r'_flymake\\.h$', '.h', filename)\n  filename = re.sub(r'/\\.flymake/([^/]*)$', r'/\\1', filename)\n\n  fileinfo = FileInfo(filename)\n  file_path_from_root = fileinfo.RepositoryName()\n  if _root:\n    file_path_from_root = re.sub('^' + _root + os.sep, '', file_path_from_root)\n  return re.sub(r'[-./\\s]', '_', file_path_from_root).upper() + '_'\n\n\ndef CheckForHeaderGuard(filename, lines, error):\n  \"\"\"Checks that the file contains a header guard.\n\n  Logs an error if no #ifndef header guard is present.  For other\n  headers, checks that the full pathname is used.\n\n  Args:\n    filename: The name of the C++ header file.\n    lines: An array of strings, each representing a line of the file.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  cppvar = GetHeaderGuardCPPVariable(filename)\n\n  ifndef = None\n  ifndef_linenum = 0\n  define = None\n  endif = None\n  endif_linenum = 0\n  for linenum, line in enumerate(lines):\n    linesplit = line.split()\n    if len(linesplit) >= 2:\n      # find the first occurrence of #ifndef and #define, save arg\n      if not ifndef and linesplit[0] == '#ifndef':\n        # set ifndef to the header guard presented on the #ifndef line.\n        ifndef = linesplit[1]\n        ifndef_linenum = linenum\n      if not define and linesplit[0] == '#define':\n        define = linesplit[1]\n    # find the last occurrence of #endif, save entire line\n    if line.startswith('#endif'):\n      endif = line\n      endif_linenum = linenum\n\n  if not ifndef:\n    
error(filename, 0, 'build/header_guard', 5,\n          'No #ifndef header guard found, suggested CPP variable is: %s' %\n          cppvar)\n    return\n\n  if not define:\n    error(filename, 0, 'build/header_guard', 5,\n          'No #define header guard found, suggested CPP variable is: %s' %\n          cppvar)\n    return\n\n  # The guard should be PATH_FILE_H_, but we also allow PATH_FILE_H__\n  # for backward compatibility.\n  if ifndef != cppvar:\n    error_level = 0\n    if ifndef != cppvar + '_':\n      error_level = 5\n\n    ParseNolintSuppressions(filename, lines[ifndef_linenum], ifndef_linenum,\n                            error)\n    error(filename, ifndef_linenum, 'build/header_guard', error_level,\n          '#ifndef header guard has wrong style, please use: %s' % cppvar)\n\n  if define != ifndef:\n    error(filename, 0, 'build/header_guard', 5,\n          '#ifndef and #define don\\'t match, suggested CPP variable is: %s' %\n          cppvar)\n    return\n\n  if endif != ('#endif  // %s' % cppvar):\n    error_level = 0\n    if endif != ('#endif  // %s' % (cppvar + '_')):\n      error_level = 5\n\n    ParseNolintSuppressions(filename, lines[endif_linenum], endif_linenum,\n                            error)\n    error(filename, endif_linenum, 'build/header_guard', error_level,\n          '#endif line should be \"#endif  // %s\"' % cppvar)\n\n\ndef CheckForBadCharacters(filename, lines, error):\n  \"\"\"Logs an error for each line containing bad characters.\n\n  Two kinds of bad characters:\n\n  1. Unicode replacement characters: These indicate that either the file\n  contained invalid UTF-8 (likely) or Unicode replacement characters (which\n  it shouldn't).  Note that it's possible for this to throw off line\n  numbering if the invalid UTF-8 occurred adjacent to a newline.\n\n  2. NUL bytes.  
These are problematic for some tools.\n\n  Args:\n    filename: The name of the current file.\n    lines: An array of strings, each representing a line of the file.\n    error: The function to call with any errors found.\n  \"\"\"\n  for linenum, line in enumerate(lines):\n    if u'\\ufffd' in line:\n      error(filename, linenum, 'readability/utf8', 5,\n            'Line contains invalid UTF-8 (or Unicode replacement character).')\n    if '\\0' in line:\n      error(filename, linenum, 'readability/nul', 5, 'Line contains NUL byte.')\n\n\ndef CheckForNewlineAtEOF(filename, lines, error):\n  \"\"\"Logs an error if there is no newline char at the end of the file.\n\n  Args:\n    filename: The name of the current file.\n    lines: An array of strings, each representing a line of the file.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # The array lines() was created by adding two newlines to the\n  # original file (go figure), then splitting on \\n.\n  # To verify that the file ends in \\n, we just have to make sure the\n  # last-but-two element of lines() exists and is empty.\n  if len(lines) < 3 or lines[-2]:\n    error(filename, len(lines) - 2, 'whitespace/ending_newline', 5,\n          'Could not find a newline character at the end of the file.')\n\n\ndef CheckForMultilineCommentsAndStrings(filename, clean_lines, linenum, error):\n  \"\"\"Logs an error if we see /* ... */ or \"...\" that extend past one line.\n\n  /* ... */ comments are legit inside macros, for one line.\n  Otherwise, we prefer // comments, so it's ok to warn about the\n  other.  Likewise, it's ok for strings to extend across multiple\n  lines, as long as a line continuation character (backslash)\n  terminates each line. Although not currently prohibited by the C++\n  style guide, it's ugly and unnecessary. 
We don't do well with either\n  in this lint program, so we warn about both.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n\n  # Remove all \\\\ (escaped backslashes) from the line. They are OK, and the\n  # second (escaped) slash may trigger later \\\" detection erroneously.\n  line = line.replace('\\\\\\\\', '')\n\n  if line.count('/*') > line.count('*/'):\n    error(filename, linenum, 'readability/multiline_comment', 5,\n          'Complex multi-line /*...*/-style comment found. '\n          'Lint may give bogus warnings.  '\n          'Consider replacing these with //-style comments, '\n          'with #if 0...#endif, '\n          'or with more clearly structured multi-line comments.')\n\n  if (line.count('\"') - line.count('\\\\\"')) % 2:\n    error(filename, linenum, 'readability/multiline_string', 5,\n          'Multi-line string (\"...\") found.  This lint script doesn\\'t '\n          'do well with such strings, and may give bogus warnings.  
'\n          'Use C++11 raw strings or concatenation instead.')\n\n\ncaffe_alt_function_list = (\n    ('memset', ['caffe_set', 'caffe_memset']),\n    ('cudaMemset', ['caffe_gpu_set', 'caffe_gpu_memset']),\n    ('memcpy', ['caffe_copy', 'caffe_memcpy']),\n    ('cudaMemcpy', ['caffe_copy', 'caffe_gpu_memcpy']),\n    )\n\n\ndef CheckCaffeAlternatives(filename, clean_lines, linenum, error):\n  \"\"\"Checks for C(++) functions for which a Caffe substitute should be used.\n\n  For certain native C functions (memset, memcpy), there is a Caffe alternative\n  which should be used instead.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  for function, alts in caffe_alt_function_list:\n    ix = line.find(function + '(')\n    if ix >= 0 and (ix == 0 or (not line[ix - 1].isalnum() and\n                                line[ix - 1] not in ('_', '.', '>'))):\n      disp_alts = ['%s(...)' % alt for alt in alts]\n      error(filename, linenum, 'caffe/alt_fn', 2,\n            'Use Caffe function %s instead of %s(...).' 
%\n                (' or '.join(disp_alts), function))\n\n\ndef CheckCaffeDataLayerSetUp(filename, clean_lines, linenum, error):\n  \"\"\"Except for the base classes, a Caffe DataLayer should define DataLayerSetUp\n  instead of LayerSetUp.\n\n  The base DataLayers define common SetUp steps; the subclasses should\n  not override them.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  ix = line.find('DataLayer<Dtype>::LayerSetUp')\n  if ix >= 0 and (\n       line.find('void DataLayer<Dtype>::LayerSetUp') != -1 or\n       line.find('void ImageDataLayer<Dtype>::LayerSetUp') != -1 or\n       line.find('void MemoryDataLayer<Dtype>::LayerSetUp') != -1 or\n       line.find('void WindowDataLayer<Dtype>::LayerSetUp') != -1):\n      error(filename, linenum, 'caffe/data_layer_setup', 2,\n            'Except for the base classes, a Caffe DataLayer should define'\n            + ' DataLayerSetUp instead of LayerSetUp. The base DataLayers'\n            + ' define common SetUp steps; the subclasses should'\n            + ' not override them.')\n  ix = line.find('DataLayer<Dtype>::DataLayerSetUp')\n  if ix >= 0 and (\n       line.find('void Base') == -1 and\n       line.find('void DataLayer<Dtype>::DataLayerSetUp') == -1 and\n       line.find('void ImageDataLayer<Dtype>::DataLayerSetUp') == -1 and\n       line.find('void MemoryDataLayer<Dtype>::DataLayerSetUp') == -1 and\n       line.find('void WindowDataLayer<Dtype>::DataLayerSetUp') == -1):\n      error(filename, linenum, 'caffe/data_layer_setup', 2,\n            'Except for the base classes, a Caffe DataLayer should define'\n            + ' DataLayerSetUp instead of LayerSetUp. 
The base DataLayers'\n            + ' define common SetUp steps, the subclasses should'\n            + ' not override them.')\n\n\nc_random_function_list = (\n    'rand(',\n    'rand_r(',\n    'random(',\n    )\n\ndef CheckCaffeRandom(filename, clean_lines, linenum, error):\n  \"\"\"Checks for calls to C random functions (rand, rand_r, random, ...).\n\n  Caffe code should (almost) always use the caffe_rng_* functions rather\n  than these, as the internal state of these C functions is independent of the\n  native Caffe RNG system which should produce deterministic results for a\n  fixed Caffe seed set using Caffe::set_random_seed(...).\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  for function in c_random_function_list:\n    ix = line.find(function)\n    # Comparisons made explicit for clarity -- pylint: disable=g-explicit-bool-comparison\n    if ix >= 0 and (ix == 0 or (not line[ix - 1].isalnum() and\n                                line[ix - 1] not in ('_', '.', '>'))):\n      error(filename, linenum, 'caffe/random_fn', 2,\n            'Use caffe_rng_rand() (or other caffe_rng_* function) instead of '\n            + function +\n            ') to ensure results are deterministic for a fixed Caffe seed.')\n\n\nthreading_list = (\n    ('asctime(', 'asctime_r('),\n    ('ctime(', 'ctime_r('),\n    ('getgrgid(', 'getgrgid_r('),\n    ('getgrnam(', 'getgrnam_r('),\n    ('getlogin(', 'getlogin_r('),\n    ('getpwnam(', 'getpwnam_r('),\n    ('getpwuid(', 'getpwuid_r('),\n    ('gmtime(', 'gmtime_r('),\n    ('localtime(', 'localtime_r('),\n    ('strtok(', 'strtok_r('),\n    ('ttyname(', 'ttyname_r('),\n    )\n\n\ndef CheckPosixThreading(filename, clean_lines, linenum, error):\n  \"\"\"Checks for calls to thread-unsafe functions.\n\n  Much code has been 
written without consideration of\n  multi-threading. In addition, many engineers learned POSIX before its\n  threading extensions were added and still rely on that old experience.\n  These tests guide the engineers to use thread-safe functions (when using\n  POSIX directly).\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  for single_thread_function, multithread_safe_function in threading_list:\n    ix = line.find(single_thread_function)\n    # Comparisons made explicit for clarity -- pylint: disable=g-explicit-bool-comparison\n    if ix >= 0 and (ix == 0 or (not line[ix - 1].isalnum() and\n                                line[ix - 1] not in ('_', '.', '>'))):\n      error(filename, linenum, 'runtime/threadsafe_fn', 2,\n            'Consider using ' + multithread_safe_function +\n            '...) instead of ' + single_thread_function +\n            '...) for improved thread safety.')\n\n\ndef CheckVlogArguments(filename, clean_lines, linenum, error):\n  \"\"\"Checks that VLOG() is only used for defining a logging level.\n\n  For example, VLOG(2) is correct. VLOG(INFO), VLOG(WARNING), VLOG(ERROR), and\n  VLOG(FATAL) are not.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  if Search(r'\\bVLOG\\((INFO|ERROR|WARNING|DFATAL|FATAL)\\)', line):\n    error(filename, linenum, 'runtime/vlog', 5,\n          'VLOG() should be used with a numeric verbosity level.  
'\n          'Use LOG() if you want symbolic severity levels.')\n\n\n# Matches invalid increment: *count++, which moves the pointer instead of\n# incrementing the value.\n_RE_PATTERN_INVALID_INCREMENT = re.compile(\n    r'^\\s*\\*\\w+(\\+\\+|--);')\n\n\ndef CheckInvalidIncrement(filename, clean_lines, linenum, error):\n  \"\"\"Checks for invalid increment *count++.\n\n  For example, the following function:\n  void increment_counter(int* count) {\n    *count++;\n  }\n  is invalid because it effectively does count++, moving the pointer, and should\n  be replaced with ++*count, (*count)++ or *count += 1.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  if _RE_PATTERN_INVALID_INCREMENT.match(line):\n    error(filename, linenum, 'runtime/invalid_increment', 5,\n          'Changing pointer instead of value (or unused value of operator*).')\n\n\nclass _BlockInfo(object):\n  \"\"\"Stores information about a generic block of code.\"\"\"\n\n  def __init__(self, seen_open_brace):\n    self.seen_open_brace = seen_open_brace\n    self.open_parentheses = 0\n    self.inline_asm = _NO_ASM\n\n  def CheckBegin(self, filename, clean_lines, linenum, error):\n    \"\"\"Run checks that apply to text up to the opening brace.\n\n    This is mostly for checking the text after the class identifier\n    and the \"{\", usually where the base class is specified.  
For other\n    blocks, there isn't much to check, so we always pass.\n\n    Args:\n      filename: The name of the current file.\n      clean_lines: A CleansedLines instance containing the file.\n      linenum: The number of the line to check.\n      error: The function to call with any errors found.\n    \"\"\"\n    pass\n\n  def CheckEnd(self, filename, clean_lines, linenum, error):\n    \"\"\"Run checks that apply to text after the closing brace.\n\n    This is mostly used for checking end of namespace comments.\n\n    Args:\n      filename: The name of the current file.\n      clean_lines: A CleansedLines instance containing the file.\n      linenum: The number of the line to check.\n      error: The function to call with any errors found.\n    \"\"\"\n    pass\n\n\nclass _ClassInfo(_BlockInfo):\n  \"\"\"Stores information about a class.\"\"\"\n\n  def __init__(self, name, class_or_struct, clean_lines, linenum):\n    _BlockInfo.__init__(self, False)\n    self.name = name\n    self.starting_linenum = linenum\n    self.is_derived = False\n    if class_or_struct == 'struct':\n      self.access = 'public'\n      self.is_struct = True\n    else:\n      self.access = 'private'\n      self.is_struct = False\n\n    # Remember initial indentation level for this class.  Using raw_lines here\n    # instead of elided to account for leading comments.\n    initial_indent = Match(r'^( *)\\S', clean_lines.raw_lines[linenum])\n    if initial_indent:\n      self.class_indent = len(initial_indent.group(1))\n    else:\n      self.class_indent = 0\n\n    # Try to find the end of the class.  
This will be confused by things like:\n    #   class A {\n    #   } *x = { ...\n    #\n    # But it's still good enough for CheckSectionSpacing.\n    self.last_line = 0\n    depth = 0\n    for i in range(linenum, clean_lines.NumLines()):\n      line = clean_lines.elided[i]\n      depth += line.count('{') - line.count('}')\n      if not depth:\n        self.last_line = i\n        break\n\n  def CheckBegin(self, filename, clean_lines, linenum, error):\n    # Look for a bare ':'\n    if Search('(^|[^:]):($|[^:])', clean_lines.elided[linenum]):\n      self.is_derived = True\n\n  def CheckEnd(self, filename, clean_lines, linenum, error):\n    # Check that the closing brace is aligned with the beginning of the class.\n    # Only do this if the closing brace is indented by only whitespace.\n    # This means we will not check single-line class definitions.\n    indent = Match(r'^( *)\\}', clean_lines.elided[linenum])\n    if indent and len(indent.group(1)) != self.class_indent:\n      if self.is_struct:\n        parent = 'struct ' + self.name\n      else:\n        parent = 'class ' + self.name\n      error(filename, linenum, 'whitespace/indent', 3,\n            'Closing brace should be aligned with beginning of %s' % parent)\n\n\nclass _NamespaceInfo(_BlockInfo):\n  \"\"\"Stores information about a namespace.\"\"\"\n\n  def __init__(self, name, linenum):\n    _BlockInfo.__init__(self, False)\n    self.name = name or ''\n    self.starting_linenum = linenum\n\n  def CheckEnd(self, filename, clean_lines, linenum, error):\n    \"\"\"Check end of namespace comments.\"\"\"\n    line = clean_lines.raw_lines[linenum]\n\n    # Check how many lines are enclosed in this namespace.  Don't issue\n    # a warning for missing namespace comments if there aren't enough\n    # lines.  
However, do apply checks if there is already an end of\n    # namespace comment and it's incorrect.\n    #\n    # TODO(unknown): We always want to check end of namespace comments\n    # if a namespace is large, but sometimes we also want to apply the\n    # check if a short namespace contained nontrivial things (something\n    # other than forward declarations).  There is currently no logic on\n    # deciding what these nontrivial things are, so this check is\n    # triggered by namespace size only, which works most of the time.\n    if (linenum - self.starting_linenum < 10\n        and not Match(r'};*\\s*(//|/\\*).*\\bnamespace\\b', line)):\n      return\n\n    # Look for matching comment at end of namespace.\n    #\n    # Note that we accept C style \"/* */\" comments for terminating\n    # namespaces, so that code that terminate namespaces inside\n    # preprocessor macros can be cpplint clean.\n    #\n    # We also accept stuff like \"// end of namespace <name>.\" with the\n    # period at the end.\n    #\n    # Besides these, we don't accept anything else, otherwise we might\n    # get false negatives when existing comment is a substring of the\n    # expected namespace.\n    if self.name:\n      # Named namespace\n      if not Match((r'};*\\s*(//|/\\*).*\\bnamespace\\s+' + re.escape(self.name) +\n                    r'[\\*/\\.\\\\\\s]*$'),\n                   line):\n        error(filename, linenum, 'readability/namespace', 5,\n              'Namespace should be terminated with \"// namespace %s\"' %\n              self.name)\n    else:\n      # Anonymous namespace\n      if not Match(r'};*\\s*(//|/\\*).*\\bnamespace[\\*/\\.\\\\\\s]*$', line):\n        error(filename, linenum, 'readability/namespace', 5,\n              'Namespace should be terminated with \"// namespace\"')\n\n\nclass _PreprocessorInfo(object):\n  \"\"\"Stores checkpoints of nesting stacks when #if/#else is seen.\"\"\"\n\n  def __init__(self, stack_before_if):\n    # The entire nesting stack 
before #if\n    self.stack_before_if = stack_before_if\n\n    # The entire nesting stack up to #else\n    self.stack_before_else = []\n\n    # Whether we have already seen #else or #elif\n    self.seen_else = False\n\n\nclass _NestingState(object):\n  \"\"\"Holds states related to parsing braces.\"\"\"\n\n  def __init__(self):\n    # Stack for tracking all braces.  An object is pushed whenever we\n    # see a \"{\", and popped when we see a \"}\".  Only 3 types of\n    # objects are possible:\n    # - _ClassInfo: a class or struct.\n    # - _NamespaceInfo: a namespace.\n    # - _BlockInfo: some other type of block.\n    self.stack = []\n\n    # Stack of _PreprocessorInfo objects.\n    self.pp_stack = []\n\n  def SeenOpenBrace(self):\n    \"\"\"Check if we have seen the opening brace for the innermost block.\n\n    Returns:\n      True if we have seen the opening brace, False if the innermost\n      block is still expecting an opening brace.\n    \"\"\"\n    return (not self.stack) or self.stack[-1].seen_open_brace\n\n  def InNamespaceBody(self):\n    \"\"\"Check if we are currently one level inside a namespace body.\n\n    Returns:\n      True if top of the stack is a namespace block, False otherwise.\n    \"\"\"\n    return self.stack and isinstance(self.stack[-1], _NamespaceInfo)\n\n  def UpdatePreprocessor(self, line):\n    \"\"\"Update preprocessor stack.\n\n    We need to handle preprocessors due to classes like this:\n      #ifdef SWIG\n      struct ResultDetailsPageElementExtensionPoint {\n      #else\n      struct ResultDetailsPageElementExtensionPoint : public Extension {\n      #endif\n\n    We make the following assumptions (good enough for most files):\n    - Preprocessor condition evaluates to true from #if up to first\n      #else/#elif/#endif.\n\n    - Preprocessor condition evaluates to false from #else/#elif up\n      to #endif.  
We still perform lint checks on these lines, but\n      these do not affect nesting stack.\n\n    Args:\n      line: current line to check.\n    \"\"\"\n    if Match(r'^\\s*#\\s*(if|ifdef|ifndef)\\b', line):\n      # Beginning of #if block, save the nesting stack here.  The saved\n      # stack will allow us to restore the parsing state in the #else case.\n      self.pp_stack.append(_PreprocessorInfo(copy.deepcopy(self.stack)))\n    elif Match(r'^\\s*#\\s*(else|elif)\\b', line):\n      # Beginning of #else block\n      if self.pp_stack:\n        if not self.pp_stack[-1].seen_else:\n          # This is the first #else or #elif block.  Remember the\n          # whole nesting stack up to this point.  This is what we\n          # keep after the #endif.\n          self.pp_stack[-1].seen_else = True\n          self.pp_stack[-1].stack_before_else = copy.deepcopy(self.stack)\n\n        # Restore the stack to how it was before the #if\n        self.stack = copy.deepcopy(self.pp_stack[-1].stack_before_if)\n      else:\n        # TODO(unknown): unexpected #else, issue warning?\n        pass\n    elif Match(r'^\\s*#\\s*endif\\b', line):\n      # End of #if or #else blocks.\n      if self.pp_stack:\n        # If we saw an #else, we will need to restore the nesting\n        # stack to its former state before the #else, otherwise we\n        # will just continue from where we left off.\n        if self.pp_stack[-1].seen_else:\n          # Here we can just use a shallow copy since we are the last\n          # reference to it.\n          self.stack = self.pp_stack[-1].stack_before_else\n        # Drop the corresponding #if\n        self.pp_stack.pop()\n      else:\n        # TODO(unknown): unexpected #endif, issue warning?\n        pass\n\n  def Update(self, filename, clean_lines, linenum, error):\n    \"\"\"Update nesting state with current line.\n\n    Args:\n      filename: The name of the current file.\n      clean_lines: A CleansedLines instance containing the file.\n      
linenum: The number of the line to check.\n      error: The function to call with any errors found.\n    \"\"\"\n    line = clean_lines.elided[linenum]\n\n    # Update pp_stack first\n    self.UpdatePreprocessor(line)\n\n    # Count parentheses.  This is to avoid adding struct arguments to\n    # the nesting stack.\n    if self.stack:\n      inner_block = self.stack[-1]\n      depth_change = line.count('(') - line.count(')')\n      inner_block.open_parentheses += depth_change\n\n      # Also check if we are starting or ending an inline assembly block.\n      if inner_block.inline_asm in (_NO_ASM, _END_ASM):\n        if (depth_change != 0 and\n            inner_block.open_parentheses == 1 and\n            _MATCH_ASM.match(line)):\n          # Enter assembly block\n          inner_block.inline_asm = _INSIDE_ASM\n        else:\n          # Not entering assembly block.  If previous line was _END_ASM,\n          # we will now shift to _NO_ASM state.\n          inner_block.inline_asm = _NO_ASM\n      elif (inner_block.inline_asm == _INSIDE_ASM and\n            inner_block.open_parentheses == 0):\n        # Exit assembly block\n        inner_block.inline_asm = _END_ASM\n\n    # Consume namespace declaration at the beginning of the line.  Do\n    # this in a loop so that we catch same-line declarations like this:\n    #   namespace proto2 { namespace bridge { class MessageSet; } }\n    while True:\n      # Match start of namespace.  The \"\\b\\s*\" below catches namespace\n      # declarations even if it isn't followed by whitespace; this\n      # is so that we don't confuse our namespace checker.  
The\n      # missing spaces will be flagged by CheckSpacing.\n      namespace_decl_match = Match(r'^\\s*namespace\\b\\s*([:\\w]+)?(.*)$', line)\n      if not namespace_decl_match:\n        break\n\n      new_namespace = _NamespaceInfo(namespace_decl_match.group(1), linenum)\n      self.stack.append(new_namespace)\n\n      line = namespace_decl_match.group(2)\n      if line.find('{') != -1:\n        new_namespace.seen_open_brace = True\n        line = line[line.find('{') + 1:]\n\n    # Look for a class declaration in whatever is left of the line\n    # after parsing namespaces.  The regexp accounts for decorated classes\n    # such as in:\n    #   class LOCKABLE API Object {\n    #   };\n    #\n    # Templates with class arguments may confuse the parser, for example:\n    #   template <class T\n    #             class Comparator = less<T>,\n    #             class Vector = vector<T> >\n    #   class HeapQueue {\n    #\n    # Because this parser has no nesting state about templates, by the\n    # time it saw \"class Comparator\", it may think that it's a new class.\n    # Nested templates have a similar problem:\n    #   template <\n    #       typename ExportedType,\n    #       typename TupleType,\n    #       template <typename, typename> class ImplTemplate>\n    #\n    # To avoid these cases, we ignore classes that are followed by '=' or '>'\n    class_decl_match = Match(\n        r'\\s*(template\\s*<[\\w\\s<>,:]*>\\s*)?'\n        r'(class|struct)\\s+([A-Z_]+\\s+)*(\\w+(?:::\\w+)*)'\n        r'(([^=>]|<[^<>]*>|<[^<>]*<[^<>]*>\\s*>)*)$', line)\n    if (class_decl_match and\n        (not self.stack or self.stack[-1].open_parentheses == 0)):\n      self.stack.append(_ClassInfo(\n          class_decl_match.group(4), class_decl_match.group(2),\n          clean_lines, linenum))\n      line = class_decl_match.group(5)\n\n    # If we have not yet seen the opening brace for the innermost block,\n    # run checks here.\n    if not self.SeenOpenBrace():\n      
self.stack[-1].CheckBegin(filename, clean_lines, linenum, error)\n\n    # Update access control if we are inside a class/struct\n    if self.stack and isinstance(self.stack[-1], _ClassInfo):\n      classinfo = self.stack[-1]\n      access_match = Match(\n          r'^(.*)\\b(public|private|protected|signals)(\\s+(?:slots\\s*)?)?'\n          r':(?:[^:]|$)',\n          line)\n      if access_match:\n        classinfo.access = access_match.group(2)\n\n        # Check that access keywords are indented +1 space.  Skip this\n        # check if the keywords are not preceded by whitespace.\n        indent = access_match.group(1)\n        if (len(indent) != classinfo.class_indent + 1 and\n            Match(r'^\\s*$', indent)):\n          if classinfo.is_struct:\n            parent = 'struct ' + classinfo.name\n          else:\n            parent = 'class ' + classinfo.name\n          slots = ''\n          if access_match.group(3):\n            slots = access_match.group(3)\n          error(filename, linenum, 'whitespace/indent', 3,\n                '%s%s: should be indented +1 space inside %s' % (\n                    access_match.group(2), slots, parent))\n\n    # Consume braces or semicolons from what's left of the line\n    while True:\n      # Match first brace, semicolon, or closed parenthesis.\n      matched = Match(r'^[^{;)}]*([{;)}])(.*)$', line)\n      if not matched:\n        break\n\n      token = matched.group(1)\n      if token == '{':\n        # If the namespace or class hasn't seen an opening brace yet, mark\n        # the namespace/class head as complete.  
Push a new block onto the\n        # stack otherwise.\n        if not self.SeenOpenBrace():\n          self.stack[-1].seen_open_brace = True\n        else:\n          self.stack.append(_BlockInfo(True))\n          if _MATCH_ASM.match(line):\n            self.stack[-1].inline_asm = _BLOCK_ASM\n      elif token == ';' or token == ')':\n        # If we haven't seen an opening brace yet, but we already saw\n        # a semicolon, this is probably a forward declaration.  Pop\n        # the stack for these.\n        #\n        # Similarly, if we haven't seen an opening brace yet, but we\n        # already saw a closing parenthesis, then these are probably\n        # function arguments with extra \"class\" or \"struct\" keywords.\n        # Also pop the stack for these.\n        if not self.SeenOpenBrace():\n          self.stack.pop()\n      else:  # token == '}'\n        # Perform end of block checks and pop the stack.\n        if self.stack:\n          self.stack[-1].CheckEnd(filename, clean_lines, linenum, error)\n          self.stack.pop()\n      line = matched.group(2)\n\n  def InnermostClass(self):\n    \"\"\"Get class info on the top of the stack.\n\n    Returns:\n      A _ClassInfo object if we are inside a class, or None otherwise.\n    \"\"\"\n    for i in range(len(self.stack), 0, -1):\n      classinfo = self.stack[i - 1]\n      if isinstance(classinfo, _ClassInfo):\n        return classinfo\n    return None\n\n  def CheckCompletedBlocks(self, filename, error):\n    \"\"\"Checks that all classes and namespaces have been completely parsed.\n\n    Call this when all lines in a file have been processed.\n\n    Args:\n      filename: The name of the current file.\n      error: The function to call with any errors found.\n    \"\"\"\n    # Note: This test can result in false positives if #ifdef constructs\n    # get in the way of brace matching. 
See the testBuildClass test in\n    # cpplint_unittest.py for an example of this.\n    for obj in self.stack:\n      if isinstance(obj, _ClassInfo):\n        error(filename, obj.starting_linenum, 'build/class', 5,\n              'Failed to find complete declaration of class %s' %\n              obj.name)\n      elif isinstance(obj, _NamespaceInfo):\n        error(filename, obj.starting_linenum, 'build/namespaces', 5,\n              'Failed to find complete declaration of namespace %s' %\n              obj.name)\n\n\ndef CheckForNonStandardConstructs(filename, clean_lines, linenum,\n                                  nesting_state, error):\n  r\"\"\"Logs an error if we see certain non-ANSI constructs ignored by gcc-2.\n\n  Complain about several constructs which gcc-2 accepts, but which are\n  not standard C++.  Warning about these in lint is one way to ease the\n  transition to new compilers.\n  - put storage class first (e.g. \"static const\" instead of \"const static\").\n  - \"%lld\" instead of \"%qd\" in printf-type functions.\n  - \"%1$d\" is non-standard in printf-type functions.\n  - \"\\%\" is an undefined character escape sequence.\n  - text after #endif is not allowed.\n  - invalid inner-style forward declaration.\n  - >? and <? 
operators, and their >?= and <?= cousins.\n\n  Additionally, check for constructor/destructor style violations and reference\n  members, as it is very convenient to do so while checking for\n  gcc-2 compliance.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: A callable to which errors are reported, which takes 4 arguments:\n           filename, line number, error level, and message\n  \"\"\"\n\n  # Remove comments from the line, but leave in strings for now.\n  line = clean_lines.lines[linenum]\n\n  if Search(r'printf\\s*\\(.*\".*%[-+ ]?\\d*q', line):\n    error(filename, linenum, 'runtime/printf_format', 3,\n          '%q in format strings is deprecated.  Use %ll instead.')\n\n  if Search(r'printf\\s*\\(.*\".*%\\d+\\$', line):\n    error(filename, linenum, 'runtime/printf_format', 2,\n          '%N$ formats are unconventional.  Try rewriting to avoid them.')\n\n  # Remove escaped backslashes before looking for undefined escapes.\n  line = line.replace('\\\\\\\\', '')\n\n  if Search(r'(\"|\\').*\\\\(%|\\[|\\(|{)', line):\n    error(filename, linenum, 'build/printf_format', 3,\n          '%, [, (, and { are undefined character escapes.  
Unescape them.')\n\n  # For the rest, work with both comments and strings removed.\n  line = clean_lines.elided[linenum]\n\n  if Search(r'\\b(const|volatile|void|char|short|int|long'\n            r'|float|double|signed|unsigned'\n            r'|schar|u?int8|u?int16|u?int32|u?int64)'\n            r'\\s+(register|static|extern|typedef)\\b',\n            line):\n    error(filename, linenum, 'build/storage_class', 5,\n          'Storage class (static, extern, typedef, etc) should be first.')\n\n  if Match(r'\\s*#\\s*endif\\s*[^/\\s]+', line):\n    error(filename, linenum, 'build/endif_comment', 5,\n          'Uncommented text after #endif is non-standard.  Use a comment.')\n\n  if Match(r'\\s*class\\s+(\\w+\\s*::\\s*)+\\w+\\s*;', line):\n    error(filename, linenum, 'build/forward_decl', 5,\n          'Inner-style forward declarations are invalid.  Remove this line.')\n\n  if Search(r'(\\w+|[+-]?\\d+(\\.\\d*)?)\\s*(<|>)\\?=?\\s*(\\w+|[+-]?\\d+)(\\.\\d*)?',\n            line):\n    error(filename, linenum, 'build/deprecated', 3,\n          '>? and <? (max and min) operators are non-standard and deprecated.')\n\n  if Search(r'^\\s*const\\s*string\\s*&\\s*\\w+\\s*;', line):\n    # TODO(unknown): Could it be expanded safely to arbitrary references,\n    # without triggering too many false positives? The first\n    # attempt triggered 5 warnings for mostly benign code in the regtest, hence\n    # the restriction.\n    # Here's the original regexp, for the reference:\n    # type_name = r'\\w+((\\s*::\\s*\\w+)|(\\s*<\\s*\\w+?\\s*>))?'\n    # r'\\s*const\\s*' + type_name + '\\s*&\\s*\\w+\\s*;'\n    error(filename, linenum, 'runtime/member_string_references', 2,\n          'const string& members are dangerous. 
It is much better to use '\n          'alternatives, such as pointers or simple constants.')\n\n  # Everything else in this function operates on class declarations.\n  # Return early if the top of the nesting stack is not a class, or if\n  # the class head is not completed yet.\n  classinfo = nesting_state.InnermostClass()\n  if not classinfo or not classinfo.seen_open_brace:\n    return\n\n  # The class may have been declared with namespace or classname qualifiers.\n  # The constructor and destructor will not have those qualifiers.\n  base_classname = classinfo.name.split('::')[-1]\n\n  # Look for single-argument constructors that aren't marked explicit.\n  # Technically a valid construct, but against style.\n  args = Match(r'\\s+(?:inline\\s+)?%s\\s*\\(([^,()]+)\\)'\n               % re.escape(base_classname),\n               line)\n  if (args and\n      args.group(1) != 'void' and\n      not Match(r'(const\\s+)?%s(\\s+const)?\\s*(?:<\\w+>\\s*)?&'\n                % re.escape(base_classname), args.group(1).strip())):\n    error(filename, linenum, 'runtime/explicit', 5,\n          'Single-argument constructors should be marked explicit.')\n\n\ndef CheckSpacingForFunctionCall(filename, line, linenum, error):\n  \"\"\"Checks for the correctness of various spacing around function calls.\n\n  Args:\n    filename: The name of the current file.\n    line: The text of the line to check.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # Since function calls often occur inside if/for/while/switch\n  # expressions - which have their own, more liberal conventions - we\n  # first see if we should be looking inside such an expression for a\n  # function call, to which we can apply more strict standards.\n  fncall = line    # if there's no control flow construct, look at whole line\n  for pattern in (r'\\bif\\s*\\((.*)\\)\\s*{',\n                  r'\\bfor\\s*\\((.*)\\)\\s*{',\n                  
r'\\bwhile\\s*\\((.*)\\)\\s*[{;]',\n                  r'\\bswitch\\s*\\((.*)\\)\\s*{'):\n    match = Search(pattern, line)\n    if match:\n      fncall = match.group(1)    # look inside the parens for function calls\n      break\n\n  # Except in if/for/while/switch, there should never be space\n  # immediately inside parens (eg \"f( 3, 4 )\").  We make an exception\n  # for nested parens ( (a+b) + c ).  Likewise, there should never be\n  # a space before a ( when it's a function argument.  I assume it's a\n  # function argument when the char before the whitespace is legal in\n  # a function name (alnum + _) and we're not starting a macro. Also ignore\n  # pointers and references to arrays and functions coz they're too tricky:\n  # we use a very simple way to recognize these:\n  # \" (something)(maybe-something)\" or\n  # \" (something)(maybe-something,\" or\n  # \" (something)[something]\"\n  # Note that we assume the contents of [] to be short enough that\n  # they'll never need to wrap.\n  if (  # Ignore control structures.\n      not Search(r'\\b(if|for|while|switch|return|new|delete|catch|sizeof)\\b',\n                 fncall) and\n      # Ignore pointers/references to functions.\n      not Search(r' \\([^)]+\\)\\([^)]*(\\)|,$)', fncall) and\n      # Ignore pointers/references to arrays.\n      not Search(r' \\([^)]+\\)\\[[^\\]]+\\]', fncall)):\n    if Search(r'\\w\\s*\\(\\s(?!\\s*\\\\$)', fncall):      # a ( used for a fn call\n      error(filename, linenum, 'whitespace/parens', 4,\n            'Extra space after ( in function call')\n    elif Search(r'\\(\\s+(?!(\\s*\\\\)|\\()', fncall):\n      error(filename, linenum, 'whitespace/parens', 2,\n            'Extra space after (')\n    if (Search(r'\\w\\s+\\(', fncall) and\n        not Search(r'#\\s*define|typedef', fncall) and\n        not Search(r'\\w\\s+\\((\\w+::)*\\*\\w+\\)\\(', fncall)):\n      error(filename, linenum, 'whitespace/parens', 4,\n            'Extra space before ( in function call')\n    # If 
the ) is followed only by a newline or a { + newline, assume it's\n    # part of a control statement (if/while/etc), and don't complain\n    if Search(r'[^)]\\s+\\)\\s*[^{\\s]', fncall):\n      # If the closing parenthesis is preceded by only whitespaces,\n      # try to give a more descriptive error message.\n      if Search(r'^\\s+\\)', fncall):\n        error(filename, linenum, 'whitespace/parens', 2,\n              'Closing ) should be moved to the previous line')\n      else:\n        error(filename, linenum, 'whitespace/parens', 2,\n              'Extra space before )')\n\n\ndef IsBlankLine(line):\n  \"\"\"Returns true if the given line is blank.\n\n  We consider a line to be blank if the line is empty or consists of\n  only white spaces.\n\n  Args:\n    line: A line of a string.\n\n  Returns:\n    True, if the given line is blank.\n  \"\"\"\n  return not line or line.isspace()\n\n\ndef CheckForFunctionLengths(filename, clean_lines, linenum,\n                            function_state, error):\n  \"\"\"Reports errors for long function bodies.\n\n  For an overview of why this is done, see:\n  http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Write_Short_Functions\n\n  Uses a simplistic algorithm assuming other style guidelines\n  (especially spacing) are followed.\n  Only checks unindented functions, so class members are unchecked.\n  Trivial bodies are unchecked, so constructors with huge initializer lists\n  may be missed.\n  Blank/comment lines are not counted so as to avoid encouraging the removal\n  of vertical space and comments just to get through a lint check.\n  NOLINT *on the last line of a function* disables this check.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    function_state: Current function name and lines in body so far.\n    error: The function to call with any errors found.\n  \"\"\"\n  lines = clean_lines.lines\n  line = lines[linenum]\n  raw = clean_lines.raw_lines\n  raw_line = raw[linenum]\n  joined_line = ''\n\n  starting_func = False\n  regexp = r'(\\w(\\w|::|\\*|\\&|\\s)*)\\('  # decls * & space::name( ...\n  match_result = Match(regexp, line)\n  if match_result:\n    # If the name is all caps and underscores, figure it's a macro and\n    # ignore it, unless it's TEST or TEST_F.\n    function_name = match_result.group(1).split()[-1]\n    if function_name == 'TEST' or function_name == 'TEST_F' or (\n        not Match(r'[A-Z_]+$', function_name)):\n      starting_func = True\n\n  if starting_func:\n    body_found = False\n    for start_linenum in xrange(linenum, clean_lines.NumLines()):\n      start_line = lines[start_linenum]\n      joined_line += ' ' + start_line.lstrip()\n      if Search(r'(;|})', start_line):  # Declarations and trivial functions\n        body_found = True\n        break                              # ... ignore\n      elif Search(r'{', start_line):\n        body_found = True\n        function = Search(r'((\\w|:)*)\\(', line).group(1)\n        if Match(r'TEST', function):    # Handle TEST... 
macros\n          parameter_regexp = Search(r'(\\(.*\\))', joined_line)\n          if parameter_regexp:             # Ignore bad syntax\n            function += parameter_regexp.group(1)\n        else:\n          function += '()'\n        function_state.Begin(function)\n        break\n    if not body_found:\n      # No body for the function (or evidence of a non-function) was found.\n      error(filename, linenum, 'readability/fn_size', 5,\n            'Lint failed to find start of function body.')\n  elif Match(r'^\\}\\s*$', line):  # function end\n    function_state.Check(error, filename, linenum)\n    function_state.End()\n  elif not Match(r'^\\s*$', line):\n    function_state.Count()  # Count non-blank/non-comment lines.\n\n\n_RE_PATTERN_TODO = re.compile(r'^//(\\s*)TODO(\\(.+?\\))?:?(\\s|$)?')\n\n\ndef CheckComment(comment, filename, linenum, error):\n  \"\"\"Checks for common mistakes in TODO comments.\n\n  Args:\n    comment: The text of the comment from the line in question.\n    filename: The name of the current file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  match = _RE_PATTERN_TODO.match(comment)\n  if match:\n    # One whitespace is correct; zero whitespace is handled elsewhere.\n    leading_whitespace = match.group(1)\n    if len(leading_whitespace) > 1:\n      error(filename, linenum, 'whitespace/todo', 2,\n            'Too many spaces before TODO')\n\n    username = match.group(2)\n    if not username:\n      error(filename, linenum, 'readability/todo', 2,\n            'Missing username in TODO; it should look like '\n            '\"// TODO(my_username): Stuff.\"')\n\n    middle_whitespace = match.group(3)\n    # Comparisons made explicit for correctness -- pylint: disable=g-explicit-bool-comparison\n    if middle_whitespace != ' ' and middle_whitespace != '':\n      error(filename, linenum, 'whitespace/todo', 2,\n            'TODO(my_username) should be followed by a 
space')\n\ndef CheckAccess(filename, clean_lines, linenum, nesting_state, error):\n  \"\"\"Checks for improper use of DISALLOW* macros.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]  # get rid of comments and strings\n\n  matched = Match((r'\\s*(DISALLOW_COPY_AND_ASSIGN|'\n                   r'DISALLOW_EVIL_CONSTRUCTORS|'\n                   r'DISALLOW_IMPLICIT_CONSTRUCTORS)'), line)\n  if not matched:\n    return\n  if nesting_state.stack and isinstance(nesting_state.stack[-1], _ClassInfo):\n    if nesting_state.stack[-1].access != 'private':\n      error(filename, linenum, 'readability/constructors', 3,\n            '%s must be in the private: section' % matched.group(1))\n\n  else:\n    # Found DISALLOW* macro outside a class declaration, or perhaps it\n    # was used inside a function when it should have been part of the\n    # class declaration.  We could issue a warning here, but it\n    # probably resulted in a compiler error already.\n    pass\n\n\ndef FindNextMatchingAngleBracket(clean_lines, linenum, init_suffix):\n  \"\"\"Find the corresponding > to close a template.\n\n  Args:\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: Current line number.\n    init_suffix: Remainder of the current line after the initial <.\n\n  Returns:\n    True if a matching bracket exists.\n  \"\"\"\n  line = init_suffix\n  nesting_stack = ['<']\n  while True:\n    # Find the next operator that can tell us whether < is used as an\n    # opening bracket or as a less-than operator.  
We only want to\n    # warn on the latter case.\n    #\n    # We could also check all other operators and terminate the search\n    # early, e.g. if we got something like this \"a<b+c\", the \"<\" is\n    # most likely a less-than operator, but then we will get false\n    # positives for default arguments and other template expressions.\n    match = Search(r'^[^<>(),;\\[\\]]*([<>(),;\\[\\]])(.*)$', line)\n    if match:\n      # Found an operator, update nesting stack\n      operator = match.group(1)\n      line = match.group(2)\n\n      if nesting_stack[-1] == '<':\n        # Expecting closing angle bracket\n        if operator in ('<', '(', '['):\n          nesting_stack.append(operator)\n        elif operator == '>':\n          nesting_stack.pop()\n          if not nesting_stack:\n            # Found matching angle bracket\n            return True\n        elif operator == ',':\n          # Got a comma after a bracket, this is most likely a template\n          # argument.  We have not seen a closing angle bracket yet, but\n          # it's probably a few lines later if we look for it, so just\n          # return early here.\n          return True\n        else:\n          # Got some other operator.\n          return False\n\n      else:\n        # Expecting closing parenthesis or closing bracket\n        if operator in ('<', '(', '['):\n          nesting_stack.append(operator)\n        elif operator in (')', ']'):\n          # We don't bother checking for matching () or [].  
If we got\n          # something like (] or [), it would have been a syntax error.\n          nesting_stack.pop()\n\n    else:\n      # Scan the next line\n      linenum += 1\n      if linenum >= len(clean_lines.elided):\n        break\n      line = clean_lines.elided[linenum]\n\n  # Exhausted all remaining lines and still no matching angle bracket.\n  # Most likely the input was incomplete, otherwise we should have\n  # seen a semicolon and returned early.\n  return True\n\n\ndef FindPreviousMatchingAngleBracket(clean_lines, linenum, init_prefix):\n  \"\"\"Find the corresponding < that started a template.\n\n  Args:\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: Current line number.\n    init_prefix: Part of the current line before the initial >.\n\n  Returns:\n    True if a matching bracket exists.\n  \"\"\"\n  line = init_prefix\n  nesting_stack = ['>']\n  while True:\n    # Find the previous operator\n    match = Search(r'^(.*)([<>(),;\\[\\]])[^<>(),;\\[\\]]*$', line)\n    if match:\n      # Found an operator, update nesting stack\n      operator = match.group(2)\n      line = match.group(1)\n\n      if nesting_stack[-1] == '>':\n        # Expecting opening angle bracket\n        if operator in ('>', ')', ']'):\n          nesting_stack.append(operator)\n        elif operator == '<':\n          nesting_stack.pop()\n          if not nesting_stack:\n            # Found matching angle bracket\n            return True\n        elif operator == ',':\n          # Got a comma before a bracket, this is most likely a\n          # template argument.  
The opening angle bracket is probably\n          # there if we look for it, so just return early here.\n          return True\n        else:\n          # Got some other operator.\n          return False\n\n      else:\n        # Expecting opening parenthesis or opening bracket\n        if operator in ('>', ')', ']'):\n          nesting_stack.append(operator)\n        elif operator in ('(', '['):\n          nesting_stack.pop()\n\n    else:\n      # Scan the previous line\n      linenum -= 1\n      if linenum < 0:\n        break\n      line = clean_lines.elided[linenum]\n\n  # Exhausted all earlier lines and still no matching angle bracket.\n  return False\n\n\ndef CheckSpacing(filename, clean_lines, linenum, nesting_state, error):\n  \"\"\"Checks for the correctness of various spacing issues in the code.\n\n  Things we check for: spaces around operators, spaces after\n  if/for/while/switch, no spaces around parens in function calls, two\n  spaces between code and comment, don't start a block with a blank\n  line, don't end a function with a blank line, don't add a blank line\n  after public/protected/private, don't have too many blank lines in a row.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # Don't use \"elided\" lines here, otherwise we can't check commented lines.\n  # Don't want to use \"raw\" either, because we don't want to check inside C++11\n  # raw strings,\n  raw = clean_lines.lines_without_raw_strings\n  line = raw[linenum]\n\n  # Before nixing comments, check if the line is blank for no good\n  # reason.  
This includes the first line after a block is opened, and\n  # blank lines at the end of a function (i.e., right before a line like '}').\n  #\n  # Skip all the blank line checks if we are immediately inside a\n  # namespace body.  In other words, don't issue blank line warnings\n  # for this block:\n  #   namespace {\n  #\n  #   }\n  #\n  # A warning about missing end of namespace comments will be issued instead.\n  if IsBlankLine(line) and not nesting_state.InNamespaceBody():\n    elided = clean_lines.elided\n    prev_line = elided[linenum - 1]\n    prevbrace = prev_line.rfind('{')\n    # TODO(unknown): Don't complain if line before blank line, and line after,\n    #                both start with alnums and are indented the same amount.\n    #                This ignores whitespace at the start of a namespace block\n    #                because those are not usually indented.\n    if prevbrace != -1 and prev_line[prevbrace:].find('}') == -1:\n      # OK, we have a blank line at the start of a code block.  Before we\n      # complain, we check if it is an exception to the rule: The previous\n      # non-empty line has the parameters of a function header that are indented\n      # 4 spaces (because they did not fit in an 80 column line when placed on\n      # the same line as the function name).  
We also check for the case where\n      # the previous line is indented 6 spaces, which may happen when the\n      # initializers of a constructor do not fit into an 80 column line.\n      exception = False\n      if Match(r' {6}\\w', prev_line):  # Initializer list?\n        # We are looking for the opening column of the initializer list, which\n        # should be indented 4 spaces to cause 6 space indentation afterwards.\n        search_position = linenum-2\n        while (search_position >= 0\n               and Match(r' {6}\\w', elided[search_position])):\n          search_position -= 1\n        exception = (search_position >= 0\n                     and elided[search_position][:5] == '    :')\n      else:\n        # Search for the function arguments or an initializer list.  We use a\n        # simple heuristic here: If the line is indented 4 spaces, and we have a\n        # closing paren, without the opening paren, followed by an opening brace\n        # or colon (for initializer lists), we assume that it is the last line of\n        # a function header.  
If we have a colon indented 4 spaces, it is an\n        # initializer list.\n        exception = (Match(r' {4}\\w[^\\(]*\\)\\s*(const\\s*)?(\\{\\s*$|:)',\n                           prev_line)\n                     or Match(r' {4}:', prev_line))\n\n      if not exception:\n        error(filename, linenum, 'whitespace/blank_line', 2,\n              'Redundant blank line at the start of a code block '\n              'should be deleted.')\n    # Ignore blank lines at the end of a block in a long if-else\n    # chain, like this:\n    #   if (condition1) {\n    #     // Something followed by a blank line\n    #\n    #   } else if (condition2) {\n    #     // Something else\n    #   }\n    if linenum + 1 < clean_lines.NumLines():\n      next_line = raw[linenum + 1]\n      if (next_line\n          and Match(r'\\s*}', next_line)\n          and next_line.find('} else ') == -1):\n        error(filename, linenum, 'whitespace/blank_line', 3,\n              'Redundant blank line at the end of a code block '\n              'should be deleted.')\n\n    matched = Match(r'\\s*(public|protected|private):', prev_line)\n    if matched:\n      error(filename, linenum, 'whitespace/blank_line', 3,\n            'Do not leave a blank line after \"%s:\"' % matched.group(1))\n\n  # Next, we complain if there's a comment too near the text\n  commentpos = line.find('//')\n  if commentpos != -1:\n    # Check if the // may be in quotes.  
If so, ignore it\n    # Comparisons made explicit for clarity -- pylint: disable=g-explicit-bool-comparison\n    if (line.count('\"', 0, commentpos) -\n        line.count('\\\\\"', 0, commentpos)) % 2 == 0:   # not in quotes\n      # Allow one space for new scopes, two spaces otherwise:\n      if (not Match(r'^\\s*{ //', line) and\n          ((commentpos >= 1 and\n            line[commentpos-1] not in string.whitespace) or\n           (commentpos >= 2 and\n            line[commentpos-2] not in string.whitespace))):\n        error(filename, linenum, 'whitespace/comments', 2,\n              'At least two spaces is best between code and comments')\n      # There should always be a space between the // and the comment\n      commentend = commentpos + 2\n      if commentend < len(line) and not line[commentend] == ' ':\n        # but some lines are exceptions -- e.g. if they're big\n        # comment delimiters like:\n        # //----------------------------------------------------------\n        # or are an empty C++ style Doxygen comment, like:\n        # ///\n        # or C++ style Doxygen comments placed after the variable:\n        # ///<  Header comment\n        # //!<  Header comment\n        # or they begin with multiple slashes followed by a space:\n        # //////// Header comment\n        match = (Search(r'[=/-]{4,}\\s*$', line[commentend:]) or\n                 Search(r'^/$', line[commentend:]) or\n                 Search(r'^!< ', line[commentend:]) or\n                 Search(r'^/< ', line[commentend:]) or\n                 Search(r'^/+ ', line[commentend:]))\n        if not match:\n          error(filename, linenum, 'whitespace/comments', 4,\n                'Should have a space between // and comment')\n      CheckComment(line[commentpos:], filename, linenum, error)\n\n  line = clean_lines.elided[linenum]  # get rid of comments and strings\n\n  # Don't try to do spacing checks for operator methods\n  line = re.sub(r'operator(==|!=|<|<<|<=|>=|>>|>)\\(', 
r'operator\\(', line)\n\n  # We allow no-spaces around = within an if: \"if ( (a=Foo()) == 0 )\".\n  # Otherwise not.  Note we only check for non-spaces on *both* sides;\n  # sometimes people put non-spaces on one side when aligning ='s among\n  # many lines (not that this is behavior that I approve of...)\n  if Search(r'[\\w.]=[\\w.]', line) and not Search(r'\\b(if|while) ', line):\n    error(filename, linenum, 'whitespace/operators', 4,\n          'Missing spaces around =')\n\n  # It's ok not to have spaces around binary operators like + - * /, but if\n  # there's too little whitespace, we get concerned.  It's hard to tell,\n  # though, so we punt on this one for now.  TODO.\n\n  # You should always have whitespace around binary operators.\n  #\n  # Check <= and >= first to avoid false positives with < and >, then\n  # check non-include lines for spacing around < and >.\n  match = Search(r'[^<>=!\\s](==|!=|<=|>=)[^<>=!\\s]', line)\n  if match:\n    error(filename, linenum, 'whitespace/operators', 3,\n          'Missing spaces around %s' % match.group(1))\n  # We allow no-spaces around << when used like this: 10<<20, but\n  # not otherwise (particularly, not when used as streams)\n  # Also ignore using ns::operator<<;\n  match = Search(r'(operator|\\S)(?:L|UL|ULL|l|ul|ull)?<<(\\S)', line)\n  if (match and\n      not (match.group(1).isdigit() and match.group(2).isdigit()) and\n      not (match.group(1) == 'operator' and match.group(2) == ';')):\n    error(filename, linenum, 'whitespace/operators', 3,\n          'Missing spaces around <<')\n  elif not Match(r'#.*include', line):\n    # Avoid false positives on ->\n    reduced_line = line.replace('->', '')\n\n    # Look for < that is not surrounded by spaces.  This is only\n    # triggered if both sides are missing spaces, even though\n    # technically we should flag if at least one side is missing a\n    # space.  
This is done to avoid some false positives with shifts.\n    match = Search(r'[^\\s<]<([^\\s=<].*)', reduced_line)\n    if (match and\n        not FindNextMatchingAngleBracket(clean_lines, linenum, match.group(1))):\n      error(filename, linenum, 'whitespace/operators', 3,\n            'Missing spaces around <')\n\n    # Look for > that is not surrounded by spaces.  Similar to the\n    # above, we only trigger if both sides are missing spaces to avoid\n    # false positives with shifts.\n    match = Search(r'^(.*[^\\s>])>[^\\s=>]', reduced_line)\n    if (match and\n        not FindPreviousMatchingAngleBracket(clean_lines, linenum,\n                                             match.group(1))):\n      error(filename, linenum, 'whitespace/operators', 3,\n            'Missing spaces around >')\n\n  # We allow no-spaces around >> for almost anything.  This is because\n  # C++11 allows \">>\" to close nested templates, which accounts for\n  # most cases when \">>\" is not followed by a space.\n  #\n  # We still warn on \">>\" followed by alpha character, because that is\n  # likely due to \">>\" being used for right shifts, e.g.:\n  #   value >> alpha\n  #\n  # When \">>\" is used to close templates, the alphanumeric letter that\n  # follows would be part of an identifier, and there should still be\n  # a space separating the template type and the identifier.\n  #   type<type<type>> alpha\n  match = Search(r'>>[a-zA-Z_]', line)\n  if match:\n    error(filename, linenum, 'whitespace/operators', 3,\n          'Missing spaces around >>')\n\n  # There shouldn't be space around unary operators\n  match = Search(r'(!\\s|~\\s|[\\s]--[\\s;]|[\\s]\\+\\+[\\s;])', line)\n  if match:\n    error(filename, linenum, 'whitespace/operators', 4,\n          'Extra space for operator %s' % match.group(1))\n\n  # A pet peeve of mine: no spaces after an if, while, switch, or for\n  match = Search(r' (if\\(|for\\(|while\\(|switch\\()', line)\n  if match:\n    error(filename, linenum, 
'whitespace/parens', 5,\n          'Missing space before ( in %s' % match.group(1))\n\n  # For if/for/while/switch, the left and right parens should be\n  # consistent about how many spaces are inside the parens, and\n  # there should either be zero or one spaces inside the parens.\n  # We don't want: \"if ( foo)\" or \"if ( foo   )\".\n  # Exception: \"for ( ; foo; bar)\" and \"for (foo; bar; )\" are allowed.\n  match = Search(r'\\b(if|for|while|switch)\\s*'\n                 r'\\(([ ]*)(.).*[^ ]+([ ]*)\\)\\s*{\\s*$',\n                 line)\n  if match:\n    if len(match.group(2)) != len(match.group(4)):\n      if not (match.group(3) == ';' and\n              len(match.group(2)) == 1 + len(match.group(4)) or\n              not match.group(2) and Search(r'\\bfor\\s*\\(.*; \\)', line)):\n        error(filename, linenum, 'whitespace/parens', 5,\n              'Mismatching spaces inside () in %s' % match.group(1))\n    if len(match.group(2)) not in [0, 1]:\n      error(filename, linenum, 'whitespace/parens', 5,\n            'Should have zero or one spaces inside ( and ) in %s' %\n            match.group(1))\n\n  # You should always have a space after a comma (either as fn arg or operator)\n  #\n  # This does not apply when the non-space character following the\n  # comma is another comma, since the only time when that happens is\n  # for empty macro arguments.\n  #\n  # We run this check in two passes: first pass on elided lines to\n  # verify that lines contain missing whitespaces, second pass on raw\n  # lines to confirm that those missing whitespaces are not due to\n  # elided comments.\n  if Search(r',[^,\\s]', line) and Search(r',[^,\\s]', raw[linenum]):\n    error(filename, linenum, 'whitespace/comma', 3,\n          'Missing space after ,')\n\n  # You should always have a space after a semicolon\n  # except for a few corner cases\n  # TODO(unknown): clarify if 'if (1) { return 1;}' requires one more\n  # space after ;\n  if Search(r';[^\\s};\\\\)/]', line):\n    error(filename, linenum, 'whitespace/semicolon', 3,\n          'Missing space after ;')\n\n  # Next we will look for issues with function calls.\n  CheckSpacingForFunctionCall(filename, line, linenum, error)\n\n  # Except after an opening paren, or after another opening brace (in case of\n  # an initializer list, for instance), you should have spaces before your\n  # braces. And since you should never have braces at the beginning of a line,\n  # this is an easy test.\n  match = Match(r'^(.*[^ ({]){', line)\n  if match:\n    # Try a bit harder to check for brace initialization.  This\n    # happens in one of the following forms:\n    #   Constructor() : initializer_list_{} { ... }\n    #   Constructor{}.MemberFunction()\n    #   Type variable{};\n    #   FunctionCall(type{}, ...);\n    #   LastArgument(..., type{});\n    #   LOG(INFO) << type{} << \" ...\";\n    #   map_of_type[{...}] = ...;\n    #\n    # We check for the character following the closing brace, and\n    # silence the warning if it's one of those listed above, i.e.\n    # \"{.;,)<]\".\n    #\n    # To account for nested initializer lists, we allow any number of\n    # closing braces up to \"{;,)<\".  We can't simply silence the\n    # warning on first sight of closing brace, because that would\n    # cause false negatives for things that are not initializer lists.\n    #   Silence this:         But not this:\n    #     Outer{                if (...) {\n    #       Inner{...}            if (...){  // Missing space before {\n    #     };                    }\n    #\n    # There is a false negative with this approach if people inserted\n    # spurious semicolons, e.g. 
\"if (cond){};\", but we will catch the\n    # spurious semicolon with a separate check.\n    (endline, endlinenum, endpos) = CloseExpression(\n        clean_lines, linenum, len(match.group(1)))\n    trailing_text = ''\n    if endpos > -1:\n      trailing_text = endline[endpos:]\n    for offset in xrange(endlinenum + 1,\n                         min(endlinenum + 3, clean_lines.NumLines() - 1)):\n      trailing_text += clean_lines.elided[offset]\n    if not Match(r'^[\\s}]*[{.;,)<\\]]', trailing_text):\n      error(filename, linenum, 'whitespace/braces', 5,\n            'Missing space before {')\n\n  # Make sure '} else {' has spaces.\n  if Search(r'}else', line):\n    error(filename, linenum, 'whitespace/braces', 5,\n          'Missing space before else')\n\n  # You shouldn't have spaces before your brackets, except maybe after\n  # 'delete []' or 'new char * []'.\n  if Search(r'\\w\\s+\\[', line) and not Search(r'delete\\s+\\[', line):\n    error(filename, linenum, 'whitespace/braces', 5,\n          'Extra space before [')\n\n  # You shouldn't have a space before a semicolon at the end of the line.\n  # There's a special case for \"for\" since the style guide allows space before\n  # the semicolon there.\n  if Search(r':\\s*;\\s*$', line):\n    error(filename, linenum, 'whitespace/semicolon', 5,\n          'Semicolon defining empty statement. Use {} instead.')\n  elif Search(r'^\\s*;\\s*$', line):\n    error(filename, linenum, 'whitespace/semicolon', 5,\n          'Line contains only semicolon. If this should be an empty statement, '\n          'use {} instead.')\n  elif (Search(r'\\s+;\\s*$', line) and\n        not Search(r'\\bfor\\b', line)):\n    error(filename, linenum, 'whitespace/semicolon', 5,\n          'Extra space before last semicolon. 
If this should be an empty '\n          'statement, use {} instead.')\n\n  # In range-based for, we want spaces before and after the colon, but\n  # not around \"::\" tokens that might appear.\n  if (Search(r'for *\\(.*[^:]:[^: ]', line) or\n      Search(r'for *\\(.*[^: ]:[^:]', line)):\n    error(filename, linenum, 'whitespace/forcolon', 2,\n          'Missing space around colon in range-based for loop')\n\n\ndef CheckSectionSpacing(filename, clean_lines, class_info, linenum, error):\n  \"\"\"Checks for additional blank line issues related to sections.\n\n  Currently the only thing checked here is blank line before protected/private.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    class_info: A _ClassInfo object.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  # Skip checks if the class is small, where small means 25 lines or less.\n  # 25 lines seems like a good cutoff since that's the usual height of\n  # terminals, and any class that can't fit in one screen can't really\n  # be considered \"small\".\n  #\n  # Also skip checks if we are on the first line.  This accounts for\n  # classes that look like\n  #   class Foo { public: ... };\n  #\n  # If we didn't find the end of the class, last_line would be zero,\n  # and the check will be skipped by the first condition.\n  if (class_info.last_line - class_info.starting_linenum <= 24 or\n      linenum <= class_info.starting_linenum):\n    return\n\n  matched = Match(r'\\s*(public|protected|private):', clean_lines.lines[linenum])\n  if matched:\n    # Issue warning if the line before public/protected/private was\n    # not a blank line, but don't do this if the previous line contains\n    # \"class\" or \"struct\".  
This can happen two ways:\n    #  - We are at the beginning of the class.\n    #  - We are forward-declaring an inner class that is semantically\n    #    private, but needed to be public for implementation reasons.\n    # Also ignores cases where the previous line ends with a backslash as can be\n    # common when defining classes in C macros.\n    prev_line = clean_lines.lines[linenum - 1]\n    if (not IsBlankLine(prev_line) and\n        not Search(r'\\b(class|struct)\\b', prev_line) and\n        not Search(r'\\\\$', prev_line)):\n      # Try a bit harder to find the beginning of the class.  This is to\n      # account for multi-line base-specifier lists, e.g.:\n      #   class Derived\n      #       : public Base {\n      end_class_head = class_info.starting_linenum\n      for i in range(class_info.starting_linenum, linenum):\n        if Search(r'\\{\\s*$', clean_lines.lines[i]):\n          end_class_head = i\n          break\n      if end_class_head < linenum - 1:\n        error(filename, linenum, 'whitespace/blank_line', 3,\n              '\"%s:\" should be preceded by a blank line' % matched.group(1))\n\n\ndef GetPreviousNonBlankLine(clean_lines, linenum):\n  \"\"\"Return the most recent non-blank line and its line number.\n\n  Args:\n    clean_lines: A CleansedLines instance containing the file contents.\n    linenum: The number of the line to check.\n\n  Returns:\n    A tuple with two elements.  The first element is the contents of the last\n    non-blank line before the current line, or the empty string if this is the\n    first non-blank line.  
The second is the line number of that line, or -1\n    if this is the first non-blank line.\n  \"\"\"\n\n  prevlinenum = linenum - 1\n  while prevlinenum >= 0:\n    prevline = clean_lines.elided[prevlinenum]\n    if not IsBlankLine(prevline):     # if not a blank line...\n      return (prevline, prevlinenum)\n    prevlinenum -= 1\n  return ('', -1)\n\n\ndef CheckBraces(filename, clean_lines, linenum, error):\n  \"\"\"Looks for misplaced braces (e.g. at the end of line).\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  line = clean_lines.elided[linenum]        # get rid of comments and strings\n\n  if Match(r'\\s*{\\s*$', line):\n    # We allow an open brace to start a line in the case where someone is using\n    # braces in a block to explicitly create a new scope, which is commonly used\n    # to control the lifetime of stack-allocated variables.  Braces are also\n    # used for brace initializers inside function calls.  
We don't detect this\n    # perfectly: we just don't complain if the last non-whitespace character on\n    # the previous non-blank line is ',', ';', ':', '(', '{', or '}', or if the\n    # previous line starts a preprocessor block.\n    prevline = GetPreviousNonBlankLine(clean_lines, linenum)[0]\n    if (not Search(r'[,;:}{(]\\s*$', prevline) and\n        not Match(r'\\s*#', prevline)):\n      error(filename, linenum, 'whitespace/braces', 4,\n            '{ should almost always be at the end of the previous line')\n\n  # An else clause should be on the same line as the preceding closing brace.\n  if Match(r'\\s*else\\s*', line):\n    prevline = GetPreviousNonBlankLine(clean_lines, linenum)[0]\n    if Match(r'\\s*}\\s*$', prevline):\n      error(filename, linenum, 'whitespace/newline', 4,\n            'An else should appear on the same line as the preceding }')\n\n  # If braces come on one side of an else, they should be on both.\n  # However, we have to worry about \"else if\" that spans multiple lines!\n  if Search(r'}\\s*else[^{]*$', line) or Match(r'[^}]*else\\s*{', line):\n    if Search(r'}\\s*else if([^{]*)$', line):       # could be multi-line if\n      # find the ( after the if\n      pos = line.find('else if')\n      pos = line.find('(', pos)\n      if pos > 0:\n        (endline, _, endpos) = CloseExpression(clean_lines, linenum, pos)\n        if endline[endpos:].find('{') == -1:    # must be brace after if\n          error(filename, linenum, 'readability/braces', 5,\n                'If an else has a brace on one side, it should have it on both')\n    else:            # common case: else not followed by a multi-line if\n      error(filename, linenum, 'readability/braces', 5,\n            'If an else has a brace on one side, it should have it on both')\n\n  # Likewise, an else should never have the else clause on the same line\n  if Search(r'\\belse [^\\s{]', line) and not Search(r'\\belse if\\b', line):\n    error(filename, linenum, 'whitespace/newline', 
4,\n          'Else clause should never be on same line as else (use 2 lines)')\n\n  # In the same way, a do/while should never be on one line\n  if Match(r'\\s*do [^\\s{]', line):\n    error(filename, linenum, 'whitespace/newline', 4,\n          'do/while clauses should not be on a single line')\n\n  # Block bodies should not be followed by a semicolon.  Due to C++11\n  # brace initialization, there are more places where semicolons are\n  # required than not, so we use a whitelist approach to check these\n  # rather than a blacklist.  These are the places where \"};\" should\n  # be replaced by just \"}\":\n  # 1. Some flavor of block following closing parenthesis:\n  #    for (;;) {};\n  #    while (...) {};\n  #    switch (...) {};\n  #    Function(...) {};\n  #    if (...) {};\n  #    if (...) else if (...) {};\n  #\n  # 2. else block:\n  #    if (...) else {};\n  #\n  # 3. const member function:\n  #    Function(...) const {};\n  #\n  # 4. Block following some statement:\n  #    x = 42;\n  #    {};\n  #\n  # 5. Block at the beginning of a function:\n  #    Function(...) {\n  #      {};\n  #    }\n  #\n  #    Note that naively checking for the preceding \"{\" will also match\n  #    braces inside multi-dimensional arrays, but this is fine since\n  #    that expression will not contain semicolons.\n  #\n  # 6. Block following another block:\n  #    while (true) {}\n  #    {};\n  #\n  # 7. End of namespaces:\n  #    namespace {};\n  #\n  #    These semicolons seem far more common than other kinds of\n  #    redundant semicolons, possibly due to people converting classes\n  #    to namespaces.  For now we do not warn for this case.\n  #\n  # Try matching case 1 first.\n  match = Match(r'^(.*\\)\\s*)\\{', line)\n  if match:\n    # Matched closing parenthesis (case 1).  Check the token before the\n    # matching opening parenthesis, and don't warn if it looks like a\n    # macro.  
This avoids these false positives:\n    #  - macro that defines a base class\n    #  - multi-line macro that defines a base class\n    #  - macro that defines the whole class-head\n    #\n    # But we still issue warnings for macros that we know are safe to\n    # warn, specifically:\n    #  - TEST, TEST_F, TEST_P, MATCHER, MATCHER_P\n    #  - TYPED_TEST\n    #  - INTERFACE_DEF\n    #  - EXCLUSIVE_LOCKS_REQUIRED, SHARED_LOCKS_REQUIRED, LOCKS_EXCLUDED:\n    #\n    # We implement a whitelist of safe macros instead of a blacklist of\n    # unsafe macros, even though the latter appears less frequently in\n    # google code and would have been easier to implement.  This is because\n    # the downside for getting the whitelist wrong means some extra\n    # semicolons, while the downside for getting the blacklist wrong\n    # would result in compile errors.\n    #\n    # In addition to macros, we also don't want to warn on compound\n    # literals.\n    closing_brace_pos = match.group(1).rfind(')')\n    opening_parenthesis = ReverseCloseExpression(\n        clean_lines, linenum, closing_brace_pos)\n    if opening_parenthesis[2] > -1:\n      line_prefix = opening_parenthesis[0][0:opening_parenthesis[2]]\n      macro = Search(r'\\b([A-Z_]+)\\s*$', line_prefix)\n      if ((macro and\n           macro.group(1) not in (\n               'TEST', 'TEST_F', 'MATCHER', 'MATCHER_P', 'TYPED_TEST',\n               'EXCLUSIVE_LOCKS_REQUIRED', 'SHARED_LOCKS_REQUIRED',\n               'LOCKS_EXCLUDED', 'INTERFACE_DEF')) or\n          Search(r'\\s+=\\s*$', line_prefix)):\n        match = None\n\n  else:\n    # Try matching cases 2-3.\n    match = Match(r'^(.*(?:else|\\)\\s*const)\\s*)\\{', line)\n    if not match:\n      # Try matching cases 4-6.  
These are always matched on separate lines.\n      #\n      # Note that we can't simply concatenate the previous line to the\n      # current line and do a single match, otherwise we may output\n      # duplicate warnings for the blank line case:\n      #   if (cond) {\n      #     // blank line\n      #   }\n      prevline = GetPreviousNonBlankLine(clean_lines, linenum)[0]\n      if prevline and Search(r'[;{}]\\s*$', prevline):\n        match = Match(r'^(\\s*)\\{', line)\n\n  # Check matching closing brace\n  if match:\n    (endline, endlinenum, endpos) = CloseExpression(\n        clean_lines, linenum, len(match.group(1)))\n    if endpos > -1 and Match(r'^\\s*;', endline[endpos:]):\n      # Current {} pair is eligible for semicolon check, and we have found\n      # the redundant semicolon, output warning here.\n      #\n      # Note: because we are scanning forward for opening braces, and\n      # outputting warnings for the matching closing brace, if there are\n      # nested blocks with trailing semicolons, we will get the error\n      # messages in reversed order.\n      error(filename, endlinenum, 'readability/braces', 4,\n            \"You don't need a ; after a }\")\n\n\ndef CheckEmptyBlockBody(filename, clean_lines, linenum, error):\n  \"\"\"Look for empty loop/conditional body with only a single semicolon.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # Search for loop keywords at the beginning of the line.  
Because only\n  # whitespaces are allowed before the keywords, this will also ignore most\n  # do-while-loops, since those lines should start with closing brace.\n  #\n  # We also check \"if\" blocks here, since an empty conditional block\n  # is likely an error.\n  line = clean_lines.elided[linenum]\n  matched = Match(r'\\s*(for|while|if)\\s*\\(', line)\n  if matched:\n    # Find the end of the conditional expression\n    (end_line, end_linenum, end_pos) = CloseExpression(\n        clean_lines, linenum, line.find('('))\n\n    # Output warning if what follows the condition expression is a semicolon.\n    # No warning for all other cases, including whitespace or newline, since we\n    # have a separate check for semicolons preceded by whitespace.\n    if end_pos >= 0 and Match(r';', end_line[end_pos:]):\n      if matched.group(1) == 'if':\n        error(filename, end_linenum, 'whitespace/empty_conditional_body', 5,\n              'Empty conditional bodies should use {}')\n      else:\n        error(filename, end_linenum, 'whitespace/empty_loop_body', 5,\n              'Empty loop bodies should use {} or continue')\n\n\ndef CheckCheck(filename, clean_lines, linenum, error):\n  \"\"\"Checks the use of CHECK and EXPECT macros.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # Decide the set of replacement macros that should be suggested\n  lines = clean_lines.elided\n  check_macro = None\n  start_pos = -1\n  for macro in _CHECK_MACROS:\n    i = lines[linenum].find(macro)\n    if i >= 0:\n      check_macro = macro\n\n      # Find opening parenthesis.  
Do a regular expression match here\n      # to make sure that we are matching the expected CHECK macro, as\n      # opposed to some other macro that happens to contain the CHECK\n      # substring.\n      matched = Match(r'^(.*\\b' + check_macro + r'\\s*)\\(', lines[linenum])\n      if not matched:\n        continue\n      start_pos = len(matched.group(1))\n      break\n  if not check_macro or start_pos < 0:\n    # Don't waste time here if line doesn't contain 'CHECK' or 'EXPECT'\n    return\n\n  # Find end of the boolean expression by matching parentheses\n  (last_line, end_line, end_pos) = CloseExpression(\n      clean_lines, linenum, start_pos)\n  if end_pos < 0:\n    return\n  if linenum == end_line:\n    expression = lines[linenum][start_pos + 1:end_pos - 1]\n  else:\n    expression = lines[linenum][start_pos + 1:]\n    for i in xrange(linenum + 1, end_line):\n      expression += lines[i]\n    expression += last_line[0:end_pos - 1]\n\n  # Parse expression so that we can take parentheses into account.\n  # This avoids false positives for inputs like \"CHECK((a < 4) == b)\",\n  # which is not replaceable by CHECK_LE.\n  lhs = ''\n  rhs = ''\n  operator = None\n  while expression:\n    matched = Match(r'^\\s*(<<|<<=|>>|>>=|->\\*|->|&&|\\|\\||'\n                    r'==|!=|>=|>|<=|<|\\()(.*)$', expression)\n    if matched:\n      token = matched.group(1)\n      if token == '(':\n        # Parenthesized operand\n        expression = matched.group(2)\n        (end, _) = FindEndOfExpressionInLine(expression, 0, 1, '(', ')')\n        if end < 0:\n          return  # Unmatched parenthesis\n        lhs += '(' + expression[0:end]\n        expression = expression[end:]\n      elif token in ('&&', '||'):\n        # Logical and/or operators.  
This means the expression\n        # contains more than one term, for example:\n        #   CHECK(42 < a && a < b);\n        #\n        # These are not replaceable with CHECK_LE, so bail out early.\n        return\n      elif token in ('<<', '<<=', '>>', '>>=', '->*', '->'):\n        # Non-relational operator\n        lhs += token\n        expression = matched.group(2)\n      else:\n        # Relational operator\n        operator = token\n        rhs = matched.group(2)\n        break\n    else:\n      # Unparenthesized operand.  Instead of appending to lhs one character\n      # at a time, we do another regular expression match to consume several\n      # characters at once if possible.  Trivial benchmark shows that this\n      # is more efficient when the operands are longer than a single\n      # character, which is generally the case.\n      matched = Match(r'^([^-=!<>()&|]+)(.*)$', expression)\n      if not matched:\n        matched = Match(r'^(\\s*\\S)(.*)$', expression)\n        if not matched:\n          break\n      lhs += matched.group(1)\n      expression = matched.group(2)\n\n  # Only apply checks if we got all parts of the boolean expression\n  if not (lhs and operator and rhs):\n    return\n\n  # Check that rhs do not contain logical operators.  We already know\n  # that lhs is fine since the loop above parses out && and ||.\n  if rhs.find('&&') > -1 or rhs.find('||') > -1:\n    return\n\n  # At least one of the operands must be a constant literal.  
This is\n  # to avoid suggesting replacements for unprintable things like\n  # CHECK(variable != iterator)\n  #\n  # The following pattern matches decimal, hex integers, strings, and\n  # characters (in that order).\n  lhs = lhs.strip()\n  rhs = rhs.strip()\n  match_constant = r'^([-+]?(\\d+|0[xX][0-9a-fA-F]+)[lLuU]{0,3}|\".*\"|\\'.*\\')$'\n  if Match(match_constant, lhs) or Match(match_constant, rhs):\n    # Note: since we know both lhs and rhs, we can provide a more\n    # descriptive error message like:\n    #   Consider using CHECK_EQ(x, 42) instead of CHECK(x == 42)\n    # Instead of:\n    #   Consider using CHECK_EQ instead of CHECK(a == b)\n    #\n    # We are still keeping the less descriptive message because if lhs\n    # or rhs gets long, the error message might become unreadable.\n    error(filename, linenum, 'readability/check', 2,\n          'Consider using %s instead of %s(a %s b)' % (\n              _CHECK_REPLACEMENT[check_macro][operator],\n              check_macro, operator))\n\n\ndef CheckAltTokens(filename, clean_lines, linenum, error):\n  \"\"\"Check alternative keywords being used in boolean expressions.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n\n  # Avoid preprocessor lines\n  if Match(r'^\\s*#', line):\n    return\n\n  # Last ditch effort to avoid multi-line comments.  This will not help\n  # if the comment started before the current line or ended after the\n  # current line, but it catches most of the false positives.  
At least,\n  # it provides a way to workaround this warning for people who use\n  # multi-line comments in preprocessor macros.\n  #\n  # TODO(unknown): remove this once cpplint has better support for\n  # multi-line comments.\n  if line.find('/*') >= 0 or line.find('*/') >= 0:\n    return\n\n  for match in _ALT_TOKEN_REPLACEMENT_PATTERN.finditer(line):\n    error(filename, linenum, 'readability/alt_tokens', 2,\n          'Use operator %s instead of %s' % (\n              _ALT_TOKEN_REPLACEMENT[match.group(1)], match.group(1)))\n\n\ndef GetLineWidth(line):\n  \"\"\"Determines the width of the line in column positions.\n\n  Args:\n    line: A string, which may be a Unicode string.\n\n  Returns:\n    The width of the line in column positions, accounting for Unicode\n    combining characters and wide characters.\n  \"\"\"\n  if isinstance(line, unicode):\n    width = 0\n    for uc in unicodedata.normalize('NFC', line):\n      if unicodedata.east_asian_width(uc) in ('W', 'F'):\n        width += 2\n      elif not unicodedata.combining(uc):\n        width += 1\n    return width\n  else:\n    return len(line)\n\n\ndef CheckStyle(filename, clean_lines, linenum, file_extension, nesting_state,\n               error):\n  \"\"\"Checks rules from the 'C++ style rules' section of cppguide.html.\n\n  Most of these rules are hard to test (naming, comment style), but we\n  do what we can.  
In particular we check for 2-space indents, line lengths,\n  tab usage, spaces inside code, etc.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    file_extension: The extension (without the dot) of the filename.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: The function to call with any errors found.\n  \"\"\"\n\n  # Don't use \"elided\" lines here, otherwise we can't check commented lines.\n  # Don't want to use \"raw\" either, because we don't want to check inside C++11\n  # raw strings,\n  raw_lines = clean_lines.lines_without_raw_strings\n  line = raw_lines[linenum]\n\n  if line.find('\\t') != -1:\n    error(filename, linenum, 'whitespace/tab', 1,\n          'Tab found; better to use spaces')\n\n  # One or three blank spaces at the beginning of the line is weird; it's\n  # hard to reconcile that with 2-space indents.\n  # NOTE: here are the conditions rob pike used for his tests.  Mine aren't\n  # as sophisticated, but it may be worth becoming so:  RLENGTH==initial_spaces\n  # if(RLENGTH > 20) complain = 0;\n  # if(match($0, \" +(error|private|public|protected):\")) complain = 0;\n  # if(match(prev, \"&& *$\")) complain = 0;\n  # if(match(prev, \"\\\\|\\\\| *$\")) complain = 0;\n  # if(match(prev, \"[\\\",=><] *$\")) complain = 0;\n  # if(match($0, \" <<\")) complain = 0;\n  # if(match(prev, \" +for \\\\(\")) complain = 0;\n  # if(prevodd && match(prevprev, \" +for \\\\(\")) complain = 0;\n  initial_spaces = 0\n  cleansed_line = clean_lines.elided[linenum]\n  while initial_spaces < len(line) and line[initial_spaces] == ' ':\n    initial_spaces += 1\n  if line and line[-1].isspace():\n    error(filename, linenum, 'whitespace/end_of_line', 4,\n          'Line ends in whitespace.  
Consider deleting these extra spaces.')\n  # There are certain situations we allow one space, notably for section labels\n  elif ((initial_spaces == 1 or initial_spaces == 3) and\n        not Match(r'\\s*\\w+\\s*:\\s*$', cleansed_line)):\n    error(filename, linenum, 'whitespace/indent', 3,\n          'Weird number of spaces at line-start.  '\n          'Are you using a 2-space indent?')\n\n  # Check if the line is a header guard.\n  is_header_guard = False\n  if file_extension == 'h':\n    cppvar = GetHeaderGuardCPPVariable(filename)\n    if (line.startswith('#ifndef %s' % cppvar) or\n        line.startswith('#define %s' % cppvar) or\n        line.startswith('#endif  // %s' % cppvar)):\n      is_header_guard = True\n  # #include lines and header guards can be long, since there's no clean way to\n  # split them.\n  #\n  # URLs can be long too.  It's possible to split these, but it makes them\n  # harder to cut&paste.\n  #\n  # The \"$Id:...$\" comment may also get very long without it being the\n  # developers fault.\n  if (not line.startswith('#include') and not is_header_guard and\n      not Match(r'^\\s*//.*http(s?)://\\S*$', line) and\n      not Match(r'^// \\$Id:.*#[0-9]+ \\$$', line)):\n    line_width = GetLineWidth(line)\n    extended_length = int((_line_length * 1.25))\n    if line_width > extended_length:\n      error(filename, linenum, 'whitespace/line_length', 4,\n            'Lines should very rarely be longer than %i characters' %\n            extended_length)\n    elif line_width > _line_length:\n      error(filename, linenum, 'whitespace/line_length', 2,\n            'Lines should be <= %i characters long' % _line_length)\n\n  if (cleansed_line.count(';') > 1 and\n      # for loops are allowed two ;'s (and may run over two lines).\n      cleansed_line.find('for') == -1 and\n      (GetPreviousNonBlankLine(clean_lines, linenum)[0].find('for') == -1 or\n       GetPreviousNonBlankLine(clean_lines, linenum)[0].find(';') != -1) and\n      # It's ok to have 
many commands in a switch case that fits in 1 line\n      not ((cleansed_line.find('case ') != -1 or\n            cleansed_line.find('default:') != -1) and\n           cleansed_line.find('break;') != -1)):\n    error(filename, linenum, 'whitespace/newline', 0,\n          'More than one command on the same line')\n\n  # Some more style checks\n  CheckBraces(filename, clean_lines, linenum, error)\n  CheckEmptyBlockBody(filename, clean_lines, linenum, error)\n  CheckAccess(filename, clean_lines, linenum, nesting_state, error)\n  CheckSpacing(filename, clean_lines, linenum, nesting_state, error)\n  CheckCheck(filename, clean_lines, linenum, error)\n  CheckAltTokens(filename, clean_lines, linenum, error)\n  classinfo = nesting_state.InnermostClass()\n  if classinfo:\n    CheckSectionSpacing(filename, clean_lines, classinfo, linenum, error)\n\n\n_RE_PATTERN_INCLUDE_NEW_STYLE = re.compile(r'#include +\"[^/]+\\.h\"')\n_RE_PATTERN_INCLUDE = re.compile(r'^\\s*#\\s*include\\s*([<\"])([^>\"]*)[>\"].*$')\n# Matches the first component of a filename delimited by -s and _s. 
That is:\n#  _RE_FIRST_COMPONENT.match('foo').group(0) == 'foo'\n#  _RE_FIRST_COMPONENT.match('foo.cc').group(0) == 'foo'\n#  _RE_FIRST_COMPONENT.match('foo-bar_baz.cc').group(0) == 'foo'\n#  _RE_FIRST_COMPONENT.match('foo_bar-baz.cc').group(0) == 'foo'\n_RE_FIRST_COMPONENT = re.compile(r'^[^-_.]+')\n\n\ndef _DropCommonSuffixes(filename):\n  \"\"\"Drops common suffixes like _test.cc or -inl.h from filename.\n\n  For example:\n    >>> _DropCommonSuffixes('foo/foo-inl.h')\n    'foo/foo'\n    >>> _DropCommonSuffixes('foo/bar/foo.cc')\n    'foo/bar/foo'\n    >>> _DropCommonSuffixes('foo/foo_internal.h')\n    'foo/foo'\n    >>> _DropCommonSuffixes('foo/foo_unusualinternal.h')\n    'foo/foo_unusualinternal'\n\n  Args:\n    filename: The input filename.\n\n  Returns:\n    The filename with the common suffix removed.\n  \"\"\"\n  for suffix in ('test.cc', 'regtest.cc', 'unittest.cc',\n                 'inl.h', 'impl.h', 'internal.h'):\n    if (filename.endswith(suffix) and len(filename) > len(suffix) and\n        filename[-len(suffix) - 1] in ('-', '_')):\n      return filename[:-len(suffix) - 1]\n  return os.path.splitext(filename)[0]\n\n\ndef _IsTestFilename(filename):\n  \"\"\"Determines if the given filename has a suffix that identifies it as a test.\n\n  Args:\n    filename: The input filename.\n\n  Returns:\n    True if 'filename' looks like a test, False otherwise.\n  \"\"\"\n  if (filename.endswith('_test.cc') or\n      filename.endswith('_unittest.cc') or\n      filename.endswith('_regtest.cc')):\n    return True\n  else:\n    return False\n\n\ndef _ClassifyInclude(fileinfo, include, is_system):\n  \"\"\"Figures out what kind of header 'include' is.\n\n  Args:\n    fileinfo: The current file cpplint is running over. 
A FileInfo instance.\n    include: The path to a #included file.\n    is_system: True if the #include used <> rather than \"\".\n\n  Returns:\n    One of the _XXX_HEADER constants.\n\n  For example:\n    >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)\n    _C_SYS_HEADER\n    >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)\n    _CPP_SYS_HEADER\n    >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)\n    _LIKELY_MY_HEADER\n    >>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),\n    ...                  'bar/foo_other_ext.h', False)\n    _POSSIBLE_MY_HEADER\n    >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)\n    _OTHER_HEADER\n  \"\"\"\n  # This is a list of all standard c++ header files, except\n  # those already checked for above.\n  is_cpp_h = include in _CPP_HEADERS\n\n  if is_system:\n    if is_cpp_h:\n      return _CPP_SYS_HEADER\n    else:\n      return _C_SYS_HEADER\n\n  # If the target file and the include we're checking share a\n  # basename when we drop common extensions, and the include\n  # lives in . 
, then it's likely to be owned by the target file.\n  target_dir, target_base = (\n      os.path.split(_DropCommonSuffixes(fileinfo.RepositoryName())))\n  include_dir, include_base = os.path.split(_DropCommonSuffixes(include))\n  if target_base == include_base and (\n      include_dir == target_dir or\n      include_dir == os.path.normpath(target_dir + '/../public')):\n    return _LIKELY_MY_HEADER\n\n  # If the target and include share some initial basename\n  # component, it's possible the target is implementing the\n  # include, so it's allowed to be first, but we'll never\n  # complain if it's not there.\n  target_first_component = _RE_FIRST_COMPONENT.match(target_base)\n  include_first_component = _RE_FIRST_COMPONENT.match(include_base)\n  if (target_first_component and include_first_component and\n      target_first_component.group(0) ==\n      include_first_component.group(0)):\n    return _POSSIBLE_MY_HEADER\n\n  return _OTHER_HEADER\n\n\n\ndef CheckIncludeLine(filename, clean_lines, linenum, include_state, error):\n  \"\"\"Check rules that are applicable to #include lines.\n\n  Strings on #include lines are NOT removed from elided line, to make\n  certain tasks easier. However, to prevent false positives, checks\n  applicable to #include lines in CheckLanguage must be put here.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    include_state: An _IncludeState instance in which the headers are inserted.\n    error: The function to call with any errors found.\n  \"\"\"\n  fileinfo = FileInfo(filename)\n\n  line = clean_lines.lines[linenum]\n\n  # \"include\" should use the new style \"foo/bar.h\" instead of just \"bar.h\"\n  if _RE_PATTERN_INCLUDE_NEW_STYLE.search(line):\n    error(filename, linenum, 'build/include_dir', 4,\n          'Include the directory when naming .h files')\n\n  # we shouldn't include a file more than once. 
actually, there are a\n  # handful of instances where doing so is okay, but in general it's\n  # not.\n  match = _RE_PATTERN_INCLUDE.search(line)\n  if match:\n    include = match.group(2)\n    is_system = (match.group(1) == '<')\n    if include in include_state:\n      error(filename, linenum, 'build/include', 4,\n            '\"%s\" already included at %s:%s' %\n            (include, filename, include_state[include]))\n    else:\n      include_state[include] = linenum\n\n      # We want to ensure that headers appear in the right order:\n      # 1) for foo.cc, foo.h  (preferred location)\n      # 2) c system files\n      # 3) cpp system files\n      # 4) for foo.cc, foo.h  (deprecated location)\n      # 5) other google headers\n      #\n      # We classify each include statement as one of those 5 types\n      # using a number of techniques. The include_state object keeps\n      # track of the highest type seen, and complains if we see a\n      # lower type after that.\n      error_message = include_state.CheckNextIncludeOrder(\n          _ClassifyInclude(fileinfo, include, is_system))\n      if error_message:\n        error(filename, linenum, 'build/include_order', 4,\n              '%s. Should be: %s.h, c system, c++ system, other.' 
%\n              (error_message, fileinfo.BaseName()))\n      canonical_include = include_state.CanonicalizeAlphabeticalOrder(include)\n      if not include_state.IsInAlphabeticalOrder(\n          clean_lines, linenum, canonical_include):\n        error(filename, linenum, 'build/include_alpha', 4,\n              'Include \"%s\" not in alphabetical order' % include)\n      include_state.SetLastHeader(canonical_include)\n\n  # Look for any of the stream classes that are part of standard C++.\n  match = _RE_PATTERN_INCLUDE.match(line)\n  if match:\n    include = match.group(2)\n    if Match(r'(f|ind|io|i|o|parse|pf|stdio|str|)?stream$', include):\n      # Many unit tests use cout, so we exempt them.\n      if not _IsTestFilename(filename):\n        error(filename, linenum, 'readability/streams', 3,\n              'Streams are highly discouraged.')\n\n\ndef _GetTextInside(text, start_pattern):\n  r\"\"\"Retrieves all the text between matching open and close parentheses.\n\n  Given a string of lines and a regular expression string, retrieve all the text\n  following the expression and between opening punctuation symbols like\n  (, [, or {, and the matching close-punctuation symbol. This properly handles\n  nested occurrences of the punctuation, so for text like\n    printf(a(), b(c()));\n  a call to _GetTextInside(text, r'printf\\(') will return 'a(), b(c())'.\n  start_pattern must match a string having an open punctuation symbol at the end.\n\n  Args:\n    text: The lines to extract text. 
Its comments and strings must be elided.\n           It can be a single line or span multiple lines.\n    start_pattern: The regexp string indicating where to start extracting\n                   the text.\n  Returns:\n    The extracted text.\n    None if either the opening string or ending punctuation could not be found.\n  \"\"\"\n  # TODO(sugawarayu): Audit cpplint.py to see what places could be profitably\n  # rewritten to use _GetTextInside (and use inferior regexp matching today).\n\n  # Map each opening punctuation symbol to its matching close symbol.\n  matching_punctuation = {'(': ')', '{': '}', '[': ']'}\n  closing_punctuation = set(matching_punctuation.itervalues())\n\n  # Find the position to start extracting text.\n  match = re.search(start_pattern, text, re.M)\n  if not match:  # start_pattern not found in text.\n    return None\n  start_position = match.end(0)\n\n  assert start_position > 0, (\n      'start_pattern must end with an opening punctuation.')\n  assert text[start_position - 1] in matching_punctuation, (\n      'start_pattern must end with an opening punctuation.')\n  # Stack of closing punctuations we expect to have in text after position.\n  punctuation_stack = [matching_punctuation[text[start_position - 1]]]\n  position = start_position\n  while punctuation_stack and position < len(text):\n    if text[position] == punctuation_stack[-1]:\n      punctuation_stack.pop()\n    elif text[position] in closing_punctuation:\n      # A closing punctuation without matching opening punctuations.\n      return None\n    elif text[position] in matching_punctuation:\n      punctuation_stack.append(matching_punctuation[text[position]])\n    position += 1\n  if punctuation_stack:\n    # Opening punctuations left without matching close-punctuations.\n    return None\n  # All punctuation matched.\n  return text[start_position:position - 1]\n\n\n# Patterns for matching call-by-reference parameters.\n#\n# Supports nested templates up to 2 levels deep using 
this messy pattern:\n#   < (?: < (?: < [^<>]*\n#               >\n#           |   [^<>] )*\n#         >\n#     |   [^<>] )*\n#   >\n_RE_PATTERN_IDENT = r'[_a-zA-Z]\\w*'  # =~ [[:alpha:]][[:alnum:]]*\n_RE_PATTERN_TYPE = (\n    r'(?:const\\s+)?(?:typename\\s+|class\\s+|struct\\s+|union\\s+|enum\\s+)?'\n    r'(?:\\w|'\n    r'\\s*<(?:<(?:<[^<>]*>|[^<>])*>|[^<>])*>|'\n    r'::)+')\n# A call-by-reference parameter ends with '& identifier'.\n_RE_PATTERN_REF_PARAM = re.compile(\n    r'(' + _RE_PATTERN_TYPE + r'(?:\\s*(?:\\bconst\\b|[*]))*\\s*'\n    r'&\\s*' + _RE_PATTERN_IDENT + r')\\s*(?:=[^,()]+)?[,)]')\n# A call-by-const-reference parameter either ends with 'const& identifier'\n# or looks like 'const type& identifier' when 'type' is atomic.\n_RE_PATTERN_CONST_REF_PARAM = (\n    r'(?:.*\\s*\\bconst\\s*&\\s*' + _RE_PATTERN_IDENT +\n    r'|const\\s+' + _RE_PATTERN_TYPE + r'\\s*&\\s*' + _RE_PATTERN_IDENT + r')')\n\n\ndef CheckLanguage(filename, clean_lines, linenum, file_extension,\n                  include_state, nesting_state, error):\n  \"\"\"Checks rules from the 'C++ language rules' section of cppguide.html.\n\n  Some of these rules are hard to test (function overloading, using\n  uint32 inappropriately), but we do the best we can.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    file_extension: The extension (without the dot) of the filename.\n    include_state: An _IncludeState instance in which the headers are inserted.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: The function to call with any errors found.\n  \"\"\"\n  # If the line is empty or consists of entirely a comment, no need to\n  # check it.\n  line = clean_lines.elided[linenum]\n  if not line:\n    return\n\n  match = _RE_PATTERN_INCLUDE.search(line)\n  if match:\n    
CheckIncludeLine(filename, clean_lines, linenum, include_state, error)\n    return\n\n  # Reset include state across preprocessor directives.  This is meant\n  # to silence warnings for conditional includes.\n  if Match(r'^\\s*#\\s*(?:ifdef|elif|else|endif)\\b', line):\n    include_state.ResetSection()\n\n  # Make Windows paths like Unix.\n  fullname = os.path.abspath(filename).replace('\\\\', '/')\n\n  # TODO(unknown): figure out if they're using default arguments in fn proto.\n\n  # Check to see if they're using a conversion function cast.\n  # I just try to capture the most common basic types, though there are more.\n  # Parameterless conversion functions, such as bool(), are allowed as they are\n  # probably a member operator declaration or default constructor.\n  match = Search(\n      r'(\\bnew\\s+)?\\b'  # Grab 'new' operator, if it's there\n      r'(int|float|double|bool|char|int32|uint32|int64|uint64)'\n      r'(\\([^)].*)', line)\n  if match:\n    matched_new = match.group(1)\n    matched_type = match.group(2)\n    matched_funcptr = match.group(3)\n\n    # gMock methods are defined using some variant of MOCK_METHODx(name, type)\n    # where type may be float(), int(string), etc.  Without context they are\n    # virtually indistinguishable from int(x) casts. 
Likewise, gMock's\n    # MockCallback takes a template parameter of the form return_type(arg_type),\n    # which looks much like the cast we're trying to detect.\n    #\n    # std::function<> wrapper has a similar problem.\n    #\n    # Return types for function pointers also look like casts if they\n    # don't have an extra space.\n    if (matched_new is None and  # If new operator, then this isn't a cast\n        not (Match(r'^\\s*MOCK_(CONST_)?METHOD\\d+(_T)?\\(', line) or\n             Search(r'\\bMockCallback<.*>', line) or\n             Search(r'\\bstd::function<.*>', line)) and\n        not (matched_funcptr and\n             Match(r'\\((?:[^() ]+::\\s*\\*\\s*)?[^() ]+\\)\\s*\\(',\n                   matched_funcptr))):\n      # Try a bit harder to catch gmock lines: the only place where\n      # something looks like an old-style cast is where we declare the\n      # return type of the mocked method, and the only time when we\n      # are missing context is if MOCK_METHOD was split across\n      # multiple lines.  The missing MOCK_METHOD is usually one or two\n      # lines back, so scan back one or two lines.\n      #\n      # It's not possible for gmock macros to appear in the first 2\n      # lines, since the class head + section name takes up 2 lines.\n      if (linenum < 2 or\n          not (Match(r'^\\s*MOCK_(?:CONST_)?METHOD\\d+(?:_T)?\\((?:\\S+,)?\\s*$',\n                     clean_lines.elided[linenum - 1]) or\n               Match(r'^\\s*MOCK_(?:CONST_)?METHOD\\d+(?:_T)?\\(\\s*$',\n                     clean_lines.elided[linenum - 2]))):\n        error(filename, linenum, 'readability/casting', 4,\n              'Using deprecated casting style.  '\n              'Use static_cast<%s>(...) 
instead' %\n              matched_type)\n\n  CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],\n                  'static_cast',\n                  r'\\((int|float|double|bool|char|u?int(16|32|64))\\)', error)\n\n  # This doesn't catch all cases. Consider (const char * const)\"hello\".\n  #\n  # (char *) \"foo\" should always be a const_cast (reinterpret_cast won't\n  # compile).\n  if CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],\n                     'const_cast', r'\\((char\\s?\\*+\\s?)\\)\\s*\"', error):\n    pass\n  else:\n    # Check pointer casts for other than string constants\n    CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],\n                    'reinterpret_cast', r'\\((\\w+\\s?\\*+\\s?)\\)', error)\n\n  # In addition, we look for people taking the address of a cast.  This\n  # is dangerous -- casts can assign to temporaries, so the pointer doesn't\n  # point where you think.\n  match = Search(\n      r'(?:&\\(([^)]+)\\)[\\w(])|'\n      r'(?:&(static|dynamic|down|reinterpret)_cast\\b)', line)\n  if match and match.group(1) != '*':\n    error(filename, linenum, 'runtime/casting', 4,\n          ('Are you taking an address of a cast?  '\n           'This is dangerous: could be a temp var.  
'\n           'Take the address before doing the cast, rather than after'))\n\n  # Create an extended_line, which is the concatenation of the current and\n  # next lines, for more effective checking of code that may span more than one\n  # line.\n  if linenum + 1 < clean_lines.NumLines():\n    extended_line = line + clean_lines.elided[linenum + 1]\n  else:\n    extended_line = line\n\n  # Check for people declaring static/global STL strings at the top level.\n  # This is dangerous because the C++ language does not guarantee that\n  # globals with constructors are initialized before the first access.\n  match = Match(\n      r'((?:|static +)(?:|const +))string +([a-zA-Z0-9_:]+)\\b(.*)',\n      line)\n  # Make sure it's not a function.\n  # Function template specialization looks like: \"string foo<Type>(...\".\n  # Class template definitions look like: \"string Foo<Type>::Method(...\".\n  #\n  # Also ignore things that look like operators.  These are matched separately\n  # because operator names cross non-word boundaries.  If we change the pattern\n  # above, we would decrease the accuracy of matching identifiers.\n  if (match and\n      not Search(r'\\boperator\\W', line) and\n      not Match(r'\\s*(<.*>)?(::[a-zA-Z0-9_]+)?\\s*\\(([^\"]|$)', match.group(3))):\n    error(filename, linenum, 'runtime/string', 4,\n          'For a static/global string constant, use a C style string instead: '\n          '\"%schar %s[]\".' 
%\n          (match.group(1), match.group(2)))\n\n  if Search(r'\\b([A-Za-z0-9_]*_)\\(\\1\\)', line):\n    error(filename, linenum, 'runtime/init', 4,\n          'You seem to be initializing a member variable with itself.')\n\n  if file_extension == 'h':\n    # TODO(unknown): check that 1-arg constructors are explicit.\n    #                How to tell it's a constructor?\n    #                (handled in CheckForNonStandardConstructs for now)\n    # TODO(unknown): check that classes have DISALLOW_EVIL_CONSTRUCTORS\n    #                (level 1 error)\n    pass\n\n  # Check if people are using the verboten C basic types.  The only exception\n  # we regularly allow is \"unsigned short port\" for port.\n  if Search(r'\\bshort port\\b', line):\n    if not Search(r'\\bunsigned short port\\b', line):\n      error(filename, linenum, 'runtime/int', 4,\n            'Use \"unsigned short\" for ports, not \"short\"')\n  else:\n    match = Search(r'\\b(short|long(?! +double)|long long)\\b', line)\n    if match:\n      error(filename, linenum, 'runtime/int', 4,\n            'Use int16/int64/etc, rather than the C type %s' % match.group(1))\n\n  # When snprintf is used, the second argument shouldn't be a literal.\n  match = Search(r'snprintf\\s*\\(([^,]*),\\s*([0-9]*)\\s*,', line)\n  if match and match.group(2) != '0':\n    # If 2nd arg is zero, snprintf is used to calculate size.\n    error(filename, linenum, 'runtime/printf', 3,\n          'If you can, use sizeof(%s) instead of %s as the 2nd arg '\n          'to snprintf.' % (match.group(1), match.group(2)))\n\n  # Check if some verboten C functions are being used.\n  if Search(r'\\bsprintf\\b', line):\n    error(filename, linenum, 'runtime/printf', 5,\n          'Never use sprintf.  
Use snprintf instead.')\n  match = Search(r'\\b(strcpy|strcat)\\b', line)\n  if match:\n    error(filename, linenum, 'runtime/printf', 4,\n          'Almost always, snprintf is better than %s' % match.group(1))\n\n  # Check if some verboten operator overloading is going on\n  # TODO(unknown): catch out-of-line unary operator&:\n  #   class X {};\n  #   int operator&(const X& x) { return 42; }  // unary operator&\n  # The trick is it's hard to tell apart from binary operator&:\n  #   class Y { int operator&(const Y& x) { return 23; } }; // binary operator&\n  if Search(r'\\boperator\\s*&\\s*\\(\\s*\\)', line):\n    error(filename, linenum, 'runtime/operator', 4,\n          'Unary operator& is dangerous.  Do not use it.')\n\n  # Check for suspicious usage of \"if\" like\n  # } if (a == b) {\n  if Search(r'\\}\\s*if\\s*\\(', line):\n    error(filename, linenum, 'readability/braces', 4,\n          'Did you mean \"else if\"? If not, start a new line for \"if\".')\n\n  # Check for potential format string bugs like printf(foo).\n  # We constrain the pattern not to pick things like DocidForPrintf(foo).\n  # Not perfect but it can catch printf(foo.c_str()) and printf(foo->c_str())\n  # TODO(sugawarayu): Catch the following case. Need to change the calling\n  # convention of the whole function to process multiple line to handle it.\n  #   printf(\n  #       boy_this_is_a_really_long_variable_that_cannot_fit_on_the_prev_line);\n  printf_args = _GetTextInside(line, r'(?i)\\b(string)?printf\\s*\\(')\n  if printf_args:\n    match = Match(r'([\\w.\\->()]+)$', printf_args)\n    if match and match.group(1) != '__VA_ARGS__':\n      function_name = re.search(r'\\b((?:string)?printf)\\s*\\(',\n                                line, re.I).group(1)\n      error(filename, linenum, 'runtime/printf', 4,\n            'Potential format string bug. 
Do %s(\"%%s\", %s) instead.'\n            % (function_name, match.group(1)))\n\n  # Check for potential memset bugs like memset(buf, sizeof(buf), 0).\n  match = Search(r'memset\\s*\\(([^,]*),\\s*([^,]*),\\s*0\\s*\\)', line)\n  if match and not Match(r\"^''|-?[0-9]+|0x[0-9A-Fa-f]$\", match.group(2)):\n    error(filename, linenum, 'runtime/memset', 4,\n          'Did you mean \"memset(%s, 0, %s)\"?'\n          % (match.group(1), match.group(2)))\n\n  if Search(r'\\busing namespace\\b', line):\n    error(filename, linenum, 'build/namespaces', 5,\n          'Do not use namespace using-directives.  '\n          'Use using-declarations instead.')\n\n  # Detect variable-length arrays.\n  match = Match(r'\\s*(.+::)?(\\w+) [a-z]\\w*\\[(.+)];', line)\n  if (match and match.group(2) != 'return' and match.group(2) != 'delete' and\n      match.group(3).find(']') == -1):\n    # Split the size using space and arithmetic operators as delimiters.\n    # If any of the resulting tokens are not compile time constants then\n    # report the error.\n    tokens = re.split(r'\\s|\\+|\\-|\\*|\\/|<<|>>]', match.group(3))\n    is_const = True\n    skip_next = False\n    for tok in tokens:\n      if skip_next:\n        skip_next = False\n        continue\n\n      if Search(r'sizeof\\(.+\\)', tok): continue\n      if Search(r'arraysize\\(\\w+\\)', tok): continue\n\n      tok = tok.lstrip('(')\n      tok = tok.rstrip(')')\n      if not tok: continue\n      if Match(r'\\d+', tok): continue\n      if Match(r'0[xX][0-9a-fA-F]+', tok): continue\n      if Match(r'k[A-Z0-9]\\w*', tok): continue\n      if Match(r'(.+::)?k[A-Z0-9]\\w*', tok): continue\n      if Match(r'(.+::)?[A-Z][A-Z0-9_]*', tok): continue\n      # A catch all for tricky sizeof cases, including 'sizeof expression',\n      # 'sizeof(*type)', 'sizeof(const type)', 'sizeof(struct StructName)'\n      # requires skipping the next token because we split on ' ' and '*'.\n      if tok.startswith('sizeof'):\n        skip_next = True\n        
continue\n      is_const = False\n      break\n    if not is_const:\n      error(filename, linenum, 'runtime/arrays', 1,\n            'Do not use variable-length arrays.  Use an appropriately named '\n            \"('k' followed by CamelCase) compile-time constant for the size.\")\n\n  # If DISALLOW_EVIL_CONSTRUCTORS, DISALLOW_COPY_AND_ASSIGN, or\n  # DISALLOW_IMPLICIT_CONSTRUCTORS is present, then it should be the last thing\n  # in the class declaration.\n  match = Match(\n      (r'\\s*'\n       r'(DISALLOW_(EVIL_CONSTRUCTORS|COPY_AND_ASSIGN|IMPLICIT_CONSTRUCTORS))'\n       r'\\(.*\\);$'),\n      line)\n  if match and linenum + 1 < clean_lines.NumLines():\n    next_line = clean_lines.elided[linenum + 1]\n    # We allow some, but not all, declarations of variables to be present\n    # in the statement that defines the class.  The [\\w\\*,\\s]* fragment of\n    # the regular expression below allows users to declare instances of\n    # the class or pointers to instances, but not less common types such\n    # as function pointers or arrays.  It's a tradeoff between allowing\n    # reasonable code and avoiding trying to parse more C++ using regexps.\n    if not Search(r'^\\s*}[\\w\\*,\\s]*;', next_line):\n      error(filename, linenum, 'readability/constructors', 3,\n            match.group(1) + ' should be the last thing in the class')\n\n  # Check for use of unnamed namespaces in header files.  Registration\n  # macros are typically OK, so we allow use of \"namespace {\" on lines\n  # that end with backslashes.\n  if (file_extension == 'h'\n      and Search(r'\\bnamespace\\s*{', line)\n      and line[-1] != '\\\\'):\n    error(filename, linenum, 'build/namespaces', 4,\n          'Do not use unnamed namespaces in header files.  
See '\n          'http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Namespaces'\n          ' for more information.')\n\n\ndef CheckForNonConstReference(filename, clean_lines, linenum,\n                              nesting_state, error):\n  \"\"\"Check for non-const references.\n\n  Separate from CheckLanguage since it scans backwards from current\n  line, instead of scanning forward.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: The function to call with any errors found.\n  \"\"\"\n  # Do nothing if there is no '&' on current line.\n  line = clean_lines.elided[linenum]\n  if '&' not in line:\n    return\n\n  # Long type names may be broken across multiple lines, usually in one\n  # of these forms:\n  #   LongType\n  #       ::LongTypeContinued &identifier\n  #   LongType::\n  #       LongTypeContinued &identifier\n  #   LongType<\n  #       ...>::LongTypeContinued &identifier\n  #\n  # If we detected a type split across two lines, join the previous\n  # line to current line so that we can match const references\n  # accordingly.\n  #\n  # Note that this only scans back one line, since scanning back an\n  # arbitrary number of lines would be expensive.  
If you have a type\n  # that spans more than 2 lines, please use a typedef.\n  if linenum > 1:\n    previous = None\n    if Match(r'\\s*::(?:[\\w<>]|::)+\\s*&\\s*\\S', line):\n      # previous_line\\n + ::current_line\n      previous = Search(r'\\b((?:const\\s*)?(?:[\\w<>]|::)+[\\w<>])\\s*$',\n                        clean_lines.elided[linenum - 1])\n    elif Match(r'\\s*[a-zA-Z_]([\\w<>]|::)+\\s*&\\s*\\S', line):\n      # previous_line::\\n + current_line\n      previous = Search(r'\\b((?:const\\s*)?(?:[\\w<>]|::)+::)\\s*$',\n                        clean_lines.elided[linenum - 1])\n    if previous:\n      line = previous.group(1) + line.lstrip()\n    else:\n      # Check for templated parameter that is split across multiple lines\n      endpos = line.rfind('>')\n      if endpos > -1:\n        (_, startline, startpos) = ReverseCloseExpression(\n            clean_lines, linenum, endpos)\n        if startpos > -1 and startline < linenum:\n          # Found the matching < on an earlier line, collect all\n          # pieces up to current line.\n          line = ''\n          for i in xrange(startline, linenum + 1):\n            line += clean_lines.elided[i].strip()\n\n  # Check for non-const references in function parameters.  
A single '&' may be\n  # found in the following places:\n  #   inside expression: binary & for bitwise AND\n  #   inside expression: unary & for taking the address of something\n  #   inside declarators: reference parameter\n  # We will exclude the first two cases by checking that we are not inside a\n  # function body, including one that was just introduced by a trailing '{'.\n  # TODO(unknown): Doesn't account for preprocessor directives.\n  # TODO(unknown): Doesn't account for 'catch(Exception& e)' [rare].\n  check_params = False\n  if not nesting_state.stack:\n    check_params = True  # top level\n  elif (isinstance(nesting_state.stack[-1], _ClassInfo) or\n        isinstance(nesting_state.stack[-1], _NamespaceInfo)):\n    check_params = True  # within class or namespace\n  elif Match(r'.*{\\s*$', line):\n    if (len(nesting_state.stack) == 1 or\n        isinstance(nesting_state.stack[-2], _ClassInfo) or\n        isinstance(nesting_state.stack[-2], _NamespaceInfo)):\n      check_params = True  # just opened global/class/namespace block\n  # We allow non-const references in a few standard places, like functions\n  # called \"swap()\" or iostream operators like \"<<\" or \">>\".  Do not check\n  # those function parameters.\n  #\n  # We also accept & in static_assert, which looks like a function but\n  # it's actually a declaration expression.\n  whitelisted_functions = (r'(?:[sS]wap(?:<\\w:+>)?|'\n                           r'operator\\s*[<>][<>]|'\n                           r'static_assert|COMPILE_ASSERT'\n                           r')\\s*\\(')\n  if Search(whitelisted_functions, line):\n    check_params = False\n  elif not Search(r'\\S+\\([^)]*$', line):\n    # Don't see a whitelisted function on this line.  Actually we\n    # didn't see any function name on this line, so this is likely a\n    # multi-line parameter list.  
Try a bit harder to catch this case.\n    for i in xrange(2):\n      if (linenum > i and\n          Search(whitelisted_functions, clean_lines.elided[linenum - i - 1])):\n        check_params = False\n        break\n\n  if check_params:\n    decls = ReplaceAll(r'{[^}]*}', ' ', line)  # exclude function body\n    for parameter in re.findall(_RE_PATTERN_REF_PARAM, decls):\n      if not Match(_RE_PATTERN_CONST_REF_PARAM, parameter):\n        error(filename, linenum, 'runtime/references', 2,\n              'Is this a non-const reference? '\n              'If so, make const or use a pointer: ' +\n              ReplaceAll(' *<', '<', parameter))\n\n\ndef CheckCStyleCast(filename, linenum, line, raw_line, cast_type, pattern,\n                    error):\n  \"\"\"Checks for a C-style cast by looking for the pattern.\n\n  Args:\n    filename: The name of the current file.\n    linenum: The number of the line to check.\n    line: The line of code to check.\n    raw_line: The raw line of code to check, with comments.\n    cast_type: The string for the C++ cast to recommend.  This is either\n      reinterpret_cast, static_cast, or const_cast, depending.\n    pattern: The regular expression used to find C-style casts.\n    error: The function to call with any errors found.\n\n  Returns:\n    True if an error was emitted.\n    False otherwise.\n  \"\"\"\n  match = Search(pattern, line)\n  if not match:\n    return False\n\n  # Exclude lines with sizeof, since sizeof looks like a cast.\n  sizeof_match = Match(r'.*sizeof\\s*$', line[0:match.start(1) - 1])\n  if sizeof_match:\n    return False\n\n  # operator++(int) and operator--(int)\n  if (line[0:match.start(1) - 1].endswith(' operator++') or\n      line[0:match.start(1) - 1].endswith(' operator--')):\n    return False\n\n  # A single unnamed argument for a function tends to look like old\n  # style cast.  
If we see those, don't issue warnings for deprecated\n  # casts; instead, issue warnings for unnamed arguments where\n  # appropriate.\n  #\n  # These are things that we want warnings for, since the style guide\n  # explicitly requires all parameters to be named:\n  #   Function(int);\n  #   Function(int) {\n  #   ConstMember(int) const;\n  #   ConstMember(int) const {\n  #   ExceptionMember(int) throw (...);\n  #   ExceptionMember(int) throw (...) {\n  #   PureVirtual(int) = 0;\n  #\n  # These are functions of some sort, where the compiler would be fine\n  # if they had named parameters, but people often omit those\n  # identifiers to reduce clutter:\n  #   (FunctionPointer)(int);\n  #   (FunctionPointer)(int) = value;\n  #   Function((function_pointer_arg)(int))\n  #   <TemplateArgument(int)>;\n  #   <(FunctionPointerTemplateArgument)(int)>;\n  remainder = line[match.end(0):]\n  if Match(r'^\\s*(?:;|const\\b|throw\\b|=|>|\\{|\\))', remainder):\n    # Looks like an unnamed parameter.\n\n    # Don't warn on any kind of template arguments.\n    if Match(r'^\\s*>', remainder):\n      return False\n\n    # Don't warn on assignments to function pointers, but keep warnings for\n    # unnamed parameters to pure virtual functions.  Note that this pattern\n    # will also pass on assignments of \"0\" to function pointers, but the\n    # preferred values for those would be \"nullptr\" or \"NULL\".\n    matched_zero = Match(r'^\\s*=\\s*(\\S+)\\s*;', remainder)\n    if matched_zero and matched_zero.group(1) != '0':\n      return False\n\n    # Don't warn on function pointer declarations.  
For this we need\n    # to check what came before the \"(type)\" string.\n    if Match(r'.*\\)\\s*$', line[0:match.start(0)]):\n      return False\n\n    # Don't warn if the parameter is named with block comments, e.g.:\n    #  Function(int /*unused_param*/);\n    if '/*' in raw_line:\n      return False\n\n    # Passed all filters, issue warning here.\n    error(filename, linenum, 'readability/function', 3,\n          'All parameters should be named in a function')\n    return True\n\n  # At this point, all that should be left is actual casts.\n  error(filename, linenum, 'readability/casting', 4,\n        'Using C-style cast.  Use %s<%s>(...) instead' %\n        (cast_type, match.group(1)))\n\n  return True\n\n\n_HEADERS_CONTAINING_TEMPLATES = (\n    ('<deque>', ('deque',)),\n    ('<functional>', ('unary_function', 'binary_function',\n                      'plus', 'minus', 'multiplies', 'divides', 'modulus',\n                      'negate',\n                      'equal_to', 'not_equal_to', 'greater', 'less',\n                      'greater_equal', 'less_equal',\n                      'logical_and', 'logical_or', 'logical_not',\n                      'unary_negate', 'not1', 'binary_negate', 'not2',\n                      'bind1st', 'bind2nd',\n                      'pointer_to_unary_function',\n                      'pointer_to_binary_function',\n                      'ptr_fun',\n                      'mem_fun_t', 'mem_fun', 'mem_fun1_t', 'mem_fun1_ref_t',\n                      'mem_fun_ref_t',\n                      'const_mem_fun_t', 'const_mem_fun1_t',\n                      'const_mem_fun_ref_t', 'const_mem_fun1_ref_t',\n                      'mem_fun_ref',\n                     )),\n    ('<limits>', ('numeric_limits',)),\n    ('<list>', ('list',)),\n    ('<map>', ('map', 'multimap',)),\n    ('<memory>', ('allocator',)),\n    ('<queue>', ('queue', 'priority_queue',)),\n    ('<set>', ('set', 'multiset',)),\n    ('<stack>', ('stack',)),\n    ('<string>', 
('char_traits', 'basic_string',)),\n    ('<utility>', ('pair',)),\n    ('<vector>', ('vector',)),\n\n    # gcc extensions.\n    # Note: std::hash is their hash, ::hash is our hash\n    ('<hash_map>', ('hash_map', 'hash_multimap',)),\n    ('<hash_set>', ('hash_set', 'hash_multiset',)),\n    ('<slist>', ('slist',)),\n    )\n\n_RE_PATTERN_STRING = re.compile(r'\\bstring\\b')\n\n_re_pattern_algorithm_header = []\nfor _template in ('copy', 'max', 'min', 'min_element', 'sort', 'swap',\n                  'transform'):\n  # Match max<type>(..., ...), max(..., ...), but not foo->max, foo.max or\n  # type::max().\n  _re_pattern_algorithm_header.append(\n      (re.compile(r'[^>.]\\b' + _template + r'(<.*?>)?\\([^\\)]'),\n       _template,\n       '<algorithm>'))\n\n_re_pattern_templates = []\nfor _header, _templates in _HEADERS_CONTAINING_TEMPLATES:\n  for _template in _templates:\n    _re_pattern_templates.append(\n        (re.compile(r'(\\<|\\b)' + _template + r'\\s*\\<'),\n         _template + '<>',\n         _header))\n\n\ndef FilesBelongToSameModule(filename_cc, filename_h):\n  \"\"\"Check if these two filenames belong to the same module.\n\n  The concept of a 'module' here is as follows:\n  foo.h, foo-inl.h, foo.cc, foo_test.cc and foo_unittest.cc belong to the\n  same 'module' if they are in the same directory.\n  some/path/public/xyzzy and some/path/internal/xyzzy are also considered\n  to belong to the same module here.\n\n  If the filename_cc contains a longer path than the filename_h, for example,\n  '/absolute/path/to/base/sysinfo.cc', and this file would include\n  'base/sysinfo.h', this function also produces the prefix needed to open the\n  header. This is used by the caller of this function to more robustly open the\n  header file. We don't have access to the real include paths in this context,\n  so we need this guesswork here.\n\n  Known bugs: tools/base/bar.cc and base/bar.h belong to the same module\n  according to this implementation. 
Because of this, this function gives\n  some false positives. This should be sufficiently rare in practice.\n\n  Args:\n    filename_cc: is the path for the .cc file\n    filename_h: is the path for the header file\n\n  Returns:\n    Tuple with a bool and a string:\n    bool: True if filename_cc and filename_h belong to the same module.\n    string: the additional prefix needed to open the header file.\n  \"\"\"\n\n  if not filename_cc.endswith('.cc'):\n    return (False, '')\n  filename_cc = filename_cc[:-len('.cc')]\n  if filename_cc.endswith('_unittest'):\n    filename_cc = filename_cc[:-len('_unittest')]\n  elif filename_cc.endswith('_test'):\n    filename_cc = filename_cc[:-len('_test')]\n  filename_cc = filename_cc.replace('/public/', '/')\n  filename_cc = filename_cc.replace('/internal/', '/')\n\n  if not filename_h.endswith('.h'):\n    return (False, '')\n  filename_h = filename_h[:-len('.h')]\n  if filename_h.endswith('-inl'):\n    filename_h = filename_h[:-len('-inl')]\n  filename_h = filename_h.replace('/public/', '/')\n  filename_h = filename_h.replace('/internal/', '/')\n\n  files_belong_to_same_module = filename_cc.endswith(filename_h)\n  common_path = ''\n  if files_belong_to_same_module:\n    common_path = filename_cc[:-len(filename_h)]\n  return files_belong_to_same_module, common_path\n\n\ndef UpdateIncludeState(filename, include_state, io=codecs):\n  \"\"\"Fill up the include_state with new includes found from the file.\n\n  Args:\n    filename: the name of the header to read.\n    include_state: an _IncludeState instance in which the headers are inserted.\n    io: The io factory to use to read the file. Provided for testability.\n\n  Returns:\n    True if a header was successfully added. 
False otherwise.\n  \"\"\"\n  headerfile = None\n  try:\n    headerfile = io.open(filename, 'r', 'utf8', 'replace')\n  except IOError:\n    return False\n  linenum = 0\n  for line in headerfile:\n    linenum += 1\n    clean_line = CleanseComments(line)\n    match = _RE_PATTERN_INCLUDE.search(clean_line)\n    if match:\n      include = match.group(2)\n      # The value formatting is cute, but not really used right now.\n      # What matters here is that the key is in include_state.\n      include_state.setdefault(include, '%s:%d' % (filename, linenum))\n  return True\n\n\ndef CheckForIncludeWhatYouUse(filename, clean_lines, include_state, error,\n                              io=codecs):\n  \"\"\"Reports missing STL includes.\n\n  This function will output warnings to make sure you are including the headers\n  necessary for the STL containers and functions that you use. We only give one\n  reason to include a header. For example, if you use both equal_to<> and\n  less<> in a .h file, only one (the latter in the file) of these will be\n  reported as a reason to include <functional>.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    include_state: An _IncludeState instance.\n    error: The function to call with any errors found.\n    io: The IO factory to use to read the header file. 
Provided for unittest\n        injection.\n  \"\"\"\n  required = {}  # A map of header name to line number and the template entity.\n                 # Example of required: { '<functional>': (1219, 'less<>') }\n\n  for linenum in xrange(clean_lines.NumLines()):\n    line = clean_lines.elided[linenum]\n    if not line or line[0] == '#':\n      continue\n\n    # String is special -- it is a non-templatized type in STL.\n    matched = _RE_PATTERN_STRING.search(line)\n    if matched:\n      # Don't warn about strings in non-STL namespaces:\n      # (We check only the first match per line; good enough.)\n      prefix = line[:matched.start()]\n      if prefix.endswith('std::') or not prefix.endswith('::'):\n        required['<string>'] = (linenum, 'string')\n\n    for pattern, template, header in _re_pattern_algorithm_header:\n      if pattern.search(line):\n        required[header] = (linenum, template)\n\n    # The following check is just a speed-up; no semantics are changed.\n    if '<' not in line:  # Reduces the CPU time usage by skipping lines.\n      continue\n\n    for pattern, template, header in _re_pattern_templates:\n      if pattern.search(line):\n        required[header] = (linenum, template)\n\n  # The policy is that if you #include something in foo.h you don't need to\n  # include it again in foo.cc. Here, we will look at possible includes.\n  # Let's copy the include_state so it is only messed up within this function.\n  include_state = include_state.copy()\n\n  # Did we find the header for this file (if any) and successfully load it?\n  header_found = False\n\n  # Use the absolute path so that matching works properly.\n  abs_filename = FileInfo(filename).FullName()\n\n  # For Emacs's flymake.\n  # If cpplint is invoked from Emacs's flymake, a temporary file is generated\n  # by flymake and that file name might end with '_flymake.cc'. In that case,\n  # restore original file name here so that the corresponding header file can be\n  # found.\n  # e.g. 
If the file name is 'foo_flymake.cc', we should search for 'foo.h'\n  # instead of 'foo_flymake.h'\n  abs_filename = re.sub(r'_flymake\\.cc$', '.cc', abs_filename)\n\n  # include_state is modified during iteration, so we iterate over a copy of\n  # the keys.\n  header_keys = include_state.keys()\n  for header in header_keys:\n    (same_module, common_path) = FilesBelongToSameModule(abs_filename, header)\n    fullpath = common_path + header\n    if same_module and UpdateIncludeState(fullpath, include_state, io):\n      header_found = True\n\n  # If we can't find the header file for a .cc, assume it's because we don't\n  # know where to look. In that case we'll give up as we're not sure they\n  # didn't include it in the .h file.\n  # TODO(unknown): Do a better job of finding .h files so we are confident that\n  # not having the .h file means there isn't one.\n  if filename.endswith('.cc') and not header_found:\n    return\n\n  # All the lines have been processed, report the errors found.\n  for required_header_unstripped in required:\n    template = required[required_header_unstripped][1]\n    if required_header_unstripped.strip('<>\"') not in include_state:\n      error(filename, required[required_header_unstripped][0],\n            'build/include_what_you_use', 4,\n            'Add #include ' + required_header_unstripped + ' for ' + template)\n\n\n_RE_PATTERN_EXPLICIT_MAKEPAIR = re.compile(r'\\bmake_pair\\s*<')\n\n\ndef CheckMakePairUsesDeduction(filename, clean_lines, linenum, error):\n  \"\"\"Check that make_pair's template arguments are deduced.\n\n  G++ 4.6 in C++0x mode fails badly if make_pair's template arguments are\n  specified explicitly, and such use isn't intended in any case.\n\n  Args:\n    filename: The name of the current file.\n    clean_lines: A CleansedLines instance containing the file.\n    linenum: The number of the line to check.\n    error: The function to call with any errors found.\n  \"\"\"\n  line = clean_lines.elided[linenum]\n  match 
= _RE_PATTERN_EXPLICIT_MAKEPAIR.search(line)\n  if match:\n    error(filename, linenum, 'build/explicit_make_pair',\n          4,  # 4 = high confidence\n          'For C++11-compatibility, omit template arguments from make_pair'\n          ' OR use pair directly OR if appropriate, construct a pair directly')\n\n\ndef ProcessLine(filename, file_extension, clean_lines, line,\n                include_state, function_state, nesting_state, error,\n                extra_check_functions=[]):\n  \"\"\"Processes a single line in the file.\n\n  Args:\n    filename: Filename of the file that is being processed.\n    file_extension: The extension (dot not included) of the file.\n    clean_lines: An array of strings, each representing a line of the file,\n                 with comments stripped.\n    line: Number of line being processed.\n    include_state: An _IncludeState instance in which the headers are inserted.\n    function_state: A _FunctionState instance which counts function lines, etc.\n    nesting_state: A _NestingState instance which maintains information about\n                   the current stack of nested blocks being parsed.\n    error: A callable to which errors are reported, which takes 5 arguments:\n           filename, line number, category, confidence, and message\n    extra_check_functions: An array of additional check functions that will be\n                           run on each source line. 
Each function takes 4\n                           arguments: filename, clean_lines, line, error\n  \"\"\"\n  raw_lines = clean_lines.raw_lines\n  ParseNolintSuppressions(filename, raw_lines[line], line, error)\n  nesting_state.Update(filename, clean_lines, line, error)\n  if nesting_state.stack and nesting_state.stack[-1].inline_asm != _NO_ASM:\n    return\n  CheckForFunctionLengths(filename, clean_lines, line, function_state, error)\n  CheckForMultilineCommentsAndStrings(filename, clean_lines, line, error)\n  CheckStyle(filename, clean_lines, line, file_extension, nesting_state, error)\n  CheckLanguage(filename, clean_lines, line, file_extension, include_state,\n                nesting_state, error)\n  CheckForNonConstReference(filename, clean_lines, line, nesting_state, error)\n  CheckForNonStandardConstructs(filename, clean_lines, line,\n                                nesting_state, error)\n  CheckVlogArguments(filename, clean_lines, line, error)\n  CheckCaffeAlternatives(filename, clean_lines, line, error)\n  CheckCaffeDataLayerSetUp(filename, clean_lines, line, error)\n  CheckCaffeRandom(filename, clean_lines, line, error)\n  CheckPosixThreading(filename, clean_lines, line, error)\n  CheckInvalidIncrement(filename, clean_lines, line, error)\n  CheckMakePairUsesDeduction(filename, clean_lines, line, error)\n  for check_fn in extra_check_functions:\n    check_fn(filename, clean_lines, line, error)\n\n\ndef ProcessFileData(filename, file_extension, lines, error,\n                    extra_check_functions=[]):\n  \"\"\"Performs lint checks and reports any errors to the given error function.\n\n  Args:\n    filename: Filename of the file that is being processed.\n    file_extension: The extension (dot not included) of the file.\n    lines: An array of strings, each representing a line of the file, with the\n           last element being empty if the file is terminated with a newline.\n    error: A callable to which errors are reported, which takes 5 arguments:\n           filename, line number, category, confidence, and message\n    extra_check_functions: An array of additional check functions that will be\n                           run on each source line. Each function takes 4\n                           arguments: filename, clean_lines, line, error\n  \"\"\"\n  lines = (['// marker so line numbers and indices both start at 1'] + lines +\n           ['// marker so line numbers end in a known way'])\n\n  include_state = _IncludeState()\n  function_state = _FunctionState()\n  nesting_state = _NestingState()\n\n  ResetNolintSuppressions()\n\n  CheckForCopyright(filename, lines, error)\n\n  if file_extension == 'h':\n    CheckForHeaderGuard(filename, lines, error)\n\n  RemoveMultiLineComments(filename, lines, error)\n  clean_lines = CleansedLines(lines)\n  for line in xrange(clean_lines.NumLines()):\n    ProcessLine(filename, file_extension, clean_lines, line,\n                include_state, function_state, nesting_state, error,\n                extra_check_functions)\n  nesting_state.CheckCompletedBlocks(filename, error)\n\n  CheckForIncludeWhatYouUse(filename, clean_lines, include_state, error)\n\n  # We check here rather than inside ProcessLine so that we see raw\n  # lines rather than \"cleaned\" lines.\n  CheckForBadCharacters(filename, lines, error)\n\n  CheckForNewlineAtEOF(filename, lines, error)\n\n\ndef ProcessFile(filename, vlevel, extra_check_functions=[]):\n  \"\"\"Does google-lint on a single file.\n\n  Args:\n    filename: The name of the file to parse.\n\n    vlevel: The level of errors to report.  Every error of confidence\n    >= verbose_level will be reported.  0 is a good default.\n\n    extra_check_functions: An array of additional check functions that will be\n                           run on each source line. Each function takes 4\n                           arguments: filename, clean_lines, line, error\n  \"\"\"\n\n  _SetVerboseLevel(vlevel)\n\n  try:\n    # Support the UNIX convention of using \"-\" for stdin.  
Note that\n    # we are not opening the file with universal newline support\n    # (which codecs doesn't support anyway), so the resulting lines do\n    # contain trailing '\\r' characters if we are reading a file that\n    # has CRLF endings.\n    # If after the split a trailing '\\r' is present, it is removed\n    # below. If it is not expected to be present (i.e. os.linesep !=\n    # '\\r\\n' as in Windows), a warning is issued below if this file\n    # is processed.\n\n    if filename == '-':\n      lines = codecs.StreamReaderWriter(sys.stdin,\n                                        codecs.getreader('utf8'),\n                                        codecs.getwriter('utf8'),\n                                        'replace').read().split('\\n')\n    else:\n      lines = codecs.open(filename, 'r', 'utf8', 'replace').read().split('\\n')\n\n    carriage_return_found = False\n    # Remove trailing '\\r'.\n    for linenum in range(len(lines)):\n      if lines[linenum].endswith('\\r'):\n        lines[linenum] = lines[linenum].rstrip('\\r')\n        carriage_return_found = True\n\n  except IOError:\n    sys.stderr.write(\n        \"Skipping input '%s': Can't open for reading\\n\" % filename)\n    return\n\n  # Note, if no dot is found, this will give the entire filename as the ext.\n  file_extension = filename[filename.rfind('.') + 1:]\n\n  # When reading from stdin, the extension is unknown, so no cpplint tests\n  # should rely on the extension.\n  if filename != '-' and file_extension not in _valid_extensions:\n    sys.stderr.write('Ignoring %s; not a valid file name '\n                     '(%s)\\n' % (filename, ', '.join(_valid_extensions)))\n  else:\n    ProcessFileData(filename, file_extension, lines, Error,\n                    extra_check_functions)\n    if carriage_return_found and os.linesep != '\\r\\n':\n      # Use 0 for linenum since outputting only one error for potentially\n      # several lines.\n      Error(filename, 0, 'whitespace/newline', 1,\n     
       'One or more unexpected \\r (^M) found; '\n            'better to use only a \\n')\n\n  sys.stderr.write('Done processing %s\\n' % filename)\n\n\ndef PrintUsage(message):\n  \"\"\"Prints a brief usage string and exits, optionally with an error message.\n\n  Args:\n    message: The optional error message.\n  \"\"\"\n  sys.stderr.write(_USAGE)\n  if message:\n    sys.exit('\\nFATAL ERROR: ' + message)\n  else:\n    sys.exit(1)\n\n\ndef PrintCategories():\n  \"\"\"Prints a list of all the error-categories used by error messages.\n\n  These are the categories used to filter messages via --filter.\n  \"\"\"\n  sys.stderr.write(''.join('  %s\\n' % cat for cat in _ERROR_CATEGORIES))\n  sys.exit(0)\n\n\ndef ParseArguments(args):\n  \"\"\"Parses the command line arguments.\n\n  This may set the output format and verbosity level as side-effects.\n\n  Args:\n    args: The command line arguments:\n\n  Returns:\n    The list of filenames to lint.\n  \"\"\"\n  try:\n    (opts, filenames) = getopt.getopt(args, '', ['help', 'output=', 'verbose=',\n                                                 'counting=',\n                                                 'filter=',\n                                                 'root=',\n                                                 'linelength=',\n                                                 'extensions='])\n  except getopt.GetoptError:\n    PrintUsage('Invalid arguments.')\n\n  verbosity = _VerboseLevel()\n  output_format = _OutputFormat()\n  filters = ''\n  counting_style = ''\n\n  for (opt, val) in opts:\n    if opt == '--help':\n      PrintUsage(None)\n    elif opt == '--output':\n      if val not in ('emacs', 'vs7', 'eclipse'):\n        PrintUsage('The only allowed output formats are emacs, vs7 and eclipse.')\n      output_format = val\n    elif opt == '--verbose':\n      verbosity = int(val)\n    elif opt == '--filter':\n      filters = val\n      if not filters:\n        PrintCategories()\n    elif opt == 
'--counting':\n      if val not in ('total', 'toplevel', 'detailed'):\n        PrintUsage('Valid counting options are total, toplevel, and detailed')\n      counting_style = val\n    elif opt == '--root':\n      global _root\n      _root = val\n    elif opt == '--linelength':\n      global _line_length\n      try:\n          _line_length = int(val)\n      except ValueError:\n          PrintUsage('Line length must be digits.')\n    elif opt == '--extensions':\n      global _valid_extensions\n      try:\n          _valid_extensions = set(val.split(','))\n      except ValueError:\n          PrintUsage('Extensions must be a comma-separated list.')\n\n  if not filenames:\n    PrintUsage('No files were specified.')\n\n  _SetOutputFormat(output_format)\n  _SetVerboseLevel(verbosity)\n  _SetFilters(filters)\n  _SetCountingStyle(counting_style)\n\n  return filenames\n\n\ndef main():\n  filenames = ParseArguments(sys.argv[1:])\n\n  # Change stderr to write with replacement characters so we don't die\n  # if we try to print something containing non-ASCII characters.\n  sys.stderr = codecs.StreamReaderWriter(sys.stderr,\n                                         codecs.getreader('utf8'),\n                                         codecs.getwriter('utf8'),\n                                         'replace')\n\n  _cpplint_state.ResetErrorCounts()\n  for filename in filenames:\n    ProcessFile(filename, _cpplint_state.verbose_level)\n  _cpplint_state.PrintErrorCounts()\n\n  sys.exit(_cpplint_state.error_count > 0)\n\n\nif __name__ == '__main__':\n  main()\n"
  },
  {
    "path": "caffe-fpn/scripts/deploy_docs.sh",
    "content": "#!/bin/bash\n# Publish documentation to the gh-pages site.\n\n# The remote for pushing the docs (defaults to origin).\n# This is where you will submit the PR to BVLC:gh-pages from.\nREMOTE=${1:-origin}\n\necho \"Generating docs and pushing to $REMOTE:gh-pages...\"\necho \"To build and view docs when not on master, simply do 'jekyll serve -s docs'.\"\necho\n\nREMOTE_URL=`git config --get remote.${REMOTE}.url`\nBRANCH=`git rev-parse --abbrev-ref HEAD`\nMSG=`git log --oneline -1`\n\nif [[ $BRANCH = 'master' ]]; then\n    # Find the docs dir, no matter where the script is called\n    DIR=\"$( cd \"$(dirname \"$0\")\" ; pwd -P )\"\n    DOCS_SITE_DIR=$DIR/../docs/_site\n\n    # Make sure that docs/_site tracks remote:gh-pages.\n    # If not, then we make a new repo and check out just that branch.\n    mkdir -p $DOCS_SITE_DIR\n    cd $DOCS_SITE_DIR\n    SITE_REMOTE_URL=`git config --get remote.${REMOTE}.url`\n    SITE_BRANCH=`git rev-parse --abbrev-ref HEAD`\n\n    echo $SITE_REMOTE_URL\n    echo $SITE_BRANCH\n    echo `pwd`\n\n    if [[ ( $SITE_REMOTE_URL = $REMOTE_URL ) && ( $SITE_BRANCH = 'gh-pages' ) ]]; then\n        echo \"Confirmed that docs/_site has same remote as main repo, and is on gh-pages.\"\n    else\n        echo \"Checking out $REMOTE:gh-pages into docs/_site (will take a little time).\"\n        git init .\n        git remote add -t gh-pages -f $REMOTE $REMOTE_URL\n        git checkout gh-pages\n    fi\n\n    echo \"Building the site into docs/_site, and committing the changes.\"\n    jekyll build -s .. -d .\n    git add --all .\n    git commit -m \"$MSG\"\n    git push $REMOTE gh-pages\n\n    echo \"All done!\"\n    cd ../..\nelse echo \"You must run this deployment script from the 'master' branch.\"\nfi\n"
  },
  {
    "path": "caffe-fpn/scripts/download_model_binary.py",
    "content": "#!/usr/bin/env python\nimport os\nimport sys\nimport time\nimport yaml\nimport urllib\nimport hashlib\nimport argparse\n\nrequired_keys = ['caffemodel', 'caffemodel_url', 'sha1']\n\n\ndef reporthook(count, block_size, total_size):\n    \"\"\"\n    From http://blog.moleculea.com/2012/10/04/urlretrieve-progres-indicator/\n    \"\"\"\n    global start_time\n    if count == 0:\n        start_time = time.time()\n        return\n    duration = (time.time() - start_time) or 0.01\n    progress_size = int(count * block_size)\n    speed = int(progress_size / (1024 * duration))\n    percent = int(count * block_size * 100 / total_size)\n    sys.stdout.write(\"\\r...%d%%, %d MB, %d KB/s, %d seconds passed\" %\n                    (percent, progress_size / (1024 * 1024), speed, duration))\n    sys.stdout.flush()\n\n\ndef parse_readme_frontmatter(dirname):\n    readme_filename = os.path.join(dirname, 'readme.md')\n    with open(readme_filename) as f:\n        lines = [line.strip() for line in f.readlines()]\n    top = lines.index('---')\n    bottom = lines.index('---', top + 1)\n    frontmatter = yaml.load('\\n'.join(lines[top + 1:bottom]))\n    assert all(key in frontmatter for key in required_keys)\n    return dirname, frontmatter\n\n\ndef valid_dirname(dirname):\n    try:\n        return parse_readme_frontmatter(dirname)\n    except Exception as e:\n        print('ERROR: {}'.format(e))\n        raise argparse.ArgumentTypeError(\n            'Must be valid Caffe model directory with a correct readme.md')\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(\n        description='Download trained model binary.')\n    parser.add_argument('dirname', type=valid_dirname)\n    args = parser.parse_args()\n\n    # A tiny hack: the dirname validator also returns readme YAML frontmatter.\n    dirname = args.dirname[0]\n    frontmatter = args.dirname[1]\n    model_filename = os.path.join(dirname, frontmatter['caffemodel'])\n\n    # Closure-d function for 
checking SHA1.\n    def model_checks_out(filename=model_filename, sha1=frontmatter['sha1']):\n        with open(filename, 'r') as f:\n            return hashlib.sha1(f.read()).hexdigest() == sha1\n\n    # Check if model exists.\n    if os.path.exists(model_filename) and model_checks_out():\n        print(\"Model already exists.\")\n        sys.exit(0)\n\n    # Download and verify model.\n    urllib.urlretrieve(\n        frontmatter['caffemodel_url'], model_filename, reporthook)\n    if not model_checks_out():\n        print('ERROR: model did not download correctly! Run this again.')\n        sys.exit(1)\n"
  },
  {
    "path": "caffe-fpn/scripts/download_model_from_gist.sh",
    "content": "#!/usr/bin/env sh\n\nGIST=$1\nDIRNAME=${2:-./models}\n\nif [ -z $GIST ]; then\n  echo \"usage: download_model_from_gist.sh <gist_id> <dirname>\"\n  exit\nfi\n\nGIST_DIR=$(echo $GIST | tr '/' '-')\nMODEL_DIR=\"$DIRNAME/$GIST_DIR\"\n\nif [ -d $MODEL_DIR ]; then\n    echo \"$MODEL_DIR already exists! Please make sure you're not overwriting anything important!\"\n    exit\nfi\n\necho \"Downloading Caffe model info to $MODEL_DIR ...\"\nmkdir -p $MODEL_DIR\nwget https://gist.github.com/$GIST/download -O $MODEL_DIR/gist.zip\nunzip -j $MODEL_DIR/gist.zip -d $MODEL_DIR\nrm $MODEL_DIR/gist.zip\necho \"Done\"\n"
  },
  {
    "path": "caffe-fpn/scripts/gather_examples.sh",
    "content": "#!/bin/bash\n# Assemble documentation for the project into one directory via symbolic links.\n\n# Find the docs dir, no matter where the script is called\nROOT_DIR=\"$( cd \"$(dirname \"$0\")\"/.. ; pwd -P )\"\ncd $ROOT_DIR\n\n# Gather docs from examples/**/readme.md\nGATHERED_DIR=docs/gathered\nrm -r $GATHERED_DIR\nmkdir $GATHERED_DIR\nfor README_FILENAME in $(find examples -iname \"readme.md\"); do\n    # Only use file if it is to be included in docs.\n    if grep -Fxq \"include_in_docs: true\" $README_FILENAME; then\n        # Make link to readme.md in docs/gathered/.\n        # Since everything is called readme.md, rename it by its dirname.\n        README_DIRNAME=`dirname $README_FILENAME`\n        DOCS_FILENAME=$GATHERED_DIR/$README_DIRNAME.md\n        mkdir -p `dirname $DOCS_FILENAME`\n        ln -s $ROOT_DIR/$README_FILENAME $DOCS_FILENAME\n    fi\ndone\n\n# Gather docs from examples/*.ipynb and add YAML front-matter.\nfor NOTEBOOK_FILENAME in $(find examples -depth -iname \"*.ipynb\"); do\n    DOCS_FILENAME=$GATHERED_DIR/$NOTEBOOK_FILENAME\n    mkdir -p `dirname $DOCS_FILENAME`\n    python scripts/copy_notebook.py $NOTEBOOK_FILENAME $DOCS_FILENAME\ndone\n"
  },
  {
    "path": "caffe-fpn/scripts/travis/travis_build_and_test.sh",
    "content": "#!/bin/bash\n# Script called by Travis to build and test Caffe.\n# Travis CI tests are CPU-only for lack of compatible hardware.\n\nset -e\nMAKE=\"make --jobs=$NUM_THREADS --keep-going\"\n\nif $WITH_CMAKE; then\n  mkdir build\n  cd build\n  # Default to a CPU-only build; only a CUDA build switches CPU_ONLY off.\n  # (This mirrors the Makefile branch below, which exports CPU_ONLY=1\n  # whenever WITH_CUDA is false.)\n  CPU_ONLY=\" -DCPU_ONLY=ON\"\n  if $WITH_CUDA; then\n    CPU_ONLY=\" -DCPU_ONLY=OFF\"\n  fi\n  PYTHON_ARGS=\"\"\n  if [ \"$PYTHON_VERSION\" = \"3\" ]; then\n    PYTHON_ARGS=\"$PYTHON_ARGS -Dpython_version=3 -DBOOST_LIBRARYDIR=$CONDA_DIR/lib/\"\n  fi\n  if $WITH_IO; then\n    IO_ARGS=\"-DUSE_OPENCV=ON -DUSE_LMDB=ON -DUSE_LEVELDB=ON\"\n  else\n    IO_ARGS=\"-DUSE_OPENCV=OFF -DUSE_LMDB=OFF -DUSE_LEVELDB=OFF\"\n  fi\n  cmake -DBUILD_python=ON -DCMAKE_BUILD_TYPE=Release $CPU_ONLY $PYTHON_ARGS -DCMAKE_INCLUDE_PATH=\"$CONDA_DIR/include/\" -DCMAKE_LIBRARY_PATH=\"$CONDA_DIR/lib/\" $IO_ARGS ..\n  $MAKE\n  $MAKE pytest\n  if ! $WITH_CUDA; then\n    $MAKE runtest\n    $MAKE lint\n  fi\n  $MAKE clean\n  cd -\nelse\n  if ! $WITH_CUDA; then\n    export CPU_ONLY=1\n  fi\n  if $WITH_IO; then\n    export USE_LMDB=1\n    export USE_LEVELDB=1\n    export USE_OPENCV=1\n  fi\n  $MAKE all test pycaffe warn lint || true\n  if ! $WITH_CUDA; then\n    $MAKE runtest\n  fi\n  $MAKE all\n  $MAKE test\n  $MAKE pycaffe\n  $MAKE pytest\n  $MAKE warn\n  if ! $WITH_CUDA; then\n    $MAKE lint\n  fi\nfi\n"
  },
  {
    "path": "caffe-fpn/scripts/travis/travis_install.sh",
    "content": "#!/bin/bash\n# This script must be run with sudo.\n\nset -e\n\nMAKE=\"make --jobs=$NUM_THREADS\"\n# Install apt packages where the Ubuntu 12.04 default and ppa works for Caffe\n\n# This ppa is for gflags and glog\nadd-apt-repository -y ppa:tuleu/precise-backports\napt-get -y update\napt-get install \\\n    wget git curl \\\n    python-dev python-numpy python3-dev\\\n    libleveldb-dev libsnappy-dev libopencv-dev \\\n    libprotobuf-dev protobuf-compiler \\\n    libatlas-dev libatlas-base-dev \\\n    libhdf5-serial-dev libgflags-dev libgoogle-glog-dev \\\n    bc\n\n# Add a special apt-repository to install CMake 2.8.9 for CMake Caffe build,\n# if needed.  By default, Aptitude in Ubuntu 12.04 installs CMake 2.8.7, but\n# Caffe requires a minimum CMake version of 2.8.8.\nif $WITH_CMAKE; then\n  # cmake 3 will make sure that the python interpreter and libraries match\n  wget --no-check-certificate http://www.cmake.org/files/v3.2/cmake-3.2.3-Linux-x86_64.sh -O cmake3.sh\n  chmod +x cmake3.sh\n  ./cmake3.sh --prefix=/usr/ --skip-license --exclude-subdir\nfi\n\n# Install CUDA, if needed\nif $WITH_CUDA; then\n  CUDA_URL=http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1204/x86_64/cuda-repo-ubuntu1204_6.5-14_amd64.deb\n  CUDA_FILE=/tmp/cuda_install.deb\n  curl $CUDA_URL -o $CUDA_FILE\n  dpkg -i $CUDA_FILE\n  rm -f $CUDA_FILE\n  apt-get -y update\n  # Install the minimal CUDA subpackages required to test Caffe build.\n  # For a full CUDA installation, add 'cuda' to the list of packages.\n  apt-get -y install cuda-core-6-5 cuda-cublas-6-5 cuda-cublas-dev-6-5 cuda-cudart-6-5 cuda-cudart-dev-6-5 cuda-curand-6-5 cuda-curand-dev-6-5\n  # Create CUDA symlink at /usr/local/cuda\n  # (This would normally be created by the CUDA installer, but we create it\n  # manually since we did a partial installation.)\n  ln -s /usr/local/cuda-6.5 /usr/local/cuda\nfi\n\n# Install 
LMDB\nLMDB_URL=https://github.com/LMDB/lmdb/archive/LMDB_0.9.14.tar.gz\nLMDB_FILE=/tmp/lmdb.tar.gz\npushd .\nwget $LMDB_URL -O $LMDB_FILE\ntar -C /tmp -xzvf $LMDB_FILE\ncd /tmp/lmdb*/libraries/liblmdb/\n$MAKE\n$MAKE install\npopd\nrm -f $LMDB_FILE\n\n# Install the Python runtime dependencies via miniconda (this is much faster\n# than using pip for everything).\nexport PATH=$CONDA_DIR/bin:$PATH\nif [ ! -d $CONDA_DIR ]; then\n  if [ \"$PYTHON_VERSION\" -eq \"3\" ]; then\n    wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh\n  else\n    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh\n  fi\n  chmod +x miniconda.sh\n  ./miniconda.sh -b -p $CONDA_DIR\n\n  conda update --yes conda\n  # The version of boost we're using for Python 3 depends on 3.4 for now.\n  if [ \"$PYTHON_VERSION\" -eq \"3\" ]; then\n    conda install --yes python=3.4\n  fi\n  conda install --yes numpy scipy matplotlib scikit-image pip\n  # Let conda install boost (so that boost_python matches)\n  conda install --yes -c https://conda.binstar.org/menpo boost=1.56.0\nfi\n\n# install protobuf 3 (just use the miniconda3 directory to avoid having to setup the path again)\nif [ \"$PYTHON_VERSION\" -eq \"3\" ] && [ ! -e \"$CONDA_DIR/bin/protoc\" ]; then\n  pushd .\n  wget https://github.com/google/protobuf/archive/v3.0.0-alpha-3.1.tar.gz -O protobuf-3.tar.gz\n  tar -C /tmp -xzvf protobuf-3.tar.gz\n  cd /tmp/protobuf-3*/\n  ./autogen.sh\n  ./configure --prefix=$CONDA_DIR\n  $MAKE\n  $MAKE install\n  popd\nfi\n\nif [ \"$PYTHON_VERSION\" -eq \"3\" ]; then\n  pip install --pre protobuf\nelse\n  pip install protobuf\nfi\n"
  },
  {
    "path": "caffe-fpn/scripts/travis/travis_setup_makefile_config.sh",
    "content": "#!/bin/bash\n\nset -e\n\nmv Makefile.config.example Makefile.config\n\nif $WITH_CUDA; then\n  # Only generate compute_50.\n  GENCODE=\"-gencode arch=compute_50,code=sm_50\"\n  GENCODE=\"$GENCODE -gencode arch=compute_50,code=compute_50\"\n  echo \"CUDA_ARCH := $GENCODE\" >> Makefile.config\nfi\n\n# Remove IO library settings from Makefile.config\n# to avoid conflicts with CI configuration\nsed -i -e '/USE_LMDB/d' Makefile.config\nsed -i -e '/USE_LEVELDB/d' Makefile.config\nsed -i -e '/USE_OPENCV/d' Makefile.config\n\ncat << 'EOF' >> Makefile.config\n# Travis' nvcc doesn't like newer boost versions\nNVCCFLAGS := -Xcudafe --diag_suppress=cc_clobber_ignored -Xcudafe --diag_suppress=useless_using_declaration -Xcudafe --diag_suppress=set_but_not_used\nANACONDA_HOME := $(CONDA_DIR)\nPYTHON_INCLUDE := $(ANACONDA_HOME)/include \\\n\t\t$(ANACONDA_HOME)/include/python2.7 \\\n\t\t$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include\nPYTHON_LIB := $(ANACONDA_HOME)/lib\nINCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include\nLIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib\nWITH_PYTHON_LAYER := 1\nEOF\n"
  },
  {
    "path": "caffe-fpn/scripts/upload_model_to_gist.sh",
    "content": "#!/bin/bash\n\n# Check for valid directory\nDIRNAME=$1\nif [ ! -f $DIRNAME/readme.md ]; then\n    echo \"usage: upload_model_to_gist.sh <dirname>\"\n    echo \"  <dirname>/readme.md must exist\"\nfi\ncd $DIRNAME\nFILES=`find . -maxdepth 1 -type f ! -name \"*.caffemodel*\" | xargs echo`\n\n# Check for gist tool.\ngist -v >/dev/null 2>&1 || { echo >&2 \"I require 'gist' but it's not installed. Do 'gem install gist'.\"; exit 1; }\n\nNAME=`sed -n 's/^name:[[:space:]]*//p' readme.md`\nif [ -z \"$NAME\" ]; then\n    echo \"  <dirname>/readme.md must contain name field in the front-matter.\"\nfi\n\nGIST=`sed -n 's/^gist_id:[[:space:]]*//p' readme.md`\nif [ -z \"$GIST\" ]; then\n    echo \"Uploading new Gist\"\n    gist -p -d \"$NAME\" $FILES\nelse\n    echo \"Updating existing Gist, id $GIST\"\n    gist -u $GIST -d \"$NAME\" $FILES\nfi\n\nRESULT=$?\nif [ $RESULT -eq 0 ]; then\n    echo \"You've uploaded your model!\"\n    echo \"Don't forget to add the gist_id field to your <dirname>/readme.md now!\"\n    echo \"Run the command again after you do that, to make sure the Gist id propagates.\"\n    echo \"\"\n    echo \"And do share your model over at https://github.com/BVLC/caffe/wiki/Model-Zoo\"\nelse\n    echo \"Something went wrong!\"\nfi\n"
  },
  {
    "path": "caffe-fpn/src/caffe/CMakeLists.txt",
    "content": "# generate protobuf sources\nfile(GLOB proto_files proto/*.proto)\ncaffe_protobuf_generate_cpp_py(${proto_gen_folder} proto_srcs proto_hdrs proto_python ${proto_files})\n\n# include python files either to force generation\nadd_library(proto STATIC ${proto_hdrs} ${proto_srcs} ${proto_python})\nset(Caffe_LINKER_LIBS proto ${Caffe_LINKER_LIBS}) # note, crucial to prepend!\ncaffe_default_properties(proto)\n\n# --[ Caffe library\n\n# creates 'test_srcs', 'srcs', 'test_cuda', 'cuda' lists\ncaffe_pickup_caffe_sources(${PROJECT_SOURCE_DIR})\n\nif(HAVE_CUDA)\n  caffe_cuda_compile(cuda_objs ${cuda})\n  list(APPEND srcs ${cuda_objs} ${cuda})\nendif()\n\nadd_library(caffe ${srcs})\ntarget_link_libraries(caffe proto ${Caffe_LINKER_LIBS})\ncaffe_default_properties(caffe)\nset_target_properties(caffe PROPERTIES\n    VERSION   ${CAFFE_TARGET_VERSION}\n    SOVERSION ${CAFFE_TARGET_SOVERSION}\n    )\n\n# ---[ Tests\n add_subdirectory(test)\n\n# ---[ Install\ninstall(DIRECTORY ${Caffe_INCLUDE_DIR}/caffe DESTINATION include)\ninstall(FILES ${proto_hdrs} DESTINATION include/caffe/proto)\ninstall(TARGETS caffe proto EXPORT CaffeTargets DESTINATION lib)\n\nfile(WRITE ${PROJECT_BINARY_DIR}/__init__.py)\nlist(APPEND proto_python ${PROJECT_BINARY_DIR}/__init__.py)\ninstall(PROGRAMS ${proto_python} DESTINATION python/caffe/proto)\n\n\n"
  },
  {
    "path": "caffe-fpn/src/caffe/blob.cpp",
    "content": "#include <climits>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include<iostream>\nusing namespace std;\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::Reshape(const int num, const int channels, const int height,\n    const int width) {\n  vector<int> shape(4);\n  shape[0] = num;\n  shape[1] = channels;\n  shape[2] = height;\n  shape[3] = width;\n  Reshape(shape);\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::Reshape(const vector<int>& shape) {\n  CHECK_LE(shape.size(), kMaxBlobAxes);\n  count_ = 1;\n  shape_.resize(shape.size());\n  if (!shape_data_ || shape_data_->size() < shape.size() * sizeof(int)) {\n    shape_data_.reset(new SyncedMemory(shape.size() * sizeof(int)));\n  }\n  int* shape_data = static_cast<int*>(shape_data_->mutable_cpu_data());\n  for (int i = 0; i < shape.size(); ++i) {\n    CHECK_GE(shape[i], 0);\n    CHECK_LE(shape[i], INT_MAX / count_) << \"blob size exceeds INT_MAX\";\n    count_ *= shape[i];\n    shape_[i] = shape[i];\n    shape_data[i] = shape[i];\n  }\n  if (count_ > capacity_) {\n    capacity_ = count_;\n    data_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));\n    diff_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));\n  }\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::Reshape(const BlobShape& shape) {\n  CHECK_LE(shape.dim_size(), kMaxBlobAxes);\n  vector<int> shape_vec(shape.dim_size());\n  for (int i = 0; i < shape.dim_size(); ++i) {\n    shape_vec[i] = shape.dim(i);\n  }\n  Reshape(shape_vec);\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::ReshapeLike(const Blob<Dtype>& other) {\n  Reshape(other.shape());\n}\n\ntemplate <typename Dtype>\nBlob<Dtype>::Blob(const int num, const int channels, const int height,\n    const int width)\n  // capacity_ must be initialized before calling Reshape\n  : capacity_(0) {\n  Reshape(num, channels, height, 
width);\n}\n\ntemplate <typename Dtype>\nBlob<Dtype>::Blob(const vector<int>& shape)\n  // capacity_ must be initialized before calling Reshape\n  : capacity_(0) {\n  Reshape(shape);\n}\n\ntemplate <typename Dtype>\nconst int* Blob<Dtype>::gpu_shape() const {\n  CHECK(shape_data_);\n  return (const int*)shape_data_->gpu_data();\n}\n\ntemplate <typename Dtype>\nconst Dtype* Blob<Dtype>::cpu_data() const {\n  CHECK(data_);\n  return (const Dtype*)data_->cpu_data();\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::set_cpu_data(Dtype* data) {\n  CHECK(data);\n  data_->set_cpu_data(data);\n}\n\ntemplate <typename Dtype>\nconst Dtype* Blob<Dtype>::gpu_data() const {\n  CHECK(data_);\n  return (const Dtype*)data_->gpu_data();\n}\n\ntemplate <typename Dtype>\nconst Dtype* Blob<Dtype>::cpu_diff() const {\n  CHECK(diff_);\n  return (const Dtype*)diff_->cpu_data();\n}\n\ntemplate <typename Dtype>\nconst Dtype* Blob<Dtype>::gpu_diff() const {\n  CHECK(diff_);\n  return (const Dtype*)diff_->gpu_data();\n}\n\ntemplate <typename Dtype>\nDtype* Blob<Dtype>::mutable_cpu_data() {\n  CHECK(data_);\n  return static_cast<Dtype*>(data_->mutable_cpu_data());\n}\n\ntemplate <typename Dtype>\nDtype* Blob<Dtype>::mutable_gpu_data() {\n  CHECK(data_);\n  return static_cast<Dtype*>(data_->mutable_gpu_data());\n}\n\ntemplate <typename Dtype>\nDtype* Blob<Dtype>::mutable_cpu_diff() {\n  CHECK(diff_);\n  return static_cast<Dtype*>(diff_->mutable_cpu_data());\n}\n\ntemplate <typename Dtype>\nDtype* Blob<Dtype>::mutable_gpu_diff() {\n  CHECK(diff_);\n  return static_cast<Dtype*>(diff_->mutable_gpu_data());\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::ShareData(const Blob& other) {\n  CHECK_EQ(count_, other.count());\n  data_ = other.data();\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::ShareDiff(const Blob& other) {\n  CHECK_EQ(count_, other.count());\n  diff_ = other.diff();\n}\n\n// The \"update\" method is used for parameter blobs in a Net, which are stored\n// as Blob<float> or 
Blob<double> -- hence we do not define it for\n// Blob<int> or Blob<unsigned int>.\ntemplate <> void Blob<unsigned int>::Update() { NOT_IMPLEMENTED; }\ntemplate <> void Blob<int>::Update() { NOT_IMPLEMENTED; }\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::Update() {\n  // We will perform update based on where the data is located.\n  switch (data_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    // perform computation on CPU\n    caffe_axpy<Dtype>(count_, Dtype(-1),\n        static_cast<const Dtype*>(diff_->cpu_data()),\n        static_cast<Dtype*>(data_->mutable_cpu_data()));\n    break;\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n    // perform computation on GPU\n    caffe_gpu_axpy<Dtype>(count_, Dtype(-1),\n        static_cast<const Dtype*>(diff_->gpu_data()),\n        static_cast<Dtype*>(data_->mutable_gpu_data()));\n#else\n    NO_GPU;\n#endif\n    break;\n  default:\n    LOG(FATAL) << \"Syncedmem not initialized.\";\n  }\n}\n\ntemplate <> unsigned int Blob<unsigned int>::asum_data() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <> int Blob<int>::asum_data() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <typename Dtype>\nDtype Blob<Dtype>::asum_data() const {\n  if (!data_) { return 0; }\n  switch (data_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    return caffe_cpu_asum(count_, cpu_data());\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n  {\n    Dtype asum;\n    caffe_gpu_asum(count_, gpu_data(), &asum);\n    return asum;\n  }\n#else\n    NO_GPU;\n#endif\n  case SyncedMemory::UNINITIALIZED:\n    return 0;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << data_->head();\n  }\n  return 0;\n}\n\ntemplate <> unsigned int Blob<unsigned int>::asum_diff() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <> int Blob<int>::asum_diff() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <typename Dtype>\nDtype 
Blob<Dtype>::asum_diff() const {\n  if (!diff_) { return 0; }\n  switch (diff_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    return caffe_cpu_asum(count_, cpu_diff());\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n  {\n    Dtype asum;\n    caffe_gpu_asum(count_, gpu_diff(), &asum);\n    return asum;\n  }\n#else\n    NO_GPU;\n#endif\n  case SyncedMemory::UNINITIALIZED:\n    return 0;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << diff_->head();\n  }\n  return 0;\n}\n\ntemplate <> unsigned int Blob<unsigned int>::sumsq_data() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <> int Blob<int>::sumsq_data() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <typename Dtype>\nDtype Blob<Dtype>::sumsq_data() const {\n  Dtype sumsq;\n  const Dtype* data;\n  if (!data_) { return 0; }\n  switch (data_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    data = cpu_data();\n    sumsq = caffe_cpu_dot(count_, data, data);\n    break;\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n    data = gpu_data();\n    caffe_gpu_dot(count_, data, data, &sumsq);\n#else\n    NO_GPU;\n#endif\n    break;\n  case SyncedMemory::UNINITIALIZED:\n    return 0;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << data_->head();\n  }\n  return sumsq;\n}\n\ntemplate <> unsigned int Blob<unsigned int>::sumsq_diff() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <> int Blob<int>::sumsq_diff() const {\n  NOT_IMPLEMENTED;\n  return 0;\n}\n\ntemplate <typename Dtype>\nDtype Blob<Dtype>::sumsq_diff() const {\n  Dtype sumsq;\n  const Dtype* diff;\n  if (!diff_) { return 0; }\n  switch (diff_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    diff = cpu_diff();\n    sumsq = caffe_cpu_dot(count_, diff, diff);\n    break;\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n    diff = gpu_diff();\n    caffe_gpu_dot(count_, diff, diff, 
&sumsq);\n    break;\n#else\n    NO_GPU;\n#endif\n  case SyncedMemory::UNINITIALIZED:\n    return 0;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << data_->head();\n  }\n  return sumsq;\n}\n\ntemplate <> void Blob<unsigned int>::scale_data(unsigned int scale_factor) {\n  NOT_IMPLEMENTED;\n}\n\ntemplate <> void Blob<int>::scale_data(int scale_factor) {\n  NOT_IMPLEMENTED;\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::scale_data(Dtype scale_factor) {\n  Dtype* data;\n  if (!data_) { return; }\n  switch (data_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    data = mutable_cpu_data();\n    caffe_scal(count_, scale_factor, data);\n    return;\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n    data = mutable_gpu_data();\n    caffe_gpu_scal(count_, scale_factor, data);\n    return;\n#else\n    NO_GPU;\n#endif\n  case SyncedMemory::UNINITIALIZED:\n    return;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << data_->head();\n  }\n}\n\ntemplate <> void Blob<unsigned int>::scale_diff(unsigned int scale_factor) {\n  NOT_IMPLEMENTED;\n}\n\ntemplate <> void Blob<int>::scale_diff(int scale_factor) {\n  NOT_IMPLEMENTED;\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::scale_diff(Dtype scale_factor) {\n  Dtype* diff;\n  if (!diff_) { return; }\n  switch (diff_->head()) {\n  case SyncedMemory::HEAD_AT_CPU:\n    diff = mutable_cpu_diff();\n    caffe_scal(count_, scale_factor, diff);\n    return;\n  case SyncedMemory::HEAD_AT_GPU:\n  case SyncedMemory::SYNCED:\n#ifndef CPU_ONLY\n    diff = mutable_gpu_diff();\n    caffe_gpu_scal(count_, scale_factor, diff);\n    return;\n#else\n    NO_GPU;\n#endif\n  case SyncedMemory::UNINITIALIZED:\n    return;\n  default:\n    LOG(FATAL) << \"Unknown SyncedMemory head state: \" << diff_->head();\n  }\n}\n\ntemplate <typename Dtype>\nbool Blob<Dtype>::ShapeEquals(const BlobProto& other) {\n  if (other.has_num() || other.has_channels() ||\n      
other.has_height() || other.has_width()) {\n    // Using deprecated 4D Blob dimensions --\n    // shape is (num, channels, height, width).\n    // Note: we do not use the normal Blob::num(), Blob::channels(), etc.\n    // methods as these index from the beginning of the blob shape, where legacy\n    // parameter blobs were indexed from the end of the blob shape (e.g., bias\n    // Blob shape (1 x 1 x 1 x N), IP layer weight Blob shape (1 x 1 x M x N)).\n    return shape_.size() <= 4 &&\n           LegacyShape(-4) == other.num() &&\n           LegacyShape(-3) == other.channels() &&\n           LegacyShape(-2) == other.height() &&\n           LegacyShape(-1) == other.width();\n  }\n  vector<int> other_shape(other.shape().dim_size());\n  for (int i = 0; i < other.shape().dim_size(); ++i) {\n    other_shape[i] = other.shape().dim(i);\n  }\n  return shape_ == other_shape;\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::CopyFrom(const Blob& source, bool copy_diff, bool reshape) {\n  if (source.count() != count_ || source.shape() != shape_) {\n    if (reshape) {\n      ReshapeLike(source);\n    } else {\n      LOG(FATAL) << \"Trying to copy blobs of different sizes.\";\n    }\n  }\n  switch (Caffe::mode()) {\n  case Caffe::GPU:\n    if (copy_diff) {\n      caffe_copy(count_, source.gpu_diff(),\n          static_cast<Dtype*>(diff_->mutable_gpu_data()));\n    } else {\n      caffe_copy(count_, source.gpu_data(),\n          static_cast<Dtype*>(data_->mutable_gpu_data()));\n    }\n    break;\n  case Caffe::CPU:\n    if (copy_diff) {\n      caffe_copy(count_, source.cpu_diff(),\n          static_cast<Dtype*>(diff_->mutable_cpu_data()));\n    } else {\n      caffe_copy(count_, source.cpu_data(),\n          static_cast<Dtype*>(data_->mutable_cpu_data()));\n    }\n    break;\n  default:\n    LOG(FATAL) << \"Unknown caffe mode.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid Blob<Dtype>::FromProto(const BlobProto& proto, bool reshape) {\n  if (reshape) {\n    vector<int> shape;\n 
   if (proto.has_num() || proto.has_channels() ||\n        proto.has_height() || proto.has_width()) {\n      // Using deprecated 4D Blob dimensions --\n      // shape is (num, channels, height, width).\n      shape.resize(4);\n      shape[0] = proto.num();\n      shape[1] = proto.channels();\n      shape[2] = proto.height();\n      shape[3] = proto.width();\n    } else {\n      shape.resize(proto.shape().dim_size());\n      for (int i = 0; i < proto.shape().dim_size(); ++i) {\n        shape[i] = proto.shape().dim(i);\n      }\n    }\n    Reshape(shape);\n  } else {\n    CHECK(ShapeEquals(proto)) << \"shape mismatch (reshape not set)\";\n  }\n  // copy data\n  Dtype* data_vec = mutable_cpu_data();\n  if (proto.double_data_size() > 0) {\n    CHECK_EQ(count_, proto.double_data_size());\n    for (int i = 0; i < count_; ++i) {\n      data_vec[i] = proto.double_data(i);\n    }\n  } else {\n    CHECK_EQ(count_, proto.data_size());\n    for (int i = 0; i < count_; ++i) {\n      data_vec[i] = proto.data(i);\n    }\n  }\n  if (proto.double_diff_size() > 0) {\n    CHECK_EQ(count_, proto.double_diff_size());\n    Dtype* diff_vec = mutable_cpu_diff();\n    for (int i = 0; i < count_; ++i) {\n      diff_vec[i] = proto.double_diff(i);\n    }\n  } else if (proto.diff_size() > 0) {\n    CHECK_EQ(count_, proto.diff_size());\n    Dtype* diff_vec = mutable_cpu_diff();\n    for (int i = 0; i < count_; ++i) {\n      diff_vec[i] = proto.diff(i);\n    }\n  }\n}\n\ntemplate <>\nvoid Blob<double>::ToProto(BlobProto* proto, bool write_diff) const {\n  proto->clear_shape();\n  for (int i = 0; i < shape_.size(); ++i) {\n    proto->mutable_shape()->add_dim(shape_[i]);\n  }\n  proto->clear_double_data();\n  proto->clear_double_diff();\n  const double* data_vec = cpu_data();\n  for (int i = 0; i < count_; ++i) {\n    proto->add_double_data(data_vec[i]);\n  }\n  if (write_diff) {\n    const double* diff_vec = cpu_diff();\n    for (int i = 0; i < count_; ++i) {\n      
proto->add_double_diff(diff_vec[i]);\n    }\n  }\n}\n\ntemplate <>\nvoid Blob<float>::ToProto(BlobProto* proto, bool write_diff) const {\n  proto->clear_shape();\n  for (int i = 0; i < shape_.size(); ++i) {\n    proto->mutable_shape()->add_dim(shape_[i]);\n  }\n  proto->clear_data();\n  proto->clear_diff();\n  const float* data_vec = cpu_data();\n  for (int i = 0; i < count_; ++i) {\n    proto->add_data(data_vec[i]);\n  }\n  if (write_diff) {\n    const float* diff_vec = cpu_diff();\n    for (int i = 0; i < count_; ++i) {\n      proto->add_diff(diff_vec[i]);\n    }\n  }\n}\n\nINSTANTIATE_CLASS(Blob);\ntemplate class Blob<int>;\ntemplate class Blob<unsigned int>;\n\n}  // namespace caffe\n\n"
  },
  {
    "path": "caffe-fpn/src/caffe/common.cpp",
    "content": "#include <boost/thread.hpp>\n#include <glog/logging.h>\n#include <cmath>\n#include <cstdio>\n#include <ctime>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/rng.hpp\"\n\nnamespace caffe {\n\n// Make sure each thread can have different values.\nstatic boost::thread_specific_ptr<Caffe> thread_instance_;\n\nCaffe& Caffe::Get() {\n  if (!thread_instance_.get()) {\n    thread_instance_.reset(new Caffe());\n  }\n  return *(thread_instance_.get());\n}\n\n// random seeding\nint64_t cluster_seedgen(void) {\n  int64_t s, seed, pid;\n  FILE* f = fopen(\"/dev/urandom\", \"rb\");\n  if (f && fread(&seed, 1, sizeof(seed), f) == sizeof(seed)) {\n    fclose(f);\n    return seed;\n  }\n\n  LOG(INFO) << \"System entropy source not available, \"\n              \"using fallback algorithm to generate seed instead.\";\n  if (f)\n    fclose(f);\n\n  pid = getpid();\n  s = time(NULL);\n  seed = std::abs(((s * 181) * ((pid - 83) * 359)) % 104729);\n  return seed;\n}\n\n\nvoid GlobalInit(int* pargc, char*** pargv) {\n  // Google flags.\n  ::gflags::ParseCommandLineFlags(pargc, pargv, true);\n  // Google logging.\n  ::google::InitGoogleLogging(*(pargv)[0]);\n  // Provide a backtrace on segfault.\n  ::google::InstallFailureSignalHandler();\n}\n\n#ifdef CPU_ONLY  // CPU-only Caffe.\n\nCaffe::Caffe()\n    : random_generator_(), mode_(Caffe::CPU),\n      solver_count_(1), root_solver_(true) { }\n\nCaffe::~Caffe() { }\n\nvoid Caffe::set_random_seed(const unsigned int seed) {\n  // RNG seed\n  Get().random_generator_.reset(new RNG(seed));\n}\n\nvoid Caffe::SetDevice(const int device_id) {\n  NO_GPU;\n}\n\nvoid Caffe::DeviceQuery() {\n  NO_GPU;\n}\n\n\nclass Caffe::RNG::Generator {\n public:\n  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}\n  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}\n  caffe::rng_t* rng() { return rng_.get(); }\n private:\n  shared_ptr<caffe::rng_t> rng_;\n};\n\nCaffe::RNG::RNG() : generator_(new Generator()) { 
}\n\nCaffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }\n\nCaffe::RNG& Caffe::RNG::operator=(const RNG& other) {\n  generator_ = other.generator_;\n  return *this;\n}\n\nvoid* Caffe::RNG::generator() {\n  return static_cast<void*>(generator_->rng());\n}\n\n#else  // Normal GPU + CPU Caffe.\n\nCaffe::Caffe()\n    : cublas_handle_(NULL), curand_generator_(NULL), random_generator_(),\n    mode_(Caffe::CPU), solver_count_(1), root_solver_(true) {\n  // Try to create a cublas handler, and report an error if failed (but we will\n  // keep the program running as one might just want to run CPU code).\n  if (cublasCreate(&cublas_handle_) != CUBLAS_STATUS_SUCCESS) {\n    LOG(ERROR) << \"Cannot create Cublas handle. Cublas won't be available.\";\n  }\n  // Try to create a curand handler.\n  if (curandCreateGenerator(&curand_generator_, CURAND_RNG_PSEUDO_DEFAULT)\n      != CURAND_STATUS_SUCCESS ||\n      curandSetPseudoRandomGeneratorSeed(curand_generator_, cluster_seedgen())\n      != CURAND_STATUS_SUCCESS) {\n    LOG(ERROR) << \"Cannot create Curand generator. Curand won't be available.\";\n  }\n}\n\nCaffe::~Caffe() {\n  if (cublas_handle_) CUBLAS_CHECK(cublasDestroy(cublas_handle_));\n  if (curand_generator_) {\n    CURAND_CHECK(curandDestroyGenerator(curand_generator_));\n  }\n}\n\nvoid Caffe::set_random_seed(const unsigned int seed) {\n  // Curand seed\n  static bool g_curand_availability_logged = false;\n  if (Get().curand_generator_) {\n    CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(curand_generator(),\n        seed));\n    CURAND_CHECK(curandSetGeneratorOffset(curand_generator(), 0));\n  } else {\n    if (!g_curand_availability_logged) {\n        LOG(ERROR) <<\n            \"Curand not available. 
Skipping setting the curand seed.\";\n        g_curand_availability_logged = true;\n    }\n  }\n  // RNG seed\n  Get().random_generator_.reset(new RNG(seed));\n}\n\nvoid Caffe::SetDevice(const int device_id) {\n  int current_device;\n  CUDA_CHECK(cudaGetDevice(&current_device));\n  if (current_device == device_id) {\n    return;\n  }\n  // The call to cudaSetDevice must come before any calls to Get, which\n  // may perform initialization using the GPU.\n  CUDA_CHECK(cudaSetDevice(device_id));\n  if (Get().cublas_handle_) CUBLAS_CHECK(cublasDestroy(Get().cublas_handle_));\n  if (Get().curand_generator_) {\n    CURAND_CHECK(curandDestroyGenerator(Get().curand_generator_));\n  }\n  CUBLAS_CHECK(cublasCreate(&Get().cublas_handle_));\n  CURAND_CHECK(curandCreateGenerator(&Get().curand_generator_,\n      CURAND_RNG_PSEUDO_DEFAULT));\n  CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(Get().curand_generator_,\n      cluster_seedgen()));\n}\n\nvoid Caffe::DeviceQuery() {\n  cudaDeviceProp prop;\n  int device;\n  if (cudaSuccess != cudaGetDevice(&device)) {\n    printf(\"No cuda device present.\\n\");\n    return;\n  }\n  CUDA_CHECK(cudaGetDeviceProperties(&prop, device));\n  LOG(INFO) << \"Device id:                     \" << device;\n  LOG(INFO) << \"Major revision number:         \" << prop.major;\n  LOG(INFO) << \"Minor revision number:         \" << prop.minor;\n  LOG(INFO) << \"Name:                          \" << prop.name;\n  LOG(INFO) << \"Total global memory:           \" << prop.totalGlobalMem;\n  LOG(INFO) << \"Total shared memory per block: \" << prop.sharedMemPerBlock;\n  LOG(INFO) << \"Total registers per block:     \" << prop.regsPerBlock;\n  LOG(INFO) << \"Warp size:                     \" << prop.warpSize;\n  LOG(INFO) << \"Maximum memory pitch:          \" << prop.memPitch;\n  LOG(INFO) << \"Maximum threads per block:     \" << prop.maxThreadsPerBlock;\n  LOG(INFO) << \"Maximum dimension of block:    \"\n      << prop.maxThreadsDim[0] << \", \" << 
prop.maxThreadsDim[1] << \", \"\n      << prop.maxThreadsDim[2];\n  LOG(INFO) << \"Maximum dimension of grid:     \"\n      << prop.maxGridSize[0] << \", \" << prop.maxGridSize[1] << \", \"\n      << prop.maxGridSize[2];\n  LOG(INFO) << \"Clock rate:                    \" << prop.clockRate;\n  LOG(INFO) << \"Total constant memory:         \" << prop.totalConstMem;\n  LOG(INFO) << \"Texture alignment:             \" << prop.textureAlignment;\n  LOG(INFO) << \"Concurrent copy and execution: \"\n      << (prop.deviceOverlap ? \"Yes\" : \"No\");\n  LOG(INFO) << \"Number of multiprocessors:     \" << prop.multiProcessorCount;\n  LOG(INFO) << \"Kernel execution timeout:      \"\n      << (prop.kernelExecTimeoutEnabled ? \"Yes\" : \"No\");\n  return;\n}\n\n\nclass Caffe::RNG::Generator {\n public:\n  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}\n  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}\n  caffe::rng_t* rng() { return rng_.get(); }\n private:\n  shared_ptr<caffe::rng_t> rng_;\n};\n\nCaffe::RNG::RNG() : generator_(new Generator()) { }\n\nCaffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }\n\nCaffe::RNG& Caffe::RNG::operator=(const RNG& other) {\n  generator_.reset(other.generator_.get());\n  return *this;\n}\n\nvoid* Caffe::RNG::generator() {\n  return static_cast<void*>(generator_->rng());\n}\n\nconst char* cublasGetErrorString(cublasStatus_t error) {\n  switch (error) {\n  case CUBLAS_STATUS_SUCCESS:\n    return \"CUBLAS_STATUS_SUCCESS\";\n  case CUBLAS_STATUS_NOT_INITIALIZED:\n    return \"CUBLAS_STATUS_NOT_INITIALIZED\";\n  case CUBLAS_STATUS_ALLOC_FAILED:\n    return \"CUBLAS_STATUS_ALLOC_FAILED\";\n  case CUBLAS_STATUS_INVALID_VALUE:\n    return \"CUBLAS_STATUS_INVALID_VALUE\";\n  case CUBLAS_STATUS_ARCH_MISMATCH:\n    return \"CUBLAS_STATUS_ARCH_MISMATCH\";\n  case CUBLAS_STATUS_MAPPING_ERROR:\n    return \"CUBLAS_STATUS_MAPPING_ERROR\";\n  case CUBLAS_STATUS_EXECUTION_FAILED:\n    return 
\"CUBLAS_STATUS_EXECUTION_FAILED\";\n  case CUBLAS_STATUS_INTERNAL_ERROR:\n    return \"CUBLAS_STATUS_INTERNAL_ERROR\";\n#if CUDA_VERSION >= 6000\n  case CUBLAS_STATUS_NOT_SUPPORTED:\n    return \"CUBLAS_STATUS_NOT_SUPPORTED\";\n#endif\n#if CUDA_VERSION >= 6050\n  case CUBLAS_STATUS_LICENSE_ERROR:\n    return \"CUBLAS_STATUS_LICENSE_ERROR\";\n#endif\n  }\n  return \"Unknown cublas status\";\n}\n\nconst char* curandGetErrorString(curandStatus_t error) {\n  switch (error) {\n  case CURAND_STATUS_SUCCESS:\n    return \"CURAND_STATUS_SUCCESS\";\n  case CURAND_STATUS_VERSION_MISMATCH:\n    return \"CURAND_STATUS_VERSION_MISMATCH\";\n  case CURAND_STATUS_NOT_INITIALIZED:\n    return \"CURAND_STATUS_NOT_INITIALIZED\";\n  case CURAND_STATUS_ALLOCATION_FAILED:\n    return \"CURAND_STATUS_ALLOCATION_FAILED\";\n  case CURAND_STATUS_TYPE_ERROR:\n    return \"CURAND_STATUS_TYPE_ERROR\";\n  case CURAND_STATUS_OUT_OF_RANGE:\n    return \"CURAND_STATUS_OUT_OF_RANGE\";\n  case CURAND_STATUS_LENGTH_NOT_MULTIPLE:\n    return \"CURAND_STATUS_LENGTH_NOT_MULTIPLE\";\n  case CURAND_STATUS_DOUBLE_PRECISION_REQUIRED:\n    return \"CURAND_STATUS_DOUBLE_PRECISION_REQUIRED\";\n  case CURAND_STATUS_LAUNCH_FAILURE:\n    return \"CURAND_STATUS_LAUNCH_FAILURE\";\n  case CURAND_STATUS_PREEXISTING_FAILURE:\n    return \"CURAND_STATUS_PREEXISTING_FAILURE\";\n  case CURAND_STATUS_INITIALIZATION_FAILED:\n    return \"CURAND_STATUS_INITIALIZATION_FAILED\";\n  case CURAND_STATUS_ARCH_MISMATCH:\n    return \"CURAND_STATUS_ARCH_MISMATCH\";\n  case CURAND_STATUS_INTERNAL_ERROR:\n    return \"CURAND_STATUS_INTERNAL_ERROR\";\n  }\n  return \"Unknown curand status\";\n}\n\n#endif  // CPU_ONLY\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/data_reader.cpp",
    "content": "#include <boost/thread.hpp>\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/data_reader.hpp\"\n#include \"caffe/layers/data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\nnamespace caffe {\n\nusing boost::weak_ptr;\n\nmap<const string, weak_ptr<DataReader::Body> > DataReader::bodies_;\nstatic boost::mutex bodies_mutex_;\n\nDataReader::DataReader(const LayerParameter& param)\n    : queue_pair_(new QueuePair(  //\n        param.data_param().prefetch() * param.data_param().batch_size())) {\n  // Get or create a body\n  boost::mutex::scoped_lock lock(bodies_mutex_);\n  string key = source_key(param);\n  weak_ptr<Body>& weak = bodies_[key];\n  body_ = weak.lock();\n  if (!body_) {\n    body_.reset(new Body(param));\n    bodies_[key] = weak_ptr<Body>(body_);\n  }\n  body_->new_queue_pairs_.push(queue_pair_);\n}\n\nDataReader::~DataReader() {\n  string key = source_key(body_->param_);\n  body_.reset();\n  boost::mutex::scoped_lock lock(bodies_mutex_);\n  if (bodies_[key].expired()) {\n    bodies_.erase(key);\n  }\n}\n\n//\n\nDataReader::QueuePair::QueuePair(int size) {\n  // Initialize the free queue with requested number of datums\n  for (int i = 0; i < size; ++i) {\n    free_.push(new Datum());\n  }\n}\n\nDataReader::QueuePair::~QueuePair() {\n  Datum* datum;\n  while (free_.try_pop(&datum)) {\n    delete datum;\n  }\n  while (full_.try_pop(&datum)) {\n    delete datum;\n  }\n}\n\n//\n\nDataReader::Body::Body(const LayerParameter& param)\n    : param_(param),\n      new_queue_pairs_() {\n  StartInternalThread();\n}\n\nDataReader::Body::~Body() {\n  StopInternalThread();\n}\n\nvoid DataReader::Body::InternalThreadEntry() {\n  shared_ptr<db::DB> db(db::GetDB(param_.data_param().backend()));\n  db->Open(param_.data_param().source(), db::READ);\n  shared_ptr<db::Cursor> cursor(db->NewCursor());\n  vector<shared_ptr<QueuePair> > qps;\n  try {\n    int solver_count = param_.phase() == TRAIN ? 
Caffe::solver_count() : 1;\n\n    // To ensure deterministic runs, only start running once all solvers\n    // are ready. But solvers need to peek on one item during initialization,\n    // so read one item, then wait for the next solver.\n    for (int i = 0; i < solver_count; ++i) {\n      shared_ptr<QueuePair> qp(new_queue_pairs_.pop());\n      read_one(cursor.get(), qp.get());\n      qps.push_back(qp);\n    }\n    // Main loop\n    while (!must_stop()) {\n      for (int i = 0; i < solver_count; ++i) {\n        read_one(cursor.get(), qps[i].get());\n      }\n      // Check no additional readers have been created. This can happen if\n      // more than one net is trained at a time per process, whether single\n      // or multi solver. It might also happen if two data layers have the\n      // same name and the same source.\n      CHECK_EQ(new_queue_pairs_.size(), 0);\n    }\n  } catch (boost::thread_interrupted&) {\n    // Interrupted exception is expected on shutdown\n  }\n}\n\nvoid DataReader::Body::read_one(db::Cursor* cursor, QueuePair* qp) {\n  Datum* datum = qp->free_.pop();\n  // TODO deserialize in-place instead of copy?\n  datum->ParseFromString(cursor->value());\n  qp->full_.push(datum);\n\n  // go to the next iter\n  cursor->Next();\n  if (!cursor->valid()) {\n    DLOG(INFO) << \"Restarting data prefetching from start.\";\n    cursor->SeekToFirst();\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/data_transformer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#endif  // USE_OPENCV\n\n#include <string>\n#include <vector>\n\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include \"caffe/util/rng.hpp\"\n\nnamespace caffe {\n\ntemplate<typename Dtype>\nDataTransformer<Dtype>::DataTransformer(const TransformationParameter& param,\n    Phase phase)\n    : param_(param), phase_(phase) {\n  // check if we want to use mean_file\n  if (param_.has_mean_file()) {\n    CHECK_EQ(param_.mean_value_size(), 0) <<\n      \"Cannot specify mean_file and mean_value at the same time\";\n    const string& mean_file = param.mean_file();\n    if (Caffe::root_solver()) {\n      LOG(INFO) << \"Loading mean file from: \" << mean_file;\n    }\n    BlobProto blob_proto;\n    ReadProtoFromBinaryFileOrDie(mean_file.c_str(), &blob_proto);\n    data_mean_.FromProto(blob_proto);\n  }\n  // check if we want to use mean_value\n  if (param_.mean_value_size() > 0) {\n    CHECK(param_.has_mean_file() == false) <<\n      \"Cannot specify mean_file and mean_value at the same time\";\n    for (int c = 0; c < param_.mean_value_size(); ++c) {\n      mean_values_.push_back(param_.mean_value(c));\n    }\n  }\n}\n\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(const Datum& datum,\n                                       Dtype* transformed_data) {\n  const string& data = datum.data();\n  const int datum_channels = datum.channels();\n  const int datum_height = datum.height();\n  const int datum_width = datum.width();\n\n  const int crop_size = param_.crop_size();\n  const Dtype scale = param_.scale();\n  const bool do_mirror = param_.mirror() && Rand(2);\n  const bool has_mean_file = param_.has_mean_file();\n  const bool has_uint8 = data.size() > 0;\n  const bool has_mean_values = mean_values_.size() > 0;\n\n  CHECK_GT(datum_channels, 0);\n  CHECK_GE(datum_height, crop_size);\n  CHECK_GE(datum_width, 
crop_size);\n\n  Dtype* mean = NULL;\n  if (has_mean_file) {\n    CHECK_EQ(datum_channels, data_mean_.channels());\n    CHECK_EQ(datum_height, data_mean_.height());\n    CHECK_EQ(datum_width, data_mean_.width());\n    mean = data_mean_.mutable_cpu_data();\n  }\n  if (has_mean_values) {\n    CHECK(mean_values_.size() == 1 || mean_values_.size() == datum_channels) <<\n     \"Specify either 1 mean_value or as many as channels: \" << datum_channels;\n    if (datum_channels > 1 && mean_values_.size() == 1) {\n      // Replicate the mean_value for simplicity\n      for (int c = 1; c < datum_channels; ++c) {\n        mean_values_.push_back(mean_values_[0]);\n      }\n    }\n  }\n\n  int height = datum_height;\n  int width = datum_width;\n\n  int h_off = 0;\n  int w_off = 0;\n  if (crop_size) {\n    height = crop_size;\n    width = crop_size;\n    // We only do random crop when we do training.\n    if (phase_ == TRAIN) {\n      h_off = Rand(datum_height - crop_size + 1);\n      w_off = Rand(datum_width - crop_size + 1);\n    } else {\n      h_off = (datum_height - crop_size) / 2;\n      w_off = (datum_width - crop_size) / 2;\n    }\n  }\n\n  Dtype datum_element;\n  int top_index, data_index;\n  for (int c = 0; c < datum_channels; ++c) {\n    for (int h = 0; h < height; ++h) {\n      for (int w = 0; w < width; ++w) {\n        data_index = (c * datum_height + h_off + h) * datum_width + w_off + w;\n        if (do_mirror) {\n          top_index = (c * height + h) * width + (width - 1 - w);\n        } else {\n          top_index = (c * height + h) * width + w;\n        }\n        if (has_uint8) {\n          datum_element =\n            static_cast<Dtype>(static_cast<uint8_t>(data[data_index]));\n        } else {\n          datum_element = datum.float_data(data_index);\n        }\n        if (has_mean_file) {\n          transformed_data[top_index] =\n            (datum_element - mean[data_index]) * scale;\n        } else {\n          if (has_mean_values) {\n            
transformed_data[top_index] =\n              (datum_element - mean_values_[c]) * scale;\n          } else {\n            transformed_data[top_index] = datum_element * scale;\n          }\n        }\n      }\n    }\n  }\n}\n\n\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(const Datum& datum,\n                                       Blob<Dtype>* transformed_blob) {\n  // If datum is encoded, decode and transform the cv::Mat.\n  if (datum.encoded()) {\n#ifdef USE_OPENCV\n    CHECK(!(param_.force_color() && param_.force_gray()))\n        << \"cannot set both force_color and force_gray\";\n    cv::Mat cv_img;\n    if (param_.force_color() || param_.force_gray()) {\n    // If force_color then decode in color otherwise decode in gray.\n      cv_img = DecodeDatumToCVMat(datum, param_.force_color());\n    } else {\n      cv_img = DecodeDatumToCVMatNative(datum);\n    }\n    // Transform the cv::Mat into the blob.\n    return Transform(cv_img, transformed_blob);\n#else\n    LOG(FATAL) << \"Encoded datum requires OpenCV; compile with USE_OPENCV.\";\n#endif  // USE_OPENCV\n  } else {\n    if (param_.force_color() || param_.force_gray()) {\n      LOG(ERROR) << \"force_color and force_gray only for encoded datum\";\n    }\n  }\n\n  const int crop_size = param_.crop_size();\n  const int datum_channels = datum.channels();\n  const int datum_height = datum.height();\n  const int datum_width = datum.width();\n\n  // Check dimensions.\n  const int channels = transformed_blob->channels();\n  const int height = transformed_blob->height();\n  const int width = transformed_blob->width();\n  const int num = transformed_blob->num();\n\n  CHECK_EQ(channels, datum_channels);\n  CHECK_LE(height, datum_height);\n  CHECK_LE(width, datum_width);\n  CHECK_GE(num, 1);\n\n  if (crop_size) {\n    CHECK_EQ(crop_size, height);\n    CHECK_EQ(crop_size, width);\n  } else {\n    CHECK_EQ(datum_height, height);\n    CHECK_EQ(datum_width, width);\n  }\n\n  Dtype* transformed_data = 
transformed_blob->mutable_cpu_data();\n  Transform(datum, transformed_data);\n}\n\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(const vector<Datum> & datum_vector,\n                                       Blob<Dtype>* transformed_blob) {\n  const int datum_num = datum_vector.size();\n  const int num = transformed_blob->num();\n  const int channels = transformed_blob->channels();\n  const int height = transformed_blob->height();\n  const int width = transformed_blob->width();\n\n  CHECK_GT(datum_num, 0) << \"There is no datum to add\";\n  CHECK_LE(datum_num, num) <<\n    \"The size of datum_vector must be no greater than transformed_blob->num()\";\n  Blob<Dtype> uni_blob(1, channels, height, width);\n  for (int item_id = 0; item_id < datum_num; ++item_id) {\n    int offset = transformed_blob->offset(item_id);\n    uni_blob.set_cpu_data(transformed_blob->mutable_cpu_data() + offset);\n    Transform(datum_vector[item_id], &uni_blob);\n  }\n}\n\n#ifdef USE_OPENCV\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(const vector<cv::Mat> & mat_vector,\n                                       Blob<Dtype>* transformed_blob) {\n  const int mat_num = mat_vector.size();\n  const int num = transformed_blob->num();\n  const int channels = transformed_blob->channels();\n  const int height = transformed_blob->height();\n  const int width = transformed_blob->width();\n\n  CHECK_GT(mat_num, 0) << \"There is no MAT to add\";\n  CHECK_EQ(mat_num, num) <<\n    \"The size of mat_vector must be equal to transformed_blob->num()\";\n  Blob<Dtype> uni_blob(1, channels, height, width);\n  for (int item_id = 0; item_id < mat_num; ++item_id) {\n    int offset = transformed_blob->offset(item_id);\n    uni_blob.set_cpu_data(transformed_blob->mutable_cpu_data() + offset);\n    Transform(mat_vector[item_id], &uni_blob);\n  }\n}\n\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(const cv::Mat& cv_img,\n                                       
Blob<Dtype>* transformed_blob) {\n  const int crop_size = param_.crop_size();\n  const int img_channels = cv_img.channels();\n  const int img_height = cv_img.rows;\n  const int img_width = cv_img.cols;\n\n  // Check dimensions.\n  const int channels = transformed_blob->channels();\n  const int height = transformed_blob->height();\n  const int width = transformed_blob->width();\n  const int num = transformed_blob->num();\n\n  CHECK_EQ(channels, img_channels);\n  CHECK_LE(height, img_height);\n  CHECK_LE(width, img_width);\n  CHECK_GE(num, 1);\n\n  CHECK(cv_img.depth() == CV_8U) << \"Image data type must be unsigned byte\";\n\n  const Dtype scale = param_.scale();\n  const bool do_mirror = param_.mirror() && Rand(2);\n  const bool has_mean_file = param_.has_mean_file();\n  const bool has_mean_values = mean_values_.size() > 0;\n\n  CHECK_GT(img_channels, 0);\n  CHECK_GE(img_height, crop_size);\n  CHECK_GE(img_width, crop_size);\n\n  Dtype* mean = NULL;\n  if (has_mean_file) {\n    CHECK_EQ(img_channels, data_mean_.channels());\n    CHECK_EQ(img_height, data_mean_.height());\n    CHECK_EQ(img_width, data_mean_.width());\n    mean = data_mean_.mutable_cpu_data();\n  }\n  if (has_mean_values) {\n    CHECK(mean_values_.size() == 1 || mean_values_.size() == img_channels) <<\n     \"Specify either 1 mean_value or as many as channels: \" << img_channels;\n    if (img_channels > 1 && mean_values_.size() == 1) {\n      // Replicate the mean_value for simplicity\n      for (int c = 1; c < img_channels; ++c) {\n        mean_values_.push_back(mean_values_[0]);\n      }\n    }\n  }\n\n  int h_off = 0;\n  int w_off = 0;\n  cv::Mat cv_cropped_img = cv_img;\n  if (crop_size) {\n    CHECK_EQ(crop_size, height);\n    CHECK_EQ(crop_size, width);\n    // We only do random crop when we do training.\n    if (phase_ == TRAIN) {\n      h_off = Rand(img_height - crop_size + 1);\n      w_off = Rand(img_width - crop_size + 1);\n    } else {\n      h_off = (img_height - crop_size) / 2;\n      
w_off = (img_width - crop_size) / 2;\n    }\n    cv::Rect roi(w_off, h_off, crop_size, crop_size);\n    cv_cropped_img = cv_img(roi);\n  } else {\n    CHECK_EQ(img_height, height);\n    CHECK_EQ(img_width, width);\n  }\n\n  CHECK(cv_cropped_img.data);\n\n  Dtype* transformed_data = transformed_blob->mutable_cpu_data();\n  int top_index;\n  for (int h = 0; h < height; ++h) {\n    const uchar* ptr = cv_cropped_img.ptr<uchar>(h);\n    int img_index = 0;\n    for (int w = 0; w < width; ++w) {\n      for (int c = 0; c < img_channels; ++c) {\n        if (do_mirror) {\n          top_index = (c * height + h) * width + (width - 1 - w);\n        } else {\n          top_index = (c * height + h) * width + w;\n        }\n        // int top_index = (c * height + h) * width + w;\n        Dtype pixel = static_cast<Dtype>(ptr[img_index++]);\n        if (has_mean_file) {\n          int mean_index = (c * img_height + h_off + h) * img_width + w_off + w;\n          transformed_data[top_index] =\n            (pixel - mean[mean_index]) * scale;\n        } else {\n          if (has_mean_values) {\n            transformed_data[top_index] =\n              (pixel - mean_values_[c]) * scale;\n          } else {\n            transformed_data[top_index] = pixel * scale;\n          }\n        }\n      }\n    }\n  }\n}\n#endif  // USE_OPENCV\n\ntemplate<typename Dtype>\nvoid DataTransformer<Dtype>::Transform(Blob<Dtype>* input_blob,\n                                       Blob<Dtype>* transformed_blob) {\n  const int crop_size = param_.crop_size();\n  const int input_num = input_blob->num();\n  const int input_channels = input_blob->channels();\n  const int input_height = input_blob->height();\n  const int input_width = input_blob->width();\n\n  if (transformed_blob->count() == 0) {\n    // Initialize transformed_blob with the right shape.\n    if (crop_size) {\n      transformed_blob->Reshape(input_num, input_channels,\n                                crop_size, crop_size);\n    } else {\n      
transformed_blob->Reshape(input_num, input_channels,\n                                input_height, input_width);\n    }\n  }\n\n  const int num = transformed_blob->num();\n  const int channels = transformed_blob->channels();\n  const int height = transformed_blob->height();\n  const int width = transformed_blob->width();\n  const int size = transformed_blob->count();\n\n  CHECK_LE(input_num, num);\n  CHECK_EQ(input_channels, channels);\n  CHECK_GE(input_height, height);\n  CHECK_GE(input_width, width);\n\n\n  const Dtype scale = param_.scale();\n  const bool do_mirror = param_.mirror() && Rand(2);\n  const bool has_mean_file = param_.has_mean_file();\n  const bool has_mean_values = mean_values_.size() > 0;\n\n  int h_off = 0;\n  int w_off = 0;\n  if (crop_size) {\n    CHECK_EQ(crop_size, height);\n    CHECK_EQ(crop_size, width);\n    // We only do random crop when we do training.\n    if (phase_ == TRAIN) {\n      h_off = Rand(input_height - crop_size + 1);\n      w_off = Rand(input_width - crop_size + 1);\n    } else {\n      h_off = (input_height - crop_size) / 2;\n      w_off = (input_width - crop_size) / 2;\n    }\n  } else {\n    CHECK_EQ(input_height, height);\n    CHECK_EQ(input_width, width);\n  }\n\n  Dtype* input_data = input_blob->mutable_cpu_data();\n  if (has_mean_file) {\n    CHECK_EQ(input_channels, data_mean_.channels());\n    CHECK_EQ(input_height, data_mean_.height());\n    CHECK_EQ(input_width, data_mean_.width());\n    for (int n = 0; n < input_num; ++n) {\n      int offset = input_blob->offset(n);\n      caffe_sub(data_mean_.count(), input_data + offset,\n            data_mean_.cpu_data(), input_data + offset);\n    }\n  }\n\n  if (has_mean_values) {\n    CHECK(mean_values_.size() == 1 || mean_values_.size() == input_channels) <<\n     \"Specify either 1 mean_value or as many as channels: \" << input_channels;\n    if (mean_values_.size() == 1) {\n      caffe_add_scalar(input_blob->count(), -(mean_values_[0]), input_data);\n    } else {\n      
for (int n = 0; n < input_num; ++n) {\n        for (int c = 0; c < input_channels; ++c) {\n          int offset = input_blob->offset(n, c);\n          caffe_add_scalar(input_height * input_width, -(mean_values_[c]),\n            input_data + offset);\n        }\n      }\n    }\n  }\n\n  Dtype* transformed_data = transformed_blob->mutable_cpu_data();\n\n  for (int n = 0; n < input_num; ++n) {\n    int top_index_n = n * channels;\n    int data_index_n = n * channels;\n    for (int c = 0; c < channels; ++c) {\n      int top_index_c = (top_index_n + c) * height;\n      int data_index_c = (data_index_n + c) * input_height + h_off;\n      for (int h = 0; h < height; ++h) {\n        int top_index_h = (top_index_c + h) * width;\n        int data_index_h = (data_index_c + h) * input_width + w_off;\n        if (do_mirror) {\n          int top_index_w = top_index_h + width - 1;\n          for (int w = 0; w < width; ++w) {\n            transformed_data[top_index_w-w] = input_data[data_index_h + w];\n          }\n        } else {\n          for (int w = 0; w < width; ++w) {\n            transformed_data[top_index_h + w] = input_data[data_index_h + w];\n          }\n        }\n      }\n    }\n  }\n  if (scale != Dtype(1)) {\n    DLOG(INFO) << \"Scale: \" << scale;\n    caffe_scal(size, scale, transformed_data);\n  }\n}\n\ntemplate<typename Dtype>\nvector<int> DataTransformer<Dtype>::InferBlobShape(const Datum& datum) {\n  if (datum.encoded()) {\n#ifdef USE_OPENCV\n    CHECK(!(param_.force_color() && param_.force_gray()))\n        << \"cannot set both force_color and force_gray\";\n    cv::Mat cv_img;\n    if (param_.force_color() || param_.force_gray()) {\n    // If force_color then decode in color otherwise decode in gray.\n      cv_img = DecodeDatumToCVMat(datum, param_.force_color());\n    } else {\n      cv_img = DecodeDatumToCVMatNative(datum);\n    }\n    // InferBlobShape using the cv::image.\n    return InferBlobShape(cv_img);\n#else\n    LOG(FATAL) << \"Encoded datum 
requires OpenCV; compile with USE_OPENCV.\";\n#endif  // USE_OPENCV\n  }\n  const int crop_size = param_.crop_size();\n  const int datum_channels = datum.channels();\n  const int datum_height = datum.height();\n  const int datum_width = datum.width();\n  // Check dimensions.\n  CHECK_GT(datum_channels, 0);\n  CHECK_GE(datum_height, crop_size);\n  CHECK_GE(datum_width, crop_size);\n  // Build BlobShape.\n  vector<int> shape(4);\n  shape[0] = 1;\n  shape[1] = datum_channels;\n  shape[2] = (crop_size)? crop_size: datum_height;\n  shape[3] = (crop_size)? crop_size: datum_width;\n  return shape;\n}\n\ntemplate<typename Dtype>\nvector<int> DataTransformer<Dtype>::InferBlobShape(\n    const vector<Datum> & datum_vector) {\n  const int num = datum_vector.size();\n  CHECK_GT(num, 0) << \"There is no datum in the vector\";\n  // Use first datum in the vector to InferBlobShape.\n  vector<int> shape = InferBlobShape(datum_vector[0]);\n  // Adjust num to the size of the vector.\n  shape[0] = num;\n  return shape;\n}\n\n#ifdef USE_OPENCV\ntemplate<typename Dtype>\nvector<int> DataTransformer<Dtype>::InferBlobShape(const cv::Mat& cv_img) {\n  const int crop_size = param_.crop_size();\n  const int img_channels = cv_img.channels();\n  const int img_height = cv_img.rows;\n  const int img_width = cv_img.cols;\n  // Check dimensions.\n  CHECK_GT(img_channels, 0);\n  CHECK_GE(img_height, crop_size);\n  CHECK_GE(img_width, crop_size);\n  // Build BlobShape.\n  vector<int> shape(4);\n  shape[0] = 1;\n  shape[1] = img_channels;\n  shape[2] = (crop_size)? crop_size: img_height;\n  shape[3] = (crop_size)? 
crop_size: img_width;\n  return shape;\n}\n\ntemplate<typename Dtype>\nvector<int> DataTransformer<Dtype>::InferBlobShape(\n    const vector<cv::Mat> & mat_vector) {\n  const int num = mat_vector.size();\n  CHECK_GT(num, 0) << \"There is no cv_img in the vector\";\n  // Use first cv_img in the vector to InferBlobShape.\n  vector<int> shape = InferBlobShape(mat_vector[0]);\n  // Adjust num to the size of the vector.\n  shape[0] = num;\n  return shape;\n}\n#endif  // USE_OPENCV\n\ntemplate <typename Dtype>\nvoid DataTransformer<Dtype>::InitRand() {\n  const bool needs_rand = param_.mirror() ||\n      (phase_ == TRAIN && param_.crop_size());\n  if (needs_rand) {\n    const unsigned int rng_seed = caffe_rng_rand();\n    rng_.reset(new Caffe::RNG(rng_seed));\n  } else {\n    rng_.reset();\n  }\n}\n\ntemplate <typename Dtype>\nint DataTransformer<Dtype>::Rand(int n) {\n  CHECK(rng_);\n  CHECK_GT(n, 0);\n  caffe::rng_t* rng =\n      static_cast<caffe::rng_t*>(rng_->generator());\n  return ((*rng)() % n);\n}\n\nINSTANTIATE_CLASS(DataTransformer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/internal_thread.cpp",
    "content": "#include <boost/thread.hpp>\n#include <exception>\n\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\nInternalThread::~InternalThread() {\n  StopInternalThread();\n}\n\nbool InternalThread::is_started() const {\n  return thread_ && thread_->joinable();\n}\n\nbool InternalThread::must_stop() {\n  return thread_ && thread_->interruption_requested();\n}\n\nvoid InternalThread::StartInternalThread() {\n  CHECK(!is_started()) << \"Threads should persist and not be restarted.\";\n\n  int device = 0;\n#ifndef CPU_ONLY\n  CUDA_CHECK(cudaGetDevice(&device));\n#endif\n  Caffe::Brew mode = Caffe::mode();\n  int rand_seed = caffe_rng_rand();\n  int solver_count = Caffe::solver_count();\n  bool root_solver = Caffe::root_solver();\n\n  try {\n    thread_.reset(new boost::thread(&InternalThread::entry, this, device, mode,\n          rand_seed, solver_count, root_solver));\n  } catch (std::exception& e) {\n    LOG(FATAL) << \"Thread exception: \" << e.what();\n  }\n}\n\nvoid InternalThread::entry(int device, Caffe::Brew mode, int rand_seed,\n    int solver_count, bool root_solver) {\n#ifndef CPU_ONLY\n  CUDA_CHECK(cudaSetDevice(device));\n#endif\n  Caffe::set_mode(mode);\n  Caffe::set_random_seed(rand_seed);\n  Caffe::set_solver_count(solver_count);\n  Caffe::set_root_solver(root_solver);\n\n  InternalThreadEntry();\n}\n\nvoid InternalThread::StopInternalThread() {\n  if (is_started()) {\n    thread_->interrupt();\n    try {\n      thread_->join();\n    } catch (boost::thread_interrupted&) {\n    } catch (std::exception& e) {\n      LOG(FATAL) << \"Thread exception: \" << e.what();\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layer.cpp",
    "content": "#include <boost/thread.hpp>\n#include \"caffe/layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid Layer<Dtype>::InitMutex() {\n  forward_mutex_.reset(new boost::mutex());\n}\n\ntemplate <typename Dtype>\nvoid Layer<Dtype>::Lock() {\n  if (IsShared()) {\n    forward_mutex_->lock();\n  }\n}\n\ntemplate <typename Dtype>\nvoid Layer<Dtype>::Unlock() {\n  if (IsShared()) {\n    forward_mutex_->unlock();\n  }\n}\n\nINSTANTIATE_CLASS(Layer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layer_factory.cpp",
    "content": "// Make sure we include Python.h before any system header\n// to avoid _POSIX_C_SOURCE redefinition\n#ifdef WITH_PYTHON_LAYER\n#include <boost/python.hpp>\n#endif\n#include <string>\n\n#include \"caffe/layer.hpp\"\n#include \"caffe/layer_factory.hpp\"\n#include \"caffe/layers/conv_layer.hpp\"\n#include \"caffe/layers/deformable_conv_layer.hpp\"\n#include \"caffe/layers/lrn_layer.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/layers/relu_layer.hpp\"\n#include \"caffe/layers/sigmoid_layer.hpp\"\n#include \"caffe/layers/softmax_layer.hpp\"\n#include \"caffe/layers/tanh_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_conv_layer.hpp\"\n#include \"caffe/layers/cudnn_lcn_layer.hpp\"\n#include \"caffe/layers/cudnn_lrn_layer.hpp\"\n#include \"caffe/layers/cudnn_pooling_layer.hpp\"\n#include \"caffe/layers/cudnn_relu_layer.hpp\"\n#include \"caffe/layers/cudnn_sigmoid_layer.hpp\"\n#include \"caffe/layers/cudnn_softmax_layer.hpp\"\n#include \"caffe/layers/cudnn_tanh_layer.hpp\"\n#endif\n\n#ifdef WITH_PYTHON_LAYER\n#include \"caffe/layers/python_layer.hpp\"\n#endif\n\nnamespace caffe {\n\n// Get convolution layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetConvolutionLayer(\n    const LayerParameter& param) {\n  ConvolutionParameter conv_param = param.convolution_param();\n  ConvolutionParameter_Engine engine = conv_param.engine();\n#ifdef USE_CUDNN\n  bool use_dilation = false;\n  for (int i = 0; i < conv_param.dilation_size(); ++i) {\n    if (conv_param.dilation(i) > 1) {\n      use_dilation = true;\n    }\n  }\n#endif\n  if (engine == ConvolutionParameter_Engine_DEFAULT) {\n    engine = ConvolutionParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    if (!use_dilation) {\n      engine = ConvolutionParameter_Engine_CUDNN;\n    }\n#endif\n  }\n  if (engine == ConvolutionParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new 
ConvolutionLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == ConvolutionParameter_Engine_CUDNN) {\n    if (use_dilation) {\n      LOG(FATAL) << \"CuDNN doesn't support the dilated convolution at Layer \"\n                 << param.name();\n    }\n    return shared_ptr<Layer<Dtype> >(new CuDNNConvolutionLayer<Dtype>(param));\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\nREGISTER_LAYER_CREATOR(Convolution, GetConvolutionLayer);\n\n\n\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetDeformableConvolutionLayer(\n    const LayerParameter& param) {\n  DeformableConvolutionParameter defor_conv_param = param.deformable_convolution_param();\n  DeformableConvolutionParameter_Engine engine = defor_conv_param.engine();\n\n  if (engine == DeformableConvolutionParameter_Engine_DEFAULT) {\n    engine = DeformableConvolutionParameter_Engine_CAFFE;\n  }\n  if (engine == DeformableConvolutionParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new DeformableConvolutionLayer<Dtype>(param));\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(DeformableConvolution, GetDeformableConvolutionLayer);\n\n\n\n// Get pooling layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetPoolingLayer(const LayerParameter& param) {\n  PoolingParameter_Engine engine = param.pooling_param().engine();\n  if (engine == PoolingParameter_Engine_DEFAULT) {\n    engine = PoolingParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    engine = PoolingParameter_Engine_CUDNN;\n#endif\n  }\n  if (engine == PoolingParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == PoolingParameter_Engine_CUDNN) {\n    if (param.top_size() > 1) {\n      LOG(INFO) << \"cuDNN does not support multiple tops. 
\"\n                << \"Using Caffe's own pooling layer.\";\n      return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));\n    }\n    // CuDNN assumes layers are not being modified in place, thus\n    // breaking our index tracking for updates in some cases in Caffe.\n    // Until there is a workaround in Caffe (index management) or\n    // cuDNN, use the Caffe layer for max pooling, or don't use in-place\n    // layers after max pooling layers.\n    if (param.pooling_param().pool() == PoolingParameter_PoolMethod_MAX) {\n        return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));\n    } else {\n        return shared_ptr<Layer<Dtype> >(new CuDNNPoolingLayer<Dtype>(param));\n    }\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(Pooling, GetPoolingLayer);\n\n// Get LRN layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetLRNLayer(const LayerParameter& param) {\n  LRNParameter_Engine engine = param.lrn_param().engine();\n\n  if (engine == LRNParameter_Engine_DEFAULT) {\n#ifdef USE_CUDNN\n    engine = LRNParameter_Engine_CUDNN;\n#else\n    engine = LRNParameter_Engine_CAFFE;\n#endif\n  }\n\n  if (engine == LRNParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new LRNLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == LRNParameter_Engine_CUDNN) {\n    LRNParameter lrn_param = param.lrn_param();\n\n    if (lrn_param.norm_region() == LRNParameter_NormRegion_WITHIN_CHANNEL) {\n      return shared_ptr<Layer<Dtype> >(new CuDNNLCNLayer<Dtype>(param));\n    } else {\n      // local size is too big to be handled through cuDNN\n      if (param.lrn_param().local_size() > CUDNN_LRN_MAX_N) {\n        return shared_ptr<Layer<Dtype> >(new LRNLayer<Dtype>(param));\n      } else {\n        return shared_ptr<Layer<Dtype> >(new CuDNNLRNLayer<Dtype>(param));\n      }\n    }\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" <<
param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(LRN, GetLRNLayer);\n\n// Get relu layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetReLULayer(const LayerParameter& param) {\n  ReLUParameter_Engine engine = param.relu_param().engine();\n  if (engine == ReLUParameter_Engine_DEFAULT) {\n    engine = ReLUParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    engine = ReLUParameter_Engine_CUDNN;\n#endif\n  }\n  if (engine == ReLUParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new ReLULayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == ReLUParameter_Engine_CUDNN) {\n    return shared_ptr<Layer<Dtype> >(new CuDNNReLULayer<Dtype>(param));\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(ReLU, GetReLULayer);\n\n// Get sigmoid layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetSigmoidLayer(const LayerParameter& param) {\n  SigmoidParameter_Engine engine = param.sigmoid_param().engine();\n  if (engine == SigmoidParameter_Engine_DEFAULT) {\n    engine = SigmoidParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    engine = SigmoidParameter_Engine_CUDNN;\n#endif\n  }\n  if (engine == SigmoidParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new SigmoidLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == SigmoidParameter_Engine_CUDNN) {\n    return shared_ptr<Layer<Dtype> >(new CuDNNSigmoidLayer<Dtype>(param));\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(Sigmoid, GetSigmoidLayer);\n\n// Get softmax layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetSoftmaxLayer(const LayerParameter& param) {\n  SoftmaxParameter_Engine engine = param.softmax_param().engine();\n  if (engine == SoftmaxParameter_Engine_DEFAULT) {\n    engine = 
SoftmaxParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    engine = SoftmaxParameter_Engine_CUDNN;\n#endif\n  }\n  if (engine == SoftmaxParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new SoftmaxLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == SoftmaxParameter_Engine_CUDNN) {\n    return shared_ptr<Layer<Dtype> >(new CuDNNSoftmaxLayer<Dtype>(param));\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(Softmax, GetSoftmaxLayer);\n\n// Get tanh layer according to engine.\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetTanHLayer(const LayerParameter& param) {\n  TanHParameter_Engine engine = param.tanh_param().engine();\n  if (engine == TanHParameter_Engine_DEFAULT) {\n    engine = TanHParameter_Engine_CAFFE;\n#ifdef USE_CUDNN\n    engine = TanHParameter_Engine_CUDNN;\n#endif\n  }\n  if (engine == TanHParameter_Engine_CAFFE) {\n    return shared_ptr<Layer<Dtype> >(new TanHLayer<Dtype>(param));\n#ifdef USE_CUDNN\n  } else if (engine == TanHParameter_Engine_CUDNN) {\n    return shared_ptr<Layer<Dtype> >(new CuDNNTanHLayer<Dtype>(param));\n#endif\n  } else {\n    LOG(FATAL) << \"Layer \" << param.name() << \" has unknown engine.\";\n  }\n}\n\nREGISTER_LAYER_CREATOR(TanH, GetTanHLayer);\n\n#ifdef WITH_PYTHON_LAYER\ntemplate <typename Dtype>\nshared_ptr<Layer<Dtype> > GetPythonLayer(const LayerParameter& param) {\n  Py_Initialize();\n  try {\n    bp::object module = bp::import(param.python_param().module().c_str());\n    bp::object layer = module.attr(param.python_param().layer().c_str())(param);\n    return bp::extract<shared_ptr<PythonLayer<Dtype> > >(layer)();\n  } catch (bp::error_already_set) {\n    PyErr_Print();\n    throw;\n  }\n}\n\nREGISTER_LAYER_CREATOR(Python, GetPythonLayer);\n#endif\n\n// Layers that use their constructor as their default creator should be\n// registered in their corresponding cpp files. 
Do not register them here.\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/absval_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/absval_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid AbsValLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  CHECK_NE(top[0], bottom[0]) << this->type() << \" Layer does not \"\n    \"allow in-place computation.\";\n}\n\ntemplate <typename Dtype>\nvoid AbsValLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const int count = top[0]->count();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  caffe_abs(count, bottom[0]->cpu_data(), top_data);\n}\n\ntemplate <typename Dtype>\nvoid AbsValLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const int count = top[0]->count();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    caffe_cpu_sign(count, bottom_data, bottom_diff);\n    caffe_mul(count, bottom_diff, top_diff, bottom_diff);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(AbsValLayer);\n#endif\n\nINSTANTIATE_CLASS(AbsValLayer);\nREGISTER_LAYER_CLASS(AbsVal);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/absval_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/absval_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid AbsValLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const int count = top[0]->count();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  caffe_gpu_abs(count, bottom[0]->gpu_data(), top_data);\n}\n\ntemplate <typename Dtype>\nvoid AbsValLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const int count = top[0]->count();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    caffe_gpu_sign(count, bottom_data, bottom_diff);\n    caffe_gpu_mul(count, bottom_diff, top_diff, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(AbsValLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/accuracy_layer.cpp",
    "content": "#include <functional>\n#include <utility>\n#include <vector>\n\n#include \"caffe/layers/accuracy_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid AccuracyLayer<Dtype>::LayerSetUp(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  top_k_ = this->layer_param_.accuracy_param().top_k();\n\n  has_ignore_label_ =\n    this->layer_param_.accuracy_param().has_ignore_label();\n  if (has_ignore_label_) {\n    ignore_label_ = this->layer_param_.accuracy_param().ignore_label();\n  }\n}\n\ntemplate <typename Dtype>\nvoid AccuracyLayer<Dtype>::Reshape(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  CHECK_LE(top_k_, bottom[0]->count() / bottom[1]->count())\n      << \"top_k must be less than or equal to the number of classes.\";\n  label_axis_ =\n      bottom[0]->CanonicalAxisIndex(this->layer_param_.accuracy_param().axis());\n  outer_num_ = bottom[0]->count(0, label_axis_);\n  inner_num_ = bottom[0]->count(label_axis_ + 1);\n  CHECK_EQ(outer_num_ * inner_num_, bottom[1]->count())\n      << \"Number of labels must match number of predictions; \"\n      << \"e.g., if label axis == 1 and prediction shape is (N, C, H, W), \"\n      << \"label count (number of labels) must be N*H*W, \"\n      << \"with integer values in {0, 1, ..., C-1}.\";\n  vector<int> top_shape(0);  // Accuracy is a scalar; 0 axes.\n  top[0]->Reshape(top_shape);\n  if (top.size() > 1) {\n    // Per-class accuracy is a vector; 1 axes.\n    vector<int> top_shape_per_class(1);\n    top_shape_per_class[0] = bottom[0]->shape(label_axis_);\n    top[1]->Reshape(top_shape_per_class);\n    nums_buffer_.Reshape(top_shape_per_class);\n  }\n}\n\ntemplate <typename Dtype>\nvoid AccuracyLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  Dtype accuracy = 0;\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* bottom_label = 
bottom[1]->cpu_data();\n  const int dim = bottom[0]->count() / outer_num_;\n  const int num_labels = bottom[0]->shape(label_axis_);\n  vector<Dtype> maxval(top_k_+1);\n  vector<int> max_id(top_k_+1);\n  if (top.size() > 1) {\n    caffe_set(nums_buffer_.count(), Dtype(0), nums_buffer_.mutable_cpu_data());\n    caffe_set(top[1]->count(), Dtype(0), top[1]->mutable_cpu_data());\n  }\n  int count = 0;\n  for (int i = 0; i < outer_num_; ++i) {\n    for (int j = 0; j < inner_num_; ++j) {\n      const int label_value =\n          static_cast<int>(bottom_label[i * inner_num_ + j]);\n      if (has_ignore_label_ && label_value == ignore_label_) {\n        continue;\n      }\n      if (top.size() > 1) ++nums_buffer_.mutable_cpu_data()[label_value];\n      DCHECK_GE(label_value, 0);\n      DCHECK_LT(label_value, num_labels);\n      // Top-k accuracy\n      std::vector<std::pair<Dtype, int> > bottom_data_vector;\n      for (int k = 0; k < num_labels; ++k) {\n        bottom_data_vector.push_back(std::make_pair(\n            bottom_data[i * dim + k * inner_num_ + j], k));\n      }\n      std::partial_sort(\n          bottom_data_vector.begin(), bottom_data_vector.begin() + top_k_,\n          bottom_data_vector.end(), std::greater<std::pair<Dtype, int> >());\n      // check if true label is in top k predictions\n      for (int k = 0; k < top_k_; k++) {\n        if (bottom_data_vector[k].second == label_value) {\n          ++accuracy;\n          if (top.size() > 1) ++top[1]->mutable_cpu_data()[label_value];\n          break;\n        }\n      }\n      ++count;\n    }\n  }\n\n  // LOG(INFO) << \"Accuracy: \" << accuracy;\n  top[0]->mutable_cpu_data()[0] = accuracy / count;\n  if (top.size() > 1) {\n    for (int i = 0; i < top[1]->count(); ++i) {\n      top[1]->mutable_cpu_data()[i] =\n          nums_buffer_.cpu_data()[i] == 0 ? 
0\n          : top[1]->cpu_data()[i] / nums_buffer_.cpu_data()[i];\n    }\n  }\n  // Accuracy layer should not be used as a loss function.\n}\n\nINSTANTIATE_CLASS(AccuracyLayer);\nREGISTER_LAYER_CLASS(Accuracy);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/argmax_layer.cpp",
    "content": "#include <algorithm>\n#include <functional>\n#include <utility>\n#include <vector>\n\n#include \"caffe/layers/argmax_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ArgMaxLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const ArgMaxParameter& argmax_param = this->layer_param_.argmax_param();\n  out_max_val_ = argmax_param.out_max_val();\n  top_k_ = argmax_param.top_k();\n  has_axis_ = argmax_param.has_axis();\n  CHECK_GE(top_k_, 1) << \"top k must not be less than 1.\";\n  if (has_axis_) {\n    axis_ = bottom[0]->CanonicalAxisIndex(argmax_param.axis());\n    CHECK_GE(axis_, 0) << \"axis must not be less than 0.\";\n    CHECK_LE(axis_, bottom[0]->num_axes()) <<\n      \"axis must be less than or equal to the number of axis.\";\n    CHECK_LE(top_k_, bottom[0]->shape(axis_))\n      << \"top_k must be less than or equal to the dimension of the axis.\";\n  } else {\n    CHECK_LE(top_k_, bottom[0]->count(1))\n      << \"top_k must be less than or equal to\"\n        \" the dimension of the flattened bottom blob per instance.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid ArgMaxLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  int num_top_axes = bottom[0]->num_axes();\n  if ( num_top_axes < 3 ) num_top_axes = 3;\n  std::vector<int> shape(num_top_axes, 1);\n  if (has_axis_) {\n    // Produces max_ind or max_val per axis\n    shape = bottom[0]->shape();\n    shape[axis_] = top_k_;\n  } else {\n    shape[0] = bottom[0]->shape(0);\n    // Produces max_ind\n    shape[2] = top_k_;\n    if (out_max_val_) {\n      // Produces max_ind and max_val\n      shape[1] = 2;\n    }\n  }\n  top[0]->Reshape(shape);\n}\n\ntemplate <typename Dtype>\nvoid ArgMaxLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = 
top[0]->mutable_cpu_data();\n  int dim, axis_dist;\n  if (has_axis_) {\n    dim = bottom[0]->shape(axis_);\n    // Distance between values of axis in blob\n    axis_dist = bottom[0]->count(axis_) / dim;\n  } else {\n    dim = bottom[0]->count(1);\n    axis_dist = 1;\n  }\n  int num = bottom[0]->count() / dim;\n  std::vector<std::pair<Dtype, int> > bottom_data_vector(dim);\n  for (int i = 0; i < num; ++i) {\n    for (int j = 0; j < dim; ++j) {\n      bottom_data_vector[j] = std::make_pair(\n        bottom_data[(i / axis_dist * dim + j) * axis_dist + i % axis_dist], j);\n    }\n    std::partial_sort(\n        bottom_data_vector.begin(), bottom_data_vector.begin() + top_k_,\n        bottom_data_vector.end(), std::greater<std::pair<Dtype, int> >());\n    for (int j = 0; j < top_k_; ++j) {\n      if (out_max_val_) {\n        if (has_axis_) {\n          // Produces max_val per axis\n          top_data[(i / axis_dist * top_k_ + j) * axis_dist + i % axis_dist]\n            = bottom_data_vector[j].first;\n        } else {\n          // Produces max_ind and max_val\n          top_data[2 * i * top_k_ + j] = bottom_data_vector[j].second;\n          top_data[2 * i * top_k_ + top_k_ + j] = bottom_data_vector[j].first;\n        }\n      } else {\n        // Produces max_ind per axis\n        top_data[(i / axis_dist * top_k_ + j) * axis_dist + i % axis_dist]\n          = bottom_data_vector[j].second;\n      }\n    }\n  }\n}\n\nINSTANTIATE_CLASS(ArgMaxLayer);\nREGISTER_LAYER_CLASS(ArgMax);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/base_conv_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n#include <iostream>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/base_conv_layer.hpp\"\n#include \"caffe/util/im2col.hpp\"\n#include \"caffe/util/math_functions.hpp\"\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // Configure the kernel size, padding, stride, and inputs.\n  ConvolutionParameter conv_param = this->layer_param_.convolution_param();\n  force_nd_im2col_ = conv_param.force_nd_im2col();\n  channel_axis_ = bottom[0]->CanonicalAxisIndex(conv_param.axis());\n  const int first_spatial_axis = channel_axis_ + 1;\n  const int num_axes = bottom[0]->num_axes();\n  num_spatial_axes_ = num_axes - first_spatial_axis;\n  CHECK_GE(num_spatial_axes_, 0);\n  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);\n  vector<int> spatial_dim_blob_shape(1, std::max(num_spatial_axes_, 1));\n  // Setup filter kernel dimensions (kernel_shape_).\n  kernel_shape_.Reshape(spatial_dim_blob_shape);\n  int* kernel_shape_data = kernel_shape_.mutable_cpu_data();\n  if (conv_param.has_kernel_h() || conv_param.has_kernel_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"kernel_h & kernel_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.kernel_size_size())\n        << \"Either kernel_size or kernel_h/w should be specified; not both.\";\n    kernel_shape_data[0] = conv_param.kernel_h();\n    kernel_shape_data[1] = conv_param.kernel_w();\n  } else {\n    const int num_kernel_dims = conv_param.kernel_size_size();\n    CHECK(num_kernel_dims == 1 || num_kernel_dims == num_spatial_axes_)\n        << \"kernel_size must be specified once, or once per spatial dimension \"\n        << \"(kernel_size specified \" << num_kernel_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims).\";\n      for (int i = 0; i < num_spatial_axes_; 
++i) {\n        kernel_shape_data[i] =\n            conv_param.kernel_size((num_kernel_dims == 1) ? 0 : i);\n      }\n  }\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    CHECK_GT(kernel_shape_data[i], 0) << \"Filter dimensions must be nonzero.\";\n  }\n  // Setup stride dimensions (stride_).\n  stride_.Reshape(spatial_dim_blob_shape);\n  int* stride_data = stride_.mutable_cpu_data();\n  if (conv_param.has_stride_h() || conv_param.has_stride_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"stride_h & stride_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.stride_size())\n        << \"Either stride or stride_h/w should be specified; not both.\";\n    stride_data[0] = conv_param.stride_h();\n    stride_data[1] = conv_param.stride_w();\n  } else {\n    const int num_stride_dims = conv_param.stride_size();\n    CHECK(num_stride_dims == 0 || num_stride_dims == 1 ||\n          num_stride_dims == num_spatial_axes_)\n        << \"stride must be specified once, or once per spatial dimension \"\n        << \"(stride specified \" << num_stride_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims).\";\n    const int kDefaultStride = 1;\n    for (int i = 0; i < num_spatial_axes_; ++i) {\n      stride_data[i] = (num_stride_dims == 0) ? kDefaultStride :\n          conv_param.stride((num_stride_dims == 1) ? 
0 : i);\n      CHECK_GT(stride_data[i], 0) << \"Stride dimensions must be nonzero.\";\n    }\n  }\n  // Setup pad dimensions (pad_).\n  pad_.Reshape(spatial_dim_blob_shape);\n  int* pad_data = pad_.mutable_cpu_data();\n  if (conv_param.has_pad_h() || conv_param.has_pad_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"pad_h & pad_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.pad_size())\n        << \"Either pad or pad_h/w should be specified; not both.\";\n    pad_data[0] = conv_param.pad_h();\n    pad_data[1] = conv_param.pad_w();\n  } else {\n    const int num_pad_dims = conv_param.pad_size();\n    CHECK(num_pad_dims == 0 || num_pad_dims == 1 ||\n          num_pad_dims == num_spatial_axes_)\n        << \"pad must be specified once, or once per spatial dimension \"\n        << \"(pad specified \" << num_pad_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims).\";\n    const int kDefaultPad = 0;\n    for (int i = 0; i < num_spatial_axes_; ++i) {\n      pad_data[i] = (num_pad_dims == 0) ? kDefaultPad :\n          conv_param.pad((num_pad_dims == 1) ? 0 : i);\n    }\n  }\n  // Setup dilation dimensions (dilation_).\n  dilation_.Reshape(spatial_dim_blob_shape);\n  int* dilation_data = dilation_.mutable_cpu_data();\n  const int num_dilation_dims = conv_param.dilation_size();\n  CHECK(num_dilation_dims == 0 || num_dilation_dims == 1 ||\n        num_dilation_dims == num_spatial_axes_)\n      << \"dilation must be specified once, or once per spatial dimension \"\n      << \"(dilation specified \" << num_dilation_dims << \" times; \"\n      << num_spatial_axes_ << \" spatial dims).\";\n  const int kDefaultDilation = 1;\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    dilation_data[i] = (num_dilation_dims == 0) ? kDefaultDilation :\n                       conv_param.dilation((num_dilation_dims == 1) ? 
0 : i);\n  }\n  // Special case: im2col is the identity for 1x1 convolution with stride 1\n  // and no padding, so flag for skipping the buffer and transformation.\n  is_1x1_ = true;\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    is_1x1_ &=\n        kernel_shape_data[i] == 1 && stride_data[i] == 1 && pad_data[i] == 0;\n    if (!is_1x1_) { break; }\n  }\n  // Configure output channels and groups.\n  channels_ = bottom[0]->shape(channel_axis_);\n  num_output_ = this->layer_param_.convolution_param().num_output();\n  CHECK_GT(num_output_, 0);\n  group_ = this->layer_param_.convolution_param().group();\n  CHECK_EQ(channels_ % group_, 0);\n  CHECK_EQ(num_output_ % group_, 0)\n      << \"Number of output should be multiples of group.\";\n  if (reverse_dimensions()) {\n    conv_out_channels_ = channels_;\n    conv_in_channels_ = num_output_;\n  } else {\n    conv_out_channels_ = num_output_;\n    conv_in_channels_ = channels_;\n  }\n  // Handle the parameters: weights and biases.\n  // - blobs_[0] holds the filter weights\n  // - blobs_[1] holds the biases (optional)\n  vector<int> weight_shape(2);\n  weight_shape[0] = conv_out_channels_;\n  weight_shape[1] = conv_in_channels_ / group_;\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    weight_shape.push_back(kernel_shape_data[i]);\n  }\n  bias_term_ = this->layer_param_.convolution_param().bias_term();\n  vector<int> bias_shape(bias_term_, num_output_);\n  if (this->blobs_.size() > 0) {\n    CHECK_EQ(1 + bias_term_, this->blobs_.size())\n        << \"Incorrect number of weight blobs.\";\n    if (weight_shape != this->blobs_[0]->shape()) {\n      Blob<Dtype> weight_shaped_blob(weight_shape);\n      LOG(FATAL) << \"Incorrect weight shape: expected shape \"\n          << weight_shaped_blob.shape_string() << \"; instead, shape was \"\n          << this->blobs_[0]->shape_string();\n    }\n    if (bias_term_ && bias_shape != this->blobs_[1]->shape()) {\n      Blob<Dtype> bias_shaped_blob(bias_shape);\n      
LOG(FATAL) << \"Incorrect bias shape: expected shape \"\n          << bias_shaped_blob.shape_string() << \"; instead, shape was \"\n          << this->blobs_[1]->shape_string();\n    }\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else {\n    if (bias_term_) {\n      this->blobs_.resize(2);\n    } else {\n      this->blobs_.resize(1);\n    }\n    // Initialize and fill the weights:\n    // output channels x input channels per-group x kernel height x kernel width\n    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));\n    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(\n        this->layer_param_.convolution_param().weight_filler()));\n    weight_filler->Fill(this->blobs_[0].get());\n    // If necessary, initialize and fill the biases.\n    if (bias_term_) {\n      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));\n      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(\n          this->layer_param_.convolution_param().bias_filler()));\n      bias_filler->Fill(this->blobs_[1].get());\n    }\n  }\n  kernel_dim_ = this->blobs_[0]->count(1);\n  weight_offset_ = conv_out_channels_ * kernel_dim_ / group_;\n  // Propagate gradients to the parameters (as directed by backward pass).\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n        \n\n  const int first_spatial_axis = channel_axis_ + 1;\n  CHECK_EQ(bottom[0]->num_axes(), first_spatial_axis + num_spatial_axes_)\n      << \"bottom num_axes may not change.\";\n  num_ = bottom[0]->count(0, channel_axis_);\n  CHECK_EQ(bottom[0]->shape(channel_axis_), channels_)\n      << \"Input size incompatible with convolution kernel.\";\n  // TODO: generalize to handle inputs of different shapes.\n  for (int bottom_id = 1; bottom_id < bottom.size(); ++bottom_id) {\n    CHECK(bottom[0]->shape() == bottom[bottom_id]->shape())\n   
     << \"All inputs must have the same shape.\";\n  }\n  // Shape the tops.\n  bottom_shape_ = &bottom[0]->shape();\n  compute_output_shape();\n  vector<int> top_shape(bottom[0]->shape().begin(),\n      bottom[0]->shape().begin() + channel_axis_);\n  top_shape.push_back(num_output_);\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    top_shape.push_back(output_shape_[i]);\n  }\n  for (int top_id = 0; top_id < top.size(); ++top_id) {\n    top[top_id]->Reshape(top_shape);\n  }\n  if (reverse_dimensions()) {\n    conv_out_spatial_dim_ = bottom[0]->count(first_spatial_axis);\n  } else {\n    conv_out_spatial_dim_ = top[0]->count(first_spatial_axis);\n  }\n  col_offset_ = kernel_dim_ * conv_out_spatial_dim_;\n  output_offset_ = conv_out_channels_ * conv_out_spatial_dim_ / group_;\n  // Setup input dimensions (conv_input_shape_).\n  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);\n  conv_input_shape_.Reshape(bottom_dim_blob_shape);\n  int* conv_input_shape_data = conv_input_shape_.mutable_cpu_data();\n  for (int i = 0; i < num_spatial_axes_ + 1; ++i) {\n    if (reverse_dimensions()) {\n      conv_input_shape_data[i] = top[0]->shape(channel_axis_ + i);\n    } else {\n      conv_input_shape_data[i] = bottom[0]->shape(channel_axis_ + i);\n    }\n  }\n  // The im2col result buffer will only hold one image at a time to avoid\n  // overly large memory usage. 
In the special case of 1x1 convolution\n  // it goes lazily unused to save memory.\n  col_buffer_shape_.clear();\n  col_buffer_shape_.push_back(kernel_dim_ * group_);\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    if (reverse_dimensions()) {\n      col_buffer_shape_.push_back(input_shape(i + 1));\n    } else {\n      col_buffer_shape_.push_back(output_shape_[i]);\n    }\n  }\n  col_buffer_.Reshape(col_buffer_shape_);\n  bottom_dim_ = bottom[0]->count(channel_axis_);\n  top_dim_ = top[0]->count(channel_axis_);\n  num_kernels_im2col_ = conv_in_channels_ * conv_out_spatial_dim_;\n  num_kernels_col2im_ = reverse_dimensions() ? top_dim_ : bottom_dim_;\n  // Set up the all ones \"bias multiplier\" for adding biases by BLAS\n  out_spatial_dim_ = top[0]->count(first_spatial_axis);\n  if (bias_term_) {\n    vector<int> bias_multiplier_shape(1, out_spatial_dim_);\n    bias_multiplier_.Reshape(bias_multiplier_shape);\n    caffe_set(bias_multiplier_.count(), Dtype(1),\n        bias_multiplier_.mutable_cpu_data());\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::forward_cpu_gemm(const Dtype* input,\n    const Dtype* weights, Dtype* output, bool skip_im2col) {\n  const Dtype* col_buff = input;\n  if (!is_1x1_) {\n    if (!skip_im2col) {\n      conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());\n    }\n    col_buff = col_buffer_.cpu_data();\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /\n        group_, conv_out_spatial_dim_, kernel_dim_,\n        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,\n        (Dtype)0., output + output_offset_ * g);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::forward_cpu_bias(Dtype* output,\n    const Dtype* bias) {\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,\n      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.cpu_data(),\n      (Dtype)1., output);\n}\n\ntemplate 
<typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::backward_cpu_gemm(const Dtype* output,\n    const Dtype* weights, Dtype* input) {\n  Dtype* col_buff = col_buffer_.mutable_cpu_data();\n  if (is_1x1_) {\n    col_buff = input;\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,\n        conv_out_spatial_dim_, conv_out_channels_ / group_,\n        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,\n        (Dtype)0., col_buff + col_offset_ * g);\n  }\n  if (!is_1x1_) {\n    conv_col2im_cpu(col_buff, input);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::weight_cpu_gemm(const Dtype* input,\n    const Dtype* output, Dtype* weights) {\n  const Dtype* col_buff = input;\n  if (!is_1x1_) {\n    conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());\n    col_buff = col_buffer_.cpu_data();\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,\n        kernel_dim_, conv_out_spatial_dim_,\n        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,\n        (Dtype)1., weights + weight_offset_ * g);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::backward_cpu_bias(Dtype* bias,\n    const Dtype* input) {\n  caffe_cpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,\n      input, bias_multiplier_.cpu_data(), 1., bias);\n}\n\n#ifndef CPU_ONLY\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::forward_gpu_gemm(const Dtype* input,\n    const Dtype* weights, Dtype* output, bool skip_im2col) {\n  const Dtype* col_buff = input;\n  if (!is_1x1_) {\n    if (!skip_im2col) {\n      conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());\n    }\n    col_buff = col_buffer_.gpu_data();\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, 
conv_out_channels_ /\n        group_, conv_out_spatial_dim_, kernel_dim_,\n        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,\n        (Dtype)0., output + output_offset_ * g);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::forward_gpu_bias(Dtype* output,\n    const Dtype* bias) {\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,\n      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.gpu_data(),\n      (Dtype)1., output);\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::backward_gpu_gemm(const Dtype* output,\n    const Dtype* weights, Dtype* input) {\n  Dtype* col_buff = col_buffer_.mutable_gpu_data();\n  if (is_1x1_) {\n    col_buff = input;\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,\n        conv_out_spatial_dim_, conv_out_channels_ / group_,\n        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,\n        (Dtype)0., col_buff + col_offset_ * g);\n  }\n  if (!is_1x1_) {\n    conv_col2im_gpu(col_buff, input);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::weight_gpu_gemm(const Dtype* input,\n    const Dtype* output, Dtype* weights) {\n  const Dtype* col_buff = input;\n  if (!is_1x1_) {\n    conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());\n    col_buff = col_buffer_.gpu_data();\n  }\n  for (int g = 0; g < group_; ++g) {\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,\n        kernel_dim_, conv_out_spatial_dim_,\n        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,\n        (Dtype)1., weights + weight_offset_ * g);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BaseConvolutionLayer<Dtype>::backward_gpu_bias(Dtype* bias,\n    const Dtype* input) {\n  caffe_gpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,\n      input, bias_multiplier_.gpu_data(), 1., bias);\n}\n\n#endif  // 
!CPU_ONLY\n\nINSTANTIATE_CLASS(BaseConvolutionLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/base_data_layer.cpp",
    "content": "#include <boost/thread.hpp>\n#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/blocking_queue.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nBaseDataLayer<Dtype>::BaseDataLayer(const LayerParameter& param)\n    : Layer<Dtype>(param),\n      transform_param_(param.transform_param()) {\n}\n\ntemplate <typename Dtype>\nvoid BaseDataLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (top.size() == 1) {\n    output_labels_ = false;\n  } else {\n    output_labels_ = true;\n  }\n  data_transformer_.reset(\n      new DataTransformer<Dtype>(transform_param_, this->phase_));\n  data_transformer_->InitRand();\n  // The subclasses should setup the size of bottom and top\n  DataLayerSetUp(bottom, top);\n}\n\ntemplate <typename Dtype>\nBasePrefetchingDataLayer<Dtype>::BasePrefetchingDataLayer(\n    const LayerParameter& param)\n    : BaseDataLayer<Dtype>(param),\n      prefetch_free_(), prefetch_full_() {\n  for (int i = 0; i < PREFETCH_COUNT; ++i) {\n    prefetch_free_.push(&prefetch_[i]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BasePrefetchingDataLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  BaseDataLayer<Dtype>::LayerSetUp(bottom, top);\n  // Before starting the prefetch thread, we make cpu_data and gpu_data\n  // calls so that the prefetch thread does not accidentally make simultaneous\n  // cudaMalloc calls when the main thread is running. 
In some GPUs this\n  // seems to cause failures if we do not so.\n  for (int i = 0; i < PREFETCH_COUNT; ++i) {\n    prefetch_[i].data_.mutable_cpu_data();\n    if (this->output_labels_) {\n      prefetch_[i].label_.mutable_cpu_data();\n    }\n  }\n#ifndef CPU_ONLY\n  if (Caffe::mode() == Caffe::GPU) {\n    for (int i = 0; i < PREFETCH_COUNT; ++i) {\n      prefetch_[i].data_.mutable_gpu_data();\n      if (this->output_labels_) {\n        prefetch_[i].label_.mutable_gpu_data();\n      }\n    }\n  }\n#endif\n  DLOG(INFO) << \"Initializing prefetch\";\n  this->data_transformer_->InitRand();\n  StartInternalThread();\n  DLOG(INFO) << \"Prefetch initialized.\";\n}\n\ntemplate <typename Dtype>\nvoid BasePrefetchingDataLayer<Dtype>::InternalThreadEntry() {\n#ifndef CPU_ONLY\n  cudaStream_t stream;\n  if (Caffe::mode() == Caffe::GPU) {\n    CUDA_CHECK(cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking));\n  }\n#endif\n\n  try {\n    while (!must_stop()) {\n      Batch<Dtype>* batch = prefetch_free_.pop();\n      load_batch(batch);\n#ifndef CPU_ONLY\n      if (Caffe::mode() == Caffe::GPU) {\n        batch->data_.data().get()->async_gpu_push(stream);\n        CUDA_CHECK(cudaStreamSynchronize(stream));\n      }\n#endif\n      prefetch_full_.push(batch);\n    }\n  } catch (boost::thread_interrupted&) {\n    // Interrupted exception is expected on shutdown\n  }\n#ifndef CPU_ONLY\n  if (Caffe::mode() == Caffe::GPU) {\n    CUDA_CHECK(cudaStreamDestroy(stream));\n  }\n#endif\n}\n\ntemplate <typename Dtype>\nvoid BasePrefetchingDataLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  Batch<Dtype>* batch = prefetch_full_.pop(\"Data layer prefetch queue empty\");\n  // Reshape to loaded data.\n  top[0]->ReshapeLike(batch->data_);\n  // Copy the data\n  caffe_copy(batch->data_.count(), batch->data_.cpu_data(),\n             top[0]->mutable_cpu_data());\n  DLOG(INFO) << \"Prefetch copied\";\n  if (this->output_labels_) {\n    // 
Reshape to loaded labels.\n    top[1]->ReshapeLike(batch->label_);\n    // Copy the labels.\n    caffe_copy(batch->label_.count(), batch->label_.cpu_data(),\n        top[1]->mutable_cpu_data());\n  }\n\n  prefetch_free_.push(batch);\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU_FORWARD(BasePrefetchingDataLayer, Forward);\n#endif\n\nINSTANTIATE_CLASS(BaseDataLayer);\nINSTANTIATE_CLASS(BasePrefetchingDataLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/base_data_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/base_data_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid BasePrefetchingDataLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  Batch<Dtype>* batch = prefetch_full_.pop(\"Data layer prefetch queue empty\");\n  // Reshape to loaded data.\n  top[0]->ReshapeLike(batch->data_);\n  // Copy the data\n  caffe_copy(batch->data_.count(), batch->data_.gpu_data(),\n      top[0]->mutable_gpu_data());\n  if (this->output_labels_) {\n    // Reshape to loaded labels.\n    top[1]->ReshapeLike(batch->label_);\n    // Copy the labels.\n    caffe_copy(batch->label_.count(), batch->label_.gpu_data(),\n        top[1]->mutable_gpu_data());\n  }\n  // Ensure the copy is synchronous wrt the host, so that the next batch isn't\n  // copied in meanwhile.\n  CUDA_CHECK(cudaStreamSynchronize(cudaStreamDefault));\n  prefetch_free_.push(batch);\n}\n\nINSTANTIATE_LAYER_GPU_FORWARD(BasePrefetchingDataLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/batch_norm_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/batch_norm_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  BatchNormParameter param = this->layer_param_.batch_norm_param();\n  moving_average_fraction_ = param.moving_average_fraction();\n  use_global_stats_ = this->phase_ == TEST;\n  if (param.has_use_global_stats())\n    use_global_stats_ = param.use_global_stats();\n  if (bottom[0]->num_axes() == 1)\n    channels_ = 1;\n  else\n    channels_ = bottom[0]->shape(1);\n  eps_ = param.eps();\n  if (this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else {\n    this->blobs_.resize(3);\n    vector<int> sz;\n    sz.push_back(channels_);\n    this->blobs_[0].reset(new Blob<Dtype>(sz));\n    this->blobs_[1].reset(new Blob<Dtype>(sz));\n    sz[0]=1;\n    this->blobs_[2].reset(new Blob<Dtype>(sz));\n    for (int i = 0; i < 3; ++i) {\n      caffe_set(this->blobs_[i]->count(), Dtype(0),\n                this->blobs_[i]->mutable_cpu_data());\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (bottom[0]->num_axes() >= 1)\n    CHECK_EQ(bottom[0]->shape(1), channels_);\n  top[0]->ReshapeLike(*bottom[0]);\n\n  vector<int> sz;\n  sz.push_back(channels_);\n  mean_.Reshape(sz);\n  variance_.Reshape(sz);\n  temp_.ReshapeLike(*bottom[0]);\n  x_norm_.ReshapeLike(*bottom[0]);\n  sz[0]=bottom[0]->shape(0);\n  batch_sum_multiplier_.Reshape(sz);\n\n  int spatial_dim = bottom[0]->count()/(channels_*bottom[0]->shape(0));\n  if (spatial_sum_multiplier_.num_axes() == 0 ||\n      spatial_sum_multiplier_.shape(0) != spatial_dim) {\n    sz[0] = spatial_dim;\n    spatial_sum_multiplier_.Reshape(sz);\n    Dtype* multiplier_data = 
spatial_sum_multiplier_.mutable_cpu_data();\n    caffe_set(spatial_sum_multiplier_.count(), Dtype(1), multiplier_data);\n  }\n\n  int numbychans = channels_*bottom[0]->shape(0);\n  if (num_by_chans_.num_axes() == 0 ||\n      num_by_chans_.shape(0) != numbychans) {\n    sz[0] = numbychans;\n    num_by_chans_.Reshape(sz);\n    caffe_set(batch_sum_multiplier_.count(), Dtype(1),\n        batch_sum_multiplier_.mutable_cpu_data());\n  }\n}\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  int num = bottom[0]->shape(0);\n  int spatial_dim = bottom[0]->count()/(bottom[0]->shape(0)*channels_);\n\n  if (bottom[0] != top[0]) {\n    caffe_copy(bottom[0]->count(), bottom_data, top_data);\n  }\n\n  if (use_global_stats_) {\n    // use the stored mean/variance estimates.\n    const Dtype scale_factor = this->blobs_[2]->cpu_data()[0] == 0 ?\n        0 : 1 / this->blobs_[2]->cpu_data()[0];\n    caffe_cpu_scale(variance_.count(), scale_factor,\n        this->blobs_[0]->cpu_data(), mean_.mutable_cpu_data());\n    caffe_cpu_scale(variance_.count(), scale_factor,\n        this->blobs_[1]->cpu_data(), variance_.mutable_cpu_data());\n  } else {\n    // compute mean\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim,\n        1. 
/ (num * spatial_dim), bottom_data,\n        spatial_sum_multiplier_.cpu_data(), 0.,\n        num_by_chans_.mutable_cpu_data());\n    caffe_cpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n        num_by_chans_.cpu_data(), batch_sum_multiplier_.cpu_data(), 0.,\n        mean_.mutable_cpu_data());\n  }\n\n  // subtract mean\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.cpu_data(), mean_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, -1, num_by_chans_.cpu_data(),\n      spatial_sum_multiplier_.cpu_data(), 1., top_data);\n\n  if (!use_global_stats_) {\n    // compute variance using var(X) = E((X-EX)^2)\n    caffe_powx(top[0]->count(), top_data, Dtype(2),\n        temp_.mutable_cpu_data());  // (X-EX)^2\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim,\n        1. / (num * spatial_dim), temp_.cpu_data(),\n        spatial_sum_multiplier_.cpu_data(), 0.,\n        num_by_chans_.mutable_cpu_data());\n    caffe_cpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n        num_by_chans_.cpu_data(), batch_sum_multiplier_.cpu_data(), 0.,\n        variance_.mutable_cpu_data());  // E((X_EX)^2)\n\n    // compute and save moving average\n    this->blobs_[2]->mutable_cpu_data()[0] *= moving_average_fraction_;\n    this->blobs_[2]->mutable_cpu_data()[0] += 1;\n    caffe_cpu_axpby(mean_.count(), Dtype(1), mean_.cpu_data(),\n        moving_average_fraction_, this->blobs_[0]->mutable_cpu_data());\n    int m = bottom[0]->count()/channels_;\n    Dtype bias_correction_factor = m > 1 ? 
Dtype(m)/(m-1) : 1;\n    caffe_cpu_axpby(variance_.count(), bias_correction_factor,\n        variance_.cpu_data(), moving_average_fraction_,\n        this->blobs_[1]->mutable_cpu_data());\n  }\n\n  // normalize variance\n  caffe_add_scalar(variance_.count(), eps_, variance_.mutable_cpu_data());\n  caffe_powx(variance_.count(), variance_.cpu_data(), Dtype(0.5),\n             variance_.mutable_cpu_data());\n\n  // replicate variance to input size\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.cpu_data(), variance_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, 1., num_by_chans_.cpu_data(),\n      spatial_sum_multiplier_.cpu_data(), 0., temp_.mutable_cpu_data());\n  caffe_div(temp_.count(), top_data, temp_.cpu_data(), top_data);\n  // TODO(cdoersch): The caching is only needed because later in-place layers\n  //                 might clobber the data.  
Can we skip this if they won't?\n  caffe_copy(x_norm_.count(), top_data,\n      x_norm_.mutable_cpu_data());\n}\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff;\n  if (bottom[0] != top[0]) {\n    top_diff = top[0]->cpu_diff();\n  } else {\n    caffe_copy(x_norm_.count(), top[0]->cpu_diff(), x_norm_.mutable_cpu_diff());\n    top_diff = x_norm_.cpu_diff();\n  }\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  if (use_global_stats_) {\n    caffe_div(temp_.count(), top_diff, temp_.cpu_data(), bottom_diff);\n    return;\n  }\n  const Dtype* top_data = x_norm_.cpu_data();\n  int num = bottom[0]->shape()[0];\n  int spatial_dim = bottom[0]->count()/(bottom[0]->shape(0)*channels_);\n  // if Y = (X-mean(X))/(sqrt(var(X)+eps)), then\n  //\n  // dE(Y)/dX =\n  //   (dE/dY - mean(dE/dY) - mean(dE/dY \\cdot Y) \\cdot Y)\n  //     ./ sqrt(var(X) + eps)\n  //\n  // where \\cdot and ./ are hadamard product and elementwise division,\n  // respectively, dE/dY is the top diff, and mean/var/sum are all computed\n  // along all dimensions except the channels dimension.  In the above\n  // equation, the operations allow for expansion (i.e. 
broadcast) along all\n  // dimensions except the channels dimension where required.\n\n  // sum(dE/dY \\cdot Y)\n  caffe_mul(temp_.count(), top_data, top_diff, bottom_diff);\n  caffe_cpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim, 1.,\n      bottom_diff, spatial_sum_multiplier_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n      num_by_chans_.cpu_data(), batch_sum_multiplier_.cpu_data(), 0.,\n      mean_.mutable_cpu_data());\n\n  // reshape (broadcast) the above\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.cpu_data(), mean_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, 1., num_by_chans_.cpu_data(),\n      spatial_sum_multiplier_.cpu_data(), 0., bottom_diff);\n\n  // sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_mul(temp_.count(), top_data, bottom_diff, bottom_diff);\n\n  // sum(dE/dY)-sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_cpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim, 1.,\n      top_diff, spatial_sum_multiplier_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n      num_by_chans_.cpu_data(), batch_sum_multiplier_.cpu_data(), 0.,\n      mean_.mutable_cpu_data());\n  // reshape (broadcast) the above to make\n  // sum(dE/dY)-sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.cpu_data(), mean_.cpu_data(), 0.,\n      num_by_chans_.mutable_cpu_data());\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num * channels_,\n      spatial_dim, 1, 1., num_by_chans_.cpu_data(),\n      spatial_sum_multiplier_.cpu_data(), 1., bottom_diff);\n\n  // dE/dY - mean(dE/dY)-mean(dE/dY \\cdot Y) \\cdot Y\n  caffe_cpu_axpby(temp_.count(), Dtype(1), top_diff,\n      Dtype(-1. 
/ (num * spatial_dim)), bottom_diff);\n\n  // note: temp_ still contains sqrt(var(X)+eps), computed during the forward\n  // pass.\n  caffe_div(temp_.count(), bottom_diff, temp_.cpu_data(), bottom_diff);\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(BatchNormLayer);\n#endif\n\nINSTANTIATE_CLASS(BatchNormLayer);\nREGISTER_LAYER_CLASS(BatchNorm);\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/batch_norm_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/batch_norm_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int num = bottom[0]->shape(0);\n  int spatial_dim = bottom[0]->count()/(channels_*bottom[0]->shape(0));\n\n  if (bottom[0] != top[0]) {\n    caffe_copy(bottom[0]->count(), bottom_data, top_data);\n  }\n\n\n  if (use_global_stats_) {\n    // use the stored mean/variance estimates.\n    const Dtype scale_factor = this->blobs_[2]->cpu_data()[0] == 0 ?\n        0 : 1 / this->blobs_[2]->cpu_data()[0];\n    caffe_gpu_scale(variance_.count(), scale_factor,\n        this->blobs_[0]->gpu_data(), mean_.mutable_gpu_data());\n    caffe_gpu_scale(variance_.count(), scale_factor,\n        this->blobs_[1]->gpu_data(), variance_.mutable_gpu_data());\n  } else {\n    // compute mean\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim,\n        1. 
/ (num * spatial_dim), bottom_data,\n        spatial_sum_multiplier_.gpu_data(), 0.,\n        num_by_chans_.mutable_gpu_data());\n    caffe_gpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n        num_by_chans_.gpu_data(), batch_sum_multiplier_.gpu_data(), 0.,\n        mean_.mutable_gpu_data());\n  }\n\n  // subtract mean\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.gpu_data(), mean_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, -1, num_by_chans_.gpu_data(),\n      spatial_sum_multiplier_.gpu_data(), 1., top_data);\n\n  if (!use_global_stats_) {\n    // compute variance using var(X) = E((X-EX)^2)\n    caffe_gpu_powx(top[0]->count(), top_data, Dtype(2),\n        temp_.mutable_gpu_data());  // (X-EX)^2\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim,\n        1. / (num * spatial_dim), temp_.gpu_data(),\n        spatial_sum_multiplier_.gpu_data(), 0.,\n        num_by_chans_.mutable_gpu_data());\n    caffe_gpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n        num_by_chans_.gpu_data(), batch_sum_multiplier_.gpu_data(), 0.,\n        variance_.mutable_gpu_data());  // E((X_EX)^2)\n\n    // compute and save moving average\n    this->blobs_[2]->mutable_cpu_data()[0] *= moving_average_fraction_;\n    this->blobs_[2]->mutable_cpu_data()[0] += 1;\n    caffe_gpu_axpby(mean_.count(), Dtype(1), mean_.gpu_data(),\n        moving_average_fraction_, this->blobs_[0]->mutable_gpu_data());\n    int m = bottom[0]->count()/channels_;\n    Dtype bias_correction_factor = m > 1 ? 
Dtype(m)/(m-1) : 1;\n    caffe_gpu_axpby(variance_.count(), bias_correction_factor,\n        variance_.gpu_data(), moving_average_fraction_,\n        this->blobs_[1]->mutable_gpu_data());\n  }\n\n  // normalize variance\n  caffe_gpu_add_scalar(variance_.count(), eps_, variance_.mutable_gpu_data());\n  caffe_gpu_powx(variance_.count(), variance_.gpu_data(), Dtype(0.5),\n      variance_.mutable_gpu_data());\n\n  // replicate variance to input size\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.gpu_data(), variance_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, 1., num_by_chans_.gpu_data(),\n      spatial_sum_multiplier_.gpu_data(), 0., temp_.mutable_gpu_data());\n  caffe_gpu_div(temp_.count(), top_data, temp_.gpu_data(), top_data);\n  // TODO(cdoersch): The caching is only needed because later in-place layers\n  //                 might clobber the data.  
Can we skip this if they won't?\n  caffe_copy(x_norm_.count(), top_data,\n      x_norm_.mutable_gpu_data());\n}\n\ntemplate <typename Dtype>\nvoid BatchNormLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff;\n  if (bottom[0] != top[0]) {\n    top_diff = top[0]->gpu_diff();\n  } else {\n    caffe_copy(x_norm_.count(), top[0]->gpu_diff(), x_norm_.mutable_gpu_diff());\n    top_diff = x_norm_.gpu_diff();\n  }\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  if (use_global_stats_) {\n    caffe_gpu_div(temp_.count(), top_diff, temp_.gpu_data(), bottom_diff);\n    return;\n  }\n  const Dtype* top_data = x_norm_.gpu_data();\n  int num = bottom[0]->shape()[0];\n  int spatial_dim = bottom[0]->count()/(channels_*bottom[0]->shape(0));\n  // if Y = (X-mean(X))/(sqrt(var(X)+eps)), then\n  //\n  // dE(Y)/dX =\n  //   (dE/dY - mean(dE/dY) - mean(dE/dY \\cdot Y) \\cdot Y)\n  //     ./ sqrt(var(X) + eps)\n  //\n  // where \\cdot and ./ are hadamard product and elementwise division,\n  // respectively, dE/dY is the top diff, and mean/var/sum are all computed\n  // along all dimensions except the channels dimension.  In the above\n  // equation, the operations allow for expansion (i.e. 
broadcast) along all\n  // dimensions except the channels dimension where required.\n\n  // sum(dE/dY \\cdot Y)\n  caffe_gpu_mul(temp_.count(), top_data, top_diff, bottom_diff);\n  caffe_gpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim, 1.,\n      bottom_diff, spatial_sum_multiplier_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n      num_by_chans_.gpu_data(), batch_sum_multiplier_.gpu_data(), 0.,\n      mean_.mutable_gpu_data());\n\n  // reshape (broadcast) the above\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.gpu_data(), mean_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels_ * num,\n      spatial_dim, 1, 1., num_by_chans_.gpu_data(),\n      spatial_sum_multiplier_.gpu_data(), 0., bottom_diff);\n\n  // sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_gpu_mul(temp_.count(), top_data, bottom_diff, bottom_diff);\n\n  // sum(dE/dY)-sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_gpu_gemv<Dtype>(CblasNoTrans, channels_ * num, spatial_dim, 1.,\n      top_diff, spatial_sum_multiplier_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemv<Dtype>(CblasTrans, num, channels_, 1.,\n      num_by_chans_.gpu_data(), batch_sum_multiplier_.gpu_data(), 0.,\n      mean_.mutable_gpu_data());\n  // reshape (broadcast) the above to make\n  // sum(dE/dY)-sum(dE/dY \\cdot Y) \\cdot Y\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, channels_, 1, 1,\n      batch_sum_multiplier_.gpu_data(), mean_.gpu_data(), 0.,\n      num_by_chans_.mutable_gpu_data());\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num * channels_,\n      spatial_dim, 1, 1., num_by_chans_.gpu_data(),\n      spatial_sum_multiplier_.gpu_data(), 1., bottom_diff);\n\n  // dE/dY - mean(dE/dY)-mean(dE/dY \\cdot Y) \\cdot Y\n  caffe_gpu_axpby(temp_.count(), Dtype(1), top_diff,\n      Dtype(-1. 
/ (num * spatial_dim)), bottom_diff);\n\n  // note: temp_ still contains sqrt(var(X)+eps), computed during the forward\n  // pass.\n  caffe_gpu_div(temp_.count(), bottom_diff, temp_.gpu_data(), bottom_diff);\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(BatchNormLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/batch_reindex_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/batch_reindex_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n                                       const vector<Blob<Dtype>*>& top) {\n  CHECK_EQ(1, bottom[1]->num_axes());\n  vector<int> newshape;\n  newshape.push_back(bottom[1]->shape(0));\n  for (int i = 1; i < bottom[0]->shape().size(); ++i) {\n    newshape.push_back(bottom[0]->shape()[i]);\n  }\n  top[0]->Reshape(newshape);\n}\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::check_batch_reindex(int initial_num,\n                                                   int final_num,\n                                                   const Dtype* ridx_data) {\n  for (int i = 0; i < final_num; ++i) {\n    CHECK_GE(ridx_data[i], 0)\n        << \"Index specified for reindex layer was negative.\";\n    CHECK_LT(ridx_data[i], initial_num)\n        << \"Index specified for reindex layer was greater than batch size.\";\n  }\n}\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n                                           const vector<Blob<Dtype>*>& top) {\n  check_batch_reindex(bottom[0]->shape(0), bottom[1]->count(),\n                      bottom[1]->cpu_data());\n  if (top[0]->count() == 0) {\n    return;\n  }\n  int inner_dim = bottom[0]->count() / bottom[0]->shape(0);\n  const Dtype* in = bottom[0]->cpu_data();\n  const Dtype* permut = bottom[1]->cpu_data();\n  Dtype* out = top[0]->mutable_cpu_data();\n  for (int index = 0; index < top[0]->count(); ++index) {\n    int n = index / (inner_dim);\n    int in_n = static_cast<int>(permut[n]);\n    out[index] = in[in_n * (inner_dim) + index % (inner_dim)];\n  }\n}\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::Backward_cpu(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    
const vector<Blob<Dtype>*>& bottom) {\n  CHECK(!propagate_down[1]) << \"Cannot backprop to index.\";\n  if (!propagate_down[0]) {\n    return;\n  }\n  int inner_dim = bottom[0]->count() / bottom[0]->shape(0);\n  Dtype* bot_diff = bottom[0]->mutable_cpu_diff();\n  const Dtype* permut = bottom[1]->cpu_data();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  caffe_set(bottom[0]->count(), Dtype(0), bot_diff);\n  for (int index = 0; index < top[0]->count(); ++index) {\n    int n = index / (inner_dim);\n    int in_n = static_cast<int>(permut[n]);\n    bot_diff[in_n * (inner_dim) + index % (inner_dim)] += top_diff[index];\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(BatchReindexLayer);\n#endif\n\nINSTANTIATE_CLASS(BatchReindexLayer);\nREGISTER_LAYER_CLASS(BatchReindex);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/batch_reindex_layer.cu",
    "content": "#include <algorithm>\n#include <utility>\n#include <vector>\n\n#include \"caffe/layers/batch_reindex_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate<typename Dtype>\n__global__ void BRForward(const int count, const int inner_dim, const Dtype* in,\n                          const Dtype* permut, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, count) {\n    int n = index / (inner_dim);\n    int in_n = static_cast<int>(permut[n]);\n    out[index] = in[in_n * (inner_dim) + index % (inner_dim)];\n  }\n}\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n                                           const vector<Blob<Dtype>*>& top) {\n  check_batch_reindex(bottom[0]->shape(0), bottom[1]->count(),\n                      bottom[1]->cpu_data());\n  if (top[0]->count() == 0) {\n    return;\n  }\n  int threads = top[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  BRForward<Dtype> <<<CAFFE_GET_BLOCKS(threads), CAFFE_CUDA_NUM_THREADS>>>(\n      top[0]->count(), bottom[0]->count() / bottom[0]->shape(0),\n      bottom[0]->gpu_data(), bottom[1]->gpu_data(), top[0]->mutable_gpu_data());\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate<typename Dtype>\n__global__ void BRBackward(const int count, const int inner_dim,\n                           const Dtype* in, const Dtype* top_indexes,\n                           const Dtype* begins, const Dtype* counts,\n                           Dtype* out) {\n  CUDA_KERNEL_LOOP(index, count) {\n    int n = index / (inner_dim);\n    out[index] = 0;\n    int lower = static_cast<int>(begins[n]);\n    int upper = lower + static_cast<int>(counts[n]);\n    for (int i = lower; i < upper; ++i) {\n      int in_n = static_cast<int>(top_indexes[i]);\n      out[index] += in[in_n * (inner_dim) + index % (inner_dim)];\n    }\n  }\n}\n\ntemplate<typename Dtype>\nvoid BatchReindexLayer<Dtype>::Backward_gpu(\n    const vector<Blob<Dtype>*>& top, const 
vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  CHECK(!propagate_down[1]) << \"Cannot backprop to index.\";\n  if (!propagate_down[0]) {\n    return;\n  }\n\n  vector<std::pair<int, int> > mapping;\n  const Dtype* perm = bottom[1]->cpu_data();\n  for (int i = 0; i < bottom[1]->count(); ++i) {\n    mapping.push_back(pair<int, int>(static_cast<int>(perm[i]), i));\n  }\n  std::sort(mapping.begin(), mapping.end(), pair_sort_first());\n\n  // Each element of the bottom diff is potentially the sum of many top diffs.\n  // However, we'd like each CUDA thread to handle exactly one output.  Hence,\n  // we first pre-compute a list of lists of indices that need to be summed for\n  // each output. `top_indexes` holds the data of this list of lists.  The\n  // k'th element of `begins` points to the location in `top_indexes` where the\n  // list for the k'th example begin, and the k'th element of `counts` is the\n  // length of that list.\n  vector<int> shape;\n  shape.push_back(bottom[1]->count());\n  Blob<Dtype> top_indexes(shape);\n  shape[0] = bottom[0]->shape(0);\n  Blob<Dtype> counts(shape);\n  Blob<Dtype> begins(shape);\n  Dtype* t_i_data = top_indexes.mutable_cpu_data();\n  Dtype* c_data = counts.mutable_cpu_data();\n  Dtype* b_data = begins.mutable_cpu_data();\n  caffe_set(begins.count(), Dtype(-1), b_data);\n  caffe_set(counts.count(), Dtype(0), c_data);\n  for (int i = 0; i < mapping.size(); ++i) {\n    t_i_data[i] = mapping[i].second;\n    if (b_data[mapping[i].first] == -1) {\n      b_data[mapping[i].first] = i;\n    }\n    c_data[mapping[i].first] += 1;\n  }\n\n  int threads = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  BRBackward<Dtype> <<<CAFFE_GET_BLOCKS(threads), CAFFE_CUDA_NUM_THREADS>>>(\n      bottom[0]->count(), bottom[0]->count() / bottom[0]->shape(0),\n      top[0]->gpu_diff(), top_indexes.gpu_data(), begins.gpu_data(),\n      counts.gpu_data(), bottom[0]->mutable_gpu_diff());\n  
CUDA_POST_KERNEL_CHECK;\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(BatchReindexLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/bias_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/bias_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (bottom.size() == 1 && this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else if (bottom.size() == 1) {\n    // bias is a learned parameter; initialize it\n    const BiasParameter& param = this->layer_param_.bias_param();\n    const int axis = bottom[0]->CanonicalAxisIndex(param.axis());\n    const int num_axes = param.num_axes();\n    CHECK_GE(num_axes, -1) << \"num_axes must be non-negative, \"\n                           << \"or -1 to extend to the end of bottom[0]\";\n    if (num_axes >= 0) {\n      CHECK_GE(bottom[0]->num_axes(), axis + num_axes)\n          << \"bias blob's shape extends past bottom[0]'s shape when applied \"\n          << \"starting with bottom[0] axis = \" << axis;\n    }\n    this->blobs_.resize(1);\n    const vector<int>::const_iterator& shape_start =\n        bottom[0]->shape().begin() + axis;\n    const vector<int>::const_iterator& shape_end =\n        (num_axes == -1) ? bottom[0]->shape().end() : (shape_start + num_axes);\n    vector<int> bias_shape(shape_start, shape_end);\n    this->blobs_[0].reset(new Blob<Dtype>(bias_shape));\n    shared_ptr<Filler<Dtype> > filler(GetFiller<Dtype>(param.filler()));\n    filler->Fill(this->blobs_[0].get());\n  }\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n}\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const BiasParameter& param = this->layer_param_.bias_param();\n  Blob<Dtype>* bias = (bottom.size() > 1) ? bottom[1] : this->blobs_[0].get();\n  // Always set axis == 0 in special case where bias is a scalar\n  // (num_axes == 0). 
Mathematically equivalent for any choice of axis, so the\n  // actual setting can be safely ignored; and computation is most efficient\n  // with axis == 0 and (therefore) outer_dim_ == 1.\n  const int axis = (bias->num_axes() == 0) ?\n      0 : bottom[0]->CanonicalAxisIndex(param.axis());\n  CHECK_GE(bottom[0]->num_axes(), axis + bias->num_axes())\n      << \"bias blob's shape extends past bottom[0]'s shape when applied \"\n      << \"starting with bottom[0] axis = \" << axis;\n  for (int i = 0; i < bias->num_axes(); ++i) {\n    CHECK_EQ(bottom[0]->shape(axis + i), bias->shape(i))\n        << \"dimension mismatch between bottom[0]->shape(\" << axis + i\n        << \") and bias->shape(\" << i << \")\";\n  }\n  outer_dim_ = bottom[0]->count(0, axis);\n  bias_dim_ = bias->count();\n  inner_dim_ = bottom[0]->count(axis + bias->num_axes());\n  dim_ = bias_dim_ * inner_dim_;\n  if (bottom[0] != top[0]) {\n    top[0]->ReshapeLike(*bottom[0]);\n  }\n  bias_multiplier_.Reshape(vector<int>(1, inner_dim_));\n  if (bias_multiplier_.cpu_data()[inner_dim_ - 1] != Dtype(1)) {\n    caffe_set(inner_dim_, Dtype(1), bias_multiplier_.mutable_cpu_data());\n  }\n}\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bias_data =\n      ((bottom.size() > 1) ? 
bottom[1] : this->blobs_[0].get())->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  if (bottom[0] != top[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    caffe_copy(bottom[0]->count(), bottom_data, top_data);\n  }\n  for (int n = 0; n < outer_dim_; ++n) {\n    caffe_cpu_gemm(CblasNoTrans, CblasNoTrans, bias_dim_,\n        inner_dim_, 1, Dtype(1), bias_data,\n        bias_multiplier_.cpu_data(), Dtype(1), top_data);\n    top_data += dim_;\n  }\n}\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0] && bottom[0] != top[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    caffe_copy(bottom[0]->count(), top_diff, bottom_diff);\n  }\n  // in-place, we don't need to do anything with the data diff\n  const bool bias_param = (bottom.size() == 1);\n  if ((!bias_param && propagate_down[1]) ||\n      (bias_param && this->param_propagate_down_[0])) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bias_diff = (bias_param ? this->blobs_[0].get() : bottom[1])\n        ->mutable_cpu_diff();\n    bool accum = bias_param;\n    for (int n = 0; n < outer_dim_; ++n) {\n      caffe_cpu_gemv(CblasNoTrans, bias_dim_, inner_dim_, Dtype(1),\n          top_diff, bias_multiplier_.cpu_data(), Dtype(accum), bias_diff);\n      top_diff += dim_;\n      accum = true;\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(BiasLayer);\n#endif\n\nINSTANTIATE_CLASS(BiasLayer);\nREGISTER_LAYER_CLASS(Bias);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/bias_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/bias_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void BiasForward(const int n, const Dtype* in,\n    const Dtype* bias, const int bias_dim, const int inner_dim,\n    Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const int bias_index = (index / inner_dim) % bias_dim;\n    out[index] = in[index] + bias[bias_index];\n  }\n}\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int count = top[0]->count();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  const Dtype* bias_data =\n      ((bottom.size() > 1) ? bottom[1] : this->blobs_[0].get())->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  BiasForward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, bias_data, bias_dim_, inner_dim_, top_data);\n}\n\ntemplate <typename Dtype>\nvoid BiasLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0] && bottom[0] != top[0]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    caffe_copy(bottom[0]->count(), top_diff, bottom_diff);\n  }\n  // in-place, we don't need to do anything with the data diff\n  const bool bias_param = (bottom.size() == 1);\n  if ((!bias_param && propagate_down[1]) ||\n      (bias_param && this->param_propagate_down_[0])) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bias_diff = (bias_param ? 
this->blobs_[0].get() : bottom[1])\n        ->mutable_gpu_diff();\n    bool accum = bias_param;\n    for (int n = 0; n < outer_dim_; ++n) {\n      caffe_gpu_gemv(CblasNoTrans, bias_dim_, inner_dim_, Dtype(1),\n          top_diff, bias_multiplier_.gpu_data(), Dtype(accum), bias_diff);\n      top_diff += dim_;\n      accum = true;\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(BiasLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/bnll_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/bnll_layer.hpp\"\n\nnamespace caffe {\n\nconst float kBNLL_THRESHOLD = 50.;\n\ntemplate <typename Dtype>\nvoid BNLLLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = bottom_data[i] > 0 ?\n        bottom_data[i] + log(1. + exp(-bottom_data[i])) :\n        log(1. + exp(bottom_data[i]));\n  }\n}\n\ntemplate <typename Dtype>\nvoid BNLLLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    Dtype expval;\n    for (int i = 0; i < count; ++i) {\n      expval = exp(std::min(bottom_data[i], Dtype(kBNLL_THRESHOLD)));\n      bottom_diff[i] = top_diff[i] * expval / (expval + 1.);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(BNLLLayer);\n#endif\n\nINSTANTIATE_CLASS(BNLLLayer);\nREGISTER_LAYER_CLASS(BNLL);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/bnll_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/bnll_layer.hpp\"\n\nnamespace caffe {\n\nconst float kBNLL_THRESHOLD = 50.;\n\ntemplate <typename Dtype>\n__global__ void BNLLForward(const int n, const Dtype* in, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = in[index] > 0 ?\n        in[index] + log(1. + exp(-in[index])) :\n        log(1. + exp(in[index]));\n  }\n}\n\ntemplate <typename Dtype>\nvoid BNLLLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  BNLLForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, top_data);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\n__global__ void BNLLBackward(const int n, const Dtype* in_diff,\n    const Dtype* in_data, Dtype* out_diff) {\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype expval = exp(min(in_data[index], Dtype(kBNLL_THRESHOLD)));\n    out_diff[index] = in_diff[index] * expval / (expval + 1.);\n  }\n}\n\ntemplate <typename Dtype>\nvoid BNLLLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    BNLLBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, bottom_data, bottom_diff);\n    CUDA_POST_KERNEL_CHECK;\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(BNLLLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/conadd_layer.cpp",
    "content": "#include <vector>\n#include \"caffe/layers/conadd_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include <iostream>\nusing  namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ConaddLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top)\n{//conadd参数检查\n\n}\ntemplate <typename Dtype>\nvoid ConaddLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>&bottom,const vector<Blob<Dtype>*>&top)\n{ \n//\tcout<< bottom[0]->asum_data() << \" \"<< top[0]->asum_data() << endl;\n\tconst int num_axes=bottom[0]->num_axes();//bottom的维度\n\t//检查合并的四维shape如果没指定，用bottom的维度进行初始化\n\tvector<int>top_shape = bottom[0]->shape();\n\tfor(int i=0;i<bottom.size();i++)\n\t{   \n\t\tfor(int j=0;j<num_axes;j++)\n\t\t{\n\t\t\tif(bottom[i]->shape(j)!=top_shape[j])\n\t\t\t{   \n\t\t\t    Blob<Dtype> B( bottom[i]->shape());\n\t\t\t    B.CopyFrom(*bottom[i]);\n\t\t\t\tconst Dtype * data_b = bottom[i]->cpu_data();\n\t\t\t\tbottom[i]->Reshape(top_shape);\n\t\t\t\tDtype * temp = bottom[i]->mutable_cpu_data();\n\t\t\t\tfor( int ii=0; ii < B.count(); ii++)\n\t\t\t\t{\n\t\t\t\t\ttemp[ii] = data_b[ii];\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n//初始化top \n\ttop[0]->Reshape(top_shape);\n\tif(bottom.size()==1)\n\t{\n\t\ttop[0]->ShareData(*bottom[0]);\n\t\ttop[0]->ShareDiff(*bottom[0]);\n\t}\n}\n\ntemplate<typename Dtype>\nvoid ConaddLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,const vector<Blob<Dtype>*>& top)\n{\n\tconst int num_axes=bottom[0]->num_axes();//bottom的维度\n\tvector<int>top_shape=bottom[0]->shape();\n\tif(bottom.size()==1){return;}\n\tDtype * top_data=top[0]->mutable_cpu_data();\n\tfor(int i=0;i<bottom.size();i++)\n\t{   \n\t\tCHECK_EQ(num_axes,bottom[i]->num_axes())<<\"the\"<<i<<\"-th input num_axes is different to 1-th input\";\n\t\tfor(int j=0;j<num_axes;j++)\n\t\t{ CHECK_EQ(num_axes,bottom[j]->num_axes())<<\"the inputs are different in the number of the dimensionality \";\n\t\t\t//如果维度不同 
填充0\n\t\t\tif(bottom[i]->shape(j)!=top_shape[j])\n\t\t\t{\n\t\t\t\tBlob<Dtype> temp;\n\t\t\t\ttemp.CopyFrom(*bottom[i],0,0);\n\t\t\t\tDtype *temp_data=temp.mutable_cpu_data();\n\t\t\t\tbottom[i]->Reshape(top_shape);\n\t\t\t\tDtype * data_b=bottom[i]->mutable_cpu_data();\n\t\t\t\tfor(int k=0;k<bottom[i]->count();k++)\n\t\t\t\t{\n\t\t\t\t\tdata_b[k]=temp_data[k];\n\t\t\t\t}\n\t\t\t\tcout<<\"debug:\"<<bottom[i]->data_at(0,0,1,1)<<endl;//debug\n\t\t\t}\n\t\t\tCHECK_EQ(top_shape[j],bottom[i]->shape(j))<<\"debug:padding Failure\";\n\t\t}\n\t\t//实现conadd\n\t\tDtype *bottom_data=bottom[i]->mutable_cpu_data();\n\t\tfor(int m=0;m<top[0]->count();m++)\n\t\t\ttop_data[m]=top_data[m]+bottom_data[m];\n\t}\n}\n\n\ntemplate<typename Dtype>\nvoid ConaddLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom)\n{\n\tif(bottom.size()==1){return;}\n\tfor(int i=0;i<bottom.size();i++)\n\t{\n\t\tbottom[i]->ShareDiff(*top[0]);\n\t}\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ConaddLayer);\n#endif\n\nINSTANTIATE_CLASS(ConaddLayer);\nREGISTER_LAYER_CLASS(Conadd);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/conadd_layer.cu",
    "content": "#include <vector>\n#include<iostream>\n#include \"caffe/layers/conadd_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\nusing namespace std;\nnamespace caffe {\ntemplate <typename Dtype>\n__global__ void Conadd() { }\n\n\ntemplate <typename Dtype>\nvoid ConaddLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top)\n{ \n\tvector<int> top_shape = bottom[0]->shape();\n\tconst int num_axes = bottom[0]->num_axes(); \n\tif (bottom.size() == 1) { return; }\n    Dtype* top_data = top[0]->mutable_cpu_data();\n    caffe_scal(top[0]->count(),(Dtype)(0),top_data);\n\tcaffe_abs(top[0]->count(),top_data,top_data);\n    for(int i=0;i < bottom.size(); i++)\n\t{\t\n\t//\ttop[0]->ShareData(*bottom[0]);      \t\n\t\tCHECK_EQ(num_axes, bottom[i]->num_axes() )<< \"the\"<< i << \"-th input num_axes is different to 1-th input\";\n\t\t//实现conadd\n\t    const Dtype *bottom_data = bottom[i]->cpu_data();\n\t    caffe_add(top[0]->count(), bottom_data, top_data, top_data);\n\t}\n\n}\ntemplate <typename Dtype>\nvoid ConaddLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom)  {\n\tif(propagate_down[0])\n\t{\n\t\tif (bottom.size() == 1) { return; }\n\t\tfor(int i=0;i<bottom.size();i++)\n\t\t{\n\t\t\tbottom[i]->ShareDiff(*top[0]);\n\t\t}\n\t}\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ConaddLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/concat_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/concat_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include<iostream>\nusing  namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ConcatLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const ConcatParameter& concat_param = this->layer_param_.concat_param();\n  CHECK(!(concat_param.has_axis() && concat_param.has_concat_dim()))\n      << \"Either axis or concat_dim should be specified; not both.\";\n}\n\ntemplate <typename Dtype>\nvoid ConcatLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {     \n  const int num_axes = bottom[0]->num_axes();\n  const ConcatParameter& concat_param = this->layer_param_.concat_param();\n  if (concat_param.has_concat_dim()) {\n    concat_axis_ = static_cast<int>(concat_param.concat_dim());\n    // Don't allow negative indexing for concat_dim, a uint32 -- almost\n    // certainly unintended.\n    CHECK_GE(concat_axis_, 0) << \"casting concat_dim from uint32 to int32 \"\n        << \"produced negative result; concat_dim must satisfy \"\n        << \"0 <= concat_dim < \" << kMaxBlobAxes;\n    CHECK_LT(concat_axis_, num_axes) << \"concat_dim out of range.\";\n  } else {\n    concat_axis_ = bottom[0]->CanonicalAxisIndex(concat_param.axis());\n  }\n  // Initialize with the first blob.\n  vector<int> top_shape = bottom[0]->shape();\n  num_concats_ = bottom[0]->count(0, concat_axis_);\n  concat_input_size_ = bottom[0]->count(concat_axis_ + 1);\n  int bottom_count_sum = bottom[0]->count();\n  for (int i = 1; i < bottom.size(); ++i) {\n    \n    CHECK_EQ(num_axes, bottom[i]->num_axes())\n        << \"All inputs must have the same #axes.\";\n    for (int j = 0; j < num_axes; ++j) {\n      if (j == concat_axis_) { continue; } \n       CHECK_EQ(top_shape[j], bottom[i]->shape(j)) <<\"All inputs must have the same shape, except at concat_axis.\";\n    }\n    
bottom_count_sum += bottom[i]->count();\n    top_shape[concat_axis_] += bottom[i]->shape(concat_axis_);\n  }\n  top[0]->Reshape(top_shape);\n  CHECK_EQ(bottom_count_sum, top[0]->count());\n  if (bottom.size() == 1) {\n    top[0]->ShareData(*bottom[0]);\n    top[0]->ShareDiff(*bottom[0]);\n  }\n}\ntemplate <typename Dtype>\nvoid ConcatLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (bottom.size() == 1) { return; }\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  int offset_concat_axis = 0;\n  const int top_concat_axis = top[0]->shape(concat_axis_);\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->cpu_data();\n    const int bottom_concat_axis = bottom[i]->shape(concat_axis_);\n    for (int n = 0; n < num_concats_; ++n) {\n      caffe_copy(bottom_concat_axis * concat_input_size_,\n          bottom_data + n * bottom_concat_axis * concat_input_size_,\n          top_data + (n * top_concat_axis + offset_concat_axis)\n              * concat_input_size_);\n    }\n    offset_concat_axis += bottom_concat_axis;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ConcatLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (bottom.size() == 1) { return; }\n  const Dtype* top_diff = top[0]->cpu_diff();\n  int offset_concat_axis = 0;\n  const int top_concat_axis = top[0]->shape(concat_axis_);\n  for (int i = 0; i < bottom.size(); ++i) {\n    const int bottom_concat_axis = bottom[i]->shape(concat_axis_);\n    if (propagate_down[i]) {\n      Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();\n      for (int n = 0; n < num_concats_; ++n) {\n        caffe_copy(bottom_concat_axis * concat_input_size_, top_diff +\n            (n * top_concat_axis + offset_concat_axis) * concat_input_size_,\n            bottom_diff + n * bottom_concat_axis * concat_input_size_);\n      }\n    }\n    offset_concat_axis 
+= bottom_concat_axis;\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ConcatLayer);\n#endif\n\nINSTANTIATE_CLASS(ConcatLayer);\nREGISTER_LAYER_CLASS(Concat);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/concat_layer.cu",
    "content": "#include <vector>\n#include<iostream>\n#include \"caffe/layers/concat_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void Concat(const int nthreads, const Dtype* in_data,\n    const bool forward, const int num_concats, const int concat_size,\n    const int top_concat_axis, const int bottom_concat_axis,\n    const int offset_concat_axis, Dtype* out_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int total_concat_size = concat_size * bottom_concat_axis;\n    const int concat_num = index / total_concat_size;\n    const int concat_index = index % total_concat_size;\n    const int top_index = concat_index +\n        (concat_num * top_concat_axis + offset_concat_axis) * concat_size;\n    if (forward) {\n      out_data[top_index] = in_data[index];\n    } else {\n      out_data[index] = in_data[top_index];\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid ConcatLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (bottom.size() == 1) { return; }\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int offset_concat_axis = 0;\n  const int top_concat_axis = top[0]->shape(concat_axis_);\n  const bool kForward = true;\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->gpu_data();\n    const int bottom_concat_axis = bottom[i]->shape(concat_axis_);\n    const int bottom_concat_size = bottom_concat_axis * concat_input_size_;\n    const int nthreads = bottom_concat_size * num_concats_;\n    Concat<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n        nthreads, bottom_data, kForward, num_concats_, concat_input_size_,\n        top_concat_axis, bottom_concat_axis, offset_concat_axis, top_data);\n    offset_concat_axis += bottom_concat_axis;\n  }\n\n\n}\n\ntemplate <typename Dtype>\nvoid 
ConcatLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (bottom.size() == 1) { return; }\n  const Dtype* top_diff = top[0]->gpu_diff();\n  int offset_concat_axis = 0;\n  const int top_concat_axis = top[0]->shape(concat_axis_);\n  const bool kForward = false;\n  for (int i = 0; i < bottom.size(); ++i) {\n    const int bottom_concat_axis = bottom[i]->shape(concat_axis_);\n    if (propagate_down[i]) {\n      Dtype* bottom_diff = bottom[i]->mutable_gpu_diff();\n      const int bottom_concat_size = bottom_concat_axis * concat_input_size_;\n      const int nthreads = bottom_concat_size * num_concats_;\n      Concat<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n          nthreads, top_diff, kForward, num_concats_, concat_input_size_,\n          top_concat_axis, bottom_concat_axis, offset_concat_axis, bottom_diff);\n    }\n    offset_concat_axis += bottom_concat_axis;\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ConcatLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/contrastive_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/contrastive_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ContrastiveLossLayer<Dtype>::LayerSetUp(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::LayerSetUp(bottom, top);\n  CHECK_EQ(bottom[0]->channels(), bottom[1]->channels());\n  CHECK_EQ(bottom[0]->height(), 1);\n  CHECK_EQ(bottom[0]->width(), 1);\n  CHECK_EQ(bottom[1]->height(), 1);\n  CHECK_EQ(bottom[1]->width(), 1);\n  CHECK_EQ(bottom[2]->channels(), 1);\n  CHECK_EQ(bottom[2]->height(), 1);\n  CHECK_EQ(bottom[2]->width(), 1);\n  diff_.Reshape(bottom[0]->num(), bottom[0]->channels(), 1, 1);\n  diff_sq_.Reshape(bottom[0]->num(), bottom[0]->channels(), 1, 1);\n  dist_sq_.Reshape(bottom[0]->num(), 1, 1, 1);\n  // vector of ones used to sum along channels\n  summer_vec_.Reshape(bottom[0]->channels(), 1, 1, 1);\n  for (int i = 0; i < bottom[0]->channels(); ++i)\n    summer_vec_.mutable_cpu_data()[i] = Dtype(1);\n}\n\ntemplate <typename Dtype>\nvoid ContrastiveLossLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  int count = bottom[0]->count();\n  caffe_sub(\n      count,\n      bottom[0]->cpu_data(),  // a\n      bottom[1]->cpu_data(),  // b\n      diff_.mutable_cpu_data());  // a_i-b_i\n  const int channels = bottom[0]->channels();\n  Dtype margin = this->layer_param_.contrastive_loss_param().margin();\n  bool legacy_version =\n      this->layer_param_.contrastive_loss_param().legacy_version();\n  Dtype loss(0.0);\n  for (int i = 0; i < bottom[0]->num(); ++i) {\n    dist_sq_.mutable_cpu_data()[i] = caffe_cpu_dot(channels,\n        diff_.cpu_data() + (i*channels), diff_.cpu_data() + (i*channels));\n    if (static_cast<int>(bottom[2]->cpu_data()[i])) {  // similar pairs\n      loss += dist_sq_.cpu_data()[i];\n    } else {  // dissimilar pairs\n      if 
(legacy_version) {\n        loss += std::max(margin - dist_sq_.cpu_data()[i], Dtype(0.0));\n      } else {\n        Dtype dist = std::max<Dtype>(margin - sqrt(dist_sq_.cpu_data()[i]),\n          Dtype(0.0));\n        loss += dist*dist;\n      }\n    }\n  }\n  loss = loss / static_cast<Dtype>(bottom[0]->num()) / Dtype(2);\n  top[0]->mutable_cpu_data()[0] = loss;\n}\n\ntemplate <typename Dtype>\nvoid ContrastiveLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  Dtype margin = this->layer_param_.contrastive_loss_param().margin();\n  bool legacy_version =\n      this->layer_param_.contrastive_loss_param().legacy_version();\n  for (int i = 0; i < 2; ++i) {\n    if (propagate_down[i]) {\n      const Dtype sign = (i == 0) ? 1 : -1;\n      const Dtype alpha = sign * top[0]->cpu_diff()[0] /\n          static_cast<Dtype>(bottom[i]->num());\n      int num = bottom[i]->num();\n      int channels = bottom[i]->channels();\n      for (int j = 0; j < num; ++j) {\n        Dtype* bout = bottom[i]->mutable_cpu_diff();\n        if (static_cast<int>(bottom[2]->cpu_data()[j])) {  // similar pairs\n          caffe_cpu_axpby(\n              channels,\n              alpha,\n              diff_.cpu_data() + (j*channels),\n              Dtype(0.0),\n              bout + (j*channels));\n        } else {  // dissimilar pairs\n          Dtype mdist(0.0);\n          Dtype beta(0.0);\n          if (legacy_version) {\n            mdist = margin - dist_sq_.cpu_data()[j];\n            beta = -alpha;\n          } else {\n            Dtype dist = sqrt(dist_sq_.cpu_data()[j]);\n            mdist = margin - dist;\n            beta = -alpha * mdist / (dist + Dtype(1e-4));\n          }\n          if (mdist > Dtype(0.0)) {\n            caffe_cpu_axpby(\n                channels,\n                beta,\n                diff_.cpu_data() + (j*channels),\n                Dtype(0.0),\n                bout + 
(j*channels));\n          } else {\n            caffe_set(channels, Dtype(0), bout + (j*channels));\n          }\n        }\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ContrastiveLossLayer);\n#endif\n\nINSTANTIATE_CLASS(ContrastiveLossLayer);\nREGISTER_LAYER_CLASS(ContrastiveLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/contrastive_loss_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/contrastive_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ContrastiveLossLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const int count = bottom[0]->count();\n  caffe_gpu_sub(\n      count,\n      bottom[0]->gpu_data(),  // a\n      bottom[1]->gpu_data(),  // b\n      diff_.mutable_gpu_data());  // a_i-b_i\n  caffe_gpu_powx(\n      count,\n      diff_.mutable_gpu_data(),  // a_i-b_i\n      Dtype(2),\n      diff_sq_.mutable_gpu_data());  // (a_i-b_i)^2\n  caffe_gpu_gemv(\n      CblasNoTrans,\n      bottom[0]->num(),\n      bottom[0]->channels(),\n      Dtype(1.0),\n      diff_sq_.gpu_data(),  // (a_i-b_i)^2\n      summer_vec_.gpu_data(),\n      Dtype(0.0),\n      dist_sq_.mutable_gpu_data());  // \\Sum (a_i-b_i)^2\n  Dtype margin = this->layer_param_.contrastive_loss_param().margin();\n  bool legacy_version =\n      this->layer_param_.contrastive_loss_param().legacy_version();\n  Dtype loss(0.0);\n  for (int i = 0; i < bottom[0]->num(); ++i) {\n    if (static_cast<int>(bottom[2]->cpu_data()[i])) {  // similar pairs\n      loss += dist_sq_.cpu_data()[i];\n    } else {  // dissimilar pairs\n      if (legacy_version) {\n        loss += std::max(margin - dist_sq_.cpu_data()[i], Dtype(0.0));\n      } else {\n        Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]),\n                              Dtype(0.0));\n        loss += dist*dist;\n      }\n    }\n  }\n  loss = loss / static_cast<Dtype>(bottom[0]->num()) / Dtype(2);\n  top[0]->mutable_cpu_data()[0] = loss;\n}\n\ntemplate <typename Dtype>\n__global__ void CLLBackward(const int count, const int channels,\n    const Dtype margin, const bool legacy_version, const Dtype alpha,\n    const Dtype* y, const Dtype* diff, const Dtype* dist_sq,\n    Dtype *bottom_diff) {\n  CUDA_KERNEL_LOOP(i, count) {\n    
int n = i / channels;  // the num index, to access y and dist_sq\n    if (static_cast<int>(y[n])) {  // similar pairs\n      bottom_diff[i] = alpha * diff[i];\n    } else {  // dissimilar pairs\n      Dtype mdist(0.0);\n      Dtype beta(0.0);\n      if (legacy_version) {\n        mdist = (margin - dist_sq[n]);\n        beta = -alpha;\n      } else {\n        Dtype dist = sqrt(dist_sq[n]);\n        mdist = (margin - dist);\n        beta = -alpha * mdist / (dist + Dtype(1e-4)) * diff[i];\n      }\n      if (mdist > 0.0) {\n        bottom_diff[i] = beta;\n      } else {\n        bottom_diff[i] = 0;\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid ContrastiveLossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  for (int i = 0; i < 2; ++i) {\n    if (propagate_down[i]) {\n      const int count = bottom[0]->count();\n      const int channels = bottom[0]->channels();\n      Dtype margin = this->layer_param_.contrastive_loss_param().margin();\n      const bool legacy_version =\n          this->layer_param_.contrastive_loss_param().legacy_version();\n      const Dtype sign = (i == 0) ? 1 : -1;\n      const Dtype alpha = sign * top[0]->cpu_diff()[0] /\n          static_cast<Dtype>(bottom[0]->num());\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      CLLBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n          count, channels, margin, legacy_version, alpha,\n          bottom[2]->gpu_data(),  // pair similarity 0 or 1\n          diff_.gpu_data(),  // the cached eltwise difference between a and b\n          dist_sq_.gpu_data(),  // the cached square distance between a and b\n          bottom[i]->mutable_gpu_diff());\n      CUDA_POST_KERNEL_CHECK;\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ContrastiveLossLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/conv_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/conv_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ConvolutionLayer<Dtype>::compute_output_shape() {\n  const int* kernel_shape_data = this->kernel_shape_.cpu_data();\n  const int* stride_data = this->stride_.cpu_data();\n  const int* pad_data = this->pad_.cpu_data();\n  const int* dilation_data = this->dilation_.cpu_data();\n  this->output_shape_.clear();\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\n    // i + 1 to skip channel axis\n    const int input_dim = this->input_shape(i + 1);\n    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;\n    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)\n        / stride_data[i] + 1;\n    this->output_shape_.push_back(output_dim);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->cpu_data();\n    Dtype* top_data = top[i]->mutable_cpu_data();\n    for (int n = 0; n < this->num_; ++n) {\n      this->forward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,\n          top_data + n * this->top_dim_);\n      if (this->bias_term_) {\n        const Dtype* bias = this->blobs_[1]->cpu_data();\n        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid ConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->cpu_diff();\n    const Dtype* bottom_data = bottom[i]->cpu_data();\n    Dtype* bottom_diff 
= bottom[i]->mutable_cpu_diff();\n    // Bias gradient, if necessary.\n    if (this->bias_term_ && this->param_propagate_down_[1]) {\n      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();\n      for (int n = 0; n < this->num_; ++n) {\n        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);\n      }\n    }\n    if (this->param_propagate_down_[0] || propagate_down[i]) {\n      for (int n = 0; n < this->num_; ++n) {\n        // gradient w.r.t. weight. Note that we will accumulate diffs.\n        if (this->param_propagate_down_[0]) {\n          this->weight_cpu_gemm(bottom_data + n * this->bottom_dim_,\n              top_diff + n * this->top_dim_, weight_diff);\n        }\n        // gradient w.r.t. bottom data, if necessary.\n        if (propagate_down[i]) {\n          this->backward_cpu_gemm(top_diff + n * this->top_dim_, weight,\n              bottom_diff + n * this->bottom_dim_);\n        }\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ConvolutionLayer);\n#endif\n\nINSTANTIATE_CLASS(ConvolutionLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/conv_layer.cu",
    "content": "#include <vector>\n#include<iostream>\n#include \"caffe/layers/conv_layer.hpp\"\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ConvolutionLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  // cout<<\"--------------------------------------\"<<endl;\n  // cout<<\"caffe caffe layer：\"<<endl;\n  // cout<<\"bottom[0] shape: \"<<bottom[0]->shape_string()<<endl;\n  // cout<<\"weight shape: \"<<this->blobs_[0]->shape_string()<<endl; \n  // cout<<\"top[0] shape： \"<<top[0]->shape_string()<<endl;\n\n  // cout<<\"bottom_dim_: \"<<this->bottom_dim_<<endl;\n  // cout<<\"top_dim_: \"<<this->top_dim_<<endl;\n\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->gpu_data();\n    Dtype* top_data = top[i]->mutable_gpu_data();\n    for (int n = 0; n < this->num_; ++n) {\n      this->forward_gpu_gemm(bottom_data + n * this->bottom_dim_, weight,\n          top_data + n * this->top_dim_);\n      if (this->bias_term_) {\n        const Dtype* bias = this->blobs_[1]->gpu_data();\n        this->forward_gpu_bias(top_data + n * this->top_dim_, bias);\n      }\n    }\n\n  }\n\n}\n\ntemplate <typename Dtype>\nvoid ConvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  Dtype* weight_diff = this->blobs_[0]->mutable_gpu_diff();\n\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->gpu_diff();\n    // Bias gradient, if necessary.\n    if (this->bias_term_ && this->param_propagate_down_[1]) {\n      Dtype* bias_diff = this->blobs_[1]->mutable_gpu_diff();\n      for (int n = 0; n < this->num_; ++n) {\n        this->backward_gpu_bias(bias_diff, top_diff + n * this->top_dim_);\n      }\n    }\n\n  if (this->param_propagate_down_[0] || 
propagate_down[i]) {\n      const Dtype* bottom_data = bottom[i]->gpu_data();\n      Dtype* bottom_diff = bottom[i]->mutable_gpu_diff();\n      for (int n = 0; n < this->num_; ++n) {\n        // gradient w.r.t. weight. Note that we will accumulate diffs.\n        if (this->param_propagate_down_[0]) {\n          this->weight_gpu_gemm(bottom_data + n * this->bottom_dim_,\n              top_diff + n * this->top_dim_, weight_diff);\n        }\n        // gradient w.r.t. bottom data, if necessary.\n        if (propagate_down[i]) {\n          this->backward_gpu_gemm(top_diff + n * this->top_dim_, weight,\n              bottom_diff + n * this->bottom_dim_);\n        }\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ConvolutionLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/crop_layer.cpp",
    "content": "#include <algorithm>\n#include <functional>\n#include <map>\n#include <set>\n#include <vector>\n\n\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/crop_layer.hpp\"\n#include \"caffe/net.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  // LayerSetup() handles the number of dimensions; Reshape() handles the sizes.\n  // bottom[0] supplies the data\n  // bottom[1] supplies the size\n  const CropParameter& param = this->layer_param_.crop_param();\n  CHECK_EQ(bottom.size(), 2) << \"Wrong number of bottom blobs.\";\n  int input_dim = bottom[0]->num_axes();\n  const int start_axis = bottom[0]->CanonicalAxisIndex(param.axis());\n  CHECK_LT(start_axis, input_dim) << \"crop axis bigger than input dim\";\n  if (param.offset_size() > 1) {\n    // the number of crop values specified must be equal to the number\n    // of dimensions following axis\n    CHECK_EQ(start_axis + param.offset_size(), input_dim)\n      << \"number of offset values specified must be equal to the number of \"\n      << \"dimensions following axis.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const CropParameter& param = this->layer_param_.crop_param();\n  int input_dim = bottom[0]->num_axes();\n  const int start_axis = bottom[0]->CanonicalAxisIndex(param.axis());\n\n  // Initialize offsets to 0 and the new shape to the current shape of the data.\n  vector<int> new_shape(bottom[0]->shape());\n  vector<int> offsets_shape(1, input_dim);\n  offsets.Reshape(offsets_shape);\n  int* offset_data = offsets.mutable_cpu_data();\n\n  // Determine crop offsets and the new shape post-crop.\n  for (int i = 0; i < input_dim; ++i) {\n    int crop_offset = 0;\n    int new_size = bottom[0]->shape(i);\n    if (i >= start_axis) {\n      new_size = bottom[1]->shape(i);\n      if 
(param.offset_size() == 1) {\n        // If only one offset is given, all crops have the same offset.\n        crop_offset = param.offset(0);\n      } else if (param.offset_size() > 1) {\n        // For several offsets, the number of offsets must be equal to the\n        // number of dimensions to crop, that is, the dimensions after the axis.\n        crop_offset = param.offset(i - start_axis);\n      }\n      // Check that the crop and offset are within the dimension's bounds.\n      CHECK_GE(bottom[0]->shape(i) - crop_offset, bottom[1]->shape(i))\n          << \"the crop for dimension \" << i << \" is out-of-bounds with \"\n          << \"size \" << bottom[1]->shape(i) << \" and offset \" << crop_offset;\n    }\n    new_shape[i] = new_size;\n    offset_data[i] = crop_offset;\n  }\n  top[0]->Reshape(new_shape);\n  // Compute strides\n  src_strides_.Reshape(offsets_shape);\n  dest_strides_.Reshape(offsets_shape);\n  for (int i = 0; i < input_dim; ++i) {\n    src_strides_.mutable_cpu_data()[i] = bottom[0]->count(i + 1, input_dim);\n    dest_strides_.mutable_cpu_data()[i] = top[0]->count(i + 1, input_dim);\n  }\n}\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::crop_copy(const vector<Blob<Dtype>*>& bottom,\n             const vector<Blob<Dtype>*>& top,\n             const int* offsets,\n             vector<int> indices,\n             int cur_dim,\n             const Dtype* src_data,\n             Dtype* dest_data,\n             bool is_forward) {\n  if (cur_dim + 1 < top[0]->num_axes()) {\n    // We are not yet at the final dimension, call copy recursively\n    for (int i = 0; i < top[0]->shape(cur_dim); ++i) {\n      indices[cur_dim] = i;\n      crop_copy(bottom, top, offsets, indices, cur_dim+1,\n                src_data, dest_data, is_forward);\n    }\n  } else {\n    // We are at the last dimension, which is stored contiguously in memory.\n    // Prepare index vectors: reduced (red) and with offsets (off).\n    std::vector<int> ind_red(cur_dim, 0);\n    
std::vector<int> ind_off(cur_dim+1, 0);\n    for (int j = 0; j < cur_dim; ++j) {\n      ind_red[j] = indices[j];\n      ind_off[j] = indices[j] + offsets[j];\n    }\n    ind_off[cur_dim] = offsets[cur_dim];\n    // do the copy\n    if (is_forward) {\n      caffe_copy(top[0]->shape(cur_dim),\n          src_data + bottom[0]->offset(ind_off),\n          dest_data + top[0]->offset(ind_red));\n    } else {\n      // in the backwards pass the src_data is top_diff\n      // and the dest_data is bottom_diff\n      caffe_copy(top[0]->shape(cur_dim),\n          src_data + top[0]->offset(ind_red),\n          dest_data + bottom[0]->offset(ind_off));\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  std::vector<int> indices(top[0]->num_axes(), 0);\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  crop_copy(bottom, top, offsets.cpu_data(), indices, 0, bottom_data, top_data,\n      true);\n}\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n\n  if (propagate_down[0]) {\n    caffe_set(bottom[0]->count(), static_cast<Dtype>(0), bottom_diff);\n    std::vector<int> indices(top[0]->num_axes(), 0);\n    crop_copy(bottom, top, offsets.cpu_data(), indices, 0, top_diff,\n        bottom_diff, false);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(CropLayer);\n#endif\n\nINSTANTIATE_CLASS(CropLayer);\nREGISTER_LAYER_CLASS(Crop);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/crop_layer.cu",
    "content": "#include <vector>\n#include<iostream>\nusing namespace std;\n\n#include \"caffe/layers/crop_layer.hpp\"\n\nnamespace caffe {\n\n__device__ int compute_uncropped_index(\n    int index,\n    const int ndims,\n    const int* src_strides,\n    const int* dest_strides,\n    const int* offsets) {\n  int dest_index = index;\n  int src_index = 0;\n  for (int i = 0; i < ndims; ++i) {\n      int coord = dest_index / dest_strides[i];\n      dest_index -= coord * dest_strides[i];\n      src_index += src_strides[i] * (coord + offsets[i]);\n  }\n  return src_index;\n}\n\ntemplate <typename Dtype>\n__global__ void crop_kernel_forward(const int nthreads,\n    const int ndims,\n    const int* src_strides,\n    const int* dest_strides,\n    const int* offsets,\n    const Dtype* src, Dtype* dest) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    int src_index = compute_uncropped_index(\n        index, ndims, src_strides, dest_strides, offsets);\n    dest[index] = src[src_index];\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void crop_kernel_backward(const int nthreads,\n    const int ndims,\n    const int* src_strides,\n    const int* dest_strides,\n    const int* offsets,\n    Dtype* src, const Dtype* dest) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    int src_index = compute_uncropped_index(\n        index, ndims, src_strides, dest_strides, offsets);\n    src[src_index] = dest[index];\n  }\n}\n\ntemplate <typename Dtype>\nvoid CropLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int n = top[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  crop_kernel_forward<<<CAFFE_GET_BLOCKS(n), CAFFE_CUDA_NUM_THREADS>>>(n,\n      bottom[0]->num_axes(),\n      src_strides_.gpu_data(),\n      dest_strides_.gpu_data(),\n      offsets.gpu_data(),\n      bottom_data, top_data);\n\n}\n\ntemplate <typename Dtype>\nvoid 
CropLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  int n = top[0]->count();\n\n  if (propagate_down[0]) {\n    caffe_gpu_set(bottom[0]->count(), static_cast<Dtype>(0), bottom_diff);\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    crop_kernel_backward<<<CAFFE_GET_BLOCKS(n), CAFFE_CUDA_NUM_THREADS>>>(n,\n        bottom[0]->num_axes(),\n        src_strides_.gpu_data(),\n        dest_strides_.gpu_data(),\n        offsets.gpu_data(),\n        bottom_diff, top_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CropLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_conv_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/cudnn_conv_layer.hpp\"\n\nnamespace caffe {\n\n// Set to three for the benefit of the backward pass, which\n// can use separate streams for calculating the gradient w.r.t.\n// bias, filter weights, and bottom data for each group independently\n#define CUDNN_STREAMS_PER_GROUP 3\n\n/**\n * TODO(dox) explain cuDNN interface\n */\ntemplate <typename Dtype>\nvoid CuDNNConvolutionLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  ConvolutionLayer<Dtype>::LayerSetUp(bottom, top);\n  // Initialize CUDA streams and cuDNN.\n  stream_         = new cudaStream_t[this->group_ * CUDNN_STREAMS_PER_GROUP];\n  handle_         = new cudnnHandle_t[this->group_ * CUDNN_STREAMS_PER_GROUP];\n\n  // Initialize algorithm arrays\n  fwd_algo_       = new cudnnConvolutionFwdAlgo_t[bottom.size()];\n  bwd_filter_algo_= new cudnnConvolutionBwdFilterAlgo_t[bottom.size()];\n  bwd_data_algo_  = new cudnnConvolutionBwdDataAlgo_t[bottom.size()];\n\n  // initialize size arrays\n  workspace_fwd_sizes_ = new size_t[bottom.size()];\n  workspace_bwd_filter_sizes_ = new size_t[bottom.size()];\n  workspace_bwd_data_sizes_ = new size_t[bottom.size()];\n\n  // workspace data\n  workspaceSizeInBytes = 0;\n  workspaceData = NULL;\n  workspace = new void*[this->group_ * CUDNN_STREAMS_PER_GROUP];\n\n  for (size_t i = 0; i < bottom.size(); ++i) {\n    // initialize all to default algorithms\n    fwd_algo_[i] = (cudnnConvolutionFwdAlgo_t)0;\n    bwd_filter_algo_[i] = (cudnnConvolutionBwdFilterAlgo_t)0;\n    bwd_data_algo_[i] = (cudnnConvolutionBwdDataAlgo_t)0;\n    // default algorithms don't require workspace\n    workspace_fwd_sizes_[i] = 0;\n    workspace_bwd_data_sizes_[i] = 0;\n    workspace_bwd_filter_sizes_[i] = 0;\n  }\n\n  for (int g = 0; g < this->group_ * CUDNN_STREAMS_PER_GROUP; g++) {\n    CUDA_CHECK(cudaStreamCreate(&stream_[g]));\n    
CUDNN_CHECK(cudnnCreate(&handle_[g]));\n    CUDNN_CHECK(cudnnSetStream(handle_[g], stream_[g]));\n    workspace[g] = NULL;\n  }\n\n  // Set the indexing parameters.\n  bias_offset_ = (this->num_output_ / this->group_);\n\n  // Create filter descriptor.\n  const int* kernel_shape_data = this->kernel_shape_.cpu_data();\n  const int kernel_h = kernel_shape_data[0];\n  const int kernel_w = kernel_shape_data[1];\n  cudnn::createFilterDesc<Dtype>(&filter_desc_,\n      this->num_output_ / this->group_, this->channels_ / this->group_,\n      kernel_h, kernel_w);\n\n  // Create tensor descriptor(s) for data and corresponding convolution(s).\n  for (int i = 0; i < bottom.size(); i++) {\n    cudnnTensorDescriptor_t bottom_desc;\n    cudnn::createTensor4dDesc<Dtype>(&bottom_desc);\n    bottom_descs_.push_back(bottom_desc);\n    cudnnTensorDescriptor_t top_desc;\n    cudnn::createTensor4dDesc<Dtype>(&top_desc);\n    top_descs_.push_back(top_desc);\n    cudnnConvolutionDescriptor_t conv_desc;\n    cudnn::createConvolutionDesc<Dtype>(&conv_desc);\n    conv_descs_.push_back(conv_desc);\n  }\n\n  // Tensor descriptor for bias.\n  if (this->bias_term_) {\n    cudnn::createTensor4dDesc<Dtype>(&bias_desc_);\n  }\n\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNConvolutionLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  ConvolutionLayer<Dtype>::Reshape(bottom, top);\n  CHECK_EQ(2, this->num_spatial_axes_)\n      << \"CuDNNConvolution input must have 2 spatial axes \"\n      << \"(e.g., height and width). 
\"\n      << \"Use 'engine: CAFFE' for general ND convolution.\";\n  bottom_offset_ = this->bottom_dim_ / this->group_;\n  top_offset_ = this->top_dim_ / this->group_;\n  const int height = bottom[0]->shape(this->channel_axis_ + 1);\n  const int width = bottom[0]->shape(this->channel_axis_ + 2);\n  const int height_out = top[0]->shape(this->channel_axis_ + 1);\n  const int width_out = top[0]->shape(this->channel_axis_ + 2);\n  const int* pad_data = this->pad_.cpu_data();\n  const int pad_h = pad_data[0];\n  const int pad_w = pad_data[1];\n  const int* stride_data = this->stride_.cpu_data();\n  const int stride_h = stride_data[0];\n  const int stride_w = stride_data[1];\n\n  // Specify workspace limit for kernels directly until we have a\n  // planning strategy and a rewrite of Caffe's GPU memory mangagement\n  size_t workspace_limit_bytes = 8*1024*1024;\n\n  for (int i = 0; i < bottom.size(); i++) {\n    cudnn::setTensor4dDesc<Dtype>(&bottom_descs_[i],\n        this->num_,\n        this->channels_ / this->group_, height, width,\n        this->channels_ * height * width,\n        height * width, width, 1);\n    cudnn::setTensor4dDesc<Dtype>(&top_descs_[i],\n        this->num_,\n        this->num_output_ / this->group_, height_out, width_out,\n        this->num_output_ * this->out_spatial_dim_,\n        this->out_spatial_dim_, width_out, 1);\n    cudnn::setConvolutionDesc<Dtype>(&conv_descs_[i], bottom_descs_[i],\n        filter_desc_, pad_h, pad_w,\n        stride_h, stride_w);\n\n    // choose forward and backward algorithms + workspace(s)\n    CUDNN_CHECK(cudnnGetConvolutionForwardAlgorithm(handle_[0],\n      bottom_descs_[i],\n      filter_desc_,\n      conv_descs_[i],\n      top_descs_[i],\n      CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT,\n      workspace_limit_bytes,\n      &fwd_algo_[i]));\n\n    CUDNN_CHECK(cudnnGetConvolutionForwardWorkspaceSize(handle_[0],\n      bottom_descs_[i],\n      filter_desc_,\n      conv_descs_[i],\n      top_descs_[i],\n      
fwd_algo_[i],\n      &(workspace_fwd_sizes_[i])));\n\n    // choose backward algorithm for filter\n    CUDNN_CHECK(cudnnGetConvolutionBackwardFilterAlgorithm(handle_[0],\n          bottom_descs_[i], top_descs_[i], conv_descs_[i], filter_desc_,\n          CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT,\n          workspace_limit_bytes, &bwd_filter_algo_[i]) );\n\n    // get workspace for backwards filter algorithm\n    CUDNN_CHECK(cudnnGetConvolutionBackwardFilterWorkspaceSize(handle_[0],\n          bottom_descs_[i], top_descs_[i], conv_descs_[i], filter_desc_,\n          bwd_filter_algo_[i], &workspace_bwd_filter_sizes_[i]));\n\n    // choose backward algo for data\n    CUDNN_CHECK(cudnnGetConvolutionBackwardDataAlgorithm(handle_[0],\n          filter_desc_, top_descs_[i], conv_descs_[i], bottom_descs_[i],\n          CUDNN_CONVOLUTION_BWD_DATA_SPECIFY_WORKSPACE_LIMIT,\n        workspace_limit_bytes, &bwd_data_algo_[i]));\n\n    // get workspace size\n    CUDNN_CHECK(cudnnGetConvolutionBackwardDataWorkspaceSize(handle_[0],\n          filter_desc_, top_descs_[i], conv_descs_[i], bottom_descs_[i],\n          bwd_data_algo_[i], &workspace_bwd_data_sizes_[i]) );\n  }\n\n  // reduce over all workspace sizes to get a maximum to allocate / reallocate\n  size_t total_workspace_fwd = 0;\n  size_t total_workspace_bwd_data = 0;\n  size_t total_workspace_bwd_filter = 0;\n\n  for (size_t i = 0; i < bottom.size(); i++) {\n    total_workspace_fwd        = std::max(total_workspace_fwd,\n                                     workspace_fwd_sizes_[i]);\n    total_workspace_bwd_data   = std::max(total_workspace_bwd_data,\n                                     workspace_bwd_data_sizes_[i]);\n    total_workspace_bwd_filter = std::max(total_workspace_bwd_filter,\n                                     workspace_bwd_filter_sizes_[i]);\n  }\n  // get max over all operations\n  size_t max_workspace = std::max(total_workspace_fwd,\n                             total_workspace_bwd_data);\n  
max_workspace = std::max(max_workspace, total_workspace_bwd_filter);\n  // ensure all groups have enough workspace\n  size_t total_max_workspace = max_workspace *\n                               (this->group_ * CUDNN_STREAMS_PER_GROUP);\n\n  // this is the total amount of storage needed over all groups + streams\n  if (total_max_workspace > workspaceSizeInBytes) {\n    DLOG(INFO) << \"Reallocating workspace storage: \" << total_max_workspace;\n    workspaceSizeInBytes = total_max_workspace;\n\n    // free the existing workspace and allocate a new (larger) one\n    cudaFree(this->workspaceData);\n\n    cudaError_t err = cudaMalloc(&(this->workspaceData), workspaceSizeInBytes);\n    if (err != cudaSuccess) {\n      // force zero memory path\n      for (int i = 0; i < bottom.size(); i++) {\n        workspace_fwd_sizes_[i] = 0;\n        workspace_bwd_filter_sizes_[i] = 0;\n        workspace_bwd_data_sizes_[i] = 0;\n        fwd_algo_[i] = CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM;\n        bwd_filter_algo_[i] = CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0;\n        bwd_data_algo_[i] = CUDNN_CONVOLUTION_BWD_DATA_ALGO_0;\n      }\n\n      // NULL out all workspace pointers\n      for (int g = 0; g < (this->group_ * CUDNN_STREAMS_PER_GROUP); g++) {\n        workspace[g] = NULL;\n      }\n      // NULL out underlying data\n      workspaceData = NULL;\n      workspaceSizeInBytes = 0;\n    }\n\n    // if we succeed in the allocation, set pointer aliases for workspaces\n    for (int g = 0; g < (this->group_ * CUDNN_STREAMS_PER_GROUP); g++) {\n      workspace[g] = reinterpret_cast<char *>(workspaceData) + g*max_workspace;\n    }\n  }\n\n  // Tensor descriptor for bias.\n  if (this->bias_term_) {\n    cudnn::setTensor4dDesc<Dtype>(&bias_desc_,\n        1, this->num_output_ / this->group_, 1, 1);\n  }\n}\n\ntemplate <typename Dtype>\nCuDNNConvolutionLayer<Dtype>::~CuDNNConvolutionLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  
for (int i = 0; i < bottom_descs_.size(); i++) {\n    cudnnDestroyTensorDescriptor(bottom_descs_[i]);\n    cudnnDestroyTensorDescriptor(top_descs_[i]);\n    cudnnDestroyConvolutionDescriptor(conv_descs_[i]);\n  }\n  if (this->bias_term_) {\n    cudnnDestroyTensorDescriptor(bias_desc_);\n  }\n  cudnnDestroyFilterDescriptor(filter_desc_);\n\n  for (int g = 0; g < this->group_ * CUDNN_STREAMS_PER_GROUP; g++) {\n    cudaStreamDestroy(stream_[g]);\n    cudnnDestroy(handle_[g]);\n  }\n\n  cudaFree(workspaceData);\n  delete [] workspace;\n  delete [] stream_;\n  delete [] handle_;\n  delete [] fwd_algo_;\n  delete [] bwd_filter_algo_;\n  delete [] bwd_data_algo_;\n  delete [] workspace_fwd_sizes_;\n  delete [] workspace_bwd_data_sizes_;\n  delete [] workspace_bwd_filter_sizes_;\n}\n\nINSTANTIATE_CLASS(CuDNNConvolutionLayer);\n\n}   // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_conv_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n#include<iostream>\n#include \"caffe/layers/cudnn_conv_layer.hpp\"\nusing namespace std;\nnamespace caffe {\n\n__global__ void sync_conv_groups() { }\n\ntemplate <typename Dtype>\nvoid CuDNNConvolutionLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->gpu_data();\n    Dtype* top_data = top[i]->mutable_gpu_data();\n    \n    // Forward through cuDNN in parallel over groups.\n    for (int g = 0; g < this->group_; g++) {\n      // Filters.\n      CUDNN_CHECK(cudnnConvolutionForward(handle_[g],\n            cudnn::dataType<Dtype>::one,\n            bottom_descs_[i], bottom_data + bottom_offset_ * g,\n            filter_desc_, weight + this->weight_offset_ * g,\n            conv_descs_[i],\n            fwd_algo_[i], workspace[g], workspace_fwd_sizes_[i],\n            cudnn::dataType<Dtype>::zero,\n            top_descs_[i], top_data + top_offset_ * g));\n\n      // Bias.\n      if (this->bias_term_) {\n        const Dtype* bias_data = this->blobs_[1]->gpu_data();\n        CUDNN_CHECK(cudnnAddTensor(handle_[g],\n              cudnn::dataType<Dtype>::one,\n              bias_desc_, bias_data + bias_offset_ * g,\n              cudnn::dataType<Dtype>::one,\n              top_descs_[i], top_data + top_offset_ * g));\n      }\n    }\n     \n    // Synchronize the work across groups, each of which went into its own\n    // stream, by launching an empty kernel into the default (null) stream.\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    sync_conv_groups<<<1, 1>>>();\t\n\n}\n\n  }\n  \n\ntemplate <typename Dtype>\nvoid CuDNNConvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* weight = NULL;\n  Dtype* weight_diff = 
NULL;\n  if (this->param_propagate_down_[0]) {\n    weight = this->blobs_[0]->gpu_data();\n    weight_diff = this->blobs_[0]->mutable_gpu_diff();\n  }\n  Dtype* bias_diff = NULL;\n  if (this->bias_term_ && this->param_propagate_down_[1]) {\n    bias_diff = this->blobs_[1]->mutable_gpu_diff();\n  }\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->gpu_diff();\n    // Backward through cuDNN in parallel over groups and gradients.\n    for (int g = 0; g < this->group_; g++) {\n      // Gradient w.r.t. bias.\n      if (this->bias_term_ && this->param_propagate_down_[1]) {\n        CUDNN_CHECK(cudnnConvolutionBackwardBias(handle_[0*this->group_ + g],\n              cudnn::dataType<Dtype>::one,\n              top_descs_[i],  top_diff + top_offset_ * g,\n              cudnn::dataType<Dtype>::one,\n              bias_desc_, bias_diff + bias_offset_ * g));\n      }\n\n      // Gradient w.r.t. weights.\n      if (this->param_propagate_down_[0]) {\n        const Dtype* bottom_data = bottom[i]->gpu_data();\n        CUDNN_CHECK(cudnnConvolutionBackwardFilter(\n              handle_[1*this->group_ + g],\n              cudnn::dataType<Dtype>::one,\n              bottom_descs_[i], bottom_data + bottom_offset_ * g,\n              top_descs_[i],    top_diff + top_offset_ * g,\n              conv_descs_[i],\n              bwd_filter_algo_[i], workspace[1*this->group_ + g],\n              workspace_bwd_filter_sizes_[i],\n              cudnn::dataType<Dtype>::one,\n              filter_desc_, weight_diff + this->weight_offset_ * g));\n      }\n\n      // Gradient w.r.t. 
bottom data.\n      if (propagate_down[i]) {\n        if (weight == NULL) {\n          weight = this->blobs_[0]->gpu_data();\n        }\n        Dtype* bottom_diff = bottom[i]->mutable_gpu_diff();\n        CUDNN_CHECK(cudnnConvolutionBackwardData(\n              handle_[2*this->group_ + g],\n              cudnn::dataType<Dtype>::one,\n              filter_desc_, weight + this->weight_offset_ * g,\n              top_descs_[i], top_diff + top_offset_ * g,\n              conv_descs_[i],\n              bwd_data_algo_[i], workspace[2*this->group_ + g],\n              workspace_bwd_data_sizes_[i],\n              cudnn::dataType<Dtype>::zero,\n              bottom_descs_[i], bottom_diff + bottom_offset_ * g));\n      }\n    }\n\n    // Synchronize the work across groups, each of which went into its own\n    // stream, by launching an empty kernel into the default (null) stream.\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    sync_conv_groups<<<1, 1>>>();\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNConvolutionLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_lcn_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_lcn_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNLCNLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  LRNLayer<Dtype>::LayerSetUp(bottom, top);\n\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  CUDNN_CHECK(cudnnCreateLRNDescriptor(&norm_desc_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n\n  // create a LRN handle\n  handles_setup_ = true;\n\n  size_ = this->layer_param().lrn_param().local_size();\n  pre_pad_ = (size_ - 1) / 2;\n  alpha_ = this->layer_param().lrn_param().alpha();\n  beta_ = this->layer_param().lrn_param().beta();\n  k_ = this->layer_param().lrn_param().k();\n}\n\ntemplate <typename Dtype>\nvoid CuDNNLCNLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  LRNLayer<Dtype>::Reshape(bottom, top);\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, bottom[0]->num(),\n      this->channels_, this->height_, this->width_);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, bottom[0]->num(),\n      this->channels_, this->height_, this->width_);\n  CUDNN_CHECK(cudnnSetLRNDescriptor(norm_desc_, size_, alpha_, beta_, k_));\n\n  // allocate / reallocate tempData buffers\n  size_t totalSizeInBytes = sizeof(Dtype)*bottom[0]->num()* \\\n                            this->channels_*this->height_*this->width_;\n\n  if (totalSizeInBytes > tempDataSize) {\n    tempDataSize = totalSizeInBytes;\n\n    cudaFree(tempData1);\n    cudaFree(tempData2);\n\n    // allocate new buffers\n    CUDA_CHECK(cudaMalloc(&tempData1, totalSizeInBytes));\n    CUDA_CHECK(cudaMalloc(&tempData2, totalSizeInBytes));\n  }\n}\n\ntemplate <typename Dtype>\nCuDNNLCNLayer<Dtype>::~CuDNNLCNLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(bottom_desc_);\n  
cudnnDestroyTensorDescriptor(top_desc_);\n\n  // destroy LRN handle\n  cudnnDestroy(handle_);\n\n  // free temp buffers\n  cudaFree(tempData1);\n  cudaFree(tempData2);\n}\n\nINSTANTIATE_CLASS(CuDNNLCNLayer);\n\n}   // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_lcn_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_lcn_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNLCNLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n\n  CUDNN_CHECK(cudnnDivisiveNormalizationForward(\n        handle_, norm_desc_, CUDNN_DIVNORM_PRECOMPUTED_MEANS,\n        cudnn::dataType<Dtype>::one,\n        bottom_desc_, bottom_data,\n        NULL,  // srcMeansData\n        this->tempData1, this->tempData2,\n        cudnn::dataType<Dtype>::zero,\n        top_desc_, top_data) );\n}\n\ntemplate <typename Dtype>\nvoid CuDNNLCNLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n\n  CUDNN_CHECK(cudnnDivisiveNormalizationBackward(\n        handle_, norm_desc_, CUDNN_DIVNORM_PRECOMPUTED_MEANS,\n        cudnn::dataType<Dtype>::one,\n        bottom_desc_, bottom_data,\n        NULL, top_diff,  // NULL - srcMeansData\n        this->tempData1, this->tempData2,\n        cudnn::dataType<Dtype>::zero,\n        bottom_desc_, bottom_diff,\n        NULL) );\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNLCNLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_lrn_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_lrn_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNLRNLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  LRNLayer<Dtype>::LayerSetUp(bottom, top);\n\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  CUDNN_CHECK(cudnnCreateLRNDescriptor(&norm_desc_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n\n  // create a LRN handle\n  handles_setup_ = true;\n\n  size_ = this->layer_param().lrn_param().local_size();\n  alpha_ = this->layer_param().lrn_param().alpha();\n  beta_ = this->layer_param().lrn_param().beta();\n  k_ = this->layer_param().lrn_param().k();\n}\n\ntemplate <typename Dtype>\nvoid CuDNNLRNLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  LRNLayer<Dtype>::Reshape(bottom, top);\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, bottom[0]->num(),\n      this->channels_, this->height_, this->width_);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, bottom[0]->num(),\n      this->channels_, this->height_, this->width_);\n  CUDNN_CHECK(cudnnSetLRNDescriptor(norm_desc_, size_, alpha_, beta_, k_));\n}\n\ntemplate <typename Dtype>\nCuDNNLRNLayer<Dtype>::~CuDNNLRNLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(bottom_desc_);\n  cudnnDestroyTensorDescriptor(top_desc_);\n\n  // destroy LRN handle\n  cudnnDestroy(handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNLRNLayer);\n\n}   // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_lrn_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_lrn_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNLRNLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n\n  CUDNN_CHECK(cudnnLRNCrossChannelForward(\n        handle_, norm_desc_, CUDNN_LRN_CROSS_CHANNEL_DIM1,\n        cudnn::dataType<Dtype>::one,\n        bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        top_desc_, top_data) );\n}\n\ntemplate <typename Dtype>\nvoid CuDNNLRNLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n\n  CUDNN_CHECK(cudnnLRNCrossChannelBackward(\n        handle_, norm_desc_, CUDNN_LRN_CROSS_CHANNEL_DIM1,\n        cudnn::dataType<Dtype>::one,\n        top_desc_, top_data,\n        top_desc_, top_diff,\n        bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        bottom_desc_, bottom_diff) );\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNLRNLayer);\n\n};  // namespace caffe\n\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_pooling_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_pooling_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNPoolingLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  PoolingLayer<Dtype>::LayerSetUp(bottom, top);\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n  cudnn::createPoolingDesc<Dtype>(&pooling_desc_,\n      this->layer_param_.pooling_param().pool(), &mode_,\n      this->kernel_h_, this->kernel_w_, this->pad_h_, this->pad_w_,\n      this->stride_h_, this->stride_w_);\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNPoolingLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  PoolingLayer<Dtype>::Reshape(bottom, top);\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, bottom[0]->num(),\n      this->channels_, this->height_, this->width_);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, bottom[0]->num(),\n      this->channels_, this->pooled_height_, this->pooled_width_);\n}\n\ntemplate <typename Dtype>\nCuDNNPoolingLayer<Dtype>::~CuDNNPoolingLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(bottom_desc_);\n  cudnnDestroyTensorDescriptor(top_desc_);\n  cudnnDestroyPoolingDescriptor(pooling_desc_);\n  cudnnDestroy(handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNPoolingLayer);\n\n}   // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_pooling_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_pooling_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNPoolingLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  CUDNN_CHECK(cudnnPoolingForward(handle_, pooling_desc_,\n        cudnn::dataType<Dtype>::one,\n        bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        top_desc_, top_data));\n}\n\ntemplate <typename Dtype>\nvoid CuDNNPoolingLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  CUDNN_CHECK(cudnnPoolingBackward(handle_, pooling_desc_,\n        cudnn::dataType<Dtype>::one,\n        top_desc_, top_data, top_desc_, top_diff,\n        bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        bottom_desc_, bottom_diff));\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNPoolingLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_relu_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_relu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNReLULayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  ReLULayer<Dtype>::LayerSetUp(bottom, top);\n  // initialize cuDNN\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n  cudnn::createActivationDescriptor<Dtype>(&activ_desc_, CUDNN_ACTIVATION_RELU);\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNReLULayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  ReLULayer<Dtype>::Reshape(bottom, top);\n  const int N = bottom[0]->num();\n  const int K = bottom[0]->channels();\n  const int H = bottom[0]->height();\n  const int W = bottom[0]->width();\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);\n}\n\ntemplate <typename Dtype>\nCuDNNReLULayer<Dtype>::~CuDNNReLULayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(this->bottom_desc_);\n  cudnnDestroyTensorDescriptor(this->top_desc_);\n  cudnnDestroyActivationDescriptor(this->activ_desc_);\n  cudnnDestroy(this->handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNReLULayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_relu_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_relu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNReLULayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  // Fallback to standard Caffe for leaky ReLU.\n  if (ReLULayer<Dtype>::layer_param_.relu_param().negative_slope() != 0) {\n    return ReLULayer<Dtype>::Forward_gpu(bottom, top);\n  }\n\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationForward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#else\n  CUDNN_CHECK(cudnnActivationForward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#endif\n}\n\ntemplate <typename Dtype>\nvoid CuDNNReLULayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n\n  // Fallback to standard Caffe for leaky ReLU.\n  if (ReLULayer<Dtype>::layer_param_.relu_param().negative_slope() != 0) {\n    return ReLULayer<Dtype>::Backward_gpu(top, propagate_down, bottom);\n  }\n\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationBackward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        
cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, bottom_diff));\n#else\n  CUDNN_CHECK(cudnnActivationBackward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, bottom_diff));\n#endif\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNReLULayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_sigmoid_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_sigmoid_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNSigmoidLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  SigmoidLayer<Dtype>::LayerSetUp(bottom, top);\n  // initialize cuDNN\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n  cudnn::createActivationDescriptor<Dtype>(&activ_desc_,\n      CUDNN_ACTIVATION_SIGMOID);\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNSigmoidLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  SigmoidLayer<Dtype>::Reshape(bottom, top);\n  const int N = bottom[0]->num();\n  const int K = bottom[0]->channels();\n  const int H = bottom[0]->height();\n  const int W = bottom[0]->width();\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);\n}\n\ntemplate <typename Dtype>\nCuDNNSigmoidLayer<Dtype>::~CuDNNSigmoidLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(this->bottom_desc_);\n  cudnnDestroyTensorDescriptor(this->top_desc_);\n  cudnnDestroy(this->handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNSigmoidLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_sigmoid_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_sigmoid_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNSigmoidLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationForward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#else\n  CUDNN_CHECK(cudnnActivationForward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#endif\n}\n\ntemplate <typename Dtype>\nvoid CuDNNSigmoidLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationBackward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, bottom_diff));\n#else\n  CUDNN_CHECK(cudnnActivationBackward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, 
bottom_diff));\n#endif\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNSigmoidLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_softmax_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"thrust/device_vector.h\"\n\n#include \"caffe/layers/cudnn_softmax_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNSoftmaxLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  SoftmaxLayer<Dtype>::LayerSetUp(bottom, top);\n  // Initialize CUDNN.\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNSoftmaxLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  SoftmaxLayer<Dtype>::Reshape(bottom, top);\n  int N = this->outer_num_;\n  int K = bottom[0]->shape(this->softmax_axis_);\n  int H = this->inner_num_;\n  int W = 1;\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);\n}\n\ntemplate <typename Dtype>\nCuDNNSoftmaxLayer<Dtype>::~CuDNNSoftmaxLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(bottom_desc_);\n  cudnnDestroyTensorDescriptor(top_desc_);\n  cudnnDestroy(handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNSoftmaxLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_softmax_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"thrust/device_vector.h\"\n\n#include \"caffe/layers/cudnn_softmax_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNSoftmaxLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  CUDNN_CHECK(cudnnSoftmaxForward(handle_, CUDNN_SOFTMAX_ACCURATE,\n        CUDNN_SOFTMAX_MODE_CHANNEL,\n        cudnn::dataType<Dtype>::one,\n        bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        top_desc_, top_data));\n}\n\ntemplate <typename Dtype>\nvoid CuDNNSoftmaxLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_data = top[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n\n    CUDNN_CHECK(cudnnSoftmaxBackward(handle_, CUDNN_SOFTMAX_ACCURATE,\n          CUDNN_SOFTMAX_MODE_CHANNEL,\n          cudnn::dataType<Dtype>::one,\n          top_desc_, top_data, top_desc_, top_diff,\n          cudnn::dataType<Dtype>::zero,\n          bottom_desc_, bottom_diff));\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNSoftmaxLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_tanh_layer.cpp",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_tanh_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNTanHLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  TanHLayer<Dtype>::LayerSetUp(bottom, top);\n  // initialize cuDNN\n  CUDNN_CHECK(cudnnCreate(&handle_));\n  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);\n  cudnn::createTensor4dDesc<Dtype>(&top_desc_);\n  cudnn::createActivationDescriptor<Dtype>(&activ_desc_, CUDNN_ACTIVATION_TANH);\n  handles_setup_ = true;\n}\n\ntemplate <typename Dtype>\nvoid CuDNNTanHLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  TanHLayer<Dtype>::Reshape(bottom, top);\n  const int N = bottom[0]->num();\n  const int K = bottom[0]->channels();\n  const int H = bottom[0]->height();\n  const int W = bottom[0]->width();\n  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);\n  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);\n}\n\ntemplate <typename Dtype>\nCuDNNTanHLayer<Dtype>::~CuDNNTanHLayer() {\n  // Check that handles have been setup before destroying.\n  if (!handles_setup_) { return; }\n\n  cudnnDestroyTensorDescriptor(this->bottom_desc_);\n  cudnnDestroyTensorDescriptor(this->top_desc_);\n  cudnnDestroy(this->handle_);\n}\n\nINSTANTIATE_CLASS(CuDNNTanHLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/cudnn_tanh_layer.cu",
    "content": "#ifdef USE_CUDNN\n#include <vector>\n\n#include \"caffe/layers/cudnn_tanh_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid CuDNNTanHLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationForward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#else\n  CUDNN_CHECK(cudnnActivationForward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->top_desc_, top_data));\n#endif\n}\n\ntemplate <typename Dtype>\nvoid CuDNNTanHLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n\n#if CUDNN_VERSION_MIN(5, 0, 0)\n  CUDNN_CHECK(cudnnActivationBackward(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, bottom_diff));\n#else\n  CUDNN_CHECK(cudnnActivationBackward_v4(this->handle_,\n        activ_desc_,\n        cudnn::dataType<Dtype>::one,\n        this->top_desc_, top_data, this->top_desc_, top_diff,\n        this->bottom_desc_, bottom_data,\n        cudnn::dataType<Dtype>::zero,\n        this->bottom_desc_, 
bottom_diff));\n#endif\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(CuDNNTanHLayer);\n\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#endif  // USE_OPENCV\n#include <stdint.h>\n\n#include <vector>\n\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/layers/data_layer.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nDataLayer<Dtype>::DataLayer(const LayerParameter& param)\n  : BasePrefetchingDataLayer<Dtype>(param),\n    reader_(param) {\n}\n\ntemplate <typename Dtype>\nDataLayer<Dtype>::~DataLayer() {\n  this->StopInternalThread();\n}\n\ntemplate <typename Dtype>\nvoid DataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int batch_size = this->layer_param_.data_param().batch_size();\n  // Read a data point, and use it to initialize the top blob.\n  Datum& datum = *(reader_.full().peek());\n\n  // Use data_transformer to infer the expected blob shape from datum.\n  vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);\n  this->transformed_data_.Reshape(top_shape);\n  // Reshape top[0] and prefetch_data according to the batch_size.\n  top_shape[0] = batch_size;\n  top[0]->Reshape(top_shape);\n  for (int i = 0; i < this->PREFETCH_COUNT; ++i) {\n    this->prefetch_[i].data_.Reshape(top_shape);\n  }\n  LOG(INFO) << \"output data size: \" << top[0]->num() << \",\"\n      << top[0]->channels() << \",\" << top[0]->height() << \",\"\n      << top[0]->width();\n  // label\n  if (this->output_labels_) {\n    vector<int> label_shape(1, batch_size);\n    top[1]->Reshape(label_shape);\n    for (int i = 0; i < this->PREFETCH_COUNT; ++i) {\n      this->prefetch_[i].label_.Reshape(label_shape);\n    }\n  }\n}\n\n// This function is called on prefetch thread\ntemplate<typename Dtype>\nvoid DataLayer<Dtype>::load_batch(Batch<Dtype>* batch) {\n  CPUTimer batch_timer;\n  batch_timer.Start();\n  double read_time = 0;\n  double trans_time = 0;\n  CPUTimer timer;\n  CHECK(batch->data_.count());\n  
CHECK(this->transformed_data_.count());\n\n  // Reshape according to the first datum of each batch\n  // on single input batches allows for inputs of varying dimension.\n  const int batch_size = this->layer_param_.data_param().batch_size();\n  Datum& datum = *(reader_.full().peek());\n  // Use data_transformer to infer the expected blob shape from datum.\n  vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);\n  this->transformed_data_.Reshape(top_shape);\n  // Reshape batch according to the batch_size.\n  top_shape[0] = batch_size;\n  batch->data_.Reshape(top_shape);\n\n  Dtype* top_data = batch->data_.mutable_cpu_data();\n  Dtype* top_label = NULL;  // suppress warnings about uninitialized variables\n\n  if (this->output_labels_) {\n    top_label = batch->label_.mutable_cpu_data();\n  }\n  for (int item_id = 0; item_id < batch_size; ++item_id) {\n    timer.Start();\n    // get a datum\n    Datum& datum = *(reader_.full().pop(\"Waiting for data\"));\n    read_time += timer.MicroSeconds();\n    timer.Start();\n    // Apply data transformations (mirror, scale, crop...)\n    int offset = batch->data_.offset(item_id);\n    this->transformed_data_.set_cpu_data(top_data + offset);\n    this->data_transformer_->Transform(datum, &(this->transformed_data_));\n    // Copy label.\n    if (this->output_labels_) {\n      top_label[item_id] = datum.label();\n    }\n    trans_time += timer.MicroSeconds();\n\n    reader_.free().push(const_cast<Datum*>(&datum));\n  }\n  timer.Stop();\n  batch_timer.Stop();\n  DLOG(INFO) << \"Prefetch batch: \" << batch_timer.MilliSeconds() << \" ms.\";\n  DLOG(INFO) << \"     Read time: \" << read_time / 1000 << \" ms.\";\n  DLOG(INFO) << \"Transform time: \" << trans_time / 1000 << \" ms.\";\n}\n\nINSTANTIATE_CLASS(DataLayer);\nREGISTER_LAYER_CLASS(Data);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/deconv_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/deconv_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid DeconvolutionLayer<Dtype>::compute_output_shape() {\n  const int* kernel_shape_data = this->kernel_shape_.cpu_data();\n  const int* stride_data = this->stride_.cpu_data();\n  const int* pad_data = this->pad_.cpu_data();\n  const int* dilation_data = this->dilation_.cpu_data();\n  this->output_shape_.clear();\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\n    // i + 1 to skip channel axis\n    const int input_dim = this->input_shape(i + 1);\n    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;\n    const int output_dim = stride_data[i] * (input_dim - 1)\n        + kernel_extent - 2 * pad_data[i];\n    this->output_shape_.push_back(output_dim);\n  }\n}\n\ntemplate <typename Dtype>\nvoid DeconvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->cpu_data();\n    Dtype* top_data = top[i]->mutable_cpu_data();\n    for (int n = 0; n < this->num_; ++n) {\n      this->backward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,\n          top_data + n * this->top_dim_);\n      if (this->bias_term_) {\n        const Dtype* bias = this->blobs_[1]->cpu_data();\n        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid DeconvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->cpu_diff();\n    const Dtype* bottom_data = bottom[i]->cpu_data();\n    Dtype* 
bottom_diff = bottom[i]->mutable_cpu_diff();\n    // Bias gradient, if necessary.\n    if (this->bias_term_ && this->param_propagate_down_[1]) {\n      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();\n      for (int n = 0; n < this->num_; ++n) {\n        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);\n      }\n    }\n    if (this->param_propagate_down_[0] || propagate_down[i]) {\n      for (int n = 0; n < this->num_; ++n) {\n        // Gradient w.r.t. weight. Note that we will accumulate diffs.\n        if (this->param_propagate_down_[0]) {\n          this->weight_cpu_gemm(top_diff + n * this->top_dim_,\n              bottom_data + n * this->bottom_dim_, weight_diff);\n        }\n        // Gradient w.r.t. bottom data, if necessary, reusing the column buffer\n        // we might have just computed above.\n        if (propagate_down[i]) {\n          this->forward_cpu_gemm(top_diff + n * this->top_dim_, weight,\n              bottom_diff + n * this->bottom_dim_,\n              this->param_propagate_down_[0]);\n        }\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(DeconvolutionLayer);\n#endif\n\nINSTANTIATE_CLASS(DeconvolutionLayer);\nREGISTER_LAYER_CLASS(Deconvolution);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/deconv_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/deconv_layer.hpp\"\n#include <iostream>\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid DeconvolutionLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n       \n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  for (int i = 0; i < bottom.size(); ++i) {\n    const Dtype* bottom_data = bottom[i]->gpu_data();\n    Dtype* top_data = top[i]->mutable_gpu_data();\n    for (int n = 0; n < this->num_; ++n) {\n      this->backward_gpu_gemm(bottom_data + n * this->bottom_dim_, weight,\n          top_data + n * this->top_dim_);\n      if (this->bias_term_) {\n        const Dtype* bias = this->blobs_[1]->gpu_data();\n        this->forward_gpu_bias(top_data + n * this->top_dim_, bias);\n      }\n    }\n   \n  }\n\n  // cout << \"bottom: \"<<bottom[0]->shape_string() << \"top: \" << top[0]->shape_string() << \" \";\n   \n}\n\ntemplate <typename Dtype>\nvoid DeconvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  Dtype* weight_diff = this->blobs_[0]->mutable_gpu_diff();\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->gpu_diff();\n    const Dtype* bottom_data = bottom[i]->gpu_data();\n    Dtype* bottom_diff = bottom[i]->mutable_gpu_diff();\n    // Bias gradient, if necessary.\n    if (this->bias_term_ && this->param_propagate_down_[1]) {\n      Dtype* bias_diff = this->blobs_[1]->mutable_gpu_diff();\n      for (int n = 0; n < this->num_; ++n) {\n        this->backward_gpu_bias(bias_diff, top_diff + n * this->top_dim_);\n      }\n    }\n    if (this->param_propagate_down_[0] || propagate_down[i]) {\n      for (int n = 0; n < this->num_; ++n) {\n        // gradient w.r.t. weight. 
Note that we will accumulate diffs.\n        if (this->param_propagate_down_[0]) {\n          this->weight_gpu_gemm(top_diff + n * this->top_dim_,\n              bottom_data + n * this->bottom_dim_, weight_diff);\n        }\n        // gradient w.r.t. bottom data, if necessary.\n        if (propagate_down[i]) {\n          this->forward_gpu_gemm(top_diff + n * this->top_dim_, weight,\n              bottom_diff + n * this->bottom_dim_,\n              this->param_propagate_down_[0]);\n        }\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(DeconvolutionLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/deformable_conv_layer.cpp",
    "content": "#include <vector>\r\n#include \"caffe/filler.hpp\"\r\n#include <iostream>\r\n#include \"caffe/layers/deformable_conv_layer.hpp\"\r\nusing namespace std;\r\nnamespace caffe {\r\ntemplate <typename Dtype>\r\n int DeformableConvolutionLayer<Dtype>::input_shape(int i) {\r\n   return (*(this->bottom_shape_))[this->channel_axis_ + i];\r\n  }\r\n\r\n\r\ntemplate <typename Dtype>\r\nvoid DeformableConvolutionLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top) {\r\n  const int first_spatial_axis = this->channel_axis_ + 1;\r\n  CHECK_EQ(bottom[0]->num_axes(), first_spatial_axis + this->num_spatial_axes_)\r\n      << \"bottom num_axes may not change.\";\r\n  this->num_ = bottom[0]->count(0, this->channel_axis_);\r\n  CHECK_EQ(bottom[0]->shape(this->channel_axis_), this->channels_)\r\n      << \"Input size incompatible with convolution kernel.\";\r\n  // TODO: generalize to handle inputs of different shapes.\r\n  // Shape the tops.\r\n  this->bottom_shape_ = &bottom[0]->shape();\r\n  compute_output_shape();\r\n  vector<int> top_shape(bottom[0]->shape().begin(),\r\n      bottom[0]->shape().begin() + this->channel_axis_);\r\n  top_shape.push_back(this->num_output_);\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    top_shape.push_back(this->output_shape_[i]);\r\n  }\r\n  for (int top_id = 0; top_id < top.size(); ++top_id) {\r\n    top[top_id]->Reshape(top_shape);\r\n\r\n  }\r\n  \r\n  if (reverse_dimensions()) {\r\n    this->conv_out_spatial_dim_ = bottom[0]->count(first_spatial_axis);\r\n  } else {\r\n    this->conv_out_spatial_dim_ = top[0]->count(first_spatial_axis);\r\n  }\r\n  this->col_offset_ = this->kernel_dim_ * this->conv_out_spatial_dim_;\r\n  this->output_offset_ = this->conv_out_channels_ * this->conv_out_spatial_dim_ / this->group_;\r\n  // Setup input dimensions (conv_input_shape_).\r\n  vector<int> bottom_dim_blob_shape(1, this->num_spatial_axes_ + 1);\r\n  
this->conv_input_shape_.Reshape(bottom_dim_blob_shape);\r\n  int* conv_input_shape_data = this->conv_input_shape_.mutable_cpu_data();\r\n  for (int i = 0; i < this->num_spatial_axes_ + 1; ++i) {\r\n    if (reverse_dimensions()) {\r\n      conv_input_shape_data[i] = top[0]->shape(this->channel_axis_ + i);\r\n    } else {\r\n      conv_input_shape_data[i] = bottom[0]->shape(this->channel_axis_ + i);\r\n    }\r\n  }\r\n  // The im2col result buffer will only hold one image at a time to avoid\r\n  // overly large memory usage. In the special case of 1x1 convolution\r\n  // it goes lazily unused to save memory.\r\n  this->col_buffer_shape_.clear();\r\n  this->col_buffer_shape_.push_back(this->kernel_dim_ * this->group_);\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    if (reverse_dimensions()) {\r\n      this->col_buffer_shape_.push_back(input_shape(i + 1));\r\n    } else {\r\n      this->col_buffer_shape_.push_back(this->output_shape_[i]);\r\n    }\r\n  }\r\n  this->col_buffer_.Reshape(this->col_buffer_shape_);\r\n  \r\n  this->input_offset_dim_ = bottom[1]->count(this->channel_axis_);\r\n\r\n\r\n  this->bottom_dim_ = bottom[0]->count(this->channel_axis_);\r\n  this->top_dim_ = top[0]->count(this->channel_axis_);\r\n  this->num_kernels_im2col_ = this->conv_in_channels_ * this->conv_out_spatial_dim_;\r\n  this->num_kernels_col2im_ = reverse_dimensions() ? 
this->top_dim_ : this->bottom_dim_;\r\n  // Set up the all ones \"bias multiplier\" for adding biases by BLAS\r\n  this->out_spatial_dim_ = top[0]->count(first_spatial_axis);\r\n  if (this->bias_term_) {\r\n    vector<int> bias_multiplier_shape(1, this->out_spatial_dim_);\r\n    this->bias_multiplier_.Reshape(bias_multiplier_shape);\r\n    caffe_set(this->bias_multiplier_.count(), Dtype(1),\r\n        this->bias_multiplier_.mutable_cpu_data());\r\n  }\r\n}\r\ntemplate <typename Dtype>\r\nvoid DeformableConvolutionLayer<Dtype>::compute_output_shape() {\r\n  const int* kernel_shape_data = this->kernel_shape_.cpu_data();\r\n  const int* stride_data = this->stride_.cpu_data();\r\n  const int* pad_data = this->pad_.cpu_data();\r\n  const int* dilation_data = this->dilation_.cpu_data();\r\n  this->output_shape_.clear();\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    // i + 1 to skip channel axis\r\n    const int input_dim = this->input_shape(i + 1);\r\n    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;\r\n    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)\r\n        / stride_data[i] + 1;\r\n    this->output_shape_.push_back(output_dim);\r\n  }\r\n}\r\ntemplate <typename Dtype>\r\nvoid DeformableConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\r\n      const vector<Blob<Dtype>*>& top) {\r\n\r\n\r\n  // Configure the kernel size, padding, stride, and inputs.\r\n  DeformableConvolutionParameter conv_param = this->layer_param_.deformable_convolution_param();\r\n  this->force_nd_im2col_ = conv_param.force_nd_im2col();\r\n  this->channel_axis_ = bottom[0]->CanonicalAxisIndex(conv_param.axis());\r\n  const int first_spatial_axis = this->channel_axis_ + 1;\r\n  const int num_axes = bottom[0]->num_axes();\r\n  this->num_spatial_axes_ = num_axes - first_spatial_axis;\r\n  CHECK_GE(this->num_spatial_axes_, 0);\r\n  vector<int> bottom_dim_blob_shape(1, this->num_spatial_axes_ + 1);\r\n  
vector<int> spatial_dim_blob_shape(1, std::max(this->num_spatial_axes_, 1));\r\n  // Setup filter kernel dimensions (kernel_shape_).\r\n  this->kernel_shape_.Reshape(spatial_dim_blob_shape);\r\n  int* kernel_shape_data = this->kernel_shape_.mutable_cpu_data();\r\n  if (conv_param.has_kernel_h() || conv_param.has_kernel_w()) {\r\n    CHECK_EQ(this->num_spatial_axes_, 2)\r\n        << \"kernel_h & kernel_w can only be used for 2D convolution.\";\r\n    CHECK_EQ(0, conv_param.kernel_size_size())\r\n        << \"Either kernel_size or kernel_h/w should be specified; not both.\";\r\n    kernel_shape_data[0] = conv_param.kernel_h();\r\n    kernel_shape_data[1] = conv_param.kernel_w();\r\n  } else {\r\n    const int num_kernel_dims = conv_param.kernel_size_size();\r\n    CHECK(num_kernel_dims == 1 || num_kernel_dims == this->num_spatial_axes_)\r\n        << \"kernel_size must be specified once, or once per spatial dimension \"\r\n        << \"(kernel_size specified \" << num_kernel_dims << \" times; \"\r\n        << this->num_spatial_axes_ << \" spatial dims).\";\r\n      for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n        kernel_shape_data[i] =\r\n            conv_param.kernel_size((num_kernel_dims == 1) ? 
0 : i);\r\n      }\r\n  }\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    CHECK_GT(kernel_shape_data[i], 0) << \"Filter dimensions must be nonzero.\";\r\n  }\r\n  // Setup stride dimensions (stride_).\r\n  this->stride_.Reshape(spatial_dim_blob_shape);\r\n  int* stride_data = this->stride_.mutable_cpu_data();\r\n  if (conv_param.has_stride_h() || conv_param.has_stride_w()) {\r\n    CHECK_EQ(this->num_spatial_axes_, 2)\r\n        << \"stride_h & stride_w can only be used for 2D convolution.\";\r\n    CHECK_EQ(0, conv_param.stride_size())\r\n        << \"Either stride or stride_h/w should be specified; not both.\";\r\n    stride_data[0] = conv_param.stride_h();\r\n    stride_data[1] = conv_param.stride_w();\r\n  } else {\r\n    const int num_stride_dims = conv_param.stride_size();\r\n    CHECK(num_stride_dims == 0 || num_stride_dims == 1 ||\r\n          num_stride_dims == this->num_spatial_axes_)\r\n        << \"stride must be specified once, or once per spatial dimension \"\r\n        << \"(stride specified \" << num_stride_dims << \" times; \"\r\n        << this->num_spatial_axes_ << \" spatial dims).\";\r\n    const int kDefaultStride = 1;\r\n    for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n      stride_data[i] = (num_stride_dims == 0) ? kDefaultStride :\r\n          conv_param.stride((num_stride_dims == 1) ? 
0 : i);\r\n      CHECK_GT(stride_data[i], 0) << \"Stride dimensions must be nonzero.\";\r\n    }\r\n  }\r\n  // Setup pad dimensions (pad_).\r\n  this->pad_.Reshape(spatial_dim_blob_shape);\r\n  int* pad_data = this->pad_.mutable_cpu_data();\r\n  if (conv_param.has_pad_h() || conv_param.has_pad_w()) {\r\n    CHECK_EQ(this->num_spatial_axes_, 2)\r\n        << \"pad_h & pad_w can only be used for 2D convolution.\";\r\n    CHECK_EQ(0, conv_param.pad_size())\r\n        << \"Either pad or pad_h/w should be specified; not both.\";\r\n    pad_data[0] = conv_param.pad_h();\r\n    pad_data[1] = conv_param.pad_w();\r\n  } else {\r\n    const int num_pad_dims = conv_param.pad_size();\r\n    CHECK(num_pad_dims == 0 || num_pad_dims == 1 ||\r\n          num_pad_dims == this->num_spatial_axes_)\r\n        << \"pad must be specified once, or once per spatial dimension \"\r\n        << \"(pad specified \" << num_pad_dims << \" times; \"\r\n        << this->num_spatial_axes_ << \" spatial dims).\";\r\n    const int kDefaultPad = 0;\r\n    for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n      pad_data[i] = (num_pad_dims == 0) ? kDefaultPad :\r\n          conv_param.pad((num_pad_dims == 1) ? 0 : i);\r\n    }\r\n  }\r\n  // Setup dilation dimensions (dilation_).\r\n  this->dilation_.Reshape(spatial_dim_blob_shape);\r\n  int* dilation_data = this->dilation_.mutable_cpu_data();\r\n  const int num_dilation_dims = conv_param.dilation_size();\r\n  CHECK(num_dilation_dims == 0 || num_dilation_dims == 1 ||\r\n        num_dilation_dims == this->num_spatial_axes_)\r\n      << \"dilation must be specified once, or once per spatial dimension \"\r\n      << \"(dilation specified \" << num_dilation_dims << \" times; \"\r\n      << this->num_spatial_axes_ << \" spatial dims).\";\r\n  const int kDefaultDilation = 1;\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    dilation_data[i] = (num_dilation_dims == 0) ? 
kDefaultDilation :\r\n                       conv_param.dilation((num_dilation_dims == 1) ? 0 : i);\r\n  }\r\n  // Special case: im2col is the identity for 1x1 convolution with stride 1\r\n  // and no padding, so flag for skipping the buffer and transformation.\r\n  this->is_1x1_ = true;\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    this->is_1x1_ &=\r\n        kernel_shape_data[i] == 1 && stride_data[i] == 1 && pad_data[i] == 0;\r\n    if (!this->is_1x1_) { break; }\r\n  }\r\n  // Configure output channels and groups.\r\n  this->channels_ = bottom[0]->shape(this->channel_axis_);\r\n  this->num_output_ = this->layer_param_.deformable_convolution_param().num_output();\r\n  CHECK_GT(this->num_output_, 0);\r\n  this->group_ = this->layer_param_.deformable_convolution_param().group();\r\n  this->deformable_group_ = this->layer_param_.deformable_convolution_param().deformable_group();\r\n  CHECK_EQ(this->channels_ % this->group_, 0);\r\n  CHECK_EQ(this->num_output_ % this->group_, 0)\r\n      << \"Number of outputs should be a multiple of group.\";\r\n  CHECK_EQ(bottom[1]->shape(1), kernel_shape_data[0]*kernel_shape_data[1]*this->deformable_group_*2)\r\n      << \"Number of offset channels should be kernel_h*kernel_w*deformable_group*2.\";\r\n  CHECK_EQ(bottom[1]->shape(2), bottom[0]->shape(2))\r\n      << \"Heights of the input and the offset should be equal.\";\r\n  CHECK_EQ(bottom[1]->shape(3), bottom[0]->shape(3))\r\n      << \"Widths of the input and the offset should be equal.\";\r\n  if (reverse_dimensions()) {\r\n    this->conv_out_channels_ = this->channels_;\r\n    this->conv_in_channels_ = this->num_output_;\r\n  } else {\r\n    this->conv_out_channels_ = this->num_output_;\r\n    
this->conv_in_channels_ = this->channels_;\r\n  }\r\n  // Handle the parameters: weights and biases.\r\n  // - blobs_[0] holds the filter weights\r\n  // - blobs_[1] holds the biases (optional)\r\n  vector<int> weight_shape(2);\r\n  weight_shape[0] = this->conv_out_channels_;\r\n  weight_shape[1] = this->conv_in_channels_ / this->group_;\r\n  for (int i = 0; i < this->num_spatial_axes_; ++i) {\r\n    weight_shape.push_back(kernel_shape_data[i]);\r\n  }\r\n  this->bias_term_ = this->layer_param_.deformable_convolution_param().bias_term();\r\n  vector<int> bias_shape(this->bias_term_, this->num_output_);\r\n  if (this->blobs_.size() > 0) {\r\n    CHECK_EQ(1 + this->bias_term_, this->blobs_.size())\r\n        << \"Incorrect number of weight blobs.\";\r\n    if (weight_shape != this->blobs_[0]->shape()) {\r\n      Blob<Dtype> weight_shaped_blob(weight_shape);\r\n      LOG(FATAL) << \"Incorrect weight shape: expected shape \"\r\n          << weight_shaped_blob.shape_string() << \"; instead, shape was \"\r\n          << this->blobs_[0]->shape_string();\r\n    }\r\n    if (this->bias_term_ && bias_shape != this->blobs_[1]->shape()) {\r\n      Blob<Dtype> bias_shaped_blob(bias_shape);\r\n      LOG(FATAL) << \"Incorrect bias shape: expected shape \"\r\n          << bias_shaped_blob.shape_string() << \"; instead, shape was \"\r\n          << this->blobs_[1]->shape_string();\r\n    }\r\n    LOG(INFO) << \"Skipping parameter initialization\";\r\n  } else {\r\n    if (this->bias_term_) {\r\n      this->blobs_.resize(2);\r\n    } else {\r\n      this->blobs_.resize(1);\r\n    }\r\n    // Initialize and fill the weights:\r\n    // output channels x input channels per-group x kernel height x kernel width\r\n    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));\r\n    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(\r\n        this->layer_param_.deformable_convolution_param().weight_filler()));\r\n    weight_filler->Fill(this->blobs_[0].get());\r\n    // If necessary, 
initialize and fill the biases.\r\n    if (this->bias_term_) {\r\n      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));\r\n      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(\r\n          this->layer_param_.deformable_convolution_param().bias_filler()));\r\n      bias_filler->Fill(this->blobs_[1].get());\r\n    }\r\n  }\r\n  this->kernel_dim_ = this->blobs_[0]->count(1);\r\n  this->weight_offset_ = this->conv_out_channels_ * this->kernel_dim_ / this->group_;\r\n  // Propagate gradients to the parameters (as directed by backward pass).\r\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\r\n\r\n}\r\n\r\n\r\n#ifdef CPU_ONLY\r\nSTUB_GPU(DeformableConvolutionLayer);\r\n#endif\r\nINSTANTIATE_CLASS(DeformableConvolutionLayer);\r\n\r\n\r\n\r\n\r\n}  // namespace caffe\r\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/deformable_conv_layer.cu",
    "content": "#include <vector>\r\n#include <iostream>\r\n\r\n#include \"caffe/layers/deformable_conv_layer.hpp\"\r\n#include \"caffe/util/im2col.hpp\"\r\nusing namespace std;\r\nnamespace caffe {\r\ntemplate <typename Dtype>\r\nvoid DeformableConvolutionLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\r\n    const vector<Blob<Dtype>*>& top) {\r\n    const Dtype* weights = this->blobs_[0]->gpu_data();\r\n    const Dtype* bottom_data = bottom[0]->gpu_data();\r\n    const Dtype* offset = bottom[1]->gpu_data();\r\n    top[0]->scale_data(0);//data protect\r\n    Dtype* top_data = top[0]->mutable_gpu_data();\r\n    Dtype *col_buff =this->col_buffer_.mutable_gpu_data();\r\n    for (int n = 0; n < this->num_; ++n) {\r\n      const Dtype* col_buff = bottom_data + n*this->bottom_dim_;\r\n      deformable_im2col_gpu<Dtype>(bottom_data + n*this->bottom_dim_, //data_col\r\n                                          offset + n*this->input_offset_dim_,//offset\r\n                                          bottom[0]->shape(1),\r\n                                          bottom[0]->shape(2),bottom[0]->shape(3),this->kernel_shape_.cpu_data()[0],this->kernel_shape_.cpu_data()[1],\r\n                                          this->pad_.cpu_data()[0],this->pad_.cpu_data()[1],this->stride_.cpu_data()[0],this->stride_.cpu_data()[1],\r\n                                          this->dilation_.cpu_data()[0],this->dilation_.cpu_data()[1],this->deformable_group_,\r\n                                          this->col_buffer_.mutable_gpu_data());\r\n    // gemm\r\n    for (int g = 0; g < this->group_; ++g) {\r\n          caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, this->conv_out_channels_ /\r\n                                this->group_, this->conv_out_spatial_dim_, this->kernel_dim_,\r\n                                (Dtype)1., weights + this->weight_offset_ * g, col_buff + this->col_offset_ * g,\r\n                                (Dtype)0., top_data + n * 
this->top_dim_ + this->output_offset_ * g);\r\n    }\r\n\r\n    if (this->bias_term_) {\r\n      const Dtype* bias = this->blobs_[1]->gpu_data();\r\n      caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, this->num_output_,\r\n          this->out_spatial_dim_, 1, (Dtype)1., bias, this->bias_multiplier_.gpu_data(),\r\n          (Dtype)1., top_data + n * this->top_dim_);\r\n      }\r\n  }\r\n}\r\n\r\ntemplate <typename Dtype>\r\nvoid DeformableConvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\r\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\r\n      const Dtype* weight = this->blobs_[0]->gpu_data();\r\n      Dtype* weight_diff = this->blobs_[0]->mutable_gpu_diff();\r\n      const Dtype* top_diff = top[0]->gpu_diff();\r\n     // Bias gradient, if necessary.\r\n      if (this->bias_term_ && this->param_propagate_down_[1]) {\r\n        Dtype* bias_diff = this->blobs_[1]->mutable_gpu_diff();\r\n        for (int n = 0; n < this->num_; ++n) {\r\n          caffe_gpu_gemv<Dtype>(CblasNoTrans, this->num_output_, this->out_spatial_dim_, 1.,\r\n            top_diff + n * this->top_dim_ , this->bias_multiplier_.gpu_data(), 1., bias_diff);\r\n        }\r\n      }\r\n      if (this->param_propagate_down_[0] || propagate_down[0]) {\r\n        const Dtype* bottom_data = bottom[0]->gpu_data();\r\n        bottom[0]->scale_diff(0);//data protect\r\n        Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\r\n        const Dtype * bottom_offset=bottom[1]->gpu_data();\r\n        bottom[1]->scale_diff(0);//data protect\r\n        Dtype * bottom_offset_diff = bottom[1]->mutable_gpu_diff();\r\n\r\n        for (int n = 0; n < this->num_; ++n) {\r\n          if (this->param_propagate_down_[0]) {            \r\n              deformable_im2col_gpu<Dtype>(\r\n                   bottom_data + n*this->bottom_dim_, //data_col\r\n                    bottom_offset + n*this->input_offset_dim_,//offset\r\n                    
bottom[0]->shape(1),bottom[0]->shape(2),bottom[0]->shape(3),\r\n                    this->kernel_shape_.cpu_data()[0],this->kernel_shape_.cpu_data()[1],\r\n                    this->pad_.cpu_data()[0],this->pad_.cpu_data()[1],this->stride_.cpu_data()[0],this->stride_.cpu_data()[1],\r\n                    this->dilation_.cpu_data()[0],this->dilation_.cpu_data()[1],this->deformable_group_,\r\n                    this->col_buffer_.mutable_gpu_data());\r\n              // Gradient w.r.t. weight; note the n * top_dim_ term so every\r\n              // image in the batch contributes, not only the first one.\r\n              for (int g = 0; g < this->group_; ++g) {\r\n                caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans, this->conv_out_channels_ / this->group_,\r\n                    this->kernel_dim_, this->conv_out_spatial_dim_,\r\n                    (Dtype)1., top_diff + n * this->top_dim_ + this->output_offset_ * g,\r\n                    this->col_buffer_.gpu_data() + this->col_offset_ * g,\r\n                    (Dtype)1., weight_diff + this->weight_offset_ * g);\r\n              }\r\n          }\r\n\r\n          if (propagate_down[0]) {\r\n            for (int g = 0; g < this->group_; ++g) {\r\n              caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, this->kernel_dim_,\r\n                  this->conv_out_spatial_dim_, this->conv_out_channels_ / this->group_,\r\n                  (Dtype)1., weight + this->weight_offset_ * g,\r\n                  top_diff + n * this->top_dim_ + this->output_offset_ * g,\r\n                  (Dtype)0., this->col_buffer_.mutable_gpu_data() + this->col_offset_ * g);\r\n            }\r\n            // gradient w.r.t. input offset\r\n              deformable_col2im_coord_gpu<Dtype>(this->col_buffer_.gpu_data(),\r\n                bottom_data + n*this->bottom_dim_,\r\n                bottom_offset + n*this->input_offset_dim_,\r\n                this->col_buffer_.shape(0),bottom[0]->shape(2),bottom[0]->shape(3),\r\n                this->kernel_shape_.cpu_data()[0],this->kernel_shape_.cpu_data()[1],\r\n                  this->pad_.cpu_data()[0],this->pad_.cpu_data()[1],\r\n                  this->stride_.cpu_data()[0],this->stride_.cpu_data()[1],\r\n                  this->dilation_.cpu_data()[0],this->dilation_.cpu_data()[1],\r\n                  this->deformable_group_, bottom_offset_diff + n*this->input_offset_dim_);\r\n              // gradient w.r.t. input data\r\n              deformable_col2im_gpu<Dtype>(this->col_buffer_.gpu_data(),\r\n                bottom_offset + n*this->input_offset_dim_,\r\n                this->conv_in_channels_,this->conv_input_shape_.cpu_data()[1],this->conv_input_shape_.cpu_data()[2],this->col_buffer_.shape(0),\r\n                this->kernel_shape_.cpu_data()[0],this->kernel_shape_.cpu_data()[1],\r\n                this->pad_.cpu_data()[0],this->pad_.cpu_data()[1],this->stride_.cpu_data()[0],this->stride_.cpu_data()[1],\r\n                this->dilation_.cpu_data()[0],this->dilation_.cpu_data()[1],this->deformable_group_,\r\n                bottom_diff + n*this->bottom_dim_);  // index images by bottom_dim_, not input_offset_dim_\r\n          }\r\n        }\r\n      }\r\n}\r\n\r\nINSTANTIATE_LAYER_GPU_FUNCS(DeformableConvolutionLayer);\r\n\r\n}  // namespace caffe\r\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/dropout_layer.cpp",
    "content": "// TODO (sergeyk): effect should not be dependent on phase. wasted memcpy.\n\n#include <vector>\n\n#include \"caffe/layers/dropout_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  threshold_ = this->layer_param_.dropout_param().dropout_ratio();\n  DCHECK(threshold_ > 0.);\n  DCHECK(threshold_ < 1.);\n  scale_ = 1. / (1. - threshold_);\n  uint_thres_ = static_cast<unsigned int>(UINT_MAX * threshold_);\n  scale_train_ = this->layer_param_.dropout_param().scale_train();\n}\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::Reshape(bottom, top);\n  // Set up the cache for random number generation\n  rand_vec_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n}\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  unsigned int* mask = rand_vec_.mutable_cpu_data();\n  const int count = bottom[0]->count();\n  if (this->phase_ == TRAIN) {\n    // Create random numbers\n    caffe_rng_bernoulli(count, 1. - threshold_, mask);\n    if (scale_train_) {\n      for (int i = 0; i < count; ++i) {\n        top_data[i] = bottom_data[i] * mask[i] * scale_;\n      }\n    } else {\n      for (int i = 0; i < count; ++i) {\n        top_data[i] = bottom_data[i] * mask[i];\n      }\n    }\n  } else {\n    caffe_copy(bottom[0]->count(), bottom_data, top_data);\n    if (!scale_train_) {\n      caffe_scal<Dtype>(  count, 1. 
/ scale_, top_data);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    if (this->phase_ == TRAIN) {\n      const unsigned int* mask = rand_vec_.cpu_data();\n      const int count = bottom[0]->count();\n      if (scale_train_) {\n        for (int i = 0; i < count; ++i) {\n          bottom_diff[i] = top_diff[i] * mask[i] * scale_;\n        }\n      } else {\n        for (int i = 0; i < count; ++i) {\n          bottom_diff[i] = top_diff[i] * mask[i];\n        }\n      }\n    } else {\n      caffe_copy(top[0]->count(), top_diff, bottom_diff);\n      if (!scale_train_) {\n        caffe_scal<Dtype>(top[0]->count(), 1. / scale_, bottom_diff);\n      }\n    }\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(DropoutLayer);\n#endif\n\nINSTANTIATE_CLASS(DropoutLayer);\nREGISTER_LAYER_CLASS(Dropout);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/dropout_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/dropout_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void DropoutForward(const int n, const Dtype* in,\n    const unsigned int* mask, const unsigned int threshold, const float scale,\n    Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = in[index] * (mask[index] > threshold) * scale;\n  }\n}\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  if (this->phase_ == TRAIN) {\n    unsigned int* mask =\n        static_cast<unsigned int*>(rand_vec_.mutable_gpu_data());\n    caffe_gpu_rng_uniform(count, mask);\n    // set thresholds\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    if (scale_train_) {\n      DropoutForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n          count, bottom_data, mask, uint_thres_, scale_, top_data);\n    } else {\n      DropoutForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n          count, bottom_data, mask, uint_thres_, 1.f, top_data);\n    }\n    CUDA_POST_KERNEL_CHECK;\n  } else {\n    caffe_copy(count, bottom_data, top_data);\n    if (!scale_train_) {\n      caffe_gpu_scal<Dtype>(count, 1. 
/ scale_, top_data);\n    }\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void DropoutBackward(const int n, const Dtype* in_diff,\n    const unsigned int* mask, const unsigned int threshold, const float scale,\n    Dtype* out_diff) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out_diff[index] = in_diff[index] * scale * (mask[index] > threshold);\n  }\n}\n\ntemplate <typename Dtype>\nvoid DropoutLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    if (this->phase_ == TRAIN) {\n      const unsigned int* mask =\n          static_cast<const unsigned int*>(rand_vec_.gpu_data());\n      const int count = bottom[0]->count();\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      if (scale_train_) {\n        DropoutBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n            count, top_diff, mask, uint_thres_, scale_, bottom_diff);\n      } else {\n        DropoutBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n            count, top_diff, mask, uint_thres_, 1.f, bottom_diff);\n      }\n      CUDA_POST_KERNEL_CHECK;\n    } else {\n      caffe_copy(top[0]->count(), top_diff, bottom_diff);\n      if (!scale_train_) {\n        caffe_gpu_scal<Dtype>(top[0]->count(), 1. / scale_, bottom_diff);\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(DropoutLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/dummy_data_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/dummy_data_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid DummyDataLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int num_top = top.size();\n  const DummyDataParameter& param = this->layer_param_.dummy_data_param();\n  const int num_data_filler = param.data_filler_size();\n  CHECK(num_data_filler == 0 || num_data_filler == 1 ||\n        num_data_filler == num_top)\n      << \"Number of data fillers must be 0, 1 or equal to the number of tops: \"\n      << num_top << \"; you specified \" << num_data_filler << \" data fillers.\";\n\n  const bool legacy_dims = param.num_size() || param.channels_size() ||\n                           param.height_size() || param.width_size();\n  if (legacy_dims) {\n    CHECK_EQ(0, param.shape_size())\n        << \"Both shape and legacy fields were specified\";\n    // Using deprecated 4D output dim specifiers.\n    CHECK(param.num_size() == 1 || param.num_size() == num_top)\n        << \"Must specify 'num' once, or once per top blob \"\n        << \"(\" << num_top << \"); specified \" << param.num_size() << \".\";\n    CHECK(param.channels_size() == 1 || param.channels_size() == num_top)\n        << \"Must specify 'channels' once, or once per top blob \"\n        << \"(\" << num_top << \"); specified \" << param.channels_size() << \".\";\n    CHECK(param.height_size() == 1 || param.height_size() == num_top)\n        << \"Must specify 'height' once, or once per top blob \"\n        << \"(\" << num_top << \"); specified \" << param.height_size() << \".\";\n    CHECK(param.width_size() == 1 || param.width_size() == num_top)\n        << \"Must specify 'width' once, or once per top blob \"\n        << \"(\" << num_top << \"); specified \" << param.width_size() << \".\";\n  } else {\n    CHECK(param.shape_size() == 1 || param.shape_size() == num_top)\n        << 
\"Must specify 'shape' once, or once per top blob \"\n        << \"(\" << num_top << \"); specified \" << param.shape_size() << \".\";\n  }\n  // refill_[i] tells Forward i whether or not to actually refill top Blob i.\n  // If refill_[i] is false, Forward does nothing for Blob i. We use this to\n  // avoid wastefully refilling \"constant\" Blobs in every forward pass.\n  // We first fill refill_ in with the INVERSE of its final values.\n  // The first time we run Forward from the LayerSetUp method, we'll fill only\n  // Blobs for which refill_ is normally false.  These Blobs will never be\n  // filled again.\n  refill_.clear();\n  fillers_.clear();\n  if (num_data_filler <= 1) {\n    FillerParameter filler_param;\n    if (num_data_filler == 0) {\n      filler_param.set_type(\"constant\");\n      filler_param.set_value(0);\n    } else {\n      filler_param.CopyFrom(param.data_filler(0));\n    }\n    // Refill on each iteration iff not using a constant filler,\n    // but use the inverse of this rule for the first run.\n    refill_.resize(1);\n    refill_[0] = (strcmp(filler_param.type().c_str(), \"constant\") == 0);\n    fillers_.resize(1);\n    fillers_[0].reset(GetFiller<Dtype>(filler_param));\n  } else {\n    refill_.resize(num_top);\n    fillers_.resize(num_top);\n    for (int i = 0; i < num_top; ++i) {\n      fillers_[i].reset(GetFiller<Dtype>(param.data_filler(i)));\n      // Refill on each iteration iff not using a constant filler,\n      // but use the inverse of this rule for the first run.\n      refill_[i] =\n          (strcmp(param.data_filler(i).type().c_str(), \"constant\") == 0);\n    }\n  }\n  for (int i = 0; i < num_top; ++i) {\n    if (legacy_dims) {\n      const int num = (param.num_size() == 1) ? param.num(0) : param.num(i);\n      const int channels =\n          (param.channels_size() == 1) ? param.channels(0) : param.channels(i);\n      const int height =\n          (param.height_size() == 1) ? 
param.height(0) : param.height(i);\n      const int width =\n          (param.width_size() == 1) ? param.width(0) : param.width(i);\n      top[i]->Reshape(num, channels, height, width);\n    } else {\n      const int shape_index = (param.shape_size() == 1) ? 0 : i;\n      top[i]->Reshape(param.shape(shape_index));\n    }\n  }\n  // Run Forward once, with refill_ inverted, to fill the constant Blobs.\n  this->Forward(bottom, top);\n  // Invert the inverted refill_ values to refill the desired (non-constant)\n  // Blobs in every usual forward pass.\n  for (int i = 0; i < refill_.size(); ++i) {\n    refill_[i] = !refill_[i];\n  }\n}\n\ntemplate <typename Dtype>\nvoid DummyDataLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  for (int i = 0; i < top.size(); ++i) {\n    const int filler_id = (fillers_.size() > 1) ? i : 0;\n    if (refill_[filler_id]) {\n      fillers_[filler_id]->Fill(top[i]);\n    }\n  }\n}\n\nINSTANTIATE_CLASS(DummyDataLayer);\nREGISTER_LAYER_CLASS(DummyData);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/eltwise_layer.cpp",
    "content": "#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/eltwise_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK(this->layer_param().eltwise_param().coeff_size() == 0\n      || this->layer_param().eltwise_param().coeff_size() == bottom.size()) <<\n      \"Eltwise Layer takes one coefficient per bottom blob.\";\n  CHECK(!(this->layer_param().eltwise_param().operation()\n      == EltwiseParameter_EltwiseOp_PROD\n      && this->layer_param().eltwise_param().coeff_size())) <<\n      \"Eltwise layer only takes coefficients for summation.\";\n  op_ = this->layer_param_.eltwise_param().operation();\n  // Blob-wise coefficients for the elementwise operation.\n  coeffs_ = vector<Dtype>(bottom.size(), 1);\n  if (this->layer_param().eltwise_param().coeff_size()) {\n    for (int i = 0; i < bottom.size(); ++i) {\n      coeffs_[i] = this->layer_param().eltwise_param().coeff(i);\n    }\n  }\n  stable_prod_grad_ = this->layer_param_.eltwise_param().stable_prod_grad();\n}\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  for (int i = 1; i < bottom.size(); ++i) {\n    CHECK(bottom[i]->shape() == bottom[0]->shape());\n  }\n  top[0]->ReshapeLike(*bottom[0]);\n  // If max operation, we will initialize the vector index part.\n  if (this->layer_param_.eltwise_param().operation() ==\n      EltwiseParameter_EltwiseOp_MAX && top.size() == 1) {\n    max_idx_.Reshape(bottom[0]->shape());\n  }\n}\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  int* mask = NULL;\n  const Dtype* bottom_data_a = NULL;\n  const Dtype* bottom_data_b = NULL;\n  const int count = top[0]->count();\n  Dtype* top_data = 
top[0]->mutable_cpu_data();\n  switch (op_) {\n  case EltwiseParameter_EltwiseOp_PROD:\n    caffe_mul(count, bottom[0]->cpu_data(), bottom[1]->cpu_data(), top_data);\n    for (int i = 2; i < bottom.size(); ++i) {\n      caffe_mul(count, top_data, bottom[i]->cpu_data(), top_data);\n    }\n    break;\n  case EltwiseParameter_EltwiseOp_SUM:\n    caffe_set(count, Dtype(0), top_data);\n    // TODO(shelhamer) does BLAS optimize to sum for coeff = 1?\n    for (int i = 0; i < bottom.size(); ++i) {\n      caffe_axpy(count, coeffs_[i], bottom[i]->cpu_data(), top_data);\n    }\n    break;\n  case EltwiseParameter_EltwiseOp_MAX:\n    // Initialize\n    mask = max_idx_.mutable_cpu_data();\n    caffe_set(count, -1, mask);\n    caffe_set(count, Dtype(-FLT_MAX), top_data);\n    // bottom 0 & 1\n    bottom_data_a = bottom[0]->cpu_data();\n    bottom_data_b = bottom[1]->cpu_data();\n    for (int idx = 0; idx < count; ++idx) {\n      if (bottom_data_a[idx] > bottom_data_b[idx]) {\n        top_data[idx] = bottom_data_a[idx];  // maxval\n        mask[idx] = 0;  // maxid\n      } else {\n        top_data[idx] = bottom_data_b[idx];  // maxval\n        mask[idx] = 1;  // maxid\n      }\n    }\n    // bottom 2++\n    for (int blob_idx = 2; blob_idx < bottom.size(); ++blob_idx) {\n      bottom_data_b = bottom[blob_idx]->cpu_data();\n      for (int idx = 0; idx < count; ++idx) {\n        if (bottom_data_b[idx] > top_data[idx]) {\n          top_data[idx] = bottom_data_b[idx];  // maxval\n          mask[idx] = blob_idx;  // maxid\n        }\n      }\n    }\n    break;\n  default:\n    LOG(FATAL) << \"Unknown elementwise operation.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const int* mask = NULL;\n  const int count = top[0]->count();\n  const Dtype* top_data = top[0]->cpu_data();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  for (int i = 0; 
i < bottom.size(); ++i) {\n    if (propagate_down[i]) {\n      const Dtype* bottom_data = bottom[i]->cpu_data();\n      Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();\n      switch (op_) {\n      case EltwiseParameter_EltwiseOp_PROD:\n        if (stable_prod_grad_) {\n          bool initialized = false;\n          for (int j = 0; j < bottom.size(); ++j) {\n            if (i == j) { continue; }\n            if (!initialized) {\n              caffe_copy(count, bottom[j]->cpu_data(), bottom_diff);\n              initialized = true;\n            } else {\n              caffe_mul(count, bottom[j]->cpu_data(), bottom_diff,\n                        bottom_diff);\n            }\n          }\n        } else {\n          caffe_div(count, top_data, bottom_data, bottom_diff);\n        }\n        caffe_mul(count, bottom_diff, top_diff, bottom_diff);\n        break;\n      case EltwiseParameter_EltwiseOp_SUM:\n        if (coeffs_[i] == Dtype(1)) {\n          caffe_copy(count, top_diff, bottom_diff);\n        } else {\n          caffe_cpu_scale(count, coeffs_[i], top_diff, bottom_diff);\n        }\n        break;\n      case EltwiseParameter_EltwiseOp_MAX:\n        mask = max_idx_.cpu_data();\n        for (int index = 0; index < count; ++index) {\n          Dtype gradient = 0;\n          if (mask[index] == i) {\n            gradient += top_diff[index];\n          }\n          bottom_diff[index] = gradient;\n        }\n        break;\n      default:\n        LOG(FATAL) << \"Unknown elementwise operation.\";\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(EltwiseLayer);\n#endif\n\nINSTANTIATE_CLASS(EltwiseLayer);\nREGISTER_LAYER_CLASS(Eltwise);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/eltwise_layer.cu",
    "content": "#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/eltwise_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void MaxForward(const int nthreads, const Dtype* bottom_data_a,\n    const Dtype* bottom_data_b, const int blob_idx, Dtype* top_data,\n    int* mask) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    Dtype maxval = -FLT_MAX;\n    int maxidx = -1;\n    if (bottom_data_a[index] > bottom_data_b[index]) {\n      // only update for very first bottom_data blob (blob_idx == 0)\n      if (blob_idx == 0) {\n        maxval = bottom_data_a[index];\n        top_data[index] = maxval;\n        maxidx = blob_idx;\n        mask[index] = maxidx;\n      }\n    } else {\n      maxval = bottom_data_b[index];\n      top_data[index] = maxval;\n      maxidx = blob_idx + 1;\n      mask[index] = maxidx;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  int* mask = NULL;\n  const int count = top[0]->count();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  switch (op_) {\n  case EltwiseParameter_EltwiseOp_PROD:\n    caffe_gpu_mul(count, bottom[0]->gpu_data(), bottom[1]->gpu_data(),\n        top_data);\n    for (int i = 2; i < bottom.size(); ++i) {\n      caffe_gpu_mul(count, top_data, bottom[i]->gpu_data(), top_data);\n    }\n    break;\n  case EltwiseParameter_EltwiseOp_SUM:\n    caffe_gpu_set(count, Dtype(0.), top_data);\n    // TODO(shelhamer) does cuBLAS optimize to sum for coeff = 1?\n    for (int i = 0; i < bottom.size(); ++i) {\n      caffe_gpu_axpy(count, coeffs_[i], bottom[i]->gpu_data(), top_data);\n    }\n    break;\n  case EltwiseParameter_EltwiseOp_MAX:\n    mask = max_idx_.mutable_gpu_data();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    MaxForward<Dtype> <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        
count, bottom[0]->gpu_data(), bottom[1]->gpu_data(), 0, top_data, mask);\n    for (int i = 2; i < bottom.size(); ++i) {\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      MaxForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n          count, top_data, bottom[i]->gpu_data(), i-1, top_data, mask);\n    }\n    break;\n  default:\n    LOG(FATAL) << \"Unknown elementwise operation.\";\n  }\n\n}\n\ntemplate <typename Dtype>\n__global__ void MaxBackward(const int nthreads, const Dtype* top_diff,\n    const int blob_idx, const int* mask, Dtype* bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    Dtype gradient = 0;\n    if (mask[index] == blob_idx) {\n      gradient += top_diff[index];\n    }\n    bottom_diff[index] = gradient;\n  }\n}\n\ntemplate <typename Dtype>\nvoid EltwiseLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const int* mask = NULL;\n  const int count = top[0]->count();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  for (int i = 0; i < bottom.size(); ++i) {\n    if (propagate_down[i]) {\n      const Dtype* bottom_data = bottom[i]->gpu_data();\n      Dtype* bottom_diff = bottom[i]->mutable_gpu_diff();\n      switch (op_) {\n      case EltwiseParameter_EltwiseOp_PROD:\n        if (stable_prod_grad_) {\n          bool initialized = false;\n          for (int j = 0; j < bottom.size(); ++j) {\n            if (i == j) { continue; }\n            if (!initialized) {\n              caffe_copy(count, bottom[j]->gpu_data(), bottom_diff);\n              initialized = true;\n            } else {\n              caffe_gpu_mul(count, bottom[j]->gpu_data(), bottom_diff,\n                            bottom_diff);\n            }\n          }\n        } else {\n          caffe_gpu_div(count, top_data, bottom_data, bottom_diff);\n        }\n        caffe_gpu_mul(count, bottom_diff, top_diff, bottom_diff);\n 
       break;\n      case EltwiseParameter_EltwiseOp_SUM:\n        if (coeffs_[i] == Dtype(1.)) {\n          caffe_copy(count, top_diff, bottom_diff);\n        } else {\n          caffe_gpu_scale(count, coeffs_[i], top_diff, bottom_diff);\n        }\n        break;\n      case EltwiseParameter_EltwiseOp_MAX:\n        mask = max_idx_.gpu_data();\n        MaxBackward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n            <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n            count, top_diff, i, mask, bottom_diff);\n        break;\n      default:\n        LOG(FATAL) << \"Unknown elementwise operation.\";\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(EltwiseLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/elu_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/elu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ELULayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  Dtype alpha = this->layer_param_.elu_param().alpha();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = std::max(bottom_data[i], Dtype(0))\n        + alpha * (exp(std::min(bottom_data[i], Dtype(0))) - Dtype(1));\n  }\n}\n\ntemplate <typename Dtype>\nvoid ELULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    const Dtype* top_data = top[0]->cpu_data();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    Dtype alpha = this->layer_param_.elu_param().alpha();\n    for (int i = 0; i < count; ++i) {\n      bottom_diff[i] = top_diff[i] * ((bottom_data[i] > 0)\n          + (alpha + top_data[i]) * (bottom_data[i] <= 0));\n    }\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(ELULayer);\n#endif\n\nINSTANTIATE_CLASS(ELULayer);\nREGISTER_LAYER_CLASS(ELU);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/elu_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/elu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void ELUForward(const int n, const Dtype* in, Dtype* out,\n    Dtype alpha) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = in[index] > 0 ? in[index] :\n        alpha * (exp(in[index]) - 1);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ELULayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  Dtype alpha = this->layer_param_.elu_param().alpha();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  ELUForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, top_data, alpha);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\n__global__ void ELUBackward(const int n, const Dtype* in_diff,\n    const Dtype* out_data, const Dtype* in_data,\n    Dtype* out_diff, Dtype alpha) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out_diff[index] = in_data[index] > 0 ? 
in_diff[index] :\n        in_diff[index] * (out_data[index] + alpha);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ELULayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const Dtype* top_data = top[0]->gpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    Dtype alpha = this->layer_param_.elu_param().alpha();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    ELUBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, top_data, bottom_data, bottom_diff, alpha);\n    CUDA_POST_KERNEL_CHECK;\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(ELULayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/embed_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/embed_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  N_ = this->layer_param_.embed_param().num_output();\n  CHECK_GT(N_, 0) << \"EmbedLayer num_output must be positive.\";\n  K_ = this->layer_param_.embed_param().input_dim();\n  CHECK_GT(K_, 0) << \"EmbedLayer input_dim must be positive.\";\n  bias_term_ = this->layer_param_.embed_param().bias_term();\n  // Check if we need to set up the weights\n  if (this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else {\n    if (bias_term_) {\n      this->blobs_.resize(2);\n    } else {\n      this->blobs_.resize(1);\n    }\n    // Initialize the weights --\n    // transposed from InnerProductLayer for spatial locality.\n    vector<int> weight_shape(2);\n    weight_shape[0] = K_;\n    weight_shape[1] = N_;\n    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));\n    // fill the weights\n    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(\n        this->layer_param_.embed_param().weight_filler()));\n    weight_filler->Fill(this->blobs_[0].get());\n    // If necessary, initialize and fill the bias term\n    if (bias_term_) {\n      vector<int> bias_shape(1, N_);\n      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));\n      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(\n          this->layer_param_.embed_param().bias_filler()));\n      bias_filler->Fill(this->blobs_[1].get());\n    }\n  }  // parameter initialization\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n}\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // Figure out the dimensions\n  M_ = bottom[0]->count();\n  vector<int> 
top_shape = bottom[0]->shape();\n  top_shape.push_back(N_);\n  top[0]->Reshape(top_shape);\n  // Set up the bias multiplier\n  if (bias_term_) {\n    vector<int> bias_shape(1, M_);\n    bias_multiplier_.Reshape(bias_shape);\n    caffe_set(M_, Dtype(1), bias_multiplier_.mutable_cpu_data());\n  }\n}\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  int index;\n  for (int n = 0; n < M_; ++n) {\n    index = static_cast<int>(bottom_data[n]);\n    DCHECK_GE(index, 0);\n    DCHECK_LT(index, K_);\n    DCHECK_EQ(static_cast<Dtype>(index), bottom_data[n]) << \"non-integer input\";\n    caffe_copy(N_, weight + index * N_, top_data + n * N_);\n  }\n  if (bias_term_) {\n    const Dtype* bias = this->blobs_[1]->cpu_data();\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, N_, 1, Dtype(1),\n        bias_multiplier_.cpu_data(), bias, Dtype(1), top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  CHECK(!propagate_down[0]) << \"Can't backpropagate to EmbedLayer input.\";\n  if (this->param_propagate_down_[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    // Gradient with respect to weight\n    Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();\n    int index;\n    for (int n = 0; n < M_; ++n) {\n      index = static_cast<int>(bottom_data[n]);\n      DCHECK_GE(index, 0);\n      DCHECK_LT(index, K_);\n      DCHECK_EQ(static_cast<Dtype>(index), bottom_data[n])\n          << \"non-integer input\";\n      caffe_axpy(N_, Dtype(1), top_diff + n * N_, weight_diff + index * N_);\n    }\n  }\n  if (bias_term_ && 
this->param_propagate_down_[1]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();\n    caffe_cpu_gemv<Dtype>(CblasTrans, M_, N_, Dtype(1), top_diff,\n        bias_multiplier_.cpu_data(), Dtype(1), bias_diff);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(EmbedLayer);\n#endif\n\nINSTANTIATE_CLASS(EmbedLayer);\nREGISTER_LAYER_CLASS(Embed);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/embed_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/embed_layer.hpp\"\n#include \"caffe/util/gpu_util.cuh\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void EmbedForward(const int nthreads, const Dtype* bottom_data,\n    const Dtype* weight, const int M, const int N, const int K,\n    Dtype* top_data) {\n  CUDA_KERNEL_LOOP(top_index, nthreads) {\n    const int n = top_index / N;\n    const int d = top_index % N;\n    const int index = static_cast<int>(bottom_data[n]);\n    const int weight_index = index * N + d;\n    top_data[top_index] = weight[weight_index];\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void EmbedBackward(const int nthreads, const Dtype* bottom_data,\n    const Dtype* top_diff, const int M, const int N, const int K,\n    Dtype* weight_diff);\n\ntemplate <typename Dtype>\n__global__ void EmbedBackward(const int nthreads, const Dtype* bottom_data,\n    const Dtype* top_diff, const int M, const int N, const int K,\n    Dtype* weight_diff) {\n  CUDA_KERNEL_LOOP(top_index, nthreads) {\n    const int n = top_index / N;\n    const int d = top_index % N;\n    const int index = static_cast<int>(bottom_data[n]);\n    const int weight_index = index * N + d;\n    caffe_gpu_atomic_add(top_diff[top_index], weight_diff + weight_index);\n  }\n}\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  const int count = top[0]->count();\n  EmbedForward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, weight, M_, N_, K_, top_data);\n  if (bias_term_) {\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, N_, 1, Dtype(1),\n       
 bias_multiplier_.gpu_data(),\n        this->blobs_[1]->gpu_data(), Dtype(1), top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid EmbedLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  CHECK(!propagate_down[0]) << \"Can't backpropagate to EmbedLayer input.\";\n  if (this->param_propagate_down_[0]) {\n    const int top_count = top[0]->count();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    Dtype* weight_diff = this->blobs_[0]->mutable_gpu_diff();\n    EmbedBackward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(top_count), CAFFE_CUDA_NUM_THREADS>>>(\n        top_count, bottom_data, top_diff, M_, N_, K_, weight_diff);\n  }\n  if (bias_term_ && this->param_propagate_down_[1]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bias_diff = this->blobs_[1]->mutable_gpu_diff();\n    caffe_gpu_gemv<Dtype>(CblasTrans, M_, N_, Dtype(1), top_diff,\n        bias_multiplier_.gpu_data(), Dtype(1), bias_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(EmbedLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/euclidean_loss_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/euclidean_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid EuclideanLossLayer<Dtype>::Reshape(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  CHECK_EQ(bottom[0]->count(1), bottom[1]->count(1))\n      << \"Inputs must have the same dimension.\";\n  diff_.ReshapeLike(*bottom[0]);\n}\n\ntemplate <typename Dtype>\nvoid EuclideanLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  int count = bottom[0]->count();\n  caffe_sub(\n      count,\n      bottom[0]->cpu_data(),\n      bottom[1]->cpu_data(),\n      diff_.mutable_cpu_data());\n  Dtype dot = caffe_cpu_dot(count, diff_.cpu_data(), diff_.cpu_data());\n  Dtype loss = dot / bottom[0]->num() / Dtype(2);\n  top[0]->mutable_cpu_data()[0] = loss;\n}\n\ntemplate <typename Dtype>\nvoid EuclideanLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  for (int i = 0; i < 2; ++i) {\n    if (propagate_down[i]) {\n      const Dtype sign = (i == 0) ? 1 : -1;\n      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();\n      caffe_cpu_axpby(\n          bottom[i]->count(),              // count\n          alpha,                              // alpha\n          diff_.cpu_data(),                   // a\n          Dtype(0),                           // beta\n          bottom[i]->mutable_cpu_diff());  // b\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(EuclideanLossLayer);\n#endif\n\nINSTANTIATE_CLASS(EuclideanLossLayer);\nREGISTER_LAYER_CLASS(EuclideanLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/euclidean_loss_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/euclidean_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid EuclideanLossLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  int count = bottom[0]->count();\n  caffe_gpu_sub(\n      count,\n      bottom[0]->gpu_data(),\n      bottom[1]->gpu_data(),\n      diff_.mutable_gpu_data());\n  Dtype dot;\n  caffe_gpu_dot(count, diff_.gpu_data(), diff_.gpu_data(), &dot);\n  Dtype loss = dot / bottom[0]->num() / Dtype(2);\n  top[0]->mutable_cpu_data()[0] = loss;\n}\n\ntemplate <typename Dtype>\nvoid EuclideanLossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  for (int i = 0; i < 2; ++i) {\n    if (propagate_down[i]) {\n      const Dtype sign = (i == 0) ? 1 : -1;\n      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();\n      caffe_gpu_axpby(\n          bottom[i]->count(),              // count\n          alpha,                              // alpha\n          diff_.gpu_data(),                   // a\n          Dtype(0),                           // beta\n          bottom[i]->mutable_gpu_diff());  // b\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(EuclideanLossLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/exp_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/exp_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ExpLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  const Dtype base = this->layer_param_.exp_param().base();\n  if (base != Dtype(-1)) {\n    CHECK_GT(base, 0) << \"base must be strictly positive.\";\n  }\n  // If base == -1, interpret the base as e and set log_base = 1 exactly.\n  // Otherwise, calculate its log explicitly.\n  const Dtype log_base = (base == Dtype(-1)) ? Dtype(1) : log(base);\n  CHECK(!isnan(log_base))\n      << \"NaN result: log(base) = log(\" << base << \") = \" << log_base;\n  CHECK(!isinf(log_base))\n      << \"Inf result: log(base) = log(\" << base << \") = \" << log_base;\n  const Dtype input_scale = this->layer_param_.exp_param().scale();\n  const Dtype input_shift = this->layer_param_.exp_param().shift();\n  inner_scale_ = log_base * input_scale;\n  outer_scale_ = (input_shift == Dtype(0)) ? 
Dtype(1) : pow(base, input_shift);\n}\n\ntemplate <typename Dtype>\nvoid ExpLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const int count = bottom[0]->count();\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  if (inner_scale_ == Dtype(1)) {\n    caffe_exp(count, bottom_data, top_data);\n  } else {\n    caffe_cpu_scale(count, inner_scale_, bottom_data, top_data);\n    caffe_exp(count, top_data, top_data);\n  }\n  if (outer_scale_ != Dtype(1)) {\n    caffe_scal(count, outer_scale_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ExpLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  const int count = bottom[0]->count();\n  const Dtype* top_data = top[0]->cpu_data();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  caffe_mul(count, top_data, top_diff, bottom_diff);\n  if (inner_scale_ != Dtype(1)) {\n    caffe_scal(count, inner_scale_, bottom_diff);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ExpLayer);\n#endif\n\nINSTANTIATE_CLASS(ExpLayer);\nREGISTER_LAYER_CLASS(Exp);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/exp_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/exp_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ExpLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const int count = bottom[0]->count();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  if (inner_scale_ == Dtype(1)) {\n    caffe_gpu_exp(count, bottom_data, top_data);\n  } else {\n    caffe_gpu_scale(count, inner_scale_, bottom_data, top_data);\n    caffe_gpu_exp(count, top_data, top_data);\n  }\n  if (outer_scale_ != Dtype(1)) {\n    caffe_gpu_scal(count, outer_scale_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ExpLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  const int count = bottom[0]->count();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  caffe_gpu_mul(count, top_data, top_diff, bottom_diff);\n  if (inner_scale_ != Dtype(1)) {\n    caffe_gpu_scal(count, inner_scale_, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ExpLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/filter_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/filter_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_EQ(top.size(), bottom.size() - 1);\n  first_reshape_ = true;\n}\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // bottom[0...k-1] are the blobs to filter\n  // bottom[last] is the \"selector_blob\"\n  int selector_index = bottom.size() - 1;\n  for (int i = 1; i < bottom[selector_index]->num_axes(); ++i) {\n    CHECK_EQ(bottom[selector_index]->shape(i), 1)\n        << \"Selector blob dimensions must be singletons (1), except the first\";\n  }\n  for (int i = 0; i < bottom.size() - 1; ++i) {\n    CHECK_EQ(bottom[selector_index]->shape(0), bottom[i]->shape(0)) <<\n        \"Each bottom should have the same 0th dimension as the selector blob\";\n  }\n\n  const Dtype* bottom_data_selector = bottom[selector_index]->cpu_data();\n  indices_to_forward_.clear();\n\n  // look for non-zero elements in bottom[0]. 
Items of each bottom that\n  // have the same index as the items in bottom[0] with value == non-zero\n  // will be forwarded\n  for (int item_id = 0; item_id < bottom[selector_index]->shape(0); ++item_id) {\n    // we don't need an offset because item size == 1\n    const Dtype* tmp_data_selector = bottom_data_selector + item_id;\n    if (*tmp_data_selector) {\n      indices_to_forward_.push_back(item_id);\n    }\n  }\n  // only filtered items will be forwarded\n  int new_tops_num = indices_to_forward_.size();\n  // init\n  if (first_reshape_) {\n    new_tops_num = bottom[0]->shape(0);\n    first_reshape_ = false;\n  }\n  for (int t = 0; t < top.size(); ++t) {\n    int num_axes = bottom[t]->num_axes();\n    vector<int> shape_top(num_axes);\n    shape_top[0] = new_tops_num;\n    for (int ts = 1; ts < num_axes; ++ts)\n      shape_top[ts] = bottom[t]->shape(ts);\n    top[t]->Reshape(shape_top);\n  }\n}\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  int new_tops_num = indices_to_forward_.size();\n  // forward all filtered items for all bottoms but the Selector (bottom[last])\n  for (int t = 0; t < top.size(); ++t) {\n    const Dtype* bottom_data = bottom[t]->cpu_data();\n    Dtype* top_data = top[t]->mutable_cpu_data();\n    int dim = bottom[t]->count() / bottom[t]->shape(0);\n    for (int n = 0; n < new_tops_num; ++n) {\n      int data_offset_top = n * dim;\n      int data_offset_bottom = indices_to_forward_[n] * bottom[t]->count(1);\n      caffe_copy(dim, bottom_data + data_offset_bottom,\n          top_data + data_offset_top);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[bottom.size() - 1]) {\n    LOG(FATAL) << this->type()\n               << \"Layer cannot backpropagate to filter index 
inputs\";\n  }\n  for (int i = 0; i < top.size(); i++) {\n    // bottom[last] is the selector and never needs backpropagation\n    // so we can iterate over top vector because top.size() == bottom.size() -1\n    if (propagate_down[i]) {\n      const int dim = top[i]->count() / top[i]->shape(0);\n      int next_to_backward_offset = 0;\n      int batch_offset = 0;\n      int data_offset_bottom = 0;\n      int data_offset_top = 0;\n      for (int n = 0; n < bottom[i]->shape(0); n++) {\n        data_offset_bottom = n * dim;\n        if (next_to_backward_offset >= indices_to_forward_.size()) {\n          // we already visited all items that were been forwarded, so\n          // just set to zero remaining ones\n          caffe_set(dim, Dtype(0),\n              bottom[i]->mutable_cpu_diff() + data_offset_bottom);\n        } else {\n          batch_offset = indices_to_forward_[next_to_backward_offset];\n          if (n != batch_offset) {  // this data was not been forwarded\n            caffe_set(dim, Dtype(0),\n                bottom[i]->mutable_cpu_diff() + data_offset_bottom);\n          } else {  // this data was been forwarded\n            data_offset_top = next_to_backward_offset * dim;\n            next_to_backward_offset++;  // point to next forwarded item index\n            caffe_copy(dim, top[i]->mutable_cpu_diff() + data_offset_top,\n                bottom[i]->mutable_cpu_diff() + data_offset_bottom);\n          }\n        }\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(FilterLayer);\n#endif\n\nINSTANTIATE_CLASS(FilterLayer);\nREGISTER_LAYER_CLASS(Filter);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/filter_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/filter_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  int new_tops_num = indices_to_forward_.size();\n  // forward all filtered items for all bottoms but the Selector (bottom[last])\n  for (int t = 0; t < top.size(); ++t) {\n    const Dtype* bottom_data = bottom[t]->gpu_data();\n    Dtype* top_data = top[t]->mutable_gpu_data();\n    int dim = bottom[t]->count() / bottom[t]->shape(0);\n    for (int n = 0; n < new_tops_num; ++n) {\n      int data_offset_top = n * dim;\n      int data_offset_bottom = indices_to_forward_[n] * dim;\n      caffe_copy(dim, bottom_data + data_offset_bottom,\n          top_data + data_offset_top);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid FilterLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[bottom.size() - 1]) {\n    LOG(FATAL) << this->type()\n               << \"Layer cannot backpropagate to filter index inputs\";\n  }\n  for (int i = 0; i < top.size(); ++i) {\n    // bottom[last] is the selector and never needs backpropagation\n    // so we can iterate over top vector because top.size() == bottom.size() -1\n    if (propagate_down[i]) {\n      const int dim = top[i]->count() / top[i]->shape(0);\n      int next_to_backward_offset = 0;\n      int batch_offset = 0;\n      int data_offset_bottom = 0;\n      int data_offset_top = 0;\n      for (int n = 0; n < bottom[i]->shape(0); ++n) {\n        if (next_to_backward_offset >= indices_to_forward_.size()) {\n          // we already visited all items that were been forwarded, so\n          // just set to zero remaining ones\n          data_offset_bottom = n * dim;\n          caffe_gpu_set(dim, Dtype(0),\n              
bottom[i]->mutable_gpu_diff() + data_offset_bottom);\n        } else {\n          batch_offset = indices_to_forward_[next_to_backward_offset];\n          data_offset_bottom = n * dim;\n          if (n != batch_offset) {  // this data was not been forwarded\n            caffe_gpu_set(dim, Dtype(0),\n                bottom[i]->mutable_gpu_diff() + data_offset_bottom);\n          } else {  // this data was been forwarded\n            data_offset_top = next_to_backward_offset * dim;\n            ++next_to_backward_offset;  // point to next forwarded item index\n            caffe_copy(dim, top[i]->mutable_gpu_diff() + data_offset_top,\n                bottom[i]->mutable_gpu_diff() + data_offset_bottom);\n          }\n        }\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(FilterLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/flatten_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/flatten_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid FlattenLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_NE(top[0], bottom[0]) << this->type() << \" Layer does not \"\n      \"allow in-place computation.\";\n  const int start_axis = bottom[0]->CanonicalAxisIndex(\n      this->layer_param_.flatten_param().axis());\n  const int end_axis = bottom[0]->CanonicalAxisIndex(\n      this->layer_param_.flatten_param().end_axis());\n  vector<int> top_shape;\n  for (int i = 0; i < start_axis; ++i) {\n    top_shape.push_back(bottom[0]->shape(i));\n  }\n  const int flattened_dim = bottom[0]->count(start_axis, end_axis + 1);\n  top_shape.push_back(flattened_dim);\n  for (int i = end_axis + 1; i < bottom[0]->num_axes(); ++i) {\n    top_shape.push_back(bottom[0]->shape(i));\n  }\n  top[0]->Reshape(top_shape);\n  CHECK_EQ(top[0]->count(), bottom[0]->count());\n}\n\ntemplate <typename Dtype>\nvoid FlattenLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  top[0]->ShareData(*bottom[0]);\n}\n\ntemplate <typename Dtype>\nvoid FlattenLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  bottom[0]->ShareDiff(*top[0]);\n}\n\nINSTANTIATE_CLASS(FlattenLayer);\nREGISTER_LAYER_CLASS(Flatten);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/hdf5_data_layer.cpp",
    "content": "/*\nTODO:\n- load file in a separate thread (\"prefetch\")\n- can be smarter about the memcpy call instead of doing it row-by-row\n  :: use util functions caffe_copy, and Blob->offset()\n  :: don't forget to update hdf5_daa_layer.cu accordingly\n- add ability to shuffle filenames if flag is set\n*/\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n#include <vector>\n\n#include \"hdf5.h\"\n#include \"hdf5_hl.h\"\n#include \"stdint.h\"\n\n#include \"caffe/layers/hdf5_data_layer.hpp\"\n#include \"caffe/util/hdf5.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nHDF5DataLayer<Dtype>::~HDF5DataLayer<Dtype>() { }\n\n// Load data and label from HDF5 filename into the class property blobs.\ntemplate <typename Dtype>\nvoid HDF5DataLayer<Dtype>::LoadHDF5FileData(const char* filename) {\n  DLOG(INFO) << \"Loading HDF5 file: \" << filename;\n  hid_t file_id = H5Fopen(filename, H5F_ACC_RDONLY, H5P_DEFAULT);\n  if (file_id < 0) {\n    LOG(FATAL) << \"Failed opening HDF5 file: \" << filename;\n  }\n\n  int top_size = this->layer_param_.top_size();\n  hdf_blobs_.resize(top_size);\n\n  const int MIN_DATA_DIM = 1;\n  const int MAX_DATA_DIM = INT_MAX;\n\n  for (int i = 0; i < top_size; ++i) {\n    hdf_blobs_[i] = shared_ptr<Blob<Dtype> >(new Blob<Dtype>());\n    hdf5_load_nd_dataset(file_id, this->layer_param_.top(i).c_str(),\n        MIN_DATA_DIM, MAX_DATA_DIM, hdf_blobs_[i].get());\n  }\n\n  herr_t status = H5Fclose(file_id);\n  CHECK_GE(status, 0) << \"Failed to close HDF5 file: \" << filename;\n\n  // MinTopBlobs==1 guarantees at least one top blob\n  CHECK_GE(hdf_blobs_[0]->num_axes(), 1) << \"Input must have at least 1 axis.\";\n  const int num = hdf_blobs_[0]->shape(0);\n  for (int i = 1; i < top_size; ++i) {\n    CHECK_EQ(hdf_blobs_[i]->shape(0), num);\n  }\n  // Default to identity permutation.\n  data_permutation_.clear();\n  data_permutation_.resize(hdf_blobs_[0]->shape(0));\n  for (int i = 0; i < hdf_blobs_[0]->shape(0); i++)\n 
   data_permutation_[i] = i;\n\n  // Shuffle if needed.\n  if (this->layer_param_.hdf5_data_param().shuffle()) {\n    std::random_shuffle(data_permutation_.begin(), data_permutation_.end());\n    DLOG(INFO) << \"Successully loaded \" << hdf_blobs_[0]->shape(0)\n               << \" rows (shuffled)\";\n  } else {\n    DLOG(INFO) << \"Successully loaded \" << hdf_blobs_[0]->shape(0) << \" rows\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid HDF5DataLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // Refuse transformation parameters since HDF5 is totally generic.\n  CHECK(!this->layer_param_.has_transform_param()) <<\n      this->type() << \" does not transform data.\";\n  // Read the source to parse the filenames.\n  const string& source = this->layer_param_.hdf5_data_param().source();\n  LOG(INFO) << \"Loading list of HDF5 filenames from: \" << source;\n  hdf_filenames_.clear();\n  std::ifstream source_file(source.c_str());\n  if (source_file.is_open()) {\n    std::string line;\n    while (source_file >> line) {\n      hdf_filenames_.push_back(line);\n    }\n  } else {\n    LOG(FATAL) << \"Failed to open source file: \" << source;\n  }\n  source_file.close();\n  num_files_ = hdf_filenames_.size();\n  current_file_ = 0;\n  LOG(INFO) << \"Number of HDF5 files: \" << num_files_;\n  CHECK_GE(num_files_, 1) << \"Must have at least 1 HDF5 filename listed in \"\n    << source;\n\n  file_permutation_.clear();\n  file_permutation_.resize(num_files_);\n  // Default to identity permutation.\n  for (int i = 0; i < num_files_; i++) {\n    file_permutation_[i] = i;\n  }\n\n  // Shuffle if needed.\n  if (this->layer_param_.hdf5_data_param().shuffle()) {\n    std::random_shuffle(file_permutation_.begin(), file_permutation_.end());\n  }\n\n  // Load the first HDF5 file and initialize the line counter.\n  LoadHDF5FileData(hdf_filenames_[file_permutation_[current_file_]].c_str());\n  current_row_ = 0;\n\n  // Reshape blobs.\n  
const int batch_size = this->layer_param_.hdf5_data_param().batch_size();\n  const int top_size = this->layer_param_.top_size();\n  vector<int> top_shape;\n  for (int i = 0; i < top_size; ++i) {\n    top_shape.resize(hdf_blobs_[i]->num_axes());\n    top_shape[0] = batch_size;\n    for (int j = 1; j < top_shape.size(); ++j) {\n      top_shape[j] = hdf_blobs_[i]->shape(j);\n    }\n    top[i]->Reshape(top_shape);\n  }\n}\n\ntemplate <typename Dtype>\nvoid HDF5DataLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int batch_size = this->layer_param_.hdf5_data_param().batch_size();\n  for (int i = 0; i < batch_size; ++i, ++current_row_) {\n    if (current_row_ == hdf_blobs_[0]->shape(0)) {\n      if (num_files_ > 1) {\n        ++current_file_;\n        if (current_file_ == num_files_) {\n          current_file_ = 0;\n          if (this->layer_param_.hdf5_data_param().shuffle()) {\n            std::random_shuffle(file_permutation_.begin(),\n                                file_permutation_.end());\n          }\n          DLOG(INFO) << \"Looping around to first file.\";\n        }\n        LoadHDF5FileData(\n            hdf_filenames_[file_permutation_[current_file_]].c_str());\n      }\n      current_row_ = 0;\n      if (this->layer_param_.hdf5_data_param().shuffle())\n        std::random_shuffle(data_permutation_.begin(), data_permutation_.end());\n    }\n    for (int j = 0; j < this->layer_param_.top_size(); ++j) {\n      int data_dim = top[j]->count() / top[j]->shape(0);\n      caffe_copy(data_dim,\n          &hdf_blobs_[j]->cpu_data()[data_permutation_[current_row_]\n            * data_dim], &top[j]->mutable_cpu_data()[i * data_dim]);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU_FORWARD(HDF5DataLayer, Forward);\n#endif\n\nINSTANTIATE_CLASS(HDF5DataLayer);\nREGISTER_LAYER_CLASS(HDF5Data);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/hdf5_data_layer.cu",
    "content": "/*\nTODO:\n- only load parts of the file, in accordance with a prototxt param \"max_mem\"\n*/\n\n#include <stdint.h>\n#include <vector>\n\n#include \"hdf5.h\"\n#include \"hdf5_hl.h\"\n\n#include \"caffe/layers/hdf5_data_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid HDF5DataLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int batch_size = this->layer_param_.hdf5_data_param().batch_size();\n  for (int i = 0; i < batch_size; ++i, ++current_row_) {\n    if (current_row_ == hdf_blobs_[0]->shape(0)) {\n      if (num_files_ > 1) {\n        current_file_ += 1;\n        if (current_file_ == num_files_) {\n          current_file_ = 0;\n          if (this->layer_param_.hdf5_data_param().shuffle()) {\n            std::random_shuffle(file_permutation_.begin(),\n                                file_permutation_.end());\n          }\n          DLOG(INFO) << \"Looping around to first file.\";\n        }\n        LoadHDF5FileData(\n            hdf_filenames_[file_permutation_[current_file_]].c_str());\n      }\n      current_row_ = 0;\n      if (this->layer_param_.hdf5_data_param().shuffle())\n        std::random_shuffle(data_permutation_.begin(), data_permutation_.end());\n    }\n    for (int j = 0; j < this->layer_param_.top_size(); ++j) {\n      int data_dim = top[j]->count() / top[j]->shape(0);\n      caffe_copy(data_dim,\n          &hdf_blobs_[j]->cpu_data()[data_permutation_[current_row_]\n            * data_dim], &top[j]->mutable_gpu_data()[i * data_dim]);\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(HDF5DataLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/hdf5_output_layer.cpp",
    "content": "#include <vector>\n\n#include \"hdf5.h\"\n#include \"hdf5_hl.h\"\n\n#include \"caffe/layers/hdf5_output_layer.hpp\"\n#include \"caffe/util/hdf5.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  file_name_ = this->layer_param_.hdf5_output_param().file_name();\n  file_id_ = H5Fcreate(file_name_.c_str(), H5F_ACC_TRUNC, H5P_DEFAULT,\n                       H5P_DEFAULT);\n  CHECK_GE(file_id_, 0) << \"Failed to open HDF5 file\" << file_name_;\n  file_opened_ = true;\n}\n\ntemplate <typename Dtype>\nHDF5OutputLayer<Dtype>::~HDF5OutputLayer<Dtype>() {\n  if (file_opened_) {\n    herr_t status = H5Fclose(file_id_);\n    CHECK_GE(status, 0) << \"Failed to close HDF5 file \" << file_name_;\n  }\n}\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::SaveBlobs() {\n  // TODO: no limit on the number of blobs\n  LOG(INFO) << \"Saving HDF5 file \" << file_name_;\n  CHECK_EQ(data_blob_.num(), label_blob_.num()) <<\n      \"data blob and label blob must have the same batch size\";\n  hdf5_save_nd_dataset(file_id_, HDF5_DATA_DATASET_NAME, data_blob_);\n  hdf5_save_nd_dataset(file_id_, HDF5_DATA_LABEL_NAME, label_blob_);\n  LOG(INFO) << \"Successfully saved \" << data_blob_.num() << \" rows\";\n}\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_GE(bottom.size(), 2);\n  CHECK_EQ(bottom[0]->num(), bottom[1]->num());\n  data_blob_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n                     bottom[0]->height(), bottom[0]->width());\n  label_blob_.Reshape(bottom[1]->num(), bottom[1]->channels(),\n                     bottom[1]->height(), bottom[1]->width());\n  const int data_datum_dim = bottom[0]->count() / bottom[0]->num();\n  const int label_datum_dim = bottom[1]->count() / bottom[1]->num();\n\n  for (int i = 0; i < 
bottom[0]->num(); ++i) {\n    caffe_copy(data_datum_dim, &bottom[0]->cpu_data()[i * data_datum_dim],\n        &data_blob_.mutable_cpu_data()[i * data_datum_dim]);\n    caffe_copy(label_datum_dim, &bottom[1]->cpu_data()[i * label_datum_dim],\n        &label_blob_.mutable_cpu_data()[i * label_datum_dim]);\n  }\n  SaveBlobs();\n}\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  return;\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(HDF5OutputLayer);\n#endif\n\nINSTANTIATE_CLASS(HDF5OutputLayer);\nREGISTER_LAYER_CLASS(HDF5Output);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/hdf5_output_layer.cu",
    "content": "#include <vector>\n\n#include \"hdf5.h\"\n#include \"hdf5_hl.h\"\n\n#include \"caffe/layers/hdf5_output_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_GE(bottom.size(), 2);\n  CHECK_EQ(bottom[0]->num(), bottom[1]->num());\n  data_blob_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n                     bottom[0]->height(), bottom[0]->width());\n  label_blob_.Reshape(bottom[1]->num(), bottom[1]->channels(),\n                     bottom[1]->height(), bottom[1]->width());\n  const int data_datum_dim = bottom[0]->count() / bottom[0]->num();\n  const int label_datum_dim = bottom[1]->count() / bottom[1]->num();\n\n  for (int i = 0; i < bottom[0]->num(); ++i) {\n    caffe_copy(data_datum_dim, &bottom[0]->gpu_data()[i * data_datum_dim],\n        &data_blob_.mutable_cpu_data()[i * data_datum_dim]);\n    caffe_copy(label_datum_dim, &bottom[1]->gpu_data()[i * label_datum_dim],\n        &label_blob_.mutable_cpu_data()[i * label_datum_dim]);\n  }\n  SaveBlobs();\n}\n\ntemplate <typename Dtype>\nvoid HDF5OutputLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  return;\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(HDF5OutputLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/hinge_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/hinge_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid HingeLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  const Dtype* label = bottom[1]->cpu_data();\n  int num = bottom[0]->num();\n  int count = bottom[0]->count();\n  int dim = count / num;\n\n  caffe_copy(count, bottom_data, bottom_diff);\n  for (int i = 0; i < num; ++i) {\n    bottom_diff[i * dim + static_cast<int>(label[i])] *= -1;\n  }\n  for (int i = 0; i < num; ++i) {\n    for (int j = 0; j < dim; ++j) {\n      bottom_diff[i * dim + j] = std::max(\n        Dtype(0), 1 + bottom_diff[i * dim + j]);\n    }\n  }\n  Dtype* loss = top[0]->mutable_cpu_data();\n  switch (this->layer_param_.hinge_loss_param().norm()) {\n  case HingeLossParameter_Norm_L1:\n    loss[0] = caffe_cpu_asum(count, bottom_diff) / num;\n    break;\n  case HingeLossParameter_Norm_L2:\n    loss[0] = caffe_cpu_dot(count, bottom_diff, bottom_diff) / num;\n    break;\n  default:\n    LOG(FATAL) << \"Unknown Norm\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid HingeLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const Dtype* label = bottom[1]->cpu_data();\n    int num = bottom[0]->num();\n    int count = bottom[0]->count();\n    int dim = count / num;\n\n    for (int i = 0; i < num; ++i) {\n      bottom_diff[i * dim + static_cast<int>(label[i])] *= -1;\n    }\n\n    const Dtype loss_weight = top[0]->cpu_diff()[0];\n    
switch (this->layer_param_.hinge_loss_param().norm()) {\n    case HingeLossParameter_Norm_L1:\n      caffe_cpu_sign(count, bottom_diff, bottom_diff);\n      caffe_scal(count, loss_weight / num, bottom_diff);\n      break;\n    case HingeLossParameter_Norm_L2:\n      caffe_scal(count, loss_weight * 2 / num, bottom_diff);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown Norm\";\n    }\n  }\n}\n\nINSTANTIATE_CLASS(HingeLossLayer);\nREGISTER_LAYER_CLASS(HingeLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/im2col_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/im2col_layer.hpp\"\n#include \"caffe/util/im2col.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  ConvolutionParameter conv_param = this->layer_param_.convolution_param();\n  force_nd_im2col_ = conv_param.force_nd_im2col();\n  const int input_num_dims = bottom[0]->shape().size();\n  channel_axis_ = bottom[0]->CanonicalAxisIndex(conv_param.axis());\n  const int first_spatial_dim = channel_axis_ + 1;\n  num_spatial_axes_ = input_num_dims - first_spatial_dim;\n  CHECK_GE(num_spatial_axes_, 1);\n  vector<int> dim_blob_shape(1, num_spatial_axes_);\n  // Setup filter kernel dimensions (kernel_shape_).\n  kernel_shape_.Reshape(dim_blob_shape);\n  int* kernel_shape_data = kernel_shape_.mutable_cpu_data();\n  if (conv_param.has_kernel_h() || conv_param.has_kernel_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"kernel_h & kernel_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.kernel_size_size())\n        << \"Either kernel_size or kernel_h/w should be specified; not both.\";\n    kernel_shape_data[0] = conv_param.kernel_h();\n    kernel_shape_data[1] = conv_param.kernel_w();\n  } else {\n    const int num_kernel_dims = conv_param.kernel_size_size();\n    CHECK(num_kernel_dims == 1 || num_kernel_dims == num_spatial_axes_)\n        << \"kernel_size must be specified once, or once per spatial dimension \"\n        << \"(kernel_size specified \" << num_kernel_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims);\";\n      for (int i = 0; i < num_spatial_axes_; ++i) {\n        kernel_shape_data[i] =\n            conv_param.kernel_size((num_kernel_dims == 1) ? 
0 : i);\n      }\n  }\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    CHECK_GT(kernel_shape_data[i], 0) << \"Filter dimensions must be nonzero.\";\n  }\n  // Setup stride dimensions (stride_).\n  stride_.Reshape(dim_blob_shape);\n  int* stride_data = stride_.mutable_cpu_data();\n  if (conv_param.has_stride_h() || conv_param.has_stride_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"stride_h & stride_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.stride_size())\n        << \"Either stride or stride_h/w should be specified; not both.\";\n    stride_data[0] = conv_param.stride_h();\n    stride_data[1] = conv_param.stride_w();\n  } else {\n    const int num_stride_dims = conv_param.stride_size();\n    CHECK(num_stride_dims == 0 || num_stride_dims == 1 ||\n          num_stride_dims == num_spatial_axes_)\n        << \"stride must be specified once, or once per spatial dimension \"\n        << \"(stride specified \" << num_stride_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims);\";\n    const int kDefaultStride = 1;\n    for (int i = 0; i < num_spatial_axes_; ++i) {\n      stride_data[i] = (num_stride_dims == 0) ? kDefaultStride :\n          conv_param.stride((num_stride_dims == 1) ? 
0 : i);\n      CHECK_GT(stride_data[i], 0) << \"Stride dimensions must be nonzero.\";\n    }\n  }\n  // Setup pad dimensions (pad_).\n  pad_.Reshape(dim_blob_shape);\n  int* pad_data = pad_.mutable_cpu_data();\n  if (conv_param.has_pad_h() || conv_param.has_pad_w()) {\n    CHECK_EQ(num_spatial_axes_, 2)\n        << \"pad_h & pad_w can only be used for 2D convolution.\";\n    CHECK_EQ(0, conv_param.pad_size())\n        << \"Either pad or pad_h/w should be specified; not both.\";\n    pad_data[0] = conv_param.pad_h();\n    pad_data[1] = conv_param.pad_w();\n  } else {\n    const int num_pad_dims = conv_param.pad_size();\n    CHECK(num_pad_dims == 0 || num_pad_dims == 1 ||\n          num_pad_dims == num_spatial_axes_)\n        << \"pad must be specified once, or once per spatial dimension \"\n        << \"(pad specified \" << num_pad_dims << \" times; \"\n        << num_spatial_axes_ << \" spatial dims);\";\n    const int kDefaultPad = 0;\n    for (int i = 0; i < num_spatial_axes_; ++i) {\n      pad_data[i] = (num_pad_dims == 0) ? kDefaultPad :\n          conv_param.pad((num_pad_dims == 1) ? 0 : i);\n    }\n  }\n  // Setup dilation dimensions (dilation_).\n  dilation_.Reshape(dim_blob_shape);\n  int* dilation_data = dilation_.mutable_cpu_data();\n  const int num_dilation_dims = conv_param.dilation_size();\n  CHECK(num_dilation_dims == 0 || num_dilation_dims == 1 ||\n        num_dilation_dims == num_spatial_axes_)\n      << \"dilation must be specified once, or once per spatial dimension \"\n      << \"(dilation specified \" << num_dilation_dims << \" times; \"\n      << num_spatial_axes_ << \" spatial dims).\";\n  const int kDefaultDilation = 1;\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    dilation_data[i] = (num_dilation_dims == 0) ? kDefaultDilation :\n                       conv_param.dilation((num_dilation_dims == 1) ? 
0 : i);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  vector<int> top_shape = bottom[0]->shape();\n  const int* kernel_shape_data = kernel_shape_.cpu_data();\n  const int* stride_data = stride_.cpu_data();\n  const int* pad_data = pad_.cpu_data();\n  const int* dilation_data = dilation_.cpu_data();\n  for (int i = 0; i < num_spatial_axes_; ++i) {\n    top_shape[channel_axis_] *= kernel_shape_data[i];\n    const int input_dim = bottom[0]->shape(channel_axis_ + i + 1);\n    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;\n    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)\n        / stride_data[i] + 1;\n    top_shape[channel_axis_ + i + 1] = output_dim;\n  }\n  top[0]->Reshape(top_shape);\n  num_ = bottom[0]->count(0, channel_axis_);\n  bottom_dim_ = bottom[0]->count(channel_axis_);\n  top_dim_ = top[0]->count(channel_axis_);\n\n  channels_ = bottom[0]->shape(channel_axis_);\n}\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  for (int n = 0; n < num_; ++n) {\n    DCHECK_EQ(bottom[0]->shape().size() - channel_axis_, num_spatial_axes_ + 1);\n    DCHECK_EQ(top[0]->shape().size() - channel_axis_, num_spatial_axes_ + 1);\n    DCHECK_EQ(kernel_shape_.count(), num_spatial_axes_);\n    DCHECK_EQ(pad_.count(), num_spatial_axes_);\n    DCHECK_EQ(stride_.count(), num_spatial_axes_);\n    DCHECK_EQ(dilation_.count(), num_spatial_axes_);\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      im2col_cpu(bottom_data + n * bottom_dim_, channels_,\n          bottom[0]->shape(channel_axis_ + 1),\n          bottom[0]->shape(channel_axis_ + 2),\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          
pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1],\n          top_data + n * top_dim_);\n    } else {\n      im2col_nd_cpu(bottom_data + n * bottom_dim_, num_spatial_axes_,\n          bottom[0]->shape().data() + channel_axis_,\n          top[0]->shape().data() + channel_axis_,\n          kernel_shape_.cpu_data(), pad_.cpu_data(), stride_.cpu_data(),\n          dilation_.cpu_data(), top_data + n * top_dim_);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  for (int n = 0; n < num_; ++n) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      col2im_cpu(top_diff + n * top_dim_, channels_,\n          bottom[0]->shape(channel_axis_ + 1),\n          bottom[0]->shape(channel_axis_ + 2),\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1],\n          bottom_diff + n * bottom_dim_);\n    } else {\n      col2im_nd_cpu(top_diff + n * top_dim_, num_spatial_axes_,\n          bottom[0]->shape().data() + channel_axis_,\n          top[0]->shape().data() + channel_axis_,\n          kernel_shape_.cpu_data(), pad_.cpu_data(), stride_.cpu_data(),\n          dilation_.cpu_data(), bottom_diff + n * bottom_dim_);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(Im2colLayer);\n#endif\n\nINSTANTIATE_CLASS(Im2colLayer);\nREGISTER_LAYER_CLASS(Im2col);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/im2col_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/im2col_layer.hpp\"\n#include \"caffe/util/im2col.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int num_kernels = channels_ * top[0]->count(channel_axis_ + 1);\n  for (int n = 0; n < num_; ++n) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      im2col_gpu(bottom_data + n * bottom_dim_, channels_,\n          bottom[0]->shape(channel_axis_ + 1),\n          bottom[0]->shape(channel_axis_ + 2),\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1],\n          top_data + n * top_dim_);\n    } else {\n      im2col_nd_gpu(bottom_data + n * bottom_dim_, num_spatial_axes_,\n          num_kernels, bottom[0]->gpu_shape() + channel_axis_,\n          top[0]->gpu_shape() + channel_axis_,\n          kernel_shape_.gpu_data(), pad_.gpu_data(), stride_.gpu_data(),\n          dilation_.gpu_data(), top_data + n * top_dim_);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Im2colLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  for (int n = 0; n < num_; ++n) {\n    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {\n      col2im_gpu(top_diff + n * top_dim_, channels_,\n          bottom[0]->shape(channel_axis_ + 1),\n          bottom[0]->shape(channel_axis_ + 2),\n          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],\n          pad_.cpu_data()[0], pad_.cpu_data()[1],\n          
stride_.cpu_data()[0], stride_.cpu_data()[1],\n          dilation_.cpu_data()[0], dilation_.cpu_data()[1],\n          bottom_diff + n * bottom_dim_);\n    } else {\n      col2im_nd_gpu(top_diff + n * top_dim_, num_spatial_axes_, bottom_dim_,\n          bottom[0]->gpu_shape() + channel_axis_,\n          top[0]->gpu_shape() + channel_axis_,\n          kernel_shape_.gpu_data(), pad_.gpu_data(), stride_.gpu_data(),\n          dilation_.gpu_data(), bottom_diff + n * bottom_dim_);\n    }\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(Im2colLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/image_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n\n#include <fstream>  // NOLINT(readability/streams)\n#include <iostream>  // NOLINT(readability/streams)\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/layers/image_data_layer.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include \"caffe/util/rng.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nImageDataLayer<Dtype>::~ImageDataLayer<Dtype>() {\n  this->StopInternalThread();\n}\n\ntemplate <typename Dtype>\nvoid ImageDataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int new_height = this->layer_param_.image_data_param().new_height();\n  const int new_width  = this->layer_param_.image_data_param().new_width();\n  const bool is_color  = this->layer_param_.image_data_param().is_color();\n  string root_folder = this->layer_param_.image_data_param().root_folder();\n\n  CHECK((new_height == 0 && new_width == 0) ||\n      (new_height > 0 && new_width > 0)) << \"Current implementation requires \"\n      \"new_height and new_width to be set at the same time.\";\n  // Read the file with filenames and labels\n  const string& source = this->layer_param_.image_data_param().source();\n  LOG(INFO) << \"Opening file \" << source;\n  std::ifstream infile(source.c_str());\n  string filename;\n  int label;\n  while (infile >> filename >> label) {\n    lines_.push_back(std::make_pair(filename, label));\n  }\n\n  if (this->layer_param_.image_data_param().shuffle()) {\n    // randomly shuffle data\n    LOG(INFO) << \"Shuffling data\";\n    const unsigned int prefetch_rng_seed = caffe_rng_rand();\n    prefetch_rng_.reset(new Caffe::RNG(prefetch_rng_seed));\n    ShuffleImages();\n  }\n  LOG(INFO) << \"A total of \" << lines_.size() << \" images.\";\n\n 
 lines_id_ = 0;\n  // Check if we would need to randomly skip a few data points\n  if (this->layer_param_.image_data_param().rand_skip()) {\n    unsigned int skip = caffe_rng_rand() %\n        this->layer_param_.image_data_param().rand_skip();\n    LOG(INFO) << \"Skipping first \" << skip << \" data points.\";\n    CHECK_GT(lines_.size(), skip) << \"Not enough points to skip\";\n    lines_id_ = skip;\n  }\n  // Read an image, and use it to initialize the top blob.\n  cv::Mat cv_img = ReadImageToCVMat(root_folder + lines_[lines_id_].first,\n                                    new_height, new_width, is_color);\n  CHECK(cv_img.data) << \"Could not load \" << lines_[lines_id_].first;\n  // Use data_transformer to infer the expected blob shape from a cv_image.\n  vector<int> top_shape = this->data_transformer_->InferBlobShape(cv_img);\n  this->transformed_data_.Reshape(top_shape);\n  // Reshape prefetch_data and top[0] according to the batch_size.\n  const int batch_size = this->layer_param_.image_data_param().batch_size();\n  CHECK_GT(batch_size, 0) << \"Positive batch size required\";\n  top_shape[0] = batch_size;\n  for (int i = 0; i < this->PREFETCH_COUNT; ++i) {\n    this->prefetch_[i].data_.Reshape(top_shape);\n  }\n  top[0]->Reshape(top_shape);\n\n  LOG(INFO) << \"output data size: \" << top[0]->num() << \",\"\n      << top[0]->channels() << \",\" << top[0]->height() << \",\"\n      << top[0]->width();\n  // label\n  vector<int> label_shape(1, batch_size);\n  top[1]->Reshape(label_shape);\n  for (int i = 0; i < this->PREFETCH_COUNT; ++i) {\n    this->prefetch_[i].label_.Reshape(label_shape);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ImageDataLayer<Dtype>::ShuffleImages() {\n  caffe::rng_t* prefetch_rng =\n      static_cast<caffe::rng_t*>(prefetch_rng_->generator());\n  shuffle(lines_.begin(), lines_.end(), prefetch_rng);\n}\n\n// This function is called on prefetch thread\ntemplate <typename Dtype>\nvoid ImageDataLayer<Dtype>::load_batch(Batch<Dtype>* batch) {\n 
 CPUTimer batch_timer;\n  batch_timer.Start();\n  double read_time = 0;\n  double trans_time = 0;\n  CPUTimer timer;\n  CHECK(batch->data_.count());\n  CHECK(this->transformed_data_.count());\n  ImageDataParameter image_data_param = this->layer_param_.image_data_param();\n  const int batch_size = image_data_param.batch_size();\n  const int new_height = image_data_param.new_height();\n  const int new_width = image_data_param.new_width();\n  const bool is_color = image_data_param.is_color();\n  string root_folder = image_data_param.root_folder();\n\n  // Reshape according to the first image of each batch;\n  // on single-image batches this allows for inputs of varying dimension.\n  cv::Mat cv_img = ReadImageToCVMat(root_folder + lines_[lines_id_].first,\n      new_height, new_width, is_color);\n  CHECK(cv_img.data) << \"Could not load \" << lines_[lines_id_].first;\n  // Use data_transformer to infer the expected blob shape from a cv_img.\n  vector<int> top_shape = this->data_transformer_->InferBlobShape(cv_img);\n  this->transformed_data_.Reshape(top_shape);\n  // Reshape batch according to the batch_size.\n  top_shape[0] = batch_size;\n  batch->data_.Reshape(top_shape);\n\n  Dtype* prefetch_data = batch->data_.mutable_cpu_data();\n  Dtype* prefetch_label = batch->label_.mutable_cpu_data();\n\n  // datum scales\n  const int lines_size = lines_.size();\n  for (int item_id = 0; item_id < batch_size; ++item_id) {\n    // get a blob\n    timer.Start();\n    CHECK_GT(lines_size, lines_id_);\n    cv::Mat cv_img = ReadImageToCVMat(root_folder + lines_[lines_id_].first,\n        new_height, new_width, is_color);\n    CHECK(cv_img.data) << \"Could not load \" << lines_[lines_id_].first;\n    read_time += timer.MicroSeconds();\n    timer.Start();\n    // Apply transformations (mirror, crop...) 
to the image\n    int offset = batch->data_.offset(item_id);\n    this->transformed_data_.set_cpu_data(prefetch_data + offset);\n    this->data_transformer_->Transform(cv_img, &(this->transformed_data_));\n    trans_time += timer.MicroSeconds();\n\n    prefetch_label[item_id] = lines_[lines_id_].second;\n    // go to the next iter\n    lines_id_++;\n    if (lines_id_ >= lines_size) {\n      // We have reached the end. Restart from the first.\n      DLOG(INFO) << \"Restarting data prefetching from start.\";\n      lines_id_ = 0;\n      if (this->layer_param_.image_data_param().shuffle()) {\n        ShuffleImages();\n      }\n    }\n  }\n  batch_timer.Stop();\n  DLOG(INFO) << \"Prefetch batch: \" << batch_timer.MilliSeconds() << \" ms.\";\n  DLOG(INFO) << \"     Read time: \" << read_time / 1000 << \" ms.\";\n  DLOG(INFO) << \"Transform time: \" << trans_time / 1000 << \" ms.\";\n}\n\nINSTANTIATE_CLASS(ImageDataLayer);\nREGISTER_LAYER_CLASS(ImageData);\n\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/infogain_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <cmath>\n#include <vector>\n\n#include \"caffe/layers/infogain_loss_layer.hpp\"\n#include \"caffe/util/io.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid InfogainLossLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::LayerSetUp(bottom, top);\n  if (bottom.size() < 3) {\n    CHECK(this->layer_param_.infogain_loss_param().has_source())\n        << \"Infogain matrix source must be specified.\";\n    BlobProto blob_proto;\n    ReadProtoFromBinaryFile(\n      this->layer_param_.infogain_loss_param().source(), &blob_proto);\n    infogain_.FromProto(blob_proto);\n  }\n}\n\ntemplate <typename Dtype>\nvoid InfogainLossLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  Blob<Dtype>* infogain = NULL;\n  if (bottom.size() < 3) {\n    infogain = &infogain_;\n  } else {\n    infogain = bottom[2];\n  }\n  CHECK_EQ(bottom[1]->channels(), 1);\n  CHECK_EQ(bottom[1]->height(), 1);\n  CHECK_EQ(bottom[1]->width(), 1);\n  const int num = bottom[0]->num();\n  const int dim = bottom[0]->count() / num;\n  CHECK_EQ(infogain->num(), 1);\n  CHECK_EQ(infogain->channels(), 1);\n  CHECK_EQ(infogain->height(), dim);\n  CHECK_EQ(infogain->width(), dim);\n}\n\n\ntemplate <typename Dtype>\nvoid InfogainLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* bottom_label = bottom[1]->cpu_data();\n  const Dtype* infogain_mat = NULL;\n  if (bottom.size() < 3) {\n    infogain_mat = infogain_.cpu_data();\n  } else {\n    infogain_mat = bottom[2]->cpu_data();\n  }\n  int num = bottom[0]->num();\n  int dim = bottom[0]->count() / bottom[0]->num();\n  Dtype loss = 0;\n  for (int i = 0; i < num; ++i) {\n    int label = static_cast<int>(bottom_label[i]);\n    for (int j = 
0; j < dim; ++j) {\n      Dtype prob = std::max(bottom_data[i * dim + j], Dtype(kLOG_THRESHOLD));\n      loss -= infogain_mat[label * dim + j] * log(prob);\n    }\n  }\n  top[0]->mutable_cpu_data()[0] = loss / num;\n}\n\ntemplate <typename Dtype>\nvoid InfogainLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down.size() > 2 && propagate_down[2]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to infogain inputs.\";\n  }\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    const Dtype* bottom_label = bottom[1]->cpu_data();\n    const Dtype* infogain_mat = NULL;\n    if (bottom.size() < 3) {\n      infogain_mat = infogain_.cpu_data();\n    } else {\n      infogain_mat = bottom[2]->cpu_data();\n    }\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    int num = bottom[0]->num();\n    int dim = bottom[0]->count() / bottom[0]->num();\n    const Dtype scale = - top[0]->cpu_diff()[0] / num;\n    for (int i = 0; i < num; ++i) {\n      const int label = static_cast<int>(bottom_label[i]);\n      for (int j = 0; j < dim; ++j) {\n        Dtype prob = std::max(bottom_data[i * dim + j], Dtype(kLOG_THRESHOLD));\n        bottom_diff[i * dim + j] = scale * infogain_mat[label * dim + j] / prob;\n      }\n    }\n  }\n}\n\nINSTANTIATE_CLASS(InfogainLossLayer);\nREGISTER_LAYER_CLASS(InfogainLoss);\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/inner_product_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/inner_product_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid InnerProductLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int num_output = this->layer_param_.inner_product_param().num_output();\n  bias_term_ = this->layer_param_.inner_product_param().bias_term();\n  N_ = num_output;\n  const int axis = bottom[0]->CanonicalAxisIndex(\n      this->layer_param_.inner_product_param().axis());\n  // Dimensions starting from \"axis\" are \"flattened\" into a single\n  // length K_ vector. For example, if bottom[0]'s shape is (N, C, H, W),\n  // and axis == 1, N inner products with dimension CHW are performed.\n  K_ = bottom[0]->count(axis);\n  // Check if we need to set up the weights\n  if (this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else {\n    if (bias_term_) {\n      this->blobs_.resize(2);\n    } else {\n      this->blobs_.resize(1);\n    }\n    // Initialize the weight\n    vector<int> weight_shape(2);\n    weight_shape[0] = N_;\n    weight_shape[1] = K_;\n    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));\n    // fill the weights\n    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(\n        this->layer_param_.inner_product_param().weight_filler()));\n    weight_filler->Fill(this->blobs_[0].get());\n    // If necessary, initialize and fill the bias term\n    if (bias_term_) {\n      vector<int> bias_shape(1, N_);\n      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));\n      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(\n          this->layer_param_.inner_product_param().bias_filler()));\n      bias_filler->Fill(this->blobs_[1].get());\n    }\n  }  // parameter initialization\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n}\n\ntemplate <typename 
Dtype>\nvoid InnerProductLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // Figure out the dimensions\n  const int axis = bottom[0]->CanonicalAxisIndex(\n      this->layer_param_.inner_product_param().axis());\n  const int new_K = bottom[0]->count(axis);\n  CHECK_EQ(K_, new_K)\n      << \"Input size incompatible with inner product parameters.\";\n  // The first \"axis\" dimensions are independent inner products; the total\n  // number of these is M_, the product over these dimensions.\n  M_ = bottom[0]->count(0, axis);\n  // The top shape will be the bottom shape with the flattened axes dropped,\n  // and replaced by a single axis with dimension num_output (N_).\n  vector<int> top_shape = bottom[0]->shape();\n  top_shape.resize(axis + 1);\n  top_shape[axis] = N_;\n  top[0]->Reshape(top_shape);\n  // Set up the bias multiplier\n  if (bias_term_) {\n    vector<int> bias_shape(1, M_);\n    bias_multiplier_.Reshape(bias_shape);\n    caffe_set(M_, Dtype(1), bias_multiplier_.mutable_cpu_data());\n  }\n}\n\ntemplate <typename Dtype>\nvoid InnerProductLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const Dtype* weight = this->blobs_[0]->cpu_data();\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans, M_, N_, K_, (Dtype)1.,\n      bottom_data, weight, (Dtype)0., top_data);\n  if (bias_term_) {\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, N_, 1, (Dtype)1.,\n        bias_multiplier_.cpu_data(),\n        this->blobs_[1]->cpu_data(), (Dtype)1., top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid InnerProductLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (this->param_propagate_down_[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    const Dtype* 
bottom_data = bottom[0]->cpu_data();\n    // Gradient with respect to weight\n    caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, N_, K_, M_, (Dtype)1.,\n        top_diff, bottom_data, (Dtype)1., this->blobs_[0]->mutable_cpu_diff());\n  }\n  if (bias_term_ && this->param_propagate_down_[1]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    // Gradient with respect to bias\n    caffe_cpu_gemv<Dtype>(CblasTrans, M_, N_, (Dtype)1., top_diff,\n        bias_multiplier_.cpu_data(), (Dtype)1.,\n        this->blobs_[1]->mutable_cpu_diff());\n  }\n  if (propagate_down[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    // Gradient with respect to bottom data\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, K_, N_, (Dtype)1.,\n        top_diff, this->blobs_[0]->cpu_data(), (Dtype)0.,\n        bottom[0]->mutable_cpu_diff());\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(InnerProductLayer);\n#endif\n\nINSTANTIATE_CLASS(InnerProductLayer);\nREGISTER_LAYER_CLASS(InnerProduct);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/inner_product_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/inner_product_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid InnerProductLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const Dtype* weight = this->blobs_[0]->gpu_data();\n  if (M_ == 1) {\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, N_, K_, (Dtype)1.,\n                         weight, bottom_data, (Dtype)0., top_data);\n    if (bias_term_)\n      caffe_gpu_axpy<Dtype>(N_, bias_multiplier_.cpu_data()[0],\n                            this->blobs_[1]->gpu_data(), top_data);\n  } else {\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans, M_, N_, K_, (Dtype)1.,\n                          bottom_data, weight, (Dtype)0., top_data);\n    if (bias_term_)\n      caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, N_, 1, (Dtype)1.,\n                            bias_multiplier_.gpu_data(),\n                            this->blobs_[1]->gpu_data(), (Dtype)1., top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid InnerProductLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (this->param_propagate_down_[0]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    // Gradient with respect to weight\n    caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, N_, K_, M_, (Dtype)1.,\n        top_diff, bottom_data, (Dtype)1., this->blobs_[0]->mutable_gpu_diff());\n  }\n  if (bias_term_ && this->param_propagate_down_[1]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    // Gradient with respect to bias\n    caffe_gpu_gemv<Dtype>(CblasTrans, M_, N_, (Dtype)1., top_diff,\n        
bias_multiplier_.gpu_data(), (Dtype)1.,\n        this->blobs_[1]->mutable_gpu_diff());\n  }\n  if (propagate_down[0]) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    // Gradient with respect to bottom data\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, K_, N_, (Dtype)1.,\n        top_diff, this->blobs_[0]->gpu_data(), (Dtype)0.,\n        bottom[0]->mutable_gpu_diff());\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(InnerProductLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/log_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/log_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid LogLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  const Dtype base = this->layer_param_.log_param().base();\n  if (base != Dtype(-1)) {\n    CHECK_GT(base, 0) << \"base must be strictly positive.\";\n  }\n  // If base == -1, interpret the base as e and set log_base = 1 exactly.\n  // Otherwise, calculate its log explicitly.\n  const Dtype log_base = (base == Dtype(-1)) ? Dtype(1) : log(base);\n  CHECK(!isnan(log_base))\n      << \"NaN result: log(base) = log(\" << base << \") = \" << log_base;\n  CHECK(!isinf(log_base))\n      << \"Inf result: log(base) = log(\" << base << \") = \" << log_base;\n  base_scale_ = Dtype(1) / log_base;\n  CHECK(!isnan(base_scale_))\n      << \"NaN result: 1/log(base) = 1/log(\" << base << \") = \" << base_scale_;\n  CHECK(!isinf(base_scale_))\n      << \"Inf result: 1/log(base) = 1/log(\" << base << \") = \" << base_scale_;\n  input_scale_ = this->layer_param_.log_param().scale();\n  input_shift_ = this->layer_param_.log_param().shift();\n  backward_num_scale_ = input_scale_ / log_base;\n}\n\ntemplate <typename Dtype>\nvoid LogLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const int count = bottom[0]->count();\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  if (input_scale_ == Dtype(1) && input_shift_ == Dtype(0)) {\n    caffe_log(count, bottom_data, top_data);\n  } else {\n    caffe_copy(count, bottom_data, top_data);\n    if (input_scale_ != Dtype(1)) {\n      caffe_scal(count, input_scale_, top_data);\n    }\n    if (input_shift_ != Dtype(0)) {\n      caffe_add_scalar(count, input_shift_, top_data);\n    }\n    caffe_log(count, top_data, 
top_data);\n  }\n  if (base_scale_ != Dtype(1)) {\n    caffe_scal(count, base_scale_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid LogLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  const int count = bottom[0]->count();\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  caffe_copy(count, bottom_data, bottom_diff);\n  if (input_scale_ != Dtype(1)) {\n    caffe_scal(count, input_scale_, bottom_diff);\n  }\n  if (input_shift_ != Dtype(0)) {\n    caffe_add_scalar(count, input_shift_, bottom_diff);\n  }\n  caffe_powx(count, bottom_diff, Dtype(-1), bottom_diff);\n  if (backward_num_scale_ != Dtype(1)) {\n    caffe_scal(count, backward_num_scale_, bottom_diff);\n  }\n  caffe_mul(count, top_diff, bottom_diff, bottom_diff);\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(LogLayer);\n#endif\n\nINSTANTIATE_CLASS(LogLayer);\nREGISTER_LAYER_CLASS(Log);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/log_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/log_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid LogLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const int count = bottom[0]->count();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  if (input_scale_ == Dtype(1) && input_shift_ == Dtype(0)) {\n    caffe_gpu_log(count, bottom_data, top_data);\n  } else {\n    caffe_copy(count, bottom_data, top_data);\n    if (input_scale_ != Dtype(1)) {\n      caffe_gpu_scal(count, input_scale_, top_data);\n    }\n    if (input_shift_ != Dtype(0)) {\n      caffe_gpu_add_scalar(count, input_shift_, top_data);\n    }\n    caffe_gpu_log(count, top_data, top_data);\n  }\n  if (base_scale_ != Dtype(1)) {\n    caffe_gpu_scal(count, base_scale_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid LogLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n    const int count = bottom[0]->count();\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    caffe_copy(count, bottom_data, bottom_diff);\n    if (input_scale_ != Dtype(1)) {\n      caffe_gpu_scal(count, input_scale_, bottom_diff);\n    }\n    if (input_shift_ != Dtype(0)) {\n      caffe_gpu_add_scalar(count, input_shift_, bottom_diff);\n    }\n    caffe_gpu_powx(count, bottom_diff, Dtype(-1), bottom_diff);\n    if (backward_num_scale_ != Dtype(1)) {\n      caffe_gpu_scal(count, backward_num_scale_, bottom_diff);\n    }\n    caffe_gpu_mul(count, top_diff, bottom_diff, bottom_diff);\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(LogLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/loss_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/loss_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid LossLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  // LossLayers have a non-zero (1) loss by default.\n  if (this->layer_param_.loss_weight_size() == 0) {\n    this->layer_param_.add_loss_weight(Dtype(1));\n  }\n}\n\ntemplate <typename Dtype>\nvoid LossLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n // CHECK_EQ(bottom[0]->num(), bottom[1]->num())\n   //   << \"The data and label should have the same number.\";\n  vector<int> loss_shape(0);  // Loss layers output a scalar; 0 axes.\n  top[0]->Reshape(loss_shape);\n}\n\nINSTANTIATE_CLASS(LossLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/lrn_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/lrn_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  size_ = this->layer_param_.lrn_param().local_size();\n  CHECK_EQ(size_ % 2, 1) << \"LRN only supports odd values for local_size\";\n  pre_pad_ = (size_ - 1) / 2;\n  alpha_ = this->layer_param_.lrn_param().alpha();\n  beta_ = this->layer_param_.lrn_param().beta();\n  k_ = this->layer_param_.lrn_param().k();\n  if (this->layer_param_.lrn_param().norm_region() ==\n      LRNParameter_NormRegion_WITHIN_CHANNEL) {\n    // Set up split_layer_ to use inputs in the numerator and denominator.\n    split_top_vec_.clear();\n    split_top_vec_.push_back(&product_input_);\n    split_top_vec_.push_back(&square_input_);\n    LayerParameter split_param;\n    split_layer_.reset(new SplitLayer<Dtype>(split_param));\n    split_layer_->SetUp(bottom, split_top_vec_);\n    // Set up square_layer_ to square the inputs.\n    square_bottom_vec_.clear();\n    square_top_vec_.clear();\n    square_bottom_vec_.push_back(&square_input_);\n    square_top_vec_.push_back(&square_output_);\n    LayerParameter square_param;\n    square_param.mutable_power_param()->set_power(Dtype(2));\n    square_layer_.reset(new PowerLayer<Dtype>(square_param));\n    square_layer_->SetUp(square_bottom_vec_, square_top_vec_);\n    // Set up pool_layer_ to sum over square neighborhoods of the input.\n    pool_top_vec_.clear();\n    pool_top_vec_.push_back(&pool_output_);\n    LayerParameter pool_param;\n    pool_param.mutable_pooling_param()->set_pool(\n        PoolingParameter_PoolMethod_AVE);\n    pool_param.mutable_pooling_param()->set_pad(pre_pad_);\n    pool_param.mutable_pooling_param()->set_kernel_size(size_);\n    pool_layer_.reset(new PoolingLayer<Dtype>(pool_param));\n    pool_layer_->SetUp(square_top_vec_, pool_top_vec_);\n    
// Set up power_layer_ to compute (1 + alpha_/N^2 s)^-beta_, where s is\n    // the sum of a squared neighborhood (the output of pool_layer_).\n    power_top_vec_.clear();\n    power_top_vec_.push_back(&power_output_);\n    LayerParameter power_param;\n    power_param.mutable_power_param()->set_power(-beta_);\n    power_param.mutable_power_param()->set_scale(alpha_);\n    power_param.mutable_power_param()->set_shift(Dtype(1));\n    power_layer_.reset(new PowerLayer<Dtype>(power_param));\n    power_layer_->SetUp(pool_top_vec_, power_top_vec_);\n    // Set up a product_layer_ to compute outputs by multiplying inputs by the\n    // inverse denominator computed by the power layer.\n    product_bottom_vec_.clear();\n    product_bottom_vec_.push_back(&product_input_);\n    product_bottom_vec_.push_back(&power_output_);\n    LayerParameter product_param;\n    EltwiseParameter* eltwise_param = product_param.mutable_eltwise_param();\n    eltwise_param->set_operation(EltwiseParameter_EltwiseOp_PROD);\n    product_layer_.reset(new EltwiseLayer<Dtype>(product_param));\n    product_layer_->SetUp(product_bottom_vec_, top);\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_EQ(4, bottom[0]->num_axes()) << \"Input must have 4 axes, \"\n      << \"corresponding to (num, channels, height, width)\";\n  num_ = bottom[0]->num();\n  channels_ = bottom[0]->channels();\n  height_ = bottom[0]->height();\n  width_ = bottom[0]->width();\n  switch (this->layer_param_.lrn_param().norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    top[0]->Reshape(num_, channels_, height_, width_);\n    scale_.Reshape(num_, channels_, height_, width_);\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    split_layer_->Reshape(bottom, split_top_vec_);\n    square_layer_->Reshape(square_bottom_vec_, square_top_vec_);\n    pool_layer_->Reshape(square_top_vec_, pool_top_vec_);\n    
power_layer_->Reshape(pool_top_vec_, power_top_vec_);\n    product_layer_->Reshape(product_bottom_vec_, top);\n    break;\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  switch (this->layer_param_.lrn_param().norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    CrossChannelForward_cpu(bottom, top);\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    WithinChannelForward(bottom, top);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::CrossChannelForward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  Dtype* scale_data = scale_.mutable_cpu_data();\n  // start with the constant value\n  for (int i = 0; i < scale_.count(); ++i) {\n    scale_data[i] = k_;\n  }\n  Blob<Dtype> padded_square(1, channels_ + size_ - 1, height_, width_);\n  Dtype* padded_square_data = padded_square.mutable_cpu_data();\n  caffe_set(padded_square.count(), Dtype(0), padded_square_data);\n  Dtype alpha_over_size = alpha_ / size_;\n  // go through the images\n  for (int n = 0; n < num_; ++n) {\n    // compute the padded square\n    caffe_sqr(channels_ * height_ * width_,\n        bottom_data + bottom[0]->offset(n),\n        padded_square_data + padded_square.offset(0, pre_pad_));\n    // Create the first channel scale\n    for (int c = 0; c < size_; ++c) {\n      caffe_axpy<Dtype>(height_ * width_, alpha_over_size,\n          padded_square_data + padded_square.offset(0, c),\n          scale_data + scale_.offset(n, 0));\n    }\n    for (int c = 1; c < channels_; ++c) {\n      // copy previous scale\n      caffe_copy<Dtype>(height_ * width_,\n          scale_data + scale_.offset(n, c - 1),\n          scale_data + scale_.offset(n, c));\n      // 
add head\n      caffe_axpy<Dtype>(height_ * width_, alpha_over_size,\n          padded_square_data + padded_square.offset(0, c + size_ - 1),\n          scale_data + scale_.offset(n, c));\n      // subtract tail\n      caffe_axpy<Dtype>(height_ * width_, -alpha_over_size,\n          padded_square_data + padded_square.offset(0, c - 1),\n          scale_data + scale_.offset(n, c));\n    }\n  }\n\n  // In the end, compute output\n  caffe_powx<Dtype>(scale_.count(), scale_data, -beta_, top_data);\n  caffe_mul<Dtype>(scale_.count(), top_data, bottom_data, top_data);\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::WithinChannelForward(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  split_layer_->Forward(bottom, split_top_vec_);\n  square_layer_->Forward(square_bottom_vec_, square_top_vec_);\n  pool_layer_->Forward(square_top_vec_, pool_top_vec_);\n  power_layer_->Forward(pool_top_vec_, power_top_vec_);\n  product_layer_->Forward(product_bottom_vec_, top);\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  switch (this->layer_param_.lrn_param().norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    CrossChannelBackward_cpu(top, propagate_down, bottom);\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    WithinChannelBackward(top, propagate_down, bottom);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::CrossChannelBackward_cpu(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->cpu_diff();\n  const Dtype* top_data = top[0]->cpu_data();\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* scale_data = scale_.cpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n 
 Blob<Dtype> padded_ratio(1, channels_ + size_ - 1, height_, width_);\n  Blob<Dtype> accum_ratio(1, 1, height_, width_);\n  Dtype* padded_ratio_data = padded_ratio.mutable_cpu_data();\n  Dtype* accum_ratio_data = accum_ratio.mutable_cpu_data();\n  // We hack a little bit by using the diff() to store an additional result\n  Dtype* accum_ratio_times_bottom = accum_ratio.mutable_cpu_diff();\n  caffe_set(padded_ratio.count(), Dtype(0), padded_ratio_data);\n  Dtype cache_ratio_value = 2. * alpha_ * beta_ / size_;\n\n  caffe_powx<Dtype>(scale_.count(), scale_data, -beta_, bottom_diff);\n  caffe_mul<Dtype>(scale_.count(), top_diff, bottom_diff, bottom_diff);\n\n  // go through individual data\n  int inverse_pre_pad = size_ - (size_ + 1) / 2;\n  for (int n = 0; n < num_; ++n) {\n    int block_offset = scale_.offset(n);\n    // first, compute diff_i * y_i / s_i\n    caffe_mul<Dtype>(channels_ * height_ * width_,\n        top_diff + block_offset, top_data + block_offset,\n        padded_ratio_data + padded_ratio.offset(0, inverse_pre_pad));\n    caffe_div<Dtype>(channels_ * height_ * width_,\n        padded_ratio_data + padded_ratio.offset(0, inverse_pre_pad),\n        scale_data + block_offset,\n        padded_ratio_data + padded_ratio.offset(0, inverse_pre_pad));\n    // Now, compute the accumulated ratios and the bottom diff\n    caffe_set(accum_ratio.count(), Dtype(0), accum_ratio_data);\n    for (int c = 0; c < size_ - 1; ++c) {\n      caffe_axpy<Dtype>(height_ * width_, 1.,\n          padded_ratio_data + padded_ratio.offset(0, c), accum_ratio_data);\n    }\n    for (int c = 0; c < channels_; ++c) {\n      caffe_axpy<Dtype>(height_ * width_, 1.,\n          padded_ratio_data + padded_ratio.offset(0, c + size_ - 1),\n          accum_ratio_data);\n      // compute bottom diff\n      caffe_mul<Dtype>(height_ * width_,\n          bottom_data + top[0]->offset(n, c),\n          accum_ratio_data, accum_ratio_times_bottom);\n      caffe_axpy<Dtype>(height_ * width_, 
-cache_ratio_value,\n          accum_ratio_times_bottom, bottom_diff + top[0]->offset(n, c));\n      caffe_axpy<Dtype>(height_ * width_, -1.,\n          padded_ratio_data + padded_ratio.offset(0, c), accum_ratio_data);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::WithinChannelBackward(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    vector<bool> product_propagate_down(2, true);\n    product_layer_->Backward(top, product_propagate_down, product_bottom_vec_);\n    power_layer_->Backward(power_top_vec_, propagate_down, pool_top_vec_);\n    pool_layer_->Backward(pool_top_vec_, propagate_down, square_top_vec_);\n    square_layer_->Backward(square_top_vec_, propagate_down,\n                            square_bottom_vec_);\n    split_layer_->Backward(split_top_vec_, propagate_down, bottom);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(LRNLayer);\nSTUB_GPU_FORWARD(LRNLayer, CrossChannelForward);\nSTUB_GPU_BACKWARD(LRNLayer, CrossChannelBackward);\n#endif\n\nINSTANTIATE_CLASS(LRNLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/lrn_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/lrn_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void LRNFillScale(const int nthreads, const Dtype* const in,\n    const int num, const int channels, const int height,\n    const int width, const int size, const Dtype alpha_over_size,\n    const Dtype k, Dtype* const scale) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // find out the local offset\n    const int w = index % width;\n    const int h = (index / width) % height;\n    const int n = index / width / height;\n    const int offset = (n * channels * height + h) * width + w;\n    const int step = height * width;\n    const Dtype* const in_off = in + offset;\n    Dtype* const scale_off = scale + offset;\n    int head = 0;\n    const int pre_pad = (size - 1) / 2;\n    const int post_pad = size - pre_pad - 1;\n    Dtype accum_scale = 0;\n    // fill the scale at [n, :, h, w]\n    // accumulate values\n    while (head < post_pad && head < channels) {\n      accum_scale += in_off[head * step] * in_off[head * step];\n      ++head;\n    }\n    // both add and subtract\n    while (head < channels) {\n      accum_scale += in_off[head * step] * in_off[head * step];\n      if (head - size >= 0) {\n        accum_scale -= in_off[(head - size) * step]\n                       * in_off[(head - size) * step];\n      }\n      scale_off[(head - post_pad) * step] = k + accum_scale * alpha_over_size;\n      ++head;\n    }\n    // subtract only\n    while (head < channels + post_pad) {\n      if (head - size >= 0) {\n        accum_scale -= in_off[(head - size) * step]\n                       * in_off[(head - size) * step];\n      }\n      scale_off[(head - post_pad) * step] = k + accum_scale * alpha_over_size;\n      ++head;\n    }\n  }\n}\n\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  switch 
(this->layer_param_.lrn_param().norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    CrossChannelForward_gpu(bottom, top);\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    WithinChannelForward(bottom, top);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\n// TODO: check if it would be faster to just put it into the previous kernel.\ntemplate <typename Dtype>\n__global__ void LRNComputeOutput(const int nthreads, const Dtype* const in,\n    const Dtype* const scale, const Dtype negative_beta, Dtype* const out) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    out[index] = in[index] * pow(scale[index], negative_beta);\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::CrossChannelForward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  // First, compute scale\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  Dtype* scale_data = scale_.mutable_gpu_data();\n  // We will launch one kernel for each pixel location, and have the kernel\n  // go through all the channels.\n  int n_threads = num_ * height_ * width_;\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  LRNFillScale<<<CAFFE_GET_BLOCKS(n_threads), CAFFE_CUDA_NUM_THREADS>>>(\n      n_threads, bottom_data, num_, channels_, height_, width_, size_,\n      alpha_ / size_, k_, scale_data);\n  CUDA_POST_KERNEL_CHECK;\n  n_threads = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  LRNComputeOutput<<<CAFFE_GET_BLOCKS(n_threads), CAFFE_CUDA_NUM_THREADS>>>(\n      n_threads, bottom_data, scale_data, -beta_, top_data);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void LRNLayer<float>::CrossChannelForward_gpu(\n    const vector<Blob<float>*>& bottom, const vector<Blob<float>*>& top);\ntemplate void LRNLayer<double>::CrossChannelForward_gpu(\n    const vector<Blob<double>*>& bottom, const vector<Blob<double>*>& top);\n\n\ntemplate <typename 
Dtype>\nvoid LRNLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  switch (this->layer_param_.lrn_param().norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    CrossChannelBackward_gpu(top, propagate_down, bottom);\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    WithinChannelBackward(top, propagate_down, bottom);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void LRNComputeDiff(const int nthreads,\n    const Dtype* const bottom_data, const Dtype* const top_data,\n    const Dtype* const scale, const Dtype* const top_diff,\n    const int num, const int channels, const int height,\n    const int width, const int size, const Dtype negative_beta,\n    const Dtype cache_ratio, Dtype* const bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // find out the local offset\n    const int w = index % width;\n    const int h = (index / width) % height;\n    const int n = index / width / height;\n    const int offset = (n * channels * height + h) * width + w;\n    const int step = height * width;\n    const Dtype* const bottom_off = bottom_data + offset;\n    const Dtype* const top_off = top_data + offset;\n    const Dtype* const scale_off = scale + offset;\n    const Dtype* const top_diff_off = top_diff + offset;\n    Dtype* const bottom_diff_off = bottom_diff + offset;\n    int head = 0;\n    const int pre_pad = size - (size + 1) / 2;\n    const int post_pad = size - pre_pad - 1;\n    Dtype accum_ratio = 0;\n    // accumulate values\n    while (head < post_pad && head < channels) {\n      accum_ratio += top_diff_off[head * step] * top_off[head * step] /\n          scale_off[head * step];\n      ++head;\n    }\n    // both add and subtract\n    while (head < channels) {\n      accum_ratio += top_diff_off[head * step] * top_off[head * step] /\n          scale_off[head 
* step];\n      if (head - size >= 0) {\n        accum_ratio -= top_diff_off[(head - size) * step] *\n            top_off[(head - size) * step] / scale_off[(head - size) * step];\n      }\n      bottom_diff_off[(head - post_pad) * step] =\n          top_diff_off[(head - post_pad) * step]\n            * pow(scale_off[(head - post_pad) * step], negative_beta)\n          - cache_ratio * bottom_off[(head - post_pad) * step] * accum_ratio;\n      ++head;\n    }\n    // subtract only\n    while (head < channels + post_pad) {\n      if (head - size >= 0) {\n        accum_ratio -= top_diff_off[(head - size) * step] *\n            top_off[(head - size) * step] / scale_off[(head - size) * step];\n      }\n      bottom_diff_off[(head - post_pad) * step] =\n          top_diff_off[(head - post_pad) * step]\n            * pow(scale_off[(head - post_pad) * step], negative_beta)\n          - cache_ratio * bottom_off[(head - post_pad) * step] * accum_ratio;\n      ++head;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid LRNLayer<Dtype>::CrossChannelBackward_gpu(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  int n_threads = num_ * height_ * width_;\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  LRNComputeDiff<<<CAFFE_GET_BLOCKS(n_threads), CAFFE_CUDA_NUM_THREADS>>>(\n      n_threads, bottom[0]->gpu_data(), top[0]->gpu_data(),\n      scale_.gpu_data(), top[0]->gpu_diff(), num_, channels_, height_, width_,\n      size_, -beta_, Dtype(2. 
* alpha_ * beta_ / size_),\n      bottom[0]->mutable_gpu_diff());\n}\ntemplate void LRNLayer<float>::CrossChannelBackward_gpu(\n    const vector<Blob<float>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<float>*>& bottom);\ntemplate void LRNLayer<double>::CrossChannelBackward_gpu(\n    const vector<Blob<double>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<double>*>& bottom);\n\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(LRNLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/memory_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#endif  // USE_OPENCV\n\n#include <vector>\n\n#include \"caffe/layers/memory_data_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n     const vector<Blob<Dtype>*>& top) {\n  batch_size_ = this->layer_param_.memory_data_param().batch_size();\n  channels_ = this->layer_param_.memory_data_param().channels();\n  height_ = this->layer_param_.memory_data_param().height();\n  width_ = this->layer_param_.memory_data_param().width();\n  size_ = channels_ * height_ * width_;\n  CHECK_GT(batch_size_ * size_, 0) <<\n      \"batch_size, channels, height, and width must be specified and\"\n      \" positive in memory_data_param\";\n  vector<int> label_shape(1, batch_size_);\n  top[0]->Reshape(batch_size_, channels_, height_, width_);\n  top[1]->Reshape(label_shape);\n  added_data_.Reshape(batch_size_, channels_, height_, width_);\n  added_label_.Reshape(label_shape);\n  data_ = NULL;\n  labels_ = NULL;\n  added_data_.cpu_data();\n  added_label_.cpu_data();\n}\n\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::AddDatumVector(const vector<Datum>& datum_vector) {\n  CHECK(!has_new_data_) <<\n      \"Can't add data until current data has been consumed.\";\n  size_t num = datum_vector.size();\n  CHECK_GT(num, 0) << \"There is no datum to add.\";\n  CHECK_EQ(num % batch_size_, 0) <<\n      \"The added data must be a multiple of the batch size.\";\n  added_data_.Reshape(num, channels_, height_, width_);\n  added_label_.Reshape(num, 1, 1, 1);\n  // Apply data transformations (mirror, scale, crop...)\n  this->data_transformer_->Transform(datum_vector, &added_data_);\n  // Copy Labels\n  Dtype* top_label = added_label_.mutable_cpu_data();\n  for (int item_id = 0; item_id < num; ++item_id) {\n    top_label[item_id] = datum_vector[item_id].label();\n  }\n  // num_images == batch_size_\n  Dtype* top_data = 
added_data_.mutable_cpu_data();\n  Reset(top_data, top_label, num);\n  has_new_data_ = true;\n}\n\n#ifdef USE_OPENCV\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::AddMatVector(const vector<cv::Mat>& mat_vector,\n    const vector<int>& labels) {\n  size_t num = mat_vector.size();\n  CHECK(!has_new_data_) <<\n      \"Can't add mat until current data has been consumed.\";\n  CHECK_GT(num, 0) << \"There is no mat to add\";\n  CHECK_EQ(num % batch_size_, 0) <<\n      \"The added data must be a multiple of the batch size.\";\n  added_data_.Reshape(num, channels_, height_, width_);\n  added_label_.Reshape(num, 1, 1, 1);\n  // Apply data transformations (mirror, scale, crop...)\n  this->data_transformer_->Transform(mat_vector, &added_data_);\n  // Copy Labels\n  Dtype* top_label = added_label_.mutable_cpu_data();\n  for (int item_id = 0; item_id < num; ++item_id) {\n    top_label[item_id] = labels[item_id];\n  }\n  // num_images == batch_size_\n  Dtype* top_data = added_data_.mutable_cpu_data();\n  Reset(top_data, top_label, num);\n  has_new_data_ = true;\n}\n#endif  // USE_OPENCV\n\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::Reset(Dtype* data, Dtype* labels, int n) {\n  CHECK(data);\n  CHECK(labels);\n  CHECK_EQ(n % batch_size_, 0) << \"n must be a multiple of batch size\";\n  // Warn with transformation parameters since a memory array is meant to\n  // be generic and no transformations are done with Reset().\n  if (this->layer_param_.has_transform_param()) {\n    LOG(WARNING) << this->type() << \" does not transform array data on Reset()\";\n  }\n  data_ = data;\n  labels_ = labels;\n  n_ = n;\n  pos_ = 0;\n}\n\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::set_batch_size(int new_size) {\n  CHECK(!has_new_data_) <<\n      \"Can't change batch_size until current data has been consumed.\";\n  batch_size_ = new_size;\n  added_data_.Reshape(batch_size_, channels_, height_, width_);\n  added_label_.Reshape(batch_size_, 1, 1, 
1);\n}\n\ntemplate <typename Dtype>\nvoid MemoryDataLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK(data_) << \"MemoryDataLayer needs to be initialized by calling Reset\";\n  top[0]->Reshape(batch_size_, channels_, height_, width_);\n  top[1]->Reshape(batch_size_, 1, 1, 1);\n  top[0]->set_cpu_data(data_ + pos_ * size_);\n  top[1]->set_cpu_data(labels_ + pos_);\n  pos_ = (pos_ + batch_size_) % n_;\n  if (pos_ == 0)\n    has_new_data_ = false;\n}\n\nINSTANTIATE_CLASS(MemoryDataLayer);\nREGISTER_LAYER_CLASS(MemoryData);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/multinomial_logistic_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <cmath>\n#include <vector>\n\n#include \"caffe/layers/multinomial_logistic_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid MultinomialLogisticLossLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  CHECK_EQ(bottom[1]->channels(), 1);\n  CHECK_EQ(bottom[1]->height(), 1);\n  CHECK_EQ(bottom[1]->width(), 1);\n}\n\ntemplate <typename Dtype>\nvoid MultinomialLogisticLossLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* bottom_label = bottom[1]->cpu_data();\n  int num = bottom[0]->num();\n  int dim = bottom[0]->count() / bottom[0]->num();\n  Dtype loss = 0;\n  for (int i = 0; i < num; ++i) {\n    int label = static_cast<int>(bottom_label[i]);\n    Dtype prob = std::max(\n        bottom_data[i * dim + label], Dtype(kLOG_THRESHOLD));\n    loss -= log(prob);\n  }\n  top[0]->mutable_cpu_data()[0] = loss / num;\n}\n\ntemplate <typename Dtype>\nvoid MultinomialLogisticLossLayer<Dtype>::Backward_cpu(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    const Dtype* bottom_label = bottom[1]->cpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    int num = bottom[0]->num();\n    int dim = bottom[0]->count() / bottom[0]->num();\n    caffe_set(bottom[0]->count(), Dtype(0), bottom_diff);\n    const Dtype scale = - top[0]->cpu_diff()[0] / num;\n    for (int i = 0; i < num; ++i) {\n      int label = static_cast<int>(bottom_label[i]);\n      Dtype prob = 
std::max(\n          bottom_data[i * dim + label], Dtype(kLOG_THRESHOLD));\n      bottom_diff[i * dim + label] = scale / prob;\n    }\n  }\n}\n\nINSTANTIATE_CLASS(MultinomialLogisticLossLayer);\nREGISTER_LAYER_CLASS(MultinomialLogisticLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/mvn_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/mvn_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid MVNLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  top[0]->Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n  mean_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      1, 1);\n  variance_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      1, 1);\n  temp_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n  if ( this->layer_param_.mvn_param().across_channels() ) {\n    sum_multiplier_.Reshape(1, bottom[0]->channels(), bottom[0]->height(),\n                            bottom[0]->width());\n  } else {\n    sum_multiplier_.Reshape(1, 1, bottom[0]->height(), bottom[0]->width());\n  }\n  Dtype* multiplier_data = sum_multiplier_.mutable_cpu_data();\n  caffe_set(sum_multiplier_.count(), Dtype(1), multiplier_data);\n  eps_ = this->layer_param_.mvn_param().eps();\n}\n\ntemplate <typename Dtype>\nvoid MVNLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  int num;\n  if (this->layer_param_.mvn_param().across_channels())\n    num = bottom[0]->num();\n  else\n    num = bottom[0]->num() * bottom[0]->channels();\n\n  int dim = bottom[0]->count() / num;\n\n  // subtract mean\n  caffe_cpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. 
/ dim, bottom_data,\n      sum_multiplier_.cpu_data(), 0., mean_.mutable_cpu_data());  // EX\n  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, -1.,\n      mean_.cpu_data(), sum_multiplier_.cpu_data(), 0.,\n      temp_.mutable_cpu_data());\n  caffe_add(temp_.count(), bottom_data, temp_.cpu_data(), top_data);  // X-EX\n\n  if (this->layer_param_.mvn_param().normalize_variance()) {\n    // compute variance using var(X) = E((X-EX)^2)\n    caffe_powx(bottom[0]->count(), top_data, Dtype(2),\n        temp_.mutable_cpu_data());  // (X-EX)^2\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. / dim, temp_.cpu_data(),\n        sum_multiplier_.cpu_data(), 0.,\n        variance_.mutable_cpu_data());  // E((X-EX)^2)\n\n    // normalize variance\n    caffe_powx(variance_.count(), variance_.cpu_data(), Dtype(0.5),\n          variance_.mutable_cpu_data());\n\n    caffe_add_scalar(variance_.count(), eps_, variance_.mutable_cpu_data());\n\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n          variance_.cpu_data(), sum_multiplier_.cpu_data(), 0.,\n          temp_.mutable_cpu_data());\n\n    caffe_div(temp_.count(), top_data, temp_.cpu_data(), top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid MVNLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->cpu_diff();\n  const Dtype* top_data = top[0]->cpu_data();\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n\n  int num;\n  if (this->layer_param_.mvn_param().across_channels())\n    num = bottom[0]->num();\n  else\n    num = bottom[0]->num() * bottom[0]->channels();\n\n  int dim = bottom[0]->count() / num;\n\n  if (this->layer_param_.mvn_param().normalize_variance()) {\n    caffe_mul(temp_.count(), top_data, top_diff, bottom_diff);\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, num, dim, 1., bottom_diff,\n          
sum_multiplier_.cpu_data(), 0., mean_.mutable_cpu_data());\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n          mean_.cpu_data(), sum_multiplier_.cpu_data(), 0.,\n          bottom_diff);\n    caffe_mul(temp_.count(), top_data, bottom_diff, bottom_diff);\n\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, num, dim, 1., top_diff,\n            sum_multiplier_.cpu_data(), 0., mean_.mutable_cpu_data());\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n            mean_.cpu_data(), sum_multiplier_.cpu_data(), 1.,\n            bottom_diff);\n\n    caffe_cpu_axpby(temp_.count(), Dtype(1), top_diff, Dtype(-1. / dim),\n        bottom_diff);\n\n    // put the squares of bottom into temp_\n    caffe_powx(temp_.count(), bottom_data, Dtype(2),\n        temp_.mutable_cpu_data());\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n        variance_.cpu_data(), sum_multiplier_.cpu_data(), 0.,\n        temp_.mutable_cpu_data());\n\n    caffe_div(temp_.count(), bottom_diff, temp_.cpu_data(), bottom_diff);\n  } else {\n    caffe_cpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. / dim, top_diff,\n      sum_multiplier_.cpu_data(), 0., mean_.mutable_cpu_data());\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, -1.,\n      mean_.cpu_data(), sum_multiplier_.cpu_data(), 0.,\n      temp_.mutable_cpu_data());\n    caffe_add(temp_.count(), top_diff, temp_.cpu_data(), bottom_diff);\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(MVNLayer);\n#endif\n\nINSTANTIATE_CLASS(MVNLayer);\nREGISTER_LAYER_CLASS(MVN);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/mvn_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/mvn_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid MVNLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int num;\n  if (this->layer_param_.mvn_param().across_channels())\n    num = bottom[0]->num();\n  else\n    num = bottom[0]->num() * bottom[0]->channels();\n\n  int dim = bottom[0]->count() / num;\n\n  // subtract mean\n  caffe_gpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. / dim, bottom_data,\n      sum_multiplier_.gpu_data(), 0., mean_.mutable_gpu_data());  // EX\n  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, -1.,\n      mean_.gpu_data(), sum_multiplier_.gpu_data(), 0.,\n      temp_.mutable_gpu_data());\n  caffe_gpu_add(temp_.count(), bottom_data, temp_.gpu_data(),\n      top_data);  // X-EX\n\n  if (this->layer_param_.mvn_param().normalize_variance()) {\n    // compute variance using var(X) = E((X-EX)^2)\n    caffe_gpu_powx(bottom[0]->count(), top_data, Dtype(2),\n        temp_.mutable_gpu_data());  // (X-EX)^2\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. 
/ dim, temp_.gpu_data(),\n        sum_multiplier_.gpu_data(), 0.,\n        variance_.mutable_gpu_data());  // E((X-EX)^2)\n\n    // normalize variance\n    caffe_gpu_powx(variance_.count(), variance_.gpu_data(), Dtype(0.5),\n          variance_.mutable_gpu_data());\n\n    caffe_gpu_add_scalar(variance_.count(), eps_, variance_.mutable_gpu_data());\n\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n          variance_.gpu_data(), sum_multiplier_.gpu_data(), 0.,\n          temp_.mutable_gpu_data());\n\n    caffe_gpu_div(temp_.count(), top_data, temp_.gpu_data(), top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid MVNLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* top_data = top[0]->gpu_data();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n\n  int num;\n  if (this->layer_param_.mvn_param().across_channels())\n    num = bottom[0]->num();\n  else\n    num = bottom[0]->num() * bottom[0]->channels();\n\n  int dim = bottom[0]->count() / num;\n\n  if (this->layer_param_.mvn_param().normalize_variance()) {\n    caffe_gpu_mul(temp_.count(), top_data, top_diff, bottom_diff);\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, num, dim, 1., bottom_diff,\n          sum_multiplier_.gpu_data(), 0., mean_.mutable_gpu_data());\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n          mean_.gpu_data(), sum_multiplier_.gpu_data(), 0.,\n          bottom_diff);\n    caffe_gpu_mul(temp_.count(), top_data, bottom_diff, bottom_diff);\n\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, num, dim, 1., top_diff,\n            sum_multiplier_.gpu_data(), 0., mean_.mutable_gpu_data());\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n            mean_.gpu_data(), sum_multiplier_.gpu_data(), 1.,\n            bottom_diff);\n\n 
   caffe_gpu_axpby(temp_.count(), Dtype(1), top_diff, Dtype(-1. / dim),\n        bottom_diff);\n\n    // put the squares of bottom into temp_\n    caffe_gpu_powx(temp_.count(), bottom_data, Dtype(2),\n        temp_.mutable_gpu_data());\n\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, 1.,\n        variance_.gpu_data(), sum_multiplier_.gpu_data(), 0.,\n        temp_.mutable_gpu_data());\n\n    caffe_gpu_div(temp_.count(), bottom_diff, temp_.gpu_data(), bottom_diff);\n  } else {\n    caffe_gpu_gemv<Dtype>(CblasNoTrans, num, dim, 1. / dim, top_diff,\n            sum_multiplier_.gpu_data(), 0., mean_.mutable_gpu_data());\n    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num, dim, 1, -1.,\n            mean_.gpu_data(), sum_multiplier_.gpu_data(), 0.,\n            temp_.mutable_gpu_data());\n    caffe_gpu_add(temp_.count(), top_diff, temp_.gpu_data(), bottom_diff);\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(MVNLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/neuron_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/neuron_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid NeuronLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  top[0]->ReshapeLike(*bottom[0]);\n}\n\nINSTANTIATE_CLASS(NeuronLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/pooling_layer.cpp",
    "content": "#include <algorithm>\n#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\nusing std::min;\nusing std::max;\n\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  PoolingParameter pool_param = this->layer_param_.pooling_param();\n  if (pool_param.global_pooling()) {\n    CHECK(!(pool_param.has_kernel_size() ||\n      pool_param.has_kernel_h() || pool_param.has_kernel_w()))\n      << \"With Global_pooling: true Filter size cannot specified\";\n  } else {\n    CHECK(!pool_param.has_kernel_size() !=\n      !(pool_param.has_kernel_h() && pool_param.has_kernel_w()))\n      << \"Filter size is kernel_size OR kernel_h and kernel_w; not both\";\n    CHECK(pool_param.has_kernel_size() ||\n      (pool_param.has_kernel_h() && pool_param.has_kernel_w()))\n      << \"For non-square filters both kernel_h and kernel_w are required.\";\n  }\n  CHECK((!pool_param.has_pad() && pool_param.has_pad_h()\n      && pool_param.has_pad_w())\n      || (!pool_param.has_pad_h() && !pool_param.has_pad_w()))\n      << \"pad is pad OR pad_h and pad_w are required.\";\n  CHECK((!pool_param.has_stride() && pool_param.has_stride_h()\n      && pool_param.has_stride_w())\n      || (!pool_param.has_stride_h() && !pool_param.has_stride_w()))\n      << \"Stride is stride OR stride_h and stride_w are required.\";\n  global_pooling_ = pool_param.global_pooling();\n  if (global_pooling_) {\n    kernel_h_ = bottom[0]->height();\n    kernel_w_ = bottom[0]->width();\n  } else {\n    if (pool_param.has_kernel_size()) {\n      kernel_h_ = kernel_w_ = pool_param.kernel_size();\n    } else {\n      kernel_h_ = pool_param.kernel_h();\n      kernel_w_ = pool_param.kernel_w();\n    }\n  }\n  CHECK_GT(kernel_h_, 0) << \"Filter dimensions cannot be zero.\";\n  CHECK_GT(kernel_w_, 0) << \"Filter dimensions cannot be 
zero.\";\n  if (!pool_param.has_pad_h()) {\n    pad_h_ = pad_w_ = pool_param.pad();\n  } else {\n    pad_h_ = pool_param.pad_h();\n    pad_w_ = pool_param.pad_w();\n  }\n  if (!pool_param.has_stride_h()) {\n    stride_h_ = stride_w_ = pool_param.stride();\n  } else {\n    stride_h_ = pool_param.stride_h();\n    stride_w_ = pool_param.stride_w();\n  }\n  if (global_pooling_) {\n    CHECK(pad_h_ == 0 && pad_w_ == 0 && stride_h_ == 1 && stride_w_ == 1)\n      << \"With Global_pooling: true; only pad = 0 and stride = 1\";\n  }\n  if (pad_h_ != 0 || pad_w_ != 0) {\n    CHECK(this->layer_param_.pooling_param().pool()\n        == PoolingParameter_PoolMethod_AVE\n        || this->layer_param_.pooling_param().pool()\n        == PoolingParameter_PoolMethod_MAX)\n        << \"Padding implemented only for average and max pooling.\";\n    CHECK_LT(pad_h_, kernel_h_);\n    CHECK_LT(pad_w_, kernel_w_);\n  }\n}\n\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_EQ(4, bottom[0]->num_axes()) << \"Input must have 4 axes, \"\n      << \"corresponding to (num, channels, height, width)\";\n  channels_ = bottom[0]->channels();\n  height_ = bottom[0]->height();\n  width_ = bottom[0]->width();\n  if (global_pooling_) {\n    kernel_h_ = bottom[0]->height();\n    kernel_w_ = bottom[0]->width();\n  }\n  pooled_height_ = static_cast<int>(ceil(static_cast<float>(\n      height_ + 2 * pad_h_ - kernel_h_) / stride_h_)) + 1;\n  pooled_width_ = static_cast<int>(ceil(static_cast<float>(\n      width_ + 2 * pad_w_ - kernel_w_) / stride_w_)) + 1;\n  if (pad_h_ || pad_w_) {\n    // If we have padding, ensure that the last pooling starts strictly\n    // inside the image (instead of at the padding); otherwise clip the last.\n    if ((pooled_height_ - 1) * stride_h_ >= height_ + pad_h_) {\n      --pooled_height_;\n    }\n    if ((pooled_width_ - 1) * stride_w_ >= width_ + pad_w_) {\n      
--pooled_width_;\n    }\n    CHECK_LT((pooled_height_ - 1) * stride_h_, height_ + pad_h_);\n    CHECK_LT((pooled_width_ - 1) * stride_w_, width_ + pad_w_);\n  }\n  top[0]->Reshape(bottom[0]->num(), channels_, pooled_height_,\n      pooled_width_);\n  if (top.size() > 1) {\n    top[1]->ReshapeLike(*top[0]);\n  }\n  // If max pooling, we will initialize the vector index part.\n  if (this->layer_param_.pooling_param().pool() ==\n      PoolingParameter_PoolMethod_MAX && top.size() == 1) {\n    max_idx_.Reshape(bottom[0]->num(), channels_, pooled_height_,\n        pooled_width_);\n  }\n  // If stochastic pooling, we will initialize the random index part.\n  if (this->layer_param_.pooling_param().pool() ==\n      PoolingParameter_PoolMethod_STOCHASTIC) {\n    rand_idx_.Reshape(bottom[0]->num(), channels_, pooled_height_,\n      pooled_width_);\n  }\n}\n\n// TODO(Yangqing): Is there a faster way to do pooling in the channel-first\n// case?\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int top_count = top[0]->count();\n  // We'll output the mask to top[1] if it's of size >1.\n  const bool use_top_mask = top.size() > 1;\n  int* mask = NULL;  // suppress warnings about uninitialized variables\n  Dtype* top_mask = NULL;\n  // Different pooling methods. 
We explicitly do the switch outside the for\n  // loop to save time, although this results in more code.\n  switch (this->layer_param_.pooling_param().pool()) {\n  case PoolingParameter_PoolMethod_MAX:\n    // Initialize\n    if (use_top_mask) {\n      top_mask = top[1]->mutable_cpu_data();\n      caffe_set(top_count, Dtype(-1), top_mask);\n    } else {\n      mask = max_idx_.mutable_cpu_data();\n      caffe_set(top_count, -1, mask);\n    }\n    caffe_set(top_count, Dtype(-FLT_MAX), top_data);\n    // The main loop\n    for (int n = 0; n < bottom[0]->num(); ++n) {\n      for (int c = 0; c < channels_; ++c) {\n        for (int ph = 0; ph < pooled_height_; ++ph) {\n          for (int pw = 0; pw < pooled_width_; ++pw) {\n            int hstart = ph * stride_h_ - pad_h_;\n            int wstart = pw * stride_w_ - pad_w_;\n            int hend = min(hstart + kernel_h_, height_);\n            int wend = min(wstart + kernel_w_, width_);\n            hstart = max(hstart, 0);\n            wstart = max(wstart, 0);\n            const int pool_index = ph * pooled_width_ + pw;\n            for (int h = hstart; h < hend; ++h) {\n              for (int w = wstart; w < wend; ++w) {\n                const int index = h * width_ + w;\n                if (bottom_data[index] > top_data[pool_index]) {\n                  top_data[pool_index] = bottom_data[index];\n                  if (use_top_mask) {\n                    top_mask[pool_index] = static_cast<Dtype>(index);\n                  } else {\n                    mask[pool_index] = index;\n                  }\n                }\n              }\n            }\n          }\n        }\n        // compute offset\n        bottom_data += bottom[0]->offset(0, 1);\n        top_data += top[0]->offset(0, 1);\n        if (use_top_mask) {\n          top_mask += top[0]->offset(0, 1);\n        } else {\n          mask += top[0]->offset(0, 1);\n        }\n      }\n    }\n    break;\n  case PoolingParameter_PoolMethod_AVE:\n    for (int i = 0; i 
< top_count; ++i) {\n      top_data[i] = 0;\n    }\n    // The main loop\n    for (int n = 0; n < bottom[0]->num(); ++n) {\n      for (int c = 0; c < channels_; ++c) {\n        for (int ph = 0; ph < pooled_height_; ++ph) {\n          for (int pw = 0; pw < pooled_width_; ++pw) {\n            int hstart = ph * stride_h_ - pad_h_;\n            int wstart = pw * stride_w_ - pad_w_;\n            int hend = min(hstart + kernel_h_, height_ + pad_h_);\n            int wend = min(wstart + kernel_w_, width_ + pad_w_);\n            int pool_size = (hend - hstart) * (wend - wstart);\n            hstart = max(hstart, 0);\n            wstart = max(wstart, 0);\n            hend = min(hend, height_);\n            wend = min(wend, width_);\n            for (int h = hstart; h < hend; ++h) {\n              for (int w = wstart; w < wend; ++w) {\n                top_data[ph * pooled_width_ + pw] +=\n                    bottom_data[h * width_ + w];\n              }\n            }\n            top_data[ph * pooled_width_ + pw] /= pool_size;\n          }\n        }\n        // compute offset\n        bottom_data += bottom[0]->offset(0, 1);\n        top_data += top[0]->offset(0, 1);\n      }\n    }\n    break;\n  case PoolingParameter_PoolMethod_STOCHASTIC:\n    NOT_IMPLEMENTED;\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  // Different pooling methods. 
We explicitly do the switch outside the for\n  // loop to save time, although this results in more code.\n  caffe_set(bottom[0]->count(), Dtype(0), bottom_diff);\n  // We'll output the mask to top[1] if it's of size >1.\n  const bool use_top_mask = top.size() > 1;\n  const int* mask = NULL;  // suppress warnings about uninitialized variables\n  const Dtype* top_mask = NULL;\n  switch (this->layer_param_.pooling_param().pool()) {\n  case PoolingParameter_PoolMethod_MAX:\n    // The main loop\n    if (use_top_mask) {\n      top_mask = top[1]->cpu_data();\n    } else {\n      mask = max_idx_.cpu_data();\n    }\n    for (int n = 0; n < top[0]->num(); ++n) {\n      for (int c = 0; c < channels_; ++c) {\n        for (int ph = 0; ph < pooled_height_; ++ph) {\n          for (int pw = 0; pw < pooled_width_; ++pw) {\n            const int index = ph * pooled_width_ + pw;\n            const int bottom_index =\n                use_top_mask ? top_mask[index] : mask[index];\n            bottom_diff[bottom_index] += top_diff[index];\n          }\n        }\n        bottom_diff += bottom[0]->offset(0, 1);\n        top_diff += top[0]->offset(0, 1);\n        if (use_top_mask) {\n          top_mask += top[0]->offset(0, 1);\n        } else {\n          mask += top[0]->offset(0, 1);\n        }\n      }\n    }\n    break;\n  case PoolingParameter_PoolMethod_AVE:\n    // The main loop\n    for (int n = 0; n < top[0]->num(); ++n) {\n      for (int c = 0; c < channels_; ++c) {\n        for (int ph = 0; ph < pooled_height_; ++ph) {\n          for (int pw = 0; pw < pooled_width_; ++pw) {\n            int hstart = ph * stride_h_ - pad_h_;\n            int wstart = pw * stride_w_ - pad_w_;\n            int hend = min(hstart + kernel_h_, height_ + pad_h_);\n            int wend = min(wstart + kernel_w_, width_ + pad_w_);\n            int pool_size = (hend - hstart) * (wend - wstart);\n            hstart = max(hstart, 0);\n            wstart = max(wstart, 0);\n            hend = min(hend, 
height_);\n            wend = min(wend, width_);\n            for (int h = hstart; h < hend; ++h) {\n              for (int w = wstart; w < wend; ++w) {\n                bottom_diff[h * width_ + w] +=\n                  top_diff[ph * pooled_width_ + pw] / pool_size;\n              }\n            }\n          }\n        }\n        // offset\n        bottom_diff += bottom[0]->offset(0, 1);\n        top_diff += top[0]->offset(0, 1);\n      }\n    }\n    break;\n  case PoolingParameter_PoolMethod_STOCHASTIC:\n    NOT_IMPLEMENTED;\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(PoolingLayer);\n#endif\n\nINSTANTIATE_CLASS(PoolingLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/pooling_layer.cu",
    "content": "#include <algorithm>\n#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void MaxPoolForward(const int nthreads,\n    const Dtype* const bottom_data, const int num, const int channels,\n    const int height, const int width, const int pooled_height,\n    const int pooled_width, const int kernel_h, const int kernel_w,\n    const int stride_h, const int stride_w, const int pad_h, const int pad_w,\n    Dtype* const top_data, int* mask, Dtype* top_mask) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int pw = index % pooled_width;\n    const int ph = (index / pooled_width) % pooled_height;\n    const int c = (index / pooled_width / pooled_height) % channels;\n    const int n = index / pooled_width / pooled_height / channels;\n    int hstart = ph * stride_h - pad_h;\n    int wstart = pw * stride_w - pad_w;\n    const int hend = min(hstart + kernel_h, height);\n    const int wend = min(wstart + kernel_w, width);\n    hstart = max(hstart, 0);\n    wstart = max(wstart, 0);\n    Dtype maxval = -FLT_MAX;\n    int maxidx = -1;\n    const Dtype* const bottom_slice =\n        bottom_data + (n * channels + c) * height * width;\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        if (bottom_slice[h * width + w] > maxval) {\n          maxidx = h * width + w;\n          maxval = bottom_slice[maxidx];\n        }\n      }\n    }\n    top_data[index] = maxval;\n    if (mask) {\n      mask[index] = maxidx;\n    } else {\n      top_mask[index] = maxidx;\n    }\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void AvePoolForward(const int nthreads,\n    const Dtype* const bottom_data, const int num, const int channels,\n    const int height, const int width, const int pooled_height,\n    const int pooled_width, const int kernel_h, const int kernel_w,\n    const int stride_h, const int 
stride_w, const int pad_h, const int pad_w,\n    Dtype* const top_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int pw = index % pooled_width;\n    const int ph = (index / pooled_width) % pooled_height;\n    const int c = (index / pooled_width / pooled_height) % channels;\n    const int n = index / pooled_width / pooled_height / channels;\n    int hstart = ph * stride_h - pad_h;\n    int wstart = pw * stride_w - pad_w;\n    int hend = min(hstart + kernel_h, height + pad_h);\n    int wend = min(wstart + kernel_w, width + pad_w);\n    const int pool_size = (hend - hstart) * (wend - wstart);\n    hstart = max(hstart, 0);\n    wstart = max(wstart, 0);\n    hend = min(hend, height);\n    wend = min(wend, width);\n    Dtype aveval = 0;\n    const Dtype* const bottom_slice =\n        bottom_data + (n * channels + c) * height * width;\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        aveval += bottom_slice[h * width + w];\n      }\n    }\n    top_data[index] = aveval / pool_size;\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void StoPoolForwardTrain(const int nthreads,\n    const Dtype* const bottom_data,\n    const int num, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const int kernel_h, const int kernel_w, const int stride_h,\n    const int stride_w, Dtype* const rand_idx, Dtype* const top_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int pw = index % pooled_width;\n    const int ph = (index / pooled_width) % pooled_height;\n    const int c = (index / pooled_width / pooled_height) % channels;\n    const int n = index / pooled_width / pooled_height / channels;\n    const int hstart = ph * stride_h;\n    const int hend = min(hstart + kernel_h, height);\n    const int wstart = pw * stride_w;\n    const int wend = min(wstart + kernel_w, width);\n    Dtype cumsum = 0.;\n    const Dtype* const bottom_slice =\n        bottom_data + (n * 
channels + c) * height * width;\n    // First pass: get sum\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        cumsum += bottom_slice[h * width + w];\n      }\n    }\n    const float thres = rand_idx[index] * cumsum;\n    // Second pass: get value, and set index.\n    cumsum = 0;\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        cumsum += bottom_slice[h * width + w];\n        if (cumsum >= thres) {\n          rand_idx[index] = ((n * channels + c) * height + h) * width + w;\n          top_data[index] = bottom_slice[h * width + w];\n          return;\n        }\n      }\n    }\n  }\n}\n\n\ntemplate <typename Dtype>\n__global__ void StoPoolForwardTest(const int nthreads,\n    const Dtype* const bottom_data,\n    const int num, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const int kernel_h, const int kernel_w, const int stride_h,\n    const int stride_w, Dtype* const top_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int pw = index % pooled_width;\n    const int ph = (index / pooled_width) % pooled_height;\n    const int c = (index / pooled_width / pooled_height) % channels;\n    const int n = index / pooled_width / pooled_height / channels;\n    const int hstart = ph * stride_h;\n    const int hend = min(hstart + kernel_h, height);\n    const int wstart = pw * stride_w;\n    const int wend = min(wstart + kernel_w, width);\n    // We initialize cumsum to FLT_MIN to avoid divide-by-zero problems\n    Dtype cumsum = FLT_MIN;\n    Dtype cumvalues = 0.;\n    const Dtype* const bottom_slice =\n        bottom_data + (n * channels + c) * height * width;\n    // First pass: get sum\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        cumsum += bottom_slice[h * width + w];\n        cumvalues += bottom_slice[h * width + w] * bottom_slice[h * width + w];\n      }\n    }\n    
top_data[index] = cumvalues / cumsum;\n  }\n}\n\n\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int count = top[0]->count();\n  // We'll output the mask to top[1] if it's of size >1.\n  const bool use_top_mask = top.size() > 1;\n  int* mask = NULL;\n  Dtype* top_mask = NULL;\n  switch (this->layer_param_.pooling_param().pool()) {\n  case PoolingParameter_PoolMethod_MAX:\n    if (use_top_mask) {\n      top_mask = top[1]->mutable_gpu_data();\n    } else {\n      mask = max_idx_.mutable_gpu_data();\n    }\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    MaxPoolForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, bottom_data, bottom[0]->num(), channels_,\n        height_, width_, pooled_height_, pooled_width_, kernel_h_,\n        kernel_w_, stride_h_, stride_w_, pad_h_, pad_w_, top_data,\n        mask, top_mask);\n    break;\n  case PoolingParameter_PoolMethod_AVE:\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    AvePoolForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, bottom_data, bottom[0]->num(), channels_,\n        height_, width_, pooled_height_, pooled_width_, kernel_h_,\n        kernel_w_, stride_h_, stride_w_, pad_h_, pad_w_, top_data);\n    break;\n  case PoolingParameter_PoolMethod_STOCHASTIC:\n    if (this->phase_ == TRAIN) {\n      // We need to create the random index as well.\n      caffe_gpu_rng_uniform(count, Dtype(0), Dtype(1),\n                            rand_idx_.mutable_gpu_data());\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      StoPoolForwardTrain<Dtype><<<CAFFE_GET_BLOCKS(count),\n                                   CAFFE_CUDA_NUM_THREADS>>>(\n          count, bottom_data, bottom[0]->num(), channels_,\n          height_, width_, pooled_height_, pooled_width_, 
kernel_h_,\n          kernel_w_, stride_h_, stride_w_,\n          rand_idx_.mutable_gpu_data(), top_data);\n    } else {\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      StoPoolForwardTest<Dtype><<<CAFFE_GET_BLOCKS(count),\n                                  CAFFE_CUDA_NUM_THREADS>>>(\n          count, bottom_data, bottom[0]->num(), channels_,\n          height_, width_, pooled_height_, pooled_width_, kernel_h_,\n          kernel_w_, stride_h_, stride_w_, top_data);\n    }\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n  CUDA_POST_KERNEL_CHECK;\n}\n\n\ntemplate <typename Dtype>\n__global__ void MaxPoolBackward(const int nthreads, const Dtype* const top_diff,\n    const int* const mask, const Dtype* const top_mask, const int num,\n    const int channels, const int height, const int width,\n    const int pooled_height, const int pooled_width, const int kernel_h,\n    const int kernel_w, const int stride_h, const int stride_w, const int pad_h,\n    const int pad_w, Dtype* const bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // find out the local index\n    // find out the local offset\n    const int w = index % width;\n    const int h = (index / width) % height;\n    const int c = (index / width / height) % channels;\n    const int n = index / width / height / channels;\n    const int phstart =\n         (h + pad_h < kernel_h) ? 0 : (h + pad_h - kernel_h) / stride_h + 1;\n    const int phend = min((h + pad_h) / stride_h + 1, pooled_height);\n    const int pwstart =\n         (w + pad_w < kernel_w) ? 
0 : (w + pad_w - kernel_w) / stride_w + 1;\n    const int pwend = min((w + pad_w) / stride_w + 1, pooled_width);\n    Dtype gradient = 0;\n    const int offset = (n * channels + c) * pooled_height * pooled_width;\n    const Dtype* const top_diff_slice = top_diff + offset;\n    if (mask) {\n      const int* const mask_slice = mask + offset;\n      for (int ph = phstart; ph < phend; ++ph) {\n        for (int pw = pwstart; pw < pwend; ++pw) {\n          if (mask_slice[ph * pooled_width + pw] == h * width + w) {\n            gradient += top_diff_slice[ph * pooled_width + pw];\n          }\n        }\n      }\n    } else {\n      const Dtype* const top_mask_slice = top_mask + offset;\n      for (int ph = phstart; ph < phend; ++ph) {\n        for (int pw = pwstart; pw < pwend; ++pw) {\n          if (top_mask_slice[ph * pooled_width + pw] == h * width + w) {\n            gradient += top_diff_slice[ph * pooled_width + pw];\n          }\n        }\n      }\n    }\n    bottom_diff[index] = gradient;\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void AvePoolBackward(const int nthreads, const Dtype* const top_diff,\n    const int num, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const int kernel_h, const int kernel_w, const int stride_h,\n    const int stride_w, const int pad_h, const int pad_w,\n    Dtype* const bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // find out the local index\n    // find out the local offset\n    const int w = index % width + pad_w;\n    const int h = (index / width) % height + pad_h;\n    const int c = (index / width / height) % channels;\n    const int n = index / width / height / channels;\n    const int phstart = (h < kernel_h) ? 0 : (h - kernel_h) / stride_h + 1;\n    const int phend = min(h / stride_h + 1, pooled_height);\n    const int pwstart = (w < kernel_w) ? 
0 : (w - kernel_w) / stride_w + 1;\n    const int pwend = min(w / stride_w + 1, pooled_width);\n    Dtype gradient = 0;\n    const Dtype* const top_diff_slice =\n        top_diff + (n * channels + c) * pooled_height * pooled_width;\n    for (int ph = phstart; ph < phend; ++ph) {\n      for (int pw = pwstart; pw < pwend; ++pw) {\n        // figure out the pooling size\n        int hstart = ph * stride_h - pad_h;\n        int wstart = pw * stride_w - pad_w;\n        int hend = min(hstart + kernel_h, height + pad_h);\n        int wend = min(wstart + kernel_w, width + pad_w);\n        int pool_size = (hend - hstart) * (wend - wstart);\n        gradient += top_diff_slice[ph * pooled_width + pw] / pool_size;\n      }\n    }\n    bottom_diff[index] = gradient;\n  }\n}\n\n\ntemplate <typename Dtype>\n__global__ void StoPoolBackward(const int nthreads,\n    const Dtype* const rand_idx, const Dtype* const top_diff,\n    const int num, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const int kernel_h, const int kernel_w, const int stride_h,\n    const int stride_w, Dtype* const bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // find out the local index\n    // find out the local offset\n    const int w = index % width;\n    const int h = (index / width) % height;\n    const int c = (index / width / height) % channels;\n    const int n = index / width / height / channels;\n    const int phstart = (h < kernel_h) ? 0 : (h - kernel_h) / stride_h + 1;\n    const int phend = min(h / stride_h + 1, pooled_height);\n    const int pwstart = (w < kernel_w) ? 
0 : (w - kernel_w) / stride_w + 1;\n    const int pwend = min(w / stride_w + 1, pooled_width);\n    Dtype gradient = 0;\n    const Dtype* const rand_idx_slice =\n        rand_idx + (n * channels + c) * pooled_height * pooled_width;\n    const Dtype* const top_diff_slice =\n        top_diff + (n * channels + c) * pooled_height * pooled_width;\n    for (int ph = phstart; ph < phend; ++ph) {\n      for (int pw = pwstart; pw < pwend; ++pw) {\n        gradient += top_diff_slice[ph * pooled_width + pw] *\n            (index == static_cast<int>(rand_idx_slice[ph * pooled_width + pw]));\n      }\n    }\n    bottom_diff[index] = gradient;\n  }\n}\n\n\ntemplate <typename Dtype>\nvoid PoolingLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  const int count = bottom[0]->count();\n  caffe_gpu_set(count, Dtype(0.), bottom_diff);\n  // We'll output the mask to top[1] if it's of size >1.\n  const bool use_top_mask = top.size() > 1;\n  const int* mask = NULL;\n  const Dtype* top_mask = NULL;\n  switch (this->layer_param_.pooling_param().pool()) {\n  case PoolingParameter_PoolMethod_MAX:\n    if (use_top_mask) {\n      top_mask = top[1]->gpu_data();\n    } else {\n      mask = max_idx_.gpu_data();\n    }\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    MaxPoolBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, mask, top_mask, top[0]->num(), channels_,\n        height_, width_, pooled_height_, pooled_width_,\n        kernel_h_, kernel_w_, stride_h_, stride_w_, pad_h_, pad_w_,\n        bottom_diff);\n    break;\n  case PoolingParameter_PoolMethod_AVE:\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    AvePoolBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, 
top_diff, top[0]->num(), channels_,\n        height_, width_, pooled_height_, pooled_width_, kernel_h_,\n        kernel_w_, stride_h_, stride_w_, pad_h_, pad_w_, bottom_diff);\n    break;\n  case PoolingParameter_PoolMethod_STOCHASTIC:\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    StoPoolBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, rand_idx_.gpu_data(), top_diff,\n        top[0]->num(), channels_, height_, width_, pooled_height_,\n        pooled_width_, kernel_h_, kernel_w_, stride_h_, stride_w_,\n        bottom_diff);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n  CUDA_POST_KERNEL_CHECK;\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(PoolingLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/power_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/power_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid PowerLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  power_ = this->layer_param_.power_param().power();\n  scale_ = this->layer_param_.power_param().scale();\n  shift_ = this->layer_param_.power_param().shift();\n  diff_scale_ = power_  * scale_;\n}\n\n// Compute y = (shift + scale * x)^power\ntemplate <typename Dtype>\nvoid PowerLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  // Special case where we can ignore the input: scale or power is 0.\n  if (diff_scale_ == Dtype(0)) {\n    Dtype value = (power_ == 0) ? Dtype(1) : pow(shift_, power_);\n    caffe_set(count, value, top_data);\n    return;\n  }\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  caffe_copy(count, bottom_data, top_data);\n  if (scale_ != Dtype(1)) {\n    caffe_scal(count, scale_, top_data);\n  }\n  if (shift_ != Dtype(0)) {\n    caffe_add_scalar(count, shift_, top_data);\n  }\n  if (power_ != Dtype(1)) {\n    caffe_powx(count, top_data, power_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid PowerLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    if (diff_scale_ == Dtype(0) || power_ == Dtype(1)) {\n      caffe_set(count, diff_scale_, bottom_diff);\n    } else {\n      const Dtype* bottom_data = bottom[0]->cpu_data();\n      // Compute dy/dx = scale * power * (shift + scale * x)^(power - 
1)\n      //               = diff_scale * y / (shift + scale * x)\n      if (power_ == Dtype(2)) {\n        // Special case for y = (shift + scale * x)^2\n        //     -> dy/dx = 2 * scale * (shift + scale * x)\n        //              = diff_scale * shift + diff_scale * scale * x\n        caffe_cpu_axpby(count, diff_scale_ * scale_, bottom_data,\n            Dtype(0), bottom_diff);\n        if (shift_ != Dtype(0)) {\n          caffe_add_scalar(count, diff_scale_ * shift_, bottom_diff);\n        }\n      } else if (shift_ == Dtype(0)) {\n        // Special case for y = (scale * x)^power\n        //     -> dy/dx = scale * power * (scale * x)^(power - 1)\n        //              = scale * power * (scale * x)^power * (scale * x)^(-1)\n        //              = power * y / x\n        const Dtype* top_data = top[0]->cpu_data();\n        caffe_div(count, top_data, bottom_data, bottom_diff);\n        caffe_scal(count, power_, bottom_diff);\n      } else {\n        caffe_copy(count, bottom_data, bottom_diff);\n        if (scale_ != Dtype(1)) {\n          caffe_scal(count, scale_, bottom_diff);\n        }\n        if (shift_ != Dtype(0)) {\n          caffe_add_scalar(count, shift_, bottom_diff);\n        }\n        const Dtype* top_data = top[0]->cpu_data();\n        caffe_div<Dtype>(count, top_data, bottom_diff, bottom_diff);\n        if (diff_scale_ != Dtype(1)) {\n          caffe_scal(count, diff_scale_, bottom_diff);\n        }\n      }\n    }\n    if (diff_scale_ != Dtype(0)) {\n      caffe_mul(count, top_diff, bottom_diff, bottom_diff);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(PowerLayer);\n#endif\n\nINSTANTIATE_CLASS(PowerLayer);\nREGISTER_LAYER_CLASS(Power);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/power_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/power_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid PowerLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  // Special case where we can ignore the input: scale or power is 0.\n  if (diff_scale_ == Dtype(0)) {\n    Dtype value = (power_ == 0) ? Dtype(1) : pow(shift_, power_);\n    caffe_gpu_set(count, value, top_data);\n    return;\n  }\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  caffe_copy(count, bottom_data, top_data);\n  if (scale_ != Dtype(1)) {\n    caffe_gpu_scal(count, scale_, top_data);\n  }\n  if (shift_ != Dtype(0)) {\n    caffe_gpu_add_scalar(count, shift_, top_data);\n  }\n  if (power_ != Dtype(1)) {\n    caffe_gpu_powx(count, top_data, power_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid PowerLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    if (diff_scale_ == Dtype(0) || power_ == Dtype(1)) {\n      caffe_gpu_set(count, diff_scale_, bottom_diff);\n    } else {\n      const Dtype* bottom_data = bottom[0]->gpu_data();\n      // Compute dy/dx = scale * power * (shift + scale * x)^(power - 1)\n      //               = diff_scale * y / (shift + scale * x)\n      if (power_ == Dtype(2)) {\n        // Special case for y = (shift + scale * x)^2\n        //     -> dy/dx = 2 * scale * (shift + scale * x)\n        //              = diff_scale * shift + diff_scale * scale * x\n        caffe_gpu_axpby(count, diff_scale_ * scale_, bottom_data,\n            Dtype(0), bottom_diff);\n        if (shift_ != 
Dtype(0)) {\n          caffe_gpu_add_scalar(count, diff_scale_ * shift_, bottom_diff);\n        }\n      } else if (shift_ == Dtype(0)) {\n        // Special case for y = (scale * x)^power\n        //     -> dy/dx = scale * power * (scale * x)^(power - 1)\n        //              = scale * power * (scale * x)^power * (scale * x)^(-1)\n        //              = power * y / x\n        const Dtype* top_data = top[0]->gpu_data();\n        caffe_gpu_div(count, top_data, bottom_data, bottom_diff);\n        caffe_gpu_scal(count, power_, bottom_diff);\n      } else {\n        caffe_copy(count, bottom_data, bottom_diff);\n        if (scale_ != Dtype(1)) {\n          caffe_gpu_scal(count, scale_, bottom_diff);\n        }\n        if (shift_ != Dtype(0)) {\n          caffe_gpu_add_scalar(count, shift_, bottom_diff);\n        }\n        const Dtype* top_data = top[0]->gpu_data();\n        caffe_gpu_div<Dtype>(count, top_data, bottom_diff, bottom_diff);\n        if (diff_scale_ != Dtype(1)) {\n          caffe_gpu_scal(count, diff_scale_, bottom_diff);\n        }\n      }\n    }\n    caffe_gpu_mul(count, top_diff, bottom_diff, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(PowerLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/prelu_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/filler.hpp\"\n\n#include \"caffe/layers/neuron_layer.hpp\"\n#include \"caffe/layers/prelu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  CHECK_GE(bottom[0]->num_axes(), 2)\n      << \"Number of axes of bottom blob must be >=2.\";\n  PReLUParameter prelu_param = this->layer_param().prelu_param();\n  int channels = bottom[0]->channels();\n  channel_shared_ = prelu_param.channel_shared();\n  if (this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else {\n    this->blobs_.resize(1);\n    if (channel_shared_) {\n      this->blobs_[0].reset(new Blob<Dtype>(vector<int>(0)));\n    } else {\n      this->blobs_[0].reset(new Blob<Dtype>(vector<int>(1, channels)));\n    }\n    shared_ptr<Filler<Dtype> > filler;\n    if (prelu_param.has_filler()) {\n      filler.reset(GetFiller<Dtype>(prelu_param.filler()));\n    } else {\n      FillerParameter filler_param;\n      filler_param.set_type(\"constant\");\n      filler_param.set_value(0.25);\n      filler.reset(GetFiller<Dtype>(filler_param));\n    }\n    filler->Fill(this->blobs_[0].get());\n  }\n  if (channel_shared_) {\n    CHECK_EQ(this->blobs_[0]->count(), 1)\n        << \"Negative slope size is inconsistent with prototxt config\";\n  } else {\n    CHECK_EQ(this->blobs_[0]->count(), channels)\n        << \"Negative slope size is inconsistent with prototxt config\";\n  }\n\n  // Propagate gradients to the parameters (as directed by backward pass).\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n  multiplier_.Reshape(vector<int>(1, bottom[0]->count(1)));\n  backward_buff_.Reshape(vector<int>(1, bottom[0]->count(1)));\n  caffe_set(multiplier_.count(), Dtype(1), multiplier_.mutable_cpu_data());\n}\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::Reshape(const 
vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  CHECK_GE(bottom[0]->num_axes(), 2)\n      << \"Number of axes of bottom blob must be >=2.\";\n  top[0]->ReshapeLike(*bottom[0]);\n  if (bottom[0] == top[0]) {\n    // For in-place computation\n    bottom_memory_.ReshapeLike(*bottom[0]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  const int dim = bottom[0]->count(2);\n  const int channels = bottom[0]->channels();\n  const Dtype* slope_data = this->blobs_[0]->cpu_data();\n\n  // For in-place computation\n  if (bottom[0] == top[0]) {\n    caffe_copy(count, bottom_data, bottom_memory_.mutable_cpu_data());\n  }\n\n  // if channel_shared, channel index in the following computation becomes\n  // always zero.\n  const int div_factor = channel_shared_ ? channels : 1;\n  for (int i = 0; i < count; ++i) {\n    int c = (i / dim) % channels / div_factor;\n    top_data[i] = std::max(bottom_data[i], Dtype(0))\n        + slope_data[c] * std::min(bottom_data[i], Dtype(0));\n  }\n}\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* slope_data = this->blobs_[0]->cpu_data();\n  const Dtype* top_diff = top[0]->cpu_diff();\n  const int count = bottom[0]->count();\n  const int dim = bottom[0]->count(2);\n  const int channels = bottom[0]->channels();\n\n  // For in-place computation\n  if (top[0] == bottom[0]) {\n    bottom_data = bottom_memory_.cpu_data();\n  }\n\n  // if channel_shared, channel index in the following computation becomes\n  // always zero.\n  const int div_factor = channel_shared_ ? 
channels : 1;\n\n  // Propagate to param\n  // Since writing bottom diff would affect top diff if top and bottom blobs\n  // are identical (in-place computation), we first compute the param backward\n  // pass to keep top_diff unchanged.\n  if (this->param_propagate_down_[0]) {\n    Dtype* slope_diff = this->blobs_[0]->mutable_cpu_diff();\n    for (int i = 0; i < count; ++i) {\n      int c = (i / dim) % channels / div_factor;\n      slope_diff[c] += top_diff[i] * bottom_data[i] * (bottom_data[i] <= 0);\n    }\n  }\n  // Propagate to bottom\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    for (int i = 0; i < count; ++i) {\n      int c = (i / dim) % channels / div_factor;\n      bottom_diff[i] = top_diff[i] * ((bottom_data[i] > 0)\n          + slope_data[c] * (bottom_data[i] <= 0));\n    }\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(PReLULayer);\n#endif\n\nINSTANTIATE_CLASS(PReLULayer);\nREGISTER_LAYER_CLASS(PReLU);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/prelu_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/neuron_layer.hpp\"\n#include \"caffe/layers/prelu_layer.hpp\"\n\nnamespace caffe {\n\n// CUDA kernel for forward\ntemplate <typename Dtype>\n__global__ void PReLUForward(const int n, const int channels, const int dim,\n    const Dtype* in, Dtype* out, const Dtype* slope_data,\n    const int div_factor) {\n  CUDA_KERNEL_LOOP(index, n) {\n    int c = (index / dim) % channels / div_factor;\n    out[index] = in[index] > 0 ? in[index] : in[index] * slope_data[c];\n  }\n}\n\n// CUDA kernel for bottom backward\ntemplate <typename Dtype>\n__global__ void PReLUBackward(const int n, const int channels, const int dim,\n    const Dtype* in_diff, const Dtype* in_data, Dtype* out_diff,\n    const Dtype* slope_data, const int div_factor) {\n  CUDA_KERNEL_LOOP(index, n) {\n    int c = (index / dim) % channels / div_factor;\n    out_diff[index] = in_diff[index] * ((in_data[index] > 0)\n        + (in_data[index] <= 0) * slope_data[c]);\n  }\n}\n\n// CUDA kernel for element-wise parameter backward\ntemplate <typename Dtype>\n__global__ void PReLUParamBackward(const int n,\n    const int rows, const int rowPitch, const Dtype* in_diff,\n    const Dtype* in_data, Dtype* out_diff) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out_diff[index] = in_diff[index] * in_data[index] * (in_data[index] <= 0);\n    for ( int k = 1; k < rows; k++ ) {\n        out_diff[index] += in_diff[index + k*rowPitch]\n           * in_data[index + k*rowPitch] * (in_data[index + k*rowPitch] <= 0);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  const int dim = bottom[0]->count(2);\n  const int channels = bottom[0]->channels();\n  const Dtype* slope_data = this->blobs_[0]->gpu_data();\n  
const int div_factor = channel_shared_ ? channels : 1;\n\n  // For in-place computation\n  if (top[0] == bottom[0]) {\n    caffe_copy(count, bottom_data, bottom_memory_.mutable_gpu_data());\n  }\n\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  PReLUForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, channels, dim, bottom_data, top_data, slope_data, div_factor);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\nvoid PReLULayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const int count = bottom[0]->count();\n  const int dim = bottom[0]->count(2);\n  const int channels = bottom[0]->channels();\n\n  // For in-place computation\n  if (top[0] == bottom[0]) {\n    bottom_data = bottom_memory_.gpu_data();\n  }\n\n  // Propagate to param\n  // Since writing bottom diff would affect top diff if top and bottom blobs\n  // are identical (in-place computation), we first compute the param backward\n  // pass to keep top_diff unchanged.\n  if (this->param_propagate_down_[0]) {\n    Dtype* slope_diff = this->blobs_[0]->mutable_gpu_diff();\n    int cdim = channels * dim;\n\n    // compute element-wise diff\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    PReLUParamBackward<Dtype><<<CAFFE_GET_BLOCKS(cdim),\n      CAFFE_CUDA_NUM_THREADS>>>(\n      cdim, bottom[0]->num(), top[0]->offset(1), top_diff,\n      bottom_data,\n      backward_buff_.mutable_gpu_diff());\n    CUDA_POST_KERNEL_CHECK;\n    if (channel_shared_) {\n      Dtype dsum;\n      caffe_gpu_dot<Dtype>(channels * dim, backward_buff_.gpu_diff(),\n       multiplier_.gpu_data(), &dsum);\n      caffe_gpu_add_scalar(this->blobs_[0]->count(), Dtype(dsum), slope_diff);\n    } else {\n      caffe_gpu_gemv<Dtype>(CblasNoTrans, channels, dim, 1.,\n        backward_buff_.gpu_diff(), 
multiplier_.gpu_data(), 1.,\n        slope_diff);\n    }\n  }\n  // Propagate to bottom\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const Dtype* slope_data = this->blobs_[0]->gpu_data();\n    int div_factor = channel_shared_ ? channels : 1;\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    PReLUBackward<Dtype><<<CAFFE_GET_BLOCKS(count),\n        CAFFE_CUDA_NUM_THREADS>>>(\n        count, channels, dim, top_diff, bottom_data, bottom_diff, slope_data,\n        div_factor);\n    CUDA_POST_KERNEL_CHECK;\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(PReLULayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/reduction_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/reduction_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  op_ = this->layer_param_.reduction_param().operation();\n}\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  axis_ = bottom[0]->CanonicalAxisIndex(\n      this->layer_param_.reduction_param().axis());\n  // In the output, we'll keep all axes up to the reduction axis, but\n  // throw away any after that.\n  // Note: currently reducing along non-tail axes is not supported; otherwise,\n  // we'd need to also copy any axes following an \"end_axis\".\n  vector<int> top_shape(bottom[0]->shape().begin(),\n                        bottom[0]->shape().begin() + axis_);\n  top[0]->Reshape(top_shape);\n  num_ = bottom[0]->count(0, axis_);\n  dim_ = bottom[0]->count(axis_);\n  CHECK_EQ(num_, top[0]->count());\n  if (op_ == ReductionParameter_ReductionOp_SUM ||\n      op_ == ReductionParameter_ReductionOp_MEAN) {\n    vector<int> sum_mult_shape(1, dim_);\n    sum_multiplier_.Reshape(sum_mult_shape);\n    caffe_set(dim_, Dtype(1), sum_multiplier_.mutable_cpu_data());\n  }\n  coeff_ = this->layer_param().reduction_param().coeff();\n  if (op_ == ReductionParameter_ReductionOp_MEAN) {\n    coeff_ /= dim_;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* mult_data = NULL;\n  if (sum_multiplier_.count() > 0) {\n    mult_data = sum_multiplier_.cpu_data();\n  }\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  for (int i = 0; i < num_; ++i) {\n    switch (op_) {\n    case ReductionParameter_ReductionOp_SUM:\n    case 
ReductionParameter_ReductionOp_MEAN:\n      *top_data = caffe_cpu_dot(dim_, mult_data, bottom_data);\n      break;\n    case ReductionParameter_ReductionOp_ASUM:\n      *top_data = caffe_cpu_asum(dim_, bottom_data);\n      break;\n    case ReductionParameter_ReductionOp_SUMSQ:\n      *top_data = caffe_cpu_dot(dim_, bottom_data, bottom_data);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown reduction op: \"\n          << ReductionParameter_ReductionOp_Name(op_);\n    }\n    bottom_data += dim_;\n    ++top_data;\n  }\n  if (coeff_ != Dtype(1)) {\n    // Reset the top_data pointer.\n    top_data = top[0]->mutable_cpu_data();\n    caffe_scal(num_, coeff_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  // Get bottom_data, if needed.\n  const Dtype* bottom_data = NULL;\n  switch (op_) {\n  // Operations that don't need bottom_data\n  case ReductionParameter_ReductionOp_SUM:\n  case ReductionParameter_ReductionOp_MEAN:\n    break;\n  // Operations that need bottom_data\n  case ReductionParameter_ReductionOp_ASUM:\n  case ReductionParameter_ReductionOp_SUMSQ:\n    bottom_data = bottom[0]->cpu_data();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown reduction op: \"\n        << ReductionParameter_ReductionOp_Name(op_);\n  }\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  for (int i = 0; i < num_; ++i) {\n    const Dtype bottom_coeff = (*top_diff) * coeff_;\n    switch (op_) {\n    case ReductionParameter_ReductionOp_SUM:\n    case ReductionParameter_ReductionOp_MEAN:\n      caffe_set(dim_, bottom_coeff, bottom_diff);\n      break;\n    case ReductionParameter_ReductionOp_ASUM:\n      caffe_cpu_sign(dim_, bottom_data, bottom_diff);\n      caffe_scal(dim_, bottom_coeff, bottom_diff);\n      break;\n    case 
ReductionParameter_ReductionOp_SUMSQ:\n      caffe_cpu_scale(dim_, 2 * bottom_coeff, bottom_data, bottom_diff);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown reduction op: \"\n          << ReductionParameter_ReductionOp_Name(op_);\n    }\n    bottom_data += dim_;\n    bottom_diff += dim_;\n    ++top_diff;\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ReductionLayer);\n#endif\n\nINSTANTIATE_CLASS(ReductionLayer);\nREGISTER_LAYER_CLASS(Reduction);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/reduction_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/reduction_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  const Dtype* mult_data = NULL;\n  if (sum_multiplier_.count() > 0) {\n    mult_data = sum_multiplier_.gpu_data();\n  }\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  for (int i = 0; i < num_; ++i) {\n    switch (op_) {\n    case ReductionParameter_ReductionOp_SUM:\n    case ReductionParameter_ReductionOp_MEAN:\n      caffe_gpu_dot(dim_, mult_data, bottom_data, top_data);\n      break;\n    case ReductionParameter_ReductionOp_ASUM:\n      caffe_gpu_asum(dim_, bottom_data, top_data);\n      break;\n    case ReductionParameter_ReductionOp_SUMSQ:\n      caffe_gpu_dot(dim_, bottom_data, bottom_data, top_data);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown reduction op: \"\n          << ReductionParameter_ReductionOp_Name(op_);\n    }\n    bottom_data += dim_;\n    ++top_data;\n  }\n  if (coeff_ != Dtype(1)) {\n    // Reset the top_data pointer.\n    top_data = top[0]->mutable_gpu_data();\n    caffe_gpu_scal(num_, coeff_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReductionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  // Get bottom_data, if needed.\n  const Dtype* bottom_data = NULL;\n  switch (op_) {\n  // Operations that don't need bottom_data\n  case ReductionParameter_ReductionOp_SUM:\n  case ReductionParameter_ReductionOp_MEAN:\n    break;\n  // Operations that need bottom_data\n  case ReductionParameter_ReductionOp_ASUM:\n  case ReductionParameter_ReductionOp_SUMSQ:\n    bottom_data = bottom[0]->gpu_data();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown 
reduction op: \"\n        << ReductionParameter_ReductionOp_Name(op_);\n  }\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  for (int i = 0; i < num_; ++i) {\n    const Dtype bottom_coeff = (*top_diff) * coeff_;\n    switch (op_) {\n    case ReductionParameter_ReductionOp_SUM:\n    case ReductionParameter_ReductionOp_MEAN:\n      caffe_gpu_set(dim_, bottom_coeff, bottom_diff);\n      break;\n    case ReductionParameter_ReductionOp_ASUM:\n      caffe_gpu_sign(dim_, bottom_data, bottom_diff);\n      caffe_gpu_scal(dim_, bottom_coeff, bottom_diff);\n      break;\n    case ReductionParameter_ReductionOp_SUMSQ:\n      caffe_gpu_scale(dim_, 2 * bottom_coeff, bottom_data, bottom_diff);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown reduction op: \"\n          << ReductionParameter_ReductionOp_Name(op_);\n    }\n    bottom_data += dim_;\n    bottom_diff += dim_;\n    ++top_diff;\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ReductionLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/relu_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/relu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ReLULayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  Dtype negative_slope = this->layer_param_.relu_param().negative_slope();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = std::max(bottom_data[i], Dtype(0))\n        + negative_slope * std::min(bottom_data[i], Dtype(0));\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReLULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->cpu_data();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    Dtype negative_slope = this->layer_param_.relu_param().negative_slope();\n    for (int i = 0; i < count; ++i) {\n      bottom_diff[i] = top_diff[i] * ((bottom_data[i] > 0)\n          + negative_slope * (bottom_data[i] <= 0));\n    }\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(ReLULayer);\n#endif\n\nINSTANTIATE_CLASS(ReLULayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/relu_layer.cu",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/relu_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void ReLUForward(const int n, const Dtype* in, Dtype* out,\n    Dtype negative_slope) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = in[index] > 0 ? in[index] : in[index] * negative_slope;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReLULayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  Dtype negative_slope = this->layer_param_.relu_param().negative_slope();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  ReLUForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, top_data, negative_slope);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\n__global__ void ReLUBackward(const int n, const Dtype* in_diff,\n    const Dtype* in_data, Dtype* out_diff, Dtype negative_slope) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out_diff[index] = in_diff[index] * ((in_data[index] > 0)\n        + (in_data[index] <= 0) * negative_slope);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReLULayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* bottom_data = bottom[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    Dtype negative_slope = this->layer_param_.relu_param().negative_slope();\n   
 // NOLINT_NEXT_LINE(whitespace/operators)\n    ReLUBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, bottom_data, bottom_diff, negative_slope);\n    CUDA_POST_KERNEL_CHECK;\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(ReLULayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/reshape_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/reshape_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ReshapeLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  CHECK_NE(top[0], bottom[0]) << this->type() << \" Layer does not \"\n      \"allow in-place computation.\";\n  inferred_axis_ = -1;\n  copy_axes_.clear();\n  const BlobShape& top_blob_shape = this->layer_param_.reshape_param().shape();\n  const int top_num_axes = top_blob_shape.dim_size();\n  constant_count_ = 1;\n  for (int i = 0; i < top_num_axes; ++i) {\n    const int top_dim = top_blob_shape.dim(i);\n    if (top_dim == 0) {\n      copy_axes_.push_back(i);\n    } else if (top_dim == -1) {\n      CHECK_EQ(inferred_axis_, -1) << \"new shape contains multiple \"\n          << \"-1 dims; at most a single (1) value of -1 may be specified\";\n      inferred_axis_ = i;\n    } else {\n      constant_count_ *= top_dim;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid ReshapeLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const int input_start_axis = this->layer_param_.reshape_param().axis();\n  const int start_axis = (input_start_axis >= 0) ? input_start_axis :\n      bottom[0]->num_axes() + input_start_axis + 1;\n  CHECK_GE(start_axis, 0) << \"axis \" << input_start_axis << \" out of range\";\n  CHECK_LE(start_axis, bottom[0]->num_axes()) << \"axis \" << input_start_axis\n      << \" out of range for \" << bottom[0]->num_axes() << \"-D input blob\";\n  const int num_axes = this->layer_param_.reshape_param().num_axes();\n  CHECK_GE(num_axes, -1) << \"num_axes must be >= 0, or -1 for all\";\n  const int end_axis =\n      (num_axes == -1) ? 
bottom[0]->num_axes() : (start_axis + num_axes);\n  CHECK_LE(end_axis, bottom[0]->num_axes())\n      << \"end_axis = axis + num_axes is out of range\";\n  const int num_axes_replaced = end_axis - start_axis;\n  const int num_axes_retained = bottom[0]->num_axes() - num_axes_replaced;\n  const BlobShape& top_blob_shape = this->layer_param_.reshape_param().shape();\n  const int num_new_axes = top_blob_shape.dim_size();\n  vector<int> top_shape(num_axes_retained + num_new_axes);\n  int top_shape_index = 0;\n  for (int i = 0; i < start_axis; ++i) {\n    top_shape[top_shape_index++] = bottom[0]->shape(i);\n  }\n  for (int i = 0; i < num_new_axes; ++i) {\n    top_shape[top_shape_index++] = top_blob_shape.dim(i);\n  }\n  for (int i = end_axis; i < bottom[0]->num_axes(); ++i) {\n    top_shape[top_shape_index++] = bottom[0]->shape(i);\n  }\n  CHECK_EQ(top_shape_index, top_shape.size());\n  for (int i = 0; i < copy_axes_.size(); ++i) {\n    const int copy_axis_index = copy_axes_[i];\n    CHECK_GT(bottom[0]->num_axes(), start_axis + copy_axis_index)\n        << \"new shape contains a 0, but there was no corresponding bottom axis \"\n        << \"to copy\";\n    top_shape[start_axis + copy_axis_index] =\n        bottom[0]->shape(start_axis + copy_axis_index);\n  }\n  if (inferred_axis_ >= 0) {\n    // A -1 dim was specified; infer the correct dimension by computing the\n    // product of the other dimensions.\n    int explicit_count = constant_count_;\n    explicit_count *= bottom[0]->count(0, start_axis);\n    explicit_count *= bottom[0]->count(end_axis);\n    for (int i = 0; i < copy_axes_.size(); ++i) {\n      const int copy_axis_index = copy_axes_[i];\n      explicit_count *= top_shape[start_axis + copy_axis_index];\n    }\n    CHECK_EQ(0, bottom[0]->count() % explicit_count) << \"bottom count (\"\n        << bottom[0]->count() << \") must be divisible by the product of \"\n        << \"the specified dimensions (\" << explicit_count << \")\";\n    const int inferred_dim = 
bottom[0]->count() / explicit_count;\n    top_shape[start_axis + inferred_axis_] = inferred_dim;\n  }\n  top[0]->Reshape(top_shape);\n  CHECK_EQ(top[0]->count(), bottom[0]->count())\n      << \"output count must match input count\";\n  top[0]->ShareData(*bottom[0]);\n  top[0]->ShareDiff(*bottom[0]);\n}\n\nINSTANTIATE_CLASS(ReshapeLayer);\nREGISTER_LAYER_CLASS(Reshape);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/roi_pooling_layer.cpp",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#include <cfloat>\n\n#include \"caffe/fast_rcnn_layers.hpp\"\n\nusing std::max;\nusing std::min;\nusing std::floor;\nusing std::ceil;\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  ROIPoolingParameter roi_pool_param = this->layer_param_.roi_pooling_param();\n  CHECK_GT(roi_pool_param.pooled_h(), 0)\n      << \"pooled_h must be > 0\";\n  CHECK_GT(roi_pool_param.pooled_w(), 0)\n      << \"pooled_w must be > 0\";\n  pooled_height_ = roi_pool_param.pooled_h();\n  pooled_width_ = roi_pool_param.pooled_w();\n  spatial_scale_ = roi_pool_param.spatial_scale();\n  LOG(INFO) << \"Spatial scale: \" << spatial_scale_;\n}\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  channels_ = bottom[0]->channels();\n  height_ = bottom[0]->height();\n  width_ = bottom[0]->width();\n  top[0]->Reshape(bottom[1]->num(), channels_, pooled_height_,\n      pooled_width_);\n  max_idx_.Reshape(bottom[1]->num(), channels_, pooled_height_,\n      pooled_width_);\n}\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const Dtype* bottom_rois = bottom[1]->cpu_data();\n  // Number of ROIs\n  int num_rois = bottom[1]->num();\n  int batch_size = bottom[0]->num();\n  int top_count = top[0]->count();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  caffe_set(top_count, Dtype(-FLT_MAX), top_data);\n  int* argmax_data = 
max_idx_.mutable_cpu_data();\n  caffe_set(top_count, -1, argmax_data);\n\n  // For each ROI R = [batch_index x1 y1 x2 y2]: max pool over R\n  for (int n = 0; n < num_rois; ++n) {\n    int roi_batch_ind = bottom_rois[0];\n    int roi_start_w = round(bottom_rois[1] * spatial_scale_);\n    int roi_start_h = round(bottom_rois[2] * spatial_scale_);\n    int roi_end_w = round(bottom_rois[3] * spatial_scale_);\n    int roi_end_h = round(bottom_rois[4] * spatial_scale_);\n    CHECK_GE(roi_batch_ind, 0);\n    CHECK_LT(roi_batch_ind, batch_size);\n\n    int roi_height = max(roi_end_h - roi_start_h + 1, 1);\n    int roi_width = max(roi_end_w - roi_start_w + 1, 1);\n    const Dtype bin_size_h = static_cast<Dtype>(roi_height)\n                             / static_cast<Dtype>(pooled_height_);\n    const Dtype bin_size_w = static_cast<Dtype>(roi_width)\n                             / static_cast<Dtype>(pooled_width_);\n\n    const Dtype* batch_data = bottom_data + bottom[0]->offset(roi_batch_ind);\n\n    for (int c = 0; c < channels_; ++c) {\n      for (int ph = 0; ph < pooled_height_; ++ph) {\n        for (int pw = 0; pw < pooled_width_; ++pw) {\n          // Compute pooling region for this output unit:\n          //  start (included) = floor(ph * roi_height / pooled_height_)\n          //  end (excluded) = ceil((ph + 1) * roi_height / pooled_height_)\n          int hstart = static_cast<int>(floor(static_cast<Dtype>(ph)\n                                              * bin_size_h));\n          int wstart = static_cast<int>(floor(static_cast<Dtype>(pw)\n                                              * bin_size_w));\n          int hend = static_cast<int>(ceil(static_cast<Dtype>(ph + 1)\n                                           * bin_size_h));\n          int wend = static_cast<int>(ceil(static_cast<Dtype>(pw + 1)\n                                           * bin_size_w));\n\n          hstart = min(max(hstart + roi_start_h, 0), height_);\n          hend = min(max(hend + 
roi_start_h, 0), height_);\n          wstart = min(max(wstart + roi_start_w, 0), width_);\n          wend = min(max(wend + roi_start_w, 0), width_);\n\n          bool is_empty = (hend <= hstart) || (wend <= wstart);\n\n          const int pool_index = ph * pooled_width_ + pw;\n          if (is_empty) {\n            top_data[pool_index] = 0;\n            argmax_data[pool_index] = -1;\n          }\n\n          for (int h = hstart; h < hend; ++h) {\n            for (int w = wstart; w < wend; ++w) {\n              const int index = h * width_ + w;\n              if (batch_data[index] > top_data[pool_index]) {\n                top_data[pool_index] = batch_data[index];\n                argmax_data[pool_index] = index;\n              }\n            }\n          }\n        }\n      }\n      // Increment all data pointers by one channel\n      batch_data += bottom[0]->offset(0, 1);\n      top_data += top[0]->offset(0, 1);\n      argmax_data += max_idx_.offset(0, 1);\n    }\n    // Increment ROI data pointer\n    bottom_rois += bottom[1]->offset(1);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  NOT_IMPLEMENTED;\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(ROIPoolingLayer);\n#endif\n\nINSTANTIATE_CLASS(ROIPoolingLayer);\nREGISTER_LAYER_CLASS(ROIPooling);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/roi_pooling_layer.cu",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#include <cfloat>\n\n#include \"caffe/fast_rcnn_layers.hpp\"\n\nusing std::max;\nusing std::min;\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void ROIPoolForward(const int nthreads, const Dtype* bottom_data,\n    const Dtype spatial_scale, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const Dtype* bottom_rois, Dtype* top_data, int* argmax_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // (n, c, ph, pw) is an element in the pooled output\n    int pw = index % pooled_width;\n    int ph = (index / pooled_width) % pooled_height;\n    int c = (index / pooled_width / pooled_height) % channels;\n    int n = index / pooled_width / pooled_height / channels;\n\n    bottom_rois += n * 5;\n    int roi_batch_ind = bottom_rois[0];\n    int roi_start_w = round(bottom_rois[1] * spatial_scale);\n    int roi_start_h = round(bottom_rois[2] * spatial_scale);\n    int roi_end_w = round(bottom_rois[3] * spatial_scale);\n    int roi_end_h = round(bottom_rois[4] * spatial_scale);\n\n    // Force malformed ROIs to be 1x1\n    int roi_width = max(roi_end_w - roi_start_w + 1, 1);\n    int roi_height = max(roi_end_h - roi_start_h + 1, 1);\n    Dtype bin_size_h = static_cast<Dtype>(roi_height)\n                       / static_cast<Dtype>(pooled_height);\n    Dtype bin_size_w = static_cast<Dtype>(roi_width)\n                       / static_cast<Dtype>(pooled_width);\n\n    int hstart = static_cast<int>(floor(static_cast<Dtype>(ph)\n                                        * bin_size_h));\n    int wstart = static_cast<int>(floor(static_cast<Dtype>(pw)\n                                        * 
bin_size_w));\n    int hend = static_cast<int>(ceil(static_cast<Dtype>(ph + 1)\n                                     * bin_size_h));\n    int wend = static_cast<int>(ceil(static_cast<Dtype>(pw + 1)\n                                     * bin_size_w));\n\n    // Add roi offsets and clip to input boundaries\n    hstart = min(max(hstart + roi_start_h, 0), height);\n    hend = min(max(hend + roi_start_h, 0), height);\n    wstart = min(max(wstart + roi_start_w, 0), width);\n    wend = min(max(wend + roi_start_w, 0), width);\n    bool is_empty = (hend <= hstart) || (wend <= wstart);\n\n    // Define an empty pooling region to be zero\n    Dtype maxval = is_empty ? 0 : -FLT_MAX;\n    // If nothing is pooled, argmax = -1 causes nothing to be backprop'd\n    int maxidx = -1;\n    bottom_data += (roi_batch_ind * channels + c) * height * width;\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        int bottom_index = h * width + w;\n        if (bottom_data[bottom_index] > maxval) {\n          maxval = bottom_data[bottom_index];\n          maxidx = bottom_index;\n        }\n      }\n    }\n    top_data[index] = maxval;\n    argmax_data[index] = maxidx;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  const Dtype* bottom_rois = bottom[1]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  int* argmax_data = max_idx_.mutable_gpu_data();\n  int count = top[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  ROIPoolForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, spatial_scale_, channels_, height_, width_,\n      pooled_height_, pooled_width_, bottom_rois, top_data, argmax_data);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\n__global__ void ROIPoolBackward(const int nthreads, const Dtype* 
top_diff,\n    const int* argmax_data, const int num_rois, const Dtype spatial_scale,\n    const int channels, const int height, const int width,\n    const int pooled_height, const int pooled_width, Dtype* bottom_diff,\n    const Dtype* bottom_rois) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    // (n, c, h, w) coords in bottom data\n    int w = index % width;\n    int h = (index / width) % height;\n    int c = (index / width / height) % channels;\n    int n = index / width / height / channels;\n\n    Dtype gradient = 0;\n    // Accumulate gradient over all ROIs that pooled this element\n    for (int roi_n = 0; roi_n < num_rois; ++roi_n) {\n      const Dtype* offset_bottom_rois = bottom_rois + roi_n * 5;\n      int roi_batch_ind = offset_bottom_rois[0];\n      // Skip if ROI's batch index doesn't match n\n      if (n != roi_batch_ind) {\n        continue;\n      }\n\n      int roi_start_w = round(offset_bottom_rois[1] * spatial_scale);\n      int roi_start_h = round(offset_bottom_rois[2] * spatial_scale);\n      int roi_end_w = round(offset_bottom_rois[3] * spatial_scale);\n      int roi_end_h = round(offset_bottom_rois[4] * spatial_scale);\n\n      // Skip if ROI doesn't include (h, w)\n      const bool in_roi = (w >= roi_start_w && w <= roi_end_w &&\n                           h >= roi_start_h && h <= roi_end_h);\n      if (!in_roi) {\n        continue;\n      }\n\n      int offset = (roi_n * channels + c) * pooled_height * pooled_width;\n      const Dtype* offset_top_diff = top_diff + offset;\n      const int* offset_argmax_data = argmax_data + offset;\n\n      // Compute feasible set of pooled units that could have pooled\n      // this bottom unit\n\n      // Force malformed ROIs to be 1x1\n      int roi_width = max(roi_end_w - roi_start_w + 1, 1);\n      int roi_height = max(roi_end_h - roi_start_h + 1, 1);\n\n      Dtype bin_size_h = static_cast<Dtype>(roi_height)\n                         / static_cast<Dtype>(pooled_height);\n      Dtype bin_size_w = 
static_cast<Dtype>(roi_width)\n                         / static_cast<Dtype>(pooled_width);\n\n      int phstart = floor(static_cast<Dtype>(h - roi_start_h) / bin_size_h);\n      int phend = ceil(static_cast<Dtype>(h - roi_start_h + 1) / bin_size_h);\n      int pwstart = floor(static_cast<Dtype>(w - roi_start_w) / bin_size_w);\n      int pwend = ceil(static_cast<Dtype>(w - roi_start_w + 1) / bin_size_w);\n\n      phstart = min(max(phstart, 0), pooled_height);\n      phend = min(max(phend, 0), pooled_height);\n      pwstart = min(max(pwstart, 0), pooled_width);\n      pwend = min(max(pwend, 0), pooled_width);\n\n      for (int ph = phstart; ph < phend; ++ph) {\n        for (int pw = pwstart; pw < pwend; ++pw) {\n          if (offset_argmax_data[ph * pooled_width + pw] == (h * width + w)) {\n            gradient += offset_top_diff[ph * pooled_width + pw];\n          }\n        }\n      }\n    }\n    bottom_diff[index] = gradient;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ROIPoolingLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n  const Dtype* bottom_rois = bottom[1]->gpu_data();\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  const int count = bottom[0]->count();\n  caffe_gpu_set(count, Dtype(0.), bottom_diff);\n  const int* argmax_data = max_idx_.gpu_data();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  ROIPoolBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, top_diff, argmax_data, top[0]->num(), spatial_scale_, channels_,\n      height_, width_, pooled_height_, pooled_width_, bottom_diff, bottom_rois);\n  CUDA_POST_KERNEL_CHECK;\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ROIPoolingLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/scale_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layer_factory.hpp\"\n#include \"caffe/layers/scale_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const ScaleParameter& param = this->layer_param_.scale_param();\n  if (bottom.size() == 1 && this->blobs_.size() > 0) {\n    LOG(INFO) << \"Skipping parameter initialization\";\n  } else if (bottom.size() == 1) {\n    // scale is a learned parameter; initialize it\n    axis_ = bottom[0]->CanonicalAxisIndex(param.axis());\n    const int num_axes = param.num_axes();\n    CHECK_GE(num_axes, -1) << \"num_axes must be non-negative, \"\n                           << \"or -1 to extend to the end of bottom[0]\";\n    if (num_axes >= 0) {\n      CHECK_GE(bottom[0]->num_axes(), axis_ + num_axes)\n          << \"scale blob's shape extends past bottom[0]'s shape when applied \"\n          << \"starting with bottom[0] axis = \" << axis_;\n    }\n    this->blobs_.resize(1);\n    const vector<int>::const_iterator& shape_start =\n        bottom[0]->shape().begin() + axis_;\n    const vector<int>::const_iterator& shape_end =\n        (num_axes == -1) ? 
bottom[0]->shape().end() : (shape_start + num_axes);\n    vector<int> scale_shape(shape_start, shape_end);\n    this->blobs_[0].reset(new Blob<Dtype>(scale_shape));\n    FillerParameter filler_param(param.filler());\n    if (!param.has_filler()) {\n      // Default to unit (1) filler for identity operation.\n      filler_param.set_type(\"constant\");\n      filler_param.set_value(1);\n    }\n    shared_ptr<Filler<Dtype> > filler(GetFiller<Dtype>(filler_param));\n    filler->Fill(this->blobs_[0].get());\n  }\n  if (param.bias_term()) {\n    LayerParameter layer_param(this->layer_param_);\n    layer_param.set_type(\"Bias\");\n    BiasParameter* bias_param = layer_param.mutable_bias_param();\n    bias_param->set_axis(param.axis());\n    if (bottom.size() > 1) {\n      bias_param->set_num_axes(bottom[1]->num_axes());\n    } else {\n      bias_param->set_num_axes(param.num_axes());\n    }\n    bias_param->mutable_filler()->CopyFrom(param.bias_filler());\n    bias_layer_ = LayerRegistry<Dtype>::CreateLayer(layer_param);\n    bias_bottom_vec_.resize(1);\n    bias_bottom_vec_[0] = bottom[0];\n    bias_layer_->SetUp(bias_bottom_vec_, top);\n    bias_param_id_ = this->blobs_.size();\n    this->blobs_.resize(bias_param_id_ + 1);\n    this->blobs_[bias_param_id_] = bias_layer_->blobs()[0];\n    bias_propagate_down_.resize(1, false);\n  }\n  this->param_propagate_down_.resize(this->blobs_.size(), true);\n}\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const ScaleParameter& param = this->layer_param_.scale_param();\n  Blob<Dtype>* scale = (bottom.size() > 1) ? bottom[1] : this->blobs_[0].get();\n  // Always set axis_ == 0 in special case where scale is a scalar\n  // (num_axes == 0). Mathematically equivalent for any choice of axis_, so the\n  // actual setting can be safely ignored; and computation is most efficient\n  // with axis_ == 0 and (therefore) outer_dim_ == 1. 
(Setting axis_ to\n  // bottom[0]->num_axes() - 1, giving inner_dim_ == 1, would be equally\n  // performant.)\n  axis_ = (scale->num_axes() == 0) ?\n      0 : bottom[0]->CanonicalAxisIndex(param.axis());\n  CHECK_GE(bottom[0]->num_axes(), axis_ + scale->num_axes())\n      << \"scale blob's shape extends past bottom[0]'s shape when applied \"\n      << \"starting with bottom[0] axis = \" << axis_;\n  for (int i = 0; i < scale->num_axes(); ++i) {\n    CHECK_EQ(bottom[0]->shape(axis_ + i), scale->shape(i))\n        << \"dimension mismatch between bottom[0]->shape(\" << axis_ + i\n        << \") and scale->shape(\" << i << \")\";\n  }\n  outer_dim_ = bottom[0]->count(0, axis_);\n  scale_dim_ = scale->count();\n  inner_dim_ = bottom[0]->count(axis_ + scale->num_axes());\n  if (bottom[0] == top[0]) {  // in-place computation\n    temp_.ReshapeLike(*bottom[0]);\n  } else {\n    top[0]->ReshapeLike(*bottom[0]);\n  }\n  sum_result_.Reshape(vector<int>(1, outer_dim_ * scale_dim_));\n  const int sum_mult_size = std::max(outer_dim_, inner_dim_);\n  sum_multiplier_.Reshape(vector<int>(1, sum_mult_size));\n  if (sum_multiplier_.cpu_data()[sum_mult_size - 1] != Dtype(1)) {\n    caffe_set(sum_mult_size, Dtype(1), sum_multiplier_.mutable_cpu_data());\n  }\n  if (bias_layer_) {\n    bias_bottom_vec_[0] = top[0];\n    bias_layer_->Reshape(bias_bottom_vec_, top);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  if (bottom[0] == top[0]) {\n    // In-place computation; need to store bottom data before overwriting it.\n    // Note that this is only necessary for Backward; we could skip this if not\n    // doing Backward, but Caffe currently provides no way of knowing whether\n    // we'll need to do Backward at the time of the Forward call.\n    caffe_copy(bottom[0]->count(), bottom[0]->cpu_data(),\n               
temp_.mutable_cpu_data());\n  }\n  const Dtype* scale_data =\n      ((bottom.size() > 1) ? bottom[1] : this->blobs_[0].get())->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  for (int n = 0; n < outer_dim_; ++n) {\n    for (int d = 0; d < scale_dim_; ++d) {\n      const Dtype factor = scale_data[d];\n      caffe_cpu_scale(inner_dim_, factor, bottom_data, top_data);\n      bottom_data += inner_dim_;\n      top_data += inner_dim_;\n    }\n  }\n  if (bias_layer_) {\n    bias_layer_->Forward(bias_bottom_vec_, top);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (bias_layer_ &&\n      this->param_propagate_down_[this->param_propagate_down_.size() - 1]) {\n    bias_layer_->Backward(top, bias_propagate_down_, bias_bottom_vec_);\n  }\n  const bool scale_param = (bottom.size() == 1);\n  Blob<Dtype>* scale = scale_param ? this->blobs_[0].get() : bottom[1];\n  if ((!scale_param && propagate_down[1]) ||\n      (scale_param && this->param_propagate_down_[0])) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    const bool in_place = (bottom[0] == top[0]);\n    const Dtype* bottom_data = (in_place ? &temp_ : bottom[0])->cpu_data();\n    // Hack: store big eltwise product in bottom[0] diff, except in the special\n    // case where this layer itself does the eltwise product, in which case we\n    // can store it directly in the scale diff, and we're done.\n    // If we're computing in-place (and not doing eltwise computation), this\n    // hack doesn't work and we store the product in temp_.\n    const bool is_eltwise = (bottom[0]->count() == scale->count());\n    Dtype* product = (is_eltwise ? scale->mutable_cpu_diff() :\n        (in_place ? 
temp_.mutable_cpu_data() : bottom[0]->mutable_cpu_diff()));\n    caffe_mul(top[0]->count(), top_diff, bottom_data, product);\n    if (!is_eltwise) {\n      Dtype* sum_result = NULL;\n      if (inner_dim_ == 1) {\n        sum_result = product;\n      } else if (sum_result_.count() == 1) {\n        const Dtype* sum_mult = sum_multiplier_.cpu_data();\n        Dtype* scale_diff = scale->mutable_cpu_diff();\n        if (scale_param) {\n          Dtype result = caffe_cpu_dot(inner_dim_, product, sum_mult);\n          *scale_diff += result;\n        } else {\n          *scale_diff = caffe_cpu_dot(inner_dim_, product, sum_mult);\n        }\n      } else {\n        const Dtype* sum_mult = sum_multiplier_.cpu_data();\n        sum_result = (outer_dim_ == 1) ?\n            scale->mutable_cpu_diff() : sum_result_.mutable_cpu_data();\n        caffe_cpu_gemv(CblasNoTrans, sum_result_.count(), inner_dim_,\n                       Dtype(1), product, sum_mult, Dtype(0), sum_result);\n      }\n      if (outer_dim_ != 1) {\n        const Dtype* sum_mult = sum_multiplier_.cpu_data();\n        Dtype* scale_diff = scale->mutable_cpu_diff();\n        if (scale_dim_ == 1) {\n          if (scale_param) {\n            Dtype result = caffe_cpu_dot(outer_dim_, sum_mult, sum_result);\n            *scale_diff += result;\n          } else {\n            *scale_diff = caffe_cpu_dot(outer_dim_, sum_mult, sum_result);\n          }\n        } else {\n          caffe_cpu_gemv(CblasTrans, outer_dim_, scale_dim_,\n                         Dtype(1), sum_result, sum_mult, Dtype(scale_param),\n                         scale_diff);\n        }\n      }\n    }\n  }\n  if (propagate_down[0]) {\n    const Dtype* top_diff = top[0]->cpu_diff();\n    const Dtype* scale_data = scale->cpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    for (int n = 0; n < outer_dim_; ++n) {\n      for (int d = 0; d < scale_dim_; ++d) {\n        const Dtype factor = scale_data[d];\n        
caffe_cpu_scale(inner_dim_, factor, top_diff, bottom_diff);\n        bottom_diff += inner_dim_;\n        top_diff += inner_dim_;\n      }\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(ScaleLayer);\n#endif\n\nINSTANTIATE_CLASS(ScaleLayer);\nREGISTER_LAYER_CLASS(Scale);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/scale_layer.cu",
    "content": "#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/scale_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void ScaleForward(const int n, const Dtype* in,\n    const Dtype* scale, const int scale_dim, const int inner_dim,\n    Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const int scale_index = (index / inner_dim) % scale_dim;\n    out[index] = in[index] * scale[scale_index];\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void ScaleBiasForward(const int n, const Dtype* in,\n    const Dtype* scale, const Dtype* bias,\n    const int scale_dim, const int inner_dim, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const int scale_index = (index / inner_dim) % scale_dim;\n    out[index] = in[index] * scale[scale_index] + bias[scale_index];\n  }\n}\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const int count = top[0]->count();\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n \n  if (bottom[0] == top[0]) {\n    // in-place computation; need to store bottom data before overwriting it.\n    // Note that this is only necessary for Backward; we could skip this if not\n    // doing Backward, but Caffe currently provides no way of knowing whether\n    // we'll need to do Backward at the time of the Forward call.\n    caffe_copy(bottom[0]->count(), bottom[0]->gpu_data(),\n               temp_.mutable_gpu_data());\n  }\n  const Dtype* scale_data =\n      ((bottom.size() > 1) ? 
bottom[1] : this->blobs_[0].get())->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  if (bias_layer_) {\n    const Dtype* bias_data = this->blobs_[bias_param_id_]->gpu_data();\n    ScaleBiasForward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, bottom_data, scale_data, bias_data, scale_dim_, inner_dim_,\n        top_data);\n  } else {\n    ScaleForward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, bottom_data, scale_data, scale_dim_, inner_dim_, top_data);\n  }\n}\n\ntemplate <typename Dtype>\nvoid ScaleLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (bias_layer_ &&\n      this->param_propagate_down_[this->param_propagate_down_.size() - 1]) {\n    bias_layer_->Backward(top, bias_propagate_down_, bias_bottom_vec_);\n  }\n  const bool scale_param = (bottom.size() == 1);\n  Blob<Dtype>* scale = scale_param ? this->blobs_[0].get() : bottom[1];\n  if ((!scale_param && propagate_down[1]) ||\n      (scale_param && this->param_propagate_down_[0])) {\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const bool in_place = (bottom[0] == top[0]);\n    const Dtype* bottom_data = (in_place ? &temp_ : bottom[0])->gpu_data();\n    // Hack: store big eltwise product in bottom[0] diff, except in the special\n    // case where this layer itself does the eltwise product, in which case we\n    // can store it directly in the scale diff, and we're done.\n    // If we're computing in-place (and not doing eltwise computation), this\n    // hack doesn't work and we store the product in temp_.\n    const bool is_eltwise = (bottom[0]->count() == scale->count());\n    Dtype* product = (is_eltwise ? scale->mutable_gpu_diff() :\n        (in_place ? 
temp_.mutable_gpu_data() : bottom[0]->mutable_gpu_diff()));\n    caffe_gpu_mul(top[0]->count(), top_diff, bottom_data, product);\n    if (!is_eltwise) {\n      Dtype* sum_result = NULL;\n      if (inner_dim_ == 1) {\n        sum_result = product;\n      } else if (sum_result_.count() == 1) {\n        const Dtype* sum_mult = sum_multiplier_.gpu_data();\n        Dtype* scale_diff = scale->mutable_cpu_diff();\n        if (scale_param) {\n          Dtype result;\n          caffe_gpu_dot(inner_dim_, product, sum_mult, &result);\n          *scale_diff += result;\n        } else {\n          caffe_gpu_dot(inner_dim_, product, sum_mult, scale_diff);\n        }\n      } else {\n        const Dtype* sum_mult = sum_multiplier_.gpu_data();\n        sum_result = (outer_dim_ == 1) ?\n            scale->mutable_gpu_diff() : sum_result_.mutable_gpu_data();\n        caffe_gpu_gemv(CblasNoTrans, sum_result_.count(), inner_dim_,\n                       Dtype(1), product, sum_mult, Dtype(0), sum_result);\n      }\n      if (outer_dim_ != 1) {\n        const Dtype* sum_mult = sum_multiplier_.gpu_data();\n        if (scale_dim_ == 1) {\n          Dtype* scale_diff = scale->mutable_cpu_diff();\n          if (scale_param) {\n            Dtype result;\n            caffe_gpu_dot(outer_dim_, sum_mult, sum_result, &result);\n            *scale_diff += result;\n          } else {\n            caffe_gpu_dot(outer_dim_, sum_mult, sum_result, scale_diff);\n          }\n        } else {\n          Dtype* scale_diff = scale->mutable_gpu_diff();\n          caffe_gpu_gemv(CblasTrans, outer_dim_, scale_dim_,\n                         Dtype(1), sum_result, sum_mult, Dtype(scale_param),\n                         scale_diff);\n        }\n      }\n    }\n  }\n  if (propagate_down[0]) {\n    const int count = top[0]->count();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    const Dtype* scale_data = scale->gpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    ScaleForward<Dtype>  
// NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, scale_data, scale_dim_, inner_dim_, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(ScaleLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/sigmoid_cross_entropy_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SigmoidCrossEntropyLossLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::LayerSetUp(bottom, top);\n  sigmoid_bottom_vec_.clear();\n  sigmoid_bottom_vec_.push_back(bottom[0]);\n  sigmoid_top_vec_.clear();\n  sigmoid_top_vec_.push_back(sigmoid_output_.get());\n  sigmoid_layer_->SetUp(sigmoid_bottom_vec_, sigmoid_top_vec_);\n}\n\ntemplate <typename Dtype>\nvoid SigmoidCrossEntropyLossLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  CHECK_EQ(bottom[0]->count(), bottom[1]->count()) <<\n      \"SIGMOID_CROSS_ENTROPY_LOSS layer inputs must have the same count.\";\n  sigmoid_layer_->Reshape(sigmoid_bottom_vec_, sigmoid_top_vec_);\n}\n\ntemplate <typename Dtype>\nvoid SigmoidCrossEntropyLossLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  // The forward pass computes the sigmoid outputs.\n  sigmoid_bottom_vec_[0] = bottom[0];\n  sigmoid_layer_->Forward(sigmoid_bottom_vec_, sigmoid_top_vec_);\n  // Compute the loss (negative log likelihood)\n  const int count = bottom[0]->count();\n  const int num = bottom[0]->num();\n  // Stable version of loss computation from input data\n  const Dtype* input_data = bottom[0]->cpu_data();\n  const Dtype* target = bottom[1]->cpu_data();\n  Dtype loss = 0;\n  for (int i = 0; i < count; ++i) {\n    loss -= input_data[i] * (target[i] - (input_data[i] >= 0)) -\n        log(1 + exp(input_data[i] - 2 * input_data[i] * (input_data[i] >= 0)));\n  }\n  top[0]->mutable_cpu_data()[0] = loss / num;\n}\n\ntemplate <typename Dtype>\nvoid SigmoidCrossEntropyLossLayer<Dtype>::Backward_cpu(\n    const vector<Blob<Dtype>*>& top, const 
vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    // First, compute the diff\n    const int count = bottom[0]->count();\n    const int num = bottom[0]->num();\n    const Dtype* sigmoid_output_data = sigmoid_output_->cpu_data();\n    const Dtype* target = bottom[1]->cpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    caffe_sub(count, sigmoid_output_data, target, bottom_diff);\n    // Scale down gradient\n    const Dtype loss_weight = top[0]->cpu_diff()[0];\n    caffe_scal(count, loss_weight / num, bottom_diff);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU_BACKWARD(SigmoidCrossEntropyLossLayer, Backward);\n#endif\n\nINSTANTIATE_CLASS(SigmoidCrossEntropyLossLayer);\nREGISTER_LAYER_CLASS(SigmoidCrossEntropyLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/sigmoid_cross_entropy_loss_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/sigmoid_cross_entropy_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SigmoidCrossEntropyLossLayer<Dtype>::Backward_gpu(\n    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    // First, compute the diff\n    const int count = bottom[0]->count();\n    const int num = bottom[0]->num();\n    const Dtype* sigmoid_output_data = sigmoid_output_->gpu_data();\n    const Dtype* target = bottom[1]->gpu_data();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    caffe_copy(count, sigmoid_output_data, bottom_diff);\n    caffe_gpu_axpy(count, Dtype(-1), target, bottom_diff);\n    // Scale down gradient\n    const Dtype loss_weight = top[0]->cpu_diff()[0];\n    caffe_gpu_scal(count, loss_weight / num, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_BACKWARD(SigmoidCrossEntropyLossLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/sigmoid_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"caffe/layers/sigmoid_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\ninline Dtype sigmoid(Dtype x) {\n  return 1. / (1. + exp(-x));\n}\n\ntemplate <typename Dtype>\nvoid SigmoidLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = sigmoid(bottom_data[i]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SigmoidLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_data = top[0]->cpu_data();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    for (int i = 0; i < count; ++i) {\n      const Dtype sigmoid_x = top_data[i];\n      bottom_diff[i] = top_diff[i] * sigmoid_x * (1. - sigmoid_x);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(SigmoidLayer);\n#endif\n\nINSTANTIATE_CLASS(SigmoidLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/sigmoid_layer.cu",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"caffe/layers/sigmoid_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void SigmoidForward(const int n, const Dtype* in, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = 1. / (1. + exp(-in[index]));\n  }\n}\n\ntemplate <typename Dtype>\nvoid SigmoidLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  SigmoidForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, top_data);\n  CUDA_POST_KERNEL_CHECK;\n  // << \" count: \" << count << \" bottom_data: \"\n  //     << (unsigned long)bottom_data\n  //     << \" top_data: \" << (unsigned long)top_data\n  //     << \" blocks: \" << CAFFE_GET_BLOCKS(count)\n  //     << \" threads: \" << CAFFE_CUDA_NUM_THREADS;\n}\n\ntemplate <typename Dtype>\n__global__ void SigmoidBackward(const int n, const Dtype* in_diff,\n    const Dtype* out_data, Dtype* out_diff) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const Dtype sigmoid_x = out_data[index];\n    out_diff[index] = in_diff[index] * sigmoid_x * (1 - sigmoid_x);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SigmoidLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_data = top[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    SigmoidBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, top_data, bottom_diff);\n    CUDA_POST_KERNEL_CHECK;\n  
}\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SigmoidLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/silence_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/silence_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SilenceLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  for (int i = 0; i < bottom.size(); ++i) {\n    if (propagate_down[i]) {\n      caffe_set(bottom[i]->count(), Dtype(0),\n                bottom[i]->mutable_cpu_diff());\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(SilenceLayer);\n#endif\n\nINSTANTIATE_CLASS(SilenceLayer);\nREGISTER_LAYER_CLASS(Silence);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/silence_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/silence_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SilenceLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // Do nothing.\n}\n\ntemplate <typename Dtype>\nvoid SilenceLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  for (int i = 0; i < bottom.size(); ++i) {\n    if (propagate_down[i]) {\n      caffe_gpu_set(bottom[i]->count(), Dtype(0),\n                    bottom[i]->mutable_gpu_diff());\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SilenceLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/slice_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/slice_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const SliceParameter& slice_param = this->layer_param_.slice_param();\n  CHECK(!(slice_param.has_axis() && slice_param.has_slice_dim()))\n      << \"Either axis or slice_dim should be specified; not both.\";\n  slice_point_.clear();\n  std::copy(slice_param.slice_point().begin(),\n      slice_param.slice_point().end(),\n      std::back_inserter(slice_point_));\n}\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  const int num_axes = bottom[0]->num_axes();\n  const SliceParameter& slice_param = this->layer_param_.slice_param();\n  if (slice_param.has_slice_dim()) {\n    slice_axis_ = static_cast<int>(slice_param.slice_dim());\n    // Don't allow negative indexing for slice_dim, a uint32 -- almost\n    // certainly unintended.\n    CHECK_GE(slice_axis_, 0) << \"casting slice_dim from uint32 to int32 \"\n        << \"produced negative result; slice_dim must satisfy \"\n        << \"0 <= slice_dim < \" << kMaxBlobAxes;\n    CHECK_LT(slice_axis_, num_axes) << \"slice_dim out of range.\";\n  } else {\n    slice_axis_ = bottom[0]->CanonicalAxisIndex(slice_param.axis());\n  }\n  vector<int> top_shape = bottom[0]->shape();\n  const int bottom_slice_axis = bottom[0]->shape(slice_axis_);\n  num_slices_ = bottom[0]->count(0, slice_axis_);\n  slice_size_ = bottom[0]->count(slice_axis_ + 1);\n  int count = 0;\n  if (slice_point_.size() != 0) {\n    CHECK_EQ(slice_point_.size(), top.size() - 1);\n    CHECK_LE(top.size(), bottom_slice_axis);\n    int prev = 0;\n    vector<int> slices;\n    for (int i = 0; i < slice_point_.size(); ++i) {\n      CHECK_GT(slice_point_[i], prev);\n      
slices.push_back(slice_point_[i] - prev);\n      prev = slice_point_[i];\n    }\n    slices.push_back(bottom_slice_axis - prev);\n    for (int i = 0; i < top.size(); ++i) {\n      top_shape[slice_axis_] = slices[i];\n      top[i]->Reshape(top_shape);\n      count += top[i]->count();\n    }\n  } else {\n    CHECK_EQ(bottom_slice_axis % top.size(), 0)\n        << \"Number of top blobs (\" << top.size() << \") should evenly \"\n        << \"divide input slice axis (\" << bottom_slice_axis << \")\";\n    top_shape[slice_axis_] = bottom_slice_axis / top.size();\n    for (int i = 0; i < top.size(); ++i) {\n      top[i]->Reshape(top_shape);\n      count += top[i]->count();\n    }\n  }\n  CHECK_EQ(count, bottom[0]->count());\n  if (top.size() == 1) {\n    top[0]->ShareData(*bottom[0]);\n    top[0]->ShareDiff(*bottom[0]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (top.size() == 1) { return; }\n  int offset_slice_axis = 0;\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  const int bottom_slice_axis = bottom[0]->shape(slice_axis_);\n  for (int i = 0; i < top.size(); ++i) {\n    Dtype* top_data = top[i]->mutable_cpu_data();\n    const int top_slice_axis = top[i]->shape(slice_axis_);\n    for (int n = 0; n < num_slices_; ++n) {\n      const int top_offset = n * top_slice_axis * slice_size_;\n      const int bottom_offset =\n          (n * bottom_slice_axis + offset_slice_axis) * slice_size_;\n      caffe_copy(top_slice_axis * slice_size_,\n          bottom_data + bottom_offset, top_data + top_offset);\n    }\n    offset_slice_axis += top_slice_axis;\n  }\n}\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0] || top.size() == 1) { return; }\n  int offset_slice_axis = 0;\n  Dtype* bottom_diff = 
bottom[0]->mutable_cpu_diff();\n  const int bottom_slice_axis = bottom[0]->shape(slice_axis_);\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->cpu_diff();\n    const int top_slice_axis = top[i]->shape(slice_axis_);\n    for (int n = 0; n < num_slices_; ++n) {\n      const int top_offset = n * top_slice_axis * slice_size_;\n      const int bottom_offset =\n          (n * bottom_slice_axis + offset_slice_axis) * slice_size_;\n      caffe_copy(top_slice_axis * slice_size_,\n          top_diff + top_offset, bottom_diff + bottom_offset);\n    }\n    offset_slice_axis += top_slice_axis;\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(SliceLayer);\n#endif\n\nINSTANTIATE_CLASS(SliceLayer);\nREGISTER_LAYER_CLASS(Slice);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/slice_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/slice_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void Slice(const int nthreads, const Dtype* in_data,\n    const bool forward, const int num_slices, const int slice_size,\n    const int bottom_slice_axis, const int top_slice_axis,\n    const int offset_slice_axis, Dtype* out_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int total_slice_size = slice_size * top_slice_axis;\n    const int slice_num = index / total_slice_size;\n    const int slice_index = index % total_slice_size;\n    const int bottom_index = slice_index +\n        (slice_num * bottom_slice_axis + offset_slice_axis) * slice_size;\n    if (forward) {\n      out_data[index] = in_data[bottom_index];\n    } else {\n      out_data[bottom_index] = in_data[index];\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (top.size() == 1) { return; }\n  int offset_slice_axis = 0;\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  const int bottom_slice_axis = bottom[0]->shape(slice_axis_);\n  const bool kForward = true;\n  for (int i = 0; i < top.size(); ++i) {\n    Dtype* top_data = top[i]->mutable_gpu_data();\n    const int top_slice_axis = top[i]->shape(slice_axis_);\n    const int top_slice_size = top_slice_axis * slice_size_;\n    const int nthreads = top_slice_size * num_slices_;\n    Slice<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n        nthreads, bottom_data, kForward, num_slices_, slice_size_,\n        bottom_slice_axis, top_slice_axis, offset_slice_axis, top_data);\n    offset_slice_axis += top_slice_axis;\n  }\n}\n\ntemplate <typename Dtype>\nvoid SliceLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const 
vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0] || top.size() == 1) { return; }\n  int offset_slice_axis = 0;\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  const int bottom_slice_axis = bottom[0]->shape(slice_axis_);\n  const bool kForward = false;\n  for (int i = 0; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->gpu_diff();\n    const int top_slice_axis = top[i]->shape(slice_axis_);\n    const int top_slice_size = top_slice_axis * slice_size_;\n    const int nthreads = top_slice_size * num_slices_;\n    Slice<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n        nthreads, top_diff, kForward, num_slices_, slice_size_,\n        bottom_slice_axis, top_slice_axis, offset_slice_axis, bottom_diff);\n    offset_slice_axis += top_slice_axis;\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SliceLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/smooth_L1_loss_layer.cpp",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#include \"caffe/fast_rcnn_layers.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::LayerSetUp(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  SmoothL1LossParameter loss_param = this->layer_param_.smooth_l1_loss_param();\n  sigma2_ = loss_param.sigma() * loss_param.sigma();\n  has_weights_ = (bottom.size() >= 3);\n  if (has_weights_) {\n    CHECK_EQ(bottom.size(), 4) << \"If weights are used, must specify both \"\n      \"inside and outside weights\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::Reshape(\n  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  //CHECK_EQ(bottom[0]->channels(), bottom[1]->channels());\n  CHECK_EQ(bottom[0]->height(), bottom[1]->height());\n  CHECK_EQ(bottom[0]->width(), bottom[1]->width());\n  if (has_weights_) {\n    //CHECK_EQ(bottom[0]->channels(), bottom[2]->channels());\n    CHECK_EQ(bottom[0]->height(), bottom[2]->height());\n    CHECK_EQ(bottom[0]->width(), bottom[2]->width());\n   // CHECK_EQ(bottom[0]->channels(), bottom[3]->channels());\n    CHECK_EQ(bottom[0]->height(), bottom[3]->height());\n    CHECK_EQ(bottom[0]->width(), bottom[3]->width());\n  }\n  diff_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n  errors_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n  // vector of ones used to sum\n  ones_.Reshape(bottom[0]->num(), bottom[0]->channels(),\n      bottom[0]->height(), bottom[0]->width());\n  for (int i = 0; i < bottom[0]->count(); ++i) {\n    
ones_.mutable_cpu_data()[i] = Dtype(1);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  NOT_IMPLEMENTED;\n}\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  NOT_IMPLEMENTED;\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(SmoothL1LossLayer);\n#endif\n\nINSTANTIATE_CLASS(SmoothL1LossLayer);\nREGISTER_LAYER_CLASS(SmoothL1Loss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/smooth_L1_loss_layer.cu",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#include \"caffe/fast_rcnn_layers.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void SmoothL1Forward(const int n, const Dtype* in, Dtype* out,\n    Dtype sigma2) {\n  // f(x) = 0.5 * (sigma * x)^2          if |x| < 1 / sigma / sigma\n  //        |x| - 0.5 / sigma / sigma    otherwise\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype val = in[index];\n    Dtype abs_val = abs(val);\n    if (abs_val < 1.0 / sigma2) {\n      out[index] = 0.5 * val * val * sigma2;\n    } else {\n      out[index] = abs_val - 0.5 / sigma2;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  int count = bottom[0]->count();\n  caffe_gpu_sub(\n      count,\n      bottom[0]->gpu_data(),\n      bottom[1]->gpu_data(),\n      diff_.mutable_gpu_data());    // d := b0 - b1\n  if (has_weights_) {\n    // apply \"inside\" weights\n    caffe_gpu_mul(\n        count,\n        bottom[2]->gpu_data(),\n        diff_.gpu_data(),\n        diff_.mutable_gpu_data());  // d := w_in * (b0 - b1)\n  }\n  SmoothL1Forward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, diff_.gpu_data(), errors_.mutable_gpu_data(), sigma2_);\n  CUDA_POST_KERNEL_CHECK;\n\n  if (has_weights_) {\n    // apply \"outside\" weights\n    caffe_gpu_mul(\n        count,\n        bottom[3]->gpu_data(),\n        errors_.gpu_data(),\n        errors_.mutable_gpu_data());  // d := w_out * SmoothL1(w_in * (b0 - b1))\n  }\n\n  Dtype loss;\n  caffe_gpu_dot(count, ones_.gpu_data(), errors_.gpu_data(), &loss);\n  top[0]->mutable_cpu_data()[0] = loss / bottom[0]->num();\n}\n\ntemplate <typename 
Dtype>\n__global__ void SmoothL1Backward(const int n, const Dtype* in, Dtype* out,\n    Dtype sigma2) {\n  // f'(x) = sigma * sigma * x         if |x| < 1 / sigma / sigma\n  //       = sign(x)                   otherwise\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype val = in[index];\n    Dtype abs_val = abs(val);\n    if (abs_val < 1.0 / sigma2) {\n      out[index] = sigma2 * val;\n    } else {\n      out[index] = (Dtype(0) < val) - (val < Dtype(0));\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SmoothL1LossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  // after forwards, diff_ holds w_in * (b0 - b1)\n  int count = diff_.count();\n  SmoothL1Backward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, diff_.gpu_data(), diff_.mutable_gpu_data(), sigma2_);\n  CUDA_POST_KERNEL_CHECK;\n  for (int i = 0; i < 2; ++i) {\n    if (propagate_down[i]) {\n      const Dtype sign = (i == 0) ? 1 : -1;\n      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();\n      caffe_gpu_axpby(\n          count,                           // count\n          alpha,                           // alpha\n          diff_.gpu_data(),                // x\n          Dtype(0),                        // beta\n          bottom[i]->mutable_gpu_diff());  // y\n      if (has_weights_) {\n        // Scale by \"inside\" weight\n        caffe_gpu_mul(\n            count,\n            bottom[2]->gpu_data(),\n            bottom[i]->gpu_diff(),\n            bottom[i]->mutable_gpu_diff());\n        // Scale by \"outside\" weight\n        caffe_gpu_mul(\n            count,\n            bottom[3]->gpu_data(),\n            bottom[i]->gpu_diff(),\n            bottom[i]->mutable_gpu_diff());\n      }\n    }\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SmoothL1LossLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/softmax_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layers/softmax_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SoftmaxLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  softmax_axis_ =\n      bottom[0]->CanonicalAxisIndex(this->layer_param_.softmax_param().axis());\n  top[0]->ReshapeLike(*bottom[0]);\n  vector<int> mult_dims(1, bottom[0]->shape(softmax_axis_));\n  sum_multiplier_.Reshape(mult_dims);\n  Dtype* multiplier_data = sum_multiplier_.mutable_cpu_data();\n  caffe_set(sum_multiplier_.count(), Dtype(1), multiplier_data);\n  outer_num_ = bottom[0]->count(0, softmax_axis_);\n  inner_num_ = bottom[0]->count(softmax_axis_ + 1);\n  vector<int> scale_dims = bottom[0]->shape();\n  scale_dims[softmax_axis_] = 1;\n  scale_.Reshape(scale_dims);\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  Dtype* scale_data = scale_.mutable_cpu_data();\n  int channels = bottom[0]->shape(softmax_axis_);\n  int dim = bottom[0]->count() / outer_num_;\n  caffe_copy(bottom[0]->count(), bottom_data, top_data);\n  // We need to subtract the max to avoid numerical issues, compute the exp,\n  // and then normalize.\n  for (int i = 0; i < outer_num_; ++i) {\n    // initialize scale_data to the first plane\n    caffe_copy(inner_num_, bottom_data + i * dim, scale_data);\n    for (int j = 0; j < channels; j++) {\n      for (int k = 0; k < inner_num_; k++) {\n        scale_data[k] = std::max(scale_data[k],\n            bottom_data[i * dim + j * inner_num_ + k]);\n      }\n    }\n    // subtraction\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels, inner_num_,\n        1, -1., sum_multiplier_.cpu_data(), scale_data, 1., top_data);\n    // 
exponentiation\n    caffe_exp<Dtype>(dim, top_data, top_data);\n    // sum after exp\n    caffe_cpu_gemv<Dtype>(CblasTrans, channels, inner_num_, 1.,\n        top_data, sum_multiplier_.cpu_data(), 0., scale_data);\n    // division\n    for (int j = 0; j < channels; j++) {\n      caffe_div(inner_num_, top_data, scale_data, top_data);\n      top_data += inner_num_;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->cpu_diff();\n  const Dtype* top_data = top[0]->cpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  Dtype* scale_data = scale_.mutable_cpu_data();\n  int channels = top[0]->shape(softmax_axis_);\n  int dim = top[0]->count() / outer_num_;\n  caffe_copy(top[0]->count(), top_diff, bottom_diff);\n  for (int i = 0; i < outer_num_; ++i) {\n    // compute dot(top_diff, top_data) and subtract them from the bottom diff\n    for (int k = 0; k < inner_num_; ++k) {\n      scale_data[k] = caffe_cpu_strided_dot<Dtype>(channels,\n          bottom_diff + i * dim + k, inner_num_,\n          top_data + i * dim + k, inner_num_);\n    }\n    // subtraction\n    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, channels, inner_num_, 1,\n        -1., sum_multiplier_.cpu_data(), scale_data, 1., bottom_diff + i * dim);\n  }\n  // elementwise multiplication\n  caffe_mul(top[0]->count(), bottom_diff, top_data, bottom_diff);\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(SoftmaxLayer);\n#endif\n\nINSTANTIATE_CLASS(SoftmaxLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/softmax_layer.cu",
    "content": "#include <algorithm>\n#include <cfloat>\n#include <vector>\n\n#include \"thrust/device_vector.h\"\n\n#include \"caffe/layers/softmax_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void kernel_channel_max(const int num, const int channels,\n    const int spatial_dim, const Dtype* data, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, num * spatial_dim) {\n    int n = index / spatial_dim;\n    int s = index % spatial_dim;\n    Dtype maxval = -FLT_MAX;\n    for (int c = 0; c < channels; ++c) {\n      maxval = max(data[(n * channels + c) * spatial_dim + s], maxval);\n    }\n    out[index] = maxval;\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void kernel_channel_subtract(const int count,\n    const int num, const int channels,\n    const int spatial_dim, const Dtype* channel_max, Dtype* data) {\n  CUDA_KERNEL_LOOP(index, count) {\n    int n = index / channels / spatial_dim;\n    int s = index % spatial_dim;\n    data[index] -= channel_max[n * spatial_dim + s];\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void kernel_exp(const int count, const Dtype* data, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, count) {\n    out[index] = exp(data[index]);\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void kernel_channel_sum(const int num, const int channels,\n    const int spatial_dim, const Dtype* data, Dtype* channel_sum) {\n  CUDA_KERNEL_LOOP(index, num * spatial_dim) {\n    int n = index / spatial_dim;\n    int s = index % spatial_dim;\n    Dtype sum = 0;\n    for (int c = 0; c < channels; ++c) {\n      sum += data[(n * channels + c) * spatial_dim + s];\n    }\n    channel_sum[index] = sum;\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void kernel_channel_div(const int count,\n    const int num, const int channels,\n    const int spatial_dim, const Dtype* channel_sum, Dtype* data) {\n  CUDA_KERNEL_LOOP(index, count) {\n    int n = index / channels / spatial_dim;\n    int s = index % 
spatial_dim;\n    data[index] /= channel_sum[n * spatial_dim + s];\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void kernel_channel_dot(const int num, const int channels,\n    const int spatial_dim, const Dtype* data_1, const Dtype* data_2,\n    Dtype* channel_dot) {\n  CUDA_KERNEL_LOOP(index, num * spatial_dim) {\n    int n = index / spatial_dim;\n    int s = index % spatial_dim;\n    Dtype dot = 0;\n    for (int c = 0; c < channels; ++c) {\n      dot += (data_1[(n * channels + c) * spatial_dim + s]\n          * data_2[(n * channels + c) * spatial_dim + s]);\n    }\n    channel_dot[index] = dot;\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  Dtype* scale_data = scale_.mutable_gpu_data();\n  int count = bottom[0]->count();\n  int channels = top[0]->shape(softmax_axis_);\n  caffe_copy(count, bottom_data, top_data);\n  // We need to subtract the max to avoid numerical issues, compute the exp,\n  // and then normalize.\n  // compute max\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_max<Dtype><<<CAFFE_GET_BLOCKS(outer_num_ * inner_num_),\n      CAFFE_CUDA_NUM_THREADS>>>(outer_num_, channels, inner_num_, top_data,\n      scale_data);\n  // subtract\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_subtract<Dtype><<<CAFFE_GET_BLOCKS(count),\n      CAFFE_CUDA_NUM_THREADS>>>(count, outer_num_, channels, inner_num_,\n      scale_data, top_data);\n  // exponentiate\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_exp<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, top_data, top_data);\n  // sum after exp\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_sum<Dtype><<<CAFFE_GET_BLOCKS(outer_num_ * inner_num_),\n      CAFFE_CUDA_NUM_THREADS>>>(outer_num_, channels, inner_num_, top_data,\n      
scale_data);\n  // divide\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_div<Dtype><<<CAFFE_GET_BLOCKS(count),\n      CAFFE_CUDA_NUM_THREADS>>>(count, outer_num_, channels, inner_num_,\n      scale_data, top_data);\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  const Dtype* top_diff = top[0]->gpu_diff();\n  const Dtype* top_data = top[0]->gpu_data();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  Dtype* scale_data = scale_.mutable_gpu_data();\n  int count = top[0]->count();\n  int channels = top[0]->shape(softmax_axis_);\n  caffe_copy(count, top_diff, bottom_diff);\n  // Compute inner1d(top_diff, top_data) and subtract them from the bottom diff.\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_dot<Dtype><<<CAFFE_GET_BLOCKS(outer_num_ * inner_num_),\n      CAFFE_CUDA_NUM_THREADS>>>(outer_num_, channels, inner_num_,\n      top_diff, top_data, scale_data);\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  kernel_channel_subtract<Dtype><<<CAFFE_GET_BLOCKS(count),\n      CAFFE_CUDA_NUM_THREADS>>>(count, outer_num_, channels, inner_num_,\n      scale_data, bottom_diff);\n  // elementwise multiplication\n  caffe_gpu_mul<Dtype>(top[0]->count(), bottom_diff, top_data, bottom_diff);\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SoftmaxLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/softmax_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <cfloat>\n#include <vector>\n\n#include \"caffe/layers/softmax_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::LayerSetUp(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::LayerSetUp(bottom, top);\n  LayerParameter softmax_param(this->layer_param_);\n  softmax_param.set_type(\"Softmax\");\n  softmax_layer_ = LayerRegistry<Dtype>::CreateLayer(softmax_param);\n  softmax_bottom_vec_.clear();\n  softmax_bottom_vec_.push_back(bottom[0]);\n  softmax_top_vec_.clear();\n  softmax_top_vec_.push_back(&prob_);\n  softmax_layer_->SetUp(softmax_bottom_vec_, softmax_top_vec_);\n\n  has_ignore_label_ =\n    this->layer_param_.loss_param().has_ignore_label();\n  if (has_ignore_label_) {\n    ignore_label_ = this->layer_param_.loss_param().ignore_label();\n  }\n  if (!this->layer_param_.loss_param().has_normalization() &&\n      this->layer_param_.loss_param().has_normalize()) {\n    normalization_ = this->layer_param_.loss_param().normalize() ?\n                     LossParameter_NormalizationMode_VALID :\n                     LossParameter_NormalizationMode_BATCH_SIZE;\n  } else {\n    normalization_ = this->layer_param_.loss_param().normalization();\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  LossLayer<Dtype>::Reshape(bottom, top);\n  softmax_layer_->Reshape(softmax_bottom_vec_, softmax_top_vec_);\n  softmax_axis_ =\n      bottom[0]->CanonicalAxisIndex(this->layer_param_.softmax_param().axis());\n  outer_num_ = bottom[0]->count(0, softmax_axis_);\n  inner_num_ = bottom[0]->count(softmax_axis_ + 1);\n // CHECK_EQ(outer_num_ * inner_num_, bottom[1]->count())\n  //    << \"Number of labels must match number of predictions; \"\n //     << \"e.g., if softmax axis == 1 and 
prediction shape is (N, C, H, W), \"\n //     << \"label count (number of labels) must be N*H*W, \"\n //     << \"with integer values in {0, 1, ..., C-1}.\";\n  if (top.size() >= 2) {\n    // softmax output\n    top[1]->ReshapeLike(*bottom[0]);\n  }\n}\n\ntemplate <typename Dtype>\nDtype SoftmaxWithLossLayer<Dtype>::get_normalizer(\n    LossParameter_NormalizationMode normalization_mode, int valid_count) {\n  Dtype normalizer;\n  switch (normalization_mode) {\n    case LossParameter_NormalizationMode_FULL:\n      normalizer = Dtype(outer_num_ * inner_num_);\n      break;\n    case LossParameter_NormalizationMode_VALID:\n      if (valid_count == -1) {\n        normalizer = Dtype(outer_num_ * inner_num_);\n      } else {\n        normalizer = Dtype(valid_count);\n      }\n      break;\n    case LossParameter_NormalizationMode_BATCH_SIZE:\n      normalizer = Dtype(outer_num_);\n      break;\n    case LossParameter_NormalizationMode_NONE:\n      normalizer = Dtype(1);\n      break;\n    default:\n      LOG(FATAL) << \"Unknown normalization mode: \"\n          << LossParameter_NormalizationMode_Name(normalization_mode);\n  }\n  // Some users will have no labels for some examples in order to 'turn off' a\n  // particular loss in a multi-task setup. 
The max prevents NaNs in that case.\n  return std::max(Dtype(1.0), normalizer);\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  // The forward pass computes the softmax prob values.\n  softmax_layer_->Forward(softmax_bottom_vec_, softmax_top_vec_);\n  const Dtype* prob_data = prob_.cpu_data();\n  const Dtype* label = bottom[1]->cpu_data();\n  int dim = prob_.count() / outer_num_;\n  int count = 0;\n  Dtype loss = 0;\n  for (int i = 0; i < outer_num_; ++i) {\n    for (int j = 0; j < inner_num_; j++) {\n      const int label_value = static_cast<int>(label[i * inner_num_ + j]);\n      if (has_ignore_label_ && label_value == ignore_label_) {\n        continue;\n      }\n      DCHECK_GE(label_value, 0);\n      DCHECK_LT(label_value, prob_.shape(softmax_axis_));\n      loss -= log(std::max(prob_data[i * dim + label_value * inner_num_ + j],\n                           Dtype(FLT_MIN)));\n      ++count;\n    }\n  }\n  top[0]->mutable_cpu_data()[0] = loss / get_normalizer(normalization_, count);\n  if (top.size() == 2) {\n    top[1]->ShareData(prob_);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const Dtype* prob_data = prob_.cpu_data();\n    caffe_copy(prob_.count(), prob_data, bottom_diff);\n    const Dtype* label = bottom[1]->cpu_data();\n    int dim = prob_.count() / outer_num_;\n    int count = 0;\n    for (int i = 0; i < outer_num_; ++i) {\n      for (int j = 0; j < inner_num_; ++j) {\n        const int label_value = static_cast<int>(label[i * inner_num_ + j]);\n        if (has_ignore_label_ 
&& label_value == ignore_label_) {\n          for (int c = 0; c < bottom[0]->shape(softmax_axis_); ++c) {\n            bottom_diff[i * dim + c * inner_num_ + j] = 0;\n          }\n        } else {\n          bottom_diff[i * dim + label_value * inner_num_ + j] -= 1;\n          ++count;\n        }\n      }\n    }\n    // Scale gradient\n    Dtype loss_weight = top[0]->cpu_diff()[0] /\n                        get_normalizer(normalization_, count);\n    caffe_scal(prob_.count(), loss_weight, bottom_diff);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(SoftmaxWithLossLayer);\n#endif\n\nINSTANTIATE_CLASS(SoftmaxWithLossLayer);\nREGISTER_LAYER_CLASS(SoftmaxWithLoss);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/softmax_loss_layer.cu",
    "content": "#include <algorithm>\n#include <cfloat>\n#include <vector>\n#include <iostream>\n#include \"caffe/layers/softmax_loss_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void SoftmaxLossForwardGPU(const int nthreads,\n          const Dtype* prob_data, const Dtype* label, Dtype* loss,\n          const int num, const int dim, const int spatial_dim,\n          const bool has_ignore_label_, const int ignore_label_,\n          Dtype* counts) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int n = index / spatial_dim;\n    const int s = index % spatial_dim;\n    const int label_value = static_cast<int>(label[n * spatial_dim + s]);\n    if (has_ignore_label_ && label_value == ignore_label_) {\n      loss[index] = 0;\n      counts[index] = 0;\n    } else {\n      loss[index] = -log(max(prob_data[n * dim + label_value * spatial_dim + s],\n                      Dtype(FLT_MIN)));\n      counts[index] = 1;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  softmax_layer_->Forward(softmax_bottom_vec_, softmax_top_vec_);\n  const Dtype* prob_data = prob_.gpu_data();\n  const Dtype* label = bottom[1]->gpu_data();\n  const int dim = prob_.count() / outer_num_;\n  const int nthreads = outer_num_ * inner_num_;\n  // Since this memory is not used for anything until it is overwritten\n  // on the backward pass, we use it here to avoid having to allocate new GPU\n  // memory to accumulate intermediate results in the kernel.\n  Dtype* loss_data = bottom[0]->mutable_gpu_diff();\n  // Similarly, this memory is never used elsewhere, and thus we can use it\n  // to avoid having to allocate additional GPU memory.\n  Dtype* counts = prob_.mutable_gpu_diff();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  SoftmaxLossForwardGPU<Dtype><<<CAFFE_GET_BLOCKS(nthreads),\n      
CAFFE_CUDA_NUM_THREADS>>>(nthreads, prob_data, label, loss_data,\n      outer_num_, dim, inner_num_, has_ignore_label_, ignore_label_, counts);\n  Dtype loss;\n  caffe_gpu_asum(nthreads, loss_data, &loss);\n  Dtype valid_count = -1;\n  // Only launch another CUDA kernel if we actually need the count of valid\n  // outputs.\n  if (normalization_ == LossParameter_NormalizationMode_VALID &&\n      has_ignore_label_) {\n    caffe_gpu_asum(nthreads, counts, &valid_count);\n  }\n  top[0]->mutable_cpu_data()[0] = loss / get_normalizer(normalization_,\n                                                        valid_count);\n  if (top.size() == 2) {\n    top[1]->ShareData(prob_);\n  }\n}\n\ntemplate <typename Dtype>\n__global__ void SoftmaxLossBackwardGPU(const int nthreads, const Dtype* top,\n          const Dtype* label, Dtype* bottom_diff, const int num, const int dim,\n          const int spatial_dim, const bool has_ignore_label_,\n          const int ignore_label_, Dtype* counts) {\n  const int channels = dim / spatial_dim;\n\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int n = index / spatial_dim;\n    const int s = index % spatial_dim;\n    const int label_value = static_cast<int>(label[n * spatial_dim + s]);\n\n    if (has_ignore_label_ && label_value == ignore_label_) {\n      for (int c = 0; c < channels; ++c) {\n        bottom_diff[n * dim + c * spatial_dim + s] = 0;\n      }\n      counts[index] = 0;\n    } else {\n      bottom_diff[n * dim + label_value * spatial_dim + s] -= 1;\n      counts[index] = 1;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SoftmaxWithLossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[1]) {\n    LOG(FATAL) << this->type()\n               << \" Layer cannot backpropagate to label inputs.\";\n  }\n  if (propagate_down[0]) {\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const Dtype* prob_data = 
prob_.gpu_data();\n    const Dtype* top_data = top[0]->gpu_data();\n    caffe_gpu_memcpy(prob_.count() * sizeof(Dtype), prob_data, bottom_diff);\n    const Dtype* label = bottom[1]->gpu_data();\n    const int dim = prob_.count() / outer_num_;\n    const int nthreads = outer_num_ * inner_num_;\n    // Since this memory is never used for anything else,\n    // we use it to avoid allocating new GPU memory.\n    Dtype* counts = prob_.mutable_gpu_diff();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    SoftmaxLossBackwardGPU<Dtype><<<CAFFE_GET_BLOCKS(nthreads),\n        CAFFE_CUDA_NUM_THREADS>>>(nthreads, top_data, label, bottom_diff,\n        outer_num_, dim, inner_num_, has_ignore_label_, ignore_label_, counts);\n\n    Dtype valid_count = -1;\n    // Only launch another CUDA kernel if we actually need the count of valid\n    // outputs.\n    if (normalization_ == LossParameter_NormalizationMode_VALID &&\n        has_ignore_label_) {\n      caffe_gpu_asum(nthreads, counts, &valid_count);\n    }\n    const Dtype loss_weight = top[0]->cpu_diff()[0] /\n                              get_normalizer(normalization_, valid_count);\n    caffe_gpu_scal(prob_.count(), loss_weight, bottom_diff);\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(SoftmaxWithLossLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/split_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/split_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SplitLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  count_ = bottom[0]->count();\n  for (int i = 0; i < top.size(); ++i) {\n    // Do not allow in-place computation in the SplitLayer.  Instead, share data\n    // by reference in the forward pass, and keep separate diff allocations in\n    // the backward pass.  (Technically, it should be possible to share the diff\n    // blob of the first split output with the input, but this seems to cause\n    // some strange effects in practice...)\n    CHECK_NE(top[i], bottom[0]) << this->type() << \" Layer does not \"\n        \"allow in-place computation.\";\n    top[i]->ReshapeLike(*bottom[0]);\n    CHECK_EQ(count_, top[i]->count());\n  }\n}\n\ntemplate <typename Dtype>\nvoid SplitLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  for (int i = 0; i < top.size(); ++i) {\n    top[i]->ShareData(*bottom[0]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SplitLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  if (top.size() == 1) {\n    caffe_copy(count_, top[0]->cpu_diff(), bottom[0]->mutable_cpu_diff());\n    return;\n  }\n  caffe_add(count_, top[0]->cpu_diff(), top[1]->cpu_diff(),\n            bottom[0]->mutable_cpu_diff());\n  // Add remaining top blob diffs.\n  for (int i = 2; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    caffe_axpy(count_, Dtype(1.), top_diff, bottom_diff);\n  }\n}\n\n\n#ifdef CPU_ONLY\nSTUB_GPU(SplitLayer);\n#endif\n\nINSTANTIATE_CLASS(SplitLayer);\nREGISTER_LAYER_CLASS(Split);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/split_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/split_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid SplitLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  for (int i = 0; i < top.size(); ++i) {\n    top[i]->ShareData(*bottom[0]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid SplitLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  if (top.size() == 1) {\n    caffe_copy(count_, top[0]->gpu_diff(), bottom[0]->mutable_gpu_diff());\n    return;\n  }\n  caffe_gpu_add(count_, top[0]->gpu_diff(), top[1]->gpu_diff(),\n                bottom[0]->mutable_gpu_diff());\n  // Add remaining top blob diffs.\n  for (int i = 2; i < top.size(); ++i) {\n    const Dtype* top_diff = top[i]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    caffe_gpu_axpy(count_, Dtype(1.), top_diff, bottom_diff);\n  }\n}\n\n\nINSTANTIATE_LAYER_GPU_FUNCS(SplitLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/spp_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"caffe/layer.hpp\"\n#include \"caffe/layers/concat_layer.hpp\"\n#include \"caffe/layers/flatten_layer.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/layers/split_layer.hpp\"\n#include \"caffe/layers/spp_layer.hpp\"\n\nnamespace caffe {\n\nusing std::min;\nusing std::max;\n\ntemplate <typename Dtype>\nLayerParameter SPPLayer<Dtype>::GetPoolingParam(const int pyramid_level,\n      const int bottom_h, const int bottom_w, const SPPParameter spp_param) {\n  LayerParameter pooling_param;\n  int num_bins = pow(2, pyramid_level);\n\n  // find padding and kernel size so that the pooling is\n  // performed across the entire image\n  int kernel_h = ceil(bottom_h / static_cast<double>(num_bins));\n  // remainder_h is the min number of pixels that need to be padded before\n  // entire image height is pooled over with the chosen kernel dimension\n  int remainder_h = kernel_h * num_bins - bottom_h;\n  // pooling layer pads (2 * pad_h) pixels on the top and bottom of the\n  // image.\n  int pad_h = (remainder_h + 1) / 2;\n\n  // similar logic for width\n  int kernel_w = ceil(bottom_w / static_cast<double>(num_bins));\n  int remainder_w = kernel_w * num_bins - bottom_w;\n  int pad_w = (remainder_w + 1) / 2;\n\n  pooling_param.mutable_pooling_param()->set_pad_h(pad_h);\n  pooling_param.mutable_pooling_param()->set_pad_w(pad_w);\n  pooling_param.mutable_pooling_param()->set_kernel_h(kernel_h);\n  pooling_param.mutable_pooling_param()->set_kernel_w(kernel_w);\n  pooling_param.mutable_pooling_param()->set_stride_h(kernel_h);\n  pooling_param.mutable_pooling_param()->set_stride_w(kernel_w);\n\n  switch (spp_param.pool()) {\n  case SPPParameter_PoolMethod_MAX:\n    pooling_param.mutable_pooling_param()->set_pool(\n        PoolingParameter_PoolMethod_MAX);\n    break;\n  case SPPParameter_PoolMethod_AVE:\n    pooling_param.mutable_pooling_param()->set_pool(\n        
PoolingParameter_PoolMethod_AVE);\n    break;\n  case SPPParameter_PoolMethod_STOCHASTIC:\n    pooling_param.mutable_pooling_param()->set_pool(\n        PoolingParameter_PoolMethod_STOCHASTIC);\n    break;\n  default:\n    LOG(FATAL) << \"Unknown pooling method.\";\n  }\n\n  return pooling_param;\n}\n\ntemplate <typename Dtype>\nvoid SPPLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  SPPParameter spp_param = this->layer_param_.spp_param();\n\n  num_ = bottom[0]->num();\n  channels_ = bottom[0]->channels();\n  bottom_h_ = bottom[0]->height();\n  bottom_w_ = bottom[0]->width();\n  reshaped_first_time_ = false;\n  CHECK_GT(bottom_h_, 0) << \"Input dimensions cannot be zero.\";\n  CHECK_GT(bottom_w_, 0) << \"Input dimensions cannot be zero.\";\n\n  pyramid_height_ = spp_param.pyramid_height();\n  split_top_vec_.clear();\n  pooling_bottom_vecs_.clear();\n  pooling_layers_.clear();\n  pooling_top_vecs_.clear();\n  pooling_outputs_.clear();\n  flatten_layers_.clear();\n  flatten_top_vecs_.clear();\n  flatten_outputs_.clear();\n  concat_bottom_vec_.clear();\n\n  if (pyramid_height_ == 1) {\n    // pooling layer setup\n    LayerParameter pooling_param = GetPoolingParam(0, bottom_h_, bottom_w_,\n        spp_param);\n    pooling_layers_.push_back(shared_ptr<PoolingLayer<Dtype> > (\n        new PoolingLayer<Dtype>(pooling_param)));\n    pooling_layers_[0]->SetUp(bottom, top);\n    return;\n  }\n  // split layer output holders setup\n  for (int i = 0; i < pyramid_height_; i++) {\n    split_top_vec_.push_back(new Blob<Dtype>());\n  }\n\n  // split layer setup\n  LayerParameter split_param;\n  split_layer_.reset(new SplitLayer<Dtype>(split_param));\n  split_layer_->SetUp(bottom, split_top_vec_);\n\n  for (int i = 0; i < pyramid_height_; i++) {\n    // pooling layer input holders setup\n    pooling_bottom_vecs_.push_back(new vector<Blob<Dtype>*>);\n    pooling_bottom_vecs_[i]->push_back(split_top_vec_[i]);\n\n    // pooling 
layer output holders setup\n    pooling_outputs_.push_back(new Blob<Dtype>());\n    pooling_top_vecs_.push_back(new vector<Blob<Dtype>*>);\n    pooling_top_vecs_[i]->push_back(pooling_outputs_[i]);\n\n    // pooling layer setup\n    LayerParameter pooling_param = GetPoolingParam(\n        i, bottom_h_, bottom_w_, spp_param);\n\n    pooling_layers_.push_back(shared_ptr<PoolingLayer<Dtype> > (\n        new PoolingLayer<Dtype>(pooling_param)));\n    pooling_layers_[i]->SetUp(*pooling_bottom_vecs_[i], *pooling_top_vecs_[i]);\n\n    // flatten layer output holders setup\n    flatten_outputs_.push_back(new Blob<Dtype>());\n    flatten_top_vecs_.push_back(new vector<Blob<Dtype>*>);\n    flatten_top_vecs_[i]->push_back(flatten_outputs_[i]);\n\n    // flatten layer setup\n    LayerParameter flatten_param;\n    flatten_layers_.push_back(new FlattenLayer<Dtype>(flatten_param));\n    flatten_layers_[i]->SetUp(*pooling_top_vecs_[i], *flatten_top_vecs_[i]);\n\n    // concat layer input holders setup\n    concat_bottom_vec_.push_back(flatten_outputs_[i]);\n  }\n\n  // concat layer setup\n  LayerParameter concat_param;\n  concat_layer_.reset(new ConcatLayer<Dtype>(concat_param));\n  concat_layer_->SetUp(concat_bottom_vec_, top);\n}\n\ntemplate <typename Dtype>\nvoid SPPLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  CHECK_EQ(4, bottom[0]->num_axes()) << \"Input must have 4 axes, \"\n      << \"corresponding to (num, channels, height, width)\";\n  // Do nothing if bottom shape is unchanged since last Reshape\n  if (num_ == bottom[0]->num() && channels_ == bottom[0]->channels() &&\n      bottom_h_ == bottom[0]->height() && bottom_w_ == bottom[0]->width() &&\n      reshaped_first_time_) {\n    return;\n  }\n  num_ = bottom[0]->num();\n  channels_ = bottom[0]->channels();\n  bottom_h_ = bottom[0]->height();\n  bottom_w_ = bottom[0]->width();\n  reshaped_first_time_ = true;\n  SPPParameter spp_param = 
this->layer_param_.spp_param();\n  if (pyramid_height_ == 1) {\n    LayerParameter pooling_param = GetPoolingParam(0, bottom_h_, bottom_w_,\n        spp_param);\n    pooling_layers_[0].reset(new PoolingLayer<Dtype>(pooling_param));\n    pooling_layers_[0]->SetUp(bottom, top);\n    pooling_layers_[0]->Reshape(bottom, top);\n    return;\n  }\n  split_layer_->Reshape(bottom, split_top_vec_);\n  for (int i = 0; i < pyramid_height_; i++) {\n    LayerParameter pooling_param = GetPoolingParam(\n        i, bottom_h_, bottom_w_, spp_param);\n\n    pooling_layers_[i].reset(\n        new PoolingLayer<Dtype>(pooling_param));\n    pooling_layers_[i]->SetUp(\n        *pooling_bottom_vecs_[i], *pooling_top_vecs_[i]);\n    pooling_layers_[i]->Reshape(\n        *pooling_bottom_vecs_[i], *pooling_top_vecs_[i]);\n    flatten_layers_[i]->Reshape(\n        *pooling_top_vecs_[i], *flatten_top_vecs_[i]);\n  }\n  concat_layer_->Reshape(concat_bottom_vec_, top);\n}\n\ntemplate <typename Dtype>\nvoid SPPLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  if (pyramid_height_ == 1) {\n    pooling_layers_[0]->Forward(bottom, top);\n    return;\n  }\n  split_layer_->Forward(bottom, split_top_vec_);\n  for (int i = 0; i < pyramid_height_; i++) {\n    pooling_layers_[i]->Forward(\n        *pooling_bottom_vecs_[i], *pooling_top_vecs_[i]);\n    flatten_layers_[i]->Forward(\n        *pooling_top_vecs_[i], *flatten_top_vecs_[i]);\n  }\n  concat_layer_->Forward(concat_bottom_vec_, top);\n}\n\ntemplate <typename Dtype>\nvoid SPPLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) {\n    return;\n  }\n  if (pyramid_height_ == 1) {\n    pooling_layers_[0]->Backward(top, propagate_down, bottom);\n    return;\n  }\n  vector<bool> concat_propagate_down(pyramid_height_, true);\n  concat_layer_->Backward(top, concat_propagate_down, 
concat_bottom_vec_);\n  for (int i = 0; i < pyramid_height_; i++) {\n    flatten_layers_[i]->Backward(\n        *flatten_top_vecs_[i], propagate_down, *pooling_top_vecs_[i]);\n    pooling_layers_[i]->Backward(\n        *pooling_top_vecs_[i], propagate_down, *pooling_bottom_vecs_[i]);\n  }\n  split_layer_->Backward(split_top_vec_, propagate_down, bottom);\n}\n\nINSTANTIATE_CLASS(SPPLayer);\nREGISTER_LAYER_CLASS(SPP);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/tanh_layer.cpp",
    "content": "// TanH neuron activation function layer.\n// Adapted from ReLU layer code written by Yangqing Jia\n\n#include <vector>\n\n#include \"caffe/layers/tanh_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid TanHLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = tanh(bottom_data[i]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid TanHLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_data = top[0]->cpu_data();\n    const Dtype* top_diff = top[0]->cpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n    const int count = bottom[0]->count();\n    Dtype tanhx;\n    for (int i = 0; i < count; ++i) {\n      tanhx = top_data[i];\n      bottom_diff[i] = top_diff[i] * (1 - tanhx * tanhx);\n    }\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU(TanHLayer);\n#endif\n\nINSTANTIATE_CLASS(TanHLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/tanh_layer.cu",
    "content": "// TanH neuron activation function layer.\n// Adapted from ReLU layer code written by Yangqing Jia\n\n#include <vector>\n\n#include \"caffe/layers/tanh_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void TanHForward(const int n, const Dtype* in, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = tanh(in[index]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid TanHLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  TanHForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, bottom_data, top_data);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate <typename Dtype>\n__global__ void TanHBackward(const int n, const Dtype* in_diff,\n    const Dtype* out_data, Dtype* out_diff) {\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype tanhx = out_data[index];\n    out_diff[index] = in_diff[index] * (1 - tanhx * tanhx);\n  }\n}\n\ntemplate <typename Dtype>\nvoid TanHLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down,\n    const vector<Blob<Dtype>*>& bottom) {\n  if (propagate_down[0]) {\n    const Dtype* top_data = top[0]->gpu_data();\n    const Dtype* top_diff = top[0]->gpu_diff();\n    Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n    const int count = bottom[0]->count();\n    // NOLINT_NEXT_LINE(whitespace/operators)\n    TanHBackward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n        count, top_diff, top_data, bottom_diff);\n    CUDA_POST_KERNEL_CHECK;\n  }\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(TanHLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/threshold_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/threshold_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid ThresholdLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  NeuronLayer<Dtype>::LayerSetUp(bottom, top);\n  threshold_ = this->layer_param_.threshold_param().threshold();\n}\n\ntemplate <typename Dtype>\nvoid ThresholdLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  const int count = bottom[0]->count();\n  for (int i = 0; i < count; ++i) {\n    top_data[i] = (bottom_data[i] > threshold_) ? Dtype(1) : Dtype(0);\n  }\n}\n\n#ifdef CPU_ONLY\nSTUB_GPU_FORWARD(ThresholdLayer, Forward);\n#endif\n\nINSTANTIATE_CLASS(ThresholdLayer);\nREGISTER_LAYER_CLASS(Threshold);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/threshold_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/threshold_layer.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void ThresholdForward(const int n, const Dtype threshold,\n    const Dtype* in, Dtype* out) {\n  CUDA_KERNEL_LOOP(index, n) {\n    out[index] = in[index] > threshold ? 1 : 0;\n  }\n}\n\ntemplate <typename Dtype>\nvoid ThresholdLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,\n    const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int count = bottom[0]->count();\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  ThresholdForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(\n      count, threshold_, bottom_data, top_data);\n  CUDA_POST_KERNEL_CHECK;\n}\n\n\nINSTANTIATE_LAYER_GPU_FORWARD(ThresholdLayer);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/tile_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/layers/tile_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid TileLayer<Dtype>::Reshape(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const TileParameter& tile_param = this->layer_param_.tile_param();\n  axis_ = bottom[0]->CanonicalAxisIndex(tile_param.axis());\n  CHECK(tile_param.has_tiles()) << \"Number of tiles must be specified\";\n  tiles_ = tile_param.tiles();\n  CHECK_GT(tiles_, 0) << \"Number of tiles must be positive.\";\n  vector<int> top_shape = bottom[0]->shape();\n  top_shape[axis_] = bottom[0]->shape(axis_) * tiles_;\n  top[0]->Reshape(top_shape);\n  outer_dim_ = bottom[0]->count(0, axis_);\n  inner_dim_ = bottom[0]->count(axis_);\n}\n\ntemplate <typename Dtype>\nvoid TileLayer<Dtype>::Forward_cpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->cpu_data();\n  Dtype* top_data = top[0]->mutable_cpu_data();\n  for (int i = 0; i < outer_dim_; ++i) {\n    for (int t = 0; t < tiles_; ++t) {\n      caffe_copy(inner_dim_, bottom_data, top_data);\n      top_data += inner_dim_;\n    }\n    bottom_data += inner_dim_;\n  }\n}\n\ntemplate <typename Dtype>\nvoid TileLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,\n    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  const Dtype* top_diff = top[0]->cpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();\n  for (int i = 0; i < outer_dim_; ++i) {\n    caffe_copy(inner_dim_, top_diff, bottom_diff);\n    top_diff += inner_dim_;\n    for (int t = 1; t < tiles_; ++t) {\n      caffe_axpy(inner_dim_, Dtype(1), top_diff, bottom_diff);\n      top_diff += inner_dim_;\n    }\n    bottom_diff += inner_dim_;\n  }\n}\n\n#ifdef 
CPU_ONLY\nSTUB_GPU(TileLayer);\n#endif\n\nINSTANTIATE_CLASS(TileLayer);\nREGISTER_LAYER_CLASS(Tile);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/tile_layer.cu",
    "content": "#include <vector>\n\n#include \"caffe/layers/tile_layer.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void Tile(const int nthreads, const Dtype* bottom_data,\n    const int tile_size, const int num_tiles, const int bottom_tile_axis,\n    Dtype* top_data) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int d = index % tile_size;\n    const int b = (index / tile_size / num_tiles) % bottom_tile_axis;\n    const int n = index / tile_size / num_tiles / bottom_tile_axis;\n    const int bottom_index = (n * bottom_tile_axis + b) * tile_size + d;\n    top_data[index] = bottom_data[bottom_index];\n  }\n}\n\ntemplate <typename Dtype>\nvoid TileLayer<Dtype>::Forward_gpu(\n    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {\n  const Dtype* bottom_data = bottom[0]->gpu_data();\n  Dtype* top_data = top[0]->mutable_gpu_data();\n  const int bottom_tile_axis = bottom[0]->shape(axis_);\n  const int nthreads = top[0]->count();\n  Tile<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n      nthreads, bottom_data, inner_dim_, tiles_, bottom_tile_axis, top_data);\n}\n\ntemplate <typename Dtype>\n__global__ void TileBackward(const int nthreads, const Dtype* top_diff,\n    const int tile_size, const int num_tiles, const int bottom_tile_axis,\n    Dtype* bottom_diff) {\n  CUDA_KERNEL_LOOP(index, nthreads) {\n    const int d = index % tile_size;\n    const int b = (index / tile_size) % bottom_tile_axis;\n    const int n = index / tile_size / bottom_tile_axis;\n    bottom_diff[index] = 0;\n    int top_index = (n * num_tiles * bottom_tile_axis + b) * tile_size + d;\n    for (int t = 0; t < num_tiles; ++t) {\n      bottom_diff[index] += top_diff[top_index];\n      top_index += bottom_tile_axis * tile_size;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid TileLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,\n    
const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {\n  if (!propagate_down[0]) { return; }\n  const Dtype* top_diff = top[0]->gpu_diff();\n  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();\n  const int bottom_tile_axis = bottom[0]->shape(axis_);\n  const int tile_size = inner_dim_ / bottom_tile_axis;\n  const int nthreads = bottom[0]->count();\n  TileBackward<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(nthreads), CAFFE_CUDA_NUM_THREADS>>>(\n      nthreads, top_diff, tile_size, tiles_, bottom_tile_axis, bottom_diff);\n}\n\nINSTANTIATE_LAYER_GPU_FUNCS(TileLayer);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/layers/window_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/highgui/highgui_c.h>\n#include <stdint.h>\n\n#include <algorithm>\n#include <map>\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"opencv2/core/core.hpp\"\n#include \"opencv2/highgui/highgui.hpp\"\n#include \"opencv2/imgproc/imgproc.hpp\"\n\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/layers/window_data_layer.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include \"caffe/util/rng.hpp\"\n\n// caffe.proto > LayerParameter > WindowDataParameter\n//   'source' field specifies the window_file\n//   'crop_size' indicates the desired warped size\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nWindowDataLayer<Dtype>::~WindowDataLayer<Dtype>() {\n  this->StopInternalThread();\n}\n\ntemplate <typename Dtype>\nvoid WindowDataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,\n      const vector<Blob<Dtype>*>& top) {\n  // LayerSetUp runs through the window_file and creates two structures\n  // that hold windows: one for foreground (object) windows and one\n  // for background (non-object) windows. 
We use an overlap threshold\n  // to decide which is which.\n\n  // window_file format\n  // repeated:\n  //    # image_index\n  //    img_path (abs path)\n  //    channels\n  //    height\n  //    width\n  //    num_windows\n  //    class_index overlap x1 y1 x2 y2\n\n  LOG(INFO) << \"Window data layer:\" << std::endl\n      << \"  foreground (object) overlap threshold: \"\n      << this->layer_param_.window_data_param().fg_threshold() << std::endl\n      << \"  background (non-object) overlap threshold: \"\n      << this->layer_param_.window_data_param().bg_threshold() << std::endl\n      << \"  foreground sampling fraction: \"\n      << this->layer_param_.window_data_param().fg_fraction() << std::endl\n      << \"  cache_images: \"\n      << this->layer_param_.window_data_param().cache_images() << std::endl\n      << \"  root_folder: \"\n      << this->layer_param_.window_data_param().root_folder();\n\n  cache_images_ = this->layer_param_.window_data_param().cache_images();\n  string root_folder = this->layer_param_.window_data_param().root_folder();\n\n  const bool prefetch_needs_rand =\n      this->transform_param_.mirror() ||\n      this->transform_param_.crop_size();\n  if (prefetch_needs_rand) {\n    const unsigned int prefetch_rng_seed = caffe_rng_rand();\n    prefetch_rng_.reset(new Caffe::RNG(prefetch_rng_seed));\n  } else {\n    prefetch_rng_.reset();\n  }\n\n  std::ifstream infile(this->layer_param_.window_data_param().source().c_str());\n  CHECK(infile.good()) << \"Failed to open window file \"\n      << this->layer_param_.window_data_param().source() << std::endl;\n\n  map<int, int> label_hist;\n  label_hist.insert(std::make_pair(0, 0));\n\n  string hashtag;\n  int image_index, channels;\n  if (!(infile >> hashtag >> image_index)) {\n    LOG(FATAL) << \"Window file is empty\";\n  }\n  do {\n    CHECK_EQ(hashtag, \"#\");\n    // read image path\n    string image_path;\n    infile >> image_path;\n    image_path = root_folder + image_path;\n    // read 
image dimensions\n    vector<int> image_size(3);\n    infile >> image_size[0] >> image_size[1] >> image_size[2];\n    channels = image_size[0];\n    image_database_.push_back(std::make_pair(image_path, image_size));\n\n    if (cache_images_) {\n      Datum datum;\n      if (!ReadFileToDatum(image_path, &datum)) {\n        LOG(ERROR) << \"Could not open or find file \" << image_path;\n        return;\n      }\n      image_database_cache_.push_back(std::make_pair(image_path, datum));\n    }\n    // read each box\n    int num_windows;\n    infile >> num_windows;\n    const float fg_threshold =\n        this->layer_param_.window_data_param().fg_threshold();\n    const float bg_threshold =\n        this->layer_param_.window_data_param().bg_threshold();\n    for (int i = 0; i < num_windows; ++i) {\n      int label, x1, y1, x2, y2;\n      float overlap;\n      infile >> label >> overlap >> x1 >> y1 >> x2 >> y2;\n\n      vector<float> window(WindowDataLayer::NUM);\n      window[WindowDataLayer::IMAGE_INDEX] = image_index;\n      window[WindowDataLayer::LABEL] = label;\n      window[WindowDataLayer::OVERLAP] = overlap;\n      window[WindowDataLayer::X1] = x1;\n      window[WindowDataLayer::Y1] = y1;\n      window[WindowDataLayer::X2] = x2;\n      window[WindowDataLayer::Y2] = y2;\n\n      // add window to foreground list or background list\n      if (overlap >= fg_threshold) {\n        int label = window[WindowDataLayer::LABEL];\n        CHECK_GT(label, 0);\n        fg_windows_.push_back(window);\n        label_hist.insert(std::make_pair(label, 0));\n        label_hist[label]++;\n      } else if (overlap < bg_threshold) {\n        // background window, force label and overlap to 0\n        window[WindowDataLayer::LABEL] = 0;\n        window[WindowDataLayer::OVERLAP] = 0;\n        bg_windows_.push_back(window);\n        label_hist[0]++;\n      }\n    }\n\n    if (image_index % 100 == 0) {\n      LOG(INFO) << \"num: \" << image_index << \" \"\n          << image_path << \" 
\"\n          << image_size[0] << \" \"\n          << image_size[1] << \" \"\n          << image_size[2] << \" \"\n          << \"windows to process: \" << num_windows;\n    }\n  } while (infile >> hashtag >> image_index);\n\n  LOG(INFO) << \"Number of images: \" << image_index+1;\n\n  for (map<int, int>::iterator it = label_hist.begin();\n      it != label_hist.end(); ++it) {\n    LOG(INFO) << \"class \" << it->first << \" has \" << label_hist[it->first]\n              << \" samples\";\n  }\n\n  LOG(INFO) << \"Amount of context padding: \"\n      << this->layer_param_.window_data_param().context_pad();\n\n  LOG(INFO) << \"Crop mode: \"\n      << this->layer_param_.window_data_param().crop_mode();\n\n  // image\n  const int crop_size = this->transform_param_.crop_size();\n  CHECK_GT(crop_size, 0);\n  const int batch_size = this->layer_param_.window_data_param().batch_size();\n  top[0]->Reshape(batch_size, channels, crop_size, crop_size);\n  for (int i = 0; i < this->PREFETCH_COUNT; ++i)\n    this->prefetch_[i].data_.Reshape(\n        batch_size, channels, crop_size, crop_size);\n\n  LOG(INFO) << \"output data size: \" << top[0]->num() << \",\"\n      << top[0]->channels() << \",\" << top[0]->height() << \",\"\n      << top[0]->width();\n  // label\n  vector<int> label_shape(1, batch_size);\n  top[1]->Reshape(label_shape);\n  for (int i = 0; i < this->PREFETCH_COUNT; ++i) {\n    this->prefetch_[i].label_.Reshape(label_shape);\n  }\n\n  // data mean\n  has_mean_file_ = this->transform_param_.has_mean_file();\n  has_mean_values_ = this->transform_param_.mean_value_size() > 0;\n  if (has_mean_file_) {\n    const string& mean_file =\n          this->transform_param_.mean_file();\n    LOG(INFO) << \"Loading mean file from: \" << mean_file;\n    BlobProto blob_proto;\n    ReadProtoFromBinaryFileOrDie(mean_file.c_str(), &blob_proto);\n    data_mean_.FromProto(blob_proto);\n  }\n  if (has_mean_values_) {\n    CHECK(has_mean_file_ == false) <<\n      \"Cannot specify 
mean_file and mean_value at the same time\";\n    for (int c = 0; c < this->transform_param_.mean_value_size(); ++c) {\n      mean_values_.push_back(this->transform_param_.mean_value(c));\n    }\n    CHECK(mean_values_.size() == 1 || mean_values_.size() == channels) <<\n     \"Specify either 1 mean_value or as many as channels: \" << channels;\n    if (channels > 1 && mean_values_.size() == 1) {\n      // Replicate the mean_value for simplicity\n      for (int c = 1; c < channels; ++c) {\n        mean_values_.push_back(mean_values_[0]);\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nunsigned int WindowDataLayer<Dtype>::PrefetchRand() {\n  CHECK(prefetch_rng_);\n  caffe::rng_t* prefetch_rng =\n      static_cast<caffe::rng_t*>(prefetch_rng_->generator());\n  return (*prefetch_rng)();\n}\n\n// This function is called on prefetch thread\ntemplate <typename Dtype>\nvoid WindowDataLayer<Dtype>::load_batch(Batch<Dtype>* batch) {\n  // At each iteration, sample N windows where N*p are foreground (object)\n  // windows and N*(1-p) are background (non-object) windows\n  CPUTimer batch_timer;\n  batch_timer.Start();\n  double read_time = 0;\n  double trans_time = 0;\n  CPUTimer timer;\n  Dtype* top_data = batch->data_.mutable_cpu_data();\n  Dtype* top_label = batch->label_.mutable_cpu_data();\n  const Dtype scale = this->layer_param_.window_data_param().scale();\n  const int batch_size = this->layer_param_.window_data_param().batch_size();\n  const int context_pad = this->layer_param_.window_data_param().context_pad();\n  const int crop_size = this->transform_param_.crop_size();\n  const bool mirror = this->transform_param_.mirror();\n  const float fg_fraction =\n      this->layer_param_.window_data_param().fg_fraction();\n  Dtype* mean = NULL;\n  int mean_off = 0;\n  int mean_width = 0;\n  int mean_height = 0;\n  if (this->has_mean_file_) {\n    mean = this->data_mean_.mutable_cpu_data();\n    mean_off = (this->data_mean_.width() - crop_size) / 2;\n    mean_width = 
this->data_mean_.width();\n    mean_height = this->data_mean_.height();\n  }\n  cv::Size cv_crop_size(crop_size, crop_size);\n  const string& crop_mode = this->layer_param_.window_data_param().crop_mode();\n\n  bool use_square = (crop_mode == \"square\") ? true : false;\n\n  // zero out batch\n  caffe_set(batch->data_.count(), Dtype(0), top_data);\n\n  const int num_fg = static_cast<int>(static_cast<float>(batch_size)\n      * fg_fraction);\n  const int num_samples[2] = { batch_size - num_fg, num_fg };\n\n  int item_id = 0;\n  // sample from bg set then fg set\n  for (int is_fg = 0; is_fg < 2; ++is_fg) {\n    for (int dummy = 0; dummy < num_samples[is_fg]; ++dummy) {\n      // sample a window\n      timer.Start();\n      const unsigned int rand_index = PrefetchRand();\n      vector<float> window = (is_fg) ?\n          fg_windows_[rand_index % fg_windows_.size()] :\n          bg_windows_[rand_index % bg_windows_.size()];\n\n      bool do_mirror = mirror && PrefetchRand() % 2;\n\n      // load the image containing the window\n      pair<std::string, vector<int> > image =\n          image_database_[window[WindowDataLayer<Dtype>::IMAGE_INDEX]];\n\n      cv::Mat cv_img;\n      if (this->cache_images_) {\n        pair<std::string, Datum> image_cached =\n          image_database_cache_[window[WindowDataLayer<Dtype>::IMAGE_INDEX]];\n        cv_img = DecodeDatumToCVMat(image_cached.second, true);\n      } else {\n        cv_img = cv::imread(image.first, CV_LOAD_IMAGE_COLOR);\n        if (!cv_img.data) {\n          LOG(ERROR) << \"Could not open or find file \" << image.first;\n          return;\n        }\n      }\n      read_time += timer.MicroSeconds();\n      timer.Start();\n      const int channels = cv_img.channels();\n\n      // crop window out of image and warp it\n      int x1 = window[WindowDataLayer<Dtype>::X1];\n      int y1 = window[WindowDataLayer<Dtype>::Y1];\n      int x2 = window[WindowDataLayer<Dtype>::X2];\n      int y2 = 
window[WindowDataLayer<Dtype>::Y2];\n\n      int pad_w = 0;\n      int pad_h = 0;\n      if (context_pad > 0 || use_square) {\n        // scale factor by which to expand the original region\n        // such that after warping the expanded region to crop_size x crop_size\n        // there's exactly context_pad amount of padding on each side\n        Dtype context_scale = static_cast<Dtype>(crop_size) /\n            static_cast<Dtype>(crop_size - 2*context_pad);\n\n        // compute the expanded region\n        Dtype half_height = static_cast<Dtype>(y2-y1+1)/2.0;\n        Dtype half_width = static_cast<Dtype>(x2-x1+1)/2.0;\n        Dtype center_x = static_cast<Dtype>(x1) + half_width;\n        Dtype center_y = static_cast<Dtype>(y1) + half_height;\n        if (use_square) {\n          if (half_height > half_width) {\n            half_width = half_height;\n          } else {\n            half_height = half_width;\n          }\n        }\n        x1 = static_cast<int>(round(center_x - half_width*context_scale));\n        x2 = static_cast<int>(round(center_x + half_width*context_scale));\n        y1 = static_cast<int>(round(center_y - half_height*context_scale));\n        y2 = static_cast<int>(round(center_y + half_height*context_scale));\n\n        // the expanded region may go outside of the image\n        // so we compute the clipped (expanded) region and keep track of\n        // the extent beyond the image\n        int unclipped_height = y2-y1+1;\n        int unclipped_width = x2-x1+1;\n        int pad_x1 = std::max(0, -x1);\n        int pad_y1 = std::max(0, -y1);\n        int pad_x2 = std::max(0, x2 - cv_img.cols + 1);\n        int pad_y2 = std::max(0, y2 - cv_img.rows + 1);\n        // clip bounds\n        x1 = x1 + pad_x1;\n        x2 = x2 - pad_x2;\n        y1 = y1 + pad_y1;\n        y2 = y2 - pad_y2;\n        CHECK_GT(x1, -1);\n        CHECK_GT(y1, -1);\n        CHECK_LT(x2, cv_img.cols);\n        CHECK_LT(y2, cv_img.rows);\n\n        int clipped_height = 
y2-y1+1;\n        int clipped_width = x2-x1+1;\n\n        // scale factors that would be used to warp the unclipped\n        // expanded region\n        Dtype scale_x =\n            static_cast<Dtype>(crop_size)/static_cast<Dtype>(unclipped_width);\n        Dtype scale_y =\n            static_cast<Dtype>(crop_size)/static_cast<Dtype>(unclipped_height);\n\n        // size to warp the clipped expanded region to\n        cv_crop_size.width =\n            static_cast<int>(round(static_cast<Dtype>(clipped_width)*scale_x));\n        cv_crop_size.height =\n            static_cast<int>(round(static_cast<Dtype>(clipped_height)*scale_y));\n        pad_x1 = static_cast<int>(round(static_cast<Dtype>(pad_x1)*scale_x));\n        pad_x2 = static_cast<int>(round(static_cast<Dtype>(pad_x2)*scale_x));\n        pad_y1 = static_cast<int>(round(static_cast<Dtype>(pad_y1)*scale_y));\n        pad_y2 = static_cast<int>(round(static_cast<Dtype>(pad_y2)*scale_y));\n\n        pad_h = pad_y1;\n        // if we're mirroring, we mirror the padding too (to be pedantic)\n        if (do_mirror) {\n          pad_w = pad_x2;\n        } else {\n          pad_w = pad_x1;\n        }\n\n        // ensure that the warped, clipped region plus the padding fits in the\n        // crop_size x crop_size image (it might not due to rounding)\n        if (pad_h + cv_crop_size.height > crop_size) {\n          cv_crop_size.height = crop_size - pad_h;\n        }\n        if (pad_w + cv_crop_size.width > crop_size) {\n          cv_crop_size.width = crop_size - pad_w;\n        }\n      }\n\n      cv::Rect roi(x1, y1, x2-x1+1, y2-y1+1);\n      cv::Mat cv_cropped_img = cv_img(roi);\n      cv::resize(cv_cropped_img, cv_cropped_img,\n          cv_crop_size, 0, 0, cv::INTER_LINEAR);\n\n      // horizontal flip at random\n      if (do_mirror) {\n        cv::flip(cv_cropped_img, cv_cropped_img, 1);\n      }\n\n      // copy the warped window into top_data\n      for (int h = 0; h < cv_cropped_img.rows; ++h) {\n        const 
uchar* ptr = cv_cropped_img.ptr<uchar>(h);\n        int img_index = 0;\n        for (int w = 0; w < cv_cropped_img.cols; ++w) {\n          for (int c = 0; c < channels; ++c) {\n            int top_index = ((item_id * channels + c) * crop_size + h + pad_h)\n                     * crop_size + w + pad_w;\n            // int top_index = (c * height + h) * width + w;\n            Dtype pixel = static_cast<Dtype>(ptr[img_index++]);\n            if (this->has_mean_file_) {\n              int mean_index = (c * mean_height + h + mean_off + pad_h)\n                           * mean_width + w + mean_off + pad_w;\n              top_data[top_index] = (pixel - mean[mean_index]) * scale;\n            } else {\n              if (this->has_mean_values_) {\n                top_data[top_index] = (pixel - this->mean_values_[c]) * scale;\n              } else {\n                top_data[top_index] = pixel * scale;\n              }\n            }\n          }\n        }\n      }\n      trans_time += timer.MicroSeconds();\n      // get window label\n      top_label[item_id] = window[WindowDataLayer<Dtype>::LABEL];\n\n      #if 0\n      // useful debugging code for dumping transformed windows to disk\n      string file_id;\n      std::stringstream ss;\n      ss << PrefetchRand();\n      ss >> file_id;\n      std::ofstream inf((string(\"dump/\") + file_id +\n          string(\"_info.txt\")).c_str(), std::ofstream::out);\n      inf << image.first << std::endl\n          << window[WindowDataLayer<Dtype>::X1]+1 << std::endl\n          << window[WindowDataLayer<Dtype>::Y1]+1 << std::endl\n          << window[WindowDataLayer<Dtype>::X2]+1 << std::endl\n          << window[WindowDataLayer<Dtype>::Y2]+1 << std::endl\n          << do_mirror << std::endl\n          << top_label[item_id] << std::endl\n          << is_fg << std::endl;\n      inf.close();\n      std::ofstream top_data_file((string(\"dump/\") + file_id +\n          string(\"_data.txt\")).c_str(),\n          std::ofstream::out | 
std::ofstream::binary);\n      for (int c = 0; c < channels; ++c) {\n        for (int h = 0; h < crop_size; ++h) {\n          for (int w = 0; w < crop_size; ++w) {\n            top_data_file.write(reinterpret_cast<char*>(\n                &top_data[((item_id * channels + c) * crop_size + h)\n                          * crop_size + w]),\n                sizeof(Dtype));\n          }\n        }\n      }\n      top_data_file.close();\n      #endif\n\n      item_id++;\n    }\n  }\n  batch_timer.Stop();\n  DLOG(INFO) << \"Prefetch batch: \" << batch_timer.MilliSeconds() << \" ms.\";\n  DLOG(INFO) << \"     Read time: \" << read_time / 1000 << \" ms.\";\n  DLOG(INFO) << \"Transform time: \" << trans_time / 1000 << \" ms.\";\n}\n\nINSTANTIATE_CLASS(WindowDataLayer);\nREGISTER_LAYER_CLASS(WindowData);\n\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/net.cpp",
    "content": "#include <algorithm>\n#include <map>\n#include <set>\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"hdf5.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/net.hpp\"\n#include \"caffe/parallel.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/hdf5.hpp\"\n#include \"caffe/util/insert_splits.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nNet<Dtype>::Net(const NetParameter& param, const Net* root_net)\n    : root_net_(root_net) {\n  Init(param);\n}\n\ntemplate <typename Dtype>\nNet<Dtype>::Net(const string& param_file, Phase phase, const Net* root_net)\n    : root_net_(root_net) {\n  NetParameter param;\n  ReadNetParamsFromTextFileOrDie(param_file, &param);\n  param.mutable_state()->set_phase(phase);\n  Init(param);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::Init(const NetParameter& in_param) {\n  CHECK(Caffe::root_solver() || root_net_)\n      << \"root_net_ needs to be set for all non-root solvers\";\n  // Set phase from the state.\n  phase_ = in_param.state().phase();\n  // Filter layers based on their include/exclude rules and\n  // the current NetState.\n  NetParameter filtered_param;\n  FilterNet(in_param, &filtered_param);\n  LOG_IF(INFO, Caffe::root_solver())\n      << \"Initializing net from parameters: \" << std::endl\n      << filtered_param.DebugString();\n  // Create a copy of filtered_param with splits added where necessary.\n  NetParameter param;\n  InsertSplits(filtered_param, &param);\n  // Basically, build all the layers and set up their connections.\n  name_ = param.name();\n  map<string, int> blob_name_to_idx;\n  set<string> available_blobs;\n  CHECK(param.input_dim_size() == 0 || param.input_shape_size() == 0)\n      << \"Must specify either input_shape OR deprecated input_dim, not both.\";\n  if 
(param.input_dim_size() > 0) {\n    // Deprecated 4D dimensions.\n    CHECK_EQ(param.input_size() * 4, param.input_dim_size())\n        << \"Incorrect input blob dimension specifications.\";\n  } else {\n    CHECK_EQ(param.input_size(), param.input_shape_size())\n        << \"Exactly one input_shape must be specified per input.\";\n  }\n  memory_used_ = 0;\n  // set the input blobs\n  for (int input_id = 0; input_id < param.input_size(); ++input_id) {\n    const int layer_id = -1;  // inputs have fake layer ID -1\n    AppendTop(param, layer_id, input_id, &available_blobs, &blob_name_to_idx);\n  }\n  // For each layer, set up its input and output\n  bottom_vecs_.resize(param.layer_size());\n  top_vecs_.resize(param.layer_size());\n  bottom_id_vecs_.resize(param.layer_size());\n  param_id_vecs_.resize(param.layer_size());\n  top_id_vecs_.resize(param.layer_size());\n  bottom_need_backward_.resize(param.layer_size());\n  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {\n    // For non-root solvers, whether this layer is shared from root_net_.\n    bool share_from_root = !Caffe::root_solver()\n        && root_net_->layers_[layer_id]->ShareInParallel();\n    // Inherit phase from net if unset.\n    if (!param.layer(layer_id).has_phase()) {\n      param.mutable_layer(layer_id)->set_phase(phase_);\n    }\n    // Setup layer.\n    const LayerParameter& layer_param = param.layer(layer_id);\n    if (layer_param.propagate_down_size() > 0) {\n      CHECK_EQ(layer_param.propagate_down_size(),\n          layer_param.bottom_size())\n          << \"propagate_down param must be specified \"\n          << \"either 0 or bottom_size times \";\n    }\n    if (share_from_root) {\n      LOG(INFO) << \"Sharing layer \" << layer_param.name() << \" from root net\";\n      layers_.push_back(root_net_->layers_[layer_id]);\n      layers_[layer_id]->SetShared(true);\n    } else {\n      layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));\n    }\n    
layer_names_.push_back(layer_param.name());\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Creating Layer \" << layer_param.name();\n    bool need_backward = false;\n\n    // Figure out this layer's input and output\n    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();\n         ++bottom_id) {\n      const int blob_id = AppendBottom(param, layer_id, bottom_id,\n                                       &available_blobs, &blob_name_to_idx);\n      // If a blob needs backward, this layer should provide it.\n      need_backward |= blob_need_backward_[blob_id];\n    }\n    int num_top = layer_param.top_size();\n    for (int top_id = 0; top_id < num_top; ++top_id) {\n      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);\n    }\n    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter\n    // specified fewer than the required number (as specified by\n    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.\n    Layer<Dtype>* layer = layers_[layer_id].get();\n    if (layer->AutoTopBlobs()) {\n      const int needed_num_top =\n          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());\n      for (; num_top < needed_num_top; ++num_top) {\n        // Add \"anonymous\" top blobs -- do not modify available_blobs or\n        // blob_name_to_idx as we don't want these blobs to be usable as input\n        // to other layers.\n        AppendTop(param, layer_id, num_top, NULL, NULL);\n      }\n    }\n    // After this layer is connected, set it up.\n    if (share_from_root) {\n      // Set up size of top blobs using root_net_\n      const vector<Blob<Dtype>*>& base_top = root_net_->top_vecs_[layer_id];\n      const vector<Blob<Dtype>*>& this_top = this->top_vecs_[layer_id];\n      for (int top_id = 0; top_id < base_top.size(); ++top_id) {\n        this_top[top_id]->ReshapeLike(*base_top[top_id]);\n        LOG(INFO) << \"Created top blob \" << top_id << \" (shape: \"\n            << 
this_top[top_id]->shape_string() <<  \") for shared layer \"\n            << layer_param.name();\n      }\n    } else {\n      layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);\n    }\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Setting up \" << layer_names_[layer_id];\n    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {\n      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {\n        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));\n      }\n      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);\n      LOG_IF(INFO, Caffe::root_solver())\n          << \"Top shape: \" << top_vecs_[layer_id][top_id]->shape_string();\n      if (layer->loss(top_id)) {\n        LOG_IF(INFO, Caffe::root_solver())\n            << \"    with loss weight \" << layer->loss(top_id);\n      }\n      memory_used_ += top_vecs_[layer_id][top_id]->count();\n    }\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Memory required for data: \" << memory_used_ * sizeof(Dtype);\n    const int param_size = layer_param.param_size();\n    const int num_param_blobs = layers_[layer_id]->blobs().size();\n    CHECK_LE(param_size, num_param_blobs)\n        << \"Too many params specified for layer \" << layer_param.name();\n    ParamSpec default_param_spec;\n    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {\n      const ParamSpec* param_spec = (param_id < param_size) ?\n          &layer_param.param(param_id) : &default_param_spec;\n      const bool param_need_backward = param_spec->lr_mult() != 0;\n      need_backward |= param_need_backward;\n      layers_[layer_id]->set_param_propagate_down(param_id,\n                                                  param_need_backward);\n    }\n    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {\n      AppendParam(param, layer_id, param_id);\n    }\n    // Finally, set the backward flag\n    
layer_need_backward_.push_back(need_backward);\n    if (need_backward) {\n      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {\n        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;\n      }\n    }\n  }\n  // Go through the net backwards to determine which blobs contribute to the\n  // loss.  We can skip backward computation for blobs that don't contribute\n  // to the loss.\n  // Also checks if all bottom blobs don't need backward computation (possible\n  // because of the skip_propagate_down param) and so we can skip backward\n  // computation for the entire layer\n  set<string> blobs_under_loss;\n  set<string> blobs_skip_backp;\n  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {\n    bool layer_contributes_loss = false;\n    bool layer_skip_propagate_down = true;\n    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {\n      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];\n      if (layers_[layer_id]->loss(top_id) ||\n          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {\n        layer_contributes_loss = true;\n      }\n      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {\n        layer_skip_propagate_down = false;\n      }\n      if (layer_contributes_loss && !layer_skip_propagate_down)\n        break;\n    }\n    // If this layer can skip backward computation, also all its bottom blobs\n    // don't need backpropagation\n    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {\n      layer_need_backward_[layer_id] = false;\n      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();\n               ++bottom_id) {\n        bottom_need_backward_[layer_id][bottom_id] = false;\n      }\n    }\n    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }\n    if (Caffe::root_solver()) {\n      if (layer_need_backward_[layer_id]) {\n        LOG(INFO) << layer_names_[layer_id] << \" needs 
backward computation.\";\n      } else {\n        LOG(INFO) << layer_names_[layer_id]\n            << \" does not need backward computation.\";\n      }\n    }\n    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();\n         ++bottom_id) {\n      if (layer_contributes_loss) {\n        const string& blob_name =\n            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];\n        blobs_under_loss.insert(blob_name);\n      } else {\n        bottom_need_backward_[layer_id][bottom_id] = false;\n      }\n      if (!bottom_need_backward_[layer_id][bottom_id]) {\n        const string& blob_name =\n                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];\n        blobs_skip_backp.insert(blob_name);\n      }\n    }\n  }\n  // Handle force_backward if needed.\n  if (param.force_backward()) {\n    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {\n      layer_need_backward_[layer_id] = true;\n      for (int bottom_id = 0;\n           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {\n        bottom_need_backward_[layer_id][bottom_id] =\n            bottom_need_backward_[layer_id][bottom_id] ||\n            layers_[layer_id]->AllowForceBackward(bottom_id);\n        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =\n            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||\n            bottom_need_backward_[layer_id][bottom_id];\n      }\n      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();\n           ++param_id) {\n        layers_[layer_id]->set_param_propagate_down(param_id, true);\n      }\n    }\n  }\n  // In the end, all remaining blobs are considered output blobs.\n  for (set<string>::iterator it = available_blobs.begin();\n      it != available_blobs.end(); ++it) {\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"This network produces output \" << *it;\n    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());\n    
net_output_blob_indices_.push_back(blob_name_to_idx[*it]);\n  }\n  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {\n    blob_names_index_[blob_names_[blob_id]] = blob_id;\n  }\n  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {\n    layer_names_index_[layer_names_[layer_id]] = layer_id;\n  }\n  ShareWeights();\n  debug_info_ = param.debug_info();\n  LOG_IF(INFO, Caffe::root_solver()) << \"Network initialization done.\";\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::FilterNet(const NetParameter& param,\n    NetParameter* param_filtered) {\n  NetState net_state(param.state());\n  param_filtered->CopyFrom(param);\n  param_filtered->clear_layer();\n  for (int i = 0; i < param.layer_size(); ++i) {\n    const LayerParameter& layer_param = param.layer(i);\n    const string& layer_name = layer_param.name();\n    CHECK(layer_param.include_size() == 0 || layer_param.exclude_size() == 0)\n          << \"Specify either include rules or exclude rules; not both.\";\n    // If no include rules are specified, the layer is included by default and\n    // only excluded if it meets one of the exclude rules.\n    bool layer_included = (layer_param.include_size() == 0);\n    for (int j = 0; layer_included && j < layer_param.exclude_size(); ++j) {\n      if (StateMeetsRule(net_state, layer_param.exclude(j), layer_name)) {\n        layer_included = false;\n      }\n    }\n    for (int j = 0; !layer_included && j < layer_param.include_size(); ++j) {\n      if (StateMeetsRule(net_state, layer_param.include(j), layer_name)) {\n        layer_included = true;\n      }\n    }\n    if (layer_included) {\n      param_filtered->add_layer()->CopyFrom(layer_param);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nbool Net<Dtype>::StateMeetsRule(const NetState& state,\n    const NetStateRule& rule, const string& layer_name) {\n  // Check whether the rule is broken due to phase.\n  if (rule.has_phase()) {\n      if (rule.phase() != state.phase()) {\n        
LOG_IF(INFO, Caffe::root_solver())\n            << \"The NetState phase (\" << state.phase()\n            << \") differed from the phase (\" << rule.phase()\n            << \") specified by a rule in layer \" << layer_name;\n        return false;\n      }\n  }\n  // Check whether the rule is broken due to min level.\n  if (rule.has_min_level()) {\n    if (state.level() < rule.min_level()) {\n      LOG_IF(INFO, Caffe::root_solver())\n          << \"The NetState level (\" << state.level()\n          << \") is below the min_level (\" << rule.min_level()\n          << \") specified by a rule in layer \" << layer_name;\n      return false;\n    }\n  }\n  // Check whether the rule is broken due to max level.\n  if (rule.has_max_level()) {\n    if (state.level() > rule.max_level()) {\n      LOG_IF(INFO, Caffe::root_solver())\n          << \"The NetState level (\" << state.level()\n          << \") is above the max_level (\" << rule.max_level()\n          << \") specified by a rule in layer \" << layer_name;\n      return false;\n    }\n  }\n  // Check whether the rule is broken due to stage. The NetState must\n  // contain ALL of the rule's stages to meet it.\n  for (int i = 0; i < rule.stage_size(); ++i) {\n    // Check that the NetState contains the rule's ith stage.\n    bool has_stage = false;\n    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {\n      if (rule.stage(i) == state.stage(j)) { has_stage = true; }\n    }\n    if (!has_stage) {\n      LOG_IF(INFO, Caffe::root_solver())\n          << \"The NetState did not contain stage '\" << rule.stage(i)\n          << \"' specified by a rule in layer \" << layer_name;\n      return false;\n    }\n  }\n  // Check whether the rule is broken due to not_stage. 
The NetState must\n  // contain NONE of the rule's not_stages to meet it.\n  for (int i = 0; i < rule.not_stage_size(); ++i) {\n    // Check that the NetState contains the rule's ith not_stage.\n    bool has_stage = false;\n    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {\n      if (rule.not_stage(i) == state.stage(j)) { has_stage = true; }\n    }\n    if (has_stage) {\n      LOG_IF(INFO, Caffe::root_solver())\n          << \"The NetState contained a not_stage '\" << rule.not_stage(i)\n          << \"' specified by a rule in layer \" << layer_name;\n      return false;\n    }\n  }\n  return true;\n}\n\n// Helper for Net::Init: add a new input or top blob to the net.  (Inputs have\n// layer_id == -1, tops have layer_id >= 0.)\ntemplate <typename Dtype>\nvoid Net<Dtype>::AppendTop(const NetParameter& param, const int layer_id,\n                           const int top_id, set<string>* available_blobs,\n                           map<string, int>* blob_name_to_idx) {\n  shared_ptr<LayerParameter> layer_param((layer_id >= 0) ?\n    (new LayerParameter(param.layer(layer_id))) : NULL);\n  const string& blob_name = layer_param ?\n      (layer_param->top_size() > top_id ?\n          layer_param->top(top_id) : \"(automatic)\") : param.input(top_id);\n  // Check if we are doing in-place computation\n  if (blob_name_to_idx && layer_param && layer_param->bottom_size() > top_id &&\n      blob_name == layer_param->bottom(top_id)) {\n    // In-place computation\n    LOG_IF(INFO, Caffe::root_solver())\n        << layer_param->name() << \" -> \" << blob_name << \" (in-place)\";\n    top_vecs_[layer_id].push_back(blobs_[(*blob_name_to_idx)[blob_name]].get());\n    top_id_vecs_[layer_id].push_back((*blob_name_to_idx)[blob_name]);\n  } else if (blob_name_to_idx &&\n             blob_name_to_idx->find(blob_name) != blob_name_to_idx->end()) {\n    // If we are not doing in-place computation but have duplicated blobs,\n    // raise an error.\n    LOG(FATAL) << \"Top blob 
'\" << blob_name\n               << \"' produced by multiple sources.\";\n  } else {\n    // Normal output.\n    if (Caffe::root_solver()) {\n      if (layer_param) {\n        LOG(INFO) << layer_param->name() << \" -> \" << blob_name;\n      } else {\n        LOG(INFO) << \"Input \" << top_id << \" -> \" << blob_name;\n      }\n    }\n    shared_ptr<Blob<Dtype> > blob_pointer(new Blob<Dtype>());\n    const int blob_id = blobs_.size();\n    blobs_.push_back(blob_pointer);\n    blob_names_.push_back(blob_name);\n    blob_need_backward_.push_back(false);\n    if (blob_name_to_idx) { (*blob_name_to_idx)[blob_name] = blob_id; }\n    if (layer_id == -1) {\n      // Set the (explicitly specified) dimensions of the input blob.\n      if (param.input_dim_size() > 0) {\n        blob_pointer->Reshape(param.input_dim(top_id * 4),\n                              param.input_dim(top_id * 4 + 1),\n                              param.input_dim(top_id * 4 + 2),\n                              param.input_dim(top_id * 4 + 3));\n      } else {\n        blob_pointer->Reshape(param.input_shape(top_id));\n      }\n      net_input_blob_indices_.push_back(blob_id);\n      net_input_blobs_.push_back(blob_pointer.get());\n    } else {\n      top_id_vecs_[layer_id].push_back(blob_id);\n      top_vecs_[layer_id].push_back(blob_pointer.get());\n    }\n  }\n  if (available_blobs) { available_blobs->insert(blob_name); }\n}\n\n// Helper for Net::Init: add a new bottom blob to the net.\ntemplate <typename Dtype>\nint Net<Dtype>::AppendBottom(const NetParameter& param, const int layer_id,\n    const int bottom_id, set<string>* available_blobs,\n    map<string, int>* blob_name_to_idx) {\n  const LayerParameter& layer_param = param.layer(layer_id);\n  const string& blob_name = layer_param.bottom(bottom_id);\n  if (available_blobs->find(blob_name) == available_blobs->end()) {\n    LOG(FATAL) << \"Unknown bottom blob '\" << blob_name << \"' (layer '\"\n               << layer_param.name() << \"', bottom 
index \" << bottom_id << \")\";\n  }\n  const int blob_id = (*blob_name_to_idx)[blob_name];\n  LOG_IF(INFO, Caffe::root_solver())\n      << layer_names_[layer_id] << \" <- \" << blob_name;\n  bottom_vecs_[layer_id].push_back(blobs_[blob_id].get());\n  bottom_id_vecs_[layer_id].push_back(blob_id);\n  available_blobs->erase(blob_name);\n  bool propagate_down = true;\n  // Check if the backpropagation on bottom_id should be skipped\n  if (layer_param.propagate_down_size() > 0)\n    propagate_down = layer_param.propagate_down(bottom_id);\n  const bool need_backward = blob_need_backward_[blob_id] &&\n                          propagate_down;\n  bottom_need_backward_[layer_id].push_back(need_backward);\n  return blob_id;\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::AppendParam(const NetParameter& param, const int layer_id,\n                             const int param_id) {\n  const LayerParameter& layer_param = layers_[layer_id]->layer_param();\n  const int param_size = layer_param.param_size();\n  string param_name =\n      (param_size > param_id) ? 
layer_param.param(param_id).name() : \"\";\n  if (param_name.size()) {\n    param_display_names_.push_back(param_name);\n  } else {\n    ostringstream param_display_name;\n    param_display_name << param_id;\n    param_display_names_.push_back(param_display_name.str());\n  }\n  const int net_param_id = params_.size();\n  params_.push_back(layers_[layer_id]->blobs()[param_id]);\n  param_id_vecs_[layer_id].push_back(net_param_id);\n  param_layer_indices_.push_back(make_pair(layer_id, param_id));\n  ParamSpec default_param_spec;\n  const ParamSpec* param_spec = (layer_param.param_size() > param_id) ?\n      &layer_param.param(param_id) : &default_param_spec;\n  if (!param_size || !param_name.size() || (param_name.size() &&\n      param_names_index_.find(param_name) == param_names_index_.end())) {\n    // This layer \"owns\" this parameter blob -- it is either anonymous\n    // (i.e., not given a param_name) or explicitly given a name that we\n    // haven't already seen.\n    param_owners_.push_back(-1);\n    if (param_name.size()) {\n      param_names_index_[param_name] = net_param_id;\n    }\n    const int learnable_param_id = learnable_params_.size();\n    learnable_params_.push_back(params_[net_param_id].get());\n    learnable_param_ids_.push_back(learnable_param_id);\n    has_params_lr_.push_back(param_spec->has_lr_mult());\n    has_params_decay_.push_back(param_spec->has_decay_mult());\n    params_lr_.push_back(param_spec->lr_mult());\n    params_weight_decay_.push_back(param_spec->decay_mult());\n  } else {\n    // Named param blob with name we've seen before: share params\n    const int owner_net_param_id = param_names_index_[param_name];\n    param_owners_.push_back(owner_net_param_id);\n    const pair<int, int>& owner_index =\n        param_layer_indices_[owner_net_param_id];\n    const int owner_layer_id = owner_index.first;\n    const int owner_param_id = owner_index.second;\n    LOG_IF(INFO, Caffe::root_solver()) << \"Sharing parameters '\" << 
param_name\n        << \"' owned by \"\n        << \"layer '\" << layer_names_[owner_layer_id] << \"', param \"\n        << \"index \" << owner_param_id;\n    Blob<Dtype>* this_blob = layers_[layer_id]->blobs()[param_id].get();\n    Blob<Dtype>* owner_blob =\n        layers_[owner_layer_id]->blobs()[owner_param_id].get();\n    const int param_size = layer_param.param_size();\n    if (param_size > param_id && (layer_param.param(param_id).share_mode() ==\n                                  ParamSpec_DimCheckMode_PERMISSIVE)) {\n      // Permissive dimension checking -- only check counts are the same.\n      CHECK_EQ(this_blob->count(), owner_blob->count())\n          << \"Cannot share param '\" << param_name << \"' owned by layer '\"\n          << layer_names_[owner_layer_id] << \"' with layer '\"\n          << layer_names_[layer_id] << \"'; count mismatch.  Owner layer param \"\n          << \"shape is \" << owner_blob->shape_string() << \"; sharing layer \"\n          << \"shape is \" << this_blob->shape_string();\n    } else {\n      // Strict dimension checking -- all dims must be the same.\n      CHECK(this_blob->shape() == owner_blob->shape())\n          << \"Cannot share param '\" << param_name << \"' owned by layer '\"\n          << layer_names_[owner_layer_id] << \"' with layer '\"\n          << layer_names_[layer_id] << \"'; shape mismatch.  
Owner layer param \"\n          << \"shape is \" << owner_blob->shape_string() << \"; sharing layer \"\n          << \"expects shape \" << this_blob->shape_string();\n    }\n    const int learnable_param_id = learnable_param_ids_[owner_net_param_id];\n    learnable_param_ids_.push_back(learnable_param_id);\n    if (param_spec->has_lr_mult()) {\n      if (has_params_lr_[learnable_param_id]) {\n        CHECK_EQ(param_spec->lr_mult(), params_lr_[learnable_param_id])\n            << \"Shared param '\" << param_name << \"' has mismatched lr_mult.\";\n      } else {\n        has_params_lr_[learnable_param_id] = true;\n        params_lr_[learnable_param_id] = param_spec->lr_mult();\n      }\n    }\n    if (param_spec->has_decay_mult()) {\n      if (has_params_decay_[learnable_param_id]) {\n        CHECK_EQ(param_spec->decay_mult(),\n                 params_weight_decay_[learnable_param_id])\n            << \"Shared param '\" << param_name << \"' has mismatched decay_mult.\";\n      } else {\n        has_params_decay_[learnable_param_id] = true;\n        params_weight_decay_[learnable_param_id] = param_spec->decay_mult();\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nDtype Net<Dtype>::ForwardFromTo(int start, int end) {\n  CHECK_GE(start, 0);\n  CHECK_LT(end, layers_.size());\n  Dtype loss = 0;\n  if (debug_info_) {\n    for (int i = 0; i < net_input_blobs_.size(); ++i) {\n      InputDebugInfo(i);\n    }\n  }\n  for (int i = start; i <= end; ++i) {\n    // LOG(ERROR) << \"Forwarding \" << layer_names_[i];\n    Dtype layer_loss = layers_[i]->Forward(bottom_vecs_[i], top_vecs_[i]);\n    loss += layer_loss;\n    if (debug_info_) { ForwardDebugInfo(i); }\n  }\n  return loss;\n}\n\ntemplate <typename Dtype>\nDtype Net<Dtype>::ForwardFrom(int start) {\n  return ForwardFromTo(start, layers_.size() - 1);\n}\n\ntemplate <typename Dtype>\nDtype Net<Dtype>::ForwardTo(int end) {\n  return ForwardFromTo(0, end);\n}\n\ntemplate <typename Dtype>\nconst vector<Blob<Dtype>*>& 
Net<Dtype>::ForwardPrefilled(Dtype* loss) {\n  if (loss != NULL) {\n    *loss = ForwardFromTo(0, layers_.size() - 1);\n  } else {\n    ForwardFromTo(0, layers_.size() - 1);\n  }\n  return net_output_blobs_;\n}\n\ntemplate <typename Dtype>\nconst vector<Blob<Dtype>*>& Net<Dtype>::Forward(\n    const vector<Blob<Dtype>*> & bottom, Dtype* loss) {\n  // Copy bottom to internal bottom\n  for (int i = 0; i < bottom.size(); ++i) {\n    net_input_blobs_[i]->CopyFrom(*bottom[i]);\n  }\n  return ForwardPrefilled(loss);\n}\n\ntemplate <typename Dtype>\nstring Net<Dtype>::Forward(const string& input_blob_protos, Dtype* loss) {\n  BlobProtoVector blob_proto_vec;\n  if (net_input_blobs_.size()) {\n    blob_proto_vec.ParseFromString(input_blob_protos);\n    CHECK_EQ(blob_proto_vec.blobs_size(), net_input_blobs_.size())\n        << \"Incorrect input size.\";\n    for (int i = 0; i < blob_proto_vec.blobs_size(); ++i) {\n      net_input_blobs_[i]->FromProto(blob_proto_vec.blobs(i));\n    }\n  }\n  ForwardPrefilled(loss);\n  blob_proto_vec.Clear();\n  for (int i = 0; i < net_output_blobs_.size(); ++i) {\n    net_output_blobs_[i]->ToProto(blob_proto_vec.add_blobs());\n  }\n  string output;\n  blob_proto_vec.SerializeToString(&output);\n  return output;\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::BackwardFromTo(int start, int end) {\n  CHECK_GE(end, 0);\n  CHECK_LT(start, layers_.size());\n  for (int i = start; i >= end; --i) {\n    if (layer_need_backward_[i]) {\n      layers_[i]->Backward(\n          top_vecs_[i], bottom_need_backward_[i], bottom_vecs_[i]);\n      if (debug_info_) { BackwardDebugInfo(i); }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::InputDebugInfo(const int input_id) {\n  const Blob<Dtype>& blob = *net_input_blobs_[input_id];\n  const string& blob_name = blob_names_[net_input_blob_indices_[input_id]];\n  const Dtype data_abs_val_mean = blob.asum_data() / blob.count();\n  LOG_IF(INFO, Caffe::root_solver())\n      << \"    [Forward] \"\n      << 
\"Input \" << blob_name << \" data: \" << data_abs_val_mean;\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ForwardDebugInfo(const int layer_id) {\n  for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {\n    const Blob<Dtype>& blob = *top_vecs_[layer_id][top_id];\n    const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];\n    const Dtype data_abs_val_mean = blob.asum_data() / blob.count();\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Forward] \"\n        << \"Layer \" << layer_names_[layer_id]\n        << \", top blob \" << blob_name\n        << \" data: \" << data_abs_val_mean;\n  }\n  for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();\n       ++param_id) {\n    const Blob<Dtype>& blob = *layers_[layer_id]->blobs()[param_id];\n    const int net_param_id = param_id_vecs_[layer_id][param_id];\n    const string& blob_name = param_display_names_[net_param_id];\n    const Dtype data_abs_val_mean = blob.asum_data() / blob.count();\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Forward] \"\n        << \"Layer \" << layer_names_[layer_id]\n        << \", param blob \" << blob_name\n        << \" data: \" << data_abs_val_mean;\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::BackwardDebugInfo(const int layer_id) {\n  const vector<Blob<Dtype>*>& bottom_vec = bottom_vecs_[layer_id];\n  for (int bottom_id = 0; bottom_id < bottom_vec.size(); ++bottom_id) {\n    if (!bottom_need_backward_[layer_id][bottom_id]) { continue; }\n    const Blob<Dtype>& blob = *bottom_vec[bottom_id];\n    const string& blob_name = blob_names_[bottom_id_vecs_[layer_id][bottom_id]];\n    const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Backward] \"\n        << \"Layer \" << layer_names_[layer_id]\n        << \", bottom blob \" << blob_name\n        << \" diff: \" << diff_abs_val_mean;\n  }\n  for (int param_id = 0; param_id < 
layers_[layer_id]->blobs().size();\n       ++param_id) {\n    if (!layers_[layer_id]->param_propagate_down(param_id)) { continue; }\n    const Blob<Dtype>& blob = *layers_[layer_id]->blobs()[param_id];\n    const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Backward] \"\n        << \"Layer \" << layer_names_[layer_id]\n        << \", param blob \" << param_id\n        << \" diff: \" << diff_abs_val_mean;\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::UpdateDebugInfo(const int param_id) {\n  const Blob<Dtype>& blob = *params_[param_id];\n  const int param_owner = param_owners_[param_id];\n  const string& layer_name = layer_names_[param_layer_indices_[param_id].first];\n  const string& param_display_name = param_display_names_[param_id];\n  const Dtype diff_abs_val_mean = blob.asum_diff() / blob.count();\n  if (param_owner < 0) {\n    const Dtype data_abs_val_mean = blob.asum_data() / blob.count();\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Update] Layer \" << layer_name\n        << \", param \" << param_display_name\n        << \" data: \" << data_abs_val_mean\n        << \"; diff: \" << diff_abs_val_mean;\n  } else {\n    const string& owner_layer_name =\n        layer_names_[param_layer_indices_[param_owner].first];\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"    [Update] Layer \" << layer_name\n        << \", param blob \" << param_display_name\n        << \" (owned by layer \" << owner_layer_name << \", \" << \"param \"\n        << param_display_names_[param_owners_[param_id]] << \")\"\n        << \" diff: \" << diff_abs_val_mean;\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ShareTrainedLayersWith(const Net* other) {\n  int num_source_layers = other->layers().size();\n  for (int i = 0; i < num_source_layers; ++i) {\n    Layer<Dtype>* source_layer = other->layers()[i].get();\n    const string& source_layer_name = other->layer_names()[i];\n    int 
target_layer_id = 0;\n    while (target_layer_id != layer_names_.size() &&\n        layer_names_[target_layer_id] != source_layer_name) {\n      ++target_layer_id;\n    }\n    if (target_layer_id == layer_names_.size()) {\n      LOG(INFO) << \"Ignoring source layer \" << source_layer_name;\n      continue;\n    }\n    DLOG(INFO) << \"Copying source layer \" << source_layer_name;\n    vector<shared_ptr<Blob<Dtype> > >& target_blobs =\n        layers_[target_layer_id]->blobs();\n    CHECK_EQ(target_blobs.size(), source_layer->blobs().size())\n        << \"Incompatible number of blobs for layer \" << source_layer_name;\n    for (int j = 0; j < target_blobs.size(); ++j) {\n      Blob<Dtype>* source_blob = source_layer->blobs()[j].get();\n      CHECK(target_blobs[j]->shape() == source_blob->shape())\n          << \"Cannot share param \" << j << \" weights from layer '\"\n          << source_layer_name << \"'; shape mismatch.  Source param shape is \"\n          << source_blob->shape_string() << \"; target param shape is \"\n          << target_blobs[j]->shape_string();\n      target_blobs[j]->ShareData(*source_blob);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::BackwardFrom(int start) {\n  BackwardFromTo(start, 0);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::BackwardTo(int end) {\n  BackwardFromTo(layers_.size() - 1, end);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::Backward() {\n  BackwardFromTo(layers_.size() - 1, 0);\n  if (debug_info_) {\n    Dtype asum_data = 0, asum_diff = 0, sumsq_data = 0, sumsq_diff = 0;\n    for (int i = 0; i < learnable_params_.size(); ++i) {\n      asum_data += learnable_params_[i]->asum_data();\n      asum_diff += learnable_params_[i]->asum_diff();\n      sumsq_data += learnable_params_[i]->sumsq_data();\n      sumsq_diff += learnable_params_[i]->sumsq_diff();\n    }\n    const Dtype l2norm_data = std::sqrt(sumsq_data);\n    const Dtype l2norm_diff = std::sqrt(sumsq_diff);\n    LOG(ERROR) << \"    [Backward] All 
net params (data, diff): \"\n               << \"L1 norm = (\" << asum_data << \", \" << asum_diff << \"); \"\n               << \"L2 norm = (\" << l2norm_data << \", \" << l2norm_diff << \")\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::Reshape() {\n  for (int i = 0; i < layers_.size(); ++i) {\n    layers_[i]->Reshape(bottom_vecs_[i], top_vecs_[i]);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::CopyTrainedLayersFrom(const NetParameter& param) {\n  int num_source_layers = param.layer_size();\n  for (int i = 0; i < num_source_layers; ++i) {\n    const LayerParameter& source_layer = param.layer(i);\n    const string& source_layer_name = source_layer.name();\n    int target_layer_id = 0;\n    while (target_layer_id != layer_names_.size() &&\n        layer_names_[target_layer_id] != source_layer_name) {\n      ++target_layer_id;\n    }\n    if (target_layer_id == layer_names_.size()) {\n      LOG(INFO) << \"Ignoring source layer \" << source_layer_name;\n      continue;\n    }\n    DLOG(INFO) << \"Copying source layer \" << source_layer_name;\n    vector<shared_ptr<Blob<Dtype> > >& target_blobs =\n        layers_[target_layer_id]->blobs();\n    CHECK_EQ(target_blobs.size(), source_layer.blobs_size())\n        << \"Incompatible number of blobs for layer \" << source_layer_name;\n    for (int j = 0; j < target_blobs.size(); ++j) {\n      if (!target_blobs[j]->ShapeEquals(source_layer.blobs(j))) {\n        Blob<Dtype> source_blob;\n        const bool kReshape = true;\n        source_blob.FromProto(source_layer.blobs(j), kReshape);\n        LOG(FATAL) << \"Cannot copy param \" << j << \" weights from layer '\"\n            << source_layer_name << \"'; shape mismatch.  Source param shape is \"\n            << source_blob.shape_string() << \"; target param shape is \"\n            << target_blobs[j]->shape_string() << \". 
\"\n            << \"To learn this layer's parameters from scratch rather than \"\n            << \"copying from a saved net, rename the layer.\";\n      }\n      const bool kReshape = false;\n      target_blobs[j]->FromProto(source_layer.blobs(j), kReshape);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::CopyTrainedLayersFrom(const string trained_filename) {\n  if (trained_filename.size() >= 3 &&\n      trained_filename.compare(trained_filename.size() - 3, 3, \".h5\") == 0) {\n    CopyTrainedLayersFromHDF5(trained_filename);\n  } else {\n    CopyTrainedLayersFromBinaryProto(trained_filename);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::CopyTrainedLayersFromBinaryProto(\n    const string trained_filename) {\n  NetParameter param;\n  ReadNetParamsFromBinaryFileOrDie(trained_filename, &param);\n  CopyTrainedLayersFrom(param);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::CopyTrainedLayersFromHDF5(const string trained_filename) {\n  hid_t file_hid = H5Fopen(trained_filename.c_str(), H5F_ACC_RDONLY,\n                           H5P_DEFAULT);\n  CHECK_GE(file_hid, 0) << \"Couldn't open \" << trained_filename;\n  hid_t data_hid = H5Gopen2(file_hid, \"data\", H5P_DEFAULT);\n  CHECK_GE(data_hid, 0) << \"Error reading weights from \" << trained_filename;\n  int num_layers = hdf5_get_num_links(data_hid);\n  for (int i = 0; i < num_layers; ++i) {\n    string source_layer_name = hdf5_get_name_by_idx(data_hid, i);\n    if (!layer_names_index_.count(source_layer_name)) {\n      LOG(INFO) << \"Ignoring source layer \" << source_layer_name;\n      continue;\n    }\n    int target_layer_id = layer_names_index_[source_layer_name];\n    DLOG(INFO) << \"Copying source layer \" << source_layer_name;\n    vector<shared_ptr<Blob<Dtype> > >& target_blobs =\n        layers_[target_layer_id]->blobs();\n    hid_t layer_hid = H5Gopen2(data_hid, source_layer_name.c_str(),\n        H5P_DEFAULT);\n    CHECK_GE(layer_hid, 0)\n        << \"Error reading weights from \" 
<< trained_filename;\n    // Check that source layer doesn't have more params than target layer\n    int num_source_params = hdf5_get_num_links(layer_hid);\n    CHECK_LE(num_source_params, target_blobs.size())\n        << \"Incompatible number of blobs for layer \" << source_layer_name;\n    for (int j = 0; j < target_blobs.size(); ++j) {\n      ostringstream oss;\n      oss << j;\n      string dataset_name = oss.str();\n      int target_net_param_id = param_id_vecs_[target_layer_id][j];\n      if (!H5Lexists(layer_hid, dataset_name.c_str(), H5P_DEFAULT)) {\n        // Target param doesn't exist in source weights...\n        if (param_owners_[target_net_param_id] != -1) {\n          // ...but it's weight-shared in target, so that's fine.\n          continue;\n        } else {\n          LOG(FATAL) << \"Incompatible number of blobs for layer \"\n              << source_layer_name;\n        }\n      }\n      hdf5_load_nd_dataset(layer_hid, dataset_name.c_str(), 0, kMaxBlobAxes,\n          target_blobs[j].get());\n    }\n    H5Gclose(layer_hid);\n  }\n  H5Gclose(data_hid);\n  H5Fclose(file_hid);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ToProto(NetParameter* param, bool write_diff) const {\n  param->Clear();\n  param->set_name(name_);\n  // Add bottom and top\n  for (int i = 0; i < net_input_blob_indices_.size(); ++i) {\n    param->add_input(blob_names_[net_input_blob_indices_[i]]);\n  }\n  DLOG(INFO) << \"Serializing \" << layers_.size() << \" layers\";\n  for (int i = 0; i < layers_.size(); ++i) {\n    LayerParameter* layer_param = param->add_layer();\n    layers_[i]->ToProto(layer_param, write_diff);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ToHDF5(const string& filename, bool write_diff) const {\n  hid_t file_hid = H5Fcreate(filename.c_str(), H5F_ACC_TRUNC, H5P_DEFAULT,\n      H5P_DEFAULT);\n  CHECK_GE(file_hid, 0)\n      << \"Couldn't open \" << filename << \" to save weights.\";\n  hid_t data_hid = H5Gcreate2(file_hid, \"data\", H5P_DEFAULT, 
H5P_DEFAULT,\n      H5P_DEFAULT);\n  CHECK_GE(data_hid, 0) << \"Error saving weights to \" << filename << \".\";\n  hid_t diff_hid = -1;\n  if (write_diff) {\n    diff_hid = H5Gcreate2(file_hid, \"diff\", H5P_DEFAULT, H5P_DEFAULT,\n        H5P_DEFAULT);\n    CHECK_GE(diff_hid, 0) << \"Error saving weights to \" << filename << \".\";\n  }\n  for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {\n    const LayerParameter& layer_param = layers_[layer_id]->layer_param();\n    string layer_name = layer_param.name();\n    hid_t layer_data_hid = H5Gcreate2(data_hid, layer_name.c_str(),\n        H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);\n    CHECK_GE(layer_data_hid, 0)\n        << \"Error saving weights to \" << filename << \".\";\n    hid_t layer_diff_hid = -1;\n    if (write_diff) {\n      layer_diff_hid = H5Gcreate2(diff_hid, layer_name.c_str(),\n          H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);\n      CHECK_GE(layer_diff_hid, 0)\n          << \"Error saving weights to \" << filename << \".\";\n    }\n    int num_params = layers_[layer_id]->blobs().size();\n    for (int param_id = 0; param_id < num_params; ++param_id) {\n      ostringstream dataset_name;\n      dataset_name << param_id;\n      const int net_param_id = param_id_vecs_[layer_id][param_id];\n      if (param_owners_[net_param_id] == -1) {\n        // Only save params that own themselves\n        hdf5_save_nd_dataset<Dtype>(layer_data_hid, dataset_name.str(),\n            *params_[net_param_id]);\n      }\n      if (write_diff) {\n        // Write diffs regardless of weight-sharing\n        hdf5_save_nd_dataset<Dtype>(layer_diff_hid, dataset_name.str(),\n            *params_[net_param_id], true);\n      }\n    }\n    H5Gclose(layer_data_hid);\n    if (write_diff) {\n      H5Gclose(layer_diff_hid);\n    }\n  }\n  H5Gclose(data_hid);\n  if (write_diff) {\n    H5Gclose(diff_hid);\n  }\n  H5Fclose(file_hid);\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::Update() {\n  for (int i = 0; i < 
learnable_params_.size(); ++i) {\n    learnable_params_[i]->Update();\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ClearParamDiffs() {\n  for (int i = 0; i < learnable_params_.size(); ++i) {\n    Blob<Dtype>* blob = learnable_params_[i];\n    switch (Caffe::mode()) {\n    case Caffe::CPU:\n      caffe_set(blob->count(), static_cast<Dtype>(0),\n                blob->mutable_cpu_diff());\n      break;\n    case Caffe::GPU:\n#ifndef CPU_ONLY\n      caffe_gpu_set(blob->count(), static_cast<Dtype>(0),\n                    blob->mutable_gpu_diff());\n#else\n      NO_GPU;\n#endif\n      break;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Net<Dtype>::ShareWeights() {\n  for (int i = 0; i < params_.size(); ++i) {\n    if (param_owners_[i] < 0) { continue; }\n    params_[i]->ShareData(*params_[param_owners_[i]]);\n    params_[i]->ShareDiff(*params_[param_owners_[i]]);\n  }\n}\n\ntemplate <typename Dtype>\nbool Net<Dtype>::has_blob(const string& blob_name) const {\n  return blob_names_index_.find(blob_name) != blob_names_index_.end();\n}\n\ntemplate <typename Dtype>\nconst shared_ptr<Blob<Dtype> > Net<Dtype>::blob_by_name(\n    const string& blob_name) const {\n  shared_ptr<Blob<Dtype> > blob_ptr;\n  if (has_blob(blob_name)) {\n    blob_ptr = blobs_[blob_names_index_.find(blob_name)->second];\n  } else {\n    blob_ptr.reset((Blob<Dtype>*)(NULL));\n    LOG(WARNING) << \"Unknown blob name \" << blob_name;\n  }\n  return blob_ptr;\n}\n\ntemplate <typename Dtype>\nbool Net<Dtype>::has_layer(const string& layer_name) const {\n  return layer_names_index_.find(layer_name) != layer_names_index_.end();\n}\n\ntemplate <typename Dtype>\nconst shared_ptr<Layer<Dtype> > Net<Dtype>::layer_by_name(\n    const string& layer_name) const {\n  shared_ptr<Layer<Dtype> > layer_ptr;\n  if (has_layer(layer_name)) {\n    layer_ptr = layers_[layer_names_index_.find(layer_name)->second];\n  } else {\n    layer_ptr.reset((Layer<Dtype>*)(NULL));\n    LOG(WARNING) << \"Unknown layer name 
\" << layer_name;\n  }\n  return layer_ptr;\n}\n\nINSTANTIATE_CLASS(Net);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/parallel.cpp",
    "content": "#ifndef CPU_ONLY\n#include <cuda_runtime.h>\n#endif\n#include <glog/logging.h>\n#include <stdio.h>\n\n#include <sstream>\n#include <string>\n#include <vector>\n\n#include \"boost/thread.hpp\"\n#include \"caffe/caffe.hpp\"\n#include \"caffe/parallel.hpp\"\n\nnamespace caffe {\n\nenum Op {\n  copy,\n  replace_cpu,\n  replace_gpu,\n  replace_cpu_diff,\n  replace_gpu_diff\n};\n\ntemplate<typename Dtype>\nstatic void apply_buffers(const vector<Blob<Dtype>*>& blobs,\n                          Dtype* buffer, size_t total_size, Op op) {\n  Dtype* ptr = buffer;\n  for (int i = 0; i < blobs.size(); ++i) {\n    int size = blobs[i]->count();\n    switch (op) {\n      case copy: {\n        // Init buffer to current values of blobs\n        caffe_copy(size,\n                   reinterpret_cast<const Dtype*>(blobs[i]->data()->cpu_data()),\n                   ptr);\n        break;\n      }\n      case replace_cpu:\n        blobs[i]->data()->set_cpu_data(ptr);\n        break;\n      case replace_gpu:\n        blobs[i]->data()->set_gpu_data(ptr);\n        break;\n      case replace_cpu_diff:\n        blobs[i]->diff()->set_cpu_data(ptr);\n        break;\n      case replace_gpu_diff:\n        blobs[i]->diff()->set_gpu_data(ptr);\n        break;\n    }\n    ptr += size;\n  }\n  // total_size is at least one byte\n  CHECK_EQ(total_size, (ptr == buffer ? 1 : ptr - buffer));\n}\n\n// Buffer size necessary to store given blobs\ntemplate<typename Dtype>\nstatic size_t total_size(const vector<Blob<Dtype>*>& params) {\n  size_t size = 0;\n  for (int i = 0; i < params.size(); ++i)\n    size += params[i]->count();\n  // Size have at least one byte, otherwise cudaMalloc fails if net has no\n  // learnable parameters.\n  return (size > 0) ? 
size : 1;\n}\n\ntemplate<typename Dtype>\nParams<Dtype>::Params(shared_ptr<Solver<Dtype> > root_solver)\n    : size_(total_size<Dtype>(root_solver->net()->learnable_params())),\n      data_(),\n      diff_() {\n}\n\ntemplate<typename Dtype>\nGPUParams<Dtype>::GPUParams(shared_ptr<Solver<Dtype> > root_solver, int device)\n    : Params<Dtype>(root_solver) {\n#ifndef CPU_ONLY\n  int initial_device;\n  CUDA_CHECK(cudaGetDevice(&initial_device));\n\n  // Allocate device buffers\n  CUDA_CHECK(cudaSetDevice(device));\n  CUDA_CHECK(cudaMalloc(&data_, size_ * sizeof(Dtype)));\n\n  // Copy blob values\n  const vector<Blob<Dtype>*>& net =\n      root_solver->net()->learnable_params();\n  apply_buffers(net, data_, size_, copy);\n\n  CUDA_CHECK(cudaMalloc(&diff_, size_ * sizeof(Dtype)));\n  caffe_gpu_set(size_, Dtype(0), diff_);\n\n  CUDA_CHECK(cudaSetDevice(initial_device));\n#else\n  NO_GPU;\n#endif\n}\n\ntemplate<typename Dtype>\nGPUParams<Dtype>::~GPUParams() {\n#ifndef CPU_ONLY\n  CUDA_CHECK(cudaFree(data_));\n  CUDA_CHECK(cudaFree(diff_));\n#endif\n}\n\ntemplate<typename Dtype>\nvoid GPUParams<Dtype>::configure(Solver<Dtype>* solver) const {\n  const vector<Blob<Dtype>*>& net =\n      solver->net()->learnable_params();\n  apply_buffers(net, data_, size_, replace_gpu);\n  apply_buffers(net, diff_, size_, replace_gpu_diff);\n}\n\nvoid DevicePair::compute(const vector<int> devices, vector<DevicePair>* pairs) {\n#ifndef CPU_ONLY\n  vector<int> remaining(devices);\n\n  // Depth for reduction tree\n  int remaining_depth = static_cast<int>(ceil(log2(remaining.size())));\n\n  // Group GPUs by board\n  for (int d = 0; d < remaining_depth; ++d) {\n    for (int i = 0; i < remaining.size(); ++i) {\n      for (int j = i + 1; j < remaining.size(); ++j) {\n        cudaDeviceProp a, b;\n        CUDA_CHECK(cudaGetDeviceProperties(&a, remaining[i]));\n        CUDA_CHECK(cudaGetDeviceProperties(&b, remaining[j]));\n        if (a.isMultiGpuBoard && b.isMultiGpuBoard) {\n          if 
(a.multiGpuBoardGroupID == b.multiGpuBoardGroupID) {\n            pairs->push_back(DevicePair(remaining[i], remaining[j]));\n            DLOG(INFO) << \"GPU board: \" << remaining[i] << \":\" << remaining[j];\n            remaining.erase(remaining.begin() + j);\n            break;\n          }\n        }\n      }\n    }\n  }\n  ostringstream s;\n  for (int i = 0; i < remaining.size(); ++i) {\n    s << (i ? \", \" : \"\") << remaining[i];\n  }\n  DLOG(INFO) << \"GPUs paired by boards, remaining: \" << s.str();\n\n  // Group by P2P accessibility\n  remaining_depth = ceil(log2(remaining.size()));\n  for (int d = 0; d < remaining_depth; ++d) {\n    for (int i = 0; i < remaining.size(); ++i) {\n      for (int j = i + 1; j < remaining.size(); ++j) {\n        int access;\n        CUDA_CHECK(\n            cudaDeviceCanAccessPeer(&access, remaining[i], remaining[j]));\n        if (access) {\n          pairs->push_back(DevicePair(remaining[i], remaining[j]));\n          DLOG(INFO) << \"P2P pair: \" << remaining[i] << \":\" << remaining[j];\n          remaining.erase(remaining.begin() + j);\n          break;\n        }\n      }\n    }\n  }\n  s.str(\"\");\n  for (int i = 0; i < remaining.size(); ++i) {\n    s << (i ? 
\", \" : \"\") << remaining[i];\n  }\n  DLOG(INFO) << \"GPUs paired by P2P access, remaining: \" << s.str();\n\n  // Group remaining\n  remaining_depth = ceil(log2(remaining.size()));\n  for (int d = 0; d < remaining_depth; ++d) {\n    for (int i = 0; i < remaining.size(); ++i) {\n      pairs->push_back(DevicePair(remaining[i], remaining[i + 1]));\n      DLOG(INFO) << \"Remaining pair: \" << remaining[i] << \":\"\n                 << remaining[i + 1];\n      remaining.erase(remaining.begin() + i + 1);\n    }\n  }\n\n  // Should only be the parent node remaining\n  CHECK_EQ(remaining.size(), 1);\n\n  pairs->insert(pairs->begin(), DevicePair(-1, remaining[0]));\n\n  CHECK(pairs->size() == devices.size());\n  for (int i = 0; i < pairs->size(); ++i) {\n    CHECK((*pairs)[i].parent() != (*pairs)[i].device());\n    for (int j = i + 1; j < pairs->size(); ++j) {\n      CHECK((*pairs)[i].device() != (*pairs)[j].device());\n    }\n  }\n#else\n  NO_GPU;\n#endif\n}\n\n//\n\ntemplate<typename Dtype>\nP2PSync<Dtype>::P2PSync(shared_ptr<Solver<Dtype> > root_solver,\n                        P2PSync<Dtype>* parent, const SolverParameter& param)\n    : GPUParams<Dtype>(root_solver, param.device_id()),\n      parent_(parent),\n      children_(),\n      queue_(),\n      initial_iter_(root_solver->iter()),\n      solver_() {\n#ifndef CPU_ONLY\n  int initial_device;\n  CUDA_CHECK(cudaGetDevice(&initial_device));\n  const int self = param.device_id();\n  CUDA_CHECK(cudaSetDevice(self));\n\n  if (parent == NULL) {\n    solver_ = root_solver;\n  } else {\n    Caffe::set_root_solver(false);\n    solver_.reset(new WorkerSolver<Dtype>(param, root_solver.get()));\n    Caffe::set_root_solver(true);\n  }\n  this->configure(solver_.get());\n  solver_->add_callback(this);\n\n  if (parent) {\n    // Enable p2p access between devices\n    const int peer = parent->solver_->param().device_id();\n    int access;\n    CUDA_CHECK(cudaDeviceCanAccessPeer(&access, self, peer));\n    if (access) {\n      
CUDA_CHECK(cudaDeviceEnablePeerAccess(peer, 0));\n    } else {\n      LOG(INFO)<< \"GPU \" << self << \" does not have p2p access to GPU \" << peer;\n    }\n    // Allocate receiving buffer on parent\n    CUDA_CHECK(cudaSetDevice(peer));\n    CUDA_CHECK(cudaMalloc(&parent_grads_, size_ * sizeof(Dtype)));\n    CUDA_CHECK(cudaSetDevice(self));\n  }\n\n  CUDA_CHECK(cudaSetDevice(initial_device));\n#else\n  NO_GPU;\n#endif\n}\n\ntemplate<typename Dtype>\nP2PSync<Dtype>::~P2PSync() {\n#ifndef CPU_ONLY\n  int initial_device;\n  CUDA_CHECK(cudaGetDevice(&initial_device));\n  const int self = solver_->param().device_id();\n  CUDA_CHECK(cudaSetDevice(self));\n\n  if (parent_) {\n    CUDA_CHECK(cudaFree(parent_grads_));\n    const int peer = parent_->solver_->param().device_id();\n    int access;\n    CUDA_CHECK(cudaDeviceCanAccessPeer(&access, self, peer));\n    if (access) {\n      CUDA_CHECK(cudaDeviceDisablePeerAccess(peer));\n    }\n  }\n\n  CUDA_CHECK(cudaSetDevice(initial_device));\n#endif\n}\n\ntemplate<typename Dtype>\nvoid P2PSync<Dtype>::InternalThreadEntry() {\n  Caffe::SetDevice(solver_->param().device_id());\n  CHECK(Caffe::root_solver());\n  Caffe::set_root_solver(false);\n  // See if there is a defined seed and reset random state if so\n  if (solver_->param().random_seed() >= 0) {\n    // Fetch random seed and modulate by device ID to make sure\n    // everyone doesn't have the same seed.  
We seem to have some\n    // solver instability if we have everyone with the same seed\n    Caffe::set_random_seed(\n        solver_->param().random_seed() + solver_->param().device_id());\n  }\n  solver_->Step(solver_->param().max_iter() - initial_iter_);\n}\n\ntemplate<typename Dtype>\nvoid P2PSync<Dtype>::on_start() {\n#ifndef CPU_ONLY\n#ifdef DEBUG\n  int device;\n  CUDA_CHECK(cudaGetDevice(&device));\n  CHECK(device == solver_->param().device_id());\n#else\n//  CHECK(false);\n#endif\n\n  // Wait for update from parent\n  if (parent_) {\n    P2PSync<Dtype> *parent = queue_.pop();\n    CHECK(parent == parent_);\n  }\n\n  // Update children\n  for (int i = children_.size() - 1; i >= 0; i--) {\n    Dtype* src = data_;\n    Dtype* dst = children_[i]->data_;\n\n#ifdef DEBUG\n    cudaPointerAttributes attributes;\n    CUDA_CHECK(cudaPointerGetAttributes(&attributes, src));\n    CHECK(attributes.device == device);\n    CUDA_CHECK(cudaPointerGetAttributes(&attributes, dst));\n    CHECK(attributes.device == children_[i]->solver_->param().device_id());\n#endif\n\n    CUDA_CHECK(cudaMemcpyAsync(dst, src, size_ * sizeof(Dtype),\n        cudaMemcpyDeviceToDevice, cudaStreamDefault));\n    CUDA_CHECK(cudaStreamSynchronize(cudaStreamDefault));\n    children_[i]->queue_.push(this);\n  }\n#endif\n}\n\ntemplate<typename Dtype>\nvoid P2PSync<Dtype>::on_gradients_ready() {\n#ifndef CPU_ONLY\n#ifdef DEBUG\n  int device;\n  CUDA_CHECK(cudaGetDevice(&device));\n  CHECK(device == solver_->param().device_id());\n#endif\n\n  // Sum children gradients as they appear in the queue\n  for (int i = 0; i < children_.size(); ++i) {\n    P2PSync<Dtype> *child = queue_.pop();\n    Dtype* src = child->parent_grads_;\n    Dtype* dst = diff_;\n\n#ifdef DEBUG\n    bool ok = false;\n    for (int j = 0; j < children_.size(); ++j) {\n      if (child == children_[j]) {\n        ok = true;\n      }\n    }\n    CHECK(ok);\n    cudaPointerAttributes attributes;\n    
CUDA_CHECK(cudaPointerGetAttributes(&attributes, src));\n    CHECK(attributes.device == device);\n    CUDA_CHECK(cudaPointerGetAttributes(&attributes, dst));\n    CHECK(attributes.device == device);\n#endif\n\n    caffe_gpu_add(size_, src, dst, dst);\n  }\n\n  // Send gradients to parent\n  if (parent_) {\n    Dtype* src = diff_;\n    Dtype* dst = parent_grads_;\n\n#ifdef DEBUG\n    cudaPointerAttributes attributes;\n    CUDA_CHECK(cudaPointerGetAttributes(&attributes, src));\n    CHECK(attributes.device == device);\n    CUDA_CHECK(cudaPointerGetAttributes(&attributes, dst));\n    CHECK(attributes.device == parent_->solver_->param().device_id());\n#endif\n\n    CUDA_CHECK(cudaMemcpyAsync(dst, src, size_ * sizeof(Dtype),  //\n        cudaMemcpyDeviceToDevice, cudaStreamDefault));\n    CUDA_CHECK(cudaStreamSynchronize(cudaStreamDefault));\n    parent_->queue_.push(this);\n  } else {\n    // Loss functions divide gradients by the batch size, so to compensate\n    // for split batch, the root solver divides by number of solvers.\n    caffe_gpu_scal(size_, Dtype(1.0 / Caffe::solver_count()), diff_);\n  }\n#endif\n}\n\ntemplate<typename Dtype>\nvoid P2PSync<Dtype>::run(const vector<int>& gpus) {\n  // Pair devices for map-reduce synchronization\n  vector<DevicePair> pairs;\n  DevicePair::compute(gpus, &pairs);\n  ostringstream s;\n  for (int i = 1; i < pairs.size(); ++i) {\n    s << (i == 1 ? \"\" : \", \") << pairs[i].parent() << \":\" << pairs[i].device();\n  }\n  LOG(INFO)<< \"GPUs pairs \" << s.str();\n\n  SolverParameter param(solver_->param());\n  vector<shared_ptr<P2PSync<Dtype> > > syncs(gpus.size());\n\n  // Build the GPU tree by finding the parent for each solver\n  for (int attempts = 0; attempts < pairs.size(); ++attempts) {\n    for (int i = 1; i < pairs.size(); ++i) {\n      if (!syncs[i].get()) {\n        P2PSync<Dtype>* parent = NULL;\n        for (int j = 0; j < syncs.size(); ++j) {\n          P2PSync<Dtype>* sync = j == 0 ? 
this : syncs[j].get();\n          if (sync) {\n            const SolverParameter& p = sync->solver()->param();\n            if (p.device_id() == pairs[i].parent()) {\n              parent = sync;\n            }\n          }\n        }\n        if (parent) {\n          param.set_device_id(pairs[i].device());\n          syncs[i].reset(new P2PSync<Dtype>(solver_, parent, param));\n          parent->children_.push_back((P2PSync<Dtype>*) syncs[i].get());\n        }\n      }\n    }\n  }\n\n  LOG(INFO)<< \"Starting Optimization\";\n\n  for (int i = 1; i < syncs.size(); ++i) {\n    syncs[i]->StartInternalThread();\n  }\n\n  // Run root solver on current thread\n  solver_->Solve();\n\n  for (int i = 1; i < syncs.size(); ++i) {\n    syncs[i]->StopInternalThread();\n  }\n}\n\nINSTANTIATE_CLASS(Params);\nINSTANTIATE_CLASS(GPUParams);\nINSTANTIATE_CLASS(P2PSync);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/proto/caffe.proto",
    "content": "syntax = \"proto2\";\n\npackage caffe;\n\n// Specifies the shape (dimensions) of a Blob.\nmessage BlobShape {\n  repeated int64 dim = 1 [packed = true];\n}\n\nmessage BlobProto {\n  optional BlobShape shape = 7;\n  repeated float data = 5 [packed = true];\n  repeated float diff = 6 [packed = true];\n  repeated double double_data = 8 [packed = true];\n  repeated double double_diff = 9 [packed = true];\n\n  // 4D dimensions -- deprecated.  Use \"shape\" instead.\n  optional int32 num = 1 [default = 0];\n  optional int32 channels = 2 [default = 0];\n  optional int32 height = 3 [default = 0];\n  optional int32 width = 4 [default = 0];\n}\n\n// The BlobProtoVector is simply a way to pass multiple blobproto instances\n// around.\nmessage BlobProtoVector {\n  repeated BlobProto blobs = 1;\n}\n\nmessage Datum {\n  optional int32 channels = 1;\n  optional int32 height = 2;\n  optional int32 width = 3;\n  // the actual image data, in bytes\n  optional bytes data = 4;\n  optional int32 label = 5;\n  // Optionally, the datum could also hold float data.\n  repeated float float_data = 6;\n  // If true data contains an encoded image that need to be decoded\n  optional bool encoded = 7 [default = false];\n}\n\nmessage FillerParameter {\n  // The filler type.\n  optional string type = 1 [default = 'constant'];\n  optional float value = 2 [default = 0]; // the value in constant filler\n  optional float min = 3 [default = 0]; // the min value in uniform filler\n  optional float max = 4 [default = 1]; // the max value in uniform filler\n  optional float mean = 5 [default = 0]; // the mean value in Gaussian filler\n  optional float std = 6 [default = 1]; // the std value in Gaussian filler\n  // The expected number of non-zero output weights for a given input in\n  // Gaussian filler -- the default -1 means don't perform sparsification.\n  optional int32 sparse = 7 [default = -1];\n  // Normalize the filler variance by fan_in, fan_out, or their average.\n  // Applies to 
'xavier' and 'msra' fillers.\n  enum VarianceNorm {\n    FAN_IN = 0;\n    FAN_OUT = 1;\n    AVERAGE = 2;\n  }\n  optional VarianceNorm variance_norm = 8 [default = FAN_IN];\n}\n\nmessage NetParameter {\n  optional string name = 1; // consider giving the network a name\n  // The input blobs to the network.\n  repeated string input = 3;\n  // The shape of the input blobs.\n  repeated BlobShape input_shape = 8;\n\n  // 4D input dimensions -- deprecated.  Use \"shape\" instead.\n  // If specified, for each input blob there should be four\n  // values specifying the num, channels, height and width of the input blob.\n  // Thus, there should be a total of (4 * #input) numbers.\n  repeated int32 input_dim = 4;\n\n  // Whether the network will force every layer to carry out backward operation.\n  // If set False, then whether to carry out backward is determined\n  // automatically according to the net structure and learning rates.\n  optional bool force_backward = 5 [default = false];\n  // The current \"state\" of the network, including the phase, level, and stage.\n  // Some layers may be included/excluded depending on this state and the states\n  // specified in the layers' include and exclude fields.\n  optional NetState state = 6;\n\n  // Print debugging information about results while running Net::Forward,\n  // Net::Backward, and Net::Update.\n  optional bool debug_info = 7 [default = false];\n\n  // The layers that make up the net.  
Each of their configurations, including\n  // connectivity and behavior, is specified as a LayerParameter.\n  repeated LayerParameter layer = 100;  // ID 100 so layers are printed last.\n\n  // DEPRECATED: use 'layer' instead.\n  repeated V1LayerParameter layers = 2;\n}\n\n// NOTE\n// Update the next available ID when you add a new SolverParameter field.\n//\n// SolverParameter next available ID: 41 (last added: type)\nmessage SolverParameter {\n  //////////////////////////////////////////////////////////////////////////////\n  // Specifying the train and test networks\n  //\n  // Exactly one train net must be specified using one of the following fields:\n  //     train_net_param, train_net, net_param, net\n  // One or more test nets may be specified using any of the following fields:\n  //     test_net_param, test_net, net_param, net\n  // If more than one test net field is specified (e.g., both net and\n  // test_net are specified), they will be evaluated in the field order given\n  // above: (1) test_net_param, (2) test_net, (3) net_param/net.\n  // A test_iter must be specified for each test_net.\n  // A test_level and/or a test_stage may also be specified for each test_net.\n  //////////////////////////////////////////////////////////////////////////////\n\n  // Proto filename for the train net, possibly combined with one or more\n  // test nets.\n  optional string net = 24;\n  // Inline train net param, possibly combined with one or more test nets.\n  optional NetParameter net_param = 25;\n\n  optional string train_net = 1; // Proto filename for the train net.\n  repeated string test_net = 2; // Proto filenames for the test nets.\n  optional NetParameter train_net_param = 21; // Inline train net params.\n  repeated NetParameter test_net_param = 22; // Inline test net params.\n\n  // The states for the train/test nets. 
Must be unspecified or\n  // specified once per net.\n  //\n  // By default, all states will have solver = true;\n  // train_state will have phase = TRAIN,\n  // and all test_state's will have phase = TEST.\n  // Other defaults are set according to the NetState defaults.\n  optional NetState train_state = 26;\n  repeated NetState test_state = 27;\n\n  // The number of iterations for each test net.\n  repeated int32 test_iter = 3;\n\n  // The number of iterations between two testing phases.\n  optional int32 test_interval = 4 [default = 0];\n  optional bool test_compute_loss = 19 [default = false];\n  // If true, run an initial test pass before the first iteration,\n  // ensuring memory availability and printing the starting value of the loss.\n  optional bool test_initialization = 32 [default = true];\n  optional float base_lr = 5; // The base learning rate\n  // the number of iterations between displaying info. If display = 0, no info\n  // will be displayed.\n  optional int32 display = 6;\n  // Display the loss averaged over the last average_loss iterations\n  optional int32 average_loss = 33 [default = 1];\n  optional int32 max_iter = 7; // the maximum number of iterations\n  // accumulate gradients over `iter_size` x `batch_size` instances\n  optional int32 iter_size = 36 [default = 1];\n\n  // The learning rate decay policy. The currently implemented learning rate\n  // policies are as follows:\n  //    - fixed: always return base_lr.\n  //    - step: return base_lr * gamma ^ (floor(iter / step))\n  //    - exp: return base_lr * gamma ^ iter\n  //    - inv: return base_lr * (1 + gamma * iter) ^ (- power)\n  //    - multistep: similar to step but it allows non uniform steps defined by\n  //      stepvalue\n  //    - poly: the effective learning rate follows a polynomial decay, to be\n  //      zero by the max_iter. 
return base_lr (1 - iter/max_iter) ^ (power)\n  //    - sigmoid: the effective learning rate follows a sigmoid decay\n  //      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))\n  //\n  // where base_lr, max_iter, gamma, step, stepvalue and power are defined\n  // in the solver parameter protocol buffer, and iter is the current iteration.\n  optional string lr_policy = 8;\n  optional float gamma = 9; // The parameter to compute the learning rate.\n  optional float power = 10; // The parameter to compute the learning rate.\n  optional float momentum = 11; // The momentum value.\n  optional float weight_decay = 12; // The weight decay.\n  // regularization types supported: L1 and L2\n  // controlled by weight_decay\n  optional string regularization_type = 29 [default = \"L2\"];\n  // the stepsize for learning rate policy \"step\"\n  optional int32 stepsize = 13;\n  // the stepsize for learning rate policy \"multistep\"\n  repeated int32 stepvalue = 34;\n\n  // Set clip_gradients to >= 0 to clip parameter gradients to that L2 norm,\n  // whenever their actual L2 norm is larger.\n  optional float clip_gradients = 35 [default = -1];\n\n  optional int32 snapshot = 14 [default = 0]; // The snapshot interval\n  optional string snapshot_prefix = 15; // The prefix for the snapshot.\n  // whether to snapshot diff in the results or not. Snapshotting diff will help\n  // debugging but the final protocol buffer size will be much larger.\n  optional bool snapshot_diff = 16 [default = false];\n  enum SnapshotFormat {\n    HDF5 = 0;\n    BINARYPROTO = 1;\n  }\n  optional SnapshotFormat snapshot_format = 37 [default = BINARYPROTO];\n  // the mode solver will use: 0 for CPU and 1 for GPU. GPU is used by default.\n  enum SolverMode {\n    CPU = 0;\n    GPU = 1;\n  }\n  optional SolverMode solver_mode = 17 [default = GPU];\n  // the device_id that will be used in GPU mode. 
Use device_id = 0 in default.\n  optional int32 device_id = 18 [default = 0];\n  // If non-negative, the seed with which the Solver will initialize the Caffe\n  // random number generator -- useful for reproducible results. Otherwise,\n  // (and by default) initialize using a seed derived from the system clock.\n  optional int64 random_seed = 20 [default = -1];\n\n  // type of the solver\n  optional string type = 40 [default = \"SGD\"];\n\n  // numerical stability for RMSProp, AdaGrad and AdaDelta and Adam\n  optional float delta = 31 [default = 1e-8];\n  // parameters for the Adam solver\n  optional float momentum2 = 39 [default = 0.999];\n\n  // RMSProp decay value\n  // MeanSquare(t) = rms_decay*MeanSquare(t-1) + (1-rms_decay)*SquareGradient(t)\n  optional float rms_decay = 38;\n\n  // If true, print information about the state of the net that may help with\n  // debugging learning problems.\n  optional bool debug_info = 23 [default = false];\n\n  // If false, don't save a snapshot after training finishes.\n  optional bool snapshot_after_train = 28 [default = true];\n\n  // DEPRECATED: old solver enum types, use string instead\n  enum SolverType {\n    SGD = 0;\n    NESTEROV = 1;\n    ADAGRAD = 2;\n    RMSPROP = 3;\n    ADADELTA = 4;\n    ADAM = 5;\n  }\n  // DEPRECATED: use type instead of solver_type\n  optional SolverType solver_type = 30 [default = SGD];\n}\n\n// A message that stores the solver snapshots\nmessage SolverState {\n  optional int32 iter = 1; // The current iteration\n  optional string learned_net = 2; // The file that stores the learned net.\n  repeated BlobProto history = 3; // The history for sgd solvers\n  optional int32 current_step = 4 [default = 0]; // The current step for learning rate\n}\n\nenum Phase {\n   TRAIN = 0;\n   TEST = 1;\n}\n\nmessage NetState {\n  optional Phase phase = 1 [default = TEST];\n  optional int32 level = 2 [default = 0];\n  repeated string stage = 3;\n}\n\nmessage NetStateRule {\n  // Set phase to require the 
NetState have a particular phase (TRAIN or TEST)\n  // to meet this rule.\n  optional Phase phase = 1;\n\n  // Set the minimum and/or maximum levels in which the layer should be used.\n  // Leave undefined to meet the rule regardless of level.\n  optional int32 min_level = 2;\n  optional int32 max_level = 3;\n\n  // Customizable sets of stages to include or exclude.\n  // The net must have ALL of the specified stages and NONE of the specified\n  // \"not_stage\"s to meet the rule.\n  // (Use multiple NetStateRules to specify conjunctions of stages.)\n  repeated string stage = 4;\n  repeated string not_stage = 5;\n}\n\n// Specifies training parameters (multipliers on global learning constants,\n// and the name and other settings used for weight sharing).\nmessage ParamSpec {\n  // The names of the parameter blobs -- useful for sharing parameters among\n  // layers, but never required otherwise.  To share a parameter between two\n  // layers, give it a (non-empty) name.\n  optional string name = 1;\n\n  // Whether to require shared weights to have the same shape, or just the same\n  // count -- defaults to STRICT if unspecified.\n  optional DimCheckMode share_mode = 2;\n  enum DimCheckMode {\n    // STRICT (default) requires that num, channels, height, width each match.\n    STRICT = 0;\n    // PERMISSIVE requires only the count (num*channels*height*width) to match.\n    PERMISSIVE = 1;\n  }\n\n  // The multiplier on the global learning rate for this parameter.\n  optional float lr_mult = 3 [default = 1.0];\n\n  // The multiplier on the global weight decay for this parameter.\n  optional float decay_mult = 4 [default = 1.0];\n}\n\n// NOTE\n// Update the next available ID when you add a new LayerParameter field.\n//\n// LayerParameter next available layer-specific ID: 143 (last added: scale_param)\nmessage LayerParameter {\n  optional string name = 1; // the layer name\n  optional string type = 2; // the layer type\n  repeated string bottom = 3; // the name of each 
bottom blob\n  repeated string top = 4; // the name of each top blob\n\n  // The train / test phase for computation.\n  optional Phase phase = 10;\n\n  // The amount of weight to assign each top blob in the objective.\n  // Each layer assigns a default value, usually of either 0 or 1,\n  // to each top blob.\n  repeated float loss_weight = 5;\n\n  // Specifies training parameters (multipliers on global learning constants,\n  // and the name and other settings used for weight sharing).\n  repeated ParamSpec param = 6;\n\n  // The blobs containing the numeric parameters of the layer.\n  repeated BlobProto blobs = 7;\n\n  // Specifies on which bottoms the backpropagation should be skipped.\n  // The size must be either 0 or equal to the number of bottoms.\n  repeated bool propagate_down = 11;\n\n  // Rules controlling whether and when a layer is included in the network,\n  // based on the current NetState.  You may specify a non-zero number of rules\n  // to include OR exclude, but not both.  If no include or exclude rules are\n  // specified, the layer is always included.  If the current NetState meets\n  // ANY (i.e., one or more) of the specified rules, the layer is\n  // included/excluded.\n  repeated NetStateRule include = 8;\n  repeated NetStateRule exclude = 9;\n\n  // Parameters for data pre-processing.\n  optional TransformationParameter transform_param = 100;\n\n  // Parameters shared by loss layers.\n  optional LossParameter loss_param = 101;\n\n  // Layer type-specific parameters.\n  //\n  // Note: certain layers may have more than one computational engine\n  // for their implementation. 
These layers include an Engine type and\n  // engine parameter for selecting the implementation.\n  // The default for the engine is set by the ENGINE switch at compile-time.\n  optional AccuracyParameter accuracy_param = 102;\n  optional ArgMaxParameter argmax_param = 103;\n  optional BatchNormParameter batch_norm_param = 139;\n  optional BiasParameter bias_param = 141;\n  optional ConcatParameter concat_param = 104;\n  optional ConaddParameter conadd_param = 200;\n  optional ContrastiveLossParameter contrastive_loss_param = 105;\n  optional ConvolutionParameter convolution_param = 106;\n  optional DeformableConvolutionParameter deformable_convolution_param = 999;\n  optional DataParameter data_param = 107;\n  optional DropoutParameter dropout_param = 108;\n  optional DummyDataParameter dummy_data_param = 109;\n  optional EltwiseParameter eltwise_param = 110;\n  optional ELUParameter elu_param = 140;\n  optional EmbedParameter embed_param = 137;\n  optional ExpParameter exp_param = 111;\n  optional FlattenParameter flatten_param = 135;\n  optional HDF5DataParameter hdf5_data_param = 112;\n  optional HDF5OutputParameter hdf5_output_param = 113;\n  optional HingeLossParameter hinge_loss_param = 114;\n  optional ImageDataParameter image_data_param = 115;\n  optional InfogainLossParameter infogain_loss_param = 116;\n  optional InnerProductParameter inner_product_param = 117;\n  optional LogParameter log_param = 134;\n  optional LRNParameter lrn_param = 118;\n  optional MemoryDataParameter memory_data_param = 119;\n  optional MVNParameter mvn_param = 120;\n  optional PoolingParameter pooling_param = 121;\n  optional PowerParameter power_param = 122;\n  optional PReLUParameter prelu_param = 131;\n  optional PythonParameter python_param = 130;\n  optional ReductionParameter reduction_param = 136;\n  optional ReLUParameter relu_param = 123;\n  optional ReshapeParameter reshape_param = 133;\n  optional ROIPoolingParameter roi_pooling_param = 8266711;\n  optional 
ScaleParameter scale_param = 142;\n  optional SigmoidParameter sigmoid_param = 124;\n  optional SmoothL1LossParameter smooth_l1_loss_param = 8266712;\n  optional SoftmaxParameter softmax_param = 125;\n  optional SPPParameter spp_param = 132;\n  optional SliceParameter slice_param = 126;\n  optional TanHParameter tanh_param = 127;\n  optional ThresholdParameter threshold_param = 128;\n  optional TileParameter tile_param = 138;\n  optional WindowDataParameter window_data_param = 129;\n  optional CropParameter crop_param = 144;\n}\n\n// Message that stores parameters used to apply transformation\n// to the data layer's data\nmessage TransformationParameter {\n  // For data pre-processing, we can do simple scaling and subtracting the\n  // data mean, if provided. Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 1 [default = 1];\n  // Specify if we want to randomly mirror data.\n  optional bool mirror = 2 [default = false];\n  // Specify if we would like to randomly crop an image.\n  optional uint32 crop_size = 3 [default = 0];\n  // mean_file and mean_value cannot be specified at the same time\n  optional string mean_file = 4;\n  // if specified, can be repeated once (would subtract it from all the channels)\n  // or can be repeated the same number of times as channels\n  // (would subtract them from the corresponding channel)\n  repeated float mean_value = 5;\n  // Force the decoded image to have 3 color channels.\n  optional bool force_color = 6 [default = false];\n  // Force the decoded image to have 1 color channel.\n  optional bool force_gray = 7 [default = false];\n}\n\n// Message that stores parameters shared by loss layers\nmessage LossParameter {\n  // If specified, ignore instances with the given label.\n  optional int32 ignore_label = 1;\n  // How to normalize the loss for loss layers that aggregate across batches,\n  // spatial dimensions, or other dimensions.  
Currently only implemented in\n  // SoftmaxWithLoss layer.\n  enum NormalizationMode {\n    // Divide by the number of examples in the batch times spatial dimensions.\n    // Outputs that receive the ignore label will NOT be ignored in computing\n    // the normalization factor.\n    FULL = 0;\n    // Divide by the total number of output locations that do not take the \n    // ignore_label.  If ignore_label is not set, this behaves like FULL.\n    VALID = 1;\n    // Divide by the batch size.\n    BATCH_SIZE = 2;\n    // Do not normalize the loss.\n    NONE = 3;\n  }\n  optional NormalizationMode normalization = 3 [default = VALID];\n  // Deprecated.  Ignored if normalization is specified.  If normalization\n  // is not specified, then setting this to false will be equivalent to\n  // normalization = BATCH_SIZE to be consistent with previous behavior.\n  optional bool normalize = 2;\n}\n\n// Messages that store parameters used by individual layer types follow, in\n// alphabetical order.\n\nmessage AccuracyParameter {\n  // When computing accuracy, count as correct by comparing the true label to\n  // the top k scoring classes.  By default, only compare to the top scoring\n  // class (i.e. argmax).\n  optional uint32 top_k = 1 [default = 1];\n\n  // The \"label\" axis of the prediction blob, whose argmax corresponds to the\n  // predicted label -- may be negative to index from the end (e.g., -1 for the\n  // last axis).  
For example, if axis == 1 and the predictions are\n  // (N x C x H x W), the label blob is expected to contain N*H*W ground truth\n  // labels with integer values in {0, 1, ..., C-1}.\n  optional int32 axis = 2 [default = 1];\n\n  // If specified, ignore instances with the given label.\n  optional int32 ignore_label = 3;\n}\n\nmessage ArgMaxParameter {\n  // If true, produce pairs (argmax, maxval)\n  optional bool out_max_val = 1 [default = false];\n  optional uint32 top_k = 2 [default = 1];\n  // The axis along which to maximize -- may be negative to index from the\n  // end (e.g., -1 for the last axis).\n  // By default ArgMaxLayer maximizes over the flattened trailing dimensions\n  // for each index of the first / num dimension.\n  optional int32 axis = 3;\n}\n\nmessage ConcatParameter {\n  // The axis along which to concatenate -- may be negative to index from the\n  // end (e.g., -1 for the last axis).  Other axes must have the\n  // same dimension for all the bottom blobs.\n  // By default, ConcatLayer concatenates blobs along the \"channels\" axis (1).\n  optional int32 axis = 2 [default = 1];\n\n  // DEPRECATED: alias for \"axis\" -- does not support negative indexing.\n  optional uint32 concat_dim = 1 [default = 1];\n}\n\nmessage ConaddParameter {\n}\n\nmessage BatchNormParameter {\n  // If false, accumulate global mean/variance values via a moving average. If\n  // true, use those accumulated values instead of computing mean/variance\n  // across the batch.\n  optional bool use_global_stats = 1;\n  // How much does the moving average decay each iteration?\n  optional float moving_average_fraction = 2 [default = .999];\n  // Small value to add to the variance estimate so that we don't divide by\n  // zero.\n  optional float eps = 3 [default = 1e-5];\n}\n\nmessage BiasParameter {\n  // The first axis of bottom[0] (the first input Blob) along which to apply\n  // bottom[1] (the second input Blob).  
May be negative to index from the end\n  // (e.g., -1 for the last axis).\n  //\n  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output\n  // top[0] will have the same shape, and bottom[1] may have any of the\n  // following shapes (for the given value of axis):\n  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60\n  //    (axis == 1 == -3)          3;     3x40;     3x40x60\n  //    (axis == 2 == -2)                   40;       40x60\n  //    (axis == 3 == -1)                                60\n  // Furthermore, bottom[1] may have the empty shape (regardless of the value of\n  // \"axis\") -- a scalar bias.\n  optional int32 axis = 1 [default = 1];\n\n  // (num_axes is ignored unless just one bottom is given and the bias is\n  // a learned parameter of the layer.  Otherwise, num_axes is determined by the\n  // number of axes by the second bottom.)\n  // The number of axes of the input (bottom[0]) covered by the bias\n  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.\n  // Set num_axes := 0, to add a zero-axis Blob: a scalar.\n  optional int32 num_axes = 2 [default = 1];\n\n  // (filler is ignored unless just one bottom is given and the bias is\n  // a learned parameter of the layer.)\n  // The initialization for the learned bias parameter.\n  // Default is the zero (0) initialization, resulting in the BiasLayer\n  // initially performing the identity operation.\n  optional FillerParameter filler = 3;\n}\n\nmessage ContrastiveLossParameter {\n  // margin for dissimilar pair\n  optional float margin = 1 [default = 1.0];\n  // The first implementation of this cost did not exactly match the cost of\n  // Hadsell et al 2006 -- using (margin - d^2) instead of (margin - d)^2.\n  // legacy_version = false (the default) uses (margin - d)^2 as proposed in the\n  // Hadsell paper. New models should probably use this version.\n  // legacy_version = true uses (margin - d^2). 
This is kept to support /\n  // reproduce existing models and results\n  optional bool legacy_version = 2 [default = false];\n}\n\nmessage ConvolutionParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  optional bool bias_term = 2 [default = true]; // whether to have bias terms\n\n  // Pad, kernel size, and stride are all given as a single value for equal\n  // dimensions in all spatial dimensions, or once per spatial dimension.\n  repeated uint32 pad = 3; // The padding size; defaults to 0\n  repeated uint32 kernel_size = 4; // The kernel size\n  repeated uint32 stride = 6; // The stride; defaults to 1\n  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting\n  // holes. (Kernel dilation is sometimes referred to by its use in the\n  // algorithme à trous from Holschneider et al. 1987.)\n  repeated uint32 dilation = 18; // The dilation; defaults to 1\n\n  // For 2D convolution only, the *_h and *_w versions may also be used to\n  // specify both spatial dimensions.\n  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)\n  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)\n  optional uint32 kernel_h = 11; // The kernel height (2D only)\n  optional uint32 kernel_w = 12; // The kernel width (2D only)\n  optional uint32 stride_h = 13; // The stride height (2D only)\n  optional uint32 stride_w = 14; // The stride width (2D only)\n\n  optional uint32 group = 5 [default = 1]; // The group size for group conv\n\n  optional FillerParameter weight_filler = 7; // The filler for the weight\n  optional FillerParameter bias_filler = 8; // The filler for the bias\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 15 [default = DEFAULT];\n\n  // The axis to interpret as \"channels\" when performing convolution.\n  // Preceding dimensions are treated as independent inputs;\n  // succeeding dimensions are treated as \"spatial\".\n  
// With (N, C, H, W) inputs, and axis == 1 (the default), we perform\n  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for\n  // groups g>1) filters across the spatial axes (H, W) of the input.\n  // With (N, C, D, H, W) inputs, and axis == 1, we perform\n  // N independent 3D convolutions, sliding (C/g)-channels\n  // filters across the spatial axes (D, H, W) of the input.\n  optional int32 axis = 16 [default = 1];\n\n  // Whether to force use of the general ND convolution, even if a specific\n  // implementation for blobs of the appropriate number of spatial dimensions\n  // is available. (Currently, there is only a 2D-specific convolution\n  // implementation; for input blobs with num_axes != 2, this option is\n  // ignored and the ND implementation will be used.)\n  optional bool force_nd_im2col = 17 [default = false];\n}\n\nmessage DeformableConvolutionParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  optional bool bias_term = 2 [default = true]; // whether to have bias terms\n\n  // Pad, kernel size, and stride are all given as a single value for equal\n  // dimensions in all spatial dimensions, or once per spatial dimension.\n  repeated uint32 pad = 3; // The padding size; defaults to 0\n  repeated uint32 kernel_size = 4; // The kernel size\n  repeated uint32 stride = 6; // The stride; defaults to 1\n  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting\n  // holes. (Kernel dilation is sometimes referred to by its use in the\n  // algorithme à trous from Holschneider et al. 
1987.)\n  repeated uint32 dilation = 18; // The dilation; defaults to 1\n\n  // For 2D convolution only, the *_h and *_w versions may also be used to\n  // specify both spatial dimensions.\n  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)\n  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)\n  optional uint32 kernel_h = 11; // The kernel height (2D only)\n  optional uint32 kernel_w = 12; // The kernel width (2D only)\n  optional uint32 stride_h = 13; // The stride height (2D only)\n  optional uint32 stride_w = 14; // The stride width (2D only)\n\n  optional uint32 group = 5 [default = 1]; // The group size for group conv\n  optional uint32 deformable_group = 19 [default = 4];\n\n  optional FillerParameter weight_filler = 7; // The filler for the weight\n  optional FillerParameter bias_filler = 8; // The filler for the bias\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 15 [default = DEFAULT];\n\n  // The axis to interpret as \"channels\" when performing convolution.\n  // Preceding dimensions are treated as independent inputs;\n  // succeeding dimensions are treated as \"spatial\".\n  // With (N, C, H, W) inputs, and axis == 1 (the default), we perform\n  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for\n  // groups g>1) filters across the spatial axes (H, W) of the input.\n  // With (N, C, D, H, W) inputs, and axis == 1, we perform\n  // N independent 3D convolutions, sliding (C/g)-channels\n  // filters across the spatial axes (D, H, W) of the input.\n  optional int32 axis = 16 [default = 1];\n\n  // Whether to force use of the general ND convolution, even if a specific\n  // implementation for blobs of the appropriate number of spatial dimensions\n  // is available. 
(Currently, there is only a 2D-specific convolution\n  // implementation; for input blobs with num_axes != 2, this option is\n  // ignored and the ND implementation will be used.)\n  optional bool force_nd_im2col = 17 [default = false];\n}\n\n\n\n\nmessage DataParameter {\n  enum DB {\n    LEVELDB = 0;\n    LMDB = 1;\n  }\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 4;\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to avoid all asynchronous sgd clients to start at the same point. The skip\n  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not\n  // be larger than the number of keys in the database.\n  // DEPRECATED. Each solver accesses a different subset of the database.\n  optional uint32 rand_skip = 7 [default = 0];\n  optional DB backend = 8 [default = LEVELDB];\n  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do\n  // simple scaling and subtracting the data mean, if provided. Note that the\n  // mean subtraction is always carried out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly\n  // crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // DEPRECATED. See TransformationParameter. Specify if we want to randomly mirror\n  // data.\n  optional bool mirror = 6 [default = false];\n  // Force the encoded image to have 3 color channels\n  optional bool force_encoded_color = 9 [default = false];\n  // Prefetch queue (Number of batches to prefetch to host memory, increase if\n  // data access bandwidth varies).\n  optional uint32 prefetch = 10 [default = 4];\n}\nmessage CropParameter {\n  // To crop, elements of the first bottom are selected to fit the dimensions\n  // of the second, reference bottom. 
The crop is configured by\n  // - the crop `axis` to pick the dimensions for cropping\n  // - the crop `offset` to set the shift for all/each dimension\n  // to align the cropped bottom with the reference bottom.\n  // All dimensions up to but excluding `axis` are preserved, while\n  // the dimensions including and trailing `axis` are cropped.\n  // If only one `offset` is set, then all dimensions are offset by this amount.\n  // Otherwise, the number of offsets must equal the number of cropped axes to\n  // shift the crop in each dimension accordingly.\n  // Note: standard dimensions are N,C,H,W so the default is a spatial crop,\n  // and `axis` may be negative to index from the end (e.g., -1 for the last\n  // axis).\n  optional int32 axis = 1 [default = 2];\n  repeated uint32 offset = 2;\n}\n\n\n\n\nmessage DropoutParameter {\n  optional float dropout_ratio = 1 [default = 0.5]; // dropout ratio\n  optional bool scale_train = 2 [default = true];  // scale train or test phase\n}\n\n// DummyDataLayer fills any number of arbitrarily shaped blobs with random\n// (or constant) data generated by \"Fillers\" (see \"message FillerParameter\").\nmessage DummyDataParameter {\n  // This layer produces N >= 1 top blobs.  DummyDataParameter must specify 1 or N\n  // shape fields, and 0, 1 or N data_fillers.\n  //\n  // If 0 data_fillers are specified, ConstantFiller with a value of 0 is used.\n  // If 1 data_filler is specified, it is applied to all top blobs.  If N are\n  // specified, the ith is applied to the ith top blob.\n  repeated FillerParameter data_filler = 1;\n  repeated BlobShape shape = 6;\n\n  // 4D dimensions -- deprecated.  
Use \"shape\" instead.\n  repeated uint32 num = 2;\n  repeated uint32 channels = 3;\n  repeated uint32 height = 4;\n  repeated uint32 width = 5;\n}\n\nmessage EltwiseParameter {\n  enum EltwiseOp {\n    PROD = 0;\n    SUM = 1;\n    MAX = 2;\n  }\n  optional EltwiseOp operation = 1 [default = SUM]; // element-wise operation\n  repeated float coeff = 2; // blob-wise coefficient for SUM operation\n\n  // Whether to use an asymptotically slower (for >2 inputs) but stabler method\n  // of computing the gradient for the PROD operation. (No effect for SUM op.)\n  optional bool stable_prod_grad = 3 [default = true];\n}\n\n// Message that stores parameters used by ELULayer\nmessage ELUParameter {\n  // Described in:\n  // Clevert, D.-A., Unterthiner, T., & Hochreiter, S. (2015). Fast and Accurate \n  // Deep Network Learning by Exponential Linear Units (ELUs). arXiv\n  optional float alpha = 1 [default = 1];\n}\n\n// Message that stores parameters used by EmbedLayer\nmessage EmbedParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  // The input is given as integers to be interpreted as one-hot\n  // vector indices with dimension num_input.  
Hence num_input should be\n  // 1 greater than the maximum possible input value.\n  optional uint32 input_dim = 2;\n\n  optional bool bias_term = 3 [default = true]; // Whether to use a bias term\n  optional FillerParameter weight_filler = 4; // The filler for the weight\n  optional FillerParameter bias_filler = 5; // The filler for the bias\n\n}\n\n// Message that stores parameters used by ExpLayer\nmessage ExpParameter {\n  // ExpLayer computes outputs y = base ^ (shift + scale * x), for base > 0.\n  // Or if base is set to the default (-1), base is set to e,\n  // so y = exp(shift + scale * x).\n  optional float base = 1 [default = -1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\n/// Message that stores parameters used by FlattenLayer\nmessage FlattenParameter {\n  // The first axis to flatten: all preceding axes are retained in the output.\n  // May be negative to index from the end (e.g., -1 for the last axis).\n  optional int32 axis = 1 [default = 1];\n\n  // The last axis to flatten: all following axes are retained in the output.\n  // May be negative to index from the end (e.g., the default -1 for the last\n  // axis).\n  optional int32 end_axis = 2 [default = -1];\n}\n\n// Message that stores parameters used by HDF5DataLayer\nmessage HDF5DataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 2;\n\n  // Specify whether to shuffle the data.\n  // If shuffle == true, the ordering of the HDF5 files is shuffled,\n  // and the ordering of data within any given HDF5 file is shuffled,\n  // but data between different files are not interleaved; all of a file's\n  // data are output (in a random order) before moving onto another file.\n  optional bool shuffle = 3 [default = false];\n}\n\nmessage HDF5OutputParameter {\n  optional string file_name = 1;\n}\n\nmessage HingeLossParameter {\n  enum Norm {\n    L1 = 1;\n    L2 = 2;\n  
}\n  // Specify the Norm to use, L1 or L2\n  optional Norm norm = 1 [default = L1];\n}\n\nmessage ImageDataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // Specify the batch size.\n  optional uint32 batch_size = 4 [default = 1];\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to avoid having all asynchronous SGD clients start at the same point. The skip\n  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not\n  // be larger than the number of keys in the database.\n  optional uint32 rand_skip = 7 [default = 0];\n  // Whether or not ImageLayer should shuffle the list of files at every epoch.\n  optional bool shuffle = 8 [default = false];\n  // It will also resize images if new_height or new_width are not zero.\n  optional uint32 new_height = 9 [default = 0];\n  optional uint32 new_width = 10 [default = 0];\n  // Specify if the images are color or gray\n  optional bool is_color = 11 [default = true];\n  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do\n  // simple scaling and subtracting the data mean, if provided. Note that the\n  // mean subtraction is always carried out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly\n  // crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // DEPRECATED. See TransformationParameter. 
Specify if we want to randomly mirror\n  // data.\n  optional bool mirror = 6 [default = false];\n  optional string root_folder = 12 [default = \"\"];\n}\n\nmessage InfogainLossParameter {\n  // Specify the infogain matrix source.\n  optional string source = 1;\n}\n\nmessage InnerProductParameter {\n  optional uint32 num_output = 1; // The number of outputs for the layer\n  optional bool bias_term = 2 [default = true]; // whether to have bias terms\n  optional FillerParameter weight_filler = 3; // The filler for the weight\n  optional FillerParameter bias_filler = 4; // The filler for the bias\n\n  // The first axis to be lumped into a single inner product computation;\n  // all preceding axes are retained in the output.\n  // May be negative to index from the end (e.g., -1 for the last axis).\n  optional int32 axis = 5 [default = 1];\n}\n\n// Message that stores parameters used by LogLayer\nmessage LogParameter {\n  // LogLayer computes outputs y = log_base(shift + scale * x), for base > 0.\n  // Or if base is set to the default (-1), base is set to e,\n  // so y = ln(shift + scale * x) = log_e(shift + scale * x)\n  optional float base = 1 [default = -1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\n// Message that stores parameters used by LRNLayer\nmessage LRNParameter {\n  optional uint32 local_size = 1 [default = 5];\n  optional float alpha = 2 [default = 1.];\n  optional float beta = 3 [default = 0.75];\n  enum NormRegion {\n    ACROSS_CHANNELS = 0;\n    WITHIN_CHANNEL = 1;\n  }\n  optional NormRegion norm_region = 4 [default = ACROSS_CHANNELS];\n  optional float k = 5 [default = 1.];\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 6 [default = DEFAULT];\n}\n\nmessage MemoryDataParameter {\n  optional uint32 batch_size = 1;\n  optional uint32 channels = 2;\n  optional uint32 height = 3;\n  optional uint32 width = 4;\n}\n\nmessage MVNParameter {\n  // This 
parameter can be set to false to normalize mean only\n  optional bool normalize_variance = 1 [default = true];\n\n  // This parameter can be set to true to perform DNN-like MVN\n  optional bool across_channels = 2 [default = false];\n\n  // Epsilon for not dividing by zero while normalizing variance\n  optional float eps = 3 [default = 1e-9];\n}\n\nmessage PoolingParameter {\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional PoolMethod pool = 1 [default = MAX]; // The pooling method\n  // Pad, kernel size, and stride are all given as a single value for equal\n  // dimensions in height and width or as Y, X pairs.\n  optional uint32 pad = 4 [default = 0]; // The padding size (equal in Y, X)\n  optional uint32 pad_h = 9 [default = 0]; // The padding height\n  optional uint32 pad_w = 10 [default = 0]; // The padding width\n  optional uint32 kernel_size = 2; // The kernel size (square)\n  optional uint32 kernel_h = 5; // The kernel height\n  optional uint32 kernel_w = 6; // The kernel width\n  optional uint32 stride = 3 [default = 1]; // The stride (equal in Y, X)\n  optional uint32 stride_h = 7; // The stride height\n  optional uint32 stride_w = 8; // The stride width\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 11 [default = DEFAULT];\n  // If global_pooling then it will pool over the size of the bottom by doing\n  // kernel_h = bottom->height and kernel_w = bottom->width\n  optional bool global_pooling = 12 [default = false];\n}\n\nmessage PowerParameter {\n  // PowerLayer computes outputs y = (shift + scale * x) ^ power.\n  optional float power = 1 [default = 1.0];\n  optional float scale = 2 [default = 1.0];\n  optional float shift = 3 [default = 0.0];\n}\n\nmessage PythonParameter {\n  optional string module = 1;\n  optional string layer = 2;\n  // This value is set to the attribute `param_str` of the `PythonLayer` object\n  // in Python before calling the `setup()` method. 
This could be a number,\n  // string, dictionary in Python dict format, JSON, etc. You may parse this\n  // string in the `setup` method and use it in `forward` and `backward`.\n  optional string param_str = 3 [default = ''];\n  // Whether this PythonLayer is shared among worker solvers during data parallelism.\n  // If true, each worker solver runs forward sequentially from this layer.\n  // This value should be set to true if you are using it as a data layer.\n  optional bool share_in_parallel = 4 [default = false];\n}\n\n// Message that stores parameters used by ReductionLayer\nmessage ReductionParameter {\n  enum ReductionOp {\n    SUM = 1;\n    ASUM = 2;\n    SUMSQ = 3;\n    MEAN = 4;\n  }\n\n  optional ReductionOp operation = 1 [default = SUM]; // reduction operation\n\n  // The first axis to reduce to a scalar -- may be negative to index from the\n  // end (e.g., -1 for the last axis).\n  // (Currently, only reduction along ALL \"tail\" axes is supported; reduction\n  // of axis M through N, where N < num_axes - 1, is unsupported.)\n  // Suppose we have an n-axis bottom Blob with shape:\n  //     (d0, d1, d2, ..., d(m-1), dm, d(m+1), ..., d(n-1)).\n  // If axis == m, the output Blob will have shape\n  //     (d0, d1, d2, ..., d(m-1)),\n  // and the ReductionOp operation is performed (d0 * d1 * d2 * ... * d(m-1))\n  // times, each including (dm * d(m+1) * ... * d(n-1)) individual data.\n  // If axis == 0 (the default), the output Blob always has the empty shape\n  // (count 1), performing reduction across the entire input --\n  // often useful for creating new loss functions.\n  optional int32 axis = 2 [default = 0];\n\n  optional float coeff = 3 [default = 1.0]; // coefficient for output\n}\n\n// Message that stores parameters used by ReLULayer\nmessage ReLUParameter {\n  // Allow non-zero slope for negative inputs to speed up optimization\n  // Described in:\n  // Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). 
Rectifier nonlinearities\n  // improve neural network acoustic models. In ICML Workshop on Deep Learning\n  // for Audio, Speech, and Language Processing.\n  optional float negative_slope = 1 [default = 0];\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 2 [default = DEFAULT];\n}\n\nmessage ReshapeParameter {\n  // Specify the output dimensions. If some of the dimensions are set to 0,\n  // the corresponding dimension from the bottom layer is used (unchanged).\n  // Exactly one dimension may be set to -1, in which case its value is\n  // inferred from the count of the bottom blob and the remaining dimensions.\n  // For example, suppose we want to reshape a 2D blob \"input\" with shape 2 x 8:\n  //\n  //   layer {\n  //     type: \"Reshape\" bottom: \"input\" top: \"output\"\n  //     reshape_param { ... }\n  //   }\n  //\n  // If \"input\" is 2D with shape 2 x 8, then the following reshape_param\n  // specifications are all equivalent, producing a 3D blob \"output\" with shape\n  // 2 x 2 x 4:\n  //\n  //   reshape_param { shape { dim:  2  dim: 2  dim:  4 } }\n  //   reshape_param { shape { dim:  0  dim: 2  dim:  4 } }\n  //   reshape_param { shape { dim:  0  dim: 2  dim: -1 } }\n  //   reshape_param { shape { dim: -1  dim: 0  dim:  2 } }\n  //\n  optional BlobShape shape = 1;\n\n  // axis and num_axes control the portion of the bottom blob's shape that are\n  // replaced by (included in) the reshape. 
By default (axis == 0 and\n  // num_axes == -1), the entire bottom blob shape is included in the reshape,\n  // and hence the shape field must specify the entire output shape.\n  //\n  // axis may be non-zero to retain some portion of the beginning of the input\n  // shape (and may be negative to index from the end; e.g., -1 to begin the\n  // reshape after the last axis, including nothing in the reshape,\n  // -2 to include only the last axis, etc.).\n  //\n  // For example, suppose \"input\" is a 2D blob with shape 2 x 8.\n  // Then the following ReshapeLayer specifications are all equivalent,\n  // producing a blob \"output\" with shape 2 x 2 x 4:\n  //\n  //   reshape_param { shape { dim: 2  dim: 2  dim: 4 } }\n  //   reshape_param { shape { dim: 2  dim: 4 } axis:  1 }\n  //   reshape_param { shape { dim: 2  dim: 4 } axis: -3 }\n  //\n  // num_axes specifies the extent of the reshape.\n  // If num_axes >= 0 (and axis >= 0), the reshape will be performed only on\n  // input axes in the range [axis, axis+num_axes].\n  // num_axes may also be -1, the default, to include all remaining axes\n  // (starting from axis).\n  //\n  // For example, suppose \"input\" is a 2D blob with shape 2 x 8.\n  // Then the following ReshapeLayer specifications are equivalent,\n  // producing a blob \"output\" with shape 1 x 2 x 8.\n  //\n  //   reshape_param { shape { dim:  1  dim: 2  dim:  8 } }\n  //   reshape_param { shape { dim:  1  dim: 2  }  num_axes: 1 }\n  //   reshape_param { shape { dim:  1  }  num_axes: 0 }\n  //\n  // On the other hand, these would produce output blob shape 2 x 1 x 8:\n  //\n  //   reshape_param { shape { dim: 2  dim: 1  dim: 8  }  }\n  //   reshape_param { shape { dim: 1 }  axis: 1  num_axes: 0 }\n  //\n  optional int32 axis = 2 [default = 0];\n  optional int32 num_axes = 3 [default = -1];\n}\n\n// Message that stores parameters used by ROIPoolingLayer\nmessage ROIPoolingParameter {\n  // Pad, kernel size, and stride are all given as a single value for 
equal\n  // dimensions in height and width or as Y, X pairs.\n  optional uint32 pooled_h = 1 [default = 0]; // The pooled output height\n  optional uint32 pooled_w = 2 [default = 0]; // The pooled output width\n  // Multiplicative spatial scale factor to translate ROI coords from their\n  // input scale to the scale used when pooling\n  optional float spatial_scale = 3 [default = 1];\n}\n\nmessage ScaleParameter {\n  // The first axis of bottom[0] (the first input Blob) along which to apply\n  // bottom[1] (the second input Blob).  May be negative to index from the end\n  // (e.g., -1 for the last axis).\n  //\n  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output\n  // top[0] will have the same shape, and bottom[1] may have any of the\n  // following shapes (for the given value of axis):\n  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60\n  //    (axis == 1 == -3)          3;     3x40;     3x40x60\n  //    (axis == 2 == -2)                   40;       40x60\n  //    (axis == 3 == -1)                                60\n  // Furthermore, bottom[1] may have the empty shape (regardless of the value of\n  // \"axis\") -- a scalar multiplier.\n  optional int32 axis = 1 [default = 1];\n\n  // (num_axes is ignored unless just one bottom is given and the scale is\n  // a learned parameter of the layer.  
Otherwise, num_axes is determined by the\n  // number of axes of the second bottom.)\n  // The number of axes of the input (bottom[0]) covered by the scale\n  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.\n  // Set num_axes := 0 to multiply with a zero-axis Blob: a scalar.\n  optional int32 num_axes = 2 [default = 1];\n\n  // (filler is ignored unless just one bottom is given and the scale is\n  // a learned parameter of the layer.)\n  // The initialization for the learned scale parameter.\n  // Default is the unit (1) initialization, resulting in the ScaleLayer\n  // initially performing the identity operation.\n  optional FillerParameter filler = 3;\n\n  // Whether to also learn a bias (equivalent to a ScaleLayer+BiasLayer, but\n  // may be more efficient).  Initialized with bias_filler (defaults to 0).\n  optional bool bias_term = 4 [default = false];\n  optional FillerParameter bias_filler = 5;\n}\n\nmessage SigmoidParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n}\n\nmessage SmoothL1LossParameter {\n  // SmoothL1Loss(x) =\n  //   0.5 * (sigma * x) ** 2    -- if |x| < 1.0 / sigma / sigma\n  //   |x| - 0.5 / sigma / sigma -- otherwise\n  optional float sigma = 1 [default = 1];\n}\n\nmessage SliceParameter {\n  // The axis along which to slice -- may be negative to index from the end\n  // (e.g., -1 for the last axis).\n  // By default, SliceLayer slices blobs along the \"channels\" axis (1).\n  optional int32 axis = 3 [default = 1];\n  repeated uint32 slice_point = 2;\n\n  // DEPRECATED: alias for \"axis\" -- does not support negative indexing.\n  optional uint32 slice_dim = 1 [default = 1];\n}\n\n// Message that stores parameters used by SoftmaxLayer, SoftmaxWithLossLayer\nmessage SoftmaxParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n\n  // The axis along 
which to perform the softmax -- may be negative to index\n  // from the end (e.g., -1 for the last axis).\n  // Any other axes will be evaluated as independent softmaxes.\n  optional int32 axis = 2 [default = 1];\n}\n\nmessage TanHParameter {\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 1 [default = DEFAULT];\n}\n\n// Message that stores parameters used by TileLayer\nmessage TileParameter {\n  // The index of the axis to tile.\n  optional int32 axis = 1 [default = 1];\n\n  // The number of copies (tiles) of the blob to output.\n  optional int32 tiles = 2;\n}\n\n// Message that stores parameters used by ThresholdLayer\nmessage ThresholdParameter {\n  optional float threshold = 1 [default = 0]; // Strictly positive values\n}\n\nmessage WindowDataParameter {\n  // Specify the data source.\n  optional string source = 1;\n  // For data pre-processing, we can do simple scaling and subtracting the\n  // data mean, if provided. Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 2 [default = 1];\n  optional string mean_file = 3;\n  // Specify the batch size.\n  optional uint32 batch_size = 4;\n  // Specify if we would like to randomly crop an image.\n  optional uint32 crop_size = 5 [default = 0];\n  // Specify if we want to randomly mirror data.\n  optional bool mirror = 6 [default = false];\n  // Foreground (object) overlap threshold\n  optional float fg_threshold = 7 [default = 0.5];\n  // Background (non-object) overlap threshold\n  optional float bg_threshold = 8 [default = 0.5];\n  // Fraction of batch that should be foreground objects\n  optional float fg_fraction = 9 [default = 0.25];\n  // Amount of contextual padding to add around a window\n  // (used only by the window_data_layer)\n  optional uint32 context_pad = 10 [default = 0];\n  // Mode for cropping out a detection window\n  // warp: cropped window is warped to a fixed size and aspect ratio\n  // square: 
the tightest square around the window is cropped\n  optional string crop_mode = 11 [default = \"warp\"];\n  // cache_images: will load all images in memory for faster access\n  optional bool cache_images = 12 [default = false];\n  // append root_folder to locate images\n  optional string root_folder = 13 [default = \"\"];\n}\n\nmessage SPPParameter {\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional uint32 pyramid_height = 1;\n  optional PoolMethod pool = 2 [default = MAX]; // The pooling method\n  enum Engine {\n    DEFAULT = 0;\n    CAFFE = 1;\n    CUDNN = 2;\n  }\n  optional Engine engine = 6 [default = DEFAULT];\n}\n\n// DEPRECATED: use LayerParameter.\nmessage V1LayerParameter {\n  repeated string bottom = 2;\n  repeated string top = 3;\n  optional string name = 4;\n  repeated NetStateRule include = 32;\n  repeated NetStateRule exclude = 33;\n  enum LayerType {\n    NONE = 0;\n    ABSVAL = 35;\n    ACCURACY = 1;\n    ARGMAX = 30;\n    BNLL = 2;\n    CONCAT = 3;\n    CONADD=50;\n    CONTRASTIVE_LOSS = 37;\n    CONVOLUTION = 4;\n    DEFORMABLECONVOLUTION =99;\n    DATA = 5;\n    DECONVOLUTION = 39;\n    DROPOUT = 6;\n    DUMMY_DATA = 32;\n    EUCLIDEAN_LOSS = 7;\n    ELTWISE = 25;\n    EXP = 38;\n    FLATTEN = 8;\n    HDF5_DATA = 9;\n    HDF5_OUTPUT = 10;\n    HINGE_LOSS = 28;\n    IM2COL = 11;\n    IMAGE_DATA = 12;\n    INFOGAIN_LOSS = 13;\n    INNER_PRODUCT = 14;\n    LRN = 15;\n    MEMORY_DATA = 29;\n    MULTINOMIAL_LOGISTIC_LOSS = 16;\n    MVN = 34;\n    POOLING = 17;\n    POWER = 26;\n    RELU = 18;\n    SIGMOID = 19;\n    SIGMOID_CROSS_ENTROPY_LOSS = 27;\n    SILENCE = 36;\n    SOFTMAX = 20;\n    SOFTMAX_LOSS = 21;\n    SPLIT = 22;\n    SLICE = 33;\n    TANH = 23;\n    WINDOW_DATA = 24;\n    THRESHOLD = 31;\n  }\n  optional LayerType type = 5;\n  repeated BlobProto blobs = 6;\n  repeated string param = 1001;\n  repeated DimCheckMode blob_share_mode = 1002;\n  enum DimCheckMode {\n    STRICT = 0;\n    PERMISSIVE = 1;\n  
}\n  repeated float blobs_lr = 7;\n  repeated float weight_decay = 8;\n  repeated float loss_weight = 35;\n  optional AccuracyParameter accuracy_param = 27;\n  optional ArgMaxParameter argmax_param = 23;\n  optional ConcatParameter concat_param = 9;\n  optional ConaddParameter conadd_param = 99;\n  optional ContrastiveLossParameter contrastive_loss_param = 40;\n  optional ConvolutionParameter convolution_param = 10;\n  optional DeformableConvolutionParameter deformable_convolution_param = 999;\n  optional DataParameter data_param = 11;\n  optional DropoutParameter dropout_param = 12;\n  optional DummyDataParameter dummy_data_param = 26;\n  optional EltwiseParameter eltwise_param = 24;\n  optional ExpParameter exp_param = 41;\n  optional HDF5DataParameter hdf5_data_param = 13;\n  optional HDF5OutputParameter hdf5_output_param = 14;\n  optional HingeLossParameter hinge_loss_param = 29;\n  optional ImageDataParameter image_data_param = 15;\n  optional InfogainLossParameter infogain_loss_param = 16;\n  optional InnerProductParameter inner_product_param = 17;\n  optional LRNParameter lrn_param = 18;\n  optional MemoryDataParameter memory_data_param = 22;\n  optional MVNParameter mvn_param = 34;\n  optional PoolingParameter pooling_param = 19;\n  optional PowerParameter power_param = 21;\n  optional ReLUParameter relu_param = 30;\n  optional SigmoidParameter sigmoid_param = 38;\n  optional SoftmaxParameter softmax_param = 39;\n  optional SliceParameter slice_param = 31;\n  optional TanHParameter tanh_param = 37;\n  optional ThresholdParameter threshold_param = 25;\n  optional WindowDataParameter window_data_param = 20;\n  optional TransformationParameter transform_param = 36;\n  optional LossParameter loss_param = 42;\n  optional V0LayerParameter layer = 1;\n}\n\n// DEPRECATED: V0LayerParameter is the old way of specifying layer parameters\n// in Caffe.  
We keep this message type around for legacy support.\nmessage V0LayerParameter {\n  optional string name = 1; // the layer name\n  optional string type = 2; // the string to specify the layer type\n\n  // Parameters to specify layers with inner products.\n  optional uint32 num_output = 3; // The number of outputs for the layer\n  optional bool biasterm = 4 [default = true]; // whether to have bias terms\n  optional FillerParameter weight_filler = 5; // The filler for the weight\n  optional FillerParameter bias_filler = 6; // The filler for the bias\n\n  optional uint32 pad = 7 [default = 0]; // The padding size\n  optional uint32 kernelsize = 8; // The kernel size\n  optional uint32 group = 9 [default = 1]; // The group size for group conv\n  optional uint32 stride = 10 [default = 1]; // The stride\n  enum PoolMethod {\n    MAX = 0;\n    AVE = 1;\n    STOCHASTIC = 2;\n  }\n  optional PoolMethod pool = 11 [default = MAX]; // The pooling method\n  optional float dropout_ratio = 12 [default = 0.5]; // dropout ratio\n\n  optional uint32 local_size = 13 [default = 5]; // for local response norm\n  optional float alpha = 14 [default = 1.]; // for local response norm\n  optional float beta = 15 [default = 0.75]; // for local response norm\n  optional float k = 22 [default = 1.];\n\n  // For data layers, specify the data source\n  optional string source = 16;\n  // For data pre-processing, we can do simple scaling and subtracting the\n  // data mean, if provided. 
Note that the mean subtraction is always carried\n  // out before scaling.\n  optional float scale = 17 [default = 1];\n  optional string meanfile = 18;\n  // For data layers, specify the batch size.\n  optional uint32 batchsize = 19;\n  // For data layers, specify if we would like to randomly crop an image.\n  optional uint32 cropsize = 20 [default = 0];\n  // For data layers, specify if we want to randomly mirror data.\n  optional bool mirror = 21 [default = false];\n\n  // The blobs containing the numeric parameters of the layer\n  repeated BlobProto blobs = 50;\n  // The ratio that is multiplied on the global learning rate. If you want to\n  // set the learning ratio for one blob, you need to set it for all blobs.\n  repeated float blobs_lr = 51;\n  // The weight decay that is multiplied on the global weight decay.\n  repeated float weight_decay = 52;\n\n  // The rand_skip variable is for the data layer to skip a few data points\n  // to avoid having all asynchronous SGD clients start at the same point. The skip\n  // point would be set as rand_skip * rand(0,1). 
Note that rand_skip should not\n  // be larger than the number of keys in the database.\n  optional uint32 rand_skip = 53 [default = 0];\n\n  // Fields related to detection (det_*)\n  // foreground (object) overlap threshold\n  optional float det_fg_threshold = 54 [default = 0.5];\n  // background (non-object) overlap threshold\n  optional float det_bg_threshold = 55 [default = 0.5];\n  // Fraction of batch that should be foreground objects\n  optional float det_fg_fraction = 56 [default = 0.25];\n\n  // optional bool OBSOLETE_can_clobber = 57 [default = true];\n\n  // Amount of contextual padding to add around a window\n  // (used only by the window_data_layer)\n  optional uint32 det_context_pad = 58 [default = 0];\n\n  // Mode for cropping out a detection window\n  // warp: cropped window is warped to a fixed size and aspect ratio\n  // square: the tightest square around the window is cropped\n  optional string det_crop_mode = 59 [default = \"warp\"];\n\n  // For ReshapeLayer, one needs to specify the new dimensions.\n  optional int32 new_num = 60 [default = 0];\n  optional int32 new_channels = 61 [default = 0];\n  optional int32 new_height = 62 [default = 0];\n  optional int32 new_width = 63 [default = 0];\n\n  // Whether or not ImageLayer should shuffle the list of files at every epoch.\n  // It will also resize images if new_height or new_width are not zero.\n  optional bool shuffle_images = 64 [default = false];\n\n  // For ConcatLayer, one needs to specify the dimension for concatenation, and\n  // the other dimensions must be the same for all the bottom blobs.\n  // By default it will concatenate blobs along the channels dimension.\n  optional uint32 concat_dim = 65 [default = 1];\n\n\n  optional HDF5OutputParameter hdf5_output_param = 1001;\n}\n\nmessage PReLUParameter {\n  // Parametric ReLU described in K. He et al, Delving Deep into Rectifiers:\n  // Surpassing Human-Level Performance on ImageNet Classification, 2015.\n\n  // Initial value of a_i. 
Default is a_i=0.25 for all i.\n  optional FillerParameter filler = 1;\n  // Whether or not slope parameters are shared across channels.\n  optional bool channel_shared = 2 [default = false];\n}\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solver.cpp",
"content": "#include <cstdio>\n\n#include <string>\n#include <vector>\n\n#include \"caffe/solver.hpp\"\n#include \"caffe/util/format.hpp\"\n#include \"caffe/util/hdf5.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nnamespace caffe {\n\ntemplate<typename Dtype>\nvoid Solver<Dtype>::SetActionFunction(ActionCallback func) {\n  action_request_function_ = func;\n}\n\ntemplate<typename Dtype>\nSolverAction::Enum Solver<Dtype>::GetRequestedAction() {\n  if (action_request_function_) {\n    // If the external request function has been set, call it.\n    return action_request_function_();\n  }\n  return SolverAction::NONE;\n}\n\ntemplate <typename Dtype>\nSolver<Dtype>::Solver(const SolverParameter& param, const Solver* root_solver)\n    : net_(), callbacks_(), root_solver_(root_solver),\n      requested_early_exit_(false) {\n  Init(param);\n}\n\ntemplate <typename Dtype>\nSolver<Dtype>::Solver(const string& param_file, const Solver* root_solver)\n    : net_(), callbacks_(), root_solver_(root_solver),\n      requested_early_exit_(false) {\n  SolverParameter param;\n  ReadSolverParamsFromTextFileOrDie(param_file, &param);\n  Init(param);\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Init(const SolverParameter& param) {\n  CHECK(Caffe::root_solver() || root_solver_)\n      << \"root_solver_ needs to be set for all non-root solvers\";\n  LOG_IF(INFO, Caffe::root_solver()) << \"Initializing solver from parameters: \"\n    << std::endl << param.DebugString();\n  param_ = param;\n  CHECK_GE(param_.average_loss(), 1) << \"average_loss should be at least 1.\";\n  CheckSnapshotWritePermissions();\n  if (Caffe::root_solver() && param_.random_seed() >= 0) {\n    Caffe::set_random_seed(param_.random_seed());\n  }\n  // Scaffolding code\n  InitTrainNet();\n  if (Caffe::root_solver()) {\n    InitTestNets();\n    LOG(INFO) << \"Solver scaffolding done.\";\n  }\n  iter_ = 0;\n  current_step_ = 0;\n}\n\ntemplate <typename Dtype>\nvoid 
Solver<Dtype>::InitTrainNet() {\n  const int num_train_nets = param_.has_net() + param_.has_net_param() +\n      param_.has_train_net() + param_.has_train_net_param();\n  const string& field_names = \"net, net_param, train_net, train_net_param\";\n  CHECK_GE(num_train_nets, 1) << \"SolverParameter must specify a train net \"\n      << \"using one of these fields: \" << field_names;\n  CHECK_LE(num_train_nets, 1) << \"SolverParameter must not contain more than \"\n      << \"one of these fields specifying a train_net: \" << field_names;\n  NetParameter net_param;\n  if (param_.has_train_net_param()) {\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Creating training net specified in train_net_param.\";\n    net_param.CopyFrom(param_.train_net_param());\n  } else if (param_.has_train_net()) {\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Creating training net from train_net file: \" << param_.train_net();\n    ReadNetParamsFromTextFileOrDie(param_.train_net(), &net_param);\n  }\n  if (param_.has_net_param()) {\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Creating training net specified in net_param.\";\n    net_param.CopyFrom(param_.net_param());\n  }\n  if (param_.has_net()) {\n    LOG_IF(INFO, Caffe::root_solver())\n        << \"Creating training net from net file: \" << param_.net();\n    ReadNetParamsFromTextFileOrDie(param_.net(), &net_param);\n  }\n  // Set the correct NetState.  
We start with the solver defaults (lowest\n  // precedence); then, merge in any NetState specified by the net_param itself;\n  // finally, merge in any NetState specified by the train_state (highest\n  // precedence).\n  NetState net_state;\n  net_state.set_phase(TRAIN);\n  net_state.MergeFrom(net_param.state());\n  net_state.MergeFrom(param_.train_state());\n  net_param.mutable_state()->CopyFrom(net_state);\n  if (Caffe::root_solver()) {\n    net_.reset(new Net<Dtype>(net_param));\n  } else {\n    net_.reset(new Net<Dtype>(net_param, root_solver_->net_.get()));\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::InitTestNets() {\n  CHECK(Caffe::root_solver());\n  const bool has_net_param = param_.has_net_param();\n  const bool has_net_file = param_.has_net();\n  const int num_generic_nets = has_net_param + has_net_file;\n  CHECK_LE(num_generic_nets, 1)\n      << \"Both net_param and net_file may not be specified.\";\n  const int num_test_net_params = param_.test_net_param_size();\n  const int num_test_net_files = param_.test_net_size();\n  const int num_test_nets = num_test_net_params + num_test_net_files;\n  if (num_generic_nets) {\n      CHECK_GE(param_.test_iter_size(), num_test_nets)\n          << \"test_iter must be specified for each test network.\";\n  } else {\n      CHECK_EQ(param_.test_iter_size(), num_test_nets)\n          << \"test_iter must be specified for each test network.\";\n  }\n  // If we have a generic net (specified by net or net_param, rather than\n  // test_net or test_net_param), we may have an unlimited number of actual\n  // test networks -- the actual number is given by the number of remaining\n  // test_iters after any test nets specified by test_net_param and/or test_net\n  // are evaluated.\n  const int num_generic_net_instances = param_.test_iter_size() - num_test_nets;\n  const int num_test_net_instances = num_test_nets + num_generic_net_instances;\n  if (param_.test_state_size()) {\n    CHECK_EQ(param_.test_state_size(), 
num_test_net_instances)\n        << \"test_state must be unspecified or specified once per test net.\";\n  }\n  if (num_test_net_instances) {\n    CHECK_GT(param_.test_interval(), 0);\n  }\n  int test_net_id = 0;\n  vector<string> sources(num_test_net_instances);\n  vector<NetParameter> net_params(num_test_net_instances);\n  for (int i = 0; i < num_test_net_params; ++i, ++test_net_id) {\n      sources[test_net_id] = \"test_net_param\";\n      net_params[test_net_id].CopyFrom(param_.test_net_param(i));\n  }\n  for (int i = 0; i < num_test_net_files; ++i, ++test_net_id) {\n      sources[test_net_id] = \"test_net file: \" + param_.test_net(i);\n      ReadNetParamsFromTextFileOrDie(param_.test_net(i),\n          &net_params[test_net_id]);\n  }\n  const int remaining_test_nets = param_.test_iter_size() - test_net_id;\n  if (has_net_param) {\n    for (int i = 0; i < remaining_test_nets; ++i, ++test_net_id) {\n      sources[test_net_id] = \"net_param\";\n      net_params[test_net_id].CopyFrom(param_.net_param());\n    }\n  }\n  if (has_net_file) {\n    for (int i = 0; i < remaining_test_nets; ++i, ++test_net_id) {\n      sources[test_net_id] = \"net file: \" + param_.net();\n      ReadNetParamsFromTextFileOrDie(param_.net(), &net_params[test_net_id]);\n    }\n  }\n  test_nets_.resize(num_test_net_instances);\n  for (int i = 0; i < num_test_net_instances; ++i) {\n    // Set the correct NetState.  
We start with the solver defaults (lowest\n    // precedence); then, merge in any NetState specified by the net_param\n    // itself; finally, merge in any NetState specified by the test_state\n    // (highest precedence).\n    NetState net_state;\n    net_state.set_phase(TEST);\n    net_state.MergeFrom(net_params[i].state());\n    if (param_.test_state_size()) {\n      net_state.MergeFrom(param_.test_state(i));\n    }\n    net_params[i].mutable_state()->CopyFrom(net_state);\n    LOG(INFO)\n        << \"Creating test net (#\" << i << \") specified by \" << sources[i];\n    if (Caffe::root_solver()) {\n      test_nets_[i].reset(new Net<Dtype>(net_params[i]));\n    } else {\n      test_nets_[i].reset(new Net<Dtype>(net_params[i],\n          root_solver_->test_nets_[i].get()));\n    }\n    test_nets_[i]->set_debug_info(param_.debug_info());\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Step(int iters) {\n  vector<Blob<Dtype>*> bottom_vec;\n  const int start_iter = iter_;\n  const int stop_iter = iter_ + iters;\n  int average_loss = this->param_.average_loss();\n  losses_.clear();\n  smoothed_loss_ = 0;\n\n  while (iter_ < stop_iter) {\n    // zero-init the params\n    net_->ClearParamDiffs();\n    if (param_.test_interval() && iter_ % param_.test_interval() == 0\n        && (iter_ > 0 || param_.test_initialization())\n        && Caffe::root_solver()) {\n      TestAll();\n      if (requested_early_exit_) {\n        // Break out of the while loop because stop was requested while testing.\n        break;\n      }\n    }\n\n    for (int i = 0; i < callbacks_.size(); ++i) {\n      callbacks_[i]->on_start();\n    }\n    const bool display = param_.display() && iter_ % param_.display() == 0;\n    net_->set_debug_info(display && param_.debug_info());\n    // accumulate the loss and gradient\n    Dtype loss = 0;\n    for (int i = 0; i < param_.iter_size(); ++i) {\n      loss += net_->ForwardBackward(bottom_vec);\n    }\n    loss /= param_.iter_size();\n    // 
average the loss across iterations for smoothed reporting\n    UpdateSmoothedLoss(loss, start_iter, average_loss);\n    if (display) {\n      LOG_IF(INFO, Caffe::root_solver()) << \"Iteration \" << iter_\n          << \", loss = \" << smoothed_loss_;\n      const vector<Blob<Dtype>*>& result = net_->output_blobs();\n      int score_index = 0;\n      for (int j = 0; j < result.size(); ++j) {\n        const Dtype* result_vec = result[j]->cpu_data();\n        const string& output_name =\n            net_->blob_names()[net_->output_blob_indices()[j]];\n        const Dtype loss_weight =\n            net_->blob_loss_weights()[net_->output_blob_indices()[j]];\n        for (int k = 0; k < result[j]->count(); ++k) {\n          ostringstream loss_msg_stream;\n          if (loss_weight) {\n            loss_msg_stream << \" (* \" << loss_weight\n                            << \" = \" << loss_weight * result_vec[k] << \" loss)\";\n          }\n          LOG_IF(INFO, Caffe::root_solver()) << \"    Train net output #\"\n              << score_index++ << \": \" << output_name << \" = \"\n              << result_vec[k] << loss_msg_stream.str();\n        }\n      }\n    }\n    for (int i = 0; i < callbacks_.size(); ++i) {\n      callbacks_[i]->on_gradients_ready();\n    }\n    ApplyUpdate();\n\n    // Increment the internal iter_ counter -- its value should always indicate\n    // the number of times the weights have been updated.\n    ++iter_;\n\n    SolverAction::Enum request = GetRequestedAction();\n\n    // Save a snapshot if needed.\n    if ((param_.snapshot()\n         && iter_ % param_.snapshot() == 0\n         && Caffe::root_solver()) ||\n         (request == SolverAction::SNAPSHOT)) {\n      Snapshot();\n    }\n    if (SolverAction::STOP == request) {\n      requested_early_exit_ = true;\n      // Break out of training loop.\n      break;\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Solve(const char* resume_file) {\n  CHECK(Caffe::root_solver());\n  
LOG(INFO) << \"Solving \" << net_->name();\n  LOG(INFO) << \"Learning Rate Policy: \" << param_.lr_policy();\n\n  // Initialize to false every time we start solving.\n  requested_early_exit_ = false;\n\n  if (resume_file) {\n    LOG(INFO) << \"Restoring previous solver status from \" << resume_file;\n    Restore(resume_file);\n  }\n\n  // For a network that is trained by the solver, no bottom or top vecs\n  // should be given, and we will just provide dummy vecs.\n  int start_iter = iter_;\n  Step(param_.max_iter() - iter_);\n  // If we haven't already, save a snapshot after optimization, unless\n  // overridden by setting snapshot_after_train := false\n  if (param_.snapshot_after_train()\n      && (!param_.snapshot() || iter_ % param_.snapshot() != 0)) {\n    Snapshot();\n  }\n  if (requested_early_exit_) {\n    LOG(INFO) << \"Optimization stopped early.\";\n    return;\n  }\n  // After the optimization is done, run an additional train and test pass to\n  // display the train and test loss/outputs if appropriate (based on the\n  // display and test_interval settings, respectively).  
Unlike in the rest of\n  // training, for the train net we only run a forward pass as we've already\n  // updated the parameters \"max_iter\" times -- this final pass is only done to\n  // display the loss, which is computed in the forward pass.\n  if (param_.display() && iter_ % param_.display() == 0) {\n    int average_loss = this->param_.average_loss();\n    Dtype loss;\n    net_->ForwardPrefilled(&loss);\n\n    UpdateSmoothedLoss(loss, start_iter, average_loss);\n\n    LOG(INFO) << \"Iteration \" << iter_ << \", loss = \" << smoothed_loss_;\n  }\n  if (param_.test_interval() && iter_ % param_.test_interval() == 0) {\n    TestAll();\n  }\n  LOG(INFO) << \"Optimization Done.\";\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::TestAll() {\n  for (int test_net_id = 0;\n       test_net_id < test_nets_.size() && !requested_early_exit_;\n       ++test_net_id) {\n    Test(test_net_id);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Test(const int test_net_id) {\n  CHECK(Caffe::root_solver());\n  LOG(INFO) << \"Iteration \" << iter_\n            << \", Testing net (#\" << test_net_id << \")\";\n  CHECK_NOTNULL(test_nets_[test_net_id].get())->\n      ShareTrainedLayersWith(net_.get());\n  vector<Dtype> test_score;\n  vector<int> test_score_output_id;\n  vector<Blob<Dtype>*> bottom_vec;\n  const shared_ptr<Net<Dtype> >& test_net = test_nets_[test_net_id];\n  Dtype loss = 0;\n  for (int i = 0; i < param_.test_iter(test_net_id); ++i) {\n    SolverAction::Enum request = GetRequestedAction();\n    // Check to see if stoppage of testing/training has been requested.\n    while (request != SolverAction::NONE) {\n        if (SolverAction::SNAPSHOT == request) {\n          Snapshot();\n        } else if (SolverAction::STOP == request) {\n          requested_early_exit_ = true;\n        }\n        request = GetRequestedAction();\n    }\n    if (requested_early_exit_) {\n      // break out of test loop.\n      break;\n    }\n\n    Dtype iter_loss;\n    const 
vector<Blob<Dtype>*>& result =\n        test_net->Forward(bottom_vec, &iter_loss);\n    if (param_.test_compute_loss()) {\n      loss += iter_loss;\n    }\n    if (i == 0) {\n      for (int j = 0; j < result.size(); ++j) {\n        const Dtype* result_vec = result[j]->cpu_data();\n        for (int k = 0; k < result[j]->count(); ++k) {\n          test_score.push_back(result_vec[k]);\n          test_score_output_id.push_back(j);\n        }\n      }\n    } else {\n      int idx = 0;\n      for (int j = 0; j < result.size(); ++j) {\n        const Dtype* result_vec = result[j]->cpu_data();\n        for (int k = 0; k < result[j]->count(); ++k) {\n          test_score[idx++] += result_vec[k];\n        }\n      }\n    }\n  }\n  if (requested_early_exit_) {\n    LOG(INFO)     << \"Test interrupted.\";\n    return;\n  }\n  if (param_.test_compute_loss()) {\n    loss /= param_.test_iter(test_net_id);\n    LOG(INFO) << \"Test loss: \" << loss;\n  }\n  for (int i = 0; i < test_score.size(); ++i) {\n    const int output_blob_index =\n        test_net->output_blob_indices()[test_score_output_id[i]];\n    const string& output_name = test_net->blob_names()[output_blob_index];\n    const Dtype loss_weight = test_net->blob_loss_weights()[output_blob_index];\n    ostringstream loss_msg_stream;\n    const Dtype mean_score = test_score[i] / param_.test_iter(test_net_id);\n    if (loss_weight) {\n      loss_msg_stream << \" (* \" << loss_weight\n                      << \" = \" << loss_weight * mean_score << \" loss)\";\n    }\n    LOG(INFO) << \"    Test net output #\" << i << \": \" << output_name << \" = \"\n              << mean_score << loss_msg_stream.str();\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Snapshot() {\n  CHECK(Caffe::root_solver());\n  string model_filename;\n  switch (param_.snapshot_format()) {\n  case caffe::SolverParameter_SnapshotFormat_BINARYPROTO:\n    model_filename = SnapshotToBinaryProto();\n    break;\n  case 
caffe::SolverParameter_SnapshotFormat_HDF5:\n    model_filename = SnapshotToHDF5();\n    break;\n  default:\n    LOG(FATAL) << \"Unsupported snapshot format.\";\n  }\n\n  SnapshotSolverState(model_filename);\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::CheckSnapshotWritePermissions() {\n  if (Caffe::root_solver() && param_.snapshot()) {\n    CHECK(param_.has_snapshot_prefix())\n        << \"In solver params, snapshot is specified but snapshot_prefix is not\";\n    string probe_filename = SnapshotFilename(\".tempfile\");\n    std::ofstream probe_ofs(probe_filename.c_str());\n    if (probe_ofs.good()) {\n      probe_ofs.close();\n      std::remove(probe_filename.c_str());\n    } else {\n      LOG(FATAL) << \"Cannot write to snapshot prefix '\"\n          << param_.snapshot_prefix() << \"'.  Make sure \"\n          << \"that the directory exists and is writeable.\";\n    }\n  }\n}\n\ntemplate <typename Dtype>\nstring Solver<Dtype>::SnapshotFilename(const string extension) {\n  return param_.snapshot_prefix() + \"_iter_\" + caffe::format_int(iter_)\n    + extension;\n}\n\ntemplate <typename Dtype>\nstring Solver<Dtype>::SnapshotToBinaryProto() {\n  string model_filename = SnapshotFilename(\".caffemodel\");\n  LOG(INFO) << \"Snapshotting to binary proto file \" << model_filename;\n  NetParameter net_param;\n  net_->ToProto(&net_param, param_.snapshot_diff());\n  WriteProtoToBinaryFile(net_param, model_filename);\n  return model_filename;\n}\n\ntemplate <typename Dtype>\nstring Solver<Dtype>::SnapshotToHDF5() {\n  string model_filename = SnapshotFilename(\".caffemodel.h5\");\n  LOG(INFO) << \"Snapshotting to HDF5 file \" << model_filename;\n  net_->ToHDF5(model_filename, param_.snapshot_diff());\n  return model_filename;\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::Restore(const char* state_file) {\n  CHECK(Caffe::root_solver());\n  string state_filename(state_file);\n  if (state_filename.size() >= 3 &&\n      state_filename.compare(state_filename.size() 
- 3, 3, \".h5\") == 0) {\n    RestoreSolverStateFromHDF5(state_filename);\n  } else {\n    RestoreSolverStateFromBinaryProto(state_filename);\n  }\n}\n\ntemplate <typename Dtype>\nvoid Solver<Dtype>::UpdateSmoothedLoss(Dtype loss, int start_iter,\n    int average_loss) {\n  if (losses_.size() < average_loss) {\n    losses_.push_back(loss);\n    int size = losses_.size();\n    smoothed_loss_ = (smoothed_loss_ * (size - 1) + loss) / size;\n  } else {\n    int idx = (iter_ - start_iter) % average_loss;\n    smoothed_loss_ += (loss - losses_[idx]) / average_loss;\n    losses_[idx] = loss;\n  }\n}\n\nINSTANTIATE_CLASS(Solver);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adadelta_solver.cpp",
    "content": "#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid AdaDeltaSolver<Dtype>::AdaDeltaPreSolve() {\n  // Add the extra history entries for AdaDelta after those from\n  // SGDSolver::PreSolve\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  for (int i = 0; i < net_params.size(); ++i) {\n        const vector<int>& shape = net_params[i]->shape();\n        this->history_.push_back(\n                shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n  }\n}\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid adadelta_update_gpu(int N, Dtype* g, Dtype* h, Dtype* h2, Dtype momentum,\n    Dtype delta, Dtype local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid AdaDeltaSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n  Dtype delta = this->param_.delta();\n  Dtype momentum = this->param_.momentum();\n  Dtype local_rate = rate * net_params_lr[param_id];\n  size_t update_history_offset = net_params.size();\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    // compute square of gradient in update\n    caffe_powx(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(), Dtype(2),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // update history of gradients\n    caffe_cpu_axpby(net_params[param_id]->count(), Dtype(1) - momentum,\n        this->update_[param_id]->cpu_data(), momentum,\n        this->history_[param_id]->mutable_cpu_data());\n\n    // add delta to history to guard against dividing by zero later\n    caffe_set(net_params[param_id]->count(), delta,\n        this->temp_[param_id]->mutable_cpu_data());\n\n    caffe_add(net_params[param_id]->count(),\n        this->temp_[param_id]->cpu_data(),\n        this->history_[update_history_offset + param_id]->cpu_data(),\n        
this->update_[param_id]->mutable_cpu_data());\n\n    caffe_add(net_params[param_id]->count(),\n        this->temp_[param_id]->cpu_data(),\n        this->history_[param_id]->cpu_data(),\n        this->temp_[param_id]->mutable_cpu_data());\n\n    // divide history of updates by history of gradients\n    caffe_div(net_params[param_id]->count(),\n        this->update_[param_id]->cpu_data(),\n        this->temp_[param_id]->cpu_data(),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // jointly compute the RMS of both for update and gradient history\n    caffe_powx(net_params[param_id]->count(),\n        this->update_[param_id]->cpu_data(), Dtype(0.5),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // compute the update\n    caffe_mul(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(),\n        this->update_[param_id]->cpu_data(),\n        net_params[param_id]->mutable_cpu_diff());\n\n    // compute square of update\n    caffe_powx(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(), Dtype(2),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // update history of updates\n    caffe_cpu_axpby(net_params[param_id]->count(), Dtype(1) - momentum,\n        this->update_[param_id]->cpu_data(), momentum,\n        this->history_[update_history_offset + param_id]->mutable_cpu_data());\n\n    // apply learning rate\n    caffe_cpu_scale(net_params[param_id]->count(), local_rate,\n        net_params[param_id]->cpu_diff(),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    adadelta_update_gpu(net_params[param_id]->count(),\n        net_params[param_id]->mutable_gpu_diff(),\n        this->history_[param_id]->mutable_gpu_data(),\n        this->history_[update_history_offset + param_id]->mutable_gpu_data(),\n        momentum, delta, local_rate);\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: 
\" << Caffe::mode();\n  }\n}\n\nINSTANTIATE_CLASS(AdaDeltaSolver);\nREGISTER_SOLVER_CLASS(AdaDelta);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adadelta_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void AdaDeltaUpdate(int N, Dtype* g, Dtype* h, Dtype* h2,\n    Dtype momentum, Dtype delta, Dtype local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    float gi = g[i];\n    float hi = h[i] = momentum * h[i] + (1-momentum) * gi * gi;\n    gi = gi * sqrt((h2[i] + delta) / (hi + delta));\n    h2[i] = momentum * h2[i] + (1-momentum) * gi * gi;\n    g[i] = local_rate * gi;\n  }\n}\ntemplate <typename Dtype>\nvoid adadelta_update_gpu(int N, Dtype* g, Dtype* h, Dtype* h2, Dtype momentum,\n    Dtype delta, Dtype local_rate) {\n  AdaDeltaUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, h, h2, momentum, delta, local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void adadelta_update_gpu<float>(int , float*, float*, float*,\n    float, float, float);\ntemplate void adadelta_update_gpu<double>(int, double*, double*, double*,\n    double, double, double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adagrad_solver.cpp",
    "content": "#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n\nnamespace caffe {\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid adagrad_update_gpu(int N, Dtype* g, Dtype* h, Dtype delta,\n    Dtype local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid AdaGradSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  CHECK(Caffe::root_solver());\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n  Dtype delta = this->param_.delta();\n  Dtype local_rate = rate * net_params_lr[param_id];\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    // compute square of gradient in update\n    caffe_powx(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(), Dtype(2),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // update history\n    caffe_add(net_params[param_id]->count(),\n        this->update_[param_id]->cpu_data(),\n        this->history_[param_id]->cpu_data(),\n        this->history_[param_id]->mutable_cpu_data());\n\n    // prepare update\n    caffe_powx(net_params[param_id]->count(),\n              this->history_[param_id]->cpu_data(), Dtype(0.5),\n              this->update_[param_id]->mutable_cpu_data());\n\n    caffe_add_scalar(net_params[param_id]->count(),\n              delta, this->update_[param_id]->mutable_cpu_data());\n\n    caffe_div(net_params[param_id]->count(),\n              net_params[param_id]->cpu_diff(),\n              this->update_[param_id]->cpu_data(),\n              this->update_[param_id]->mutable_cpu_data());\n\n    // scale and copy\n    caffe_cpu_axpby(net_params[param_id]->count(), local_rate,\n        this->update_[param_id]->cpu_data(), Dtype(0),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    adagrad_update_gpu(net_params[param_id]->count(),\n        net_params[param_id]->mutable_gpu_diff(),\n        
this->history_[param_id]->mutable_gpu_data(), delta, local_rate);\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\nINSTANTIATE_CLASS(AdaGradSolver);\nREGISTER_SOLVER_CLASS(AdaGrad);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adagrad_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void AdaGradUpdate(int N, Dtype* g, Dtype* h, Dtype delta,\n    Dtype local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    float gi = g[i];\n    float hi = h[i] = h[i] + gi*gi;\n    g[i] = local_rate * gi / (sqrt(hi) + delta);\n  }\n}\ntemplate <typename Dtype>\nvoid adagrad_update_gpu(int N, Dtype* g, Dtype* h, Dtype delta,\n    Dtype local_rate) {\n  AdaGradUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, h, delta, local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void adagrad_update_gpu<float>(int, float*, float*, float, float);\ntemplate void adagrad_update_gpu<double>(int, double*, double*, double, double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adam_solver.cpp",
    "content": "#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nvoid AdamSolver<Dtype>::AdamPreSolve() {\n  // Add the extra history entries for Adam after those from\n  // SGDSolver::PreSolve\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  for (int i = 0; i < net_params.size(); ++i) {\n    const vector<int>& shape = net_params[i]->shape();\n    this->history_.push_back(\n            shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n  }\n}\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid adam_update_gpu(int N, Dtype* g, Dtype* m, Dtype* v, Dtype beta1,\n    Dtype beta2, Dtype eps_hat, Dtype corrected_local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid AdamSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n  Dtype local_rate = rate * net_params_lr[param_id];\n  const Dtype beta1 = this->param_.momentum();\n  const Dtype beta2 = this->param_.momentum2();\n\n  // we create aliases for convenience\n  size_t update_history_offset = net_params.size();\n  Blob<Dtype>* val_m = this->history_[param_id].get();\n  Blob<Dtype>* val_v = this->history_[param_id + update_history_offset].get();\n  Blob<Dtype>* val_t = this->temp_[param_id].get();\n\n  const int t = this->iter_ + 1;\n  const Dtype correction = std::sqrt(Dtype(1) - pow(beta2, t)) /\n      (Dtype(1.) 
- pow(beta1, t));\n  const int N = net_params[param_id]->count();\n  const Dtype eps_hat = this->param_.delta();\n\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    // update m <- \\beta_1 m_{t-1} + (1-\\beta_1)g_t\n    caffe_cpu_axpby(N, Dtype(1)-beta1,\n        net_params[param_id]->cpu_diff(), beta1,\n        val_m->mutable_cpu_data());\n\n    // update v <- \\beta_2 v_{t-1} + (1-\\beta_2)g_t^2\n    caffe_mul(N,\n        net_params[param_id]->cpu_diff(),\n        net_params[param_id]->cpu_diff(),\n        val_t->mutable_cpu_data());\n    caffe_cpu_axpby(N, Dtype(1)-beta2,\n        val_t->cpu_data(), beta2,\n        val_v->mutable_cpu_data());\n\n    // set update\n    caffe_powx(N,\n        val_v->cpu_data(), Dtype(0.5),\n        val_t->mutable_cpu_data());\n    caffe_add_scalar(N, eps_hat, val_t->mutable_cpu_data());\n    caffe_div(N,\n        val_m->cpu_data(),\n        val_t->cpu_data(),\n        val_t->mutable_cpu_data());\n\n    caffe_cpu_scale(N, local_rate*correction,\n        val_t->cpu_data(),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    adam_update_gpu(N, net_params[param_id]->mutable_gpu_diff(),\n        val_m->mutable_gpu_data(), val_v->mutable_gpu_data(), beta1, beta2,\n        eps_hat, local_rate*correction);\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\nINSTANTIATE_CLASS(AdamSolver);\nREGISTER_SOLVER_CLASS(Adam);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/adam_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void AdamUpdate(int N, Dtype* g, Dtype* m, Dtype* v,\n    Dtype beta1, Dtype beta2, Dtype eps_hat, Dtype corrected_local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    float gi = g[i];\n    float mi = m[i] = m[i]*beta1 + gi*(1-beta1);\n    float vi = v[i] = v[i]*beta2 + gi*gi*(1-beta2);\n    g[i] = corrected_local_rate * mi / (sqrt(vi) + eps_hat);\n  }\n}\ntemplate <typename Dtype>\nvoid adam_update_gpu(int N, Dtype* g, Dtype* m, Dtype* v, Dtype beta1,\n    Dtype beta2, Dtype eps_hat, Dtype corrected_local_rate) {\n  AdamUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, m, v, beta1, beta2, eps_hat, corrected_local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void adam_update_gpu<float>(int, float*, float*, float*,\n    float, float, float, float);\ntemplate void adam_update_gpu<double>(int, double*, double*, double*,\n    double, double, double, double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/nesterov_solver.cpp",
    "content": "#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n\nnamespace caffe {\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid nesterov_update_gpu(int N, Dtype* g, Dtype* h, Dtype momentum,\n    Dtype local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid NesterovSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  CHECK(Caffe::root_solver());\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n  Dtype momentum = this->param_.momentum();\n  Dtype local_rate = rate * net_params_lr[param_id];\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    // save history momentum for stepping back\n    caffe_copy(net_params[param_id]->count(),\n        this->history_[param_id]->cpu_data(),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // update history\n    caffe_cpu_axpby(net_params[param_id]->count(), local_rate,\n              net_params[param_id]->cpu_diff(), momentum,\n              this->history_[param_id]->mutable_cpu_data());\n\n    // compute update: step back then over step\n    caffe_cpu_axpby(net_params[param_id]->count(), Dtype(1) + momentum,\n        this->history_[param_id]->cpu_data(), -momentum,\n        this->update_[param_id]->mutable_cpu_data());\n\n    // copy\n    caffe_copy(net_params[param_id]->count(),\n        this->update_[param_id]->cpu_data(),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    nesterov_update_gpu(net_params[param_id]->count(),\n        net_params[param_id]->mutable_gpu_diff(),\n        this->history_[param_id]->mutable_gpu_data(),\n        momentum, local_rate);\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\nINSTANTIATE_CLASS(NesterovSolver);\nREGISTER_SOLVER_CLASS(Nesterov);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/nesterov_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void NesterovUpdate(int N, Dtype* g, Dtype* h,\n    Dtype momentum, Dtype local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    float hi = h[i];\n    float hi_new = h[i] = momentum * hi + local_rate * g[i];\n    g[i] = (1+momentum) * hi_new - momentum * hi;\n  }\n}\ntemplate <typename Dtype>\nvoid nesterov_update_gpu(int N, Dtype* g, Dtype* h, Dtype momentum,\n    Dtype local_rate) {\n  NesterovUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, h, momentum, local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void nesterov_update_gpu<float>(int, float*, float*, float, float);\ntemplate void nesterov_update_gpu<double>(int, double*, double*, double,\n    double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/rmsprop_solver.cpp",
    "content": "#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n\nnamespace caffe {\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid rmsprop_update_gpu(int N, Dtype* g, Dtype* h, Dtype rms_decay,\n    Dtype delta, Dtype local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid RMSPropSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n\n  // get the learning rate\n  Dtype delta = this->param_.delta();\n  Dtype rms_decay = this->param_.rms_decay();\n  Dtype local_rate = rate * net_params_lr[param_id];\n\n  switch (Caffe::mode()) {\n  case Caffe::CPU:\n    // compute square of gradient in update\n    caffe_powx(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(), Dtype(2),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // update history\n    caffe_cpu_axpby(net_params[param_id] -> count(),\n        Dtype(1-rms_decay), this->update_[param_id]->cpu_data(),\n        rms_decay, this->history_[param_id]-> mutable_cpu_data());\n\n    // prepare update\n    caffe_powx(net_params[param_id]->count(),\n        this->history_[param_id]->cpu_data(), Dtype(0.5),\n        this->update_[param_id]->mutable_cpu_data());\n\n    caffe_add_scalar(net_params[param_id]->count(),\n        delta, this->update_[param_id]->mutable_cpu_data());\n\n    caffe_div(net_params[param_id]->count(),\n        net_params[param_id]->cpu_diff(), this->update_[param_id]->cpu_data(),\n        this->update_[param_id]->mutable_cpu_data());\n\n    // scale and copy\n    caffe_cpu_axpby(net_params[param_id]->count(), local_rate,\n        this->update_[param_id]->cpu_data(), Dtype(0),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  case Caffe::GPU:\n#ifndef CPU_ONLY\n    rmsprop_update_gpu(net_params[param_id]->count(),\n        net_params[param_id]->mutable_gpu_diff(),\n        
this->history_[param_id]->mutable_gpu_data(),\n        rms_decay, delta, local_rate);\n#else\n    NO_GPU;\n#endif\n    break;\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\nINSTANTIATE_CLASS(RMSPropSolver);\nREGISTER_SOLVER_CLASS(RMSProp);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/rmsprop_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void RMSPropUpdate(int N, Dtype* g, Dtype* h,\n    Dtype rms_decay, Dtype delta, Dtype local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    float gi = g[i];\n    float hi = h[i] = rms_decay*h[i] + (1-rms_decay)*gi*gi;\n    g[i] = local_rate * g[i] / (sqrt(hi) + delta);\n  }\n}\ntemplate <typename Dtype>\nvoid rmsprop_update_gpu(int N, Dtype* g, Dtype* h, Dtype rms_decay,\n    Dtype delta, Dtype local_rate) {\n  RMSPropUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, h, rms_decay, delta, local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void rmsprop_update_gpu<float>(int, float*, float*, float, float,\n    float);\ntemplate void rmsprop_update_gpu<double>(int, double*, double*, double, double,\n    double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/sgd_solver.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"caffe/sgd_solvers.hpp\"\n#include \"caffe/util/hdf5.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nnamespace caffe {\n\n// Return the current learning rate. The currently implemented learning rate\n// policies are as follows:\n//    - fixed: always return base_lr.\n//    - step: return base_lr * gamma ^ (floor(iter / step))\n//    - exp: return base_lr * gamma ^ iter\n//    - inv: return base_lr * (1 + gamma * iter) ^ (- power)\n//    - multistep: similar to step but it allows non uniform steps defined by\n//      stepvalue\n//    - poly: the effective learning rate follows a polynomial decay, to be\n//      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)\n//    - sigmoid: the effective learning rate follows a sigmod decay\n//      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))\n//\n// where base_lr, max_iter, gamma, step, stepvalue and power are defined\n// in the solver parameter protocol buffer, and iter is the current iteration.\ntemplate <typename Dtype>\nDtype SGDSolver<Dtype>::GetLearningRate() {\n  Dtype rate;\n  const string& lr_policy = this->param_.lr_policy();\n  if (lr_policy == \"fixed\") {\n    rate = this->param_.base_lr();\n  } else if (lr_policy == \"step\") {\n    this->current_step_ = this->iter_ / this->param_.stepsize();\n    rate = this->param_.base_lr() *\n        pow(this->param_.gamma(), this->current_step_);\n  } else if (lr_policy == \"exp\") {\n    rate = this->param_.base_lr() * pow(this->param_.gamma(), this->iter_);\n  } else if (lr_policy == \"inv\") {\n    rate = this->param_.base_lr() *\n        pow(Dtype(1) + this->param_.gamma() * this->iter_,\n            - this->param_.power());\n  } else if (lr_policy == \"multistep\") {\n    if (this->current_step_ < this->param_.stepvalue_size() &&\n          this->iter_ >= this->param_.stepvalue(this->current_step_)) {\n      this->current_step_++;\n      
LOG(INFO) << \"MultiStep Status: Iteration \" <<\n      this->iter_ << \", step = \" << this->current_step_;\n    }\n    rate = this->param_.base_lr() *\n        pow(this->param_.gamma(), this->current_step_);\n  } else if (lr_policy == \"poly\") {\n    rate = this->param_.base_lr() * pow(Dtype(1.) -\n        (Dtype(this->iter_) / Dtype(this->param_.max_iter())),\n        this->param_.power());\n  } else if (lr_policy == \"sigmoid\") {\n    rate = this->param_.base_lr() * (Dtype(1.) /\n        (Dtype(1.) + exp(-this->param_.gamma() * (Dtype(this->iter_) -\n          Dtype(this->param_.stepsize())))));\n  } else {\n    LOG(FATAL) << \"Unknown learning rate policy: \" << lr_policy;\n  }\n  return rate;\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::PreSolve() {\n  // Initialize the history\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  history_.clear();\n  update_.clear();\n  temp_.clear();\n  for (int i = 0; i < net_params.size(); ++i) {\n    const vector<int>& shape = net_params[i]->shape();\n    history_.push_back(shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n    update_.push_back(shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n    temp_.push_back(shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::ClipGradients() {\n  const Dtype clip_gradients = this->param_.clip_gradients();\n  if (clip_gradients < 0) { return; }\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  Dtype sumsq_diff = 0;\n  for (int i = 0; i < net_params.size(); ++i) {\n    sumsq_diff += net_params[i]->sumsq_diff();\n  }\n  const Dtype l2norm_diff = std::sqrt(sumsq_diff);\n  cout<<\"----------------************************************************************\"<<l2norm_diff <<endl;\n  if (l2norm_diff > clip_gradients) {\n    Dtype scale_factor = clip_gradients / l2norm_diff;\n    LOG(INFO) << \"Gradient clipping: scaling down gradients (L2 norm \"\n        << 
l2norm_diff << \" > \" << clip_gradients << \") \"\n        << \"by scale factor \" << scale_factor;\n    for (int i = 0; i < net_params.size(); ++i) {\n      net_params[i]->scale_diff(scale_factor);\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::ApplyUpdate() {\n  CHECK(Caffe::root_solver());\n  Dtype rate = GetLearningRate();\n  if (this->param_.display() && this->iter_ % this->param_.display() == 0) {\n    LOG(INFO) << \"Iteration \" << this->iter_ << \", lr = \" << rate;\n  }\n  ClipGradients();\n  for (int param_id = 0; param_id < this->net_->learnable_params().size();\n       ++param_id) {\n    Normalize(param_id);\n    Regularize(param_id);\n    ComputeUpdateValue(param_id, rate);\n  }\n  this->net_->Update();\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::Normalize(int param_id) {\n  if (this->param_.iter_size() == 1) { return; }\n  // Scale gradient to counterbalance accumulation.\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const Dtype accum_normalization = Dtype(1.) 
/ this->param_.iter_size();\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    caffe_scal(net_params[param_id]->count(), accum_normalization,\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    caffe_gpu_scal(net_params[param_id]->count(), accum_normalization,\n        net_params[param_id]->mutable_gpu_diff());\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::Regularize(int param_id) {\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_weight_decay =\n      this->net_->params_weight_decay();\n  Dtype weight_decay = this->param_.weight_decay();\n  string regularization_type = this->param_.regularization_type();\n  Dtype local_decay = weight_decay * net_params_weight_decay[param_id];\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    if (local_decay) {\n      if (regularization_type == \"L2\") {\n        // add weight decay\n        caffe_axpy(net_params[param_id]->count(),\n            local_decay,\n            net_params[param_id]->cpu_data(),\n            net_params[param_id]->mutable_cpu_diff());\n      } else if (regularization_type == \"L1\") {\n        caffe_cpu_sign(net_params[param_id]->count(),\n            net_params[param_id]->cpu_data(),\n            temp_[param_id]->mutable_cpu_data());\n        caffe_axpy(net_params[param_id]->count(),\n            local_decay,\n            temp_[param_id]->cpu_data(),\n            net_params[param_id]->mutable_cpu_diff());\n      } else {\n        LOG(FATAL) << \"Unknown regularization type: \" << regularization_type;\n      }\n    }\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    if (local_decay) {\n      if (regularization_type == \"L2\") {\n        // add weight decay\n        caffe_gpu_axpy(net_params[param_id]->count(),\n            
local_decay,\n            net_params[param_id]->gpu_data(),\n            net_params[param_id]->mutable_gpu_diff());\n      } else if (regularization_type == \"L1\") {\n        caffe_gpu_sign(net_params[param_id]->count(),\n            net_params[param_id]->gpu_data(),\n            temp_[param_id]->mutable_gpu_data());\n        caffe_gpu_axpy(net_params[param_id]->count(),\n            local_decay,\n            temp_[param_id]->gpu_data(),\n            net_params[param_id]->mutable_gpu_diff());\n      } else {\n        LOG(FATAL) << \"Unknown regularization type: \" << regularization_type;\n      }\n    }\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe mode: \" << Caffe::mode();\n  }\n}\n\n#ifndef CPU_ONLY\ntemplate <typename Dtype>\nvoid sgd_update_gpu(int N, Dtype* g, Dtype* h, Dtype momentum,\n    Dtype local_rate);\n#endif\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {\n  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();\n  const vector<float>& net_params_lr = this->net_->params_lr();\n  Dtype momentum = this->param_.momentum();\n  Dtype local_rate = rate * net_params_lr[param_id];\n  // Compute the update to history, then copy it to the parameter diff.\n  switch (Caffe::mode()) {\n  case Caffe::CPU: {\n    caffe_cpu_axpby(net_params[param_id]->count(), local_rate,\n              net_params[param_id]->cpu_diff(), momentum,\n              history_[param_id]->mutable_cpu_data());\n    caffe_copy(net_params[param_id]->count(),\n        history_[param_id]->cpu_data(),\n        net_params[param_id]->mutable_cpu_diff());\n    break;\n  }\n  case Caffe::GPU: {\n#ifndef CPU_ONLY\n    sgd_update_gpu(net_params[param_id]->count(),\n        net_params[param_id]->mutable_gpu_diff(),\n        history_[param_id]->mutable_gpu_data(),\n        momentum, local_rate);\n#else\n    NO_GPU;\n#endif\n    break;\n  }\n  default:\n    LOG(FATAL) << \"Unknown caffe 
mode: \" << Caffe::mode();\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::SnapshotSolverState(const string& model_filename) {\n  switch (this->param_.snapshot_format()) {\n    case caffe::SolverParameter_SnapshotFormat_BINARYPROTO:\n      SnapshotSolverStateToBinaryProto(model_filename);\n      break;\n    case caffe::SolverParameter_SnapshotFormat_HDF5:\n      SnapshotSolverStateToHDF5(model_filename);\n      break;\n    default:\n      LOG(FATAL) << \"Unsupported snapshot format.\";\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::SnapshotSolverStateToBinaryProto(\n    const string& model_filename) {\n  SolverState state;\n  state.set_iter(this->iter_);\n  state.set_learned_net(model_filename);\n  state.set_current_step(this->current_step_);\n  state.clear_history();\n  for (int i = 0; i < history_.size(); ++i) {\n    // Add history\n    BlobProto* history_blob = state.add_history();\n    history_[i]->ToProto(history_blob);\n  }\n  string snapshot_filename = Solver<Dtype>::SnapshotFilename(\".solverstate\");\n  LOG(INFO)\n    << \"Snapshotting solver state to binary proto file \" << snapshot_filename;\n  WriteProtoToBinaryFile(state, snapshot_filename.c_str());\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::SnapshotSolverStateToHDF5(\n    const string& model_filename) {\n  string snapshot_filename =\n      Solver<Dtype>::SnapshotFilename(\".solverstate.h5\");\n  LOG(INFO) << \"Snapshotting solver state to HDF5 file \" << snapshot_filename;\n  hid_t file_hid = H5Fcreate(snapshot_filename.c_str(), H5F_ACC_TRUNC,\n      H5P_DEFAULT, H5P_DEFAULT);\n  CHECK_GE(file_hid, 0)\n      << \"Couldn't open \" << snapshot_filename << \" to save solver state.\";\n  hdf5_save_int(file_hid, \"iter\", this->iter_);\n  hdf5_save_string(file_hid, \"learned_net\", model_filename);\n  hdf5_save_int(file_hid, \"current_step\", this->current_step_);\n  hid_t history_hid = H5Gcreate2(file_hid, \"history\", H5P_DEFAULT, H5P_DEFAULT,\n      
H5P_DEFAULT);\n  CHECK_GE(history_hid, 0)\n      << \"Error saving solver state to \" << snapshot_filename << \".\";\n  for (int i = 0; i < history_.size(); ++i) {\n    ostringstream oss;\n    oss << i;\n    hdf5_save_nd_dataset<Dtype>(history_hid, oss.str(), *history_[i]);\n  }\n  H5Gclose(history_hid);\n  H5Fclose(file_hid);\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::RestoreSolverStateFromBinaryProto(\n    const string& state_file) {\n  SolverState state;\n  ReadProtoFromBinaryFile(state_file, &state);\n  this->iter_ = state.iter();\n  if (state.has_learned_net()) {\n    NetParameter net_param;\n    ReadNetParamsFromBinaryFileOrDie(state.learned_net().c_str(), &net_param);\n    this->net_->CopyTrainedLayersFrom(net_param);\n  }\n  this->current_step_ = state.current_step();\n  CHECK_EQ(state.history_size(), history_.size())\n      << \"Incorrect length of history blobs.\";\n  LOG(INFO) << \"SGDSolver: restoring history\";\n  for (int i = 0; i < history_.size(); ++i) {\n    history_[i]->FromProto(state.history(i));\n  }\n}\n\ntemplate <typename Dtype>\nvoid SGDSolver<Dtype>::RestoreSolverStateFromHDF5(const string& state_file) {\n  hid_t file_hid = H5Fopen(state_file.c_str(), H5F_ACC_RDONLY, H5P_DEFAULT);\n  CHECK_GE(file_hid, 0) << \"Couldn't open solver state file \" << state_file;\n  this->iter_ = hdf5_load_int(file_hid, \"iter\");\n  if (H5LTfind_dataset(file_hid, \"learned_net\")) {\n    string learned_net = hdf5_load_string(file_hid, \"learned_net\");\n    this->net_->CopyTrainedLayersFrom(learned_net);\n  }\n  this->current_step_ = hdf5_load_int(file_hid, \"current_step\");\n  hid_t history_hid = H5Gopen2(file_hid, \"history\", H5P_DEFAULT);\n  CHECK_GE(history_hid, 0) << \"Error reading history from \" << state_file;\n  int state_history_size = hdf5_get_num_links(history_hid);\n  CHECK_EQ(state_history_size, history_.size())\n      << \"Incorrect length of history blobs.\";\n  for (int i = 0; i < history_.size(); ++i) {\n    ostringstream 
oss;\n    oss << i;\n    hdf5_load_nd_dataset<Dtype>(history_hid, oss.str().c_str(), 0,\n                                kMaxBlobAxes, history_[i].get());\n  }\n  H5Gclose(history_hid);\n  H5Fclose(file_hid);\n}\n\nINSTANTIATE_CLASS(SGDSolver);\nREGISTER_SOLVER_CLASS(SGD);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/solvers/sgd_solver.cu",
    "content": "#include \"caffe/util/math_functions.hpp\"\n\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void SGDUpdate(int N, Dtype* g, Dtype* h,\n    Dtype momentum, Dtype local_rate) {\n  CUDA_KERNEL_LOOP(i, N) {\n    g[i] = h[i] = momentum*h[i] + local_rate*g[i];\n  }\n}\ntemplate <typename Dtype>\nvoid sgd_update_gpu(int N, Dtype* g, Dtype* h, Dtype momentum,\n    Dtype local_rate) {\n  SGDUpdate<Dtype>  // NOLINT_NEXT_LINE(whitespace/operators)\n      <<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, g, h, momentum, local_rate);\n  CUDA_POST_KERNEL_CHECK;\n}\ntemplate void sgd_update_gpu<float>(int, float*, float*, float, float);\ntemplate void sgd_update_gpu<double>(int, double*, double*, double, double);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/syncedmem.cpp",
    "content": "#include \"caffe/common.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\nSyncedMemory::~SyncedMemory() {\n  if (cpu_ptr_ && own_cpu_data_) {\n    CaffeFreeHost(cpu_ptr_, cpu_malloc_use_cuda_);\n  }\n\n#ifndef CPU_ONLY\n  if (gpu_ptr_ && own_gpu_data_) {\n    int initial_device;\n    cudaGetDevice(&initial_device);\n    if (gpu_device_ != -1) {\n      CUDA_CHECK(cudaSetDevice(gpu_device_));\n    }\n    CUDA_CHECK(cudaFree(gpu_ptr_));\n    cudaSetDevice(initial_device);\n  }\n#endif  // CPU_ONLY\n}\n\ninline void SyncedMemory::to_cpu() {\n  switch (head_) {\n  case UNINITIALIZED:\n    CaffeMallocHost(&cpu_ptr_, size_, &cpu_malloc_use_cuda_);\n    caffe_memset(size_, 0, cpu_ptr_);\n    head_ = HEAD_AT_CPU;\n    own_cpu_data_ = true;\n    break;\n  case HEAD_AT_GPU:\n#ifndef CPU_ONLY\n    if (cpu_ptr_ == NULL) {\n      CaffeMallocHost(&cpu_ptr_, size_, &cpu_malloc_use_cuda_);\n      own_cpu_data_ = true;\n    }\n    caffe_gpu_memcpy(size_, gpu_ptr_, cpu_ptr_);\n    head_ = SYNCED;\n#else\n    NO_GPU;\n#endif\n    break;\n  case HEAD_AT_CPU:\n  case SYNCED:\n    break;\n  }\n}\n\ninline void SyncedMemory::to_gpu() {\n#ifndef CPU_ONLY\n  switch (head_) {\n  case UNINITIALIZED:\n    CUDA_CHECK(cudaGetDevice(&gpu_device_));\n    CUDA_CHECK(cudaMalloc(&gpu_ptr_, size_));\n    caffe_gpu_memset(size_, 0, gpu_ptr_);\n    head_ = HEAD_AT_GPU;\n    own_gpu_data_ = true;\n    break;\n  case HEAD_AT_CPU:\n    if (gpu_ptr_ == NULL) {\n      CUDA_CHECK(cudaGetDevice(&gpu_device_));\n      CUDA_CHECK(cudaMalloc(&gpu_ptr_, size_));\n      own_gpu_data_ = true;\n    }\n    caffe_gpu_memcpy(size_, cpu_ptr_, gpu_ptr_);\n    head_ = SYNCED;\n    break;\n  case HEAD_AT_GPU:\n  case SYNCED:\n    break;\n  }\n#else\n  NO_GPU;\n#endif\n}\n\nconst void* SyncedMemory::cpu_data() {\n  to_cpu();\n  return (const void*)cpu_ptr_;\n}\n\nvoid SyncedMemory::set_cpu_data(void* data) {\n  CHECK(data);\n  if (own_cpu_data_) {\n    
CaffeFreeHost(cpu_ptr_, cpu_malloc_use_cuda_);\n  }\n  cpu_ptr_ = data;\n  head_ = HEAD_AT_CPU;\n  own_cpu_data_ = false;\n}\n\nconst void* SyncedMemory::gpu_data() {\n#ifndef CPU_ONLY\n  to_gpu();\n  return (const void*)gpu_ptr_;\n#else\n  NO_GPU;\n  return NULL;\n#endif\n}\n\nvoid SyncedMemory::set_gpu_data(void* data) {\n#ifndef CPU_ONLY\n  CHECK(data);\n  if (own_gpu_data_) {\n    int initial_device;\n    cudaGetDevice(&initial_device);\n    if (gpu_device_ != -1) {\n      CUDA_CHECK(cudaSetDevice(gpu_device_));\n    }\n    CUDA_CHECK(cudaFree(gpu_ptr_));\n    cudaSetDevice(initial_device);\n  }\n  gpu_ptr_ = data;\n  head_ = HEAD_AT_GPU;\n  own_gpu_data_ = false;\n#else\n  NO_GPU;\n#endif\n}\n\nvoid* SyncedMemory::mutable_cpu_data() {\n  to_cpu();\n  head_ = HEAD_AT_CPU;\n  return cpu_ptr_;\n}\n\nvoid* SyncedMemory::mutable_gpu_data() {\n#ifndef CPU_ONLY\n  to_gpu();\n  head_ = HEAD_AT_GPU;\n  return gpu_ptr_;\n#else\n  NO_GPU;\n  return NULL;\n#endif\n}\n\n#ifndef CPU_ONLY\nvoid SyncedMemory::async_gpu_push(const cudaStream_t& stream) {\n  CHECK(head_ == HEAD_AT_CPU);\n  if (gpu_ptr_ == NULL) {\n    CUDA_CHECK(cudaGetDevice(&gpu_device_));\n    CUDA_CHECK(cudaMalloc(&gpu_ptr_, size_));\n    own_gpu_data_ = true;\n  }\n  const cudaMemcpyKind put = cudaMemcpyHostToDevice;\n  CUDA_CHECK(cudaMemcpyAsync(gpu_ptr_, cpu_ptr_, size_, put, stream));\n  // Assume caller will synchronize on the stream before use\n  head_ = SYNCED;\n}\n#endif\n\n}  // namespace caffe\n\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/CMakeLists.txt",
    "content": "# The option allows to include in build only selected test files and exclude all others\n# Usage example:\n#  cmake -DBUILD_only_tests=\"common,net,blob,im2col_kernel\"\nset(BUILD_only_tests \"\" CACHE STRING \"Blank or comma-separated list of test files to build without 'test_' prefix and extention\")\ncaffe_leave_only_selected_tests(test_srcs ${BUILD_only_tests})\ncaffe_leave_only_selected_tests(test_cuda ${BUILD_only_tests})\n\n# For 'make runtest' target we don't need to embed test data paths to\n# source files, because test target is executed in source directory\n# That's why the lines below are commented. TODO: remove them\n\n# definition needed to include CMake generated files\n#add_definitions(-DCMAKE_BUILD)\n\n# generates test_data/sample_data_list.txt.gen.cmake\n#caffe_configure_testdatafile(test_data/sample_data_list.txt)\n\nset(the_target test.testbin)\nset(test_args --gtest_shuffle)\n\nif(HAVE_CUDA)\n  caffe_cuda_compile(test_cuda_objs ${test_cuda})\n  list(APPEND test_srcs ${test_cuda_objs} ${test_cuda})\nelse()\n  list(APPEND test_args --gtest_filter=\"-*GPU*\")\nendif()\n\n# ---[ Adding test target\nadd_executable(${the_target} EXCLUDE_FROM_ALL ${test_srcs})\ntarget_link_libraries(${the_target} gtest ${Caffe_LINK})\ncaffe_default_properties(${the_target})\ncaffe_set_runtime_directory(${the_target} \"${PROJECT_BINARY_DIR}/test\")\n\n# ---[ Adding runtest\nadd_custom_target(runtest COMMAND ${the_target} ${test_args}\n                          WORKING_DIRECTORY ${PROJECT_SOURCE_DIR})\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_accuracy_layer.cpp",
    "content": "#include <cfloat>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/accuracy_layer.hpp\"\n#include \"caffe/util/rng.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass AccuracyLayerTest : public CPUDeviceTest<Dtype> {\n protected:\n  AccuracyLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>()),\n        blob_bottom_label_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_per_class_(new Blob<Dtype>()),\n        top_k_(3) {\n    vector<int> shape(2);\n    shape[0] = 100;\n    shape[1] = 10;\n    blob_bottom_data_->Reshape(shape);\n    shape.resize(1);\n    blob_bottom_label_->Reshape(shape);\n    FillBottoms();\n\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    blob_top_vec_.push_back(blob_top_);\n    blob_top_per_class_vec_.push_back(blob_top_);\n    blob_top_per_class_vec_.push_back(blob_top_per_class_);\n  }\n\n  virtual void FillBottoms() {\n    // fill the probability values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n\n    const unsigned int prefetch_rng_seed = caffe_rng_rand();\n    shared_ptr<Caffe::RNG> rng(new Caffe::RNG(prefetch_rng_seed));\n    caffe::rng_t* prefetch_rng =\n          static_cast<caffe::rng_t*>(rng->generator());\n    Dtype* label_data = blob_bottom_label_->mutable_cpu_data();\n    for (int i = 0; i < blob_bottom_label_->count(); ++i) {\n      label_data[i] = (*prefetch_rng)() % 10;\n    }\n  }\n\n  virtual ~AccuracyLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_top_;\n    delete blob_top_per_class_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_top_;\n  
Blob<Dtype>* const blob_top_per_class_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  vector<Blob<Dtype>*> blob_top_per_class_vec_;\n  int top_k_;\n};\n\nTYPED_TEST_CASE(AccuracyLayerTest, TestDtypes);\n\nTYPED_TEST(AccuracyLayerTest, TestSetup) {\n  LayerParameter layer_param;\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestSetupTopK) {\n  LayerParameter layer_param;\n  AccuracyParameter* accuracy_param =\n      layer_param.mutable_accuracy_param();\n  accuracy_param->set_top_k(5);\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestSetupOutputPerClass) {\n  LayerParameter layer_param;\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_per_class_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_per_class_->num(), 10);\n  EXPECT_EQ(this->blob_top_per_class_->channels(), 1);\n  EXPECT_EQ(this->blob_top_per_class_->height(), 1);\n  EXPECT_EQ(this->blob_top_per_class_->width(), 1);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestForwardCPU) {\n  LayerParameter layer_param;\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  TypeParam max_value;\n  int max_id;\n  int 
num_correct_labels = 0;\n  for (int i = 0; i < 100; ++i) {\n    max_value = -FLT_MAX;\n    max_id = 0;\n    for (int j = 0; j < 10; ++j) {\n      if (this->blob_bottom_data_->data_at(i, j, 0, 0) > max_value) {\n        max_value = this->blob_bottom_data_->data_at(i, j, 0, 0);\n        max_id = j;\n      }\n    }\n    if (max_id == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      ++num_correct_labels;\n    }\n  }\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / 100.0, 1e-4);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestForwardWithSpatialAxes) {\n  this->blob_bottom_data_->Reshape(2, 10, 4, 5);\n  vector<int> label_shape(3);\n  label_shape[0] = 2; label_shape[1] = 4; label_shape[2] = 5;\n  this->blob_bottom_label_->Reshape(label_shape);\n  this->FillBottoms();\n  LayerParameter layer_param;\n  layer_param.mutable_accuracy_param()->set_axis(1);\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  TypeParam max_value;\n  const int num_labels = this->blob_bottom_label_->count();\n  int max_id;\n  int num_correct_labels = 0;\n  vector<int> label_offset(3);\n  for (int n = 0; n < this->blob_bottom_data_->num(); ++n) {\n    for (int h = 0; h < this->blob_bottom_data_->height(); ++h) {\n      for (int w = 0; w < this->blob_bottom_data_->width(); ++w) {\n        max_value = -FLT_MAX;\n        max_id = 0;\n        for (int c = 0; c < this->blob_bottom_data_->channels(); ++c) {\n          const TypeParam pred_value =\n              this->blob_bottom_data_->data_at(n, c, h, w);\n          if (pred_value > max_value) {\n            max_value = pred_value;\n            max_id = c;\n          }\n        }\n        label_offset[0] = n; label_offset[1] = h; label_offset[2] = w;\n        const int correct_label =\n            static_cast<int>(this->blob_bottom_label_->data_at(label_offset));\n        if (max_id == 
correct_label) {\n          ++num_correct_labels;\n        }\n      }\n    }\n  }\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / TypeParam(num_labels), 1e-4);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestForwardIgnoreLabel) {\n  LayerParameter layer_param;\n  const TypeParam kIgnoreLabelValue = -1;\n  layer_param.mutable_accuracy_param()->set_ignore_label(kIgnoreLabelValue);\n  AccuracyLayer<TypeParam> layer(layer_param);\n  // Manually set some labels to the ignore label value (-1).\n  this->blob_bottom_label_->mutable_cpu_data()[2] = kIgnoreLabelValue;\n  this->blob_bottom_label_->mutable_cpu_data()[5] = kIgnoreLabelValue;\n  this->blob_bottom_label_->mutable_cpu_data()[32] = kIgnoreLabelValue;\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  TypeParam max_value;\n  int max_id;\n  int num_correct_labels = 0;\n  int count = 0;\n  for (int i = 0; i < 100; ++i) {\n    if (kIgnoreLabelValue == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      continue;\n    }\n    ++count;\n    max_value = -FLT_MAX;\n    max_id = 0;\n    for (int j = 0; j < 10; ++j) {\n      if (this->blob_bottom_data_->data_at(i, j, 0, 0) > max_value) {\n        max_value = this->blob_bottom_data_->data_at(i, j, 0, 0);\n        max_id = j;\n      }\n    }\n    if (max_id == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      ++num_correct_labels;\n    }\n  }\n  EXPECT_EQ(count, 97);  // We set 3 out of 100 labels to kIgnoreLabelValue.\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / TypeParam(count), 1e-4);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestForwardCPUTopK) {\n  LayerParameter layer_param;\n  AccuracyParameter* accuracy_param = layer_param.mutable_accuracy_param();\n  accuracy_param->set_top_k(this->top_k_);\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  
layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  TypeParam current_value;\n  int current_rank;\n  int num_correct_labels = 0;\n  for (int i = 0; i < 100; ++i) {\n    for (int j = 0; j < 10; ++j) {\n      current_value = this->blob_bottom_data_->data_at(i, j, 0, 0);\n      current_rank = 0;\n      for (int k = 0; k < 10; ++k) {\n        if (this->blob_bottom_data_->data_at(i, k, 0, 0) > current_value) {\n          ++current_rank;\n        }\n      }\n      if (current_rank < this->top_k_ &&\n          j == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n        ++num_correct_labels;\n      }\n    }\n  }\n\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / 100.0, 1e-4);\n}\n\nTYPED_TEST(AccuracyLayerTest, TestForwardCPUPerClass) {\n  LayerParameter layer_param;\n  AccuracyLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_per_class_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_per_class_vec_);\n\n  TypeParam max_value;\n  int max_id;\n  int num_correct_labels = 0;\n  const int num_class = this->blob_top_per_class_->num();\n  vector<int> correct_per_class(num_class, 0);\n  vector<int> num_per_class(num_class, 0);\n  for (int i = 0; i < 100; ++i) {\n    max_value = -FLT_MAX;\n    max_id = 0;\n    for (int j = 0; j < 10; ++j) {\n      if (this->blob_bottom_data_->data_at(i, j, 0, 0) > max_value) {\n        max_value = this->blob_bottom_data_->data_at(i, j, 0, 0);\n        max_id = j;\n      }\n    }\n    ++num_per_class[this->blob_bottom_label_->data_at(i, 0, 0, 0)];\n    if (max_id == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      ++num_correct_labels;\n      ++correct_per_class[max_id];\n    }\n  }\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / 100.0, 1e-4);\n  for (int i = 0; i < num_class; ++i) {\n    TypeParam accuracy_per_class = (num_per_class[i] > 0 ?\n       
static_cast<TypeParam>(correct_per_class[i]) / num_per_class[i] : 0);\n    EXPECT_NEAR(this->blob_top_per_class_->data_at(i, 0, 0, 0),\n                accuracy_per_class, 1e-4);\n  }\n}\n\n\nTYPED_TEST(AccuracyLayerTest, TestForwardCPUPerClassWithIgnoreLabel) {\n  LayerParameter layer_param;\n  const TypeParam kIgnoreLabelValue = -1;\n  layer_param.mutable_accuracy_param()->set_ignore_label(kIgnoreLabelValue);\n  AccuracyLayer<TypeParam> layer(layer_param);\n  // Manually set some labels to the ignore label value (-1).\n  this->blob_bottom_label_->mutable_cpu_data()[2] = kIgnoreLabelValue;\n  this->blob_bottom_label_->mutable_cpu_data()[5] = kIgnoreLabelValue;\n  this->blob_bottom_label_->mutable_cpu_data()[32] = kIgnoreLabelValue;\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_per_class_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_per_class_vec_);\n\n  TypeParam max_value;\n  int max_id;\n  int num_correct_labels = 0;\n  const int num_class = this->blob_top_per_class_->num();\n  vector<int> correct_per_class(num_class, 0);\n  vector<int> num_per_class(num_class, 0);\n  int count = 0;\n  for (int i = 0; i < 100; ++i) {\n    if (kIgnoreLabelValue == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      continue;\n    }\n    ++count;\n    max_value = -FLT_MAX;\n    max_id = 0;\n    for (int j = 0; j < 10; ++j) {\n      if (this->blob_bottom_data_->data_at(i, j, 0, 0) > max_value) {\n        max_value = this->blob_bottom_data_->data_at(i, j, 0, 0);\n        max_id = j;\n      }\n    }\n    ++num_per_class[this->blob_bottom_label_->data_at(i, 0, 0, 0)];\n    if (max_id == this->blob_bottom_label_->data_at(i, 0, 0, 0)) {\n      ++num_correct_labels;\n      ++correct_per_class[max_id];\n    }\n  }\n  EXPECT_EQ(count, 97);\n  EXPECT_NEAR(this->blob_top_->data_at(0, 0, 0, 0),\n              num_correct_labels / TypeParam(count), 1e-4);\n  for (int i = 0; i < 10; ++i) {\n    TypeParam accuracy_per_class = (num_per_class[i] > 0 ?\n       
static_cast<TypeParam>(correct_per_class[i]) / num_per_class[i] : 0);\n    EXPECT_NEAR(this->blob_top_per_class_->data_at(i, 0, 0, 0),\n                accuracy_per_class, 1e-4);\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_argmax_layer.cpp",
    "content": "#include <utility>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/argmax_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass ArgMaxLayerTest : public CPUDeviceTest<Dtype> {\n protected:\n  ArgMaxLayerTest()\n      : blob_bottom_(new Blob<Dtype>(10, 10, 20, 20)),\n        blob_top_(new Blob<Dtype>()),\n        top_k_(5) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~ArgMaxLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  size_t top_k_;\n};\n\nTYPED_TEST_CASE(ArgMaxLayerTest, TestDtypes);\n\nTYPED_TEST(ArgMaxLayerTest, TestSetup) {\n  LayerParameter layer_param;\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestSetupMaxVal) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_out_max_val(true);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), 2);\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestSetupAxis) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(0);\n  
ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->shape(0), argmax_param->top_k());\n  EXPECT_EQ(this->blob_top_->shape(1), this->blob_bottom_->shape(0));\n  EXPECT_EQ(this->blob_top_->shape(2), this->blob_bottom_->shape(2));\n  EXPECT_EQ(this->blob_top_->shape(3), this->blob_bottom_->shape(3));\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestSetupAxisNegativeIndexing) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(-2);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->shape(0), this->blob_bottom_->shape(0));\n  EXPECT_EQ(this->blob_top_->shape(1), this->blob_bottom_->shape(1));\n  EXPECT_EQ(this->blob_top_->shape(2), argmax_param->top_k());\n  EXPECT_EQ(this->blob_top_->shape(3), this->blob_bottom_->shape(3));\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestSetupAxisMaxVal) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(2);\n  argmax_param->set_out_max_val(true);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->shape(0), this->blob_bottom_->shape(0));\n  EXPECT_EQ(this->blob_top_->shape(1), this->blob_bottom_->shape(1));\n  EXPECT_EQ(this->blob_top_->shape(2), argmax_param->top_k());\n  EXPECT_EQ(this->blob_top_->shape(3), this->blob_bottom_->shape(3));\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPU) {\n  LayerParameter layer_param;\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  int max_ind;\n  
TypeParam max_val;\n  int num = this->blob_bottom_->num();\n  int dim = this->blob_bottom_->count() / num;\n  for (int i = 0; i < num; ++i) {\n    EXPECT_GE(top_data[i], 0);\n    EXPECT_LE(top_data[i], dim);\n    max_ind = top_data[i];\n    max_val = bottom_data[i * dim + max_ind];\n    for (int j = 0; j < dim; ++j) {\n      EXPECT_LE(bottom_data[i * dim + j], max_val);\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUMaxVal) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_out_max_val(true);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  int max_ind;\n  TypeParam max_val;\n  int num = this->blob_bottom_->num();\n  int dim = this->blob_bottom_->count() / num;\n  for (int i = 0; i < num; ++i) {\n    // top interleaves (argmax, max) pairs; bounds-check the index channel\n    EXPECT_GE(top_data[i * 2], 0);\n    EXPECT_LE(top_data[i * 2], dim);\n    max_ind = top_data[i * 2];\n    max_val = top_data[i * 2 + 1];\n    EXPECT_EQ(bottom_data[i * dim + max_ind], max_val);\n    for (int j = 0; j < dim; ++j) {\n      EXPECT_LE(bottom_data[i * dim + j], max_val);\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUTopK) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_top_k(this->top_k_);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  int max_ind;\n  TypeParam max_val;\n  int num = this->blob_bottom_->num();\n  int dim = this->blob_bottom_->count() / num;\n  for (int i = 0; i < num; ++i) {\n    
EXPECT_GE(this->blob_top_->data_at(i, 0, 0, 0), 0);\n    EXPECT_LE(this->blob_top_->data_at(i, 0, 0, 0), dim);\n    for (int j = 0; j < this->top_k_; ++j) {\n      max_ind = this->blob_top_->data_at(i, 0, j, 0);\n      max_val = bottom_data[i * dim + max_ind];\n      int count = 0;\n      for (int k = 0; k < dim; ++k) {\n        if (bottom_data[i * dim + k] > max_val) {\n          ++count;\n        }\n      }\n      EXPECT_EQ(j, count);\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUMaxValTopK) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_out_max_val(true);\n  argmax_param->set_top_k(this->top_k_);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  int max_ind;\n  TypeParam max_val;\n  int num = this->blob_bottom_->num();\n  int dim = this->blob_bottom_->count() / num;\n  for (int i = 0; i < num; ++i) {\n    EXPECT_GE(this->blob_top_->data_at(i, 0, 0, 0), 0);\n    EXPECT_LE(this->blob_top_->data_at(i, 0, 0, 0), dim);\n    for (int j = 0; j < this->top_k_; ++j) {\n      max_ind = this->blob_top_->data_at(i, 0, j, 0);\n      max_val = this->blob_top_->data_at(i, 1, j, 0);\n      EXPECT_EQ(bottom_data[i * dim + max_ind], max_val);\n      int count = 0;\n      for (int k = 0; k < dim; ++k) {\n        if (bottom_data[i * dim + k] > max_val) {\n          ++count;\n        }\n      }\n      EXPECT_EQ(j, count);\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUAxis) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(0);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, 
check values\n  int max_ind;\n  TypeParam max_val;\n  std::vector<int> shape = this->blob_bottom_->shape();\n  for (int i = 0; i < shape[1]; ++i) {\n    for (int j = 0; j < shape[2]; ++j) {\n      for (int k = 0; k < shape[3]; ++k) {\n        max_ind = this->blob_top_->data_at(0, i, j, k);\n        max_val = this->blob_bottom_->data_at(max_ind, i, j, k);\n        EXPECT_GE(max_ind, 0);\n        EXPECT_LE(max_ind, shape[0]);\n        for (int l = 0; l < shape[0]; ++l) {\n          EXPECT_LE(this->blob_bottom_->data_at(l, i, j, k), max_val);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUAxisTopK) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(2);\n  argmax_param->set_top_k(this->top_k_);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  int max_ind;\n  TypeParam max_val;\n  std::vector<int> shape = this->blob_bottom_->shape();\n  for (int i = 0; i < shape[0]; ++i) {\n    for (int j = 0; j < shape[1]; ++j) {\n      for (int k = 0; k < shape[3]; ++k) {\n        for (int m = 0; m < this->top_k_; ++m) {\n          max_ind = this->blob_top_->data_at(i, j, m, k);\n          max_val = this->blob_bottom_->data_at(i, j, max_ind, k);\n          EXPECT_GE(max_ind, 0);\n          EXPECT_LE(max_ind, shape[2]);\n          int count = 0;\n          for (int l = 0; l < shape[2]; ++l) {\n            if (this->blob_bottom_->data_at(i, j, l, k) > max_val) {\n              ++count;\n            }\n          }\n          EXPECT_EQ(m, count);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ArgMaxLayerTest, TestCPUAxisMaxValTopK) {\n  LayerParameter layer_param;\n  ArgMaxParameter* argmax_param = layer_param.mutable_argmax_param();\n  argmax_param->set_axis(-1);\n  argmax_param->set_top_k(this->top_k_);\n  
argmax_param->set_out_max_val(true);\n  ArgMaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  TypeParam max_val;\n  std::vector<int> shape = this->blob_bottom_->shape();\n  for (int i = 0; i < shape[0]; ++i) {\n    for (int j = 0; j < shape[1]; ++j) {\n      for (int k = 0; k < shape[2]; ++k) {\n        for (int m = 0; m < this->top_k_; ++m) {\n          max_val = this->blob_top_->data_at(i, j, k, m);\n          int count = 0;\n          for (int l = 0; l < shape[3]; ++l) {\n            if (this->blob_bottom_->data_at(i, j, k, l) > max_val) {\n              ++count;\n            }\n          }\n          EXPECT_EQ(m, count);\n        }\n      }\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_batch_norm_layer.cpp",
    "content": "#include <algorithm>\n#include <cstring>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/batch_norm_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\n#define BATCH_SIZE 2\n#define INPUT_DATA_SIZE 3\n\nnamespace caffe {\n\n  template <typename TypeParam>\n  class BatchNormLayerTest : public MultiDeviceTest<TypeParam> {\n    typedef typename TypeParam::Dtype Dtype;\n   protected:\n    BatchNormLayerTest()\n        : blob_bottom_(new Blob<Dtype>(5, 2, 3, 4)),\n          blob_top_(new Blob<Dtype>()) {\n      // fill the values\n      FillerParameter filler_param;\n      GaussianFiller<Dtype> filler(filler_param);\n      filler.Fill(this->blob_bottom_);\n      blob_bottom_vec_.push_back(blob_bottom_);\n      blob_top_vec_.push_back(blob_top_);\n    }\n    virtual ~BatchNormLayerTest() { delete blob_bottom_; delete blob_top_; }\n    Blob<Dtype>* const blob_bottom_;\n    Blob<Dtype>* const blob_top_;\n    vector<Blob<Dtype>*> blob_bottom_vec_;\n    vector<Blob<Dtype>*> blob_top_vec_;\n  };\n\n  TYPED_TEST_CASE(BatchNormLayerTest, TestDtypesAndDevices);\n\n  TYPED_TEST(BatchNormLayerTest, TestForward) {\n    typedef typename TypeParam::Dtype Dtype;\n    LayerParameter layer_param;\n\n    BatchNormLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n    // Test mean\n    int num = this->blob_bottom_->num();\n    int channels = this->blob_bottom_->channels();\n    int height = this->blob_bottom_->height();\n    int width = this->blob_bottom_->width();\n\n    for (int j = 0; j < channels; ++j) {\n      Dtype sum = 0, var = 0;\n      for (int i = 0; i < num; ++i) {\n        for ( int k = 0; k < height; ++k ) {\n          for ( int l = 0; l < width; ++l ) {\n            
Dtype data = this->blob_top_->data_at(i, j, k, l);\n            sum += data;\n            var += data * data;\n          }\n        }\n      }\n      sum /= height * width * num;\n      var /= height * width * num;\n\n      const Dtype kErrorBound = 0.001;\n      // expect zero mean\n      EXPECT_NEAR(0, sum, kErrorBound);\n      // expect unit variance\n      EXPECT_NEAR(1, var, kErrorBound);\n    }\n  }\n\n  TYPED_TEST(BatchNormLayerTest, TestForwardInplace) {\n    typedef typename TypeParam::Dtype Dtype;\n    Blob<Dtype> blob_inplace(5, 2, 3, 4);\n    vector<Blob<Dtype>*> blob_bottom_vec;\n    vector<Blob<Dtype>*> blob_top_vec;\n    LayerParameter layer_param;\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(&blob_inplace);\n    blob_bottom_vec.push_back(&blob_inplace);\n    blob_top_vec.push_back(&blob_inplace);\n\n    BatchNormLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec, blob_top_vec);\n    layer.Forward(blob_bottom_vec, blob_top_vec);\n\n    // Test mean\n    int num = blob_inplace.num();\n    int channels = blob_inplace.channels();\n    int height = blob_inplace.height();\n    int width = blob_inplace.width();\n\n    for (int j = 0; j < channels; ++j) {\n      Dtype sum = 0, var = 0;\n      for (int i = 0; i < num; ++i) {\n        for ( int k = 0; k < height; ++k ) {\n          for ( int l = 0; l < width; ++l ) {\n            Dtype data = blob_inplace.data_at(i, j, k, l);\n            sum += data;\n            var += data * data;\n          }\n        }\n      }\n      sum /= height * width * num;\n      var /= height * width * num;\n\n      const Dtype kErrorBound = 0.001;\n      // expect zero mean\n      EXPECT_NEAR(0, sum, kErrorBound);\n      // expect unit variance\n      EXPECT_NEAR(1, var, kErrorBound);\n    }\n  }\n\n  TYPED_TEST(BatchNormLayerTest, TestGradient) {\n    typedef typename TypeParam::Dtype Dtype;\n    LayerParameter layer_param;\n\n    BatchNormLayer<Dtype> 
layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 1e-4);\n    checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n        this->blob_top_vec_);\n  }\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_batch_reindex_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/batch_reindex_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate<typename TypeParam>\nclass BatchReindexLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  BatchReindexLayerTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_bottom_permute_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {\n  }\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    vector<int> sz;\n    sz.push_back(5);\n    sz.push_back(4);\n    sz.push_back(3);\n    sz.push_back(2);\n    blob_bottom_->Reshape(sz);\n    vector<int> permsz;\n    permsz.push_back(6);\n    blob_bottom_permute_->Reshape(permsz);\n\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    int perm[] = { 4, 0, 4, 0, 1, 2 };\n    for (int i = 0; i < blob_bottom_permute_->count(); ++i) {\n      blob_bottom_permute_->mutable_cpu_data()[i] = perm[i];\n    }\n\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_permute_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~BatchReindexLayerTest() {\n    delete blob_bottom_permute_;\n    delete blob_bottom_;\n    delete blob_top_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_permute_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n\n  void TestForward() {\n    LayerParameter layer_param;\n\n    vector<int> sz;\n    sz.push_back(5);\n    sz.push_back(4);\n    sz.push_back(3);\n    sz.push_back(2);\n    blob_bottom_->Reshape(sz);\n    for (int i = 0; i < blob_bottom_->count(); ++i) {\n   
   blob_bottom_->mutable_cpu_data()[i] = i;\n    }\n\n    vector<int> permsz;\n    permsz.push_back(6);\n    blob_bottom_permute_->Reshape(permsz);\n    int perm[] = { 4, 0, 4, 0, 1, 2 };\n    for (int i = 0; i < blob_bottom_permute_->count(); ++i) {\n      blob_bottom_permute_->mutable_cpu_data()[i] = perm[i];\n    }\n    BatchReindexLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), blob_bottom_permute_->num());\n    EXPECT_EQ(blob_top_->channels(), blob_bottom_->channels());\n    EXPECT_EQ(blob_top_->height(), blob_bottom_->height());\n    EXPECT_EQ(blob_top_->width(), blob_bottom_->width());\n\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    int channels = blob_top_->channels();\n    int height = blob_top_->height();\n    int width = blob_top_->width();\n    for (int i = 0; i < blob_top_->count(); ++i) {\n      int n = i / (channels * width * height);\n      int inner_idx = (i % (channels * width * height));\n      EXPECT_EQ(\n          blob_top_->cpu_data()[i],\n          blob_bottom_->cpu_data()[perm[n] * channels * width * height\n              + inner_idx]);\n    }\n  }\n};\n\nTYPED_TEST_CASE(BatchReindexLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(BatchReindexLayerTest, TestForward) {\n  this->TestForward();\n}\n\nTYPED_TEST(BatchReindexLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BatchReindexLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-4, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n  }\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_benchmark.cpp",
    "content": "#include <boost/thread.hpp>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nconst float kMillisecondsThreshold = 30;\n\ntemplate <typename TypeParam>\nclass BenchmarkTest : public MultiDeviceTest<TypeParam> {};\n\nTYPED_TEST_CASE(BenchmarkTest, TestDtypesAndDevices);\n\nTYPED_TEST(BenchmarkTest, TestTimerConstructor) {\n  Timer timer;\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_FALSE(timer.has_run_at_least_once());\n}\n\nTYPED_TEST(BenchmarkTest, TestTimerStart) {\n  Timer timer;\n  timer.Start();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_TRUE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n  timer.Start();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_TRUE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n  timer.Stop();\n  timer.Start();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_TRUE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n}\n\nTYPED_TEST(BenchmarkTest, TestTimerStop) {\n  Timer timer;\n  timer.Stop();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_FALSE(timer.has_run_at_least_once());\n  timer.Start();\n  timer.Stop();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n  timer.Stop();\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n}\n\nTYPED_TEST(BenchmarkTest, TestTimerMilliSeconds) {\n  Timer timer;\n  EXPECT_EQ(timer.MilliSeconds(), 0);\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_FALSE(timer.has_run_at_least_once());\n  timer.Start();\n  boost::this_thread::sleep(boost::posix_time::milliseconds(300));\n  EXPECT_GE(timer.MilliSeconds(), 300 - kMillisecondsThreshold);\n  EXPECT_LE(timer.MilliSeconds(), 300 + kMillisecondsThreshold);\n  
EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n}\n\nTYPED_TEST(BenchmarkTest, TestTimerSeconds) {\n  Timer timer;\n  EXPECT_EQ(timer.Seconds(), 0);\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_FALSE(timer.has_run_at_least_once());\n  timer.Start();\n  boost::this_thread::sleep(boost::posix_time::milliseconds(300));\n  EXPECT_GE(timer.Seconds(), 0.3 - kMillisecondsThreshold / 1000.);\n  EXPECT_LE(timer.Seconds(), 0.3 + kMillisecondsThreshold / 1000.);\n  EXPECT_TRUE(timer.initted());\n  EXPECT_FALSE(timer.running());\n  EXPECT_TRUE(timer.has_run_at_least_once());\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_bias_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/bias_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass BiasLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  BiasLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_eltwise_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_broadcast_0_(new Blob<Dtype>()),\n        blob_bottom_broadcast_1_(new Blob<Dtype>()),\n        blob_bottom_broadcast_2_(new Blob<Dtype>()),\n        blob_bottom_bias_(new Blob<Dtype>(vector<int>())),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    vector<int> broadcast_shape(2);\n    broadcast_shape[0] = 2; broadcast_shape[1] = 3;\n    this->blob_bottom_broadcast_0_->Reshape(broadcast_shape);\n    broadcast_shape[0] = 3; broadcast_shape[1] = 4;\n    this->blob_bottom_broadcast_1_->Reshape(broadcast_shape);\n    broadcast_shape[0] = 4; broadcast_shape[1] = 5;\n    this->blob_bottom_broadcast_2_->Reshape(broadcast_shape);\n    FillerParameter filler_param;\n    filler_param.set_min(1);\n    filler_param.set_max(10);\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    filler.Fill(this->blob_bottom_eltwise_);\n    filler.Fill(this->blob_bottom_broadcast_0_);\n    filler.Fill(this->blob_bottom_broadcast_1_);\n    filler.Fill(this->blob_bottom_broadcast_2_);\n    filler.Fill(this->blob_bottom_bias_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~BiasLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_eltwise_;\n    delete blob_bottom_broadcast_0_;\n    delete blob_bottom_broadcast_1_;\n    delete 
blob_bottom_broadcast_2_;\n    delete blob_bottom_bias_;\n    delete blob_top_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_eltwise_;\n  Blob<Dtype>* const blob_bottom_broadcast_0_;\n  Blob<Dtype>* const blob_bottom_broadcast_1_;\n  Blob<Dtype>* const blob_bottom_broadcast_2_;\n  Blob<Dtype>* const blob_bottom_bias_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(BiasLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(BiasLayerTest, TestForwardEltwise) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_eltwise_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] + in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardEltwiseInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  
const Dtype* data = this->blob_bottom_->cpu_data();\n  const int count = this->blob_bottom_->count();\n  const Dtype* in_data_a = orig_bottom.cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_eltwise_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] + in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestBackwardEltwiseInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  Blob<Dtype> top_diff(this->blob_bottom_->shape());\n  FillerParameter filler_param;\n  filler_param.set_type(\"gaussian\");\n  filler_param.set_std(1);\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(&top_diff);\n  vector<bool> propagate_down(2, true);\n  // Run forward + backward without in-place computation;\n  // save resulting bottom diffs.\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_top_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  const bool kReshape = true;\n  const bool kCopyDiff = true;\n  Blob<Dtype> orig_bottom_diff;\n  orig_bottom_diff.CopyFrom(*this->blob_bottom_, kCopyDiff, kReshape);\n  Blob<Dtype> orig_bias_diff;\n  orig_bias_diff.CopyFrom(*this->blob_bottom_eltwise_,\n                            kCopyDiff, kReshape);\n  // Rerun forward + backward with in-place computation;\n  // check that resulting bottom diffs are the same.\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  
caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_bottom_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(orig_bottom_diff.cpu_diff()[i],\n                this->blob_bottom_->cpu_diff()[i], 1e-5);\n  }\n  for (int i = 0; i < this->blob_bottom_eltwise_->count(); ++i) {\n    EXPECT_NEAR(orig_bias_diff.cpu_diff()[i],\n                this->blob_bottom_eltwise_->cpu_diff()[i], 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardEltwiseWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BiasParameter* bias_param = layer_param.mutable_bias_param();\n  bias_param->set_axis(0);\n  bias_param->set_num_axes(-1);\n  bias_param->mutable_filler()->set_type(\"gaussian\");\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_->cpu_data();\n  const Dtype* in_data_b = layer->blobs()[0]->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] + in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBroadcastBegin) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_0_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 
0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) +\n                      this->blob_bottom_broadcast_0_->data_at(n, c, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBroadcastMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(1);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) +\n                      this->blob_bottom_broadcast_1_->data_at(c, h, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBroadcastMiddleInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(1);\n  
shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_bottom_->data_at(n, c, h, w),\n                      orig_bottom.data_at(n, c, h, w) +\n                      this->blob_bottom_broadcast_1_->data_at(c, h, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestBackwardBroadcastMiddleInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(1);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  Blob<Dtype> top_diff(this->blob_bottom_->shape());\n  FillerParameter filler_param;\n  filler_param.set_type(\"gaussian\");\n  filler_param.set_std(1);\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(&top_diff);\n  vector<bool> propagate_down(2, true);\n  // Run forward + backward without in-place computation;\n  // save resulting bottom diffs.\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_top_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  const bool kReshape = true;\n  const bool kCopyDiff = true;\n  Blob<Dtype> orig_bottom_diff;\n  orig_bottom_diff.CopyFrom(*this->blob_bottom_, kCopyDiff, kReshape);\n  
Blob<Dtype> orig_bias_diff;\n  orig_bias_diff.CopyFrom(*this->blob_bottom_broadcast_1_,\n                            kCopyDiff, kReshape);\n  // Rerun forward + backward with in-place computation;\n  // check that resulting bottom diffs are the same.\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_bottom_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(orig_bottom_diff.cpu_diff()[i],\n                this->blob_bottom_->cpu_diff()[i], 1e-5);\n  }\n  for (int i = 0; i < this->blob_bottom_broadcast_1_->count(); ++i) {\n    EXPECT_NEAR(orig_bias_diff.cpu_diff()[i],\n                this->blob_bottom_broadcast_1_->cpu_diff()[i], 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBroadcastMiddleWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BiasParameter* bias_param = layer_param.mutable_bias_param();\n  bias_param->set_axis(1);\n  bias_param->set_num_axes(2);\n  bias_param->mutable_filler()->set_type(\"gaussian\");\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) +\n                      layer->blobs()[0]->data_at(c, h, 0, 0), 1e-5);\n        }\n 
     }\n    }\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBroadcastEnd) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_2_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(2);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) +\n                      this->blob_bottom_broadcast_2_->data_at(h, w, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_bias_);\n  LayerParameter layer_param;\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data = this->blob_bottom_->cpu_data();\n  const Dtype bias = *this->blob_bottom_bias_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data[i] + bias, 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestForwardBiasAxis2) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_bias_);\n  
LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(2);\n  shared_ptr<BiasLayer<Dtype> > layer(new BiasLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data = this->blob_bottom_->cpu_data();\n  const Dtype bias = *this->blob_bottom_bias_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data[i] + bias, 1e-5);\n  }\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientEltwise) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientEltwiseWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BiasParameter* bias_param = layer_param.mutable_bias_param();\n  bias_param->set_axis(0);\n  bias_param->set_num_axes(-1);\n  bias_param->mutable_filler()->set_type(\"gaussian\");\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBroadcastBegin) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_0_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(0);\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, 
this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBroadcastMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(1);\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBroadcastMiddleWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  BiasParameter* bias_param = layer_param.mutable_bias_param();\n  bias_param->set_axis(1);\n  bias_param->set_num_axes(2);\n  bias_param->mutable_filler()->set_type(\"gaussian\");\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBroadcastEnd) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_2_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(2);\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_bias_);\n  LayerParameter layer_param;\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(BiasLayerTest, TestGradientBiasAxis2) {\n  typedef typename TypeParam::Dtype Dtype;\n  
this->blob_bottom_vec_.push_back(this->blob_bottom_bias_);\n  LayerParameter layer_param;\n  layer_param.mutable_bias_param()->set_axis(2);\n  BiasLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_blob.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass BlobSimpleTest : public ::testing::Test {\n protected:\n  BlobSimpleTest()\n      : blob_(new Blob<Dtype>()),\n        blob_preshaped_(new Blob<Dtype>(2, 3, 4, 5)) {}\n  virtual ~BlobSimpleTest() { delete blob_; delete blob_preshaped_; }\n  Blob<Dtype>* const blob_;\n  Blob<Dtype>* const blob_preshaped_;\n};\n\nTYPED_TEST_CASE(BlobSimpleTest, TestDtypes);\n\nTYPED_TEST(BlobSimpleTest, TestInitialization) {\n  EXPECT_TRUE(this->blob_);\n  EXPECT_TRUE(this->blob_preshaped_);\n  EXPECT_EQ(this->blob_preshaped_->num(), 2);\n  EXPECT_EQ(this->blob_preshaped_->channels(), 3);\n  EXPECT_EQ(this->blob_preshaped_->height(), 4);\n  EXPECT_EQ(this->blob_preshaped_->width(), 5);\n  EXPECT_EQ(this->blob_preshaped_->count(), 120);\n  EXPECT_EQ(this->blob_->num_axes(), 0);\n  EXPECT_EQ(this->blob_->count(), 0);\n}\n\nTYPED_TEST(BlobSimpleTest, TestPointersCPUGPU) {\n  EXPECT_TRUE(this->blob_preshaped_->gpu_data());\n  EXPECT_TRUE(this->blob_preshaped_->cpu_data());\n  EXPECT_TRUE(this->blob_preshaped_->mutable_gpu_data());\n  EXPECT_TRUE(this->blob_preshaped_->mutable_cpu_data());\n}\n\nTYPED_TEST(BlobSimpleTest, TestReshape) {\n  this->blob_->Reshape(2, 3, 4, 5);\n  EXPECT_EQ(this->blob_->num(), 2);\n  EXPECT_EQ(this->blob_->channels(), 3);\n  EXPECT_EQ(this->blob_->height(), 4);\n  EXPECT_EQ(this->blob_->width(), 5);\n  EXPECT_EQ(this->blob_->count(), 120);\n}\n\nTYPED_TEST(BlobSimpleTest, TestLegacyBlobProtoShapeEquals) {\n  BlobProto blob_proto;\n\n  // Reshape to (3 x 2).\n  vector<int> shape(2);\n  shape[0] = 3;\n  shape[1] = 2;\n  this->blob_->Reshape(shape);\n\n  // (3 x 2) blob == (1 x 1 x 3 x 2) legacy blob\n  blob_proto.set_num(1);\n  blob_proto.set_channels(1);\n  blob_proto.set_height(3);\n  
blob_proto.set_width(2);\n  EXPECT_TRUE(this->blob_->ShapeEquals(blob_proto));\n\n  // (3 x 2) blob != (0 x 1 x 3 x 2) legacy blob\n  blob_proto.set_num(0);\n  blob_proto.set_channels(1);\n  blob_proto.set_height(3);\n  blob_proto.set_width(2);\n  EXPECT_FALSE(this->blob_->ShapeEquals(blob_proto));\n\n  // (3 x 2) blob != (3 x 1 x 3 x 2) legacy blob\n  blob_proto.set_num(3);\n  blob_proto.set_channels(1);\n  blob_proto.set_height(3);\n  blob_proto.set_width(2);\n  EXPECT_FALSE(this->blob_->ShapeEquals(blob_proto));\n\n  // Reshape to (1 x 3 x 2).\n  shape.insert(shape.begin(), 1);\n  this->blob_->Reshape(shape);\n\n  // (1 x 3 x 2) blob == (1 x 1 x 3 x 2) legacy blob\n  blob_proto.set_num(1);\n  blob_proto.set_channels(1);\n  blob_proto.set_height(3);\n  blob_proto.set_width(2);\n  EXPECT_TRUE(this->blob_->ShapeEquals(blob_proto));\n\n  // Reshape to (2 x 3 x 2).\n  shape[0] = 2;\n  this->blob_->Reshape(shape);\n\n  // (2 x 3 x 2) blob != (1 x 1 x 3 x 2) legacy blob\n  blob_proto.set_num(1);\n  blob_proto.set_channels(1);\n  blob_proto.set_height(3);\n  blob_proto.set_width(2);\n  EXPECT_FALSE(this->blob_->ShapeEquals(blob_proto));\n}\n\ntemplate <typename TypeParam>\nclass BlobMathTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  BlobMathTest()\n      : blob_(new Blob<Dtype>(2, 3, 4, 5)),\n        epsilon_(1e-6) {}\n\n  virtual ~BlobMathTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  Dtype epsilon_;\n};\n\nTYPED_TEST_CASE(BlobMathTest, TestDtypesAndDevices);\n\nTYPED_TEST(BlobMathTest, TestSumOfSquares) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  // Uninitialized Blob should have sum of squares == 0.\n  EXPECT_EQ(0, this->blob_->sumsq_data());\n  EXPECT_EQ(0, this->blob_->sumsq_diff());\n  FillerParameter filler_param;\n  filler_param.set_min(-3);\n  filler_param.set_max(3);\n  UniformFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_);\n  Dtype expected_sumsq = 0;\n  const Dtype* 
data = this->blob_->cpu_data();\n  for (int i = 0; i < this->blob_->count(); ++i) {\n    expected_sumsq += data[i] * data[i];\n  }\n  // Do a mutable access on the current device,\n  // so that the sumsq computation is done on that device.\n  // (Otherwise, this would only check the CPU sumsq implementation.)\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    this->blob_->mutable_cpu_data();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_data();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  EXPECT_NEAR(expected_sumsq, this->blob_->sumsq_data(),\n              this->epsilon_ * expected_sumsq);\n  EXPECT_EQ(0, this->blob_->sumsq_diff());\n\n  // Check sumsq_diff too.\n  const Dtype kDiffScaleFactor = 7;\n  caffe_cpu_scale(this->blob_->count(), kDiffScaleFactor, data,\n                  this->blob_->mutable_cpu_diff());\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    this->blob_->mutable_cpu_diff();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_diff();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  EXPECT_NEAR(expected_sumsq, this->blob_->sumsq_data(),\n              this->epsilon_ * expected_sumsq);\n  const Dtype expected_sumsq_diff =\n      expected_sumsq * kDiffScaleFactor * kDiffScaleFactor;\n  EXPECT_NEAR(expected_sumsq_diff, this->blob_->sumsq_diff(),\n              this->epsilon_ * expected_sumsq_diff);\n}\n\nTYPED_TEST(BlobMathTest, TestAsum) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  // Uninitialized Blob should have asum == 0.\n  EXPECT_EQ(0, this->blob_->asum_data());\n  EXPECT_EQ(0, this->blob_->asum_diff());\n  FillerParameter filler_param;\n  filler_param.set_min(-3);\n  filler_param.set_max(3);\n  UniformFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_);\n  Dtype expected_asum = 0;\n  const Dtype* data = this->blob_->cpu_data();\n  for (int i = 0; i < this->blob_->count(); ++i) {\n    
expected_asum += std::fabs(data[i]);\n  }\n  // Do a mutable access on the current device,\n  // so that the asum computation is done on that device.\n  // (Otherwise, this would only check the CPU asum implementation.)\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    this->blob_->mutable_cpu_data();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_data();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  EXPECT_NEAR(expected_asum, this->blob_->asum_data(),\n              this->epsilon_ * expected_asum);\n  EXPECT_EQ(0, this->blob_->asum_diff());\n\n  // Check asum_diff too.\n  const Dtype kDiffScaleFactor = 7;\n  caffe_cpu_scale(this->blob_->count(), kDiffScaleFactor, data,\n                  this->blob_->mutable_cpu_diff());\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    this->blob_->mutable_cpu_diff();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_diff();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  EXPECT_NEAR(expected_asum, this->blob_->asum_data(),\n              this->epsilon_ * expected_asum);\n  const Dtype expected_diff_asum = expected_asum * kDiffScaleFactor;\n  EXPECT_NEAR(expected_diff_asum, this->blob_->asum_diff(),\n              this->epsilon_ * expected_diff_asum);\n}\n\nTYPED_TEST(BlobMathTest, TestScaleData) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  EXPECT_EQ(0, this->blob_->asum_data());\n  EXPECT_EQ(0, this->blob_->asum_diff());\n  FillerParameter filler_param;\n  filler_param.set_min(-3);\n  filler_param.set_max(3);\n  UniformFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_);\n  const Dtype asum_before_scale = this->blob_->asum_data();\n  // Do a mutable access on the current device,\n  // so that the asum computation is done on that device.\n  // (Otherwise, this would only check the CPU asum implementation.)\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    
this->blob_->mutable_cpu_data();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_data();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  const Dtype kDataScaleFactor = 3;\n  this->blob_->scale_data(kDataScaleFactor);\n  EXPECT_NEAR(asum_before_scale * kDataScaleFactor, this->blob_->asum_data(),\n              this->epsilon_ * asum_before_scale * kDataScaleFactor);\n  EXPECT_EQ(0, this->blob_->asum_diff());\n\n  // Check scale_diff too.\n  const Dtype kDataToDiffScaleFactor = 7;\n  const Dtype* data = this->blob_->cpu_data();\n  caffe_cpu_scale(this->blob_->count(), kDataToDiffScaleFactor, data,\n                  this->blob_->mutable_cpu_diff());\n  const Dtype expected_asum_before_scale = asum_before_scale * kDataScaleFactor;\n  EXPECT_NEAR(expected_asum_before_scale, this->blob_->asum_data(),\n              this->epsilon_ * expected_asum_before_scale);\n  const Dtype expected_diff_asum_before_scale =\n      asum_before_scale * kDataScaleFactor * kDataToDiffScaleFactor;\n  EXPECT_NEAR(expected_diff_asum_before_scale, this->blob_->asum_diff(),\n              this->epsilon_ * expected_diff_asum_before_scale);\n  switch (TypeParam::device) {\n  case Caffe::CPU:\n    this->blob_->mutable_cpu_diff();\n    break;\n  case Caffe::GPU:\n    this->blob_->mutable_gpu_diff();\n    break;\n  default:\n    LOG(FATAL) << \"Unknown device: \" << TypeParam::device;\n  }\n  const Dtype kDiffScaleFactor = 3;\n  this->blob_->scale_diff(kDiffScaleFactor);\n  EXPECT_NEAR(asum_before_scale * kDataScaleFactor, this->blob_->asum_data(),\n              this->epsilon_ * asum_before_scale * kDataScaleFactor);\n  const Dtype expected_diff_asum =\n      expected_diff_asum_before_scale * kDiffScaleFactor;\n  EXPECT_NEAR(expected_diff_asum, this->blob_->asum_diff(),\n              this->epsilon_ * expected_diff_asum);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_caffe_main.cpp",
    "content": "// The main caffe test code. Your test cpp code should include this hpp\n// to allow a main function to be compiled into the binary.\n\n#include \"caffe/caffe.hpp\"\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n#ifndef CPU_ONLY\n  cudaDeviceProp CAFFE_TEST_CUDA_PROP;\n#endif\n}\n\n#ifndef CPU_ONLY\nusing caffe::CAFFE_TEST_CUDA_PROP;\n#endif\n\nint main(int argc, char** argv) {\n  ::testing::InitGoogleTest(&argc, argv);\n  caffe::GlobalInit(&argc, &argv);\n#ifndef CPU_ONLY\n  // Before starting testing, let's first print out a few cuda defice info.\n  int device;\n  cudaGetDeviceCount(&device);\n  cout << \"Cuda number of devices: \" << device << endl;\n  if (argc > 1) {\n    // Use the given device\n    device = atoi(argv[1]);\n    cudaSetDevice(device);\n    cout << \"Setting to use device \" << device << endl;\n  } else if (CUDA_TEST_DEVICE >= 0) {\n    // Use the device assigned in build configuration; but with a lower priority\n    device = CUDA_TEST_DEVICE;\n  }\n  cudaGetDevice(&device);\n  cout << \"Current device id: \" << device << endl;\n  cudaGetDeviceProperties(&CAFFE_TEST_CUDA_PROP, device);\n  cout << \"Current device name: \" << CAFFE_TEST_CUDA_PROP.name << endl;\n#endif\n  // invoke the test.\n  return RUN_ALL_TESTS();\n}\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_common.cpp",
    "content": "#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nclass CommonTest : public ::testing::Test {};\n\n#ifndef CPU_ONLY  // GPU Caffe singleton test.\n\nTEST_F(CommonTest, TestCublasHandlerGPU) {\n  int cuda_device_id;\n  CUDA_CHECK(cudaGetDevice(&cuda_device_id));\n  EXPECT_TRUE(Caffe::cublas_handle());\n}\n\n#endif\n\nTEST_F(CommonTest, TestBrewMode) {\n  Caffe::set_mode(Caffe::CPU);\n  EXPECT_EQ(Caffe::mode(), Caffe::CPU);\n  Caffe::set_mode(Caffe::GPU);\n  EXPECT_EQ(Caffe::mode(), Caffe::GPU);\n}\n\nTEST_F(CommonTest, TestRandSeedCPU) {\n  SyncedMemory data_a(10 * sizeof(int));\n  SyncedMemory data_b(10 * sizeof(int));\n  Caffe::set_random_seed(1701);\n  caffe_rng_bernoulli(10, 0.5, static_cast<int*>(data_a.mutable_cpu_data()));\n\n  Caffe::set_random_seed(1701);\n  caffe_rng_bernoulli(10, 0.5, static_cast<int*>(data_b.mutable_cpu_data()));\n\n  for (int i = 0; i < 10; ++i) {\n    EXPECT_EQ(static_cast<const int*>(data_a.cpu_data())[i],\n        static_cast<const int*>(data_b.cpu_data())[i]);\n  }\n}\n\n#ifndef CPU_ONLY  // GPU Caffe singleton test.\n\nTEST_F(CommonTest, TestRandSeedGPU) {\n  SyncedMemory data_a(10 * sizeof(unsigned int));\n  SyncedMemory data_b(10 * sizeof(unsigned int));\n  Caffe::set_random_seed(1701);\n  CURAND_CHECK(curandGenerate(Caffe::curand_generator(),\n        static_cast<unsigned int*>(data_a.mutable_gpu_data()), 10));\n  Caffe::set_random_seed(1701);\n  CURAND_CHECK(curandGenerate(Caffe::curand_generator(),\n        static_cast<unsigned int*>(data_b.mutable_gpu_data()), 10));\n  for (int i = 0; i < 10; ++i) {\n    EXPECT_EQ(((const unsigned int*)(data_a.cpu_data()))[i],\n        ((const unsigned int*)(data_b.cpu_data()))[i]);\n  }\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_concat_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/concat_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ConcatLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ConcatLayerTest()\n      : blob_bottom_0_(new Blob<Dtype>(2, 3, 6, 5)),\n        blob_bottom_1_(new Blob<Dtype>(2, 5, 6, 5)),\n        blob_bottom_2_(new Blob<Dtype>(5, 3, 6, 5)),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    shared_ptr<ConstantFiller<Dtype> > filler;\n    FillerParameter filler_param;\n    filler_param.set_value(1.);\n    filler.reset(new ConstantFiller<Dtype>(filler_param));\n    filler->Fill(this->blob_bottom_0_);\n    filler_param.set_value(2.);\n    filler.reset(new ConstantFiller<Dtype>(filler_param));\n    filler->Fill(this->blob_bottom_1_);\n    filler_param.set_value(3.);\n    filler.reset(new ConstantFiller<Dtype>(filler_param));\n    filler->Fill(this->blob_bottom_2_);\n    blob_bottom_vec_0_.push_back(blob_bottom_0_);\n    blob_bottom_vec_0_.push_back(blob_bottom_1_);\n    blob_bottom_vec_1_.push_back(blob_bottom_0_);\n    blob_bottom_vec_1_.push_back(blob_bottom_2_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~ConcatLayerTest() {\n    delete blob_bottom_0_; delete blob_bottom_1_;\n    delete blob_bottom_2_; delete blob_top_;\n  }\n\n  Blob<Dtype>* const blob_bottom_0_;\n  Blob<Dtype>* const blob_bottom_1_;\n  Blob<Dtype>* const blob_bottom_2_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_0_, blob_bottom_vec_1_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ConcatLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ConcatLayerTest, TestSetupNum) {\n  typedef 
typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_concat_param()->set_axis(0);\n  ConcatLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_1_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(),\n      this->blob_bottom_0_->num() + this->blob_bottom_2_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_0_->channels());\n  EXPECT_EQ(this->blob_top_->height(), this->blob_bottom_0_->height());\n  EXPECT_EQ(this->blob_top_->width(), this->blob_bottom_0_->width());\n}\n\nTYPED_TEST(ConcatLayerTest, TestSetupChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_0_->num());\n  EXPECT_EQ(this->blob_top_->channels(),\n      this->blob_bottom_0_->channels() + this->blob_bottom_1_->channels());\n  EXPECT_EQ(this->blob_top_->height(), this->blob_bottom_0_->height());\n  EXPECT_EQ(this->blob_top_->width(), this->blob_bottom_0_->width());\n}\n\nTYPED_TEST(ConcatLayerTest, TestSetupChannelsNegativeIndexing) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  // \"channels\" index is the third one from the end -- test negative indexing\n  // by setting axis to -3 and checking that we get the same results as above in\n  // TestSetupChannels.\n  layer_param.mutable_concat_param()->set_axis(-3);\n  layer.SetUp(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_0_->num());\n  EXPECT_EQ(this->blob_top_->channels(),\n      this->blob_bottom_0_->channels() + this->blob_bottom_1_->channels());\n  EXPECT_EQ(this->blob_top_->height(), this->blob_bottom_0_->height());\n  EXPECT_EQ(this->blob_top_->width(), this->blob_bottom_0_->width());\n}\n\nTYPED_TEST(ConcatLayerTest, TestForwardTrivial) {\n  typedef 
typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  this->blob_bottom_vec_0_.resize(1);\n  layer.SetUp(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_0_->count(); ++i) {\n    EXPECT_EQ(this->blob_bottom_0_->cpu_data()[i],\n              this->blob_top_->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(ConcatLayerTest, TestForwardNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_concat_param()->set_axis(0);\n  ConcatLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_1_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_1_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_vec_1_[0]->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n      for (int h = 0; h < this->blob_top_->height(); ++h) {\n        for (int w = 0; w < this->blob_top_->width(); ++w) {\n          EXPECT_EQ(this->blob_top_->data_at(n, c, h, w),\n              this->blob_bottom_vec_1_[0]->data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n  for (int n = 0; n < this->blob_bottom_vec_1_[1]->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n      for (int h = 0; h < this->blob_top_->height(); ++h) {\n        for (int w = 0; w < this->blob_top_->width(); ++w) {\n          EXPECT_EQ(this->blob_top_->data_at(n + 2, c, h, w),\n              this->blob_bottom_vec_1_[1]->data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ConcatLayerTest, TestForwardChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_0_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < 
this->blob_bottom_0_->channels(); ++c) {\n      for (int h = 0; h < this->blob_top_->height(); ++h) {\n        for (int w = 0; w < this->blob_top_->width(); ++w) {\n          EXPECT_EQ(this->blob_top_->data_at(n, c, h, w),\n              this->blob_bottom_vec_0_[0]->data_at(n, c, h, w));\n        }\n      }\n    }\n    for (int c = 0; c < this->blob_bottom_1_->channels(); ++c) {\n      for (int h = 0; h < this->blob_top_->height(); ++h) {\n        for (int w = 0; w < this->blob_top_->width(); ++w) {\n          EXPECT_EQ(this->blob_top_->data_at(n, c + 3, h, w),\n              this->blob_bottom_vec_0_[1]->data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ConcatLayerTest, TestGradientTrivial) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  this->blob_bottom_vec_0_.resize(1);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_0_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ConcatLayerTest, TestGradientNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_concat_param()->set_axis(0);\n  ConcatLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradient(&layer, this->blob_bottom_vec_1_,\n    this->blob_top_vec_);\n}\n\nTYPED_TEST(ConcatLayerTest, TestGradientChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradient(&layer, this->blob_bottom_vec_0_,\n    this->blob_top_vec_);\n}\n\nTYPED_TEST(ConcatLayerTest, TestGradientChannelsBottomOneOnly) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConcatLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradient(&layer, this->blob_bottom_vec_0_,\n    
this->blob_top_vec_, 1);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_contrastive_loss_layer.cpp",
    "content": "#include <algorithm>\n#include <cmath>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/contrastive_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ContrastiveLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ContrastiveLossLayerTest()\n      : blob_bottom_data_i_(new Blob<Dtype>(512, 2, 1, 1)),\n        blob_bottom_data_j_(new Blob<Dtype>(512, 2, 1, 1)),\n        blob_bottom_y_(new Blob<Dtype>(512, 1, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_min(-1.0);\n    filler_param.set_max(1.0);  // distances~=1.0 to test both sides of margin\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_i_);\n    blob_bottom_vec_.push_back(blob_bottom_data_i_);\n    filler.Fill(this->blob_bottom_data_j_);\n    blob_bottom_vec_.push_back(blob_bottom_data_j_);\n    for (int i = 0; i < blob_bottom_y_->count(); ++i) {\n      blob_bottom_y_->mutable_cpu_data()[i] = caffe_rng_rand() % 2;  // 0 or 1\n    }\n    blob_bottom_vec_.push_back(blob_bottom_y_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~ContrastiveLossLayerTest() {\n    delete blob_bottom_data_i_;\n    delete blob_bottom_data_j_;\n    delete blob_bottom_y_;\n    delete blob_top_loss_;\n  }\n\n  Blob<Dtype>* const blob_bottom_data_i_;\n  Blob<Dtype>* const blob_bottom_data_j_;\n  Blob<Dtype>* const blob_bottom_y_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ContrastiveLossLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ContrastiveLossLayerTest, TestForward) {\n  
typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ContrastiveLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // manually compute to compare\n  const Dtype margin = layer_param.contrastive_loss_param().margin();\n  const int num = this->blob_bottom_data_i_->num();\n  const int channels = this->blob_bottom_data_i_->channels();\n  Dtype loss(0);\n  for (int i = 0; i < num; ++i) {\n    Dtype dist_sq(0);\n    for (int j = 0; j < channels; ++j) {\n      Dtype diff = this->blob_bottom_data_i_->cpu_data()[i*channels+j] -\n          this->blob_bottom_data_j_->cpu_data()[i*channels+j];\n      dist_sq += diff*diff;\n    }\n    if (this->blob_bottom_y_->cpu_data()[i]) {  // similar pairs\n      loss += dist_sq;\n    } else {\n      Dtype dist = std::max<Dtype>(margin - sqrt(dist_sq), 0.0);\n      loss += dist*dist;\n    }\n  }\n  loss /= static_cast<Dtype>(num) * Dtype(2);\n  EXPECT_NEAR(this->blob_top_loss_->cpu_data()[0], loss, 1e-6);\n}\n\nTYPED_TEST(ContrastiveLossLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ContrastiveLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  // check the gradient for the first two bottom layers\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 1);\n}\n\nTYPED_TEST(ContrastiveLossLayerTest, TestForwardLegacy) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_contrastive_loss_param()->set_legacy_version(true);\n  ContrastiveLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  
layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // manually compute to compare\n  const Dtype margin = layer_param.contrastive_loss_param().margin();\n  const int num = this->blob_bottom_data_i_->num();\n  const int channels = this->blob_bottom_data_i_->channels();\n  Dtype loss(0);\n  for (int i = 0; i < num; ++i) {\n    Dtype dist_sq(0);\n    for (int j = 0; j < channels; ++j) {\n      Dtype diff = this->blob_bottom_data_i_->cpu_data()[i*channels+j] -\n          this->blob_bottom_data_j_->cpu_data()[i*channels+j];\n      dist_sq += diff*diff;\n    }\n    if (this->blob_bottom_y_->cpu_data()[i]) {  // similar pairs\n      loss += dist_sq;\n    } else {\n      loss += std::max(margin - dist_sq, Dtype(0.0));\n    }\n  }\n  loss /= static_cast<Dtype>(num) * Dtype(2);\n  EXPECT_NEAR(this->blob_top_loss_->cpu_data()[0], loss, 1e-6);\n}\n\nTYPED_TEST(ContrastiveLossLayerTest, TestGradientLegacy) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_contrastive_loss_param()->set_legacy_version(true);\n  ContrastiveLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  // check the gradient for the first two bottom layers\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 1);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_convolution_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/conv_layer.hpp\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_conv_layer.hpp\"\n#endif\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\n// Reference convolution for checking results:\n// accumulate through explicit loops over input, output, and filters.\ntemplate <typename Dtype>\nvoid caffe_conv(const Blob<Dtype>* in, ConvolutionParameter* conv_param,\n    const vector<shared_ptr<Blob<Dtype> > >& weights,\n    Blob<Dtype>* out) {\n  const bool has_depth = (out->num_axes() == 5);\n  if (!has_depth) { CHECK_EQ(4, out->num_axes()); }\n  // Kernel size, stride, and pad\n  int kernel_h, kernel_w;\n  if (conv_param->has_kernel_h() || conv_param->has_kernel_w()) {\n    kernel_h = conv_param->kernel_h();\n    kernel_w = conv_param->kernel_w();\n  } else {\n    kernel_h = kernel_w = conv_param->kernel_size(0);\n  }\n  int pad_h, pad_w;\n  if (conv_param->has_pad_h() || conv_param->has_pad_w()) {\n    pad_h = conv_param->pad_h();\n    pad_w = conv_param->pad_w();\n  } else {\n    pad_h = pad_w = conv_param->pad_size() ? conv_param->pad(0) : 0;\n  }\n  int stride_h, stride_w;\n  if (conv_param->has_stride_h() || conv_param->has_stride_w()) {\n    stride_h = conv_param->stride_h();\n    stride_w = conv_param->stride_w();\n  } else {\n    stride_h = stride_w = conv_param->stride_size() ? 
conv_param->stride(0) : 1;\n  }\n  int dilation_h, dilation_w;\n  dilation_h = dilation_w = conv_param->dilation_size() ?\n                            conv_param->dilation(0) : 1;\n  int kernel_d, pad_d, stride_d, dilation_d;\n  if (has_depth) {\n    kernel_d = kernel_h;\n    stride_d = stride_h;\n    pad_d = pad_h;\n    dilation_d = dilation_h;\n  } else {\n    kernel_d = stride_d = dilation_d = 1;\n    pad_d = 0;\n  }\n  // Groups\n  int groups = conv_param->group();\n  int o_g = out->shape(1) / groups;\n  int k_g = in->shape(1) / groups;\n  int o_head, k_head;\n  // Convolution\n  vector<int> weight_offset(4 + has_depth);\n  vector<int> in_offset(4 + has_depth);\n  vector<int> out_offset(4 + has_depth);\n  Dtype* out_data = out->mutable_cpu_data();\n  for (int n = 0; n < out->shape(0); n++) {\n    for (int g = 0; g < groups; g++) {\n      o_head = o_g * g;\n      k_head = k_g * g;\n      for (int o = 0; o < o_g; o++) {\n        for (int k = 0; k < k_g; k++) {\n          for (int z = 0; z < (has_depth ? out->shape(2) : 1); z++) {\n            for (int y = 0; y < out->shape(2 + has_depth); y++) {\n              for (int x = 0; x < out->shape(3 + has_depth); x++) {\n                for (int r = 0; r < kernel_d; r++) {\n                  for (int p = 0; p < kernel_h; p++) {\n                    for (int q = 0; q < kernel_w; q++) {\n                      int in_z = z * stride_d - pad_d + r * dilation_d;\n                      int in_y = y * stride_h - pad_h + p * dilation_h;\n                      int in_x = x * stride_w - pad_w + q * dilation_w;\n                      if (in_z >= 0 && in_z < (has_depth ? 
in->shape(2) : 1)\n                          && in_y >= 0 && in_y < in->shape(2 + has_depth)\n                          && in_x >= 0 && in_x < in->shape(3 + has_depth)) {\n                        weight_offset[0] = o + o_head;\n                        weight_offset[1] = k;\n                        if (has_depth) { weight_offset[2] = r; }\n                        weight_offset[2 + has_depth] = p;\n                        weight_offset[3 + has_depth] = q;\n                        in_offset[0] = n;\n                        in_offset[1] = k + k_head;\n                        if (has_depth) { in_offset[2] = in_z; }\n                        in_offset[2 + has_depth] = in_y;\n                        in_offset[3 + has_depth] = in_x;\n                        out_offset[0] = n;\n                        out_offset[1] = o + o_head;\n                        if (has_depth) { out_offset[2] = z; }\n                        out_offset[2 + has_depth] = y;\n                        out_offset[3 + has_depth] = x;\n                        out_data[out->offset(out_offset)] +=\n                            in->data_at(in_offset)\n                            * weights[0]->data_at(weight_offset);\n                      }\n                    }\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n  // Bias\n  if (conv_param->bias_term()) {\n    const Dtype* bias_data = weights[1]->cpu_data();\n    for (int n = 0; n < out->shape(0); n++) {\n      for (int o = 0; o < out->shape(1); o++) {\n        for (int z = 0; z < (has_depth ? 
out->shape(2) : 1); z++) {\n          for (int y = 0; y < out->shape(2 + has_depth); y++) {\n            for (int x = 0; x < out->shape(3 + has_depth); x++) {\n              out_offset[0] = n;\n              out_offset[1] = o;\n              if (has_depth) { out_offset[2] = z; }\n              out_offset[2 + has_depth] = y;\n              out_offset[3 + has_depth] = x;\n              out_data[out->offset(out_offset)] += bias_data[o];\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\ntemplate void caffe_conv(const Blob<float>* in,\n    ConvolutionParameter* conv_param,\n    const vector<shared_ptr<Blob<float> > >& weights,\n    Blob<float>* out);\ntemplate void caffe_conv(const Blob<double>* in,\n    ConvolutionParameter* conv_param,\n    const vector<shared_ptr<Blob<double> > >& weights,\n    Blob<double>* out);\n\ntemplate <typename TypeParam>\nclass ConvolutionLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ConvolutionLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_bottom_2_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_2_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_value(1.);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    filler.Fill(this->blob_bottom_2_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~ConvolutionLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_2_;\n    delete blob_top_;\n    delete blob_top_2_;\n  }\n\n  virtual Blob<Dtype>* MakeReferenceTop(Blob<Dtype>* top) {\n    this->ref_blob_top_.reset(new Blob<Dtype>());\n    this->ref_blob_top_->ReshapeLike(*top);\n    return this->ref_blob_top_.get();\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_2_;\n  Blob<Dtype>* const 
blob_top_;\n  Blob<Dtype>* const blob_top_2_;\n  shared_ptr<Blob<Dtype> > ref_blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ConvolutionLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ConvolutionLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 4);\n  EXPECT_EQ(this->blob_top_->height(), 2);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 4);\n  EXPECT_EQ(this->blob_top_2_->height(), 2);\n  EXPECT_EQ(this->blob_top_2_->width(), 1);\n  // setting group should not change the shape\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  layer.reset(new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 2);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 3);\n  EXPECT_EQ(this->blob_top_2_->height(), 2);\n  EXPECT_EQ(this->blob_top_2_->width(), 1);\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestSimpleConvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  
this->blob_top_vec_.push_back(this->blob_top_2_);\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n  caffe_conv(this->blob_bottom_2_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_2_));\n  top_data = this->blob_top_2_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestDilatedConvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  vector<int> bottom_shape;\n  bottom_shape.push_back(2);\n  bottom_shape.push_back(3);\n  bottom_shape.push_back(8);\n  bottom_shape.push_back(7);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n  }\n  LayerParameter layer_param;\n  
ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_dilation(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n             this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n  caffe_conv(this->blob_bottom_2_, convolution_param, layer->blobs(),\n             this->MakeReferenceTop(this->blob_top_2_));\n  top_data = this->blob_top_2_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, Test0DConvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  const int kNumOutput = 3;\n  convolution_param->set_num_output(kNumOutput);\n  convolution_param->set_axis(3);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  vector<int> top_shape = 
this->blob_bottom_->shape();\n  top_shape[3] = kNumOutput;\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(top_shape, this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  vector<int> weight_offset(2);\n  const Blob<Dtype>* weight = layer->blobs()[0].get();\n  const Blob<Dtype>* bias = layer->blobs()[1].get();\n  const int num = this->blob_top_->count(3);\n  const int dim = this->blob_top_->shape(3);\n  const int bottom_dim = this->blob_bottom_->shape(3);\n  for (int n = 0; n < num; ++n) {\n    for (int d = 0; d < dim; ++d) {\n      weight_offset[0] = d;\n      Dtype value = bias->cpu_data()[d];\n      for (int bottom_d = 0; bottom_d < bottom_dim; ++bottom_d) {\n        weight_offset[1] = bottom_d;\n        value += weight->data_at(weight_offset) *\n                 this->blob_bottom_->cpu_data()[n * bottom_dim + bottom_d];\n      }\n      EXPECT_NEAR(value, this->blob_top_->cpu_data()[n * dim + d], 1e-4);\n    }\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestSimple3DConvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  vector<int> bottom_shape(5);\n  bottom_shape[0] = this->blob_bottom_vec_[0]->shape(0);\n  bottom_shape[1] = this->blob_bottom_vec_[0]->shape(1);\n  bottom_shape[2] = 5;\n  bottom_shape[3] = this->blob_bottom_vec_[0]->shape(2);\n  bottom_shape[4] = this->blob_bottom_vec_[0]->shape(3);\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  
convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n  caffe_conv(this->blob_bottom_2_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_2_));\n  top_data = this->blob_top_2_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestDilated3DConvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  vector<int> bottom_shape(5);\n  bottom_shape[0] = this->blob_bottom_vec_[0]->shape(0);\n  bottom_shape[1] = this->blob_bottom_vec_[0]->shape(1);\n  bottom_shape[2] = 6;\n  bottom_shape[3] = 7;\n  bottom_shape[4] = 8;\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      
layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_dilation(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n             this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n  caffe_conv(this->blob_bottom_2_, convolution_param, layer->blobs(),\n             this->MakeReferenceTop(this->blob_top_2_));\n  top_data = this->blob_top_2_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, Test1x1Convolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(1);\n  convolution_param->add_stride(1);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  
layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestSimpleConvolutionGroup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const Dtype* top_data;\n  const Dtype* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestSobelConvolution) {\n  // Test separable convolution by computing the Sobel operator\n  // as a single filter then comparing the result\n  // as the convolution of two rectangular filters.\n  typedef typename TypeParam::Dtype Dtype;\n  
// Fill bottoms with identical Gaussian noise.\n  shared_ptr<GaussianFiller<Dtype> > filler;\n  FillerParameter filler_param;\n  filler_param.set_value(1.);\n  filler.reset(new GaussianFiller<Dtype>(filler_param));\n  filler->Fill(this->blob_bottom_);\n  this->blob_bottom_2_->CopyFrom(*this->blob_bottom_);\n  // Compute Sobel G_x operator as 3 x 3 convolution.\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(1);\n  convolution_param->set_bias_term(false);\n  shared_ptr<Layer<Dtype> > layer(\n      new ConvolutionLayer<Dtype>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<Dtype>(1, 3, 3, 3));\n  Dtype* weights = layer->blobs()[0]->mutable_cpu_data();\n  for (int c = 0; c < 3; ++c) {\n    int i = c * 9;  // 3 x 3 filter\n    weights[i +  0] = -1;\n    weights[i +  1] =  0;\n    weights[i +  2] =  1;\n    weights[i +  3] = -2;\n    weights[i +  4] =  0;\n    weights[i +  5] =  2;\n    weights[i +  6] = -1;\n    weights[i +  7] =  0;\n    weights[i +  8] =  1;\n  }\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Compute Sobel G_x operator as separable 3 x 1 and 1 x 3 convolutions.\n  // (1) the [1 2 1] column filter\n  vector<Blob<Dtype>*> sep_blob_bottom_vec;\n  vector<Blob<Dtype>*> sep_blob_top_vec;\n  shared_ptr<Blob<Dtype> > blob_sep(new Blob<Dtype>());\n  sep_blob_bottom_vec.push_back(this->blob_bottom_2_);\n  sep_blob_top_vec.push_back(this->blob_top_2_);\n  convolution_param->clear_kernel_size();\n  convolution_param->clear_stride();\n  convolution_param->set_kernel_h(3);\n  convolution_param->set_kernel_w(1);\n  convolution_param->set_stride_h(2);\n  convolution_param->set_stride_w(1);\n  convolution_param->set_num_output(1);\n  
convolution_param->set_bias_term(false);\n  layer.reset(new ConvolutionLayer<Dtype>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<Dtype>(1, 3, 3, 1));\n  Dtype* weights_1 = layer->blobs()[0]->mutable_cpu_data();\n  for (int c = 0; c < 3; ++c) {\n    int i = c * 3;  // 3 x 1 filter\n    weights_1[i +  0] = 1;\n    weights_1[i +  1] = 2;\n    weights_1[i +  2] = 1;\n  }\n  layer->SetUp(sep_blob_bottom_vec, sep_blob_top_vec);\n  layer->Forward(sep_blob_bottom_vec, sep_blob_top_vec);\n  // (2) the [-1 0 1] row filter\n  blob_sep->CopyFrom(*this->blob_top_2_, false, true);\n  sep_blob_bottom_vec.clear();\n  sep_blob_bottom_vec.push_back(blob_sep.get());\n  convolution_param->set_kernel_h(1);\n  convolution_param->set_kernel_w(3);\n  convolution_param->set_stride_h(1);\n  convolution_param->set_stride_w(2);\n  convolution_param->set_num_output(1);\n  convolution_param->set_bias_term(false);\n  layer.reset(new ConvolutionLayer<Dtype>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<Dtype>(1, 1, 1, 3));\n  Dtype* weights_2 = layer->blobs()[0]->mutable_cpu_data();\n  weights_2[0] = -1;\n  weights_2[1] =  0;\n  weights_2[2] =  1;\n  layer->SetUp(sep_blob_bottom_vec, sep_blob_top_vec);\n  layer->Forward(sep_blob_bottom_vec, sep_blob_top_vec);\n  // Test equivalence of full and separable filters.\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  const Dtype* sep_top_data = this->blob_top_2_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], sep_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestNDAgainst2D) {\n  typedef typename TypeParam::Dtype Dtype;\n  const int kernel_h = 11;\n  const int kernel_w = 13;\n  vector<int> bottom_shape(4);\n  bottom_shape[0] = 15;\n  bottom_shape[1] = 18;\n  bottom_shape[2] = kernel_h * 2;\n  bottom_shape[3] = kernel_w * 2;\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  
for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->set_num_output(12);\n  convolution_param->set_bias_term(false);\n  convolution_param->set_group(6);\n  convolution_param->set_kernel_h(kernel_h);\n  convolution_param->set_kernel_w(kernel_w);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  Blob<Dtype> weights;\n  Blob<Dtype> top_diff;\n  // Shape and fill weights and top_diff.\n  bool copy_diff;\n  bool reshape;\n  {\n    ConvolutionLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    top_diff.ReshapeLike(*this->blob_top_);\n    filler.Fill(&top_diff);\n    ASSERT_EQ(1, layer.blobs().size());\n    copy_diff = false; reshape = true;\n    weights.CopyFrom(*layer.blobs()[0], copy_diff, reshape);\n  }\n  vector<bool> propagate_down(1, true);\n  Blob<Dtype> result_2d;\n  Blob<Dtype> backward_result_2d;\n  Blob<Dtype> backward_weight_result_2d;\n  // Test with 2D im2col\n  {\n    caffe_set(this->blob_top_->count(), Dtype(0),\n              this->blob_top_->mutable_cpu_data());\n    caffe_set(this->blob_bottom_->count(), Dtype(0),\n              this->blob_bottom_->mutable_cpu_diff());\n    caffe_set(weights.count(), Dtype(0), weights.mutable_cpu_diff());\n    // Do SetUp and Forward; save Forward result in result_2d.\n    convolution_param->set_force_nd_im2col(false);\n    ConvolutionLayer<Dtype> layer_2d(layer_param);\n    layer_2d.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(1, layer_2d.blobs().size());\n    copy_diff = false; reshape = false;\n    layer_2d.blobs()[0]->CopyFrom(weights, copy_diff, reshape);\n    layer_2d.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    copy_diff = false; reshape = true;\n    
result_2d.CopyFrom(*this->blob_top_, copy_diff, reshape);\n    // Copy pre-generated top diff into actual top diff;\n    // do Backward and save result in backward_result_2d.\n    ASSERT_EQ(this->blob_top_->shape(), top_diff.shape());\n    caffe_copy(top_diff.count(), top_diff.cpu_data(),\n               this->blob_top_->mutable_cpu_diff());\n    layer_2d.Backward(this->blob_top_vec_, propagate_down,\n                      this->blob_bottom_vec_);\n    copy_diff = true; reshape = true;\n    backward_result_2d.CopyFrom(*this->blob_bottom_, copy_diff, reshape);\n    backward_weight_result_2d.CopyFrom(weights, copy_diff, reshape);\n  }\n  Blob<Dtype> result_nd;\n  Blob<Dtype> backward_result_nd;\n  Blob<Dtype> backward_weight_result_nd;\n  // Test with ND im2col\n  {\n    caffe_set(this->blob_top_->count(), Dtype(0),\n              this->blob_top_->mutable_cpu_data());\n    caffe_set(this->blob_bottom_->count(), Dtype(0),\n              this->blob_bottom_->mutable_cpu_diff());\n    caffe_set(weights.count(), Dtype(0), weights.mutable_cpu_diff());\n    // Do SetUp and Forward; save Forward result in result_nd.\n    convolution_param->set_force_nd_im2col(true);\n    ConvolutionLayer<Dtype> layer_nd(layer_param);\n    layer_nd.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(1, layer_nd.blobs().size());\n    copy_diff = false; reshape = false;\n    layer_nd.blobs()[0]->CopyFrom(weights, copy_diff, reshape);\n    layer_nd.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    copy_diff = false; reshape = true;\n    result_nd.CopyFrom(*this->blob_top_, copy_diff, reshape);\n    // Copy pre-generated top diff into actual top diff;\n    // do Backward and save result in backward_result_nd.\n    ASSERT_EQ(this->blob_top_->shape(), top_diff.shape());\n    caffe_copy(top_diff.count(), top_diff.cpu_data(),\n               this->blob_top_->mutable_cpu_diff());\n    layer_nd.Backward(this->blob_top_vec_, propagate_down,\n                      
this->blob_bottom_vec_);\n    copy_diff = true; reshape = true;\n    backward_result_nd.CopyFrom(*this->blob_bottom_, copy_diff, reshape);\n    backward_weight_result_nd.CopyFrom(weights, copy_diff, reshape);\n  }\n  ASSERT_EQ(result_nd.count(), result_2d.count());\n  for (int i = 0; i < result_2d.count(); ++i)  {\n    EXPECT_EQ(result_2d.cpu_data()[i], result_nd.cpu_data()[i]);\n  }\n  ASSERT_EQ(backward_result_nd.count(), backward_result_2d.count());\n  for (int i = 0; i < backward_result_2d.count(); ++i) {\n    EXPECT_EQ(backward_result_2d.cpu_diff()[i],\n              backward_result_nd.cpu_diff()[i]);\n  }\n  ASSERT_EQ(backward_weight_result_nd.count(),\n            backward_weight_result_2d.count());\n  for (int i = 0; i < backward_weight_result_2d.count(); ++i) {\n    EXPECT_EQ(backward_weight_result_2d.cpu_diff()[i],\n              backward_weight_result_nd.cpu_diff()[i]);\n  }\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ConvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestDilatedGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  vector<int> bottom_shape;\n  bottom_shape.push_back(2);\n  
bottom_shape.push_back(3);\n  bottom_shape.push_back(5);\n  bottom_shape.push_back(6);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n  }\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_dilation(2);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ConvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n                                  this->blob_top_vec_);\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestGradient3D) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  vector<int> bottom_shape(5);\n  bottom_shape[0] = this->blob_bottom_vec_[0]->shape(0);\n  bottom_shape[1] = this->blob_bottom_vec_[0]->shape(1);\n  bottom_shape[2] = 5;\n  bottom_shape[3] = this->blob_bottom_vec_[0]->shape(2);\n  bottom_shape[4] = this->blob_bottom_vec_[0]->shape(3);\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ConvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ConvolutionLayerTest, Test1x1Gradient) {\n  typedef typename TypeParam::Dtype 
Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  convolution_param->add_kernel_size(1);\n  convolution_param->add_stride(1);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ConvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ConvolutionLayerTest, TestGradientGroup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ConvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#ifdef USE_CUDNN\n\ntemplate <typename Dtype>\nclass CuDNNConvolutionLayerTest : public GPUDeviceTest<Dtype> {\n protected:\n  CuDNNConvolutionLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_bottom_2_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_2_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_value(1.);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    
filler.Fill(this->blob_bottom_2_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~CuDNNConvolutionLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_2_;\n    delete blob_top_;\n    delete blob_top_2_;\n  }\n\n  virtual Blob<Dtype>* MakeReferenceTop(Blob<Dtype>* top) {\n    this->ref_blob_top_.reset(new Blob<Dtype>());\n    this->ref_blob_top_->ReshapeLike(*top);\n    return this->ref_blob_top_.get();\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_2_;\n  Blob<Dtype>* const blob_top_;\n  Blob<Dtype>* const blob_top_2_;\n  shared_ptr<Blob<Dtype> > ref_blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(CuDNNConvolutionLayerTest, TestDtypes);\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestSetupCuDNN) {\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  shared_ptr<Layer<TypeParam> > layer(\n      new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 4);\n  EXPECT_EQ(this->blob_top_->height(), 2);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 4);\n  EXPECT_EQ(this->blob_top_2_->height(), 2);\n  EXPECT_EQ(this->blob_top_2_->width(), 1);\n  // setting group should not change the shape\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  
layer.reset(new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 2);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 3);\n  EXPECT_EQ(this->blob_top_2_->height(), 2);\n  EXPECT_EQ(this->blob_top_2_->width(), 1);\n}\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestSimpleConvolutionCuDNN) {\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<TypeParam> > layer(\n      new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const TypeParam* top_data;\n  const TypeParam* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n  caffe_conv(this->blob_bottom_2_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_2_));\n  top_data = this->blob_top_2_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for 
(int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestSimpleConvolutionGroupCuDNN) {\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<TypeParam> > layer(\n      new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Check against reference convolution.\n  const TypeParam* top_data;\n  const TypeParam* ref_top_data;\n  caffe_conv(this->blob_bottom_, convolution_param, layer->blobs(),\n      this->MakeReferenceTop(this->blob_top_));\n  top_data = this->blob_top_->cpu_data();\n  ref_top_data = this->ref_blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], ref_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestSobelConvolutionCuDNN) {\n  // Test separable convolution by computing the Sobel operator\n  // as a single filter then comparing the result\n  // as the convolution of two rectangular filters.\n\n  // Fill bottoms with identical Gaussian noise.\n  shared_ptr<GaussianFiller<TypeParam> > filler;\n  FillerParameter filler_param;\n  filler_param.set_value(1.);\n  filler.reset(new GaussianFiller<TypeParam>(filler_param));\n  filler->Fill(this->blob_bottom_);\n  this->blob_bottom_2_->CopyFrom(*this->blob_bottom_);\n  // Compute Sobel G_x operator as 3 x 3 convolution.\n  LayerParameter layer_param;\n  
ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(1);\n  convolution_param->set_bias_term(false);\n  shared_ptr<Layer<TypeParam> > layer(\n      new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<TypeParam>(1, 3, 3, 3));\n  TypeParam* weights = layer->blobs()[0]->mutable_cpu_data();\n  for (int c = 0; c < 3; ++c) {\n    int i = c * 9;  // 3 x 3 filter\n    weights[i +  0] = -1;\n    weights[i +  1] =  0;\n    weights[i +  2] =  1;\n    weights[i +  3] = -2;\n    weights[i +  4] =  0;\n    weights[i +  5] =  2;\n    weights[i +  6] = -1;\n    weights[i +  7] =  0;\n    weights[i +  8] =  1;\n  }\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Compute Sobel G_x operator as separable 3 x 1 and 1 x 3 convolutions.\n  // (1) the [1 2 1] column filter\n  vector<Blob<TypeParam>*> sep_blob_bottom_vec;\n  vector<Blob<TypeParam>*> sep_blob_top_vec;\n  shared_ptr<Blob<TypeParam> > blob_sep(new Blob<TypeParam>());\n  sep_blob_bottom_vec.push_back(this->blob_bottom_2_);\n  sep_blob_top_vec.push_back(this->blob_top_2_);\n  convolution_param->clear_kernel_size();\n  convolution_param->clear_stride();\n  convolution_param->set_kernel_h(3);\n  convolution_param->set_kernel_w(1);\n  convolution_param->set_stride_h(2);\n  convolution_param->set_stride_w(1);\n  convolution_param->set_num_output(1);\n  convolution_param->set_bias_term(false);\n  layer.reset(new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<TypeParam>(1, 3, 3, 1));\n  TypeParam* weights_1 = layer->blobs()[0]->mutable_cpu_data();\n  for (int c = 0; c < 3; ++c) {\n    int i = c * 3;  // 3 x 1 filter\n    weights_1[i +  0] = 1;\n    weights_1[i +  1] = 2;\n 
   weights_1[i +  2] = 1;\n  }\n  layer->SetUp(sep_blob_bottom_vec, sep_blob_top_vec);\n  layer->Forward(sep_blob_bottom_vec, sep_blob_top_vec);\n  // (2) the [-1 0 1] row filter\n  blob_sep->CopyFrom(*this->blob_top_2_, false, true);\n  sep_blob_bottom_vec.clear();\n  sep_blob_bottom_vec.push_back(blob_sep.get());\n  convolution_param->set_kernel_h(1);\n  convolution_param->set_kernel_w(3);\n  convolution_param->set_stride_h(1);\n  convolution_param->set_stride_w(2);\n  convolution_param->set_num_output(1);\n  convolution_param->set_bias_term(false);\n  layer.reset(new CuDNNConvolutionLayer<TypeParam>(layer_param));\n  layer->blobs().resize(1);\n  layer->blobs()[0].reset(new Blob<TypeParam>(1, 1, 1, 3));\n  TypeParam* weights_2 = layer->blobs()[0]->mutable_cpu_data();\n  weights_2[0] = -1;\n  weights_2[1] =  0;\n  weights_2[2] =  1;\n  layer->SetUp(sep_blob_bottom_vec, sep_blob_top_vec);\n  layer->Forward(sep_blob_bottom_vec, sep_blob_top_vec);\n  // Test equivalence of full and separable filters.\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  const TypeParam* sep_top_data = this->blob_top_2_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    EXPECT_NEAR(top_data[i], sep_top_data[i], 1e-4);\n  }\n}\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestGradientCuDNN) {\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  CuDNNConvolutionLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      
this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNConvolutionLayerTest, TestGradientGroupCuDNN) {\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  CuDNNConvolutionLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_data/generate_sample_data.py",
    "content": "\"\"\"\nGenerate data used in the HDF5DataLayer and GradientBasedSolver tests.\n\"\"\"\nimport os\nimport numpy as np\nimport h5py\n\nscript_dir = os.path.dirname(os.path.abspath(__file__))\n\n# Generate HDF5DataLayer sample_data.h5\n\nnum_cols = 8\nnum_rows = 10\nheight = 6\nwidth = 5\ntotal_size = num_cols * num_rows * height * width\n\ndata = np.arange(total_size)\ndata = data.reshape(num_rows, num_cols, height, width)\ndata = data.astype('float32')\n\n# We had a bug where data was copied into label, but the tests weren't\n# catching it, so let's make label 1-indexed.\nlabel = 1 + np.arange(num_rows)[:, np.newaxis]\nlabel = label.astype('float32')\n\n# We add an extra label2 dataset to test HDF5 layer's ability\n# to handle arbitrary number of output (\"top\") Blobs.\nlabel2 = label + 1\n\nprint data\nprint label\n\nwith h5py.File(script_dir + '/sample_data.h5', 'w') as f:\n    f['data'] = data\n    f['label'] = label\n    f['label2'] = label2\n\nwith h5py.File(script_dir + '/sample_data_2_gzip.h5', 'w') as f:\n    f.create_dataset(\n        'data', data=data + total_size,\n        compression='gzip', compression_opts=1\n    )\n    f.create_dataset(\n        'label', data=label,\n        compression='gzip', compression_opts=1,\n        dtype='uint8',\n    )\n    f.create_dataset(\n        'label2', data=label2,\n        compression='gzip', compression_opts=1,\n        dtype='uint8',\n    )\n\nwith open(script_dir + '/sample_data_list.txt', 'w') as f:\n    f.write('src/caffe/test/test_data/sample_data.h5\\n')\n    f.write('src/caffe/test/test_data/sample_data_2_gzip.h5\\n')\n\n# Generate GradientBasedSolver solver_data.h5\n\nnum_cols = 3\nnum_rows = 8\nheight = 10\nwidth = 10\n\ndata = np.random.randn(num_rows, num_cols, height, width)\ndata = data.reshape(num_rows, num_cols, height, width)\ndata = data.astype('float32')\n\ntargets = np.random.randn(num_rows, 1)\ntargets = targets.astype('float32')\n\nprint data\nprint targets\n\nwith 
h5py.File(script_dir + '/solver_data.h5', 'w') as f:\n    f['data'] = data\n    f['targets'] = targets\n\nwith open(script_dir + '/solver_data_list.txt', 'w') as f:\n    f.write('src/caffe/test/test_data/solver_data.h5\\n')\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_data/sample_data_list.txt",
    "content": "src/caffe/test/test_data/sample_data.h5\nsrc/caffe/test/test_data/sample_data_2_gzip.h5\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_data/solver_data_list.txt",
    "content": "src/caffe/test/test_data/solver_data.h5\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <string>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nusing boost::scoped_ptr;\n\ntemplate <typename TypeParam>\nclass DataLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  DataLayerTest()\n      : backend_(DataParameter_DB_LEVELDB),\n        blob_top_data_(new Blob<Dtype>()),\n        blob_top_label_(new Blob<Dtype>()),\n        seed_(1701) {}\n  virtual void SetUp() {\n    filename_.reset(new string());\n    MakeTempDir(filename_.get());\n    *filename_ += \"/db\";\n    blob_top_vec_.push_back(blob_top_data_);\n    blob_top_vec_.push_back(blob_top_label_);\n  }\n\n  // Fill the DB with data: if unique_pixels, each pixel is unique but\n  // all images are the same; else each image is unique but all pixels within\n  // an image are the same.\n  void Fill(const bool unique_pixels, DataParameter_DB backend) {\n    backend_ = backend;\n    LOG(INFO) << \"Using temporary dataset \" << *filename_;\n    scoped_ptr<db::DB> db(db::GetDB(backend));\n    db->Open(*filename_, db::NEW);\n    scoped_ptr<db::Transaction> txn(db->NewTransaction());\n    for (int i = 0; i < 5; ++i) {\n      Datum datum;\n      datum.set_label(i);\n      datum.set_channels(2);\n      datum.set_height(3);\n      datum.set_width(4);\n      std::string* data = datum.mutable_data();\n      for (int j = 0; j < 24; ++j) {\n        int datum = unique_pixels ? 
j : i;\n        data->push_back(static_cast<uint8_t>(datum));\n      }\n      stringstream ss;\n      ss << i;\n      string out;\n      CHECK(datum.SerializeToString(&out));\n      txn->Put(ss.str(), out);\n    }\n    txn->Commit();\n    db->Close();\n  }\n\n  void TestRead() {\n    const Dtype scale = 3;\n    LayerParameter param;\n    param.set_phase(TRAIN);\n    DataParameter* data_param = param.mutable_data_param();\n    data_param->set_batch_size(5);\n    data_param->set_source(filename_->c_str());\n    data_param->set_backend(backend_);\n\n    TransformationParameter* transform_param =\n        param.mutable_transform_param();\n    transform_param->set_scale(scale);\n\n    DataLayer<Dtype> layer(param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_data_->num(), 5);\n    EXPECT_EQ(blob_top_data_->channels(), 2);\n    EXPECT_EQ(blob_top_data_->height(), 3);\n    EXPECT_EQ(blob_top_data_->width(), 4);\n    EXPECT_EQ(blob_top_label_->num(), 5);\n    EXPECT_EQ(blob_top_label_->channels(), 1);\n    EXPECT_EQ(blob_top_label_->height(), 1);\n    EXPECT_EQ(blob_top_label_->width(), 1);\n\n    for (int iter = 0; iter < 100; ++iter) {\n      layer.Forward(blob_bottom_vec_, blob_top_vec_);\n      for (int i = 0; i < 5; ++i) {\n        EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n      }\n      for (int i = 0; i < 5; ++i) {\n        for (int j = 0; j < 24; ++j) {\n          EXPECT_EQ(scale * i, blob_top_data_->cpu_data()[i * 24 + j])\n              << \"debug: iter \" << iter << \" i \" << i << \" j \" << j;\n        }\n      }\n    }\n  }\n\n  void TestReshape(DataParameter_DB backend) {\n    const int num_inputs = 5;\n    // Save data of varying shapes.\n    LOG(INFO) << \"Using temporary dataset \" << *filename_;\n    scoped_ptr<db::DB> db(db::GetDB(backend));\n    db->Open(*filename_, db::NEW);\n    scoped_ptr<db::Transaction> txn(db->NewTransaction());\n    for (int i = 0; i < num_inputs; ++i) {\n      Datum datum;\n      
datum.set_label(i);\n      datum.set_channels(2);\n      datum.set_height(i % 2 + 1);\n      datum.set_width(i % 4 + 1);\n      std::string* data = datum.mutable_data();\n      const int data_size = datum.channels() * datum.height() * datum.width();\n      for (int j = 0; j < data_size; ++j) {\n        data->push_back(static_cast<uint8_t>(j));\n      }\n      stringstream ss;\n      ss << i;\n      string out;\n      CHECK(datum.SerializeToString(&out));\n      txn->Put(ss.str(), out);\n    }\n    txn->Commit();\n    db->Close();\n\n    // Load and check data of various shapes.\n    LayerParameter param;\n    param.set_phase(TEST);\n    DataParameter* data_param = param.mutable_data_param();\n    data_param->set_batch_size(1);\n    data_param->set_source(filename_->c_str());\n    data_param->set_backend(backend);\n\n    DataLayer<Dtype> layer(param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_data_->num(), 1);\n    EXPECT_EQ(blob_top_data_->channels(), 2);\n    EXPECT_EQ(blob_top_label_->num(), 1);\n    EXPECT_EQ(blob_top_label_->channels(), 1);\n    EXPECT_EQ(blob_top_label_->height(), 1);\n    EXPECT_EQ(blob_top_label_->width(), 1);\n\n    for (int iter = 0; iter < num_inputs; ++iter) {\n      layer.Forward(blob_bottom_vec_, blob_top_vec_);\n      EXPECT_EQ(blob_top_data_->height(), iter % 2 + 1);\n      EXPECT_EQ(blob_top_data_->width(), iter % 4 + 1);\n      EXPECT_EQ(iter, blob_top_label_->cpu_data()[0]);\n      const int channels = blob_top_data_->channels();\n      const int height = blob_top_data_->height();\n      const int width = blob_top_data_->width();\n      for (int c = 0; c < channels; ++c) {\n        for (int h = 0; h < height; ++h) {\n          for (int w = 0; w < width; ++w) {\n            const int idx = (c * height + h) * width + w;\n            EXPECT_EQ(idx, static_cast<int>(blob_top_data_->cpu_data()[idx]))\n                << \"debug: iter \" << iter << \" c \" << c\n                << \" h \" << h << \" w \" 
<< w;\n          }\n        }\n      }\n    }\n  }\n\n  void TestReadCrop(Phase phase) {\n    const Dtype scale = 3;\n    LayerParameter param;\n    param.set_phase(phase);\n    Caffe::set_random_seed(1701);\n\n    DataParameter* data_param = param.mutable_data_param();\n    data_param->set_batch_size(5);\n    data_param->set_source(filename_->c_str());\n    data_param->set_backend(backend_);\n\n    TransformationParameter* transform_param =\n        param.mutable_transform_param();\n    transform_param->set_scale(scale);\n    transform_param->set_crop_size(1);\n\n    DataLayer<Dtype> layer(param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_data_->num(), 5);\n    EXPECT_EQ(blob_top_data_->channels(), 2);\n    EXPECT_EQ(blob_top_data_->height(), 1);\n    EXPECT_EQ(blob_top_data_->width(), 1);\n    EXPECT_EQ(blob_top_label_->num(), 5);\n    EXPECT_EQ(blob_top_label_->channels(), 1);\n    EXPECT_EQ(blob_top_label_->height(), 1);\n    EXPECT_EQ(blob_top_label_->width(), 1);\n\n    for (int iter = 0; iter < 2; ++iter) {\n      layer.Forward(blob_bottom_vec_, blob_top_vec_);\n      for (int i = 0; i < 5; ++i) {\n        EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n      }\n      int num_with_center_value = 0;\n      for (int i = 0; i < 5; ++i) {\n        for (int j = 0; j < 2; ++j) {\n          const Dtype center_value = scale * (j ? 
17 : 5);\n          num_with_center_value +=\n              (center_value == blob_top_data_->cpu_data()[i * 2 + j]);\n          // At TEST time, check that we always get center value.\n          if (phase == caffe::TEST) {\n            EXPECT_EQ(center_value, this->blob_top_data_->cpu_data()[i * 2 + j])\n                << \"debug: iter \" << iter << \" i \" << i << \" j \" << j;\n          }\n        }\n      }\n      // At TRAIN time, check that we did not get the center crop all 10 times.\n      // (This check fails with probability 1-1/12^10 in a correct\n      // implementation, so we call set_random_seed.)\n      if (phase == caffe::TRAIN) {\n        EXPECT_LT(num_with_center_value, 10);\n      }\n    }\n  }\n\n  void TestReadCropTrainSequenceSeeded() {\n    LayerParameter param;\n    param.set_phase(TRAIN);\n    DataParameter* data_param = param.mutable_data_param();\n    data_param->set_batch_size(5);\n    data_param->set_source(filename_->c_str());\n    data_param->set_backend(backend_);\n\n    TransformationParameter* transform_param =\n        param.mutable_transform_param();\n    transform_param->set_crop_size(1);\n    transform_param->set_mirror(true);\n\n    // Get crop sequence with Caffe seed 1701.\n    Caffe::set_random_seed(seed_);\n    vector<vector<Dtype> > crop_sequence;\n    {\n      DataLayer<Dtype> layer1(param);\n      layer1.SetUp(blob_bottom_vec_, blob_top_vec_);\n      for (int iter = 0; iter < 2; ++iter) {\n        layer1.Forward(blob_bottom_vec_, blob_top_vec_);\n        for (int i = 0; i < 5; ++i) {\n          EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n        }\n        vector<Dtype> iter_crop_sequence;\n        for (int i = 0; i < 5; ++i) {\n          for (int j = 0; j < 2; ++j) {\n            iter_crop_sequence.push_back(\n                blob_top_data_->cpu_data()[i * 2 + j]);\n          }\n        }\n        crop_sequence.push_back(iter_crop_sequence);\n      }\n    }  // destroy 1st data layer and unlock the db\n\n    // Get 
crop sequence after reseeding Caffe with 1701.\n    // Check that the sequence is the same as the original.\n    Caffe::set_random_seed(seed_);\n    DataLayer<Dtype> layer2(param);\n    layer2.SetUp(blob_bottom_vec_, blob_top_vec_);\n    for (int iter = 0; iter < 2; ++iter) {\n      layer2.Forward(blob_bottom_vec_, blob_top_vec_);\n      for (int i = 0; i < 5; ++i) {\n        EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n      }\n      for (int i = 0; i < 5; ++i) {\n        for (int j = 0; j < 2; ++j) {\n          EXPECT_EQ(crop_sequence[iter][i * 2 + j],\n                    blob_top_data_->cpu_data()[i * 2 + j])\n              << \"debug: iter \" << iter << \" i \" << i << \" j \" << j;\n        }\n      }\n    }\n  }\n\n  void TestReadCropTrainSequenceUnseeded() {\n    LayerParameter param;\n    param.set_phase(TRAIN);\n    DataParameter* data_param = param.mutable_data_param();\n    data_param->set_batch_size(5);\n    data_param->set_source(filename_->c_str());\n    data_param->set_backend(backend_);\n\n    TransformationParameter* transform_param =\n        param.mutable_transform_param();\n    transform_param->set_crop_size(1);\n    transform_param->set_mirror(true);\n\n    // Get crop sequence with Caffe seed 1701, srand seed 1701.\n    Caffe::set_random_seed(seed_);\n    srand(seed_);\n    vector<vector<Dtype> > crop_sequence;\n    {\n      DataLayer<Dtype> layer1(param);\n      layer1.SetUp(blob_bottom_vec_, blob_top_vec_);\n      for (int iter = 0; iter < 2; ++iter) {\n        layer1.Forward(blob_bottom_vec_, blob_top_vec_);\n        for (int i = 0; i < 5; ++i) {\n          EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n        }\n        vector<Dtype> iter_crop_sequence;\n        for (int i = 0; i < 5; ++i) {\n          for (int j = 0; j < 2; ++j) {\n            iter_crop_sequence.push_back(\n                blob_top_data_->cpu_data()[i * 2 + j]);\n          }\n        }\n        crop_sequence.push_back(iter_crop_sequence);\n      }\n    }  // destroy 
1st data layer and unlock the db\n\n    // Get crop sequence continuing from previous Caffe RNG state; reseed\n    // srand with 1701. Check that the sequence differs from the original.\n    srand(seed_);\n    DataLayer<Dtype> layer2(param);\n    layer2.SetUp(blob_bottom_vec_, blob_top_vec_);\n    for (int iter = 0; iter < 2; ++iter) {\n      layer2.Forward(blob_bottom_vec_, blob_top_vec_);\n      for (int i = 0; i < 5; ++i) {\n        EXPECT_EQ(i, blob_top_label_->cpu_data()[i]);\n      }\n      int num_sequence_matches = 0;\n      for (int i = 0; i < 5; ++i) {\n        for (int j = 0; j < 2; ++j) {\n          num_sequence_matches += (crop_sequence[iter][i * 2 + j] ==\n                                   blob_top_data_->cpu_data()[i * 2 + j]);\n        }\n      }\n      EXPECT_LT(num_sequence_matches, 10);\n    }\n  }\n\n  virtual ~DataLayerTest() { delete blob_top_data_; delete blob_top_label_; }\n\n  DataParameter_DB backend_;\n  shared_ptr<string> filename_;\n  Blob<Dtype>* const blob_top_data_;\n  Blob<Dtype>* const blob_top_label_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  int seed_;\n};\n\nTYPED_TEST_CASE(DataLayerTest, TestDtypesAndDevices);\n\n#ifdef USE_LEVELDB\nTYPED_TEST(DataLayerTest, TestReadLevelDB) {\n  const bool unique_pixels = false;  // all pixels the same; images different\n  this->Fill(unique_pixels, DataParameter_DB_LEVELDB);\n  this->TestRead();\n}\n\nTYPED_TEST(DataLayerTest, TestReshapeLevelDB) {\n  this->TestReshape(DataParameter_DB_LEVELDB);\n}\n\nTYPED_TEST(DataLayerTest, TestReadCropTrainLevelDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LEVELDB);\n  this->TestReadCrop(TRAIN);\n}\n\n// Test that the sequence of random crops is consistent when using\n// Caffe::set_random_seed.\nTYPED_TEST(DataLayerTest, TestReadCropTrainSequenceSeededLevelDB) {\n  const bool unique_pixels = true;  // all images the same; pixels 
different\n  this->Fill(unique_pixels, DataParameter_DB_LEVELDB);\n  this->TestReadCropTrainSequenceSeeded();\n}\n\n// Test that the sequence of random crops differs across iterations when\n// Caffe::set_random_seed isn't called (and seeds from srand are ignored).\nTYPED_TEST(DataLayerTest, TestReadCropTrainSequenceUnseededLevelDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LEVELDB);\n  this->TestReadCropTrainSequenceUnseeded();\n}\n\nTYPED_TEST(DataLayerTest, TestReadCropTestLevelDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LEVELDB);\n  this->TestReadCrop(TEST);\n}\n#endif  // USE_LEVELDB\n\n#ifdef USE_LMDB\nTYPED_TEST(DataLayerTest, TestReadLMDB) {\n  const bool unique_pixels = false;  // all pixels the same; images different\n  this->Fill(unique_pixels, DataParameter_DB_LMDB);\n  this->TestRead();\n}\n\nTYPED_TEST(DataLayerTest, TestReshapeLMDB) {\n  this->TestReshape(DataParameter_DB_LMDB);\n}\n\nTYPED_TEST(DataLayerTest, TestReadCropTrainLMDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LMDB);\n  this->TestReadCrop(TRAIN);\n}\n\n// Test that the sequence of random crops is consistent when using\n// Caffe::set_random_seed.\nTYPED_TEST(DataLayerTest, TestReadCropTrainSequenceSeededLMDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LMDB);\n  this->TestReadCropTrainSequenceSeeded();\n}\n\n// Test that the sequence of random crops differs across iterations when\n// Caffe::set_random_seed isn't called (and seeds from srand are ignored).\nTYPED_TEST(DataLayerTest, TestReadCropTrainSequenceUnseededLMDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LMDB);\n  
this->TestReadCropTrainSequenceUnseeded();\n}\n\nTYPED_TEST(DataLayerTest, TestReadCropTestLMDB) {\n  const bool unique_pixels = true;  // all images the same; pixels different\n  this->Fill(unique_pixels, DataParameter_DB_LMDB);\n  this->TestReadCrop(TEST);\n}\n\n#endif  // USE_LMDB\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_data_transformer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <string>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n#include \"leveldb/db.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/data_transformer.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nvoid FillDatum(const int label, const int channels, const int height,\n  const int width, const bool unique_pixels, Datum * datum) {\n  datum->set_label(label);\n  datum->set_channels(channels);\n  datum->set_height(height);\n  datum->set_width(width);\n  int size = channels * height * width;\n  std::string* data = datum->mutable_data();\n  for (int j = 0; j < size; ++j) {\n    int datum = unique_pixels ? j : label;\n    data->push_back(static_cast<uint8_t>(datum));\n  }\n}\n\ntemplate <typename Dtype>\nclass DataTransformTest : public ::testing::Test {\n protected:\n  DataTransformTest()\n      : seed_(1701),\n      num_iter_(10) {}\n\n  int NumSequenceMatches(const TransformationParameter transform_param,\n      const Datum& datum, Phase phase) {\n    // Get crop sequence with Caffe seed 1701.\n    DataTransformer<Dtype> transformer(transform_param, phase);\n    const int crop_size = transform_param.crop_size();\n    Caffe::set_random_seed(seed_);\n    transformer.InitRand();\n    Blob<Dtype> blob(1, datum.channels(), datum.height(), datum.width());\n    if (transform_param.crop_size() > 0) {\n      blob.Reshape(1, datum.channels(), crop_size, crop_size);\n    }\n\n    vector<vector<Dtype> > crop_sequence;\n    for (int iter = 0; iter < this->num_iter_; ++iter) {\n      vector<Dtype> iter_crop_sequence;\n      transformer.Transform(datum, &blob);\n      for (int j = 0; j < blob.count(); ++j) {\n        iter_crop_sequence.push_back(blob.cpu_data()[j]);\n      }\n      crop_sequence.push_back(iter_crop_sequence);\n    }\n    // Check if the sequence differs from the 
previous\n    int num_sequence_matches = 0;\n    for (int iter = 0; iter < this->num_iter_; ++iter) {\n      vector<Dtype> iter_crop_sequence = crop_sequence[iter];\n      transformer.Transform(datum, &blob);\n      for (int j = 0; j < blob.count(); ++j) {\n        num_sequence_matches += (crop_sequence[iter][j] == blob.cpu_data()[j]);\n      }\n    }\n    return num_sequence_matches;\n  }\n\n  int seed_;\n  int num_iter_;\n};\n\nTYPED_TEST_CASE(DataTransformTest, TestDtypes);\n\nTYPED_TEST(DataTransformTest, TestEmptyTransform) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = false;  // all pixels the same equal to label\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  Blob<TypeParam> blob(1, channels, height, width);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  transformer.Transform(datum, &blob);\n  EXPECT_EQ(blob.num(), 1);\n  EXPECT_EQ(blob.channels(), datum.channels());\n  EXPECT_EQ(blob.height(), datum.height());\n  EXPECT_EQ(blob.width(), datum.width());\n  for (int j = 0; j < blob.count(); ++j) {\n    EXPECT_EQ(blob.cpu_data()[j], label);\n  }\n}\n\nTYPED_TEST(DataTransformTest, TestEmptyTransformUniquePixels) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  Blob<TypeParam> blob(1, 3, 4, 5);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  transformer.Transform(datum, &blob);\n  EXPECT_EQ(blob.num(), 1);\n  EXPECT_EQ(blob.channels(), datum.channels());\n  EXPECT_EQ(blob.height(), datum.height());\n  EXPECT_EQ(blob.width(), 
datum.width());\n  for (int j = 0; j < blob.count(); ++j) {\n    EXPECT_EQ(blob.cpu_data()[j], j);\n  }\n}\n\nTYPED_TEST(DataTransformTest, TestCropSize) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = false;  // all pixels the same equal to label\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int crop_size = 2;\n\n  transform_param.set_crop_size(crop_size);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  Blob<TypeParam> blob(1, channels, crop_size, crop_size);\n  for (int iter = 0; iter < this->num_iter_; ++iter) {\n    transformer.Transform(datum, &blob);\n    EXPECT_EQ(blob.num(), 1);\n    EXPECT_EQ(blob.channels(), datum.channels());\n    EXPECT_EQ(blob.height(), crop_size);\n    EXPECT_EQ(blob.width(), crop_size);\n    for (int j = 0; j < blob.count(); ++j) {\n      EXPECT_EQ(blob.cpu_data()[j], label);\n    }\n  }\n}\n\nTYPED_TEST(DataTransformTest, TestCropTrain) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int crop_size = 2;\n  const int size = channels * crop_size * crop_size;\n\n  transform_param.set_crop_size(crop_size);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  int num_matches = this->NumSequenceMatches(transform_param, datum, TRAIN);\n  EXPECT_LT(num_matches, size * this->num_iter_);\n}\n\nTYPED_TEST(DataTransformTest, TestCropTest) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int crop_size = 2;\n  const int size = 
channels * crop_size * crop_size;\n\n  transform_param.set_crop_size(crop_size);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  int num_matches = this->NumSequenceMatches(transform_param, datum, TEST);\n  EXPECT_EQ(num_matches, size * this->num_iter_);\n}\n\nTYPED_TEST(DataTransformTest, TestMirrorTrain) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int size = channels * height * width;\n\n  transform_param.set_mirror(true);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  int num_matches = this->NumSequenceMatches(transform_param, datum, TRAIN);\n  EXPECT_LT(num_matches, size * this->num_iter_);\n}\n\nTYPED_TEST(DataTransformTest, TestMirrorTest) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int size = channels * height * width;\n\n  transform_param.set_mirror(true);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  int num_matches = this->NumSequenceMatches(transform_param, datum, TEST);\n  EXPECT_LT(num_matches, size * this->num_iter_);\n}\n\nTYPED_TEST(DataTransformTest, TestCropMirrorTrain) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int crop_size = 2;\n\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  transform_param.set_crop_size(crop_size);\n  int num_matches_crop = this->NumSequenceMatches(\n      transform_param, datum, TRAIN);\n\n  
transform_param.set_mirror(true);\n  int num_matches_crop_mirror =\n      this->NumSequenceMatches(transform_param, datum, TRAIN);\n  // When doing crop and mirror we expect less num_matches than just crop\n  EXPECT_LE(num_matches_crop_mirror, num_matches_crop);\n}\n\nTYPED_TEST(DataTransformTest, TestCropMirrorTest) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int crop_size = 2;\n\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  transform_param.set_crop_size(crop_size);\n  int num_matches_crop = this->NumSequenceMatches(transform_param, datum, TEST);\n\n  transform_param.set_mirror(true);\n  int num_matches_crop_mirror =\n      this->NumSequenceMatches(transform_param, datum, TEST);\n  // When doing crop and mirror we expect less num_matches than just crop\n  EXPECT_LT(num_matches_crop_mirror, num_matches_crop);\n}\n\n\nTYPED_TEST(DataTransformTest, TestMeanValue) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = false;  // pixels are equal to label\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int mean_value = 2;\n\n  transform_param.add_mean_value(mean_value);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  Blob<TypeParam> blob(1, channels, height, width);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  transformer.Transform(datum, &blob);\n  for (int j = 0; j < blob.count(); ++j) {\n    EXPECT_EQ(blob.cpu_data()[j], label - mean_value);\n  }\n}\n\nTYPED_TEST(DataTransformTest, TestMeanValues) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = false;  // pixels are equal to label\n  const int label = 0;\n  const int channels = 3;\n  const int 
height = 4;\n  const int width = 5;\n\n  transform_param.add_mean_value(0);\n  transform_param.add_mean_value(1);\n  transform_param.add_mean_value(2);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  Blob<TypeParam> blob(1, channels, height, width);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  transformer.Transform(datum, &blob);\n  for (int c = 0; c < channels; ++c) {\n    for (int j = 0; j < height * width; ++j) {\n      EXPECT_EQ(blob.cpu_data()[blob.offset(0, c) + j], label - c);\n    }\n  }\n}\n\nTYPED_TEST(DataTransformTest, TestMeanFile) {\n  TransformationParameter transform_param;\n  const bool unique_pixels = true;  // pixels are consecutive ints [0,size]\n  const int label = 0;\n  const int channels = 3;\n  const int height = 4;\n  const int width = 5;\n  const int size = channels * height * width;\n\n  // Create a mean file\n  string mean_file;\n  MakeTempFilename(&mean_file);\n  BlobProto blob_mean;\n  blob_mean.set_num(1);\n  blob_mean.set_channels(channels);\n  blob_mean.set_height(height);\n  blob_mean.set_width(width);\n\n  for (int j = 0; j < size; ++j) {\n      blob_mean.add_data(j);\n  }\n\n  LOG(INFO) << \"Using temporary mean_file \" << mean_file;\n  WriteProtoToBinaryFile(blob_mean, mean_file);\n\n  transform_param.set_mean_file(mean_file);\n  Datum datum;\n  FillDatum(label, channels, height, width, unique_pixels, &datum);\n  Blob<TypeParam> blob(1, channels, height, width);\n  DataTransformer<TypeParam> transformer(transform_param, TEST);\n  transformer.InitRand();\n  transformer.Transform(datum, &blob);\n  for (int j = 0; j < blob.count(); ++j) {\n    EXPECT_EQ(blob.cpu_data()[j], 0);\n  }\n}\n\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_db.cpp",
    "content": "#if defined(USE_LEVELDB) && defined(USE_LMDB) && defined(USE_OPENCV)\n#include <string>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nusing boost::scoped_ptr;\n\ntemplate <typename TypeParam>\nclass DBTest : public ::testing::Test {\n protected:\n  DBTest()\n      : backend_(TypeParam::backend),\n      root_images_(string(EXAMPLES_SOURCE_DIR) + string(\"images/\")) {}\n\n  virtual void SetUp() {\n    MakeTempDir(&source_);\n    source_ += \"/db\";\n    string keys[] = {\"cat.jpg\", \"fish-bike.jpg\"};\n    LOG(INFO) << \"Using temporary db \" << source_;\n    scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n    db->Open(this->source_, db::NEW);\n    scoped_ptr<db::Transaction> txn(db->NewTransaction());\n    for (int i = 0; i < 2; ++i) {\n      Datum datum;\n      ReadImageToDatum(root_images_ + keys[i], i, &datum);\n      string out;\n      CHECK(datum.SerializeToString(&out));\n      txn->Put(keys[i], out);\n    }\n    txn->Commit();\n  }\n\n  virtual ~DBTest() { }\n\n  DataParameter_DB backend_;\n  string source_;\n  string root_images_;\n};\n\nstruct TypeLevelDB {\n  static DataParameter_DB backend;\n};\nDataParameter_DB TypeLevelDB::backend = DataParameter_DB_LEVELDB;\n\nstruct TypeLMDB {\n  static DataParameter_DB backend;\n};\nDataParameter_DB TypeLMDB::backend = DataParameter_DB_LMDB;\n\n// typedef ::testing::Types<TypeLmdb> TestTypes;\ntypedef ::testing::Types<TypeLevelDB, TypeLMDB> TestTypes;\n\nTYPED_TEST_CASE(DBTest, TestTypes);\n\nTYPED_TEST(DBTest, TestGetDB) {\n  scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n}\n\nTYPED_TEST(DBTest, TestNext) {\n  scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n  db->Open(this->source_, db::READ);\n  scoped_ptr<db::Cursor> cursor(db->NewCursor());\n  
EXPECT_TRUE(cursor->valid());\n  cursor->Next();\n  EXPECT_TRUE(cursor->valid());\n  cursor->Next();\n  EXPECT_FALSE(cursor->valid());\n}\n\nTYPED_TEST(DBTest, TestSeekToFirst) {\n  scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n  db->Open(this->source_, db::READ);\n  scoped_ptr<db::Cursor> cursor(db->NewCursor());\n  cursor->Next();\n  cursor->SeekToFirst();\n  EXPECT_TRUE(cursor->valid());\n  string key = cursor->key();\n  Datum datum;\n  datum.ParseFromString(cursor->value());\n  EXPECT_EQ(key, \"cat.jpg\");\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 360);\n  EXPECT_EQ(datum.width(), 480);\n}\n\nTYPED_TEST(DBTest, TestKeyValue) {\n  scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n  db->Open(this->source_, db::READ);\n  scoped_ptr<db::Cursor> cursor(db->NewCursor());\n  EXPECT_TRUE(cursor->valid());\n  string key = cursor->key();\n  Datum datum;\n  datum.ParseFromString(cursor->value());\n  EXPECT_EQ(key, \"cat.jpg\");\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 360);\n  EXPECT_EQ(datum.width(), 480);\n  cursor->Next();\n  EXPECT_TRUE(cursor->valid());\n  key = cursor->key();\n  datum.ParseFromString(cursor->value());\n  EXPECT_EQ(key, \"fish-bike.jpg\");\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 323);\n  EXPECT_EQ(datum.width(), 481);\n  cursor->Next();\n  EXPECT_FALSE(cursor->valid());\n}\n\nTYPED_TEST(DBTest, TestWrite) {\n  scoped_ptr<db::DB> db(db::GetDB(TypeParam::backend));\n  db->Open(this->source_, db::WRITE);\n  scoped_ptr<db::Transaction> txn(db->NewTransaction());\n  Datum datum;\n  ReadFileToDatum(this->root_images_ + \"cat.jpg\", 0, &datum);\n  string out;\n  CHECK(datum.SerializeToString(&out));\n  txn->Put(\"cat.jpg\", out);\n  ReadFileToDatum(this->root_images_ + \"fish-bike.jpg\", 1, &datum);\n  CHECK(datum.SerializeToString(&out));\n  txn->Put(\"fish-bike.jpg\", out);\n  txn->Commit();\n}\n\n}  // namespace caffe\n#endif  // USE_LEVELDB, USE_LMDB and USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_deconvolution_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/deconv_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\n// Since ConvolutionLayerTest checks the shared conv/deconv code in detail,\n// we'll just do a simple forward test and a gradient check.\ntemplate <typename TypeParam>\nclass DeconvolutionLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  DeconvolutionLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_bottom_2_(new Blob<Dtype>(2, 3, 6, 4)),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_2_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_value(1.);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    filler.Fill(this->blob_bottom_2_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~DeconvolutionLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_2_;\n    delete blob_top_;\n    delete blob_top_2_;\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_2_;\n  Blob<Dtype>* const blob_top_;\n  Blob<Dtype>* const blob_top_2_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(DeconvolutionLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(DeconvolutionLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  
this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  shared_ptr<Layer<Dtype> > layer(\n      new DeconvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 4);\n  EXPECT_EQ(this->blob_top_->height(), 13);\n  EXPECT_EQ(this->blob_top_->width(), 9);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 4);\n  EXPECT_EQ(this->blob_top_2_->height(), 13);\n  EXPECT_EQ(this->blob_top_2_->width(), 9);\n  // setting group should not change the shape\n  convolution_param->set_num_output(3);\n  convolution_param->set_group(3);\n  layer.reset(new DeconvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 13);\n  EXPECT_EQ(this->blob_top_->width(), 9);\n  EXPECT_EQ(this->blob_top_2_->num(), 2);\n  EXPECT_EQ(this->blob_top_2_->channels(), 3);\n  EXPECT_EQ(this->blob_top_2_->height(), 13);\n  EXPECT_EQ(this->blob_top_2_->width(), 9);\n}\n\nTYPED_TEST(DeconvolutionLayerTest, TestSimpleDeconvolution) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_num_output(4);\n  convolution_param->mutable_weight_filler()->set_type(\"constant\");\n  convolution_param->mutable_weight_filler()->set_value(1);\n  convolution_param->mutable_bias_filler()->set_type(\"constant\");\n  convolution_param->mutable_bias_filler()->set_value(0.1);\n  shared_ptr<Layer<Dtype> > layer(\n      new 
DeconvolutionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  // constant-fill the bottom blobs\n  FillerParameter filler_param;\n  filler_param.set_value(1.);\n  ConstantFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_bottom_);\n  filler.Fill(this->blob_bottom_2_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // simply check that accumulation works with overlapping filters\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n      for (int h = 0; h < this->blob_top_->height(); ++h) {\n        for (int w = 0; w < this->blob_top_->width(); ++w) {\n          Dtype expected = 3.1;\n          bool h_overlap = h % 2 == 0 && h > 0\n            && h < this->blob_top_->height() - 1;\n          bool w_overlap = w % 2 == 0 && w > 0\n            && w < this->blob_top_->width() - 1;\n          if (h_overlap && w_overlap) {\n            expected += 9;\n          } else if (h_overlap || w_overlap) {\n            expected += 3;\n          }\n          EXPECT_NEAR(top_data[this->blob_top_->offset(n, c, h, w)],\n              expected, 1e-4);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(DeconvolutionLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  this->blob_bottom_vec_.push_back(this->blob_bottom_2_);\n  this->blob_top_vec_.push_back(this->blob_top_2_);\n  convolution_param->add_kernel_size(2);\n  convolution_param->add_stride(1);\n  convolution_param->set_num_output(1);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  DeconvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  
checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(DeconvolutionLayerTest, TestNDAgainst2D) {\n  typedef typename TypeParam::Dtype Dtype;\n  const int kernel_h = 11;\n  const int kernel_w = 13;\n  vector<int> bottom_shape(4);\n  bottom_shape[0] = 15;\n  bottom_shape[1] = 12;\n  bottom_shape[2] = kernel_h * 2;\n  bottom_shape[3] = kernel_w * 2;\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->set_num_output(18);\n  convolution_param->set_bias_term(false);\n  convolution_param->set_group(6);\n  convolution_param->set_kernel_h(kernel_h);\n  convolution_param->set_kernel_w(kernel_w);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  Blob<Dtype> weights;\n  Blob<Dtype> top_diff;\n  // Shape and fill weights and top_diff.\n  bool copy_diff;\n  bool reshape;\n  {\n    DeconvolutionLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    top_diff.ReshapeLike(*this->blob_top_);\n    filler.Fill(&top_diff);\n    ASSERT_EQ(1, layer.blobs().size());\n    copy_diff = false; reshape = true;\n    weights.CopyFrom(*layer.blobs()[0], copy_diff, reshape);\n  }\n  vector<bool> propagate_down(1, true);\n  Blob<Dtype> result_2d;\n  Blob<Dtype> backward_result_2d;\n  Blob<Dtype> backward_weight_result_2d;\n  // Test with 2D im2col\n  {\n    caffe_set(this->blob_top_->count(), Dtype(0),\n              this->blob_top_->mutable_cpu_data());\n    caffe_set(this->blob_bottom_->count(), Dtype(0),\n              this->blob_bottom_->mutable_cpu_diff());\n    caffe_set(weights.count(), Dtype(0), weights.mutable_cpu_diff());\n    // Do 
SetUp and Forward; save Forward result in result_2d.\n    convolution_param->set_force_nd_im2col(false);\n    DeconvolutionLayer<Dtype> layer_2d(layer_param);\n    layer_2d.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(1, layer_2d.blobs().size());\n    copy_diff = false; reshape = false;\n    layer_2d.blobs()[0]->CopyFrom(weights, copy_diff, reshape);\n    layer_2d.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    copy_diff = false; reshape = true;\n    result_2d.CopyFrom(*this->blob_top_, copy_diff, reshape);\n    // Copy pre-generated top diff into actual top diff;\n    // do Backward and save result in backward_result_2d.\n    ASSERT_EQ(this->blob_top_->shape(), top_diff.shape());\n    caffe_copy(top_diff.count(), top_diff.cpu_data(),\n               this->blob_top_->mutable_cpu_diff());\n    layer_2d.Backward(this->blob_top_vec_, propagate_down,\n                      this->blob_bottom_vec_);\n    copy_diff = true; reshape = true;\n    backward_result_2d.CopyFrom(*this->blob_bottom_, copy_diff, reshape);\n    backward_weight_result_2d.CopyFrom(weights, copy_diff, reshape);\n  }\n  Blob<Dtype> result_nd;\n  Blob<Dtype> backward_result_nd;\n  Blob<Dtype> backward_weight_result_nd;\n  // Test with ND im2col\n  {\n    caffe_set(this->blob_top_->count(), Dtype(0),\n              this->blob_top_->mutable_cpu_data());\n    caffe_set(this->blob_bottom_->count(), Dtype(0),\n              this->blob_bottom_->mutable_cpu_diff());\n    caffe_set(weights.count(), Dtype(0), weights.mutable_cpu_diff());\n    // Do SetUp and Forward; save Forward result in result_nd.\n    convolution_param->set_force_nd_im2col(true);\n    DeconvolutionLayer<Dtype> layer_nd(layer_param);\n    layer_nd.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(1, layer_nd.blobs().size());\n    copy_diff = false; reshape = false;\n    layer_nd.blobs()[0]->CopyFrom(weights, copy_diff, reshape);\n    layer_nd.Forward(this->blob_bottom_vec_, 
this->blob_top_vec_);\n    copy_diff = false; reshape = true;\n    result_nd.CopyFrom(*this->blob_top_, copy_diff, reshape);\n    // Copy pre-generated top diff into actual top diff;\n    // do Backward and save result in backward_result_nd.\n    ASSERT_EQ(this->blob_top_->shape(), top_diff.shape());\n    caffe_copy(top_diff.count(), top_diff.cpu_data(),\n               this->blob_top_->mutable_cpu_diff());\n    layer_nd.Backward(this->blob_top_vec_, propagate_down,\n                      this->blob_bottom_vec_);\n    copy_diff = true; reshape = true;\n    backward_result_nd.CopyFrom(*this->blob_bottom_, copy_diff, reshape);\n    backward_weight_result_nd.CopyFrom(weights, copy_diff, reshape);\n  }\n  ASSERT_EQ(result_nd.count(), result_2d.count());\n  for (int i = 0; i < result_2d.count(); ++i)  {\n    EXPECT_EQ(result_2d.cpu_data()[i], result_nd.cpu_data()[i]);\n  }\n  ASSERT_EQ(backward_result_nd.count(), backward_result_2d.count());\n  for (int i = 0; i < backward_result_2d.count(); ++i) {\n    EXPECT_EQ(backward_result_2d.cpu_diff()[i],\n              backward_result_nd.cpu_diff()[i]);\n  }\n  ASSERT_EQ(backward_weight_result_nd.count(),\n            backward_weight_result_2d.count());\n  for (int i = 0; i < backward_weight_result_2d.count(); ++i) {\n    EXPECT_EQ(backward_weight_result_2d.cpu_diff()[i],\n              backward_weight_result_nd.cpu_diff()[i]);\n  }\n}\n\nTYPED_TEST(DeconvolutionLayerTest, TestGradient3D) {\n  typedef typename TypeParam::Dtype Dtype;\n  vector<int> bottom_shape(5);\n  bottom_shape[0] = this->blob_bottom_vec_[0]->shape(0);\n  bottom_shape[1] = this->blob_bottom_vec_[0]->shape(1);\n  bottom_shape[2] = 2;\n  bottom_shape[3] = 3;\n  bottom_shape[4] = 2;\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  for (int i = 0; i < this->blob_bottom_vec_.size(); ++i) {\n    this->blob_bottom_vec_[i]->Reshape(bottom_shape);\n    filler.Fill(this->blob_bottom_vec_[i]);\n  }\n  LayerParameter layer_param;\n  
ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(2);\n  convolution_param->add_stride(2);\n  convolution_param->add_pad(1);\n  convolution_param->set_num_output(2);\n  convolution_param->mutable_weight_filler()->set_type(\"gaussian\");\n  convolution_param->mutable_bias_filler()->set_type(\"gaussian\");\n  DeconvolutionLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_dummy_data_layer.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layers/dummy_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass DummyDataLayerTest : public CPUDeviceTest<Dtype> {\n protected:\n  DummyDataLayerTest()\n      : blob_top_a_(new Blob<Dtype>()),\n        blob_top_b_(new Blob<Dtype>()),\n        blob_top_c_(new Blob<Dtype>()) {}\n\n  virtual void SetUp() {\n    blob_bottom_vec_.clear();\n    blob_top_vec_.clear();\n    blob_top_vec_.push_back(blob_top_a_);\n    blob_top_vec_.push_back(blob_top_b_);\n    blob_top_vec_.push_back(blob_top_c_);\n  }\n\n  virtual ~DummyDataLayerTest() {\n    delete blob_top_a_;\n    delete blob_top_b_;\n    delete blob_top_c_;\n  }\n\n  Blob<Dtype>* const blob_top_a_;\n  Blob<Dtype>* const blob_top_b_;\n  Blob<Dtype>* const blob_top_c_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(DummyDataLayerTest, TestDtypes);\n\nTYPED_TEST(DummyDataLayerTest, TestOneTopConstant) {\n  LayerParameter param;\n  DummyDataParameter* dummy_data_param = param.mutable_dummy_data_param();\n  dummy_data_param->add_num(5);\n  dummy_data_param->add_channels(3);\n  dummy_data_param->add_height(2);\n  dummy_data_param->add_width(4);\n  this->blob_top_vec_.resize(1);\n  DummyDataLayer<TypeParam> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_a_->num(), 5);\n  EXPECT_EQ(this->blob_top_a_->channels(), 3);\n  EXPECT_EQ(this->blob_top_a_->height(), 2);\n  EXPECT_EQ(this->blob_top_a_->width(), 4);\n  EXPECT_EQ(this->blob_top_b_->count(), 0);\n  EXPECT_EQ(this->blob_top_c_->count(), 0);\n  for (int i = 0; i < this->blob_top_vec_.size(); ++i) {\n    for (int j = 0; j < this->blob_top_vec_[i]->count(); ++j) {\n      EXPECT_EQ(0, 
this->blob_top_vec_[i]->cpu_data()[j]);\n    }\n  }\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_vec_.size(); ++i) {\n    for (int j = 0; j < this->blob_top_vec_[i]->count(); ++j) {\n      EXPECT_EQ(0, this->blob_top_vec_[i]->cpu_data()[j]);\n    }\n  }\n}\n\nTYPED_TEST(DummyDataLayerTest, TestTwoTopConstant) {\n  LayerParameter param;\n  DummyDataParameter* dummy_data_param = param.mutable_dummy_data_param();\n  dummy_data_param->add_num(5);\n  dummy_data_param->add_channels(3);\n  dummy_data_param->add_height(2);\n  dummy_data_param->add_width(4);\n  dummy_data_param->add_num(5);\n  // Don't explicitly set number of channels or height for 2nd top blob; should\n  // default to first channels and height (as we check later).\n  dummy_data_param->add_height(1);\n  FillerParameter* data_filler_param = dummy_data_param->add_data_filler();\n  data_filler_param->set_value(7);\n  this->blob_top_vec_.resize(2);\n  DummyDataLayer<TypeParam> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_a_->num(), 5);\n  EXPECT_EQ(this->blob_top_a_->channels(), 3);\n  EXPECT_EQ(this->blob_top_a_->height(), 2);\n  EXPECT_EQ(this->blob_top_a_->width(), 4);\n  EXPECT_EQ(this->blob_top_b_->num(), 5);\n  EXPECT_EQ(this->blob_top_b_->channels(), 3);\n  EXPECT_EQ(this->blob_top_b_->height(), 1);\n  EXPECT_EQ(this->blob_top_b_->width(), 4);\n  EXPECT_EQ(this->blob_top_c_->count(), 0);\n  for (int i = 0; i < this->blob_top_vec_.size(); ++i) {\n    for (int j = 0; j < this->blob_top_vec_[i]->count(); ++j) {\n      EXPECT_EQ(7, this->blob_top_vec_[i]->cpu_data()[j]);\n    }\n  }\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_vec_.size(); ++i) {\n    for (int j = 0; j < this->blob_top_vec_[i]->count(); ++j) {\n      EXPECT_EQ(7, this->blob_top_vec_[i]->cpu_data()[j]);\n    }\n  }\n}\n\nTYPED_TEST(DummyDataLayerTest, 
TestThreeTopConstantGaussianConstant) {\n  LayerParameter param;\n  DummyDataParameter* dummy_data_param = param.mutable_dummy_data_param();\n  dummy_data_param->add_num(5);\n  dummy_data_param->add_channels(3);\n  dummy_data_param->add_height(2);\n  dummy_data_param->add_width(4);\n  FillerParameter* data_filler_param_a = dummy_data_param->add_data_filler();\n  data_filler_param_a->set_value(7);\n  FillerParameter* data_filler_param_b = dummy_data_param->add_data_filler();\n  data_filler_param_b->set_type(\"gaussian\");\n  TypeParam gaussian_mean = 3.0;\n  TypeParam gaussian_std = 0.01;\n  data_filler_param_b->set_mean(gaussian_mean);\n  data_filler_param_b->set_std(gaussian_std);\n  FillerParameter* data_filler_param_c = dummy_data_param->add_data_filler();\n  data_filler_param_c->set_value(9);\n  DummyDataLayer<TypeParam> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_a_->num(), 5);\n  EXPECT_EQ(this->blob_top_a_->channels(), 3);\n  EXPECT_EQ(this->blob_top_a_->height(), 2);\n  EXPECT_EQ(this->blob_top_a_->width(), 4);\n  EXPECT_EQ(this->blob_top_b_->num(), 5);\n  EXPECT_EQ(this->blob_top_b_->channels(), 3);\n  EXPECT_EQ(this->blob_top_b_->height(), 2);\n  EXPECT_EQ(this->blob_top_b_->width(), 4);\n  EXPECT_EQ(this->blob_top_c_->num(), 5);\n  EXPECT_EQ(this->blob_top_c_->channels(), 3);\n  EXPECT_EQ(this->blob_top_c_->height(), 2);\n  EXPECT_EQ(this->blob_top_c_->width(), 4);\n  for (int i = 0; i < this->blob_top_a_->count(); ++i) {\n    EXPECT_EQ(7, this->blob_top_a_->cpu_data()[i]);\n  }\n  // Blob b uses a Gaussian filler, so SetUp should not have initialized it.\n  // Blob b's data should therefore be the default Blob data value: 0.\n  for (int i = 0; i < this->blob_top_b_->count(); ++i) {\n    EXPECT_EQ(0, this->blob_top_b_->cpu_data()[i]);\n  }\n  for (int i = 0; i < this->blob_top_c_->count(); ++i) {\n    EXPECT_EQ(9, this->blob_top_c_->cpu_data()[i]);\n  }\n\n  // Do a Forward pass to fill in Blob b 
with Gaussian data.\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_a_->count(); ++i) {\n    EXPECT_EQ(7, this->blob_top_a_->cpu_data()[i]);\n  }\n  // Check that the Gaussian's data has been filled in with values within\n  // 10 standard deviations of the mean. Record the first and last samples\n  // to check that they're different after the next Forward pass.\n  for (int i = 0; i < this->blob_top_b_->count(); ++i) {\n    EXPECT_NEAR(gaussian_mean, this->blob_top_b_->cpu_data()[i],\n                gaussian_std * 10);\n  }\n  const TypeParam first_gaussian_sample = this->blob_top_b_->cpu_data()[0];\n  const TypeParam last_gaussian_sample =\n      this->blob_top_b_->cpu_data()[this->blob_top_b_->count() - 1];\n  for (int i = 0; i < this->blob_top_c_->count(); ++i) {\n    EXPECT_EQ(9, this->blob_top_c_->cpu_data()[i]);\n  }\n\n  // Do another Forward pass to fill in Blob b with Gaussian data again,\n  // checking that we get different values.\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_a_->count(); ++i) {\n    EXPECT_EQ(7, this->blob_top_a_->cpu_data()[i]);\n  }\n  for (int i = 0; i < this->blob_top_b_->count(); ++i) {\n    EXPECT_NEAR(gaussian_mean, this->blob_top_b_->cpu_data()[i],\n                gaussian_std * 10);\n  }\n  EXPECT_NE(first_gaussian_sample, this->blob_top_b_->cpu_data()[0]);\n  EXPECT_NE(last_gaussian_sample,\n      this->blob_top_b_->cpu_data()[this->blob_top_b_->count() - 1]);\n  for (int i = 0; i < this->blob_top_c_->count(); ++i) {\n    EXPECT_EQ(9, this->blob_top_c_->cpu_data()[i]);\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_eltwise_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/eltwise_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass EltwiseLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  EltwiseLayerTest()\n      : blob_bottom_a_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_b_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_c_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_a_);\n    filler.Fill(this->blob_bottom_b_);\n    filler.Fill(this->blob_bottom_c_);\n    blob_bottom_vec_.push_back(blob_bottom_a_);\n    blob_bottom_vec_.push_back(blob_bottom_b_);\n    blob_bottom_vec_.push_back(blob_bottom_c_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~EltwiseLayerTest() {\n    delete blob_bottom_a_;\n    delete blob_bottom_b_;\n    delete blob_bottom_c_;\n    delete blob_top_;\n  }\n  Blob<Dtype>* const blob_bottom_a_;\n  Blob<Dtype>* const blob_bottom_b_;\n  Blob<Dtype>* const blob_bottom_c_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(EltwiseLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(EltwiseLayerTest, TestSetUp) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_PROD);\n  shared_ptr<EltwiseLayer<Dtype> > layer(\n      new EltwiseLayer<Dtype>(layer_param));\n  
layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 4);\n  EXPECT_EQ(this->blob_top_->width(), 5);\n}\n\nTYPED_TEST(EltwiseLayerTest, TestProd) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_PROD);\n  shared_ptr<EltwiseLayer<Dtype> > layer(\n      new EltwiseLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_a_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_b_->cpu_data();\n  const Dtype* in_data_c = this->blob_bottom_c_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] * in_data_b[i] * in_data_c[i], 1e-4);\n  }\n}\n\nTYPED_TEST(EltwiseLayerTest, TestSum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_SUM);\n  shared_ptr<EltwiseLayer<Dtype> > layer(\n      new EltwiseLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_a_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_b_->cpu_data();\n  const Dtype* in_data_c = this->blob_bottom_c_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] + in_data_b[i] + in_data_c[i], 1e-4);\n  
}\n}\n\nTYPED_TEST(EltwiseLayerTest, TestSumCoeff) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_SUM);\n  eltwise_param->add_coeff(1);\n  eltwise_param->add_coeff(-0.5);\n  eltwise_param->add_coeff(2);\n  shared_ptr<EltwiseLayer<Dtype> > layer(\n      new EltwiseLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_a_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_b_->cpu_data();\n  const Dtype* in_data_c = this->blob_bottom_c_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] - 0.5*in_data_b[i] + 2*in_data_c[i],\n        1e-4);\n  }\n}\n\nTYPED_TEST(EltwiseLayerTest, TestStableProdGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_PROD);\n  eltwise_param->set_stable_prod_grad(true);\n  EltwiseLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(EltwiseLayerTest, TestUnstableProdGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_PROD);\n  eltwise_param->set_stable_prod_grad(false);\n  EltwiseLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      
this->blob_top_vec_);\n}\n\nTYPED_TEST(EltwiseLayerTest, TestSumGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_SUM);\n  EltwiseLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(EltwiseLayerTest, TestSumCoeffGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_SUM);\n  eltwise_param->add_coeff(1);\n  eltwise_param->add_coeff(-0.5);\n  eltwise_param->add_coeff(2);\n  EltwiseLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(EltwiseLayerTest, TestMax) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_MAX);\n  shared_ptr<EltwiseLayer<Dtype> > layer(\n      new EltwiseLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_a_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_b_->cpu_data();\n  const Dtype* in_data_c = this->blob_bottom_c_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_EQ(data[i],\n              std::max(in_data_a[i], std::max(in_data_b[i], in_data_c[i])));\n  }\n}\n\nTYPED_TEST(EltwiseLayerTest, TestMaxGradient) {\n  typedef 
typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EltwiseParameter* eltwise_param = layer_param.mutable_eltwise_param();\n  eltwise_param->set_operation(EltwiseParameter_EltwiseOp_MAX);\n  EltwiseLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-4, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_embed_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/embed_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass EmbedLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  EmbedLayerTest()\n      : blob_bottom_(new Blob<Dtype>(4, 1, 1, 1)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~EmbedLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(EmbedLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(EmbedLayerTest, TestSetUp) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EmbedParameter* embed_param = layer_param.mutable_embed_param();\n  embed_param->set_num_output(10);\n  embed_param->set_input_dim(5);\n  shared_ptr<EmbedLayer<Dtype> > layer(new EmbedLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 5);\n  EXPECT_EQ(this->blob_top_->shape(0), 4);\n  EXPECT_EQ(this->blob_top_->shape(1), 1);\n  EXPECT_EQ(this->blob_top_->shape(2), 1);\n  EXPECT_EQ(this->blob_top_->shape(3), 1);\n  EXPECT_EQ(this->blob_top_->shape(4), 10);\n}\n\nTYPED_TEST(EmbedLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EmbedParameter* embed_param = layer_param.mutable_embed_param();\n  const int kNumOutput = 10;\n  const 
int kInputDim = 5;\n  embed_param->set_num_output(kNumOutput);\n  embed_param->set_input_dim(kInputDim);\n  embed_param->mutable_weight_filler()->set_type(\"uniform\");\n  embed_param->mutable_weight_filler()->set_min(-10);\n  embed_param->mutable_weight_filler()->set_max(10);\n  embed_param->set_bias_term(false);\n  shared_ptr<EmbedLayer<Dtype> > layer(new EmbedLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(1, layer->blobs().size());\n  vector<int> weight_shape(2);\n  weight_shape[0] = kInputDim;\n  weight_shape[1] = kNumOutput;\n  ASSERT_TRUE(weight_shape == layer->blobs()[0]->shape());\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    this->blob_bottom_->mutable_cpu_data()[i] = caffe_rng_rand() % kInputDim;\n  }\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  vector<int> weight_offset(2, 0);\n  vector<int> top_offset(5, 0);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    weight_offset[0] = static_cast<int>(this->blob_bottom_->cpu_data()[i]);\n    weight_offset[1] = 0;\n    top_offset[0] = i;\n    top_offset[4] = 0;\n    for (int j = 0; j < kNumOutput; ++j) {\n      EXPECT_EQ(layer->blobs()[0]->data_at(weight_offset),\n                this->blob_top_->data_at(top_offset));\n      ++top_offset[4];\n      ++weight_offset[1];\n    }\n  }\n}\n\nTYPED_TEST(EmbedLayerTest, TestForwardWithBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EmbedParameter* embed_param = layer_param.mutable_embed_param();\n  const int kNumOutput = 10;\n  const int kInputDim = 5;\n  embed_param->set_num_output(kNumOutput);\n  embed_param->set_input_dim(kInputDim);\n  embed_param->mutable_weight_filler()->set_type(\"uniform\");\n  embed_param->mutable_weight_filler()->set_min(-10);\n  embed_param->mutable_weight_filler()->set_max(10);\n  embed_param->mutable_bias_filler()->CopyFrom(embed_param->weight_filler());\n  embed_param->set_bias_term(true);\n  
shared_ptr<EmbedLayer<Dtype> > layer(new EmbedLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(2, layer->blobs().size());\n  vector<int> weight_shape(2);\n  weight_shape[0] = kInputDim;\n  weight_shape[1] = kNumOutput;\n  ASSERT_TRUE(weight_shape == layer->blobs()[0]->shape());\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    this->blob_bottom_->mutable_cpu_data()[i] = caffe_rng_rand() % kInputDim;\n  }\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  vector<int> bias_offset(1, 0);\n  vector<int> weight_offset(2, 0);\n  vector<int> top_offset(5, 0);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    weight_offset[0] = static_cast<int>(this->blob_bottom_->cpu_data()[i]);\n    weight_offset[1] = 0;\n    top_offset[0] = i;\n    top_offset[4] = 0;\n    bias_offset[0] = 0;\n    for (int j = 0; j < kNumOutput; ++j) {\n      EXPECT_EQ(layer->blobs()[0]->data_at(weight_offset) +\n                layer->blobs()[1]->data_at(bias_offset),\n                this->blob_top_->data_at(top_offset));\n      ++top_offset[4];\n      ++weight_offset[1];\n      ++bias_offset[0];\n    }\n  }\n}\n\nTYPED_TEST(EmbedLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EmbedParameter* embed_param = layer_param.mutable_embed_param();\n  embed_param->set_num_output(10);\n  embed_param->set_input_dim(5);\n  embed_param->set_bias_term(false);\n  embed_param->mutable_weight_filler()->set_type(\"uniform\");\n  embed_param->mutable_weight_filler()->set_min(-10);\n  embed_param->mutable_weight_filler()->set_max(10);\n  EmbedLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  this->blob_bottom_->mutable_cpu_data()[0] = 4;\n  this->blob_bottom_->mutable_cpu_data()[1] = 2;\n  this->blob_bottom_->mutable_cpu_data()[2] = 2;\n  this->blob_bottom_->mutable_cpu_data()[3] = 3;\n  checker.CheckGradientExhaustive(&layer, 
this->blob_bottom_vec_,\n      this->blob_top_vec_, -2);\n}\n\nTYPED_TEST(EmbedLayerTest, TestGradientWithBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  EmbedParameter* embed_param = layer_param.mutable_embed_param();\n  embed_param->set_num_output(10);\n  embed_param->set_input_dim(5);\n  embed_param->set_bias_term(true);\n  embed_param->mutable_weight_filler()->set_type(\"uniform\");\n  embed_param->mutable_weight_filler()->set_min(-10);\n  embed_param->mutable_weight_filler()->set_max(10);\n  embed_param->mutable_bias_filler()->CopyFrom(embed_param->weight_filler());\n  EmbedLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  this->blob_bottom_->mutable_cpu_data()[0] = 4;\n  this->blob_bottom_->mutable_cpu_data()[1] = 2;\n  this->blob_bottom_->mutable_cpu_data()[2] = 2;\n  this->blob_bottom_->mutable_cpu_data()[3] = 3;\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, -2);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_euclidean_loss_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/euclidean_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass EuclideanLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  EuclideanLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    filler.Fill(this->blob_bottom_label_);\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~EuclideanLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_top_loss_;\n  }\n\n  void TestForward() {\n    // Get the loss without a specified objective weight -- should be\n    // equivalent to explicitly specifiying a weight of 1.\n    LayerParameter layer_param;\n    EuclideanLossLayer<Dtype> layer_weight_1(layer_param);\n    layer_weight_1.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype loss_weight_1 =\n        layer_weight_1.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n    // Get the loss again with a different objective weight; check that it is\n    // scaled appropriately.\n    const Dtype kLossWeight = 3.7;\n    layer_param.add_loss_weight(kLossWeight);\n    EuclideanLossLayer<Dtype> layer_weight_2(layer_param);\n    layer_weight_2.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype loss_weight_2 =\n        
layer_weight_2.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype kErrorMargin = 1e-5;\n    EXPECT_NEAR(loss_weight_1 * kLossWeight, loss_weight_2, kErrorMargin);\n    // Make sure the loss is non-trivial.\n    const Dtype kNonTrivialAbsThresh = 1e-1;\n    EXPECT_GE(fabs(loss_weight_1), kNonTrivialAbsThresh);\n  }\n\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(EuclideanLossLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(EuclideanLossLayerTest, TestForward) {\n  this->TestForward();\n}\n\nTYPED_TEST(EuclideanLossLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const Dtype kLossWeight = 3.7;\n  layer_param.add_loss_weight(kLossWeight);\n  EuclideanLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_filler.cpp",
    "content": "#include \"gtest/gtest.h\"\n\n#include \"caffe/filler.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass ConstantFillerTest : public ::testing::Test {\n protected:\n  ConstantFillerTest()\n      : blob_(new Blob<Dtype>(2, 3, 4, 5)),\n        filler_param_() {\n    filler_param_.set_value(10.);\n    filler_.reset(new ConstantFiller<Dtype>(filler_param_));\n    filler_->Fill(blob_);\n  }\n  virtual ~ConstantFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<ConstantFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(ConstantFillerTest, TestDtypes);\n\nTYPED_TEST(ConstantFillerTest, TestFill) {\n  EXPECT_TRUE(this->blob_);\n  const int count = this->blob_->count();\n  const TypeParam* data = this->blob_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_GE(data[i], this->filler_param_.value());\n  }\n}\n\n\ntemplate <typename Dtype>\nclass UniformFillerTest : public ::testing::Test {\n protected:\n  UniformFillerTest()\n      : blob_(new Blob<Dtype>(2, 3, 4, 5)),\n        filler_param_() {\n    filler_param_.set_min(1.);\n    filler_param_.set_max(2.);\n    filler_.reset(new UniformFiller<Dtype>(filler_param_));\n    filler_->Fill(blob_);\n  }\n  virtual ~UniformFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<UniformFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(UniformFillerTest, TestDtypes);\n\nTYPED_TEST(UniformFillerTest, TestFill) {\n  EXPECT_TRUE(this->blob_);\n  const int count = this->blob_->count();\n  const TypeParam* data = this->blob_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_GE(data[i], this->filler_param_.min());\n    EXPECT_LE(data[i], this->filler_param_.max());\n  }\n}\n\ntemplate <typename Dtype>\nclass PositiveUnitballFillerTest : public ::testing::Test {\n protected:\n  PositiveUnitballFillerTest()\n      : blob_(new Blob<Dtype>(2, 
3, 4, 5)),\n        filler_param_() {\n    filler_.reset(new PositiveUnitballFiller<Dtype>(filler_param_));\n    filler_->Fill(blob_);\n  }\n  virtual ~PositiveUnitballFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<PositiveUnitballFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(PositiveUnitballFillerTest, TestDtypes);\n\nTYPED_TEST(PositiveUnitballFillerTest, TestFill) {\n  EXPECT_TRUE(this->blob_);\n  const int num = this->blob_->num();\n  const int count = this->blob_->count();\n  const int dim = count / num;\n  const TypeParam* data = this->blob_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_GE(data[i], 0);\n    EXPECT_LE(data[i], 1);\n  }\n  for (int i = 0; i < num; ++i) {\n    TypeParam sum = 0;\n    for (int j = 0; j < dim; ++j) {\n      sum += data[i * dim + j];\n    }\n    EXPECT_GE(sum, 0.999);\n    EXPECT_LE(sum, 1.001);\n  }\n}\n\ntemplate <typename Dtype>\nclass GaussianFillerTest : public ::testing::Test {\n protected:\n  GaussianFillerTest()\n      : blob_(new Blob<Dtype>(2, 3, 4, 5)),\n        filler_param_() {\n    filler_param_.set_mean(10.);\n    filler_param_.set_std(0.1);\n    filler_.reset(new GaussianFiller<Dtype>(filler_param_));\n    filler_->Fill(blob_);\n  }\n  virtual ~GaussianFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<GaussianFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(GaussianFillerTest, TestDtypes);\n\nTYPED_TEST(GaussianFillerTest, TestFill) {\n  EXPECT_TRUE(this->blob_);\n  const int count = this->blob_->count();\n  const TypeParam* data = this->blob_->cpu_data();\n  TypeParam mean = 0.;\n  TypeParam var = 0.;\n  for (int i = 0; i < count; ++i) {\n    mean += data[i];\n    var += (data[i] - this->filler_param_.mean()) *\n        (data[i] - this->filler_param_.mean());\n  }\n  mean /= count;\n  var /= count;\n  // Very loose test.\n  EXPECT_GE(mean, this->filler_param_.mean() - 
this->filler_param_.std() * 5);\n  EXPECT_LE(mean, this->filler_param_.mean() + this->filler_param_.std() * 5);\n  TypeParam target_var = this->filler_param_.std() * this->filler_param_.std();\n  EXPECT_GE(var, target_var / 5.);\n  EXPECT_LE(var, target_var * 5.);\n}\n\ntemplate <typename Dtype>\nclass XavierFillerTest : public ::testing::Test {\n protected:\n  XavierFillerTest()\n      : blob_(new Blob<Dtype>(1000, 2, 4, 5)),\n        filler_param_() {\n  }\n  virtual void test_params(FillerParameter_VarianceNorm variance_norm,\n      Dtype n) {\n    this->filler_param_.set_variance_norm(variance_norm);\n    this->filler_.reset(new XavierFiller<Dtype>(this->filler_param_));\n    this->filler_->Fill(blob_);\n    EXPECT_TRUE(this->blob_);\n    const int count = this->blob_->count();\n    const Dtype* data = this->blob_->cpu_data();\n    Dtype mean = 0.;\n    Dtype ex2 = 0.;\n    for (int i = 0; i < count; ++i) {\n      mean += data[i];\n      ex2 += data[i] * data[i];\n    }\n    mean /= count;\n    ex2 /= count;\n    Dtype std = sqrt(ex2 - mean*mean);\n    Dtype target_std = sqrt(2.0 / n);\n    EXPECT_NEAR(mean, 0.0, 0.1);\n    EXPECT_NEAR(std, target_std, 0.1);\n  }\n  virtual ~XavierFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<XavierFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(XavierFillerTest, TestDtypes);\n\nTYPED_TEST(XavierFillerTest, TestFillFanIn) {\n  TypeParam n = 2*4*5;\n  this->test_params(FillerParameter_VarianceNorm_FAN_IN, n);\n}\nTYPED_TEST(XavierFillerTest, TestFillFanOut) {\n  TypeParam n = 1000*4*5;\n  this->test_params(FillerParameter_VarianceNorm_FAN_OUT, n);\n}\nTYPED_TEST(XavierFillerTest, TestFillAverage) {\n  TypeParam n = (2*4*5 + 1000*4*5) / 2.0;\n  this->test_params(FillerParameter_VarianceNorm_AVERAGE, n);\n}\n\ntemplate <typename Dtype>\nclass MSRAFillerTest : public ::testing::Test {\n protected:\n  MSRAFillerTest()\n      : blob_(new Blob<Dtype>(1000, 2, 4, 5)),\n      
  filler_param_() {\n  }\n  virtual void test_params(FillerParameter_VarianceNorm variance_norm,\n      Dtype n) {\n    this->filler_param_.set_variance_norm(variance_norm);\n    this->filler_.reset(new MSRAFiller<Dtype>(this->filler_param_));\n    this->filler_->Fill(blob_);\n    EXPECT_TRUE(this->blob_);\n    const int count = this->blob_->count();\n    const Dtype* data = this->blob_->cpu_data();\n    Dtype mean = 0.;\n    Dtype ex2 = 0.;\n    for (int i = 0; i < count; ++i) {\n      mean += data[i];\n      ex2 += data[i] * data[i];\n    }\n    mean /= count;\n    ex2 /= count;\n    Dtype std = sqrt(ex2 - mean*mean);\n    Dtype target_std = sqrt(2.0 / n);\n    EXPECT_NEAR(mean, 0.0, 0.1);\n    EXPECT_NEAR(std, target_std, 0.1);\n  }\n  virtual ~MSRAFillerTest() { delete blob_; }\n  Blob<Dtype>* const blob_;\n  FillerParameter filler_param_;\n  shared_ptr<MSRAFiller<Dtype> > filler_;\n};\n\nTYPED_TEST_CASE(MSRAFillerTest, TestDtypes);\n\nTYPED_TEST(MSRAFillerTest, TestFillFanIn) {\n  TypeParam n = 2*4*5;\n  this->test_params(FillerParameter_VarianceNorm_FAN_IN, n);\n}\nTYPED_TEST(MSRAFillerTest, TestFillFanOut) {\n  TypeParam n = 1000*4*5;\n  this->test_params(FillerParameter_VarianceNorm_FAN_OUT, n);\n}\nTYPED_TEST(MSRAFillerTest, TestFillAverage) {\n  TypeParam n = (2*4*5 + 1000*4*5) / 2.0;\n  this->test_params(FillerParameter_VarianceNorm_AVERAGE, n);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_filter_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/filter_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass FilterLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  FilterLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(4, 3, 6, 4)),\n        blob_bottom_labels_(new Blob<Dtype>(4, 1, 1, 1)),\n        blob_bottom_selector_(new Blob<Dtype>(4, 1, 1, 1)),\n        blob_top_data_(new Blob<Dtype>()),\n        blob_top_labels_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    Caffe::set_random_seed(1890);\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    // fill the selector blob\n    Dtype* bottom_data_selector_ = blob_bottom_selector_->mutable_cpu_data();\n    bottom_data_selector_[0] = 0;\n    bottom_data_selector_[1] = 1;\n    bottom_data_selector_[2] = 1;\n    bottom_data_selector_[3] = 0;\n    // fill the other bottom blobs\n    filler.Fill(blob_bottom_data_);\n    for (int i = 0; i < blob_bottom_labels_->count(); ++i) {\n      blob_bottom_labels_->mutable_cpu_data()[i] = caffe_rng_rand() % 5;\n    }\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_labels_);\n    blob_bottom_vec_.push_back(blob_bottom_selector_);\n    blob_top_vec_.push_back(blob_top_data_);\n    blob_top_vec_.push_back(blob_top_labels_);\n  }\n  virtual ~FilterLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_labels_;\n    delete blob_bottom_selector_;\n    delete blob_top_data_;\n    delete blob_top_labels_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_labels_;\n  Blob<Dtype>* const blob_bottom_selector_;\n  // blobs 
for the top of FilterLayer\n  Blob<Dtype>* const blob_top_data_;\n  Blob<Dtype>* const blob_top_labels_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(FilterLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(FilterLayerTest, TestReshape) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FilterLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Reshape(this->blob_bottom_vec_, this->blob_top_vec_);\n  // In the test, the first and last items should have been filtered out,\n  // so we expect just 2 remaining items\n  EXPECT_EQ(this->blob_top_data_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_labels_->shape(0), 2);\n  EXPECT_GT(this->blob_bottom_data_->shape(0),\n      this->blob_top_data_->shape(0));\n  EXPECT_GT(this->blob_bottom_labels_->shape(0),\n      this->blob_top_labels_->shape(0));\n  for (int i = 1; i < this->blob_bottom_labels_->num_axes(); i++) {\n    EXPECT_EQ(this->blob_bottom_labels_->shape(i),\n        this->blob_top_labels_->shape(i));\n  }\n}\n\nTYPED_TEST(FilterLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FilterLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Reshape(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_labels_->data_at(0, 0, 0, 0),\n      this->blob_bottom_labels_->data_at(1, 0, 0, 0));\n  EXPECT_EQ(this->blob_top_labels_->data_at(1, 0, 0, 0),\n      this->blob_bottom_labels_->data_at(2, 0, 0, 0));\n\n  int dim = this->blob_top_data_->count() /\n      this->blob_top_data_->shape(0);\n  const Dtype* top_data = this->blob_top_data_->cpu_data();\n  const Dtype* bottom_data = this->blob_bottom_data_->cpu_data();\n  // selector is 0 1 1 0, so we need to compare bottom(1,c,h,w)\n  // with top(0,c,h,w) and bottom(2,c,h,w) 
with top(1,c,h,w)\n  bottom_data += dim;  // bottom(1,c,h,w)\n  for (size_t n = 0; n < dim; n++)\n    EXPECT_EQ(top_data[n], bottom_data[n]);\n\n  bottom_data += dim;  // bottom(2,c,h,w)\n  top_data += dim;  // top(1,c,h,w)\n  for (size_t n = 0; n < dim; n++)\n    EXPECT_EQ(top_data[n], bottom_data[n]);\n}\n\nTYPED_TEST(FilterLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FilterLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  // check only input 0 (data) because labels and selector\n  // don't need backpropagation\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_flatten_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/flatten_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass FlattenLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  FlattenLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~FlattenLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(FlattenLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(FlattenLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FlattenLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 2);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3 * 6 * 5);\n}\n\nTYPED_TEST(FlattenLayerTest, TestSetupWithAxis) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_flatten_param()->set_axis(2);\n  FlattenLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 3);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3);\n  EXPECT_EQ(this->blob_top_->shape(2), 6 * 
5);\n}\n\nTYPED_TEST(FlattenLayerTest, TestSetupWithEndAxis) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_flatten_param()->set_end_axis(-2);\n  FlattenLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 3);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3 * 6);\n  EXPECT_EQ(this->blob_top_->shape(2), 5);\n}\n\nTYPED_TEST(FlattenLayerTest, TestSetupWithStartAndEndAxis) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_flatten_param()->set_axis(0);\n  layer_param.mutable_flatten_param()->set_end_axis(-2);\n  FlattenLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 2);\n  EXPECT_EQ(this->blob_top_->shape(0), 2 * 3 * 6);\n  EXPECT_EQ(this->blob_top_->shape(1), 5);\n}\n\nTYPED_TEST(FlattenLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FlattenLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int c = 0; c < 3 * 6 * 5; ++c) {\n    EXPECT_EQ(this->blob_top_->data_at(0, c, 0, 0),\n        this->blob_bottom_->data_at(0, c / (6 * 5), (c / 5) % 6, c % 5));\n    EXPECT_EQ(this->blob_top_->data_at(1, c, 0, 0),\n        this->blob_bottom_->data_at(1, c / (6 * 5), (c / 5) % 6, c % 5));\n  }\n}\n\nTYPED_TEST(FlattenLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  FlattenLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_gradient_based_solver.cpp",
    "content": "#include <algorithm>\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"google/protobuf/text_format.h\"\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/parallel.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/sgd_solvers.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nusing std::ostringstream;\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass GradientBasedSolverTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  GradientBasedSolverTest() :\n      seed_(1701), num_(4), channels_(3), height_(10), width_(10),\n      share_(false) {\n        input_file_ = new string(\n        CMAKE_SOURCE_DIR \"caffe/test/test_data/solver_data_list.txt\" CMAKE_EXT);\n      }\n  ~GradientBasedSolverTest() {\n    delete input_file_;\n  }\n\n  string snapshot_prefix_;\n  shared_ptr<SGDSolver<Dtype> > solver_;\n  shared_ptr<P2PSync<Dtype> > sync_;\n  int seed_;\n  // Dimensions are determined by generate_sample_data.py\n  // TODO this is brittle and the hdf5 file should be checked instead.\n  int num_, channels_, height_, width_;\n  bool share_;\n  Dtype delta_;  // Stability constant for RMSProp, AdaGrad, AdaDelta and Adam\n\n  // Test data: check out generate_sample_data.py in the same directory.\n  string* input_file_;\n\n  virtual void InitSolver(const SolverParameter& param) = 0;\n\n  virtual void InitSolverFromProtoString(const string& proto) {\n    SolverParameter param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(proto, &param));\n    // Set the solver_mode according to current Caffe::mode.\n    switch (Caffe::mode()) {\n      case Caffe::CPU:\n        param.set_solver_mode(SolverParameter_SolverMode_CPU);\n        break;\n      case Caffe::GPU:\n        param.set_solver_mode(SolverParameter_SolverMode_GPU);\n        break;\n      default:\n        LOG(FATAL) << \"Unknown Caffe mode: 
\" << Caffe::mode();\n    }\n    InitSolver(param);\n    delta_ = param.delta();\n  }\n\n  string RunLeastSquaresSolver(const Dtype learning_rate,\n      const Dtype weight_decay, const Dtype momentum, const int num_iters,\n      const int iter_size = 1, const int devices = 1,\n      const bool snapshot = false, const char* from_snapshot = NULL) {\n    ostringstream proto;\n    int device_id = 0;\n#ifndef CPU_ONLY\n    if (Caffe::mode() == Caffe::GPU) {\n      CUDA_CHECK(cudaGetDevice(&device_id));\n    }\n#endif\n    proto <<\n       \"snapshot_after_train: \" << snapshot << \" \"\n       \"max_iter: \" << num_iters << \" \"\n       \"base_lr: \" << learning_rate << \" \"\n       \"lr_policy: 'fixed' \"\n       \"iter_size: \" << iter_size << \" \"\n       \"device_id: \" << device_id << \" \"\n       \"net_param { \"\n       \"  name: 'TestNetwork' \"\n       \"  layer { \"\n       \"    name: 'data' \"\n       \"    type: 'HDF5Data' \"\n       \"    hdf5_data_param { \"\n       \"      source: '\" << *(this->input_file_) << \"' \"\n       \"      batch_size: \" << num_ / iter_size << \" \"\n       \"    } \"\n       \"    top: 'data' \"\n       \"    top: 'targets' \"\n       \"  } \";\n    if (share_) {\n      proto <<\n         \"  layer { \"\n         \"    name: 'slice' \"\n         \"    type: 'Slice' \"\n         \"    bottom: 'data' \"\n         \"    top: 'data1' \"\n         \"    top: 'data2' \"\n         \"    slice_param { \"\n         \"      axis: 0 \"\n         \"    } \"\n         \"  } \";\n    }\n    proto <<\n       \"  layer { \"\n       \"    name: 'innerprod' \"\n       \"    type: 'InnerProduct' \"\n       \"    param { name: 'weights' } \"\n       \"    param { name: 'bias' } \"\n       \"    inner_product_param { \"\n       \"      num_output: 1 \"\n       \"      weight_filler { \"\n       \"        type: 'gaussian' \"\n       \"        std: 1.0 \"\n       \"      } \"\n       \"      bias_filler { \"\n       \"        type: 'gaussian' 
\"\n       \"        std: 1.0 \"\n       \"      } \"\n       \"    } \"\n       \"    bottom: '\" << string(share_ ? \"data1\": \"data\") << \"' \"\n       \"    top: '\" << string(share_ ? \"innerprod1\": \"innerprod\") << \"' \"\n       \"  } \";\n    if (share_) {\n      proto <<\n         \"  layer { \"\n         \"    name: 'innerprod2' \"\n         \"    type: 'InnerProduct' \"\n         \"    param { name: 'weights' } \"\n         \"    param { name: 'bias' } \"\n         \"    inner_product_param { \"\n         \"      num_output: 1 \"\n         \"      weight_filler { \"\n         \"        type: 'gaussian' \"\n         \"        std: 1.0 \"\n         \"      } \"\n         \"      bias_filler { \"\n         \"        type: 'gaussian' \"\n         \"        std: 1.0 \"\n         \"      } \"\n         \"    } \"\n         \"    bottom: 'data2' \"\n         \"    top: 'innerprod2' \"\n         \"  } \"\n         \"  layer { \"\n         \"    name: 'concat' \"\n         \"    type: 'Concat' \"\n         \"    bottom: 'innerprod1' \"\n         \"    bottom: 'innerprod2' \"\n         \"    top: 'innerprod' \"\n         \"    concat_param { \"\n         \"      axis: 0 \"\n         \"    } \"\n         \"  } \";\n    }\n    proto <<\n       \"  layer { \"\n       \"    name: 'loss' \"\n       \"    type: 'EuclideanLoss' \"\n       \"    bottom: 'innerprod' \"\n       \"    bottom: 'targets' \"\n       \"  } \"\n       \"} \";\n    if (weight_decay != 0) {\n      proto << \"weight_decay: \" << weight_decay << \" \";\n    }\n    if (momentum != 0) {\n      proto << \"momentum: \" << momentum << \" \";\n    }\n    MakeTempDir(&snapshot_prefix_);\n    proto << \"snapshot_prefix: '\" << snapshot_prefix_ << \"/' \";\n    if (snapshot) {\n      proto << \"snapshot: \" << num_iters << \" \";\n    }\n    Caffe::set_random_seed(this->seed_);\n    this->InitSolverFromProtoString(proto.str());\n    if (from_snapshot != NULL) {\n      
this->solver_->Restore(from_snapshot);\n      vector<Blob<Dtype>*> empty_bottom_vec;\n      for (int i = 0; i < this->solver_->iter(); ++i) {\n        this->solver_->net()->Forward(empty_bottom_vec);\n      }\n    }\n    if (devices == 1) {\n      this->solver_->Solve();\n    } else {\n      LOG(INFO) << \"Multi-GPU test on \" << devices << \" devices\";\n      vector<int> gpus;\n      // put current device at the beginning\n      int device_id = solver_->param().device_id();\n      gpus.push_back(device_id);\n      for (int i = 0; gpus.size() < devices; ++i) {\n        if (i != device_id)\n          gpus.push_back(i);\n      }\n      Caffe::set_solver_count(gpus.size());\n      this->sync_.reset(new P2PSync<Dtype>(\n          this->solver_, NULL, this->solver_->param()));\n      this->sync_->run(gpus);\n      Caffe::set_solver_count(1);\n    }\n    if (snapshot) {\n      ostringstream resume_file;\n      resume_file << snapshot_prefix_ << \"/_iter_\" << num_iters\n                  << \".solverstate\";\n      string resume_filename = resume_file.str();\n      return resume_filename;\n    }\n    return string();\n  }\n\n  // Compute an update value given the current state of the train net,\n  // using the analytical formula for the least squares gradient.\n  // updated_params will store the updated weight and bias results,\n  // using the blobs' diffs to hold the update values themselves.\n  void ComputeLeastSquaresUpdate(const Dtype learning_rate,\n      const Dtype weight_decay, const Dtype momentum, const int num_iters,\n      vector<shared_ptr<Blob<Dtype> > >* updated_params) {\n    const int N = num_;\n    const int D = channels_ * height_ * width_;\n\n    // Run a forward pass, and manually compute the update values from the\n    // result.\n    Net<Dtype>& net = *this->solver_->net();\n    vector<Blob<Dtype>*> empty_bottom_vec;\n    net.Forward(empty_bottom_vec);\n    ASSERT_TRUE(net.has_blob(\"data\"));\n    const Blob<Dtype>& data = 
*net.blob_by_name(\"data\");\n    ASSERT_TRUE(net.has_blob(\"targets\"));\n    const Blob<Dtype>& targets = *net.blob_by_name(\"targets\");\n    ASSERT_TRUE(net.has_layer(\"innerprod\"));\n    const vector<shared_ptr<Blob<Dtype> > >& param_blobs =\n        net.layer_by_name(\"innerprod\")->blobs();\n    const int num_param_blobs = 2;\n    ASSERT_EQ(num_param_blobs, param_blobs.size());\n    const Blob<Dtype>& weights = *param_blobs[0];\n    const Blob<Dtype>& bias = *param_blobs[1];\n    ASSERT_EQ(D * N, data.count());\n    ASSERT_EQ(N, targets.count());\n    ASSERT_EQ(D, weights.count());\n    ASSERT_EQ(1, bias.count());\n\n    updated_params->clear();\n    updated_params->resize(num_param_blobs);\n    for (int i = 0; i < num_param_blobs; ++i) {\n      (*updated_params)[i].reset(new Blob<Dtype>());\n    }\n    Blob<Dtype>& updated_weights = *(*updated_params)[0];\n    updated_weights.ReshapeLike(weights);\n    Blob<Dtype>& updated_bias = *(*updated_params)[1];\n    updated_bias.ReshapeLike(bias);\n\n    for (int i = 0; i <= D; ++i) {\n      // Compute the derivative with respect to the ith weight (i.e., the ith\n      // element of the gradient).\n      Dtype grad = 0;\n      for (int j = 0; j <= D; ++j) {\n        // Compute element (i, j) of X^T * X.\n        Dtype element = 0;\n        for (int k = 0; k < N; ++k) {\n          // (i, k) in X^T (== (k, i) in X) times (k, j) in X.\n          const Dtype element_i = (i == D) ? 1 : data.cpu_data()[k * D + i];\n          const Dtype element_j = (j == D) ? 1 : data.cpu_data()[k * D + j];\n          element += element_i * element_j;\n        }\n        if (j == D) {\n          grad += element * bias.cpu_data()[0];\n        } else {\n          grad += element * weights.cpu_data()[j];\n        }\n      }\n      for (int k = 0; k < N; ++k) {\n        const Dtype element_i = (i == D) ? 
1 : data.cpu_data()[k * D + i];\n        grad -= element_i * targets.cpu_data()[k];\n      }\n      // Scale the gradient over the N samples.\n      grad /= N;\n      // Add the weight decay to the gradient.\n      grad += weight_decay *\n          ((i == D) ? bias.cpu_data()[0] : weights.cpu_data()[i]);\n      // Finally, compute update.\n      const vector<shared_ptr<Blob<Dtype> > >& history = solver_->history();\n      if (solver_->type() != string(\"AdaDelta\")\n          && solver_->type() != string(\"Adam\")) {\n        ASSERT_EQ(2, history.size());  // 1 blob for weights, 1 for bias\n      } else {\n        ASSERT_EQ(4, history.size());  // additional blobs for update history\n      }\n      Dtype update_value = learning_rate * grad;\n      const Dtype history_value = (i == D) ?\n            history[1]->cpu_data()[0] : history[0]->cpu_data()[i];\n      const Dtype temp = momentum * history_value;\n      if (solver_->type() == string(\"SGD\")) {\n        update_value += temp;\n      } else if (solver_->type() == string(\"Nesterov\")) {\n        update_value += temp;\n        // step back then over-step\n        update_value = (1 + momentum) * update_value - temp;\n      } else if (solver_->type() == string(\"AdaGrad\")) {\n        update_value /= std::sqrt(history_value + grad * grad) + delta_;\n      } else if (solver_->type() == string(\"RMSProp\")) {\n        const Dtype rms_decay = 0.95;\n        update_value /= std::sqrt(rms_decay*history_value\n            + grad * grad * (1 - rms_decay)) + delta_;\n      } else if (solver_->type() == string(\"AdaDelta\")) {\n        const Dtype update_history_value = (i == D) ?\n            history[1 + num_param_blobs]->cpu_data()[0] :\n            history[0 + num_param_blobs]->cpu_data()[i];\n        const Dtype weighted_gradient_average =\n            momentum * history_value + (1 - momentum) * (grad * grad);\n        update_value = grad * std::sqrt((update_history_value + delta_) /\n            
(weighted_gradient_average + delta_)) * learning_rate;\n        // not actually needed, just here for illustrative purposes\n        // const Dtype weighted_update_average =\n        //   momentum * update_history_value + (1 - momentum) * (update_value);\n      } else if (solver_->type() == string(\"Adam\")) {\n        const Dtype momentum2 = 0.999;\n        const Dtype m = history_value;\n        const Dtype v = (i == D) ?\n            history[1 + num_param_blobs]->cpu_data()[0] :\n            history[0 + num_param_blobs]->cpu_data()[i];\n        const Dtype val_m = (1 - momentum) * grad + momentum * m;\n        const Dtype val_v = (1 - momentum2) * grad * grad + momentum2 * v;\n        Dtype alpha_t = learning_rate *\n            std::sqrt(Dtype(1) - pow(momentum2, num_iters)) /\n            (Dtype(1.) - pow(momentum, num_iters));\n        update_value = alpha_t * val_m / (std::sqrt(val_v) + delta_);\n      } else {\n        LOG(FATAL) << \"Unknown solver type: \" << solver_->type();\n      }\n      if (i == D) {\n        updated_bias.mutable_cpu_diff()[0] = update_value;\n        updated_bias.mutable_cpu_data()[0] = bias.cpu_data()[0] - update_value;\n      } else {\n        updated_weights.mutable_cpu_diff()[i] = update_value;\n        updated_weights.mutable_cpu_data()[i] =\n            weights.cpu_data()[i] - update_value;\n      }\n    }\n  }\n\n  void CheckLeastSquaresUpdate(\n      const vector<shared_ptr<Blob<Dtype> > >& updated_params) {\n    const int D = channels_ * height_ * width_;\n\n    const Blob<Dtype>& updated_weights = *updated_params[0];\n    const Blob<Dtype>& updated_bias = *updated_params[1];\n\n    Net<Dtype>& net = *this->solver_->net();\n    ASSERT_TRUE(net.has_layer(\"innerprod\"));\n    const vector<shared_ptr<Blob<Dtype> > >& param_blobs =\n        net.layer_by_name(\"innerprod\")->blobs();\n    ASSERT_EQ(2, param_blobs.size());\n    const Blob<Dtype>& solver_updated_weights = *param_blobs[0];\n    ASSERT_EQ(D, 
solver_updated_weights.count());\n    const double kPrecision = 1e-2;\n    const double kMinPrecision = 1e-7;\n    for (int i = 0; i < D; ++i) {\n      const Dtype expected_updated_weight = updated_weights.cpu_data()[i];\n      const Dtype solver_updated_weight = solver_updated_weights.cpu_data()[i];\n      const Dtype error_margin = std::max(kMinPrecision, kPrecision *\n          std::min(fabs(expected_updated_weight), fabs(solver_updated_weight)));\n      EXPECT_NEAR(expected_updated_weight, solver_updated_weight, error_margin);\n    }\n    const Blob<Dtype>& solver_updated_bias_blob = *param_blobs[1];\n    ASSERT_EQ(1, solver_updated_bias_blob.count());\n    const Dtype expected_updated_bias = updated_bias.cpu_data()[0];\n    const Dtype solver_updated_bias = solver_updated_bias_blob.cpu_data()[0];\n    const Dtype error_margin = std::max(kMinPrecision, kPrecision *\n          std::min(fabs(expected_updated_bias), fabs(solver_updated_bias)));\n    EXPECT_NEAR(expected_updated_bias, solver_updated_bias, error_margin);\n\n    // Check the solver's history -- should contain the previous update value.\n    if (solver_->type() == string(\"SGD\")) {\n      const vector<shared_ptr<Blob<Dtype> > >& history = solver_->history();\n      ASSERT_EQ(2, history.size());\n      for (int i = 0; i < D; ++i) {\n        const Dtype expected_history = updated_weights.cpu_diff()[i];\n        const Dtype solver_history = history[0]->cpu_data()[i];\n        const Dtype error_margin_hist = std::max(kMinPrecision, kPrecision *\n            std::min(fabs(expected_history), fabs(solver_history)));\n        EXPECT_NEAR(expected_history, solver_history, error_margin_hist);\n      }\n      const Dtype expected_history = updated_bias.cpu_diff()[0];\n      const Dtype solver_history = history[1]->cpu_data()[0];\n      const Dtype error_margin_hist = std::max(kMinPrecision, kPrecision *\n          std::min(fabs(expected_history), fabs(solver_history)));\n      EXPECT_NEAR(expected_history, 
solver_history, error_margin_hist);\n    }\n  }\n\n  void CheckAccumulation(const Dtype kLearningRate, const Dtype kWeightDecay,\n      const Dtype kMomentum, const int kNumIters, const int kIterSize) {\n    const double kPrecision = 1e-2;\n    const double kMinPrecision = 1e-7;\n    // Solve without accumulation and save parameters.\n    this->RunLeastSquaresSolver(kLearningRate, kWeightDecay, kMomentum,\n        kNumIters);\n    // Save parameters for comparison.\n    Net<Dtype>& net = *this->solver_->net();\n    const vector<shared_ptr<Blob<Dtype> > >& param_blobs =\n        net.layer_by_name(\"innerprod\")->blobs();\n    vector<shared_ptr<Blob<Dtype> > > noaccum_params(param_blobs.size());\n    for (int i = 0; i < param_blobs.size(); ++i) {\n      noaccum_params[i].reset(new Blob<Dtype>());\n      noaccum_params[i]->CopyFrom(*param_blobs[i], false, true);\n    }\n    // Solve by equivalent accumulation of gradients over divided batches.\n    this->RunLeastSquaresSolver(kLearningRate, kWeightDecay, kMomentum,\n        kNumIters, kIterSize);\n    Net<Dtype>& net_accum = *this->solver_->net();\n    const vector<shared_ptr<Blob<Dtype> > >& accum_params =\n        net_accum.layer_by_name(\"innerprod\")->blobs();\n    // Compare accumulated parameters against no accumulation standard.\n    const int D = this->channels_ * this->height_ * this->width_;\n    for (int i = 0; i < D; ++i) {\n      const Dtype expected_param = noaccum_params[0]->cpu_data()[i];\n      const Dtype accum_param = accum_params[0]->cpu_data()[i];\n      const Dtype error_margin = std::max(kMinPrecision, kPrecision *\n          std::min(fabs(expected_param), fabs(accum_param)));\n      EXPECT_NEAR(expected_param, accum_param, error_margin);\n    }\n    ASSERT_EQ(1, accum_params[1]->count());\n    const Dtype expected_bias = noaccum_params[1]->cpu_data()[0];\n    const Dtype accum_bias = accum_params[1]->cpu_data()[0];\n    const Dtype error_margin = std::max(kMinPrecision, kPrecision *\n        
std::min(fabs(expected_bias), fabs(accum_bias)));\n    EXPECT_NEAR(expected_bias, accum_bias, error_margin);\n  }\n\n  // Test that the correct update is computed for a regularized least squares\n  // problem:\n  //\n  //            E = (1/(2n)) || X w - y ||^2 + (lambda / 2) || w ||^2\n  //   \\nabla_w E = (1/n) (X^T X w - X^T y) + lambda * w\n  //\n  // X \\in R^{n x (d+1)} (each example is a row, (d+1)th element is always 1)\n  // w \\in R^{(d+1) x 1} ((d+1)th element is the bias)\n  // y \\in R^{n x 1}\n  // lambda is weight_decay\n  //\n  // TestLeastSquaresUpdate works \"inductively\", assuming that the solver\n  // correctly updates the net K (= iter_to_check) times, then given the history\n  // from the Kth update, we compute the (K+1)th update and check that it\n  // matches the solver's (K+1)th update.\n  void TestLeastSquaresUpdate(const Dtype learning_rate = 1.0,\n      const Dtype weight_decay = 0.0, const Dtype momentum = 0.0,\n      const int iter_to_check = 0) {\n    const int kNum = num_;\n    const int kIterSize = 1;\n    // Test over all numbers of devices.\n    int available_devices = 1;\n#ifndef CPU_ONLY\n    if (Caffe::mode() == Caffe::GPU) {\n      CUDA_CHECK(cudaGetDeviceCount(&available_devices));\n    }\n#endif\n    for (int devices = 1; devices <= available_devices; ++devices) {\n      // Configure batch size for single / multi device equivalence.\n      // Constant data is needed for multi device as for accumulation.\n      num_ = kNum * devices;\n\n      // Initialize the solver and run K (= iter_to_check) solver iterations\n      // (on single device).\n      RunLeastSquaresSolver(learning_rate, weight_decay, momentum,\n                            iter_to_check, kIterSize, 1);\n\n      // Compute the (K+1)th update using the analytic least squares gradient.\n      vector<shared_ptr<Blob<Dtype> > > updated_params;\n      ComputeLeastSquaresUpdate(learning_rate, weight_decay, momentum,\n          iter_to_check + 1, &updated_params);\n\n  
    // Reinitialize the solver and run K+1 solver iterations.\n      num_ = kNum;\n      RunLeastSquaresSolver(learning_rate, weight_decay, momentum,\n          iter_to_check + 1, kIterSize, devices);\n\n      // Check that the solver's solution matches ours.\n      CheckLeastSquaresUpdate(updated_params);\n    }\n  }\n\n  void TestSnapshot(const Dtype learning_rate = 1.0,\n      const Dtype weight_decay = 0.0, const Dtype momentum = 0.0,\n      const int num_iters = 1) {\n    // Run the solver for num_iters * 2 iterations.\n    const int total_num_iters = num_iters * 2;\n    bool snapshot = false;\n    const int kIterSize = 1;\n    const int kDevices = 1;\n    RunLeastSquaresSolver(learning_rate, weight_decay, momentum,\n        total_num_iters, kIterSize, kDevices, snapshot);\n\n    // Save the resulting param values.\n    vector<shared_ptr<Blob<Dtype> > > param_copies;\n    const vector<Blob<Dtype>*>& orig_params =\n        solver_->net()->learnable_params();\n    param_copies.resize(orig_params.size());\n    for (int i = 0; i < orig_params.size(); ++i) {\n      param_copies[i].reset(new Blob<Dtype>());\n      const bool kReshape = true;\n      for (int copy_diff = false; copy_diff <= true; ++copy_diff) {\n        param_copies[i]->CopyFrom(*orig_params[i], copy_diff, kReshape);\n      }\n    }\n\n    // Save the solver history\n    vector<shared_ptr<Blob<Dtype> > > history_copies;\n    const vector<shared_ptr<Blob<Dtype> > >& orig_history = solver_->history();\n    history_copies.resize(orig_history.size());\n    for (int i = 0; i < orig_history.size(); ++i) {\n      history_copies[i].reset(new Blob<Dtype>());\n      const bool kReshape = true;\n      for (int copy_diff = false; copy_diff <= true; ++copy_diff) {\n        history_copies[i]->CopyFrom(*orig_history[i], copy_diff, kReshape);\n      }\n    }\n\n    // Run the solver for num_iters iterations and snapshot.\n    snapshot = true;\n    string snapshot_name = RunLeastSquaresSolver(learning_rate, 
weight_decay,\n        momentum, num_iters, kIterSize, kDevices, snapshot);\n\n    // Reinitialize the solver and run for num_iters more iterations.\n    snapshot = false;\n    RunLeastSquaresSolver(learning_rate, weight_decay, momentum,\n        total_num_iters, kIterSize, kDevices,\n        snapshot, snapshot_name.c_str());\n\n    // Check that params now match.\n    const vector<Blob<Dtype>*>& params = solver_->net()->learnable_params();\n    for (int i = 0; i < params.size(); ++i) {\n      for (int j = 0; j < params[i]->count(); ++j) {\n        EXPECT_EQ(param_copies[i]->cpu_data()[j], params[i]->cpu_data()[j])\n            << \"param \" << i << \" data differed at dim \" << j;\n        EXPECT_EQ(param_copies[i]->cpu_diff()[j], params[i]->cpu_diff()[j])\n            << \"param \" << i << \" diff differed at dim \" << j;\n      }\n    }\n\n    // Check that history now matches.\n    const vector<shared_ptr<Blob<Dtype> > >& history = solver_->history();\n    for (int i = 0; i < history.size(); ++i) {\n      for (int j = 0; j < history[i]->count(); ++j) {\n        EXPECT_EQ(history_copies[i]->cpu_data()[j], history[i]->cpu_data()[j])\n            << \"history blob \" << i << \" data differed at dim \" << j;\n        EXPECT_EQ(history_copies[i]->cpu_diff()[j], history[i]->cpu_diff()[j])\n            << \"history blob \" << i << \" diff differed at dim \" << j;\n      }\n    }\n  }\n};\n\n\ntemplate <typename TypeParam>\nclass SGDSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolver(const SolverParameter& param) {\n    this->solver_.reset(new SGDSolver<Dtype>(param));\n  }\n};\n\nTYPED_TEST_CASE(SGDSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdate) {\n  this->TestLeastSquaresUpdate();\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateLROneHundredth) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  
this->TestLeastSquaresUpdate(kLearningRate);\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 1;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithWeightDecayMultiIter) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithMomentum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 1;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithMomentumMultiIter) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithEverythingShare) {\n  
typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(SGDSolverTest, TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(SGDSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(SGDSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\n\ntemplate <typename TypeParam>\nclass AdaGradSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void 
InitSolver(const SolverParameter& param) {\n    this->solver_.reset(new AdaGradSolver<Dtype>(param));\n  }\n};\n\nTYPED_TEST_CASE(AdaGradSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(AdaGradSolverTest, TestAdaGradLeastSquaresUpdate) {\n  this->TestLeastSquaresUpdate();\n}\n\nTYPED_TEST(AdaGradSolverTest, TestAdaGradLeastSquaresUpdateLROneHundredth) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  this->TestLeastSquaresUpdate(kLearningRate);\n}\n\nTYPED_TEST(AdaGradSolverTest, TestAdaGradLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay);\n}\n\nTYPED_TEST(AdaGradSolverTest, TestAdaGradLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaGradSolverTest,\n      TestAdaGradLeastSquaresUpdateWithEverythingShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaGradSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdaGradSolverTest, 
TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdaGradSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaGradSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\n\ntemplate <typename TypeParam>\nclass NesterovSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolver(const SolverParameter& param) {\n    this->solver_.reset(new NesterovSolver<Dtype>(param));\n  }\n};\n\nTYPED_TEST_CASE(NesterovSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(NesterovSolverTest, TestNesterovLeastSquaresUpdate) {\n  this->TestLeastSquaresUpdate();\n}\n\nTYPED_TEST(NesterovSolverTest, TestNesterovLeastSquaresUpdateLROneHundredth) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  this->TestLeastSquaresUpdate(kLearningRate);\n}\n\nTYPED_TEST(NesterovSolverTest, TestNesterovLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  
this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay);\n}\n\nTYPED_TEST(NesterovSolverTest,\n           TestNesterovLeastSquaresUpdateWithWeightDecayMultiIter) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest, TestNesterovLeastSquaresUpdateWithMomentum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 1;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest, TestLeastSquaresUpdateWithMomentumMultiIter) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest, TestNesterovLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest,\n           TestNesterovLeastSquaresUpdateWithEverythingShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    
this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(NesterovSolverTest, TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(NesterovSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(NesterovSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\ntemplate <typename TypeParam>\nclass AdaDeltaSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolver(const SolverParameter& param) {\n    this->solver_.reset(new AdaDeltaSolver<Dtype>(param));\n  }\n};\n\nTYPED_TEST_CASE(AdaDeltaSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(AdaDeltaSolverTest, 
TestAdaDeltaLeastSquaresUpdate) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  this->TestLeastSquaresUpdate(kLearningRate);\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestAdaDeltaLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.95;\n  this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum);\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestAdaDeltaLeastSquaresUpdateWithHalfMomentum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.0;\n  const Dtype kMomentum = 0.5;\n  const int kNumIters = 1;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestAdaDeltaLeastSquaresUpdateWithMomentum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.0;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 1;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestLeastSquaresUpdateWithMomentumMultiIter) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.0;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestAdaDeltaLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, 
i);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest,\n           TestAdaDeltaLeastSquaresUpdateWithEverythingShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdaDeltaSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.1;\n  const Dtype kWeightDecay = 0.1;\n  const Dtype kMomentum = 0.95;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\ntemplate <typename TypeParam>\nclass 
AdamSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolver(const SolverParameter& param) {\n    SolverParameter new_param = param;\n    const Dtype momentum = 0.9;\n    new_param.set_momentum(momentum);\n    const Dtype momentum2 = 0.999;\n    new_param.set_momentum2(momentum2);\n    this->solver_.reset(new AdamSolver<Dtype>(new_param));\n  }\n};\n\nTYPED_TEST_CASE(AdamSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(AdamSolverTest, TestAdamLeastSquaresUpdate) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0;\n  const Dtype kMomentum = 0.9;\n  this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum);\n}\n\nTYPED_TEST(AdamSolverTest, TestAdamLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum);\n}\n\nTYPED_TEST(AdamSolverTest, TestAdamLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdamSolverTest, TestAdamLeastSquaresUpdateWithEverythingShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdamSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype 
kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdamSolverTest, TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(AdamSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(AdamSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.9;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\ntemplate <typename TypeParam>\nclass RMSPropSolverTest : public GradientBasedSolverTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolver(const SolverParameter& param) {\n    const Dtype rms_decay = 0.95;\n    SolverParameter new_param = param;\n    new_param.set_rms_decay(rms_decay);\n    this->solver_.reset(new RMSPropSolver<Dtype>(new_param));\n  }\n};\n\nTYPED_TEST_CASE(RMSPropSolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(RMSPropSolverTest, TestRMSPropLeastSquaresUpdateWithWeightDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 1.0;\n  
const Dtype kWeightDecay = 0.5;\n  this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay);\n}\n\nTYPED_TEST(RMSPropSolverTest, TestRMSPropLeastSquaresUpdateWithRmsDecay) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.0;\n  const Dtype kMomentum = 0.0;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(RMSPropSolverTest, TestRMSPropLeastSquaresUpdateWithEverything) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.0;\n  const int kNumIters = 4;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(RMSPropSolverTest,\n      TestRMSPropLeastSquaresUpdateWithEverythingShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.0;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 0; i <= kNumIters; ++i) {\n    this->TestLeastSquaresUpdate(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(RMSPropSolverTest, TestLeastSquaresUpdateWithEverythingAccum) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.0;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(RMSPropSolverTest, TestLeastSquaresUpdateWithEverythingAccumShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0.0;\n  const int kNumIters = 4;\n  const int kIterSize = 2;\n  this->share_ = true;\n  
this->CheckAccumulation(kLearningRate, kWeightDecay, kMomentum, kNumIters,\n      kIterSize);\n}\n\nTYPED_TEST(RMSPropSolverTest, TestSnapshot) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\nTYPED_TEST(RMSPropSolverTest, TestSnapshotShare) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kLearningRate = 0.01;\n  const Dtype kWeightDecay = 0.5;\n  const Dtype kMomentum = 0;\n  const int kNumIters = 4;\n  this->share_ = true;\n  for (int i = 1; i <= kNumIters; ++i) {\n    this->TestSnapshot(kLearningRate, kWeightDecay, kMomentum, i);\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_hdf5_output_layer.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layers/hdf5_output_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/hdf5.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate<typename TypeParam>\nclass HDF5OutputLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  HDF5OutputLayerTest()\n      : input_file_name_(\n        CMAKE_SOURCE_DIR \"caffe/test/test_data/sample_data.h5\"),\n        blob_data_(new Blob<Dtype>()),\n        blob_label_(new Blob<Dtype>()),\n        num_(5),\n        channels_(8),\n        height_(5),\n        width_(5) {\n    MakeTempFilename(&output_file_name_);\n  }\n\n  virtual ~HDF5OutputLayerTest() {\n    delete blob_data_;\n    delete blob_label_;\n  }\n\n  void CheckBlobEqual(const Blob<Dtype>& b1, const Blob<Dtype>& b2);\n\n  string output_file_name_;\n  string input_file_name_;\n  Blob<Dtype>* const blob_data_;\n  Blob<Dtype>* const blob_label_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  int num_;\n  int channels_;\n  int height_;\n  int width_;\n};\n\ntemplate<typename TypeParam>\nvoid HDF5OutputLayerTest<TypeParam>::CheckBlobEqual(const Blob<Dtype>& b1,\n                                                    const Blob<Dtype>& b2) {\n  EXPECT_EQ(b1.num(), b2.num());\n  EXPECT_EQ(b1.channels(), b2.channels());\n  EXPECT_EQ(b1.height(), b2.height());\n  EXPECT_EQ(b1.width(), b2.width());\n  for (int n = 0; n < b1.num(); ++n) {\n    for (int c = 0; c < b1.channels(); ++c) {\n      for (int h = 0; h < b1.height(); ++h) {\n        for (int w = 0; w < b1.width(); ++w) {\n          EXPECT_EQ(b1.data_at(n, c, h, w), b2.data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST_CASE(HDF5OutputLayerTest, 
TestDtypesAndDevices);\n\nTYPED_TEST(HDF5OutputLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LOG(INFO) << \"Loading HDF5 file \" << this->input_file_name_;\n  hid_t file_id = H5Fopen(this->input_file_name_.c_str(), H5F_ACC_RDONLY,\n                          H5P_DEFAULT);\n  ASSERT_GE(file_id, 0) << \"Failed to open HDF5 file \" <<\n      this->input_file_name_;\n  hdf5_load_nd_dataset(file_id, HDF5_DATA_DATASET_NAME, 0, 4,\n                       this->blob_data_);\n  hdf5_load_nd_dataset(file_id, HDF5_DATA_LABEL_NAME, 0, 4,\n                       this->blob_label_);\n  herr_t status = H5Fclose(file_id);\n  EXPECT_GE(status, 0) << \"Failed to close HDF5 file \" <<\n      this->input_file_name_;\n  this->blob_bottom_vec_.push_back(this->blob_data_);\n  this->blob_bottom_vec_.push_back(this->blob_label_);\n\n  LayerParameter param;\n  param.mutable_hdf5_output_param()->set_file_name(this->output_file_name_);\n  // This code block ensures that the layer is destructed and\n  //   the output hdf5 file is closed.\n  {\n    HDF5OutputLayer<Dtype> layer(param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    EXPECT_EQ(layer.file_name(), this->output_file_name_);\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  }\n  file_id = H5Fopen(this->output_file_name_.c_str(), H5F_ACC_RDONLY,\n                          H5P_DEFAULT);\n  ASSERT_GE(file_id, 0) << \"Failed to open HDF5 file \" <<\n      this->output_file_name_;\n\n  Blob<Dtype>* blob_data = new Blob<Dtype>();\n  hdf5_load_nd_dataset(file_id, HDF5_DATA_DATASET_NAME, 0, 4,\n                       blob_data);\n  this->CheckBlobEqual(*(this->blob_data_), *blob_data);\n\n  Blob<Dtype>* blob_label = new Blob<Dtype>();\n  hdf5_load_nd_dataset(file_id, HDF5_DATA_LABEL_NAME, 0, 4,\n                       blob_label);\n  this->CheckBlobEqual(*(this->blob_label_), *blob_label);\n\n  status = H5Fclose(file_id);\n  EXPECT_GE(status, 0) << \"Failed to close HDF5 
file \" <<\n      this->output_file_name_;\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_hdf5data_layer.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"hdf5.h\"\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layers/hdf5_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass HDF5DataLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  HDF5DataLayerTest()\n      : filename(NULL),\n        blob_top_data_(new Blob<Dtype>()),\n        blob_top_label_(new Blob<Dtype>()),\n        blob_top_label2_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    blob_top_vec_.push_back(blob_top_data_);\n    blob_top_vec_.push_back(blob_top_label_);\n    blob_top_vec_.push_back(blob_top_label2_);\n\n    // Check out generate_sample_data.py in the same directory.\n    filename = new string(\n    CMAKE_SOURCE_DIR \"caffe/test/test_data/sample_data_list.txt\" CMAKE_EXT);\n    LOG(INFO)<< \"Using sample HDF5 data file \" << filename;\n  }\n\n  virtual ~HDF5DataLayerTest() {\n    delete blob_top_data_;\n    delete blob_top_label_;\n    delete blob_top_label2_;\n    delete filename;\n  }\n\n  string* filename;\n  Blob<Dtype>* const blob_top_data_;\n  Blob<Dtype>* const blob_top_label_;\n  Blob<Dtype>* const blob_top_label2_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(HDF5DataLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(HDF5DataLayerTest, TestRead) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Create LayerParameter with the known parameters.\n  // The data file we are reading has 10 rows and 8 columns,\n  // with values from 0 to 10*8 reshaped in row-major order.\n  LayerParameter param;\n  param.add_top(\"data\");\n  param.add_top(\"label\");\n  param.add_top(\"label2\");\n\n  HDF5DataParameter* hdf5_data_param = param.mutable_hdf5_data_param();\n  int batch_size = 5;\n  
hdf5_data_param->set_batch_size(batch_size);\n  hdf5_data_param->set_source(*(this->filename));\n  int num_cols = 8;\n  int height = 6;\n  int width = 5;\n\n  // Test that the layer setup got the correct parameters.\n  HDF5DataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), batch_size);\n  EXPECT_EQ(this->blob_top_data_->channels(), num_cols);\n  EXPECT_EQ(this->blob_top_data_->height(), height);\n  EXPECT_EQ(this->blob_top_data_->width(), width);\n\n  EXPECT_EQ(this->blob_top_label_->num_axes(), 2);\n  EXPECT_EQ(this->blob_top_label_->shape(0), batch_size);\n  EXPECT_EQ(this->blob_top_label_->shape(1), 1);\n\n  EXPECT_EQ(this->blob_top_label2_->num_axes(), 2);\n  EXPECT_EQ(this->blob_top_label2_->shape(0), batch_size);\n  EXPECT_EQ(this->blob_top_label2_->shape(1), 1);\n\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  // Go through the data 10 times (5 batches).\n  const int data_size = num_cols * height * width;\n  for (int iter = 0; iter < 10; ++iter) {\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n    // On even iterations, we're reading the first half of the data.\n    // On odd iterations, we're reading the second half of the data.\n    // NB: label is 1-indexed\n    int label_offset = 1 + ((iter % 2 == 0) ? 0 : batch_size);\n    int label2_offset = 1 + label_offset;\n    int data_offset = (iter % 2 == 0) ? 0 : batch_size * data_size;\n\n    // Every two iterations we are reading the second file,\n    // which has the same labels, but data is offset by total data size,\n    // which is 2400 (see generate_sample_data).\n    int file_offset = (iter % 4 < 2) ? 
0 : 2400;\n\n    for (int i = 0; i < batch_size; ++i) {\n      EXPECT_EQ(\n        label_offset + i,\n        this->blob_top_label_->cpu_data()[i]);\n      EXPECT_EQ(\n        label2_offset + i,\n        this->blob_top_label2_->cpu_data()[i]);\n    }\n    for (int i = 0; i < batch_size; ++i) {\n      for (int j = 0; j < num_cols; ++j) {\n        for (int h = 0; h < height; ++h) {\n          for (int w = 0; w < width; ++w) {\n            int idx = (\n              i * num_cols * height * width +\n              j * height * width +\n              h * width + w);\n            EXPECT_EQ(\n              file_offset + data_offset + idx,\n              this->blob_top_data_->cpu_data()[idx])\n              << \"debug: i \" << i << \" j \" << j\n              << \" iter \" << iter;\n          }\n        }\n      }\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_hinge_loss_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/hinge_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass HingeLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  HingeLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_label_(new Blob<Dtype>(10, 1, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // fill the values\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    filler_param.set_std(10);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    for (int i = 0; i < blob_bottom_label_->count(); ++i) {\n      blob_bottom_label_->mutable_cpu_data()[i] = caffe_rng_rand() % 5;\n    }\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~HingeLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_top_loss_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(HingeLossLayerTest, TestDtypesAndDevices);\n\n\nTYPED_TEST(HingeLossLayerTest, TestGradientL1) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  HingeLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 2e-3, 1701, 1, 0.01);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\nTYPED_TEST(HingeLossLayerTest, TestGradientL2) {\n  
typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  // Set norm to L2\n  HingeLossParameter* hinge_loss_param = layer_param.mutable_hinge_loss_param();\n  hinge_loss_param->set_norm(HingeLossParameter_Norm_L2);\n  HingeLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_im2col_kernel.cu",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/im2col_layer.hpp\"\n#include \"caffe/util/im2col.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\n// Forward declare kernel functions\ntemplate <typename Dtype>\n__global__ void im2col_gpu_kernel(const int n, const Dtype* data_im,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int height_col, const int width_col,\n    Dtype* data_col);\n\ntemplate <typename Dtype, int num_axes>\n__global__ void im2col_nd_gpu_kernel(const int n, const Dtype* data_im,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col);\n\ntemplate <typename Dtype>\nclass Im2colKernelTest : public GPUDeviceTest<Dtype> {\n protected:\n  Im2colKernelTest()\n        // big so launches > 1024 threads\n      : blob_bottom_(new Blob<Dtype>(5, 500, 15, 15)),\n        blob_kernel_shape_(new Blob<int>()),\n        blob_stride_(new Blob<int>()),\n        blob_pad_(new Blob<int>()),\n        blob_dilation_(new Blob<int>()),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_cpu_(new Blob<Dtype>()) {\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    vector<int> dim_blob_shape(1, 2);\n    blob_kernel_shape_->Reshape(dim_blob_shape);\n    blob_stride_->Reshape(dim_blob_shape);\n    blob_pad_->Reshape(dim_blob_shape);\n    blob_dilation_->Reshape(dim_blob_shape);\n\n    height_ = blob_bottom_->height();\n    width_ = blob_bottom_->width();\n    channels_ = blob_bottom_->channels();\n    pad_ = 0;\n    stride_ = 2;\n    dilation_ = 3;\n    kernel_size_ 
= 3;\n    height_col_ = (height_ + 2 * pad_ -\n        (dilation_ * (kernel_size_ - 1) + 1)) / stride_ + 1;\n    width_col_ = (width_ + 2 * pad_ -\n        (dilation_ * (kernel_size_ - 1) + 1)) / stride_ + 1;\n\n    for (int i = 0; i < 2; ++i) {\n      blob_kernel_shape_->mutable_cpu_data()[i] = kernel_size_;\n      blob_stride_->mutable_cpu_data()[i] = stride_;\n      blob_pad_->mutable_cpu_data()[i] = pad_;\n      blob_dilation_->mutable_cpu_data()[i] = dilation_;\n    }\n  }\n\n  virtual ~Im2colKernelTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n    delete blob_top_cpu_;\n    delete blob_kernel_shape_;\n    delete blob_stride_;\n    delete blob_pad_;\n    delete blob_dilation_;\n  }\n\n  Blob<int>* const blob_kernel_shape_;\n  Blob<int>* const blob_stride_;\n  Blob<int>* const blob_pad_;\n  Blob<int>* const blob_dilation_;\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  Blob<Dtype>* const blob_top_cpu_;\n  int height_;\n  int width_;\n  int channels_;\n  int pad_;\n  int stride_;\n  int dilation_;\n  int kernel_size_;\n  int height_col_;\n  int width_col_;\n};\n\nTYPED_TEST_CASE(Im2colKernelTest, TestDtypes);\n\nTYPED_TEST(Im2colKernelTest, Test2D) {\n  // Reshape the blobs to correct size for im2col output\n  this->blob_top_->Reshape(this->blob_bottom_->num(),\n          this->channels_ * this->kernel_size_ * this->kernel_size_,\n          this->height_col_,\n          this->width_col_);\n\n  this->blob_top_cpu_->Reshape(this->blob_bottom_->num(),\n          this->channels_ * this->kernel_size_ * this->kernel_size_,\n          this->height_col_,\n          this->width_col_);\n\n  const TypeParam* bottom_data = this->blob_bottom_->gpu_data();\n  TypeParam* top_data = this->blob_top_->mutable_gpu_data();\n  TypeParam* cpu_data = this->blob_top_cpu_->mutable_cpu_data();\n\n  // CPU Version\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    im2col_cpu(this->blob_bottom_->cpu_data() + this->blob_bottom_->offset(n),\n   
   this->channels_, this->height_, this->width_,\n      this->kernel_size_, this->kernel_size_, this->pad_, this->pad_,\n      this->stride_, this->stride_, this->dilation_, this->dilation_,\n      cpu_data + this->blob_top_cpu_->offset(n));\n  }\n\n  // GPU version\n  int num_kernels = this->channels_ * this->height_col_ * this->width_col_;\n  int default_grid_dim = CAFFE_GET_BLOCKS(num_kernels);\n\n  // Launch with different grid sizes\n  for (int grid_div = 2; grid_div <= 8; grid_div++) {\n    for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n      int grid_dim = default_grid_dim/grid_div;\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      im2col_gpu_kernel<TypeParam><<<grid_dim, CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, bottom_data + this->blob_bottom_->offset(n),\n        this->height_, this->width_, this->kernel_size_, this->kernel_size_,\n        this->pad_, this->pad_, this->stride_, this->stride_,\n        this->dilation_, this->dilation_,\n        this->height_col_, this->width_col_,\n        top_data + this->blob_top_->offset(n));\n      CUDA_POST_KERNEL_CHECK;\n    }\n\n    // Compare results against CPU version\n    for (int i = 0; i < this->blob_top_->count(); ++i) {\n      TypeParam cpuval = cpu_data[i];\n      TypeParam gpuval = this->blob_top_->cpu_data()[i];\n      EXPECT_EQ(cpuval, gpuval);\n      if (cpuval != gpuval) {\n        break;\n      }\n    }\n  }\n}\n\nTYPED_TEST(Im2colKernelTest, TestND) {\n  // Reshape the blobs to correct size for im2col output\n  this->blob_top_->Reshape(this->blob_bottom_->num(),\n      this->channels_ * this->kernel_size_ * this->kernel_size_,\n      this->height_col_,\n      this->width_col_);\n\n  this->blob_top_cpu_->ReshapeLike(*this->blob_top_);\n\n  const TypeParam* bottom_data_cpu = this->blob_bottom_->cpu_data();\n  TypeParam* top_data_cpu = this->blob_top_cpu_->mutable_cpu_data();\n\n  // CPU Version\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    
im2col_nd_cpu(bottom_data_cpu + this->blob_bottom_->offset(n), 2,\n        this->blob_bottom_->shape().data() + 1,\n        this->blob_top_cpu_->shape().data() + 1,\n        this->blob_kernel_shape_->cpu_data(),\n        this->blob_pad_->cpu_data(), this->blob_stride_->cpu_data(),\n        this->blob_dilation_->cpu_data(),\n        top_data_cpu + this->blob_top_cpu_->offset(n));\n  }\n\n  // GPU version\n  int num_kernels = this->channels_ * this->height_col_ * this->width_col_;\n  int default_grid_dim = CAFFE_GET_BLOCKS(num_kernels);\n  const TypeParam* bottom_data_gpu = this->blob_bottom_->gpu_data();\n\n  // Launch with different grid sizes\n  for (int grid_div = 2; grid_div <= 8; grid_div++) {\n    for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n      const int grid_dim = default_grid_dim / grid_div;\n      TypeParam* top_data_gpu = this->blob_top_->mutable_gpu_data();\n      // NOLINT_NEXT_LINE(whitespace/operators)\n      im2col_nd_gpu_kernel<TypeParam, 2><<<grid_dim, CAFFE_CUDA_NUM_THREADS>>>(\n          num_kernels, bottom_data_gpu + this->blob_bottom_->offset(n),\n          this->blob_bottom_->gpu_shape() + 1, this->blob_top_->gpu_shape() + 1,\n          this->blob_kernel_shape_->gpu_data(), this->blob_pad_->gpu_data(),\n          this->blob_stride_->gpu_data(), this->blob_dilation_->gpu_data(),\n          top_data_gpu + this->blob_top_->offset(n));\n      CUDA_POST_KERNEL_CHECK;\n    }\n\n    // Compare results against CPU version\n    for (int i = 0; i < this->blob_top_->count(); ++i) {\n      TypeParam cpuval = top_data_cpu[i];\n      TypeParam gpuval = this->blob_top_->cpu_data()[i];\n      EXPECT_EQ(cpuval, gpuval);\n      if (cpuval != gpuval) {\n        break;\n      }\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_im2col_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/im2col_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass Im2colLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  Im2colLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~Im2colLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(Im2colLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(Im2colLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  vector<int> bottom_shape;\n  bottom_shape.push_back(2);\n  bottom_shape.push_back(3);\n  bottom_shape.push_back(10);\n  bottom_shape.push_back(11);\n  this->blob_bottom_->Reshape(bottom_shape);\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->add_dilation(3);\n  Im2colLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 27);\n  EXPECT_EQ(this->blob_top_->height(), 2);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(Im2colLayerTest, TestForward) 
{\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  Im2colLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // We are lazy and will only check the top left block\n  for (int c = 0; c < 27; ++c) {\n    EXPECT_EQ(this->blob_bottom_->data_at(0, (c / 9), (c / 3) % 3, c % 3),\n        this->blob_top_->data_at(0, c, 0, 0));\n  }\n}\n\nTYPED_TEST(Im2colLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  Im2colLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(Im2colLayerTest, TestDilatedGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  vector<int> bottom_shape;\n  bottom_shape.push_back(2);\n  bottom_shape.push_back(3);\n  bottom_shape.push_back(10);\n  bottom_shape.push_back(9);\n  this->blob_bottom_->Reshape(bottom_shape);\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->add_dilation(3);\n  Im2colLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n                                  this->blob_top_vec_);\n}\n\nTYPED_TEST(Im2colLayerTest, TestGradientForceND) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter 
layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->set_force_nd_im2col(true);\n  Im2colLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(Im2colLayerTest, TestDilatedGradientForceND) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  vector<int> bottom_shape;\n  bottom_shape.push_back(2);\n  bottom_shape.push_back(3);\n  bottom_shape.push_back(10);\n  bottom_shape.push_back(9);\n  this->blob_bottom_->Reshape(bottom_shape);\n  convolution_param->add_kernel_size(3);\n  convolution_param->add_stride(2);\n  convolution_param->add_dilation(3);\n  convolution_param->set_force_nd_im2col(true);\n  Im2colLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n                                  this->blob_top_vec_);\n}\n\nTYPED_TEST(Im2colLayerTest, TestRect) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->set_kernel_h(5);\n  convolution_param->set_kernel_w(3);\n  convolution_param->add_stride(2);\n  Im2colLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // We are lazy and will only check the top left block\n  for (int c = 0; c < 45; ++c) {\n    EXPECT_EQ(this->blob_top_->data_at(0, c, 0, 0),\n        this->blob_bottom_->data_at(0, (c / 15), (c / 3) % 5, c % 3));\n  }\n}\n\nTYPED_TEST(Im2colLayerTest, TestRectGradient) {\n  
typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ConvolutionParameter* convolution_param =\n      layer_param.mutable_convolution_param();\n  convolution_param->set_kernel_h(5);\n  convolution_param->set_kernel_w(3);\n  convolution_param->add_stride(2);\n  Im2colLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_image_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/image_data_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ImageDataLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ImageDataLayerTest()\n      : seed_(1701),\n        blob_top_data_(new Blob<Dtype>()),\n        blob_top_label_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    blob_top_vec_.push_back(blob_top_data_);\n    blob_top_vec_.push_back(blob_top_label_);\n    Caffe::set_random_seed(seed_);\n    // Create test input file.\n    MakeTempFilename(&filename_);\n    std::ofstream outfile(filename_.c_str(), std::ofstream::out);\n    LOG(INFO) << \"Using temporary file \" << filename_;\n    for (int i = 0; i < 5; ++i) {\n      outfile << EXAMPLES_SOURCE_DIR \"images/cat.jpg \" << i;\n    }\n    outfile.close();\n    // Create test input file for images of distinct sizes.\n    MakeTempFilename(&filename_reshape_);\n    std::ofstream reshapefile(filename_reshape_.c_str(), std::ofstream::out);\n    LOG(INFO) << \"Using temporary file \" << filename_reshape_;\n    reshapefile << EXAMPLES_SOURCE_DIR \"images/cat.jpg \" << 0;\n    reshapefile << EXAMPLES_SOURCE_DIR \"images/fish-bike.jpg \" << 1;\n    reshapefile.close();\n  }\n\n  virtual ~ImageDataLayerTest() {\n    delete blob_top_data_;\n    delete blob_top_label_;\n  }\n\n  int seed_;\n  string filename_;\n  string filename_reshape_;\n  Blob<Dtype>* const blob_top_data_;\n  Blob<Dtype>* const blob_top_label_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ImageDataLayerTest, 
TestDtypesAndDevices);\n\nTYPED_TEST(ImageDataLayerTest, TestRead) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  ImageDataParameter* image_data_param = param.mutable_image_data_param();\n  image_data_param->set_batch_size(5);\n  image_data_param->set_source(this->filename_.c_str());\n  image_data_param->set_shuffle(false);\n  ImageDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), 5);\n  EXPECT_EQ(this->blob_top_data_->channels(), 3);\n  EXPECT_EQ(this->blob_top_data_->height(), 360);\n  EXPECT_EQ(this->blob_top_data_->width(), 480);\n  EXPECT_EQ(this->blob_top_label_->num(), 5);\n  EXPECT_EQ(this->blob_top_label_->channels(), 1);\n  EXPECT_EQ(this->blob_top_label_->height(), 1);\n  EXPECT_EQ(this->blob_top_label_->width(), 1);\n  // Go through the data twice\n  for (int iter = 0; iter < 2; ++iter) {\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    for (int i = 0; i < 5; ++i) {\n      EXPECT_EQ(i, this->blob_top_label_->cpu_data()[i]);\n    }\n  }\n}\n\nTYPED_TEST(ImageDataLayerTest, TestResize) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  ImageDataParameter* image_data_param = param.mutable_image_data_param();\n  image_data_param->set_batch_size(5);\n  image_data_param->set_source(this->filename_.c_str());\n  image_data_param->set_new_height(256);\n  image_data_param->set_new_width(256);\n  image_data_param->set_shuffle(false);\n  ImageDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), 5);\n  EXPECT_EQ(this->blob_top_data_->channels(), 3);\n  EXPECT_EQ(this->blob_top_data_->height(), 256);\n  EXPECT_EQ(this->blob_top_data_->width(), 256);\n  EXPECT_EQ(this->blob_top_label_->num(), 5);\n  EXPECT_EQ(this->blob_top_label_->channels(), 1);\n  EXPECT_EQ(this->blob_top_label_->height(), 1);\n  
EXPECT_EQ(this->blob_top_label_->width(), 1);\n  // Go through the data twice\n  for (int iter = 0; iter < 2; ++iter) {\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    for (int i = 0; i < 5; ++i) {\n      EXPECT_EQ(i, this->blob_top_label_->cpu_data()[i]);\n    }\n  }\n}\n\nTYPED_TEST(ImageDataLayerTest, TestReshape) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  ImageDataParameter* image_data_param = param.mutable_image_data_param();\n  image_data_param->set_batch_size(1);\n  image_data_param->set_source(this->filename_reshape_.c_str());\n  image_data_param->set_shuffle(false);\n  ImageDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_label_->num(), 1);\n  EXPECT_EQ(this->blob_top_label_->channels(), 1);\n  EXPECT_EQ(this->blob_top_label_->height(), 1);\n  EXPECT_EQ(this->blob_top_label_->width(), 1);\n  // cat.jpg\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), 1);\n  EXPECT_EQ(this->blob_top_data_->channels(), 3);\n  EXPECT_EQ(this->blob_top_data_->height(), 360);\n  EXPECT_EQ(this->blob_top_data_->width(), 480);\n  // fish-bike.jpg\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), 1);\n  EXPECT_EQ(this->blob_top_data_->channels(), 3);\n  EXPECT_EQ(this->blob_top_data_->height(), 323);\n  EXPECT_EQ(this->blob_top_data_->width(), 481);\n}\n\nTYPED_TEST(ImageDataLayerTest, TestShuffle) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  ImageDataParameter* image_data_param = param.mutable_image_data_param();\n  image_data_param->set_batch_size(5);\n  image_data_param->set_source(this->filename_.c_str());\n  image_data_param->set_shuffle(true);\n  ImageDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_data_->num(), 5);\n  
EXPECT_EQ(this->blob_top_data_->channels(), 3);\n  EXPECT_EQ(this->blob_top_data_->height(), 360);\n  EXPECT_EQ(this->blob_top_data_->width(), 480);\n  EXPECT_EQ(this->blob_top_label_->num(), 5);\n  EXPECT_EQ(this->blob_top_label_->channels(), 1);\n  EXPECT_EQ(this->blob_top_label_->height(), 1);\n  EXPECT_EQ(this->blob_top_label_->width(), 1);\n  // Go through the data twice\n  for (int iter = 0; iter < 2; ++iter) {\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    map<Dtype, int> values_to_indices;\n    int num_in_order = 0;\n    for (int i = 0; i < 5; ++i) {\n      Dtype value = this->blob_top_label_->cpu_data()[i];\n      // Check that the value has not been seen already (no duplicates).\n      EXPECT_EQ(values_to_indices.find(value), values_to_indices.end());\n      values_to_indices[value] = i;\n      num_in_order += (value == Dtype(i));\n    }\n    EXPECT_EQ(5, values_to_indices.size());\n    EXPECT_GT(5, num_in_order);\n  }\n}\n\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_infogain_loss_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/infogain_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass InfogainLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  InfogainLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_label_(new Blob<Dtype>(10, 1, 1, 1)),\n        blob_bottom_infogain_(new Blob<Dtype>(1, 1, 5, 5)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    PositiveUnitballFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    for (int i = 0; i < blob_bottom_label_->count(); ++i) {\n      blob_bottom_label_->mutable_cpu_data()[i] = caffe_rng_rand() % 5;\n    }\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    filler_param.set_min(0.1);\n    filler_param.set_max(2.0);\n    UniformFiller<Dtype> infogain_filler(filler_param);\n    infogain_filler.Fill(this->blob_bottom_infogain_);\n    blob_bottom_vec_.push_back(blob_bottom_infogain_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~InfogainLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_bottom_infogain_;\n    delete blob_top_loss_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_bottom_infogain_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(InfogainLossLayerTest, TestDtypesAndDevices);\n\n\nTYPED_TEST(InfogainLossLayerTest, TestGradient) {\n  typedef typename 
TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  InfogainLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-4, 2e-2, 1701, 1, 0.01);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_inner_product_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/inner_product_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\n#ifndef CPU_ONLY\nextern cudaDeviceProp CAFFE_TEST_CUDA_PROP;\n#endif\n\ntemplate <typename TypeParam>\nclass InnerProductLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  InnerProductLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_nobatch_(new Blob<Dtype>(1, 2, 3, 4)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~InnerProductLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_nobatch_;\n    delete blob_top_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_nobatch_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(InnerProductLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(InnerProductLayerTest, TestSetUp) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_);\n  LayerParameter layer_param;\n  InnerProductParameter* inner_product_param =\n      layer_param.mutable_inner_product_param();\n  inner_product_param->set_num_output(10);\n  shared_ptr<InnerProductLayer<Dtype> > layer(\n      new InnerProductLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 
10);\n}\n\nTYPED_TEST(InnerProductLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_);\n  bool IS_VALID_CUDA = false;\n#ifndef CPU_ONLY\n  IS_VALID_CUDA = CAFFE_TEST_CUDA_PROP.major >= 2;\n#endif\n  if (Caffe::mode() == Caffe::CPU ||\n      sizeof(Dtype) == 4 || IS_VALID_CUDA) {\n    LayerParameter layer_param;\n    InnerProductParameter* inner_product_param =\n        layer_param.mutable_inner_product_param();\n    inner_product_param->set_num_output(10);\n    inner_product_param->mutable_weight_filler()->set_type(\"uniform\");\n    inner_product_param->mutable_bias_filler()->set_type(\"uniform\");\n    inner_product_param->mutable_bias_filler()->set_min(1);\n    inner_product_param->mutable_bias_filler()->set_max(2);\n    shared_ptr<InnerProductLayer<Dtype> > layer(\n        new InnerProductLayer<Dtype>(layer_param));\n    layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* data = this->blob_top_->cpu_data();\n    const int count = this->blob_top_->count();\n    for (int i = 0; i < count; ++i) {\n      EXPECT_GE(data[i], 1.);\n    }\n  } else {\n    LOG(ERROR) << \"Skipping test due to old architecture.\";\n  }\n}\n\nTYPED_TEST(InnerProductLayerTest, TestForwardNoBatch) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_nobatch_);\n  bool IS_VALID_CUDA = false;\n#ifndef CPU_ONLY\n  IS_VALID_CUDA = CAFFE_TEST_CUDA_PROP.major >= 2;\n#endif\n  if (Caffe::mode() == Caffe::CPU ||\n      sizeof(Dtype) == 4 || IS_VALID_CUDA) {\n    LayerParameter layer_param;\n    InnerProductParameter* inner_product_param =\n        layer_param.mutable_inner_product_param();\n    inner_product_param->set_num_output(10);\n    inner_product_param->mutable_weight_filler()->set_type(\"uniform\");\n    inner_product_param->mutable_bias_filler()->set_type(\"uniform\");\n    
inner_product_param->mutable_bias_filler()->set_min(1);\n    inner_product_param->mutable_bias_filler()->set_max(2);\n    shared_ptr<InnerProductLayer<Dtype> > layer(\n        new InnerProductLayer<Dtype>(layer_param));\n    layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* data = this->blob_top_->cpu_data();\n    const int count = this->blob_top_->count();\n    for (int i = 0; i < count; ++i) {\n      EXPECT_GE(data[i], 1.);\n    }\n  } else {\n    LOG(ERROR) << \"Skipping test due to old architecture.\";\n  }\n}\n\nTYPED_TEST(InnerProductLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_);\n  bool IS_VALID_CUDA = false;\n#ifndef CPU_ONLY\n  IS_VALID_CUDA = CAFFE_TEST_CUDA_PROP.major >= 2;\n#endif\n  if (Caffe::mode() == Caffe::CPU ||\n      sizeof(Dtype) == 4 || IS_VALID_CUDA) {\n    LayerParameter layer_param;\n    InnerProductParameter* inner_product_param =\n        layer_param.mutable_inner_product_param();\n    inner_product_param->set_num_output(10);\n    inner_product_param->mutable_weight_filler()->set_type(\"gaussian\");\n    inner_product_param->mutable_bias_filler()->set_type(\"gaussian\");\n    inner_product_param->mutable_bias_filler()->set_min(1);\n    inner_product_param->mutable_bias_filler()->set_max(2);\n    InnerProductLayer<Dtype> layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 1e-3);\n    checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n        this->blob_top_vec_);\n  } else {\n    LOG(ERROR) << \"Skipping test due to old architecture.\";\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_internal_thread.cpp",
    "content": "#include \"glog/logging.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/internal_thread.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\n\nclass InternalThreadTest : public ::testing::Test {};\n\nTEST_F(InternalThreadTest, TestStartAndExit) {\n  InternalThread thread;\n  EXPECT_FALSE(thread.is_started());\n  thread.StartInternalThread();\n  EXPECT_TRUE(thread.is_started());\n  thread.StopInternalThread();\n  EXPECT_FALSE(thread.is_started());\n}\n\nclass TestThreadA : public InternalThread {\n  void InternalThreadEntry() {\n    EXPECT_EQ(4244559767, caffe_rng_rand());\n  }\n};\n\nclass TestThreadB : public InternalThread {\n  void InternalThreadEntry() {\n    EXPECT_EQ(1726478280, caffe_rng_rand());\n  }\n};\n\nTEST_F(InternalThreadTest, TestRandomSeed) {\n  TestThreadA t1;\n  Caffe::set_random_seed(9658361);\n  t1.StartInternalThread();\n  t1.StopInternalThread();\n\n  TestThreadA t2;\n  Caffe::set_random_seed(9658361);\n  t2.StartInternalThread();\n  t2.StopInternalThread();\n\n  TestThreadB t3;\n  Caffe::set_random_seed(3435563);\n  t3.StartInternalThread();\n  t3.StopInternalThread();\n}\n\n}  // namespace caffe\n\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_io.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/highgui/highgui_c.h>\n#include <opencv2/imgproc/imgproc.hpp>\n\n#include <string>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nclass IOTest : public ::testing::Test {};\n\nbool ReadImageToDatumReference(const string& filename, const int label,\n    const int height, const int width, const bool is_color, Datum* datum) {\n  cv::Mat cv_img;\n  int cv_read_flag = (is_color ? CV_LOAD_IMAGE_COLOR :\n    CV_LOAD_IMAGE_GRAYSCALE);\n\n  cv::Mat cv_img_origin = cv::imread(filename, cv_read_flag);\n  if (!cv_img_origin.data) {\n    LOG(ERROR) << \"Could not open or find file \" << filename;\n    return false;\n  }\n  if (height > 0 && width > 0) {\n    cv::resize(cv_img_origin, cv_img, cv::Size(width, height));\n  } else {\n    cv_img = cv_img_origin;\n  }\n\n  int num_channels = (is_color ? 
3 : 1);\n  datum->set_channels(num_channels);\n  datum->set_height(cv_img.rows);\n  datum->set_width(cv_img.cols);\n  datum->set_label(label);\n  datum->clear_data();\n  datum->clear_float_data();\n  string* datum_string = datum->mutable_data();\n  if (is_color) {\n    for (int c = 0; c < num_channels; ++c) {\n      for (int h = 0; h < cv_img.rows; ++h) {\n        for (int w = 0; w < cv_img.cols; ++w) {\n          datum_string->push_back(\n            static_cast<char>(cv_img.at<cv::Vec3b>(h, w)[c]));\n        }\n      }\n    }\n  } else {  // Faster than repeatedly testing is_color for each pixel w/i loop\n    for (int h = 0; h < cv_img.rows; ++h) {\n      for (int w = 0; w < cv_img.cols; ++w) {\n        datum_string->push_back(\n          static_cast<char>(cv_img.at<uchar>(h, w)));\n        }\n      }\n  }\n  return true;\n}\n\nTEST_F(IOTest, TestReadImageToDatum) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  ReadImageToDatum(filename, 0, &datum);\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 360);\n  EXPECT_EQ(datum.width(), 480);\n}\n\nTEST_F(IOTest, TestReadImageToDatumReference) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum, datum_ref;\n  ReadImageToDatum(filename, 0, 0, 0, true, &datum);\n  ReadImageToDatumReference(filename, 0, 0, 0, true, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  // Compare against the reference datum, not datum itself.\n  const string& data_ref = datum_ref.data();\n\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\n\nTEST_F(IOTest, TestReadImageToDatumReferenceResized) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum, datum_ref;\n  ReadImageToDatum(filename, 0, 100, 200, true, &datum);\n  
ReadImageToDatumReference(filename, 0, 100, 200, true, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  // Compare against the reference datum, not datum itself.\n  const string& data_ref = datum_ref.data();\n\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestReadImageToDatumContent) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  ReadImageToDatum(filename, 0, &datum);\n  cv::Mat cv_img = ReadImageToCVMat(filename);\n  EXPECT_EQ(datum.channels(), cv_img.channels());\n  EXPECT_EQ(datum.height(), cv_img.rows);\n  EXPECT_EQ(datum.width(), cv_img.cols);\n\n  const string& data = datum.data();\n  int index = 0;\n  for (int c = 0; c < datum.channels(); ++c) {\n    for (int h = 0; h < datum.height(); ++h) {\n      for (int w = 0; w < datum.width(); ++w) {\n        EXPECT_TRUE(data[index++] ==\n          static_cast<char>(cv_img.at<cv::Vec3b>(h, w)[c]));\n      }\n    }\n  }\n}\n\nTEST_F(IOTest, TestReadImageToDatumContentGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  const bool is_color = false;\n  ReadImageToDatum(filename, 0, is_color, &datum);\n  cv::Mat cv_img = ReadImageToCVMat(filename, is_color);\n  EXPECT_EQ(datum.channels(), cv_img.channels());\n  EXPECT_EQ(datum.height(), cv_img.rows);\n  EXPECT_EQ(datum.width(), cv_img.cols);\n\n  const string& data = datum.data();\n  int index = 0;\n  for (int h = 0; h < datum.height(); ++h) {\n    for (int w = 0; w < datum.width(); ++w) {\n      EXPECT_TRUE(data[index++] == static_cast<char>(cv_img.at<uchar>(h, w)));\n    }\n  }\n}\n\nTEST_F(IOTest, TestReadImageToDatumResized) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  ReadImageToDatum(filename, 0, 100, 200, &datum);\n  
EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 100);\n  EXPECT_EQ(datum.width(), 200);\n}\n\n\nTEST_F(IOTest, TestReadImageToDatumResizedSquare) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  ReadImageToDatum(filename, 0, 256, 256, &datum);\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 256);\n  EXPECT_EQ(datum.width(), 256);\n}\n\nTEST_F(IOTest, TestReadImageToDatumGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  const bool is_color = false;\n  ReadImageToDatum(filename, 0, is_color, &datum);\n  EXPECT_EQ(datum.channels(), 1);\n  EXPECT_EQ(datum.height(), 360);\n  EXPECT_EQ(datum.width(), 480);\n}\n\nTEST_F(IOTest, TestReadImageToDatumResizedGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  const bool is_color = false;\n  ReadImageToDatum(filename, 0, 256, 256, is_color, &datum);\n  EXPECT_EQ(datum.channels(), 1);\n  EXPECT_EQ(datum.height(), 256);\n  EXPECT_EQ(datum.width(), 256);\n}\n\nTEST_F(IOTest, TestReadImageToCVMat) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename);\n  EXPECT_EQ(cv_img.channels(), 3);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n}\n\nTEST_F(IOTest, TestReadImageToCVMatResized) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename, 100, 200);\n  EXPECT_EQ(cv_img.channels(), 3);\n  EXPECT_EQ(cv_img.rows, 100);\n  EXPECT_EQ(cv_img.cols, 200);\n}\n\nTEST_F(IOTest, TestReadImageToCVMatResizedSquare) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename, 256, 256);\n  EXPECT_EQ(cv_img.channels(), 3);\n  EXPECT_EQ(cv_img.rows, 256);\n  EXPECT_EQ(cv_img.cols, 256);\n}\n\nTEST_F(IOTest, TestReadImageToCVMatGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  const bool is_color = false;\n  cv::Mat 
cv_img = ReadImageToCVMat(filename, is_color);\n  EXPECT_EQ(cv_img.channels(), 1);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n}\n\nTEST_F(IOTest, TestReadImageToCVMatResizedGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  const bool is_color = false;\n  cv::Mat cv_img = ReadImageToCVMat(filename, 256, 256, is_color);\n  EXPECT_EQ(cv_img.channels(), 1);\n  EXPECT_EQ(cv_img.rows, 256);\n  EXPECT_EQ(cv_img.cols, 256);\n}\n\nTEST_F(IOTest, TestCVMatToDatum) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename);\n  Datum datum;\n  CVMatToDatum(cv_img, &datum);\n  EXPECT_EQ(datum.channels(), 3);\n  EXPECT_EQ(datum.height(), 360);\n  EXPECT_EQ(datum.width(), 480);\n}\n\nTEST_F(IOTest, TestCVMatToDatumContent) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename);\n  Datum datum;\n  CVMatToDatum(cv_img, &datum);\n  Datum datum_ref;\n  ReadImageToDatum(filename, 0, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  const string& data_ref = datum_ref.data();\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestCVMatToDatumReference) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  cv::Mat cv_img = ReadImageToCVMat(filename);\n  Datum datum;\n  CVMatToDatum(cv_img, &datum);\n  Datum datum_ref;\n  ReadImageToDatumReference(filename, 0, 0, 0, true, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  const string& 
data_ref = datum_ref.data();\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestReadFileToDatum) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  EXPECT_TRUE(datum.encoded());\n  EXPECT_EQ(datum.label(), -1);\n  EXPECT_EQ(datum.data().size(), 140391);\n}\n\nTEST_F(IOTest, TestDecodeDatum) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  EXPECT_TRUE(DecodeDatum(&datum, true));\n  EXPECT_FALSE(DecodeDatum(&datum, true));\n  Datum datum_ref;\n  ReadImageToDatumReference(filename, 0, 0, 0, true, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  const string& data_ref = datum_ref.data();\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestDecodeDatumToCVMat) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  cv::Mat cv_img = DecodeDatumToCVMat(datum, true);\n  EXPECT_EQ(cv_img.channels(), 3);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n  cv_img = DecodeDatumToCVMat(datum, false);\n  EXPECT_EQ(cv_img.channels(), 1);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n}\n\nTEST_F(IOTest, TestDecodeDatumToCVMatContent) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadImageToDatum(filename, 0, std::string(\"jpg\"), &datum));\n  cv::Mat cv_img = DecodeDatumToCVMat(datum, true);\n  cv::Mat cv_img_ref = ReadImageToCVMat(filename);\n  EXPECT_EQ(cv_img_ref.channels(), cv_img.channels());\n  
EXPECT_EQ(cv_img_ref.rows, cv_img.rows);\n  EXPECT_EQ(cv_img_ref.cols, cv_img.cols);\n\n  for (int c = 0; c < datum.channels(); ++c) {\n    for (int h = 0; h < datum.height(); ++h) {\n      for (int w = 0; w < datum.width(); ++w) {\n        EXPECT_TRUE(cv_img.at<cv::Vec3b>(h, w)[c]==\n          cv_img_ref.at<cv::Vec3b>(h, w)[c]);\n      }\n    }\n  }\n}\n\nTEST_F(IOTest, TestDecodeDatumNative) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  EXPECT_TRUE(DecodeDatumNative(&datum));\n  EXPECT_FALSE(DecodeDatumNative(&datum));\n  Datum datum_ref;\n  ReadImageToDatumReference(filename, 0, 0, 0, true, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), datum_ref.data().size());\n\n  const string& data = datum.data();\n  const string& data_ref = datum_ref.data();\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestDecodeDatumToCVMatNative) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  cv::Mat cv_img = DecodeDatumToCVMatNative(datum);\n  EXPECT_EQ(cv_img.channels(), 3);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n}\n\nTEST_F(IOTest, TestDecodeDatumNativeGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat_gray.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  EXPECT_TRUE(DecodeDatumNative(&datum));\n  EXPECT_FALSE(DecodeDatumNative(&datum));\n  Datum datum_ref;\n  ReadImageToDatumReference(filename, 0, 0, 0, false, &datum_ref);\n  EXPECT_EQ(datum.channels(), datum_ref.channels());\n  EXPECT_EQ(datum.height(), datum_ref.height());\n  EXPECT_EQ(datum.width(), datum_ref.width());\n  EXPECT_EQ(datum.data().size(), 
datum_ref.data().size());\n\n  const string& data = datum.data();\n  const string& data_ref = datum_ref.data();\n  for (int i = 0; i < datum.data().size(); ++i) {\n    EXPECT_TRUE(data[i] == data_ref[i]);\n  }\n}\n\nTEST_F(IOTest, TestDecodeDatumToCVMatNativeGray) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat_gray.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadFileToDatum(filename, &datum));\n  cv::Mat cv_img = DecodeDatumToCVMatNative(datum);\n  EXPECT_EQ(cv_img.channels(), 1);\n  EXPECT_EQ(cv_img.rows, 360);\n  EXPECT_EQ(cv_img.cols, 480);\n}\n\nTEST_F(IOTest, TestDecodeDatumToCVMatContentNative) {\n  string filename = EXAMPLES_SOURCE_DIR \"images/cat.jpg\";\n  Datum datum;\n  EXPECT_TRUE(ReadImageToDatum(filename, 0, std::string(\"jpg\"), &datum));\n  cv::Mat cv_img = DecodeDatumToCVMatNative(datum);\n  cv::Mat cv_img_ref = ReadImageToCVMat(filename);\n  EXPECT_EQ(cv_img_ref.channels(), cv_img.channels());\n  EXPECT_EQ(cv_img_ref.rows, cv_img.rows);\n  EXPECT_EQ(cv_img_ref.cols, cv_img.cols);\n\n  for (int c = 0; c < datum.channels(); ++c) {\n    for (int h = 0; h < datum.height(); ++h) {\n      for (int w = 0; w < datum.width(); ++w) {\n        EXPECT_TRUE(cv_img.at<cv::Vec3b>(h, w)[c]==\n          cv_img_ref.at<cv::Vec3b>(h, w)[c]);\n      }\n    }\n  }\n}\n\n}  // namespace caffe\n#endif  // USE_OPENCV\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_layer_factory.cpp",
    "content": "#include <map>\n#include <string>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/layer_factory.hpp\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/io.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass LayerFactoryTest : public MultiDeviceTest<TypeParam> {};\n\nTYPED_TEST_CASE(LayerFactoryTest, TestDtypesAndDevices);\n\nTYPED_TEST(LayerFactoryTest, TestCreateLayer) {\n  typedef typename TypeParam::Dtype Dtype;\n  typename LayerRegistry<Dtype>::CreatorRegistry& registry =\n      LayerRegistry<Dtype>::Registry();\n  shared_ptr<Layer<Dtype> > layer;\n  for (typename LayerRegistry<Dtype>::CreatorRegistry::iterator iter =\n       registry.begin(); iter != registry.end(); ++iter) {\n    // Special case: PythonLayer is checked by pytest\n    if (iter->first == \"Python\") { continue; }\n    LayerParameter layer_param;\n    // Data layers expect a DB\n    if (iter->first == \"Data\") {\n#ifdef USE_LEVELDB\n      string tmp;\n      MakeTempDir(&tmp);\n      boost::scoped_ptr<db::DB> db(db::GetDB(DataParameter_DB_LEVELDB));\n      db->Open(tmp, db::NEW);\n      db->Close();\n      layer_param.mutable_data_param()->set_source(tmp);\n#else\n      continue;\n#endif  // USE_LEVELDB\n    }\n    layer_param.set_type(iter->first);\n    layer = LayerRegistry<Dtype>::CreateLayer(layer_param);\n    EXPECT_EQ(iter->first, layer->type());\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_lrn_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/lrn_layer.hpp\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_lcn_layer.hpp\"\n#include \"caffe/layers/cudnn_lrn_layer.hpp\"\n#endif\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nusing std::min;\nusing std::max;\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass LRNLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  LRNLayerTest()\n      : epsilon_(Dtype(1e-5)),\n        blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    blob_bottom_->Reshape(2, 7, 3, 3);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~LRNLayerTest() { delete blob_bottom_; delete blob_top_; }\n  void ReferenceLRNForward(const Blob<Dtype>& blob_bottom,\n      const LayerParameter& layer_param, Blob<Dtype>* blob_top);\n\n  Dtype epsilon_;\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\ntemplate <typename TypeParam>\nvoid LRNLayerTest<TypeParam>::ReferenceLRNForward(\n    const Blob<Dtype>& blob_bottom, const LayerParameter& layer_param,\n    Blob<Dtype>* blob_top) {\n  typedef typename TypeParam::Dtype Dtype;\n  blob_top->Reshape(blob_bottom.num(), blob_bottom.channels(),\n      blob_bottom.height(), blob_bottom.width());\n  Dtype* top_data = blob_top->mutable_cpu_data();\n  LRNParameter lrn_param = layer_param.lrn_param();\n  Dtype alpha = lrn_param.alpha();\n  Dtype beta = 
lrn_param.beta();\n  int size = lrn_param.local_size();\n  switch (lrn_param.norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    for (int n = 0; n < blob_bottom.num(); ++n) {\n      for (int c = 0; c < blob_bottom.channels(); ++c) {\n        for (int h = 0; h < blob_bottom.height(); ++h) {\n          for (int w = 0; w < blob_bottom.width(); ++w) {\n            int c_start = c - (size - 1) / 2;\n            int c_end = min(c_start + size, blob_bottom.channels());\n            c_start = max(c_start, 0);\n            Dtype scale = 1.;\n            for (int i = c_start; i < c_end; ++i) {\n              Dtype value = blob_bottom.data_at(n, i, h, w);\n              scale += value * value * alpha / size;\n            }\n            *(top_data + blob_top->offset(n, c, h, w)) =\n              blob_bottom.data_at(n, c, h, w) / pow(scale, beta);\n          }\n        }\n      }\n    }\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    for (int n = 0; n < blob_bottom.num(); ++n) {\n      for (int c = 0; c < blob_bottom.channels(); ++c) {\n        for (int h = 0; h < blob_bottom.height(); ++h) {\n          int h_start = h - (size - 1) / 2;\n          int h_end = min(h_start + size, blob_bottom.height());\n          h_start = max(h_start, 0);\n          for (int w = 0; w < blob_bottom.width(); ++w) {\n            Dtype scale = 1.;\n            int w_start = w - (size - 1) / 2;\n            int w_end = min(w_start + size, blob_bottom.width());\n            w_start = max(w_start, 0);\n            for (int nh = h_start; nh < h_end; ++nh) {\n              for (int nw = w_start; nw < w_end; ++nw) {\n                Dtype value = blob_bottom.data_at(n, c, nh, nw);\n                scale += value * value * alpha / (size * size);\n              }\n            }\n            *(top_data + blob_top->offset(n, c, h, w)) =\n              blob_bottom.data_at(n, c, h, w) / pow(scale, beta);\n          }\n        }\n      }\n    }\n    break;\n  default:\n    
LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\nTYPED_TEST_CASE(LRNLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(LRNLayerTest, TestSetupAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  LRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 7);\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(LRNLayerTest, TestForwardAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  LRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<Dtype> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  }\n}\n\nTYPED_TEST(LRNLayerTest, TestForwardAcrossChannelsLargeRegion) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_local_size(15);\n  LRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<Dtype> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  }\n}\n\nTYPED_TEST(LRNLayerTest, TestGradientAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  LRNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  
layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  // for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n  //   std::cout << \"CPU diff \" << this->blob_bottom_->cpu_diff()[i]\n  //       << std::endl;\n  // }\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(LRNLayerTest, TestGradientAcrossChannelsLargeRegion) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_local_size(15);\n  LRNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  // for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n  //   std::cout << \"CPU diff \" << this->blob_bottom_->cpu_diff()[i]\n  //       << std::endl;\n  // }\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(LRNLayerTest, TestSetupWithinChannel) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_norm_region(\n      LRNParameter_NormRegion_WITHIN_CHANNEL);\n  layer_param.mutable_lrn_param()->set_local_size(3);\n  LRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, 
this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 7);\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(LRNLayerTest, TestForwardWithinChannel) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_norm_region(\n      LRNParameter_NormRegion_WITHIN_CHANNEL);\n  layer_param.mutable_lrn_param()->set_local_size(3);\n  LRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<Dtype> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  }\n}\n\nTYPED_TEST(LRNLayerTest, TestGradientWithinChannel) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_norm_region(\n      LRNParameter_NormRegion_WITHIN_CHANNEL);\n  layer_param.mutable_lrn_param()->set_local_size(3);\n  LRNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNLRNLayerTest : public GPUDeviceTest<Dtype> {\n protected:\n  CuDNNLRNLayerTest()\n      : epsilon_(Dtype(1e-5)),\n        blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    
blob_bottom_->Reshape(2, 7, 3, 3);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~CuDNNLRNLayerTest() { delete blob_bottom_; delete blob_top_; }\n  void ReferenceLRNForward(const Blob<Dtype>& blob_bottom,\n      const LayerParameter& layer_param, Blob<Dtype>* blob_top);\n\n  Dtype epsilon_;\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\ntemplate <typename TypeParam>\nvoid CuDNNLRNLayerTest<TypeParam>::ReferenceLRNForward(\n    const Blob<TypeParam>& blob_bottom, const LayerParameter& layer_param,\n    Blob<TypeParam>* blob_top) {\n  typedef TypeParam Dtype;\n  blob_top->Reshape(blob_bottom.num(), blob_bottom.channels(),\n      blob_bottom.height(), blob_bottom.width());\n  Dtype* top_data = blob_top->mutable_cpu_data();\n  LRNParameter lrn_param = layer_param.lrn_param();\n  Dtype alpha = lrn_param.alpha();\n  Dtype beta = lrn_param.beta();\n  int size = lrn_param.local_size();\n  switch (lrn_param.norm_region()) {\n  case LRNParameter_NormRegion_ACROSS_CHANNELS:\n    for (int n = 0; n < blob_bottom.num(); ++n) {\n      for (int c = 0; c < blob_bottom.channels(); ++c) {\n        for (int h = 0; h < blob_bottom.height(); ++h) {\n          for (int w = 0; w < blob_bottom.width(); ++w) {\n            int c_start = c - (size - 1) / 2;\n            int c_end = min(c_start + size, blob_bottom.channels());\n            c_start = max(c_start, 0);\n            Dtype scale = 1.;\n            for (int i = c_start; i < c_end; ++i) {\n              Dtype value = blob_bottom.data_at(n, i, h, w);\n              scale += value * value * alpha / size;\n            }\n            *(top_data + blob_top->offset(n, c, h, w)) =\n              blob_bottom.data_at(n, c, h, w) / 
pow(scale, beta);\n          }\n        }\n      }\n    }\n    break;\n  case LRNParameter_NormRegion_WITHIN_CHANNEL:\n    for (int n = 0; n < blob_bottom.num(); ++n) {\n      for (int c = 0; c < blob_bottom.channels(); ++c) {\n        for (int h = 0; h < blob_bottom.height(); ++h) {\n          int h_start = h - (size - 1) / 2;\n          int h_end = min(h_start + size, blob_bottom.height());\n          h_start = max(h_start, 0);\n          for (int w = 0; w < blob_bottom.width(); ++w) {\n            Dtype scale = 1.;\n            int w_start = w - (size - 1) / 2;\n            int w_end = min(w_start + size, blob_bottom.width());\n            w_start = max(w_start, 0);\n            for (int nh = h_start; nh < h_end; ++nh) {\n              for (int nw = w_start; nw < w_end; ++nw) {\n                Dtype value = blob_bottom.data_at(n, c, nh, nw);\n                scale += value * value * alpha / (size * size);\n              }\n            }\n            *(top_data + blob_top->offset(n, c, h, w)) =\n              blob_bottom.data_at(n, c, h, w) / pow(scale, beta);\n          }\n        }\n      }\n    }\n    break;\n  default:\n    LOG(FATAL) << \"Unknown normalization region.\";\n  }\n}\n\nTYPED_TEST_CASE(CuDNNLRNLayerTest, TestDtypes);\n\nTYPED_TEST(CuDNNLRNLayerTest, TestForwardAcrossChannelsCuDNN) {\n  // typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CuDNNLRNLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<TypeParam> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  }\n}\n\nTYPED_TEST(CuDNNLRNLayerTest, TestForwardAcrossChannelsLargeRegionCuDNN) {\n  typedef TypeParam Dtype;\n  LayerParameter 
layer_param;\n  layer_param.mutable_lrn_param()->set_local_size(15);\n  CuDNNLRNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<Dtype> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  }\n}\n\nTYPED_TEST(CuDNNLRNLayerTest, TestGradientAcrossChannelsCuDNN) {\n  typedef TypeParam Dtype;\n  LayerParameter layer_param;\n  CuDNNLRNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNLRNLayerTest, TestForwardWithinChannel) {\n  typedef TypeParam Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_norm_region(\n      LRNParameter_NormRegion_WITHIN_CHANNEL);\n  layer_param.mutable_lrn_param()->set_local_size(3);\n  CuDNNLCNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Blob<Dtype> top_reference;\n  this->ReferenceLRNForward(*(this->blob_bottom_), layer_param,\n      &top_reference);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(this->blob_top_->cpu_data()[i], top_reference.cpu_data()[i],\n                this->epsilon_);\n  
}\n}\n\nTYPED_TEST(CuDNNLRNLayerTest, TestGradientWithinChannel) {\n  typedef TypeParam Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_norm_region(\n      LRNParameter_NormRegion_WITHIN_CHANNEL);\n  layer_param.mutable_lrn_param()->set_local_size(3);\n  CuDNNLCNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNLRNLayerTest, TestGradientAcrossChannelsLargeRegionCuDNN) {\n  typedef TypeParam Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_lrn_param()->set_local_size(15);\n  CuDNNLRNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_math_functions.cpp",
    "content": "#include <stdint.h>  // for uint32_t & uint64_t\n#include <time.h>\n#include <cmath>  // for std::fabs\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass MathFunctionsTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  MathFunctionsTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {\n  }\n\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    this->blob_bottom_->Reshape(11, 17, 19, 23);\n    this->blob_top_->Reshape(11, 17, 19, 23);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    filler.Fill(this->blob_top_);\n  }\n\n  virtual ~MathFunctionsTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n};\n\ntemplate <typename Dtype>\nclass CPUMathFunctionsTest\n  : public MathFunctionsTest<CPUDevice<Dtype> > {\n};\n\nTYPED_TEST_CASE(CPUMathFunctionsTest, TestDtypes);\n\nTYPED_TEST(CPUMathFunctionsTest, TestNothing) {\n  // The first test case of a test suite takes the longest time\n  //   due to the set up overhead.\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestAsum) {\n  int n = this->blob_bottom_->count();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  TypeParam std_asum = 0;\n  for (int i = 0; i < n; ++i) {\n    std_asum += std::fabs(x[i]);\n  }\n  TypeParam cpu_asum = caffe_cpu_asum<TypeParam>(n, x);\n  EXPECT_LT((cpu_asum - std_asum) / std_asum, 1e-2);\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestSign) {\n  int n = this->blob_bottom_->count();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  caffe_cpu_sign<TypeParam>(n, x, 
this->blob_bottom_->mutable_cpu_diff());\n  const TypeParam* signs = this->blob_bottom_->cpu_diff();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(signs[i], x[i] > 0 ? 1 : (x[i] < 0 ? -1 : 0));\n  }\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestSgnbit) {\n  int n = this->blob_bottom_->count();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  caffe_cpu_sgnbit<TypeParam>(n, x, this->blob_bottom_->mutable_cpu_diff());\n  const TypeParam* signbits = this->blob_bottom_->cpu_diff();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(signbits[i], x[i] < 0 ? 1 : 0);\n  }\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestFabs) {\n  int n = this->blob_bottom_->count();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  caffe_abs<TypeParam>(n, x, this->blob_bottom_->mutable_cpu_diff());\n  const TypeParam* abs_val = this->blob_bottom_->cpu_diff();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(abs_val[i], x[i] > 0 ? x[i] : -x[i]);\n  }\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestScale) {\n  int n = this->blob_bottom_->count();\n  TypeParam alpha = this->blob_bottom_->cpu_diff()[caffe_rng_rand() %\n                                                   this->blob_bottom_->count()];\n  caffe_cpu_scale<TypeParam>(n, alpha, this->blob_bottom_->cpu_data(),\n                             this->blob_bottom_->mutable_cpu_diff());\n  const TypeParam* scaled = this->blob_bottom_->cpu_diff();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(scaled[i], x[i] * alpha);\n  }\n}\n\nTYPED_TEST(CPUMathFunctionsTest, TestCopy) {\n  const int n = this->blob_bottom_->count();\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  TypeParam* top_data = this->blob_top_->mutable_cpu_data();\n  caffe_copy(n, bottom_data, top_data);\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(bottom_data[i], top_data[i]);\n  }\n}\n\n#ifndef CPU_ONLY\n\ntemplate <typename Dtype>\nclass GPUMathFunctionsTest : public 
MathFunctionsTest<GPUDevice<Dtype> > {\n};\n\nTYPED_TEST_CASE(GPUMathFunctionsTest, TestDtypes);\n\nTYPED_TEST(GPUMathFunctionsTest, TestAsum) {\n  int n = this->blob_bottom_->count();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  TypeParam std_asum = 0;\n  for (int i = 0; i < n; ++i) {\n    std_asum += std::fabs(x[i]);\n  }\n  TypeParam gpu_asum;\n  caffe_gpu_asum<TypeParam>(n, this->blob_bottom_->gpu_data(), &gpu_asum);\n  EXPECT_LT((gpu_asum - std_asum) / std_asum, 1e-2);\n}\n\nTYPED_TEST(GPUMathFunctionsTest, TestSign) {\n  int n = this->blob_bottom_->count();\n  caffe_gpu_sign<TypeParam>(n, this->blob_bottom_->gpu_data(),\n                            this->blob_bottom_->mutable_gpu_diff());\n  const TypeParam* signs = this->blob_bottom_->cpu_diff();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(signs[i], x[i] > 0 ? 1 : (x[i] < 0 ? -1 : 0));\n  }\n}\n\nTYPED_TEST(GPUMathFunctionsTest, TestSgnbit) {\n  int n = this->blob_bottom_->count();\n  caffe_gpu_sgnbit<TypeParam>(n, this->blob_bottom_->gpu_data(),\n                            this->blob_bottom_->mutable_gpu_diff());\n  const TypeParam* signbits = this->blob_bottom_->cpu_diff();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(signbits[i], x[i] < 0 ? 1 : 0);\n  }\n}\n\nTYPED_TEST(GPUMathFunctionsTest, TestFabs) {\n  int n = this->blob_bottom_->count();\n  caffe_gpu_abs<TypeParam>(n, this->blob_bottom_->gpu_data(),\n                            this->blob_bottom_->mutable_gpu_diff());\n  const TypeParam* abs_val = this->blob_bottom_->cpu_diff();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(abs_val[i], x[i] > 0 ? 
x[i] : -x[i]);\n  }\n}\n\nTYPED_TEST(GPUMathFunctionsTest, TestScale) {\n  int n = this->blob_bottom_->count();\n  TypeParam alpha = this->blob_bottom_->cpu_diff()[caffe_rng_rand() %\n                                                   this->blob_bottom_->count()];\n  caffe_gpu_scale<TypeParam>(n, alpha, this->blob_bottom_->gpu_data(),\n                             this->blob_bottom_->mutable_gpu_diff());\n  const TypeParam* scaled = this->blob_bottom_->cpu_diff();\n  const TypeParam* x = this->blob_bottom_->cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(scaled[i], x[i] * alpha);\n  }\n}\n\nTYPED_TEST(GPUMathFunctionsTest, TestCopy) {\n  const int n = this->blob_bottom_->count();\n  const TypeParam* bottom_data = this->blob_bottom_->gpu_data();\n  TypeParam* top_data = this->blob_top_->mutable_gpu_data();\n  caffe_copy(n, bottom_data, top_data);\n  bottom_data = this->blob_bottom_->cpu_data();\n  top_data = this->blob_top_->mutable_cpu_data();\n  for (int i = 0; i < n; ++i) {\n    EXPECT_EQ(bottom_data[i], top_data[i]);\n  }\n}\n\n#endif\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_maxpool_dropout_layers.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/dropout_layer.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass MaxPoolingDropoutTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  MaxPoolingDropoutTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1703);\n    blob_bottom_->Reshape(2, 3, 6, 5);\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_value(1.);\n    ConstantFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~MaxPoolingDropoutTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(MaxPoolingDropoutTest, TestDtypesAndDevices);\n\nTYPED_TEST(MaxPoolingDropoutTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  PoolingLayer<Dtype> max_layer(layer_param);\n  max_layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  DropoutLayer<Dtype> dropout_layer(layer_param);\n  dropout_layer.SetUp(this->blob_top_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  
EXPECT_EQ(this->blob_top_->width(), 2);\n}\n\n\nTYPED_TEST(MaxPoolingDropoutTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  Dtype sum = 0.;\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    sum += top_data[i];\n  }\n  EXPECT_EQ(sum, this->blob_top_->count());\n  // Dropout in-place\n  DropoutLayer<Dtype> dropout_layer(layer_param);\n  dropout_layer.SetUp(this->blob_top_vec_, this->blob_top_vec_);\n  dropout_layer.Forward(this->blob_top_vec_, this->blob_top_vec_);\n  sum = 0.;\n  Dtype scale = 1. / (1. - layer_param.dropout_param().dropout_ratio());\n  top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    sum += top_data[i];\n  }\n  EXPECT_GE(sum, 0);\n  EXPECT_LE(sum, this->blob_top_->count()*scale);\n}\n\nTYPED_TEST(MaxPoolingDropoutTest, TestBackward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.set_phase(TRAIN);\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = 1.;\n  }\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  const Dtype* bottom_diff = 
this->blob_bottom_->cpu_diff();\n  Dtype sum = 0.;\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    sum += bottom_diff[i];\n  }\n  EXPECT_EQ(sum, this->blob_top_->count());\n  // Dropout in-place\n  DropoutLayer<Dtype> dropout_layer(layer_param);\n  dropout_layer.SetUp(this->blob_top_vec_, this->blob_top_vec_);\n  dropout_layer.Forward(this->blob_top_vec_, this->blob_top_vec_);\n  dropout_layer.Backward(this->blob_top_vec_, propagate_down,\n                         this->blob_top_vec_);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n  Dtype sum_with_dropout = 0.;\n  bottom_diff = this->blob_bottom_->cpu_diff();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    sum_with_dropout += bottom_diff[i];\n  }\n  EXPECT_GE(sum_with_dropout, sum);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_memory_data_layer.cpp",
    "content": "#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#endif  // USE_OPENCV\n\n#include <string>\n#include <vector>\n\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/memory_data_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass MemoryDataLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  MemoryDataLayerTest()\n    : data_(new Blob<Dtype>()),\n      labels_(new Blob<Dtype>()),\n      data_blob_(new Blob<Dtype>()),\n      label_blob_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    batch_size_ = 8;\n    batches_ = 12;\n    channels_ = 4;\n    height_ = 7;\n    width_ = 11;\n    blob_top_vec_.push_back(data_blob_);\n    blob_top_vec_.push_back(label_blob_);\n    // pick random input data\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    data_->Reshape(batches_ * batch_size_, channels_, height_, width_);\n    labels_->Reshape(batches_ * batch_size_, 1, 1, 1);\n    filler.Fill(this->data_);\n    filler.Fill(this->labels_);\n  }\n\n  virtual ~MemoryDataLayerTest() {\n    delete data_blob_;\n    delete label_blob_;\n    delete data_;\n    delete labels_;\n  }\n  int batch_size_;\n  int batches_;\n  int channels_;\n  int height_;\n  int width_;\n  // we don't really need blobs for the input data, but it makes it\n  //  easier to call Filler\n  Blob<Dtype>* const data_;\n  Blob<Dtype>* const labels_;\n  // blobs for the top of MemoryDataLayer\n  Blob<Dtype>* const data_blob_;\n  Blob<Dtype>* const label_blob_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(MemoryDataLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(MemoryDataLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  LayerParameter layer_param;\n  MemoryDataParameter* md_param = layer_param.mutable_memory_data_param();\n  
md_param->set_batch_size(this->batch_size_);\n  md_param->set_channels(this->channels_);\n  md_param->set_height(this->height_);\n  md_param->set_width(this->width_);\n  shared_ptr<Layer<Dtype> > layer(\n      new MemoryDataLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->data_blob_->num(), this->batch_size_);\n  EXPECT_EQ(this->data_blob_->channels(), this->channels_);\n  EXPECT_EQ(this->data_blob_->height(), this->height_);\n  EXPECT_EQ(this->data_blob_->width(), this->width_);\n  EXPECT_EQ(this->label_blob_->num(), this->batch_size_);\n  EXPECT_EQ(this->label_blob_->channels(), 1);\n  EXPECT_EQ(this->label_blob_->height(), 1);\n  EXPECT_EQ(this->label_blob_->width(), 1);\n}\n\n// run through a few batches and check that the right data appears\nTYPED_TEST(MemoryDataLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  LayerParameter layer_param;\n  MemoryDataParameter* md_param = layer_param.mutable_memory_data_param();\n  md_param->set_batch_size(this->batch_size_);\n  md_param->set_channels(this->channels_);\n  md_param->set_height(this->height_);\n  md_param->set_width(this->width_);\n  shared_ptr<MemoryDataLayer<Dtype> > layer(\n      new MemoryDataLayer<Dtype>(layer_param));\n  layer->DataLayerSetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Reset(this->data_->mutable_cpu_data(),\n      this->labels_->mutable_cpu_data(), this->data_->num());\n  for (int i = 0; i < this->batches_ * 6; ++i) {\n    int batch_num = i % this->batches_;\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    for (int j = 0; j < this->data_blob_->count(); ++j) {\n      EXPECT_EQ(this->data_blob_->cpu_data()[j],\n          this->data_->cpu_data()[\n              this->data_->offset(1) * this->batch_size_ * batch_num + j]);\n    }\n    for (int j = 0; j < this->label_blob_->count(); ++j) {\n      EXPECT_EQ(this->label_blob_->cpu_data()[j],\n          
this->labels_->cpu_data()[this->batch_size_ * batch_num + j]);\n    }\n  }\n}\n\n#ifdef USE_OPENCV\nTYPED_TEST(MemoryDataLayerTest, AddDatumVectorDefaultTransform) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  LayerParameter param;\n  MemoryDataParameter* memory_data_param = param.mutable_memory_data_param();\n  memory_data_param->set_batch_size(this->batch_size_);\n  memory_data_param->set_channels(this->channels_);\n  memory_data_param->set_height(this->height_);\n  memory_data_param->set_width(this->width_);\n  MemoryDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  // We add batch_size*num_iter items, then for each iteration\n  // we forward batch_size elements\n  int num_iter = 5;\n  vector<Datum> datum_vector(this->batch_size_ * num_iter);\n  const size_t count = this->channels_ * this->height_ * this->width_;\n  size_t pixel_index = 0;\n  for (int i = 0; i < this->batch_size_ * num_iter; ++i) {\n    datum_vector[i].set_channels(this->channels_);\n    datum_vector[i].set_height(this->height_);\n    datum_vector[i].set_width(this->width_);\n    datum_vector[i].set_label(i);\n    vector<char> pixels(count);\n    for (int j = 0; j < count; ++j) {\n      pixels[j] = pixel_index++ % 256;\n    }\n    datum_vector[i].set_data(&(pixels[0]), count);\n  }\n  layer.AddDatumVector(datum_vector);\n\n  int data_index;\n  // Go through the data 5 times\n  for (int iter = 0; iter < num_iter; ++iter) {\n    int offset = this->batch_size_ * iter;\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* data = this->data_blob_->cpu_data();\n    size_t index = 0;\n    for (int i = 0; i < this->batch_size_; ++i) {\n      const string& data_string = datum_vector[offset + i].data();\n      EXPECT_EQ(offset + i, this->label_blob_->cpu_data()[i]);\n      for (int c = 0; c < this->channels_; ++c) {\n        for (int h = 0; h < this->height_; ++h) {\n          for (int w = 0; w < this->width_; ++w) {\n          
  data_index = (c * this->height_ + h) * this->width_ + w;\n            EXPECT_EQ(static_cast<Dtype>(\n                static_cast<uint8_t>(data_string[data_index])),\n                      data[index++]);\n          }\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(MemoryDataLayerTest, AddMatVectorDefaultTransform) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  MemoryDataParameter* memory_data_param = param.mutable_memory_data_param();\n  memory_data_param->set_batch_size(this->batch_size_);\n  memory_data_param->set_channels(this->channels_);\n  memory_data_param->set_height(this->height_);\n  memory_data_param->set_width(this->width_);\n  MemoryDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  // We add batch_size*num_iter items, then for each iteration\n  // we forward batch_size elements\n  int num_iter = 5;\n  vector<cv::Mat> mat_vector(this->batch_size_ * num_iter);\n  vector<int> label_vector(this->batch_size_ * num_iter);\n  for (int i = 0; i < this->batch_size_*num_iter; ++i) {\n    mat_vector[i] = cv::Mat(this->height_, this->width_, CV_8UC4);\n    label_vector[i] = i;\n    cv::randu(mat_vector[i], cv::Scalar::all(0), cv::Scalar::all(255));\n  }\n  layer.AddMatVector(mat_vector, label_vector);\n\n  int data_index;\n  const size_t count = this->channels_ * this->height_ * this->width_;\n  for (int iter = 0; iter < num_iter; ++iter) {\n    int offset = this->batch_size_ * iter;\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* data = this->data_blob_->cpu_data();\n    for (int i = 0; i < this->batch_size_; ++i) {\n      EXPECT_EQ(offset + i, this->label_blob_->cpu_data()[i]);\n      for (int h = 0; h < this->height_; ++h) {\n        const unsigned char* ptr_mat = mat_vector[offset + i].ptr<uchar>(h);\n        int index = 0;\n        for (int w = 0; w < this->width_; ++w) {\n          for (int c = 0; c < this->channels_; ++c) {\n            data_index = 
(i*count) + (c * this->height_ + h) * this->width_ + w;\n            Dtype pixel = static_cast<Dtype>(ptr_mat[index++]);\n            EXPECT_EQ(static_cast<int>(pixel),\n                      data[data_index]);\n          }\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(MemoryDataLayerTest, TestSetBatchSize) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter param;\n  MemoryDataParameter* memory_data_param = param.mutable_memory_data_param();\n  memory_data_param->set_batch_size(this->batch_size_);\n  memory_data_param->set_channels(this->channels_);\n  memory_data_param->set_height(this->height_);\n  memory_data_param->set_width(this->width_);\n  MemoryDataLayer<Dtype> layer(param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  // first add data as usual\n  int num_iter = 5;\n  vector<cv::Mat> mat_vector(this->batch_size_ * num_iter);\n  vector<int> label_vector(this->batch_size_ * num_iter);\n  for (int i = 0; i < this->batch_size_*num_iter; ++i) {\n    mat_vector[i] = cv::Mat(this->height_, this->width_, CV_8UC4);\n    label_vector[i] = i;\n    cv::randu(mat_vector[i], cv::Scalar::all(0), cv::Scalar::all(255));\n  }\n  layer.AddMatVector(mat_vector, label_vector);\n  // then consume the data\n  int data_index;\n  const size_t count = this->channels_ * this->height_ * this->width_;\n  for (int iter = 0; iter < num_iter; ++iter) {\n    int offset = this->batch_size_ * iter;\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* data = this->data_blob_->cpu_data();\n    for (int i = 0; i < this->batch_size_; ++i) {\n      EXPECT_EQ(offset + i, this->label_blob_->cpu_data()[i]);\n      for (int h = 0; h < this->height_; ++h) {\n        const unsigned char* ptr_mat = mat_vector[offset + i].ptr<uchar>(h);\n        int index = 0;\n        for (int w = 0; w < this->width_; ++w) {\n          for (int c = 0; c < this->channels_; ++c) {\n            data_index = (i*count) + (c * this->height_ + h) * this->width_ 
+ w;\n            Dtype pixel = static_cast<Dtype>(ptr_mat[index++]);\n            EXPECT_EQ(static_cast<int>(pixel), data[data_index]);\n          }\n        }\n      }\n    }\n  }\n  // and then add new data with different batch_size\n  int new_batch_size = 16;\n  layer.set_batch_size(new_batch_size);\n  mat_vector.clear();\n  mat_vector.resize(new_batch_size * num_iter);\n  label_vector.clear();\n  label_vector.resize(new_batch_size * num_iter);\n  for (int i = 0; i < new_batch_size*num_iter; ++i) {\n    mat_vector[i] = cv::Mat(this->height_, this->width_, CV_8UC4);\n    label_vector[i] = i;\n    cv::randu(mat_vector[i], cv::Scalar::all(0), cv::Scalar::all(255));\n  }\n  layer.AddMatVector(mat_vector, label_vector);\n\n  // finally consume new data and check if everything is fine\n  for (int iter = 0; iter < num_iter; ++iter) {\n    int offset = new_batch_size * iter;\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    EXPECT_EQ(new_batch_size, this->blob_top_vec_[0]->num());\n    EXPECT_EQ(new_batch_size, this->blob_top_vec_[1]->num());\n    const Dtype* data = this->data_blob_->cpu_data();\n    for (int i = 0; i < new_batch_size; ++i) {\n      EXPECT_EQ(offset + i, this->label_blob_->cpu_data()[i]);\n      for (int h = 0; h < this->height_; ++h) {\n        const unsigned char* ptr_mat = mat_vector[offset + i].ptr<uchar>(h);\n        int index = 0;\n        for (int w = 0; w < this->width_; ++w) {\n          for (int c = 0; c < this->channels_; ++c) {\n            data_index = (i*count) + (c * this->height_ + h) * this->width_ + w;\n            Dtype pixel = static_cast<Dtype>(ptr_mat[index++]);\n            EXPECT_EQ(static_cast<int>(pixel), data[data_index]);\n          }\n        }\n      }\n    }\n  }\n}\n#endif  // USE_OPENCV\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_multinomial_logistic_loss_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/multinomial_logistic_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass MultinomialLogisticLossLayerTest : public CPUDeviceTest<Dtype> {\n protected:\n  MultinomialLogisticLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_label_(new Blob<Dtype>(10, 1, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    PositiveUnitballFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    for (int i = 0; i < blob_bottom_label_->count(); ++i) {\n      blob_bottom_label_->mutable_cpu_data()[i] = caffe_rng_rand() % 5;\n    }\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~MultinomialLogisticLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_top_loss_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(MultinomialLogisticLossLayerTest, TestDtypes);\n\n\nTYPED_TEST(MultinomialLogisticLossLayerTest, TestGradientCPU) {\n  LayerParameter layer_param;\n  MultinomialLogisticLossLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<TypeParam> checker(1e-2, 2*1e-2, 1701, 0, 0.05);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_mvn_layer.cpp",
    "content": "#include <vector>\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/mvn_layer.hpp\"\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass MVNLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  MVNLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~MVNLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(MVNLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(MVNLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  MVNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test mean\n  int num = this->blob_bottom_->num();\n  int channels = this->blob_bottom_->channels();\n  int height = this->blob_bottom_->height();\n  int width = this->blob_bottom_->width();\n\n  for (int i = 0; i < num; ++i) {\n    for (int j = 0; j < channels; ++j) {\n      Dtype sum = 0, var = 0;\n      for (int k = 0; k < height; ++k) {\n        for (int l = 0; l < width; ++l) {\n          Dtype data = this->blob_top_->data_at(i, j, k, l);\n          sum += data;\n          var += data * data;\n        }\n      }\n      sum /= height * width;\n      var /= height 
* width;\n\n      const Dtype kErrorBound = 0.001;\n      // expect zero mean\n      EXPECT_NEAR(0, sum, kErrorBound);\n      // expect unit variance\n      EXPECT_NEAR(1, var, kErrorBound);\n    }\n  }\n}\n\nTYPED_TEST(MVNLayerTest, TestForwardMeanOnly) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"mvn_param{normalize_variance: false}\", &layer_param));\n  MVNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test mean\n  int num = this->blob_bottom_->num();\n  int channels = this->blob_bottom_->channels();\n  int height = this->blob_bottom_->height();\n  int width = this->blob_bottom_->width();\n\n  for (int i = 0; i < num; ++i) {\n    for (int j = 0; j < channels; ++j) {\n      Dtype sum = 0, var = 0;\n      for (int k = 0; k < height; ++k) {\n        for (int l = 0; l < width; ++l) {\n          Dtype data = this->blob_top_->data_at(i, j, k, l);\n          sum += data;\n          var += data * data;\n        }\n      }\n      sum /= height * width;\n\n      const Dtype kErrorBound = 0.001;\n      // expect zero mean\n      EXPECT_NEAR(0, sum, kErrorBound);\n    }\n  }\n}\n\nTYPED_TEST(MVNLayerTest, TestForwardAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"mvn_param{across_channels: true}\", &layer_param));\n  MVNLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test mean\n  int num = this->blob_bottom_->num();\n  int channels = this->blob_bottom_->channels();\n  int height = this->blob_bottom_->height();\n  int width = this->blob_bottom_->width();\n\n  for (int i = 0; i < num; ++i) {\n    Dtype sum = 0, var = 0;\n    for (int j = 0; j < 
channels; ++j) {\n      for (int k = 0; k < height; ++k) {\n        for (int l = 0; l < width; ++l) {\n          Dtype data = this->blob_top_->data_at(i, j, k, l);\n          sum += data;\n          var += data * data;\n        }\n      }\n    }\n    sum /= height * width * channels;\n    var /= height * width * channels;\n\n    const Dtype kErrorBound = 0.001;\n    // expect zero mean\n    EXPECT_NEAR(0, sum, kErrorBound);\n    // expect unit variance\n    EXPECT_NEAR(1, var, kErrorBound);\n  }\n}\n\nTYPED_TEST(MVNLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  MVNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(MVNLayerTest, TestGradientMeanOnly) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"mvn_param{normalize_variance: false}\", &layer_param));\n  MVNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(MVNLayerTest, TestGradientAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"mvn_param{across_channels: true}\", &layer_param));\n  MVNLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_net.cpp",
    "content": "#include <string>\n#include <utility>\n#include <vector>\n\n#include \"google/protobuf/text_format.h\"\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/net.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass NetTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  NetTest() : seed_(1701) {}\n\n  virtual void InitNetFromProtoString(const string& proto) {\n    NetParameter param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(proto, &param));\n    net_.reset(new Net<Dtype>(param));\n  }\n\n  virtual void CopyNetBlobs(const bool copy_diff,\n      vector<shared_ptr<Blob<Dtype> > >* blobs_copy) {\n    CHECK(net_);\n    const vector<shared_ptr<Blob<Dtype> > >& net_blobs = net_->blobs();\n    blobs_copy->clear();\n    blobs_copy->resize(net_blobs.size());\n    const bool kReshape = true;\n    for (int i = 0; i < net_blobs.size(); ++i) {\n      (*blobs_copy)[i].reset(new Blob<Dtype>());\n      (*blobs_copy)[i]->CopyFrom(*net_blobs[i], copy_diff, kReshape);\n    }\n  }\n\n  virtual void CopyNetParams(const bool copy_diff,\n      vector<shared_ptr<Blob<Dtype> > >* params_copy) {\n    CHECK(net_);\n    const vector<shared_ptr<Blob<Dtype> > >& net_params = net_->params();\n    params_copy->clear();\n    params_copy->resize(net_params.size());\n    const bool kReshape = true;\n    for (int i = 0; i < net_params.size(); ++i) {\n      (*params_copy)[i].reset(new Blob<Dtype>());\n      (*params_copy)[i]->CopyFrom(*net_params[i], copy_diff, kReshape);\n    }\n  }\n\n  virtual void InitTinyNet(const bool force_backward = false,\n                           const bool accuracy_layer = false) {\n    string proto =\n        \"name: 'TinyTestNetwork' \"\n        \"layer { \"\n        \"  name: 
'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    shape { \"\n        \"      dim: 5 \"\n        \"      dim: 2 \"\n        \"      dim: 3 \"\n        \"      dim: 4 \"\n        \"    } \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    shape { \"\n        \"      dim: 5 \"\n        \"    } \"\n        \"    data_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data' \"\n        \"  top: 'label' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 1000 \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    bias_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 1 \"\n        \"    decay_mult: 1 \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 2 \"\n        \"    decay_mult: 0 \"\n        \"  } \"\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'SoftmaxWithLoss' \"\n        \"  bottom: 'innerproduct' \"\n        \"  bottom: 'label' \"\n        \"  top: 'top_loss' \"\n        \"} \";\n    if (accuracy_layer) {\n      proto +=\n          \"layer { \"\n          \"  name: 'loss' \"\n          \"  type: 'Accuracy' \"\n          \"  bottom: 'innerproduct' \"\n          \"  bottom: 'label' \"\n          \"  top: 'accuracy' \"\n          \"} \";\n    }\n    if (force_backward) {\n      proto += \"force_backward: true \";\n    }\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitTinyNetEuclidean(const bool 
force_backward = false) {\n    string proto =\n        \"name: 'TinyTestEuclidLossNetwork' \"\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 5 \"\n        \"    channels: 2 \"\n        \"    height: 3 \"\n        \"    width: 4 \"\n        \"    num: 5 \"\n        \"    channels: 1 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data' \"\n        \"  top: 'label' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 1 \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    bias_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 1 \"\n        \"    decay_mult: 1 \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 2 \"\n        \"    decay_mult: 0 \"\n        \"  } \"\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'EuclideanLoss' \"\n        \"  bottom: 'innerproduct' \"\n        \"  bottom: 'label' \"\n        \"} \";\n    if (force_backward) {\n      proto += \"force_backward: true \";\n    }\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitTrickyNet(Dtype* loss_weight = NULL) {\n    ostringstream loss_weight_stream;\n    if (loss_weight) {\n      loss_weight_stream << \"  loss_weight: \" << *loss_weight << \" \";\n    }\n    const string& proto =\n        \"name: 'TrickyTestNetwork' \"\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 
'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 5 \"\n        \"    channels: 2 \"\n        \"    height: 3 \"\n        \"    width: 4 \"\n        \"    num: 5 \"\n        \"    channels: 1 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data' \"\n        \"  top: 'label' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 1000 \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    bias_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 1 \"\n        \"    decay_mult: 1 \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 2 \"\n        \"    decay_mult: 0 \"\n        \"  } \"\n        \"  bottom: 'data' \"\n        \"  top: 'transformed_data' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 1 \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    bias_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 1 \"\n        \"    decay_mult: 1 \"\n        \"  } \"\n        \"  param { \"\n        \"    lr_mult: 2 \"\n        \"    decay_mult: 0 \"\n        \"  } \"\n        \"  bottom: 'label' \"\n        \"  top: 'transformed_label' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 
'SoftmaxWithLoss' \" +\n        loss_weight_stream.str() +\n        \"  bottom: 'transformed_data' \"\n        \"  bottom: 'transformed_label' \"\n        \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  // loss_weight is the loss weight for the 'EuclideanLoss' layer output.\n  // midnet_loss_weight is the loss weight for the first 'InnerProduct' layer\n  // output.  Should both default to 0.0 if unspecified (i.e., if NULL is\n  // passed to this function).\n  virtual void InitUnsharedWeightsNet(const Dtype* loss_weight = NULL,\n      const Dtype* midnet_loss_weight = NULL,\n      const bool force_backward = false, const bool bias_term = false,\n      const Dtype blobs_lr_w1 = 1, const Dtype blobs_lr_b1 = 2,\n      const Dtype blobs_lr_w2 = 1, const Dtype blobs_lr_b2 = 2) {\n    string bias_str = bias_term ? \"true \":\"false \";\n    ostringstream proto;\n    proto << \"name: 'UnsharedWeightsNetwork' \";\n    if (force_backward) {\n      proto << \"force_backward: true \";\n    }\n    proto <<\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 5 \"\n        \"    channels: 2 \"\n        \"    height: 3 \"\n        \"    width: 4 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct1' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: \" << bias_str <<\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    name: 'unsharedweights1' \"\n        \"    lr_mult: \" << blobs_lr_w1 <<\n        \"  } \";\n    if (bias_term) {\n      proto << \"  param { lr_mult: \" << blobs_lr_b1 << \" } \";\n    
}\n    proto <<\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct1' \";\n    if (midnet_loss_weight) {\n      proto << \"  loss_weight: \" << *midnet_loss_weight << \" \";\n    }\n    proto <<\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct2' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: \" << bias_str <<\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  param { \"\n        \"    name: 'unsharedweights2' \"\n        \"    lr_mult: \" << blobs_lr_w2 <<\n        \"  } \";\n    if (bias_term) {\n      proto << \"  param { lr_mult: \" << blobs_lr_b2 << \" } \";\n    }\n    proto <<\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'EuclideanLoss' \";\n    if (loss_weight) {\n      proto << \"  loss_weight: \" << *loss_weight << \" \";\n    }\n    proto <<\n        \"  bottom: 'innerproduct1' \"\n        \"  bottom: 'innerproduct2' \"\n        \"} \";\n    InitNetFromProtoString(proto.str());\n  }\n\n  virtual void InitSharedWeightsNet() {\n    const string& proto =\n        \"name: 'SharedWeightsNetwork' \"\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 5 \"\n        \"    channels: 2 \"\n        \"    height: 3 \"\n        \"    width: 4 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct1' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n 
       \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 'sharedweights' } \"\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct1' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct2' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 'sharedweights' } \"\n        \"  bottom: 'data' \"\n        \"  top: 'innerproduct2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'EuclideanLoss' \"\n        \"  bottom: 'innerproduct1' \"\n        \"  bottom: 'innerproduct2' \"\n        \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitDiffDataUnsharedWeightsNet() {\n    const string& proto =\n        \"name: 'DiffDataUnsharedWeightsNetwork' \"\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 10 \"\n        \"    channels: 10 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    num: 10 \"\n        \"    channels: 10 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data1' \"\n        \"  top: 'data2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct1' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0.5 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 
'unsharedweights1' } \"\n        \"  bottom: 'data1' \"\n        \"  top: 'innerproduct1' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct2' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0.5 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 'unsharedweights2' } \"\n        \"  bottom: 'innerproduct1' \"\n        \"  top: 'innerproduct2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'EuclideanLoss' \"\n        \"  bottom: 'data2' \"\n        \"  bottom: 'innerproduct2' \"\n        \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitDiffDataSharedWeightsNet() {\n    const string& proto =\n        \"name: 'DiffDataSharedWeightsNetwork' \"\n        \"layer { \"\n        \"  name: 'data' \"\n        \"  type: 'DummyData' \"\n        \"  dummy_data_param { \"\n        \"    num: 10 \"\n        \"    channels: 10 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    num: 10 \"\n        \"    channels: 10 \"\n        \"    height: 1 \"\n        \"    width: 1 \"\n        \"    data_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 10 \"\n        \"    } \"\n        \"  } \"\n        \"  top: 'data1' \"\n        \"  top: 'data2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'innerproduct1' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0.5 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 'sharedweights' } \"\n        \"  bottom: 'data1' \"\n        \"  top: 'innerproduct1' \"\n        \"} \"\n        \"layer { \"\n    
    \"  name: 'innerproduct2' \"\n        \"  type: 'InnerProduct' \"\n        \"  inner_product_param { \"\n        \"    num_output: 10 \"\n        \"    bias_term: false \"\n        \"    weight_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0.5 \"\n        \"    } \"\n        \"  } \"\n        \"  param { name: 'sharedweights' } \"\n        \"  bottom: 'innerproduct1' \"\n        \"  top: 'innerproduct2' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'loss' \"\n        \"  type: 'EuclideanLoss' \"\n        \"  bottom: 'data2' \"\n        \"  bottom: 'innerproduct2' \"\n        \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitReshapableNet() {\n    const string& proto =\n        \"name: 'ReshapableNetwork' \"\n        \"input: 'data' \"\n        \"input_dim: 1 \"\n        \"input_dim: 3 \"\n        \"input_dim: 100 \"\n        \"input_dim: 100 \"\n        \"layer { \"\n        \"  name: 'conv1' \"\n        \"  type: 'Convolution' \"\n        \"  bottom: 'data' \"\n        \"  top: 'conv1' \"\n        \"  convolution_param { \"\n        \"    num_output: 5 \"\n        \"    kernel_size: 3 \"\n        \"    stride: 2 \"\n        \"    weight_filler { \"\n        \"      type: 'gaussian' \"\n        \"      std: 0.01 \"\n        \"    } \"\n        \"    bias_filler { \"\n        \"      type: 'constant' \"\n        \"      value: 0.2 \"\n        \"    } \"\n        \"  } \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'relu1' \"\n        \"  type: 'ReLU' \"\n        \"  bottom: 'conv1' \"\n        \"  top: 'conv1' \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'pool1' \"\n        \"  type: 'Pooling' \"\n        \"  bottom: 'conv1' \"\n        \"  top: 'pool1' \"\n        \"  pooling_param { \"\n        \"    pool: MAX \"\n        \"    kernel_size: 2 \"\n        \"    stride: 2 \"\n        \"  } \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'norm1' \"\n        \"  
type: 'LRN' \"\n        \"  bottom: 'pool1' \"\n        \"  top: 'norm1' \"\n        \"  lrn_param { \"\n        \"    local_size: 3 \"\n        \"  } \"\n        \"} \"\n        \"layer { \"\n        \"  name: 'softmax' \"\n        \"  type: 'Softmax' \"\n        \"  bottom: 'norm1' \"\n        \"  top: 'softmax' \"\n        \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  virtual void InitSkipPropNet(bool test_skip_true) {\n    string proto =\n      \"name: 'SkipPropTestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'DummyData' \"\n      \"  dummy_data_param { \"\n      \"    shape { \"\n      \"      dim: 5 \"\n      \"      dim: 2 \"\n      \"      dim: 3 \"\n      \"      dim: 4 \"\n      \"    } \"\n      \"    data_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    shape { \"\n      \"      dim: 5 \"\n      \"    } \"\n      \"    data_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'silence' \"\n      \"  bottom: 'label' \"\n      \"  type: 'Silence' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerproduct' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'ip_fake_labels' \"\n      \"  type: 'InnerProduct' \"\n  
    \"  inner_product_param { \"\n      \"    num_output: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'fake_labels' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'argmax' \"\n      \"  bottom: 'fake_labels' \"\n      \"  top: 'label_argmax' \"\n      \"  type: 'ArgMax' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  bottom: 'innerproduct' \"\n      \"  bottom: 'label_argmax' \";\n    if (test_skip_true)\n      proto += \"  propagate_down: true \"\n               \"  propagate_down: false \";\n    else\n      proto += \"  propagate_down: true \"\n               \"  propagate_down: true \";\n    proto +=\n      \"  top: 'cross_entropy_loss' \"\n      \"  type: 'SigmoidCrossEntropyLoss' \"\n      \"  loss_weight: 0.1 \"\n      \"} \";\n    InitNetFromProtoString(proto);\n  }\n\n  int seed_;\n  shared_ptr<Net<Dtype> > net_;\n};\n\nTYPED_TEST_CASE(NetTest, TestDtypesAndDevices);\n\nTYPED_TEST(NetTest, TestHasBlob) {\n  this->InitTinyNet();\n  EXPECT_TRUE(this->net_->has_blob(\"data\"));\n  EXPECT_TRUE(this->net_->has_blob(\"label\"));\n  EXPECT_TRUE(this->net_->has_blob(\"innerproduct\"));\n  EXPECT_FALSE(this->net_->has_blob(\"loss\"));\n  EXPECT_TRUE(this->net_->has_blob(\"top_loss\"));\n}\n\nTYPED_TEST(NetTest, TestGetBlob) {\n  this->InitTinyNet();\n  EXPECT_EQ(this->net_->blob_by_name(\"data\"), this->net_->blobs()[0]);\n  EXPECT_EQ(this->net_->blob_by_name(\"label\"), this->net_->blobs()[1]);\n  EXPECT_EQ(this->net_->blob_by_name(\"innerproduct\"), this->net_->blobs()[2]);\n  EXPECT_FALSE(this->net_->blob_by_name(\"loss\"));\n  EXPECT_EQ(this->net_->blob_by_name(\"top_loss\"), this->net_->blobs()[3]);\n}\n\nTYPED_TEST(NetTest, TestHasLayer) {\n  this->InitTinyNet();\n  
EXPECT_TRUE(this->net_->has_layer(\"data\"));\n  EXPECT_TRUE(this->net_->has_layer(\"innerproduct\"));\n  EXPECT_TRUE(this->net_->has_layer(\"loss\"));\n  EXPECT_FALSE(this->net_->has_layer(\"label\"));\n}\n\nTYPED_TEST(NetTest, TestGetLayerByName) {\n  this->InitTinyNet();\n  EXPECT_EQ(this->net_->layer_by_name(\"data\"), this->net_->layers()[0]);\n  EXPECT_EQ(this->net_->layer_by_name(\"innerproduct\"), this->net_->layers()[1]);\n  EXPECT_EQ(this->net_->layer_by_name(\"loss\"), this->net_->layers()[2]);\n  EXPECT_FALSE(this->net_->layer_by_name(\"label\"));\n}\n\nTYPED_TEST(NetTest, TestBottomNeedBackward) {\n  this->InitTinyNet();\n  const vector<vector<bool> >& bottom_need_backward =\n      this->net_->bottom_need_backward();\n  EXPECT_EQ(3, bottom_need_backward.size());\n  EXPECT_EQ(0, bottom_need_backward[0].size());\n  EXPECT_EQ(1, bottom_need_backward[1].size());\n  EXPECT_EQ(false, bottom_need_backward[1][0]);\n  EXPECT_EQ(2, bottom_need_backward[2].size());\n  EXPECT_EQ(true, bottom_need_backward[2][0]);\n  EXPECT_EQ(false, bottom_need_backward[2][1]);\n}\n\nTYPED_TEST(NetTest, TestBottomNeedBackwardForce) {\n  const bool force_backward = true;\n  this->InitTinyNet(force_backward);\n  const vector<vector<bool> >& bottom_need_backward =\n      this->net_->bottom_need_backward();\n  EXPECT_EQ(3, bottom_need_backward.size());\n  EXPECT_EQ(0, bottom_need_backward[0].size());\n  EXPECT_EQ(1, bottom_need_backward[1].size());\n  EXPECT_EQ(true, bottom_need_backward[1][0]);\n  EXPECT_EQ(2, bottom_need_backward[2].size());\n  EXPECT_EQ(true, bottom_need_backward[2][0]);\n  EXPECT_EQ(false, bottom_need_backward[2][1]);\n}\n\nTYPED_TEST(NetTest, TestBottomNeedBackwardEuclideanForce) {\n  const bool force_backward = true;\n  this->InitTinyNetEuclidean(force_backward);\n  const vector<vector<bool> >& bottom_need_backward =\n      this->net_->bottom_need_backward();\n  EXPECT_EQ(3, bottom_need_backward.size());\n  EXPECT_EQ(0, bottom_need_backward[0].size());\n  
EXPECT_EQ(1, bottom_need_backward[1].size());\n  EXPECT_EQ(true, bottom_need_backward[1][0]);\n  EXPECT_EQ(2, bottom_need_backward[2].size());\n  EXPECT_EQ(true, bottom_need_backward[2][0]);\n  EXPECT_EQ(true, bottom_need_backward[2][1]);\n}\n\nTYPED_TEST(NetTest, TestBottomNeedBackwardTricky) {\n  this->InitTrickyNet();\n  const vector<vector<bool> >& bottom_need_backward =\n      this->net_->bottom_need_backward();\n  EXPECT_EQ(4, bottom_need_backward.size());\n  EXPECT_EQ(0, bottom_need_backward[0].size());\n  EXPECT_EQ(1, bottom_need_backward[1].size());\n  EXPECT_EQ(false, bottom_need_backward[1][0]);\n  EXPECT_EQ(1, bottom_need_backward[2].size());\n  EXPECT_EQ(false, bottom_need_backward[2][0]);\n  EXPECT_EQ(2, bottom_need_backward[3].size());\n  EXPECT_EQ(true, bottom_need_backward[3][0]);\n  // The label input to the SoftmaxLossLayer should say it \"needs backward\"\n  // since it has weights under it, even though we expect this to cause a crash\n  // at training/test time.\n  EXPECT_EQ(true, bottom_need_backward[3][1]);\n}\n\nTYPED_TEST(NetTest, TestLossWeight) {\n  typedef typename TypeParam::Dtype Dtype;\n  // First, compute the loss and gradients with no loss_weight specified.\n  // In this case, the loss weight for the 'EuclideanLoss' layer should default\n  // to 1.\n  vector<Blob<Dtype>*> bottom;\n  Caffe::set_random_seed(this->seed_);\n  const bool kForceBackward = true;\n  this->InitUnsharedWeightsNet(NULL, NULL, kForceBackward);\n  const Dtype loss = this->net_->ForwardBackward(bottom);\n  const bool kCopyDiff = true;\n  vector<shared_ptr<Blob<Dtype> > > blob_grads;\n  this->CopyNetBlobs(kCopyDiff, &blob_grads);\n  vector<shared_ptr<Blob<Dtype> > > param_grads;\n  this->CopyNetParams(kCopyDiff, &param_grads);\n  // Check that the loss is non-trivial, otherwise the test doesn't prove much.\n  const Dtype kMinLossAbsValue = 1e-2;\n  ASSERT_GE(fabs(loss), kMinLossAbsValue);\n  const Dtype kErrorMargin = 1e-4;\n  const int kNumLossWeights = 6;\n  
Dtype kLossWeights[kNumLossWeights] = {2, 0, 1, -1, -2.5, 3.7};\n  for (int i = 0; i < kNumLossWeights; ++i) {\n    Caffe::set_random_seed(this->seed_);\n    this->InitUnsharedWeightsNet(&kLossWeights[i], NULL, kForceBackward);\n    const Dtype weighted_loss = this->net_->ForwardBackward(bottom);\n    const Dtype error_margin = kErrorMargin * fabs(kLossWeights[i]);\n    EXPECT_NEAR(loss * kLossWeights[i], weighted_loss, error_margin)\n        << \"loss weight = \" << kLossWeights[i];\n    const vector<shared_ptr<Blob<Dtype> > >& weighted_blobs =\n        this->net_->blobs();\n    ASSERT_EQ(blob_grads.size(), weighted_blobs.size());\n    for (int j = 0; j < blob_grads.size(); ++j) {\n      ASSERT_EQ(blob_grads[j]->count(), weighted_blobs[j]->count());\n      for (int k = 0; k < blob_grads[j]->count(); ++k) {\n        EXPECT_NEAR(blob_grads[j]->cpu_diff()[k] * kLossWeights[i],\n                    weighted_blobs[j]->cpu_diff()[k], error_margin);\n      }\n    }\n    const vector<shared_ptr<Blob<Dtype> > >& weighted_params =\n        this->net_->params();\n    ASSERT_EQ(param_grads.size(), weighted_params.size());\n    for (int j = 0; j < param_grads.size(); ++j) {\n      ASSERT_EQ(param_grads[j]->count(), weighted_params[j]->count());\n      for (int k = 0; k < param_grads[j]->count(); ++k) {\n        EXPECT_NEAR(param_grads[j]->cpu_diff()[k] * kLossWeights[i],\n                    weighted_params[j]->cpu_diff()[k], error_margin);\n      }\n    }\n  }\n}\n\nTYPED_TEST(NetTest, TestLossWeightMidNet) {\n  typedef typename TypeParam::Dtype Dtype;\n  vector<Blob<Dtype>*> bottom;\n  Caffe::set_random_seed(this->seed_);\n  const bool kForceBackward = true;\n  Dtype loss_weight = 0;\n  Dtype midnet_loss_weight = 1;\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss = this->net_->ForwardBackward(bottom);\n  const bool kCopyDiff = true;\n  const bool kReshape = true;\n  Blob<Dtype> 
data_grad;\n  data_grad.CopyFrom(*this->net_->blob_by_name(\"data\"), kCopyDiff, kReshape);\n  // Check that the loss is non-trivial, otherwise the test doesn't prove much.\n  const Dtype kMinLossAbsValue = 1e-2;\n  ASSERT_GE(fabs(loss), kMinLossAbsValue);\n  const Dtype kErrorMargin = 1e-4;\n  const int kNumLossWeights = 6;\n  Dtype kLossWeights[kNumLossWeights] = {2, 0, 1, -1, -2.5, 3.7};\n  for (int i = 0; i < kNumLossWeights; ++i) {\n    Caffe::set_random_seed(this->seed_);\n    this->InitUnsharedWeightsNet(&loss_weight, &kLossWeights[i],\n                                 kForceBackward);\n    const Dtype weighted_loss = this->net_->ForwardBackward(bottom);\n    const Dtype error_margin = kErrorMargin * fabs(kLossWeights[i]);\n    EXPECT_NEAR(loss * kLossWeights[i], weighted_loss, error_margin)\n        << \"loss weight = \" << kLossWeights[i];\n    const shared_ptr<Blob<Dtype> >& weighted_blob =\n        this->net_->blob_by_name(\"data\");\n    ASSERT_EQ(data_grad.count(), weighted_blob->count());\n    for (int j = 0; j < data_grad.count(); ++j) {\n      EXPECT_NEAR(data_grad.cpu_diff()[j] * kLossWeights[i],\n                  weighted_blob->cpu_diff()[j], error_margin);\n    }\n  }\n}\n\nTYPED_TEST(NetTest, TestComboLossWeight) {\n  typedef typename TypeParam::Dtype Dtype;\n  vector<Blob<Dtype>*> bottom;\n  Dtype loss_weight;\n  Dtype midnet_loss_weight;\n  const bool kForceBackward = true;\n  const Dtype kErrorMargin = 1e-4;\n\n  // Get the loss and gradients with 'EuclideanLoss' weight 1,\n  // 'InnerProduct' weight 1.\n  loss_weight = 1;\n  midnet_loss_weight = 1;\n  Caffe::set_random_seed(this->seed_);\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss = this->net_->ForwardBackward(bottom);\n  const bool kCopyDiff = true;\n  vector<shared_ptr<Blob<Dtype> > > blob_grads;\n  this->CopyNetBlobs(kCopyDiff, &blob_grads);\n  vector<shared_ptr<Blob<Dtype> > > param_grads;\n  
this->CopyNetParams(kCopyDiff, &param_grads);\n\n  loss_weight = 2;\n  midnet_loss_weight = 1;\n  Caffe::set_random_seed(this->seed_);\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss_main_2 = this->net_->ForwardBackward(bottom);\n  vector<shared_ptr<Blob<Dtype> > > blob_grads_loss_2;\n  this->CopyNetBlobs(kCopyDiff, &blob_grads_loss_2);\n  vector<shared_ptr<Blob<Dtype> > > param_grads_loss_2;\n  this->CopyNetParams(kCopyDiff, &param_grads_loss_2);\n\n  loss_weight = 3;\n  midnet_loss_weight = 1;\n  Caffe::set_random_seed(this->seed_);\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss_main_3 = this->net_->ForwardBackward(bottom);\n  const vector<shared_ptr<Blob<Dtype> > >& blob_grads_loss_3 =\n      this->net_->blobs();\n  ASSERT_EQ(blob_grads.size(), blob_grads_loss_3.size());\n  ASSERT_EQ(blob_grads_loss_2.size(), blob_grads_loss_3.size());\n  for (int j = 0; j < blob_grads.size(); ++j) {\n    const string& blob_name = this->net_->blob_names()[j];\n    bool grad_should_change = true;\n    if (blob_name == \"innerproduct1_innerproduct1_0_split_0\") {\n      grad_should_change = false;\n    }\n    ASSERT_EQ(blob_grads[j]->count(), blob_grads_loss_3[j]->count());\n    ASSERT_EQ(blob_grads_loss_2[j]->count(), blob_grads_loss_3[j]->count());\n    for (int k = 0; k < blob_grads[j]->count(); ++k) {\n      const Dtype grad_diff_2 = blob_grads_loss_2[j]->cpu_diff()[k] -\n                                    blob_grads[j]->cpu_diff()[k];\n      const Dtype grad_diff_3 = blob_grads_loss_3[j]->cpu_diff()[k] -\n                                    blob_grads[j]->cpu_diff()[k];\n      if (grad_should_change) {\n        // Test non-triviality.\n        const Dtype kMinGradDiffAbsValue = 1e-4;\n        EXPECT_GT(fabs(grad_diff_2), kMinGradDiffAbsValue) << blob_name;\n        EXPECT_NEAR(2 * grad_diff_2, 
grad_diff_3, kErrorMargin) << blob_name;\n      } else {\n        EXPECT_EQ(0, grad_diff_2) << blob_name;\n        EXPECT_EQ(0, grad_diff_3) << blob_name;\n      }\n    }\n  }\n\n  loss_weight = 1;\n  midnet_loss_weight = 2;\n  Caffe::set_random_seed(this->seed_);\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss_midnet_2 = this->net_->ForwardBackward(bottom);\n  this->CopyNetBlobs(kCopyDiff, &blob_grads_loss_2);\n  this->CopyNetParams(kCopyDiff, &param_grads_loss_2);\n\n  loss_weight = 1;\n  midnet_loss_weight = 3;\n  Caffe::set_random_seed(this->seed_);\n  this->InitUnsharedWeightsNet(&loss_weight, &midnet_loss_weight,\n                               kForceBackward);\n  const Dtype loss_midnet_3 = this->net_->ForwardBackward(bottom);\n  const vector<shared_ptr<Blob<Dtype> > >& blob_grads_midnet_loss_3 =\n      this->net_->blobs();\n  ASSERT_EQ(blob_grads.size(), blob_grads_midnet_loss_3.size());\n  ASSERT_EQ(blob_grads_loss_2.size(), blob_grads_midnet_loss_3.size());\n  const vector<string>& blob_names = this->net_->blob_names();\n  for (int j = 0; j < blob_grads.size(); ++j) {\n    const string& blob_name = blob_names[j];\n    bool grad_should_change = false;\n    if (blob_name == \"innerproduct1\" ||\n        blob_name == \"innerproduct1_innerproduct1_0_split_0\" ||\n        blob_name == \"data_data_0_split_0\" || blob_name == \"data\") {\n      grad_should_change = true;\n    }\n    ASSERT_EQ(blob_grads[j]->count(), blob_grads_midnet_loss_3[j]->count());\n    ASSERT_EQ(blob_grads[j]->count(), blob_grads_loss_2[j]->count());\n    for (int k = 0; k < blob_grads[j]->count(); ++k) {\n      const Dtype grad_diff_2 = blob_grads_loss_2[j]->cpu_diff()[k] -\n                                    blob_grads[j]->cpu_diff()[k];\n      const Dtype grad_diff_3 = blob_grads_midnet_loss_3[j]->cpu_diff()[k] -\n                                    blob_grads[j]->cpu_diff()[k];\n      if 
(grad_should_change) {\n        // Test non-triviality.\n        const Dtype kMinGradDiffAbsValue = 1e-4;\n        EXPECT_GT(fabs(grad_diff_2), kMinGradDiffAbsValue) << blob_name;\n        EXPECT_NEAR(2 * grad_diff_2, grad_diff_3, kErrorMargin) << blob_name;\n      } else {\n        EXPECT_EQ(0, grad_diff_2) << blob_name;\n        EXPECT_EQ(0, grad_diff_3) << blob_name;\n      }\n    }\n  }\n\n  const Dtype kMinLossDiffAbsValue = 1e-4;\n\n  Dtype loss_diff_2 = loss_main_2 - loss;\n  // Test non-triviality.\n  EXPECT_GT(fabs(loss_diff_2), kMinLossDiffAbsValue);\n  Dtype loss_diff_3 = loss_main_3 - loss;\n  EXPECT_NEAR(2 * loss_diff_2, loss_diff_3, kErrorMargin);\n\n  loss_diff_2 = loss_midnet_2 - loss;\n  // Test non-triviality.\n  EXPECT_GT(fabs(loss_diff_2), kMinLossDiffAbsValue);\n  loss_diff_3 = loss_midnet_3 - loss;\n  EXPECT_NEAR(2 * loss_diff_2, loss_diff_3, kErrorMargin);\n}\n\nTYPED_TEST(NetTest, TestBackwardWithAccuracyLayer) {\n  typedef typename TypeParam::Dtype Dtype;\n  const bool kForceBackward = false;\n  const bool kAccuracyLayer = true;\n  this->InitTinyNet(kForceBackward, kAccuracyLayer);\n  EXPECT_TRUE(this->net_->has_blob(\"accuracy\"));\n  vector<Blob<Dtype>*> bottom;\n  // Test that we can do Backward even though we have an 'Accuracy' layer.\n  this->net_->ForwardBackward(bottom);\n}\n\nTYPED_TEST(NetTest, TestUnsharedWeightsDataNet) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->InitUnsharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  Dtype loss;\n  this->net_->Forward(bottom, &loss);\n  EXPECT_GT(loss, 0);\n}\n\nTYPED_TEST(NetTest, TestSharedWeightsDataNet) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->InitSharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  Dtype loss;\n  this->net_->Forward(bottom, &loss);\n  EXPECT_FLOAT_EQ(loss, 0);\n}\n\nTYPED_TEST(NetTest, TestUnsharedWeightsDiffNet) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->InitUnsharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  Net<Dtype>* net 
= this->net_.get();\n  net->Forward(bottom);\n  net->Backward();\n  Layer<Dtype>* ip1_layer = net->layer_by_name(\"innerproduct1\").get();\n  Layer<Dtype>* ip2_layer = net->layer_by_name(\"innerproduct2\").get();\n  const int count = ip1_layer->blobs()[0]->count();\n  const Dtype* grad1 = ip1_layer->blobs()[0]->cpu_diff();\n  const Dtype* grad2 = ip2_layer->blobs()[0]->cpu_diff();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_GT(fabs(grad1[i]), 0);\n    EXPECT_FLOAT_EQ(-1 * grad1[i], grad2[i]);\n  }\n}\n\nTYPED_TEST(NetTest, TestSharedWeightsDiffNet) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->InitSharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  Net<Dtype>* net = this->net_.get();\n  Dtype loss;\n  net->Forward(bottom, &loss);\n  net->Backward();\n  EXPECT_FLOAT_EQ(loss, 0);\n  Layer<Dtype>* ip1_layer = net->layer_by_name(\"innerproduct1\").get();\n  Layer<Dtype>* ip2_layer = net->layer_by_name(\"innerproduct2\").get();\n  const int count = ip1_layer->blobs()[0]->count();\n  const Dtype* grad1 = ip1_layer->blobs()[0]->cpu_diff();\n  const Dtype* grad2 = ip2_layer->blobs()[0]->cpu_diff();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_FLOAT_EQ(0, grad1[i]);\n    EXPECT_FLOAT_EQ(0, grad2[i]);\n  }\n}\n\nTYPED_TEST(NetTest, TestSharedWeightsUpdate) {\n  typedef typename TypeParam::Dtype Dtype;\n  Caffe::set_random_seed(this->seed_);\n  this->InitDiffDataSharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  EXPECT_EQ(this->net_->layer_names()[1], \"innerproduct1\");\n  EXPECT_EQ(this->net_->layer_names()[2], \"innerproduct2\");\n  Blob<Dtype>* ip1_weights = this->net_->layers()[1]->blobs()[0].get();\n  Blob<Dtype>* ip2_weights = this->net_->layers()[2]->blobs()[0].get();\n  // Check that data and diff blobs of shared weights share the same memory\n  // locations.\n  EXPECT_EQ(ip1_weights->cpu_data(), ip2_weights->cpu_data());\n  EXPECT_EQ(ip1_weights->cpu_diff(), ip2_weights->cpu_diff());\n  this->net_->Forward(bottom);\n  
this->net_->Backward();\n  // Compute the expected update as the data minus the accumulated diff\n  // (both layers' gradients accumulate into the single shared diff blob).\n  Blob<Dtype> shared_params;\n  const bool reshape = true;\n  const bool copy_diff = false;\n  shared_params.CopyFrom(*ip1_weights, copy_diff, reshape);\n  shared_params.CopyFrom(*ip1_weights, !copy_diff, reshape);\n  const int count = ip1_weights->count();\n  // Make sure the diffs are non-trivial.\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NE(0, ip1_weights->cpu_diff()[i]);\n  }\n  caffe_axpy(count, Dtype(-1), shared_params.cpu_diff(),\n             shared_params.mutable_cpu_data());\n  const Dtype* expected_updated_params = shared_params.cpu_data();\n  this->net_->Update();\n  const Dtype* actual_updated_params = ip1_weights->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_EQ(expected_updated_params[i], actual_updated_params[i]);\n  }\n  // Check that data blobs of shared weights STILL point to the same memory\n  // location: the in-place Update must not break the sharing.\n  EXPECT_EQ(ip1_weights->cpu_data(), ip2_weights->cpu_data());\n\n  Caffe::set_random_seed(this->seed_);\n  this->InitDiffDataUnsharedWeightsNet();\n  EXPECT_EQ(this->net_->layer_names()[1], \"innerproduct1\");\n  EXPECT_EQ(this->net_->layer_names()[2], \"innerproduct2\");\n  ip1_weights = this->net_->layers()[1]->blobs()[0].get();\n  ip2_weights = this->net_->layers()[2]->blobs()[0].get();\n  // Check that data and diff blobs of unshared weights are at different\n  // locations in memory.\n  EXPECT_NE(ip1_weights->cpu_data(), ip2_weights->cpu_data());\n  EXPECT_NE(ip1_weights->cpu_diff(), ip2_weights->cpu_diff());\n  this->net_->Forward(bottom);\n  this->net_->Backward();\n  // Compute the expected update.\n  Blob<Dtype> unshared_params1;\n  unshared_params1.CopyFrom(*ip1_weights, copy_diff, reshape);\n  unshared_params1.CopyFrom(*ip1_weights, !copy_diff, reshape);\n  Blob<Dtype> unshared_params2;\n  unshared_params2.CopyFrom(*ip2_weights, copy_diff, reshape);\n  unshared_params2.CopyFrom(*ip2_weights, 
!copy_diff, reshape);\n  // Make sure the diffs are non-trivial and sum to the diff in the shared net.\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NE(0, ip1_weights->cpu_diff()[i]);\n    EXPECT_NE(0, ip2_weights->cpu_diff()[i]);\n    EXPECT_NE(ip1_weights->cpu_diff()[i], ip2_weights->cpu_diff()[i]);\n    EXPECT_FLOAT_EQ(ip1_weights->cpu_diff()[i] + ip2_weights->cpu_diff()[i],\n                    shared_params.cpu_diff()[i]);\n  }\n  caffe_axpy(count, Dtype(-1), ip1_weights->cpu_diff(),\n             unshared_params1.mutable_cpu_data());\n  caffe_axpy(count, Dtype(-1), ip2_weights->cpu_diff(),\n             unshared_params2.mutable_cpu_data());\n  const Dtype* expected_updated_params1 = unshared_params1.cpu_data();\n  const Dtype* expected_updated_params2 = unshared_params2.cpu_data();\n  this->net_->Update();\n  const Dtype* actual_updated_params1 = ip1_weights->cpu_data();\n  const Dtype* actual_updated_params2 = ip2_weights->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_EQ(expected_updated_params1[i], actual_updated_params1[i]);\n    EXPECT_EQ(expected_updated_params2[i], actual_updated_params2[i]);\n    EXPECT_NE(actual_updated_params1[i], actual_updated_params2[i]);\n    EXPECT_NE(expected_updated_params, expected_updated_params1);\n  }\n}\n\nTYPED_TEST(NetTest, TestSharedWeightsResume) {\n  typedef typename TypeParam::Dtype Dtype;\n\n  // Create a net with weight sharing; Update it once.\n  Caffe::set_random_seed(this->seed_);\n  this->InitDiffDataSharedWeightsNet();\n  vector<Blob<Dtype>*> bottom;\n  EXPECT_EQ(this->net_->layer_names()[1], \"innerproduct1\");\n  EXPECT_EQ(this->net_->layer_names()[2], \"innerproduct2\");\n  Blob<Dtype>* ip1_weights = this->net_->layers()[1]->blobs()[0].get();\n  Blob<Dtype>* ip2_weights = this->net_->layers()[2]->blobs()[0].get();\n  // Check that data and diff blobs of shared weights share the same memory\n  // locations.\n  EXPECT_EQ(ip1_weights->cpu_data(), ip2_weights->cpu_data());\n  
EXPECT_EQ(ip1_weights->cpu_diff(), ip2_weights->cpu_diff());\n  this->net_->ForwardBackward(bottom);\n  this->net_->Update();\n  Blob<Dtype> shared_params;\n  const bool kReshape = true;\n  const bool kCopyDiff = false;\n  shared_params.CopyFrom(*ip1_weights, kCopyDiff, kReshape);\n  const int count = ip1_weights->count();\n\n  // Write the net to a NetParameter, as in Solver::Snapshot.\n  NetParameter net_param;\n  this->net_->ToProto(&net_param);\n\n  // Reinitialize the net and copy parameters from net_param, as in\n  // Solver::Restore.\n  Caffe::set_random_seed(this->seed_);\n  this->InitDiffDataSharedWeightsNet();\n  this->net_->CopyTrainedLayersFrom(net_param);\n  ip1_weights = this->net_->layers()[1]->blobs()[0].get();\n  ip2_weights = this->net_->layers()[2]->blobs()[0].get();\n  ASSERT_FALSE(NULL == ip1_weights);\n  ASSERT_FALSE(NULL == ip2_weights);\n  EXPECT_NE(ip1_weights, ip2_weights);\n  // Check that data and diff blobs of shared weights share the same memory\n  // locations.\n  EXPECT_EQ(ip1_weights->cpu_data(), ip2_weights->cpu_data());\n  EXPECT_EQ(ip1_weights->cpu_diff(), ip2_weights->cpu_diff());\n  for (int i = 0; i < count; ++i) {\n    EXPECT_FLOAT_EQ(shared_params.cpu_data()[i], ip1_weights->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(NetTest, TestParamPropagateDown) {\n  typedef typename TypeParam::Dtype Dtype;\n  vector<Blob<Dtype>*> bottom;\n  const bool kBiasTerm = true, kForceBackward = false;\n  const Dtype* kLossWeight1 = NULL;\n  const Dtype* kLossWeight2 = NULL;\n\n  // Run the net with all params learned; check that gradients are non-zero.\n  Caffe::set_random_seed(this->seed_);\n  Dtype blobs_lr_w1 = 1, blobs_lr_w2 = 1, blobs_lr_b1 = 2, blobs_lr_b2 = 2;\n  this->InitUnsharedWeightsNet(kLossWeight1, kLossWeight2, kForceBackward,\n      kBiasTerm, blobs_lr_w1, blobs_lr_w2, blobs_lr_b1, blobs_lr_b2);\n  this->net_->Forward(bottom);\n  this->net_->Backward();\n  const vector<shared_ptr<Blob<Dtype> > >& params = this->net_->params();\n  
const int num_params = params.size();\n  ASSERT_EQ(4, num_params);\n  const Dtype kNonZeroTestMin = 1e-3;\n  vector<Dtype> param_asums(params.size());\n  for (int i = 0; i < num_params; ++i) {\n    const Dtype param_asum =\n       caffe_cpu_asum(params[i]->count(), params[i]->cpu_diff());\n    param_asums[i] = param_asum;\n    EXPECT_GT(param_asum, kNonZeroTestMin);\n  }\n\n  // Change the learning rates to different non-zero values; should see same\n  // gradients.\n  Caffe::set_random_seed(this->seed_);\n  blobs_lr_w1 *= 2, blobs_lr_w2 *= 2, blobs_lr_b1 *= 2, blobs_lr_b2 *= 2;\n  this->InitUnsharedWeightsNet(kLossWeight1, kLossWeight2, kForceBackward,\n      kBiasTerm, blobs_lr_w1, blobs_lr_w2, blobs_lr_b1, blobs_lr_b2);\n  this->net_->Forward(bottom);\n  this->net_->Backward();\n  const vector<shared_ptr<Blob<Dtype> > >& params2 = this->net_->params();\n  ASSERT_EQ(num_params, params2.size());\n  for (int i = 0; i < num_params; ++i) {\n    const Dtype param_asum =\n       caffe_cpu_asum(params2[i]->count(), params2[i]->cpu_diff());\n    EXPECT_FLOAT_EQ(param_asum, param_asums[i]);\n  }\n\n  // Change a subset of the learning rates to zero; check that we see zero\n  // gradients for those.\n  Caffe::set_random_seed(this->seed_);\n  blobs_lr_w1 = 1, blobs_lr_w2 = 0, blobs_lr_b1 = 0, blobs_lr_b2 = 1;\n  this->InitUnsharedWeightsNet(kLossWeight1, kLossWeight2, kForceBackward,\n      kBiasTerm, blobs_lr_w1, blobs_lr_w2, blobs_lr_b1, blobs_lr_b2);\n  this->net_->Forward(bottom);\n  this->net_->Backward();\n  const vector<shared_ptr<Blob<Dtype> > >& params3 = this->net_->params();\n  ASSERT_EQ(num_params, params3.size());\n  for (int i = 0; i < num_params; ++i) {\n    const Dtype param_asum =\n       caffe_cpu_asum(params3[i]->count(), params3[i]->cpu_diff());\n    if (i == 1 || i == 2) {\n      EXPECT_FLOAT_EQ(0, param_asum);\n    } else {\n      EXPECT_FLOAT_EQ(param_asum, param_asums[i]);\n    }\n  }\n\n  // Change the opposite subset of the learning rates to 
zero.\n  Caffe::set_random_seed(this->seed_);\n  blobs_lr_w1 = 0, blobs_lr_w2 = 1, blobs_lr_b1 = 1, blobs_lr_b2 = 0;\n  this->InitUnsharedWeightsNet(kLossWeight1, kLossWeight2, kForceBackward,\n      kBiasTerm, blobs_lr_w1, blobs_lr_w2, blobs_lr_b1, blobs_lr_b2);\n  this->net_->Forward(bottom);\n  this->net_->Backward();\n  const vector<shared_ptr<Blob<Dtype> > >& params4 = this->net_->params();\n  ASSERT_EQ(num_params, params4.size());\n  for (int i = 0; i < num_params; ++i) {\n    const Dtype param_asum =\n       caffe_cpu_asum(params4[i]->count(), params4[i]->cpu_diff());\n    if (i == 0 || i == 3) {\n      EXPECT_FLOAT_EQ(0, param_asum);\n    } else {\n      EXPECT_FLOAT_EQ(param_asum, param_asums[i]);\n    }\n  }\n}\n\nTYPED_TEST(NetTest, TestFromTo) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->InitTinyNet();\n\n  // Run Forward and Backward, recording the data diff and loss.\n  Blob<Dtype> data;\n  data.ReshapeLike(*this->net_->blob_by_name(\"data\"));\n  this->net_->ForwardPrefilled();\n  this->net_->Backward();\n  data.CopyFrom(*this->net_->blob_by_name(\"data\"), true, true);\n  const Dtype *loss_ptr = this->net_->output_blobs()[0]->cpu_data();\n  Dtype loss = *loss_ptr;\n\n  // Check that combining partial Forwards gives the same loss.\n  for (int i = 1; i < this->net_->layers().size(); ++i) {\n    // Note that we skip layer zero to keep the same data.\n    this->net_->ForwardFromTo(1, i);\n    if (i < this->net_->layers().size() - 1) {\n      this->net_->ForwardFrom(i + 1);\n    }\n    EXPECT_EQ(loss, *loss_ptr);\n  }\n\n  // Check that combining partial Backwards gives the same data diff.\n  for (int i = 1; i < this->net_->layers().size(); ++i) {\n    this->net_->BackwardTo(i);\n    this->net_->BackwardFrom(i - 1);\n    for (int j = 0; j < data.count(); ++j) {\n      EXPECT_EQ(data.cpu_diff()[j],\n          this->net_->blob_by_name(\"data\")->cpu_diff()[j]);\n    }\n  }\n}\n\nclass FilterNetTest : public ::testing::Test {\n protected:\n  void 
RunFilterNetTest(\n      const string& input_param_string, const string& filtered_param_string) {\n    NetParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    NetParameter expected_filtered_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        filtered_param_string, &expected_filtered_param));\n    NetParameter actual_filtered_param;\n    Net<float>::FilterNet(input_param, &actual_filtered_param);\n    EXPECT_EQ(expected_filtered_param.DebugString(),\n        actual_filtered_param.DebugString());\n    // Also test idempotence.\n    NetParameter double_filtered_param;\n    Net<float>::FilterNet(actual_filtered_param, &double_filtered_param);\n    EXPECT_EQ(actual_filtered_param.DebugString(),\n       double_filtered_param.DebugString());\n  }\n};\n\nTEST_F(FilterNetTest, TestNoFilter) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterLeNetTrainTest) {\n  const string& input_proto =\n      \"name: 'LeNet' \"\n      \"layer { \"\n      \"  name: 'mnist' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"  data_param { \"\n      \"    source: 'mnist-train-leveldb' \"\n      \"    batch_size: 64 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    scale: 0.00390625 \"\n      \"  } \"\n      \"  include: { phase: TRAIN } \"\n      \"} \"\n      \"layer { \"\n     
 \"  name: 'mnist' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"  data_param { \"\n      \"    source: 'mnist-test-leveldb' \"\n      \"    batch_size: 100 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    scale: 0.00390625 \"\n      \"  } \"\n      \"  include: { phase: TEST } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  convolution_param { \"\n      \"    num_output: 20 \"\n      \"    kernel_size: 5 \"\n      \"    stride: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'ip1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'ip1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'accuracy' \"\n      \"  type: 'Accuracy' \"\n      \"  bottom: 'ip1' \"\n      \"  bottom: 'label' \"\n      \"  top: 'accuracy' \"\n      \"  include: { phase: TEST } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'ip2' \"\n      \"  bottom: 'label' \"\n      \"  top: 'loss' \"\n      \"} \";\n  const string input_proto_train = \"state: { phase: TRAIN } \" + input_proto;\n  const string input_proto_test = 
\"state: { phase: TEST } \" + input_proto;\n  const string output_proto_train =\n      \"name: 'LeNet' \"\n      \"layer { \"\n      \"  name: 'mnist' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"  data_param { \"\n      \"    source: 'mnist-train-leveldb' \"\n      \"    batch_size: 64 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    scale: 0.00390625 \"\n      \"  } \"\n      \"  include: { phase: TRAIN } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  convolution_param { \"\n      \"    num_output: 20 \"\n      \"    kernel_size: 5 \"\n      \"    stride: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'ip1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'ip1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'ip2' \"\n      \"  bottom: 'label' \"\n      \"  top: 'loss' \"\n      \"} \";\n  const string& output_proto_test =\n      \"name: 'LeNet' \"\n      \"layer { \"\n      \"  name: 'mnist' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"  
data_param { \"\n      \"    source: 'mnist-test-leveldb' \"\n      \"    batch_size: 100 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    scale: 0.00390625 \"\n      \"  } \"\n      \"  include: { phase: TEST } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  convolution_param { \"\n      \"    num_output: 20 \"\n      \"    kernel_size: 5 \"\n      \"    stride: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'ip1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'ip1' \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"  } \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    weight_filler { \"\n      \"      type: 'xavier' \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'accuracy' \"\n      \"  type: 'Accuracy' \"\n      \"  bottom: 'ip1' \"\n      \"  bottom: 'label' \"\n      \"  top: 'accuracy' \"\n      \"  include: { phase: TEST } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'ip2' \"\n      \"  bottom: 'label' \"\n      \"  top: 'loss' \"\n      \"} \";\n  const string output_proto_train_explicit =\n      output_proto_train + \" state: { phase: TRAIN } \";\n  const string output_proto_test_explicit =\n      output_proto_test + \" state: { phase: TEST } \";\n  
this->RunFilterNetTest(input_proto_train, output_proto_train_explicit);\n  this->RunFilterNetTest(input_proto_test, output_proto_test_explicit);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByStage) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByStage2) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' 
\"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByStage) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByStage2) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  exclude: { stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByMultipleStage) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 
'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { stage: 'mystage' stage: 'myotherstage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \";\n  const string& output_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMultipleStage) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { stage: 'myotherstage' } \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMultipleStage2) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' stage: 'myotherstage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 
'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { stage: 'mystage' stage: 'myotherstage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { stage: 'mystage' } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByNotStage) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { not_stage: 'myotherstage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { not_stage: 'myotherstage' } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByNotStage) {\n  const string& input_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { not_stage: 'mystage' } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { 
not_stage: 'mystage' } \"\n      \"} \";\n  const string& output_proto =\n      \"state: { stage: 'mystage' } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByMinLevel) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 3 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterOutByMaxLevel) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { max_level: -3 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n 
     \"} \";\n  const string& output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, output_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMinLevel) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 0 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMinLevel2) {\n  const string& input_proto =\n      \"state: { level: 7 } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 3 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMaxLevel) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  
type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { max_level: 0 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInByMaxLevel2) {\n  const string& input_proto =\n      \"state: { level: -7 } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { max_level: -3 } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunFilterNetTest(input_proto, input_proto);\n}\n\nTEST_F(FilterNetTest, TestFilterInOutByIncludeMultiRule) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 2  phase: TRAIN } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { min_level: 2  phase: TEST } \"\n      \"} \";\n  const string& input_proto_train =\n      \"state: { level: 4  phase: TRAIN } \" + 
input_proto;\n  const string& input_proto_test =\n      \"state: { level: 4  phase: TEST } \" + input_proto;\n  const string& output_proto_train =\n      \"state: { level: 4  phase: TRAIN } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 2  phase: TRAIN } \"\n      \"} \";\n  const string& output_proto_test =\n      \"state: { level: 4  phase: TEST } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { min_level: 2  phase: TEST } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto_train, output_proto_train);\n  this->RunFilterNetTest(input_proto_test, output_proto_test);\n}\n\nTEST_F(FilterNetTest, TestFilterInByIncludeMultiRule) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  include: { min_level: 2  phase: TRAIN } \"\n      \"  include: { phase: TEST } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  include: { min_level: 2  phase: TEST } \"\n      \"  include: { phase: TRAIN } \"\n      \"} \";\n  const string& input_proto_train =\n 
     \"state: { level: 2  phase: TRAIN } \" + input_proto;\n  const string& input_proto_test =\n      \"state: { level: 2  phase: TEST } \" + input_proto;\n  this->RunFilterNetTest(input_proto_train, input_proto_train);\n  this->RunFilterNetTest(input_proto_test, input_proto_test);\n}\n\nTEST_F(FilterNetTest, TestFilterInOutByExcludeMultiRule) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  exclude: { min_level: 2  phase: TRAIN } \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  exclude: { min_level: 2  phase: TEST } \"\n      \"} \";\n  const string& input_proto_train =\n      \"state: { level: 4  phase: TRAIN } \" + input_proto;\n  const string& input_proto_test =\n      \"state: { level: 4  phase: TEST } \" + input_proto;\n  const string& output_proto_train =\n      \"state: { level: 4  phase: TRAIN } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"  exclude: { min_level: 2  phase: TEST } \"\n      \"} \";\n  const string& output_proto_test =\n      \"state: { level: 4  phase: TEST } \"\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n     
 \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"  exclude: { min_level: 2  phase: TRAIN } \"\n      \"} \";\n  this->RunFilterNetTest(input_proto_train, output_proto_train);\n  this->RunFilterNetTest(input_proto_test, output_proto_test);\n}\n\nTYPED_TEST(NetTest, TestReshape) {\n  typedef typename TypeParam::Dtype Dtype;\n  // We set up bottom blobs of two different sizes, switch between\n  // them, check that forward and backward both run and the results\n  // are the same, and check that the output shapes change.\n  Caffe::set_random_seed(this->seed_);\n  Caffe::set_mode(Caffe::CPU);\n  FillerParameter filler_param;\n  filler_param.set_std(1);\n  GaussianFiller<Dtype> filler(filler_param);\n  // Check smaller shape first as larger first could hide realloc failures.\n  Blob<Dtype> blob1(2, 3, 12, 10);\n  Blob<Dtype> blob2(4, 3, 9, 11);\n  ASSERT_LT(blob1.count(), blob2.count());\n  filler.Fill(&blob1);\n  filler.Fill(&blob2);\n\n  this->InitReshapableNet();\n  Blob<Dtype>* input_blob = this->net_->input_blobs()[0];\n  Blob<Dtype>* output_blob = this->net_->output_blobs()[0];\n  input_blob->Reshape(blob1.num(), blob1.channels(), blob1.height(),\n      blob1.width());\n  caffe_copy(blob1.count(), blob1.cpu_data(), input_blob->mutable_cpu_data());\n  this->net_->ForwardPrefilled();\n  // call backward just to make sure it runs\n  this->net_->Backward();\n  Blob<Dtype> output1(output_blob->num(), output_blob->channels(),\n      output_blob->height(), output_blob->width());\n  caffe_copy(output1.count(), output_blob->cpu_data(),\n      output1.mutable_cpu_data());\n\n  input_blob->Reshape(blob2.num(), blob2.channels(), blob2.height(),\n      blob2.width());\n  caffe_copy(blob2.count(), blob2.cpu_data(), input_blob->mutable_cpu_data());\n  this->net_->ForwardPrefilled();\n  this->net_->Backward();\n  Blob<Dtype> output2(output_blob->num(), output_blob->channels(),\n      output_blob->height(), output_blob->width());\n  caffe_copy(output2.count(), 
output_blob->cpu_data(),\n      output2.mutable_cpu_data());\n\n  input_blob->Reshape(blob1.num(), blob1.channels(), blob1.height(),\n      blob1.width());\n  caffe_copy(blob1.count(), blob1.cpu_data(), input_blob->mutable_cpu_data());\n  this->net_->ForwardPrefilled();\n  this->net_->Backward();\n  for (int i = 0; i < output1.count(); ++i) {\n    EXPECT_FLOAT_EQ(*(output1.cpu_data() + i), *(output_blob->cpu_data() + i));\n  }\n\n  input_blob->Reshape(blob2.num(), blob2.channels(), blob2.height(),\n      blob2.width());\n  caffe_copy(blob2.count(), blob2.cpu_data(), input_blob->mutable_cpu_data());\n  this->net_->ForwardPrefilled();\n  this->net_->Backward();\n  for (int i = 0; i < output2.count(); ++i) {\n    EXPECT_FLOAT_EQ(*(output2.cpu_data() + i), *(output_blob->cpu_data() + i));\n  }\n\n  EXPECT_EQ(output1.num(), blob1.num());\n  EXPECT_EQ(output2.num(), blob2.num());\n  bool same_spatial_shape = true;\n  const int kFirstSpatialAxis = 2;\n  for (int i = kFirstSpatialAxis; i < output1.num_axes(); ++i) {\n    if (output1.shape(i) != output2.shape(i)) {\n      same_spatial_shape = false;\n      break;\n    }\n  }\n  EXPECT_FALSE(same_spatial_shape);\n}\n\nTYPED_TEST(NetTest, TestSkipPropagateDown) {\n  // check bottom_need_backward if propagate_down is true\n  this->InitSkipPropNet(false);\n  vector<bool> vec_layer_need_backward = this->net_->layer_need_backward();\n  for (int layer_id = 0; layer_id < this->net_->layers().size(); ++layer_id) {\n    string layer_name = this->net_->layer_names()[layer_id];\n    if (layer_name == \"loss\") {\n      // access to bottom_need_backward corresponding to label's blob\n      bool need_back = this->net_->bottom_need_backward()[layer_id][1];\n      // if propagate_down is true, the loss layer will try to\n      // backpropagate on labels\n      EXPECT_TRUE(need_back) << \"bottom_need_backward should be True\";\n    }\n    // layer_need_backward should be True except for data and silence layers\n    if 
(layer_name.find(\"data\") != std::string::npos ||\n          layer_name == \"silence\") {\n      EXPECT_FALSE(vec_layer_need_backward[layer_id])\n          << \"layer_need_backward for \" << layer_name << \" should be False\";\n    } else {\n      EXPECT_TRUE(vec_layer_need_backward[layer_id])\n          << \"layer_need_backward for \" << layer_name << \" should be True\";\n    }\n  }\n  // check bottom_need_backward if propagate_down is false\n  this->InitSkipPropNet(true);\n  vec_layer_need_backward.clear();\n  vec_layer_need_backward = this->net_->layer_need_backward();\n  for (int layer_id = 0; layer_id < this->net_->layers().size(); ++layer_id) {\n    string layer_name = this->net_->layer_names()[layer_id];\n    if (layer_name == \"loss\") {\n      // access to bottom_need_backward corresponding to label's blob\n      bool need_back = this->net_->bottom_need_backward()[layer_id][1];\n      // if propagate_down is false, the loss layer will not try to\n      // backpropagate on labels\n      EXPECT_FALSE(need_back) << \"bottom_need_backward should be False\";\n    }\n    // layer_need_backward should be False except for innerproduct and\n    // loss layers\n    if (layer_name == \"innerproduct\" || layer_name == \"loss\") {\n      EXPECT_TRUE(vec_layer_need_backward[layer_id])\n          << \"layer_need_backward for \" << layer_name << \" should be True\";\n    } else {\n      EXPECT_FALSE(vec_layer_need_backward[layer_id])\n          << \"layer_need_backward for \" << layer_name << \" should be False\";\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_neuron_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n\n#include \"caffe/layers/absval_layer.hpp\"\n#include \"caffe/layers/bnll_layer.hpp\"\n#include \"caffe/layers/dropout_layer.hpp\"\n#include \"caffe/layers/elu_layer.hpp\"\n#include \"caffe/layers/exp_layer.hpp\"\n#include \"caffe/layers/inner_product_layer.hpp\"\n#include \"caffe/layers/log_layer.hpp\"\n#include \"caffe/layers/power_layer.hpp\"\n#include \"caffe/layers/prelu_layer.hpp\"\n#include \"caffe/layers/relu_layer.hpp\"\n#include \"caffe/layers/sigmoid_layer.hpp\"\n#include \"caffe/layers/tanh_layer.hpp\"\n#include \"caffe/layers/threshold_layer.hpp\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_relu_layer.hpp\"\n#include \"caffe/layers/cudnn_sigmoid_layer.hpp\"\n#include \"caffe/layers/cudnn_tanh_layer.hpp\"\n#endif\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass NeuronLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  NeuronLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~NeuronLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n\n  void TestDropoutForward(const float dropout_ratio) {\n    LayerParameter layer_param;\n    // Fill in the given dropout_ratio, unless it's 
0.5, in which case we don't\n    // set it explicitly to test that 0.5 is the default.\n    if (dropout_ratio != 0.5) {\n      layer_param.mutable_dropout_param()->set_dropout_ratio(dropout_ratio);\n    }\n    DropoutLayer<Dtype> layer(layer_param);\n    layer_param.set_phase(TRAIN);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    // Now, check values\n    const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n    const Dtype* top_data = this->blob_top_->cpu_data();\n    float scale = 1. / (1. - layer_param.dropout_param().dropout_ratio());\n    const int count = this->blob_bottom_->count();\n    // Initialize num_kept to count the number of inputs NOT dropped out.\n    int num_kept = 0;\n    for (int i = 0; i < count; ++i) {\n      if (top_data[i] != 0) {\n        ++num_kept;\n        EXPECT_EQ(top_data[i], bottom_data[i] * scale);\n      }\n    }\n    const Dtype std_error = sqrt(dropout_ratio * (1 - dropout_ratio) / count);\n    // Fail if the number dropped was more than 1.96 * std_error away from the\n    // expected number -- requires 95% confidence that the dropout layer is not\n    // obeying the given dropout_ratio for test failure.\n    const Dtype empirical_dropout_ratio = 1 - num_kept / Dtype(count);\n    EXPECT_NEAR(empirical_dropout_ratio, dropout_ratio, 1.96 * std_error);\n  }\n\n  void TestExpForward(const float base, const float scale, const float shift) {\n    LayerParameter layer_param;\n    layer_param.mutable_exp_param()->set_base(base);\n    layer_param.mutable_exp_param()->set_scale(scale);\n    layer_param.mutable_exp_param()->set_shift(shift);\n    ExpLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    const Dtype kDelta = 2e-4;\n    const Dtype* bottom_data = blob_bottom_->cpu_data();\n    const Dtype* top_data = blob_top_->cpu_data();\n    for (int i = 0; i < 
blob_bottom_->count(); ++i) {\n      const Dtype bottom_val = bottom_data[i];\n      const Dtype top_val = top_data[i];\n      if (base == -1) {\n        EXPECT_NEAR(top_val, exp(shift + scale * bottom_val), kDelta);\n      } else {\n        EXPECT_NEAR(top_val, pow(base, shift + scale * bottom_val), kDelta);\n      }\n    }\n  }\n\n  void TestExpGradient(const float base, const float scale, const float shift) {\n    LayerParameter layer_param;\n    layer_param.mutable_exp_param()->set_base(base);\n    layer_param.mutable_exp_param()->set_scale(scale);\n    layer_param.mutable_exp_param()->set_shift(shift);\n    ExpLayer<Dtype> layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 1e-3);\n    checker.CheckGradientEltwise(&layer, blob_bottom_vec_, blob_top_vec_);\n  }\n\n  void TestPReLU(PReLULayer<Dtype> *layer) {\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n    const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n    const Dtype* top_data = this->blob_top_->cpu_data();\n    const Dtype* slope_data = layer->blobs()[0]->cpu_data();\n    int hw = this->blob_bottom_->height() * this->blob_bottom_->width();\n    int channels = this->blob_bottom_->channels();\n    bool channel_shared = layer->layer_param().prelu_param().channel_shared();\n    for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n      int c = channel_shared ? 
0 : (i / hw) % channels;\n      EXPECT_EQ(top_data[i],\n          std::max(bottom_data[i], (Dtype)(0))\n          + slope_data[c] * std::min(bottom_data[i], (Dtype)(0)));\n    }\n  }\n\n  void LogBottomInit() {\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    Dtype* bottom_data = this->blob_bottom_->mutable_cpu_data();\n    caffe_exp(this->blob_bottom_->count(), bottom_data, bottom_data);\n  }\n\n  void TestLogForward(const float base, const float scale, const float shift) {\n    LogBottomInit();\n    LayerParameter layer_param;\n    layer_param.mutable_log_param()->set_base(base);\n    layer_param.mutable_log_param()->set_scale(scale);\n    layer_param.mutable_log_param()->set_shift(shift);\n    LogLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    const Dtype kDelta = 2e-4;\n    const Dtype* bottom_data = blob_bottom_->cpu_data();\n    const Dtype* top_data = blob_top_->cpu_data();\n    for (int i = 0; i < blob_bottom_->count(); ++i) {\n      const Dtype bottom_val = bottom_data[i];\n      const Dtype top_val = top_data[i];\n      if (base == -1) {\n        EXPECT_NEAR(top_val, log(shift + scale * bottom_val), kDelta);\n      } else {\n        EXPECT_NEAR(top_val, log(shift + scale * bottom_val) / log(base),\n                    kDelta);\n      }\n    }\n  }\n\n  void TestLogGradient(const float base, const float scale, const float shift) {\n    LogBottomInit();\n    LayerParameter layer_param;\n    layer_param.mutable_log_param()->set_base(base);\n    layer_param.mutable_log_param()->set_scale(scale);\n    layer_param.mutable_log_param()->set_shift(shift);\n    LogLayer<Dtype> layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 1e-2);\n    checker.CheckGradientEltwise(&layer, blob_bottom_vec_, blob_top_vec_);\n  }\n};\n\nTYPED_TEST_CASE(NeuronLayerTest, 
TestDtypesAndDevices);\n\nTYPED_TEST(NeuronLayerTest, TestAbsVal) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  AbsValLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data    = this->blob_top_->cpu_data();\n  const int count = this->blob_bottom_->count();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_EQ(top_data[i], fabs(bottom_data[i]));\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestAbsGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  AbsValLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestReLU) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_TRUE(top_data[i] == 0 || top_data[i] == bottom_data[i]);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestReLUGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ReLULayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestReLUWithNegativeSlope) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  
CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"relu_param { negative_slope: 0.01 }\", &layer_param));\n  ReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    if (top_data[i] >= 0) {\n      EXPECT_FLOAT_EQ(top_data[i], bottom_data[i]);\n    } else {\n      EXPECT_FLOAT_EQ(top_data[i], bottom_data[i] * 0.01);\n    }\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestReLUGradientWithNegativeSlope) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"relu_param { negative_slope: 0.01 }\", &layer_param));\n  ReLULayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestELU) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"elu_param { alpha: 0.5 }\", &layer_param));\n  ELULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype kDelta = 2e-4;\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    if (bottom_data[i] > 0) {\n      EXPECT_FLOAT_EQ(top_data[i], bottom_data[i]);\n    } else {\n      EXPECT_NEAR(top_data[i], 0.5 * (exp(bottom_data[i]) - 1), kDelta);\n    }\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestELUasReLU) {\n  typedef typename 
TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"elu_param { alpha: 0 }\", &layer_param));\n  ELULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_TRUE(top_data[i] == 0 || top_data[i] == bottom_data[i]);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestELUGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ELULayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestELUasReLUGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"elu_param { alpha: 0 }\", &layer_param));\n  ELULayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestSigmoid) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SigmoidLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_FLOAT_EQ(top_data[i], 1. 
/ (1 + exp(-bottom_data[i])));\n    // check that we squashed the value between 0 and 1\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_LE(top_data[i], 1.);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestSigmoidGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SigmoidLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestTanH) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  TanHLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test exact values\n  for (int i = 0; i < this->blob_bottom_->num(); ++i) {\n    for (int j = 0; j < this->blob_bottom_->channels(); ++j) {\n      for (int k = 0; k < this->blob_bottom_->height(); ++k) {\n        for (int l = 0; l < this->blob_bottom_->width(); ++l) {\n          EXPECT_GE(this->blob_top_->data_at(i, j, k, l) + 1e-4,\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) - 1) /\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) + 1));\n          EXPECT_LE(this->blob_top_->data_at(i, j, k, l) - 1e-4,\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) - 1) /\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) + 1));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestTanHGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  TanHLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpLayer) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Test default base of \"-1\" -- should actually set base := e.\n  const Dtype kBase = -1;\n  const 
Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestExpForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Test default base of \"-1\" -- should actually set base := e.\n  const Dtype kBase = -1;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestExpGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpLayerBase2) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestExpForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpGradientBase2) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestExpGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpLayerBase2Shift1) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 1;\n  this->TestExpForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpGradientBase2Shift1) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 1;\n  this->TestExpGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpLayerBase2Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 0;\n  this->TestExpForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpGradientBase2Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 0;\n  this->TestExpGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpLayerBase2Shift1Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 1;\n  
this->TestExpForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestExpGradientBase2Shift1Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 1;\n  this->TestExpGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogLayer) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Test default base of \"-1\" -- should actually set base := e.\n  const Dtype kBase = -1;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestLogForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Test default base of \"-1\" -- should actually set base := e.\n  const Dtype kBase = -1;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestLogGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogLayerBase2) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestLogForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogGradientBase2) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 0;\n  this->TestLogGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogLayerBase2Shift1) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 1;\n  this->TestLogForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogGradientBase2Shift1) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 1;\n  const Dtype kShift = 1;\n  this->TestLogGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogLayerBase2Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 0;\n  
this->TestLogForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogGradientBase2Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 0;\n  this->TestLogGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogLayerBase2Shift1Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 1;\n  this->TestLogForward(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestLogGradientBase2Shift1Scale3) {\n  typedef typename TypeParam::Dtype Dtype;\n  const Dtype kBase = 2;\n  const Dtype kScale = 3;\n  const Dtype kShift = 1;\n  this->TestLogGradient(kBase, kScale, kShift);\n}\n\nTYPED_TEST(NeuronLayerTest, TestDropoutHalf) {\n  const float kDropoutRatio = 0.5;\n  this->TestDropoutForward(kDropoutRatio);\n}\n\nTYPED_TEST(NeuronLayerTest, TestDropoutThreeQuarters) {\n  const float kDropoutRatio = 0.75;\n  this->TestDropoutForward(kDropoutRatio);\n}\n\nTYPED_TEST(NeuronLayerTest, TestDropoutTestPhase) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.set_phase(TEST);\n  DropoutLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    if (top_data[i] != 0) {\n      EXPECT_EQ(top_data[i], bottom_data[i]);\n    }\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestDropoutGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.set_phase(TRAIN);\n  DropoutLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      
this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestDropoutGradientTest) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.set_phase(TEST);\n  DropoutLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestBNLL) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BNLLLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_GE(top_data[i], bottom_data[i]);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestBNLLGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BNLLLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* slopes = layer.blobs()[0]->cpu_data();\n  int count = layer.blobs()[0]->count();\n  for (int i = 0; i < count; ++i, ++slopes) {\n    EXPECT_EQ(*slopes, 0.25);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  
filler.Fill(layer.blobs()[0].get());\n  this->TestPReLU(&layer);\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUForwardChannelShared) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_prelu_param()->set_channel_shared(true);\n  PReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  this->TestPReLU(&layer);\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(layer.blobs()[0].get());\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUGradientChannelShared) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_prelu_param()->set_channel_shared(true);\n  PReLULayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUConsistencyReLU) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter prelu_layer_param;\n  LayerParameter relu_layer_param;\n  relu_layer_param.mutable_relu_param()->set_negative_slope(0.25);\n  PReLULayer<Dtype> prelu(prelu_layer_param);\n  ReLULayer<Dtype> relu(relu_layer_param);\n  // Set up blobs\n  vector<Blob<Dtype>*> blob_bottom_vec_2;\n  vector<Blob<Dtype>*> blob_top_vec_2;\n  shared_ptr<Blob<Dtype> > blob_bottom_2(new Blob<Dtype>());\n  shared_ptr<Blob<Dtype> > blob_top_2(new Blob<Dtype>());\n  blob_bottom_vec_2.push_back(blob_bottom_2.get());\n  
blob_top_vec_2.push_back(blob_top_2.get());\n  blob_bottom_2->CopyFrom(*this->blob_bottom_, false, true);\n  // SetUp layers\n  prelu.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  relu.SetUp(blob_bottom_vec_2, blob_top_vec_2);\n  // Check forward\n  prelu.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  relu.Forward(this->blob_bottom_vec_, blob_top_vec_2);\n  for (int s = 0; s < blob_top_2->count(); ++s) {\n    EXPECT_EQ(this->blob_top_->cpu_data()[s], blob_top_2->cpu_data()[s]);\n  }\n  // Check backward\n  shared_ptr<Blob<Dtype> > tmp_blob(new Blob<Dtype>());\n  tmp_blob->ReshapeLike(*blob_top_2.get());\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(tmp_blob.get());\n  caffe_copy(blob_top_2->count(), tmp_blob->cpu_data(),\n      this->blob_top_->mutable_cpu_diff());\n  caffe_copy(blob_top_2->count(), tmp_blob->cpu_data(),\n      blob_top_2->mutable_cpu_diff());\n  vector<bool> propagate_down;\n  propagate_down.push_back(true);\n  prelu.Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  relu.Backward(blob_top_vec_2, propagate_down, blob_bottom_vec_2);\n  for (int s = 0; s < blob_bottom_2->count(); ++s) {\n    EXPECT_EQ(this->blob_bottom_->cpu_diff()[s], blob_bottom_2->cpu_diff()[s]);\n  }\n}\n\nTYPED_TEST(NeuronLayerTest, TestPReLUInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Set layer parameters\n  LayerParameter ip_layer_param;\n  LayerParameter prelu_layer_param;\n  InnerProductParameter *ip_param =\n      ip_layer_param.mutable_inner_product_param();\n  ip_param->mutable_weight_filler()->set_type(\"gaussian\");\n  ip_param->set_num_output(3);\n  InnerProductLayer<Dtype> ip(ip_layer_param);\n  PReLULayer<Dtype> prelu(prelu_layer_param);\n  InnerProductLayer<Dtype> ip2(ip_layer_param);\n  PReLULayer<Dtype> prelu2(prelu_layer_param);\n  // Set up blobs\n  vector<Blob<Dtype>*> blob_bottom_vec_2;\n  vector<Blob<Dtype>*> blob_middle_vec_2;\n  vector<Blob<Dtype>*> 
blob_top_vec_2;\n  shared_ptr<Blob<Dtype> > blob_bottom_2(new Blob<Dtype>());\n  shared_ptr<Blob<Dtype> > blob_middle_2(new Blob<Dtype>());\n  shared_ptr<Blob<Dtype> > blob_top_2(new Blob<Dtype>());\n  blob_bottom_vec_2.push_back(blob_bottom_2.get());\n  blob_middle_vec_2.push_back(blob_middle_2.get());\n  blob_top_vec_2.push_back(blob_top_2.get());\n  blob_bottom_2->CopyFrom(*this->blob_bottom_, false, true);\n  // SetUp layers\n  ip.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  prelu.SetUp(this->blob_top_vec_, this->blob_top_vec_);\n  ip2.SetUp(blob_bottom_vec_2, blob_middle_vec_2);\n  prelu2.SetUp(blob_middle_vec_2, blob_top_vec_2);\n  caffe_copy(ip2.blobs()[0]->count(), ip.blobs()[0]->cpu_data(),\n      ip2.blobs()[0]->mutable_cpu_data());\n  // Forward in-place\n  ip.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  prelu.Forward(this->blob_top_vec_, this->blob_top_vec_);\n  // Forward non-in-place\n  ip2.Forward(blob_bottom_vec_2, blob_middle_vec_2);\n  prelu2.Forward(blob_middle_vec_2, blob_top_vec_2);\n  // Check numbers\n  for (int s = 0; s < blob_top_2->count(); ++s) {\n    EXPECT_EQ(this->blob_top_->cpu_data()[s], blob_top_2->cpu_data()[s]);\n  }\n  // Fill top diff with random numbers\n  shared_ptr<Blob<Dtype> > tmp_blob(new Blob<Dtype>());\n  tmp_blob->ReshapeLike(*blob_top_2.get());\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(tmp_blob.get());\n  caffe_copy(blob_top_2->count(), tmp_blob->cpu_data(),\n      this->blob_top_->mutable_cpu_diff());\n  caffe_copy(blob_top_2->count(), tmp_blob->cpu_data(),\n      blob_top_2->mutable_cpu_diff());\n  // Backward in-place\n  vector<bool> propagate_down;\n  propagate_down.push_back(true);\n  prelu.Backward(this->blob_top_vec_, propagate_down, this->blob_top_vec_);\n  ip.Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  // Backward non-in-place\n  prelu2.Backward(blob_top_vec_2, propagate_down, blob_middle_vec_2);\n  
ip2.Backward(blob_middle_vec_2, propagate_down, blob_bottom_vec_2);\n  // Check numbers\n  for (int s = 0; s < blob_bottom_2->count(); ++s) {\n    EXPECT_EQ(this->blob_bottom_->cpu_diff()[s], blob_bottom_2->cpu_diff()[s]);\n  }\n  for (int s = 0; s < ip.blobs()[0]->count(); ++s) {\n    EXPECT_EQ(ip.blobs()[0]->cpu_diff()[s], ip2.blobs()[0]->cpu_diff()[s]);\n  }\n  for (int s = 0; s < ip.blobs()[1]->count(); ++s) {\n    EXPECT_EQ(ip.blobs()[1]->cpu_diff()[s], ip2.blobs()[1]->cpu_diff()[s]);\n  }\n  for (int s = 0; s < prelu.blobs()[0]->count(); ++s) {\n    EXPECT_EQ(prelu.blobs()[0]->cpu_diff()[s],\n        prelu2.blobs()[0]->cpu_diff()[s]);\n  }\n}\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNNeuronLayerTest : public GPUDeviceTest<Dtype> {\n protected:\n  CuDNNNeuronLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~CuDNNNeuronLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(CuDNNNeuronLayerTest, TestDtypes);\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestReLUCuDNN) {\n  LayerParameter layer_param;\n  CuDNNReLULayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_TRUE(top_data[i] == 0 || 
top_data[i] == bottom_data[i]);\n  }\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestReLUGradientCuDNN) {\n  LayerParameter layer_param;\n  CuDNNReLULayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestReLUWithNegativeSlopeCuDNN) {\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"relu_param { negative_slope: 0.01 }\", &layer_param));\n  CuDNNReLULayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    if (top_data[i] >= 0) {\n      EXPECT_FLOAT_EQ(top_data[i], bottom_data[i]);\n    } else {\n      EXPECT_FLOAT_EQ(top_data[i], bottom_data[i] * 0.01);\n    }\n  }\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestReLUGradientWithNegativeSlopeCuDNN) {\n  LayerParameter layer_param;\n  CHECK(google::protobuf::TextFormat::ParseFromString(\n      \"relu_param { negative_slope: 0.01 }\", &layer_param));\n  CuDNNReLULayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestSigmoidCuDNN) {\n  LayerParameter layer_param;\n  CuDNNSigmoidLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  for (int i = 0; i < 
this->blob_bottom_->count(); ++i) {\n    EXPECT_FLOAT_EQ(top_data[i], 1. / (1 + exp(-bottom_data[i])));\n    // check that we squashed the value between 0 and 1\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_LE(top_data[i], 1.);\n  }\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestSigmoidGradientCuDNN) {\n  LayerParameter layer_param;\n  CuDNNSigmoidLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3, 1701, 0., 0.01);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestTanHCuDNN) {\n  LayerParameter layer_param;\n  CuDNNTanHLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test exact values\n  for (int i = 0; i < this->blob_bottom_->num(); ++i) {\n    for (int j = 0; j < this->blob_bottom_->channels(); ++j) {\n      for (int k = 0; k < this->blob_bottom_->height(); ++k) {\n        for (int l = 0; l < this->blob_bottom_->width(); ++l) {\n          EXPECT_GE(this->blob_top_->data_at(i, j, k, l) + 1e-4,\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) - 1) /\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) + 1));\n          EXPECT_LE(this->blob_top_->data_at(i, j, k, l) - 1e-4,\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) - 1) /\n             (exp(2*this->blob_bottom_->data_at(i, j, k, l)) + 1));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(CuDNNNeuronLayerTest, TestTanHGradientCuDNN) {\n  LayerParameter layer_param;\n  CuDNNTanHLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_platform.cpp",
    "content": "#ifndef CPU_ONLY\n\n#include <cstdio>\n#include <cstdlib>\n\n#include \"glog/logging.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nextern cudaDeviceProp CAFFE_TEST_CUDA_PROP;\n\nclass PlatformTest : public ::testing::Test {};\n\nTEST_F(PlatformTest, TestInitialization) {\n  printf(\"Major revision number:         %d\\n\",  CAFFE_TEST_CUDA_PROP.major);\n  printf(\"Minor revision number:         %d\\n\",  CAFFE_TEST_CUDA_PROP.minor);\n  printf(\"Name:                          %s\\n\",  CAFFE_TEST_CUDA_PROP.name);\n  printf(\"Total global memory:           %lu\\n\",\n         CAFFE_TEST_CUDA_PROP.totalGlobalMem);\n  printf(\"Total shared memory per block: %lu\\n\",\n         CAFFE_TEST_CUDA_PROP.sharedMemPerBlock);\n  printf(\"Total registers per block:     %d\\n\",\n         CAFFE_TEST_CUDA_PROP.regsPerBlock);\n  printf(\"Warp size:                     %d\\n\",\n         CAFFE_TEST_CUDA_PROP.warpSize);\n  printf(\"Maximum memory pitch:          %lu\\n\",\n         CAFFE_TEST_CUDA_PROP.memPitch);\n  printf(\"Maximum threads per block:     %d\\n\",\n         CAFFE_TEST_CUDA_PROP.maxThreadsPerBlock);\n  for (int i = 0; i < 3; ++i)\n    printf(\"Maximum dimension %d of block:  %d\\n\", i,\n           CAFFE_TEST_CUDA_PROP.maxThreadsDim[i]);\n  for (int i = 0; i < 3; ++i)\n    printf(\"Maximum dimension %d of grid:   %d\\n\", i,\n           CAFFE_TEST_CUDA_PROP.maxGridSize[i]);\n  printf(\"Clock rate:                    %d\\n\", CAFFE_TEST_CUDA_PROP.clockRate);\n  printf(\"Total constant memory:         %lu\\n\",\n         CAFFE_TEST_CUDA_PROP.totalConstMem);\n  printf(\"Texture alignment:             %lu\\n\",\n         CAFFE_TEST_CUDA_PROP.textureAlignment);\n  printf(\"Concurrent copy and execution: %s\\n\",\n         (CAFFE_TEST_CUDA_PROP.deviceOverlap ? 
\"Yes\" : \"No\"));\n  printf(\"Number of multiprocessors:     %d\\n\",\n         CAFFE_TEST_CUDA_PROP.multiProcessorCount);\n  printf(\"Kernel execution timeout:      %s\\n\",\n         (CAFFE_TEST_CUDA_PROP.kernelExecTimeoutEnabled ? \"Yes\" : \"No\"));\n  printf(\"Unified virtual addressing:    %s\\n\",\n         (CAFFE_TEST_CUDA_PROP.unifiedAddressing ? \"Yes\" : \"No\"));\n  EXPECT_TRUE(true);\n}\n\n}  // namespace caffe\n\n#endif  // CPU_ONLY\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_pooling_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_pooling_layer.hpp\"\n#endif\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass PoolingLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  PoolingLayerTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_mask_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    blob_bottom_->Reshape(2, 3, 6, 5);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~PoolingLayerTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n    delete blob_top_mask_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  Blob<Dtype>* const blob_top_mask_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  // Test for 2x 2 square pooling layer\n  void TestForwardSquare() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_size(2);\n    pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 3, 5);\n    // Input: 2x 2 channels of:\n    //     [1 2 5 2 3]\n    //     [9 4 1 4 8]\n    //     [1 2 5 2 3]\n    for (int i = 0; i < 15 * num * channels; i += 15) {\n      blob_bottom_->mutable_cpu_data()[i +  0] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 
2;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 5;\n      blob_bottom_->mutable_cpu_data()[i +  3] = 2;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 9;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 4;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 4;\n      blob_bottom_->mutable_cpu_data()[i +  9] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 1;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 12] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 3;\n    }\n    PoolingLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 2);\n    EXPECT_EQ(blob_top_->width(), 4);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 2);\n      EXPECT_EQ(blob_top_mask_->width(), 4);\n    }\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    //     [9 5 5 8]\n    //     [9 5 5 8]\n    for (int i = 0; i < 8 * num * channels; i += 8) {\n      EXPECT_EQ(blob_top_->cpu_data()[i + 0], 9);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 1], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 2], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 3], 8);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 4], 9);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 5], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 6], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 7], 8);\n    }\n    if (blob_top_vec_.size() > 1) {\n      // Expected mask output: 2x 2 channels of:\n      //     [5  2  2 9]\n      //     [5 12 12 9]\n      for (int i = 0; i < 8 * num * channels; i 
+= 8) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 0],  5);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 1],  2);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 2],  2);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 3],  9);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 4],  5);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 5], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 6], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 7],  9);\n      }\n    }\n  }\n  // Test for 3x 2 rectangular pooling layer with kernel_h > kernel_w\n  void TestForwardRectHigh() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_h(3);\n    pooling_param->set_kernel_w(2);\n    pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 6, 6);\n    // Input: 2x 2 channels of:\n    // [35     1     6    26    19    24]\n    // [ 3    32     7    21    23    25]\n    // [31     9     2    22    27    20]\n    // [ 8    28    33    17    10    15]\n    // [30     5    34    12    14    16]\n    // [ 4    36    29    13    18    11]\n    // (this is generated by magic(6) in MATLAB)\n    for (int i = 0; i < 36 * num * channels; i += 36) {\n      blob_bottom_->mutable_cpu_data()[i +  0] = 35;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 6;\n      blob_bottom_->mutable_cpu_data()[i +  3] = 26;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 19;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 24;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 32;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 7;\n      blob_bottom_->mutable_cpu_data()[i +  9] = 21;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 23;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 25;\n     
 blob_bottom_->mutable_cpu_data()[i + 12] = 31;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 9;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 15] = 22;\n      blob_bottom_->mutable_cpu_data()[i + 16] = 27;\n      blob_bottom_->mutable_cpu_data()[i + 17] = 20;\n      blob_bottom_->mutable_cpu_data()[i + 18] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 19] = 28;\n      blob_bottom_->mutable_cpu_data()[i + 20] = 33;\n      blob_bottom_->mutable_cpu_data()[i + 21] = 17;\n      blob_bottom_->mutable_cpu_data()[i + 22] = 10;\n      blob_bottom_->mutable_cpu_data()[i + 23] = 15;\n      blob_bottom_->mutable_cpu_data()[i + 24] = 30;\n      blob_bottom_->mutable_cpu_data()[i + 25] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 26] = 34;\n      blob_bottom_->mutable_cpu_data()[i + 27] = 12;\n      blob_bottom_->mutable_cpu_data()[i + 28] = 14;\n      blob_bottom_->mutable_cpu_data()[i + 29] = 16;\n      blob_bottom_->mutable_cpu_data()[i + 30] = 4;\n      blob_bottom_->mutable_cpu_data()[i + 31] = 36;\n      blob_bottom_->mutable_cpu_data()[i + 32] = 29;\n      blob_bottom_->mutable_cpu_data()[i + 33] = 13;\n      blob_bottom_->mutable_cpu_data()[i + 34] = 18;\n      blob_bottom_->mutable_cpu_data()[i + 35] = 11;\n    }\n    PoolingLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 4);\n    EXPECT_EQ(blob_top_->width(), 5);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 4);\n      EXPECT_EQ(blob_top_mask_->width(), 5);\n    }\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    // [35    32    26    27    27]\n    // [32    33    33    27    27]\n    // [31    34    34    27    
27]\n    // [36    36    34    18    18]\n    for (int i = 0; i < 20 * num * channels; i += 20) {\n      EXPECT_EQ(blob_top_->cpu_data()[i +  0], 35);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  1], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  2], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  3], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  4], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  5], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  6], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  7], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  8], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  9], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 10], 31);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 11], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 12], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 13], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 14], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 15], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 16], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 17], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 18], 18);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 19], 18);\n    }\n    if (blob_top_vec_.size() > 1) {\n        // [ 1     8     4    17    17]\n        // [ 8    21    21    17    17]\n        // [13    27    27    17    17]\n        // [32    32    27    35    35]\n      for (int i = 0; i < 20 * num * channels; i += 20) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  0],  0);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  1],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  2],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  3], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  4], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  5],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  6], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  7], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  8], 16);\n        
EXPECT_EQ(blob_top_mask_->cpu_data()[i +  9], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 10], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 11], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 12], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 13], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 14], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 15], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 16], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 17], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 18], 34);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 19], 34);\n      }\n    }\n  }\n  // Test for rectangular pooling layer with kernel_w > kernel_h\n  void TestForwardRectWide() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_h(2);\n    pooling_param->set_kernel_w(3);\n    pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 6, 6);\n    // Input: 2x 2 channels of:\n    // [35     1     6    26    19    24]\n    // [ 3    32     7    21    23    25]\n    // [31     9     2    22    27    20]\n    // [ 8    28    33    17    10    15]\n    // [30     5    34    12    14    16]\n    // [ 4    36    29    13    18    11]\n    // (this is generated by magic(6) in MATLAB)\n    for (int i = 0; i < 36 * num * channels; i += 36) {\n      blob_bottom_->mutable_cpu_data()[i +  0] = 35;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 6;\n      blob_bottom_->mutable_cpu_data()[i +  3] = 26;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 19;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 24;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 32;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 7;\n 
     blob_bottom_->mutable_cpu_data()[i +  9] = 21;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 23;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 25;\n      blob_bottom_->mutable_cpu_data()[i + 12] = 31;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 9;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 15] = 22;\n      blob_bottom_->mutable_cpu_data()[i + 16] = 27;\n      blob_bottom_->mutable_cpu_data()[i + 17] = 20;\n      blob_bottom_->mutable_cpu_data()[i + 18] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 19] = 28;\n      blob_bottom_->mutable_cpu_data()[i + 20] = 33;\n      blob_bottom_->mutable_cpu_data()[i + 21] = 17;\n      blob_bottom_->mutable_cpu_data()[i + 22] = 10;\n      blob_bottom_->mutable_cpu_data()[i + 23] = 15;\n      blob_bottom_->mutable_cpu_data()[i + 24] = 30;\n      blob_bottom_->mutable_cpu_data()[i + 25] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 26] = 34;\n      blob_bottom_->mutable_cpu_data()[i + 27] = 12;\n      blob_bottom_->mutable_cpu_data()[i + 28] = 14;\n      blob_bottom_->mutable_cpu_data()[i + 29] = 16;\n      blob_bottom_->mutable_cpu_data()[i + 30] = 4;\n      blob_bottom_->mutable_cpu_data()[i + 31] = 36;\n      blob_bottom_->mutable_cpu_data()[i + 32] = 29;\n      blob_bottom_->mutable_cpu_data()[i + 33] = 13;\n      blob_bottom_->mutable_cpu_data()[i + 34] = 18;\n      blob_bottom_->mutable_cpu_data()[i + 35] = 11;\n    }\n    PoolingLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 5);\n    EXPECT_EQ(blob_top_->width(), 4);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 5);\n      EXPECT_EQ(blob_top_mask_->width(), 4);\n    }\n    layer.Forward(blob_bottom_vec_, 
blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    // [35    32    26    26]\n    // [32    32    27    27]\n    // [33    33    33    27]\n    // [34    34    34    17]\n    // [36    36    34    18]\n    for (int i = 0; i < 20 * num * channels; i += 20) {\n      EXPECT_EQ(blob_top_->cpu_data()[i +  0], 35);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  1], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  2], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  3], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  4], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  5], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  6], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  7], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  8], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  9], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 10], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 11], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 12], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 13], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 14], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 15], 17);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 16], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 17], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 18], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 19], 18);\n    }\n    if (blob_top_vec_.size() > 1) {\n        // [ 1     8     4     4]\n        // [ 8     8    17    17]\n        // [21    21    21    17]\n        // [27    27    27    22]\n        // [32    32    27    35]\n      for (int i = 0; i < 20 * num * channels; i += 20) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  0],  0);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  1],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  2],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  3],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  4],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  5],  7);\n        
EXPECT_EQ(blob_top_mask_->cpu_data()[i +  6], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  7], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  8], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  9], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 10], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 11], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 12], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 13], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 14], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 15], 21);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 16], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 17], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 18], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 19], 34);\n      }\n    }\n  }\n};\n\nTYPED_TEST_CASE(PoolingLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(PoolingLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 2);\n}\n\nTYPED_TEST(PoolingLayerTest, TestSetupPadded) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pad(1);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  
EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 4);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(PoolingLayerTest, TestSetupGlobalPooling) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_global_pooling(true);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\n/*\nTYPED_TEST(PoolingLayerTest, PrintBackward) {\n  LayerParameter layer_param;\n  layer_param.set_kernelsize(3);\n  layer_param.set_stride(2);\n  layer_param.set_pool(LayerParameter_PoolMethod_MAX);\n  PoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    cout << \"bottom data \" << i << \" \" << this->blob_bottom_->cpu_data()[i] << endl;\n  }\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    cout << \"top data \" << i << \" \" << this->blob_top_->cpu_data()[i] << endl;\n  }\n\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = i;\n  }\n  layer.Backward(this->blob_top_vec_, true, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    cout << \"bottom diff \" << i << \" \" << this->blob_bottom_->cpu_diff()[i] << endl;\n  }\n}\n*/\n\nTYPED_TEST(PoolingLayerTest, TestForwardMax) {\n  this->TestForwardSquare();\n  
this->TestForwardRectHigh();\n  this->TestForwardRectWide();\n}\n\nTYPED_TEST(PoolingLayerTest, TestForwardMaxTopMask) {\n  this->blob_top_vec_.push_back(this->blob_top_mask_);\n  this->TestForwardSquare();\n  this->TestForwardRectHigh();\n  this->TestForwardRectWide();\n}\n\nTYPED_TEST(PoolingLayerTest, TestGradientMax) {\n  typedef typename TypeParam::Dtype Dtype;\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pad(1);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n      PoolingLayer<Dtype> layer(layer_param);\n      GradientChecker<Dtype> checker(1e-4, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\nTYPED_TEST(PoolingLayerTest, TestForwardMaxPadded) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pad(2);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n  this->blob_bottom_->Reshape(1, 1, 3, 3);\n  // Input:\n  //     [ 1 2 4 ]\n  //     [ 2 3 2 ]\n  //     [ 4 2 1 ]\n  this->blob_bottom_->mutable_cpu_data()[0] = 1;\n  this->blob_bottom_->mutable_cpu_data()[1] = 2;\n  this->blob_bottom_->mutable_cpu_data()[2] = 4;\n  this->blob_bottom_->mutable_cpu_data()[3] = 2;\n  this->blob_bottom_->mutable_cpu_data()[4] = 3;\n  this->blob_bottom_->mutable_cpu_data()[5] = 2;\n  this->blob_bottom_->mutable_cpu_data()[6] = 4;\n  this->blob_bottom_->mutable_cpu_data()[7] = 2;\n  this->blob_bottom_->mutable_cpu_data()[8] = 1;\n  PoolingLayer<Dtype> 
layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Dtype epsilon = 1e-8;\n  // Output:\n  //     [ 1 4 4 ]\n  //     [ 4 4 4 ]\n  //     [ 4 4 1 ]\n  EXPECT_NEAR(this->blob_top_->cpu_data()[0], 1, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[1], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[2], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[3], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[4], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[5], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[6], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[7], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[8], 1, epsilon);\n}\n\nTYPED_TEST(PoolingLayerTest, TestGradientMaxTopMask) {\n  typedef typename TypeParam::Dtype Dtype;\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n      this->blob_top_vec_.push_back(this->blob_top_mask_);\n      PoolingLayer<Dtype> layer(layer_param);\n      GradientChecker<Dtype> checker(1e-4, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n      this->blob_top_vec_.pop_back();\n    }\n  }\n}\n\nTYPED_TEST(PoolingLayerTest, TestForwardAve) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  
pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(1);\n  pooling_param->set_pad(1);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n  this->blob_bottom_->Reshape(1, 1, 3, 3);\n  FillerParameter filler_param;\n  filler_param.set_value(Dtype(2));\n  ConstantFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_bottom_);\n  PoolingLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Dtype epsilon = 1e-5;\n  EXPECT_NEAR(this->blob_top_->cpu_data()[0], 8.0 / 9, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[1], 4.0 / 3, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[2], 8.0 / 9, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[3], 4.0 / 3, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[4], 2.0    , epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[5], 4.0 / 3, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[6], 8.0 / 9, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[7], 4.0 / 3, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[8], 8.0 / 9, epsilon);\n}\n\nTYPED_TEST(PoolingLayerTest, TestGradientAve) {\n  typedef typename TypeParam::Dtype Dtype;\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n      PoolingLayer<Dtype> layer(layer_param);\n      GradientChecker<Dtype> checker(1e-2, 1e-2);\n      checker.CheckGradientExhaustive(&layer, 
this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\nTYPED_TEST(PoolingLayerTest, TestGradientAvePadded) {\n  typedef typename TypeParam::Dtype Dtype;\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pad(2);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n      PoolingLayer<Dtype> layer(layer_param);\n      GradientChecker<Dtype> checker(1e-2, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNPoolingLayerTest : public GPUDeviceTest<Dtype> {\n protected:\n  CuDNNPoolingLayerTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()),\n        blob_top_mask_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    blob_bottom_->Reshape(2, 3, 6, 5);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~CuDNNPoolingLayerTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n    delete blob_top_mask_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  Blob<Dtype>* const blob_top_mask_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n  // Test for 2x 2 square pooling layer\n  void TestForwardSquare() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_size(2);\n    
pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 3, 5);\n    // Input: 2x 2 channels of:\n    //     [1 2 5 2 3]\n    //     [9 4 1 4 8]\n    //     [1 2 5 2 3]\n    for (int i = 0; i < 15 * num * channels; i += 15) {\n      blob_bottom_->mutable_cpu_data()[i +  0] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 2;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 5;\n      blob_bottom_->mutable_cpu_data()[i +  3] = 2;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 9;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 4;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 4;\n      blob_bottom_->mutable_cpu_data()[i +  9] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 1;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 12] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 3;\n    }\n    CuDNNPoolingLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 2);\n    EXPECT_EQ(blob_top_->width(), 4);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 2);\n      EXPECT_EQ(blob_top_mask_->width(), 4);\n    }\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    //     [9 5 5 8]\n    //     [9 5 5 8]\n    for (int i = 0; i < 8 * num * channels; i += 8) {\n      EXPECT_EQ(blob_top_->cpu_data()[i + 0], 9);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 1], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 2], 5);\n      
EXPECT_EQ(blob_top_->cpu_data()[i + 3], 8);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 4], 9);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 5], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 6], 5);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 7], 8);\n    }\n    if (blob_top_vec_.size() > 1) {\n      // Expected mask output: 2x 2 channels of:\n      //     [5  2  2 9]\n      //     [5 12 12 9]\n      for (int i = 0; i < 8 * num * channels; i += 8) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 0],  5);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 1],  2);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 2],  2);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 3],  9);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 4],  5);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 5], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 6], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 7],  9);\n      }\n    }\n  }\n  // Test for 3x 2 rectangular pooling layer with kernel_h > kernel_w\n  void TestForwardRectHigh() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_h(3);\n    pooling_param->set_kernel_w(2);\n    pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 6, 6);\n    // Input: 2x 2 channels of:\n    // [35     1     6    26    19    24]\n    // [ 3    32     7    21    23    25]\n    // [31     9     2    22    27    20]\n    // [ 8    28    33    17    10    15]\n    // [30     5    34    12    14    16]\n    // [ 4    36    29    13    18    11]\n    // (this is generated by magic(6) in MATLAB)\n    for (int i = 0; i < 36 * num * channels; i += 36) {\n      blob_bottom_->mutable_cpu_data()[i +  0] = 35;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 6;\n      blob_bottom_->mutable_cpu_data()[i +  
3] = 26;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 19;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 24;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 32;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 7;\n      blob_bottom_->mutable_cpu_data()[i +  9] = 21;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 23;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 25;\n      blob_bottom_->mutable_cpu_data()[i + 12] = 31;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 9;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 15] = 22;\n      blob_bottom_->mutable_cpu_data()[i + 16] = 27;\n      blob_bottom_->mutable_cpu_data()[i + 17] = 20;\n      blob_bottom_->mutable_cpu_data()[i + 18] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 19] = 28;\n      blob_bottom_->mutable_cpu_data()[i + 20] = 33;\n      blob_bottom_->mutable_cpu_data()[i + 21] = 17;\n      blob_bottom_->mutable_cpu_data()[i + 22] = 10;\n      blob_bottom_->mutable_cpu_data()[i + 23] = 15;\n      blob_bottom_->mutable_cpu_data()[i + 24] = 30;\n      blob_bottom_->mutable_cpu_data()[i + 25] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 26] = 34;\n      blob_bottom_->mutable_cpu_data()[i + 27] = 12;\n      blob_bottom_->mutable_cpu_data()[i + 28] = 14;\n      blob_bottom_->mutable_cpu_data()[i + 29] = 16;\n      blob_bottom_->mutable_cpu_data()[i + 30] = 4;\n      blob_bottom_->mutable_cpu_data()[i + 31] = 36;\n      blob_bottom_->mutable_cpu_data()[i + 32] = 29;\n      blob_bottom_->mutable_cpu_data()[i + 33] = 13;\n      blob_bottom_->mutable_cpu_data()[i + 34] = 18;\n      blob_bottom_->mutable_cpu_data()[i + 35] = 11;\n    }\n    CuDNNPoolingLayer<Dtype> layer(layer_param);\n    layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 4);\n    
EXPECT_EQ(blob_top_->width(), 5);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 4);\n      EXPECT_EQ(blob_top_mask_->width(), 5);\n    }\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    // [35    32    26    27    27]\n    // [32    33    33    27    27]\n    // [31    34    34    27    27]\n    // [36    36    34    18    18]\n    for (int i = 0; i < 20 * num * channels; i += 20) {\n      EXPECT_EQ(blob_top_->cpu_data()[i +  0], 35);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  1], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  2], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  3], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  4], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  5], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  6], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  7], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  8], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  9], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 10], 31);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 11], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 12], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 13], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 14], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 15], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 16], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 17], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 18], 18);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 19], 18);\n    }\n    if (blob_top_vec_.size() > 1) {\n        // [ 1     8     4    17    17]\n        // [ 8    21    21    17    17]\n        // [13    27    27    17    17]\n        // [32    32    27    35    35]\n      for (int i = 0; i < 20 * num * channels; i += 20) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  0],  0);\n        
EXPECT_EQ(blob_top_mask_->cpu_data()[i +  1],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  2],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  3], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  4], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  5],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  6], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  7], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  8], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  9], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 10], 12);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 11], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 12], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 13], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 14], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 15], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 16], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 17], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 18], 34);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 19], 34);\n      }\n    }\n  }\n  // Test for rectangular pooling layer with kernel_w > kernel_h\n  void TestForwardRectWide() {\n    LayerParameter layer_param;\n    PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n    pooling_param->set_kernel_h(2);\n    pooling_param->set_kernel_w(3);\n    pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n    const int num = 2;\n    const int channels = 2;\n    blob_bottom_->Reshape(num, channels, 6, 6);\n    // Input: 2x 2 channels of:\n    // [35     1     6    26    19    24]\n    // [ 3    32     7    21    23    25]\n    // [31     9     2    22    27    20]\n    // [ 8    28    33    17    10    15]\n    // [30     5    34    12    14    16]\n    // [ 4    36    29    13    18    11]\n    // (this is generated by magic(6) in MATLAB)\n    for (int i = 0; i < 36 * num * channels; i += 36) {\n   
   blob_bottom_->mutable_cpu_data()[i +  0] = 35;\n      blob_bottom_->mutable_cpu_data()[i +  1] = 1;\n      blob_bottom_->mutable_cpu_data()[i +  2] = 6;\n      blob_bottom_->mutable_cpu_data()[i +  3] = 26;\n      blob_bottom_->mutable_cpu_data()[i +  4] = 19;\n      blob_bottom_->mutable_cpu_data()[i +  5] = 24;\n      blob_bottom_->mutable_cpu_data()[i +  6] = 3;\n      blob_bottom_->mutable_cpu_data()[i +  7] = 32;\n      blob_bottom_->mutable_cpu_data()[i +  8] = 7;\n      blob_bottom_->mutable_cpu_data()[i +  9] = 21;\n      blob_bottom_->mutable_cpu_data()[i + 10] = 23;\n      blob_bottom_->mutable_cpu_data()[i + 11] = 25;\n      blob_bottom_->mutable_cpu_data()[i + 12] = 31;\n      blob_bottom_->mutable_cpu_data()[i + 13] = 9;\n      blob_bottom_->mutable_cpu_data()[i + 14] = 2;\n      blob_bottom_->mutable_cpu_data()[i + 15] = 22;\n      blob_bottom_->mutable_cpu_data()[i + 16] = 27;\n      blob_bottom_->mutable_cpu_data()[i + 17] = 20;\n      blob_bottom_->mutable_cpu_data()[i + 18] = 8;\n      blob_bottom_->mutable_cpu_data()[i + 19] = 28;\n      blob_bottom_->mutable_cpu_data()[i + 20] = 33;\n      blob_bottom_->mutable_cpu_data()[i + 21] = 17;\n      blob_bottom_->mutable_cpu_data()[i + 22] = 10;\n      blob_bottom_->mutable_cpu_data()[i + 23] = 15;\n      blob_bottom_->mutable_cpu_data()[i + 24] = 30;\n      blob_bottom_->mutable_cpu_data()[i + 25] = 5;\n      blob_bottom_->mutable_cpu_data()[i + 26] = 34;\n      blob_bottom_->mutable_cpu_data()[i + 27] = 12;\n      blob_bottom_->mutable_cpu_data()[i + 28] = 14;\n      blob_bottom_->mutable_cpu_data()[i + 29] = 16;\n      blob_bottom_->mutable_cpu_data()[i + 30] = 4;\n      blob_bottom_->mutable_cpu_data()[i + 31] = 36;\n      blob_bottom_->mutable_cpu_data()[i + 32] = 29;\n      blob_bottom_->mutable_cpu_data()[i + 33] = 13;\n      blob_bottom_->mutable_cpu_data()[i + 34] = 18;\n      blob_bottom_->mutable_cpu_data()[i + 35] = 11;\n    }\n    CuDNNPoolingLayer<Dtype> layer(layer_param);\n    
layer.SetUp(blob_bottom_vec_, blob_top_vec_);\n    EXPECT_EQ(blob_top_->num(), num);\n    EXPECT_EQ(blob_top_->channels(), channels);\n    EXPECT_EQ(blob_top_->height(), 5);\n    EXPECT_EQ(blob_top_->width(), 4);\n    if (blob_top_vec_.size() > 1) {\n      EXPECT_EQ(blob_top_mask_->num(), num);\n      EXPECT_EQ(blob_top_mask_->channels(), channels);\n      EXPECT_EQ(blob_top_mask_->height(), 5);\n      EXPECT_EQ(blob_top_mask_->width(), 4);\n    }\n    layer.Forward(blob_bottom_vec_, blob_top_vec_);\n    // Expected output: 2x 2 channels of:\n    // [35    32    26    26]\n    // [32    32    27    27]\n    // [33    33    33    27]\n    // [34    34    34    17]\n    // [36    36    34    18]\n    for (int i = 0; i < 20 * num * channels; i += 20) {\n      EXPECT_EQ(blob_top_->cpu_data()[i +  0], 35);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  1], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  2], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  3], 26);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  4], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  5], 32);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  6], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  7], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  8], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i +  9], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 10], 33);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 11], 27);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 12], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 13], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 14], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 15], 17);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 16], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 17], 36);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 18], 34);\n      EXPECT_EQ(blob_top_->cpu_data()[i + 19], 18);\n    }\n    if (blob_top_vec_.size() > 1) {\n        // [ 1     8     4     4]\n        // [ 8     8    17    17]\n        // [21    21    21    17]\n        // [27    27    27    
22]\n        // [32    32    27    35]\n      for (int i = 0; i < 20 * num * channels; i += 20) {\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  0],  0);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  1],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  2],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  3],  3);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  4],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  5],  7);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  6], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  7], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  8], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i +  9], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 10], 20);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 11], 16);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 12], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 13], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 14], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 15], 21);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 16], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 17], 31);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 18], 26);\n        EXPECT_EQ(blob_top_mask_->cpu_data()[i + 19], 34);\n      }\n    }\n  }\n};\n\nTYPED_TEST_CASE(CuDNNPoolingLayerTest, TestDtypes);\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestSetupCuDNN) {\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  CuDNNPoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 2);\n}\n\nTYPED_TEST(CuDNNPoolingLayerTest, 
TestSetupPaddedCuDNN) {\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pad(1);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n  CuDNNPoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), 4);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\n/*\nTYPED_TEST(CuDNNPoolingLayerTest, PrintBackwardCuDNN) {\n  LayerParameter layer_param;\n  layer_param.set_kernelsize(3);\n  layer_param.set_stride(2);\n  layer_param.set_pool(LayerParameter_PoolMethod_MAX);\n  CuDNNPoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    cout << \"bottom data \" << i << \" \" << this->blob_bottom_->cpu_data()[i] << endl;\n  }\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    cout << \"top data \" << i << \" \" << this->blob_top_->cpu_data()[i] << endl;\n  }\n\n  for (int i = 0; i < this->blob_top_->count(); ++i) {\n    this->blob_top_->mutable_cpu_diff()[i] = i;\n  }\n  layer.Backward(this->blob_top_vec_, true, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    cout << \"bottom diff \" << i << \" \" << this->blob_bottom_->cpu_diff()[i] << endl;\n  }\n}\n*/\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestForwardMaxCuDNN) {\n  this->TestForwardSquare();\n  this->TestForwardRectHigh();\n  this->TestForwardRectWide();\n}\n\n// Currently, cuDNN does not support a top mask, so we comment this and\n// the corresponding backward test.\n/*\nTYPED_TEST(CuDNNPoolingLayerTest, 
TestForwardMaxTopMaskCuDNN) {\n  this->blob_top_vec_.push_back(this->blob_top_mask_);\n  this->TestForwardSquare();\n  this->TestForwardRectHigh();\n  this->TestForwardRectWide();\n}\n*/\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestGradientMaxCuDNN) {\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      // currently, cuDNN pooling does not support padding\n      pooling_param->set_pad(0);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n      CuDNNPoolingLayer<TypeParam> layer(layer_param);\n      GradientChecker<TypeParam> checker(1e-4, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestForwardMaxPaddedCuDNN) {\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pad(2);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n  this->blob_bottom_->Reshape(1, 1, 3, 3);\n  // Input:\n  //     [ 1 2 4 ]\n  //     [ 2 3 2 ]\n  //     [ 4 2 1 ]\n  this->blob_bottom_->mutable_cpu_data()[0] = 1;\n  this->blob_bottom_->mutable_cpu_data()[1] = 2;\n  this->blob_bottom_->mutable_cpu_data()[2] = 4;\n  this->blob_bottom_->mutable_cpu_data()[3] = 2;\n  this->blob_bottom_->mutable_cpu_data()[4] = 3;\n  this->blob_bottom_->mutable_cpu_data()[5] = 2;\n  this->blob_bottom_->mutable_cpu_data()[6] = 4;\n  this->blob_bottom_->mutable_cpu_data()[7] = 2;\n  this->blob_bottom_->mutable_cpu_data()[8] = 1;\n  CuDNNPoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, 
this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  TypeParam epsilon = 1e-8;\n  // Output:\n  //     [ 1 4 4 ]\n  //     [ 4 4 4 ]\n  //     [ 4 4 1 ]\n  EXPECT_NEAR(this->blob_top_->cpu_data()[0], 1, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[1], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[2], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[3], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[4], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[5], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[6], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[7], 4, epsilon);\n  EXPECT_NEAR(this->blob_top_->cpu_data()[8], 1, epsilon);\n}\n\n/*\nTYPED_TEST(CuDNNPoolingLayerTest, TestGradientMaxTopMaskCuDNN) {\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_MAX);\n      this->blob_top_vec_.push_back(this->blob_top_mask_);\n      CuDNNPoolingLayer<TypeParam> layer(layer_param);\n      GradientChecker<TypeParam> checker(1e-4, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n      this->blob_top_vec_.pop_back();\n    }\n  }\n}\n*/\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestForwardAveCuDNN) {\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(1);\n  // Currently, cuDNN pooling 
does not support padding, so we use\n  // a simplified version of this test.\n  pooling_param->set_pad(0);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n  this->blob_bottom_->Reshape(1, 1, 3, 3);\n  FillerParameter filler_param;\n  filler_param.set_value(TypeParam(2));\n  ConstantFiller<TypeParam> filler(filler_param);\n  filler.Fill(this->blob_bottom_);\n  CuDNNPoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 1);\n  EXPECT_EQ(this->blob_top_->channels(), 1);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  TypeParam epsilon = 1e-5;\n  EXPECT_NEAR(this->blob_top_->cpu_data()[0], 2.0, epsilon);\n}\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestGradientAveCuDNN) {\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n      CuDNNPoolingLayer<TypeParam> layer(layer_param);\n      GradientChecker<TypeParam> checker(1e-2, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\nTYPED_TEST(CuDNNPoolingLayerTest, TestGradientAvePaddedCuDNN) {\n  for (int kernel_h = 3; kernel_h <= 4; kernel_h++) {\n    for (int kernel_w = 3; kernel_w <= 4; kernel_w++) {\n      LayerParameter layer_param;\n      PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n      pooling_param->set_kernel_h(kernel_h);\n      pooling_param->set_kernel_w(kernel_w);\n      pooling_param->set_stride(2);\n      pooling_param->set_pad(2);\n      
pooling_param->set_pool(PoolingParameter_PoolMethod_AVE);\n      CuDNNPoolingLayer<TypeParam> layer(layer_param);\n      GradientChecker<TypeParam> checker(1e-2, 1e-2);\n      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n          this->blob_top_vec_);\n    }\n  }\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_power_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/power_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass PowerLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  PowerLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~PowerLayerTest() { delete blob_bottom_; delete blob_top_; }\n\n  void TestForward(Dtype power, Dtype scale, Dtype shift) {\n    LayerParameter layer_param;\n    layer_param.mutable_power_param()->set_power(power);\n    layer_param.mutable_power_param()->set_scale(scale);\n    layer_param.mutable_power_param()->set_shift(shift);\n    PowerLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    // Now, check values\n    const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n    const Dtype* top_data = this->blob_top_->cpu_data();\n    const Dtype min_precision = 1e-5;\n    for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n      Dtype expected_value = pow(shift + scale * bottom_data[i], power);\n      if (power == Dtype(0) || power == Dtype(1) || power == Dtype(2)) {\n        EXPECT_FALSE(isnan(top_data[i]));\n      }\n      if (isnan(expected_value)) {\n        EXPECT_TRUE(isnan(top_data[i]));\n      } else {\n        Dtype precision = std::max(\n          
Dtype(std::abs(expected_value * Dtype(1e-4))), min_precision);\n        EXPECT_NEAR(expected_value, top_data[i], precision);\n      }\n    }\n  }\n\n  void TestBackward(Dtype power, Dtype scale, Dtype shift) {\n    LayerParameter layer_param;\n    layer_param.mutable_power_param()->set_power(power);\n    layer_param.mutable_power_param()->set_scale(scale);\n    layer_param.mutable_power_param()->set_shift(shift);\n    PowerLayer<Dtype> layer(layer_param);\n    if (power != Dtype(0) && power != Dtype(1) && power != Dtype(2)) {\n      // Avoid NaNs by forcing (shift + scale * x) >= 0\n      Dtype* bottom_data = this->blob_bottom_->mutable_cpu_data();\n      Dtype min_value = -shift / scale;\n      for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n        if (bottom_data[i] < min_value) {\n          bottom_data[i] = min_value + (min_value - bottom_data[i]);\n        }\n      }\n    }\n    GradientChecker<Dtype> checker(1e-3, 1e-2, 1701, 0., 0.01);\n    checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n        this->blob_top_vec_);\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(PowerLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(PowerLayerTest, TestPower) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 0.37;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestForward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 0.37;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestBackward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerGradientShiftZero) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 0.37;\n  Dtype scale = 0.83;\n  Dtype shift = 0.0;\n  this->TestBackward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerZero) {\n  typedef typename TypeParam::Dtype 
Dtype;\n  Dtype power = 0.0;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestForward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerZeroGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 0.0;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestBackward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerOne) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 1.0;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestForward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerOneGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 1.0;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestBackward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerTwo) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 2.0;\n  Dtype scale = 0.34;\n  Dtype shift = -2.4;\n  this->TestForward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerTwoGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 2.0;\n  Dtype scale = 0.83;\n  Dtype shift = -2.4;\n  this->TestBackward(power, scale, shift);\n}\n\nTYPED_TEST(PowerLayerTest, TestPowerTwoScaleHalfGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  Dtype power = 2.0;\n  Dtype scale = 0.5;\n  Dtype shift = -2.4;\n  this->TestBackward(power, scale, shift);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_protobuf.cpp",
    "content": "// This is simply a script that tries serializing protocol buffer in text\n// format. Nothing special here and no actual code is being tested.\n#include <string>\n\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/proto/caffe.pb.h\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nclass ProtoTest : public ::testing::Test {};\n\nTEST_F(ProtoTest, TestSerialization) {\n  LayerParameter param;\n  param.set_name(\"test\");\n  param.set_type(\"Test\");\n  std::cout << \"Printing in binary format.\" << std::endl;\n  std::cout << param.SerializeAsString() << std::endl;\n  std::cout << \"Printing in text format.\" << std::endl;\n  std::string str;\n  google::protobuf::TextFormat::PrintToString(param, &str);\n  std::cout << str << std::endl;\n  EXPECT_TRUE(true);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_random_number_generator.cpp",
    "content": "#include <cmath>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename Dtype>\nclass RandomNumberGeneratorTest : public ::testing::Test {\n protected:\n  RandomNumberGeneratorTest()\n     : mean_bound_multiplier_(3.8),  // ~99.99% confidence for test failure.\n       sample_size_(10000),\n       seed_(1701),\n       data_(new SyncedMemory(sample_size_ * sizeof(Dtype))),\n       data_2_(new SyncedMemory(sample_size_ * sizeof(Dtype))),\n       int_data_(new SyncedMemory(sample_size_ * sizeof(int))),\n       int_data_2_(new SyncedMemory(sample_size_ * sizeof(int))) {}\n\n  virtual void SetUp() {\n    Caffe::set_random_seed(this->seed_);\n  }\n\n  Dtype sample_mean(const Dtype* const seqs, const int sample_size) {\n    Dtype sum = 0;\n    for (int i = 0; i < sample_size; ++i) {\n      sum += seqs[i];\n    }\n    return sum / sample_size;\n  }\n\n  Dtype sample_mean(const Dtype* const seqs) {\n    return sample_mean(seqs, sample_size_);\n  }\n\n  Dtype sample_mean(const int* const seqs, const int sample_size) {\n    Dtype sum = 0;\n    for (int i = 0; i < sample_size; ++i) {\n      sum += Dtype(seqs[i]);\n    }\n    return sum / sample_size;\n  }\n\n  Dtype sample_mean(const int* const seqs) {\n    return sample_mean(seqs, sample_size_);\n  }\n\n  Dtype mean_bound(const Dtype std, const int sample_size) {\n    return mean_bound_multiplier_ * std / sqrt(static_cast<Dtype>(sample_size));\n  }\n\n  Dtype mean_bound(const Dtype std) {\n    return mean_bound(std, sample_size_);\n  }\n\n  void RngGaussianFill(const Dtype mu, const Dtype sigma, void* cpu_data) {\n    Dtype* rng_data = static_cast<Dtype*>(cpu_data);\n    caffe_rng_gaussian(sample_size_, mu, sigma, rng_data);\n  }\n\n  void RngGaussianChecks(const Dtype mu, const Dtype sigma,\n                         const void* cpu_data, 
const Dtype sparse_p = 0) {\n    const Dtype* rng_data = static_cast<const Dtype*>(cpu_data);\n    const Dtype true_mean = mu;\n    const Dtype true_std = sigma;\n    // Check that sample mean roughly matches true mean.\n    const Dtype bound = this->mean_bound(true_std);\n    const Dtype sample_mean = this->sample_mean(\n        static_cast<const Dtype*>(cpu_data));\n    EXPECT_NEAR(sample_mean, true_mean, bound);\n    // Check that roughly half the samples are above the true mean.\n    int num_above_mean = 0;\n    int num_below_mean = 0;\n    int num_mean = 0;\n    int num_nan = 0;\n    for (int i = 0; i < sample_size_; ++i) {\n      if (rng_data[i] > true_mean) {\n        ++num_above_mean;\n      } else if (rng_data[i] < true_mean) {\n        ++num_below_mean;\n      } else if (rng_data[i] == true_mean) {\n        ++num_mean;\n      } else {\n        ++num_nan;\n      }\n    }\n    EXPECT_EQ(0, num_nan);\n    if (sparse_p == Dtype(0)) {\n      EXPECT_EQ(0, num_mean);\n    }\n    const Dtype sample_p_above_mean =\n        static_cast<Dtype>(num_above_mean) / sample_size_;\n    const Dtype bernoulli_p = (1 - sparse_p) * 0.5;\n    const Dtype bernoulli_std = sqrt(bernoulli_p * (1 - bernoulli_p));\n    const Dtype bernoulli_bound = this->mean_bound(bernoulli_std);\n    EXPECT_NEAR(bernoulli_p, sample_p_above_mean, bernoulli_bound);\n  }\n\n  void RngUniformFill(const Dtype lower, const Dtype upper, void* cpu_data) {\n    CHECK_GE(upper, lower);\n    Dtype* rng_data = static_cast<Dtype*>(cpu_data);\n    caffe_rng_uniform(sample_size_, lower, upper, rng_data);\n  }\n\n  void RngUniformChecks(const Dtype lower, const Dtype upper,\n                        const void* cpu_data, const Dtype sparse_p = 0) {\n    const Dtype* rng_data = static_cast<const Dtype*>(cpu_data);\n    const Dtype true_mean = (lower + upper) / 2;\n    const Dtype true_std = (upper - lower) / sqrt(12);\n    // Check that sample mean roughly matches true mean.\n    const Dtype bound = 
this->mean_bound(true_std);\n    const Dtype sample_mean = this->sample_mean(rng_data);\n    EXPECT_NEAR(sample_mean, true_mean, bound);\n    // Check that roughly half the samples are above the true mean, and none are\n    // above upper or below lower.\n    int num_above_mean = 0;\n    int num_below_mean = 0;\n    int num_mean = 0;\n    int num_nan = 0;\n    int num_above_upper = 0;\n    int num_below_lower = 0;\n    for (int i = 0; i < sample_size_; ++i) {\n      if (rng_data[i] > true_mean) {\n        ++num_above_mean;\n      } else if (rng_data[i] < true_mean) {\n        ++num_below_mean;\n      } else if (rng_data[i] == true_mean) {\n        ++num_mean;\n      } else {\n        ++num_nan;\n      }\n      if (rng_data[i] > upper) {\n        ++num_above_upper;\n      } else if (rng_data[i] < lower) {\n        ++num_below_lower;\n      }\n    }\n    EXPECT_EQ(0, num_nan);\n    EXPECT_EQ(0, num_above_upper);\n    EXPECT_EQ(0, num_below_lower);\n    if (sparse_p == Dtype(0)) {\n      EXPECT_EQ(0, num_mean);\n    }\n    const Dtype sample_p_above_mean =\n        static_cast<Dtype>(num_above_mean) / sample_size_;\n    const Dtype bernoulli_p = (1 - sparse_p) * 0.5;\n    const Dtype bernoulli_std = sqrt(bernoulli_p * (1 - bernoulli_p));\n    const Dtype bernoulli_bound = this->mean_bound(bernoulli_std);\n    EXPECT_NEAR(bernoulli_p, sample_p_above_mean, bernoulli_bound);\n  }\n\n  void RngBernoulliFill(const Dtype p, void* cpu_data) {\n    int* rng_data = static_cast<int*>(cpu_data);\n    caffe_rng_bernoulli(sample_size_, p, rng_data);\n  }\n\n  void RngBernoulliChecks(const Dtype p, const void* cpu_data) {\n    const int* rng_data = static_cast<const int*>(cpu_data);\n    const Dtype true_mean = p;\n    const Dtype true_std = sqrt(p * (1 - p));\n    const Dtype bound = this->mean_bound(true_std);\n    const Dtype sample_mean = this->sample_mean(rng_data);\n    EXPECT_NEAR(sample_mean, true_mean, bound);\n  }\n\n#ifndef CPU_ONLY\n\n  void RngGaussianFillGPU(const 
Dtype mu, const Dtype sigma, void* gpu_data) {\n    Dtype* rng_data = static_cast<Dtype*>(gpu_data);\n    caffe_gpu_rng_gaussian(sample_size_, mu, sigma, rng_data);\n  }\n\n  void RngUniformFillGPU(const Dtype lower, const Dtype upper, void* gpu_data) {\n    CHECK_GE(upper, lower);\n    Dtype* rng_data = static_cast<Dtype*>(gpu_data);\n    caffe_gpu_rng_uniform(sample_size_, lower, upper, rng_data);\n  }\n\n  // Fills with uniform integers in [0, UINT_MAX] using 2 argument form of\n  // caffe_gpu_rng_uniform.\n  void RngUniformIntFillGPU(void* gpu_data) {\n    unsigned int* rng_data = static_cast<unsigned int*>(gpu_data);\n    caffe_gpu_rng_uniform(sample_size_, rng_data);\n  }\n\n#endif\n\n  int num_above_mean;\n  int num_below_mean;\n\n  Dtype mean_bound_multiplier_;\n\n  size_t sample_size_;\n  uint32_t seed_;\n\n  shared_ptr<SyncedMemory> data_;\n  shared_ptr<SyncedMemory> data_2_;\n  shared_ptr<SyncedMemory> int_data_;\n  shared_ptr<SyncedMemory> int_data_2_;\n};\n\nTYPED_TEST_CASE(RandomNumberGeneratorTest, TestDtypes);\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussian) {\n  const TypeParam mu = 0;\n  const TypeParam sigma = 1;\n  void* gaussian_data = this->data_->mutable_cpu_data();\n  this->RngGaussianFill(mu, sigma, gaussian_data);\n  this->RngGaussianChecks(mu, sigma, gaussian_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussian2) {\n  const TypeParam mu = -2;\n  const TypeParam sigma = 3;\n  void* gaussian_data = this->data_->mutable_cpu_data();\n  this->RngGaussianFill(mu, sigma, gaussian_data);\n  this->RngGaussianChecks(mu, sigma, gaussian_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniform) {\n  const TypeParam lower = 0;\n  const TypeParam upper = 1;\n  void* uniform_data = this->data_->mutable_cpu_data();\n  this->RngUniformFill(lower, upper, uniform_data);\n  this->RngUniformChecks(lower, upper, uniform_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniform2) {\n  const TypeParam lower = -7.3;\n 
 const TypeParam upper = -2.3;\n  void* uniform_data = this->data_->mutable_cpu_data();\n  this->RngUniformFill(lower, upper, uniform_data);\n  this->RngUniformChecks(lower, upper, uniform_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngBernoulli) {\n  const TypeParam p = 0.3;\n  void* bernoulli_data = this->int_data_->mutable_cpu_data();\n  this->RngBernoulliFill(p, bernoulli_data);\n  this->RngBernoulliChecks(p, bernoulli_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngBernoulli2) {\n  const TypeParam p = 0.9;\n  void* bernoulli_data = this->int_data_->mutable_cpu_data();\n  this->RngBernoulliFill(p, bernoulli_data);\n  this->RngBernoulliChecks(p, bernoulli_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussianTimesGaussian) {\n  const TypeParam mu = 0;\n  const TypeParam sigma = 1;\n\n  // Sample from 0 mean Gaussian.\n  TypeParam* gaussian_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  this->RngGaussianFill(mu, sigma, gaussian_data_1);\n\n  // Sample from 0 mean Gaussian again.\n  TypeParam* gaussian_data_2 =\n      static_cast<TypeParam*>(this->data_2_->mutable_cpu_data());\n  this->RngGaussianFill(mu, sigma, gaussian_data_2);\n\n  // Multiply Gaussians.\n  for (int i = 0; i < this->sample_size_; ++i) {\n    gaussian_data_1[i] *= gaussian_data_2[i];\n  }\n\n  // Check that result has mean 0.\n  TypeParam mu_product = pow(mu, 2);\n  TypeParam sigma_product = sqrt(pow(sigma, 2) / 2);\n  this->RngGaussianChecks(mu_product, sigma_product, gaussian_data_1);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniformTimesUniform) {\n  // Sample from Uniform on [-2, 2].\n  const TypeParam lower_1 = -2;\n  const TypeParam upper_1 = -lower_1;\n  TypeParam* uniform_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  this->RngUniformFill(lower_1, upper_1, uniform_data_1);\n\n  // Sample from Uniform on [-3, 3].\n  const TypeParam lower_2 = -3;\n  const TypeParam upper_2 = 
-lower_2;\n  TypeParam* uniform_data_2 =\n      static_cast<TypeParam*>(this->data_2_->mutable_cpu_data());\n  this->RngUniformFill(lower_2, upper_2, uniform_data_2);\n\n  // Multiply Uniforms.\n  for (int i = 0; i < this->sample_size_; ++i) {\n    uniform_data_1[i] *= uniform_data_2[i];\n  }\n\n  // Check that result does not violate checked properties of Uniform on [-6, 6]\n  // (though it is not actually uniformly distributed).\n  const TypeParam lower_prod = lower_1 * upper_2;\n  const TypeParam upper_prod = -lower_prod;\n  this->RngUniformChecks(lower_prod, upper_prod, uniform_data_1);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussianTimesBernoulli) {\n  // Sample from 0 mean Gaussian.\n  const TypeParam mu = 0;\n  const TypeParam sigma = 1;\n  TypeParam* gaussian_data =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  this->RngGaussianFill(mu, sigma, gaussian_data);\n\n  // Sample from Bernoulli with p = 0.3.\n  const TypeParam bernoulli_p = 0.3;\n  int* bernoulli_data =\n      static_cast<int*>(this->int_data_->mutable_cpu_data());\n  this->RngBernoulliFill(bernoulli_p, bernoulli_data);\n\n  // Multiply Gaussian by Bernoulli.\n  for (int i = 0; i < this->sample_size_; ++i) {\n    gaussian_data[i] *= bernoulli_data[i];\n  }\n\n  // Check that result does not violate checked properties of sparsified\n  // Gaussian (though it is not actually a Gaussian).\n  this->RngGaussianChecks(mu, sigma, gaussian_data, 1 - bernoulli_p);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniformTimesBernoulli) {\n  // Sample from Uniform on [-1, 1].\n  const TypeParam lower = -1;\n  const TypeParam upper = 1;\n  TypeParam* uniform_data =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  this->RngUniformFill(lower, upper, uniform_data);\n\n  // Sample from Bernoulli with p = 0.3.\n  const TypeParam bernoulli_p = 0.3;\n  int* bernoulli_data =\n      static_cast<int*>(this->int_data_->mutable_cpu_data());\n  
this->RngBernoulliFill(bernoulli_p, bernoulli_data);\n\n  // Multiply Uniform by Bernoulli.\n  for (int i = 0; i < this->sample_size_; ++i) {\n    uniform_data[i] *= bernoulli_data[i];\n  }\n\n  // Check that result does not violate checked properties of sparsified\n  // Uniform on [-1, 1] (though it is not actually uniformly distributed).\n  this->RngUniformChecks(lower, upper, uniform_data, 1 - bernoulli_p);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngBernoulliTimesBernoulli) {\n  // Sample from Bernoulli with p = 0.5.\n  const TypeParam p_a = 0.5;\n  int* bernoulli_data_a =\n      static_cast<int*>(this->int_data_->mutable_cpu_data());\n  this->RngBernoulliFill(p_a, bernoulli_data_a);\n\n  // Sample from Bernoulli with p = 0.3.\n  const TypeParam p_b = 0.3;\n  int* bernoulli_data_b =\n      static_cast<int*>(this->int_data_2_->mutable_cpu_data());\n  this->RngBernoulliFill(p_b, bernoulli_data_b);\n\n  // Multiply Bernoullis.\n  for (int i = 0; i < this->sample_size_; ++i) {\n    bernoulli_data_a[i] *= bernoulli_data_b[i];\n  }\n  int num_ones = 0;\n  for (int i = 0; i < this->sample_size_; ++i) {\n    if (bernoulli_data_a[i] != TypeParam(0)) {\n      EXPECT_EQ(TypeParam(1), bernoulli_data_a[i]);\n      ++num_ones;\n    }\n  }\n\n  // Check that resulting product has roughly p_a * p_b ones.\n  const TypeParam sample_p = this->sample_mean(bernoulli_data_a);\n  const TypeParam true_mean = p_a * p_b;\n  const TypeParam true_std = sqrt(true_mean * (1 - true_mean));\n  const TypeParam bound = this->mean_bound(true_std);\n  EXPECT_NEAR(true_mean, sample_p, bound);\n}\n\n#ifndef CPU_ONLY\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussianGPU) {\n  const TypeParam mu = 0;\n  const TypeParam sigma = 1;\n  void* gaussian_gpu_data = this->data_->mutable_gpu_data();\n  this->RngGaussianFillGPU(mu, sigma, gaussian_gpu_data);\n  const void* gaussian_data = this->data_->cpu_data();\n  this->RngGaussianChecks(mu, sigma, 
gaussian_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussian2GPU) {\n  const TypeParam mu = -2;\n  const TypeParam sigma = 3;\n  void* gaussian_gpu_data = this->data_->mutable_gpu_data();\n  this->RngGaussianFillGPU(mu, sigma, gaussian_gpu_data);\n  const void* gaussian_data = this->data_->cpu_data();\n  this->RngGaussianChecks(mu, sigma, gaussian_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniformGPU) {\n  const TypeParam lower = 0;\n  const TypeParam upper = 1;\n  void* uniform_gpu_data = this->data_->mutable_gpu_data();\n  this->RngUniformFillGPU(lower, upper, uniform_gpu_data);\n  const void* uniform_data = this->data_->cpu_data();\n  this->RngUniformChecks(lower, upper, uniform_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniform2GPU) {\n  const TypeParam lower = -7.3;\n  const TypeParam upper = -2.3;\n  void* uniform_gpu_data = this->data_->mutable_gpu_data();\n  this->RngUniformFillGPU(lower, upper, uniform_gpu_data);\n  const void* uniform_data = this->data_->cpu_data();\n  this->RngUniformChecks(lower, upper, uniform_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniformIntGPU) {\n  unsigned int* uniform_uint_gpu_data =\n      static_cast<unsigned int*>(this->int_data_->mutable_gpu_data());\n  this->RngUniformIntFillGPU(uniform_uint_gpu_data);\n  const unsigned int* uniform_uint_data =\n      static_cast<const unsigned int*>(this->int_data_->cpu_data());\n  TypeParam* uniform_data =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  for (int i = 0; i < this->sample_size_; ++i) {\n    uniform_data[i] = static_cast<const TypeParam>(uniform_uint_data[i]);\n  }\n  const TypeParam lower = 0;\n  const TypeParam upper = UINT_MAX;\n  this->RngUniformChecks(lower, upper, uniform_data);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngGaussianTimesGaussianGPU) {\n  const TypeParam mu = 0;\n  const TypeParam sigma = 1;\n\n  // Sample from 0 mean Gaussian.\n  TypeParam* 
gaussian_gpu_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_gpu_data());\n  this->RngGaussianFillGPU(mu, sigma, gaussian_gpu_data_1);\n\n  // Sample from 0 mean Gaussian again.\n  TypeParam* gaussian_gpu_data_2 =\n      static_cast<TypeParam*>(this->data_2_->mutable_gpu_data());\n  this->RngGaussianFillGPU(mu, sigma, gaussian_gpu_data_2);\n\n  // Multiply Gaussians.\n  TypeParam* gaussian_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  const TypeParam* gaussian_data_2 =\n      static_cast<const TypeParam*>(this->data_2_->cpu_data());\n  for (int i = 0; i < this->sample_size_; ++i) {\n    gaussian_data_1[i] *= gaussian_data_2[i];\n  }\n\n  // Check that result does not violate checked properties of Gaussian\n  // (though it is not actually a Gaussian).\n  TypeParam mu_product = pow(mu, 2);\n  TypeParam sigma_product = sqrt(pow(sigma, 2) / 2);\n  this->RngGaussianChecks(mu_product, sigma_product, gaussian_data_1);\n}\n\n\nTYPED_TEST(RandomNumberGeneratorTest, TestRngUniformTimesUniformGPU) {\n  // Sample from Uniform on [-2, 2].\n  const TypeParam lower_1 = -2;\n  const TypeParam upper_1 = -lower_1;\n  TypeParam* uniform_gpu_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_gpu_data());\n  this->RngUniformFillGPU(lower_1, upper_1, uniform_gpu_data_1);\n\n  // Sample from Uniform on [-3, 3].\n  const TypeParam lower_2 = -3;\n  const TypeParam upper_2 = -lower_2;\n  TypeParam* uniform_gpu_data_2 =\n      static_cast<TypeParam*>(this->data_2_->mutable_gpu_data());\n  this->RngUniformFillGPU(lower_2, upper_2, uniform_gpu_data_2);\n\n  // Multiply Uniforms.\n  TypeParam* uniform_data_1 =\n      static_cast<TypeParam*>(this->data_->mutable_cpu_data());\n  const TypeParam* uniform_data_2 =\n      static_cast<const TypeParam*>(this->data_2_->cpu_data());\n  for (int i = 0; i < this->sample_size_; ++i) {\n    uniform_data_1[i] *= uniform_data_2[i];\n  }\n\n  // Check that result does not violate properties of Uniform on 
[-6, 6].\n  const TypeParam lower_prod = lower_1 * upper_2;\n  const TypeParam upper_prod = -lower_prod;\n  this->RngUniformChecks(lower_prod, upper_prod, uniform_data_1);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_reduction_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/reduction_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ReductionLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ReductionLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~ReductionLayerTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n  }\n\n  void TestForward(ReductionParameter_ReductionOp op,\n                   float coeff = 1, int axis = 0) {\n    LayerParameter layer_param;\n    ReductionParameter* reduction_param = layer_param.mutable_reduction_param();\n    reduction_param->set_operation(op);\n    if (coeff != 1.0) { reduction_param->set_coeff(coeff); }\n    if (axis != 0) { reduction_param->set_axis(axis); }\n    shared_ptr<ReductionLayer<Dtype> > layer(\n        new ReductionLayer<Dtype>(layer_param));\n    layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    const Dtype* in_data = this->blob_bottom_->cpu_data();\n    const int num = this->blob_bottom_->count(0, axis);\n    const int dim = this->blob_bottom_->count(axis);\n    for (int n = 0; n < num; ++n) {\n      Dtype expected_result = 0;\n      for (int d = 0; d < dim; ++d) {\n        switch (op) {\n          case ReductionParameter_ReductionOp_SUM:\n            expected_result += *in_data;\n            break;\n         
 case ReductionParameter_ReductionOp_MEAN:\n            expected_result += *in_data / dim;\n            break;\n          case ReductionParameter_ReductionOp_ASUM:\n            expected_result += fabs(*in_data);\n            break;\n          case ReductionParameter_ReductionOp_SUMSQ:\n            expected_result += (*in_data) * (*in_data);\n            break;\n          default:\n            LOG(FATAL) << \"Unknown reduction op: \"\n                << ReductionParameter_ReductionOp_Name(op);\n        }\n        ++in_data;\n      }\n      expected_result *= coeff;\n      const Dtype computed_result = this->blob_top_->cpu_data()[n];\n      EXPECT_FLOAT_EQ(expected_result, computed_result)\n          << \"Incorrect result computed with op \"\n          << ReductionParameter_ReductionOp_Name(op) << \", coeff \" << coeff;\n    }\n  }\n\n  void TestGradient(ReductionParameter_ReductionOp op,\n                    float coeff = 1, int axis = 0) {\n    typedef typename TypeParam::Dtype Dtype;\n    LayerParameter layer_param;\n    ReductionParameter* reduction_param = layer_param.mutable_reduction_param();\n    reduction_param->set_operation(op);\n    reduction_param->set_coeff(coeff);\n    reduction_param->set_axis(axis);\n    ReductionLayer<Dtype> layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 2e-3);\n    checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n        this->blob_top_vec_);\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ReductionLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ReductionLayerTest, TestSetUp) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  shared_ptr<ReductionLayer<Dtype> > layer(\n      new ReductionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 
0);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSetUpWithAxis1) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reduction_param()->set_axis(1);\n  shared_ptr<ReductionLayer<Dtype> > layer(\n      new ReductionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 1);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSetUpWithAxis2) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reduction_param()->set_axis(2);\n  shared_ptr<ReductionLayer<Dtype> > layer(\n      new ReductionLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_top_->num_axes(), 2);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSum) {\n  const ReductionParameter_ReductionOp kOp = ReductionParameter_ReductionOp_SUM;\n  this->TestForward(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumCoeff) {\n  const ReductionParameter_ReductionOp kOp = ReductionParameter_ReductionOp_SUM;\n  const float kCoeff = 2.3;\n  this->TestForward(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumCoeffAxis1) {\n  const ReductionParameter_ReductionOp kOp = ReductionParameter_ReductionOp_SUM;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestForward(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumGradient) {\n  const ReductionParameter_ReductionOp kOp = ReductionParameter_ReductionOp_SUM;\n  this->TestGradient(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumCoeffGradient) {\n  const ReductionParameter_ReductionOp kOp = ReductionParameter_ReductionOp_SUM;\n  const float kCoeff = 2.3;\n  this->TestGradient(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumCoeffAxis1Gradient) {\n  const ReductionParameter_ReductionOp kOp = 
ReductionParameter_ReductionOp_SUM;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestGradient(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMean) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  this->TestForward(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMeanCoeff) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  const float kCoeff = 2.3;\n  this->TestForward(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMeanCoeffAxis1) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestForward(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMeanGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  this->TestGradient(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMeanCoeffGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  const float kCoeff = 2.3;\n  this->TestGradient(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestMeanCoeffGradientAxis1) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_MEAN;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestGradient(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSum) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  this->TestForward(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSumCoeff) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  const float kCoeff = 2.3;\n  this->TestForward(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSumCoeffAxis1) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  
this->TestForward(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSumGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  this->TestGradient(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSumCoeffGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  const float kCoeff = 2.3;\n  this->TestGradient(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestAbsSumCoeffAxis1Gradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_ASUM;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestGradient(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquares) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  this->TestForward(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquaresCoeff) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  const float kCoeff = 2.3;\n  this->TestForward(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquaresCoeffAxis1) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestForward(kOp, kCoeff, kAxis);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquaresGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  this->TestGradient(kOp);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquaresCoeffGradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  const float kCoeff = 2.3;\n  this->TestGradient(kOp, kCoeff);\n}\n\nTYPED_TEST(ReductionLayerTest, TestSumOfSquaresCoeffAxis1Gradient) {\n  const ReductionParameter_ReductionOp kOp =\n      ReductionParameter_ReductionOp_SUMSQ;\n  const float kCoeff = 2.3;\n  const int kAxis = 1;\n  this->TestGradient(kOp, kCoeff, 
kAxis);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_reshape_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/reshape_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ReshapeLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  ReshapeLayerTest()\n    : blob_bottom_(new Blob<Dtype>(2, 3, 6, 5)),\n      blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~ReshapeLayerTest() { delete blob_bottom_; delete blob_top_; }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ReshapeLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ReshapeLayerTest, TestFlattenOutputSizes) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(-1);\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3 * 6 * 5);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestFlattenValues) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(-1);\n  
blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int c = 0; c < 3 * 6 * 5; ++c) {\n    EXPECT_EQ(this->blob_top_->data_at(0, c, 0, 0),\n        this->blob_bottom_->data_at(0, c / (6 * 5), (c / 5) % 6, c % 5));\n    EXPECT_EQ(this->blob_top_->data_at(1, c, 0, 0),\n        this->blob_bottom_->data_at(1, c / (6 * 5), (c / 5) % 6, c % 5));\n  }\n}\n\n// Test whether setting output dimensions to 0 either explicitly or implicitly\n// copies the respective dimension of the input layer.\nTYPED_TEST(ReshapeLayerTest, TestCopyDimensions) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(0);\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 6);\n  EXPECT_EQ(this->blob_top_->width(), 5);\n}\n\n// When a dimension is set to -1, we should infer its value from the other\n// dimensions (including those that get copied from below).\nTYPED_TEST(ReshapeLayerTest, TestInferenceOfUnspecified) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(0);\n  blob_shape->add_dim(3);\n  blob_shape->add_dim(10);\n  blob_shape->add_dim(-1);\n\n  // Count is 2*3*6*5 = 180, thus the inferred width should be\n  // 180 / (2*3*10) = 3.\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  
EXPECT_EQ(this->blob_top_->height(), 10);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestInferenceOfUnspecifiedWithStartAxis) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reshape_param()->set_axis(1);\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(3);\n  blob_shape->add_dim(10);\n  blob_shape->add_dim(-1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  ASSERT_EQ(this->blob_top_->num_axes(), 4);\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 3);\n  EXPECT_EQ(this->blob_top_->height(), 10);\n  EXPECT_EQ(this->blob_top_->width(), 3);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestInsertSingletonAxesStart) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reshape_param()->set_axis(0);\n  layer_param.mutable_reshape_param()->set_num_axes(0);\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  ASSERT_EQ(this->blob_top_->num_axes(), 7);\n  EXPECT_EQ(this->blob_top_->shape(0), 1);\n  EXPECT_EQ(this->blob_top_->shape(1), 1);\n  EXPECT_EQ(this->blob_top_->shape(2), 1);\n  EXPECT_EQ(this->blob_top_->shape(3), 2);\n  EXPECT_EQ(this->blob_top_->shape(4), 3);\n  EXPECT_EQ(this->blob_top_->shape(5), 6);\n  EXPECT_EQ(this->blob_top_->shape(6), 5);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestInsertSingletonAxesMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reshape_param()->set_axis(2);\n  layer_param.mutable_reshape_param()->set_num_axes(0);\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  
blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  ASSERT_EQ(this->blob_top_->num_axes(), 7);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3);\n  EXPECT_EQ(this->blob_top_->shape(2), 1);\n  EXPECT_EQ(this->blob_top_->shape(3), 1);\n  EXPECT_EQ(this->blob_top_->shape(4), 1);\n  EXPECT_EQ(this->blob_top_->shape(5), 6);\n  EXPECT_EQ(this->blob_top_->shape(6), 5);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestInsertSingletonAxesEnd) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reshape_param()->set_axis(-1);\n  layer_param.mutable_reshape_param()->set_num_axes(0);\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n  blob_shape->add_dim(1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  ASSERT_EQ(this->blob_top_->num_axes(), 7);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  EXPECT_EQ(this->blob_top_->shape(1), 3);\n  EXPECT_EQ(this->blob_top_->shape(2), 6);\n  EXPECT_EQ(this->blob_top_->shape(3), 5);\n  EXPECT_EQ(this->blob_top_->shape(4), 1);\n  EXPECT_EQ(this->blob_top_->shape(5), 1);\n  EXPECT_EQ(this->blob_top_->shape(6), 1);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestFlattenMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_reshape_param()->set_axis(1);\n  layer_param.mutable_reshape_param()->set_num_axes(2);\n  BlobShape* blob_shape = layer_param.mutable_reshape_param()->mutable_shape();\n  blob_shape->add_dim(-1);\n\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  ASSERT_EQ(this->blob_top_->num_axes(), 3);\n  EXPECT_EQ(this->blob_top_->shape(0), 2);\n  
EXPECT_EQ(this->blob_top_->shape(1), 3 * 6);\n  EXPECT_EQ(this->blob_top_->shape(2), 5);\n}\n\nTYPED_TEST(ReshapeLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* shape = layer_param.mutable_reshape_param()->mutable_shape();\n  shape->add_dim(6);\n  shape->add_dim(2);\n  shape->add_dim(3);\n  shape->add_dim(5);\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_EQ(this->blob_top_->cpu_data()[i],\n              this->blob_bottom_->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(ReshapeLayerTest, TestForwardAfterReshape) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* shape = layer_param.mutable_reshape_param()->mutable_shape();\n  shape->add_dim(6);\n  shape->add_dim(2);\n  shape->add_dim(3);\n  shape->add_dim(5);\n  ReshapeLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // We know the above produced the correct result from TestForward.\n  // Reshape the bottom and call layer.Reshape, then try again.\n  vector<int> new_bottom_shape(1, 2 * 3 * 6 * 5);\n  this->blob_bottom_->Reshape(new_bottom_shape);\n  layer.Reshape(this->blob_bottom_vec_, this->blob_top_vec_);\n  FillerParameter filler_param;\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(this->blob_bottom_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_EQ(this->blob_top_->cpu_data()[i],\n              this->blob_bottom_->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(ReshapeLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  BlobShape* shape = 
layer_param.mutable_reshape_param()->mutable_shape();\n  shape->add_dim(6);\n  shape->add_dim(2);\n  shape->add_dim(3);\n  shape->add_dim(5);\n  ReshapeLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_roi_pooling_layer.cpp",
    "content": "// ------------------------------------------------------------------\n// Fast R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Ross Girshick\n// ------------------------------------------------------------------\n\n#include <cmath>\n#include <cstdlib>\n#include <cstring>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/fast_rcnn_layers.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nusing boost::scoped_ptr;\n\nnamespace caffe {\n\ntypedef ::testing::Types<GPUDevice<float>, GPUDevice<double> > TestDtypesGPU;\n\ntemplate <typename TypeParam>\nclass ROIPoolingLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ROIPoolingLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(4, 3, 12, 8)),\n        blob_bottom_rois_(new Blob<Dtype>(4, 5, 1, 1)),\n        blob_top_data_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_std(10);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    //for (int i = 0; i < blob_bottom_data_->count(); ++i) {\n    //  blob_bottom_data_->mutable_cpu_data()[i] = i;\n    //}\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    int i = 0;\n    blob_bottom_rois_->mutable_cpu_data()[0 + 5*i] = 0; //caffe_rng_rand() % 4;\n    blob_bottom_rois_->mutable_cpu_data()[1 + 5*i] = 1; // x1 < 8\n    blob_bottom_rois_->mutable_cpu_data()[2 + 5*i] = 1; // y1 < 12\n    blob_bottom_rois_->mutable_cpu_data()[3 + 5*i] = 6; // x2 < 8\n    blob_bottom_rois_->mutable_cpu_data()[4 + 5*i] = 6; // y2 < 12\n    i = 1;\n    blob_bottom_rois_->mutable_cpu_data()[0 + 5*i] = 2;\n    blob_bottom_rois_->mutable_cpu_data()[1 + 
5*i] = 6; // x1 < 8\n    blob_bottom_rois_->mutable_cpu_data()[2 + 5*i] = 2; // y1 < 12\n    blob_bottom_rois_->mutable_cpu_data()[3 + 5*i] = 7; // x2 < 8\n    blob_bottom_rois_->mutable_cpu_data()[4 + 5*i] = 11; // y2 < 12\n    i = 2;\n    blob_bottom_rois_->mutable_cpu_data()[0 + 5*i] = 1;\n    blob_bottom_rois_->mutable_cpu_data()[1 + 5*i] = 3; // x1 < 8\n    blob_bottom_rois_->mutable_cpu_data()[2 + 5*i] = 1; // y1 < 12\n    blob_bottom_rois_->mutable_cpu_data()[3 + 5*i] = 5; // x2 < 8\n    blob_bottom_rois_->mutable_cpu_data()[4 + 5*i] = 10; // y2 < 12\n    i = 3;\n    blob_bottom_rois_->mutable_cpu_data()[0 + 5*i] = 0;\n    blob_bottom_rois_->mutable_cpu_data()[1 + 5*i] = 3; // x1 < 8\n    blob_bottom_rois_->mutable_cpu_data()[2 + 5*i] = 3; // y1 < 12\n    blob_bottom_rois_->mutable_cpu_data()[3 + 5*i] = 3; // x2 < 8\n    blob_bottom_rois_->mutable_cpu_data()[4 + 5*i] = 3; // y2 < 12\n\n    blob_bottom_vec_.push_back(blob_bottom_rois_);\n    blob_top_vec_.push_back(blob_top_data_);\n  }\n  virtual ~ROIPoolingLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_rois_;\n    delete blob_top_data_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_rois_;\n  Blob<Dtype>* const blob_top_data_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ROIPoolingLayerTest, TestDtypesGPU);\n\nTYPED_TEST(ROIPoolingLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ROIPoolingParameter* roi_pooling_param =\n      layer_param.mutable_roi_pooling_param();\n  roi_pooling_param->set_pooled_h(6);\n  roi_pooling_param->set_pooled_w(6);\n  ROIPoolingLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-4, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_scale_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/scale_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ScaleLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  ScaleLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_eltwise_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_bottom_broadcast_0_(new Blob<Dtype>()),\n        blob_bottom_broadcast_1_(new Blob<Dtype>()),\n        blob_bottom_broadcast_2_(new Blob<Dtype>()),\n        blob_bottom_scale_(new Blob<Dtype>(vector<int>())),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    vector<int> broadcast_shape(2);\n    broadcast_shape[0] = 2; broadcast_shape[1] = 3;\n    this->blob_bottom_broadcast_0_->Reshape(broadcast_shape);\n    broadcast_shape[0] = 3; broadcast_shape[1] = 4;\n    this->blob_bottom_broadcast_1_->Reshape(broadcast_shape);\n    broadcast_shape[0] = 4; broadcast_shape[1] = 5;\n    this->blob_bottom_broadcast_2_->Reshape(broadcast_shape);\n    FillerParameter filler_param;\n    filler_param.set_min(1);\n    filler_param.set_max(10);\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    filler.Fill(this->blob_bottom_eltwise_);\n    filler.Fill(this->blob_bottom_broadcast_0_);\n    filler.Fill(this->blob_bottom_broadcast_1_);\n    filler.Fill(this->blob_bottom_broadcast_2_);\n    filler.Fill(this->blob_bottom_scale_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~ScaleLayerTest() {\n    delete blob_bottom_;\n    delete blob_bottom_eltwise_;\n    delete blob_bottom_broadcast_0_;\n    delete blob_bottom_broadcast_1_;\n    
delete blob_bottom_broadcast_2_;\n    delete blob_bottom_scale_;\n    delete blob_top_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_eltwise_;\n  Blob<Dtype>* const blob_bottom_broadcast_0_;\n  Blob<Dtype>* const blob_bottom_broadcast_1_;\n  Blob<Dtype>* const blob_bottom_broadcast_2_;\n  Blob<Dtype>* const blob_bottom_scale_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ScaleLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(ScaleLayerTest, TestForwardEltwise) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_->cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_eltwise_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] * in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardEltwiseInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, 
this->blob_top_vec_);\n  const Dtype* data = this->blob_bottom_->cpu_data();\n  const int count = this->blob_bottom_->count();\n  const Dtype* in_data_a = orig_bottom.cpu_data();\n  const Dtype* in_data_b = this->blob_bottom_eltwise_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] * in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestBackwardEltwiseInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  Blob<Dtype> top_diff(this->blob_bottom_->shape());\n  FillerParameter filler_param;\n  filler_param.set_type(\"gaussian\");\n  filler_param.set_std(1);\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(&top_diff);\n  vector<bool> propagate_down(2, true);\n  // Run forward + backward without in-place computation;\n  // save resulting bottom diffs.\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_top_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  const bool kReshape = true;\n  const bool kCopyDiff = true;\n  Blob<Dtype> orig_bottom_diff;\n  orig_bottom_diff.CopyFrom(*this->blob_bottom_, kCopyDiff, kReshape);\n  Blob<Dtype> orig_scale_diff;\n  orig_scale_diff.CopyFrom(*this->blob_bottom_eltwise_,\n                            kCopyDiff, kReshape);\n  // Rerun forward + backward with in-place computation;\n  // check that resulting bottom diffs are the same.\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  layer->Forward(this->blob_bottom_vec_, 
this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_bottom_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(orig_bottom_diff.cpu_diff()[i],\n                this->blob_bottom_->cpu_diff()[i], 1e-5);\n  }\n  for (int i = 0; i < this->blob_bottom_eltwise_->count(); ++i) {\n    EXPECT_NEAR(orig_scale_diff.cpu_diff()[i],\n                this->blob_bottom_eltwise_->cpu_diff()[i], 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardEltwiseWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_axis(0);\n  scale_param->set_num_axes(-1);\n  scale_param->mutable_filler()->set_type(\"gaussian\");\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data_a = this->blob_bottom_->cpu_data();\n  const Dtype* in_data_b = layer->blobs()[0]->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data_a[i] * in_data_b[i], 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastBegin) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_0_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, 
this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) *\n                      this->blob_bottom_broadcast_0_->data_at(n, c, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(1);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) *\n                      this->blob_bottom_broadcast_1_->data_at(c, h, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastMiddleInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  
layer_param.mutable_scale_param()->set_axis(1);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_bottom_->data_at(n, c, h, w),\n                      orig_bottom.data_at(n, c, h, w) *\n                      this->blob_bottom_broadcast_1_->data_at(c, h, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestBackwardBroadcastMiddleInPlace) {\n  typedef typename TypeParam::Dtype Dtype;\n  Blob<Dtype> orig_bottom(this->blob_bottom_->shape());\n  orig_bottom.CopyFrom(*this->blob_bottom_);\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(1);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  Blob<Dtype> top_diff(this->blob_bottom_->shape());\n  FillerParameter filler_param;\n  filler_param.set_type(\"gaussian\");\n  filler_param.set_std(1);\n  GaussianFiller<Dtype> filler(filler_param);\n  filler.Fill(&top_diff);\n  vector<bool> propagate_down(2, true);\n  // Run forward + backward without in-place computation;\n  // save resulting bottom diffs.\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_top_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  const bool kReshape = true;\n  const bool kCopyDiff = true;\n  Blob<Dtype> orig_bottom_diff;\n  
orig_bottom_diff.CopyFrom(*this->blob_bottom_, kCopyDiff, kReshape);\n  Blob<Dtype> orig_scale_diff;\n  orig_scale_diff.CopyFrom(*this->blob_bottom_broadcast_1_,\n                            kCopyDiff, kReshape);\n  // Rerun forward + backward with in-place computation;\n  // check that resulting bottom diffs are the same.\n  this->blob_top_vec_[0] = this->blob_bottom_;  // in-place computation\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  caffe_copy(top_diff.count(), top_diff.cpu_data(),\n             this->blob_bottom_->mutable_cpu_diff());\n  layer->Backward(this->blob_top_vec_, propagate_down, this->blob_bottom_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_NEAR(orig_bottom_diff.cpu_diff()[i],\n                this->blob_bottom_->cpu_diff()[i], 1e-5);\n  }\n  for (int i = 0; i < this->blob_bottom_broadcast_1_->count(); ++i) {\n    EXPECT_NEAR(orig_scale_diff.cpu_diff()[i],\n                this->blob_bottom_broadcast_1_->cpu_diff()[i], 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastMiddleWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_axis(1);\n  scale_param->set_num_axes(2);\n  scale_param->mutable_filler()->set_type(\"gaussian\");\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) 
*\n                      layer->blobs()[0]->data_at(c, h, 0, 0), 1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastMiddleWithParamAndBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_axis(1);\n  scale_param->set_num_axes(2);\n  scale_param->mutable_filler()->set_type(\"gaussian\");\n  scale_param->set_bias_term(true);\n  scale_param->mutable_bias_filler()->set_type(\"gaussian\");\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) *\n                      layer->blobs()[0]->data_at(c, h, 0, 0) +\n                      layer->blobs()[1]->data_at(c, h, 0, 0), 1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardBroadcastEnd) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_2_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(2);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < 
this->blob_bottom_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_NEAR(this->blob_top_->data_at(n, c, h, w),\n                      this->blob_bottom_->data_at(n, c, h, w) *\n                      this->blob_bottom_broadcast_2_->data_at(h, w, 0, 0),\n                      1e-5);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardScale) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_scale_);\n  LayerParameter layer_param;\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data = this->blob_bottom_->cpu_data();\n  const Dtype scale = *this->blob_bottom_scale_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    EXPECT_NEAR(data[i], in_data[i] * scale, 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestForwardScaleAxis2) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_scale_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(2);\n  shared_ptr<ScaleLayer<Dtype> > layer(new ScaleLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_->shape());\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  const Dtype* data = this->blob_top_->cpu_data();\n  const int count = this->blob_top_->count();\n  const Dtype* in_data = this->blob_bottom_->cpu_data();\n  const Dtype scale = *this->blob_bottom_scale_->cpu_data();\n  for (int i = 0; i < count; ++i) {\n    
EXPECT_NEAR(data[i], in_data[i] * scale, 1e-5);\n  }\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientEltwise) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_eltwise_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientEltwiseWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_axis(0);\n  scale_param->set_num_axes(-1);\n  scale_param->mutable_filler()->set_type(\"gaussian\");\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientBroadcastBegin) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_0_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(0);\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientBroadcastMiddle) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(1);\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientBroadcastMiddleWithParam) {\n  typedef typename TypeParam::Dtype Dtype;\n  
this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_1_);\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_axis(1);\n  scale_param->set_num_axes(2);\n  scale_param->mutable_filler()->set_type(\"gaussian\");\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientBroadcastEnd) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_broadcast_2_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(2);\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientScale) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_scale_);\n  LayerParameter layer_param;\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientScaleAndBias) {\n  typedef typename TypeParam::Dtype Dtype;\n  this->blob_bottom_vec_.push_back(this->blob_bottom_scale_);\n  LayerParameter layer_param;\n  ScaleParameter* scale_param = layer_param.mutable_scale_param();\n  scale_param->set_bias_term(true);\n  scale_param->mutable_bias_filler()->set_type(\"gaussian\");\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(ScaleLayerTest, TestGradientScaleAxis2) {\n  typedef typename TypeParam::Dtype Dtype;\n  
this->blob_bottom_vec_.push_back(this->blob_bottom_scale_);\n  LayerParameter layer_param;\n  layer_param.mutable_scale_param()->set_axis(2);\n  ScaleLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_sigmoid_cross_entropy_loss_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/sigmoid_cross_entropy_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SigmoidCrossEntropyLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SigmoidCrossEntropyLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_targets_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // Fill the data vector\n    FillerParameter data_filler_param;\n    data_filler_param.set_std(1);\n    GaussianFiller<Dtype> data_filler(data_filler_param);\n    data_filler.Fill(blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    // Fill the targets vector\n    FillerParameter targets_filler_param;\n    targets_filler_param.set_min(0);\n    targets_filler_param.set_max(1);\n    UniformFiller<Dtype> targets_filler(targets_filler_param);\n    targets_filler.Fill(blob_bottom_targets_);\n    blob_bottom_vec_.push_back(blob_bottom_targets_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~SigmoidCrossEntropyLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_targets_;\n    delete blob_top_loss_;\n  }\n\n  Dtype SigmoidCrossEntropyLossReference(const int count, const int num,\n                                         const Dtype* input,\n                                         const Dtype* target) {\n    Dtype loss = 0;\n    for (int i = 0; i < count; ++i) {\n      const Dtype prediction = 1 / (1 + exp(-input[i]));\n      EXPECT_LE(prediction, 1);\n      EXPECT_GE(prediction, 0);\n      EXPECT_LE(target[i], 1);\n      EXPECT_GE(target[i], 0);\n      loss -= target[i] * log(prediction 
+ (target[i] == Dtype(0)));\n      loss -= (1 - target[i]) * log(1 - prediction + (target[i] == Dtype(1)));\n    }\n    return loss / num;\n  }\n\n  void TestForward() {\n    LayerParameter layer_param;\n    const Dtype kLossWeight = 3.7;\n    layer_param.add_loss_weight(kLossWeight);\n    FillerParameter data_filler_param;\n    data_filler_param.set_std(1);\n    GaussianFiller<Dtype> data_filler(data_filler_param);\n    FillerParameter targets_filler_param;\n    targets_filler_param.set_min(0.0);\n    targets_filler_param.set_max(1.0);\n    UniformFiller<Dtype> targets_filler(targets_filler_param);\n    Dtype eps = 2e-2;\n    for (int i = 0; i < 100; ++i) {\n      // Fill the data vector\n      data_filler.Fill(this->blob_bottom_data_);\n      // Fill the targets vector\n      targets_filler.Fill(this->blob_bottom_targets_);\n      SigmoidCrossEntropyLossLayer<Dtype> layer(layer_param);\n      layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n      Dtype layer_loss =\n          layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n      const int count = this->blob_bottom_data_->count();\n      const int num = this->blob_bottom_data_->num();\n      const Dtype* blob_bottom_data = this->blob_bottom_data_->cpu_data();\n      const Dtype* blob_bottom_targets =\n          this->blob_bottom_targets_->cpu_data();\n      Dtype reference_loss = kLossWeight * SigmoidCrossEntropyLossReference(\n          count, num, blob_bottom_data, blob_bottom_targets);\n      EXPECT_NEAR(reference_loss, layer_loss, eps) << \"debug: trial #\" << i;\n    }\n  }\n\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_targets_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SigmoidCrossEntropyLossLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SigmoidCrossEntropyLossLayerTest, TestSigmoidCrossEntropyLoss) {\n  
this->TestForward();\n}\n\nTYPED_TEST(SigmoidCrossEntropyLossLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const Dtype kLossWeight = 3.7;\n  layer_param.add_loss_weight(kLossWeight);\n  SigmoidCrossEntropyLossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_slice_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/slice_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SliceLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SliceLayerTest()\n      : blob_bottom_(new Blob<Dtype>(6, 12, 2, 3)),\n        blob_top_0_(new Blob<Dtype>()),\n        blob_top_1_(new Blob<Dtype>()),\n        blob_top_2_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    // fill the values\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_top_vec_0_.push_back(blob_top_0_);\n    blob_top_vec_0_.push_back(blob_top_1_);\n    blob_top_vec_1_.push_back(blob_top_0_);\n    blob_top_vec_1_.push_back(blob_top_1_);\n    blob_top_vec_1_.push_back(blob_top_2_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n  }\n\n  virtual void ReduceBottomBlobSize() {\n    blob_bottom_->Reshape(4, 5, 2, 2);\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n  }\n\n  virtual ~SliceLayerTest() {\n    delete blob_top_0_; delete blob_top_1_;\n    delete blob_top_2_; delete blob_bottom_;\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_0_;\n  Blob<Dtype>* const blob_top_1_;\n  Blob<Dtype>* const blob_top_2_;\n  vector<Blob<Dtype>*> blob_top_vec_0_, blob_top_vec_1_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n};\n\nTYPED_TEST_CASE(SliceLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SliceLayerTest, TestSetupNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_slice_param()->set_axis(0);\n  SliceLayer<Dtype> 
layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_1_);\n  EXPECT_EQ(this->blob_bottom_->num(), 3 * this->blob_top_0_->num());\n  EXPECT_EQ(this->blob_top_0_->num(), this->blob_top_1_->num());\n  EXPECT_EQ(this->blob_top_0_->num(), this->blob_top_2_->num());\n  EXPECT_EQ(this->blob_bottom_->channels(), this->blob_top_0_->channels());\n  EXPECT_EQ(this->blob_bottom_->height(), this->blob_top_0_->height());\n  EXPECT_EQ(this->blob_bottom_->width(), this->blob_top_0_->width());\n}\n\nTYPED_TEST(SliceLayerTest, TestSetupChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_slice_param()->add_slice_point(3);\n  SliceLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_0_);\n  EXPECT_EQ(this->blob_top_0_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_0_->channels(), 3);\n  EXPECT_EQ(this->blob_top_1_->channels(), 9);\n  EXPECT_EQ(this->blob_bottom_->channels(),\n    this->blob_top_0_->channels() + this->blob_top_1_->channels());\n  EXPECT_EQ(this->blob_bottom_->height(), this->blob_top_0_->height());\n  EXPECT_EQ(this->blob_bottom_->width(), this->blob_top_0_->width());\n}\n\nTYPED_TEST(SliceLayerTest, TestTrivialSlice) {\n  // Test the trivial (single output) \"slice\" operation --\n  // should be the identity.\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SliceLayer<Dtype> layer(layer_param);\n  this->blob_top_vec_0_.resize(1);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_0_);\n  ASSERT_EQ(this->blob_bottom_->shape(), this->blob_top_0_->shape());\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_EQ(this->blob_bottom_->cpu_data()[i],\n              this->blob_top_0_->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(SliceLayerTest, TestSliceAcrossNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_slice_param()->set_axis(0);\n  
SliceLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_0_);\n  const int top_num = this->blob_bottom_->num() / 2;\n  ASSERT_EQ(top_num, this->blob_top_0_->num());\n  ASSERT_EQ(top_num, this->blob_top_1_->num());\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_0_);\n  for (int n = 0; n < top_num; ++n) {\n    for (int c = 0; c < this->blob_top_0_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_EQ(this->blob_bottom_->data_at(n, c, h, w),\n                    this->blob_top_0_->data_at(n, c, h, w));\n        }\n      }\n    }\n    for (int c = 0; c < this->blob_top_1_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_EQ(this->blob_bottom_->data_at(n + 3, c, h, w),\n                    this->blob_top_1_->data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(SliceLayerTest, TestSliceAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  // Slice at 2, 8: should produce output blobs with #channels 2, 6, 4.\n  const int kSlicePoint0 = 2;\n  const int kSlicePoint1 = 8;\n  layer_param.mutable_slice_param()->add_slice_point(kSlicePoint0);\n  layer_param.mutable_slice_param()->add_slice_point(kSlicePoint1);\n  SliceLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_1_);\n  ASSERT_EQ(kSlicePoint0, this->blob_top_0_->channels());\n  ASSERT_EQ(kSlicePoint1 - kSlicePoint0, this->blob_top_1_->channels());\n  ASSERT_EQ(this->blob_bottom_->channels() - kSlicePoint1,\n            this->blob_top_2_->channels());\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_1_);\n  for (int n = 0; n < this->blob_bottom_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_0_->channels(); ++c) {\n      for 
(int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_EQ(this->blob_bottom_->data_at(n, c, h, w),\n              this->blob_top_0_->data_at(n, c, h, w));\n        }\n      }\n    }\n    for (int c = 0; c < this->blob_top_1_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_EQ(this->blob_bottom_->data_at(n, c + kSlicePoint0, h, w),\n              this->blob_top_1_->data_at(n, c, h, w));\n        }\n      }\n    }\n    for (int c = 0; c < this->blob_top_2_->channels(); ++c) {\n      for (int h = 0; h < this->blob_bottom_->height(); ++h) {\n        for (int w = 0; w < this->blob_bottom_->width(); ++w) {\n          EXPECT_EQ(this->blob_bottom_->data_at(n, c + kSlicePoint1, h, w),\n              this->blob_top_2_->data_at(n, c, h, w));\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(SliceLayerTest, TestGradientTrivial) {\n  // Test the trivial (single output) \"slice\" operation --\n  // should be the identity.\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SliceLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  this->blob_top_vec_0_.resize(1);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_0_);\n}\n\nTYPED_TEST(SliceLayerTest, TestGradientAcrossNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Gradient checks are slow; reduce blob size.\n  this->ReduceBottomBlobSize();\n  LayerParameter layer_param;\n  layer_param.mutable_slice_param()->set_axis(0);\n  SliceLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n    this->blob_top_vec_0_);\n}\n\nTYPED_TEST(SliceLayerTest, TestGradientAcrossChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  // Gradient checks are 
slow; reduce blob size.\n  this->ReduceBottomBlobSize();\n  LayerParameter layer_param;\n  const int kSlicePoint = 4;\n  layer_param.mutable_slice_param()->add_slice_point(kSlicePoint);\n  SliceLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n    this->blob_top_vec_0_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_smooth_L1_loss_layer.cpp",
    "content": "#include <cmath>\n#include <cstdlib>\n#include <cstring>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/vision_layers.hpp\"\n#include \"caffe/fast_rcnn_layers.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntypedef ::testing::Types<GPUDevice<float>, GPUDevice<double> > TestDtypesGPU;\n\ntemplate <typename TypeParam>\nclass SmoothL1LossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SmoothL1LossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_inside_weights_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_bottom_outside_weights_(new Blob<Dtype>(10, 5, 1, 1)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter const_filler_param;\n    const_filler_param.set_value(-1.);\n    ConstantFiller<Dtype> const_filler(const_filler_param);\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    filler.Fill(this->blob_bottom_label_);\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n\n    //const_filler.Fill(this->blob_bottom_inside_weights_);\n    filler.Fill(this->blob_bottom_inside_weights_);\n    blob_bottom_vec_.push_back(blob_bottom_inside_weights_);\n    //const_filler.Fill(this->blob_bottom_outside_weights_);\n    filler.Fill(this->blob_bottom_outside_weights_);\n    blob_bottom_vec_.push_back(blob_bottom_outside_weights_);\n\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~SmoothL1LossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_bottom_inside_weights_;\n    delete 
blob_bottom_outside_weights_;\n    delete blob_top_loss_;\n  }\n\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_bottom_inside_weights_;\n  Blob<Dtype>* const blob_bottom_outside_weights_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypesGPU);\n\nTYPED_TEST(SmoothL1LossLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SmoothL1LossParameter* loss_param =\n      layer_param.mutable_smooth_l1_loss_param();\n  loss_param->set_sigma(2.4);\n\n  const Dtype kLossWeight = 3.7;\n  layer_param.add_loss_weight(kLossWeight);\n  SmoothL1LossLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 1);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_softmax_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/softmax_layer.hpp\"\n\n#ifdef USE_CUDNN\n#include \"caffe/layers/cudnn_softmax_layer.hpp\"\n#endif\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SoftmaxLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  SoftmaxLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 10, 2, 3)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~SoftmaxLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SoftmaxLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SoftmaxLayerTest, TestForward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SoftmaxLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test sum\n  for (int i = 0; i < this->blob_bottom_->num(); ++i) {\n    for (int k = 0; k < this->blob_bottom_->height(); ++k) {\n      for (int l = 0; l < this->blob_bottom_->width(); ++l) {\n        Dtype sum = 0;\n        for (int j = 0; j < this->blob_top_->channels(); ++j) {\n          sum += this->blob_top_->data_at(i, j, k, l);\n        }\n        EXPECT_GE(sum, 0.999);\n        EXPECT_LE(sum, 1.001);\n        // Test exact values\n        Dtype scale = 0;\n        for (int j = 
0; j < this->blob_bottom_->channels(); ++j) {\n          scale += exp(this->blob_bottom_->data_at(i, j, k, l));\n        }\n        for (int j = 0; j < this->blob_bottom_->channels(); ++j) {\n          EXPECT_GE(this->blob_top_->data_at(i, j, k, l) + 1e-4,\n              exp(this->blob_bottom_->data_at(i, j, k, l)) / scale)\n              << \"debug: \" << i << \" \" << j;\n          EXPECT_LE(this->blob_top_->data_at(i, j, k, l) - 1e-4,\n              exp(this->blob_bottom_->data_at(i, j, k, l)) / scale)\n              << \"debug: \" << i << \" \" << j;\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(SoftmaxLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SoftmaxLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#ifdef USE_CUDNN\ntemplate <typename Dtype>\nclass CuDNNSoftmaxLayerTest : public GPUDeviceTest<Dtype> {\n protected:\n  CuDNNSoftmaxLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 10, 2, 3)),\n        blob_top_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~CuDNNSoftmaxLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(CuDNNSoftmaxLayerTest, TestDtypes);\n\nTYPED_TEST(CuDNNSoftmaxLayerTest, TestForwardCuDNN) {\n  LayerParameter layer_param;\n  CuDNNSoftmaxLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Test sum\n  for (int i = 0; i < 
this->blob_bottom_->num(); ++i) {\n    for (int k = 0; k < this->blob_bottom_->height(); ++k) {\n      for (int l = 0; l < this->blob_bottom_->width(); ++l) {\n        TypeParam sum = 0;\n        for (int j = 0; j < this->blob_top_->channels(); ++j) {\n          sum += this->blob_top_->data_at(i, j, k, l);\n        }\n        EXPECT_GE(sum, 0.999);\n        EXPECT_LE(sum, 1.001);\n        // Test exact values\n        TypeParam scale = 0;\n        for (int j = 0; j < this->blob_bottom_->channels(); ++j) {\n          scale += exp(this->blob_bottom_->data_at(i, j, k, l));\n        }\n        for (int j = 0; j < this->blob_bottom_->channels(); ++j) {\n          EXPECT_GE(this->blob_top_->data_at(i, j, k, l) + 1e-4,\n              exp(this->blob_bottom_->data_at(i, j, k, l)) / scale)\n              << \"debug: \" << i << \" \" << j;\n          EXPECT_LE(this->blob_top_->data_at(i, j, k, l) - 1e-4,\n              exp(this->blob_bottom_->data_at(i, j, k, l)) / scale)\n              << \"debug: \" << i << \" \" << j;\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(CuDNNSoftmaxLayerTest, TestGradientCuDNN) {\n  LayerParameter layer_param;\n  CuDNNSoftmaxLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-2, 1e-3);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_softmax_with_loss_layer.cpp",
    "content": "#include <cmath>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/softmax_loss_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nusing boost::scoped_ptr;\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SoftmaxWithLossLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SoftmaxWithLossLayerTest()\n      : blob_bottom_data_(new Blob<Dtype>(10, 5, 2, 3)),\n        blob_bottom_label_(new Blob<Dtype>(10, 1, 2, 3)),\n        blob_top_loss_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_std(10);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_data_);\n    blob_bottom_vec_.push_back(blob_bottom_data_);\n    for (int i = 0; i < blob_bottom_label_->count(); ++i) {\n      blob_bottom_label_->mutable_cpu_data()[i] = caffe_rng_rand() % 5;\n    }\n    blob_bottom_vec_.push_back(blob_bottom_label_);\n    blob_top_vec_.push_back(blob_top_loss_);\n  }\n  virtual ~SoftmaxWithLossLayerTest() {\n    delete blob_bottom_data_;\n    delete blob_bottom_label_;\n    delete blob_top_loss_;\n  }\n  Blob<Dtype>* const blob_bottom_data_;\n  Blob<Dtype>* const blob_bottom_label_;\n  Blob<Dtype>* const blob_top_loss_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SoftmaxWithLossLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SoftmaxWithLossLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.add_loss_weight(3);\n  SoftmaxWithLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      
this->blob_top_vec_, 0);\n}\n\nTYPED_TEST(SoftmaxWithLossLayerTest, TestForwardIgnoreLabel) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_loss_param()->set_normalize(false);\n  // First, compute the loss with all labels\n  scoped_ptr<SoftmaxWithLossLayer<Dtype> > layer(\n      new SoftmaxWithLossLayer<Dtype>(layer_param));\n  layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  Dtype full_loss = this->blob_top_loss_->cpu_data()[0];\n  // Now, accumulate the loss, ignoring each label in {0, ..., 4} in turn.\n  Dtype accum_loss = 0;\n  for (int label = 0; label < 5; ++label) {\n    layer_param.mutable_loss_param()->set_ignore_label(label);\n    layer.reset(new SoftmaxWithLossLayer<Dtype>(layer_param));\n    layer->SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer->Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    accum_loss += this->blob_top_loss_->cpu_data()[0];\n  }\n  // Check that each label was included all but once.\n  EXPECT_NEAR(4 * full_loss, accum_loss, 1e-4);\n}\n\nTYPED_TEST(SoftmaxWithLossLayerTest, TestGradientIgnoreLabel) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  // labels are in {0, ..., 4}, so we'll ignore about a fifth of them\n  layer_param.mutable_loss_param()->set_ignore_label(0);\n  SoftmaxWithLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_, 0);\n}\n\nTYPED_TEST(SoftmaxWithLossLayerTest, TestGradientUnnormalized) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_loss_param()->set_normalize(false);\n  SoftmaxWithLossLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      
this->blob_top_vec_, 0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_solver.cpp",
    "content": "#include <string>\n#include <utility>\n#include <vector>\n\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/sgd_solvers.hpp\"\n#include \"caffe/solver.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nusing std::ostringstream;\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SolverTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  virtual void InitSolverFromProtoString(const string& proto) {\n    SolverParameter param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(proto, &param));\n    // Set the solver_mode according to current Caffe::mode.\n    switch (Caffe::mode()) {\n      case Caffe::CPU:\n        param.set_solver_mode(SolverParameter_SolverMode_CPU);\n        break;\n      case Caffe::GPU:\n        param.set_solver_mode(SolverParameter_SolverMode_GPU);\n        break;\n      default:\n        LOG(FATAL) << \"Unknown Caffe mode: \" << Caffe::mode();\n    }\n    solver_.reset(new SGDSolver<Dtype>(param));\n  }\n\n  shared_ptr<Solver<Dtype> > solver_;\n};\n\nTYPED_TEST_CASE(SolverTest, TestDtypesAndDevices);\n\nTYPED_TEST(SolverTest, TestInitTrainTestNets) {\n  const string& proto =\n     \"test_interval: 10 \"\n     \"test_iter: 10 \"\n     \"test_state: { stage: 'with-softmax' }\"\n     \"test_iter: 10 \"\n     \"test_state: {}\"\n     \"net_param { \"\n     \"  name: 'TestNetwork' \"\n     \"  layer { \"\n     \"    name: 'data' \"\n     \"    type: 'DummyData' \"\n     \"    dummy_data_param { \"\n     \"      shape { \"\n     \"        dim: 5 \"\n     \"        dim: 2 \"\n     \"        dim: 3 \"\n     \"        dim: 4 \"\n     \"      } \"\n     \"      shape { \"\n     \"        dim: 5 \"\n     \"      } \"\n     \"    } \"\n     \"    top: 'data' \"\n     \"    top: 'label' \"\n     \"  } \"\n     \"  layer { \"\n     \"    name: 'innerprod' \"\n  
   \"    type: 'InnerProduct' \"\n     \"    inner_product_param { \"\n     \"      num_output: 10 \"\n     \"    } \"\n     \"    bottom: 'data' \"\n     \"    top: 'innerprod' \"\n     \"  } \"\n     \"  layer { \"\n     \"    name: 'accuracy' \"\n     \"    type: 'Accuracy' \"\n     \"    bottom: 'innerprod' \"\n     \"    bottom: 'label' \"\n     \"    top: 'accuracy' \"\n     \"    exclude: { phase: TRAIN } \"\n     \"  } \"\n     \"  layer { \"\n     \"    name: 'loss' \"\n     \"    type: 'SoftmaxWithLoss' \"\n     \"    bottom: 'innerprod' \"\n     \"    bottom: 'label' \"\n     \"    include: { phase: TRAIN } \"\n     \"    include: { phase: TEST stage: 'with-softmax' } \"\n     \"  } \"\n     \"} \";\n  this->InitSolverFromProtoString(proto);\n  ASSERT_TRUE(this->solver_->net() != NULL);\n  EXPECT_TRUE(this->solver_->net()->has_layer(\"loss\"));\n  EXPECT_FALSE(this->solver_->net()->has_layer(\"accuracy\"));\n  ASSERT_EQ(2, this->solver_->test_nets().size());\n  EXPECT_TRUE(this->solver_->test_nets()[0]->has_layer(\"loss\"));\n  EXPECT_TRUE(this->solver_->test_nets()[0]->has_layer(\"accuracy\"));\n  EXPECT_FALSE(this->solver_->test_nets()[1]->has_layer(\"loss\"));\n  EXPECT_TRUE(this->solver_->test_nets()[1]->has_layer(\"accuracy\"));\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_solver_factory.cpp",
    "content": "#include <map>\n#include <string>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/solver.hpp\"\n#include \"caffe/solver_factory.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SolverFactoryTest : public MultiDeviceTest<TypeParam> {\n protected:\n  SolverParameter simple_solver_param() {\n    const string solver_proto =\n        \"train_net_param { \"\n        \"  layer { \"\n        \"    name: 'data' type: 'DummyData' top: 'data' \"\n        \"    dummy_data_param { shape { dim: 1 } } \"\n        \"  } \"\n        \"} \";\n    SolverParameter solver_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        solver_proto, &solver_param));\n    return solver_param;\n  }\n};\n\nTYPED_TEST_CASE(SolverFactoryTest, TestDtypesAndDevices);\n\nTYPED_TEST(SolverFactoryTest, TestCreateSolver) {\n  typedef typename TypeParam::Dtype Dtype;\n  typename SolverRegistry<Dtype>::CreatorRegistry& registry =\n      SolverRegistry<Dtype>::Registry();\n  shared_ptr<Solver<Dtype> > solver;\n  SolverParameter solver_param = this->simple_solver_param();\n  for (typename SolverRegistry<Dtype>::CreatorRegistry::iterator iter =\n       registry.begin(); iter != registry.end(); ++iter) {\n    solver_param.set_type(iter->first);\n    solver.reset(SolverRegistry<Dtype>::CreateSolver(solver_param));\n    EXPECT_EQ(iter->first, solver->type());\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_split_layer.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/split_layer.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/insert_splits.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SplitLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SplitLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 5)),\n        blob_top_a_(new Blob<Dtype>()),\n        blob_top_b_(new Blob<Dtype>()) {\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_a_);\n    blob_top_vec_.push_back(blob_top_b_);\n  }\n  virtual ~SplitLayerTest() {\n    delete blob_bottom_;\n    delete blob_top_a_;\n    delete blob_top_b_;\n  }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_a_;\n  Blob<Dtype>* const blob_top_b_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SplitLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SplitLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SplitLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_a_->num(), 2);\n  EXPECT_EQ(this->blob_top_a_->channels(), 3);\n  EXPECT_EQ(this->blob_top_a_->height(), 6);\n  EXPECT_EQ(this->blob_top_a_->width(), 5);\n  EXPECT_EQ(this->blob_top_b_->num(), 2);\n  EXPECT_EQ(this->blob_top_b_->channels(), 3);\n  EXPECT_EQ(this->blob_top_b_->height(), 6);\n  EXPECT_EQ(this->blob_top_b_->width(), 
5);\n}\n\nTYPED_TEST(SplitLayerTest, Test) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SplitLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    Dtype bottom_value = this->blob_bottom_->cpu_data()[i];\n    EXPECT_EQ(bottom_value, this->blob_top_a_->cpu_data()[i]);\n    EXPECT_EQ(bottom_value, this->blob_top_b_->cpu_data()[i]);\n  }\n}\n\nTYPED_TEST(SplitLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SplitLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n\nclass SplitLayerInsertionTest : public ::testing::Test {\n protected:\n  void RunInsertionTest(\n      const string& input_param_string, const string& output_param_string) {\n    // Test that InsertSplits called on the proto specified by\n    // input_param_string results in the proto specified by\n    // output_param_string.\n    NetParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    NetParameter expected_output_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        output_param_string, &expected_output_param));\n    NetParameter actual_output_param;\n    InsertSplits(input_param, &actual_output_param);\n    EXPECT_EQ(expected_output_param.DebugString(),\n        actual_output_param.DebugString());\n    // Also test idempotence.\n    NetParameter double_split_insert_param;\n    InsertSplits(actual_output_param, &double_split_insert_param);\n    EXPECT_EQ(actual_output_param.DebugString(),\n       double_split_insert_param.DebugString());\n  }\n};\n\nTEST_F(SplitLayerInsertionTest, TestNoInsertion1) {\n  const string& input_proto =\n  
    \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, input_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestNoInsertion2) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'data_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_split_0' \"\n      \"  top: 'data_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_split_0' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_split_1' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod2' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, input_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestNoInsertionImageNet) {\n  const string& input_proto =\n      \"name: 'CaffeNet' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n  
    \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu1' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool1' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'norm1' \"\n      \"  type: 'LRN' \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv2' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 5 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu2' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool2' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'norm2' \"\n      \"  type: 'LRN' \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv3' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu3' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv4' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu4' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv5' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu5' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool5' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    kernel_size: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc6' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu6' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'drop6' \"\n      \"  type: 'Dropout' \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc7' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu7' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'drop7' \"\n      \"  type: 'Dropout' \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc8' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, input_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestNoInsertionWithInPlace) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu' 
\"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'innerprod' \"\n      \"  top: 'innerprod' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'innerprod' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, input_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestLossInsertion) {\n  const string& input_proto =\n      \"name: 'UnsharedWeightsNetwork' \"\n      \"force_backward: true \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'DummyData' \"\n      \"  dummy_data_param { \"\n      \"    num: 5 \"\n      \"    channels: 2 \"\n      \"    height: 3 \"\n      \"    width: 4 \"\n      \"    data_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct1' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    bias_term: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 10 \"\n      \"    } \"\n      \"  } \"\n      \"  param { name: 'unsharedweights1' } \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerproduct1' \"\n      \"  loss_weight: 2.5 \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct2' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    bias_term: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 10 \"\n      \"    } \"\n      \"  } \"\n      \"  param { name: 'unsharedweights2' } \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerproduct2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerproduct1' \"\n      \"  bottom: 'innerproduct2' \"\n      \"} \";\n  const string& 
expected_output_proto =\n      \"name: 'UnsharedWeightsNetwork' \"\n      \"force_backward: true \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'DummyData' \"\n      \"  dummy_data_param { \"\n      \"    num: 5 \"\n      \"    channels: 2 \"\n      \"    height: 3 \"\n      \"    width: 4 \"\n      \"    data_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'data_data_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_data_0_split_0' \"\n      \"  top: 'data_data_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct1' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    bias_term: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 10 \"\n      \"    } \"\n      \"  } \"\n      \"  param { name: 'unsharedweights1' } \"\n      \"  bottom: 'data_data_0_split_0' \"\n      \"  top: 'innerproduct1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct1_innerproduct1_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'innerproduct1' \"\n      \"  top: 'innerproduct1_innerproduct1_0_split_0' \"\n      \"  top: 'innerproduct1_innerproduct1_0_split_1' \"\n      \"  loss_weight: 2.5 \"\n      \"  loss_weight: 0 \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerproduct2' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 10 \"\n      \"    bias_term: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 10 \"\n      \"    } \"\n      \"  } \"\n      \"  param { name: 'unsharedweights2' } \"\n      \"  bottom: 'data_data_0_split_1' \"\n      \"  top: 'innerproduct2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n  
    \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerproduct1_innerproduct1_0_split_1' \"\n      \"  bottom: 'innerproduct2' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, expected_output_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestInsertion) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod3' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2' \"\n      \"  bottom: 'innerprod3' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'data_data_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_data_0_split_0' \"\n      \"  top: 'data_data_0_split_1' \"\n      \"  top: 'data_data_0_split_2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_0' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n    
  \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_1' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2_innerprod2_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'innerprod2' \"\n      \"  top: 'innerprod2_innerprod2_0_split_0' \"\n      \"  top: 'innerprod2_innerprod2_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod3' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_2' \"\n      \"  top: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod2_innerprod2_0_split_0' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2_innerprod2_0_split_1' \"\n      \"  bottom: 'innerprod3' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, expected_output_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestInsertionTwoTop) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'label' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod3' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod4' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'label' \"\n      \"  top: 'innerprod4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n 
     \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2' \"\n      \"  bottom: 'innerprod4' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'data_data_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_data_0_split_0' \"\n      \"  top: 'data_data_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'label_data_1_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'label' \"\n      \"  top: 'label_data_1_split_0' \"\n      \"  top: 'label_data_1_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_0' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'label_data_1_split_0' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod3' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_1' \"\n      \"  top: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod4' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'label_data_1_split_1' \"\n      \"  top: 'innerprod4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2' \"\n      \"  bottom: 'innerprod4' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, 
expected_output_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestInputInsertion) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"input: 'data' \"\n      \"input_dim: 10 \"\n      \"input_dim: 3 \"\n      \"input_dim: 227 \"\n      \"input_dim: 227 \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod2' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'TestNetwork' \"\n      \"input: 'data' \"\n      \"input_dim: 10 \"\n      \"input_dim: 3 \"\n      \"input_dim: 227 \"\n      \"input_dim: 227 \"\n      \"layer { \"\n      \"  name: 'data_input_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_input_0_split_0' \"\n      \"  top: 'data_input_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_input_0_split_0' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_input_0_split_1' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'innerprod2' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, expected_output_proto);\n}\n\nTEST_F(SplitLayerInsertionTest, TestWithInPlace) {\n  const string& input_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' 
\"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu1' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'innerprod1' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'innerprod1' \"\n      \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1' \"\n      \"  bottom: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2' \"\n      \"  bottom: 'data' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'TestNetwork' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'data_data_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'data' \"\n      \"  top: 'data_data_0_split_0' \"\n      \"  top: 'data_data_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'data_data_0_split_0' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu1' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'innerprod1' \"\n      \"  top: 'innerprod1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod1_relu1_0_split' \"\n      \"  type: 'Split' \"\n      \"  bottom: 'innerprod1' \"\n      \"  top: 'innerprod1_relu1_0_split_0' \"\n      \"  top: 'innerprod1_relu1_0_split_1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'innerprod2' \"\n      \"  type: 'InnerProduct' \"\n      \"  bottom: 'innerprod1_relu1_0_split_0' \"\n 
     \"  top: 'innerprod2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss1' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod1_relu1_0_split_1' \"\n      \"  bottom: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss2' \"\n      \"  type: 'EuclideanLoss' \"\n      \"  bottom: 'innerprod2' \"\n      \"  bottom: 'data_data_0_split_1' \"\n      \"} \";\n  this->RunInsertionTest(input_proto, expected_output_proto);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_spp_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/concat_layer.hpp\"\n#include \"caffe/layers/flatten_layer.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n#include \"caffe/layers/split_layer.hpp\"\n#include \"caffe/layers/spp_layer.hpp\"\n\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass SPPLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  SPPLayerTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_bottom_2_(new Blob<Dtype>()),\n        blob_bottom_3_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    blob_bottom_->Reshape(2, 3, 9, 8);\n    blob_bottom_2_->Reshape(4, 3, 1024, 765);\n    blob_bottom_3_->Reshape(10, 3, 7, 7);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_bottom_vec_2_.push_back(blob_bottom_2_);\n    blob_bottom_vec_3_.push_back(blob_bottom_3_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~SPPLayerTest() { delete blob_bottom_; delete blob_top_; }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_bottom_2_;\n  Blob<Dtype>* const blob_bottom_3_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_bottom_vec_2_;\n  vector<Blob<Dtype>*> blob_bottom_vec_3_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(SPPLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(SPPLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_spp_param()->set_pyramid_height(3);\n  
SPPLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  // expected number of pool results is geometric sum\n  // (1 - r ** n)/(1 - r) where r = 4 and n = pyramid_height\n  // (1 - 4 ** 3)/(1 - 4) = 21\n  // multiply bottom num_channels * expected_pool_results\n  // to get expected num_channels (3 * 21 = 63)\n  EXPECT_EQ(this->blob_top_->num(), 2);\n  EXPECT_EQ(this->blob_top_->channels(), 63);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\nTYPED_TEST(SPPLayerTest, TestEqualOutputDims) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_spp_param()->set_pyramid_height(5);\n  SPPLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_2_, this->blob_top_vec_);\n  // expected number of pool results is geometric sum\n  // (1 - r ** n)/(1 - r) where r = 4 and n = pyramid_height\n  // (1 - 4 ** 5)/(1 - 4) = 341\n  // multiply bottom num_channels * expected_pool_results\n  // to get expected num_channels (3 * 341 = 1023)\n  EXPECT_EQ(this->blob_top_->num(), 4);\n  EXPECT_EQ(this->blob_top_->channels(), 1023);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 1);\n}\n\nTYPED_TEST(SPPLayerTest, TestEqualOutputDims2) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_spp_param()->set_pyramid_height(3);\n  SPPLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_3_, this->blob_top_vec_);\n  // expected number of pool results is geometric sum\n  // (1 - r ** n)/(1 - r) where r = 4 and n = pyramid_height\n  // (1 - 4 ** 3)/(1 - 4) = 21\n  // multiply bottom num_channels * expected_pool_results\n  // to get expected num_channels (3 * 21 = 63)\n  EXPECT_EQ(this->blob_top_->num(), 10);\n  EXPECT_EQ(this->blob_top_->channels(), 63);\n  EXPECT_EQ(this->blob_top_->height(), 1);\n  EXPECT_EQ(this->blob_top_->width(), 
1);\n}\n\nTYPED_TEST(SPPLayerTest, TestForwardBackward) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  layer_param.mutable_spp_param()->set_pyramid_height(3);\n  SPPLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  vector<bool> propagate_down(this->blob_bottom_vec_.size(), true);\n  layer.Backward(this->blob_top_vec_, propagate_down,\n                 this->blob_bottom_vec_);\n}\n\nTYPED_TEST(SPPLayerTest, TestGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  SPPParameter* spp_param = layer_param.mutable_spp_param();\n  spp_param->set_pyramid_height(3);\n  SPPLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-4, 1e-2);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_stochastic_pooling.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/pooling_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nusing std::min;\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass StochasticPoolingLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  StochasticPoolingLayerTest()\n      : blob_bottom_(new Blob<Dtype>()),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    Caffe::set_random_seed(1701);\n    blob_bottom_->Reshape(2, 3, 6, 5);\n    // fill the values\n    FillerParameter filler_param;\n    filler_param.set_min(0.1);\n    filler_param.set_max(1.);\n    UniformFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n\n  virtual ~StochasticPoolingLayerTest() {\n    delete blob_bottom_; delete blob_top_;\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\ntemplate <typename Dtype>\nclass CPUStochasticPoolingLayerTest\n  : public StochasticPoolingLayerTest<CPUDevice<Dtype> > {\n};\n\nTYPED_TEST_CASE(CPUStochasticPoolingLayerTest, TestDtypes);\n\nTYPED_TEST(CPUStochasticPoolingLayerTest, TestSetup) {\n  LayerParameter layer_param;\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  PoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  
EXPECT_EQ(this->blob_top_->height(), 3);\n  EXPECT_EQ(this->blob_top_->width(), 2);\n}\n\n#ifndef CPU_ONLY\n\ntemplate <typename Dtype>\nclass GPUStochasticPoolingLayerTest\n  : public StochasticPoolingLayerTest<GPUDevice<Dtype> > {\n};\n\nTYPED_TEST_CASE(GPUStochasticPoolingLayerTest, TestDtypes);\n\nTYPED_TEST(GPUStochasticPoolingLayerTest, TestStochastic) {\n  LayerParameter layer_param;\n  layer_param.set_phase(TRAIN);\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_STOCHASTIC);\n  PoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  // Check if the output is correct - it should do random sampling\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  TypeParam total = 0;\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n      for (int ph = 0; ph < this->blob_top_->height(); ++ph) {\n        for (int pw = 0; pw < this->blob_top_->width(); ++pw) {\n          TypeParam pooled = top_data[this->blob_top_->offset(n, c, ph, pw)];\n          total += pooled;\n          int hstart = ph * 2;\n          int hend = min(hstart + 3, this->blob_bottom_->height());\n          int wstart = pw * 2;\n          int wend = min(wstart + 3, this->blob_bottom_->width());\n          bool has_equal = false;\n          for (int h = hstart; h < hend; ++h) {\n            for (int w = wstart; w < wend; ++w) {\n              has_equal |= (pooled == bottom_data[this->blob_bottom_->\n                  offset(n, c, h, w)]);\n            }\n          }\n          EXPECT_TRUE(has_equal);\n        }\n      }\n    }\n  }\n  // When we are doing stochastic pooling, the average we get should be 
higher\n  // than the simple data average since we are weighting more on higher-valued\n  // ones.\n  EXPECT_GE(total / this->blob_top_->count(), 0.55);\n}\n\nTYPED_TEST(GPUStochasticPoolingLayerTest, TestStochasticTestPhase) {\n  LayerParameter layer_param;\n  layer_param.set_phase(TEST);\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_STOCHASTIC);\n  PoolingLayer<TypeParam> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n\n  // Check if the output is correct - it should do random sampling\n  const TypeParam* bottom_data = this->blob_bottom_->cpu_data();\n  const TypeParam* top_data = this->blob_top_->cpu_data();\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n      for (int ph = 0; ph < this->blob_top_->height(); ++ph) {\n        for (int pw = 0; pw < this->blob_top_->width(); ++pw) {\n          TypeParam pooled = top_data[this->blob_top_->offset(n, c, ph, pw)];\n          int hstart = ph * 2;\n          int hend = min(hstart + 3, this->blob_bottom_->height());\n          int wstart = pw * 2;\n          int wend = min(wstart + 3, this->blob_bottom_->width());\n          bool smaller_than_max = false;\n          for (int h = hstart; h < hend; ++h) {\n            for (int w = wstart; w < wend; ++w) {\n              smaller_than_max |= (pooled <= bottom_data[this->blob_bottom_->\n                  offset(n, c, h, w)]);\n            }\n          }\n          EXPECT_TRUE(smaller_than_max);\n        }\n      }\n    }\n  }\n}\n\nTYPED_TEST(GPUStochasticPoolingLayerTest, TestGradient) {\n  LayerParameter layer_param;\n  layer_param.set_phase(TRAIN);\n  PoolingParameter* pooling_param = layer_param.mutable_pooling_param();\n  
pooling_param->set_kernel_size(3);\n  pooling_param->set_stride(2);\n  pooling_param->set_pool(PoolingParameter_PoolMethod_STOCHASTIC);\n  PoolingLayer<TypeParam> layer(layer_param);\n  GradientChecker<TypeParam> checker(1e-4, 1e-2);\n  // it is too expensive to call curand multiple times, so we don't do an\n  // exhaustive gradient check.\n  checker.CheckGradient(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_syncedmem.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/common.hpp\"\n#include \"caffe/syncedmem.hpp\"\n#include \"caffe/util/device_alternate.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nclass SyncedMemoryTest : public ::testing::Test {};\n\nTEST_F(SyncedMemoryTest, TestInitialization) {\n  SyncedMemory mem(10);\n  EXPECT_EQ(mem.head(), SyncedMemory::UNINITIALIZED);\n  EXPECT_EQ(mem.size(), 10);\n  SyncedMemory* p_mem = new SyncedMemory(10 * sizeof(float));\n  EXPECT_EQ(p_mem->size(), 10 * sizeof(float));\n  delete p_mem;\n}\n\n#ifndef CPU_ONLY  // GPU test\n\nTEST_F(SyncedMemoryTest, TestAllocationCPUGPU) {\n  SyncedMemory mem(10);\n  EXPECT_TRUE(mem.cpu_data());\n  EXPECT_TRUE(mem.gpu_data());\n  EXPECT_TRUE(mem.mutable_cpu_data());\n  EXPECT_TRUE(mem.mutable_gpu_data());\n}\n\n#endif\n\nTEST_F(SyncedMemoryTest, TestAllocationCPU) {\n  SyncedMemory mem(10);\n  EXPECT_TRUE(mem.cpu_data());\n  EXPECT_TRUE(mem.mutable_cpu_data());\n}\n\n#ifndef CPU_ONLY  // GPU test\n\nTEST_F(SyncedMemoryTest, TestAllocationGPU) {\n  SyncedMemory mem(10);\n  EXPECT_TRUE(mem.gpu_data());\n  EXPECT_TRUE(mem.mutable_gpu_data());\n}\n\n#endif\n\nTEST_F(SyncedMemoryTest, TestCPUWrite) {\n  SyncedMemory mem(10);\n  void* cpu_data = mem.mutable_cpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::HEAD_AT_CPU);\n  caffe_memset(mem.size(), 1, cpu_data);\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<char*>(cpu_data))[i], 1);\n  }\n  // do another round\n  cpu_data = mem.mutable_cpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::HEAD_AT_CPU);\n  caffe_memset(mem.size(), 2, cpu_data);\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<char*>(cpu_data))[i], 2);\n  }\n}\n\n#ifndef CPU_ONLY  // GPU test\n\nTEST_F(SyncedMemoryTest, TestGPURead) {\n  SyncedMemory mem(10);\n  void* cpu_data = mem.mutable_cpu_data();\n  EXPECT_EQ(mem.head(), 
SyncedMemory::HEAD_AT_CPU);\n  caffe_memset(mem.size(), 1, cpu_data);\n  const void* gpu_data = mem.gpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::SYNCED);\n  // check if values are the same\n  char* recovered_value = new char[10];\n  caffe_gpu_memcpy(10, gpu_data, recovered_value);\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<char*>(recovered_value))[i], 1);\n  }\n  // do another round\n  cpu_data = mem.mutable_cpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::HEAD_AT_CPU);\n  caffe_memset(mem.size(), 2, cpu_data);\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<char*>(cpu_data))[i], 2);\n  }\n  gpu_data = mem.gpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::SYNCED);\n  // check if values are the same\n  caffe_gpu_memcpy(10, gpu_data, recovered_value);\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<char*>(recovered_value))[i], 2);\n  }\n  delete[] recovered_value;\n}\n\nTEST_F(SyncedMemoryTest, TestGPUWrite) {\n  SyncedMemory mem(10);\n  void* gpu_data = mem.mutable_gpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::HEAD_AT_GPU);\n  caffe_gpu_memset(mem.size(), 1, gpu_data);\n  const void* cpu_data = mem.cpu_data();\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<const char*>(cpu_data))[i], 1);\n  }\n  EXPECT_EQ(mem.head(), SyncedMemory::SYNCED);\n\n  gpu_data = mem.mutable_gpu_data();\n  EXPECT_EQ(mem.head(), SyncedMemory::HEAD_AT_GPU);\n  caffe_gpu_memset(mem.size(), 2, gpu_data);\n  cpu_data = mem.cpu_data();\n  for (int i = 0; i < mem.size(); ++i) {\n    EXPECT_EQ((static_cast<const char*>(cpu_data))[i], 2);\n  }\n  EXPECT_EQ(mem.head(), SyncedMemory::SYNCED);\n}\n\n#endif\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_tanh_layer.cpp",
    "content": "#include <algorithm>\n#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/tanh_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ndouble tanh_naive(double x) {\n  if (x < -40) {\n    // avoid negative overflow\n    return -1;\n  } else if (x > 40) {\n    // avoid positive overflow\n    return 1;\n  } else {\n    // exact expression for tanh, which is unstable for large x\n    double exp2x = exp(2 * x);\n    return (exp2x - 1.0) / (exp2x + 1.0);\n  }\n}\n\ntemplate <typename TypeParam>\nclass TanHLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  TanHLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    FillerParameter filler_param;\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~TanHLayerTest() { delete blob_bottom_; delete blob_top_; }\n\n  void TestForward(Dtype filler_std) {\n    FillerParameter filler_param;\n    filler_param.set_std(filler_std);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n\n    LayerParameter layer_param;\n    TanHLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n    // Now, check values\n    const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n    const Dtype* top_data = this->blob_top_->cpu_data();\n    const Dtype min_precision = 1e-5;\n    for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n      Dtype expected_value = tanh_naive(bottom_data[i]);\n      Dtype precision = std::max(\n        Dtype(std::abs(expected_value * Dtype(1e-4))), min_precision);\n      
EXPECT_NEAR(expected_value, top_data[i], precision);\n    }\n  }\n\n  void TestBackward(Dtype filler_std) {\n    FillerParameter filler_param;\n    filler_param.set_std(filler_std);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n\n    LayerParameter layer_param;\n    TanHLayer<Dtype> layer(layer_param);\n    GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);\n    checker.CheckGradientEltwise(&layer, this->blob_bottom_vec_,\n        this->blob_top_vec_);\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(TanHLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(TanHLayerTest, TestTanH) {\n  this->TestForward(1.0);\n}\n\nTYPED_TEST(TanHLayerTest, TestTanHOverflow) {\n  // this will fail if tanh overflow is not properly handled\n  this->TestForward(10000.0);\n}\n\nTYPED_TEST(TanHLayerTest, TestTanHGradient) {\n  this->TestBackward(1.0);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_threshold_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/threshold_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass ThresholdLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n protected:\n  ThresholdLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 6, 5)),\n        blob_top_(new Blob<Dtype>()) {\n    Caffe::set_random_seed(1701);\n    // fill the values\n    FillerParameter filler_param;\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(this->blob_bottom_);\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n  }\n  virtual ~ThresholdLayerTest() { delete blob_bottom_; delete blob_top_; }\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(ThresholdLayerTest, TestDtypesAndDevices);\n\n\nTYPED_TEST(ThresholdLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ThresholdLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  EXPECT_EQ(this->blob_top_->num(), this->blob_bottom_->num());\n  EXPECT_EQ(this->blob_top_->channels(), this->blob_bottom_->channels());\n  EXPECT_EQ(this->blob_top_->height(), this->blob_bottom_->height());\n  EXPECT_EQ(this->blob_top_->width(), this->blob_bottom_->width());\n}\n\nTYPED_TEST(ThresholdLayerTest, Test) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ThresholdLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  
const Dtype* top_data = this->blob_top_->cpu_data();\n  const Dtype threshold_ = layer_param.threshold_param().threshold();\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_LE(top_data[i], 1.);\n    if (top_data[i] == 0) {\n      EXPECT_LE(bottom_data[i], threshold_);\n    }\n    if (top_data[i] == 1) {\n      EXPECT_GT(bottom_data[i], threshold_);\n    }\n  }\n}\n\nTYPED_TEST(ThresholdLayerTest, Test2) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  ThresholdParameter* threshold_param =\n    layer_param.mutable_threshold_param();\n  threshold_param->set_threshold(0.5);\n  ThresholdLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  // Now, check values\n  const Dtype* bottom_data = this->blob_bottom_->cpu_data();\n  const Dtype* top_data = this->blob_top_->cpu_data();\n  const Dtype threshold_ = layer_param.threshold_param().threshold();\n  EXPECT_FLOAT_EQ(threshold_, 0.5);\n  for (int i = 0; i < this->blob_bottom_->count(); ++i) {\n    EXPECT_GE(top_data[i], 0.);\n    EXPECT_LE(top_data[i], 1.);\n    if (top_data[i] == 0) {\n      EXPECT_LE(bottom_data[i], threshold_);\n    }\n    if (top_data[i] == 1) {\n      EXPECT_GT(bottom_data[i], threshold_);\n    }\n  }\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_tile_layer.cpp",
    "content": "#include <vector>\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/filler.hpp\"\n#include \"caffe/layers/tile_layer.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n#include \"caffe/test/test_gradient_check_util.hpp\"\n\nnamespace caffe {\n\ntemplate <typename TypeParam>\nclass TileLayerTest : public MultiDeviceTest<TypeParam> {\n  typedef typename TypeParam::Dtype Dtype;\n\n protected:\n  TileLayerTest()\n      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),\n        blob_top_(new Blob<Dtype>()) {}\n  virtual void SetUp() {\n    blob_bottom_vec_.push_back(blob_bottom_);\n    blob_top_vec_.push_back(blob_top_);\n    FillerParameter filler_param;\n    filler_param.set_mean(0.0);\n    filler_param.set_std(1.0);\n    GaussianFiller<Dtype> filler(filler_param);\n    filler.Fill(blob_bottom_);\n  }\n\n  virtual ~TileLayerTest() {\n    delete blob_bottom_;\n    delete blob_top_;\n  }\n\n  Blob<Dtype>* const blob_bottom_;\n  Blob<Dtype>* const blob_top_;\n  vector<Blob<Dtype>*> blob_bottom_vec_;\n  vector<Blob<Dtype>*> blob_top_vec_;\n};\n\nTYPED_TEST_CASE(TileLayerTest, TestDtypesAndDevices);\n\nTYPED_TEST(TileLayerTest, TestTrivialSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kNumTiles = 1;\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  for (int i = 0; i < this->blob_bottom_->num_axes(); ++i) {\n    layer_param.mutable_tile_param()->set_axis(i);\n    TileLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(this->blob_top_->num_axes(), this->blob_bottom_->num_axes());\n    for (int j = 0; j < this->blob_bottom_->num_axes(); ++j) {\n      EXPECT_EQ(this->blob_top_->shape(j), this->blob_bottom_->shape(j));\n    }\n  }\n}\n\nTYPED_TEST(TileLayerTest, TestSetup) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kNumTiles = 3;\n  
layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  for (int i = 0; i < this->blob_bottom_->num_axes(); ++i) {\n    layer_param.mutable_tile_param()->set_axis(i);\n    TileLayer<Dtype> layer(layer_param);\n    layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n    ASSERT_EQ(this->blob_top_->num_axes(), this->blob_bottom_->num_axes());\n    for (int j = 0; j < this->blob_bottom_->num_axes(); ++j) {\n      const int top_dim =\n          ((i == j) ? kNumTiles : 1) * this->blob_bottom_->shape(j);\n      EXPECT_EQ(top_dim, this->blob_top_->shape(j));\n    }\n  }\n}\n\nTYPED_TEST(TileLayerTest, TestForwardNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kTileAxis = 0;\n  const int kNumTiles = 3;\n  layer_param.mutable_tile_param()->set_axis(kTileAxis);\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  TileLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); ++c) {\n       for (int h = 0; h < this->blob_top_->height(); ++h) {\n         for (int w = 0; w < this->blob_top_->width(); ++w) {\n           const int bottom_n = n % this->blob_bottom_->num();\n           EXPECT_EQ(this->blob_bottom_->data_at(bottom_n, c, h, w),\n                     this->blob_top_->data_at(n, c, h, w));\n         }\n       }\n    }\n  }\n}\n\nTYPED_TEST(TileLayerTest, TestForwardChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kNumTiles = 3;\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  TileLayer<Dtype> layer(layer_param);\n  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);\n  layer.Forward(this->blob_bottom_vec_, this->blob_top_vec_);\n  for (int n = 0; n < this->blob_top_->num(); ++n) {\n    for (int c = 0; c < this->blob_top_->channels(); 
++c) {\n       for (int h = 0; h < this->blob_top_->height(); ++h) {\n         for (int w = 0; w < this->blob_top_->width(); ++w) {\n           const int bottom_c = c % this->blob_bottom_->channels();\n           EXPECT_EQ(this->blob_bottom_->data_at(n, bottom_c, h, w),\n                     this->blob_top_->data_at(n, c, h, w));\n         }\n       }\n    }\n  }\n}\n\nTYPED_TEST(TileLayerTest, TestTrivialGradient) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kNumTiles = 1;\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  TileLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(TileLayerTest, TestGradientNum) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kTileAxis = 0;\n  const int kNumTiles = 3;\n  layer_param.mutable_tile_param()->set_axis(kTileAxis);\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  TileLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\nTYPED_TEST(TileLayerTest, TestGradientChannels) {\n  typedef typename TypeParam::Dtype Dtype;\n  LayerParameter layer_param;\n  const int kTileAxis = 1;\n  const int kNumTiles = 3;\n  layer_param.mutable_tile_param()->set_axis(kTileAxis);\n  layer_param.mutable_tile_param()->set_tiles(kNumTiles);\n  TileLayer<Dtype> layer(layer_param);\n  GradientChecker<Dtype> checker(1e-2, 1e-2);\n  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,\n      this->blob_top_vec_);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_upgrade_proto.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"google/protobuf/text_format.h\"\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/layer.hpp\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nclass PaddingLayerUpgradeTest : public ::testing::Test {\n protected:\n  void RunPaddingUpgradeTest(\n      const string& input_param_string, const string& output_param_string) {\n    // Test that UpgradeV0PaddingLayers called on the proto specified by\n    // input_param_string results in the proto specified by\n    // output_param_string.\n    NetParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    NetParameter expected_output_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        output_param_string, &expected_output_param));\n    NetParameter actual_output_param;\n    UpgradeV0PaddingLayers(input_param, &actual_output_param);\n    EXPECT_EQ(expected_output_param.DebugString(),\n        actual_output_param.DebugString());\n    // Also test idempotence.\n    NetParameter double_pad_upgrade_param;\n    UpgradeV0PaddingLayers(actual_output_param, &double_pad_upgrade_param);\n    EXPECT_EQ(actual_output_param.DebugString(),\n       double_pad_upgrade_param.DebugString());\n  }\n};\n\nTEST_F(PaddingLayerUpgradeTest, TestSimple) {\n  const string& input_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 
'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad1' \"\n      \"    type: 'padding' \"\n      \"    pad: 2 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'pad1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunPaddingUpgradeTest(input_proto, expected_output_proto);\n}\n\nTEST_F(PaddingLayerUpgradeTest, TestTwoTops) {\n  const string& input_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad1' \"\n      \"    type: 'padding' \"\n      \"    pad: 2 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'pad1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv2' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad1' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv2' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunPaddingUpgradeTest(input_proto, expected_output_proto);\n}\n\nTEST_F(PaddingLayerUpgradeTest, TestImageNet) {\n  const string& input_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. 
\"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu1' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool1' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm1' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad2' \"\n      \"    type: 'padding' \"\n      \"    pad: 2 \"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'pad2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv2' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 5 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu2' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool2' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm2' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad3' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'pad3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv3' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu3' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad4' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'pad4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv4' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu4' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad5' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'pad5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv5' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. 
\"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu5' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool5' \"\n      \"    type: 'pool' \"\n      \"    kernelsize: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc6' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu6' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop6' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc7' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. 
\"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu7' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop7' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu1' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool1' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm1' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv2' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 5 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu2' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool2' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm2' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv3' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    kernelsize: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu3' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv4' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu4' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv5' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu5' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool5' \"\n      \"    type: 'pool' \"\n      \"    kernelsize: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc6' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu6' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop6' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc7' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. 
\"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu7' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop7' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunPaddingUpgradeTest(input_proto, expected_output_proto);\n}\n\nclass NetUpgradeTest : public ::testing::Test {\n protected:\n  void RunV0UpgradeTest(\n      const string& input_param_string, const string& output_param_string) {\n    // Test that UpgradeV0Net called on the NetParameter proto specified by\n    // input_param_string results in the NetParameter proto specified by\n    // output_param_string.\n    NetParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    NetParameter expected_output_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        output_param_string, &expected_output_param));\n    NetParameter actual_output_param;\n    UpgradeV0Net(input_param, &actual_output_param);\n    EXPECT_EQ(expected_output_param.DebugString(),\n        actual_output_param.DebugString());\n  }\n\n  void RunV1UpgradeTest(\n      const string& input_param_string, const string& output_param_string) {\n    // Test that UpgradeV1Net called on the NetParameter proto specified by\n    // input_param_string results in the NetParameter proto specified by\n    // output_param_string.\n    NetParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    NetParameter expected_output_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        output_param_string, &expected_output_param));\n    NetParameter actual_output_param;\n    UpgradeV1Net(input_param, &actual_output_param);\n    EXPECT_EQ(expected_output_param.DebugString(),\n        actual_output_param.DebugString());\n  }\n};\n\nTEST_F(NetUpgradeTest, TestSimple) 
{\n  const string& v0_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad1' \"\n      \"    type: 'padding' \"\n      \"    pad: 2 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'pad1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& expected_v1_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  name: 'data' \"\n      \"  type: DATA \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv1' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'fc8' \"\n      \"  type: INNER_PRODUCT \"\n      \"  inner_product_param { \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'loss' \"\n      \"  type: SOFTMAX_LOSS \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunV0UpgradeTest(v0_proto, expected_v1_proto);\n\n  const string& expected_v2_proto =\n      \"name: 'CaffeNet' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc8' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunV1UpgradeTest(expected_v1_proto, expected_v2_proto);\n}\n\n// Test any layer or parameter upgrades not covered by other tests.\nTEST_F(NetUpgradeTest, TestAllParams) {\n  const string& input_proto =\n      \"name: 'CaffeNet' \"\n      \"input: 'input_data' \"\n      \"input_dim: 64 \"\n      \"input_dim: 3 \"\n      \"input_dim: 32 \"\n      \"input_dim: 32 \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"    scale: 0.25 \"\n      \"    rand_skip: 73 \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n   
   \"    name: 'images' \"\n      \"    type: 'images' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-images' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"    scale: 0.25 \"\n      \"    rand_skip: 73 \"\n      \"    shuffle_images: true \"\n      \"    new_height: 40 \"\n      \"    new_width: 30 \"\n      \"  } \"\n      \"  top: 'images_data' \"\n      \"  top: 'images_label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'window_data' \"\n      \"    type: 'window_data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"    det_fg_threshold: 0.25 \"\n      \"    det_bg_threshold: 0.75 \"\n      \"    det_fg_fraction: 0.5 \"\n      \"    det_context_pad: 16 \"\n      \"    det_crop_mode: 'square' \"\n      \"  } \"\n      \"  top: 'window_data' \"\n      \"  top: 'window_label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'hdf5data' \"\n      \"    type: 'hdf5_data' \"\n      \"    source: '/my/hdf5/data' \"\n      \"    batchsize: 256 \"\n      \"  } \"\n      \"  top: 'hdf5data' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    biasterm: false \"\n      \"    pad: 4 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 3. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. 
\"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool1ave' \"\n      \"    type: 'pool' \"\n      \"    pool: AVE \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1ave' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool1stoch' \"\n      \"    type: 'pool' \"\n      \"    pool: STOCHASTIC \"\n      \"    kernelsize: 4 \"\n      \"    stride: 5 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1stoch' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'concat' \"\n      \"    type: 'concat' \"\n      \"    concat_dim: 2 \"\n      \"  } \"\n      \"  bottom: 'pool1ave' \"\n      \"  bottom: 'pool1stoch' \"\n      \"  top: 'pool1concat' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm1' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1concat' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc6' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    biasterm: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu6' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop6' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.2 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'infogain_loss' \"\n      \"    source: '/my/infogain/matrix' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  bottom: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'accuracy' \"\n      \"    type: 'accuracy' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'bnll' \"\n      \"    type: 'bnll' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'euclidean_loss' \"\n      \"    type: 'euclidean_loss' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'flatten' \"\n      \"    type: 'flatten' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'hdf5_output' \"\n      \"    type: 'hdf5_output' \"\n      \"    hdf5_output_param { \"\n      \"      file_name: '/my/hdf5/output/file' \"\n      \"    } \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'im2col' \"\n      \"    type: 'im2col' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'images' \"\n      \"    type: 'images' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'multinomial_logistic_loss' \"\n      \"    type: 'multinomial_logistic_loss' \"\n      \"  } \"\n      \"} \"\n      
\"layers { \"\n      \"  layer { \"\n      \"    name: 'sigmoid' \"\n      \"    type: 'sigmoid' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'softmax' \"\n      \"    type: 'softmax' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'split' \"\n      \"    type: 'split' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'tanh' \"\n      \"    type: 'tanh' \"\n      \"  } \"\n      \"} \";\n  const string& expected_output_proto =\n      \"name: 'CaffeNet' \"\n      \"input: 'input_data' \"\n      \"input_dim: 64 \"\n      \"input_dim: 3 \"\n      \"input_dim: 32 \"\n      \"input_dim: 32 \"\n      \"layers { \"\n      \"  name: 'data' \"\n      \"  type: DATA \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"    rand_skip: 73 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    scale: 0.25 \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'images' \"\n      \"  type: IMAGE_DATA \"\n      \"  image_data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-images' \"\n      \"    batch_size: 256 \"\n      \"    rand_skip: 73 \"\n      \"    shuffle: true \"\n      \"    new_height: 40 \"\n      \"    new_width: 30 \"\n      \"  } \"\n      \"  transform_param {\"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    scale: 0.25 \"\n      \"  } \"\n      \"  top: 'images_data' \"\n      \"  top: 'images_label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'window_data' \"\n      \"  type: WINDOW_DATA \"\n      
\"  window_data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"    fg_threshold: 0.25 \"\n      \"    bg_threshold: 0.75 \"\n      \"    fg_fraction: 0.5 \"\n      \"    context_pad: 16 \"\n      \"    crop_mode: 'square' \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    mirror: true \"\n      \"    crop_size: 227 \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  }\"\n      \"  top: 'window_data' \"\n      \"  top: 'window_label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'hdf5data' \"\n      \"  type: HDF5_DATA \"\n      \"  hdf5_data_param { \"\n      \"    source: '/my/hdf5/data' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  top: 'hdf5data' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv1' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    bias_term: false \"\n      \"    pad: 4 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 3. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'pool1ave' \"\n      \"  type: POOLING \"\n      \"  pooling_param { \"\n      \"    pool: AVE \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1ave' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'pool1stoch' \"\n      \"  type: POOLING \"\n      \"  pooling_param { \"\n      \"    pool: STOCHASTIC \"\n      \"    kernel_size: 4 \"\n      \"    stride: 5 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1stoch' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'concat' \"\n      \"  type: CONCAT \"\n      \"  concat_param { \"\n      \"    concat_dim: 2 \"\n      \"  } \"\n      \"  bottom: 'pool1ave' \"\n      \"  bottom: 'pool1stoch' \"\n      \"  top: 'pool1concat' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'norm1' \"\n      \"  type: LRN \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1concat' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'fc6' \"\n      \"  type: INNER_PRODUCT \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    bias_term: false \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'norm1' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu6' \"\n      \"  type: RELU \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'drop6' \"\n      \"  type: DROPOUT \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.2 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'loss' \"\n      \"  type: INFOGAIN_LOSS \"\n      \"  infogain_loss_param { \"\n      \"    source: '/my/infogain/matrix' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  bottom: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'accuracy' \"\n      \"  type: ACCURACY \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'bnll' \"\n      \"  type: BNLL \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'euclidean_loss' \"\n      \"  type: EUCLIDEAN_LOSS \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'flatten' \"\n      \"  type: FLATTEN \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'hdf5_output' \"\n      \"  type: HDF5_OUTPUT \"\n      \"  hdf5_output_param { \"\n      \"    file_name: '/my/hdf5/output/file' \"\n      \"  } \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'im2col' \"\n      \"  type: IM2COL \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'images' \"\n      \"  type: IMAGE_DATA \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'multinomial_logistic_loss' \"\n      \"  type: MULTINOMIAL_LOGISTIC_LOSS \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'sigmoid' \"\n      \"  type: SIGMOID \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'softmax' \"\n      \"  type: SOFTMAX \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'split' \"\n      \"  type: SPLIT \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'tanh' \"\n      \"  type: TANH \"\n      \"} \";\n  this->RunV0UpgradeTest(input_proto, 
expected_output_proto);\n}\n\nTEST_F(NetUpgradeTest, TestImageNet) {\n  const string& v0_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'data' \"\n      \"    type: 'data' \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    meanfile: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"    batchsize: 256 \"\n      \"    cropsize: 227 \"\n      \"    mirror: true \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv1' \"\n      \"    type: 'conv' \"\n      \"    num_output: 96 \"\n      \"    kernelsize: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu1' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool1' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm1' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad2' \"\n      \"    type: 'padding' \"\n      \"    pad: 2 \"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'pad2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv2' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 5 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu2' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool2' \"\n      \"    type: 'pool' \"\n      \"    pool: MAX \"\n      \"    kernelsize: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'norm2' \"\n      \"    type: 'lrn' \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad3' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'pad3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv3' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'pad3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu3' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad4' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'pad4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv4' \"\n      \"    type: 'conv' \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu4' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pad5' \"\n      \"    type: 'padding' \"\n      \"    pad: 1 \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'pad5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'conv5' \"\n      \"    type: 'conv' \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernelsize: 3 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. 
\"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pad5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu5' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'pool5' \"\n      \"    type: 'pool' \"\n      \"    kernelsize: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc6' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu6' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop6' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc7' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"    blobs_lr: 1. 
\"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'relu7' \"\n      \"    type: 'relu' \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'drop7' \"\n      \"    type: 'dropout' \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'fc8' \"\n      \"    type: 'innerproduct' \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"    blobs_lr: 1. \"\n      \"    blobs_lr: 2. \"\n      \"    weight_decay: 1. \"\n      \"    weight_decay: 0. 
\"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  layer { \"\n      \"    name: 'loss' \"\n      \"    type: 'softmax_loss' \"\n      \"  } \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  const string& expected_v1_proto =\n      \"name: 'CaffeNet' \"\n      \"layers { \"\n      \"  name: 'data' \"\n      \"  type: DATA \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv1' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu1' \"\n      \"  type: RELU \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'pool1' \"\n      \"  type: POOLING \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'norm1' \"\n      \"  type: LRN \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv2' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 5 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'norm1' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu2' \"\n      \"  type: RELU \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'pool2' \"\n      \"  type: POOLING \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'norm2' \"\n      \"  type: LRN \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv3' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu3' \"\n      \"  type: RELU \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv4' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu4' \"\n      \"  type: RELU \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'conv5' \"\n      \"  type: CONVOLUTION \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu5' \"\n      \"  type: RELU \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'pool5' \"\n      \"  type: POOLING \"\n      \"  pooling_param { \"\n      \"    kernel_size: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'fc6' \"\n      \"  type: INNER_PRODUCT \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu6' \"\n      \"  type: RELU \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'drop6' \"\n      \"  type: DROPOUT \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'fc7' \"\n      \"  type: INNER_PRODUCT \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'relu7' \"\n      \"  type: RELU \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'drop7' \"\n      \"  type: DROPOUT \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'fc8' \"\n      \"  type: INNER_PRODUCT \"\n      \"  inner_product_param { \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  blobs_lr: 1. \"\n      \"  blobs_lr: 2. \"\n      \"  weight_decay: 1. \"\n      \"  weight_decay: 0. 
\"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layers { \"\n      \"  name: 'loss' \"\n      \"  type: SOFTMAX_LOSS \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunV0UpgradeTest(v0_proto, expected_v1_proto);\n\n  const string& expected_v2_proto =\n      \"name: 'CaffeNet' \"\n      \"layer { \"\n      \"  name: 'data' \"\n      \"  type: 'Data' \"\n      \"  data_param { \"\n      \"    source: '/home/jiayq/Data/ILSVRC12/train-leveldb' \"\n      \"    batch_size: 256 \"\n      \"  } \"\n      \"  transform_param { \"\n      \"    crop_size: 227 \"\n      \"    mirror: true \"\n      \"    mean_file: '/home/jiayq/Data/ILSVRC12/image_mean.binaryproto' \"\n      \"  } \"\n      \"  top: 'data' \"\n      \"  top: 'label' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv1' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 96 \"\n      \"    kernel_size: 11 \"\n      \"    stride: 4 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'data' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu1' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'conv1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool1' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv1' \"\n      \"  top: 'pool1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'norm1' \"\n      \"  type: 'LRN' \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool1' \"\n      \"  top: 'norm1' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv2' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 5 \"\n      \"    pad: 2 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'norm1' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu2' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'conv2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool2' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    pool: MAX \"\n      \"    kernel_size: 3 \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv2' \"\n      \"  top: 'pool2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'norm2' \"\n      \"  type: 'LRN' \"\n      \"  lrn_param { \"\n      \"    local_size: 5 \"\n      \"    alpha: 0.0001 \"\n      \"    beta: 0.75 \"\n      \"  } \"\n      \"  bottom: 'pool2' \"\n      \"  top: 'norm2' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv3' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'norm2' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu3' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv3' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv4' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 384 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'conv3' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu4' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv4' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'conv5' \"\n      \"  type: 'Convolution' \"\n      \"  convolution_param { \"\n      \"    num_output: 256 \"\n      \"    group: 2 \"\n      \"    kernel_size: 3 \"\n      \"    pad: 1 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'conv4' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu5' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'conv5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'pool5' \"\n      \"  type: 'Pooling' \"\n      \"  pooling_param { \"\n      \"    kernel_size: 3 \"\n      \"    pool: MAX \"\n      \"    stride: 2 \"\n      \"  } \"\n      \"  bottom: 'conv5' \"\n      \"  top: 'pool5' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc6' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'pool5' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu6' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'drop6' \"\n      \"  type: 'Dropout' \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc6' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc7' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 4096 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.005 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 1. 
\"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'fc6' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'relu7' \"\n      \"  type: 'ReLU' \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'drop7' \"\n      \"  type: 'Dropout' \"\n      \"  dropout_param { \"\n      \"    dropout_ratio: 0.5 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc7' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'fc8' \"\n      \"  type: 'InnerProduct' \"\n      \"  inner_product_param { \"\n      \"    num_output: 1000 \"\n      \"    weight_filler { \"\n      \"      type: 'gaussian' \"\n      \"      std: 0.01 \"\n      \"    } \"\n      \"    bias_filler { \"\n      \"      type: 'constant' \"\n      \"      value: 0 \"\n      \"    } \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 1 \"\n      \"    decay_mult: 1 \"\n      \"  } \"\n      \"  param { \"\n      \"    lr_mult: 2 \"\n      \"    decay_mult: 0 \"\n      \"  } \"\n      \"  bottom: 'fc7' \"\n      \"  top: 'fc8' \"\n      \"} \"\n      \"layer { \"\n      \"  name: 'loss' \"\n      \"  type: 'SoftmaxWithLoss' \"\n      \"  bottom: 'fc8' \"\n      \"  bottom: 'label' \"\n      \"} \";\n  this->RunV1UpgradeTest(expected_v1_proto, expected_v2_proto);\n}  // NOLINT(readability/fn_size)\n\nTEST_F(NetUpgradeTest, TestUpgradeV1LayerType) {\n  LayerParameter layer_param;\n  shared_ptr<Layer<float> > layer;\n  for (int i = 0; i < V1LayerParameter_LayerType_LayerType_ARRAYSIZE; ++i) {\n    ASSERT_TRUE(V1LayerParameter_LayerType_IsValid(i));\n    V1LayerParameter_LayerType v1_type = V1LayerParameter_LayerType(i);\n    string v2_layer_type(UpgradeV1LayerType(v1_type));\n    if (v2_layer_type == \"\") {\n      
EXPECT_EQ(V1LayerParameter_LayerType_NONE, v1_type);\n      continue;  // Empty string isn't actually a valid layer type.\n    }\n    layer_param.set_type(v2_layer_type);\n    // Data layers expect a DB\n    if (v2_layer_type == \"Data\") {\n      #ifdef USE_LEVELDB\n      string tmp;\n      MakeTempDir(&tmp);\n      boost::scoped_ptr<db::DB> db(db::GetDB(DataParameter_DB_LEVELDB));\n      db->Open(tmp, db::NEW);\n      db->Close();\n      layer_param.mutable_data_param()->set_source(tmp);\n      #else\n      continue;\n      #endif  // USE_LEVELDB\n    }\n    #ifndef USE_OPENCV\n    if (v2_layer_type == \"ImageData\" || v2_layer_type == \"WindowData\") {\n     continue;\n    }\n    #endif  // !USE_OPENCV\n    layer = LayerRegistry<float>::CreateLayer(layer_param);\n    EXPECT_EQ(v2_layer_type, layer->type());\n  }\n}\n\nclass SolverTypeUpgradeTest : public ::testing::Test {\n protected:\n  void RunSolverTypeUpgradeTest(\n      const string& input_param_string, const string& output_param_string) {\n    // Test upgrading old solver_type field (enum) to new type field (string)\n    SolverParameter input_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        input_param_string, &input_param));\n    SolverParameter expected_output_param;\n    CHECK(google::protobuf::TextFormat::ParseFromString(\n        output_param_string, &expected_output_param));\n    SolverParameter actual_output_param = input_param;\n    UpgradeSolverType(&actual_output_param);\n    EXPECT_EQ(expected_output_param.DebugString(),\n        actual_output_param.DebugString());\n  }\n};\n\nTEST_F(SolverTypeUpgradeTest, TestSimple) {\n  const char* old_type_vec[6] = { \"SGD\", \"ADAGRAD\", \"NESTEROV\", \"RMSPROP\",\n      \"ADADELTA\", \"ADAM\" };\n  const char* new_type_vec[6] = { \"SGD\", \"AdaGrad\", \"Nesterov\", \"RMSProp\",\n      \"AdaDelta\", \"Adam\" };\n  for (int i = 0; i < 6; ++i) {\n    const string& input_proto =\n        \"net: 
'examples/mnist/lenet_train_test.prototxt' \"\n        \"test_iter: 100 \"\n        \"test_interval: 500 \"\n        \"base_lr: 0.01 \"\n        \"momentum: 0.0 \"\n        \"weight_decay: 0.0005 \"\n        \"lr_policy: 'inv' \"\n        \"gamma: 0.0001 \"\n        \"power: 0.75 \"\n        \"display: 100 \"\n        \"max_iter: 10000 \"\n        \"snapshot: 5000 \"\n        \"snapshot_prefix: 'examples/mnist/lenet_rmsprop' \"\n        \"solver_mode: GPU \"\n        \"solver_type: \" + std::string(old_type_vec[i]) + \" \";\n    const string& expected_output_proto =\n        \"net: 'examples/mnist/lenet_train_test.prototxt' \"\n        \"test_iter: 100 \"\n        \"test_interval: 500 \"\n        \"base_lr: 0.01 \"\n        \"momentum: 0.0 \"\n        \"weight_decay: 0.0005 \"\n        \"lr_policy: 'inv' \"\n        \"gamma: 0.0001 \"\n        \"power: 0.75 \"\n        \"display: 100 \"\n        \"max_iter: 10000 \"\n        \"snapshot: 5000 \"\n        \"snapshot_prefix: 'examples/mnist/lenet_rmsprop' \"\n        \"solver_mode: GPU \"\n        \"type: '\" + std::string(new_type_vec[i]) + \"' \";\n    this->RunSolverTypeUpgradeTest(input_proto, expected_output_proto);\n  }\n}\n\n}  // NOLINT(readability/fn_size)  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/test/test_util_blas.cpp",
    "content": "#ifndef CPU_ONLY  // CPU-GPU test\n\n#include \"gtest/gtest.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/util/device_alternate.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\n#include \"caffe/test/test_caffe_main.hpp\"\n\nnamespace caffe {\n\nextern cudaDeviceProp CAFFE_TEST_CUDA_PROP;\n\ntemplate <typename TypeParam>\nclass GemmTest : public ::testing::Test {};\n\nTYPED_TEST_CASE(GemmTest, TestDtypes);\n\nTYPED_TEST(GemmTest, TestGemmCPUGPU) {\n  Blob<TypeParam> A(1, 1, 2, 3);\n  Blob<TypeParam> B(1, 1, 3, 4);\n  Blob<TypeParam> C(1, 1, 2, 4);\n  TypeParam data[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};\n  TypeParam A_reshape_data[6] = {1, 4, 2, 5, 3, 6};\n  TypeParam B_reshape_data[12] = {1, 5, 9, 2, 6, 10, 3, 7, 11, 4, 8, 12};\n  TypeParam result[8] = {38, 44, 50, 56, 83, 98, 113, 128};\n  caffe_copy(6, data, A.mutable_cpu_data());\n  caffe_copy(12, data, B.mutable_cpu_data());\n\n  if (sizeof(TypeParam) == 4 || CAFFE_TEST_CUDA_PROP.major >= 2) {\n    // [1, 2, 3; 4 5 6] * [1, 2, 3, 4; 5, 6, 7, 8; 9, 10, 11, 12];\n    caffe_cpu_gemm<TypeParam>(CblasNoTrans, CblasNoTrans, 2, 4, 3, 1.,\n        A.cpu_data(), B.cpu_data(), 0., C.mutable_cpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n    caffe_gpu_gemm<TypeParam>(CblasNoTrans, CblasNoTrans, 2, 4, 3, 1.,\n        A.gpu_data(), B.gpu_data(), 0., C.mutable_gpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n\n    // Test when we have a transposed A\n    A.Reshape(1, 1, 3, 2);\n    caffe_copy(6, A_reshape_data, A.mutable_cpu_data());\n    caffe_cpu_gemm<TypeParam>(CblasTrans, CblasNoTrans, 2, 4, 3, 1.,\n        A.cpu_data(), B.cpu_data(), 0., C.mutable_cpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n    caffe_gpu_gemm<TypeParam>(CblasTrans, CblasNoTrans, 2, 4, 3, 1.,\n        A.gpu_data(), B.gpu_data(), 0., C.mutable_gpu_data());\n 
   for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n\n    // Test when we have a transposed A and a transposed B too\n    B.Reshape(1, 1, 4, 3);\n    caffe_copy(12, B_reshape_data, B.mutable_cpu_data());\n    caffe_cpu_gemm<TypeParam>(CblasTrans, CblasTrans, 2, 4, 3, 1.,\n        A.cpu_data(), B.cpu_data(), 0., C.mutable_cpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n    caffe_gpu_gemm<TypeParam>(CblasTrans, CblasTrans, 2, 4, 3, 1.,\n        A.gpu_data(), B.gpu_data(), 0., C.mutable_gpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n\n    // Test when we have a transposed B\n    A.Reshape(1, 1, 2, 3);\n    caffe_copy(6, data, A.mutable_cpu_data());\n    caffe_cpu_gemm<TypeParam>(CblasNoTrans, CblasTrans, 2, 4, 3, 1.,\n        A.cpu_data(), B.cpu_data(), 0., C.mutable_cpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n    caffe_gpu_gemm<TypeParam>(CblasNoTrans, CblasTrans, 2, 4, 3, 1.,\n        A.gpu_data(), B.gpu_data(), 0., C.mutable_gpu_data());\n    for (int i = 0; i < 8; ++i) {\n      EXPECT_EQ(C.cpu_data()[i], result[i]);\n    }\n  } else {\n    LOG(ERROR) << \"Skipping test due to old architecture.\";\n  }\n}\n\n\nTYPED_TEST(GemmTest, TestGemvCPUGPU) {\n  Blob<TypeParam> A(1, 1, 2, 3);\n  Blob<TypeParam> x(1, 1, 1, 3);\n  Blob<TypeParam> y(1, 1, 1, 2);\n  TypeParam data[6] = {1, 2, 3, 4, 5, 6};\n  TypeParam result_2[2] = {14, 32};\n  TypeParam result_3[3] = {9, 12, 15};\n  caffe_copy(6, data, A.mutable_cpu_data());\n  caffe_copy(3, data, x.mutable_cpu_data());\n\n  if (sizeof(TypeParam) == 4 || CAFFE_TEST_CUDA_PROP.major >= 2) {\n    caffe_cpu_gemv<TypeParam>(CblasNoTrans, 2, 3, 1., A.cpu_data(),\n        x.cpu_data(), 0., y.mutable_cpu_data());\n    for (int i = 0; i < 2; ++i) {\n      EXPECT_EQ(y.cpu_data()[i], result_2[i]);\n    }\n    
caffe_gpu_gemv<TypeParam>(CblasNoTrans, 2, 3, 1., A.gpu_data(),\n        x.gpu_data(), 0., y.mutable_gpu_data());\n    for (int i = 0; i < 2; ++i) {\n      EXPECT_EQ(y.cpu_data()[i], result_2[i]);\n    }\n\n    // Test transpose case\n    caffe_copy(2, data, y.mutable_cpu_data());\n    caffe_cpu_gemv<TypeParam>(CblasTrans, 2, 3, 1., A.cpu_data(),\n        y.cpu_data(), 0., x.mutable_cpu_data());\n    for (int i = 0; i < 3; ++i) {\n      EXPECT_EQ(x.cpu_data()[i], result_3[i]);\n    }\n    caffe_gpu_gemv<TypeParam>(CblasTrans, 2, 3, 1., A.gpu_data(),\n        y.gpu_data(), 0., x.mutable_gpu_data());\n    for (int i = 0; i < 3; ++i) {\n      EXPECT_EQ(x.cpu_data()[i], result_3[i]);\n    }\n  } else {\n    LOG(ERROR) << \"Skipping test due to old architecture.\";\n  }\n}\n\n}  // namespace caffe\n\n#endif  // CPU_ONLY\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/benchmark.cpp",
"content": "#include <boost/date_time/posix_time/posix_time.hpp>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/benchmark.hpp\"\n\nnamespace caffe {\n\nTimer::Timer()\n    : initted_(false),\n      running_(false),\n      has_run_at_least_once_(false) {\n  Init();\n}\n\nTimer::~Timer() {\n  if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n    CUDA_CHECK(cudaEventDestroy(start_gpu_));\n    CUDA_CHECK(cudaEventDestroy(stop_gpu_));\n#else\n    NO_GPU;\n#endif\n  }\n}\n\nvoid Timer::Start() {\n  if (!running()) {\n    if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n      CUDA_CHECK(cudaEventRecord(start_gpu_, 0));\n#else\n      NO_GPU;\n#endif\n    } else {\n      start_cpu_ = boost::posix_time::microsec_clock::local_time();\n    }\n    running_ = true;\n    has_run_at_least_once_ = true;\n  }\n}\n\nvoid Timer::Stop() {\n  if (running()) {\n    if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n      CUDA_CHECK(cudaEventRecord(stop_gpu_, 0));\n      CUDA_CHECK(cudaEventSynchronize(stop_gpu_));\n#else\n      NO_GPU;\n#endif\n    } else {\n      stop_cpu_ = boost::posix_time::microsec_clock::local_time();\n    }\n    running_ = false;\n  }\n}\n\n\nfloat Timer::MicroSeconds() {\n  if (!has_run_at_least_once()) {\n    LOG(WARNING) << \"Timer has never been run before reading time.\";\n    return 0;\n  }\n  if (running()) {\n    Stop();\n  }\n  if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n    CUDA_CHECK(cudaEventElapsedTime(&elapsed_milliseconds_, start_gpu_,\n                                    stop_gpu_));\n    // CUDA events only measure milliseconds, so convert to microseconds.\n    elapsed_microseconds_ = elapsed_milliseconds_ * 1000;\n#else\n      NO_GPU;\n#endif\n  } else {\n    elapsed_microseconds_ = (stop_cpu_ - start_cpu_).total_microseconds();\n  }\n  return elapsed_microseconds_;\n}\n\nfloat Timer::MilliSeconds() {\n  if (!has_run_at_least_once()) {\n    LOG(WARNING) << \"Timer has never been run before reading time.\";\n    return 0;\n  }\n  if (running()) {\n 
   Stop();\n  }\n  if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n    CUDA_CHECK(cudaEventElapsedTime(&elapsed_milliseconds_, start_gpu_,\n                                    stop_gpu_));\n#else\n      NO_GPU;\n#endif\n  } else {\n    elapsed_milliseconds_ = (stop_cpu_ - start_cpu_).total_milliseconds();\n  }\n  return elapsed_milliseconds_;\n}\n\nfloat Timer::Seconds() {\n  return MilliSeconds() / 1000.;\n}\n\nvoid Timer::Init() {\n  if (!initted()) {\n    if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n      CUDA_CHECK(cudaEventCreate(&start_gpu_));\n      CUDA_CHECK(cudaEventCreate(&stop_gpu_));\n#else\n      NO_GPU;\n#endif\n    }\n    initted_ = true;\n  }\n}\n\nCPUTimer::CPUTimer() {\n  this->initted_ = true;\n  this->running_ = false;\n  this->has_run_at_least_once_ = false;\n}\n\nvoid CPUTimer::Start() {\n  if (!running()) {\n    this->start_cpu_ = boost::posix_time::microsec_clock::local_time();\n    this->running_ = true;\n    this->has_run_at_least_once_ = true;\n  }\n}\n\nvoid CPUTimer::Stop() {\n  if (running()) {\n    this->stop_cpu_ = boost::posix_time::microsec_clock::local_time();\n    this->running_ = false;\n  }\n}\n\nfloat CPUTimer::MilliSeconds() {\n  if (!has_run_at_least_once()) {\n    LOG(WARNING) << \"Timer has never been run before reading time.\";\n    return 0;\n  }\n  if (running()) {\n    Stop();\n  }\n  this->elapsed_milliseconds_ = (this->stop_cpu_ -\n                                this->start_cpu_).total_milliseconds();\n  return this->elapsed_milliseconds_;\n}\n\nfloat CPUTimer::MicroSeconds() {\n  if (!has_run_at_least_once()) {\n    LOG(WARNING) << \"Timer has never been run before reading time.\";\n    return 0;\n  }\n  if (running()) {\n    Stop();\n  }\n  this->elapsed_microseconds_ = (this->stop_cpu_ -\n                                this->start_cpu_).total_microseconds();\n  return this->elapsed_microseconds_;\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/blocking_queue.cpp",
    "content": "#include <boost/thread.hpp>\n#include <string>\n\n#include \"caffe/data_reader.hpp\"\n#include \"caffe/layers/base_data_layer.hpp\"\n#include \"caffe/parallel.hpp\"\n#include \"caffe/util/blocking_queue.hpp\"\n\nnamespace caffe {\n\ntemplate<typename T>\nclass BlockingQueue<T>::sync {\n public:\n  mutable boost::mutex mutex_;\n  boost::condition_variable condition_;\n};\n\ntemplate<typename T>\nBlockingQueue<T>::BlockingQueue()\n    : sync_(new sync()) {\n}\n\ntemplate<typename T>\nvoid BlockingQueue<T>::push(const T& t) {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n  queue_.push(t);\n  lock.unlock();\n  sync_->condition_.notify_one();\n}\n\ntemplate<typename T>\nbool BlockingQueue<T>::try_pop(T* t) {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n\n  if (queue_.empty()) {\n    return false;\n  }\n\n  *t = queue_.front();\n  queue_.pop();\n  return true;\n}\n\ntemplate<typename T>\nT BlockingQueue<T>::pop(const string& log_on_wait) {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n\n  while (queue_.empty()) {\n    if (!log_on_wait.empty()) {\n      LOG_EVERY_N(INFO, 1000)<< log_on_wait;\n    }\n    sync_->condition_.wait(lock);\n  }\n\n  T t = queue_.front();\n  queue_.pop();\n  return t;\n}\n\ntemplate<typename T>\nbool BlockingQueue<T>::try_peek(T* t) {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n\n  if (queue_.empty()) {\n    return false;\n  }\n\n  *t = queue_.front();\n  return true;\n}\n\ntemplate<typename T>\nT BlockingQueue<T>::peek() {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n\n  while (queue_.empty()) {\n    sync_->condition_.wait(lock);\n  }\n\n  return queue_.front();\n}\n\ntemplate<typename T>\nsize_t BlockingQueue<T>::size() const {\n  boost::mutex::scoped_lock lock(sync_->mutex_);\n  return queue_.size();\n}\n\ntemplate class BlockingQueue<Batch<float>*>;\ntemplate class BlockingQueue<Batch<double>*>;\ntemplate class BlockingQueue<Datum*>;\ntemplate class BlockingQueue<shared_ptr<DataReader::QueuePair> 
>;\ntemplate class BlockingQueue<P2PSync<float>*>;\ntemplate class BlockingQueue<P2PSync<double>*>;\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/cudnn.cpp",
    "content": "#ifdef USE_CUDNN\n#include \"caffe/util/cudnn.hpp\"\n\nnamespace caffe {\nnamespace cudnn {\n\nfloat dataType<float>::oneval = 1.0;\nfloat dataType<float>::zeroval = 0.0;\nconst void* dataType<float>::one =\n    static_cast<void *>(&dataType<float>::oneval);\nconst void* dataType<float>::zero =\n    static_cast<void *>(&dataType<float>::zeroval);\n\ndouble dataType<double>::oneval = 1.0;\ndouble dataType<double>::zeroval = 0.0;\nconst void* dataType<double>::one =\n    static_cast<void *>(&dataType<double>::oneval);\nconst void* dataType<double>::zero =\n    static_cast<void *>(&dataType<double>::zeroval);\n\n}  // namespace cudnn\n}  // namespace caffe\n#endif\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/db.cpp",
    "content": "#include \"caffe/util/db.hpp\"\n#include \"caffe/util/db_leveldb.hpp\"\n#include \"caffe/util/db_lmdb.hpp\"\n\n#include <string>\n\nnamespace caffe { namespace db {\n\nDB* GetDB(DataParameter::DB backend) {\n  switch (backend) {\n#ifdef USE_LEVELDB\n  case DataParameter_DB_LEVELDB:\n    return new LevelDB();\n#endif  // USE_LEVELDB\n#ifdef USE_LMDB\n  case DataParameter_DB_LMDB:\n    return new LMDB();\n#endif  // USE_LMDB\n  default:\n    LOG(FATAL) << \"Unknown database backend\";\n    return NULL;\n  }\n}\n\nDB* GetDB(const string& backend) {\n#ifdef USE_LEVELDB\n  if (backend == \"leveldb\") {\n    return new LevelDB();\n  }\n#endif  // USE_LEVELDB\n#ifdef USE_LMDB\n  if (backend == \"lmdb\") {\n    return new LMDB();\n  }\n#endif  // USE_LMDB\n  LOG(FATAL) << \"Unknown database backend\";\n  return NULL;\n}\n\n}  // namespace db\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/db_leveldb.cpp",
    "content": "#ifdef USE_LEVELDB\n#include \"caffe/util/db_leveldb.hpp\"\n\n#include <string>\n\nnamespace caffe { namespace db {\n\nvoid LevelDB::Open(const string& source, Mode mode) {\n  leveldb::Options options;\n  options.block_size = 65536;\n  options.write_buffer_size = 268435456;\n  options.max_open_files = 100;\n  options.error_if_exists = mode == NEW;\n  options.create_if_missing = mode != READ;\n  leveldb::Status status = leveldb::DB::Open(options, source, &db_);\n  CHECK(status.ok()) << \"Failed to open leveldb \" << source\n                     << std::endl << status.ToString();\n  LOG(INFO) << \"Opened leveldb \" << source;\n}\n\n}  // namespace db\n}  // namespace caffe\n#endif  // USE_LEVELDB\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/db_lmdb.cpp",
"content": "#ifdef USE_LMDB\n#include \"caffe/util/db_lmdb.hpp\"\n\n#include <sys/stat.h>\n\n#include <string>\n\nnamespace caffe { namespace db {\n\nconst size_t LMDB_MAP_SIZE = 1099511627776;  // 1 TB\n\nvoid LMDB::Open(const string& source, Mode mode) {\n  MDB_CHECK(mdb_env_create(&mdb_env_));\n  MDB_CHECK(mdb_env_set_mapsize(mdb_env_, LMDB_MAP_SIZE));\n  if (mode == NEW) {\n    CHECK_EQ(mkdir(source.c_str(), 0744), 0) << \"mkdir \" << source << \" failed\";\n  }\n  int flags = 0;\n  if (mode == READ) {\n    flags = MDB_RDONLY | MDB_NOTLS;\n  }\n  int rc = mdb_env_open(mdb_env_, source.c_str(), flags, 0664);\n#ifndef ALLOW_LMDB_NOLOCK\n  MDB_CHECK(rc);\n#else\n  if (rc == EACCES) {\n    LOG(WARNING) << \"Permission denied. Trying with MDB_NOLOCK ...\";\n    // Close and re-open environment handle\n    mdb_env_close(mdb_env_);\n    MDB_CHECK(mdb_env_create(&mdb_env_));\n    // Try again with MDB_NOLOCK\n    flags |= MDB_NOLOCK;\n    MDB_CHECK(mdb_env_open(mdb_env_, source.c_str(), flags, 0664));\n  } else {\n    MDB_CHECK(rc);\n  }\n#endif\n  LOG(INFO) << \"Opened lmdb \" << source;\n}\n\nLMDBCursor* LMDB::NewCursor() {\n  MDB_txn* mdb_txn;\n  MDB_cursor* mdb_cursor;\n  MDB_CHECK(mdb_txn_begin(mdb_env_, NULL, MDB_RDONLY, &mdb_txn));\n  MDB_CHECK(mdb_dbi_open(mdb_txn, NULL, 0, &mdb_dbi_));\n  MDB_CHECK(mdb_cursor_open(mdb_txn, mdb_dbi_, &mdb_cursor));\n  return new LMDBCursor(mdb_txn, mdb_cursor);\n}\n\nLMDBTransaction* LMDB::NewTransaction() {\n  MDB_txn* mdb_txn;\n  MDB_CHECK(mdb_txn_begin(mdb_env_, NULL, 0, &mdb_txn));\n  MDB_CHECK(mdb_dbi_open(mdb_txn, NULL, 0, &mdb_dbi_));\n  return new LMDBTransaction(&mdb_dbi_, mdb_txn);\n}\n\nvoid LMDBTransaction::Put(const string& key, const string& value) {\n  MDB_val mdb_key, mdb_value;\n  mdb_key.mv_data = const_cast<char*>(key.data());\n  mdb_key.mv_size = key.size();\n  mdb_value.mv_data = const_cast<char*>(value.data());\n  mdb_value.mv_size = value.size();\n  MDB_CHECK(mdb_put(mdb_txn_, *mdb_dbi_, &mdb_key, 
&mdb_value, 0));\n}\n\n}  // namespace db\n}  // namespace caffe\n#endif  // USE_LMDB\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/deformable_im2col.cu",
    "content": "#include <algorithm>\n#include <iostream>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/gpu_util.cuh\"\n#include \"caffe/util/deformable_im2col.hpp\"\nusing namespace std;\n\nnamespace caffe {\n\ntemplate <typename Dtype>\n__device__ Dtype deformable_im2col_bilinear(const Dtype* bottom_data, const int data_width, \n  const int height, const int width, Dtype h, Dtype w) {\n\n  int h_low = floor(h);\n  int w_low = floor(w);\n  int h_high;\n  int w_high;\n  if (h_low >= height - 1) {\n\th_high = h_low = height - 1;\n\th = (Dtype)h_low;\n  }\n  else {\n\th_high = h_low + 1;\n  }\n\n  if (w_low >= width - 1) {\n\tw_high = w_low = width - 1;\n\tw = (Dtype)w_low;\n  }\n  else {\n\tw_high = w_low + 1;\n  }\n\n  Dtype lh = h - h_low;\n  Dtype lw = w - w_low;\n  Dtype hh = 1 - lh, hw = 1 - lw;\n\n  Dtype v1 = bottom_data[h_low * data_width + w_low];\n  Dtype v2 = bottom_data[h_low * data_width + w_high];\n  Dtype v3 = bottom_data[h_high * data_width + w_low];\n  Dtype v4 = bottom_data[h_high * data_width + w_high];\n  Dtype w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n\n  Dtype val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  return val;\n}\n\n\ntemplate <typename Dtype>\n__device__ Dtype get_gradient_weight(Dtype argmax_h, Dtype argmax_w, \n  const int h, const int w, const int height, const int width) {\n\n  if (argmax_h < 0 || argmax_h > height || argmax_w < 0 || argmax_w > width) {\n\t//empty\n\treturn 0;\n  }\n\n  argmax_h = max(argmax_h, (Dtype)0.0f);\n  argmax_w = max(argmax_w, (Dtype)0.0f);\n\n  int argmax_h_low = (int)argmax_h;\n  int argmax_w_low = (int)argmax_w;\n  int argmax_h_high;\n  int argmax_w_high;\n  if (argmax_h_low >= height - 1) {\n\targmax_h_high = argmax_h_low = height - 1;\n\targmax_h = (Dtype)argmax_h_low;\n  } else {\n\targmax_h_high = argmax_h_low + 1;\n  }\n  if (argmax_w_low >= width - 1)\n  {\n\targmax_w_high = argmax_w_low = width - 1;\n\targmax_w = (Dtype)argmax_w_low;\n  } else {\n\targmax_w_high = 
argmax_w_low + 1;\n  }\n  Dtype weight = 0;\n  if (h == argmax_h_low) {\n\tif (w == argmax_w_low) {\n\t  weight = (h + 1 - argmax_h) * (w + 1 - argmax_w);\n\t} else if (w == argmax_w_high) {\n\t  weight = (h + 1 - argmax_h) * (argmax_w + 1 - w);\n\t}\n  } else if (h == argmax_h_high) {\n\tif (w == argmax_w_low) {\n\t  weight = (argmax_h + 1 - h) * (w + 1 - argmax_w);\n\t} else if (w == argmax_w_high) {\n\t  weight = (argmax_h + 1 - h) * (argmax_w + 1 - w);\n\t}\n  }\n  return weight;\n}\n\n\ntemplate <typename Dtype>\n__device__ Dtype get_coordinate_weight(Dtype argmax_h, Dtype argmax_w,\n  const int height, const int width, const Dtype* im_data,\n  const int data_width, const int bp_dir) {\n\n  if (argmax_h < 0 || argmax_h > height || argmax_w < 0 || argmax_w > width)\n  {\n\t//empty\n\treturn 0;\n  }\n\n  if (argmax_h < 0) argmax_h = 0;\n  if (argmax_w < 0) argmax_w = 0;\n\n  int argmax_h_low = (int)argmax_h;\n  int argmax_w_low = (int)argmax_w;\n  int argmax_h_high;\n  int argmax_w_high;\n  if (argmax_h_low >= height - 1) {\n\targmax_h_high = argmax_h_low = height - 1;\n\targmax_h = (Dtype)argmax_h_low;\n  } else {\n\targmax_h_high = argmax_h_low + 1;\n  }\n  if (argmax_w_low >= width - 1) {\n\targmax_w_high = argmax_w_low = width - 1;\n\targmax_w = (Dtype)argmax_w_low;\n  } else {\n\targmax_w_high = argmax_w_low + 1;\n  }\n  Dtype weight = 0;\n\n  if (bp_dir == 0) {\n\tweight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low];\n\tweight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high];\n\tweight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low];\n\tweight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  } else if (bp_dir == 1) {\n\tweight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low];\n\tweight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + 
argmax_w_high];\n\tweight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low];\n\tweight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  }\n\n  return weight;\n}\n\n\n/*!\n * \\brief deformable_im2col gpu kernel.\n * DO NOT call this directly. Use wrapper function im2col() instead;\n */\ntemplate <typename Dtype>\n__global__ void deformable_im2col_gpu_kernel(const int n, const Dtype* data_im, const Dtype* data_offset,\n  const int height, const int width, const int kernel_h, const int kernel_w,\n  const int pad_h, const int pad_w,\n  const int stride_h, const int stride_w,\n  const int dilation_h, const int dilation_w,\n  const int channel_per_deformable_group,\n  const int height_col, const int width_col,\n  Dtype* data_col) {\n  CUDA_KERNEL_LOOP(index, n) {\n\t// index index of output matrix\n\tconst int w_col = index % width_col;\n\tconst int h_col = (index / width_col) % height_col;\n\tconst int c_im = (index / width_col) / height_col;\n\tconst int c_col = c_im * kernel_h * kernel_w;\n\n\t// compute deformable group index\n\tconst int deformable_group_index = c_im / channel_per_deformable_group;\n\n\tconst int h_in = h_col * stride_h - pad_h;\n  const int w_in = w_col * stride_w - pad_w;\n  Dtype* data_col_ptr = data_col;\n  data_col_ptr += (c_col * height_col + h_col) * width_col + w_col;\n  const Dtype* data_im_ptr = data_im;\n  data_im_ptr += (c_im * height + h_in) * width + w_in;//0\n\n  const Dtype* data_offset_ptr = data_offset;\n  data_offset_ptr += deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col;//0\n\n\n\tfor (int i = 0; i < kernel_h; ++i) {\n\t  for (int j = 0; j < kernel_w; ++j) {\n\t\tconst int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col;\n\t\tconst int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col;\n\t\tconst Dtype offset_h = 
data_offset_ptr[data_offset_h_ptr];\n\t\tconst Dtype offset_w = data_offset_ptr[data_offset_w_ptr];\n\t\tDtype val = static_cast<Dtype>(0);\n\t\tconst Dtype h_im = h_in + i * dilation_h + offset_h;\n\t\tconst Dtype w_im = w_in + j * dilation_w + offset_w;\n\t\tif (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) {\n\t\t  const Dtype map_h = i * dilation_h + offset_h;\n\t\t  const Dtype map_w = j * dilation_w + offset_w;\n\t\t  const int cur_height = height - h_in;\n\t\t  const int cur_width = width - w_in;\n\t\t  val = deformable_im2col_bilinear(data_im_ptr, width, cur_height, cur_width, map_h, map_w);\n\t\t}\n\t\t*data_col_ptr = val;\n\t\tdata_col_ptr += height_col * width_col;\n\t  }\n\t}\n  }\n}\ntemplate <typename Dtype>\nvoid deformable_im2col_gpu(const Dtype* data_im, const Dtype* data_offset, const int channels,\n\tconst int height, const int width, const int kernel_h, const int kernel_w,\n\tconst int pad_h, const int pad_w,\n\tconst int stride_h, const int stride_w,\n\tconst int dilation_h, const int dilation_w,\n\tconst int deformable_group,\n\tDtype* data_col) {\n  // We are going to launch channels * height_col * width_col kernels, each\n  // kernel responsible for copying a single-channel grid.\n  int height_col = (height + 2 * pad_h -\n\t  (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  int width_col = (width + 2 * pad_w -\n\t  (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n  int num_kernels = channels * height_col * width_col;\n  int channel_per_deformable_group =  channels/ deformable_group;\n  deformable_im2col_gpu_kernel<Dtype><<<CAFFE_GET_BLOCKS(num_kernels),\n\t\t\t\t\t\t\t CAFFE_CUDA_NUM_THREADS>>>(\n\t  num_kernels, data_im, data_offset, height, width, kernel_h, kernel_w, pad_h,\n\t  pad_w, stride_h, stride_w, dilation_h, dilation_w, channel_per_deformable_group, height_col,\n    width_col, data_col);\n\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate void deformable_im2col_gpu<float>(const float* data_im, const float* data_offset, 
const int channels,\n\tconst int height, const int width, const int kernel_h, const int kernel_w,\n\tconst int pad_h, const int pad_w,\n\tconst int stride_h, const int stride_w,\n\tconst int dilation_h, const int dilation_w,\n\tconst int deformable_group,\n\tfloat* data_col);\ntemplate void deformable_im2col_gpu<double>(const double* data_im, const double* data_offset, const int channels,\n\tconst int height, const int width, const int kernel_h, const int kernel_w,\n\tconst int pad_h, const int pad_w,\n\tconst int stride_h, const int stride_w,\n\tconst int dilation_h, const int dilation_w,\n\tconst int deformable_group,\n\tdouble* data_col);\n\n\n\ntemplate <typename Dtype>\n__global__ void deformable_col2im_gpu_kernel(const int n, const Dtype* data_col, \n  const Dtype* data_offset,\n  const int channels, const int height, const int width,\n  const int kernel_h, const int kernel_w,\n  const int pad_h, const int pad_w,\n  const int stride_h, const int stride_w,\n  const int dilation_h, const int dilation_w,\n  int channel_per_deformable_group,\n  int height_col, int width_col,\n  Dtype* grad_im) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const int j = (index / width_col / height_col) % kernel_w;\n    const int i = (index / width_col / height_col / kernel_w) % kernel_h;\n    const int c = index / width_col / height_col / kernel_w / kernel_h;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / channel_per_deformable_group;\n\n    int w_out = index % width_col;\n    int h_out = (index / width_col) % height_col;\n    int w_in = w_out * stride_w - pad_w;\n    int h_in = h_out * stride_h - pad_h;\n\n    const Dtype* data_offset_ptr = data_offset + deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col;\n    const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out;\n    const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out;\n    
const Dtype offset_h = data_offset_ptr[data_offset_h_ptr];\n    const Dtype offset_w = data_offset_ptr[data_offset_w_ptr];\n    const Dtype cur_inv_h_data = h_in + i * dilation_h + offset_h;\n    const Dtype cur_inv_w_data = w_in + j * dilation_w + offset_w;\n\n    const Dtype cur_top_grad = data_col[index];\n    const int cur_h = (int)cur_inv_h_data;\n    const int cur_w = (int)cur_inv_w_data;\n    for (int dy = -2; dy <= 2; dy++) {\n      for (int dx = -2; dx <= 2; dx++) {\n        if (cur_h + dy >= 0 && cur_h + dy < height &&\n          cur_w + dx >= 0 && cur_w + dx < width &&\n          abs(cur_inv_h_data - (cur_h + dy)) < 1 &&\n          abs(cur_inv_w_data - (cur_w + dx)) < 1\n          ) {\n          int cur_bottom_grad_pos = (c * height + cur_h + dy) * width + cur_w + dx;\n          Dtype weight = get_gradient_weight(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width);\n          caffe_gpu_atomic_add(weight * cur_top_grad, grad_im + cur_bottom_grad_pos);\n        }\n      }\n    }\n  }\n}\ntemplate <typename Dtype>\nvoid deformable_col2im_gpu(const Dtype* data_col, const Dtype* data_offset,\n    const int channels, const int height, const int width, const int num_kernels,\n    const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, Dtype* grad_im) {\n  int height_col = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) /\n      stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) /\n      stride_w + 1;\n  int channel_per_deformable_group = channels / deformable_group;\n  // Several column elements can map to the same image location, so the\n  // kernel accumulates the gradients into grad_im with atomic adds.\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  deformable_col2im_gpu_kernel<Dtype><<<CAFFE_GET_BLOCKS(num_kernels),\n                             
CAFFE_CUDA_NUM_THREADS>>>(\n      num_kernels, data_col, data_offset,channels, height, width,  kernel_h, kernel_w,\n      pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w,\n      channel_per_deformable_group, height_col, width_col, grad_im);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate void deformable_col2im_gpu<float>(const float* data_col, const float* data_offset, const int channels,\n    const int height, const int width,const int num_kernels, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, float* grad_im);\ntemplate void deformable_col2im_gpu<double>(const double* data_col, const double* data_offset, const int channels,\n    const int height, const int width, const int num_kernels,const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, double* grad_im);\n\n\ntemplate <typename Dtype>\n__global__ void deformable_col2im_coord_gpu_kernel(const int n, const Dtype* data_col, \n  const Dtype* data_im, const Dtype* data_offset,\n  const int channels, const int height, const int width,\n  const int kernel_h, const int kernel_w,\n  const int pad_h, const int pad_w,\n  const int stride_h, const int stride_w,\n  const int dilation_h, const int dilation_w,\n  const int channel_per_deformable_group,\n  const int height_col, const int width_col,\n  Dtype* grad_offset) {\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype val = 0;\n    int w = index % width_col;\n    int h = (index / width_col) % height_col;\n    int c = index / width_col / height_col;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / (2 * kernel_h * kernel_w);\n    const int col_step = kernel_h * kernel_w;\n    int cnt = 0;\n    const Dtype* data_col_ptr = data_col + 
deformable_group_index * channel_per_deformable_group * width_col * height_col;\n    const Dtype* data_im_ptr = data_im + deformable_group_index * channel_per_deformable_group / kernel_h / kernel_w * height * width;\n    const Dtype* data_offset_ptr = data_offset + deformable_group_index * 2 * kernel_h * kernel_w * height_col * width_col;\n\n    const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w;\n\n    for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step) {\n      const int col_pos = ((col_c * height_col) + h) * width_col + w;\n      const int bp_dir = offset_c % 2;\n\n      int j = (col_pos / width_col / height_col) % kernel_w;\n      int i = (col_pos / width_col / height_col / kernel_w) % kernel_h;\n      int w_out = col_pos % width_col;\n      int h_out = (col_pos / width_col) % height_col;\n      int w_in = w_out * stride_w - pad_w;\n      int h_in = h_out * stride_h - pad_h;\n      const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out);\n      const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out);\n      const Dtype offset_h = data_offset_ptr[data_offset_h_ptr];\n      const Dtype offset_w = data_offset_ptr[data_offset_w_ptr];\n      Dtype inv_h = h_in + i * dilation_h + offset_h;\n      Dtype inv_w = w_in + j * dilation_w + offset_w;\n      if (inv_h < 0 || inv_w < 0 || inv_h >= height || inv_w >= width) {\n        inv_h = inv_w = -1;\n      }\n      const Dtype weight = get_coordinate_weight(\n        inv_h, inv_w,\n        height, width, data_im_ptr + cnt * height * width, width, bp_dir);\n      val += weight * data_col_ptr[col_pos];\n      cnt += 1;\n    }\n\n    grad_offset[index] = val;\n  }\n}\ntemplate <typename Dtype>\nvoid deformable_col2im_coord_gpu(const Dtype* data_col, const Dtype* data_im, const Dtype* data_offset, const int channels,\n    const int height, const int width, \n    const int 
kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, Dtype* grad_offset) {\n  int height_col = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) /\n      stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) /\n      stride_w + 1;\n  // One thread per offset element: 2 offsets (h, w) per kernel sampling point.\n  int num_kernels = height_col * width_col * 2 * kernel_h * kernel_w * deformable_group;\n\n  int channel_per_deformable_group = channels / deformable_group;\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  deformable_col2im_coord_gpu_kernel<Dtype><<<CAFFE_GET_BLOCKS(num_kernels),\n                             CAFFE_CUDA_NUM_THREADS>>>(\n      num_kernels, data_col, data_im, data_offset, channels, height, width, kernel_h, kernel_w,\n      pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w,\n      channel_per_deformable_group, height_col, width_col, grad_offset);\n  CUDA_POST_KERNEL_CHECK;\n}\n\ntemplate void deformable_col2im_coord_gpu<float>(const float* data_col, const float* data_im, const float* data_offset, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, float* grad_offset);\n\ntemplate void deformable_col2im_coord_gpu<double>(const double* data_col, const double* data_im, const double* data_offset, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int deformable_group, double* grad_offset);\n}\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/hdf5.cpp",
    "content": "#include \"caffe/util/hdf5.hpp\"\n\n#include <string>\n#include <vector>\n\nnamespace caffe {\n\n// Verifies format of data stored in HDF5 file and reshapes blob accordingly.\ntemplate <typename Dtype>\nvoid hdf5_load_nd_dataset_helper(\n    hid_t file_id, const char* dataset_name_, int min_dim, int max_dim,\n    Blob<Dtype>* blob) {\n  // Verify that the dataset exists.\n  CHECK(H5LTfind_dataset(file_id, dataset_name_))\n      << \"Failed to find HDF5 dataset \" << dataset_name_;\n  // Verify that the number of dimensions is in the accepted range.\n  herr_t status;\n  int ndims;\n  status = H5LTget_dataset_ndims(file_id, dataset_name_, &ndims);\n  CHECK_GE(status, 0) << \"Failed to get dataset ndims for \" << dataset_name_;\n  CHECK_GE(ndims, min_dim);\n  CHECK_LE(ndims, max_dim);\n\n  // Verify that the data format is what we expect: float or double.\n  std::vector<hsize_t> dims(ndims);\n  H5T_class_t class_;\n  status = H5LTget_dataset_info(\n      file_id, dataset_name_, dims.data(), &class_, NULL);\n  CHECK_GE(status, 0) << \"Failed to get dataset info for \" << dataset_name_;\n  switch (class_) {\n  case H5T_FLOAT:\n    LOG_FIRST_N(INFO, 1) << \"Datatype class: H5T_FLOAT\";\n    break;\n  case H5T_INTEGER:\n    LOG_FIRST_N(INFO, 1) << \"Datatype class: H5T_INTEGER\";\n    break;\n  case H5T_TIME:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_TIME\";\n  case H5T_STRING:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_STRING\";\n  case H5T_BITFIELD:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_BITFIELD\";\n  case H5T_OPAQUE:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_OPAQUE\";\n  case H5T_COMPOUND:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_COMPOUND\";\n  case H5T_REFERENCE:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_REFERENCE\";\n  case H5T_ENUM:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_ENUM\";\n  case H5T_VLEN:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_VLEN\";\n  case 
H5T_ARRAY:\n    LOG(FATAL) << \"Unsupported datatype class: H5T_ARRAY\";\n  default:\n    LOG(FATAL) << \"Datatype class unknown\";\n  }\n\n  vector<int> blob_dims(dims.size());\n  for (int i = 0; i < dims.size(); ++i) {\n    blob_dims[i] = dims[i];\n  }\n  blob->Reshape(blob_dims);\n}\n\ntemplate <>\nvoid hdf5_load_nd_dataset<float>(hid_t file_id, const char* dataset_name_,\n        int min_dim, int max_dim, Blob<float>* blob) {\n  hdf5_load_nd_dataset_helper(file_id, dataset_name_, min_dim, max_dim, blob);\n  herr_t status = H5LTread_dataset_float(\n    file_id, dataset_name_, blob->mutable_cpu_data());\n  CHECK_GE(status, 0) << \"Failed to read float dataset \" << dataset_name_;\n}\n\ntemplate <>\nvoid hdf5_load_nd_dataset<double>(hid_t file_id, const char* dataset_name_,\n        int min_dim, int max_dim, Blob<double>* blob) {\n  hdf5_load_nd_dataset_helper(file_id, dataset_name_, min_dim, max_dim, blob);\n  herr_t status = H5LTread_dataset_double(\n    file_id, dataset_name_, blob->mutable_cpu_data());\n  CHECK_GE(status, 0) << \"Failed to read double dataset \" << dataset_name_;\n}\n\ntemplate <>\nvoid hdf5_save_nd_dataset<float>(\n    const hid_t file_id, const string& dataset_name, const Blob<float>& blob,\n    bool write_diff) {\n  int num_axes = blob.num_axes();\n  hsize_t *dims = new hsize_t[num_axes];\n  for (int i = 0; i < num_axes; ++i) {\n    dims[i] = blob.shape(i);\n  }\n  const float* data;\n  if (write_diff) {\n    data = blob.cpu_diff();\n  } else {\n    data = blob.cpu_data();\n  }\n  herr_t status = H5LTmake_dataset_float(\n      file_id, dataset_name.c_str(), num_axes, dims, data);\n  CHECK_GE(status, 0) << \"Failed to make float dataset \" << dataset_name;\n  delete[] dims;\n}\n\ntemplate <>\nvoid hdf5_save_nd_dataset<double>(\n    hid_t file_id, const string& dataset_name, const Blob<double>& blob,\n    bool write_diff) {\n  int num_axes = blob.num_axes();\n  hsize_t *dims = new hsize_t[num_axes];\n  for (int i = 0; i < num_axes; ++i) {\n   
 dims[i] = blob.shape(i);\n  }\n  const double* data;\n  if (write_diff) {\n    data = blob.cpu_diff();\n  } else {\n    data = blob.cpu_data();\n  }\n  herr_t status = H5LTmake_dataset_double(\n      file_id, dataset_name.c_str(), num_axes, dims, data);\n  CHECK_GE(status, 0) << \"Failed to make double dataset \" << dataset_name;\n  delete[] dims;\n}\n\nstring hdf5_load_string(hid_t loc_id, const string& dataset_name) {\n  // Get size of dataset\n  size_t size;\n  H5T_class_t class_;\n  herr_t status = \\\n    H5LTget_dataset_info(loc_id, dataset_name.c_str(), NULL, &class_, &size);\n  CHECK_GE(status, 0) << \"Failed to get dataset info for \" << dataset_name;\n  char *buf = new char[size];\n  status = H5LTread_dataset_string(loc_id, dataset_name.c_str(), buf);\n  CHECK_GE(status, 0)\n    << \"Failed to load int dataset with name \" << dataset_name;\n  string val(buf);\n  delete[] buf;\n  return val;\n}\n\nvoid hdf5_save_string(hid_t loc_id, const string& dataset_name,\n                      const string& s) {\n  herr_t status = \\\n    H5LTmake_dataset_string(loc_id, dataset_name.c_str(), s.c_str());\n  CHECK_GE(status, 0)\n    << \"Failed to save string dataset with name \" << dataset_name;\n}\n\nint hdf5_load_int(hid_t loc_id, const string& dataset_name) {\n  int val;\n  herr_t status = H5LTread_dataset_int(loc_id, dataset_name.c_str(), &val);\n  CHECK_GE(status, 0)\n    << \"Failed to load int dataset with name \" << dataset_name;\n  return val;\n}\n\nvoid hdf5_save_int(hid_t loc_id, const string& dataset_name, int i) {\n  hsize_t one = 1;\n  herr_t status = \\\n    H5LTmake_dataset_int(loc_id, dataset_name.c_str(), 1, &one, &i);\n  CHECK_GE(status, 0)\n    << \"Failed to save int dataset with name \" << dataset_name;\n}\n\nint hdf5_get_num_links(hid_t loc_id) {\n  H5G_info_t info;\n  herr_t status = H5Gget_info(loc_id, &info);\n  CHECK_GE(status, 0) << \"Error while counting HDF5 links.\";\n  return info.nlinks;\n}\n\nstring hdf5_get_name_by_idx(hid_t loc_id, 
int idx) {\n  ssize_t str_size = H5Lget_name_by_idx(\n      loc_id, \".\", H5_INDEX_NAME, H5_ITER_NATIVE, idx, NULL, 0, H5P_DEFAULT);\n  CHECK_GE(str_size, 0) << \"Error retrieving HDF5 dataset at index \" << idx;\n  char *c_str = new char[str_size+1];\n  ssize_t status = H5Lget_name_by_idx(\n      loc_id, \".\", H5_INDEX_NAME, H5_ITER_NATIVE, idx, c_str, str_size+1,\n      H5P_DEFAULT);\n  CHECK_GE(status, 0) << \"Error retrieving HDF5 dataset at index \" << idx;\n  string result(c_str);\n  delete[] c_str;\n  return result;\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/im2col.cpp",
    "content": "#include <vector>\n\n#include \"caffe/util/im2col.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\n// Function uses casting from int to unsigned to compare if value of\n// parameter a is greater or equal to zero and lower than value of\n// parameter b. The b parameter is of type signed and is always positive,\n// therefore its value is always lower than 0x800... where casting\n// negative value of a parameter converts it to value higher than 0x800...\n// The casting allows to use one condition instead of two.\ninline bool is_a_ge_zero_and_a_lt_b(int a, int b) {\n  return static_cast<unsigned>(a) < static_cast<unsigned>(b);\n}\n\ntemplate <typename Dtype>\nvoid im2col_cpu(const Dtype* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    Dtype* data_col) {\n  const int output_h = (height + 2 * pad_h -\n    (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  const int output_w = (width + 2 * pad_w -\n    (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n  const int channel_size = height * width;\n  for (int channel = channels; channel--; data_im += channel_size) {\n    for (int kernel_row = 0; kernel_row < kernel_h; kernel_row++) {\n      for (int kernel_col = 0; kernel_col < kernel_w; kernel_col++) {\n        int input_row = -pad_h + kernel_row * dilation_h;\n        for (int output_rows = output_h; output_rows; output_rows--) {\n          if (!is_a_ge_zero_and_a_lt_b(input_row, height)) {\n            for (int output_cols = output_w; output_cols; output_cols--) {\n              *(data_col++) = 0;\n            }\n          } else {\n            int input_col = -pad_w + kernel_col * dilation_w;\n            for (int output_col = output_w; output_col; output_col--) {\n              if (is_a_ge_zero_and_a_lt_b(input_col, width)) {\n    
            *(data_col++) = data_im[input_row * width + input_col];\n              } else {\n                *(data_col++) = 0;\n              }\n              input_col += stride_w;\n            }\n          }\n          input_row += stride_h;\n        }\n      }\n    }\n  }\n}\n\n// Explicit instantiation\ntemplate void im2col_cpu<float>(const float* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    float* data_col);\ntemplate void im2col_cpu<double>(const double* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    double* data_col);\n\ntemplate <typename Dtype>\ninline void im2col_nd_core_cpu(const Dtype* data_input, const bool im2col,\n    const int num_spatial_axes, const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_output) {\n  if (!im2col) {\n    int im_size = im_shape[0];\n    for (int i = 0; i < num_spatial_axes; ++i) {\n      im_size *= im_shape[1 + i];\n    }\n    caffe_set(im_size, Dtype(0), data_output);\n  }\n  int kernel_size = 1;\n  for (int i = 0; i < num_spatial_axes; ++i) {\n    kernel_size *= kernel_shape[i];\n  }\n  const int channels_col = col_shape[0];\n  vector<int> d_offset(num_spatial_axes, 0);\n  vector<int> d_iter(num_spatial_axes, 0);\n  for (int c_col = 0; c_col < channels_col; ++c_col) {\n    // Loop over spatial axes in reverse order to compute a per-axis offset.\n    int offset = c_col;\n    for (int d_i = num_spatial_axes - 1; d_i >= 0; --d_i) {\n      if (d_i < num_spatial_axes - 1) {\n        offset /= kernel_shape[d_i + 1];\n      }\n      d_offset[d_i] = 
offset % kernel_shape[d_i];\n    }\n    for (bool incremented = true; incremented; ) {\n      // Loop over spatial axes in forward order to compute the indices in the\n      // image and column, and whether the index lies in the padding.\n      int index_col = c_col;\n      int index_im = c_col / kernel_size;\n      bool is_padding = false;\n      for (int d_i = 0; d_i < num_spatial_axes; ++d_i) {\n        const int d = d_iter[d_i];\n        const int d_im = d * stride[d_i] - pad[d_i] +\n            d_offset[d_i] * dilation[d_i];\n        is_padding |= d_im < 0 || d_im >= im_shape[d_i + 1];\n        index_col *= col_shape[d_i + 1];\n        index_col += d;\n        index_im *= im_shape[d_i + 1];\n        index_im += d_im;\n      }\n      if (im2col) {\n        if (is_padding) {\n          data_output[index_col] = 0;\n        } else {\n          data_output[index_col] = data_input[index_im];\n        }\n      } else if (!is_padding) {  // col2im\n        data_output[index_im] += data_input[index_col];\n      }\n      // Loop over spatial axes in reverse order to choose an index,\n      // like counting.\n      incremented = false;\n      for (int d_i = num_spatial_axes - 1; d_i >= 0; --d_i) {\n        const int d_max = col_shape[d_i + 1];\n        DCHECK_LT(d_iter[d_i], d_max);\n        if (d_iter[d_i] == d_max - 1) {\n          d_iter[d_i] = 0;\n        } else {  // d_iter[d_i] < d_max - 1\n          ++d_iter[d_i];\n          incremented = true;\n          break;\n        }\n      }\n    }  // while(incremented) {\n  }  // for (int c = 0; c < channels_col; ++c) {\n}\n\ntemplate <typename Dtype>\nvoid im2col_nd_cpu(const Dtype* data_im, const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col) {\n  const bool kIm2Col = true;\n  im2col_nd_core_cpu(data_im, kIm2Col, num_spatial_axes, im_shape, col_shape,\n                  kernel_shape, pad, 
stride, dilation, data_col);\n}\n\n// Explicit instantiation\ntemplate void im2col_nd_cpu<float>(const float* data_im,\n    const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, float* data_col);\ntemplate void im2col_nd_cpu<double>(const double* data_im,\n    const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, double* data_col);\n\ntemplate <typename Dtype>\nvoid col2im_cpu(const Dtype* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    Dtype* data_im) {\n  caffe_set(height * width * channels, Dtype(0), data_im);\n  const int output_h = (height + 2 * pad_h -\n    (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  const int output_w = (width + 2 * pad_w -\n    (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n  const int channel_size = height * width;\n  for (int channel = channels; channel--; data_im += channel_size) {\n    for (int kernel_row = 0; kernel_row < kernel_h; kernel_row++) {\n      for (int kernel_col = 0; kernel_col < kernel_w; kernel_col++) {\n        int input_row = -pad_h + kernel_row * dilation_h;\n        for (int output_rows = output_h; output_rows; output_rows--) {\n          if (!is_a_ge_zero_and_a_lt_b(input_row, height)) {\n            data_col += output_w;\n          } else {\n            int input_col = -pad_w + kernel_col * dilation_w;\n            for (int output_col = output_w; output_col; output_col--) {\n              if (is_a_ge_zero_and_a_lt_b(input_col, width)) {\n                data_im[input_row * width + input_col] += *data_col;\n              }\n              data_col++;\n              input_col += 
stride_w;\n            }\n          }\n          input_row += stride_h;\n        }\n      }\n    }\n  }\n}\n\n// Explicit instantiation\ntemplate void col2im_cpu<float>(const float* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    float* data_im);\ntemplate void col2im_cpu<double>(const double* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    double* data_im);\n\ntemplate <typename Dtype>\nvoid col2im_nd_cpu(const Dtype* data_col, const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_im) {\n  const bool kIm2Col = false;\n  im2col_nd_core_cpu(data_col, kIm2Col, num_spatial_axes, im_shape, col_shape,\n                     kernel_shape, pad, stride, dilation, data_im);\n}\n\n// Explicit instantiation\ntemplate void col2im_nd_cpu<float>(const float* data_col,\n    const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, float* data_im);\ntemplate void col2im_nd_cpu<double>(const double* data_col,\n    const int num_spatial_axes,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, double* data_im);\n\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/im2col.cu",
    "content": "#include <algorithm>\n#include <iostream>\n#include \"caffe/common.hpp\"\n#include \"caffe/util/im2col.hpp\"\nusing namespace std;\nnamespace caffe {\n\ntemplate <typename Dtype>\n__global__ void im2col_gpu_kernel(const int n, const Dtype* data_im,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int height_col, const int width_col,\n    Dtype* data_col) {\n  CUDA_KERNEL_LOOP(index, n) {\n    const int h_index = index / width_col;//0\n    const int h_col = h_index % height_col;//0\n    const int w_col = index % width_col;//0\n    const int c_im = h_index / height_col;//0\n    const int c_col = c_im * kernel_h * kernel_w;//0\n    const int h_offset = h_col * stride_h - pad_h;//0\n    const int w_offset = w_col * stride_w - pad_w;//0\n    Dtype* data_col_ptr = data_col;\n    data_col_ptr += (c_col * height_col + h_col) * width_col + w_col;//0\n    const Dtype* data_im_ptr = data_im;\n    data_im_ptr += (c_im * height + h_offset) * width + w_offset;//0\n    for (int i = 0; i < kernel_h; ++i) {\n      for (int j = 0; j < kernel_w; ++j) {\n        int h_im = h_offset + i * dilation_h;\n        int w_im = w_offset + j * dilation_w;\n        *data_col_ptr =\n            (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) ?\n            data_im_ptr[i * dilation_h * width + j * dilation_w] : 0;\n        data_col_ptr += height_col * width_col;\n      }\n    }\n  }\n}\n\ntemplate <typename Dtype>\nvoid im2col_gpu(const Dtype* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    Dtype* data_col) {\n  // We are going to launch channels * height_col * width_col kernels, each\n  // kernel 
responsible for copying a single-channel grid.\n  int height_col = (height + 2 * pad_h -\n      (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  int width_col = (width + 2 * pad_w -\n      (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n  int num_kernels = channels * height_col * width_col;\n  // cout<<\"height_col: \"<<height_col<<endl;\n  // cout<<\"width_col: \"<<width_col<<endl;\n  // cout<<\"num_kernels: \"<<num_kernels<<endl;\n  // cout<<\"height: \"<<height<<endl;\n  // cout<<\"width: \"<<width<<endl;\n  // cout<<\"kernel_h: \"<<kernel_h<<endl;\n  // cout<< \"kernel_w: \"<<kernel_w<<endl;\n  // cout<<\"pad_h: \"<<pad_h<<endl;\n  // cout<<\"pad_w: \"<<pad_w<<endl;\n  // cout<<\"stride_h: \"<<stride_h<<endl;\n  // cout<<\"stride_w: \"<<stride_w<<endl;\n  // cout<<\"dilation_h: \"<<dilation_h<<endl;\n  // cout<<\"dilation_w: \"<<dilation_h<<endl;\n\n   \n  // cout<<\"threads: \"<<CAFFE_CUDA_NUM_THREADS<<\" \"<<endl;\n  // cout<<\"CAFFE_GET_BLOCKS(num_kernels): \"<< CAFFE_GET_BLOCKS(num_kernels)<<endl;\n\n\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  im2col_gpu_kernel<Dtype><<<CAFFE_GET_BLOCKS(num_kernels),\n                             CAFFE_CUDA_NUM_THREADS>>>(\n      num_kernels, data_im, height, width, kernel_h, kernel_w, pad_h,\n      pad_w, stride_h, stride_w, dilation_h, dilation_w, height_col,\n      width_col, data_col);\n  CUDA_POST_KERNEL_CHECK;\n}\n\n// Explicit instantiation\ntemplate void im2col_gpu<float>(const float* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w, float* data_col);\ntemplate void im2col_gpu<double>(const double* data_im, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int 
dilation_w, double* data_col);\n\ntemplate <typename Dtype, int num_axes>\n__global__ void im2col_nd_gpu_kernel(const int n, const Dtype* data_im,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col) {\n  int d_temp[num_axes];  // NOLINT(runtime/arrays)\n  int d_iter[num_axes];  // NOLINT(runtime/arrays)\n\n  __shared__ int shared_dilation[num_axes];\n  __shared__ int shared_kernel_shape[num_axes];\n  __shared__ int shared_pad[num_axes];\n  __shared__ int shared_stride[num_axes];\n  __shared__ int shared_col_shape[num_axes + 1];\n  __shared__ int shared_im_shape[num_axes + 1];\n\n  if (threadIdx.x < num_axes) {\n    shared_dilation[threadIdx.x] = dilation[threadIdx.x];\n    shared_kernel_shape[threadIdx.x] = kernel_shape[threadIdx.x];\n    shared_pad[threadIdx.x] = pad[threadIdx.x];\n    shared_stride[threadIdx.x] = stride[threadIdx.x];\n  }\n  if (threadIdx.x < num_axes + 1) {\n    shared_col_shape[threadIdx.x] = col_shape[threadIdx.x];\n    shared_im_shape[threadIdx.x] = im_shape[threadIdx.x];\n  }\n  __syncthreads();\n\n  int i;\n  CUDA_KERNEL_LOOP(index, n) {\n    // Initialize channel_in, computed in the loop below, with intermediate\n    // computations used to compute the spatial indices.\n    int channel_in = index;\n    int channel_out = 1;\n    for (i = num_axes - 1; i >= 0; --i) {\n      d_temp[i] = channel_in % shared_col_shape[i + 1];\n      channel_in /= shared_col_shape[i + 1];\n      channel_out *= shared_kernel_shape[i];\n    }\n    channel_out *= channel_in;\n    int data_col_inc = 1;\n    for (i = 0; i < num_axes; ++i) {\n      channel_out *= shared_col_shape[i + 1];\n      channel_out += d_temp[i];\n      d_temp[i] = d_temp[i] * shared_stride[i] - shared_pad[i];\n      channel_in *= shared_im_shape[i + 1];\n      channel_in += d_temp[i];\n      data_col_inc *= shared_col_shape[i + 1];\n      d_iter[i] = 0;\n    }\n    Dtype* data_col_ptr = 
data_col + channel_out;\n    const Dtype* data_im_ptr = data_im + channel_in;\n    bool incremented;\n    do {\n      bool in_range = true;\n      for (i = 0; i < num_axes; ++i) {\n        const int d_iter_im = d_iter[i] * shared_dilation[i] + d_temp[i];\n        in_range &= d_iter_im >= 0 && d_iter_im < shared_im_shape[i + 1];\n        if (!in_range) { break; }\n      }\n      if (in_range) {\n        int data_im_offset = d_iter[0] * shared_dilation[0];\n        for (i = 1; i < num_axes; ++i) {\n          data_im_offset *= shared_im_shape[i + 1];\n          data_im_offset += d_iter[i] * shared_dilation[i];\n        }\n        *data_col_ptr = data_im_ptr[data_im_offset];\n      } else {\n        *data_col_ptr = 0;\n      }\n      data_col_ptr += data_col_inc;\n      incremented = false;\n      for (i = num_axes - 1; i >= 0; --i) {\n        const int d_max = shared_kernel_shape[i];\n        if (d_iter[i] == d_max - 1) {\n          d_iter[i] = 0;\n        } else {  // d_iter[i] < d_max - 1\n          ++d_iter[i];\n          incremented = true;\n          break;\n        }\n      }  // for (int i = num_axes - 1; i >= 0; --i)\n    } while (incremented);  // do\n  }  // CUDA_KERNEL_LOOP(index, n)\n}\n\ntemplate <typename Dtype>\nvoid im2col_nd_gpu(const Dtype* data_im, const int num_spatial_axes,\n    const int num_kernels, const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_col) {\n  // num_axes should be smaller than block size\n  DCHECK_LT(num_spatial_axes, CAFFE_CUDA_NUM_THREADS);\n  switch (num_spatial_axes) {\n  case 1:\n    im2col_nd_gpu_kernel<Dtype, 1>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 2:\n    im2col_nd_gpu_kernel<Dtype, 2>  // 
NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 3:\n    im2col_nd_gpu_kernel<Dtype, 3>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 4:\n    im2col_nd_gpu_kernel<Dtype, 4>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 5:\n    im2col_nd_gpu_kernel<Dtype, 5>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 6:\n    im2col_nd_gpu_kernel<Dtype, 6>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 7:\n    im2col_nd_gpu_kernel<Dtype, 7>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 8:\n    im2col_nd_gpu_kernel<Dtype, 8>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 9:\n    im2col_nd_gpu_kernel<Dtype, 9>  // 
NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  case 10:\n    im2col_nd_gpu_kernel<Dtype, 10>  // NOLINT_NEXT_LINE(whitespace/operators)\n        <<<CAFFE_GET_BLOCKS(num_kernels), CAFFE_CUDA_NUM_THREADS>>>(\n        num_kernels, data_im, im_shape, col_shape,\n        kernel_shape, pad, stride, dilation, data_col);\n    break;\n  default:\n    LOG(FATAL) << \"im2col_nd_gpu does not support computation with \"\n               << num_spatial_axes << \" spatial axes\";\n  }\n  CUDA_POST_KERNEL_CHECK;\n}\n\n// Explicit instantiation\ntemplate void im2col_nd_gpu<float>(const float* data_im,\n    const int num_spatial_axes, const int col_size,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, float* data_col);\ntemplate void im2col_nd_gpu<double>(const double* data_im,\n    const int num_spatial_axes, const int col_size,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, double* data_col);\n\ntemplate <typename Dtype>\n__global__ void col2im_gpu_kernel(const int n, const Dtype* data_col,\n    const int height, const int width, const int channels,\n    const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int height_col, const int width_col,\n    Dtype* data_im) {\n  CUDA_KERNEL_LOOP(index, n) {\n    Dtype val = 0;\n    const int w_im = index % width + pad_w;\n    const int h_im = (index / width) % height + pad_h;\n    const int c_im = index / (width * height);\n    int kernel_extent_w = (kernel_w - 1) * dilation_w + 1;\n    int kernel_extent_h = (kernel_h - 1) * dilation_h + 1;\n    // 
compute the start and end of the output\n    const int w_col_start =\n        (w_im < kernel_extent_w) ? 0 : (w_im - kernel_extent_w) / stride_w + 1;\n    const int w_col_end = min(w_im / stride_w + 1, width_col);\n    const int h_col_start =\n        (h_im < kernel_extent_h) ? 0 : (h_im - kernel_extent_h) / stride_h + 1;\n    const int h_col_end = min(h_im / stride_h + 1, height_col);\n    // TODO: use LCM of stride and dilation to avoid unnecessary loops\n    for (int h_col = h_col_start; h_col < h_col_end; h_col += 1) {\n      for (int w_col = w_col_start; w_col < w_col_end; w_col += 1) {\n        int h_k = (h_im - h_col * stride_h);\n        int w_k = (w_im - w_col * stride_w);\n        if (h_k % dilation_h == 0 && w_k % dilation_w == 0) {\n          h_k /= dilation_h;\n          w_k /= dilation_w;\n          int data_col_index = (((c_im * kernel_h + h_k) * kernel_w + w_k) *\n                                height_col + h_col) * width_col + w_col;\n          val += data_col[data_col_index];\n        }\n      }\n    }\n    data_im[index] = val;\n  }\n}\n\ntemplate <typename Dtype>\nvoid col2im_gpu(const Dtype* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    Dtype* data_im) {\n  int height_col = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) /\n      stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) /\n      stride_w + 1;\n  int num_kernels = channels * height * width;\n  // To avoid involving atomic operations, we will launch one kernel per\n  // bottom dimension, and then in the kernel add up the top dimensions.\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  col2im_gpu_kernel<Dtype><<<CAFFE_GET_BLOCKS(num_kernels),\n                             CAFFE_CUDA_NUM_THREADS>>>(\n      num_kernels, data_col, height, width, 
channels, kernel_h, kernel_w,\n      pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w,\n      height_col, width_col, data_im);\n  CUDA_POST_KERNEL_CHECK;\n}\n\n// Explicit instantiation\ntemplate void col2im_gpu<float>(const float* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    float* data_im);\ntemplate void col2im_gpu<double>(const double* data_col, const int channels,\n    const int height, const int width, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    double* data_im);\n\ntemplate <typename Dtype, int num_axes>\n__global__ void col2im_nd_gpu_kernel(const int n, const Dtype* data_col,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_im) {\n  int d_im[num_axes];  // NOLINT(runtime/arrays)\n  int d_col_iter[num_axes];  // NOLINT(runtime/arrays)\n  int d_col_start[num_axes];  // NOLINT(runtime/arrays)\n  int d_col_end[num_axes];  // NOLINT(runtime/arrays)\n\n  __shared__ int shared_dilation[num_axes];\n  __shared__ int shared_kernel_shape[num_axes];\n  __shared__ int shared_pad[num_axes];\n  __shared__ int shared_stride[num_axes];\n  __shared__ int shared_col_shape[num_axes + 1];\n  __shared__ int shared_im_shape[num_axes + 1];\n\n  if (threadIdx.x < num_axes) {\n    shared_dilation[threadIdx.x] = dilation[threadIdx.x];\n    shared_kernel_shape[threadIdx.x] = kernel_shape[threadIdx.x];\n    shared_pad[threadIdx.x] = pad[threadIdx.x];\n    shared_stride[threadIdx.x] = stride[threadIdx.x];\n  }\n  if (threadIdx.x < num_axes + 1) {\n    shared_col_shape[threadIdx.x] = col_shape[threadIdx.x];\n    shared_im_shape[threadIdx.x] = 
im_shape[threadIdx.x];\n  }\n  __syncthreads();\n\n  CUDA_KERNEL_LOOP(index, n) {\n    // Initialize channel_in, computed in the loop below, with intermediate\n    // computations used to compute the spatial indices.\n    int c_im = index;\n    // Calculate d_im (image dimensions).\n    for (int i = num_axes - 1; i >= 0; --i) {\n      d_im[i] = c_im % shared_im_shape[i + 1] + shared_pad[i];\n      c_im /= shared_im_shape[i + 1];\n    }\n    // Calculate col start/end indices.\n    bool done = false;\n    for (int i = 0; i < num_axes; ++i) {\n      const int kernel_extent =\n          shared_dilation[i] * (shared_kernel_shape[i] - 1) + 1;\n      d_col_start[i] = d_col_iter[i] =\n          (d_im[i] < kernel_extent) ? 0 :\n          (d_im[i] - kernel_extent) / shared_stride[i] + 1;\n      d_col_end[i] =\n          min(d_im[i] / shared_stride[i] + 1, shared_col_shape[i + 1]);\n      if (d_col_start[i] >= d_col_end[i]) {\n        // Skip computation if the dimension is 0 at any spatial axis --\n        // final val will be 0.\n        data_im[index] = 0;\n        done = true;\n        break;  // for (int i = 0; i < num_axes; ++i)\n      }\n    }\n    if (done) {\n      continue;  // CUDA_KERNEL_LOOP(index, n)\n    }\n    // Loop over the col to compute the output val.\n    Dtype val = 0;\n    bool incremented = true;\n    bool skip = false;\n    do {\n      // Compute the final offset.\n      int final_offset = 0;\n      int kernel_shape_prod = 1;\n      int kernel_index;\n      for (int i = num_axes - 1; i >= 0; --i) {\n        kernel_index = d_im[i] - d_col_iter[i] * shared_stride[i];\n        if (kernel_index % shared_dilation[i]) {\n          skip = true;\n          break;\n        } else {\n          kernel_index /= shared_dilation[i];\n          final_offset += kernel_index * kernel_shape_prod;\n          kernel_shape_prod *= shared_kernel_shape[i];\n        }\n      }\n      if (!skip) {\n        final_offset += kernel_shape_prod * c_im;\n        for (int i = 0; 
i < num_axes; ++i) {\n          final_offset *= shared_col_shape[i + 1];\n          final_offset += d_col_iter[i];\n        }\n        val += data_col[final_offset];\n      }\n      skip = false;\n      incremented = false;\n      for (int i = num_axes - 1; i >= 0; --i) {\n        const int d_max = d_col_end[i];\n        if (d_col_iter[i] == d_max - 1) {\n          d_col_iter[i] = d_col_start[i];\n        } else {  // d_col_iter[i] < d_max - 1\n          ++d_col_iter[i];\n          incremented = true;\n          break;  // for (int i = num_axes - 1; i >= 0; --i)\n        }\n      }  // for (int i = num_axes - 1; i >= 0; --i)\n    }  while (incremented);\n    data_im[index] = val;\n  }  // CUDA_KERNEL_LOOP(index, n)\n}\n\ntemplate <typename Dtype>\nvoid col2im_nd_gpu(const Dtype* data_col, const int num_spatial_axes,\n    const int im_size, const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, Dtype* data_im) {\n  // num_axes should be smaller than block size\n  DCHECK_LT(num_spatial_axes, CAFFE_CUDA_NUM_THREADS);\n  switch (num_spatial_axes) {\n  case 1:\n    col2im_nd_gpu_kernel<Dtype, 1>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 2:\n    col2im_nd_gpu_kernel<Dtype, 2>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 3:\n    col2im_nd_gpu_kernel<Dtype, 3>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 4:\n    
col2im_nd_gpu_kernel<Dtype, 4>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 5:\n    col2im_nd_gpu_kernel<Dtype, 5>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 6:\n    col2im_nd_gpu_kernel<Dtype, 6>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 7:\n    col2im_nd_gpu_kernel<Dtype, 7>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 8:\n    col2im_nd_gpu_kernel<Dtype, 8>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 9:\n    col2im_nd_gpu_kernel<Dtype, 9>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  case 10:\n    col2im_nd_gpu_kernel<Dtype, 10>  // NOLINT_NEXT_LINE(whitespace/operators)\n          <<<CAFFE_GET_BLOCKS(im_size), CAFFE_CUDA_NUM_THREADS>>>(\n          im_size, data_col, im_shape, col_shape,\n          kernel_shape, pad, stride, dilation, data_im);\n    break;\n  default:\n    LOG(FATAL) << \"col2im_nd_gpu does not 
support computation with \"\n               << num_spatial_axes << \" spatial axes\";\n  }\n  CUDA_POST_KERNEL_CHECK;\n}\n\n// Explicit instantiation\ntemplate void col2im_nd_gpu<float>(const float* data_col,\n    const int num_spatial_axes, const int im_size,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, float* data_im);\ntemplate void col2im_nd_gpu<double>(const double* data_col,\n    const int num_spatial_axes, const int im_size,\n    const int* im_shape, const int* col_shape,\n    const int* kernel_shape, const int* pad, const int* stride,\n    const int* dilation, double* data_im);\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/insert_splits.cpp",
    "content": "#include <algorithm>\n#include <map>\n#include <sstream>\n#include <string>\n#include <utility>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/insert_splits.hpp\"\n\nnamespace caffe {\n\nvoid InsertSplits(const NetParameter& param, NetParameter* param_split) {\n  // Initialize by copying from the input NetParameter.\n  param_split->CopyFrom(param);\n  param_split->clear_layer();\n  map<string, pair<int, int> > blob_name_to_last_top_idx;\n  map<pair<int, int>, pair<int, int> > bottom_idx_to_source_top_idx;\n  map<pair<int, int>, int> top_idx_to_bottom_count;\n  map<pair<int, int>, float> top_idx_to_loss_weight;\n  map<pair<int, int>, int> top_idx_to_bottom_split_idx;\n  map<int, string> layer_idx_to_layer_name;\n  layer_idx_to_layer_name[-1] = \"input\";\n  // Determine the number of times each blob is used as an input (bottom) blob.\n  for (int i = 0; i < param.input_size(); ++i) {\n    const string& blob_name = param.input(i);\n    blob_name_to_last_top_idx[blob_name] = make_pair(-1, i);\n  }\n  for (int i = 0; i < param.layer_size(); ++i) {\n    const LayerParameter& layer_param = param.layer(i);\n    layer_idx_to_layer_name[i] = layer_param.name();\n    for (int j = 0; j < layer_param.bottom_size(); ++j) {\n      const string& blob_name = layer_param.bottom(j);\n      if (blob_name_to_last_top_idx.find(blob_name) ==\n          blob_name_to_last_top_idx.end()) {\n        LOG(FATAL) << \"Unknown bottom blob '\" << blob_name << \"' (layer '\"\n                   << layer_param.name() << \"', bottom index \" << j << \")\";\n      }\n      const pair<int, int>& bottom_idx = make_pair(i, j);\n      const pair<int, int>& top_idx = blob_name_to_last_top_idx[blob_name];\n      bottom_idx_to_source_top_idx[bottom_idx] = top_idx;\n      ++top_idx_to_bottom_count[top_idx];\n    }\n    for (int j = 0; j < layer_param.top_size(); ++j) {\n      const string& blob_name = layer_param.top(j);\n      blob_name_to_last_top_idx[blob_name] = make_pair(i, 
j);\n    }\n    // A use of a top blob as a loss should be handled similarly to the use of\n    // a top blob as an input (bottom) blob to another layer.\n    const int last_loss =\n        std::min(layer_param.loss_weight_size(), layer_param.top_size());\n    for (int j = 0; j < last_loss; ++j) {\n      const string& blob_name = layer_param.top(j);\n      const pair<int, int>& top_idx = blob_name_to_last_top_idx[blob_name];\n      top_idx_to_loss_weight[top_idx] = layer_param.loss_weight(j);\n      if (top_idx_to_loss_weight[top_idx]) {\n        ++top_idx_to_bottom_count[top_idx];\n      }\n    }\n  }\n  // Create split layer for any input blobs used by other layer as bottom\n  // blobs more than once.\n  for (int i = 0; i < param.input_size(); ++i) {\n    const int split_count = top_idx_to_bottom_count[make_pair(-1, i)];\n    if (split_count > 1) {\n      const string& layer_name = layer_idx_to_layer_name[-1];\n      const string& blob_name = param.input(i);\n      LayerParameter* split_layer_param = param_split->add_layer();\n      const float kZeroLossWeight = 0;\n      ConfigureSplitLayer(layer_name, blob_name, i, split_count,\n          kZeroLossWeight, split_layer_param);\n    }\n  }\n  for (int i = 0; i < param.layer_size(); ++i) {\n    LayerParameter* layer_param = param_split->add_layer();\n    layer_param->CopyFrom(param.layer(i));\n    // Replace any shared bottom blobs with split layer outputs.\n    for (int j = 0; j < layer_param->bottom_size(); ++j) {\n      const pair<int, int>& top_idx =\n          bottom_idx_to_source_top_idx[make_pair(i, j)];\n      const int split_count = top_idx_to_bottom_count[top_idx];\n      if (split_count > 1) {\n        const string& layer_name = layer_idx_to_layer_name[top_idx.first];\n        const string& blob_name = layer_param->bottom(j);\n        layer_param->set_bottom(j, SplitBlobName(layer_name,\n            blob_name, top_idx.second, top_idx_to_bottom_split_idx[top_idx]++));\n      }\n    }\n    // Create split 
layer for any top blobs used by other layer as bottom\n    // blobs more than once.\n    for (int j = 0; j < layer_param->top_size(); ++j) {\n      const pair<int, int>& top_idx = make_pair(i, j);\n      const int split_count = top_idx_to_bottom_count[top_idx];\n      if (split_count > 1) {\n        const string& layer_name = layer_idx_to_layer_name[i];\n        const string& blob_name = layer_param->top(j);\n        LayerParameter* split_layer_param = param_split->add_layer();\n        const float loss_weight = top_idx_to_loss_weight[top_idx];\n        ConfigureSplitLayer(layer_name, blob_name, j, split_count,\n            loss_weight, split_layer_param);\n        if (loss_weight) {\n          layer_param->clear_loss_weight();\n          top_idx_to_bottom_split_idx[top_idx]++;\n        }\n      }\n    }\n  }\n}\n\nvoid ConfigureSplitLayer(const string& layer_name, const string& blob_name,\n    const int blob_idx, const int split_count, const float loss_weight,\n    LayerParameter* split_layer_param) {\n  split_layer_param->Clear();\n  split_layer_param->add_bottom(blob_name);\n  split_layer_param->set_name(SplitLayerName(layer_name, blob_name, blob_idx));\n  split_layer_param->set_type(\"Split\");\n  for (int k = 0; k < split_count; ++k) {\n    split_layer_param->add_top(\n        SplitBlobName(layer_name, blob_name, blob_idx, k));\n    if (loss_weight) {\n      if (k == 0) {\n        split_layer_param->add_loss_weight(loss_weight);\n      } else {\n        split_layer_param->add_loss_weight(0);\n      }\n    }\n  }\n}\n\nstring SplitLayerName(const string& layer_name, const string& blob_name,\n    const int blob_idx) {\n  ostringstream split_layer_name;\n  split_layer_name << blob_name << \"_\" << layer_name << \"_\" << blob_idx\n      << \"_split\";\n  return split_layer_name.str();\n}\n\nstring SplitBlobName(const string& layer_name, const string& blob_name,\n    const int blob_idx, const int split_idx) {\n  ostringstream split_blob_name;\n  split_blob_name << 
blob_name << \"_\" << layer_name << \"_\" << blob_idx\n      << \"_split_\" << split_idx;\n  return split_blob_name.str();\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/io.cpp",
    "content": "#include <fcntl.h>\n#include <google/protobuf/io/coded_stream.h>\n#include <google/protobuf/io/zero_copy_stream_impl.h>\n#include <google/protobuf/text_format.h>\n#ifdef USE_OPENCV\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/highgui/highgui_c.h>\n#include <opencv2/imgproc/imgproc.hpp>\n#endif  // USE_OPENCV\n#include <stdint.h>\n\n#include <algorithm>\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n#include <vector>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/io.hpp\"\n\nconst int kProtoReadBytesLimit = INT_MAX;  // Max size of 2 GB minus 1 byte.\n\nnamespace caffe {\n\nusing google::protobuf::io::FileInputStream;\nusing google::protobuf::io::FileOutputStream;\nusing google::protobuf::io::ZeroCopyInputStream;\nusing google::protobuf::io::CodedInputStream;\nusing google::protobuf::io::ZeroCopyOutputStream;\nusing google::protobuf::io::CodedOutputStream;\nusing google::protobuf::Message;\n\nbool ReadProtoFromTextFile(const char* filename, Message* proto) {\n  int fd = open(filename, O_RDONLY);\n  CHECK_NE(fd, -1) << \"File not found: \" << filename;\n  FileInputStream* input = new FileInputStream(fd);\n  bool success = google::protobuf::TextFormat::Parse(input, proto);\n  delete input;\n  close(fd);\n  return success;\n}\n\nvoid WriteProtoToTextFile(const Message& proto, const char* filename) {\n  int fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);\n  FileOutputStream* output = new FileOutputStream(fd);\n  CHECK(google::protobuf::TextFormat::Print(proto, output));\n  delete output;\n  close(fd);\n}\n\nbool ReadProtoFromBinaryFile(const char* filename, Message* proto) {\n  int fd = open(filename, O_RDONLY);\n  CHECK_NE(fd, -1) << \"File not found: \" << filename;\n  ZeroCopyInputStream* raw_input = new FileInputStream(fd);\n  CodedInputStream* coded_input = new CodedInputStream(raw_input);\n  
coded_input->SetTotalBytesLimit(kProtoReadBytesLimit, 536870912);\n\n  bool success = proto->ParseFromCodedStream(coded_input);\n\n  delete coded_input;\n  delete raw_input;\n  close(fd);\n  return success;\n}\n\nvoid WriteProtoToBinaryFile(const Message& proto, const char* filename) {\n  fstream output(filename, ios::out | ios::trunc | ios::binary);\n  CHECK(proto.SerializeToOstream(&output));\n}\n\n#ifdef USE_OPENCV\ncv::Mat ReadImageToCVMat(const string& filename,\n    const int height, const int width, const bool is_color) {\n  cv::Mat cv_img;\n  int cv_read_flag = (is_color ? CV_LOAD_IMAGE_COLOR :\n    CV_LOAD_IMAGE_GRAYSCALE);\n  cv::Mat cv_img_origin = cv::imread(filename, cv_read_flag);\n  if (!cv_img_origin.data) {\n    LOG(ERROR) << \"Could not open or find file \" << filename;\n    return cv_img_origin;\n  }\n  if (height > 0 && width > 0) {\n    cv::resize(cv_img_origin, cv_img, cv::Size(width, height));\n  } else {\n    cv_img = cv_img_origin;\n  }\n  return cv_img;\n}\n\ncv::Mat ReadImageToCVMat(const string& filename,\n    const int height, const int width) {\n  return ReadImageToCVMat(filename, height, width, true);\n}\n\ncv::Mat ReadImageToCVMat(const string& filename,\n    const bool is_color) {\n  return ReadImageToCVMat(filename, 0, 0, is_color);\n}\n\ncv::Mat ReadImageToCVMat(const string& filename) {\n  return ReadImageToCVMat(filename, 0, 0, true);\n}\n\n// Do the file extension and encoding match?\nstatic bool matchExt(const std::string & fn,\n                     std::string en) {\n  size_t p = fn.rfind('.');\n  std::string ext = p != fn.npos ? 
fn.substr(p) : fn;\n  std::transform(ext.begin(), ext.end(), ext.begin(), ::tolower);\n  std::transform(en.begin(), en.end(), en.begin(), ::tolower);\n  if ( ext == en )\n    return true;\n  if ( en == \"jpg\" && ext == \"jpeg\" )\n    return true;\n  return false;\n}\n\nbool ReadImageToDatum(const string& filename, const int label,\n    const int height, const int width, const bool is_color,\n    const std::string & encoding, Datum* datum) {\n  cv::Mat cv_img = ReadImageToCVMat(filename, height, width, is_color);\n  if (cv_img.data) {\n    if (encoding.size()) {\n      if ( (cv_img.channels() == 3) == is_color && !height && !width &&\n          matchExt(filename, encoding) )\n        return ReadFileToDatum(filename, label, datum);\n      std::vector<uchar> buf;\n      cv::imencode(\".\"+encoding, cv_img, buf);\n      datum->set_data(std::string(reinterpret_cast<char*>(&buf[0]),\n                      buf.size()));\n      datum->set_label(label);\n      datum->set_encoded(true);\n      return true;\n    }\n    CVMatToDatum(cv_img, datum);\n    datum->set_label(label);\n    return true;\n  } else {\n    return false;\n  }\n}\n#endif  // USE_OPENCV\n\nbool ReadFileToDatum(const string& filename, const int label,\n    Datum* datum) {\n  std::streampos size;\n\n  fstream file(filename.c_str(), ios::in|ios::binary|ios::ate);\n  if (file.is_open()) {\n    size = file.tellg();\n    std::string buffer(size, ' ');\n    file.seekg(0, ios::beg);\n    file.read(&buffer[0], size);\n    file.close();\n    datum->set_data(buffer);\n    datum->set_label(label);\n    datum->set_encoded(true);\n    return true;\n  } else {\n    return false;\n  }\n}\n\n#ifdef USE_OPENCV\ncv::Mat DecodeDatumToCVMatNative(const Datum& datum) {\n  cv::Mat cv_img;\n  CHECK(datum.encoded()) << \"Datum not encoded\";\n  const string& data = datum.data();\n  std::vector<char> vec_data(data.c_str(), data.c_str() + data.size());\n  cv_img = cv::imdecode(vec_data, -1);\n  if (!cv_img.data) {\n    LOG(ERROR) 
<< \"Could not decode datum \";\n  }\n  return cv_img;\n}\ncv::Mat DecodeDatumToCVMat(const Datum& datum, bool is_color) {\n  cv::Mat cv_img;\n  CHECK(datum.encoded()) << \"Datum not encoded\";\n  const string& data = datum.data();\n  std::vector<char> vec_data(data.c_str(), data.c_str() + data.size());\n  int cv_read_flag = (is_color ? CV_LOAD_IMAGE_COLOR :\n    CV_LOAD_IMAGE_GRAYSCALE);\n  cv_img = cv::imdecode(vec_data, cv_read_flag);\n  if (!cv_img.data) {\n    LOG(ERROR) << \"Could not decode datum \";\n  }\n  return cv_img;\n}\n\n// If Datum is encoded will decoded using DecodeDatumToCVMat and CVMatToDatum\n// If Datum is not encoded will do nothing\nbool DecodeDatumNative(Datum* datum) {\n  if (datum->encoded()) {\n    cv::Mat cv_img = DecodeDatumToCVMatNative((*datum));\n    CVMatToDatum(cv_img, datum);\n    return true;\n  } else {\n    return false;\n  }\n}\nbool DecodeDatum(Datum* datum, bool is_color) {\n  if (datum->encoded()) {\n    cv::Mat cv_img = DecodeDatumToCVMat((*datum), is_color);\n    CVMatToDatum(cv_img, datum);\n    return true;\n  } else {\n    return false;\n  }\n}\n\nvoid CVMatToDatum(const cv::Mat& cv_img, Datum* datum) {\n  CHECK(cv_img.depth() == CV_8U) << \"Image data type must be unsigned byte\";\n  datum->set_channels(cv_img.channels());\n  datum->set_height(cv_img.rows);\n  datum->set_width(cv_img.cols);\n  datum->clear_data();\n  datum->clear_float_data();\n  datum->set_encoded(false);\n  int datum_channels = datum->channels();\n  int datum_height = datum->height();\n  int datum_width = datum->width();\n  int datum_size = datum_channels * datum_height * datum_width;\n  std::string buffer(datum_size, ' ');\n  for (int h = 0; h < datum_height; ++h) {\n    const uchar* ptr = cv_img.ptr<uchar>(h);\n    int img_index = 0;\n    for (int w = 0; w < datum_width; ++w) {\n      for (int c = 0; c < datum_channels; ++c) {\n        int datum_index = (c * datum_height + h) * datum_width + w;\n        buffer[datum_index] = 
static_cast<char>(ptr[img_index++]);\n      }\n    }\n  }\n  datum->set_data(buffer);\n}\n#endif  // USE_OPENCV\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/math_functions.cpp",
    "content": "#include <boost/math/special_functions/next.hpp>\n#include <boost/random.hpp>\n\n#include <limits>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n#include \"caffe/util/rng.hpp\"\n\nnamespace caffe {\n\ntemplate<>\nvoid caffe_cpu_gemm<float>(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const float alpha, const float* A, const float* B, const float beta,\n    float* C) {\n  int lda = (TransA == CblasNoTrans) ? K : M;\n  int ldb = (TransB == CblasNoTrans) ? N : K;\n  cblas_sgemm(CblasRowMajor, TransA, TransB, M, N, K, alpha, A, lda, B,\n      ldb, beta, C, N);\n}\n\ntemplate<>\nvoid caffe_cpu_gemm<double>(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const double alpha, const double* A, const double* B, const double beta,\n    double* C) {\n  int lda = (TransA == CblasNoTrans) ? K : M;\n  int ldb = (TransB == CblasNoTrans) ? 
N : K;\n  cblas_dgemm(CblasRowMajor, TransA, TransB, M, N, K, alpha, A, lda, B,\n      ldb, beta, C, N);\n}\n\ntemplate <>\nvoid caffe_cpu_gemv<float>(const CBLAS_TRANSPOSE TransA, const int M,\n    const int N, const float alpha, const float* A, const float* x,\n    const float beta, float* y) {\n  cblas_sgemv(CblasRowMajor, TransA, M, N, alpha, A, N, x, 1, beta, y, 1);\n}\n\ntemplate <>\nvoid caffe_cpu_gemv<double>(const CBLAS_TRANSPOSE TransA, const int M,\n    const int N, const double alpha, const double* A, const double* x,\n    const double beta, double* y) {\n  cblas_dgemv(CblasRowMajor, TransA, M, N, alpha, A, N, x, 1, beta, y, 1);\n}\n\ntemplate <>\nvoid caffe_axpy<float>(const int N, const float alpha, const float* X,\n    float* Y) { cblas_saxpy(N, alpha, X, 1, Y, 1); }\n\ntemplate <>\nvoid caffe_axpy<double>(const int N, const double alpha, const double* X,\n    double* Y) { cblas_daxpy(N, alpha, X, 1, Y, 1); }\n\ntemplate <typename Dtype>\nvoid caffe_set(const int N, const Dtype alpha, Dtype* Y) {\n  if (alpha == 0) {\n    memset(Y, 0, sizeof(Dtype) * N);  // NOLINT(caffe/alt_fn)\n    return;\n  }\n  for (int i = 0; i < N; ++i) {\n    Y[i] = alpha;\n  }\n}\n\ntemplate void caffe_set<int>(const int N, const int alpha, int* Y);\ntemplate void caffe_set<float>(const int N, const float alpha, float* Y);\ntemplate void caffe_set<double>(const int N, const double alpha, double* Y);\n\ntemplate <>\nvoid caffe_add_scalar(const int N, const float alpha, float* Y) {\n  for (int i = 0; i < N; ++i) {\n    Y[i] += alpha;\n  }\n}\n\ntemplate <>\nvoid caffe_add_scalar(const int N, const double alpha, double* Y) {\n  for (int i = 0; i < N; ++i) {\n    Y[i] += alpha;\n  }\n}\n\ntemplate <typename Dtype>\nvoid caffe_copy(const int N, const Dtype* X, Dtype* Y) {\n  if (X != Y) {\n    if (Caffe::mode() == Caffe::GPU) {\n#ifndef CPU_ONLY\n      // NOLINT_NEXT_LINE(caffe/alt_fn)\n      CUDA_CHECK(cudaMemcpy(Y, X, sizeof(Dtype) * N, cudaMemcpyDefault));\n#else\n      
NO_GPU;\n#endif\n    } else {\n      memcpy(Y, X, sizeof(Dtype) * N);  // NOLINT(caffe/alt_fn)\n    }\n  }\n}\n\ntemplate void caffe_copy<int>(const int N, const int* X, int* Y);\ntemplate void caffe_copy<unsigned int>(const int N, const unsigned int* X,\n    unsigned int* Y);\ntemplate void caffe_copy<float>(const int N, const float* X, float* Y);\ntemplate void caffe_copy<double>(const int N, const double* X, double* Y);\n\ntemplate <>\nvoid caffe_scal<float>(const int N, const float alpha, float *X) {\n  cblas_sscal(N, alpha, X, 1);\n}\n\ntemplate <>\nvoid caffe_scal<double>(const int N, const double alpha, double *X) {\n  cblas_dscal(N, alpha, X, 1);\n}\n\ntemplate <>\nvoid caffe_cpu_axpby<float>(const int N, const float alpha, const float* X,\n                            const float beta, float* Y) {\n  cblas_saxpby(N, alpha, X, 1, beta, Y, 1);\n}\n\ntemplate <>\nvoid caffe_cpu_axpby<double>(const int N, const double alpha, const double* X,\n                             const double beta, double* Y) {\n  cblas_daxpby(N, alpha, X, 1, beta, Y, 1);\n}\n\ntemplate <>\nvoid caffe_add<float>(const int n, const float* a, const float* b,\n    float* y) {\n  vsAdd(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_add<double>(const int n, const double* a, const double* b,\n    double* y) {\n  vdAdd(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_sub<float>(const int n, const float* a, const float* b,\n    float* y) {\n  vsSub(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_sub<double>(const int n, const double* a, const double* b,\n    double* y) {\n  vdSub(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_mul<float>(const int n, const float* a, const float* b,\n    float* y) {\n  vsMul(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_mul<double>(const int n, const double* a, const double* b,\n    double* y) {\n  vdMul(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_div<float>(const int n, const float* a, const float* b,\n    float* y) {\n  vsDiv(n, a, b, y);\n}\n\ntemplate <>\nvoid 
caffe_div<double>(const int n, const double* a, const double* b,\n    double* y) {\n  vdDiv(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_powx<float>(const int n, const float* a, const float b,\n    float* y) {\n  vsPowx(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_powx<double>(const int n, const double* a, const double b,\n    double* y) {\n  vdPowx(n, a, b, y);\n}\n\ntemplate <>\nvoid caffe_sqr<float>(const int n, const float* a, float* y) {\n  vsSqr(n, a, y);\n}\n\ntemplate <>\nvoid caffe_sqr<double>(const int n, const double* a, double* y) {\n  vdSqr(n, a, y);\n}\n\ntemplate <>\nvoid caffe_exp<float>(const int n, const float* a, float* y) {\n  vsExp(n, a, y);\n}\n\ntemplate <>\nvoid caffe_exp<double>(const int n, const double* a, double* y) {\n  vdExp(n, a, y);\n}\n\ntemplate <>\nvoid caffe_log<float>(const int n, const float* a, float* y) {\n  vsLn(n, a, y);\n}\n\ntemplate <>\nvoid caffe_log<double>(const int n, const double* a, double* y) {\n  vdLn(n, a, y);\n}\n\ntemplate <>\nvoid caffe_abs<float>(const int n, const float* a, float* y) {\n    vsAbs(n, a, y);\n}\n\ntemplate <>\nvoid caffe_abs<double>(const int n, const double* a, double* y) {\n    vdAbs(n, a, y);\n}\n\nunsigned int caffe_rng_rand() {\n  return (*caffe_rng())();\n}\n\ntemplate <typename Dtype>\nDtype caffe_nextafter(const Dtype b) {\n  return boost::math::nextafter<Dtype>(\n      b, std::numeric_limits<Dtype>::max());\n}\n\ntemplate\nfloat caffe_nextafter(const float b);\n\ntemplate\ndouble caffe_nextafter(const double b);\n\ntemplate <typename Dtype>\nvoid caffe_rng_uniform(const int n, const Dtype a, const Dtype b, Dtype* r) {\n  CHECK_GE(n, 0);\n  CHECK(r);\n  CHECK_LE(a, b);\n  boost::uniform_real<Dtype> random_distribution(a, caffe_nextafter<Dtype>(b));\n  boost::variate_generator<caffe::rng_t*, boost::uniform_real<Dtype> >\n      variate_generator(caffe_rng(), random_distribution);\n  for (int i = 0; i < n; ++i) {\n    r[i] = variate_generator();\n  }\n}\n\ntemplate\nvoid 
caffe_rng_uniform<float>(const int n, const float a, const float b,\n                              float* r);\n\ntemplate\nvoid caffe_rng_uniform<double>(const int n, const double a, const double b,\n                               double* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_gaussian(const int n, const Dtype a,\n                        const Dtype sigma, Dtype* r) {\n  CHECK_GE(n, 0);\n  CHECK(r);\n  CHECK_GT(sigma, 0);\n  boost::normal_distribution<Dtype> random_distribution(a, sigma);\n  boost::variate_generator<caffe::rng_t*, boost::normal_distribution<Dtype> >\n      variate_generator(caffe_rng(), random_distribution);\n  for (int i = 0; i < n; ++i) {\n    r[i] = variate_generator();\n  }\n}\n\ntemplate\nvoid caffe_rng_gaussian<float>(const int n, const float mu,\n                               const float sigma, float* r);\n\ntemplate\nvoid caffe_rng_gaussian<double>(const int n, const double mu,\n                                const double sigma, double* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_bernoulli(const int n, const Dtype p, int* r) {\n  CHECK_GE(n, 0);\n  CHECK(r);\n  CHECK_GE(p, 0);\n  CHECK_LE(p, 1);\n  boost::bernoulli_distribution<Dtype> random_distribution(p);\n  boost::variate_generator<caffe::rng_t*, boost::bernoulli_distribution<Dtype> >\n      variate_generator(caffe_rng(), random_distribution);\n  for (int i = 0; i < n; ++i) {\n    r[i] = variate_generator();\n  }\n}\n\ntemplate\nvoid caffe_rng_bernoulli<double>(const int n, const double p, int* r);\n\ntemplate\nvoid caffe_rng_bernoulli<float>(const int n, const float p, int* r);\n\ntemplate <typename Dtype>\nvoid caffe_rng_bernoulli(const int n, const Dtype p, unsigned int* r) {\n  CHECK_GE(n, 0);\n  CHECK(r);\n  CHECK_GE(p, 0);\n  CHECK_LE(p, 1);\n  boost::bernoulli_distribution<Dtype> random_distribution(p);\n  boost::variate_generator<caffe::rng_t*, boost::bernoulli_distribution<Dtype> >\n      variate_generator(caffe_rng(), random_distribution);\n  for (int i = 0; i 
< n; ++i) {\n    r[i] = static_cast<unsigned int>(variate_generator());\n  }\n}\n\ntemplate\nvoid caffe_rng_bernoulli<double>(const int n, const double p, unsigned int* r);\n\ntemplate\nvoid caffe_rng_bernoulli<float>(const int n, const float p, unsigned int* r);\n\ntemplate <>\nfloat caffe_cpu_strided_dot<float>(const int n, const float* x, const int incx,\n    const float* y, const int incy) {\n  return cblas_sdot(n, x, incx, y, incy);\n}\n\ntemplate <>\ndouble caffe_cpu_strided_dot<double>(const int n, const double* x,\n    const int incx, const double* y, const int incy) {\n  return cblas_ddot(n, x, incx, y, incy);\n}\n\ntemplate <typename Dtype>\nDtype caffe_cpu_dot(const int n, const Dtype* x, const Dtype* y) {\n  return caffe_cpu_strided_dot(n, x, 1, y, 1);\n}\n\ntemplate\nfloat caffe_cpu_dot<float>(const int n, const float* x, const float* y);\n\ntemplate\ndouble caffe_cpu_dot<double>(const int n, const double* x, const double* y);\n\ntemplate <>\nfloat caffe_cpu_asum<float>(const int n, const float* x) {\n  return cblas_sasum(n, x, 1);\n}\n\ntemplate <>\ndouble caffe_cpu_asum<double>(const int n, const double* x) {\n  return cblas_dasum(n, x, 1);\n}\n\ntemplate <>\nvoid caffe_cpu_scale<float>(const int n, const float alpha, const float *x,\n                            float* y) {\n  cblas_scopy(n, x, 1, y, 1);\n  cblas_sscal(n, alpha, y, 1);\n}\n\ntemplate <>\nvoid caffe_cpu_scale<double>(const int n, const double alpha, const double *x,\n                             double* y) {\n  cblas_dcopy(n, x, 1, y, 1);\n  cblas_dscal(n, alpha, y, 1);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/math_functions.cu",
    "content": "#include <math_functions.h>  // CUDA's, not caffe's, for fabs, signbit\n#include <thrust/device_vector.h>\n#include <thrust/functional.h>  // thrust::plus\n#include <thrust/reduce.h>\n\n#include <cmath>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/util/math_functions.hpp\"\n\nnamespace caffe {\n\ntemplate <>\nvoid caffe_gpu_gemm<float>(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const float alpha, const float* A, const float* B, const float beta,\n    float* C) {\n  // Note that cublas follows fortran order.\n  int lda = (TransA == CblasNoTrans) ? K : M;\n  int ldb = (TransB == CblasNoTrans) ? N : K;\n  cublasOperation_t cuTransA =\n      (TransA == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;\n  cublasOperation_t cuTransB =\n      (TransB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;\n  CUBLAS_CHECK(cublasSgemm(Caffe::cublas_handle(), cuTransB, cuTransA,\n      N, M, K, &alpha, B, ldb, A, lda, &beta, C, N));\n}\n\ntemplate <>\nvoid caffe_gpu_gemm<double>(const CBLAS_TRANSPOSE TransA,\n    const CBLAS_TRANSPOSE TransB, const int M, const int N, const int K,\n    const double alpha, const double* A, const double* B, const double beta,\n    double* C) {\n  // Note that cublas follows fortran order.\n  int lda = (TransA == CblasNoTrans) ? K : M;\n  int ldb = (TransB == CblasNoTrans) ? N : K;\n  cublasOperation_t cuTransA =\n      (TransA == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;\n  cublasOperation_t cuTransB =\n      (TransB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;\n  CUBLAS_CHECK(cublasDgemm(Caffe::cublas_handle(), cuTransB, cuTransA,\n      N, M, K, &alpha, B, ldb, A, lda, &beta, C, N));\n}\n\ntemplate <>\nvoid caffe_gpu_gemv<float>(const CBLAS_TRANSPOSE TransA, const int M,\n    const int N, const float alpha, const float* A, const float* x,\n    const float beta, float* y) {\n  cublasOperation_t cuTransA =\n      (TransA == CblasNoTrans) ? 
CUBLAS_OP_T : CUBLAS_OP_N;\n  CUBLAS_CHECK(cublasSgemv(Caffe::cublas_handle(), cuTransA, N, M, &alpha,\n      A, N, x, 1, &beta, y, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_gemv<double>(const CBLAS_TRANSPOSE TransA, const int M,\n    const int N, const double alpha, const double* A, const double* x,\n    const double beta, double* y) {\n  cublasOperation_t cuTransA =\n      (TransA == CblasNoTrans) ? CUBLAS_OP_T : CUBLAS_OP_N;\n  CUBLAS_CHECK(cublasDgemv(Caffe::cublas_handle(), cuTransA, N, M, &alpha,\n      A, N, x, 1, &beta, y, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_axpy<float>(const int N, const float alpha, const float* X,\n    float* Y) {\n  CUBLAS_CHECK(cublasSaxpy(Caffe::cublas_handle(), N, &alpha, X, 1, Y, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_axpy<double>(const int N, const double alpha, const double* X,\n    double* Y) {\n  CUBLAS_CHECK(cublasDaxpy(Caffe::cublas_handle(), N, &alpha, X, 1, Y, 1));\n}\n\nvoid caffe_gpu_memcpy(const size_t N, const void* X, void* Y) {\n  if (X != Y) {\n    CUDA_CHECK(cudaMemcpy(Y, X, N, cudaMemcpyDefault));  // NOLINT(caffe/alt_fn)\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_scal<float>(const int N, const float alpha, float *X) {\n  CUBLAS_CHECK(cublasSscal(Caffe::cublas_handle(), N, &alpha, X, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_scal<double>(const int N, const double alpha, double *X) {\n  CUBLAS_CHECK(cublasDscal(Caffe::cublas_handle(), N, &alpha, X, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_axpby<float>(const int N, const float alpha, const float* X,\n    const float beta, float* Y) {\n  caffe_gpu_scal<float>(N, beta, Y);\n  caffe_gpu_axpy<float>(N, alpha, X, Y);\n}\n\ntemplate <>\nvoid caffe_gpu_axpby<double>(const int N, const double alpha, const double* X,\n    const double beta, double* Y) {\n  caffe_gpu_scal<double>(N, beta, Y);\n  caffe_gpu_axpy<double>(N, alpha, X, Y);\n}\n\ntemplate <>\nvoid caffe_gpu_dot<float>(const int n, const float* x, const float* y,\n    float* out) {\n  
CUBLAS_CHECK(cublasSdot(Caffe::cublas_handle(), n, x, 1, y, 1, out));\n}\n\ntemplate <>\nvoid caffe_gpu_dot<double>(const int n, const double* x, const double* y,\n    double * out) {\n  CUBLAS_CHECK(cublasDdot(Caffe::cublas_handle(), n, x, 1, y, 1, out));\n}\n\ntemplate <>\nvoid caffe_gpu_asum<float>(const int n, const float* x, float* y) {\n  CUBLAS_CHECK(cublasSasum(Caffe::cublas_handle(), n, x, 1, y));\n}\n\ntemplate <>\nvoid caffe_gpu_asum<double>(const int n, const double* x, double* y) {\n  CUBLAS_CHECK(cublasDasum(Caffe::cublas_handle(), n, x, 1, y));\n}\n\ntemplate <>\nvoid caffe_gpu_scale<float>(const int n, const float alpha, const float *x,\n                            float* y) {\n  CUBLAS_CHECK(cublasScopy(Caffe::cublas_handle(), n, x, 1, y, 1));\n  CUBLAS_CHECK(cublasSscal(Caffe::cublas_handle(), n, &alpha, y, 1));\n}\n\ntemplate <>\nvoid caffe_gpu_scale<double>(const int n, const double alpha, const double *x,\n                             double* y) {\n  CUBLAS_CHECK(cublasDcopy(Caffe::cublas_handle(), n, x, 1, y, 1));\n  CUBLAS_CHECK(cublasDscal(Caffe::cublas_handle(), n, &alpha, y, 1));\n}\n\ntemplate <typename Dtype>\n__global__ void set_kernel(const int n, const Dtype alpha, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = alpha;\n  }\n}\n\ntemplate <typename Dtype>\nvoid caffe_gpu_set(const int N, const Dtype alpha, Dtype* Y) {\n  if (alpha == 0) {\n    CUDA_CHECK(cudaMemset(Y, 0, sizeof(Dtype) * N));  // NOLINT(caffe/alt_fn)\n    return;\n  }\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  set_kernel<Dtype><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, alpha, Y);\n}\n\ntemplate void caffe_gpu_set<int>(const int N, const int alpha, int* Y);\ntemplate void caffe_gpu_set<float>(const int N, const float alpha, float* Y);\ntemplate void caffe_gpu_set<double>(const int N, const double alpha, double* Y);\n\ntemplate <typename Dtype>\n__global__ void add_scalar_kernel(const int n, const Dtype alpha, Dtype* y) {\n  
CUDA_KERNEL_LOOP(index, n) {\n    y[index] += alpha;\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_add_scalar(const int N, const float alpha, float* Y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  add_scalar_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, alpha, Y);\n}\n\ntemplate <>\nvoid caffe_gpu_add_scalar(const int N, const double alpha, double* Y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  add_scalar_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, alpha, Y);\n}\n\ntemplate <typename Dtype>\n__global__ void add_kernel(const int n, const Dtype* a,\n    const Dtype* b, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = a[index] + b[index];\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_add<float>(const int N, const float* a, const float* b,\n    float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  add_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <>\nvoid caffe_gpu_add<double>(const int N, const double* a, const double* b,\n    double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  add_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <typename Dtype>\n__global__ void sub_kernel(const int n, const Dtype* a,\n    const Dtype* b, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = a[index] - b[index];\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_sub<float>(const int N, const float* a, const float* b,\n    float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  sub_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <>\nvoid caffe_gpu_sub<double>(const int N, const double* a, const double* b,\n    double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  sub_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <typename Dtype>\n__global__ void mul_kernel(const int n, const Dtype* a,\n    
const Dtype* b, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = a[index] * b[index];\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_mul<float>(const int N, const float* a,\n    const float* b, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  mul_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <>\nvoid caffe_gpu_mul<double>(const int N, const double* a,\n    const double* b, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  mul_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <typename Dtype>\n__global__ void div_kernel(const int n, const Dtype* a,\n    const Dtype* b, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = a[index] / b[index];\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_div<float>(const int N, const float* a,\n    const float* b, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  div_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <>\nvoid caffe_gpu_div<double>(const int N, const double* a,\n    const double* b, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  div_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, b, y);\n}\n\ntemplate <typename Dtype>\n__global__ void abs_kernel(const int n, const Dtype* a, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = abs(a[index]);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_abs<float>(const int N, const float* a, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  abs_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\ntemplate <>\nvoid caffe_gpu_abs<double>(const int N, const double* a, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  abs_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\n\ntemplate <typename Dtype>\n__global__ void exp_kernel(const int n, const Dtype* a, Dtype* y) {\n  
CUDA_KERNEL_LOOP(index, n) {\n    y[index] = exp(a[index]);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_exp<float>(const int N, const float* a, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  exp_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\ntemplate <>\nvoid caffe_gpu_exp<double>(const int N, const double* a, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  exp_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\ntemplate <typename Dtype>\n__global__ void log_kernel(const int n, const Dtype* a, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = log(a[index]);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_log<float>(const int N, const float* a, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  log_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\ntemplate <>\nvoid caffe_gpu_log<double>(const int N, const double* a, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  log_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, y);\n}\n\ntemplate <typename Dtype>\n__global__ void powx_kernel(const int n, const Dtype* a,\n    const Dtype alpha, Dtype* y) {\n  CUDA_KERNEL_LOOP(index, n) {\n    y[index] = pow(a[index], alpha);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_powx<float>(const int N, const float* a,\n    const float alpha, float* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  powx_kernel<float><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, alpha, y);\n}\n\ntemplate <>\nvoid caffe_gpu_powx<double>(const int N, const double* a,\n    const double alpha, double* y) {\n  // NOLINT_NEXT_LINE(whitespace/operators)\n  powx_kernel<double><<<CAFFE_GET_BLOCKS(N), CAFFE_CUDA_NUM_THREADS>>>(\n      N, a, alpha, y);\n}\n\nDEFINE_AND_INSTANTIATE_GPU_UNARY_FUNC(sign, y[index] = (Dtype(0) < x[index])\n                                      - (x[index] < 
Dtype(0)));\nDEFINE_AND_INSTANTIATE_GPU_UNARY_FUNC(sgnbit, y[index] = signbit(x[index]));\n\nvoid caffe_gpu_rng_uniform(const int n, unsigned int* r) {\n  CURAND_CHECK(curandGenerate(Caffe::curand_generator(), r, n));\n}\n\ntemplate <>\nvoid caffe_gpu_rng_uniform<float>(const int n, const float a, const float b,\n                                  float* r) {\n  CURAND_CHECK(curandGenerateUniform(Caffe::curand_generator(), r, n));\n  const float range = b - a;\n  if (range != static_cast<float>(1)) {\n    caffe_gpu_scal(n, range, r);\n  }\n  if (a != static_cast<float>(0)) {\n    caffe_gpu_add_scalar(n, a, r);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_rng_uniform<double>(const int n, const double a, const double b,\n                                   double* r) {\n  CURAND_CHECK(curandGenerateUniformDouble(Caffe::curand_generator(), r, n));\n  const double range = b - a;\n  if (range != static_cast<double>(1)) {\n    caffe_gpu_scal(n, range, r);\n  }\n  if (a != static_cast<double>(0)) {\n    caffe_gpu_add_scalar(n, a, r);\n  }\n}\n\ntemplate <>\nvoid caffe_gpu_rng_gaussian(const int n, const float mu, const float sigma,\n                            float* r) {\n  CURAND_CHECK(\n      curandGenerateNormal(Caffe::curand_generator(), r, n, mu, sigma));\n}\n\ntemplate <>\nvoid caffe_gpu_rng_gaussian(const int n, const double mu, const double sigma,\n                            double* r) {\n  CURAND_CHECK(\n      curandGenerateNormalDouble(Caffe::curand_generator(), r, n, mu, sigma));\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/signal_handler.cpp",
    "content": "#include <boost/bind.hpp>\n#include <glog/logging.h>\n\n#include <signal.h>\n#include <csignal>\n\n#include \"caffe/util/signal_handler.h\"\n\nnamespace {\n  static volatile sig_atomic_t got_sigint = false;\n  static volatile sig_atomic_t got_sighup = false;\n  static bool already_hooked_up = false;\n\n  void handle_signal(int signal) {\n    switch (signal) {\n    case SIGHUP:\n      got_sighup = true;\n      break;\n    case SIGINT:\n      got_sigint = true;\n      break;\n    }\n  }\n\n  void HookupHandler() {\n    if (already_hooked_up) {\n      LOG(FATAL) << \"Tried to hookup signal handlers more than once.\";\n    }\n    already_hooked_up = true;\n\n    struct sigaction sa;\n    // Setup the handler\n    sa.sa_handler = &handle_signal;\n    // Restart the system call, if at all possible\n    sa.sa_flags = SA_RESTART;\n    // Block every signal during the handler\n    sigfillset(&sa.sa_mask);\n    // Intercept SIGHUP and SIGINT\n    if (sigaction(SIGHUP, &sa, NULL) == -1) {\n      LOG(FATAL) << \"Cannot install SIGHUP handler.\";\n    }\n    if (sigaction(SIGINT, &sa, NULL) == -1) {\n      LOG(FATAL) << \"Cannot install SIGINT handler.\";\n    }\n  }\n\n  // Set the signal handlers to the default.\n  void UnhookHandler() {\n    if (already_hooked_up) {\n      struct sigaction sa;\n      // Restore the default handler\n      sa.sa_handler = SIG_DFL;\n      // Restart the system call, if at all possible\n      sa.sa_flags = SA_RESTART;\n      // Block every signal during the handler\n      sigfillset(&sa.sa_mask);\n      // Intercept SIGHUP and SIGINT\n      if (sigaction(SIGHUP, &sa, NULL) == -1) {\n        LOG(FATAL) << \"Cannot uninstall SIGHUP handler.\";\n      }\n      if (sigaction(SIGINT, &sa, NULL) == -1) {\n        LOG(FATAL) << \"Cannot uninstall SIGINT handler.\";\n      }\n\n      already_hooked_up = false;\n    }\n  }\n\n  // Return true iff a SIGINT has been received since the last time this\n  // function was called.\n  bool 
GotSIGINT() {\n    bool result = got_sigint;\n    got_sigint = false;\n    return result;\n  }\n\n  // Return true iff a SIGHUP has been received since the last time this\n  // function was called.\n  bool GotSIGHUP() {\n    bool result = got_sighup;\n    got_sighup = false;\n    return result;\n  }\n}  // namespace\n\nnamespace caffe {\n\nSignalHandler::SignalHandler(SolverAction::Enum SIGINT_action,\n                             SolverAction::Enum SIGHUP_action):\n  SIGINT_action_(SIGINT_action),\n  SIGHUP_action_(SIGHUP_action) {\n  HookupHandler();\n}\n\nSignalHandler::~SignalHandler() {\n  UnhookHandler();\n}\n\nSolverAction::Enum SignalHandler::CheckForSignals() const {\n  if (GotSIGHUP()) {\n    return SIGHUP_action_;\n  }\n  if (GotSIGINT()) {\n    return SIGINT_action_;\n  }\n  return SolverAction::NONE;\n}\n\n// Return the function that the solver can use to find out if a snapshot or\n// early exit is being requested.\nActionCallback SignalHandler::GetActionFunction() {\n  return boost::bind(&SignalHandler::CheckForSignals, this);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/caffe/util/upgrade_proto.cpp",
    "content": "#include <google/protobuf/io/coded_stream.h>\n#include <google/protobuf/io/zero_copy_stream_impl.h>\n#include <google/protobuf/text_format.h>\n\n#include <map>\n#include <string>\n\n#include \"caffe/common.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nnamespace caffe {\n\nbool NetNeedsUpgrade(const NetParameter& net_param) {\n  return NetNeedsV0ToV1Upgrade(net_param) || NetNeedsV1ToV2Upgrade(net_param);\n}\n\nbool UpgradeNetAsNeeded(const string& param_file, NetParameter* param) {\n  bool success = true;\n  if (NetNeedsV0ToV1Upgrade(*param)) {\n    // NetParameter was specified using the old style (V0LayerParameter); try to\n    // upgrade it.\n    LOG(INFO) << \"Attempting to upgrade input file specified using deprecated \"\n              << \"V0LayerParameter: \" << param_file;\n    NetParameter original_param(*param);\n    if (!UpgradeV0Net(original_param, param)) {\n      success = false;\n      LOG(ERROR) << \"Warning: had one or more problems upgrading \"\n          << \"V0NetParameter to NetParameter (see above); continuing anyway.\";\n    } else {\n      LOG(INFO) << \"Successfully upgraded file specified using deprecated \"\n                << \"V0LayerParameter\";\n    }\n    LOG(WARNING) << \"Note that future Caffe releases will not support \"\n        << \"V0NetParameter; use ./build/tools/upgrade_net_proto_text for \"\n        << \"prototxt and ./build/tools/upgrade_net_proto_binary for model \"\n        << \"weights upgrade this and any other net protos to the new format.\";\n  }\n  // NetParameter uses old style data transformation fields; try to upgrade it.\n  if (NetNeedsDataUpgrade(*param)) {\n    LOG(INFO) << \"Attempting to upgrade input file specified using deprecated \"\n              << \"transformation parameters: \" << param_file;\n    UpgradeNetDataTransformation(param);\n    LOG(INFO) << \"Successfully upgraded file specified using deprecated \"\n   
           << \"data transformation parameters.\";\n    LOG(WARNING) << \"Note that future Caffe releases will only support \"\n                 << \"transform_param messages for transformation fields.\";\n  }\n  if (NetNeedsV1ToV2Upgrade(*param)) {\n    LOG(INFO) << \"Attempting to upgrade input file specified using deprecated \"\n              << \"V1LayerParameter: \" << param_file;\n    NetParameter original_param(*param);\n    if (!UpgradeV1Net(original_param, param)) {\n      success = false;\n      LOG(ERROR) << \"Warning: had one or more problems upgrading \"\n                 << \"V1LayerParameter (see above); continuing anyway.\";\n    } else {\n      LOG(INFO) << \"Successfully upgraded file specified using deprecated \"\n                << \"V1LayerParameter\";\n    }\n  }\n  return success;\n}\n\nvoid ReadNetParamsFromTextFileOrDie(const string& param_file,\n                                    NetParameter* param) {\n  CHECK(ReadProtoFromTextFile(param_file, param))\n      << \"Failed to parse NetParameter file: \" << param_file;\n  UpgradeNetAsNeeded(param_file, param);\n}\n\nvoid ReadNetParamsFromBinaryFileOrDie(const string& param_file,\n                                      NetParameter* param) {\n  CHECK(ReadProtoFromBinaryFile(param_file, param))\n      << \"Failed to parse NetParameter file: \" << param_file;\n  UpgradeNetAsNeeded(param_file, param);\n}\n\nbool NetNeedsV0ToV1Upgrade(const NetParameter& net_param) {\n  for (int i = 0; i < net_param.layers_size(); ++i) {\n    if (net_param.layers(i).has_layer()) {\n      return true;\n    }\n  }\n  return false;\n}\n\nbool NetNeedsV1ToV2Upgrade(const NetParameter& net_param) {\n  return net_param.layers_size() > 0;\n}\n\nbool UpgradeV0Net(const NetParameter& v0_net_param_padding_layers,\n                  NetParameter* net_param) {\n  // First upgrade padding layers to padded conv layers.\n  NetParameter v0_net_param;\n  UpgradeV0PaddingLayers(v0_net_param_padding_layers, &v0_net_param);\n  // Now 
upgrade layer parameters.\n  bool is_fully_compatible = true;\n  net_param->Clear();\n  if (v0_net_param.has_name()) {\n    net_param->set_name(v0_net_param.name());\n  }\n  for (int i = 0; i < v0_net_param.layers_size(); ++i) {\n    is_fully_compatible &= UpgradeV0LayerParameter(v0_net_param.layers(i),\n                                                   net_param->add_layers());\n  }\n  for (int i = 0; i < v0_net_param.input_size(); ++i) {\n    net_param->add_input(v0_net_param.input(i));\n  }\n  for (int i = 0; i < v0_net_param.input_dim_size(); ++i) {\n    net_param->add_input_dim(v0_net_param.input_dim(i));\n  }\n  if (v0_net_param.has_force_backward()) {\n    net_param->set_force_backward(v0_net_param.force_backward());\n  }\n  return is_fully_compatible;\n}\n\nvoid UpgradeV0PaddingLayers(const NetParameter& param,\n                            NetParameter* param_upgraded_pad) {\n  // Copy everything other than the layers from the original param.\n  param_upgraded_pad->Clear();\n  param_upgraded_pad->CopyFrom(param);\n  param_upgraded_pad->clear_layers();\n  // Figure out which layer each bottom blob comes from.\n  map<string, int> blob_name_to_last_top_idx;\n  for (int i = 0; i < param.input_size(); ++i) {\n    const string& blob_name = param.input(i);\n    blob_name_to_last_top_idx[blob_name] = -1;\n  }\n  for (int i = 0; i < param.layers_size(); ++i) {\n    const V1LayerParameter& layer_connection = param.layers(i);\n    const V0LayerParameter& layer_param = layer_connection.layer();\n    // Add the layer to the new net, unless it's a padding layer.\n    if (layer_param.type() != \"padding\") {\n      param_upgraded_pad->add_layers()->CopyFrom(layer_connection);\n    }\n    for (int j = 0; j < layer_connection.bottom_size(); ++j) {\n      const string& blob_name = layer_connection.bottom(j);\n      if (blob_name_to_last_top_idx.find(blob_name) ==\n          blob_name_to_last_top_idx.end()) {\n        LOG(FATAL) << \"Unknown blob input \" << blob_name << \" 
to layer \" << j;\n      }\n      const int top_idx = blob_name_to_last_top_idx[blob_name];\n      if (top_idx == -1) {\n        continue;\n      }\n      const V1LayerParameter& source_layer = param.layers(top_idx);\n      if (source_layer.layer().type() == \"padding\") {\n        // This layer has a padding layer as input -- check that it is a conv\n        // layer or a pooling layer and takes only one input.  Also check that\n        // the padding layer input has only one input and one output.  Other\n        // cases have undefined behavior in Caffe.\n        CHECK((layer_param.type() == \"conv\") || (layer_param.type() == \"pool\"))\n            << \"Padding layer input to \"\n            \"non-convolutional / non-pooling layer type \"\n            << layer_param.type();\n        CHECK_EQ(layer_connection.bottom_size(), 1)\n            << \"Conv Layer takes a single blob as input.\";\n        CHECK_EQ(source_layer.bottom_size(), 1)\n            << \"Padding Layer takes a single blob as input.\";\n        CHECK_EQ(source_layer.top_size(), 1)\n            << \"Padding Layer produces a single blob as output.\";\n        int layer_index = param_upgraded_pad->layers_size() - 1;\n        param_upgraded_pad->mutable_layers(layer_index)->mutable_layer()\n            ->set_pad(source_layer.layer().pad());\n        param_upgraded_pad->mutable_layers(layer_index)\n            ->set_bottom(j, source_layer.bottom(0));\n      }\n    }\n    for (int j = 0; j < layer_connection.top_size(); ++j) {\n      const string& blob_name = layer_connection.top(j);\n      blob_name_to_last_top_idx[blob_name] = i;\n    }\n  }\n}\n\nbool UpgradeV0LayerParameter(const V1LayerParameter& v0_layer_connection,\n                             V1LayerParameter* layer_param) {\n  bool is_fully_compatible = true;\n  layer_param->Clear();\n  for (int i = 0; i < v0_layer_connection.bottom_size(); ++i) {\n    layer_param->add_bottom(v0_layer_connection.bottom(i));\n  }\n  for (int i = 0; i < 
v0_layer_connection.top_size(); ++i) {\n    layer_param->add_top(v0_layer_connection.top(i));\n  }\n  if (v0_layer_connection.has_layer()) {\n    const V0LayerParameter& v0_layer_param = v0_layer_connection.layer();\n    if (v0_layer_param.has_name()) {\n      layer_param->set_name(v0_layer_param.name());\n    }\n    const string& type = v0_layer_param.type();\n    if (v0_layer_param.has_type()) {\n      layer_param->set_type(UpgradeV0LayerType(type));\n    }\n    for (int i = 0; i < v0_layer_param.blobs_size(); ++i) {\n      layer_param->add_blobs()->CopyFrom(v0_layer_param.blobs(i));\n    }\n    for (int i = 0; i < v0_layer_param.blobs_lr_size(); ++i) {\n      layer_param->add_blobs_lr(v0_layer_param.blobs_lr(i));\n    }\n    for (int i = 0; i < v0_layer_param.weight_decay_size(); ++i) {\n      layer_param->add_weight_decay(v0_layer_param.weight_decay(i));\n    }\n    if (v0_layer_param.has_num_output()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->set_num_output(\n            v0_layer_param.num_output());\n      } else if (type == \"innerproduct\") {\n        layer_param->mutable_inner_product_param()->set_num_output(\n            v0_layer_param.num_output());\n      } else {\n        LOG(ERROR) << \"Unknown parameter num_output for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_biasterm()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->set_bias_term(\n            v0_layer_param.biasterm());\n      } else if (type == \"innerproduct\") {\n        layer_param->mutable_inner_product_param()->set_bias_term(\n            v0_layer_param.biasterm());\n      } else {\n        LOG(ERROR) << \"Unknown parameter biasterm for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_weight_filler()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->\n            
mutable_weight_filler()->CopyFrom(v0_layer_param.weight_filler());\n      } else if (type == \"innerproduct\") {\n        layer_param->mutable_inner_product_param()->\n            mutable_weight_filler()->CopyFrom(v0_layer_param.weight_filler());\n      } else {\n        LOG(ERROR) << \"Unknown parameter weight_filler for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_bias_filler()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->\n            mutable_bias_filler()->CopyFrom(v0_layer_param.bias_filler());\n      } else if (type == \"innerproduct\") {\n        layer_param->mutable_inner_product_param()->\n            mutable_bias_filler()->CopyFrom(v0_layer_param.bias_filler());\n      } else {\n        LOG(ERROR) << \"Unknown parameter bias_filler for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_pad()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->add_pad(v0_layer_param.pad());\n      } else if (type == \"pool\") {\n        layer_param->mutable_pooling_param()->set_pad(v0_layer_param.pad());\n      } else {\n        LOG(ERROR) << \"Unknown parameter pad for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_kernelsize()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->add_kernel_size(\n            v0_layer_param.kernelsize());\n      } else if (type == \"pool\") {\n        layer_param->mutable_pooling_param()->set_kernel_size(\n            v0_layer_param.kernelsize());\n      } else {\n        LOG(ERROR) << \"Unknown parameter kernelsize for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_group()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->set_group(\n            v0_layer_param.group());\n    
  } else {\n        LOG(ERROR) << \"Unknown parameter group for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_stride()) {\n      if (type == \"conv\") {\n        layer_param->mutable_convolution_param()->add_stride(\n            v0_layer_param.stride());\n      } else if (type == \"pool\") {\n        layer_param->mutable_pooling_param()->set_stride(\n            v0_layer_param.stride());\n      } else {\n        LOG(ERROR) << \"Unknown parameter stride for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_pool()) {\n      if (type == \"pool\") {\n        V0LayerParameter_PoolMethod pool = v0_layer_param.pool();\n        switch (pool) {\n        case V0LayerParameter_PoolMethod_MAX:\n          layer_param->mutable_pooling_param()->set_pool(\n              PoolingParameter_PoolMethod_MAX);\n          break;\n        case V0LayerParameter_PoolMethod_AVE:\n          layer_param->mutable_pooling_param()->set_pool(\n              PoolingParameter_PoolMethod_AVE);\n          break;\n        case V0LayerParameter_PoolMethod_STOCHASTIC:\n          layer_param->mutable_pooling_param()->set_pool(\n              PoolingParameter_PoolMethod_STOCHASTIC);\n          break;\n        default:\n          LOG(ERROR) << \"Unknown pool method \" << pool;\n          is_fully_compatible = false;\n        }\n      } else {\n        LOG(ERROR) << \"Unknown parameter pool for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_dropout_ratio()) {\n      if (type == \"dropout\") {\n        layer_param->mutable_dropout_param()->set_dropout_ratio(\n            v0_layer_param.dropout_ratio());\n      } else {\n        LOG(ERROR) << \"Unknown parameter dropout_ratio for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_local_size()) {\n      if (type == \"lrn\") {\n        
layer_param->mutable_lrn_param()->set_local_size(\n            v0_layer_param.local_size());\n      } else {\n        LOG(ERROR) << \"Unknown parameter local_size for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_alpha()) {\n      if (type == \"lrn\") {\n        layer_param->mutable_lrn_param()->set_alpha(v0_layer_param.alpha());\n      } else {\n        LOG(ERROR) << \"Unknown parameter alpha for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_beta()) {\n      if (type == \"lrn\") {\n        layer_param->mutable_lrn_param()->set_beta(v0_layer_param.beta());\n      } else {\n        LOG(ERROR) << \"Unknown parameter beta for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_k()) {\n      if (type == \"lrn\") {\n        layer_param->mutable_lrn_param()->set_k(v0_layer_param.k());\n      } else {\n        LOG(ERROR) << \"Unknown parameter k for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_source()) {\n      if (type == \"data\") {\n        layer_param->mutable_data_param()->set_source(v0_layer_param.source());\n      } else if (type == \"hdf5_data\") {\n        layer_param->mutable_hdf5_data_param()->set_source(\n            v0_layer_param.source());\n      } else if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_source(\n            v0_layer_param.source());\n      } else if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_source(\n            v0_layer_param.source());\n      } else if (type == \"infogain_loss\") {\n        layer_param->mutable_infogain_loss_param()->set_source(\n            v0_layer_param.source());\n      } else {\n        LOG(ERROR) << \"Unknown parameter source for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if 
(v0_layer_param.has_scale()) {\n      layer_param->mutable_transform_param()->\n          set_scale(v0_layer_param.scale());\n    }\n    if (v0_layer_param.has_meanfile()) {\n      layer_param->mutable_transform_param()->\n          set_mean_file(v0_layer_param.meanfile());\n    }\n    if (v0_layer_param.has_batchsize()) {\n      if (type == \"data\") {\n        layer_param->mutable_data_param()->set_batch_size(\n            v0_layer_param.batchsize());\n      } else if (type == \"hdf5_data\") {\n        layer_param->mutable_hdf5_data_param()->set_batch_size(\n            v0_layer_param.batchsize());\n      } else if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_batch_size(\n            v0_layer_param.batchsize());\n      } else if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_batch_size(\n            v0_layer_param.batchsize());\n      } else {\n        LOG(ERROR) << \"Unknown parameter batchsize for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_cropsize()) {\n      layer_param->mutable_transform_param()->\n          set_crop_size(v0_layer_param.cropsize());\n    }\n    if (v0_layer_param.has_mirror()) {\n      layer_param->mutable_transform_param()->\n          set_mirror(v0_layer_param.mirror());\n    }\n    if (v0_layer_param.has_rand_skip()) {\n      if (type == \"data\") {\n        layer_param->mutable_data_param()->set_rand_skip(\n            v0_layer_param.rand_skip());\n      } else if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_rand_skip(\n            v0_layer_param.rand_skip());\n      } else {\n        LOG(ERROR) << \"Unknown parameter rand_skip for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_shuffle_images()) {\n      if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_shuffle(\n            
v0_layer_param.shuffle_images());\n      } else {\n        LOG(ERROR) << \"Unknown parameter shuffle for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_new_height()) {\n      if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_new_height(\n            v0_layer_param.new_height());\n      } else {\n        LOG(ERROR) << \"Unknown parameter new_height for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_new_width()) {\n      if (type == \"images\") {\n        layer_param->mutable_image_data_param()->set_new_width(\n            v0_layer_param.new_width());\n      } else {\n        LOG(ERROR) << \"Unknown parameter new_width for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_concat_dim()) {\n      if (type == \"concat\") {\n        layer_param->mutable_concat_param()->set_concat_dim(\n            v0_layer_param.concat_dim());\n      } else {\n        LOG(ERROR) << \"Unknown parameter concat_dim for layer type \" << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_det_fg_threshold()) {\n      if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_fg_threshold(\n            v0_layer_param.det_fg_threshold());\n      } else {\n        LOG(ERROR) << \"Unknown parameter det_fg_threshold for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_det_bg_threshold()) {\n      if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_bg_threshold(\n            v0_layer_param.det_bg_threshold());\n      } else {\n        LOG(ERROR) << \"Unknown parameter det_bg_threshold for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_det_fg_fraction()) 
{\n      if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_fg_fraction(\n            v0_layer_param.det_fg_fraction());\n      } else {\n        LOG(ERROR) << \"Unknown parameter det_fg_fraction for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_det_context_pad()) {\n      if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_context_pad(\n            v0_layer_param.det_context_pad());\n      } else {\n        LOG(ERROR) << \"Unknown parameter det_context_pad for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_det_crop_mode()) {\n      if (type == \"window_data\") {\n        layer_param->mutable_window_data_param()->set_crop_mode(\n            v0_layer_param.det_crop_mode());\n      } else {\n        LOG(ERROR) << \"Unknown parameter det_crop_mode for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n    if (v0_layer_param.has_hdf5_output_param()) {\n      if (type == \"hdf5_output\") {\n        layer_param->mutable_hdf5_output_param()->CopyFrom(\n            v0_layer_param.hdf5_output_param());\n      } else {\n        LOG(ERROR) << \"Unknown parameter hdf5_output_param for layer type \"\n                   << type;\n        is_fully_compatible = false;\n      }\n    }\n  }\n  return is_fully_compatible;\n}\n\nV1LayerParameter_LayerType UpgradeV0LayerType(const string& type) {\n  if (type == \"accuracy\") {\n    return V1LayerParameter_LayerType_ACCURACY;\n  } else if (type == \"bnll\") {\n    return V1LayerParameter_LayerType_BNLL;\n  } else if (type == \"concat\") {\n    return V1LayerParameter_LayerType_CONCAT;\n  } else if (type == \"conv\") {\n    return V1LayerParameter_LayerType_CONVOLUTION;\n  } else if (type == \"data\") {\n    return V1LayerParameter_LayerType_DATA;\n  } else if (type == 
\"dropout\") {\n    return V1LayerParameter_LayerType_DROPOUT;\n  } else if (type == \"euclidean_loss\") {\n    return V1LayerParameter_LayerType_EUCLIDEAN_LOSS;\n  } else if (type == \"flatten\") {\n    return V1LayerParameter_LayerType_FLATTEN;\n  } else if (type == \"hdf5_data\") {\n    return V1LayerParameter_LayerType_HDF5_DATA;\n  } else if (type == \"hdf5_output\") {\n    return V1LayerParameter_LayerType_HDF5_OUTPUT;\n  } else if (type == \"im2col\") {\n    return V1LayerParameter_LayerType_IM2COL;\n  } else if (type == \"images\") {\n    return V1LayerParameter_LayerType_IMAGE_DATA;\n  } else if (type == \"infogain_loss\") {\n    return V1LayerParameter_LayerType_INFOGAIN_LOSS;\n  } else if (type == \"innerproduct\") {\n    return V1LayerParameter_LayerType_INNER_PRODUCT;\n  } else if (type == \"lrn\") {\n    return V1LayerParameter_LayerType_LRN;\n  } else if (type == \"multinomial_logistic_loss\") {\n    return V1LayerParameter_LayerType_MULTINOMIAL_LOGISTIC_LOSS;\n  } else if (type == \"pool\") {\n    return V1LayerParameter_LayerType_POOLING;\n  } else if (type == \"relu\") {\n    return V1LayerParameter_LayerType_RELU;\n  } else if (type == \"sigmoid\") {\n    return V1LayerParameter_LayerType_SIGMOID;\n  } else if (type == \"softmax\") {\n    return V1LayerParameter_LayerType_SOFTMAX;\n  } else if (type == \"softmax_loss\") {\n    return V1LayerParameter_LayerType_SOFTMAX_LOSS;\n  } else if (type == \"split\") {\n    return V1LayerParameter_LayerType_SPLIT;\n  } else if (type == \"tanh\") {\n    return V1LayerParameter_LayerType_TANH;\n  } else if (type == \"window_data\") {\n    return V1LayerParameter_LayerType_WINDOW_DATA;\n  } else {\n    LOG(FATAL) << \"Unknown layer name: \" << type;\n    return V1LayerParameter_LayerType_NONE;\n  }\n}\n\nbool NetNeedsDataUpgrade(const NetParameter& net_param) {\n  for (int i = 0; i < net_param.layers_size(); ++i) {\n    if (net_param.layers(i).type() == V1LayerParameter_LayerType_DATA) {\n      DataParameter 
layer_param = net_param.layers(i).data_param();\n      if (layer_param.has_scale()) { return true; }\n      if (layer_param.has_mean_file()) { return true; }\n      if (layer_param.has_crop_size()) { return true; }\n      if (layer_param.has_mirror()) { return true; }\n    }\n    if (net_param.layers(i).type() == V1LayerParameter_LayerType_IMAGE_DATA) {\n      ImageDataParameter layer_param = net_param.layers(i).image_data_param();\n      if (layer_param.has_scale()) { return true; }\n      if (layer_param.has_mean_file()) { return true; }\n      if (layer_param.has_crop_size()) { return true; }\n      if (layer_param.has_mirror()) { return true; }\n    }\n    if (net_param.layers(i).type() == V1LayerParameter_LayerType_WINDOW_DATA) {\n      WindowDataParameter layer_param = net_param.layers(i).window_data_param();\n      if (layer_param.has_scale()) { return true; }\n      if (layer_param.has_mean_file()) { return true; }\n      if (layer_param.has_crop_size()) { return true; }\n      if (layer_param.has_mirror()) { return true; }\n    }\n  }\n  return false;\n}\n\n#define CONVERT_LAYER_TRANSFORM_PARAM(TYPE, Name, param_name) \\\n  do { \\\n    if (net_param->layers(i).type() == V1LayerParameter_LayerType_##TYPE) { \\\n      Name##Parameter* layer_param = \\\n          net_param->mutable_layers(i)->mutable_##param_name##_param(); \\\n      TransformationParameter* transform_param = \\\n          net_param->mutable_layers(i)->mutable_transform_param(); \\\n      if (layer_param->has_scale()) { \\\n        transform_param->set_scale(layer_param->scale()); \\\n        layer_param->clear_scale(); \\\n      } \\\n      if (layer_param->has_mean_file()) { \\\n        transform_param->set_mean_file(layer_param->mean_file()); \\\n        layer_param->clear_mean_file(); \\\n      } \\\n      if (layer_param->has_crop_size()) { \\\n        transform_param->set_crop_size(layer_param->crop_size()); \\\n        layer_param->clear_crop_size(); \\\n      } \\\n      if 
(layer_param->has_mirror()) { \\\n        transform_param->set_mirror(layer_param->mirror()); \\\n        layer_param->clear_mirror(); \\\n      } \\\n    } \\\n  } while (0)\n\nvoid UpgradeNetDataTransformation(NetParameter* net_param) {\n  for (int i = 0; i < net_param->layers_size(); ++i) {\n    CONVERT_LAYER_TRANSFORM_PARAM(DATA, Data, data);\n    CONVERT_LAYER_TRANSFORM_PARAM(IMAGE_DATA, ImageData, image_data);\n    CONVERT_LAYER_TRANSFORM_PARAM(WINDOW_DATA, WindowData, window_data);\n  }\n}\n\nbool UpgradeV1Net(const NetParameter& v1_net_param, NetParameter* net_param) {\n  bool is_fully_compatible = true;\n  if (v1_net_param.layer_size() > 0) {\n    LOG(ERROR) << \"Input NetParameter to be upgraded already specifies 'layer' \"\n               << \"fields; these will be ignored for the upgrade.\";\n    is_fully_compatible = false;\n  }\n  net_param->CopyFrom(v1_net_param);\n  net_param->clear_layers();\n  net_param->clear_layer();\n  for (int i = 0; i < v1_net_param.layers_size(); ++i) {\n    if (!UpgradeV1LayerParameter(v1_net_param.layers(i),\n                                 net_param->add_layer())) {\n      LOG(ERROR) << \"Upgrade of input layer \" << i << \" failed.\";\n      is_fully_compatible = false;\n    }\n  }\n  return is_fully_compatible;\n}\n\nbool UpgradeV1LayerParameter(const V1LayerParameter& v1_layer_param,\n                             LayerParameter* layer_param) {\n  layer_param->Clear();\n  bool is_fully_compatible = true;\n  for (int i = 0; i < v1_layer_param.bottom_size(); ++i) {\n    layer_param->add_bottom(v1_layer_param.bottom(i));\n  }\n  for (int i = 0; i < v1_layer_param.top_size(); ++i) {\n    layer_param->add_top(v1_layer_param.top(i));\n  }\n  if (v1_layer_param.has_name()) {\n    layer_param->set_name(v1_layer_param.name());\n  }\n  for (int i = 0; i < v1_layer_param.include_size(); ++i) {\n    layer_param->add_include()->CopyFrom(v1_layer_param.include(i));\n  }\n  for (int i = 0; i < v1_layer_param.exclude_size(); ++i) {\n  
  layer_param->add_exclude()->CopyFrom(v1_layer_param.exclude(i));\n  }\n  if (v1_layer_param.has_type()) {\n    layer_param->set_type(UpgradeV1LayerType(v1_layer_param.type()));\n  }\n  for (int i = 0; i < v1_layer_param.blobs_size(); ++i) {\n    layer_param->add_blobs()->CopyFrom(v1_layer_param.blobs(i));\n  }\n  for (int i = 0; i < v1_layer_param.param_size(); ++i) {\n    while (layer_param->param_size() <= i) { layer_param->add_param(); }\n    layer_param->mutable_param(i)->set_name(v1_layer_param.param(i));\n  }\n  ParamSpec_DimCheckMode mode;\n  for (int i = 0; i < v1_layer_param.blob_share_mode_size(); ++i) {\n    while (layer_param->param_size() <= i) { layer_param->add_param(); }\n    switch (v1_layer_param.blob_share_mode(i)) {\n    case V1LayerParameter_DimCheckMode_STRICT:\n      mode = ParamSpec_DimCheckMode_STRICT;\n      break;\n    case V1LayerParameter_DimCheckMode_PERMISSIVE:\n      mode = ParamSpec_DimCheckMode_PERMISSIVE;\n      break;\n    default:\n      LOG(FATAL) << \"Unknown blob_share_mode: \"\n                 << v1_layer_param.blob_share_mode(i);\n      break;\n    }\n    layer_param->mutable_param(i)->set_share_mode(mode);\n  }\n  for (int i = 0; i < v1_layer_param.blobs_lr_size(); ++i) {\n    while (layer_param->param_size() <= i) { layer_param->add_param(); }\n    layer_param->mutable_param(i)->set_lr_mult(v1_layer_param.blobs_lr(i));\n  }\n  for (int i = 0; i < v1_layer_param.weight_decay_size(); ++i) {\n    while (layer_param->param_size() <= i) { layer_param->add_param(); }\n    layer_param->mutable_param(i)->set_decay_mult(\n        v1_layer_param.weight_decay(i));\n  }\n  for (int i = 0; i < v1_layer_param.loss_weight_size(); ++i) {\n    layer_param->add_loss_weight(v1_layer_param.loss_weight(i));\n  }\n  if (v1_layer_param.has_accuracy_param()) {\n    layer_param->mutable_accuracy_param()->CopyFrom(\n        v1_layer_param.accuracy_param());\n  }\n  if (v1_layer_param.has_argmax_param()) {\n    
layer_param->mutable_argmax_param()->CopyFrom(\n        v1_layer_param.argmax_param());\n  }\n  if (v1_layer_param.has_concat_param()) {\n    layer_param->mutable_concat_param()->CopyFrom(\n        v1_layer_param.concat_param());\n  }\n  if (v1_layer_param.has_contrastive_loss_param()) {\n    layer_param->mutable_contrastive_loss_param()->CopyFrom(\n        v1_layer_param.contrastive_loss_param());\n  }\n  if (v1_layer_param.has_convolution_param()) {\n    layer_param->mutable_convolution_param()->CopyFrom(\n        v1_layer_param.convolution_param());\n  }\n  if (v1_layer_param.has_data_param()) {\n    layer_param->mutable_data_param()->CopyFrom(\n        v1_layer_param.data_param());\n  }\n  if (v1_layer_param.has_dropout_param()) {\n    layer_param->mutable_dropout_param()->CopyFrom(\n        v1_layer_param.dropout_param());\n  }\n  if (v1_layer_param.has_dummy_data_param()) {\n    layer_param->mutable_dummy_data_param()->CopyFrom(\n        v1_layer_param.dummy_data_param());\n  }\n  if (v1_layer_param.has_eltwise_param()) {\n    layer_param->mutable_eltwise_param()->CopyFrom(\n        v1_layer_param.eltwise_param());\n  }\n  if (v1_layer_param.has_exp_param()) {\n    layer_param->mutable_exp_param()->CopyFrom(\n        v1_layer_param.exp_param());\n  }\n  if (v1_layer_param.has_hdf5_data_param()) {\n    layer_param->mutable_hdf5_data_param()->CopyFrom(\n        v1_layer_param.hdf5_data_param());\n  }\n  if (v1_layer_param.has_hdf5_output_param()) {\n    layer_param->mutable_hdf5_output_param()->CopyFrom(\n        v1_layer_param.hdf5_output_param());\n  }\n  if (v1_layer_param.has_hinge_loss_param()) {\n    layer_param->mutable_hinge_loss_param()->CopyFrom(\n        v1_layer_param.hinge_loss_param());\n  }\n  if (v1_layer_param.has_image_data_param()) {\n    layer_param->mutable_image_data_param()->CopyFrom(\n        v1_layer_param.image_data_param());\n  }\n  if (v1_layer_param.has_infogain_loss_param()) {\n    
layer_param->mutable_infogain_loss_param()->CopyFrom(\n        v1_layer_param.infogain_loss_param());\n  }\n  if (v1_layer_param.has_inner_product_param()) {\n    layer_param->mutable_inner_product_param()->CopyFrom(\n        v1_layer_param.inner_product_param());\n  }\n  if (v1_layer_param.has_lrn_param()) {\n    layer_param->mutable_lrn_param()->CopyFrom(\n        v1_layer_param.lrn_param());\n  }\n  if (v1_layer_param.has_memory_data_param()) {\n    layer_param->mutable_memory_data_param()->CopyFrom(\n        v1_layer_param.memory_data_param());\n  }\n  if (v1_layer_param.has_mvn_param()) {\n    layer_param->mutable_mvn_param()->CopyFrom(\n        v1_layer_param.mvn_param());\n  }\n  if (v1_layer_param.has_pooling_param()) {\n    layer_param->mutable_pooling_param()->CopyFrom(\n        v1_layer_param.pooling_param());\n  }\n  if (v1_layer_param.has_power_param()) {\n    layer_param->mutable_power_param()->CopyFrom(\n        v1_layer_param.power_param());\n  }\n  if (v1_layer_param.has_relu_param()) {\n    layer_param->mutable_relu_param()->CopyFrom(\n        v1_layer_param.relu_param());\n  }\n  if (v1_layer_param.has_sigmoid_param()) {\n    layer_param->mutable_sigmoid_param()->CopyFrom(\n        v1_layer_param.sigmoid_param());\n  }\n  if (v1_layer_param.has_softmax_param()) {\n    layer_param->mutable_softmax_param()->CopyFrom(\n        v1_layer_param.softmax_param());\n  }\n  if (v1_layer_param.has_slice_param()) {\n    layer_param->mutable_slice_param()->CopyFrom(\n        v1_layer_param.slice_param());\n  }\n  if (v1_layer_param.has_tanh_param()) {\n    layer_param->mutable_tanh_param()->CopyFrom(\n        v1_layer_param.tanh_param());\n  }\n  if (v1_layer_param.has_threshold_param()) {\n    layer_param->mutable_threshold_param()->CopyFrom(\n        v1_layer_param.threshold_param());\n  }\n  if (v1_layer_param.has_window_data_param()) {\n    layer_param->mutable_window_data_param()->CopyFrom(\n        v1_layer_param.window_data_param());\n  }\n  if 
(v1_layer_param.has_transform_param()) {\n    layer_param->mutable_transform_param()->CopyFrom(\n        v1_layer_param.transform_param());\n  }\n  if (v1_layer_param.has_loss_param()) {\n    layer_param->mutable_loss_param()->CopyFrom(\n        v1_layer_param.loss_param());\n  }\n  if (v1_layer_param.has_layer()) {\n    LOG(ERROR) << \"Input NetParameter has V0 layer -- ignoring.\";\n    is_fully_compatible = false;\n  }\n  return is_fully_compatible;\n}\n\nconst char* UpgradeV1LayerType(const V1LayerParameter_LayerType type) {\n  switch (type) {\n  case V1LayerParameter_LayerType_NONE:\n    return \"\";\n  case V1LayerParameter_LayerType_ABSVAL:\n    return \"AbsVal\";\n  case V1LayerParameter_LayerType_ACCURACY:\n    return \"Accuracy\";\n  case V1LayerParameter_LayerType_ARGMAX:\n    return \"ArgMax\";\n  case V1LayerParameter_LayerType_BNLL:\n    return \"BNLL\";\n  case V1LayerParameter_LayerType_CONCAT:\n    return \"Concat\";\n  case V1LayerParameter_LayerType_CONTRASTIVE_LOSS:\n    return \"ContrastiveLoss\";\n  case V1LayerParameter_LayerType_CONVOLUTION:\n    return \"Convolution\";\n  case V1LayerParameter_LayerType_DECONVOLUTION:\n    return \"Deconvolution\";\n  case V1LayerParameter_LayerType_DATA:\n    return \"Data\";\n  case V1LayerParameter_LayerType_DROPOUT:\n    return \"Dropout\";\n  case V1LayerParameter_LayerType_DUMMY_DATA:\n    return \"DummyData\";\n  case V1LayerParameter_LayerType_EUCLIDEAN_LOSS:\n    return \"EuclideanLoss\";\n  case V1LayerParameter_LayerType_ELTWISE:\n    return \"Eltwise\";\n  case V1LayerParameter_LayerType_EXP:\n    return \"Exp\";\n  case V1LayerParameter_LayerType_FLATTEN:\n    return \"Flatten\";\n  case V1LayerParameter_LayerType_HDF5_DATA:\n    return \"HDF5Data\";\n  case V1LayerParameter_LayerType_HDF5_OUTPUT:\n    return \"HDF5Output\";\n  case V1LayerParameter_LayerType_HINGE_LOSS:\n    return \"HingeLoss\";\n  case V1LayerParameter_LayerType_IM2COL:\n    return \"Im2col\";\n  case 
V1LayerParameter_LayerType_IMAGE_DATA:\n    return \"ImageData\";\n  case V1LayerParameter_LayerType_INFOGAIN_LOSS:\n    return \"InfogainLoss\";\n  case V1LayerParameter_LayerType_INNER_PRODUCT:\n    return \"InnerProduct\";\n  case V1LayerParameter_LayerType_LRN:\n    return \"LRN\";\n  case V1LayerParameter_LayerType_MEMORY_DATA:\n    return \"MemoryData\";\n  case V1LayerParameter_LayerType_MULTINOMIAL_LOGISTIC_LOSS:\n    return \"MultinomialLogisticLoss\";\n  case V1LayerParameter_LayerType_MVN:\n    return \"MVN\";\n  case V1LayerParameter_LayerType_POOLING:\n    return \"Pooling\";\n  case V1LayerParameter_LayerType_POWER:\n    return \"Power\";\n  case V1LayerParameter_LayerType_RELU:\n    return \"ReLU\";\n  case V1LayerParameter_LayerType_SIGMOID:\n    return \"Sigmoid\";\n  case V1LayerParameter_LayerType_SIGMOID_CROSS_ENTROPY_LOSS:\n    return \"SigmoidCrossEntropyLoss\";\n  case V1LayerParameter_LayerType_SILENCE:\n    return \"Silence\";\n  case V1LayerParameter_LayerType_SOFTMAX:\n    return \"Softmax\";\n  case V1LayerParameter_LayerType_SOFTMAX_LOSS:\n    return \"SoftmaxWithLoss\";\n  case V1LayerParameter_LayerType_SPLIT:\n    return \"Split\";\n  case V1LayerParameter_LayerType_SLICE:\n    return \"Slice\";\n  case V1LayerParameter_LayerType_TANH:\n    return \"TanH\";\n  case V1LayerParameter_LayerType_WINDOW_DATA:\n    return \"WindowData\";\n  case V1LayerParameter_LayerType_THRESHOLD:\n    return \"Threshold\";\n  default:\n    LOG(FATAL) << \"Unknown V1LayerParameter layer type: \" << type;\n    return \"\";\n  }\n}\n\n// Return true iff the solver contains any old solver_type specified as enums\nbool SolverNeedsTypeUpgrade(const SolverParameter& solver_param) {\n  if (solver_param.has_solver_type()) {\n    return true;\n  }\n  return false;\n}\n\nbool UpgradeSolverType(SolverParameter* solver_param) {\n  CHECK(!solver_param->has_solver_type() || !solver_param->has_type())\n      << \"Failed to upgrade solver: old solver_type field (enum) 
and new type \"\n      << \"field (string) cannot be both specified in solver proto text.\";\n  if (solver_param->has_solver_type()) {\n    string type;\n    switch (solver_param->solver_type()) {\n    case SolverParameter_SolverType_SGD:\n      type = \"SGD\";\n      break;\n    case SolverParameter_SolverType_NESTEROV:\n      type = \"Nesterov\";\n      break;\n    case SolverParameter_SolverType_ADAGRAD:\n      type = \"AdaGrad\";\n      break;\n    case SolverParameter_SolverType_RMSPROP:\n      type = \"RMSProp\";\n      break;\n    case SolverParameter_SolverType_ADADELTA:\n      type = \"AdaDelta\";\n      break;\n    case SolverParameter_SolverType_ADAM:\n      type = \"Adam\";\n      break;\n    default:\n      // 'type' is still empty here, so log the offending enum value itself.\n      LOG(FATAL) << \"Unknown SolverParameter solver_type: \"\n                 << solver_param->solver_type();\n    }\n    solver_param->set_type(type);\n    solver_param->clear_solver_type();\n  } else {\n    LOG(ERROR) << \"Warning: solver type already up to date.\";\n    return false;\n  }\n  return true;\n}\n\n// Check for deprecations and upgrade the SolverParameter as needed.\nbool UpgradeSolverAsNeeded(const string& param_file, SolverParameter* param) {\n  bool success = true;\n  // Try to upgrade old style solver_type enum fields into new string type.\n  if (SolverNeedsTypeUpgrade(*param)) {\n    LOG(INFO) << \"Attempting to upgrade input file specified using deprecated \"\n              << \"'solver_type' field (enum): \" << param_file;\n    if (!UpgradeSolverType(param)) {\n      success = false;\n      LOG(ERROR) << \"Warning: had one or more problems upgrading \"\n                 << \"SolverType (see above).\";\n    } else {\n      LOG(INFO) << \"Successfully upgraded file specified using deprecated \"\n                << \"'solver_type' field (enum) to 'type' field (string).\";\n      LOG(WARNING) << \"Note that future Caffe releases will only support \"\n                   << \"'type' field (string) for a solver's type.\";\n    }\n  }\n  return success;\n}\n\n// Read parameters from a file into a SolverParameter proto message.\nvoid ReadSolverParamsFromTextFileOrDie(const string& param_file,\n                                       SolverParameter* param) {\n  CHECK(ReadProtoFromTextFile(param_file, param))\n      << \"Failed to parse SolverParameter file: \" << param_file;\n  UpgradeSolverAsNeeded(param_file, param);\n}\n\n}  // namespace caffe\n"
  },
  {
    "path": "caffe-fpn/src/gtest/CMakeLists.txt",
    "content": "add_library(gtest STATIC EXCLUDE_FROM_ALL gtest.h gtest-all.cpp)\ncaffe_default_properties(gtest)\n\n#add_library(gtest_main gtest_main.cc)\n#target_link_libraries(gtest_main gtest)\n"
  },
  {
    "path": "caffe-fpn/src/gtest/gtest-all.cpp",
    "content": "// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: mheule@google.com (Markus Heule)
//
// Google C++ Testing Framework (Google Test)
//
// Sometimes it's desirable to build Google Test by compiling a single file.
// This file serves this purpose.

// This line ensures that gtest.h can be compiled on its own, even
// when it's fused.
#include "gtest/gtest.h"

// The following lines pull in the real gtest *.cc files.
// Copyright 2005, Google Inc.  All rights reserved.
//
// Author: wan@google.com (Zhanyong Wan)
//
// The Google C++ Testing Framework (Google Test)

// Copyright 2007, Google Inc.  All rights reserved.
//
// Author: wan@google.com (Zhanyong Wan)
//
// Utilities for testing Google Test itself and code that uses Google Test
// (e.g. frameworks built on top of Google Test).

#ifndef GTEST_INCLUDE_GTEST_GTEST_SPI_H_
#define GTEST_INCLUDE_GTEST_GTEST_SPI_H_


namespace testing {

// This helper class can be used to mock out Google Test failure reporting
// so that we can test Google Test or code that builds on Google Test.
//
// An object of this class appends a TestPartResult object to the
// TestPartResultArray object given in the constructor whenever a Google Test
// failure is reported. It can either intercept only failures that are
// generated in the same thread that created this object or it can intercept
// all generated failures. The scope of this mock object can be controlled with
// the second argument to the two-argument constructor.
class GTEST_API_ ScopedFakeTestPartResultReporter
    : public TestPartResultReporterInterface {
 public:
  // The two possible mocking modes of this object.
  enum InterceptMode {
    INTERCEPT_ONLY_CURRENT_THREAD,  // Intercepts only thread-local failures.
    INTERCEPT_ALL_THREADS           // Intercepts all failures.
  };

  // The c'tor sets this object as the test part result reporter used
  // by Google Test.  The 'result' parameter specifies where to report the
  // results. This reporter will only catch failures generated in the current
  // thread.
  // DEPRECATED
  explicit ScopedFakeTestPartResultReporter(TestPartResultArray* result);

  // Same as above, but you can choose the interception scope of this object.
  ScopedFakeTestPartResultReporter(InterceptMode intercept_mode,
                                   TestPartResultArray* result);

  // The d'tor restores the previous test part result reporter.
  virtual ~ScopedFakeTestPartResultReporter();

  // Appends the TestPartResult object to the TestPartResultArray
  // received in the constructor.
  //
  // This method is from the TestPartResultReporterInterface
  // interface.
  virtual void ReportTestPartResult(const TestPartResult& result);

 private:
  void Init();

  const InterceptMode intercept_mode_;
  TestPartResultReporterInterface* old_reporter_;
  TestPartResultArray* const result_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedFakeTestPartResultReporter);
};

namespace internal {

// A helper class for implementing EXPECT_FATAL_FAILURE() and
// EXPECT_NONFATAL_FAILURE().  Its destructor verifies that the given
// TestPartResultArray contains exactly one failure that has the given
// type and contains the given substring.  If that's not the case, a
// non-fatal failure will be generated.
class GTEST_API_ SingleFailureChecker {
 public:
  // The constructor remembers the arguments.
  SingleFailureChecker(const TestPartResultArray* results,
                       TestPartResult::Type type,
                       const string& substr);
  ~SingleFailureChecker();

 private:
  const TestPartResultArray* const results_;
  const TestPartResult::Type type_;
  const string substr_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(SingleFailureChecker);
};

}  // namespace internal

}  // namespace testing

// A set of macros for testing Google Test assertions or code that's expected
// to generate Google Test fatal failures.
// It verifies that the given
// statement will cause exactly one fatal Google Test failure with 'substr'
// being part of the failure message.
//
// There are two different versions of this macro. EXPECT_FATAL_FAILURE only
// affects and considers failures generated in the current thread and
// EXPECT_FATAL_FAILURE_ON_ALL_THREADS does the same but for all threads.
//
// The verification of the assertion is done correctly even when the statement
// throws an exception or aborts the current function.
//
// Known restrictions:
//   - 'statement' cannot reference local non-static variables or
//     non-static members of the current object.
//   - 'statement' cannot return a value.
//   - You cannot stream a failure message to this macro.
//
// Note that even though the implementations of the following two
// macros are much alike, we cannot refactor them to use a common
// helper macro, due to some peculiarity in how the preprocessor
// works.  The AcceptsMacroThatExpandsToUnprotectedComma test in
// gtest_unittest.cc will fail to compile if we do that.
#define EXPECT_FATAL_FAILURE(statement, substr) \
  do { \
    class GTestExpectFatalFailureHelper {\
     public:\
      static void Execute() { statement; }\
    };\
    ::testing::TestPartResultArray gtest_failures;\
    ::testing::internal::SingleFailureChecker gtest_checker(\
        &gtest_failures, ::testing::TestPartResult::kFatalFailure, (substr));\
    {\
      ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\
          ::testing::ScopedFakeTestPartResultReporter:: \
          INTERCEPT_ONLY_CURRENT_THREAD, &gtest_failures);\
      GTestExpectFatalFailureHelper::Execute();\
    }\
  } while (::testing::internal::AlwaysFalse())

#define EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substr) \
  do { \
    class GTestExpectFatalFailureHelper {\
     public:\
      static void Execute() { statement; }\
    };\
    ::testing::TestPartResultArray gtest_failures;\
    ::testing::internal::SingleFailureChecker gtest_checker(\
        &gtest_failures, ::testing::TestPartResult::kFatalFailure, (substr));\
    {\
      ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\
          ::testing::ScopedFakeTestPartResultReporter:: \
          INTERCEPT_ALL_THREADS, &gtest_failures);\
      GTestExpectFatalFailureHelper::Execute();\
    }\
  } while (::testing::internal::AlwaysFalse())

// A macro for testing Google Test assertions or code that's expected to
// generate Google Test non-fatal failures.  It asserts that the given
// statement will cause exactly one non-fatal Google Test failure with 'substr'
// being part of the failure message.
//
// There are two different versions of this macro. EXPECT_NONFATAL_FAILURE only
// affects and considers failures generated in the current thread and
// EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS does the same but for all threads.
//
// 'statement' is allowed to reference local variables and members of
// the current object.
//
// The verification of the assertion is done correctly even when the statement
// throws an exception or aborts the current function.
//
// Known restrictions:
//   - You cannot stream a failure message to this macro.
//
// Note that even though the implementations of the following two
// macros are much alike, we cannot refactor them to use a common
// helper macro, due to some peculiarity in how the preprocessor
// works.  If we do that, the code won't compile when the user gives
// EXPECT_NONFATAL_FAILURE() a statement that contains a macro that
// expands to code containing an unprotected comma.
// The
// AcceptsMacroThatExpandsToUnprotectedComma test in gtest_unittest.cc
// catches that.
//
// For the same reason, we have to write
//   if (::testing::internal::AlwaysTrue()) { statement; }
// instead of
//   GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement)
// to avoid an MSVC warning on unreachable code.
#define EXPECT_NONFATAL_FAILURE(statement, substr) \
  do {\
    ::testing::TestPartResultArray gtest_failures;\
    ::testing::internal::SingleFailureChecker gtest_checker(\
        &gtest_failures, ::testing::TestPartResult::kNonFatalFailure, \
        (substr));\
    {\
      ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\
          ::testing::ScopedFakeTestPartResultReporter:: \
          INTERCEPT_ONLY_CURRENT_THREAD, &gtest_failures);\
      if (::testing::internal::AlwaysTrue()) { statement; }\
    }\
  } while (::testing::internal::AlwaysFalse())

#define EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substr) \
  do {\
    ::testing::TestPartResultArray gtest_failures;\
    ::testing::internal::SingleFailureChecker gtest_checker(\
        &gtest_failures, ::testing::TestPartResult::kNonFatalFailure, \
        (substr));\
    {\
      ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\
          ::testing::ScopedFakeTestPartResultReporter::INTERCEPT_ALL_THREADS,\
          &gtest_failures);\
      if (::testing::internal::AlwaysTrue()) { statement; }\
    }\
  } while (::testing::internal::AlwaysFalse())

#endif  // GTEST_INCLUDE_GTEST_GTEST_SPI_H_

#include <ctype.h>
#include <math.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>
#include <wctype.h>

#include <algorithm>
#include <ostream>  // NOLINT
#include <sstream>
#include <vector>

#if GTEST_OS_LINUX

// TODO(kenton@google.com): Use autoconf to detect availability of
// gettimeofday().
# define GTEST_HAS_GETTIMEOFDAY_ 1

# include <fcntl.h>  // NOLINT
# include <limits.h>  // NOLINT
# include <sched.h>  // NOLINT
// Declares vsnprintf().  This header is not available on Windows.
# include <strings.h>  // NOLINT
# include <sys/mman.h>  // NOLINT
# include <sys/time.h>  // NOLINT
# include <unistd.h>  // NOLINT
# include <string>

#elif GTEST_OS_SYMBIAN
# define GTEST_HAS_GETTIMEOFDAY_ 1
# include <sys/time.h>  // NOLINT

#elif GTEST_OS_ZOS
# define GTEST_HAS_GETTIMEOFDAY_ 1
# include <sys/time.h>  // NOLINT

// On z/OS we additionally need strings.h for strcasecmp.
# include <strings.h>  // NOLINT

#elif GTEST_OS_WINDOWS_MOBILE  // We are on Windows CE.

# include <windows.h>  // NOLINT

#elif GTEST_OS_WINDOWS  // We are on Windows proper.

# include <io.h>  // NOLINT
# include <sys/timeb.h>  // NOLINT
# include <sys/types.h>  // NOLINT
# include <sys/stat.h>  // NOLINT

# if GTEST_OS_WINDOWS_MINGW
// MinGW has gettimeofday() but not _ftime64().
// TODO(kenton@google.com): Use autoconf to detect availability of
//   gettimeofday().
// TODO(kenton@google.com): There are other ways to get the time on
//   Windows, like GetTickCount() or GetSystemTimeAsFileTime().  MinGW
//   supports these.
//   Consider using them instead.
#  define GTEST_HAS_GETTIMEOFDAY_ 1
#  include <sys/time.h>  // NOLINT
# endif  // GTEST_OS_WINDOWS_MINGW

// cpplint thinks that the header is already included, so we want to
// silence it.
# include <windows.h>  // NOLINT

#else

// Assume other platforms have gettimeofday().
// TODO(kenton@google.com): Use autoconf to detect availability of
//   gettimeofday().
# define GTEST_HAS_GETTIMEOFDAY_ 1

// cpplint thinks that the header is already included, so we want to
// silence it.
# include <sys/time.h>  // NOLINT
# include <unistd.h>  // NOLINT

#endif  // GTEST_OS_LINUX

#if GTEST_HAS_EXCEPTIONS
# include <stdexcept>
#endif

#if GTEST_CAN_STREAM_RESULTS_
# include <arpa/inet.h>  // NOLINT
# include <netdb.h>  // NOLINT
#endif

// Indicates that this translation unit is part of Google Test's
// implementation.  It must come before gtest-internal-inl.h is
// included, or there will be a compiler error.  This trick is to
// prevent a user from accidentally including gtest-internal-inl.h in
// his code.
#define GTEST_IMPLEMENTATION_ 1
// Copyright 2005, Google Inc.  All rights reserved.

// Utility functions and classes used by the Google C++ testing framework.
//
// Author: wan@google.com (Zhanyong Wan)
//
// This file contains purely Google Test's internal implementation.  Please
// DO NOT #INCLUDE IT IN A USER PROGRAM.

#ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_
#define GTEST_SRC_GTEST_INTERNAL_INL_H_

// GTEST_IMPLEMENTATION_ is defined to 1 iff the current translation unit is
// part of Google Test's implementation; otherwise it's undefined.
#if !GTEST_IMPLEMENTATION_
// A user is trying to include this from his code - just say no.
# error "gtest-internal-inl.h is part of Google Test's internal implementation."
# error "It must not be included except by Google Test itself."
#endif  // GTEST_IMPLEMENTATION_

#ifndef _WIN32_WCE
# include <errno.h>
#endif  // !_WIN32_WCE
#include <stddef.h>
#include <stdlib.h>  // For strtoll/_strtoul64/malloc/free.
#include <string.h>  // For memmove.

#include <algorithm>
#include <string>
#include <vector>


#if GTEST_OS_WINDOWS
# include <windows.h>  // NOLINT
#endif  // GTEST_OS_WINDOWS


namespace testing {

// Declares the flags.
//
// We don't want the users to modify this flag in the code, but want
// Google Test's own unit tests to be able to access it. Therefore we
// declare it here as opposed to in gtest.h.
GTEST_DECLARE_bool_(death_test_use_fork);

namespace internal {

// The value of GetTestTypeId() as seen from within the Google Test
// library.
// This is solely for testing GetTestTypeId().
GTEST_API_ extern const TypeId kTestTypeIdInGoogleTest;

// Names of the flags (needed for parsing Google Test flags).
const char kAlsoRunDisabledTestsFlag[] = "also_run_disabled_tests";
const char kBreakOnFailureFlag[] = "break_on_failure";
const char kCatchExceptionsFlag[] = "catch_exceptions";
const char kColorFlag[] = "color";
const char kFilterFlag[] = "filter";
const char kListTestsFlag[] = "list_tests";
const char kOutputFlag[] = "output";
const char kPrintTimeFlag[] = "print_time";
const char kRandomSeedFlag[] = "random_seed";
const char kRepeatFlag[] = "repeat";
const char kShuffleFlag[] = "shuffle";
const char kStackTraceDepthFlag[] = "stack_trace_depth";
const char kStreamResultToFlag[] = "stream_result_to";
const char kThrowOnFailureFlag[] = "throw_on_failure";

// A valid random seed must be in [1, kMaxRandomSeed].
const int kMaxRandomSeed = 99999;

// g_help_flag is true iff the --help flag or an equivalent form is
// specified on the command line.
GTEST_API_ extern bool g_help_flag;

// Returns the current time in milliseconds.
GTEST_API_ TimeInMillis GetTimeInMillis();

// Returns true iff Google Test should use colors in the output.
GTEST_API_ bool ShouldUseColor(bool stdout_is_tty);

// Formats the given time in milliseconds as seconds.
GTEST_API_ std::string FormatTimeInMillisAsSeconds(TimeInMillis ms);

// Parses a string for an Int32 flag, in the form of "--flag=value".
//
// On success, stores the value of the flag in *value, and returns
// true.
// On failure, returns false without changing *value.
GTEST_API_ bool ParseInt32Flag(
    const char* str, const char* flag, Int32* value);

// Returns a random seed in range [1, kMaxRandomSeed] based on the
// given --gtest_random_seed flag value.
inline int GetRandomSeedFromFlag(Int32 random_seed_flag) {
  const unsigned int raw_seed = (random_seed_flag == 0) ?
      static_cast<unsigned int>(GetTimeInMillis()) :
      static_cast<unsigned int>(random_seed_flag);

  // Normalizes the actual seed to range [1, kMaxRandomSeed] such that
  // it's easy to type.
  const int normalized_seed =
      static_cast<int>((raw_seed - 1U) %
                       static_cast<unsigned int>(kMaxRandomSeed)) + 1;
  return normalized_seed;
}

// Returns the first valid random seed after 'seed'.  The behavior is
// undefined if 'seed' is invalid.  The seed after kMaxRandomSeed is
// considered to be 1.
inline int GetNextRandomSeed(int seed) {
  GTEST_CHECK_(1 <= seed && seed <= kMaxRandomSeed)
      << "Invalid random seed " << seed << " - must be in [1, "
      << kMaxRandomSeed << "].";
  const int next_seed = seed + 1;
  return (next_seed > kMaxRandomSeed) ?
      1 : next_seed;
}

// This class saves the values of all Google Test flags in its c'tor, and
// restores them in its d'tor.
class GTestFlagSaver {
 public:
  // The c'tor.
  GTestFlagSaver() {
    also_run_disabled_tests_ = GTEST_FLAG(also_run_disabled_tests);
    break_on_failure_ = GTEST_FLAG(break_on_failure);
    catch_exceptions_ = GTEST_FLAG(catch_exceptions);
    color_ = GTEST_FLAG(color);
    death_test_style_ = GTEST_FLAG(death_test_style);
    death_test_use_fork_ = GTEST_FLAG(death_test_use_fork);
    filter_ = GTEST_FLAG(filter);
    internal_run_death_test_ = GTEST_FLAG(internal_run_death_test);
    list_tests_ = GTEST_FLAG(list_tests);
    output_ = GTEST_FLAG(output);
    print_time_ = GTEST_FLAG(print_time);
    random_seed_ = GTEST_FLAG(random_seed);
    repeat_ = GTEST_FLAG(repeat);
    shuffle_ = GTEST_FLAG(shuffle);
    stack_trace_depth_ = GTEST_FLAG(stack_trace_depth);
    stream_result_to_ = GTEST_FLAG(stream_result_to);
    throw_on_failure_ = GTEST_FLAG(throw_on_failure);
  }

  // The d'tor is not virtual.
  // DO NOT INHERIT FROM THIS CLASS.
  ~GTestFlagSaver() {
    GTEST_FLAG(also_run_disabled_tests) = also_run_disabled_tests_;
    GTEST_FLAG(break_on_failure) = break_on_failure_;
    GTEST_FLAG(catch_exceptions) = catch_exceptions_;
    GTEST_FLAG(color) = color_;
    GTEST_FLAG(death_test_style) = death_test_style_;
    GTEST_FLAG(death_test_use_fork) = death_test_use_fork_;
    GTEST_FLAG(filter) = filter_;
    GTEST_FLAG(internal_run_death_test) = internal_run_death_test_;
    GTEST_FLAG(list_tests) = list_tests_;
    GTEST_FLAG(output) = output_;
    GTEST_FLAG(print_time) = print_time_;
    GTEST_FLAG(random_seed) = random_seed_;
    GTEST_FLAG(repeat) = repeat_;
    GTEST_FLAG(shuffle) = shuffle_;
    GTEST_FLAG(stack_trace_depth) = stack_trace_depth_;
    GTEST_FLAG(stream_result_to) = stream_result_to_;
    GTEST_FLAG(throw_on_failure) = throw_on_failure_;
  }

 private:
  // Fields for saving the original values of flags.
  bool also_run_disabled_tests_;
  bool break_on_failure_;
  bool catch_exceptions_;
  String color_;
  String death_test_style_;
  bool death_test_use_fork_;
  String filter_;
  String internal_run_death_test_;
  bool list_tests_;
  String output_;
  bool print_time_;
  internal::Int32 random_seed_;
  internal::Int32 repeat_;
  bool shuffle_;
  internal::Int32 stack_trace_depth_;
  String stream_result_to_;
  bool throw_on_failure_;
} GTEST_ATTRIBUTE_UNUSED_;

// Converts a Unicode code point to a narrow string in UTF-8 encoding.
// code_point parameter is of type UInt32 because wchar_t may not be
// wide enough to contain a code point.
// The output buffer str must contain at least 32 characters.
// The function returns the address of the output buffer.
// If the code_point is not a valid Unicode code point
// (i.e.
// outside of Unicode range U+0 to U+10FFFF) it will be output
// as '(Invalid Unicode 0xXXXXXXXX)'.
GTEST_API_ char* CodePointToUtf8(UInt32 code_point, char* str);

// Converts a wide string to a narrow string in UTF-8 encoding.
// The wide string is assumed to have the following encoding:
//   UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS)
//   UTF-32 if sizeof(wchar_t) == 4 (on Linux)
// Parameter str points to a null-terminated wide string.
// Parameter num_chars may additionally limit the number
// of wchar_t characters processed. -1 is used when the entire string
// should be processed.
// If the string contains code points that are not valid Unicode code points
// (i.e. outside of Unicode range U+0 to U+10FFFF) they will be output
// as '(Invalid Unicode 0xXXXXXXXX)'. If the string is in UTF-16 encoding
// and contains invalid UTF-16 surrogate pairs, values in those pairs
// will be encoded as individual Unicode characters from the Basic
// Multilingual Plane.
GTEST_API_ String WideStringToUtf8(const wchar_t* str, int num_chars);

// Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file
// if the variable is present. If a file already exists at this location, this
// function will write over it. If the variable is present, but the file cannot
// be created, prints an error and exits.
void WriteToShardStatusFileIfNeeded();

// Checks whether sharding is enabled by examining the relevant
// environment variable values. If the variables are present,
// but inconsistent (e.g., shard_index >= total_shards), prints
// an error and exits. If in_subprocess_for_death_test, sharding is
// disabled because it must only be applied to the original test
// process.
// Otherwise, we could filter out death tests we intended to execute.
GTEST_API_ bool ShouldShard(const char* total_shards_str,
                            const char* shard_index_str,
                            bool in_subprocess_for_death_test);

// Parses the environment variable var as an Int32. If it is unset,
// returns default_val. If it is not an Int32, prints an error and
// aborts.
GTEST_API_ Int32 Int32FromEnvOrDie(const char* env_var, Int32 default_val);

// Given the total number of shards, the shard index, and the test id,
// returns true iff the test should be run on this shard. The test id is
// some arbitrary but unique non-negative integer assigned to each test
// method. Assumes that 0 <= shard_index < total_shards.
GTEST_API_ bool ShouldRunTestOnShard(
    int total_shards, int shard_index, int test_id);

// STL container utilities.

// Returns the number of elements in the given container that satisfy
// the given predicate.
template <class Container, typename Predicate>
inline int CountIf(const Container& c, Predicate predicate) {
  // Implemented as an explicit loop since std::count_if() in libCstd on
  // Solaris has a non-standard signature.
  int count = 0;
  for (typename Container::const_iterator it = c.begin(); it != c.end(); ++it) {
    if (predicate(*it))
      ++count;
  }
  return count;
}

// Applies a function/functor to each element in the container.
template <class Container, typename Functor>
void ForEach(const Container& c, Functor functor) {
  std::for_each(c.begin(), c.end(), functor);
}

// Returns the i-th element of the vector, or default_value if i is not
// in range [0, v.size()).
template <typename E>
inline E GetElementOr(const std::vector<E>& v, int i, E default_value) {
  return (i < 0 || i >= static_cast<int>(v.size())) ?
      default_value : v[i];
}

// Performs an in-place shuffle of a range of the vector's elements.
// 'begin' and 'end' are element indices as an STL-style range;
// i.e. [begin, end) are shuffled, where 'end' == size() means to
// shuffle to the end of the vector.
template <typename E>
void ShuffleRange(internal::Random* random, int begin, int end,
                  std::vector<E>* v) {
  const int size = static_cast<int>(v->size());
  GTEST_CHECK_(0 <= begin && begin <= size)
      << "Invalid shuffle range start " << begin << ": must be in range [0, "
      << size << "].";
  GTEST_CHECK_(begin <= end && end <= size)
      << "Invalid shuffle range finish " << end << ": must be in range ["
      << begin << ", " << size << "].";

  // Fisher-Yates shuffle, from
  // http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
  for (int range_width = end - begin; range_width >= 2; range_width--) {
    const int last_in_range = begin + range_width - 1;
    const int selected = begin + random->Generate(range_width);
    std::swap((*v)[selected], (*v)[last_in_range]);
  }
}

// Performs an in-place shuffle of the vector's elements.
template <typename E>
inline void Shuffle(internal::Random* random, std::vector<E>* v) {
  ShuffleRange(random, 0, static_cast<int>(v->size()), v);
}

// A function for deleting an object.
// Handy for being used as a
// functor.
template <typename T>
static void Delete(T* x) {
  delete x;
}

// A predicate that checks the key of a TestProperty against a known key.
//
// TestPropertyKeyIs is copyable.
class TestPropertyKeyIs {
 public:
  // Constructor.
  //
  // TestPropertyKeyIs has NO default constructor.
  explicit TestPropertyKeyIs(const char* key)
      : key_(key) {}

  // Returns true iff the test name of test property matches on key_.
  bool operator()(const TestProperty& test_property) const {
    return String(test_property.key()).Compare(key_) == 0;
  }

 private:
  String key_;
};

// Class UnitTestOptions.
//
// This class contains functions for processing options the user
// specifies when running the tests.  It has only static members.
//
// In most cases, the user can specify an option using either an
// environment variable or a command line flag.  E.g. you can set the
// test filter using either GTEST_FILTER or --gtest_filter.  If both
// the variable and the flag are present, the latter overrides the
// former.
class GTEST_API_ UnitTestOptions {
 public:
  // Functions for processing the gtest_output flag.

  // Returns the output format, or "" for normal printed output.
  static String GetOutputFormat();

  // Returns the absolute path of the requested output file, or the
  // default (test_detail.xml in the original working directory) if
  // none was explicitly specified.
  static String GetAbsolutePathToOutputFile();

  // Functions for processing the gtest_filter flag.

  // Returns true iff the wildcard pattern matches the string.
  // The
  // first ':' or '\0' character in pattern marks the end of it.
  //
  // This recursive algorithm isn't very efficient, but is clear and
  // works well enough for matching test names, which are short.
  static bool PatternMatchesString(const char *pattern, const char *str);

  // Returns true iff the user-specified filter matches the test case
  // name and the test name.
  static bool FilterMatchesTest(const String &test_case_name,
                                const String &test_name);

#if GTEST_OS_WINDOWS
  // Function for supporting the gtest_catch_exception flag.

  // Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the
  // given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise.
  // This function is useful as an __except condition.
  static int GTestShouldProcessSEH(DWORD exception_code);
#endif  // GTEST_OS_WINDOWS

  // Returns true if "name" matches the ':' separated list of glob-style
  // filters in "filter".
  static bool MatchesFilter(const String& name, const char* filter);
};

// Returns the current application's name, removing directory path if that
// is present.  Used by UnitTestOptions::GetOutputFile.
GTEST_API_ FilePath GetCurrentExecutableName();

// The role interface for getting the OS stack trace as a string.
class OsStackTraceGetterInterface {
 public:
  OsStackTraceGetterInterface() {}
  virtual ~OsStackTraceGetterInterface() {}

  // Returns the current OS stack trace as a String.  Parameters:
  //
  //   max_depth  - the maximum number of stack frames to be included
  //                in the trace.
  //   skip_count - the number of top frames to be skipped; doesn't count
  //                against max_depth.
  virtual String CurrentStackTrace(int max_depth, int skip_count) = 0;

  // UponLeavingGTest() should be called immediately before Google Test calls
  // user code.
  // It saves some information about the current stack that
  // CurrentStackTrace() will use to find and hide Google Test stack frames.
  virtual void UponLeavingGTest() = 0;

 private:
  GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetterInterface);
};

// A working implementation of the OsStackTraceGetterInterface interface.
class OsStackTraceGetter : public OsStackTraceGetterInterface {
 public:
  OsStackTraceGetter() : caller_frame_(NULL) {}
  virtual String CurrentStackTrace(int max_depth, int skip_count);
  virtual void UponLeavingGTest();

  // This string is inserted in place of stack frames that are part of
  // Google Test's implementation.
  static const char* const kElidedFramesMarker;

 private:
  Mutex mutex_;  // protects all internal state

  // We save the stack frame below the frame that calls user code.
  // We do this because the address of the frame immediately below
  // the user code changes between the call to UponLeavingGTest()
  // and any calls to CurrentStackTrace() from within the user code.
  void* caller_frame_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetter);
};

// Information about a Google Test trace point.
struct TraceInfo {
  const char* file;
  int line;
  String message;
};

// This is the default global test part result reporter used in UnitTestImpl.
// This class should only be used by UnitTestImpl.
class DefaultGlobalTestPartResultReporter
  : public TestPartResultReporterInterface {
 public:
  explicit DefaultGlobalTestPartResultReporter(UnitTestImpl* unit_test);
  // Implements the TestPartResultReporterInterface. Reports the test part
  // result in the current test.
  virtual void ReportTestPartResult(const TestPartResult& result);

 private:
  UnitTestImpl* const unit_test_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultGlobalTestPartResultReporter);
};

// This is the default per-thread test part result reporter used in
// UnitTestImpl.
// This class should only be used by UnitTestImpl.
class DefaultPerThreadTestPartResultReporter
    : public TestPartResultReporterInterface {
 public:
  explicit DefaultPerThreadTestPartResultReporter(UnitTestImpl* unit_test);
  // Implements the TestPartResultReporterInterface. The implementation just
  // delegates to the current global test part result reporter of *unit_test_.
  virtual void ReportTestPartResult(const TestPartResult& result);

 private:
  UnitTestImpl* const unit_test_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultPerThreadTestPartResultReporter);
};

// The private implementation of the UnitTest class.  We don't protect
// the methods under a mutex, as this class is not accessible by a
// user and the UnitTest class that delegates work to this class does
// proper locking.
class GTEST_API_ UnitTestImpl {
 public:
  explicit UnitTestImpl(UnitTest* parent);
  virtual ~UnitTestImpl();

  // There are two different ways to register your own TestPartResultReporter.
  // You can register your own reporter to listen either only for test results
  // from the current thread or for results from all threads.
  // By default, each per-thread test result reporter just passes a new
  // TestPartResult to the global test result reporter, which registers the
  // test part result for the currently running test.

  // Returns the global test part result reporter.
  TestPartResultReporterInterface* GetGlobalTestPartResultReporter();

  // Sets the global test part result reporter.
  void SetGlobalTestPartResultReporter(
      TestPartResultReporterInterface* reporter);

  // Returns the test part result reporter for the current thread.
  TestPartResultReporterInterface* GetTestPartResultReporterForCurrentThread();

  // Sets the test part result reporter for the current thread.
  void SetTestPartResultReporterForCurrentThread(
      TestPartResultReporterInterface* reporter);

  // Gets the number of successful test cases.
cases.\n  int successful_test_case_count() const;\n\n  // Gets the number of failed test cases.\n  int failed_test_case_count() const;\n\n  // Gets the number of all test cases.\n  int total_test_case_count() const;\n\n  // Gets the number of all test cases that contain at least one test\n  // that should run.\n  int test_case_to_run_count() const;\n\n  // Gets the number of successful tests.\n  int successful_test_count() const;\n\n  // Gets the number of failed tests.\n  int failed_test_count() const;\n\n  // Gets the number of disabled tests.\n  int disabled_test_count() const;\n\n  // Gets the number of all tests.\n  int total_test_count() const;\n\n  // Gets the number of tests that should run.\n  int test_to_run_count() const;\n\n  // Gets the elapsed time, in milliseconds.\n  TimeInMillis elapsed_time() const { return elapsed_time_; }\n\n  // Returns true iff the unit test passed (i.e. all test cases passed).\n  bool Passed() const { return !Failed(); }\n\n  // Returns true iff the unit test failed (i.e. some test case failed\n  // or something outside of all tests failed).\n  bool Failed() const {\n    return failed_test_case_count() > 0 || ad_hoc_test_result()->Failed();\n  }\n\n  // Gets the i-th test case among all the test cases. i can range from 0 to\n  // total_test_case_count() - 1. If i is not in that range, returns NULL.\n  const TestCase* GetTestCase(int i) const {\n    const int index = GetElementOr(test_case_indices_, i, -1);\n    // Index through test_case_indices_ (not with i directly) so that the\n    // getter respects the current, possibly shuffled, test case order.\n    return index < 0 ? NULL : test_cases_[index];\n  }\n\n  // Gets the i-th test case among all the test cases. i can range from 0 to\n  // total_test_case_count() - 1. If i is not in that range, returns NULL.\n  TestCase* GetMutableTestCase(int i) {\n    const int index = GetElementOr(test_case_indices_, i, -1);\n    return index < 0 ? 
NULL : test_cases_[index];\n  }\n\n  // Provides access to the event listener list.\n  TestEventListeners* listeners() { return &listeners_; }\n\n  // Returns the TestResult for the test that's currently running, or\n  // the TestResult for the ad hoc test if no test is running.\n  TestResult* current_test_result();\n\n  // Returns the TestResult for the ad hoc test.\n  const TestResult* ad_hoc_test_result() const { return &ad_hoc_test_result_; }\n\n  // Sets the OS stack trace getter.\n  //\n  // Does nothing if the input and the current OS stack trace getter\n  // are the same; otherwise, deletes the old getter and makes the\n  // input the current getter.\n  void set_os_stack_trace_getter(OsStackTraceGetterInterface* getter);\n\n  // Returns the current OS stack trace getter if it is not NULL;\n  // otherwise, creates an OsStackTraceGetter, makes it the current\n  // getter, and returns it.\n  OsStackTraceGetterInterface* os_stack_trace_getter();\n\n  // Returns the current OS stack trace as a String.\n  //\n  // The maximum number of stack frames to be included is specified by\n  // the gtest_stack_trace_depth flag.  The skip_count parameter\n  // specifies the number of top frames to be skipped, which doesn't\n  // count against the number of frames to be included.\n  //\n  // For example, if Foo() calls Bar(), which in turn calls\n  // CurrentOsStackTraceExceptTop(1), Foo() will be included in the\n  // trace but Bar() and CurrentOsStackTraceExceptTop() won't.\n  String CurrentOsStackTraceExceptTop(int skip_count);\n\n  // Finds and returns a TestCase with the given name.  
If one doesn't\n  // exist, creates one and returns it.\n  //\n  // Arguments:\n  //\n  //   test_case_name: name of the test case\n  //   type_param:     the name of the test's type parameter, or NULL if\n  //                   this is not a typed or a type-parameterized test.\n  //   set_up_tc:      pointer to the function that sets up the test case\n  //   tear_down_tc:   pointer to the function that tears down the test case\n  TestCase* GetTestCase(const char* test_case_name,\n                        const char* type_param,\n                        Test::SetUpTestCaseFunc set_up_tc,\n                        Test::TearDownTestCaseFunc tear_down_tc);\n\n  // Adds a TestInfo to the unit test.\n  //\n  // Arguments:\n  //\n  //   set_up_tc:    pointer to the function that sets up the test case\n  //   tear_down_tc: pointer to the function that tears down the test case\n  //   test_info:    the TestInfo object\n  void AddTestInfo(Test::SetUpTestCaseFunc set_up_tc,\n                   Test::TearDownTestCaseFunc tear_down_tc,\n                   TestInfo* test_info) {\n    // In order to support thread-safe death tests, we need to\n    // remember the original working directory when the test program\n    // was first invoked.  We cannot do this in RUN_ALL_TESTS(), as\n    // the user may have changed the current directory before calling\n    // RUN_ALL_TESTS().  
Therefore we capture the current directory in\n    // AddTestInfo(), which is called to register a TEST or TEST_F\n    // before main() is reached.\n    if (original_working_dir_.IsEmpty()) {\n      original_working_dir_.Set(FilePath::GetCurrentDir());\n      GTEST_CHECK_(!original_working_dir_.IsEmpty())\n          << \"Failed to get the current working directory.\";\n    }\n\n    GetTestCase(test_info->test_case_name(),\n                test_info->type_param(),\n                set_up_tc,\n                tear_down_tc)->AddTestInfo(test_info);\n  }\n\n#if GTEST_HAS_PARAM_TEST\n  // Returns the ParameterizedTestCaseRegistry object used to keep track of\n  // value-parameterized tests and instantiate and register them.\n  internal::ParameterizedTestCaseRegistry& parameterized_test_registry() {\n    return parameterized_test_registry_;\n  }\n#endif  // GTEST_HAS_PARAM_TEST\n\n  // Sets the TestCase object for the test that's currently running.\n  void set_current_test_case(TestCase* a_current_test_case) {\n    current_test_case_ = a_current_test_case;\n  }\n\n  // Sets the TestInfo object for the test that's currently running.  If\n  // current_test_info is NULL, the assertion results will be stored in\n  // ad_hoc_test_result_.\n  void set_current_test_info(TestInfo* a_current_test_info) {\n    current_test_info_ = a_current_test_info;\n  }\n\n  // Registers all parameterized tests defined using TEST_P and\n  // INSTANTIATE_TEST_CASE_P, creating regular tests for each test/parameter\n  // combination. This method can be called more than once; it has guards\n  // that protect against registering the tests more than once.  If\n  // value-parameterized tests are disabled, RegisterParameterizedTests is\n  // present but does nothing.\n  void RegisterParameterizedTests();\n\n  // Runs all tests in this UnitTest object, prints the result, and\n  // returns true if all tests are successful.  
If any exception is\n  // thrown during a test, this test is considered to be failed, but\n  // the rest of the tests will still be run.\n  bool RunAllTests();\n\n  // Clears the results of all tests, except the ad hoc tests.\n  void ClearNonAdHocTestResult() {\n    ForEach(test_cases_, TestCase::ClearTestCaseResult);\n  }\n\n  // Clears the results of ad-hoc test assertions.\n  void ClearAdHocTestResult() {\n    ad_hoc_test_result_.Clear();\n  }\n\n  enum ReactionToSharding {\n    HONOR_SHARDING_PROTOCOL,\n    IGNORE_SHARDING_PROTOCOL\n  };\n\n  // Matches the full name of each test against the user-specified\n  // filter to decide whether the test should run, then records the\n  // result in each TestCase and TestInfo object.\n  // If shard_tests == HONOR_SHARDING_PROTOCOL, further filters tests\n  // based on sharding variables in the environment.\n  // Returns the number of tests that should run.\n  int FilterTests(ReactionToSharding shard_tests);\n\n  // Prints the names of the tests matching the user-specified filter flag.\n  void ListTestsMatchingFilter();\n\n  const TestCase* current_test_case() const { return current_test_case_; }\n  TestInfo* current_test_info() { return current_test_info_; }\n  const TestInfo* current_test_info() const { return current_test_info_; }\n\n  // Returns the vector of environments that need to be set-up/torn-down\n  // before/after the tests are run.\n  std::vector<Environment*>& environments() { return environments_; }\n\n  // Getters for the per-thread Google Test trace stack.\n  std::vector<TraceInfo>& gtest_trace_stack() {\n    return *(gtest_trace_stack_.pointer());\n  }\n  const std::vector<TraceInfo>& gtest_trace_stack() const {\n    return gtest_trace_stack_.get();\n  }\n\n#if GTEST_HAS_DEATH_TEST\n  void InitDeathTestSubprocessControlInfo() {\n    internal_run_death_test_flag_.reset(ParseInternalRunDeathTestFlag());\n  }\n  // Returns a pointer to the parsed --gtest_internal_run_death_test\n  // flag, or NULL if that 
flag was not specified.\n  // This information is useful only in a death test child process.\n  // Must not be called before a call to InitGoogleTest.\n  const InternalRunDeathTestFlag* internal_run_death_test_flag() const {\n    return internal_run_death_test_flag_.get();\n  }\n\n  // Returns a pointer to the current death test factory.\n  internal::DeathTestFactory* death_test_factory() {\n    return death_test_factory_.get();\n  }\n\n  void SuppressTestEventsIfInSubprocess();\n\n  friend class ReplaceDeathTestFactory;\n#endif  // GTEST_HAS_DEATH_TEST\n\n  // Initializes the event listener performing XML output as specified by\n  // UnitTestOptions. Must not be called before InitGoogleTest.\n  void ConfigureXmlOutput();\n\n#if GTEST_CAN_STREAM_RESULTS_\n  // Initializes the event listener for streaming test results to a socket.\n  // Must not be called before InitGoogleTest.\n  void ConfigureStreamingOutput();\n#endif\n\n  // Performs initialization dependent upon flag values obtained in\n  // ParseGoogleTestFlagsOnly.  Is called from InitGoogleTest after the call to\n  // ParseGoogleTestFlagsOnly.  In case a user neglects to call InitGoogleTest\n  // this function is also called from RunAllTests.  
Since this function can be\n  // called more than once, it has to be idempotent.\n  void PostFlagParsingInit();\n\n  // Gets the random seed used at the start of the current test iteration.\n  int random_seed() const { return random_seed_; }\n\n  // Gets the random number generator.\n  internal::Random* random() { return &random_; }\n\n  // Shuffles all test cases, and the tests within each test case,\n  // making sure that death tests are still run first.\n  void ShuffleTests();\n\n  // Restores the test cases and tests to their order before the first shuffle.\n  void UnshuffleTests();\n\n  // Returns the value of GTEST_FLAG(catch_exceptions) at the moment\n  // UnitTest::Run() starts.\n  bool catch_exceptions() const { return catch_exceptions_; }\n\n private:\n  friend class ::testing::UnitTest;\n\n  // Used by UnitTest::Run() to capture the state of\n  // GTEST_FLAG(catch_exceptions) at the moment it starts.\n  void set_catch_exceptions(bool value) { catch_exceptions_ = value; }\n\n  // The UnitTest object that owns this implementation object.\n  UnitTest* const parent_;\n\n  // The working directory when the first TEST() or TEST_F() was\n  // executed.\n  internal::FilePath original_working_dir_;\n\n  // The default test part result reporters.\n  DefaultGlobalTestPartResultReporter default_global_test_part_result_reporter_;\n  DefaultPerThreadTestPartResultReporter\n      default_per_thread_test_part_result_reporter_;\n\n  // Points to (but doesn't own) the global test part result reporter.\n  TestPartResultReporterInterface* global_test_part_result_repoter_;\n\n  // Protects read and write access to global_test_part_result_reporter_.\n  internal::Mutex global_test_part_result_reporter_mutex_;\n\n  // Points to (but doesn't own) the per-thread test part result reporter.\n  internal::ThreadLocal<TestPartResultReporterInterface*>\n      per_thread_test_part_result_reporter_;\n\n  // The vector of environments that need to be set-up/torn-down\n  // before/after 
the tests are run.\n  std::vector<Environment*> environments_;\n\n  // The vector of TestCases in their original order.  It owns the\n  // elements in the vector.\n  std::vector<TestCase*> test_cases_;\n\n  // Provides a level of indirection for the test case list to allow\n  // easy shuffling and restoring the test case order.  The i-th\n  // element of this vector is the index of the i-th test case in the\n  // shuffled order.\n  std::vector<int> test_case_indices_;\n\n#if GTEST_HAS_PARAM_TEST\n  // ParameterizedTestRegistry object used to register value-parameterized\n  // tests.\n  internal::ParameterizedTestCaseRegistry parameterized_test_registry_;\n\n  // Indicates whether RegisterParameterizedTests() has been called already.\n  bool parameterized_tests_registered_;\n#endif  // GTEST_HAS_PARAM_TEST\n\n  // Index of the last death test case registered.  Initially -1.\n  int last_death_test_case_;\n\n  // This points to the TestCase for the currently running test.  It\n  // changes as Google Test goes through one test case after another.\n  // When no test is running, this is set to NULL and Google Test\n  // stores assertion results in ad_hoc_test_result_.  Initially NULL.\n  TestCase* current_test_case_;\n\n  // This points to the TestInfo for the currently running test.  It\n  // changes as Google Test goes through one test after another.  When\n  // no test is running, this is set to NULL and Google Test stores\n  // assertion results in ad_hoc_test_result_.  Initially NULL.\n  TestInfo* current_test_info_;\n\n  // Normally, a user only writes assertions inside a TEST or TEST_F,\n  // or inside a function called by a TEST or TEST_F.  
Since Google\n  // Test keeps track of which test is currently running, it can\n  // associate such an assertion with the test it belongs to.\n  //\n  // If an assertion is encountered when no TEST or TEST_F is running,\n  // Google Test attributes the assertion result to an imaginary \"ad hoc\"\n  // test, and records the result in ad_hoc_test_result_.\n  TestResult ad_hoc_test_result_;\n\n  // The list of event listeners that can be used to track events inside\n  // Google Test.\n  TestEventListeners listeners_;\n\n  // The OS stack trace getter.  Will be deleted when the UnitTest\n  // object is destructed.  By default, an OsStackTraceGetter is used,\n  // but the user can set this field to use a custom getter if that is\n  // desired.\n  OsStackTraceGetterInterface* os_stack_trace_getter_;\n\n  // True iff PostFlagParsingInit() has been called.\n  bool post_flag_parse_init_performed_;\n\n  // The random number seed used at the beginning of the test run.\n  int random_seed_;\n\n  // Our random number generator.\n  internal::Random random_;\n\n  // How long the test took to run, in milliseconds.\n  TimeInMillis elapsed_time_;\n\n#if GTEST_HAS_DEATH_TEST\n  // The decomposed components of the gtest_internal_run_death_test flag,\n  // parsed when RUN_ALL_TESTS is called.\n  internal::scoped_ptr<InternalRunDeathTestFlag> internal_run_death_test_flag_;\n  internal::scoped_ptr<internal::DeathTestFactory> death_test_factory_;\n#endif  // GTEST_HAS_DEATH_TEST\n\n  // A per-thread stack of traces created by the SCOPED_TRACE() macro.\n  internal::ThreadLocal<std::vector<TraceInfo> > gtest_trace_stack_;\n\n  // The value of GTEST_FLAG(catch_exceptions) at the moment RunAllTests()\n  // starts.\n  bool catch_exceptions_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTestImpl);\n};  // class UnitTestImpl\n\n// Convenience function for accessing the global UnitTest\n// implementation object.\ninline UnitTestImpl* GetUnitTestImpl() {\n  return 
UnitTest::GetInstance()->impl();\n}\n\n#if GTEST_USES_SIMPLE_RE\n\n// Internal helper functions for implementing the simple regular\n// expression matcher.\nGTEST_API_ bool IsInSet(char ch, const char* str);\nGTEST_API_ bool IsAsciiDigit(char ch);\nGTEST_API_ bool IsAsciiPunct(char ch);\nGTEST_API_ bool IsRepeat(char ch);\nGTEST_API_ bool IsAsciiWhiteSpace(char ch);\nGTEST_API_ bool IsAsciiWordChar(char ch);\nGTEST_API_ bool IsValidEscape(char ch);\nGTEST_API_ bool AtomMatchesChar(bool escaped, char pattern, char ch);\nGTEST_API_ bool ValidateRegex(const char* regex);\nGTEST_API_ bool MatchRegexAtHead(const char* regex, const char* str);\nGTEST_API_ bool MatchRepetitionAndRegexAtHead(\n    bool escaped, char ch, char repeat, const char* regex, const char* str);\nGTEST_API_ bool MatchRegexAnywhere(const char* regex, const char* str);\n\n#endif  // GTEST_USES_SIMPLE_RE\n\n// Parses the command line for Google Test flags, without initializing\n// other parts of Google Test.\nGTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, char** argv);\nGTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv);\n\n#if GTEST_HAS_DEATH_TEST\n\n// Returns the message describing the last system error, regardless of the\n// platform.\nGTEST_API_ String GetLastErrnoDescription();\n\n# if GTEST_OS_WINDOWS\n// Provides leak-safe Windows kernel handle ownership.\nclass AutoHandle {\n public:\n  AutoHandle() : handle_(INVALID_HANDLE_VALUE) {}\n  explicit AutoHandle(HANDLE handle) : handle_(handle) {}\n\n  ~AutoHandle() { Reset(); }\n\n  HANDLE Get() const { return handle_; }\n  void Reset() { Reset(INVALID_HANDLE_VALUE); }\n  void Reset(HANDLE handle) {\n    if (handle != handle_) {\n      if (handle_ != INVALID_HANDLE_VALUE)\n        ::CloseHandle(handle_);\n      handle_ = handle;\n    }\n  }\n\n private:\n  HANDLE handle_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(AutoHandle);\n};\n# endif  // GTEST_OS_WINDOWS\n\n// Attempts to parse a string into a positive integer pointed to by 
the\n// number parameter.  Returns true if that is possible.\n// GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we can use\n// it here.\n//\n// Illustrative examples of the contract:\n//   int n = 0;\n//   ParseNaturalNumber(::std::string(\"123\"), &n);  // true, n == 123\n//   ParseNaturalNumber(::std::string(\"-5\"), &n);   // false (no leading digit)\n//   ParseNaturalNumber(::std::string(\"12x\"), &n);  // false (trailing garbage)\ntemplate <typename Integer>\nbool ParseNaturalNumber(const ::std::string& str, Integer* number) {\n  // Fail fast if the given string does not begin with a digit;\n  // this bypasses strtoXXX's \"optional leading whitespace and plus\n  // or minus sign\" semantics, which are undesirable here.\n  if (str.empty() || !IsDigit(str[0])) {\n    return false;\n  }\n  errno = 0;\n\n  char* end;\n  // BiggestConvertible is the largest integer type that system-provided\n  // string-to-number conversion routines can return.\n\n# if GTEST_OS_WINDOWS && !defined(__GNUC__)\n\n  // MSVC and C++ Builder define __int64 instead of the standard long long.\n  typedef unsigned __int64 BiggestConvertible;\n  const BiggestConvertible parsed = _strtoui64(str.c_str(), &end, 10);\n\n# else\n\n  typedef unsigned long long BiggestConvertible;  // NOLINT\n  const BiggestConvertible parsed = strtoull(str.c_str(), &end, 10);\n\n# endif  // GTEST_OS_WINDOWS && !defined(__GNUC__)\n\n  const bool parse_success = *end == '\\0' && errno == 0;\n\n  // TODO(vladl@google.com): Convert this to compile time assertion when it is\n  // available.\n  GTEST_CHECK_(sizeof(Integer) <= sizeof(parsed));\n\n  const Integer result = static_cast<Integer>(parsed);\n  if (parse_success && static_cast<BiggestConvertible>(result) == parsed) {\n    *number = result;\n    return true;\n  }\n  return false;\n}\n#endif  // GTEST_HAS_DEATH_TEST\n\n// TestResult contains some private methods that should be hidden from\n// Google Test users but are required for testing. This class allows our tests\n// to access them.\n//\n// This class is supplied only for the purpose of testing Google Test's own\n// constructs. 
Do not use it in user tests, either directly or indirectly.\nclass TestResultAccessor {\n public:\n  static void RecordProperty(TestResult* test_result,\n                             const TestProperty& property) {\n    test_result->RecordProperty(property);\n  }\n\n  static void ClearTestPartResults(TestResult* test_result) {\n    test_result->ClearTestPartResults();\n  }\n\n  static const std::vector<testing::TestPartResult>& test_part_results(\n      const TestResult& test_result) {\n    return test_result.test_part_results();\n  }\n};\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // GTEST_SRC_GTEST_INTERNAL_INL_H_\n#undef GTEST_IMPLEMENTATION_\n\n#if GTEST_OS_WINDOWS\n# define vsnprintf _vsnprintf\n#endif  // GTEST_OS_WINDOWS\n\nnamespace testing {\n\nusing internal::CountIf;\nusing internal::ForEach;\nusing internal::GetElementOr;\nusing internal::Shuffle;\n\n// Constants.\n\n// A test whose test case name or test name matches this filter is\n// disabled and not run.\nstatic const char kDisableTestFilter[] = \"DISABLED_*:*/DISABLED_*\";\n\n// A test case whose name matches this filter is considered a death\n// test case and will be run before test cases whose name doesn't\n// match this filter.\nstatic const char kDeathTestCaseFilter[] = \"*DeathTest:*DeathTest/*\";\n\n// A test filter that matches everything.\nstatic const char kUniversalFilter[] = \"*\";\n\n// The default output file for XML output.\nstatic const char kDefaultOutputFile[] = \"test_detail.xml\";\n\n// The environment variable name for the test shard index.\nstatic const char kTestShardIndex[] = \"GTEST_SHARD_INDEX\";\n// The environment variable name for the total number of test shards.\nstatic const char kTestTotalShards[] = \"GTEST_TOTAL_SHARDS\";\n// The environment variable name for the test shard status file.\nstatic const char kTestShardStatusFile[] = \"GTEST_SHARD_STATUS_FILE\";\n\nnamespace internal {\n\n// The text used in failure messages to indicate the start of 
the\n// stack trace.\nconst char kStackTraceMarker[] = \"\\nStack trace:\\n\";\n\n// g_help_flag is true iff the --help flag or an equivalent form is\n// specified on the command line.\nbool g_help_flag = false;\n\n}  // namespace internal\n\nGTEST_DEFINE_bool_(\n    also_run_disabled_tests,\n    internal::BoolFromGTestEnv(\"also_run_disabled_tests\", false),\n    \"Run disabled tests too, in addition to the tests normally being run.\");\n\nGTEST_DEFINE_bool_(\n    break_on_failure,\n    internal::BoolFromGTestEnv(\"break_on_failure\", false),\n    \"True iff a failed assertion should be a debugger break-point.\");\n\nGTEST_DEFINE_bool_(\n    catch_exceptions,\n    internal::BoolFromGTestEnv(\"catch_exceptions\", true),\n    \"True iff \" GTEST_NAME_\n    \" should catch exceptions and treat them as test failures.\");\n\nGTEST_DEFINE_string_(\n    color,\n    internal::StringFromGTestEnv(\"color\", \"auto\"),\n    \"Whether to use colors in the output.  Valid values: yes, no, \"\n    \"and auto.  'auto' means to use colors if the output is \"\n    \"being sent to a terminal and the TERM environment variable \"\n    \"is set to xterm, xterm-color, xterm-256color, linux or cygwin.\");\n\nGTEST_DEFINE_string_(\n    filter,\n    internal::StringFromGTestEnv(\"filter\", kUniversalFilter),\n    \"A colon-separated list of glob (not regex) patterns \"\n    \"for filtering the tests to run, optionally followed by a \"\n    \"'-' and a : separated list of negative patterns (tests to \"\n    \"exclude).  A test is run if it matches one of the positive \"\n    \"patterns and does not match any of the negative patterns.\");\n\nGTEST_DEFINE_bool_(list_tests, false,\n                   \"List all tests without running them.\");\n\nGTEST_DEFINE_string_(\n    output,\n    internal::StringFromGTestEnv(\"output\", \"\"),\n    \"A format (currently must be \\\"xml\\\"), optionally followed \"\n    \"by a colon and an output file name or directory. 
A directory \"\n    \"is indicated by a trailing pathname separator. \"\n    \"Examples: \\\"xml:filename.xml\\\", \\\"xml::directoryname/\\\". \"\n    \"If a directory is specified, output files will be created \"\n    \"within that directory, with file-names based on the test \"\n    \"executable's name and, if necessary, made unique by adding \"\n    \"digits.\");\n\nGTEST_DEFINE_bool_(\n    print_time,\n    internal::BoolFromGTestEnv(\"print_time\", true),\n    \"True iff \" GTEST_NAME_\n    \" should display elapsed time in text output.\");\n\nGTEST_DEFINE_int32_(\n    random_seed,\n    internal::Int32FromGTestEnv(\"random_seed\", 0),\n    \"Random number seed to use when shuffling test orders.  Must be in range \"\n    \"[1, 99999], or 0 to use a seed based on the current time.\");\n\nGTEST_DEFINE_int32_(\n    repeat,\n    internal::Int32FromGTestEnv(\"repeat\", 1),\n    \"How many times to repeat each test.  Specify a negative number \"\n    \"for repeating forever.  Useful for shaking out flaky tests.\");\n\nGTEST_DEFINE_bool_(\n    show_internal_stack_frames, false,\n    \"True iff \" GTEST_NAME_ \" should include internal stack frames when \"\n    \"printing test failure stack traces.\");\n\nGTEST_DEFINE_bool_(\n    shuffle,\n    internal::BoolFromGTestEnv(\"shuffle\", false),\n    \"True iff \" GTEST_NAME_\n    \" should randomize tests' order on every run.\");\n\nGTEST_DEFINE_int32_(\n    stack_trace_depth,\n    internal::Int32FromGTestEnv(\"stack_trace_depth\", kMaxStackTraceDepth),\n    \"The maximum number of stack frames to print when an \"\n    \"assertion fails.  The valid range is 0 through 100, inclusive.\");\n\nGTEST_DEFINE_string_(\n    stream_result_to,\n    internal::StringFromGTestEnv(\"stream_result_to\", \"\"),\n    \"This flag specifies the host name and the port number on which to stream \"\n    \"test results. Example: \\\"localhost:555\\\". 
The flag is effective only on \"\n    \"Linux.\");\n\nGTEST_DEFINE_bool_(\n    throw_on_failure,\n    internal::BoolFromGTestEnv(\"throw_on_failure\", false),\n    \"When this flag is specified, a failed assertion will throw an exception \"\n    \"if exceptions are enabled or exit the program with a non-zero code \"\n    \"otherwise.\");\n\nnamespace internal {\n\n// Generates a random number from [0, range), using a Linear\n// Congruential Generator (LCG).  Crashes if 'range' is 0 or greater\n// than kMaxRange.\nUInt32 Random::Generate(UInt32 range) {\n  // These constants are the same as are used in glibc's rand(3).\n  state_ = (1103515245U*state_ + 12345U) % kMaxRange;\n\n  GTEST_CHECK_(range > 0)\n      << \"Cannot generate a number in the range [0, 0).\";\n  GTEST_CHECK_(range <= kMaxRange)\n      << \"Generation of a number in [0, \" << range << \") was requested, \"\n      << \"but this can only generate numbers in [0, \" << kMaxRange << \").\";\n\n  // Converting via modulus introduces a bit of downward bias, but\n  // it's simple, and a linear congruential generator isn't too good\n  // to begin with.\n  return state_ % range;\n}\n\n// GTestIsInitialized() returns true iff the user has initialized\n// Google Test.  Useful for catching the user mistake of not initializing\n// Google Test before calling RUN_ALL_TESTS().\n//\n// A user must call testing::InitGoogleTest() to initialize Google\n// Test.  g_init_gtest_count is set to the number of times\n// InitGoogleTest() has been called.  
We don't protect this variable\n// under a mutex as it is only accessed in the main thread.\nint g_init_gtest_count = 0;\nstatic bool GTestIsInitialized() { return g_init_gtest_count != 0; }\n\n// Iterates over a vector of TestCases, keeping a running sum of the\n// results of calling a given int-returning method on each.\n// Returns the sum.\nstatic int SumOverTestCaseList(const std::vector<TestCase*>& case_list,\n                               int (TestCase::*method)() const) {\n  int sum = 0;\n  for (size_t i = 0; i < case_list.size(); i++) {\n    sum += (case_list[i]->*method)();\n  }\n  return sum;\n}\n\n// Returns true iff the test case passed.\nstatic bool TestCasePassed(const TestCase* test_case) {\n  return test_case->should_run() && test_case->Passed();\n}\n\n// Returns true iff the test case failed.\nstatic bool TestCaseFailed(const TestCase* test_case) {\n  return test_case->should_run() && test_case->Failed();\n}\n\n// Returns true iff test_case contains at least one test that should\n// run.\nstatic bool ShouldRunTestCase(const TestCase* test_case) {\n  return test_case->should_run();\n}\n\n// AssertHelper constructor.\nAssertHelper::AssertHelper(TestPartResult::Type type,\n                           const char* file,\n                           int line,\n                           const char* message)\n    : data_(new AssertHelperData(type, file, line, message)) {\n}\n\nAssertHelper::~AssertHelper() {\n  delete data_;\n}\n\n// Message assignment, for assertion streaming support.\nvoid AssertHelper::operator=(const Message& message) const {\n  UnitTest::GetInstance()->\n    AddTestPartResult(data_->type, data_->file, data_->line,\n                      AppendUserMessage(data_->message, message),\n                      UnitTest::GetInstance()->impl()\n                      ->CurrentOsStackTraceExceptTop(1)\n                      // Skips the stack frame for this function itself.\n                      );  // NOLINT\n}\n\n// Mutex for linked 
pointers.\nGTEST_DEFINE_STATIC_MUTEX_(g_linked_ptr_mutex);\n\n// Application pathname gotten in InitGoogleTest.\nString g_executable_path;\n\n// Returns the current application's name, removing directory path if that\n// is present.\nFilePath GetCurrentExecutableName() {\n  FilePath result;\n\n#if GTEST_OS_WINDOWS\n  result.Set(FilePath(g_executable_path).RemoveExtension(\"exe\"));\n#else\n  result.Set(FilePath(g_executable_path));\n#endif  // GTEST_OS_WINDOWS\n\n  return result.RemoveDirectoryName();\n}\n\n// Functions for processing the gtest_output flag.\n\n// Returns the output format, or \"\" for normal printed output.\nString UnitTestOptions::GetOutputFormat() {\n  const char* const gtest_output_flag = GTEST_FLAG(output).c_str();\n  if (gtest_output_flag == NULL) return String(\"\");\n\n  const char* const colon = strchr(gtest_output_flag, ':');\n  return (colon == NULL) ?\n      String(gtest_output_flag) :\n      String(gtest_output_flag, colon - gtest_output_flag);\n}\n\n// Returns the name of the requested output file, or the default if none\n// was explicitly specified.\nString UnitTestOptions::GetAbsolutePathToOutputFile() {\n  const char* const gtest_output_flag = GTEST_FLAG(output).c_str();\n  if (gtest_output_flag == NULL)\n    return String(\"\");\n\n  const char* const colon = strchr(gtest_output_flag, ':');\n  if (colon == NULL)\n    return String(internal::FilePath::ConcatPaths(\n               internal::FilePath(\n                   UnitTest::GetInstance()->original_working_dir()),\n               internal::FilePath(kDefaultOutputFile)).ToString() );\n\n  internal::FilePath output_name(colon + 1);\n  if (!output_name.IsAbsolutePath())\n    // TODO(wan@google.com): on Windows \\some\\path is not an absolute\n    // path (as its meaning depends on the current drive), yet the\n    // following logic for turning it into an absolute path is wrong.\n    // Fix it.\n    output_name = internal::FilePath::ConcatPaths(\n        
internal::FilePath(UnitTest::GetInstance()->original_working_dir()),\n        internal::FilePath(colon + 1));\n\n  if (!output_name.IsDirectory())\n    return output_name.ToString();\n\n  internal::FilePath result(internal::FilePath::GenerateUniqueFileName(\n      output_name, internal::GetCurrentExecutableName(),\n      GetOutputFormat().c_str()));\n  return result.ToString();\n}\n\n// Returns true iff the wildcard pattern matches the string.  The\n// first ':' or '\\0' character in pattern marks the end of it.\n//\n// This recursive algorithm isn't very efficient, but is clear and\n// works well enough for matching test names, which are short.\nbool UnitTestOptions::PatternMatchesString(const char *pattern,\n                                           const char *str) {\n  switch (*pattern) {\n    case '\\0':\n    case ':':  // Either ':' or '\\0' marks the end of the pattern.\n      return *str == '\\0';\n    case '?':  // Matches any single character.\n      return *str != '\\0' && PatternMatchesString(pattern + 1, str + 1);\n    case '*':  // Matches any string (possibly empty) of characters.\n      return (*str != '\\0' && PatternMatchesString(pattern, str + 1)) ||\n          PatternMatchesString(pattern + 1, str);\n    default:  // Non-special character.  
Matches itself.\n      return *pattern == *str &&\n          PatternMatchesString(pattern + 1, str + 1);\n  }\n}\n\nbool UnitTestOptions::MatchesFilter(const String& name, const char* filter) {\n  const char *cur_pattern = filter;\n  for (;;) {\n    if (PatternMatchesString(cur_pattern, name.c_str())) {\n      return true;\n    }\n\n    // Finds the next pattern in the filter.\n    cur_pattern = strchr(cur_pattern, ':');\n\n    // Returns false if no more patterns can be found.\n    if (cur_pattern == NULL) {\n      return false;\n    }\n\n    // Skips the pattern separator (the ':' character).\n    cur_pattern++;\n  }\n}\n\n// TODO(keithray): move String function implementations to gtest-string.cc.\n\n// Returns true iff the user-specified filter matches the test case\n// name and the test name.\nbool UnitTestOptions::FilterMatchesTest(const String &test_case_name,\n                                        const String &test_name) {\n  const String& full_name = String::Format(\"%s.%s\",\n                                           test_case_name.c_str(),\n                                           test_name.c_str());\n\n  // Splits --gtest_filter at '-', if there is one, to separate into\n  // positive filter and negative filter portions.\n  const char* const p = GTEST_FLAG(filter).c_str();\n  const char* const dash = strchr(p, '-');\n  String positive;\n  String negative;\n  if (dash == NULL) {\n    positive = GTEST_FLAG(filter).c_str();  // Whole string is a positive filter\n    negative = String(\"\");\n  } else {\n    positive = String(p, dash - p);  // Everything up to the dash\n    negative = String(dash+1);       // Everything after the dash\n    if (positive.empty()) {\n      // Treat '-test1' as the same as '*-test1'\n      positive = kUniversalFilter;\n    }\n  }\n\n  // A filter is a colon-separated list of patterns.  
It matches a\n  // test if any pattern in it matches the test.\n  return (MatchesFilter(full_name, positive.c_str()) &&\n          !MatchesFilter(full_name, negative.c_str()));\n}\n\n#if GTEST_HAS_SEH\n// Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the\n// given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise.\n// This function is useful as an __except condition.\nint UnitTestOptions::GTestShouldProcessSEH(DWORD exception_code) {\n  // Google Test should handle a SEH exception if:\n  //   1. the user wants it to, AND\n  //   2. this is not a breakpoint exception, AND\n  //   3. this is not a C++ exception (VC++ implements them via SEH,\n  //      apparently).\n  //\n  // SEH exception code for C++ exceptions.\n  // (see http://support.microsoft.com/kb/185294 for more information).\n  const DWORD kCxxExceptionCode = 0xe06d7363;\n\n  bool should_handle = true;\n\n  if (!GTEST_FLAG(catch_exceptions))\n    should_handle = false;\n  else if (exception_code == EXCEPTION_BREAKPOINT)\n    should_handle = false;\n  else if (exception_code == kCxxExceptionCode)\n    should_handle = false;\n\n  return should_handle ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH;\n}\n#endif  // GTEST_HAS_SEH\n\n}  // namespace internal\n\n// The c'tor sets this object as the test part result reporter used by\n// Google Test.  The 'result' parameter specifies where to report the\n// results. Intercepts only failures from the current thread.\nScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter(\n    TestPartResultArray* result)\n    : intercept_mode_(INTERCEPT_ONLY_CURRENT_THREAD),\n      result_(result) {\n  Init();\n}\n\n// The c'tor sets this object as the test part result reporter used by\n// Google Test.  
The 'result' parameter specifies where to report the\n// results.\nScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter(\n    InterceptMode intercept_mode, TestPartResultArray* result)\n    : intercept_mode_(intercept_mode),\n      result_(result) {\n  Init();\n}\n\nvoid ScopedFakeTestPartResultReporter::Init() {\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  if (intercept_mode_ == INTERCEPT_ALL_THREADS) {\n    old_reporter_ = impl->GetGlobalTestPartResultReporter();\n    impl->SetGlobalTestPartResultReporter(this);\n  } else {\n    old_reporter_ = impl->GetTestPartResultReporterForCurrentThread();\n    impl->SetTestPartResultReporterForCurrentThread(this);\n  }\n}\n\n// The d'tor restores the test part result reporter used by Google Test\n// before.\nScopedFakeTestPartResultReporter::~ScopedFakeTestPartResultReporter() {\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  if (intercept_mode_ == INTERCEPT_ALL_THREADS) {\n    impl->SetGlobalTestPartResultReporter(old_reporter_);\n  } else {\n    impl->SetTestPartResultReporterForCurrentThread(old_reporter_);\n  }\n}\n\n// Increments the test part result count and remembers the result.\n// This method is from the TestPartResultReporterInterface interface.\nvoid ScopedFakeTestPartResultReporter::ReportTestPartResult(\n    const TestPartResult& result) {\n  result_->Append(result);\n}\n\nnamespace internal {\n\n// Returns the type ID of ::testing::Test.  We should always call this\n// instead of GetTypeId< ::testing::Test>() to get the type ID of\n// testing::Test.  This is to work around a suspected linker bug when\n// using Google Test as a framework on Mac OS X.  The bug causes\n// GetTypeId< ::testing::Test>() to return different values depending\n// on whether the call is from the Google Test framework itself or\n// from user test code.  
GetTestTypeId() is guaranteed to always\n// return the same value, as it always calls GetTypeId<>() from the\n// gtest.cc, which is within the Google Test framework.\nTypeId GetTestTypeId() {\n  return GetTypeId<Test>();\n}\n\n// The value of GetTestTypeId() as seen from within the Google Test\n// library.  This is solely for testing GetTestTypeId().\nextern const TypeId kTestTypeIdInGoogleTest = GetTestTypeId();\n\n// This predicate-formatter checks that 'results' contains a test part\n// failure of the given type and that the failure message contains the\n// given substring.\nAssertionResult HasOneFailure(const char* /* results_expr */,\n                              const char* /* type_expr */,\n                              const char* /* substr_expr */,\n                              const TestPartResultArray& results,\n                              TestPartResult::Type type,\n                              const string& substr) {\n  const String expected(type == TestPartResult::kFatalFailure ?\n                        \"1 fatal failure\" :\n                        \"1 non-fatal failure\");\n  Message msg;\n  if (results.size() != 1) {\n    msg << \"Expected: \" << expected << \"\\n\"\n        << \"  Actual: \" << results.size() << \" failures\";\n    for (int i = 0; i < results.size(); i++) {\n      msg << \"\\n\" << results.GetTestPartResult(i);\n    }\n    return AssertionFailure() << msg;\n  }\n\n  const TestPartResult& r = results.GetTestPartResult(0);\n  if (r.type() != type) {\n    return AssertionFailure() << \"Expected: \" << expected << \"\\n\"\n                              << \"  Actual:\\n\"\n                              << r;\n  }\n\n  if (strstr(r.message(), substr.c_str()) == NULL) {\n    return AssertionFailure() << \"Expected: \" << expected << \" containing \\\"\"\n                              << substr << \"\\\"\\n\"\n                              << \"  Actual:\\n\"\n                              << r;\n  }\n\n  return 
AssertionSuccess();\n}\n\n// The constructor of SingleFailureChecker remembers where to look up\n// test part results, what type of failure we expect, and what\n// substring the failure message should contain.\nSingleFailureChecker:: SingleFailureChecker(\n    const TestPartResultArray* results,\n    TestPartResult::Type type,\n    const string& substr)\n    : results_(results),\n      type_(type),\n      substr_(substr) {}\n\n// The destructor of SingleFailureChecker verifies that the given\n// TestPartResultArray contains exactly one failure that has the given\n// type and contains the given substring.  If that's not the case, a\n// non-fatal failure will be generated.\nSingleFailureChecker::~SingleFailureChecker() {\n  EXPECT_PRED_FORMAT3(HasOneFailure, *results_, type_, substr_);\n}\n\nDefaultGlobalTestPartResultReporter::DefaultGlobalTestPartResultReporter(\n    UnitTestImpl* unit_test) : unit_test_(unit_test) {}\n\nvoid DefaultGlobalTestPartResultReporter::ReportTestPartResult(\n    const TestPartResult& result) {\n  unit_test_->current_test_result()->AddTestPartResult(result);\n  unit_test_->listeners()->repeater()->OnTestPartResult(result);\n}\n\nDefaultPerThreadTestPartResultReporter::DefaultPerThreadTestPartResultReporter(\n    UnitTestImpl* unit_test) : unit_test_(unit_test) {}\n\nvoid DefaultPerThreadTestPartResultReporter::ReportTestPartResult(\n    const TestPartResult& result) {\n  unit_test_->GetGlobalTestPartResultReporter()->ReportTestPartResult(result);\n}\n\n// Returns the global test part result reporter.\nTestPartResultReporterInterface*\nUnitTestImpl::GetGlobalTestPartResultReporter() {\n  internal::MutexLock lock(&global_test_part_result_reporter_mutex_);\n  return global_test_part_result_repoter_;\n}\n\n// Sets the global test part result reporter.\nvoid UnitTestImpl::SetGlobalTestPartResultReporter(\n    TestPartResultReporterInterface* reporter) {\n  internal::MutexLock lock(&global_test_part_result_reporter_mutex_);\n  
global_test_part_result_repoter_ = reporter;\n}\n\n// Returns the test part result reporter for the current thread.\nTestPartResultReporterInterface*\nUnitTestImpl::GetTestPartResultReporterForCurrentThread() {\n  return per_thread_test_part_result_reporter_.get();\n}\n\n// Sets the test part result reporter for the current thread.\nvoid UnitTestImpl::SetTestPartResultReporterForCurrentThread(\n    TestPartResultReporterInterface* reporter) {\n  per_thread_test_part_result_reporter_.set(reporter);\n}\n\n// Gets the number of successful test cases.\nint UnitTestImpl::successful_test_case_count() const {\n  return CountIf(test_cases_, TestCasePassed);\n}\n\n// Gets the number of failed test cases.\nint UnitTestImpl::failed_test_case_count() const {\n  return CountIf(test_cases_, TestCaseFailed);\n}\n\n// Gets the number of all test cases.\nint UnitTestImpl::total_test_case_count() const {\n  return static_cast<int>(test_cases_.size());\n}\n\n// Gets the number of all test cases that contain at least one test\n// that should run.\nint UnitTestImpl::test_case_to_run_count() const {\n  return CountIf(test_cases_, ShouldRunTestCase);\n}\n\n// Gets the number of successful tests.\nint UnitTestImpl::successful_test_count() const {\n  return SumOverTestCaseList(test_cases_, &TestCase::successful_test_count);\n}\n\n// Gets the number of failed tests.\nint UnitTestImpl::failed_test_count() const {\n  return SumOverTestCaseList(test_cases_, &TestCase::failed_test_count);\n}\n\n// Gets the number of disabled tests.\nint UnitTestImpl::disabled_test_count() const {\n  return SumOverTestCaseList(test_cases_, &TestCase::disabled_test_count);\n}\n\n// Gets the number of all tests.\nint UnitTestImpl::total_test_count() const {\n  return SumOverTestCaseList(test_cases_, &TestCase::total_test_count);\n}\n\n// Gets the number of tests that should run.\nint UnitTestImpl::test_to_run_count() const {\n  return SumOverTestCaseList(test_cases_, &TestCase::test_to_run_count);\n}\n\n// Returns 
the current OS stack trace as a String.\n//\n// The maximum number of stack frames to be included is specified by\n// the gtest_stack_trace_depth flag.  The skip_count parameter\n// specifies the number of top frames to be skipped, which doesn't\n// count against the number of frames to be included.\n//\n// For example, if Foo() calls Bar(), which in turn calls\n// CurrentOsStackTraceExceptTop(1), Foo() will be included in the\n// trace but Bar() and CurrentOsStackTraceExceptTop() won't.\nString UnitTestImpl::CurrentOsStackTraceExceptTop(int skip_count) {\n  (void)skip_count;\n  return String(\"\");\n}\n\n// Returns the current time in milliseconds.\nTimeInMillis GetTimeInMillis() {\n#if GTEST_OS_WINDOWS_MOBILE || defined(__BORLANDC__)\n  // Difference between 1970-01-01 and 1601-01-01 in milliseconds.\n  // http://analogous.blogspot.com/2005/04/epoch.html\n  const TimeInMillis kJavaEpochToWinFileTimeDelta =\n    static_cast<TimeInMillis>(116444736UL) * 100000UL;\n  const DWORD kTenthMicrosInMilliSecond = 10000;\n\n  SYSTEMTIME now_systime;\n  FILETIME now_filetime;\n  ULARGE_INTEGER now_int64;\n  // TODO(kenton@google.com): Shouldn't this just use\n  //   GetSystemTimeAsFileTime()?\n  GetSystemTime(&now_systime);\n  if (SystemTimeToFileTime(&now_systime, &now_filetime)) {\n    now_int64.LowPart = now_filetime.dwLowDateTime;\n    now_int64.HighPart = now_filetime.dwHighDateTime;\n    now_int64.QuadPart = (now_int64.QuadPart / kTenthMicrosInMilliSecond) -\n      kJavaEpochToWinFileTimeDelta;\n    return now_int64.QuadPart;\n  }\n  return 0;\n#elif GTEST_OS_WINDOWS && !GTEST_HAS_GETTIMEOFDAY_\n  __timeb64 now;\n\n# ifdef _MSC_VER\n\n  // MSVC 8 deprecates _ftime64(), so we want to suppress warning 4996\n  // (deprecated function) there.\n  // TODO(kenton@google.com): Use GetTickCount()?  
Or use\n  //   SystemTimeToFileTime()\n#  pragma warning(push)          // Saves the current warning state.\n#  pragma warning(disable:4996)  // Temporarily disables warning 4996.\n  _ftime64(&now);\n#  pragma warning(pop)           // Restores the warning state.\n# else\n\n  _ftime64(&now);\n\n# endif  // _MSC_VER\n\n  return static_cast<TimeInMillis>(now.time) * 1000 + now.millitm;\n#elif GTEST_HAS_GETTIMEOFDAY_\n  struct timeval now;\n  gettimeofday(&now, NULL);\n  return static_cast<TimeInMillis>(now.tv_sec) * 1000 + now.tv_usec / 1000;\n#else\n# error \"Don't know how to get the current time on your system.\"\n#endif\n}\n\n// Utilities\n\n// class String\n\n// Returns the input enclosed in double quotes if it's not NULL;\n// otherwise returns \"(null)\".  For example, \"\\\"Hello\\\"\" is returned\n// for input \"Hello\".\n//\n// This is useful for printing a C string in the syntax of a literal.\n//\n// Known issue: escape sequences are not handled yet.\nString String::ShowCStringQuoted(const char* c_str) {\n  return c_str ? String::Format(\"\\\"%s\\\"\", c_str) : String(\"(null)\");\n}\n\n// Copies at most length characters from str into a newly-allocated\n// piece of memory of size length+1.  The memory is allocated with new[].\n// A terminating null byte is written to the memory, and a pointer to it\n// is returned.  If str is NULL, NULL is returned.\nstatic char* CloneString(const char* str, size_t length) {\n  if (str == NULL) {\n    return NULL;\n  } else {\n    char* const clone = new char[length + 1];\n    posix::StrNCpy(clone, str, length);\n    clone[length] = '\\0';\n    return clone;\n  }\n}\n\n// Clones a 0-terminated C string, allocating memory using new.  The\n// caller is responsible for deleting[] the return value.  
Returns the\n// cloned string, or NULL if the input is NULL.\nconst char * String::CloneCString(const char* c_str) {\n  return (c_str == NULL) ?\n                    NULL : CloneString(c_str, strlen(c_str));\n}\n\n#if GTEST_OS_WINDOWS_MOBILE\n// Creates a UTF-16 wide string from the given ANSI string, allocating\n// memory using new. The caller is responsible for deleting the return\n// value using delete[]. Returns the wide string, or NULL if the\n// input is NULL.\nLPCWSTR String::AnsiToUtf16(const char* ansi) {\n  if (!ansi) return NULL;\n  const int length = strlen(ansi);\n  const int unicode_length =\n      MultiByteToWideChar(CP_ACP, 0, ansi, length,\n                          NULL, 0);\n  WCHAR* unicode = new WCHAR[unicode_length + 1];\n  MultiByteToWideChar(CP_ACP, 0, ansi, length,\n                      unicode, unicode_length);\n  unicode[unicode_length] = 0;\n  return unicode;\n}\n\n// Creates an ANSI string from the given wide string, allocating\n// memory using new. The caller is responsible for deleting the return\n// value using delete[]. Returns the ANSI string, or NULL if the\n// input is NULL.\nconst char* String::Utf16ToAnsi(LPCWSTR utf16_str)  {\n  if (!utf16_str) return NULL;\n  const int ansi_length =\n      WideCharToMultiByte(CP_ACP, 0, utf16_str, -1,\n                          NULL, 0, NULL, NULL);\n  char* ansi = new char[ansi_length + 1];\n  WideCharToMultiByte(CP_ACP, 0, utf16_str, -1,\n                      ansi, ansi_length, NULL, NULL);\n  ansi[ansi_length] = 0;\n  return ansi;\n}\n\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n// Compares two C strings.  Returns true iff they have the same content.\n//\n// Unlike strcmp(), this function can handle NULL argument(s).  
A NULL\n// C string is considered different to any non-NULL C string,\n// including the empty string.\nbool String::CStringEquals(const char * lhs, const char * rhs) {\n  if ( lhs == NULL ) return rhs == NULL;\n\n  if ( rhs == NULL ) return false;\n\n  return strcmp(lhs, rhs) == 0;\n}\n\n#if GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING\n\n// Converts an array of wide chars to a narrow string using the UTF-8\n// encoding, and streams the result to the given Message object.\nstatic void StreamWideCharsToMessage(const wchar_t* wstr, size_t length,\n                                     Message* msg) {\n  // TODO(wan): consider allowing a testing::String object to\n  // contain '\\0'.  This will make it behave more like std::string,\n  // and will allow ToUtf8String() to return the correct encoding\n  // for '\\0' s.t. we can get rid of the conditional here (and in\n  // several other places).\n  for (size_t i = 0; i != length; ) {  // NOLINT\n    if (wstr[i] != L'\\0') {\n      *msg << WideStringToUtf8(wstr + i, static_cast<int>(length - i));\n      while (i != length && wstr[i] != L'\\0')\n        i++;\n    } else {\n      *msg << '\\0';\n      i++;\n    }\n  }\n}\n\n#endif  // GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING\n\n}  // namespace internal\n\n#if GTEST_HAS_STD_WSTRING\n// Converts the given wide string to a narrow string using the UTF-8\n// encoding, and streams the result to this Message object.\nMessage& Message::operator <<(const ::std::wstring& wstr) {\n  internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this);\n  return *this;\n}\n#endif  // GTEST_HAS_STD_WSTRING\n\n#if GTEST_HAS_GLOBAL_WSTRING\n// Converts the given wide string to a narrow string using the UTF-8\n// encoding, and streams the result to this Message object.\nMessage& Message::operator <<(const ::wstring& wstr) {\n  internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this);\n  return *this;\n}\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n// AssertionResult 
constructors.\n// Used in EXPECT_TRUE/FALSE(assertion_result).\nAssertionResult::AssertionResult(const AssertionResult& other)\n    : success_(other.success_),\n      message_(other.message_.get() != NULL ?\n               new ::std::string(*other.message_) :\n               static_cast< ::std::string*>(NULL)) {\n}\n\n// Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE.\nAssertionResult AssertionResult::operator!() const {\n  AssertionResult negation(!success_);\n  if (message_.get() != NULL)\n    negation << *message_;\n  return negation;\n}\n\n// Makes a successful assertion result.\nAssertionResult AssertionSuccess() {\n  return AssertionResult(true);\n}\n\n// Makes a failed assertion result.\nAssertionResult AssertionFailure() {\n  return AssertionResult(false);\n}\n\n// Makes a failed assertion result with the given failure message.\n// Deprecated; use AssertionFailure() << message.\nAssertionResult AssertionFailure(const Message& message) {\n  return AssertionFailure() << message;\n}\n\nnamespace internal {\n\n// Constructs and returns the message for an equality assertion\n// (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure.\n//\n// The first four parameters are the expressions used in the assertion\n// and their values, as strings.  For example, for ASSERT_EQ(foo, bar)\n// where foo is 5 and bar is 6, we have:\n//\n//   expected_expression: \"foo\"\n//   actual_expression:   \"bar\"\n//   expected_value:      \"5\"\n//   actual_value:        \"6\"\n//\n// The ignoring_case parameter is true iff the assertion is a\n// *_STRCASEEQ*.  
When it's true, the string \" (ignoring case)\" will\n// be inserted into the message.\nAssertionResult EqFailure(const char* expected_expression,\n                          const char* actual_expression,\n                          const String& expected_value,\n                          const String& actual_value,\n                          bool ignoring_case) {\n  Message msg;\n  msg << \"Value of: \" << actual_expression;\n  if (actual_value != actual_expression) {\n    msg << \"\\n  Actual: \" << actual_value;\n  }\n\n  msg << \"\\nExpected: \" << expected_expression;\n  if (ignoring_case) {\n    msg << \" (ignoring case)\";\n  }\n  if (expected_value != expected_expression) {\n    msg << \"\\nWhich is: \" << expected_value;\n  }\n\n  return AssertionFailure() << msg;\n}\n\n// Constructs a failure message for Boolean assertions such as EXPECT_TRUE.\nString GetBoolAssertionFailureMessage(const AssertionResult& assertion_result,\n                                      const char* expression_text,\n                                      const char* actual_predicate_value,\n                                      const char* expected_predicate_value) {\n  const char* actual_message = assertion_result.message();\n  Message msg;\n  msg << \"Value of: \" << expression_text\n      << \"\\n  Actual: \" << actual_predicate_value;\n  if (actual_message[0] != '\\0')\n    msg << \" (\" << actual_message << \")\";\n  msg << \"\\nExpected: \" << expected_predicate_value;\n  return msg.GetString();\n}\n\n// Helper function for implementing ASSERT_NEAR.\nAssertionResult DoubleNearPredFormat(const char* expr1,\n                                     const char* expr2,\n                                     const char* abs_error_expr,\n                                     double val1,\n                                     double val2,\n                                     double abs_error) {\n  const double diff = fabs(val1 - val2);\n  if (diff <= abs_error) return 
AssertionSuccess();\n\n  // TODO(wan): do not print the value of an expression if it's\n  // already a literal.\n  return AssertionFailure()\n      << \"The difference between \" << expr1 << \" and \" << expr2\n      << \" is \" << diff << \", which exceeds \" << abs_error_expr << \", where\\n\"\n      << expr1 << \" evaluates to \" << val1 << \",\\n\"\n      << expr2 << \" evaluates to \" << val2 << \", and\\n\"\n      << abs_error_expr << \" evaluates to \" << abs_error << \".\";\n}\n\n\n// Helper template for implementing FloatLE() and DoubleLE().\ntemplate <typename RawType>\nAssertionResult FloatingPointLE(const char* expr1,\n                                const char* expr2,\n                                RawType val1,\n                                RawType val2) {\n  // Returns success if val1 is less than val2,\n  if (val1 < val2) {\n    return AssertionSuccess();\n  }\n\n  // or if val1 is almost equal to val2.\n  const FloatingPoint<RawType> lhs(val1), rhs(val2);\n  if (lhs.AlmostEquals(rhs)) {\n    return AssertionSuccess();\n  }\n\n  // Note that the above two checks will both fail if either val1 or\n  // val2 is NaN, as the IEEE floating-point standard requires that\n  // any predicate involving a NaN must return false.\n\n  ::std::stringstream val1_ss;\n  val1_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)\n          << val1;\n\n  ::std::stringstream val2_ss;\n  val2_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)\n          << val2;\n\n  return AssertionFailure()\n      << \"Expected: (\" << expr1 << \") <= (\" << expr2 << \")\\n\"\n      << \"  Actual: \" << StringStreamToString(&val1_ss) << \" vs \"\n      << StringStreamToString(&val2_ss);\n}\n\n}  // namespace internal\n\n// Asserts that val1 is less than, or almost equal to, val2.  Fails\n// otherwise.  
In particular, it fails if either val1 or val2 is NaN.\nAssertionResult FloatLE(const char* expr1, const char* expr2,\n                        float val1, float val2) {\n  return internal::FloatingPointLE<float>(expr1, expr2, val1, val2);\n}\n\n// Asserts that val1 is less than, or almost equal to, val2.  Fails\n// otherwise.  In particular, it fails if either val1 or val2 is NaN.\nAssertionResult DoubleLE(const char* expr1, const char* expr2,\n                         double val1, double val2) {\n  return internal::FloatingPointLE<double>(expr1, expr2, val1, val2);\n}\n\nnamespace internal {\n\n// The helper function for {ASSERT|EXPECT}_EQ with int or enum\n// arguments.\nAssertionResult CmpHelperEQ(const char* expected_expression,\n                            const char* actual_expression,\n                            BiggestInt expected,\n                            BiggestInt actual) {\n  if (expected == actual) {\n    return AssertionSuccess();\n  }\n\n  return EqFailure(expected_expression,\n                   actual_expression,\n                   FormatForComparisonFailureMessage(expected, actual),\n                   FormatForComparisonFailureMessage(actual, expected),\n                   false);\n}\n\n// A macro for implementing the helper functions needed to implement\n// ASSERT_?? and EXPECT_?? with integer or enum arguments.  
It is here\n// just to avoid copy-and-paste of similar code.\n#define GTEST_IMPL_CMP_HELPER_(op_name, op)\\\nAssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \\\n                                   BiggestInt val1, BiggestInt val2) {\\\n  if (val1 op val2) {\\\n    return AssertionSuccess();\\\n  } else {\\\n    return AssertionFailure() \\\n        << \"Expected: (\" << expr1 << \") \" #op \" (\" << expr2\\\n        << \"), actual: \" << FormatForComparisonFailureMessage(val1, val2)\\\n        << \" vs \" << FormatForComparisonFailureMessage(val2, val1);\\\n  }\\\n}\n\n// Implements the helper function for {ASSERT|EXPECT}_NE with int or\n// enum arguments.\nGTEST_IMPL_CMP_HELPER_(NE, !=)\n// Implements the helper function for {ASSERT|EXPECT}_LE with int or\n// enum arguments.\nGTEST_IMPL_CMP_HELPER_(LE, <=)\n// Implements the helper function for {ASSERT|EXPECT}_LT with int or\n// enum arguments.\nGTEST_IMPL_CMP_HELPER_(LT, < )\n// Implements the helper function for {ASSERT|EXPECT}_GE with int or\n// enum arguments.\nGTEST_IMPL_CMP_HELPER_(GE, >=)\n// Implements the helper function for {ASSERT|EXPECT}_GT with int or\n// enum arguments.\nGTEST_IMPL_CMP_HELPER_(GT, > )\n\n#undef GTEST_IMPL_CMP_HELPER_\n\n// The helper function for {ASSERT|EXPECT}_STREQ.\nAssertionResult CmpHelperSTREQ(const char* expected_expression,\n                               const char* actual_expression,\n                               const char* expected,\n                               const char* actual) {\n  if (String::CStringEquals(expected, actual)) {\n    return AssertionSuccess();\n  }\n\n  return EqFailure(expected_expression,\n                   actual_expression,\n                   String::ShowCStringQuoted(expected),\n                   String::ShowCStringQuoted(actual),\n                   false);\n}\n\n// The helper function for {ASSERT|EXPECT}_STRCASEEQ.\nAssertionResult CmpHelperSTRCASEEQ(const char* expected_expression,\n                              
     const char* actual_expression,\n                                   const char* expected,\n                                   const char* actual) {\n  if (String::CaseInsensitiveCStringEquals(expected, actual)) {\n    return AssertionSuccess();\n  }\n\n  return EqFailure(expected_expression,\n                   actual_expression,\n                   String::ShowCStringQuoted(expected),\n                   String::ShowCStringQuoted(actual),\n                   true);\n}\n\n// The helper function for {ASSERT|EXPECT}_STRNE.\nAssertionResult CmpHelperSTRNE(const char* s1_expression,\n                               const char* s2_expression,\n                               const char* s1,\n                               const char* s2) {\n  if (!String::CStringEquals(s1, s2)) {\n    return AssertionSuccess();\n  } else {\n    return AssertionFailure() << \"Expected: (\" << s1_expression << \") != (\"\n                              << s2_expression << \"), actual: \\\"\"\n                              << s1 << \"\\\" vs \\\"\" << s2 << \"\\\"\";\n  }\n}\n\n// The helper function for {ASSERT|EXPECT}_STRCASENE.\nAssertionResult CmpHelperSTRCASENE(const char* s1_expression,\n                                   const char* s2_expression,\n                                   const char* s1,\n                                   const char* s2) {\n  if (!String::CaseInsensitiveCStringEquals(s1, s2)) {\n    return AssertionSuccess();\n  } else {\n    return AssertionFailure()\n        << \"Expected: (\" << s1_expression << \") != (\"\n        << s2_expression << \") (ignoring case), actual: \\\"\"\n        << s1 << \"\\\" vs \\\"\" << s2 << \"\\\"\";\n  }\n}\n\n}  // namespace internal\n\nnamespace {\n\n// Helper functions for implementing IsSubString() and IsNotSubstring().\n\n// This group of overloaded functions return true iff needle is a\n// substring of haystack.  
NULL is considered a substring of itself\n// only.\n\nbool IsSubstringPred(const char* needle, const char* haystack) {\n  if (needle == NULL || haystack == NULL)\n    return needle == haystack;\n\n  return strstr(haystack, needle) != NULL;\n}\n\nbool IsSubstringPred(const wchar_t* needle, const wchar_t* haystack) {\n  if (needle == NULL || haystack == NULL)\n    return needle == haystack;\n\n  return wcsstr(haystack, needle) != NULL;\n}\n\n// StringType here can be either ::std::string or ::std::wstring.\ntemplate <typename StringType>\nbool IsSubstringPred(const StringType& needle,\n                     const StringType& haystack) {\n  return haystack.find(needle) != StringType::npos;\n}\n\n// This function implements either IsSubstring() or IsNotSubstring(),\n// depending on the value of the expected_to_be_substring parameter.\n// StringType here can be const char*, const wchar_t*, ::std::string,\n// or ::std::wstring.\ntemplate <typename StringType>\nAssertionResult IsSubstringImpl(\n    bool expected_to_be_substring,\n    const char* needle_expr, const char* haystack_expr,\n    const StringType& needle, const StringType& haystack) {\n  if (IsSubstringPred(needle, haystack) == expected_to_be_substring)\n    return AssertionSuccess();\n\n  const bool is_wide_string = sizeof(needle[0]) > 1;\n  const char* const begin_string_quote = is_wide_string ? \"L\\\"\" : \"\\\"\";\n  return AssertionFailure()\n      << \"Value of: \" << needle_expr << \"\\n\"\n      << \"  Actual: \" << begin_string_quote << needle << \"\\\"\\n\"\n      << \"Expected: \" << (expected_to_be_substring ? 
\"\" : \"not \")\n      << \"a substring of \" << haystack_expr << \"\\n\"\n      << \"Which is: \" << begin_string_quote << haystack << \"\\\"\";\n}\n\n}  // namespace\n\n// IsSubstring() and IsNotSubstring() check whether needle is a\n// substring of haystack (NULL is considered a substring of itself\n// only), and return an appropriate error message when they fail.\n\nAssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const char* needle, const char* haystack) {\n  return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack);\n}\n\nAssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const wchar_t* needle, const wchar_t* haystack) {\n  return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack);\n}\n\nAssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const char* needle, const char* haystack) {\n  return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack);\n}\n\nAssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const wchar_t* needle, const wchar_t* haystack) {\n  return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack);\n}\n\nAssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::string& needle, const ::std::string& haystack) {\n  return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack);\n}\n\nAssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::string& needle, const ::std::string& haystack) {\n  return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack);\n}\n\n#if GTEST_HAS_STD_WSTRING\nAssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::wstring& needle, const ::std::wstring& haystack) {\n  return IsSubstringImpl(true, needle_expr, haystack_expr, 
needle, haystack);\n}\n\nAssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::wstring& needle, const ::std::wstring& haystack) {\n  return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack);\n}\n#endif  // GTEST_HAS_STD_WSTRING\n\nnamespace internal {\n\n#if GTEST_OS_WINDOWS\n\nnamespace {\n\n// Helper function for IsHRESULT{SuccessFailure} predicates\nAssertionResult HRESULTFailureHelper(const char* expr,\n                                     const char* expected,\n                                     long hr) {  // NOLINT\n# if GTEST_OS_WINDOWS_MOBILE\n\n  // Windows CE doesn't support FormatMessage.\n  const char error_text[] = \"\";\n\n# else\n\n  // Looks up the human-readable system message for the HRESULT code.\n  // Since we're not passing any parameters to FormatMessage, we don't\n  // want inserts expanded.\n  const DWORD kFlags = FORMAT_MESSAGE_FROM_SYSTEM |\n                       FORMAT_MESSAGE_IGNORE_INSERTS;\n  const DWORD kBufSize = 4096;  // String::Format can't exceed this length.\n  // Gets the system's human-readable message string for this HRESULT.\n  char error_text[kBufSize] = { '\\0' };\n  DWORD message_length = ::FormatMessageA(kFlags,\n                                          0,  // no source, we're asking system\n                                          hr,  // the error\n                                          0,  // no line width restrictions\n                                          error_text,  // output buffer\n                                          kBufSize,  // buf size\n                                          NULL);  // no arguments for inserts\n  // Trims trailing whitespace (FormatMessage leaves a trailing CR-LF)\n  for (; message_length && IsSpace(error_text[message_length - 1]);\n          --message_length) {\n    error_text[message_length - 1] = '\\0';\n  }\n\n# endif  // GTEST_OS_WINDOWS_MOBILE\n\n  const String error_hex(String::Format(\"0x%08X 
\", hr));\n  return ::testing::AssertionFailure()\n      << \"Expected: \" << expr << \" \" << expected << \".\\n\"\n      << \"  Actual: \" << error_hex << error_text << \"\\n\";\n}\n\n}  // namespace\n\nAssertionResult IsHRESULTSuccess(const char* expr, long hr) {  // NOLINT\n  if (SUCCEEDED(hr)) {\n    return AssertionSuccess();\n  }\n  return HRESULTFailureHelper(expr, \"succeeds\", hr);\n}\n\nAssertionResult IsHRESULTFailure(const char* expr, long hr) {  // NOLINT\n  if (FAILED(hr)) {\n    return AssertionSuccess();\n  }\n  return HRESULTFailureHelper(expr, \"fails\", hr);\n}\n\n#endif  // GTEST_OS_WINDOWS\n\n// Utility functions for encoding Unicode text (wide strings) in\n// UTF-8.\n\n// A Unicode code-point can have upto 21 bits, and is encoded in UTF-8\n// like this:\n//\n// Code-point length   Encoding\n//   0 -  7 bits       0xxxxxxx\n//   8 - 11 bits       110xxxxx 10xxxxxx\n//  12 - 16 bits       1110xxxx 10xxxxxx 10xxxxxx\n//  17 - 21 bits       11110xxx 10xxxxxx 10xxxxxx 10xxxxxx\n\n// The maximum code-point a one-byte UTF-8 sequence can represent.\nconst UInt32 kMaxCodePoint1 = (static_cast<UInt32>(1) <<  7) - 1;\n\n// The maximum code-point a two-byte UTF-8 sequence can represent.\nconst UInt32 kMaxCodePoint2 = (static_cast<UInt32>(1) << (5 + 6)) - 1;\n\n// The maximum code-point a three-byte UTF-8 sequence can represent.\nconst UInt32 kMaxCodePoint3 = (static_cast<UInt32>(1) << (4 + 2*6)) - 1;\n\n// The maximum code-point a four-byte UTF-8 sequence can represent.\nconst UInt32 kMaxCodePoint4 = (static_cast<UInt32>(1) << (3 + 3*6)) - 1;\n\n// Chops off the n lowest bits from a bit pattern.  Returns the n\n// lowest bits.  
As a side effect, the original bit pattern will be\n// shifted to the right by n bits.\ninline UInt32 ChopLowBits(UInt32* bits, int n) {\n  const UInt32 low_bits = *bits & ((static_cast<UInt32>(1) << n) - 1);\n  *bits >>= n;\n  return low_bits;\n}\n\n// Converts a Unicode code point to a narrow string in UTF-8 encoding.\n// code_point parameter is of type UInt32 because wchar_t may not be\n// wide enough to contain a code point.\n// The output buffer str must contain at least 32 characters.\n// The function returns the address of the output buffer.\n// If the code_point is not a valid Unicode code point\n// (i.e. outside of Unicode range U+0 to U+10FFFF) it will be output\n// as '(Invalid Unicode 0xXXXXXXXX)'.\nchar* CodePointToUtf8(UInt32 code_point, char* str) {\n  if (code_point <= kMaxCodePoint1) {\n    str[1] = '\\0';\n    str[0] = static_cast<char>(code_point);                          // 0xxxxxxx\n  } else if (code_point <= kMaxCodePoint2) {\n    str[2] = '\\0';\n    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[0] = static_cast<char>(0xC0 | code_point);                   // 110xxxxx\n  } else if (code_point <= kMaxCodePoint3) {\n    str[3] = '\\0';\n    str[2] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[0] = static_cast<char>(0xE0 | code_point);                   // 1110xxxx\n  } else if (code_point <= kMaxCodePoint4) {\n    str[4] = '\\0';\n    str[3] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[2] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx\n    str[0] = static_cast<char>(0xF0 | code_point);                   // 11110xxx\n  } else {\n    // The longest string String::Format can produce when invoked\n    // with these parameters is 28 characters long (not 
including\n    // the terminating nul character). We are asking for a 32-character\n    // buffer just in case. This is also enough for strncpy to\n    // null-terminate the destination string.\n    posix::StrNCpy(\n        str, String::Format(\"(Invalid Unicode 0x%X)\", code_point).c_str(), 32);\n    str[31] = '\\0';  // Makes sure no change in the format to strncpy leaves\n                     // the result unterminated.\n  }\n  return str;\n}\n\n// The following two functions only make sense if the system\n// uses UTF-16 for wide string encoding. All supported systems\n// with 16-bit wchar_t (Windows, Cygwin, Symbian OS) do use UTF-16.\n\n// Determines if the arguments constitute a UTF-16 surrogate pair\n// and thus should be combined into a single Unicode code point\n// using CreateCodePointFromUtf16SurrogatePair.\ninline bool IsUtf16SurrogatePair(wchar_t first, wchar_t second) {\n  return sizeof(wchar_t) == 2 &&\n      (first & 0xFC00) == 0xD800 && (second & 0xFC00) == 0xDC00;\n}\n\n// Creates a Unicode code point from a UTF-16 surrogate pair.\ninline UInt32 CreateCodePointFromUtf16SurrogatePair(wchar_t first,\n                                                    wchar_t second) {\n  const UInt32 mask = (1 << 10) - 1;\n  return (sizeof(wchar_t) == 2) ?\n      (((first & mask) << 10) | (second & mask)) + 0x10000 :\n      // This function should not be called when the condition is\n      // false, but we provide a sensible default in case it is.\n      static_cast<UInt32>(first);\n}\n\n// Converts a wide string to a narrow string in UTF-8 encoding.\n// The wide string is assumed to have the following encoding:\n//   UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS)\n//   UTF-32 if sizeof(wchar_t) == 4 (on Linux)\n// Parameter str points to a null-terminated wide string.\n// Parameter num_chars may additionally limit the number\n// of wchar_t characters processed. 
-1 is used when the entire string\n// should be processed.\n// If the string contains code points that are not valid Unicode code points\n// (i.e. outside of Unicode range U+0 to U+10FFFF) they will be output\n// as '(Invalid Unicode 0xXXXXXXXX)'. If the string is in UTF-16 encoding\n// and contains invalid UTF-16 surrogate pairs, values in those pairs\n// will be encoded as individual Unicode characters from the Basic\n// Multilingual Plane.\nString WideStringToUtf8(const wchar_t* str, int num_chars) {\n  if (num_chars == -1)\n    num_chars = static_cast<int>(wcslen(str));\n\n  ::std::stringstream stream;\n  for (int i = 0; i < num_chars; ++i) {\n    UInt32 unicode_code_point;\n\n    if (str[i] == L'\\0') {\n      break;\n    } else if (i + 1 < num_chars && IsUtf16SurrogatePair(str[i], str[i + 1])) {\n      unicode_code_point = CreateCodePointFromUtf16SurrogatePair(str[i],\n                                                                 str[i + 1]);\n      i++;\n    } else {\n      unicode_code_point = static_cast<UInt32>(str[i]);\n    }\n\n    char buffer[32];  // CodePointToUtf8 requires a buffer this big.\n    stream << CodePointToUtf8(unicode_code_point, buffer);\n  }\n  return StringStreamToString(&stream);\n}\n\n// Converts a wide C string to a String using the UTF-8 encoding.\n// NULL will be converted to \"(null)\".\nString String::ShowWideCString(const wchar_t * wide_c_str) {\n  if (wide_c_str == NULL) return String(\"(null)\");\n\n  return String(internal::WideStringToUtf8(wide_c_str, -1).c_str());\n}\n\n// Similar to ShowWideCString(), except that this function encloses\n// the converted string in double quotes.\nString String::ShowWideCStringQuoted(const wchar_t* wide_c_str) {\n  if (wide_c_str == NULL) return String(\"(null)\");\n\n  return String::Format(\"L\\\"%s\\\"\",\n                        String::ShowWideCString(wide_c_str).c_str());\n}\n\n// Compares two wide C strings.  
Returns true iff they have the same\n// content.\n//\n// Unlike wcscmp(), this function can handle NULL argument(s).  A NULL\n// C string is considered different to any non-NULL C string,\n// including the empty string.\nbool String::WideCStringEquals(const wchar_t * lhs, const wchar_t * rhs) {\n  if (lhs == NULL) return rhs == NULL;\n\n  if (rhs == NULL) return false;\n\n  return wcscmp(lhs, rhs) == 0;\n}\n\n// Helper function for *_STREQ on wide strings.\nAssertionResult CmpHelperSTREQ(const char* expected_expression,\n                               const char* actual_expression,\n                               const wchar_t* expected,\n                               const wchar_t* actual) {\n  if (String::WideCStringEquals(expected, actual)) {\n    return AssertionSuccess();\n  }\n\n  return EqFailure(expected_expression,\n                   actual_expression,\n                   String::ShowWideCStringQuoted(expected),\n                   String::ShowWideCStringQuoted(actual),\n                   false);\n}\n\n// Helper function for *_STRNE on wide strings.\nAssertionResult CmpHelperSTRNE(const char* s1_expression,\n                               const char* s2_expression,\n                               const wchar_t* s1,\n                               const wchar_t* s2) {\n  if (!String::WideCStringEquals(s1, s2)) {\n    return AssertionSuccess();\n  }\n\n  return AssertionFailure() << \"Expected: (\" << s1_expression << \") != (\"\n                            << s2_expression << \"), actual: \"\n                            << String::ShowWideCStringQuoted(s1)\n                            << \" vs \" << String::ShowWideCStringQuoted(s2);\n}\n\n// Compares two C strings, ignoring case.  Returns true iff they have\n// the same content.\n//\n// Unlike strcasecmp(), this function can handle NULL argument(s).  
A\n// NULL C string is considered different to any non-NULL C string,\n// including the empty string.\nbool String::CaseInsensitiveCStringEquals(const char * lhs, const char * rhs) {\n  if (lhs == NULL)\n    return rhs == NULL;\n  if (rhs == NULL)\n    return false;\n  return posix::StrCaseCmp(lhs, rhs) == 0;\n}\n\n// Compares two wide C strings, ignoring case.  Returns true iff they\n// have the same content.\n//\n// Unlike wcscasecmp(), this function can handle NULL argument(s).\n// A NULL C string is considered different to any non-NULL wide C string,\n// including the empty string.\n// NB: The implementations on different platforms differ slightly.\n// On Windows, this method uses _wcsicmp, which compares according to the\n// LC_CTYPE environment variable. On GNU platforms this method uses\n// wcscasecmp, which compares according to the LC_CTYPE category of the\n// current locale. On Mac OS X, it uses towlower, which also uses the\n// LC_CTYPE category of the current locale.\nbool String::CaseInsensitiveWideCStringEquals(const wchar_t* lhs,\n                                              const wchar_t* rhs) {\n  if (lhs == NULL) return rhs == NULL;\n\n  if (rhs == NULL) return false;\n\n#if GTEST_OS_WINDOWS\n  return _wcsicmp(lhs, rhs) == 0;\n#elif GTEST_OS_LINUX && !GTEST_OS_LINUX_ANDROID\n  return wcscasecmp(lhs, rhs) == 0;\n#else\n  // Android, Mac OS X and Cygwin don't define wcscasecmp.\n  // Other unknown OSes may not define it either.\n  wint_t left, right;\n  do {\n    left = towlower(*lhs++);\n    right = towlower(*rhs++);\n  } while (left && left == right);\n  return left == right;\n#endif  // OS selector\n}\n\n// Compares this with another String.\n// Returns < 0 if this is less than rhs, 0 if this is equal to rhs, or > 0\n// if this is greater than rhs.\nint String::Compare(const String & rhs) const {\n  const char* const lhs_c_str = c_str();\n  const char* const rhs_c_str = rhs.c_str();\n\n  if (lhs_c_str == NULL) {\n    return rhs_c_str == 
NULL ? 0 : -1;  // NULL < anything except NULL\n  } else if (rhs_c_str == NULL) {\n    return 1;\n  }\n\n  const size_t shorter_str_len =\n      length() <= rhs.length() ? length() : rhs.length();\n  for (size_t i = 0; i != shorter_str_len; i++) {\n    if (lhs_c_str[i] < rhs_c_str[i]) {\n      return -1;\n    } else if (lhs_c_str[i] > rhs_c_str[i]) {\n      return 1;\n    }\n  }\n  return (length() < rhs.length()) ? -1 :\n      (length() > rhs.length()) ? 1 : 0;\n}\n\n// Returns true iff this String ends with the given suffix.  *Any*\n// String is considered to end with a NULL or empty suffix.\nbool String::EndsWith(const char* suffix) const {\n  if (suffix == NULL || CStringEquals(suffix, \"\")) return true;\n\n  if (c_str() == NULL) return false;\n\n  const size_t this_len = strlen(c_str());\n  const size_t suffix_len = strlen(suffix);\n  return (this_len >= suffix_len) &&\n         CStringEquals(c_str() + this_len - suffix_len, suffix);\n}\n\n// Returns true iff this String ends with the given suffix, ignoring case.\n// Any String is considered to end with a NULL or empty suffix.\nbool String::EndsWithCaseInsensitive(const char* suffix) const {\n  if (suffix == NULL || CStringEquals(suffix, \"\")) return true;\n\n  if (c_str() == NULL) return false;\n\n  const size_t this_len = strlen(c_str());\n  const size_t suffix_len = strlen(suffix);\n  return (this_len >= suffix_len) &&\n         CaseInsensitiveCStringEquals(c_str() + this_len - suffix_len, suffix);\n}\n\n// Formats a list of arguments to a String, using the same format\n// spec string as for printf.\n//\n// We do not use the StringPrintf class as it is not universally\n// available.\n//\n// The result is limited to 4096 characters (including the trailing 0).\n// If 4096 characters are not enough to format the input, or if\n// there's an error, \"<formatting error or buffer exceeded>\" is\n// returned.\nString String::Format(const char * format, ...) 
{\n  va_list args;\n  va_start(args, format);\n\n  char buffer[4096];\n  const int kBufferSize = sizeof(buffer)/sizeof(buffer[0]);\n\n  // MSVC 8 deprecates vsnprintf(), so we want to suppress warning\n  // 4996 (deprecated function) there.\n#ifdef _MSC_VER  // We are using MSVC.\n# pragma warning(push)          // Saves the current warning state.\n# pragma warning(disable:4996)  // Temporarily disables warning 4996.\n\n  const int size = vsnprintf(buffer, kBufferSize, format, args);\n\n# pragma warning(pop)           // Restores the warning state.\n#else  // We are not using MSVC.\n  const int size = vsnprintf(buffer, kBufferSize, format, args);\n#endif  // _MSC_VER\n  va_end(args);\n\n  // vsnprintf()'s behavior is not portable.  When the buffer is not\n  // big enough, it returns a negative value in MSVC, and returns the\n  // needed buffer size on Linux.  When there is an output error, it\n  // always returns a negative value.  For simplicity, we lump the two\n  // error cases together.\n  if (size < 0 || size >= kBufferSize) {\n    return String(\"<formatting error or buffer exceeded>\");\n  } else {\n    return String(buffer, size);\n  }\n}\n\n// Converts the buffer in a stringstream to a String, converting NUL\n// bytes to \"\\\\0\" along the way.\nString StringStreamToString(::std::stringstream* ss) {\n  const ::std::string& str = ss->str();\n  const char* const start = str.c_str();\n  const char* const end = start + str.length();\n\n  // We need to use a helper stringstream to do this transformation\n  // because String doesn't support push_back().\n  ::std::stringstream helper;\n  for (const char* ch = start; ch != end; ++ch) {\n    if (*ch == '\\0') {\n      helper << \"\\\\0\";  // Replaces NUL with \"\\\\0\";\n    } else {\n      helper.put(*ch);\n    }\n  }\n\n  return String(helper.str().c_str());\n}\n\n// Appends the user-supplied message to the Google-Test-generated message.\nString AppendUserMessage(const String& gtest_msg,\n                       
  const Message& user_msg) {\n  // Appends the user message if it's non-empty.\n  const String user_msg_string = user_msg.GetString();\n  if (user_msg_string.empty()) {\n    return gtest_msg;\n  }\n\n  Message msg;\n  msg << gtest_msg << \"\\n\" << user_msg_string;\n\n  return msg.GetString();\n}\n\n}  // namespace internal\n\n// class TestResult\n\n// Creates an empty TestResult.\nTestResult::TestResult()\n    : death_test_count_(0),\n      elapsed_time_(0) {\n}\n\n// D'tor.\nTestResult::~TestResult() {\n}\n\n// Returns the i-th test part result among all the results. i can\n// range from 0 to total_part_count() - 1. If i is not in that range,\n// aborts the program.\nconst TestPartResult& TestResult::GetTestPartResult(int i) const {\n  if (i < 0 || i >= total_part_count())\n    internal::posix::Abort();\n  return test_part_results_.at(i);\n}\n\n// Returns the i-th test property. i can range from 0 to\n// test_property_count() - 1. If i is not in that range, aborts the\n// program.\nconst TestProperty& TestResult::GetTestProperty(int i) const {\n  if (i < 0 || i >= test_property_count())\n    internal::posix::Abort();\n  return test_properties_.at(i);\n}\n\n// Clears the test part results.\nvoid TestResult::ClearTestPartResults() {\n  test_part_results_.clear();\n}\n\n// Adds a test part result to the list.\nvoid TestResult::AddTestPartResult(const TestPartResult& test_part_result) {\n  test_part_results_.push_back(test_part_result);\n}\n\n// Adds a test property to the list. 
If a property with the same key as the\n// supplied property is already represented, the value of this test_property\n// replaces the old value for that key.\nvoid TestResult::RecordProperty(const TestProperty& test_property) {\n  if (!ValidateTestProperty(test_property)) {\n    return;\n  }\n  internal::MutexLock lock(&test_properites_mutex_);\n  const std::vector<TestProperty>::iterator property_with_matching_key =\n      std::find_if(test_properties_.begin(), test_properties_.end(),\n                   internal::TestPropertyKeyIs(test_property.key()));\n  if (property_with_matching_key == test_properties_.end()) {\n    test_properties_.push_back(test_property);\n    return;\n  }\n  property_with_matching_key->SetValue(test_property.value());\n}\n\n// Adds a failure if the key is a reserved attribute of Google Test\n// testcase tags.  Returns true if the property is valid.\nbool TestResult::ValidateTestProperty(const TestProperty& test_property) {\n  internal::String key(test_property.key());\n  if (key == \"name\" || key == \"status\" || key == \"time\" || key == \"classname\") {\n    ADD_FAILURE()\n        << \"Reserved key used in RecordProperty(): \"\n        << key\n        << \" ('name', 'status', 'time', and 'classname' are reserved by \"\n        << GTEST_NAME_ << \")\";\n    return false;\n  }\n  return true;\n}\n\n// Clears the object.\nvoid TestResult::Clear() {\n  test_part_results_.clear();\n  test_properties_.clear();\n  death_test_count_ = 0;\n  elapsed_time_ = 0;\n}\n\n// Returns true iff the test failed.\nbool TestResult::Failed() const {\n  for (int i = 0; i < total_part_count(); ++i) {\n    if (GetTestPartResult(i).failed())\n      return true;\n  }\n  return false;\n}\n\n// Returns true iff the test part fatally failed.\nstatic bool TestPartFatallyFailed(const TestPartResult& result) {\n  return result.fatally_failed();\n}\n\n// Returns true iff the test fatally failed.\nbool TestResult::HasFatalFailure() const {\n  return 
CountIf(test_part_results_, TestPartFatallyFailed) > 0;\n}\n\n// Returns true iff the test part non-fatally failed.\nstatic bool TestPartNonfatallyFailed(const TestPartResult& result) {\n  return result.nonfatally_failed();\n}\n\n// Returns true iff the test has a non-fatal failure.\nbool TestResult::HasNonfatalFailure() const {\n  return CountIf(test_part_results_, TestPartNonfatallyFailed) > 0;\n}\n\n// Gets the number of all test parts.  This is the sum of the number\n// of successful test parts and the number of failed test parts.\nint TestResult::total_part_count() const {\n  return static_cast<int>(test_part_results_.size());\n}\n\n// Returns the number of the test properties.\nint TestResult::test_property_count() const {\n  return static_cast<int>(test_properties_.size());\n}\n\n// class Test\n\n// Creates a Test object.\n\n// The c'tor saves the values of all Google Test flags.\nTest::Test()\n    : gtest_flag_saver_(new internal::GTestFlagSaver) {\n}\n\n// The d'tor restores the values of all Google Test flags.\nTest::~Test() {\n  delete gtest_flag_saver_;\n}\n\n// Sets up the test fixture.\n//\n// A sub-class may override this.\nvoid Test::SetUp() {\n}\n\n// Tears down the test fixture.\n//\n// A sub-class may override this.\nvoid Test::TearDown() {\n}\n\n// Allows user supplied key value pairs to be recorded for later output.\nvoid Test::RecordProperty(const char* key, const char* value) {\n  UnitTest::GetInstance()->RecordPropertyForCurrentTest(key, value);\n}\n\n// Allows user supplied key value pairs to be recorded for later output.\nvoid Test::RecordProperty(const char* key, int value) {\n  Message value_message;\n  value_message << value;\n  RecordProperty(key, value_message.GetString().c_str());\n}\n\nnamespace internal {\n\nvoid ReportFailureInUnknownLocation(TestPartResult::Type result_type,\n                                    const String& message) {\n  // This function is a friend of UnitTest and as such has access to\n  // 
AddTestPartResult.\n  UnitTest::GetInstance()->AddTestPartResult(\n      result_type,\n      NULL,  // No info about the source file where the exception occurred.\n      -1,    // We have no info on which line caused the exception.\n      message,\n      String());  // No stack trace, either.\n}\n\n}  // namespace internal\n\n// Google Test requires all tests in the same test case to use the same test\n// fixture class.  This function checks if the current test has the\n// same fixture class as the first test in the current test case.  If\n// yes, it returns true; otherwise it generates a Google Test failure and\n// returns false.\nbool Test::HasSameFixtureClass() {\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  const TestCase* const test_case = impl->current_test_case();\n\n  // Info about the first test in the current test case.\n  const TestInfo* const first_test_info = test_case->test_info_list()[0];\n  const internal::TypeId first_fixture_id = first_test_info->fixture_class_id_;\n  const char* const first_test_name = first_test_info->name();\n\n  // Info about the current test.\n  const TestInfo* const this_test_info = impl->current_test_info();\n  const internal::TypeId this_fixture_id = this_test_info->fixture_class_id_;\n  const char* const this_test_name = this_test_info->name();\n\n  if (this_fixture_id != first_fixture_id) {\n    // Is the first test defined using TEST?\n    const bool first_is_TEST = first_fixture_id == internal::GetTestTypeId();\n    // Is this test defined using TEST?\n    const bool this_is_TEST = this_fixture_id == internal::GetTestTypeId();\n\n    if (first_is_TEST || this_is_TEST) {\n      // The user mixed TEST and TEST_F in this test case - we'll tell\n      // him/her how to fix it.\n\n      // Gets the name of the TEST and the name of the TEST_F.  
Note\n      // that first_is_TEST and this_is_TEST cannot both be true, as\n      // the fixture IDs are different for the two tests.\n      const char* const TEST_name =\n          first_is_TEST ? first_test_name : this_test_name;\n      const char* const TEST_F_name =\n          first_is_TEST ? this_test_name : first_test_name;\n\n      ADD_FAILURE()\n          << \"All tests in the same test case must use the same test fixture\\n\"\n          << \"class, so mixing TEST_F and TEST in the same test case is\\n\"\n          << \"illegal.  In test case \" << this_test_info->test_case_name()\n          << \",\\n\"\n          << \"test \" << TEST_F_name << \" is defined using TEST_F but\\n\"\n          << \"test \" << TEST_name << \" is defined using TEST.  You probably\\n\"\n          << \"want to change the TEST to TEST_F or move it to another test\\n\"\n          << \"case.\";\n    } else {\n      // The user defined two fixture classes with the same name in\n      // two namespaces - we'll tell him/her how to fix it.\n      ADD_FAILURE()\n          << \"All tests in the same test case must use the same test fixture\\n\"\n          << \"class.  However, in test case \"\n          << this_test_info->test_case_name() << \",\\n\"\n          << \"you defined test \" << first_test_name\n          << \" and test \" << this_test_name << \"\\n\"\n          << \"using two different test fixture classes.  This can happen if\\n\"\n          << \"the two classes are from different namespaces or translation\\n\"\n          << \"units and have the same name.  You should probably rename one\\n\"\n          << \"of the classes to put the tests into different test cases.\";\n    }\n    return false;\n  }\n\n  return true;\n}\n\n#if GTEST_HAS_SEH\n\n// Adds an \"exception thrown\" fatal failure to the current test.  
This\n// function returns its result via an output parameter pointer because VC++\n// prohibits creation of objects with destructors on stack in functions\n// using __try (see error C2712).\nstatic internal::String* FormatSehExceptionMessage(DWORD exception_code,\n                                                   const char* location) {\n  Message message;\n  message << \"SEH exception with code 0x\" << std::setbase(16) <<\n    exception_code << std::setbase(10) << \" thrown in \" << location << \".\";\n\n  return new internal::String(message.GetString());\n}\n\n#endif  // GTEST_HAS_SEH\n\n#if GTEST_HAS_EXCEPTIONS\n\n// Adds an \"exception thrown\" fatal failure to the current test.\nstatic internal::String FormatCxxExceptionMessage(const char* description,\n                                                  const char* location) {\n  Message message;\n  if (description != NULL) {\n    message << \"C++ exception with description \\\"\" << description << \"\\\"\";\n  } else {\n    message << \"Unknown C++ exception\";\n  }\n  message << \" thrown in \" << location << \".\";\n\n  return message.GetString();\n}\n\nstatic internal::String PrintTestPartResultToString(\n    const TestPartResult& test_part_result);\n\n// A failed Google Test assertion will throw an exception of this type when\n// GTEST_FLAG(throw_on_failure) is true (if exceptions are enabled).  We\n// derive it from std::runtime_error, which is for errors presumably\n// detectable only at run time.  
Since std::runtime_error inherits from\n// std::exception, many testing frameworks know how to extract and print the\n// message inside it.\nclass GoogleTestFailureException : public ::std::runtime_error {\n public:\n  explicit GoogleTestFailureException(const TestPartResult& failure)\n      : ::std::runtime_error(PrintTestPartResultToString(failure).c_str()) {}\n};\n#endif  // GTEST_HAS_EXCEPTIONS\n\nnamespace internal {\n// We put these helper functions in the internal namespace as IBM's xlC\n// compiler rejects the code if they were declared static.\n\n// Runs the given method and handles SEH exceptions it throws, when\n// SEH is supported; returns the 0-value for type Result in case of an\n// SEH exception.  (Microsoft compilers cannot handle SEH and C++\n// exceptions in the same function.  Therefore, we provide a separate\n// wrapper function for handling SEH exceptions.)\ntemplate <class T, typename Result>\nResult HandleSehExceptionsInMethodIfSupported(\n    T* object, Result (T::*method)(), const char* location) {\n#if GTEST_HAS_SEH\n  __try {\n    return (object->*method)();\n  } __except (internal::UnitTestOptions::GTestShouldProcessSEH(  // NOLINT\n      GetExceptionCode())) {\n    // We create the exception message on the heap because VC++ prohibits\n    // creation of objects with destructors on stack in functions using __try\n    // (see error C2712).\n    internal::String* exception_message = FormatSehExceptionMessage(\n        GetExceptionCode(), location);\n    internal::ReportFailureInUnknownLocation(TestPartResult::kFatalFailure,\n                                             *exception_message);\n    delete exception_message;\n    return static_cast<Result>(0);\n  }\n#else\n  (void)location;\n  return (object->*method)();\n#endif  // GTEST_HAS_SEH\n}\n\n// Runs the given method and catches and reports C++ and/or SEH-style\n// exceptions, if they are supported; returns the 0-value for type\n// Result in case of an SEH exception.\ntemplate <class 
T, typename Result>\nResult HandleExceptionsInMethodIfSupported(\n    T* object, Result (T::*method)(), const char* location) {\n  // NOTE: The user code can affect the way in which Google Test handles\n  // exceptions by setting GTEST_FLAG(catch_exceptions), but only before\n  // RUN_ALL_TESTS() starts. It is technically possible to check the flag\n  // after the exception is caught and either report or re-throw the\n  // exception based on the flag's value:\n  //\n  // try {\n  //   // Perform the test method.\n  // } catch (...) {\n  //   if (GTEST_FLAG(catch_exceptions))\n  //     // Report the exception as failure.\n  //   else\n  //     throw;  // Re-throws the original exception.\n  // }\n  //\n  // However, the purpose of this flag is to allow the program to drop into\n  // the debugger when the exception is thrown. On most platforms, once the\n  // control enters the catch block, the exception origin information is\n  // lost and the debugger will stop the program at the point of the\n  // re-throw in this function -- instead of at the point of the original\n  // throw statement in the code under test.  For this reason, we perform\n  // the check early, sacrificing the ability to affect Google Test's\n  // exception handling in the method where the exception is thrown.\n  if (internal::GetUnitTestImpl()->catch_exceptions()) {\n#if GTEST_HAS_EXCEPTIONS\n    try {\n      return HandleSehExceptionsInMethodIfSupported(object, method, location);\n    } catch (const GoogleTestFailureException&) {  // NOLINT\n      // This exception doesn't originate in code under test. It makes no\n      // sense to report it as a test failure.\n      throw;\n    } catch (const std::exception& e) {  // NOLINT\n      internal::ReportFailureInUnknownLocation(\n          TestPartResult::kFatalFailure,\n          FormatCxxExceptionMessage(e.what(), location));\n    } catch (...) 
{  // NOLINT\n      internal::ReportFailureInUnknownLocation(\n          TestPartResult::kFatalFailure,\n          FormatCxxExceptionMessage(NULL, location));\n    }\n    return static_cast<Result>(0);\n#else\n    return HandleSehExceptionsInMethodIfSupported(object, method, location);\n#endif  // GTEST_HAS_EXCEPTIONS\n  } else {\n    return (object->*method)();\n  }\n}\n\n}  // namespace internal\n\n// Runs the test and updates the test result.\nvoid Test::Run() {\n  if (!HasSameFixtureClass()) return;\n\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n  internal::HandleExceptionsInMethodIfSupported(this, &Test::SetUp, \"SetUp()\");\n  // We will run the test only if SetUp() was successful.\n  if (!HasFatalFailure()) {\n    impl->os_stack_trace_getter()->UponLeavingGTest();\n    internal::HandleExceptionsInMethodIfSupported(\n        this, &Test::TestBody, \"the test body\");\n  }\n\n  // However, we want to clean up as much as possible.  Hence we will\n  // always call TearDown(), even if SetUp() or the test body has\n  // failed.\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n  internal::HandleExceptionsInMethodIfSupported(\n      this, &Test::TearDown, \"TearDown()\");\n}\n\n// Returns true iff the current test has a fatal failure.\nbool Test::HasFatalFailure() {\n  return internal::GetUnitTestImpl()->current_test_result()->HasFatalFailure();\n}\n\n// Returns true iff the current test has a non-fatal failure.\nbool Test::HasNonfatalFailure() {\n  return internal::GetUnitTestImpl()->current_test_result()->\n      HasNonfatalFailure();\n}\n\n// class TestInfo\n\n// Constructs a TestInfo object. 
It assumes ownership of the test factory\n// object.\n// TODO(vladl@google.com): Make a_test_case_name and a_name const string&'s\n// to signify they cannot be NULLs.\nTestInfo::TestInfo(const char* a_test_case_name,\n                   const char* a_name,\n                   const char* a_type_param,\n                   const char* a_value_param,\n                   internal::TypeId fixture_class_id,\n                   internal::TestFactoryBase* factory)\n    : test_case_name_(a_test_case_name),\n      name_(a_name),\n      type_param_(a_type_param ? new std::string(a_type_param) : NULL),\n      value_param_(a_value_param ? new std::string(a_value_param) : NULL),\n      fixture_class_id_(fixture_class_id),\n      should_run_(false),\n      is_disabled_(false),\n      matches_filter_(false),\n      factory_(factory),\n      result_() {}\n\n// Destructs a TestInfo object.\nTestInfo::~TestInfo() { delete factory_; }\n\nnamespace internal {\n\n// Creates a new TestInfo object and registers it with Google Test;\n// returns the created object.\n//\n// Arguments:\n//\n//   test_case_name:   name of the test case\n//   name:             name of the test\n//   type_param:       the name of the test's type parameter, or NULL if\n//                     this is not a typed or a type-parameterized test.\n//   value_param:      text representation of the test's value parameter,\n//                     or NULL if this is not a value-parameterized test.\n//   fixture_class_id: ID of the test fixture class\n//   set_up_tc:        pointer to the function that sets up the test case\n//   tear_down_tc:     pointer to the function that tears down the test case\n//   factory:          pointer to the factory that creates a test object.\n//                     The newly created TestInfo instance will assume\n//                     ownership of the factory object.\nTestInfo* MakeAndRegisterTestInfo(\n    const char* test_case_name, const char* name,\n    const char* type_param,\n    
const char* value_param,\n    TypeId fixture_class_id,\n    SetUpTestCaseFunc set_up_tc,\n    TearDownTestCaseFunc tear_down_tc,\n    TestFactoryBase* factory) {\n  TestInfo* const test_info =\n      new TestInfo(test_case_name, name, type_param, value_param,\n                   fixture_class_id, factory);\n  GetUnitTestImpl()->AddTestInfo(set_up_tc, tear_down_tc, test_info);\n  return test_info;\n}\n\n#if GTEST_HAS_PARAM_TEST\nvoid ReportInvalidTestCaseType(const char* test_case_name,\n                               const char* file, int line) {\n  Message errors;\n  errors\n      << \"Attempted redefinition of test case \" << test_case_name << \".\\n\"\n      << \"All tests in the same test case must use the same test fixture\\n\"\n      << \"class.  However, in test case \" << test_case_name << \", you tried\\n\"\n      << \"to define a test using a fixture class different from the one\\n\"\n      << \"used earlier. This can happen if the two fixture classes are\\n\"\n      << \"from different namespaces and have the same name. You should\\n\"\n      << \"probably rename one of the classes to put the tests into different\\n\"\n      << \"test cases.\";\n\n  fprintf(stderr, \"%s %s\", FormatFileLocation(file, line).c_str(),\n          errors.GetString().c_str());\n}\n#endif  // GTEST_HAS_PARAM_TEST\n\n}  // namespace internal\n\nnamespace {\n\n// A predicate that checks the test name of a TestInfo against a known\n// value.\n//\n// This is used for implementation of the TestCase class only.  
We put\n// it in the anonymous namespace to prevent polluting the outer\n// namespace.\n//\n// TestNameIs is copyable.\nclass TestNameIs {\n public:\n  // Constructor.\n  //\n  // TestNameIs has NO default constructor.\n  explicit TestNameIs(const char* name)\n      : name_(name) {}\n\n  // Returns true iff the test name of test_info matches name_.\n  bool operator()(const TestInfo * test_info) const {\n    return test_info && internal::String(test_info->name()).Compare(name_) == 0;\n  }\n\n private:\n  internal::String name_;\n};\n\n}  // namespace\n\nnamespace internal {\n\n// This method expands all parameterized tests registered with macros TEST_P\n// and INSTANTIATE_TEST_CASE_P into regular tests and registers those.\n// This will be done just once during the program runtime.\nvoid UnitTestImpl::RegisterParameterizedTests() {\n#if GTEST_HAS_PARAM_TEST\n  if (!parameterized_tests_registered_) {\n    parameterized_test_registry_.RegisterTests();\n    parameterized_tests_registered_ = true;\n  }\n#endif\n}\n\n}  // namespace internal\n\n// Creates the test object, runs it, records its result, and then\n// deletes it.\nvoid TestInfo::Run() {\n  if (!should_run_) return;\n\n  // Tells UnitTest where to store test result.\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  impl->set_current_test_info(this);\n\n  TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater();\n\n  // Notifies the unit test event listeners that a test is about to start.\n  repeater->OnTestStart(*this);\n\n  const TimeInMillis start = internal::GetTimeInMillis();\n\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n\n  // Creates the test object.\n  Test* const test = internal::HandleExceptionsInMethodIfSupported(\n      factory_, &internal::TestFactoryBase::CreateTest,\n      \"the test fixture's constructor\");\n\n  // Runs the test only if the test object was created and its\n  // constructor didn't generate a fatal failure.\n  if ((test != 
NULL) && !Test::HasFatalFailure()) {\n    // This doesn't throw as all user code that can throw is wrapped into\n    // exception handling code.\n    test->Run();\n  }\n\n  // Deletes the test object.\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n  internal::HandleExceptionsInMethodIfSupported(\n      test, &Test::DeleteSelf_, \"the test fixture's destructor\");\n\n  result_.set_elapsed_time(internal::GetTimeInMillis() - start);\n\n  // Notifies the unit test event listener that a test has just finished.\n  repeater->OnTestEnd(*this);\n\n  // Tells UnitTest to stop associating assertion results to this\n  // test.\n  impl->set_current_test_info(NULL);\n}\n\n// class TestCase\n\n// Gets the number of successful tests in this test case.\nint TestCase::successful_test_count() const {\n  return CountIf(test_info_list_, TestPassed);\n}\n\n// Gets the number of failed tests in this test case.\nint TestCase::failed_test_count() const {\n  return CountIf(test_info_list_, TestFailed);\n}\n\nint TestCase::disabled_test_count() const {\n  return CountIf(test_info_list_, TestDisabled);\n}\n\n// Gets the number of tests in this test case that should run.\nint TestCase::test_to_run_count() const {\n  return CountIf(test_info_list_, ShouldRunTest);\n}\n\n// Gets the number of all tests.\nint TestCase::total_test_count() const {\n  return static_cast<int>(test_info_list_.size());\n}\n\n// Creates a TestCase with the given name.\n//\n// Arguments:\n//\n//   name:         name of the test case\n//   a_type_param: the name of the test case's type parameter, or NULL if\n//                 this is not a typed or a type-parameterized test case.\n//   set_up_tc:    pointer to the function that sets up the test case\n//   tear_down_tc: pointer to the function that tears down the test case\nTestCase::TestCase(const char* a_name, const char* a_type_param,\n                   Test::SetUpTestCaseFunc set_up_tc,\n                   Test::TearDownTestCaseFunc tear_down_tc)\n    : 
name_(a_name),\n      type_param_(a_type_param ? new std::string(a_type_param) : NULL),\n      set_up_tc_(set_up_tc),\n      tear_down_tc_(tear_down_tc),\n      should_run_(false),\n      elapsed_time_(0) {\n}\n\n// Destructor of TestCase.\nTestCase::~TestCase() {\n  // Deletes every Test in the collection.\n  ForEach(test_info_list_, internal::Delete<TestInfo>);\n}\n\n// Returns the i-th test among all the tests. i can range from 0 to\n// total_test_count() - 1. If i is not in that range, returns NULL.\nconst TestInfo* TestCase::GetTestInfo(int i) const {\n  const int index = GetElementOr(test_indices_, i, -1);\n  return index < 0 ? NULL : test_info_list_[index];\n}\n\n// Returns the i-th test among all the tests. i can range from 0 to\n// total_test_count() - 1. If i is not in that range, returns NULL.\nTestInfo* TestCase::GetMutableTestInfo(int i) {\n  const int index = GetElementOr(test_indices_, i, -1);\n  return index < 0 ? NULL : test_info_list_[index];\n}\n\n// Adds a test to this test case.  
Will delete the test upon\n// destruction of the TestCase object.\nvoid TestCase::AddTestInfo(TestInfo * test_info) {\n  test_info_list_.push_back(test_info);\n  test_indices_.push_back(static_cast<int>(test_indices_.size()));\n}\n\n// Runs every test in this TestCase.\nvoid TestCase::Run() {\n  if (!should_run_) return;\n\n  internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();\n  impl->set_current_test_case(this);\n\n  TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater();\n\n  repeater->OnTestCaseStart(*this);\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n  internal::HandleExceptionsInMethodIfSupported(\n      this, &TestCase::RunSetUpTestCase, \"SetUpTestCase()\");\n\n  const internal::TimeInMillis start = internal::GetTimeInMillis();\n  for (int i = 0; i < total_test_count(); i++) {\n    GetMutableTestInfo(i)->Run();\n  }\n  elapsed_time_ = internal::GetTimeInMillis() - start;\n\n  impl->os_stack_trace_getter()->UponLeavingGTest();\n  internal::HandleExceptionsInMethodIfSupported(\n      this, &TestCase::RunTearDownTestCase, \"TearDownTestCase()\");\n\n  repeater->OnTestCaseEnd(*this);\n  impl->set_current_test_case(NULL);\n}\n\n// Clears the results of all tests in this test case.\nvoid TestCase::ClearResult() {\n  ForEach(test_info_list_, TestInfo::ClearTestResult);\n}\n\n// Shuffles the tests in this test case.\nvoid TestCase::ShuffleTests(internal::Random* random) {\n  Shuffle(random, &test_indices_);\n}\n\n// Restores the test order to before the first shuffle.\nvoid TestCase::UnshuffleTests() {\n  for (size_t i = 0; i < test_indices_.size(); i++) {\n    test_indices_[i] = static_cast<int>(i);\n  }\n}\n\n// Formats a countable noun.  Depending on its quantity, either the\n// singular form or the plural form is used. 
e.g.\n//\n// FormatCountableNoun(1, \"formula\", \"formuli\") returns \"1 formula\".\n// FormatCountableNoun(5, \"book\", \"books\") returns \"5 books\".\nstatic internal::String FormatCountableNoun(int count,\n                                            const char * singular_form,\n                                            const char * plural_form) {\n  return internal::String::Format(\"%d %s\", count,\n                                  count == 1 ? singular_form : plural_form);\n}\n\n// Formats the count of tests.\nstatic internal::String FormatTestCount(int test_count) {\n  return FormatCountableNoun(test_count, \"test\", \"tests\");\n}\n\n// Formats the count of test cases.\nstatic internal::String FormatTestCaseCount(int test_case_count) {\n  return FormatCountableNoun(test_case_count, \"test case\", \"test cases\");\n}\n\n// Converts a TestPartResult::Type enum to human-friendly string\n// representation.  Both kNonFatalFailure and kFatalFailure are translated\n// to \"Failure\", as the user usually doesn't care about the difference\n// between the two when viewing the test result.\nstatic const char * TestPartResultTypeToString(TestPartResult::Type type) {\n  switch (type) {\n    case TestPartResult::kSuccess:\n      return \"Success\";\n\n    case TestPartResult::kNonFatalFailure:\n    case TestPartResult::kFatalFailure:\n#ifdef _MSC_VER\n      return \"error: \";\n#else\n      return \"Failure\\n\";\n#endif\n    default:\n      return \"Unknown result type\";\n  }\n}\n\n// Prints a TestPartResult to a String.\nstatic internal::String PrintTestPartResultToString(\n    const TestPartResult& test_part_result) {\n  return (Message()\n          << internal::FormatFileLocation(test_part_result.file_name(),\n                                          test_part_result.line_number())\n          << \" \" << TestPartResultTypeToString(test_part_result.type())\n          << test_part_result.message()).GetString();\n}\n\n// Prints a TestPartResult.\nstatic void 
PrintTestPartResult(const TestPartResult& test_part_result) {\n  const internal::String& result =\n      PrintTestPartResultToString(test_part_result);\n  printf(\"%s\\n\", result.c_str());\n  fflush(stdout);\n  // If the test program runs in Visual Studio or a debugger, the\n  // following statements add the test part result message to the Output\n  // window such that the user can double-click on it to jump to the\n  // corresponding source code location; otherwise they do nothing.\n#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE\n  // We don't call OutputDebugString*() on Windows Mobile, as printing\n  // to stdout is done by OutputDebugString() there already - we don't\n  // want the same message printed twice.\n  ::OutputDebugStringA(result.c_str());\n  ::OutputDebugStringA(\"\\n\");\n#endif\n}\n\n// class PrettyUnitTestResultPrinter\n\nnamespace internal {\n\nenum GTestColor {\n  COLOR_DEFAULT,\n  COLOR_RED,\n  COLOR_GREEN,\n  COLOR_YELLOW\n};\n\n#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE\n\n// Returns the character attribute for the given color.\nWORD GetColorAttribute(GTestColor color) {\n  switch (color) {\n    case COLOR_RED:    return FOREGROUND_RED;\n    case COLOR_GREEN:  return FOREGROUND_GREEN;\n    case COLOR_YELLOW: return FOREGROUND_RED | FOREGROUND_GREEN;\n    default:           return 0;\n  }\n}\n\n#else\n\n// Returns the ANSI color code for the given color.  
COLOR_DEFAULT is\n// an invalid input.\nconst char* GetAnsiColorCode(GTestColor color) {\n  switch (color) {\n    case COLOR_RED:     return \"1\";\n    case COLOR_GREEN:   return \"2\";\n    case COLOR_YELLOW:  return \"3\";\n    default:            return NULL;\n  }\n}\n\n#endif  // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE\n\n// Returns true iff Google Test should use colors in the output.\nbool ShouldUseColor(bool stdout_is_tty) {\n  const char* const gtest_color = GTEST_FLAG(color).c_str();\n\n  if (String::CaseInsensitiveCStringEquals(gtest_color, \"auto\")) {\n#if GTEST_OS_WINDOWS\n    // On Windows the TERM variable is usually not set, but the\n    // console there does support colors.\n    return stdout_is_tty;\n#else\n    // On non-Windows platforms, we rely on the TERM variable.\n    const char* const term = posix::GetEnv(\"TERM\");\n    const bool term_supports_color =\n        String::CStringEquals(term, \"xterm\") ||\n        String::CStringEquals(term, \"xterm-color\") ||\n        String::CStringEquals(term, \"xterm-256color\") ||\n        String::CStringEquals(term, \"screen\") ||\n        String::CStringEquals(term, \"linux\") ||\n        String::CStringEquals(term, \"cygwin\");\n    return stdout_is_tty && term_supports_color;\n#endif  // GTEST_OS_WINDOWS\n  }\n\n  return String::CaseInsensitiveCStringEquals(gtest_color, \"yes\") ||\n      String::CaseInsensitiveCStringEquals(gtest_color, \"true\") ||\n      String::CaseInsensitiveCStringEquals(gtest_color, \"t\") ||\n      String::CStringEquals(gtest_color, \"1\");\n  // We take \"yes\", \"true\", \"t\", and \"1\" as meaning \"yes\".  If the\n  // value is neither one of these nor \"auto\", we treat it as \"no\" to\n  // be conservative.\n}\n\n// Helpers for printing colored strings to stdout. 
Note that on Windows, we\n// cannot simply emit special characters and have the terminal change colors.\n// This routine must actually emit the characters rather than return a string\n// that would be colored when printed, as can be done on Linux.\nvoid ColoredPrintf(GTestColor color, const char* fmt, ...) {\n  va_list args;\n  va_start(args, fmt);\n\n#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS\n  const bool use_color = false;\n#else\n  static const bool in_color_mode =\n      ShouldUseColor(posix::IsATTY(posix::FileNo(stdout)) != 0);\n  const bool use_color = in_color_mode && (color != COLOR_DEFAULT);\n#endif  // GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS\n  // The '!= 0' comparison is necessary to satisfy MSVC 7.1.\n\n  if (!use_color) {\n    vprintf(fmt, args);\n    va_end(args);\n    return;\n  }\n\n#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE\n  const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE);\n\n  // Gets the current text color.\n  CONSOLE_SCREEN_BUFFER_INFO buffer_info;\n  GetConsoleScreenBufferInfo(stdout_handle, &buffer_info);\n  const WORD old_color_attrs = buffer_info.wAttributes;\n\n  // We need to flush the stream buffers into the console before each\n  // SetConsoleTextAttribute call lest it affect the text that is already\n  // printed but has not yet reached the console.\n  fflush(stdout);\n  SetConsoleTextAttribute(stdout_handle,\n                          GetColorAttribute(color) | FOREGROUND_INTENSITY);\n  vprintf(fmt, args);\n\n  fflush(stdout);\n  // Restores the text color.\n  SetConsoleTextAttribute(stdout_handle, old_color_attrs);\n#else\n  printf(\"\\033[0;3%sm\", GetAnsiColorCode(color));\n  vprintf(fmt, args);\n  printf(\"\\033[m\");  // Resets the terminal to default.\n#endif  // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE\n  va_end(args);\n}\n\nvoid PrintFullTestCommentIfPresent(const TestInfo& test_info) {\n  const char* const type_param = test_info.type_param();\n  const char* 
const value_param = test_info.value_param();\n\n  if (type_param != NULL || value_param != NULL) {\n    printf(\", where \");\n    if (type_param != NULL) {\n      printf(\"TypeParam = %s\", type_param);\n      if (value_param != NULL)\n        printf(\" and \");\n    }\n    if (value_param != NULL) {\n      printf(\"GetParam() = %s\", value_param);\n    }\n  }\n}\n\n// This class implements the TestEventListener interface.\n//\n// Class PrettyUnitTestResultPrinter is copyable.\nclass PrettyUnitTestResultPrinter : public TestEventListener {\n public:\n  PrettyUnitTestResultPrinter() {}\n  static void PrintTestName(const char * test_case, const char * test) {\n    printf(\"%s.%s\", test_case, test);\n  }\n\n  // The following methods override what's in the TestEventListener class.\n  virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration);\n  virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test);\n  virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestCaseStart(const TestCase& test_case);\n  virtual void OnTestStart(const TestInfo& test_info);\n  virtual void OnTestPartResult(const TestPartResult& result);\n  virtual void OnTestEnd(const TestInfo& test_info);\n  virtual void OnTestCaseEnd(const TestCase& test_case);\n  virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test);\n  virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);\n  virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {}\n\n private:\n  static void PrintFailedTests(const UnitTest& unit_test);\n\n  internal::String test_case_name_;\n};\n\n  // Fired before each iteration of tests starts.\nvoid PrettyUnitTestResultPrinter::OnTestIterationStart(\n    const UnitTest& unit_test, int iteration) {\n  if (GTEST_FLAG(repeat) != 1)\n    
printf(\"\\nRepeating all tests (iteration %d) . . .\\n\\n\", iteration + 1);\n\n  const char* const filter = GTEST_FLAG(filter).c_str();\n\n  // Prints the filter if it's not *.  This reminds the user that some\n  // tests may be skipped.\n  if (!internal::String::CStringEquals(filter, kUniversalFilter)) {\n    ColoredPrintf(COLOR_YELLOW,\n                  \"Note: %s filter = %s\\n\", GTEST_NAME_, filter);\n  }\n\n  if (internal::ShouldShard(kTestTotalShards, kTestShardIndex, false)) {\n    const Int32 shard_index = Int32FromEnvOrDie(kTestShardIndex, -1);\n    ColoredPrintf(COLOR_YELLOW,\n                  \"Note: This is test shard %d of %s.\\n\",\n                  static_cast<int>(shard_index) + 1,\n                  internal::posix::GetEnv(kTestTotalShards));\n  }\n\n  if (GTEST_FLAG(shuffle)) {\n    ColoredPrintf(COLOR_YELLOW,\n                  \"Note: Randomizing tests' orders with a seed of %d .\\n\",\n                  unit_test.random_seed());\n  }\n\n  ColoredPrintf(COLOR_GREEN,  \"[==========] \");\n  printf(\"Running %s from %s.\\n\",\n         FormatTestCount(unit_test.test_to_run_count()).c_str(),\n         FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str());\n  fflush(stdout);\n}\n\nvoid PrettyUnitTestResultPrinter::OnEnvironmentsSetUpStart(\n    const UnitTest& /*unit_test*/) {\n  ColoredPrintf(COLOR_GREEN,  \"[----------] \");\n  printf(\"Global test environment set-up.\\n\");\n  fflush(stdout);\n}\n\nvoid PrettyUnitTestResultPrinter::OnTestCaseStart(const TestCase& test_case) {\n  test_case_name_ = test_case.name();\n  const internal::String counts =\n      FormatCountableNoun(test_case.test_to_run_count(), \"test\", \"tests\");\n  ColoredPrintf(COLOR_GREEN, \"[----------] \");\n  printf(\"%s from %s\", counts.c_str(), test_case_name_.c_str());\n  if (test_case.type_param() == NULL) {\n    printf(\"\\n\");\n  } else {\n    printf(\", where TypeParam = %s\\n\", test_case.type_param());\n  }\n  fflush(stdout);\n}\n\nvoid 
PrettyUnitTestResultPrinter::OnTestStart(const TestInfo& test_info) {\n  ColoredPrintf(COLOR_GREEN,  \"[ RUN      ] \");\n  PrintTestName(test_case_name_.c_str(), test_info.name());\n  printf(\"\\n\");\n  fflush(stdout);\n}\n\n// Called after an assertion failure.\nvoid PrettyUnitTestResultPrinter::OnTestPartResult(\n    const TestPartResult& result) {\n  // If the test part succeeded, we don't need to do anything.\n  if (result.type() == TestPartResult::kSuccess)\n    return;\n\n  // Print failure message from the assertion (e.g. expected this and got that).\n  PrintTestPartResult(result);\n  fflush(stdout);\n}\n\nvoid PrettyUnitTestResultPrinter::OnTestEnd(const TestInfo& test_info) {\n  if (test_info.result()->Passed()) {\n    ColoredPrintf(COLOR_GREEN, \"[       OK ] \");\n  } else {\n    ColoredPrintf(COLOR_RED, \"[  FAILED  ] \");\n  }\n  PrintTestName(test_case_name_.c_str(), test_info.name());\n  if (test_info.result()->Failed())\n    PrintFullTestCommentIfPresent(test_info);\n\n  if (GTEST_FLAG(print_time)) {\n    printf(\" (%s ms)\\n\", internal::StreamableToString(\n           test_info.result()->elapsed_time()).c_str());\n  } else {\n    printf(\"\\n\");\n  }\n  fflush(stdout);\n}\n\nvoid PrettyUnitTestResultPrinter::OnTestCaseEnd(const TestCase& test_case) {\n  if (!GTEST_FLAG(print_time)) return;\n\n  test_case_name_ = test_case.name();\n  const internal::String counts =\n      FormatCountableNoun(test_case.test_to_run_count(), \"test\", \"tests\");\n  ColoredPrintf(COLOR_GREEN, \"[----------] \");\n  printf(\"%s from %s (%s ms total)\\n\\n\",\n         counts.c_str(), test_case_name_.c_str(),\n         internal::StreamableToString(test_case.elapsed_time()).c_str());\n  fflush(stdout);\n}\n\nvoid PrettyUnitTestResultPrinter::OnEnvironmentsTearDownStart(\n    const UnitTest& /*unit_test*/) {\n  ColoredPrintf(COLOR_GREEN,  \"[----------] \");\n  printf(\"Global test environment tear-down\\n\");\n  fflush(stdout);\n}\n\n// Internal helper for printing 
the list of failed tests.\nvoid PrettyUnitTestResultPrinter::PrintFailedTests(const UnitTest& unit_test) {\n  const int failed_test_count = unit_test.failed_test_count();\n  if (failed_test_count == 0) {\n    return;\n  }\n\n  for (int i = 0; i < unit_test.total_test_case_count(); ++i) {\n    const TestCase& test_case = *unit_test.GetTestCase(i);\n    if (!test_case.should_run() || (test_case.failed_test_count() == 0)) {\n      continue;\n    }\n    for (int j = 0; j < test_case.total_test_count(); ++j) {\n      const TestInfo& test_info = *test_case.GetTestInfo(j);\n      if (!test_info.should_run() || test_info.result()->Passed()) {\n        continue;\n      }\n      ColoredPrintf(COLOR_RED, \"[  FAILED  ] \");\n      printf(\"%s.%s\", test_case.name(), test_info.name());\n      PrintFullTestCommentIfPresent(test_info);\n      printf(\"\\n\");\n    }\n  }\n}\n\nvoid PrettyUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,\n                                                     int /*iteration*/) {\n  ColoredPrintf(COLOR_GREEN,  \"[==========] \");\n  printf(\"%s from %s ran.\",\n         FormatTestCount(unit_test.test_to_run_count()).c_str(),\n         FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str());\n  if (GTEST_FLAG(print_time)) {\n    printf(\" (%s ms total)\",\n           internal::StreamableToString(unit_test.elapsed_time()).c_str());\n  }\n  printf(\"\\n\");\n  ColoredPrintf(COLOR_GREEN,  \"[  PASSED  ] \");\n  printf(\"%s.\\n\", FormatTestCount(unit_test.successful_test_count()).c_str());\n\n  int num_failures = unit_test.failed_test_count();\n  if (!unit_test.Passed()) {\n    const int failed_test_count = unit_test.failed_test_count();\n    ColoredPrintf(COLOR_RED,  \"[  FAILED  ] \");\n    printf(\"%s, listed below:\\n\", FormatTestCount(failed_test_count).c_str());\n    PrintFailedTests(unit_test);\n    printf(\"\\n%2d FAILED %s\\n\", num_failures,\n                        num_failures == 1 ? 
\"TEST\" : \"TESTS\");\n  }\n\n  int num_disabled = unit_test.disabled_test_count();\n  if (num_disabled && !GTEST_FLAG(also_run_disabled_tests)) {\n    if (!num_failures) {\n      printf(\"\\n\");  // Add a spacer if no FAILURE banner is displayed.\n    }\n    ColoredPrintf(COLOR_YELLOW,\n                  \"  YOU HAVE %d DISABLED %s\\n\\n\",\n                  num_disabled,\n                  num_disabled == 1 ? \"TEST\" : \"TESTS\");\n  }\n  // Ensure that Google Test output is printed before, e.g., heapchecker output.\n  fflush(stdout);\n}\n\n// End PrettyUnitTestResultPrinter\n\n// class TestEventRepeater\n//\n// This class forwards events to other event listeners.\nclass TestEventRepeater : public TestEventListener {\n public:\n  TestEventRepeater() : forwarding_enabled_(true) {}\n  virtual ~TestEventRepeater();\n  void Append(TestEventListener *listener);\n  TestEventListener* Release(TestEventListener* listener);\n\n  // Controls whether events will be forwarded to listeners_. Set to false\n  // in death test child processes.\n  bool forwarding_enabled() const { return forwarding_enabled_; }\n  void set_forwarding_enabled(bool enable) { forwarding_enabled_ = enable; }\n\n  virtual void OnTestProgramStart(const UnitTest& unit_test);\n  virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration);\n  virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test);\n  virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test);\n  virtual void OnTestCaseStart(const TestCase& test_case);\n  virtual void OnTestStart(const TestInfo& test_info);\n  virtual void OnTestPartResult(const TestPartResult& result);\n  virtual void OnTestEnd(const TestInfo& test_info);\n  virtual void OnTestCaseEnd(const TestCase& test_case);\n  virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test);\n  virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test);\n  virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);\n  virtual 
void OnTestProgramEnd(const UnitTest& unit_test);\n\n private:\n  // Controls whether events will be forwarded to listeners_. Set to false\n  // in death test child processes.\n  bool forwarding_enabled_;\n  // The list of listeners that receive events.\n  std::vector<TestEventListener*> listeners_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventRepeater);\n};\n\nTestEventRepeater::~TestEventRepeater() {\n  ForEach(listeners_, Delete<TestEventListener>);\n}\n\nvoid TestEventRepeater::Append(TestEventListener *listener) {\n  listeners_.push_back(listener);\n}\n\n// TODO(vladl@google.com): Factor the search functionality into Vector::Find.\nTestEventListener* TestEventRepeater::Release(TestEventListener *listener) {\n  for (size_t i = 0; i < listeners_.size(); ++i) {\n    if (listeners_[i] == listener) {\n      listeners_.erase(listeners_.begin() + i);\n      return listener;\n    }\n  }\n\n  return NULL;\n}\n\n// Since most methods are very similar, use macros to reduce boilerplate.\n// This defines a member that forwards the call to all listeners.\n#define GTEST_REPEATER_METHOD_(Name, Type) \\\nvoid TestEventRepeater::Name(const Type& parameter) { \\\n  if (forwarding_enabled_) { \\\n    for (size_t i = 0; i < listeners_.size(); i++) { \\\n      listeners_[i]->Name(parameter); \\\n    } \\\n  } \\\n}\n// This defines a member that forwards the call to all listeners in reverse\n// order.\n#define GTEST_REVERSE_REPEATER_METHOD_(Name, Type) \\\nvoid TestEventRepeater::Name(const Type& parameter) { \\\n  if (forwarding_enabled_) { \\\n    for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) { \\\n      listeners_[i]->Name(parameter); \\\n    } \\\n  } \\\n}\n\nGTEST_REPEATER_METHOD_(OnTestProgramStart, UnitTest)\nGTEST_REPEATER_METHOD_(OnEnvironmentsSetUpStart, UnitTest)\nGTEST_REPEATER_METHOD_(OnTestCaseStart, TestCase)\nGTEST_REPEATER_METHOD_(OnTestStart, TestInfo)\nGTEST_REPEATER_METHOD_(OnTestPartResult, 
TestPartResult)\nGTEST_REPEATER_METHOD_(OnEnvironmentsTearDownStart, UnitTest)\nGTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsSetUpEnd, UnitTest)\nGTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsTearDownEnd, UnitTest)\nGTEST_REVERSE_REPEATER_METHOD_(OnTestEnd, TestInfo)\nGTEST_REVERSE_REPEATER_METHOD_(OnTestCaseEnd, TestCase)\nGTEST_REVERSE_REPEATER_METHOD_(OnTestProgramEnd, UnitTest)\n\n#undef GTEST_REPEATER_METHOD_\n#undef GTEST_REVERSE_REPEATER_METHOD_\n\nvoid TestEventRepeater::OnTestIterationStart(const UnitTest& unit_test,\n                                             int iteration) {\n  if (forwarding_enabled_) {\n    for (size_t i = 0; i < listeners_.size(); i++) {\n      listeners_[i]->OnTestIterationStart(unit_test, iteration);\n    }\n  }\n}\n\nvoid TestEventRepeater::OnTestIterationEnd(const UnitTest& unit_test,\n                                           int iteration) {\n  if (forwarding_enabled_) {\n    for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) {\n      listeners_[i]->OnTestIterationEnd(unit_test, iteration);\n    }\n  }\n}\n\n// End TestEventRepeater\n\n// This class generates an XML output file.\nclass XmlUnitTestResultPrinter : public EmptyTestEventListener {\n public:\n  explicit XmlUnitTestResultPrinter(const char* output_file);\n\n  virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);\n\n private:\n  // Is c a whitespace character that is normalized to a space character\n  // when it appears in an XML attribute value?\n  static bool IsNormalizableWhitespace(char c) {\n    return c == 0x9 || c == 0xA || c == 0xD;\n  }\n\n  // May c appear in a well-formed XML document?\n  static bool IsValidXmlCharacter(char c) {\n    return IsNormalizableWhitespace(c) || c >= 0x20;\n  }\n\n  // Returns an XML-escaped copy of the input string str.  
If\n  // is_attribute is true, the text is meant to appear as an attribute\n  // value, and normalizable whitespace is preserved by replacing it\n  // with character references.\n  static String EscapeXml(const char* str, bool is_attribute);\n\n  // Returns the given string with all characters invalid in XML removed.\n  static string RemoveInvalidXmlCharacters(const string& str);\n\n  // Convenience wrapper around EscapeXml when str is an attribute value.\n  static String EscapeXmlAttribute(const char* str) {\n    return EscapeXml(str, true);\n  }\n\n  // Convenience wrapper around EscapeXml when str is not an attribute value.\n  static String EscapeXmlText(const char* str) { return EscapeXml(str, false); }\n\n  // Streams an XML CDATA section, escaping invalid CDATA sequences as needed.\n  static void OutputXmlCDataSection(::std::ostream* stream, const char* data);\n\n  // Streams an XML representation of a TestInfo object.\n  static void OutputXmlTestInfo(::std::ostream* stream,\n                                const char* test_case_name,\n                                const TestInfo& test_info);\n\n  // Prints an XML representation of a TestCase object\n  static void PrintXmlTestCase(FILE* out, const TestCase& test_case);\n\n  // Prints an XML summary of unit_test to output stream out.\n  static void PrintXmlUnitTest(FILE* out, const UnitTest& unit_test);\n\n  // Produces a string representing the test properties in a result as space\n  // delimited XML attributes based on the property key=\"value\" pairs.\n  // When the String is not empty, it includes a space at the beginning,\n  // to delimit this attribute from prior attributes.\n  static String TestPropertiesAsXmlAttributes(const TestResult& result);\n\n  // The output file.\n  const String output_file_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(XmlUnitTestResultPrinter);\n};\n\n// Creates a new XmlUnitTestResultPrinter.\nXmlUnitTestResultPrinter::XmlUnitTestResultPrinter(const char* output_file)\n    : 
output_file_(output_file) {\n  if (output_file_.c_str() == NULL || output_file_.empty()) {\n    fprintf(stderr, \"XML output file may not be null\\n\");\n    fflush(stderr);\n    exit(EXIT_FAILURE);\n  }\n}\n\n// Called after the unit test ends.\nvoid XmlUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,\n                                                  int /*iteration*/) {\n  FILE* xmlout = NULL;\n  FilePath output_file(output_file_);\n  FilePath output_dir(output_file.RemoveFileName());\n\n  if (output_dir.CreateDirectoriesRecursively()) {\n    xmlout = posix::FOpen(output_file_.c_str(), \"w\");\n  }\n  if (xmlout == NULL) {\n    // TODO(wan): report the reason of the failure.\n    //\n    // We don't do it for now as:\n    //\n    //   1. There is no urgent need for it.\n    //   2. It's a bit involved to make the errno variable thread-safe on\n    //      all three operating systems (Linux, Windows, and Mac OS).\n    //   3. To interpret the meaning of errno in a thread-safe way,\n    //      we need the strerror_r() function, which is not available on\n    //      Windows.\n    fprintf(stderr,\n            \"Unable to open file \\\"%s\\\"\\n\",\n            output_file_.c_str());\n    fflush(stderr);\n    exit(EXIT_FAILURE);\n  }\n  PrintXmlUnitTest(xmlout, unit_test);\n  fclose(xmlout);\n}\n\n// Returns an XML-escaped copy of the input string str.  
If is_attribute\n// is true, the text is meant to appear as an attribute value, and\n// normalizable whitespace is preserved by replacing it with character\n// references.\n//\n// Invalid XML characters in str, if any, are stripped from the output.\n// It is expected that most, if not all, of the text processed by this\n// module will consist of ordinary English text.\n// If this module is ever modified to produce version 1.1 XML output,\n// most invalid characters can be retained using character references.\n// TODO(wan): It might be nice to have a minimally invasive, human-readable\n// escaping scheme for invalid characters, rather than dropping them.\nString XmlUnitTestResultPrinter::EscapeXml(const char* str, bool is_attribute) {\n  Message m;\n\n  if (str != NULL) {\n    for (const char* src = str; *src; ++src) {\n      switch (*src) {\n        case '<':\n          m << \"&lt;\";\n          break;\n        case '>':\n          m << \"&gt;\";\n          break;\n        case '&':\n          m << \"&amp;\";\n          break;\n        case '\\'':\n          if (is_attribute)\n            m << \"&apos;\";\n          else\n            m << '\\'';\n          break;\n        case '\"':\n          if (is_attribute)\n            m << \"&quot;\";\n          else\n            m << '\"';\n          break;\n        default:\n          if (IsValidXmlCharacter(*src)) {\n            if (is_attribute && IsNormalizableWhitespace(*src))\n              m << String::Format(\"&#x%02X;\", unsigned(*src));\n            else\n              m << *src;\n          }\n          break;\n      }\n    }\n  }\n\n  return m.GetString();\n}\n\n// Returns the given string with all characters invalid in XML removed.\n// Currently invalid characters are dropped from the string. An\n// alternative is to replace them with certain characters such as . 
or ?.\nstring XmlUnitTestResultPrinter::RemoveInvalidXmlCharacters(const string& str) {\n  string output;\n  output.reserve(str.size());\n  for (string::const_iterator it = str.begin(); it != str.end(); ++it)\n    if (IsValidXmlCharacter(*it))\n      output.push_back(*it);\n\n  return output;\n}\n\n// The following routines generate an XML representation of a UnitTest\n// object.\n//\n// This is how Google Test concepts map to the DTD:\n//\n// <testsuites name=\"AllTests\">        <-- corresponds to a UnitTest object\n//   <testsuite name=\"testcase-name\">  <-- corresponds to a TestCase object\n//     <testcase name=\"test-name\">     <-- corresponds to a TestInfo object\n//       <failure message=\"...\">...</failure>\n//       <failure message=\"...\">...</failure>\n//       <failure message=\"...\">...</failure>\n//                                     <-- individual assertion failures\n//     </testcase>\n//   </testsuite>\n// </testsuites>\n\n// Formats the given time in milliseconds as seconds.\nstd::string FormatTimeInMillisAsSeconds(TimeInMillis ms) {\n  ::std::stringstream ss;\n  ss << ms/1000.0;\n  return ss.str();\n}\n\n// Streams an XML CDATA section, escaping invalid CDATA sequences as needed.\nvoid XmlUnitTestResultPrinter::OutputXmlCDataSection(::std::ostream* stream,\n                                                     const char* data) {\n  const char* segment = data;\n  *stream << \"<![CDATA[\";\n  for (;;) {\n    const char* const next_segment = strstr(segment, \"]]>\");\n    if (next_segment != NULL) {\n      stream->write(\n          segment, static_cast<std::streamsize>(next_segment - segment));\n      *stream << \"]]>]]&gt;<![CDATA[\";\n      segment = next_segment + strlen(\"]]>\");\n    } else {\n      *stream << segment;\n      break;\n    }\n  }\n  *stream << \"]]>\";\n}\n\n// Prints an XML representation of a TestInfo object.\n// TODO(wan): There is also value in printing properties with the plain printer.\nvoid 
XmlUnitTestResultPrinter::OutputXmlTestInfo(::std::ostream* stream,\n                                                 const char* test_case_name,\n                                                 const TestInfo& test_info) {\n  const TestResult& result = *test_info.result();\n  *stream << \"    <testcase name=\\\"\"\n          << EscapeXmlAttribute(test_info.name()).c_str() << \"\\\"\";\n\n  if (test_info.value_param() != NULL) {\n    *stream << \" value_param=\\\"\" << EscapeXmlAttribute(test_info.value_param())\n            << \"\\\"\";\n  }\n  if (test_info.type_param() != NULL) {\n    *stream << \" type_param=\\\"\" << EscapeXmlAttribute(test_info.type_param())\n            << \"\\\"\";\n  }\n\n  *stream << \" status=\\\"\"\n          << (test_info.should_run() ? \"run\" : \"notrun\")\n          << \"\\\" time=\\\"\"\n          << FormatTimeInMillisAsSeconds(result.elapsed_time())\n          << \"\\\" classname=\\\"\" << EscapeXmlAttribute(test_case_name).c_str()\n          << \"\\\"\" << TestPropertiesAsXmlAttributes(result).c_str();\n\n  int failures = 0;\n  for (int i = 0; i < result.total_part_count(); ++i) {\n    const TestPartResult& part = result.GetTestPartResult(i);\n    if (part.failed()) {\n      if (++failures == 1)\n        *stream << \">\\n\";\n      *stream << \"      <failure message=\\\"\"\n              << EscapeXmlAttribute(part.summary()).c_str()\n              << \"\\\" type=\\\"\\\">\";\n      const string location = internal::FormatCompilerIndependentFileLocation(\n          part.file_name(), part.line_number());\n      const string message = location + \"\\n\" + part.message();\n      OutputXmlCDataSection(stream,\n                            RemoveInvalidXmlCharacters(message).c_str());\n      *stream << \"</failure>\\n\";\n    }\n  }\n\n  if (failures == 0)\n    *stream << \" />\\n\";\n  else\n    *stream << \"    </testcase>\\n\";\n}\n\n// Prints an XML representation of a TestCase object\nvoid 
XmlUnitTestResultPrinter::PrintXmlTestCase(FILE* out,\n                                                const TestCase& test_case) {\n  fprintf(out,\n          \"  <testsuite name=\\\"%s\\\" tests=\\\"%d\\\" failures=\\\"%d\\\" \"\n          \"disabled=\\\"%d\\\" \",\n          EscapeXmlAttribute(test_case.name()).c_str(),\n          test_case.total_test_count(),\n          test_case.failed_test_count(),\n          test_case.disabled_test_count());\n  fprintf(out,\n          \"errors=\\\"0\\\" time=\\\"%s\\\">\\n\",\n          FormatTimeInMillisAsSeconds(test_case.elapsed_time()).c_str());\n  for (int i = 0; i < test_case.total_test_count(); ++i) {\n    ::std::stringstream stream;\n    OutputXmlTestInfo(&stream, test_case.name(), *test_case.GetTestInfo(i));\n    fprintf(out, \"%s\", StringStreamToString(&stream).c_str());\n  }\n  fprintf(out, \"  </testsuite>\\n\");\n}\n\n// Prints an XML summary of unit_test to output stream out.\nvoid XmlUnitTestResultPrinter::PrintXmlUnitTest(FILE* out,\n                                                const UnitTest& unit_test) {\n  fprintf(out, \"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\\n\");\n  fprintf(out,\n          \"<testsuites tests=\\\"%d\\\" failures=\\\"%d\\\" disabled=\\\"%d\\\" \"\n          \"errors=\\\"0\\\" time=\\\"%s\\\" \",\n          unit_test.total_test_count(),\n          unit_test.failed_test_count(),\n          unit_test.disabled_test_count(),\n          FormatTimeInMillisAsSeconds(unit_test.elapsed_time()).c_str());\n  if (GTEST_FLAG(shuffle)) {\n    fprintf(out, \"random_seed=\\\"%d\\\" \", unit_test.random_seed());\n  }\n  fprintf(out, \"name=\\\"AllTests\\\">\\n\");\n  for (int i = 0; i < unit_test.total_test_case_count(); ++i)\n    PrintXmlTestCase(out, *unit_test.GetTestCase(i));\n  fprintf(out, \"</testsuites>\\n\");\n}\n\n// Produces a string representing the test properties in a result as space\n// delimited XML attributes based on the property key=\"value\" pairs.\nString 
XmlUnitTestResultPrinter::TestPropertiesAsXmlAttributes(\n    const TestResult& result) {\n  Message attributes;\n  for (int i = 0; i < result.test_property_count(); ++i) {\n    const TestProperty& property = result.GetTestProperty(i);\n    attributes << \" \" << property.key() << \"=\"\n        << \"\\\"\" << EscapeXmlAttribute(property.value()) << \"\\\"\";\n  }\n  return attributes.GetString();\n}\n\n// End XmlUnitTestResultPrinter\n\n#if GTEST_CAN_STREAM_RESULTS_\n\n// Streams test results to the given port on the given host machine.\nclass StreamingListener : public EmptyTestEventListener {\n public:\n  // Escapes '=', '&', '%', and '\\n' characters in str as \"%xx\".\n  static string UrlEncode(const char* str);\n\n  StreamingListener(const string& host, const string& port)\n      : sockfd_(-1), host_name_(host), port_num_(port) {\n    MakeConnection();\n    Send(\"gtest_streaming_protocol_version=1.0\\n\");\n  }\n\n  virtual ~StreamingListener() {\n    if (sockfd_ != -1)\n      CloseConnection();\n  }\n\n  void OnTestProgramStart(const UnitTest& /* unit_test */) {\n    Send(\"event=TestProgramStart\\n\");\n  }\n\n  void OnTestProgramEnd(const UnitTest& unit_test) {\n    // Note that Google Test currently only reports elapsed time for each\n    // test iteration, not for the entire test program.\n    Send(String::Format(\"event=TestProgramEnd&passed=%d\\n\",\n                        unit_test.Passed()));\n\n    // Notify the streaming server to stop.\n    CloseConnection();\n  }\n\n  void OnTestIterationStart(const UnitTest& /* unit_test */, int iteration) {\n    Send(String::Format(\"event=TestIterationStart&iteration=%d\\n\",\n                        iteration));\n  }\n\n  void OnTestIterationEnd(const UnitTest& unit_test, int /* iteration */) {\n    Send(String::Format(\"event=TestIterationEnd&passed=%d&elapsed_time=%sms\\n\",\n                        unit_test.Passed(),\n                        StreamableToString(unit_test.elapsed_time()).c_str()));\n  }\n\n  
void OnTestCaseStart(const TestCase& test_case) {\n    Send(String::Format(\"event=TestCaseStart&name=%s\\n\", test_case.name()));\n  }\n\n  void OnTestCaseEnd(const TestCase& test_case) {\n    Send(String::Format(\"event=TestCaseEnd&passed=%d&elapsed_time=%sms\\n\",\n                        test_case.Passed(),\n                        StreamableToString(test_case.elapsed_time()).c_str()));\n  }\n\n  void OnTestStart(const TestInfo& test_info) {\n    Send(String::Format(\"event=TestStart&name=%s\\n\", test_info.name()));\n  }\n\n  void OnTestEnd(const TestInfo& test_info) {\n    Send(String::Format(\n        \"event=TestEnd&passed=%d&elapsed_time=%sms\\n\",\n        (test_info.result())->Passed(),\n        StreamableToString((test_info.result())->elapsed_time()).c_str()));\n  }\n\n  void OnTestPartResult(const TestPartResult& test_part_result) {\n    const char* file_name = test_part_result.file_name();\n    if (file_name == NULL)\n      file_name = \"\";\n    Send(String::Format(\"event=TestPartResult&file=%s&line=%d&message=\",\n                        UrlEncode(file_name).c_str(),\n                        test_part_result.line_number()));\n    Send(UrlEncode(test_part_result.message()) + \"\\n\");\n  }\n\n private:\n  // Creates a client socket and connects to the server.\n  void MakeConnection();\n\n  // Closes the socket.\n  void CloseConnection() {\n    GTEST_CHECK_(sockfd_ != -1)\n        << \"CloseConnection() can be called only when there is a connection.\";\n\n    close(sockfd_);\n    sockfd_ = -1;\n  }\n\n  // Sends a string to the socket.\n  void Send(const string& message) {\n    GTEST_CHECK_(sockfd_ != -1)\n        << \"Send() can be called only when there is a connection.\";\n\n    const int len = static_cast<int>(message.length());\n    if (write(sockfd_, message.c_str(), len) != len) {\n      GTEST_LOG_(WARNING)\n          << \"stream_result_to: failed to stream to \"\n          << host_name_ << \":\" << port_num_;\n    }\n  }\n\n  int sockfd_;   
// socket file descriptor\n  const string host_name_;\n  const string port_num_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(StreamingListener);\n};  // class StreamingListener\n\n// Checks if str contains '=', '&', '%' or '\\n' characters. If yes,\n// replaces them by \"%xx\" where xx is their hexadecimal value. For\n// example, replaces \"=\" with \"%3D\".  This algorithm is O(strlen(str))\n// in both time and space -- important as the input str may contain an\n// arbitrarily long test failure message and stack trace.\nstring StreamingListener::UrlEncode(const char* str) {\n  string result;\n  result.reserve(strlen(str) + 1);\n  for (char ch = *str; ch != '\\0'; ch = *++str) {\n    switch (ch) {\n      case '%':\n      case '=':\n      case '&':\n      case '\\n':\n        result.append(String::Format(\"%%%02x\", static_cast<unsigned char>(ch)));\n        break;\n      default:\n        result.push_back(ch);\n        break;\n    }\n  }\n  return result;\n}\n\nvoid StreamingListener::MakeConnection() {\n  GTEST_CHECK_(sockfd_ == -1)\n      << \"MakeConnection() can't be called when there is already a connection.\";\n\n  addrinfo hints;\n  memset(&hints, 0, sizeof(hints));\n  hints.ai_family = AF_UNSPEC;    // To allow both IPv4 and IPv6 addresses.\n  hints.ai_socktype = SOCK_STREAM;\n  addrinfo* servinfo = NULL;\n\n  // Use the getaddrinfo() to get a linked list of IP addresses for\n  // the given host name.\n  const int error_num = getaddrinfo(\n      host_name_.c_str(), port_num_.c_str(), &hints, &servinfo);\n  if (error_num != 0) {\n    GTEST_LOG_(WARNING) << \"stream_result_to: getaddrinfo() failed: \"\n                        << gai_strerror(error_num);\n  }\n\n  // Loop through all the results and connect to the first we can.\n  for (addrinfo* cur_addr = servinfo; sockfd_ == -1 && cur_addr != NULL;\n       cur_addr = cur_addr->ai_next) {\n    sockfd_ = socket(\n        cur_addr->ai_family, cur_addr->ai_socktype, cur_addr->ai_protocol);\n    if (sockfd_ != -1) {\n   
   // Connect the client socket to the server socket.\n      if (connect(sockfd_, cur_addr->ai_addr, cur_addr->ai_addrlen) == -1) {\n        close(sockfd_);\n        sockfd_ = -1;\n      }\n    }\n  }\n\n  freeaddrinfo(servinfo);  // all done with this structure\n\n  if (sockfd_ == -1) {\n    GTEST_LOG_(WARNING) << \"stream_result_to: failed to connect to \"\n                        << host_name_ << \":\" << port_num_;\n  }\n}\n\n// End of class StreamingListener\n#endif  // GTEST_CAN_STREAM_RESULTS_\n\n// Class ScopedTrace\n\n// Pushes the given source file location and message onto a per-thread\n// trace stack maintained by Google Test.\n// L < UnitTest::mutex_\nScopedTrace::ScopedTrace(const char* file, int line, const Message& message) {\n  TraceInfo trace;\n  trace.file = file;\n  trace.line = line;\n  trace.message = message.GetString();\n\n  UnitTest::GetInstance()->PushGTestTrace(trace);\n}\n\n// Pops the info pushed by the c'tor.\n// L < UnitTest::mutex_\nScopedTrace::~ScopedTrace() {\n  UnitTest::GetInstance()->PopGTestTrace();\n}\n\n\n// class OsStackTraceGetter\n\n// Returns the current OS stack trace as a String.  Parameters:\n//\n//   max_depth  - the maximum number of stack frames to be included\n//                in the trace.\n//   skip_count - the number of top frames to be skipped; doesn't count\n//                against max_depth.\n//\n// L < mutex_\n// We use \"L < mutex_\" to denote that the function may acquire mutex_.\nString OsStackTraceGetter::CurrentStackTrace(int, int) {\n  return String(\"\");\n}\n\n// L < mutex_\nvoid OsStackTraceGetter::UponLeavingGTest() {\n}\n\nconst char* const\nOsStackTraceGetter::kElidedFramesMarker =\n    \"... 
\" GTEST_NAME_ \" internal frames ...\";\n\n}  // namespace internal\n\n// class TestEventListeners\n\nTestEventListeners::TestEventListeners()\n    : repeater_(new internal::TestEventRepeater()),\n      default_result_printer_(NULL),\n      default_xml_generator_(NULL) {\n}\n\nTestEventListeners::~TestEventListeners() { delete repeater_; }\n\n// Returns the standard listener responsible for the default console\n// output.  Can be removed from the listeners list to shut down default\n// console output.  Note that removing this object from the listener list\n// with Release transfers its ownership to the user.\nvoid TestEventListeners::Append(TestEventListener* listener) {\n  repeater_->Append(listener);\n}\n\n// Removes the given event listener from the list and returns it.  It then\n// becomes the caller's responsibility to delete the listener. Returns\n// NULL if the listener is not found in the list.\nTestEventListener* TestEventListeners::Release(TestEventListener* listener) {\n  if (listener == default_result_printer_)\n    default_result_printer_ = NULL;\n  else if (listener == default_xml_generator_)\n    default_xml_generator_ = NULL;\n  return repeater_->Release(listener);\n}\n\n// Returns repeater that broadcasts the TestEventListener events to all\n// subscribers.\nTestEventListener* TestEventListeners::repeater() { return repeater_; }\n\n// Sets the default_result_printer attribute to the provided listener.\n// The listener is also added to the listener list and previous\n// default_result_printer is removed from it and deleted. The listener can\n// also be NULL in which case it will not be added to the list. 
Does\n// nothing if the previous and the current listener objects are the same.\nvoid TestEventListeners::SetDefaultResultPrinter(TestEventListener* listener) {\n  if (default_result_printer_ != listener) {\n    // It is an error to pass this method a listener that is already in the\n    // list.\n    delete Release(default_result_printer_);\n    default_result_printer_ = listener;\n    if (listener != NULL)\n      Append(listener);\n  }\n}\n\n// Sets the default_xml_generator attribute to the provided listener.  The\n// listener is also added to the listener list and previous\n// default_xml_generator is removed from it and deleted. The listener can\n// also be NULL in which case it will not be added to the list. Does\n// nothing if the previous and the current listener objects are the same.\nvoid TestEventListeners::SetDefaultXmlGenerator(TestEventListener* listener) {\n  if (default_xml_generator_ != listener) {\n    // It is an error to pass this method a listener that is already in the\n    // list.\n    delete Release(default_xml_generator_);\n    default_xml_generator_ = listener;\n    if (listener != NULL)\n      Append(listener);\n  }\n}\n\n// Controls whether events will be forwarded by the repeater to the\n// listeners in the list.\nbool TestEventListeners::EventForwardingEnabled() const {\n  return repeater_->forwarding_enabled();\n}\n\nvoid TestEventListeners::SuppressEventForwarding() {\n  repeater_->set_forwarding_enabled(false);\n}\n\n// class UnitTest\n\n// Gets the singleton UnitTest object.  The first time this method is\n// called, a UnitTest object is constructed and returned.  
Consecutive\n// calls will return the same object.\n//\n// We don't protect this under mutex_ as a user is not supposed to\n// call this before main() starts, from which point on the return\n// value will never change.\nUnitTest * UnitTest::GetInstance() {\n  // When compiled with MSVC 7.1 in optimized mode, destroying the\n  // UnitTest object upon exiting the program messes up the exit code,\n  // causing successful tests to appear failed.  We have to use a\n  // different implementation in this case to bypass the compiler bug.\n  // This implementation makes the compiler happy, at the cost of\n  // leaking the UnitTest object.\n\n  // CodeGear C++Builder insists on a public destructor for the\n  // default implementation.  Use this implementation to keep good OO\n  // design with private destructor.\n\n#if (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__)\n  static UnitTest* const instance = new UnitTest;\n  return instance;\n#else\n  static UnitTest instance;\n  return &instance;\n#endif  // (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__)\n}\n\n// Gets the number of successful test cases.\nint UnitTest::successful_test_case_count() const {\n  return impl()->successful_test_case_count();\n}\n\n// Gets the number of failed test cases.\nint UnitTest::failed_test_case_count() const {\n  return impl()->failed_test_case_count();\n}\n\n// Gets the number of all test cases.\nint UnitTest::total_test_case_count() const {\n  return impl()->total_test_case_count();\n}\n\n// Gets the number of all test cases that contain at least one test\n// that should run.\nint UnitTest::test_case_to_run_count() const {\n  return impl()->test_case_to_run_count();\n}\n\n// Gets the number of successful tests.\nint UnitTest::successful_test_count() const {\n  return impl()->successful_test_count();\n}\n\n// Gets the number of failed tests.\nint UnitTest::failed_test_count() const { return impl()->failed_test_count(); }\n\n// Gets the number of disabled 
tests.\nint UnitTest::disabled_test_count() const {\n  return impl()->disabled_test_count();\n}\n\n// Gets the number of all tests.\nint UnitTest::total_test_count() const { return impl()->total_test_count(); }\n\n// Gets the number of tests that should run.\nint UnitTest::test_to_run_count() const { return impl()->test_to_run_count(); }\n\n// Gets the elapsed time, in milliseconds.\ninternal::TimeInMillis UnitTest::elapsed_time() const {\n  return impl()->elapsed_time();\n}\n\n// Returns true iff the unit test passed (i.e. all test cases passed).\nbool UnitTest::Passed() const { return impl()->Passed(); }\n\n// Returns true iff the unit test failed (i.e. some test case failed\n// or something outside of all tests failed).\nbool UnitTest::Failed() const { return impl()->Failed(); }\n\n// Gets the i-th test case among all the test cases. i can range from 0 to\n// total_test_case_count() - 1. If i is not in that range, returns NULL.\nconst TestCase* UnitTest::GetTestCase(int i) const {\n  return impl()->GetTestCase(i);\n}\n\n// Gets the i-th test case among all the test cases. i can range from 0 to\n// total_test_case_count() - 1. If i is not in that range, returns NULL.\nTestCase* UnitTest::GetMutableTestCase(int i) {\n  return impl()->GetMutableTestCase(i);\n}\n\n// Returns the list of event listeners that can be used to track events\n// inside Google Test.\nTestEventListeners& UnitTest::listeners() {\n  return *impl()->listeners();\n}\n\n// Registers and returns a global test environment.  When a test\n// program is run, all global test environments will be set-up in the\n// order they were registered.  
After all tests in the program have\n// finished, all global test environments will be torn-down in the\n// *reverse* order they were registered.\n//\n// The UnitTest object takes ownership of the given environment.\n//\n// We don't protect this under mutex_, as we only support calling it\n// from the main thread.\nEnvironment* UnitTest::AddEnvironment(Environment* env) {\n  if (env == NULL) {\n    return NULL;\n  }\n\n  impl_->environments().push_back(env);\n  return env;\n}\n\n// Adds a TestPartResult to the current TestResult object.  All Google Test\n// assertion macros (e.g. ASSERT_TRUE, EXPECT_EQ, etc) eventually call\n// this to report their results.  The user code should use the\n// assertion macros instead of calling this directly.\n// L < mutex_\nvoid UnitTest::AddTestPartResult(TestPartResult::Type result_type,\n                                 const char* file_name,\n                                 int line_number,\n                                 const internal::String& message,\n                                 const internal::String& os_stack_trace) {\n  Message msg;\n  msg << message;\n\n  internal::MutexLock lock(&mutex_);\n  if (impl_->gtest_trace_stack().size() > 0) {\n    msg << \"\\n\" << GTEST_NAME_ << \" trace:\";\n\n    for (int i = static_cast<int>(impl_->gtest_trace_stack().size());\n         i > 0; --i) {\n      const internal::TraceInfo& trace = impl_->gtest_trace_stack()[i - 1];\n      msg << \"\\n\" << internal::FormatFileLocation(trace.file, trace.line)\n          << \" \" << trace.message;\n    }\n  }\n\n  if (os_stack_trace.c_str() != NULL && !os_stack_trace.empty()) {\n    msg << internal::kStackTraceMarker << os_stack_trace;\n  }\n\n  const TestPartResult result =\n    TestPartResult(result_type, file_name, line_number,\n                   msg.GetString().c_str());\n  impl_->GetTestPartResultReporterForCurrentThread()->\n      ReportTestPartResult(result);\n\n  if (result_type != TestPartResult::kSuccess) {\n    // 
gtest_break_on_failure takes precedence over\n    // gtest_throw_on_failure.  This allows a user to set the latter\n    // in the code (perhaps in order to use Google Test assertions\n    // with another testing framework) and specify the former on the\n    // command line for debugging.\n    if (GTEST_FLAG(break_on_failure)) {\n#if GTEST_OS_WINDOWS\n      // Using DebugBreak on Windows allows gtest to still break into a debugger\n      // when a failure happens and both the --gtest_break_on_failure and\n      // the --gtest_catch_exceptions flags are specified.\n      DebugBreak();\n#else\n      // Dereference NULL through a volatile pointer to prevent the compiler\n      // from removing. We use this rather than abort() or __builtin_trap() for\n      // portability: Symbian doesn't implement abort() well, and some debuggers\n      // don't correctly trap abort().\n      *static_cast<volatile int*>(NULL) = 1;\n#endif  // GTEST_OS_WINDOWS\n    } else if (GTEST_FLAG(throw_on_failure)) {\n#if GTEST_HAS_EXCEPTIONS\n      throw GoogleTestFailureException(result);\n#else\n      // We cannot call abort() as it generates a pop-up in debug mode\n      // that cannot be suppressed in VC 7.1 or below.\n      exit(1);\n#endif\n    }\n  }\n}\n\n// Creates and adds a property to the current TestResult. If a property matching\n// the supplied value already exists, updates its value instead.\nvoid UnitTest::RecordPropertyForCurrentTest(const char* key,\n                                            const char* value) {\n  const TestProperty test_property(key, value);\n  impl_->current_test_result()->RecordProperty(test_property);\n}\n\n// Runs all tests in this UnitTest object and prints the result.\n// Returns 0 if successful, or 1 otherwise.\n//\n// We don't protect this under mutex_, as we only support calling it\n// from the main thread.\nint UnitTest::Run() {\n  // Captures the value of GTEST_FLAG(catch_exceptions).  
This value will be\n  // used for the duration of the program.\n  impl()->set_catch_exceptions(GTEST_FLAG(catch_exceptions));\n\n#if GTEST_HAS_SEH\n  const bool in_death_test_child_process =\n      internal::GTEST_FLAG(internal_run_death_test).length() > 0;\n\n  // Either the user wants Google Test to catch exceptions thrown by the\n  // tests or this is executing in the context of death test child\n  // process. In either case the user does not want to see pop-up dialogs\n  // about crashes - they are expected.\n  if (impl()->catch_exceptions() || in_death_test_child_process) {\n\n# if !GTEST_OS_WINDOWS_MOBILE\n    // SetErrorMode doesn't exist on CE.\n    SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOALIGNMENTFAULTEXCEPT |\n                 SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX);\n# endif  // !GTEST_OS_WINDOWS_MOBILE\n\n# if (defined(_MSC_VER) || GTEST_OS_WINDOWS_MINGW) && !GTEST_OS_WINDOWS_MOBILE\n    // Death test children can be terminated with _abort().  On Windows,\n    // _abort() can show a dialog with a warning message.  This forces the\n    // abort message to go to stderr instead.\n    _set_error_mode(_OUT_TO_STDERR);\n# endif\n\n# if _MSC_VER >= 1400 && !GTEST_OS_WINDOWS_MOBILE\n    // In the debug version, Visual Studio pops up a separate dialog\n    // offering a choice to debug the aborted program. We need to suppress\n    // this dialog or it will pop up for every EXPECT/ASSERT_DEATH statement\n    // executed. 
Google Test will notify the user of any unexpected\n    // failure via stderr.\n    //\n    // VC++ doesn't define _set_abort_behavior() prior to the version 8.0.\n    // Users of prior VC versions shall suffer the agony and pain of\n    // clicking through the countless debug dialogs.\n    // TODO(vladl@google.com): find a way to suppress the abort dialog() in the\n    // debug mode when compiled with VC 7.1 or lower.\n    if (!GTEST_FLAG(break_on_failure))\n      _set_abort_behavior(\n          0x0,                                    // Clear the following flags:\n          _WRITE_ABORT_MSG | _CALL_REPORTFAULT);  // pop-up window, core dump.\n# endif\n\n  }\n#endif  // GTEST_HAS_SEH\n\n  return internal::HandleExceptionsInMethodIfSupported(\n      impl(),\n      &internal::UnitTestImpl::RunAllTests,\n      \"auxiliary test code (environments or event listeners)\") ? 0 : 1;\n}\n\n// Returns the working directory when the first TEST() or TEST_F() was\n// executed.\nconst char* UnitTest::original_working_dir() const {\n  return impl_->original_working_dir_.c_str();\n}\n\n// Returns the TestCase object for the test that's currently running,\n// or NULL if no test is running.\n// L < mutex_\nconst TestCase* UnitTest::current_test_case() const {\n  internal::MutexLock lock(&mutex_);\n  return impl_->current_test_case();\n}\n\n// Returns the TestInfo object for the test that's currently running,\n// or NULL if no test is running.\n// L < mutex_\nconst TestInfo* UnitTest::current_test_info() const {\n  internal::MutexLock lock(&mutex_);\n  return impl_->current_test_info();\n}\n\n// Returns the random seed used at the start of the current test run.\nint UnitTest::random_seed() const { return impl_->random_seed(); }\n\n#if GTEST_HAS_PARAM_TEST\n// Returns ParameterizedTestCaseRegistry object used to keep track of\n// value-parameterized tests and instantiate and register them.\n// L < mutex_\ninternal::ParameterizedTestCaseRegistry&\n    
UnitTest::parameterized_test_registry() {\n  return impl_->parameterized_test_registry();\n}\n#endif  // GTEST_HAS_PARAM_TEST\n\n// Creates an empty UnitTest.\nUnitTest::UnitTest() {\n  impl_ = new internal::UnitTestImpl(this);\n}\n\n// Destructor of UnitTest.\nUnitTest::~UnitTest() {\n  delete impl_;\n}\n\n// Pushes a trace defined by SCOPED_TRACE() on to the per-thread\n// Google Test trace stack.\n// L < mutex_\nvoid UnitTest::PushGTestTrace(const internal::TraceInfo& trace) {\n  internal::MutexLock lock(&mutex_);\n  impl_->gtest_trace_stack().push_back(trace);\n}\n\n// Pops a trace from the per-thread Google Test trace stack.\n// L < mutex_\nvoid UnitTest::PopGTestTrace() {\n  internal::MutexLock lock(&mutex_);\n  impl_->gtest_trace_stack().pop_back();\n}\n\nnamespace internal {\n\nUnitTestImpl::UnitTestImpl(UnitTest* parent)\n    : parent_(parent),\n#ifdef _MSC_VER\n# pragma warning(push)                    // Saves the current warning state.\n# pragma warning(disable:4355)            // Temporarily disables warning 4355\n                                         // (using this in initializer).\n      default_global_test_part_result_reporter_(this),\n      default_per_thread_test_part_result_reporter_(this),\n# pragma warning(pop)                     // Restores the warning state again.\n#else\n      default_global_test_part_result_reporter_(this),\n      default_per_thread_test_part_result_reporter_(this),\n#endif  // _MSC_VER\n      global_test_part_result_repoter_(\n          &default_global_test_part_result_reporter_),\n      per_thread_test_part_result_reporter_(\n          &default_per_thread_test_part_result_reporter_),\n#if GTEST_HAS_PARAM_TEST\n      parameterized_test_registry_(),\n      parameterized_tests_registered_(false),\n#endif  // GTEST_HAS_PARAM_TEST\n      last_death_test_case_(-1),\n      current_test_case_(NULL),\n      current_test_info_(NULL),\n      ad_hoc_test_result_(),\n      os_stack_trace_getter_(NULL),\n      
post_flag_parse_init_performed_(false),\n      random_seed_(0),  // Will be overridden by the flag before first use.\n      random_(0),  // Will be reseeded before first use.\n      elapsed_time_(0),\n#if GTEST_HAS_DEATH_TEST\n      internal_run_death_test_flag_(NULL),\n      death_test_factory_(new DefaultDeathTestFactory),\n#endif\n      // Will be overridden by the flag before first use.\n      catch_exceptions_(false) {\n  listeners()->SetDefaultResultPrinter(new PrettyUnitTestResultPrinter);\n}\n\nUnitTestImpl::~UnitTestImpl() {\n  // Deletes every TestCase.\n  ForEach(test_cases_, internal::Delete<TestCase>);\n\n  // Deletes every Environment.\n  ForEach(environments_, internal::Delete<Environment>);\n\n  delete os_stack_trace_getter_;\n}\n\n#if GTEST_HAS_DEATH_TEST\n// Disables event forwarding if the control is currently in a death test\n// subprocess. Must not be called before InitGoogleTest.\nvoid UnitTestImpl::SuppressTestEventsIfInSubprocess() {\n  if (internal_run_death_test_flag_.get() != NULL)\n    listeners()->SuppressEventForwarding();\n}\n#endif  // GTEST_HAS_DEATH_TEST\n\n// Initializes event listeners performing XML output as specified by\n// UnitTestOptions. 
Must not be called before InitGoogleTest.\nvoid UnitTestImpl::ConfigureXmlOutput() {\n  const String& output_format = UnitTestOptions::GetOutputFormat();\n  if (output_format == \"xml\") {\n    listeners()->SetDefaultXmlGenerator(new XmlUnitTestResultPrinter(\n        UnitTestOptions::GetAbsolutePathToOutputFile().c_str()));\n  } else if (output_format != \"\") {\n    printf(\"WARNING: unrecognized output format \\\"%s\\\" ignored.\\n\",\n           output_format.c_str());\n    fflush(stdout);\n  }\n}\n\n#if GTEST_CAN_STREAM_RESULTS_\n// Initializes event listeners for streaming test results in String form.\n// Must not be called before InitGoogleTest.\nvoid UnitTestImpl::ConfigureStreamingOutput() {\n  const string& target = GTEST_FLAG(stream_result_to);\n  if (!target.empty()) {\n    const size_t pos = target.find(':');\n    if (pos != string::npos) {\n      listeners()->Append(new StreamingListener(target.substr(0, pos),\n                                                target.substr(pos+1)));\n    } else {\n      printf(\"WARNING: unrecognized streaming target \\\"%s\\\" ignored.\\n\",\n             target.c_str());\n      fflush(stdout);\n    }\n  }\n}\n#endif  // GTEST_CAN_STREAM_RESULTS_\n\n// Performs initialization dependent upon flag values obtained in\n// ParseGoogleTestFlagsOnly.  Is called from InitGoogleTest after the call to\n// ParseGoogleTestFlagsOnly.  In case a user neglects to call InitGoogleTest\n// this function is also called from RunAllTests.  Since this function can be\n// called more than once, it has to be idempotent.\nvoid UnitTestImpl::PostFlagParsingInit() {\n  // Ensures that this function does not execute more than once.\n  if (!post_flag_parse_init_performed_) {\n    post_flag_parse_init_performed_ = true;\n\n#if GTEST_HAS_DEATH_TEST\n    InitDeathTestSubprocessControlInfo();\n    SuppressTestEventsIfInSubprocess();\n#endif  // GTEST_HAS_DEATH_TEST\n\n    // Registers parameterized tests. 
This makes parameterized tests\n    // available to the UnitTest reflection API without running\n    // RUN_ALL_TESTS.\n    RegisterParameterizedTests();\n\n    // Configures listeners for XML output. This makes it possible for users\n    // to shut down the default XML output before invoking RUN_ALL_TESTS.\n    ConfigureXmlOutput();\n\n#if GTEST_CAN_STREAM_RESULTS_\n    // Configures listeners for streaming test results to the specified server.\n    ConfigureStreamingOutput();\n#endif  // GTEST_CAN_STREAM_RESULTS_\n  }\n}\n\n// A predicate that checks the name of a TestCase against a known\n// value.\n//\n// This is used for implementation of the UnitTest class only.  We put\n// it in the anonymous namespace to prevent polluting the outer\n// namespace.\n//\n// TestCaseNameIs is copyable.\nclass TestCaseNameIs {\n public:\n  // Constructor.\n  explicit TestCaseNameIs(const String& name)\n      : name_(name) {}\n\n  // Returns true iff the name of test_case matches name_.\n  bool operator()(const TestCase* test_case) const {\n    return test_case != NULL && strcmp(test_case->name(), name_.c_str()) == 0;\n  }\n\n private:\n  String name_;\n};\n\n// Finds and returns a TestCase with the given name.  If one doesn't\n// exist, creates one and returns it.  
It's the CALLER'S\n// RESPONSIBILITY to ensure that this function is only called WHEN THE\n// TESTS ARE NOT SHUFFLED.\n//\n// Arguments:\n//\n//   test_case_name: name of the test case\n//   type_param:     the name of the test case's type parameter, or NULL if\n//                   this is not a typed or a type-parameterized test case.\n//   set_up_tc:      pointer to the function that sets up the test case\n//   tear_down_tc:   pointer to the function that tears down the test case\nTestCase* UnitTestImpl::GetTestCase(const char* test_case_name,\n                                    const char* type_param,\n                                    Test::SetUpTestCaseFunc set_up_tc,\n                                    Test::TearDownTestCaseFunc tear_down_tc) {\n  // Can we find a TestCase with the given name?\n  const std::vector<TestCase*>::const_iterator test_case =\n      std::find_if(test_cases_.begin(), test_cases_.end(),\n                   TestCaseNameIs(test_case_name));\n\n  if (test_case != test_cases_.end())\n    return *test_case;\n\n  // No.  Let's create one.\n  TestCase* const new_test_case =\n      new TestCase(test_case_name, type_param, set_up_tc, tear_down_tc);\n\n  // Is this a death test case?\n  if (internal::UnitTestOptions::MatchesFilter(String(test_case_name),\n                                               kDeathTestCaseFilter)) {\n    // Yes.  Inserts the test case after the last death test case\n    // defined so far.  This only works when the test cases haven't\n    // been shuffled.  Otherwise we may end up running a death test\n    // after a non-death test.\n    ++last_death_test_case_;\n    test_cases_.insert(test_cases_.begin() + last_death_test_case_,\n                       new_test_case);\n  } else {\n    // No.  
Appends to the end of the list.\n    test_cases_.push_back(new_test_case);\n  }\n\n  test_case_indices_.push_back(static_cast<int>(test_case_indices_.size()));\n  return new_test_case;\n}\n\n// Helpers for setting up / tearing down the given environment.  They\n// are for use in the ForEach() function.\nstatic void SetUpEnvironment(Environment* env) { env->SetUp(); }\nstatic void TearDownEnvironment(Environment* env) { env->TearDown(); }\n\n// Runs all tests in this UnitTest object, prints the result, and\n// returns true if all tests are successful.  If any exception is\n// thrown during a test, the test is considered to be failed, but the\n// rest of the tests will still be run.\n//\n// When parameterized tests are enabled, it expands and registers\n// parameterized tests first in RegisterParameterizedTests().\n// All other functions called from RunAllTests() may safely assume that\n// parameterized tests are ready to be counted and run.\nbool UnitTestImpl::RunAllTests() {\n  // Makes sure InitGoogleTest() was called.\n  if (!GTestIsInitialized()) {\n    printf(\"%s\",\n           \"\\nThis test program did NOT call ::testing::InitGoogleTest \"\n           \"before calling RUN_ALL_TESTS().  
Please fix it.\\n\");\n    return false;\n  }\n\n  // Do not run any test if the --help flag was specified.\n  if (g_help_flag)\n    return true;\n\n  // Repeats the call to the post-flag parsing initialization in case the\n  // user didn't call InitGoogleTest.\n  PostFlagParsingInit();\n\n  // Even if sharding is not on, test runners may want to use the\n  // GTEST_SHARD_STATUS_FILE to query whether the test supports the sharding\n  // protocol.\n  internal::WriteToShardStatusFileIfNeeded();\n\n  // True iff we are in a subprocess for running a thread-safe-style\n  // death test.\n  bool in_subprocess_for_death_test = false;\n\n#if GTEST_HAS_DEATH_TEST\n  in_subprocess_for_death_test = (internal_run_death_test_flag_.get() != NULL);\n#endif  // GTEST_HAS_DEATH_TEST\n\n  const bool should_shard = ShouldShard(kTestTotalShards, kTestShardIndex,\n                                        in_subprocess_for_death_test);\n\n  // Compares the full test names with the filter to decide which\n  // tests to run.\n  const bool has_tests_to_run = FilterTests(should_shard\n                                              ? HONOR_SHARDING_PROTOCOL\n                                              : IGNORE_SHARDING_PROTOCOL) > 0;\n\n  // Lists the tests and exits if the --gtest_list_tests flag was specified.\n  if (GTEST_FLAG(list_tests)) {\n    // This must be called *after* FilterTests() has been called.\n    ListTestsMatchingFilter();\n    return true;\n  }\n\n  random_seed_ = GTEST_FLAG(shuffle) ?\n      GetRandomSeedFromFlag(GTEST_FLAG(random_seed)) : 0;\n\n  // True iff at least one test has failed.\n  bool failed = false;\n\n  TestEventListener* repeater = listeners()->repeater();\n\n  repeater->OnTestProgramStart(*parent_);\n\n  // How many times to repeat the tests?  We don't want to repeat them\n  // when we are inside the subprocess of a death test.\n  const int repeat = in_subprocess_for_death_test ? 
1 : GTEST_FLAG(repeat);\n  // Repeats forever if the repeat count is negative.\n  const bool forever = repeat < 0;\n  for (int i = 0; forever || i != repeat; i++) {\n    // We want to preserve failures generated by ad-hoc test\n    // assertions executed before RUN_ALL_TESTS().\n    ClearNonAdHocTestResult();\n\n    const TimeInMillis start = GetTimeInMillis();\n\n    // Shuffles test cases and tests if requested.\n    if (has_tests_to_run && GTEST_FLAG(shuffle)) {\n      random()->Reseed(random_seed_);\n      // This should be done before calling OnTestIterationStart(),\n      // such that a test event listener can see the actual test order\n      // in the event.\n      ShuffleTests();\n    }\n\n    // Tells the unit test event listeners that the tests are about to start.\n    repeater->OnTestIterationStart(*parent_, i);\n\n    // Runs each test case if there is at least one test to run.\n    if (has_tests_to_run) {\n      // Sets up all environments beforehand.\n      repeater->OnEnvironmentsSetUpStart(*parent_);\n      ForEach(environments_, SetUpEnvironment);\n      repeater->OnEnvironmentsSetUpEnd(*parent_);\n\n      // Runs the tests only if there was no fatal failure during global\n      // set-up.\n      if (!Test::HasFatalFailure()) {\n        for (int test_index = 0; test_index < total_test_case_count();\n             test_index++) {\n          GetMutableTestCase(test_index)->Run();\n        }\n      }\n\n      // Tears down all environments in reverse order afterwards.\n      repeater->OnEnvironmentsTearDownStart(*parent_);\n      std::for_each(environments_.rbegin(), environments_.rend(),\n                    TearDownEnvironment);\n      repeater->OnEnvironmentsTearDownEnd(*parent_);\n    }\n\n    elapsed_time_ = GetTimeInMillis() - start;\n\n    // Tells the unit test event listener that the tests have just finished.\n    repeater->OnTestIterationEnd(*parent_, i);\n\n    // Gets the result and clears it.\n    if (!Passed()) {\n      failed = true;\n   
 }\n\n    // Restores the original test order after the iteration.  This\n    // allows the user to quickly repro a failure that happens in the\n    // N-th iteration without repeating the first (N - 1) iterations.\n    // This is not enclosed in \"if (GTEST_FLAG(shuffle)) { ... }\", in\n    // case the user somehow changes the value of the flag somewhere\n    // (it's always safe to unshuffle the tests).\n    UnshuffleTests();\n\n    if (GTEST_FLAG(shuffle)) {\n      // Picks a new random seed for each iteration.\n      random_seed_ = GetNextRandomSeed(random_seed_);\n    }\n  }\n\n  repeater->OnTestProgramEnd(*parent_);\n\n  return !failed;\n}\n\n// Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file\n// if the variable is present. If a file already exists at this location, this\n// function will write over it. If the variable is present, but the file cannot\n// be created, prints an error and exits.\nvoid WriteToShardStatusFileIfNeeded() {\n  const char* const test_shard_file = posix::GetEnv(kTestShardStatusFile);\n  if (test_shard_file != NULL) {\n    FILE* const file = posix::FOpen(test_shard_file, \"w\");\n    if (file == NULL) {\n      ColoredPrintf(COLOR_RED,\n                    \"Could not write to the test shard status file \\\"%s\\\" \"\n                    \"specified by the %s environment variable.\\n\",\n                    test_shard_file, kTestShardStatusFile);\n      fflush(stdout);\n      exit(EXIT_FAILURE);\n    }\n    fclose(file);\n  }\n}\n\n// Checks whether sharding is enabled by examining the relevant\n// environment variable values. If the variables are present,\n// but inconsistent (i.e., shard_index >= total_shards), prints\n// an error and exits. If in_subprocess_for_death_test, sharding is\n// disabled because it must only be applied to the original test\n// process. 
Otherwise, we could filter out death tests we intended to execute.\nbool ShouldShard(const char* total_shards_env,\n                 const char* shard_index_env,\n                 bool in_subprocess_for_death_test) {\n  if (in_subprocess_for_death_test) {\n    return false;\n  }\n\n  const Int32 total_shards = Int32FromEnvOrDie(total_shards_env, -1);\n  const Int32 shard_index = Int32FromEnvOrDie(shard_index_env, -1);\n\n  if (total_shards == -1 && shard_index == -1) {\n    return false;\n  } else if (total_shards == -1 && shard_index != -1) {\n    const Message msg = Message()\n      << \"Invalid environment variables: you have \"\n      << kTestShardIndex << \" = \" << shard_index\n      << \", but have left \" << kTestTotalShards << \" unset.\\n\";\n    // Passes the message through \"%s\" so any '%' in it is printed literally\n    // instead of being interpreted as a format specifier.\n    ColoredPrintf(COLOR_RED, \"%s\", msg.GetString().c_str());\n    fflush(stdout);\n    exit(EXIT_FAILURE);\n  } else if (total_shards != -1 && shard_index == -1) {\n    const Message msg = Message()\n      << \"Invalid environment variables: you have \"\n      << kTestTotalShards << \" = \" << total_shards\n      << \", but have left \" << kTestShardIndex << \" unset.\\n\";\n    ColoredPrintf(COLOR_RED, \"%s\", msg.GetString().c_str());\n    fflush(stdout);\n    exit(EXIT_FAILURE);\n  } else if (shard_index < 0 || shard_index >= total_shards) {\n    const Message msg = Message()\n      << \"Invalid environment variables: we require 0 <= \"\n      << kTestShardIndex << \" < \" << kTestTotalShards\n      << \", but you have \" << kTestShardIndex << \"=\" << shard_index\n      << \", \" << kTestTotalShards << \"=\" << total_shards << \".\\n\";\n    ColoredPrintf(COLOR_RED, \"%s\", msg.GetString().c_str());\n    fflush(stdout);\n    exit(EXIT_FAILURE);\n  }\n\n  return total_shards > 1;\n}\n\n// Parses the environment variable var as an Int32. If it is unset,\n// returns default_val. 
If it is not an Int32, prints an error\n// and aborts.\nInt32 Int32FromEnvOrDie(const char* var, Int32 default_val) {\n  const char* str_val = posix::GetEnv(var);\n  if (str_val == NULL) {\n    return default_val;\n  }\n\n  Int32 result;\n  if (!ParseInt32(Message() << \"The value of environment variable \" << var,\n                  str_val, &result)) {\n    exit(EXIT_FAILURE);\n  }\n  return result;\n}\n\n// Given the total number of shards, the shard index, and the test id,\n// returns true iff the test should be run on this shard. The test id is\n// some arbitrary but unique non-negative integer assigned to each test\n// method. Assumes that 0 <= shard_index < total_shards.\nbool ShouldRunTestOnShard(int total_shards, int shard_index, int test_id) {\n  return (test_id % total_shards) == shard_index;\n}\n\n// Compares the name of each test with the user-specified filter to\n// decide whether the test should be run, then records the result in\n// each TestCase and TestInfo object.\n// If shard_tests == true, further filters tests based on sharding\n// variables in the environment - see\n// http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide.\n// Returns the number of tests that should run.\nint UnitTestImpl::FilterTests(ReactionToSharding shard_tests) {\n  const Int32 total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ?\n      Int32FromEnvOrDie(kTestTotalShards, -1) : -1;\n  const Int32 shard_index = shard_tests == HONOR_SHARDING_PROTOCOL ?\n      Int32FromEnvOrDie(kTestShardIndex, -1) : -1;\n\n  // num_runnable_tests are the number of tests that will\n  // run across all shards (i.e., match filter and are not disabled).\n  // num_selected_tests are the number of tests to be run on\n  // this shard.\n  int num_runnable_tests = 0;\n  int num_selected_tests = 0;\n  for (size_t i = 0; i < test_cases_.size(); i++) {\n    TestCase* const test_case = test_cases_[i];\n    const String &test_case_name = test_case->name();\n    
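// Illustration (example added for exposition; not in upstream Google Test):\n    // with GTEST_TOTAL_SHARDS=3 and GTEST_SHARD_INDEX=1, runnable tests are\n    // numbered 0, 1, 2, ... in declaration order, and ShouldRunTestOnShard()\n    // above keeps exactly those whose id % 3 == 1 (ids 1, 4, 7, ...); each\n    // shard thus runs a disjoint subset, and the shards together run every\n    // runnable test exactly once.\n    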
test_case->set_should_run(false);\n\n    for (size_t j = 0; j < test_case->test_info_list().size(); j++) {\n      TestInfo* const test_info = test_case->test_info_list()[j];\n      const String test_name(test_info->name());\n      // A test is disabled if test case name or test name matches\n      // kDisableTestFilter.\n      const bool is_disabled =\n          internal::UnitTestOptions::MatchesFilter(test_case_name,\n                                                   kDisableTestFilter) ||\n          internal::UnitTestOptions::MatchesFilter(test_name,\n                                                   kDisableTestFilter);\n      test_info->is_disabled_ = is_disabled;\n\n      const bool matches_filter =\n          internal::UnitTestOptions::FilterMatchesTest(test_case_name,\n                                                       test_name);\n      test_info->matches_filter_ = matches_filter;\n\n      const bool is_runnable =\n          (GTEST_FLAG(also_run_disabled_tests) || !is_disabled) &&\n          matches_filter;\n\n      const bool is_selected = is_runnable &&\n          (shard_tests == IGNORE_SHARDING_PROTOCOL ||\n           ShouldRunTestOnShard(total_shards, shard_index,\n                                num_runnable_tests));\n\n      num_runnable_tests += is_runnable;\n      num_selected_tests += is_selected;\n\n      test_info->should_run_ = is_selected;\n      test_case->set_should_run(test_case->should_run() || is_selected);\n    }\n  }\n  return num_selected_tests;\n}\n\n// Prints the names of the tests matching the user-specified filter flag.\nvoid UnitTestImpl::ListTestsMatchingFilter() {\n  for (size_t i = 0; i < test_cases_.size(); i++) {\n    const TestCase* const test_case = test_cases_[i];\n    bool printed_test_case_name = false;\n\n    for (size_t j = 0; j < test_case->test_info_list().size(); j++) {\n      const TestInfo* const test_info =\n          test_case->test_info_list()[j];\n      if (test_info->matches_filter_) {\n        if 
(!printed_test_case_name) {\n          printed_test_case_name = true;\n          printf(\"%s.\\n\", test_case->name());\n        }\n        printf(\"  %s\\n\", test_info->name());\n      }\n    }\n  }\n  fflush(stdout);\n}\n\n// Sets the OS stack trace getter.\n//\n// Does nothing if the input and the current OS stack trace getter are\n// the same; otherwise, deletes the old getter and makes the input the\n// current getter.\nvoid UnitTestImpl::set_os_stack_trace_getter(\n    OsStackTraceGetterInterface* getter) {\n  if (os_stack_trace_getter_ != getter) {\n    delete os_stack_trace_getter_;\n    os_stack_trace_getter_ = getter;\n  }\n}\n\n// Returns the current OS stack trace getter if it is not NULL;\n// otherwise, creates an OsStackTraceGetter, makes it the current\n// getter, and returns it.\nOsStackTraceGetterInterface* UnitTestImpl::os_stack_trace_getter() {\n  if (os_stack_trace_getter_ == NULL) {\n    os_stack_trace_getter_ = new OsStackTraceGetter;\n  }\n\n  return os_stack_trace_getter_;\n}\n\n// Returns the TestResult for the test that's currently running, or\n// the TestResult for the ad hoc test if no test is running.\nTestResult* UnitTestImpl::current_test_result() {\n  return current_test_info_ ?\n      &(current_test_info_->result_) : &ad_hoc_test_result_;\n}\n\n// Shuffles all test cases, and the tests within each test case,\n// making sure that death tests are still run first.\nvoid UnitTestImpl::ShuffleTests() {\n  // Shuffles the death test cases.\n  ShuffleRange(random(), 0, last_death_test_case_ + 1, &test_case_indices_);\n\n  // Shuffles the non-death test cases.\n  ShuffleRange(random(), last_death_test_case_ + 1,\n               static_cast<int>(test_cases_.size()), &test_case_indices_);\n\n  // Shuffles the tests inside each test case.\n  for (size_t i = 0; i < test_cases_.size(); i++) {\n    test_cases_[i]->ShuffleTests(random());\n  }\n}\n\n// Restores the test cases and tests to their order before the first shuffle.\nvoid 
UnitTestImpl::UnshuffleTests() {\n  for (size_t i = 0; i < test_cases_.size(); i++) {\n    // Unshuffles the tests in each test case.\n    test_cases_[i]->UnshuffleTests();\n    // Resets the index of each test case.\n    test_case_indices_[i] = static_cast<int>(i);\n  }\n}\n\n// Returns the current OS stack trace as a String.\n//\n// The maximum number of stack frames to be included is specified by\n// the gtest_stack_trace_depth flag.  The skip_count parameter\n// specifies the number of top frames to be skipped, which doesn't\n// count against the number of frames to be included.\n//\n// For example, if Foo() calls Bar(), which in turn calls\n// GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in\n// the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't.\nString GetCurrentOsStackTraceExceptTop(UnitTest* /*unit_test*/,\n                                       int skip_count) {\n  // We pass skip_count + 1 to skip this wrapper function in addition\n  // to what the user really wants to skip.\n  return GetUnitTestImpl()->CurrentOsStackTraceExceptTop(skip_count + 1);\n}\n\n// Used by the GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_ macro to\n// suppress unreachable code warnings.\nnamespace {\nclass ClassUniqueToAlwaysTrue {};\n}\n\nbool IsTrue(bool condition) { return condition; }\n\nbool AlwaysTrue() {\n#if GTEST_HAS_EXCEPTIONS\n  // This condition is always false so AlwaysTrue() never actually throws,\n  // but it makes the compiler think that it may throw.\n  if (IsTrue(false))\n    throw ClassUniqueToAlwaysTrue();\n#endif  // GTEST_HAS_EXCEPTIONS\n  return true;\n}\n\n// If *pstr starts with the given prefix, modifies *pstr to be right\n// past the prefix and returns true; otherwise leaves *pstr unchanged\n// and returns false.  
None of pstr, *pstr, and prefix can be NULL.\nbool SkipPrefix(const char* prefix, const char** pstr) {\n  const size_t prefix_len = strlen(prefix);\n  if (strncmp(*pstr, prefix, prefix_len) == 0) {\n    *pstr += prefix_len;\n    return true;\n  }\n  return false;\n}\n\n// Parses a string as a command line flag.  The string should have\n// the format \"--flag=value\".  When def_optional is true, the \"=value\"\n// part can be omitted.\n//\n// Returns the value of the flag, or NULL if the parsing failed.\nconst char* ParseFlagValue(const char* str,\n                           const char* flag,\n                           bool def_optional) {\n  // str and flag must not be NULL.\n  if (str == NULL || flag == NULL) return NULL;\n\n  // The flag must start with \"--\" followed by GTEST_FLAG_PREFIX_.\n  const String flag_str = String::Format(\"--%s%s\", GTEST_FLAG_PREFIX_, flag);\n  const size_t flag_len = flag_str.length();\n  if (strncmp(str, flag_str.c_str(), flag_len) != 0) return NULL;\n\n  // Skips the flag name.\n  const char* flag_end = str + flag_len;\n\n  // When def_optional is true, it's OK to not have a \"=value\" part.\n  if (def_optional && (flag_end[0] == '\\0')) {\n    return flag_end;\n  }\n\n  // If def_optional is true and there are more characters after the\n  // flag name, or if def_optional is false, there must be a '=' after\n  // the flag name.\n  if (flag_end[0] != '=') return NULL;\n\n  // Returns the string after \"=\".\n  return flag_end + 1;\n}\n\n// Parses a string for a bool flag, in the form of either\n// \"--flag=value\" or \"--flag\".\n//\n// In the former case, the value is taken as true as long as it does\n// not start with '0', 'f', or 'F'.\n//\n// In the latter case, the value is taken as true.\n//\n// On success, stores the value of the flag in *value, and returns\n// true.  
On failure, returns false without changing *value.\nbool ParseBoolFlag(const char* str, const char* flag, bool* value) {\n  // Gets the value of the flag as a string.\n  const char* const value_str = ParseFlagValue(str, flag, true);\n\n  // Aborts if the parsing failed.\n  if (value_str == NULL) return false;\n\n  // Converts the string value to a bool.\n  *value = !(*value_str == '0' || *value_str == 'f' || *value_str == 'F');\n  return true;\n}\n\n// Parses a string for an Int32 flag, in the form of\n// \"--flag=value\".\n//\n// On success, stores the value of the flag in *value, and returns\n// true.  On failure, returns false without changing *value.\nbool ParseInt32Flag(const char* str, const char* flag, Int32* value) {\n  // Gets the value of the flag as a string.\n  const char* const value_str = ParseFlagValue(str, flag, false);\n\n  // Aborts if the parsing failed.\n  if (value_str == NULL) return false;\n\n  // Sets *value to the value of the flag.\n  return ParseInt32(Message() << \"The value of flag --\" << flag,\n                    value_str, value);\n}\n\n// Parses a string for a string flag, in the form of\n// \"--flag=value\".\n//\n// On success, stores the value of the flag in *value, and returns\n// true.  On failure, returns false without changing *value.\nbool ParseStringFlag(const char* str, const char* flag, String* value) {\n  // Gets the value of the flag as a string.\n  const char* const value_str = ParseFlagValue(str, flag, false);\n\n  // Aborts if the parsing failed.\n  if (value_str == NULL) return false;\n\n  // Sets *value to the value of the flag.\n  *value = value_str;\n  return true;\n}\n\n// Determines whether a string has a prefix that Google Test uses for its\n// flags, i.e., starts with GTEST_FLAG_PREFIX_ or GTEST_FLAG_PREFIX_DASH_.\n// If Google Test detects that a command line flag has its prefix but is not\n// recognized, it will print its help message. 
Flags starting with\n// GTEST_FLAG_PREFIX_ followed by \"internal_\" are considered Google Test\n// internal flags and do not trigger the help message.\nstatic bool HasGoogleTestFlagPrefix(const char* str) {\n  return (SkipPrefix(\"--\", &str) ||\n          SkipPrefix(\"-\", &str) ||\n          SkipPrefix(\"/\", &str)) &&\n         !SkipPrefix(GTEST_FLAG_PREFIX_ \"internal_\", &str) &&\n         (SkipPrefix(GTEST_FLAG_PREFIX_, &str) ||\n          SkipPrefix(GTEST_FLAG_PREFIX_DASH_, &str));\n}\n\n// Prints a string containing code-encoded text.  The following escape\n// sequences can be used in the string to control the text color:\n//\n//   @@    prints a single '@' character.\n//   @R    changes the color to red.\n//   @G    changes the color to green.\n//   @Y    changes the color to yellow.\n//   @D    changes to the default terminal text color.\n//\n// TODO(wan@google.com): Write tests for this once we add stdout\n// capturing to Google Test.\nstatic void PrintColorEncoded(const char* str) {\n  GTestColor color = COLOR_DEFAULT;  // The current color.\n\n  // Conceptually, we split the string into segments divided by escape\n  // sequences.  Then we print one segment at a time.  
At the end of\n  // each iteration, the str pointer advances to the beginning of the\n  // next segment.\n  for (;;) {\n    const char* p = strchr(str, '@');\n    if (p == NULL) {\n      ColoredPrintf(color, \"%s\", str);\n      return;\n    }\n\n    ColoredPrintf(color, \"%s\", String(str, p - str).c_str());\n\n    const char ch = p[1];\n    str = p + 2;\n    if (ch == '@') {\n      ColoredPrintf(color, \"@\");\n    } else if (ch == 'D') {\n      color = COLOR_DEFAULT;\n    } else if (ch == 'R') {\n      color = COLOR_RED;\n    } else if (ch == 'G') {\n      color = COLOR_GREEN;\n    } else if (ch == 'Y') {\n      color = COLOR_YELLOW;\n    } else {\n      --str;\n    }\n  }\n}\n\nstatic const char kColorEncodedHelpMessage[] =\n\"This program contains tests written using \" GTEST_NAME_ \". You can use the\\n\"\n\"following command line flags to control its behavior:\\n\"\n\"\\n\"\n\"Test Selection:\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"list_tests@D\\n\"\n\"      List the names of all tests instead of running them. The name of\\n\"\n\"      TEST(Foo, Bar) is \\\"Foo.Bar\\\".\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"filter=@YPOSITIVE_PATTERNS\"\n    \"[@G-@YNEGATIVE_PATTERNS]@D\\n\"\n\"      Run only the tests whose name matches one of the positive patterns but\\n\"\n\"      none of the negative patterns. '?' 
matches any single character; '*'\\n\"\n\"      matches any substring; ':' separates two patterns.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"also_run_disabled_tests@D\\n\"\n\"      Run all disabled tests too.\\n\"\n\"\\n\"\n\"Test Execution:\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"repeat=@Y[COUNT]@D\\n\"\n\"      Run the tests repeatedly; use a negative count to repeat forever.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"shuffle@D\\n\"\n\"      Randomize tests' orders on every iteration.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"random_seed=@Y[NUMBER]@D\\n\"\n\"      Random number seed to use for shuffling test orders (between 1 and\\n\"\n\"      99999, or 0 to use a seed based on the current time).\\n\"\n\"\\n\"\n\"Test Output:\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"color=@Y(@Gyes@Y|@Gno@Y|@Gauto@Y)@D\\n\"\n\"      Enable/disable colored output. The default is @Gauto@D.\\n\"\n\"  -@G-\" GTEST_FLAG_PREFIX_ \"print_time=0@D\\n\"\n\"      Don't print the elapsed time of each test.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"output=xml@Y[@G:@YDIRECTORY_PATH@G\"\n    GTEST_PATH_SEP_ \"@Y|@G:@YFILE_PATH]@D\\n\"\n\"      Generate an XML report in the given directory or with the given file\\n\"\n\"      name. 
@YFILE_PATH@D defaults to @Gtest_details.xml@D.\\n\"\n#if GTEST_CAN_STREAM_RESULTS_\n\"  @G--\" GTEST_FLAG_PREFIX_ \"stream_result_to=@YHOST@G:@YPORT@D\\n\"\n\"      Stream test results to the given server.\\n\"\n#endif  // GTEST_CAN_STREAM_RESULTS_\n\"\\n\"\n\"Assertion Behavior:\\n\"\n#if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS\n\"  @G--\" GTEST_FLAG_PREFIX_ \"death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\\n\"\n\"      Set the default death test style.\\n\"\n#endif  // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS\n\"  @G--\" GTEST_FLAG_PREFIX_ \"break_on_failure@D\\n\"\n\"      Turn assertion failures into debugger break-points.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"throw_on_failure@D\\n\"\n\"      Turn assertion failures into C++ exceptions.\\n\"\n\"  @G--\" GTEST_FLAG_PREFIX_ \"catch_exceptions=0@D\\n\"\n\"      Do not report exceptions as test failures. Instead, allow them\\n\"\n\"      to crash the program or throw a pop-up (on Windows).\\n\"\n\"\\n\"\n\"Except for @G--\" GTEST_FLAG_PREFIX_ \"list_tests@D, you can alternatively set \"\n    \"the corresponding\\n\"\n\"environment variable of a flag (all letters in upper-case). For example, to\\n\"\n\"disable colored text output, you can either specify @G--\" GTEST_FLAG_PREFIX_\n    \"color=no@D or set\\n\"\n\"the @G\" GTEST_FLAG_PREFIX_UPPER_ \"COLOR@D environment variable to @Gno@D.\\n\"\n\"\\n\"\n\"For more information, please read the \" GTEST_NAME_ \" documentation at\\n\"\n\"@G\" GTEST_PROJECT_URL_ \"@D. If you find a bug in \" GTEST_NAME_ \"\\n\"\n\"(not one in your own code or tests), please report it to\\n\"\n\"@G<\" GTEST_DEV_EMAIL_ \">@D.\\n\";\n\n// Parses the command line for Google Test flags, without initializing\n// other parts of Google Test.  
The type parameter CharType can be\n// instantiated to either char or wchar_t.\ntemplate <typename CharType>\nvoid ParseGoogleTestFlagsOnlyImpl(int* argc, CharType** argv) {\n  for (int i = 1; i < *argc; i++) {\n    const String arg_string = StreamableToString(argv[i]);\n    const char* const arg = arg_string.c_str();\n\n    using internal::ParseBoolFlag;\n    using internal::ParseInt32Flag;\n    using internal::ParseStringFlag;\n\n    // Do we see a Google Test flag?\n    if (ParseBoolFlag(arg, kAlsoRunDisabledTestsFlag,\n                      &GTEST_FLAG(also_run_disabled_tests)) ||\n        ParseBoolFlag(arg, kBreakOnFailureFlag,\n                      &GTEST_FLAG(break_on_failure)) ||\n        ParseBoolFlag(arg, kCatchExceptionsFlag,\n                      &GTEST_FLAG(catch_exceptions)) ||\n        ParseStringFlag(arg, kColorFlag, &GTEST_FLAG(color)) ||\n        ParseStringFlag(arg, kDeathTestStyleFlag,\n                        &GTEST_FLAG(death_test_style)) ||\n        ParseBoolFlag(arg, kDeathTestUseFork,\n                      &GTEST_FLAG(death_test_use_fork)) ||\n        ParseStringFlag(arg, kFilterFlag, &GTEST_FLAG(filter)) ||\n        ParseStringFlag(arg, kInternalRunDeathTestFlag,\n                        &GTEST_FLAG(internal_run_death_test)) ||\n        ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) ||\n        ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) ||\n        ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) ||\n        ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) ||\n        ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) ||\n        ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) ||\n        ParseInt32Flag(arg, kStackTraceDepthFlag,\n                       &GTEST_FLAG(stack_trace_depth)) ||\n        ParseStringFlag(arg, kStreamResultToFlag,\n                        &GTEST_FLAG(stream_result_to)) ||\n        ParseBoolFlag(arg, kThrowOnFailureFlag,\n                      
&GTEST_FLAG(throw_on_failure))\n        ) {\n      // Yes.  Shift the remainder of the argv list left by one.  Note\n      // that argv has (*argc + 1) elements, the last one always being\n      // NULL.  The following loop moves the trailing NULL element as\n      // well.\n      for (int j = i; j != *argc; j++) {\n        argv[j] = argv[j + 1];\n      }\n\n      // Decrements the argument count.\n      (*argc)--;\n\n      // We also need to decrement the iterator as we just removed\n      // an element.\n      i--;\n    } else if (arg_string == \"--help\" || arg_string == \"-h\" ||\n               arg_string == \"-?\" || arg_string == \"/?\" ||\n               HasGoogleTestFlagPrefix(arg)) {\n      // Both help flag and unrecognized Google Test flags (excluding\n      // internal ones) trigger help display.\n      g_help_flag = true;\n    }\n  }\n\n  if (g_help_flag) {\n    // We print the help here instead of in RUN_ALL_TESTS(), as the\n    // latter may not be called at all if the user is using Google\n    // Test with another testing framework.\n    PrintColorEncoded(kColorEncodedHelpMessage);\n  }\n}\n\n// Parses the command line for Google Test flags, without initializing\n// other parts of Google Test.\nvoid ParseGoogleTestFlagsOnly(int* argc, char** argv) {\n  ParseGoogleTestFlagsOnlyImpl(argc, argv);\n}\nvoid ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv) {\n  ParseGoogleTestFlagsOnlyImpl(argc, argv);\n}\n\n// The internal implementation of InitGoogleTest().\n//\n// The type parameter CharType can be instantiated to either char or\n// wchar_t.\ntemplate <typename CharType>\nvoid InitGoogleTestImpl(int* argc, CharType** argv) {\n  g_init_gtest_count++;\n\n  // We don't want to run the initialization code twice.\n  if (g_init_gtest_count != 1) return;\n\n  if (*argc <= 0) return;\n\n  internal::g_executable_path = internal::StreamableToString(argv[0]);\n\n#if GTEST_HAS_DEATH_TEST\n\n  g_argvs.clear();\n  for (int i = 0; i != *argc; i++) {\n    
g_argvs.push_back(StreamableToString(argv[i]));\n  }\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n  ParseGoogleTestFlagsOnly(argc, argv);\n  GetUnitTestImpl()->PostFlagParsingInit();\n}\n\n}  // namespace internal\n\n// Initializes Google Test.  This must be called before calling\n// RUN_ALL_TESTS().  In particular, it parses a command line for the\n// flags that Google Test recognizes.  Whenever a Google Test flag is\n// seen, it is removed from argv, and *argc is decremented.\n//\n// No value is returned.  Instead, the Google Test flag variables are\n// updated.\n//\n// Calling the function for the second time has no user-visible effect.\nvoid InitGoogleTest(int* argc, char** argv) {\n  internal::InitGoogleTestImpl(argc, argv);\n}\n\n// This overloaded version can be used in Windows programs compiled in\n// UNICODE mode.\nvoid InitGoogleTest(int* argc, wchar_t** argv) {\n  internal::InitGoogleTestImpl(argc, argv);\n}\n\n}  // namespace testing\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan), vladl@google.com (Vlad Losev)\n//\n// This file implements death tests.\n\n\n#if GTEST_HAS_DEATH_TEST\n\n# if GTEST_OS_MAC\n#  include <crt_externs.h>\n# endif  // GTEST_OS_MAC\n\n# include <errno.h>\n# include <fcntl.h>\n# include <limits.h>\n# include <stdarg.h>\n\n# if GTEST_OS_WINDOWS\n#  include <windows.h>\n# else\n#  include <sys/mman.h>\n#  include <sys/wait.h>\n# endif  // GTEST_OS_WINDOWS\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n\n// Indicates that this translation unit is part of Google Test's\n// implementation.  It must come before gtest-internal-inl.h is\n// included, or there will be a compiler error.  
This trick is to\n// prevent a user from accidentally including gtest-internal-inl.h in\n// his code.\n#define GTEST_IMPLEMENTATION_ 1\n#undef GTEST_IMPLEMENTATION_\n\nnamespace testing {\n\n// Constants.\n\n// The default death test style.\nstatic const char kDefaultDeathTestStyle[] = \"fast\";\n\nGTEST_DEFINE_string_(\n    death_test_style,\n    internal::StringFromGTestEnv(\"death_test_style\", kDefaultDeathTestStyle),\n    \"Indicates how to run a death test in a forked child process: \"\n    \"\\\"threadsafe\\\" (child process re-executes the test binary \"\n    \"from the beginning, running only the specific death test) or \"\n    \"\\\"fast\\\" (child process runs the death test immediately \"\n    \"after forking).\");\n\nGTEST_DEFINE_bool_(\n    death_test_use_fork,\n    internal::BoolFromGTestEnv(\"death_test_use_fork\", false),\n    \"Instructs to use fork()/_exit() instead of clone() in death tests. \"\n    \"Ignored and always uses fork() on POSIX systems where clone() is not \"\n    \"implemented. Useful when running under valgrind or similar tools if \"\n    \"those do not support clone(). Valgrind 3.3.1 will just fail if \"\n    \"it sees an unsupported combination of clone() flags. \"\n    \"It is not recommended to use this flag w/o valgrind though it will \"\n    \"work in 99% of the cases. Once valgrind is fixed, this flag will \"\n    \"most likely be removed.\");\n\nnamespace internal {\nGTEST_DEFINE_string_(\n    internal_run_death_test, \"\",\n    \"Indicates the file, line number, temporal index of \"\n    \"the single death test to run, and a file descriptor to \"\n    \"which a success code may be sent, all separated by \"\n    \"colons.  This flag is specified if and only if the current \"\n    \"process is a sub-process launched for running a thread-safe \"\n    \"death test.  
FOR INTERNAL USE ONLY.\");\n}  // namespace internal\n\n#if GTEST_HAS_DEATH_TEST\n\n// ExitedWithCode constructor.\nExitedWithCode::ExitedWithCode(int exit_code) : exit_code_(exit_code) {\n}\n\n// ExitedWithCode function-call operator.\nbool ExitedWithCode::operator()(int exit_status) const {\n# if GTEST_OS_WINDOWS\n\n  return exit_status == exit_code_;\n\n# else\n\n  return WIFEXITED(exit_status) && WEXITSTATUS(exit_status) == exit_code_;\n\n# endif  // GTEST_OS_WINDOWS\n}\n\n# if !GTEST_OS_WINDOWS\n// KilledBySignal constructor.\nKilledBySignal::KilledBySignal(int signum) : signum_(signum) {\n}\n\n// KilledBySignal function-call operator.\nbool KilledBySignal::operator()(int exit_status) const {\n  return WIFSIGNALED(exit_status) && WTERMSIG(exit_status) == signum_;\n}\n# endif  // !GTEST_OS_WINDOWS\n\nnamespace internal {\n\n// Utilities needed for death tests.\n\n// Generates a textual description of a given exit code, in the format\n// specified by wait(2).\nstatic String ExitSummary(int exit_code) {\n  Message m;\n\n# if GTEST_OS_WINDOWS\n\n  m << \"Exited with exit status \" << exit_code;\n\n# else\n\n  if (WIFEXITED(exit_code)) {\n    m << \"Exited with exit status \" << WEXITSTATUS(exit_code);\n  } else if (WIFSIGNALED(exit_code)) {\n    m << \"Terminated by signal \" << WTERMSIG(exit_code);\n  }\n#  ifdef WCOREDUMP\n  if (WCOREDUMP(exit_code)) {\n    m << \" (core dumped)\";\n  }\n#  endif\n# endif  // GTEST_OS_WINDOWS\n\n  return m.GetString();\n}\n\n// Returns true if exit_status describes a process that was terminated\n// by a signal, or exited normally with a nonzero exit code.\nbool ExitedUnsuccessfully(int exit_status) {\n  return !ExitedWithCode(0)(exit_status);\n}\n\n# if !GTEST_OS_WINDOWS\n// Generates a textual failure message when a death test finds more than\n// one thread running, or cannot determine the number of threads, prior\n// to executing the given statement.  
It is the responsibility of the\n// caller not to pass a thread_count of 1.\nstatic String DeathTestThreadWarning(size_t thread_count) {\n  Message msg;\n  msg << \"Death tests use fork(), which is unsafe particularly\"\n      << \" in a threaded context. For this test, \" << GTEST_NAME_ << \" \";\n  if (thread_count == 0)\n    msg << \"couldn't detect the number of threads.\";\n  else\n    msg << \"detected \" << thread_count << \" threads.\";\n  return msg.GetString();\n}\n# endif  // !GTEST_OS_WINDOWS\n\n// Flag characters for reporting a death test that did not die.\nstatic const char kDeathTestLived = 'L';\nstatic const char kDeathTestReturned = 'R';\nstatic const char kDeathTestThrew = 'T';\nstatic const char kDeathTestInternalError = 'I';\n\n// An enumeration describing all of the possible ways that a death test can\n// conclude.  DIED means that the process died while executing the test\n// code; LIVED means that process lived beyond the end of the test code;\n// RETURNED means that the test statement attempted to execute a return\n// statement, which is not allowed; THREW means that the test statement\n// returned control by throwing an exception.  IN_PROGRESS means the test\n// has not yet concluded.\n// TODO(vladl@google.com): Unify names and possibly values for\n// AbortReason, DeathTestOutcome, and flag characters above.\nenum DeathTestOutcome { IN_PROGRESS, DIED, LIVED, RETURNED, THREW };\n\n// Routine for aborting the program which is safe to call from an\n// exec-style death test child process, in which case the error\n// message is propagated back to the parent process.  Otherwise, the\n// message is simply printed to stderr.  In either case, the program\n// then exits with status 1.\nvoid DeathTestAbort(const String& message) {\n  // On a POSIX system, this function may be called from a threadsafe-style\n  // death test child process, which operates on a very small stack.  
Use\n  // the heap for any additional non-minuscule memory requirements.\n  const InternalRunDeathTestFlag* const flag =\n      GetUnitTestImpl()->internal_run_death_test_flag();\n  if (flag != NULL) {\n    FILE* parent = posix::FDOpen(flag->write_fd(), \"w\");\n    fputc(kDeathTestInternalError, parent);\n    fprintf(parent, \"%s\", message.c_str());\n    fflush(parent);\n    _exit(1);\n  } else {\n    fprintf(stderr, \"%s\", message.c_str());\n    fflush(stderr);\n    posix::Abort();\n  }\n}\n\n// A replacement for CHECK that calls DeathTestAbort if the assertion\n// fails.\n# define GTEST_DEATH_TEST_CHECK_(expression) \\\n  do { \\\n    if (!::testing::internal::IsTrue(expression)) { \\\n      DeathTestAbort(::testing::internal::String::Format( \\\n          \"CHECK failed: File %s, line %d: %s\", \\\n          __FILE__, __LINE__, #expression)); \\\n    } \\\n  } while (::testing::internal::AlwaysFalse())\n\n// This macro is similar to GTEST_DEATH_TEST_CHECK_, but it is meant for\n// evaluating any system call that fulfills two conditions: it must return\n// -1 on failure, and set errno to EINTR when it is interrupted and\n// should be tried again.  The macro expands to a loop that repeatedly\n// evaluates the expression as long as it evaluates to -1 and sets\n// errno to EINTR.  
If the expression evaluates to -1 but errno is\n// something other than EINTR, DeathTestAbort is called.\n# define GTEST_DEATH_TEST_CHECK_SYSCALL_(expression) \\\n  do { \\\n    int gtest_retval; \\\n    do { \\\n      gtest_retval = (expression); \\\n    } while (gtest_retval == -1 && errno == EINTR); \\\n    if (gtest_retval == -1) { \\\n      DeathTestAbort(::testing::internal::String::Format( \\\n          \"CHECK failed: File %s, line %d: %s != -1\", \\\n          __FILE__, __LINE__, #expression)); \\\n    } \\\n  } while (::testing::internal::AlwaysFalse())\n\n// Returns the message describing the last system error in errno.\nString GetLastErrnoDescription() {\n    return String(errno == 0 ? \"\" : posix::StrError(errno));\n}\n\n// This is called from a death test parent process to read a failure\n// message from the death test child process and log it with the FATAL\n// severity. On Windows, the message is read from a pipe handle. On other\n// platforms, it is read from a file descriptor.\nstatic void FailFromInternalError(int fd) {\n  Message error;\n  char buffer[256];\n  int num_read;\n\n  do {\n    while ((num_read = posix::Read(fd, buffer, 255)) > 0) {\n      buffer[num_read] = '\\0';\n      error << buffer;\n    }\n  } while (num_read == -1 && errno == EINTR);\n\n  if (num_read == 0) {\n    GTEST_LOG_(FATAL) << error.GetString();\n  } else {\n    const int last_error = errno;\n    GTEST_LOG_(FATAL) << \"Error while reading death test internal: \"\n                      << GetLastErrnoDescription() << \" [\" << last_error << \"]\";\n  }\n}\n\n// Death test constructor.  
Increments the running death test count\n// for the current test.\nDeathTest::DeathTest() {\n  TestInfo* const info = GetUnitTestImpl()->current_test_info();\n  if (info == NULL) {\n    DeathTestAbort(\"Cannot run a death test outside of a TEST or \"\n                   \"TEST_F construct\");\n  }\n}\n\n// Creates and returns a death test by dispatching to the current\n// death test factory.\nbool DeathTest::Create(const char* statement, const RE* regex,\n                       const char* file, int line, DeathTest** test) {\n  return GetUnitTestImpl()->death_test_factory()->Create(\n      statement, regex, file, line, test);\n}\n\nconst char* DeathTest::LastMessage() {\n  return last_death_test_message_.c_str();\n}\n\nvoid DeathTest::set_last_death_test_message(const String& message) {\n  last_death_test_message_ = message;\n}\n\nString DeathTest::last_death_test_message_;\n\n// Provides cross platform implementation for some death functionality.\nclass DeathTestImpl : public DeathTest {\n protected:\n  DeathTestImpl(const char* a_statement, const RE* a_regex)\n      : statement_(a_statement),\n        regex_(a_regex),\n        spawned_(false),\n        status_(-1),\n        outcome_(IN_PROGRESS),\n        read_fd_(-1),\n        write_fd_(-1) {}\n\n  // read_fd_ is expected to be closed and cleared by a derived class.\n  ~DeathTestImpl() { GTEST_DEATH_TEST_CHECK_(read_fd_ == -1); }\n\n  void Abort(AbortReason reason);\n  virtual bool Passed(bool status_ok);\n\n  const char* statement() const { return statement_; }\n  const RE* regex() const { return regex_; }\n  bool spawned() const { return spawned_; }\n  void set_spawned(bool is_spawned) { spawned_ = is_spawned; }\n  int status() const { return status_; }\n  void set_status(int a_status) { status_ = a_status; }\n  DeathTestOutcome outcome() const { return outcome_; }\n  void set_outcome(DeathTestOutcome an_outcome) { outcome_ = an_outcome; }\n  int read_fd() const { return read_fd_; }\n  void set_read_fd(int fd) 
{ read_fd_ = fd; }\n  int write_fd() const { return write_fd_; }\n  void set_write_fd(int fd) { write_fd_ = fd; }\n\n  // Called in the parent process only. Reads the result code of the death\n  // test child process via a pipe, interprets it to set the outcome_\n  // member, and closes read_fd_.  Outputs diagnostics and terminates in\n  // case of unexpected codes.\n  void ReadAndInterpretStatusByte();\n\n private:\n  // The textual content of the code this object is testing.  This class\n  // doesn't own this string and should not attempt to delete it.\n  const char* const statement_;\n  // The regular expression which test output must match.  DeathTestImpl\n  // doesn't own this object and should not attempt to delete it.\n  const RE* const regex_;\n  // True if the death test child process has been successfully spawned.\n  bool spawned_;\n  // The exit status of the child process.\n  int status_;\n  // How the death test concluded.\n  DeathTestOutcome outcome_;\n  // Descriptor to the read end of the pipe to the child process.  It is\n  // always -1 in the child process.  The child keeps its write end of the\n  // pipe in write_fd_.\n  int read_fd_;\n  // Descriptor to the child's write end of the pipe to the parent process.\n  // It is always -1 in the parent process.  The parent keeps its end of the\n  // pipe in read_fd_.\n  int write_fd_;\n};\n\n// Called in the parent process only. Reads the result code of the death\n// test child process via a pipe, interprets it to set the outcome_\n// member, and closes read_fd_.  
Outputs diagnostics and terminates in\n// case of unexpected codes.\nvoid DeathTestImpl::ReadAndInterpretStatusByte() {\n  char flag;\n  int bytes_read;\n\n  // The read() here blocks until data is available (signifying the\n  // failure of the death test) or until the pipe is closed (signifying\n  // its success), so it's okay to call this in the parent before\n  // the child process has exited.\n  do {\n    bytes_read = posix::Read(read_fd(), &flag, 1);\n  } while (bytes_read == -1 && errno == EINTR);\n\n  if (bytes_read == 0) {\n    set_outcome(DIED);\n  } else if (bytes_read == 1) {\n    switch (flag) {\n      case kDeathTestReturned:\n        set_outcome(RETURNED);\n        break;\n      case kDeathTestThrew:\n        set_outcome(THREW);\n        break;\n      case kDeathTestLived:\n        set_outcome(LIVED);\n        break;\n      case kDeathTestInternalError:\n        FailFromInternalError(read_fd());  // Does not return.\n        break;\n      default:\n        GTEST_LOG_(FATAL) << \"Death test child process reported \"\n                          << \"unexpected status byte (\"\n                          << static_cast<unsigned int>(flag) << \")\";\n    }\n  } else {\n    GTEST_LOG_(FATAL) << \"Read from death test child process failed: \"\n                      << GetLastErrnoDescription();\n  }\n  GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Close(read_fd()));\n  set_read_fd(-1);\n}\n\n// Signals that the death test code which should have exited, didn't.\n// Should be called only in a death test child process.\n// Writes a status byte to the child's status file descriptor, then\n// calls _exit(1).\nvoid DeathTestImpl::Abort(AbortReason reason) {\n  // The parent process considers the death test to be a failure if\n  // it finds any data in our pipe.  So, here we write a single flag byte\n  // to the pipe, then exit.\n  const char status_ch =\n      reason == TEST_DID_NOT_DIE ? kDeathTestLived :\n      reason == TEST_THREW_EXCEPTION ? 
kDeathTestThrew : kDeathTestReturned;\n\n  GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Write(write_fd(), &status_ch, 1));\n  // We are leaking the descriptor here because on some platforms (i.e.,\n  // when built as Windows DLL), destructors of global objects will still\n  // run after calling _exit(). On such systems, write_fd_ will be\n  // indirectly closed from the destructor of UnitTestImpl, causing double\n  // close if it is also closed here. On debug configurations, double close\n  // may assert. As there are no in-process buffers to flush here, we are\n  // relying on the OS to close the descriptor after the process terminates\n  // when the destructors are not run.\n  _exit(1);  // Exits w/o any normal exit hooks (we were supposed to crash)\n}\n\n// Returns an indented copy of stderr output for a death test.\n// This makes distinguishing death test output lines from regular log lines\n// much easier.\nstatic ::std::string FormatDeathTestOutput(const ::std::string& output) {\n  ::std::string ret;\n  for (size_t at = 0; ; ) {\n    const size_t line_end = output.find('\\n', at);\n    ret += \"[  DEATH   ] \";\n    if (line_end == ::std::string::npos) {\n      ret += output.substr(at);\n      break;\n    }\n    ret += output.substr(at, line_end + 1 - at);\n    at = line_end + 1;\n  }\n  return ret;\n}\n\n// Assesses the success or failure of a death test, using both private\n// members which have previously been set, and one argument:\n//\n// Private data members:\n//   outcome:  An enumeration describing how the death test\n//             concluded: DIED, LIVED, THREW, or RETURNED.  The death test\n//             fails in the latter three cases.\n//   status:   The exit status of the child process. On *nix, it is in\n//             the format specified by wait(2). 
On Windows, this is the\n//             value supplied to the ExitProcess() API or a numeric code\n//             of the exception that terminated the program.\n//   regex:    A regular expression object to be applied to\n//             the test's captured standard error output; the death test\n//             fails if it does not match.\n//\n// Argument:\n//   status_ok: true if exit_status is acceptable in the context of\n//              this particular death test, which fails if it is false\n//\n// Returns true iff all of the above conditions are met.  Otherwise, the\n// first failing condition, in the order given above, is the one that is\n// reported. Also sets the last death test message string.\nbool DeathTestImpl::Passed(bool status_ok) {\n  if (!spawned())\n    return false;\n\n  const String error_message = GetCapturedStderr();\n\n  bool success = false;\n  Message buffer;\n\n  buffer << \"Death test: \" << statement() << \"\\n\";\n  switch (outcome()) {\n    case LIVED:\n      buffer << \"    Result: failed to die.\\n\"\n             << \" Error msg:\\n\" << FormatDeathTestOutput(error_message);\n      break;\n    case THREW:\n      buffer << \"    Result: threw an exception.\\n\"\n             << \" Error msg:\\n\" << FormatDeathTestOutput(error_message);\n      break;\n    case RETURNED:\n      buffer << \"    Result: illegal return in test statement.\\n\"\n             << \" Error msg:\\n\" << FormatDeathTestOutput(error_message);\n      break;\n    case DIED:\n      if (status_ok) {\n        const bool matched = RE::PartialMatch(error_message.c_str(), *regex());\n        if (matched) {\n          success = true;\n        } else {\n          buffer << \"    Result: died but not with expected error.\\n\"\n                 << \"  Expected: \" << regex()->pattern() << \"\\n\"\n                 << \"Actual msg:\\n\" << FormatDeathTestOutput(error_message);\n        }\n      } else {\n        buffer << \"    Result: died but not with expected exit 
code:\\n\"\n               << \"            \" << ExitSummary(status()) << \"\\n\"\n               << \"Actual msg:\\n\" << FormatDeathTestOutput(error_message);\n      }\n      break;\n    case IN_PROGRESS:\n    default:\n      GTEST_LOG_(FATAL)\n          << \"DeathTest::Passed somehow called before conclusion of test\";\n  }\n\n  DeathTest::set_last_death_test_message(buffer.GetString());\n  return success;\n}\n\n# if GTEST_OS_WINDOWS\n// WindowsDeathTest implements death tests on Windows. Due to the\n// specifics of starting new processes on Windows, death tests there are\n// always threadsafe, and Google Test considers the\n// --gtest_death_test_style=fast setting to be equivalent to\n// --gtest_death_test_style=threadsafe there.\n//\n// A few implementation notes:  Like the Linux version, the Windows\n// implementation uses pipes for child-to-parent communication. But due to\n// the specifics of pipes on Windows, some extra steps are required:\n//\n// 1. The parent creates a communication pipe and stores handles to both\n//    ends of it.\n// 2. The parent starts the child and provides it with the information\n//    necessary to acquire the handle to the write end of the pipe.\n// 3. The child acquires the write end of the pipe and signals the parent\n//    using a Windows event.\n// 4. Now the parent can release the write end of the pipe on its side. If\n//    this is done before step 3, the object's reference count goes down to\n//    0 and it is destroyed, preventing the child from acquiring it. The\n//    parent now has to release it, or read operations on the read end of\n//    the pipe will not return when the child terminates.\n// 5. 
The parent reads the child's output (outcome code and any\n//    possible error messages) from the pipe and its stderr, and then\n//    determines whether to fail the test.\n//\n// Note: to distinguish Win32 API calls from the local method and function\n// calls, the former are explicitly resolved in the global namespace.\n//\nclass WindowsDeathTest : public DeathTestImpl {\n public:\n  WindowsDeathTest(const char* a_statement,\n                   const RE* a_regex,\n                   const char* file,\n                   int line)\n      : DeathTestImpl(a_statement, a_regex), file_(file), line_(line) {}\n\n  // All of these virtual functions are inherited from DeathTest.\n  virtual int Wait();\n  virtual TestRole AssumeRole();\n\n private:\n  // The name of the file in which the death test is located.\n  const char* const file_;\n  // The line number on which the death test is located.\n  const int line_;\n  // Handle to the write end of the pipe to the child process.\n  AutoHandle write_handle_;\n  // Child process handle.\n  AutoHandle child_handle_;\n  // Event the child process uses to signal the parent that it has\n  // acquired the handle to the write end of the pipe. After seeing this\n  // event the parent can release its own handles to make sure its\n  // ReadFile() calls return when the child terminates.\n  AutoHandle event_handle_;\n};\n\n// Waits for the child in a death test to exit, returning its exit\n// status, or 0 if no child process exists.  
As a side effect, sets the\n// outcome data member.\nint WindowsDeathTest::Wait() {\n  if (!spawned())\n    return 0;\n\n  // Wait until the child either signals that it has acquired the write end\n  // of the pipe or it dies.\n  const HANDLE wait_handles[2] = { child_handle_.Get(), event_handle_.Get() };\n  switch (::WaitForMultipleObjects(2,\n                                   wait_handles,\n                                   FALSE,  // Waits for any of the handles.\n                                   INFINITE)) {\n    case WAIT_OBJECT_0:\n    case WAIT_OBJECT_0 + 1:\n      break;\n    default:\n      GTEST_DEATH_TEST_CHECK_(false);  // Should not get here.\n  }\n\n  // The child has acquired the write end of the pipe or exited.\n  // We release the handle on our side and continue.\n  write_handle_.Reset();\n  event_handle_.Reset();\n\n  ReadAndInterpretStatusByte();\n\n  // Waits for the child process to exit if it hasn't already. This\n  // returns immediately if the child has already exited, regardless of\n  // whether previous calls to WaitForMultipleObjects synchronized on this\n  // handle or not.\n  GTEST_DEATH_TEST_CHECK_(\n      WAIT_OBJECT_0 == ::WaitForSingleObject(child_handle_.Get(),\n                                             INFINITE));\n  DWORD status_code;\n  GTEST_DEATH_TEST_CHECK_(\n      ::GetExitCodeProcess(child_handle_.Get(), &status_code) != FALSE);\n  child_handle_.Reset();\n  set_status(static_cast<int>(status_code));\n  return status();\n}\n\n// The AssumeRole process for a Windows death test.  It creates a child\n// process with the same executable as the current process to run the\n// death test.  
The child process is given the --gtest_filter and\n// --gtest_internal_run_death_test flags such that it knows to run the\n// current death test only.\nDeathTest::TestRole WindowsDeathTest::AssumeRole() {\n  const UnitTestImpl* const impl = GetUnitTestImpl();\n  const InternalRunDeathTestFlag* const flag =\n      impl->internal_run_death_test_flag();\n  const TestInfo* const info = impl->current_test_info();\n  const int death_test_index = info->result()->death_test_count();\n\n  if (flag != NULL) {\n    // ParseInternalRunDeathTestFlag() has performed all the necessary\n    // processing.\n    set_write_fd(flag->write_fd());\n    return EXECUTE_TEST;\n  }\n\n  // WindowsDeathTest uses an anonymous pipe to communicate results of\n  // a death test.\n  SECURITY_ATTRIBUTES handles_are_inheritable = {\n    sizeof(SECURITY_ATTRIBUTES), NULL, TRUE };\n  HANDLE read_handle, write_handle;\n  GTEST_DEATH_TEST_CHECK_(\n      ::CreatePipe(&read_handle, &write_handle, &handles_are_inheritable,\n                   0)  // Default buffer size.\n      != FALSE);\n  set_read_fd(::_open_osfhandle(reinterpret_cast<intptr_t>(read_handle),\n                                O_RDONLY));\n  write_handle_.Reset(write_handle);\n  event_handle_.Reset(::CreateEvent(\n      &handles_are_inheritable,\n      TRUE,    // The event is manual-reset; it stays signaled until reset.\n      FALSE,   // The initial state is non-signaled.\n      NULL));  // The event is unnamed.\n  GTEST_DEATH_TEST_CHECK_(event_handle_.Get() != NULL);\n  const String filter_flag = String::Format(\"--%s%s=%s.%s\",\n                                            GTEST_FLAG_PREFIX_, kFilterFlag,\n                                            info->test_case_name(),\n                                            info->name());\n  const String internal_flag = String::Format(\n    \"--%s%s=%s|%d|%d|%u|%Iu|%Iu\",\n      GTEST_FLAG_PREFIX_,\n      kInternalRunDeathTestFlag,\n      file_, line_,\n      death_test_index,\n      
static_cast<unsigned int>(::GetCurrentProcessId()),\n      // size_t has the same width as pointers on both 32-bit and 64-bit\n      // Windows platforms.\n      // See http://msdn.microsoft.com/en-us/library/tcxf1dw6.aspx.\n      reinterpret_cast<size_t>(write_handle),\n      reinterpret_cast<size_t>(event_handle_.Get()));\n\n  char executable_path[_MAX_PATH + 1];  // NOLINT\n  GTEST_DEATH_TEST_CHECK_(\n      _MAX_PATH + 1 != ::GetModuleFileNameA(NULL,\n                                            executable_path,\n                                            _MAX_PATH));\n\n  String command_line = String::Format(\"%s %s \\\"%s\\\"\",\n                                       ::GetCommandLineA(),\n                                       filter_flag.c_str(),\n                                       internal_flag.c_str());\n\n  DeathTest::set_last_death_test_message(\"\");\n\n  CaptureStderr();\n  // Flush the log buffers since the log streams are shared with the child.\n  FlushInfoLog();\n\n  // The child process will share the standard handles with the parent.\n  STARTUPINFOA startup_info;\n  memset(&startup_info, 0, sizeof(STARTUPINFO));\n  startup_info.dwFlags = STARTF_USESTDHANDLES;\n  startup_info.hStdInput = ::GetStdHandle(STD_INPUT_HANDLE);\n  startup_info.hStdOutput = ::GetStdHandle(STD_OUTPUT_HANDLE);\n  startup_info.hStdError = ::GetStdHandle(STD_ERROR_HANDLE);\n\n  PROCESS_INFORMATION process_info;\n  GTEST_DEATH_TEST_CHECK_(::CreateProcessA(\n      executable_path,\n      const_cast<char*>(command_line.c_str()),\n      NULL,   // Returned process handle is not inheritable.\n      NULL,   // Returned thread handle is not inheritable.\n      TRUE,   // Child inherits all inheritable handles (for write_handle_).\n      0x0,    // Default creation flags.\n      NULL,   // Inherit the parent's environment.\n      UnitTest::GetInstance()->original_working_dir(),\n      &startup_info,\n      &process_info) != FALSE);\n  child_handle_.Reset(process_info.hProcess);\n  
::CloseHandle(process_info.hThread);\n  set_spawned(true);\n  return OVERSEE_TEST;\n}\n# else  // We are not on Windows.\n\n// ForkingDeathTest provides implementations for most of the abstract\n// methods of the DeathTest interface.  Only the AssumeRole method is\n// left undefined.\nclass ForkingDeathTest : public DeathTestImpl {\n public:\n  ForkingDeathTest(const char* statement, const RE* regex);\n\n  // All of these virtual functions are inherited from DeathTest.\n  virtual int Wait();\n\n protected:\n  void set_child_pid(pid_t child_pid) { child_pid_ = child_pid; }\n\n private:\n  // PID of child process during death test; 0 in the child process itself.\n  pid_t child_pid_;\n};\n\n// Constructs a ForkingDeathTest.\nForkingDeathTest::ForkingDeathTest(const char* a_statement, const RE* a_regex)\n    : DeathTestImpl(a_statement, a_regex),\n      child_pid_(-1) {}\n\n// Waits for the child in a death test to exit, returning its exit\n// status, or 0 if no child process exists.  As a side effect, sets the\n// outcome data member.\nint ForkingDeathTest::Wait() {\n  if (!spawned())\n    return 0;\n\n  ReadAndInterpretStatusByte();\n\n  int status_value;\n  GTEST_DEATH_TEST_CHECK_SYSCALL_(waitpid(child_pid_, &status_value, 0));\n  set_status(status_value);\n  return status_value;\n}\n\n// A concrete death test class that forks, then immediately runs the test\n// in the child process.\nclass NoExecDeathTest : public ForkingDeathTest {\n public:\n  NoExecDeathTest(const char* a_statement, const RE* a_regex) :\n      ForkingDeathTest(a_statement, a_regex) { }\n  virtual TestRole AssumeRole();\n};\n\n// The AssumeRole process for a fork-and-run death test.  
It implements a\n// straightforward fork, with a simple pipe to transmit the status byte.\nDeathTest::TestRole NoExecDeathTest::AssumeRole() {\n  const size_t thread_count = GetThreadCount();\n  if (thread_count != 1) {\n    GTEST_LOG_(WARNING) << DeathTestThreadWarning(thread_count);\n  }\n\n  int pipe_fd[2];\n  GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1);\n\n  DeathTest::set_last_death_test_message(\"\");\n  CaptureStderr();\n  // When we fork the process below, the log file buffers are copied, but the\n  // file descriptors are shared.  We flush all log files here so that closing\n  // the file descriptors in the child process doesn't throw off the\n  // synchronization between descriptors and buffers in the parent process.\n  // This is as close to the fork as possible to avoid a race condition in case\n  // there are multiple threads running before the death test, and another\n  // thread writes to the log file.\n  FlushInfoLog();\n\n  const pid_t child_pid = fork();\n  GTEST_DEATH_TEST_CHECK_(child_pid != -1);\n  set_child_pid(child_pid);\n  if (child_pid == 0) {\n    GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[0]));\n    set_write_fd(pipe_fd[1]);\n    // Redirects all logging to stderr in the child process to prevent\n    // concurrent writes to the log files.  
We capture stderr in the parent\n    // process and append the child process' output to a log.\n    LogToStderr();\n    // Event forwarding to the listeners of the event listener API must be\n    // shut down in death test subprocesses.\n    GetUnitTestImpl()->listeners()->SuppressEventForwarding();\n    return EXECUTE_TEST;\n  } else {\n    GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1]));\n    set_read_fd(pipe_fd[0]);\n    set_spawned(true);\n    return OVERSEE_TEST;\n  }\n}\n\n// A concrete death test class that forks and re-executes the main\n// program from the beginning, with command-line flags set that cause\n// only this specific death test to be run.\nclass ExecDeathTest : public ForkingDeathTest {\n public:\n  ExecDeathTest(const char* a_statement, const RE* a_regex,\n                const char* file, int line) :\n      ForkingDeathTest(a_statement, a_regex), file_(file), line_(line) { }\n  virtual TestRole AssumeRole();\n private:\n  // The name of the file in which the death test is located.\n  const char* const file_;\n  // The line number on which the death test is located.\n  const int line_;\n};\n\n// Utility class for accumulating command-line arguments.\nclass Arguments {\n public:\n  Arguments() {\n    args_.push_back(NULL);\n  }\n\n  ~Arguments() {\n    for (std::vector<char*>::iterator i = args_.begin(); i != args_.end();\n         ++i) {\n      free(*i);\n    }\n  }\n  void AddArgument(const char* argument) {\n    args_.insert(args_.end() - 1, posix::StrDup(argument));\n  }\n\n  template <typename Str>\n  void AddArguments(const ::std::vector<Str>& arguments) {\n    for (typename ::std::vector<Str>::const_iterator i = arguments.begin();\n         i != arguments.end();\n         ++i) {\n      args_.insert(args_.end() - 1, posix::StrDup(i->c_str()));\n    }\n  }\n  char* const* Argv() {\n    return &args_[0];\n  }\n private:\n  std::vector<char*> args_;\n};\n\n// A struct that encompasses the arguments to the child process of a\n// 
threadsafe-style death test process.\nstruct ExecDeathTestArgs {\n  char* const* argv;  // Command-line arguments for the child's call to exec\n  int close_fd;       // File descriptor to close; the read end of a pipe\n};\n\n#  if GTEST_OS_MAC\ninline char** GetEnviron() {\n  // When Google Test is built as a framework on MacOS X, the environ variable\n  // is unavailable. Apple's documentation (man environ) recommends using\n  // _NSGetEnviron() instead.\n  return *_NSGetEnviron();\n}\n#  else\n// Some POSIX platforms expect you to declare environ. extern \"C\" makes\n// it reside in the global namespace.\nextern \"C\" char** environ;\ninline char** GetEnviron() { return environ; }\n#  endif  // GTEST_OS_MAC\n\n// The main function for a threadsafe-style death test child process.\n// This function is called in a clone()-ed process and thus must avoid\n// any potentially unsafe operations like malloc or libc functions.\nstatic int ExecDeathTestChildMain(void* child_arg) {\n  ExecDeathTestArgs* const args = static_cast<ExecDeathTestArgs*>(child_arg);\n  GTEST_DEATH_TEST_CHECK_SYSCALL_(close(args->close_fd));\n\n  // We need to execute the test program in the same environment where\n  // it was originally invoked.  Therefore we change to the original\n  // working directory first.\n  const char* const original_dir =\n      UnitTest::GetInstance()->original_working_dir();\n  // We can safely call chdir() as it's a direct system call.\n  if (chdir(original_dir) != 0) {\n    DeathTestAbort(String::Format(\"chdir(\\\"%s\\\") failed: %s\",\n                                  original_dir,\n                                  GetLastErrnoDescription().c_str()));\n    return EXIT_FAILURE;\n  }\n\n  // We can safely call execve() as it's a direct system call.  We\n  // cannot use execvp() as it's a libc function and thus potentially\n  // unsafe.  
Since execve() doesn't search the PATH, the user must\n  // invoke the test program via a valid path that contains at least\n  // one path separator.\n  execve(args->argv[0], args->argv, GetEnviron());\n  DeathTestAbort(String::Format(\"execve(%s, ...) in %s failed: %s\",\n                                args->argv[0],\n                                original_dir,\n                                GetLastErrnoDescription().c_str()));\n  return EXIT_FAILURE;\n}\n\n// Two utility routines that together determine the direction the stack\n// grows.\n// This could be accomplished more elegantly by a single recursive\n// function, but we want to guard against the unlikely possibility of\n// a smart compiler optimizing the recursion away.\n//\n// GTEST_NO_INLINE_ is required to prevent GCC 4.6 from inlining\n// StackLowerThanAddress into StackGrowsDown, which then doesn't give\n// the correct answer.\nbool StackLowerThanAddress(const void* ptr) GTEST_NO_INLINE_;\nbool StackLowerThanAddress(const void* ptr) {\n  int dummy;\n  return &dummy < ptr;\n}\n\nbool StackGrowsDown() {\n  int dummy;\n  return StackLowerThanAddress(&dummy);\n}\n\n// A threadsafe implementation of fork(2) for threadsafe-style death tests\n// that uses clone(2).  It dies with an error message if anything goes\n// wrong.\nstatic pid_t ExecDeathTestFork(char* const* argv, int close_fd) {\n  ExecDeathTestArgs args = { argv, close_fd };\n  pid_t child_pid = -1;\n\n#  if GTEST_HAS_CLONE\n  const bool use_fork = GTEST_FLAG(death_test_use_fork);\n\n  if (!use_fork) {\n    static const bool stack_grows_down = StackGrowsDown();\n    const size_t stack_size = getpagesize();\n    // MAP_ANONYMOUS is not defined on Mac, so we use MAP_ANON instead.\n    void* const stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,\n                             MAP_ANON | MAP_PRIVATE, -1, 0);\n    GTEST_DEATH_TEST_CHECK_(stack != MAP_FAILED);\n    void* const stack_top =\n        static_cast<char*>(stack) + (stack_grows_down ? 
stack_size : 0);\n\n    child_pid = clone(&ExecDeathTestChildMain, stack_top, SIGCHLD, &args);\n\n    GTEST_DEATH_TEST_CHECK_(munmap(stack, stack_size) != -1);\n  }\n#  else\n  const bool use_fork = true;\n#  endif  // GTEST_HAS_CLONE\n\n  if (use_fork && (child_pid = fork()) == 0) {\n      ExecDeathTestChildMain(&args);\n      _exit(0);\n  }\n\n  GTEST_DEATH_TEST_CHECK_(child_pid != -1);\n  return child_pid;\n}\n\n// The AssumeRole process for a fork-and-exec death test.  It re-executes the\n// main program from the beginning, setting the --gtest_filter\n// and --gtest_internal_run_death_test flags to cause only the current\n// death test to be re-run.\nDeathTest::TestRole ExecDeathTest::AssumeRole() {\n  const UnitTestImpl* const impl = GetUnitTestImpl();\n  const InternalRunDeathTestFlag* const flag =\n      impl->internal_run_death_test_flag();\n  const TestInfo* const info = impl->current_test_info();\n  const int death_test_index = info->result()->death_test_count();\n\n  if (flag != NULL) {\n    set_write_fd(flag->write_fd());\n    return EXECUTE_TEST;\n  }\n\n  int pipe_fd[2];\n  GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1);\n  // Clear the close-on-exec flag on the write end of the pipe, lest\n  // it be closed when the child process does an exec:\n  GTEST_DEATH_TEST_CHECK_(fcntl(pipe_fd[1], F_SETFD, 0) != -1);\n\n  const String filter_flag =\n      String::Format(\"--%s%s=%s.%s\",\n                     GTEST_FLAG_PREFIX_, kFilterFlag,\n                     info->test_case_name(), info->name());\n  const String internal_flag =\n      String::Format(\"--%s%s=%s|%d|%d|%d\",\n                     GTEST_FLAG_PREFIX_, kInternalRunDeathTestFlag,\n                     file_, line_, death_test_index, pipe_fd[1]);\n  Arguments args;\n  args.AddArguments(GetArgvs());\n  args.AddArgument(filter_flag.c_str());\n  args.AddArgument(internal_flag.c_str());\n\n  DeathTest::set_last_death_test_message(\"\");\n\n  CaptureStderr();\n  // See the comment in 
NoExecDeathTest::AssumeRole for why the next line\n  // is necessary.\n  FlushInfoLog();\n\n  const pid_t child_pid = ExecDeathTestFork(args.Argv(), pipe_fd[0]);\n  GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1]));\n  set_child_pid(child_pid);\n  set_read_fd(pipe_fd[0]);\n  set_spawned(true);\n  return OVERSEE_TEST;\n}\n\n# endif  // !GTEST_OS_WINDOWS\n\n// Creates a concrete DeathTest-derived class that depends on the\n// --gtest_death_test_style flag, and sets the pointer pointed to\n// by the \"test\" argument to its address.  If the test should be\n// skipped, sets that pointer to NULL.  Returns true, unless the\n// flag is set to an invalid value.\nbool DefaultDeathTestFactory::Create(const char* statement, const RE* regex,\n                                     const char* file, int line,\n                                     DeathTest** test) {\n  UnitTestImpl* const impl = GetUnitTestImpl();\n  const InternalRunDeathTestFlag* const flag =\n      impl->internal_run_death_test_flag();\n  const int death_test_index = impl->current_test_info()\n      ->increment_death_test_count();\n\n  if (flag != NULL) {\n    if (death_test_index > flag->index()) {\n      DeathTest::set_last_death_test_message(String::Format(\n          \"Death test count (%d) somehow exceeded expected maximum (%d)\",\n          death_test_index, flag->index()));\n      return false;\n    }\n\n    if (!(flag->file() == file && flag->line() == line &&\n          flag->index() == death_test_index)) {\n      *test = NULL;\n      return true;\n    }\n  }\n\n# if GTEST_OS_WINDOWS\n\n  if (GTEST_FLAG(death_test_style) == \"threadsafe\" ||\n      GTEST_FLAG(death_test_style) == \"fast\") {\n    *test = new WindowsDeathTest(statement, regex, file, line);\n  }\n\n# else\n\n  if (GTEST_FLAG(death_test_style) == \"threadsafe\") {\n    *test = new ExecDeathTest(statement, regex, file, line);\n  } else if (GTEST_FLAG(death_test_style) == \"fast\") {\n    *test = new NoExecDeathTest(statement, regex);\n  
}\n\n# endif  // GTEST_OS_WINDOWS\n\n  else {  // NOLINT - this is more readable than unbalanced brackets inside #if.\n    DeathTest::set_last_death_test_message(String::Format(\n        \"Unknown death test style \\\"%s\\\" encountered\",\n        GTEST_FLAG(death_test_style).c_str()));\n    return false;\n  }\n\n  return true;\n}\n\n// Splits a given string on a given delimiter, populating a given\n// vector with the fields.  GTEST_HAS_DEATH_TEST implies that we have\n// ::std::string, so we can use it here.\nstatic void SplitString(const ::std::string& str, char delimiter,\n                        ::std::vector< ::std::string>* dest) {\n  ::std::vector< ::std::string> parsed;\n  ::std::string::size_type pos = 0;\n  while (::testing::internal::AlwaysTrue()) {\n    const ::std::string::size_type colon = str.find(delimiter, pos);\n    if (colon == ::std::string::npos) {\n      parsed.push_back(str.substr(pos));\n      break;\n    } else {\n      parsed.push_back(str.substr(pos, colon - pos));\n      pos = colon + 1;\n    }\n  }\n  dest->swap(parsed);\n}\n\n# if GTEST_OS_WINDOWS\n// Recreates the pipe and event handles from the provided parameters,\n// signals the event, and returns a file descriptor wrapped around the pipe\n// handle. 
This function is called in the child process only.\nint GetStatusFileDescriptor(unsigned int parent_process_id,\n                            size_t write_handle_as_size_t,\n                            size_t event_handle_as_size_t) {\n  AutoHandle parent_process_handle(::OpenProcess(PROCESS_DUP_HANDLE,\n                                                   FALSE,  // Non-inheritable.\n                                                   parent_process_id));\n  if (parent_process_handle.Get() == INVALID_HANDLE_VALUE) {\n    DeathTestAbort(String::Format(\"Unable to open parent process %u\",\n                                  parent_process_id));\n  }\n\n  // TODO(vladl@google.com): Replace the following check with a\n  // compile-time assertion when available.\n  GTEST_CHECK_(sizeof(HANDLE) <= sizeof(size_t));\n\n  const HANDLE write_handle =\n      reinterpret_cast<HANDLE>(write_handle_as_size_t);\n  HANDLE dup_write_handle;\n\n  // The newly initialized handle is accessible only in the parent\n  // process. 
To obtain one accessible within the child, we need to use\n  // DuplicateHandle.\n  if (!::DuplicateHandle(parent_process_handle.Get(), write_handle,\n                         ::GetCurrentProcess(), &dup_write_handle,\n                         0x0,    // Requested privileges ignored since\n                                 // DUPLICATE_SAME_ACCESS is used.\n                         FALSE,  // Request a non-inheritable handle.\n                         DUPLICATE_SAME_ACCESS)) {\n    DeathTestAbort(String::Format(\n        \"Unable to duplicate the pipe handle %Iu from the parent process %u\",\n        write_handle_as_size_t, parent_process_id));\n  }\n\n  const HANDLE event_handle = reinterpret_cast<HANDLE>(event_handle_as_size_t);\n  HANDLE dup_event_handle;\n\n  if (!::DuplicateHandle(parent_process_handle.Get(), event_handle,\n                         ::GetCurrentProcess(), &dup_event_handle,\n                         0x0,\n                         FALSE,\n                         DUPLICATE_SAME_ACCESS)) {\n    DeathTestAbort(String::Format(\n        \"Unable to duplicate the event handle %Iu from the parent process %u\",\n        event_handle_as_size_t, parent_process_id));\n  }\n\n  const int write_fd =\n      ::_open_osfhandle(reinterpret_cast<intptr_t>(dup_write_handle), O_APPEND);\n  if (write_fd == -1) {\n    DeathTestAbort(String::Format(\n        \"Unable to convert pipe handle %Iu to a file descriptor\",\n        write_handle_as_size_t));\n  }\n\n  // Signals the parent that the write end of the pipe has been acquired\n  // so the parent can release its own write end.\n  ::SetEvent(dup_event_handle);\n\n  return write_fd;\n}\n# endif  // GTEST_OS_WINDOWS\n\n// Returns a newly created InternalRunDeathTestFlag object with fields\n// initialized from the GTEST_FLAG(internal_run_death_test) flag if\n// the flag is specified; otherwise returns NULL.\nInternalRunDeathTestFlag* ParseInternalRunDeathTestFlag() {\n  if (GTEST_FLAG(internal_run_death_test) == \"\") 
return NULL;\n\n  // GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we\n  // can use it here.\n  int line = -1;\n  int index = -1;\n  ::std::vector< ::std::string> fields;\n  SplitString(GTEST_FLAG(internal_run_death_test).c_str(), '|', &fields);\n  int write_fd = -1;\n\n# if GTEST_OS_WINDOWS\n\n  unsigned int parent_process_id = 0;\n  size_t write_handle_as_size_t = 0;\n  size_t event_handle_as_size_t = 0;\n\n  if (fields.size() != 6\n      || !ParseNaturalNumber(fields[1], &line)\n      || !ParseNaturalNumber(fields[2], &index)\n      || !ParseNaturalNumber(fields[3], &parent_process_id)\n      || !ParseNaturalNumber(fields[4], &write_handle_as_size_t)\n      || !ParseNaturalNumber(fields[5], &event_handle_as_size_t)) {\n    DeathTestAbort(String::Format(\n        \"Bad --gtest_internal_run_death_test flag: %s\",\n        GTEST_FLAG(internal_run_death_test).c_str()));\n  }\n  write_fd = GetStatusFileDescriptor(parent_process_id,\n                                     write_handle_as_size_t,\n                                     event_handle_as_size_t);\n# else\n\n  if (fields.size() != 4\n      || !ParseNaturalNumber(fields[1], &line)\n      || !ParseNaturalNumber(fields[2], &index)\n      || !ParseNaturalNumber(fields[3], &write_fd)) {\n    DeathTestAbort(String::Format(\n        \"Bad --gtest_internal_run_death_test flag: %s\",\n        GTEST_FLAG(internal_run_death_test).c_str()));\n  }\n\n# endif  // GTEST_OS_WINDOWS\n\n  return new InternalRunDeathTestFlag(fields[0], line, index, write_fd);\n}\n\n}  // namespace internal\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n}  // namespace testing\n// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * 
Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: keith.ray@gmail.com (Keith Ray)\n\n\n#include <stdlib.h>\n\n#if GTEST_OS_WINDOWS_MOBILE\n# include <windows.h>\n#elif GTEST_OS_WINDOWS\n# include <direct.h>\n# include <io.h>\n#elif GTEST_OS_SYMBIAN || GTEST_OS_NACL\n// Symbian OpenC and NaCl have PATH_MAX in sys/syslimits.h\n# include <sys/syslimits.h>\n#else\n# include <limits.h>\n# include <climits>  // Some Linux distributions define PATH_MAX here.\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n#if GTEST_OS_WINDOWS\n# define GTEST_PATH_MAX_ _MAX_PATH\n#elif defined(PATH_MAX)\n# define GTEST_PATH_MAX_ PATH_MAX\n#elif defined(_XOPEN_PATH_MAX)\n# define GTEST_PATH_MAX_ _XOPEN_PATH_MAX\n#else\n# define GTEST_PATH_MAX_ _POSIX_PATH_MAX\n#endif  // GTEST_OS_WINDOWS\n\n\nnamespace testing {\nnamespace internal 
{\n\n#if GTEST_OS_WINDOWS\n// On Windows, '\\\\' is the standard path separator, but many tools and the\n// Windows API also accept '/' as an alternate path separator. Unless otherwise\n// noted, a file path can contain either kind of path separators, or a mixture\n// of them.\nconst char kPathSeparator = '\\\\';\nconst char kAlternatePathSeparator = '/';\nconst char kPathSeparatorString[] = \"\\\\\";\nconst char kAlternatePathSeparatorString[] = \"/\";\n# if GTEST_OS_WINDOWS_MOBILE\n// Windows CE doesn't have a current directory. You should not use\n// the current directory in tests on Windows CE, but this at least\n// provides a reasonable fallback.\nconst char kCurrentDirectoryString[] = \"\\\\\";\n// Windows CE doesn't define INVALID_FILE_ATTRIBUTES\nconst DWORD kInvalidFileAttributes = 0xffffffff;\n# else\nconst char kCurrentDirectoryString[] = \".\\\\\";\n# endif  // GTEST_OS_WINDOWS_MOBILE\n#else\nconst char kPathSeparator = '/';\nconst char kPathSeparatorString[] = \"/\";\nconst char kCurrentDirectoryString[] = \"./\";\n#endif  // GTEST_OS_WINDOWS\n\n// Returns whether the given character is a valid path separator.\nstatic bool IsPathSeparator(char c) {\n#if GTEST_HAS_ALT_PATH_SEP_\n  return (c == kPathSeparator) || (c == kAlternatePathSeparator);\n#else\n  return c == kPathSeparator;\n#endif\n}\n\n// Returns the current working directory, or \"\" if unsuccessful.\nFilePath FilePath::GetCurrentDir() {\n#if GTEST_OS_WINDOWS_MOBILE\n  // Windows CE doesn't have a current directory, so we just return\n  // something reasonable.\n  return FilePath(kCurrentDirectoryString);\n#elif GTEST_OS_WINDOWS\n  char cwd[GTEST_PATH_MAX_ + 1] = { '\\0' };\n  return FilePath(_getcwd(cwd, sizeof(cwd)) == NULL ? \"\" : cwd);\n#else\n  char cwd[GTEST_PATH_MAX_ + 1] = { '\\0' };\n  return FilePath(getcwd(cwd, sizeof(cwd)) == NULL ? 
\"\" : cwd);\n#endif  // GTEST_OS_WINDOWS_MOBILE\n}\n\n// Returns a copy of the FilePath with the case-insensitive extension removed.\n// Example: FilePath(\"dir/file.exe\").RemoveExtension(\"EXE\") returns\n// FilePath(\"dir/file\"). If a case-insensitive extension is not\n// found, returns a copy of the original FilePath.\nFilePath FilePath::RemoveExtension(const char* extension) const {\n  String dot_extension(String::Format(\".%s\", extension));\n  if (pathname_.EndsWithCaseInsensitive(dot_extension.c_str())) {\n    return FilePath(String(pathname_.c_str(),\n                           pathname_.length() - dot_extension.length()));\n  }\n  return *this;\n}\n\n// Returns a pointer to the last occurrence of a valid path separator in\n// the FilePath. On Windows, for example, both '/' and '\\' are valid path\n// separators. Returns NULL if no path separator was found.\nconst char* FilePath::FindLastPathSeparator() const {\n  const char* const last_sep = strrchr(c_str(), kPathSeparator);\n#if GTEST_HAS_ALT_PATH_SEP_\n  const char* const last_alt_sep = strrchr(c_str(), kAlternatePathSeparator);\n  // Comparing two pointers of which only one is NULL is undefined.\n  if (last_alt_sep != NULL &&\n      (last_sep == NULL || last_alt_sep > last_sep)) {\n    return last_alt_sep;\n  }\n#endif\n  return last_sep;\n}\n\n// Returns a copy of the FilePath with the directory part removed.\n// Example: FilePath(\"path/to/file\").RemoveDirectoryName() returns\n// FilePath(\"file\"). If there is no directory part (\"just_a_file\"), it returns\n// the FilePath unmodified. If there is no file part (\"just_a_dir/\") it\n// returns an empty FilePath (\"\").\n// On Windows platform, '\\' is the path separator, otherwise it is '/'.\nFilePath FilePath::RemoveDirectoryName() const {\n  const char* const last_sep = FindLastPathSeparator();\n  return last_sep ? 
FilePath(String(last_sep + 1)) : *this;\n}\n\n// RemoveFileName returns the directory path with the filename removed.\n// Example: FilePath(\"path/to/file\").RemoveFileName() returns \"path/to/\".\n// If the FilePath is \"a_file\" or \"/a_file\", RemoveFileName returns\n// FilePath(\"./\") or, on Windows, FilePath(\".\\\\\"). If the filepath does\n// not have a file, like \"just/a/dir/\", it returns the FilePath unmodified.\n// On Windows platform, '\\' is the path separator, otherwise it is '/'.\nFilePath FilePath::RemoveFileName() const {\n  const char* const last_sep = FindLastPathSeparator();\n  String dir;\n  if (last_sep) {\n    dir = String(c_str(), last_sep + 1 - c_str());\n  } else {\n    dir = kCurrentDirectoryString;\n  }\n  return FilePath(dir);\n}\n\n// Helper functions for naming files in a directory for xml output.\n\n// Given directory = \"dir\", base_name = \"test\", number = 0,\n// extension = \"xml\", returns \"dir/test.xml\". If number is greater\n// than zero (e.g., 12), returns \"dir/test_12.xml\".\n// On Windows platform, uses \\ as the separator rather than /.\nFilePath FilePath::MakeFileName(const FilePath& directory,\n                                const FilePath& base_name,\n                                int number,\n                                const char* extension) {\n  String file;\n  if (number == 0) {\n    file = String::Format(\"%s.%s\", base_name.c_str(), extension);\n  } else {\n    file = String::Format(\"%s_%d.%s\", base_name.c_str(), number, extension);\n  }\n  return ConcatPaths(directory, FilePath(file));\n}\n\n// Given directory = \"dir\", relative_path = \"test.xml\", returns \"dir/test.xml\".\n// On Windows, uses \\ as the separator rather than /.\nFilePath FilePath::ConcatPaths(const FilePath& directory,\n                               const FilePath& relative_path) {\n  if (directory.IsEmpty())\n    return relative_path;\n  const FilePath dir(directory.RemoveTrailingPathSeparator());\n  return 
FilePath(String::Format(\"%s%c%s\", dir.c_str(), kPathSeparator,\n                                 relative_path.c_str()));\n}\n\n// Returns true if pathname describes something findable in the file-system,\n// either a file, directory, or whatever.\nbool FilePath::FileOrDirectoryExists() const {\n#if GTEST_OS_WINDOWS_MOBILE\n  LPCWSTR unicode = String::AnsiToUtf16(pathname_.c_str());\n  const DWORD attributes = GetFileAttributes(unicode);\n  delete [] unicode;\n  return attributes != kInvalidFileAttributes;\n#else\n  posix::StatStruct file_stat;\n  return posix::Stat(pathname_.c_str(), &file_stat) == 0;\n#endif  // GTEST_OS_WINDOWS_MOBILE\n}\n\n// Returns true if pathname describes a directory in the file-system\n// that exists.\nbool FilePath::DirectoryExists() const {\n  bool result = false;\n#if GTEST_OS_WINDOWS\n  // Don't strip off trailing separator if path is a root directory on\n  // Windows (like \"C:\\\\\").\n  const FilePath& path(IsRootDirectory() ? *this :\n                                           RemoveTrailingPathSeparator());\n#else\n  const FilePath& path(*this);\n#endif\n\n#if GTEST_OS_WINDOWS_MOBILE\n  LPCWSTR unicode = String::AnsiToUtf16(path.c_str());\n  const DWORD attributes = GetFileAttributes(unicode);\n  delete [] unicode;\n  if ((attributes != kInvalidFileAttributes) &&\n      (attributes & FILE_ATTRIBUTE_DIRECTORY)) {\n    result = true;\n  }\n#else\n  posix::StatStruct file_stat;\n  result = posix::Stat(path.c_str(), &file_stat) == 0 &&\n      posix::IsDir(file_stat);\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n  return result;\n}\n\n// Returns true if pathname describes a root directory. (Windows has one\n// root directory per disk drive.)\nbool FilePath::IsRootDirectory() const {\n#if GTEST_OS_WINDOWS\n  // TODO(wan@google.com): on Windows a network share like\n  // \\\\server\\share can be a root directory, although it cannot be the\n  // current directory.  
Handle this properly.\n  return pathname_.length() == 3 && IsAbsolutePath();\n#else\n  return pathname_.length() == 1 && IsPathSeparator(pathname_.c_str()[0]);\n#endif\n}\n\n// Returns true if pathname describes an absolute path.\nbool FilePath::IsAbsolutePath() const {\n  const char* const name = pathname_.c_str();\n#if GTEST_OS_WINDOWS\n  return pathname_.length() >= 3 &&\n     ((name[0] >= 'a' && name[0] <= 'z') ||\n      (name[0] >= 'A' && name[0] <= 'Z')) &&\n     name[1] == ':' &&\n     IsPathSeparator(name[2]);\n#else\n  return IsPathSeparator(name[0]);\n#endif\n}\n\n// Returns a pathname for a file that does not currently exist. The pathname\n// will be directory/base_name.extension or\n// directory/base_name_<number>.extension if directory/base_name.extension\n// already exists. The number will be incremented until a pathname is found\n// that does not already exist.\n// Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'.\n// There could be a race condition if two or more processes are calling this\n// function at the same time -- they could both pick the same filename.\nFilePath FilePath::GenerateUniqueFileName(const FilePath& directory,\n                                          const FilePath& base_name,\n                                          const char* extension) {\n  FilePath full_pathname;\n  int number = 0;\n  do {\n    full_pathname.Set(MakeFileName(directory, base_name, number++, extension));\n  } while (full_pathname.FileOrDirectoryExists());\n  return full_pathname;\n}\n\n// Returns true if FilePath ends with a path separator, which indicates that\n// it is intended to represent a directory. Returns false otherwise.\n// This does NOT check that a directory (or file) actually exists.\nbool FilePath::IsDirectory() const {\n  return !pathname_.empty() &&\n         IsPathSeparator(pathname_.c_str()[pathname_.length() - 1]);\n}\n\n// Create directories so that path exists. 
Returns true if successful or if\n// the directories already exist; returns false if unable to create directories\n// for any reason.\nbool FilePath::CreateDirectoriesRecursively() const {\n  if (!this->IsDirectory()) {\n    return false;\n  }\n\n  if (pathname_.length() == 0 || this->DirectoryExists()) {\n    return true;\n  }\n\n  const FilePath parent(this->RemoveTrailingPathSeparator().RemoveFileName());\n  return parent.CreateDirectoriesRecursively() && this->CreateFolder();\n}\n\n// Create the directory so that path exists. Returns true if successful or\n// if the directory already exists; returns false if unable to create the\n// directory for any reason, including if the parent directory does not\n// exist. Not named \"CreateDirectory\" because that's a macro on Windows.\nbool FilePath::CreateFolder() const {\n#if GTEST_OS_WINDOWS_MOBILE\n  FilePath removed_sep(this->RemoveTrailingPathSeparator());\n  LPCWSTR unicode = String::AnsiToUtf16(removed_sep.c_str());\n  int result = CreateDirectory(unicode, NULL) ? 0 : -1;\n  delete [] unicode;\n#elif GTEST_OS_WINDOWS\n  int result = _mkdir(pathname_.c_str());\n#else\n  int result = mkdir(pathname_.c_str(), 0777);\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n  if (result == -1) {\n    return this->DirectoryExists();  // An error is OK if the directory exists.\n  }\n  return true;  // No error.\n}\n\n// If input name has a trailing separator character, remove it and return the\n// name, otherwise return the name string unmodified.\n// On Windows platform, uses \\ as the separator, other platforms use /.\nFilePath FilePath::RemoveTrailingPathSeparator() const {\n  return IsDirectory()\n      ? FilePath(String(pathname_.c_str(), pathname_.length() - 1))\n      : *this;\n}\n\n// Removes any redundant separators that might be in the pathname.\n// For example, \"bar///foo\" becomes \"bar/foo\". 
Does not eliminate other\n// redundancies that might be in a pathname involving \".\" or \"..\".\n// TODO(wan@google.com): handle Windows network shares (e.g. \\\\server\\share).\nvoid FilePath::Normalize() {\n  if (pathname_.c_str() == NULL) {\n    pathname_ = \"\";\n    return;\n  }\n  const char* src = pathname_.c_str();\n  char* const dest = new char[pathname_.length() + 1];\n  char* dest_ptr = dest;\n  memset(dest_ptr, 0, pathname_.length() + 1);\n\n  while (*src != '\\0') {\n    *dest_ptr = *src;\n    if (!IsPathSeparator(*src)) {\n      src++;\n    } else {\n#if GTEST_HAS_ALT_PATH_SEP_\n      if (*dest_ptr == kAlternatePathSeparator) {\n        *dest_ptr = kPathSeparator;\n      }\n#endif\n      while (IsPathSeparator(*src))\n        src++;\n    }\n    dest_ptr++;\n  }\n  *dest_ptr = '\\0';\n  pathname_ = dest;\n  delete[] dest;\n}\n\n}  // namespace internal\n}  // namespace testing\n// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n\n#include <limits.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n\n#if GTEST_OS_WINDOWS_MOBILE\n# include <windows.h>  // For TerminateProcess()\n#elif GTEST_OS_WINDOWS\n# include <io.h>\n# include <sys/stat.h>\n#else\n# include <unistd.h>\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n#if GTEST_OS_MAC\n# include <mach/mach_init.h>\n# include <mach/task.h>\n# include <mach/vm_map.h>\n#endif  // GTEST_OS_MAC\n\n\n// Indicates that this translation unit is part of Google Test's\n// implementation.  It must come before gtest-internal-inl.h is\n// included, or there will be a compiler error.  
This trick is to\n// prevent a user from accidentally including gtest-internal-inl.h in\n// their code.\n#define GTEST_IMPLEMENTATION_ 1\n#undef GTEST_IMPLEMENTATION_\n\nnamespace testing {\nnamespace internal {\n\n#if defined(_MSC_VER) || defined(__BORLANDC__)\n// MSVC and C++Builder do not provide a definition of STDERR_FILENO.\nconst int kStdOutFileno = 1;\nconst int kStdErrFileno = 2;\n#else\nconst int kStdOutFileno = STDOUT_FILENO;\nconst int kStdErrFileno = STDERR_FILENO;\n#endif  // _MSC_VER\n\n#if GTEST_OS_MAC\n\n// Returns the number of threads running in the process, or 0 to indicate that\n// we cannot detect it.\nsize_t GetThreadCount() {\n  const task_t task = mach_task_self();\n  mach_msg_type_number_t thread_count;\n  thread_act_array_t thread_list;\n  const kern_return_t status = task_threads(task, &thread_list, &thread_count);\n  if (status == KERN_SUCCESS) {\n    // task_threads allocates resources in thread_list and we need to free them\n    // to avoid leaks.\n    vm_deallocate(task,\n                  reinterpret_cast<vm_address_t>(thread_list),\n                  sizeof(thread_t) * thread_count);\n    return static_cast<size_t>(thread_count);\n  } else {\n    return 0;\n  }\n}\n\n#else\n\nsize_t GetThreadCount() {\n  // There's no portable way to detect the number of threads, so we just\n  // return 0 to indicate that we cannot detect it.\n  return 0;\n}\n\n#endif  // GTEST_OS_MAC\n\n#if GTEST_USES_POSIX_RE\n\n// Implements RE.  Currently only needed for death tests.\n\nRE::~RE() {\n  if (is_valid_) {\n    // regfree'ing an invalid regex might crash because the content\n    // of the regex is undefined. 
Since the regexes are essentially\n    // the same, one cannot be valid (or invalid) without the other\n    // being so too.\n    regfree(&partial_regex_);\n    regfree(&full_regex_);\n  }\n  free(const_cast<char*>(pattern_));\n}\n\n// Returns true iff regular expression re matches the entire str.\nbool RE::FullMatch(const char* str, const RE& re) {\n  if (!re.is_valid_) return false;\n\n  regmatch_t match;\n  return regexec(&re.full_regex_, str, 1, &match, 0) == 0;\n}\n\n// Returns true iff regular expression re matches a substring of str\n// (including str itself).\nbool RE::PartialMatch(const char* str, const RE& re) {\n  if (!re.is_valid_) return false;\n\n  regmatch_t match;\n  return regexec(&re.partial_regex_, str, 1, &match, 0) == 0;\n}\n\n// Initializes an RE from its string representation.\nvoid RE::Init(const char* regex) {\n  pattern_ = posix::StrDup(regex);\n\n  // Reserves enough bytes to hold the regular expression used for a\n  // full match.\n  const size_t full_regex_len = strlen(regex) + 10;\n  char* const full_pattern = new char[full_regex_len];\n\n  snprintf(full_pattern, full_regex_len, \"^(%s)$\", regex);\n  is_valid_ = regcomp(&full_regex_, full_pattern, REG_EXTENDED) == 0;\n  // We want to call regcomp(&partial_regex_, ...) even if the\n  // previous expression returns false.  Otherwise partial_regex_ may\n  // not be properly initialized and may cause trouble when it's\n  // freed.\n  //\n  // Some implementations of POSIX regex (e.g. on at least some\n  // versions of Cygwin) don't accept the empty string as a valid\n  // regex.  We change it to an equivalent form \"()\" to be safe.\n  if (is_valid_) {\n    const char* const partial_regex = (*regex == '\\0') ? 
\"()\" : regex;\n    is_valid_ = regcomp(&partial_regex_, partial_regex, REG_EXTENDED) == 0;\n  }\n  EXPECT_TRUE(is_valid_)\n      << \"Regular expression \\\"\" << regex\n      << \"\\\" is not a valid POSIX Extended regular expression.\";\n\n  delete[] full_pattern;\n}\n\n#elif GTEST_USES_SIMPLE_RE\n\n// Returns true iff ch appears anywhere in str (excluding the\n// terminating '\\0' character).\nbool IsInSet(char ch, const char* str) {\n  return ch != '\\0' && strchr(str, ch) != NULL;\n}\n\n// Returns true iff ch belongs to the given classification.  Unlike\n// similar functions in <ctype.h>, these aren't affected by the\n// current locale.\nbool IsAsciiDigit(char ch) { return '0' <= ch && ch <= '9'; }\nbool IsAsciiPunct(char ch) {\n  return IsInSet(ch, \"^-!\\\"#$%&'()*+,./:;<=>?@[\\\\]_`{|}~\");\n}\nbool IsRepeat(char ch) { return IsInSet(ch, \"?*+\"); }\nbool IsAsciiWhiteSpace(char ch) { return IsInSet(ch, \" \\f\\n\\r\\t\\v\"); }\nbool IsAsciiWordChar(char ch) {\n  return ('a' <= ch && ch <= 'z') || ('A' <= ch && ch <= 'Z') ||\n      ('0' <= ch && ch <= '9') || ch == '_';\n}\n\n// Returns true iff \"\\\\c\" is a supported escape sequence.\nbool IsValidEscape(char c) {\n  return (IsAsciiPunct(c) || IsInSet(c, \"dDfnrsStvwW\"));\n}\n\n// Returns true iff the given atom (specified by escaped and pattern)\n// matches ch.  
The result is undefined if the atom is invalid.\nbool AtomMatchesChar(bool escaped, char pattern_char, char ch) {\n  if (escaped) {  // \"\\\\p\" where p is pattern_char.\n    switch (pattern_char) {\n      case 'd': return IsAsciiDigit(ch);\n      case 'D': return !IsAsciiDigit(ch);\n      case 'f': return ch == '\\f';\n      case 'n': return ch == '\\n';\n      case 'r': return ch == '\\r';\n      case 's': return IsAsciiWhiteSpace(ch);\n      case 'S': return !IsAsciiWhiteSpace(ch);\n      case 't': return ch == '\\t';\n      case 'v': return ch == '\\v';\n      case 'w': return IsAsciiWordChar(ch);\n      case 'W': return !IsAsciiWordChar(ch);\n    }\n    return IsAsciiPunct(pattern_char) && pattern_char == ch;\n  }\n\n  return (pattern_char == '.' && ch != '\\n') || pattern_char == ch;\n}\n\n// Helper function used by ValidateRegex() to format error messages.\nString FormatRegexSyntaxError(const char* regex, int index) {\n  return (Message() << \"Syntax error at index \" << index\n          << \" in simple regular expression \\\"\" << regex << \"\\\": \").GetString();\n}\n\n// Generates non-fatal failures and returns false if regex is invalid;\n// otherwise returns true.\nbool ValidateRegex(const char* regex) {\n  if (regex == NULL) {\n    // TODO(wan@google.com): fix the source file location in the\n    // assertion failures to match where the regex is used in user\n    // code.\n    ADD_FAILURE() << \"NULL is not a valid simple regular expression.\";\n    return false;\n  }\n\n  bool is_valid = true;\n\n  // True iff ?, *, or + can follow the previous atom.\n  bool prev_repeatable = false;\n  for (int i = 0; regex[i]; i++) {\n    if (regex[i] == '\\\\') {  // An escape sequence\n      i++;\n      if (regex[i] == '\\0') {\n        ADD_FAILURE() << FormatRegexSyntaxError(regex, i - 1)\n                      << \"'\\\\' cannot appear at the end.\";\n        return false;\n      }\n\n      if (!IsValidEscape(regex[i])) {\n        ADD_FAILURE() << 
FormatRegexSyntaxError(regex, i - 1)\n                      << \"invalid escape sequence \\\"\\\\\" << regex[i] << \"\\\".\";\n        is_valid = false;\n      }\n      prev_repeatable = true;\n    } else {  // Not an escape sequence.\n      const char ch = regex[i];\n\n      if (ch == '^' && i > 0) {\n        ADD_FAILURE() << FormatRegexSyntaxError(regex, i)\n                      << \"'^' can only appear at the beginning.\";\n        is_valid = false;\n      } else if (ch == '$' && regex[i + 1] != '\\0') {\n        ADD_FAILURE() << FormatRegexSyntaxError(regex, i)\n                      << \"'$' can only appear at the end.\";\n        is_valid = false;\n      } else if (IsInSet(ch, \"()[]{}|\")) {\n        ADD_FAILURE() << FormatRegexSyntaxError(regex, i)\n                      << \"'\" << ch << \"' is unsupported.\";\n        is_valid = false;\n      } else if (IsRepeat(ch) && !prev_repeatable) {\n        ADD_FAILURE() << FormatRegexSyntaxError(regex, i)\n                      << \"'\" << ch << \"' can only follow a repeatable token.\";\n        is_valid = false;\n      }\n\n      prev_repeatable = !IsInSet(ch, \"^$?*+\");\n    }\n  }\n\n  return is_valid;\n}\n\n// Matches a repeated regex atom followed by a valid simple regular\n// expression.  The regex atom is defined as c if escaped is false,\n// or \\c otherwise.  repeat is the repetition meta character (?, *,\n// or +).  The behavior is undefined if str contains too many\n// characters to be indexable by size_t, in which case the test will\n// probably time out anyway.  We are fine with this limitation as\n// std::string has it too.\nbool MatchRepetitionAndRegexAtHead(\n    bool escaped, char c, char repeat, const char* regex,\n    const char* str) {\n  const size_t min_count = (repeat == '+') ? 1 : 0;\n  const size_t max_count = (repeat == '?') ? 
1 :\n      static_cast<size_t>(-1) - 1;\n  // We cannot call numeric_limits::max() as it conflicts with the\n  // max() macro on Windows.\n\n  for (size_t i = 0; i <= max_count; ++i) {\n    // We know that the atom matches each of the first i characters in str.\n    if (i >= min_count && MatchRegexAtHead(regex, str + i)) {\n      // We have enough matches at the head, and the tail matches too.\n      // Since we only care about *whether* the pattern matches str\n      // (as opposed to *how* it matches), there is no need to find a\n      // greedy match.\n      return true;\n    }\n    if (str[i] == '\\0' || !AtomMatchesChar(escaped, c, str[i]))\n      return false;\n  }\n  return false;\n}\n\n// Returns true iff regex matches a prefix of str.  regex must be a\n// valid simple regular expression and not start with \"^\", or the\n// result is undefined.\nbool MatchRegexAtHead(const char* regex, const char* str) {\n  if (*regex == '\\0')  // An empty regex matches a prefix of anything.\n    return true;\n\n  // \"$\" only matches the end of a string.  Note that regex being\n  // valid guarantees that there's nothing after \"$\" in it.\n  if (*regex == '$')\n    return *str == '\\0';\n\n  // Is the first thing in regex an escape sequence?\n  const bool escaped = *regex == '\\\\';\n  if (escaped)\n    ++regex;\n  if (IsRepeat(regex[1])) {\n    // MatchRepetitionAndRegexAtHead() calls MatchRegexAtHead(), so\n    // here's an indirect recursion.  It terminates as the regex gets\n    // shorter in each recursion.\n    return MatchRepetitionAndRegexAtHead(\n        escaped, regex[0], regex[1], regex + 2, str);\n  } else {\n    // regex isn't empty, isn't \"$\", and doesn't start with a\n    // repetition.  
We match the first atom of regex with the first\n    // character of str and recurse.\n    return (*str != '\\0') && AtomMatchesChar(escaped, *regex, *str) &&\n        MatchRegexAtHead(regex + 1, str + 1);\n  }\n}\n\n// Returns true iff regex matches any substring of str.  regex must be\n// a valid simple regular expression, or the result is undefined.\n//\n// The algorithm is recursive, but the recursion depth doesn't exceed\n// the regex length, so we won't need to worry about running out of\n// stack space normally.  In rare cases the time complexity can be\n// exponential with respect to the regex length + the string length,\n// but usually it's much faster (often close to linear).\nbool MatchRegexAnywhere(const char* regex, const char* str) {\n  if (regex == NULL || str == NULL)\n    return false;\n\n  if (*regex == '^')\n    return MatchRegexAtHead(regex + 1, str);\n\n  // A successful match can be anywhere in str.\n  do {\n    if (MatchRegexAtHead(regex, str))\n      return true;\n  } while (*str++ != '\\0');\n  return false;\n}\n\n// Implements the RE class.\n\nRE::~RE() {\n  free(const_cast<char*>(pattern_));\n  free(const_cast<char*>(full_pattern_));\n}\n\n// Returns true iff regular expression re matches the entire str.\nbool RE::FullMatch(const char* str, const RE& re) {\n  return re.is_valid_ && MatchRegexAnywhere(re.full_pattern_, str);\n}\n\n// Returns true iff regular expression re matches a substring of str\n// (including str itself).\nbool RE::PartialMatch(const char* str, const RE& re) {\n  return re.is_valid_ && MatchRegexAnywhere(re.pattern_, str);\n}\n\n// Initializes an RE from its string representation.\nvoid RE::Init(const char* regex) {\n  pattern_ = full_pattern_ = NULL;\n  if (regex != NULL) {\n    pattern_ = posix::StrDup(regex);\n  }\n\n  is_valid_ = ValidateRegex(regex);\n  if (!is_valid_) {\n    // No need to calculate the full pattern when the regex is invalid.\n    return;\n  }\n\n  const size_t len = strlen(regex);\n  // Reserves 
enough bytes to hold the regular expression used for a\n  // full match: we need space to prepend a '^', append a '$', and\n  // terminate the string with '\\0'.\n  char* buffer = static_cast<char*>(malloc(len + 3));\n  full_pattern_ = buffer;\n\n  if (*regex != '^')\n    *buffer++ = '^';  // Makes sure full_pattern_ starts with '^'.\n\n  // We don't use snprintf or strncpy, as they trigger a warning when\n  // compiled with VC++ 8.0.\n  memcpy(buffer, regex, len);\n  buffer += len;\n\n  if (len == 0 || regex[len - 1] != '$')\n    *buffer++ = '$';  // Makes sure full_pattern_ ends with '$'.\n\n  *buffer = '\\0';\n}\n\n#endif  // GTEST_USES_POSIX_RE\n\nconst char kUnknownFile[] = \"unknown file\";\n\n// Formats a source file path and a line number as they would appear\n// in an error message from the compiler used to compile this code.\nGTEST_API_ ::std::string FormatFileLocation(const char* file, int line) {\n  const char* const file_name = file == NULL ? kUnknownFile : file;\n\n  if (line < 0) {\n    return String::Format(\"%s:\", file_name).c_str();\n  }\n#ifdef _MSC_VER\n  return String::Format(\"%s(%d):\", file_name, line).c_str();\n#else\n  return String::Format(\"%s:%d:\", file_name, line).c_str();\n#endif  // _MSC_VER\n}\n\n// Formats a file location for compiler-independent XML output.\n// Although this function is not platform dependent, we put it next to\n// FormatFileLocation in order to contrast the two functions.\n// Note that FormatCompilerIndependentFileLocation() does NOT append colon\n// to the file location it produces, unlike FormatFileLocation().\nGTEST_API_ ::std::string FormatCompilerIndependentFileLocation(\n    const char* file, int line) {\n  const char* const file_name = file == NULL ? 
kUnknownFile : file;\n\n  if (line < 0)\n    return file_name;\n  else\n    return String::Format(\"%s:%d\", file_name, line).c_str();\n}\n\n\nGTestLog::GTestLog(GTestLogSeverity severity, const char* file, int line)\n    : severity_(severity) {\n  const char* const marker =\n      severity == GTEST_INFO ?    \"[  INFO ]\" :\n      severity == GTEST_WARNING ? \"[WARNING]\" :\n      severity == GTEST_ERROR ?   \"[ ERROR ]\" : \"[ FATAL ]\";\n  GetStream() << ::std::endl << marker << \" \"\n              << FormatFileLocation(file, line).c_str() << \": \";\n}\n\n// Flushes the buffers and, if severity is GTEST_FATAL, aborts the program.\nGTestLog::~GTestLog() {\n  GetStream() << ::std::endl;\n  if (severity_ == GTEST_FATAL) {\n    fflush(stderr);\n    posix::Abort();\n  }\n}\n// Disable Microsoft deprecation warnings for POSIX functions called from\n// this class (creat, dup, dup2, and close)\n#ifdef _MSC_VER\n# pragma warning(push)\n# pragma warning(disable: 4996)\n#endif  // _MSC_VER\n\n#if GTEST_HAS_STREAM_REDIRECTION\n\n// Object that captures an output stream (stdout/stderr).\nclass CapturedStream {\n public:\n  // The ctor redirects the stream to a temporary file.\n  CapturedStream(int fd) : fd_(fd), uncaptured_fd_(dup(fd)) {\n\n# if GTEST_OS_WINDOWS\n    char temp_dir_path[MAX_PATH + 1] = { '\\0' };  // NOLINT\n    char temp_file_path[MAX_PATH + 1] = { '\\0' };  // NOLINT\n\n    ::GetTempPathA(sizeof(temp_dir_path), temp_dir_path);\n    const UINT success = ::GetTempFileNameA(temp_dir_path,\n                                            \"gtest_redir\",\n                                            0,  // Generate unique file name.\n                                            temp_file_path);\n    GTEST_CHECK_(success != 0)\n        << \"Unable to create a temporary file in \" << temp_dir_path;\n    const int captured_fd = creat(temp_file_path, _S_IREAD | _S_IWRITE);\n    GTEST_CHECK_(captured_fd != -1) << \"Unable to open temporary file \"\n                      
              << temp_file_path;\n    filename_ = temp_file_path;\n# else\n    // There's no guarantee that a test has write access to the\n    // current directory, so we create the temporary file in the /tmp\n    // directory instead.\n    char name_template[] = \"/tmp/captured_stream.XXXXXX\";\n    const int captured_fd = mkstemp(name_template);\n    filename_ = name_template;\n# endif  // GTEST_OS_WINDOWS\n    fflush(NULL);\n    dup2(captured_fd, fd_);\n    close(captured_fd);\n  }\n\n  ~CapturedStream() {\n    remove(filename_.c_str());\n  }\n\n  String GetCapturedString() {\n    if (uncaptured_fd_ != -1) {\n      // Restores the original stream.\n      fflush(NULL);\n      dup2(uncaptured_fd_, fd_);\n      close(uncaptured_fd_);\n      uncaptured_fd_ = -1;\n    }\n\n    FILE* const file = posix::FOpen(filename_.c_str(), \"r\");\n    const String content = ReadEntireFile(file);\n    posix::FClose(file);\n    return content;\n  }\n\n private:\n  // Reads the entire content of a file as a String.\n  static String ReadEntireFile(FILE* file);\n\n  // Returns the size (in bytes) of a file.\n  static size_t GetFileSize(FILE* file);\n\n  const int fd_;  // A stream to capture.\n  int uncaptured_fd_;\n  // Name of the temporary file holding the stderr output.\n  ::std::string filename_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(CapturedStream);\n};\n\n// Returns the size (in bytes) of a file.\nsize_t CapturedStream::GetFileSize(FILE* file) {\n  fseek(file, 0, SEEK_END);\n  return static_cast<size_t>(ftell(file));\n}\n\n// Reads the entire content of a file as a string.\nString CapturedStream::ReadEntireFile(FILE* file) {\n  const size_t file_size = GetFileSize(file);\n  char* const buffer = new char[file_size];\n\n  size_t bytes_last_read = 0;  // # of bytes read in the last fread()\n  size_t bytes_read = 0;       // # of bytes read so far\n\n  fseek(file, 0, SEEK_SET);\n\n  // Keeps reading the file until we cannot read further or the\n  // pre-determined file size is 
reached.\n  do {\n    bytes_last_read = fread(buffer+bytes_read, 1, file_size-bytes_read, file);\n    bytes_read += bytes_last_read;\n  } while (bytes_last_read > 0 && bytes_read < file_size);\n\n  const String content(buffer, bytes_read);\n  delete[] buffer;\n\n  return content;\n}\n\n# ifdef _MSC_VER\n#  pragma warning(pop)\n# endif  // _MSC_VER\n\nstatic CapturedStream* g_captured_stderr = NULL;\nstatic CapturedStream* g_captured_stdout = NULL;\n\n// Starts capturing an output stream (stdout/stderr).\nvoid CaptureStream(int fd, const char* stream_name, CapturedStream** stream) {\n  if (*stream != NULL) {\n    GTEST_LOG_(FATAL) << \"Only one \" << stream_name\n                      << \" capturer can exist at a time.\";\n  }\n  *stream = new CapturedStream(fd);\n}\n\n// Stops capturing the output stream and returns the captured string.\nString GetCapturedStream(CapturedStream** captured_stream) {\n  const String content = (*captured_stream)->GetCapturedString();\n\n  delete *captured_stream;\n  *captured_stream = NULL;\n\n  return content;\n}\n\n// Starts capturing stdout.\nvoid CaptureStdout() {\n  CaptureStream(kStdOutFileno, \"stdout\", &g_captured_stdout);\n}\n\n// Starts capturing stderr.\nvoid CaptureStderr() {\n  CaptureStream(kStdErrFileno, \"stderr\", &g_captured_stderr);\n}\n\n// Stops capturing stdout and returns the captured string.\nString GetCapturedStdout() { return GetCapturedStream(&g_captured_stdout); }\n\n// Stops capturing stderr and returns the captured string.\nString GetCapturedStderr() { return GetCapturedStream(&g_captured_stderr); }\n\n#endif  // GTEST_HAS_STREAM_REDIRECTION\n\n#if GTEST_HAS_DEATH_TEST\n\n// A copy of all command line arguments.  
Set by InitGoogleTest().\n::std::vector<String> g_argvs;\n\n// Returns the command line as a vector of strings.\nconst ::std::vector<String>& GetArgvs() { return g_argvs; }\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n#if GTEST_OS_WINDOWS_MOBILE\nnamespace posix {\nvoid Abort() {\n  DebugBreak();\n  TerminateProcess(GetCurrentProcess(), 1);\n}\n}  // namespace posix\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n// Returns the name of the environment variable corresponding to the\n// given flag.  For example, FlagToEnvVar(\"foo\") will return\n// \"GTEST_FOO\" in the open-source version.\nstatic String FlagToEnvVar(const char* flag) {\n  const String full_flag =\n      (Message() << GTEST_FLAG_PREFIX_ << flag).GetString();\n\n  Message env_var;\n  for (size_t i = 0; i != full_flag.length(); i++) {\n    env_var << ToUpper(full_flag.c_str()[i]);\n  }\n\n  return env_var.GetString();\n}\n\n// Parses 'str' for a 32-bit signed integer.  If successful, writes\n// the result to *value and returns true; otherwise leaves *value\n// unchanged and returns false.\nbool ParseInt32(const Message& src_text, const char* str, Int32* value) {\n  // Parses the environment variable as a decimal integer.\n  char* end = NULL;\n  const long long_value = strtol(str, &end, 10);  // NOLINT\n\n  // Has strtol() consumed all characters in the string?\n  if (*end != '\\0') {\n    // No - an invalid character was encountered.\n    Message msg;\n    msg << \"WARNING: \" << src_text\n        << \" is expected to be a 32-bit integer, but actually\"\n        << \" has value \\\"\" << str << \"\\\".\\n\";\n    printf(\"%s\", msg.GetString().c_str());\n    fflush(stdout);\n    return false;\n  }\n\n  // Is the parsed value in the range of an Int32?\n  const Int32 result = static_cast<Int32>(long_value);\n  if (long_value == LONG_MAX || long_value == LONG_MIN ||\n      // The parsed value overflows as a long.  
(strtol() returns\n      // LONG_MAX or LONG_MIN when the input overflows.)\n      result != long_value\n      // The parsed value overflows as an Int32.\n      ) {\n    Message msg;\n    msg << \"WARNING: \" << src_text\n        << \" is expected to be a 32-bit integer, but actually\"\n        << \" has value \" << str << \", which overflows.\\n\";\n    printf(\"%s\", msg.GetString().c_str());\n    fflush(stdout);\n    return false;\n  }\n\n  *value = result;\n  return true;\n}\n\n// Reads and returns the Boolean environment variable corresponding to\n// the given flag; if it's not set, returns default_value.\n//\n// The value is considered true iff it's not \"0\".\nbool BoolFromGTestEnv(const char* flag, bool default_value) {\n  const String env_var = FlagToEnvVar(flag);\n  const char* const string_value = posix::GetEnv(env_var.c_str());\n  return string_value == NULL ?\n      default_value : strcmp(string_value, \"0\") != 0;\n}\n\n// Reads and returns a 32-bit integer stored in the environment\n// variable corresponding to the given flag; if it isn't set or\n// doesn't represent a valid 32-bit integer, returns default_value.\nInt32 Int32FromGTestEnv(const char* flag, Int32 default_value) {\n  const String env_var = FlagToEnvVar(flag);\n  const char* const string_value = posix::GetEnv(env_var.c_str());\n  if (string_value == NULL) {\n    // The environment variable is not set.\n    return default_value;\n  }\n\n  Int32 result = default_value;\n  if (!ParseInt32(Message() << \"Environment variable \" << env_var,\n                  string_value, &result)) {\n    printf(\"The default value %s is used.\\n\",\n           (Message() << default_value).GetString().c_str());\n    fflush(stdout);\n    return default_value;\n  }\n\n  return result;\n}\n\n// Reads and returns the string environment variable corresponding to\n// the given flag; if it's not set, returns default_value.\nconst char* StringFromGTestEnv(const char* flag, const char* default_value) {\n  const 
String env_var = FlagToEnvVar(flag);\n  const char* const value = posix::GetEnv(env_var.c_str());\n  return value == NULL ? default_value : value;\n}\n\n}  // namespace internal\n}  // namespace testing\n// Copyright 2007, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n// Google Test - The Google C++ Testing Framework\n//\n// This file implements a universal value printer that can print a\n// value of any type T:\n//\n//   void ::testing::internal::UniversalPrinter<T>::Print(value, ostream_ptr);\n//\n// It uses the << operator when possible, and prints the bytes in the\n// object otherwise.  A user can override its behavior for a class\n// type Foo by defining either operator<<(::std::ostream&, const Foo&)\n// or void PrintTo(const Foo&, ::std::ostream*) in the namespace that\n// defines Foo.\n\n#include <ctype.h>\n#include <stdio.h>\n#include <ostream>  // NOLINT\n#include <string>\n\nnamespace testing {\n\nnamespace {\n\nusing ::std::ostream;\n\n#if GTEST_OS_WINDOWS_MOBILE  // Windows CE does not define _snprintf_s.\n# define snprintf _snprintf\n#elif _MSC_VER >= 1400  // VC 8.0 and later deprecate snprintf and _snprintf.\n# define snprintf _snprintf_s\n#elif _MSC_VER\n# define snprintf _snprintf\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n// Prints a segment of bytes in the given object.\nvoid PrintByteSegmentInObjectTo(const unsigned char* obj_bytes, size_t start,\n                                size_t count, ostream* os) {\n  char text[5] = \"\";\n  for (size_t i = 0; i != count; i++) {\n    const size_t j = start + i;\n    if (i != 0) {\n      // Organizes the bytes into groups of 2 for easy parsing by\n      // human.\n      if ((j % 2) == 0)\n    
    *os << ' ';\n      else\n        *os << '-';\n    }\n    snprintf(text, sizeof(text), \"%02X\", obj_bytes[j]);\n    *os << text;\n  }\n}\n\n// Prints the bytes in the given value to the given ostream.\nvoid PrintBytesInObjectToImpl(const unsigned char* obj_bytes, size_t count,\n                              ostream* os) {\n  // Tells the user how big the object is.\n  *os << count << \"-byte object <\";\n\n  const size_t kThreshold = 132;\n  const size_t kChunkSize = 64;\n  // If the object size is bigger than kThreshold, we'll have to omit\n  // some details by printing only the first and the last kChunkSize\n  // bytes.\n  // TODO(wan): let the user control the threshold using a flag.\n  if (count < kThreshold) {\n    PrintByteSegmentInObjectTo(obj_bytes, 0, count, os);\n  } else {\n    PrintByteSegmentInObjectTo(obj_bytes, 0, kChunkSize, os);\n    *os << \" ... \";\n    // Rounds up to 2-byte boundary.\n    const size_t resume_pos = (count - kChunkSize + 1)/2*2;\n    PrintByteSegmentInObjectTo(obj_bytes, resume_pos, count - resume_pos, os);\n  }\n  *os << \">\";\n}\n\n}  // namespace\n\nnamespace internal2 {\n\n// Delegates to PrintBytesInObjectToImpl() to print the bytes in the\n// given object.  The delegation simplifies the implementation, which\n// uses the << operator and thus is easier done outside of the\n// ::testing::internal namespace, which contains a << operator that\n// sometimes conflicts with the one in STL.\nvoid PrintBytesInObjectTo(const unsigned char* obj_bytes, size_t count,\n                          ostream* os) {\n  PrintBytesInObjectToImpl(obj_bytes, count, os);\n}\n\n}  // namespace internal2\n\nnamespace internal {\n\n// Depending on the value of a char (or wchar_t), we print it in one\n// of three formats:\n//   - as is if it's a printable ASCII (e.g. 'a', '2', ' '),\n//   - as a hexadecimal escape sequence (e.g. '\\x7F'), or\n//   - as a special escape sequence (e.g. 
'\\r', '\\n').\nenum CharFormat {\n  kAsIs,\n  kHexEscape,\n  kSpecialEscape\n};\n\n// Returns true if c is a printable ASCII character.  We test the\n// value of c directly instead of calling isprint(), which is buggy on\n// Windows Mobile.\ninline bool IsPrintableAscii(wchar_t c) {\n  return 0x20 <= c && c <= 0x7E;\n}\n\n// Prints a wide or narrow char c as a character literal without the\n// quotes, escaping it when necessary; returns how c was formatted.\n// The template argument UnsignedChar is the unsigned version of Char,\n// which is the type of c.\ntemplate <typename UnsignedChar, typename Char>\nstatic CharFormat PrintAsCharLiteralTo(Char c, ostream* os) {\n  switch (static_cast<wchar_t>(c)) {\n    case L'\\0':\n      *os << \"\\\\0\";\n      break;\n    case L'\\'':\n      *os << \"\\\\'\";\n      break;\n    case L'\\\\':\n      *os << \"\\\\\\\\\";\n      break;\n    case L'\\a':\n      *os << \"\\\\a\";\n      break;\n    case L'\\b':\n      *os << \"\\\\b\";\n      break;\n    case L'\\f':\n      *os << \"\\\\f\";\n      break;\n    case L'\\n':\n      *os << \"\\\\n\";\n      break;\n    case L'\\r':\n      *os << \"\\\\r\";\n      break;\n    case L'\\t':\n      *os << \"\\\\t\";\n      break;\n    case L'\\v':\n      *os << \"\\\\v\";\n      break;\n    default:\n      if (IsPrintableAscii(c)) {\n        *os << static_cast<char>(c);\n        return kAsIs;\n      } else {\n        *os << String::Format(\"\\\\x%X\", static_cast<UnsignedChar>(c));\n        return kHexEscape;\n      }\n  }\n  return kSpecialEscape;\n}\n\n// Prints a char c as if it's part of a string literal, escaping it when\n// necessary; returns how c was formatted.\nstatic CharFormat PrintAsWideStringLiteralTo(wchar_t c, ostream* os) {\n  switch (c) {\n    case L'\\'':\n      *os << \"'\";\n      return kAsIs;\n    case L'\"':\n      *os << \"\\\\\\\"\";\n      return kSpecialEscape;\n    default:\n      return PrintAsCharLiteralTo<wchar_t>(c, os);\n  }\n}\n\n// Prints a char c as 
if it's part of a string literal, escaping it when\n// necessary; returns how c was formatted.\nstatic CharFormat PrintAsNarrowStringLiteralTo(char c, ostream* os) {\n  return PrintAsWideStringLiteralTo(static_cast<unsigned char>(c), os);\n}\n\n// Prints a wide or narrow character c and its code.  '\\0' is printed\n// as \"'\\\\0'\", other unprintable characters are also properly escaped\n// using the standard C++ escape sequence.  The template argument\n// UnsignedChar is the unsigned version of Char, which is the type of c.\ntemplate <typename UnsignedChar, typename Char>\nvoid PrintCharAndCodeTo(Char c, ostream* os) {\n  // First, print c as a literal in the most readable form we can find.\n  *os << ((sizeof(c) > 1) ? \"L'\" : \"'\");\n  const CharFormat format = PrintAsCharLiteralTo<UnsignedChar>(c, os);\n  *os << \"'\";\n\n  // To aid user debugging, we also print c's code in decimal, unless\n  // it's 0 (in which case c was printed as '\\\\0', making the code\n  // obvious).\n  if (c == 0)\n    return;\n  *os << \" (\" << String::Format(\"%d\", c).c_str();\n\n  // For more convenience, we print c's code again in hexidecimal,\n  // unless c was already printed in the form '\\x##' or the code is in\n  // [1, 9].\n  if (format == kHexEscape || (1 <= c && c <= 9)) {\n    // Do nothing.\n  } else {\n    *os << String::Format(\", 0x%X\",\n                          static_cast<UnsignedChar>(c)).c_str();\n  }\n  *os << \")\";\n}\n\nvoid PrintTo(unsigned char c, ::std::ostream* os) {\n  PrintCharAndCodeTo<unsigned char>(c, os);\n}\nvoid PrintTo(signed char c, ::std::ostream* os) {\n  PrintCharAndCodeTo<unsigned char>(c, os);\n}\n\n// Prints a wchar_t as a symbol if it is printable or as its internal\n// code otherwise and also as its code.  
L'\\0' is printed as \"L'\\\\0'\".\nvoid PrintTo(wchar_t wc, ostream* os) {\n  PrintCharAndCodeTo<wchar_t>(wc, os);\n}\n\n// Prints the given array of characters to the ostream.\n// The array starts at *begin, the length is len, it may include '\\0' characters\n// and may not be null-terminated.\nstatic void PrintCharsAsStringTo(const char* begin, size_t len, ostream* os) {\n  *os << \"\\\"\";\n  bool is_previous_hex = false;\n  for (size_t index = 0; index < len; ++index) {\n    const char cur = begin[index];\n    if (is_previous_hex && IsXDigit(cur)) {\n      // Previous character is of '\\x..' form and this character can be\n      // interpreted as another hexadecimal digit in its number. Break string to\n      // disambiguate.\n      *os << \"\\\" \\\"\";\n    }\n    is_previous_hex = PrintAsNarrowStringLiteralTo(cur, os) == kHexEscape;\n  }\n  *os << \"\\\"\";\n}\n\n// Prints a (const) char array of 'len' elements, starting at address 'begin'.\nvoid UniversalPrintArray(const char* begin, size_t len, ostream* os) {\n  PrintCharsAsStringTo(begin, len, os);\n}\n\n// Prints the given array of wide characters to the ostream.\n// The array starts at *begin, the length is len, it may include L'\\0'\n// characters and may not be null-terminated.\nstatic void PrintWideCharsAsStringTo(const wchar_t* begin, size_t len,\n                                     ostream* os) {\n  *os << \"L\\\"\";\n  bool is_previous_hex = false;\n  for (size_t index = 0; index < len; ++index) {\n    const wchar_t cur = begin[index];\n    if (is_previous_hex && isascii(cur) && IsXDigit(static_cast<char>(cur))) {\n      // Previous character is of '\\x..' form and this character can be\n      // interpreted as another hexadecimal digit in its number. 
Break string to\n      // disambiguate.\n      *os << \"\\\" L\\\"\";\n    }\n    is_previous_hex = PrintAsWideStringLiteralTo(cur, os) == kHexEscape;\n  }\n  *os << \"\\\"\";\n}\n\n// Prints the given C string to the ostream.\nvoid PrintTo(const char* s, ostream* os) {\n  if (s == NULL) {\n    *os << \"NULL\";\n  } else {\n    *os << ImplicitCast_<const void*>(s) << \" pointing to \";\n    PrintCharsAsStringTo(s, strlen(s), os);\n  }\n}\n\n// MSVC compiler can be configured to define whar_t as a typedef\n// of unsigned short. Defining an overload for const wchar_t* in that case\n// would cause pointers to unsigned shorts be printed as wide strings,\n// possibly accessing more memory than intended and causing invalid\n// memory accesses. MSVC defines _NATIVE_WCHAR_T_DEFINED symbol when\n// wchar_t is implemented as a native type.\n#if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED)\n// Prints the given wide C string to the ostream.\nvoid PrintTo(const wchar_t* s, ostream* os) {\n  if (s == NULL) {\n    *os << \"NULL\";\n  } else {\n    *os << ImplicitCast_<const void*>(s) << \" pointing to \";\n    PrintWideCharsAsStringTo(s, wcslen(s), os);\n  }\n}\n#endif  // wchar_t is native\n\n// Prints a ::string object.\n#if GTEST_HAS_GLOBAL_STRING\nvoid PrintStringTo(const ::string& s, ostream* os) {\n  PrintCharsAsStringTo(s.data(), s.size(), os);\n}\n#endif  // GTEST_HAS_GLOBAL_STRING\n\nvoid PrintStringTo(const ::std::string& s, ostream* os) {\n  PrintCharsAsStringTo(s.data(), s.size(), os);\n}\n\n// Prints a ::wstring object.\n#if GTEST_HAS_GLOBAL_WSTRING\nvoid PrintWideStringTo(const ::wstring& s, ostream* os) {\n  PrintWideCharsAsStringTo(s.data(), s.size(), os);\n}\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n#if GTEST_HAS_STD_WSTRING\nvoid PrintWideStringTo(const ::std::wstring& s, ostream* os) {\n  PrintWideCharsAsStringTo(s.data(), s.size(), os);\n}\n#endif  // GTEST_HAS_STD_WSTRING\n\n}  // namespace internal\n\n}  // namespace testing\n// Copyright 2008, 
Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: mheule@google.com (Markus Heule)\n//\n// The Google C++ Testing Framework (Google Test)\n\n\n// Indicates that this translation unit is part of Google Test's\n// implementation.  It must come before gtest-internal-inl.h is\n// included, or there will be a compiler error.  
This trick is to\n// prevent a user from accidentally including gtest-internal-inl.h in\n// his code.\n#define GTEST_IMPLEMENTATION_ 1\n#undef GTEST_IMPLEMENTATION_\n\nnamespace testing {\n\nusing internal::GetUnitTestImpl;\n\n// Gets the summary of the failure message by omitting the stack trace\n// in it.\ninternal::String TestPartResult::ExtractSummary(const char* message) {\n  const char* const stack_trace = strstr(message, internal::kStackTraceMarker);\n  return stack_trace == NULL ? internal::String(message) :\n      internal::String(message, stack_trace - message);\n}\n\n// Prints a TestPartResult object.\nstd::ostream& operator<<(std::ostream& os, const TestPartResult& result) {\n  return os\n      << result.file_name() << \":\" << result.line_number() << \": \"\n      << (result.type() == TestPartResult::kSuccess ? \"Success\" :\n          result.type() == TestPartResult::kFatalFailure ? \"Fatal failure\" :\n          \"Non-fatal failure\") << \":\\n\"\n      << result.message() << std::endl;\n}\n\n// Appends a TestPartResult to the array.\nvoid TestPartResultArray::Append(const TestPartResult& result) {\n  array_.push_back(result);\n}\n\n// Returns the TestPartResult at the given index (0-based).\nconst TestPartResult& TestPartResultArray::GetTestPartResult(int index) const {\n  if (index < 0 || index >= size()) {\n    printf(\"\\nInvalid index (%d) into TestPartResultArray.\\n\", index);\n    internal::posix::Abort();\n  }\n\n  return array_[index];\n}\n\n// Returns the number of TestPartResult objects in the array.\nint TestPartResultArray::size() const {\n  return static_cast<int>(array_.size());\n}\n\nnamespace internal {\n\nHasNewFatalFailureHelper::HasNewFatalFailureHelper()\n    : has_new_fatal_failure_(false),\n      original_reporter_(GetUnitTestImpl()->\n                         GetTestPartResultReporterForCurrentThread()) {\n  
GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(this);\n}\n\nHasNewFatalFailureHelper::~HasNewFatalFailureHelper() {\n  GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(\n      original_reporter_);\n}\n\nvoid HasNewFatalFailureHelper::ReportTestPartResult(\n    const TestPartResult& result) {\n  if (result.fatally_failed())\n    has_new_fatal_failure_ = true;\n  original_reporter_->ReportTestPartResult(result);\n}\n\n}  // namespace internal\n\n}  // namespace testing\n// Copyright 2008 Google Inc.\n// All Rights Reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n\nnamespace testing {\nnamespace internal {\n\n#if GTEST_HAS_TYPED_TEST_P\n\n// Skips to the first non-space char in str. Returns an empty string if str\n// contains only whitespace characters.\nstatic const char* SkipSpaces(const char* str) {\n  while (IsSpace(*str))\n    str++;\n  return str;\n}\n\n// Verifies that registered_tests match the test names in\n// defined_test_names_; returns registered_tests if successful, or\n// aborts the program otherwise.\nconst char* TypedTestCasePState::VerifyRegisteredTestNames(\n    const char* file, int line, const char* registered_tests) {\n  typedef ::std::set<const char*>::const_iterator DefinedTestIter;\n  registered_ = true;\n\n  // Skip initial whitespace in registered_tests since some\n  // preprocessors prefix stringizied literals with whitespace.\n  registered_tests = SkipSpaces(registered_tests);\n\n  Message errors;\n  ::std::set<String> tests;\n  for (const char* names = registered_tests; names != NULL;\n       names = SkipComma(names)) {\n    const String name = GetPrefixUntilComma(names);\n    if (tests.count(name) != 0) {\n      errors << \"Test \" << name << \" is listed more than once.\\n\";\n      continue;\n    }\n\n    bool found = false;\n    for (DefinedTestIter it = defined_test_names_.begin();\n         it != defined_test_names_.end();\n         ++it) {\n      if (name == *it) {\n        found = true;\n        break;\n    
  }\n    }\n\n    if (found) {\n      tests.insert(name);\n    } else {\n      errors << \"No test named \" << name\n             << \" can be found in this test case.\\n\";\n    }\n  }\n\n  for (DefinedTestIter it = defined_test_names_.begin();\n       it != defined_test_names_.end();\n       ++it) {\n    if (tests.count(*it) == 0) {\n      errors << \"You forgot to list test \" << *it << \".\\n\";\n    }\n  }\n\n  const String& errors_str = errors.GetString();\n  if (errors_str != \"\") {\n    fprintf(stderr, \"%s %s\", FormatFileLocation(file, line).c_str(),\n            errors_str.c_str());\n    fflush(stderr);\n    posix::Abort();\n  }\n\n  return registered_tests;\n}\n\n#endif  // GTEST_HAS_TYPED_TEST_P\n\n}  // namespace internal\n}  // namespace testing\n"
  },
  {
    "path": "caffe-fpn/src/gtest/gtest.h",
    "content": "// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file defines the public API for Google Test.  
It should be\n// included by any test program that uses Google Test.\n//\n// IMPORTANT NOTE: Due to limitation of the C++ language, we have to\n// leave some internal implementation details in this header file.\n// They are clearly marked by comments like this:\n//\n//   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n//\n// Such code is NOT meant to be used by a user directly, and is subject\n// to CHANGE WITHOUT NOTICE.  Therefore DO NOT DEPEND ON IT in a user\n// program!\n//\n// Acknowledgment: Google Test borrowed the idea of automatic test\n// registration from Barthelemy Dagenais' (barthelemy@prologique.com)\n// easyUnit framework.\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_H_\n#define GTEST_INCLUDE_GTEST_GTEST_H_\n\n#include <limits>\n#include <vector>\n\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file declares functions and macros used internally by\n// Google Test.  They are subject to change without notice.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_\n\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: wan@google.com (Zhanyong Wan)\n//\n// Low-level types and utilities for porting Google Test to various\n// platforms.  They are subject to change without notice.  DO NOT USE\n// THEM IN USER CODE.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_\n\n// The user can define the following macros in the build script to\n// control Google Test's behavior.  If the user doesn't define a macro\n// in this list, Google Test will define it.\n//\n//   GTEST_HAS_CLONE          - Define it to 1/0 to indicate that clone(2)\n//                              is/isn't available.\n//   GTEST_HAS_EXCEPTIONS     - Define it to 1/0 to indicate that exceptions\n//                              are enabled.\n//   GTEST_HAS_GLOBAL_STRING  - Define it to 1/0 to indicate that ::string\n//                              is/isn't available (some systems define\n//                              ::string, which is different to std::string).\n//   GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::string\n//                              is/isn't available (some systems define\n//                              ::wstring, which is different to std::wstring).\n//   GTEST_HAS_POSIX_RE       - Define it to 1/0 to indicate that POSIX regular\n//                              expressions are/aren't available.\n//   GTEST_HAS_PTHREAD        - Define it to 1/0 to indicate that <pthread.h>\n//     
                         is/isn't available.\n//   GTEST_HAS_RTTI           - Define it to 1/0 to indicate that RTTI is/isn't\n//                              enabled.\n//   GTEST_HAS_STD_WSTRING    - Define it to 1/0 to indicate that\n//                              std::wstring does/doesn't work (Google Test can\n//                              be used where std::wstring is unavailable).\n//   GTEST_HAS_TR1_TUPLE      - Define it to 1/0 to indicate tr1::tuple\n//                              is/isn't available.\n//   GTEST_HAS_SEH            - Define it to 1/0 to indicate whether the\n//                              compiler supports Microsoft's \"Structured\n//                              Exception Handling\".\n//   GTEST_HAS_STREAM_REDIRECTION\n//                            - Define it to 1/0 to indicate whether the\n//                              platform supports I/O stream redirection using\n//                              dup() and dup2().\n//   GTEST_USE_OWN_TR1_TUPLE  - Define it to 1/0 to indicate whether Google\n//                              Test's own tr1 tuple implementation should be\n//                              used.  
Unused when the user sets\n//                              GTEST_HAS_TR1_TUPLE to 0.\n//   GTEST_LINKED_AS_SHARED_LIBRARY\n//                            - Define to 1 when compiling tests that use\n//                              Google Test as a shared library (known as\n//                              DLL on Windows).\n//   GTEST_CREATE_SHARED_LIBRARY\n//                            - Define to 1 when compiling Google Test itself\n//                              as a shared library.\n\n// This header defines the following utilities:\n//\n// Macros indicating the current platform (defined to 1 if compiled on\n// the given platform; otherwise undefined):\n//   GTEST_OS_AIX      - IBM AIX\n//   GTEST_OS_CYGWIN   - Cygwin\n//   GTEST_OS_HPUX     - HP-UX\n//   GTEST_OS_LINUX    - Linux\n//     GTEST_OS_LINUX_ANDROID - Google Android\n//   GTEST_OS_MAC      - Mac OS X\n//   GTEST_OS_NACL     - Google Native Client (NaCl)\n//   GTEST_OS_SOLARIS  - Sun Solaris\n//   GTEST_OS_SYMBIAN  - Symbian\n//   GTEST_OS_WINDOWS  - Windows (Desktop, MinGW, or Mobile)\n//     GTEST_OS_WINDOWS_DESKTOP  - Windows Desktop\n//     GTEST_OS_WINDOWS_MINGW    - MinGW\n//     GTEST_OS_WINDOWS_MOBILE   - Windows Mobile\n//   GTEST_OS_ZOS      - z/OS\n//\n// Among the platforms, Cygwin, Linux, Max OS X, and Windows have the\n// most stable support.  Since core members of the Google Test project\n// don't have access to other platforms, support for them may be less\n// stable.  
If you notice any problems on your platform, please notify\n// googletestframework@googlegroups.com (patches for fixing them are\n// even more welcome!).\n//\n// Note that it is possible that none of the GTEST_OS_* macros are defined.\n//\n// Macros indicating available Google Test features (defined to 1 if\n// the corresponding feature is supported; otherwise undefined):\n//   GTEST_HAS_COMBINE      - the Combine() function (for value-parameterized\n//                            tests)\n//   GTEST_HAS_DEATH_TEST   - death tests\n//   GTEST_HAS_PARAM_TEST   - value-parameterized tests\n//   GTEST_HAS_TYPED_TEST   - typed tests\n//   GTEST_HAS_TYPED_TEST_P - type-parameterized tests\n//   GTEST_USES_POSIX_RE    - enhanced POSIX regex is used. Do not confuse with\n//                            GTEST_HAS_POSIX_RE (see above) which users can\n//                            define themselves.\n//   GTEST_USES_SIMPLE_RE   - our own simple regex is used;\n//                            the above two are mutually exclusive.\n//   GTEST_CAN_COMPARE_NULL - accepts untyped NULL in EXPECT_EQ().\n//\n// Macros for basic C++ coding:\n//   GTEST_AMBIGUOUS_ELSE_BLOCKER_ - for disabling a gcc warning.\n//   GTEST_ATTRIBUTE_UNUSED_  - declares that a class' instances or a\n//                              variable don't have to be used.\n//   GTEST_DISALLOW_ASSIGN_   - disables operator=.\n//   GTEST_DISALLOW_COPY_AND_ASSIGN_ - disables copy ctor and operator=.\n//   GTEST_MUST_USE_RESULT_   - declares that a function's result must be used.\n//\n// Synchronization:\n//   Mutex, MutexLock, ThreadLocal, GetThreadCount()\n//                  - synchronization primitives.\n//   GTEST_IS_THREADSAFE - defined to 1 to indicate that the above\n//                         synchronization primitives have real implementations\n//                         and Google Test is thread-safe; or 0 otherwise.\n//\n// Template meta programming:\n//   is_pointer     - as in TR1; needed on Symbian and IBM XL 
C/C++ only.\n//   IteratorTraits - partial implementation of std::iterator_traits, which\n//                    is not available in libCstd when compiled with Sun C++.\n//\n// Smart pointers:\n//   scoped_ptr     - as in TR2.\n//\n// Regular expressions:\n//   RE             - a simple regular expression class using the POSIX\n//                    Extended Regular Expression syntax on UNIX-like\n//                    platforms, or a reduced regular exception syntax on\n//                    other platforms, including Windows.\n//\n// Logging:\n//   GTEST_LOG_()   - logs messages at the specified severity level.\n//   LogToStderr()  - directs all log messages to stderr.\n//   FlushInfoLog() - flushes informational log messages.\n//\n// Stdout and stderr capturing:\n//   CaptureStdout()     - starts capturing stdout.\n//   GetCapturedStdout() - stops capturing stdout and returns the captured\n//                         string.\n//   CaptureStderr()     - starts capturing stderr.\n//   GetCapturedStderr() - stops capturing stderr and returns the captured\n//                         string.\n//\n// Integer types:\n//   TypeWithSize   - maps an integer to a int type.\n//   Int32, UInt32, Int64, UInt64, TimeInMillis\n//                  - integers of known sizes.\n//   BiggestInt     - the biggest signed integer type.\n//\n// Command-line utilities:\n//   GTEST_FLAG()       - references a flag.\n//   GTEST_DECLARE_*()  - declares a flag.\n//   GTEST_DEFINE_*()   - defines a flag.\n//   GetArgvs()         - returns the command line as a vector of strings.\n//\n// Environment variable utilities:\n//   GetEnv()             - gets the value of an environment variable.\n//   BoolFromGTestEnv()   - parses a bool environment variable.\n//   Int32FromGTestEnv()  - parses an Int32 environment variable.\n//   StringFromGTestEnv() - parses a string environment variable.\n\n#include <ctype.h>   // for isspace, etc\n#include <stddef.h>  // for ptrdiff_t\n#include 
<stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#ifndef _WIN32_WCE\n# include <sys/types.h>\n# include <sys/stat.h>\n#endif  // !_WIN32_WCE\n\n#include <iostream>  // NOLINT\n#include <sstream>  // NOLINT\n#include <string>  // NOLINT\n\n#define GTEST_DEV_EMAIL_ \"googletestframework@@googlegroups.com\"\n#define GTEST_FLAG_PREFIX_ \"gtest_\"\n#define GTEST_FLAG_PREFIX_DASH_ \"gtest-\"\n#define GTEST_FLAG_PREFIX_UPPER_ \"GTEST_\"\n#define GTEST_NAME_ \"Google Test\"\n#define GTEST_PROJECT_URL_ \"http://code.google.com/p/googletest/\"\n\n// Determines the version of gcc that is used to compile this.\n#ifdef __GNUC__\n// 40302 means version 4.3.2.\n# define GTEST_GCC_VER_ \\\n    (__GNUC__*10000 + __GNUC_MINOR__*100 + __GNUC_PATCHLEVEL__)\n#endif  // __GNUC__\n\n// Determines the platform on which Google Test is compiled.\n#ifdef __CYGWIN__\n# define GTEST_OS_CYGWIN 1\n#elif defined __SYMBIAN32__\n# define GTEST_OS_SYMBIAN 1\n#elif defined _WIN32\n# define GTEST_OS_WINDOWS 1\n# ifdef _WIN32_WCE\n#  define GTEST_OS_WINDOWS_MOBILE 1\n# elif defined(__MINGW__) || defined(__MINGW32__)\n#  define GTEST_OS_WINDOWS_MINGW 1\n# else\n#  define GTEST_OS_WINDOWS_DESKTOP 1\n# endif  // _WIN32_WCE\n#elif defined __APPLE__\n# define GTEST_OS_MAC 1\n#elif defined __linux__\n# define GTEST_OS_LINUX 1\n# ifdef ANDROID\n#  define GTEST_OS_LINUX_ANDROID 1\n# endif  // ANDROID\n#elif defined __MVS__\n# define GTEST_OS_ZOS 1\n#elif defined(__sun) && defined(__SVR4)\n# define GTEST_OS_SOLARIS 1\n#elif defined(_AIX)\n# define GTEST_OS_AIX 1\n#elif defined(__hpux)\n# define GTEST_OS_HPUX 1\n#elif defined __native_client__\n# define GTEST_OS_NACL 1\n#endif  // __CYGWIN__\n\n// Brings in definitions for functions used in the testing::internal::posix\n// namespace (read, write, close, chdir, isatty, stat). We do not currently\n// use them on Windows Mobile.\n#if !GTEST_OS_WINDOWS\n// This assumes that non-Windows OSes provide unistd.h. 
For OSes where this\n// is not the case, we need to include headers that provide the functions\n// mentioned above.\n# include <unistd.h>\n# if !GTEST_OS_NACL\n// TODO(vladl@google.com): Remove this condition when Native Client SDK adds\n// strings.h (tracked in\n// http://code.google.com/p/nativeclient/issues/detail?id=1175).\n#  include <strings.h>  // Native Client doesn't provide strings.h.\n# endif\n#elif !GTEST_OS_WINDOWS_MOBILE\n# include <direct.h>\n# include <io.h>\n#endif\n\n// Defines this to true iff Google Test can use POSIX regular expressions.\n#ifndef GTEST_HAS_POSIX_RE\n# define GTEST_HAS_POSIX_RE (!GTEST_OS_WINDOWS)\n#endif\n\n#if GTEST_HAS_POSIX_RE\n\n// On some platforms, <regex.h> needs someone to define size_t, and\n// won't compile otherwise.  We can #include it here as we already\n// included <stdlib.h>, which is guaranteed to define size_t through\n// <stddef.h>.\n# include <regex.h>  // NOLINT\n\n# define GTEST_USES_POSIX_RE 1\n\n#elif GTEST_OS_WINDOWS\n\n// <regex.h> is not available on Windows.  Use our own simple regex\n// implementation instead.\n# define GTEST_USES_SIMPLE_RE 1\n\n#else\n\n// <regex.h> may not be available on this platform.  
Use our own\n// simple regex implementation instead.\n# define GTEST_USES_SIMPLE_RE 1\n\n#endif  // GTEST_HAS_POSIX_RE\n\n#ifndef GTEST_HAS_EXCEPTIONS\n// The user didn't tell us whether exceptions are enabled, so we need\n// to figure it out.\n# if defined(_MSC_VER) || defined(__BORLANDC__)\n// MSVC's and C++Builder's implementations of the STL use the _HAS_EXCEPTIONS\n// macro to enable exceptions, so we'll do the same.\n// Assumes that exceptions are enabled by default.\n#  ifndef _HAS_EXCEPTIONS\n#   define _HAS_EXCEPTIONS 1\n#  endif  // _HAS_EXCEPTIONS\n#  define GTEST_HAS_EXCEPTIONS _HAS_EXCEPTIONS\n# elif defined(__GNUC__) && __EXCEPTIONS\n// gcc defines __EXCEPTIONS to 1 iff exceptions are enabled.\n#  define GTEST_HAS_EXCEPTIONS 1\n# elif defined(__SUNPRO_CC)\n// Sun Pro CC supports exceptions.  However, there is no compile-time way of\n// detecting whether they are enabled or not.  Therefore, we assume that\n// they are enabled unless the user tells us otherwise.\n#  define GTEST_HAS_EXCEPTIONS 1\n# elif defined(__IBMCPP__) && __EXCEPTIONS\n// xlC defines __EXCEPTIONS to 1 iff exceptions are enabled.\n#  define GTEST_HAS_EXCEPTIONS 1\n# elif defined(__HP_aCC)\n// Exception handling is in effect by default in HP aCC compiler. 
It has to\n// be turned off by the +noeh compiler option if desired.\n#  define GTEST_HAS_EXCEPTIONS 1\n# else\n// For other compilers, we assume exceptions are disabled to be\n// conservative.\n#  define GTEST_HAS_EXCEPTIONS 0\n# endif  // defined(_MSC_VER) || defined(__BORLANDC__)\n#endif  // GTEST_HAS_EXCEPTIONS\n\n#if !defined(GTEST_HAS_STD_STRING)\n// Even though we don't use this macro any longer, we keep it in case\n// some clients still depend on it.\n# define GTEST_HAS_STD_STRING 1\n#elif !GTEST_HAS_STD_STRING\n// The user told us that ::std::string isn't available.\n# error "Google Test cannot be used where ::std::string isn't available."\n#endif  // !defined(GTEST_HAS_STD_STRING)\n\n#ifndef GTEST_HAS_GLOBAL_STRING\n// The user didn't tell us whether ::string is available, so we need\n// to figure it out.\n\n# define GTEST_HAS_GLOBAL_STRING 0\n\n#endif  // GTEST_HAS_GLOBAL_STRING\n\n#ifndef GTEST_HAS_STD_WSTRING\n// The user didn't tell us whether ::std::wstring is available, so we need\n// to figure it out.\n// TODO(wan@google.com): use autoconf to detect whether ::std::wstring\n//   is available.\n\n// Cygwin 1.7 and below don't support ::std::wstring.\n// Solaris' libc++ doesn't support it either.  
Android has\n// no support for it at least as recent as Froyo (2.2).\n# define GTEST_HAS_STD_WSTRING \\\n    (!(GTEST_OS_LINUX_ANDROID || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS))\n\n#endif  // GTEST_HAS_STD_WSTRING\n\n#ifndef GTEST_HAS_GLOBAL_WSTRING\n// The user didn't tell us whether ::wstring is available, so we need\n// to figure it out.\n# define GTEST_HAS_GLOBAL_WSTRING \\\n    (GTEST_HAS_STD_WSTRING && GTEST_HAS_GLOBAL_STRING)\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n// Determines whether RTTI is available.\n#ifndef GTEST_HAS_RTTI\n// The user didn't tell us whether RTTI is enabled, so we need to\n// figure it out.\n\n# ifdef _MSC_VER\n\n#  ifdef _CPPRTTI  // MSVC defines this macro iff RTTI is enabled.\n#   define GTEST_HAS_RTTI 1\n#  else\n#   define GTEST_HAS_RTTI 0\n#  endif\n\n// Starting with version 4.3.2, gcc defines __GXX_RTTI iff RTTI is enabled.\n# elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40302)\n\n#  ifdef __GXX_RTTI\n#   define GTEST_HAS_RTTI 1\n#  else\n#   define GTEST_HAS_RTTI 0\n#  endif  // __GXX_RTTI\n\n// Starting with version 9.0 IBM Visual Age defines __RTTI_ALL__ to 1 if\n// both the typeid and dynamic_cast features are present.\n# elif defined(__IBMCPP__) && (__IBMCPP__ >= 900)\n\n#  ifdef __RTTI_ALL__\n#   define GTEST_HAS_RTTI 1\n#  else\n#   define GTEST_HAS_RTTI 0\n#  endif\n\n# else\n\n// For all other compilers, we assume RTTI is enabled.\n#  define GTEST_HAS_RTTI 1\n\n# endif  // _MSC_VER\n\n#endif  // GTEST_HAS_RTTI\n\n// It's this header's responsibility to #include <typeinfo> when RTTI\n// is enabled.\n#if GTEST_HAS_RTTI\n# include <typeinfo>\n#endif\n\n// Determines whether Google Test can use the pthreads library.\n#ifndef GTEST_HAS_PTHREAD\n// The user didn't tell us explicitly, so we assume pthreads support is\n// available on Linux and Mac.\n//\n// To disable threading support in Google Test, add -DGTEST_HAS_PTHREAD=0\n// to your compiler flags.\n# define GTEST_HAS_PTHREAD (GTEST_OS_LINUX || GTEST_OS_MAC || 
GTEST_OS_HPUX)\n#endif  // GTEST_HAS_PTHREAD\n\n#if GTEST_HAS_PTHREAD\n// gtest-port.h guarantees to #include <pthread.h> when GTEST_HAS_PTHREAD is\n// true.\n# include <pthread.h>  // NOLINT\n\n// For timespec and nanosleep, used below.\n# include <time.h>  // NOLINT\n#endif\n\n// Determines whether Google Test can use tr1/tuple.  You can define\n// this macro to 0 to prevent Google Test from using tuple (any\n// feature depending on tuple will be disabled in this mode).\n#ifndef GTEST_HAS_TR1_TUPLE\n// The user didn't tell us not to do it, so we assume it's OK.\n# define GTEST_HAS_TR1_TUPLE 1\n#endif  // GTEST_HAS_TR1_TUPLE\n\n// Determines whether Google Test's own tr1 tuple implementation\n// should be used.\n#ifndef GTEST_USE_OWN_TR1_TUPLE\n// The user didn't tell us, so we need to figure it out.\n\n// We use our own TR1 tuple if we aren't sure the user has an\n// implementation of it already.  At this time, GCC 4.0.0+ and MSVC\n// 2010 are the only mainstream compilers that come with a TR1 tuple\n// implementation.  NVIDIA's CUDA NVCC compiler pretends to be GCC by\n// defining __GNUC__ and friends, but cannot compile GCC's tuple\n// implementation.  MSVC 2008 (9.0) provides TR1 tuple in a 323 MB\n// Feature Pack download, which we cannot assume the user has.\n# if (defined(__GNUC__) && !defined(__CUDACC__) && (GTEST_GCC_VER_ >= 40000)) \\\n    || _MSC_VER >= 1600\n#  define GTEST_USE_OWN_TR1_TUPLE 0\n# else\n#  define GTEST_USE_OWN_TR1_TUPLE 1\n# endif\n\n#endif  // GTEST_USE_OWN_TR1_TUPLE\n\n// To avoid conditional compilation everywhere, we make it\n// gtest-port.h's responsibility to #include the header implementing\n// tr1/tuple.\n#if GTEST_HAS_TR1_TUPLE\n\n# if GTEST_USE_OWN_TR1_TUPLE\n// This file was GENERATED by a script.  
DO NOT EDIT BY HAND!!!\n\n// Copyright 2009 Google Inc.\n// All Rights Reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n// Implements a subset of TR1 tuple needed by Google Test and Google Mock.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_\n\n#include <utility>  // For ::std::pair.\n\n// The compiler used in Symbian has a bug that prevents us from declaring the\n// tuple template as a friend (it complains that tuple is redefined).  This\n// hack bypasses the bug by declaring the members that should otherwise be\n// private as public.\n// Sun Studio versions < 12 also have the above bug.\n#if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590)\n# define GTEST_DECLARE_TUPLE_AS_FRIEND_ public:\n#else\n# define GTEST_DECLARE_TUPLE_AS_FRIEND_ \\\n    template <GTEST_10_TYPENAMES_(U)> friend class tuple; \\\n   private:\n#endif\n\n// GTEST_n_TUPLE_(T) is the type of an n-tuple.\n#define GTEST_0_TUPLE_(T) tuple<>\n#define GTEST_1_TUPLE_(T) tuple<T##0, void, void, void, void, void, void, \\\n    void, void, void>\n#define GTEST_2_TUPLE_(T) tuple<T##0, T##1, void, void, void, void, void, \\\n    void, void, void>\n#define GTEST_3_TUPLE_(T) tuple<T##0, T##1, T##2, void, void, void, void, \\\n    void, void, void>\n#define GTEST_4_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, void, void, void, \\\n    void, void, void>\n#define GTEST_5_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, void, void, \\\n    void, void, void>\n#define 
GTEST_6_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, void, \\\n    void, void, void>\n#define GTEST_7_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \\\n    void, void, void>\n#define GTEST_8_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \\\n    T##7, void, void>\n#define GTEST_9_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \\\n    T##7, T##8, void>\n#define GTEST_10_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \\\n    T##7, T##8, T##9>\n\n// GTEST_n_TYPENAMES_(T) declares a list of n typenames.\n#define GTEST_0_TYPENAMES_(T)\n#define GTEST_1_TYPENAMES_(T) typename T##0\n#define GTEST_2_TYPENAMES_(T) typename T##0, typename T##1\n#define GTEST_3_TYPENAMES_(T) typename T##0, typename T##1, typename T##2\n#define GTEST_4_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3\n#define GTEST_5_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4\n#define GTEST_6_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4, typename T##5\n#define GTEST_7_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4, typename T##5, typename T##6\n#define GTEST_8_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4, typename T##5, typename T##6, typename T##7\n#define GTEST_9_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4, typename T##5, typename T##6, \\\n    typename T##7, typename T##8\n#define GTEST_10_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \\\n    typename T##3, typename T##4, typename T##5, typename T##6, \\\n    typename T##7, typename T##8, typename T##9\n\n// In theory, defining stuff in the ::std namespace is undefined\n// behavior.  
We can do this as we are playing the role of a standard\n// library vendor.\nnamespace std {\nnamespace tr1 {\n\ntemplate <typename T0 = void, typename T1 = void, typename T2 = void,\n    typename T3 = void, typename T4 = void, typename T5 = void,\n    typename T6 = void, typename T7 = void, typename T8 = void,\n    typename T9 = void>\nclass tuple;\n\n// Anything in namespace gtest_internal is Google Test's INTERNAL\n// IMPLEMENTATION DETAIL and MUST NOT BE USED DIRECTLY in user code.\nnamespace gtest_internal {\n\n// ByRef<T>::type is T if T is a reference; otherwise it's const T&.\ntemplate <typename T>\nstruct ByRef { typedef const T& type; };  // NOLINT\ntemplate <typename T>\nstruct ByRef<T&> { typedef T& type; };  // NOLINT\n\n// A handy wrapper for ByRef.\n#define GTEST_BY_REF_(T) typename ::std::tr1::gtest_internal::ByRef<T>::type\n\n// AddRef<T>::type is T if T is a reference; otherwise it's T&.  This\n// is the same as tr1::add_reference<T>::type.\ntemplate <typename T>\nstruct AddRef { typedef T& type; };  // NOLINT\ntemplate <typename T>\nstruct AddRef<T&> { typedef T& type; };  // NOLINT\n\n// A handy wrapper for AddRef.\n#define GTEST_ADD_REF_(T) typename ::std::tr1::gtest_internal::AddRef<T>::type\n\n// A helper for implementing get<k>().\ntemplate <int k> class Get;\n\n// A helper for implementing tuple_element<k, T>.  
kIndexValid is true\n// iff k < the number of fields in tuple type T.\ntemplate <bool kIndexValid, int kIndex, class Tuple>\nstruct TupleElement;\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 0, GTEST_10_TUPLE_(T)> { typedef T0 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 1, GTEST_10_TUPLE_(T)> { typedef T1 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 2, GTEST_10_TUPLE_(T)> { typedef T2 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 3, GTEST_10_TUPLE_(T)> { typedef T3 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 4, GTEST_10_TUPLE_(T)> { typedef T4 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 5, GTEST_10_TUPLE_(T)> { typedef T5 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 6, GTEST_10_TUPLE_(T)> { typedef T6 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 7, GTEST_10_TUPLE_(T)> { typedef T7 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 8, GTEST_10_TUPLE_(T)> { typedef T8 type; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct TupleElement<true, 9, GTEST_10_TUPLE_(T)> { typedef T9 type; };\n\n}  // namespace gtest_internal\n\ntemplate <>\nclass tuple<> {\n public:\n  tuple() {}\n  tuple(const tuple& /* t */)  {}\n  tuple& operator=(const tuple& /* t */) { return *this; }\n};\n\ntemplate <GTEST_1_TYPENAMES_(T)>\nclass GTEST_1_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0) : f0_(f0) {}\n\n  tuple(const tuple& t) : f0_(t.f0_) {}\n\n  template <GTEST_1_TYPENAMES_(U)>\n  tuple(const GTEST_1_TUPLE_(U)& t) : f0_(t.f0_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_1_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_1_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template 
<GTEST_1_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_1_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    return *this;\n  }\n\n  T0 f0_;\n};\n\ntemplate <GTEST_2_TYPENAMES_(T)>\nclass GTEST_2_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1) : f0_(f0),\n      f1_(f1) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_) {}\n\n  template <GTEST_2_TYPENAMES_(U)>\n  tuple(const GTEST_2_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_) {}\n  template <typename U0, typename U1>\n  tuple(const ::std::pair<U0, U1>& p) : f0_(p.first), f1_(p.second) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_2_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_2_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n  template <typename U0, typename U1>\n  tuple& operator=(const ::std::pair<U0, U1>& p) {\n    f0_ = p.first;\n    f1_ = p.second;\n    return *this;\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_2_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_2_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n};\n\ntemplate <GTEST_3_TYPENAMES_(T)>\nclass GTEST_3_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2) : f0_(f0), f1_(f1), f2_(f2) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {}\n\n  template <GTEST_3_TYPENAMES_(U)>\n  tuple(const GTEST_3_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_3_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_3_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_3_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_3_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ 
= t.f1_;\n    f2_ = t.f2_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n};\n\ntemplate <GTEST_4_TYPENAMES_(T)>\nclass GTEST_4_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3) : f0_(f0), f1_(f1), f2_(f2),\n      f3_(f3) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_) {}\n\n  template <GTEST_4_TYPENAMES_(U)>\n  tuple(const GTEST_4_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_4_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_4_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_4_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_4_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n};\n\ntemplate <GTEST_5_TYPENAMES_(T)>\nclass GTEST_5_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3,\n      GTEST_BY_REF_(T4) f4) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_) {}\n\n  template <GTEST_5_TYPENAMES_(U)>\n  tuple(const GTEST_5_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_5_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_5_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_5_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_5_TUPLE_(U)& 
t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n  T4 f4_;\n};\n\ntemplate <GTEST_6_TYPENAMES_(T)>\nclass GTEST_6_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4,\n      GTEST_BY_REF_(T5) f5) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4),\n      f5_(f5) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_), f5_(t.f5_) {}\n\n  template <GTEST_6_TYPENAMES_(U)>\n  tuple(const GTEST_6_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_), f5_(t.f5_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_6_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_6_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_6_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_6_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    f5_ = t.f5_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n  T4 f4_;\n  T5 f5_;\n};\n\ntemplate <GTEST_7_TYPENAMES_(T)>\nclass GTEST_7_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4,\n      GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6) : f0_(f0), f1_(f1), f2_(f2),\n      f3_(f3), f4_(f4), f5_(f5), f6_(f6) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {}\n\n  template <GTEST_7_TYPENAMES_(U)>\n  
tuple(const GTEST_7_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_7_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_7_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_7_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_7_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    f5_ = t.f5_;\n    f6_ = t.f6_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n  T4 f4_;\n  T5 f5_;\n  T6 f6_;\n};\n\ntemplate <GTEST_8_TYPENAMES_(T)>\nclass GTEST_8_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4,\n      GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6,\n      GTEST_BY_REF_(T7) f7) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4),\n      f5_(f5), f6_(f6), f7_(f7) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {}\n\n  template <GTEST_8_TYPENAMES_(U)>\n  tuple(const GTEST_8_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_8_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_8_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_8_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_8_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    f5_ = t.f5_;\n    f6_ = t.f6_;\n    f7_ = t.f7_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n 
 T4 f4_;\n  T5 f5_;\n  T6 f6_;\n  T7 f7_;\n};\n\ntemplate <GTEST_9_TYPENAMES_(T)>\nclass GTEST_9_TUPLE_(T) {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4,\n      GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7,\n      GTEST_BY_REF_(T8) f8) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4),\n      f5_(f5), f6_(f6), f7_(f7), f8_(f8) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {}\n\n  template <GTEST_9_TYPENAMES_(U)>\n  tuple(const GTEST_9_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_9_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_9_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_9_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_9_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    f5_ = t.f5_;\n    f6_ = t.f6_;\n    f7_ = t.f7_;\n    f8_ = t.f8_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n  T4 f4_;\n  T5 f5_;\n  T6 f6_;\n  T7 f7_;\n  T8 f8_;\n};\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nclass tuple {\n public:\n  template <int k> friend class gtest_internal::Get;\n\n  tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_(),\n      f9_() {}\n\n  explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1,\n      GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4,\n      GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7,\n      GTEST_BY_REF_(T8) f8, GTEST_BY_REF_(T9) f9) : f0_(f0), 
f1_(f1), f2_(f2),\n      f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7), f8_(f8), f9_(f9) {}\n\n  tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_),\n      f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_), f9_(t.f9_) {}\n\n  template <GTEST_10_TYPENAMES_(U)>\n  tuple(const GTEST_10_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_),\n      f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_),\n      f9_(t.f9_) {}\n\n  tuple& operator=(const tuple& t) { return CopyFrom(t); }\n\n  template <GTEST_10_TYPENAMES_(U)>\n  tuple& operator=(const GTEST_10_TUPLE_(U)& t) {\n    return CopyFrom(t);\n  }\n\n  GTEST_DECLARE_TUPLE_AS_FRIEND_\n\n  template <GTEST_10_TYPENAMES_(U)>\n  tuple& CopyFrom(const GTEST_10_TUPLE_(U)& t) {\n    f0_ = t.f0_;\n    f1_ = t.f1_;\n    f2_ = t.f2_;\n    f3_ = t.f3_;\n    f4_ = t.f4_;\n    f5_ = t.f5_;\n    f6_ = t.f6_;\n    f7_ = t.f7_;\n    f8_ = t.f8_;\n    f9_ = t.f9_;\n    return *this;\n  }\n\n  T0 f0_;\n  T1 f1_;\n  T2 f2_;\n  T3 f3_;\n  T4 f4_;\n  T5 f5_;\n  T6 f6_;\n  T7 f7_;\n  T8 f8_;\n  T9 f9_;\n};\n\n// 6.1.3.2 Tuple creation functions.\n\n// Known limitations: we don't support passing an\n// std::tr1::reference_wrapper<T> to make_tuple().  
And we don't\n// implement tie().\n\ninline tuple<> make_tuple() { return tuple<>(); }\n\ntemplate <GTEST_1_TYPENAMES_(T)>\ninline GTEST_1_TUPLE_(T) make_tuple(const T0& f0) {\n  return GTEST_1_TUPLE_(T)(f0);\n}\n\ntemplate <GTEST_2_TYPENAMES_(T)>\ninline GTEST_2_TUPLE_(T) make_tuple(const T0& f0, const T1& f1) {\n  return GTEST_2_TUPLE_(T)(f0, f1);\n}\n\ntemplate <GTEST_3_TYPENAMES_(T)>\ninline GTEST_3_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2) {\n  return GTEST_3_TUPLE_(T)(f0, f1, f2);\n}\n\ntemplate <GTEST_4_TYPENAMES_(T)>\ninline GTEST_4_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3) {\n  return GTEST_4_TUPLE_(T)(f0, f1, f2, f3);\n}\n\ntemplate <GTEST_5_TYPENAMES_(T)>\ninline GTEST_5_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3, const T4& f4) {\n  return GTEST_5_TUPLE_(T)(f0, f1, f2, f3, f4);\n}\n\ntemplate <GTEST_6_TYPENAMES_(T)>\ninline GTEST_6_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3, const T4& f4, const T5& f5) {\n  return GTEST_6_TUPLE_(T)(f0, f1, f2, f3, f4, f5);\n}\n\ntemplate <GTEST_7_TYPENAMES_(T)>\ninline GTEST_7_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3, const T4& f4, const T5& f5, const T6& f6) {\n  return GTEST_7_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6);\n}\n\ntemplate <GTEST_8_TYPENAMES_(T)>\ninline GTEST_8_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7) {\n  return GTEST_8_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7);\n}\n\ntemplate <GTEST_9_TYPENAMES_(T)>\ninline GTEST_9_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2,\n    const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7,\n    const T8& f8) {\n  return GTEST_9_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8);\n}\n\ntemplate <GTEST_10_TYPENAMES_(T)>\ninline GTEST_10_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, 
const T2& f2,\n    const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7,\n    const T8& f8, const T9& f9) {\n  return GTEST_10_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8, f9);\n}\n\n// 6.1.3.3 Tuple helper classes.\n\ntemplate <typename Tuple> struct tuple_size;\n\ntemplate <GTEST_0_TYPENAMES_(T)>\nstruct tuple_size<GTEST_0_TUPLE_(T)> { static const int value = 0; };\n\ntemplate <GTEST_1_TYPENAMES_(T)>\nstruct tuple_size<GTEST_1_TUPLE_(T)> { static const int value = 1; };\n\ntemplate <GTEST_2_TYPENAMES_(T)>\nstruct tuple_size<GTEST_2_TUPLE_(T)> { static const int value = 2; };\n\ntemplate <GTEST_3_TYPENAMES_(T)>\nstruct tuple_size<GTEST_3_TUPLE_(T)> { static const int value = 3; };\n\ntemplate <GTEST_4_TYPENAMES_(T)>\nstruct tuple_size<GTEST_4_TUPLE_(T)> { static const int value = 4; };\n\ntemplate <GTEST_5_TYPENAMES_(T)>\nstruct tuple_size<GTEST_5_TUPLE_(T)> { static const int value = 5; };\n\ntemplate <GTEST_6_TYPENAMES_(T)>\nstruct tuple_size<GTEST_6_TUPLE_(T)> { static const int value = 6; };\n\ntemplate <GTEST_7_TYPENAMES_(T)>\nstruct tuple_size<GTEST_7_TUPLE_(T)> { static const int value = 7; };\n\ntemplate <GTEST_8_TYPENAMES_(T)>\nstruct tuple_size<GTEST_8_TUPLE_(T)> { static const int value = 8; };\n\ntemplate <GTEST_9_TYPENAMES_(T)>\nstruct tuple_size<GTEST_9_TUPLE_(T)> { static const int value = 9; };\n\ntemplate <GTEST_10_TYPENAMES_(T)>\nstruct tuple_size<GTEST_10_TUPLE_(T)> { static const int value = 10; };\n\ntemplate <int k, class Tuple>\nstruct tuple_element {\n  typedef typename gtest_internal::TupleElement<\n      k < (tuple_size<Tuple>::value), k, Tuple>::type type;\n};\n\n#define GTEST_TUPLE_ELEMENT_(k, Tuple) typename tuple_element<k, Tuple >::type\n\n// 6.1.3.4 Element access.\n\nnamespace gtest_internal {\n\ntemplate <>\nclass Get<0> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple))\n  Field(Tuple& t) { return t.f0_; }  // NOLINT\n\n  template <class Tuple>\n  static 
GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple))\n  ConstField(const Tuple& t) { return t.f0_; }\n};\n\ntemplate <>\nclass Get<1> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple))\n  Field(Tuple& t) { return t.f1_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple))\n  ConstField(const Tuple& t) { return t.f1_; }\n};\n\ntemplate <>\nclass Get<2> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple))\n  Field(Tuple& t) { return t.f2_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple))\n  ConstField(const Tuple& t) { return t.f2_; }\n};\n\ntemplate <>\nclass Get<3> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple))\n  Field(Tuple& t) { return t.f3_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple))\n  ConstField(const Tuple& t) { return t.f3_; }\n};\n\ntemplate <>\nclass Get<4> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple))\n  Field(Tuple& t) { return t.f4_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple))\n  ConstField(const Tuple& t) { return t.f4_; }\n};\n\ntemplate <>\nclass Get<5> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple))\n  Field(Tuple& t) { return t.f5_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple))\n  ConstField(const Tuple& t) { return t.f5_; }\n};\n\ntemplate <>\nclass Get<6> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple))\n  Field(Tuple& t) { return t.f6_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple))\n  ConstField(const Tuple& t) { return t.f6_; }\n};\n\ntemplate <>\nclass Get<7> {\n public:\n  template 
<class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple))\n  Field(Tuple& t) { return t.f7_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple))\n  ConstField(const Tuple& t) { return t.f7_; }\n};\n\ntemplate <>\nclass Get<8> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple))\n  Field(Tuple& t) { return t.f8_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple))\n  ConstField(const Tuple& t) { return t.f8_; }\n};\n\ntemplate <>\nclass Get<9> {\n public:\n  template <class Tuple>\n  static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple))\n  Field(Tuple& t) { return t.f9_; }  // NOLINT\n\n  template <class Tuple>\n  static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple))\n  ConstField(const Tuple& t) { return t.f9_; }\n};\n\n}  // namespace gtest_internal\n\ntemplate <int k, GTEST_10_TYPENAMES_(T)>\nGTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_10_TUPLE_(T)))\nget(GTEST_10_TUPLE_(T)& t) {\n  return gtest_internal::Get<k>::Field(t);\n}\n\ntemplate <int k, GTEST_10_TYPENAMES_(T)>\nGTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(k,  GTEST_10_TUPLE_(T)))\nget(const GTEST_10_TUPLE_(T)& t) {\n  return gtest_internal::Get<k>::ConstField(t);\n}\n\n// 6.1.3.5 Relational operators\n\n// We only implement == and !=, as we don't have a need for the rest yet.\n\nnamespace gtest_internal {\n\n// SameSizeTuplePrefixComparator<k, k>::Eq(t1, t2) returns true if the\n// first k fields of t1 equals the first k fields of t2.\n// SameSizeTuplePrefixComparator(k1, k2) would be a compiler error if\n// k1 != k2.\ntemplate <int kSize1, int kSize2>\nstruct SameSizeTuplePrefixComparator;\n\ntemplate <>\nstruct SameSizeTuplePrefixComparator<0, 0> {\n  template <class Tuple1, class Tuple2>\n  static bool Eq(const Tuple1& /* t1 */, const Tuple2& /* t2 */) {\n    return true;\n  }\n};\n\ntemplate <int k>\nstruct SameSizeTuplePrefixComparator<k, k> {\n  template <class 
Tuple1, class Tuple2>\n  static bool Eq(const Tuple1& t1, const Tuple2& t2) {\n    return SameSizeTuplePrefixComparator<k - 1, k - 1>::Eq(t1, t2) &&\n        ::std::tr1::get<k - 1>(t1) == ::std::tr1::get<k - 1>(t2);\n  }\n};\n\n}  // namespace gtest_internal\n\ntemplate <GTEST_10_TYPENAMES_(T), GTEST_10_TYPENAMES_(U)>\ninline bool operator==(const GTEST_10_TUPLE_(T)& t,\n                       const GTEST_10_TUPLE_(U)& u) {\n  return gtest_internal::SameSizeTuplePrefixComparator<\n      tuple_size<GTEST_10_TUPLE_(T)>::value,\n      tuple_size<GTEST_10_TUPLE_(U)>::value>::Eq(t, u);\n}\n\ntemplate <GTEST_10_TYPENAMES_(T), GTEST_10_TYPENAMES_(U)>\ninline bool operator!=(const GTEST_10_TUPLE_(T)& t,\n                       const GTEST_10_TUPLE_(U)& u) { return !(t == u); }\n\n// 6.1.4 Pairs.\n// Unimplemented.\n\n}  // namespace tr1\n}  // namespace std\n\n#undef GTEST_0_TUPLE_\n#undef GTEST_1_TUPLE_\n#undef GTEST_2_TUPLE_\n#undef GTEST_3_TUPLE_\n#undef GTEST_4_TUPLE_\n#undef GTEST_5_TUPLE_\n#undef GTEST_6_TUPLE_\n#undef GTEST_7_TUPLE_\n#undef GTEST_8_TUPLE_\n#undef GTEST_9_TUPLE_\n#undef GTEST_10_TUPLE_\n\n#undef GTEST_0_TYPENAMES_\n#undef GTEST_1_TYPENAMES_\n#undef GTEST_2_TYPENAMES_\n#undef GTEST_3_TYPENAMES_\n#undef GTEST_4_TYPENAMES_\n#undef GTEST_5_TYPENAMES_\n#undef GTEST_6_TYPENAMES_\n#undef GTEST_7_TYPENAMES_\n#undef GTEST_8_TYPENAMES_\n#undef GTEST_9_TYPENAMES_\n#undef GTEST_10_TYPENAMES_\n\n#undef GTEST_DECLARE_TUPLE_AS_FRIEND_\n#undef GTEST_BY_REF_\n#undef GTEST_ADD_REF_\n#undef GTEST_TUPLE_ELEMENT_\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_\n# elif GTEST_OS_SYMBIAN\n\n// On Symbian, BOOST_HAS_TR1_TUPLE causes Boost's TR1 tuple library to\n// use STLport's tuple implementation, which unfortunately doesn't\n// work as the copy of STLport distributed with Symbian is incomplete.\n// By making sure BOOST_HAS_TR1_TUPLE is undefined, we force Boost to\n// use its own tuple implementation.\n#  ifdef BOOST_HAS_TR1_TUPLE\n#   undef 
BOOST_HAS_TR1_TUPLE\n#  endif  // BOOST_HAS_TR1_TUPLE\n\n// This prevents <boost/tr1/detail/config.hpp>, which defines\n// BOOST_HAS_TR1_TUPLE, from being #included by Boost's <tuple>.\n#  define BOOST_TR1_DETAIL_CONFIG_HPP_INCLUDED\n#  include <tuple>\n\n# elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40000)\n// GCC 4.0+ implements tr1/tuple in the <tr1/tuple> header.  This does\n// not conform to the TR1 spec, which requires the header to be <tuple>.\n\n#  if !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302\n// Until version 4.3.2, gcc has a bug that causes <tr1/functional>,\n// which is #included by <tr1/tuple>, to not compile when RTTI is\n// disabled.  _TR1_FUNCTIONAL is the header guard for\n// <tr1/functional>.  Hence the following #define is a hack to prevent\n// <tr1/functional> from being included.\n#   define _TR1_FUNCTIONAL 1\n#   include <tr1/tuple>\n#   undef _TR1_FUNCTIONAL  // Allows the user to #include\n                        // <tr1/functional> if he chooses to.\n#  else\n#   include <tr1/tuple>  // NOLINT\n#  endif  // !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302\n\n# else\n// If the compiler is not GCC 4.0+, we assume the user is using a\n// spec-conforming TR1 implementation.\n#  include <tuple>  // NOLINT\n# endif  // GTEST_USE_OWN_TR1_TUPLE\n\n#endif  // GTEST_HAS_TR1_TUPLE\n\n// Determines whether clone(2) is supported.\n// Usually it will only be available on Linux, excluding\n// Linux on the Itanium architecture.\n// Also see http://linux.die.net/man/2/clone.\n#ifndef GTEST_HAS_CLONE\n// The user didn't tell us, so we need to figure it out.\n\n# if GTEST_OS_LINUX && !defined(__ia64__)\n#  define GTEST_HAS_CLONE 1\n# else\n#  define GTEST_HAS_CLONE 0\n# endif  // GTEST_OS_LINUX && !defined(__ia64__)\n\n#endif  // GTEST_HAS_CLONE\n\n// Determines whether to support stream redirection. 
This is used to test\n// output correctness and to implement death tests.\n#ifndef GTEST_HAS_STREAM_REDIRECTION\n// By default, we assume that stream redirection is supported on all\n// platforms except known mobile ones.\n# if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN\n#  define GTEST_HAS_STREAM_REDIRECTION 0\n# else\n#  define GTEST_HAS_STREAM_REDIRECTION 1\n# endif  // !GTEST_OS_WINDOWS_MOBILE && !GTEST_OS_SYMBIAN\n#endif  // GTEST_HAS_STREAM_REDIRECTION\n\n// Determines whether to support death tests.\n// Google Test does not support death tests for VC 7.1 and earlier as\n// abort() in a VC 7.1 application compiled as GUI in debug config\n// pops up a dialog window that cannot be suppressed programmatically.\n#if (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \\\n     (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER >= 1400) || \\\n     GTEST_OS_WINDOWS_MINGW || GTEST_OS_AIX || GTEST_OS_HPUX)\n# define GTEST_HAS_DEATH_TEST 1\n# include <vector>  // NOLINT\n#endif\n\n// We don't support MSVC 7.1 with exceptions disabled now.  Therefore\n// all the compilers we care about are adequate for supporting\n// value-parameterized tests.\n#define GTEST_HAS_PARAM_TEST 1\n\n// Determines whether to support type-driven tests.\n\n// Typed tests need <typeinfo> and variadic macros, which GCC, VC++ 8.0,\n// Sun Pro CC, IBM Visual Age, and HP aCC support.\n#if defined(__GNUC__) || (_MSC_VER >= 1400) || defined(__SUNPRO_CC) || \\\n    defined(__IBMCPP__) || defined(__HP_aCC)\n# define GTEST_HAS_TYPED_TEST 1\n# define GTEST_HAS_TYPED_TEST_P 1\n#endif\n\n// Determines whether to support Combine(). This only makes sense when\n// value-parameterized tests are enabled.  
The implementation doesn't\n// work on Sun Studio since it doesn't understand templated conversion\n// operators.\n#if GTEST_HAS_PARAM_TEST && GTEST_HAS_TR1_TUPLE && !defined(__SUNPRO_CC)\n# define GTEST_HAS_COMBINE 1\n#endif\n\n// Determines whether the system compiler uses UTF-16 for encoding wide strings.\n#define GTEST_WIDE_STRING_USES_UTF16_ \\\n    (GTEST_OS_WINDOWS || GTEST_OS_CYGWIN || GTEST_OS_SYMBIAN || GTEST_OS_AIX)\n\n// Determines whether test results can be streamed to a socket.\n#if GTEST_OS_LINUX\n# define GTEST_CAN_STREAM_RESULTS_ 1\n#endif\n\n// Defines some utility macros.\n\n// The GNU compiler emits a warning if nested \"if\" statements are followed by\n// an \"else\" statement and braces are not used to explicitly disambiguate the\n// \"else\" binding.  This leads to problems with code like:\n//\n//   if (gate)\n//     ASSERT_*(condition) << \"Some message\";\n//\n// The \"switch (0) case 0:\" idiom is used to suppress this.\n#ifdef __INTEL_COMPILER\n# define GTEST_AMBIGUOUS_ELSE_BLOCKER_\n#else\n# define GTEST_AMBIGUOUS_ELSE_BLOCKER_ switch (0) case 0: default:  // NOLINT\n#endif\n\n// Use this annotation at the end of a struct/class definition to\n// prevent the compiler from optimizing away instances that are never\n// used.  This is useful when all interesting logic happens inside the\n// c'tor and / or d'tor.  Example:\n//\n//   struct Foo {\n//     Foo() { ... 
}\n//   } GTEST_ATTRIBUTE_UNUSED_;\n//\n// Also use it after a variable or parameter declaration to tell the\n// compiler the variable/parameter does not have to be used.\n#if defined(__GNUC__) && !defined(COMPILER_ICC)\n# define GTEST_ATTRIBUTE_UNUSED_ __attribute__ ((unused))\n#else\n# define GTEST_ATTRIBUTE_UNUSED_\n#endif\n\n// A macro to disallow operator=\n// This should be used in the private: declarations for a class.\n#define GTEST_DISALLOW_ASSIGN_(type)\\\n  void operator=(type const &)\n\n// A macro to disallow copy constructor and operator=\n// This should be used in the private: declarations for a class.\n#define GTEST_DISALLOW_COPY_AND_ASSIGN_(type)\\\n  type(type const &);\\\n  GTEST_DISALLOW_ASSIGN_(type)\n\n// Tell the compiler to warn about unused return values for functions declared\n// with this macro.  The macro should be used on function declarations\n// following the argument list:\n//\n//   Sprocket* AllocateSprocket() GTEST_MUST_USE_RESULT_;\n#if defined(__GNUC__) && (GTEST_GCC_VER_ >= 30400) && !defined(COMPILER_ICC)\n# define GTEST_MUST_USE_RESULT_ __attribute__ ((warn_unused_result))\n#else\n# define GTEST_MUST_USE_RESULT_\n#endif  // __GNUC__ && (GTEST_GCC_VER_ >= 30400) && !COMPILER_ICC\n\n// Determine whether the compiler supports Microsoft's Structured Exception\n// Handling.  
This is supported by several Windows compilers but generally\n// does not exist on any other system.\n#ifndef GTEST_HAS_SEH\n// The user didn't tell us, so we need to figure it out.\n\n# if defined(_MSC_VER) || defined(__BORLANDC__)\n// These two compilers are known to support SEH.\n#  define GTEST_HAS_SEH 1\n# else\n// Assume no SEH.\n#  define GTEST_HAS_SEH 0\n# endif\n\n#endif  // GTEST_HAS_SEH\n\n#ifdef _MSC_VER\n\n# if GTEST_LINKED_AS_SHARED_LIBRARY\n#  define GTEST_API_ __declspec(dllimport)\n# elif GTEST_CREATE_SHARED_LIBRARY\n#  define GTEST_API_ __declspec(dllexport)\n# endif\n\n#endif  // _MSC_VER\n\n#ifndef GTEST_API_\n# define GTEST_API_\n#endif\n\n#ifdef __GNUC__\n// Ask the compiler to never inline a given function.\n# define GTEST_NO_INLINE_ __attribute__((noinline))\n#else\n# define GTEST_NO_INLINE_\n#endif\n\nnamespace testing {\n\nclass Message;\n\nnamespace internal {\n\nclass String;\n\n// The GTEST_COMPILE_ASSERT_ macro can be used to verify that a compile time\n// expression is true. For example, you could use it to verify the\n// size of a static array:\n//\n//   GTEST_COMPILE_ASSERT_(ARRAYSIZE(content_type_names) == CONTENT_NUM_TYPES,\n//                         content_type_names_incorrect_size);\n//\n// or to make sure a struct is smaller than a certain size:\n//\n//   GTEST_COMPILE_ASSERT_(sizeof(foo) < 128, foo_too_large);\n//\n// The second argument to the macro is the name of the variable. If\n// the expression is false, most compilers will issue a warning/error\n// containing the name of the variable.\n\ntemplate <bool>\nstruct CompileAssert {\n};\n\n#define GTEST_COMPILE_ASSERT_(expr, msg) \\\n  typedef ::testing::internal::CompileAssert<(bool(expr))> \\\n      msg[bool(expr) ? 
1 : -1]\n\n// Implementation details of GTEST_COMPILE_ASSERT_:\n//\n// - GTEST_COMPILE_ASSERT_ works by defining an array type that has -1\n//   elements (and thus is invalid) when the expression is false.\n//\n// - The simpler definition\n//\n//    #define GTEST_COMPILE_ASSERT_(expr, msg) typedef char msg[(expr) ? 1 : -1]\n//\n//   does not work, as gcc supports variable-length arrays whose sizes\n//   are determined at run-time (this is gcc's extension and not part\n//   of the C++ standard).  As a result, gcc fails to reject the\n//   following code with the simple definition:\n//\n//     int foo;\n//     GTEST_COMPILE_ASSERT_(foo, msg); // not supposed to compile as foo is\n//                                      // not a compile-time constant.\n//\n// - By using the type CompileAssert<(bool(expr))>, we ensure that\n//   expr is a compile-time constant.  (Template arguments must be\n//   determined at compile-time.)\n//\n// - The outer parentheses in CompileAssert<(bool(expr))> are necessary\n//   to work around a bug in gcc 3.4.4 and 4.0.1.  If we had written\n//\n//     CompileAssert<bool(expr)>\n//\n//   instead, these compilers will refuse to compile\n//\n//     GTEST_COMPILE_ASSERT_(5 > 0, some_message);\n//\n//   (They seem to think the \">\" in \"5 > 0\" marks the end of the\n//   template argument list.)\n//\n// - The array size is (bool(expr) ? 1 : -1), instead of simply\n//\n//     ((expr) ? 1 : -1).\n//\n//   This is to avoid running into a bug in MS VC 7.1, which\n//   causes ((0.0) ? 
1 : -1) to incorrectly evaluate to 1.\n\n// StaticAssertTypeEqHelper is used by StaticAssertTypeEq defined in gtest.h.\n//\n// This template is declared, but intentionally undefined.\ntemplate <typename T1, typename T2>\nstruct StaticAssertTypeEqHelper;\n\ntemplate <typename T>\nstruct StaticAssertTypeEqHelper<T, T> {};\n\n#if GTEST_HAS_GLOBAL_STRING\ntypedef ::string string;\n#else\ntypedef ::std::string string;\n#endif  // GTEST_HAS_GLOBAL_STRING\n\n#if GTEST_HAS_GLOBAL_WSTRING\ntypedef ::wstring wstring;\n#elif GTEST_HAS_STD_WSTRING\ntypedef ::std::wstring wstring;\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n// A helper for suppressing warnings on constant condition.  It just\n// returns 'condition'.\nGTEST_API_ bool IsTrue(bool condition);\n\n// Defines scoped_ptr.\n\n// This implementation of scoped_ptr is PARTIAL - it only contains\n// enough stuff to satisfy Google Test's need.\ntemplate <typename T>\nclass scoped_ptr {\n public:\n  typedef T element_type;\n\n  explicit scoped_ptr(T* p = NULL) : ptr_(p) {}\n  ~scoped_ptr() { reset(); }\n\n  T& operator*() const { return *ptr_; }\n  T* operator->() const { return ptr_; }\n  T* get() const { return ptr_; }\n\n  T* release() {\n    T* const ptr = ptr_;\n    ptr_ = NULL;\n    return ptr;\n  }\n\n  void reset(T* p = NULL) {\n    if (p != ptr_) {\n      if (IsTrue(sizeof(T) > 0)) {  // Makes sure T is a complete type.\n        delete ptr_;\n      }\n      ptr_ = p;\n    }\n  }\n private:\n  T* ptr_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(scoped_ptr);\n};\n\n// Defines RE.\n\n// A simple C++ wrapper for <regex.h>.  
It uses the POSIX Extended\n// Regular Expression syntax.\nclass GTEST_API_ RE {\n public:\n  // A copy constructor is required by the Standard to initialize object\n  // references from r-values.\n  RE(const RE& other) { Init(other.pattern()); }\n\n  // Constructs an RE from a string.\n  RE(const ::std::string& regex) { Init(regex.c_str()); }  // NOLINT\n\n#if GTEST_HAS_GLOBAL_STRING\n\n  RE(const ::string& regex) { Init(regex.c_str()); }  // NOLINT\n\n#endif  // GTEST_HAS_GLOBAL_STRING\n\n  RE(const char* regex) { Init(regex); }  // NOLINT\n  ~RE();\n\n  // Returns the string representation of the regex.\n  const char* pattern() const { return pattern_; }\n\n  // FullMatch(str, re) returns true iff regular expression re matches\n  // the entire str.\n  // PartialMatch(str, re) returns true iff regular expression re\n  // matches a substring of str (including str itself).\n  //\n  // TODO(wan@google.com): make FullMatch() and PartialMatch() work\n  // when str contains NUL characters.\n  static bool FullMatch(const ::std::string& str, const RE& re) {\n    return FullMatch(str.c_str(), re);\n  }\n  static bool PartialMatch(const ::std::string& str, const RE& re) {\n    return PartialMatch(str.c_str(), re);\n  }\n\n#if GTEST_HAS_GLOBAL_STRING\n\n  static bool FullMatch(const ::string& str, const RE& re) {\n    return FullMatch(str.c_str(), re);\n  }\n  static bool PartialMatch(const ::string& str, const RE& re) {\n    return PartialMatch(str.c_str(), re);\n  }\n\n#endif  // GTEST_HAS_GLOBAL_STRING\n\n  static bool FullMatch(const char* str, const RE& re);\n  static bool PartialMatch(const char* str, const RE& re);\n\n private:\n  void Init(const char* regex);\n\n  // We use a const char* instead of a string, as Google Test may be used\n  // where string is not available.  
We also do not use Google Test's own\n  // String type here, in order to simplify dependencies between the\n  // files.\n  const char* pattern_;\n  bool is_valid_;\n\n#if GTEST_USES_POSIX_RE\n\n  regex_t full_regex_;     // For FullMatch().\n  regex_t partial_regex_;  // For PartialMatch().\n\n#else  // GTEST_USES_SIMPLE_RE\n\n  const char* full_pattern_;  // For FullMatch();\n\n#endif\n\n  GTEST_DISALLOW_ASSIGN_(RE);\n};\n\n// Formats a source file path and a line number as they would appear\n// in an error message from the compiler used to compile this code.\nGTEST_API_ ::std::string FormatFileLocation(const char* file, int line);\n\n// Formats a file location for compiler-independent XML output.\n// Although this function is not platform dependent, we put it next to\n// FormatFileLocation in order to contrast the two functions.\nGTEST_API_ ::std::string FormatCompilerIndependentFileLocation(const char* file,\n                                                               int line);\n\n// Defines logging utilities:\n//   GTEST_LOG_(severity) - logs messages at the specified severity level. 
The\n//                          message itself is streamed into the macro.\n//   LogToStderr()  - directs all log messages to stderr.\n//   FlushInfoLog() - flushes informational log messages.\n\nenum GTestLogSeverity {\n  GTEST_INFO,\n  GTEST_WARNING,\n  GTEST_ERROR,\n  GTEST_FATAL\n};\n\n// Formats log entry severity, provides a stream object for streaming the\n// log message, and terminates the message with a newline when going out of\n// scope.\nclass GTEST_API_ GTestLog {\n public:\n  GTestLog(GTestLogSeverity severity, const char* file, int line);\n\n  // Flushes the buffers and, if severity is GTEST_FATAL, aborts the program.\n  ~GTestLog();\n\n  ::std::ostream& GetStream() { return ::std::cerr; }\n\n private:\n  const GTestLogSeverity severity_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestLog);\n};\n\n#define GTEST_LOG_(severity) \\\n    ::testing::internal::GTestLog(::testing::internal::GTEST_##severity, \\\n                                  __FILE__, __LINE__).GetStream()\n\ninline void LogToStderr() {}\ninline void FlushInfoLog() { fflush(NULL); }\n\n// INTERNAL IMPLEMENTATION - DO NOT USE.\n//\n// GTEST_CHECK_ is an all-mode assert. It aborts the program if the condition\n// is not satisfied.\n//  Synopsis:\n//    GTEST_CHECK_(boolean_condition);\n//     or\n//    GTEST_CHECK_(boolean_condition) << \"Additional message\";\n//\n//    This checks the condition and, if it is not satisfied, prints a\n//    message about the condition violation, including the condition\n//    itself, plus any additional message streamed into it, and then\n//    aborts the program. It aborts the program irrespective of\n//    whether it is built in the debug mode or not.\n#define GTEST_CHECK_(condition) \\\n    GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n    if (::testing::internal::IsTrue(condition)) \\\n      ; \\\n    else \\\n      GTEST_LOG_(FATAL) << \"Condition \" #condition \" failed. 
\"\n\n// An all-mode assert to verify that the given POSIX-style function\n// call returns 0 (indicating success).  Known limitation: this\n// doesn't expand to a balanced 'if' statement, so enclose the macro\n// in {} if you need to use it as the only statement in an 'if'\n// branch.\n#define GTEST_CHECK_POSIX_SUCCESS_(posix_call) \\\n  if (const int gtest_error = (posix_call)) \\\n    GTEST_LOG_(FATAL) << #posix_call << \"failed with error \" \\\n                      << gtest_error\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Use ImplicitCast_ as a safe version of static_cast for upcasting in\n// the type hierarchy (e.g. casting a Foo* to a SuperclassOfFoo* or a\n// const Foo*).  When you use ImplicitCast_, the compiler checks that\n// the cast is safe.  Such explicit ImplicitCast_s are necessary in\n// surprisingly many situations where C++ demands an exact type match\n// instead of an argument type convertable to a target type.\n//\n// The syntax for using ImplicitCast_ is the same as for static_cast:\n//\n//   ImplicitCast_<ToType>(expr)\n//\n// ImplicitCast_ would have been part of the C++ standard library,\n// but the proposal was submitted too late.  It will probably make\n// its way into the language in the future.\n//\n// This relatively ugly name is intentional. It prevents clashes with\n// similar functions users may have (e.g., implicit_cast). The internal\n// namespace alone is not enough because the function can be found by ADL.\ntemplate<typename To>\ninline To ImplicitCast_(To x) { return x; }\n\n// When you upcast (that is, cast a pointer from type Foo to type\n// SuperclassOfFoo), it's fine to use ImplicitCast_<>, since upcasts\n// always succeed.  When you downcast (that is, cast a pointer from\n// type Foo to type SubclassOfFoo), static_cast<> isn't safe, because\n// how do you know the pointer is really of type SubclassOfFoo?  It\n// could be a bare Foo, or of type DifferentSubclassOfFoo.  
Thus,\n// when you downcast, you should use this macro.  In debug mode, we\n// use dynamic_cast<> to double-check the downcast is legal (we die\n// if it's not).  In normal mode, we do the efficient static_cast<>\n// instead.  Thus, it's important to test in debug mode to make sure\n// the cast is legal!\n//    This is the only place in the code we should use dynamic_cast<>.\n// In particular, you SHOULDN'T be using dynamic_cast<> in order to\n// do RTTI (e.g., code like this):\n//    if (dynamic_cast<Subclass1>(foo)) HandleASubclass1Object(foo);\n//    if (dynamic_cast<Subclass2>(foo)) HandleASubclass2Object(foo);\n// You should design the code some other way not to need this.\n//\n// This relatively ugly name is intentional. It prevents clashes with\n// similar functions users may have (e.g., down_cast). The internal\n// namespace alone is not enough because the function can be found by ADL.\ntemplate<typename To, typename From>  // use like this: DownCast_<T*>(foo);\ninline To DownCast_(From* f) {  // so we only accept pointers\n  // Ensures that To is a sub-type of From *.  This test is here only\n  // for compile-time type checking, and has no overhead in an\n  // optimized build at run-time, as it will be optimized away\n  // completely.\n  if (false) {\n    const To to = NULL;\n    ::testing::internal::ImplicitCast_<From*>(to);\n  }\n\n#if GTEST_HAS_RTTI\n  // RTTI: debug mode only!\n  GTEST_CHECK_(f == NULL || dynamic_cast<To>(f) != NULL);\n#endif\n  return static_cast<To>(f);\n}\n\n// Downcasts the pointer of type Base to Derived.\n// Derived must be a subclass of Base. 
The parameter MUST\n// point to a class of type Derived, not any subclass of it.\n// When RTTI is available, the function performs a runtime\n// check to enforce this.\ntemplate <class Derived, class Base>\nDerived* CheckedDowncastToActualType(Base* base) {\n#if GTEST_HAS_RTTI\n  GTEST_CHECK_(typeid(*base) == typeid(Derived));\n  return dynamic_cast<Derived*>(base);  // NOLINT\n#else\n  return static_cast<Derived*>(base);  // Poor man's downcast.\n#endif\n}\n\n#if GTEST_HAS_STREAM_REDIRECTION\n\n// Defines the stdout/stderr capturers:\n//   CaptureStdout     - starts capturing stdout.\n//   GetCapturedStdout - stops capturing stdout and returns the captured string.\n//   CaptureStderr     - starts capturing stderr.\n//   GetCapturedStderr - stops capturing stderr and returns the captured string.\n//\nGTEST_API_ void CaptureStdout();\nGTEST_API_ String GetCapturedStdout();\nGTEST_API_ void CaptureStderr();\nGTEST_API_ String GetCapturedStderr();\n\n#endif  // GTEST_HAS_STREAM_REDIRECTION\n\n\n#if GTEST_HAS_DEATH_TEST\n\n// A copy of all command line arguments.  Set by InitGoogleTest().\nextern ::std::vector<String> g_argvs;\n\n// GTEST_HAS_DEATH_TEST implies we have ::std::string.\nconst ::std::vector<String>& GetArgvs();\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n// Defines synchronization primitives.\n\n#if GTEST_HAS_PTHREAD\n\n// Sleeps for (roughly) n milliseconds.  This function is only for\n// testing Google Test's own constructs.  Don't use it in user tests,\n// either directly or indirectly.\ninline void SleepMilliseconds(int n) {\n  const timespec time = {\n    0,                  // 0 seconds.\n    n * 1000L * 1000L,  // And n ms.\n  };\n  nanosleep(&time, NULL);\n}\n\n// Allows a controller thread to pause execution of newly created\n// threads until notified.  Instances of this class must be created\n// and destroyed in the controller thread.\n//\n// This class is only for testing Google Test's own constructs. 
Do not\n// use it in user tests, either directly or indirectly.\nclass Notification {\n public:\n  Notification() : notified_(false) {}\n\n  // Notifies all threads created with this notification to start. Must\n  // be called from the controller thread.\n  void Notify() { notified_ = true; }\n\n  // Blocks until the controller thread notifies. Must be called from a test\n  // thread.\n  void WaitForNotification() {\n    while (!notified_) {\n      SleepMilliseconds(10);\n    }\n  }\n\n private:\n  volatile bool notified_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(Notification);\n};\n\n// As a C-function, ThreadFuncWithCLinkage cannot be templated itself.\n// Consequently, it cannot select a correct instantiation of ThreadWithParam\n// in order to call its Run(). Introducing ThreadWithParamBase as a\n// non-templated base class for ThreadWithParam allows us to bypass this\n// problem.\nclass ThreadWithParamBase {\n public:\n  virtual ~ThreadWithParamBase() {}\n  virtual void Run() = 0;\n};\n\n// pthread_create() accepts a pointer to a function type with C linkage.\n// According to the Standard (7.5/1), function types with different linkages\n// are different even if they are otherwise identical.  Some compilers (for\n// example, SunStudio) treat them as different types.  
Since class methods\n// cannot be defined with C-linkage we need to define a free C-function to\n// pass into pthread_create().\nextern \"C\" inline void* ThreadFuncWithCLinkage(void* thread) {\n  static_cast<ThreadWithParamBase*>(thread)->Run();\n  return NULL;\n}\n\n// Helper class for testing Google Test's multi-threading constructs.\n// To use it, write:\n//\n//   void ThreadFunc(int param) { /* Do things with param */ }\n//   Notification thread_can_start;\n//   ...\n//   // The thread_can_start parameter is optional; you can supply NULL.\n//   ThreadWithParam<int> thread(&ThreadFunc, 5, &thread_can_start);\n//   thread_can_start.Notify();\n//\n// These classes are only for testing Google Test's own constructs. Do\n// not use them in user tests, either directly or indirectly.\ntemplate <typename T>\nclass ThreadWithParam : public ThreadWithParamBase {\n public:\n  typedef void (*UserThreadFunc)(T);\n\n  ThreadWithParam(\n      UserThreadFunc func, T param, Notification* thread_can_start)\n      : func_(func),\n        param_(param),\n        thread_can_start_(thread_can_start),\n        finished_(false) {\n    ThreadWithParamBase* const base = this;\n    // The thread can be created only after all fields except thread_\n    // have been initialized.\n    GTEST_CHECK_POSIX_SUCCESS_(\n        pthread_create(&thread_, 0, &ThreadFuncWithCLinkage, base));\n  }\n  ~ThreadWithParam() { Join(); }\n\n  void Join() {\n    if (!finished_) {\n      GTEST_CHECK_POSIX_SUCCESS_(pthread_join(thread_, 0));\n      finished_ = true;\n    }\n  }\n\n  virtual void Run() {\n    if (thread_can_start_ != NULL)\n      thread_can_start_->WaitForNotification();\n    func_(param_);\n  }\n\n private:\n  const UserThreadFunc func_;  // User-supplied thread function.\n  const T param_;  // User-supplied parameter to the thread function.\n  // When non-NULL, used to block execution until the controller thread\n  // notifies.\n  Notification* const thread_can_start_;\n  bool finished_;  // 
true iff we know that the thread function has finished.\n  pthread_t thread_;  // The native thread object.\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadWithParam);\n};\n\n// MutexBase and Mutex implement mutex on pthreads-based platforms. They\n// are used in conjunction with class MutexLock:\n//\n//   Mutex mutex;\n//   ...\n//   MutexLock lock(&mutex);  // Acquires the mutex and releases it at the end\n//                            // of the current scope.\n//\n// MutexBase implements behavior for both statically and dynamically\n// allocated mutexes.  Do not use MutexBase directly.  Instead, write\n// the following to define a static mutex:\n//\n//   GTEST_DEFINE_STATIC_MUTEX_(g_some_mutex);\n//\n// You can forward declare a static mutex like this:\n//\n//   GTEST_DECLARE_STATIC_MUTEX_(g_some_mutex);\n//\n// To create a dynamic mutex, just define an object of type Mutex.\nclass MutexBase {\n public:\n  // Acquires this mutex.\n  void Lock() {\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_lock(&mutex_));\n    owner_ = pthread_self();\n  }\n\n  // Releases this mutex.\n  void Unlock() {\n    // We don't protect writing to owner_ here, as it's the caller's\n    // responsibility to ensure that the current thread holds the\n    // mutex when this is called.\n    owner_ = 0;\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_unlock(&mutex_));\n  }\n\n  // Does nothing if the current thread holds the mutex. Otherwise, crashes\n  // with high probability.\n  void AssertHeld() const {\n    GTEST_CHECK_(owner_ == pthread_self())\n        << \"The current thread is not holding the mutex @\" << this;\n  }\n\n  // A static mutex may be used before main() is entered.  It may even\n  // be used before the dynamic initialization stage.  
Therefore we\n  // must be able to initialize a static mutex object at link time.\n  // This means MutexBase has to be a POD and its member variables\n  // have to be public.\n public:\n  pthread_mutex_t mutex_;  // The underlying pthread mutex.\n  pthread_t owner_;  // The thread holding the mutex; 0 means no one holds it.\n};\n\n// Forward-declares a static mutex.\n# define GTEST_DECLARE_STATIC_MUTEX_(mutex) \\\n    extern ::testing::internal::MutexBase mutex\n\n// Defines and statically (i.e. at link time) initializes a static mutex.\n# define GTEST_DEFINE_STATIC_MUTEX_(mutex) \\\n    ::testing::internal::MutexBase mutex = { PTHREAD_MUTEX_INITIALIZER, 0 }\n\n// The Mutex class can only be used for mutexes created at runtime. It\n// shares its API with MutexBase otherwise.\nclass Mutex : public MutexBase {\n public:\n  Mutex() {\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL));\n    owner_ = 0;\n  }\n  ~Mutex() {\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_destroy(&mutex_));\n  }\n\n private:\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(Mutex);\n};\n\n// We cannot name this class MutexLock as the ctor declaration would\n// conflict with a macro named MutexLock, which is defined on some\n// platforms.  Hence the typedef trick below.\nclass GTestMutexLock {\n public:\n  explicit GTestMutexLock(MutexBase* mutex)\n      : mutex_(mutex) { mutex_->Lock(); }\n\n  ~GTestMutexLock() { mutex_->Unlock(); }\n\n private:\n  MutexBase* const mutex_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestMutexLock);\n};\n\ntypedef GTestMutexLock MutexLock;\n\n// Helpers for ThreadLocal.\n\n// pthread_key_create() requires DeleteThreadLocalValue() to have\n// C-linkage.  Therefore it cannot be templatized to access\n// ThreadLocal<T>.  
Hence the need for class\n// ThreadLocalValueHolderBase.\nclass ThreadLocalValueHolderBase {\n public:\n  virtual ~ThreadLocalValueHolderBase() {}\n};\n\n// Called by pthread to delete thread-local data stored by\n// pthread_setspecific().\nextern \"C\" inline void DeleteThreadLocalValue(void* value_holder) {\n  delete static_cast<ThreadLocalValueHolderBase*>(value_holder);\n}\n\n// Implements thread-local storage on pthreads-based systems.\n//\n//   // Thread 1\n//   ThreadLocal<int> tl(100);  // 100 is the default value for each thread.\n//\n//   // Thread 2\n//   tl.set(150);  // Changes the value for thread 2 only.\n//   EXPECT_EQ(150, tl.get());\n//\n//   // Thread 1\n//   EXPECT_EQ(100, tl.get());  // In thread 1, tl has the original value.\n//   tl.set(200);\n//   EXPECT_EQ(200, tl.get());\n//\n// The template type argument T must have a public copy constructor.\n// In addition, the default ThreadLocal constructor requires T to have\n// a public default constructor.\n//\n// An object managed for a thread by a ThreadLocal instance is deleted\n// when the thread exits.  Or, if the ThreadLocal instance dies in\n// that thread, when the ThreadLocal dies.  It's the user's\n// responsibility to ensure that all other threads using a ThreadLocal\n// have exited when it dies, or the per-thread objects for those\n// threads will not be deleted.\n//\n// Google Test only uses global ThreadLocal objects.  That means they\n// will die after main() has returned.  
Therefore, no per-thread\n// object managed by Google Test will be leaked as long as all threads\n// using Google Test have exited when main() returns.\ntemplate <typename T>\nclass ThreadLocal {\n public:\n  ThreadLocal() : key_(CreateKey()),\n                  default_() {}\n  explicit ThreadLocal(const T& value) : key_(CreateKey()),\n                                         default_(value) {}\n\n  ~ThreadLocal() {\n    // Destroys the managed object for the current thread, if any.\n    DeleteThreadLocalValue(pthread_getspecific(key_));\n\n    // Releases resources associated with the key.  This will *not*\n    // delete managed objects for other threads.\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_key_delete(key_));\n  }\n\n  T* pointer() { return GetOrCreateValue(); }\n  const T* pointer() const { return GetOrCreateValue(); }\n  const T& get() const { return *pointer(); }\n  void set(const T& value) { *pointer() = value; }\n\n private:\n  // Holds a value of type T.\n  class ValueHolder : public ThreadLocalValueHolderBase {\n   public:\n    explicit ValueHolder(const T& value) : value_(value) {}\n\n    T* pointer() { return &value_; }\n\n   private:\n    T value_;\n    GTEST_DISALLOW_COPY_AND_ASSIGN_(ValueHolder);\n  };\n\n  static pthread_key_t CreateKey() {\n    pthread_key_t key;\n    // When a thread exits, DeleteThreadLocalValue() will be called on\n    // the object managed for that thread.\n    GTEST_CHECK_POSIX_SUCCESS_(\n        pthread_key_create(&key, &DeleteThreadLocalValue));\n    return key;\n  }\n\n  T* GetOrCreateValue() const {\n    ThreadLocalValueHolderBase* const holder =\n        static_cast<ThreadLocalValueHolderBase*>(pthread_getspecific(key_));\n    if (holder != NULL) {\n      return CheckedDowncastToActualType<ValueHolder>(holder)->pointer();\n    }\n\n    ValueHolder* const new_holder = new ValueHolder(default_);\n    ThreadLocalValueHolderBase* const holder_base = new_holder;\n    GTEST_CHECK_POSIX_SUCCESS_(pthread_setspecific(key_, 
holder_base));\n    return new_holder->pointer();\n  }\n\n  // A key pthreads uses for looking up per-thread values.\n  const pthread_key_t key_;\n  const T default_;  // The default value for each thread.\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadLocal);\n};\n\n# define GTEST_IS_THREADSAFE 1\n\n#else  // GTEST_HAS_PTHREAD\n\n// A dummy implementation of synchronization primitives (mutex, lock,\n// and thread-local variable).  Necessary for compiling Google Test where\n// mutex is not supported - using Google Test in multiple threads is not\n// supported on such platforms.\n\nclass Mutex {\n public:\n  Mutex() {}\n  void AssertHeld() const {}\n};\n\n# define GTEST_DECLARE_STATIC_MUTEX_(mutex) \\\n  extern ::testing::internal::Mutex mutex\n\n# define GTEST_DEFINE_STATIC_MUTEX_(mutex) ::testing::internal::Mutex mutex\n\nclass GTestMutexLock {\n public:\n  explicit GTestMutexLock(Mutex*) {}  // NOLINT\n};\n\ntypedef GTestMutexLock MutexLock;\n\ntemplate <typename T>\nclass ThreadLocal {\n public:\n  ThreadLocal() : value_() {}\n  explicit ThreadLocal(const T& value) : value_(value) {}\n  T* pointer() { return &value_; }\n  const T* pointer() const { return &value_; }\n  const T& get() const { return value_; }\n  void set(const T& value) { value_ = value; }\n private:\n  T value_;\n};\n\n// The above synchronization primitives have dummy implementations.\n// Therefore Google Test is not thread-safe.\n# define GTEST_IS_THREADSAFE 0\n\n#endif  // GTEST_HAS_PTHREAD\n\n// Returns the number of threads running in the process, or 0 to indicate that\n// we cannot detect it.\nGTEST_API_ size_t GetThreadCount();\n\n// Passing non-POD classes through ellipsis (...) crashes the ARM\n// compiler and generates a warning in Sun Studio.  The Nokia Symbian\n// and the IBM XL C/C++ compiler try to instantiate a copy constructor\n// for objects passed through ellipsis (...), failing for uncopyable\n// objects.  
We define this to ensure that only POD is passed through\n// ellipsis on these systems.\n#if defined(__SYMBIAN32__) || defined(__IBMCPP__) || defined(__SUNPRO_CC)\n// We lose support for NULL detection where the compiler doesn't like\n// passing non-POD classes through ellipsis (...).\n# define GTEST_ELLIPSIS_NEEDS_POD_ 1\n#else\n# define GTEST_CAN_COMPARE_NULL 1\n#endif\n\n// The Nokia Symbian and IBM XL C/C++ compilers cannot decide between\n// const T& and const T* in a function template.  These compilers\n// _can_ decide between class template specializations for T and T*,\n// so a tr1::type_traits-like is_pointer works.\n#if defined(__SYMBIAN32__) || defined(__IBMCPP__)\n# define GTEST_NEEDS_IS_POINTER_ 1\n#endif\n\ntemplate <bool bool_value>\nstruct bool_constant {\n  typedef bool_constant<bool_value> type;\n  static const bool value = bool_value;\n};\ntemplate <bool bool_value> const bool bool_constant<bool_value>::value;\n\ntypedef bool_constant<false> false_type;\ntypedef bool_constant<true> true_type;\n\ntemplate <typename T>\nstruct is_pointer : public false_type {};\n\ntemplate <typename T>\nstruct is_pointer<T*> : public true_type {};\n\ntemplate <typename Iterator>\nstruct IteratorTraits {\n  typedef typename Iterator::value_type value_type;\n};\n\ntemplate <typename T>\nstruct IteratorTraits<T*> {\n  typedef T value_type;\n};\n\ntemplate <typename T>\nstruct IteratorTraits<const T*> {\n  typedef T value_type;\n};\n\n#if GTEST_OS_WINDOWS\n# define GTEST_PATH_SEP_ \"\\\\\"\n# define GTEST_HAS_ALT_PATH_SEP_ 1\n// The biggest signed integer type the compiler supports.\ntypedef __int64 BiggestInt;\n#else\n# define GTEST_PATH_SEP_ \"/\"\n# define GTEST_HAS_ALT_PATH_SEP_ 0\ntypedef long long BiggestInt;  // NOLINT\n#endif  // GTEST_OS_WINDOWS\n\n// Utilities for char.\n\n// isspace(int ch) and friends accept an unsigned char or EOF.  
char\n// may be signed, depending on the compiler (or compiler flags).\n// Therefore we need to cast a char to unsigned char before calling\n// isspace(), etc.\n\ninline bool IsAlpha(char ch) {\n  return isalpha(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsAlNum(char ch) {\n  return isalnum(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsDigit(char ch) {\n  return isdigit(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsLower(char ch) {\n  return islower(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsSpace(char ch) {\n  return isspace(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsUpper(char ch) {\n  return isupper(static_cast<unsigned char>(ch)) != 0;\n}\ninline bool IsXDigit(char ch) {\n  return isxdigit(static_cast<unsigned char>(ch)) != 0;\n}\n\ninline char ToLower(char ch) {\n  return static_cast<char>(tolower(static_cast<unsigned char>(ch)));\n}\ninline char ToUpper(char ch) {\n  return static_cast<char>(toupper(static_cast<unsigned char>(ch)));\n}\n\n// The testing::internal::posix namespace holds wrappers for common\n// POSIX functions.  These wrappers hide the differences between\n// Windows/MSVC and POSIX systems.  
Since some compilers define these\n// standard functions as macros, the wrapper cannot have the same name\n// as the wrapped function.\n\nnamespace posix {\n\n// Functions with a different name on Windows.\n\n#if GTEST_OS_WINDOWS\n\ntypedef struct _stat StatStruct;\n\n# ifdef __BORLANDC__\ninline int IsATTY(int fd) { return isatty(fd); }\ninline int StrCaseCmp(const char* s1, const char* s2) {\n  return stricmp(s1, s2);\n}\ninline char* StrDup(const char* src) { return strdup(src); }\n# else  // !__BORLANDC__\n#  if GTEST_OS_WINDOWS_MOBILE\ninline int IsATTY(int /* fd */) { return 0; }\n#  else\ninline int IsATTY(int fd) { return _isatty(fd); }\n#  endif  // GTEST_OS_WINDOWS_MOBILE\ninline int StrCaseCmp(const char* s1, const char* s2) {\n  return _stricmp(s1, s2);\n}\ninline char* StrDup(const char* src) { return _strdup(src); }\n# endif  // __BORLANDC__\n\n# if GTEST_OS_WINDOWS_MOBILE\ninline int FileNo(FILE* file) { return reinterpret_cast<int>(_fileno(file)); }\n// Stat(), RmDir(), and IsDir() are not needed on Windows CE at this\n// time and thus not defined there.\n# else\ninline int FileNo(FILE* file) { return _fileno(file); }\ninline int Stat(const char* path, StatStruct* buf) { return _stat(path, buf); }\ninline int RmDir(const char* dir) { return _rmdir(dir); }\ninline bool IsDir(const StatStruct& st) {\n  return (_S_IFDIR & st.st_mode) != 0;\n}\n# endif  // GTEST_OS_WINDOWS_MOBILE\n\n#else\n\ntypedef struct stat StatStruct;\n\ninline int FileNo(FILE* file) { return fileno(file); }\ninline int IsATTY(int fd) { return isatty(fd); }\ninline int Stat(const char* path, StatStruct* buf) { return stat(path, buf); }\ninline int StrCaseCmp(const char* s1, const char* s2) {\n  return strcasecmp(s1, s2);\n}\ninline char* StrDup(const char* src) { return strdup(src); }\ninline int RmDir(const char* dir) { return rmdir(dir); }\ninline bool IsDir(const StatStruct& st) { return S_ISDIR(st.st_mode); }\n\n#endif  // GTEST_OS_WINDOWS\n\n// Functions deprecated by MSVC 
8.0.\n\n#ifdef _MSC_VER\n// Temporarily disable warning 4996 (deprecated function).\n# pragma warning(push)\n# pragma warning(disable:4996)\n#endif\n\ninline const char* StrNCpy(char* dest, const char* src, size_t n) {\n  return strncpy(dest, src, n);\n}\n\n// ChDir(), FReopen(), FDOpen(), Read(), Write(), Close(), and\n// StrError() aren't needed on Windows CE at this time and thus not\n// defined there.\n\n#if !GTEST_OS_WINDOWS_MOBILE\ninline int ChDir(const char* dir) { return chdir(dir); }\n#endif\ninline FILE* FOpen(const char* path, const char* mode) {\n  return fopen(path, mode);\n}\n#if !GTEST_OS_WINDOWS_MOBILE\ninline FILE *FReopen(const char* path, const char* mode, FILE* stream) {\n  return freopen(path, mode, stream);\n}\ninline FILE* FDOpen(int fd, const char* mode) { return fdopen(fd, mode); }\n#endif\ninline int FClose(FILE* fp) { return fclose(fp); }\n#if !GTEST_OS_WINDOWS_MOBILE\ninline int Read(int fd, void* buf, unsigned int count) {\n  return static_cast<int>(read(fd, buf, count));\n}\ninline int Write(int fd, const void* buf, unsigned int count) {\n  return static_cast<int>(write(fd, buf, count));\n}\ninline int Close(int fd) { return close(fd); }\ninline const char* StrError(int errnum) { return strerror(errnum); }\n#endif\ninline const char* GetEnv(const char* name) {\n#if GTEST_OS_WINDOWS_MOBILE\n  // We are on Windows CE, which has no environment variables.\n  return NULL;\n#elif defined(__BORLANDC__) || defined(__SunOS_5_8) || defined(__SunOS_5_9)\n  // Environment variables which we programmatically clear will be set to the\n  // empty string rather than unset (NULL).  Handle that case.\n  const char* const env = getenv(name);\n  return (env != NULL && env[0] != '\\0') ? env : NULL;\n#else\n  return getenv(name);\n#endif\n}\n\n#ifdef _MSC_VER\n# pragma warning(pop)  // Restores the warning state.\n#endif\n\n#if GTEST_OS_WINDOWS_MOBILE\n// Windows CE has no C library. The abort() function is used in\n// several places in Google Test. 
This implementation provides a reasonable\n// imitation of standard behaviour.\nvoid Abort();\n#else\ninline void Abort() { abort(); }\n#endif  // GTEST_OS_WINDOWS_MOBILE\n\n}  // namespace posix\n\n// The maximum number a BiggestInt can represent.  This definition\n// works no matter whether BiggestInt is represented in one's complement or\n// two's complement.\n//\n// We cannot rely on numeric_limits in STL, as __int64 and long long\n// are not part of standard C++ and numeric_limits doesn't need to be\n// defined for them.\nconst BiggestInt kMaxBiggestInt =\n    ~(static_cast<BiggestInt>(1) << (8*sizeof(BiggestInt) - 1));\n\n// This template class serves as a compile-time function from size to\n// type.  It maps a size in bytes to a primitive type with that\n// size. e.g.\n//\n//   TypeWithSize<4>::UInt\n//\n// is typedef-ed to be unsigned int (unsigned integer made up of 4\n// bytes).\n//\n// Such functionality should belong to STL, but I cannot find it\n// there.\n//\n// Google Test uses this class in the implementation of floating-point\n// comparison.\n//\n// For now it only handles UInt (unsigned int) as that's all Google Test\n// needs.  
Other types can be easily added in the future if need\n// arises.\ntemplate <size_t size>\nclass TypeWithSize {\n public:\n  // This prevents the user from using TypeWithSize<N> with incorrect\n  // values of N.\n  typedef void UInt;\n};\n\n// The specialization for size 4.\ntemplate <>\nclass TypeWithSize<4> {\n public:\n  // unsigned int has size 4 in both gcc and MSVC.\n  //\n  // As base/basictypes.h doesn't compile on Windows, we cannot use\n  // uint32, uint64, and etc here.\n  typedef int Int;\n  typedef unsigned int UInt;\n};\n\n// The specialization for size 8.\ntemplate <>\nclass TypeWithSize<8> {\n public:\n\n#if GTEST_OS_WINDOWS\n  typedef __int64 Int;\n  typedef unsigned __int64 UInt;\n#else\n  typedef long long Int;  // NOLINT\n  typedef unsigned long long UInt;  // NOLINT\n#endif  // GTEST_OS_WINDOWS\n};\n\n// Integer types of known sizes.\ntypedef TypeWithSize<4>::Int Int32;\ntypedef TypeWithSize<4>::UInt UInt32;\ntypedef TypeWithSize<8>::Int Int64;\ntypedef TypeWithSize<8>::UInt UInt64;\ntypedef TypeWithSize<8>::Int TimeInMillis;  // Represents time in milliseconds.\n\n// Utilities for command line flags and environment variables.\n\n// Macro for referencing flags.\n#define GTEST_FLAG(name) FLAGS_gtest_##name\n\n// Macros for declaring flags.\n#define GTEST_DECLARE_bool_(name) GTEST_API_ extern bool GTEST_FLAG(name)\n#define GTEST_DECLARE_int32_(name) \\\n    GTEST_API_ extern ::testing::internal::Int32 GTEST_FLAG(name)\n#define GTEST_DECLARE_string_(name) \\\n    GTEST_API_ extern ::testing::internal::String GTEST_FLAG(name)\n\n// Macros for defining flags.\n#define GTEST_DEFINE_bool_(name, default_val, doc) \\\n    GTEST_API_ bool GTEST_FLAG(name) = (default_val)\n#define GTEST_DEFINE_int32_(name, default_val, doc) \\\n    GTEST_API_ ::testing::internal::Int32 GTEST_FLAG(name) = (default_val)\n#define GTEST_DEFINE_string_(name, default_val, doc) \\\n    GTEST_API_ ::testing::internal::String GTEST_FLAG(name) = (default_val)\n\n// Parses 'str' for 
a 32-bit signed integer.  If successful, writes the result\n// to *value and returns true; otherwise leaves *value unchanged and returns\n// false.\n// TODO(chandlerc): Find a better way to refactor flag and environment parsing\n// out of both gtest-port.cc and gtest.cc to avoid exporting this utility\n// function.\nbool ParseInt32(const Message& src_text, const char* str, Int32* value);\n\n// Parses a bool/Int32/string from the environment variable\n// corresponding to the given Google Test flag.\nbool BoolFromGTestEnv(const char* flag, bool default_val);\nGTEST_API_ Int32 Int32FromGTestEnv(const char* flag, Int32 default_val);\nconst char* StringFromGTestEnv(const char* flag, const char* default_val);\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_\n\n#if GTEST_OS_LINUX\n# include <stdlib.h>\n# include <sys/types.h>\n# include <sys/wait.h>\n# include <unistd.h>\n#endif  // GTEST_OS_LINUX\n\n#include <ctype.h>\n#include <string.h>\n#include <iomanip>\n#include <limits>\n#include <set>\n\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file declares the String class and functions used internally by\n// Google Test.  They are subject to change without notice. 
They should not be used\n// by code external to Google Test.\n//\n// This header file is #included by <gtest/internal/gtest-internal.h>.\n// It should not be #included by other files.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_\n\n#ifdef __BORLANDC__\n// string.h is not guaranteed to provide strcpy on C++ Builder.\n# include <mem.h>\n#endif\n\n#include <string.h>\n\n#include <string>\n\nnamespace testing {\nnamespace internal {\n\n// String - a UTF-8 string class.\n//\n// For historic reasons, we don't use std::string.\n//\n// TODO(wan@google.com): replace this class with std::string or\n// implement it in terms of the latter.\n//\n// Note that String can represent both NULL and the empty string,\n// while std::string cannot represent NULL.\n//\n// NULL and the empty string are considered different.  NULL is less\n// than anything (including the empty string) except itself.\n//\n// This class only provides minimum functionality necessary for\n// implementing Google Test.  We do not intend to implement a full-fledged\n// string class here.\n//\n// Since the purpose of this class is to provide a substitute for\n// std::string on platforms where it cannot be used, we define a copy\n// constructor and assignment operators such that we don't need\n// conditional compilation in a lot of places.\n//\n// In order to make the representation efficient, the d'tor of String\n// is not virtual.  Therefore DO NOT INHERIT FROM String.\nclass GTEST_API_ String {\n public:\n  // Static utility methods\n\n  // Returns the input enclosed in double quotes if it's not NULL;\n  // otherwise returns \"(null)\".  
For example, \"\\\"Hello\\\"\" is returned\n  // for input \"Hello\".\n  //\n  // This is useful for printing a C string in the syntax of a literal.\n  //\n  // Known issue: escape sequences are not handled yet.\n  static String ShowCStringQuoted(const char* c_str);\n\n  // Clones a 0-terminated C string, allocating memory using new.  The\n  // caller is responsible for deleting the return value using\n  // delete[].  Returns the cloned string, or NULL if the input is\n  // NULL.\n  //\n  // This is different from strdup() in string.h, which allocates\n  // memory using malloc().\n  static const char* CloneCString(const char* c_str);\n\n#if GTEST_OS_WINDOWS_MOBILE\n  // Windows CE does not have the 'ANSI' versions of Win32 APIs. To be\n  // able to pass strings to Win32 APIs on CE we need to convert them\n  // to 'Unicode', UTF-16.\n\n  // Creates a UTF-16 wide string from the given ANSI string, allocating\n  // memory using new. The caller is responsible for deleting the return\n  // value using delete[]. Returns the wide string, or NULL if the\n  // input is NULL.\n  //\n  // The wide string is created using the ANSI codepage (CP_ACP) to\n  // match the behaviour of the ANSI versions of Win32 calls and the\n  // C runtime.\n  static LPCWSTR AnsiToUtf16(const char* c_str);\n\n  // Creates an ANSI string from the given wide string, allocating\n  // memory using new. The caller is responsible for deleting the return\n  // value using delete[]. Returns the ANSI string, or NULL if the\n  // input is NULL.\n  //\n  // The returned string is created using the ANSI codepage (CP_ACP) to\n  // match the behaviour of the ANSI versions of Win32 calls and the\n  // C runtime.\n  static const char* Utf16ToAnsi(LPCWSTR utf16_str);\n#endif\n\n  // Compares two C strings.  Returns true iff they have the same content.\n  //\n  // Unlike strcmp(), this function can handle NULL argument(s).  
A\n  // NULL C string is considered different to any non-NULL C string,\n  // including the empty string.\n  static bool CStringEquals(const char* lhs, const char* rhs);\n\n  // Converts a wide C string to a String using the UTF-8 encoding.\n  // NULL will be converted to \"(null)\".  If an error occurred during\n  // the conversion, \"(failed to convert from wide string)\" is\n  // returned.\n  static String ShowWideCString(const wchar_t* wide_c_str);\n\n  // Similar to ShowWideCString(), except that this function encloses\n  // the converted string in double quotes.\n  static String ShowWideCStringQuoted(const wchar_t* wide_c_str);\n\n  // Compares two wide C strings.  Returns true iff they have the same\n  // content.\n  //\n  // Unlike wcscmp(), this function can handle NULL argument(s).  A\n  // NULL C string is considered different to any non-NULL C string,\n  // including the empty string.\n  static bool WideCStringEquals(const wchar_t* lhs, const wchar_t* rhs);\n\n  // Compares two C strings, ignoring case.  Returns true iff they\n  // have the same content.\n  //\n  // Unlike strcasecmp(), this function can handle NULL argument(s).\n  // A NULL C string is considered different to any non-NULL C string,\n  // including the empty string.\n  static bool CaseInsensitiveCStringEquals(const char* lhs,\n                                           const char* rhs);\n\n  // Compares two wide C strings, ignoring case.  Returns true iff they\n  // have the same content.\n  //\n  // Unlike wcscasecmp(), this function can handle NULL argument(s).\n  // A NULL C string is considered different to any non-NULL wide C string,\n  // including the empty string.\n  // NB: The implementations on different platforms slightly differ.\n  // On windows, this method uses _wcsicmp which compares according to LC_CTYPE\n  // environment variable. 
On GNU platform this method uses wcscasecmp\n  // which compares according to LC_CTYPE category of the current locale.\n  // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the\n  // current locale.\n  static bool CaseInsensitiveWideCStringEquals(const wchar_t* lhs,\n                                               const wchar_t* rhs);\n\n  // Formats a list of arguments to a String, using the same format\n  // spec string as for printf.\n  //\n  // We do not use the StringPrintf class as it is not universally\n  // available.\n  //\n  // The result is limited to 4096 characters (including the trailing\n  // 0).  If 4096 characters are not enough to format the input,\n  // \"<buffer exceeded>\" is returned.\n  static String Format(const char* format, ...);\n\n  // C'tors\n\n  // The default c'tor constructs a NULL string.\n  String() : c_str_(NULL), length_(0) {}\n\n  // Constructs a String by cloning a 0-terminated C string.\n  String(const char* a_c_str) {  // NOLINT\n    if (a_c_str == NULL) {\n      c_str_ = NULL;\n      length_ = 0;\n    } else {\n      ConstructNonNull(a_c_str, strlen(a_c_str));\n    }\n  }\n\n  // Constructs a String by copying a given number of chars from a\n  // buffer.  E.g. String(\"hello\", 3) creates the string \"hel\",\n  // String(\"a\\0bcd\", 4) creates \"a\\0bc\", String(NULL, 0) creates \"\",\n  // and String(NULL, 1) results in access violation.\n  String(const char* buffer, size_t a_length) {\n    ConstructNonNull(buffer, a_length);\n  }\n\n  // The copy c'tor creates a new copy of the string.  The two\n  // String objects do not share content.\n  String(const String& str) : c_str_(NULL), length_(0) { *this = str; }\n\n  // D'tor.  String is intended to be a final class, so the d'tor\n  // doesn't need to be virtual.\n  ~String() { delete[] c_str_; }\n\n  // Allows a String to be implicitly converted to an ::std::string or\n  // ::string, and vice versa.  
Converting a String containing a NULL\n  // pointer to ::std::string or ::string is undefined behavior.\n  // Converting a ::std::string or ::string containing an embedded NUL\n  // character to a String will result in the prefix up to the first\n  // NUL character.\n  String(const ::std::string& str) {\n    ConstructNonNull(str.c_str(), str.length());\n  }\n\n  operator ::std::string() const { return ::std::string(c_str(), length()); }\n\n#if GTEST_HAS_GLOBAL_STRING\n  String(const ::string& str) {\n    ConstructNonNull(str.c_str(), str.length());\n  }\n\n  operator ::string() const { return ::string(c_str(), length()); }\n#endif  // GTEST_HAS_GLOBAL_STRING\n\n  // Returns true iff this is an empty string (i.e. \"\").\n  bool empty() const { return (c_str() != NULL) && (length() == 0); }\n\n  // Compares this with another String.\n  // Returns < 0 if this is less than rhs, 0 if this is equal to rhs, or > 0\n  // if this is greater than rhs.\n  int Compare(const String& rhs) const;\n\n  // Returns true iff this String equals the given C string.  A NULL\n  // string and a non-NULL string are considered not equal.\n  bool operator==(const char* a_c_str) const { return Compare(a_c_str) == 0; }\n\n  // Returns true iff this String is less than the given String.  A\n  // NULL string is considered less than \"\".\n  bool operator<(const String& rhs) const { return Compare(rhs) < 0; }\n\n  // Returns true iff this String doesn't equal the given C string.  A NULL\n  // string and a non-NULL string are considered not equal.\n  bool operator!=(const char* a_c_str) const { return !(*this == a_c_str); }\n\n  // Returns true iff this String ends with the given suffix.  *Any*\n  // String is considered to end with a NULL or empty suffix.\n  bool EndsWith(const char* suffix) const;\n\n  // Returns true iff this String ends with the given suffix, not considering\n  // case. 
Any String is considered to end with a NULL or empty suffix.\n  bool EndsWithCaseInsensitive(const char* suffix) const;\n\n  // Returns the length of the encapsulated string, or 0 if the\n  // string is NULL.\n  size_t length() const { return length_; }\n\n  // Gets the 0-terminated C string this String object represents.\n  // The String object still owns the string.  Therefore the caller\n  // should NOT delete the return value.\n  const char* c_str() const { return c_str_; }\n\n  // Assigns a C string to this object.  Self-assignment works.\n  const String& operator=(const char* a_c_str) {\n    return *this = String(a_c_str);\n  }\n\n  // Assigns a String object to this object.  Self-assignment works.\n  const String& operator=(const String& rhs) {\n    if (this != &rhs) {\n      delete[] c_str_;\n      if (rhs.c_str() == NULL) {\n        c_str_ = NULL;\n        length_ = 0;\n      } else {\n        ConstructNonNull(rhs.c_str(), rhs.length());\n      }\n    }\n\n    return *this;\n  }\n\n private:\n  // Constructs a non-NULL String from the given content.  This\n  // function can only be called when c_str_ has not been allocated.\n  // ConstructNonNull(NULL, 0) results in an empty string (\"\").\n  // ConstructNonNull(NULL, non_zero) is undefined behavior.\n  void ConstructNonNull(const char* buffer, size_t a_length) {\n    char* const str = new char[a_length + 1];\n    memcpy(str, buffer, a_length);\n    str[a_length] = '\\0';\n    c_str_ = str;\n    length_ = a_length;\n  }\n\n  const char* c_str_;\n  size_t length_;\n};  // class String\n\n// Streams a String to an ostream.  
Each '\\0' character in the String\n// is replaced with \"\\\\0\".\ninline ::std::ostream& operator<<(::std::ostream& os, const String& str) {\n  if (str.c_str() == NULL) {\n    os << \"(null)\";\n  } else {\n    const char* const c_str = str.c_str();\n    for (size_t i = 0; i != str.length(); i++) {\n      if (c_str[i] == '\\0') {\n        os << \"\\\\0\";\n      } else {\n        os << c_str[i];\n      }\n    }\n  }\n  return os;\n}\n\n// Gets the content of the stringstream's buffer as a String.  Each '\\0'\n// character in the buffer is replaced with \"\\\\0\".\nGTEST_API_ String StringStreamToString(::std::stringstream* stream);\n\n// Converts a streamable value to a String.  A NULL pointer is\n// converted to \"(null)\".  When the input value is a ::string,\n// ::std::string, ::wstring, or ::std::wstring object, each NUL\n// character in it is replaced with \"\\\\0\".\n\n// Declared here but defined in gtest.h, so that it has access\n// to the definition of the Message class, required by the ARM\n// compiler.\ntemplate <typename T>\nString StreamableToString(const T& streamable);\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_\n// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: keith.ray@gmail.com (Keith Ray)\n//\n// Google Test filepath utilities\n//\n// This header file declares classes and functions used internally by\n// Google Test.  They are subject to change without notice.\n//\n// This file is #included in <gtest/internal/gtest-internal.h>.\n// Do not include this header file separately!\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_\n\n\nnamespace testing {\nnamespace internal {\n\n// FilePath - a class for file and directory pathname manipulation which\n// handles platform-specific conventions (like the pathname separator).\n// Used for helper functions for naming files in a directory for xml output.\n// Except for Set methods, all methods are const or static, which provides an\n// \"immutable value object\" -- useful for peace of mind.\n// A FilePath with a value ending in a path separator (\"like/this/\") represents\n// a directory, otherwise it is assumed to represent a file. 
In either case,\n// it may or may not represent an actual file or directory in the file system.\n// Names are NOT checked for syntax correctness -- no checking for illegal\n// characters, malformed paths, etc.\n\nclass GTEST_API_ FilePath {\n public:\n  FilePath() : pathname_(\"\") { }\n  FilePath(const FilePath& rhs) : pathname_(rhs.pathname_) { }\n\n  explicit FilePath(const char* pathname) : pathname_(pathname) {\n    Normalize();\n  }\n\n  explicit FilePath(const String& pathname) : pathname_(pathname) {\n    Normalize();\n  }\n\n  FilePath& operator=(const FilePath& rhs) {\n    Set(rhs);\n    return *this;\n  }\n\n  void Set(const FilePath& rhs) {\n    pathname_ = rhs.pathname_;\n  }\n\n  String ToString() const { return pathname_; }\n  const char* c_str() const { return pathname_.c_str(); }\n\n  // Returns the current working directory, or \"\" if unsuccessful.\n  static FilePath GetCurrentDir();\n\n  // Given directory = \"dir\", base_name = \"test\", number = 0,\n  // extension = \"xml\", returns \"dir/test.xml\". If number is greater\n  // than zero (e.g., 12), returns \"dir/test_12.xml\".\n  // On Windows platform, uses \\ as the separator rather than /.\n  static FilePath MakeFileName(const FilePath& directory,\n                               const FilePath& base_name,\n                               int number,\n                               const char* extension);\n\n  // Given directory = \"dir\", relative_path = \"test.xml\",\n  // returns \"dir/test.xml\".\n  // On Windows, uses \\ as the separator rather than /.\n  static FilePath ConcatPaths(const FilePath& directory,\n                              const FilePath& relative_path);\n\n  // Returns a pathname for a file that does not currently exist. The pathname\n  // will be directory/base_name.extension or\n  // directory/base_name_<number>.extension if directory/base_name.extension\n  // already exists. 
The number will be incremented until a pathname is found\n  // that does not already exist.\n  // Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'.\n  // There could be a race condition if two or more processes are calling this\n  // function at the same time -- they could both pick the same filename.\n  static FilePath GenerateUniqueFileName(const FilePath& directory,\n                                         const FilePath& base_name,\n                                         const char* extension);\n\n  // Returns true iff the path is NULL or \"\".\n  bool IsEmpty() const { return c_str() == NULL || *c_str() == '\\0'; }\n\n  // If the input name has a trailing separator character, removes it and\n  // returns the name; otherwise returns the name string unmodified.\n  // On Windows platform, uses \\ as the separator, other platforms use /.\n  FilePath RemoveTrailingPathSeparator() const;\n\n  // Returns a copy of the FilePath with the directory part removed.\n  // Example: FilePath(\"path/to/file\").RemoveDirectoryName() returns\n  // FilePath(\"file\"). If there is no directory part (\"just_a_file\"), it returns\n  // the FilePath unmodified. If there is no file part (\"just_a_dir/\") it\n  // returns an empty FilePath (\"\").\n  // On Windows platform, '\\' is the path separator, otherwise it is '/'.\n  FilePath RemoveDirectoryName() const;\n\n  // RemoveFileName returns the directory path with the filename removed.\n  // Example: FilePath(\"path/to/file\").RemoveFileName() returns \"path/to/\".\n  // If the FilePath is \"a_file\" or \"/a_file\", RemoveFileName returns\n  // FilePath(\"./\") or, on Windows, FilePath(\".\\\\\"). 
If the filepath does\n  // not have a file, like \"just/a/dir/\", it returns the FilePath unmodified.\n  // On Windows platform, '\\' is the path separator, otherwise it is '/'.\n  FilePath RemoveFileName() const;\n\n  // Returns a copy of the FilePath with the case-insensitive extension removed.\n  // Example: FilePath(\"dir/file.exe\").RemoveExtension(\"EXE\") returns\n  // FilePath(\"dir/file\"). If a case-insensitive extension is not\n  // found, returns a copy of the original FilePath.\n  FilePath RemoveExtension(const char* extension) const;\n\n  // Creates directories so that path exists. Returns true if successful or if\n  // the directories already exist; returns false if unable to create\n  // directories for any reason. Will also return false if the FilePath does\n  // not represent a directory (that is, it doesn't end with a path separator).\n  bool CreateDirectoriesRecursively() const;\n\n  // Create the directory so that path exists. Returns true if successful or\n  // if the directory already exists; returns false if unable to create the\n  // directory for any reason, including if the parent directory does not\n  // exist. Not named \"CreateDirectory\" because that's a macro on Windows.\n  bool CreateFolder() const;\n\n  // Returns true if FilePath describes something in the file-system,\n  // either a file, directory, or whatever, and that something exists.\n  bool FileOrDirectoryExists() const;\n\n  // Returns true if pathname describes a directory in the file-system\n  // that exists.\n  bool DirectoryExists() const;\n\n  // Returns true if FilePath ends with a path separator, which indicates that\n  // it is intended to represent a directory. Returns false otherwise.\n  // This does NOT check that a directory (or file) actually exists.\n  bool IsDirectory() const;\n\n  // Returns true if pathname describes a root directory. 
(Windows has one\n  // root directory per disk drive.)\n  bool IsRootDirectory() const;\n\n  // Returns true if pathname describes an absolute path.\n  bool IsAbsolutePath() const;\n\n private:\n  // Replaces multiple consecutive separators with a single separator.\n  // For example, \"bar///foo\" becomes \"bar/foo\". Does not eliminate other\n  // redundancies that might be in a pathname involving \".\" or \"..\".\n  //\n  // A pathname with multiple consecutive separators may occur either through\n  // user error or as a result of some scripts or APIs that generate a pathname\n  // with a trailing separator. On other platforms the same API or script\n  // may NOT generate a pathname with a trailing \"/\". Then elsewhere that\n  // pathname may have another \"/\" and pathname components added to it,\n  // without checking for the separator already being there.\n  // The script language and operating system may allow paths like \"foo//bar\"\n  // but some of the functions in FilePath will not handle that correctly. In\n  // particular, RemoveTrailingPathSeparator() only removes one separator, and\n  // it is called in CreateDirectoriesRecursively() assuming that it will change\n  // a pathname from directory syntax (trailing separator) to filename syntax.\n  //\n  // On Windows this method also replaces the alternate path separator '/' with\n  // the primary path separator '\\\\', so that for example \"bar\\\\/\\\\foo\" becomes\n  // \"bar\\\\foo\".\n\n  void Normalize();\n\n  // Returns a pointer to the last occurence of a valid path separator in\n  // the FilePath. On Windows, for example, both '/' and '\\' are valid path\n  // separators. 
  // Returns NULL if no path separator was found.
  const char* FindLastPathSeparator() const;

  String pathname_;
};  // class FilePath

}  // namespace internal
}  // namespace testing

#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
// This file was GENERATED by command:
//     pump.py gtest-type-util.h.pump
// DO NOT EDIT BY HAND!!!

// Copyright 2008 Google Inc.
// All Rights Reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)

// Type utilities needed for implementing typed and type-parameterized
// tests.  This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
//
// Currently we support at most 50 types in a list, and at most 50
// type-parameterized tests in one type-parameterized test case.
// Please contact googletestframework@googlegroups.com if you need
// more.

#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_


// #ifdef __GNUC__ is too general here.  It is possible to use gcc without using
// libstdc++ (which is where cxxabi.h comes from).
# ifdef __GLIBCXX__
#  include <cxxabi.h>
# elif defined(__HP_aCC)
#  include <acxx_demangle.h>
# endif  // __GLIBCXX__

namespace testing {
namespace internal {

// GetTypeName<T>() returns a human-readable name of type T.
// NB: This function is also used in Google Mock, so don't move it inside of
// the typed-test-only section below.
template <typename T>
String GetTypeName() {
# if GTEST_HAS_RTTI

  const char* const name = typeid(T).name();
#  if defined(__GLIBCXX__) || defined(__HP_aCC)
  int status = 0;
  // gcc's implementation of typeid(T).name() mangles the type name,
  // so we have to demangle it.
#   ifdef __GLIBCXX__
  using abi::__cxa_demangle;
#   endif  // __GLIBCXX__
  char* const readable_name = __cxa_demangle(name, 0, 0, &status);
  const String name_str(status == 0 ? readable_name : name);
  free(readable_name);
  return name_str;
#  else
  return name;
#  endif  // __GLIBCXX__ || __HP_aCC

# else

  return "<type>";

# endif  // GTEST_HAS_RTTI
}

#if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P

// AssertTypeEq<T1, T2>::type is defined iff T1 and T2 are the same
// type.  This can be used as a compile-time assertion to ensure that
// two types are equal.

template <typename T1, typename T2>
struct AssertTypeEq;

template <typename T>
struct AssertTypeEq<T, T> {
  typedef bool type;
};

// A unique type used as the default value for the arguments of class
// template Types.  This allows us to simulate variadic templates
// (e.g. Types<int>, Types<int, double>, etc.), which C++ doesn't
// support directly.
struct None {};

// The following family of struct and struct templates are used to
// represent type lists.
// In particular, TypesN<T1, T2, ..., TN>
// represents a type list with N types (T1, T2, ..., and TN) in it.
// Except for Types0, every struct in the family has two member types:
// Head for the first type in the list, and Tail for the rest of the
// list.

// The empty type list.
struct Types0 {};

// Type lists of length 1, 2, 3, and so on.

template <typename T1>
struct Types1 {
  typedef T1 Head;
  typedef Types0 Tail;
};
template <typename T1, typename T2>
struct Types2 {
  typedef T1 Head;
  typedef Types1<T2> Tail;
};

template <typename T1, typename T2, typename T3>
struct Types3 {
  typedef T1 Head;
  typedef Types2<T2, T3> Tail;
};

template <typename T1, typename T2, typename T3, typename T4>
struct Types4 {
  typedef T1 Head;
  typedef Types3<T2, T3, T4> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5>
struct Types5 {
  typedef T1 Head;
  typedef Types4<T2, T3, T4, T5> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6>
struct Types6 {
  typedef T1 Head;
  typedef Types5<T2, T3, T4, T5, T6> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7>
struct Types7 {
  typedef T1 Head;
  typedef Types6<T2, T3, T4, T5, T6, T7> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8>
struct Types8 {
  typedef T1 Head;
  typedef Types7<T2, T3, T4, T5, T6, T7, T8> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9>
struct Types9 {
  typedef T1 Head;
  typedef Types8<T2, T3, T4, T5, T6, T7, T8, T9> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10>
struct Types10 {
  typedef T1 Head;
  typedef Types9<T2, T3, T4, T5, T6, T7, T8, T9, T10> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11>
struct Types11 {
  typedef T1 Head;
  typedef Types10<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12>
struct Types12 {
  typedef T1 Head;
  typedef Types11<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13>
struct Types13 {
  typedef T1 Head;
  typedef Types12<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14>
struct Types14 {
  typedef T1 Head;
  typedef Types13<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15>
struct Types15 {
  typedef T1 Head;
  typedef Types14<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
      T15> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16>
struct Types16 {
  typedef T1 Head;
  typedef Types15<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17>
struct Types17 {
  typedef T1 Head;
  typedef Types16<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18>
struct Types18 {
  typedef T1 Head;
  typedef Types17<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19>
struct Types19 {
  typedef T1 Head;
  typedef Types18<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20>
struct Types20 {
  typedef T1 Head;
  typedef Types19<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21>
struct Types21 {
  typedef T1 Head;
  typedef Types20<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22>
struct Types22 {
  typedef T1 Head;
  typedef Types21<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23>
struct Types23 {
  typedef T1 Head;
  typedef Types22<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24>
struct Types24 {
  typedef T1 Head;
  typedef Types23<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25>
struct Types25 {
  typedef T1 Head;
  typedef Types24<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26>
struct Types26 {
  typedef T1 Head;
  typedef Types25<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27>
struct Types27 {
  typedef T1 Head;
  typedef Types26<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28>
struct Types28 {
  typedef T1 Head;
  typedef Types27<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29>
struct Types29 {
  typedef T1 Head;
  typedef Types28<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
      T29> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30>
struct Types30 {
  typedef T1 Head;
  typedef Types29<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31>
struct Types31 {
  typedef T1 Head;
  typedef Types30<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32>
struct Types32 {
  typedef T1 Head;
  typedef Types31<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33>
struct Types33 {
  typedef T1 Head;
  typedef Types32<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34>
struct Types34 {
  typedef T1 Head;
  typedef Types33<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35>
struct Types35 {
  typedef T1 Head;
  typedef Types34<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36>
struct Types36 {
  typedef T1 Head;
  typedef Types35<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37>
struct Types37 {
  typedef T1 Head;
  typedef Types36<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38>
struct Types38 {
  typedef T1 Head;
  typedef Types37<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39>
struct Types39 {
  typedef T1 Head;
  typedef Types38<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40>
struct Types40 {
  typedef T1 Head;
  typedef Types39<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41>
struct Types41 {
  typedef T1 Head;
  typedef Types40<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42>
struct Types42 {
  typedef T1 Head;
  typedef Types41<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43>
struct Types43 {
  typedef T1 Head;
  typedef Types42<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
      T43> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44>
struct Types44 {
  typedef T1 Head;
  typedef Types43<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
      T44> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45>
struct Types45 {
  typedef T1 Head;
  typedef Types44<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
      T44, T45> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45,
    typename T46>
struct Types46 {
  typedef T1 Head;
  typedef Types45<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
      T44, T45, T46> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45,
    typename T46, typename T47>
struct Types47 {
  typedef T1 Head;
  typedef Types46<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
      T44, T45, T46, T47> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45,
    typename T46, typename T47, typename T48>
struct Types48 {
  typedef T1 Head;
  typedef Types47<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
      T44, T45, T46, T47, T48> Tail;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45,
    typename T46, typename T47, typename T48, typename T49>
struct Types49 {
  typedef T1 Head;
  typedef Types48<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
      T30, T31, T32,
T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n      T44, T45, T46, T47, T48, T49> Tail;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49, typename T50>\nstruct Types50 {\n  typedef T1 Head;\n  typedef Types49<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n      T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n      T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n      T44, T45, T46, T47, T48, T49, T50> Tail;\n};\n\n\n}  // namespace internal\n\n// We don't want to require the users to write TypesN<...> directly,\n// as that would require them to count the length.  Types<...> is much\n// easier to write, but generates horrible messages when there is a\n// compiler error, as gcc insists on printing out each template\n// argument, even if it has the default value (this means Types<int>\n// will appear as Types<int, None, None, ..., None> in the compiler\n// errors).\n//\n// Our solution is to combine the best part of the two approaches: a\n// user would write Types<T1, ..., TN>, and Google Test will translate\n// that to TypesN<T1, ..., TN> internally to make error messages\n// readable.  
The translation is done by the 'type' member of the\n// Types template.\ntemplate <typename T1 = internal::None, typename T2 = internal::None,\n    typename T3 = internal::None, typename T4 = internal::None,\n    typename T5 = internal::None, typename T6 = internal::None,\n    typename T7 = internal::None, typename T8 = internal::None,\n    typename T9 = internal::None, typename T10 = internal::None,\n    typename T11 = internal::None, typename T12 = internal::None,\n    typename T13 = internal::None, typename T14 = internal::None,\n    typename T15 = internal::None, typename T16 = internal::None,\n    typename T17 = internal::None, typename T18 = internal::None,\n    typename T19 = internal::None, typename T20 = internal::None,\n    typename T21 = internal::None, typename T22 = internal::None,\n    typename T23 = internal::None, typename T24 = internal::None,\n    typename T25 = internal::None, typename T26 = internal::None,\n    typename T27 = internal::None, typename T28 = internal::None,\n    typename T29 = internal::None, typename T30 = internal::None,\n    typename T31 = internal::None, typename T32 = internal::None,\n    typename T33 = internal::None, typename T34 = internal::None,\n    typename T35 = internal::None, typename T36 = internal::None,\n    typename T37 = internal::None, typename T38 = internal::None,\n    typename T39 = internal::None, typename T40 = internal::None,\n    typename T41 = internal::None, typename T42 = internal::None,\n    typename T43 = internal::None, typename T44 = internal::None,\n    typename T45 = internal::None, typename T46 = internal::None,\n    typename T47 = internal::None, typename T48 = internal::None,\n    typename T49 = internal::None, typename T50 = internal::None>\nstruct Types {\n  typedef internal::Types50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n     
 T41, T42, T43, T44, T45, T46, T47, T48, T49, T50> type;\n};\n\ntemplate <>\nstruct Types<internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types0 type;\n};\ntemplate <typename T1>\nstruct Types<T1, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types1<T1> type;\n};\ntemplate <typename T1, typename T2>\nstruct Types<T1, T2, internal::None, 
internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types2<T1, T2> type;\n};\ntemplate <typename T1, typename T2, typename T3>\nstruct Types<T1, T2, T3, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types3<T1, T2, T3> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4>\nstruct Types<T1, T2, T3, T4, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, 
internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types4<T1, T2, T3, T4> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5>\nstruct Types<T1, T2, T3, T4, T5, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types5<T1, T2, T3, T4, T5> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6>\nstruct Types<T1, T2, T3, T4, T5, T6, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    
internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types6<T1, T2, T3, T4, T5, T6> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types7<T1, T2, T3, T4, T5, T6, T7> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, 
internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types8<T1, T2, T3, T4, T5, T6, T7, T8> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types9<T1, T2, T3, T4, T5, T6, T7, T8, T9> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, 
internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, 
internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, 
internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, 
internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    
T16, T17, T18, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename 
T17, typename T18, typename T19, typename T20>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, 
typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      
T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, 
internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,\n      T26> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, internal::None,\n    internal::None, internal::None, 
internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, 
typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30> type;\n};\ntemplate <typename T1, typename T2, 
typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    
internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34>\nstruct 
Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename 
T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    
internal::None, internal::None, internal::None> {\n  typedef internal::Types37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename 
T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None> {\n  typedef internal::Types40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,\n      T40> type;\n};\ntemplate <typename T1, typename T2, 
typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, 
T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, internal::None,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None, internal::None> {\n  typedef internal::Types43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename 
T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    internal::None, internal::None, internal::None, internal::None,\n    internal::None, internal::None> {\n  typedef internal::Types44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,\n    internal::None, internal::None, internal::None, internal::None,\n    
internal::None> {\n  typedef internal::Types45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,\n    T46, internal::None, internal::None, internal::None, internal::None> {\n  typedef internal::Types46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45, T46> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename 
T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,\n    T46, T47, internal::None, internal::None, internal::None> {\n  typedef internal::Types47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45, T46, T47> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,\n    T46, T47, T48, internal::None, internal::None> {\n  typedef internal::Types48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, 
T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45, T46, T47, T48> type;\n};\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49>\nstruct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,\n    T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,\n    T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,\n    T46, T47, T48, T49, internal::None> {\n  typedef internal::Types49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45, T46, T47, T48, T49> type;\n};\n\nnamespace internal {\n\n# define GTEST_TEMPLATE_ template <typename T> class\n\n// The template \"selector\" struct TemplateSel<Tmpl> is used to\n// represent Tmpl, which must be a class template with one type\n// parameter, as a type.  TemplateSel<Tmpl>::Bind<T>::type is defined\n// as the type Tmpl<T>.  
This allows us to actually instantiate the\n// template \"selected\" by TemplateSel<Tmpl>.\n//\n// This trick is necessary for simulating typedef for class templates,\n// which C++ doesn't support directly.\ntemplate <GTEST_TEMPLATE_ Tmpl>\nstruct TemplateSel {\n  template <typename T>\n  struct Bind {\n    typedef Tmpl<T> type;\n  };\n};\n\n# define GTEST_BIND_(TmplSel, T) \\\n  TmplSel::template Bind<T>::type\n\n// A unique struct template used as the default value for the\n// arguments of class template Templates.  This allows us to simulate\n// variadic templates (e.g. Templates<int>, Templates<int, double>,\n// and etc), which C++ doesn't support directly.\ntemplate <typename T>\nstruct NoneT {};\n\n// The following family of struct and struct templates are used to\n// represent template lists.  In particular, TemplatesN<T1, T2, ...,\n// TN> represents a list of N templates (T1, T2, ..., and TN).  Except\n// for Templates0, every struct in the family has two member types:\n// Head for the selector of the first template in the list, and Tail\n// for the rest of the list.\n\n// The empty template list.\nstruct Templates0 {};\n\n// Template lists of length 1, 2, 3, and so on.\n\ntemplate <GTEST_TEMPLATE_ T1>\nstruct Templates1 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates0 Tail;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2>\nstruct Templates2 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates1<T2> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3>\nstruct Templates3 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates2<T2, T3> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4>\nstruct Templates4 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates3<T2, T3, T4> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5>\nstruct Templates5 {\n  typedef TemplateSel<T1> Head;\n  
typedef Templates4<T2, T3, T4, T5> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6>\nstruct Templates6 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates5<T2, T3, T4, T5, T6> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7>\nstruct Templates7 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates6<T2, T3, T4, T5, T6, T7> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8>\nstruct Templates8 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates7<T2, T3, T4, T5, T6, T7, T8> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9>\nstruct Templates9 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates8<T2, T3, T4, T5, T6, T7, T8, T9> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10>\nstruct Templates10 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates9<T2, T3, T4, T5, T6, T7, T8, T9, T10> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11>\nstruct Templates11 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates10<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    
GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12>\nstruct Templates12 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates11<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13>\nstruct Templates13 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates12<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14>\nstruct Templates14 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates13<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15>\nstruct Templates15 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates14<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ 
T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16>\nstruct Templates16 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates15<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17>\nstruct Templates17 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates16<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18>\nstruct Templates18 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates17<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19>\nstruct Templates19 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates18<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19> 
Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20>\nstruct Templates20 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates19<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21>\nstruct Templates21 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates20<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22>\nstruct Templates22 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates21<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22> 
Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23>\nstruct Templates23 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates22<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24>\nstruct Templates24 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates23<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, 
GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25>\nstruct Templates25 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates24<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26>\nstruct Templates26 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates25<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27>\nstruct Templates27 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates26<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ 
T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28>\nstruct Templates28 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates27<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29>\nstruct Templates29 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates28<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    
GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30>\nstruct Templates30 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates29<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31>\nstruct Templates31 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates30<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n 
   GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32>\nstruct Templates32 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates31<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33>\nstruct Templates33 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates32<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, 
GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34>\nstruct Templates34 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates33<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35>\nstruct Templates35 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates34<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n   
 GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36>\nstruct Templates36 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates35<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37>\nstruct Templates37 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates36<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    
GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38>\nstruct Templates38 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates37<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39>\nstruct Templates39 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates38<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, 
T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40>\nstruct Templates40 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates39<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ 
T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41>\nstruct Templates41 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates40<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42>\nstruct Templates42 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates41<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    
GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43>\nstruct Templates43 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates42<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, 
GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44>\nstruct Templates44 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates43<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45>\nstruct Templates45 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates44<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, 
GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46>\nstruct Templates46 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates45<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45, T46> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ 
T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47>\nstruct Templates47 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates46<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45, T46, T47> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48>\nstruct Templates48 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates47<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45, T46, T47, T48> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, 
GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,\n    GTEST_TEMPLATE_ T49>\nstruct Templates49 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates48<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45, T46, T47, T48, T49> Tail;\n};\n\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, 
GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,\n    GTEST_TEMPLATE_ T49, GTEST_TEMPLATE_ T50>\nstruct Templates50 {\n  typedef TemplateSel<T1> Head;\n  typedef Templates49<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n      T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n      T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,\n      T43, T44, T45, T46, T47, T48, T49, T50> Tail;\n};\n\n\n// We don't want to require the users to write TemplatesN<...> directly,\n// as that would require them to count the length.  Templates<...> is much\n// easier to write, but generates horrible messages when there is a\n// compiler error, as gcc insists on printing out each template\n// argument, even if it has the default value (this means Templates<list>\n// will appear as Templates<list, NoneT, NoneT, ..., NoneT> in the compiler\n// errors).\n//\n// Our solution is to combine the best part of the two approaches: a\n// user would write Templates<T1, ..., TN>, and Google Test will translate\n// that to TemplatesN<T1, ..., TN> internally to make error messages\n// readable.  
The translation is done by the 'type' member of the\n// Templates template.\ntemplate <GTEST_TEMPLATE_ T1 = NoneT, GTEST_TEMPLATE_ T2 = NoneT,\n    GTEST_TEMPLATE_ T3 = NoneT, GTEST_TEMPLATE_ T4 = NoneT,\n    GTEST_TEMPLATE_ T5 = NoneT, GTEST_TEMPLATE_ T6 = NoneT,\n    GTEST_TEMPLATE_ T7 = NoneT, GTEST_TEMPLATE_ T8 = NoneT,\n    GTEST_TEMPLATE_ T9 = NoneT, GTEST_TEMPLATE_ T10 = NoneT,\n    GTEST_TEMPLATE_ T11 = NoneT, GTEST_TEMPLATE_ T12 = NoneT,\n    GTEST_TEMPLATE_ T13 = NoneT, GTEST_TEMPLATE_ T14 = NoneT,\n    GTEST_TEMPLATE_ T15 = NoneT, GTEST_TEMPLATE_ T16 = NoneT,\n    GTEST_TEMPLATE_ T17 = NoneT, GTEST_TEMPLATE_ T18 = NoneT,\n    GTEST_TEMPLATE_ T19 = NoneT, GTEST_TEMPLATE_ T20 = NoneT,\n    GTEST_TEMPLATE_ T21 = NoneT, GTEST_TEMPLATE_ T22 = NoneT,\n    GTEST_TEMPLATE_ T23 = NoneT, GTEST_TEMPLATE_ T24 = NoneT,\n    GTEST_TEMPLATE_ T25 = NoneT, GTEST_TEMPLATE_ T26 = NoneT,\n    GTEST_TEMPLATE_ T27 = NoneT, GTEST_TEMPLATE_ T28 = NoneT,\n    GTEST_TEMPLATE_ T29 = NoneT, GTEST_TEMPLATE_ T30 = NoneT,\n    GTEST_TEMPLATE_ T31 = NoneT, GTEST_TEMPLATE_ T32 = NoneT,\n    GTEST_TEMPLATE_ T33 = NoneT, GTEST_TEMPLATE_ T34 = NoneT,\n    GTEST_TEMPLATE_ T35 = NoneT, GTEST_TEMPLATE_ T36 = NoneT,\n    GTEST_TEMPLATE_ T37 = NoneT, GTEST_TEMPLATE_ T38 = NoneT,\n    GTEST_TEMPLATE_ T39 = NoneT, GTEST_TEMPLATE_ T40 = NoneT,\n    GTEST_TEMPLATE_ T41 = NoneT, GTEST_TEMPLATE_ T42 = NoneT,\n    GTEST_TEMPLATE_ T43 = NoneT, GTEST_TEMPLATE_ T44 = NoneT,\n    GTEST_TEMPLATE_ T45 = NoneT, GTEST_TEMPLATE_ T46 = NoneT,\n    GTEST_TEMPLATE_ T47 = NoneT, GTEST_TEMPLATE_ T48 = NoneT,\n    GTEST_TEMPLATE_ T49 = NoneT, GTEST_TEMPLATE_ T50 = NoneT>\nstruct Templates {\n  typedef Templates50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45, T46, T47, T48, T49, T50> type;\n};\n\ntemplate <>\nstruct 
Templates<NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT> {\n  typedef Templates0 type;\n};\ntemplate <GTEST_TEMPLATE_ T1>\nstruct Templates<T1, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT> {\n  typedef Templates1<T1> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2>\nstruct Templates<T1, T2, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT> {\n  typedef Templates2<T1, T2> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3>\nstruct Templates<T1, T2, T3, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates3<T1, T2, T3> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4>\nstruct Templates<T1, T2, T3, T4, NoneT, NoneT, 
NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates4<T1, T2, T3, T4> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5>\nstruct Templates<T1, T2, T3, T4, T5, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates5<T1, T2, T3, T4, T5> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6>\nstruct Templates<T1, T2, T3, T4, T5, T6, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates6<T1, T2, T3, T4, T5, T6> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, 
NoneT, NoneT> {\n  typedef Templates7<T1, T2, T3, T4, T5, T6, T7> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates8<T1, T2, T3, T4, T5, T6, T7, T8> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates9<T1, T2, T3, T4, T5, T6, T7, T8, T9> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> type;\n};\ntemplate 
<GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, 
NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, 
GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, 
NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates20<T1, T2, T3, T4, T5, T6, T7, T8, 
T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT> {\n  typedef Templates22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22> 
type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT> {\n  typedef Templates23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT> {\n  typedef Templates24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, 
T19, T20, T21, T22, T23, T24> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT> {\n  typedef Templates25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    
NoneT, NoneT> {\n  typedef Templates26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT> {\n  typedef Templates27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, 
T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT> {\n  typedef Templates28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT> {\n  typedef Templates29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n   
 GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31> type;\n};\ntemplate 
<GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33>\nstruct Templates<T1, T2, T3, 
T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, 
GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, 
T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, NoneT, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    
GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, NoneT, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    
GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, NoneT, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, 
T36, T37, T38, T39, T40> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, NoneT, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    
GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, NoneT,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ 
T43>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n    
  T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    T45, NoneT, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, 
GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    T45, T46, NoneT, NoneT, NoneT, NoneT> {\n  typedef Templates46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45, T46> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, 
GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    T45, T46, T47, NoneT, NoneT, NoneT> {\n  typedef Templates47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45, T46, T47> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, 
GTEST_TEMPLATE_ T48>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    T45, T46, T47, T48, NoneT, NoneT> {\n  typedef Templates48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45, T46, T47, T48> type;\n};\ntemplate <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,\n    GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,\n    GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,\n    GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,\n    GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,\n    GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,\n    GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,\n    GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,\n    GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,\n    GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,\n    GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,\n    GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,\n    GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,\n    GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,\n    GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,\n    GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,\n    GTEST_TEMPLATE_ T49>\nstruct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,\n    T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,\n    T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,\n    T45, T46, T47, T48, T49, NoneT> {\n  typedef 
Templates49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n      T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,\n      T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,\n      T42, T43, T44, T45, T46, T47, T48, T49> type;\n};\n\n// The TypeList template makes it possible to use either a single type\n// or a Types<...> list in TYPED_TEST_CASE() and\n// INSTANTIATE_TYPED_TEST_CASE_P().\n\ntemplate <typename T>\nstruct TypeList { typedef Types1<T> type; };\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49, typename T50>\nstruct TypeList<Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    T44, T45, T46, T47, T48, T49, T50> > {\n  typedef typename Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n      T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,\n      T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,\n      T41, T42, T43, T44, T45, T46, T47, T48, T49, T50>::type type;\n};\n\n#endif  // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // 
GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_\n\n// Due to C++ preprocessor weirdness, we need double indirection to\n// concatenate two tokens when one of them is __LINE__.  Writing\n//\n//   foo ## __LINE__\n//\n// will result in the token foo__LINE__, instead of foo followed by\n// the current line number.  For more details, see\n// http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.6\n#define GTEST_CONCAT_TOKEN_(foo, bar) GTEST_CONCAT_TOKEN_IMPL_(foo, bar)\n#define GTEST_CONCAT_TOKEN_IMPL_(foo, bar) foo ## bar\n\n// Google Test defines the testing::Message class to allow construction of\n// test messages via the << operator.  The idea is that anything\n// streamable to std::ostream can be streamed to a testing::Message.\n// This allows a user to use his own types in Google Test assertions by\n// overloading the << operator.\n//\n// util/gtl/stl_logging-inl.h overloads << for STL containers.  These\n// overloads cannot be defined in the std namespace, as that will be\n// undefined behavior.  Therefore, they are defined in the global\n// namespace instead.\n//\n// C++'s symbol lookup rule (i.e. Koenig lookup) says that these\n// overloads are visible in either the std namespace or the global\n// namespace, but not other namespaces, including the testing\n// namespace which Google Test's Message class is in.\n//\n// To allow STL containers (and other types that has a << operator\n// defined in the global namespace) to be used in Google Test assertions,\n// testing::Message must access the custom << operator from the global\n// namespace.  Hence this helper function.\n//\n// Note: Jeffrey Yasskin suggested an alternative fix by \"using\n// ::operator<<;\" in the definition of Message's operator<<.  
That fix\n// doesn't require a helper function, but unfortunately doesn't\n// compile with MSVC.\ntemplate <typename T>\ninline void GTestStreamToHelper(std::ostream* os, const T& val) {\n  *os << val;\n}\n\nclass ProtocolMessage;\nnamespace proto2 { class Message; }\n\nnamespace testing {\n\n// Forward declarations.\n\nclass AssertionResult;                 // Result of an assertion.\nclass Message;                         // Represents a failure message.\nclass Test;                            // Represents a test.\nclass TestInfo;                        // Information about a test.\nclass TestPartResult;                  // Result of a test part.\nclass UnitTest;                        // A collection of test cases.\n\ntemplate <typename T>\n::std::string PrintToString(const T& value);\n\nnamespace internal {\n\nstruct TraceInfo;                      // Information about a trace point.\nclass ScopedTrace;                     // Implements scoped trace.\nclass TestInfoImpl;                    // Opaque implementation of TestInfo\nclass UnitTestImpl;                    // Opaque implementation of UnitTest\n\n// How many times InitGoogleTest() has been called.\nextern int g_init_gtest_count;\n\n// The text used in failure messages to indicate the start of the\n// stack trace.\nGTEST_API_ extern const char kStackTraceMarker[];\n\n// A secret type that Google Test users don't know about.  It has no\n// definition on purpose.  Therefore it's impossible to create a\n// Secret object, which is what we want.\nclass Secret;\n\n// Two overloaded helpers for checking at compile time whether an\n// expression is a null pointer literal (i.e. NULL or any 0-valued\n// compile-time integral constant).  Their return values have\n// different sizes, so we can use sizeof() to test which version is\n// picked by the compiler.  
These helpers have no implementations, as\n// we only need their signatures.\n//\n// Given IsNullLiteralHelper(x), the compiler will pick the first\n// version if x can be implicitly converted to Secret*, and pick the\n// second version otherwise.  Since Secret is a secret and incomplete\n// type, the only expression a user can write that has type Secret* is\n// a null pointer literal.  Therefore, we know that x is a null\n// pointer literal if and only if the first version is picked by the\n// compiler.\nchar IsNullLiteralHelper(Secret* p);\nchar (&IsNullLiteralHelper(...))[2];  // NOLINT\n\n// A compile-time bool constant that is true if and only if x is a\n// null pointer literal (i.e. NULL or any 0-valued compile-time\n// integral constant).\n#ifdef GTEST_ELLIPSIS_NEEDS_POD_\n// We lose support for NULL detection where the compiler doesn't like\n// passing non-POD classes through ellipsis (...).\n# define GTEST_IS_NULL_LITERAL_(x) false\n#else\n# define GTEST_IS_NULL_LITERAL_(x) \\\n    (sizeof(::testing::internal::IsNullLiteralHelper(x)) == 1)\n#endif  // GTEST_ELLIPSIS_NEEDS_POD_\n\n// Appends the user-supplied message to the Google-Test-generated message.\nGTEST_API_ String AppendUserMessage(const String& gtest_msg,\n                                    const Message& user_msg);\n\n// A helper class for creating scoped traces in user programs.\nclass GTEST_API_ ScopedTrace {\n public:\n  // The c'tor pushes the given source file location and message onto\n  // a trace stack maintained by Google Test.\n  ScopedTrace(const char* file, int line, const Message& message);\n\n  // The d'tor pops the info pushed by the c'tor.\n  //\n  // Note that the d'tor is not virtual in order to be efficient.\n  // Don't inherit from ScopedTrace!\n  ~ScopedTrace();\n\n private:\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedTrace);\n} GTEST_ATTRIBUTE_UNUSED_;  // A ScopedTrace object does its job in its\n                            // c'tor and d'tor.  
Therefore it doesn't\n                            // need to be used otherwise.\n\n// Converts a streamable value to a String.  A NULL pointer is\n// converted to \"(null)\".  When the input value is a ::string,\n// ::std::string, ::wstring, or ::std::wstring object, each NUL\n// character in it is replaced with \"\\\\0\".\n// Declared here but defined in gtest.h, so that it has access\n// to the definition of the Message class, required by the ARM\n// compiler.\ntemplate <typename T>\nString StreamableToString(const T& streamable);\n\n// The Symbian compiler has a bug that prevents it from selecting the\n// correct overload of FormatForComparisonFailureMessage (see below)\n// unless we pass the first argument by reference.  If we do that,\n// however, Visual Age C++ 10.1 generates a compiler error.  Therefore\n// we only apply the work-around for Symbian.\n#if defined(__SYMBIAN32__)\n# define GTEST_CREF_WORKAROUND_ const&\n#else\n# define GTEST_CREF_WORKAROUND_\n#endif\n\n// When this operand is a const char* or char*, if the other operand\n// is a ::std::string or ::string, we print this operand as a C string\n// rather than a pointer (we do the same for wide strings); otherwise\n// we print it as a pointer to be safe.\n\n// This internal macro is used to avoid duplicated code.\n#define GTEST_FORMAT_IMPL_(operand2_type, operand1_printer)\\\ninline String FormatForComparisonFailureMessage(\\\n    operand2_type::value_type* GTEST_CREF_WORKAROUND_ str, \\\n    const operand2_type& /*operand2*/) {\\\n  return operand1_printer(str);\\\n}\\\ninline String FormatForComparisonFailureMessage(\\\n    const operand2_type::value_type* GTEST_CREF_WORKAROUND_ str, \\\n    const operand2_type& /*operand2*/) {\\\n  return operand1_printer(str);\\\n}\n\nGTEST_FORMAT_IMPL_(::std::string, String::ShowCStringQuoted)\n#if GTEST_HAS_STD_WSTRING\nGTEST_FORMAT_IMPL_(::std::wstring, String::ShowWideCStringQuoted)\n#endif  // GTEST_HAS_STD_WSTRING\n\n#if 
GTEST_HAS_GLOBAL_STRING\nGTEST_FORMAT_IMPL_(::string, String::ShowCStringQuoted)\n#endif  // GTEST_HAS_GLOBAL_STRING\n#if GTEST_HAS_GLOBAL_WSTRING\nGTEST_FORMAT_IMPL_(::wstring, String::ShowWideCStringQuoted)\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n#undef GTEST_FORMAT_IMPL_\n\n// The next four overloads handle the case where the operand being\n// printed is a char/wchar_t pointer and the other operand is not a\n// string/wstring object.  In such cases, we just print the operand as\n// a pointer to be safe.\n#define GTEST_FORMAT_CHAR_PTR_IMPL_(CharType)                       \\\n  template <typename T>                                             \\\n  String FormatForComparisonFailureMessage(CharType* GTEST_CREF_WORKAROUND_ p, \\\n                                           const T&) { \\\n    return PrintToString(static_cast<const void*>(p));              \\\n  }\n\nGTEST_FORMAT_CHAR_PTR_IMPL_(char)\nGTEST_FORMAT_CHAR_PTR_IMPL_(const char)\nGTEST_FORMAT_CHAR_PTR_IMPL_(wchar_t)\nGTEST_FORMAT_CHAR_PTR_IMPL_(const wchar_t)\n\n#undef GTEST_FORMAT_CHAR_PTR_IMPL_\n\n// Constructs and returns the message for an equality assertion\n// (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure.\n//\n// The first four parameters are the expressions used in the assertion\n// and their values, as strings.  For example, for ASSERT_EQ(foo, bar)\n// where foo is 5 and bar is 6, we have:\n//\n//   expected_expression: \"foo\"\n//   actual_expression:   \"bar\"\n//   expected_value:      \"5\"\n//   actual_value:        \"6\"\n//\n// The ignoring_case parameter is true iff the assertion is a\n// *_STRCASEEQ*.  
When it's true, the string \" (ignoring case)\" will\n// be inserted into the message.\nGTEST_API_ AssertionResult EqFailure(const char* expected_expression,\n                                     const char* actual_expression,\n                                     const String& expected_value,\n                                     const String& actual_value,\n                                     bool ignoring_case);\n\n// Constructs a failure message for Boolean assertions such as EXPECT_TRUE.\nGTEST_API_ String GetBoolAssertionFailureMessage(\n    const AssertionResult& assertion_result,\n    const char* expression_text,\n    const char* actual_predicate_value,\n    const char* expected_predicate_value);\n\n// This template class represents an IEEE floating-point number\n// (either single-precision or double-precision, depending on the\n// template parameters).\n//\n// The purpose of this class is to do more sophisticated number\n// comparison.  (Due to round-off error, etc, it's very unlikely that\n// two floating-points will be equal exactly.  
Hence a naive\n// comparison by the == operation often doesn't work.)\n//\n// Format of IEEE floating-point:\n//\n//   The most-significant bit being the leftmost, an IEEE\n//   floating-point looks like\n//\n//     sign_bit exponent_bits fraction_bits\n//\n//   Here, sign_bit is a single bit that designates the sign of the\n//   number.\n//\n//   For float, there are 8 exponent bits and 23 fraction bits.\n//\n//   For double, there are 11 exponent bits and 52 fraction bits.\n//\n//   More details can be found at\n//   http://en.wikipedia.org/wiki/IEEE_floating-point_standard.\n//\n// Template parameter:\n//\n//   RawType: the raw floating-point type (either float or double)\ntemplate <typename RawType>\nclass FloatingPoint {\n public:\n  // Defines the unsigned integer type that has the same size as the\n  // floating point number.\n  typedef typename TypeWithSize<sizeof(RawType)>::UInt Bits;\n\n  // Constants.\n\n  // # of bits in a number.\n  static const size_t kBitCount = 8*sizeof(RawType);\n\n  // # of fraction bits in a number.\n  static const size_t kFractionBitCount =\n    std::numeric_limits<RawType>::digits - 1;\n\n  // # of exponent bits in a number.\n  static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount;\n\n  // The mask for the sign bit.\n  static const Bits kSignBitMask = static_cast<Bits>(1) << (kBitCount - 1);\n\n  // The mask for the fraction bits.\n  static const Bits kFractionBitMask =\n    ~static_cast<Bits>(0) >> (kExponentBitCount + 1);\n\n  // The mask for the exponent bits.\n  static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask);\n\n  // How many ULP's (Units in the Last Place) we want to tolerate when\n  // comparing two numbers.  The larger the value, the more error we\n  // allow.  A 0 value means that two numbers must be exactly the same\n  // to be considered equal.\n  //\n  // The maximum error of a single floating-point operation is 0.5\n  // units in the last place.  
On Intel CPU's, all floating-point\n  // calculations are done with 80-bit precision, while double has 64\n  // bits.  Therefore, 4 should be enough for ordinary use.\n  //\n  // See the following article for more details on ULP:\n  // http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm.\n  static const size_t kMaxUlps = 4;\n\n  // Constructs a FloatingPoint from a raw floating-point number.\n  //\n  // On an Intel CPU, passing a non-normalized NAN (Not a Number)\n  // around may change its bits, although the new value is guaranteed\n  // to be also a NAN.  Therefore, don't expect this constructor to\n  // preserve the bits in x when x is a NAN.\n  explicit FloatingPoint(const RawType& x) { u_.value_ = x; }\n\n  // Static methods\n\n  // Reinterprets a bit pattern as a floating-point number.\n  //\n  // This function is needed to test the AlmostEquals() method.\n  static RawType ReinterpretBits(const Bits bits) {\n    FloatingPoint fp(0);\n    fp.u_.bits_ = bits;\n    return fp.u_.value_;\n  }\n\n  // Returns the floating-point number that represents positive infinity.\n  static RawType Infinity() {\n    return ReinterpretBits(kExponentBitMask);\n  }\n\n  // Non-static methods\n\n  // Returns the bits that represent this number.\n  const Bits &bits() const { return u_.bits_; }\n\n  // Returns the exponent bits of this number.\n  Bits exponent_bits() const { return kExponentBitMask & u_.bits_; }\n\n  // Returns the fraction bits of this number.\n  Bits fraction_bits() const { return kFractionBitMask & u_.bits_; }\n\n  // Returns the sign bit of this number.\n  Bits sign_bit() const { return kSignBitMask & u_.bits_; }\n\n  // Returns true iff this is NAN (not a number).\n  bool is_nan() const {\n    // It's a NAN if the exponent bits are all ones and the fraction\n    // bits are not entirely zeros.\n    return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);\n  }\n\n  // Returns true iff this number is at most kMaxUlps ULP's 
away from\n  // rhs.  In particular, this function:\n  //\n  //   - returns false if either number is (or both are) NAN.\n  //   - treats really large numbers as almost equal to infinity.\n  //   - thinks +0.0 and -0.0 are 0 ULP's apart.\n  bool AlmostEquals(const FloatingPoint& rhs) const {\n    // The IEEE standard says that any comparison operation involving\n    // a NAN must return false.\n    if (is_nan() || rhs.is_nan()) return false;\n\n    return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)\n        <= kMaxUlps;\n  }\n\n private:\n  // The data type used to store the actual floating-point number.\n  union FloatingPointUnion {\n    RawType value_;  // The raw floating-point number.\n    Bits bits_;      // The bits that represent the number.\n  };\n\n  // Converts an integer from the sign-and-magnitude representation to\n  // the biased representation.  More precisely, let N be 2 to the\n  // power of (kBitCount - 1); an integer x is represented by the\n  // unsigned number x + N.\n  //\n  // For instance,\n  //\n  //   -N + 1 (the most negative number representable using\n  //          sign-and-magnitude) is represented by 1;\n  //   0      is represented by N; and\n  //   N - 1  (the biggest number representable using\n  //          sign-and-magnitude) is represented by 2N - 1.\n  //\n  // Read http://en.wikipedia.org/wiki/Signed_number_representations\n  // for more details on signed number representations.\n  static Bits SignAndMagnitudeToBiased(const Bits &sam) {\n    if (kSignBitMask & sam) {\n      // sam represents a negative number.\n      return ~sam + 1;\n    } else {\n      // sam represents a positive number.\n      return kSignBitMask | sam;\n    }\n  }\n\n  // Given two numbers in the sign-and-magnitude representation,\n  // returns the distance between them as an unsigned number.\n  static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1,\n                                                     const Bits &sam2) 
{\n    const Bits biased1 = SignAndMagnitudeToBiased(sam1);\n    const Bits biased2 = SignAndMagnitudeToBiased(sam2);\n    return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1);\n  }\n\n  FloatingPointUnion u_;\n};\n\n// Typedefs the instances of the FloatingPoint template class that we\n// care to use.\ntypedef FloatingPoint<float> Float;\ntypedef FloatingPoint<double> Double;\n\n// In order to catch the mistake of putting tests that use different\n// test fixture classes in the same test case, we need to assign\n// unique IDs to fixture classes and compare them.  The TypeId type is\n// used to hold such IDs.  The user should treat TypeId as an opaque\n// type: the only operation allowed on TypeId values is to compare\n// them for equality using the == operator.\ntypedef const void* TypeId;\n\ntemplate <typename T>\nclass TypeIdHelper {\n public:\n  // dummy_ must not have a const type.  Otherwise an overly eager\n  // compiler (e.g. MSVC 7.1 & 8.0) may try to merge\n  // TypeIdHelper<T>::dummy_ for different Ts as an \"optimization\".\n  static bool dummy_;\n};\n\ntemplate <typename T>\nbool TypeIdHelper<T>::dummy_ = false;\n\n// GetTypeId<T>() returns the ID of type T.  Different values will be\n// returned for different types.  Calling the function twice with the\n// same type argument is guaranteed to return the same ID.\ntemplate <typename T>\nTypeId GetTypeId() {\n  // The compiler is required to allocate a different\n  // TypeIdHelper<T>::dummy_ variable for each T used to instantiate\n  // the template.  Therefore, the address of dummy_ is guaranteed to\n  // be unique.\n  return &(TypeIdHelper<T>::dummy_);\n}\n\n// Returns the type ID of ::testing::Test.  
Always call this instead\n// of GetTypeId< ::testing::Test>() to get the type ID of\n// ::testing::Test, as the latter may give the wrong result due to a\n// suspected linker bug when compiling Google Test as a Mac OS X\n// framework.\nGTEST_API_ TypeId GetTestTypeId();\n\n// Defines the abstract factory interface that creates instances\n// of a Test object.\nclass TestFactoryBase {\n public:\n  virtual ~TestFactoryBase() {}\n\n  // Creates a test instance to run. The instance is both created and destroyed\n  // within TestInfoImpl::Run()\n  virtual Test* CreateTest() = 0;\n\n protected:\n  TestFactoryBase() {}\n\n private:\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestFactoryBase);\n};\n\n// This class provides an implementation of the TestFactoryBase interface.\n// It is used in TEST and TEST_F macros.\ntemplate <class TestClass>\nclass TestFactoryImpl : public TestFactoryBase {\n public:\n  virtual Test* CreateTest() { return new TestClass; }\n};\n\n#if GTEST_OS_WINDOWS\n\n// Predicate-formatters for implementing the HRESULT checking macros\n// {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED}\n// We pass a long instead of HRESULT to avoid causing an\n// include dependency for the HRESULT type.\nGTEST_API_ AssertionResult IsHRESULTSuccess(const char* expr,\n                                            long hr);  // NOLINT\nGTEST_API_ AssertionResult IsHRESULTFailure(const char* expr,\n                                            long hr);  // NOLINT\n\n#endif  // GTEST_OS_WINDOWS\n\n// Types of SetUpTestCase() and TearDownTestCase() functions.\ntypedef void (*SetUpTestCaseFunc)();\ntypedef void (*TearDownTestCaseFunc)();\n\n// Creates a new TestInfo object and registers it with Google Test;\n// returns the created object.\n//\n// Arguments:\n//\n//   test_case_name:   name of the test case\n//   name:             name of the test\n//   type_param:       the name of the test's type parameter, or NULL if\n//                     this is not a typed or a type-parameterized test.\n//   
value_param:      text representation of the test's value parameter,\n//                     or NULL if this is not a value-parameterized test.\n//   fixture_class_id: ID of the test fixture class\n//   set_up_tc:        pointer to the function that sets up the test case\n//   tear_down_tc:     pointer to the function that tears down the test case\n//   factory:          pointer to the factory that creates a test object.\n//                     The newly created TestInfo instance will assume\n//                     ownership of the factory object.\nGTEST_API_ TestInfo* MakeAndRegisterTestInfo(\n    const char* test_case_name, const char* name,\n    const char* type_param,\n    const char* value_param,\n    TypeId fixture_class_id,\n    SetUpTestCaseFunc set_up_tc,\n    TearDownTestCaseFunc tear_down_tc,\n    TestFactoryBase* factory);\n\n// If *pstr starts with the given prefix, modifies *pstr to be right\n// past the prefix and returns true; otherwise leaves *pstr unchanged\n// and returns false.  
None of pstr, *pstr, and prefix can be NULL.\nGTEST_API_ bool SkipPrefix(const char* prefix, const char** pstr);\n\n#if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P\n\n// State of the definition of a type-parameterized test case.\nclass GTEST_API_ TypedTestCasePState {\n public:\n  TypedTestCasePState() : registered_(false) {}\n\n  // Adds the given test name to defined_test_names_ and returns true\n  // if the test case hasn't been registered; otherwise aborts the\n  // program.\n  bool AddTestName(const char* file, int line, const char* case_name,\n                   const char* test_name) {\n    if (registered_) {\n      fprintf(stderr, \"%s Test %s must be defined before \"\n              \"REGISTER_TYPED_TEST_CASE_P(%s, ...).\\n\",\n              FormatFileLocation(file, line).c_str(), test_name, case_name);\n      fflush(stderr);\n      posix::Abort();\n    }\n    defined_test_names_.insert(test_name);\n    return true;\n  }\n\n  // Verifies that registered_tests match the test names in\n  // defined_test_names_; returns registered_tests if successful, or\n  // aborts the program otherwise.\n  const char* VerifyRegisteredTestNames(\n      const char* file, int line, const char* registered_tests);\n\n private:\n  bool registered_;\n  ::std::set<const char*> defined_test_names_;\n};\n\n// Skips to the first non-space char after the first comma in 'str';\n// returns NULL if no comma is found in 'str'.\ninline const char* SkipComma(const char* str) {\n  const char* comma = strchr(str, ',');\n  if (comma == NULL) {\n    return NULL;\n  }\n  while (IsSpace(*(++comma))) {}\n  return comma;\n}\n\n// Returns the prefix of 'str' before the first comma in it; returns\n// the entire string if it contains no comma.\ninline String GetPrefixUntilComma(const char* str) {\n  const char* comma = strchr(str, ',');\n  return comma == NULL ? 
String(str) : String(str, comma - str);\n}\n\n// TypeParameterizedTest<Fixture, TestSel, Types>::Register()\n// registers a list of type-parameterized tests with Google Test.  The\n// return value is insignificant - we just need to return something\n// such that we can call this function in a namespace scope.\n//\n// Implementation note: The GTEST_TEMPLATE_ macro declares a template\n// template parameter.  It's defined in gtest-type-util.h.\ntemplate <GTEST_TEMPLATE_ Fixture, class TestSel, typename Types>\nclass TypeParameterizedTest {\n public:\n  // 'index' is the index of the test in the type list 'Types'\n  // specified in INSTANTIATE_TYPED_TEST_CASE_P(Prefix, TestCase,\n  // Types).  Valid values for 'index' are [0, N - 1] where N is the\n  // length of Types.\n  static bool Register(const char* prefix, const char* case_name,\n                       const char* test_names, int index) {\n    typedef typename Types::Head Type;\n    typedef Fixture<Type> FixtureClass;\n    typedef typename GTEST_BIND_(TestSel, Type) TestClass;\n\n    // First, registers the first type-parameterized test in the type\n    // list.\n    MakeAndRegisterTestInfo(\n        String::Format(\"%s%s%s/%d\", prefix, prefix[0] == '\\0' ? 
\"\" : \"/\",\n                       case_name, index).c_str(),\n        GetPrefixUntilComma(test_names).c_str(),\n        GetTypeName<Type>().c_str(),\n        NULL,  // No value parameter.\n        GetTypeId<FixtureClass>(),\n        TestClass::SetUpTestCase,\n        TestClass::TearDownTestCase,\n        new TestFactoryImpl<TestClass>);\n\n    // Next, recurses (at compile time) with the tail of the type list.\n    return TypeParameterizedTest<Fixture, TestSel, typename Types::Tail>\n        ::Register(prefix, case_name, test_names, index + 1);\n  }\n};\n\n// The base case for the compile time recursion.\ntemplate <GTEST_TEMPLATE_ Fixture, class TestSel>\nclass TypeParameterizedTest<Fixture, TestSel, Types0> {\n public:\n  static bool Register(const char* /*prefix*/, const char* /*case_name*/,\n                       const char* /*test_names*/, int /*index*/) {\n    return true;\n  }\n};\n\n// TypeParameterizedTestCase<Fixture, Tests, Types>::Register()\n// registers *all combinations* of 'Tests' and 'Types' with Google\n// Test.  
The return value is insignificant - we just need to return\n// something such that we can call this function in a namespace scope.\ntemplate <GTEST_TEMPLATE_ Fixture, typename Tests, typename Types>\nclass TypeParameterizedTestCase {\n public:\n  static bool Register(const char* prefix, const char* case_name,\n                       const char* test_names) {\n    typedef typename Tests::Head Head;\n\n    // First, registers the first test in 'Tests' for each type in 'Types'.\n    TypeParameterizedTest<Fixture, Head, Types>::Register(\n        prefix, case_name, test_names, 0);\n\n    // Next, recurses (at compile time) with the tail of the test list.\n    return TypeParameterizedTestCase<Fixture, typename Tests::Tail, Types>\n        ::Register(prefix, case_name, SkipComma(test_names));\n  }\n};\n\n// The base case for the compile time recursion.\ntemplate <GTEST_TEMPLATE_ Fixture, typename Types>\nclass TypeParameterizedTestCase<Fixture, Templates0, Types> {\n public:\n  static bool Register(const char* /*prefix*/, const char* /*case_name*/,\n                       const char* /*test_names*/) {\n    return true;\n  }\n};\n\n#endif  // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P\n\n// Returns the current OS stack trace as a String.\n//\n// The maximum number of stack frames to be included is specified by\n// the gtest_stack_trace_depth flag.  
The skip_count parameter\n// specifies the number of top frames to be skipped, which doesn't\n// count against the number of frames to be included.\n//\n// For example, if Foo() calls Bar(), which in turn calls\n// GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in\n// the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't.\nGTEST_API_ String GetCurrentOsStackTraceExceptTop(UnitTest* unit_test,\n                                                  int skip_count);\n\n// Helpers for suppressing warnings on unreachable code or constant\n// condition.\n\n// Always returns true.\nGTEST_API_ bool AlwaysTrue();\n\n// Always returns false.\ninline bool AlwaysFalse() { return !AlwaysTrue(); }\n\n// Helper for suppressing false warning from Clang on a const char*\n// variable declared in a conditional expression always being NULL in\n// the else branch.\nstruct GTEST_API_ ConstCharPtr {\n  ConstCharPtr(const char* str) : value(str) {}\n  operator bool() const { return true; }\n  const char* value;\n};\n\n// A simple Linear Congruential Generator for generating random\n// numbers with a uniform distribution.  Unlike rand() and srand(), it\n// doesn't use global state (and therefore can't interfere with user\n// code).  Unlike rand_r(), it's portable.  An LCG isn't very random,\n// but it's good enough for our purposes.\nclass GTEST_API_ Random {\n public:\n  static const UInt32 kMaxRange = 1u << 31;\n\n  explicit Random(UInt32 seed) : state_(seed) {}\n\n  void Reseed(UInt32 seed) { state_ = seed; }\n\n  // Generates a random number from [0, range).  
Crashes if 'range' is\n  // 0 or greater than kMaxRange.\n  UInt32 Generate(UInt32 range);\n\n private:\n  UInt32 state_;\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(Random);\n};\n\n// Defining a variable of type CompileAssertTypesEqual<T1, T2> will cause a\n// compiler error iff T1 and T2 are different types.\ntemplate <typename T1, typename T2>\nstruct CompileAssertTypesEqual;\n\ntemplate <typename T>\nstruct CompileAssertTypesEqual<T, T> {\n};\n\n// Removes the reference from a type if it is a reference type,\n// otherwise leaves it unchanged.  This is the same as\n// tr1::remove_reference, which is not widely available yet.\ntemplate <typename T>\nstruct RemoveReference { typedef T type; };  // NOLINT\ntemplate <typename T>\nstruct RemoveReference<T&> { typedef T type; };  // NOLINT\n\n// A handy wrapper around RemoveReference that works when the argument\n// T depends on template parameters.\n#define GTEST_REMOVE_REFERENCE_(T) \\\n    typename ::testing::internal::RemoveReference<T>::type\n\n// Removes const from a type if it is a const type, otherwise leaves\n// it unchanged.  This is the same as tr1::remove_const, which is not\n// widely available yet.\ntemplate <typename T>\nstruct RemoveConst { typedef T type; };  // NOLINT\ntemplate <typename T>\nstruct RemoveConst<const T> { typedef T type; };  // NOLINT\n\n// MSVC 8.0, Sun C++, and IBM XL C++ have a bug which causes the above\n// definition to fail to remove the const in 'const int[3]' and 'const\n// char[3][4]'.  
The following specialization works around the bug.\n// However, it causes trouble with GCC and thus needs to be\n// conditionally compiled.\n#if defined(_MSC_VER) || defined(__SUNPRO_CC) || defined(__IBMCPP__)\ntemplate <typename T, size_t N>\nstruct RemoveConst<const T[N]> {\n  typedef typename RemoveConst<T>::type type[N];\n};\n#endif\n\n// A handy wrapper around RemoveConst that works when the argument\n// T depends on template parameters.\n#define GTEST_REMOVE_CONST_(T) \\\n    typename ::testing::internal::RemoveConst<T>::type\n\n// Turns const U&, U&, const U, and U all into U.\n#define GTEST_REMOVE_REFERENCE_AND_CONST_(T) \\\n    GTEST_REMOVE_CONST_(GTEST_REMOVE_REFERENCE_(T))\n\n// Adds reference to a type if it is not a reference type,\n// otherwise leaves it unchanged.  This is the same as\n// tr1::add_reference, which is not widely available yet.\ntemplate <typename T>\nstruct AddReference { typedef T& type; };  // NOLINT\ntemplate <typename T>\nstruct AddReference<T&> { typedef T& type; };  // NOLINT\n\n// A handy wrapper around AddReference that works when the argument T\n// depends on template parameters.\n#define GTEST_ADD_REFERENCE_(T) \\\n    typename ::testing::internal::AddReference<T>::type\n\n// Adds a reference to const on top of T as necessary.  
For example,\n// it transforms\n//\n//   char         ==> const char&\n//   const char   ==> const char&\n//   char&        ==> const char&\n//   const char&  ==> const char&\n//\n// The argument T must depend on some template parameters.\n#define GTEST_REFERENCE_TO_CONST_(T) \\\n    GTEST_ADD_REFERENCE_(const GTEST_REMOVE_REFERENCE_(T))\n\n// ImplicitlyConvertible<From, To>::value is a compile-time bool\n// constant that's true iff type From can be implicitly converted to\n// type To.\ntemplate <typename From, typename To>\nclass ImplicitlyConvertible {\n private:\n  // We need the following helper functions only for their types.\n  // They have no implementations.\n\n  // MakeFrom() is an expression whose type is From.  We cannot simply\n  // use From(), as the type From may not have a public default\n  // constructor.\n  static From MakeFrom();\n\n  // These two functions are overloaded.  Given an expression\n  // Helper(x), the compiler will pick the first version if x can be\n  // implicitly converted to type To; otherwise it will pick the\n  // second version.\n  //\n  // The first version returns a value of size 1, and the second\n  // version returns a value of size 2.  
Therefore, by checking the\n  // size of Helper(x), which can be done at compile time, we can tell\n  // which version of Helper() is used, and hence whether x can be\n  // implicitly converted to type To.\n  static char Helper(To);\n  static char (&Helper(...))[2];  // NOLINT\n\n  // We have to put the 'public' section after the 'private' section,\n  // or MSVC refuses to compile the code.\n public:\n  // MSVC warns about implicitly converting from double to int for\n  // possible loss of data, so we need to temporarily disable the\n  // warning.\n#ifdef _MSC_VER\n# pragma warning(push)          // Saves the current warning state.\n# pragma warning(disable:4244)  // Temporarily disables warning 4244.\n\n  static const bool value =\n      sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1;\n# pragma warning(pop)           // Restores the warning state.\n#elif defined(__BORLANDC__)\n  // C++Builder cannot use member overload resolution during template\n  // instantiation.  The simplest workaround is to use its C++0x type traits\n  // functions (C++Builder 2009 and above only).\n  static const bool value = __is_convertible(From, To);\n#else\n  static const bool value =\n      sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1;\n#endif  // _MSC_VER\n};\ntemplate <typename From, typename To>\nconst bool ImplicitlyConvertible<From, To>::value;\n\n// IsAProtocolMessage<T>::value is a compile-time bool constant that's\n// true iff T is type ProtocolMessage, proto2::Message, or a subclass\n// of those.\ntemplate <typename T>\nstruct IsAProtocolMessage\n    : public bool_constant<\n  ImplicitlyConvertible<const T*, const ::ProtocolMessage*>::value ||\n  ImplicitlyConvertible<const T*, const ::proto2::Message*>::value> {\n};\n\n// When the compiler sees expression IsContainerTest<C>(0), if C is an\n// STL-style container class, the first overload of IsContainerTest\n// will be viable (since both C::iterator* and C::const_iterator* are\n// valid types and NULL can be 
implicitly converted to them).  It will\n// be picked over the second overload as 'int' is a perfect match for\n// the type of argument 0.  If C::iterator or C::const_iterator is not\n// a valid type, the first overload is not viable, and the second\n// overload will be picked.  Therefore, we can determine whether C is\n// a container class by checking the type of IsContainerTest<C>(0).\n// The value of the expression is insignificant.\n//\n// Note that we look for both C::iterator and C::const_iterator.  The\n// reason is that C++ injects the name of a class as a member of the\n// class itself (e.g. you can refer to class iterator as either\n// 'iterator' or 'iterator::iterator').  If we look for C::iterator\n// only, for example, we would mistakenly think that a class named\n// iterator is an STL container.\n//\n// Also note that the simpler approach of overloading\n// IsContainerTest(typename C::const_iterator*) and\n// IsContainerTest(...) doesn't work with Visual Age C++ and Sun C++.\ntypedef int IsContainer;\ntemplate <class C>\nIsContainer IsContainerTest(int /* dummy */,\n                            typename C::iterator* /* it */ = NULL,\n                            typename C::const_iterator* /* const_it */ = NULL) {\n  return 0;\n}\n\ntypedef char IsNotContainer;\ntemplate <class C>\nIsNotContainer IsContainerTest(long /* dummy */) { return '\\0'; }\n\n// EnableIf<condition>::type is void when 'Cond' is true, and\n// undefined when 'Cond' is false.  To use SFINAE to make a function\n// overload only apply when a particular expression is true, add\n// \"typename EnableIf<expression>::type* = 0\" as the last parameter.\ntemplate<bool> struct EnableIf;\ntemplate<> struct EnableIf<true> { typedef void type; };  // NOLINT\n\n// Utilities for native arrays.\n\n// ArrayEq() compares two k-dimensional native arrays using the\n// elements' operator==, where k can be any integer >= 0.  
When k is\n// 0, ArrayEq() degenerates into comparing a single pair of values.\n\ntemplate <typename T, typename U>\nbool ArrayEq(const T* lhs, size_t size, const U* rhs);\n\n// This generic version is used when k is 0.\ntemplate <typename T, typename U>\ninline bool ArrayEq(const T& lhs, const U& rhs) { return lhs == rhs; }\n\n// This overload is used when k >= 1.\ntemplate <typename T, typename U, size_t N>\ninline bool ArrayEq(const T(&lhs)[N], const U(&rhs)[N]) {\n  return internal::ArrayEq(lhs, N, rhs);\n}\n\n// This helper reduces code bloat.  If we instead put its logic inside\n// the previous ArrayEq() function, arrays with different sizes would\n// lead to different copies of the template code.\ntemplate <typename T, typename U>\nbool ArrayEq(const T* lhs, size_t size, const U* rhs) {\n  for (size_t i = 0; i != size; i++) {\n    if (!internal::ArrayEq(lhs[i], rhs[i]))\n      return false;\n  }\n  return true;\n}\n\n// Finds the first element in the iterator range [begin, end) that\n// equals elem.  Element may be a native array type itself.\ntemplate <typename Iter, typename Element>\nIter ArrayAwareFind(Iter begin, Iter end, const Element& elem) {\n  for (Iter it = begin; it != end; ++it) {\n    if (internal::ArrayEq(*it, elem))\n      return it;\n  }\n  return end;\n}\n\n// CopyArray() copies a k-dimensional native array using the elements'\n// operator=, where k can be any integer >= 0.  When k is 0,\n// CopyArray() degenerates into copying a single value.\n\ntemplate <typename T, typename U>\nvoid CopyArray(const T* from, size_t size, U* to);\n\n// This generic version is used when k is 0.\ntemplate <typename T, typename U>\ninline void CopyArray(const T& from, U* to) { *to = from; }\n\n// This overload is used when k >= 1.\ntemplate <typename T, typename U, size_t N>\ninline void CopyArray(const T(&from)[N], U(*to)[N]) {\n  internal::CopyArray(from, N, *to);\n}\n\n// This helper reduces code bloat.  
If we instead put its logic inside\n// the previous CopyArray() function, arrays with different sizes\n// would lead to different copies of the template code.\ntemplate <typename T, typename U>\nvoid CopyArray(const T* from, size_t size, U* to) {\n  for (size_t i = 0; i != size; i++) {\n    internal::CopyArray(from[i], to + i);\n  }\n}\n\n// The relation between a NativeArray object (see below) and the\n// native array it represents.\nenum RelationToSource {\n  kReference,  // The NativeArray references the native array.\n  kCopy        // The NativeArray makes a copy of the native array and\n               // owns the copy.\n};\n\n// Adapts a native array to a read-only STL-style container.  Instead\n// of the complete STL container concept, this adaptor only implements\n// members useful for Google Mock's container matchers.  New members\n// should be added as needed.  To simplify the implementation, we only\n// support Element being a raw type (i.e. having no top-level const or\n// reference modifier).  It's the client's responsibility to satisfy\n// this requirement.  
Element can be an array type itself (hence\n// multi-dimensional arrays are supported).\ntemplate <typename Element>\nclass NativeArray {\n public:\n  // STL-style container typedefs.\n  typedef Element value_type;\n  typedef Element* iterator;\n  typedef const Element* const_iterator;\n\n  // Constructs from a native array.\n  NativeArray(const Element* array, size_t count, RelationToSource relation) {\n    Init(array, count, relation);\n  }\n\n  // Copy constructor.\n  NativeArray(const NativeArray& rhs) {\n    Init(rhs.array_, rhs.size_, rhs.relation_to_source_);\n  }\n\n  ~NativeArray() {\n    // Ensures that the user doesn't instantiate NativeArray with a\n    // const or reference type.\n    static_cast<void>(StaticAssertTypeEqHelper<Element,\n        GTEST_REMOVE_REFERENCE_AND_CONST_(Element)>());\n    if (relation_to_source_ == kCopy)\n      delete[] array_;\n  }\n\n  // STL-style container methods.\n  size_t size() const { return size_; }\n  const_iterator begin() const { return array_; }\n  const_iterator end() const { return array_ + size_; }\n  bool operator==(const NativeArray& rhs) const {\n    return size() == rhs.size() &&\n        ArrayEq(begin(), size(), rhs.begin());\n  }\n\n private:\n  // Initializes this object; makes a copy of the input array if\n  // 'relation' is kCopy.\n  void Init(const Element* array, size_t a_size, RelationToSource relation) {\n    if (relation == kReference) {\n      array_ = array;\n    } else {\n      Element* const copy = new Element[a_size];\n      CopyArray(array, a_size, copy);\n      array_ = copy;\n    }\n    size_ = a_size;\n    relation_to_source_ = relation;\n  }\n\n  const Element* array_;\n  size_t size_;\n  RelationToSource relation_to_source_;\n\n  GTEST_DISALLOW_ASSIGN_(NativeArray);\n};\n\n}  // namespace internal\n}  // namespace testing\n\n#define GTEST_MESSAGE_AT_(file, line, message, result_type) \\\n  ::testing::internal::AssertHelper(result_type, file, line, message) \\\n    = 
::testing::Message()\n\n#define GTEST_MESSAGE_(message, result_type) \\\n  GTEST_MESSAGE_AT_(__FILE__, __LINE__, message, result_type)\n\n#define GTEST_FATAL_FAILURE_(message) \\\n  return GTEST_MESSAGE_(message, ::testing::TestPartResult::kFatalFailure)\n\n#define GTEST_NONFATAL_FAILURE_(message) \\\n  GTEST_MESSAGE_(message, ::testing::TestPartResult::kNonFatalFailure)\n\n#define GTEST_SUCCESS_(message) \\\n  GTEST_MESSAGE_(message, ::testing::TestPartResult::kSuccess)\n\n// Suppresses MSVC warning 4702 (unreachable code) for the code following\n// statement if it returns or throws (or doesn't return or throw in some\n// situations).\n#define GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) \\\n  if (::testing::internal::AlwaysTrue()) { statement; }\n\n#define GTEST_TEST_THROW_(statement, expected_exception, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (::testing::internal::ConstCharPtr gtest_msg = \"\") { \\\n    bool gtest_caught_expected = false; \\\n    try { \\\n      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n    } \\\n    catch (expected_exception const&) { \\\n      gtest_caught_expected = true; \\\n    } \\\n    catch (...) 
{ \\\n      gtest_msg.value = \\\n          \"Expected: \" #statement \" throws an exception of type \" \\\n          #expected_exception \".\\n  Actual: it throws a different type.\"; \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \\\n    } \\\n    if (!gtest_caught_expected) { \\\n      gtest_msg.value = \\\n          \"Expected: \" #statement \" throws an exception of type \" \\\n          #expected_exception \".\\n  Actual: it throws nothing.\"; \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \\\n    } \\\n  } else \\\n    GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__): \\\n      fail(gtest_msg.value)\n\n#define GTEST_TEST_NO_THROW_(statement, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (::testing::internal::AlwaysTrue()) { \\\n    try { \\\n      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n    } \\\n    catch (...) { \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__); \\\n    } \\\n  } else \\\n    GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__): \\\n      fail(\"Expected: \" #statement \" doesn't throw an exception.\\n\" \\\n           \"  Actual: it throws.\")\n\n#define GTEST_TEST_ANY_THROW_(statement, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (::testing::internal::AlwaysTrue()) { \\\n    bool gtest_caught_any = false; \\\n    try { \\\n      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n    } \\\n    catch (...) { \\\n      gtest_caught_any = true; \\\n    } \\\n    if (!gtest_caught_any) { \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__); \\\n    } \\\n  } else \\\n    GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__): \\\n      fail(\"Expected: \" #statement \" throws an exception.\\n\" \\\n           \"  Actual: it doesn't.\")\n\n\n// Implements Boolean test assertions such as EXPECT_TRUE. expression can be\n// either a boolean expression or an AssertionResult. 
text is a textual\n// representation of expression as it was passed into the EXPECT_TRUE.\n#define GTEST_TEST_BOOLEAN_(expression, text, actual, expected, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (const ::testing::AssertionResult gtest_ar_ = \\\n      ::testing::AssertionResult(expression)) \\\n    ; \\\n  else \\\n    fail(::testing::internal::GetBoolAssertionFailureMessage(\\\n        gtest_ar_, text, #actual, #expected).c_str())\n\n#define GTEST_TEST_NO_FATAL_FAILURE_(statement, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (::testing::internal::AlwaysTrue()) { \\\n    ::testing::internal::HasNewFatalFailureHelper gtest_fatal_failure_checker; \\\n    GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n    if (gtest_fatal_failure_checker.has_new_fatal_failure()) { \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__); \\\n    } \\\n  } else \\\n    GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__): \\\n      fail(\"Expected: \" #statement \" doesn't generate new fatal \" \\\n           \"failures in the current thread.\\n\" \\\n           \"  Actual: it does.\")\n\n// Expands to the name of the class that implements the given test.\n#define GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \\\n  test_case_name##_##test_name##_Test\n\n// Helper macro for defining tests.\n#define GTEST_TEST_(test_case_name, test_name, parent_class, parent_id)\\\nclass GTEST_TEST_CLASS_NAME_(test_case_name, test_name) : public parent_class {\\\n public:\\\n  GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {}\\\n private:\\\n  virtual void TestBody();\\\n  static ::testing::TestInfo* const test_info_ GTEST_ATTRIBUTE_UNUSED_;\\\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(\\\n      GTEST_TEST_CLASS_NAME_(test_case_name, test_name));\\\n};\\\n\\\n::testing::TestInfo* const GTEST_TEST_CLASS_NAME_(test_case_name, test_name)\\\n  ::test_info_ =\\\n    ::testing::internal::MakeAndRegisterTestInfo(\\\n        #test_case_name, #test_name, NULL, NULL, 
\\\n        (parent_id), \\\n        parent_class::SetUpTestCase, \\\n        parent_class::TearDownTestCase, \\\n        new ::testing::internal::TestFactoryImpl<\\\n            GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>);\\\nvoid GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody()\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file defines the public API for death tests.  It is\n// #included by gtest.h so a user doesn't need to include this\n// directly.\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_\n#define GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_\n\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file defines internal utilities needed for implementing\n// death tests.  They are subject to change without notice.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_\n\n\n#include <stdio.h>\n\nnamespace testing {\nnamespace internal {\n\nGTEST_DECLARE_string_(internal_run_death_test);\n\n// Names of the flags (needed for parsing Google Test flags).\nconst char kDeathTestStyleFlag[] = \"death_test_style\";\nconst char kDeathTestUseFork[] = \"death_test_use_fork\";\nconst char kInternalRunDeathTestFlag[] = \"internal_run_death_test\";\n\n#if GTEST_HAS_DEATH_TEST\n\n// DeathTest is a class that hides much of the complexity of the\n// GTEST_DEATH_TEST_ macro.  
It is abstract; its static Create method\n// returns a concrete class that depends on the prevailing death test\n// style, as defined by the --gtest_death_test_style and/or\n// --gtest_internal_run_death_test flags.\n\n// In describing the results of death tests, these terms are used with\n// the corresponding definitions:\n//\n// exit status:  The integer exit information in the format specified\n//               by wait(2)\n// exit code:    The integer code passed to exit(3), _exit(2), or\n//               returned from main()\nclass GTEST_API_ DeathTest {\n public:\n  // Create returns false if there was an error determining the\n  // appropriate action to take for the current death test; for example,\n  // if the gtest_death_test_style flag is set to an invalid value.\n  // The LastMessage method will return a more detailed message in that\n  // case.  Otherwise, the DeathTest pointer pointed to by the \"test\"\n  // argument is set.  If the death test should be skipped, the pointer\n  // is set to NULL; otherwise, it is set to the address of a new concrete\n  // DeathTest object that controls the execution of the current test.\n  static bool Create(const char* statement, const RE* regex,\n                     const char* file, int line, DeathTest** test);\n  DeathTest();\n  virtual ~DeathTest() { }\n\n  // A helper class that aborts a death test when it's deleted.\n  class ReturnSentinel {\n   public:\n    explicit ReturnSentinel(DeathTest* test) : test_(test) { }\n    ~ReturnSentinel() { test_->Abort(TEST_ENCOUNTERED_RETURN_STATEMENT); }\n   private:\n    DeathTest* const test_;\n    GTEST_DISALLOW_COPY_AND_ASSIGN_(ReturnSentinel);\n  } GTEST_ATTRIBUTE_UNUSED_;\n\n  // An enumeration of possible roles that may be taken when a death\n  // test is encountered.  EXECUTE means that the death test logic should\n  // be executed immediately.  
OVERSEE means that the program should prepare\n  // the appropriate environment for a child process to execute the death\n  // test, then wait for it to complete.\n  enum TestRole { OVERSEE_TEST, EXECUTE_TEST };\n\n  // An enumeration of the three reasons that a test might be aborted.\n  enum AbortReason {\n    TEST_ENCOUNTERED_RETURN_STATEMENT,\n    TEST_THREW_EXCEPTION,\n    TEST_DID_NOT_DIE\n  };\n\n  // Assumes one of the above roles.\n  virtual TestRole AssumeRole() = 0;\n\n  // Waits for the death test to finish and returns its status.\n  virtual int Wait() = 0;\n\n  // Returns true if the death test passed; that is, the test process\n  // exited during the test, its exit status matches a user-supplied\n  // predicate, and its stderr output matches a user-supplied regular\n  // expression.\n  // The user-supplied predicate may be a macro expression rather\n  // than a function pointer or functor, or else Wait and Passed could\n  // be combined.\n  virtual bool Passed(bool exit_status_ok) = 0;\n\n  // Signals that the death test did not die as expected.\n  virtual void Abort(AbortReason reason) = 0;\n\n  // Returns a human-readable outcome message regarding the outcome of\n  // the last death test.\n  static const char* LastMessage();\n\n  static void set_last_death_test_message(const String& message);\n\n private:\n  // A string containing a description of the outcome of the last death test.\n  static String last_death_test_message_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(DeathTest);\n};\n\n// Factory interface for death tests.  
May be mocked out for testing.\nclass DeathTestFactory {\n public:\n  virtual ~DeathTestFactory() { }\n  virtual bool Create(const char* statement, const RE* regex,\n                      const char* file, int line, DeathTest** test) = 0;\n};\n\n// A concrete DeathTestFactory implementation for normal use.\nclass DefaultDeathTestFactory : public DeathTestFactory {\n public:\n  virtual bool Create(const char* statement, const RE* regex,\n                      const char* file, int line, DeathTest** test);\n};\n\n// Returns true if exit_status describes a process that was terminated\n// by a signal, or exited normally with a nonzero exit code.\nGTEST_API_ bool ExitedUnsuccessfully(int exit_status);\n\n// Traps C++ exceptions escaping statement and reports them as test\n// failures. Note that trapping SEH exceptions is not implemented here.\n# if GTEST_HAS_EXCEPTIONS\n#  define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \\\n  try { \\\n    GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n  } catch (const ::std::exception& gtest_exception) { \\\n    fprintf(\\\n        stderr, \\\n        \"\\n%s: Caught std::exception-derived exception escaping the \" \\\n        \"death test statement. Exception message: %s\\n\", \\\n        ::testing::internal::FormatFileLocation(__FILE__, __LINE__).c_str(), \\\n        gtest_exception.what()); \\\n    fflush(stderr); \\\n    death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \\\n  } catch (...) 
{ \\\n    death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \\\n  }\n\n# else\n#  define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \\\n  GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement)\n\n# endif\n\n// This macro is for implementing ASSERT_DEATH*, EXPECT_DEATH*,\n// ASSERT_EXIT*, and EXPECT_EXIT*.\n# define GTEST_DEATH_TEST_(statement, predicate, regex, fail) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (::testing::internal::AlwaysTrue()) { \\\n    const ::testing::internal::RE& gtest_regex = (regex); \\\n    ::testing::internal::DeathTest* gtest_dt; \\\n    if (!::testing::internal::DeathTest::Create(#statement, &gtest_regex, \\\n        __FILE__, __LINE__, &gtest_dt)) { \\\n      goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \\\n    } \\\n    if (gtest_dt != NULL) { \\\n      ::testing::internal::scoped_ptr< ::testing::internal::DeathTest> \\\n          gtest_dt_ptr(gtest_dt); \\\n      switch (gtest_dt->AssumeRole()) { \\\n        case ::testing::internal::DeathTest::OVERSEE_TEST: \\\n          if (!gtest_dt->Passed(predicate(gtest_dt->Wait()))) { \\\n            goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \\\n          } \\\n          break; \\\n        case ::testing::internal::DeathTest::EXECUTE_TEST: { \\\n          ::testing::internal::DeathTest::ReturnSentinel \\\n              gtest_sentinel(gtest_dt); \\\n          GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, gtest_dt); \\\n          gtest_dt->Abort(::testing::internal::DeathTest::TEST_DID_NOT_DIE); \\\n          break; \\\n        } \\\n        default: \\\n          break; \\\n      } \\\n    } \\\n  } else \\\n    GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__): \\\n      fail(::testing::internal::DeathTest::LastMessage())\n// The symbol \"fail\" here expands to something into which a message\n// can be streamed.\n\n// A class representing the parsed contents of the\n// --gtest_internal_run_death_test flag, as it existed when\n// RUN_ALL_TESTS 
was called.\nclass InternalRunDeathTestFlag {\n public:\n  InternalRunDeathTestFlag(const String& a_file,\n                           int a_line,\n                           int an_index,\n                           int a_write_fd)\n      : file_(a_file), line_(a_line), index_(an_index),\n        write_fd_(a_write_fd) {}\n\n  ~InternalRunDeathTestFlag() {\n    if (write_fd_ >= 0)\n      posix::Close(write_fd_);\n  }\n\n  String file() const { return file_; }\n  int line() const { return line_; }\n  int index() const { return index_; }\n  int write_fd() const { return write_fd_; }\n\n private:\n  String file_;\n  int line_;\n  int index_;\n  int write_fd_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(InternalRunDeathTestFlag);\n};\n\n// Returns a newly created InternalRunDeathTestFlag object with fields\n// initialized from the GTEST_FLAG(internal_run_death_test) flag if\n// the flag is specified; otherwise returns NULL.\nInternalRunDeathTestFlag* ParseInternalRunDeathTestFlag();\n\n#else  // GTEST_HAS_DEATH_TEST\n\n// This macro is used for implementing macros such as\n// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where\n// death tests are not supported. Those macros must compile on such systems\n// iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on\n// systems that support death tests. This allows one to write such a macro\n// on a system that does not support death tests and be sure that it will\n// compile on a death-test supporting system.\n//\n// Parameters:\n//   statement -  A statement that a macro such as EXPECT_DEATH would test\n//                for program termination. 
This macro has to make sure this\n//                statement is compiled but not executed, to ensure that\n//                EXPECT_DEATH_IF_SUPPORTED compiles with a certain\n//                parameter iff EXPECT_DEATH compiles with it.\n//   regex     -  A regex that a macro such as EXPECT_DEATH would use to test\n//                the output of statement.  This parameter has to be\n//                compiled but not evaluated by this macro, to ensure that\n//                this macro only accepts expressions that a macro such as\n//                EXPECT_DEATH would accept.\n//   terminator - Must be an empty statement for EXPECT_DEATH_IF_SUPPORTED\n//                and a return statement for ASSERT_DEATH_IF_SUPPORTED.\n//                This ensures that ASSERT_DEATH_IF_SUPPORTED will not\n//                compile inside functions where ASSERT_DEATH doesn't\n//                compile.\n//\n//  The branch that has an always false condition is used to ensure that\n//  statement and regex are compiled (and thus syntactically correct) but\n//  never executed. The unreachable code macro protects the terminator\n//  statement from generating an 'unreachable code' warning in case\n//  statement unconditionally returns or throws. 
The Message constructor at\n//  the end allows the syntax of streaming additional messages into the\n//  macro, for compile-time compatibility with EXPECT_DEATH/ASSERT_DEATH.\n# define GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, terminator) \\\n    GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n    if (::testing::internal::AlwaysTrue()) { \\\n      GTEST_LOG_(WARNING) \\\n          << \"Death tests are not supported on this platform.\\n\" \\\n          << \"Statement '\" #statement \"' cannot be verified.\"; \\\n    } else if (::testing::internal::AlwaysFalse()) { \\\n      ::testing::internal::RE::PartialMatch(\".*\", (regex)); \\\n      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \\\n      terminator; \\\n    } else \\\n      ::testing::Message()\n\n#endif  // GTEST_HAS_DEATH_TEST\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_\n\nnamespace testing {\n\n// This flag controls the style of death tests.  Valid values are \"threadsafe\",\n// meaning that the death test child process will re-execute the test binary\n// from the start, running only a single death test, or \"fast\",\n// meaning that the child process will execute the test logic immediately\n// after forking.\nGTEST_DECLARE_string_(death_test_style);\n\n#if GTEST_HAS_DEATH_TEST\n\n// The following macros are useful for writing death tests.\n\n// Here's what happens when an ASSERT_DEATH* or EXPECT_DEATH* is\n// executed:\n//\n//   1. It generates a warning if there is more than one active\n//   thread.  This is because it's safe to fork() or clone() only\n//   when there is a single thread.\n//\n//   2. The parent process clone()s a sub-process and runs the death\n//   test in it; the sub-process exits with code 0 at the end of the\n//   death test, if it hasn't exited already.\n//\n//   3. The parent process waits for the sub-process to terminate.\n//\n//   4. 
The parent process checks the exit code and error message of\n//   the sub-process.\n//\n// Examples:\n//\n//   ASSERT_DEATH(server.SendMessage(56, \"Hello\"), \"Invalid port number\");\n//   for (int i = 0; i < 5; i++) {\n//     EXPECT_DEATH(server.ProcessRequest(i),\n//                  \"Invalid request .* in ProcessRequest()\")\n//         << \"Failed to die on request \" << i;\n//   }\n//\n//   ASSERT_EXIT(server.ExitNow(), ::testing::ExitedWithCode(0), \"Exiting\");\n//\n//   bool KilledBySIGHUP(int exit_code) {\n//     return WIFSIGNALED(exit_code) && WTERMSIG(exit_code) == SIGHUP;\n//   }\n//\n//   ASSERT_EXIT(client.HangUpServer(), KilledBySIGHUP, \"Hanging up!\");\n//\n// On the regular expressions used in death tests:\n//\n//   On POSIX-compliant systems (*nix), we use the <regex.h> library,\n//   which uses the POSIX extended regex syntax.\n//\n//   On other platforms (e.g. Windows), we only support a simple regex\n//   syntax implemented as part of Google Test.  This limited\n//   implementation should be enough most of the time when writing\n//   death tests; though it lacks many features you can find in PCRE\n//   or POSIX extended regex syntax.  For example, we don't support\n//   union (\"x|y\"), grouping (\"(xy)\"), brackets (\"[xy]\"), and\n//   repetition count (\"x{5,7}\"), among others.\n//\n//   Below is the syntax that we do support.  We chose it to be a\n//   subset of both PCRE and POSIX extended regex, so it's easy to\n//   learn wherever you come from.  
In the following: 'A' denotes a\n//   literal character, period (.), or a single \\\\ escape sequence;\n//   'x' and 'y' denote regular expressions; 'm' and 'n' are for\n//   natural numbers.\n//\n//     c     matches any literal character c\n//     \\\\d   matches any decimal digit\n//     \\\\D   matches any character that's not a decimal digit\n//     \\\\f   matches \\f\n//     \\\\n   matches \\n\n//     \\\\r   matches \\r\n//     \\\\s   matches any ASCII whitespace, including \\n\n//     \\\\S   matches any character that's not a whitespace\n//     \\\\t   matches \\t\n//     \\\\v   matches \\v\n//     \\\\w   matches any letter, _, or decimal digit\n//     \\\\W   matches any character that \\\\w doesn't match\n//     \\\\c   matches any literal character c, which must be a punctuation\n//     .     matches any single character except \\n\n//     A?    matches 0 or 1 occurrences of A\n//     A*    matches 0 or many occurrences of A\n//     A+    matches 1 or many occurrences of A\n//     ^     matches the beginning of a string (not that of each line)\n//     $     matches the end of a string (not that of each line)\n//     xy    matches x followed by y\n//\n//   If you accidentally use PCRE or POSIX extended regex features\n//   not implemented by us, you will get a run-time failure.  In that\n//   case, please try to rewrite your regular expression within the\n//   above syntax.\n//\n//   This implementation is *not* meant to be as highly tuned or robust\n//   as a compiled regex library, but should perform well enough for a\n//   death test, which already incurs significant overhead by launching\n//   a child process.\n//\n// Known caveats:\n//\n//   A \"threadsafe\" style death test obtains the path to the test\n//   program from argv[0] and re-executes it in the sub-process.  For\n//   simplicity, the current implementation doesn't search the PATH\n//   when launching the sub-process.  
This means that the user must\n//   invoke the test program via a path that contains at least one\n//   path separator (e.g. path/to/foo_test and\n//   /absolute/path/to/bar_test are fine, but foo_test is not).  This\n//   is rarely a problem as people usually don't put the test binary\n//   directory in PATH.\n//\n// TODO(wan@google.com): make thread-safe death tests search the PATH.\n\n// Asserts that a given statement causes the program to exit, with an\n// integer exit status that satisfies predicate, and emitting error output\n// that matches regex.\n# define ASSERT_EXIT(statement, predicate, regex) \\\n    GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_FATAL_FAILURE_)\n\n// Like ASSERT_EXIT, but continues on to successive tests in the\n// test case, if any:\n# define EXPECT_EXIT(statement, predicate, regex) \\\n    GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_NONFATAL_FAILURE_)\n\n// Asserts that a given statement causes the program to exit, either by\n// explicitly exiting with a nonzero exit code or being killed by a\n// signal, and emitting error output that matches regex.\n# define ASSERT_DEATH(statement, regex) \\\n    ASSERT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex)\n\n// Like ASSERT_DEATH, but continues on to successive tests in the\n// test case, if any:\n# define EXPECT_DEATH(statement, regex) \\\n    EXPECT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex)\n\n// Two predicate classes that can be used in {ASSERT,EXPECT}_EXIT*:\n\n// Tests that an exit code describes a normal exit with a given exit code.\nclass GTEST_API_ ExitedWithCode {\n public:\n  explicit ExitedWithCode(int exit_code);\n  bool operator()(int exit_status) const;\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ExitedWithCode& other);\n\n  const int exit_code_;\n};\n\n# if !GTEST_OS_WINDOWS\n// Tests that an exit code describes an exit due to termination by a\n// given signal.\nclass 
GTEST_API_ KilledBySignal {\n public:\n  explicit KilledBySignal(int signum);\n  bool operator()(int exit_status) const;\n private:\n  const int signum_;\n};\n# endif  // !GTEST_OS_WINDOWS\n\n// EXPECT_DEBUG_DEATH asserts that the given statements die in debug mode.\n// The death testing framework causes this to have interesting semantics,\n// since the side effects of the call are only visible in opt mode, and not\n// in debug mode.\n//\n// In practice, this can be used to test functions that utilize the\n// LOG(DFATAL) macro using the following style:\n//\n// int DieInDebugOr12(int* sideeffect) {\n//   if (sideeffect) {\n//     *sideeffect = 12;\n//   }\n//   LOG(DFATAL) << \"death\";\n//   return 12;\n// }\n//\n// TEST(TestCase, TestDieOr12WorksInDbgAndOpt) {\n//   int sideeffect = 0;\n//   // Only asserts in dbg.\n//   EXPECT_DEBUG_DEATH(DieInDebugOr12(&sideeffect), \"death\");\n//\n// #ifdef NDEBUG\n//   // opt-mode has sideeffect visible.\n//   EXPECT_EQ(12, sideeffect);\n// #else\n//   // dbg-mode no visible sideeffect.\n//   EXPECT_EQ(0, sideeffect);\n// #endif\n// }\n//\n// This will assert that DieInDebugOr12() crashes in debug\n// mode, usually due to a DCHECK or LOG(DFATAL), but returns the\n// appropriate fallback value (12 in this case) in opt mode. If you\n// need to test that a function has appropriate side-effects in opt\n// mode, include assertions against the side-effects.  
A general\n// pattern for this is:\n//\n// EXPECT_DEBUG_DEATH({\n//   // Side-effects here will have an effect after this statement in\n//   // opt mode, but none in debug mode.\n//   EXPECT_EQ(12, DieInDebugOr12(&sideeffect));\n// }, \"death\");\n//\n# ifdef NDEBUG\n\n#  define EXPECT_DEBUG_DEATH(statement, regex) \\\n  do { statement; } while (::testing::internal::AlwaysFalse())\n\n#  define ASSERT_DEBUG_DEATH(statement, regex) \\\n  do { statement; } while (::testing::internal::AlwaysFalse())\n\n# else\n\n#  define EXPECT_DEBUG_DEATH(statement, regex) \\\n  EXPECT_DEATH(statement, regex)\n\n#  define ASSERT_DEBUG_DEATH(statement, regex) \\\n  ASSERT_DEATH(statement, regex)\n\n# endif  // NDEBUG for EXPECT_DEBUG_DEATH\n#endif  // GTEST_HAS_DEATH_TEST\n\n// EXPECT_DEATH_IF_SUPPORTED(statement, regex) and\n// ASSERT_DEATH_IF_SUPPORTED(statement, regex) expand to real death tests if\n// death tests are supported; otherwise they just issue a warning.  This is\n// useful when you are combining death test assertions with normal test\n// assertions in one test.\n#if GTEST_HAS_DEATH_TEST\n# define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \\\n    EXPECT_DEATH(statement, regex)\n# define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \\\n    ASSERT_DEATH(statement, regex)\n#else\n# define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \\\n    GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, )\n# define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \\\n    GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, return)\n#endif\n\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_\n// Copyright 2005, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in 
binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n//\n// The Google C++ Testing Framework (Google Test)\n//\n// This header file defines the Message class.\n//\n// IMPORTANT NOTE: Due to limitation of the C++ language, we have to\n// leave some internal implementation details in this header file.\n// They are clearly marked by comments like this:\n//\n//   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n//\n// Such code is NOT meant to be used by a user directly, and is subject\n// to CHANGE WITHOUT NOTICE.  
Therefore DO NOT DEPEND ON IT in a user\n// program!\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_\n#define GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_\n\n#include <limits>\n\n\nnamespace testing {\n\n// The Message class works like an ostream repeater.\n//\n// Typical usage:\n//\n//   1. You stream a bunch of values to a Message object.\n//      It will remember the text in a stringstream.\n//   2. Then you stream the Message object to an ostream.\n//      This causes the text in the Message to be streamed\n//      to the ostream.\n//\n// For example;\n//\n//   testing::Message foo;\n//   foo << 1 << \" != \" << 2;\n//   std::cout << foo;\n//\n// will print \"1 != 2\".\n//\n// Message is not intended to be inherited from.  In particular, its\n// destructor is not virtual.\n//\n// Note that stringstream behaves differently in gcc and in MSVC.  You\n// can stream a NULL char pointer to it in the former, but not in the\n// latter (it causes an access violation if you do).  The Message\n// class hides this difference by treating a NULL char pointer as\n// \"(null)\".\nclass GTEST_API_ Message {\n private:\n  // The type of basic IO manipulators (endl, ends, and flush) for\n  // narrow streams.\n  typedef std::ostream& (*BasicNarrowIoManip)(std::ostream&);\n\n public:\n  // Constructs an empty Message.\n  // We allocate the stringstream separately because otherwise each use of\n  // ASSERT/EXPECT in a procedure adds over 200 bytes to the procedure's\n  // stack frame leading to huge stack frames in some cases; gcc does not reuse\n  // the stack space.\n  Message() : ss_(new ::std::stringstream) {\n    // By default, we want there to be enough precision when printing\n    // a double to a Message.\n    *ss_ << std::setprecision(std::numeric_limits<double>::digits10 + 2);\n  }\n\n  // Copy constructor.\n  Message(const Message& msg) : ss_(new ::std::stringstream) {  // NOLINT\n    *ss_ << msg.GetString();\n  }\n\n  // Constructs a Message from a C-string.\n  explicit 
Message(const char* str) : ss_(new ::std::stringstream) {\n    *ss_ << str;\n  }\n\n#if GTEST_OS_SYMBIAN\n  // Streams a value (either a pointer or not) to this object.\n  template <typename T>\n  inline Message& operator <<(const T& value) {\n    StreamHelper(typename internal::is_pointer<T>::type(), value);\n    return *this;\n  }\n#else\n  // Streams a non-pointer value to this object.\n  template <typename T>\n  inline Message& operator <<(const T& val) {\n    ::GTestStreamToHelper(ss_.get(), val);\n    return *this;\n  }\n\n  // Streams a pointer value to this object.\n  //\n  // This function is an overload of the previous one.  When you\n  // stream a pointer to a Message, this definition will be used as it\n  // is more specialized.  (The C++ Standard, section\n  // [temp.func.order].)  If you stream a non-pointer, then the\n  // previous definition will be used.\n  //\n  // The reason for this overload is that streaming a NULL pointer to\n  // ostream is undefined behavior.  Depending on the compiler, you\n  // may get \"0\", \"(nil)\", \"(null)\", or an access violation.  To\n  // ensure consistent result across compilers, we always treat NULL\n  // as \"(null)\".\n  template <typename T>\n  inline Message& operator <<(T* const& pointer) {  // NOLINT\n    if (pointer == NULL) {\n      *ss_ << \"(null)\";\n    } else {\n      ::GTestStreamToHelper(ss_.get(), pointer);\n    }\n    return *this;\n  }\n#endif  // GTEST_OS_SYMBIAN\n\n  // Since the basic IO manipulators are overloaded for both narrow\n  // and wide streams, we have to provide this specialized definition\n  // of operator <<, even though its body is the same as the\n  // templatized version above.  
Without this definition, streaming\n  // endl or other basic IO manipulators to Message will confuse the\n  // compiler.\n  Message& operator <<(BasicNarrowIoManip val) {\n    *ss_ << val;\n    return *this;\n  }\n\n  // Instead of 1/0, we want to see true/false for bool values.\n  Message& operator <<(bool b) {\n    return *this << (b ? \"true\" : \"false\");\n  }\n\n  // These two overloads allow streaming a wide C string to a Message\n  // using the UTF-8 encoding.\n  Message& operator <<(const wchar_t* wide_c_str) {\n    return *this << internal::String::ShowWideCString(wide_c_str);\n  }\n  Message& operator <<(wchar_t* wide_c_str) {\n    return *this << internal::String::ShowWideCString(wide_c_str);\n  }\n\n#if GTEST_HAS_STD_WSTRING\n  // Converts the given wide string to a narrow string using the UTF-8\n  // encoding, and streams the result to this Message object.\n  Message& operator <<(const ::std::wstring& wstr);\n#endif  // GTEST_HAS_STD_WSTRING\n\n#if GTEST_HAS_GLOBAL_WSTRING\n  // Converts the given wide string to a narrow string using the UTF-8\n  // encoding, and streams the result to this Message object.\n  Message& operator <<(const ::wstring& wstr);\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n  // Gets the text streamed to this object so far as a String.\n  // Each '\\0' character in the buffer is replaced with \"\\\\0\".\n  //\n  // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n  internal::String GetString() const {\n    return internal::StringStreamToString(ss_.get());\n  }\n\n private:\n\n#if GTEST_OS_SYMBIAN\n  // These are needed as the Nokia Symbian Compiler cannot decide between\n  // const T& and const T* in a function template. 
The Nokia compiler _can_\n  // decide between class template specializations for T and T*, so a\n  // tr1::type_traits-like is_pointer works, and we can overload on that.\n  template <typename T>\n  inline void StreamHelper(internal::true_type /*dummy*/, T* pointer) {\n    if (pointer == NULL) {\n      *ss_ << \"(null)\";\n    } else {\n      ::GTestStreamToHelper(ss_.get(), pointer);\n    }\n  }\n  template <typename T>\n  inline void StreamHelper(internal::false_type /*dummy*/, const T& value) {\n    ::GTestStreamToHelper(ss_.get(), value);\n  }\n#endif  // GTEST_OS_SYMBIAN\n\n  // We'll hold the text streamed to this object here.\n  const internal::scoped_ptr< ::std::stringstream> ss_;\n\n  // We declare (but don't implement) this to prevent the compiler\n  // from implementing the assignment operator.\n  void operator=(const Message&);\n};\n\n// Streams a Message to an ostream.\ninline std::ostream& operator <<(std::ostream& os, const Message& sb) {\n  return os << sb.GetString();\n}\n\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_\n// This file was GENERATED by command:\n//     pump.py gtest-param-test.h.pump\n// DO NOT EDIT BY HAND!!!\n\n// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: vladl@google.com (Vlad Losev)\n//\n// Macros and functions for implementing parameterized tests\n// in Google C++ Testing Framework (Google Test)\n//\n// This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!\n//\n#ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_\n#define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_\n\n\n// Value-parameterized tests allow you to test your code with different\n// parameters without writing multiple copies of the same test.\n//\n// Here is how you use value-parameterized tests:\n\n#if 0\n\n// To write value-parameterized tests, first you should define a fixture\n// class. It is usually derived from testing::TestWithParam<T> (see below for\n// another inheritance scheme that's sometimes useful in more complicated\n// class hierarchies), where T is the type of your parameter values.\n// TestWithParam<T> is itself derived from testing::Test. T can be any\n// copyable type. 
If it's a raw pointer, you are responsible for managing the\n// lifespan of the pointed values.\n\nclass FooTest : public ::testing::TestWithParam<const char*> {\n  // You can implement all the usual class fixture members here.\n};\n\n// Then, use the TEST_P macro to define as many parameterized tests\n// for this fixture as you want. The _P suffix is for \"parameterized\"\n// or \"pattern\", whichever you prefer to think of it as.\n\nTEST_P(FooTest, DoesBlah) {\n  // Inside a test, access the test parameter with the GetParam() method\n  // of the TestWithParam<T> class:\n  EXPECT_TRUE(foo.Blah(GetParam()));\n  ...\n}\n\nTEST_P(FooTest, HasBlahBlah) {\n  ...\n}\n\n// Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test\n// case with any set of parameters you want. Google Test defines a number\n// of functions for generating test parameters. They return what we call\n// (surprise!) parameter generators. Here is a summary of them, which\n// are all in the testing namespace:\n//\n//\n//  Range(begin, end [, step]) - Yields values {begin, begin+step,\n//                               begin+step+step, ...}. The values do not\n//                               include end. 
step defaults to 1.\n//  Values(v1, v2, ..., vN)    - Yields values {v1, v2, ..., vN}.\n//  ValuesIn(container)        - Yields values from a C-style array, an STL\n//  ValuesIn(begin,end)          container, or an iterator range [begin, end).\n//  Bool()                     - Yields sequence {false, true}.\n//  Combine(g1, g2, ..., gN)   - Yields all combinations (the Cartesian product\n//                               for the math savvy) of the values generated\n//                               by the N generators.\n//\n// For more details, see comments at the definitions of these functions below\n// in this file.\n//\n// The following statement will instantiate tests from the FooTest test case\n// each with parameter values \"meeny\", \"miny\", and \"moe\".\n\nINSTANTIATE_TEST_CASE_P(InstantiationName,\n                        FooTest,\n                        Values(\"meeny\", \"miny\", \"moe\"));\n\n// To distinguish different instances of the pattern (yes, you\n// can instantiate it more than once), the first argument to the\n// INSTANTIATE_TEST_CASE_P macro is a prefix that will be added to the\n// actual test case name. Remember to pick unique prefixes for different\n// instantiations. 
The tests from the instantiation above will have\n// these names:\n//\n//    * InstantiationName/FooTest.DoesBlah/0 for \"meeny\"\n//    * InstantiationName/FooTest.DoesBlah/1 for \"miny\"\n//    * InstantiationName/FooTest.DoesBlah/2 for \"moe\"\n//    * InstantiationName/FooTest.HasBlahBlah/0 for \"meeny\"\n//    * InstantiationName/FooTest.HasBlahBlah/1 for \"miny\"\n//    * InstantiationName/FooTest.HasBlahBlah/2 for \"moe\"\n//\n// You can use these names in --gtest_filter.\n//\n// This statement will instantiate all tests from FooTest again, each\n// with parameter values \"cat\" and \"dog\":\n\nconst char* pets[] = {\"cat\", \"dog\"};\nINSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ValuesIn(pets));\n\n// The tests from the instantiation above will have these names:\n//\n//    * AnotherInstantiationName/FooTest.DoesBlah/0 for \"cat\"\n//    * AnotherInstantiationName/FooTest.DoesBlah/1 for \"dog\"\n//    * AnotherInstantiationName/FooTest.HasBlahBlah/0 for \"cat\"\n//    * AnotherInstantiationName/FooTest.HasBlahBlah/1 for \"dog\"\n//\n// Please note that INSTANTIATE_TEST_CASE_P will instantiate all tests\n// in the given test case, whether their definitions come before or\n// AFTER the INSTANTIATE_TEST_CASE_P statement.\n//\n// Please also note that generator expressions (including parameters to the\n// generators) are evaluated in InitGoogleTest(), after main() has started.\n// This allows the user on one hand, to adjust generator parameters in order\n// to dynamically determine a set of tests to run and on the other hand,\n// give the user a chance to inspect the generated tests with Google Test\n// reflection API before RUN_ALL_TESTS() is executed.\n//\n// You can see samples/sample7_unittest.cc and samples/sample8_unittest.cc\n// for more examples.\n//\n// In the future, we plan to publish the API for defining new parameter\n// generators. 
But for now this interface remains part of the internal\n// implementation and is subject to change.\n//\n//\n// A parameterized test fixture must be derived from testing::Test and from\n// testing::WithParamInterface<T>, where T is the type of the parameter\n// values. Inheriting from TestWithParam<T> satisfies that requirement because\n// TestWithParam<T> inherits from both Test and WithParamInterface. In more\n// complicated hierarchies, however, it is occasionally useful to inherit\n// separately from Test and WithParamInterface. For example:\n\nclass BaseTest : public ::testing::Test {\n  // You can inherit all the usual members for a non-parameterized test\n  // fixture here.\n};\n\nclass DerivedTest : public BaseTest, public ::testing::WithParamInterface<int> {\n  // The usual test fixture members go here too.\n};\n\nTEST_F(BaseTest, HasFoo) {\n  // This is an ordinary non-parameterized test.\n}\n\nTEST_P(DerivedTest, DoesBlah) {\n  // GetParam works just the same here as if you inherit from TestWithParam.\n  EXPECT_TRUE(foo.Blah(GetParam()));\n}\n\n#endif  // 0\n\n\n#if !GTEST_OS_SYMBIAN\n# include <utility>\n#endif\n\n// scripts/fuse_gtest.py depends on gtest's own header being #included\n// *unconditionally*.  Therefore these #includes cannot be moved\n// inside #if GTEST_HAS_PARAM_TEST.\n// Copyright 2008 Google Inc.\n// All Rights Reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: vladl@google.com (Vlad Losev)\n\n// Type and function utilities for implementing parameterized tests.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_\n\n#include <iterator>\n#include <utility>\n#include <vector>\n\n// scripts/fuse_gtest.py depends on gtest's own header being #included\n// *unconditionally*.  
Therefore these #includes cannot be moved\n// inside #if GTEST_HAS_PARAM_TEST.\n// Copyright 2003 Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Authors: Dan Egnor (egnor@google.com)\n//\n// A \"smart\" pointer type with reference tracking.  Every pointer to a\n// particular object is kept on a circular linked list.  
When the last pointer\n// to an object is destroyed or reassigned, the object is deleted.\n//\n// Used properly, this deletes the object when the last reference goes away.\n// There are several caveats:\n// - Like all reference counting schemes, cycles lead to leaks.\n// - Each smart pointer is actually two pointers (8 bytes instead of 4).\n// - Every time a pointer is assigned, the entire list of pointers to that\n//   object is traversed.  This class is therefore NOT SUITABLE when there\n//   will often be more than two or three pointers to a particular object.\n// - References are only tracked as long as linked_ptr<> objects are copied.\n//   If a linked_ptr<> is converted to a raw pointer and back, BAD THINGS\n//   will happen (double deletion).\n//\n// A good use of this class is storing object references in STL containers.\n// You can safely put linked_ptr<> in a vector<>.\n// Other uses may not be as good.\n//\n// Note: If you use an incomplete type with linked_ptr<>, the class\n// *containing* linked_ptr<> must have a constructor and destructor (even\n// if they do nothing!).\n//\n// Bill Gibbons suggested we use something like this.\n//\n// Thread Safety:\n//   Unlike other linked_ptr implementations, in this implementation\n//   a linked_ptr object is thread-safe in the sense that:\n//     - it's safe to copy linked_ptr objects concurrently,\n//     - it's safe to copy *from* a linked_ptr and read its underlying\n//       raw pointer (e.g. 
via get()) concurrently, and\n//     - it's safe to write to two linked_ptrs that point to the same\n//       shared object concurrently.\n// TODO(wan@google.com): rename this to safe_linked_ptr to avoid\n// confusion with normal linked_ptr.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_\n\n#include <stdlib.h>\n#include <assert.h>\n\n\nnamespace testing {\nnamespace internal {\n\n// Protects copying of all linked_ptr objects.\nGTEST_API_ GTEST_DECLARE_STATIC_MUTEX_(g_linked_ptr_mutex);\n\n// This is used internally by all instances of linked_ptr<>.  It needs to be\n// a non-template class because different types of linked_ptr<> can refer to\n// the same object (linked_ptr<Superclass>(obj) vs linked_ptr<Subclass>(obj)).\n// So, it needs to be possible for different types of linked_ptr to participate\n// in the same circular linked list, so we need a single class type here.\n//\n// DO NOT USE THIS CLASS DIRECTLY YOURSELF.  Use linked_ptr<T>.\nclass linked_ptr_internal {\n public:\n  // Create a new circle that includes only this instance.\n  void join_new() {\n    next_ = this;\n  }\n\n  // Many linked_ptr operations may change p.link_ for some linked_ptr\n  // variable p in the same circle as this object.  Therefore we need\n  // to prevent two such operations from occurring concurrently.\n  //\n  // Note that different types of linked_ptr objects can coexist in a\n  // circle (e.g. linked_ptr<Base>, linked_ptr<Derived1>, and\n  // linked_ptr<Derived2>).  Therefore we must use a single mutex to\n  // protect all linked_ptr objects.  
This can create serious\n  // contention in production code, but is acceptable in a testing\n  // framework.\n\n  // Join an existing circle.\n  // L < g_linked_ptr_mutex\n  void join(linked_ptr_internal const* ptr) {\n    MutexLock lock(&g_linked_ptr_mutex);\n\n    linked_ptr_internal const* p = ptr;\n    while (p->next_ != ptr) p = p->next_;\n    p->next_ = this;\n    next_ = ptr;\n  }\n\n  // Leave whatever circle we're part of.  Returns true if we were the\n  // last member of the circle.  Once this is done, you can join() another.\n  // L < g_linked_ptr_mutex\n  bool depart() {\n    MutexLock lock(&g_linked_ptr_mutex);\n\n    if (next_ == this) return true;\n    linked_ptr_internal const* p = next_;\n    while (p->next_ != this) p = p->next_;\n    p->next_ = next_;\n    return false;\n  }\n\n private:\n  mutable linked_ptr_internal const* next_;\n};\n\ntemplate <typename T>\nclass linked_ptr {\n public:\n  typedef T element_type;\n\n  // Take over ownership of a raw pointer.  This should happen as soon as\n  // possible after the object is created.\n  explicit linked_ptr(T* ptr = NULL) { capture(ptr); }\n  ~linked_ptr() { depart(); }\n\n  // Copy an existing linked_ptr<>, adding ourselves to the list of references.\n  template <typename U> linked_ptr(linked_ptr<U> const& ptr) { copy(&ptr); }\n  linked_ptr(linked_ptr const& ptr) {  // NOLINT\n    assert(&ptr != this);\n    copy(&ptr);\n  }\n\n  // Assignment releases the old value and acquires the new.\n  template <typename U> linked_ptr& operator=(linked_ptr<U> const& ptr) {\n    depart();\n    copy(&ptr);\n    return *this;\n  }\n\n  linked_ptr& operator=(linked_ptr const& ptr) {\n    if (&ptr != this) {\n      depart();\n      copy(&ptr);\n    }\n    return *this;\n  }\n\n  // Smart pointer members.\n  void reset(T* ptr = NULL) {\n    depart();\n    capture(ptr);\n  }\n  T* get() const { return value_; }\n  T* operator->() const { return value_; }\n  T& operator*() const { return *value_; }\n\n  bool 
operator==(T* p) const { return value_ == p; }\n  bool operator!=(T* p) const { return value_ != p; }\n  template <typename U>\n  bool operator==(linked_ptr<U> const& ptr) const {\n    return value_ == ptr.get();\n  }\n  template <typename U>\n  bool operator!=(linked_ptr<U> const& ptr) const {\n    return value_ != ptr.get();\n  }\n\n private:\n  template <typename U>\n  friend class linked_ptr;\n\n  T* value_;\n  linked_ptr_internal link_;\n\n  void depart() {\n    if (link_.depart()) delete value_;\n  }\n\n  void capture(T* ptr) {\n    value_ = ptr;\n    link_.join_new();\n  }\n\n  template <typename U> void copy(linked_ptr<U> const* ptr) {\n    value_ = ptr->get();\n    if (value_)\n      link_.join(&ptr->link_);\n    else\n      link_.join_new();\n  }\n};\n\ntemplate<typename T> inline\nbool operator==(T* ptr, const linked_ptr<T>& x) {\n  return ptr == x.get();\n}\n\ntemplate<typename T> inline\nbool operator!=(T* ptr, const linked_ptr<T>& x) {\n  return ptr != x.get();\n}\n\n// A function to convert T* into linked_ptr<T>\n// Doing e.g. 
make_linked_ptr(new FooBarBaz<type>(arg)) is a shorter notation\n// for linked_ptr<FooBarBaz<type> >(new FooBarBaz<type>(arg))\ntemplate <typename T>\nlinked_ptr<T> make_linked_ptr(T* ptr) {\n  return linked_ptr<T>(ptr);\n}\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_\n// Copyright 2007, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n// Google Test - The Google C++ Testing Framework\n//\n// This file implements a universal value printer that can print a\n// value of any type T:\n//\n//   void ::testing::internal::UniversalPrinter<T>::Print(value, ostream_ptr);\n//\n// A user can teach this function how to print a class type T by\n// defining either operator<<() or PrintTo() in the namespace that\n// defines T.  More specifically, the FIRST defined function in the\n// following list will be used (assuming T is defined in namespace\n// foo):\n//\n//   1. foo::PrintTo(const T&, ostream*)\n//   2. operator<<(ostream&, const T&) defined in either foo or the\n//      global namespace.\n//\n// If none of the above is defined, it will print the debug string of\n// the value if it is a protocol buffer, or print the raw bytes in the\n// value otherwise.\n//\n// To aid debugging: when T is a reference type, the address of the\n// value is also printed; when T is a (const) char pointer, both the\n// pointer value and the NUL-terminated string it points to are\n// printed.\n//\n// We also provide some convenient wrappers:\n//\n//   // Prints a value to a string.  
For a (const or not) char\n//   // pointer, the NUL-terminated string (but not the pointer) is\n//   // printed.\n//   std::string ::testing::PrintToString(const T& value);\n//\n//   // Prints a value tersely: for a reference type, the referenced\n//   // value (but not the address) is printed; for a (const or not) char\n//   // pointer, the NUL-terminated string (but not the pointer) is\n//   // printed.\n//   void ::testing::internal::UniversalTersePrint(const T& value, ostream*);\n//\n//   // Prints value using the type inferred by the compiler.  The difference\n//   // from UniversalTersePrint() is that this function prints both the\n//   // pointer and the NUL-terminated string for a (const or not) char pointer.\n//   void ::testing::internal::UniversalPrint(const T& value, ostream*);\n//\n//   // Prints the fields of a tuple tersely to a string vector, one\n//   // element for each field. Tuple support must be enabled in\n//   // gtest-port.h.\n//   std::vector<string> UniversalTersePrintTupleFieldsToStrings(\n//       const Tuple& value);\n//\n// Known limitation:\n//\n// The print primitives print the elements of an STL-style container\n// using the compiler-inferred type of *iter where iter is a\n// const_iterator of the container.  When const_iterator is an input\n// iterator but not a forward iterator, this inferred type may not\n// match value_type, and the print output may be incorrect.  In\n// practice, this is rarely a problem as for most containers\n// const_iterator is a forward iterator.  We'll fix this if there's an\n// actual need for it.  
Note that this fix cannot rely on value_type\n// being defined as many user-defined container types don't have\n// value_type.\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_\n#define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_\n\n#include <ostream>  // NOLINT\n#include <sstream>\n#include <string>\n#include <utility>\n#include <vector>\n\nnamespace testing {\n\n// Definitions in the 'internal' and 'internal2' name spaces are\n// subject to change without notice.  DO NOT USE THEM IN USER CODE!\nnamespace internal2 {\n\n// Prints the given number of bytes in the given object to the given\n// ostream.\nGTEST_API_ void PrintBytesInObjectTo(const unsigned char* obj_bytes,\n                                     size_t count,\n                                     ::std::ostream* os);\n\n// For selecting which printer to use when a given type has neither <<\n// nor PrintTo().\nenum TypeKind {\n  kProtobuf,              // a protobuf type\n  kConvertibleToInteger,  // a type implicitly convertible to BiggestInt\n                          // (e.g. 
a named or unnamed enum type)\n  kOtherType              // anything else\n};\n\n// TypeWithoutFormatter<T, kTypeKind>::PrintValue(value, os) is called\n// by the universal printer to print a value of type T when neither\n// operator<< nor PrintTo() is defined for T, where kTypeKind is the\n// \"kind\" of T as defined by enum TypeKind.\ntemplate <typename T, TypeKind kTypeKind>\nclass TypeWithoutFormatter {\n public:\n  // This default version is called when kTypeKind is kOtherType.\n  static void PrintValue(const T& value, ::std::ostream* os) {\n    PrintBytesInObjectTo(reinterpret_cast<const unsigned char*>(&value),\n                         sizeof(value), os);\n  }\n};\n\n// We print a protobuf using its ShortDebugString() when the string\n// doesn't exceed this many characters; otherwise we print it using\n// DebugString() for better readability.\nconst size_t kProtobufOneLinerMaxLength = 50;\n\ntemplate <typename T>\nclass TypeWithoutFormatter<T, kProtobuf> {\n public:\n  static void PrintValue(const T& value, ::std::ostream* os) {\n    const ::testing::internal::string short_str = value.ShortDebugString();\n    const ::testing::internal::string pretty_str =\n        short_str.length() <= kProtobufOneLinerMaxLength ?\n        short_str : (\"\\n\" + value.DebugString());\n    *os << (\"<\" + pretty_str + \">\");\n  }\n};\n\ntemplate <typename T>\nclass TypeWithoutFormatter<T, kConvertibleToInteger> {\n public:\n  // Since T has no << operator or PrintTo() but can be implicitly\n  // converted to BiggestInt, we print it as a BiggestInt.\n  //\n  // Most likely T is an enum type (either named or unnamed), in which\n  // case printing it as an integer is the desired behavior.  
In case\n  // T is not an enum, printing it as an integer is the best we can do\n  // given that it has no user-defined printer.\n  static void PrintValue(const T& value, ::std::ostream* os) {\n    const internal::BiggestInt kBigInt = value;\n    *os << kBigInt;\n  }\n};\n\n// Prints the given value to the given ostream.  If the value is a\n// protocol message, its debug string is printed; if it's an enum or\n// of a type implicitly convertible to BiggestInt, it's printed as an\n// integer; otherwise the bytes in the value are printed.  This is\n// what UniversalPrinter<T>::Print() does when it knows nothing about\n// type T and T has neither << operator nor PrintTo().\n//\n// A user can override this behavior for a class type Foo by defining\n// a << operator in the namespace where Foo is defined.\n//\n// We put this operator in namespace 'internal2' instead of 'internal'\n// to simplify the implementation, as much code in 'internal' needs to\n// use << in STL, which would conflict with our own << were it defined\n// in 'internal'.\n//\n// Note that this operator<< takes a generic std::basic_ostream<Char,\n// CharTraits> type instead of the more restricted std::ostream.  If\n// we define it to take an std::ostream instead, we'll get an\n// \"ambiguous overloads\" compiler error when trying to print a type\n// Foo that supports streaming to std::basic_ostream<Char,\n// CharTraits>, as the compiler cannot tell whether\n// operator<<(std::ostream&, const T&) or\n// operator<<(std::basic_stream<Char, CharTraits>, const Foo&) is more\n// specific.\ntemplate <typename Char, typename CharTraits, typename T>\n::std::basic_ostream<Char, CharTraits>& operator<<(\n    ::std::basic_ostream<Char, CharTraits>& os, const T& x) {\n  TypeWithoutFormatter<T,\n      (internal::IsAProtocolMessage<T>::value ? 
kProtobuf :\n       internal::ImplicitlyConvertible<const T&, internal::BiggestInt>::value ?\n       kConvertibleToInteger : kOtherType)>::PrintValue(x, &os);\n  return os;\n}\n\n}  // namespace internal2\n}  // namespace testing\n\n// This namespace MUST NOT BE NESTED IN ::testing, or the name look-up\n// magic needed for implementing UniversalPrinter won't work.\nnamespace testing_internal {\n\n// Used to print a value that is not an STL-style container when the\n// user doesn't define PrintTo() for it.\ntemplate <typename T>\nvoid DefaultPrintNonContainerTo(const T& value, ::std::ostream* os) {\n  // With the following statement, during unqualified name lookup,\n  // testing::internal2::operator<< appears as if it was declared in\n  // the nearest enclosing namespace that contains both\n  // ::testing_internal and ::testing::internal2, i.e. the global\n  // namespace.  For more details, refer to the C++ Standard section\n  // 7.3.4-1 [namespace.udir].  This allows us to fall back onto\n  // testing::internal2::operator<< in case T doesn't come with a <<\n  // operator.\n  //\n  // We cannot write 'using ::testing::internal2::operator<<;', which\n  // gcc 3.3 fails to compile due to a compiler bug.\n  using namespace ::testing::internal2;  // NOLINT\n\n  // Assuming T is defined in namespace foo, in the next statement,\n  // the compiler will consider all of:\n  //\n  //   1. foo::operator<< (thanks to Koenig look-up),\n  //   2. ::operator<< (as the current namespace is enclosed in ::),\n  //   3. testing::internal2::operator<< (thanks to the using statement above).\n  //\n  // The operator<< whose type matches T best will be picked.\n  //\n  // We deliberately allow #2 to be a candidate, as sometimes it's\n  // impossible to define #1 (e.g. 
when foo is ::std, defining\n  // anything in it is undefined behavior unless you are a compiler\n  // vendor.).\n  *os << value;\n}\n\n}  // namespace testing_internal\n\nnamespace testing {\nnamespace internal {\n\n// UniversalPrinter<T>::Print(value, ostream_ptr) prints the given\n// value to the given ostream.  The caller must ensure that\n// 'ostream_ptr' is not NULL, or the behavior is undefined.\n//\n// We define UniversalPrinter as a class template (as opposed to a\n// function template), as we need to partially specialize it for\n// reference types, which cannot be done with function templates.\ntemplate <typename T>\nclass UniversalPrinter;\n\ntemplate <typename T>\nvoid UniversalPrint(const T& value, ::std::ostream* os);\n\n// Used to print an STL-style container when the user doesn't define\n// a PrintTo() for it.\ntemplate <typename C>\nvoid DefaultPrintTo(IsContainer /* dummy */,\n                    false_type /* is not a pointer */,\n                    const C& container, ::std::ostream* os) {\n  const size_t kMaxCount = 32;  // The maximum number of elements to print.\n  *os << '{';\n  size_t count = 0;\n  for (typename C::const_iterator it = container.begin();\n       it != container.end(); ++it, ++count) {\n    if (count > 0) {\n      *os << ',';\n      if (count == kMaxCount) {  // Enough has been printed.\n        *os << \" ...\";\n        break;\n      }\n    }\n    *os << ' ';\n    // We cannot call PrintTo(*it, os) here as PrintTo() doesn't\n    // handle *it being a native array.\n    internal::UniversalPrint(*it, os);\n  }\n\n  if (count > 0) {\n    *os << ' ';\n  }\n  *os << '}';\n}\n\n// Used to print a pointer that is neither a char pointer nor a member\n// pointer, when the user doesn't define PrintTo() for it.  (A member\n// variable pointer or member function pointer doesn't really point to\n// a location in the address space.  Their representation is\n// implementation-defined.  
Therefore they will be printed as raw\n// bytes.)\ntemplate <typename T>\nvoid DefaultPrintTo(IsNotContainer /* dummy */,\n                    true_type /* is a pointer */,\n                    T* p, ::std::ostream* os) {\n  if (p == NULL) {\n    *os << \"NULL\";\n  } else {\n    // C++ doesn't allow casting from a function pointer to any object\n    // pointer.\n    //\n    // IsTrue() silences warnings: \"Condition is always true\",\n    // \"unreachable code\".\n    if (IsTrue(ImplicitlyConvertible<T*, const void*>::value)) {\n      // T is not a function type.  We just call << to print p,\n      // relying on ADL to pick up user-defined << for their pointer\n      // types, if any.\n      *os << p;\n    } else {\n      // T is a function type, so '*os << p' doesn't do what we want\n      // (it just prints p as bool).  We want to print p as a const\n      // void*.  However, we cannot cast it to const void* directly,\n      // even using reinterpret_cast, as earlier versions of gcc\n      // (e.g. 3.4.5) cannot compile the cast when p is a function\n      // pointer.  Casting to UInt64 first solves the problem.\n      *os << reinterpret_cast<const void*>(\n          reinterpret_cast<internal::UInt64>(p));\n    }\n  }\n}\n\n// Used to print a non-container, non-pointer value when the user\n// doesn't define PrintTo() for it.\ntemplate <typename T>\nvoid DefaultPrintTo(IsNotContainer /* dummy */,\n                    false_type /* is not a pointer */,\n                    const T& value, ::std::ostream* os) {\n  ::testing_internal::DefaultPrintNonContainerTo(value, os);\n}\n\n// Prints the given value using the << operator if it has one;\n// otherwise prints the bytes in it.  This is what\n// UniversalPrinter<T>::Print() does when PrintTo() is not specialized\n// or overloaded for type T.\n//\n// A user can override this behavior for a class type Foo by defining\n// an overload of PrintTo() in the namespace where Foo is defined.  
We\n// give the user this option as sometimes defining a << operator for\n// Foo is not desirable (e.g. the coding style may prevent doing it,\n// or there is already a << operator but it doesn't do what the user\n// wants).\ntemplate <typename T>\nvoid PrintTo(const T& value, ::std::ostream* os) {\n  // DefaultPrintTo() is overloaded.  The types of its first two\n  // arguments determine which version will be picked.  If T is an\n  // STL-style container, the version for container will be called; if\n  // T is a pointer, the pointer version will be called; otherwise the\n  // generic version will be called.\n  //\n  // Note that we check for container types here, before we check\n  // for protocol message types in our operator<<.  The rationale is:\n  //\n  // For protocol messages, we want to give people a chance to\n  // override Google Mock's format by defining a PrintTo() or\n  // operator<<.  For STL containers, other formats can be\n  // incompatible with Google Mock's format for the container\n  // elements; therefore we check for container types here to ensure\n  // that our format is used.\n  //\n  // The second argument of DefaultPrintTo() is needed to bypass a bug\n  // in Symbian's C++ compiler that prevents it from picking the right\n  // overload between:\n  //\n  //   PrintTo(const T& x, ...);\n  //   PrintTo(T* x, ...);\n  DefaultPrintTo(IsContainerTest<T>(0), is_pointer<T>(), value, os);\n}\n\n// The following list of PrintTo() overloads tells\n// UniversalPrinter<T>::Print() how to print standard types (built-in\n// types, strings, plain arrays, and pointers).\n\n// Overloads for various char types.\nGTEST_API_ void PrintTo(unsigned char c, ::std::ostream* os);\nGTEST_API_ void PrintTo(signed char c, ::std::ostream* os);\ninline void PrintTo(char c, ::std::ostream* os) {\n  // When printing a plain char, we always treat it as unsigned.  
This\n  // way, the output won't be affected by whether the compiler thinks\n  // char is signed or not.\n  PrintTo(static_cast<unsigned char>(c), os);\n}\n\n// Overloads for other simple built-in types.\ninline void PrintTo(bool x, ::std::ostream* os) {\n  *os << (x ? \"true\" : \"false\");\n}\n\n// Overload for wchar_t type.\n// Prints a wchar_t as a symbol if it is printable or as its internal\n// code otherwise and also as its decimal code (except for L'\\0').\n// The L'\\0' char is printed as \"L'\\\\0'\". The decimal code is printed\n// as signed integer when wchar_t is implemented by the compiler\n// as a signed type and is printed as an unsigned integer when wchar_t\n// is implemented as an unsigned type.\nGTEST_API_ void PrintTo(wchar_t wc, ::std::ostream* os);\n\n// Overloads for C strings.\nGTEST_API_ void PrintTo(const char* s, ::std::ostream* os);\ninline void PrintTo(char* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const char*>(s), os);\n}\n\n// signed/unsigned char is often used for representing binary data, so\n// we print pointers to it as void* to be safe.\ninline void PrintTo(const signed char* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const void*>(s), os);\n}\ninline void PrintTo(signed char* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const void*>(s), os);\n}\ninline void PrintTo(const unsigned char* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const void*>(s), os);\n}\ninline void PrintTo(unsigned char* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const void*>(s), os);\n}\n\n// MSVC can be configured to define wchar_t as a typedef of unsigned\n// short.  It defines _NATIVE_WCHAR_T_DEFINED when wchar_t is a native\n// type.  
When wchar_t is a typedef, defining an overload for const\n// wchar_t* would cause unsigned short* be printed as a wide string,\n// possibly causing invalid memory accesses.\n#if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED)\n// Overloads for wide C strings\nGTEST_API_ void PrintTo(const wchar_t* s, ::std::ostream* os);\ninline void PrintTo(wchar_t* s, ::std::ostream* os) {\n  PrintTo(ImplicitCast_<const wchar_t*>(s), os);\n}\n#endif\n\n// Overload for C arrays.  Multi-dimensional arrays are printed\n// properly.\n\n// Prints the given number of elements in an array, without printing\n// the curly braces.\ntemplate <typename T>\nvoid PrintRawArrayTo(const T a[], size_t count, ::std::ostream* os) {\n  UniversalPrint(a[0], os);\n  for (size_t i = 1; i != count; i++) {\n    *os << \", \";\n    UniversalPrint(a[i], os);\n  }\n}\n\n// Overloads for ::string and ::std::string.\n#if GTEST_HAS_GLOBAL_STRING\nGTEST_API_ void PrintStringTo(const ::string&s, ::std::ostream* os);\ninline void PrintTo(const ::string& s, ::std::ostream* os) {\n  PrintStringTo(s, os);\n}\n#endif  // GTEST_HAS_GLOBAL_STRING\n\nGTEST_API_ void PrintStringTo(const ::std::string&s, ::std::ostream* os);\ninline void PrintTo(const ::std::string& s, ::std::ostream* os) {\n  PrintStringTo(s, os);\n}\n\n// Overloads for ::wstring and ::std::wstring.\n#if GTEST_HAS_GLOBAL_WSTRING\nGTEST_API_ void PrintWideStringTo(const ::wstring&s, ::std::ostream* os);\ninline void PrintTo(const ::wstring& s, ::std::ostream* os) {\n  PrintWideStringTo(s, os);\n}\n#endif  // GTEST_HAS_GLOBAL_WSTRING\n\n#if GTEST_HAS_STD_WSTRING\nGTEST_API_ void PrintWideStringTo(const ::std::wstring&s, ::std::ostream* os);\ninline void PrintTo(const ::std::wstring& s, ::std::ostream* os) {\n  PrintWideStringTo(s, os);\n}\n#endif  // GTEST_HAS_STD_WSTRING\n\n#if GTEST_HAS_TR1_TUPLE\n// Overload for ::std::tr1::tuple.  Needed for printing function arguments,\n// which are packed as tuples.\n\n// Helper function for printing a tuple. 
 T must be instantiated with\n// a tuple type.\ntemplate <typename T>\nvoid PrintTupleTo(const T& t, ::std::ostream* os);\n\n// Overloaded PrintTo() for tuples of various arities.  We support\n// tuples of up-to 10 fields.  The following implementation works\n// regardless of whether tr1::tuple is implemented using the\n// non-standard variadic template feature or not.\n\ninline void PrintTo(const ::std::tr1::tuple<>& t, ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1>\nvoid PrintTo(const ::std::tr1::tuple<T1>& t, ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2>& t, ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3>& t, ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4>& t, ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4, T5>& t,\n             ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n          typename T6>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4, T5, T6>& t,\n             ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n          typename T6, typename T7>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7>& t,\n             ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n          typename T6, typename T7, typename T8>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8>& t,\n             ::std::ostream* os) {\n  PrintTupleTo(t, 
os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n          typename T6, typename T7, typename T8, typename T9>\nvoid PrintTo(const ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8, T9>& t,\n             ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n          typename T6, typename T7, typename T8, typename T9, typename T10>\nvoid PrintTo(\n    const ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10>& t,\n    ::std::ostream* os) {\n  PrintTupleTo(t, os);\n}\n#endif  // GTEST_HAS_TR1_TUPLE\n\n// Overload for std::pair.\ntemplate <typename T1, typename T2>\nvoid PrintTo(const ::std::pair<T1, T2>& value, ::std::ostream* os) {\n  *os << '(';\n  // We cannot use UniversalPrint(value.first, os) here, as T1 may be\n  // a reference type.  The same for printing value.second.\n  UniversalPrinter<T1>::Print(value.first, os);\n  *os << \", \";\n  UniversalPrinter<T2>::Print(value.second, os);\n  *os << ')';\n}\n\n// Implements printing a non-reference type T by letting the compiler\n// pick the right overload of PrintTo() for T.\ntemplate <typename T>\nclass UniversalPrinter {\n public:\n  // MSVC warns about adding const to a function type, so we want to\n  // disable the warning.\n#ifdef _MSC_VER\n# pragma warning(push)          // Saves the current warning state.\n# pragma warning(disable:4180)  // Temporarily disables warning 4180.\n#endif  // _MSC_VER\n\n  // Note: we deliberately don't call this PrintTo(), as that name\n  // conflicts with ::testing::internal::PrintTo in the body of the\n  // function.\n  static void Print(const T& value, ::std::ostream* os) {\n    // By default, ::testing::internal::PrintTo() is used for printing\n    // the value.\n    //\n    // Thanks to Koenig look-up, if T is a class and has its own\n    // PrintTo() function defined in its namespace, that function will\n    // be visible here.  
Since it is more specific than the generic ones\n    // in ::testing::internal, it will be picked by the compiler in the\n    // following statement - exactly what we want.\n    PrintTo(value, os);\n  }\n\n#ifdef _MSC_VER\n# pragma warning(pop)           // Restores the warning state.\n#endif  // _MSC_VER\n};\n\n// UniversalPrintArray(begin, len, os) prints an array of 'len'\n// elements, starting at address 'begin'.\ntemplate <typename T>\nvoid UniversalPrintArray(const T* begin, size_t len, ::std::ostream* os) {\n  if (len == 0) {\n    *os << \"{}\";\n  } else {\n    *os << \"{ \";\n    const size_t kThreshold = 18;\n    const size_t kChunkSize = 8;\n    // If the array has more than kThreshold elements, we'll have to\n    // omit some details by printing only the first and the last\n    // kChunkSize elements.\n    // TODO(wan@google.com): let the user control the threshold using a flag.\n    if (len <= kThreshold) {\n      PrintRawArrayTo(begin, len, os);\n    } else {\n      PrintRawArrayTo(begin, kChunkSize, os);\n      *os << \", ..., \";\n      PrintRawArrayTo(begin + len - kChunkSize, kChunkSize, os);\n    }\n    *os << \" }\";\n  }\n}\n// This overload prints a (const) char array compactly.\nGTEST_API_ void UniversalPrintArray(const char* begin,\n                                    size_t len,\n                                    ::std::ostream* os);\n\n// Implements printing an array type T[N].\ntemplate <typename T, size_t N>\nclass UniversalPrinter<T[N]> {\n public:\n  // Prints the given array, omitting some elements when there are too\n  // many.\n  static void Print(const T (&a)[N], ::std::ostream* os) {\n    UniversalPrintArray(a, N, os);\n  }\n};\n\n// Implements printing a reference type T&.\ntemplate <typename T>\nclass UniversalPrinter<T&> {\n public:\n  // MSVC warns about adding const to a function type, so we want to\n  // disable the warning.\n#ifdef _MSC_VER\n# pragma warning(push)          // Saves the current warning state.\n# pragma 
warning(disable:4180)  // Temporarily disables warning 4180.\n#endif  // _MSC_VER\n\n  static void Print(const T& value, ::std::ostream* os) {\n    // Prints the address of the value.  We use reinterpret_cast here\n    // as static_cast doesn't compile when T is a function type.\n    *os << \"@\" << reinterpret_cast<const void*>(&value) << \" \";\n\n    // Then prints the value itself.\n    UniversalPrint(value, os);\n  }\n\n#ifdef _MSC_VER\n# pragma warning(pop)           // Restores the warning state.\n#endif  // _MSC_VER\n};\n\n// Prints a value tersely: for a reference type, the referenced value\n// (but not the address) is printed; for a (const) char pointer, the\n// NUL-terminated string (but not the pointer) is printed.\ntemplate <typename T>\nvoid UniversalTersePrint(const T& value, ::std::ostream* os) {\n  UniversalPrint(value, os);\n}\ninline void UniversalTersePrint(const char* str, ::std::ostream* os) {\n  if (str == NULL) {\n    *os << \"NULL\";\n  } else {\n    UniversalPrint(string(str), os);\n  }\n}\ninline void UniversalTersePrint(char* str, ::std::ostream* os) {\n  UniversalTersePrint(static_cast<const char*>(str), os);\n}\n\n// Prints a value using the type inferred by the compiler.  The\n// difference between this and UniversalTersePrint() is that for a\n// (const) char pointer, this prints both the pointer and the\n// NUL-terminated string.\ntemplate <typename T>\nvoid UniversalPrint(const T& value, ::std::ostream* os) {\n  UniversalPrinter<T>::Print(value, os);\n}\n\n#if GTEST_HAS_TR1_TUPLE\ntypedef ::std::vector<string> Strings;\n\n// This helper template allows PrintTo() for tuples and\n// UniversalTersePrintTupleFieldsToStrings() to be defined by\n// induction on the number of tuple fields.  
The idea is that\n// TuplePrefixPrinter<N>::PrintPrefixTo(t, os) prints the first N\n// fields in tuple t, and can be defined in terms of\n// TuplePrefixPrinter<N - 1>.\n\n// The inductive case.\ntemplate <size_t N>\nstruct TuplePrefixPrinter {\n  // Prints the first N fields of a tuple.\n  template <typename Tuple>\n  static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) {\n    TuplePrefixPrinter<N - 1>::PrintPrefixTo(t, os);\n    *os << \", \";\n    UniversalPrinter<typename ::std::tr1::tuple_element<N - 1, Tuple>::type>\n        ::Print(::std::tr1::get<N - 1>(t), os);\n  }\n\n  // Tersely prints the first N fields of a tuple to a string vector,\n  // one element for each field.\n  template <typename Tuple>\n  static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) {\n    TuplePrefixPrinter<N - 1>::TersePrintPrefixToStrings(t, strings);\n    ::std::stringstream ss;\n    UniversalTersePrint(::std::tr1::get<N - 1>(t), &ss);\n    strings->push_back(ss.str());\n  }\n};\n\n// Base cases.\ntemplate <>\nstruct TuplePrefixPrinter<0> {\n  template <typename Tuple>\n  static void PrintPrefixTo(const Tuple&, ::std::ostream*) {}\n\n  template <typename Tuple>\n  static void TersePrintPrefixToStrings(const Tuple&, Strings*) {}\n};\n// We have to specialize the entire TuplePrefixPrinter<> class\n// template here, even though the definition of\n// TersePrintPrefixToStrings() is the same as the generic version, as\n// Embarcadero (formerly CodeGear, formerly Borland) C++ doesn't\n// support specializing a method template of a class template.\ntemplate <>\nstruct TuplePrefixPrinter<1> {\n  template <typename Tuple>\n  static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) {\n    UniversalPrinter<typename ::std::tr1::tuple_element<0, Tuple>::type>::\n        Print(::std::tr1::get<0>(t), os);\n  }\n\n  template <typename Tuple>\n  static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) {\n    ::std::stringstream ss;\n    
UniversalTersePrint(::std::tr1::get<0>(t), &ss);\n    strings->push_back(ss.str());\n  }\n};\n\n// Helper function for printing a tuple.  T must be instantiated with\n// a tuple type.\ntemplate <typename T>\nvoid PrintTupleTo(const T& t, ::std::ostream* os) {\n  *os << \"(\";\n  TuplePrefixPrinter< ::std::tr1::tuple_size<T>::value>::\n      PrintPrefixTo(t, os);\n  *os << \")\";\n}\n\n// Prints the fields of a tuple tersely to a string vector, one\n// element for each field.  See the comment before\n// UniversalTersePrint() for how we define \"tersely\".\ntemplate <typename Tuple>\nStrings UniversalTersePrintTupleFieldsToStrings(const Tuple& value) {\n  Strings result;\n  TuplePrefixPrinter< ::std::tr1::tuple_size<Tuple>::value>::\n      TersePrintPrefixToStrings(value, &result);\n  return result;\n}\n#endif  // GTEST_HAS_TR1_TUPLE\n\n}  // namespace internal\n\ntemplate <typename T>\n::std::string PrintToString(const T& value) {\n  ::std::stringstream ss;\n  internal::UniversalTersePrint(value, &ss);\n  return ss.str();\n}\n\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_\n\n#if GTEST_HAS_PARAM_TEST\n\nnamespace testing {\nnamespace internal {\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Outputs a message explaining invalid registration of different\n// fixture class for the same test case. 
This may happen when\n// TEST_P macro is used to define two tests with the same name\n// but in different namespaces.\nGTEST_API_ void ReportInvalidTestCaseType(const char* test_case_name,\n                                          const char* file, int line);\n\ntemplate <typename> class ParamGeneratorInterface;\ntemplate <typename> class ParamGenerator;\n\n// Interface for iterating over elements provided by an implementation\n// of ParamGeneratorInterface<T>.\ntemplate <typename T>\nclass ParamIteratorInterface {\n public:\n  virtual ~ParamIteratorInterface() {}\n  // A pointer to the base generator instance.\n  // Used only for the purposes of iterator comparison\n  // to make sure that two iterators belong to the same generator.\n  virtual const ParamGeneratorInterface<T>* BaseGenerator() const = 0;\n  // Advances iterator to point to the next element\n  // provided by the generator. The caller is responsible\n  // for not calling Advance() on an iterator equal to\n  // BaseGenerator()->End().\n  virtual void Advance() = 0;\n  // Clones the iterator object. Used for implementing copy semantics\n  // of ParamIterator<T>.\n  virtual ParamIteratorInterface* Clone() const = 0;\n  // Dereferences the current iterator and provides (read-only) access\n  // to the pointed value. It is the caller's responsibility not to call\n  // Current() on an iterator equal to BaseGenerator()->End().\n  // Used for implementing ParamGenerator<T>::operator*().\n  virtual const T* Current() const = 0;\n  // Determines whether the given iterator and other point to the same\n  // element in the sequence generated by the generator.\n  // Used for implementing ParamGenerator<T>::operator==().\n  virtual bool Equals(const ParamIteratorInterface& other) const = 0;\n};\n\n// Class iterating over elements provided by an implementation of\n// ParamGeneratorInterface<T>. 
It wraps ParamIteratorInterface<T>\n// and implements the const forward iterator concept.\ntemplate <typename T>\nclass ParamIterator {\n public:\n  typedef T value_type;\n  typedef const T& reference;\n  typedef ptrdiff_t difference_type;\n\n  // ParamIterator assumes ownership of the impl_ pointer.\n  ParamIterator(const ParamIterator& other) : impl_(other.impl_->Clone()) {}\n  ParamIterator& operator=(const ParamIterator& other) {\n    if (this != &other)\n      impl_.reset(other.impl_->Clone());\n    return *this;\n  }\n\n  const T& operator*() const { return *impl_->Current(); }\n  const T* operator->() const { return impl_->Current(); }\n  // Prefix version of operator++.\n  ParamIterator& operator++() {\n    impl_->Advance();\n    return *this;\n  }\n  // Postfix version of operator++.\n  ParamIterator operator++(int /*unused*/) {\n    ParamIteratorInterface<T>* clone = impl_->Clone();\n    impl_->Advance();\n    return ParamIterator(clone);\n  }\n  bool operator==(const ParamIterator& other) const {\n    return impl_.get() == other.impl_.get() || impl_->Equals(*other.impl_);\n  }\n  bool operator!=(const ParamIterator& other) const {\n    return !(*this == other);\n  }\n\n private:\n  friend class ParamGenerator<T>;\n  explicit ParamIterator(ParamIteratorInterface<T>* impl) : impl_(impl) {}\n  scoped_ptr<ParamIteratorInterface<T> > impl_;\n};\n\n// ParamGeneratorInterface<T> is the binary interface to access generators\n// defined in other translation units.\ntemplate <typename T>\nclass ParamGeneratorInterface {\n public:\n  typedef T ParamType;\n\n  virtual ~ParamGeneratorInterface() {}\n\n  // Generator interface definition\n  virtual ParamIteratorInterface<T>* Begin() const = 0;\n  virtual ParamIteratorInterface<T>* End() const = 0;\n};\n\n// Wraps ParamGeneratorInterface<T> and provides general generator syntax\n// compatible with the STL Container concept.\n// This class implements copy initialization semantics and the contained\n// 
ParamGeneratorInterface<T> instance is shared among all copies\n// of the original object. This is possible because that instance is immutable.\ntemplate<typename T>\nclass ParamGenerator {\n public:\n  typedef ParamIterator<T> iterator;\n\n  explicit ParamGenerator(ParamGeneratorInterface<T>* impl) : impl_(impl) {}\n  ParamGenerator(const ParamGenerator& other) : impl_(other.impl_) {}\n\n  ParamGenerator& operator=(const ParamGenerator& other) {\n    impl_ = other.impl_;\n    return *this;\n  }\n\n  iterator begin() const { return iterator(impl_->Begin()); }\n  iterator end() const { return iterator(impl_->End()); }\n\n private:\n  linked_ptr<const ParamGeneratorInterface<T> > impl_;\n};\n\n// Generates values from a range of two comparable values. Can be used to\n// generate sequences of user-defined types that implement operator+() and\n// operator<().\n// This class is used in the Range() function.\ntemplate <typename T, typename IncrementT>\nclass RangeGenerator : public ParamGeneratorInterface<T> {\n public:\n  RangeGenerator(T begin, T end, IncrementT step)\n      : begin_(begin), end_(end),\n        step_(step), end_index_(CalculateEndIndex(begin, end, step)) {}\n  virtual ~RangeGenerator() {}\n\n  virtual ParamIteratorInterface<T>* Begin() const {\n    return new Iterator(this, begin_, 0, step_);\n  }\n  virtual ParamIteratorInterface<T>* End() const {\n    return new Iterator(this, end_, end_index_, step_);\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<T> {\n   public:\n    Iterator(const ParamGeneratorInterface<T>* base, T value, int index,\n             IncrementT step)\n        : base_(base), value_(value), index_(index), step_(step) {}\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<T>* BaseGenerator() const {\n      return base_;\n    }\n    virtual void Advance() {\n      value_ = value_ + step_;\n      index_++;\n    }\n    virtual ParamIteratorInterface<T>* Clone() const {\n      return new 
Iterator(*this);\n    }\n    virtual const T* Current() const { return &value_; }\n    virtual bool Equals(const ParamIteratorInterface<T>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const int other_index =\n          CheckedDowncastToActualType<const Iterator>(&other)->index_;\n      return index_ == other_index;\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : ParamIteratorInterface<T>(),\n          base_(other.base_), value_(other.value_), index_(other.index_),\n          step_(other.step_) {}\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<T>* const base_;\n    T value_;\n    int index_;\n    const IncrementT step_;\n  };  // class RangeGenerator::Iterator\n\n  static int CalculateEndIndex(const T& begin,\n                               const T& end,\n                               const IncrementT& step) {\n    int end_index = 0;\n    for (T i = begin; i < end; i = i + step)\n      end_index++;\n    return end_index;\n  }\n\n  // No implementation - assignment is unsupported.\n  void operator=(const RangeGenerator& other);\n\n  const T begin_;\n  const T end_;\n  const IncrementT step_;\n  // The index for the end() iterator. All the elements in the generated\n  // sequence are indexed (0-based) to aid iterator comparison.\n  const int end_index_;\n};  // class RangeGenerator\n\n\n// Generates values from a pair of STL-style iterators. Used in the\n// ValuesIn() function. 
The elements are copied from the source range\n// since the source can be located on the stack, and the generator\n// is likely to persist beyond that stack frame.\ntemplate <typename T>\nclass ValuesInIteratorRangeGenerator : public ParamGeneratorInterface<T> {\n public:\n  template <typename ForwardIterator>\n  ValuesInIteratorRangeGenerator(ForwardIterator begin, ForwardIterator end)\n      : container_(begin, end) {}\n  virtual ~ValuesInIteratorRangeGenerator() {}\n\n  virtual ParamIteratorInterface<T>* Begin() const {\n    return new Iterator(this, container_.begin());\n  }\n  virtual ParamIteratorInterface<T>* End() const {\n    return new Iterator(this, container_.end());\n  }\n\n private:\n  typedef typename ::std::vector<T> ContainerType;\n\n  class Iterator : public ParamIteratorInterface<T> {\n   public:\n    Iterator(const ParamGeneratorInterface<T>* base,\n             typename ContainerType::const_iterator iterator)\n        : base_(base), iterator_(iterator) {}\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<T>* BaseGenerator() const {\n      return base_;\n    }\n    virtual void Advance() {\n      ++iterator_;\n      value_.reset();\n    }\n    virtual ParamIteratorInterface<T>* Clone() const {\n      return new Iterator(*this);\n    }\n    // We need to use a cached value referenced by iterator_ because *iterator_\n    // can return a temporary object (and of a type other than T), so just\n    // having \"return &*iterator_;\" doesn't work.\n    // value_ is updated here and not in Advance() because Advance()\n    // can advance iterator_ beyond the end of the range, and we cannot\n    // detect that fact. 
The client code, on the other hand, is\n    // responsible for not calling Current() on an out-of-range iterator.\n    virtual const T* Current() const {\n      if (value_.get() == NULL)\n        value_.reset(new T(*iterator_));\n      return value_.get();\n    }\n    virtual bool Equals(const ParamIteratorInterface<T>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      return iterator_ ==\n          CheckedDowncastToActualType<const Iterator>(&other)->iterator_;\n    }\n\n   private:\n    Iterator(const Iterator& other)\n          // The explicit constructor call suppresses a false warning\n          // emitted by gcc when supplied with the -Wextra option.\n        : ParamIteratorInterface<T>(),\n          base_(other.base_),\n          iterator_(other.iterator_) {}\n\n    const ParamGeneratorInterface<T>* const base_;\n    typename ContainerType::const_iterator iterator_;\n    // A cached value of *iterator_. 
We keep it here to allow access by\n    // pointer in the wrapping iterator's operator->().\n    // value_ needs to be mutable to be accessed in Current().\n    // Use of scoped_ptr helps manage cached value's lifetime,\n    // which is bound by the lifespan of the iterator itself.\n    mutable scoped_ptr<const T> value_;\n  };  // class ValuesInIteratorRangeGenerator::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const ValuesInIteratorRangeGenerator& other);\n\n  const ContainerType container_;\n};  // class ValuesInIteratorRangeGenerator\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Stores a parameter value and later creates tests parameterized with that\n// value.\ntemplate <class TestClass>\nclass ParameterizedTestFactory : public TestFactoryBase {\n public:\n  typedef typename TestClass::ParamType ParamType;\n  explicit ParameterizedTestFactory(ParamType parameter) :\n      parameter_(parameter) {}\n  virtual Test* CreateTest() {\n    TestClass::SetParam(&parameter_);\n    return new TestClass();\n  }\n\n private:\n  const ParamType parameter_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestFactory);\n};\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// TestMetaFactoryBase is a base class for meta-factories that create\n// test factories for passing into MakeAndRegisterTestInfo function.\ntemplate <class ParamType>\nclass TestMetaFactoryBase {\n public:\n  virtual ~TestMetaFactoryBase() {}\n\n  virtual TestFactoryBase* CreateTestFactory(ParamType parameter) = 0;\n};\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// TestMetaFactory creates test factories for passing into\n// MakeAndRegisterTestInfo function. Since MakeAndRegisterTestInfo receives\n// ownership of test factory pointer, same factory object cannot be passed\n// into that method twice. But ParameterizedTestCaseInfo is going to call\n// it for each Test/Parameter value combination. 
Thus it needs meta factory\n// creator class.\ntemplate <class TestCase>\nclass TestMetaFactory\n    : public TestMetaFactoryBase<typename TestCase::ParamType> {\n public:\n  typedef typename TestCase::ParamType ParamType;\n\n  TestMetaFactory() {}\n\n  virtual TestFactoryBase* CreateTestFactory(ParamType parameter) {\n    return new ParameterizedTestFactory<TestCase>(parameter);\n  }\n\n private:\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestMetaFactory);\n};\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// ParameterizedTestCaseInfoBase is a generic interface\n// to ParameterizedTestCaseInfo classes. ParameterizedTestCaseInfoBase\n// accumulates test information provided by TEST_P macro invocations\n// and generators provided by INSTANTIATE_TEST_CASE_P macro invocations\n// and uses that information to register all resulting test instances\n// in RegisterTests method. The ParameterizedTestCaseRegistry class holds\n// a collection of pointers to the ParameterizedTestCaseInfo objects\n// and calls RegisterTests() on each of them when asked.\nclass ParameterizedTestCaseInfoBase {\n public:\n  virtual ~ParameterizedTestCaseInfoBase() {}\n\n  // Base part of test case name for display purposes.\n  virtual const string& GetTestCaseName() const = 0;\n  // Test case id to verify identity.\n  virtual TypeId GetTestCaseTypeId() const = 0;\n  // UnitTest class invokes this method to register tests in this\n  // test case right before running them in RUN_ALL_TESTS macro.\n  // This method should not be called more than once on any single\n  // instance of a ParameterizedTestCaseInfoBase derived class.\n  virtual void RegisterTests() = 0;\n\n protected:\n  ParameterizedTestCaseInfoBase() {}\n\n private:\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfoBase);\n};\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// ParameterizedTestCaseInfo accumulates tests obtained from TEST_P\n// macro invocations for a particular test case and generators\n// 
obtained from INSTANTIATE_TEST_CASE_P macro invocations for that\n// test case. It registers tests with all values generated by all\n// generators when asked.\ntemplate <class TestCase>\nclass ParameterizedTestCaseInfo : public ParameterizedTestCaseInfoBase {\n public:\n  // ParamType and GeneratorCreationFunc are private types but are required\n  // for declarations of public methods AddTestPattern() and\n  // AddTestCaseInstantiation().\n  typedef typename TestCase::ParamType ParamType;\n  // A function that returns an instance of appropriate generator type.\n  typedef ParamGenerator<ParamType>(GeneratorCreationFunc)();\n\n  explicit ParameterizedTestCaseInfo(const char* name)\n      : test_case_name_(name) {}\n\n  // Test case base name for display purposes.\n  virtual const string& GetTestCaseName() const { return test_case_name_; }\n  // Test case id to verify identity.\n  virtual TypeId GetTestCaseTypeId() const { return GetTypeId<TestCase>(); }\n  // TEST_P macro uses AddTestPattern() to record information\n  // about a single test in a LocalTestInfo structure.\n  // test_case_name is the base name of the test case (without invocation\n  // prefix). test_base_name is the name of an individual test without\n  // parameter index. 
For the test SequenceA/FooTest.DoBar/1 FooTest is\n  // test case base name and DoBar is test base name.\n  void AddTestPattern(const char* test_case_name,\n                      const char* test_base_name,\n                      TestMetaFactoryBase<ParamType>* meta_factory) {\n    tests_.push_back(linked_ptr<TestInfo>(new TestInfo(test_case_name,\n                                                       test_base_name,\n                                                       meta_factory)));\n  }\n  // INSTANTIATE_TEST_CASE_P macro uses AddTestCaseInstantiation() to record\n  // information about a generator.\n  int AddTestCaseInstantiation(const string& instantiation_name,\n                               GeneratorCreationFunc* func,\n                               const char* /* file */,\n                               int /* line */) {\n    instantiations_.push_back(::std::make_pair(instantiation_name, func));\n    return 0;  // Return value used only to run this method in namespace scope.\n  }\n  // UnitTest class invokes this method to register tests in this test case\n  // right before running tests in RUN_ALL_TESTS macro.\n  // This method should not be called more than once on any single\n  // instance of a ParameterizedTestCaseInfoBase derived class.\n  // UnitTest has a guard to prevent calling this method more than once.\n  virtual void RegisterTests() {\n    for (typename TestInfoContainer::iterator test_it = tests_.begin();\n         test_it != tests_.end(); ++test_it) {\n      linked_ptr<TestInfo> test_info = *test_it;\n      for (typename InstantiationContainer::iterator gen_it =\n               instantiations_.begin(); gen_it != instantiations_.end();\n               ++gen_it) {\n        const string& instantiation_name = gen_it->first;\n        ParamGenerator<ParamType> generator((*gen_it->second)());\n\n        Message test_case_name_stream;\n        if ( !instantiation_name.empty() )\n          test_case_name_stream << instantiation_name << 
\"/\";\n        test_case_name_stream << test_info->test_case_base_name;\n\n        int i = 0;\n        for (typename ParamGenerator<ParamType>::iterator param_it =\n                 generator.begin();\n             param_it != generator.end(); ++param_it, ++i) {\n          Message test_name_stream;\n          test_name_stream << test_info->test_base_name << \"/\" << i;\n          MakeAndRegisterTestInfo(\n              test_case_name_stream.GetString().c_str(),\n              test_name_stream.GetString().c_str(),\n              NULL,  // No type parameter.\n              PrintToString(*param_it).c_str(),\n              GetTestCaseTypeId(),\n              TestCase::SetUpTestCase,\n              TestCase::TearDownTestCase,\n              test_info->test_meta_factory->CreateTestFactory(*param_it));\n        }  // for param_it\n      }  // for gen_it\n    }  // for test_it\n  }  // RegisterTests\n\n private:\n  // LocalTestInfo structure keeps information about a single test registered\n  // with TEST_P macro.\n  struct TestInfo {\n    TestInfo(const char* a_test_case_base_name,\n             const char* a_test_base_name,\n             TestMetaFactoryBase<ParamType>* a_test_meta_factory) :\n        test_case_base_name(a_test_case_base_name),\n        test_base_name(a_test_base_name),\n        test_meta_factory(a_test_meta_factory) {}\n\n    const string test_case_base_name;\n    const string test_base_name;\n    const scoped_ptr<TestMetaFactoryBase<ParamType> > test_meta_factory;\n  };\n  typedef ::std::vector<linked_ptr<TestInfo> > TestInfoContainer;\n  // Keeps pairs of <Instantiation name, Sequence generator creation function>\n  // received from INSTANTIATE_TEST_CASE_P macros.\n  typedef ::std::vector<std::pair<string, GeneratorCreationFunc*> >\n      InstantiationContainer;\n\n  const string test_case_name_;\n  TestInfoContainer tests_;\n  InstantiationContainer instantiations_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfo);\n};  // class 
ParameterizedTestCaseInfo\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// ParameterizedTestCaseRegistry contains a collection of\n// ParameterizedTestCaseInfoBase classes accessed by test case names. TEST_P\n// and INSTANTIATE_TEST_CASE_P macros use it to locate their corresponding\n// ParameterizedTestCaseInfo descriptors.\nclass ParameterizedTestCaseRegistry {\n public:\n  ParameterizedTestCaseRegistry() {}\n  ~ParameterizedTestCaseRegistry() {\n    for (TestCaseInfoContainer::iterator it = test_case_infos_.begin();\n         it != test_case_infos_.end(); ++it) {\n      delete *it;\n    }\n  }\n\n  // Looks up or creates and returns a structure containing information about\n  // tests and instantiations of a particular test case.\n  template <class TestCase>\n  ParameterizedTestCaseInfo<TestCase>* GetTestCasePatternHolder(\n      const char* test_case_name,\n      const char* file,\n      int line) {\n    ParameterizedTestCaseInfo<TestCase>* typed_test_info = NULL;\n    for (TestCaseInfoContainer::iterator it = test_case_infos_.begin();\n         it != test_case_infos_.end(); ++it) {\n      if ((*it)->GetTestCaseName() == test_case_name) {\n        if ((*it)->GetTestCaseTypeId() != GetTypeId<TestCase>()) {\n          // Complain about incorrect usage of Google Test facilities\n          // and terminate the program since we cannot guarantee correct\n          // test case setup and tear-down in this case.\n          ReportInvalidTestCaseType(test_case_name, file, line);\n          posix::Abort();\n        } else {\n          // At this point we are sure that the object we found is of the same\n          // type we are looking for, so we downcast it to that type\n          // without further checks.\n          typed_test_info = CheckedDowncastToActualType<\n              ParameterizedTestCaseInfo<TestCase> >(*it);\n        }\n        break;\n      }\n    }\n    if (typed_test_info == NULL) {\n      typed_test_info = new 
ParameterizedTestCaseInfo<TestCase>(test_case_name);\n      test_case_infos_.push_back(typed_test_info);\n    }\n    return typed_test_info;\n  }\n  void RegisterTests() {\n    for (TestCaseInfoContainer::iterator it = test_case_infos_.begin();\n         it != test_case_infos_.end(); ++it) {\n      (*it)->RegisterTests();\n    }\n  }\n\n private:\n  typedef ::std::vector<ParameterizedTestCaseInfoBase*> TestCaseInfoContainer;\n\n  TestCaseInfoContainer test_case_infos_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseRegistry);\n};\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  //  GTEST_HAS_PARAM_TEST\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_\n// This file was GENERATED by command:\n//     pump.py gtest-param-util-generated.h.pump\n// DO NOT EDIT BY HAND!!!\n\n// Copyright 2008 Google Inc.\n// All Rights Reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: vladl@google.com (Vlad Losev)\n\n// Type and function utilities for implementing parameterized tests.\n// This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!\n//\n// Currently Google Test supports at most 50 arguments in Values,\n// and at most 10 arguments in Combine. Please contact\n// googletestframework@googlegroups.com if you need more.\n// Please note that the number of arguments to Combine is limited\n// by the maximum arity of the implementation of tr1::tuple which is\n// currently set at 10.\n\n#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_\n#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_\n\n// scripts/fuse_gtest.py depends on gtest's own header being #included\n// *unconditionally*.  
Therefore these #includes cannot be moved\n// inside #if GTEST_HAS_PARAM_TEST.\n\n#if GTEST_HAS_PARAM_TEST\n\nnamespace testing {\n\n// Forward declarations of ValuesIn(), which is implemented in\n// include/gtest/gtest-param-test.h.\ntemplate <typename ForwardIterator>\ninternal::ParamGenerator<\n  typename ::testing::internal::IteratorTraits<ForwardIterator>::value_type>\nValuesIn(ForwardIterator begin, ForwardIterator end);\n\ntemplate <typename T, size_t N>\ninternal::ParamGenerator<T> ValuesIn(const T (&array)[N]);\n\ntemplate <class Container>\ninternal::ParamGenerator<typename Container::value_type> ValuesIn(\n    const Container& container);\n\nnamespace internal {\n\n// Used in the Values() function to provide polymorphic capabilities.\ntemplate <typename T1>\nclass ValueArray1 {\n public:\n  explicit ValueArray1(T1 v1) : v1_(v1) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const { return ValuesIn(&v1_, &v1_ + 1); }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray1& other);\n\n  const T1 v1_;\n};\n\ntemplate <typename T1, typename T2>\nclass ValueArray2 {\n public:\n  ValueArray2(T1 v1, T2 v2) : v1_(v1), v2_(v2) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray2& other);\n\n  const T1 v1_;\n  const T2 v2_;\n};\n\ntemplate <typename T1, typename T2, typename T3>\nclass ValueArray3 {\n public:\n  ValueArray3(T1 v1, T2 v2, T3 v3) : v1_(v1), v2_(v2), v3_(v3) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray3& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n};\n\ntemplate <typename T1, typename T2, 
typename T3, typename T4>\nclass ValueArray4 {\n public:\n  ValueArray4(T1 v1, T2 v2, T3 v3, T4 v4) : v1_(v1), v2_(v2), v3_(v3),\n      v4_(v4) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray4& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5>\nclass ValueArray5 {\n public:\n  ValueArray5(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) : v1_(v1), v2_(v2), v3_(v3),\n      v4_(v4), v5_(v5) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray5& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6>\nclass ValueArray6 {\n public:\n  ValueArray6(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6) : v1_(v1), v2_(v2),\n      v3_(v3), v4_(v4), v5_(v5), v6_(v6) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray6& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7>\nclass ValueArray7 {\n public:\n  ValueArray7(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7) : v1_(v1),\n      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7) {}\n\n  template <typename T>\n  operator 
ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray7& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8>\nclass ValueArray8 {\n public:\n  ValueArray8(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,\n      T8 v8) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray8& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9>\nclass ValueArray9 {\n public:\n  ValueArray9(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8,\n      T9 v9) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray9& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, 
typename T9, typename T10>\nclass ValueArray10 {\n public:\n  ValueArray10(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray10& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11>\nclass ValueArray11 {\n public:\n  ValueArray11(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6),\n      v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray11& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12>\nclass ValueArray12 {\n public:\n  ValueArray12(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), 
v5_(v5),\n      v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray12& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13>\nclass ValueArray13 {\n public:\n  ValueArray13(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13) : v1_(v1), v2_(v2), v3_(v3), v4_(v4),\n      v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11),\n      v12_(v12), v13_(v13) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray13& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14>\nclass ValueArray14 {\n public:\n  ValueArray14(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 
v11, T12 v12, T13 v13, T14 v14) : v1_(v1), v2_(v2), v3_(v3),\n      v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),\n      v11_(v11), v12_(v12), v13_(v13), v14_(v14) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray14& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15>\nclass ValueArray15 {\n public:\n  ValueArray15(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) : v1_(v1), v2_(v2),\n      v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),\n      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray15& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename 
T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16>\nclass ValueArray16 {\n public:\n  ValueArray16(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16) : v1_(v1),\n      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9),\n      v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15),\n      v16_(v16) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray16& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17>\nclass ValueArray17 {\n public:\n  ValueArray17(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16,\n      T17 v17) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_};\n    
return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray17& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18>\nclass ValueArray18 {\n public:\n  ValueArray18(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17), v18_(v18) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray18& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, 
typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19>\nclass ValueArray19 {\n public:\n  ValueArray19(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6),\n      v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13),\n      v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray19& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20>\nclass ValueArray20 {\n public:\n  ValueArray20(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5),\n      v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12),\n      v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18),\n      v19_(v19), v20_(v20) {}\n\n  template <typename T>\n  
operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray20& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21>
class ValueArray21 {
 public:
  ValueArray21(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21) : v1_(v1), v2_(v2), v3_(v3), v4_(v4),
      v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11),
      v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17),
      v18_(v18), v19_(v19), v20_(v20), v21_(v21) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray21& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22>
class ValueArray22 {
 public:
  ValueArray22(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22) : v1_(v1), v2_(v2), v3_(v3),
      v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),
      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),
      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray22& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23>
class ValueArray23 {
 public:
  ValueArray23(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23) : v1_(v1), v2_(v2),
      v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),
      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),
      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),
      v23_(v23) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_,
        v23_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray23& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24>
class ValueArray24 {
 public:
  ValueArray24(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24) : v1_(v1),
      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9),
      v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15),
      v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21),
      v22_(v22), v23_(v23), v24_(v24) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray24& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25>
class ValueArray25 {
 public:
  ValueArray25(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24,
      T25 v25) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),
      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),
      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),
      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray25& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26>
class ValueArray26 {
 public:
  ValueArray26(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),
      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),
      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),
      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray26& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27>
class ValueArray27 {
 public:
  ValueArray27(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6),
      v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13),
      v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19),
      v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25),
      v26_(v26), v27_(v27) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray27& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28>
class ValueArray28 {
 public:
  ValueArray28(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5),
      v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12),
      v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18),
      v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24),
      v25_(v25), v26_(v26), v27_(v27), v28_(v28) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray28& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29>
class ValueArray29 {
 public:
  ValueArray29(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29) : v1_(v1), v2_(v2), v3_(v3), v4_(v4),
      v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11),
      v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17),
      v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23),
      v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray29& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30>
class ValueArray30 {
 public:
  ValueArray30(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) : v1_(v1), v2_(v2), v3_(v3),
      v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),
      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),
      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),
      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),
      v29_(v29), v30_(v30) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray30& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31>
class ValueArray31 {
 public:
  ValueArray31(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) : v1_(v1), v2_(v2),
      v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),
      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),
      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),
      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),
      v29_(v29), v30_(v30), v31_(v31) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray31& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32>
class ValueArray32 {
 public:
  ValueArray32(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32) : v1_(v1),
      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9),
      v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15),
      v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21),
      v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27),
      v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray32& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33>
class ValueArray33 {
 public:
  ValueArray33(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32,
      T33 v33) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),
      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),
      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),
      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),
      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),
      v33_(v33) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray33& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34>
class ValueArray34 {
 public:
  ValueArray34(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
      T34 v34) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),
      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),
      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),
      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),
      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),
      v33_(v33), v34_(v34) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray34& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
  const T34 v34_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35>
class ValueArray35 {
 public:
  ValueArray35(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
      T34 v34, T35 v35) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6),
      v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13),
      v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19),
      v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25),
      v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31),
      v32_(v32), v33_(v33), v34_(v34), v35_(v35) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_,
        v35_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray35& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
  const T34 v34_;
  const T35 v35_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36>
class ValueArray36 {
 public:
  ValueArray36(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
      T34 v34, T35 v35, T36 v36) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5),
      v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12),
      v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18),
      v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24),
      v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30),
      v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,
        v36_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray36& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
  const T34 v34_;
  const T35 v35_;
  const T36 v36_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37>
class ValueArray37 {
 public:
  ValueArray37(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
      T34 v34, T35 v35, T36 v36, T37 v37) : v1_(v1), v2_(v2), v3_(v3), v4_(v4),
      v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11),
      v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17),
      v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23),
      v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29),
      v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35),
      v36_(v36), v37_(v37) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,
        v36_, v37_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray37& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
  const T34 v34_;
  const T35 v35_;
  const T36 v36_;
  const T37 v37_;
};

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38>
class ValueArray38 {
 public:
  ValueArray38(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38) : v1_(v1), v2_(v2), v3_(v3),
      v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),
      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),
      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),
      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),
      v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34),
      v35_(v35), v36_(v36), v37_(v37), v38_(v38) {}

  template <typename T>
  operator ParamGenerator<T>() const {
    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,
        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,
        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,
        v36_, v37_, v38_};
    return ValuesIn(array);
  }

 private:
  // No implementation - assignment is unsupported.
  void operator=(const ValueArray38& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39>\nclass ValueArray39 {\n public:\n  ValueArray39(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39) : v1_(v1), v2_(v2),\n      v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),\n      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),\n      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),\n      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),\n      v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34),\n      v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is 
unsupported.\n  void operator=(const ValueArray39& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40>\nclass ValueArray40 {\n public:\n  ValueArray40(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) : v1_(v1),\n      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9),\n      v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15),\n      v16_(v16), v17_(v17), 
v18_(v18), v19_(v19), v20_(v20), v21_(v21),\n      v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27),\n      v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33),\n      v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39),\n      v40_(v40) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray40& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, 
typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41>\nclass ValueArray41 {\n public:\n  ValueArray41(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40,\n      T41 v41) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),\n      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),\n      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),\n      v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38),\n      v39_(v39), v40_(v40), v41_(v41) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray41& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 
v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42>\nclass ValueArray42 {\n public:\n  ValueArray42(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),\n      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),\n      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),\n      v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38),\n      v39_(v39), v40_(v40), v41_(v41), v42_(v42) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, 
v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray42& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43>\nclass ValueArray43 {\n public:\n  ValueArray43(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 
v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6),\n      v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13),\n      v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19),\n      v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25),\n      v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31),\n      v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37),\n      v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray43& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  
const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44>\nclass ValueArray44 {\n public:\n  ValueArray44(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5),\n      v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12),\n      v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18),\n      v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24),\n      v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30),\n      v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36),\n      v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42),\n      v43_(v43), v44_(v44) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, 
v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray44& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45>\nclass ValueArray45 {\n public:\n  ValueArray45(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, 
T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45) : v1_(v1), v2_(v2), v3_(v3), v4_(v4),\n      v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11),\n      v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17),\n      v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23),\n      v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29),\n      v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35),\n      v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41),\n      v42_(v42), v43_(v43), v44_(v44), v45_(v45) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray45& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const 
T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46>\nclass ValueArray46 {\n public:\n  ValueArray46(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) : v1_(v1), v2_(v2), v3_(v3),\n      v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),\n      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),\n      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),\n      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),\n      v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34),\n      v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40),\n      v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, 
v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_, v46_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray46& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n  const T46 v46_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47>\nclass ValueArray47 {\n public:\n  
ValueArray47(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) : v1_(v1), v2_(v2),\n      v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10),\n      v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16),\n      v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22),\n      v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28),\n      v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34),\n      v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40),\n      v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46),\n      v47_(v47) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_, v46_,\n        v47_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray47& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 
v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n  const T46 v46_;\n  const T47 v47_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48>\nclass ValueArray48 {\n public:\n  ValueArray48(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48) : v1_(v1),\n      v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9),\n      v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15),\n      v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21),\n      v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27),\n      v28_(v28), 
v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33),\n      v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39),\n      v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45),\n      v46_(v46), v47_(v47), v48_(v48) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_, v46_, v47_,\n        v48_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray48& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n  const T46 v46_;\n  const T47 v47_;\n  const T48 v48_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    
typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49>\nclass ValueArray49 {\n public:\n  ValueArray49(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48,\n      T49 v49) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),\n      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),\n      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),\n      v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38),\n      v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44),\n      v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_, v46_, v47_,\n        v48_, v49_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No 
implementation - assignment is unsupported.\n  void operator=(const ValueArray49& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n  const T46 v46_;\n  const T47 v47_;\n  const T48 v48_;\n  const T49 v49_;\n};\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49, typename T50>\nclass ValueArray50 {\n public:\n  ValueArray50(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n      T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,\n      T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, 
T23 v23, T24 v24, T25 v25,\n      T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,\n      T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,\n      T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49,\n      T50 v50) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7),\n      v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14),\n      v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20),\n      v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26),\n      v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32),\n      v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38),\n      v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44),\n      v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49), v50_(v50) {}\n\n  template <typename T>\n  operator ParamGenerator<T>() const {\n    const T array[] = {v1_, v2_, v3_, v4_, v5_, v6_, v7_, v8_, v9_, v10_, v11_,\n        v12_, v13_, v14_, v15_, v16_, v17_, v18_, v19_, v20_, v21_, v22_, v23_,\n        v24_, v25_, v26_, v27_, v28_, v29_, v30_, v31_, v32_, v33_, v34_, v35_,\n        v36_, v37_, v38_, v39_, v40_, v41_, v42_, v43_, v44_, v45_, v46_, v47_,\n        v48_, v49_, v50_};\n    return ValuesIn(array);\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const ValueArray50& other);\n\n  const T1 v1_;\n  const T2 v2_;\n  const T3 v3_;\n  const T4 v4_;\n  const T5 v5_;\n  const T6 v6_;\n  const T7 v7_;\n  const T8 v8_;\n  const T9 v9_;\n  const T10 v10_;\n  const T11 v11_;\n  const T12 v12_;\n  const T13 v13_;\n  const T14 v14_;\n  const T15 v15_;\n  const T16 v16_;\n  const T17 v17_;\n  const T18 v18_;\n  const T19 v19_;\n  const T20 v20_;\n  const T21 v21_;\n  const T22 v22_;\n  const T23 v23_;\n  const T24 v24_;\n  const T25 v25_;\n  const T26 v26_;\n  const T27 v27_;\n  const T28 v28_;\n  const T29 v29_;\n  const T30 v30_;\n  const T31 
v31_;\n  const T32 v32_;\n  const T33 v33_;\n  const T34 v34_;\n  const T35 v35_;\n  const T36 v36_;\n  const T37 v37_;\n  const T38 v38_;\n  const T39 v39_;\n  const T40 v40_;\n  const T41 v41_;\n  const T42 v42_;\n  const T43 v43_;\n  const T44 v44_;\n  const T45 v45_;\n  const T46 v46_;\n  const T47 v47_;\n  const T48 v48_;\n  const T49 v49_;\n  const T50 v50_;\n};\n\n# if GTEST_HAS_COMBINE\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Generates values from the Cartesian product of values produced\n// by the argument generators.\n//\ntemplate <typename T1, typename T2>\nclass CartesianProductGenerator2\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2> ParamType;\n\n  CartesianProductGenerator2(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2)\n      : g1_(g1), g2_(g2) {}\n  virtual ~CartesianProductGenerator2() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond 
end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current2_;\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. 
That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator2::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator2& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n};  // class CartesianProductGenerator2\n\n\ntemplate <typename T1, typename T2, typename T3>\nclass CartesianProductGenerator3\n    : public 
ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3> ParamType;\n\n  CartesianProductGenerator3(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3)\n      : g1_(g1), g2_(g2), g3_(g3) {}\n  virtual ~CartesianProductGenerator3() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current3_;\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const 
{\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const 
ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator3::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator3& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n};  // class CartesianProductGenerator3\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4>\nclass CartesianProductGenerator4\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4> ParamType;\n\n  CartesianProductGenerator4(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {}\n  virtual ~CartesianProductGenerator4() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const 
ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current4_;\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different 
generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    
typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator4::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator4& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n};  // class CartesianProductGenerator4\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5>\nclass CartesianProductGenerator5\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5> ParamType;\n\n  CartesianProductGenerator5(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& g5)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {}\n  virtual ~CartesianProductGenerator5() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const 
ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current5_;\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the 
same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_, *current5_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ 
||\n          current5_ == end5_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator5::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator5& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n};  // class CartesianProductGenerator5\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6>\nclass CartesianProductGenerator6\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5,\n        T6> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5, T6> ParamType;\n\n  CartesianProductGenerator6(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const 
ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& g5,\n      const ParamGenerator<T6>& g6)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {}\n  virtual ~CartesianProductGenerator6() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5,\n      const ParamGenerator<T6>& g6,\n      const typename ParamGenerator<T6>::iterator& current6)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5),\n          begin6_(g6.begin()), end6_(g6.end()), current6_(current6)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n    
  return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current6_;\n      if (current6_ == end6_) {\n        current6_ = begin6_;\n        ++current5_;\n      }\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. 
That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_ &&\n          current6_ == typed_other->current6_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_),\n        begin6_(other.begin6_),\n        end6_(other.end6_),\n        current6_(other.current6_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_, *current5_, *current6_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ ||\n          current5_ == end5_ ||\n          current6_ == end6_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ 
is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    const typename ParamGenerator<T6>::iterator begin6_;\n    const typename ParamGenerator<T6>::iterator end6_;\n    typename ParamGenerator<T6>::iterator current6_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator6::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator6& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n  const ParamGenerator<T6> g6_;\n};  // class CartesianProductGenerator6\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7>\nclass CartesianProductGenerator7\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6,\n        T7> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7> ParamType;\n\n  CartesianProductGenerator7(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& 
g5,\n      const ParamGenerator<T6>& g6, const ParamGenerator<T7>& g7)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {}\n  virtual ~CartesianProductGenerator7() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_,\n        g7_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5,\n      const ParamGenerator<T6>& g6,\n      const typename ParamGenerator<T6>::iterator& current6,\n      const ParamGenerator<T7>& g7,\n      const typename ParamGenerator<T7>::iterator& current7)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5),\n          begin6_(g6.begin()), end6_(g6.end()), current6_(current6),\n          begin7_(g7.begin()), end7_(g7.end()), 
current7_(current7)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current7_;\n      if (current7_ == end7_) {\n        current7_ = begin7_;\n        ++current6_;\n      }\n      if (current6_ == end6_) {\n        current6_ = begin6_;\n        ++current5_;\n      }\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. 
That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_ &&\n          current6_ == typed_other->current6_ &&\n          current7_ == typed_other->current7_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_),\n        begin6_(other.begin6_),\n        end6_(other.end6_),\n        current6_(other.current6_),\n        begin7_(other.begin7_),\n        end7_(other.end7_),\n        current7_(other.current7_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_, *current5_, *current6_, *current7_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ ||\n          current5_ == end5_ ||\n          current6_ == end6_ ||\n          current7_ == end7_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void 
operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    const typename ParamGenerator<T6>::iterator begin6_;\n    const typename ParamGenerator<T6>::iterator end6_;\n    typename ParamGenerator<T6>::iterator current6_;\n    const typename ParamGenerator<T7>::iterator begin7_;\n    const typename ParamGenerator<T7>::iterator end7_;\n    typename ParamGenerator<T7>::iterator current7_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator7::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator7& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n  const ParamGenerator<T6> g6_;\n  const ParamGenerator<T7> g7_;\n};  // class CartesianProductGenerator7\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename 
T8>\nclass CartesianProductGenerator8\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6,\n        T7, T8> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8> ParamType;\n\n  CartesianProductGenerator8(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& g5,\n      const ParamGenerator<T6>& g6, const ParamGenerator<T7>& g7,\n      const ParamGenerator<T8>& g8)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7),\n          g8_(g8) {}\n  virtual ~CartesianProductGenerator8() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_,\n        g7_.begin(), g8_, g8_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_,\n        g8_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5,\n      const ParamGenerator<T6>& g6,\n      const typename ParamGenerator<T6>::iterator& current6,\n      const ParamGenerator<T7>& g7,\n      const typename ParamGenerator<T7>::iterator& 
current7,\n      const ParamGenerator<T8>& g8,\n      const typename ParamGenerator<T8>::iterator& current8)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5),\n          begin6_(g6.begin()), end6_(g6.end()), current6_(current6),\n          begin7_(g7.begin()), end7_(g7.end()), current7_(current7),\n          begin8_(g8.begin()), end8_(g8.end()), current8_(current8)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current8_;\n      if (current8_ == end8_) {\n        current8_ = begin8_;\n        ++current7_;\n      }\n      if (current7_ == end7_) {\n        current7_ = begin7_;\n        ++current6_;\n      }\n      if (current6_ == end6_) {\n        current6_ = begin6_;\n        ++current5_;\n      }\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const 
ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_ &&\n          current6_ == typed_other->current6_ &&\n          current7_ == typed_other->current7_ &&\n          current8_ == typed_other->current8_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_),\n        begin6_(other.begin6_),\n        end6_(other.end6_),\n        current6_(other.current6_),\n        begin7_(other.begin7_),\n        end7_(other.end7_),\n        current7_(other.current7_),\n        begin8_(other.begin8_),\n        end8_(other.end8_),\n        current8_(other.current8_) {\n      
ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_, *current5_, *current6_, *current7_, *current8_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ ||\n          current5_ == end5_ ||\n          current6_ == end6_ ||\n          current7_ == end7_ ||\n          current8_ == end8_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    const typename ParamGenerator<T6>::iterator begin6_;\n    const typename ParamGenerator<T6>::iterator end6_;\n    typename ParamGenerator<T6>::iterator current6_;\n    
const typename ParamGenerator<T7>::iterator begin7_;\n    const typename ParamGenerator<T7>::iterator end7_;\n    typename ParamGenerator<T7>::iterator current7_;\n    const typename ParamGenerator<T8>::iterator begin8_;\n    const typename ParamGenerator<T8>::iterator end8_;\n    typename ParamGenerator<T8>::iterator current8_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator8::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator8& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n  const ParamGenerator<T6> g6_;\n  const ParamGenerator<T7> g7_;\n  const ParamGenerator<T8> g8_;\n};  // class CartesianProductGenerator8\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9>\nclass CartesianProductGenerator9\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6,\n        T7, T8, T9> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8, T9> ParamType;\n\n  CartesianProductGenerator9(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& g5,\n      const ParamGenerator<T6>& g6, const ParamGenerator<T7>& g7,\n      const ParamGenerator<T8>& g8, const ParamGenerator<T9>& g9)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8),\n          g9_(g9) {}\n  virtual ~CartesianProductGenerator9() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_,\n        g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin());\n  }\n  virtual 
ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_,\n        g8_.end(), g9_, g9_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5,\n      const ParamGenerator<T6>& g6,\n      const typename ParamGenerator<T6>::iterator& current6,\n      const ParamGenerator<T7>& g7,\n      const typename ParamGenerator<T7>::iterator& current7,\n      const ParamGenerator<T8>& g8,\n      const typename ParamGenerator<T8>::iterator& current8,\n      const ParamGenerator<T9>& g9,\n      const typename ParamGenerator<T9>::iterator& current9)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5),\n          begin6_(g6.begin()), end6_(g6.end()), current6_(current6),\n          begin7_(g7.begin()), end7_(g7.end()), current7_(current7),\n          begin8_(g8.begin()), end8_(g8.end()), current8_(current8),\n          begin9_(g9.begin()), end9_(g9.end()), current9_(current9)    {\n      ComputeCurrentValue();\n    }\n    virtual 
~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current9_;\n      if (current9_ == end9_) {\n        current9_ = begin9_;\n        ++current8_;\n      }\n      if (current8_ == end8_) {\n        current8_ = begin8_;\n        ++current7_;\n      }\n      if (current7_ == end7_) {\n        current7_ = begin7_;\n        ++current6_;\n      }\n      if (current6_ == end6_) {\n        current6_ = begin6_;\n        ++current5_;\n      }\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. 
That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_ &&\n          current6_ == typed_other->current6_ &&\n          current7_ == typed_other->current7_ &&\n          current8_ == typed_other->current8_ &&\n          current9_ == typed_other->current9_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_),\n        begin6_(other.begin6_),\n        end6_(other.end6_),\n        current6_(other.current6_),\n        begin7_(other.begin7_),\n        end7_(other.end7_),\n        current7_(other.current7_),\n        begin8_(other.begin8_),\n        end8_(other.end8_),\n        current8_(other.current8_),\n        begin9_(other.begin9_),\n        end9_(other.end9_),\n        current9_(other.current9_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, *current2_, *current3_,\n            *current4_, *current5_, *current6_, *current7_, *current8_,\n            *current9_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the 
end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ ||\n          current5_ == end5_ ||\n          current6_ == end6_ ||\n          current7_ == end7_ ||\n          current8_ == end8_ ||\n          current9_ == end9_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    const typename ParamGenerator<T6>::iterator begin6_;\n    const typename ParamGenerator<T6>::iterator end6_;\n    typename ParamGenerator<T6>::iterator current6_;\n    const typename ParamGenerator<T7>::iterator begin7_;\n    const typename ParamGenerator<T7>::iterator end7_;\n    typename ParamGenerator<T7>::iterator current7_;\n    const typename ParamGenerator<T8>::iterator begin8_;\n    const typename ParamGenerator<T8>::iterator end8_;\n    typename ParamGenerator<T8>::iterator current8_;\n    const typename 
ParamGenerator<T9>::iterator begin9_;\n    const typename ParamGenerator<T9>::iterator end9_;\n    typename ParamGenerator<T9>::iterator current9_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator9::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator9& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n  const ParamGenerator<T6> g6_;\n  const ParamGenerator<T7> g7_;\n  const ParamGenerator<T8> g8_;\n  const ParamGenerator<T9> g9_;\n};  // class CartesianProductGenerator9\n\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10>\nclass CartesianProductGenerator10\n    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6,\n        T7, T8, T9, T10> > {\n public:\n  typedef ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> ParamType;\n\n  CartesianProductGenerator10(const ParamGenerator<T1>& g1,\n      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,\n      const ParamGenerator<T4>& g4, const ParamGenerator<T5>& g5,\n      const ParamGenerator<T6>& g6, const ParamGenerator<T7>& g7,\n      const ParamGenerator<T8>& g8, const ParamGenerator<T9>& g9,\n      const ParamGenerator<T10>& g10)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8),\n          g9_(g9), g10_(g10) {}\n  virtual ~CartesianProductGenerator10() {}\n\n  virtual ParamIteratorInterface<ParamType>* Begin() const {\n    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,\n        g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_,\n        g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin(), g10_, g10_.begin());\n  }\n  virtual ParamIteratorInterface<ParamType>* End() const {\n    return new Iterator(this, 
g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),\n        g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_,\n        g8_.end(), g9_, g9_.end(), g10_, g10_.end());\n  }\n\n private:\n  class Iterator : public ParamIteratorInterface<ParamType> {\n   public:\n    Iterator(const ParamGeneratorInterface<ParamType>* base,\n      const ParamGenerator<T1>& g1,\n      const typename ParamGenerator<T1>::iterator& current1,\n      const ParamGenerator<T2>& g2,\n      const typename ParamGenerator<T2>::iterator& current2,\n      const ParamGenerator<T3>& g3,\n      const typename ParamGenerator<T3>::iterator& current3,\n      const ParamGenerator<T4>& g4,\n      const typename ParamGenerator<T4>::iterator& current4,\n      const ParamGenerator<T5>& g5,\n      const typename ParamGenerator<T5>::iterator& current5,\n      const ParamGenerator<T6>& g6,\n      const typename ParamGenerator<T6>::iterator& current6,\n      const ParamGenerator<T7>& g7,\n      const typename ParamGenerator<T7>::iterator& current7,\n      const ParamGenerator<T8>& g8,\n      const typename ParamGenerator<T8>::iterator& current8,\n      const ParamGenerator<T9>& g9,\n      const typename ParamGenerator<T9>::iterator& current9,\n      const ParamGenerator<T10>& g10,\n      const typename ParamGenerator<T10>::iterator& current10)\n        : base_(base),\n          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),\n          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),\n          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),\n          begin4_(g4.begin()), end4_(g4.end()), current4_(current4),\n          begin5_(g5.begin()), end5_(g5.end()), current5_(current5),\n          begin6_(g6.begin()), end6_(g6.end()), current6_(current6),\n          begin7_(g7.begin()), end7_(g7.end()), current7_(current7),\n          begin8_(g8.begin()), end8_(g8.end()), current8_(current8),\n          begin9_(g9.begin()), end9_(g9.end()), current9_(current9),\n          
begin10_(g10.begin()), end10_(g10.end()), current10_(current10)    {\n      ComputeCurrentValue();\n    }\n    virtual ~Iterator() {}\n\n    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {\n      return base_;\n    }\n    // Advance should not be called on beyond-of-range iterators\n    // so no component iterators must be beyond end of range, either.\n    virtual void Advance() {\n      assert(!AtEnd());\n      ++current10_;\n      if (current10_ == end10_) {\n        current10_ = begin10_;\n        ++current9_;\n      }\n      if (current9_ == end9_) {\n        current9_ = begin9_;\n        ++current8_;\n      }\n      if (current8_ == end8_) {\n        current8_ = begin8_;\n        ++current7_;\n      }\n      if (current7_ == end7_) {\n        current7_ = begin7_;\n        ++current6_;\n      }\n      if (current6_ == end6_) {\n        current6_ = begin6_;\n        ++current5_;\n      }\n      if (current5_ == end5_) {\n        current5_ = begin5_;\n        ++current4_;\n      }\n      if (current4_ == end4_) {\n        current4_ = begin4_;\n        ++current3_;\n      }\n      if (current3_ == end3_) {\n        current3_ = begin3_;\n        ++current2_;\n      }\n      if (current2_ == end2_) {\n        current2_ = begin2_;\n        ++current1_;\n      }\n      ComputeCurrentValue();\n    }\n    virtual ParamIteratorInterface<ParamType>* Clone() const {\n      return new Iterator(*this);\n    }\n    virtual const ParamType* Current() const { return &current_value_; }\n    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {\n      // Having the same base generator guarantees that the other\n      // iterator is of the same type and we can downcast.\n      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())\n          << \"The program attempted to compare iterators \"\n          << \"from different generators.\" << std::endl;\n      const Iterator* typed_other =\n          CheckedDowncastToActualType<const 
Iterator>(&other);\n      // We must report iterators equal if they both point beyond their\n      // respective ranges. That can happen in a variety of fashions,\n      // so we have to consult AtEnd().\n      return (AtEnd() && typed_other->AtEnd()) ||\n         (\n          current1_ == typed_other->current1_ &&\n          current2_ == typed_other->current2_ &&\n          current3_ == typed_other->current3_ &&\n          current4_ == typed_other->current4_ &&\n          current5_ == typed_other->current5_ &&\n          current6_ == typed_other->current6_ &&\n          current7_ == typed_other->current7_ &&\n          current8_ == typed_other->current8_ &&\n          current9_ == typed_other->current9_ &&\n          current10_ == typed_other->current10_);\n    }\n\n   private:\n    Iterator(const Iterator& other)\n        : base_(other.base_),\n        begin1_(other.begin1_),\n        end1_(other.end1_),\n        current1_(other.current1_),\n        begin2_(other.begin2_),\n        end2_(other.end2_),\n        current2_(other.current2_),\n        begin3_(other.begin3_),\n        end3_(other.end3_),\n        current3_(other.current3_),\n        begin4_(other.begin4_),\n        end4_(other.end4_),\n        current4_(other.current4_),\n        begin5_(other.begin5_),\n        end5_(other.end5_),\n        current5_(other.current5_),\n        begin6_(other.begin6_),\n        end6_(other.end6_),\n        current6_(other.current6_),\n        begin7_(other.begin7_),\n        end7_(other.end7_),\n        current7_(other.current7_),\n        begin8_(other.begin8_),\n        end8_(other.end8_),\n        current8_(other.current8_),\n        begin9_(other.begin9_),\n        end9_(other.end9_),\n        current9_(other.current9_),\n        begin10_(other.begin10_),\n        end10_(other.end10_),\n        current10_(other.current10_) {\n      ComputeCurrentValue();\n    }\n\n    void ComputeCurrentValue() {\n      if (!AtEnd())\n        current_value_ = ParamType(*current1_, 
*current2_, *current3_,\n            *current4_, *current5_, *current6_, *current7_, *current8_,\n            *current9_, *current10_);\n    }\n    bool AtEnd() const {\n      // We must report iterator past the end of the range when either of the\n      // component iterators has reached the end of its range.\n      return\n          current1_ == end1_ ||\n          current2_ == end2_ ||\n          current3_ == end3_ ||\n          current4_ == end4_ ||\n          current5_ == end5_ ||\n          current6_ == end6_ ||\n          current7_ == end7_ ||\n          current8_ == end8_ ||\n          current9_ == end9_ ||\n          current10_ == end10_;\n    }\n\n    // No implementation - assignment is unsupported.\n    void operator=(const Iterator& other);\n\n    const ParamGeneratorInterface<ParamType>* const base_;\n    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.\n    // current[i]_ is the actual traversing iterator.\n    const typename ParamGenerator<T1>::iterator begin1_;\n    const typename ParamGenerator<T1>::iterator end1_;\n    typename ParamGenerator<T1>::iterator current1_;\n    const typename ParamGenerator<T2>::iterator begin2_;\n    const typename ParamGenerator<T2>::iterator end2_;\n    typename ParamGenerator<T2>::iterator current2_;\n    const typename ParamGenerator<T3>::iterator begin3_;\n    const typename ParamGenerator<T3>::iterator end3_;\n    typename ParamGenerator<T3>::iterator current3_;\n    const typename ParamGenerator<T4>::iterator begin4_;\n    const typename ParamGenerator<T4>::iterator end4_;\n    typename ParamGenerator<T4>::iterator current4_;\n    const typename ParamGenerator<T5>::iterator begin5_;\n    const typename ParamGenerator<T5>::iterator end5_;\n    typename ParamGenerator<T5>::iterator current5_;\n    const typename ParamGenerator<T6>::iterator begin6_;\n    const typename ParamGenerator<T6>::iterator end6_;\n    typename ParamGenerator<T6>::iterator current6_;\n    const typename 
ParamGenerator<T7>::iterator begin7_;\n    const typename ParamGenerator<T7>::iterator end7_;\n    typename ParamGenerator<T7>::iterator current7_;\n    const typename ParamGenerator<T8>::iterator begin8_;\n    const typename ParamGenerator<T8>::iterator end8_;\n    typename ParamGenerator<T8>::iterator current8_;\n    const typename ParamGenerator<T9>::iterator begin9_;\n    const typename ParamGenerator<T9>::iterator end9_;\n    typename ParamGenerator<T9>::iterator current9_;\n    const typename ParamGenerator<T10>::iterator begin10_;\n    const typename ParamGenerator<T10>::iterator end10_;\n    typename ParamGenerator<T10>::iterator current10_;\n    ParamType current_value_;\n  };  // class CartesianProductGenerator10::Iterator\n\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductGenerator10& other);\n\n  const ParamGenerator<T1> g1_;\n  const ParamGenerator<T2> g2_;\n  const ParamGenerator<T3> g3_;\n  const ParamGenerator<T4> g4_;\n  const ParamGenerator<T5> g5_;\n  const ParamGenerator<T6> g6_;\n  const ParamGenerator<T7> g7_;\n  const ParamGenerator<T8> g8_;\n  const ParamGenerator<T9> g9_;\n  const ParamGenerator<T10> g10_;\n};  // class CartesianProductGenerator10\n\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Helper classes providing Combine() with polymorphic features. 
They allow\n// casting CartesianProductGeneratorN<T> to ParamGenerator<U> if T is\n// convertible to U.\n//\ntemplate <class Generator1, class Generator2>\nclass CartesianProductHolder2 {\n public:\nCartesianProductHolder2(const Generator1& g1, const Generator2& g2)\n      : g1_(g1), g2_(g2) {}\n  template <typename T1, typename T2>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2> >(\n        new CartesianProductGenerator2<T1, T2>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder2& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n};  // class CartesianProductHolder2\n\ntemplate <class Generator1, class Generator2, class Generator3>\nclass CartesianProductHolder3 {\n public:\nCartesianProductHolder3(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3)\n      : g1_(g1), g2_(g2), g3_(g3) {}\n  template <typename T1, typename T2, typename T3>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3> >(\n        new CartesianProductGenerator3<T1, T2, T3>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder3& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n};  // class CartesianProductHolder3\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4>\nclass CartesianProductHolder4 {\n public:\nCartesianProductHolder4(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {}\n  
template <typename T1, typename T2, typename T3, typename T4>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4> >(\n        new CartesianProductGenerator4<T1, T2, T3, T4>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder4& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n};  // class CartesianProductHolder4\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5>\nclass CartesianProductHolder5 {\n public:\nCartesianProductHolder5(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5> >(\n        new CartesianProductGenerator5<T1, T2, T3, T4, T5>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder5& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const Generator5 g5_;\n};  // class CartesianProductHolder5\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5, 
class Generator6>\nclass CartesianProductHolder6 {\n public:\nCartesianProductHolder6(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5,\n    const Generator6& g6)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5,\n      typename T6>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6> >(\n        new CartesianProductGenerator6<T1, T2, T3, T4, T5, T6>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_),\n        static_cast<ParamGenerator<T6> >(g6_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder6& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const Generator5 g5_;\n  const Generator6 g6_;\n};  // class CartesianProductHolder6\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5, class Generator6, class Generator7>\nclass CartesianProductHolder7 {\n public:\nCartesianProductHolder7(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5,\n    const Generator6& g6, const Generator7& g7)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5,\n      typename T6, typename T7>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6,\n      T7> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7> >(\n        new CartesianProductGenerator7<T1, T2, 
T3, T4, T5, T6, T7>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_),\n        static_cast<ParamGenerator<T6> >(g6_),\n        static_cast<ParamGenerator<T7> >(g7_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder7& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const Generator5 g5_;\n  const Generator6 g6_;\n  const Generator7 g7_;\n};  // class CartesianProductHolder7\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5, class Generator6, class Generator7,\n    class Generator8>\nclass CartesianProductHolder8 {\n public:\nCartesianProductHolder8(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5,\n    const Generator6& g6, const Generator7& g7, const Generator8& g8)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7),\n          g8_(g8) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5,\n      typename T6, typename T7, typename T8>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7,\n      T8> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8> >(\n        new CartesianProductGenerator8<T1, T2, T3, T4, T5, T6, T7, T8>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_),\n        static_cast<ParamGenerator<T6> >(g6_),\n        static_cast<ParamGenerator<T7> >(g7_),\n        static_cast<ParamGenerator<T8> >(g8_)));\n  }\n\n private:\n  
// No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder8& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const Generator5 g5_;\n  const Generator6 g6_;\n  const Generator7 g7_;\n  const Generator8 g8_;\n};  // class CartesianProductHolder8\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5, class Generator6, class Generator7,\n    class Generator8, class Generator9>\nclass CartesianProductHolder9 {\n public:\nCartesianProductHolder9(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5,\n    const Generator6& g6, const Generator7& g7, const Generator8& g8,\n    const Generator9& g9)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8),\n          g9_(g9) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5,\n      typename T6, typename T7, typename T8, typename T9>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8,\n      T9> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8,\n        T9> >(\n        new CartesianProductGenerator9<T1, T2, T3, T4, T5, T6, T7, T8, T9>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_),\n        static_cast<ParamGenerator<T6> >(g6_),\n        static_cast<ParamGenerator<T7> >(g7_),\n        static_cast<ParamGenerator<T8> >(g8_),\n        static_cast<ParamGenerator<T9> >(g9_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder9& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const 
Generator5 g5_;\n  const Generator6 g6_;\n  const Generator7 g7_;\n  const Generator8 g8_;\n  const Generator9 g9_;\n};  // class CartesianProductHolder9\n\ntemplate <class Generator1, class Generator2, class Generator3,\n    class Generator4, class Generator5, class Generator6, class Generator7,\n    class Generator8, class Generator9, class Generator10>\nclass CartesianProductHolder10 {\n public:\nCartesianProductHolder10(const Generator1& g1, const Generator2& g2,\n    const Generator3& g3, const Generator4& g4, const Generator5& g5,\n    const Generator6& g6, const Generator7& g7, const Generator8& g8,\n    const Generator9& g9, const Generator10& g10)\n      : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8),\n          g9_(g9), g10_(g10) {}\n  template <typename T1, typename T2, typename T3, typename T4, typename T5,\n      typename T6, typename T7, typename T8, typename T9, typename T10>\n  operator ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8,\n      T9, T10> >() const {\n    return ParamGenerator< ::std::tr1::tuple<T1, T2, T3, T4, T5, T6, T7, T8,\n        T9, T10> >(\n        new CartesianProductGenerator10<T1, T2, T3, T4, T5, T6, T7, T8, T9,\n            T10>(\n        static_cast<ParamGenerator<T1> >(g1_),\n        static_cast<ParamGenerator<T2> >(g2_),\n        static_cast<ParamGenerator<T3> >(g3_),\n        static_cast<ParamGenerator<T4> >(g4_),\n        static_cast<ParamGenerator<T5> >(g5_),\n        static_cast<ParamGenerator<T6> >(g6_),\n        static_cast<ParamGenerator<T7> >(g7_),\n        static_cast<ParamGenerator<T8> >(g8_),\n        static_cast<ParamGenerator<T9> >(g9_),\n        static_cast<ParamGenerator<T10> >(g10_)));\n  }\n\n private:\n  // No implementation - assignment is unsupported.\n  void operator=(const CartesianProductHolder10& other);\n\n  const Generator1 g1_;\n  const Generator2 g2_;\n  const Generator3 g3_;\n  const Generator4 g4_;\n  const Generator5 g5_;\n  const Generator6 g6_;\n  
const Generator7 g7_;\n  const Generator8 g8_;\n  const Generator9 g9_;\n  const Generator10 g10_;\n};  // class CartesianProductHolder10\n\n# endif  // GTEST_HAS_COMBINE\n\n}  // namespace internal\n}  // namespace testing\n\n#endif  //  GTEST_HAS_PARAM_TEST\n\n#endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_\n\n#if GTEST_HAS_PARAM_TEST\n\nnamespace testing {\n\n// Functions producing parameter generators.\n//\n// Google Test uses these generators to produce parameters for value-\n// parameterized tests. When a parameterized test case is instantiated\n// with a particular generator, Google Test creates and runs tests\n// for each element in the sequence produced by the generator.\n//\n// In the following sample, tests from test case FooTest are instantiated\n// each three times with parameter values 3, 5, and 8:\n//\n// class FooTest : public TestWithParam<int> { ... };\n//\n// TEST_P(FooTest, TestThis) {\n// }\n// TEST_P(FooTest, TestThat) {\n// }\n// INSTANTIATE_TEST_CASE_P(TestSequence, FooTest, Values(3, 5, 8));\n//\n\n// Range() returns generators providing sequences of values in a range.\n//\n// Synopsis:\n// Range(start, end)\n//   - returns a generator producing a sequence of values {start, start+1,\n//     start+2, ..., }.\n// Range(start, end, step)\n//   - returns a generator producing a sequence of values {start, start+step,\n//     start+step+step, ..., }.\n// Notes:\n//   * The generated sequences never include end. For example, Range(1, 5)\n//     returns a generator producing a sequence {1, 2, 3, 4}. Range(1, 9, 2)\n//     returns a generator producing {1, 3, 5, 7}.\n//   * start and end must have the same type. 
That type may be any integral or\n//     floating-point type or a user defined type satisfying these conditions:\n//     * It must be assignable (have operator=() defined).\n//     * It must have operator+() (operator+(int-compatible type) for\n//       two-operand version).\n//     * It must have operator<() defined.\n//     Elements in the resulting sequences will also have that type.\n//   * Condition start < end must be satisfied in order for resulting sequences\n//     to contain any elements.\n//\ntemplate <typename T, typename IncrementT>\ninternal::ParamGenerator<T> Range(T start, T end, IncrementT step) {\n  return internal::ParamGenerator<T>(\n      new internal::RangeGenerator<T, IncrementT>(start, end, step));\n}\n\ntemplate <typename T>\ninternal::ParamGenerator<T> Range(T start, T end) {\n  return Range(start, end, 1);\n}\n\n// ValuesIn() function allows generation of tests with parameters coming from\n// a container.\n//\n// Synopsis:\n// ValuesIn(const T (&array)[N])\n//   - returns a generator producing sequences with elements from\n//     a C-style array.\n// ValuesIn(const Container& container)\n//   - returns a generator producing sequences with elements from\n//     an STL-style container.\n// ValuesIn(Iterator begin, Iterator end)\n//   - returns a generator producing sequences with elements from\n//     a range [begin, end) defined by a pair of STL-style iterators. 
These\n//     iterators can also be plain C pointers.\n//\n// Please note that ValuesIn copies the values from the containers\n// passed in and keeps them to generate tests in RUN_ALL_TESTS().\n//\n// Examples:\n//\n// This instantiates tests from test case StringTest\n// each with C-string values of \"foo\", \"bar\", and \"baz\":\n//\n// const char* strings[] = {\"foo\", \"bar\", \"baz\"};\n// INSTANTIATE_TEST_CASE_P(StringSequence, StringTest, ValuesIn(strings));\n//\n// This instantiates tests from test case StlStringTest\n// each with STL strings with values \"a\" and \"b\":\n//\n// ::std::vector< ::std::string> GetParameterStrings() {\n//   ::std::vector< ::std::string> v;\n//   v.push_back(\"a\");\n//   v.push_back(\"b\");\n//   return v;\n// }\n//\n// INSTANTIATE_TEST_CASE_P(CharSequence,\n//                         StlStringTest,\n//                         ValuesIn(GetParameterStrings()));\n//\n//\n// This will also instantiate tests from CharTest\n// each with parameter values 'a' and 'b':\n//\n// ::std::list<char> GetParameterChars() {\n//   ::std::list<char> list;\n//   list.push_back('a');\n//   list.push_back('b');\n//   return list;\n// }\n// ::std::list<char> l = GetParameterChars();\n// INSTANTIATE_TEST_CASE_P(CharSequence2,\n//                         CharTest,\n//                         ValuesIn(l.begin(), l.end()));\n//\ntemplate <typename ForwardIterator>\ninternal::ParamGenerator<\n  typename ::testing::internal::IteratorTraits<ForwardIterator>::value_type>\nValuesIn(ForwardIterator begin, ForwardIterator end) {\n  typedef typename ::testing::internal::IteratorTraits<ForwardIterator>\n      ::value_type ParamType;\n  return internal::ParamGenerator<ParamType>(\n      new internal::ValuesInIteratorRangeGenerator<ParamType>(begin, end));\n}\n\ntemplate <typename T, size_t N>\ninternal::ParamGenerator<T> ValuesIn(const T (&array)[N]) {\n  return ValuesIn(array, array + N);\n}\n\ntemplate <class Container>\ninternal::ParamGenerator<typename 
Container::value_type> ValuesIn(\n    const Container& container) {\n  return ValuesIn(container.begin(), container.end());\n}\n\n// Values() allows generating tests from explicitly specified list of\n// parameters.\n//\n// Synopsis:\n// Values(T v1, T v2, ..., T vN)\n//   - returns a generator producing sequences with elements v1, v2, ..., vN.\n//\n// For example, this instantiates tests from test case BarTest each\n// with values \"one\", \"two\", and \"three\":\n//\n// INSTANTIATE_TEST_CASE_P(NumSequence, BarTest, Values(\"one\", \"two\", \"three\"));\n//\n// This instantiates tests from test case BazTest each with values 1, 2, 3.5.\n// The exact type of values will depend on the type of parameter in BazTest.\n//\n// INSTANTIATE_TEST_CASE_P(FloatingNumbers, BazTest, Values(1, 2, 3.5));\n//\n// Currently, Values() supports from 1 to 50 parameters.\n//\ntemplate <typename T1>\ninternal::ValueArray1<T1> Values(T1 v1) {\n  return internal::ValueArray1<T1>(v1);\n}\n\ntemplate <typename T1, typename T2>\ninternal::ValueArray2<T1, T2> Values(T1 v1, T2 v2) {\n  return internal::ValueArray2<T1, T2>(v1, v2);\n}\n\ntemplate <typename T1, typename T2, typename T3>\ninternal::ValueArray3<T1, T2, T3> Values(T1 v1, T2 v2, T3 v3) {\n  return internal::ValueArray3<T1, T2, T3>(v1, v2, v3);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4>\ninternal::ValueArray4<T1, T2, T3, T4> Values(T1 v1, T2 v2, T3 v3, T4 v4) {\n  return internal::ValueArray4<T1, T2, T3, T4>(v1, v2, v3, v4);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5>\ninternal::ValueArray5<T1, T2, T3, T4, T5> Values(T1 v1, T2 v2, T3 v3, T4 v4,\n    T5 v5) {\n  return internal::ValueArray5<T1, T2, T3, T4, T5>(v1, v2, v3, v4, v5);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6>\ninternal::ValueArray6<T1, T2, T3, T4, T5, T6> Values(T1 v1, T2 v2, T3 v3,\n    T4 v4, T5 v5, T6 v6) {\n  return internal::ValueArray6<T1, T2, T3, 
T4, T5, T6>(v1, v2, v3, v4, v5, v6);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7>\ninternal::ValueArray7<T1, T2, T3, T4, T5, T6, T7> Values(T1 v1, T2 v2, T3 v3,\n    T4 v4, T5 v5, T6 v6, T7 v7) {\n  return internal::ValueArray7<T1, T2, T3, T4, T5, T6, T7>(v1, v2, v3, v4, v5,\n      v6, v7);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8>\ninternal::ValueArray8<T1, T2, T3, T4, T5, T6, T7, T8> Values(T1 v1, T2 v2,\n    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8) {\n  return internal::ValueArray8<T1, T2, T3, T4, T5, T6, T7, T8>(v1, v2, v3, v4,\n      v5, v6, v7, v8);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9>\ninternal::ValueArray9<T1, T2, T3, T4, T5, T6, T7, T8, T9> Values(T1 v1, T2 v2,\n    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9) {\n  return internal::ValueArray9<T1, T2, T3, T4, T5, T6, T7, T8, T9>(v1, v2, v3,\n      v4, v5, v6, v7, v8, v9);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10>\ninternal::ValueArray10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> Values(T1 v1,\n    T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10) {\n  return internal::ValueArray10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10>(v1,\n      v2, v3, v4, v5, v6, v7, v8, v9, v10);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11>\ninternal::ValueArray11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10,\n    T11> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n    T10 v10, T11 v11) {\n  return internal::ValueArray11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10,\n      T11>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, 
v11);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12>\ninternal::ValueArray12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n    T12> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n    T10 v10, T11 v11, T12 v12) {\n  return internal::ValueArray12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13>\ninternal::ValueArray13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,\n    T13> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n    T10 v10, T11 v11, T12 v12, T13 v13) {\n  return internal::ValueArray13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14>\ninternal::ValueArray14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,\n    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14) {\n  return internal::ValueArray14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13,\n      v14);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15>\ninternal::ValueArray15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15> Values(T1 v1, 
T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8,\n    T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) {\n  return internal::ValueArray15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12,\n      v13, v14, v15);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16>\ninternal::ValueArray16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,\n    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,\n    T16 v16) {\n  return internal::ValueArray16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11,\n      v12, v13, v14, v15, v16);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17>\ninternal::ValueArray17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,\n    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,\n    T16 v16, T17 v17) {\n  return internal::ValueArray17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16, T17>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10,\n      v11, v12, v13, v14, v15, v16, v17);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18>\ninternal::ValueArray18<T1, 
T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6,
    T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,
    T16 v16, T17 v17, T18 v18) {
  return internal::ValueArray18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18>(v1, v2, v3, v4, v5, v6, v7, v8, v9,
      v10, v11, v12, v13, v14, v15, v16, v17, v18);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19>
internal::ValueArray19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5,
    T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14,
    T15 v15, T16 v16, T17 v17, T18 v18, T19 v19) {
  return internal::ValueArray19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19>(v1, v2, v3, v4, v5, v6, v7, v8,
      v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20>
internal::ValueArray20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20> Values(T1 v1, T2 v2, T3 v3, T4 v4,
    T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13,
    T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20) {
  return internal::ValueArray20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20>(v1, v2, v3, v4, v5, v6, v7,
      v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21>
internal::ValueArray21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21> Values(T1 v1, T2 v2, T3 v3, T4 v4,
    T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13,
    T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21) {
  return internal::ValueArray21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21>(v1, v2, v3, v4, v5, v6,
      v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22>
internal::ValueArray22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22> Values(T1 v1, T2 v2, T3 v3,
    T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22) {
  return internal::ValueArray22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22>(v1, v2, v3, v4,
      v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19,
      v20, v21, v22);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23>
internal::ValueArray23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23> Values(T1 v1, T2 v2,
    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22, T23 v23) {
  return internal::ValueArray23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23>(v1, v2, v3,
      v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19,
      v20, v21, v22, v23);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24>
internal::ValueArray24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> Values(T1 v1, T2 v2,
    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22, T23 v23, T24 v24) {
  return internal::ValueArray24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24>(v1, v2,
      v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18,
      v19, v20, v21, v22, v23, v24);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25>
internal::ValueArray25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> Values(T1 v1,
    T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11,
    T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19,
    T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25) {
  return internal::ValueArray25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25>(v1,
      v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17,
      v18, v19, v20, v21, v22, v23, v24, v25);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26>
internal::ValueArray26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
    T26> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26) {
  return internal::ValueArray26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15,
      v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27>
internal::ValueArray27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
    T27> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27) {
  return internal::ValueArray27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14,
      v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28>
internal::ValueArray28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
    T28> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28) {
  return internal::ValueArray28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13,
      v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27,
      v28);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29>
internal::ValueArray29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28, T29 v29) {
  return internal::ValueArray29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12,
      v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26,
      v27, v28, v29);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30>
internal::ValueArray30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8,
    T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16,
    T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24,
    T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) {
  return internal::ValueArray30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11,
      v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25,
      v26, v27, v28, v29, v30);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31>
internal::ValueArray31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,
    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,
    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,
    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) {
  return internal::ValueArray31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10,
      v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24,
      v25, v26, v27, v28, v29, v30, v31);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32>
internal::ValueArray32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,
    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,
    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,
    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31,
    T32 v32) {
  return internal::ValueArray32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32>(v1, v2, v3, v4, v5, v6, v7, v8, v9,
      v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23,
      v24, v25, v26, v27, v28, v29, v30, v31, v32);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33>
internal::ValueArray33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6,
    T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,
    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,
    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31,
    T32 v32, T33 v33) {
  return internal::ValueArray33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33>(v1, v2, v3, v4, v5, v6, v7, v8,
      v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23,
      v24, v25, v26, v27, v28, v29, v30, v31, v32, v33);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34>
internal::ValueArray34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5,
    T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14,
    T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22,
    T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30,
    T31 v31, T32 v32, T33 v33, T34 v34) {
  return internal::ValueArray34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34>(v1, v2, v3, v4, v5, v6, v7,
      v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22,
      v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35>
internal::ValueArray35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35> Values(T1 v1, T2 v2, T3 v3, T4 v4,
    T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13,
    T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21,
    T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29,
    T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35) {
  return internal::ValueArray35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35>(v1, v2, v3, v4, v5, v6,
      v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21,
      v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36>
internal::ValueArray36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36> Values(T1 v1, T2 v2, T3 v3, T4 v4,
    T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13,
    T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21,
    T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29,
    T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36) {
  return internal::ValueArray36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36>(v1, v2, v3, v4,
      v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19,
      v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33,
      v34, v35, v36);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37>
internal::ValueArray37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37> Values(T1 v1, T2 v2, T3 v3,
    T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28,
    T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36,
    T37 v37) {
  return internal::ValueArray37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37>(v1, v2, v3,
      v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19,
      v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33,
      v34, v35, v36, v37);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38>
internal::ValueArray38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> Values(T1 v1, T2 v2,
    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28,
    T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36,
    T37 v37, T38 v38) {
  return internal::ValueArray38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38>(v1, v2,
      v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18,
      v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32,
      v33, v34, v35, v36, v37, v38);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39>
internal::ValueArray39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> Values(T1 v1, T2 v2,
    T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12,
    T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20,
    T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28,
    T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36,
    T37 v37, T38 v38, T39 v39) {
  return internal::ValueArray39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39>(v1,
      v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17,
      v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31,
      v32, v33, v34, v35, v36, v37, v38, v39);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40>
internal::ValueArray40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> Values(T1 v1,
    T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11,
    T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19,
    T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27,
    T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35,
    T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) {
  return internal::ValueArray40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15,
      v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29,
      v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41>
internal::ValueArray41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
    T41> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
    T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41) {
  return internal::ValueArray41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40, T41>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14,
      v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28,
      v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42>
internal::ValueArray42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
    T42> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
    T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,
    T42 v42) {
  return internal::ValueArray42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40, T41, T42>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13,
      v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27,
      v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41,
      v42);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43>
internal::ValueArray43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
    T43> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
    T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,
    T42 v42, T43 v43) {
  return internal::ValueArray43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40, T41, T42, T43>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12,
      v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26,
      v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40,
      v41, v42, v43);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44>
internal::ValueArray44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
    T44> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9,
    T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17,
    T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25,
    T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33,
    T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41,
    T42 v42, T43 v43, T44 v44) {
  return internal::ValueArray44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40, T41, T42, T43, T44>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11,
      v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25,
      v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39,
      v40, v41, v42, v43, v44);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45>
internal::ValueArray45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
    T44, T45> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8,
    T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16,
    T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24,
    T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32,
    T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40,
    T41 v41, T42 v42, T43 v43, T44 v44, T45 v45) {
  return internal::ValueArray45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
      T40, T41, T42, T43, T44, T45>(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10,
      v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24,
      v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38,
      v39, v40, v41, v42, v43, v44, v45);
}

template <typename T1, typename T2, typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8, typename T9, typename T10,
    typename T11, typename T12, typename T13, typename T14, typename T15,
    typename T16, typename T17, typename T18, typename T19, typename T20,
    typename T21, typename T22, typename T23, typename T24, typename T25,
    typename T26, typename T27, typename T28, typename T29, typename T30,
    typename T31, typename T32, typename T33, typename T34, typename T35,
    typename T36, typename T37, typename T38, typename T39, typename T40,
    typename T41, typename T42, typename T43, typename T44, typename T45,
    typename T46>
internal::ValueArray46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
    T44, T45, T46> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,
    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,
    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,
    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31,
    T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39,
    T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) {
  return internal::ValueArray46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, 
T24, T25,\n      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,\n      T40, T41, T42, T43, T44, T45, T46>(v1, v2, v3, v4, v5, v6, v7, v8, v9,\n      v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23,\n      v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37,\n      v38, v39, v40, v41, v42, v43, v44, v45, v46);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47>\ninternal::ValueArray47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    T44, T45, T46, T47> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7,\n    T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,\n    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,\n    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31,\n    T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39,\n    T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) {\n  return internal::ValueArray47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,\n      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, 
T36, T37, T38, T39,\n      T40, T41, T42, T43, T44, T45, T46, T47>(v1, v2, v3, v4, v5, v6, v7, v8,\n      v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23,\n      v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37,\n      v38, v39, v40, v41, v42, v43, v44, v45, v46, v47);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48>\ninternal::ValueArray48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    T44, T45, T46, T47, T48> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6,\n    T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15,\n    T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23,\n    T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31,\n    T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39,\n    T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47,\n    T48 v48) {\n  return internal::ValueArray48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,\n      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,\n    
  T40, T41, T42, T43, T44, T45, T46, T47, T48>(v1, v2, v3, v4, v5, v6, v7,\n      v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22,\n      v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36,\n      v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49>\ninternal::ValueArray49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    T44, T45, T46, T47, T48, T49> Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5,\n    T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14,\n    T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22,\n    T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30,\n    T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38,\n    T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46,\n    T47 v47, T48 v48, T49 v49) {\n  return internal::ValueArray49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,\n      T26, T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, 
T38, T39,\n      T40, T41, T42, T43, T44, T45, T46, T47, T48, T49>(v1, v2, v3, v4, v5, v6,\n      v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21,\n      v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35,\n      v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48, v49);\n}\n\ntemplate <typename T1, typename T2, typename T3, typename T4, typename T5,\n    typename T6, typename T7, typename T8, typename T9, typename T10,\n    typename T11, typename T12, typename T13, typename T14, typename T15,\n    typename T16, typename T17, typename T18, typename T19, typename T20,\n    typename T21, typename T22, typename T23, typename T24, typename T25,\n    typename T26, typename T27, typename T28, typename T29, typename T30,\n    typename T31, typename T32, typename T33, typename T34, typename T35,\n    typename T36, typename T37, typename T38, typename T39, typename T40,\n    typename T41, typename T42, typename T43, typename T44, typename T45,\n    typename T46, typename T47, typename T48, typename T49, typename T50>\ninternal::ValueArray50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,\n    T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,\n    T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,\n    T44, T45, T46, T47, T48, T49, T50> Values(T1 v1, T2 v2, T3 v3, T4 v4,\n    T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13,\n    T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21,\n    T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29,\n    T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37,\n    T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45,\n    T46 v46, T47 v47, T48 v48, T49 v49, T50 v50) {\n  return internal::ValueArray50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,\n      T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,\n      T26, 
T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,\n      T40, T41, T42, T43, T44, T45, T46, T47, T48, T49, T50>(v1, v2, v3, v4,\n      v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19,\n      v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33,\n      v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47,\n      v48, v49, v50);\n}\n\n// Bool() allows generating tests with parameters in a set of (false, true).\n//\n// Synopsis:\n// Bool()\n//   - returns a generator producing sequences with elements {false, true}.\n//\n// It is useful when testing code that depends on Boolean flags. Combinations\n// of multiple flags can be tested when several Bool()'s are combined using\n// the Combine() function.\n//\n// In the following example all tests in the test case FlagDependentTest\n// will be instantiated twice with parameters false and true.\n//\n// class FlagDependentTest : public testing::TestWithParam<bool> {\n//   virtual void SetUp() {\n//     external_flag = GetParam();\n//   }\n// };\n// INSTANTIATE_TEST_CASE_P(BoolSequence, FlagDependentTest, Bool());\n//\ninline internal::ParamGenerator<bool> Bool() {\n  return Values(false, true);\n}\n\n# if GTEST_HAS_COMBINE\n// Combine() allows the user to combine two or more sequences to produce\n// values of a Cartesian product of those sequences' elements.\n//\n// Synopsis:\n// Combine(gen1, gen2, ..., genN)\n//   - returns a generator producing sequences with elements coming from\n//     the Cartesian product of elements from the sequences generated by\n//     gen1, gen2, ..., genN. The sequence elements will have a type of\n//     tuple<T1, T2, ..., TN> where T1, T2, ..., TN are the types\n//     of elements from sequences produced by gen1, gen2, ..., genN.\n//\n// Combine can have up to 10 arguments. 
This number is currently limited\n// by the maximum number of elements in the tuple implementation used by Google\n// Test.\n//\n// Example:\n//\n// This will instantiate tests in test case AnimalTest, each with\n// the parameter values tuple(\"cat\", BLACK), tuple(\"cat\", WHITE),\n// tuple(\"dog\", BLACK), and tuple(\"dog\", WHITE):\n//\n// enum Color { BLACK, GRAY, WHITE };\n// class AnimalTest\n//     : public testing::TestWithParam<tuple<const char*, Color> > {...};\n//\n// TEST_P(AnimalTest, AnimalLooksNice) {...}\n//\n// INSTANTIATE_TEST_CASE_P(AnimalVariations, AnimalTest,\n//                         Combine(Values(\"cat\", \"dog\"),\n//                                 Values(BLACK, WHITE)));\n//\n// This will instantiate tests in FlagDependentTest with all variations of two\n// Boolean flags:\n//\n// class FlagDependentTest\n//     : public testing::TestWithParam<tuple<bool, bool> > {\n//   virtual void SetUp() {\n//     // Assigns external_flag_1 and external_flag_2 values from the tuple.\n//     tie(external_flag_1, external_flag_2) = GetParam();\n//   }\n// };\n//\n// TEST_P(FlagDependentTest, TestFeature1) {\n//   // Test your code using external_flag_1 and external_flag_2 here.\n// }\n// INSTANTIATE_TEST_CASE_P(TwoBoolSequence, FlagDependentTest,\n//                         Combine(Bool(), Bool()));\n//\ntemplate <typename Generator1, typename Generator2>\ninternal::CartesianProductHolder2<Generator1, Generator2> Combine(\n    const Generator1& g1, const Generator2& g2) {\n  return internal::CartesianProductHolder2<Generator1, Generator2>(\n      g1, g2);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3>\ninternal::CartesianProductHolder3<Generator1, Generator2, Generator3> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3) {\n  return internal::CartesianProductHolder3<Generator1, Generator2, Generator3>(\n      g1, g2, g3);\n}\n\ntemplate <typename Generator1, typename Generator2, typename 
Generator3,\n    typename Generator4>\ninternal::CartesianProductHolder4<Generator1, Generator2, Generator3,\n    Generator4> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4) {\n  return internal::CartesianProductHolder4<Generator1, Generator2, Generator3,\n      Generator4>(\n      g1, g2, g3, g4);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5>\ninternal::CartesianProductHolder5<Generator1, Generator2, Generator3,\n    Generator4, Generator5> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5) {\n  return internal::CartesianProductHolder5<Generator1, Generator2, Generator3,\n      Generator4, Generator5>(\n      g1, g2, g3, g4, g5);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5, typename Generator6>\ninternal::CartesianProductHolder6<Generator1, Generator2, Generator3,\n    Generator4, Generator5, Generator6> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5, const Generator6& g6) {\n  return internal::CartesianProductHolder6<Generator1, Generator2, Generator3,\n      Generator4, Generator5, Generator6>(\n      g1, g2, g3, g4, g5, g6);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5, typename Generator6,\n    typename Generator7>\ninternal::CartesianProductHolder7<Generator1, Generator2, Generator3,\n    Generator4, Generator5, Generator6, Generator7> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5, const Generator6& g6,\n        const Generator7& g7) {\n  return internal::CartesianProductHolder7<Generator1, Generator2, 
Generator3,\n      Generator4, Generator5, Generator6, Generator7>(\n      g1, g2, g3, g4, g5, g6, g7);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5, typename Generator6,\n    typename Generator7, typename Generator8>\ninternal::CartesianProductHolder8<Generator1, Generator2, Generator3,\n    Generator4, Generator5, Generator6, Generator7, Generator8> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5, const Generator6& g6,\n        const Generator7& g7, const Generator8& g8) {\n  return internal::CartesianProductHolder8<Generator1, Generator2, Generator3,\n      Generator4, Generator5, Generator6, Generator7, Generator8>(\n      g1, g2, g3, g4, g5, g6, g7, g8);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5, typename Generator6,\n    typename Generator7, typename Generator8, typename Generator9>\ninternal::CartesianProductHolder9<Generator1, Generator2, Generator3,\n    Generator4, Generator5, Generator6, Generator7, Generator8,\n    Generator9> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5, const Generator6& g6,\n        const Generator7& g7, const Generator8& g8, const Generator9& g9) {\n  return internal::CartesianProductHolder9<Generator1, Generator2, Generator3,\n      Generator4, Generator5, Generator6, Generator7, Generator8, Generator9>(\n      g1, g2, g3, g4, g5, g6, g7, g8, g9);\n}\n\ntemplate <typename Generator1, typename Generator2, typename Generator3,\n    typename Generator4, typename Generator5, typename Generator6,\n    typename Generator7, typename Generator8, typename Generator9,\n    typename Generator10>\ninternal::CartesianProductHolder10<Generator1, Generator2, Generator3,\n    Generator4, Generator5, Generator6, 
Generator7, Generator8, Generator9,\n    Generator10> Combine(\n    const Generator1& g1, const Generator2& g2, const Generator3& g3,\n        const Generator4& g4, const Generator5& g5, const Generator6& g6,\n        const Generator7& g7, const Generator8& g8, const Generator9& g9,\n        const Generator10& g10) {\n  return internal::CartesianProductHolder10<Generator1, Generator2, Generator3,\n      Generator4, Generator5, Generator6, Generator7, Generator8, Generator9,\n      Generator10>(\n      g1, g2, g3, g4, g5, g6, g7, g8, g9, g10);\n}\n# endif  // GTEST_HAS_COMBINE\n\n\n\n# define TEST_P(test_case_name, test_name) \\\n  class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \\\n      : public test_case_name { \\\n   public: \\\n    GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {} \\\n    virtual void TestBody(); \\\n   private: \\\n    static int AddToRegistry() { \\\n      ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \\\n          GetTestCasePatternHolder<test_case_name>(\\\n              #test_case_name, __FILE__, __LINE__)->AddTestPattern(\\\n                  #test_case_name, \\\n                  #test_name, \\\n                  new ::testing::internal::TestMetaFactory< \\\n                      GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>()); \\\n      return 0; \\\n    } \\\n    static int gtest_registering_dummy_; \\\n    GTEST_DISALLOW_COPY_AND_ASSIGN_(\\\n        GTEST_TEST_CLASS_NAME_(test_case_name, test_name)); \\\n  }; \\\n  int GTEST_TEST_CLASS_NAME_(test_case_name, \\\n                             test_name)::gtest_registering_dummy_ = \\\n      GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::AddToRegistry(); \\\n  void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody()\n\n# define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator) \\\n  ::testing::internal::ParamGenerator<test_case_name::ParamType> \\\n      gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } 
\\\n  int gtest_##prefix##test_case_name##_dummy_ = \\\n      ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \\\n          GetTestCasePatternHolder<test_case_name>(\\\n              #test_case_name, __FILE__, __LINE__)->AddTestCaseInstantiation(\\\n                  #prefix, \\\n                  &gtest_##prefix##test_case_name##_EvalGenerator_, \\\n                  __FILE__, __LINE__)\n\n}  // namespace testing\n\n#endif  // GTEST_HAS_PARAM_TEST\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_\n// Copyright 2006, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n//\n// Google C++ Testing Framework definitions useful in production code.\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_PROD_H_\n#define GTEST_INCLUDE_GTEST_GTEST_PROD_H_\n\n// When you need to test the private or protected members of a class,\n// use the FRIEND_TEST macro to declare your tests as friends of the\n// class.  For example:\n//\n// class MyClass {\n//  private:\n//   void MyMethod();\n//   FRIEND_TEST(MyClassTest, MyMethod);\n// };\n//\n// class MyClassTest : public testing::Test {\n//   // ...\n// };\n//\n// TEST_F(MyClassTest, MyMethod) {\n//   // Can call MyClass::MyMethod() here.\n// }\n\n#define FRIEND_TEST(test_case_name, test_name)\\\nfriend class test_case_name##_##test_name##_Test\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_PROD_H_\n// Copyright 2008, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: mheule@google.com (Markus Heule)\n//\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_\n#define GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_\n\n#include <iosfwd>\n#include <vector>\n\nnamespace testing {\n\n// A copyable object representing the result of a test part (i.e. an\n// assertion or an explicit FAIL(), ADD_FAILURE(), or SUCCEED()).\n//\n// Don't inherit from TestPartResult as its destructor is not virtual.\nclass GTEST_API_ TestPartResult {\n public:\n  // The possible outcomes of a test part (i.e. an assertion or an\n  // explicit SUCCEED(), FAIL(), or ADD_FAILURE()).\n  enum Type {\n    kSuccess,          // Succeeded.\n    kNonFatalFailure,  // Failed but the test can continue.\n    kFatalFailure      // Failed and the test should be terminated.\n  };\n\n  // C'tor.  
TestPartResult does NOT have a default constructor.\n  // Always use this constructor (with parameters) to create a\n  // TestPartResult object.\n  TestPartResult(Type a_type,\n                 const char* a_file_name,\n                 int a_line_number,\n                 const char* a_message)\n      : type_(a_type),\n        file_name_(a_file_name),\n        line_number_(a_line_number),\n        summary_(ExtractSummary(a_message)),\n        message_(a_message) {\n  }\n\n  // Gets the outcome of the test part.\n  Type type() const { return type_; }\n\n  // Gets the name of the source file where the test part took place, or\n  // NULL if it's unknown.\n  const char* file_name() const { return file_name_.c_str(); }\n\n  // Gets the line in the source file where the test part took place,\n  // or -1 if it's unknown.\n  int line_number() const { return line_number_; }\n\n  // Gets the summary of the failure message.\n  const char* summary() const { return summary_.c_str(); }\n\n  // Gets the message associated with the test part.\n  const char* message() const { return message_.c_str(); }\n\n  // Returns true iff the test part passed.\n  bool passed() const { return type_ == kSuccess; }\n\n  // Returns true iff the test part failed.\n  bool failed() const { return type_ != kSuccess; }\n\n  // Returns true iff the test part non-fatally failed.\n  bool nonfatally_failed() const { return type_ == kNonFatalFailure; }\n\n  // Returns true iff the test part fatally failed.\n  bool fatally_failed() const { return type_ == kFatalFailure; }\n private:\n  Type type_;\n\n  // Gets the summary of the failure message by omitting the stack\n  // trace in it.\n  static internal::String ExtractSummary(const char* message);\n\n  // The name of the source file where the test part took place, or\n  // NULL if the source file is unknown.\n  internal::String file_name_;\n  // The line in the source file where the test part took place, or -1\n  // if the line number is unknown.\n  int 
line_number_;\n  internal::String summary_;  // The test failure summary.\n  internal::String message_;  // The test failure message.\n};\n\n// Prints a TestPartResult object.\nstd::ostream& operator<<(std::ostream& os, const TestPartResult& result);\n\n// An array of TestPartResult objects.\n//\n// Don't inherit from TestPartResultArray as its destructor is not\n// virtual.\nclass GTEST_API_ TestPartResultArray {\n public:\n  TestPartResultArray() {}\n\n  // Appends the given TestPartResult to the array.\n  void Append(const TestPartResult& result);\n\n  // Returns the TestPartResult at the given index (0-based).\n  const TestPartResult& GetTestPartResult(int index) const;\n\n  // Returns the number of TestPartResult objects in the array.\n  int size() const;\n\n private:\n  std::vector<TestPartResult> array_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestPartResultArray);\n};\n\n// This interface knows how to report a test part result.\nclass TestPartResultReporterInterface {\n public:\n  virtual ~TestPartResultReporterInterface() {}\n\n  virtual void ReportTestPartResult(const TestPartResult& result) = 0;\n};\n\nnamespace internal {\n\n// This helper class is used by {ASSERT|EXPECT}_NO_FATAL_FAILURE to check if a\n// statement generates new fatal failures. To do so it registers itself as the\n// current test part result reporter. 
Besides checking if fatal failures were\n// reported, it only delegates the reporting to the former result reporter.\n// The original result reporter is restored in the destructor.\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nclass GTEST_API_ HasNewFatalFailureHelper\n    : public TestPartResultReporterInterface {\n public:\n  HasNewFatalFailureHelper();\n  virtual ~HasNewFatalFailureHelper();\n  virtual void ReportTestPartResult(const TestPartResult& result);\n  bool has_new_fatal_failure() const { return has_new_fatal_failure_; }\n private:\n  bool has_new_fatal_failure_;\n  TestPartResultReporterInterface* original_reporter_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(HasNewFatalFailureHelper);\n};\n\n}  // namespace internal\n\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_\n// Copyright 2008 Google Inc.\n// All Rights Reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n//\n// Author: wan@google.com (Zhanyong Wan)\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_\n#define GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_\n\n// This header implements typed tests and type-parameterized tests.\n\n// Typed (aka type-driven) tests repeat the same test for types in a\n// list.  You must know which types you want to test with when writing\n// typed tests. Here's how you do it:\n\n#if 0\n\n// First, define a fixture class template.  It should be parameterized\n// by a type.  Remember to derive it from testing::Test.\ntemplate <typename T>\nclass FooTest : public testing::Test {\n public:\n  ...\n  typedef std::list<T> List;\n  static T shared_;\n  T value_;\n};\n\n// Next, associate a list of types with the test case, which will be\n// repeated for each type in the list.  
The typedef is necessary for\n// the macro to parse correctly.\ntypedef testing::Types<char, int, unsigned int> MyTypes;\nTYPED_TEST_CASE(FooTest, MyTypes);\n\n// If the type list contains only one type, you can write that type\n// directly without Types<...>:\n//   TYPED_TEST_CASE(FooTest, int);\n\n// Then, use TYPED_TEST() instead of TEST_F() to define as many typed\n// tests for this test case as you want.\nTYPED_TEST(FooTest, DoesBlah) {\n  // Inside a test, refer to TypeParam to get the type parameter.\n  // Since we are inside a derived class template, C++ requires us to\n  // visit the members of FooTest via 'this'.\n  TypeParam n = this->value_;\n\n  // To visit static members of the fixture, add the TestFixture::\n  // prefix.\n  n += TestFixture::shared_;\n\n  // To refer to typedefs in the fixture, add the \"typename\n  // TestFixture::\" prefix.\n  typename TestFixture::List values;\n  values.push_back(n);\n  ...\n}\n\nTYPED_TEST(FooTest, HasPropertyA) { ... }\n\n#endif  // 0\n\n// Type-parameterized tests are abstract test patterns parameterized\n// by a type.  Compared with typed tests, type-parameterized tests\n// allow you to define the test pattern without knowing what the type\n// parameters are.  The defined pattern can be instantiated with\n// different types any number of times, in any number of translation\n// units.\n//\n// If you are designing an interface or concept, you can define a\n// suite of type-parameterized tests to verify properties that any\n// valid implementation of the interface/concept should have.  Then,\n// each implementation can easily instantiate the test suite to verify\n// that it conforms to the requirements, without having to write\n// similar tests repeatedly.  Here's an example:\n\n#if 0\n\n// First, define a fixture class template.  It should be parameterized\n// by a type.  
Remember to derive it from testing::Test.\ntemplate <typename T>\nclass FooTest : public testing::Test {\n  ...\n};\n\n// Next, declare that you will define a type-parameterized test case\n// (the _P suffix is for \"parameterized\" or \"pattern\", whichever you\n// prefer):\nTYPED_TEST_CASE_P(FooTest);\n\n// Then, use TYPED_TEST_P() to define as many type-parameterized tests\n// for this type-parameterized test case as you want.\nTYPED_TEST_P(FooTest, DoesBlah) {\n  // Inside a test, refer to TypeParam to get the type parameter.\n  TypeParam n = 0;\n  ...\n}\n\nTYPED_TEST_P(FooTest, HasPropertyA) { ... }\n\n// Now the tricky part: you need to register all test patterns before\n// you can instantiate them.  The first argument of the macro is the\n// test case name; the rest are the names of the tests in this test\n// case.\nREGISTER_TYPED_TEST_CASE_P(FooTest,\n                           DoesBlah, HasPropertyA);\n\n// Finally, you are free to instantiate the pattern with the types you\n// want.  If you put the above code in a header file, you can #include\n// it in multiple C++ source files and instantiate it multiple times.\n//\n// To distinguish different instances of the pattern, the first\n// argument to the INSTANTIATE_* macro is a prefix that will be added\n// to the actual test case name.  
Remember to pick unique prefixes for\n// different instances.\ntypedef testing::Types<char, int, unsigned int> MyTypes;\nINSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);\n\n// If the type list contains only one type, you can write that type\n// directly without Types<...>:\n//   INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);\n\n#endif  // 0\n\n\n// Implements typed tests.\n\n#if GTEST_HAS_TYPED_TEST\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Expands to the name of the typedef for the type parameters of the\n// given test case.\n# define GTEST_TYPE_PARAMS_(TestCaseName) gtest_type_params_##TestCaseName##_\n\n// The 'Types' template argument below must have spaces around it\n// since some compilers may choke on '>>' when passing a template\n// instance (e.g. Types<int>)\n# define TYPED_TEST_CASE(CaseName, Types) \\\n  typedef ::testing::internal::TypeList< Types >::type \\\n      GTEST_TYPE_PARAMS_(CaseName)\n\n# define TYPED_TEST(CaseName, TestName) \\\n  template <typename gtest_TypeParam_> \\\n  class GTEST_TEST_CLASS_NAME_(CaseName, TestName) \\\n      : public CaseName<gtest_TypeParam_> { \\\n   private: \\\n    typedef CaseName<gtest_TypeParam_> TestFixture; \\\n    typedef gtest_TypeParam_ TypeParam; \\\n    virtual void TestBody(); \\\n  }; \\\n  bool gtest_##CaseName##_##TestName##_registered_ GTEST_ATTRIBUTE_UNUSED_ = \\\n      ::testing::internal::TypeParameterizedTest< \\\n          CaseName, \\\n          ::testing::internal::TemplateSel< \\\n              GTEST_TEST_CLASS_NAME_(CaseName, TestName)>, \\\n          GTEST_TYPE_PARAMS_(CaseName)>::Register(\\\n              \"\", #CaseName, #TestName, 0); \\\n  template <typename gtest_TypeParam_> \\\n  void GTEST_TEST_CLASS_NAME_(CaseName, TestName)<gtest_TypeParam_>::TestBody()\n\n#endif  // GTEST_HAS_TYPED_TEST\n\n// Implements type-parameterized tests.\n\n#if GTEST_HAS_TYPED_TEST_P\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Expands to the namespace name 
that the type-parameterized tests for\n// the given type-parameterized test case are defined in.  The exact\n// name of the namespace is subject to change without notice.\n# define GTEST_CASE_NAMESPACE_(TestCaseName) \\\n  gtest_case_##TestCaseName##_\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n//\n// Expands to the name of the variable used to remember the names of\n// the defined tests in the given test case.\n# define GTEST_TYPED_TEST_CASE_P_STATE_(TestCaseName) \\\n  gtest_typed_test_case_p_state_##TestCaseName##_\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE DIRECTLY.\n//\n// Expands to the name of the variable used to remember the names of\n// the registered tests in the given test case.\n# define GTEST_REGISTERED_TEST_NAMES_(TestCaseName) \\\n  gtest_registered_test_names_##TestCaseName##_\n\n// The variables defined in the type-parameterized test macros are\n// static as typically these macros are used in a .h file that can be\n// #included in multiple translation units linked together.\n# define TYPED_TEST_CASE_P(CaseName) \\\n  static ::testing::internal::TypedTestCasePState \\\n      GTEST_TYPED_TEST_CASE_P_STATE_(CaseName)\n\n# define TYPED_TEST_P(CaseName, TestName) \\\n  namespace GTEST_CASE_NAMESPACE_(CaseName) { \\\n  template <typename gtest_TypeParam_> \\\n  class TestName : public CaseName<gtest_TypeParam_> { \\\n   private: \\\n    typedef CaseName<gtest_TypeParam_> TestFixture; \\\n    typedef gtest_TypeParam_ TypeParam; \\\n    virtual void TestBody(); \\\n  }; \\\n  static bool gtest_##TestName##_defined_ GTEST_ATTRIBUTE_UNUSED_ = \\\n      GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).AddTestName(\\\n          __FILE__, __LINE__, #CaseName, #TestName); \\\n  } \\\n  template <typename gtest_TypeParam_> \\\n  void GTEST_CASE_NAMESPACE_(CaseName)::TestName<gtest_TypeParam_>::TestBody()\n\n# define REGISTER_TYPED_TEST_CASE_P(CaseName, ...) 
\\\n  namespace GTEST_CASE_NAMESPACE_(CaseName) { \\\n  typedef ::testing::internal::Templates<__VA_ARGS__>::type gtest_AllTests_; \\\n  } \\\n  static const char* const GTEST_REGISTERED_TEST_NAMES_(CaseName) = \\\n      GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).VerifyRegisteredTestNames(\\\n          __FILE__, __LINE__, #__VA_ARGS__)\n\n// The 'Types' template argument below must have spaces around it\n// since some compilers may choke on '>>' when passing a template\n// instance (e.g. Types<int>)\n# define INSTANTIATE_TYPED_TEST_CASE_P(Prefix, CaseName, Types) \\\n  bool gtest_##Prefix##_##CaseName GTEST_ATTRIBUTE_UNUSED_ = \\\n      ::testing::internal::TypeParameterizedTestCase<CaseName, \\\n          GTEST_CASE_NAMESPACE_(CaseName)::gtest_AllTests_, \\\n          ::testing::internal::TypeList< Types >::type>::Register(\\\n              #Prefix, #CaseName, GTEST_REGISTERED_TEST_NAMES_(CaseName))\n\n#endif  // GTEST_HAS_TYPED_TEST_P\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_\n\n// Depending on the platform, different string classes are available.\n// On Linux, in addition to ::std::string, Google also makes use of\n// class ::string, which has the same interface as ::std::string, but\n// has a different implementation.\n//\n// The user can define GTEST_HAS_GLOBAL_STRING to 1 to indicate that\n// ::string is available AND is a distinct type from ::std::string, or\n// define it to 0 to indicate otherwise.\n//\n// If the user's ::std::string and ::string are the same class due to\n// aliasing, he should define GTEST_HAS_GLOBAL_STRING to 0.\n//\n// If the user doesn't define GTEST_HAS_GLOBAL_STRING, it is defined\n// heuristically.\n\nnamespace testing {\n\n// Declares the flags.\n\n// This flag temporarily enables the disabled tests.\nGTEST_DECLARE_bool_(also_run_disabled_tests);\n\n// This flag brings up the debugger on an assertion failure.\nGTEST_DECLARE_bool_(break_on_failure);\n\n// This flag controls whether Google Test catches all test-thrown 
exceptions\n// and logs them as failures.\nGTEST_DECLARE_bool_(catch_exceptions);\n\n// This flag enables using colors in terminal output. Available values are\n// \"yes\" to enable colors, \"no\" to disable colors, or \"auto\" (the default)\n// to let Google Test decide.\nGTEST_DECLARE_string_(color);\n\n// This flag sets up the filter to select, by name and using a glob pattern,\n// the tests to run. If the filter is not given, all tests are executed.\nGTEST_DECLARE_string_(filter);\n\n// This flag causes Google Test to list tests. None of the tests listed\n// are actually run if the flag is provided.\nGTEST_DECLARE_bool_(list_tests);\n\n// This flag controls whether Google Test emits a detailed XML report to a file\n// in addition to its normal textual output.\nGTEST_DECLARE_string_(output);\n\n// This flag controls whether Google Test prints the elapsed time for each\n// test.\nGTEST_DECLARE_bool_(print_time);\n\n// This flag specifies the random number seed.\nGTEST_DECLARE_int32_(random_seed);\n\n// This flag sets how many times the tests are repeated. The default value\n// is 1. 
If the value is -1, the tests are repeated forever.\nGTEST_DECLARE_int32_(repeat);\n\n// This flag controls whether Google Test includes Google Test internal\n// stack frames in failure stack traces.\nGTEST_DECLARE_bool_(show_internal_stack_frames);\n\n// When this flag is specified, tests' order is randomized on every iteration.\nGTEST_DECLARE_bool_(shuffle);\n\n// This flag specifies the maximum number of stack frames to be\n// printed in a failure message.\nGTEST_DECLARE_int32_(stack_trace_depth);\n\n// When this flag is specified, a failed assertion will throw an\n// exception if exceptions are enabled, or exit the program with a\n// non-zero code otherwise.\nGTEST_DECLARE_bool_(throw_on_failure);\n\n// When this flag is set with a \"host:port\" string, on supported\n// platforms test results are streamed to the specified port on\n// the specified host machine.\nGTEST_DECLARE_string_(stream_result_to);\n\n// The upper limit for valid stack trace depths.\nconst int kMaxStackTraceDepth = 100;\n\nnamespace internal {\n\nclass AssertHelper;\nclass DefaultGlobalTestPartResultReporter;\nclass ExecDeathTest;\nclass NoExecDeathTest;\nclass FinalSuccessChecker;\nclass GTestFlagSaver;\nclass TestResultAccessor;\nclass TestEventListenersAccessor;\nclass TestEventRepeater;\nclass WindowsDeathTest;\nclass UnitTestImpl* GetUnitTestImpl();\nvoid ReportFailureInUnknownLocation(TestPartResult::Type result_type,\n                                    const String& message);\n\n// Converts a streamable value to a String.  A NULL pointer is\n// converted to \"(null)\".  
When the input value is a ::string,\n// ::std::string, ::wstring, or ::std::wstring object, each NUL\n// character in it is replaced with \"\\\\0\".\n// Declared in gtest-internal.h but defined here, so that it has access\n// to the definition of the Message class, required by the ARM\n// compiler.\ntemplate <typename T>\nString StreamableToString(const T& streamable) {\n  return (Message() << streamable).GetString();\n}\n\n}  // namespace internal\n\n// The friend relationship of some of these classes is cyclic.\n// If we don't forward-declare them, the compiler might confuse the classes\n// in friendship clauses with same-named classes in the scope.\nclass Test;\nclass TestCase;\nclass TestInfo;\nclass UnitTest;\n\n// A class for indicating whether an assertion was successful.  When\n// the assertion wasn't successful, the AssertionResult object\n// remembers a non-empty message that describes how it failed.\n//\n// To create an instance of this class, use one of the factory functions\n// (AssertionSuccess() and AssertionFailure()).\n//\n// This class is useful for two purposes:\n//   1. Defining predicate functions to be used with Boolean test assertions\n//      EXPECT_TRUE/EXPECT_FALSE and their ASSERT_ counterparts\n//   2. 
Defining predicate-format functions to be\n//      used with predicate assertions (ASSERT_PRED_FORMAT*, etc).\n//\n// For example, if you define IsEven predicate:\n//\n//   testing::AssertionResult IsEven(int n) {\n//     if ((n % 2) == 0)\n//       return testing::AssertionSuccess();\n//     else\n//       return testing::AssertionFailure() << n << \" is odd\";\n//   }\n//\n// Then the failed expectation EXPECT_TRUE(IsEven(Fib(5)))\n// will print the message\n//\n//   Value of: IsEven(Fib(5))\n//     Actual: false (5 is odd)\n//   Expected: true\n//\n// instead of a more opaque\n//\n//   Value of: IsEven(Fib(5))\n//     Actual: false\n//   Expected: true\n//\n// in case IsEven is a simple Boolean predicate.\n//\n// If you expect your predicate to be reused and want to support informative\n// messages in EXPECT_FALSE and ASSERT_FALSE (negative assertions show up\n// about half as often as positive ones in our tests), supply messages for\n// both success and failure cases:\n//\n//   testing::AssertionResult IsEven(int n) {\n//     if ((n % 2) == 0)\n//       return testing::AssertionSuccess() << n << \" is even\";\n//     else\n//       return testing::AssertionFailure() << n << \" is odd\";\n//   }\n//\n// Then a statement EXPECT_FALSE(IsEven(Fib(6))) will print\n//\n//   Value of: IsEven(Fib(6))\n//     Actual: true (8 is even)\n//   Expected: false\n//\n// NB: Predicates that support negative Boolean assertions have reduced\n// performance in positive ones so be careful not to use them in tests\n// that have lots (tens of thousands) of positive Boolean assertions.\n//\n// To use this class with EXPECT_PRED_FORMAT assertions such as:\n//\n//   // Verifies that Foo() returns an even number.\n//   EXPECT_PRED_FORMAT1(IsEven, Foo());\n//\n// you need to define:\n//\n//   testing::AssertionResult IsEven(const char* expr, int n) {\n//     if ((n % 2) == 0)\n//       return testing::AssertionSuccess();\n//     else\n//       return testing::AssertionFailure()\n//        
 << \"Expected: \" << expr << \" is even\\n  Actual: it's \" << n;\n//   }\n//\n// If Foo() returns 5, you will see the following message:\n//\n//   Expected: Foo() is even\n//     Actual: it's 5\n//\nclass GTEST_API_ AssertionResult {\n public:\n  // Copy constructor.\n  // Used in EXPECT_TRUE/FALSE(assertion_result).\n  AssertionResult(const AssertionResult& other);\n  // Used in the EXPECT_TRUE/FALSE(bool_expression).\n  explicit AssertionResult(bool success) : success_(success) {}\n\n  // Returns true iff the assertion succeeded.\n  operator bool() const { return success_; }  // NOLINT\n\n  // Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE.\n  AssertionResult operator!() const;\n\n  // Returns the text streamed into this AssertionResult. Test assertions\n  // use it when they fail (i.e., the predicate's outcome doesn't match the\n  // assertion's expectation). When nothing has been streamed into the\n  // object, returns an empty string.\n  const char* message() const {\n    return message_.get() != NULL ?  
message_->c_str() : \"\";\n  }\n  // TODO(vladl@google.com): Remove this after making sure no clients use it.\n  // Deprecated; please use message() instead.\n  const char* failure_message() const { return message(); }\n\n  // Streams a custom failure message into this object.\n  template <typename T> AssertionResult& operator<<(const T& value) {\n    AppendMessage(Message() << value);\n    return *this;\n  }\n\n  // Allows streaming basic output manipulators such as endl or flush into\n  // this object.\n  AssertionResult& operator<<(\n      ::std::ostream& (*basic_manipulator)(::std::ostream& stream)) {\n    AppendMessage(Message() << basic_manipulator);\n    return *this;\n  }\n\n private:\n  // Appends the contents of message to message_.\n  void AppendMessage(const Message& a_message) {\n    if (message_.get() == NULL)\n      message_.reset(new ::std::string);\n    message_->append(a_message.GetString().c_str());\n  }\n\n  // Stores result of the assertion predicate.\n  bool success_;\n  // Stores the message describing the condition in case the expectation\n  // construct is not satisfied with the predicate's outcome.\n  // Referenced via a pointer to avoid taking too much stack frame space\n  // with test assertions.\n  internal::scoped_ptr< ::std::string> message_;\n\n  GTEST_DISALLOW_ASSIGN_(AssertionResult);\n};\n\n// Makes a successful assertion result.\nGTEST_API_ AssertionResult AssertionSuccess();\n\n// Makes a failed assertion result.\nGTEST_API_ AssertionResult AssertionFailure();\n\n// Makes a failed assertion result with the given failure message.\n// Deprecated; use AssertionFailure() << msg.\nGTEST_API_ AssertionResult AssertionFailure(const Message& msg);\n\n// The abstract class that all tests inherit from.\n//\n// In Google Test, a unit test program contains one or many TestCases, and\n// each TestCase contains one or many Tests.\n//\n// When you define a test using the TEST macro, you don't need to\n// explicitly derive from Test - the TEST 
macro automatically does\n// this for you.\n//\n// The only time you derive from Test is when defining a test fixture\n// to be used in TEST_F.  For example:\n//\n//   class FooTest : public testing::Test {\n//    protected:\n//     virtual void SetUp() { ... }\n//     virtual void TearDown() { ... }\n//     ...\n//   };\n//\n//   TEST_F(FooTest, Bar) { ... }\n//   TEST_F(FooTest, Baz) { ... }\n//\n// Test is not copyable.\nclass GTEST_API_ Test {\n public:\n  friend class TestInfo;\n\n  // Defines types for pointers to functions that set up and tear down\n  // a test case.\n  typedef internal::SetUpTestCaseFunc SetUpTestCaseFunc;\n  typedef internal::TearDownTestCaseFunc TearDownTestCaseFunc;\n\n  // The d'tor is virtual as we intend to inherit from Test.\n  virtual ~Test();\n\n  // Sets up the stuff shared by all tests in this test case.\n  //\n  // Google Test will call Foo::SetUpTestCase() before running the first\n  // test in test case Foo.  Hence a sub-class can define its own\n  // SetUpTestCase() method to shadow the one defined in the super\n  // class.\n  static void SetUpTestCase() {}\n\n  // Tears down the stuff shared by all tests in this test case.\n  //\n  // Google Test will call Foo::TearDownTestCase() after running the last\n  // test in test case Foo.  Hence a sub-class can define its own\n  // TearDownTestCase() method to shadow the one defined in the super\n  // class.\n  static void TearDownTestCase() {}\n\n  // Returns true iff the current test has a fatal failure.\n  static bool HasFatalFailure();\n\n  // Returns true iff the current test has a non-fatal failure.\n  static bool HasNonfatalFailure();\n\n  // Returns true iff the current test has a failure (either fatal or\n  // non-fatal).\n  static bool HasFailure() { return HasFatalFailure() || HasNonfatalFailure(); }\n\n  // Logs a property for the current test.  
Only the last value for a given\n  // key is remembered.\n  // These are public static so they can be called from utility functions\n  // that are not members of the test fixture.\n  // The arguments are const char* instead of strings, as Google Test is used\n  // on platforms where string doesn't compile.\n  //\n  // Note that a driving consideration for these RecordProperty methods\n  // was to produce xml output suited to the Greenspan charting utility,\n  // which at present will only chart values that fit in a 32-bit int. It\n  // is the user's responsibility to restrict their values to 32-bit ints\n  // if they intend them to be used with Greenspan.\n  static void RecordProperty(const char* key, const char* value);\n  static void RecordProperty(const char* key, int value);\n\n protected:\n  // Creates a Test object.\n  Test();\n\n  // Sets up the test fixture.\n  virtual void SetUp();\n\n  // Tears down the test fixture.\n  virtual void TearDown();\n\n private:\n  // Returns true iff the current test has the same fixture class as\n  // the first test in the current test case.\n  static bool HasSameFixtureClass();\n\n  // Runs the test after the test fixture has been set up.\n  //\n  // A sub-class must implement this to define the test logic.\n  //\n  // DO NOT OVERRIDE THIS FUNCTION DIRECTLY IN A USER PROGRAM.\n  // Instead, use the TEST or TEST_F macro.\n  virtual void TestBody() = 0;\n\n  // Sets up, executes, and tears down the test.\n  void Run();\n\n  // Deletes self.  We deliberately pick an unusual name for this\n  // internal method to avoid clashing with names used in user TESTs.\n  void DeleteSelf_() { delete this; }\n\n  // Uses a GTestFlagSaver to save and restore all Google Test flags.\n  const internal::GTestFlagSaver* const gtest_flag_saver_;\n\n  // Often a user mis-spells SetUp() as Setup() and spends a long time\n  // wondering why it is never called by Google Test.  
The declaration of\n  // the following method is solely for catching such an error at\n  // compile time:\n  //\n  //   - The return type is deliberately chosen to be not void, so it\n  //   will be a conflict if a user declares void Setup() in his test\n  //   fixture.\n  //\n  //   - This method is private, so it will be another compiler error\n  //   if a user calls it from his test fixture.\n  //\n  // DO NOT OVERRIDE THIS FUNCTION.\n  //\n  // If you see an error about overriding the following function or\n  // about it being private, you have mis-spelled SetUp() as Setup().\n  struct Setup_should_be_spelled_SetUp {};\n  virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; }\n\n  // We disallow copying Tests.\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(Test);\n};\n\ntypedef internal::TimeInMillis TimeInMillis;\n\n// A copyable object representing a user specified test property which can be\n// output as a key/value string pair.\n//\n// Don't inherit from TestProperty as its destructor is not virtual.\nclass TestProperty {\n public:\n  // C'tor.  TestProperty does NOT have a default constructor.\n  // Always use this constructor (with parameters) to create a\n  // TestProperty object.\n  TestProperty(const char* a_key, const char* a_value) :\n    key_(a_key), value_(a_value) {\n  }\n\n  // Gets the user supplied key.\n  const char* key() const {\n    return key_.c_str();\n  }\n\n  // Gets the user supplied value.\n  const char* value() const {\n    return value_.c_str();\n  }\n\n  // Sets a new value, overriding the one supplied in the constructor.\n  void SetValue(const char* new_value) {\n    value_ = new_value;\n  }\n\n private:\n  // The key supplied by the user.\n  internal::String key_;\n  // The value supplied by the user.\n  internal::String value_;\n};\n\n// The result of a single Test.  
This includes a list of\n// TestPartResults, a list of TestProperties, a count of how many\n// death tests there are in the Test, and how much time it took to run\n// the Test.\n//\n// TestResult is not copyable.\nclass GTEST_API_ TestResult {\n public:\n  // Creates an empty TestResult.\n  TestResult();\n\n  // D'tor.  Do not inherit from TestResult.\n  ~TestResult();\n\n  // Gets the number of all test parts.  This is the sum of the number\n  // of successful test parts and the number of failed test parts.\n  int total_part_count() const;\n\n  // Returns the number of test properties.\n  int test_property_count() const;\n\n  // Returns true iff the test passed (i.e. no test part failed).\n  bool Passed() const { return !Failed(); }\n\n  // Returns true iff the test failed.\n  bool Failed() const;\n\n  // Returns true iff the test fatally failed.\n  bool HasFatalFailure() const;\n\n  // Returns true iff the test has a non-fatal failure.\n  bool HasNonfatalFailure() const;\n\n  // Returns the elapsed time, in milliseconds.\n  TimeInMillis elapsed_time() const { return elapsed_time_; }\n\n  // Returns the i-th test part result among all the results. i can range\n  // from 0 to total_part_count() - 1. If i is not in that range, aborts\n  // the program.\n  const TestPartResult& GetTestPartResult(int i) const;\n\n  // Returns the i-th test property. i can range from 0 to\n  // test_property_count() - 1. 
If i is not in that range, aborts the\n  // program.\n  const TestProperty& GetTestProperty(int i) const;\n\n private:\n  friend class TestInfo;\n  friend class UnitTest;\n  friend class internal::DefaultGlobalTestPartResultReporter;\n  friend class internal::ExecDeathTest;\n  friend class internal::TestResultAccessor;\n  friend class internal::UnitTestImpl;\n  friend class internal::WindowsDeathTest;\n\n  // Gets the vector of TestPartResults.\n  const std::vector<TestPartResult>& test_part_results() const {\n    return test_part_results_;\n  }\n\n  // Gets the vector of TestProperties.\n  const std::vector<TestProperty>& test_properties() const {\n    return test_properties_;\n  }\n\n  // Sets the elapsed time.\n  void set_elapsed_time(TimeInMillis elapsed) { elapsed_time_ = elapsed; }\n\n  // Adds a test property to the list. The property is validated and may add\n  // a non-fatal failure if invalid (e.g., if it conflicts with reserved\n  // key names). If a property is already recorded for the same key, the\n  // value will be updated, rather than storing multiple values for the same\n  // key.\n  void RecordProperty(const TestProperty& test_property);\n\n  // Adds a failure if the key is a reserved attribute of Google Test\n  // testcase tags.  
Returns true if the property is valid.\n  // TODO(russr): Validate attribute names are legal and human readable.\n  static bool ValidateTestProperty(const TestProperty& test_property);\n\n  // Adds a test part result to the list.\n  void AddTestPartResult(const TestPartResult& test_part_result);\n\n  // Returns the death test count.\n  int death_test_count() const { return death_test_count_; }\n\n  // Increments the death test count, returning the new count.\n  int increment_death_test_count() { return ++death_test_count_; }\n\n  // Clears the test part results.\n  void ClearTestPartResults();\n\n  // Clears the object.\n  void Clear();\n\n  // Protects mutable state of the property vector and of owned\n  // properties, whose values may be updated.\n  internal::Mutex test_properites_mutex_;\n\n  // The vector of TestPartResults\n  std::vector<TestPartResult> test_part_results_;\n  // The vector of TestProperties\n  std::vector<TestProperty> test_properties_;\n  // Running count of death tests.\n  int death_test_count_;\n  // The elapsed time, in milliseconds.\n  TimeInMillis elapsed_time_;\n\n  // We disallow copying TestResult.\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestResult);\n};  // class TestResult\n\n// A TestInfo object stores the following information about a test:\n//\n//   Test case name\n//   Test name\n//   Whether the test should be run\n//   A function pointer that creates the test object when invoked\n//   Test result\n//\n// The constructor of TestInfo registers itself with the UnitTest\n// singleton such that the RUN_ALL_TESTS() macro knows which tests to\n// run.\nclass GTEST_API_ TestInfo {\n public:\n  // Destructs a TestInfo object.  
This function is not virtual, so\n  // don't inherit from TestInfo.\n  ~TestInfo();\n\n  // Returns the test case name.\n  const char* test_case_name() const { return test_case_name_.c_str(); }\n\n  // Returns the test name.\n  const char* name() const { return name_.c_str(); }\n\n  // Returns the name of the parameter type, or NULL if this is not a typed\n  // or a type-parameterized test.\n  const char* type_param() const {\n    if (type_param_.get() != NULL)\n      return type_param_->c_str();\n    return NULL;\n  }\n\n  // Returns the text representation of the value parameter, or NULL if this\n  // is not a value-parameterized test.\n  const char* value_param() const {\n    if (value_param_.get() != NULL)\n      return value_param_->c_str();\n    return NULL;\n  }\n\n  // Returns true if this test should run, that is if the test is not disabled\n  // (or it is disabled but the also_run_disabled_tests flag has been specified)\n  // and its full name matches the user-specified filter.\n  //\n  // Google Test allows the user to filter the tests by their full names.\n  // The full name of a test Bar in test case Foo is defined as\n  // \"Foo.Bar\".  Only the tests that match the filter will run.\n  //\n  // A filter is a colon-separated list of glob (not regex) patterns,\n  // optionally followed by a '-' and a colon-separated list of\n  // negative patterns (tests to exclude).  
A test is run if it\n  // matches one of the positive patterns and does not match any of\n  // the negative patterns.\n  //\n  // For example, *A*:Foo.* is a filter that matches any string that\n  // contains the character 'A' or starts with \"Foo.\".\n  bool should_run() const { return should_run_; }\n\n  // Returns the result of the test.\n  const TestResult* result() const { return &result_; }\n\n private:\n\n#if GTEST_HAS_DEATH_TEST\n  friend class internal::DefaultDeathTestFactory;\n#endif  // GTEST_HAS_DEATH_TEST\n  friend class Test;\n  friend class TestCase;\n  friend class internal::UnitTestImpl;\n  friend TestInfo* internal::MakeAndRegisterTestInfo(\n      const char* test_case_name, const char* name,\n      const char* type_param,\n      const char* value_param,\n      internal::TypeId fixture_class_id,\n      Test::SetUpTestCaseFunc set_up_tc,\n      Test::TearDownTestCaseFunc tear_down_tc,\n      internal::TestFactoryBase* factory);\n\n  // Constructs a TestInfo object. The newly constructed instance assumes\n  // ownership of the factory object.\n  TestInfo(const char* test_case_name, const char* name,\n           const char* a_type_param,\n           const char* a_value_param,\n           internal::TypeId fixture_class_id,\n           internal::TestFactoryBase* factory);\n\n  // Increments the number of death tests encountered in this test so\n  // far.\n  int increment_death_test_count() {\n    return result_.increment_death_test_count();\n  }\n\n  // Creates the test object, runs it, records its result, and then\n  // deletes it.\n  void Run();\n\n  static void ClearTestResult(TestInfo* test_info) {\n    test_info->result_.Clear();\n  }\n\n  // These fields are immutable properties of the test.\n  const std::string test_case_name_;     // Test case name\n  const std::string name_;               // Test name\n  // Name of the parameter type, or NULL if this is not a typed or a\n  // type-parameterized test.\n  const internal::scoped_ptr<const 
::std::string> type_param_;\n  // Text representation of the value parameter, or NULL if this is not a\n  // value-parameterized test.\n  const internal::scoped_ptr<const ::std::string> value_param_;\n  const internal::TypeId fixture_class_id_;   // ID of the test fixture class\n  bool should_run_;                 // True iff this test should run\n  bool is_disabled_;                // True iff this test is disabled\n  bool matches_filter_;             // True if this test matches the\n                                    // user-specified filter.\n  internal::TestFactoryBase* const factory_;  // The factory that creates\n                                              // the test object\n\n  // This field is mutable and needs to be reset before running the\n  // test for the second time.\n  TestResult result_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestInfo);\n};\n\n// A test case, which consists of a vector of TestInfos.\n//\n// TestCase is not copyable.\nclass GTEST_API_ TestCase {\n public:\n  // Creates a TestCase with the given name.\n  //\n  // TestCase does NOT have a default constructor.  
Always use this\n  // constructor to create a TestCase object.\n  //\n  // Arguments:\n  //\n  //   name:         name of the test case\n  //   a_type_param: the name of the test's type parameter, or NULL if\n  //                 this is not a type-parameterized test.\n  //   set_up_tc:    pointer to the function that sets up the test case\n  //   tear_down_tc: pointer to the function that tears down the test case\n  TestCase(const char* name, const char* a_type_param,\n           Test::SetUpTestCaseFunc set_up_tc,\n           Test::TearDownTestCaseFunc tear_down_tc);\n\n  // Destructor of TestCase.\n  virtual ~TestCase();\n\n  // Gets the name of the TestCase.\n  const char* name() const { return name_.c_str(); }\n\n  // Returns the name of the parameter type, or NULL if this is not a\n  // type-parameterized test case.\n  const char* type_param() const {\n    if (type_param_.get() != NULL)\n      return type_param_->c_str();\n    return NULL;\n  }\n\n  // Returns true if any test in this test case should run.\n  bool should_run() const { return should_run_; }\n\n  // Gets the number of successful tests in this test case.\n  int successful_test_count() const;\n\n  // Gets the number of failed tests in this test case.\n  int failed_test_count() const;\n\n  // Gets the number of disabled tests in this test case.\n  int disabled_test_count() const;\n\n  // Get the number of tests in this test case that should run.\n  int test_to_run_count() const;\n\n  // Gets the number of all tests in this test case.\n  int total_test_count() const;\n\n  // Returns true iff the test case passed.\n  bool Passed() const { return !Failed(); }\n\n  // Returns true iff the test case failed.\n  bool Failed() const { return failed_test_count() > 0; }\n\n  // Returns the elapsed time, in milliseconds.\n  TimeInMillis elapsed_time() const { return elapsed_time_; }\n\n  // Returns the i-th test among all the tests. i can range from 0 to\n  // total_test_count() - 1. 
If i is not in that range, returns NULL.\n  const TestInfo* GetTestInfo(int i) const;\n\n private:\n  friend class Test;\n  friend class internal::UnitTestImpl;\n\n  // Gets the (mutable) vector of TestInfos in this TestCase.\n  std::vector<TestInfo*>& test_info_list() { return test_info_list_; }\n\n  // Gets the (immutable) vector of TestInfos in this TestCase.\n  const std::vector<TestInfo*>& test_info_list() const {\n    return test_info_list_;\n  }\n\n  // Returns the i-th test among all the tests. i can range from 0 to\n  // total_test_count() - 1. If i is not in that range, returns NULL.\n  TestInfo* GetMutableTestInfo(int i);\n\n  // Sets the should_run member.\n  void set_should_run(bool should) { should_run_ = should; }\n\n  // Adds a TestInfo to this test case.  Will delete the TestInfo upon\n  // destruction of the TestCase object.\n  void AddTestInfo(TestInfo * test_info);\n\n  // Clears the results of all tests in this test case.\n  void ClearResult();\n\n  // Clears the results of all tests in the given test case.\n  static void ClearTestCaseResult(TestCase* test_case) {\n    test_case->ClearResult();\n  }\n\n  // Runs every test in this TestCase.\n  void Run();\n\n  // Runs SetUpTestCase() for this TestCase.  This wrapper is needed\n  // for catching exceptions thrown from SetUpTestCase().\n  void RunSetUpTestCase() { (*set_up_tc_)(); }\n\n  // Runs TearDownTestCase() for this TestCase.  
This wrapper is\n  // needed for catching exceptions thrown from TearDownTestCase().\n  void RunTearDownTestCase() { (*tear_down_tc_)(); }\n\n  // Returns true iff test passed.\n  static bool TestPassed(const TestInfo* test_info) {\n    return test_info->should_run() && test_info->result()->Passed();\n  }\n\n  // Returns true iff test failed.\n  static bool TestFailed(const TestInfo* test_info) {\n    return test_info->should_run() && test_info->result()->Failed();\n  }\n\n  // Returns true iff test is disabled.\n  static bool TestDisabled(const TestInfo* test_info) {\n    return test_info->is_disabled_;\n  }\n\n  // Returns true if the given test should run.\n  static bool ShouldRunTest(const TestInfo* test_info) {\n    return test_info->should_run();\n  }\n\n  // Shuffles the tests in this test case.\n  void ShuffleTests(internal::Random* random);\n\n  // Restores the test order to before the first shuffle.\n  void UnshuffleTests();\n\n  // Name of the test case.\n  internal::String name_;\n  // Name of the parameter type, or NULL if this is not a typed or a\n  // type-parameterized test.\n  const internal::scoped_ptr<const ::std::string> type_param_;\n  // The vector of TestInfos in their original order.  It owns the\n  // elements in the vector.\n  std::vector<TestInfo*> test_info_list_;\n  // Provides a level of indirection for the test list to allow easy\n  // shuffling and restoring the test order.  
The i-th element in this\n  // vector is the index of the i-th test in the shuffled test list.\n  std::vector<int> test_indices_;\n  // Pointer to the function that sets up the test case.\n  Test::SetUpTestCaseFunc set_up_tc_;\n  // Pointer to the function that tears down the test case.\n  Test::TearDownTestCaseFunc tear_down_tc_;\n  // True iff any test in this test case should run.\n  bool should_run_;\n  // Elapsed time, in milliseconds.\n  TimeInMillis elapsed_time_;\n\n  // We disallow copying TestCases.\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestCase);\n};\n\n// An Environment object is capable of setting up and tearing down an\n// environment.  The user should subclass this to define his own\n// environment(s).\n//\n// An Environment object does the set-up and tear-down in virtual\n// methods SetUp() and TearDown() instead of the constructor and the\n// destructor, as:\n//\n//   1. You cannot safely throw from a destructor.  This is a problem\n//      as in some cases Google Test is used where exceptions are enabled, and\n//      we may want to implement ASSERT_* using exceptions where they are\n//      available.\n//   2. You cannot use ASSERT_* directly in a constructor or\n//      destructor.\nclass Environment {\n public:\n  // The d'tor is virtual as we need to subclass Environment.\n  virtual ~Environment() {}\n\n  // Override this to define how to set up the environment.\n  virtual void SetUp() {}\n\n  // Override this to define how to tear down the environment.\n  virtual void TearDown() {}\n private:\n  // If you see an error about overriding the following function or\n  // about it being private, you have mis-spelled SetUp() as Setup().\n  struct Setup_should_be_spelled_SetUp {};\n  virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; }\n};\n\n// The interface for tracing execution of tests. 
The methods are organized in\n// the order the corresponding events are fired.\nclass TestEventListener {\n public:\n  virtual ~TestEventListener() {}\n\n  // Fired before any test activity starts.\n  virtual void OnTestProgramStart(const UnitTest& unit_test) = 0;\n\n  // Fired before each iteration of tests starts.  There may be more than\n  // one iteration if GTEST_FLAG(repeat) is set. iteration is the iteration\n  // index, starting from 0.\n  virtual void OnTestIterationStart(const UnitTest& unit_test,\n                                    int iteration) = 0;\n\n  // Fired before environment set-up for each iteration of tests starts.\n  virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test) = 0;\n\n  // Fired after environment set-up for each iteration of tests ends.\n  virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test) = 0;\n\n  // Fired before the test case starts.\n  virtual void OnTestCaseStart(const TestCase& test_case) = 0;\n\n  // Fired before the test starts.\n  virtual void OnTestStart(const TestInfo& test_info) = 0;\n\n  // Fired after a failed assertion or a SUCCEED() invocation.\n  virtual void OnTestPartResult(const TestPartResult& test_part_result) = 0;\n\n  // Fired after the test ends.\n  virtual void OnTestEnd(const TestInfo& test_info) = 0;\n\n  // Fired after the test case ends.\n  virtual void OnTestCaseEnd(const TestCase& test_case) = 0;\n\n  // Fired before environment tear-down for each iteration of tests starts.\n  virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test) = 0;\n\n  // Fired after environment tear-down for each iteration of tests ends.\n  virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test) = 0;\n\n  // Fired after each iteration of tests finishes.\n  virtual void OnTestIterationEnd(const UnitTest& unit_test,\n                                  int iteration) = 0;\n\n  // Fired after all test activities have ended.\n  virtual void OnTestProgramEnd(const UnitTest& unit_test) = 
0;\n};\n\n// The convenience class for users who need to override just one or two\n// methods and are not concerned that a possible change to a signature of\n// the methods they override will not be caught during the build.  For\n// comments about each method please see the definition of TestEventListener\n// above.\nclass EmptyTestEventListener : public TestEventListener {\n public:\n  virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestIterationStart(const UnitTest& /*unit_test*/,\n                                    int /*iteration*/) {}\n  virtual void OnEnvironmentsSetUpStart(const UnitTest& /*unit_test*/) {}\n  virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestCaseStart(const TestCase& /*test_case*/) {}\n  virtual void OnTestStart(const TestInfo& /*test_info*/) {}\n  virtual void OnTestPartResult(const TestPartResult& /*test_part_result*/) {}\n  virtual void OnTestEnd(const TestInfo& /*test_info*/) {}\n  virtual void OnTestCaseEnd(const TestCase& /*test_case*/) {}\n  virtual void OnEnvironmentsTearDownStart(const UnitTest& /*unit_test*/) {}\n  virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {}\n  virtual void OnTestIterationEnd(const UnitTest& /*unit_test*/,\n                                  int /*iteration*/) {}\n  virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {}\n};\n\n// TestEventListeners lets users add listeners to track events in Google Test.\nclass GTEST_API_ TestEventListeners {\n public:\n  TestEventListeners();\n  ~TestEventListeners();\n\n  // Appends an event listener to the end of the list. Google Test assumes\n  // the ownership of the listener (i.e. it will delete the listener when\n  // the test program finishes).\n  void Append(TestEventListener* listener);\n\n  // Removes the given event listener from the list and returns it.  It then\n  // becomes the caller's responsibility to delete the listener. 
Returns\n  // NULL if the listener is not found in the list.\n  TestEventListener* Release(TestEventListener* listener);\n\n  // Returns the standard listener responsible for the default console\n  // output.  Can be removed from the listeners list to shut down default\n  // console output.  Note that removing this object from the listener list\n  // with Release transfers its ownership to the caller and makes this\n  // function return NULL the next time.\n  TestEventListener* default_result_printer() const {\n    return default_result_printer_;\n  }\n\n  // Returns the standard listener responsible for the default XML output\n  // controlled by the --gtest_output=xml flag.  Can be removed from the\n  // listeners list by users who want to shut down the default XML output\n  // controlled by this flag and substitute it with custom one.  Note that\n  // removing this object from the listener list with Release transfers its\n  // ownership to the caller and makes this function return NULL the next\n  // time.\n  TestEventListener* default_xml_generator() const {\n    return default_xml_generator_;\n  }\n\n private:\n  friend class TestCase;\n  friend class TestInfo;\n  friend class internal::DefaultGlobalTestPartResultReporter;\n  friend class internal::NoExecDeathTest;\n  friend class internal::TestEventListenersAccessor;\n  friend class internal::UnitTestImpl;\n\n  // Returns repeater that broadcasts the TestEventListener events to all\n  // subscribers.\n  TestEventListener* repeater();\n\n  // Sets the default_result_printer attribute to the provided listener.\n  // The listener is also added to the listener list and previous\n  // default_result_printer is removed from it and deleted. The listener can\n  // also be NULL in which case it will not be added to the list. 
Does\n  // nothing if the previous and the current listener objects are the same.\n  void SetDefaultResultPrinter(TestEventListener* listener);\n\n  // Sets the default_xml_generator attribute to the provided listener.  The\n  // listener is also added to the listener list and previous\n  // default_xml_generator is removed from it and deleted. The listener can\n  // also be NULL in which case it will not be added to the list. Does\n  // nothing if the previous and the current listener objects are the same.\n  void SetDefaultXmlGenerator(TestEventListener* listener);\n\n  // Controls whether events will be forwarded by the repeater to the\n  // listeners in the list.\n  bool EventForwardingEnabled() const;\n  void SuppressEventForwarding();\n\n  // The actual list of listeners.\n  internal::TestEventRepeater* repeater_;\n  // Listener responsible for the standard result output.\n  TestEventListener* default_result_printer_;\n  // Listener responsible for the creation of the XML output file.\n  TestEventListener* default_xml_generator_;\n\n  // We disallow copying TestEventListeners.\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventListeners);\n};\n\n// A UnitTest consists of a vector of TestCases.\n//\n// This is a singleton class.  The only instance of UnitTest is\n// created when UnitTest::GetInstance() is first called.  This\n// instance is never deleted.\n//\n// UnitTest is not copyable.\n//\n// This class is thread-safe as long as the methods are called\n// according to their specification.\nclass GTEST_API_ UnitTest {\n public:\n  // Gets the singleton UnitTest object.  
The first time this method\n  // is called, a UnitTest object is constructed and returned.\n  // Consecutive calls will return the same object.\n  static UnitTest* GetInstance();\n\n  // Runs all tests in this UnitTest object and prints the result.\n  // Returns 0 if successful, or 1 otherwise.\n  //\n  // This method can only be called from the main thread.\n  //\n  // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n  int Run() GTEST_MUST_USE_RESULT_;\n\n  // Returns the working directory when the first TEST() or TEST_F()\n  // was executed.  The UnitTest object owns the string.\n  const char* original_working_dir() const;\n\n  // Returns the TestCase object for the test that's currently running,\n  // or NULL if no test is running.\n  const TestCase* current_test_case() const;\n\n  // Returns the TestInfo object for the test that's currently running,\n  // or NULL if no test is running.\n  const TestInfo* current_test_info() const;\n\n  // Returns the random seed used at the start of the current test run.\n  int random_seed() const;\n\n#if GTEST_HAS_PARAM_TEST\n  // Returns the ParameterizedTestCaseRegistry object used to keep track of\n  // value-parameterized tests and instantiate and register them.\n  //\n  // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n  internal::ParameterizedTestCaseRegistry& parameterized_test_registry();\n#endif  // GTEST_HAS_PARAM_TEST\n\n  // Gets the number of successful test cases.\n  int successful_test_case_count() const;\n\n  // Gets the number of failed test cases.\n  int failed_test_case_count() const;\n\n  // Gets the number of all test cases.\n  int total_test_case_count() const;\n\n  // Gets the number of all test cases that contain at least one test\n  // that should run.\n  int test_case_to_run_count() const;\n\n  // Gets the number of successful tests.\n  int successful_test_count() const;\n\n  // Gets the number of failed tests.\n  int failed_test_count() const;\n\n  // Gets the number of disabled 
tests.\n  int disabled_test_count() const;\n\n  // Gets the number of all tests.\n  int total_test_count() const;\n\n  // Gets the number of tests that should run.\n  int test_to_run_count() const;\n\n  // Gets the elapsed time, in milliseconds.\n  TimeInMillis elapsed_time() const;\n\n  // Returns true iff the unit test passed (i.e. all test cases passed).\n  bool Passed() const;\n\n  // Returns true iff the unit test failed (i.e. some test case failed\n  // or something outside of all tests failed).\n  bool Failed() const;\n\n  // Gets the i-th test case among all the test cases. i can range from 0 to\n  // total_test_case_count() - 1. If i is not in that range, returns NULL.\n  const TestCase* GetTestCase(int i) const;\n\n  // Returns the list of event listeners that can be used to track events\n  // inside Google Test.\n  TestEventListeners& listeners();\n\n private:\n  // Registers and returns a global test environment.  When a test\n  // program is run, all global test environments will be set-up in\n  // the order they were registered.  After all tests in the program\n  // have finished, all global test environments will be torn-down in\n  // the *reverse* order they were registered.\n  //\n  // The UnitTest object takes ownership of the given environment.\n  //\n  // This method can only be called from the main thread.\n  Environment* AddEnvironment(Environment* env);\n\n  // Adds a TestPartResult to the current TestResult object.  All\n  // Google Test assertion macros (e.g. ASSERT_TRUE, EXPECT_EQ, etc)\n  // eventually call this to report their results.  
The user code\n  // should use the assertion macros instead of calling this directly.\n  void AddTestPartResult(TestPartResult::Type result_type,\n                         const char* file_name,\n                         int line_number,\n                         const internal::String& message,\n                         const internal::String& os_stack_trace);\n\n  // Adds a TestProperty to the current TestResult object. If the result already\n  // contains a property with the same key, the value will be updated.\n  void RecordPropertyForCurrentTest(const char* key, const char* value);\n\n  // Gets the i-th test case among all the test cases. i can range from 0 to\n  // total_test_case_count() - 1. If i is not in that range, returns NULL.\n  TestCase* GetMutableTestCase(int i);\n\n  // Accessors for the implementation object.\n  internal::UnitTestImpl* impl() { return impl_; }\n  const internal::UnitTestImpl* impl() const { return impl_; }\n\n  // These classes and functions are friends as they need to access private\n  // members of UnitTest.\n  friend class Test;\n  friend class internal::AssertHelper;\n  friend class internal::ScopedTrace;\n  friend Environment* AddGlobalTestEnvironment(Environment* env);\n  friend internal::UnitTestImpl* internal::GetUnitTestImpl();\n  friend void internal::ReportFailureInUnknownLocation(\n      TestPartResult::Type result_type,\n      const internal::String& message);\n\n  // Creates an empty UnitTest.\n  UnitTest();\n\n  // D'tor\n  virtual ~UnitTest();\n\n  // Pushes a trace defined by SCOPED_TRACE() on to the per-thread\n  // Google Test trace stack.\n  void PushGTestTrace(const internal::TraceInfo& trace);\n\n  // Pops a trace from the per-thread Google Test trace stack.\n  void PopGTestTrace();\n\n  // Protects mutable state in *impl_.  This is mutable as some const\n  // methods need to lock it too.\n  mutable internal::Mutex mutex_;\n\n  // Opaque implementation object.  
This field is never changed once\n  // the object is constructed.  We don't mark it as const here, as\n  // doing so will cause a warning in the constructor of UnitTest.\n  // Mutable state in *impl_ is protected by mutex_.\n  internal::UnitTestImpl* impl_;\n\n  // We disallow copying UnitTest.\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTest);\n};\n\n// A convenient wrapper for adding an environment for the test\n// program.\n//\n// You should call this before RUN_ALL_TESTS() is called, probably in\n// main().  If you use gtest_main, you need to call this before main()\n// starts for it to take effect.  For example, you can define a global\n// variable like this:\n//\n//   testing::Environment* const foo_env =\n//       testing::AddGlobalTestEnvironment(new FooEnvironment);\n//\n// However, we strongly recommend you to write your own main() and\n// call AddGlobalTestEnvironment() there, as relying on initialization\n// of global variables makes the code harder to read and may cause\n// problems when you register multiple environments from different\n// translation units and the environments have dependencies among them\n// (remember that the compiler doesn't guarantee the order in which\n// global variables from different translation units are initialized).\ninline Environment* AddGlobalTestEnvironment(Environment* env) {\n  return UnitTest::GetInstance()->AddEnvironment(env);\n}\n\n// Initializes Google Test.  This must be called before calling\n// RUN_ALL_TESTS().  In particular, it parses a command line for the\n// flags that Google Test recognizes.  Whenever a Google Test flag is\n// seen, it is removed from argv, and *argc is decremented.\n//\n// No value is returned.  
Instead, the Google Test flag variables are\n// updated.\n//\n// Calling the function for the second time has no user-visible effect.\nGTEST_API_ void InitGoogleTest(int* argc, char** argv);\n\n// This overloaded version can be used in Windows programs compiled in\n// UNICODE mode.\nGTEST_API_ void InitGoogleTest(int* argc, wchar_t** argv);\n\nnamespace internal {\n\n// Formats a comparison assertion (e.g. ASSERT_EQ, EXPECT_LT, etc.)\n// operand to be used in a failure message.  The type (but not value)\n// of the other operand may affect the format.  This allows us to\n// print a char* as a raw pointer when it is compared against another\n// char*, and print it as a C string when it is compared against an\n// std::string object, for example.\n//\n// The default implementation ignores the type of the other operand.\n// Some specialized versions are used to handle formatting wide or\n// narrow C strings.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\ntemplate <typename T1, typename T2>\nString FormatForComparisonFailureMessage(const T1& value,\n                                         const T2& /* other_operand */) {\n  // C++Builder compiles this incorrectly if the namespace isn't explicitly\n  // given.\n  return ::testing::PrintToString(value);\n}\n\n// The helper function for {ASSERT|EXPECT}_EQ.\ntemplate <typename T1, typename T2>\nAssertionResult CmpHelperEQ(const char* expected_expression,\n                            const char* actual_expression,\n                            const T1& expected,\n                            const T2& actual) {\n#ifdef _MSC_VER\n# pragma warning(push)          // Saves the current warning state.\n# pragma warning(disable:4389)  // Temporarily disables warning on\n                               // signed/unsigned mismatch.\n#endif\n\n  if (expected == actual) {\n    return AssertionSuccess();\n  }\n\n#ifdef _MSC_VER\n# pragma warning(pop)          // Restores the warning state.\n#endif\n\n  return 
EqFailure(expected_expression,\n                   actual_expression,\n                   FormatForComparisonFailureMessage(expected, actual),\n                   FormatForComparisonFailureMessage(actual, expected),\n                   false);\n}\n\n// With this overloaded version, we allow anonymous enums to be used\n// in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous enums\n// can be implicitly cast to BiggestInt.\nGTEST_API_ AssertionResult CmpHelperEQ(const char* expected_expression,\n                                       const char* actual_expression,\n                                       BiggestInt expected,\n                                       BiggestInt actual);\n\n// The helper class for {ASSERT|EXPECT}_EQ.  The template argument\n// lhs_is_null_literal is true iff the first argument to ASSERT_EQ()\n// is a null pointer literal.  The following default implementation is\n// for lhs_is_null_literal being false.\ntemplate <bool lhs_is_null_literal>\nclass EqHelper {\n public:\n  // This templatized version is for the general case.\n  template <typename T1, typename T2>\n  static AssertionResult Compare(const char* expected_expression,\n                                 const char* actual_expression,\n                                 const T1& expected,\n                                 const T2& actual) {\n    return CmpHelperEQ(expected_expression, actual_expression, expected,\n                       actual);\n  }\n\n  // With this overloaded version, we allow anonymous enums to be used\n  // in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous\n  // enums can be implicitly cast to BiggestInt.\n  //\n  // Even though its body looks the same as the above version, we\n  // cannot merge the two, as it will make anonymous enums unhappy.\n  static AssertionResult Compare(const char* expected_expression,\n                                 const char* actual_expression,\n                                 BiggestInt expected,\n                 
                BiggestInt actual) {\n    return CmpHelperEQ(expected_expression, actual_expression, expected,\n                       actual);\n  }\n};\n\n// This specialization is used when the first argument to ASSERT_EQ()\n// is a null pointer literal, like NULL, false, or 0.\ntemplate <>\nclass EqHelper<true> {\n public:\n  // We define two overloaded versions of Compare().  The first\n  // version will be picked when the second argument to ASSERT_EQ() is\n  // NOT a pointer, e.g. ASSERT_EQ(0, AnIntFunction()) or\n  // EXPECT_EQ(false, a_bool).\n  template <typename T1, typename T2>\n  static AssertionResult Compare(\n      const char* expected_expression,\n      const char* actual_expression,\n      const T1& expected,\n      const T2& actual,\n      // The following line prevents this overload from being considered if T2\n      // is not a pointer type.  We need this because ASSERT_EQ(NULL, my_ptr)\n      // expands to Compare(\"\", \"\", NULL, my_ptr), which requires a conversion\n      // to match the Secret* in the other overload, which would otherwise make\n      // this template match better.\n      typename EnableIf<!is_pointer<T2>::value>::type* = 0) {\n    return CmpHelperEQ(expected_expression, actual_expression, expected,\n                       actual);\n  }\n\n  // This version will be picked when the second argument to ASSERT_EQ() is a\n  // pointer, e.g. ASSERT_EQ(NULL, a_pointer).\n  template <typename T>\n  static AssertionResult Compare(\n      const char* expected_expression,\n      const char* actual_expression,\n      // We used to have a second template parameter instead of Secret*.  
That\n      // template parameter would deduce to 'long', making this a better match\n      // than the first overload even without the first overload's EnableIf.\n      // Unfortunately, gcc with -Wconversion-null warns when \"passing NULL to\n      // non-pointer argument\" (even a deduced integral argument), so the old\n      // implementation caused warnings in user code.\n      Secret* /* expected (NULL) */,\n      T* actual) {\n    // We already know that 'expected' is a null pointer.\n    return CmpHelperEQ(expected_expression, actual_expression,\n                       static_cast<T*>(NULL), actual);\n  }\n};\n\n// A macro for implementing the helper functions needed to implement\n// ASSERT_?? and EXPECT_??.  It is here just to avoid copy-and-paste\n// of similar code.\n//\n// For each templatized helper function, we also define an overloaded\n// version for BiggestInt in order to reduce code bloat and allow\n// anonymous enums to be used with {ASSERT|EXPECT}_?? when compiled\n// with gcc 4.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n#define GTEST_IMPL_CMP_HELPER_(op_name, op)\\\ntemplate <typename T1, typename T2>\\\nAssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \\\n                                   const T1& val1, const T2& val2) {\\\n  if (val1 op val2) {\\\n    return AssertionSuccess();\\\n  } else {\\\n    return AssertionFailure() \\\n        << \"Expected: (\" << expr1 << \") \" #op \" (\" << expr2\\\n        << \"), actual: \" << FormatForComparisonFailureMessage(val1, val2)\\\n        << \" vs \" << FormatForComparisonFailureMessage(val2, val1);\\\n  }\\\n}\\\nGTEST_API_ AssertionResult CmpHelper##op_name(\\\n    const char* expr1, const char* expr2, BiggestInt val1, BiggestInt val2)\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\n\n// Implements the helper function for {ASSERT|EXPECT}_NE\nGTEST_IMPL_CMP_HELPER_(NE, !=);\n// Implements the helper function for 
{ASSERT|EXPECT}_LE\nGTEST_IMPL_CMP_HELPER_(LE, <=);\n// Implements the helper function for {ASSERT|EXPECT}_LT\nGTEST_IMPL_CMP_HELPER_(LT, < );\n// Implements the helper function for {ASSERT|EXPECT}_GE\nGTEST_IMPL_CMP_HELPER_(GE, >=);\n// Implements the helper function for {ASSERT|EXPECT}_GT\nGTEST_IMPL_CMP_HELPER_(GT, > );\n\n#undef GTEST_IMPL_CMP_HELPER_\n\n// The helper function for {ASSERT|EXPECT}_STREQ.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression,\n                                          const char* actual_expression,\n                                          const char* expected,\n                                          const char* actual);\n\n// The helper function for {ASSERT|EXPECT}_STRCASEEQ.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTRCASEEQ(const char* expected_expression,\n                                              const char* actual_expression,\n                                              const char* expected,\n                                              const char* actual);\n\n// The helper function for {ASSERT|EXPECT}_STRNE.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression,\n                                          const char* s2_expression,\n                                          const char* s1,\n                                          const char* s2);\n\n// The helper function for {ASSERT|EXPECT}_STRCASENE.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTRCASENE(const char* s1_expression,\n                                              const char* s2_expression,\n                                              const char* s1,\n                                              const char* s2);\n\n\n// Helper function for *_STREQ on wide 
strings.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression,\n                                          const char* actual_expression,\n                                          const wchar_t* expected,\n                                          const wchar_t* actual);\n\n// Helper function for *_STRNE on wide strings.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression,\n                                          const char* s2_expression,\n                                          const wchar_t* s1,\n                                          const wchar_t* s2);\n\n}  // namespace internal\n\n// IsSubstring() and IsNotSubstring() are intended to be used as the\n// first argument to {EXPECT,ASSERT}_PRED_FORMAT2(), not by\n// themselves.  They check whether needle is a substring of haystack\n// (NULL is considered a substring of itself only), and return an\n// appropriate error message when they fail.\n//\n// The {needle,haystack}_expr arguments are the stringified\n// expressions that generated the two real arguments.\nGTEST_API_ AssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const char* needle, const char* haystack);\nGTEST_API_ AssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const wchar_t* needle, const wchar_t* haystack);\nGTEST_API_ AssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const char* needle, const char* haystack);\nGTEST_API_ AssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const wchar_t* needle, const wchar_t* haystack);\nGTEST_API_ AssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::string& needle, const ::std::string& haystack);\nGTEST_API_ AssertionResult 
IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::string& needle, const ::std::string& haystack);\n\n#if GTEST_HAS_STD_WSTRING\nGTEST_API_ AssertionResult IsSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::wstring& needle, const ::std::wstring& haystack);\nGTEST_API_ AssertionResult IsNotSubstring(\n    const char* needle_expr, const char* haystack_expr,\n    const ::std::wstring& needle, const ::std::wstring& haystack);\n#endif  // GTEST_HAS_STD_WSTRING\n\nnamespace internal {\n\n// Helper template function for comparing floating-points.\n//\n// Template parameter:\n//\n//   RawType: the raw floating-point type (either float or double)\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\ntemplate <typename RawType>\nAssertionResult CmpHelperFloatingPointEQ(const char* expected_expression,\n                                         const char* actual_expression,\n                                         RawType expected,\n                                         RawType actual) {\n  const FloatingPoint<RawType> lhs(expected), rhs(actual);\n\n  if (lhs.AlmostEquals(rhs)) {\n    return AssertionSuccess();\n  }\n\n  ::std::stringstream expected_ss;\n  expected_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)\n              << expected;\n\n  ::std::stringstream actual_ss;\n  actual_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)\n            << actual;\n\n  return EqFailure(expected_expression,\n                   actual_expression,\n                   StringStreamToString(&expected_ss),\n                   StringStreamToString(&actual_ss),\n                   false);\n}\n\n// Helper function for implementing ASSERT_NEAR.\n//\n// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.\nGTEST_API_ AssertionResult DoubleNearPredFormat(const char* expr1,\n                                                const char* expr2,\n                         
                       const char* abs_error_expr,\n                                                double val1,\n                                                double val2,\n                                                double abs_error);\n\n// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.\n// A class that enables one to stream messages to assertion macros\nclass GTEST_API_ AssertHelper {\n public:\n  // Constructor.\n  AssertHelper(TestPartResult::Type type,\n               const char* file,\n               int line,\n               const char* message);\n  ~AssertHelper();\n\n  // Message assignment is a semantic trick to enable assertion\n  // streaming; see the GTEST_MESSAGE_ macro below.\n  void operator=(const Message& message) const;\n\n private:\n  // We put our data in a struct so that the size of the AssertHelper class can\n  // be as small as possible.  This is important because gcc is incapable of\n  // re-using stack space even for temporary variables, so every EXPECT_EQ\n  // reserves stack space for another AssertHelper.\n  struct AssertHelperData {\n    AssertHelperData(TestPartResult::Type t,\n                     const char* srcfile,\n                     int line_num,\n                     const char* msg)\n        : type(t), file(srcfile), line(line_num), message(msg) { }\n\n    TestPartResult::Type const type;\n    const char*        const file;\n    int                const line;\n    String             const message;\n\n   private:\n    GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelperData);\n  };\n\n  AssertHelperData* const data_;\n\n  GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelper);\n};\n\n}  // namespace internal\n\n#if GTEST_HAS_PARAM_TEST\n// The pure interface class that all value-parameterized tests inherit from.\n// A value-parameterized class must inherit from both ::testing::Test and\n// ::testing::WithParamInterface. 
In most cases that just means inheriting
// from ::testing::TestWithParam, but more complicated test hierarchies
// may need to inherit from Test and WithParamInterface at different levels.
//
// This interface has support for accessing the test parameter value via
// the GetParam() method.
//
// Use it with one of the parameter generator defining functions, like Range(),
// Values(), ValuesIn(), Bool(), and Combine().
//
// class FooTest : public ::testing::TestWithParam<int> {
//  protected:
//   FooTest() {
//     // Can use GetParam() here.
//   }
//   virtual ~FooTest() {
//     // Can use GetParam() here.
//   }
//   virtual void SetUp() {
//     // Can use GetParam() here.
//   }
//   virtual void TearDown() {
//     // Can use GetParam() here.
//   }
// };
// TEST_P(FooTest, DoesBar) {
//   // Can use GetParam() method here.
//   Foo foo;
//   ASSERT_TRUE(foo.DoesBar(GetParam()));
// }
// INSTANTIATE_TEST_CASE_P(OneToTenRange, FooTest, ::testing::Range(1, 10));

template <typename T>
class WithParamInterface {
 public:
  typedef T ParamType;
  virtual ~WithParamInterface() {}

  // The current parameter value. Is also available in the test fixture's
  // constructor. This member function is non-static, even though it only
  // references static data, to reduce the opportunity for incorrect uses
  // like writing 'WithParamInterface<bool>::GetParam()' for a test that
  // uses a fixture whose parameter type is int.
  const ParamType& GetParam() const { return *parameter_; }

 private:
  // Sets parameter value. 
The caller is responsible for making sure the value\n  // remains alive and unchanged throughout the current test.\n  static void SetParam(const ParamType* parameter) {\n    parameter_ = parameter;\n  }\n\n  // Static value used for accessing parameter during a test lifetime.\n  static const ParamType* parameter_;\n\n  // TestClass must be a subclass of WithParamInterface<T> and Test.\n  template <class TestClass> friend class internal::ParameterizedTestFactory;\n};\n\ntemplate <typename T>\nconst T* WithParamInterface<T>::parameter_ = NULL;\n\n// Most value-parameterized classes can ignore the existence of\n// WithParamInterface, and can just inherit from ::testing::TestWithParam.\n\ntemplate <typename T>\nclass TestWithParam : public Test, public WithParamInterface<T> {\n};\n\n#endif  // GTEST_HAS_PARAM_TEST\n\n// Macros for indicating success/failure in test code.\n\n// ADD_FAILURE unconditionally adds a failure to the current test.\n// SUCCEED generates a success - it doesn't automatically make the\n// current test successful, as a test is only successful when it has\n// no failure.\n//\n// EXPECT_* verifies that a certain condition is satisfied.  If not,\n// it behaves like ADD_FAILURE.  In particular:\n//\n//   EXPECT_TRUE  verifies that a Boolean condition is true.\n//   EXPECT_FALSE verifies that a Boolean condition is false.\n//\n// FAIL and ASSERT_* are similar to ADD_FAILURE and EXPECT_*, except\n// that they will also abort the current function on failure.  
People\n// usually want the fail-fast behavior of FAIL and ASSERT_*, but those\n// writing data-driven tests often find themselves using ADD_FAILURE\n// and EXPECT_* more.\n//\n// Examples:\n//\n//   EXPECT_TRUE(server.StatusIsOK());\n//   ASSERT_FALSE(server.HasPendingRequest(port))\n//       << \"There are still pending requests \" << \"on port \" << port;\n\n// Generates a nonfatal failure with a generic message.\n#define ADD_FAILURE() GTEST_NONFATAL_FAILURE_(\"Failed\")\n\n// Generates a nonfatal failure at the given source file location with\n// a generic message.\n#define ADD_FAILURE_AT(file, line) \\\n  GTEST_MESSAGE_AT_(file, line, \"Failed\", \\\n                    ::testing::TestPartResult::kNonFatalFailure)\n\n// Generates a fatal failure with a generic message.\n#define GTEST_FAIL() GTEST_FATAL_FAILURE_(\"Failed\")\n\n// Define this macro to 1 to omit the definition of FAIL(), which is a\n// generic name and clashes with some other libraries.\n#if !GTEST_DONT_DEFINE_FAIL\n# define FAIL() GTEST_FAIL()\n#endif\n\n// Generates a success with a generic message.\n#define GTEST_SUCCEED() GTEST_SUCCESS_(\"Succeeded\")\n\n// Define this macro to 1 to omit the definition of SUCCEED(), which\n// is a generic name and clashes with some other libraries.\n#if !GTEST_DONT_DEFINE_SUCCEED\n# define SUCCEED() GTEST_SUCCEED()\n#endif\n\n// Macros for testing exceptions.\n//\n//    * {ASSERT|EXPECT}_THROW(statement, expected_exception):\n//         Tests that the statement throws the expected exception.\n//    * {ASSERT|EXPECT}_NO_THROW(statement):\n//         Tests that the statement doesn't throw any exception.\n//    * {ASSERT|EXPECT}_ANY_THROW(statement):\n//         Tests that the statement throws an exception.\n\n#define EXPECT_THROW(statement, expected_exception) \\\n  GTEST_TEST_THROW_(statement, expected_exception, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_NO_THROW(statement) \\\n  GTEST_TEST_NO_THROW_(statement, GTEST_NONFATAL_FAILURE_)\n#define 
EXPECT_ANY_THROW(statement) \\\n  GTEST_TEST_ANY_THROW_(statement, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_THROW(statement, expected_exception) \\\n  GTEST_TEST_THROW_(statement, expected_exception, GTEST_FATAL_FAILURE_)\n#define ASSERT_NO_THROW(statement) \\\n  GTEST_TEST_NO_THROW_(statement, GTEST_FATAL_FAILURE_)\n#define ASSERT_ANY_THROW(statement) \\\n  GTEST_TEST_ANY_THROW_(statement, GTEST_FATAL_FAILURE_)\n\n// Boolean assertions. Condition can be either a Boolean expression or an\n// AssertionResult. For more information on how to use AssertionResult with\n// these macros see comments on that class.\n#define EXPECT_TRUE(condition) \\\n  GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \\\n                      GTEST_NONFATAL_FAILURE_)\n#define EXPECT_FALSE(condition) \\\n  GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \\\n                      GTEST_NONFATAL_FAILURE_)\n#define ASSERT_TRUE(condition) \\\n  GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \\\n                      GTEST_FATAL_FAILURE_)\n#define ASSERT_FALSE(condition) \\\n  GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \\\n                      GTEST_FATAL_FAILURE_)\n\n// Includes the auto-generated header that implements a family of\n// generic predicate assertion macros.\n// Copyright 2006, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. 
nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n// This file is AUTOMATICALLY GENERATED on 09/24/2010 by command\n// 'gen_gtest_pred_impl.py 5'.  DO NOT EDIT BY HAND!\n//\n// Implements a family of generic predicate assertion macros.\n\n#ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_\n#define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_\n\n// Makes sure this header is not included before gtest.h.\n#ifndef GTEST_INCLUDE_GTEST_GTEST_H_\n# error Do not include gtest_pred_impl.h directly.  Include gtest.h instead.\n#endif  // GTEST_INCLUDE_GTEST_GTEST_H_\n\n// This header implements a family of generic predicate assertion\n// macros:\n//\n//   ASSERT_PRED_FORMAT1(pred_format, v1)\n//   ASSERT_PRED_FORMAT2(pred_format, v1, v2)\n//   ...\n//\n// where pred_format is a function or functor that takes n (in the\n// case of ASSERT_PRED_FORMATn) values and their source expression\n// text, and returns a testing::AssertionResult.  
See the definition\n// of ASSERT_EQ in gtest.h for an example.\n//\n// If you don't care about formatting, you can use the more\n// restrictive version:\n//\n//   ASSERT_PRED1(pred, v1)\n//   ASSERT_PRED2(pred, v1, v2)\n//   ...\n//\n// where pred is an n-ary function or functor that returns bool,\n// and the values v1, v2, ..., must support the << operator for\n// streaming to std::ostream.\n//\n// We also define the EXPECT_* variations.\n//\n// For now we only support predicates whose arity is at most 5.\n// Please email googletestframework@googlegroups.com if you need\n// support for higher arities.\n\n// GTEST_ASSERT_ is the basic statement to which all of the assertions\n// in this file reduce.  Don't use this in your code.\n\n#define GTEST_ASSERT_(expression, on_failure) \\\n  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\\n  if (const ::testing::AssertionResult gtest_ar = (expression)) \\\n    ; \\\n  else \\\n    on_failure(gtest_ar.failure_message())\n\n\n// Helper function for implementing {EXPECT|ASSERT}_PRED1.  Don't use\n// this in your code.\ntemplate <typename Pred,\n          typename T1>\nAssertionResult AssertPred1Helper(const char* pred_text,\n                                  const char* e1,\n                                  Pred pred,\n                                  const T1& v1) {\n  if (pred(v1)) return AssertionSuccess();\n\n  return AssertionFailure() << pred_text << \"(\"\n                            << e1 << \") evaluates to false, where\"\n                            << \"\\n\" << e1 << \" evaluates to \" << v1;\n}\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT1.\n// Don't use this in your code.\n#define GTEST_PRED_FORMAT1_(pred_format, v1, on_failure)\\\n  GTEST_ASSERT_(pred_format(#v1, v1),\\\n                on_failure)\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED1.  
Don't use\n// this in your code.\n#define GTEST_PRED1_(pred, v1, on_failure)\\\n  GTEST_ASSERT_(::testing::AssertPred1Helper(#pred, \\\n                                             #v1, \\\n                                             pred, \\\n                                             v1), on_failure)\n\n// Unary predicate assertion macros.\n#define EXPECT_PRED_FORMAT1(pred_format, v1) \\\n  GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_PRED1(pred, v1) \\\n  GTEST_PRED1_(pred, v1, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_PRED_FORMAT1(pred_format, v1) \\\n  GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_FATAL_FAILURE_)\n#define ASSERT_PRED1(pred, v1) \\\n  GTEST_PRED1_(pred, v1, GTEST_FATAL_FAILURE_)\n\n\n\n// Helper function for implementing {EXPECT|ASSERT}_PRED2.  Don't use\n// this in your code.\ntemplate <typename Pred,\n          typename T1,\n          typename T2>\nAssertionResult AssertPred2Helper(const char* pred_text,\n                                  const char* e1,\n                                  const char* e2,\n                                  Pred pred,\n                                  const T1& v1,\n                                  const T2& v2) {\n  if (pred(v1, v2)) return AssertionSuccess();\n\n  return AssertionFailure() << pred_text << \"(\"\n                            << e1 << \", \"\n                            << e2 << \") evaluates to false, where\"\n                            << \"\\n\" << e1 << \" evaluates to \" << v1\n                            << \"\\n\" << e2 << \" evaluates to \" << v2;\n}\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT2.\n// Don't use this in your code.\n#define GTEST_PRED_FORMAT2_(pred_format, v1, v2, on_failure)\\\n  GTEST_ASSERT_(pred_format(#v1, #v2, v1, v2),\\\n                on_failure)\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED2.  
Don't use\n// this in your code.\n#define GTEST_PRED2_(pred, v1, v2, on_failure)\\\n  GTEST_ASSERT_(::testing::AssertPred2Helper(#pred, \\\n                                             #v1, \\\n                                             #v2, \\\n                                             pred, \\\n                                             v1, \\\n                                             v2), on_failure)\n\n// Binary predicate assertion macros.\n#define EXPECT_PRED_FORMAT2(pred_format, v1, v2) \\\n  GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_PRED2(pred, v1, v2) \\\n  GTEST_PRED2_(pred, v1, v2, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_PRED_FORMAT2(pred_format, v1, v2) \\\n  GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_FATAL_FAILURE_)\n#define ASSERT_PRED2(pred, v1, v2) \\\n  GTEST_PRED2_(pred, v1, v2, GTEST_FATAL_FAILURE_)\n\n\n\n// Helper function for implementing {EXPECT|ASSERT}_PRED3.  Don't use\n// this in your code.\ntemplate <typename Pred,\n          typename T1,\n          typename T2,\n          typename T3>\nAssertionResult AssertPred3Helper(const char* pred_text,\n                                  const char* e1,\n                                  const char* e2,\n                                  const char* e3,\n                                  Pred pred,\n                                  const T1& v1,\n                                  const T2& v2,\n                                  const T3& v3) {\n  if (pred(v1, v2, v3)) return AssertionSuccess();\n\n  return AssertionFailure() << pred_text << \"(\"\n                            << e1 << \", \"\n                            << e2 << \", \"\n                            << e3 << \") evaluates to false, where\"\n                            << \"\\n\" << e1 << \" evaluates to \" << v1\n                            << \"\\n\" << e2 << \" evaluates to \" << v2\n                            << \"\\n\" << e3 << \" evaluates to \" << v3;\n}\n\n// Internal 
macro for implementing {EXPECT|ASSERT}_PRED_FORMAT3.\n// Don't use this in your code.\n#define GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, on_failure)\\\n  GTEST_ASSERT_(pred_format(#v1, #v2, #v3, v1, v2, v3),\\\n                on_failure)\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED3.  Don't use\n// this in your code.\n#define GTEST_PRED3_(pred, v1, v2, v3, on_failure)\\\n  GTEST_ASSERT_(::testing::AssertPred3Helper(#pred, \\\n                                             #v1, \\\n                                             #v2, \\\n                                             #v3, \\\n                                             pred, \\\n                                             v1, \\\n                                             v2, \\\n                                             v3), on_failure)\n\n// Ternary predicate assertion macros.\n#define EXPECT_PRED_FORMAT3(pred_format, v1, v2, v3) \\\n  GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_PRED3(pred, v1, v2, v3) \\\n  GTEST_PRED3_(pred, v1, v2, v3, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_PRED_FORMAT3(pred_format, v1, v2, v3) \\\n  GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_FATAL_FAILURE_)\n#define ASSERT_PRED3(pred, v1, v2, v3) \\\n  GTEST_PRED3_(pred, v1, v2, v3, GTEST_FATAL_FAILURE_)\n\n\n\n// Helper function for implementing {EXPECT|ASSERT}_PRED4.  
Don't use\n// this in your code.\ntemplate <typename Pred,\n          typename T1,\n          typename T2,\n          typename T3,\n          typename T4>\nAssertionResult AssertPred4Helper(const char* pred_text,\n                                  const char* e1,\n                                  const char* e2,\n                                  const char* e3,\n                                  const char* e4,\n                                  Pred pred,\n                                  const T1& v1,\n                                  const T2& v2,\n                                  const T3& v3,\n                                  const T4& v4) {\n  if (pred(v1, v2, v3, v4)) return AssertionSuccess();\n\n  return AssertionFailure() << pred_text << \"(\"\n                            << e1 << \", \"\n                            << e2 << \", \"\n                            << e3 << \", \"\n                            << e4 << \") evaluates to false, where\"\n                            << \"\\n\" << e1 << \" evaluates to \" << v1\n                            << \"\\n\" << e2 << \" evaluates to \" << v2\n                            << \"\\n\" << e3 << \" evaluates to \" << v3\n                            << \"\\n\" << e4 << \" evaluates to \" << v4;\n}\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT4.\n// Don't use this in your code.\n#define GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, on_failure)\\\n  GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, v1, v2, v3, v4),\\\n                on_failure)\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED4.  
Don't use\n// this in your code.\n#define GTEST_PRED4_(pred, v1, v2, v3, v4, on_failure)\\\n  GTEST_ASSERT_(::testing::AssertPred4Helper(#pred, \\\n                                             #v1, \\\n                                             #v2, \\\n                                             #v3, \\\n                                             #v4, \\\n                                             pred, \\\n                                             v1, \\\n                                             v2, \\\n                                             v3, \\\n                                             v4), on_failure)\n\n// 4-ary predicate assertion macros.\n#define EXPECT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \\\n  GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_PRED4(pred, v1, v2, v3, v4) \\\n  GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \\\n  GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_FATAL_FAILURE_)\n#define ASSERT_PRED4(pred, v1, v2, v3, v4) \\\n  GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_FATAL_FAILURE_)\n\n\n\n// Helper function for implementing {EXPECT|ASSERT}_PRED5.  
Don't use\n// this in your code.\ntemplate <typename Pred,\n          typename T1,\n          typename T2,\n          typename T3,\n          typename T4,\n          typename T5>\nAssertionResult AssertPred5Helper(const char* pred_text,\n                                  const char* e1,\n                                  const char* e2,\n                                  const char* e3,\n                                  const char* e4,\n                                  const char* e5,\n                                  Pred pred,\n                                  const T1& v1,\n                                  const T2& v2,\n                                  const T3& v3,\n                                  const T4& v4,\n                                  const T5& v5) {\n  if (pred(v1, v2, v3, v4, v5)) return AssertionSuccess();\n\n  return AssertionFailure() << pred_text << \"(\"\n                            << e1 << \", \"\n                            << e2 << \", \"\n                            << e3 << \", \"\n                            << e4 << \", \"\n                            << e5 << \") evaluates to false, where\"\n                            << \"\\n\" << e1 << \" evaluates to \" << v1\n                            << \"\\n\" << e2 << \" evaluates to \" << v2\n                            << \"\\n\" << e3 << \" evaluates to \" << v3\n                            << \"\\n\" << e4 << \" evaluates to \" << v4\n                            << \"\\n\" << e5 << \" evaluates to \" << v5;\n}\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT5.\n// Don't use this in your code.\n#define GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, on_failure)\\\n  GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, #v5, v1, v2, v3, v4, v5),\\\n                on_failure)\n\n// Internal macro for implementing {EXPECT|ASSERT}_PRED5.  
Don't use\n// this in your code.\n#define GTEST_PRED5_(pred, v1, v2, v3, v4, v5, on_failure)\\\n  GTEST_ASSERT_(::testing::AssertPred5Helper(#pred, \\\n                                             #v1, \\\n                                             #v2, \\\n                                             #v3, \\\n                                             #v4, \\\n                                             #v5, \\\n                                             pred, \\\n                                             v1, \\\n                                             v2, \\\n                                             v3, \\\n                                             v4, \\\n                                             v5), on_failure)\n\n// 5-ary predicate assertion macros.\n#define EXPECT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \\\n  GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_)\n#define EXPECT_PRED5(pred, v1, v2, v3, v4, v5) \\\n  GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_)\n#define ASSERT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \\\n  GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_)\n#define ASSERT_PRED5(pred, v1, v2, v3, v4, v5) \\\n  GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_)\n\n\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_\n\n// Macros for testing equalities and inequalities.\n//\n//    * {ASSERT|EXPECT}_EQ(expected, actual): Tests that expected == actual\n//    * {ASSERT|EXPECT}_NE(v1, v2):           Tests that v1 != v2\n//    * {ASSERT|EXPECT}_LT(v1, v2):           Tests that v1 < v2\n//    * {ASSERT|EXPECT}_LE(v1, v2):           Tests that v1 <= v2\n//    * {ASSERT|EXPECT}_GT(v1, v2):           Tests that v1 > v2\n//    * {ASSERT|EXPECT}_GE(v1, v2):           Tests that v1 >= v2\n//\n// When they are not, Google Test prints both the tested expressions and\n// their actual values.  
The values must be compatible built-in types,\n// or you will get a compiler error.  By \"compatible\" we mean that the\n// values can be compared by the respective operator.\n//\n// Note:\n//\n//   1. It is possible to make a user-defined type work with\n//   {ASSERT|EXPECT}_??(), but that requires overloading the\n//   comparison operators and is thus discouraged by the Google C++\n//   Usage Guide.  Therefore, you are advised to use the\n//   {ASSERT|EXPECT}_TRUE() macro to assert that two objects are\n//   equal.\n//\n//   2. The {ASSERT|EXPECT}_??() macros do pointer comparisons on\n//   pointers (in particular, C strings).  Therefore, if you use it\n//   with two C strings, you are testing how their locations in memory\n//   are related, not how their content is related.  To compare two C\n//   strings by content, use {ASSERT|EXPECT}_STR*().\n//\n//   3. {ASSERT|EXPECT}_EQ(expected, actual) is preferred to\n//   {ASSERT|EXPECT}_TRUE(expected == actual), as the former tells you\n//   what the actual value is when it fails, and similarly for the\n//   other comparisons.\n//\n//   4. Do not depend on the order in which {ASSERT|EXPECT}_??()\n//   evaluate their arguments, which is undefined.\n//\n//   5. 
These macros evaluate their arguments exactly once.\n//\n// Examples:\n//\n//   EXPECT_NE(5, Foo());\n//   EXPECT_EQ(NULL, a_pointer);\n//   ASSERT_LT(i, array_size);\n//   ASSERT_GT(records.size(), 0) << \"There is no record left.\";\n\n#define EXPECT_EQ(expected, actual) \\\n  EXPECT_PRED_FORMAT2(::testing::internal:: \\\n                      EqHelper<GTEST_IS_NULL_LITERAL_(expected)>::Compare, \\\n                      expected, actual)\n#define EXPECT_NE(expected, actual) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNE, expected, actual)\n#define EXPECT_LE(val1, val2) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2)\n#define EXPECT_LT(val1, val2) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2)\n#define EXPECT_GE(val1, val2) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2)\n#define EXPECT_GT(val1, val2) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2)\n\n#define GTEST_ASSERT_EQ(expected, actual) \\\n  ASSERT_PRED_FORMAT2(::testing::internal:: \\\n                      EqHelper<GTEST_IS_NULL_LITERAL_(expected)>::Compare, \\\n                      expected, actual)\n#define GTEST_ASSERT_NE(val1, val2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNE, val1, val2)\n#define GTEST_ASSERT_LE(val1, val2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2)\n#define GTEST_ASSERT_LT(val1, val2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2)\n#define GTEST_ASSERT_GE(val1, val2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2)\n#define GTEST_ASSERT_GT(val1, val2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2)\n\n// Define macro GTEST_DONT_DEFINE_ASSERT_XY to 1 to omit the definition of\n// ASSERT_XY(), which clashes with some users' own code.\n\n#if !GTEST_DONT_DEFINE_ASSERT_EQ\n# define ASSERT_EQ(val1, val2) GTEST_ASSERT_EQ(val1, val2)\n#endif\n\n#if 
!GTEST_DONT_DEFINE_ASSERT_NE\n# define ASSERT_NE(val1, val2) GTEST_ASSERT_NE(val1, val2)\n#endif\n\n#if !GTEST_DONT_DEFINE_ASSERT_LE\n# define ASSERT_LE(val1, val2) GTEST_ASSERT_LE(val1, val2)\n#endif\n\n#if !GTEST_DONT_DEFINE_ASSERT_LT\n# define ASSERT_LT(val1, val2) GTEST_ASSERT_LT(val1, val2)\n#endif\n\n#if !GTEST_DONT_DEFINE_ASSERT_GE\n# define ASSERT_GE(val1, val2) GTEST_ASSERT_GE(val1, val2)\n#endif\n\n#if !GTEST_DONT_DEFINE_ASSERT_GT\n# define ASSERT_GT(val1, val2) GTEST_ASSERT_GT(val1, val2)\n#endif\n\n// C String Comparisons.  All tests treat NULL and any non-NULL string\n// as different.  Two NULLs are equal.\n//\n//    * {ASSERT|EXPECT}_STREQ(s1, s2):     Tests that s1 == s2\n//    * {ASSERT|EXPECT}_STRNE(s1, s2):     Tests that s1 != s2\n//    * {ASSERT|EXPECT}_STRCASEEQ(s1, s2): Tests that s1 == s2, ignoring case\n//    * {ASSERT|EXPECT}_STRCASENE(s1, s2): Tests that s1 != s2, ignoring case\n//\n// For wide or narrow string objects, you can use the\n// {ASSERT|EXPECT}_??() macros.\n//\n// Don't depend on the order in which the arguments are evaluated,\n// which is undefined.\n//\n// These macros evaluate their arguments exactly once.\n\n#define EXPECT_STREQ(expected, actual) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual)\n#define EXPECT_STRNE(s1, s2) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2)\n#define EXPECT_STRCASEEQ(expected, actual) \\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual)\n#define EXPECT_STRCASENE(s1, s2)\\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2)\n\n#define ASSERT_STREQ(expected, actual) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual)\n#define ASSERT_STRNE(s1, s2) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2)\n#define ASSERT_STRCASEEQ(expected, actual) \\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual)\n#define 
ASSERT_STRCASENE(s1, s2)\\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2)\n\n// Macros for comparing floating-point numbers.\n//\n//    * {ASSERT|EXPECT}_FLOAT_EQ(expected, actual):\n//         Tests that two float values are almost equal.\n//    * {ASSERT|EXPECT}_DOUBLE_EQ(expected, actual):\n//         Tests that two double values are almost equal.\n//    * {ASSERT|EXPECT}_NEAR(v1, v2, abs_error):\n//         Tests that v1 and v2 are within the given distance to each other.\n//\n// Google Test uses ULP-based comparison to automatically pick a default\n// error bound that is appropriate for the operands.  See the\n// FloatingPoint template class in gtest-internal.h if you are\n// interested in the implementation details.\n\n#define EXPECT_FLOAT_EQ(expected, actual)\\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ<float>, \\\n                      expected, actual)\n\n#define EXPECT_DOUBLE_EQ(expected, actual)\\\n  EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ<double>, \\\n                      expected, actual)\n\n#define ASSERT_FLOAT_EQ(expected, actual)\\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ<float>, \\\n                      expected, actual)\n\n#define ASSERT_DOUBLE_EQ(expected, actual)\\\n  ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ<double>, \\\n                      expected, actual)\n\n#define EXPECT_NEAR(val1, val2, abs_error)\\\n  EXPECT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \\\n                      val1, val2, abs_error)\n\n#define ASSERT_NEAR(val1, val2, abs_error)\\\n  ASSERT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \\\n                      val1, val2, abs_error)\n\n// These predicate format functions work on floating-point values, and\n// can be used in {ASSERT|EXPECT}_PRED_FORMAT2*(), e.g.\n//\n//   EXPECT_PRED_FORMAT2(testing::DoubleLE, Foo(), 5.0);\n\n// Asserts that val1 is less than, or almost 
equal to, val2.  Fails\n// otherwise.  In particular, it fails if either val1 or val2 is NaN.\nGTEST_API_ AssertionResult FloatLE(const char* expr1, const char* expr2,\n                                   float val1, float val2);\nGTEST_API_ AssertionResult DoubleLE(const char* expr1, const char* expr2,\n                                    double val1, double val2);\n\n\n#if GTEST_OS_WINDOWS\n\n// Macros that test for HRESULT failure and success, these are only useful\n// on Windows, and rely on Windows SDK macros and APIs to compile.\n//\n//    * {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED}(expr)\n//\n// When expr unexpectedly fails or succeeds, Google Test prints the\n// expected result and the actual result with both a human-readable\n// string representation of the error, if available, as well as the\n// hex result code.\n# define EXPECT_HRESULT_SUCCEEDED(expr) \\\n    EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr))\n\n# define ASSERT_HRESULT_SUCCEEDED(expr) \\\n    ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr))\n\n# define EXPECT_HRESULT_FAILED(expr) \\\n    EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr))\n\n# define ASSERT_HRESULT_FAILED(expr) \\\n    ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr))\n\n#endif  // GTEST_OS_WINDOWS\n\n// Macros that execute statement and check that it doesn't generate new fatal\n// failures in the current thread.\n//\n//   * {ASSERT|EXPECT}_NO_FATAL_FAILURE(statement);\n//\n// Examples:\n//\n//   EXPECT_NO_FATAL_FAILURE(Process());\n//   ASSERT_NO_FATAL_FAILURE(Process()) << \"Process() failed\";\n//\n#define ASSERT_NO_FATAL_FAILURE(statement) \\\n    GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_FATAL_FAILURE_)\n#define EXPECT_NO_FATAL_FAILURE(statement) \\\n    GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_NONFATAL_FAILURE_)\n\n// Causes a trace (including the source file path, the current line\n// number, and the given message) to be included in every 
test failure\n// message generated by code in the current scope.  The effect is\n// undone when the control leaves the current scope.\n//\n// The message argument can be anything streamable to std::ostream.\n//\n// In the implementation, we include the current line number as part\n// of the dummy variable name, thus allowing multiple SCOPED_TRACE()s\n// to appear in the same block - as long as they are on different\n// lines.\n#define SCOPED_TRACE(message) \\\n  ::testing::internal::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\\\n    __FILE__, __LINE__, ::testing::Message() << (message))\n\n// Compile-time assertion for type equality.\n// StaticAssertTypeEq<type1, type2>() compiles iff type1 and type2 are\n// the same type.  The value it returns is not interesting.\n//\n// Instead of making StaticAssertTypeEq a class template, we make it a\n// function template that invokes a helper class template.  This\n// prevents a user from misusing StaticAssertTypeEq<T1, T2> by\n// defining objects of that type.\n//\n// CAVEAT:\n//\n// When used inside a method of a class template,\n// StaticAssertTypeEq<T1, T2>() is effective ONLY IF the method is\n// instantiated.  For example, given:\n//\n//   template <typename T> class Foo {\n//    public:\n//     void Bar() { testing::StaticAssertTypeEq<int, T>(); }\n//   };\n//\n// the code:\n//\n//   void Test1() { Foo<bool> foo; }\n//\n// will NOT generate a compiler error, as Foo<bool>::Bar() is never\n// actually instantiated.  Instead, you need:\n//\n//   void Test2() { Foo<bool> foo; foo.Bar(); }\n//\n// to cause a compiler error.\ntemplate <typename T1, typename T2>\nbool StaticAssertTypeEq() {\n  (void)internal::StaticAssertTypeEqHelper<T1, T2>();\n  return true;\n}\n\n// Defines a test.\n//\n// The first parameter is the name of the test case, and the second\n// parameter is the name of the test within the test case.\n//\n// The convention is to end the test case name with \"Test\".  
For\n// example, a test case for the Foo class can be named FooTest.\n//\n// The user should put his test code between braces after using this\n// macro.  Example:\n//\n//   TEST(FooTest, InitializesCorrectly) {\n//     Foo foo;\n//     EXPECT_TRUE(foo.StatusIsOK());\n//   }\n\n// Note that we call GetTestTypeId() instead of GetTypeId<\n// ::testing::Test>() here to get the type ID of testing::Test.  This\n// is to work around a suspected linker bug when using Google Test as\n// a framework on Mac OS X.  The bug causes GetTypeId<\n// ::testing::Test>() to return different values depending on whether\n// the call is from the Google Test framework itself or from user test\n// code.  GetTestTypeId() is guaranteed to always return the same\n// value, as it always calls GetTypeId<>() from the Google Test\n// framework.\n#define GTEST_TEST(test_case_name, test_name)\\\n  GTEST_TEST_(test_case_name, test_name, \\\n              ::testing::Test, ::testing::internal::GetTestTypeId())\n\n// Define this macro to 1 to omit the definition of TEST(), which\n// is a generic name and clashes with some other libraries.\n#if !GTEST_DONT_DEFINE_TEST\n# define TEST(test_case_name, test_name) GTEST_TEST(test_case_name, test_name)\n#endif\n\n// Defines a test that uses a test fixture.\n//\n// The first parameter is the name of the test fixture class, which\n// also doubles as the test case name.  The second parameter is the\n// name of the test within the test case.\n//\n// A test fixture class must be declared earlier.  The user should put\n// his test code between braces after using this macro.  
Example:\n//\n//   class FooTest : public testing::Test {\n//    protected:\n//     virtual void SetUp() { b_.AddElement(3); }\n//\n//     Foo a_;\n//     Foo b_;\n//   };\n//\n//   TEST_F(FooTest, InitializesCorrectly) {\n//     EXPECT_TRUE(a_.StatusIsOK());\n//   }\n//\n//   TEST_F(FooTest, ReturnsElementCountCorrectly) {\n//     EXPECT_EQ(0, a_.size());\n//     EXPECT_EQ(1, b_.size());\n//   }\n\n#define TEST_F(test_fixture, test_name)\\\n  GTEST_TEST_(test_fixture, test_name, test_fixture, \\\n              ::testing::internal::GetTypeId<test_fixture>())\n\n// Use this macro in main() to run all tests.  It returns 0 if all\n// tests are successful, or 1 otherwise.\n//\n// RUN_ALL_TESTS() should be invoked after the command line has been\n// parsed by InitGoogleTest().\n\n#define RUN_ALL_TESTS()\\\n  (::testing::UnitTest::GetInstance()->Run())\n\n}  // namespace testing\n\n#endif  // GTEST_INCLUDE_GTEST_GTEST_H_\n"
  },
  {
    "path": "caffe-fpn/src/gtest/gtest_main.cc",
    "content": "// Copyright 2006, Google Inc.\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are\n// met:\n//\n//     * Redistributions of source code must retain the above copyright\n// notice, this list of conditions and the following disclaimer.\n//     * Redistributions in binary form must reproduce the above\n// copyright notice, this list of conditions and the following disclaimer\n// in the documentation and/or other materials provided with the\n// distribution.\n//     * Neither the name of Google Inc. nor the names of its\n// contributors may be used to endorse or promote products derived from\n// this software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n#include <iostream>\n\n#include \"gtest/gtest.h\"\n\nGTEST_API_ int main(int argc, char **argv) {\n  std::cout << \"Running main() from gtest_main.cc\\n\";\n\n  testing::InitGoogleTest(&argc, argv);\n  return RUN_ALL_TESTS();\n}\n"
  },
  {
    "path": "caffe-fpn/tools/CMakeLists.txt",
    "content": "# Collect source files\nfile(GLOB_RECURSE srcs ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)\n\n# Build each source file independently\nforeach(source ${srcs})\n  get_filename_component(name ${source} NAME_WE)\n\n  # caffe target already exists\n  if(name MATCHES \"caffe\")\n    set(name ${name}.bin)\n  endif()\n\n  # target\n  add_executable(${name} ${source})\n  target_link_libraries(${name} ${Caffe_LINK})\n  caffe_default_properties(${name})\n\n  # set back RUNTIME_OUTPUT_DIRECTORY\n  caffe_set_runtime_directory(${name} \"${PROJECT_BINARY_DIR}/tools\")\n  caffe_set_solution_folder(${name} tools)\n\n  # restore output name without suffix\n  if(name MATCHES \"caffe.bin\")\n    set_target_properties(${name} PROPERTIES OUTPUT_NAME caffe)\n  endif()\n\n  # Install\n  install(TARGETS ${name} DESTINATION bin)\nendforeach(source)\n"
  },
  {
    "path": "caffe-fpn/tools/caffe.cpp",
    "content": "#ifdef WITH_PYTHON_LAYER\n#include \"boost/python.hpp\"\nnamespace bp = boost::python;\n#endif\n\n#include <gflags/gflags.h>\n#include <glog/logging.h>\n\n#include <cstring>\n#include <map>\n#include <string>\n#include <vector>\n\n#include \"boost/algorithm/string.hpp\"\n#include \"caffe/caffe.hpp\"\n#include \"caffe/util/signal_handler.h\"\n\nusing caffe::Blob;\nusing caffe::Caffe;\nusing caffe::Net;\nusing caffe::Layer;\nusing caffe::Solver;\nusing caffe::shared_ptr;\nusing caffe::string;\nusing caffe::Timer;\nusing caffe::vector;\nusing std::ostringstream;\n\nDEFINE_string(gpu, \"\",\n    \"Optional; run in GPU mode on given device IDs separated by ','.\"\n    \"Use '-gpu all' to run on all available GPUs. The effective training \"\n    \"batch size is multiplied by the number of devices.\");\nDEFINE_string(solver, \"\",\n    \"The solver definition protocol buffer text file.\");\nDEFINE_string(model, \"\",\n    \"The model definition protocol buffer text file..\");\nDEFINE_string(snapshot, \"\",\n    \"Optional; the snapshot solver state to resume training.\");\nDEFINE_string(weights, \"\",\n    \"Optional; the pretrained weights to initialize finetuning, \"\n    \"separated by ','. 
Cannot be set simultaneously with snapshot.\");\nDEFINE_int32(iterations, 50,\n    \"The number of iterations to run.\");\nDEFINE_string(sigint_effect, \"stop\",\n             \"Optional; action to take when a SIGINT signal is received: \"\n              \"snapshot, stop or none.\");\nDEFINE_string(sighup_effect, \"snapshot\",\n             \"Optional; action to take when a SIGHUP signal is received: \"\n             \"snapshot, stop or none.\");\n\n// A simple registry for caffe commands.\ntypedef int (*BrewFunction)();\ntypedef std::map<caffe::string, BrewFunction> BrewMap;\nBrewMap g_brew_map;\n\n#define RegisterBrewFunction(func) \\\nnamespace { \\\nclass __Registerer_##func { \\\n public: /* NOLINT */ \\\n  __Registerer_##func() { \\\n    g_brew_map[#func] = &func; \\\n  } \\\n}; \\\n__Registerer_##func g_registerer_##func; \\\n}\n\nstatic BrewFunction GetBrewFunction(const caffe::string& name) {\n  if (g_brew_map.count(name)) {\n    return g_brew_map[name];\n  } else {\n    LOG(ERROR) << \"Available caffe actions:\";\n    for (BrewMap::iterator it = g_brew_map.begin();\n         it != g_brew_map.end(); ++it) {\n      LOG(ERROR) << \"\\t\" << it->first;\n    }\n    LOG(FATAL) << \"Unknown action: \" << name;\n    return NULL;  // not reachable, just to suppress old compiler warnings.\n  }\n}\n\n// Parse GPU ids or use all available devices\nstatic void get_gpus(vector<int>* gpus) {\n  if (FLAGS_gpu == \"all\") {\n    int count = 0;\n#ifndef CPU_ONLY\n    CUDA_CHECK(cudaGetDeviceCount(&count));\n#else\n    NO_GPU;\n#endif\n    for (int i = 0; i < count; ++i) {\n      gpus->push_back(i);\n    }\n  } else if (FLAGS_gpu.size()) {\n    vector<string> strings;\n    boost::split(strings, FLAGS_gpu, boost::is_any_of(\",\"));\n    for (int i = 0; i < strings.size(); ++i) {\n      gpus->push_back(boost::lexical_cast<int>(strings[i]));\n    }\n  } else {\n    CHECK_EQ(gpus->size(), 0);\n  }\n}\n\n// caffe commands to call by\n//     caffe <command> <args>\n//\n// To add 
a command, define a function \"int command()\" and register it with\n// RegisterBrewFunction(action);\n\n// Device Query: show diagnostic information for a GPU device.\nint device_query() {\n  LOG(INFO) << \"Querying GPUs \" << FLAGS_gpu;\n  vector<int> gpus;\n  get_gpus(&gpus);\n  for (int i = 0; i < gpus.size(); ++i) {\n    caffe::Caffe::SetDevice(gpus[i]);\n    caffe::Caffe::DeviceQuery();\n  }\n  return 0;\n}\nRegisterBrewFunction(device_query);\n\n// Load the weights from the specified caffemodel(s) into the train and\n// test nets.\nvoid CopyLayers(caffe::Solver<float>* solver, const std::string& model_list) {\n  std::vector<std::string> model_names;\n  boost::split(model_names, model_list, boost::is_any_of(\",\") );\n  for (int i = 0; i < model_names.size(); ++i) {\n    LOG(INFO) << \"Finetuning from \" << model_names[i];\n    solver->net()->CopyTrainedLayersFrom(model_names[i]);\n    for (int j = 0; j < solver->test_nets().size(); ++j) {\n      solver->test_nets()[j]->CopyTrainedLayersFrom(model_names[i]);\n    }\n  }\n}\n\n// Translate the signal effect the user specified on the command-line to the\n// corresponding enumeration.\ncaffe::SolverAction::Enum GetRequestedAction(\n    const std::string& flag_value) {\n  if (flag_value == \"stop\") {\n    return caffe::SolverAction::STOP;\n  }\n  if (flag_value == \"snapshot\") {\n    return caffe::SolverAction::SNAPSHOT;\n  }\n  if (flag_value == \"none\") {\n    return caffe::SolverAction::NONE;\n  }\n  LOG(FATAL) << \"Invalid signal effect \\\"\"<< flag_value << \"\\\" was specified\";\n}\n\n// Train / Finetune a model.\nint train() {\n  CHECK_GT(FLAGS_solver.size(), 0) << \"Need a solver definition to train.\";\n  CHECK(!FLAGS_snapshot.size() || !FLAGS_weights.size())\n      << \"Give a snapshot to resume training or weights to finetune \"\n      \"but not both.\";\n\n  caffe::SolverParameter solver_param;\n  caffe::ReadSolverParamsFromTextFileOrDie(FLAGS_solver, &solver_param);\n\n  // If the gpus flag is 
not provided, allow the mode and device to be set\n  // in the solver prototxt.\n  if (FLAGS_gpu.size() == 0\n      && solver_param.solver_mode() == caffe::SolverParameter_SolverMode_GPU) {\n      if (solver_param.has_device_id()) {\n          FLAGS_gpu = \"\" +\n              boost::lexical_cast<string>(solver_param.device_id());\n      } else {  // Set default GPU if unspecified\n          FLAGS_gpu = \"\" + boost::lexical_cast<string>(0);\n      }\n  }\n\n  vector<int> gpus;\n  get_gpus(&gpus);\n  if (gpus.size() == 0) {\n    LOG(INFO) << \"Use CPU.\";\n    Caffe::set_mode(Caffe::CPU);\n  } else {\n    ostringstream s;\n    for (int i = 0; i < gpus.size(); ++i) {\n      s << (i ? \", \" : \"\") << gpus[i];\n    }\n    LOG(INFO) << \"Using GPUs \" << s.str();\n#ifndef CPU_ONLY\n    cudaDeviceProp device_prop;\n    for (int i = 0; i < gpus.size(); ++i) {\n      cudaGetDeviceProperties(&device_prop, gpus[i]);\n      LOG(INFO) << \"GPU \" << gpus[i] << \": \" << device_prop.name;\n    }\n#endif\n    solver_param.set_device_id(gpus[0]);\n    Caffe::SetDevice(gpus[0]);\n    Caffe::set_mode(Caffe::GPU);\n    Caffe::set_solver_count(gpus.size());\n  }\n\n  caffe::SignalHandler signal_handler(\n        GetRequestedAction(FLAGS_sigint_effect),\n        GetRequestedAction(FLAGS_sighup_effect));\n\n  shared_ptr<caffe::Solver<float> >\n      solver(caffe::SolverRegistry<float>::CreateSolver(solver_param));\n\n  solver->SetActionFunction(signal_handler.GetActionFunction());\n\n  if (FLAGS_snapshot.size()) {\n    LOG(INFO) << \"Resuming from \" << FLAGS_snapshot;\n    solver->Restore(FLAGS_snapshot.c_str());\n  } else if (FLAGS_weights.size()) {\n    CopyLayers(solver.get(), FLAGS_weights);\n  }\n\n  if (gpus.size() > 1) {\n    caffe::P2PSync<float> sync(solver, NULL, solver->param());\n    sync.run(gpus);\n  } else {\n    LOG(INFO) << \"Starting Optimization\";\n    solver->Solve();\n  }\n  LOG(INFO) << \"Optimization Done.\";\n  return 
0;\n}\nRegisterBrewFunction(train);\n\n\n// Test: score a model.\nint test() {\n  CHECK_GT(FLAGS_model.size(), 0) << \"Need a model definition to score.\";\n  CHECK_GT(FLAGS_weights.size(), 0) << \"Need model weights to score.\";\n\n  // Set device id and mode\n  vector<int> gpus;\n  get_gpus(&gpus);\n  if (gpus.size() != 0) {\n    LOG(INFO) << \"Use GPU with device ID \" << gpus[0];\n#ifndef CPU_ONLY\n    cudaDeviceProp device_prop;\n    cudaGetDeviceProperties(&device_prop, gpus[0]);\n    LOG(INFO) << \"GPU device name: \" << device_prop.name;\n#endif\n    Caffe::SetDevice(gpus[0]);\n    Caffe::set_mode(Caffe::GPU);\n  } else {\n    LOG(INFO) << \"Use CPU.\";\n    Caffe::set_mode(Caffe::CPU);\n  }\n  // Instantiate the caffe net.\n  Net<float> caffe_net(FLAGS_model, caffe::TEST);\n  caffe_net.CopyTrainedLayersFrom(FLAGS_weights);\n  LOG(INFO) << \"Running for \" << FLAGS_iterations << \" iterations.\";\n\n  vector<Blob<float>* > bottom_vec;\n  vector<int> test_score_output_id;\n  vector<float> test_score;\n  float loss = 0;\n  for (int i = 0; i < FLAGS_iterations; ++i) {\n    float iter_loss;\n    const vector<Blob<float>*>& result =\n        caffe_net.Forward(bottom_vec, &iter_loss);\n    loss += iter_loss;\n    int idx = 0;\n    for (int j = 0; j < result.size(); ++j) {\n      const float* result_vec = result[j]->cpu_data();\n      for (int k = 0; k < result[j]->count(); ++k, ++idx) {\n        const float score = result_vec[k];\n        if (i == 0) {\n          test_score.push_back(score);\n          test_score_output_id.push_back(j);\n        } else {\n          test_score[idx] += score;\n        }\n        const std::string& output_name = caffe_net.blob_names()[\n            caffe_net.output_blob_indices()[j]];\n        LOG(INFO) << \"Batch \" << i << \", \" << output_name << \" = \" << score;\n      }\n    }\n  }\n  loss /= FLAGS_iterations;\n  LOG(INFO) << \"Loss: \" << loss;\n  for (int i = 0; i < test_score.size(); ++i) {\n    const std::string& 
output_name = caffe_net.blob_names()[\n        caffe_net.output_blob_indices()[test_score_output_id[i]]];\n    const float loss_weight = caffe_net.blob_loss_weights()[\n        caffe_net.output_blob_indices()[test_score_output_id[i]]];\n    std::ostringstream loss_msg_stream;\n    const float mean_score = test_score[i] / FLAGS_iterations;\n    if (loss_weight) {\n      loss_msg_stream << \" (* \" << loss_weight\n                      << \" = \" << loss_weight * mean_score << \" loss)\";\n    }\n    LOG(INFO) << output_name << \" = \" << mean_score << loss_msg_stream.str();\n  }\n\n  return 0;\n}\nRegisterBrewFunction(test);\n\n\n// Time: benchmark the execution time of a model.\nint time() {\n  CHECK_GT(FLAGS_model.size(), 0) << \"Need a model definition to time.\";\n\n  // Set device id and mode\n  vector<int> gpus;\n  get_gpus(&gpus);\n  if (gpus.size() != 0) {\n    LOG(INFO) << \"Use GPU with device ID \" << gpus[0];\n    Caffe::SetDevice(gpus[0]);\n    Caffe::set_mode(Caffe::GPU);\n  } else {\n    LOG(INFO) << \"Use CPU.\";\n    Caffe::set_mode(Caffe::CPU);\n  }\n  // Instantiate the caffe net.\n  Net<float> caffe_net(FLAGS_model, caffe::TRAIN);\n\n  // Do a clean forward and backward pass, so that memory allocation are done\n  // and future iterations will be more stable.\n  LOG(INFO) << \"Performing Forward\";\n  // Note that for the speed benchmark, we will assume that the network does\n  // not take any input blobs.\n  float initial_loss;\n  caffe_net.Forward(vector<Blob<float>*>(), &initial_loss);\n  LOG(INFO) << \"Initial loss: \" << initial_loss;\n  LOG(INFO) << \"Performing Backward\";\n  caffe_net.Backward();\n\n  const vector<shared_ptr<Layer<float> > >& layers = caffe_net.layers();\n  const vector<vector<Blob<float>*> >& bottom_vecs = caffe_net.bottom_vecs();\n  const vector<vector<Blob<float>*> >& top_vecs = caffe_net.top_vecs();\n  const vector<vector<bool> >& bottom_need_backward =\n      caffe_net.bottom_need_backward();\n  LOG(INFO) << \"*** 
Benchmark begins ***\";\n  LOG(INFO) << \"Testing for \" << FLAGS_iterations << \" iterations.\";\n  Timer total_timer;\n  total_timer.Start();\n  Timer forward_timer;\n  Timer backward_timer;\n  Timer timer;\n  std::vector<double> forward_time_per_layer(layers.size(), 0.0);\n  std::vector<double> backward_time_per_layer(layers.size(), 0.0);\n  double forward_time = 0.0;\n  double backward_time = 0.0;\n  for (int j = 0; j < FLAGS_iterations; ++j) {\n    Timer iter_timer;\n    iter_timer.Start();\n    forward_timer.Start();\n    for (int i = 0; i < layers.size(); ++i) {\n      timer.Start();\n      layers[i]->Forward(bottom_vecs[i], top_vecs[i]);\n      forward_time_per_layer[i] += timer.MicroSeconds();\n    }\n    forward_time += forward_timer.MicroSeconds();\n    backward_timer.Start();\n    for (int i = layers.size() - 1; i >= 0; --i) {\n      timer.Start();\n      layers[i]->Backward(top_vecs[i], bottom_need_backward[i],\n                          bottom_vecs[i]);\n      backward_time_per_layer[i] += timer.MicroSeconds();\n    }\n    backward_time += backward_timer.MicroSeconds();\n    LOG(INFO) << \"Iteration: \" << j + 1 << \" forward-backward time: \"\n      << iter_timer.MilliSeconds() << \" ms.\";\n  }\n  LOG(INFO) << \"Average time per layer: \";\n  for (int i = 0; i < layers.size(); ++i) {\n    const caffe::string& layername = layers[i]->layer_param().name();\n    LOG(INFO) << std::setfill(' ') << std::setw(10) << layername <<\n      \"\\tforward: \" << forward_time_per_layer[i] / 1000 /\n      FLAGS_iterations << \" ms.\";\n    LOG(INFO) << std::setfill(' ') << std::setw(10) << layername  <<\n      \"\\tbackward: \" << backward_time_per_layer[i] / 1000 /\n      FLAGS_iterations << \" ms.\";\n  }\n  total_timer.Stop();\n  LOG(INFO) << \"Average Forward pass: \" << forward_time / 1000 /\n    FLAGS_iterations << \" ms.\";\n  LOG(INFO) << \"Average Backward pass: \" << backward_time / 1000 /\n    FLAGS_iterations << \" ms.\";\n  LOG(INFO) << \"Average 
Forward-Backward: \" << total_timer.MilliSeconds() /\n    FLAGS_iterations << \" ms.\";\n  LOG(INFO) << \"Total Time: \" << total_timer.MilliSeconds() << \" ms.\";\n  LOG(INFO) << \"*** Benchmark ends ***\";\n  return 0;\n}\nRegisterBrewFunction(time);\n\nint main(int argc, char** argv) {\n  // Print output to stderr (while still logging).\n  FLAGS_alsologtostderr = 1;\n  // Set version\n  gflags::SetVersionString(AS_STRING(CAFFE_VERSION));\n  // Usage message.\n  gflags::SetUsageMessage(\"command line brew\\n\"\n      \"usage: caffe <command> <args>\\n\\n\"\n      \"commands:\\n\"\n      \"  train           train or finetune a model\\n\"\n      \"  test            score a model\\n\"\n      \"  device_query    show GPU diagnostic information\\n\"\n      \"  time            benchmark model execution time\");\n  // Run tool or show usage.\n  caffe::GlobalInit(&argc, &argv);\n  if (argc == 2) {\n#ifdef WITH_PYTHON_LAYER\n    try {\n#endif\n      return GetBrewFunction(caffe::string(argv[1]))();\n#ifdef WITH_PYTHON_LAYER\n    } catch (bp::error_already_set) {\n      PyErr_Print();\n      return 1;\n    }\n#endif\n  } else {\n    gflags::ShowUsageWithFlagsRestrict(argv[0], \"tools/caffe\");\n  }\n}\n"
  },
  {
    "path": "caffe-fpn/tools/compute_image_mean.cpp",
    "content": "#include <stdint.h>\n#include <algorithm>\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gflags/gflags.h\"\n#include \"glog/logging.h\"\n\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/io.hpp\"\n\nusing namespace caffe;  // NOLINT(build/namespaces)\n\nusing std::max;\nusing std::pair;\nusing boost::scoped_ptr;\n\nDEFINE_string(backend, \"lmdb\",\n        \"The backend {leveldb, lmdb} containing the images\");\n\nint main(int argc, char** argv) {\n  ::google::InitGoogleLogging(argv[0]);\n\n#ifdef USE_OPENCV\n#ifndef GFLAGS_GFLAGS_H_\n  namespace gflags = google;\n#endif\n\n  gflags::SetUsageMessage(\"Compute the mean_image of a set of images given by\"\n        \" a leveldb/lmdb\\n\"\n        \"Usage:\\n\"\n        \"    compute_image_mean [FLAGS] INPUT_DB [OUTPUT_FILE]\\n\");\n\n  gflags::ParseCommandLineFlags(&argc, &argv, true);\n\n  if (argc < 2 || argc > 3) {\n    gflags::ShowUsageWithFlagsRestrict(argv[0], \"tools/compute_image_mean\");\n    return 1;\n  }\n\n  scoped_ptr<db::DB> db(db::GetDB(FLAGS_backend));\n  db->Open(argv[1], db::READ);\n  scoped_ptr<db::Cursor> cursor(db->NewCursor());\n\n  BlobProto sum_blob;\n  int count = 0;\n  // load first datum\n  Datum datum;\n  datum.ParseFromString(cursor->value());\n\n  if (DecodeDatumNative(&datum)) {\n    LOG(INFO) << \"Decoding Datum\";\n  }\n\n  sum_blob.set_num(1);\n  sum_blob.set_channels(datum.channels());\n  sum_blob.set_height(datum.height());\n  sum_blob.set_width(datum.width());\n  const int data_size = datum.channels() * datum.height() * datum.width();\n  int size_in_datum = std::max<int>(datum.data().size(),\n                                    datum.float_data_size());\n  for (int i = 0; i < size_in_datum; ++i) {\n    sum_blob.add_data(0.);\n  }\n  LOG(INFO) << \"Starting Iteration\";\n  while (cursor->valid()) {\n    Datum datum;\n    datum.ParseFromString(cursor->value());\n    
DecodeDatumNative(&datum);\n\n    const std::string& data = datum.data();\n    size_in_datum = std::max<int>(datum.data().size(),\n        datum.float_data_size());\n    CHECK_EQ(size_in_datum, data_size) << \"Incorrect data field size \" <<\n        size_in_datum;\n    if (data.size() != 0) {\n      CHECK_EQ(data.size(), size_in_datum);\n      for (int i = 0; i < size_in_datum; ++i) {\n        sum_blob.set_data(i, sum_blob.data(i) + (uint8_t)data[i]);\n      }\n    } else {\n      CHECK_EQ(datum.float_data_size(), size_in_datum);\n      for (int i = 0; i < size_in_datum; ++i) {\n        sum_blob.set_data(i, sum_blob.data(i) +\n            static_cast<float>(datum.float_data(i)));\n      }\n    }\n    ++count;\n    if (count % 10000 == 0) {\n      LOG(INFO) << \"Processed \" << count << \" files.\";\n    }\n    cursor->Next();\n  }\n\n  if (count % 10000 != 0) {\n    LOG(INFO) << \"Processed \" << count << \" files.\";\n  }\n  for (int i = 0; i < sum_blob.data_size(); ++i) {\n    sum_blob.set_data(i, sum_blob.data(i) / count);\n  }\n  // Write to disk\n  if (argc == 3) {\n    LOG(INFO) << \"Write to \" << argv[2];\n    WriteProtoToBinaryFile(sum_blob, argv[2]);\n  }\n  const int channels = sum_blob.channels();\n  const int dim = sum_blob.height() * sum_blob.width();\n  std::vector<float> mean_values(channels, 0.0);\n  LOG(INFO) << \"Number of channels: \" << channels;\n  for (int c = 0; c < channels; ++c) {\n    for (int i = 0; i < dim; ++i) {\n      mean_values[c] += sum_blob.data(dim * c + i);\n    }\n    LOG(INFO) << \"mean_value channel [\" << c << \"]:\" << mean_values[c] / dim;\n  }\n#else\n  LOG(FATAL) << \"This tool requires OpenCV; compile with USE_OPENCV.\";\n#endif  // USE_OPENCV\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/convert_imageset.cpp",
    "content": "// This program converts a set of images to a lmdb/leveldb by storing them\n// as Datum proto buffers.\n// Usage:\n//   convert_imageset [FLAGS] ROOTFOLDER/ LISTFILE DB_NAME\n//\n// where ROOTFOLDER is the root folder that holds all the images, and LISTFILE\n// should be a list of files as well as their labels, in the format as\n//   subfolder1/file1.JPEG 7\n//   ....\n\n#include <algorithm>\n#include <fstream>  // NOLINT(readability/streams)\n#include <string>\n#include <utility>\n#include <vector>\n\n#include \"boost/scoped_ptr.hpp\"\n#include \"gflags/gflags.h\"\n#include \"glog/logging.h\"\n\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/format.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/rng.hpp\"\n\nusing namespace caffe;  // NOLINT(build/namespaces)\nusing std::pair;\nusing boost::scoped_ptr;\n\nDEFINE_bool(gray, false,\n    \"When this option is on, treat images as grayscale ones\");\nDEFINE_bool(shuffle, false,\n    \"Randomly shuffle the order of images and their labels\");\nDEFINE_string(backend, \"lmdb\",\n        \"The backend {lmdb, leveldb} for storing the result\");\nDEFINE_int32(resize_width, 0, \"Width images are resized to\");\nDEFINE_int32(resize_height, 0, \"Height images are resized to\");\nDEFINE_bool(check_size, false,\n    \"When this option is on, check that all the datum have the same size\");\nDEFINE_bool(encoded, false,\n    \"When this option is on, the encoded image will be save in datum\");\nDEFINE_string(encode_type, \"\",\n    \"Optional: What type should we encode the image as ('png','jpg',...).\");\n\nint main(int argc, char** argv) {\n#ifdef USE_OPENCV\n  ::google::InitGoogleLogging(argv[0]);\n  // Print output to stderr (while still logging)\n  FLAGS_alsologtostderr = 1;\n\n#ifndef GFLAGS_GFLAGS_H_\n  namespace gflags = google;\n#endif\n\n  gflags::SetUsageMessage(\"Convert a set of images to the leveldb/lmdb\\n\"\n        \"format used as input for 
Caffe.\\n\"\n        \"Usage:\\n\"\n        \"    convert_imageset [FLAGS] ROOTFOLDER/ LISTFILE DB_NAME\\n\"\n        \"The ImageNet dataset for the training demo is at\\n\"\n        \"    http://www.image-net.org/download-images\\n\");\n  gflags::ParseCommandLineFlags(&argc, &argv, true);\n\n  if (argc < 4) {\n    gflags::ShowUsageWithFlagsRestrict(argv[0], \"tools/convert_imageset\");\n    return 1;\n  }\n\n  const bool is_color = !FLAGS_gray;\n  const bool check_size = FLAGS_check_size;\n  const bool encoded = FLAGS_encoded;\n  const string encode_type = FLAGS_encode_type;\n\n  std::ifstream infile(argv[2]);\n  std::vector<std::pair<std::string, int> > lines;\n  std::string filename;\n  int label;\n  while (infile >> filename >> label) {\n    lines.push_back(std::make_pair(filename, label));\n  }\n  if (FLAGS_shuffle) {\n    // randomly shuffle data\n    LOG(INFO) << \"Shuffling data\";\n    shuffle(lines.begin(), lines.end());\n  }\n  LOG(INFO) << \"A total of \" << lines.size() << \" images.\";\n\n  if (encode_type.size() && !encoded)\n    LOG(INFO) << \"encode_type specified, assuming encoded=true.\";\n\n  int resize_height = std::max<int>(0, FLAGS_resize_height);\n  int resize_width = std::max<int>(0, FLAGS_resize_width);\n\n  // Create new DB\n  scoped_ptr<db::DB> db(db::GetDB(FLAGS_backend));\n  db->Open(argv[3], db::NEW);\n  scoped_ptr<db::Transaction> txn(db->NewTransaction());\n\n  // Storing to db\n  std::string root_folder(argv[1]);\n  Datum datum;\n  int count = 0;\n  int data_size = 0;\n  bool data_size_initialized = false;\n\n  for (int line_id = 0; line_id < lines.size(); ++line_id) {\n    bool status;\n    std::string enc = encode_type;\n    if (encoded && !enc.size()) {\n      // Guess the encoding type from the file name\n      string fn = lines[line_id].first;\n      size_t p = fn.rfind('.');\n      if ( p == fn.npos )\n        LOG(WARNING) << \"Failed to guess the encoding of '\" << fn << \"'\";\n      enc = fn.substr(p);\n      
std::transform(enc.begin(), enc.end(), enc.begin(), ::tolower);\n    }\n    status = ReadImageToDatum(root_folder + lines[line_id].first,\n        lines[line_id].second, resize_height, resize_width, is_color,\n        enc, &datum);\n    if (status == false) continue;\n    if (check_size) {\n      if (!data_size_initialized) {\n        data_size = datum.channels() * datum.height() * datum.width();\n        data_size_initialized = true;\n      } else {\n        const std::string& data = datum.data();\n        CHECK_EQ(data.size(), data_size) << \"Incorrect data field size \"\n            << data.size();\n      }\n    }\n    // sequential\n    string key_str = caffe::format_int(line_id, 8) + \"_\" + lines[line_id].first;\n\n    // Put in db\n    string out;\n    CHECK(datum.SerializeToString(&out));\n    txn->Put(key_str, out);\n\n    if (++count % 1000 == 0) {\n      // Commit db\n      txn->Commit();\n      txn.reset(db->NewTransaction());\n      LOG(INFO) << \"Processed \" << count << \" files.\";\n    }\n  }\n  // write the last batch\n  if (count % 1000 != 0) {\n    txn->Commit();\n    LOG(INFO) << \"Processed \" << count << \" files.\";\n  }\n#else\n  LOG(FATAL) << \"This tool requires OpenCV; compile with USE_OPENCV.\";\n#endif  // USE_OPENCV\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/device_query.cpp",
    "content": "#include \"caffe/common.hpp\"\n\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"Deprecated. Use caffe device_query \"\n                \"[--device_id=0] instead.\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/extra/extract_seconds.py",
    "content": "#!/usr/bin/env python\nimport datetime\nimport os\nimport sys\n\ndef extract_datetime_from_line(line, year):\n    # Expected format: I0210 13:39:22.381027 25210 solver.cpp:204] Iteration 100, lr = 0.00992565\n    line = line.strip().split()\n    month = int(line[0][1:3])\n    day = int(line[0][3:])\n    timestamp = line[1]\n    pos = timestamp.rfind('.')\n    ts = [int(x) for x in timestamp[:pos].split(':')]\n    hour = ts[0]\n    minute = ts[1]\n    second = ts[2]\n    microsecond = int(timestamp[pos + 1:])\n    dt = datetime.datetime(year, month, day, hour, minute, second, microsecond)\n    return dt\n\n\ndef get_log_created_year(input_file):\n    \"\"\"Get year from log file system timestamp\n    \"\"\"\n\n    log_created_time = os.path.getctime(input_file)\n    log_created_year = datetime.datetime.fromtimestamp(log_created_time).year\n    return log_created_year\n\n\ndef get_start_time(line_iterable, year):\n    \"\"\"Find start time from group of lines\n    \"\"\"\n\n    start_datetime = None\n    for line in line_iterable:\n        line = line.strip()\n        if line.find('Solving') != -1:\n            start_datetime = extract_datetime_from_line(line, year)\n            break\n    return start_datetime\n\n\ndef extract_seconds(input_file, output_file):\n    with open(input_file, 'r') as f:\n        lines = f.readlines()\n    log_created_year = get_log_created_year(input_file)\n    start_datetime = get_start_time(lines, log_created_year)\n    assert start_datetime, 'Start time not found'\n\n    out = open(output_file, 'w')\n    for line in lines:\n        line = line.strip()\n        if line.find('Iteration') != -1:\n            dt = extract_datetime_from_line(line, log_created_year)\n            elapsed_seconds = (dt - start_datetime).total_seconds()\n            out.write('%f\\n' % elapsed_seconds)\n    out.close()\n\nif __name__ == '__main__':\n    if len(sys.argv) < 3:\n        print('Usage: ./extract_seconds input_file output_file')\n     
   exit(1)\n    extract_seconds(sys.argv[1], sys.argv[2])\n"
  },
  {
    "path": "caffe-fpn/tools/extra/launch_resize_and_crop_images.sh",
    "content": "#!/bin/bash\n#### https://github.com/Yangqing/mincepie/wiki/Launch-Your-Mapreducer\n\n# If you encounter error that the address already in use, kill the process.\n# 11235 is the port of server process\n# https://github.com/Yangqing/mincepie/blob/master/mincepie/mince.py\n#     sudo netstat -ap | grep 11235\n# The last column of the output is  PID/Program name\n#     kill -9 PID\n# Second solution: \n#     nmap localhost\n#     fuser -k 11235/tcp\n# Or just wait a few seconds.\n\n## Launch your Mapreduce locally\n# num_clients: number of processes\n# image_lib: OpenCV or PIL, case insensitive. The default value is the faster OpenCV.\n# input: the file containing one image path relative to input_folder each line\n# input_folder: where are the original images\n# output_folder: where to save the resized and cropped images\n./resize_and_crop_images.py --num_clients=8 --image_lib=opencv --input=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images.txt --input_folder=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images_train/ --output_folder=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images_train_resized/\n\n## Launch your Mapreduce with MPI\n# mpirun -n 8 --launch=mpi resize_and_crop_images.py --image_lib=opencv --input=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images.txt --input_folder=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images_train/ --output_folder=/home/user/Datasets/ImageNet/ILSVRC2010/ILSVRC2010_images_train_resized/\n"
  },
  {
    "path": "caffe-fpn/tools/extra/parse_log.py",
    "content": "#!/usr/bin/env python\n\n\"\"\"\nParse training log\n\nEvolved from parse_log.sh\n\"\"\"\n\nimport os\nimport re\nimport extract_seconds\nimport argparse\nimport csv\nfrom collections import OrderedDict\n\n\ndef parse_log(path_to_log):\n    \"\"\"Parse log file\n    Returns (train_dict_list, train_dict_names, test_dict_list, test_dict_names)\n\n    train_dict_list and test_dict_list are lists of dicts that define the table\n    rows\n\n    train_dict_names and test_dict_names are ordered tuples of the column names\n    for the two dict_lists\n    \"\"\"\n\n    regex_iteration = re.compile('Iteration (\\d+)')\n    regex_train_output = re.compile('Train net output #(\\d+): (\\S+) = ([\\.\\deE+-]+)')\n    regex_test_output = re.compile('Test net output #(\\d+): (\\S+) = ([\\.\\deE+-]+)')\n    regex_learning_rate = re.compile('lr = ([-+]?[0-9]*\\.?[0-9]+([eE]?[-+]?[0-9]+)?)')\n\n    # Pick out lines of interest\n    iteration = -1\n    learning_rate = float('NaN')\n    train_dict_list = []\n    test_dict_list = []\n    train_row = None\n    test_row = None\n\n    logfile_year = extract_seconds.get_log_created_year(path_to_log)\n    with open(path_to_log) as f:\n        start_time = extract_seconds.get_start_time(f, logfile_year)\n\n        for line in f:\n            iteration_match = regex_iteration.search(line)\n            if iteration_match:\n                iteration = float(iteration_match.group(1))\n            if iteration == -1:\n                # Only start parsing for other stuff if we've found the first\n                # iteration\n                continue\n\n            time = extract_seconds.extract_datetime_from_line(line,\n                                                              logfile_year)\n            seconds = (time - start_time).total_seconds()\n\n            learning_rate_match = regex_learning_rate.search(line)\n            if learning_rate_match:\n                learning_rate = float(learning_rate_match.group(1))\n\n      
      train_dict_list, train_row = parse_line_for_net_output(\n                regex_train_output, train_row, train_dict_list,\n                line, iteration, seconds, learning_rate\n            )\n            test_dict_list, test_row = parse_line_for_net_output(\n                regex_test_output, test_row, test_dict_list,\n                line, iteration, seconds, learning_rate\n            )\n\n    fix_initial_nan_learning_rate(train_dict_list)\n    fix_initial_nan_learning_rate(test_dict_list)\n\n    return train_dict_list, test_dict_list\n\n\ndef parse_line_for_net_output(regex_obj, row, row_dict_list,\n                              line, iteration, seconds, learning_rate):\n    \"\"\"Parse a single line for training or test output\n\n    Returns a a tuple with (row_dict_list, row)\n    row: may be either a new row or an augmented version of the current row\n    row_dict_list: may be either the current row_dict_list or an augmented\n    version of the current row_dict_list\n    \"\"\"\n\n    output_match = regex_obj.search(line)\n    if output_match:\n        if not row or row['NumIters'] != iteration:\n            # Push the last row and start a new one\n            if row:\n                # If we're on a new iteration, push the last row\n                # This will probably only happen for the first row; otherwise\n                # the full row checking logic below will push and clear full\n                # rows\n                row_dict_list.append(row)\n\n            row = OrderedDict([\n                ('NumIters', iteration),\n                ('Seconds', seconds),\n                ('LearningRate', learning_rate)\n            ])\n\n        # output_num is not used; may be used in the future\n        # output_num = output_match.group(1)\n        output_name = output_match.group(2)\n        output_val = output_match.group(3)\n        row[output_name] = float(output_val)\n\n    if row and len(row_dict_list) >= 1 and len(row) == len(row_dict_list[0]):\n  
      # The row is full, based on the fact that it has the same number of\n        # columns as the first row; append it to the list\n        row_dict_list.append(row)\n        row = None\n\n    return row_dict_list, row\n\n\ndef fix_initial_nan_learning_rate(dict_list):\n    \"\"\"Correct initial value of learning rate\n\n    Learning rate is normally not printed until after the initial test and\n    training step, which means the initial testing and training rows have\n    LearningRate = NaN. Fix this by copying over the LearningRate from the\n    second row, if it exists.\n    \"\"\"\n\n    if len(dict_list) > 1:\n        dict_list[0]['LearningRate'] = dict_list[1]['LearningRate']\n\n\ndef save_csv_files(logfile_path, output_dir, train_dict_list, test_dict_list,\n                   delimiter=',', verbose=False):\n    \"\"\"Save CSV files to output_dir\n\n    If the input log file is, e.g., caffe.INFO, the names will be\n    caffe.INFO.train and caffe.INFO.test\n    \"\"\"\n\n    log_basename = os.path.basename(logfile_path)\n    train_filename = os.path.join(output_dir, log_basename + '.train')\n    write_csv(train_filename, train_dict_list, delimiter, verbose)\n\n    test_filename = os.path.join(output_dir, log_basename + '.test')\n    write_csv(test_filename, test_dict_list, delimiter, verbose)\n\n\ndef write_csv(output_filename, dict_list, delimiter, verbose=False):\n    \"\"\"Write a CSV file\n    \"\"\"\n\n    dialect = csv.excel\n    dialect.delimiter = delimiter\n\n    with open(output_filename, 'w') as f:\n        dict_writer = csv.DictWriter(f, fieldnames=dict_list[0].keys(),\n                                     dialect=dialect)\n        dict_writer.writeheader()\n        dict_writer.writerows(dict_list)\n    if verbose:\n        print 'Wrote %s' % output_filename\n\n\ndef parse_args():\n    description = ('Parse a Caffe training log into two CSV files '\n                   'containing training and testing information')\n    parser = 
argparse.ArgumentParser(description=description)\n\n    parser.add_argument('logfile_path',\n                        help='Path to log file')\n\n    parser.add_argument('output_dir',\n                        help='Directory in which to place output CSV files')\n\n    parser.add_argument('--verbose',\n                        action='store_true',\n                        help='Print some extra info (e.g., output filenames)')\n\n    parser.add_argument('--delimiter',\n                        default=',',\n                        help=('Column delimiter in output files '\n                              '(default: \\'%(default)s\\')'))\n\n    args = parser.parse_args()\n    return args\n\n\ndef main():\n    args = parse_args()\n    train_dict_list, test_dict_list = parse_log(args.logfile_path)\n    save_csv_files(args.logfile_path, args.output_dir, train_dict_list,\n                   test_dict_list, delimiter=args.delimiter)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "caffe-fpn/tools/extra/parse_log.sh",
    "content": "#!/bin/bash\n# Usage parse_log.sh caffe.log\n# It creates the following two text files, each containing a table:\n#     caffe.log.test (columns: '#Iters Seconds TestAccuracy TestLoss')\n#     caffe.log.train (columns: '#Iters Seconds TrainingLoss LearningRate')\n\n\n# get the dirname of the script\nDIR=\"$( cd \"$(dirname \"$0\")\" ; pwd -P )\"\n\nif [ \"$#\" -lt 1 ]\nthen\necho \"Usage parse_log.sh /path/to/your.log\"\nexit\nfi\nLOG=`basename $1`\nsed -n '/Iteration .* Testing net/,/Iteration *. loss/p' $1 > aux.txt\nsed -i '/Waiting for data/d' aux.txt\nsed -i '/prefetch queue empty/d' aux.txt\nsed -i '/Iteration .* loss/d' aux.txt\nsed -i '/Iteration .* lr/d' aux.txt\nsed -i '/Train net/d' aux.txt\n\n\n# Extracting elapsed seconds\n# For extraction of time since this line contains the start time\n\n# For extraction of time since this line contains the start time\ngrep '] Solving ' $1 > aux.txt\ngrep ', loss = ' $1 >> aux.txt\ngrep 'Iteration ' aux.txt | sed  's/.*Iteration \\([[:digit:]]*\\).*/\\1/g' > aux0.txt\ngrep ', loss = ' $1 | awk '{print $9}' > aux1.txt\ngrep ', lr = ' $1 | awk '{print $9}' > aux2.txt\n\n# Extracting elapsed seconds\n\n\n# Generating\necho '#Iters Seconds TrainingLoss LearningRate'> $LOG.train\npaste aux0.txt aux0.txt aux1.txt aux2.txt | column -t >> $LOG.train\n\n"
  },
  {
    "path": "caffe-fpn/tools/extra/plot_log.gnuplot.example",
    "content": "# These snippets serve only as basic examples.\n# Customization is a must.\n# You can copy, paste, edit them in whatever way you want.\n# Be warned that the fields in the training log may change in the future.\n# You had better check the data files before designing your own plots.\n\n# Please generate the neccessary data files with \n# /path/to/caffe/tools/extra/parse_log.sh before plotting.\n# Example usage: \n#     ./parse_log.sh mnist.log\n# Now you have mnist.log.train and mnist.log.test.\n#     gnuplot mnist.gnuplot\n\n# The fields present in the data files that are usually proper to plot along\n# the y axis are test accuracy, test loss, training loss, and learning rate.\n# Those should plot along the x axis are training iterations and seconds.\n# Possible combinations:\n# 1. Test accuracy (test score 0) vs. training iterations / time;\n# 2. Test loss (test score 1) time;\n# 3. Training loss vs. training iterations / time;\n# 4. Learning rate vs. training iterations / time;\n# A rarer one: Training time vs. iterations.\n\n# What is the difference between plotting against iterations and time?\n# If the overhead in one iteration is too high, one algorithm might appear\n# to be faster in terms of progress per iteration and slower when measured\n# against time. And the reverse case is not entirely impossible. Thus, some\n# papers chose to only publish the more favorable type. It is your freedom\n# to decide what to plot.\n\nreset\nset terminal png\nset output \"your_chart_name.png\"\nset style data lines\nset key right\n\n###### Fields in the data file your_log_name.log.train are\n###### Iters Seconds TrainingLoss LearningRate\n\n# Training loss vs. training iterations\nset title \"Training loss vs. training iterations\"\nset xlabel \"Training iterations\"\nset ylabel \"Training loss\"\nplot \"mnist.log.train\" using 1:3 title \"mnist\"\n\n# Training loss vs. training time\n# plot \"mnist.log.train\" using 2:3 title \"mnist\"\n\n# Learning rate vs. 
training iterations;\n# plot \"mnist.log.train\" using 1:4 title \"mnist\"\n\n# Learning rate vs. training time;\n# plot \"mnist.log.train\" using 2:4 title \"mnist\"\n\n\n###### Fields in the data file your_log_name.log.test are\n###### Iters Seconds TestAccuracy TestLoss\n\n# Test loss vs. training iterations\n# plot \"mnist.log.test\" using 1:4 title \"mnist\"\n\n# Test accuracy vs. training iterations\n# plot \"mnist.log.test\" using 1:3 title \"mnist\"\n\n# Test loss vs. training time\n# plot \"mnist.log.test\" using 2:4 title \"mnist\"\n\n# Test accuracy vs. training time\n# plot \"mnist.log.test\" using 2:3 title \"mnist\"\n"
  },
  {
    "path": "caffe-fpn/tools/extra/plot_training_log.py",
    "content": "#!/usr/bin/env python\nimport inspect\nimport os\nimport random\nimport sys\nimport matplotlib.cm as cmx\nimport matplotlib.colors as colors\nimport matplotlib.pyplot as plt\nimport matplotlib.legend as lgd\nimport matplotlib.markers as mks\n\ndef get_log_parsing_script():\n    dirname = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\n    return dirname + '/parse_log.sh'\n\ndef get_log_file_suffix():\n    return '.log'\n\ndef get_chart_type_description_separator():\n    return '  vs. '\n\ndef is_x_axis_field(field):\n    x_axis_fields = ['Iters', 'Seconds']\n    return field in x_axis_fields\n\ndef create_field_index():\n    train_key = 'Train'\n    test_key = 'Test'\n    field_index = {train_key:{'Iters':0, 'Seconds':1, train_key + ' loss':2,\n                              train_key + ' learning rate':3},\n                   test_key:{'Iters':0, 'Seconds':1, test_key + ' accuracy':2,\n                             test_key + ' loss':3}}\n    fields = set()\n    for data_file_type in field_index.keys():\n        fields = fields.union(set(field_index[data_file_type].keys()))\n    fields = list(fields)\n    fields.sort()\n    return field_index, fields\n\ndef get_supported_chart_types():\n    field_index, fields = create_field_index()\n    num_fields = len(fields)\n    supported_chart_types = []\n    for i in xrange(num_fields):\n        if not is_x_axis_field(fields[i]):\n            for j in xrange(num_fields):\n                if i != j and is_x_axis_field(fields[j]):\n                    supported_chart_types.append('%s%s%s' % (\n                        fields[i], get_chart_type_description_separator(),\n                        fields[j]))\n    return supported_chart_types\n\ndef get_chart_type_description(chart_type):\n    supported_chart_types = get_supported_chart_types()\n    chart_type_description = supported_chart_types[chart_type]\n    return chart_type_description\n\ndef get_data_file_type(chart_type):\n    
description = get_chart_type_description(chart_type)\n    data_file_type = description.split()[0]\n    return data_file_type\n\ndef get_data_file(chart_type, path_to_log):\n    return os.path.basename(path_to_log) + '.' + get_data_file_type(chart_type).lower()\n\ndef get_field_descriptions(chart_type):\n    description = get_chart_type_description(chart_type).split(\n        get_chart_type_description_separator())\n    y_axis_field = description[0]\n    x_axis_field = description[1]\n    return x_axis_field, y_axis_field    \n\ndef get_field_indecies(x_axis_field, y_axis_field):    \n    data_file_type = get_data_file_type(chart_type)\n    fields = create_field_index()[0][data_file_type]\n    return fields[x_axis_field], fields[y_axis_field]\n\ndef load_data(data_file, field_idx0, field_idx1):\n    data = [[], []]\n    with open(data_file, 'r') as f:\n        for line in f:\n            line = line.strip()\n            if line[0] != '#':\n                fields = line.split()\n                data[0].append(float(fields[field_idx0].strip()))\n                data[1].append(float(fields[field_idx1].strip()))\n    return data\n\ndef random_marker():\n    markers = mks.MarkerStyle.markers\n    num = len(markers.values())\n    idx = random.randint(0, num - 1)\n    return markers.values()[idx]\n\ndef get_data_label(path_to_log):\n    label = path_to_log[path_to_log.rfind('/')+1 : path_to_log.rfind(\n        get_log_file_suffix())]\n    return label\n\ndef get_legend_loc(chart_type):\n    x_axis, y_axis = get_field_descriptions(chart_type)\n    loc = 'lower right'\n    if y_axis.find('accuracy') != -1:\n        pass\n    if y_axis.find('loss') != -1 or y_axis.find('learning rate') != -1:\n        loc = 'upper right'\n    return loc\n\ndef plot_chart(chart_type, path_to_png, path_to_log_list):\n    for path_to_log in path_to_log_list:\n        os.system('%s %s' % (get_log_parsing_script(), path_to_log))\n        data_file = get_data_file(chart_type, path_to_log)\n        
x_axis_field, y_axis_field = get_field_descriptions(chart_type)\n        x, y = get_field_indecies(x_axis_field, y_axis_field)\n        data = load_data(data_file, x, y)\n        ## TODO: more systematic color cycle for lines\n        color = [random.random(), random.random(), random.random()]\n        label = get_data_label(path_to_log)\n        linewidth = 0.75\n        ## If there too many datapoints, do not use marker.\n##        use_marker = False\n        use_marker = True\n        if not use_marker:\n            plt.plot(data[0], data[1], label = label, color = color,\n                     linewidth = linewidth)\n        else:\n            ok = False\n            ## Some markers throw ValueError: Unrecognized marker style\n            while not ok:\n                try:\n                    marker = random_marker()\n                    plt.plot(data[0], data[1], label = label, color = color,\n                             marker = marker, linewidth = linewidth)\n                    ok = True\n                except:\n                    pass\n    legend_loc = get_legend_loc(chart_type)\n    plt.legend(loc = legend_loc, ncol = 1) # ajust ncol to fit the space\n    plt.title(get_chart_type_description(chart_type))\n    plt.xlabel(x_axis_field)\n    plt.ylabel(y_axis_field)  \n    plt.savefig(path_to_png)     \n    plt.show()\n\ndef print_help():\n    print \"\"\"This script mainly serves as the basis of your customizations.\nCustomization is a must.\nYou can copy, paste, edit them in whatever way you want.\nBe warned that the fields in the training log may change in the future.\nYou had better check the data files and change the mapping from field name to\n field index in create_field_index before designing your own plots.\nUsage:\n    ./plot_training_log.py chart_type[0-%s] /where/to/save.png /path/to/first.log ...\nNotes:\n    1. Supporting multiple logs.\n    2. 
Log file name must end with the lower-cased \"%s\".\nSupported chart types:\"\"\" % (len(get_supported_chart_types()) - 1,\n                             get_log_file_suffix())\n    supported_chart_types = get_supported_chart_types()\n    num = len(supported_chart_types)\n    for i in xrange(num):\n        print '    %d: %s' % (i, supported_chart_types[i])\n    exit\n\ndef is_valid_chart_type(chart_type):\n    return chart_type >= 0 and chart_type < len(get_supported_chart_types())\n  \nif __name__ == '__main__':\n    if len(sys.argv) < 4:\n        print_help()\n    else:\n        chart_type = int(sys.argv[1])\n        if not is_valid_chart_type(chart_type):\n            print_help()\n        path_to_png = sys.argv[2]\n        if not path_to_png.endswith('.png'):\n            print 'Path must ends with png' % path_to_png\n            exit            \n        path_to_logs = sys.argv[3:]\n        for path_to_log in path_to_logs:\n            if not os.path.exists(path_to_log):\n                print 'Path does not exist: %s' % path_to_log\n                exit\n            if not path_to_log.endswith(get_log_file_suffix()):\n                print_help()\n        ## plot_chart accpets multiple path_to_logs\n        plot_chart(chart_type, path_to_png, path_to_logs)\n"
  },
  {
    "path": "caffe-fpn/tools/extra/resize_and_crop_images.py",
    "content": "#!/usr/bin/env python\nfrom mincepie import mapreducer, launcher\nimport gflags\nimport os\nimport cv2\nfrom PIL import Image\n\n# gflags\ngflags.DEFINE_string('image_lib', 'opencv',\n                     'OpenCV or PIL, case insensitive. The default value is the faster OpenCV.')\ngflags.DEFINE_string('input_folder', '',\n                     'The folder that contains all input images, organized in synsets.')\ngflags.DEFINE_integer('output_side_length', 256,\n                     'Expected side length of the output image.')\ngflags.DEFINE_string('output_folder', '',\n                     'The folder that we write output resized and cropped images to')\nFLAGS = gflags.FLAGS\n\nclass OpenCVResizeCrop:\n    def resize_and_crop_image(self, input_file, output_file, output_side_length = 256):\n        '''Takes an image name, resize it and crop the center square\n        '''\n        img = cv2.imread(input_file)\n        height, width, depth = img.shape\n        new_height = output_side_length\n        new_width = output_side_length\n        if height > width:\n            new_height = output_side_length * height / width\n        else:\n            new_width = output_side_length * width / height\n        resized_img = cv2.resize(img, (new_width, new_height))\n        height_offset = (new_height - output_side_length) / 2\n        width_offset = (new_width - output_side_length) / 2\n        cropped_img = resized_img[height_offset:height_offset + output_side_length,\n                                  width_offset:width_offset + output_side_length]\n        cv2.imwrite(output_file, cropped_img)\n\nclass PILResizeCrop:\n## http://united-coders.com/christian-harms/image-resizing-tips-every-coder-should-know/\n    def resize_and_crop_image(self, input_file, output_file, output_side_length = 256, fit = True):\n        '''Downsample the image.\n        '''\n        img = Image.open(input_file)\n        box = (output_side_length, output_side_length)\n        
#preresize image with factor 2, 4, 8 and fast algorithm\n        factor = 1\n        while img.size[0]/factor > 2*box[0] and img.size[1]*2/factor > 2*box[1]:\n            factor *=2\n        if factor > 1:\n            img.thumbnail((img.size[0]/factor, img.size[1]/factor), Image.NEAREST)\n\n        #calculate the cropping box and get the cropped part\n        if fit:\n            x1 = y1 = 0\n            x2, y2 = img.size\n            wRatio = 1.0 * x2/box[0]\n            hRatio = 1.0 * y2/box[1]\n            if hRatio > wRatio:\n                y1 = int(y2/2-box[1]*wRatio/2)\n                y2 = int(y2/2+box[1]*wRatio/2)\n            else:\n                x1 = int(x2/2-box[0]*hRatio/2)\n                x2 = int(x2/2+box[0]*hRatio/2)\n            img = img.crop((x1,y1,x2,y2))\n\n        #Resize the image with best quality algorithm ANTI-ALIAS\n        img.thumbnail(box, Image.ANTIALIAS)\n\n        #save it into a file-like object\n        with open(output_file, 'wb') as out:\n            img.save(out, 'JPEG', quality=75)\n\nclass ResizeCropImagesMapper(mapreducer.BasicMapper):\n    '''The ImageNet Compute mapper. 
\n    The input value would be the file listing images' paths relative to input_folder.\n    '''\n    def map(self, key, value):\n        if type(value) is not str:\n            value = str(value)\n        files = [value]\n        image_lib = FLAGS.image_lib.lower()\n        if image_lib == 'pil':\n            resize_crop = PILResizeCrop()\n        else:\n            resize_crop = OpenCVResizeCrop()\n        for i, line in enumerate(files):\n            try:\n                line = line.replace(FLAGS.input_folder, '').strip()\n                line = line.split()\n                image_file_name = line[0]\n                input_file = os.path.join(FLAGS.input_folder, image_file_name)\n                output_file = os.path.join(FLAGS.output_folder, image_file_name)\n                output_dir = output_file[:output_file.rfind('/')]\n                if not os.path.exists(output_dir):\n                    os.makedirs(output_dir)\n                feat = resize_crop.resize_and_crop_image(input_file, output_file,\n                                                              FLAGS.output_side_length)\n            except Exception, e:\n                # we ignore the exception (maybe the image is corrupted?)\n                print line, Exception, e\n        yield value, FLAGS.output_folder\n\nmapreducer.REGISTER_DEFAULT_MAPPER(ResizeCropImagesMapper)\n\nmapreducer.REGISTER_DEFAULT_READER(mapreducer.FileReader)\nmapreducer.REGISTER_DEFAULT_WRITER(mapreducer.FileWriter)\n \nif __name__ == '__main__':\n    launcher.launch()\n"
  },
  {
    "path": "caffe-fpn/tools/extra/summarize.py",
    "content": "#!/usr/bin/env python\n\n\"\"\"Net summarization tool.\n\nThis tool summarizes the structure of a net in a concise but comprehensive\ntabular listing, taking a prototxt file as input.\n\nUse this tool to check at a glance that the computation you've specified is the\ncomputation you expect.\n\"\"\"\n\nfrom caffe.proto import caffe_pb2\nfrom google import protobuf\nimport re\nimport argparse\n\n# ANSI codes for coloring blobs (used cyclically)\nCOLORS = ['92', '93', '94', '95', '97', '96', '42', '43;30', '100',\n          '444', '103;30', '107;30']\nDISCONNECTED_COLOR = '41'\n\ndef read_net(filename):\n    net = caffe_pb2.NetParameter()\n    with open(filename) as f:\n        protobuf.text_format.Parse(f.read(), net)\n    return net\n\ndef format_param(param):\n    out = []\n    if len(param.name) > 0:\n        out.append(param.name)\n    if param.lr_mult != 1:\n        out.append('x{}'.format(param.lr_mult))\n    if param.decay_mult != 1:\n        out.append('Dx{}'.format(param.decay_mult))\n    return ' '.join(out)\n\ndef printed_len(s):\n    return len(re.sub(r'\\033\\[[\\d;]+m', '', s))\n\ndef print_table(table, max_width):\n    \"\"\"Print a simple nicely-aligned table.\n\n    table must be a list of (equal-length) lists. Columns are space-separated,\n    and as narrow as possible, but no wider than max_width. 
Text may overflow\n    columns; note that unlike string.format, this will not affect subsequent\n    columns, if possible.\"\"\"\n\n    max_widths = [max_width] * len(table[0])\n    column_widths = [max(printed_len(row[j]) + 1 for row in table)\n                     for j in range(len(table[0]))]\n    column_widths = [min(w, max_w) for w, max_w in zip(column_widths, max_widths)]\n\n    for row in table:\n        row_str = ''\n        right_col = 0\n        for cell, width in zip(row, column_widths):\n            right_col += width\n            row_str += cell + ' '\n            row_str += ' ' * max(right_col - printed_len(row_str), 0)\n        print row_str\n\ndef summarize_net(net):\n    disconnected_tops = set()\n    for lr in net.layer:\n        disconnected_tops |= set(lr.top)\n        disconnected_tops -= set(lr.bottom)\n\n    table = []\n    colors = {}\n    for lr in net.layer:\n        tops = []\n        for ind, top in enumerate(lr.top):\n            color = colors.setdefault(top, COLORS[len(colors) % len(COLORS)])\n            if top in disconnected_tops:\n                top = '\\033[1;4m' + top\n            if len(lr.loss_weight) > 0:\n                top = '{} * {}'.format(lr.loss_weight[ind], top)\n            tops.append('\\033[{}m{}\\033[0m'.format(color, top))\n        top_str = ', '.join(tops)\n\n        bottoms = []\n        for bottom in lr.bottom:\n            color = colors.get(bottom, DISCONNECTED_COLOR)\n            bottoms.append('\\033[{}m{}\\033[0m'.format(color, bottom))\n        bottom_str = ', '.join(bottoms)\n\n        if lr.type == 'Python':\n            type_str = lr.python_param.module + '.' 
+ lr.python_param.layer\n        else:\n            type_str = lr.type\n\n        # Summarize conv/pool parameters.\n        # TODO support rectangular/ND parameters\n        conv_param = lr.convolution_param\n        if (lr.type in ['Convolution', 'Deconvolution']\n                and len(conv_param.kernel_size) == 1):\n            arg_str = str(conv_param.kernel_size[0])\n            if len(conv_param.stride) > 0 and conv_param.stride[0] != 1:\n                arg_str += '/' + str(conv_param.stride[0])\n            if len(conv_param.pad) > 0 and conv_param.pad[0] != 0:\n                arg_str += '+' + str(conv_param.pad[0])\n            arg_str += ' ' + str(conv_param.num_output)\n            if conv_param.group != 1:\n                arg_str += '/' + str(conv_param.group)\n        elif lr.type == 'Pooling':\n            arg_str = str(lr.pooling_param.kernel_size)\n            if lr.pooling_param.stride != 1:\n                arg_str += '/' + str(lr.pooling_param.stride)\n            if lr.pooling_param.pad != 0:\n                arg_str += '+' + str(lr.pooling_param.pad)\n        else:\n            arg_str = ''\n\n        if len(lr.param) > 0:\n            param_strs = map(format_param, lr.param)\n            if max(map(len, param_strs)) > 0:\n                param_str = '({})'.format(', '.join(param_strs))\n            else:\n                param_str = ''\n        else:\n            param_str = ''\n\n        table.append([lr.name, type_str, param_str, bottom_str, '->', top_str,\n                      arg_str])\n    return table\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Print a concise summary of net computation.\")\n    parser.add_argument('filename', help='net prototxt file to summarize')\n    parser.add_argument('-w', '--max-width', help='maximum field width',\n            type=int, default=30)\n    args = parser.parse_args()\n\n    net = read_net(args.filename)\n    table = summarize_net(net)\n    print_table(table, 
max_width=args.max_width)\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "caffe-fpn/tools/extra/train.log",
    "content": "+ echo Logging output to experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-06-07_20-56-29\nLogging output to experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-06-07_20-56-29\n+ ./tools/train_net.py --gpu 0 --solver models/pascal_voc/VGG16/FP_Net_end2end/solver.prototxt --weights data/imagenet_models/VGG16.v2.caffemodel --imdb voc_2007_trainval --iters 70000 --cfg experiments/cfgs/FP_Net_end2end.yml\nCalled with args:\nNamespace(cfg_file='experiments/cfgs/FP_Net_end2end.yml', gpu_id=0, imdb_name='voc_2007_trainval', max_iters=70000, pretrained_model='data/imagenet_models/VGG16.v2.caffemodel', randomize=False, set_cfgs=None, solver='models/pascal_voc/VGG16/FP_Net_end2end/solver.prototxt')\nUsing config:\n{'DATA_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net/data',\n 'DEDUP_BOXES': 0.0625,\n 'EPS': 1e-14,\n 'EXP_DIR': 'FP_Net_end2end',\n 'GPU_ID': 0,\n 'MATLAB': 'matlab',\n 'MODELS_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net/models/pascal_voc',\n 'PIXEL_MEANS': array([[[ 102.9801,  115.9465,  122.7717]]]),\n 'RNG_SEED': 3,\n 'ROOT_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net',\n 'TEST': {'BBOX_REG': True,\n          'HAS_RPN': True,\n          'MAX_SIZE': 1000,\n          'NMS': 0.3,\n          'PROPOSAL_METHOD': 'selective_search',\n          'RPN_MIN_SIZE': 16,\n          'RPN_NMS_THRESH': 0.7,\n          'RPN_POST_NMS_TOP_N': 300,\n          'RPN_PRE_NMS_TOP_N': 6000,\n          'SCALES': [600],\n          'SVM': False},\n 'TRAIN': {'ASPECT_GROUPING': True,\n           'BATCH_SIZE': 128,\n           'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],\n           'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],\n           'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],\n           'BBOX_NORMALIZE_TARGETS': True,\n           'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,\n           'BBOX_REG': True,\n           'BBOX_THRESH': 0.5,\n           'BG_THRESH_HI': 0.5,\n           'BG_THRESH_LO': 0.0,\n           'FG_FRACTION': 0.25,\n           
'FG_THRESH': 0.5,\n           'HAS_RPN': True,\n           'IMS_PER_BATCH': 1,\n           'MAX_SIZE': 1000,\n           'PROPOSAL_METHOD': 'gt',\n           'RPN_BATCHSIZE': 256,\n           'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],\n           'RPN_CLOBBER_POSITIVES': False,\n           'RPN_FG_FRACTION': 0.5,\n           'RPN_MIN_SIZE': 16,\n           'RPN_NEGATIVE_OVERLAP': 0.3,\n           'RPN_NMS_THRESH': 0.7,\n           'RPN_POSITIVE_OVERLAP': 0.7,\n           'RPN_POSITIVE_WEIGHT': -1.0,\n           'RPN_POST_NMS_TOP_N': 2000,\n           'RPN_PRE_NMS_TOP_N': 12000,\n           'SCALES': [600],\n           'SNAPSHOT_INFIX': '',\n           'SNAPSHOT_ITERS': 10000,\n           'USE_FLIPPED': True,\n           'USE_PREFETCH': False},\n 'USE_GPU_NMS': True}\nLoaded dataset `voc_2007_trainval` for training\nSet proposal method: gt\nAppending horizontally-flipped training examples...\nvoc_2007_trainval gt roidb loaded from /home/ubuntu/Work/brbchen/unskychen/FP_Net/data/cache/voc_2007_trainval_gt_roidb.pkl\ndone\nPreparing training data...\n/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py:2652: VisibleDeprecationWarning: `rank` is deprecated; use the `ndim` attribute or function instead. To find the rank of a matrix see `numpy.linalg.matrix_rank`.\n  VisibleDeprecationWarning)\ndone\n10022 roidb entries\nOutput will be saved to `/home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval`\nFiltered 0 roidb entries: 10022 -> 10022\nComputing bounding-box regression targets...\nbbox target means:\n[[ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]\n [ 0.  0.  0.  0.]]\n[ 0.  
0.  0.  0.]\nbbox target stdevs:\n[[ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]\n [ 0.1  0.1  0.2  0.2]]\n[ 0.1  0.1  0.2  0.2]\nNormalizing targets\ndone\nWARNING: Logging before InitGoogleLogging() is written to STDERR\nI0607 20:56:35.218992 13573 solver.cpp:48] Initializing solver from parameters: \ntrain_net: \"models/pascal_voc/VGG16/FP_Net_end2end/train.prototxt\"\nbase_lr: 0.001\ndisplay: 20\nlr_policy: \"step\"\ngamma: 0.1\nmomentum: 0.9\nweight_decay: 0.0005\nstepsize: 50000\nsnapshot: 0\nsnapshot_prefix: \"FP_Net_end2end\"\naverage_loss: 100\niter_size: 2\nI0607 20:56:35.219055 13573 solver.cpp:81] Creating training net from train_net file: models/pascal_voc/VGG16/FP_Net_end2end/train.prototxt\nI0607 20:56:35.220648 13573 net.cpp:49] Initializing net from parameters: \nname: \"VGG_ILSVRC_16_layers\"\nstate {\n  phase: TRAIN\n}\nlayer {\n  name: \"input-data\"\n  type: \"Python\"\n  top: \"data\"\n  top: \"im_info\"\n  top: \"gt_boxes\"\n  python_param {\n    module: \"roi_data_layer.layer\"\n    layer: \"RoIDataLayer\"\n    param_str: \"\\'num_classes\\': 21\"\n  }\n}\nlayer {\n  name: \"conv1_1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1_1\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu1_1\"\n  type: \"ReLU\"\n  bottom: \"conv1_1\"\n  top: \"conv1_1\"\n}\nlayer {\n  name: \"conv1_2\"\n  type: \"Convolution\"\n  bottom: \"conv1_1\"\n  top: 
\"conv1_2\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu1_2\"\n  type: \"ReLU\"\n  bottom: \"conv1_2\"\n  top: \"conv1_2\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1_2\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2_1\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2_1\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu2_1\"\n  type: \"ReLU\"\n  bottom: \"conv2_1\"\n  top: \"conv2_1\"\n}\nlayer {\n  name: \"conv2_2\"\n  type: \"Convolution\"\n  bottom: \"conv2_1\"\n  top: \"conv2_2\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu2_2\"\n  type: \"ReLU\"\n  bottom: \"conv2_2\"\n  top: \"conv2_2\"\n}\nlayer {\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2_2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3_1\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_1\"\n  type: \"ReLU\"\n  bottom: \"conv3_1\"\n  top: \"conv3_1\"\n}\nlayer {\n  name: \"conv3_2\"\n  type: \"Convolution\"\n  bottom: \"conv3_1\"\n  top: \"conv3_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_2\"\n  
type: \"ReLU\"\n  bottom: \"conv3_2\"\n  top: \"conv3_2\"\n}\nlayer {\n  name: \"conv3_3\"\n  type: \"Convolution\"\n  bottom: \"conv3_2\"\n  top: \"conv3_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_3\"\n  type: \"ReLU\"\n  bottom: \"conv3_3\"\n  top: \"conv3_3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3_3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv4_1\"\n  type: \"Convolution\"\n  bottom: \"pool3\"\n  top: \"conv4_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_1\"\n  type: \"ReLU\"\n  bottom: \"conv4_1\"\n  top: \"conv4_1\"\n}\nlayer {\n  name: \"conv4_2\"\n  type: \"Convolution\"\n  bottom: \"conv4_1\"\n  top: \"conv4_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_2\"\n  type: \"ReLU\"\n  bottom: \"conv4_2\"\n  top: \"conv4_2\"\n}\nlayer {\n  name: \"conv4_3\"\n  type: \"Convolution\"\n  bottom: \"conv4_2\"\n  top: \"conv4_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_3\"\n  type: \"ReLU\"\n  bottom: \"conv4_3\"\n  top: \"conv4_3\"\n}\nlayer {\n  name: \"pool4\"\n  type: \"Pooling\"\n  bottom: \"conv4_3\"\n  top: \"pool4\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv5_1\"\n  type: \"Convolution\"\n  bottom: \"pool4\"\n  top: \"conv5_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  
}\n}\nlayer {\n  name: \"relu5_1\"\n  type: \"ReLU\"\n  bottom: \"conv5_1\"\n  top: \"conv5_1\"\n}\nlayer {\n  name: \"conv5_2\"\n  type: \"Convolution\"\n  bottom: \"conv5_1\"\n  top: \"conv5_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu5_2\"\n  type: \"ReLU\"\n  bottom: \"conv5_2\"\n  top: \"conv5_2\"\n}\nlayer {\n  name: \"conv5_3\"\n  type: \"Convolution\"\n  bottom: \"conv5_2\"\n  top: \"conv5_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu5_3\"\n  type: \"ReLU\"\n  bottom: \"conv5_3\"\n  top: \"conv5_3\"\n}\nlayer {\n  name: \"p1_conv\"\n  type: \"Convolution\"\n  bottom: \"conv5_3\"\n  top: \"p1_conv\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"p1_upsample\"\n  type: \"Deconvolution\"\n  bottom: \"p1_conv\"\n  top: \"p1_upsample\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 256\n    bias_term: false\n    pad: 0\n    kernel_size: 1\n    group: 256\n    stride: 2\n    weight_filler {\n      type: \"bilinear\"\n    }\n  }\n}\nlayer {\n  name: \"p2_conv\"\n  type: \"Convolution\"\n  bottom: \"conv4_3\"\n  top: \"p2_conv\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"conadd\"\n  type: \"Concat\"\n  bottom: \"p1_upsample\"\n  bottom: \"p2_conv\"\n  top: \"conadd\"\n  concat_param {\n    axis: 1\n  }\n}\nlayer {\n  name: \"rpn_conv/3x3\"\n  type: \"Convolution\"\n  bottom: \"p1_upsample\"\n  top: \"rpn/output\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n   
 pad: 1\n    kernel_size: 3\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3\"\n  type: \"ReLU\"\n  bottom: \"rpn/output\"\n  top: \"rpn/output\"\n}\nlayer {\n  name: \"rpn_cls_score\"\n  type: \"Convolution\"\n  bottom: \"rpn/output\"\n  top: \"rpn_cls_score\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 18\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_bbox_pred\"\n  type: \"Convolution\"\n  bottom: \"rpn/output\"\n  top: \"rpn_bbox_pred\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 36\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_cls_score_reshape\"\n  type: \"Reshape\"\n  bottom: \"rpn_cls_score\"\n  top: \"rpn_cls_score_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 2\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn-data\"\n  type: \"Python\"\n  bottom: \"rpn_cls_score\"\n  bottom: \"gt_boxes\"\n  bottom: \"im_info\"\n  bottom: \"data\"\n  top: \"rpn_labels\"\n  top: \"rpn_bbox_targets\"\n  top: \"rpn_bbox_inside_weights\"\n  top: \"rpn_bbox_outside_weights\"\n  python_param {\n    module: \"rpn.anchor_target_layer\"\n    layer: \"AnchorTargetLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"rpn_loss_cls\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"rpn_cls_score_reshape\"\n  bottom: \"rpn_labels\"\n  top: \"rpn_cls_loss\"\n  loss_weight: 1\n  propagate_down: true\n  propagate_down: false\n  
loss_param {\n    ignore_label: -1\n    normalize: true\n  }\n}\nlayer {\n  name: \"rpn_loss_bbox\"\n  type: \"SmoothL1Loss\"\n  bottom: \"rpn_bbox_pred\"\n  bottom: \"rpn_bbox_targets\"\n  bottom: \"rpn_bbox_inside_weights\"\n  bottom: \"rpn_bbox_outside_weights\"\n  top: \"rpn_loss_bbox\"\n  loss_weight: 1\n  smooth_l1_loss_param {\n    sigma: 3\n  }\n}\nlayer {\n  name: \"rpn_cls_prob\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape\"\n  top: \"rpn_cls_prob\"\n}\nlayer {\n  name: \"rpn_cls_prob_reshape\"\n  type: \"Reshape\"\n  bottom: \"rpn_cls_prob\"\n  top: \"rpn_cls_prob_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 18\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"proposal\"\n  type: \"Python\"\n  bottom: \"rpn_cls_prob_reshape\"\n  bottom: \"rpn_bbox_pred\"\n  bottom: \"im_info\"\n  top: \"rpn_rois\"\n  python_param {\n    module: \"rpn.proposal_layer\"\n    layer: \"ProposalLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"p2_rpn_conv/3x3\"\n  type: \"Convolution\"\n  bottom: \"conadd\"\n  top: \"p2_rpn/output\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_relu/3x3\"\n  type: \"ReLU\"\n  bottom: \"p2_rpn/output\"\n  top: \"p2_rpn/output\"\n}\nlayer {\n  name: \"p2_rpn_cls_score\"\n  type: \"Convolution\"\n  bottom: \"p2_rpn/output\"\n  top: \"p2_rpn_cls_score\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 18\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_bbox_pred\"\n  
type: \"Convolution\"\n  bottom: \"p2_rpn/output\"\n  top: \"p2_rpn_bbox_pred\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 36\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_cls_score_reshape\"\n  type: \"Reshape\"\n  bottom: \"p2_rpn_cls_score\"\n  top: \"p2_rpn_cls_score_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 2\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn-data\"\n  type: \"Python\"\n  bottom: \"p2_rpn_cls_score\"\n  bottom: \"gt_boxes\"\n  bottom: \"im_info\"\n  bottom: \"data\"\n  top: \"p2_rpn_labels\"\n  top: \"p2_rpn_bbox_targets\"\n  top: \"p2_rpn_bbox_inside_weights\"\n  top: \"p2_rpn_bbox_outside_weights\"\n  python_param {\n    module: \"rpn.anchor_target_layer\"\n    layer: \"AnchorTargetLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"p2_rpn_loss_cls\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"p2_rpn_cls_score_reshape\"\n  bottom: \"p2_rpn_labels\"\n  top: \"p2_rpn_cls_loss\"\n  loss_weight: 1\n  propagate_down: true\n  propagate_down: false\n  loss_param {\n    ignore_label: -1\n    normalize: true\n  }\n}\nlayer {\n  name: \"p2_rpn_loss_bbox\"\n  type: \"SmoothL1Loss\"\n  bottom: \"p2_rpn_bbox_pred\"\n  bottom: \"p2_rpn_bbox_targets\"\n  bottom: \"p2_rpn_bbox_inside_weights\"\n  bottom: \"p2_rpn_bbox_outside_weights\"\n  top: \"p2_rpn_loss_bbox\"\n  loss_weight: 1\n  smooth_l1_loss_param {\n    sigma: 3\n  }\n}\nlayer {\n  name: \"p2_rpn_cls_prob\"\n  type: \"Softmax\"\n  bottom: \"p2_rpn_cls_score_reshape\"\n  top: \"p2_rpn_cls_prob\"\n}\nlayer {\n  name: \"p2_rpn_cls_prob_reshape\"\n  type: \"Reshape\"\n  bottom: \"p2_rpn_cls_prob\"\n  top: \"p2_rpn_cls_prob_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 18\n      
dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_proposal\"\n  type: \"Python\"\n  bottom: \"p2_rpn_cls_prob_reshape\"\n  bottom: \"p2_rpn_bbox_pred\"\n  bottom: \"im_info\"\n  top: \"p2_rpn_rois\"\n  python_param {\n    module: \"rpn.proposal_layer\"\n    layer: \"ProposalLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"data_all\"\n  type: \"Concat\"\n  bottom: \"rpn_rois\"\n  bottom: \"p2_rpn_rois\"\n  top: \"data_all\"\n  concat_param {\n    axis: 0\n  }\n}\nlayer {\n  name: \"roi-data\"\n  type: \"Python\"\n  bottom: \"data_all\"\n  bottom: \"gt_boxes\"\n  top: \"rois\"\n  top: \"labels\"\n  top: \"bbox_targets\"\n  top: \"bbox_inside_weights\"\n  top: \"bbox_outside_weights\"\n  python_param {\n    module: \"rpn.proposal_target_layer\"\n    layer: \"ProposalTargetLayer\"\n    param_str: \"\\'num_classes\\': 21\"\n  }\n}\nlayer {\n  name: \"roi_pool5\"\n  type: \"ROIPooling\"\n  bottom: \"conv5_3\"\n  bottom: \"rois\"\n  top: \"pool5\"\n  roi_pooling_param {\n    pooled_h: 7\n    pooled_w: 7\n    spatial_scale: 0.0625\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: \"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score\"\n  type: \"InnerProduct\"\n  
bottom: \"fc7\"\n  top: \"cls_score\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"bbox_pred\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"loss_cls\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"cls_score\"\n  bottom: \"labels\"\n  top: \"loss_cls\"\n  loss_weight: 1\n  propagate_down: true\n  propagate_down: false\n}\nlayer {\n  name: \"loss_bbox\"\n  type: \"SmoothL1Loss\"\n  bottom: \"bbox_pred\"\n  bottom: \"bbox_targets\"\n  bottom: \"bbox_inside_weights\"\n  bottom: \"bbox_outside_weights\"\n  top: \"loss_bbox\"\n  loss_weight: 1\n}\nI0607 20:56:35.221045 13573 layer_factory.hpp:77] Creating layer input-data\nI0607 20:56:35.226554 13573 net.cpp:106] Creating Layer input-data\nI0607 20:56:35.226573 13573 net.cpp:411] input-data -> data\nI0607 20:56:35.226588 13573 net.cpp:411] input-data -> im_info\nI0607 20:56:35.226594 13573 net.cpp:411] input-data -> gt_boxes\nRoiDataLayer: name_to_top: {'gt_boxes': 2, 'data': 0, 'im_info': 1}\nI0607 20:56:35.240749 13573 net.cpp:150] Setting up input-data\nI0607 20:56:35.240775 13573 net.cpp:157] Top shape: 1 3 600 1000 (1800000)\nI0607 20:56:35.240782 13573 net.cpp:157] Top shape: 1 3 (3)\nI0607 20:56:35.240785 13573 net.cpp:157] Top shape: 1 4 (4)\nI0607 20:56:35.240788 13573 net.cpp:165] Memory required for data: 7200028\nI0607 20:56:35.240793 13573 layer_factory.hpp:77] Creating layer data_input-data_0_split\nI0607 20:56:35.240803 13573 net.cpp:106] Creating Layer data_input-data_0_split\nI0607 
20:56:35.240813 13573 net.cpp:454] data_input-data_0_split <- data\nI0607 20:56:35.240831 13573 net.cpp:411] data_input-data_0_split -> data_input-data_0_split_0\nI0607 20:56:35.240844 13573 net.cpp:411] data_input-data_0_split -> data_input-data_0_split_1\nI0607 20:56:35.240851 13573 net.cpp:411] data_input-data_0_split -> data_input-data_0_split_2\nI0607 20:56:35.240897 13573 net.cpp:150] Setting up data_input-data_0_split\nI0607 20:56:35.240911 13573 net.cpp:157] Top shape: 1 3 600 1000 (1800000)\nI0607 20:56:35.240916 13573 net.cpp:157] Top shape: 1 3 600 1000 (1800000)\nI0607 20:56:35.240919 13573 net.cpp:157] Top shape: 1 3 600 1000 (1800000)\nI0607 20:56:35.240922 13573 net.cpp:165] Memory required for data: 28800028\nI0607 20:56:35.240926 13573 layer_factory.hpp:77] Creating layer im_info_input-data_1_split\nI0607 20:56:35.240933 13573 net.cpp:106] Creating Layer im_info_input-data_1_split\nI0607 20:56:35.240936 13573 net.cpp:454] im_info_input-data_1_split <- im_info\nI0607 20:56:35.240942 13573 net.cpp:411] im_info_input-data_1_split -> im_info_input-data_1_split_0\nI0607 20:56:35.240947 13573 net.cpp:411] im_info_input-data_1_split -> im_info_input-data_1_split_1\nI0607 20:56:35.240952 13573 net.cpp:411] im_info_input-data_1_split -> im_info_input-data_1_split_2\nI0607 20:56:35.240957 13573 net.cpp:411] im_info_input-data_1_split -> im_info_input-data_1_split_3\nI0607 20:56:35.241004 13573 net.cpp:150] Setting up im_info_input-data_1_split\nI0607 20:56:35.241015 13573 net.cpp:157] Top shape: 1 3 (3)\nI0607 20:56:35.241020 13573 net.cpp:157] Top shape: 1 3 (3)\nI0607 20:56:35.241024 13573 net.cpp:157] Top shape: 1 3 (3)\nI0607 20:56:35.241027 13573 net.cpp:157] Top shape: 1 3 (3)\nI0607 20:56:35.241030 13573 net.cpp:165] Memory required for data: 28800076\nI0607 20:56:35.241034 13573 layer_factory.hpp:77] Creating layer gt_boxes_input-data_2_split\nI0607 20:56:35.241039 13573 net.cpp:106] Creating Layer gt_boxes_input-data_2_split\nI0607 20:56:35.241041 
13573 net.cpp:454] gt_boxes_input-data_2_split <- gt_boxes\nI0607 20:56:35.241049 13573 net.cpp:411] gt_boxes_input-data_2_split -> gt_boxes_input-data_2_split_0\nI0607 20:56:35.241053 13573 net.cpp:411] gt_boxes_input-data_2_split -> gt_boxes_input-data_2_split_1\nI0607 20:56:35.241058 13573 net.cpp:411] gt_boxes_input-data_2_split -> gt_boxes_input-data_2_split_2\nI0607 20:56:35.241096 13573 net.cpp:150] Setting up gt_boxes_input-data_2_split\nI0607 20:56:35.241109 13573 net.cpp:157] Top shape: 1 4 (4)\nI0607 20:56:35.241114 13573 net.cpp:157] Top shape: 1 4 (4)\nI0607 20:56:35.241118 13573 net.cpp:157] Top shape: 1 4 (4)\nI0607 20:56:35.241122 13573 net.cpp:165] Memory required for data: 28800124\nI0607 20:56:35.241124 13573 layer_factory.hpp:77] Creating layer conv1_1\nI0607 20:56:35.241137 13573 net.cpp:106] Creating Layer conv1_1\nI0607 20:56:35.241150 13573 net.cpp:454] conv1_1 <- data_input-data_0_split_0\nI0607 20:56:35.241158 13573 net.cpp:411] conv1_1 -> conv1_1\nI0607 20:56:35.534591 13573 net.cpp:150] Setting up conv1_1\nI0607 20:56:35.534649 13573 net.cpp:157] Top shape: 1 64 600 1000 (38400000)\nI0607 20:56:35.534656 13573 net.cpp:165] Memory required for data: 182400124\nI0607 20:56:35.534684 13573 layer_factory.hpp:77] Creating layer relu1_1\nI0607 20:56:35.534703 13573 net.cpp:106] Creating Layer relu1_1\nI0607 20:56:35.534708 13573 net.cpp:454] relu1_1 <- conv1_1\nI0607 20:56:35.534718 13573 net.cpp:397] relu1_1 -> conv1_1 (in-place)\nI0607 20:56:35.535639 13573 net.cpp:150] Setting up relu1_1\nI0607 20:56:35.535661 13573 net.cpp:157] Top shape: 1 64 600 1000 (38400000)\nI0607 20:56:35.535666 13573 net.cpp:165] Memory required for data: 336000124\nI0607 20:56:35.535671 13573 layer_factory.hpp:77] Creating layer conv1_2\nI0607 20:56:35.535686 13573 net.cpp:106] Creating Layer conv1_2\nI0607 20:56:35.535689 13573 net.cpp:454] conv1_2 <- conv1_1\nI0607 20:56:35.535696 13573 net.cpp:411] conv1_2 -> conv1_2\nI0607 20:56:35.540830 13573 net.cpp:150] 
Setting up conv1_2\nI0607 20:56:35.540856 13573 net.cpp:157] Top shape: 1 64 600 1000 (38400000)\nI0607 20:56:35.540861 13573 net.cpp:165] Memory required for data: 489600124\nI0607 20:56:35.540874 13573 layer_factory.hpp:77] Creating layer relu1_2\nI0607 20:56:35.540881 13573 net.cpp:106] Creating Layer relu1_2\nI0607 20:56:35.540885 13573 net.cpp:454] relu1_2 <- conv1_2\nI0607 20:56:35.540891 13573 net.cpp:397] relu1_2 -> conv1_2 (in-place)\nI0607 20:56:35.541067 13573 net.cpp:150] Setting up relu1_2\nI0607 20:56:35.541085 13573 net.cpp:157] Top shape: 1 64 600 1000 (38400000)\nI0607 20:56:35.541090 13573 net.cpp:165] Memory required for data: 643200124\nI0607 20:56:35.541093 13573 layer_factory.hpp:77] Creating layer pool1\nI0607 20:56:35.541108 13573 net.cpp:106] Creating Layer pool1\nI0607 20:56:35.541112 13573 net.cpp:454] pool1 <- conv1_2\nI0607 20:56:35.541119 13573 net.cpp:411] pool1 -> pool1\nI0607 20:56:35.541173 13573 net.cpp:150] Setting up pool1\nI0607 20:56:35.541179 13573 net.cpp:157] Top shape: 1 64 300 500 (9600000)\nI0607 20:56:35.541182 13573 net.cpp:165] Memory required for data: 681600124\nI0607 20:56:35.541187 13573 layer_factory.hpp:77] Creating layer conv2_1\nI0607 20:56:35.541195 13573 net.cpp:106] Creating Layer conv2_1\nI0607 20:56:35.541198 13573 net.cpp:454] conv2_1 <- pool1\nI0607 20:56:35.541204 13573 net.cpp:411] conv2_1 -> conv2_1\nI0607 20:56:35.544801 13573 net.cpp:150] Setting up conv2_1\nI0607 20:56:35.544827 13573 net.cpp:157] Top shape: 1 128 300 500 (19200000)\nI0607 20:56:35.544832 13573 net.cpp:165] Memory required for data: 758400124\nI0607 20:56:35.544842 13573 layer_factory.hpp:77] Creating layer relu2_1\nI0607 20:56:35.544850 13573 net.cpp:106] Creating Layer relu2_1\nI0607 20:56:35.544854 13573 net.cpp:454] relu2_1 <- conv2_1\nI0607 20:56:35.544859 13573 net.cpp:397] relu2_1 -> conv2_1 (in-place)\nI0607 20:56:35.545044 13573 net.cpp:150] Setting up relu2_1\nI0607 20:56:35.545063 13573 net.cpp:157] Top shape: 1 128 300 
500 (19200000)\nI0607 20:56:35.545068 13573 net.cpp:165] Memory required for data: 835200124\nI0607 20:56:35.545073 13573 layer_factory.hpp:77] Creating layer conv2_2\nI0607 20:56:35.545083 13573 net.cpp:106] Creating Layer conv2_2\nI0607 20:56:35.545085 13573 net.cpp:454] conv2_2 <- conv2_1\nI0607 20:56:35.545092 13573 net.cpp:411] conv2_2 -> conv2_2\nI0607 20:56:35.549396 13573 net.cpp:150] Setting up conv2_2\nI0607 20:56:35.549420 13573 net.cpp:157] Top shape: 1 128 300 500 (19200000)\nI0607 20:56:35.549425 13573 net.cpp:165] Memory required for data: 912000124\nI0607 20:56:35.549433 13573 layer_factory.hpp:77] Creating layer relu2_2\nI0607 20:56:35.549441 13573 net.cpp:106] Creating Layer relu2_2\nI0607 20:56:35.549445 13573 net.cpp:454] relu2_2 <- conv2_2\nI0607 20:56:35.549451 13573 net.cpp:397] relu2_2 -> conv2_2 (in-place)\nI0607 20:56:35.550356 13573 net.cpp:150] Setting up relu2_2\nI0607 20:56:35.550377 13573 net.cpp:157] Top shape: 1 128 300 500 (19200000)\nI0607 20:56:35.550382 13573 net.cpp:165] Memory required for data: 988800124\nI0607 20:56:35.550386 13573 layer_factory.hpp:77] Creating layer pool2\nI0607 20:56:35.550395 13573 net.cpp:106] Creating Layer pool2\nI0607 20:56:35.550398 13573 net.cpp:454] pool2 <- conv2_2\nI0607 20:56:35.550405 13573 net.cpp:411] pool2 -> pool2\nI0607 20:56:35.550449 13573 net.cpp:150] Setting up pool2\nI0607 20:56:35.550456 13573 net.cpp:157] Top shape: 1 128 150 250 (4800000)\nI0607 20:56:35.550458 13573 net.cpp:165] Memory required for data: 1008000124\nI0607 20:56:35.550462 13573 layer_factory.hpp:77] Creating layer conv3_1\nI0607 20:56:35.550470 13573 net.cpp:106] Creating Layer conv3_1\nI0607 20:56:35.550474 13573 net.cpp:454] conv3_1 <- pool2\nI0607 20:56:35.550480 13573 net.cpp:411] conv3_1 -> conv3_1\nI0607 20:56:35.554199 13573 net.cpp:150] Setting up conv3_1\nI0607 20:56:35.554224 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.554229 13573 net.cpp:165] Memory required for data: 
1046400124\nI0607 20:56:35.554240 13573 layer_factory.hpp:77] Creating layer relu3_1\nI0607 20:56:35.554247 13573 net.cpp:106] Creating Layer relu3_1\nI0607 20:56:35.554251 13573 net.cpp:454] relu3_1 <- conv3_1\nI0607 20:56:35.554257 13573 net.cpp:397] relu3_1 -> conv3_1 (in-place)\nI0607 20:56:35.554445 13573 net.cpp:150] Setting up relu3_1\nI0607 20:56:35.554462 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.554467 13573 net.cpp:165] Memory required for data: 1084800124\nI0607 20:56:35.554471 13573 layer_factory.hpp:77] Creating layer conv3_2\nI0607 20:56:35.554482 13573 net.cpp:106] Creating Layer conv3_2\nI0607 20:56:35.554486 13573 net.cpp:454] conv3_2 <- conv3_1\nI0607 20:56:35.554493 13573 net.cpp:411] conv3_2 -> conv3_2\nI0607 20:56:35.558957 13573 net.cpp:150] Setting up conv3_2\nI0607 20:56:35.558981 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.558986 13573 net.cpp:165] Memory required for data: 1123200124\nI0607 20:56:35.558995 13573 layer_factory.hpp:77] Creating layer relu3_2\nI0607 20:56:35.559001 13573 net.cpp:106] Creating Layer relu3_2\nI0607 20:56:35.559005 13573 net.cpp:454] relu3_2 <- conv3_2\nI0607 20:56:35.559011 13573 net.cpp:397] relu3_2 -> conv3_2 (in-place)\nI0607 20:56:35.559188 13573 net.cpp:150] Setting up relu3_2\nI0607 20:56:35.559206 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.559211 13573 net.cpp:165] Memory required for data: 1161600124\nI0607 20:56:35.559214 13573 layer_factory.hpp:77] Creating layer conv3_3\nI0607 20:56:35.559223 13573 net.cpp:106] Creating Layer conv3_3\nI0607 20:56:35.559227 13573 net.cpp:454] conv3_3 <- conv3_2\nI0607 20:56:35.559232 13573 net.cpp:411] conv3_3 -> conv3_3\nI0607 20:56:35.563588 13573 net.cpp:150] Setting up conv3_3\nI0607 20:56:35.563613 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.563618 13573 net.cpp:165] Memory required for data: 1200000124\nI0607 20:56:35.563627 13573 layer_factory.hpp:77] 
Creating layer relu3_3\nI0607 20:56:35.563633 13573 net.cpp:106] Creating Layer relu3_3\nI0607 20:56:35.563638 13573 net.cpp:454] relu3_3 <- conv3_3\nI0607 20:56:35.563644 13573 net.cpp:397] relu3_3 -> conv3_3 (in-place)\nI0607 20:56:35.564525 13573 net.cpp:150] Setting up relu3_3\nI0607 20:56:35.564545 13573 net.cpp:157] Top shape: 1 256 150 250 (9600000)\nI0607 20:56:35.564550 13573 net.cpp:165] Memory required for data: 1238400124\nI0607 20:56:35.564554 13573 layer_factory.hpp:77] Creating layer pool3\nI0607 20:56:35.564563 13573 net.cpp:106] Creating Layer pool3\nI0607 20:56:35.564566 13573 net.cpp:454] pool3 <- conv3_3\nI0607 20:56:35.564573 13573 net.cpp:411] pool3 -> pool3\nI0607 20:56:35.564620 13573 net.cpp:150] Setting up pool3\nI0607 20:56:35.564625 13573 net.cpp:157] Top shape: 1 256 75 125 (2400000)\nI0607 20:56:35.564630 13573 net.cpp:165] Memory required for data: 1248000124\nI0607 20:56:35.564632 13573 layer_factory.hpp:77] Creating layer conv4_1\nI0607 20:56:35.564646 13573 net.cpp:106] Creating Layer conv4_1\nI0607 20:56:35.564651 13573 net.cpp:454] conv4_1 <- pool3\nI0607 20:56:35.564657 13573 net.cpp:411] conv4_1 -> conv4_1\nI0607 20:56:35.572183 13573 net.cpp:150] Setting up conv4_1\nI0607 20:56:35.572209 13573 net.cpp:157] Top shape: 1 512 75 125 (4800000)\nI0607 20:56:35.572214 13573 net.cpp:165] Memory required for data: 1267200124\nI0607 20:56:35.572221 13573 layer_factory.hpp:77] Creating layer relu4_1\nI0607 20:56:35.572232 13573 net.cpp:106] Creating Layer relu4_1\nI0607 20:56:35.572235 13573 net.cpp:454] relu4_1 <- conv4_1\nI0607 20:56:35.572240 13573 net.cpp:397] relu4_1 -> conv4_1 (in-place)\nI0607 20:56:35.573164 13573 net.cpp:150] Setting up relu4_1\nI0607 20:56:35.573187 13573 net.cpp:157] Top shape: 1 512 75 125 (4800000)\nI0607 20:56:35.573191 13573 net.cpp:165] Memory required for data: 1286400124\nI0607 20:56:35.573196 13573 layer_factory.hpp:77] Creating layer conv4_2\nI0607 20:56:35.573205 13573 net.cpp:106] Creating Layer 
p2_rpn/output_p2_rpn_relu/3x3_0_split needs backward computation.\nI0607 20:56:36.089853 13573 net.cpp:226] p2_rpn_relu/3x3 needs backward computation.\nI0607 20:56:36.089856 13573 net.cpp:226] p2_rpn_conv/3x3 needs backward computation.\nI0607 20:56:36.089860 13573 net.cpp:226] proposal needs backward computation.\nI0607 20:56:36.089867 13573 net.cpp:226] rpn_cls_prob_reshape needs backward computation.\nI0607 20:56:36.089872 13573 net.cpp:226] rpn_cls_prob needs backward computation.\nI0607 20:56:36.089876 13573 net.cpp:226] rpn_loss_bbox needs backward computation.\nI0607 20:56:36.089881 13573 net.cpp:226] rpn_loss_cls needs backward computation.\nI0607 20:56:36.089887 13573 net.cpp:226] rpn-data needs backward computation.\nI0607 20:56:36.089892 13573 net.cpp:226] rpn_cls_score_reshape_rpn_cls_score_reshape_0_split needs backward computation.\nI0607 20:56:36.089897 13573 net.cpp:226] rpn_cls_score_reshape needs backward computation.\nI0607 20:56:36.089901 13573 net.cpp:226] rpn_bbox_pred_rpn_bbox_pred_0_split needs backward computation.\nI0607 20:56:36.089905 13573 net.cpp:226] rpn_bbox_pred needs backward computation.\nI0607 20:56:36.089910 13573 net.cpp:226] rpn_cls_score_rpn_cls_score_0_split needs backward computation.\nI0607 20:56:36.089913 13573 net.cpp:226] rpn_cls_score needs backward computation.\nI0607 20:56:36.089918 13573 net.cpp:226] rpn/output_rpn_relu/3x3_0_split needs backward computation.\nI0607 20:56:36.089922 13573 net.cpp:226] rpn_relu/3x3 needs backward computation.\nI0607 20:56:36.089926 13573 net.cpp:226] rpn_conv/3x3 needs backward computation.\nI0607 20:56:36.089931 13573 net.cpp:226] conadd needs backward computation.\nI0607 20:56:36.089936 13573 net.cpp:226] p2_conv needs backward computation.\nI0607 20:56:36.089939 13573 net.cpp:226] p1_upsample_p1_upsample_0_split needs backward computation.\nI0607 20:56:36.089944 13573 net.cpp:226] p1_upsample needs backward computation.\nI0607 20:56:36.089948 13573 net.cpp:226] p1_conv needs 
backward computation.\nI0607 20:56:36.089952 13573 net.cpp:226] conv5_3_relu5_3_0_split needs backward computation.\nI0607 20:56:36.089957 13573 net.cpp:226] relu5_3 needs backward computation.\nI0607 20:56:36.089963 13573 net.cpp:226] conv5_3 needs backward computation.\nI0607 20:56:36.089967 13573 net.cpp:226] relu5_2 needs backward computation.\nI0607 20:56:36.089972 13573 net.cpp:226] conv5_2 needs backward computation.\nI0607 20:56:36.089975 13573 net.cpp:226] relu5_1 needs backward computation.\nI0607 20:56:36.089979 13573 net.cpp:226] conv5_1 needs backward computation.\nI0607 20:56:36.089983 13573 net.cpp:226] pool4 needs backward computation.\nI0607 20:56:36.089987 13573 net.cpp:226] conv4_3_relu4_3_0_split needs backward computation.\nI0607 20:56:36.089993 13573 net.cpp:226] relu4_3 needs backward computation.\nI0607 20:56:36.089996 13573 net.cpp:226] conv4_3 needs backward computation.\nI0607 20:56:36.090000 13573 net.cpp:226] relu4_2 needs backward computation.\nI0607 20:56:36.090003 13573 net.cpp:226] conv4_2 needs backward computation.\nI0607 20:56:36.090008 13573 net.cpp:226] relu4_1 needs backward computation.\nI0607 20:56:36.090011 13573 net.cpp:226] conv4_1 needs backward computation.\nI0607 20:56:36.090015 13573 net.cpp:226] pool3 needs backward computation.\nI0607 20:56:36.090019 13573 net.cpp:226] relu3_3 needs backward computation.\nI0607 20:56:36.090023 13573 net.cpp:226] conv3_3 needs backward computation.\nI0607 20:56:36.090026 13573 net.cpp:226] relu3_2 needs backward computation.\nI0607 20:56:36.090030 13573 net.cpp:226] conv3_2 needs backward computation.\nI0607 20:56:36.090035 13573 net.cpp:226] relu3_1 needs backward computation.\nI0607 20:56:36.090039 13573 net.cpp:226] conv3_1 needs backward computation.\nI0607 20:56:36.090042 13573 net.cpp:228] pool2 does not need backward computation.\nI0607 20:56:36.090047 13573 net.cpp:228] relu2_2 does not need backward computation.\nI0607 20:56:36.090050 13573 net.cpp:228] conv2_2 does not need 
backward computation.\nI0607 20:56:36.090054 13573 net.cpp:228] relu2_1 does not need backward computation.\nI0607 20:56:36.090059 13573 net.cpp:228] conv2_1 does not need backward computation.\nI0607 20:56:36.090062 13573 net.cpp:228] pool1 does not need backward computation.\nI0607 20:56:36.090066 13573 net.cpp:228] relu1_2 does not need backward computation.\nI0607 20:56:36.090070 13573 net.cpp:228] conv1_2 does not need backward computation.\nI0607 20:56:36.090073 13573 net.cpp:228] relu1_1 does not need backward computation.\nI0607 20:56:36.090077 13573 net.cpp:228] conv1_1 does not need backward computation.\nI0607 20:56:36.090082 13573 net.cpp:228] gt_boxes_input-data_2_split does not need backward computation.\nI0607 20:56:36.090087 13573 net.cpp:228] im_info_input-data_1_split does not need backward computation.\nI0607 20:56:36.090092 13573 net.cpp:228] data_input-data_0_split does not need backward computation.\nI0607 20:56:36.090096 13573 net.cpp:228] input-data does not need backward computation.\nI0607 20:56:36.090101 13573 net.cpp:270] This network produces output loss_bbox\nI0607 20:56:36.090104 13573 net.cpp:270] This network produces output loss_cls\nI0607 20:56:36.090107 13573 net.cpp:270] This network produces output p2_rpn_cls_loss\nI0607 20:56:36.090112 13573 net.cpp:270] This network produces output p2_rpn_loss_bbox\nI0607 20:56:36.090116 13573 net.cpp:270] This network produces output rpn_cls_loss\nI0607 20:56:36.090119 13573 net.cpp:270] This network produces output rpn_loss_bbox\nI0607 20:56:36.090178 13573 net.cpp:283] Network initialization done.\nI0607 20:56:36.090401 13573 solver.cpp:60] Solver scaffolding done.\nLoading pretrained model weights from data/imagenet_models/VGG16.v2.caffemodel\n[libprotobuf INFO google/protobuf/io/coded_stream.cc:610] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  
To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.\n[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 553432430\nI0607 20:56:36.474828 13573 net.cpp:816] Ignoring source layer pool5\nI0607 20:56:36.570307 13573 net.cpp:816] Ignoring source layer fc8\nI0607 20:56:36.570358 13573 net.cpp:816] Ignoring source layer prob\nSolving...\n/home/ubuntu/Work/brbchen/unskychen/FP_Net/tools/../lib/rpn/proposal_target_layer.py:166: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n  fg_inds = npr.choice(fg_inds, size=fg_rois_per_this_image, replace=False)\n/home/ubuntu/Work/brbchen/unskychen/FP_Net/tools/../lib/rpn/proposal_target_layer.py:177: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n  bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image, replace=False)\n/home/ubuntu/Work/brbchen/unskychen/FP_Net/tools/../lib/rpn/proposal_target_layer.py:184: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n  labels[fg_rois_per_this_image:] = 0\n/home/ubuntu/Work/brbchen/unskychen/FP_Net/tools/../lib/rpn/proposal_target_layer.py:127: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n  bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]\n/home/ubuntu/Work/brbchen/unskychen/FP_Net/tools/../lib/rpn/proposal_target_layer.py:128: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n  bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS\nI0607 20:56:37.297803 13573 solver.cpp:229] Iteration 0, loss = 5.75535\nI0607 20:56:37.297866 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.658738 (* 1 = 0.658738 
loss)\nI0607 20:56:37.297875 13573 solver.cpp:245]     Train net output #1: loss_cls = 3.71419 (* 1 = 3.71419 loss)\nI0607 20:56:37.297881 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.693147 (* 1 = 0.693147 loss)\nI0607 20:56:37.297888 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0306204 (* 1 = 0.0306204 loss)\nI0607 20:56:37.297893 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.693147 (* 1 = 0.693147 loss)\nI0607 20:56:37.297899 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0306204 (* 1 = 0.0306204 loss)\nI0607 20:56:37.297914 13573 sgd_solver.cpp:106] Iteration 0, lr = 0.001\nI0607 20:56:49.915751 13573 solver.cpp:229] Iteration 20, loss = 2.91147\nI0607 20:56:49.915824 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.184991 (* 1 = 0.184991 loss)\nI0607 20:56:49.915837 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.703872 (* 1 = 0.703872 loss)\nI0607 20:56:49.915843 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.676324 (* 1 = 0.676324 loss)\nI0607 20:56:49.915848 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0086595 (* 1 = 0.0086595 loss)\nI0607 20:56:49.915853 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.675991 (* 1 = 0.675991 loss)\nI0607 20:56:49.915858 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0086595 (* 1 = 0.0086595 loss)\nI0607 20:56:49.915865 13573 sgd_solver.cpp:106] Iteration 20, lr = 0.001\nI0607 20:57:02.911805 13573 solver.cpp:229] Iteration 40, loss = 2.87693\nI0607 20:57:02.911876 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00395806 (* 1 = 0.00395806 loss)\nI0607 20:57:02.911890 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.172954 (* 1 = 0.172954 loss)\nI0607 20:57:02.911895 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.658599 (* 1 = 0.658599 loss)\nI0607 20:57:02.911901 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.0554137 (* 1 = 0.0554137 loss)\nI0607 20:57:02.911906 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.656067 (* 1 = 0.656067 loss)\nI0607 20:57:02.911912 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0554137 (* 1 = 0.0554137 loss)\nI0607 20:57:02.911918 13573 sgd_solver.cpp:106] Iteration 40, lr = 0.001\nI0607 20:57:16.073887 13573 solver.cpp:229] Iteration 60, loss = 2.61586\nI0607 20:57:16.073956 13573 solver.cpp:245]     Train net output #0: loss_bbox = 2.60075e-05 (* 1 = 2.60075e-05 loss)\nI0607 20:57:16.073967 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0968212 (* 1 = 0.0968212 loss)\nI0607 20:57:16.073973 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.646788 (* 1 = 0.646788 loss)\nI0607 20:57:16.073978 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.232379 (* 1 = 0.232379 loss)\nI0607 20:57:16.073984 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.645089 (* 1 = 0.645089 loss)\nI0607 20:57:16.073989 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.232379 (* 1 = 0.232379 loss)\nI0607 20:57:16.073997 13573 sgd_solver.cpp:106] Iteration 60, lr = 0.001\nI0607 20:57:29.126812 13573 solver.cpp:229] Iteration 80, loss = 1.76048\nI0607 20:57:29.126878 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00414452 (* 1 = 0.00414452 loss)\nI0607 20:57:29.126888 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0972382 (* 1 = 0.0972382 loss)\nI0607 20:57:29.126893 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.630218 (* 1 = 0.630218 loss)\nI0607 20:57:29.126899 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.153965 (* 1 = 0.153965 loss)\nI0607 20:57:29.126904 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.631355 (* 1 = 0.631355 loss)\nI0607 20:57:29.126910 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.153965 (* 1 = 0.153965 
loss)\nI0607 20:57:29.126917 13573 sgd_solver.cpp:106] Iteration 80, lr = 0.001\nI0607 20:57:42.032887 13573 solver.cpp:229] Iteration 100, loss = 2.70806\nI0607 20:57:42.032956 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.697918 (* 1 = 0.697918 loss)\nI0607 20:57:42.032968 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.79408 (* 1 = 0.79408 loss)\nI0607 20:57:42.032975 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.584256 (* 1 = 0.584256 loss)\nI0607 20:57:42.032982 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0675202 (* 1 = 0.0675202 loss)\nI0607 20:57:42.032989 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.585105 (* 1 = 0.585105 loss)\nI0607 20:57:42.032994 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0675202 (* 1 = 0.0675202 loss)\nI0607 20:57:42.033002 13573 sgd_solver.cpp:106] Iteration 100, lr = 0.001\nI0607 20:57:54.978543 13573 solver.cpp:229] Iteration 120, loss = 1.90072\nI0607 20:57:54.978611 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.428479 (* 1 = 0.428479 loss)\nI0607 20:57:54.978623 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.964012 (* 1 = 0.964012 loss)\nI0607 20:57:54.978629 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.562894 (* 1 = 0.562894 loss)\nI0607 20:57:54.978636 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130267 (* 1 = 0.0130267 loss)\nI0607 20:57:54.978642 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.564836 (* 1 = 0.564836 loss)\nI0607 20:57:54.978649 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0130268 (* 1 = 0.0130268 loss)\nI0607 20:57:54.978657 13573 sgd_solver.cpp:106] Iteration 120, lr = 0.001\nI0607 20:58:08.054041 13573 solver.cpp:229] Iteration 140, loss = 1.93119\nI0607 20:58:08.054116 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.605293 (* 1 = 0.605293 loss)\nI0607 20:58:08.054128 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.937853 (* 1 = 0.937853 loss)\nI0607 20:58:08.054134 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.542619 (* 1 = 0.542619 loss)\nI0607 20:58:08.054142 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0222721 (* 1 = 0.0222721 loss)\nI0607 20:58:08.054147 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.545535 (* 1 = 0.545535 loss)\nI0607 20:58:08.054154 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0222721 (* 1 = 0.0222721 loss)\nI0607 20:58:08.054162 13573 sgd_solver.cpp:106] Iteration 140, lr = 0.001\nI0607 20:58:20.829402 13573 solver.cpp:229] Iteration 160, loss = 2.68836\nI0607 20:58:20.829466 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.596113 (* 1 = 0.596113 loss)\nI0607 20:58:20.829478 13573 solver.cpp:245]     Train net output #1: loss_cls = 1.09387 (* 1 = 1.09387 loss)\nI0607 20:58:20.829484 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.526494 (* 1 = 0.526494 loss)\nI0607 20:58:20.829490 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0414578 (* 1 = 0.0414578 loss)\nI0607 20:58:20.829496 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.522514 (* 1 = 0.522514 loss)\nI0607 20:58:20.829501 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0414578 (* 1 = 0.0414578 loss)\nI0607 20:58:20.829509 13573 sgd_solver.cpp:106] Iteration 160, lr = 0.001\nI0607 20:58:33.549151 13573 solver.cpp:229] Iteration 180, loss = 2.77831\nI0607 20:58:33.549221 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000497255 (* 1 = 0.000497255 loss)\nI0607 20:58:33.549230 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.362114 (* 1 = 0.362114 loss)\nI0607 20:58:33.549238 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.666463 (* 1 = 0.666463 loss)\nI0607 20:58:33.549245 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 
0.57665 (* 1 = 0.57665 loss)\nI0607 20:58:33.549250 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.670771 (* 1 = 0.670771 loss)\nI0607 20:58:33.549257 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.576695 (* 1 = 0.576695 loss)\nI0607 20:58:33.549263 13573 sgd_solver.cpp:106] Iteration 180, lr = 0.001\nspeed: 0.646s / iter\nI0607 20:58:46.378669 13573 solver.cpp:229] Iteration 200, loss = 2.72727\nI0607 20:58:46.378739 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.781286 (* 1 = 0.781286 loss)\nI0607 20:58:46.378752 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.89254 (* 1 = 0.89254 loss)\nI0607 20:58:46.378758 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.51348 (* 1 = 0.51348 loss)\nI0607 20:58:46.378764 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0392155 (* 1 = 0.0392155 loss)\nI0607 20:58:46.378784 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.508387 (* 1 = 0.508387 loss)\nI0607 20:58:46.378793 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0392154 (* 1 = 0.0392154 loss)\nI0607 20:58:46.378800 13573 sgd_solver.cpp:106] Iteration 200, lr = 0.001\nI0607 20:58:59.298079 13573 solver.cpp:229] Iteration 220, loss = 2.18744\nI0607 20:58:59.298143 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000707869 (* 1 = 0.000707869 loss)\nI0607 20:58:59.298159 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0919617 (* 1 = 0.0919617 loss)\nI0607 20:58:59.298166 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.482446 (* 1 = 0.482446 loss)\nI0607 20:58:59.298171 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00667347 (* 1 = 0.00667347 loss)\nI0607 20:58:59.298177 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.482128 (* 1 = 0.482128 loss)\nI0607 20:58:59.298183 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00667347 (* 1 = 0.00667347 
loss)\nI0607 20:58:59.298190 13573 sgd_solver.cpp:106] Iteration 220, lr = 0.001\nI0607 20:59:12.132787 13573 solver.cpp:229] Iteration 240, loss = 1.82587\nI0607 20:59:12.132854 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.669571 (* 1 = 0.669571 loss)\nI0607 20:59:12.132866 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.788725 (* 1 = 0.788725 loss)\nI0607 20:59:12.132871 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.470864 (* 1 = 0.470864 loss)\nI0607 20:59:12.132879 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269571 (* 1 = 0.0269571 loss)\nI0607 20:59:12.132884 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.469987 (* 1 = 0.469987 loss)\nI0607 20:59:12.132889 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269571 (* 1 = 0.0269571 loss)\nI0607 20:59:12.132896 13573 sgd_solver.cpp:106] Iteration 240, lr = 0.001\nI0607 20:59:24.995242 13573 solver.cpp:229] Iteration 260, loss = 2.09799\nI0607 20:59:24.995303 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.712865 (* 1 = 0.712865 loss)\nI0607 20:59:24.995314 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.434996 (* 1 = 0.434996 loss)\nI0607 20:59:24.995321 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.471622 (* 1 = 0.471622 loss)\nI0607 20:59:24.995327 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269736 (* 1 = 0.0269736 loss)\nI0607 20:59:24.995333 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.47096 (* 1 = 0.47096 loss)\nI0607 20:59:24.995339 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269737 (* 1 = 0.0269737 loss)\nI0607 20:59:24.995347 13573 sgd_solver.cpp:106] Iteration 260, lr = 0.001\nI0607 20:59:37.769798 13573 solver.cpp:229] Iteration 280, loss = 2.04241\nI0607 20:59:37.769867 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.630445 (* 1 = 0.630445 loss)\nI0607 20:59:37.769876 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.676989 (* 1 = 0.676989 loss)\nI0607 20:59:37.769882 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.432186 (* 1 = 0.432186 loss)\nI0607 20:59:37.769888 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00533881 (* 1 = 0.00533881 loss)\nI0607 20:59:37.769894 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.426593 (* 1 = 0.426593 loss)\nI0607 20:59:37.769901 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00533895 (* 1 = 0.00533895 loss)\nI0607 20:59:37.769908 13573 sgd_solver.cpp:106] Iteration 280, lr = 0.001\nI0607 20:59:50.628671 13573 solver.cpp:229] Iteration 300, loss = 2.19427\nI0607 20:59:50.628733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.666671 (* 1 = 0.666671 loss)\nI0607 20:59:50.628742 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.51831 (* 1 = 0.51831 loss)\nI0607 20:59:50.628748 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.494625 (* 1 = 0.494625 loss)\nI0607 20:59:50.628753 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.14246 (* 1 = 0.14246 loss)\nI0607 20:59:50.628759 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.500288 (* 1 = 0.500288 loss)\nI0607 20:59:50.628765 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.142437 (* 1 = 0.142437 loss)\nI0607 20:59:50.628772 13573 sgd_solver.cpp:106] Iteration 300, lr = 0.001\nI0607 21:00:03.494832 13573 solver.cpp:229] Iteration 320, loss = 1.2491\nI0607 21:00:03.494899 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.258723 (* 1 = 0.258723 loss)\nI0607 21:00:03.494910 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.29935 (* 1 = 0.29935 loss)\nI0607 21:00:03.494916 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.457974 (* 1 = 0.457974 loss)\nI0607 21:00:03.494923 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0468617 (* 
1 = 0.0468617 loss)\nI0607 21:00:03.494928 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.457261 (* 1 = 0.457261 loss)\nI0607 21:00:03.494935 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0468698 (* 1 = 0.0468698 loss)\nI0607 21:00:03.494941 13573 sgd_solver.cpp:106] Iteration 320, lr = 0.001\nI0607 21:00:16.293133 13573 solver.cpp:229] Iteration 340, loss = 1.57029\nI0607 21:00:16.293196 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00601811 (* 1 = 0.00601811 loss)\nI0607 21:00:16.293206 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.234626 (* 1 = 0.234626 loss)\nI0607 21:00:16.293215 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.455656 (* 1 = 0.455656 loss)\nI0607 21:00:16.293221 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0268376 (* 1 = 0.0268376 loss)\nI0607 21:00:16.293227 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.460125 (* 1 = 0.460125 loss)\nI0607 21:00:16.293234 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0268422 (* 1 = 0.0268422 loss)\nI0607 21:00:16.293241 13573 sgd_solver.cpp:106] Iteration 340, lr = 0.001\nI0607 21:00:29.369027 13573 solver.cpp:229] Iteration 360, loss = 1.68125\nI0607 21:00:29.369093 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.298948 (* 1 = 0.298948 loss)\nI0607 21:00:29.369103 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.33315 (* 1 = 0.33315 loss)\nI0607 21:00:29.369109 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41349 (* 1 = 0.41349 loss)\nI0607 21:00:29.369117 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00461879 (* 1 = 0.00461879 loss)\nI0607 21:00:29.369122 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408085 (* 1 = 0.408085 loss)\nI0607 21:00:29.369127 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00462213 (* 1 = 0.00462213 loss)\nI0607 21:00:29.369134 13573 
sgd_solver.cpp:106] Iteration 360, lr = 0.001\nI0607 21:00:42.252727 13573 solver.cpp:229] Iteration 380, loss = 2.36362\nI0607 21:00:42.252789 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.272025 (* 1 = 0.272025 loss)\nI0607 21:00:42.252799 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.299054 (* 1 = 0.299054 loss)\nI0607 21:00:42.252805 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.658222 (* 1 = 0.658222 loss)\nI0607 21:00:42.252810 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.615029 (* 1 = 0.615029 loss)\nI0607 21:00:42.252816 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.657015 (* 1 = 0.657015 loss)\nI0607 21:00:42.252822 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.615053 (* 1 = 0.615053 loss)\nI0607 21:00:42.252830 13573 sgd_solver.cpp:106] Iteration 380, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:00:55.092828 13573 solver.cpp:229] Iteration 400, loss = 2.02803\nI0607 21:00:55.092895 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.509066 (* 1 = 0.509066 loss)\nI0607 21:00:55.092905 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.456733 (* 1 = 0.456733 loss)\nI0607 21:00:55.092911 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.385635 (* 1 = 0.385635 loss)\nI0607 21:00:55.092917 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0271275 (* 1 = 0.0271275 loss)\nI0607 21:00:55.092923 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.38468 (* 1 = 0.38468 loss)\nI0607 21:00:55.092929 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0271306 (* 1 = 0.0271306 loss)\nI0607 21:00:55.092936 13573 sgd_solver.cpp:106] Iteration 400, lr = 0.001\nI0607 21:01:08.034063 13573 solver.cpp:229] Iteration 420, loss = 1.87503\nI0607 21:01:08.034126 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.605642 (* 1 = 0.605642 loss)\nI0607 21:01:08.034134 13573 solver.cpp:245]    
 Train net output #1: loss_cls = 0.728012 (* 1 = 0.728012 loss)\nI0607 21:01:08.034142 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.42255 (* 1 = 0.42255 loss)\nI0607 21:01:08.034147 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0626515 (* 1 = 0.0626515 loss)\nI0607 21:01:08.034153 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.423991 (* 1 = 0.423991 loss)\nI0607 21:01:08.034158 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0626457 (* 1 = 0.0626457 loss)\nI0607 21:01:08.034168 13573 sgd_solver.cpp:106] Iteration 420, lr = 0.001\nI0607 21:01:21.013563 13573 solver.cpp:229] Iteration 440, loss = 1.77966\nI0607 21:01:21.013629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00204994 (* 1 = 0.00204994 loss)\nI0607 21:01:21.013638 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.125614 (* 1 = 0.125614 loss)\nI0607 21:01:21.013645 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.413165 (* 1 = 0.413165 loss)\nI0607 21:01:21.013651 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00358339 (* 1 = 0.00358339 loss)\nI0607 21:01:21.013661 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.413079 (* 1 = 0.413079 loss)\nI0607 21:01:21.013669 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00358913 (* 1 = 0.00358913 loss)\nI0607 21:01:21.013676 13573 sgd_solver.cpp:106] Iteration 440, lr = 0.001\nI0607 21:01:33.789173 13573 solver.cpp:229] Iteration 460, loss = 1.31459\nI0607 21:01:33.789235 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00158702 (* 1 = 0.00158702 loss)\nI0607 21:01:33.789245 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.045752 (* 1 = 0.045752 loss)\nI0607 21:01:33.789252 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.362315 (* 1 = 0.362315 loss)\nI0607 21:01:33.789258 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0100252 (* 1 
= 0.0100252 loss)
I0607 21:01:33.789263 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.373366 (* 1 = 0.373366 loss)
I0607 21:01:33.789268 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0100138 (* 1 = 0.0100138 loss)
I0607 21:01:33.789275 13573 sgd_solver.cpp:106] Iteration 460, lr = 0.001
[iterations 480-1380 omitted: each entry reports the same six loss terms (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox, all with weight 1), total loss fluctuating roughly between 0.58 and 3.18, lr = 0.001 throughout, speed: 0.645s / iter]
I0607 21:11:40.703117 13573 solver.cpp:229] Iteration 1400, loss = 1.0495
I0607 21:11:40.703184 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.218675 (* 1 = 0.218675 loss)
I0607 21:11:40.703194 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.497358 (* 1 = 0.497358 loss)
I0607 21:11:40.703199 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.282467 (* 1 = 0.282467 loss)\nI0607 21:11:40.703205 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0126455 (* 1 = 0.0126455 loss)\nI0607 21:11:40.703212 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.257577 (* 1 = 0.257577 loss)\nI0607 21:11:40.703217 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.012636 (* 1 = 0.012636 loss)\nI0607 21:11:40.703223 13573 sgd_solver.cpp:106] Iteration 1400, lr = 0.001\nI0607 21:11:53.569151 13573 solver.cpp:229] Iteration 1420, loss = 1.03692\nI0607 21:11:53.569211 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.209109 (* 1 = 0.209109 loss)\nI0607 21:11:53.569219 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.228778 (* 1 = 0.228778 loss)\nI0607 21:11:53.569226 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.209328 (* 1 = 0.209328 loss)\nI0607 21:11:53.569231 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0176556 (* 1 = 0.0176556 loss)\nI0607 21:11:53.569236 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.21087 (* 1 = 0.21087 loss)\nI0607 21:11:53.569242 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0176355 (* 1 = 0.0176355 loss)\nI0607 21:11:53.569248 13573 sgd_solver.cpp:106] Iteration 1420, lr = 0.001\nI0607 21:12:06.545876 13573 solver.cpp:229] Iteration 1440, loss = 1.2589\nI0607 21:12:06.545948 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.397136 (* 1 = 0.397136 loss)\nI0607 21:12:06.545958 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.240696 (* 1 = 0.240696 loss)\nI0607 21:12:06.545964 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.242434 (* 1 = 0.242434 loss)\nI0607 21:12:06.545970 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0549411 (* 1 = 0.0549411 loss)\nI0607 21:12:06.545975 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.239497 (* 1 = 0.239497 loss)\nI0607 21:12:06.545981 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0549411 (* 1 = 0.0549411 loss)\nI0607 21:12:06.545987 13573 sgd_solver.cpp:106] Iteration 1440, lr = 0.001\nI0607 21:12:19.575495 13573 solver.cpp:229] Iteration 1460, loss = 1.10283\nI0607 21:12:19.575559 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.14618 (* 1 = 0.14618 loss)\nI0607 21:12:19.575568 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.334224 (* 1 = 0.334224 loss)\nI0607 21:12:19.575574 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.216162 (* 1 = 0.216162 loss)\nI0607 21:12:19.575580 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00196586 (* 1 = 0.00196586 loss)\nI0607 21:12:19.575587 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.200868 (* 1 = 0.200868 loss)\nI0607 21:12:19.575592 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00196539 (* 1 = 0.00196539 loss)\nI0607 21:12:19.575599 13573 sgd_solver.cpp:106] Iteration 1460, lr = 0.001\nI0607 21:12:32.516702 13573 solver.cpp:229] Iteration 1480, loss = 2.20443\nI0607 21:12:32.516767 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.537677 (* 1 = 0.537677 loss)\nI0607 21:12:32.516777 13573 solver.cpp:245]     Train net output #1: loss_cls = 1.40747 (* 1 = 1.40747 loss)\nI0607 21:12:32.516783 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.397919 (* 1 = 0.397919 loss)\nI0607 21:12:32.516789 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.107643 (* 1 = 0.107643 loss)\nI0607 21:12:32.516794 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.40267 (* 1 = 0.40267 loss)\nI0607 21:12:32.516800 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.107659 (* 1 = 0.107659 loss)\nI0607 21:12:32.516808 13573 sgd_solver.cpp:106] Iteration 1480, lr = 0.001\nI0607 21:12:45.261037 13573 solver.cpp:229] Iteration 1500, loss = 
1.14416\nI0607 21:12:45.261098 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00824702 (* 1 = 0.00824702 loss)\nI0607 21:12:45.261108 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0758491 (* 1 = 0.0758491 loss)\nI0607 21:12:45.261114 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.297376 (* 1 = 0.297376 loss)\nI0607 21:12:45.261121 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0242249 (* 1 = 0.0242249 loss)\nI0607 21:12:45.261126 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.251078 (* 1 = 0.251078 loss)\nI0607 21:12:45.261132 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0242211 (* 1 = 0.0242211 loss)\nI0607 21:12:45.261138 13573 sgd_solver.cpp:106] Iteration 1500, lr = 0.001\nI0607 21:12:58.217232 13573 solver.cpp:229] Iteration 1520, loss = 2.02919\nI0607 21:12:58.217304 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.569799 (* 1 = 0.569799 loss)\nI0607 21:12:58.217314 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.633982 (* 1 = 0.633982 loss)\nI0607 21:12:58.217320 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.249328 (* 1 = 0.249328 loss)\nI0607 21:12:58.217326 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.041065 (* 1 = 0.041065 loss)\nI0607 21:12:58.217331 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.242745 (* 1 = 0.242745 loss)\nI0607 21:12:58.217337 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0410426 (* 1 = 0.0410426 loss)\nI0607 21:12:58.217344 13573 sgd_solver.cpp:106] Iteration 1520, lr = 0.001\nI0607 21:13:11.195315 13573 solver.cpp:229] Iteration 1540, loss = 1.20915\nI0607 21:13:11.195381 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00282874 (* 1 = 0.00282874 loss)\nI0607 21:13:11.195390 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00888622 (* 1 = 0.00888622 loss)\nI0607 21:13:11.195396 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.247514 (* 1 = 0.247514 loss)\nI0607 21:13:11.195401 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00357102 (* 1 = 0.00357102 loss)\nI0607 21:13:11.195407 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.233499 (* 1 = 0.233499 loss)\nI0607 21:13:11.195413 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00356074 (* 1 = 0.00356074 loss)\nI0607 21:13:11.195420 13573 sgd_solver.cpp:106] Iteration 1540, lr = 0.001\nI0607 21:13:23.792140 13573 solver.cpp:229] Iteration 1560, loss = 1.32344\nI0607 21:13:23.792201 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000462023 (* 1 = 0.000462023 loss)\nI0607 21:13:23.792209 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108301 (* 1 = 0.108301 loss)\nI0607 21:13:23.792215 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.51966 (* 1 = 0.51966 loss)\nI0607 21:13:23.792222 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.387968 (* 1 = 0.387968 loss)\nI0607 21:13:23.792227 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.528464 (* 1 = 0.528464 loss)\nI0607 21:13:23.792232 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.388004 (* 1 = 0.388004 loss)\nI0607 21:13:23.792238 13573 sgd_solver.cpp:106] Iteration 1560, lr = 0.001\nI0607 21:13:36.750924 13573 solver.cpp:229] Iteration 1580, loss = 2.72568\nI0607 21:13:36.750996 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.288324 (* 1 = 0.288324 loss)\nI0607 21:13:36.751006 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.407354 (* 1 = 0.407354 loss)\nI0607 21:13:36.751013 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.664362 (* 1 = 0.664362 loss)\nI0607 21:13:36.751018 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.323331 (* 1 = 0.323331 loss)\nI0607 21:13:36.751024 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.667348 (* 1 
= 0.667348 loss)\nI0607 21:13:36.751029 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.323436 (* 1 = 0.323436 loss)\nI0607 21:13:36.751035 13573 sgd_solver.cpp:106] Iteration 1580, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:13:49.712494 13573 solver.cpp:229] Iteration 1600, loss = 0.925849\nI0607 21:13:49.712555 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00158217 (* 1 = 0.00158217 loss)\nI0607 21:13:49.712566 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0716223 (* 1 = 0.0716223 loss)\nI0607 21:13:49.712573 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.351278 (* 1 = 0.351278 loss)\nI0607 21:13:49.712579 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0431793 (* 1 = 0.0431793 loss)\nI0607 21:13:49.712585 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.349096 (* 1 = 0.349096 loss)\nI0607 21:13:49.712591 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0431765 (* 1 = 0.0431765 loss)\nI0607 21:13:49.712599 13573 sgd_solver.cpp:106] Iteration 1600, lr = 0.001\nI0607 21:14:02.536875 13573 solver.cpp:229] Iteration 1620, loss = 1.41194\nI0607 21:14:02.536948 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.439096 (* 1 = 0.439096 loss)\nI0607 21:14:02.536958 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.382255 (* 1 = 0.382255 loss)\nI0607 21:14:02.536964 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.210529 (* 1 = 0.210529 loss)\nI0607 21:14:02.536972 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00739509 (* 1 = 0.00739509 loss)\nI0607 21:14:02.536978 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.20269 (* 1 = 0.20269 loss)\nI0607 21:14:02.536983 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00739509 (* 1 = 0.00739509 loss)\nI0607 21:14:02.536990 13573 sgd_solver.cpp:106] Iteration 1620, lr = 0.001\nI0607 21:14:15.523550 13573 solver.cpp:229] 
Iteration 1640, loss = 1.2961\nI0607 21:14:15.523608 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.603778 (* 1 = 0.603778 loss)\nI0607 21:14:15.523617 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.195261 (* 1 = 0.195261 loss)\nI0607 21:14:15.523623 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.266891 (* 1 = 0.266891 loss)\nI0607 21:14:15.523629 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0409157 (* 1 = 0.0409157 loss)\nI0607 21:14:15.523635 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.268777 (* 1 = 0.268777 loss)\nI0607 21:14:15.523640 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0409258 (* 1 = 0.0409258 loss)\nI0607 21:14:15.523648 13573 sgd_solver.cpp:106] Iteration 1640, lr = 0.001\nI0607 21:14:28.205461 13573 solver.cpp:229] Iteration 1660, loss = 1.29705\nI0607 21:14:28.205530 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.241106 (* 1 = 0.241106 loss)\nI0607 21:14:28.205540 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.269515 (* 1 = 0.269515 loss)\nI0607 21:14:28.205546 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.258044 (* 1 = 0.258044 loss)\nI0607 21:14:28.205552 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0327292 (* 1 = 0.0327292 loss)\nI0607 21:14:28.205557 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256081 (* 1 = 0.256081 loss)\nI0607 21:14:28.205564 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0327287 (* 1 = 0.0327287 loss)\nI0607 21:14:28.205569 13573 sgd_solver.cpp:106] Iteration 1660, lr = 0.001\nI0607 21:14:41.008466 13573 solver.cpp:229] Iteration 1680, loss = 1.8493\nI0607 21:14:41.008523 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.527185 (* 1 = 0.527185 loss)\nI0607 21:14:41.008534 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.535925 (* 1 = 0.535925 loss)\nI0607 21:14:41.008541 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.145381 (* 1 = 0.145381 loss)\nI0607 21:14:41.008548 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.010951 (* 1 = 0.010951 loss)\nI0607 21:14:41.008553 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.145476 (* 1 = 0.145476 loss)\nI0607 21:14:41.008558 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0109533 (* 1 = 0.0109533 loss)\nI0607 21:14:41.008565 13573 sgd_solver.cpp:106] Iteration 1680, lr = 0.001\nI0607 21:14:53.827015 13573 solver.cpp:229] Iteration 1700, loss = 0.685647\nI0607 21:14:53.827077 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00837289 (* 1 = 0.00837289 loss)\nI0607 21:14:53.827086 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105258 (* 1 = 0.105258 loss)\nI0607 21:14:53.827093 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.336684 (* 1 = 0.336684 loss)\nI0607 21:14:53.827100 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00929013 (* 1 = 0.00929013 loss)\nI0607 21:14:53.827105 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.315931 (* 1 = 0.315931 loss)\nI0607 21:14:53.827111 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00928035 (* 1 = 0.00928035 loss)\nI0607 21:14:53.827116 13573 sgd_solver.cpp:106] Iteration 1700, lr = 0.001\nI0607 21:15:06.782002 13573 solver.cpp:229] Iteration 1720, loss = 1.36782\nI0607 21:15:06.782061 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.825343 (* 1 = 0.825343 loss)\nI0607 21:15:06.782070 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.244822 (* 1 = 0.244822 loss)\nI0607 21:15:06.782076 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.189523 (* 1 = 0.189523 loss)\nI0607 21:15:06.782083 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.011344 (* 1 = 0.011344 loss)\nI0607 21:15:06.782088 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.18409 (* 1 = 0.18409 loss)\nI0607 21:15:06.782094 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0113387 (* 1 = 0.0113387 loss)\nI0607 21:15:06.782100 13573 sgd_solver.cpp:106] Iteration 1720, lr = 0.001\nI0607 21:15:19.534961 13573 solver.cpp:229] Iteration 1740, loss = 1.47916\nI0607 21:15:19.535027 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.357568 (* 1 = 0.357568 loss)\nI0607 21:15:19.535037 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.500743 (* 1 = 0.500743 loss)\nI0607 21:15:19.535043 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.142505 (* 1 = 0.142505 loss)\nI0607 21:15:19.535048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00633926 (* 1 = 0.00633926 loss)\nI0607 21:15:19.535054 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.143885 (* 1 = 0.143885 loss)\nI0607 21:15:19.535059 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00633996 (* 1 = 0.00633996 loss)\nI0607 21:15:19.535065 13573 sgd_solver.cpp:106] Iteration 1740, lr = 0.001\nI0607 21:15:32.315785 13573 solver.cpp:229] Iteration 1760, loss = 0.94058\nI0607 21:15:32.315845 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.287159 (* 1 = 0.287159 loss)\nI0607 21:15:32.315853 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.171561 (* 1 = 0.171561 loss)\nI0607 21:15:32.315860 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.26199 (* 1 = 0.26199 loss)\nI0607 21:15:32.315865 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0031993 (* 1 = 0.0031993 loss)\nI0607 21:15:32.315871 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.250729 (* 1 = 0.250729 loss)\nI0607 21:15:32.315877 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00320061 (* 1 = 0.00320061 loss)\nI0607 21:15:32.315884 13573 sgd_solver.cpp:106] Iteration 1760, lr = 0.001\nI0607 21:15:45.292888 13573 solver.cpp:229] 
Iteration 1780, loss = 0.862696\nI0607 21:15:45.292954 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0491483 (* 1 = 0.0491483 loss)\nI0607 21:15:45.292963 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122498 (* 1 = 0.122498 loss)\nI0607 21:15:45.292969 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.236701 (* 1 = 0.236701 loss)\nI0607 21:15:45.292975 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0169388 (* 1 = 0.0169388 loss)\nI0607 21:15:45.292980 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.224859 (* 1 = 0.224859 loss)\nI0607 21:15:45.292986 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0169288 (* 1 = 0.0169288 loss)\nI0607 21:15:45.292994 13573 sgd_solver.cpp:106] Iteration 1780, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:15:58.067876 13573 solver.cpp:229] Iteration 1800, loss = 1.63472\nI0607 21:15:58.067941 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.149809 (* 1 = 0.149809 loss)\nI0607 21:15:58.067950 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.394789 (* 1 = 0.394789 loss)\nI0607 21:15:58.067956 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.341726 (* 1 = 0.341726 loss)\nI0607 21:15:58.067962 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0149326 (* 1 = 0.0149326 loss)\nI0607 21:15:58.067967 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.315744 (* 1 = 0.315744 loss)\nI0607 21:15:58.067973 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.014927 (* 1 = 0.014927 loss)\nI0607 21:15:58.068014 13573 sgd_solver.cpp:106] Iteration 1800, lr = 0.001\nI0607 21:16:10.914547 13573 solver.cpp:229] Iteration 1820, loss = 1.7458\nI0607 21:16:10.914613 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.454315 (* 1 = 0.454315 loss)\nI0607 21:16:10.914623 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.485015 (* 1 = 0.485015 loss)\nI0607 
21:16:10.914628 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.295108 (* 1 = 0.295108 loss)\nI0607 21:16:10.914635 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0397093 (* 1 = 0.0397093 loss)\nI0607 21:16:10.914641 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.284496 (* 1 = 0.284496 loss)\nI0607 21:16:10.914647 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0397018 (* 1 = 0.0397018 loss)\nI0607 21:16:10.914655 13573 sgd_solver.cpp:106] Iteration 1820, lr = 0.001\nI0607 21:16:23.700505 13573 solver.cpp:229] Iteration 1840, loss = 0.589357\nI0607 21:16:23.700572 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000437108 (* 1 = 0.000437108 loss)\nI0607 21:16:23.700582 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0441005 (* 1 = 0.0441005 loss)\nI0607 21:16:23.700587 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241372 (* 1 = 0.241372 loss)\nI0607 21:16:23.700592 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00943896 (* 1 = 0.00943896 loss)\nI0607 21:16:23.700598 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.227993 (* 1 = 0.227993 loss)\nI0607 21:16:23.700603 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00943393 (* 1 = 0.00943393 loss)\nI0607 21:16:23.700610 13573 sgd_solver.cpp:106] Iteration 1840, lr = 0.001\nI0607 21:16:36.601215 13573 solver.cpp:229] Iteration 1860, loss = 1.06328\nI0607 21:16:36.601279 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.096829 (* 1 = 0.096829 loss)\nI0607 21:16:36.601289 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.165937 (* 1 = 0.165937 loss)\nI0607 21:16:36.601294 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.286504 (* 1 = 0.286504 loss)\nI0607 21:16:36.601300 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110628 (* 1 = 0.0110628 loss)\nI0607 21:16:36.601307 13573 solver.cpp:245]   
  Train net output #4: rpn_cls_loss = 0.268278 (* 1 = 0.268278 loss)\nI0607 21:16:36.601312 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110919 (* 1 = 0.0110919 loss)\nI0607 21:16:36.601320 13573 sgd_solver.cpp:106] Iteration 1860, lr = 0.001\nI0607 21:16:49.413069 13573 solver.cpp:229] Iteration 1880, loss = 0.992141\nI0607 21:16:49.413142 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.383726 (* 1 = 0.383726 loss)\nI0607 21:16:49.413151 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.247494 (* 1 = 0.247494 loss)\nI0607 21:16:49.413156 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.196831 (* 1 = 0.196831 loss)\nI0607 21:16:49.413162 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0445974 (* 1 = 0.0445974 loss)\nI0607 21:16:49.413168 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197763 (* 1 = 0.197763 loss)\nI0607 21:16:49.413174 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0445906 (* 1 = 0.0445906 loss)\nI0607 21:16:49.413182 13573 sgd_solver.cpp:106] Iteration 1880, lr = 0.001\nI0607 21:17:02.198557 13573 solver.cpp:229] Iteration 1900, loss = 0.63294\nI0607 21:17:02.198618 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121308 (* 1 = 0.121308 loss)\nI0607 21:17:02.198628 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.12664 (* 1 = 0.12664 loss)\nI0607 21:17:02.198633 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.160566 (* 1 = 0.160566 loss)\nI0607 21:17:02.198639 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0573743 (* 1 = 0.0573743 loss)\nI0607 21:17:02.198645 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.173141 (* 1 = 0.173141 loss)\nI0607 21:17:02.198652 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0573804 (* 1 = 0.0573804 loss)\nI0607 21:17:02.198657 13573 sgd_solver.cpp:106] Iteration 1900, lr = 0.001\nI0607 21:17:15.006816 13573 
solver.cpp:229] Iteration 1920, loss = 1.42183\nI0607 21:17:15.006880 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.702998 (* 1 = 0.702998 loss)\nI0607 21:17:15.006887 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.466989 (* 1 = 0.466989 loss)\nI0607 21:17:15.006893 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.378405 (* 1 = 0.378405 loss)\nI0607 21:17:15.006901 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0905914 (* 1 = 0.0905914 loss)\nI0607 21:17:15.006906 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.384554 (* 1 = 0.384554 loss)\nI0607 21:17:15.006911 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0905982 (* 1 = 0.0905982 loss)\nI0607 21:17:15.006917 13573 sgd_solver.cpp:106] Iteration 1920, lr = 0.001\nI0607 21:17:27.924604 13573 solver.cpp:229] Iteration 1940, loss = 1.663\nI0607 21:17:27.924667 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00184172 (* 1 = 0.00184172 loss)\nI0607 21:17:27.924677 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0156128 (* 1 = 0.0156128 loss)\nI0607 21:17:27.924684 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.237586 (* 1 = 0.237586 loss)\nI0607 21:17:27.924688 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00956558 (* 1 = 0.00956558 loss)\nI0607 21:17:27.924695 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.245485 (* 1 = 0.245485 loss)\nI0607 21:17:27.924700 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00959652 (* 1 = 0.00959652 loss)\nI0607 21:17:27.924706 13573 sgd_solver.cpp:106] Iteration 1940, lr = 0.001\nI0607 21:17:40.939920 13573 solver.cpp:229] Iteration 1960, loss = 1.14311\nI0607 21:17:40.939995 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.463865 (* 1 = 0.463865 loss)\nI0607 21:17:40.940003 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.269579 (* 1 = 0.269579 loss)\nI0607 
21:17:40.940009 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.160366 (* 1 = 0.160366 loss)\nI0607 21:17:40.940016 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00401426 (* 1 = 0.00401426 loss)\nI0607 21:17:40.940021 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.162757 (* 1 = 0.162757 loss)\nI0607 21:17:40.940026 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00401426 (* 1 = 0.00401426 loss)\nI0607 21:17:40.940033 13573 sgd_solver.cpp:106] Iteration 1960, lr = 0.001\nI0607 21:17:53.783599 13573 solver.cpp:229] Iteration 1980, loss = 1.34724\nI0607 21:17:53.783679 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.447523 (* 1 = 0.447523 loss)\nI0607 21:17:53.783689 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.474478 (* 1 = 0.474478 loss)\nI0607 21:17:53.783695 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.471614 (* 1 = 0.471614 loss)\nI0607 21:17:53.783701 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0611436 (* 1 = 0.0611436 loss)\nI0607 21:17:53.783740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.45409 (* 1 = 0.45409 loss)\nI0607 21:17:53.783761 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.061153 (* 1 = 0.061153 loss)\nI0607 21:17:53.783777 13573 sgd_solver.cpp:106] Iteration 1980, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:18:06.848508 13573 solver.cpp:229] Iteration 2000, loss = 0.561836\nI0607 21:18:06.848565 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00341936 (* 1 = 0.00341936 loss)\nI0607 21:18:06.848577 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0932569 (* 1 = 0.0932569 loss)\nI0607 21:18:06.848582 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.203344 (* 1 = 0.203344 loss)\nI0607 21:18:06.848588 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00487082 (* 1 = 0.00487082 loss)\nI0607 21:18:06.848594 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.181583 (* 1 = 0.181583 loss)
I0607 21:18:06.848600 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0048718 (* 1 = 0.0048718 loss)
I0607 21:18:06.848608 13573 sgd_solver.cpp:106] Iteration 2000, lr = 0.001
I0607 21:18:19.639575 13573 solver.cpp:229] Iteration 2020, loss = 0.875582
I0607 21:18:19.639636 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0040648 (* 1 = 0.0040648 loss)
I0607 21:18:19.639647 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.060617 (* 1 = 0.060617 loss)
I0607 21:18:19.639659 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.289288 (* 1 = 0.289288 loss)
I0607 21:18:19.639667 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0565152 (* 1 = 0.0565152 loss)
I0607 21:18:19.639672 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.269505 (* 1 = 0.269505 loss)
I0607 21:18:19.639678 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0565003 (* 1 = 0.0565003 loss)
I0607 21:18:19.639684 13573 sgd_solver.cpp:106] Iteration 2020, lr = 0.001
[... iterations 2040-2960 omitted: same six loss outputs each iteration, total loss fluctuating between roughly 0.48 and 2.08, lr = 0.001 throughout, speed: 0.645s / iter ...]
I0607 21:28:37.733273 13573 solver.cpp:229] Iteration 2980, loss = 1.51395
I0607 21:28:37.733337 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.485265 (* 1 = 0.485265 loss)
I0607 21:28:37.733346 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.309665 (* 1 = 0.309665 loss)
I0607 21:28:37.733352 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.217934 (* 1 = 0.217934 loss)
I0607 21:28:37.733358 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.166866 (* 1 = 0.166866 loss)
I0607 21:28:37.733363 13573 solver.cpp:245]     Train 
net output #4: rpn_cls_loss = 0.216188 (* 1 = 0.216188 loss)\nI0607 21:28:37.733369 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.166835 (* 1 = 0.166835 loss)\nI0607 21:28:37.733376 13573 sgd_solver.cpp:106] Iteration 2980, lr = 0.001\nspeed: 0.644s / iter\nI0607 21:28:50.763185 13573 solver.cpp:229] Iteration 3000, loss = 1.03882\nI0607 21:28:50.763245 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.100383 (* 1 = 0.100383 loss)\nI0607 21:28:50.763255 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.204975 (* 1 = 0.204975 loss)\nI0607 21:28:50.763262 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.262478 (* 1 = 0.262478 loss)\nI0607 21:28:50.763267 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00351575 (* 1 = 0.00351575 loss)\nI0607 21:28:50.763273 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256843 (* 1 = 0.256843 loss)\nI0607 21:28:50.763278 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0035189 (* 1 = 0.0035189 loss)\nI0607 21:28:50.763286 13573 sgd_solver.cpp:106] Iteration 3000, lr = 0.001\nI0607 21:29:03.752197 13573 solver.cpp:229] Iteration 3020, loss = 0.885829\nI0607 21:29:03.752269 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00535877 (* 1 = 0.00535877 loss)\nI0607 21:29:03.752279 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0323672 (* 1 = 0.0323672 loss)\nI0607 21:29:03.752285 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.304096 (* 1 = 0.304096 loss)\nI0607 21:29:03.752290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0498007 (* 1 = 0.0498007 loss)\nI0607 21:29:03.752296 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.292655 (* 1 = 0.292655 loss)\nI0607 21:29:03.752301 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0498017 (* 1 = 0.0498017 loss)\nI0607 21:29:03.752308 13573 sgd_solver.cpp:106] Iteration 3020, lr = 0.001\nI0607 
21:29:16.538538 13573 solver.cpp:229] Iteration 3040, loss = 1.26035\nI0607 21:29:16.538599 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.200769 (* 1 = 0.200769 loss)\nI0607 21:29:16.538609 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.143667 (* 1 = 0.143667 loss)\nI0607 21:29:16.538616 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.170777 (* 1 = 0.170777 loss)\nI0607 21:29:16.538622 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0218966 (* 1 = 0.0218966 loss)\nI0607 21:29:16.538628 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.171831 (* 1 = 0.171831 loss)\nI0607 21:29:16.538635 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0219184 (* 1 = 0.0219184 loss)\nI0607 21:29:16.538641 13573 sgd_solver.cpp:106] Iteration 3040, lr = 0.001\nI0607 21:29:29.317864 13573 solver.cpp:229] Iteration 3060, loss = 1.54773\nI0607 21:29:29.317929 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.68531 (* 1 = 0.68531 loss)\nI0607 21:29:29.317939 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.365228 (* 1 = 0.365228 loss)\nI0607 21:29:29.317945 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.106084 (* 1 = 0.106084 loss)\nI0607 21:29:29.317950 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00438238 (* 1 = 0.00438238 loss)\nI0607 21:29:29.317956 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105119 (* 1 = 0.105119 loss)\nI0607 21:29:29.317961 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00438345 (* 1 = 0.00438345 loss)\nI0607 21:29:29.317968 13573 sgd_solver.cpp:106] Iteration 3060, lr = 0.001\nI0607 21:29:42.102902 13573 solver.cpp:229] Iteration 3080, loss = 1.19362\nI0607 21:29:42.102962 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0288264 (* 1 = 0.0288264 loss)\nI0607 21:29:42.102972 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107116 (* 1 = 0.107116 
loss)\nI0607 21:29:42.102978 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.387125 (* 1 = 0.387125 loss)\nI0607 21:29:42.102984 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0831699 (* 1 = 0.0831699 loss)\nI0607 21:29:42.102989 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.371626 (* 1 = 0.371626 loss)\nI0607 21:29:42.102995 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.08317 (* 1 = 0.08317 loss)\nI0607 21:29:42.103001 13573 sgd_solver.cpp:106] Iteration 3080, lr = 0.001\nI0607 21:29:55.024914 13573 solver.cpp:229] Iteration 3100, loss = 0.638312\nI0607 21:29:55.024977 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0835574 (* 1 = 0.0835574 loss)\nI0607 21:29:55.024986 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.074075 (* 1 = 0.074075 loss)\nI0607 21:29:55.024992 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.114831 (* 1 = 0.114831 loss)\nI0607 21:29:55.024997 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0259384 (* 1 = 0.0259384 loss)\nI0607 21:29:55.025003 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.119715 (* 1 = 0.119715 loss)\nI0607 21:29:55.025008 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0259427 (* 1 = 0.0259427 loss)\nI0607 21:29:55.025015 13573 sgd_solver.cpp:106] Iteration 3100, lr = 0.001\nI0607 21:30:07.913708 13573 solver.cpp:229] Iteration 3120, loss = 0.673077\nI0607 21:30:07.913766 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00609733 (* 1 = 0.00609733 loss)\nI0607 21:30:07.913776 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0617827 (* 1 = 0.0617827 loss)\nI0607 21:30:07.913782 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.26385 (* 1 = 0.26385 loss)\nI0607 21:30:07.913787 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0546215 (* 1 = 0.0546215 loss)\nI0607 21:30:07.913794 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.263805 (* 1 = 0.263805 loss)\nI0607 21:30:07.913799 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.054637 (* 1 = 0.054637 loss)\nI0607 21:30:07.913805 13573 sgd_solver.cpp:106] Iteration 3120, lr = 0.001\nI0607 21:30:20.641587 13573 solver.cpp:229] Iteration 3140, loss = 0.963317\nI0607 21:30:20.641649 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.497866 (* 1 = 0.497866 loss)\nI0607 21:30:20.641659 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.19364 (* 1 = 0.19364 loss)\nI0607 21:30:20.641664 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.214803 (* 1 = 0.214803 loss)\nI0607 21:30:20.641670 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0537159 (* 1 = 0.0537159 loss)\nI0607 21:30:20.641676 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.21603 (* 1 = 0.21603 loss)\nI0607 21:30:20.641682 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0537038 (* 1 = 0.0537038 loss)\nI0607 21:30:20.641690 13573 sgd_solver.cpp:106] Iteration 3140, lr = 0.001\nI0607 21:30:33.424341 13573 solver.cpp:229] Iteration 3160, loss = 1.46722\nI0607 21:30:33.424403 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0724058 (* 1 = 0.0724058 loss)\nI0607 21:30:33.424412 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.164348 (* 1 = 0.164348 loss)\nI0607 21:30:33.424417 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.374646 (* 1 = 0.374646 loss)\nI0607 21:30:33.424423 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0943042 (* 1 = 0.0943042 loss)\nI0607 21:30:33.424429 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.372155 (* 1 = 0.372155 loss)\nI0607 21:30:33.424434 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0943145 (* 1 = 0.0943145 loss)\nI0607 21:30:33.424441 13573 sgd_solver.cpp:106] Iteration 3160, lr = 0.001\nI0607 
21:30:46.363310 13573 solver.cpp:229] Iteration 3180, loss = 1.19371\nI0607 21:30:46.363384 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.317258 (* 1 = 0.317258 loss)\nI0607 21:30:46.363394 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.462218 (* 1 = 0.462218 loss)\nI0607 21:30:46.363399 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.153935 (* 1 = 0.153935 loss)\nI0607 21:30:46.363404 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0189775 (* 1 = 0.0189775 loss)\nI0607 21:30:46.363410 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150395 (* 1 = 0.150395 loss)\nI0607 21:30:46.363415 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.018976 (* 1 = 0.018976 loss)\nI0607 21:30:46.363422 13573 sgd_solver.cpp:106] Iteration 3180, lr = 0.001\nspeed: 0.644s / iter\nI0607 21:30:59.218377 13573 solver.cpp:229] Iteration 3200, loss = 0.685202\nI0607 21:30:59.218438 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.327392 (* 1 = 0.327392 loss)\nI0607 21:30:59.218447 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.38838 (* 1 = 0.38838 loss)\nI0607 21:30:59.218453 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.143497 (* 1 = 0.143497 loss)\nI0607 21:30:59.218459 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0189343 (* 1 = 0.0189343 loss)\nI0607 21:30:59.218466 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.144207 (* 1 = 0.144207 loss)\nI0607 21:30:59.218471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0189209 (* 1 = 0.0189209 loss)\nI0607 21:30:59.218480 13573 sgd_solver.cpp:106] Iteration 3200, lr = 0.001\nI0607 21:31:11.989855 13573 solver.cpp:229] Iteration 3220, loss = 1.00138\nI0607 21:31:11.989920 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.161825 (* 1 = 0.161825 loss)\nI0607 21:31:11.989930 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.216642 (* 
1 = 0.216642 loss)\nI0607 21:31:11.989936 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.15181 (* 1 = 0.15181 loss)\nI0607 21:31:11.989943 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0617797 (* 1 = 0.0617797 loss)\nI0607 21:31:11.989948 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.146106 (* 1 = 0.146106 loss)\nI0607 21:31:11.989953 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0617832 (* 1 = 0.0617832 loss)\nI0607 21:31:11.989960 13573 sgd_solver.cpp:106] Iteration 3220, lr = 0.001\nI0607 21:31:25.112742 13573 solver.cpp:229] Iteration 3240, loss = 0.64623\nI0607 21:31:25.112804 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00412675 (* 1 = 0.00412675 loss)\nI0607 21:31:25.112813 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0416062 (* 1 = 0.0416062 loss)\nI0607 21:31:25.112820 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.377734 (* 1 = 0.377734 loss)\nI0607 21:31:25.112825 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0427403 (* 1 = 0.0427403 loss)\nI0607 21:31:25.112831 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.359496 (* 1 = 0.359496 loss)\nI0607 21:31:25.112838 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0427328 (* 1 = 0.0427328 loss)\nI0607 21:31:25.112843 13573 sgd_solver.cpp:106] Iteration 3240, lr = 0.001\nI0607 21:31:37.989673 13573 solver.cpp:229] Iteration 3260, loss = 1.59334\nI0607 21:31:37.989740 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.465532 (* 1 = 0.465532 loss)\nI0607 21:31:37.989749 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.667344 (* 1 = 0.667344 loss)\nI0607 21:31:37.989755 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.270253 (* 1 = 0.270253 loss)\nI0607 21:31:37.989761 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0589569 (* 1 = 0.0589569 loss)\nI0607 21:31:37.989768 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.268588 (* 1 = 0.268588 loss)\nI0607 21:31:37.989773 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0589454 (* 1 = 0.0589454 loss)\nI0607 21:31:37.989780 13573 sgd_solver.cpp:106] Iteration 3260, lr = 0.001\nI0607 21:31:51.073118 13573 solver.cpp:229] Iteration 3280, loss = 0.880625\nI0607 21:31:51.073179 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.299264 (* 1 = 0.299264 loss)\nI0607 21:31:51.073189 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.359328 (* 1 = 0.359328 loss)\nI0607 21:31:51.073194 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.141249 (* 1 = 0.141249 loss)\nI0607 21:31:51.073201 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0170666 (* 1 = 0.0170666 loss)\nI0607 21:31:51.073206 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.133211 (* 1 = 0.133211 loss)\nI0607 21:31:51.073212 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0170668 (* 1 = 0.0170668 loss)\nI0607 21:31:51.073220 13573 sgd_solver.cpp:106] Iteration 3280, lr = 0.001\nI0607 21:32:04.117172 13573 solver.cpp:229] Iteration 3300, loss = 1.34161\nI0607 21:32:04.117229 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.367881 (* 1 = 0.367881 loss)\nI0607 21:32:04.117238 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.305153 (* 1 = 0.305153 loss)\nI0607 21:32:04.117244 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.246414 (* 1 = 0.246414 loss)\nI0607 21:32:04.117250 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.122125 (* 1 = 0.122125 loss)\nI0607 21:32:04.117255 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.241093 (* 1 = 0.241093 loss)\nI0607 21:32:04.117261 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.122129 (* 1 = 0.122129 loss)\nI0607 21:32:04.117269 13573 sgd_solver.cpp:106] Iteration 3300, lr = 0.001\nI0607 
21:32:16.851084 13573 solver.cpp:229] Iteration 3320, loss = 1.49816\nI0607 21:32:16.851156 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.572864 (* 1 = 0.572864 loss)\nI0607 21:32:16.851166 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.323324 (* 1 = 0.323324 loss)\nI0607 21:32:16.851172 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195078 (* 1 = 0.195078 loss)\nI0607 21:32:16.851178 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0607994 (* 1 = 0.0607994 loss)\nI0607 21:32:16.851183 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.196038 (* 1 = 0.196038 loss)\nI0607 21:32:16.851189 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0607974 (* 1 = 0.0607974 loss)\nI0607 21:32:16.851196 13573 sgd_solver.cpp:106] Iteration 3320, lr = 0.001\nI0607 21:32:29.574632 13573 solver.cpp:229] Iteration 3340, loss = 1.25427\nI0607 21:32:29.574692 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.351523 (* 1 = 0.351523 loss)\nI0607 21:32:29.574702 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.307342 (* 1 = 0.307342 loss)\nI0607 21:32:29.574708 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.177985 (* 1 = 0.177985 loss)\nI0607 21:32:29.574714 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0208506 (* 1 = 0.0208506 loss)\nI0607 21:32:29.574720 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175697 (* 1 = 0.175697 loss)\nI0607 21:32:29.574726 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0208506 (* 1 = 0.0208506 loss)\nI0607 21:32:29.574733 13573 sgd_solver.cpp:106] Iteration 3340, lr = 0.001\nI0607 21:32:42.486881 13573 solver.cpp:229] Iteration 3360, loss = 0.923309\nI0607 21:32:42.486948 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116589 (* 1 = 0.116589 loss)\nI0607 21:32:42.486956 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.100589 (* 1 = 0.100589 
loss)\nI0607 21:32:42.486963 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.240384 (* 1 = 0.240384 loss)\nI0607 21:32:42.486968 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0223894 (* 1 = 0.0223894 loss)\nI0607 21:32:42.486974 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228025 (* 1 = 0.228025 loss)\nI0607 21:32:42.486979 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0223979 (* 1 = 0.0223979 loss)\nI0607 21:32:42.486986 13573 sgd_solver.cpp:106] Iteration 3360, lr = 0.001\nI0607 21:32:55.448704 13573 solver.cpp:229] Iteration 3380, loss = 1.79977\nI0607 21:32:55.448773 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.106944 (* 1 = 0.106944 loss)\nI0607 21:32:55.448783 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0936611 (* 1 = 0.0936611 loss)\nI0607 21:32:55.448789 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171606 (* 1 = 0.171606 loss)\nI0607 21:32:55.448796 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105166 (* 1 = 0.0105166 loss)\nI0607 21:32:55.448801 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.196356 (* 1 = 0.196356 loss)\nI0607 21:32:55.448807 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105168 (* 1 = 0.0105168 loss)\nI0607 21:32:55.448813 13573 sgd_solver.cpp:106] Iteration 3380, lr = 0.001\nspeed: 0.644s / iter\nI0607 21:33:08.382803 13573 solver.cpp:229] Iteration 3400, loss = 0.820677\nI0607 21:33:08.382877 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.440179 (* 1 = 0.440179 loss)\nI0607 21:33:08.382886 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.19339 (* 1 = 0.19339 loss)\nI0607 21:33:08.382894 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.141038 (* 1 = 0.141038 loss)\nI0607 21:33:08.382899 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0102272 (* 1 = 0.0102272 loss)\nI0607 21:33:08.382905 
13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.137817 (* 1 = 0.137817 loss)\nI0607 21:33:08.382910 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0102218 (* 1 = 0.0102218 loss)\nI0607 21:33:08.382916 13573 sgd_solver.cpp:106] Iteration 3400, lr = 0.001\nI0607 21:33:21.119097 13573 solver.cpp:229] Iteration 3420, loss = 1.25995\nI0607 21:33:21.119158 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.237463 (* 1 = 0.237463 loss)\nI0607 21:33:21.119166 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.172095 (* 1 = 0.172095 loss)\nI0607 21:33:21.119174 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.264116 (* 1 = 0.264116 loss)\nI0607 21:33:21.119179 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0672485 (* 1 = 0.0672485 loss)\nI0607 21:33:21.119184 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.261657 (* 1 = 0.261657 loss)\nI0607 21:33:21.119189 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0672319 (* 1 = 0.0672319 loss)\nI0607 21:33:21.119196 13573 sgd_solver.cpp:106] Iteration 3420, lr = 0.001\nI0607 21:33:34.026733 13573 solver.cpp:229] Iteration 3440, loss = 1.74615\nI0607 21:33:34.026808 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.329662 (* 1 = 0.329662 loss)\nI0607 21:33:34.026818 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.410756 (* 1 = 0.410756 loss)\nI0607 21:33:34.026824 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.401139 (* 1 = 0.401139 loss)\nI0607 21:33:34.026830 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.075849 (* 1 = 0.075849 loss)\nI0607 21:33:34.026836 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.403819 (* 1 = 0.403819 loss)\nI0607 21:33:34.026841 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.075853 (* 1 = 0.075853 loss)\nI0607 21:33:34.026849 13573 sgd_solver.cpp:106] Iteration 3440, lr = 0.001\nI0607 
21:33:46.893610 13573 solver.cpp:229] Iteration 3460, loss = 1.63004\nI0607 21:33:46.893671 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.54493 (* 1 = 0.54493 loss)\nI0607 21:33:46.893681 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.539691 (* 1 = 0.539691 loss)\nI0607 21:33:46.893687 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302453 (* 1 = 0.302453 loss)\nI0607 21:33:46.893693 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0778418 (* 1 = 0.0778418 loss)\nI0607 21:33:46.893699 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.300625 (* 1 = 0.300625 loss)\nI0607 21:33:46.893705 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0778512 (* 1 = 0.0778512 loss)\nI0607 21:33:46.893712 13573 sgd_solver.cpp:106] Iteration 3460, lr = 0.001\nI0607 21:33:59.635138 13573 solver.cpp:229] Iteration 3480, loss = 2.13677\nI0607 21:33:59.635211 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.018284 (* 1 = 0.018284 loss)\nI0607 21:33:59.635221 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.254869 (* 1 = 0.254869 loss)\nI0607 21:33:59.635227 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.843788 (* 1 = 0.843788 loss)\nI0607 21:33:59.635232 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.169557 (* 1 = 0.169557 loss)\nI0607 21:33:59.635238 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.824015 (* 1 = 0.824015 loss)\nI0607 21:33:59.635243 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.16966 (* 1 = 0.16966 loss)\nI0607 21:33:59.635251 13573 sgd_solver.cpp:106] Iteration 3480, lr = 0.001\nI0607 21:34:12.304903 13573 solver.cpp:229] Iteration 3500, loss = 1.00069\nI0607 21:34:12.304965 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.255975 (* 1 = 0.255975 loss)\nI0607 21:34:12.304975 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.280692 (* 1 = 0.280692 loss)\nI0607 
21:34:12.304980 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252862 (* 1 = 0.252862 loss)\nI0607 21:34:12.304986 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0728493 (* 1 = 0.0728493 loss)\nI0607 21:34:12.304991 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.248457 (* 1 = 0.248457 loss)\nI0607 21:34:12.304997 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0728209 (* 1 = 0.0728209 loss)\nI0607 21:34:12.305004 13573 sgd_solver.cpp:106] Iteration 3500, lr = 0.001\nI0607 21:34:25.039091 13573 solver.cpp:229] Iteration 3520, loss = 1.36736\nI0607 21:34:25.039155 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115915 (* 1 = 0.115915 loss)\nI0607 21:34:25.039163 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0982847 (* 1 = 0.0982847 loss)\nI0607 21:34:25.039170 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137115 (* 1 = 0.137115 loss)\nI0607 21:34:25.039175 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0418985 (* 1 = 0.0418985 loss)\nI0607 21:34:25.039181 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.144684 (* 1 = 0.144684 loss)\nI0607 21:34:25.039187 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0419068 (* 1 = 0.0419068 loss)\nI0607 21:34:25.039223 13573 sgd_solver.cpp:106] Iteration 3520, lr = 0.001\nI0607 21:34:37.814910 13573 solver.cpp:229] Iteration 3540, loss = 0.858699\nI0607 21:34:37.814976 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.272764 (* 1 = 0.272764 loss)\nI0607 21:34:37.814985 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.234025 (* 1 = 0.234025 loss)\nI0607 21:34:37.814991 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.204591 (* 1 = 0.204591 loss)\nI0607 21:34:37.814998 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0393243 (* 1 = 0.0393243 loss)\nI0607 21:34:37.815003 13573 solver.cpp:245]     Train 
net output #4: rpn_cls_loss = 0.198716 (* 1 = 0.198716 loss)\nI0607 21:34:37.815009 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0393217 (* 1 = 0.0393217 loss)\nI0607 21:34:37.815016 13573 sgd_solver.cpp:106] Iteration 3540, lr = 0.001\nI0607 21:34:50.577875 13573 solver.cpp:229] Iteration 3560, loss = 0.56873\nI0607 21:34:50.577937 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00611682 (* 1 = 0.00611682 loss)\nI0607 21:34:50.577947 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0941628 (* 1 = 0.0941628 loss)\nI0607 21:34:50.577953 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.28503 (* 1 = 0.28503 loss)\nI0607 21:34:50.577960 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00515332 (* 1 = 0.00515332 loss)\nI0607 21:34:50.577965 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.307768 (* 1 = 0.307768 loss)\nI0607 21:34:50.577970 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00515461 (* 1 = 0.00515461 loss)\nI0607 21:34:50.577977 13573 sgd_solver.cpp:106] Iteration 3560, lr = 0.001\nI0607 21:35:03.458564 13573 solver.cpp:229] Iteration 3580, loss = 1.4137\nI0607 21:35:03.458628 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.295054 (* 1 = 0.295054 loss)\nI0607 21:35:03.458637 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.269262 (* 1 = 0.269262 loss)\nI0607 21:35:03.458642 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279759 (* 1 = 0.279759 loss)\nI0607 21:35:03.458648 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0769122 (* 1 = 0.0769122 loss)\nI0607 21:35:03.458654 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.280279 (* 1 = 0.280279 loss)\nI0607 21:35:03.458659 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0768926 (* 1 = 0.0768926 loss)\nI0607 21:35:03.458667 13573 sgd_solver.cpp:106] Iteration 3580, lr = 0.001\nspeed: 0.644s / iter\nI0607 
```
I0607 21:35:16.349864 13573 solver.cpp:229] Iteration 3600, loss = 1.5519
I0607 21:35:16.349925 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.290005 (* 1 = 0.290005 loss)
I0607 21:35:16.349933 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190054 (* 1 = 0.190054 loss)
I0607 21:35:16.349939 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.282386 (* 1 = 0.282386 loss)
I0607 21:35:16.349946 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0203656 (* 1 = 0.0203656 loss)
I0607 21:35:16.349951 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.27919 (* 1 = 0.27919 loss)
I0607 21:35:16.349956 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0203711 (* 1 = 0.0203711 loss)
I0607 21:35:16.349963 13573 sgd_solver.cpp:106] Iteration 3600, lr = 0.001
...
speed: 0.644s / iter
...
I0607 21:45:36.284770 13573 solver.cpp:229] Iteration 4560, loss = 1.60544
...
```
solver.cpp:229] Iteration 4580, loss = 1.03721\nI0607 21:45:49.332242 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.459046 (* 1 = 0.459046 loss)\nI0607 21:45:49.332250 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.483899 (* 1 = 0.483899 loss)\nI0607 21:45:49.332257 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.158027 (* 1 = 0.158027 loss)\nI0607 21:45:49.332262 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0907545 (* 1 = 0.0907545 loss)\nI0607 21:45:49.332267 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.157677 (* 1 = 0.157677 loss)\nI0607 21:45:49.332273 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0907584 (* 1 = 0.0907584 loss)\nI0607 21:45:49.332280 13573 sgd_solver.cpp:106] Iteration 4580, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:46:02.348493 13573 solver.cpp:229] Iteration 4600, loss = 1.45526\nI0607 21:46:02.348567 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.473897 (* 1 = 0.473897 loss)\nI0607 21:46:02.348577 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.478117 (* 1 = 0.478117 loss)\nI0607 21:46:02.348582 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195276 (* 1 = 0.195276 loss)\nI0607 21:46:02.348588 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00994026 (* 1 = 0.00994026 loss)\nI0607 21:46:02.348594 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.194104 (* 1 = 0.194104 loss)\nI0607 21:46:02.348600 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00993603 (* 1 = 0.00993603 loss)\nI0607 21:46:02.348608 13573 sgd_solver.cpp:106] Iteration 4600, lr = 0.001\nI0607 21:46:15.288390 13573 solver.cpp:229] Iteration 4620, loss = 1.44831\nI0607 21:46:15.288452 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0051793 (* 1 = 0.0051793 loss)\nI0607 21:46:15.288462 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.144008 (* 1 = 0.144008 
loss)\nI0607 21:46:15.288470 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.395109 (* 1 = 0.395109 loss)\nI0607 21:46:15.288475 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.103951 (* 1 = 0.103951 loss)\nI0607 21:46:15.288481 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.405215 (* 1 = 0.405215 loss)\nI0607 21:46:15.288487 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.103927 (* 1 = 0.103927 loss)\nI0607 21:46:15.288494 13573 sgd_solver.cpp:106] Iteration 4620, lr = 0.001\nI0607 21:46:28.270663 13573 solver.cpp:229] Iteration 4640, loss = 1.27031\nI0607 21:46:28.270730 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.207287 (* 1 = 0.207287 loss)\nI0607 21:46:28.270738 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.209375 (* 1 = 0.209375 loss)\nI0607 21:46:28.270745 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0978046 (* 1 = 0.0978046 loss)\nI0607 21:46:28.270750 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0171513 (* 1 = 0.0171513 loss)\nI0607 21:46:28.270756 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0964833 (* 1 = 0.0964833 loss)\nI0607 21:46:28.270761 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0171391 (* 1 = 0.0171391 loss)\nI0607 21:46:28.270768 13573 sgd_solver.cpp:106] Iteration 4640, lr = 0.001\nI0607 21:46:41.214267 13573 solver.cpp:229] Iteration 4660, loss = 1.09988\nI0607 21:46:41.214329 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0100705 (* 1 = 0.0100705 loss)\nI0607 21:46:41.214337 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0752615 (* 1 = 0.0752615 loss)\nI0607 21:46:41.214344 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.411332 (* 1 = 0.411332 loss)\nI0607 21:46:41.214349 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0333734 (* 1 = 0.0333734 loss)\nI0607 21:46:41.214355 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.376755 (* 1 = 0.376755 loss)\nI0607 21:46:41.214360 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0333778 (* 1 = 0.0333778 loss)\nI0607 21:46:41.214367 13573 sgd_solver.cpp:106] Iteration 4660, lr = 0.001\nI0607 21:46:54.033890 13573 solver.cpp:229] Iteration 4680, loss = 1.14753\nI0607 21:46:54.033960 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.298386 (* 1 = 0.298386 loss)\nI0607 21:46:54.033969 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.609666 (* 1 = 0.609666 loss)\nI0607 21:46:54.033975 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.240738 (* 1 = 0.240738 loss)\nI0607 21:46:54.033982 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.152604 (* 1 = 0.152604 loss)\nI0607 21:46:54.033987 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.238978 (* 1 = 0.238978 loss)\nI0607 21:46:54.033993 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.152602 (* 1 = 0.152602 loss)\nI0607 21:46:54.034000 13573 sgd_solver.cpp:106] Iteration 4680, lr = 0.001\nI0607 21:47:07.050132 13573 solver.cpp:229] Iteration 4700, loss = 1.16976\nI0607 21:47:07.050197 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.169554 (* 1 = 0.169554 loss)\nI0607 21:47:07.050207 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.209676 (* 1 = 0.209676 loss)\nI0607 21:47:07.050212 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.127065 (* 1 = 0.127065 loss)\nI0607 21:47:07.050218 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321097 (* 1 = 0.0321097 loss)\nI0607 21:47:07.050225 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.131693 (* 1 = 0.131693 loss)\nI0607 21:47:07.050230 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0320771 (* 1 = 0.0320771 loss)\nI0607 21:47:07.050236 13573 sgd_solver.cpp:106] Iteration 4700, lr = 0.001\nI0607 
21:47:19.815742 13573 solver.cpp:229] Iteration 4720, loss = 0.946762\nI0607 21:47:19.815804 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00583107 (* 1 = 0.00583107 loss)\nI0607 21:47:19.815814 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.037808 (* 1 = 0.037808 loss)\nI0607 21:47:19.815819 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.452897 (* 1 = 0.452897 loss)\nI0607 21:47:19.815825 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0516792 (* 1 = 0.0516792 loss)\nI0607 21:47:19.815830 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.504715 (* 1 = 0.504715 loss)\nI0607 21:47:19.815836 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0516705 (* 1 = 0.0516705 loss)\nI0607 21:47:19.815842 13573 sgd_solver.cpp:106] Iteration 4720, lr = 0.001\nI0607 21:47:32.650138 13573 solver.cpp:229] Iteration 4740, loss = 1.37254\nI0607 21:47:32.650208 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.448206 (* 1 = 0.448206 loss)\nI0607 21:47:32.650218 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.393173 (* 1 = 0.393173 loss)\nI0607 21:47:32.650224 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.333542 (* 1 = 0.333542 loss)\nI0607 21:47:32.650229 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.054971 (* 1 = 0.054971 loss)\nI0607 21:47:32.650235 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.331633 (* 1 = 0.331633 loss)\nI0607 21:47:32.650241 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0549532 (* 1 = 0.0549532 loss)\nI0607 21:47:32.650248 13573 sgd_solver.cpp:106] Iteration 4740, lr = 0.001\nI0607 21:47:45.527559 13573 solver.cpp:229] Iteration 4760, loss = 1.39044\nI0607 21:47:45.527617 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.526168 (* 1 = 0.526168 loss)\nI0607 21:47:45.527626 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.411896 (* 1 = 0.411896 
loss)\nI0607 21:47:45.527633 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174941 (* 1 = 0.174941 loss)\nI0607 21:47:45.527639 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.114302 (* 1 = 0.114302 loss)\nI0607 21:47:45.527645 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176916 (* 1 = 0.176916 loss)\nI0607 21:47:45.527652 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.1143 (* 1 = 0.1143 loss)\nI0607 21:47:45.527658 13573 sgd_solver.cpp:106] Iteration 4760, lr = 0.001\nI0607 21:47:58.660051 13573 solver.cpp:229] Iteration 4780, loss = 1.04524\nI0607 21:47:58.660116 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.235386 (* 1 = 0.235386 loss)\nI0607 21:47:58.660125 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119954 (* 1 = 0.119954 loss)\nI0607 21:47:58.660131 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1103 (* 1 = 0.1103 loss)\nI0607 21:47:58.660137 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0033175 (* 1 = 0.0033175 loss)\nI0607 21:47:58.660143 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111016 (* 1 = 0.111016 loss)\nI0607 21:47:58.660148 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00331951 (* 1 = 0.00331951 loss)\nI0607 21:47:58.660156 13573 sgd_solver.cpp:106] Iteration 4780, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:48:11.548660 13573 solver.cpp:229] Iteration 4800, loss = 1.39799\nI0607 21:48:11.548720 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.213963 (* 1 = 0.213963 loss)\nI0607 21:48:11.548737 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.269415 (* 1 = 0.269415 loss)\nI0607 21:48:11.548743 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121626 (* 1 = 0.121626 loss)\nI0607 21:48:11.548749 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0468236 (* 1 = 0.0468236 loss)\nI0607 21:48:11.548755 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.124106 (* 1 = 0.124106 loss)\nI0607 21:48:11.548760 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0468589 (* 1 = 0.0468589 loss)\nI0607 21:48:11.548768 13573 sgd_solver.cpp:106] Iteration 4800, lr = 0.001\nI0607 21:48:24.324602 13573 solver.cpp:229] Iteration 4820, loss = 0.932727\nI0607 21:48:24.324676 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0124317 (* 1 = 0.0124317 loss)\nI0607 21:48:24.324686 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0449592 (* 1 = 0.0449592 loss)\nI0607 21:48:24.324879 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191168 (* 1 = 0.191168 loss)\nI0607 21:48:24.324900 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0168808 (* 1 = 0.0168808 loss)\nI0607 21:48:24.324981 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186899 (* 1 = 0.186899 loss)\nI0607 21:48:24.324997 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.016905 (* 1 = 0.016905 loss)\nI0607 21:48:24.325067 13573 sgd_solver.cpp:106] Iteration 4820, lr = 0.001\nI0607 21:48:37.265746 13573 solver.cpp:229] Iteration 4840, loss = 0.666949\nI0607 21:48:37.265806 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.244551 (* 1 = 0.244551 loss)\nI0607 21:48:37.265816 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.169038 (* 1 = 0.169038 loss)\nI0607 21:48:37.265822 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128201 (* 1 = 0.128201 loss)\nI0607 21:48:37.265830 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00708771 (* 1 = 0.00708771 loss)\nI0607 21:48:37.265836 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128223 (* 1 = 0.128223 loss)\nI0607 21:48:37.265841 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00709084 (* 1 = 0.00709084 loss)\nI0607 21:48:37.265848 13573 sgd_solver.cpp:106] Iteration 4840, lr = 
0.001\nI0607 21:48:50.104202 13573 solver.cpp:229] Iteration 4860, loss = 0.976514\nI0607 21:48:50.104269 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.318142 (* 1 = 0.318142 loss)\nI0607 21:48:50.104277 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.184246 (* 1 = 0.184246 loss)\nI0607 21:48:50.104285 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.123579 (* 1 = 0.123579 loss)\nI0607 21:48:50.104290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0257 (* 1 = 0.0257 loss)\nI0607 21:48:50.104295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.125128 (* 1 = 0.125128 loss)\nI0607 21:48:50.104301 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0257199 (* 1 = 0.0257199 loss)\nI0607 21:48:50.104308 13573 sgd_solver.cpp:106] Iteration 4860, lr = 0.001\nI0607 21:49:03.020666 13573 solver.cpp:229] Iteration 4880, loss = 0.931338\nI0607 21:49:03.020727 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0973091 (* 1 = 0.0973091 loss)\nI0607 21:49:03.020736 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.157683 (* 1 = 0.157683 loss)\nI0607 21:49:03.020742 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256351 (* 1 = 0.256351 loss)\nI0607 21:49:03.020747 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0179397 (* 1 = 0.0179397 loss)\nI0607 21:49:03.020753 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.253914 (* 1 = 0.253914 loss)\nI0607 21:49:03.020758 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0179318 (* 1 = 0.0179318 loss)\nI0607 21:49:03.020766 13573 sgd_solver.cpp:106] Iteration 4880, lr = 0.001\nI0607 21:49:16.005578 13573 solver.cpp:229] Iteration 4900, loss = 1.03892\nI0607 21:49:16.005655 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.269237 (* 1 = 0.269237 loss)\nI0607 21:49:16.005664 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.293551 (* 1 = 
0.293551 loss)\nI0607 21:49:16.005671 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119225 (* 1 = 0.119225 loss)\nI0607 21:49:16.005676 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0438408 (* 1 = 0.0438408 loss)\nI0607 21:49:16.005682 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114514 (* 1 = 0.114514 loss)\nI0607 21:49:16.005688 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.043861 (* 1 = 0.043861 loss)\nI0607 21:49:16.005695 13573 sgd_solver.cpp:106] Iteration 4900, lr = 0.001\nI0607 21:49:28.764789 13573 solver.cpp:229] Iteration 4920, loss = 1.89443\nI0607 21:49:28.764853 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.536601 (* 1 = 0.536601 loss)\nI0607 21:49:28.764863 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.667643 (* 1 = 0.667643 loss)\nI0607 21:49:28.764869 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.493777 (* 1 = 0.493777 loss)\nI0607 21:49:28.764876 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0672915 (* 1 = 0.0672915 loss)\nI0607 21:49:28.764883 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.482442 (* 1 = 0.482442 loss)\nI0607 21:49:28.764889 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0672857 (* 1 = 0.0672857 loss)\nI0607 21:49:28.764895 13573 sgd_solver.cpp:106] Iteration 4920, lr = 0.001\nI0607 21:49:41.770077 13573 solver.cpp:229] Iteration 4940, loss = 0.832097\nI0607 21:49:41.770148 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.220755 (* 1 = 0.220755 loss)\nI0607 21:49:41.770159 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124221 (* 1 = 0.124221 loss)\nI0607 21:49:41.770164 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.202302 (* 1 = 0.202302 loss)\nI0607 21:49:41.770170 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00956259 (* 1 = 0.00956259 loss)\nI0607 21:49:41.770176 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.193909 (* 1 = 0.193909 loss)\nI0607 21:49:41.770182 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00956047 (* 1 = 0.00956047 loss)\nI0607 21:49:41.770190 13573 sgd_solver.cpp:106] Iteration 4940, lr = 0.001\nI0607 21:49:54.848644 13573 solver.cpp:229] Iteration 4960, loss = 1.17015\nI0607 21:49:54.848708 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.408345 (* 1 = 0.408345 loss)\nI0607 21:49:54.848718 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.339711 (* 1 = 0.339711 loss)\nI0607 21:49:54.848726 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10749 (* 1 = 0.10749 loss)\nI0607 21:49:54.848731 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0272024 (* 1 = 0.0272024 loss)\nI0607 21:49:54.848737 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108776 (* 1 = 0.108776 loss)\nI0607 21:49:54.848743 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0271936 (* 1 = 0.0271936 loss)\nI0607 21:49:54.848752 13573 sgd_solver.cpp:106] Iteration 4960, lr = 0.001\nI0607 21:50:07.548247 13573 solver.cpp:229] Iteration 4980, loss = 1.50679\nI0607 21:50:07.548318 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.436152 (* 1 = 0.436152 loss)\nI0607 21:50:07.548328 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.683172 (* 1 = 0.683172 loss)\nI0607 21:50:07.548334 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.496874 (* 1 = 0.496874 loss)\nI0607 21:50:07.548341 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.115881 (* 1 = 0.115881 loss)\nI0607 21:50:07.548346 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.508788 (* 1 = 0.508788 loss)\nI0607 21:50:07.548352 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.115854 (* 1 = 0.115854 loss)\nI0607 21:50:07.548358 13573 sgd_solver.cpp:106] Iteration 4980, lr = 0.001\nspeed: 0.645s 
/ iter\nI0607 21:50:20.474943 13573 solver.cpp:229] Iteration 5000, loss = 0.92343\nI0607 21:50:20.475008 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.468077 (* 1 = 0.468077 loss)\nI0607 21:50:20.475016 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.228733 (* 1 = 0.228733 loss)\nI0607 21:50:20.475023 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112214 (* 1 = 0.112214 loss)\nI0607 21:50:20.475029 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.013585 (* 1 = 0.013585 loss)\nI0607 21:50:20.475035 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.116733 (* 1 = 0.116733 loss)\nI0607 21:50:20.475040 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0135971 (* 1 = 0.0135971 loss)\nI0607 21:50:20.475047 13573 sgd_solver.cpp:106] Iteration 5000, lr = 0.001\nI0607 21:50:33.286010 13573 solver.cpp:229] Iteration 5020, loss = 0.802325\nI0607 21:50:33.286070 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.247059 (* 1 = 0.247059 loss)\nI0607 21:50:33.286078 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.166904 (* 1 = 0.166904 loss)\nI0607 21:50:33.286084 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1327 (* 1 = 0.1327 loss)\nI0607 21:50:33.286090 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0683551 (* 1 = 0.0683551 loss)\nI0607 21:50:33.286097 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.133382 (* 1 = 0.133382 loss)\nI0607 21:50:33.286101 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.068376 (* 1 = 0.068376 loss)\nI0607 21:50:33.286108 13573 sgd_solver.cpp:106] Iteration 5020, lr = 0.001\nI0607 21:50:45.966295 13573 solver.cpp:229] Iteration 5040, loss = 0.654119\nI0607 21:50:45.966369 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0853682 (* 1 = 0.0853682 loss)\nI0607 21:50:45.966378 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.148069 (* 1 = 
0.148069 loss)\nI0607 21:50:45.966384 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.310041 (* 1 = 0.310041 loss)\nI0607 21:50:45.966390 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0257041 (* 1 = 0.0257041 loss)\nI0607 21:50:45.966395 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.303666 (* 1 = 0.303666 loss)\nI0607 21:50:45.966401 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0256768 (* 1 = 0.0256768 loss)\nI0607 21:50:45.966408 13573 sgd_solver.cpp:106] Iteration 5040, lr = 0.001\nI0607 21:50:59.076670 13573 solver.cpp:229] Iteration 5060, loss = 0.760569\nI0607 21:50:59.076740 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0904487 (* 1 = 0.0904487 loss)\nI0607 21:50:59.076750 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.243283 (* 1 = 0.243283 loss)\nI0607 21:50:59.076756 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174356 (* 1 = 0.174356 loss)\nI0607 21:50:59.076761 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0418232 (* 1 = 0.0418232 loss)\nI0607 21:50:59.076767 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.173442 (* 1 = 0.173442 loss)\nI0607 21:50:59.076772 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0418214 (* 1 = 0.0418214 loss)\nI0607 21:50:59.076779 13573 sgd_solver.cpp:106] Iteration 5060, lr = 0.001\nI0607 21:51:11.909111 13573 solver.cpp:229] Iteration 5080, loss = 0.913851\nI0607 21:51:11.909178 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.196472 (* 1 = 0.196472 loss)\nI0607 21:51:11.909188 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0738825 (* 1 = 0.0738825 loss)\nI0607 21:51:11.909194 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.204941 (* 1 = 0.204941 loss)\nI0607 21:51:11.909199 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0612998 (* 1 = 0.0612998 loss)\nI0607 21:51:11.909205 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.206716 (* 1 = 0.206716 loss)\nI0607 21:51:11.909211 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0612879 (* 1 = 0.0612879 loss)\nI0607 21:51:11.909219 13573 sgd_solver.cpp:106] Iteration 5080, lr = 0.001\nI0607 21:51:24.705008 13573 solver.cpp:229] Iteration 5100, loss = 0.604722\nI0607 21:51:24.705070 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.211755 (* 1 = 0.211755 loss)\nI0607 21:51:24.705080 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.280431 (* 1 = 0.280431 loss)\nI0607 21:51:24.705085 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.120551 (* 1 = 0.120551 loss)\nI0607 21:51:24.705091 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00887694 (* 1 = 0.00887694 loss)\nI0607 21:51:24.705096 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.119261 (* 1 = 0.119261 loss)\nI0607 21:51:24.705102 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00887735 (* 1 = 0.00887735 loss)\nI0607 21:51:24.705109 13573 sgd_solver.cpp:106] Iteration 5100, lr = 0.001\nI0607 21:51:37.634234 13573 solver.cpp:229] Iteration 5120, loss = 1.26367\nI0607 21:51:37.634306 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.405044 (* 1 = 0.405044 loss)\nI0607 21:51:37.634316 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.221543 (* 1 = 0.221543 loss)\nI0607 21:51:37.634322 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.54892 (* 1 = 0.54892 loss)\nI0607 21:51:37.634328 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0779871 (* 1 = 0.0779871 loss)\nI0607 21:51:37.634333 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.541323 (* 1 = 0.541323 loss)\nI0607 21:51:37.634340 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0780152 (* 1 = 0.0780152 loss)\nI0607 21:51:37.634346 13573 sgd_solver.cpp:106] Iteration 5120, lr = 0.001\nI0607 
21:51:50.553171 13573 solver.cpp:229] Iteration 5140, loss = 1.02515\nI0607 21:51:50.553232 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.339372 (* 1 = 0.339372 loss)\nI0607 21:51:50.553241 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.266402 (* 1 = 0.266402 loss)\nI0607 21:51:50.553247 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162146 (* 1 = 0.162146 loss)\nI0607 21:51:50.553253 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321394 (* 1 = 0.0321394 loss)\nI0607 21:51:50.553259 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.155958 (* 1 = 0.155958 loss)\nI0607 21:51:50.553266 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0321363 (* 1 = 0.0321363 loss)\nI0607 21:51:50.553272 13573 sgd_solver.cpp:106] Iteration 5140, lr = 0.001\nI0607 21:52:03.782328 13573 solver.cpp:229] Iteration 5160, loss = 0.780194\nI0607 21:52:03.782394 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00817177 (* 1 = 0.00817177 loss)\nI0607 21:52:03.782404 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0204179 (* 1 = 0.0204179 loss)\nI0607 21:52:03.782410 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.156475 (* 1 = 0.156475 loss)\nI0607 21:52:03.782416 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0046406 (* 1 = 0.0046406 loss)\nI0607 21:52:03.782423 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154698 (* 1 = 0.154698 loss)\nI0607 21:52:03.782428 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00464609 (* 1 = 0.00464609 loss)\nI0607 21:52:03.782435 13573 sgd_solver.cpp:106] Iteration 5160, lr = 0.001\nI0607 21:52:16.525605 13573 solver.cpp:229] Iteration 5180, loss = 2.36847\nI0607 21:52:16.525666 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116034 (* 1 = 0.116034 loss)\nI0607 21:52:16.525676 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0826026 (* 1 = 
0.0826026 loss)\nI0607 21:52:16.525683 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.724038 (* 1 = 0.724038 loss)\nI0607 21:52:16.525689 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.264184 (* 1 = 0.264184 loss)\nI0607 21:52:16.525696 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.68897 (* 1 = 0.68897 loss)\nI0607 21:52:16.525702 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.26421 (* 1 = 0.26421 loss)\nI0607 21:52:16.525709 13573 sgd_solver.cpp:106] Iteration 5180, lr = 0.001\nspeed: 0.645s / iter\nI0607 21:52:29.281950 13573 solver.cpp:229] Iteration 5200, loss = 0.532096\nI0607 21:52:29.282022 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.208399 (* 1 = 0.208399 loss)\nI0607 21:52:29.282032 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.193263 (* 1 = 0.193263 loss)\nI0607 21:52:29.282038 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113171 (* 1 = 0.113171 loss)\nI0607 21:52:29.282044 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0040252 (* 1 = 0.0040252 loss)\nI0607 21:52:29.282049 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.112382 (* 1 = 0.112382 loss)\nI0607 21:52:29.282055 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00402453 (* 1 = 0.00402453 loss)\nI0607 21:52:29.282061 13573 sgd_solver.cpp:106] Iteration 5200, lr = 0.001\nI0607 21:52:42.061036 13573 solver.cpp:229] Iteration 5220, loss = 1.37039\nI0607 21:52:42.061095 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.355932 (* 1 = 0.355932 loss)\nI0607 21:52:42.061105 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.239329 (* 1 = 0.239329 loss)\nI0607 21:52:42.061110 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41853 (* 1 = 0.41853 loss)\nI0607 21:52:42.061116 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0988547 (* 1 = 0.0988547 loss)\nI0607 
21:52:42.061122 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.411132 (* 1 = 0.411132 loss)
I0607 21:52:42.061127 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0987944 (* 1 = 0.0987944 loss)
I0607 21:52:42.061134 13573 sgd_solver.cpp:106] Iteration 5220, lr = 0.001
I0607 21:52:54.873529 13573 solver.cpp:229] Iteration 5240, loss = 1.63459
I0607 21:52:54.873594 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.161493 (* 1 = 0.161493 loss)
I0607 21:52:54.873603 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.216896 (* 1 = 0.216896 loss)
I0607 21:52:54.873610 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.363083 (* 1 = 0.363083 loss)
I0607 21:52:54.873615 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.056243 (* 1 = 0.056243 loss)
I0607 21:52:54.873620 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.358952 (* 1 = 0.358952 loss)
I0607 21:52:54.873626 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0562475 (* 1 = 0.0562475 loss)
I0607 21:52:54.873633 13573 sgd_solver.cpp:106] Iteration 5240, lr = 0.001
[... repetitive per-iteration solver output for iterations 5260-6140 elided; total loss fluctuates between roughly 0.47 and 2.11, lr = 0.001 throughout, speed: 0.645s / iter ...]
I0607 22:02:48.651415 13573 solver.cpp:229] Iteration 6160, loss = 0.903509
I0607 22:02:48.651482 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.160905 (* 1 = 0.160905 loss)
I0607 22:02:48.651491 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.135887 (* 1 = 0.135887 loss)\nI0607 22:02:48.651499 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0895179 (* 1 = 0.0895179 loss)\nI0607 22:02:48.651504 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0396924 (* 1 = 0.0396924 loss)\nI0607 22:02:48.651510 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0861992 (* 1 = 0.0861992 loss)\nI0607 22:02:48.651515 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0397031 (* 1 = 0.0397031 loss)\nI0607 22:02:48.651522 13573 sgd_solver.cpp:106] Iteration 6160, lr = 0.001\nI0607 22:03:01.632618 13573 solver.cpp:229] Iteration 6180, loss = 0.920589\nI0607 22:03:01.632690 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.277872 (* 1 = 0.277872 loss)\nI0607 22:03:01.632701 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202038 (* 1 = 0.202038 loss)\nI0607 22:03:01.632707 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.270718 (* 1 = 0.270718 loss)\nI0607 22:03:01.632714 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0359047 (* 1 = 0.0359047 loss)\nI0607 22:03:01.632719 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.271743 (* 1 = 0.271743 loss)\nI0607 22:03:01.632725 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0358882 (* 1 = 0.0358882 loss)\nI0607 22:03:01.632732 13573 sgd_solver.cpp:106] Iteration 6180, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:03:14.610764 13573 solver.cpp:229] Iteration 6200, loss = 1.13235\nI0607 22:03:14.610839 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.541434 (* 1 = 0.541434 loss)\nI0607 22:03:14.610848 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.346452 (* 1 = 0.346452 loss)\nI0607 22:03:14.610854 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0969067 (* 1 = 0.0969067 loss)\nI0607 22:03:14.610860 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0108314 (* 1 = 
0.0108314 loss)\nI0607 22:03:14.610867 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0912395 (* 1 = 0.0912395 loss)\nI0607 22:03:14.610872 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0108261 (* 1 = 0.0108261 loss)\nI0607 22:03:14.610879 13573 sgd_solver.cpp:106] Iteration 6200, lr = 0.001\nI0607 22:03:27.456272 13573 solver.cpp:229] Iteration 6220, loss = 0.794701\nI0607 22:03:27.456356 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00354244 (* 1 = 0.00354244 loss)\nI0607 22:03:27.456365 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00224847 (* 1 = 0.00224847 loss)\nI0607 22:03:27.456372 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.213014 (* 1 = 0.213014 loss)\nI0607 22:03:27.456378 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321195 (* 1 = 0.0321195 loss)\nI0607 22:03:27.456384 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.235499 (* 1 = 0.235499 loss)\nI0607 22:03:27.456390 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.032108 (* 1 = 0.032108 loss)\nI0607 22:03:27.456399 13573 sgd_solver.cpp:106] Iteration 6220, lr = 0.001\nI0607 22:03:40.444627 13573 solver.cpp:229] Iteration 6240, loss = 0.817587\nI0607 22:03:40.444747 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.384751 (* 1 = 0.384751 loss)\nI0607 22:03:40.444756 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202991 (* 1 = 0.202991 loss)\nI0607 22:03:40.444762 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128081 (* 1 = 0.128081 loss)\nI0607 22:03:40.444768 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0265012 (* 1 = 0.0265012 loss)\nI0607 22:03:40.444774 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.127604 (* 1 = 0.127604 loss)\nI0607 22:03:40.444780 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0264984 (* 1 = 0.0264984 loss)\nI0607 22:03:40.444787 13573 
sgd_solver.cpp:106] Iteration 6240, lr = 0.001\nI0607 22:03:53.426334 13573 solver.cpp:229] Iteration 6260, loss = 0.542096\nI0607 22:03:53.426398 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00354871 (* 1 = 0.00354871 loss)\nI0607 22:03:53.426407 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0126229 (* 1 = 0.0126229 loss)\nI0607 22:03:53.426414 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.179983 (* 1 = 0.179983 loss)\nI0607 22:03:53.426420 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00104259 (* 1 = 0.00104259 loss)\nI0607 22:03:53.426426 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.16181 (* 1 = 0.16181 loss)\nI0607 22:03:53.426432 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00104915 (* 1 = 0.00104915 loss)\nI0607 22:03:53.426440 13573 sgd_solver.cpp:106] Iteration 6260, lr = 0.001\nI0607 22:04:06.105101 13573 solver.cpp:229] Iteration 6280, loss = 1.40959\nI0607 22:04:06.105170 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.253013 (* 1 = 0.253013 loss)\nI0607 22:04:06.105180 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.476036 (* 1 = 0.476036 loss)\nI0607 22:04:06.105186 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41229 (* 1 = 0.41229 loss)\nI0607 22:04:06.105192 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.155292 (* 1 = 0.155292 loss)\nI0607 22:04:06.105198 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.409078 (* 1 = 0.409078 loss)\nI0607 22:04:06.105204 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.155304 (* 1 = 0.155304 loss)\nI0607 22:04:06.105211 13573 sgd_solver.cpp:106] Iteration 6280, lr = 0.001\nI0607 22:04:19.191735 13573 solver.cpp:229] Iteration 6300, loss = 0.920827\nI0607 22:04:19.191802 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.23202 (* 1 = 0.23202 loss)\nI0607 22:04:19.191812 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.797429 (* 1 = 0.797429 loss)\nI0607 22:04:19.191818 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121046 (* 1 = 0.121046 loss)\nI0607 22:04:19.191824 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00975087 (* 1 = 0.00975087 loss)\nI0607 22:04:19.191830 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118745 (* 1 = 0.118745 loss)\nI0607 22:04:19.191836 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0097378 (* 1 = 0.0097378 loss)\nI0607 22:04:19.191843 13573 sgd_solver.cpp:106] Iteration 6300, lr = 0.001\nI0607 22:04:32.270371 13573 solver.cpp:229] Iteration 6320, loss = 0.71741\nI0607 22:04:32.270449 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0732887 (* 1 = 0.0732887 loss)\nI0607 22:04:32.270458 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190026 (* 1 = 0.190026 loss)\nI0607 22:04:32.270464 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241211 (* 1 = 0.241211 loss)\nI0607 22:04:32.270470 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0457034 (* 1 = 0.0457034 loss)\nI0607 22:04:32.270476 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.26788 (* 1 = 0.26788 loss)\nI0607 22:04:32.270483 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.045678 (* 1 = 0.045678 loss)\nI0607 22:04:32.270490 13573 sgd_solver.cpp:106] Iteration 6320, lr = 0.001\nI0607 22:04:45.066649 13573 solver.cpp:229] Iteration 6340, loss = 1.058\nI0607 22:04:45.066712 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.299759 (* 1 = 0.299759 loss)\nI0607 22:04:45.066721 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.31537 (* 1 = 0.31537 loss)\nI0607 22:04:45.066728 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.313093 (* 1 = 0.313093 loss)\nI0607 22:04:45.066735 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0596804 (* 1 = 0.0596804 
loss)\nI0607 22:04:45.066741 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.324559 (* 1 = 0.324559 loss)\nI0607 22:04:45.066747 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0596718 (* 1 = 0.0596718 loss)\nI0607 22:04:45.066756 13573 sgd_solver.cpp:106] Iteration 6340, lr = 0.001\nI0607 22:04:58.012697 13573 solver.cpp:229] Iteration 6360, loss = 0.998173\nI0607 22:04:58.012761 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.293362 (* 1 = 0.293362 loss)\nI0607 22:04:58.012770 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110102 (* 1 = 0.110102 loss)\nI0607 22:04:58.012778 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.330964 (* 1 = 0.330964 loss)\nI0607 22:04:58.012783 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.172386 (* 1 = 0.172386 loss)\nI0607 22:04:58.012789 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.330873 (* 1 = 0.330873 loss)\nI0607 22:04:58.012794 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.172384 (* 1 = 0.172384 loss)\nI0607 22:04:58.012801 13573 sgd_solver.cpp:106] Iteration 6360, lr = 0.001\nI0607 22:05:10.920930 13573 solver.cpp:229] Iteration 6380, loss = 1.28576\nI0607 22:05:10.920995 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.356451 (* 1 = 0.356451 loss)\nI0607 22:05:10.921005 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.407562 (* 1 = 0.407562 loss)\nI0607 22:05:10.921011 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.401691 (* 1 = 0.401691 loss)\nI0607 22:05:10.921018 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0680995 (* 1 = 0.0680995 loss)\nI0607 22:05:10.921025 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.400797 (* 1 = 0.400797 loss)\nI0607 22:05:10.921030 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0680687 (* 1 = 0.0680687 loss)\nI0607 22:05:10.921037 13573 sgd_solver.cpp:106] 
Iteration 6380, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:05:23.815111 13573 solver.cpp:229] Iteration 6400, loss = 1.43417\nI0607 22:05:23.815243 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0180416 (* 1 = 0.0180416 loss)\nI0607 22:05:23.815253 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.109405 (* 1 = 0.109405 loss)\nI0607 22:05:23.815258 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.400478 (* 1 = 0.400478 loss)\nI0607 22:05:23.815264 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.109595 (* 1 = 0.109595 loss)\nI0607 22:05:23.815270 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.41521 (* 1 = 0.41521 loss)\nI0607 22:05:23.815277 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.109527 (* 1 = 0.109527 loss)\nI0607 22:05:23.815284 13573 sgd_solver.cpp:106] Iteration 6400, lr = 0.001\nI0607 22:05:36.855085 13573 solver.cpp:229] Iteration 6420, loss = 0.701402\nI0607 22:05:36.855149 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.231908 (* 1 = 0.231908 loss)\nI0607 22:05:36.855159 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.341576 (* 1 = 0.341576 loss)\nI0607 22:05:36.855165 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113237 (* 1 = 0.113237 loss)\nI0607 22:05:36.855171 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0189925 (* 1 = 0.0189925 loss)\nI0607 22:05:36.855177 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111405 (* 1 = 0.111405 loss)\nI0607 22:05:36.855182 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0189812 (* 1 = 0.0189812 loss)\nI0607 22:05:36.855190 13573 sgd_solver.cpp:106] Iteration 6420, lr = 0.001\nI0607 22:05:49.919317 13573 solver.cpp:229] Iteration 6440, loss = 0.704179\nI0607 22:05:49.919389 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.299828 (* 1 = 0.299828 loss)\nI0607 22:05:49.919399 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.170252 (* 1 = 0.170252 loss)\nI0607 22:05:49.919405 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.187268 (* 1 = 0.187268 loss)\nI0607 22:05:49.919411 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0323316 (* 1 = 0.0323316 loss)\nI0607 22:05:49.919416 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174612 (* 1 = 0.174612 loss)\nI0607 22:05:49.919422 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0323227 (* 1 = 0.0323227 loss)\nI0607 22:05:49.919430 13573 sgd_solver.cpp:106] Iteration 6440, lr = 0.001\nI0607 22:06:02.737771 13573 solver.cpp:229] Iteration 6460, loss = 1.24391\nI0607 22:06:02.737840 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.359884 (* 1 = 0.359884 loss)\nI0607 22:06:02.737850 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.214592 (* 1 = 0.214592 loss)\nI0607 22:06:02.737857 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119587 (* 1 = 0.119587 loss)\nI0607 22:06:02.737864 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0762247 (* 1 = 0.0762247 loss)\nI0607 22:06:02.737869 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122222 (* 1 = 0.122222 loss)\nI0607 22:06:02.737876 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0762079 (* 1 = 0.0762079 loss)\nI0607 22:06:02.737884 13573 sgd_solver.cpp:106] Iteration 6460, lr = 0.001\nI0607 22:06:15.592787 13573 solver.cpp:229] Iteration 6480, loss = 1.38867\nI0607 22:06:15.592857 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.34148 (* 1 = 0.34148 loss)\nI0607 22:06:15.592866 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.45278 (* 1 = 0.45278 loss)\nI0607 22:06:15.592872 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.416506 (* 1 = 0.416506 loss)\nI0607 22:06:15.592878 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.109707 (* 1 = 0.109707 
loss)\nI0607 22:06:15.592885 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.420226 (* 1 = 0.420226 loss)\nI0607 22:06:15.592890 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.109695 (* 1 = 0.109695 loss)\nI0607 22:06:15.592897 13573 sgd_solver.cpp:106] Iteration 6480, lr = 0.001\nI0607 22:06:28.637560 13573 solver.cpp:229] Iteration 6500, loss = 0.640711\nI0607 22:06:28.637624 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.29263 (* 1 = 0.29263 loss)\nI0607 22:06:28.637634 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.169964 (* 1 = 0.169964 loss)\nI0607 22:06:28.637640 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102667 (* 1 = 0.102667 loss)\nI0607 22:06:28.637645 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0516268 (* 1 = 0.0516268 loss)\nI0607 22:06:28.637651 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109804 (* 1 = 0.109804 loss)\nI0607 22:06:28.637657 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0516235 (* 1 = 0.0516235 loss)\nI0607 22:06:28.637665 13573 sgd_solver.cpp:106] Iteration 6500, lr = 0.001\nI0607 22:06:41.605942 13573 solver.cpp:229] Iteration 6520, loss = 1.34092\nI0607 22:06:41.606014 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.258346 (* 1 = 0.258346 loss)\nI0607 22:06:41.606024 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.262918 (* 1 = 0.262918 loss)\nI0607 22:06:41.606030 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.739331 (* 1 = 0.739331 loss)\nI0607 22:06:41.606035 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.191111 (* 1 = 0.191111 loss)\nI0607 22:06:41.606041 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.725963 (* 1 = 0.725963 loss)\nI0607 22:06:41.606047 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.191141 (* 1 = 0.191141 loss)\nI0607 22:06:41.606055 13573 sgd_solver.cpp:106] Iteration 
6520, lr = 0.001\nI0607 22:06:54.577905 13573 solver.cpp:229] Iteration 6540, loss = 0.564309\nI0607 22:06:54.577975 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0528954 (* 1 = 0.0528954 loss)\nI0607 22:06:54.577986 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.173373 (* 1 = 0.173373 loss)\nI0607 22:06:54.577991 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1663 (* 1 = 0.1663 loss)\nI0607 22:06:54.577996 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0114014 (* 1 = 0.0114014 loss)\nI0607 22:06:54.578002 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.167464 (* 1 = 0.167464 loss)\nI0607 22:06:54.578007 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0114065 (* 1 = 0.0114065 loss)\nI0607 22:06:54.578014 13573 sgd_solver.cpp:106] Iteration 6540, lr = 0.001\nI0607 22:07:07.337011 13573 solver.cpp:229] Iteration 6560, loss = 0.763482\nI0607 22:07:07.337079 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000193727 (* 1 = 0.000193727 loss)\nI0607 22:07:07.337088 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0147823 (* 1 = 0.0147823 loss)\nI0607 22:07:07.337095 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.213613 (* 1 = 0.213613 loss)\nI0607 22:07:07.337101 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.062216 (* 1 = 0.062216 loss)\nI0607 22:07:07.337106 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.209094 (* 1 = 0.209094 loss)\nI0607 22:07:07.337113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0622114 (* 1 = 0.0622114 loss)\nI0607 22:07:07.337121 13573 sgd_solver.cpp:106] Iteration 6560, lr = 0.001\nI0607 22:07:20.165884 13573 solver.cpp:229] Iteration 6580, loss = 1.35658\nI0607 22:07:20.165956 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.16841 (* 1 = 0.16841 loss)\nI0607 22:07:20.165966 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.180657 (* 1 = 0.180657 loss)\nI0607 22:07:20.165972 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.115121 (* 1 = 0.115121 loss)\nI0607 22:07:20.165978 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0292944 (* 1 = 0.0292944 loss)\nI0607 22:07:20.165984 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111872 (* 1 = 0.111872 loss)\nI0607 22:07:20.165990 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0292854 (* 1 = 0.0292854 loss)\nI0607 22:07:20.165997 13573 sgd_solver.cpp:106] Iteration 6580, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:07:33.040756 13573 solver.cpp:229] Iteration 6600, loss = 1.32681\nI0607 22:07:33.040820 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.582509 (* 1 = 0.582509 loss)\nI0607 22:07:33.040830 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.701079 (* 1 = 0.701079 loss)\nI0607 22:07:33.040837 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.184158 (* 1 = 0.184158 loss)\nI0607 22:07:33.040843 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0647163 (* 1 = 0.0647163 loss)\nI0607 22:07:33.040850 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183429 (* 1 = 0.183429 loss)\nI0607 22:07:33.040856 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0647151 (* 1 = 0.0647151 loss)\nI0607 22:07:33.040863 13573 sgd_solver.cpp:106] Iteration 6600, lr = 0.001\nI0607 22:07:46.018851 13573 solver.cpp:229] Iteration 6620, loss = 1.12636\nI0607 22:07:46.018920 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0813481 (* 1 = 0.0813481 loss)\nI0607 22:07:46.018930 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.139547 (* 1 = 0.139547 loss)\nI0607 22:07:46.018936 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.327386 (* 1 = 0.327386 loss)\nI0607 22:07:46.018941 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0463238 (* 1 = 0.0463238 
loss)\nI0607 22:07:46.018947 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.346999 (* 1 = 0.346999 loss)\nI0607 22:07:46.018954 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0463072 (* 1 = 0.0463072 loss)\nI0607 22:07:46.018960 13573 sgd_solver.cpp:106] Iteration 6620, lr = 0.001\nI0607 22:07:58.992350 13573 solver.cpp:229] Iteration 6640, loss = 1.26605\nI0607 22:07:58.992413 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.437187 (* 1 = 0.437187 loss)\nI0607 22:07:58.992422 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.331515 (* 1 = 0.331515 loss)\nI0607 22:07:58.992429 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0918274 (* 1 = 0.0918274 loss)\nI0607 22:07:58.992434 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00862951 (* 1 = 0.00862951 loss)\nI0607 22:07:58.992439 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107902 (* 1 = 0.107902 loss)\nI0607 22:07:58.992445 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00861839 (* 1 = 0.00861839 loss)\nI0607 22:07:58.992452 13573 sgd_solver.cpp:106] Iteration 6640, lr = 0.001\nI0607 22:08:11.887802 13573 solver.cpp:229] Iteration 6660, loss = 1.00024\nI0607 22:08:11.887868 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.387628 (* 1 = 0.387628 loss)\nI0607 22:08:11.887878 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.146396 (* 1 = 0.146396 loss)\nI0607 22:08:11.887884 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105068 (* 1 = 0.105068 loss)\nI0607 22:08:11.887890 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0199441 (* 1 = 0.0199441 loss)\nI0607 22:08:11.887897 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.104727 (* 1 = 0.104727 loss)\nI0607 22:08:11.887902 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0199426 (* 1 = 0.0199426 loss)\nI0607 22:08:11.887909 13573 
sgd_solver.cpp:106] Iteration 6660, lr = 0.001\nI0607 22:08:24.628334 13573 solver.cpp:229] Iteration 6680, loss = 1.16534\nI0607 22:08:24.628396 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.46311 (* 1 = 0.46311 loss)\nI0607 22:08:24.628407 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.24781 (* 1 = 0.24781 loss)\nI0607 22:08:24.628412 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.313947 (* 1 = 0.313947 loss)\nI0607 22:08:24.628418 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.168735 (* 1 = 0.168735 loss)\nI0607 22:08:24.628423 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.321999 (* 1 = 0.321999 loss)\nI0607 22:08:24.628429 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.168776 (* 1 = 0.168776 loss)\nI0607 22:08:24.628437 13573 sgd_solver.cpp:106] Iteration 6680, lr = 0.001\nI0607 22:08:37.328032 13573 solver.cpp:229] Iteration 6700, loss = 1.21454\nI0607 22:08:37.328099 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.194443 (* 1 = 0.194443 loss)\nI0607 22:08:37.328111 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.292867 (* 1 = 0.292867 loss)\nI0607 22:08:37.328117 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.380536 (* 1 = 0.380536 loss)\nI0607 22:08:37.328125 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0296574 (* 1 = 0.0296574 loss)\nI0607 22:08:37.328130 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.365356 (* 1 = 0.365356 loss)\nI0607 22:08:37.328136 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.029661 (* 1 = 0.029661 loss)\nI0607 22:08:37.328143 13573 sgd_solver.cpp:106] Iteration 6700, lr = 0.001\nI0607 22:08:50.099102 13573 solver.cpp:229] Iteration 6720, loss = 1.353\nI0607 22:08:50.099169 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0750817 (* 1 = 0.0750817 loss)\nI0607 22:08:50.099179 13573 solver.cpp:245]     Train net output 
#1: loss_cls = 0.230863 (* 1 = 0.230863 loss)\nI0607 22:08:50.099184 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.269224 (* 1 = 0.269224 loss)\nI0607 22:08:50.099190 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0273803 (* 1 = 0.0273803 loss)\nI0607 22:08:50.099196 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256198 (* 1 = 0.256198 loss)\nI0607 22:08:50.099202 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0273595 (* 1 = 0.0273595 loss)\nI0607 22:08:50.099210 13573 sgd_solver.cpp:106] Iteration 6720, lr = 0.001\nI0607 22:09:02.876844 13573 solver.cpp:229] Iteration 6740, loss = 0.786211\nI0607 22:09:02.876982 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0601964 (* 1 = 0.0601964 loss)\nI0607 22:09:02.876992 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0691886 (* 1 = 0.0691886 loss)\nI0607 22:09:02.876998 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.282655 (* 1 = 0.282655 loss)\nI0607 22:09:02.877004 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0389819 (* 1 = 0.0389819 loss)\nI0607 22:09:02.877010 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.28697 (* 1 = 0.28697 loss)\nI0607 22:09:02.877015 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0389443 (* 1 = 0.0389443 loss)\nI0607 22:09:02.877023 13573 sgd_solver.cpp:106] Iteration 6740, lr = 0.001\nI0607 22:09:15.993324 13573 solver.cpp:229] Iteration 6760, loss = 1.61237\nI0607 22:09:15.993386 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00590906 (* 1 = 0.00590906 loss)\nI0607 22:09:15.993396 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0700079 (* 1 = 0.0700079 loss)\nI0607 22:09:15.993403 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.369107 (* 1 = 0.369107 loss)\nI0607 22:09:15.993409 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0275757 (* 1 = 0.0275757 
loss)\nI0607 22:09:15.993417 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.391651 (* 1 = 0.391651 loss)\nI0607 22:09:15.993422 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.027586 (* 1 = 0.027586 loss)\nI0607 22:09:15.993430 13573 sgd_solver.cpp:106] Iteration 6760, lr = 0.001\nI0607 22:09:28.880466 13573 solver.cpp:229] Iteration 6780, loss = 1.10788\nI0607 22:09:28.880558 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00273469 (* 1 = 0.00273469 loss)\nI0607 22:09:28.880568 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0288581 (* 1 = 0.0288581 loss)\nI0607 22:09:28.880574 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277529 (* 1 = 0.277529 loss)\nI0607 22:09:28.880580 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0237192 (* 1 = 0.0237192 loss)\nI0607 22:09:28.880586 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.277705 (* 1 = 0.277705 loss)\nI0607 22:09:28.880591 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0237242 (* 1 = 0.0237242 loss)\nI0607 22:09:28.880599 13573 sgd_solver.cpp:106] Iteration 6780, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:09:41.815748 13573 solver.cpp:229] Iteration 6800, loss = 0.95572\nI0607 22:09:41.815822 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0287596 (* 1 = 0.0287596 loss)\nI0607 22:09:41.815832 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0188148 (* 1 = 0.0188148 loss)\nI0607 22:09:41.815838 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.257712 (* 1 = 0.257712 loss)\nI0607 22:09:41.815845 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0284804 (* 1 = 0.0284804 loss)\nI0607 22:09:41.815850 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226365 (* 1 = 0.226365 loss)\nI0607 22:09:41.815856 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0284786 (* 1 = 0.0284786 loss)\nI0607 22:09:41.815862 
13573 sgd_solver.cpp:106] Iteration 6800, lr = 0.001
I0607 22:09:54.777541 13573 solver.cpp:229] Iteration 6820, loss = 1.70932
I0607 22:09:54.777604 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.468667 (* 1 = 0.468667 loss)
I0607 22:09:54.777614 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.676409 (* 1 = 0.676409 loss)
I0607 22:09:54.777619 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.447198 (* 1 = 0.447198 loss)
I0607 22:09:54.777626 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328673 (* 1 = 0.0328673 loss)
I0607 22:09:54.777631 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.440684 (* 1 = 0.440684 loss)
I0607 22:09:54.777637 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0328688 (* 1 = 0.0328688 loss)
I0607 22:09:54.777644 13573 sgd_solver.cpp:106] Iteration 6820, lr = 0.001
(log continues in the same format through Iteration 7740: lr = 0.001 throughout, total loss fluctuating between roughly 0.39 and 2.51, with the six per-head loss terms loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox reported at each display step; speed: 0.645s / iter)
22:19:47.047715 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0813686 (* 1 = 0.0813686 loss)\nI0607 22:19:47.047721 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223608 (* 1 = 0.223608 loss)\nI0607 22:19:47.047727 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0813693 (* 1 = 0.0813693 loss)\nI0607 22:19:47.047734 13573 sgd_solver.cpp:106] Iteration 7740, lr = 0.001\nI0607 22:19:59.780882 13573 solver.cpp:229] Iteration 7760, loss = 1.29931\nI0607 22:19:59.780951 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0165516 (* 1 = 0.0165516 loss)\nI0607 22:19:59.780961 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0203875 (* 1 = 0.0203875 loss)\nI0607 22:19:59.780967 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.682862 (* 1 = 0.682862 loss)\nI0607 22:19:59.780973 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.137061 (* 1 = 0.137061 loss)\nI0607 22:19:59.780978 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.694686 (* 1 = 0.694686 loss)\nI0607 22:19:59.780984 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.137023 (* 1 = 0.137023 loss)\nI0607 22:19:59.780990 13573 sgd_solver.cpp:106] Iteration 7760, lr = 0.001\nI0607 22:20:12.655058 13573 solver.cpp:229] Iteration 7780, loss = 1.51142\nI0607 22:20:12.655122 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.459438 (* 1 = 0.459438 loss)\nI0607 22:20:12.655131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.515585 (* 1 = 0.515585 loss)\nI0607 22:20:12.655138 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.235167 (* 1 = 0.235167 loss)\nI0607 22:20:12.655143 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0238652 (* 1 = 0.0238652 loss)\nI0607 22:20:12.655148 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228363 (* 1 = 0.228363 loss)\nI0607 22:20:12.655153 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0238557 (* 1 = 0.0238557 loss)\nI0607 22:20:12.655160 13573 sgd_solver.cpp:106] Iteration 7780, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:20:25.629374 13573 solver.cpp:229] Iteration 7800, loss = 0.716572\nI0607 22:20:25.629446 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.141607 (* 1 = 0.141607 loss)\nI0607 22:20:25.629456 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.169209 (* 1 = 0.169209 loss)\nI0607 22:20:25.629462 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105029 (* 1 = 0.105029 loss)\nI0607 22:20:25.629468 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.041066 (* 1 = 0.041066 loss)\nI0607 22:20:25.629474 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100238 (* 1 = 0.100238 loss)\nI0607 22:20:25.629480 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0410563 (* 1 = 0.0410563 loss)\nI0607 22:20:25.629488 13573 sgd_solver.cpp:106] Iteration 7800, lr = 0.001\nI0607 22:20:38.546116 13573 solver.cpp:229] Iteration 7820, loss = 0.658513\nI0607 22:20:38.546178 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.343759 (* 1 = 0.343759 loss)\nI0607 22:20:38.546188 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0952686 (* 1 = 0.0952686 loss)\nI0607 22:20:38.546195 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105298 (* 1 = 0.105298 loss)\nI0607 22:20:38.546200 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.017262 (* 1 = 0.017262 loss)\nI0607 22:20:38.546206 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.096814 (* 1 = 0.096814 loss)\nI0607 22:20:38.546213 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0172512 (* 1 = 0.0172512 loss)\nI0607 22:20:38.546221 13573 sgd_solver.cpp:106] Iteration 7820, lr = 0.001\nI0607 22:20:51.433938 13573 solver.cpp:229] Iteration 7840, loss = 1.27519\nI0607 22:20:51.434002 13573 solver.cpp:245]     Train net 
output #0: loss_bbox = 0.273159 (* 1 = 0.273159 loss)\nI0607 22:20:51.434011 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.15709 (* 1 = 0.15709 loss)\nI0607 22:20:51.434018 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.122834 (* 1 = 0.122834 loss)\nI0607 22:20:51.434025 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0228852 (* 1 = 0.0228852 loss)\nI0607 22:20:51.434029 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118465 (* 1 = 0.118465 loss)\nI0607 22:20:51.434036 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0228872 (* 1 = 0.0228872 loss)\nI0607 22:20:51.434041 13573 sgd_solver.cpp:106] Iteration 7840, lr = 0.001\nI0607 22:21:04.275454 13573 solver.cpp:229] Iteration 7860, loss = 0.781194\nI0607 22:21:04.275514 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00254033 (* 1 = 0.00254033 loss)\nI0607 22:21:04.275524 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0454641 (* 1 = 0.0454641 loss)\nI0607 22:21:04.275530 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.258999 (* 1 = 0.258999 loss)\nI0607 22:21:04.275537 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0170288 (* 1 = 0.0170288 loss)\nI0607 22:21:04.275542 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.244848 (* 1 = 0.244848 loss)\nI0607 22:21:04.275549 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.017057 (* 1 = 0.017057 loss)\nI0607 22:21:04.275557 13573 sgd_solver.cpp:106] Iteration 7860, lr = 0.001\nI0607 22:21:17.119309 13573 solver.cpp:229] Iteration 7880, loss = 0.461553\nI0607 22:21:17.119446 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00328615 (* 1 = 0.00328615 loss)\nI0607 22:21:17.119455 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0783713 (* 1 = 0.0783713 loss)\nI0607 22:21:17.119462 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.142059 (* 1 = 0.142059 
loss)\nI0607 22:21:17.119468 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00395491 (* 1 = 0.00395491 loss)\nI0607 22:21:17.119474 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138437 (* 1 = 0.138437 loss)\nI0607 22:21:17.119480 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00394807 (* 1 = 0.00394807 loss)\nI0607 22:21:17.119488 13573 sgd_solver.cpp:106] Iteration 7880, lr = 0.001\nI0607 22:21:30.030374 13573 solver.cpp:229] Iteration 7900, loss = 0.795096\nI0607 22:21:30.030441 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.233055 (* 1 = 0.233055 loss)\nI0607 22:21:30.030452 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.220493 (* 1 = 0.220493 loss)\nI0607 22:21:30.030457 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.109818 (* 1 = 0.109818 loss)\nI0607 22:21:30.030463 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0312762 (* 1 = 0.0312762 loss)\nI0607 22:21:30.030469 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.110423 (* 1 = 0.110423 loss)\nI0607 22:21:30.030475 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0312923 (* 1 = 0.0312923 loss)\nI0607 22:21:30.030481 13573 sgd_solver.cpp:106] Iteration 7900, lr = 0.001\nI0607 22:21:42.888135 13573 solver.cpp:229] Iteration 7920, loss = 0.780084\nI0607 22:21:42.888195 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.298233 (* 1 = 0.298233 loss)\nI0607 22:21:42.888206 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.162836 (* 1 = 0.162836 loss)\nI0607 22:21:42.888211 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0978442 (* 1 = 0.0978442 loss)\nI0607 22:21:42.888218 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0148519 (* 1 = 0.0148519 loss)\nI0607 22:21:42.888223 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100568 (* 1 = 0.100568 loss)\nI0607 22:21:42.888231 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0148408 (* 1 = 0.0148408 loss)\nI0607 22:21:42.888237 13573 sgd_solver.cpp:106] Iteration 7920, lr = 0.001\nI0607 22:21:55.708288 13573 solver.cpp:229] Iteration 7940, loss = 1.13497\nI0607 22:21:55.708362 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.389238 (* 1 = 0.389238 loss)\nI0607 22:21:55.708371 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.163424 (* 1 = 0.163424 loss)\nI0607 22:21:55.708377 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10331 (* 1 = 0.10331 loss)\nI0607 22:21:55.708384 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0093458 (* 1 = 0.0093458 loss)\nI0607 22:21:55.708389 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101107 (* 1 = 0.101107 loss)\nI0607 22:21:55.708395 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00934185 (* 1 = 0.00934185 loss)\nI0607 22:21:55.708402 13573 sgd_solver.cpp:106] Iteration 7940, lr = 0.001\nI0607 22:22:08.600067 13573 solver.cpp:229] Iteration 7960, loss = 1.09772\nI0607 22:22:08.600211 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.211566 (* 1 = 0.211566 loss)\nI0607 22:22:08.600221 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.294054 (* 1 = 0.294054 loss)\nI0607 22:22:08.600226 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.271464 (* 1 = 0.271464 loss)\nI0607 22:22:08.600232 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.063855 (* 1 = 0.063855 loss)\nI0607 22:22:08.600239 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.261966 (* 1 = 0.261966 loss)\nI0607 22:22:08.600244 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0638657 (* 1 = 0.0638657 loss)\nI0607 22:22:08.600251 13573 sgd_solver.cpp:106] Iteration 7960, lr = 0.001\nI0607 22:22:21.603546 13573 solver.cpp:229] Iteration 7980, loss = 0.430395\nI0607 22:22:21.603613 13573 solver.cpp:245]     
Train net output #0: loss_bbox = 0.00551251 (* 1 = 0.00551251 loss)\nI0607 22:22:21.603623 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0103553 (* 1 = 0.0103553 loss)\nI0607 22:22:21.603629 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0943998 (* 1 = 0.0943998 loss)\nI0607 22:22:21.603636 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0556435 (* 1 = 0.0556435 loss)\nI0607 22:22:21.603641 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0867752 (* 1 = 0.0867752 loss)\nI0607 22:22:21.603646 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0556373 (* 1 = 0.0556373 loss)\nI0607 22:22:21.603653 13573 sgd_solver.cpp:106] Iteration 7980, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:22:34.298465 13573 solver.cpp:229] Iteration 8000, loss = 1.10666\nI0607 22:22:34.298537 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.338166 (* 1 = 0.338166 loss)\nI0607 22:22:34.298545 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.187016 (* 1 = 0.187016 loss)\nI0607 22:22:34.298552 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.134669 (* 1 = 0.134669 loss)\nI0607 22:22:34.298557 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0198408 (* 1 = 0.0198408 loss)\nI0607 22:22:34.298563 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.130285 (* 1 = 0.130285 loss)\nI0607 22:22:34.298569 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.019842 (* 1 = 0.019842 loss)\nI0607 22:22:34.298578 13573 sgd_solver.cpp:106] Iteration 8000, lr = 0.001\nI0607 22:22:47.008040 13573 solver.cpp:229] Iteration 8020, loss = 1.07792\nI0607 22:22:47.008110 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.16256 (* 1 = 0.16256 loss)\nI0607 22:22:47.008119 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.18802 (* 1 = 0.18802 loss)\nI0607 22:22:47.008126 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 
0.205258 (* 1 = 0.205258 loss)\nI0607 22:22:47.008132 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.047531 (* 1 = 0.047531 loss)\nI0607 22:22:47.008138 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.196897 (* 1 = 0.196897 loss)\nI0607 22:22:47.008143 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0475127 (* 1 = 0.0475127 loss)\nI0607 22:22:47.008150 13573 sgd_solver.cpp:106] Iteration 8020, lr = 0.001\nI0607 22:22:59.774847 13573 solver.cpp:229] Iteration 8040, loss = 0.951299\nI0607 22:22:59.774911 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.303337 (* 1 = 0.303337 loss)\nI0607 22:22:59.774921 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.274177 (* 1 = 0.274177 loss)\nI0607 22:22:59.774927 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0909909 (* 1 = 0.0909909 loss)\nI0607 22:22:59.774933 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0164404 (* 1 = 0.0164404 loss)\nI0607 22:22:59.774940 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.091152 (* 1 = 0.091152 loss)\nI0607 22:22:59.774945 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0164502 (* 1 = 0.0164502 loss)\nI0607 22:22:59.774952 13573 sgd_solver.cpp:106] Iteration 8040, lr = 0.001\nI0607 22:23:12.818900 13573 solver.cpp:229] Iteration 8060, loss = 0.609001\nI0607 22:23:12.818966 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.178707 (* 1 = 0.178707 loss)\nI0607 22:23:12.818976 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.196403 (* 1 = 0.196403 loss)\nI0607 22:23:12.818982 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.099343 (* 1 = 0.099343 loss)\nI0607 22:23:12.818989 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00383587 (* 1 = 0.00383587 loss)\nI0607 22:23:12.818994 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0998332 (* 1 = 0.0998332 loss)\nI0607 
22:23:12.819000 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00383237 (* 1 = 0.00383237 loss)\nI0607 22:23:12.819006 13573 sgd_solver.cpp:106] Iteration 8060, lr = 0.001\nI0607 22:23:25.689313 13573 solver.cpp:229] Iteration 8080, loss = 1.12917\nI0607 22:23:25.689381 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.497929 (* 1 = 0.497929 loss)\nI0607 22:23:25.689390 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.255015 (* 1 = 0.255015 loss)\nI0607 22:23:25.689396 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.32486 (* 1 = 0.32486 loss)\nI0607 22:23:25.689402 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0179158 (* 1 = 0.0179158 loss)\nI0607 22:23:25.689409 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.326213 (* 1 = 0.326213 loss)\nI0607 22:23:25.689415 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0179252 (* 1 = 0.0179252 loss)\nI0607 22:23:25.689422 13573 sgd_solver.cpp:106] Iteration 8080, lr = 0.001\nI0607 22:23:38.637907 13573 solver.cpp:229] Iteration 8100, loss = 0.879186\nI0607 22:23:38.637970 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.361032 (* 1 = 0.361032 loss)\nI0607 22:23:38.637979 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138411 (* 1 = 0.138411 loss)\nI0607 22:23:38.637985 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.307063 (* 1 = 0.307063 loss)\nI0607 22:23:38.637991 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0380675 (* 1 = 0.0380675 loss)\nI0607 22:23:38.637996 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.293708 (* 1 = 0.293708 loss)\nI0607 22:23:38.638002 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0380498 (* 1 = 0.0380498 loss)\nI0607 22:23:38.638010 13573 sgd_solver.cpp:106] Iteration 8100, lr = 0.001\nI0607 22:23:51.594697 13573 solver.cpp:229] Iteration 8120, loss = 1.41451\nI0607 22:23:51.594763 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.233896 (* 1 = 0.233896 loss)\nI0607 22:23:51.594784 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107881 (* 1 = 0.107881 loss)\nI0607 22:23:51.594794 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0984513 (* 1 = 0.0984513 loss)\nI0607 22:23:51.594799 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00678044 (* 1 = 0.00678044 loss)\nI0607 22:23:51.594805 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102273 (* 1 = 0.102273 loss)\nI0607 22:23:51.594810 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00678054 (* 1 = 0.00678054 loss)\nI0607 22:23:51.594816 13573 sgd_solver.cpp:106] Iteration 8120, lr = 0.001\nI0607 22:24:04.262913 13573 solver.cpp:229] Iteration 8140, loss = 0.557284\nI0607 22:24:04.262979 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000519904 (* 1 = 0.000519904 loss)\nI0607 22:24:04.262987 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00644061 (* 1 = 0.00644061 loss)\nI0607 22:24:04.262994 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183499 (* 1 = 0.183499 loss)\nI0607 22:24:04.263000 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0157391 (* 1 = 0.0157391 loss)\nI0607 22:24:04.263005 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.211692 (* 1 = 0.211692 loss)\nI0607 22:24:04.263010 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0157491 (* 1 = 0.0157491 loss)\nI0607 22:24:04.263017 13573 sgd_solver.cpp:106] Iteration 8140, lr = 0.001\nI0607 22:24:16.997458 13573 solver.cpp:229] Iteration 8160, loss = 1.26929\nI0607 22:24:16.997524 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.250576 (* 1 = 0.250576 loss)\nI0607 22:24:16.997534 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.15657 (* 1 = 0.15657 loss)\nI0607 22:24:16.997539 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.155181 (* 1 = 0.155181 loss)\nI0607 22:24:16.997545 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0593232 (* 1 = 0.0593232 loss)\nI0607 22:24:16.997550 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.166516 (* 1 = 0.166516 loss)\nI0607 22:24:16.997556 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0593419 (* 1 = 0.0593419 loss)\nI0607 22:24:16.997562 13573 sgd_solver.cpp:106] Iteration 8160, lr = 0.001\nI0607 22:24:30.017951 13573 solver.cpp:229] Iteration 8180, loss = 1.34086\nI0607 22:24:30.018023 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00422535 (* 1 = 0.00422535 loss)\nI0607 22:24:30.018033 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00907145 (* 1 = 0.00907145 loss)\nI0607 22:24:30.018038 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.449531 (* 1 = 0.449531 loss)\nI0607 22:24:30.018044 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0664619 (* 1 = 0.0664619 loss)\nI0607 22:24:30.018049 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.443034 (* 1 = 0.443034 loss)\nI0607 22:24:30.018055 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0664596 (* 1 = 0.0664596 loss)\nI0607 22:24:30.018061 13573 sgd_solver.cpp:106] Iteration 8180, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:24:42.823643 13573 solver.cpp:229] Iteration 8200, loss = 0.722169\nI0607 22:24:42.823706 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.253237 (* 1 = 0.253237 loss)\nI0607 22:24:42.823716 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.378035 (* 1 = 0.378035 loss)\nI0607 22:24:42.823722 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191054 (* 1 = 0.191054 loss)\nI0607 22:24:42.823729 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0342215 (* 1 = 0.0342215 loss)\nI0607 22:24:42.823735 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.186964 (* 1 = 0.186964 loss)\nI0607 22:24:42.823741 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0342196 (* 1 = 0.0342196 loss)\nI0607 22:24:42.823748 13573 sgd_solver.cpp:106] Iteration 8200, lr = 0.001\nI0607 22:24:55.694952 13573 solver.cpp:229] Iteration 8220, loss = 1.00747\nI0607 22:24:55.695020 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.429878 (* 1 = 0.429878 loss)\nI0607 22:24:55.695030 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.357879 (* 1 = 0.357879 loss)\nI0607 22:24:55.695036 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1581 (* 1 = 0.1581 loss)\nI0607 22:24:55.695042 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0247456 (* 1 = 0.0247456 loss)\nI0607 22:24:55.695049 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.160878 (* 1 = 0.160878 loss)\nI0607 22:24:55.695053 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247511 (* 1 = 0.0247511 loss)\nI0607 22:24:55.695060 13573 sgd_solver.cpp:106] Iteration 8220, lr = 0.001\nI0607 22:25:08.437242 13573 solver.cpp:229] Iteration 8240, loss = 1.10736\nI0607 22:25:08.437304 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0331796 (* 1 = 0.0331796 loss)\nI0607 22:25:08.437314 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0410007 (* 1 = 0.0410007 loss)\nI0607 22:25:08.437319 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277343 (* 1 = 0.277343 loss)\nI0607 22:25:08.437325 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0963913 (* 1 = 0.0963913 loss)\nI0607 22:25:08.437330 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.275319 (* 1 = 0.275319 loss)\nI0607 22:25:08.437336 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0963726 (* 1 = 0.0963726 loss)\nI0607 22:25:08.437342 13573 sgd_solver.cpp:106] Iteration 8240, lr = 0.001\nI0607 22:25:21.344475 13573 solver.cpp:229] Iteration 8260, loss = 
1.33991\nI0607 22:25:21.344553 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.146976 (* 1 = 0.146976 loss)\nI0607 22:25:21.344564 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105011 (* 1 = 0.105011 loss)\nI0607 22:25:21.344569 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.460235 (* 1 = 0.460235 loss)\nI0607 22:25:21.344575 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.217038 (* 1 = 0.217038 loss)\nI0607 22:25:21.344581 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.465763 (* 1 = 0.465763 loss)\nI0607 22:25:21.344586 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.217017 (* 1 = 0.217017 loss)\nI0607 22:25:21.344594 13573 sgd_solver.cpp:106] Iteration 8260, lr = 0.001\nI0607 22:25:34.350524 13573 solver.cpp:229] Iteration 8280, loss = 0.658713\nI0607 22:25:34.350585 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.259613 (* 1 = 0.259613 loss)\nI0607 22:25:34.350594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.139514 (* 1 = 0.139514 loss)\nI0607 22:25:34.350600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.153679 (* 1 = 0.153679 loss)\nI0607 22:25:34.350606 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.008148 (* 1 = 0.008148 loss)\nI0607 22:25:34.350611 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.163859 (* 1 = 0.163859 loss)\nI0607 22:25:34.350617 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00814959 (* 1 = 0.00814959 loss)\nI0607 22:25:34.350623 13573 sgd_solver.cpp:106] Iteration 8280, lr = 0.001\nI0607 22:25:47.140962 13573 solver.cpp:229] Iteration 8300, loss = 0.792236\nI0607 22:25:47.141026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.462951 (* 1 = 0.462951 loss)\nI0607 22:25:47.141036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.359306 (* 1 = 0.359306 loss)\nI0607 22:25:47.141042 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.082422 (* 1 = 0.082422 loss)\nI0607 22:25:47.141047 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0135163 (* 1 = 0.0135163 loss)\nI0607 22:25:47.141053 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0855462 (* 1 = 0.0855462 loss)\nI0607 22:25:47.141059 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0135175 (* 1 = 0.0135175 loss)\nI0607 22:25:47.141067 13573 sgd_solver.cpp:106] Iteration 8300, lr = 0.001\nI0607 22:26:00.035931 13573 solver.cpp:229] Iteration 8320, loss = 0.988462\nI0607 22:26:00.035998 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.395007 (* 1 = 0.395007 loss)\nI0607 22:26:00.036008 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.273248 (* 1 = 0.273248 loss)\nI0607 22:26:00.036015 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.246308 (* 1 = 0.246308 loss)\nI0607 22:26:00.036020 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0283817 (* 1 = 0.0283817 loss)\nI0607 22:26:00.036026 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.274043 (* 1 = 0.274043 loss)\nI0607 22:26:00.036236 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0283807 (* 1 = 0.0283807 loss)\nI0607 22:26:00.036291 13573 sgd_solver.cpp:106] Iteration 8320, lr = 0.001\nI0607 22:26:12.779019 13573 solver.cpp:229] Iteration 8340, loss = 0.72779\nI0607 22:26:12.779098 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0769528 (* 1 = 0.0769528 loss)\nI0607 22:26:12.779112 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0948371 (* 1 = 0.0948371 loss)\nI0607 22:26:12.779122 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.158667 (* 1 = 0.158667 loss)\nI0607 22:26:12.779131 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0134092 (* 1 = 0.0134092 loss)\nI0607 22:26:12.779139 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.152016 (* 1 = 
0.152016 loss)\nI0607 22:26:12.779148 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.013404 (* 1 = 0.013404 loss)\nI0607 22:26:12.779156 13573 sgd_solver.cpp:106] Iteration 8340, lr = 0.001\nI0607 22:26:25.693044 13573 solver.cpp:229] Iteration 8360, loss = 0.5903\nI0607 22:26:25.693121 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00502731 (* 1 = 0.00502731 loss)\nI0607 22:26:25.693131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.018314 (* 1 = 0.018314 loss)\nI0607 22:26:25.693137 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.293255 (* 1 = 0.293255 loss)\nI0607 22:26:25.693142 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.027579 (* 1 = 0.027579 loss)\nI0607 22:26:25.693148 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.277847 (* 1 = 0.277847 loss)\nI0607 22:26:25.693161 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0275967 (* 1 = 0.0275967 loss)\nI0607 22:26:25.693168 13573 sgd_solver.cpp:106] Iteration 8360, lr = 0.001\nI0607 22:26:38.531137 13573 solver.cpp:229] Iteration 8380, loss = 0.640777\nI0607 22:26:38.531210 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0578886 (* 1 = 0.0578886 loss)\nI0607 22:26:38.531221 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.183623 (* 1 = 0.183623 loss)\nI0607 22:26:38.531227 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.247184 (* 1 = 0.247184 loss)\nI0607 22:26:38.531234 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0212239 (* 1 = 0.0212239 loss)\nI0607 22:26:38.531239 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228583 (* 1 = 0.228583 loss)\nI0607 22:26:38.531244 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0212202 (* 1 = 0.0212202 loss)\nI0607 22:26:38.531251 13573 sgd_solver.cpp:106] Iteration 8380, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:26:51.326206 13573 solver.cpp:229] Iteration 
8400, loss = 0.983461
I0607 22:26:51.326272 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.206203 (* 1 = 0.206203 loss)
I0607 22:26:51.326280 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.266757 (* 1 = 0.266757 loss)
I0607 22:26:51.326287 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128354 (* 1 = 0.128354 loss)
I0607 22:26:51.326292 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269689 (* 1 = 0.0269689 loss)
I0607 22:26:51.326298 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.127283 (* 1 = 0.127283 loss)
I0607 22:26:51.326303 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269988 (* 1 = 0.0269988 loss)
I0607 22:26:51.326310 13573 sgd_solver.cpp:106] Iteration 8400, lr = 0.001
[log continues in the same format for iterations 8420-9300: total loss fluctuates between ~0.50 and ~2.75, lr = 0.001 throughout, speed: 0.645s / iter]
I0607 22:36:45.315325 13573 solver.cpp:229] Iteration 9320, loss = 1.96845
I0607 22:36:45.315402 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000471556 (* 1 = 0.000471556 loss)
I0607 22:36:45.315413 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00625325 (* 1 = 0.00625325 loss)
I0607 22:36:45.315419 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.200285 (* 1 = 0.200285 loss)
I0607 22:36:45.315425 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0436927 (* 1 = 0.0436927 loss)
I0607 22:36:45.315431 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.258563 (* 1 = 0.258563 loss)\nI0607 22:36:45.315436 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0437012 (* 1 = 0.0437012 loss)\nI0607 22:36:45.315444 13573 sgd_solver.cpp:106] Iteration 9320, lr = 0.001\nI0607 22:36:58.139343 13573 solver.cpp:229] Iteration 9340, loss = 1.49437\nI0607 22:36:58.139408 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.33276 (* 1 = 0.33276 loss)\nI0607 22:36:58.139418 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.353691 (* 1 = 0.353691 loss)\nI0607 22:36:58.139423 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.57812 (* 1 = 0.57812 loss)\nI0607 22:36:58.139430 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0706813 (* 1 = 0.0706813 loss)\nI0607 22:36:58.139436 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.559385 (* 1 = 0.559385 loss)\nI0607 22:36:58.139441 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0706756 (* 1 = 0.0706756 loss)\nI0607 22:36:58.139448 13573 sgd_solver.cpp:106] Iteration 9340, lr = 0.001\nI0607 22:37:11.078083 13573 solver.cpp:229] Iteration 9360, loss = 1.13156\nI0607 22:37:11.078148 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.257259 (* 1 = 0.257259 loss)\nI0607 22:37:11.078158 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.152047 (* 1 = 0.152047 loss)\nI0607 22:37:11.078163 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.247723 (* 1 = 0.247723 loss)\nI0607 22:37:11.078171 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.110904 (* 1 = 0.110904 loss)\nI0607 22:37:11.078176 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.24377 (* 1 = 0.24377 loss)\nI0607 22:37:11.078181 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.110902 (* 1 = 0.110902 loss)\nI0607 22:37:11.078188 13573 sgd_solver.cpp:106] Iteration 9360, lr = 0.001\nI0607 
22:37:24.045315 13573 solver.cpp:229] Iteration 9380, loss = 1.08555\nI0607 22:37:24.045383 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.375424 (* 1 = 0.375424 loss)\nI0607 22:37:24.045392 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.369038 (* 1 = 0.369038 loss)\nI0607 22:37:24.045398 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.190729 (* 1 = 0.190729 loss)\nI0607 22:37:24.045404 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.199394 (* 1 = 0.199394 loss)\nI0607 22:37:24.045409 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.19678 (* 1 = 0.19678 loss)\nI0607 22:37:24.045415 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.199392 (* 1 = 0.199392 loss)\nI0607 22:37:24.045423 13573 sgd_solver.cpp:106] Iteration 9380, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:37:36.822238 13573 solver.cpp:229] Iteration 9400, loss = 1.11019\nI0607 22:37:36.822300 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.191857 (* 1 = 0.191857 loss)\nI0607 22:37:36.822309 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.220291 (* 1 = 0.220291 loss)\nI0607 22:37:36.822315 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.149462 (* 1 = 0.149462 loss)\nI0607 22:37:36.822321 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.185826 (* 1 = 0.185826 loss)\nI0607 22:37:36.822326 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.143573 (* 1 = 0.143573 loss)\nI0607 22:37:36.822332 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.185427 (* 1 = 0.185427 loss)\nI0607 22:37:36.822340 13573 sgd_solver.cpp:106] Iteration 9400, lr = 0.001\nI0607 22:37:49.646342 13573 solver.cpp:229] Iteration 9420, loss = 0.717548\nI0607 22:37:49.646407 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000955258 (* 1 = 0.000955258 loss)\nI0607 22:37:49.646417 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124417 (* 
1 = 0.124417 loss)\nI0607 22:37:49.646423 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.262169 (* 1 = 0.262169 loss)\nI0607 22:37:49.646430 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0358361 (* 1 = 0.0358361 loss)\nI0607 22:37:49.646435 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.281696 (* 1 = 0.281696 loss)\nI0607 22:37:49.646441 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.035847 (* 1 = 0.035847 loss)\nI0607 22:37:49.646448 13573 sgd_solver.cpp:106] Iteration 9420, lr = 0.001\nI0607 22:38:02.777556 13573 solver.cpp:229] Iteration 9440, loss = 1.80619\nI0607 22:38:02.777643 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.169996 (* 1 = 0.169996 loss)\nI0607 22:38:02.777653 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.274053 (* 1 = 0.274053 loss)\nI0607 22:38:02.777659 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.380479 (* 1 = 0.380479 loss)\nI0607 22:38:02.777667 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0639 (* 1 = 0.0639 loss)\nI0607 22:38:02.777673 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.382092 (* 1 = 0.382092 loss)\nI0607 22:38:02.777678 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0638972 (* 1 = 0.0638972 loss)\nI0607 22:38:02.777685 13573 sgd_solver.cpp:106] Iteration 9440, lr = 0.001\nI0607 22:38:15.500195 13573 solver.cpp:229] Iteration 9460, loss = 1.49607\nI0607 22:38:15.500255 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.193578 (* 1 = 0.193578 loss)\nI0607 22:38:15.500264 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132794 (* 1 = 0.132794 loss)\nI0607 22:38:15.500272 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0921548 (* 1 = 0.0921548 loss)\nI0607 22:38:15.500277 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.011945 (* 1 = 0.011945 loss)\nI0607 22:38:15.500283 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0933853 (* 1 = 0.0933853 loss)\nI0607 22:38:15.500289 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0119493 (* 1 = 0.0119493 loss)\nI0607 22:38:15.500298 13573 sgd_solver.cpp:106] Iteration 9460, lr = 0.001\nI0607 22:38:28.328634 13573 solver.cpp:229] Iteration 9480, loss = 1.3564\nI0607 22:38:28.328701 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.192117 (* 1 = 0.192117 loss)\nI0607 22:38:28.328709 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.232613 (* 1 = 0.232613 loss)\nI0607 22:38:28.328716 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.123452 (* 1 = 0.123452 loss)\nI0607 22:38:28.328722 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0121658 (* 1 = 0.0121658 loss)\nI0607 22:38:28.328727 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122823 (* 1 = 0.122823 loss)\nI0607 22:38:28.328733 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0121648 (* 1 = 0.0121648 loss)\nI0607 22:38:28.328740 13573 sgd_solver.cpp:106] Iteration 9480, lr = 0.001\nI0607 22:38:41.248742 13573 solver.cpp:229] Iteration 9500, loss = 1.74282\nI0607 22:38:41.248819 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.209774 (* 1 = 0.209774 loss)\nI0607 22:38:41.248828 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.231203 (* 1 = 0.231203 loss)\nI0607 22:38:41.248834 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.35926 (* 1 = 0.35926 loss)\nI0607 22:38:41.248841 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00985776 (* 1 = 0.00985776 loss)\nI0607 22:38:41.248847 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.352833 (* 1 = 0.352833 loss)\nI0607 22:38:41.248852 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00986527 (* 1 = 0.00986527 loss)\nI0607 22:38:41.248857 13573 sgd_solver.cpp:106] Iteration 9500, lr = 0.001\nI0607 
22:38:54.066079 13573 solver.cpp:229] Iteration 9520, loss = 0.893183\nI0607 22:38:54.066141 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.517588 (* 1 = 0.517588 loss)\nI0607 22:38:54.066151 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.195033 (* 1 = 0.195033 loss)\nI0607 22:38:54.066157 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102466 (* 1 = 0.102466 loss)\nI0607 22:38:54.066162 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0253736 (* 1 = 0.0253736 loss)\nI0607 22:38:54.066169 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108118 (* 1 = 0.108118 loss)\nI0607 22:38:54.066174 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0253745 (* 1 = 0.0253745 loss)\nI0607 22:38:54.066180 13573 sgd_solver.cpp:106] Iteration 9520, lr = 0.001\nI0607 22:39:06.863919 13573 solver.cpp:229] Iteration 9540, loss = 0.71695\nI0607 22:39:06.864003 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00518784 (* 1 = 0.00518784 loss)\nI0607 22:39:06.864013 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0668908 (* 1 = 0.0668908 loss)\nI0607 22:39:06.864018 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197039 (* 1 = 0.197039 loss)\nI0607 22:39:06.864024 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0100904 (* 1 = 0.0100904 loss)\nI0607 22:39:06.864029 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.227812 (* 1 = 0.227812 loss)\nI0607 22:39:06.864035 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0100956 (* 1 = 0.0100956 loss)\nI0607 22:39:06.864042 13573 sgd_solver.cpp:106] Iteration 9540, lr = 0.001\nI0607 22:39:19.842598 13573 solver.cpp:229] Iteration 9560, loss = 1.22045\nI0607 22:39:19.842664 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.189256 (* 1 = 0.189256 loss)\nI0607 22:39:19.842674 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.302114 (* 1 = 
0.302114 loss)\nI0607 22:39:19.842680 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0900155 (* 1 = 0.0900155 loss)\nI0607 22:39:19.842686 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00715135 (* 1 = 0.00715135 loss)\nI0607 22:39:19.842692 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.096858 (* 1 = 0.096858 loss)\nI0607 22:39:19.842699 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00714264 (* 1 = 0.00714264 loss)\nI0607 22:39:19.842706 13573 sgd_solver.cpp:106] Iteration 9560, lr = 0.001\nI0607 22:39:32.525992 13573 solver.cpp:229] Iteration 9580, loss = 1.05113\nI0607 22:39:32.526064 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.238551 (* 1 = 0.238551 loss)\nI0607 22:39:32.526073 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.421538 (* 1 = 0.421538 loss)\nI0607 22:39:32.526079 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197251 (* 1 = 0.197251 loss)\nI0607 22:39:32.526085 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.114802 (* 1 = 0.114802 loss)\nI0607 22:39:32.526091 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.190754 (* 1 = 0.190754 loss)\nI0607 22:39:32.526098 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.114812 (* 1 = 0.114812 loss)\nI0607 22:39:32.526103 13573 sgd_solver.cpp:106] Iteration 9580, lr = 0.001\nspeed: 0.645s / iter\nI0607 22:39:45.584563 13573 solver.cpp:229] Iteration 9600, loss = 0.522616\nI0607 22:39:45.584631 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.131337 (* 1 = 0.131337 loss)\nI0607 22:39:45.584640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.130588 (* 1 = 0.130588 loss)\nI0607 22:39:45.584646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10393 (* 1 = 0.10393 loss)\nI0607 22:39:45.584652 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0230219 (* 1 = 0.0230219 loss)\nI0607 
22:39:45.584659 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0921752 (* 1 = 0.0921752 loss)\nI0607 22:39:45.584666 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0230454 (* 1 = 0.0230454 loss)\nI0607 22:39:45.584673 13573 sgd_solver.cpp:106] Iteration 9600, lr = 0.001\nI0607 22:39:58.485096 13573 solver.cpp:229] Iteration 9620, loss = 1.82681\nI0607 22:39:58.485164 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.104119 (* 1 = 0.104119 loss)\nI0607 22:39:58.485174 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119624 (* 1 = 0.119624 loss)\nI0607 22:39:58.485180 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.231135 (* 1 = 0.231135 loss)\nI0607 22:39:58.485186 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0582119 (* 1 = 0.0582119 loss)\nI0607 22:39:58.485191 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.232114 (* 1 = 0.232114 loss)\nI0607 22:39:58.485198 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.058219 (* 1 = 0.058219 loss)\nI0607 22:39:58.485204 13573 sgd_solver.cpp:106] Iteration 9620, lr = 0.001\nI0607 22:40:11.353164 13573 solver.cpp:229] Iteration 9640, loss = 0.701188\nI0607 22:40:11.353241 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.139964 (* 1 = 0.139964 loss)\nI0607 22:40:11.353250 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.117314 (* 1 = 0.117314 loss)\nI0607 22:40:11.353256 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.211887 (* 1 = 0.211887 loss)\nI0607 22:40:11.353262 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0193952 (* 1 = 0.0193952 loss)\nI0607 22:40:11.353268 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.204 (* 1 = 0.204 loss)\nI0607 22:40:11.353274 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0193882 (* 1 = 0.0193882 loss)\nI0607 22:40:11.353281 13573 sgd_solver.cpp:106] Iteration 9640, lr = 
0.001\nI0607 22:40:24.231057 13573 solver.cpp:229] Iteration 9660, loss = 1.32409\nI0607 22:40:24.231118 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116161 (* 1 = 0.116161 loss)\nI0607 22:40:24.231128 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.14521 (* 1 = 0.14521 loss)\nI0607 22:40:24.231135 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.219175 (* 1 = 0.219175 loss)\nI0607 22:40:24.231142 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0285807 (* 1 = 0.0285807 loss)\nI0607 22:40:24.231148 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228428 (* 1 = 0.228428 loss)\nI0607 22:40:24.231154 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0285836 (* 1 = 0.0285836 loss)\nI0607 22:40:24.231161 13573 sgd_solver.cpp:106] Iteration 9660, lr = 0.001\nI0607 22:40:37.149178 13573 solver.cpp:229] Iteration 9680, loss = 0.546682\nI0607 22:40:37.149261 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.146395 (* 1 = 0.146395 loss)\nI0607 22:40:37.149271 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.136039 (* 1 = 0.136039 loss)\nI0607 22:40:37.149277 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.106691 (* 1 = 0.106691 loss)\nI0607 22:40:37.149283 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0377703 (* 1 = 0.0377703 loss)\nI0607 22:40:37.149289 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105829 (* 1 = 0.105829 loss)\nI0607 22:40:37.149296 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0377573 (* 1 = 0.0377573 loss)\nI0607 22:40:37.149302 13573 sgd_solver.cpp:106] Iteration 9680, lr = 0.001\nI0607 22:40:49.963059 13573 solver.cpp:229] Iteration 9700, loss = 0.71925\nI0607 22:40:49.963129 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127051 (* 1 = 0.127051 loss)\nI0607 22:40:49.963140 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.15821 (* 1 = 
0.15821 loss)\nI0607 22:40:49.963145 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.179439 (* 1 = 0.179439 loss)\nI0607 22:40:49.963152 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0259613 (* 1 = 0.0259613 loss)\nI0607 22:40:49.963158 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.18282 (* 1 = 0.18282 loss)\nI0607 22:40:49.963165 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.025964 (* 1 = 0.025964 loss)\nI0607 22:40:49.963171 13573 sgd_solver.cpp:106] Iteration 9700, lr = 0.001\nI0607 22:41:02.575078 13573 solver.cpp:229] Iteration 9720, loss = 0.828773\nI0607 22:41:02.575141 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.334465 (* 1 = 0.334465 loss)\nI0607 22:41:02.575150 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.205243 (* 1 = 0.205243 loss)\nI0607 22:41:02.575156 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.099454 (* 1 = 0.099454 loss)\nI0607 22:41:02.575162 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0155754 (* 1 = 0.0155754 loss)\nI0607 22:41:02.575168 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0943462 (* 1 = 0.0943462 loss)\nI0607 22:41:02.575173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0155741 (* 1 = 0.0155741 loss)\nI0607 22:41:02.575181 13573 sgd_solver.cpp:106] Iteration 9720, lr = 0.001\nI0607 22:41:15.577287 13573 solver.cpp:229] Iteration 9740, loss = 1.05682\nI0607 22:41:15.577354 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.402201 (* 1 = 0.402201 loss)\nI0607 22:41:15.577364 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.581512 (* 1 = 0.581512 loss)\nI0607 22:41:15.577370 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22919 (* 1 = 0.22919 loss)\nI0607 22:41:15.577376 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0597368 (* 1 = 0.0597368 loss)\nI0607 22:41:15.577381 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.233814 (* 1 = 0.233814 loss)\nI0607 22:41:15.577388 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0597364 (* 1 = 0.0597364 loss)\nI0607 22:41:15.577394 13573 sgd_solver.cpp:106] Iteration 9740, lr = 0.001\nI0607 22:41:28.298130 13573 solver.cpp:229] Iteration 9760, loss = 0.715535\nI0607 22:41:28.298197 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127562 (* 1 = 0.127562 loss)\nI0607 22:41:28.298207 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.181256 (* 1 = 0.181256 loss)\nI0607 22:41:28.298213 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.120388 (* 1 = 0.120388 loss)\nI0607 22:41:28.298219 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0232651 (* 1 = 0.0232651 loss)\nI0607 22:41:28.298225 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.15701 (* 1 = 0.15701 loss)\nI0607 22:41:28.298231 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.023264 (* 1 = 0.023264 loss)\nI0607 22:41:28.298238 13573 sgd_solver.cpp:106] Iteration 9760, lr = 0.001\nI0607 22:41:41.145508 13573 solver.cpp:229] Iteration 9780, loss = 0.466211\nI0607 22:41:41.145632 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0273772 (* 1 = 0.0273772 loss)\nI0607 22:41:41.145642 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0151189 (* 1 = 0.0151189 loss)\nI0607 22:41:41.145648 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.220895 (* 1 = 0.220895 loss)\nI0607 22:41:41.145654 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0207658 (* 1 = 0.0207658 loss)\nI0607 22:41:41.145660 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.215232 (* 1 = 0.215232 loss)\nI0607 22:41:41.145665 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0207664 (* 1 = 0.0207664 loss)\nI0607 22:41:41.145673 13573 sgd_solver.cpp:106] Iteration 9780, lr = 0.001\nspeed: 
0.645s / iter\nI0607 22:41:54.148308 13573 solver.cpp:229] Iteration 9800, loss = 0.679762\nI0607 22:41:54.148398 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.270922 (* 1 = 0.270922 loss)\nI0607 22:41:54.148408 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.494796 (* 1 = 0.494796 loss)\nI0607 22:41:54.148416 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0848961 (* 1 = 0.0848961 loss)\nI0607 22:41:54.148422 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0225229 (* 1 = 0.0225229 loss)\nI0607 22:41:54.148427 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0806065 (* 1 = 0.0806065 loss)\nI0607 22:41:54.148432 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0225254 (* 1 = 0.0225254 loss)\nI0607 22:41:54.148440 13573 sgd_solver.cpp:106] Iteration 9800, lr = 0.001\nI0607 22:42:06.890444 13573 solver.cpp:229] Iteration 9820, loss = 1.91258\nI0607 22:42:06.890506 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0573414 (* 1 = 0.0573414 loss)\nI0607 22:42:06.890516 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0698063 (* 1 = 0.0698063 loss)\nI0607 22:42:06.890522 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.56547 (* 1 = 0.56547 loss)\nI0607 22:42:06.890528 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.148051 (* 1 = 0.148051 loss)\nI0607 22:42:06.890534 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.548324 (* 1 = 0.548324 loss)\nI0607 22:42:06.890539 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.148075 (* 1 = 0.148075 loss)\nI0607 22:42:06.890547 13573 sgd_solver.cpp:106] Iteration 9820, lr = 0.001\nI0607 22:42:20.030854 13573 solver.cpp:229] Iteration 9840, loss = 1.12377\nI0607 22:42:20.030918 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.263103 (* 1 = 0.263103 loss)\nI0607 22:42:20.030927 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.265985 (* 1 = 0.265985 loss)\nI0607 22:42:20.030933 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.346285 (* 1 = 0.346285 loss)\nI0607 22:42:20.030939 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0346051 (* 1 = 0.0346051 loss)\nI0607 22:42:20.030946 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.322999 (* 1 = 0.322999 loss)\nI0607 22:42:20.030951 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0346162 (* 1 = 0.0346162 loss)\nI0607 22:42:20.030958 13573 sgd_solver.cpp:106] Iteration 9840, lr = 0.001\nI0607 22:42:32.582294 13573 solver.cpp:229] Iteration 9860, loss = 0.893553\nI0607 22:42:32.582370 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00482133 (* 1 = 0.00482133 loss)\nI0607 22:42:32.582379 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0106534 (* 1 = 0.0106534 loss)\nI0607 22:42:32.582386 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.455931 (* 1 = 0.455931 loss)\nI0607 22:42:32.582391 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.176651 (* 1 = 0.176651 loss)\nI0607 22:42:32.582396 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.462914 (* 1 = 0.462914 loss)\nI0607 22:42:32.582402 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.176689 (* 1 = 0.176689 loss)\nI0607 22:42:32.582409 13573 sgd_solver.cpp:106] Iteration 9860, lr = 0.001\nI0607 22:42:45.665228 13573 solver.cpp:229] Iteration 9880, loss = 1.37202\nI0607 22:42:45.665292 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00630533 (* 1 = 0.00630533 loss)\nI0607 22:42:45.665302 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0220594 (* 1 = 0.0220594 loss)\nI0607 22:42:45.665307 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.276371 (* 1 = 0.276371 loss)\nI0607 22:42:45.665313 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0532664 (* 1 = 0.0532664 loss)\nI0607 
22:42:45.665318 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.287671 (* 1 = 0.287671 loss)\nI0607 22:42:45.665324 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0532712 (* 1 = 0.0532712 loss)\nI0607 22:42:45.665331 13573 sgd_solver.cpp:106] Iteration 9880, lr = 0.001\nI0607 22:42:58.530732 13573 solver.cpp:229] Iteration 9900, loss = 1.01532\nI0607 22:42:58.530812 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.426997 (* 1 = 0.426997 loss)\nI0607 22:42:58.530822 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.211369 (* 1 = 0.211369 loss)\nI0607 22:42:58.530828 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.187763 (* 1 = 0.187763 loss)\nI0607 22:42:58.530834 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0340656 (* 1 = 0.0340656 loss)\nI0607 22:42:58.530840 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183684 (* 1 = 0.183684 loss)\nI0607 22:42:58.530845 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0340642 (* 1 = 0.0340642 loss)\nI0607 22:42:58.530853 13573 sgd_solver.cpp:106] Iteration 9900, lr = 0.001\nI0607 22:43:11.382957 13573 solver.cpp:229] Iteration 9920, loss = 0.693535\nI0607 22:43:11.383030 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.167199 (* 1 = 0.167199 loss)\nI0607 22:43:11.383040 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.175376 (* 1 = 0.175376 loss)\nI0607 22:43:11.383046 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.204066 (* 1 = 0.204066 loss)\nI0607 22:43:11.383052 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0282903 (* 1 = 0.0282903 loss)\nI0607 22:43:11.383059 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217633 (* 1 = 0.217633 loss)\nI0607 22:43:11.383064 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0282903 (* 1 = 0.0282903 loss)\nI0607 22:43:11.383071 13573 sgd_solver.cpp:106] Iteration 9920, 
Training log excerpt (FP_Net end2end, ~0.645 s/iter; subsequent iterations follow the same format and are trimmed here):

```
I0607 22:43:24.306257 13573 solver.cpp:229] Iteration 9940, loss = 1.09885
I0607 22:43:24.306324 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.262623 (* 1 = 0.262623 loss)
I0607 22:43:24.306334 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.257416 (* 1 = 0.257416 loss)
I0607 22:43:24.306341 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.24262 (* 1 = 0.24262 loss)
I0607 22:43:24.306347 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0616716 (* 1 = 0.0616716 loss)
I0607 22:43:24.306354 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.24351 (* 1 = 0.24351 loss)
I0607 22:43:24.306360 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0616737 (* 1 = 0.0616737 loss)
I0607 22:43:24.306366 13573 sgd_solver.cpp:106] Iteration 9940, lr = 0.001
...
speed: 0.645s / iter
Wrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_10000.caffemodel
...
```
Train net output #4: rpn_cls_loss = 0.210482 (* 1 = 0.210482 loss)\nI0607 22:53:43.398403 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0390975 (* 1 = 0.0390975 loss)\nI0607 22:53:43.398411 13573 sgd_solver.cpp:106] Iteration 10900, lr = 0.001\nI0607 22:53:56.633810 13573 solver.cpp:229] Iteration 10920, loss = 0.59788\nI0607 22:53:56.633886 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.132376 (* 1 = 0.132376 loss)\nI0607 22:53:56.633896 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0674266 (* 1 = 0.0674266 loss)\nI0607 22:53:56.633903 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.295997 (* 1 = 0.295997 loss)\nI0607 22:53:56.633908 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0209683 (* 1 = 0.0209683 loss)\nI0607 22:53:56.633913 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.255642 (* 1 = 0.255642 loss)\nI0607 22:53:56.633919 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0209678 (* 1 = 0.0209678 loss)\nI0607 22:53:56.633927 13573 sgd_solver.cpp:106] Iteration 10920, lr = 0.001\nI0607 22:54:09.413327 13573 solver.cpp:229] Iteration 10940, loss = 1.16838\nI0607 22:54:09.413394 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.491618 (* 1 = 0.491618 loss)\nI0607 22:54:09.413404 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.439016 (* 1 = 0.439016 loss)\nI0607 22:54:09.413410 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.28445 (* 1 = 0.28445 loss)\nI0607 22:54:09.413416 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0424794 (* 1 = 0.0424794 loss)\nI0607 22:54:09.413422 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.280656 (* 1 = 0.280656 loss)\nI0607 22:54:09.413429 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0424732 (* 1 = 0.0424732 loss)\nI0607 22:54:09.413435 13573 sgd_solver.cpp:106] Iteration 10940, lr = 0.001\nI0607 22:54:22.337906 
13573 solver.cpp:229] Iteration 10960, loss = 0.906879\nI0607 22:54:22.337978 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00335759 (* 1 = 0.00335759 loss)\nI0607 22:54:22.337988 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0690361 (* 1 = 0.0690361 loss)\nI0607 22:54:22.337994 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.324285 (* 1 = 0.324285 loss)\nI0607 22:54:22.338001 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.100974 (* 1 = 0.100974 loss)\nI0607 22:54:22.338006 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.350236 (* 1 = 0.350236 loss)\nI0607 22:54:22.338012 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.100984 (* 1 = 0.100984 loss)\nI0607 22:54:22.338021 13573 sgd_solver.cpp:106] Iteration 10960, lr = 0.001\nI0607 22:54:35.097681 13573 solver.cpp:229] Iteration 10980, loss = 0.650674\nI0607 22:54:35.097753 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0222985 (* 1 = 0.0222985 loss)\nI0607 22:54:35.097762 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0310762 (* 1 = 0.0310762 loss)\nI0607 22:54:35.097769 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241449 (* 1 = 0.241449 loss)\nI0607 22:54:35.097774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00662692 (* 1 = 0.00662692 loss)\nI0607 22:54:35.097779 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226612 (* 1 = 0.226612 loss)\nI0607 22:54:35.097785 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00662944 (* 1 = 0.00662944 loss)\nI0607 22:54:35.097792 13573 sgd_solver.cpp:106] Iteration 10980, lr = 0.001\nspeed: 0.644s / iter\nI0607 22:54:47.913223 13573 solver.cpp:229] Iteration 11000, loss = 0.529577\nI0607 22:54:47.913286 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.200937 (* 1 = 0.200937 loss)\nI0607 22:54:47.913295 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.233762 (* 1 = 0.233762 loss)\nI0607 22:54:47.913302 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.085302 (* 1 = 0.085302 loss)\nI0607 22:54:47.913308 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0259505 (* 1 = 0.0259505 loss)\nI0607 22:54:47.913314 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0795466 (* 1 = 0.0795466 loss)\nI0607 22:54:47.913321 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0259496 (* 1 = 0.0259496 loss)\nI0607 22:54:47.913328 13573 sgd_solver.cpp:106] Iteration 11000, lr = 0.001\nI0607 22:55:00.650753 13573 solver.cpp:229] Iteration 11020, loss = 1.36412\nI0607 22:55:00.650835 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.362379 (* 1 = 0.362379 loss)\nI0607 22:55:00.650845 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.40143 (* 1 = 0.40143 loss)\nI0607 22:55:00.650851 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.556352 (* 1 = 0.556352 loss)\nI0607 22:55:00.650857 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0943495 (* 1 = 0.0943495 loss)\nI0607 22:55:00.650862 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.573537 (* 1 = 0.573537 loss)\nI0607 22:55:00.650868 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.094348 (* 1 = 0.094348 loss)\nI0607 22:55:00.650876 13573 sgd_solver.cpp:106] Iteration 11020, lr = 0.001\nI0607 22:55:13.538197 13573 solver.cpp:229] Iteration 11040, loss = 0.488116\nI0607 22:55:13.538269 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00379011 (* 1 = 0.00379011 loss)\nI0607 22:55:13.538277 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0216401 (* 1 = 0.0216401 loss)\nI0607 22:55:13.538283 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.227274 (* 1 = 0.227274 loss)\nI0607 22:55:13.538290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00397147 (* 1 = 0.00397147 loss)\nI0607 
22:55:13.538295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.206614 (* 1 = 0.206614 loss)\nI0607 22:55:13.538300 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00396691 (* 1 = 0.00396691 loss)\nI0607 22:55:13.538307 13573 sgd_solver.cpp:106] Iteration 11040, lr = 0.001\nI0607 22:55:26.534003 13573 solver.cpp:229] Iteration 11060, loss = 1.49795\nI0607 22:55:26.534128 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.187039 (* 1 = 0.187039 loss)\nI0607 22:55:26.534137 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.163959 (* 1 = 0.163959 loss)\nI0607 22:55:26.534143 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.739779 (* 1 = 0.739779 loss)\nI0607 22:55:26.534148 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.294057 (* 1 = 0.294057 loss)\nI0607 22:55:26.534154 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.73094 (* 1 = 0.73094 loss)\nI0607 22:55:26.534159 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.294051 (* 1 = 0.294051 loss)\nI0607 22:55:26.534167 13573 sgd_solver.cpp:106] Iteration 11060, lr = 0.001\nI0607 22:55:39.479826 13573 solver.cpp:229] Iteration 11080, loss = 0.714323\nI0607 22:55:39.479892 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00796353 (* 1 = 0.00796353 loss)\nI0607 22:55:39.479902 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00962056 (* 1 = 0.00962056 loss)\nI0607 22:55:39.479907 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.301965 (* 1 = 0.301965 loss)\nI0607 22:55:39.479912 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.108138 (* 1 = 0.108138 loss)\nI0607 22:55:39.479918 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.313648 (* 1 = 0.313648 loss)\nI0607 22:55:39.479923 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.108135 (* 1 = 0.108135 loss)\nI0607 22:55:39.479930 13573 sgd_solver.cpp:106] Iteration 
11080, lr = 0.001\nI0607 22:55:52.420205 13573 solver.cpp:229] Iteration 11100, loss = 0.695181\nI0607 22:55:52.420270 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.165488 (* 1 = 0.165488 loss)\nI0607 22:55:52.420282 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.196962 (* 1 = 0.196962 loss)\nI0607 22:55:52.420289 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.163927 (* 1 = 0.163927 loss)\nI0607 22:55:52.420295 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.104362 (* 1 = 0.104362 loss)\nI0607 22:55:52.420300 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.153272 (* 1 = 0.153272 loss)\nI0607 22:55:52.420306 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.104362 (* 1 = 0.104362 loss)\nI0607 22:55:52.420315 13573 sgd_solver.cpp:106] Iteration 11100, lr = 0.001\nI0607 22:56:05.199236 13573 solver.cpp:229] Iteration 11120, loss = 1.3335\nI0607 22:56:05.199308 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.284324 (* 1 = 0.284324 loss)\nI0607 22:56:05.199317 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.271897 (* 1 = 0.271897 loss)\nI0607 22:56:05.199323 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.305179 (* 1 = 0.305179 loss)\nI0607 22:56:05.199329 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.041616 (* 1 = 0.041616 loss)\nI0607 22:56:05.199335 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278305 (* 1 = 0.278305 loss)\nI0607 22:56:05.199340 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0416257 (* 1 = 0.0416257 loss)\nI0607 22:56:05.199347 13573 sgd_solver.cpp:106] Iteration 11120, lr = 0.001\nI0607 22:56:18.054066 13573 solver.cpp:229] Iteration 11140, loss = 1.20743\nI0607 22:56:18.054131 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0882766 (* 1 = 0.0882766 loss)\nI0607 22:56:18.054141 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.0891305 (* 1 = 0.0891305 loss)\nI0607 22:56:18.054147 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302358 (* 1 = 0.302358 loss)\nI0607 22:56:18.054152 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0243278 (* 1 = 0.0243278 loss)\nI0607 22:56:18.054157 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.335032 (* 1 = 0.335032 loss)\nI0607 22:56:18.054163 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0243409 (* 1 = 0.0243409 loss)\nI0607 22:56:18.054170 13573 sgd_solver.cpp:106] Iteration 11140, lr = 0.001\nI0607 22:56:30.912493 13573 solver.cpp:229] Iteration 11160, loss = 1.03745\nI0607 22:56:30.912575 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.171192 (* 1 = 0.171192 loss)\nI0607 22:56:30.912585 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.241973 (* 1 = 0.241973 loss)\nI0607 22:56:30.912590 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.260906 (* 1 = 0.260906 loss)\nI0607 22:56:30.912597 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00761661 (* 1 = 0.00761661 loss)\nI0607 22:56:30.912603 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239232 (* 1 = 0.239232 loss)\nI0607 22:56:30.912609 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00762537 (* 1 = 0.00762537 loss)\nI0607 22:56:30.912617 13573 sgd_solver.cpp:106] Iteration 11160, lr = 0.001\nI0607 22:56:43.806545 13573 solver.cpp:229] Iteration 11180, loss = 0.960993\nI0607 22:56:43.806617 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0943121 (* 1 = 0.0943121 loss)\nI0607 22:56:43.806627 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.126427 (* 1 = 0.126427 loss)\nI0607 22:56:43.806632 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119807 (* 1 = 0.119807 loss)\nI0607 22:56:43.806638 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0339993 (* 1 = 0.0339993 loss)\nI0607 
22:56:43.806643 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118315 (* 1 = 0.118315 loss)\nI0607 22:56:43.806649 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0339993 (* 1 = 0.0339993 loss)\nI0607 22:56:43.806655 13573 sgd_solver.cpp:106] Iteration 11180, lr = 0.001\nspeed: 0.644s / iter\nI0607 22:56:56.637001 13573 solver.cpp:229] Iteration 11200, loss = 0.859399\nI0607 22:56:56.637065 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.132692 (* 1 = 0.132692 loss)\nI0607 22:56:56.637075 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119633 (* 1 = 0.119633 loss)\nI0607 22:56:56.637080 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.215506 (* 1 = 0.215506 loss)\nI0607 22:56:56.637086 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0113007 (* 1 = 0.0113007 loss)\nI0607 22:56:56.637092 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.199858 (* 1 = 0.199858 loss)\nI0607 22:56:56.637097 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0112953 (* 1 = 0.0112953 loss)\nI0607 22:56:56.637104 13573 sgd_solver.cpp:106] Iteration 11200, lr = 0.001\nI0607 22:57:09.403617 13573 solver.cpp:229] Iteration 11220, loss = 0.777475\nI0607 22:57:09.403679 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0568305 (* 1 = 0.0568305 loss)\nI0607 22:57:09.403695 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0693834 (* 1 = 0.0693834 loss)\nI0607 22:57:09.403702 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228563 (* 1 = 0.228563 loss)\nI0607 22:57:09.403708 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.011939 (* 1 = 0.011939 loss)\nI0607 22:57:09.403714 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216635 (* 1 = 0.216635 loss)\nI0607 22:57:09.403720 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0119284 (* 1 = 0.0119284 loss)\nI0607 22:57:09.403728 13573 
sgd_solver.cpp:106] Iteration 11220, lr = 0.001\nI0607 22:57:22.205170 13573 solver.cpp:229] Iteration 11240, loss = 0.963331\nI0607 22:57:22.205235 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.489914 (* 1 = 0.489914 loss)\nI0607 22:57:22.205245 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.229431 (* 1 = 0.229431 loss)\nI0607 22:57:22.205250 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.305659 (* 1 = 0.305659 loss)\nI0607 22:57:22.205257 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0459537 (* 1 = 0.0459537 loss)\nI0607 22:57:22.205263 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.310255 (* 1 = 0.310255 loss)\nI0607 22:57:22.205269 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0459493 (* 1 = 0.0459493 loss)\nI0607 22:57:22.205276 13573 sgd_solver.cpp:106] Iteration 11240, lr = 0.001\nI0607 22:57:35.287060 13573 solver.cpp:229] Iteration 11260, loss = 0.992616\nI0607 22:57:35.287124 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.113242 (* 1 = 0.113242 loss)\nI0607 22:57:35.287134 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0910401 (* 1 = 0.0910401 loss)\nI0607 22:57:35.287140 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0995117 (* 1 = 0.0995117 loss)\nI0607 22:57:35.287145 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0644946 (* 1 = 0.0644946 loss)\nI0607 22:57:35.287151 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.096931 (* 1 = 0.096931 loss)\nI0607 22:57:35.287158 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0644871 (* 1 = 0.0644871 loss)\nI0607 22:57:35.287164 13573 sgd_solver.cpp:106] Iteration 11260, lr = 0.001\nI0607 22:57:48.083987 13573 solver.cpp:229] Iteration 11280, loss = 0.395464\nI0607 22:57:48.084060 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000758171 (* 1 = 0.000758171 loss)\nI0607 22:57:48.084071 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.00699224 (* 1 = 0.00699224 loss)\nI0607 22:57:48.084079 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.216364 (* 1 = 0.216364 loss)\nI0607 22:57:48.084084 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0106147 (* 1 = 0.0106147 loss)\nI0607 22:57:48.084090 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.166584 (* 1 = 0.166584 loss)\nI0607 22:57:48.084097 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0106048 (* 1 = 0.0106048 loss)\nI0607 22:57:48.084105 13573 sgd_solver.cpp:106] Iteration 11280, lr = 0.001\nI0607 22:58:00.860615 13573 solver.cpp:229] Iteration 11300, loss = 0.592534\nI0607 22:58:00.860682 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0645309 (* 1 = 0.0645309 loss)\nI0607 22:58:00.860690 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112599 (* 1 = 0.112599 loss)\nI0607 22:58:00.860697 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.188148 (* 1 = 0.188148 loss)\nI0607 22:58:00.860702 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00828268 (* 1 = 0.00828268 loss)\nI0607 22:58:00.860708 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.158296 (* 1 = 0.158296 loss)\nI0607 22:58:00.860713 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00828148 (* 1 = 0.00828148 loss)\nI0607 22:58:00.860720 13573 sgd_solver.cpp:106] Iteration 11300, lr = 0.001\nI0607 22:58:13.778959 13573 solver.cpp:229] Iteration 11320, loss = 0.829923\nI0607 22:58:13.779026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.239443 (* 1 = 0.239443 loss)\nI0607 22:58:13.779034 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.215926 (* 1 = 0.215926 loss)\nI0607 22:58:13.779039 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.319778 (* 1 = 0.319778 loss)\nI0607 22:58:13.779045 13573 solver.cpp:245]     Train net output #3: 
p2_rpn_loss_bbox = 0.0112023 (* 1 = 0.0112023 loss)\nI0607 22:58:13.779052 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.260706 (* 1 = 0.260706 loss)\nI0607 22:58:13.779057 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0111907 (* 1 = 0.0111907 loss)\nI0607 22:58:13.779063 13573 sgd_solver.cpp:106] Iteration 11320, lr = 0.001\nI0607 22:58:26.715968 13573 solver.cpp:229] Iteration 11340, loss = 0.715245\nI0607 22:58:26.716099 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.167624 (* 1 = 0.167624 loss)\nI0607 22:58:26.716109 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190051 (* 1 = 0.190051 loss)\nI0607 22:58:26.716121 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0938671 (* 1 = 0.0938671 loss)\nI0607 22:58:26.716127 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00542635 (* 1 = 0.00542635 loss)\nI0607 22:58:26.716133 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0982835 (* 1 = 0.0982835 loss)\nI0607 22:58:26.716138 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00542693 (* 1 = 0.00542693 loss)\nI0607 22:58:26.716145 13573 sgd_solver.cpp:106] Iteration 11340, lr = 0.001\nI0607 22:58:39.611964 13573 solver.cpp:229] Iteration 11360, loss = 0.780529\nI0607 22:58:39.612023 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.222408 (* 1 = 0.222408 loss)\nI0607 22:58:39.612032 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.290182 (* 1 = 0.290182 loss)\nI0607 22:58:39.612040 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.13366 (* 1 = 0.13366 loss)\nI0607 22:58:39.612046 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0301712 (* 1 = 0.0301712 loss)\nI0607 22:58:39.612051 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139468 (* 1 = 0.139468 loss)\nI0607 22:58:39.612057 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0301794 (* 1 = 0.0301794 
loss)\nI0607 22:58:39.612064 13573 sgd_solver.cpp:106] Iteration 11360, lr = 0.001\nI0607 22:58:52.493432 13573 solver.cpp:229] Iteration 11380, loss = 1.24924\nI0607 22:58:52.493513 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0818639 (* 1 = 0.0818639 loss)\nI0607 22:58:52.493523 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.221395 (* 1 = 0.221395 loss)\nI0607 22:58:52.493530 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.227378 (* 1 = 0.227378 loss)\nI0607 22:58:52.493536 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0363947 (* 1 = 0.0363947 loss)\nI0607 22:58:52.493541 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.237501 (* 1 = 0.237501 loss)\nI0607 22:58:52.493547 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0364231 (* 1 = 0.0364231 loss)\nI0607 22:58:52.493556 13573 sgd_solver.cpp:106] Iteration 11380, lr = 0.001\nspeed: 0.644s / iter\nI0607 22:59:05.496464 13573 solver.cpp:229] Iteration 11400, loss = 0.709\nI0607 22:59:05.496527 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.189927 (* 1 = 0.189927 loss)\nI0607 22:59:05.496537 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.317496 (* 1 = 0.317496 loss)\nI0607 22:59:05.496543 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0982438 (* 1 = 0.0982438 loss)\nI0607 22:59:05.496548 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0141905 (* 1 = 0.0141905 loss)\nI0607 22:59:05.496554 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.1024 (* 1 = 0.1024 loss)\nI0607 22:59:05.496561 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0141907 (* 1 = 0.0141907 loss)\nI0607 22:59:05.496567 13573 sgd_solver.cpp:106] Iteration 11400, lr = 0.001\nI0607 22:59:18.437458 13573 solver.cpp:229] Iteration 11420, loss = 1.19648\nI0607 22:59:18.437522 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00145931 (* 1 = 
0.00145931 loss)\nI0607 22:59:18.437531 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0383962 (* 1 = 0.0383962 loss)\nI0607 22:59:18.437537 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.296272 (* 1 = 0.296272 loss)\nI0607 22:59:18.437543 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0919571 (* 1 = 0.0919571 loss)\nI0607 22:59:18.437549 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.29735 (* 1 = 0.29735 loss)\nI0607 22:59:18.437554 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0918991 (* 1 = 0.0918991 loss)\nI0607 22:59:18.437561 13573 sgd_solver.cpp:106] Iteration 11420, lr = 0.001\nI0607 22:59:31.097065 13573 solver.cpp:229] Iteration 11440, loss = 1.18286\nI0607 22:59:31.097139 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.301217 (* 1 = 0.301217 loss)\nI0607 22:59:31.097148 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.207001 (* 1 = 0.207001 loss)\nI0607 22:59:31.097154 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.413848 (* 1 = 0.413848 loss)\nI0607 22:59:31.097162 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0882936 (* 1 = 0.0882936 loss)\nI0607 22:59:31.097167 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.421576 (* 1 = 0.421576 loss)\nI0607 22:59:31.097173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0883191 (* 1 = 0.0883191 loss)\nI0607 22:59:31.097180 13573 sgd_solver.cpp:106] Iteration 11440, lr = 0.001\nI0607 22:59:44.012069 13573 solver.cpp:229] Iteration 11460, loss = 0.939721\nI0607 22:59:44.012138 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.333948 (* 1 = 0.333948 loss)\nI0607 22:59:44.012147 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.275769 (* 1 = 0.275769 loss)\nI0607 22:59:44.012154 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.210243 (* 1 = 0.210243 loss)\nI0607 22:59:44.012161 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0325554 (* 1 = 0.0325554 loss)\nI0607 22:59:44.012166 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216675 (* 1 = 0.216675 loss)\nI0607 22:59:44.012172 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0325537 (* 1 = 0.0325537 loss)\nI0607 22:59:44.012179 13573 sgd_solver.cpp:106] Iteration 11460, lr = 0.001\nI0607 22:59:56.827244 13573 solver.cpp:229] Iteration 11480, loss = 0.779286\nI0607 22:59:56.827320 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.333657 (* 1 = 0.333657 loss)\nI0607 22:59:56.827329 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.261973 (* 1 = 0.261973 loss)\nI0607 22:59:56.827335 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136124 (* 1 = 0.136124 loss)\nI0607 22:59:56.827342 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0590417 (* 1 = 0.0590417 loss)\nI0607 22:59:56.827347 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.129578 (* 1 = 0.129578 loss)\nI0607 22:59:56.827353 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0590421 (* 1 = 0.0590421 loss)\nI0607 22:59:56.827360 13573 sgd_solver.cpp:106] Iteration 11480, lr = 0.001\nI0607 23:00:09.746592 13573 solver.cpp:229] Iteration 11500, loss = 1.50444\nI0607 23:00:09.746660 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.381087 (* 1 = 0.381087 loss)\nI0607 23:00:09.746671 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.504534 (* 1 = 0.504534 loss)\nI0607 23:00:09.746677 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207263 (* 1 = 0.207263 loss)\nI0607 23:00:09.746683 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.18453 (* 1 = 0.18453 loss)\nI0607 23:00:09.746690 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.208678 (* 1 = 0.208678 loss)\nI0607 23:00:09.746695 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox 
= 0.184779 (* 1 = 0.184779 loss)
I0607 23:00:09.746702 13573 sgd_solver.cpp:106] Iteration 11500, lr = 0.001
I0607 23:00:22.540432 13573 solver.cpp:229] Iteration 11520, loss = 0.697607
I0607 23:00:22.540518 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0753907 (* 1 = 0.0753907 loss)
I0607 23:00:22.540527 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0999135 (* 1 = 0.0999135 loss)
I0607 23:00:22.540534 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.138084 (* 1 = 0.138084 loss)
I0607 23:00:22.540539 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00633194 (* 1 = 0.00633194 loss)
I0607 23:00:22.540545 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118203 (* 1 = 0.118203 loss)
I0607 23:00:22.540551 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0063319 (* 1 = 0.0063319 loss)
I0607 23:00:22.540560 13573 sgd_solver.cpp:106] Iteration 11520, lr = 0.001
[... per-iteration log entries in the same format continue from Iteration 11540 through Iteration 12460 (lr = 0.001, speed: 0.644s / iter throughout; total loss fluctuates between roughly 0.37 and 3.29) ...]
I0607 23:10:40.861279 13573 solver.cpp:229] Iteration 12480, loss = 0.863378
I0607 23:10:40.861346 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.139611 (* 1 = 0.139611 loss)
I0607 23:10:40.861354 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.208762 (* 1 = 0.208762 loss)
I0607 23:10:40.861361 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.194547 (* 1 = 0.194547 loss)
I0607 23:10:40.861366 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0372128 (* 1 = 0.0372128 loss)
I0607 23:10:40.861372 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.203517 (* 1 = 0.203517 loss)\nI0607 23:10:40.861377 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.037208 (* 1 = 0.037208 loss)\nI0607 23:10:40.861384 13573 sgd_solver.cpp:106] Iteration 12480, lr = 0.001\nI0607 23:10:53.957335 13573 solver.cpp:229] Iteration 12500, loss = 0.888057\nI0607 23:10:53.957399 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.268191 (* 1 = 0.268191 loss)\nI0607 23:10:53.957408 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0557387 (* 1 = 0.0557387 loss)\nI0607 23:10:53.957414 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0991041 (* 1 = 0.0991041 loss)\nI0607 23:10:53.957420 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0330692 (* 1 = 0.0330692 loss)\nI0607 23:10:53.957425 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100346 (* 1 = 0.100346 loss)\nI0607 23:10:53.957432 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0330717 (* 1 = 0.0330717 loss)\nI0607 23:10:53.957438 13573 sgd_solver.cpp:106] Iteration 12500, lr = 0.001\nI0607 23:11:06.691542 13573 solver.cpp:229] Iteration 12520, loss = 0.743227\nI0607 23:11:06.691604 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0762664 (* 1 = 0.0762664 loss)\nI0607 23:11:06.691613 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10709 (* 1 = 0.10709 loss)\nI0607 23:11:06.691619 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.086772 (* 1 = 0.086772 loss)\nI0607 23:11:06.691625 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00396942 (* 1 = 0.00396942 loss)\nI0607 23:11:06.691630 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0888183 (* 1 = 0.0888183 loss)\nI0607 23:11:06.691637 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00396659 (* 1 = 0.00396659 loss)\nI0607 23:11:06.691643 13573 sgd_solver.cpp:106] Iteration 12520, lr = 
0.001\nI0607 23:11:19.578706 13573 solver.cpp:229] Iteration 12540, loss = 0.464692\nI0607 23:11:19.578781 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.181367 (* 1 = 0.181367 loss)\nI0607 23:11:19.578794 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.074067 (* 1 = 0.074067 loss)\nI0607 23:11:19.578799 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0903035 (* 1 = 0.0903035 loss)\nI0607 23:11:19.578805 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00533333 (* 1 = 0.00533333 loss)\nI0607 23:11:19.578810 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0860455 (* 1 = 0.0860455 loss)\nI0607 23:11:19.578816 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00532897 (* 1 = 0.00532897 loss)\nI0607 23:11:19.578822 13573 sgd_solver.cpp:106] Iteration 12540, lr = 0.001\nI0607 23:11:32.283994 13573 solver.cpp:229] Iteration 12560, loss = 1.39326\nI0607 23:11:32.284062 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.215585 (* 1 = 0.215585 loss)\nI0607 23:11:32.284072 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.147861 (* 1 = 0.147861 loss)\nI0607 23:11:32.284078 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.60045 (* 1 = 0.60045 loss)\nI0607 23:11:32.284085 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.174611 (* 1 = 0.174611 loss)\nI0607 23:11:32.284090 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.626337 (* 1 = 0.626337 loss)\nI0607 23:11:32.284096 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.174614 (* 1 = 0.174614 loss)\nI0607 23:11:32.284104 13573 sgd_solver.cpp:106] Iteration 12560, lr = 0.001\nI0607 23:11:45.111949 13573 solver.cpp:229] Iteration 12580, loss = 0.751031\nI0607 23:11:45.112025 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00164965 (* 1 = 0.00164965 loss)\nI0607 23:11:45.112035 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.0111354 (* 1 = 0.0111354 loss)\nI0607 23:11:45.112042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.322432 (* 1 = 0.322432 loss)\nI0607 23:11:45.112048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0833294 (* 1 = 0.0833294 loss)\nI0607 23:11:45.112053 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.383062 (* 1 = 0.383062 loss)\nI0607 23:11:45.112061 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0833388 (* 1 = 0.0833388 loss)\nI0607 23:11:45.112067 13573 sgd_solver.cpp:106] Iteration 12580, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:11:57.866554 13573 solver.cpp:229] Iteration 12600, loss = 1.46485\nI0607 23:11:57.866619 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.358375 (* 1 = 0.358375 loss)\nI0607 23:11:57.866628 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.383512 (* 1 = 0.383512 loss)\nI0607 23:11:57.866634 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.491034 (* 1 = 0.491034 loss)\nI0607 23:11:57.866641 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.105678 (* 1 = 0.105678 loss)\nI0607 23:11:57.866646 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.465551 (* 1 = 0.465551 loss)\nI0607 23:11:57.866652 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.105648 (* 1 = 0.105648 loss)\nI0607 23:11:57.866665 13573 sgd_solver.cpp:106] Iteration 12600, lr = 0.001\nI0607 23:12:10.689553 13573 solver.cpp:229] Iteration 12620, loss = 0.493708\nI0607 23:12:10.689616 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000803597 (* 1 = 0.000803597 loss)\nI0607 23:12:10.689626 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0189013 (* 1 = 0.0189013 loss)\nI0607 23:12:10.689632 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.13371 (* 1 = 0.13371 loss)\nI0607 23:12:10.689637 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0131553 (* 1 = 
0.0131553 loss)\nI0607 23:12:10.689644 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114072 (* 1 = 0.114072 loss)\nI0607 23:12:10.689649 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0131522 (* 1 = 0.0131522 loss)\nI0607 23:12:10.689656 13573 sgd_solver.cpp:106] Iteration 12620, lr = 0.001\nI0607 23:12:23.496961 13573 solver.cpp:229] Iteration 12640, loss = 0.425424\nI0607 23:12:23.497035 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111803 (* 1 = 0.111803 loss)\nI0607 23:12:23.497045 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0900253 (* 1 = 0.0900253 loss)\nI0607 23:12:23.497051 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0808029 (* 1 = 0.0808029 loss)\nI0607 23:12:23.497056 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00393917 (* 1 = 0.00393917 loss)\nI0607 23:12:23.497061 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.082376 (* 1 = 0.082376 loss)\nI0607 23:12:23.497066 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00393879 (* 1 = 0.00393879 loss)\nI0607 23:12:23.497073 13573 sgd_solver.cpp:106] Iteration 12640, lr = 0.001\nI0607 23:12:36.538147 13573 solver.cpp:229] Iteration 12660, loss = 1.61785\nI0607 23:12:36.538209 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.190995 (* 1 = 0.190995 loss)\nI0607 23:12:36.538218 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150558 (* 1 = 0.150558 loss)\nI0607 23:12:36.538225 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.35402 (* 1 = 0.35402 loss)\nI0607 23:12:36.538231 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0962087 (* 1 = 0.0962087 loss)\nI0607 23:12:36.538236 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.355573 (* 1 = 0.355573 loss)\nI0607 23:12:36.538242 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0961881 (* 1 = 0.0961881 loss)\nI0607 23:12:36.538249 13573 
sgd_solver.cpp:106] Iteration 12660, lr = 0.001\nI0607 23:12:49.357157 13573 solver.cpp:229] Iteration 12680, loss = 0.9228\nI0607 23:12:49.357230 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0633235 (* 1 = 0.0633235 loss)\nI0607 23:12:49.357240 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0448745 (* 1 = 0.0448745 loss)\nI0607 23:12:49.357246 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.275246 (* 1 = 0.275246 loss)\nI0607 23:12:49.357252 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0070001 (* 1 = 0.0070001 loss)\nI0607 23:12:49.357259 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.282597 (* 1 = 0.282597 loss)\nI0607 23:12:49.357264 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00698673 (* 1 = 0.00698673 loss)\nI0607 23:12:49.357271 13573 sgd_solver.cpp:106] Iteration 12680, lr = 0.001\nI0607 23:13:02.509778 13573 solver.cpp:229] Iteration 12700, loss = 1.15253\nI0607 23:13:02.509841 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.330414 (* 1 = 0.330414 loss)\nI0607 23:13:02.509850 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.399977 (* 1 = 0.399977 loss)\nI0607 23:13:02.509856 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.361211 (* 1 = 0.361211 loss)\nI0607 23:13:02.509862 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0418368 (* 1 = 0.0418368 loss)\nI0607 23:13:02.509868 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.344504 (* 1 = 0.344504 loss)\nI0607 23:13:02.509874 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0418227 (* 1 = 0.0418227 loss)\nI0607 23:13:02.509881 13573 sgd_solver.cpp:106] Iteration 12700, lr = 0.001\nI0607 23:13:15.324354 13573 solver.cpp:229] Iteration 12720, loss = 1.34477\nI0607 23:13:15.324417 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.416235 (* 1 = 0.416235 loss)\nI0607 23:13:15.324426 13573 solver.cpp:245]   
  Train net output #1: loss_cls = 0.276239 (* 1 = 0.276239 loss)\nI0607 23:13:15.324432 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.710598 (* 1 = 0.710598 loss)\nI0607 23:13:15.324440 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0879295 (* 1 = 0.0879295 loss)\nI0607 23:13:15.324445 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.693787 (* 1 = 0.693787 loss)\nI0607 23:13:15.324450 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0879378 (* 1 = 0.0879378 loss)\nI0607 23:13:15.324457 13573 sgd_solver.cpp:106] Iteration 12720, lr = 0.001\nI0607 23:13:28.235497 13573 solver.cpp:229] Iteration 12740, loss = 0.739453\nI0607 23:13:28.235575 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.304257 (* 1 = 0.304257 loss)\nI0607 23:13:28.235585 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.189414 (* 1 = 0.189414 loss)\nI0607 23:13:28.235591 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103922 (* 1 = 0.103922 loss)\nI0607 23:13:28.235597 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0359878 (* 1 = 0.0359878 loss)\nI0607 23:13:28.235604 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.120346 (* 1 = 0.120346 loss)\nI0607 23:13:28.235610 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0359243 (* 1 = 0.0359243 loss)\nI0607 23:13:28.235616 13573 sgd_solver.cpp:106] Iteration 12740, lr = 0.001\nI0607 23:13:41.194919 13573 solver.cpp:229] Iteration 12760, loss = 0.933692\nI0607 23:13:41.194993 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0949723 (* 1 = 0.0949723 loss)\nI0607 23:13:41.195003 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114951 (* 1 = 0.114951 loss)\nI0607 23:13:41.195008 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0944322 (* 1 = 0.0944322 loss)\nI0607 23:13:41.195014 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0290979 
(* 1 = 0.0290979 loss)\nI0607 23:13:41.195020 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0949674 (* 1 = 0.0949674 loss)\nI0607 23:13:41.195026 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0291055 (* 1 = 0.0291055 loss)\nI0607 23:13:41.195034 13573 sgd_solver.cpp:106] Iteration 12760, lr = 0.001\nI0607 23:13:54.132422 13573 solver.cpp:229] Iteration 12780, loss = 0.617445\nI0607 23:13:54.132488 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00339126 (* 1 = 0.00339126 loss)\nI0607 23:13:54.132496 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.047018 (* 1 = 0.047018 loss)\nI0607 23:13:54.132503 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.339095 (* 1 = 0.339095 loss)\nI0607 23:13:54.132508 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.044391 (* 1 = 0.044391 loss)\nI0607 23:13:54.132513 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.323533 (* 1 = 0.323533 loss)\nI0607 23:13:54.132519 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0443504 (* 1 = 0.0443504 loss)\nI0607 23:13:54.132526 13573 sgd_solver.cpp:106] Iteration 12780, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:14:07.285562 13573 solver.cpp:229] Iteration 12800, loss = 1.43321\nI0607 23:14:07.285629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.169332 (* 1 = 0.169332 loss)\nI0607 23:14:07.285639 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.174855 (* 1 = 0.174855 loss)\nI0607 23:14:07.285645 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.182473 (* 1 = 0.182473 loss)\nI0607 23:14:07.285650 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116494 (* 1 = 0.0116494 loss)\nI0607 23:14:07.285655 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175004 (* 1 = 0.175004 loss)\nI0607 23:14:07.285661 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116544 (* 1 = 0.0116544 
loss)\nI0607 23:14:07.285668 13573 sgd_solver.cpp:106] Iteration 12800, lr = 0.001\nI0607 23:14:20.146483 13573 solver.cpp:229] Iteration 12820, loss = 0.871334\nI0607 23:14:20.146559 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125321 (* 1 = 0.125321 loss)\nI0607 23:14:20.146567 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.288673 (* 1 = 0.288673 loss)\nI0607 23:14:20.146574 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0988771 (* 1 = 0.0988771 loss)\nI0607 23:14:20.146579 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00795588 (* 1 = 0.00795588 loss)\nI0607 23:14:20.146585 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0958764 (* 1 = 0.0958764 loss)\nI0607 23:14:20.146590 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00795477 (* 1 = 0.00795477 loss)\nI0607 23:14:20.146597 13573 sgd_solver.cpp:106] Iteration 12820, lr = 0.001\nI0607 23:14:33.090661 13573 solver.cpp:229] Iteration 12840, loss = 1.03998\nI0607 23:14:33.090725 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.327046 (* 1 = 0.327046 loss)\nI0607 23:14:33.090734 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.272385 (* 1 = 0.272385 loss)\nI0607 23:14:33.090740 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.442909 (* 1 = 0.442909 loss)\nI0607 23:14:33.090747 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0998086 (* 1 = 0.0998086 loss)\nI0607 23:14:33.090752 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.426276 (* 1 = 0.426276 loss)\nI0607 23:14:33.090759 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.099826 (* 1 = 0.099826 loss)\nI0607 23:14:33.090764 13573 sgd_solver.cpp:106] Iteration 12840, lr = 0.001\nI0607 23:14:46.079856 13573 solver.cpp:229] Iteration 12860, loss = 0.632588\nI0607 23:14:46.079923 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102196 (* 1 = 0.102196 loss)\nI0607 
23:14:46.079932 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.136325 (* 1 = 0.136325 loss)\nI0607 23:14:46.079938 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.230322 (* 1 = 0.230322 loss)\nI0607 23:14:46.079944 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.033472 (* 1 = 0.033472 loss)\nI0607 23:14:46.079951 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223484 (* 1 = 0.223484 loss)\nI0607 23:14:46.079957 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0334502 (* 1 = 0.0334502 loss)\nI0607 23:14:46.079965 13573 sgd_solver.cpp:106] Iteration 12860, lr = 0.001\nI0607 23:14:58.859206 13573 solver.cpp:229] Iteration 12880, loss = 0.821903\nI0607 23:14:58.859274 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.283277 (* 1 = 0.283277 loss)\nI0607 23:14:58.859285 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.256111 (* 1 = 0.256111 loss)\nI0607 23:14:58.859292 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113413 (* 1 = 0.113413 loss)\nI0607 23:14:58.859297 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0776355 (* 1 = 0.0776355 loss)\nI0607 23:14:58.859303 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108458 (* 1 = 0.108458 loss)\nI0607 23:14:58.859309 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0776472 (* 1 = 0.0776472 loss)\nI0607 23:14:58.859315 13573 sgd_solver.cpp:106] Iteration 12880, lr = 0.001\nI0607 23:15:11.616277 13573 solver.cpp:229] Iteration 12900, loss = 0.635422\nI0607 23:15:11.616353 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.255104 (* 1 = 0.255104 loss)\nI0607 23:15:11.616364 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124056 (* 1 = 0.124056 loss)\nI0607 23:15:11.616369 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0983665 (* 1 = 0.0983665 loss)\nI0607 23:15:11.616375 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.0220121 (* 1 = 0.0220121 loss)\nI0607 23:15:11.616380 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0963247 (* 1 = 0.0963247 loss)\nI0607 23:15:11.616386 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0220294 (* 1 = 0.0220294 loss)\nI0607 23:15:11.616392 13573 sgd_solver.cpp:106] Iteration 12900, lr = 0.001\nI0607 23:15:24.587512 13573 solver.cpp:229] Iteration 12920, loss = 1.74565\nI0607 23:15:24.587577 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0200713 (* 1 = 0.0200713 loss)\nI0607 23:15:24.587586 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0574812 (* 1 = 0.0574812 loss)\nI0607 23:15:24.587594 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.725328 (* 1 = 0.725328 loss)\nI0607 23:15:24.587599 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.322909 (* 1 = 0.322909 loss)\nI0607 23:15:24.587605 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.734048 (* 1 = 0.734048 loss)\nI0607 23:15:24.587611 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.322959 (* 1 = 0.322959 loss)\nI0607 23:15:24.587620 13573 sgd_solver.cpp:106] Iteration 12920, lr = 0.001\nI0607 23:15:37.691696 13573 solver.cpp:229] Iteration 12940, loss = 1.15313\nI0607 23:15:37.691764 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.457217 (* 1 = 0.457217 loss)\nI0607 23:15:37.691776 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.316883 (* 1 = 0.316883 loss)\nI0607 23:15:37.691781 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.24344 (* 1 = 0.24344 loss)\nI0607 23:15:37.691787 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0225485 (* 1 = 0.0225485 loss)\nI0607 23:15:37.691793 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.246762 (* 1 = 0.246762 loss)\nI0607 23:15:37.691798 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0225396 (* 1 = 
0.0225396 loss)\nI0607 23:15:37.691804 13573 sgd_solver.cpp:106] Iteration 12940, lr = 0.001\nI0607 23:15:50.746798 13573 solver.cpp:229] Iteration 12960, loss = 1.16034\nI0607 23:15:50.746860 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.178547 (* 1 = 0.178547 loss)\nI0607 23:15:50.746870 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.13535 (* 1 = 0.13535 loss)\nI0607 23:15:50.746876 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.129197 (* 1 = 0.129197 loss)\nI0607 23:15:50.746883 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0414826 (* 1 = 0.0414826 loss)\nI0607 23:15:50.746889 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.129104 (* 1 = 0.129104 loss)\nI0607 23:15:50.746896 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0414768 (* 1 = 0.0414768 loss)\nI0607 23:15:50.746903 13573 sgd_solver.cpp:106] Iteration 12960, lr = 0.001\nI0607 23:16:03.771983 13573 solver.cpp:229] Iteration 12980, loss = 0.676521\nI0607 23:16:03.772056 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.240106 (* 1 = 0.240106 loss)\nI0607 23:16:03.772065 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.147816 (* 1 = 0.147816 loss)\nI0607 23:16:03.772071 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.167448 (* 1 = 0.167448 loss)\nI0607 23:16:03.772078 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00210646 (* 1 = 0.00210646 loss)\nI0607 23:16:03.772083 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.14956 (* 1 = 0.14956 loss)\nI0607 23:16:03.772089 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00210601 (* 1 = 0.00210601 loss)\nI0607 23:16:03.772096 13573 sgd_solver.cpp:106] Iteration 12980, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:16:16.690095 13573 solver.cpp:229] Iteration 13000, loss = 0.706769\nI0607 23:16:16.690158 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.451496 (* 1 
= 0.451496 loss)\nI0607 23:16:16.690167 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.225524 (* 1 = 0.225524 loss)\nI0607 23:16:16.690173 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0841341 (* 1 = 0.0841341 loss)\nI0607 23:16:16.690179 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0443226 (* 1 = 0.0443226 loss)\nI0607 23:16:16.690186 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0843598 (* 1 = 0.0843598 loss)\nI0607 23:16:16.690191 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0443171 (* 1 = 0.0443171 loss)\nI0607 23:16:16.690198 13573 sgd_solver.cpp:106] Iteration 13000, lr = 0.001\nI0607 23:16:29.658921 13573 solver.cpp:229] Iteration 13020, loss = 1.11334\nI0607 23:16:29.658990 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.282439 (* 1 = 0.282439 loss)\nI0607 23:16:29.658999 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.357374 (* 1 = 0.357374 loss)\nI0607 23:16:29.659005 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407786 (* 1 = 0.407786 loss)\nI0607 23:16:29.659010 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0925203 (* 1 = 0.0925203 loss)\nI0607 23:16:29.659016 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408215 (* 1 = 0.408215 loss)\nI0607 23:16:29.659021 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.092528 (* 1 = 0.092528 loss)\nI0607 23:16:29.659029 13573 sgd_solver.cpp:106] Iteration 13020, lr = 0.001\nI0607 23:16:42.559715 13573 solver.cpp:229] Iteration 13040, loss = 1.68373\nI0607 23:16:42.559782 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.39685 (* 1 = 0.39685 loss)\nI0607 23:16:42.559792 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.489116 (* 1 = 0.489116 loss)\nI0607 23:16:42.559798 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.094002 (* 1 = 0.094002 loss)\nI0607 23:16:42.559804 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0418768 (* 1 = 0.0418768 loss)\nI0607 23:16:42.559810 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0948882 (* 1 = 0.0948882 loss)\nI0607 23:16:42.559816 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0418869 (* 1 = 0.0418869 loss)\nI0607 23:16:42.559824 13573 sgd_solver.cpp:106] Iteration 13040, lr = 0.001\nI0607 23:16:55.438802 13573 solver.cpp:229] Iteration 13060, loss = 0.573791\nI0607 23:16:55.438864 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.122034 (* 1 = 0.122034 loss)\nI0607 23:16:55.438874 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0906448 (* 1 = 0.0906448 loss)\nI0607 23:16:55.438880 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101434 (* 1 = 0.101434 loss)\nI0607 23:16:55.438886 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328254 (* 1 = 0.0328254 loss)\nI0607 23:16:55.438892 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100861 (* 1 = 0.100861 loss)\nI0607 23:16:55.438899 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0328168 (* 1 = 0.0328168 loss)\nI0607 23:16:55.438906 13573 sgd_solver.cpp:106] Iteration 13060, lr = 0.001\nI0607 23:17:08.490949 13573 solver.cpp:229] Iteration 13080, loss = 0.629186\nI0607 23:17:08.491016 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00608001 (* 1 = 0.00608001 loss)\nI0607 23:17:08.491026 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0418619 (* 1 = 0.0418619 loss)\nI0607 23:17:08.491032 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.203321 (* 1 = 0.203321 loss)\nI0607 23:17:08.491039 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0218143 (* 1 = 0.0218143 loss)\nI0607 23:17:08.491044 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.201039 (* 1 = 0.201039 loss)\nI0607 23:17:08.491050 13573 solver.cpp:245]     Train net output 
output #3: p2_rpn_loss_bbox = 0.0306216 (* 1 = 0.0306216 loss)\nI0607 23:27:40.967211 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.158245 (* 1 = 0.158245 loss)\nI0607 23:27:40.967216 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0306083 (* 1 = 0.0306083 loss)\nI0607 23:27:40.967222 13573 sgd_solver.cpp:106] Iteration 14060, lr = 0.001\nI0607 23:27:53.913388 13573 solver.cpp:229] Iteration 14080, loss = 1.32078\nI0607 23:27:53.913456 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00811088 (* 1 = 0.00811088 loss)\nI0607 23:27:53.913466 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.134516 (* 1 = 0.134516 loss)\nI0607 23:27:53.913471 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.758021 (* 1 = 0.758021 loss)\nI0607 23:27:53.913477 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.153934 (* 1 = 0.153934 loss)\nI0607 23:27:53.913483 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.732703 (* 1 = 0.732703 loss)\nI0607 23:27:53.913488 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.153791 (* 1 = 0.153791 loss)\nI0607 23:27:53.913496 13573 sgd_solver.cpp:106] Iteration 14080, lr = 0.001\nI0607 23:28:07.056356 13573 solver.cpp:229] Iteration 14100, loss = 0.80836\nI0607 23:28:07.056418 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.246598 (* 1 = 0.246598 loss)\nI0607 23:28:07.056427 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.129175 (* 1 = 0.129175 loss)\nI0607 23:28:07.056434 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10524 (* 1 = 0.10524 loss)\nI0607 23:28:07.056440 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0938746 (* 1 = 0.0938746 loss)\nI0607 23:28:07.056447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109666 (* 1 = 0.109666 loss)\nI0607 23:28:07.056452 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.093873 (* 1 = 0.093873 
loss)\nI0607 23:28:07.056458 13573 sgd_solver.cpp:106] Iteration 14100, lr = 0.001\nI0607 23:28:20.001078 13573 solver.cpp:229] Iteration 14120, loss = 0.781424\nI0607 23:28:20.001145 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0915766 (* 1 = 0.0915766 loss)\nI0607 23:28:20.001155 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0857202 (* 1 = 0.0857202 loss)\nI0607 23:28:20.001161 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101295 (* 1 = 0.101295 loss)\nI0607 23:28:20.001166 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0641864 (* 1 = 0.0641864 loss)\nI0607 23:28:20.001173 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105834 (* 1 = 0.105834 loss)\nI0607 23:28:20.001178 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0641953 (* 1 = 0.0641953 loss)\nI0607 23:28:20.001184 13573 sgd_solver.cpp:106] Iteration 14120, lr = 0.001\nI0607 23:28:32.866013 13573 solver.cpp:229] Iteration 14140, loss = 0.851135\nI0607 23:28:32.866083 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0986186 (* 1 = 0.0986186 loss)\nI0607 23:28:32.866092 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.280442 (* 1 = 0.280442 loss)\nI0607 23:28:32.866098 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.238051 (* 1 = 0.238051 loss)\nI0607 23:28:32.866106 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00664836 (* 1 = 0.00664836 loss)\nI0607 23:28:32.866111 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.238659 (* 1 = 0.238659 loss)\nI0607 23:28:32.866117 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0066416 (* 1 = 0.0066416 loss)\nI0607 23:28:32.866123 13573 sgd_solver.cpp:106] Iteration 14140, lr = 0.001\nI0607 23:28:45.847574 13573 solver.cpp:229] Iteration 14160, loss = 0.636289\nI0607 23:28:45.847652 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00135065 (* 1 = 0.00135065 
loss)\nI0607 23:28:45.847662 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00292835 (* 1 = 0.00292835 loss)\nI0607 23:28:45.847669 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.23956 (* 1 = 0.23956 loss)\nI0607 23:28:45.847676 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.037518 (* 1 = 0.037518 loss)\nI0607 23:28:45.847682 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.247037 (* 1 = 0.247037 loss)\nI0607 23:28:45.847688 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0374908 (* 1 = 0.0374908 loss)\nI0607 23:28:45.847695 13573 sgd_solver.cpp:106] Iteration 14160, lr = 0.001\nI0607 23:28:58.611042 13573 solver.cpp:229] Iteration 14180, loss = 0.431985\nI0607 23:28:58.611105 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0184104 (* 1 = 0.0184104 loss)\nI0607 23:28:58.611115 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0219368 (* 1 = 0.0219368 loss)\nI0607 23:28:58.611120 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.099407 (* 1 = 0.099407 loss)\nI0607 23:28:58.611127 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.161239 (* 1 = 0.161239 loss)\nI0607 23:28:58.611132 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105923 (* 1 = 0.105923 loss)\nI0607 23:28:58.611138 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.161223 (* 1 = 0.161223 loss)\nI0607 23:28:58.611145 13573 sgd_solver.cpp:106] Iteration 14180, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:29:11.377609 13573 solver.cpp:229] Iteration 14200, loss = 0.505758\nI0607 23:29:11.377678 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0952176 (* 1 = 0.0952176 loss)\nI0607 23:29:11.377687 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0815874 (* 1 = 0.0815874 loss)\nI0607 23:29:11.377694 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0667038 (* 1 = 0.0667038 loss)\nI0607 23:29:11.377699 
13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0104896 (* 1 = 0.0104896 loss)\nI0607 23:29:11.377706 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0645383 (* 1 = 0.0645383 loss)\nI0607 23:29:11.377712 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0104894 (* 1 = 0.0104894 loss)\nI0607 23:29:11.377718 13573 sgd_solver.cpp:106] Iteration 14200, lr = 0.001\nI0607 23:29:24.195050 13573 solver.cpp:229] Iteration 14220, loss = 0.934279\nI0607 23:29:24.195111 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.104333 (* 1 = 0.104333 loss)\nI0607 23:29:24.195121 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0881559 (* 1 = 0.0881559 loss)\nI0607 23:29:24.195127 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0949466 (* 1 = 0.0949466 loss)\nI0607 23:29:24.195132 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00789335 (* 1 = 0.00789335 loss)\nI0607 23:29:24.195137 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0970728 (* 1 = 0.0970728 loss)\nI0607 23:29:24.195143 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00789275 (* 1 = 0.00789275 loss)\nI0607 23:29:24.195149 13573 sgd_solver.cpp:106] Iteration 14220, lr = 0.001\nI0607 23:29:37.171154 13573 solver.cpp:229] Iteration 14240, loss = 0.465075\nI0607 23:29:37.171291 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199103 (* 1 = 0.199103 loss)\nI0607 23:29:37.171301 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0973011 (* 1 = 0.0973011 loss)\nI0607 23:29:37.171308 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.093098 (* 1 = 0.093098 loss)\nI0607 23:29:37.171314 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0107795 (* 1 = 0.0107795 loss)\nI0607 23:29:37.171320 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0933955 (* 1 = 0.0933955 loss)\nI0607 23:29:37.171326 13573 solver.cpp:245]     Train 
net output #5: rpn_loss_bbox = 0.0107807 (* 1 = 0.0107807 loss)\nI0607 23:29:37.171334 13573 sgd_solver.cpp:106] Iteration 14240, lr = 0.001\nI0607 23:29:49.927534 13573 solver.cpp:229] Iteration 14260, loss = 1.24883\nI0607 23:29:49.927598 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.303935 (* 1 = 0.303935 loss)\nI0607 23:29:49.927608 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.16169 (* 1 = 0.16169 loss)\nI0607 23:29:49.927614 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.445901 (* 1 = 0.445901 loss)\nI0607 23:29:49.927621 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0958867 (* 1 = 0.0958867 loss)\nI0607 23:29:49.927628 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.447065 (* 1 = 0.447065 loss)\nI0607 23:29:49.927634 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0960188 (* 1 = 0.0960188 loss)\nI0607 23:29:49.927641 13573 sgd_solver.cpp:106] Iteration 14260, lr = 0.001\nI0607 23:30:03.097537 13573 solver.cpp:229] Iteration 14280, loss = 1.58262\nI0607 23:30:03.097604 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.553844 (* 1 = 0.553844 loss)\nI0607 23:30:03.097614 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.517297 (* 1 = 0.517297 loss)\nI0607 23:30:03.097620 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.116629 (* 1 = 0.116629 loss)\nI0607 23:30:03.097626 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0521799 (* 1 = 0.0521799 loss)\nI0607 23:30:03.097632 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.113461 (* 1 = 0.113461 loss)\nI0607 23:30:03.097638 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0521712 (* 1 = 0.0521712 loss)\nI0607 23:30:03.097645 13573 sgd_solver.cpp:106] Iteration 14280, lr = 0.001\nI0607 23:30:15.899070 13573 solver.cpp:229] Iteration 14300, loss = 1.15872\nI0607 23:30:15.899132 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.381984 (* 1 = 0.381984 loss)\nI0607 23:30:15.899142 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.338938 (* 1 = 0.338938 loss)\nI0607 23:30:15.899149 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.243193 (* 1 = 0.243193 loss)\nI0607 23:30:15.899155 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0478854 (* 1 = 0.0478854 loss)\nI0607 23:30:15.899161 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.241087 (* 1 = 0.241087 loss)\nI0607 23:30:15.899168 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0478695 (* 1 = 0.0478695 loss)\nI0607 23:30:15.899175 13573 sgd_solver.cpp:106] Iteration 14300, lr = 0.001\nI0607 23:30:28.993700 13573 solver.cpp:229] Iteration 14320, loss = 0.923257\nI0607 23:30:28.993782 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.204958 (* 1 = 0.204958 loss)\nI0607 23:30:28.993793 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0988517 (* 1 = 0.0988517 loss)\nI0607 23:30:28.993799 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.176773 (* 1 = 0.176773 loss)\nI0607 23:30:28.993806 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00907794 (* 1 = 0.00907794 loss)\nI0607 23:30:28.993813 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.163109 (* 1 = 0.163109 loss)\nI0607 23:30:28.993818 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00907852 (* 1 = 0.00907852 loss)\nI0607 23:30:28.993824 13573 sgd_solver.cpp:106] Iteration 14320, lr = 0.001\nI0607 23:30:41.990511 13573 solver.cpp:229] Iteration 14340, loss = 1.43291\nI0607 23:30:41.990579 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.290009 (* 1 = 0.290009 loss)\nI0607 23:30:41.990588 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.250521 (* 1 = 0.250521 loss)\nI0607 23:30:41.990595 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.641221 (* 1 = 0.641221 loss)\nI0607 
23:30:41.990602 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.123418 (* 1 = 0.123418 loss)\nI0607 23:30:41.990607 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.628476 (* 1 = 0.628476 loss)\nI0607 23:30:41.990612 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.123333 (* 1 = 0.123333 loss)\nI0607 23:30:41.990620 13573 sgd_solver.cpp:106] Iteration 14340, lr = 0.001\nI0607 23:30:54.951663 13573 solver.cpp:229] Iteration 14360, loss = 0.917732\nI0607 23:30:54.951733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0324315 (* 1 = 0.0324315 loss)\nI0607 23:30:54.951743 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0807101 (* 1 = 0.0807101 loss)\nI0607 23:30:54.951750 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.294133 (* 1 = 0.294133 loss)\nI0607 23:30:54.951756 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00854805 (* 1 = 0.00854805 loss)\nI0607 23:30:54.951762 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.235323 (* 1 = 0.235323 loss)\nI0607 23:30:54.951768 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00856037 (* 1 = 0.00856037 loss)\nI0607 23:30:54.951776 13573 sgd_solver.cpp:106] Iteration 14360, lr = 0.001\nI0607 23:31:07.790205 13573 solver.cpp:229] Iteration 14380, loss = 0.841601\nI0607 23:31:07.790271 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.289159 (* 1 = 0.289159 loss)\nI0607 23:31:07.790279 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114252 (* 1 = 0.114252 loss)\nI0607 23:31:07.790287 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0807652 (* 1 = 0.0807652 loss)\nI0607 23:31:07.790292 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00587205 (* 1 = 0.00587205 loss)\nI0607 23:31:07.790297 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0961034 (* 1 = 0.0961034 loss)\nI0607 23:31:07.790302 13573 solver.cpp:245]   
  Train net output #5: rpn_loss_bbox = 0.00587142 (* 1 = 0.00587142 loss)\nI0607 23:31:07.790310 13573 sgd_solver.cpp:106] Iteration 14380, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:31:20.733623 13573 solver.cpp:229] Iteration 14400, loss = 0.568844\nI0607 23:31:20.733706 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.160597 (* 1 = 0.160597 loss)\nI0607 23:31:20.733716 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124977 (* 1 = 0.124977 loss)\nI0607 23:31:20.733722 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.163416 (* 1 = 0.163416 loss)\nI0607 23:31:20.733729 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0466499 (* 1 = 0.0466499 loss)\nI0607 23:31:20.733734 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.159592 (* 1 = 0.159592 loss)\nI0607 23:31:20.733741 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0466548 (* 1 = 0.0466548 loss)\nI0607 23:31:20.733748 13573 sgd_solver.cpp:106] Iteration 14400, lr = 0.001\nI0607 23:31:33.812422 13573 solver.cpp:229] Iteration 14420, loss = 1.26296\nI0607 23:31:33.812486 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0597991 (* 1 = 0.0597991 loss)\nI0607 23:31:33.812495 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0485021 (* 1 = 0.0485021 loss)\nI0607 23:31:33.812501 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.265395 (* 1 = 0.265395 loss)\nI0607 23:31:33.812507 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158976 (* 1 = 0.0158976 loss)\nI0607 23:31:33.812513 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.250454 (* 1 = 0.250454 loss)\nI0607 23:31:33.812518 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158921 (* 1 = 0.0158921 loss)\nI0607 23:31:33.812525 13573 sgd_solver.cpp:106] Iteration 14420, lr = 0.001\nI0607 23:31:46.949560 13573 solver.cpp:229] Iteration 14440, loss = 0.871866\nI0607 23:31:46.949630 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.00139676 (* 1 = 0.00139676 loss)\nI0607 23:31:46.949640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0147633 (* 1 = 0.0147633 loss)\nI0607 23:31:46.949645 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.472797 (* 1 = 0.472797 loss)\nI0607 23:31:46.949651 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.124589 (* 1 = 0.124589 loss)\nI0607 23:31:46.949656 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.510258 (* 1 = 0.510258 loss)\nI0607 23:31:46.949662 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.124497 (* 1 = 0.124497 loss)\nI0607 23:31:46.949668 13573 sgd_solver.cpp:106] Iteration 14440, lr = 0.001\nI0607 23:31:59.968297 13573 solver.cpp:229] Iteration 14460, loss = 0.857148\nI0607 23:31:59.968363 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.21374 (* 1 = 0.21374 loss)\nI0607 23:31:59.968372 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.212759 (* 1 = 0.212759 loss)\nI0607 23:31:59.968379 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0873497 (* 1 = 0.0873497 loss)\nI0607 23:31:59.968384 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0108298 (* 1 = 0.0108298 loss)\nI0607 23:31:59.968389 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0919226 (* 1 = 0.0919226 loss)\nI0607 23:31:59.968395 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0108299 (* 1 = 0.0108299 loss)\nI0607 23:31:59.968402 13573 sgd_solver.cpp:106] Iteration 14460, lr = 0.001\nI0607 23:32:12.798404 13573 solver.cpp:229] Iteration 14480, loss = 0.436985\nI0607 23:32:12.798465 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00163455 (* 1 = 0.00163455 loss)\nI0607 23:32:12.798475 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00389841 (* 1 = 0.00389841 loss)\nI0607 23:32:12.798481 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.213303 (* 1 = 0.213303 loss)\nI0607 23:32:12.798486 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00358398 (* 1 = 0.00358398 loss)\nI0607 23:32:12.798492 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.172946 (* 1 = 0.172946 loss)\nI0607 23:32:12.798497 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00358371 (* 1 = 0.00358371 loss)\nI0607 23:32:12.798504 13573 sgd_solver.cpp:106] Iteration 14480, lr = 0.001\nI0607 23:32:25.495782 13573 solver.cpp:229] Iteration 14500, loss = 0.808002\nI0607 23:32:25.495853 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.279327 (* 1 = 0.279327 loss)\nI0607 23:32:25.495863 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.643102 (* 1 = 0.643102 loss)\nI0607 23:32:25.495869 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0848957 (* 1 = 0.0848957 loss)\nI0607 23:32:25.495875 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0186965 (* 1 = 0.0186965 loss)\nI0607 23:32:25.496073 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0869426 (* 1 = 0.0869426 loss)\nI0607 23:32:25.496131 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0186962 (* 1 = 0.0186962 loss)\nI0607 23:32:25.496191 13573 sgd_solver.cpp:106] Iteration 14500, lr = 0.001\nI0607 23:32:38.420686 13573 solver.cpp:229] Iteration 14520, loss = 0.851744\nI0607 23:32:38.420750 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105847 (* 1 = 0.105847 loss)\nI0607 23:32:38.420760 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.16281 (* 1 = 0.16281 loss)\nI0607 23:32:38.420766 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.173966 (* 1 = 0.173966 loss)\nI0607 23:32:38.420773 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0245025 (* 1 = 0.0245025 loss)\nI0607 23:32:38.420778 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.169476 (* 1 = 0.169476 
loss)\nI0607 23:32:38.420784 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0245034 (* 1 = 0.0245034 loss)\nI0607 23:32:38.420792 13573 sgd_solver.cpp:106] Iteration 14520, lr = 0.001\nI0607 23:32:51.114603 13573 solver.cpp:229] Iteration 14540, loss = 0.875367\nI0607 23:32:51.114672 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.16938 (* 1 = 0.16938 loss)\nI0607 23:32:51.114682 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.393605 (* 1 = 0.393605 loss)\nI0607 23:32:51.114688 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195973 (* 1 = 0.195973 loss)\nI0607 23:32:51.114694 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.10337 (* 1 = 0.10337 loss)\nI0607 23:32:51.114701 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.19458 (* 1 = 0.19458 loss)\nI0607 23:32:51.114706 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.10337 (* 1 = 0.10337 loss)\nI0607 23:32:51.114714 13573 sgd_solver.cpp:106] Iteration 14540, lr = 0.001\nI0607 23:33:04.051596 13573 solver.cpp:229] Iteration 14560, loss = 1.00687\nI0607 23:33:04.051659 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.214756 (* 1 = 0.214756 loss)\nI0607 23:33:04.051668 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.259844 (* 1 = 0.259844 loss)\nI0607 23:33:04.051676 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.424813 (* 1 = 0.424813 loss)\nI0607 23:33:04.051681 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0949194 (* 1 = 0.0949194 loss)\nI0607 23:33:04.051687 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.424356 (* 1 = 0.424356 loss)\nI0607 23:33:04.051692 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0948902 (* 1 = 0.0948902 loss)\nI0607 23:33:04.051699 13573 sgd_solver.cpp:106] Iteration 14560, lr = 0.001\nI0607 23:33:17.234127 13573 solver.cpp:229] Iteration 14580, loss = 0.896454\nI0607 
23:33:17.234190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101133 (* 1 = 0.101133 loss)\nI0607 23:33:17.234200 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.133947 (* 1 = 0.133947 loss)\nI0607 23:33:17.234205 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112831 (* 1 = 0.112831 loss)\nI0607 23:33:17.234211 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.064021 (* 1 = 0.064021 loss)\nI0607 23:33:17.234216 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.113036 (* 1 = 0.113036 loss)\nI0607 23:33:17.234222 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0640059 (* 1 = 0.0640059 loss)\nI0607 23:33:17.234230 13573 sgd_solver.cpp:106] Iteration 14580, lr = 0.001\nspeed: 0.645s / iter\nI0607 23:33:30.052875 13573 solver.cpp:229] Iteration 14600, loss = 1.09002\nI0607 23:33:30.052938 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.197712 (* 1 = 0.197712 loss)\nI0607 23:33:30.052947 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.463366 (* 1 = 0.463366 loss)\nI0607 23:33:30.052953 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1544 (* 1 = 0.1544 loss)\nI0607 23:33:30.052960 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0573487 (* 1 = 0.0573487 loss)\nI0607 23:33:30.052966 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.15228 (* 1 = 0.15228 loss)\nI0607 23:33:30.052973 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0573396 (* 1 = 0.0573396 loss)\nI0607 23:33:30.053004 13573 sgd_solver.cpp:106] Iteration 14600, lr = 0.001\nI0607 23:33:42.974182 13573 solver.cpp:229] Iteration 14620, loss = 0.70449\nI0607 23:33:42.974254 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.243881 (* 1 = 0.243881 loss)\nI0607 23:33:42.974263 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.183002 (* 1 = 0.183002 loss)\nI0607 23:33:42.974270 13573 solver.cpp:245]     Train 
net output #2: p2_rpn_cls_loss = 0.0827817 (* 1 = 0.0827817 loss)\nI0607 23:33:42.974277 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00992307 (* 1 = 0.00992307 loss)\nI0607 23:33:42.974282 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0849094 (* 1 = 0.0849094 loss)\nI0607 23:33:42.974288 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00993777 (* 1 = 0.00993777 loss)\nI0607 23:33:42.974295 13573 sgd_solver.cpp:106] Iteration 14620, lr = 0.001\nI0607 23:33:55.724493 13573 solver.cpp:229] Iteration 14640, loss = 0.898881\nI0607 23:33:55.724562 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116088 (* 1 = 0.116088 loss)\nI0607 23:33:55.724572 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0727868 (* 1 = 0.0727868 loss)\nI0607 23:33:55.724577 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.267482 (* 1 = 0.267482 loss)\nI0607 23:33:55.724583 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0353552 (* 1 = 0.0353552 loss)\nI0607 23:33:55.724588 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.246773 (* 1 = 0.246773 loss)\nI0607 23:33:55.724594 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0353432 (* 1 = 0.0353432 loss)\nI0607 23:33:55.724601 13573 sgd_solver.cpp:106] Iteration 14640, lr = 0.001\nI0607 23:34:08.636771 13573 solver.cpp:229] Iteration 14660, loss = 0.971969\nI0607 23:34:08.636831 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.253657 (* 1 = 0.253657 loss)\nI0607 23:34:08.636840 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.232687 (* 1 = 0.232687 loss)\nI0607 23:34:08.636847 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.301213 (* 1 = 0.301213 loss)\nI0607 23:34:08.636853 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0471931 (* 1 = 0.0471931 loss)\nI0607 23:34:08.636859 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.271601 (* 1 = 0.271601 loss)\nI0607 23:34:08.636868 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0471752 (* 1 = 0.0471752 loss)\nI0607 23:34:08.636878 13573 sgd_solver.cpp:106] Iteration 14660, lr = 0.001\nI0607 23:34:21.270582 13573 solver.cpp:229] Iteration 14680, loss = 0.876793\nI0607 23:34:21.270663 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.17182 (* 1 = 0.17182 loss)\nI0607 23:34:21.270679 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150378 (* 1 = 0.150378 loss)\nI0607 23:34:21.270686 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102392 (* 1 = 0.102392 loss)\nI0607 23:34:21.270694 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0264275 (* 1 = 0.0264275 loss)\nI0607 23:34:21.270699 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101248 (* 1 = 0.101248 loss)\nI0607 23:34:21.270704 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0264378 (* 1 = 0.0264378 loss)\nI0607 23:34:21.270711 13573 sgd_solver.cpp:106] Iteration 14680, lr = 0.001\nI0607 23:34:34.167371 13573 solver.cpp:229] Iteration 14700, loss = 0.668643\nI0607 23:34:34.167436 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00148029 (* 1 = 0.00148029 loss)\nI0607 23:34:34.167446 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110291 (* 1 = 0.110291 loss)\nI0607 23:34:34.167453 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.214063 (* 1 = 0.214063 loss)\nI0607 23:34:34.167459 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.015412 (* 1 = 0.015412 loss)\nI0607 23:34:34.167464 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.272834 (* 1 = 0.272834 loss)\nI0607 23:34:34.167469 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0153906 (* 1 = 0.0153906 loss)\nI0607 23:34:34.167476 13573 sgd_solver.cpp:106] Iteration 14700, lr = 0.001\nI0607 23:34:46.892665 13573 solver.cpp:229] Iteration 
loss_bbox = 0.388696 (* 1 = 0.388696 loss)\nI0607 23:44:13.721426 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.286848 (* 1 = 0.286848 loss)\nI0607 23:44:13.721431 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.40801 (* 1 = 0.40801 loss)\nI0607 23:44:13.721436 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0693077 (* 1 = 0.0693077 loss)\nI0607 23:44:13.721442 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408513 (* 1 = 0.408513 loss)\nI0607 23:44:13.721448 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0692991 (* 1 = 0.0692991 loss)\nI0607 23:44:13.721456 13573 sgd_solver.cpp:106] Iteration 15600, lr = 0.001\nI0607 23:44:26.740809 13573 solver.cpp:229] Iteration 15620, loss = 0.784571\nI0607 23:44:26.740872 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0129085 (* 1 = 0.0129085 loss)\nI0607 23:44:26.740882 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0477754 (* 1 = 0.0477754 loss)\nI0607 23:44:26.740888 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.300398 (* 1 = 0.300398 loss)\nI0607 23:44:26.740895 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00994845 (* 1 = 0.00994845 loss)\nI0607 23:44:26.740900 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.376805 (* 1 = 0.376805 loss)\nI0607 23:44:26.740906 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00993729 (* 1 = 0.00993729 loss)\nI0607 23:44:26.740913 13573 sgd_solver.cpp:106] Iteration 15620, lr = 0.001\nI0607 23:44:39.911041 13573 solver.cpp:229] Iteration 15640, loss = 1.36394\nI0607 23:44:39.911114 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.284822 (* 1 = 0.284822 loss)\nI0607 23:44:39.911123 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.177848 (* 1 = 0.177848 loss)\nI0607 23:44:39.911130 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.627802 (* 1 = 0.627802 loss)\nI0607 
23:44:39.911136 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.167952 (* 1 = 0.167952 loss)\nI0607 23:44:39.911142 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.632023 (* 1 = 0.632023 loss)\nI0607 23:44:39.911149 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.167871 (* 1 = 0.167871 loss)\nI0607 23:44:39.911156 13573 sgd_solver.cpp:106] Iteration 15640, lr = 0.001\nI0607 23:44:52.625808 13573 solver.cpp:229] Iteration 15660, loss = 0.986351\nI0607 23:44:52.625875 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.290812 (* 1 = 0.290812 loss)\nI0607 23:44:52.625885 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107648 (* 1 = 0.107648 loss)\nI0607 23:44:52.625891 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0856977 (* 1 = 0.0856977 loss)\nI0607 23:44:52.625897 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0654366 (* 1 = 0.0654366 loss)\nI0607 23:44:52.625902 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0808608 (* 1 = 0.0808608 loss)\nI0607 23:44:52.625908 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0654147 (* 1 = 0.0654147 loss)\nI0607 23:44:52.625915 13573 sgd_solver.cpp:106] Iteration 15660, lr = 0.001\nI0607 23:45:05.666508 13573 solver.cpp:229] Iteration 15680, loss = 0.747139\nI0607 23:45:05.666571 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0112443 (* 1 = 0.0112443 loss)\nI0607 23:45:05.666579 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0118585 (* 1 = 0.0118585 loss)\nI0607 23:45:05.666585 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.270893 (* 1 = 0.270893 loss)\nI0607 23:45:05.666591 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.017526 (* 1 = 0.017526 loss)\nI0607 23:45:05.666596 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.281497 (* 1 = 0.281497 loss)\nI0607 23:45:05.666602 13573 solver.cpp:245]     Train 
net output #5: rpn_loss_bbox = 0.0175418 (* 1 = 0.0175418 loss)\nI0607 23:45:05.666610 13573 sgd_solver.cpp:106] Iteration 15680, lr = 0.001\nI0607 23:45:18.372875 13573 solver.cpp:229] Iteration 15700, loss = 0.718375\nI0607 23:45:18.372944 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.22822 (* 1 = 0.22822 loss)\nI0607 23:45:18.372954 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.116635 (* 1 = 0.116635 loss)\nI0607 23:45:18.372961 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105191 (* 1 = 0.105191 loss)\nI0607 23:45:18.372967 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0214372 (* 1 = 0.0214372 loss)\nI0607 23:45:18.372972 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102483 (* 1 = 0.102483 loss)\nI0607 23:45:18.372978 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0214554 (* 1 = 0.0214554 loss)\nI0607 23:45:18.372987 13573 sgd_solver.cpp:106] Iteration 15700, lr = 0.001\nI0607 23:45:31.463932 13573 solver.cpp:229] Iteration 15720, loss = 1.37586\nI0607 23:45:31.463999 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.10611 (* 1 = 0.10611 loss)\nI0607 23:45:31.464015 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.207528 (* 1 = 0.207528 loss)\nI0607 23:45:31.464022 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.391817 (* 1 = 0.391817 loss)\nI0607 23:45:31.464027 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.105202 (* 1 = 0.105202 loss)\nI0607 23:45:31.464033 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.386936 (* 1 = 0.386936 loss)\nI0607 23:45:31.464040 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.105194 (* 1 = 0.105194 loss)\nI0607 23:45:31.464047 13573 sgd_solver.cpp:106] Iteration 15720, lr = 0.001\nI0607 23:45:44.473192 13573 solver.cpp:229] Iteration 15740, loss = 0.760639\nI0607 23:45:44.473259 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.0910649 (* 1 = 0.0910649 loss)\nI0607 23:45:44.473270 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0774248 (* 1 = 0.0774248 loss)\nI0607 23:45:44.473278 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103464 (* 1 = 0.103464 loss)\nI0607 23:45:44.473284 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0177074 (* 1 = 0.0177074 loss)\nI0607 23:45:44.473289 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105514 (* 1 = 0.105514 loss)\nI0607 23:45:44.473295 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0177126 (* 1 = 0.0177126 loss)\nI0607 23:45:44.473304 13573 sgd_solver.cpp:106] Iteration 15740, lr = 0.001\nI0607 23:45:57.334044 13573 solver.cpp:229] Iteration 15760, loss = 0.408189\nI0607 23:45:57.334112 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00157726 (* 1 = 0.00157726 loss)\nI0607 23:45:57.334122 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00233964 (* 1 = 0.00233964 loss)\nI0607 23:45:57.334128 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171577 (* 1 = 0.171577 loss)\nI0607 23:45:57.334134 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0184383 (* 1 = 0.0184383 loss)\nI0607 23:45:57.334141 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.232999 (* 1 = 0.232999 loss)\nI0607 23:45:57.334146 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0184301 (* 1 = 0.0184301 loss)\nI0607 23:45:57.334153 13573 sgd_solver.cpp:106] Iteration 15760, lr = 0.001\nI0607 23:46:10.104248 13573 solver.cpp:229] Iteration 15780, loss = 1.39396\nI0607 23:46:10.104331 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000454744 (* 1 = 0.000454744 loss)\nI0607 23:46:10.104342 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00381178 (* 1 = 0.00381178 loss)\nI0607 23:46:10.104348 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.324211 (* 1 = 
0.324211 loss)\nI0607 23:46:10.104356 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0576156 (* 1 = 0.0576156 loss)\nI0607 23:46:10.104362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.328763 (* 1 = 0.328763 loss)\nI0607 23:46:10.104367 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0575606 (* 1 = 0.0575606 loss)\nI0607 23:46:10.104374 13573 sgd_solver.cpp:106] Iteration 15780, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:46:22.927197 13573 solver.cpp:229] Iteration 15800, loss = 0.963002\nI0607 23:46:22.927265 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.202687 (* 1 = 0.202687 loss)\nI0607 23:46:22.927274 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.419396 (* 1 = 0.419396 loss)\nI0607 23:46:22.927280 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.412682 (* 1 = 0.412682 loss)\nI0607 23:46:22.927286 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0401504 (* 1 = 0.0401504 loss)\nI0607 23:46:22.927292 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.391061 (* 1 = 0.391061 loss)\nI0607 23:46:22.927299 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.040166 (* 1 = 0.040166 loss)\nI0607 23:46:22.927305 13573 sgd_solver.cpp:106] Iteration 15800, lr = 0.001\nI0607 23:46:35.807374 13573 solver.cpp:229] Iteration 15820, loss = 0.89859\nI0607 23:46:35.807440 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.20115 (* 1 = 0.20115 loss)\nI0607 23:46:35.807448 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.256633 (* 1 = 0.256633 loss)\nI0607 23:46:35.807456 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101416 (* 1 = 0.101416 loss)\nI0607 23:46:35.807461 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0317718 (* 1 = 0.0317718 loss)\nI0607 23:46:35.807466 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102831 (* 1 = 0.102831 loss)\nI0607 
23:46:35.807472 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0317691 (* 1 = 0.0317691 loss)\nI0607 23:46:35.807481 13573 sgd_solver.cpp:106] Iteration 15820, lr = 0.001\nI0607 23:46:48.798846 13573 solver.cpp:229] Iteration 15840, loss = 0.643125\nI0607 23:46:48.798914 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0842986 (* 1 = 0.0842986 loss)\nI0607 23:46:48.798924 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114157 (* 1 = 0.114157 loss)\nI0607 23:46:48.798930 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.323316 (* 1 = 0.323316 loss)\nI0607 23:46:48.798935 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0140686 (* 1 = 0.0140686 loss)\nI0607 23:46:48.798941 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.36063 (* 1 = 0.36063 loss)\nI0607 23:46:48.798948 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0140779 (* 1 = 0.0140779 loss)\nI0607 23:46:48.798954 13573 sgd_solver.cpp:106] Iteration 15840, lr = 0.001\nI0607 23:47:01.548140 13573 solver.cpp:229] Iteration 15860, loss = 0.593099\nI0607 23:47:01.548202 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000116925 (* 1 = 0.000116925 loss)\nI0607 23:47:01.548213 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00947282 (* 1 = 0.00947282 loss)\nI0607 23:47:01.548218 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252607 (* 1 = 0.252607 loss)\nI0607 23:47:01.548224 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00479858 (* 1 = 0.00479858 loss)\nI0607 23:47:01.548230 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.234986 (* 1 = 0.234986 loss)\nI0607 23:47:01.548235 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00480285 (* 1 = 0.00480285 loss)\nI0607 23:47:01.548243 13573 sgd_solver.cpp:106] Iteration 15860, lr = 0.001\nI0607 23:47:14.526100 13573 solver.cpp:229] Iteration 15880, loss = 1.14305\nI0607 
23:47:14.526165 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.347085 (* 1 = 0.347085 loss)\nI0607 23:47:14.526175 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.315091 (* 1 = 0.315091 loss)\nI0607 23:47:14.526180 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104678 (* 1 = 0.104678 loss)\nI0607 23:47:14.526187 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0754268 (* 1 = 0.0754268 loss)\nI0607 23:47:14.526193 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.127531 (* 1 = 0.127531 loss)\nI0607 23:47:14.526198 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0754172 (* 1 = 0.0754172 loss)\nI0607 23:47:14.526206 13573 sgd_solver.cpp:106] Iteration 15880, lr = 0.001\nI0607 23:47:27.327801 13573 solver.cpp:229] Iteration 15900, loss = 0.650207\nI0607 23:47:27.327864 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.176997 (* 1 = 0.176997 loss)\nI0607 23:47:27.327873 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.125902 (* 1 = 0.125902 loss)\nI0607 23:47:27.327879 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.26212 (* 1 = 0.26212 loss)\nI0607 23:47:27.327885 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182673 (* 1 = 0.0182673 loss)\nI0607 23:47:27.327891 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.260041 (* 1 = 0.260041 loss)\nI0607 23:47:27.327898 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182704 (* 1 = 0.0182704 loss)\nI0607 23:47:27.327903 13573 sgd_solver.cpp:106] Iteration 15900, lr = 0.001\nI0607 23:47:40.274638 13573 solver.cpp:229] Iteration 15920, loss = 1.19831\nI0607 23:47:40.274703 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.372038 (* 1 = 0.372038 loss)\nI0607 23:47:40.274710 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.471904 (* 1 = 0.471904 loss)\nI0607 23:47:40.274716 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.417813 (* 1 = 0.417813 loss)\nI0607 23:47:40.274721 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.156158 (* 1 = 0.156158 loss)\nI0607 23:47:40.274727 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.415325 (* 1 = 0.415325 loss)\nI0607 23:47:40.274732 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.156139 (* 1 = 0.156139 loss)\nI0607 23:47:40.274739 13573 sgd_solver.cpp:106] Iteration 15920, lr = 0.001\nI0607 23:47:53.049677 13573 solver.cpp:229] Iteration 15940, loss = 0.495704\nI0607 23:47:53.049747 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000661474 (* 1 = 0.000661474 loss)\nI0607 23:47:53.049757 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000827983 (* 1 = 0.000827983 loss)\nI0607 23:47:53.049763 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162104 (* 1 = 0.162104 loss)\nI0607 23:47:53.049769 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0036458 (* 1 = 0.0036458 loss)\nI0607 23:47:53.049775 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175592 (* 1 = 0.175592 loss)\nI0607 23:47:53.049782 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00364214 (* 1 = 0.00364214 loss)\nI0607 23:47:53.049789 13573 sgd_solver.cpp:106] Iteration 15940, lr = 0.001\nI0607 23:48:06.021209 13573 solver.cpp:229] Iteration 15960, loss = 0.628888\nI0607 23:48:06.021292 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.237919 (* 1 = 0.237919 loss)\nI0607 23:48:06.021301 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.159973 (* 1 = 0.159973 loss)\nI0607 23:48:06.021308 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.090396 (* 1 = 0.090396 loss)\nI0607 23:48:06.021313 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0207268 (* 1 = 0.0207268 loss)\nI0607 23:48:06.021319 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0906771 (* 1 = 
0.0906771 loss)\nI0607 23:48:06.021325 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0207251 (* 1 = 0.0207251 loss)\nI0607 23:48:06.021332 13573 sgd_solver.cpp:106] Iteration 15960, lr = 0.001\nI0607 23:48:18.761476 13573 solver.cpp:229] Iteration 15980, loss = 0.817055\nI0607 23:48:18.761541 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.145876 (* 1 = 0.145876 loss)\nI0607 23:48:18.761550 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.127883 (* 1 = 0.127883 loss)\nI0607 23:48:18.761556 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197674 (* 1 = 0.197674 loss)\nI0607 23:48:18.761562 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0184906 (* 1 = 0.0184906 loss)\nI0607 23:48:18.761567 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.207414 (* 1 = 0.207414 loss)\nI0607 23:48:18.761574 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0184909 (* 1 = 0.0184909 loss)\nI0607 23:48:18.761580 13573 sgd_solver.cpp:106] Iteration 15980, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:48:31.609671 13573 solver.cpp:229] Iteration 16000, loss = 0.708907\nI0607 23:48:31.609740 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.227113 (* 1 = 0.227113 loss)\nI0607 23:48:31.609750 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.201591 (* 1 = 0.201591 loss)\nI0607 23:48:31.609757 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0910333 (* 1 = 0.0910333 loss)\nI0607 23:48:31.609763 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0211893 (* 1 = 0.0211893 loss)\nI0607 23:48:31.609769 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0894089 (* 1 = 0.0894089 loss)\nI0607 23:48:31.609776 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0211911 (* 1 = 0.0211911 loss)\nI0607 23:48:31.609784 13573 sgd_solver.cpp:106] Iteration 16000, lr = 0.001\nI0607 23:48:44.473263 13573 solver.cpp:229] 
Iteration 16020, loss = 0.732905\nI0607 23:48:44.473335 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0805951 (* 1 = 0.0805951 loss)\nI0607 23:48:44.473345 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.205432 (* 1 = 0.205432 loss)\nI0607 23:48:44.473351 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.192789 (* 1 = 0.192789 loss)\nI0607 23:48:44.473356 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0736087 (* 1 = 0.0736087 loss)\nI0607 23:48:44.473361 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186808 (* 1 = 0.186808 loss)\nI0607 23:48:44.473367 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0735643 (* 1 = 0.0735643 loss)\nI0607 23:48:44.473374 13573 sgd_solver.cpp:106] Iteration 16020, lr = 0.001\nI0607 23:48:57.462647 13573 solver.cpp:229] Iteration 16040, loss = 0.710686\nI0607 23:48:57.462712 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0291316 (* 1 = 0.0291316 loss)\nI0607 23:48:57.462723 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0768341 (* 1 = 0.0768341 loss)\nI0607 23:48:57.462729 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.333619 (* 1 = 0.333619 loss)\nI0607 23:48:57.462734 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0440993 (* 1 = 0.0440993 loss)\nI0607 23:48:57.462740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.307337 (* 1 = 0.307337 loss)\nI0607 23:48:57.462745 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0440811 (* 1 = 0.0440811 loss)\nI0607 23:48:57.462752 13573 sgd_solver.cpp:106] Iteration 16040, lr = 0.001\nI0607 23:49:10.282311 13573 solver.cpp:229] Iteration 16060, loss = 0.451416\nI0607 23:49:10.282431 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.128808 (* 1 = 0.128808 loss)\nI0607 23:49:10.282441 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0710452 (* 1 = 0.0710452 loss)\nI0607 
23:49:10.282447 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0998558 (* 1 = 0.0998558 loss)\nI0607 23:49:10.282454 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0210054 (* 1 = 0.0210054 loss)\nI0607 23:49:10.282459 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0986401 (* 1 = 0.0986401 loss)\nI0607 23:49:10.282464 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0210108 (* 1 = 0.0210108 loss)\nI0607 23:49:10.282471 13573 sgd_solver.cpp:106] Iteration 16060, lr = 0.001\nI0607 23:49:23.127807 13573 solver.cpp:229] Iteration 16080, loss = 1.11072\nI0607 23:49:23.127873 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.653372 (* 1 = 0.653372 loss)\nI0607 23:49:23.127882 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.291775 (* 1 = 0.291775 loss)\nI0607 23:49:23.127888 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.315072 (* 1 = 0.315072 loss)\nI0607 23:49:23.127894 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.037312 (* 1 = 0.037312 loss)\nI0607 23:49:23.127900 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.311682 (* 1 = 0.311682 loss)\nI0607 23:49:23.127907 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0372821 (* 1 = 0.0372821 loss)\nI0607 23:49:23.127913 13573 sgd_solver.cpp:106] Iteration 16080, lr = 0.001\nI0607 23:49:36.220154 13573 solver.cpp:229] Iteration 16100, loss = 0.414096\nI0607 23:49:36.220212 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000530357 (* 1 = 0.000530357 loss)\nI0607 23:49:36.220224 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00697273 (* 1 = 0.00697273 loss)\nI0607 23:49:36.220230 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.139679 (* 1 = 0.139679 loss)\nI0607 23:49:36.220235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00181094 (* 1 = 0.00181094 loss)\nI0607 23:49:36.220242 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139446 (* 1 = 0.139446 loss)\nI0607 23:49:36.220247 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00181052 (* 1 = 0.00181052 loss)\nI0607 23:49:36.220253 13573 sgd_solver.cpp:106] Iteration 16100, lr = 0.001\nI0607 23:49:48.993072 13573 solver.cpp:229] Iteration 16120, loss = 0.911934\nI0607 23:49:48.993149 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.328785 (* 1 = 0.328785 loss)\nI0607 23:49:48.993157 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.193719 (* 1 = 0.193719 loss)\nI0607 23:49:48.993163 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.169681 (* 1 = 0.169681 loss)\nI0607 23:49:48.993170 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0492578 (* 1 = 0.0492578 loss)\nI0607 23:49:48.993176 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175358 (* 1 = 0.175358 loss)\nI0607 23:49:48.993182 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0492733 (* 1 = 0.0492733 loss)\nI0607 23:49:48.993191 13573 sgd_solver.cpp:106] Iteration 16120, lr = 0.001\nI0607 23:50:01.809811 13573 solver.cpp:229] Iteration 16140, loss = 1.13856\nI0607 23:50:01.809880 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00220058 (* 1 = 0.00220058 loss)\nI0607 23:50:01.809890 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00534134 (* 1 = 0.00534134 loss)\nI0607 23:50:01.809896 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.561308 (* 1 = 0.561308 loss)\nI0607 23:50:01.809901 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.161723 (* 1 = 0.161723 loss)\nI0607 23:50:01.809907 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.582569 (* 1 = 0.582569 loss)\nI0607 23:50:01.809913 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.161605 (* 1 = 0.161605 loss)\nI0607 23:50:01.809921 13573 sgd_solver.cpp:106] Iteration 16140, lr = 
0.001\nI0607 23:50:14.703199 13573 solver.cpp:229] Iteration 16160, loss = 0.809925\nI0607 23:50:14.703265 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0979101 (* 1 = 0.0979101 loss)\nI0607 23:50:14.703274 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.214046 (* 1 = 0.214046 loss)\nI0607 23:50:14.703280 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.289908 (* 1 = 0.289908 loss)\nI0607 23:50:14.703286 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0742455 (* 1 = 0.0742455 loss)\nI0607 23:50:14.703292 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.294811 (* 1 = 0.294811 loss)\nI0607 23:50:14.703299 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.074277 (* 1 = 0.074277 loss)\nI0607 23:50:14.703306 13573 sgd_solver.cpp:106] Iteration 16160, lr = 0.001\nI0607 23:50:27.517001 13573 solver.cpp:229] Iteration 16180, loss = 0.543864\nI0607 23:50:27.517071 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0013351 (* 1 = 0.0013351 loss)\nI0607 23:50:27.517081 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00272376 (* 1 = 0.00272376 loss)\nI0607 23:50:27.517087 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.12553 (* 1 = 0.12553 loss)\nI0607 23:50:27.517092 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00110472 (* 1 = 0.00110472 loss)\nI0607 23:50:27.517098 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.121239 (* 1 = 0.121239 loss)\nI0607 23:50:27.517104 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0011003 (* 1 = 0.0011003 loss)\nI0607 23:50:27.517112 13573 sgd_solver.cpp:106] Iteration 16180, lr = 0.001\nspeed: 0.644s / iter\nI0607 23:50:40.488960 13573 solver.cpp:229] Iteration 16200, loss = 1.00081\nI0607 23:50:40.489017 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0013139 (* 1 = 0.0013139 loss)\nI0607 23:50:40.489027 13573 solver.cpp:245]     Train net 
output #1: loss_cls = 0.00443973 (* 1 = 0.00443973 loss)\nI0607 23:50:40.489033 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.304586 (* 1 = 0.304586 loss)\nI0607 23:50:40.489039 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0747564 (* 1 = 0.0747564 loss)\nI0607 23:50:40.489045 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.326671 (* 1 = 0.326671 loss)\nI0607 23:50:40.489051 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0747805 (* 1 = 0.0747805 loss)\nI0607 23:50:40.489084 13573 sgd_solver.cpp:106] Iteration 16200, lr = 0.001\nI0607 23:50:53.629716 13573 solver.cpp:229] Iteration 16220, loss = 0.66854\nI0607 23:50:53.629786 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101577 (* 1 = 0.101577 loss)\nI0607 23:50:53.629796 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119987 (* 1 = 0.119987 loss)\nI0607 23:50:53.629802 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0877385 (* 1 = 0.0877385 loss)\nI0607 23:50:53.629807 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0276003 (* 1 = 0.0276003 loss)\nI0607 23:50:53.629813 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0829841 (* 1 = 0.0829841 loss)\nI0607 23:50:53.629818 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.027598 (* 1 = 0.027598 loss)\nI0607 23:50:53.629825 13573 sgd_solver.cpp:106] Iteration 16220, lr = 0.001\nI0607 23:51:06.743129 13573 solver.cpp:229] Iteration 16240, loss = 1.07786\nI0607 23:51:06.743201 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0767848 (* 1 = 0.0767848 loss)\nI0607 23:51:06.743211 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0740526 (* 1 = 0.0740526 loss)\nI0607 23:51:06.743218 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0816539 (* 1 = 0.0816539 loss)\nI0607 23:51:06.743223 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0162284 (* 1 = 
0.0162284 loss)\n[... repetitive solver output elided: iterations 16240-17180, lr = 0.001 throughout, speed ~0.645 s/iter; each display step reports the same six train net outputs (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox), with total loss fluctuating between roughly 0.41 and 2.08 ...]\nI0608 00:01:13.708544 13573 solver.cpp:229] Iteration 17180, loss = 1.26079\nI0608 00:01:13.708626 
13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.169523 (* 1 = 0.169523 loss)\nI0608 00:01:13.708637 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.162138 (* 1 = 0.162138 loss)\nI0608 00:01:13.708644 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.192269 (* 1 = 0.192269 loss)\nI0608 00:01:13.708650 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.127399 (* 1 = 0.127399 loss)\nI0608 00:01:13.708657 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.196223 (* 1 = 0.196223 loss)\nI0608 00:01:13.708662 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.127399 (* 1 = 0.127399 loss)\nI0608 00:01:13.708668 13573 sgd_solver.cpp:106] Iteration 17180, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:01:26.383214 13573 solver.cpp:229] Iteration 17200, loss = 2.00966\nI0608 00:01:26.383283 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0140861 (* 1 = 0.0140861 loss)\nI0608 00:01:26.383293 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0367206 (* 1 = 0.0367206 loss)\nI0608 00:01:26.383299 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.94835 (* 1 = 0.94835 loss)\nI0608 00:01:26.383304 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.256321 (* 1 = 0.256321 loss)\nI0608 00:01:26.383311 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.957376 (* 1 = 0.957376 loss)\nI0608 00:01:26.383316 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.256093 (* 1 = 0.256093 loss)\nI0608 00:01:26.383322 13573 sgd_solver.cpp:106] Iteration 17200, lr = 0.001\nI0608 00:01:39.207044 13573 solver.cpp:229] Iteration 17220, loss = 0.626714\nI0608 00:01:39.207123 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.034583 (* 1 = 0.034583 loss)\nI0608 00:01:39.207137 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.054447 (* 1 = 0.054447 loss)\nI0608 00:01:39.207147 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.333489 (* 1 = 0.333489 loss)\nI0608 00:01:39.207155 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0471988 (* 1 = 0.0471988 loss)\nI0608 00:01:39.207165 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.341123 (* 1 = 0.341123 loss)\nI0608 00:01:39.207175 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0472025 (* 1 = 0.0472025 loss)\nI0608 00:01:39.207183 13573 sgd_solver.cpp:106] Iteration 17220, lr = 0.001\nI0608 00:01:52.112428 13573 solver.cpp:229] Iteration 17240, loss = 0.990789\nI0608 00:01:52.112488 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199482 (* 1 = 0.199482 loss)\nI0608 00:01:52.112498 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.184455 (* 1 = 0.184455 loss)\nI0608 00:01:52.112504 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.375814 (* 1 = 0.375814 loss)\nI0608 00:01:52.112510 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0190531 (* 1 = 0.0190531 loss)\nI0608 00:01:52.112516 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.36626 (* 1 = 0.36626 loss)\nI0608 00:01:52.112522 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0190749 (* 1 = 0.0190749 loss)\nI0608 00:01:52.112529 13573 sgd_solver.cpp:106] Iteration 17240, lr = 0.001\nI0608 00:02:05.019203 13573 solver.cpp:229] Iteration 17260, loss = 1.3135\nI0608 00:02:05.019273 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.316715 (* 1 = 0.316715 loss)\nI0608 00:02:05.019282 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.420083 (* 1 = 0.420083 loss)\nI0608 00:02:05.019287 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.261796 (* 1 = 0.261796 loss)\nI0608 00:02:05.019294 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.198448 (* 1 = 0.198448 loss)\nI0608 00:02:05.019299 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.261937 (* 1 = 0.261937 loss)\nI0608 
00:02:05.019304 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.198442 (* 1 = 0.198442 loss)\nI0608 00:02:05.019310 13573 sgd_solver.cpp:106] Iteration 17260, lr = 0.001\nI0608 00:02:17.704012 13573 solver.cpp:229] Iteration 17280, loss = 0.554348\nI0608 00:02:17.704087 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00141996 (* 1 = 0.00141996 loss)\nI0608 00:02:17.704097 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0025964 (* 1 = 0.0025964 loss)\nI0608 00:02:17.704102 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.164072 (* 1 = 0.164072 loss)\nI0608 00:02:17.704108 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00583983 (* 1 = 0.00583983 loss)\nI0608 00:02:17.704114 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.131385 (* 1 = 0.131385 loss)\nI0608 00:02:17.704120 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00582225 (* 1 = 0.00582225 loss)\nI0608 00:02:17.704128 13573 sgd_solver.cpp:106] Iteration 17280, lr = 0.001\nI0608 00:02:30.383534 13573 solver.cpp:229] Iteration 17300, loss = 0.929763\nI0608 00:02:30.383600 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0109964 (* 1 = 0.0109964 loss)\nI0608 00:02:30.383610 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0196693 (* 1 = 0.0196693 loss)\nI0608 00:02:30.383616 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.488475 (* 1 = 0.488475 loss)\nI0608 00:02:30.383621 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0466176 (* 1 = 0.0466176 loss)\nI0608 00:02:30.383627 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.506365 (* 1 = 0.506365 loss)\nI0608 00:02:30.383632 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0466512 (* 1 = 0.0466512 loss)\nI0608 00:02:30.383640 13573 sgd_solver.cpp:106] Iteration 17300, lr = 0.001\nI0608 00:02:43.276397 13573 solver.cpp:229] Iteration 17320, loss = 0.530994\nI0608 
00:02:43.276468 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.166362 (* 1 = 0.166362 loss)\nI0608 00:02:43.276479 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107407 (* 1 = 0.107407 loss)\nI0608 00:02:43.276485 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0877395 (* 1 = 0.0877395 loss)\nI0608 00:02:43.276490 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0194263 (* 1 = 0.0194263 loss)\nI0608 00:02:43.276496 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0978551 (* 1 = 0.0978551 loss)\nI0608 00:02:43.276502 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0194298 (* 1 = 0.0194298 loss)\nI0608 00:02:43.276509 13573 sgd_solver.cpp:106] Iteration 17320, lr = 0.001\nI0608 00:02:56.003958 13573 solver.cpp:229] Iteration 17340, loss = 1.01777\nI0608 00:02:56.004026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.351835 (* 1 = 0.351835 loss)\nI0608 00:02:56.004036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.401057 (* 1 = 0.401057 loss)\nI0608 00:02:56.004042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.366746 (* 1 = 0.366746 loss)\nI0608 00:02:56.004048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.104954 (* 1 = 0.104954 loss)\nI0608 00:02:56.004053 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.367367 (* 1 = 0.367367 loss)\nI0608 00:02:56.004060 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.10501 (* 1 = 0.10501 loss)\nI0608 00:02:56.004065 13573 sgd_solver.cpp:106] Iteration 17340, lr = 0.001\nI0608 00:03:08.750416 13573 solver.cpp:229] Iteration 17360, loss = 1.01098\nI0608 00:03:08.750481 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.259345 (* 1 = 0.259345 loss)\nI0608 00:03:08.750490 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.249752 (* 1 = 0.249752 loss)\nI0608 00:03:08.750496 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.394807 (* 1 = 0.394807 loss)\nI0608 00:03:08.750502 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.186063 (* 1 = 0.186063 loss)\nI0608 00:03:08.750509 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.393813 (* 1 = 0.393813 loss)\nI0608 00:03:08.750514 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.186074 (* 1 = 0.186074 loss)\nI0608 00:03:08.750522 13573 sgd_solver.cpp:106] Iteration 17360, lr = 0.001\nI0608 00:03:21.551558 13573 solver.cpp:229] Iteration 17380, loss = 1.26667\nI0608 00:03:21.551627 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199494 (* 1 = 0.199494 loss)\nI0608 00:03:21.551637 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.179232 (* 1 = 0.179232 loss)\nI0608 00:03:21.551643 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.530044 (* 1 = 0.530044 loss)\nI0608 00:03:21.551650 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.224933 (* 1 = 0.224933 loss)\nI0608 00:03:21.551656 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.521652 (* 1 = 0.521652 loss)\nI0608 00:03:21.551661 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.225014 (* 1 = 0.225014 loss)\nI0608 00:03:21.551668 13573 sgd_solver.cpp:106] Iteration 17380, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:03:34.491978 13573 solver.cpp:229] Iteration 17400, loss = 0.741255\nI0608 00:03:34.492054 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0115019 (* 1 = 0.0115019 loss)\nI0608 00:03:34.492063 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0143734 (* 1 = 0.0143734 loss)\nI0608 00:03:34.492069 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.334406 (* 1 = 0.334406 loss)\nI0608 00:03:34.492075 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0717189 (* 1 = 0.0717189 loss)\nI0608 00:03:34.492081 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.328977 (* 1 
= 0.328977 loss)\nI0608 00:03:34.492087 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.071689 (* 1 = 0.071689 loss)\nI0608 00:03:34.492094 13573 sgd_solver.cpp:106] Iteration 17400, lr = 0.001\nI0608 00:03:47.475312 13573 solver.cpp:229] Iteration 17420, loss = 1.70438\nI0608 00:03:47.475378 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.169383 (* 1 = 0.169383 loss)\nI0608 00:03:47.475386 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.179725 (* 1 = 0.179725 loss)\nI0608 00:03:47.475394 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.599654 (* 1 = 0.599654 loss)\nI0608 00:03:47.475400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.108251 (* 1 = 0.108251 loss)\nI0608 00:03:47.475406 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.588684 (* 1 = 0.588684 loss)\nI0608 00:03:47.475411 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.1083 (* 1 = 0.1083 loss)\nI0608 00:03:47.475419 13573 sgd_solver.cpp:106] Iteration 17420, lr = 0.001\nI0608 00:04:00.446600 13573 solver.cpp:229] Iteration 17440, loss = 0.460483\nI0608 00:04:00.446662 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.17009 (* 1 = 0.17009 loss)\nI0608 00:04:00.446671 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.128751 (* 1 = 0.128751 loss)\nI0608 00:04:00.446678 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0771417 (* 1 = 0.0771417 loss)\nI0608 00:04:00.446683 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00289462 (* 1 = 0.00289462 loss)\nI0608 00:04:00.446689 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0985226 (* 1 = 0.0985226 loss)\nI0608 00:04:00.446696 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00289336 (* 1 = 0.00289336 loss)\nI0608 00:04:00.446702 13573 sgd_solver.cpp:106] Iteration 17440, lr = 0.001\nI0608 00:04:13.396059 13573 solver.cpp:229] Iteration 17460, loss = 
1.2296\nI0608 00:04:13.396121 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0326597 (* 1 = 0.0326597 loss)\nI0608 00:04:13.396131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0293875 (* 1 = 0.0293875 loss)\nI0608 00:04:13.396136 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.449856 (* 1 = 0.449856 loss)\nI0608 00:04:13.396142 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.141894 (* 1 = 0.141894 loss)\nI0608 00:04:13.396147 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.417342 (* 1 = 0.417342 loss)\nI0608 00:04:13.396152 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.141883 (* 1 = 0.141883 loss)\nI0608 00:04:13.396159 13573 sgd_solver.cpp:106] Iteration 17460, lr = 0.001\nI0608 00:04:26.179426 13573 solver.cpp:229] Iteration 17480, loss = 0.860993\nI0608 00:04:26.179500 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.286937 (* 1 = 0.286937 loss)\nI0608 00:04:26.179510 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.200334 (* 1 = 0.200334 loss)\nI0608 00:04:26.179517 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.211943 (* 1 = 0.211943 loss)\nI0608 00:04:26.179522 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0288163 (* 1 = 0.0288163 loss)\nI0608 00:04:26.179528 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.227262 (* 1 = 0.227262 loss)\nI0608 00:04:26.179534 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0288216 (* 1 = 0.0288216 loss)\nI0608 00:04:26.179540 13573 sgd_solver.cpp:106] Iteration 17480, lr = 0.001\nI0608 00:04:39.108779 13573 solver.cpp:229] Iteration 17500, loss = 0.594583\nI0608 00:04:39.108851 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00247917 (* 1 = 0.00247917 loss)\nI0608 00:04:39.108860 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.016699 (* 1 = 0.016699 loss)\nI0608 00:04:39.108866 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.294301 (* 1 = 0.294301 loss)\nI0608 00:04:39.108872 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0670355 (* 1 = 0.0670355 loss)\nI0608 00:04:39.108877 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.298651 (* 1 = 0.298651 loss)\nI0608 00:04:39.108883 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0670198 (* 1 = 0.0670198 loss)\nI0608 00:04:39.108891 13573 sgd_solver.cpp:106] Iteration 17500, lr = 0.001\nI0608 00:04:52.148083 13573 solver.cpp:229] Iteration 17520, loss = 1.24426\nI0608 00:04:52.148144 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.221338 (* 1 = 0.221338 loss)\nI0608 00:04:52.148154 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.166944 (* 1 = 0.166944 loss)\nI0608 00:04:52.148159 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.527892 (* 1 = 0.527892 loss)\nI0608 00:04:52.148164 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0368999 (* 1 = 0.0368999 loss)\nI0608 00:04:52.148170 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.460639 (* 1 = 0.460639 loss)\nI0608 00:04:52.148176 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.036882 (* 1 = 0.036882 loss)\nI0608 00:04:52.148182 13573 sgd_solver.cpp:106] Iteration 17520, lr = 0.001\nI0608 00:05:05.106076 13573 solver.cpp:229] Iteration 17540, loss = 0.84612\nI0608 00:05:05.106139 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.212391 (* 1 = 0.212391 loss)\nI0608 00:05:05.106149 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118156 (* 1 = 0.118156 loss)\nI0608 00:05:05.106155 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0962585 (* 1 = 0.0962585 loss)\nI0608 00:05:05.106161 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00666477 (* 1 = 0.00666477 loss)\nI0608 00:05:05.106166 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103618 
(* 1 = 0.103618 loss)\nI0608 00:05:05.106173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00666101 (* 1 = 0.00666101 loss)\nI0608 00:05:05.106179 13573 sgd_solver.cpp:106] Iteration 17540, lr = 0.001\nI0608 00:05:18.050791 13573 solver.cpp:229] Iteration 17560, loss = 0.44546\nI0608 00:05:18.050863 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00392865 (* 1 = 0.00392865 loss)\nI0608 00:05:18.050871 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0129262 (* 1 = 0.0129262 loss)\nI0608 00:05:18.050879 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1906 (* 1 = 0.1906 loss)\nI0608 00:05:18.050884 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0159697 (* 1 = 0.0159697 loss)\nI0608 00:05:18.050889 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.20041 (* 1 = 0.20041 loss)\nI0608 00:05:18.050895 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0159475 (* 1 = 0.0159475 loss)\nI0608 00:05:18.050901 13573 sgd_solver.cpp:106] Iteration 17560, lr = 0.001\nI0608 00:05:31.083883 13573 solver.cpp:229] Iteration 17580, loss = 0.888805\nI0608 00:05:31.083945 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.232461 (* 1 = 0.232461 loss)\nI0608 00:05:31.083955 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.446591 (* 1 = 0.446591 loss)\nI0608 00:05:31.083961 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.224886 (* 1 = 0.224886 loss)\nI0608 00:05:31.083966 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0410091 (* 1 = 0.0410091 loss)\nI0608 00:05:31.083971 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213309 (* 1 = 0.213309 loss)\nI0608 00:05:31.083977 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0410016 (* 1 = 0.0410016 loss)\nI0608 00:05:31.083984 13573 sgd_solver.cpp:106] Iteration 17580, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:05:43.835958 13573 solver.cpp:229] 
Iteration 17600, loss = 1.03399\nI0608 00:05:43.836027 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.277073 (* 1 = 0.277073 loss)\nI0608 00:05:43.836036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.222113 (* 1 = 0.222113 loss)\nI0608 00:05:43.836042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.222691 (* 1 = 0.222691 loss)\nI0608 00:05:43.836050 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0844395 (* 1 = 0.0844395 loss)\nI0608 00:05:43.836055 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.200908 (* 1 = 0.200908 loss)\nI0608 00:05:43.836061 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0844673 (* 1 = 0.0844673 loss)\nI0608 00:05:43.836068 13573 sgd_solver.cpp:106] Iteration 17600, lr = 0.001\nI0608 00:05:56.690711 13573 solver.cpp:229] Iteration 17620, loss = 0.700261\nI0608 00:05:56.690804 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.12169 (* 1 = 0.12169 loss)\nI0608 00:05:56.690815 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0922365 (* 1 = 0.0922365 loss)\nI0608 00:05:56.690821 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180267 (* 1 = 0.180267 loss)\nI0608 00:05:56.690827 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0217065 (* 1 = 0.0217065 loss)\nI0608 00:05:56.690832 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.188233 (* 1 = 0.188233 loss)\nI0608 00:05:56.690838 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0217193 (* 1 = 0.0217193 loss)\nI0608 00:05:56.690845 13573 sgd_solver.cpp:106] Iteration 17620, lr = 0.001\nI0608 00:06:09.529150 13573 solver.cpp:229] Iteration 17640, loss = 0.647045\nI0608 00:06:09.529214 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0165219 (* 1 = 0.0165219 loss)\nI0608 00:06:09.529223 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0648366 (* 1 = 0.0648366 loss)\nI0608 00:06:09.529229 
13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.273822 (* 1 = 0.273822 loss)\nI0608 00:06:09.529235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0580385 (* 1 = 0.0580385 loss)\nI0608 00:06:09.529242 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.275269 (* 1 = 0.275269 loss)\nI0608 00:06:09.529247 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0579442 (* 1 = 0.0579442 loss)\nI0608 00:06:09.529254 13573 sgd_solver.cpp:106] Iteration 17640, lr = 0.001\nI0608 00:06:22.336599 13573 solver.cpp:229] Iteration 17660, loss = 2.0597\nI0608 00:06:22.336664 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.264032 (* 1 = 0.264032 loss)\nI0608 00:06:22.336675 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.19333 (* 1 = 0.19333 loss)\nI0608 00:06:22.336681 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.841849 (* 1 = 0.841849 loss)\nI0608 00:06:22.336688 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.383439 (* 1 = 0.383439 loss)\nI0608 00:06:22.336693 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.844446 (* 1 = 0.844446 loss)\nI0608 00:06:22.336699 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.383461 (* 1 = 0.383461 loss)\nI0608 00:06:22.336705 13573 sgd_solver.cpp:106] Iteration 17660, lr = 0.001\nI0608 00:06:35.314501 13573 solver.cpp:229] Iteration 17680, loss = 0.553923\nI0608 00:06:35.314573 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.151072 (* 1 = 0.151072 loss)\nI0608 00:06:35.314581 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.148249 (* 1 = 0.148249 loss)\nI0608 00:06:35.314587 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.135743 (* 1 = 0.135743 loss)\nI0608 00:06:35.314594 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0326009 (* 1 = 0.0326009 loss)\nI0608 00:06:35.314599 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.137599 (* 1 = 0.137599 loss)\nI0608 00:06:35.314604 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0326127 (* 1 = 0.0326127 loss)\nI0608 00:06:35.314611 13573 sgd_solver.cpp:106] Iteration 17680, lr = 0.001\nI0608 00:06:48.278079 13573 solver.cpp:229] Iteration 17700, loss = 0.833789\nI0608 00:06:48.278143 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.297108 (* 1 = 0.297108 loss)\nI0608 00:06:48.278152 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.320166 (* 1 = 0.320166 loss)\nI0608 00:06:48.278158 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137963 (* 1 = 0.137963 loss)\nI0608 00:06:48.278164 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0557514 (* 1 = 0.0557514 loss)\nI0608 00:06:48.278170 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.131883 (* 1 = 0.131883 loss)\nI0608 00:06:48.278177 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0557643 (* 1 = 0.0557643 loss)\nI0608 00:06:48.278182 13573 sgd_solver.cpp:106] Iteration 17700, lr = 0.001\nI0608 00:07:01.153975 13573 solver.cpp:229] Iteration 17720, loss = 1.15734\nI0608 00:07:01.154045 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.478813 (* 1 = 0.478813 loss)\nI0608 00:07:01.154057 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.487724 (* 1 = 0.487724 loss)\nI0608 00:07:01.154063 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.419747 (* 1 = 0.419747 loss)\nI0608 00:07:01.154069 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.055689 (* 1 = 0.055689 loss)\nI0608 00:07:01.154075 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.413087 (* 1 = 0.413087 loss)\nI0608 00:07:01.154081 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0556918 (* 1 = 0.0556918 loss)\nI0608 00:07:01.154088 13573 sgd_solver.cpp:106] Iteration 17720, lr = 0.001\nI0608 00:07:13.793874 13573 solver.cpp:229] 
Iteration 17740, loss = 0.565168\nI0608 00:07:13.793946 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.06753 (* 1 = 0.06753 loss)\nI0608 00:07:13.793956 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.053529 (* 1 = 0.053529 loss)\nI0608 00:07:13.793962 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180355 (* 1 = 0.180355 loss)\nI0608 00:07:13.793967 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00770018 (* 1 = 0.00770018 loss)\nI0608 00:07:13.793973 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.156116 (* 1 = 0.156116 loss)\nI0608 00:07:13.793979 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0077032 (* 1 = 0.0077032 loss)\nI0608 00:07:13.793987 13573 sgd_solver.cpp:106] Iteration 17740, lr = 0.001\nI0608 00:07:26.453824 13573 solver.cpp:229] Iteration 17760, loss = 1.00628\nI0608 00:07:26.453886 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.235413 (* 1 = 0.235413 loss)\nI0608 00:07:26.453896 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.136866 (* 1 = 0.136866 loss)\nI0608 00:07:26.453902 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.339739 (* 1 = 0.339739 loss)\nI0608 00:07:26.453908 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0591873 (* 1 = 0.0591873 loss)\nI0608 00:07:26.453914 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.343654 (* 1 = 0.343654 loss)\nI0608 00:07:26.453920 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0591826 (* 1 = 0.0591826 loss)\nI0608 00:07:26.453928 13573 sgd_solver.cpp:106] Iteration 17760, lr = 0.001\nI0608 00:07:39.428575 13573 solver.cpp:229] Iteration 17780, loss = 0.90498\nI0608 00:07:39.428642 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.212352 (* 1 = 0.212352 loss)\nI0608 00:07:39.428650 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190506 (* 1 = 0.190506 loss)\nI0608 00:07:39.428656 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.178301 (* 1 = 0.178301 loss)\nI0608 00:07:39.428663 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0395644 (* 1 = 0.0395644 loss)\nI0608 00:07:39.428668 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183647 (* 1 = 0.183647 loss)\nI0608 00:07:39.428673 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0395657 (* 1 = 0.0395657 loss)\nI0608 00:07:39.428680 13573 sgd_solver.cpp:106] Iteration 17780, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:07:52.320152 13573 solver.cpp:229] Iteration 17800, loss = 0.498634\nI0608 00:07:52.320215 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0541902 (* 1 = 0.0541902 loss)\nI0608 00:07:52.320225 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0355855 (* 1 = 0.0355855 loss)\nI0608 00:07:52.320230 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113452 (* 1 = 0.113452 loss)\nI0608 00:07:52.320236 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0381631 (* 1 = 0.0381631 loss)\nI0608 00:07:52.320241 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.106573 (* 1 = 0.106573 loss)\nI0608 00:07:52.320247 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0381584 (* 1 = 0.0381584 loss)\nI0608 00:07:52.320253 13573 sgd_solver.cpp:106] Iteration 17800, lr = 0.001\nI0608 00:08:05.155931 13573 solver.cpp:229] Iteration 17820, loss = 1.06562\nI0608 00:08:05.156005 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.321739 (* 1 = 0.321739 loss)\nI0608 00:08:05.156014 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124447 (* 1 = 0.124447 loss)\nI0608 00:08:05.156020 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.382395 (* 1 = 0.382395 loss)\nI0608 00:08:05.156026 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.03157 (* 1 = 0.03157 loss)\nI0608 00:08:05.156031 13573 solver.cpp:245]     Train 
net output #4: rpn_cls_loss = 0.38134 (* 1 = 0.38134 loss)
I0608 00:08:05.156038 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0315883 (* 1 = 0.0315883 loss)
I0608 00:08:05.156044 13573 sgd_solver.cpp:106] Iteration 17820, lr = 0.001
I0608 00:08:18.051748 13573 solver.cpp:229] Iteration 17840, loss = 0.692393
I0608 00:08:18.051815 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.149905 (* 1 = 0.149905 loss)
I0608 00:08:18.051825 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.153379 (* 1 = 0.153379 loss)
I0608 00:08:18.051831 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.170921 (* 1 = 0.170921 loss)
I0608 00:08:18.051836 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0258198 (* 1 = 0.0258198 loss)
I0608 00:08:18.051842 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.177683 (* 1 = 0.177683 loss)
I0608 00:08:18.051847 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0258133 (* 1 = 0.0258133 loss)
I0608 00:08:18.051854 13573 sgd_solver.cpp:106] Iteration 17840, lr = 0.001
[... solver output of the same form repeats every 20 iterations for iterations 17860-18740: lr = 0.001 throughout, speed ~0.644 s/iter, total loss fluctuating between ~0.47 and ~1.96 ...]
I0608 00:18:11.043500 13573 solver.cpp:229] Iteration 18760, loss = 0.764695
I0608 00:18:11.043560 13573 solver.cpp:245]     Train net output #0: loss_bbox
= 0.161592 (* 1 = 0.161592 loss)\nI0608 00:18:11.043568 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132043 (* 1 = 0.132043 loss)\nI0608 00:18:11.043575 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104842 (* 1 = 0.104842 loss)\nI0608 00:18:11.043581 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0618351 (* 1 = 0.0618351 loss)\nI0608 00:18:11.043586 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100633 (* 1 = 0.100633 loss)\nI0608 00:18:11.043591 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0618523 (* 1 = 0.0618523 loss)\nI0608 00:18:11.043598 13573 sgd_solver.cpp:106] Iteration 18760, lr = 0.001\nI0608 00:18:24.038511 13573 solver.cpp:229] Iteration 18780, loss = 0.470918\nI0608 00:18:24.038585 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.131514 (* 1 = 0.131514 loss)\nI0608 00:18:24.038595 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0888968 (* 1 = 0.0888968 loss)\nI0608 00:18:24.038600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0898273 (* 1 = 0.0898273 loss)\nI0608 00:18:24.038606 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0304057 (* 1 = 0.0304057 loss)\nI0608 00:18:24.038612 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0861947 (* 1 = 0.0861947 loss)\nI0608 00:18:24.038617 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0304046 (* 1 = 0.0304046 loss)\nI0608 00:18:24.038625 13573 sgd_solver.cpp:106] Iteration 18780, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:18:36.892917 13573 solver.cpp:229] Iteration 18800, loss = 0.679689\nI0608 00:18:36.893002 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.14283 (* 1 = 0.14283 loss)\nI0608 00:18:36.893012 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0972091 (* 1 = 0.0972091 loss)\nI0608 00:18:36.893018 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104898 (* 1 = 0.104898 
loss)\nI0608 00:18:36.893024 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0490057 (* 1 = 0.0490057 loss)\nI0608 00:18:36.893030 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109179 (* 1 = 0.109179 loss)\nI0608 00:18:36.893036 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0489866 (* 1 = 0.0489866 loss)\nI0608 00:18:36.893049 13573 sgd_solver.cpp:106] Iteration 18800, lr = 0.001\nI0608 00:18:49.763720 13573 solver.cpp:229] Iteration 18820, loss = 0.59111\nI0608 00:18:49.763789 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.103517 (* 1 = 0.103517 loss)\nI0608 00:18:49.763799 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0505213 (* 1 = 0.0505213 loss)\nI0608 00:18:49.763805 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0966782 (* 1 = 0.0966782 loss)\nI0608 00:18:49.763811 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0254353 (* 1 = 0.0254353 loss)\nI0608 00:18:49.763818 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0985376 (* 1 = 0.0985376 loss)\nI0608 00:18:49.763823 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0254358 (* 1 = 0.0254358 loss)\nI0608 00:18:49.763829 13573 sgd_solver.cpp:106] Iteration 18820, lr = 0.001\nI0608 00:19:02.613464 13573 solver.cpp:229] Iteration 18840, loss = 2.15271\nI0608 00:19:02.613529 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000732112 (* 1 = 0.000732112 loss)\nI0608 00:19:02.613539 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0167527 (* 1 = 0.0167527 loss)\nI0608 00:19:02.613545 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.23469 (* 1 = 0.23469 loss)\nI0608 00:19:02.613551 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0410839 (* 1 = 0.0410839 loss)\nI0608 00:19:02.613557 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.247631 (* 1 = 0.247631 loss)\nI0608 00:19:02.613564 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0410887 (* 1 = 0.0410887 loss)\nI0608 00:19:02.613569 13573 sgd_solver.cpp:106] Iteration 18840, lr = 0.001\nI0608 00:19:15.500622 13573 solver.cpp:229] Iteration 18860, loss = 0.931846\nI0608 00:19:15.500689 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.137437 (* 1 = 0.137437 loss)\nI0608 00:19:15.500699 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.18056 (* 1 = 0.18056 loss)\nI0608 00:19:15.500705 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.155104 (* 1 = 0.155104 loss)\nI0608 00:19:15.500711 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0481328 (* 1 = 0.0481328 loss)\nI0608 00:19:15.500716 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154633 (* 1 = 0.154633 loss)\nI0608 00:19:15.500722 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.048132 (* 1 = 0.048132 loss)\nI0608 00:19:15.500730 13573 sgd_solver.cpp:106] Iteration 18860, lr = 0.001\nI0608 00:19:28.282789 13573 solver.cpp:229] Iteration 18880, loss = 0.460856\nI0608 00:19:28.282865 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0329398 (* 1 = 0.0329398 loss)\nI0608 00:19:28.282873 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0242368 (* 1 = 0.0242368 loss)\nI0608 00:19:28.282879 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.084233 (* 1 = 0.084233 loss)\nI0608 00:19:28.282884 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0316397 (* 1 = 0.0316397 loss)\nI0608 00:19:28.282891 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.082231 (* 1 = 0.082231 loss)\nI0608 00:19:28.282896 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0316245 (* 1 = 0.0316245 loss)\nI0608 00:19:28.282902 13573 sgd_solver.cpp:106] Iteration 18880, lr = 0.001\nI0608 00:19:40.979821 13573 solver.cpp:229] Iteration 18900, loss = 2.27139\nI0608 00:19:40.979888 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.00180635 (* 1 = 0.00180635 loss)\nI0608 00:19:40.979897 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0086594 (* 1 = 0.0086594 loss)\nI0608 00:19:40.979903 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.769668 (* 1 = 0.769668 loss)\nI0608 00:19:40.979909 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.35708 (* 1 = 0.35708 loss)\nI0608 00:19:40.979915 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.770326 (* 1 = 0.770326 loss)\nI0608 00:19:40.979920 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.357152 (* 1 = 0.357152 loss)\nI0608 00:19:40.979928 13573 sgd_solver.cpp:106] Iteration 18900, lr = 0.001\nI0608 00:19:53.736999 13573 solver.cpp:229] Iteration 18920, loss = 0.518062\nI0608 00:19:53.737092 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00274021 (* 1 = 0.00274021 loss)\nI0608 00:19:53.737100 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0283157 (* 1 = 0.0283157 loss)\nI0608 00:19:53.737107 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183715 (* 1 = 0.183715 loss)\nI0608 00:19:53.737112 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0192302 (* 1 = 0.0192302 loss)\nI0608 00:19:53.737118 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.167855 (* 1 = 0.167855 loss)\nI0608 00:19:53.737123 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0192245 (* 1 = 0.0192245 loss)\nI0608 00:19:53.737130 13573 sgd_solver.cpp:106] Iteration 18920, lr = 0.001\nI0608 00:20:06.324545 13573 solver.cpp:229] Iteration 18940, loss = 0.457048\nI0608 00:20:06.324609 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0518756 (* 1 = 0.0518756 loss)\nI0608 00:20:06.324620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0452201 (* 1 = 0.0452201 loss)\nI0608 00:20:06.324625 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.110514 (* 1 = 0.110514 loss)\nI0608 00:20:06.324631 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0279362 (* 1 = 0.0279362 loss)\nI0608 00:20:06.324636 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114016 (* 1 = 0.114016 loss)\nI0608 00:20:06.324641 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0279379 (* 1 = 0.0279379 loss)\nI0608 00:20:06.324650 13573 sgd_solver.cpp:106] Iteration 18940, lr = 0.001\nI0608 00:20:19.227092 13573 solver.cpp:229] Iteration 18960, loss = 1.04712\nI0608 00:20:19.227154 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.241135 (* 1 = 0.241135 loss)\nI0608 00:20:19.227162 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.263327 (* 1 = 0.263327 loss)\nI0608 00:20:19.227169 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41782 (* 1 = 0.41782 loss)\nI0608 00:20:19.227174 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.093044 (* 1 = 0.093044 loss)\nI0608 00:20:19.227180 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.418184 (* 1 = 0.418184 loss)\nI0608 00:20:19.227185 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.09304 (* 1 = 0.09304 loss)\nI0608 00:20:19.227192 13573 sgd_solver.cpp:106] Iteration 18960, lr = 0.001\nI0608 00:20:31.978092 13573 solver.cpp:229] Iteration 18980, loss = 0.875205\nI0608 00:20:31.978159 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.366707 (* 1 = 0.366707 loss)\nI0608 00:20:31.978168 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.233522 (* 1 = 0.233522 loss)\nI0608 00:20:31.978174 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.16131 (* 1 = 0.16131 loss)\nI0608 00:20:31.978180 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0326516 (* 1 = 0.0326516 loss)\nI0608 00:20:31.978186 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.161569 (* 1 = 0.161569 loss)\nI0608 
00:20:31.978191 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0326679 (* 1 = 0.0326679 loss)\nI0608 00:20:31.978198 13573 sgd_solver.cpp:106] Iteration 18980, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:20:44.762265 13573 solver.cpp:229] Iteration 19000, loss = 1.17884\nI0608 00:20:44.762331 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.428402 (* 1 = 0.428402 loss)\nI0608 00:20:44.762339 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.290633 (* 1 = 0.290633 loss)\nI0608 00:20:44.762346 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.338415 (* 1 = 0.338415 loss)\nI0608 00:20:44.762352 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.151128 (* 1 = 0.151128 loss)\nI0608 00:20:44.762357 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.340988 (* 1 = 0.340988 loss)\nI0608 00:20:44.762363 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.151129 (* 1 = 0.151129 loss)\nI0608 00:20:44.762370 13573 sgd_solver.cpp:106] Iteration 19000, lr = 0.001\nI0608 00:20:57.910317 13573 solver.cpp:229] Iteration 19020, loss = 1.09844\nI0608 00:20:57.910387 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000385661 (* 1 = 0.000385661 loss)\nI0608 00:20:57.910398 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00104749 (* 1 = 0.00104749 loss)\nI0608 00:20:57.910403 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.283575 (* 1 = 0.283575 loss)\nI0608 00:20:57.910409 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0246501 (* 1 = 0.0246501 loss)\nI0608 00:20:57.910414 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.319735 (* 1 = 0.319735 loss)\nI0608 00:20:57.910420 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0246404 (* 1 = 0.0246404 loss)\nI0608 00:20:57.910428 13573 sgd_solver.cpp:106] Iteration 19020, lr = 0.001\nI0608 00:21:10.670589 13573 solver.cpp:229] Iteration 19040, loss = 
0.739761\nI0608 00:21:10.670662 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0441951 (* 1 = 0.0441951 loss)\nI0608 00:21:10.670672 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0530421 (* 1 = 0.0530421 loss)\nI0608 00:21:10.670678 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0860619 (* 1 = 0.0860619 loss)\nI0608 00:21:10.670684 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.031434 (* 1 = 0.031434 loss)\nI0608 00:21:10.670691 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0872617 (* 1 = 0.0872617 loss)\nI0608 00:21:10.670696 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0314083 (* 1 = 0.0314083 loss)\nI0608 00:21:10.670704 13573 sgd_solver.cpp:106] Iteration 19040, lr = 0.001\nI0608 00:21:23.657515 13573 solver.cpp:229] Iteration 19060, loss = 1.00737\nI0608 00:21:23.657574 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.464846 (* 1 = 0.464846 loss)\nI0608 00:21:23.657583 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.363968 (* 1 = 0.363968 loss)\nI0608 00:21:23.657589 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.230373 (* 1 = 0.230373 loss)\nI0608 00:21:23.657595 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.108049 (* 1 = 0.108049 loss)\nI0608 00:21:23.657600 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.23115 (* 1 = 0.23115 loss)\nI0608 00:21:23.657606 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.108098 (* 1 = 0.108098 loss)\nI0608 00:21:23.657613 13573 sgd_solver.cpp:106] Iteration 19060, lr = 0.001\nI0608 00:21:36.589927 13573 solver.cpp:229] Iteration 19080, loss = 0.498903\nI0608 00:21:36.589994 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.16756 (* 1 = 0.16756 loss)\nI0608 00:21:36.590003 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.115359 (* 1 = 0.115359 loss)\nI0608 00:21:36.590009 13573 solver.cpp:245]     Train 
net output #2: p2_rpn_cls_loss = 0.171616 (* 1 = 0.171616 loss)\nI0608 00:21:36.590015 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0271029 (* 1 = 0.0271029 loss)\nI0608 00:21:36.590021 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.172237 (* 1 = 0.172237 loss)\nI0608 00:21:36.590026 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0270874 (* 1 = 0.0270874 loss)\nI0608 00:21:36.590034 13573 sgd_solver.cpp:106] Iteration 19080, lr = 0.001\nI0608 00:21:49.393630 13573 solver.cpp:229] Iteration 19100, loss = 0.722618\nI0608 00:21:49.393694 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.252437 (* 1 = 0.252437 loss)\nI0608 00:21:49.393707 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.253823 (* 1 = 0.253823 loss)\nI0608 00:21:49.393712 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144294 (* 1 = 0.144294 loss)\nI0608 00:21:49.393718 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0121067 (* 1 = 0.0121067 loss)\nI0608 00:21:49.393724 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150391 (* 1 = 0.150391 loss)\nI0608 00:21:49.393730 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0121115 (* 1 = 0.0121115 loss)\nI0608 00:21:49.393736 13573 sgd_solver.cpp:106] Iteration 19100, lr = 0.001\nI0608 00:22:02.314412 13573 solver.cpp:229] Iteration 19120, loss = 0.690451\nI0608 00:22:02.314472 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00140977 (* 1 = 0.00140977 loss)\nI0608 00:22:02.314482 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00186407 (* 1 = 0.00186407 loss)\nI0608 00:22:02.314488 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.194153 (* 1 = 0.194153 loss)\nI0608 00:22:02.314494 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00359275 (* 1 = 0.00359275 loss)\nI0608 00:22:02.314501 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.190562 (* 1 = 0.190562 loss)\nI0608 00:22:02.314505 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00359425 (* 1 = 0.00359425 loss)\nI0608 00:22:02.314512 13573 sgd_solver.cpp:106] Iteration 19120, lr = 0.001\nI0608 00:22:15.106555 13573 solver.cpp:229] Iteration 19140, loss = 0.342827\nI0608 00:22:15.106644 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0781145 (* 1 = 0.0781145 loss)\nI0608 00:22:15.106654 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0749869 (* 1 = 0.0749869 loss)\nI0608 00:22:15.106660 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0693393 (* 1 = 0.0693393 loss)\nI0608 00:22:15.106667 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0107253 (* 1 = 0.0107253 loss)\nI0608 00:22:15.106673 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0645997 (* 1 = 0.0645997 loss)\nI0608 00:22:15.106678 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0107189 (* 1 = 0.0107189 loss)\nI0608 00:22:15.106686 13573 sgd_solver.cpp:106] Iteration 19140, lr = 0.001\nI0608 00:22:27.967830 13573 solver.cpp:229] Iteration 19160, loss = 1.11708\nI0608 00:22:27.967895 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.258983 (* 1 = 0.258983 loss)\nI0608 00:22:27.967905 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138934 (* 1 = 0.138934 loss)\nI0608 00:22:27.967911 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0716083 (* 1 = 0.0716083 loss)\nI0608 00:22:27.967917 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00570032 (* 1 = 0.00570032 loss)\nI0608 00:22:27.967922 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0736175 (* 1 = 0.0736175 loss)\nI0608 00:22:27.967928 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00569864 (* 1 = 0.00569864 loss)\nI0608 00:22:27.967934 13573 sgd_solver.cpp:106] Iteration 19160, lr = 0.001\nI0608 00:22:40.882899 13573 
solver.cpp:229] Iteration 19180, loss = 1.01569\nI0608 00:22:40.882964 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00156534 (* 1 = 0.00156534 loss)\nI0608 00:22:40.882974 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0113131 (* 1 = 0.0113131 loss)\nI0608 00:22:40.882980 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.452195 (* 1 = 0.452195 loss)\nI0608 00:22:40.882985 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.104046 (* 1 = 0.104046 loss)\nI0608 00:22:40.882992 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.464334 (* 1 = 0.464334 loss)\nI0608 00:22:40.882997 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.104101 (* 1 = 0.104101 loss)\nI0608 00:22:40.883002 13573 sgd_solver.cpp:106] Iteration 19180, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:22:53.956413 13573 solver.cpp:229] Iteration 19200, loss = 0.744037\nI0608 00:22:53.956486 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.166358 (* 1 = 0.166358 loss)\nI0608 00:22:53.956496 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.249895 (* 1 = 0.249895 loss)\nI0608 00:22:53.956501 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.303837 (* 1 = 0.303837 loss)\nI0608 00:22:53.956507 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0519418 (* 1 = 0.0519418 loss)\nI0608 00:22:53.956512 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278137 (* 1 = 0.278137 loss)\nI0608 00:22:53.956518 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0519624 (* 1 = 0.0519624 loss)\nI0608 00:22:53.956526 13573 sgd_solver.cpp:106] Iteration 19200, lr = 0.001\nI0608 00:23:06.893959 13573 solver.cpp:229] Iteration 19220, loss = 1.7699\nI0608 00:23:06.894026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.295776 (* 1 = 0.295776 loss)\nI0608 00:23:06.894034 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.160921 (* 1 = 
0.160921 loss)\nI0608 00:23:06.894040 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.660901 (* 1 = 0.660901 loss)\nI0608 00:23:06.894047 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.177263 (* 1 = 0.177263 loss)\nI0608 00:23:06.894052 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.643517 (* 1 = 0.643517 loss)\nI0608 00:23:06.894057 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.177317 (* 1 = 0.177317 loss)\nI0608 00:23:06.894063 13573 sgd_solver.cpp:106] Iteration 19220, lr = 0.001\nI0608 00:23:19.650717 13573 solver.cpp:229] Iteration 19240, loss = 0.799387\nI0608 00:23:19.650787 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.13586 (* 1 = 0.13586 loss)\nI0608 00:23:19.650799 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.152527 (* 1 = 0.152527 loss)\nI0608 00:23:19.650804 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.189379 (* 1 = 0.189379 loss)\nI0608 00:23:19.650810 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0663896 (* 1 = 0.0663896 loss)\nI0608 00:23:19.650816 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191142 (* 1 = 0.191142 loss)\nI0608 00:23:19.650822 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0663488 (* 1 = 0.0663488 loss)\nI0608 00:23:19.650830 13573 sgd_solver.cpp:106] Iteration 19240, lr = 0.001\nI0608 00:23:32.612278 13573 solver.cpp:229] Iteration 19260, loss = 0.341116\nI0608 00:23:32.612362 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.176339 (* 1 = 0.176339 loss)\nI0608 00:23:32.612372 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0465914 (* 1 = 0.0465914 loss)\nI0608 00:23:32.612380 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0969039 (* 1 = 0.0969039 loss)\nI0608 00:23:32.612385 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0017942 (* 1 = 0.0017942 loss)\nI0608 00:23:32.612391 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0965911 (* 1 = 0.0965911 loss)\nI0608 00:23:32.612397 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00179432 (* 1 = 0.00179432 loss)\nI0608 00:23:32.612406 13573 sgd_solver.cpp:106] Iteration 19260, lr = 0.001\nI0608 00:23:45.446439 13573 solver.cpp:229] Iteration 19280, loss = 0.803637\nI0608 00:23:45.446501 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.24243 (* 1 = 0.24243 loss)\nI0608 00:23:45.446511 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.269437 (* 1 = 0.269437 loss)\nI0608 00:23:45.446516 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.157415 (* 1 = 0.157415 loss)\nI0608 00:23:45.446522 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0462967 (* 1 = 0.0462967 loss)\nI0608 00:23:45.446528 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.148249 (* 1 = 0.148249 loss)\nI0608 00:23:45.446533 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.046296 (* 1 = 0.046296 loss)\nI0608 00:23:45.446539 13573 sgd_solver.cpp:106] Iteration 19280, lr = 0.001\nI0608 00:23:58.309003 13573 solver.cpp:229] Iteration 19300, loss = 0.83922\nI0608 00:23:58.309070 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.103411 (* 1 = 0.103411 loss)\nI0608 00:23:58.309079 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0657847 (* 1 = 0.0657847 loss)\nI0608 00:23:58.309085 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.115868 (* 1 = 0.115868 loss)\nI0608 00:23:58.309092 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0295284 (* 1 = 0.0295284 loss)\nI0608 00:23:58.309096 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118771 (* 1 = 0.118771 loss)\nI0608 00:23:58.309103 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0295176 (* 1 = 0.0295176 loss)\nI0608 00:23:58.309109 13573 sgd_solver.cpp:106] Iteration 19300, lr = 
0.001\nI0608 00:24:11.143507 13573 solver.cpp:229] Iteration 19320, loss = 2.61843\nI0608 00:24:11.143646 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.286601 (* 1 = 0.286601 loss)\nI0608 00:24:11.143656 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.262698 (* 1 = 0.262698 loss)\nI0608 00:24:11.143662 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.254002 (* 1 = 0.254002 loss)\nI0608 00:24:11.143668 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0885842 (* 1 = 0.0885842 loss)\nI0608 00:24:11.143674 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256542 (* 1 = 0.256542 loss)\nI0608 00:24:11.143681 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0885941 (* 1 = 0.0885941 loss)\nI0608 00:24:11.143688 13573 sgd_solver.cpp:106] Iteration 19320, lr = 0.001\nI0608 00:24:23.982740 13573 solver.cpp:229] Iteration 19340, loss = 0.812284\nI0608 00:24:23.982822 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.215583 (* 1 = 0.215583 loss)\nI0608 00:24:23.982832 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.106313 (* 1 = 0.106313 loss)\nI0608 00:24:23.982838 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.200541 (* 1 = 0.200541 loss)\nI0608 00:24:23.982846 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0804381 (* 1 = 0.0804381 loss)\nI0608 00:24:23.982851 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191105 (* 1 = 0.191105 loss)\nI0608 00:24:23.982857 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0804281 (* 1 = 0.0804281 loss)\nI0608 00:24:23.982866 13573 sgd_solver.cpp:106] Iteration 19340, lr = 0.001\nI0608 00:24:36.897286 13573 solver.cpp:229] Iteration 19360, loss = 0.72481\nI0608 00:24:36.897373 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.074812 (* 1 = 0.074812 loss)\nI0608 00:24:36.897383 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0520351 
(* 1 = 0.0520351 loss)\nI0608 00:24:36.897389 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.289898 (* 1 = 0.289898 loss)\nI0608 00:24:36.897395 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0274982 (* 1 = 0.0274982 loss)\nI0608 00:24:36.897400 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.240611 (* 1 = 0.240611 loss)\nI0608 00:24:36.897406 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0274897 (* 1 = 0.0274897 loss)\nI0608 00:24:36.897413 13573 sgd_solver.cpp:106] Iteration 19360, lr = 0.001\nI0608 00:24:49.761327 13573 solver.cpp:229] Iteration 19380, loss = 0.918434\nI0608 00:24:49.761392 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111767 (* 1 = 0.111767 loss)\nI0608 00:24:49.761401 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0711239 (* 1 = 0.0711239 loss)\nI0608 00:24:49.761407 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.111069 (* 1 = 0.111069 loss)\nI0608 00:24:49.761414 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0989973 (* 1 = 0.0989973 loss)\nI0608 00:24:49.761418 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.11761 (* 1 = 0.11761 loss)\nI0608 00:24:49.761425 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0990135 (* 1 = 0.0990135 loss)\nI0608 00:24:49.761432 13573 sgd_solver.cpp:106] Iteration 19380, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:25:02.534721 13573 solver.cpp:229] Iteration 19400, loss = 1.04902\nI0608 00:25:02.534808 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.025504 (* 1 = 0.025504 loss)\nI0608 00:25:02.534819 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0858943 (* 1 = 0.0858943 loss)\nI0608 00:25:02.534826 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.453005 (* 1 = 0.453005 loss)\nI0608 00:25:02.534832 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.223646 (* 1 = 0.223646 
Training log excerpt (iterations 19400–20320, lr = 0.001, speed ~0.644 s/iter). Each logged iteration reports the total loss plus six train net outputs: `loss_bbox`, `loss_cls`, `p2_rpn_cls_loss`, `p2_rpn_loss_bbox`, `rpn_cls_loss`, `rpn_loss_bbox`. A representative entry, and the snapshot written at iteration 20000:

```
I0608 00:25:15.371767 13573 solver.cpp:229] Iteration 19420, loss = 0.754309
I0608 00:25:15.371843 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.159274 (* 1 = 0.159274 loss)
I0608 00:25:15.371853 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.177893 (* 1 = 0.177893 loss)
I0608 00:25:15.371860 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.156206 (* 1 = 0.156206 loss)
I0608 00:25:15.371867 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.1182 (* 1 = 0.1182 loss)
I0608 00:25:15.371873 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.153194 (* 1 = 0.153194 loss)
I0608 00:25:15.371879 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.1182 (* 1 = 0.1182 loss)
I0608 00:25:15.371887 13573 sgd_solver.cpp:106] Iteration 19420, lr = 0.001
...
Wrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_20000.caffemodel
```
00:34:57.429035 13573 sgd_solver.cpp:106] Iteration 20320, lr = 0.001\nI0608 00:35:10.333698 13573 solver.cpp:229] Iteration 20340, loss = 0.514336\nI0608 00:35:10.333770 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.162449 (* 1 = 0.162449 loss)\nI0608 00:35:10.333778 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114478 (* 1 = 0.114478 loss)\nI0608 00:35:10.333786 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101562 (* 1 = 0.101562 loss)\nI0608 00:35:10.333791 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0210094 (* 1 = 0.0210094 loss)\nI0608 00:35:10.333797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105265 (* 1 = 0.105265 loss)\nI0608 00:35:10.333803 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0209738 (* 1 = 0.0209738 loss)\nI0608 00:35:10.333811 13573 sgd_solver.cpp:106] Iteration 20340, lr = 0.001\nI0608 00:35:23.145360 13573 solver.cpp:229] Iteration 20360, loss = 0.778542\nI0608 00:35:23.145422 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.173557 (* 1 = 0.173557 loss)\nI0608 00:35:23.145432 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.226263 (* 1 = 0.226263 loss)\nI0608 00:35:23.145439 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0850647 (* 1 = 0.0850647 loss)\nI0608 00:35:23.145444 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0250145 (* 1 = 0.0250145 loss)\nI0608 00:35:23.145450 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.09188 (* 1 = 0.09188 loss)\nI0608 00:35:23.145457 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0250189 (* 1 = 0.0250189 loss)\nI0608 00:35:23.145464 13573 sgd_solver.cpp:106] Iteration 20360, lr = 0.001\nI0608 00:35:36.023958 13573 solver.cpp:229] Iteration 20380, loss = 1.0858\nI0608 00:35:36.024029 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.168005 (* 1 = 0.168005 loss)\nI0608 00:35:36.024039 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.168117 (* 1 = 0.168117 loss)\nI0608 00:35:36.024044 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279597 (* 1 = 0.279597 loss)\nI0608 00:35:36.024051 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0782167 (* 1 = 0.0782167 loss)\nI0608 00:35:36.024057 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.28373 (* 1 = 0.28373 loss)\nI0608 00:35:36.024062 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0781735 (* 1 = 0.0781735 loss)\nI0608 00:35:36.024070 13573 sgd_solver.cpp:106] Iteration 20380, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:35:48.664546 13573 solver.cpp:229] Iteration 20400, loss = 0.934634\nI0608 00:35:48.664611 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000825149 (* 1 = 0.000825149 loss)\nI0608 00:35:48.664623 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0169534 (* 1 = 0.0169534 loss)\nI0608 00:35:48.664628 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.47835 (* 1 = 0.47835 loss)\nI0608 00:35:48.664634 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0862263 (* 1 = 0.0862263 loss)\nI0608 00:35:48.664639 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.435281 (* 1 = 0.435281 loss)\nI0608 00:35:48.664645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0862366 (* 1 = 0.0862366 loss)\nI0608 00:35:48.664654 13573 sgd_solver.cpp:106] Iteration 20400, lr = 0.001\nI0608 00:36:01.758373 13573 solver.cpp:229] Iteration 20420, loss = 0.644643\nI0608 00:36:01.758435 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.407058 (* 1 = 0.407058 loss)\nI0608 00:36:01.758443 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150631 (* 1 = 0.150631 loss)\nI0608 00:36:01.758450 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104209 (* 1 = 0.104209 loss)\nI0608 00:36:01.758456 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.0621076 (* 1 = 0.0621076 loss)\nI0608 00:36:01.758461 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101774 (* 1 = 0.101774 loss)\nI0608 00:36:01.758467 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0620901 (* 1 = 0.0620901 loss)\nI0608 00:36:01.758473 13573 sgd_solver.cpp:106] Iteration 20420, lr = 0.001\nI0608 00:36:14.574198 13573 solver.cpp:229] Iteration 20440, loss = 0.834921\nI0608 00:36:14.574259 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111512 (* 1 = 0.111512 loss)\nI0608 00:36:14.574270 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.289237 (* 1 = 0.289237 loss)\nI0608 00:36:14.574275 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.38836 (* 1 = 0.38836 loss)\nI0608 00:36:14.574281 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0803802 (* 1 = 0.0803802 loss)\nI0608 00:36:14.574286 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.394861 (* 1 = 0.394861 loss)\nI0608 00:36:14.574292 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.080389 (* 1 = 0.080389 loss)\nI0608 00:36:14.574300 13573 sgd_solver.cpp:106] Iteration 20440, lr = 0.001\nI0608 00:36:27.425752 13573 solver.cpp:229] Iteration 20460, loss = 1.23017\nI0608 00:36:27.425818 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.231428 (* 1 = 0.231428 loss)\nI0608 00:36:27.425828 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124053 (* 1 = 0.124053 loss)\nI0608 00:36:27.425834 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.117478 (* 1 = 0.117478 loss)\nI0608 00:36:27.425839 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00566742 (* 1 = 0.00566742 loss)\nI0608 00:36:27.425845 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.137644 (* 1 = 0.137644 loss)\nI0608 00:36:27.425851 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0056701 (* 1 = 
0.0056701 loss)\nI0608 00:36:27.425858 13573 sgd_solver.cpp:106] Iteration 20460, lr = 0.001\nI0608 00:36:40.362277 13573 solver.cpp:229] Iteration 20480, loss = 2.79668\nI0608 00:36:40.362363 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.266772 (* 1 = 0.266772 loss)\nI0608 00:36:40.362373 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.136342 (* 1 = 0.136342 loss)\nI0608 00:36:40.362378 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.13574 (* 1 = 1.13574 loss)\nI0608 00:36:40.362385 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.328944 (* 1 = 0.328944 loss)\nI0608 00:36:40.362390 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.13141 (* 1 = 1.13141 loss)\nI0608 00:36:40.362396 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.32898 (* 1 = 0.32898 loss)\nI0608 00:36:40.362402 13573 sgd_solver.cpp:106] Iteration 20480, lr = 0.001\nI0608 00:36:53.220332 13573 solver.cpp:229] Iteration 20500, loss = 0.658635\nI0608 00:36:53.220448 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0715636 (* 1 = 0.0715636 loss)\nI0608 00:36:53.220458 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112677 (* 1 = 0.112677 loss)\nI0608 00:36:53.220463 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0824601 (* 1 = 0.0824601 loss)\nI0608 00:36:53.220469 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0481269 (* 1 = 0.0481269 loss)\nI0608 00:36:53.220475 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.08266 (* 1 = 0.08266 loss)\nI0608 00:36:53.220480 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0481342 (* 1 = 0.0481342 loss)\nI0608 00:36:53.220487 13573 sgd_solver.cpp:106] Iteration 20500, lr = 0.001\nI0608 00:37:06.249297 13573 solver.cpp:229] Iteration 20520, loss = 0.776186\nI0608 00:37:06.249363 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0775111 (* 1 = 0.0775111 loss)\nI0608 
00:37:06.249372 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0461666 (* 1 = 0.0461666 loss)\nI0608 00:37:06.249378 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.321592 (* 1 = 0.321592 loss)\nI0608 00:37:06.249384 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0314829 (* 1 = 0.0314829 loss)\nI0608 00:37:06.249389 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.30141 (* 1 = 0.30141 loss)\nI0608 00:37:06.249395 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0314818 (* 1 = 0.0314818 loss)\nI0608 00:37:06.249402 13573 sgd_solver.cpp:106] Iteration 20520, lr = 0.001\nI0608 00:37:18.921533 13573 solver.cpp:229] Iteration 20540, loss = 0.475714\nI0608 00:37:18.921599 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000148333 (* 1 = 0.000148333 loss)\nI0608 00:37:18.921609 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.014609 (* 1 = 0.014609 loss)\nI0608 00:37:18.921615 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.223403 (* 1 = 0.223403 loss)\nI0608 00:37:18.921622 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00798288 (* 1 = 0.00798288 loss)\nI0608 00:37:18.921627 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216205 (* 1 = 0.216205 loss)\nI0608 00:37:18.921634 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00797603 (* 1 = 0.00797603 loss)\nI0608 00:37:18.921641 13573 sgd_solver.cpp:106] Iteration 20540, lr = 0.001\nI0608 00:37:31.623138 13573 solver.cpp:229] Iteration 20560, loss = 1.09303\nI0608 00:37:31.623212 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.397979 (* 1 = 0.397979 loss)\nI0608 00:37:31.623222 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.175787 (* 1 = 0.175787 loss)\nI0608 00:37:31.623229 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0854541 (* 1 = 0.0854541 loss)\nI0608 00:37:31.623234 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.00946975 (* 1 = 0.00946975 loss)\nI0608 00:37:31.623240 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0831134 (* 1 = 0.0831134 loss)\nI0608 00:37:31.623245 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00947188 (* 1 = 0.00947188 loss)\nI0608 00:37:31.623252 13573 sgd_solver.cpp:106] Iteration 20560, lr = 0.001\nI0608 00:37:44.760452 13573 solver.cpp:229] Iteration 20580, loss = 1.01176\nI0608 00:37:44.760511 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00219108 (* 1 = 0.00219108 loss)\nI0608 00:37:44.760521 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00521821 (* 1 = 0.00521821 loss)\nI0608 00:37:44.760527 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.398127 (* 1 = 0.398127 loss)\nI0608 00:37:44.760534 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0656697 (* 1 = 0.0656697 loss)\nI0608 00:37:44.760538 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.396831 (* 1 = 0.396831 loss)\nI0608 00:37:44.760545 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0656695 (* 1 = 0.0656695 loss)\nI0608 00:37:44.760551 13573 sgd_solver.cpp:106] Iteration 20580, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:37:57.654878 13573 solver.cpp:229] Iteration 20600, loss = 1.04111\nI0608 00:37:57.654942 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00227903 (* 1 = 0.00227903 loss)\nI0608 00:37:57.654952 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0182813 (* 1 = 0.0182813 loss)\nI0608 00:37:57.654958 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.267358 (* 1 = 0.267358 loss)\nI0608 00:37:57.654963 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0245156 (* 1 = 0.0245156 loss)\nI0608 00:37:57.654968 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.25429 (* 1 = 0.25429 loss)\nI0608 00:37:57.654974 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0245185 (* 1 = 0.0245185 loss)\nI0608 00:37:57.654980 13573 sgd_solver.cpp:106] Iteration 20600, lr = 0.001\nI0608 00:38:10.419181 13573 solver.cpp:229] Iteration 20620, loss = 0.457052\nI0608 00:38:10.419252 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0409174 (* 1 = 0.0409174 loss)\nI0608 00:38:10.419262 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0527732 (* 1 = 0.0527732 loss)\nI0608 00:38:10.419267 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.226192 (* 1 = 0.226192 loss)\nI0608 00:38:10.419273 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0249863 (* 1 = 0.0249863 loss)\nI0608 00:38:10.419278 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.219314 (* 1 = 0.219314 loss)\nI0608 00:38:10.419284 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.024991 (* 1 = 0.024991 loss)\nI0608 00:38:10.419291 13573 sgd_solver.cpp:106] Iteration 20620, lr = 0.001\nI0608 00:38:23.071441 13573 solver.cpp:229] Iteration 20640, loss = 0.6915\nI0608 00:38:23.071504 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.026879 (* 1 = 0.026879 loss)\nI0608 00:38:23.071513 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0753757 (* 1 = 0.0753757 loss)\nI0608 00:38:23.071519 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.177512 (* 1 = 0.177512 loss)\nI0608 00:38:23.071526 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.114529 (* 1 = 0.114529 loss)\nI0608 00:38:23.071530 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176199 (* 1 = 0.176199 loss)\nI0608 00:38:23.071537 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.11452 (* 1 = 0.11452 loss)\nI0608 00:38:23.071543 13573 sgd_solver.cpp:106] Iteration 20640, lr = 0.001\nI0608 00:38:35.775476 13573 solver.cpp:229] Iteration 20660, loss = 0.456968\nI0608 00:38:35.775542 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.038837 (* 1 = 0.038837 loss)\nI0608 00:38:35.775552 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0576139 (* 1 = 0.0576139 loss)\nI0608 00:38:35.775557 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195899 (* 1 = 0.195899 loss)\nI0608 00:38:35.775563 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0114831 (* 1 = 0.0114831 loss)\nI0608 00:38:35.775568 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.181506 (* 1 = 0.181506 loss)\nI0608 00:38:35.775574 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0114812 (* 1 = 0.0114812 loss)\nI0608 00:38:35.775580 13573 sgd_solver.cpp:106] Iteration 20660, lr = 0.001\nI0608 00:38:48.731258 13573 solver.cpp:229] Iteration 20680, loss = 1.09212\nI0608 00:38:48.731330 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.29412 (* 1 = 0.29412 loss)\nI0608 00:38:48.731339 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.239498 (* 1 = 0.239498 loss)\nI0608 00:38:48.731345 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.265305 (* 1 = 0.265305 loss)\nI0608 00:38:48.731351 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0508241 (* 1 = 0.0508241 loss)\nI0608 00:38:48.731358 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.26764 (* 1 = 0.26764 loss)\nI0608 00:38:48.731362 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0508196 (* 1 = 0.0508196 loss)\nI0608 00:38:48.731369 13573 sgd_solver.cpp:106] Iteration 20680, lr = 0.001\nI0608 00:39:01.566373 13573 solver.cpp:229] Iteration 20700, loss = 1.20981\nI0608 00:39:01.566437 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.187462 (* 1 = 0.187462 loss)\nI0608 00:39:01.566445 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.320269 (* 1 = 0.320269 loss)\nI0608 00:39:01.566452 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0864703 (* 1 = 0.0864703 loss)\nI0608 
00:39:01.566457 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0203034 (* 1 = 0.0203034 loss)\nI0608 00:39:01.566462 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0854347 (* 1 = 0.0854347 loss)\nI0608 00:39:01.566468 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.020302 (* 1 = 0.020302 loss)\nI0608 00:39:01.566474 13573 sgd_solver.cpp:106] Iteration 20700, lr = 0.001\nI0608 00:39:14.552920 13573 solver.cpp:229] Iteration 20720, loss = 0.445803\nI0608 00:39:14.552986 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0541084 (* 1 = 0.0541084 loss)\nI0608 00:39:14.552995 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138315 (* 1 = 0.138315 loss)\nI0608 00:39:14.553001 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118395 (* 1 = 0.118395 loss)\nI0608 00:39:14.553006 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00305931 (* 1 = 0.00305931 loss)\nI0608 00:39:14.553012 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0917122 (* 1 = 0.0917122 loss)\nI0608 00:39:14.553019 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00305948 (* 1 = 0.00305948 loss)\nI0608 00:39:14.553025 13573 sgd_solver.cpp:106] Iteration 20720, lr = 0.001\nI0608 00:39:27.506054 13573 solver.cpp:229] Iteration 20740, loss = 0.605045\nI0608 00:39:27.506139 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.184792 (* 1 = 0.184792 loss)\nI0608 00:39:27.506148 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.158603 (* 1 = 0.158603 loss)\nI0608 00:39:27.506155 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0993539 (* 1 = 0.0993539 loss)\nI0608 00:39:27.506160 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.076735 (* 1 = 0.076735 loss)\nI0608 00:39:27.506166 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101313 (* 1 = 0.101313 loss)\nI0608 00:39:27.506172 13573 solver.cpp:245]     
Train net output #5: rpn_loss_bbox = 0.0767366 (* 1 = 0.0767366 loss)\nI0608 00:39:27.506180 13573 sgd_solver.cpp:106] Iteration 20740, lr = 0.001\nI0608 00:39:40.450146 13573 solver.cpp:229] Iteration 20760, loss = 0.959051\nI0608 00:39:40.450213 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.039144 (* 1 = 0.039144 loss)\nI0608 00:39:40.450223 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.091238 (* 1 = 0.091238 loss)\nI0608 00:39:40.450229 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.310274 (* 1 = 0.310274 loss)\nI0608 00:39:40.450235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328945 (* 1 = 0.0328945 loss)\nI0608 00:39:40.450240 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.298414 (* 1 = 0.298414 loss)\nI0608 00:39:40.450247 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0329054 (* 1 = 0.0329054 loss)\nI0608 00:39:40.450254 13573 sgd_solver.cpp:106] Iteration 20760, lr = 0.001\nI0608 00:39:53.229720 13573 solver.cpp:229] Iteration 20780, loss = 0.849456\nI0608 00:39:53.229791 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.571293 (* 1 = 0.571293 loss)\nI0608 00:39:53.229801 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.172024 (* 1 = 0.172024 loss)\nI0608 00:39:53.229806 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.173669 (* 1 = 0.173669 loss)\nI0608 00:39:53.229812 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0870683 (* 1 = 0.0870683 loss)\nI0608 00:39:53.229818 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.172078 (* 1 = 0.172078 loss)\nI0608 00:39:53.229823 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0870587 (* 1 = 0.0870587 loss)\nI0608 00:39:53.229830 13573 sgd_solver.cpp:106] Iteration 20780, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:40:05.834805 13573 solver.cpp:229] Iteration 20800, loss = 0.998735\nI0608 00:40:05.834868 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.240928 (* 1 = 0.240928 loss)\nI0608 00:40:05.834878 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.23901 (* 1 = 0.23901 loss)\nI0608 00:40:05.834884 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.307672 (* 1 = 0.307672 loss)\nI0608 00:40:05.834890 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0650736 (* 1 = 0.0650736 loss)\nI0608 00:40:05.834897 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.305692 (* 1 = 0.305692 loss)\nI0608 00:40:05.834903 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0650646 (* 1 = 0.0650646 loss)\nI0608 00:40:05.834910 13573 sgd_solver.cpp:106] Iteration 20800, lr = 0.001\nI0608 00:40:18.756611 13573 solver.cpp:229] Iteration 20820, loss = 0.90208\nI0608 00:40:18.756687 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0743762 (* 1 = 0.0743762 loss)\nI0608 00:40:18.756697 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0706765 (* 1 = 0.0706765 loss)\nI0608 00:40:18.756705 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0925616 (* 1 = 0.0925616 loss)\nI0608 00:40:18.756711 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0379004 (* 1 = 0.0379004 loss)\nI0608 00:40:18.756716 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0943376 (* 1 = 0.0943376 loss)\nI0608 00:40:18.756721 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0378729 (* 1 = 0.0378729 loss)\nI0608 00:40:18.756728 13573 sgd_solver.cpp:106] Iteration 20820, lr = 0.001\nI0608 00:40:31.722664 13573 solver.cpp:229] Iteration 20840, loss = 1.80034\nI0608 00:40:31.722733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0896063 (* 1 = 0.0896063 loss)\nI0608 00:40:31.722741 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0760319 (* 1 = 0.0760319 loss)\nI0608 00:40:31.722748 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.179666 (* 1 = 0.179666 loss)\nI0608 00:40:31.722754 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.038069 (* 1 = 0.038069 loss)\nI0608 00:40:31.722759 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191716 (* 1 = 0.191716 loss)\nI0608 00:40:31.722764 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0380677 (* 1 = 0.0380677 loss)\nI0608 00:40:31.722784 13573 sgd_solver.cpp:106] Iteration 20840, lr = 0.001\nI0608 00:40:44.577050 13573 solver.cpp:229] Iteration 20860, loss = 1.54487\nI0608 00:40:44.577121 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.304322 (* 1 = 0.304322 loss)\nI0608 00:40:44.577131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.248709 (* 1 = 0.248709 loss)\nI0608 00:40:44.577136 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.362279 (* 1 = 0.362279 loss)\nI0608 00:40:44.577142 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0786868 (* 1 = 0.0786868 loss)\nI0608 00:40:44.577147 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.326309 (* 1 = 0.326309 loss)\nI0608 00:40:44.577153 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0786764 (* 1 = 0.0786764 loss)\nI0608 00:40:44.577160 13573 sgd_solver.cpp:106] Iteration 20860, lr = 0.001\nI0608 00:40:57.609030 13573 solver.cpp:229] Iteration 20880, loss = 0.570333\nI0608 00:40:57.609097 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0536989 (* 1 = 0.0536989 loss)\nI0608 00:40:57.609107 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.11013 (* 1 = 0.11013 loss)\nI0608 00:40:57.609112 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0874218 (* 1 = 0.0874218 loss)\nI0608 00:40:57.609118 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0126374 (* 1 = 0.0126374 loss)\nI0608 00:40:57.609124 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0881384 (* 1 = 0.0881384 
loss)\nI0608 00:40:57.609129 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0126377 (* 1 = 0.0126377 loss)\nI0608 00:40:57.609138 13573 sgd_solver.cpp:106] Iteration 20880, lr = 0.001\nI0608 00:41:10.493459 13573 solver.cpp:229] Iteration 20900, loss = 0.957327\nI0608 00:41:10.493523 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0164598 (* 1 = 0.0164598 loss)\nI0608 00:41:10.493532 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0392249 (* 1 = 0.0392249 loss)\nI0608 00:41:10.493538 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.368046 (* 1 = 0.368046 loss)\nI0608 00:41:10.493544 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0736632 (* 1 = 0.0736632 loss)\nI0608 00:41:10.493551 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.347327 (* 1 = 0.347327 loss)\nI0608 00:41:10.493556 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0737252 (* 1 = 0.0737252 loss)\nI0608 00:41:10.493563 13573 sgd_solver.cpp:106] Iteration 20900, lr = 0.001\nI0608 00:41:23.612900 13573 solver.cpp:229] Iteration 20920, loss = 0.691441\nI0608 00:41:23.612992 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00870264 (* 1 = 0.00870264 loss)\nI0608 00:41:23.613003 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0320958 (* 1 = 0.0320958 loss)\nI0608 00:41:23.613008 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.156692 (* 1 = 0.156692 loss)\nI0608 00:41:23.613014 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0021704 (* 1 = 0.0021704 loss)\nI0608 00:41:23.613020 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150239 (* 1 = 0.150239 loss)\nI0608 00:41:23.613026 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00216839 (* 1 = 0.00216839 loss)\nI0608 00:41:23.613032 13573 sgd_solver.cpp:106] Iteration 20920, lr = 0.001\nI0608 00:41:36.604567 13573 solver.cpp:229] Iteration 20940, loss = 
1.25228\nI0608 00:41:36.604634 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.454024 (* 1 = 0.454024 loss)\nI0608 00:41:36.604642 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.308152 (* 1 = 0.308152 loss)\nI0608 00:41:36.604647 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.576619 (* 1 = 0.576619 loss)\nI0608 00:41:36.604653 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.104407 (* 1 = 0.104407 loss)\nI0608 00:41:36.604660 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.592004 (* 1 = 0.592004 loss)\nI0608 00:41:36.604665 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.104481 (* 1 = 0.104481 loss)\nI0608 00:41:36.604671 13573 sgd_solver.cpp:106] Iteration 20940, lr = 0.001\nI0608 00:41:49.570394 13573 solver.cpp:229] Iteration 20960, loss = 0.775722\nI0608 00:41:49.570468 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00284055 (* 1 = 0.00284055 loss)\nI0608 00:41:49.570478 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0208383 (* 1 = 0.0208383 loss)\nI0608 00:41:49.570484 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.402927 (* 1 = 0.402927 loss)\nI0608 00:41:49.570489 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.115937 (* 1 = 0.115937 loss)\nI0608 00:41:49.570495 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.438183 (* 1 = 0.438183 loss)\nI0608 00:41:49.570502 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.11592 (* 1 = 0.11592 loss)\nI0608 00:41:49.570508 13573 sgd_solver.cpp:106] Iteration 20960, lr = 0.001\nI0608 00:42:02.275902 13573 solver.cpp:229] Iteration 20980, loss = 1.06591\nI0608 00:42:02.275969 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.298681 (* 1 = 0.298681 loss)\nI0608 00:42:02.275979 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.192678 (* 1 = 0.192678 loss)\nI0608 00:42:02.275985 13573 solver.cpp:245]     Train net 
```
output #2: p2_rpn_cls_loss = 0.310337 (* 1 = 0.310337 loss)
I0608 00:42:02.275990 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0932281 (* 1 = 0.0932281 loss)
I0608 00:42:02.275995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.301717 (* 1 = 0.301717 loss)
I0608 00:42:02.276001 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0932365 (* 1 = 0.0932365 loss)
I0608 00:42:02.276007 13573 sgd_solver.cpp:106] Iteration 20980, lr = 0.001
speed: 0.644s / iter
I0608 00:42:15.206384 13573 solver.cpp:229] Iteration 21000, loss = 0.529858
I0608 00:42:15.206460 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0308815 (* 1 = 0.0308815 loss)
I0608 00:42:15.206470 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0542145 (* 1 = 0.0542145 loss)
I0608 00:42:15.206475 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.243483 (* 1 = 0.243483 loss)
I0608 00:42:15.206481 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0143829 (* 1 = 0.0143829 loss)
I0608 00:42:15.206487 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.24895 (* 1 = 0.24895 loss)
I0608 00:42:15.206493 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0144097 (* 1 = 0.0144097 loss)
I0608 00:42:15.206501 13573 sgd_solver.cpp:106] Iteration 21000, lr = 0.001
[... the same six-output pattern (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox) repeats every 20 iterations through Iteration 21960, lr = 0.001 ...]
```
00:52:34.851176 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0440797 (* 1 = 0.0440797 loss)\nI0608 00:52:34.851186 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0705463 (* 1 = 0.0705463 loss)\nI0608 00:52:34.851192 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0804299 (* 1 = 0.0804299 loss)\nI0608 00:52:34.851198 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0153104 (* 1 = 0.0153104 loss)\nI0608 00:52:34.851204 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0764329 (* 1 = 0.0764329 loss)\nI0608 00:52:34.851210 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0153105 (* 1 = 0.0153105 loss)\nI0608 00:52:34.851217 13573 sgd_solver.cpp:106] Iteration 21960, lr = 0.001\nI0608 00:52:47.726982 13573 solver.cpp:229] Iteration 21980, loss = 0.426673\nI0608 00:52:47.727049 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00488177 (* 1 = 0.00488177 loss)\nI0608 00:52:47.727059 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00857381 (* 1 = 0.00857381 loss)\nI0608 00:52:47.727066 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241588 (* 1 = 0.241588 loss)\nI0608 00:52:47.727072 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00918321 (* 1 = 0.00918321 loss)\nI0608 00:52:47.727077 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.22263 (* 1 = 0.22263 loss)\nI0608 00:52:47.727082 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00918646 (* 1 = 0.00918646 loss)\nI0608 00:52:47.727089 13573 sgd_solver.cpp:106] Iteration 21980, lr = 0.001\nspeed: 0.644s / iter\nI0608 00:53:00.593286 13573 solver.cpp:229] Iteration 22000, loss = 0.787343\nI0608 00:53:00.593348 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.336717 (* 1 = 0.336717 loss)\nI0608 00:53:00.593356 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.351267 (* 1 = 0.351267 loss)\nI0608 00:53:00.593364 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0908776 (* 1 = 0.0908776 loss)\nI0608 00:53:00.593369 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0103315 (* 1 = 0.0103315 loss)\nI0608 00:53:00.593375 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0867225 (* 1 = 0.0867225 loss)\nI0608 00:53:00.593381 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0103308 (* 1 = 0.0103308 loss)\nI0608 00:53:00.593389 13573 sgd_solver.cpp:106] Iteration 22000, lr = 0.001\nI0608 00:53:13.319651 13573 solver.cpp:229] Iteration 22020, loss = 0.433827\nI0608 00:53:13.319727 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0584789 (* 1 = 0.0584789 loss)\nI0608 00:53:13.319737 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.121775 (* 1 = 0.121775 loss)\nI0608 00:53:13.319743 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0863834 (* 1 = 0.0863834 loss)\nI0608 00:53:13.319749 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0264587 (* 1 = 0.0264587 loss)\nI0608 00:53:13.319754 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0905301 (* 1 = 0.0905301 loss)\nI0608 00:53:13.319761 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0264613 (* 1 = 0.0264613 loss)\nI0608 00:53:13.319767 13573 sgd_solver.cpp:106] Iteration 22020, lr = 0.001\nI0608 00:53:26.272320 13573 solver.cpp:229] Iteration 22040, loss = 0.762183\nI0608 00:53:26.272385 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.139591 (* 1 = 0.139591 loss)\nI0608 00:53:26.272394 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10191 (* 1 = 0.10191 loss)\nI0608 00:53:26.272400 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.35841 (* 1 = 0.35841 loss)\nI0608 00:53:26.272406 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0438796 (* 1 = 0.0438796 loss)\nI0608 00:53:26.272411 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.344476 (* 1 = 0.344476 loss)\nI0608 00:53:26.272418 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0438815 (* 1 = 0.0438815 loss)\nI0608 00:53:26.272424 13573 sgd_solver.cpp:106] Iteration 22040, lr = 0.001\nI0608 00:53:39.235690 13573 solver.cpp:229] Iteration 22060, loss = 0.844009\nI0608 00:53:39.235756 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0681466 (* 1 = 0.0681466 loss)\nI0608 00:53:39.235765 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.026662 (* 1 = 0.026662 loss)\nI0608 00:53:39.235771 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.500325 (* 1 = 0.500325 loss)\nI0608 00:53:39.235777 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0809769 (* 1 = 0.0809769 loss)\nI0608 00:53:39.235783 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.553133 (* 1 = 0.553133 loss)\nI0608 00:53:39.235790 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0810026 (* 1 = 0.0810026 loss)\nI0608 00:53:39.235797 13573 sgd_solver.cpp:106] Iteration 22060, lr = 0.001\nI0608 00:53:52.167702 13573 solver.cpp:229] Iteration 22080, loss = 0.659975\nI0608 00:53:52.167788 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0205633 (* 1 = 0.0205633 loss)\nI0608 00:53:52.167798 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0424911 (* 1 = 0.0424911 loss)\nI0608 00:53:52.167804 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.161477 (* 1 = 0.161477 loss)\nI0608 00:53:52.167810 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00980224 (* 1 = 0.00980224 loss)\nI0608 00:53:52.167815 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.189214 (* 1 = 0.189214 loss)\nI0608 00:53:52.167821 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00978836 (* 1 = 0.00978836 loss)\nI0608 00:53:52.167829 13573 sgd_solver.cpp:106] Iteration 22080, lr = 0.001\nI0608 00:54:05.242503 13573 
solver.cpp:229] Iteration 22100, loss = 0.859559\nI0608 00:54:05.242569 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.283259 (* 1 = 0.283259 loss)\nI0608 00:54:05.242579 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0890798 (* 1 = 0.0890798 loss)\nI0608 00:54:05.242585 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0805283 (* 1 = 0.0805283 loss)\nI0608 00:54:05.242593 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00827962 (* 1 = 0.00827962 loss)\nI0608 00:54:05.242597 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0773529 (* 1 = 0.0773529 loss)\nI0608 00:54:05.242604 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00827967 (* 1 = 0.00827967 loss)\nI0608 00:54:05.242612 13573 sgd_solver.cpp:106] Iteration 22100, lr = 0.001\nI0608 00:54:18.196864 13573 solver.cpp:229] Iteration 22120, loss = 0.820363\nI0608 00:54:18.196933 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.278149 (* 1 = 0.278149 loss)\nI0608 00:54:18.196943 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.247118 (* 1 = 0.247118 loss)\nI0608 00:54:18.196949 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.297904 (* 1 = 0.297904 loss)\nI0608 00:54:18.196954 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0358155 (* 1 = 0.0358155 loss)\nI0608 00:54:18.196960 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278455 (* 1 = 0.278455 loss)\nI0608 00:54:18.196966 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0358304 (* 1 = 0.0358304 loss)\nI0608 00:54:18.196972 13573 sgd_solver.cpp:106] Iteration 22120, lr = 0.001\nI0608 00:54:30.953557 13573 solver.cpp:229] Iteration 22140, loss = 0.396279\nI0608 00:54:30.953630 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0624861 (* 1 = 0.0624861 loss)\nI0608 00:54:30.953640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108555 (* 1 = 0.108555 
loss)\nI0608 00:54:30.953646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.077415 (* 1 = 0.077415 loss)\nI0608 00:54:30.953652 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00642134 (* 1 = 0.00642134 loss)\nI0608 00:54:30.953658 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.070883 (* 1 = 0.070883 loss)\nI0608 00:54:30.953665 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00642165 (* 1 = 0.00642165 loss)\nI0608 00:54:30.953672 13573 sgd_solver.cpp:106] Iteration 22140, lr = 0.001\nI0608 00:54:43.792990 13573 solver.cpp:229] Iteration 22160, loss = 0.545393\nI0608 00:54:43.793061 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0798745 (* 1 = 0.0798745 loss)\nI0608 00:54:43.793069 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.130791 (* 1 = 0.130791 loss)\nI0608 00:54:43.793076 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104518 (* 1 = 0.104518 loss)\nI0608 00:54:43.793082 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0973518 (* 1 = 0.0973518 loss)\nI0608 00:54:43.793088 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.106633 (* 1 = 0.106633 loss)\nI0608 00:54:43.793093 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0973367 (* 1 = 0.0973367 loss)\nI0608 00:54:43.793100 13573 sgd_solver.cpp:106] Iteration 22160, lr = 0.001\nI0608 00:54:56.918880 13573 solver.cpp:229] Iteration 22180, loss = 0.521946\nI0608 00:54:56.918949 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00560914 (* 1 = 0.00560914 loss)\nI0608 00:54:56.918959 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0169071 (* 1 = 0.0169071 loss)\nI0608 00:54:56.918965 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.211755 (* 1 = 0.211755 loss)\nI0608 00:54:56.918970 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0138171 (* 1 = 0.0138171 loss)\nI0608 00:54:56.918977 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.222964 (* 1 = 0.222964 loss)\nI0608 00:54:56.918982 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0138137 (* 1 = 0.0138137 loss)\nI0608 00:54:56.918988 13573 sgd_solver.cpp:106] Iteration 22180, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:55:09.843097 13573 solver.cpp:229] Iteration 22200, loss = 1.70416\nI0608 00:55:09.843173 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.385708 (* 1 = 0.385708 loss)\nI0608 00:55:09.843181 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.345085 (* 1 = 0.345085 loss)\nI0608 00:55:09.843188 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.730501 (* 1 = 0.730501 loss)\nI0608 00:55:09.843192 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.216162 (* 1 = 0.216162 loss)\nI0608 00:55:09.843199 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.723728 (* 1 = 0.723728 loss)\nI0608 00:55:09.843204 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.216162 (* 1 = 0.216162 loss)\nI0608 00:55:09.843211 13573 sgd_solver.cpp:106] Iteration 22200, lr = 0.001\nI0608 00:55:22.898357 13573 solver.cpp:229] Iteration 22220, loss = 1.05129\nI0608 00:55:22.898422 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.357664 (* 1 = 0.357664 loss)\nI0608 00:55:22.898430 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.532957 (* 1 = 0.532957 loss)\nI0608 00:55:22.898437 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.109154 (* 1 = 0.109154 loss)\nI0608 00:55:22.898442 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0259042 (* 1 = 0.0259042 loss)\nI0608 00:55:22.898447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111517 (* 1 = 0.111517 loss)\nI0608 00:55:22.898453 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0259068 (* 1 = 0.0259068 loss)\nI0608 00:55:22.898459 13573 sgd_solver.cpp:106] Iteration 22220, 
lr = 0.001\nI0608 00:55:35.905930 13573 solver.cpp:229] Iteration 22240, loss = 1.22254\nI0608 00:55:35.905995 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.277852 (* 1 = 0.277852 loss)\nI0608 00:55:35.906005 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.171485 (* 1 = 0.171485 loss)\nI0608 00:55:35.906011 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.127672 (* 1 = 0.127672 loss)\nI0608 00:55:35.906018 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0239991 (* 1 = 0.0239991 loss)\nI0608 00:55:35.906023 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.123172 (* 1 = 0.123172 loss)\nI0608 00:55:35.906029 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0240004 (* 1 = 0.0240004 loss)\nI0608 00:55:35.906038 13573 sgd_solver.cpp:106] Iteration 22240, lr = 0.001\nI0608 00:55:48.627455 13573 solver.cpp:229] Iteration 22260, loss = 0.772327\nI0608 00:55:48.627530 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.412172 (* 1 = 0.412172 loss)\nI0608 00:55:48.627539 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.230864 (* 1 = 0.230864 loss)\nI0608 00:55:48.627545 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.203778 (* 1 = 0.203778 loss)\nI0608 00:55:48.627552 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0497615 (* 1 = 0.0497615 loss)\nI0608 00:55:48.627558 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.207013 (* 1 = 0.207013 loss)\nI0608 00:55:48.627564 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0497645 (* 1 = 0.0497645 loss)\nI0608 00:55:48.627571 13573 sgd_solver.cpp:106] Iteration 22260, lr = 0.001\nI0608 00:56:01.658023 13573 solver.cpp:229] Iteration 22280, loss = 0.532414\nI0608 00:56:01.658083 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.160346 (* 1 = 0.160346 loss)\nI0608 00:56:01.658093 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.177885 (* 1 = 0.177885 loss)\nI0608 00:56:01.658099 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.187717 (* 1 = 0.187717 loss)\nI0608 00:56:01.658105 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00686481 (* 1 = 0.00686481 loss)\nI0608 00:56:01.658112 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.159058 (* 1 = 0.159058 loss)\nI0608 00:56:01.658116 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00686477 (* 1 = 0.00686477 loss)\nI0608 00:56:01.658124 13573 sgd_solver.cpp:106] Iteration 22280, lr = 0.001\nI0608 00:56:14.684788 13573 solver.cpp:229] Iteration 22300, loss = 0.977676\nI0608 00:56:14.684857 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.277636 (* 1 = 0.277636 loss)\nI0608 00:56:14.684869 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.2039 (* 1 = 0.2039 loss)\nI0608 00:56:14.684875 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.437612 (* 1 = 0.437612 loss)\nI0608 00:56:14.684881 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0922014 (* 1 = 0.0922014 loss)\nI0608 00:56:14.684886 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.442897 (* 1 = 0.442897 loss)\nI0608 00:56:14.684892 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0921942 (* 1 = 0.0921942 loss)\nI0608 00:56:14.684900 13573 sgd_solver.cpp:106] Iteration 22300, lr = 0.001\nI0608 00:56:27.617715 13573 solver.cpp:229] Iteration 22320, loss = 1.96954\nI0608 00:56:27.617841 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.327155 (* 1 = 0.327155 loss)\nI0608 00:56:27.617851 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.340256 (* 1 = 0.340256 loss)\nI0608 00:56:27.617857 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.758479 (* 1 = 0.758479 loss)\nI0608 00:56:27.617863 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.206402 (* 1 = 0.206402 loss)\nI0608 
00:56:27.617868 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.779297 (* 1 = 0.779297 loss)\nI0608 00:56:27.617873 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.206377 (* 1 = 0.206377 loss)\nI0608 00:56:27.617880 13573 sgd_solver.cpp:106] Iteration 22320, lr = 0.001\nI0608 00:56:40.501222 13573 solver.cpp:229] Iteration 22340, loss = 0.53319\nI0608 00:56:40.501293 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.257862 (* 1 = 0.257862 loss)\nI0608 00:56:40.501302 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.143689 (* 1 = 0.143689 loss)\nI0608 00:56:40.501309 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0934109 (* 1 = 0.0934109 loss)\nI0608 00:56:40.501314 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0266355 (* 1 = 0.0266355 loss)\nI0608 00:56:40.501320 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0921151 (* 1 = 0.0921151 loss)\nI0608 00:56:40.501325 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.026629 (* 1 = 0.026629 loss)\nI0608 00:56:40.501332 13573 sgd_solver.cpp:106] Iteration 22340, lr = 0.001\nI0608 00:56:53.436965 13573 solver.cpp:229] Iteration 22360, loss = 0.657006\nI0608 00:56:53.437027 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.192254 (* 1 = 0.192254 loss)\nI0608 00:56:53.437036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.222362 (* 1 = 0.222362 loss)\nI0608 00:56:53.437043 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101573 (* 1 = 0.101573 loss)\nI0608 00:56:53.437048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0237325 (* 1 = 0.0237325 loss)\nI0608 00:56:53.437053 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109694 (* 1 = 0.109694 loss)\nI0608 00:56:53.437059 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0237308 (* 1 = 0.0237308 loss)\nI0608 00:56:53.437067 13573 sgd_solver.cpp:106] Iteration 
22360, lr = 0.001\nI0608 00:57:06.374358 13573 solver.cpp:229] Iteration 22380, loss = 0.747304\nI0608 00:57:06.374433 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.191972 (* 1 = 0.191972 loss)\nI0608 00:57:06.374444 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0867091 (* 1 = 0.0867091 loss)\nI0608 00:57:06.374449 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0981268 (* 1 = 0.0981268 loss)\nI0608 00:57:06.374455 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0401992 (* 1 = 0.0401992 loss)\nI0608 00:57:06.374461 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0917866 (* 1 = 0.0917866 loss)\nI0608 00:57:06.374466 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0402015 (* 1 = 0.0402015 loss)\nI0608 00:57:06.374474 13573 sgd_solver.cpp:106] Iteration 22380, lr = 0.001\nspeed: 0.645s / iter\nI0608 00:57:19.361090 13573 solver.cpp:229] Iteration 22400, loss = 0.593804\nI0608 00:57:19.361158 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0869559 (* 1 = 0.0869559 loss)\nI0608 00:57:19.361168 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0370341 (* 1 = 0.0370341 loss)\nI0608 00:57:19.361174 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252411 (* 1 = 0.252411 loss)\nI0608 00:57:19.361181 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00310789 (* 1 = 0.00310789 loss)\nI0608 00:57:19.361186 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.243859 (* 1 = 0.243859 loss)\nI0608 00:57:19.361191 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0031046 (* 1 = 0.0031046 loss)\nI0608 00:57:19.361198 13573 sgd_solver.cpp:106] Iteration 22400, lr = 0.001\nI0608 00:57:32.436794 13573 solver.cpp:229] Iteration 22420, loss = 0.874508\nI0608 00:57:32.436869 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.256766 (* 1 = 0.256766 loss)\nI0608 00:57:32.436879 13573 solver.cpp:245] 
    Train net output #1: loss_cls = 0.229552 (* 1 = 0.229552 loss)\nI0608 00:57:32.436885 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.141792 (* 1 = 0.141792 loss)\nI0608 00:57:32.436892 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0604873 (* 1 = 0.0604873 loss)\nI0608 00:57:32.436897 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.146737 (* 1 = 0.146737 loss)\nI0608 00:57:32.436902 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0604799 (* 1 = 0.0604799 loss)\nI0608 00:57:32.436909 13573 sgd_solver.cpp:106] Iteration 22420, lr = 0.001\nI0608 00:57:45.425544 13573 solver.cpp:229] Iteration 22440, loss = 1.41569\nI0608 00:57:45.425612 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0232788 (* 1 = 0.0232788 loss)\nI0608 00:57:45.425621 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0110974 (* 1 = 0.0110974 loss)\nI0608 00:57:45.425627 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.973659 (* 1 = 0.973659 loss)\nI0608 00:57:45.425633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.211789 (* 1 = 0.211789 loss)\nI0608 00:57:45.425638 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.958108 (* 1 = 0.958108 loss)\nI0608 00:57:45.425644 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.211856 (* 1 = 0.211856 loss)\nI0608 00:57:45.425650 13573 sgd_solver.cpp:106] Iteration 22440, lr = 0.001\nI0608 00:57:58.230381 13573 solver.cpp:229] Iteration 22460, loss = 0.692755\nI0608 00:57:58.230439 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0173609 (* 1 = 0.0173609 loss)\nI0608 00:57:58.230448 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.027399 (* 1 = 0.027399 loss)\nI0608 00:57:58.230454 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.229806 (* 1 = 0.229806 loss)\nI0608 00:57:58.230459 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0138661 
(* 1 = 0.0138661 loss)\nI0608 00:57:58.230465 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.23847 (* 1 = 0.23847 loss)\nI0608 00:57:58.230470 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0138581 (* 1 = 0.0138581 loss)\nI0608 00:57:58.230476 13573 sgd_solver.cpp:106] Iteration 22460, lr = 0.001\nI0608 00:58:10.852394 13573 solver.cpp:229] Iteration 22480, loss = 0.474462\nI0608 00:58:10.852468 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0108868 (* 1 = 0.0108868 loss)\nI0608 00:58:10.852478 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0155199 (* 1 = 0.0155199 loss)\nI0608 00:58:10.852483 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137337 (* 1 = 0.137337 loss)\nI0608 00:58:10.852488 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0156837 (* 1 = 0.0156837 loss)\nI0608 00:58:10.852494 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.131833 (* 1 = 0.131833 loss)\nI0608 00:58:10.852499 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0156783 (* 1 = 0.0156783 loss)\nI0608 00:58:10.852505 13573 sgd_solver.cpp:106] Iteration 22480, lr = 0.001\nI0608 00:58:23.675565 13573 solver.cpp:229] Iteration 22500, loss = 1.62838\nI0608 00:58:23.675631 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.204462 (* 1 = 0.204462 loss)\nI0608 00:58:23.675640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10952 (* 1 = 0.10952 loss)\nI0608 00:58:23.675645 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.362001 (* 1 = 0.362001 loss)\nI0608 00:58:23.675652 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.023984 (* 1 = 0.023984 loss)\nI0608 00:58:23.675657 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.337182 (* 1 = 0.337182 loss)\nI0608 00:58:23.675662 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0239845 (* 1 = 0.0239845 loss)\nI0608 00:58:23.675668 13573 
sgd_solver.cpp:106] Iteration 22500, lr = 0.001\nI0608 00:58:36.676220 13573 solver.cpp:229] Iteration 22520, loss = 0.533584\nI0608 00:58:36.676283 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.068009 (* 1 = 0.068009 loss)\nI0608 00:58:36.676293 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.187153 (* 1 = 0.187153 loss)\nI0608 00:58:36.676300 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102863 (* 1 = 0.102863 loss)\nI0608 00:58:36.676306 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.109453 (* 1 = 0.109453 loss)\nI0608 00:58:36.676311 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0978004 (* 1 = 0.0978004 loss)\nI0608 00:58:36.676317 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.109452 (* 1 = 0.109452 loss)\nI0608 00:58:36.676326 13573 sgd_solver.cpp:106] Iteration 22520, lr = 0.001\nI0608 00:58:49.572353 13573 solver.cpp:229] Iteration 22540, loss = 0.516809\nI0608 00:58:49.572433 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0985352 (* 1 = 0.0985352 loss)\nI0608 00:58:49.572443 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.294321 (* 1 = 0.294321 loss)\nI0608 00:58:49.572449 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0892345 (* 1 = 0.0892345 loss)\nI0608 00:58:49.572455 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00977818 (* 1 = 0.00977818 loss)\nI0608 00:58:49.572461 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0924962 (* 1 = 0.0924962 loss)\nI0608 00:58:49.572468 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00977365 (* 1 = 0.00977365 loss)\nI0608 00:58:49.572475 13573 sgd_solver.cpp:106] Iteration 22540, lr = 0.001\nI0608 00:59:02.478304 13573 solver.cpp:229] Iteration 22560, loss = 0.538545\nI0608 00:59:02.478368 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.210382 (* 1 = 0.210382 loss)\nI0608 00:59:02.478379 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.215017 (* 1 = 0.215017 loss)
I0608 00:59:02.478384 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0938763 (* 1 = 0.0938763 loss)
I0608 00:59:02.478389 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00882541 (* 1 = 0.00882541 loss)
I0608 00:59:02.478395 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0999016 (* 1 = 0.0999016 loss)
I0608 00:59:02.478401 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00882782 (* 1 = 0.00882782 loss)
I0608 00:59:02.478408 13573 sgd_solver.cpp:106] Iteration 22560, lr = 0.001
I0608 00:59:15.096235 13573 solver.cpp:229] Iteration 22580, loss = 0.808927
I0608 00:59:15.096302 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.144411 (* 1 = 0.144411 loss)
I0608 00:59:15.096312 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0291413 (* 1 = 0.0291413 loss)
I0608 00:59:15.096318 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.384139 (* 1 = 0.384139 loss)
I0608 00:59:15.096323 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.122727 (* 1 = 0.122727 loss)
I0608 00:59:15.096328 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.378557 (* 1 = 0.378557 loss)
I0608 00:59:15.096334 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.122663 (* 1 = 0.122663 loss)
I0608 00:59:15.096340 13573 sgd_solver.cpp:106] Iteration 22580, lr = 0.001
speed: 0.645s / iter
I0608 00:59:27.935977 13573 solver.cpp:229] Iteration 22600, loss = 1.55998
I0608 00:59:27.936049 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0723755 (* 1 = 0.0723755 loss)
I0608 00:59:27.936059 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.054604 (* 1 = 0.054604 loss)
I0608 00:59:27.936065 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.510284 (* 1 = 0.510284 loss)
I0608 00:59:27.936070 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.148419 (* 1 = 0.148419 loss)
I0608 00:59:27.936076 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.517431 (* 1 = 0.517431 loss)
I0608 00:59:27.936082 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.148448 (* 1 = 0.148448 loss)
I0608 00:59:27.936089 13573 sgd_solver.cpp:106] Iteration 22600, lr = 0.001
I0608 00:59:40.489753 13573 solver.cpp:229] Iteration 22620, loss = 0.475931
I0608 00:59:40.489814 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.17571 (* 1 = 0.17571 loss)
I0608 00:59:40.489823 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0896996 (* 1 = 0.0896996 loss)
I0608 00:59:40.489830 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.120997 (* 1 = 0.120997 loss)
I0608 00:59:40.489835 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0225777 (* 1 = 0.0225777 loss)
I0608 00:59:40.489840 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.140663 (* 1 = 0.140663 loss)
I0608 00:59:40.489845 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0225758 (* 1 = 0.0225758 loss)
I0608 00:59:40.489852 13573 sgd_solver.cpp:106] Iteration 22620, lr = 0.001
I0608 00:59:53.307914 13573 solver.cpp:229] Iteration 22640, loss = 0.524859
I0608 00:59:53.307978 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.226223 (* 1 = 0.226223 loss)
I0608 00:59:53.307987 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.230556 (* 1 = 0.230556 loss)
I0608 00:59:53.307994 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.098495 (* 1 = 0.098495 loss)
I0608 00:59:53.307999 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00693561 (* 1 = 0.00693561 loss)
I0608 00:59:53.308006 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0907364 (* 1 = 0.0907364 loss)
I0608 00:59:53.308010 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0069357 (* 1 = 0.0069357 loss)
I0608 00:59:53.308017 13573 sgd_solver.cpp:106] Iteration 22640, lr = 0.001
I0608 01:00:06.103026 13573 solver.cpp:229] Iteration 22660, loss = 0.492216
I0608 01:00:06.103097 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.140642 (* 1 = 0.140642 loss)
I0608 01:00:06.103109 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.18912 (* 1 = 0.18912 loss)
I0608 01:00:06.103116 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0931309 (* 1 = 0.0931309 loss)
I0608 01:00:06.103121 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0357216 (* 1 = 0.0357216 loss)
I0608 01:00:06.103127 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.087314 (* 1 = 0.087314 loss)
I0608 01:00:06.103132 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0357221 (* 1 = 0.0357221 loss)
I0608 01:00:06.103139 13573 sgd_solver.cpp:106] Iteration 22660, lr = 0.001
I0608 01:00:18.985116 13573 solver.cpp:229] Iteration 22680, loss = 0.518988
I0608 01:00:18.985178 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.112834 (* 1 = 0.112834 loss)
I0608 01:00:18.985188 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112475 (* 1 = 0.112475 loss)
I0608 01:00:18.985193 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.125235 (* 1 = 0.125235 loss)
I0608 01:00:18.985199 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0280639 (* 1 = 0.0280639 loss)
I0608 01:00:18.985204 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.135532 (* 1 = 0.135532 loss)
I0608 01:00:18.985210 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.028061 (* 1 = 0.028061 loss)
I0608 01:00:18.985218 13573 sgd_solver.cpp:106] Iteration 22680, lr = 0.001
I0608 01:00:31.629422 13573 solver.cpp:229] Iteration 22700, loss = 1.08059
I0608 01:00:31.629492 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0102712 (* 1 = 0.0102712 loss)
I0608 01:00:31.629500 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.020907 (* 1 = 0.020907 loss)
I0608 01:00:31.629508 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.724162 (* 1 = 0.724162 loss)
I0608 01:00:31.629513 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0900327 (* 1 = 0.0900327 loss)
I0608 01:00:31.629518 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.709568 (* 1 = 0.709568 loss)
I0608 01:00:31.629523 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0901392 (* 1 = 0.0901392 loss)
I0608 01:00:31.629530 13573 sgd_solver.cpp:106] Iteration 22700, lr = 0.001
I0608 01:00:44.543321 13573 solver.cpp:229] Iteration 22720, loss = 0.708839
I0608 01:00:44.543388 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.136974 (* 1 = 0.136974 loss)
I0608 01:00:44.543397 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0653438 (* 1 = 0.0653438 loss)
I0608 01:00:44.543403 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.151109 (* 1 = 0.151109 loss)
I0608 01:00:44.543408 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00190699 (* 1 = 0.00190699 loss)
I0608 01:00:44.543414 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.157556 (* 1 = 0.157556 loss)
I0608 01:00:44.543421 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00190784 (* 1 = 0.00190784 loss)
I0608 01:00:44.543427 13573 sgd_solver.cpp:106] Iteration 22720, lr = 0.001
I0608 01:00:57.178781 13573 solver.cpp:229] Iteration 22740, loss = 0.986725
I0608 01:00:57.178839 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.003157 (* 1 = 0.003157 loss)
I0608 01:00:57.178848 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0171051 (* 1 = 0.0171051 loss)
I0608 01:00:57.178854 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.518499 (* 1 = 0.518499 loss)
I0608 01:00:57.178860 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.164928 (* 1 = 0.164928 loss)
I0608 01:00:57.178865 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.497584 (* 1 = 0.497584 loss)
I0608 01:00:57.178871 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.164906 (* 1 = 0.164906 loss)
I0608 01:00:57.178877 13573 sgd_solver.cpp:106] Iteration 22740, lr = 0.001
I0608 01:01:10.023994 13573 solver.cpp:229] Iteration 22760, loss = 0.874536
I0608 01:01:10.024057 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105538 (* 1 = 0.105538 loss)
I0608 01:01:10.024067 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0683973 (* 1 = 0.0683973 loss)
I0608 01:01:10.024072 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.471661 (* 1 = 0.471661 loss)
I0608 01:01:10.024078 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0774583 (* 1 = 0.0774583 loss)
I0608 01:01:10.024083 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.431921 (* 1 = 0.431921 loss)
I0608 01:01:10.024089 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0774574 (* 1 = 0.0774574 loss)
I0608 01:01:10.024096 13573 sgd_solver.cpp:106] Iteration 22760, lr = 0.001
I0608 01:01:22.852766 13573 solver.cpp:229] Iteration 22780, loss = 0.816991
I0608 01:01:22.852834 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0965521 (* 1 = 0.0965521 loss)
I0608 01:01:22.852843 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0544404 (* 1 = 0.0544404 loss)
I0608 01:01:22.852849 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.108356 (* 1 = 0.108356 loss)
I0608 01:01:22.852855 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0261339 (* 1 = 0.0261339 loss)
I0608 01:01:22.852860 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108379 (* 1 = 0.108379 loss)
I0608 01:01:22.852866 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0261577 (* 1 = 0.0261577 loss)
I0608 01:01:22.852874 13573 sgd_solver.cpp:106] Iteration 22780, lr = 0.001
speed: 0.644s / iter
I0608 01:01:35.748739 13573 solver.cpp:229] Iteration 22800, loss = 1.02367
I0608 01:01:35.748800 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0520492 (* 1 = 0.0520492 loss)
I0608 01:01:35.748809 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0884673 (* 1 = 0.0884673 loss)
I0608 01:01:35.748816 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.462072 (* 1 = 0.462072 loss)
I0608 01:01:35.748822 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0568441 (* 1 = 0.0568441 loss)
I0608 01:01:35.748828 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.468317 (* 1 = 0.468317 loss)
I0608 01:01:35.748834 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0568272 (* 1 = 0.0568272 loss)
I0608 01:01:35.748842 13573 sgd_solver.cpp:106] Iteration 22800, lr = 0.001
I0608 01:01:48.741253 13573 solver.cpp:229] Iteration 22820, loss = 1.27169
I0608 01:01:48.741322 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115193 (* 1 = 0.115193 loss)
I0608 01:01:48.741333 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122409 (* 1 = 0.122409 loss)
I0608 01:01:48.741338 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0874895 (* 1 = 0.0874895 loss)
I0608 01:01:48.741345 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00666285 (* 1 = 0.00666285 loss)
I0608 01:01:48.741351 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.077736 (* 1 = 0.077736 loss)
I0608 01:01:48.741358 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00666036 (* 1 = 0.00666036 loss)
I0608 01:01:48.741364 13573 sgd_solver.cpp:106] Iteration 22820, lr = 0.001
I0608 01:02:01.488332 13573 solver.cpp:229] Iteration 22840, loss = 0.581673
I0608 01:02:01.488390 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0832991 (* 1 = 0.0832991 loss)
I0608 01:02:01.488399 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0318834 (* 1 = 0.0318834 loss)
I0608 01:02:01.488406 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.297302 (* 1 = 0.297302 loss)
I0608 01:02:01.488411 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321704 (* 1 = 0.0321704 loss)
I0608 01:02:01.488417 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.259454 (* 1 = 0.259454 loss)
I0608 01:02:01.488422 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0321413 (* 1 = 0.0321413 loss)
I0608 01:02:01.488430 13573 sgd_solver.cpp:106] Iteration 22840, lr = 0.001
I0608 01:02:14.401630 13573 solver.cpp:229] Iteration 22860, loss = 0.985298
I0608 01:02:14.401700 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0114665 (* 1 = 0.0114665 loss)
I0608 01:02:14.401710 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0103929 (* 1 = 0.0103929 loss)
I0608 01:02:14.401715 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.347572 (* 1 = 0.347572 loss)
I0608 01:02:14.401721 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0267975 (* 1 = 0.0267975 loss)
I0608 01:02:14.401726 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.355887 (* 1 = 0.355887 loss)
I0608 01:02:14.401732 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.026808 (* 1 = 0.026808 loss)
I0608 01:02:14.401738 13573 sgd_solver.cpp:106] Iteration 22860, lr = 0.001
I0608 01:02:27.298764 13573 solver.cpp:229] Iteration 22880, loss = 0.885169
I0608 01:02:27.298837 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.037554 (* 1 = 0.037554 loss)
I0608 01:02:27.298851 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0699786 (* 1 = 0.0699786 loss)
I0608 01:02:27.298856 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.264927 (* 1 = 0.264927 loss)
I0608 01:02:27.298862 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0204799 (* 1 = 0.0204799 loss)
I0608 01:02:27.298868 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.27142 (* 1 = 0.27142 loss)
I0608 01:02:27.298874 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0204844 (* 1 = 0.0204844 loss)
I0608 01:02:27.298882 13573 sgd_solver.cpp:106] Iteration 22880, lr = 0.001
I0608 01:02:40.251390 13573 solver.cpp:229] Iteration 22900, loss = 0.885394
I0608 01:02:40.251449 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.369168 (* 1 = 0.369168 loss)
I0608 01:02:40.251458 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.158938 (* 1 = 0.158938 loss)
I0608 01:02:40.251464 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.15985 (* 1 = 0.15985 loss)
I0608 01:02:40.251471 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0869882 (* 1 = 0.0869882 loss)
I0608 01:02:40.251476 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.166979 (* 1 = 0.166979 loss)
I0608 01:02:40.251482 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0870301 (* 1 = 0.0870301 loss)
I0608 01:02:40.251487 13573 sgd_solver.cpp:106] Iteration 22900, lr = 0.001
I0608 01:02:53.081063 13573 solver.cpp:229] Iteration 22920, loss = 0.568161
I0608 01:02:53.081137 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125575 (* 1 = 0.125575 loss)
I0608 01:02:53.081147 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0723634 (* 1 = 0.0723634 loss)
I0608 01:02:53.081153 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.110237 (* 1 = 0.110237 loss)
I0608 01:02:53.081158 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0349563 (* 1 = 0.0349563 loss)
I0608 01:02:53.081164 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109433 (* 1 = 0.109433 loss)
I0608 01:02:53.081169 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0349565 (* 1 = 0.0349565 loss)
I0608 01:02:53.081176 13573 sgd_solver.cpp:106] Iteration 22920, lr = 0.001
I0608 01:03:05.861753 13573 solver.cpp:229] Iteration 22940, loss = 0.682347
I0608 01:03:05.861819 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.234744 (* 1 = 0.234744 loss)
I0608 01:03:05.861829 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.204365 (* 1 = 0.204365 loss)
I0608 01:03:05.861835 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103708 (* 1 = 0.103708 loss)
I0608 01:03:05.861840 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269386 (* 1 = 0.0269386 loss)
I0608 01:03:05.861845 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100398 (* 1 = 0.100398 loss)
I0608 01:03:05.861850 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269385 (* 1 = 0.0269385 loss)
I0608 01:03:05.861857 13573 sgd_solver.cpp:106] Iteration 22940, lr = 0.001
I0608 01:03:18.761322 13573 solver.cpp:229] Iteration 22960, loss = 0.989812
I0608 01:03:18.761386 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0415693 (* 1 = 0.0415693 loss)
I0608 01:03:18.761395 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0679268 (* 1 = 0.0679268 loss)
I0608 01:03:18.761401 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277429 (* 1 = 0.277429 loss)
I0608 01:03:18.761430 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0127135 (* 1 = 0.0127135 loss)
I0608 01:03:18.761436 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.28638 (* 1 = 0.28638 loss)
I0608 01:03:18.761441 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0127241 (* 1 = 0.0127241 loss)
I0608 01:03:18.761448 13573 sgd_solver.cpp:106] Iteration 22960, lr = 0.001
I0608 01:03:31.808715 13573 solver.cpp:229] Iteration 22980, loss = 1.36463
I0608 01:03:31.808789 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0521133 (* 1 = 0.0521133 loss)
I0608 01:03:31.808799 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0464557 (* 1 = 0.0464557 loss)
I0608 01:03:31.808804 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.768411 (* 1 = 0.768411 loss)
I0608 01:03:31.808811 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.332185 (* 1 = 0.332185 loss)
I0608 01:03:31.808817 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.786349 (* 1 = 0.786349 loss)
I0608 01:03:31.808823 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.332041 (* 1 = 0.332041 loss)
I0608 01:03:31.808831 13573 sgd_solver.cpp:106] Iteration 22980, lr = 0.001
speed: 0.644s / iter
I0608 01:03:44.710044 13573 solver.cpp:229] Iteration 23000, loss = 1.01544
I0608 01:03:44.710114 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.180106 (* 1 = 0.180106 loss)
I0608 01:03:44.710124 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.315503 (* 1 = 0.315503 loss)
I0608 01:03:44.710129 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.3165 (* 1 = 0.3165 loss)
I0608 01:03:44.710135 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0247395 (* 1 = 0.0247395 loss)
I0608 01:03:44.710141 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.312015 (* 1 = 0.312015 loss)
I0608 01:03:44.710147 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247476 (* 1 = 0.0247476 loss)
I0608 01:03:44.710155 13573 sgd_solver.cpp:106] Iteration 23000, lr = 0.001
I0608 01:03:57.679395 13573 solver.cpp:229] Iteration 23020, loss = 1.83261
I0608 01:03:57.679457 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0120701 (* 1 = 0.0120701 loss)
I0608 01:03:57.679467 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000956213 (* 1 = 0.000956213 loss)
I0608 01:03:57.679473 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.07488 (* 1 = 1.07488 loss)
I0608 01:03:57.679478 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.381397 (* 1 = 0.381397 loss)
I0608 01:03:57.679484 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.10542 (* 1 = 1.10542 loss)
I0608 01:03:57.679491 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.381202 (* 1 = 0.381202 loss)
I0608 01:03:57.679497 13573 sgd_solver.cpp:106] Iteration 23020, lr = 0.001
I0608 01:04:10.552958 13573 solver.cpp:229] Iteration 23040, loss = 0.497187
I0608 01:04:10.553030 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0905745 (* 1 = 0.0905745 loss)
I0608 01:04:10.553040 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0634562 (* 1 = 0.0634562 loss)
I0608 01:04:10.553045 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10688 (* 1 = 0.10688 loss)
I0608 01:04:10.553051 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0226662 (* 1 = 0.0226662 loss)
I0608 01:04:10.553057 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100213 (* 1 = 0.100213 loss)
I0608 01:04:10.553063 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0226581 (* 1 = 0.0226581 loss)
I0608 01:04:10.553069 13573 sgd_solver.cpp:106] Iteration 23040, lr = 0.001
I0608 01:04:23.379396 13573 solver.cpp:229] Iteration 23060, loss = 0.886003
I0608 01:04:23.379462 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00878862 (* 1 = 0.00878862 loss)
I0608 01:04:23.379472 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0290267 (* 1 = 0.0290267 loss)
I0608 01:04:23.379487 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.477215 (* 1 = 0.477215 loss)
I0608 01:04:23.379492 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0435021 (* 1 = 0.0435021 loss)
I0608 01:04:23.379498 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.475058 (* 1 = 0.475058 loss)
I0608 01:04:23.379504 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0435344 (* 1 = 0.0435344 loss)
I0608 01:04:23.379510 13573 sgd_solver.cpp:106] Iteration 23060, lr = 0.001
I0608 01:04:36.281507 13573 solver.cpp:229] Iteration 23080, loss = 0.6075
I0608 01:04:36.281580 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000749664 (* 1 = 0.000749664 loss)
I0608 01:04:36.281590 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00125442 (* 1 = 0.00125442 loss)
I0608 01:04:36.281596 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.233443 (* 1 = 0.233443 loss)
I0608 01:04:36.281602 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0455796 (* 1 = 0.0455796 loss)
I0608 01:04:36.281608 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.236055 (* 1 = 0.236055 loss)
I0608 01:04:36.281613 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0455864 (* 1 = 0.0455864 loss)
I0608 01:04:36.281620 13573 sgd_solver.cpp:106] Iteration 23080, lr = 0.001
I0608 01:04:49.265178 13573 solver.cpp:229] Iteration 23100, loss = 0.860207
I0608 01:04:49.265244 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.346966 (* 1 = 0.346966 loss)
I0608 01:04:49.265254 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.185967 (* 1 = 0.185967 loss)
I0608 01:04:49.265266 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118198 (* 1 = 0.118198 loss)
I0608 01:04:49.265274 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0234641 (* 1 = 0.0234641 loss)
I0608 01:04:49.265280 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.110346 (* 1 = 0.110346 loss)
I0608 01:04:49.265285 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.023464 (* 1 = 0.023464 loss)
I0608 01:04:49.265291 13573 sgd_solver.cpp:106] Iteration 23100, lr = 0.001
I0608 01:05:02.538494 13573 solver.cpp:229] Iteration 23120, loss = 0.727065
I0608 01:05:02.538563 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0991816 (* 1 = 0.0991816 loss)
I0608 01:05:02.538571 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107314 (* 1 = 0.107314 loss)
I0608 01:05:02.538578 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0823501 (* 1 = 0.0823501 loss)
I0608 01:05:02.538583 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0167216 (* 1 = 0.0167216 loss)
I0608 01:05:02.538589 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0729351 (* 1 = 0.0729351 loss)
I0608 01:05:02.538594 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0167197 (* 1 = 0.0167197 loss)
I0608 01:05:02.538600 13573 sgd_solver.cpp:106] Iteration 23120, lr = 0.001
I0608 01:05:15.285260 13573 solver.cpp:229] Iteration 23140, loss = 0.675296
I0608 01:05:15.285336 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.154873 (* 1 = 0.154873 loss)
I0608 01:05:15.285346 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.221284 (* 1 = 0.221284 loss)
I0608 01:05:15.285351 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.232079 (* 1 = 0.232079 loss)
I0608 01:05:15.285358 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0194423 (* 1 = 0.0194423 loss)
I0608 01:05:15.285364 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.214324 (* 1 = 0.214324 loss)
I0608 01:05:15.285370 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0194417 (* 1 = 0.0194417 loss)
I0608 01:05:15.285378 13573 sgd_solver.cpp:106] Iteration 23140, lr = 0.001
I0608 01:05:28.233341 13573 solver.cpp:229] Iteration 23160, loss = 1.02511
I0608 01:05:28.233403 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.11508 (* 1 = 0.11508 loss)
I0608 01:05:28.233413 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0838813 (* 1 = 0.0838813 loss)
I0608 01:05:28.233419 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.530582 (* 1 = 0.530582 loss)
I0608 01:05:28.233425 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.154489 (* 1 = 0.154489 loss)
I0608 01:05:28.233430 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.544542 (* 1 = 0.544542 loss)
I0608 01:05:28.233436 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.154548 (* 1 = 0.154548 loss)
I0608 01:05:28.233443 13573 sgd_solver.cpp:106] Iteration 23160, lr = 0.001
I0608 01:05:41.257462 13573 solver.cpp:229] Iteration 23180, loss = 0.873401
I0608 01:05:41.257526 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0400807 (* 1 = 0.0400807 loss)
I0608 01:05:41.257536 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0982923 (* 1 = 0.0982923 loss)
I0608 01:05:41.257542 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.452316 (* 1 = 0.452316 loss)
I0608 01:05:41.257549 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.102601 (* 1 = 0.102601 loss)
I0608 01:05:41.257553 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.462888 (* 1 = 0.462888 loss)
I0608 01:05:41.257560 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.102598 (* 1 = 0.102598 loss)
I0608 01:05:41.257565 13573 sgd_solver.cpp:106] Iteration 23180, lr = 0.001
speed: 0.644s / iter
I0608 01:05:54.115146 13573 solver.cpp:229] Iteration 23200, loss = 0.642999
I0608 01:05:54.115217 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.140088 (* 1 = 0.140088 loss)
I0608 01:05:54.115228 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.117711 (* 1 = 0.117711 loss)
I0608 01:05:54.115234 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0952002 (* 1 = 0.0952002 loss)
I0608 01:05:54.115241 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00651458 (* 1 = 0.00651458 loss)
I0608 01:05:54.115245 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0857943 (* 1 = 0.0857943 loss)
I0608 01:05:54.115250 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00651494 (* 1 = 0.00651494 loss)
I0608 01:05:54.115257 13573 sgd_solver.cpp:106] Iteration 23200, lr = 0.001
I0608 01:06:06.939497 13573 solver.cpp:229] Iteration 23220, loss = 1.5043
I0608 01:06:06.939563 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000389626 (* 1 = 0.000389626 loss)
I0608 01:06:06.939574 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00700416 (* 1 = 0.00700416 loss)
I0608 01:06:06.939579 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.201422 (* 1 = 0.201422 loss)
I0608 01:06:06.939585 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0137155 (* 1 = 0.0137155 loss)
I0608 01:06:06.939590 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.181886 (* 1 = 0.181886 loss)
I0608 01:06:06.939596 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0137113 (* 1 = 0.0137113 loss)
I0608 01:06:06.939604 13573 sgd_solver.cpp:106] Iteration 23220, lr = 0.001
I0608 01:06:19.925889 13573 solver.cpp:229] Iteration 23240, loss = 0.383031
I0608 01:06:19.925959 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0949912 (* 1 = 0.0949912 loss)
I0608 01:06:19.925969 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.061245 (* 1 = 0.061245 loss)
I0608 01:06:19.925976 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0797375 (* 1 = 0.0797375 loss)
I0608 01:06:19.925982 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0296134 (* 1 = 0.0296134 loss)
I0608 01:06:19.925988 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.076326 (* 1 = 0.076326 loss)
I0608 01:06:19.925994 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0296182 (* 1 = 0.0296182 loss)
I0608 01:06:19.926002 13573 sgd_solver.cpp:106] Iteration 23240, lr = 0.001
I0608 01:06:32.797780 13573 solver.cpp:229] Iteration 23260, loss = 1.08987
I0608 01:06:32.797857 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.389373 (* 1 = 0.389373 loss)
I0608 01:06:32.797868 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.216915 (* 1 = 0.216915 loss)
I0608 01:06:32.797873 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0961303 (* 1 = 0.0961303 loss)
I0608 01:06:32.797879 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0483107 (* 1 = 0.0483107 loss)
I0608 01:06:32.797885 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0910702 (* 1 = 0.0910702 loss)
I0608 01:06:32.797891 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.04832 (* 1 = 0.04832 loss)
I0608 01:06:32.797899 13573 sgd_solver.cpp:106] Iteration 23260, lr = 0.001
I0608 01:06:45.906921 13573 solver.cpp:229] Iteration 23280, loss = 0.844153
I0608 01:06:45.906996 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00183378 (* 1 = 0.00183378 loss)
I0608 01:06:45.907006 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0151002 (* 1 = 0.0151002 loss)
I0608 01:06:45.907013 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.393556 (* 1 = 0.393556 loss)
I0608 01:06:45.907019 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0573122 (* 1 = 0.0573122 loss)
I0608 01:06:45.907025 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.372395 (* 1 = 0.372395 loss)
I0608 01:06:45.907032 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0573 (* 1 = 0.0573 loss)
I0608 01:06:45.907038 13573 sgd_solver.cpp:106] Iteration 23280, lr = 0.001
I0608 01:06:58.736805 13573 solver.cpp:229] Iteration 23300, loss = 0.479788
I0608 01:06:58.736872 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.265071 (* 1 = 0.265071 loss)
I0608 01:06:58.736882 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.088579 (* 1 = 0.088579 loss)
I0608 01:06:58.736888 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0854456 (* 1 = 0.0854456 loss)
I0608 01:06:58.736894 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00821233 (* 1 = 0.00821233 loss)
I0608 01:06:58.736901 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0907143 (* 1 = 0.0907143 loss)
I0608 01:06:58.736907 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00818082 (* 1 = 0.00818082 loss)
I0608 01:06:58.736913 13573 sgd_solver.cpp:106] Iteration 23300, lr = 0.001
I0608 01:07:11.535063 13573 solver.cpp:229] Iteration 23320, loss = 1.02766
I0608 01:07:11.535128 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.425486 (* 1 = 0.425486 loss)
I0608 01:07:11.535137 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.348437 (* 1 = 0.348437 loss)
I0608 01:07:11.535143 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191716 (* 1 = 0.191716 loss)
I0608 01:07:11.535150 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0779687 (* 1 = 0.0779687 loss)
I0608 01:07:11.535156 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.193116 (* 1 = 0.193116 loss)
I0608 01:07:11.535161 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0779504 (* 1 = 0.0779504 loss)
I0608 01:07:11.535168 13573 sgd_solver.cpp:106] Iteration 23320, lr = 0.001
I0608 01:07:24.554123 13573 solver.cpp:229] Iteration 23340, loss = 1.98846
I0608 01:07:24.554191 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00600077 (* 1 = 0.00600077 loss)
I0608 01:07:24.554200 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0348556 (* 1 = 0.0348556 loss)
I0608 01:07:24.554206 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.01155 (* 1 = 1.01155 loss)
I0608 01:07:24.554213 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.244068 (* 1 = 0.244068 loss)
I0608 01:07:24.554219 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.03718 (* 1 = 1.03718 loss)
I0608 01:07:24.554224 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.243863 (* 1 = 0.243863 loss)
I0608 01:07:24.554230 13573 sgd_solver.cpp:106] Iteration 23340, lr = 0.001
I0608 01:07:37.531317 13573 solver.cpp:229] Iteration 23360, loss = 0.593169
I0608 01:07:37.531391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0396622 (* 1 = 0.0396622 loss)
I0608 01:07:37.531400 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.140752 (* 1 = 0.140752 loss)
I0608 01:07:37.531407 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0871284 (* 1 = 0.0871284 loss)
I0608 01:07:37.531414 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110543 (* 1 = 0.0110543 loss)
I0608 01:07:37.531419 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0817043 (* 1 = 0.0817043 loss)
I0608 01:07:37.531424 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110596 (* 1 = 0.0110596 loss)
I0608 01:07:37.531431 13573 sgd_solver.cpp:106] Iteration 23360, lr = 0.001
I0608 01:07:50.464836 13573 solver.cpp:229] Iteration 23380, loss = 1.29541
I0608 01:07:50.464898 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.323326 (* 1 = 0.323326 loss)
I0608 01:07:50.464907 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.134916 (* 1 = 0.134916 loss)
I0608 01:07:50.464915 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105177 (* 1 = 0.105177 loss)
I0608 01:07:50.464920 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0737664 (* 1 = 0.0737664 loss)
I0608 01:07:50.464926 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107786 (* 1 = 0.107786 loss)
I0608 01:07:50.464931 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0737656 (* 1 = 0.0737656 loss)
I0608 01:07:50.464937 13573 sgd_solver.cpp:106] Iteration 23380, lr = 0.001
speed: 0.645s / iter
I0608 01:08:03.357841 13573 solver.cpp:229] Iteration 23400, loss = 0.766994
I0608 01:08:03.357908 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0370626 (* 1 = 0.0370626 loss)
I0608 01:08:03.357918 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.040925 (* 1 = 0.040925 loss)
I0608 01:08:03.357923 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0844763 (* 1 = 0.0844763 loss)
I0608 01:08:03.357928 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0341873 (* 1 = 0.0341873 loss)
I0608 01:08:03.357934 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0855648 (* 1 = 0.0855648 loss)
I0608 01:08:03.357939 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0341882 (* 1 = 0.0341882 loss)
I0608 01:08:03.357946 13573 sgd_solver.cpp:106] Iteration 23400, lr = 0.001
I0608 01:08:16.246084 13573 solver.cpp:229] Iteration 23420, loss = 0.564734
I0608 01:08:16.246170 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.218412 (* 1 = 0.218412 loss)
I0608 01:08:16.246181 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0478531 (* 1 = 0.0478531 loss)
I0608 01:08:16.246187 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0954711 (* 1 = 0.0954711 loss)
I0608 01:08:16.246192 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00930016 (* 1 = 0.00930016 loss)
I0608 01:08:16.246198 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102763 (* 1 = 0.102763 loss)
I0608 01:08:16.246204 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00930009 (* 1 = 0.00930009 loss)
I0608 01:08:16.246212 13573 sgd_solver.cpp:106] Iteration 23420, lr = 0.001
I0608 01:08:29.329972 13573 solver.cpp:229] Iteration 23440, loss = 0.75557
I0608 01:08:29.330034 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.166443 (* 1 = 0.166443 loss)
I0608 01:08:29.330044 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119507 (* 1 = 0.119507 loss)
I0608 01:08:29.330049 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.278904 (* 1 = 0.278904 loss)
I0608 01:08:29.330055 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0245027 (* 1 = 0.0245027 loss)
I0608 01:08:29.330060 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.289528 (* 1 = 0.289528 loss)
I0608 01:08:29.330065 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0245072 (* 1 = 0.0245072 loss)
I0608 01:08:29.330072 13573 sgd_solver.cpp:106] Iteration 23440, lr = 0.001
I0608 01:08:42.136786 13573 solver.cpp:229] Iteration 23460, loss = 0.573941
I0608 01:08:42.136853 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.156342 (* 1 = 0.156342 loss)
I0608 01:08:42.136863 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0378031 (* 1 = 0.0378031 loss)
I0608 01:08:42.136869 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121929 (* 1 = 0.121929 loss)
I0608 01:08:42.136875 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00778851 (* 1 = 0.00778851 loss)
I0608 01:08:42.136880 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.14266 (* 1 = 0.14266 loss)
I0608 01:08:42.136886 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00778828 (* 1 = 0.00778828 loss)
I0608 01:08:42.136893 13573 sgd_solver.cpp:106] Iteration 23460, lr = 0.001
I0608 01:08:55.170313 13573 solver.cpp:229] Iteration 23480, loss = 0.698574
I0608 01:08:55.170382 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.286103 (* 1 = 0.286103 loss)
I0608 01:08:55.170392 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0891803 (* 1 = 0.0891803 loss)
I0608 01:08:55.170399 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118497 (* 1 = 0.118497 loss)
I0608 01:08:55.170404 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.116786 (* 1 = 0.116786 loss)
I0608 01:08:55.170410 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.121622 (* 1 = 0.121622 loss)
I0608 01:08:55.170416 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.116878 (* 1 = 0.116878 loss)
I0608 01:08:55.170423 13573 sgd_solver.cpp:106] Iteration 23480, lr = 0.001
I0608 01:09:08.324545 13573 solver.cpp:229] Iteration 23500, loss = 0.964358
I0608 01:09:08.324614 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116875 (* 1 = 0.116875 loss)
I0608 01:09:08.324623 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0926312 (* 1 = 0.0926312 loss)
I0608 01:09:08.324630 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.398773 (* 1 = 0.398773 loss)
I0608 01:09:08.324636 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.204363 (* 1 = 0.204363 loss)
I0608 01:09:08.324642 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.39748 (* 1 = 0.39748 loss)
I0608 01:09:08.324648 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.20437 (* 1 = 0.20437 loss)
I0608 01:09:08.324656 13573 sgd_solver.cpp:106] Iteration 23500, lr = 0.001
I0608 01:09:21.054083 13573 solver.cpp:229] Iteration 23520, loss = 0.59908
I0608 01:09:21.054155 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.131463 (* 1 = 0.131463 loss)
I0608 01:09:21.054165 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.206766 (* 1 = 0.206766 loss)
I0608 01:09:21.054172 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0920345 (* 1 = 0.0920345 loss)
I0608 01:09:21.054178 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.048909 (* 1 = 0.048909 loss)
I0608 01:09:21.054184 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0949369 (* 1 = 0.0949369 loss)
I0608 01:09:21.054190 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.04891 (* 1 = 0.04891 loss)
I0608 01:09:21.054198 13573 sgd_solver.cpp:106] Iteration 23520, lr = 
0.001\nI0608 01:09:33.987292 13573 solver.cpp:229] Iteration 23540, loss = 1.23967\nI0608 01:09:33.987378 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.188033 (* 1 = 0.188033 loss)\nI0608 01:09:33.987388 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.121365 (* 1 = 0.121365 loss)\nI0608 01:09:33.987396 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.35074 (* 1 = 0.35074 loss)\nI0608 01:09:33.987401 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.056464 (* 1 = 0.056464 loss)\nI0608 01:09:33.987407 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.359349 (* 1 = 0.359349 loss)\nI0608 01:09:33.987413 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0564677 (* 1 = 0.0564677 loss)\nI0608 01:09:33.987421 13573 sgd_solver.cpp:106] Iteration 23540, lr = 0.001\nI0608 01:09:46.650250 13573 solver.cpp:229] Iteration 23560, loss = 0.428957\nI0608 01:09:46.650310 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0925177 (* 1 = 0.0925177 loss)\nI0608 01:09:46.650319 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0682699 (* 1 = 0.0682699 loss)\nI0608 01:09:46.650326 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.092216 (* 1 = 0.092216 loss)\nI0608 01:09:46.650331 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0880373 (* 1 = 0.0880373 loss)\nI0608 01:09:46.650337 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0887192 (* 1 = 0.0887192 loss)\nI0608 01:09:46.650343 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0880369 (* 1 = 0.0880369 loss)\nI0608 01:09:46.650349 13573 sgd_solver.cpp:106] Iteration 23560, lr = 0.001\nI0608 01:09:59.696665 13573 solver.cpp:229] Iteration 23580, loss = 1.10442\nI0608 01:09:59.696733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.200035 (* 1 = 0.200035 loss)\nI0608 01:09:59.696744 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132969 
(* 1 = 0.132969 loss)\nI0608 01:09:59.696750 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.432308 (* 1 = 0.432308 loss)\nI0608 01:09:59.696756 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0884292 (* 1 = 0.0884292 loss)\nI0608 01:09:59.696761 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.441516 (* 1 = 0.441516 loss)\nI0608 01:09:59.696768 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.088425 (* 1 = 0.088425 loss)\nI0608 01:09:59.696774 13573 sgd_solver.cpp:106] Iteration 23580, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:10:12.397893 13573 solver.cpp:229] Iteration 23600, loss = 1.10803\nI0608 01:10:12.397961 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.138707 (* 1 = 0.138707 loss)\nI0608 01:10:12.397971 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.15288 (* 1 = 0.15288 loss)\nI0608 01:10:12.397977 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.139707 (* 1 = 0.139707 loss)\nI0608 01:10:12.397984 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0197583 (* 1 = 0.0197583 loss)\nI0608 01:10:12.397990 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.147513 (* 1 = 0.147513 loss)\nI0608 01:10:12.397996 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0197608 (* 1 = 0.0197608 loss)\nI0608 01:10:12.398005 13573 sgd_solver.cpp:106] Iteration 23600, lr = 0.001\nI0608 01:10:25.287148 13573 solver.cpp:229] Iteration 23620, loss = 1.16888\nI0608 01:10:25.287286 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.193702 (* 1 = 0.193702 loss)\nI0608 01:10:25.287295 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.214391 (* 1 = 0.214391 loss)\nI0608 01:10:25.287302 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.337837 (* 1 = 0.337837 loss)\nI0608 01:10:25.287307 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0430941 (* 1 = 0.0430941 loss)\nI0608 
01:10:25.287313 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.34105 (* 1 = 0.34105 loss)\nI0608 01:10:25.287319 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0431139 (* 1 = 0.0431139 loss)\nI0608 01:10:25.287327 13573 sgd_solver.cpp:106] Iteration 23620, lr = 0.001\nI0608 01:10:38.303932 13573 solver.cpp:229] Iteration 23640, loss = 1.33247\nI0608 01:10:38.304025 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00234059 (* 1 = 0.00234059 loss)\nI0608 01:10:38.304036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00103105 (* 1 = 0.00103105 loss)\nI0608 01:10:38.304042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.51 (* 1 = 0.51 loss)\nI0608 01:10:38.304049 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.161948 (* 1 = 0.161948 loss)\nI0608 01:10:38.304054 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.451032 (* 1 = 0.451032 loss)\nI0608 01:10:38.304059 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.161919 (* 1 = 0.161919 loss)\nI0608 01:10:38.304067 13573 sgd_solver.cpp:106] Iteration 23640, lr = 0.001\nI0608 01:10:51.181196 13573 solver.cpp:229] Iteration 23660, loss = 0.780913\nI0608 01:10:51.181267 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127009 (* 1 = 0.127009 loss)\nI0608 01:10:51.181279 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0755341 (* 1 = 0.0755341 loss)\nI0608 01:10:51.181285 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.259694 (* 1 = 0.259694 loss)\nI0608 01:10:51.181291 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0286418 (* 1 = 0.0286418 loss)\nI0608 01:10:51.181298 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.290704 (* 1 = 0.290704 loss)\nI0608 01:10:51.181303 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.028643 (* 1 = 0.028643 loss)\nI0608 01:10:51.181310 13573 sgd_solver.cpp:106] Iteration 23660, 
lr = 0.001\nI0608 01:11:04.103247 13573 solver.cpp:229] Iteration 23680, loss = 0.401685\nI0608 01:11:04.103312 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0491664 (* 1 = 0.0491664 loss)\nI0608 01:11:04.103322 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0882688 (* 1 = 0.0882688 loss)\nI0608 01:11:04.103327 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0762629 (* 1 = 0.0762629 loss)\nI0608 01:11:04.103333 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0122592 (* 1 = 0.0122592 loss)\nI0608 01:11:04.103339 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0773765 (* 1 = 0.0773765 loss)\nI0608 01:11:04.103344 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0122593 (* 1 = 0.0122593 loss)\nI0608 01:11:04.103353 13573 sgd_solver.cpp:106] Iteration 23680, lr = 0.001\nI0608 01:11:17.021956 13573 solver.cpp:229] Iteration 23700, loss = 0.696676\nI0608 01:11:17.022088 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116365 (* 1 = 0.116365 loss)\nI0608 01:11:17.022096 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.188396 (* 1 = 0.188396 loss)\nI0608 01:11:17.022102 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0862628 (* 1 = 0.0862628 loss)\nI0608 01:11:17.022109 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00391159 (* 1 = 0.00391159 loss)\nI0608 01:11:17.022114 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0922509 (* 1 = 0.0922509 loss)\nI0608 01:11:17.022120 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00390935 (* 1 = 0.00390935 loss)\nI0608 01:11:17.022127 13573 sgd_solver.cpp:106] Iteration 23700, lr = 0.001\nI0608 01:11:30.050673 13573 solver.cpp:229] Iteration 23720, loss = 0.920371\nI0608 01:11:30.050740 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.129193 (* 1 = 0.129193 loss)\nI0608 01:11:30.050750 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.10953 (* 1 = 0.10953 loss)\nI0608 01:11:30.050756 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0907889 (* 1 = 0.0907889 loss)\nI0608 01:11:30.050762 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0168623 (* 1 = 0.0168623 loss)\nI0608 01:11:30.050767 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0855701 (* 1 = 0.0855701 loss)\nI0608 01:11:30.050787 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0168461 (* 1 = 0.0168461 loss)\nI0608 01:11:30.050796 13573 sgd_solver.cpp:106] Iteration 23720, lr = 0.001\nI0608 01:11:43.028874 13573 solver.cpp:229] Iteration 23740, loss = 0.629427\nI0608 01:11:43.028954 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.144344 (* 1 = 0.144344 loss)\nI0608 01:11:43.028964 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110523 (* 1 = 0.110523 loss)\nI0608 01:11:43.028970 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0970133 (* 1 = 0.0970133 loss)\nI0608 01:11:43.028976 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0210222 (* 1 = 0.0210222 loss)\nI0608 01:11:43.028982 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.091012 (* 1 = 0.091012 loss)\nI0608 01:11:43.028988 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0210137 (* 1 = 0.0210137 loss)\nI0608 01:11:43.028996 13573 sgd_solver.cpp:106] Iteration 23740, lr = 0.001\nI0608 01:11:55.987716 13573 solver.cpp:229] Iteration 23760, loss = 0.899008\nI0608 01:11:55.987787 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.162853 (* 1 = 0.162853 loss)\nI0608 01:11:55.987797 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0962296 (* 1 = 0.0962296 loss)\nI0608 01:11:55.987803 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.122576 (* 1 = 0.122576 loss)\nI0608 01:11:55.987809 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0251584 (* 1 = 0.0251584 
loss)\nI0608 01:11:55.987814 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.126813 (* 1 = 0.126813 loss)\nI0608 01:11:55.987820 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0251507 (* 1 = 0.0251507 loss)\nI0608 01:11:55.987828 13573 sgd_solver.cpp:106] Iteration 23760, lr = 0.001\nI0608 01:12:08.916362 13573 solver.cpp:229] Iteration 23780, loss = 0.677517\nI0608 01:12:08.916434 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0514829 (* 1 = 0.0514829 loss)\nI0608 01:12:08.916443 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0687386 (* 1 = 0.0687386 loss)\nI0608 01:12:08.916450 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.099582 (* 1 = 0.099582 loss)\nI0608 01:12:08.916455 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.042135 (* 1 = 0.042135 loss)\nI0608 01:12:08.916460 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100319 (* 1 = 0.100319 loss)\nI0608 01:12:08.916466 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0421575 (* 1 = 0.0421575 loss)\nI0608 01:12:08.916473 13573 sgd_solver.cpp:106] Iteration 23780, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:12:21.876433 13573 solver.cpp:229] Iteration 23800, loss = 0.494524\nI0608 01:12:21.876500 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00184537 (* 1 = 0.00184537 loss)\nI0608 01:12:21.876512 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000821127 (* 1 = 0.000821127 loss)\nI0608 01:12:21.876518 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212024 (* 1 = 0.212024 loss)\nI0608 01:12:21.876523 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0185319 (* 1 = 0.0185319 loss)\nI0608 01:12:21.876529 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.212173 (* 1 = 0.212173 loss)\nI0608 01:12:21.876535 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0185379 (* 1 = 0.0185379 loss)\nI0608 
01:12:21.876541 13573 sgd_solver.cpp:106] Iteration 23800, lr = 0.001\nI0608 01:12:34.704663 13573 solver.cpp:229] Iteration 23820, loss = 0.555128\nI0608 01:12:34.704726 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0150085 (* 1 = 0.0150085 loss)\nI0608 01:12:34.704735 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00467465 (* 1 = 0.00467465 loss)\nI0608 01:12:34.704741 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.159283 (* 1 = 0.159283 loss)\nI0608 01:12:34.704747 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0073323 (* 1 = 0.0073323 loss)\nI0608 01:12:34.704754 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.220749 (* 1 = 0.220749 loss)\nI0608 01:12:34.704759 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00733235 (* 1 = 0.00733235 loss)\nI0608 01:12:34.704767 13573 sgd_solver.cpp:106] Iteration 23820, lr = 0.001\nI0608 01:12:47.375774 13573 solver.cpp:229] Iteration 23840, loss = 0.595591\nI0608 01:12:47.375847 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0356395 (* 1 = 0.0356395 loss)\nI0608 01:12:47.375856 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104426 (* 1 = 0.104426 loss)\nI0608 01:12:47.375864 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207086 (* 1 = 0.207086 loss)\nI0608 01:12:47.375869 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0151767 (* 1 = 0.0151767 loss)\nI0608 01:12:47.375875 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.207943 (* 1 = 0.207943 loss)\nI0608 01:12:47.375881 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0151921 (* 1 = 0.0151921 loss)\nI0608 01:12:47.375890 13573 sgd_solver.cpp:106] Iteration 23840, lr = 0.001\nI0608 01:13:00.264215 13573 solver.cpp:229] Iteration 23860, loss = 0.671041\nI0608 01:13:00.264292 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.189741 (* 1 = 0.189741 loss)\nI0608 
01:13:00.264305 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202718 (* 1 = 0.202718 loss)\nI0608 01:13:00.264315 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0935309 (* 1 = 0.0935309 loss)\nI0608 01:13:00.264325 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0166055 (* 1 = 0.0166055 loss)\nI0608 01:13:00.264334 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0856272 (* 1 = 0.0856272 loss)\nI0608 01:13:00.264344 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0166046 (* 1 = 0.0166046 loss)\nI0608 01:13:00.264353 13573 sgd_solver.cpp:106] Iteration 23860, lr = 0.001\nI0608 01:13:13.066745 13573 solver.cpp:229] Iteration 23880, loss = 0.776128\nI0608 01:13:13.066815 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0970251 (* 1 = 0.0970251 loss)\nI0608 01:13:13.066825 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.07781 (* 1 = 0.07781 loss)\nI0608 01:13:13.066833 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0731919 (* 1 = 0.0731919 loss)\nI0608 01:13:13.066838 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00984667 (* 1 = 0.00984667 loss)\nI0608 01:13:13.066844 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0801978 (* 1 = 0.0801978 loss)\nI0608 01:13:13.066850 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00984507 (* 1 = 0.00984507 loss)\nI0608 01:13:13.066859 13573 sgd_solver.cpp:106] Iteration 23880, lr = 0.001\nI0608 01:13:26.212494 13573 solver.cpp:229] Iteration 23900, loss = 0.474393\nI0608 01:13:26.212568 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.103577 (* 1 = 0.103577 loss)\nI0608 01:13:26.212577 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.111365 (* 1 = 0.111365 loss)\nI0608 01:13:26.212584 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113205 (* 1 = 0.113205 loss)\nI0608 01:13:26.212589 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.0106768 (* 1 = 0.0106768 loss)\nI0608 01:13:26.212595 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.113695 (* 1 = 0.113695 loss)\nI0608 01:13:26.212600 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0106804 (* 1 = 0.0106804 loss)\nI0608 01:13:26.212608 13573 sgd_solver.cpp:106] Iteration 23900, lr = 0.001\nI0608 01:13:39.179662 13573 solver.cpp:229] Iteration 23920, loss = 0.985931\nI0608 01:13:39.179728 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0760811 (* 1 = 0.0760811 loss)\nI0608 01:13:39.179738 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0900445 (* 1 = 0.0900445 loss)\nI0608 01:13:39.179744 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.125422 (* 1 = 0.125422 loss)\nI0608 01:13:39.179750 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.163478 (* 1 = 0.163478 loss)\nI0608 01:13:39.179755 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.118785 (* 1 = 0.118785 loss)\nI0608 01:13:39.179761 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.163482 (* 1 = 0.163482 loss)\nI0608 01:13:39.179767 13573 sgd_solver.cpp:106] Iteration 23920, lr = 0.001\nI0608 01:13:52.217242 13573 solver.cpp:229] Iteration 23940, loss = 0.619363\nI0608 01:13:52.217309 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.244694 (* 1 = 0.244694 loss)\nI0608 01:13:52.217319 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.232454 (* 1 = 0.232454 loss)\nI0608 01:13:52.217325 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.140374 (* 1 = 0.140374 loss)\nI0608 01:13:52.217331 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.034504 (* 1 = 0.034504 loss)\nI0608 01:13:52.217337 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.143035 (* 1 = 0.143035 loss)\nI0608 01:13:52.217344 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0345004 (* 1 
= 0.0345004 loss)\nI0608 01:13:52.217350 13573 sgd_solver.cpp:106] Iteration 23940, lr = 0.001\nI0608 01:14:05.083847 13573 solver.cpp:229] Iteration 23960, loss = 0.616634\nI0608 01:14:05.083914 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0998556 (* 1 = 0.0998556 loss)\nI0608 01:14:05.083923 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0274168 (* 1 = 0.0274168 loss)\nI0608 01:14:05.083930 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0929317 (* 1 = 0.0929317 loss)\nI0608 01:14:05.083935 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0260406 (* 1 = 0.0260406 loss)\nI0608 01:14:05.083940 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.09602 (* 1 = 0.09602 loss)\nI0608 01:14:05.083946 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0260463 (* 1 = 0.0260463 loss)\nI0608 01:14:05.083953 13573 sgd_solver.cpp:106] Iteration 23960, lr = 0.001\nI0608 01:14:18.196786 13573 solver.cpp:229] Iteration 23980, loss = 2.52147\nI0608 01:14:18.196858 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.251825 (* 1 = 0.251825 loss)\nI0608 01:14:18.196867 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.123876 (* 1 = 0.123876 loss)\nI0608 01:14:18.196874 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.232253 (* 1 = 0.232253 loss)\nI0608 01:14:18.196880 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0697695 (* 1 = 0.0697695 loss)\nI0608 01:14:18.196887 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226839 (* 1 = 0.226839 loss)\nI0608 01:14:18.196892 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0697652 (* 1 = 0.0697652 loss)\nI0608 01:14:18.196899 13573 sgd_solver.cpp:106] Iteration 23980, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:14:30.977617 13573 solver.cpp:229] Iteration 24000, loss = 1.18081\nI0608 01:14:30.977689 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.0614714 (* 1 = 0.0614714 loss)\nI0608 01:14:30.977699 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.161953 (* 1 = 0.161953 loss)\nI0608 01:14:30.977705 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.217815 (* 1 = 0.217815 loss)\nI0608 01:14:30.977710 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0354283 (* 1 = 0.0354283 loss)\nI0608 01:14:30.977716 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.19962 (* 1 = 0.19962 loss)\nI0608 01:14:30.977723 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0354233 (* 1 = 0.0354233 loss)\nI0608 01:14:30.977730 13573 sgd_solver.cpp:106] Iteration 24000, lr = 0.001\nI0608 01:14:43.921531 13573 solver.cpp:229] Iteration 24020, loss = 0.6021\nI0608 01:14:43.921627 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.163693 (* 1 = 0.163693 loss)\nI0608 01:14:43.921638 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.252356 (* 1 = 0.252356 loss)\nI0608 01:14:43.921643 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0972638 (* 1 = 0.0972638 loss)\nI0608 01:14:43.921649 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0242332 (* 1 = 0.0242332 loss)\nI0608 01:14:43.921655 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0910812 (* 1 = 0.0910812 loss)\nI0608 01:14:43.921661 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0242339 (* 1 = 0.0242339 loss)\nI0608 01:14:43.921669 13573 sgd_solver.cpp:106] Iteration 24020, lr = 0.001\nI0608 01:14:56.794371 13573 solver.cpp:229] Iteration 24040, loss = 0.726569\nI0608 01:14:56.794438 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0777963 (* 1 = 0.0777963 loss)\nI0608 01:14:56.794448 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.116787 (* 1 = 0.116787 loss)\nI0608 01:14:56.794453 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.245175 (* 1 = 0.245175 loss)\nI0608 
01:14:56.794459 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0202363 (* 1 = 0.0202363 loss)\nI0608 01:14:56.794466 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.243922 (* 1 = 0.243922 loss)\nI0608 01:14:56.794471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0202195 (* 1 = 0.0202195 loss)\nI0608 01:14:56.794477 13573 sgd_solver.cpp:106] Iteration 24040, lr = 0.001\nI0608 01:15:09.635349 13573 solver.cpp:229] Iteration 24060, loss = 0.837508\nI0608 01:15:09.635421 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.367114 (* 1 = 0.367114 loss)\nI0608 01:15:09.635429 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.339364 (* 1 = 0.339364 loss)\nI0608 01:15:09.635435 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137136 (* 1 = 0.137136 loss)\nI0608 01:15:09.635442 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0540207 (* 1 = 0.0540207 loss)\nI0608 01:15:09.635447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.132525 (* 1 = 0.132525 loss)\nI0608 01:15:09.635452 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0540144 (* 1 = 0.0540144 loss)\nI0608 01:15:09.635459 13573 sgd_solver.cpp:106] Iteration 24060, lr = 0.001\nI0608 01:15:22.481343 13573 solver.cpp:229] Iteration 24080, loss = 0.520661\nI0608 01:15:22.481408 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0695676 (* 1 = 0.0695676 loss)\nI0608 01:15:22.481418 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0592269 (* 1 = 0.0592269 loss)\nI0608 01:15:22.481425 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.110987 (* 1 = 0.110987 loss)\nI0608 01:15:22.481431 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.113906 (* 1 = 0.113906 loss)\nI0608 01:15:22.481436 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109856 (* 1 = 0.109856 loss)\nI0608 01:15:22.481441 13573 solver.cpp:245]     Train 
net output #5: rpn_loss_bbox = 0.113907 (* 1 = 0.113907 loss)\nI0608 01:15:22.481448 13573 sgd_solver.cpp:106] Iteration 24080, lr = 0.001\nI0608 01:15:35.432983 13573 solver.cpp:229] Iteration 24100, loss = 0.459577\nI0608 01:15:35.433058 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121765 (* 1 = 0.121765 loss)\nI0608 01:15:35.433073 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0556374 (* 1 = 0.0556374 loss)\nI0608 01:15:35.433082 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0838507 (* 1 = 0.0838507 loss)\nI0608 01:15:35.433091 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0202676 (* 1 = 0.0202676 loss)\nI0608 01:15:35.433099 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0920733 (* 1 = 0.0920733 loss)\nI0608 01:15:35.433109 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0202674 (* 1 = 0.0202674 loss)\nI0608 01:15:35.433120 13573 sgd_solver.cpp:106] Iteration 24100, lr = 0.001\nI0608 01:15:48.383767 13573 solver.cpp:229] Iteration 24120, loss = 1.12417\nI0608 01:15:48.383833 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.471794 (* 1 = 0.471794 loss)\nI0608 01:15:48.383844 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.297053 (* 1 = 0.297053 loss)\nI0608 01:15:48.383851 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.296764 (* 1 = 0.296764 loss)\nI0608 01:15:48.383857 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0152584 (* 1 = 0.0152584 loss)\nI0608 01:15:48.383862 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.290052 (* 1 = 0.290052 loss)\nI0608 01:15:48.383867 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0152558 (* 1 = 0.0152558 loss)\nI0608 01:15:48.383874 13573 sgd_solver.cpp:106] Iteration 24120, lr = 0.001\nI0608 01:16:01.144841 13573 solver.cpp:229] Iteration 24140, loss = 0.665786\nI0608 01:16:01.144912 13573 solver.cpp:245]     Train net output 
[... Caffe training log truncated: iterations 24140-25100 at lr = 0.001, total loss fluctuating between roughly 0.33 and 2.35, speed ~0.645 s/iter ...]
loss)\nI0608 01:26:18.584216 13573 sgd_solver.cpp:106] Iteration 25100, lr = 0.001\nI0608 01:26:31.330410 13573 solver.cpp:229] Iteration 25120, loss = 0.854923\nI0608 01:26:31.330476 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0943974 (* 1 = 0.0943974 loss)\nI0608 01:26:31.330485 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.264986 (* 1 = 0.264986 loss)\nI0608 01:26:31.330492 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.287283 (* 1 = 0.287283 loss)\nI0608 01:26:31.330498 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0114469 (* 1 = 0.0114469 loss)\nI0608 01:26:31.330503 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.280353 (* 1 = 0.280353 loss)\nI0608 01:26:31.330510 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0114463 (* 1 = 0.0114463 loss)\nI0608 01:26:31.330518 13573 sgd_solver.cpp:106] Iteration 25120, lr = 0.001\nI0608 01:26:44.270377 13573 solver.cpp:229] Iteration 25140, loss = 1.37232\nI0608 01:26:44.270467 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0009479 (* 1 = 0.0009479 loss)\nI0608 01:26:44.270478 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00284436 (* 1 = 0.00284436 loss)\nI0608 01:26:44.270483 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.582189 (* 1 = 0.582189 loss)\nI0608 01:26:44.270490 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.118249 (* 1 = 0.118249 loss)\nI0608 01:26:44.270495 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.542137 (* 1 = 0.542137 loss)\nI0608 01:26:44.270501 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.118232 (* 1 = 0.118232 loss)\nI0608 01:26:44.270509 13573 sgd_solver.cpp:106] Iteration 25140, lr = 0.001\nI0608 01:26:57.176131 13573 solver.cpp:229] Iteration 25160, loss = 1.21041\nI0608 01:26:57.176213 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.258451 (* 1 = 0.258451 loss)\nI0608 
01:26:57.176223 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.155825 (* 1 = 0.155825 loss)\nI0608 01:26:57.176229 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.444776 (* 1 = 0.444776 loss)\nI0608 01:26:57.176235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0358931 (* 1 = 0.0358931 loss)\nI0608 01:26:57.176241 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.445224 (* 1 = 0.445224 loss)\nI0608 01:26:57.176246 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0358956 (* 1 = 0.0358956 loss)\nI0608 01:26:57.176254 13573 sgd_solver.cpp:106] Iteration 25160, lr = 0.001\nI0608 01:27:10.183419 13573 solver.cpp:229] Iteration 25180, loss = 0.426464\nI0608 01:27:10.183480 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.123734 (* 1 = 0.123734 loss)\nI0608 01:27:10.183490 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0367691 (* 1 = 0.0367691 loss)\nI0608 01:27:10.183496 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0913792 (* 1 = 0.0913792 loss)\nI0608 01:27:10.183502 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0126809 (* 1 = 0.0126809 loss)\nI0608 01:27:10.183508 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0885457 (* 1 = 0.0885457 loss)\nI0608 01:27:10.183514 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0126776 (* 1 = 0.0126776 loss)\nI0608 01:27:10.183522 13573 sgd_solver.cpp:106] Iteration 25180, lr = 0.001\nspeed: 0.644s / iter\nI0608 01:27:23.048543 13573 solver.cpp:229] Iteration 25200, loss = 0.753795\nI0608 01:27:23.048610 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121755 (* 1 = 0.121755 loss)\nI0608 01:27:23.048620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.044201 (* 1 = 0.044201 loss)\nI0608 01:27:23.048626 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0949697 (* 1 = 0.0949697 loss)\nI0608 01:27:23.048632 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105916 (* 1 = 0.0105916 loss)\nI0608 01:27:23.048638 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0926302 (* 1 = 0.0926302 loss)\nI0608 01:27:23.048645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105923 (* 1 = 0.0105923 loss)\nI0608 01:27:23.048651 13573 sgd_solver.cpp:106] Iteration 25200, lr = 0.001\nI0608 01:27:35.944134 13573 solver.cpp:229] Iteration 25220, loss = 0.594501\nI0608 01:27:35.944224 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0412185 (* 1 = 0.0412185 loss)\nI0608 01:27:35.944236 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0932242 (* 1 = 0.0932242 loss)\nI0608 01:27:35.944242 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0879531 (* 1 = 0.0879531 loss)\nI0608 01:27:35.944247 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182487 (* 1 = 0.0182487 loss)\nI0608 01:27:35.944252 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0825723 (* 1 = 0.0825723 loss)\nI0608 01:27:35.944258 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182441 (* 1 = 0.0182441 loss)\nI0608 01:27:35.944264 13573 sgd_solver.cpp:106] Iteration 25220, lr = 0.001\nI0608 01:27:48.887060 13573 solver.cpp:229] Iteration 25240, loss = 0.721001\nI0608 01:27:48.887131 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.175841 (* 1 = 0.175841 loss)\nI0608 01:27:48.887140 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.148936 (* 1 = 0.148936 loss)\nI0608 01:27:48.887146 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.109623 (* 1 = 0.109623 loss)\nI0608 01:27:48.887152 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0496596 (* 1 = 0.0496596 loss)\nI0608 01:27:48.887158 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.1145 (* 1 = 0.1145 loss)\nI0608 01:27:48.887164 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.0496496 (* 1 = 0.0496496 loss)\nI0608 01:27:48.887171 13573 sgd_solver.cpp:106] Iteration 25240, lr = 0.001\nI0608 01:28:01.844252 13573 solver.cpp:229] Iteration 25260, loss = 0.89358\nI0608 01:28:01.844321 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.165354 (* 1 = 0.165354 loss)\nI0608 01:28:01.844331 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.171983 (* 1 = 0.171983 loss)\nI0608 01:28:01.844338 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.477207 (* 1 = 0.477207 loss)\nI0608 01:28:01.844344 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0611398 (* 1 = 0.0611398 loss)\nI0608 01:28:01.844350 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.460903 (* 1 = 0.460903 loss)\nI0608 01:28:01.844357 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0610591 (* 1 = 0.0610591 loss)\nI0608 01:28:01.844364 13573 sgd_solver.cpp:106] Iteration 25260, lr = 0.001\nI0608 01:28:14.827886 13573 solver.cpp:229] Iteration 25280, loss = 0.787093\nI0608 01:28:14.827960 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0802425 (* 1 = 0.0802425 loss)\nI0608 01:28:14.827970 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.073443 (* 1 = 0.073443 loss)\nI0608 01:28:14.827975 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118971 (* 1 = 0.118971 loss)\nI0608 01:28:14.827981 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.137339 (* 1 = 0.137339 loss)\nI0608 01:28:14.827986 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.117271 (* 1 = 0.117271 loss)\nI0608 01:28:14.827992 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.137365 (* 1 = 0.137365 loss)\nI0608 01:28:14.827999 13573 sgd_solver.cpp:106] Iteration 25280, lr = 0.001\nI0608 01:28:27.741858 13573 solver.cpp:229] Iteration 25300, loss = 0.624402\nI0608 01:28:27.741925 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.310169 (* 1 = 0.310169 loss)\nI0608 01:28:27.741935 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.088958 (* 1 = 0.088958 loss)\nI0608 01:28:27.741940 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103991 (* 1 = 0.103991 loss)\nI0608 01:28:27.741946 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0287082 (* 1 = 0.0287082 loss)\nI0608 01:28:27.741952 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0904422 (* 1 = 0.0904422 loss)\nI0608 01:28:27.741957 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0287051 (* 1 = 0.0287051 loss)\nI0608 01:28:27.741964 13573 sgd_solver.cpp:106] Iteration 25300, lr = 0.001\nI0608 01:28:40.598316 13573 solver.cpp:229] Iteration 25320, loss = 0.791906\nI0608 01:28:40.598443 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00278831 (* 1 = 0.00278831 loss)\nI0608 01:28:40.598453 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00408853 (* 1 = 0.00408853 loss)\nI0608 01:28:40.598459 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1437 (* 1 = 0.1437 loss)\nI0608 01:28:40.598465 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00330907 (* 1 = 0.00330907 loss)\nI0608 01:28:40.598470 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122412 (* 1 = 0.122412 loss)\nI0608 01:28:40.598476 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00330999 (* 1 = 0.00330999 loss)\nI0608 01:28:40.598484 13573 sgd_solver.cpp:106] Iteration 25320, lr = 0.001\nI0608 01:28:53.491715 13573 solver.cpp:229] Iteration 25340, loss = 2.10176\nI0608 01:28:53.491777 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.196451 (* 1 = 0.196451 loss)\nI0608 01:28:53.491788 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.181122 (* 1 = 0.181122 loss)\nI0608 01:28:53.491794 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.373459 (* 1 = 0.373459 loss)\nI0608 
01:28:53.491801 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0611947 (* 1 = 0.0611947 loss)\nI0608 01:28:53.491806 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.364069 (* 1 = 0.364069 loss)\nI0608 01:28:53.491812 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0611683 (* 1 = 0.0611683 loss)\nI0608 01:28:53.491821 13573 sgd_solver.cpp:106] Iteration 25340, lr = 0.001\nI0608 01:29:06.374009 13573 solver.cpp:229] Iteration 25360, loss = 0.996094\nI0608 01:29:06.374083 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.175323 (* 1 = 0.175323 loss)\nI0608 01:29:06.374094 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.158617 (* 1 = 0.158617 loss)\nI0608 01:29:06.374100 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.20443 (* 1 = 0.20443 loss)\nI0608 01:29:06.374107 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0181756 (* 1 = 0.0181756 loss)\nI0608 01:29:06.374114 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216884 (* 1 = 0.216884 loss)\nI0608 01:29:06.374119 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0181828 (* 1 = 0.0181828 loss)\nI0608 01:29:06.374126 13573 sgd_solver.cpp:106] Iteration 25360, lr = 0.001\nI0608 01:29:19.284194 13573 solver.cpp:229] Iteration 25380, loss = 0.740236\nI0608 01:29:19.284260 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.193437 (* 1 = 0.193437 loss)\nI0608 01:29:19.284270 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10664 (* 1 = 0.10664 loss)\nI0608 01:29:19.284276 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.196761 (* 1 = 0.196761 loss)\nI0608 01:29:19.284281 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0610474 (* 1 = 0.0610474 loss)\nI0608 01:29:19.284287 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.19447 (* 1 = 0.19447 loss)\nI0608 01:29:19.284293 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0610354 (* 1 = 0.0610354 loss)\nI0608 01:29:19.284301 13573 sgd_solver.cpp:106] Iteration 25380, lr = 0.001\nspeed: 0.644s / iter\nI0608 01:29:32.104310 13573 solver.cpp:229] Iteration 25400, loss = 0.367626\nI0608 01:29:32.104377 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00206882 (* 1 = 0.00206882 loss)\nI0608 01:29:32.104387 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000775188 (* 1 = 0.000775188 loss)\nI0608 01:29:32.104393 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.199462 (* 1 = 0.199462 loss)\nI0608 01:29:32.104398 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0184695 (* 1 = 0.0184695 loss)\nI0608 01:29:32.104403 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.201151 (* 1 = 0.201151 loss)\nI0608 01:29:32.104409 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0184673 (* 1 = 0.0184673 loss)\nI0608 01:29:32.104416 13573 sgd_solver.cpp:106] Iteration 25400, lr = 0.001\nI0608 01:29:45.081451 13573 solver.cpp:229] Iteration 25420, loss = 1.77641\nI0608 01:29:45.081524 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.012375 (* 1 = 0.012375 loss)\nI0608 01:29:45.081534 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00501799 (* 1 = 0.00501799 loss)\nI0608 01:29:45.081542 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.932997 (* 1 = 0.932997 loss)\nI0608 01:29:45.081547 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.333051 (* 1 = 0.333051 loss)\nI0608 01:29:45.081552 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.899932 (* 1 = 0.899932 loss)\nI0608 01:29:45.081558 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.333048 (* 1 = 0.333048 loss)\nI0608 01:29:45.081567 13573 sgd_solver.cpp:106] Iteration 25420, lr = 0.001\nI0608 01:29:57.877688 13573 solver.cpp:229] Iteration 25440, loss = 0.647405\nI0608 01:29:57.877751 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.106208 (* 1 = 0.106208 loss)\nI0608 01:29:57.877760 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.121471 (* 1 = 0.121471 loss)\nI0608 01:29:57.877768 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100991 (* 1 = 0.100991 loss)\nI0608 01:29:57.877774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0193337 (* 1 = 0.0193337 loss)\nI0608 01:29:57.877780 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.104034 (* 1 = 0.104034 loss)\nI0608 01:29:57.877786 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0193301 (* 1 = 0.0193301 loss)\nI0608 01:29:57.877794 13573 sgd_solver.cpp:106] Iteration 25440, lr = 0.001\nI0608 01:30:10.939414 13573 solver.cpp:229] Iteration 25460, loss = 0.954833\nI0608 01:30:10.939486 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000831519 (* 1 = 0.000831519 loss)\nI0608 01:30:10.939496 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00139556 (* 1 = 0.00139556 loss)\nI0608 01:30:10.939503 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302621 (* 1 = 0.302621 loss)\nI0608 01:30:10.939508 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0504194 (* 1 = 0.0504194 loss)\nI0608 01:30:10.939514 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.27359 (* 1 = 0.27359 loss)\nI0608 01:30:10.939520 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0504433 (* 1 = 0.0504433 loss)\nI0608 01:30:10.939527 13573 sgd_solver.cpp:106] Iteration 25460, lr = 0.001\nI0608 01:30:23.885365 13573 solver.cpp:229] Iteration 25480, loss = 0.480171\nI0608 01:30:23.885433 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.134212 (* 1 = 0.134212 loss)\nI0608 01:30:23.885445 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.126666 (* 1 = 0.126666 loss)\nI0608 01:30:23.885452 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.101244 (* 1 = 0.101244 loss)\nI0608 01:30:23.885457 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0302149 (* 1 = 0.0302149 loss)\nI0608 01:30:23.885463 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.106198 (* 1 = 0.106198 loss)\nI0608 01:30:23.885469 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0302114 (* 1 = 0.0302114 loss)\nI0608 01:30:23.885476 13573 sgd_solver.cpp:106] Iteration 25480, lr = 0.001\nI0608 01:30:36.858981 13573 solver.cpp:229] Iteration 25500, loss = 0.480979\nI0608 01:30:36.859046 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0126569 (* 1 = 0.0126569 loss)\nI0608 01:30:36.859055 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.04149 (* 1 = 0.04149 loss)\nI0608 01:30:36.859062 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.155805 (* 1 = 0.155805 loss)\nI0608 01:30:36.859068 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00228551 (* 1 = 0.00228551 loss)\nI0608 01:30:36.859073 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.160461 (* 1 = 0.160461 loss)\nI0608 01:30:36.859081 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00228484 (* 1 = 0.00228484 loss)\nI0608 01:30:36.859087 13573 sgd_solver.cpp:106] Iteration 25500, lr = 0.001\nI0608 01:30:49.738116 13573 solver.cpp:229] Iteration 25520, loss = 0.602786\nI0608 01:30:49.738183 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.118134 (* 1 = 0.118134 loss)\nI0608 01:30:49.738193 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.061805 (* 1 = 0.061805 loss)\nI0608 01:30:49.738198 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.132763 (* 1 = 0.132763 loss)\nI0608 01:30:49.738204 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0319699 (* 1 = 0.0319699 loss)\nI0608 01:30:49.738209 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.130012 (* 1 = 0.130012 
loss)\nI0608 01:30:49.738215 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0319619 (* 1 = 0.0319619 loss)\nI0608 01:30:49.738222 13573 sgd_solver.cpp:106] Iteration 25520, lr = 0.001\nI0608 01:31:02.663458 13573 solver.cpp:229] Iteration 25540, loss = 2.00487\nI0608 01:31:02.663529 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.486429 (* 1 = 0.486429 loss)\nI0608 01:31:02.663539 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.182187 (* 1 = 0.182187 loss)\nI0608 01:31:02.663545 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.408663 (* 1 = 0.408663 loss)\nI0608 01:31:02.663552 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.087903 (* 1 = 0.087903 loss)\nI0608 01:31:02.663558 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.407751 (* 1 = 0.407751 loss)\nI0608 01:31:02.663563 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0879489 (* 1 = 0.0879489 loss)\nI0608 01:31:02.663570 13573 sgd_solver.cpp:106] Iteration 25540, lr = 0.001\nI0608 01:31:15.636719 13573 solver.cpp:229] Iteration 25560, loss = 0.532814\nI0608 01:31:15.636783 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00418522 (* 1 = 0.00418522 loss)\nI0608 01:31:15.636793 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00948685 (* 1 = 0.00948685 loss)\nI0608 01:31:15.636800 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.291365 (* 1 = 0.291365 loss)\nI0608 01:31:15.636806 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0291734 (* 1 = 0.0291734 loss)\nI0608 01:31:15.636811 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.237648 (* 1 = 0.237648 loss)\nI0608 01:31:15.636817 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0291695 (* 1 = 0.0291695 loss)\nI0608 01:31:15.636824 13573 sgd_solver.cpp:106] Iteration 25560, lr = 0.001\nI0608 01:31:28.625138 13573 solver.cpp:229] Iteration 25580, loss = 
0.860506\nI0608 01:31:28.625200 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115049 (* 1 = 0.115049 loss)\nI0608 01:31:28.625211 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0513232 (* 1 = 0.0513232 loss)\nI0608 01:31:28.625216 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.188265 (* 1 = 0.188265 loss)\nI0608 01:31:28.625221 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0731289 (* 1 = 0.0731289 loss)\nI0608 01:31:28.625227 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.189425 (* 1 = 0.189425 loss)\nI0608 01:31:28.625233 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.073145 (* 1 = 0.073145 loss)\nI0608 01:31:28.625241 13573 sgd_solver.cpp:106] Iteration 25580, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:31:41.428737 13573 solver.cpp:229] Iteration 25600, loss = 1.14453\nI0608 01:31:41.428810 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0757422 (* 1 = 0.0757422 loss)\nI0608 01:31:41.428819 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132954 (* 1 = 0.132954 loss)\nI0608 01:31:41.428825 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.541225 (* 1 = 0.541225 loss)\nI0608 01:31:41.428831 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.278175 (* 1 = 0.278175 loss)\nI0608 01:31:41.428836 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.560877 (* 1 = 0.560877 loss)\nI0608 01:31:41.428843 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.278144 (* 1 = 0.278144 loss)\nI0608 01:31:41.428849 13573 sgd_solver.cpp:106] Iteration 25600, lr = 0.001\nI0608 01:31:54.325510 13573 solver.cpp:229] Iteration 25620, loss = 0.808917\nI0608 01:31:54.325579 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0965796 (* 1 = 0.0965796 loss)\nI0608 01:31:54.325589 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.062724 (* 1 = 0.062724 loss)\nI0608 01:31:54.325595 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.268305 (* 1 = 0.268305 loss)\nI0608 01:31:54.325601 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.123791 (* 1 = 0.123791 loss)\nI0608 01:31:54.325608 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.265645 (* 1 = 0.265645 loss)\nI0608 01:31:54.325613 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.123795 (* 1 = 0.123795 loss)\nI0608 01:31:54.325620 13573 sgd_solver.cpp:106] Iteration 25620, lr = 0.001\nI0608 01:32:07.122568 13573 solver.cpp:229] Iteration 25640, loss = 0.569446\nI0608 01:32:07.122634 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0946303 (* 1 = 0.0946303 loss)\nI0608 01:32:07.122643 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.164979 (* 1 = 0.164979 loss)\nI0608 01:32:07.122649 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.23635 (* 1 = 0.23635 loss)\nI0608 01:32:07.122655 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0634558 (* 1 = 0.0634558 loss)\nI0608 01:32:07.122661 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.237325 (* 1 = 0.237325 loss)\nI0608 01:32:07.122668 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0634859 (* 1 = 0.0634859 loss)\nI0608 01:32:07.122674 13573 sgd_solver.cpp:106] Iteration 25640, lr = 0.001\nI0608 01:32:20.039541 13573 solver.cpp:229] Iteration 25660, loss = 1.27491\nI0608 01:32:20.039610 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.139631 (* 1 = 0.139631 loss)\nI0608 01:32:20.039620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.152708 (* 1 = 0.152708 loss)\nI0608 01:32:20.039626 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113707 (* 1 = 0.113707 loss)\nI0608 01:32:20.039633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0750181 (* 1 = 0.0750181 loss)\nI0608 01:32:20.039638 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.116126 (* 1 = 0.116126 loss)\nI0608 01:32:20.039644 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0749979 (* 1 = 0.0749979 loss)\nI0608 01:32:20.039651 13573 sgd_solver.cpp:106] Iteration 25660, lr = 0.001\nI0608 01:32:33.002895 13573 solver.cpp:229] Iteration 25680, loss = 0.782531\nI0608 01:32:33.002964 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.302105 (* 1 = 0.302105 loss)\nI0608 01:32:33.002974 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.49056 (* 1 = 0.49056 loss)\nI0608 01:32:33.002979 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118422 (* 1 = 0.118422 loss)\nI0608 01:32:33.002985 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0361454 (* 1 = 0.0361454 loss)\nI0608 01:32:33.002991 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.1104 (* 1 = 0.1104 loss)\nI0608 01:32:33.002997 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.036145 (* 1 = 0.036145 loss)\nI0608 01:32:33.003005 13573 sgd_solver.cpp:106] Iteration 25680, lr = 0.001\nI0608 01:32:45.856137 13573 solver.cpp:229] Iteration 25700, loss = 0.626235\nI0608 01:32:45.856199 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.058073 (* 1 = 0.058073 loss)\nI0608 01:32:45.856209 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0888829 (* 1 = 0.0888829 loss)\nI0608 01:32:45.856215 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136413 (* 1 = 0.136413 loss)\nI0608 01:32:45.856221 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0138939 (* 1 = 0.0138939 loss)\nI0608 01:32:45.856227 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.145501 (* 1 = 0.145501 loss)\nI0608 01:32:45.856233 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.013893 (* 1 = 0.013893 loss)\nI0608 01:32:45.856240 13573 sgd_solver.cpp:106] Iteration 25700, lr = 0.001\nI0608 01:32:58.746820 13573 solver.cpp:229] 
Iteration 25720, loss = 0.554729
I0608 01:32:58.746891 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0175031 (* 1 = 0.0175031 loss)
I0608 01:32:58.746899 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0338335 (* 1 = 0.0338335 loss)
I0608 01:32:58.746906 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.088401 (* 1 = 0.088401 loss)
I0608 01:32:58.746912 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0658601 (* 1 = 0.0658601 loss)
I0608 01:32:58.746918 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0853481 (* 1 = 0.0853481 loss)
I0608 01:32:58.746924 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0658532 (* 1 = 0.0658532 loss)
I0608 01:32:58.746932 13573 sgd_solver.cpp:106] Iteration 25720, lr = 0.001
[... iterations 25740-26660 omitted: total loss fluctuates between roughly 0.34 and 1.35, lr = 0.001 throughout, speed 0.645s / iter ...]
I0608 01:43:17.973315 13573 solver.cpp:229] Iteration 26680, loss = 0.66805
I0608 01:43:17.973378 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00527666 (* 1 = 0.00527666 loss)
I0608 01:43:17.973388 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0439139 (* 1 = 0.0439139 loss)
I0608 01:43:17.973394 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.291207 (* 1 = 0.291207 loss)
I0608 01:43:17.973400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158686 (* 1 = 0.0158686 loss)
I0608 01:43:17.973407 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.277312 (* 1 = 0.277312 loss)
I0608 
01:43:17.973412 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158628 (* 1 = 0.0158628 loss)\nI0608 01:43:17.973418 13573 sgd_solver.cpp:106] Iteration 26680, lr = 0.001\nI0608 01:43:30.884255 13573 solver.cpp:229] Iteration 26700, loss = 1.84758\nI0608 01:43:30.884323 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.141368 (* 1 = 0.141368 loss)\nI0608 01:43:30.884335 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.130503 (* 1 = 0.130503 loss)\nI0608 01:43:30.884341 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.190766 (* 1 = 0.190766 loss)\nI0608 01:43:30.884348 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0898249 (* 1 = 0.0898249 loss)\nI0608 01:43:30.884354 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191276 (* 1 = 0.191276 loss)\nI0608 01:43:30.884361 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0898285 (* 1 = 0.0898285 loss)\nI0608 01:43:30.884368 13573 sgd_solver.cpp:106] Iteration 26700, lr = 0.001\nI0608 01:43:43.735771 13573 solver.cpp:229] Iteration 26720, loss = 0.809898\nI0608 01:43:43.735836 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105164 (* 1 = 0.105164 loss)\nI0608 01:43:43.735846 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108344 (* 1 = 0.108344 loss)\nI0608 01:43:43.735852 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.156299 (* 1 = 0.156299 loss)\nI0608 01:43:43.735857 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0248112 (* 1 = 0.0248112 loss)\nI0608 01:43:43.735863 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150537 (* 1 = 0.150537 loss)\nI0608 01:43:43.735869 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0248077 (* 1 = 0.0248077 loss)\nI0608 01:43:43.735877 13573 sgd_solver.cpp:106] Iteration 26720, lr = 0.001\nI0608 01:43:56.230536 13573 solver.cpp:229] Iteration 26740, loss = 0.859735\nI0608 
01:43:56.230610 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0343072 (* 1 = 0.0343072 loss)\nI0608 01:43:56.230620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0502101 (* 1 = 0.0502101 loss)\nI0608 01:43:56.230626 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0718676 (* 1 = 0.0718676 loss)\nI0608 01:43:56.230633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.012228 (* 1 = 0.012228 loss)\nI0608 01:43:56.230638 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0755924 (* 1 = 0.0755924 loss)\nI0608 01:43:56.230644 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.012221 (* 1 = 0.012221 loss)\nI0608 01:43:56.230651 13573 sgd_solver.cpp:106] Iteration 26740, lr = 0.001\nI0608 01:44:09.113303 13573 solver.cpp:229] Iteration 26760, loss = 0.756461\nI0608 01:44:09.113370 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0791187 (* 1 = 0.0791187 loss)\nI0608 01:44:09.113379 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0656442 (* 1 = 0.0656442 loss)\nI0608 01:44:09.113385 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.117619 (* 1 = 0.117619 loss)\nI0608 01:44:09.113391 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.151017 (* 1 = 0.151017 loss)\nI0608 01:44:09.113397 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.11733 (* 1 = 0.11733 loss)\nI0608 01:44:09.113404 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.151108 (* 1 = 0.151108 loss)\nI0608 01:44:09.113411 13573 sgd_solver.cpp:106] Iteration 26760, lr = 0.001\nI0608 01:44:22.012502 13573 solver.cpp:229] Iteration 26780, loss = 0.922885\nI0608 01:44:22.012568 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.156475 (* 1 = 0.156475 loss)\nI0608 01:44:22.012578 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.186538 (* 1 = 0.186538 loss)\nI0608 01:44:22.012583 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.483033 (* 1 = 0.483033 loss)\nI0608 01:44:22.012589 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0221728 (* 1 = 0.0221728 loss)\nI0608 01:44:22.012595 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.501582 (* 1 = 0.501582 loss)\nI0608 01:44:22.012601 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0221771 (* 1 = 0.0221771 loss)\nI0608 01:44:22.012609 13573 sgd_solver.cpp:106] Iteration 26780, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:44:35.061303 13573 solver.cpp:229] Iteration 26800, loss = 1.02699\nI0608 01:44:35.061381 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.134603 (* 1 = 0.134603 loss)\nI0608 01:44:35.061390 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.189141 (* 1 = 0.189141 loss)\nI0608 01:44:35.061396 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.393101 (* 1 = 0.393101 loss)\nI0608 01:44:35.061401 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.159141 (* 1 = 0.159141 loss)\nI0608 01:44:35.061408 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.390519 (* 1 = 0.390519 loss)\nI0608 01:44:35.061414 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.159164 (* 1 = 0.159164 loss)\nI0608 01:44:35.061420 13573 sgd_solver.cpp:106] Iteration 26800, lr = 0.001\nI0608 01:44:47.760154 13573 solver.cpp:229] Iteration 26820, loss = 0.704275\nI0608 01:44:47.760239 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.158173 (* 1 = 0.158173 loss)\nI0608 01:44:47.760251 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0748569 (* 1 = 0.0748569 loss)\nI0608 01:44:47.760257 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.18437 (* 1 = 0.18437 loss)\nI0608 01:44:47.760262 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0345192 (* 1 = 0.0345192 loss)\nI0608 01:44:47.760268 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.185993 
(* 1 = 0.185993 loss)\nI0608 01:44:47.760274 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0345281 (* 1 = 0.0345281 loss)\nI0608 01:44:47.760282 13573 sgd_solver.cpp:106] Iteration 26820, lr = 0.001\nI0608 01:45:00.666016 13573 solver.cpp:229] Iteration 26840, loss = 0.646526\nI0608 01:45:00.666080 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.159167 (* 1 = 0.159167 loss)\nI0608 01:45:00.666090 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0845555 (* 1 = 0.0845555 loss)\nI0608 01:45:00.666096 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105378 (* 1 = 0.105378 loss)\nI0608 01:45:00.666101 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.060203 (* 1 = 0.060203 loss)\nI0608 01:45:00.666107 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100167 (* 1 = 0.100167 loss)\nI0608 01:45:00.666113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0602038 (* 1 = 0.0602038 loss)\nI0608 01:45:00.666119 13573 sgd_solver.cpp:106] Iteration 26840, lr = 0.001\nI0608 01:45:13.556294 13573 solver.cpp:229] Iteration 26860, loss = 0.582827\nI0608 01:45:13.556370 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.207826 (* 1 = 0.207826 loss)\nI0608 01:45:13.556380 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.155539 (* 1 = 0.155539 loss)\nI0608 01:45:13.556387 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.09533 (* 1 = 0.09533 loss)\nI0608 01:45:13.556393 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0528594 (* 1 = 0.0528594 loss)\nI0608 01:45:13.556398 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0881493 (* 1 = 0.0881493 loss)\nI0608 01:45:13.556404 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0528578 (* 1 = 0.0528578 loss)\nI0608 01:45:13.556411 13573 sgd_solver.cpp:106] Iteration 26860, lr = 0.001\nI0608 01:45:26.336408 13573 solver.cpp:229] Iteration 26880, loss = 
0.572579\nI0608 01:45:26.336474 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000352462 (* 1 = 0.000352462 loss)\nI0608 01:45:26.336489 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000292248 (* 1 = 0.000292248 loss)\nI0608 01:45:26.336496 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.169677 (* 1 = 0.169677 loss)\nI0608 01:45:26.336503 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00272687 (* 1 = 0.00272687 loss)\nI0608 01:45:26.336508 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.152238 (* 1 = 0.152238 loss)\nI0608 01:45:26.336514 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00272502 (* 1 = 0.00272502 loss)\nI0608 01:45:26.336522 13573 sgd_solver.cpp:106] Iteration 26880, lr = 0.001\nI0608 01:45:39.253983 13573 solver.cpp:229] Iteration 26900, loss = 0.615925\nI0608 01:45:39.254050 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0900412 (* 1 = 0.0900412 loss)\nI0608 01:45:39.254060 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.05005 (* 1 = 0.05005 loss)\nI0608 01:45:39.254066 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0800274 (* 1 = 0.0800274 loss)\nI0608 01:45:39.254072 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116834 (* 1 = 0.0116834 loss)\nI0608 01:45:39.254078 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0815794 (* 1 = 0.0815794 loss)\nI0608 01:45:39.254084 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116782 (* 1 = 0.0116782 loss)\nI0608 01:45:39.254091 13573 sgd_solver.cpp:106] Iteration 26900, lr = 0.001\nI0608 01:45:52.137138 13573 solver.cpp:229] Iteration 26920, loss = 0.737058\nI0608 01:45:52.137223 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00219228 (* 1 = 0.00219228 loss)\nI0608 01:45:52.137233 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000260571 (* 1 = 0.000260571 loss)\nI0608 01:45:52.137239 
13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.232385 (* 1 = 0.232385 loss)\nI0608 01:45:52.137245 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0286427 (* 1 = 0.0286427 loss)\nI0608 01:45:52.137251 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217156 (* 1 = 0.217156 loss)\nI0608 01:45:52.137257 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0286407 (* 1 = 0.0286407 loss)\nI0608 01:45:52.137265 13573 sgd_solver.cpp:106] Iteration 26920, lr = 0.001\nI0608 01:46:05.105947 13573 solver.cpp:229] Iteration 26940, loss = 0.580985\nI0608 01:46:05.106022 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0207825 (* 1 = 0.0207825 loss)\nI0608 01:46:05.106032 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0572944 (* 1 = 0.0572944 loss)\nI0608 01:46:05.106040 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.134115 (* 1 = 0.134115 loss)\nI0608 01:46:05.106045 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105104 (* 1 = 0.0105104 loss)\nI0608 01:46:05.106050 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138201 (* 1 = 0.138201 loss)\nI0608 01:46:05.106056 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105068 (* 1 = 0.0105068 loss)\nI0608 01:46:05.106065 13573 sgd_solver.cpp:106] Iteration 26940, lr = 0.001\nI0608 01:46:18.213093 13573 solver.cpp:229] Iteration 26960, loss = 0.618977\nI0608 01:46:18.213177 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.150339 (* 1 = 0.150339 loss)\nI0608 01:46:18.213188 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132728 (* 1 = 0.132728 loss)\nI0608 01:46:18.213194 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.211977 (* 1 = 0.211977 loss)\nI0608 01:46:18.213201 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0312421 (* 1 = 0.0312421 loss)\nI0608 01:46:18.213207 13573 solver.cpp:245]     Train net output 
#4: rpn_cls_loss = 0.209008 (* 1 = 0.209008 loss)\nI0608 01:46:18.213212 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0312361 (* 1 = 0.0312361 loss)\nI0608 01:46:18.213219 13573 sgd_solver.cpp:106] Iteration 26960, lr = 0.001\nI0608 01:46:30.808002 13573 solver.cpp:229] Iteration 26980, loss = 0.547474\nI0608 01:46:30.808127 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00205286 (* 1 = 0.00205286 loss)\nI0608 01:46:30.808138 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000431134 (* 1 = 0.000431134 loss)\nI0608 01:46:30.808145 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.291568 (* 1 = 0.291568 loss)\nI0608 01:46:30.808151 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0136722 (* 1 = 0.0136722 loss)\nI0608 01:46:30.808156 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.251653 (* 1 = 0.251653 loss)\nI0608 01:46:30.808162 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0136614 (* 1 = 0.0136614 loss)\nI0608 01:46:30.808171 13573 sgd_solver.cpp:106] Iteration 26980, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:46:43.760512 13573 solver.cpp:229] Iteration 27000, loss = 0.67063\nI0608 01:46:43.760599 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121139 (* 1 = 0.121139 loss)\nI0608 01:46:43.760609 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118373 (* 1 = 0.118373 loss)\nI0608 01:46:43.760615 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0941634 (* 1 = 0.0941634 loss)\nI0608 01:46:43.760622 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00865533 (* 1 = 0.00865533 loss)\nI0608 01:46:43.760627 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0871161 (* 1 = 0.0871161 loss)\nI0608 01:46:43.760633 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00865527 (* 1 = 0.00865527 loss)\nI0608 01:46:43.760640 13573 sgd_solver.cpp:106] Iteration 27000, lr = 
0.001\nI0608 01:46:56.634536 13573 solver.cpp:229] Iteration 27020, loss = 0.434386\nI0608 01:46:56.634593 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.108685 (* 1 = 0.108685 loss)\nI0608 01:46:56.634603 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0765136 (* 1 = 0.0765136 loss)\nI0608 01:46:56.634608 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101413 (* 1 = 0.101413 loss)\nI0608 01:46:56.634613 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0155759 (* 1 = 0.0155759 loss)\nI0608 01:46:56.634619 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10224 (* 1 = 0.10224 loss)\nI0608 01:46:56.634625 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0155774 (* 1 = 0.0155774 loss)\nI0608 01:46:56.634632 13573 sgd_solver.cpp:106] Iteration 27020, lr = 0.001\nI0608 01:47:09.346513 13573 solver.cpp:229] Iteration 27040, loss = 0.764053\nI0608 01:47:09.346596 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00183621 (* 1 = 0.00183621 loss)\nI0608 01:47:09.346606 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00398142 (* 1 = 0.00398142 loss)\nI0608 01:47:09.346612 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.178037 (* 1 = 0.178037 loss)\nI0608 01:47:09.346618 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.000475456 (* 1 = 0.000475456 loss)\nI0608 01:47:09.346624 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.165056 (* 1 = 0.165056 loss)\nI0608 01:47:09.346631 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.000476708 (* 1 = 0.000476708 loss)\nI0608 01:47:09.346637 13573 sgd_solver.cpp:106] Iteration 27040, lr = 0.001\nI0608 01:47:22.202374 13573 solver.cpp:229] Iteration 27060, loss = 0.601624\nI0608 01:47:22.202438 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.195169 (* 1 = 0.195169 loss)\nI0608 01:47:22.202447 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.0999749 (* 1 = 0.0999749 loss)\nI0608 01:47:22.202455 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.138422 (* 1 = 0.138422 loss)\nI0608 01:47:22.202460 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0210374 (* 1 = 0.0210374 loss)\nI0608 01:47:22.202466 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.133286 (* 1 = 0.133286 loss)\nI0608 01:47:22.202471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0210384 (* 1 = 0.0210384 loss)\nI0608 01:47:22.202478 13573 sgd_solver.cpp:106] Iteration 27060, lr = 0.001\nI0608 01:47:35.155123 13573 solver.cpp:229] Iteration 27080, loss = 1.99527\nI0608 01:47:35.155195 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102564 (* 1 = 0.102564 loss)\nI0608 01:47:35.155205 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.157803 (* 1 = 0.157803 loss)\nI0608 01:47:35.155211 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.300753 (* 1 = 0.300753 loss)\nI0608 01:47:35.155217 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0194191 (* 1 = 0.0194191 loss)\nI0608 01:47:35.155223 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.303371 (* 1 = 0.303371 loss)\nI0608 01:47:35.155230 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0194286 (* 1 = 0.0194286 loss)\nI0608 01:47:35.155236 13573 sgd_solver.cpp:106] Iteration 27080, lr = 0.001\nI0608 01:47:48.019850 13573 solver.cpp:229] Iteration 27100, loss = 0.990202\nI0608 01:47:48.019918 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199484 (* 1 = 0.199484 loss)\nI0608 01:47:48.019927 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.146107 (* 1 = 0.146107 loss)\nI0608 01:47:48.019933 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.312805 (* 1 = 0.312805 loss)\nI0608 01:47:48.019939 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0491469 (* 1 = 0.0491469 
loss)\nI0608 01:47:48.019944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.306855 (* 1 = 0.306855 loss)\nI0608 01:47:48.019951 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0491298 (* 1 = 0.0491298 loss)\nI0608 01:47:48.019958 13573 sgd_solver.cpp:106] Iteration 27100, lr = 0.001\nI0608 01:48:01.035089 13573 solver.cpp:229] Iteration 27120, loss = 0.897911\nI0608 01:48:01.035156 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.153663 (* 1 = 0.153663 loss)\nI0608 01:48:01.035166 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.176026 (* 1 = 0.176026 loss)\nI0608 01:48:01.035172 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256312 (* 1 = 0.256312 loss)\nI0608 01:48:01.035178 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0907087 (* 1 = 0.0907087 loss)\nI0608 01:48:01.035183 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.262853 (* 1 = 0.262853 loss)\nI0608 01:48:01.035189 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0907411 (* 1 = 0.0907411 loss)\nI0608 01:48:01.035197 13573 sgd_solver.cpp:106] Iteration 27120, lr = 0.001\nI0608 01:48:13.946563 13573 solver.cpp:229] Iteration 27140, loss = 0.575742\nI0608 01:48:13.946631 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.17911 (* 1 = 0.17911 loss)\nI0608 01:48:13.946640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108292 (* 1 = 0.108292 loss)\nI0608 01:48:13.946646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.134963 (* 1 = 0.134963 loss)\nI0608 01:48:13.946652 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0352611 (* 1 = 0.0352611 loss)\nI0608 01:48:13.946657 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.135117 (* 1 = 0.135117 loss)\nI0608 01:48:13.946663 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0352639 (* 1 = 0.0352639 loss)\nI0608 01:48:13.946671 13573 sgd_solver.cpp:106] 
Iteration 27140, lr = 0.001\nI0608 01:48:26.947134 13573 solver.cpp:229] Iteration 27160, loss = 0.560252\nI0608 01:48:26.947198 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.175329 (* 1 = 0.175329 loss)\nI0608 01:48:26.947208 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105529 (* 1 = 0.105529 loss)\nI0608 01:48:26.947214 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102885 (* 1 = 0.102885 loss)\nI0608 01:48:26.947219 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0125208 (* 1 = 0.0125208 loss)\nI0608 01:48:26.947226 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103662 (* 1 = 0.103662 loss)\nI0608 01:48:26.947230 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0125217 (* 1 = 0.0125217 loss)\nI0608 01:48:26.947237 13573 sgd_solver.cpp:106] Iteration 27160, lr = 0.001\nI0608 01:48:39.848032 13573 solver.cpp:229] Iteration 27180, loss = 0.726404\nI0608 01:48:39.848107 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.157694 (* 1 = 0.157694 loss)\nI0608 01:48:39.848116 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.103121 (* 1 = 0.103121 loss)\nI0608 01:48:39.848121 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228063 (* 1 = 0.228063 loss)\nI0608 01:48:39.848127 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0241772 (* 1 = 0.0241772 loss)\nI0608 01:48:39.848134 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226383 (* 1 = 0.226383 loss)\nI0608 01:48:39.848140 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0241876 (* 1 = 0.0241876 loss)\nI0608 01:48:39.848146 13573 sgd_solver.cpp:106] Iteration 27180, lr = 0.001\nspeed: 0.645s / iter\nI0608 01:48:52.706957 13573 solver.cpp:229] Iteration 27200, loss = 0.576576\nI0608 01:48:52.707025 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0946607 (* 1 = 0.0946607 loss)\nI0608 01:48:52.707034 13573 solver.cpp:245] 
    Train net output #1: loss_cls = 0.0643062 (* 1 = 0.0643062 loss)\nI0608 01:48:52.707041 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.218435 (* 1 = 0.218435 loss)\nI0608 01:48:52.707046 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0124044 (* 1 = 0.0124044 loss)\nI0608 01:48:52.707051 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213003 (* 1 = 0.213003 loss)\nI0608 01:48:52.707057 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.01239 (* 1 = 0.01239 loss)\nI0608 01:48:52.707064 13573 sgd_solver.cpp:106] Iteration 27200, lr = 0.001\nI0608 01:49:05.686800 13573 solver.cpp:229] Iteration 27220, loss = 1.94316\nI0608 01:49:05.686861 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.345814 (* 1 = 0.345814 loss)\nI0608 01:49:05.686872 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.151889 (* 1 = 0.151889 loss)\nI0608 01:49:05.686877 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.918947 (* 1 = 0.918947 loss)\nI0608 01:49:05.686883 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.242032 (* 1 = 0.242032 loss)\nI0608 01:49:05.686888 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.917772 (* 1 = 0.917772 loss)\nI0608 01:49:05.686894 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.242003 (* 1 = 0.242003 loss)\nI0608 01:49:05.686902 13573 sgd_solver.cpp:106] Iteration 27220, lr = 0.001\nI0608 01:49:18.718269 13573 solver.cpp:229] Iteration 27240, loss = 1.47513\nI0608 01:49:18.718400 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.141155 (* 1 = 0.141155 loss)\nI0608 01:49:18.718410 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.123676 (* 1 = 0.123676 loss)\nI0608 01:49:18.718415 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.818755 (* 1 = 0.818755 loss)\nI0608 01:49:18.718421 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.261034 (* 1 = 
0.261034 loss)\nI0608 01:49:18.718426 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.812639 (* 1 = 0.812639 loss)\nI0608 01:49:18.718432 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.26096 (* 1 = 0.26096 loss)\nI0608 01:49:18.718439 13573 sgd_solver.cpp:106] Iteration 27240, lr = 0.001\nI0608 01:49:31.769310 13573 solver.cpp:229] Iteration 27260, loss = 0.443536\nI0608 01:49:31.769384 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0101891 (* 1 = 0.0101891 loss)\nI0608 01:49:31.769395 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0554681 (* 1 = 0.0554681 loss)\nI0608 01:49:31.769402 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.089897 (* 1 = 0.089897 loss)\nI0608 01:49:31.769407 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0144712 (* 1 = 0.0144712 loss)\nI0608 01:49:31.769412 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0980926 (* 1 = 0.0980926 loss)\nI0608 01:49:31.769418 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0144724 (* 1 = 0.0144724 loss)\nI0608 01:49:31.769424 13573 sgd_solver.cpp:106] Iteration 27260, lr = 0.001\nI0608 01:49:44.832419 13573 solver.cpp:229] Iteration 27280, loss = 0.750725\nI0608 01:49:44.832484 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000269047 (* 1 = 0.000269047 loss)\nI0608 01:49:44.832494 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000350119 (* 1 = 0.000350119 loss)\nI0608 01:49:44.832499 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228716 (* 1 = 0.228716 loss)\nI0608 01:49:44.832505 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0260981 (* 1 = 0.0260981 loss)\nI0608 01:49:44.832511 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197663 (* 1 = 0.197663 loss)\nI0608 01:49:44.832516 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0260992 (* 1 = 0.0260992 loss)\nI0608 01:49:44.832522 
13573 sgd_solver.cpp:106] Iteration 27280, lr = 0.001
I0608 01:49:57.779165 13573 solver.cpp:229] Iteration 27300, loss = 0.806418
I0608 01:49:57.779242 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.106668 (* 1 = 0.106668 loss)
I0608 01:49:57.779250 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0915071 (* 1 = 0.0915071 loss)
I0608 01:49:57.779256 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302573 (* 1 = 0.302573 loss)
I0608 01:49:57.779263 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0356086 (* 1 = 0.0356086 loss)
I0608 01:49:57.779268 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.323752 (* 1 = 0.323752 loss)
I0608 01:49:57.779273 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0355903 (* 1 = 0.0355903 loss)
I0608 01:49:57.779281 13573 sgd_solver.cpp:106] Iteration 27300, lr = 0.001
[... iterations 27320-28180 elided: repeated blocks with the same six loss terms (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox); total loss ranges roughly 0.35-1.93, lr = 0.001 throughout, speed: 0.645s / iter ...]
I0608 01:59:38.581393 13573 solver.cpp:229] Iteration 28200, loss = 2.25897
I0608 01:59:38.581455 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102553 (* 1 = 0.102553 loss)
I0608 01:59:38.581466 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0534843 (* 1 = 0.0534843 loss)
I0608 01:59:38.581473 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.889055 (* 1 = 0.889055 loss)
I0608 01:59:38.581478 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.362901 (* 1 = 0.362901 loss)
I0608 01:59:38.581485 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.897241 (* 1 = 0.897241 loss)
I0608 01:59:38.581490 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.362953 (* 1 = 0.362953 loss)
I0608 01:59:38.581497 13573 sgd_solver.cpp:106] Iteration 28200, lr = 0.001
I0608 01:59:51.332494 13573 solver.cpp:229] Iteration 28220, loss = 0.6433
I0608 01:59:51.332603 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0355609 (* 1 = 0.0355609 loss)
I0608 01:59:51.332614 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0406651 (* 1 = 0.0406651
loss)\nI0608 01:59:51.332622 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.269282 (* 1 = 0.269282 loss)\nI0608 01:59:51.332628 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0262047 (* 1 = 0.0262047 loss)\nI0608 01:59:51.332633 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.277222 (* 1 = 0.277222 loss)\nI0608 01:59:51.332639 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0262113 (* 1 = 0.0262113 loss)\nI0608 01:59:51.332646 13573 sgd_solver.cpp:106] Iteration 28220, lr = 0.001\nI0608 02:00:04.133190 13573 solver.cpp:229] Iteration 28240, loss = 1.46647\nI0608 02:00:04.133272 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.114536 (* 1 = 0.114536 loss)\nI0608 02:00:04.133282 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202742 (* 1 = 0.202742 loss)\nI0608 02:00:04.133288 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.319465 (* 1 = 0.319465 loss)\nI0608 02:00:04.133294 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0248042 (* 1 = 0.0248042 loss)\nI0608 02:00:04.133299 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.330914 (* 1 = 0.330914 loss)\nI0608 02:00:04.133306 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0248132 (* 1 = 0.0248132 loss)\nI0608 02:00:04.133312 13573 sgd_solver.cpp:106] Iteration 28240, lr = 0.001\nI0608 02:00:17.171134 13573 solver.cpp:229] Iteration 28260, loss = 0.71478\nI0608 02:00:17.171202 13573 solver.cpp:245]     Train net output #0: loss_bbox = 4.64581e-05 (* 1 = 4.64581e-05 loss)\nI0608 02:00:17.171212 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0279524 (* 1 = 0.0279524 loss)\nI0608 02:00:17.171218 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197513 (* 1 = 0.197513 loss)\nI0608 02:00:17.171224 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0152919 (* 1 = 0.0152919 loss)\nI0608 02:00:17.171231 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.202035 (* 1 = 0.202035 loss)\nI0608 02:00:17.171236 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0152979 (* 1 = 0.0152979 loss)\nI0608 02:00:17.171242 13573 sgd_solver.cpp:106] Iteration 28260, lr = 0.001\nI0608 02:00:30.032524 13573 solver.cpp:229] Iteration 28280, loss = 0.901832\nI0608 02:00:30.032585 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105691 (* 1 = 0.105691 loss)\nI0608 02:00:30.032594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0513363 (* 1 = 0.0513363 loss)\nI0608 02:00:30.032603 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0988157 (* 1 = 0.0988157 loss)\nI0608 02:00:30.032608 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0206619 (* 1 = 0.0206619 loss)\nI0608 02:00:30.032614 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10161 (* 1 = 0.10161 loss)\nI0608 02:00:30.032620 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0206637 (* 1 = 0.0206637 loss)\nI0608 02:00:30.032627 13573 sgd_solver.cpp:106] Iteration 28280, lr = 0.001\nI0608 02:00:42.894121 13573 solver.cpp:229] Iteration 28300, loss = 0.415548\nI0608 02:00:42.894198 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000392494 (* 1 = 0.000392494 loss)\nI0608 02:00:42.894208 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0257776 (* 1 = 0.0257776 loss)\nI0608 02:00:42.894214 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.217733 (* 1 = 0.217733 loss)\nI0608 02:00:42.894220 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00512146 (* 1 = 0.00512146 loss)\nI0608 02:00:42.894227 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.16131 (* 1 = 0.16131 loss)\nI0608 02:00:42.894232 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00512519 (* 1 = 0.00512519 loss)\nI0608 02:00:42.894239 13573 sgd_solver.cpp:106] Iteration 28300, lr 
= 0.001\nI0608 02:00:55.444600 13573 solver.cpp:229] Iteration 28320, loss = 0.544529\nI0608 02:00:55.444684 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0382972 (* 1 = 0.0382972 loss)\nI0608 02:00:55.444694 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0200979 (* 1 = 0.0200979 loss)\nI0608 02:00:55.444700 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0785721 (* 1 = 0.0785721 loss)\nI0608 02:00:55.444706 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0360724 (* 1 = 0.0360724 loss)\nI0608 02:00:55.444711 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0834335 (* 1 = 0.0834335 loss)\nI0608 02:00:55.444717 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0360857 (* 1 = 0.0360857 loss)\nI0608 02:00:55.444725 13573 sgd_solver.cpp:106] Iteration 28320, lr = 0.001\nI0608 02:01:08.447887 13573 solver.cpp:229] Iteration 28340, loss = 0.885851\nI0608 02:01:08.447950 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.331125 (* 1 = 0.331125 loss)\nI0608 02:01:08.447960 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190968 (* 1 = 0.190968 loss)\nI0608 02:01:08.447965 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.157352 (* 1 = 0.157352 loss)\nI0608 02:01:08.447971 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269106 (* 1 = 0.0269106 loss)\nI0608 02:01:08.447978 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154071 (* 1 = 0.154071 loss)\nI0608 02:01:08.447983 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0268976 (* 1 = 0.0268976 loss)\nI0608 02:01:08.447989 13573 sgd_solver.cpp:106] Iteration 28340, lr = 0.001\nI0608 02:01:21.345681 13573 solver.cpp:229] Iteration 28360, loss = 0.600684\nI0608 02:01:21.345757 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00215738 (* 1 = 0.00215738 loss)\nI0608 02:01:21.345767 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.00722021 (* 1 = 0.00722021 loss)\nI0608 02:01:21.345773 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.412476 (* 1 = 0.412476 loss)\nI0608 02:01:21.345778 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0720315 (* 1 = 0.0720315 loss)\nI0608 02:01:21.345784 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.375815 (* 1 = 0.375815 loss)\nI0608 02:01:21.345790 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0720329 (* 1 = 0.0720329 loss)\nI0608 02:01:21.345798 13573 sgd_solver.cpp:106] Iteration 28360, lr = 0.001\nI0608 02:01:34.282176 13573 solver.cpp:229] Iteration 28380, loss = 0.617572\nI0608 02:01:34.282239 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.270932 (* 1 = 0.270932 loss)\nI0608 02:01:34.282248 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.268356 (* 1 = 0.268356 loss)\nI0608 02:01:34.282255 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.095995 (* 1 = 0.095995 loss)\nI0608 02:01:34.282260 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321333 (* 1 = 0.0321333 loss)\nI0608 02:01:34.282266 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0885905 (* 1 = 0.0885905 loss)\nI0608 02:01:34.282272 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0321288 (* 1 = 0.0321288 loss)\nI0608 02:01:34.282279 13573 sgd_solver.cpp:106] Iteration 28380, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:01:47.038717 13573 solver.cpp:229] Iteration 28400, loss = 1.76986\nI0608 02:01:47.038796 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.175245 (* 1 = 0.175245 loss)\nI0608 02:01:47.038806 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.162669 (* 1 = 0.162669 loss)\nI0608 02:01:47.038812 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252232 (* 1 = 0.252232 loss)\nI0608 02:01:47.038818 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0201224 
(* 1 = 0.0201224 loss)\nI0608 02:01:47.038823 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.25783 (* 1 = 0.25783 loss)\nI0608 02:01:47.038830 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0201165 (* 1 = 0.0201165 loss)\nI0608 02:01:47.038873 13573 sgd_solver.cpp:106] Iteration 28400, lr = 0.001\nI0608 02:02:00.023602 13573 solver.cpp:229] Iteration 28420, loss = 0.70373\nI0608 02:02:00.023679 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00063417 (* 1 = 0.00063417 loss)\nI0608 02:02:00.023687 13573 solver.cpp:245]     Train net output #1: loss_cls = 6.9453e-05 (* 1 = 6.9453e-05 loss)\nI0608 02:02:00.023694 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.15774 (* 1 = 0.15774 loss)\nI0608 02:02:00.023699 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00458176 (* 1 = 0.00458176 loss)\nI0608 02:02:00.023705 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.173997 (* 1 = 0.173997 loss)\nI0608 02:02:00.023710 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00458257 (* 1 = 0.00458257 loss)\nI0608 02:02:00.023716 13573 sgd_solver.cpp:106] Iteration 28420, lr = 0.001\nI0608 02:02:13.091181 13573 solver.cpp:229] Iteration 28440, loss = 1.25777\nI0608 02:02:13.091244 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.132553 (* 1 = 0.132553 loss)\nI0608 02:02:13.091253 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.271348 (* 1 = 0.271348 loss)\nI0608 02:02:13.091259 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.627298 (* 1 = 0.627298 loss)\nI0608 02:02:13.091264 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.20143 (* 1 = 0.20143 loss)\nI0608 02:02:13.091270 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.605066 (* 1 = 0.605066 loss)\nI0608 02:02:13.091276 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.201597 (* 1 = 0.201597 loss)\nI0608 02:02:13.091284 13573 
sgd_solver.cpp:106] Iteration 28440, lr = 0.001\nI0608 02:02:25.996852 13573 solver.cpp:229] Iteration 28460, loss = 0.435746\nI0608 02:02:25.996919 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0629642 (* 1 = 0.0629642 loss)\nI0608 02:02:25.996928 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0756815 (* 1 = 0.0756815 loss)\nI0608 02:02:25.996934 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0897461 (* 1 = 0.0897461 loss)\nI0608 02:02:25.996940 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.028619 (* 1 = 0.028619 loss)\nI0608 02:02:25.996945 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0887053 (* 1 = 0.0887053 loss)\nI0608 02:02:25.996951 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0286439 (* 1 = 0.0286439 loss)\nI0608 02:02:25.996958 13573 sgd_solver.cpp:106] Iteration 28460, lr = 0.001\nI0608 02:02:38.985348 13573 solver.cpp:229] Iteration 28480, loss = 0.553706\nI0608 02:02:38.985429 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.151622 (* 1 = 0.151622 loss)\nI0608 02:02:38.985438 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.153321 (* 1 = 0.153321 loss)\nI0608 02:02:38.985445 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.168308 (* 1 = 0.168308 loss)\nI0608 02:02:38.985450 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0330708 (* 1 = 0.0330708 loss)\nI0608 02:02:38.985455 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.155497 (* 1 = 0.155497 loss)\nI0608 02:02:38.985462 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0330736 (* 1 = 0.0330736 loss)\nI0608 02:02:38.985471 13573 sgd_solver.cpp:106] Iteration 28480, lr = 0.001\nI0608 02:02:51.866971 13573 solver.cpp:229] Iteration 28500, loss = 0.816925\nI0608 02:02:51.867039 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101909 (* 1 = 0.101909 loss)\nI0608 02:02:51.867049 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.0305393 (* 1 = 0.0305393 loss)\nI0608 02:02:51.867055 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.223966 (* 1 = 0.223966 loss)\nI0608 02:02:51.867061 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0205228 (* 1 = 0.0205228 loss)\nI0608 02:02:51.867067 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223057 (* 1 = 0.223057 loss)\nI0608 02:02:51.867074 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0205271 (* 1 = 0.0205271 loss)\nI0608 02:02:51.867080 13573 sgd_solver.cpp:106] Iteration 28500, lr = 0.001\nI0608 02:03:04.625561 13573 solver.cpp:229] Iteration 28520, loss = 1.52494\nI0608 02:03:04.625630 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0792451 (* 1 = 0.0792451 loss)\nI0608 02:03:04.625639 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107401 (* 1 = 0.107401 loss)\nI0608 02:03:04.625646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0845106 (* 1 = 0.0845106 loss)\nI0608 02:03:04.625653 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0136981 (* 1 = 0.0136981 loss)\nI0608 02:03:04.625658 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0825485 (* 1 = 0.0825485 loss)\nI0608 02:03:04.625663 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0136935 (* 1 = 0.0136935 loss)\nI0608 02:03:04.625671 13573 sgd_solver.cpp:106] Iteration 28520, lr = 0.001\nI0608 02:03:17.423220 13573 solver.cpp:229] Iteration 28540, loss = 0.700594\nI0608 02:03:17.423288 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.032104 (* 1 = 0.032104 loss)\nI0608 02:03:17.423298 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.125912 (* 1 = 0.125912 loss)\nI0608 02:03:17.423305 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.126376 (* 1 = 0.126376 loss)\nI0608 02:03:17.423310 13573 solver.cpp:245]     Train net output #3: 
p2_rpn_loss_bbox = 0.116572 (* 1 = 0.116572 loss)\nI0608 02:03:17.423316 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.137575 (* 1 = 0.137575 loss)\nI0608 02:03:17.423321 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.116617 (* 1 = 0.116617 loss)\nI0608 02:03:17.423328 13573 sgd_solver.cpp:106] Iteration 28540, lr = 0.001\nI0608 02:03:30.340469 13573 solver.cpp:229] Iteration 28560, loss = 0.57116\nI0608 02:03:30.340533 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000442504 (* 1 = 0.000442504 loss)\nI0608 02:03:30.340543 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000715807 (* 1 = 0.000715807 loss)\nI0608 02:03:30.340549 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.17258 (* 1 = 0.17258 loss)\nI0608 02:03:30.340555 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00128531 (* 1 = 0.00128531 loss)\nI0608 02:03:30.340560 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.169075 (* 1 = 0.169075 loss)\nI0608 02:03:30.340566 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00128689 (* 1 = 0.00128689 loss)\nI0608 02:03:30.340572 13573 sgd_solver.cpp:106] Iteration 28560, lr = 0.001\nI0608 02:03:43.209215 13573 solver.cpp:229] Iteration 28580, loss = 0.614081\nI0608 02:03:43.209282 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0874085 (* 1 = 0.0874085 loss)\nI0608 02:03:43.209292 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0244633 (* 1 = 0.0244633 loss)\nI0608 02:03:43.209298 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.189607 (* 1 = 0.189607 loss)\nI0608 02:03:43.209305 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00607343 (* 1 = 0.00607343 loss)\nI0608 02:03:43.209311 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.221005 (* 1 = 0.221005 loss)\nI0608 02:03:43.209316 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00606612 (* 1 = 
0.00606612 loss)\nI0608 02:03:43.209323 13573 sgd_solver.cpp:106] Iteration 28580, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:03:55.867547 13573 solver.cpp:229] Iteration 28600, loss = 0.822383\nI0608 02:03:55.867609 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.117081 (* 1 = 0.117081 loss)\nI0608 02:03:55.867619 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.165964 (* 1 = 0.165964 loss)\nI0608 02:03:55.867625 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.347866 (* 1 = 0.347866 loss)\nI0608 02:03:55.867633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0491112 (* 1 = 0.0491112 loss)\nI0608 02:03:55.867640 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.35197 (* 1 = 0.35197 loss)\nI0608 02:03:55.867645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0490944 (* 1 = 0.0490944 loss)\nI0608 02:03:55.867652 13573 sgd_solver.cpp:106] Iteration 28600, lr = 0.001\nI0608 02:04:08.921046 13573 solver.cpp:229] Iteration 28620, loss = 0.417222\nI0608 02:04:08.921125 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.14325 (* 1 = 0.14325 loss)\nI0608 02:04:08.921135 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0756254 (* 1 = 0.0756254 loss)\nI0608 02:04:08.921141 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10192 (* 1 = 0.10192 loss)\nI0608 02:04:08.921146 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0243032 (* 1 = 0.0243032 loss)\nI0608 02:04:08.921152 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107529 (* 1 = 0.107529 loss)\nI0608 02:04:08.921159 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0242783 (* 1 = 0.0242783 loss)\nI0608 02:04:08.921165 13573 sgd_solver.cpp:106] Iteration 28620, lr = 0.001\nI0608 02:04:21.840164 13573 solver.cpp:229] Iteration 28640, loss = 1.14181\nI0608 02:04:21.840231 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.264066 (* 1 = 
0.264066 loss)\nI0608 02:04:21.840242 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.182725 (* 1 = 0.182725 loss)\nI0608 02:04:21.840248 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.240272 (* 1 = 0.240272 loss)\nI0608 02:04:21.840255 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0804858 (* 1 = 0.0804858 loss)\nI0608 02:04:21.840260 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.237979 (* 1 = 0.237979 loss)\nI0608 02:04:21.840265 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0805024 (* 1 = 0.0805024 loss)\nI0608 02:04:21.840272 13573 sgd_solver.cpp:106] Iteration 28640, lr = 0.001\nI0608 02:04:34.572361 13573 solver.cpp:229] Iteration 28660, loss = 0.915222\nI0608 02:04:34.572428 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.196577 (* 1 = 0.196577 loss)\nI0608 02:04:34.572438 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.354994 (* 1 = 0.354994 loss)\nI0608 02:04:34.572443 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.262818 (* 1 = 0.262818 loss)\nI0608 02:04:34.572450 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0443171 (* 1 = 0.0443171 loss)\nI0608 02:04:34.572455 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.262538 (* 1 = 0.262538 loss)\nI0608 02:04:34.572461 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0443119 (* 1 = 0.0443119 loss)\nI0608 02:04:34.572469 13573 sgd_solver.cpp:106] Iteration 28660, lr = 0.001\nI0608 02:04:47.288425 13573 solver.cpp:229] Iteration 28680, loss = 1.88217\nI0608 02:04:47.288493 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0391975 (* 1 = 0.0391975 loss)\nI0608 02:04:47.288503 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0594954 (* 1 = 0.0594954 loss)\nI0608 02:04:47.288511 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0870457 (* 1 = 0.0870457 loss)\nI0608 02:04:47.288516 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.051816 (* 1 = 0.051816 loss)\nI0608 02:04:47.288522 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0858278 (* 1 = 0.0858278 loss)\nI0608 02:04:47.288528 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0518292 (* 1 = 0.0518292 loss)\nI0608 02:04:47.288537 13573 sgd_solver.cpp:106] Iteration 28680, lr = 0.001\nI0608 02:05:00.123168 13573 solver.cpp:229] Iteration 28700, loss = 0.756357\nI0608 02:05:00.123234 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.185134 (* 1 = 0.185134 loss)\nI0608 02:05:00.123245 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.235618 (* 1 = 0.235618 loss)\nI0608 02:05:00.123250 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.234247 (* 1 = 0.234247 loss)\nI0608 02:05:00.123257 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0552384 (* 1 = 0.0552384 loss)\nI0608 02:05:00.123262 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228458 (* 1 = 0.228458 loss)\nI0608 02:05:00.123268 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0552529 (* 1 = 0.0552529 loss)\nI0608 02:05:00.123276 13573 sgd_solver.cpp:106] Iteration 28700, lr = 0.001\nI0608 02:05:13.298539 13573 solver.cpp:229] Iteration 28720, loss = 0.668112\nI0608 02:05:13.298604 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115157 (* 1 = 0.115157 loss)\nI0608 02:05:13.298614 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.169845 (* 1 = 0.169845 loss)\nI0608 02:05:13.298619 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0860093 (* 1 = 0.0860093 loss)\nI0608 02:05:13.298625 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110487 (* 1 = 0.0110487 loss)\nI0608 02:05:13.298630 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0802307 (* 1 = 0.0802307 loss)\nI0608 02:05:13.298636 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.011047 (* 1 = 0.011047 loss)\nI0608 02:05:13.298642 13573 sgd_solver.cpp:106] Iteration 28720, lr = 0.001\nI0608 02:05:26.078491 13573 solver.cpp:229] Iteration 28740, loss = 0.499766\nI0608 02:05:26.078610 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00231803 (* 1 = 0.00231803 loss)\nI0608 02:05:26.078622 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00193613 (* 1 = 0.00193613 loss)\nI0608 02:05:26.078629 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.215537 (* 1 = 0.215537 loss)\nI0608 02:05:26.078634 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0152896 (* 1 = 0.0152896 loss)\nI0608 02:05:26.078641 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.203441 (* 1 = 0.203441 loss)\nI0608 02:05:26.078647 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0152854 (* 1 = 0.0152854 loss)\nI0608 02:05:26.078655 13573 sgd_solver.cpp:106] Iteration 28740, lr = 0.001\nI0608 02:05:38.665199 13573 solver.cpp:229] Iteration 28760, loss = 0.615131\nI0608 02:05:38.665269 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.167382 (* 1 = 0.167382 loss)\nI0608 02:05:38.665279 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.352532 (* 1 = 0.352532 loss)\nI0608 02:05:38.665285 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0842428 (* 1 = 0.0842428 loss)\nI0608 02:05:38.665292 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158536 (* 1 = 0.0158536 loss)\nI0608 02:05:38.665297 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0905205 (* 1 = 0.0905205 loss)\nI0608 02:05:38.665303 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158509 (* 1 = 0.0158509 loss)\nI0608 02:05:38.665310 13573 sgd_solver.cpp:106] Iteration 28760, lr = 0.001\nI0608 02:05:51.722151 13573 solver.cpp:229] Iteration 28780, loss = 0.861032\nI0608 02:05:51.722234 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.124691 (* 1 = 0.124691 loss)\nI0608 02:05:51.722245 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.214913 (* 1 = 0.214913 loss)\nI0608 02:05:51.722252 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.257991 (* 1 = 0.257991 loss)\nI0608 02:05:51.722257 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.104435 (* 1 = 0.104435 loss)\nI0608 02:05:51.722264 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.264483 (* 1 = 0.264483 loss)\nI0608 02:05:51.722270 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.10447 (* 1 = 0.10447 loss)\nI0608 02:05:51.722276 13573 sgd_solver.cpp:106] Iteration 28780, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:06:04.916035 13573 solver.cpp:229] Iteration 28800, loss = 0.63118\nI0608 02:06:04.916103 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.151348 (* 1 = 0.151348 loss)\nI0608 02:06:04.916113 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.152552 (* 1 = 0.152552 loss)\nI0608 02:06:04.916119 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.26764 (* 1 = 0.26764 loss)\nI0608 02:06:04.916126 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0319234 (* 1 = 0.0319234 loss)\nI0608 02:06:04.916131 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.267533 (* 1 = 0.267533 loss)\nI0608 02:06:04.916136 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0319402 (* 1 = 0.0319402 loss)\nI0608 02:06:04.916143 13573 sgd_solver.cpp:106] Iteration 28800, lr = 0.001\nI0608 02:06:17.982719 13573 solver.cpp:229] Iteration 28820, loss = 0.991024\nI0608 02:06:17.982798 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.230634 (* 1 = 0.230634 loss)\nI0608 02:06:17.982808 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.125955 (* 1 = 0.125955 loss)\nI0608 02:06:17.982813 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241245 (* 1 = 0.241245 
loss)\nI0608 02:06:17.982820 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0551271 (* 1 = 0.0551271 loss)\nI0608 02:06:17.982825 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.244305 (* 1 = 0.244305 loss)\nI0608 02:06:17.982831 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0551083 (* 1 = 0.0551083 loss)\nI0608 02:06:17.982838 13573 sgd_solver.cpp:106] Iteration 28820, lr = 0.001\nI0608 02:06:30.761011 13573 solver.cpp:229] Iteration 28840, loss = 0.826515\nI0608 02:06:30.761075 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0837659 (* 1 = 0.0837659 loss)\nI0608 02:06:30.761085 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0669827 (* 1 = 0.0669827 loss)\nI0608 02:06:30.761091 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0865292 (* 1 = 0.0865292 loss)\nI0608 02:06:30.761097 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0183057 (* 1 = 0.0183057 loss)\nI0608 02:06:30.761103 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0782041 (* 1 = 0.0782041 loss)\nI0608 02:06:30.761109 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0183014 (* 1 = 0.0183014 loss)\nI0608 02:06:30.761116 13573 sgd_solver.cpp:106] Iteration 28840, lr = 0.001\nI0608 02:06:43.657411 13573 solver.cpp:229] Iteration 28860, loss = 0.569826\nI0608 02:06:43.657483 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00304607 (* 1 = 0.00304607 loss)\nI0608 02:06:43.657492 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00122252 (* 1 = 0.00122252 loss)\nI0608 02:06:43.657498 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.219979 (* 1 = 0.219979 loss)\nI0608 02:06:43.657505 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0302632 (* 1 = 0.0302632 loss)\nI0608 02:06:43.657510 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.198338 (* 1 = 0.198338 loss)\nI0608 02:06:43.657516 
#0: loss_bbox = 0.065617 (* 1 = 0.065617 loss)\nI0608 02:16:48.327718 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0647425 (* 1 = 0.0647425 loss)\nI0608 02:16:48.327724 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.206413 (* 1 = 0.206413 loss)\nI0608 02:16:48.327730 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0281466 (* 1 = 0.0281466 loss)\nI0608 02:16:48.327736 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.205608 (* 1 = 0.205608 loss)\nI0608 02:16:48.327742 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0281434 (* 1 = 0.0281434 loss)\nI0608 02:16:48.327749 13573 sgd_solver.cpp:106] Iteration 29800, lr = 0.001\nI0608 02:17:01.258371 13573 solver.cpp:229] Iteration 29820, loss = 0.978499\nI0608 02:17:01.258435 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.192985 (* 1 = 0.192985 loss)\nI0608 02:17:01.258445 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.178774 (* 1 = 0.178774 loss)\nI0608 02:17:01.258451 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.11095 (* 1 = 0.11095 loss)\nI0608 02:17:01.258457 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182386 (* 1 = 0.0182386 loss)\nI0608 02:17:01.258462 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.110138 (* 1 = 0.110138 loss)\nI0608 02:17:01.258468 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182413 (* 1 = 0.0182413 loss)\nI0608 02:17:01.258476 13573 sgd_solver.cpp:106] Iteration 29820, lr = 0.001\nI0608 02:17:14.270527 13573 solver.cpp:229] Iteration 29840, loss = 0.686315\nI0608 02:17:14.270592 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0894101 (* 1 = 0.0894101 loss)\nI0608 02:17:14.270602 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0577971 (* 1 = 0.0577971 loss)\nI0608 02:17:14.270608 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.223923 (* 1 = 0.223923 
loss)\nI0608 02:17:14.270613 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0155854 (* 1 = 0.0155854 loss)\nI0608 02:17:14.270619 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226896 (* 1 = 0.226896 loss)\nI0608 02:17:14.270624 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.015587 (* 1 = 0.015587 loss)\nI0608 02:17:14.270632 13573 sgd_solver.cpp:106] Iteration 29840, lr = 0.001\nI0608 02:17:27.377496 13573 solver.cpp:229] Iteration 29860, loss = 1.01386\nI0608 02:17:27.377570 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0671146 (* 1 = 0.0671146 loss)\nI0608 02:17:27.377580 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0406298 (* 1 = 0.0406298 loss)\nI0608 02:17:27.377586 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.404661 (* 1 = 0.404661 loss)\nI0608 02:17:27.377593 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0278578 (* 1 = 0.0278578 loss)\nI0608 02:17:27.377599 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.358051 (* 1 = 0.358051 loss)\nI0608 02:17:27.377604 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0278439 (* 1 = 0.0278439 loss)\nI0608 02:17:27.377610 13573 sgd_solver.cpp:106] Iteration 29860, lr = 0.001\nI0608 02:17:40.294066 13573 solver.cpp:229] Iteration 29880, loss = 0.35992\nI0608 02:17:40.294138 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0019699 (* 1 = 0.0019699 loss)\nI0608 02:17:40.294148 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0106961 (* 1 = 0.0106961 loss)\nI0608 02:17:40.294154 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144038 (* 1 = 0.144038 loss)\nI0608 02:17:40.294160 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00597199 (* 1 = 0.00597199 loss)\nI0608 02:17:40.294165 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.182905 (* 1 = 0.182905 loss)\nI0608 02:17:40.294172 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00597361 (* 1 = 0.00597361 loss)\nI0608 02:17:40.294178 13573 sgd_solver.cpp:106] Iteration 29880, lr = 0.001\nI0608 02:17:53.031366 13573 solver.cpp:229] Iteration 29900, loss = 1.32403\nI0608 02:17:53.031443 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.215333 (* 1 = 0.215333 loss)\nI0608 02:17:53.031452 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.298808 (* 1 = 0.298808 loss)\nI0608 02:17:53.031458 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.573416 (* 1 = 0.573416 loss)\nI0608 02:17:53.031464 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.193522 (* 1 = 0.193522 loss)\nI0608 02:17:53.031469 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.570429 (* 1 = 0.570429 loss)\nI0608 02:17:53.031476 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.19351 (* 1 = 0.19351 loss)\nI0608 02:17:53.031482 13573 sgd_solver.cpp:106] Iteration 29900, lr = 0.001\nI0608 02:18:05.831308 13573 solver.cpp:229] Iteration 29920, loss = 0.58503\nI0608 02:18:05.831379 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.146367 (* 1 = 0.146367 loss)\nI0608 02:18:05.831388 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0877891 (* 1 = 0.0877891 loss)\nI0608 02:18:05.831394 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.123332 (* 1 = 0.123332 loss)\nI0608 02:18:05.831400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0091528 (* 1 = 0.0091528 loss)\nI0608 02:18:05.831406 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.123316 (* 1 = 0.123316 loss)\nI0608 02:18:05.831413 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00915731 (* 1 = 0.00915731 loss)\nI0608 02:18:05.831419 13573 sgd_solver.cpp:106] Iteration 29920, lr = 0.001\nI0608 02:18:18.715847 13573 solver.cpp:229] Iteration 29940, loss = 1.55819\nI0608 02:18:18.715909 13573 solver.cpp:245]  
   Train net output #0: loss_bbox = 0.348098 (* 1 = 0.348098 loss)\nI0608 02:18:18.715919 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.358176 (* 1 = 0.358176 loss)\nI0608 02:18:18.715924 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.393241 (* 1 = 0.393241 loss)\nI0608 02:18:18.715930 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0891522 (* 1 = 0.0891522 loss)\nI0608 02:18:18.715936 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.386013 (* 1 = 0.386013 loss)\nI0608 02:18:18.715942 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0891442 (* 1 = 0.0891442 loss)\nI0608 02:18:18.715950 13573 sgd_solver.cpp:106] Iteration 29940, lr = 0.001\nI0608 02:18:31.841212 13573 solver.cpp:229] Iteration 29960, loss = 1.05589\nI0608 02:18:31.841285 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.22953 (* 1 = 0.22953 loss)\nI0608 02:18:31.841295 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.203348 (* 1 = 0.203348 loss)\nI0608 02:18:31.841301 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100194 (* 1 = 0.100194 loss)\nI0608 02:18:31.841307 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.113794 (* 1 = 0.113794 loss)\nI0608 02:18:31.841313 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10343 (* 1 = 0.10343 loss)\nI0608 02:18:31.841318 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.113799 (* 1 = 0.113799 loss)\nI0608 02:18:31.841325 13573 sgd_solver.cpp:106] Iteration 29960, lr = 0.001\nI0608 02:18:44.898180 13573 solver.cpp:229] Iteration 29980, loss = 1.03745\nI0608 02:18:44.898248 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00559952 (* 1 = 0.00559952 loss)\nI0608 02:18:44.898258 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0146398 (* 1 = 0.0146398 loss)\nI0608 02:18:44.898264 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.482566 (* 1 = 0.482566 
loss)\nI0608 02:18:44.898269 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0292512 (* 1 = 0.0292512 loss)\nI0608 02:18:44.898275 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.422343 (* 1 = 0.422343 loss)\nI0608 02:18:44.898280 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0292369 (* 1 = 0.0292369 loss)\nI0608 02:18:44.898288 13573 sgd_solver.cpp:106] Iteration 29980, lr = 0.001\nspeed: 0.645s / iter\nWrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_30000.caffemodel\nI0608 02:18:59.639376 13573 solver.cpp:229] Iteration 30000, loss = 1.39192\nI0608 02:18:59.639447 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.234614 (* 1 = 0.234614 loss)\nI0608 02:18:59.639461 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104384 (* 1 = 0.104384 loss)\nI0608 02:18:59.639472 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212293 (* 1 = 0.212293 loss)\nI0608 02:18:59.639482 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0347692 (* 1 = 0.0347692 loss)\nI0608 02:18:59.639490 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.230404 (* 1 = 0.230404 loss)\nI0608 02:18:59.639498 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0347709 (* 1 = 0.0347709 loss)\nI0608 02:18:59.639508 13573 sgd_solver.cpp:106] Iteration 30000, lr = 0.001\nI0608 02:19:12.438767 13573 solver.cpp:229] Iteration 30020, loss = 0.374112\nI0608 02:19:12.438843 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0582877 (* 1 = 0.0582877 loss)\nI0608 02:19:12.438853 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0452772 (* 1 = 0.0452772 loss)\nI0608 02:19:12.438858 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0926861 (* 1 = 0.0926861 loss)\nI0608 02:19:12.438863 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0355934 (* 1 = 
0.0355934 loss)\nI0608 02:19:12.438869 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0976472 (* 1 = 0.0976472 loss)\nI0608 02:19:12.438875 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0355827 (* 1 = 0.0355827 loss)\nI0608 02:19:12.438881 13573 sgd_solver.cpp:106] Iteration 30020, lr = 0.001\nI0608 02:19:25.212493 13573 solver.cpp:229] Iteration 30040, loss = 1.18743\nI0608 02:19:25.212560 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.232177 (* 1 = 0.232177 loss)\nI0608 02:19:25.212570 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138777 (* 1 = 0.138777 loss)\nI0608 02:19:25.212575 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.106607 (* 1 = 0.106607 loss)\nI0608 02:19:25.212581 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.049001 (* 1 = 0.049001 loss)\nI0608 02:19:25.212587 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0984281 (* 1 = 0.0984281 loss)\nI0608 02:19:25.212594 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0490013 (* 1 = 0.0490013 loss)\nI0608 02:19:25.212600 13573 sgd_solver.cpp:106] Iteration 30040, lr = 0.001\nI0608 02:19:38.266439 13573 solver.cpp:229] Iteration 30060, loss = 1.22317\nI0608 02:19:38.266501 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.216343 (* 1 = 0.216343 loss)\nI0608 02:19:38.266511 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.101821 (* 1 = 0.101821 loss)\nI0608 02:19:38.266518 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207411 (* 1 = 0.207411 loss)\nI0608 02:19:38.266525 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0519231 (* 1 = 0.0519231 loss)\nI0608 02:19:38.266530 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.202638 (* 1 = 0.202638 loss)\nI0608 02:19:38.266535 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0518979 (* 1 = 0.0518979 loss)\nI0608 02:19:38.266543 13573 
sgd_solver.cpp:106] Iteration 30060, lr = 0.001\nI0608 02:19:51.398756 13573 solver.cpp:229] Iteration 30080, loss = 1.08668\nI0608 02:19:51.398844 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.150984 (* 1 = 0.150984 loss)\nI0608 02:19:51.398854 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.249482 (* 1 = 0.249482 loss)\nI0608 02:19:51.398859 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.521751 (* 1 = 0.521751 loss)\nI0608 02:19:51.398864 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0791327 (* 1 = 0.0791327 loss)\nI0608 02:19:51.398870 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.553807 (* 1 = 0.553807 loss)\nI0608 02:19:51.398875 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0791068 (* 1 = 0.0791068 loss)\nI0608 02:19:51.398882 13573 sgd_solver.cpp:106] Iteration 30080, lr = 0.001\nI0608 02:20:04.475353 13573 solver.cpp:229] Iteration 30100, loss = 0.664197\nI0608 02:20:04.475421 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.117342 (* 1 = 0.117342 loss)\nI0608 02:20:04.475431 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0635004 (* 1 = 0.0635004 loss)\nI0608 02:20:04.475437 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.124196 (* 1 = 0.124196 loss)\nI0608 02:20:04.475443 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.154004 (* 1 = 0.154004 loss)\nI0608 02:20:04.475450 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.124947 (* 1 = 0.124947 loss)\nI0608 02:20:04.475455 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.154046 (* 1 = 0.154046 loss)\nI0608 02:20:04.475462 13573 sgd_solver.cpp:106] Iteration 30100, lr = 0.001\nI0608 02:20:17.559345 13573 solver.cpp:229] Iteration 30120, loss = 1.21627\nI0608 02:20:17.559420 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.304139 (* 1 = 0.304139 loss)\nI0608 02:20:17.559429 13573 solver.cpp:245]     
Train net output #1: loss_cls = 0.198733 (* 1 = 0.198733 loss)\nI0608 02:20:17.559435 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.607585 (* 1 = 0.607585 loss)\nI0608 02:20:17.559442 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.102219 (* 1 = 0.102219 loss)\nI0608 02:20:17.559448 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.593514 (* 1 = 0.593514 loss)\nI0608 02:20:17.559453 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.102242 (* 1 = 0.102242 loss)\nI0608 02:20:17.559460 13573 sgd_solver.cpp:106] Iteration 30120, lr = 0.001\nI0608 02:20:30.366871 13573 solver.cpp:229] Iteration 30140, loss = 0.637521\nI0608 02:20:30.366936 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.14898 (* 1 = 0.14898 loss)\nI0608 02:20:30.366946 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.050861 (* 1 = 0.050861 loss)\nI0608 02:20:30.366952 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.185545 (* 1 = 0.185545 loss)\nI0608 02:20:30.366958 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0733901 (* 1 = 0.0733901 loss)\nI0608 02:20:30.366964 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.190777 (* 1 = 0.190777 loss)\nI0608 02:20:30.366971 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0733931 (* 1 = 0.0733931 loss)\nI0608 02:20:30.366978 13573 sgd_solver.cpp:106] Iteration 30140, lr = 0.001\nI0608 02:20:43.143554 13573 solver.cpp:229] Iteration 30160, loss = 0.851347\nI0608 02:20:43.143761 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.243299 (* 1 = 0.243299 loss)\nI0608 02:20:43.143772 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.179299 (* 1 = 0.179299 loss)\nI0608 02:20:43.143779 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.309029 (* 1 = 0.309029 loss)\nI0608 02:20:43.143784 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.124109 (* 1 = 
0.124109 loss)\nI0608 02:20:43.143790 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.309101 (* 1 = 0.309101 loss)\nI0608 02:20:43.143796 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.124105 (* 1 = 0.124105 loss)\nI0608 02:20:43.143805 13573 sgd_solver.cpp:106] Iteration 30160, lr = 0.001\nI0608 02:20:55.800276 13573 solver.cpp:229] Iteration 30180, loss = 0.42832\nI0608 02:20:55.800338 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0427004 (* 1 = 0.0427004 loss)\nI0608 02:20:55.800348 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0488924 (* 1 = 0.0488924 loss)\nI0608 02:20:55.800354 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0954568 (* 1 = 0.0954568 loss)\nI0608 02:20:55.800360 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0574912 (* 1 = 0.0574912 loss)\nI0608 02:20:55.800365 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0952102 (* 1 = 0.0952102 loss)\nI0608 02:20:55.800371 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0574759 (* 1 = 0.0574759 loss)\nI0608 02:20:55.800379 13573 sgd_solver.cpp:106] Iteration 30180, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:21:08.690956 13573 solver.cpp:229] Iteration 30200, loss = 0.837062\nI0608 02:21:08.691026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.198325 (* 1 = 0.198325 loss)\nI0608 02:21:08.691035 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0824705 (* 1 = 0.0824705 loss)\nI0608 02:21:08.691042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.152994 (* 1 = 0.152994 loss)\nI0608 02:21:08.691047 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0122611 (* 1 = 0.0122611 loss)\nI0608 02:21:08.691053 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.142006 (* 1 = 0.142006 loss)\nI0608 02:21:08.691058 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0122666 (* 1 = 0.0122666 loss)\nI0608 
02:21:08.691066 13573 sgd_solver.cpp:106] Iteration 30200, lr = 0.001\nI0608 02:21:21.494354 13573 solver.cpp:229] Iteration 30220, loss = 1.79929\nI0608 02:21:21.494419 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0602542 (* 1 = 0.0602542 loss)\nI0608 02:21:21.494429 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0461733 (* 1 = 0.0461733 loss)\nI0608 02:21:21.494436 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.10335 (* 1 = 1.10335 loss)\nI0608 02:21:21.494441 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.331016 (* 1 = 0.331016 loss)\nI0608 02:21:21.494447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.07837 (* 1 = 1.07837 loss)\nI0608 02:21:21.494453 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.33536 (* 1 = 0.33536 loss)\nI0608 02:21:21.494460 13573 sgd_solver.cpp:106] Iteration 30220, lr = 0.001\nI0608 02:21:34.343937 13573 solver.cpp:229] Iteration 30240, loss = 0.930559\nI0608 02:21:34.344008 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.391861 (* 1 = 0.391861 loss)\nI0608 02:21:34.344018 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124877 (* 1 = 0.124877 loss)\nI0608 02:21:34.344024 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.177769 (* 1 = 0.177769 loss)\nI0608 02:21:34.344030 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0838598 (* 1 = 0.0838598 loss)\nI0608 02:21:34.344036 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176602 (* 1 = 0.176602 loss)\nI0608 02:21:34.344043 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0838562 (* 1 = 0.0838562 loss)\nI0608 02:21:34.344049 13573 sgd_solver.cpp:106] Iteration 30240, lr = 0.001\nI0608 02:21:47.075824 13573 solver.cpp:229] Iteration 30260, loss = 0.916852\nI0608 02:21:47.075889 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00507567 (* 1 = 0.00507567 loss)\nI0608 02:21:47.075899 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.0161962 (* 1 = 0.0161962 loss)\nI0608 02:21:47.075906 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.388712 (* 1 = 0.388712 loss)\nI0608 02:21:47.075911 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.12182 (* 1 = 0.12182 loss)\nI0608 02:21:47.075917 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408944 (* 1 = 0.408944 loss)\nI0608 02:21:47.075922 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.121822 (* 1 = 0.121822 loss)\nI0608 02:21:47.075928 13573 sgd_solver.cpp:106] Iteration 30260, lr = 0.001\nI0608 02:21:59.795574 13573 solver.cpp:229] Iteration 30280, loss = 1.04199\nI0608 02:21:59.795657 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.27834 (* 1 = 0.27834 loss)\nI0608 02:21:59.795667 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.1331 (* 1 = 0.1331 loss)\nI0608 02:21:59.795673 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.416588 (* 1 = 0.416588 loss)\nI0608 02:21:59.795680 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0581574 (* 1 = 0.0581574 loss)\nI0608 02:21:59.795684 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.424761 (* 1 = 0.424761 loss)\nI0608 02:21:59.795691 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0581559 (* 1 = 0.0581559 loss)\nI0608 02:21:59.795696 13573 sgd_solver.cpp:106] Iteration 30280, lr = 0.001\nI0608 02:22:12.618412 13573 solver.cpp:229] Iteration 30300, loss = 1.17863\nI0608 02:22:12.618485 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0550149 (* 1 = 0.0550149 loss)\nI0608 02:22:12.618494 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.027789 (* 1 = 0.027789 loss)\nI0608 02:22:12.618500 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.545119 (* 1 = 0.545119 loss)\nI0608 02:22:12.618506 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 
0.141678 (* 1 = 0.141678 loss)\nI0608 02:22:12.618511 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.581839 (* 1 = 0.581839 loss)\nI0608 02:22:12.618516 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.141738 (* 1 = 0.141738 loss)\nI0608 02:22:12.618523 13573 sgd_solver.cpp:106] Iteration 30300, lr = 0.001\nI0608 02:22:25.366264 13573 solver.cpp:229] Iteration 30320, loss = 0.432046\nI0608 02:22:25.366324 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.180272 (* 1 = 0.180272 loss)\nI0608 02:22:25.366335 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.137499 (* 1 = 0.137499 loss)\nI0608 02:22:25.366343 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0736136 (* 1 = 0.0736136 loss)\nI0608 02:22:25.366348 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00517753 (* 1 = 0.00517753 loss)\nI0608 02:22:25.366354 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0736296 (* 1 = 0.0736296 loss)\nI0608 02:22:25.366360 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00517756 (* 1 = 0.00517756 loss)\nI0608 02:22:25.366367 13573 sgd_solver.cpp:106] Iteration 30320, lr = 0.001\nI0608 02:22:38.220612 13573 solver.cpp:229] Iteration 30340, loss = 0.910223\nI0608 02:22:38.220679 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.087723 (* 1 = 0.087723 loss)\nI0608 02:22:38.220688 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0737214 (* 1 = 0.0737214 loss)\nI0608 02:22:38.220695 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.151087 (* 1 = 0.151087 loss)\nI0608 02:22:38.220700 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.022137 (* 1 = 0.022137 loss)\nI0608 02:22:38.220705 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.135426 (* 1 = 0.135426 loss)\nI0608 02:22:38.220710 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0221368 (* 1 = 0.0221368 loss)\nI0608 
02:22:38.220717 13573 sgd_solver.cpp:106] Iteration 30340, lr = 0.001\nI0608 02:22:51.230363 13573 solver.cpp:229] Iteration 30360, loss = 1.16497\nI0608 02:22:51.230439 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0606336 (* 1 = 0.0606336 loss)\nI0608 02:22:51.230448 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.155538 (* 1 = 0.155538 loss)\nI0608 02:22:51.230454 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.190484 (* 1 = 0.190484 loss)\nI0608 02:22:51.230460 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0175379 (* 1 = 0.0175379 loss)\nI0608 02:22:51.230465 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186867 (* 1 = 0.186867 loss)\nI0608 02:22:51.230471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0175359 (* 1 = 0.0175359 loss)\nI0608 02:22:51.230478 13573 sgd_solver.cpp:106] Iteration 30360, lr = 0.001\nI0608 02:23:04.088129 13573 solver.cpp:229] Iteration 30380, loss = 1.19847\nI0608 02:23:04.088192 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.175975 (* 1 = 0.175975 loss)\nI0608 02:23:04.088203 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0955606 (* 1 = 0.0955606 loss)\nI0608 02:23:04.088209 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0919376 (* 1 = 0.0919376 loss)\nI0608 02:23:04.088214 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0208166 (* 1 = 0.0208166 loss)\nI0608 02:23:04.088220 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0905459 (* 1 = 0.0905459 loss)\nI0608 02:23:04.088225 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0208159 (* 1 = 0.0208159 loss)\nI0608 02:23:04.088232 13573 sgd_solver.cpp:106] Iteration 30380, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:23:16.999682 13573 solver.cpp:229] Iteration 30400, loss = 0.459369\nI0608 02:23:16.999749 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105594 (* 1 = 0.105594 
loss)\nI0608 02:23:16.999759 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0592649 (* 1 = 0.0592649 loss)\nI0608 02:23:16.999765 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0830647 (* 1 = 0.0830647 loss)\nI0608 02:23:16.999771 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00539026 (* 1 = 0.00539026 loss)\nI0608 02:23:16.999778 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0905371 (* 1 = 0.0905371 loss)\nI0608 02:23:16.999783 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00538558 (* 1 = 0.00538558 loss)\nI0608 02:23:16.999789 13573 sgd_solver.cpp:106] Iteration 30400, lr = 0.001\nI0608 02:23:29.952392 13573 solver.cpp:229] Iteration 30420, loss = 0.498953\nI0608 02:23:29.952466 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.135833 (* 1 = 0.135833 loss)\nI0608 02:23:29.952474 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.140905 (* 1 = 0.140905 loss)\nI0608 02:23:29.952481 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0909433 (* 1 = 0.0909433 loss)\nI0608 02:23:29.952486 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0216557 (* 1 = 0.0216557 loss)\nI0608 02:23:29.952492 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0971324 (* 1 = 0.0971324 loss)\nI0608 02:23:29.952497 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0216568 (* 1 = 0.0216568 loss)\nI0608 02:23:29.952504 13573 sgd_solver.cpp:106] Iteration 30420, lr = 0.001\nI0608 02:23:42.737437 13573 solver.cpp:229] Iteration 30440, loss = 0.378508\nI0608 02:23:42.737500 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0996268 (* 1 = 0.0996268 loss)\nI0608 02:23:42.737509 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0903813 (* 1 = 0.0903813 loss)\nI0608 02:23:42.737515 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0882468 (* 1 = 0.0882468 loss)\nI0608 02:23:42.737521 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0100015 (* 1 = 0.0100015 loss)
I0608 02:23:42.737526 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0809294 (* 1 = 0.0809294 loss)
I0608 02:23:42.737532 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0100019 (* 1 = 0.0100019 loss)
I0608 02:23:42.737540 13573 sgd_solver.cpp:106] Iteration 30440, lr = 0.001
I0608 02:23:55.622690 13573 solver.cpp:229] Iteration 30460, loss = 1.01789
I0608 02:23:55.622756 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.229679 (* 1 = 0.229679 loss)
I0608 02:23:55.622766 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.174002 (* 1 = 0.174002 loss)
I0608 02:23:55.622783 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.392304 (* 1 = 0.392304 loss)
I0608 02:23:55.622792 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0797798 (* 1 = 0.0797798 loss)
I0608 02:23:55.622797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.382547 (* 1 = 0.382547 loss)
I0608 02:23:55.622802 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0797701 (* 1 = 0.0797701 loss)
I0608 02:23:55.622809 13573 sgd_solver.cpp:106] Iteration 30460, lr = 0.001
speed: 0.645s / iter
...
I0608 02:34:02.978132 13573 solver.cpp:229] Iteration 31400, loss = 0.413504
I0608 02:34:02.978202 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.143577 (* 1 = 0.143577 loss)
I0608 02:34:02.978212 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0804451 (* 1 = 0.0804451 loss)
I0608 02:34:02.978219 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0931319 (* 1 = 0.0931319 loss)
I0608 02:34:02.978224 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00650338 (* 1 = 0.00650338 loss)
I0608 02:34:02.978230 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100207 (* 1 = 0.100207 loss)
I0608 02:34:02.978236 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00650338 (* 1 = 0.00650338 loss)
I0608 02:34:02.978243 13573 sgd_solver.cpp:106] Iteration 31400, lr = 0.001
I0608 02:34:15.881784 13573 solver.cpp:229] Iteration 31420, loss = 0.631495
I0608 02:34:15.881855 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.171221 (* 1 = 0.171221 loss)\nI0608 02:34:15.881865 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.157563 (* 1 = 0.157563 loss)\nI0608 02:34:15.881870 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.148242 (* 1 = 0.148242 loss)\nI0608 02:34:15.881876 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0450953 (* 1 = 0.0450953 loss)\nI0608 02:34:15.881881 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.144679 (* 1 = 0.144679 loss)\nI0608 02:34:15.881887 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.045096 (* 1 = 0.045096 loss)\nI0608 02:34:15.881896 13573 sgd_solver.cpp:106] Iteration 31420, lr = 0.001\nI0608 02:34:29.066179 13573 solver.cpp:229] Iteration 31440, loss = 0.43571\nI0608 02:34:29.066241 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.128744 (* 1 = 0.128744 loss)\nI0608 02:34:29.066251 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0524739 (* 1 = 0.0524739 loss)\nI0608 02:34:29.066257 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0926946 (* 1 = 0.0926946 loss)\nI0608 02:34:29.066263 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0300864 (* 1 = 0.0300864 loss)\nI0608 02:34:29.066268 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0957315 (* 1 = 0.0957315 loss)\nI0608 02:34:29.066274 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0300994 (* 1 = 0.0300994 loss)\nI0608 02:34:29.066282 13573 sgd_solver.cpp:106] Iteration 31440, lr = 0.001\nI0608 02:34:41.961416 13573 solver.cpp:229] Iteration 31460, loss = 0.386494\nI0608 02:34:41.961488 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0028369 (* 1 = 0.0028369 loss)\nI0608 02:34:41.961498 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00527662 (* 1 = 0.00527662 loss)\nI0608 02:34:41.961504 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.165585 (* 1 = 0.165585 loss)\nI0608 
02:34:41.961510 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105167 (* 1 = 0.0105167 loss)\nI0608 02:34:41.961515 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187486 (* 1 = 0.187486 loss)\nI0608 02:34:41.961521 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105061 (* 1 = 0.0105061 loss)\nI0608 02:34:41.961529 13573 sgd_solver.cpp:106] Iteration 31460, lr = 0.001\nI0608 02:34:55.014484 13573 solver.cpp:229] Iteration 31480, loss = 0.612432\nI0608 02:34:55.014551 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00115283 (* 1 = 0.00115283 loss)\nI0608 02:34:55.014561 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0525059 (* 1 = 0.0525059 loss)\nI0608 02:34:55.014734 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.230834 (* 1 = 0.230834 loss)\nI0608 02:34:55.014756 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0087188 (* 1 = 0.0087188 loss)\nI0608 02:34:55.014825 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278067 (* 1 = 0.278067 loss)\nI0608 02:34:55.014855 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00872594 (* 1 = 0.00872594 loss)\nI0608 02:34:55.014876 13573 sgd_solver.cpp:106] Iteration 31480, lr = 0.001\nI0608 02:35:07.971071 13573 solver.cpp:229] Iteration 31500, loss = 0.888489\nI0608 02:35:07.971143 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101088 (* 1 = 0.101088 loss)\nI0608 02:35:07.971153 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.080854 (* 1 = 0.080854 loss)\nI0608 02:35:07.971159 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0899631 (* 1 = 0.0899631 loss)\nI0608 02:35:07.971165 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.010556 (* 1 = 0.010556 loss)\nI0608 02:35:07.971171 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0889546 (* 1 = 0.0889546 loss)\nI0608 02:35:07.971177 13573 solver.cpp:245]   
  Train net output #5: rpn_loss_bbox = 0.0105483 (* 1 = 0.0105483 loss)\nI0608 02:35:07.971184 13573 sgd_solver.cpp:106] Iteration 31500, lr = 0.001\nI0608 02:35:20.978088 13573 solver.cpp:229] Iteration 31520, loss = 0.769734\nI0608 02:35:20.978157 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.134483 (* 1 = 0.134483 loss)\nI0608 02:35:20.978166 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.249479 (* 1 = 0.249479 loss)\nI0608 02:35:20.978173 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144657 (* 1 = 0.144657 loss)\nI0608 02:35:20.978178 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0294509 (* 1 = 0.0294509 loss)\nI0608 02:35:20.978184 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138666 (* 1 = 0.138666 loss)\nI0608 02:35:20.978189 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0294457 (* 1 = 0.0294457 loss)\nI0608 02:35:20.978196 13573 sgd_solver.cpp:106] Iteration 31520, lr = 0.001\nI0608 02:35:33.888674 13573 solver.cpp:229] Iteration 31540, loss = 0.984873\nI0608 02:35:33.888746 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00206064 (* 1 = 0.00206064 loss)\nI0608 02:35:33.888756 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.029187 (* 1 = 0.029187 loss)\nI0608 02:35:33.888761 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.287534 (* 1 = 0.287534 loss)\nI0608 02:35:33.888767 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0548251 (* 1 = 0.0548251 loss)\nI0608 02:35:33.888772 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.288951 (* 1 = 0.288951 loss)\nI0608 02:35:33.888777 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0548232 (* 1 = 0.0548232 loss)\nI0608 02:35:33.888784 13573 sgd_solver.cpp:106] Iteration 31540, lr = 0.001\nI0608 02:35:46.732269 13573 solver.cpp:229] Iteration 31560, loss = 0.604839\nI0608 02:35:46.732336 13573 solver.cpp:245]     Train 
net output #0: loss_bbox = 0.0590151 (* 1 = 0.0590151 loss)\nI0608 02:35:46.732345 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0351392 (* 1 = 0.0351392 loss)\nI0608 02:35:46.732352 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.309399 (* 1 = 0.309399 loss)\nI0608 02:35:46.732357 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0292453 (* 1 = 0.0292453 loss)\nI0608 02:35:46.732362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.315576 (* 1 = 0.315576 loss)\nI0608 02:35:46.732367 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.029257 (* 1 = 0.029257 loss)\nI0608 02:35:46.732374 13573 sgd_solver.cpp:106] Iteration 31560, lr = 0.001\nI0608 02:35:59.588670 13573 solver.cpp:229] Iteration 31580, loss = 0.501966\nI0608 02:35:59.588791 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0101937 (* 1 = 0.0101937 loss)\nI0608 02:35:59.588800 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0110878 (* 1 = 0.0110878 loss)\nI0608 02:35:59.588806 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.242026 (* 1 = 0.242026 loss)\nI0608 02:35:59.588812 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0103882 (* 1 = 0.0103882 loss)\nI0608 02:35:59.588819 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.231028 (* 1 = 0.231028 loss)\nI0608 02:35:59.588824 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0103918 (* 1 = 0.0103918 loss)\nI0608 02:35:59.588830 13573 sgd_solver.cpp:106] Iteration 31580, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:36:12.517326 13573 solver.cpp:229] Iteration 31600, loss = 0.612702\nI0608 02:36:12.517391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000134381 (* 1 = 0.000134381 loss)\nI0608 02:36:12.517400 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0029227 (* 1 = 0.0029227 loss)\nI0608 02:36:12.517407 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.203476 (* 1 = 0.203476 loss)\nI0608 02:36:12.517412 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00861316 (* 1 = 0.00861316 loss)\nI0608 02:36:12.517418 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191118 (* 1 = 0.191118 loss)\nI0608 02:36:12.517424 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00860968 (* 1 = 0.00860968 loss)\nI0608 02:36:12.517431 13573 sgd_solver.cpp:106] Iteration 31600, lr = 0.001\nI0608 02:36:25.466473 13573 solver.cpp:229] Iteration 31620, loss = 0.89742\nI0608 02:36:25.466539 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.21929 (* 1 = 0.21929 loss)\nI0608 02:36:25.466549 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.271504 (* 1 = 0.271504 loss)\nI0608 02:36:25.466555 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.185045 (* 1 = 0.185045 loss)\nI0608 02:36:25.466562 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.18804 (* 1 = 0.18804 loss)\nI0608 02:36:25.466567 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191585 (* 1 = 0.191585 loss)\nI0608 02:36:25.466573 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.190304 (* 1 = 0.190304 loss)\nI0608 02:36:25.466580 13573 sgd_solver.cpp:106] Iteration 31620, lr = 0.001\nI0608 02:36:38.307982 13573 solver.cpp:229] Iteration 31640, loss = 0.513069\nI0608 02:36:38.308053 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.206352 (* 1 = 0.206352 loss)\nI0608 02:36:38.308063 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112084 (* 1 = 0.112084 loss)\nI0608 02:36:38.308068 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.108733 (* 1 = 0.108733 loss)\nI0608 02:36:38.308074 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.027683 (* 1 = 0.027683 loss)\nI0608 02:36:38.308079 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103376 (* 1 = 0.103376 loss)\nI0608 
02:36:38.308085 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0276845 (* 1 = 0.0276845 loss)\nI0608 02:36:38.308091 13573 sgd_solver.cpp:106] Iteration 31640, lr = 0.001\nI0608 02:36:51.130193 13573 solver.cpp:229] Iteration 31660, loss = 1.27561\nI0608 02:36:51.130267 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.187241 (* 1 = 0.187241 loss)\nI0608 02:36:51.130277 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.218674 (* 1 = 0.218674 loss)\nI0608 02:36:51.130283 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.559378 (* 1 = 0.559378 loss)\nI0608 02:36:51.130290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0956527 (* 1 = 0.0956527 loss)\nI0608 02:36:51.130295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.564521 (* 1 = 0.564521 loss)\nI0608 02:36:51.130300 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0956891 (* 1 = 0.0956891 loss)\nI0608 02:36:51.130307 13573 sgd_solver.cpp:106] Iteration 31660, lr = 0.001\nI0608 02:37:04.173341 13573 solver.cpp:229] Iteration 31680, loss = 0.35364\nI0608 02:37:04.173410 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0179877 (* 1 = 0.0179877 loss)\nI0608 02:37:04.173419 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0354035 (* 1 = 0.0354035 loss)\nI0608 02:37:04.173425 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.18502 (* 1 = 0.18502 loss)\nI0608 02:37:04.173431 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00829803 (* 1 = 0.00829803 loss)\nI0608 02:37:04.173436 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.165977 (* 1 = 0.165977 loss)\nI0608 02:37:04.173442 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00830084 (* 1 = 0.00830084 loss)\nI0608 02:37:04.173449 13573 sgd_solver.cpp:106] Iteration 31680, lr = 0.001\nI0608 02:37:16.971606 13573 solver.cpp:229] Iteration 31700, loss = 0.505844\nI0608 
02:37:16.971668 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.10304 (* 1 = 0.10304 loss)\nI0608 02:37:16.971678 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.211763 (* 1 = 0.211763 loss)\nI0608 02:37:16.971684 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0877829 (* 1 = 0.0877829 loss)\nI0608 02:37:16.971690 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0191466 (* 1 = 0.0191466 loss)\nI0608 02:37:16.971696 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0859871 (* 1 = 0.0859871 loss)\nI0608 02:37:16.971702 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.019146 (* 1 = 0.019146 loss)\nI0608 02:37:16.971709 13573 sgd_solver.cpp:106] Iteration 31700, lr = 0.001\nI0608 02:37:29.810988 13573 solver.cpp:229] Iteration 31720, loss = 0.536703\nI0608 02:37:29.811060 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00643847 (* 1 = 0.00643847 loss)\nI0608 02:37:29.811069 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0147792 (* 1 = 0.0147792 loss)\nI0608 02:37:29.811075 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.238621 (* 1 = 0.238621 loss)\nI0608 02:37:29.811080 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0137512 (* 1 = 0.0137512 loss)\nI0608 02:37:29.811086 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.208165 (* 1 = 0.208165 loss)\nI0608 02:37:29.811091 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0137618 (* 1 = 0.0137618 loss)\nI0608 02:37:29.811097 13573 sgd_solver.cpp:106] Iteration 31720, lr = 0.001\nI0608 02:37:42.460567 13573 solver.cpp:229] Iteration 31740, loss = 0.962339\nI0608 02:37:42.460629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199062 (* 1 = 0.199062 loss)\nI0608 02:37:42.460639 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0864525 (* 1 = 0.0864525 loss)\nI0608 02:37:42.460645 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.0894524 (* 1 = 0.0894524 loss)\nI0608 02:37:42.460654 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0118628 (* 1 = 0.0118628 loss)\nI0608 02:37:42.460660 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0859939 (* 1 = 0.0859939 loss)\nI0608 02:37:42.460666 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0118619 (* 1 = 0.0118619 loss)\nI0608 02:37:42.460675 13573 sgd_solver.cpp:106] Iteration 31740, lr = 0.001\nI0608 02:37:55.344372 13573 solver.cpp:229] Iteration 31760, loss = 0.62392\nI0608 02:37:55.344439 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0195604 (* 1 = 0.0195604 loss)\nI0608 02:37:55.344449 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0149433 (* 1 = 0.0149433 loss)\nI0608 02:37:55.344455 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.204411 (* 1 = 0.204411 loss)\nI0608 02:37:55.344460 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0142913 (* 1 = 0.0142913 loss)\nI0608 02:37:55.344465 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.205427 (* 1 = 0.205427 loss)\nI0608 02:37:55.344471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0142928 (* 1 = 0.0142928 loss)\nI0608 02:37:55.344477 13573 sgd_solver.cpp:106] Iteration 31760, lr = 0.001\nI0608 02:38:08.207516 13573 solver.cpp:229] Iteration 31780, loss = 1.0655\nI0608 02:38:08.207595 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000405331 (* 1 = 0.000405331 loss)\nI0608 02:38:08.207607 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00112532 (* 1 = 0.00112532 loss)\nI0608 02:38:08.207612 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.312138 (* 1 = 0.312138 loss)\nI0608 02:38:08.207618 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0487777 (* 1 = 0.0487777 loss)\nI0608 02:38:08.207624 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.326103 (* 1 = 0.326103 loss)\nI0608 02:38:08.207630 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0487687 (* 1 = 0.0487687 loss)\nI0608 02:38:08.207638 13573 sgd_solver.cpp:106] Iteration 31780, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:38:21.137887 13573 solver.cpp:229] Iteration 31800, loss = 0.922162\nI0608 02:38:21.137953 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0130291 (* 1 = 0.0130291 loss)\nI0608 02:38:21.137962 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0219581 (* 1 = 0.0219581 loss)\nI0608 02:38:21.137969 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.230791 (* 1 = 0.230791 loss)\nI0608 02:38:21.137974 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0237414 (* 1 = 0.0237414 loss)\nI0608 02:38:21.137980 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.233569 (* 1 = 0.233569 loss)\nI0608 02:38:21.137986 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0237477 (* 1 = 0.0237477 loss)\nI0608 02:38:21.137994 13573 sgd_solver.cpp:106] Iteration 31800, lr = 0.001\nI0608 02:38:34.005347 13573 solver.cpp:229] Iteration 31820, loss = 0.983361\nI0608 02:38:34.005414 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00435462 (* 1 = 0.00435462 loss)\nI0608 02:38:34.005422 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00209644 (* 1 = 0.00209644 loss)\nI0608 02:38:34.005429 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.598417 (* 1 = 0.598417 loss)\nI0608 02:38:34.005434 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.17822 (* 1 = 0.17822 loss)\nI0608 02:38:34.005440 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.625057 (* 1 = 0.625057 loss)\nI0608 02:38:34.005446 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.178164 (* 1 = 0.178164 loss)\nI0608 02:38:34.005453 13573 sgd_solver.cpp:106] Iteration 31820, lr = 0.001\nI0608 02:38:47.100337 13573 
solver.cpp:229] Iteration 31840, loss = 0.429712\nI0608 02:38:47.100415 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0635548 (* 1 = 0.0635548 loss)\nI0608 02:38:47.100425 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0792999 (* 1 = 0.0792999 loss)\nI0608 02:38:47.100430 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0899099 (* 1 = 0.0899099 loss)\nI0608 02:38:47.100436 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00698547 (* 1 = 0.00698547 loss)\nI0608 02:38:47.100442 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0872107 (* 1 = 0.0872107 loss)\nI0608 02:38:47.100448 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00698185 (* 1 = 0.00698185 loss)\nI0608 02:38:47.100455 13573 sgd_solver.cpp:106] Iteration 31840, lr = 0.001\nI0608 02:38:59.853157 13573 solver.cpp:229] Iteration 31860, loss = 0.720873\nI0608 02:38:59.853224 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00414799 (* 1 = 0.00414799 loss)\nI0608 02:38:59.853235 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00185096 (* 1 = 0.00185096 loss)\nI0608 02:38:59.853240 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.209731 (* 1 = 0.209731 loss)\nI0608 02:38:59.853246 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0169305 (* 1 = 0.0169305 loss)\nI0608 02:38:59.853252 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213922 (* 1 = 0.213922 loss)\nI0608 02:38:59.853258 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0169421 (* 1 = 0.0169421 loss)\nI0608 02:38:59.853266 13573 sgd_solver.cpp:106] Iteration 31860, lr = 0.001\nI0608 02:39:12.852460 13573 solver.cpp:229] Iteration 31880, loss = 1.46495\nI0608 02:39:12.852537 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.210014 (* 1 = 0.210014 loss)\nI0608 02:39:12.852547 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.173022 (* 1 = 0.173022 
loss)\nI0608 02:39:12.852553 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.682666 (* 1 = 0.682666 loss)\nI0608 02:39:12.852558 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.255468 (* 1 = 0.255468 loss)\nI0608 02:39:12.852565 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.677927 (* 1 = 0.677927 loss)\nI0608 02:39:12.852571 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.255419 (* 1 = 0.255419 loss)\nI0608 02:39:12.852577 13573 sgd_solver.cpp:106] Iteration 31880, lr = 0.001\nI0608 02:39:25.783522 13573 solver.cpp:229] Iteration 31900, loss = 0.963123\nI0608 02:39:25.783582 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000784776 (* 1 = 0.000784776 loss)\nI0608 02:39:25.783592 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000668847 (* 1 = 0.000668847 loss)\nI0608 02:39:25.783598 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.255462 (* 1 = 0.255462 loss)\nI0608 02:39:25.783604 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0347638 (* 1 = 0.0347638 loss)\nI0608 02:39:25.783610 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.249831 (* 1 = 0.249831 loss)\nI0608 02:39:25.783617 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0347795 (* 1 = 0.0347795 loss)\nI0608 02:39:25.783623 13573 sgd_solver.cpp:106] Iteration 31900, lr = 0.001\nI0608 02:39:38.477360 13573 solver.cpp:229] Iteration 31920, loss = 0.453951\nI0608 02:39:38.477430 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.159707 (* 1 = 0.159707 loss)\nI0608 02:39:38.477440 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0739655 (* 1 = 0.0739655 loss)\nI0608 02:39:38.477447 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0976618 (* 1 = 0.0976618 loss)\nI0608 02:39:38.477452 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158902 (* 1 = 0.0158902 loss)\nI0608 02:39:38.477458 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102587 (* 1 = 0.102587 loss)\nI0608 02:39:38.477463 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158906 (* 1 = 0.0158906 loss)\nI0608 02:39:38.477471 13573 sgd_solver.cpp:106] Iteration 31920, lr = 0.001\nI0608 02:39:51.499886 13573 solver.cpp:229] Iteration 31940, loss = 0.49025\nI0608 02:39:51.499974 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.167857 (* 1 = 0.167857 loss)\nI0608 02:39:51.499984 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.188167 (* 1 = 0.188167 loss)\nI0608 02:39:51.499991 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0861811 (* 1 = 0.0861811 loss)\nI0608 02:39:51.499997 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0169874 (* 1 = 0.0169874 loss)\nI0608 02:39:51.500002 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0909282 (* 1 = 0.0909282 loss)\nI0608 02:39:51.500008 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0169871 (* 1 = 0.0169871 loss)\nI0608 02:39:51.500016 13573 sgd_solver.cpp:106] Iteration 31940, lr = 0.001\nI0608 02:40:04.399296 13573 solver.cpp:229] Iteration 31960, loss = 0.75928\nI0608 02:40:04.399354 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0838366 (* 1 = 0.0838366 loss)\nI0608 02:40:04.399364 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0719288 (* 1 = 0.0719288 loss)\nI0608 02:40:04.399371 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.37704 (* 1 = 0.37704 loss)\nI0608 02:40:04.399377 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0615777 (* 1 = 0.0615777 loss)\nI0608 02:40:04.399382 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.386555 (* 1 = 0.386555 loss)\nI0608 02:40:04.399389 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0616098 (* 1 = 0.0616098 loss)\nI0608 02:40:04.399395 13573 sgd_solver.cpp:106] Iteration 31960, lr = 
0.001\nI0608 02:40:17.305498 13573 solver.cpp:229] Iteration 31980, loss = 1.81515\nI0608 02:40:17.305584 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00275077 (* 1 = 0.00275077 loss)\nI0608 02:40:17.305594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00195443 (* 1 = 0.00195443 loss)\nI0608 02:40:17.305601 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.61051 (* 1 = 0.61051 loss)\nI0608 02:40:17.305608 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.217893 (* 1 = 0.217893 loss)\nI0608 02:40:17.305613 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.595487 (* 1 = 0.595487 loss)\nI0608 02:40:17.305619 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.217847 (* 1 = 0.217847 loss)\nI0608 02:40:17.305626 13573 sgd_solver.cpp:106] Iteration 31980, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:40:30.263557 13573 solver.cpp:229] Iteration 32000, loss = 0.93172\nI0608 02:40:30.263629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121766 (* 1 = 0.121766 loss)\nI0608 02:40:30.263639 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.146799 (* 1 = 0.146799 loss)\nI0608 02:40:30.263645 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.376581 (* 1 = 0.376581 loss)\nI0608 02:40:30.263651 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0901589 (* 1 = 0.0901589 loss)\nI0608 02:40:30.263658 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.37549 (* 1 = 0.37549 loss)\nI0608 02:40:30.263664 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0901248 (* 1 = 0.0901248 loss)\nI0608 02:40:30.263670 13573 sgd_solver.cpp:106] Iteration 32000, lr = 0.001\nI0608 02:40:43.140105 13573 solver.cpp:229] Iteration 32020, loss = 0.446514\nI0608 02:40:43.140169 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0899182 (* 1 = 0.0899182 loss)\nI0608 02:40:43.140180 13573 solver.cpp:245]     Train net output 
#1: loss_cls = 0.0450005 (* 1 = 0.0450005 loss)
I0608 02:40:43.140187 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100629 (* 1 = 0.100629 loss)
I0608 02:40:43.140192 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0322029 (* 1 = 0.0322029 loss)
I0608 02:40:43.140198 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100724 (* 1 = 0.100724 loss)
I0608 02:40:43.140205 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.032203 (* 1 = 0.032203 loss)
I0608 02:40:43.140213 13573 sgd_solver.cpp:106] Iteration 32020, lr = 0.001
[... identical per-iteration log blocks for iterations 32040-32920 omitted: each block reports loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, and rpn_loss_bbox; total loss fluctuates between roughly 0.36 and 2.49, lr stays at 0.001, and speed holds at 0.645s / iter ...]
I0608 02:50:36.222896 13573 solver.cpp:229] Iteration 32940, loss = 1.26661
I0608 02:50:36.222960 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.215625 (* 1 = 0.215625 loss)
I0608 02:50:36.222970 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.185267 (* 1 = 0.185267 loss)
I0608 02:50:36.222975 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.165658 (* 1 = 0.165658 loss)
I0608 02:50:36.222982 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0247834 (* 1 = 0.0247834 loss)
I0608 02:50:36.222988 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.177464 (* 1 = 0.177464 loss)\nI0608 02:50:36.222995 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247914 (* 1 = 0.0247914 loss)\nI0608 02:50:36.223001 13573 sgd_solver.cpp:106] Iteration 32940, lr = 0.001\nI0608 02:50:49.054623 13573 solver.cpp:229] Iteration 32960, loss = 0.68181\nI0608 02:50:49.054694 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0520945 (* 1 = 0.0520945 loss)\nI0608 02:50:49.054704 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.161281 (* 1 = 0.161281 loss)\nI0608 02:50:49.054710 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0931121 (* 1 = 0.0931121 loss)\nI0608 02:50:49.054716 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106725 (* 1 = 0.106725 loss)\nI0608 02:50:49.054721 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0980024 (* 1 = 0.0980024 loss)\nI0608 02:50:49.054728 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106732 (* 1 = 0.106732 loss)\nI0608 02:50:49.054734 13573 sgd_solver.cpp:106] Iteration 32960, lr = 0.001\nI0608 02:51:01.880192 13573 solver.cpp:229] Iteration 32980, loss = 0.503269\nI0608 02:51:01.880267 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.129923 (* 1 = 0.129923 loss)\nI0608 02:51:01.880276 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110817 (* 1 = 0.110817 loss)\nI0608 02:51:01.880283 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0972829 (* 1 = 0.0972829 loss)\nI0608 02:51:01.880290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0176816 (* 1 = 0.0176816 loss)\nI0608 02:51:01.880295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.098557 (* 1 = 0.098557 loss)\nI0608 02:51:01.880300 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0176775 (* 1 = 0.0176775 loss)\nI0608 02:51:01.880306 13573 sgd_solver.cpp:106] Iteration 32980, lr = 0.001\nspeed: 0.645s / iter\nI0608 
02:51:14.843857 13573 solver.cpp:229] Iteration 33000, loss = 0.444556\nI0608 02:51:14.843955 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0720467 (* 1 = 0.0720467 loss)\nI0608 02:51:14.843971 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0541323 (* 1 = 0.0541323 loss)\nI0608 02:51:14.843984 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.08615 (* 1 = 0.08615 loss)\nI0608 02:51:14.844038 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0094845 (* 1 = 0.0094845 loss)\nI0608 02:51:14.844064 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.084462 (* 1 = 0.084462 loss)\nI0608 02:51:14.844080 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00946914 (* 1 = 0.00946914 loss)\nI0608 02:51:14.844101 13573 sgd_solver.cpp:106] Iteration 33000, lr = 0.001\nI0608 02:51:27.710657 13573 solver.cpp:229] Iteration 33020, loss = 0.729096\nI0608 02:51:27.710724 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.329325 (* 1 = 0.329325 loss)\nI0608 02:51:27.710733 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.235945 (* 1 = 0.235945 loss)\nI0608 02:51:27.710741 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174947 (* 1 = 0.174947 loss)\nI0608 02:51:27.710747 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0355121 (* 1 = 0.0355121 loss)\nI0608 02:51:27.710753 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.171642 (* 1 = 0.171642 loss)\nI0608 02:51:27.710759 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0355154 (* 1 = 0.0355154 loss)\nI0608 02:51:27.710767 13573 sgd_solver.cpp:106] Iteration 33020, lr = 0.001\nI0608 02:51:40.360020 13573 solver.cpp:229] Iteration 33040, loss = 1.0439\nI0608 02:51:40.360097 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0956764 (* 1 = 0.0956764 loss)\nI0608 02:51:40.360106 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0941487 (* 1 = 
0.0941487 loss)\nI0608 02:51:40.360113 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0973368 (* 1 = 0.0973368 loss)\nI0608 02:51:40.360118 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.030587 (* 1 = 0.030587 loss)\nI0608 02:51:40.360126 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0935925 (* 1 = 0.0935925 loss)\nI0608 02:51:40.360131 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0305773 (* 1 = 0.0305773 loss)\nI0608 02:51:40.360138 13573 sgd_solver.cpp:106] Iteration 33040, lr = 0.001\nI0608 02:51:53.406468 13573 solver.cpp:229] Iteration 33060, loss = 1.20839\nI0608 02:51:53.406539 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.200614 (* 1 = 0.200614 loss)\nI0608 02:51:53.406550 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.21294 (* 1 = 0.21294 loss)\nI0608 02:51:53.406556 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.135973 (* 1 = 0.135973 loss)\nI0608 02:51:53.406563 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0236897 (* 1 = 0.0236897 loss)\nI0608 02:51:53.406569 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139767 (* 1 = 0.139767 loss)\nI0608 02:51:53.406575 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0236875 (* 1 = 0.0236875 loss)\nI0608 02:51:53.406582 13573 sgd_solver.cpp:106] Iteration 33060, lr = 0.001\nI0608 02:52:06.449596 13573 solver.cpp:229] Iteration 33080, loss = 0.73453\nI0608 02:52:06.449673 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.188739 (* 1 = 0.188739 loss)\nI0608 02:52:06.449683 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.204958 (* 1 = 0.204958 loss)\nI0608 02:52:06.449690 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144344 (* 1 = 0.144344 loss)\nI0608 02:52:06.449697 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0120233 (* 1 = 0.0120233 loss)\nI0608 02:52:06.449702 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.146355 (* 1 = 0.146355 loss)\nI0608 02:52:06.449708 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.01202 (* 1 = 0.01202 loss)\nI0608 02:52:06.449717 13573 sgd_solver.cpp:106] Iteration 33080, lr = 0.001\nI0608 02:52:19.472627 13573 solver.cpp:229] Iteration 33100, loss = 0.78591\nI0608 02:52:19.472698 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00100927 (* 1 = 0.00100927 loss)\nI0608 02:52:19.472710 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00221729 (* 1 = 0.00221729 loss)\nI0608 02:52:19.472717 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136209 (* 1 = 0.136209 loss)\nI0608 02:52:19.472721 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00607178 (* 1 = 0.00607178 loss)\nI0608 02:52:19.472728 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175159 (* 1 = 0.175159 loss)\nI0608 02:52:19.472733 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00607653 (* 1 = 0.00607653 loss)\nI0608 02:52:19.472740 13573 sgd_solver.cpp:106] Iteration 33100, lr = 0.001\nI0608 02:52:32.156076 13573 solver.cpp:229] Iteration 33120, loss = 0.897991\nI0608 02:52:32.156147 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0369791 (* 1 = 0.0369791 loss)\nI0608 02:52:32.156157 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0314368 (* 1 = 0.0314368 loss)\nI0608 02:52:32.156163 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136028 (* 1 = 0.136028 loss)\nI0608 02:52:32.156169 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0019796 (* 1 = 0.0019796 loss)\nI0608 02:52:32.156174 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.149568 (* 1 = 0.149568 loss)\nI0608 02:52:32.156180 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00197927 (* 1 = 0.00197927 loss)\nI0608 02:52:32.156188 13573 sgd_solver.cpp:106] Iteration 33120, lr 
= 0.001\nI0608 02:52:45.129894 13573 solver.cpp:229] Iteration 33140, loss = 1.06301\nI0608 02:52:45.129967 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.139168 (* 1 = 0.139168 loss)\nI0608 02:52:45.129977 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0657439 (* 1 = 0.0657439 loss)\nI0608 02:52:45.129983 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0805411 (* 1 = 0.0805411 loss)\nI0608 02:52:45.129989 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0226513 (* 1 = 0.0226513 loss)\nI0608 02:52:45.129995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0870938 (* 1 = 0.0870938 loss)\nI0608 02:52:45.130002 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0226575 (* 1 = 0.0226575 loss)\nI0608 02:52:45.130008 13573 sgd_solver.cpp:106] Iteration 33140, lr = 0.001\nI0608 02:52:57.993118 13573 solver.cpp:229] Iteration 33160, loss = 1.20464\nI0608 02:52:57.993183 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0016954 (* 1 = 0.0016954 loss)\nI0608 02:52:57.993196 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0154818 (* 1 = 0.0154818 loss)\nI0608 02:52:57.993201 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.320959 (* 1 = 0.320959 loss)\nI0608 02:52:57.993208 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0371508 (* 1 = 0.0371508 loss)\nI0608 02:52:57.993213 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.316014 (* 1 = 0.316014 loss)\nI0608 02:52:57.993221 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0371257 (* 1 = 0.0371257 loss)\nI0608 02:52:57.993227 13573 sgd_solver.cpp:106] Iteration 33160, lr = 0.001\nI0608 02:53:10.857918 13573 solver.cpp:229] Iteration 33180, loss = 1.1145\nI0608 02:53:10.858047 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.07482 (* 1 = 0.07482 loss)\nI0608 02:53:10.858058 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.119495 (* 1 = 0.119495 loss)\nI0608 02:53:10.858064 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0975534 (* 1 = 0.0975534 loss)\nI0608 02:53:10.858070 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0358154 (* 1 = 0.0358154 loss)\nI0608 02:53:10.858077 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103124 (* 1 = 0.103124 loss)\nI0608 02:53:10.858083 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0357775 (* 1 = 0.0357775 loss)\nI0608 02:53:10.858089 13573 sgd_solver.cpp:106] Iteration 33180, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:53:23.690757 13573 solver.cpp:229] Iteration 33200, loss = 0.722433\nI0608 02:53:23.690834 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0173815 (* 1 = 0.0173815 loss)\nI0608 02:53:23.690842 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0694508 (* 1 = 0.0694508 loss)\nI0608 02:53:23.690848 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.431529 (* 1 = 0.431529 loss)\nI0608 02:53:23.690855 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.037976 (* 1 = 0.037976 loss)\nI0608 02:53:23.690860 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.40488 (* 1 = 0.40488 loss)\nI0608 02:53:23.690865 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0379567 (* 1 = 0.0379567 loss)\nI0608 02:53:23.690872 13573 sgd_solver.cpp:106] Iteration 33200, lr = 0.001\nI0608 02:53:36.812556 13573 solver.cpp:229] Iteration 33220, loss = 0.73795\nI0608 02:53:36.812628 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.180762 (* 1 = 0.180762 loss)\nI0608 02:53:36.812636 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.191783 (* 1 = 0.191783 loss)\nI0608 02:53:36.812641 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279737 (* 1 = 0.279737 loss)\nI0608 02:53:36.812649 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0330396 (* 1 = 
0.0330396 loss)\nI0608 02:53:36.812654 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.302969 (* 1 = 0.302969 loss)\nI0608 02:53:36.812659 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0330517 (* 1 = 0.0330517 loss)\nI0608 02:53:36.812666 13573 sgd_solver.cpp:106] Iteration 33220, lr = 0.001\nI0608 02:53:49.342610 13573 solver.cpp:229] Iteration 33240, loss = 0.55689\nI0608 02:53:49.342671 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0389107 (* 1 = 0.0389107 loss)\nI0608 02:53:49.342681 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0758233 (* 1 = 0.0758233 loss)\nI0608 02:53:49.342687 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.223187 (* 1 = 0.223187 loss)\nI0608 02:53:49.342692 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0026717 (* 1 = 0.0026717 loss)\nI0608 02:53:49.342697 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213001 (* 1 = 0.213001 loss)\nI0608 02:53:49.342703 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00268577 (* 1 = 0.00268577 loss)\nI0608 02:53:49.342710 13573 sgd_solver.cpp:106] Iteration 33240, lr = 0.001\nI0608 02:54:02.396291 13573 solver.cpp:229] Iteration 33260, loss = 1.05615\nI0608 02:54:02.396358 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0947254 (* 1 = 0.0947254 loss)\nI0608 02:54:02.396368 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105773 (* 1 = 0.105773 loss)\nI0608 02:54:02.396374 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.450546 (* 1 = 0.450546 loss)\nI0608 02:54:02.396379 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0835639 (* 1 = 0.0835639 loss)\nI0608 02:54:02.396385 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.46999 (* 1 = 0.46999 loss)\nI0608 02:54:02.396390 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0835554 (* 1 = 0.0835554 loss)\nI0608 02:54:02.396397 13573 
sgd_solver.cpp:106] Iteration 33260, lr = 0.001\nI0608 02:54:15.205515 13573 solver.cpp:229] Iteration 33280, loss = 0.661188\nI0608 02:54:15.205580 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0685594 (* 1 = 0.0685594 loss)\nI0608 02:54:15.205590 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.161833 (* 1 = 0.161833 loss)\nI0608 02:54:15.205595 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212608 (* 1 = 0.212608 loss)\nI0608 02:54:15.205601 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0596777 (* 1 = 0.0596777 loss)\nI0608 02:54:15.205606 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.21708 (* 1 = 0.21708 loss)\nI0608 02:54:15.205612 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0596417 (* 1 = 0.0596417 loss)\nI0608 02:54:15.205618 13573 sgd_solver.cpp:106] Iteration 33280, lr = 0.001\nI0608 02:54:27.959466 13573 solver.cpp:229] Iteration 33300, loss = 0.418191\nI0608 02:54:27.959528 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0715343 (* 1 = 0.0715343 loss)\nI0608 02:54:27.959537 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0398593 (* 1 = 0.0398593 loss)\nI0608 02:54:27.959543 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104339 (* 1 = 0.104339 loss)\nI0608 02:54:27.959549 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.057791 (* 1 = 0.057791 loss)\nI0608 02:54:27.959555 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0967618 (* 1 = 0.0967618 loss)\nI0608 02:54:27.959560 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0577442 (* 1 = 0.0577442 loss)\nI0608 02:54:27.959568 13573 sgd_solver.cpp:106] Iteration 33300, lr = 0.001\nI0608 02:54:40.878461 13573 solver.cpp:229] Iteration 33320, loss = 0.73194\nI0608 02:54:40.878527 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.163569 (* 1 = 0.163569 loss)\nI0608 02:54:40.878536 13573 solver.cpp:245]  
   Train net output #1: loss_cls = 0.132841 (* 1 = 0.132841 loss)\nI0608 02:54:40.878542 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.423451 (* 1 = 0.423451 loss)\nI0608 02:54:40.878548 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182808 (* 1 = 0.0182808 loss)\nI0608 02:54:40.878553 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.373679 (* 1 = 0.373679 loss)\nI0608 02:54:40.878559 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182808 (* 1 = 0.0182808 loss)\nI0608 02:54:40.878566 13573 sgd_solver.cpp:106] Iteration 33320, lr = 0.001\nI0608 02:54:53.645941 13573 solver.cpp:229] Iteration 33340, loss = 1.37032\nI0608 02:54:53.646003 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.35834 (* 1 = 0.35834 loss)\nI0608 02:54:53.646014 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202581 (* 1 = 0.202581 loss)\nI0608 02:54:53.646020 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41319 (* 1 = 0.41319 loss)\nI0608 02:54:53.646026 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0896101 (* 1 = 0.0896101 loss)\nI0608 02:54:53.646033 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.410209 (* 1 = 0.410209 loss)\nI0608 02:54:53.646037 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0895882 (* 1 = 0.0895882 loss)\nI0608 02:54:53.646044 13573 sgd_solver.cpp:106] Iteration 33340, lr = 0.001\nI0608 02:55:06.621642 13573 solver.cpp:229] Iteration 33360, loss = 0.998523\nI0608 02:55:06.621714 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.264605 (* 1 = 0.264605 loss)\nI0608 02:55:06.621723 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.18027 (* 1 = 0.18027 loss)\nI0608 02:55:06.621729 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.452674 (* 1 = 0.452674 loss)\nI0608 02:55:06.621736 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0628657 (* 1 = 
0.0628657 loss)\nI0608 02:55:06.621740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.458793 (* 1 = 0.458793 loss)\nI0608 02:55:06.621747 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0628308 (* 1 = 0.0628308 loss)\nI0608 02:55:06.621752 13573 sgd_solver.cpp:106] Iteration 33360, lr = 0.001\nI0608 02:55:19.398108 13573 solver.cpp:229] Iteration 33380, loss = 1.99021\nI0608 02:55:19.398175 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0875271 (* 1 = 0.0875271 loss)\nI0608 02:55:19.398186 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0812518 (* 1 = 0.0812518 loss)\nI0608 02:55:19.398192 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.30449 (* 1 = 0.30449 loss)\nI0608 02:55:19.398198 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0165031 (* 1 = 0.0165031 loss)\nI0608 02:55:19.398203 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.311983 (* 1 = 0.311983 loss)\nI0608 02:55:19.398210 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0165062 (* 1 = 0.0165062 loss)\nI0608 02:55:19.398216 13573 sgd_solver.cpp:106] Iteration 33380, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:55:32.347157 13573 solver.cpp:229] Iteration 33400, loss = 0.540744\nI0608 02:55:32.347219 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00058271 (* 1 = 0.00058271 loss)\nI0608 02:55:32.347229 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000905748 (* 1 = 0.000905748 loss)\nI0608 02:55:32.347235 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256471 (* 1 = 0.256471 loss)\nI0608 02:55:32.347241 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0277849 (* 1 = 0.0277849 loss)\nI0608 02:55:32.347246 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.198842 (* 1 = 0.198842 loss)\nI0608 02:55:32.347252 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0277974 (* 1 = 0.0277974 
loss)\nI0608 02:55:32.347259 13573 sgd_solver.cpp:106] Iteration 33400, lr = 0.001\nI0608 02:55:45.240840 13573 solver.cpp:229] Iteration 33420, loss = 0.471284\nI0608 02:55:45.240907 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0505175 (* 1 = 0.0505175 loss)\nI0608 02:55:45.240916 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0518312 (* 1 = 0.0518312 loss)\nI0608 02:55:45.240922 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.2054 (* 1 = 0.2054 loss)\nI0608 02:55:45.240928 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0250554 (* 1 = 0.0250554 loss)\nI0608 02:55:45.240934 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.195205 (* 1 = 0.195205 loss)\nI0608 02:55:45.240941 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0250472 (* 1 = 0.0250472 loss)\nI0608 02:55:45.240947 13573 sgd_solver.cpp:106] Iteration 33420, lr = 0.001\nI0608 02:55:58.289280 13573 solver.cpp:229] Iteration 33440, loss = 0.329488\nI0608 02:55:58.289350 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.028495 (* 1 = 0.028495 loss)\nI0608 02:55:58.289360 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0748349 (* 1 = 0.0748349 loss)\nI0608 02:55:58.289366 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0757786 (* 1 = 0.0757786 loss)\nI0608 02:55:58.289371 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0114916 (* 1 = 0.0114916 loss)\nI0608 02:55:58.289376 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0793757 (* 1 = 0.0793757 loss)\nI0608 02:55:58.289382 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0114904 (* 1 = 0.0114904 loss)\nI0608 02:55:58.289389 13573 sgd_solver.cpp:106] Iteration 33440, lr = 0.001\nI0608 02:56:11.201584 13573 solver.cpp:229] Iteration 33460, loss = 0.457209\nI0608 02:56:11.201649 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0278486 (* 1 = 0.0278486 
loss)\nI0608 02:56:11.201658 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0291382 (* 1 = 0.0291382 loss)\nI0608 02:56:11.201664 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0858526 (* 1 = 0.0858526 loss)\nI0608 02:56:11.201670 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0276435 (* 1 = 0.0276435 loss)\nI0608 02:56:11.201676 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0828669 (* 1 = 0.0828669 loss)\nI0608 02:56:11.201683 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.02765 (* 1 = 0.02765 loss)\nI0608 02:56:11.201689 13573 sgd_solver.cpp:106] Iteration 33460, lr = 0.001\nI0608 02:56:24.001003 13573 solver.cpp:229] Iteration 33480, loss = 1.13891\nI0608 02:56:24.001080 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.192199 (* 1 = 0.192199 loss)\nI0608 02:56:24.001091 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110608 (* 1 = 0.110608 loss)\nI0608 02:56:24.001096 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.437472 (* 1 = 0.437472 loss)\nI0608 02:56:24.001102 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0914124 (* 1 = 0.0914124 loss)\nI0608 02:56:24.001108 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.436073 (* 1 = 0.436073 loss)\nI0608 02:56:24.001114 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.091356 (* 1 = 0.091356 loss)\nI0608 02:56:24.001122 13573 sgd_solver.cpp:106] Iteration 33480, lr = 0.001\nI0608 02:56:36.972424 13573 solver.cpp:229] Iteration 33500, loss = 0.531478\nI0608 02:56:36.972494 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0513883 (* 1 = 0.0513883 loss)\nI0608 02:56:36.972504 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.107494 (* 1 = 0.107494 loss)\nI0608 02:56:36.972510 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0892221 (* 1 = 0.0892221 loss)\nI0608 02:56:36.972517 13573 solver.cpp:245]    
 Train net output #3: p2_rpn_loss_bbox = 0.00736307 (* 1 = 0.00736307 loss)\nI0608 02:56:36.972523 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0821712 (* 1 = 0.0821712 loss)\nI0608 02:56:36.972529 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00735343 (* 1 = 0.00735343 loss)\nI0608 02:56:36.972537 13573 sgd_solver.cpp:106] Iteration 33500, lr = 0.001\nI0608 02:56:49.902221 13573 solver.cpp:229] Iteration 33520, loss = 1.12143\nI0608 02:56:49.902287 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.391186 (* 1 = 0.391186 loss)\nI0608 02:56:49.902297 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.239709 (* 1 = 0.239709 loss)\nI0608 02:56:49.902302 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.489773 (* 1 = 0.489773 loss)\nI0608 02:56:49.902308 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.111877 (* 1 = 0.111877 loss)\nI0608 02:56:49.902314 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.48979 (* 1 = 0.48979 loss)\nI0608 02:56:49.902320 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.111855 (* 1 = 0.111855 loss)\nI0608 02:56:49.902328 13573 sgd_solver.cpp:106] Iteration 33520, lr = 0.001\nI0608 02:57:02.721762 13573 solver.cpp:229] Iteration 33540, loss = 0.811741\nI0608 02:57:02.721832 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.172308 (* 1 = 0.172308 loss)\nI0608 02:57:02.721842 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.141244 (* 1 = 0.141244 loss)\nI0608 02:57:02.721848 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.173447 (* 1 = 0.173447 loss)\nI0608 02:57:02.721853 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0676583 (* 1 = 0.0676583 loss)\nI0608 02:57:02.721858 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.172265 (* 1 = 0.172265 loss)\nI0608 02:57:02.721864 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0676807 (* 
1 = 0.0676807 loss)\nI0608 02:57:02.721870 13573 sgd_solver.cpp:106] Iteration 33540, lr = 0.001\nI0608 02:57:15.926373 13573 solver.cpp:229] Iteration 33560, loss = 1.11348\nI0608 02:57:15.926436 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.179237 (* 1 = 0.179237 loss)\nI0608 02:57:15.926445 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.224898 (* 1 = 0.224898 loss)\nI0608 02:57:15.926451 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.530445 (* 1 = 0.530445 loss)\nI0608 02:57:15.926457 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0926125 (* 1 = 0.0926125 loss)\nI0608 02:57:15.926463 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.51343 (* 1 = 0.51343 loss)\nI0608 02:57:15.926468 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0926427 (* 1 = 0.0926427 loss)\nI0608 02:57:15.926476 13573 sgd_solver.cpp:106] Iteration 33560, lr = 0.001\nI0608 02:57:28.835768 13573 solver.cpp:229] Iteration 33580, loss = 0.644336\nI0608 02:57:28.835845 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.240406 (* 1 = 0.240406 loss)\nI0608 02:57:28.835855 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.156845 (* 1 = 0.156845 loss)\nI0608 02:57:28.835861 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174584 (* 1 = 0.174584 loss)\nI0608 02:57:28.835866 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0135711 (* 1 = 0.0135711 loss)\nI0608 02:57:28.835872 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.207133 (* 1 = 0.207133 loss)\nI0608 02:57:28.835878 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0135746 (* 1 = 0.0135746 loss)\nI0608 02:57:28.835885 13573 sgd_solver.cpp:106] Iteration 33580, lr = 0.001\nspeed: 0.645s / iter\nI0608 02:57:41.977416 13573 solver.cpp:229] Iteration 33600, loss = 0.53734\nI0608 02:57:41.977479 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.177727 (* 
A representative excerpt of the training log (one full iteration block; the surrounding entries repeat the same pattern):

```
I0608 02:57:54.959740 13573 solver.cpp:229] Iteration 33620, loss = 0.580491
I0608 02:57:54.959806 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.1798 (* 1 = 0.1798 loss)
I0608 02:57:54.959816 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0722047 (* 1 = 0.0722047 loss)
I0608 02:57:54.959823 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0829786 (* 1 = 0.0829786 loss)
I0608 02:57:54.959828 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0118899 (* 1 = 0.0118899 loss)
I0608 02:57:54.959833 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0861607 (* 1 = 0.0861607 loss)
I0608 02:57:54.959839 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0118962 (* 1 = 0.0118962 loss)
I0608 02:57:54.959846 13573 sgd_solver.cpp:106] Iteration 33620, lr = 0.001
```

The log continues in the same format through iteration 34560: the total loss fluctuates between roughly 0.38 and 2.34, the learning rate stays at 0.001, and the training speed is reported as `speed: 0.645s / iter`.
loss)\nI0608 03:08:02.010823 13573 sgd_solver.cpp:106] Iteration 34560, lr = 0.001\nI0608 03:08:14.911970 13573 solver.cpp:229] Iteration 34580, loss = 0.815976\nI0608 03:08:14.912050 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.07157 (* 1 = 0.07157 loss)\nI0608 03:08:14.912060 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.172025 (* 1 = 0.172025 loss)\nI0608 03:08:14.912066 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.127595 (* 1 = 0.127595 loss)\nI0608 03:08:14.912072 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.207661 (* 1 = 0.207661 loss)\nI0608 03:08:14.912077 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122893 (* 1 = 0.122893 loss)\nI0608 03:08:14.912083 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.207989 (* 1 = 0.207989 loss)\nI0608 03:08:14.912089 13573 sgd_solver.cpp:106] Iteration 34580, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:08:27.806063 13573 solver.cpp:229] Iteration 34600, loss = 0.431868\nI0608 03:08:27.806133 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.034207 (* 1 = 0.034207 loss)\nI0608 03:08:27.806143 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0112285 (* 1 = 0.0112285 loss)\nI0608 03:08:27.806149 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.211423 (* 1 = 0.211423 loss)\nI0608 03:08:27.806154 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116339 (* 1 = 0.0116339 loss)\nI0608 03:08:27.806159 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.194113 (* 1 = 0.194113 loss)\nI0608 03:08:27.806165 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116408 (* 1 = 0.0116408 loss)\nI0608 03:08:27.806171 13573 sgd_solver.cpp:106] Iteration 34600, lr = 0.001\nI0608 03:08:40.849086 13573 solver.cpp:229] Iteration 34620, loss = 0.394785\nI0608 03:08:40.849150 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0429289 (* 1 = 0.0429289 
loss)\nI0608 03:08:40.849159 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0955495 (* 1 = 0.0955495 loss)\nI0608 03:08:40.849165 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.109402 (* 1 = 0.109402 loss)\nI0608 03:08:40.849171 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00228255 (* 1 = 0.00228255 loss)\nI0608 03:08:40.849177 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103336 (* 1 = 0.103336 loss)\nI0608 03:08:40.849184 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00228245 (* 1 = 0.00228245 loss)\nI0608 03:08:40.849189 13573 sgd_solver.cpp:106] Iteration 34620, lr = 0.001\nI0608 03:08:53.412278 13573 solver.cpp:229] Iteration 34640, loss = 0.857643\nI0608 03:08:53.412359 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101418 (* 1 = 0.101418 loss)\nI0608 03:08:53.412369 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.253141 (* 1 = 0.253141 loss)\nI0608 03:08:53.412375 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.298724 (* 1 = 0.298724 loss)\nI0608 03:08:53.412380 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0429802 (* 1 = 0.0429802 loss)\nI0608 03:08:53.412386 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.30126 (* 1 = 0.30126 loss)\nI0608 03:08:53.412392 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.042982 (* 1 = 0.042982 loss)\nI0608 03:08:53.412398 13573 sgd_solver.cpp:106] Iteration 34640, lr = 0.001\nI0608 03:09:06.425837 13573 solver.cpp:229] Iteration 34660, loss = 0.939804\nI0608 03:09:06.425897 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.246043 (* 1 = 0.246043 loss)\nI0608 03:09:06.425906 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150896 (* 1 = 0.150896 loss)\nI0608 03:09:06.425912 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.399995 (* 1 = 0.399995 loss)\nI0608 03:09:06.425918 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.0858897 (* 1 = 0.0858897 loss)\nI0608 03:09:06.425925 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.395269 (* 1 = 0.395269 loss)\nI0608 03:09:06.425930 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0858947 (* 1 = 0.0858947 loss)\nI0608 03:09:06.425936 13573 sgd_solver.cpp:106] Iteration 34660, lr = 0.001\nI0608 03:09:19.315224 13573 solver.cpp:229] Iteration 34680, loss = 1.49504\nI0608 03:09:19.315296 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00901045 (* 1 = 0.00901045 loss)\nI0608 03:09:19.315305 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0222887 (* 1 = 0.0222887 loss)\nI0608 03:09:19.315311 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207888 (* 1 = 0.207888 loss)\nI0608 03:09:19.315318 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0180454 (* 1 = 0.0180454 loss)\nI0608 03:09:19.315323 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186254 (* 1 = 0.186254 loss)\nI0608 03:09:19.315330 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0180448 (* 1 = 0.0180448 loss)\nI0608 03:09:19.315337 13573 sgd_solver.cpp:106] Iteration 34680, lr = 0.001\nI0608 03:09:32.224721 13573 solver.cpp:229] Iteration 34700, loss = 0.740308\nI0608 03:09:32.224795 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0538953 (* 1 = 0.0538953 loss)\nI0608 03:09:32.224805 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0606207 (* 1 = 0.0606207 loss)\nI0608 03:09:32.224812 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.434066 (* 1 = 0.434066 loss)\nI0608 03:09:32.224817 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0546937 (* 1 = 0.0546937 loss)\nI0608 03:09:32.224822 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.448791 (* 1 = 0.448791 loss)\nI0608 03:09:32.224828 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 
0.0546888 (* 1 = 0.0546888 loss)\nI0608 03:09:32.224835 13573 sgd_solver.cpp:106] Iteration 34700, lr = 0.001\nI0608 03:09:45.147305 13573 solver.cpp:229] Iteration 34720, loss = 0.548199\nI0608 03:09:45.147367 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0955686 (* 1 = 0.0955686 loss)\nI0608 03:09:45.147377 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0910033 (* 1 = 0.0910033 loss)\nI0608 03:09:45.147383 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.254199 (* 1 = 0.254199 loss)\nI0608 03:09:45.147390 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.017016 (* 1 = 0.017016 loss)\nI0608 03:09:45.147397 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.24821 (* 1 = 0.24821 loss)\nI0608 03:09:45.147403 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0170236 (* 1 = 0.0170236 loss)\nI0608 03:09:45.147409 13573 sgd_solver.cpp:106] Iteration 34720, lr = 0.001\nI0608 03:09:58.006862 13573 solver.cpp:229] Iteration 34740, loss = 0.527028\nI0608 03:09:58.006932 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.233525 (* 1 = 0.233525 loss)\nI0608 03:09:58.006942 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.146922 (* 1 = 0.146922 loss)\nI0608 03:09:58.006947 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.092281 (* 1 = 0.092281 loss)\nI0608 03:09:58.006953 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0215325 (* 1 = 0.0215325 loss)\nI0608 03:09:58.006959 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0900275 (* 1 = 0.0900275 loss)\nI0608 03:09:58.006964 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0215454 (* 1 = 0.0215454 loss)\nI0608 03:09:58.006971 13573 sgd_solver.cpp:106] Iteration 34740, lr = 0.001\nI0608 03:10:10.878381 13573 solver.cpp:229] Iteration 34760, loss = 2.25262\nI0608 03:10:10.878455 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.198434 (* 1 = 
0.198434 loss)\nI0608 03:10:10.878465 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.240296 (* 1 = 0.240296 loss)\nI0608 03:10:10.878473 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.11982 (* 1 = 1.11982 loss)\nI0608 03:10:10.878478 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.518337 (* 1 = 0.518337 loss)\nI0608 03:10:10.878484 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.118 (* 1 = 1.118 loss)\nI0608 03:10:10.878490 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.517097 (* 1 = 0.517097 loss)\nI0608 03:10:10.878497 13573 sgd_solver.cpp:106] Iteration 34760, lr = 0.001\nI0608 03:10:23.613574 13573 solver.cpp:229] Iteration 34780, loss = 0.914005\nI0608 03:10:23.613636 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.199072 (* 1 = 0.199072 loss)\nI0608 03:10:23.613646 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.20547 (* 1 = 0.20547 loss)\nI0608 03:10:23.613652 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.470542 (* 1 = 0.470542 loss)\nI0608 03:10:23.613658 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0889505 (* 1 = 0.0889505 loss)\nI0608 03:10:23.613664 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.454937 (* 1 = 0.454937 loss)\nI0608 03:10:23.613669 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.088971 (* 1 = 0.088971 loss)\nI0608 03:10:23.613677 13573 sgd_solver.cpp:106] Iteration 34780, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:10:36.657570 13573 solver.cpp:229] Iteration 34800, loss = 0.471413\nI0608 03:10:36.657637 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0501576 (* 1 = 0.0501576 loss)\nI0608 03:10:36.657647 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.066578 (* 1 = 0.066578 loss)\nI0608 03:10:36.657654 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0872846 (* 1 = 0.0872846 loss)\nI0608 03:10:36.657658 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0261304 (* 1 = 0.0261304 loss)\nI0608 03:10:36.657665 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0864232 (* 1 = 0.0864232 loss)\nI0608 03:10:36.657670 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0261428 (* 1 = 0.0261428 loss)\nI0608 03:10:36.657676 13573 sgd_solver.cpp:106] Iteration 34800, lr = 0.001\nI0608 03:10:49.487969 13573 solver.cpp:229] Iteration 34820, loss = 0.536085\nI0608 03:10:49.488093 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0768141 (* 1 = 0.0768141 loss)\nI0608 03:10:49.488102 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0373022 (* 1 = 0.0373022 loss)\nI0608 03:10:49.488109 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.247991 (* 1 = 0.247991 loss)\nI0608 03:10:49.488116 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110674 (* 1 = 0.0110674 loss)\nI0608 03:10:49.488121 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.246138 (* 1 = 0.246138 loss)\nI0608 03:10:49.488127 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110672 (* 1 = 0.0110672 loss)\nI0608 03:10:49.488134 13573 sgd_solver.cpp:106] Iteration 34820, lr = 0.001\nI0608 03:11:02.231253 13573 solver.cpp:229] Iteration 34840, loss = 0.343418\nI0608 03:11:02.231315 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0368766 (* 1 = 0.0368766 loss)\nI0608 03:11:02.231330 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0265424 (* 1 = 0.0265424 loss)\nI0608 03:11:02.231338 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.124533 (* 1 = 0.124533 loss)\nI0608 03:11:02.231343 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0336505 (* 1 = 0.0336505 loss)\nI0608 03:11:02.231350 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102224 (* 1 = 0.102224 loss)\nI0608 03:11:02.231356 13573 solver.cpp:245]     Train net output 
#5: rpn_loss_bbox = 0.0336572 (* 1 = 0.0336572 loss)\nI0608 03:11:02.231364 13573 sgd_solver.cpp:106] Iteration 34840, lr = 0.001\nI0608 03:11:14.883123 13573 solver.cpp:229] Iteration 34860, loss = 0.627994\nI0608 03:11:14.883189 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0558427 (* 1 = 0.0558427 loss)\nI0608 03:11:14.883199 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0798141 (* 1 = 0.0798141 loss)\nI0608 03:11:14.883204 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.275773 (* 1 = 0.275773 loss)\nI0608 03:11:14.883210 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0336351 (* 1 = 0.0336351 loss)\nI0608 03:11:14.883215 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.301933 (* 1 = 0.301933 loss)\nI0608 03:11:14.883221 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0336277 (* 1 = 0.0336277 loss)\nI0608 03:11:14.883227 13573 sgd_solver.cpp:106] Iteration 34860, lr = 0.001\nI0608 03:11:27.750919 13573 solver.cpp:229] Iteration 34880, loss = 1.40819\nI0608 03:11:27.750988 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.063264 (* 1 = 0.063264 loss)\nI0608 03:11:27.750998 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0729575 (* 1 = 0.0729575 loss)\nI0608 03:11:27.751004 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.482415 (* 1 = 0.482415 loss)\nI0608 03:11:27.751010 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0202243 (* 1 = 0.0202243 loss)\nI0608 03:11:27.751015 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.455579 (* 1 = 0.455579 loss)\nI0608 03:11:27.751021 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0202242 (* 1 = 0.0202242 loss)\nI0608 03:11:27.751029 13573 sgd_solver.cpp:106] Iteration 34880, lr = 0.001\nI0608 03:11:40.607442 13573 solver.cpp:229] Iteration 34900, loss = 0.582768\nI0608 03:11:40.607504 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.0617912 (* 1 = 0.0617912 loss)\nI0608 03:11:40.607514 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0974104 (* 1 = 0.0974104 loss)\nI0608 03:11:40.607520 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113598 (* 1 = 0.113598 loss)\nI0608 03:11:40.607525 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.041289 (* 1 = 0.041289 loss)\nI0608 03:11:40.607532 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.117185 (* 1 = 0.117185 loss)\nI0608 03:11:40.607537 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0412882 (* 1 = 0.0412882 loss)\nI0608 03:11:40.607544 13573 sgd_solver.cpp:106] Iteration 34900, lr = 0.001\nI0608 03:11:53.640285 13573 solver.cpp:229] Iteration 34920, loss = 0.961189\nI0608 03:11:53.640352 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.244493 (* 1 = 0.244493 loss)\nI0608 03:11:53.640362 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.293987 (* 1 = 0.293987 loss)\nI0608 03:11:53.640368 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.18837 (* 1 = 0.18837 loss)\nI0608 03:11:53.640374 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0552672 (* 1 = 0.0552672 loss)\nI0608 03:11:53.640379 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.182378 (* 1 = 0.182378 loss)\nI0608 03:11:53.640385 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0552687 (* 1 = 0.0552687 loss)\nI0608 03:11:53.640391 13573 sgd_solver.cpp:106] Iteration 34920, lr = 0.001\nI0608 03:12:06.361148 13573 solver.cpp:229] Iteration 34940, loss = 1.74307\nI0608 03:12:06.361212 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.184243 (* 1 = 0.184243 loss)\nI0608 03:12:06.361222 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.364983 (* 1 = 0.364983 loss)\nI0608 03:12:06.361228 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.562744 (* 1 = 0.562744 loss)\nI0608 
03:12:06.361233 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.141952 (* 1 = 0.141952 loss)\nI0608 03:12:06.361239 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.547365 (* 1 = 0.547365 loss)\nI0608 03:12:06.361244 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.141927 (* 1 = 0.141927 loss)\nI0608 03:12:06.361251 13573 sgd_solver.cpp:106] Iteration 34940, lr = 0.001\nI0608 03:12:19.360990 13573 solver.cpp:229] Iteration 34960, loss = 2.14542\nI0608 03:12:19.361124 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.113068 (* 1 = 0.113068 loss)\nI0608 03:12:19.361135 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0855296 (* 1 = 0.0855296 loss)\nI0608 03:12:19.361140 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.13662 (* 1 = 1.13662 loss)\nI0608 03:12:19.361146 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.632231 (* 1 = 0.632231 loss)\nI0608 03:12:19.361152 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.12952 (* 1 = 1.12952 loss)\nI0608 03:12:19.361157 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.632125 (* 1 = 0.632125 loss)\nI0608 03:12:19.361165 13573 sgd_solver.cpp:106] Iteration 34960, lr = 0.001\nI0608 03:12:32.390365 13573 solver.cpp:229] Iteration 34980, loss = 0.534651\nI0608 03:12:32.390426 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.152613 (* 1 = 0.152613 loss)\nI0608 03:12:32.390435 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.131984 (* 1 = 0.131984 loss)\nI0608 03:12:32.390441 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0985113 (* 1 = 0.0985113 loss)\nI0608 03:12:32.390447 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0650638 (* 1 = 0.0650638 loss)\nI0608 03:12:32.390452 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0977639 (* 1 = 0.0977639 loss)\nI0608 03:12:32.390458 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0650492 (* 1 = 0.0650492 loss)\nI0608 03:12:32.390465 13573 sgd_solver.cpp:106] Iteration 34980, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:12:45.372997 13573 solver.cpp:229] Iteration 35000, loss = 1.79913\nI0608 03:12:45.373066 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.197369 (* 1 = 0.197369 loss)\nI0608 03:12:45.373076 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.168102 (* 1 = 0.168102 loss)\nI0608 03:12:45.373082 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.06851 (* 1 = 1.06851 loss)\nI0608 03:12:45.373088 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.323338 (* 1 = 0.323338 loss)\nI0608 03:12:45.373095 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.04834 (* 1 = 1.04834 loss)\nI0608 03:12:45.373100 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.323243 (* 1 = 0.323243 loss)\nI0608 03:12:45.373108 13573 sgd_solver.cpp:106] Iteration 35000, lr = 0.001\nI0608 03:12:58.290500 13573 solver.cpp:229] Iteration 35020, loss = 0.348042\nI0608 03:12:58.290577 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0876304 (* 1 = 0.0876304 loss)\nI0608 03:12:58.290587 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0930379 (* 1 = 0.0930379 loss)\nI0608 03:12:58.290594 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0971525 (* 1 = 0.0971525 loss)\nI0608 03:12:58.290601 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00847195 (* 1 = 0.00847195 loss)\nI0608 03:12:58.290606 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0920672 (* 1 = 0.0920672 loss)\nI0608 03:12:58.290613 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00846453 (* 1 = 0.00846453 loss)\nI0608 03:12:58.290621 13573 sgd_solver.cpp:106] Iteration 35020, lr = 0.001\nI0608 03:13:11.296691 13573 solver.cpp:229] Iteration 35040, loss = 0.921378\nI0608 03:13:11.296753 13573 solver.cpp:245]    
 Train net output #0: loss_bbox = 0.206305 (* 1 = 0.206305 loss)\nI0608 03:13:11.296762 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0566967 (* 1 = 0.0566967 loss)\nI0608 03:13:11.296769 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.443241 (* 1 = 0.443241 loss)\nI0608 03:13:11.296774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0968755 (* 1 = 0.0968755 loss)\nI0608 03:13:11.296780 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.445761 (* 1 = 0.445761 loss)\nI0608 03:13:11.296787 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0969874 (* 1 = 0.0969874 loss)\nI0608 03:13:11.296794 13573 sgd_solver.cpp:106] Iteration 35040, lr = 0.001\nI0608 03:13:24.271049 13573 solver.cpp:229] Iteration 35060, loss = 0.694921\nI0608 03:13:24.271136 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0593282 (* 1 = 0.0593282 loss)\nI0608 03:13:24.271147 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0935332 (* 1 = 0.0935332 loss)\nI0608 03:13:24.271154 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.32094 (* 1 = 0.32094 loss)\nI0608 03:13:24.271160 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0466489 (* 1 = 0.0466489 loss)\nI0608 03:13:24.271167 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.314686 (* 1 = 0.314686 loss)\nI0608 03:13:24.271173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0466391 (* 1 = 0.0466391 loss)\nI0608 03:13:24.271179 13573 sgd_solver.cpp:106] Iteration 35060, lr = 0.001\nI0608 03:13:37.046691 13573 solver.cpp:229] Iteration 35080, loss = 0.774775\nI0608 03:13:37.046767 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.104344 (* 1 = 0.104344 loss)\nI0608 03:13:37.046789 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.082026 (* 1 = 0.082026 loss)\nI0608 03:13:37.046797 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0926016 (* 1 = 
0.0926016 loss)\nI0608 03:13:37.046802 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0232347 (* 1 = 0.0232347 loss)\nI0608 03:13:37.046808 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0938991 (* 1 = 0.0938991 loss)\nI0608 03:13:37.046813 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0232417 (* 1 = 0.0232417 loss)\nI0608 03:13:37.046820 13573 sgd_solver.cpp:106] Iteration 35080, lr = 0.001\nI0608 03:13:49.855985 13573 solver.cpp:229] Iteration 35100, loss = 0.473963\nI0608 03:13:49.856052 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.148633 (* 1 = 0.148633 loss)\nI0608 03:13:49.856063 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0803826 (* 1 = 0.0803826 loss)\nI0608 03:13:49.856070 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0899985 (* 1 = 0.0899985 loss)\nI0608 03:13:49.856076 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0530353 (* 1 = 0.0530353 loss)\nI0608 03:13:49.856082 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0932795 (* 1 = 0.0932795 loss)\nI0608 03:13:49.856088 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0530303 (* 1 = 0.0530303 loss)\nI0608 03:13:49.856096 13573 sgd_solver.cpp:106] Iteration 35100, lr = 0.001\nI0608 03:14:02.819128 13573 solver.cpp:229] Iteration 35120, loss = 1.06357\nI0608 03:14:02.819197 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.179878 (* 1 = 0.179878 loss)\nI0608 03:14:02.819207 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.131206 (* 1 = 0.131206 loss)\nI0608 03:14:02.819214 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207188 (* 1 = 0.207188 loss)\nI0608 03:14:02.819221 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0437111 (* 1 = 0.0437111 loss)\nI0608 03:14:02.819226 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.225132 (* 1 = 0.225132 loss)\nI0608 03:14:02.819233 
13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.043694 (* 1 = 0.043694 loss)\nI0608 03:14:02.819241 13573 sgd_solver.cpp:106] Iteration 35120, lr = 0.001\nI0608 03:14:15.789366 13573 solver.cpp:229] Iteration 35140, loss = 0.5524\nI0608 03:14:15.789444 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.299075 (* 1 = 0.299075 loss)\nI0608 03:14:15.789454 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.090373 (* 1 = 0.090373 loss)\nI0608 03:14:15.789461 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0868589 (* 1 = 0.0868589 loss)\nI0608 03:14:15.789468 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0026971 (* 1 = 0.0026971 loss)\nI0608 03:14:15.789474 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0860716 (* 1 = 0.0860716 loss)\nI0608 03:14:15.789479 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00269562 (* 1 = 0.00269562 loss)\nI0608 03:14:15.789486 13573 sgd_solver.cpp:106] Iteration 35140, lr = 0.001\nI0608 03:14:28.687957 13573 solver.cpp:229] Iteration 35160, loss = 0.384334\nI0608 03:14:28.688022 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0263763 (* 1 = 0.0263763 loss)\nI0608 03:14:28.688032 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0713599 (* 1 = 0.0713599 loss)\nI0608 03:14:28.688038 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.141434 (* 1 = 0.141434 loss)\nI0608 03:14:28.688045 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0506312 (* 1 = 0.0506312 loss)\nI0608 03:14:28.688050 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.136724 (* 1 = 0.136724 loss)\nI0608 03:14:28.688055 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0506309 (* 1 = 0.0506309 loss)\nI0608 03:14:28.688062 13573 sgd_solver.cpp:106] Iteration 35160, lr = 0.001\nI0608 03:14:41.719629 13573 solver.cpp:229] Iteration 35180, loss = 0.828316\nI0608 03:14:41.719713 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.16597 (* 1 = 0.16597 loss)
I0608 03:14:41.719723 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.172987 (* 1 = 0.172987 loss)
I0608 03:14:41.719729 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256103 (* 1 = 0.256103 loss)
I0608 03:14:41.719735 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0317198 (* 1 = 0.0317198 loss)
I0608 03:14:41.719740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.238955 (* 1 = 0.238955 loss)
I0608 03:14:41.719746 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0317221 (* 1 = 0.0317221 loss)
I0608 03:14:41.719753 13573 sgd_solver.cpp:106] Iteration 35180, lr = 0.001
speed: 0.645s / iter
...
I0608 03:25:00.917042 13573 solver.cpp:229] Iteration 36140, loss = 0.524906
...
I0608 03:25:00.917138 13573
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.051005 (* 1 = 0.051005 loss)\nI0608 03:25:00.917145 13573 sgd_solver.cpp:106] Iteration 36140, lr = 0.001\nI0608 03:25:13.895759 13573 solver.cpp:229] Iteration 36160, loss = 0.838519\nI0608 03:25:13.895876 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0760875 (* 1 = 0.0760875 loss)\nI0608 03:25:13.895886 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0843173 (* 1 = 0.0843173 loss)\nI0608 03:25:13.895894 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.451044 (* 1 = 0.451044 loss)\nI0608 03:25:13.895898 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.024372 (* 1 = 0.024372 loss)\nI0608 03:25:13.895905 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.403058 (* 1 = 0.403058 loss)\nI0608 03:25:13.895910 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0243842 (* 1 = 0.0243842 loss)\nI0608 03:25:13.895918 13573 sgd_solver.cpp:106] Iteration 36160, lr = 0.001\nI0608 03:25:26.661208 13573 solver.cpp:229] Iteration 36180, loss = 0.851474\nI0608 03:25:26.661278 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.163546 (* 1 = 0.163546 loss)\nI0608 03:25:26.661288 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112924 (* 1 = 0.112924 loss)\nI0608 03:25:26.661294 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0725984 (* 1 = 0.0725984 loss)\nI0608 03:25:26.661300 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00857482 (* 1 = 0.00857482 loss)\nI0608 03:25:26.661305 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0694675 (* 1 = 0.0694675 loss)\nI0608 03:25:26.661311 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00857599 (* 1 = 0.00857599 loss)\nI0608 03:25:26.661319 13573 sgd_solver.cpp:106] Iteration 36180, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:25:39.538816 13573 solver.cpp:229] Iteration 36200, loss = 0.997366\nI0608 
03:25:39.538875 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.339225 (* 1 = 0.339225 loss)\nI0608 03:25:39.538885 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.440343 (* 1 = 0.440343 loss)\nI0608 03:25:39.538892 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.311954 (* 1 = 0.311954 loss)\nI0608 03:25:39.538897 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0345351 (* 1 = 0.0345351 loss)\nI0608 03:25:39.538903 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.294901 (* 1 = 0.294901 loss)\nI0608 03:25:39.538909 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0345291 (* 1 = 0.0345291 loss)\nI0608 03:25:39.538916 13573 sgd_solver.cpp:106] Iteration 36200, lr = 0.001\nI0608 03:25:52.288594 13573 solver.cpp:229] Iteration 36220, loss = 0.545435\nI0608 03:25:52.288666 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00458808 (* 1 = 0.00458808 loss)\nI0608 03:25:52.288678 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00248133 (* 1 = 0.00248133 loss)\nI0608 03:25:52.288684 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.32735 (* 1 = 0.32735 loss)\nI0608 03:25:52.288691 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0310228 (* 1 = 0.0310228 loss)\nI0608 03:25:52.288697 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.351458 (* 1 = 0.351458 loss)\nI0608 03:25:52.288702 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0310137 (* 1 = 0.0310137 loss)\nI0608 03:25:52.288709 13573 sgd_solver.cpp:106] Iteration 36220, lr = 0.001\nI0608 03:26:05.361302 13573 solver.cpp:229] Iteration 36240, loss = 1.34725\nI0608 03:26:05.361377 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.150571 (* 1 = 0.150571 loss)\nI0608 03:26:05.361390 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.202801 (* 1 = 0.202801 loss)\nI0608 03:26:05.361400 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.235226 (* 1 = 0.235226 loss)\nI0608 03:26:05.361410 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0571132 (* 1 = 0.0571132 loss)\nI0608 03:26:05.361419 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.252601 (* 1 = 0.252601 loss)\nI0608 03:26:05.361428 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0571117 (* 1 = 0.0571117 loss)\nI0608 03:26:05.361438 13573 sgd_solver.cpp:106] Iteration 36240, lr = 0.001\nI0608 03:26:18.399338 13573 solver.cpp:229] Iteration 36260, loss = 0.914829\nI0608 03:26:18.399406 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0019804 (* 1 = 0.0019804 loss)\nI0608 03:26:18.399421 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0135362 (* 1 = 0.0135362 loss)\nI0608 03:26:18.399430 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.186951 (* 1 = 0.186951 loss)\nI0608 03:26:18.399440 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0160097 (* 1 = 0.0160097 loss)\nI0608 03:26:18.399449 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.19317 (* 1 = 0.19317 loss)\nI0608 03:26:18.399457 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0159889 (* 1 = 0.0159889 loss)\nI0608 03:26:18.399466 13573 sgd_solver.cpp:106] Iteration 36260, lr = 0.001\nI0608 03:26:31.055335 13573 solver.cpp:229] Iteration 36280, loss = 0.59902\nI0608 03:26:31.055407 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.214533 (* 1 = 0.214533 loss)\nI0608 03:26:31.055416 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.182871 (* 1 = 0.182871 loss)\nI0608 03:26:31.055423 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.193526 (* 1 = 0.193526 loss)\nI0608 03:26:31.055428 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0371935 (* 1 = 0.0371935 loss)\nI0608 03:26:31.055434 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.212374 (* 1 = 
0.212374 loss)\nI0608 03:26:31.055439 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0371833 (* 1 = 0.0371833 loss)\nI0608 03:26:31.055449 13573 sgd_solver.cpp:106] Iteration 36280, lr = 0.001\nI0608 03:26:44.010272 13573 solver.cpp:229] Iteration 36300, loss = 0.471977\nI0608 03:26:44.010340 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0496188 (* 1 = 0.0496188 loss)\nI0608 03:26:44.010350 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0977001 (* 1 = 0.0977001 loss)\nI0608 03:26:44.010356 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0909871 (* 1 = 0.0909871 loss)\nI0608 03:26:44.010361 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.036564 (* 1 = 0.036564 loss)\nI0608 03:26:44.010367 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0860577 (* 1 = 0.0860577 loss)\nI0608 03:26:44.010373 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.036561 (* 1 = 0.036561 loss)\nI0608 03:26:44.010381 13573 sgd_solver.cpp:106] Iteration 36300, lr = 0.001\nI0608 03:26:56.979111 13573 solver.cpp:229] Iteration 36320, loss = 1.0063\nI0608 03:26:56.979176 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.213385 (* 1 = 0.213385 loss)\nI0608 03:26:56.979185 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.216727 (* 1 = 0.216727 loss)\nI0608 03:26:56.979192 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.438564 (* 1 = 0.438564 loss)\nI0608 03:26:56.979198 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0312783 (* 1 = 0.0312783 loss)\nI0608 03:26:56.979204 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.400555 (* 1 = 0.400555 loss)\nI0608 03:26:56.979210 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0312598 (* 1 = 0.0312598 loss)\nI0608 03:26:56.979218 13573 sgd_solver.cpp:106] Iteration 36320, lr = 0.001\nI0608 03:27:09.928174 13573 solver.cpp:229] Iteration 36340, loss = 
1.25434\nI0608 03:27:09.928252 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0624752 (* 1 = 0.0624752 loss)\nI0608 03:27:09.928262 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0865242 (* 1 = 0.0865242 loss)\nI0608 03:27:09.928268 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102265 (* 1 = 0.102265 loss)\nI0608 03:27:09.928273 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0702442 (* 1 = 0.0702442 loss)\nI0608 03:27:09.928279 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100285 (* 1 = 0.100285 loss)\nI0608 03:27:09.928285 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0702347 (* 1 = 0.0702347 loss)\nI0608 03:27:09.928292 13573 sgd_solver.cpp:106] Iteration 36340, lr = 0.001\nI0608 03:27:22.831652 13573 solver.cpp:229] Iteration 36360, loss = 0.39464\nI0608 03:27:22.831713 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115131 (* 1 = 0.115131 loss)\nI0608 03:27:22.831722 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122671 (* 1 = 0.122671 loss)\nI0608 03:27:22.831729 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0838745 (* 1 = 0.0838745 loss)\nI0608 03:27:22.831735 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0144919 (* 1 = 0.0144919 loss)\nI0608 03:27:22.831740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0855843 (* 1 = 0.0855843 loss)\nI0608 03:27:22.831746 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0144762 (* 1 = 0.0144762 loss)\nI0608 03:27:22.831753 13573 sgd_solver.cpp:106] Iteration 36360, lr = 0.001\nI0608 03:27:35.679397 13573 solver.cpp:229] Iteration 36380, loss = 0.668125\nI0608 03:27:35.679463 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0604529 (* 1 = 0.0604529 loss)\nI0608 03:27:35.679472 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0850963 (* 1 = 0.0850963 loss)\nI0608 03:27:35.679481 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0952428 (* 1 = 0.0952428 loss)\nI0608 03:27:35.679486 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.125354 (* 1 = 0.125354 loss)\nI0608 03:27:35.679491 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100771 (* 1 = 0.100771 loss)\nI0608 03:27:35.679497 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.125359 (* 1 = 0.125359 loss)\nI0608 03:27:35.679504 13573 sgd_solver.cpp:106] Iteration 36380, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:27:48.664492 13573 solver.cpp:229] Iteration 36400, loss = 0.534959\nI0608 03:27:48.664578 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0321563 (* 1 = 0.0321563 loss)\nI0608 03:27:48.664588 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0436538 (* 1 = 0.0436538 loss)\nI0608 03:27:48.664595 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.216159 (* 1 = 0.216159 loss)\nI0608 03:27:48.664600 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0465908 (* 1 = 0.0465908 loss)\nI0608 03:27:48.664607 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.230417 (* 1 = 0.230417 loss)\nI0608 03:27:48.664613 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.046577 (* 1 = 0.046577 loss)\nI0608 03:27:48.664644 13573 sgd_solver.cpp:106] Iteration 36400, lr = 0.001\nI0608 03:28:01.621178 13573 solver.cpp:229] Iteration 36420, loss = 0.823243\nI0608 03:28:01.621248 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.143991 (* 1 = 0.143991 loss)\nI0608 03:28:01.621263 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.180883 (* 1 = 0.180883 loss)\nI0608 03:28:01.621273 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.348396 (* 1 = 0.348396 loss)\nI0608 03:28:01.621284 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0713998 (* 1 = 0.0713998 loss)\nI0608 03:28:01.621292 13573 solver.cpp:245]     
Train net output #4: rpn_cls_loss = 0.357879 (* 1 = 0.357879 loss)\nI0608 03:28:01.621301 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.071434 (* 1 = 0.071434 loss)\nI0608 03:28:01.621310 13573 sgd_solver.cpp:106] Iteration 36420, lr = 0.001\nI0608 03:28:14.666018 13573 solver.cpp:229] Iteration 36440, loss = 0.476497\nI0608 03:28:14.666092 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0821097 (* 1 = 0.0821097 loss)\nI0608 03:28:14.666106 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.158802 (* 1 = 0.158802 loss)\nI0608 03:28:14.666115 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0895216 (* 1 = 0.0895216 loss)\nI0608 03:28:14.666126 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0723218 (* 1 = 0.0723218 loss)\nI0608 03:28:14.666136 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0892614 (* 1 = 0.0892614 loss)\nI0608 03:28:14.666144 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0723005 (* 1 = 0.0723005 loss)\nI0608 03:28:14.666153 13573 sgd_solver.cpp:106] Iteration 36440, lr = 0.001\nI0608 03:28:27.612723 13573 solver.cpp:229] Iteration 36460, loss = 0.363714\nI0608 03:28:27.612793 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0761815 (* 1 = 0.0761815 loss)\nI0608 03:28:27.612808 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138749 (* 1 = 0.138749 loss)\nI0608 03:28:27.612818 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0986439 (* 1 = 0.0986439 loss)\nI0608 03:28:27.612828 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0247255 (* 1 = 0.0247255 loss)\nI0608 03:28:27.612836 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100925 (* 1 = 0.100925 loss)\nI0608 03:28:27.612844 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247266 (* 1 = 0.0247266 loss)\nI0608 03:28:27.612854 13573 sgd_solver.cpp:106] Iteration 36460, lr = 0.001\nI0608 
03:28:40.622974 13573 solver.cpp:229] Iteration 36480, loss = 0.508732\nI0608 03:28:40.623057 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0810017 (* 1 = 0.0810017 loss)\nI0608 03:28:40.623072 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0841403 (* 1 = 0.0841403 loss)\nI0608 03:28:40.623081 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112712 (* 1 = 0.112712 loss)\nI0608 03:28:40.623090 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0725805 (* 1 = 0.0725805 loss)\nI0608 03:28:40.623100 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.11926 (* 1 = 0.11926 loss)\nI0608 03:28:40.623108 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0725807 (* 1 = 0.0725807 loss)\nI0608 03:28:40.623118 13573 sgd_solver.cpp:106] Iteration 36480, lr = 0.001\nI0608 03:28:53.393440 13573 solver.cpp:229] Iteration 36500, loss = 0.515041\nI0608 03:28:53.393508 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0946195 (* 1 = 0.0946195 loss)\nI0608 03:28:53.393517 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.057519 (* 1 = 0.057519 loss)\nI0608 03:28:53.393523 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119235 (* 1 = 0.119235 loss)\nI0608 03:28:53.393529 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0754406 (* 1 = 0.0754406 loss)\nI0608 03:28:53.393535 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111292 (* 1 = 0.111292 loss)\nI0608 03:28:53.393540 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0754408 (* 1 = 0.0754408 loss)\nI0608 03:28:53.393548 13573 sgd_solver.cpp:106] Iteration 36500, lr = 0.001\nI0608 03:29:06.226693 13573 solver.cpp:229] Iteration 36520, loss = 1.81983\nI0608 03:29:06.226759 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0211149 (* 1 = 0.0211149 loss)\nI0608 03:29:06.226781 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0226685 (* 1 = 
0.0226685 loss)\nI0608 03:29:06.226791 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.198023 (* 1 = 0.198023 loss)\nI0608 03:29:06.226797 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158631 (* 1 = 0.0158631 loss)\nI0608 03:29:06.226804 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.193088 (* 1 = 0.193088 loss)\nI0608 03:29:06.226809 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158598 (* 1 = 0.0158598 loss)\nI0608 03:29:06.226816 13573 sgd_solver.cpp:106] Iteration 36520, lr = 0.001\nI0608 03:29:19.022318 13573 solver.cpp:229] Iteration 36540, loss = 0.615407\nI0608 03:29:19.022385 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0706038 (* 1 = 0.0706038 loss)\nI0608 03:29:19.022395 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.100007 (* 1 = 0.100007 loss)\nI0608 03:29:19.022402 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104378 (* 1 = 0.104378 loss)\nI0608 03:29:19.022408 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0450459 (* 1 = 0.0450459 loss)\nI0608 03:29:19.022413 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.098337 (* 1 = 0.098337 loss)\nI0608 03:29:19.022419 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0450108 (* 1 = 0.0450108 loss)\nI0608 03:29:19.022426 13573 sgd_solver.cpp:106] Iteration 36540, lr = 0.001\nI0608 03:29:32.070627 13573 solver.cpp:229] Iteration 36560, loss = 0.751332\nI0608 03:29:32.070703 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.147764 (* 1 = 0.147764 loss)\nI0608 03:29:32.070716 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105327 (* 1 = 0.105327 loss)\nI0608 03:29:32.070725 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.393973 (* 1 = 0.393973 loss)\nI0608 03:29:32.070734 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.078608 (* 1 = 0.078608 loss)\nI0608 03:29:32.070744 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.394645 (* 1 = 0.394645 loss)\nI0608 03:29:32.070752 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.078603 (* 1 = 0.078603 loss)\nI0608 03:29:32.070761 13573 sgd_solver.cpp:106] Iteration 36560, lr = 0.001\nI0608 03:29:45.102212 13573 solver.cpp:229] Iteration 36580, loss = 0.50446\nI0608 03:29:45.102278 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0486616 (* 1 = 0.0486616 loss)\nI0608 03:29:45.102293 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0547775 (* 1 = 0.0547775 loss)\nI0608 03:29:45.102301 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.138657 (* 1 = 0.138657 loss)\nI0608 03:29:45.102310 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.013441 (* 1 = 0.013441 loss)\nI0608 03:29:45.102319 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.15209 (* 1 = 0.15209 loss)\nI0608 03:29:45.102329 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0134378 (* 1 = 0.0134378 loss)\nI0608 03:29:45.102336 13573 sgd_solver.cpp:106] Iteration 36580, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:29:58.048209 13573 solver.cpp:229] Iteration 36600, loss = 1.05395\nI0608 03:29:58.048274 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.155707 (* 1 = 0.155707 loss)\nI0608 03:29:58.048283 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.175269 (* 1 = 0.175269 loss)\nI0608 03:29:58.048290 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.476351 (* 1 = 0.476351 loss)\nI0608 03:29:58.048295 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.184305 (* 1 = 0.184305 loss)\nI0608 03:29:58.048301 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.470182 (* 1 = 0.470182 loss)\nI0608 03:29:58.048306 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.184347 (* 1 = 0.184347 loss)\nI0608 03:29:58.048313 13573 sgd_solver.cpp:106] Iteration 36600, 
lr = 0.001\nI0608 03:30:10.991606 13573 solver.cpp:229] Iteration 36620, loss = 1.07412\nI0608 03:30:10.991677 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0601145 (* 1 = 0.0601145 loss)\nI0608 03:30:10.991689 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0392314 (* 1 = 0.0392314 loss)\nI0608 03:30:10.991695 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.273702 (* 1 = 0.273702 loss)\nI0608 03:30:10.991701 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0767973 (* 1 = 0.0767973 loss)\nI0608 03:30:10.991706 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.274127 (* 1 = 0.274127 loss)\nI0608 03:30:10.991713 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0768025 (* 1 = 0.0768025 loss)\nI0608 03:30:10.991719 13573 sgd_solver.cpp:106] Iteration 36620, lr = 0.001\nI0608 03:30:24.013787 13573 solver.cpp:229] Iteration 36640, loss = 0.375883\nI0608 03:30:24.013921 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0557025 (* 1 = 0.0557025 loss)\nI0608 03:30:24.013929 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0322198 (* 1 = 0.0322198 loss)\nI0608 03:30:24.013936 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0918241 (* 1 = 0.0918241 loss)\nI0608 03:30:24.013942 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00816162 (* 1 = 0.00816162 loss)\nI0608 03:30:24.013947 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0883268 (* 1 = 0.0883268 loss)\nI0608 03:30:24.013953 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00816167 (* 1 = 0.00816167 loss)\nI0608 03:30:24.013960 13573 sgd_solver.cpp:106] Iteration 36640, lr = 0.001\nI0608 03:30:36.922710 13573 solver.cpp:229] Iteration 36660, loss = 0.377586\nI0608 03:30:36.922788 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0803103 (* 1 = 0.0803103 loss)\nI0608 03:30:36.922799 13573 solver.cpp:245]     Train net output 
#1: loss_cls = 0.117394 (* 1 = 0.117394 loss)\nI0608 03:30:36.922806 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0878097 (* 1 = 0.0878097 loss)\nI0608 03:30:36.922811 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0195802 (* 1 = 0.0195802 loss)\nI0608 03:30:36.922817 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0834873 (* 1 = 0.0834873 loss)\nI0608 03:30:36.922823 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0195674 (* 1 = 0.0195674 loss)\nI0608 03:30:36.922830 13573 sgd_solver.cpp:106] Iteration 36660, lr = 0.001\nI0608 03:30:49.719964 13573 solver.cpp:229] Iteration 36680, loss = 0.837437\nI0608 03:30:49.720032 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0585466 (* 1 = 0.0585466 loss)\nI0608 03:30:49.720043 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0589985 (* 1 = 0.0589985 loss)\nI0608 03:30:49.720049 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.115637 (* 1 = 0.115637 loss)\nI0608 03:30:49.720057 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0089626 (* 1 = 0.0089626 loss)\nI0608 03:30:49.720062 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109911 (* 1 = 0.109911 loss)\nI0608 03:30:49.720067 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00896295 (* 1 = 0.00896295 loss)\nI0608 03:30:49.720074 13573 sgd_solver.cpp:106] Iteration 36680, lr = 0.001\nI0608 03:31:02.790200 13573 solver.cpp:229] Iteration 36700, loss = 0.454674\nI0608 03:31:02.790277 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000135287 (* 1 = 0.000135287 loss)\nI0608 03:31:02.790287 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00415013 (* 1 = 0.00415013 loss)\nI0608 03:31:02.790292 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.164347 (* 1 = 0.164347 loss)\nI0608 03:31:02.790298 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0153478 (* 
1 = 0.0153478 loss)\nI0608 03:31:02.790304 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.157425 (* 1 = 0.157425 loss)\nI0608 03:31:02.790309 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0153391 (* 1 = 0.0153391 loss)\nI0608 03:31:02.790318 13573 sgd_solver.cpp:106] Iteration 36700, lr = 0.001\nI0608 03:31:15.743587 13573 solver.cpp:229] Iteration 36720, loss = 0.911012\nI0608 03:31:15.743657 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.108581 (* 1 = 0.108581 loss)\nI0608 03:31:15.743666 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.14541 (* 1 = 0.14541 loss)\nI0608 03:31:15.743672 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.269189 (* 1 = 0.269189 loss)\nI0608 03:31:15.743679 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.105933 (* 1 = 0.105933 loss)\nI0608 03:31:15.743685 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.273794 (* 1 = 0.273794 loss)\nI0608 03:31:15.743690 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.105917 (* 1 = 0.105917 loss)\nI0608 03:31:15.743696 13573 sgd_solver.cpp:106] Iteration 36720, lr = 0.001\nI0608 03:31:28.701882 13573 solver.cpp:229] Iteration 36740, loss = 0.637932\nI0608 03:31:28.701949 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.110657 (* 1 = 0.110657 loss)\nI0608 03:31:28.701959 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0470588 (* 1 = 0.0470588 loss)\nI0608 03:31:28.701966 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0943834 (* 1 = 0.0943834 loss)\nI0608 03:31:28.701972 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0215779 (* 1 = 0.0215779 loss)\nI0608 03:31:28.701977 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0963429 (* 1 = 0.0963429 loss)\nI0608 03:31:28.701982 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0215783 (* 1 = 0.0215783 loss)\nI0608 03:31:28.701990 13573 
sgd_solver.cpp:106] Iteration 36740, lr = 0.001\nI0608 03:31:41.633433 13573 solver.cpp:229] Iteration 36760, loss = 0.790828\nI0608 03:31:41.633510 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0288106 (* 1 = 0.0288106 loss)\nI0608 03:31:41.633519 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0107553 (* 1 = 0.0107553 loss)\nI0608 03:31:41.633525 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.280625 (* 1 = 0.280625 loss)\nI0608 03:31:41.633533 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0297115 (* 1 = 0.0297115 loss)\nI0608 03:31:41.633538 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.250076 (* 1 = 0.250076 loss)\nI0608 03:31:41.633544 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.029699 (* 1 = 0.029699 loss)\nI0608 03:31:41.633551 13573 sgd_solver.cpp:106] Iteration 36760, lr = 0.001\nI0608 03:31:54.399509 13573 solver.cpp:229] Iteration 36780, loss = 0.374493\nI0608 03:31:54.399576 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00835441 (* 1 = 0.00835441 loss)\nI0608 03:31:54.399585 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000846308 (* 1 = 0.000846308 loss)\nI0608 03:31:54.399592 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212311 (* 1 = 0.212311 loss)\nI0608 03:31:54.399598 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0209671 (* 1 = 0.0209671 loss)\nI0608 03:31:54.399603 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176248 (* 1 = 0.176248 loss)\nI0608 03:31:54.399610 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0209587 (* 1 = 0.0209587 loss)\nI0608 03:31:54.399617 13573 sgd_solver.cpp:106] Iteration 36780, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:32:07.307693 13573 solver.cpp:229] Iteration 36800, loss = 0.47676\nI0608 03:32:07.307760 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00251514 (* 1 = 0.00251514 loss)\nI0608 
03:32:07.307770 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00168771 (* 1 = 0.00168771 loss)
I0608 03:32:07.307776 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.193714 (* 1 = 0.193714 loss)
I0608 03:32:07.307782 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00954579 (* 1 = 0.00954579 loss)
I0608 03:32:07.307788 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183842 (* 1 = 0.183842 loss)
I0608 03:32:07.307793 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00953894 (* 1 = 0.00953894 loss)
I0608 03:32:07.307801 13573 sgd_solver.cpp:106] Iteration 36800, lr = 0.001
[... iterations 36820-37700 elided: each logs the same six train net outputs (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox); total loss fluctuates between roughly 0.28 and 1.82; lr = 0.001 throughout; speed: 0.645s / iter ...]
I0608 03:42:00.688483 13573 solver.cpp:229] Iteration 37720, loss = 0.488684
I0608 03:42:00.688551 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0730113 (* 1 = 0.0730113 loss)
I0608 03:42:00.688561 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0255885 (* 1 = 0.0255885 loss)
I0608 03:42:00.688567 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.234954 (* 1 = 0.234954 loss)
I0608 03:42:00.688573 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0202198 (* 1 = 
0.0202198 loss)\nI0608 03:42:00.688580 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223613 (* 1 = 0.223613 loss)\nI0608 03:42:00.688585 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0201937 (* 1 = 0.0201937 loss)\nI0608 03:42:00.688593 13573 sgd_solver.cpp:106] Iteration 37720, lr = 0.001\nI0608 03:42:13.733162 13573 solver.cpp:229] Iteration 37740, loss = 0.918773\nI0608 03:42:13.733227 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.226389 (* 1 = 0.226389 loss)\nI0608 03:42:13.733235 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.161987 (* 1 = 0.161987 loss)\nI0608 03:42:13.733242 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.135104 (* 1 = 0.135104 loss)\nI0608 03:42:13.733247 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0265487 (* 1 = 0.0265487 loss)\nI0608 03:42:13.733253 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.132156 (* 1 = 0.132156 loss)\nI0608 03:42:13.733258 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0265667 (* 1 = 0.0265667 loss)\nI0608 03:42:13.733265 13573 sgd_solver.cpp:106] Iteration 37740, lr = 0.001\nI0608 03:42:26.660666 13573 solver.cpp:229] Iteration 37760, loss = 0.659306\nI0608 03:42:26.660738 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.203887 (* 1 = 0.203887 loss)\nI0608 03:42:26.660748 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.111432 (* 1 = 0.111432 loss)\nI0608 03:42:26.660754 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.297205 (* 1 = 0.297205 loss)\nI0608 03:42:26.660759 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0429436 (* 1 = 0.0429436 loss)\nI0608 03:42:26.660765 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.302414 (* 1 = 0.302414 loss)\nI0608 03:42:26.660770 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0429574 (* 1 = 0.0429574 loss)\nI0608 03:42:26.660778 13573 
sgd_solver.cpp:106] Iteration 37760, lr = 0.001\nI0608 03:42:39.586840 13573 solver.cpp:229] Iteration 37780, loss = 0.929089\nI0608 03:42:39.586907 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0752817 (* 1 = 0.0752817 loss)\nI0608 03:42:39.586916 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0820773 (* 1 = 0.0820773 loss)\nI0608 03:42:39.586923 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.50661 (* 1 = 0.50661 loss)\nI0608 03:42:39.586928 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106214 (* 1 = 0.106214 loss)\nI0608 03:42:39.586935 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.491098 (* 1 = 0.491098 loss)\nI0608 03:42:39.586941 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106157 (* 1 = 0.106157 loss)\nI0608 03:42:39.586947 13573 sgd_solver.cpp:106] Iteration 37780, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:42:52.385790 13573 solver.cpp:229] Iteration 37800, loss = 0.557747\nI0608 03:42:52.385864 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.147278 (* 1 = 0.147278 loss)\nI0608 03:42:52.385872 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105345 (* 1 = 0.105345 loss)\nI0608 03:42:52.385879 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0999943 (* 1 = 0.0999943 loss)\nI0608 03:42:52.385885 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0122645 (* 1 = 0.0122645 loss)\nI0608 03:42:52.385890 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0947179 (* 1 = 0.0947179 loss)\nI0608 03:42:52.385895 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0122709 (* 1 = 0.0122709 loss)\nI0608 03:42:52.385902 13573 sgd_solver.cpp:106] Iteration 37800, lr = 0.001\nI0608 03:43:05.167392 13573 solver.cpp:229] Iteration 37820, loss = 0.791083\nI0608 03:43:05.167456 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.109696 (* 1 = 0.109696 loss)\nI0608 03:43:05.167465 
13573 solver.cpp:245]     Train net output #1: loss_cls = 0.116624 (* 1 = 0.116624 loss)\nI0608 03:43:05.167472 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0909778 (* 1 = 0.0909778 loss)\nI0608 03:43:05.167479 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00758533 (* 1 = 0.00758533 loss)\nI0608 03:43:05.167484 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103347 (* 1 = 0.103347 loss)\nI0608 03:43:05.167490 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00758535 (* 1 = 0.00758535 loss)\nI0608 03:43:05.167497 13573 sgd_solver.cpp:106] Iteration 37820, lr = 0.001\nI0608 03:43:18.065124 13573 solver.cpp:229] Iteration 37840, loss = 1.13978\nI0608 03:43:18.065191 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000810609 (* 1 = 0.000810609 loss)\nI0608 03:43:18.065201 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0739624 (* 1 = 0.0739624 loss)\nI0608 03:43:18.065207 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407601 (* 1 = 0.407601 loss)\nI0608 03:43:18.065214 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.133801 (* 1 = 0.133801 loss)\nI0608 03:43:18.065219 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.410798 (* 1 = 0.410798 loss)\nI0608 03:43:18.065225 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.134088 (* 1 = 0.134088 loss)\nI0608 03:43:18.065232 13573 sgd_solver.cpp:106] Iteration 37840, lr = 0.001\nI0608 03:43:30.946424 13573 solver.cpp:229] Iteration 37860, loss = 0.683275\nI0608 03:43:30.946499 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0893231 (* 1 = 0.0893231 loss)\nI0608 03:43:30.946509 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0607119 (* 1 = 0.0607119 loss)\nI0608 03:43:30.946516 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277645 (* 1 = 0.277645 loss)\nI0608 03:43:30.946521 13573 solver.cpp:245]     Train net output 
#3: p2_rpn_loss_bbox = 0.0786845 (* 1 = 0.0786845 loss)\nI0608 03:43:30.946527 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.245046 (* 1 = 0.245046 loss)\nI0608 03:43:30.946533 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0785918 (* 1 = 0.0785918 loss)\nI0608 03:43:30.946539 13573 sgd_solver.cpp:106] Iteration 37860, lr = 0.001\nI0608 03:43:43.746807 13573 solver.cpp:229] Iteration 37880, loss = 1.01331\nI0608 03:43:43.746877 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.497401 (* 1 = 0.497401 loss)\nI0608 03:43:43.746886 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.195487 (* 1 = 0.195487 loss)\nI0608 03:43:43.746892 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302757 (* 1 = 0.302757 loss)\nI0608 03:43:43.746898 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0690341 (* 1 = 0.0690341 loss)\nI0608 03:43:43.746904 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.308478 (* 1 = 0.308478 loss)\nI0608 03:43:43.746911 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0690285 (* 1 = 0.0690285 loss)\nI0608 03:43:43.746917 13573 sgd_solver.cpp:106] Iteration 37880, lr = 0.001\nI0608 03:43:56.752149 13573 solver.cpp:229] Iteration 37900, loss = 0.689391\nI0608 03:43:56.752215 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000469405 (* 1 = 0.000469405 loss)\nI0608 03:43:56.752225 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00615817 (* 1 = 0.00615817 loss)\nI0608 03:43:56.752233 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.388528 (* 1 = 0.388528 loss)\nI0608 03:43:56.752238 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0581769 (* 1 = 0.0581769 loss)\nI0608 03:43:56.752243 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.365098 (* 1 = 0.365098 loss)\nI0608 03:43:56.752249 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.058196 (* 1 = 
0.058196 loss)\nI0608 03:43:56.752256 13573 sgd_solver.cpp:106] Iteration 37900, lr = 0.001\nI0608 03:44:09.725312 13573 solver.cpp:229] Iteration 37920, loss = 1.02055\nI0608 03:44:09.725379 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.237045 (* 1 = 0.237045 loss)\nI0608 03:44:09.725389 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.277079 (* 1 = 0.277079 loss)\nI0608 03:44:09.725395 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.452334 (* 1 = 0.452334 loss)\nI0608 03:44:09.725401 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0749861 (* 1 = 0.0749861 loss)\nI0608 03:44:09.725406 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.448026 (* 1 = 0.448026 loss)\nI0608 03:44:09.725412 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0749336 (* 1 = 0.0749336 loss)\nI0608 03:44:09.725420 13573 sgd_solver.cpp:106] Iteration 37920, lr = 0.001\nI0608 03:44:22.741904 13573 solver.cpp:229] Iteration 37940, loss = 0.620005\nI0608 03:44:22.741977 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0726669 (* 1 = 0.0726669 loss)\nI0608 03:44:22.741988 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.129612 (* 1 = 0.129612 loss)\nI0608 03:44:22.741994 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.130355 (* 1 = 0.130355 loss)\nI0608 03:44:22.741999 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0212623 (* 1 = 0.0212623 loss)\nI0608 03:44:22.742005 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.169462 (* 1 = 0.169462 loss)\nI0608 03:44:22.742010 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0212195 (* 1 = 0.0212195 loss)\nI0608 03:44:22.742017 13573 sgd_solver.cpp:106] Iteration 37940, lr = 0.001\nI0608 03:44:35.518887 13573 solver.cpp:229] Iteration 37960, loss = 0.609873\nI0608 03:44:35.518965 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0704427 (* 1 = 0.0704427 
loss)\nI0608 03:44:35.518975 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0612337 (* 1 = 0.0612337 loss)\nI0608 03:44:35.518980 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0994881 (* 1 = 0.0994881 loss)\nI0608 03:44:35.518985 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0458725 (* 1 = 0.0458725 loss)\nI0608 03:44:35.518991 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0989883 (* 1 = 0.0989883 loss)\nI0608 03:44:35.518996 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0458531 (* 1 = 0.0458531 loss)\nI0608 03:44:35.519003 13573 sgd_solver.cpp:106] Iteration 37960, lr = 0.001\nI0608 03:44:48.261533 13573 solver.cpp:229] Iteration 37980, loss = 0.365148\nI0608 03:44:48.261602 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.142856 (* 1 = 0.142856 loss)\nI0608 03:44:48.261612 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0812865 (* 1 = 0.0812865 loss)\nI0608 03:44:48.261617 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.085486 (* 1 = 0.085486 loss)\nI0608 03:44:48.261623 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0120724 (* 1 = 0.0120724 loss)\nI0608 03:44:48.261629 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0840857 (* 1 = 0.0840857 loss)\nI0608 03:44:48.261634 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0120739 (* 1 = 0.0120739 loss)\nI0608 03:44:48.261641 13573 sgd_solver.cpp:106] Iteration 37980, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:45:01.152743 13573 solver.cpp:229] Iteration 38000, loss = 0.506558\nI0608 03:45:01.152818 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0164307 (* 1 = 0.0164307 loss)\nI0608 03:45:01.152827 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.01603 (* 1 = 0.01603 loss)\nI0608 03:45:01.152833 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.254351 (* 1 = 0.254351 loss)\nI0608 
03:45:01.152839 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0324437 (* 1 = 0.0324437 loss)\nI0608 03:45:01.152844 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.280857 (* 1 = 0.280857 loss)\nI0608 03:45:01.152850 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0324511 (* 1 = 0.0324511 loss)\nI0608 03:45:01.152856 13573 sgd_solver.cpp:106] Iteration 38000, lr = 0.001\nI0608 03:45:14.124428 13573 solver.cpp:229] Iteration 38020, loss = 1.31722\nI0608 03:45:14.124498 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.38539 (* 1 = 0.38539 loss)\nI0608 03:45:14.124508 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.293926 (* 1 = 0.293926 loss)\nI0608 03:45:14.124514 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.535466 (* 1 = 0.535466 loss)\nI0608 03:45:14.124521 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0901465 (* 1 = 0.0901465 loss)\nI0608 03:45:14.124526 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.534093 (* 1 = 0.534093 loss)\nI0608 03:45:14.124531 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0901371 (* 1 = 0.0901371 loss)\nI0608 03:45:14.124539 13573 sgd_solver.cpp:106] Iteration 38020, lr = 0.001\nI0608 03:45:27.067911 13573 solver.cpp:229] Iteration 38040, loss = 0.444472\nI0608 03:45:27.067982 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00755918 (* 1 = 0.00755918 loss)\nI0608 03:45:27.067992 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0160128 (* 1 = 0.0160128 loss)\nI0608 03:45:27.067998 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.164812 (* 1 = 0.164812 loss)\nI0608 03:45:27.068004 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0112803 (* 1 = 0.0112803 loss)\nI0608 03:45:27.068009 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.162997 (* 1 = 0.162997 loss)\nI0608 03:45:27.068015 13573 solver.cpp:245]     
Train net output #5: rpn_loss_bbox = 0.0112847 (* 1 = 0.0112847 loss)\nI0608 03:45:27.068022 13573 sgd_solver.cpp:106] Iteration 38040, lr = 0.001\nI0608 03:45:39.918887 13573 solver.cpp:229] Iteration 38060, loss = 0.797092\nI0608 03:45:39.918965 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000755323 (* 1 = 0.000755323 loss)\nI0608 03:45:39.918975 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000160681 (* 1 = 0.000160681 loss)\nI0608 03:45:39.918982 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.449775 (* 1 = 0.449775 loss)\nI0608 03:45:39.918987 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.126408 (* 1 = 0.126408 loss)\nI0608 03:45:39.918992 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.459301 (* 1 = 0.459301 loss)\nI0608 03:45:39.918998 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.126758 (* 1 = 0.126758 loss)\nI0608 03:45:39.919005 13573 sgd_solver.cpp:106] Iteration 38060, lr = 0.001\nI0608 03:45:52.833374 13573 solver.cpp:229] Iteration 38080, loss = 0.473672\nI0608 03:45:52.833449 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105373 (* 1 = 0.105373 loss)\nI0608 03:45:52.833458 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.153808 (* 1 = 0.153808 loss)\nI0608 03:45:52.833464 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.106041 (* 1 = 0.106041 loss)\nI0608 03:45:52.833470 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0251398 (* 1 = 0.0251398 loss)\nI0608 03:45:52.833477 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108704 (* 1 = 0.108704 loss)\nI0608 03:45:52.833482 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0251396 (* 1 = 0.0251396 loss)\nI0608 03:45:52.833488 13573 sgd_solver.cpp:106] Iteration 38080, lr = 0.001\nI0608 03:46:05.744343 13573 solver.cpp:229] Iteration 38100, loss = 1.01985\nI0608 03:46:05.744423 13573 solver.cpp:245]     Train 
net output #0: loss_bbox = 0.0472393 (* 1 = 0.0472393 loss)\nI0608 03:46:05.744433 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0718983 (* 1 = 0.0718983 loss)\nI0608 03:46:05.744439 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.159119 (* 1 = 0.159119 loss)\nI0608 03:46:05.744446 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0610189 (* 1 = 0.0610189 loss)\nI0608 03:46:05.744451 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150737 (* 1 = 0.150737 loss)\nI0608 03:46:05.744458 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0609311 (* 1 = 0.0609311 loss)\nI0608 03:46:05.744465 13573 sgd_solver.cpp:106] Iteration 38100, lr = 0.001\nI0608 03:46:18.479914 13573 solver.cpp:229] Iteration 38120, loss = 3.26368\nI0608 03:46:18.479987 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.197978 (* 1 = 0.197978 loss)\nI0608 03:46:18.479996 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.110749 (* 1 = 0.110749 loss)\nI0608 03:46:18.480002 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100859 (* 1 = 0.100859 loss)\nI0608 03:46:18.480008 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.032171 (* 1 = 0.032171 loss)\nI0608 03:46:18.480013 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0831475 (* 1 = 0.0831475 loss)\nI0608 03:46:18.480020 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0322384 (* 1 = 0.0322384 loss)\nI0608 03:46:18.480026 13573 sgd_solver.cpp:106] Iteration 38120, lr = 0.001\nI0608 03:46:31.447334 13573 solver.cpp:229] Iteration 38140, loss = 1.03454\nI0608 03:46:31.447404 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.311118 (* 1 = 0.311118 loss)\nI0608 03:46:31.447414 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.179881 (* 1 = 0.179881 loss)\nI0608 03:46:31.447420 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.408305 (* 1 = 0.408305 
loss)\nI0608 03:46:31.447427 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0989948 (* 1 = 0.0989948 loss)\nI0608 03:46:31.447432 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.405823 (* 1 = 0.405823 loss)\nI0608 03:46:31.447438 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0990034 (* 1 = 0.0990034 loss)\nI0608 03:46:31.447444 13573 sgd_solver.cpp:106] Iteration 38140, lr = 0.001\nI0608 03:46:44.360896 13573 solver.cpp:229] Iteration 38160, loss = 1.74928\nI0608 03:46:44.360973 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0307078 (* 1 = 0.0307078 loss)\nI0608 03:46:44.360983 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0384931 (* 1 = 0.0384931 loss)\nI0608 03:46:44.360988 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.231394 (* 1 = 0.231394 loss)\nI0608 03:46:44.360994 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0133794 (* 1 = 0.0133794 loss)\nI0608 03:46:44.360999 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.190244 (* 1 = 0.190244 loss)\nI0608 03:46:44.361006 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0133874 (* 1 = 0.0133874 loss)\nI0608 03:46:44.361012 13573 sgd_solver.cpp:106] Iteration 38160, lr = 0.001\nI0608 03:46:57.337923 13573 solver.cpp:229] Iteration 38180, loss = 0.55618\nI0608 03:46:57.337990 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0494018 (* 1 = 0.0494018 loss)\nI0608 03:46:57.337999 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0638113 (* 1 = 0.0638113 loss)\nI0608 03:46:57.338006 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.189701 (* 1 = 0.189701 loss)\nI0608 03:46:57.338011 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00323177 (* 1 = 0.00323177 loss)\nI0608 03:46:57.338016 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.17669 (* 1 = 0.17669 loss)\nI0608 03:46:57.338022 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00322939 (* 1 = 0.00322939 loss)\nI0608 03:46:57.338029 13573 sgd_solver.cpp:106] Iteration 38180, lr = 0.001\nspeed: 0.645s / iter\nI0608 03:47:10.278199 13573 solver.cpp:229] Iteration 38200, loss = 1.09851\nI0608 03:47:10.278275 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.270582 (* 1 = 0.270582 loss)\nI0608 03:47:10.278283 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.141401 (* 1 = 0.141401 loss)\nI0608 03:47:10.278290 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.516005 (* 1 = 0.516005 loss)\nI0608 03:47:10.278295 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0953878 (* 1 = 0.0953878 loss)\nI0608 03:47:10.278301 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.53245 (* 1 = 0.53245 loss)\nI0608 03:47:10.278306 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0953834 (* 1 = 0.0953834 loss)\nI0608 03:47:10.278313 13573 sgd_solver.cpp:106] Iteration 38200, lr = 0.001\nI0608 03:47:23.128358 13573 solver.cpp:229] Iteration 38220, loss = 0.564117\nI0608 03:47:23.128430 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0235607 (* 1 = 0.0235607 loss)\nI0608 03:47:23.128440 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0538115 (* 1 = 0.0538115 loss)\nI0608 03:47:23.128446 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.232074 (* 1 = 0.232074 loss)\nI0608 03:47:23.128453 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0135418 (* 1 = 0.0135418 loss)\nI0608 03:47:23.128458 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.258316 (* 1 = 0.258316 loss)\nI0608 03:47:23.128464 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0135495 (* 1 = 0.0135495 loss)\nI0608 03:47:23.128474 13573 sgd_solver.cpp:106] Iteration 38220, lr = 0.001\nI0608 03:47:35.990888 13573 solver.cpp:229] Iteration 38240, loss = 0.49969\nI0608 
03:47:35.990967 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0068598 (* 1 = 0.0068598 loss)\nI0608 03:47:35.990977 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00405778 (* 1 = 0.00405778 loss)\nI0608 03:47:35.990983 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.215293 (* 1 = 0.215293 loss)\nI0608 03:47:35.990988 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0227564 (* 1 = 0.0227564 loss)\nI0608 03:47:35.990995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.210006 (* 1 = 0.210006 loss)\nI0608 03:47:35.991001 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0227437 (* 1 = 0.0227437 loss)\nI0608 03:47:35.991009 13573 sgd_solver.cpp:106] Iteration 38240, lr = 0.001\nI0608 03:47:48.896848 13573 solver.cpp:229] Iteration 38260, loss = 0.641908\nI0608 03:47:48.896911 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00319822 (* 1 = 0.00319822 loss)\nI0608 03:47:48.896921 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0151999 (* 1 = 0.0151999 loss)\nI0608 03:47:48.896927 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252693 (* 1 = 0.252693 loss)\nI0608 03:47:48.896934 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0479096 (* 1 = 0.0479096 loss)\nI0608 03:47:48.896939 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.262248 (* 1 = 0.262248 loss)\nI0608 03:47:48.896945 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0478104 (* 1 = 0.0478104 loss)\nI0608 03:47:48.896952 13573 sgd_solver.cpp:106] Iteration 38260, lr = 0.001\nI0608 03:48:01.888202 13573 solver.cpp:229] Iteration 38280, loss = 2.35418\nI0608 03:48:01.888272 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00830876 (* 1 = 0.00830876 loss)\nI0608 03:48:01.888281 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00513997 (* 1 = 0.00513997 loss)\nI0608 03:48:01.888288 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 1.16213 (* 1 = 1.16213 loss)\nI0608 03:48:01.888293 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.773623 (* 1 = 0.773623 loss)\nI0608 03:48:01.888298 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.14606 (* 1 = 1.14606 loss)\nI0608 03:48:01.888304 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.791422 (* 1 = 0.791422 loss)\nI0608 03:48:01.888310 13573 sgd_solver.cpp:106] Iteration 38280, lr = 0.001\nI0608 03:48:14.643028 13573 solver.cpp:229] Iteration 38300, loss = 0.747985\nI0608 03:48:14.643101 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0847375 (* 1 = 0.0847375 loss)\nI0608 03:48:14.643111 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.112003 (* 1 = 0.112003 loss)\nI0608 03:48:14.643117 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183288 (* 1 = 0.183288 loss)\nI0608 03:48:14.643122 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0157614 (* 1 = 0.0157614 loss)\nI0608 03:48:14.643128 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.22451 (* 1 = 0.22451 loss)\nI0608 03:48:14.643133 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0157593 (* 1 = 0.0157593 loss)\nI0608 03:48:14.643141 13573 sgd_solver.cpp:106] Iteration 38300, lr = 0.001\nI0608 03:48:27.512981 13573 solver.cpp:229] Iteration 38320, loss = 0.478231\nI0608 03:48:27.513051 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111825 (* 1 = 0.111825 loss)\nI0608 03:48:27.513059 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.117612 (* 1 = 0.117612 loss)\nI0608 03:48:27.513065 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128061 (* 1 = 0.128061 loss)\nI0608 03:48:27.513072 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0246493 (* 1 = 0.0246493 loss)\nI0608 03:48:27.513077 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128712 (* 1 = 
0.128712 loss)\nI0608 03:59:00.462690 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.094139 (* 1 = 0.094139 loss)\nI0608 03:59:00.462697 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0407545 (* 1 = 0.0407545 loss)\nI0608 03:59:00.462702 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.096525 (* 1 = 0.096525 loss)\nI0608 03:59:00.462707 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0407706 (* 1 = 0.0407706 loss)\nI0608 03:59:00.462714 13573 sgd_solver.cpp:106] Iteration 39300, lr = 0.001\nI0608 03:59:13.330322 13573 solver.cpp:229] Iteration 39320, loss = 1.3071\nI0608 03:59:13.330449 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.177385 (* 1 = 0.177385 loss)\nI0608 03:59:13.330459 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.191505 (* 1 = 0.191505 loss)\nI0608 03:59:13.330464 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.728985 (* 1 = 0.728985 loss)\nI0608 03:59:13.330471 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.195235 (* 1 = 0.195235 loss)\nI0608 03:59:13.330476 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.729175 (* 1 = 0.729175 loss)\nI0608 03:59:13.330482 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.195103 (* 1 = 0.195103 loss)\nI0608 03:59:13.330489 13573 sgd_solver.cpp:106] Iteration 39320, lr = 0.001\nI0608 03:59:26.276139 13573 solver.cpp:229] Iteration 39340, loss = 0.68547\nI0608 03:59:26.276208 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.275232 (* 1 = 0.275232 loss)\nI0608 03:59:26.276218 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.264408 (* 1 = 0.264408 loss)\nI0608 03:59:26.276224 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.099424 (* 1 = 0.099424 loss)\nI0608 03:59:26.276231 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0192318 (* 1 = 0.0192318 loss)\nI0608 03:59:26.276235 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102837 (* 1 = 0.102837 
loss)\nI0608 03:59:26.276242 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0192324 (* 1 = 0.0192324 loss)\nI0608 03:59:26.276248 13573 sgd_solver.cpp:106] Iteration 39340, lr = 0.001\nI0608 03:59:39.245141 13573 solver.cpp:229] Iteration 39360, loss = 0.688078\nI0608 03:59:39.245218 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102915 (* 1 = 0.102915 loss)\nI0608 03:59:39.245226 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0232588 (* 1 = 0.0232588 loss)\nI0608 03:59:39.245232 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.369824 (* 1 = 0.369824 loss)\nI0608 03:59:39.245239 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0733866 (* 1 = 0.0733866 loss)\nI0608 03:59:39.245244 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.358566 (* 1 = 0.358566 loss)\nI0608 03:59:39.245249 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0733829 (* 1 = 0.0733829 loss)\nI0608 03:59:39.245256 13573 sgd_solver.cpp:106] Iteration 39360, lr = 0.001\nI0608 03:59:52.200750 13573 solver.cpp:229] Iteration 39380, loss = 0.879364\nI0608 03:59:52.200825 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0882618 (* 1 = 0.0882618 loss)\nI0608 03:59:52.200835 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0719355 (* 1 = 0.0719355 loss)\nI0608 03:59:52.200842 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.124943 (* 1 = 0.124943 loss)\nI0608 03:59:52.200847 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.207225 (* 1 = 0.207225 loss)\nI0608 03:59:52.200853 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.12338 (* 1 = 0.12338 loss)\nI0608 03:59:52.200858 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.207044 (* 1 = 0.207044 loss)\nI0608 03:59:52.200865 13573 sgd_solver.cpp:106] Iteration 39380, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:00:05.125064 13573 solver.cpp:229] Iteration 39400, 
loss = 1.40786\nI0608 04:00:05.125139 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.219627 (* 1 = 0.219627 loss)\nI0608 04:00:05.125149 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.245906 (* 1 = 0.245906 loss)\nI0608 04:00:05.125155 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.558179 (* 1 = 0.558179 loss)\nI0608 04:00:05.125161 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0629527 (* 1 = 0.0629527 loss)\nI0608 04:00:05.125167 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.564323 (* 1 = 0.564323 loss)\nI0608 04:00:05.125174 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0629539 (* 1 = 0.0629539 loss)\nI0608 04:00:05.125180 13573 sgd_solver.cpp:106] Iteration 39400, lr = 0.001\nI0608 04:00:18.049111 13573 solver.cpp:229] Iteration 39420, loss = 0.969241\nI0608 04:00:18.049173 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.077738 (* 1 = 0.077738 loss)\nI0608 04:00:18.049183 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0998513 (* 1 = 0.0998513 loss)\nI0608 04:00:18.049190 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.520929 (* 1 = 0.520929 loss)\nI0608 04:00:18.049196 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.154214 (* 1 = 0.154214 loss)\nI0608 04:00:18.049201 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.524376 (* 1 = 0.524376 loss)\nI0608 04:00:18.049206 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.154272 (* 1 = 0.154272 loss)\nI0608 04:00:18.049213 13573 sgd_solver.cpp:106] Iteration 39420, lr = 0.001\nI0608 04:00:30.968875 13573 solver.cpp:229] Iteration 39440, loss = 1.20422\nI0608 04:00:30.968940 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.217921 (* 1 = 0.217921 loss)\nI0608 04:00:30.968950 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.162528 (* 1 = 0.162528 loss)\nI0608 04:00:30.968956 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.127986 (* 1 = 0.127986 loss)\nI0608 04:00:30.968962 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.027985 (* 1 = 0.027985 loss)\nI0608 04:00:30.968967 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.131542 (* 1 = 0.131542 loss)\nI0608 04:00:30.968973 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0279904 (* 1 = 0.0279904 loss)\nI0608 04:00:30.968981 13573 sgd_solver.cpp:106] Iteration 39440, lr = 0.001\nI0608 04:00:43.714474 13573 solver.cpp:229] Iteration 39460, loss = 0.534266\nI0608 04:00:43.714542 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.133263 (* 1 = 0.133263 loss)\nI0608 04:00:43.714551 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0888595 (* 1 = 0.0888595 loss)\nI0608 04:00:43.714557 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.198051 (* 1 = 0.198051 loss)\nI0608 04:00:43.714563 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.02946 (* 1 = 0.02946 loss)\nI0608 04:00:43.714570 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.202718 (* 1 = 0.202718 loss)\nI0608 04:00:43.714574 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0294746 (* 1 = 0.0294746 loss)\nI0608 04:00:43.714582 13573 sgd_solver.cpp:106] Iteration 39460, lr = 0.001\nI0608 04:00:56.585250 13573 solver.cpp:229] Iteration 39480, loss = 0.639583\nI0608 04:00:56.585325 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0688917 (* 1 = 0.0688917 loss)\nI0608 04:00:56.585335 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.183547 (* 1 = 0.183547 loss)\nI0608 04:00:56.585340 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0994953 (* 1 = 0.0994953 loss)\nI0608 04:00:56.585346 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106955 (* 1 = 0.106955 loss)\nI0608 04:00:56.585351 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0963703 (* 
1 = 0.0963703 loss)\nI0608 04:00:56.585357 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106953 (* 1 = 0.106953 loss)\nI0608 04:00:56.585363 13573 sgd_solver.cpp:106] Iteration 39480, lr = 0.001\nI0608 04:01:09.619385 13573 solver.cpp:229] Iteration 39500, loss = 0.428011\nI0608 04:01:09.619451 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0910286 (* 1 = 0.0910286 loss)\nI0608 04:01:09.619460 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0580433 (* 1 = 0.0580433 loss)\nI0608 04:01:09.619467 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0994193 (* 1 = 0.0994193 loss)\nI0608 04:01:09.619472 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0135767 (* 1 = 0.0135767 loss)\nI0608 04:01:09.619477 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0956417 (* 1 = 0.0956417 loss)\nI0608 04:01:09.619483 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0135781 (* 1 = 0.0135781 loss)\nI0608 04:01:09.619489 13573 sgd_solver.cpp:106] Iteration 39500, lr = 0.001\nI0608 04:01:22.429141 13573 solver.cpp:229] Iteration 39520, loss = 0.420574\nI0608 04:01:22.429203 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115187 (* 1 = 0.115187 loss)\nI0608 04:01:22.429213 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0572442 (* 1 = 0.0572442 loss)\nI0608 04:01:22.429219 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0970576 (* 1 = 0.0970576 loss)\nI0608 04:01:22.429224 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0275353 (* 1 = 0.0275353 loss)\nI0608 04:01:22.429230 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0922642 (* 1 = 0.0922642 loss)\nI0608 04:01:22.429236 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0275436 (* 1 = 0.0275436 loss)\nI0608 04:01:22.429244 13573 sgd_solver.cpp:106] Iteration 39520, lr = 0.001\nI0608 04:01:35.148056 13573 solver.cpp:229] Iteration 
39540, loss = 1.03572\nI0608 04:01:35.148123 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.156335 (* 1 = 0.156335 loss)\nI0608 04:01:35.148133 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.196494 (* 1 = 0.196494 loss)\nI0608 04:01:35.148139 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.247449 (* 1 = 0.247449 loss)\nI0608 04:01:35.148146 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0719808 (* 1 = 0.0719808 loss)\nI0608 04:01:35.148152 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.271674 (* 1 = 0.271674 loss)\nI0608 04:01:35.148159 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0719438 (* 1 = 0.0719438 loss)\nI0608 04:01:35.148167 13573 sgd_solver.cpp:106] Iteration 39540, lr = 0.001\nI0608 04:01:48.109277 13573 solver.cpp:229] Iteration 39560, loss = 0.462572\nI0608 04:01:48.109347 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00205407 (* 1 = 0.00205407 loss)\nI0608 04:01:48.109359 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000364195 (* 1 = 0.000364195 loss)\nI0608 04:01:48.109364 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.169496 (* 1 = 0.169496 loss)\nI0608 04:01:48.109370 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00768922 (* 1 = 0.00768922 loss)\nI0608 04:01:48.109375 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186681 (* 1 = 0.186681 loss)\nI0608 04:01:48.109381 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00766448 (* 1 = 0.00766448 loss)\nI0608 04:01:48.109388 13573 sgd_solver.cpp:106] Iteration 39560, lr = 0.001\nI0608 04:02:01.318262 13573 solver.cpp:229] Iteration 39580, loss = 0.675017\nI0608 04:02:01.318327 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000489036 (* 1 = 0.000489036 loss)\nI0608 04:02:01.318337 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00210759 (* 1 = 0.00210759 loss)\nI0608 
04:02:01.318343 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.360059 (* 1 = 0.360059 loss)\nI0608 04:02:01.318351 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0303682 (* 1 = 0.0303682 loss)\nI0608 04:02:01.318356 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.369456 (* 1 = 0.369456 loss)\nI0608 04:02:01.318362 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0303707 (* 1 = 0.0303707 loss)\nI0608 04:02:01.318369 13573 sgd_solver.cpp:106] Iteration 39580, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:02:14.077071 13573 solver.cpp:229] Iteration 39600, loss = 2.09342\nI0608 04:02:14.077142 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0398924 (* 1 = 0.0398924 loss)\nI0608 04:02:14.077152 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.135043 (* 1 = 0.135043 loss)\nI0608 04:02:14.077158 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.168245 (* 1 = 0.168245 loss)\nI0608 04:02:14.077163 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0242107 (* 1 = 0.0242107 loss)\nI0608 04:02:14.077169 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154847 (* 1 = 0.154847 loss)\nI0608 04:02:14.077175 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.024212 (* 1 = 0.024212 loss)\nI0608 04:02:14.077183 13573 sgd_solver.cpp:106] Iteration 39600, lr = 0.001\nI0608 04:02:27.006932 13573 solver.cpp:229] Iteration 39620, loss = 0.736375\nI0608 04:02:27.007002 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.212556 (* 1 = 0.212556 loss)\nI0608 04:02:27.007014 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.17082 (* 1 = 0.17082 loss)\nI0608 04:02:27.007019 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136383 (* 1 = 0.136383 loss)\nI0608 04:02:27.007025 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0274215 (* 1 = 0.0274215 loss)\nI0608 04:02:27.007031 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.13403 (* 1 = 0.13403 loss)\nI0608 04:02:27.007037 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0273635 (* 1 = 0.0273635 loss)\nI0608 04:02:27.007043 13573 sgd_solver.cpp:106] Iteration 39620, lr = 0.001\nI0608 04:02:40.063699 13573 solver.cpp:229] Iteration 39640, loss = 1.15925\nI0608 04:02:40.063825 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.213272 (* 1 = 0.213272 loss)\nI0608 04:02:40.063834 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.294795 (* 1 = 0.294795 loss)\nI0608 04:02:40.063840 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.278386 (* 1 = 0.278386 loss)\nI0608 04:02:40.063846 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0559322 (* 1 = 0.0559322 loss)\nI0608 04:02:40.063851 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.282256 (* 1 = 0.282256 loss)\nI0608 04:02:40.063858 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0559133 (* 1 = 0.0559133 loss)\nI0608 04:02:40.063864 13573 sgd_solver.cpp:106] Iteration 39640, lr = 0.001\nI0608 04:02:53.191848 13573 solver.cpp:229] Iteration 39660, loss = 0.640987\nI0608 04:02:53.191913 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.15248 (* 1 = 0.15248 loss)\nI0608 04:02:53.191923 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0779417 (* 1 = 0.0779417 loss)\nI0608 04:02:53.191929 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.28733 (* 1 = 0.28733 loss)\nI0608 04:02:53.191936 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0430359 (* 1 = 0.0430359 loss)\nI0608 04:02:53.191941 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.301828 (* 1 = 0.301828 loss)\nI0608 04:02:53.191946 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0430351 (* 1 = 0.0430351 loss)\nI0608 04:02:53.191954 13573 sgd_solver.cpp:106] Iteration 39660, lr = 0.001\nI0608 
04:03:06.098462 13573 solver.cpp:229] Iteration 39680, loss = 0.905536\nI0608 04:03:06.098527 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102229 (* 1 = 0.102229 loss)\nI0608 04:03:06.098536 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0408456 (* 1 = 0.0408456 loss)\nI0608 04:03:06.098542 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102694 (* 1 = 0.102694 loss)\nI0608 04:03:06.098548 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0178595 (* 1 = 0.0178595 loss)\nI0608 04:03:06.098553 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0994633 (* 1 = 0.0994633 loss)\nI0608 04:03:06.098559 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0178506 (* 1 = 0.0178506 loss)\nI0608 04:03:06.098567 13573 sgd_solver.cpp:106] Iteration 39680, lr = 0.001\nI0608 04:03:18.990856 13573 solver.cpp:229] Iteration 39700, loss = 0.432513\nI0608 04:03:18.990932 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.022732 (* 1 = 0.022732 loss)\nI0608 04:03:18.990943 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.026952 (* 1 = 0.026952 loss)\nI0608 04:03:18.990949 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0925617 (* 1 = 0.0925617 loss)\nI0608 04:03:18.990954 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0341542 (* 1 = 0.0341542 loss)\nI0608 04:03:18.990960 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100225 (* 1 = 0.100225 loss)\nI0608 04:03:18.990965 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0341627 (* 1 = 0.0341627 loss)\nI0608 04:03:18.990972 13573 sgd_solver.cpp:106] Iteration 39700, lr = 0.001\nI0608 04:03:31.729835 13573 solver.cpp:229] Iteration 39720, loss = 1.04705\nI0608 04:03:31.729899 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0516651 (* 1 = 0.0516651 loss)\nI0608 04:03:31.729908 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0338065 (* 1 
= 0.0338065 loss)\nI0608 04:03:31.729914 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.255131 (* 1 = 0.255131 loss)\nI0608 04:03:31.729920 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0108164 (* 1 = 0.0108164 loss)\nI0608 04:03:31.729926 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.260009 (* 1 = 0.260009 loss)\nI0608 04:03:31.729933 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0108114 (* 1 = 0.0108114 loss)\nI0608 04:03:31.729939 13573 sgd_solver.cpp:106] Iteration 39720, lr = 0.001\nI0608 04:03:44.588846 13573 solver.cpp:229] Iteration 39740, loss = 2.82124\nI0608 04:03:44.588917 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00567536 (* 1 = 0.00567536 loss)\nI0608 04:03:44.588927 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0956144 (* 1 = 0.0956144 loss)\nI0608 04:03:44.588933 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.40241 (* 1 = 1.40241 loss)\nI0608 04:03:44.588939 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.595342 (* 1 = 0.595342 loss)\nI0608 04:03:44.588944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.44174 (* 1 = 1.44174 loss)\nI0608 04:03:44.588950 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.584018 (* 1 = 0.584018 loss)\nI0608 04:03:44.588956 13573 sgd_solver.cpp:106] Iteration 39740, lr = 0.001\nI0608 04:03:57.329349 13573 solver.cpp:229] Iteration 39760, loss = 0.841372\nI0608 04:03:57.329422 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0607863 (* 1 = 0.0607863 loss)\nI0608 04:03:57.329432 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0760237 (* 1 = 0.0760237 loss)\nI0608 04:03:57.329437 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.502988 (* 1 = 0.502988 loss)\nI0608 04:03:57.329443 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0624812 (* 1 = 0.0624812 loss)\nI0608 04:03:57.329449 
13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.51374 (* 1 = 0.51374 loss)\nI0608 04:03:57.329455 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0624731 (* 1 = 0.0624731 loss)\nI0608 04:03:57.329462 13573 sgd_solver.cpp:106] Iteration 39760, lr = 0.001\nI0608 04:04:10.418385 13573 solver.cpp:229] Iteration 39780, loss = 0.761099\nI0608 04:04:10.418450 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0164609 (* 1 = 0.0164609 loss)\nI0608 04:04:10.418460 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0442273 (* 1 = 0.0442273 loss)\nI0608 04:04:10.418467 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207693 (* 1 = 0.207693 loss)\nI0608 04:04:10.418471 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0341195 (* 1 = 0.0341195 loss)\nI0608 04:04:10.418478 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.231855 (* 1 = 0.231855 loss)\nI0608 04:04:10.418483 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.034116 (* 1 = 0.034116 loss)\nI0608 04:04:10.418489 13573 sgd_solver.cpp:106] Iteration 39780, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:04:23.250210 13573 solver.cpp:229] Iteration 39800, loss = 1.51809\nI0608 04:04:23.250277 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.112764 (* 1 = 0.112764 loss)\nI0608 04:04:23.250288 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0858745 (* 1 = 0.0858745 loss)\nI0608 04:04:23.250294 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.284856 (* 1 = 0.284856 loss)\nI0608 04:04:23.250300 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0427346 (* 1 = 0.0427346 loss)\nI0608 04:04:23.250306 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.288543 (* 1 = 0.288543 loss)\nI0608 04:04:23.250313 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0427339 (* 1 = 0.0427339 loss)\nI0608 04:04:23.250320 13573 sgd_solver.cpp:106] 
Iteration 39800, lr = 0.001\nI0608 04:04:36.448474 13573 solver.cpp:229] Iteration 39820, loss = 1.06073\nI0608 04:04:36.448551 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0933592 (* 1 = 0.0933592 loss)\nI0608 04:04:36.448561 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.221108 (* 1 = 0.221108 loss)\nI0608 04:04:36.448567 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.445323 (* 1 = 0.445323 loss)\nI0608 04:04:36.448573 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0740628 (* 1 = 0.0740628 loss)\nI0608 04:04:36.448580 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.426672 (* 1 = 0.426672 loss)\nI0608 04:04:36.448585 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.074097 (* 1 = 0.074097 loss)\nI0608 04:04:36.448593 13573 sgd_solver.cpp:106] Iteration 39820, lr = 0.001\nI0608 04:04:49.397860 13573 solver.cpp:229] Iteration 39840, loss = 0.358507\nI0608 04:04:49.397920 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0757101 (* 1 = 0.0757101 loss)\nI0608 04:04:49.397929 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0283298 (* 1 = 0.0283298 loss)\nI0608 04:04:49.397935 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0947054 (* 1 = 0.0947054 loss)\nI0608 04:04:49.397941 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0629845 (* 1 = 0.0629845 loss)\nI0608 04:04:49.397948 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0891217 (* 1 = 0.0891217 loss)\nI0608 04:04:49.397953 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.062996 (* 1 = 0.062996 loss)\nI0608 04:04:49.397961 13573 sgd_solver.cpp:106] Iteration 39840, lr = 0.001\nI0608 04:05:02.071521 13573 solver.cpp:229] Iteration 39860, loss = 0.540057\nI0608 04:05:02.071588 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.110622 (* 1 = 0.110622 loss)\nI0608 04:05:02.071596 13573 solver.cpp:245]     Train net 
output #1: loss_cls = 0.103627 (* 1 = 0.103627 loss)\nI0608 04:05:02.071604 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.206084 (* 1 = 0.206084 loss)\nI0608 04:05:02.071609 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0132162 (* 1 = 0.0132162 loss)\nI0608 04:05:02.071615 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.188868 (* 1 = 0.188868 loss)\nI0608 04:05:02.071620 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0132146 (* 1 = 0.0132146 loss)\nI0608 04:05:02.071629 13573 sgd_solver.cpp:106] Iteration 39860, lr = 0.001\nI0608 04:05:15.143038 13573 solver.cpp:229] Iteration 39880, loss = 1.11447\nI0608 04:05:15.143100 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.146919 (* 1 = 0.146919 loss)\nI0608 04:05:15.143110 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.17844 (* 1 = 0.17844 loss)\nI0608 04:05:15.143115 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.237838 (* 1 = 0.237838 loss)\nI0608 04:05:15.143121 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0975637 (* 1 = 0.0975637 loss)\nI0608 04:05:15.143127 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.252313 (* 1 = 0.252313 loss)\nI0608 04:05:15.143133 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0975678 (* 1 = 0.0975678 loss)\nI0608 04:05:15.143141 13573 sgd_solver.cpp:106] Iteration 39880, lr = 0.001\nI0608 04:05:27.901546 13573 solver.cpp:229] Iteration 39900, loss = 0.614556\nI0608 04:05:27.901618 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0890637 (* 1 = 0.0890637 loss)\nI0608 04:05:27.901628 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.127033 (* 1 = 0.127033 loss)\nI0608 04:05:27.901635 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0985895 (* 1 = 0.0985895 loss)\nI0608 04:05:27.901641 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0423562 (* 1 = 
Training-log excerpt from the end-to-end run (iterations 39900–40880, lr = 0.001, ~0.645 s/iter throughout; the total loss fluctuates between roughly 0.3 and 1.9, and repeated per-iteration blocks are trimmed below):

```
I0608 04:05:40.798593 13573 solver.cpp:229] Iteration 39920, loss = 0.486474
I0608 04:05:40.798662 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.192948 (* 1 = 0.192948 loss)
I0608 04:05:40.798672 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.11243 (* 1 = 0.11243 loss)
I0608 04:05:40.798679 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0990893 (* 1 = 0.0990893 loss)
I0608 04:05:40.798684 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0111947 (* 1 = 0.0111947 loss)
I0608 04:05:40.798691 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100646 (* 1 = 0.100646 loss)
I0608 04:05:40.798696 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.011206 (* 1 = 0.011206 loss)
I0608 04:05:40.798703 13573 sgd_solver.cpp:106] Iteration 39920, lr = 0.001
...
speed: 0.645s / iter
Wrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_40000.caffemodel
I0608 04:06:38.288175 13573 solver.cpp:229] Iteration 40000, loss = 0.790908
...
I0608 04:16:05.352435 13573 solver.cpp:229] Iteration 40880, loss = 0.298028
```
04:16:05.352497 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.061093 (* 1 = 0.061093 loss)\nI0608 04:16:05.352509 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0372485 (* 1 = 0.0372485 loss)\nI0608 04:16:05.352514 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0937058 (* 1 = 0.0937058 loss)\nI0608 04:16:05.352520 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0229372 (* 1 = 0.0229372 loss)\nI0608 04:16:05.352526 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0945434 (* 1 = 0.0945434 loss)\nI0608 04:16:05.352532 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0229532 (* 1 = 0.0229532 loss)\nI0608 04:16:05.352538 13573 sgd_solver.cpp:106] Iteration 40880, lr = 0.001\nI0608 04:16:18.249506 13573 solver.cpp:229] Iteration 40900, loss = 2.06906\nI0608 04:16:18.249593 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0636363 (* 1 = 0.0636363 loss)\nI0608 04:16:18.249603 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0993328 (* 1 = 0.0993328 loss)\nI0608 04:16:18.249608 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.26414 (* 1 = 1.26414 loss)\nI0608 04:16:18.249614 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.397765 (* 1 = 0.397765 loss)\nI0608 04:16:18.249619 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.23253 (* 1 = 1.23253 loss)\nI0608 04:16:18.249625 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.398412 (* 1 = 0.398412 loss)\nI0608 04:16:18.249632 13573 sgd_solver.cpp:106] Iteration 40900, lr = 0.001\nI0608 04:16:31.021548 13573 solver.cpp:229] Iteration 40920, loss = 1.13007\nI0608 04:16:31.021618 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0933018 (* 1 = 0.0933018 loss)\nI0608 04:16:31.021631 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0826162 (* 1 = 0.0826162 loss)\nI0608 04:16:31.021636 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.364113 (* 1 = 0.364113 loss)\nI0608 04:16:31.021642 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.1158 (* 1 = 0.1158 loss)\nI0608 04:16:31.021648 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.356953 (* 1 = 0.356953 loss)\nI0608 04:16:31.021654 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.115813 (* 1 = 0.115813 loss)\nI0608 04:16:31.021661 13573 sgd_solver.cpp:106] Iteration 40920, lr = 0.001\nI0608 04:16:43.910567 13573 solver.cpp:229] Iteration 40940, loss = 0.532125\nI0608 04:16:43.910634 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00463533 (* 1 = 0.00463533 loss)\nI0608 04:16:43.910642 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00815367 (* 1 = 0.00815367 loss)\nI0608 04:16:43.910648 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.334362 (* 1 = 0.334362 loss)\nI0608 04:16:43.910655 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0309817 (* 1 = 0.0309817 loss)\nI0608 04:16:43.910660 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.293787 (* 1 = 0.293787 loss)\nI0608 04:16:43.910666 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0309663 (* 1 = 0.0309663 loss)\nI0608 04:16:43.910673 13573 sgd_solver.cpp:106] Iteration 40940, lr = 0.001\nI0608 04:16:56.503051 13573 solver.cpp:229] Iteration 40960, loss = 0.519246\nI0608 04:16:56.503114 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125349 (* 1 = 0.125349 loss)\nI0608 04:16:56.503124 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0489986 (* 1 = 0.0489986 loss)\nI0608 04:16:56.503131 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105905 (* 1 = 0.105905 loss)\nI0608 04:16:56.503137 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.102042 (* 1 = 0.102042 loss)\nI0608 04:16:56.503142 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109098 (* 1 = 0.109098 
loss)\nI0608 04:16:56.503149 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.101923 (* 1 = 0.101923 loss)\nI0608 04:16:56.503155 13573 sgd_solver.cpp:106] Iteration 40960, lr = 0.001\nI0608 04:17:09.109287 13573 solver.cpp:229] Iteration 40980, loss = 0.497647\nI0608 04:17:09.109369 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0154853 (* 1 = 0.0154853 loss)\nI0608 04:17:09.109380 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0170907 (* 1 = 0.0170907 loss)\nI0608 04:17:09.109386 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.29294 (* 1 = 0.29294 loss)\nI0608 04:17:09.109392 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0137094 (* 1 = 0.0137094 loss)\nI0608 04:17:09.109398 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239661 (* 1 = 0.239661 loss)\nI0608 04:17:09.109405 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0137096 (* 1 = 0.0137096 loss)\nI0608 04:17:09.109411 13573 sgd_solver.cpp:106] Iteration 40980, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:17:21.945849 13573 solver.cpp:229] Iteration 41000, loss = 0.553974\nI0608 04:17:21.945912 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0011634 (* 1 = 0.0011634 loss)\nI0608 04:17:21.945922 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00268865 (* 1 = 0.00268865 loss)\nI0608 04:17:21.945929 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.361651 (* 1 = 0.361651 loss)\nI0608 04:17:21.945935 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130857 (* 1 = 0.0130857 loss)\nI0608 04:17:21.945940 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.363124 (* 1 = 0.363124 loss)\nI0608 04:17:21.945946 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0130866 (* 1 = 0.0130866 loss)\nI0608 04:17:21.945952 13573 sgd_solver.cpp:106] Iteration 41000, lr = 0.001\nI0608 04:17:34.762652 13573 solver.cpp:229] Iteration 
41020, loss = 0.656693\nI0608 04:17:34.762723 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0984811 (* 1 = 0.0984811 loss)\nI0608 04:17:34.762732 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.103982 (* 1 = 0.103982 loss)\nI0608 04:17:34.762739 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.27983 (* 1 = 0.27983 loss)\nI0608 04:17:34.762745 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.111447 (* 1 = 0.111447 loss)\nI0608 04:17:34.762750 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.274429 (* 1 = 0.274429 loss)\nI0608 04:17:34.762756 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.111463 (* 1 = 0.111463 loss)\nI0608 04:17:34.762763 13573 sgd_solver.cpp:106] Iteration 41020, lr = 0.001\nI0608 04:17:47.719396 13573 solver.cpp:229] Iteration 41040, loss = 0.588009\nI0608 04:17:47.719460 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.191242 (* 1 = 0.191242 loss)\nI0608 04:17:47.719470 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.221977 (* 1 = 0.221977 loss)\nI0608 04:17:47.719476 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103703 (* 1 = 0.103703 loss)\nI0608 04:17:47.719483 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0149026 (* 1 = 0.0149026 loss)\nI0608 04:17:47.719488 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101095 (* 1 = 0.101095 loss)\nI0608 04:17:47.719494 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0148992 (* 1 = 0.0148992 loss)\nI0608 04:17:47.719501 13573 sgd_solver.cpp:106] Iteration 41040, lr = 0.001\nI0608 04:18:00.663797 13573 solver.cpp:229] Iteration 41060, loss = 1.01934\nI0608 04:18:00.663862 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.142516 (* 1 = 0.142516 loss)\nI0608 04:18:00.663872 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.101077 (* 1 = 0.101077 loss)\nI0608 04:18:00.663879 13573 solver.cpp:245] 
    Train net output #2: p2_rpn_cls_loss = 0.191103 (* 1 = 0.191103 loss)\nI0608 04:18:00.663885 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0156251 (* 1 = 0.0156251 loss)\nI0608 04:18:00.663892 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.199672 (* 1 = 0.199672 loss)\nI0608 04:18:00.663897 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0156315 (* 1 = 0.0156315 loss)\nI0608 04:18:00.663905 13573 sgd_solver.cpp:106] Iteration 41060, lr = 0.001\nI0608 04:18:13.818966 13573 solver.cpp:229] Iteration 41080, loss = 0.788348\nI0608 04:18:13.819039 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0242157 (* 1 = 0.0242157 loss)\nI0608 04:18:13.819049 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0455421 (* 1 = 0.0455421 loss)\nI0608 04:18:13.819056 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.373903 (* 1 = 0.373903 loss)\nI0608 04:18:13.819061 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0471426 (* 1 = 0.0471426 loss)\nI0608 04:18:13.819067 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.370403 (* 1 = 0.370403 loss)\nI0608 04:18:13.819072 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0471337 (* 1 = 0.0471337 loss)\nI0608 04:18:13.819080 13573 sgd_solver.cpp:106] Iteration 41080, lr = 0.001\nI0608 04:18:26.799171 13573 solver.cpp:229] Iteration 41100, loss = 0.996355\nI0608 04:18:26.799234 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00320345 (* 1 = 0.00320345 loss)\nI0608 04:18:26.799244 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0341187 (* 1 = 0.0341187 loss)\nI0608 04:18:26.799250 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.414911 (* 1 = 0.414911 loss)\nI0608 04:18:26.799257 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.121278 (* 1 = 0.121278 loss)\nI0608 04:18:26.799262 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss 
= 0.387588 (* 1 = 0.387588 loss)\nI0608 04:18:26.799268 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.121305 (* 1 = 0.121305 loss)\nI0608 04:18:26.799274 13573 sgd_solver.cpp:106] Iteration 41100, lr = 0.001\nI0608 04:18:39.867489 13573 solver.cpp:229] Iteration 41120, loss = 2.05616\nI0608 04:18:39.867558 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0012001 (* 1 = 0.0012001 loss)\nI0608 04:18:39.867568 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000684941 (* 1 = 0.000684941 loss)\nI0608 04:18:39.867573 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.292769 (* 1 = 0.292769 loss)\nI0608 04:18:39.867579 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0303719 (* 1 = 0.0303719 loss)\nI0608 04:18:39.867584 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.32906 (* 1 = 0.32906 loss)\nI0608 04:18:39.867590 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0303551 (* 1 = 0.0303551 loss)\nI0608 04:18:39.867597 13573 sgd_solver.cpp:106] Iteration 41120, lr = 0.001\nI0608 04:18:52.892375 13573 solver.cpp:229] Iteration 41140, loss = 1.54568\nI0608 04:18:52.892448 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00124406 (* 1 = 0.00124406 loss)\nI0608 04:18:52.892458 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00238939 (* 1 = 0.00238939 loss)\nI0608 04:18:52.892464 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.305169 (* 1 = 0.305169 loss)\nI0608 04:18:52.892470 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0153091 (* 1 = 0.0153091 loss)\nI0608 04:18:52.892477 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.259487 (* 1 = 0.259487 loss)\nI0608 04:18:52.892483 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0153101 (* 1 = 0.0153101 loss)\nI0608 04:18:52.892490 13573 sgd_solver.cpp:106] Iteration 41140, lr = 0.001\nI0608 04:19:05.968979 13573 solver.cpp:229] 
Iteration 41160, loss = 0.549595\nI0608 04:19:05.969043 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0630821 (* 1 = 0.0630821 loss)\nI0608 04:19:05.969053 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0614511 (* 1 = 0.0614511 loss)\nI0608 04:19:05.969058 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0966474 (* 1 = 0.0966474 loss)\nI0608 04:19:05.969064 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110152 (* 1 = 0.0110152 loss)\nI0608 04:19:05.969070 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0848586 (* 1 = 0.0848586 loss)\nI0608 04:19:05.969076 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110185 (* 1 = 0.0110185 loss)\nI0608 04:19:05.969084 13573 sgd_solver.cpp:106] Iteration 41160, lr = 0.001\nI0608 04:19:18.869958 13573 solver.cpp:229] Iteration 41180, loss = 1.15903\nI0608 04:19:18.870038 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.194257 (* 1 = 0.194257 loss)\nI0608 04:19:18.870048 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0482566 (* 1 = 0.0482566 loss)\nI0608 04:19:18.870054 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180309 (* 1 = 0.180309 loss)\nI0608 04:19:18.870059 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00319439 (* 1 = 0.00319439 loss)\nI0608 04:19:18.870065 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128542 (* 1 = 0.128542 loss)\nI0608 04:19:18.870070 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00318838 (* 1 = 0.00318838 loss)\nI0608 04:19:18.870077 13573 sgd_solver.cpp:106] Iteration 41180, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:19:31.621636 13573 solver.cpp:229] Iteration 41200, loss = 0.570126\nI0608 04:19:31.621701 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0837334 (* 1 = 0.0837334 loss)\nI0608 04:19:31.621711 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.117051 (* 1 = 
0.117051 loss)\nI0608 04:19:31.621717 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0807639 (* 1 = 0.0807639 loss)\nI0608 04:19:31.621722 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0234501 (* 1 = 0.0234501 loss)\nI0608 04:19:31.621728 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0808681 (* 1 = 0.0808681 loss)\nI0608 04:19:31.621734 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0234508 (* 1 = 0.0234508 loss)\nI0608 04:19:31.621770 13573 sgd_solver.cpp:106] Iteration 41200, lr = 0.001\nI0608 04:19:44.430949 13573 solver.cpp:229] Iteration 41220, loss = 1.0161\nI0608 04:19:44.431077 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.17306 (* 1 = 0.17306 loss)\nI0608 04:19:44.431088 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0905662 (* 1 = 0.0905662 loss)\nI0608 04:19:44.431094 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.264923 (* 1 = 0.264923 loss)\nI0608 04:19:44.431100 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.04749 (* 1 = 0.04749 loss)\nI0608 04:19:44.431107 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.285167 (* 1 = 0.285167 loss)\nI0608 04:19:44.431113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0474746 (* 1 = 0.0474746 loss)\nI0608 04:19:44.431120 13573 sgd_solver.cpp:106] Iteration 41220, lr = 0.001\nI0608 04:19:57.267478 13573 solver.cpp:229] Iteration 41240, loss = 0.579436\nI0608 04:19:57.267547 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0474316 (* 1 = 0.0474316 loss)\nI0608 04:19:57.267557 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.103606 (* 1 = 0.103606 loss)\nI0608 04:19:57.267563 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100018 (* 1 = 0.100018 loss)\nI0608 04:19:57.267570 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0555516 (* 1 = 0.0555516 loss)\nI0608 04:19:57.267575 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0961372 (* 1 = 0.0961372 loss)\nI0608 04:19:57.267580 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0555446 (* 1 = 0.0555446 loss)\nI0608 04:19:57.267586 13573 sgd_solver.cpp:106] Iteration 41240, lr = 0.001\nI0608 04:20:10.043395 13573 solver.cpp:229] Iteration 41260, loss = 0.485955\nI0608 04:20:10.043458 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00151693 (* 1 = 0.00151693 loss)\nI0608 04:20:10.043468 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0223054 (* 1 = 0.0223054 loss)\nI0608 04:20:10.043475 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.225594 (* 1 = 0.225594 loss)\nI0608 04:20:10.043481 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0283544 (* 1 = 0.0283544 loss)\nI0608 04:20:10.043488 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.236264 (* 1 = 0.236264 loss)\nI0608 04:20:10.043493 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0283465 (* 1 = 0.0283465 loss)\nI0608 04:20:10.043500 13573 sgd_solver.cpp:106] Iteration 41260, lr = 0.001\nI0608 04:20:23.021082 13573 solver.cpp:229] Iteration 41280, loss = 0.451444\nI0608 04:20:23.021155 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000565138 (* 1 = 0.000565138 loss)\nI0608 04:20:23.021165 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00544332 (* 1 = 0.00544332 loss)\nI0608 04:20:23.021172 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.276297 (* 1 = 0.276297 loss)\nI0608 04:20:23.021184 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0375922 (* 1 = 0.0375922 loss)\nI0608 04:20:23.021191 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.267356 (* 1 = 0.267356 loss)\nI0608 04:20:23.021198 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0375786 (* 1 = 0.0375786 loss)\nI0608 04:20:23.021205 13573 sgd_solver.cpp:106] Iteration 
41280, lr = 0.001\nI0608 04:20:35.907335 13573 solver.cpp:229] Iteration 41300, loss = 0.922809\nI0608 04:20:35.907402 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00463619 (* 1 = 0.00463619 loss)\nI0608 04:20:35.907413 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0517943 (* 1 = 0.0517943 loss)\nI0608 04:20:35.907418 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.529092 (* 1 = 0.529092 loss)\nI0608 04:20:35.907424 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0816454 (* 1 = 0.0816454 loss)\nI0608 04:20:35.907430 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.497975 (* 1 = 0.497975 loss)\nI0608 04:20:35.907436 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0815826 (* 1 = 0.0815826 loss)\nI0608 04:20:35.907444 13573 sgd_solver.cpp:106] Iteration 41300, lr = 0.001\nI0608 04:20:48.894871 13573 solver.cpp:229] Iteration 41320, loss = 0.863672\nI0608 04:20:48.894958 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.252447 (* 1 = 0.252447 loss)\nI0608 04:20:48.894968 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.238819 (* 1 = 0.238819 loss)\nI0608 04:20:48.894973 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.266274 (* 1 = 0.266274 loss)\nI0608 04:20:48.894980 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0878637 (* 1 = 0.0878637 loss)\nI0608 04:20:48.894986 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.258531 (* 1 = 0.258531 loss)\nI0608 04:20:48.894992 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.087837 (* 1 = 0.087837 loss)\nI0608 04:20:48.894999 13573 sgd_solver.cpp:106] Iteration 41320, lr = 0.001\nI0608 04:21:01.754184 13573 solver.cpp:229] Iteration 41340, loss = 0.698842\nI0608 04:21:01.754247 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0148022 (* 1 = 0.0148022 loss)\nI0608 04:21:01.754257 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.0387877 (* 1 = 0.0387877 loss)\nI0608 04:21:01.754263 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0753654 (* 1 = 0.0753654 loss)\nI0608 04:21:01.754268 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0133435 (* 1 = 0.0133435 loss)\nI0608 04:21:01.754274 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0723675 (* 1 = 0.0723675 loss)\nI0608 04:21:01.754281 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0133442 (* 1 = 0.0133442 loss)\nI0608 04:21:01.754287 13573 sgd_solver.cpp:106] Iteration 41340, lr = 0.001\nI0608 04:21:14.728276 13573 solver.cpp:229] Iteration 41360, loss = 1.39653\nI0608 04:21:14.728343 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00190129 (* 1 = 0.00190129 loss)\nI0608 04:21:14.728353 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00326849 (* 1 = 0.00326849 loss)\nI0608 04:21:14.728359 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.568002 (* 1 = 0.568002 loss)\nI0608 04:21:14.728364 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.208566 (* 1 = 0.208566 loss)\nI0608 04:21:14.728370 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.495119 (* 1 = 0.495119 loss)\nI0608 04:21:14.728376 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.20867 (* 1 = 0.20867 loss)\nI0608 04:21:14.728382 13573 sgd_solver.cpp:106] Iteration 41360, lr = 0.001\nI0608 04:21:27.564918 13573 solver.cpp:229] Iteration 41380, loss = 1.12892\nI0608 04:21:27.564990 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.136065 (* 1 = 0.136065 loss)\nI0608 04:21:27.564999 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.11172 (* 1 = 0.11172 loss)\nI0608 04:21:27.565006 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.139447 (* 1 = 0.139447 loss)\nI0608 04:21:27.565011 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.177112 (* 1 = 0.177112 
loss)\nI0608 04:21:27.565017 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138323 (* 1 = 0.138323 loss)\nI0608 04:21:27.565022 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.177079 (* 1 = 0.177079 loss)\nI0608 04:21:27.565029 13573 sgd_solver.cpp:106] Iteration 41380, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:21:40.390216 13573 solver.cpp:229] Iteration 41400, loss = 0.314346\nI0608 04:21:40.390277 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.140669 (* 1 = 0.140669 loss)\nI0608 04:21:40.390287 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0999956 (* 1 = 0.0999956 loss)\nI0608 04:21:40.390293 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0720473 (* 1 = 0.0720473 loss)\nI0608 04:21:40.390300 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00638911 (* 1 = 0.00638911 loss)\nI0608 04:21:40.390305 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.068257 (* 1 = 0.068257 loss)\nI0608 04:21:40.390311 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00638921 (* 1 = 0.00638921 loss)\nI0608 04:21:40.390318 13573 sgd_solver.cpp:106] Iteration 41400, lr = 0.001\nI0608 04:21:53.318871 13573 solver.cpp:229] Iteration 41420, loss = 0.872569\nI0608 04:21:53.319005 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0630239 (* 1 = 0.0630239 loss)\nI0608 04:21:53.319015 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0928269 (* 1 = 0.0928269 loss)\nI0608 04:21:53.319020 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.360746 (* 1 = 0.360746 loss)\nI0608 04:21:53.319025 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.026805 (* 1 = 0.026805 loss)\nI0608 04:21:53.319031 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.357402 (* 1 = 0.357402 loss)\nI0608 04:21:53.319037 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0267823 (* 1 = 0.0267823 loss)\nI0608 
04:21:53.319044 13573 sgd_solver.cpp:106] Iteration 41420, lr = 0.001\nI0608 04:22:06.166059 13573 solver.cpp:229] Iteration 41440, loss = 0.793241\nI0608 04:22:06.166127 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0551194 (* 1 = 0.0551194 loss)\nI0608 04:22:06.166137 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0819688 (* 1 = 0.0819688 loss)\nI0608 04:22:06.166142 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0822831 (* 1 = 0.0822831 loss)\nI0608 04:22:06.166148 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0145228 (* 1 = 0.0145228 loss)\nI0608 04:22:06.166153 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0807576 (* 1 = 0.0807576 loss)\nI0608 04:22:06.166159 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0145234 (* 1 = 0.0145234 loss)\nI0608 04:22:06.166167 13573 sgd_solver.cpp:106] Iteration 41440, lr = 0.001\nI0608 04:22:18.909374 13573 solver.cpp:229] Iteration 41460, loss = 1.13927\nI0608 04:22:18.909445 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00327243 (* 1 = 0.00327243 loss)\nI0608 04:22:18.909456 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00754676 (* 1 = 0.00754676 loss)\nI0608 04:22:18.909462 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407056 (* 1 = 0.407056 loss)\nI0608 04:22:18.909468 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.119605 (* 1 = 0.119605 loss)\nI0608 04:22:18.909473 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.454975 (* 1 = 0.454975 loss)\nI0608 04:22:18.909479 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.119676 (* 1 = 0.119676 loss)\nI0608 04:22:18.909487 13573 sgd_solver.cpp:106] Iteration 41460, lr = 0.001\nI0608 04:22:31.862547 13573 solver.cpp:229] Iteration 41480, loss = 0.374719\nI0608 04:22:31.862619 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0510019 (* 1 = 0.0510019 loss)\nI0608 
loss)\nI0608 04:32:50.447531 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0186251 (* 1 = 0.0186251 loss)\nI0608 04:32:50.447538 13573 sgd_solver.cpp:106] Iteration 42440, lr = 0.001\nI0608 04:33:03.322886 13573 solver.cpp:229] Iteration 42460, loss = 0.449034\nI0608 04:33:03.322952 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0869516 (* 1 = 0.0869516 loss)\nI0608 04:33:03.322962 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0553376 (* 1 = 0.0553376 loss)\nI0608 04:33:03.322968 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0901002 (* 1 = 0.0901002 loss)\nI0608 04:33:03.322973 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0364308 (* 1 = 0.0364308 loss)\nI0608 04:33:03.322979 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0960832 (* 1 = 0.0960832 loss)\nI0608 04:33:03.322985 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0364303 (* 1 = 0.0364303 loss)\nI0608 04:33:03.322993 13573 sgd_solver.cpp:106] Iteration 42460, lr = 0.001\nI0608 04:33:16.345803 13573 solver.cpp:229] Iteration 42480, loss = 0.563529\nI0608 04:33:16.345880 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.190663 (* 1 = 0.190663 loss)\nI0608 04:33:16.345888 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.165319 (* 1 = 0.165319 loss)\nI0608 04:33:16.345894 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.106756 (* 1 = 0.106756 loss)\nI0608 04:33:16.345901 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0723861 (* 1 = 0.0723861 loss)\nI0608 04:33:16.345906 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.106035 (* 1 = 0.106035 loss)\nI0608 04:33:16.345912 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0723864 (* 1 = 0.0723864 loss)\nI0608 04:33:16.345921 13573 sgd_solver.cpp:106] Iteration 42480, lr = 0.001\nI0608 04:33:29.470520 13573 solver.cpp:229] Iteration 42500, loss = 
0.8384\nI0608 04:33:29.470597 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0744091 (* 1 = 0.0744091 loss)\nI0608 04:33:29.470607 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0914353 (* 1 = 0.0914353 loss)\nI0608 04:33:29.470613 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.222641 (* 1 = 0.222641 loss)\nI0608 04:33:29.470619 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00810985 (* 1 = 0.00810985 loss)\nI0608 04:33:29.470625 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.221898 (* 1 = 0.221898 loss)\nI0608 04:33:29.470630 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0081094 (* 1 = 0.0081094 loss)\nI0608 04:33:29.470638 13573 sgd_solver.cpp:106] Iteration 42500, lr = 0.001\nI0608 04:33:42.420188 13573 solver.cpp:229] Iteration 42520, loss = 0.373316\nI0608 04:33:42.420256 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0630487 (* 1 = 0.0630487 loss)\nI0608 04:33:42.420265 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0505523 (* 1 = 0.0505523 loss)\nI0608 04:33:42.420271 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0993156 (* 1 = 0.0993156 loss)\nI0608 04:33:42.420277 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0068966 (* 1 = 0.0068966 loss)\nI0608 04:33:42.420284 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.11587 (* 1 = 0.11587 loss)\nI0608 04:33:42.420289 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00691359 (* 1 = 0.00691359 loss)\nI0608 04:33:42.420295 13573 sgd_solver.cpp:106] Iteration 42520, lr = 0.001\nI0608 04:33:55.371271 13573 solver.cpp:229] Iteration 42540, loss = 0.389173\nI0608 04:33:55.371335 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0962921 (* 1 = 0.0962921 loss)\nI0608 04:33:55.371345 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122513 (* 1 = 0.122513 loss)\nI0608 04:33:55.371351 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10173 (* 1 = 0.10173 loss)\nI0608 04:33:55.371356 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0178381 (* 1 = 0.0178381 loss)\nI0608 04:33:55.371362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0880528 (* 1 = 0.0880528 loss)\nI0608 04:33:55.371368 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0178534 (* 1 = 0.0178534 loss)\nI0608 04:33:55.371376 13573 sgd_solver.cpp:106] Iteration 42540, lr = 0.001\nI0608 04:34:08.200615 13573 solver.cpp:229] Iteration 42560, loss = 0.706242\nI0608 04:34:08.200706 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000734938 (* 1 = 0.000734938 loss)\nI0608 04:34:08.200716 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0043942 (* 1 = 0.0043942 loss)\nI0608 04:34:08.200722 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.292005 (* 1 = 0.292005 loss)\nI0608 04:34:08.200727 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0243787 (* 1 = 0.0243787 loss)\nI0608 04:34:08.200733 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.312357 (* 1 = 0.312357 loss)\nI0608 04:34:08.200738 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0243802 (* 1 = 0.0243802 loss)\nI0608 04:34:08.200747 13573 sgd_solver.cpp:106] Iteration 42560, lr = 0.001\nI0608 04:34:20.939908 13573 solver.cpp:229] Iteration 42580, loss = 0.315757\nI0608 04:34:20.939972 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0174755 (* 1 = 0.0174755 loss)\nI0608 04:34:20.939981 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.046059 (* 1 = 0.046059 loss)\nI0608 04:34:20.939988 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0819748 (* 1 = 0.0819748 loss)\nI0608 04:34:20.939993 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0577596 (* 1 = 0.0577596 loss)\nI0608 04:34:20.940001 13573 solver.cpp:245]     Train net 
output #4: rpn_cls_loss = 0.0793725 (* 1 = 0.0793725 loss)\nI0608 04:34:20.940006 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0577809 (* 1 = 0.0577809 loss)\nI0608 04:34:20.940013 13573 sgd_solver.cpp:106] Iteration 42580, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:34:33.943747 13573 solver.cpp:229] Iteration 42600, loss = 0.600168\nI0608 04:34:33.943812 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.129685 (* 1 = 0.129685 loss)\nI0608 04:34:33.943822 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0587024 (* 1 = 0.0587024 loss)\nI0608 04:34:33.943828 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0963947 (* 1 = 0.0963947 loss)\nI0608 04:34:33.943835 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0320476 (* 1 = 0.0320476 loss)\nI0608 04:34:33.943840 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0970846 (* 1 = 0.0970846 loss)\nI0608 04:34:33.943851 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0320061 (* 1 = 0.0320061 loss)\nI0608 04:34:33.943858 13573 sgd_solver.cpp:106] Iteration 42600, lr = 0.001\nI0608 04:34:46.972904 13573 solver.cpp:229] Iteration 42620, loss = 0.764984\nI0608 04:34:46.972980 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0817273 (* 1 = 0.0817273 loss)\nI0608 04:34:46.972990 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.205223 (* 1 = 0.205223 loss)\nI0608 04:34:46.972996 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0874126 (* 1 = 0.0874126 loss)\nI0608 04:34:46.973002 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0253526 (* 1 = 0.0253526 loss)\nI0608 04:34:46.973008 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0827567 (* 1 = 0.0827567 loss)\nI0608 04:34:46.973014 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0253814 (* 1 = 0.0253814 loss)\nI0608 04:34:46.973021 13573 sgd_solver.cpp:106] Iteration 42620, lr = 
0.001\nI0608 04:34:59.887879 13573 solver.cpp:229] Iteration 42640, loss = 0.862262\nI0608 04:34:59.887941 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.107306 (* 1 = 0.107306 loss)\nI0608 04:34:59.887950 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0576365 (* 1 = 0.0576365 loss)\nI0608 04:34:59.887956 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.239127 (* 1 = 0.239127 loss)\nI0608 04:34:59.887962 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0505265 (* 1 = 0.0505265 loss)\nI0608 04:34:59.887969 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.23549 (* 1 = 0.23549 loss)\nI0608 04:34:59.887974 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0504611 (* 1 = 0.0504611 loss)\nI0608 04:34:59.887980 13573 sgd_solver.cpp:106] Iteration 42640, lr = 0.001\nI0608 04:35:12.816383 13573 solver.cpp:229] Iteration 42660, loss = 0.602872\nI0608 04:35:12.816450 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.144056 (* 1 = 0.144056 loss)\nI0608 04:35:12.816460 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108555 (* 1 = 0.108555 loss)\nI0608 04:35:12.816467 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0849679 (* 1 = 0.0849679 loss)\nI0608 04:35:12.816471 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0157148 (* 1 = 0.0157148 loss)\nI0608 04:35:12.816478 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0827526 (* 1 = 0.0827526 loss)\nI0608 04:35:12.816483 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0156901 (* 1 = 0.0156901 loss)\nI0608 04:35:12.816489 13573 sgd_solver.cpp:106] Iteration 42660, lr = 0.001\nI0608 04:35:25.841133 13573 solver.cpp:229] Iteration 42680, loss = 2.09127\nI0608 04:35:25.841194 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0916573 (* 1 = 0.0916573 loss)\nI0608 04:35:25.841204 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.137628 (* 1 = 0.137628 loss)\nI0608 04:35:25.841210 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.093556 (* 1 = 0.093556 loss)\nI0608 04:35:25.841217 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00579464 (* 1 = 0.00579464 loss)\nI0608 04:35:25.841223 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0908376 (* 1 = 0.0908376 loss)\nI0608 04:35:25.841229 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00579829 (* 1 = 0.00579829 loss)\nI0608 04:35:25.841238 13573 sgd_solver.cpp:106] Iteration 42680, lr = 0.001\nI0608 04:35:38.822005 13573 solver.cpp:229] Iteration 42700, loss = 0.495368\nI0608 04:35:38.822125 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0602084 (* 1 = 0.0602084 loss)\nI0608 04:35:38.822134 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0686031 (* 1 = 0.0686031 loss)\nI0608 04:35:38.822140 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.116846 (* 1 = 0.116846 loss)\nI0608 04:35:38.822146 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0222832 (* 1 = 0.0222832 loss)\nI0608 04:35:38.822152 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.14622 (* 1 = 0.14622 loss)\nI0608 04:35:38.822158 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0222716 (* 1 = 0.0222716 loss)\nI0608 04:35:38.822166 13573 sgd_solver.cpp:106] Iteration 42700, lr = 0.001\nI0608 04:35:51.924329 13573 solver.cpp:229] Iteration 42720, loss = 0.757915\nI0608 04:35:51.924401 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0035996 (* 1 = 0.0035996 loss)\nI0608 04:35:51.924412 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0624273 (* 1 = 0.0624273 loss)\nI0608 04:35:51.924417 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.344212 (* 1 = 0.344212 loss)\nI0608 04:35:51.924423 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.011728 (* 1 = 0.011728 
loss)\nI0608 04:35:51.924428 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.334007 (* 1 = 0.334007 loss)\nI0608 04:35:51.924434 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117059 (* 1 = 0.0117059 loss)\nI0608 04:35:51.924441 13573 sgd_solver.cpp:106] Iteration 42720, lr = 0.001\nI0608 04:36:04.782537 13573 solver.cpp:229] Iteration 42740, loss = 0.470334\nI0608 04:36:04.782601 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127697 (* 1 = 0.127697 loss)\nI0608 04:36:04.782611 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.237027 (* 1 = 0.237027 loss)\nI0608 04:36:04.782618 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0891873 (* 1 = 0.0891873 loss)\nI0608 04:36:04.782624 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105854 (* 1 = 0.0105854 loss)\nI0608 04:36:04.782630 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0903614 (* 1 = 0.0903614 loss)\nI0608 04:36:04.782636 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105876 (* 1 = 0.0105876 loss)\nI0608 04:36:04.782644 13573 sgd_solver.cpp:106] Iteration 42740, lr = 0.001\nI0608 04:36:17.698423 13573 solver.cpp:229] Iteration 42760, loss = 1.43644\nI0608 04:36:17.698498 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.236752 (* 1 = 0.236752 loss)\nI0608 04:36:17.698508 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118999 (* 1 = 0.118999 loss)\nI0608 04:36:17.698513 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.706444 (* 1 = 0.706444 loss)\nI0608 04:36:17.698518 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0911536 (* 1 = 0.0911536 loss)\nI0608 04:36:17.698524 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.75012 (* 1 = 0.75012 loss)\nI0608 04:36:17.698529 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0911453 (* 1 = 0.0911453 loss)\nI0608 04:36:17.698536 13573 
sgd_solver.cpp:106] Iteration 42760, lr = 0.001\nI0608 04:36:30.582372 13573 solver.cpp:229] Iteration 42780, loss = 0.888931\nI0608 04:36:30.582440 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0698126 (* 1 = 0.0698126 loss)\nI0608 04:36:30.582450 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0937653 (* 1 = 0.0937653 loss)\nI0608 04:36:30.582455 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.393697 (* 1 = 0.393697 loss)\nI0608 04:36:30.582461 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.03555 (* 1 = 0.03555 loss)\nI0608 04:36:30.582468 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.358397 (* 1 = 0.358397 loss)\nI0608 04:36:30.582473 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0355456 (* 1 = 0.0355456 loss)\nI0608 04:36:30.582480 13573 sgd_solver.cpp:106] Iteration 42780, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:36:43.441406 13573 solver.cpp:229] Iteration 42800, loss = 0.81502\nI0608 04:36:43.441527 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.017878 (* 1 = 0.017878 loss)\nI0608 04:36:43.441537 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0151895 (* 1 = 0.0151895 loss)\nI0608 04:36:43.441542 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.316179 (* 1 = 0.316179 loss)\nI0608 04:36:43.441548 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0298823 (* 1 = 0.0298823 loss)\nI0608 04:36:43.441553 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.323001 (* 1 = 0.323001 loss)\nI0608 04:36:43.441560 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0299022 (* 1 = 0.0299022 loss)\nI0608 04:36:43.441566 13573 sgd_solver.cpp:106] Iteration 42800, lr = 0.001\nI0608 04:36:56.509129 13573 solver.cpp:229] Iteration 42820, loss = 0.45088\nI0608 04:36:56.509191 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00249717 (* 1 = 0.00249717 loss)\nI0608 04:36:56.509201 
13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00147147 (* 1 = 0.00147147 loss)\nI0608 04:36:56.509207 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212895 (* 1 = 0.212895 loss)\nI0608 04:36:56.509212 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116274 (* 1 = 0.0116274 loss)\nI0608 04:36:56.509218 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.222093 (* 1 = 0.222093 loss)\nI0608 04:36:56.509223 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116289 (* 1 = 0.0116289 loss)\nI0608 04:36:56.509230 13573 sgd_solver.cpp:106] Iteration 42820, lr = 0.001\nI0608 04:37:09.540782 13573 solver.cpp:229] Iteration 42840, loss = 0.683719\nI0608 04:37:09.540848 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00770028 (* 1 = 0.00770028 loss)\nI0608 04:37:09.540859 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0581607 (* 1 = 0.0581607 loss)\nI0608 04:37:09.540866 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.396596 (* 1 = 0.396596 loss)\nI0608 04:37:09.540873 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.067652 (* 1 = 0.067652 loss)\nI0608 04:37:09.540877 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.409825 (* 1 = 0.409825 loss)\nI0608 04:37:09.540884 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0675725 (* 1 = 0.0675725 loss)\nI0608 04:37:09.540891 13573 sgd_solver.cpp:106] Iteration 42840, lr = 0.001\nI0608 04:37:22.272770 13573 solver.cpp:229] Iteration 42860, loss = 0.667473\nI0608 04:37:22.272833 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00896993 (* 1 = 0.00896993 loss)\nI0608 04:37:22.272843 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0187475 (* 1 = 0.0187475 loss)\nI0608 04:37:22.272850 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.152053 (* 1 = 0.152053 loss)\nI0608 04:37:22.272855 13573 solver.cpp:245]     Train net output 
#3: p2_rpn_loss_bbox = 0.0016515 (* 1 = 0.0016515 loss)\nI0608 04:37:22.272861 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.150571 (* 1 = 0.150571 loss)\nI0608 04:37:22.272866 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00165069 (* 1 = 0.00165069 loss)\nI0608 04:37:22.272872 13573 sgd_solver.cpp:106] Iteration 42860, lr = 0.001\nI0608 04:37:34.977383 13573 solver.cpp:229] Iteration 42880, loss = 0.555821\nI0608 04:37:34.977452 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0158967 (* 1 = 0.0158967 loss)\nI0608 04:37:34.977463 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0447855 (* 1 = 0.0447855 loss)\nI0608 04:37:34.977468 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0972668 (* 1 = 0.0972668 loss)\nI0608 04:37:34.977474 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.115779 (* 1 = 0.115779 loss)\nI0608 04:37:34.977480 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0992726 (* 1 = 0.0992726 loss)\nI0608 04:37:34.977486 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.115815 (* 1 = 0.115815 loss)\nI0608 04:37:34.977494 13573 sgd_solver.cpp:106] Iteration 42880, lr = 0.001\nI0608 04:37:48.010908 13573 solver.cpp:229] Iteration 42900, loss = 0.473745\nI0608 04:37:48.010995 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.113122 (* 1 = 0.113122 loss)\nI0608 04:37:48.011005 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0912614 (* 1 = 0.0912614 loss)\nI0608 04:37:48.011011 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103306 (* 1 = 0.103306 loss)\nI0608 04:37:48.011018 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0238847 (* 1 = 0.0238847 loss)\nI0608 04:37:48.011024 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.115557 (* 1 = 0.115557 loss)\nI0608 04:37:48.011030 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0238994 (* 1 = 
0.0238994 loss)\nI0608 04:37:48.011037 13573 sgd_solver.cpp:106] Iteration 42900, lr = 0.001\nI0608 04:38:01.124248 13573 solver.cpp:229] Iteration 42920, loss = 0.787988\nI0608 04:38:01.124310 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0714811 (* 1 = 0.0714811 loss)\nI0608 04:38:01.124320 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124677 (* 1 = 0.124677 loss)\nI0608 04:38:01.124325 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.338007 (* 1 = 0.338007 loss)\nI0608 04:38:01.124331 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0409517 (* 1 = 0.0409517 loss)\nI0608 04:38:01.124337 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.36866 (* 1 = 0.36866 loss)\nI0608 04:38:01.124343 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0409175 (* 1 = 0.0409175 loss)\nI0608 04:38:01.124351 13573 sgd_solver.cpp:106] Iteration 42920, lr = 0.001\nI0608 04:38:13.999845 13573 solver.cpp:229] Iteration 42940, loss = 0.468379\nI0608 04:38:13.999914 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.054591 (* 1 = 0.054591 loss)\nI0608 04:38:13.999925 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0309159 (* 1 = 0.0309159 loss)\nI0608 04:38:13.999932 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.114306 (* 1 = 0.114306 loss)\nI0608 04:38:13.999938 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00101409 (* 1 = 0.00101409 loss)\nI0608 04:38:13.999944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.091488 (* 1 = 0.091488 loss)\nI0608 04:38:13.999950 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00101462 (* 1 = 0.00101462 loss)\nI0608 04:38:13.999958 13573 sgd_solver.cpp:106] Iteration 42940, lr = 0.001\nI0608 04:38:26.928953 13573 solver.cpp:229] Iteration 42960, loss = 2.85476\nI0608 04:38:26.929023 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0970553 (* 1 = 0.0970553 
loss)\nI0608 04:38:26.929033 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.147151 (* 1 = 0.147151 loss)\nI0608 04:38:26.929039 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.702816 (* 1 = 0.702816 loss)\nI0608 04:38:26.929044 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.114464 (* 1 = 0.114464 loss)\nI0608 04:38:26.929049 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.678443 (* 1 = 0.678443 loss)\nI0608 04:38:26.929054 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.114459 (* 1 = 0.114459 loss)\nI0608 04:38:26.929061 13573 sgd_solver.cpp:106] Iteration 42960, lr = 0.001\nI0608 04:38:39.898669 13573 solver.cpp:229] Iteration 42980, loss = 1.8602\nI0608 04:38:39.898736 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.140005 (* 1 = 0.140005 loss)\nI0608 04:38:39.898744 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.139674 (* 1 = 0.139674 loss)\nI0608 04:38:39.898751 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.677404 (* 1 = 0.677404 loss)\nI0608 04:38:39.898757 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0991808 (* 1 = 0.0991808 loss)\nI0608 04:38:39.898763 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.69428 (* 1 = 0.69428 loss)\nI0608 04:38:39.898782 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0991108 (* 1 = 0.0991108 loss)\nI0608 04:38:39.898790 13573 sgd_solver.cpp:106] Iteration 42980, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:38:52.868815 13573 solver.cpp:229] Iteration 43000, loss = 0.850171\nI0608 04:38:52.868898 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0700566 (* 1 = 0.0700566 loss)\nI0608 04:38:52.868909 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0702921 (* 1 = 0.0702921 loss)\nI0608 04:38:52.868916 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.200601 (* 1 = 0.200601 loss)\nI0608 04:38:52.868921 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0308142 (* 1 = 0.0308142 loss)\nI0608 04:38:52.868927 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.212206 (* 1 = 0.212206 loss)\nI0608 04:38:52.868932 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0308389 (* 1 = 0.0308389 loss)\nI0608 04:38:52.868939 13573 sgd_solver.cpp:106] Iteration 43000, lr = 0.001\nI0608 04:39:05.816635 13573 solver.cpp:229] Iteration 43020, loss = 1.35783\nI0608 04:39:05.816709 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.061087 (* 1 = 0.061087 loss)\nI0608 04:39:05.816720 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0406764 (* 1 = 0.0406764 loss)\nI0608 04:39:05.816725 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0858663 (* 1 = 0.0858663 loss)\nI0608 04:39:05.816731 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0199041 (* 1 = 0.0199041 loss)\nI0608 04:39:05.816736 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0918576 (* 1 = 0.0918576 loss)\nI0608 04:39:05.816742 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0199064 (* 1 = 0.0199064 loss)\nI0608 04:39:05.816750 13573 sgd_solver.cpp:106] Iteration 43020, lr = 0.001\nI0608 04:39:18.799896 13573 solver.cpp:229] Iteration 43040, loss = 1.23191\nI0608 04:39:18.799957 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.208909 (* 1 = 0.208909 loss)\nI0608 04:39:18.799967 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.187516 (* 1 = 0.187516 loss)\nI0608 04:39:18.799973 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.661693 (* 1 = 0.661693 loss)\nI0608 04:39:18.799978 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0855833 (* 1 = 0.0855833 loss)\nI0608 04:39:18.799984 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.607803 (* 1 = 0.607803 loss)\nI0608 04:39:18.799990 13573 solver.cpp:245]     Train net output #5: 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.286816 (* 1 = 0.286816 loss)\nI0608 04:49:50.433037 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.01336 (* 1 = 1.01336 loss)\nI0608 04:49:50.433043 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.286308 (* 1 = 0.286308 loss)\nI0608 04:49:50.433049 13573 sgd_solver.cpp:106] Iteration 44020, lr = 0.001\nI0608 04:50:03.436255 13573 solver.cpp:229] Iteration 44040, loss = 0.504615\nI0608 04:50:03.436329 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.152243 (* 1 = 0.152243 loss)\nI0608 04:50:03.436341 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.178223 (* 1 = 0.178223 loss)\nI0608 04:50:03.436347 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.132004 (* 1 = 0.132004 loss)\nI0608 04:50:03.436353 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269406 (* 1 = 0.0269406 loss)\nI0608 04:50:03.436358 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.149087 (* 1 = 0.149087 loss)\nI0608 04:50:03.436365 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269451 (* 1 = 0.0269451 loss)\nI0608 04:50:03.436372 13573 sgd_solver.cpp:106] Iteration 44040, lr = 0.001\nI0608 04:50:16.311853 13573 solver.cpp:229] Iteration 44060, loss = 0.497881\nI0608 04:50:16.311914 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.060741 (* 1 = 0.060741 loss)\nI0608 04:50:16.311925 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0509203 (* 1 = 0.0509203 loss)\nI0608 04:50:16.311933 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.133483 (* 1 = 0.133483 loss)\nI0608 04:50:16.311939 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0376373 (* 1 = 0.0376373 loss)\nI0608 04:50:16.311944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.134494 (* 1 = 0.134494 loss)\nI0608 04:50:16.311950 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.0376436 (* 1 = 0.0376436 loss)\nI0608 04:50:16.311959 13573 sgd_solver.cpp:106] Iteration 44060, lr = 0.001\nI0608 04:50:29.356612 13573 solver.cpp:229] Iteration 44080, loss = 0.465416\nI0608 04:50:29.356689 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000686252 (* 1 = 0.000686252 loss)\nI0608 04:50:29.356699 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00188302 (* 1 = 0.00188302 loss)\nI0608 04:50:29.356705 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180243 (* 1 = 0.180243 loss)\nI0608 04:50:29.356711 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0178667 (* 1 = 0.0178667 loss)\nI0608 04:50:29.356716 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.218751 (* 1 = 0.218751 loss)\nI0608 04:50:29.356722 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.01787 (* 1 = 0.01787 loss)\nI0608 04:50:29.356729 13573 sgd_solver.cpp:106] Iteration 44080, lr = 0.001\nI0608 04:50:42.407601 13573 solver.cpp:229] Iteration 44100, loss = 0.596118\nI0608 04:50:42.407686 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0835295 (* 1 = 0.0835295 loss)\nI0608 04:50:42.407697 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.184587 (* 1 = 0.184587 loss)\nI0608 04:50:42.407703 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.107843 (* 1 = 0.107843 loss)\nI0608 04:50:42.407708 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0695525 (* 1 = 0.0695525 loss)\nI0608 04:50:42.407713 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.111394 (* 1 = 0.111394 loss)\nI0608 04:50:42.407719 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0695484 (* 1 = 0.0695484 loss)\nI0608 04:50:42.407727 13573 sgd_solver.cpp:106] Iteration 44100, lr = 0.001\nI0608 04:50:55.210427 13573 solver.cpp:229] Iteration 44120, loss = 0.396984\nI0608 04:50:55.210489 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.0607218 (* 1 = 0.0607218 loss)\nI0608 04:50:55.210499 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.182074 (* 1 = 0.182074 loss)\nI0608 04:50:55.210505 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0879131 (* 1 = 0.0879131 loss)\nI0608 04:50:55.210510 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0453512 (* 1 = 0.0453512 loss)\nI0608 04:50:55.210515 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0889692 (* 1 = 0.0889692 loss)\nI0608 04:50:55.210521 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0453582 (* 1 = 0.0453582 loss)\nI0608 04:50:55.210528 13573 sgd_solver.cpp:106] Iteration 44120, lr = 0.001\nI0608 04:51:08.190867 13573 solver.cpp:229] Iteration 44140, loss = 0.398861\nI0608 04:51:08.190933 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000830468 (* 1 = 0.000830468 loss)\nI0608 04:51:08.190943 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00668371 (* 1 = 0.00668371 loss)\nI0608 04:51:08.190949 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.152292 (* 1 = 0.152292 loss)\nI0608 04:51:08.190955 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.015111 (* 1 = 0.015111 loss)\nI0608 04:51:08.190961 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.144663 (* 1 = 0.144663 loss)\nI0608 04:51:08.190968 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0150944 (* 1 = 0.0150944 loss)\nI0608 04:51:08.190975 13573 sgd_solver.cpp:106] Iteration 44140, lr = 0.001\nI0608 04:51:20.945770 13573 solver.cpp:229] Iteration 44160, loss = 0.645517\nI0608 04:51:20.945839 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125069 (* 1 = 0.125069 loss)\nI0608 04:51:20.945848 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.229043 (* 1 = 0.229043 loss)\nI0608 04:51:20.945854 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0981469 (* 1 = 0.0981469 
loss)\nI0608 04:51:20.945860 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0251789 (* 1 = 0.0251789 loss)\nI0608 04:51:20.945866 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102351 (* 1 = 0.102351 loss)\nI0608 04:51:20.945871 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0251746 (* 1 = 0.0251746 loss)\nI0608 04:51:20.945878 13573 sgd_solver.cpp:106] Iteration 44160, lr = 0.001\nI0608 04:51:33.698000 13573 solver.cpp:229] Iteration 44180, loss = 0.438644\nI0608 04:51:33.698060 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0330442 (* 1 = 0.0330442 loss)\nI0608 04:51:33.698071 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0324275 (* 1 = 0.0324275 loss)\nI0608 04:51:33.698076 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0993689 (* 1 = 0.0993689 loss)\nI0608 04:51:33.698082 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0639007 (* 1 = 0.0639007 loss)\nI0608 04:51:33.698088 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0981052 (* 1 = 0.0981052 loss)\nI0608 04:51:33.698094 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0639357 (* 1 = 0.0639357 loss)\nI0608 04:51:33.698101 13573 sgd_solver.cpp:106] Iteration 44180, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:51:46.646014 13573 solver.cpp:229] Iteration 44200, loss = 0.922875\nI0608 04:51:46.646088 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.205273 (* 1 = 0.205273 loss)\nI0608 04:51:46.646097 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.161766 (* 1 = 0.161766 loss)\nI0608 04:51:46.646103 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241169 (* 1 = 0.241169 loss)\nI0608 04:51:46.646109 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0812839 (* 1 = 0.0812839 loss)\nI0608 04:51:46.646116 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239044 (* 1 = 0.239044 loss)\nI0608 
04:51:46.646121 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0812946 (* 1 = 0.0812946 loss)\nI0608 04:51:46.646128 13573 sgd_solver.cpp:106] Iteration 44200, lr = 0.001\nI0608 04:51:59.643309 13573 solver.cpp:229] Iteration 44220, loss = 0.438085\nI0608 04:51:59.643375 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0745205 (* 1 = 0.0745205 loss)\nI0608 04:51:59.643385 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0585345 (* 1 = 0.0585345 loss)\nI0608 04:51:59.643391 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0933032 (* 1 = 0.0933032 loss)\nI0608 04:51:59.643398 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0354413 (* 1 = 0.0354413 loss)\nI0608 04:51:59.643402 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0916338 (* 1 = 0.0916338 loss)\nI0608 04:51:59.643409 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0354571 (* 1 = 0.0354571 loss)\nI0608 04:51:59.643417 13573 sgd_solver.cpp:106] Iteration 44220, lr = 0.001\nI0608 04:52:12.521950 13573 solver.cpp:229] Iteration 44240, loss = 0.575802\nI0608 04:52:12.522009 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.10436 (* 1 = 0.10436 loss)\nI0608 04:52:12.522022 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0403952 (* 1 = 0.0403952 loss)\nI0608 04:52:12.522028 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.292999 (* 1 = 0.292999 loss)\nI0608 04:52:12.522034 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0106884 (* 1 = 0.0106884 loss)\nI0608 04:52:12.522040 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.327506 (* 1 = 0.327506 loss)\nI0608 04:52:12.522047 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0106952 (* 1 = 0.0106952 loss)\nI0608 04:52:12.522053 13573 sgd_solver.cpp:106] Iteration 44240, lr = 0.001\nI0608 04:52:25.468695 13573 solver.cpp:229] Iteration 44260, loss = 0.548672\nI0608 
04:52:25.468770 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0542148 (* 1 = 0.0542148 loss)\nI0608 04:52:25.468780 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0428793 (* 1 = 0.0428793 loss)\nI0608 04:52:25.468786 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118769 (* 1 = 0.118769 loss)\nI0608 04:52:25.468791 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0856302 (* 1 = 0.0856302 loss)\nI0608 04:52:25.468797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114208 (* 1 = 0.114208 loss)\nI0608 04:52:25.468803 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0856468 (* 1 = 0.0856468 loss)\nI0608 04:52:25.468811 13573 sgd_solver.cpp:106] Iteration 44260, lr = 0.001\nI0608 04:52:38.267793 13573 solver.cpp:229] Iteration 44280, loss = 0.916382\nI0608 04:52:38.267859 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0691079 (* 1 = 0.0691079 loss)\nI0608 04:52:38.267869 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0423938 (* 1 = 0.0423938 loss)\nI0608 04:52:38.267875 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.410531 (* 1 = 0.410531 loss)\nI0608 04:52:38.267881 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.11205 (* 1 = 0.11205 loss)\nI0608 04:52:38.267886 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.418276 (* 1 = 0.418276 loss)\nI0608 04:52:38.267892 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.112055 (* 1 = 0.112055 loss)\nI0608 04:52:38.267899 13573 sgd_solver.cpp:106] Iteration 44280, lr = 0.001\nI0608 04:52:51.217502 13573 solver.cpp:229] Iteration 44300, loss = 0.808355\nI0608 04:52:51.217566 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.154157 (* 1 = 0.154157 loss)\nI0608 04:52:51.217576 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.209294 (* 1 = 0.209294 loss)\nI0608 04:52:51.217581 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.366649 (* 1 = 0.366649 loss)\nI0608 04:52:51.217587 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0367197 (* 1 = 0.0367197 loss)\nI0608 04:52:51.217593 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.347629 (* 1 = 0.347629 loss)\nI0608 04:52:51.217599 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0367509 (* 1 = 0.0367509 loss)\nI0608 04:52:51.217607 13573 sgd_solver.cpp:106] Iteration 44300, lr = 0.001\nI0608 04:53:04.202855 13573 solver.cpp:229] Iteration 44320, loss = 0.813204\nI0608 04:53:04.202927 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00265248 (* 1 = 0.00265248 loss)\nI0608 04:53:04.202937 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0106719 (* 1 = 0.0106719 loss)\nI0608 04:53:04.202944 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.461212 (* 1 = 0.461212 loss)\nI0608 04:53:04.202950 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.085774 (* 1 = 0.085774 loss)\nI0608 04:53:04.202955 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.457699 (* 1 = 0.457699 loss)\nI0608 04:53:04.202960 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0857891 (* 1 = 0.0857891 loss)\nI0608 04:53:04.202966 13573 sgd_solver.cpp:106] Iteration 44320, lr = 0.001\nI0608 04:53:17.048681 13573 solver.cpp:229] Iteration 44340, loss = 0.735909\nI0608 04:53:17.048753 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.1582 (* 1 = 0.1582 loss)\nI0608 04:53:17.048763 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0965826 (* 1 = 0.0965826 loss)\nI0608 04:53:17.048768 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.164719 (* 1 = 0.164719 loss)\nI0608 04:53:17.048774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0911241 (* 1 = 0.0911241 loss)\nI0608 04:53:17.048780 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.158612 (* 1 = 0.158612 
loss)\nI0608 04:53:17.048786 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0911213 (* 1 = 0.0911213 loss)\nI0608 04:53:17.048794 13573 sgd_solver.cpp:106] Iteration 44340, lr = 0.001\nI0608 04:53:29.998764 13573 solver.cpp:229] Iteration 44360, loss = 1.00263\nI0608 04:53:29.998865 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.131384 (* 1 = 0.131384 loss)\nI0608 04:53:29.998875 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.208023 (* 1 = 0.208023 loss)\nI0608 04:53:29.998883 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.463112 (* 1 = 0.463112 loss)\nI0608 04:53:29.998889 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0890395 (* 1 = 0.0890395 loss)\nI0608 04:53:29.998895 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.483872 (* 1 = 0.483872 loss)\nI0608 04:53:29.998901 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0890288 (* 1 = 0.0890288 loss)\nI0608 04:53:29.998908 13573 sgd_solver.cpp:106] Iteration 44360, lr = 0.001\nI0608 04:53:42.927484 13573 solver.cpp:229] Iteration 44380, loss = 0.374558\nI0608 04:53:42.927556 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111235 (* 1 = 0.111235 loss)\nI0608 04:53:42.927567 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0505794 (* 1 = 0.0505794 loss)\nI0608 04:53:42.927572 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0903905 (* 1 = 0.0903905 loss)\nI0608 04:53:42.927578 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.010902 (* 1 = 0.010902 loss)\nI0608 04:53:42.927584 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0896755 (* 1 = 0.0896755 loss)\nI0608 04:53:42.927589 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0109021 (* 1 = 0.0109021 loss)\nI0608 04:53:42.927597 13573 sgd_solver.cpp:106] Iteration 44380, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:53:56.012516 13573 solver.cpp:229] Iteration 
44400, loss = 0.439273\nI0608 04:53:56.012625 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.144204 (* 1 = 0.144204 loss)\nI0608 04:53:56.012636 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0310288 (* 1 = 0.0310288 loss)\nI0608 04:53:56.012642 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.111598 (* 1 = 0.111598 loss)\nI0608 04:53:56.012648 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0129613 (* 1 = 0.0129613 loss)\nI0608 04:53:56.012653 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10502 (* 1 = 0.10502 loss)\nI0608 04:53:56.012660 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0129635 (* 1 = 0.0129635 loss)\nI0608 04:53:56.012666 13573 sgd_solver.cpp:106] Iteration 44400, lr = 0.001\nI0608 04:54:09.021541 13573 solver.cpp:229] Iteration 44420, loss = 0.923241\nI0608 04:54:09.021616 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111275 (* 1 = 0.111275 loss)\nI0608 04:54:09.021626 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10704 (* 1 = 0.10704 loss)\nI0608 04:54:09.021632 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.346726 (* 1 = 0.346726 loss)\nI0608 04:54:09.021638 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0214392 (* 1 = 0.0214392 loss)\nI0608 04:54:09.021644 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.370966 (* 1 = 0.370966 loss)\nI0608 04:54:09.021651 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0214096 (* 1 = 0.0214096 loss)\nI0608 04:54:09.021658 13573 sgd_solver.cpp:106] Iteration 44420, lr = 0.001\nI0608 04:54:21.910470 13573 solver.cpp:229] Iteration 44440, loss = 0.611996\nI0608 04:54:21.910588 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.202002 (* 1 = 0.202002 loss)\nI0608 04:54:21.910603 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.169174 (* 1 = 0.169174 loss)\nI0608 04:54:21.910612 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.192192 (* 1 = 0.192192 loss)\nI0608 04:54:21.910621 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0356713 (* 1 = 0.0356713 loss)\nI0608 04:54:21.910631 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.188876 (* 1 = 0.188876 loss)\nI0608 04:54:21.910642 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0356742 (* 1 = 0.0356742 loss)\nI0608 04:54:21.910652 13573 sgd_solver.cpp:106] Iteration 44440, lr = 0.001\nI0608 04:54:35.044566 13573 solver.cpp:229] Iteration 44460, loss = 0.499422\nI0608 04:54:35.044636 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.104192 (* 1 = 0.104192 loss)\nI0608 04:54:35.044651 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.080403 (* 1 = 0.080403 loss)\nI0608 04:54:35.044661 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.155132 (* 1 = 0.155132 loss)\nI0608 04:54:35.044669 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0449095 (* 1 = 0.0449095 loss)\nI0608 04:54:35.044679 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154982 (* 1 = 0.154982 loss)\nI0608 04:54:35.044690 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0448947 (* 1 = 0.0448947 loss)\nI0608 04:54:35.044699 13573 sgd_solver.cpp:106] Iteration 44460, lr = 0.001\nI0608 04:54:47.884661 13573 solver.cpp:229] Iteration 44480, loss = 0.787906\nI0608 04:54:47.884733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0820093 (* 1 = 0.0820093 loss)\nI0608 04:54:47.884743 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.133946 (* 1 = 0.133946 loss)\nI0608 04:54:47.884749 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.168027 (* 1 = 0.168027 loss)\nI0608 04:54:47.884755 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0252624 (* 1 = 0.0252624 loss)\nI0608 04:54:47.884760 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.173285 (* 1 = 0.173285 loss)\nI0608 04:54:47.884766 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.025247 (* 1 = 0.025247 loss)\nI0608 04:54:47.884773 13573 sgd_solver.cpp:106] Iteration 44480, lr = 0.001\nI0608 04:55:00.666503 13573 solver.cpp:229] Iteration 44500, loss = 0.583705\nI0608 04:55:00.666570 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.109003 (* 1 = 0.109003 loss)\nI0608 04:55:00.666580 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.163505 (* 1 = 0.163505 loss)\nI0608 04:55:00.666586 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.146668 (* 1 = 0.146668 loss)\nI0608 04:55:00.666592 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.025519 (* 1 = 0.025519 loss)\nI0608 04:55:00.666599 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.143823 (* 1 = 0.143823 loss)\nI0608 04:55:00.666604 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.025518 (* 1 = 0.025518 loss)\nI0608 04:55:00.666610 13573 sgd_solver.cpp:106] Iteration 44500, lr = 0.001\nI0608 04:55:13.655990 13573 solver.cpp:229] Iteration 44520, loss = 0.45534\nI0608 04:55:13.656067 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0511024 (* 1 = 0.0511024 loss)\nI0608 04:55:13.656080 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0264116 (* 1 = 0.0264116 loss)\nI0608 04:55:13.656090 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.149186 (* 1 = 0.149186 loss)\nI0608 04:55:13.656100 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0617249 (* 1 = 0.0617249 loss)\nI0608 04:55:13.656110 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.161771 (* 1 = 0.161771 loss)\nI0608 04:55:13.656118 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0616855 (* 1 = 0.0616855 loss)\nI0608 04:55:13.656127 13573 sgd_solver.cpp:106] Iteration 44520, lr = 0.001\nI0608 04:55:26.451334 13573 solver.cpp:229] 
Iteration 44540, loss = 0.930042\nI0608 04:55:26.451407 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000898602 (* 1 = 0.000898602 loss)\nI0608 04:55:26.451416 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000332595 (* 1 = 0.000332595 loss)\nI0608 04:55:26.451423 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.450118 (* 1 = 0.450118 loss)\nI0608 04:55:26.451429 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.147354 (* 1 = 0.147354 loss)\nI0608 04:55:26.451436 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.444939 (* 1 = 0.444939 loss)\nI0608 04:55:26.451441 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.147466 (* 1 = 0.147466 loss)\nI0608 04:55:26.451448 13573 sgd_solver.cpp:106] Iteration 44540, lr = 0.001\nI0608 04:55:39.360472 13573 solver.cpp:229] Iteration 44560, loss = 0.73292\nI0608 04:55:39.360541 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0247783 (* 1 = 0.0247783 loss)\nI0608 04:55:39.360555 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0276101 (* 1 = 0.0276101 loss)\nI0608 04:55:39.360564 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.39085 (* 1 = 0.39085 loss)\nI0608 04:55:39.360574 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.069542 (* 1 = 0.069542 loss)\nI0608 04:55:39.360581 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.384588 (* 1 = 0.384588 loss)\nI0608 04:55:39.360590 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0695894 (* 1 = 0.0695894 loss)\nI0608 04:55:39.360601 13573 sgd_solver.cpp:106] Iteration 44560, lr = 0.001\nI0608 04:55:52.186122 13573 solver.cpp:229] Iteration 44580, loss = 0.390981\nI0608 04:55:52.186190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.112689 (* 1 = 0.112689 loss)\nI0608 04:55:52.186200 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0881824 (* 1 = 0.0881824 loss)\nI0608 
04:55:52.186206 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0862084 (* 1 = 0.0862084 loss)\nI0608 04:55:52.186213 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0177012 (* 1 = 0.0177012 loss)\nI0608 04:55:52.186218 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0916494 (* 1 = 0.0916494 loss)\nI0608 04:55:52.186223 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0177027 (* 1 = 0.0177027 loss)\nI0608 04:55:52.186230 13573 sgd_solver.cpp:106] Iteration 44580, lr = 0.001\nspeed: 0.645s / iter\nI0608 04:56:05.073837 13573 solver.cpp:229] Iteration 44600, loss = 0.595087\nI0608 04:56:05.073909 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.133466 (* 1 = 0.133466 loss)\nI0608 04:56:05.073920 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0614663 (* 1 = 0.0614663 loss)\nI0608 04:56:05.073926 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0876811 (* 1 = 0.0876811 loss)\nI0608 04:56:05.073932 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00162365 (* 1 = 0.00162365 loss)\nI0608 04:56:05.073938 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101549 (* 1 = 0.101549 loss)\nI0608 04:56:05.073946 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00162362 (* 1 = 0.00162362 loss)\nI0608 04:56:05.073954 13573 sgd_solver.cpp:106] Iteration 44600, lr = 0.001\nI0608 04:56:18.034739 13573 solver.cpp:229] Iteration 44620, loss = 0.442305\nI0608 04:56:18.034822 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0919329 (* 1 = 0.0919329 loss)\nI0608 04:56:18.034833 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.111152 (* 1 = 0.111152 loss)\nI0608 04:56:18.034839 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.147103 (* 1 = 0.147103 loss)\nI0608 04:56:18.034845 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.029137 (* 1 = 0.029137 loss)\nI0608 
Train net output #1: loss_cls = 0.0317672 (* 1 = 0.0317672 loss)\nI0608 05:06:51.042104 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.202954 (* 1 = 0.202954 loss)\nI0608 05:06:51.042110 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0143591 (* 1 = 0.0143591 loss)\nI0608 05:06:51.042116 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.218915 (* 1 = 0.218915 loss)\nI0608 05:06:51.042121 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0143549 (* 1 = 0.0143549 loss)\nI0608 05:06:51.042129 13573 sgd_solver.cpp:106] Iteration 45600, lr = 0.001\nI0608 05:07:03.835265 13573 solver.cpp:229] Iteration 45620, loss = 0.583941\nI0608 05:07:03.835335 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0215944 (* 1 = 0.0215944 loss)\nI0608 05:07:03.835345 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0845121 (* 1 = 0.0845121 loss)\nI0608 05:07:03.835351 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.359712 (* 1 = 0.359712 loss)\nI0608 05:07:03.835356 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0463392 (* 1 = 0.0463392 loss)\nI0608 05:07:03.835362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.327391 (* 1 = 0.327391 loss)\nI0608 05:07:03.835368 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0463344 (* 1 = 0.0463344 loss)\nI0608 05:07:03.835376 13573 sgd_solver.cpp:106] Iteration 45620, lr = 0.001\nI0608 05:07:16.779644 13573 solver.cpp:229] Iteration 45640, loss = 0.507968\nI0608 05:07:16.779711 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0817728 (* 1 = 0.0817728 loss)\nI0608 05:07:16.779721 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0601254 (* 1 = 0.0601254 loss)\nI0608 05:07:16.779727 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.156491 (* 1 = 0.156491 loss)\nI0608 05:07:16.779733 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 
0.0073941 (* 1 = 0.0073941 loss)\nI0608 05:07:16.779738 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.160255 (* 1 = 0.160255 loss)\nI0608 05:07:16.779744 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00739296 (* 1 = 0.00739296 loss)\nI0608 05:07:16.779752 13573 sgd_solver.cpp:106] Iteration 45640, lr = 0.001\nI0608 05:07:29.722452 13573 solver.cpp:229] Iteration 45660, loss = 0.557601\nI0608 05:07:29.722518 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0803581 (* 1 = 0.0803581 loss)\nI0608 05:07:29.722527 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0781602 (* 1 = 0.0781602 loss)\nI0608 05:07:29.722533 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0984291 (* 1 = 0.0984291 loss)\nI0608 05:07:29.722539 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00707574 (* 1 = 0.00707574 loss)\nI0608 05:07:29.722545 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105868 (* 1 = 0.105868 loss)\nI0608 05:07:29.722551 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0070752 (* 1 = 0.0070752 loss)\nI0608 05:07:29.722558 13573 sgd_solver.cpp:106] Iteration 45660, lr = 0.001\nI0608 05:07:42.812405 13573 solver.cpp:229] Iteration 45680, loss = 0.860837\nI0608 05:07:42.812466 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.15683 (* 1 = 0.15683 loss)\nI0608 05:07:42.812477 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.160046 (* 1 = 0.160046 loss)\nI0608 05:07:42.812484 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.446248 (* 1 = 0.446248 loss)\nI0608 05:07:42.812489 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0841214 (* 1 = 0.0841214 loss)\nI0608 05:07:42.812495 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.420995 (* 1 = 0.420995 loss)\nI0608 05:07:42.812501 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0840988 (* 1 = 0.0840988 loss)\nI0608 
05:07:42.812510 13573 sgd_solver.cpp:106] Iteration 45680, lr = 0.001\nI0608 05:07:55.707862 13573 solver.cpp:229] Iteration 45700, loss = 0.884465\nI0608 05:07:55.707931 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.117004 (* 1 = 0.117004 loss)\nI0608 05:07:55.707940 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0454893 (* 1 = 0.0454893 loss)\nI0608 05:07:55.707947 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.496138 (* 1 = 0.496138 loss)\nI0608 05:07:55.707952 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0640574 (* 1 = 0.0640574 loss)\nI0608 05:07:55.707957 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.504354 (* 1 = 0.504354 loss)\nI0608 05:07:55.707963 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0640494 (* 1 = 0.0640494 loss)\nI0608 05:07:55.707969 13573 sgd_solver.cpp:106] Iteration 45700, lr = 0.001\nI0608 05:08:08.741149 13573 solver.cpp:229] Iteration 45720, loss = 0.725625\nI0608 05:08:08.741211 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000998111 (* 1 = 0.000998111 loss)\nI0608 05:08:08.741221 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00229982 (* 1 = 0.00229982 loss)\nI0608 05:08:08.741227 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.46708 (* 1 = 0.46708 loss)\nI0608 05:08:08.741233 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.121576 (* 1 = 0.121576 loss)\nI0608 05:08:08.741240 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.439388 (* 1 = 0.439388 loss)\nI0608 05:08:08.741245 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.121597 (* 1 = 0.121597 loss)\nI0608 05:08:08.741251 13573 sgd_solver.cpp:106] Iteration 45720, lr = 0.001\nI0608 05:08:21.660060 13573 solver.cpp:229] Iteration 45740, loss = 0.647568\nI0608 05:08:21.660135 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00122743 (* 1 = 0.00122743 loss)\nI0608 
05:08:21.660143 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0123869 (* 1 = 0.0123869 loss)\nI0608 05:08:21.660150 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.237362 (* 1 = 0.237362 loss)\nI0608 05:08:21.660156 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0399351 (* 1 = 0.0399351 loss)\nI0608 05:08:21.660161 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.258666 (* 1 = 0.258666 loss)\nI0608 05:08:21.660166 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0399005 (* 1 = 0.0399005 loss)\nI0608 05:08:21.660174 13573 sgd_solver.cpp:106] Iteration 45740, lr = 0.001\nI0608 05:08:34.620080 13573 solver.cpp:229] Iteration 45760, loss = 0.760882\nI0608 05:08:34.620152 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0122714 (* 1 = 0.0122714 loss)\nI0608 05:08:34.620162 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00106794 (* 1 = 0.00106794 loss)\nI0608 05:08:34.620169 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.280307 (* 1 = 0.280307 loss)\nI0608 05:08:34.620175 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.068128 (* 1 = 0.068128 loss)\nI0608 05:08:34.620182 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.269665 (* 1 = 0.269665 loss)\nI0608 05:08:34.620187 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0680857 (* 1 = 0.0680857 loss)\nI0608 05:08:34.620195 13573 sgd_solver.cpp:106] Iteration 45760, lr = 0.001\nI0608 05:08:47.531009 13573 solver.cpp:229] Iteration 45780, loss = 0.341443\nI0608 05:08:47.531072 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0344896 (* 1 = 0.0344896 loss)\nI0608 05:08:47.531082 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0568212 (* 1 = 0.0568212 loss)\nI0608 05:08:47.531088 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0859395 (* 1 = 0.0859395 loss)\nI0608 05:08:47.531095 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.0396767 (* 1 = 0.0396767 loss)\nI0608 05:08:47.531100 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0852707 (* 1 = 0.0852707 loss)\nI0608 05:08:47.531106 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0396931 (* 1 = 0.0396931 loss)\nI0608 05:08:47.531114 13573 sgd_solver.cpp:106] Iteration 45780, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:09:00.410586 13573 solver.cpp:229] Iteration 45800, loss = 0.936925\nI0608 05:09:00.410660 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0035075 (* 1 = 0.0035075 loss)\nI0608 05:09:00.410670 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0169668 (* 1 = 0.0169668 loss)\nI0608 05:09:00.410676 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.281501 (* 1 = 0.281501 loss)\nI0608 05:09:00.410682 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0321443 (* 1 = 0.0321443 loss)\nI0608 05:09:00.410687 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.259625 (* 1 = 0.259625 loss)\nI0608 05:09:00.410693 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0321457 (* 1 = 0.0321457 loss)\nI0608 05:09:00.410699 13573 sgd_solver.cpp:106] Iteration 45800, lr = 0.001\nI0608 05:09:13.428108 13573 solver.cpp:229] Iteration 45820, loss = 0.581632\nI0608 05:09:13.428174 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0280794 (* 1 = 0.0280794 loss)\nI0608 05:09:13.428182 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0179624 (* 1 = 0.0179624 loss)\nI0608 05:09:13.428189 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.221113 (* 1 = 0.221113 loss)\nI0608 05:09:13.428194 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0122533 (* 1 = 0.0122533 loss)\nI0608 05:09:13.428200 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.164829 (* 1 = 0.164829 loss)\nI0608 05:09:13.428205 13573 solver.cpp:245]     Train net output 
#5: rpn_loss_bbox = 0.0122503 (* 1 = 0.0122503 loss)\nI0608 05:09:13.428212 13573 sgd_solver.cpp:106] Iteration 45820, lr = 0.001\nI0608 05:09:26.305163 13573 solver.cpp:229] Iteration 45840, loss = 0.878705\nI0608 05:09:26.305224 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0584524 (* 1 = 0.0584524 loss)\nI0608 05:09:26.305234 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0461012 (* 1 = 0.0461012 loss)\nI0608 05:09:26.305240 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.466389 (* 1 = 0.466389 loss)\nI0608 05:09:26.305246 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.13418 (* 1 = 0.13418 loss)\nI0608 05:09:26.305253 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.419597 (* 1 = 0.419597 loss)\nI0608 05:09:26.305258 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.134083 (* 1 = 0.134083 loss)\nI0608 05:09:26.305265 13573 sgd_solver.cpp:106] Iteration 45840, lr = 0.001\nI0608 05:09:39.223232 13573 solver.cpp:229] Iteration 45860, loss = 1.26926\nI0608 05:09:39.223307 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0571507 (* 1 = 0.0571507 loss)\nI0608 05:09:39.223317 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0809813 (* 1 = 0.0809813 loss)\nI0608 05:09:39.223323 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228712 (* 1 = 0.228712 loss)\nI0608 05:09:39.223330 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0221944 (* 1 = 0.0221944 loss)\nI0608 05:09:39.223335 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217708 (* 1 = 0.217708 loss)\nI0608 05:09:39.223341 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0221964 (* 1 = 0.0221964 loss)\nI0608 05:09:39.223350 13573 sgd_solver.cpp:106] Iteration 45860, lr = 0.001\nI0608 05:09:52.206748 13573 solver.cpp:229] Iteration 45880, loss = 0.367797\nI0608 05:09:52.206830 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.0501572 (* 1 = 0.0501572 loss)\nI0608 05:09:52.206841 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0903448 (* 1 = 0.0903448 loss)\nI0608 05:09:52.206847 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.108753 (* 1 = 0.108753 loss)\nI0608 05:09:52.206853 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0126495 (* 1 = 0.0126495 loss)\nI0608 05:09:52.206858 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102782 (* 1 = 0.102782 loss)\nI0608 05:09:52.206864 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0126539 (* 1 = 0.0126539 loss)\nI0608 05:09:52.206871 13573 sgd_solver.cpp:106] Iteration 45880, lr = 0.001\nI0608 05:10:04.907716 13573 solver.cpp:229] Iteration 45900, loss = 1.84536\nI0608 05:10:04.907779 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101869 (* 1 = 0.101869 loss)\nI0608 05:10:04.907788 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104576 (* 1 = 0.104576 loss)\nI0608 05:10:04.907794 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191087 (* 1 = 0.191087 loss)\nI0608 05:10:04.907800 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0345163 (* 1 = 0.0345163 loss)\nI0608 05:10:04.907806 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.189681 (* 1 = 0.189681 loss)\nI0608 05:10:04.907812 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0345136 (* 1 = 0.0345136 loss)\nI0608 05:10:04.907819 13573 sgd_solver.cpp:106] Iteration 45900, lr = 0.001\nI0608 05:10:17.867143 13573 solver.cpp:229] Iteration 45920, loss = 0.740668\nI0608 05:10:17.867218 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0480861 (* 1 = 0.0480861 loss)\nI0608 05:10:17.867228 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0966917 (* 1 = 0.0966917 loss)\nI0608 05:10:17.867234 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0841588 (* 1 = 0.0841588 
loss)\nI0608 05:10:17.867239 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0445056 (* 1 = 0.0445056 loss)\nI0608 05:10:17.867245 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0887382 (* 1 = 0.0887382 loss)\nI0608 05:10:17.867251 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0445251 (* 1 = 0.0445251 loss)\nI0608 05:10:17.867259 13573 sgd_solver.cpp:106] Iteration 45920, lr = 0.001\nI0608 05:10:30.642151 13573 solver.cpp:229] Iteration 45940, loss = 0.581296\nI0608 05:10:30.642218 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0453799 (* 1 = 0.0453799 loss)\nI0608 05:10:30.642227 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0230914 (* 1 = 0.0230914 loss)\nI0608 05:10:30.642233 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0887329 (* 1 = 0.0887329 loss)\nI0608 05:10:30.642240 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0204939 (* 1 = 0.0204939 loss)\nI0608 05:10:30.642244 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0847603 (* 1 = 0.0847603 loss)\nI0608 05:10:30.642251 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0205092 (* 1 = 0.0205092 loss)\nI0608 05:10:30.642257 13573 sgd_solver.cpp:106] Iteration 45940, lr = 0.001\nI0608 05:10:43.446115 13573 solver.cpp:229] Iteration 45960, loss = 1.13684\nI0608 05:10:43.446190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00448809 (* 1 = 0.00448809 loss)\nI0608 05:10:43.446199 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00374235 (* 1 = 0.00374235 loss)\nI0608 05:10:43.446205 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.565609 (* 1 = 0.565609 loss)\nI0608 05:10:43.446211 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.171809 (* 1 = 0.171809 loss)\nI0608 05:10:43.446218 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.571788 (* 1 = 0.571788 loss)\nI0608 05:10:43.446223 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.171809 (* 1 = 0.171809 loss)\nI0608 05:10:43.446229 13573 sgd_solver.cpp:106] Iteration 45960, lr = 0.001\nI0608 05:10:55.910625 13573 solver.cpp:229] Iteration 45980, loss = 0.671448\nI0608 05:10:55.910693 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.130768 (* 1 = 0.130768 loss)\nI0608 05:10:55.910702 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0508219 (* 1 = 0.0508219 loss)\nI0608 05:10:55.910708 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.096012 (* 1 = 0.096012 loss)\nI0608 05:10:55.910714 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0502996 (* 1 = 0.0502996 loss)\nI0608 05:10:55.910720 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105749 (* 1 = 0.105749 loss)\nI0608 05:10:55.910725 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0502941 (* 1 = 0.0502941 loss)\nI0608 05:10:55.910732 13573 sgd_solver.cpp:106] Iteration 45980, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:11:08.489048 13573 solver.cpp:229] Iteration 46000, loss = 0.459609\nI0608 05:11:08.489141 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.119921 (* 1 = 0.119921 loss)\nI0608 05:11:08.489152 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105895 (* 1 = 0.105895 loss)\nI0608 05:11:08.489158 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0904996 (* 1 = 0.0904996 loss)\nI0608 05:11:08.489164 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0169098 (* 1 = 0.0169098 loss)\nI0608 05:11:08.489171 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0907275 (* 1 = 0.0907275 loss)\nI0608 05:11:08.489176 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0169078 (* 1 = 0.0169078 loss)\nI0608 05:11:08.489183 13573 sgd_solver.cpp:106] Iteration 46000, lr = 0.001\nI0608 05:11:21.218165 13573 solver.cpp:229] Iteration 46020, loss = 0.479721\nI0608 
05:11:21.218226 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00356279 (* 1 = 0.00356279 loss)\nI0608 05:11:21.218235 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0520279 (* 1 = 0.0520279 loss)\nI0608 05:11:21.218241 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.264008 (* 1 = 0.264008 loss)\nI0608 05:11:21.218246 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0126463 (* 1 = 0.0126463 loss)\nI0608 05:11:21.218252 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.286794 (* 1 = 0.286794 loss)\nI0608 05:11:21.218258 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0126309 (* 1 = 0.0126309 loss)\nI0608 05:11:21.218264 13573 sgd_solver.cpp:106] Iteration 46020, lr = 0.001\nI0608 05:11:34.109153 13573 solver.cpp:229] Iteration 46040, loss = 0.509346\nI0608 05:11:34.109221 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0362427 (* 1 = 0.0362427 loss)\nI0608 05:11:34.109230 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0535817 (* 1 = 0.0535817 loss)\nI0608 05:11:34.109236 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.269603 (* 1 = 0.269603 loss)\nI0608 05:11:34.109241 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0344747 (* 1 = 0.0344747 loss)\nI0608 05:11:34.109247 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278748 (* 1 = 0.278748 loss)\nI0608 05:11:34.109253 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0344738 (* 1 = 0.0344738 loss)\nI0608 05:11:34.109259 13573 sgd_solver.cpp:106] Iteration 46040, lr = 0.001\nI0608 05:11:47.267443 13573 solver.cpp:229] Iteration 46060, loss = 1.12504\nI0608 05:11:47.267506 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.110773 (* 1 = 0.110773 loss)\nI0608 05:11:47.267516 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0428939 (* 1 = 0.0428939 loss)\nI0608 05:11:47.267523 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.108323 (* 1 = 0.108323 loss)\nI0608 05:11:47.267529 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.127312 (* 1 = 0.127312 loss)\nI0608 05:11:47.267534 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.115259 (* 1 = 0.115259 loss)\nI0608 05:11:47.267540 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.127255 (* 1 = 0.127255 loss)\nI0608 05:11:47.267547 13573 sgd_solver.cpp:106] Iteration 46060, lr = 0.001\nI0608 05:12:00.015029 13573 solver.cpp:229] Iteration 46080, loss = 1.24953\nI0608 05:12:00.015106 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0330427 (* 1 = 0.0330427 loss)\nI0608 05:12:00.015116 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0274615 (* 1 = 0.0274615 loss)\nI0608 05:12:00.015122 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41669 (* 1 = 0.41669 loss)\nI0608 05:12:00.015128 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0297024 (* 1 = 0.0297024 loss)\nI0608 05:12:00.015135 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.377892 (* 1 = 0.377892 loss)\nI0608 05:12:00.015141 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0297073 (* 1 = 0.0297073 loss)\nI0608 05:12:00.015148 13573 sgd_solver.cpp:106] Iteration 46080, lr = 0.001\nI0608 05:12:12.914345 13573 solver.cpp:229] Iteration 46100, loss = 0.701823\nI0608 05:12:12.914413 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.405534 (* 1 = 0.405534 loss)\nI0608 05:12:12.914422 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.141779 (* 1 = 0.141779 loss)\nI0608 05:12:12.914428 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105485 (* 1 = 0.105485 loss)\nI0608 05:12:12.914434 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0360601 (* 1 = 0.0360601 loss)\nI0608 05:12:12.914440 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0983225 (* 1 = 
0.0983225 loss)\nI0608 05:12:12.914446 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0360622 (* 1 = 0.0360622 loss)\nI0608 05:12:12.914453 13573 sgd_solver.cpp:106] Iteration 46100, lr = 0.001\nI0608 05:12:25.855176 13573 solver.cpp:229] Iteration 46120, loss = 1.00249\nI0608 05:12:25.855242 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.102637 (* 1 = 0.102637 loss)\nI0608 05:12:25.855252 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.109447 (* 1 = 0.109447 loss)\nI0608 05:12:25.855258 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.28605 (* 1 = 0.28605 loss)\nI0608 05:12:25.855265 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0494676 (* 1 = 0.0494676 loss)\nI0608 05:12:25.855271 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.246119 (* 1 = 0.246119 loss)\nI0608 05:12:25.855278 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0494747 (* 1 = 0.0494747 loss)\nI0608 05:12:25.855285 13573 sgd_solver.cpp:106] Iteration 46120, lr = 0.001\nI0608 05:12:38.748340 13573 solver.cpp:229] Iteration 46140, loss = 0.748004\nI0608 05:12:38.748410 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0654458 (* 1 = 0.0654458 loss)\nI0608 05:12:38.748420 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0453252 (* 1 = 0.0453252 loss)\nI0608 05:12:38.748426 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0787144 (* 1 = 0.0787144 loss)\nI0608 05:12:38.748433 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00981004 (* 1 = 0.00981004 loss)\nI0608 05:12:38.748440 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0850232 (* 1 = 0.0850232 loss)\nI0608 05:12:38.748446 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00980317 (* 1 = 0.00980317 loss)\nI0608 05:12:38.748453 13573 sgd_solver.cpp:106] Iteration 46140, lr = 0.001\nI0608 05:12:51.635521 13573 solver.cpp:229] Iteration 46160, loss 
= 1.03172\nI0608 05:12:51.635586 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.148799 (* 1 = 0.148799 loss)\nI0608 05:12:51.635594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.237396 (* 1 = 0.237396 loss)\nI0608 05:12:51.635601 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.116447 (* 1 = 0.116447 loss)\nI0608 05:12:51.635607 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0240505 (* 1 = 0.0240505 loss)\nI0608 05:12:51.635613 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.112709 (* 1 = 0.112709 loss)\nI0608 05:12:51.635618 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0240425 (* 1 = 0.0240425 loss)\nI0608 05:12:51.635625 13573 sgd_solver.cpp:106] Iteration 46160, lr = 0.001\nI0608 05:13:04.749423 13573 solver.cpp:229] Iteration 46180, loss = 0.449593\nI0608 05:13:04.749490 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.10871 (* 1 = 0.10871 loss)\nI0608 05:13:04.749500 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.103668 (* 1 = 0.103668 loss)\nI0608 05:13:04.749506 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.125455 (* 1 = 0.125455 loss)\nI0608 05:13:04.749512 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00931357 (* 1 = 0.00931357 loss)\nI0608 05:13:04.749518 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.124721 (* 1 = 0.124721 loss)\nI0608 05:13:04.749523 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00931758 (* 1 = 0.00931758 loss)\nI0608 05:13:04.749531 13573 sgd_solver.cpp:106] Iteration 46180, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:13:17.806625 13573 solver.cpp:229] Iteration 46200, loss = 0.534469\nI0608 05:13:17.806748 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0574287 (* 1 = 0.0574287 loss)\nI0608 05:13:17.806759 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0403702 (* 1 = 0.0403702 loss)\nI0608 05:13:17.806766 
13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0952882 (* 1 = 0.0952882 loss)
[... near-identical per-iteration entries for iterations 46200-47180 elided. Each entry reports the same six loss terms (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox) at lr = 0.001, training speed ~0.645s / iter. A representative entry:]
I0608 05:13:30.689667 13573 solver.cpp:229] Iteration 46220, loss = 0.854673
I0608 05:13:30.689733 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0509962 (* 1 = 0.0509962 loss)
I0608 05:13:30.689743 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0519658 (* 1 = 0.0519658 loss)
I0608 05:13:30.689749 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10014 (* 1 = 0.10014 loss)
I0608 05:13:30.689755 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0181867 (* 1 = 0.0181867 loss)
I0608 05:13:30.689760 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.096934 (* 1 = 0.096934 loss)
I0608 05:13:30.689766 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0181473 (* 1 = 0.0181473 loss)
I0608 05:13:30.689774 13573 sgd_solver.cpp:106] Iteration 46220, lr = 0.001
I0608 05:23:50.605172 13573 solver.cpp:229] Iteration 47180, loss = 
0.952498\nI0608 05:23:50.605244 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0747382 (* 1 = 0.0747382 loss)\nI0608 05:23:50.605254 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0827329 (* 1 = 0.0827329 loss)\nI0608 05:23:50.605260 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.420466 (* 1 = 0.420466 loss)\nI0608 05:23:50.605267 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0596138 (* 1 = 0.0596138 loss)\nI0608 05:23:50.605271 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.410448 (* 1 = 0.410448 loss)\nI0608 05:23:50.605278 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0596197 (* 1 = 0.0596197 loss)\nI0608 05:23:50.605285 13573 sgd_solver.cpp:106] Iteration 47180, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:24:03.656373 13573 solver.cpp:229] Iteration 47200, loss = 1.45176\nI0608 05:24:03.656432 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00228004 (* 1 = 0.00228004 loss)\nI0608 05:24:03.656442 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00470154 (* 1 = 0.00470154 loss)\nI0608 05:24:03.656450 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.214902 (* 1 = 0.214902 loss)\nI0608 05:24:03.656455 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0192225 (* 1 = 0.0192225 loss)\nI0608 05:24:03.656461 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.192777 (* 1 = 0.192777 loss)\nI0608 05:24:03.656466 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0192186 (* 1 = 0.0192186 loss)\nI0608 05:24:03.656474 13573 sgd_solver.cpp:106] Iteration 47200, lr = 0.001\nI0608 05:24:16.641531 13573 solver.cpp:229] Iteration 47220, loss = 0.858608\nI0608 05:24:16.641643 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0889152 (* 1 = 0.0889152 loss)\nI0608 05:24:16.641652 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0961054 (* 1 = 0.0961054 loss)\nI0608 
05:24:16.641659 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.093272 (* 1 = 0.093272 loss)\nI0608 05:24:16.641664 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0166294 (* 1 = 0.0166294 loss)\nI0608 05:24:16.641670 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0990447 (* 1 = 0.0990447 loss)\nI0608 05:24:16.641677 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0166298 (* 1 = 0.0166298 loss)\nI0608 05:24:16.641685 13573 sgd_solver.cpp:106] Iteration 47220, lr = 0.001\nI0608 05:24:29.500557 13573 solver.cpp:229] Iteration 47240, loss = 0.708632\nI0608 05:24:29.500622 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.111298 (* 1 = 0.111298 loss)\nI0608 05:24:29.500633 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0932341 (* 1 = 0.0932341 loss)\nI0608 05:24:29.500638 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.147217 (* 1 = 0.147217 loss)\nI0608 05:24:29.500644 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0257381 (* 1 = 0.0257381 loss)\nI0608 05:24:29.500650 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154569 (* 1 = 0.154569 loss)\nI0608 05:24:29.500655 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0257367 (* 1 = 0.0257367 loss)\nI0608 05:24:29.500663 13573 sgd_solver.cpp:106] Iteration 47240, lr = 0.001\nI0608 05:24:42.426225 13573 solver.cpp:229] Iteration 47260, loss = 1.56257\nI0608 05:24:42.426285 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0403752 (* 1 = 0.0403752 loss)\nI0608 05:24:42.426296 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0772687 (* 1 = 0.0772687 loss)\nI0608 05:24:42.426302 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.08115 (* 1 = 0.08115 loss)\nI0608 05:24:42.426308 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.014102 (* 1 = 0.014102 loss)\nI0608 05:24:42.426314 13573 solver.cpp:245]     
Train net output #4: rpn_cls_loss = 0.082202 (* 1 = 0.082202 loss)\nI0608 05:24:42.426321 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0140998 (* 1 = 0.0140998 loss)\nI0608 05:24:42.426327 13573 sgd_solver.cpp:106] Iteration 47260, lr = 0.001\nI0608 05:24:55.364356 13573 solver.cpp:229] Iteration 47280, loss = 1.42036\nI0608 05:24:55.364420 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.080484 (* 1 = 0.080484 loss)\nI0608 05:24:55.364429 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0672033 (* 1 = 0.0672033 loss)\nI0608 05:24:55.364436 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.475633 (* 1 = 0.475633 loss)\nI0608 05:24:55.364441 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0348339 (* 1 = 0.0348339 loss)\nI0608 05:24:55.364447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.479162 (* 1 = 0.479162 loss)\nI0608 05:24:55.364452 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.034824 (* 1 = 0.034824 loss)\nI0608 05:24:55.364459 13573 sgd_solver.cpp:106] Iteration 47280, lr = 0.001\nI0608 05:25:08.303009 13573 solver.cpp:229] Iteration 47300, loss = 0.813882\nI0608 05:25:08.303077 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00199035 (* 1 = 0.00199035 loss)\nI0608 05:25:08.303087 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00538864 (* 1 = 0.00538864 loss)\nI0608 05:25:08.303093 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.321797 (* 1 = 0.321797 loss)\nI0608 05:25:08.303098 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0436747 (* 1 = 0.0436747 loss)\nI0608 05:25:08.303105 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.334027 (* 1 = 0.334027 loss)\nI0608 05:25:08.303112 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0436948 (* 1 = 0.0436948 loss)\nI0608 05:25:08.303119 13573 sgd_solver.cpp:106] Iteration 47300, lr = 0.001\nI0608 
05:25:21.269883 13573 solver.cpp:229] Iteration 47320, loss = 0.844824\nI0608 05:25:21.269951 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.182944 (* 1 = 0.182944 loss)\nI0608 05:25:21.269960 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.209725 (* 1 = 0.209725 loss)\nI0608 05:25:21.269966 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.293147 (* 1 = 0.293147 loss)\nI0608 05:25:21.269973 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0268174 (* 1 = 0.0268174 loss)\nI0608 05:25:21.269979 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.300708 (* 1 = 0.300708 loss)\nI0608 05:25:21.269984 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0268233 (* 1 = 0.0268233 loss)\nI0608 05:25:21.269991 13573 sgd_solver.cpp:106] Iteration 47320, lr = 0.001\nI0608 05:25:34.388092 13573 solver.cpp:229] Iteration 47340, loss = 0.396293\nI0608 05:25:34.388187 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0527668 (* 1 = 0.0527668 loss)\nI0608 05:25:34.388197 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0680698 (* 1 = 0.0680698 loss)\nI0608 05:25:34.388205 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.083633 (* 1 = 0.083633 loss)\nI0608 05:25:34.388211 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0184578 (* 1 = 0.0184578 loss)\nI0608 05:25:34.388216 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0854521 (* 1 = 0.0854521 loss)\nI0608 05:25:34.388222 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0184575 (* 1 = 0.0184575 loss)\nI0608 05:25:34.388229 13573 sgd_solver.cpp:106] Iteration 47340, lr = 0.001\nI0608 05:25:47.048508 13573 solver.cpp:229] Iteration 47360, loss = 1.45347\nI0608 05:25:47.048583 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.171 (* 1 = 0.171 loss)\nI0608 05:25:47.048593 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104123 (* 1 = 
0.104123 loss)\nI0608 05:25:47.048599 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279931 (* 1 = 0.279931 loss)\nI0608 05:25:47.048605 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0278164 (* 1 = 0.0278164 loss)\nI0608 05:25:47.048611 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.285373 (* 1 = 0.285373 loss)\nI0608 05:25:47.048617 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0278116 (* 1 = 0.0278116 loss)\nI0608 05:25:47.048624 13573 sgd_solver.cpp:106] Iteration 47360, lr = 0.001\nI0608 05:25:59.940426 13573 solver.cpp:229] Iteration 47380, loss = 0.323345\nI0608 05:25:59.940503 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0374097 (* 1 = 0.0374097 loss)\nI0608 05:25:59.940513 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0855777 (* 1 = 0.0855777 loss)\nI0608 05:25:59.940520 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0926199 (* 1 = 0.0926199 loss)\nI0608 05:25:59.940526 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0157943 (* 1 = 0.0157943 loss)\nI0608 05:25:59.940531 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0925039 (* 1 = 0.0925039 loss)\nI0608 05:25:59.940537 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158031 (* 1 = 0.0158031 loss)\nI0608 05:25:59.940544 13573 sgd_solver.cpp:106] Iteration 47380, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:26:12.865556 13573 solver.cpp:229] Iteration 47400, loss = 0.692427\nI0608 05:26:12.865669 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0983802 (* 1 = 0.0983802 loss)\nI0608 05:26:12.865679 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0577525 (* 1 = 0.0577525 loss)\nI0608 05:26:12.865685 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.255683 (* 1 = 0.255683 loss)\nI0608 05:26:12.865691 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.134829 (* 1 = 0.134829 
loss)\nI0608 05:26:12.865697 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.26339 (* 1 = 0.26339 loss)\nI0608 05:26:12.865702 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.134804 (* 1 = 0.134804 loss)\nI0608 05:26:12.865710 13573 sgd_solver.cpp:106] Iteration 47400, lr = 0.001\nI0608 05:26:25.912312 13573 solver.cpp:229] Iteration 47420, loss = 1.38752\nI0608 05:26:25.912386 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00421068 (* 1 = 0.00421068 loss)\nI0608 05:26:25.912396 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0030096 (* 1 = 0.0030096 loss)\nI0608 05:26:25.912402 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.629194 (* 1 = 0.629194 loss)\nI0608 05:26:25.912408 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.309056 (* 1 = 0.309056 loss)\nI0608 05:26:25.912415 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.629613 (* 1 = 0.629613 loss)\nI0608 05:26:25.912420 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.308985 (* 1 = 0.308985 loss)\nI0608 05:26:25.912427 13573 sgd_solver.cpp:106] Iteration 47420, lr = 0.001\nI0608 05:26:38.903890 13573 solver.cpp:229] Iteration 47440, loss = 1.44209\nI0608 05:26:38.903965 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0876524 (* 1 = 0.0876524 loss)\nI0608 05:26:38.903976 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.152063 (* 1 = 0.152063 loss)\nI0608 05:26:38.903982 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.165831 (* 1 = 0.165831 loss)\nI0608 05:26:38.903988 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328279 (* 1 = 0.0328279 loss)\nI0608 05:26:38.903995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.179624 (* 1 = 0.179624 loss)\nI0608 05:26:38.904000 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0328266 (* 1 = 0.0328266 loss)\nI0608 05:26:38.904007 13573 sgd_solver.cpp:106] 
Iteration 47440, lr = 0.001\nI0608 05:26:51.952117 13573 solver.cpp:229] Iteration 47460, loss = 0.470845\nI0608 05:26:51.952183 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0971416 (* 1 = 0.0971416 loss)\nI0608 05:26:51.952193 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.097394 (* 1 = 0.097394 loss)\nI0608 05:26:51.952200 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.086262 (* 1 = 0.086262 loss)\nI0608 05:26:51.952206 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0143945 (* 1 = 0.0143945 loss)\nI0608 05:26:51.952213 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.081132 (* 1 = 0.081132 loss)\nI0608 05:26:51.952219 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0143945 (* 1 = 0.0143945 loss)\nI0608 05:26:51.952226 13573 sgd_solver.cpp:106] Iteration 47460, lr = 0.001\nI0608 05:27:05.025563 13573 solver.cpp:229] Iteration 47480, loss = 0.531955\nI0608 05:27:05.025642 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000663533 (* 1 = 0.000663533 loss)\nI0608 05:27:05.025652 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000305415 (* 1 = 0.000305415 loss)\nI0608 05:27:05.025658 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.299778 (* 1 = 0.299778 loss)\nI0608 05:27:05.025665 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.012071 (* 1 = 0.012071 loss)\nI0608 05:27:05.025671 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.29561 (* 1 = 0.29561 loss)\nI0608 05:27:05.025676 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.012069 (* 1 = 0.012069 loss)\nI0608 05:27:05.025683 13573 sgd_solver.cpp:106] Iteration 47480, lr = 0.001\nI0608 05:27:17.973690 13573 solver.cpp:229] Iteration 47500, loss = 0.681434\nI0608 05:27:17.973762 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00170415 (* 1 = 0.00170415 loss)\nI0608 05:27:17.973772 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.0466725 (* 1 = 0.0466725 loss)\nI0608 05:27:17.973778 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.178613 (* 1 = 0.178613 loss)\nI0608 05:27:17.973783 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00319325 (* 1 = 0.00319325 loss)\nI0608 05:27:17.973789 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.196643 (* 1 = 0.196643 loss)\nI0608 05:27:17.973795 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00319592 (* 1 = 0.00319592 loss)\nI0608 05:27:17.973803 13573 sgd_solver.cpp:106] Iteration 47500, lr = 0.001\nI0608 05:27:30.911960 13573 solver.cpp:229] Iteration 47520, loss = 0.826021\nI0608 05:27:30.912037 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0119159 (* 1 = 0.0119159 loss)\nI0608 05:27:30.912046 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0401002 (* 1 = 0.0401002 loss)\nI0608 05:27:30.912052 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.505125 (* 1 = 0.505125 loss)\nI0608 05:27:30.912058 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0863902 (* 1 = 0.0863902 loss)\nI0608 05:27:30.912065 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.524784 (* 1 = 0.524784 loss)\nI0608 05:27:30.912071 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0863838 (* 1 = 0.0863838 loss)\nI0608 05:27:30.912078 13573 sgd_solver.cpp:106] Iteration 47520, lr = 0.001\nI0608 05:27:43.819797 13573 solver.cpp:229] Iteration 47540, loss = 0.896008\nI0608 05:27:43.819861 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.236319 (* 1 = 0.236319 loss)\nI0608 05:27:43.819870 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.173218 (* 1 = 0.173218 loss)\nI0608 05:27:43.819876 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.283726 (* 1 = 0.283726 loss)\nI0608 05:27:43.819882 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0629237 (* 
1 = 0.0629237 loss)\nI0608 05:27:43.819888 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.275984 (* 1 = 0.275984 loss)\nI0608 05:27:43.819895 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0629243 (* 1 = 0.0629243 loss)\nI0608 05:27:43.819901 13573 sgd_solver.cpp:106] Iteration 47540, lr = 0.001\nI0608 05:27:56.778936 13573 solver.cpp:229] Iteration 47560, loss = 0.556228\nI0608 05:27:56.779003 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0046052 (* 1 = 0.0046052 loss)\nI0608 05:27:56.779014 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0012964 (* 1 = 0.0012964 loss)\nI0608 05:27:56.779021 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.335808 (* 1 = 0.335808 loss)\nI0608 05:27:56.779026 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0205348 (* 1 = 0.0205348 loss)\nI0608 05:27:56.779032 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.293691 (* 1 = 0.293691 loss)\nI0608 05:27:56.779038 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0205291 (* 1 = 0.0205291 loss)\nI0608 05:27:56.779045 13573 sgd_solver.cpp:106] Iteration 47560, lr = 0.001\nI0608 05:28:09.507444 13573 solver.cpp:229] Iteration 47580, loss = 0.499801\nI0608 05:28:09.507522 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000771056 (* 1 = 0.000771056 loss)\nI0608 05:28:09.507532 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.012347 (* 1 = 0.012347 loss)\nI0608 05:28:09.507539 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.214466 (* 1 = 0.214466 loss)\nI0608 05:28:09.507544 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0363946 (* 1 = 0.0363946 loss)\nI0608 05:28:09.507550 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.195757 (* 1 = 0.195757 loss)\nI0608 05:28:09.507556 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0363915 (* 1 = 0.0363915 loss)\nI0608 05:28:09.507563 
13573 sgd_solver.cpp:106] Iteration 47580, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:28:22.374027 13573 solver.cpp:229] Iteration 47600, loss = 0.842811\nI0608 05:28:22.374088 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0370641 (* 1 = 0.0370641 loss)\nI0608 05:28:22.374096 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0984539 (* 1 = 0.0984539 loss)\nI0608 05:28:22.374104 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.141117 (* 1 = 0.141117 loss)\nI0608 05:28:22.374110 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.138482 (* 1 = 0.138482 loss)\nI0608 05:28:22.374116 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138553 (* 1 = 0.138553 loss)\nI0608 05:28:22.374121 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.138529 (* 1 = 0.138529 loss)\nI0608 05:28:22.374128 13573 sgd_solver.cpp:106] Iteration 47600, lr = 0.001\nI0608 05:28:35.048287 13573 solver.cpp:229] Iteration 47620, loss = 1.21623\nI0608 05:28:35.048354 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.129013 (* 1 = 0.129013 loss)\nI0608 05:28:35.048364 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114571 (* 1 = 0.114571 loss)\nI0608 05:28:35.048370 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.568303 (* 1 = 0.568303 loss)\nI0608 05:28:35.048377 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.231921 (* 1 = 0.231921 loss)\nI0608 05:28:35.048382 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.548416 (* 1 = 0.548416 loss)\nI0608 05:28:35.048389 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.231935 (* 1 = 0.231935 loss)\nI0608 05:28:35.048396 13573 sgd_solver.cpp:106] Iteration 47620, lr = 0.001\nI0608 05:28:48.026751 13573 solver.cpp:229] Iteration 47640, loss = 1.4317\nI0608 05:28:48.026836 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00202433 (* 1 = 0.00202433 loss)\nI0608 05:28:48.026847 
13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000106256 (* 1 = 0.000106256 loss)\nI0608 05:28:48.026859 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.720314 (* 1 = 0.720314 loss)\nI0608 05:28:48.026866 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.225021 (* 1 = 0.225021 loss)\nI0608 05:28:48.026872 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.671981 (* 1 = 0.671981 loss)\nI0608 05:28:48.026878 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.225053 (* 1 = 0.225053 loss)\nI0608 05:28:48.026885 13573 sgd_solver.cpp:106] Iteration 47640, lr = 0.001\nI0608 05:29:00.949947 13573 solver.cpp:229] Iteration 47660, loss = 0.835709\nI0608 05:29:00.950011 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00104403 (* 1 = 0.00104403 loss)\nI0608 05:29:00.950021 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00358508 (* 1 = 0.00358508 loss)\nI0608 05:29:00.950027 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162373 (* 1 = 0.162373 loss)\nI0608 05:29:00.950033 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00254278 (* 1 = 0.00254278 loss)\nI0608 05:29:00.950038 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.163272 (* 1 = 0.163272 loss)\nI0608 05:29:00.950044 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00253977 (* 1 = 0.00253977 loss)\nI0608 05:29:00.950050 13573 sgd_solver.cpp:106] Iteration 47660, lr = 0.001\nI0608 05:29:13.882356 13573 solver.cpp:229] Iteration 47680, loss = 0.598758\nI0608 05:29:13.882422 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0425092 (* 1 = 0.0425092 loss)\nI0608 05:29:13.882431 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0431488 (* 1 = 0.0431488 loss)\nI0608 05:29:13.882437 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.145527 (* 1 = 0.145527 loss)\nI0608 05:29:13.882443 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.00439741 (* 1 = 0.00439741 loss)\nI0608 05:29:13.882448 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.177833 (* 1 = 0.177833 loss)\nI0608 05:29:13.882454 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00439654 (* 1 = 0.00439654 loss)\nI0608 05:29:13.882462 13573 sgd_solver.cpp:106] Iteration 47680, lr = 0.001\nI0608 05:29:26.959765 13573 solver.cpp:229] Iteration 47700, loss = 1.08177\nI0608 05:29:26.959828 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.168371 (* 1 = 0.168371 loss)\nI0608 05:29:26.959837 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.143017 (* 1 = 0.143017 loss)\nI0608 05:29:26.959842 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.468557 (* 1 = 0.468557 loss)\nI0608 05:29:26.959848 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.103968 (* 1 = 0.103968 loss)\nI0608 05:29:26.959854 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.460308 (* 1 = 0.460308 loss)\nI0608 05:29:26.959861 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.103962 (* 1 = 0.103962 loss)\nI0608 05:29:26.959867 13573 sgd_solver.cpp:106] Iteration 47700, lr = 0.001\nI0608 05:29:40.066803 13573 solver.cpp:229] Iteration 47720, loss = 0.463685\nI0608 05:29:40.066874 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0622938 (* 1 = 0.0622938 loss)\nI0608 05:29:40.066884 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0297631 (* 1 = 0.0297631 loss)\nI0608 05:29:40.066890 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.175958 (* 1 = 0.175958 loss)\nI0608 05:29:40.066895 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0289409 (* 1 = 0.0289409 loss)\nI0608 05:29:40.066901 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.18867 (* 1 = 0.18867 loss)\nI0608 05:29:40.066907 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.028941 (* 1 = 
0.028941 loss)\nI0608 05:29:40.066915 13573 sgd_solver.cpp:106] Iteration 47720, lr = 0.001\nI0608 05:29:53.092200 13573 solver.cpp:229] Iteration 47740, loss = 0.411661\nI0608 05:29:53.092264 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0566574 (* 1 = 0.0566574 loss)\nI0608 05:29:53.092273 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0745876 (* 1 = 0.0745876 loss)\nI0608 05:29:53.092279 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100873 (* 1 = 0.100873 loss)\nI0608 05:29:53.092285 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0174617 (* 1 = 0.0174617 loss)\nI0608 05:29:53.092290 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0960151 (* 1 = 0.0960151 loss)\nI0608 05:29:53.092296 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0174606 (* 1 = 0.0174606 loss)\nI0608 05:29:53.092303 13573 sgd_solver.cpp:106] Iteration 47740, lr = 0.001\nI0608 05:30:05.903875 13573 solver.cpp:229] Iteration 47760, loss = 1.00899\nI0608 05:30:05.903937 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.282291 (* 1 = 0.282291 loss)\nI0608 05:30:05.903946 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.229503 (* 1 = 0.229503 loss)\nI0608 05:30:05.903952 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.41652 (* 1 = 0.41652 loss)\nI0608 05:30:05.903959 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0599404 (* 1 = 0.0599404 loss)\nI0608 05:30:05.903964 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.421756 (* 1 = 0.421756 loss)\nI0608 05:30:05.903970 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0599374 (* 1 = 0.0599374 loss)\nI0608 05:30:05.903976 13573 sgd_solver.cpp:106] Iteration 47760, lr = 0.001\nI0608 05:30:18.838944 13573 solver.cpp:229] Iteration 47780, loss = 0.359563\nI0608 05:30:18.839017 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0724721 (* 1 = 0.0724721 
loss)\nI0608 05:30:18.839026 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0354648 (* 1 = 0.0354648 loss)\nI0608 05:30:18.839032 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.157452 (* 1 = 0.157452 loss)\nI0608 05:30:18.839038 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0193531 (* 1 = 0.0193531 loss)\nI0608 05:30:18.839043 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.153106 (* 1 = 0.153106 loss)\nI0608 05:30:18.839049 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0193537 (* 1 = 0.0193537 loss)\nI0608 05:30:18.839056 13573 sgd_solver.cpp:106] Iteration 47780, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:30:31.761117 13573 solver.cpp:229] Iteration 47800, loss = 2.31693\nI0608 05:30:31.761185 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000869789 (* 1 = 0.000869789 loss)\nI0608 05:30:31.761195 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0530307 (* 1 = 0.0530307 loss)\nI0608 05:30:31.761203 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.21073 (* 1 = 1.21073 loss)\nI0608 05:30:31.761209 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.648868 (* 1 = 0.648868 loss)\nI0608 05:30:31.761214 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.18251 (* 1 = 1.18251 loss)\nI0608 05:30:31.761220 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.649022 (* 1 = 0.649022 loss)\nI0608 05:30:31.761229 13573 sgd_solver.cpp:106] Iteration 47800, lr = 0.001\nI0608 05:30:44.831151 13573 solver.cpp:229] Iteration 47820, loss = 0.749411\nI0608 05:30:44.831221 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00168185 (* 1 = 0.00168185 loss)\nI0608 05:30:44.831231 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00191045 (* 1 = 0.00191045 loss)\nI0608 05:30:44.831238 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.446014 (* 1 = 0.446014 loss)\nI0608 
05:30:44.831243 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.116899 (* 1 = 0.116899 loss)
I0608 05:30:44.831249 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.476493 (* 1 = 0.476493 loss)
I0608 05:30:44.831255 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.116921 (* 1 = 0.116921 loss)
I0608 05:30:44.831264 13573 sgd_solver.cpp:106] Iteration 47820, lr = 0.001
I0608 05:30:57.873493 13573 solver.cpp:229] Iteration 47840, loss = 0.552146
I0608 05:30:57.873567 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0296295 (* 1 = 0.0296295 loss)
I0608 05:30:57.873577 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108524 (* 1 = 0.108524 loss)
I0608 05:30:57.873584 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0970013 (* 1 = 0.0970013 loss)
I0608 05:30:57.873589 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0516296 (* 1 = 0.0516296 loss)
I0608 05:30:57.873594 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0920857 (* 1 = 0.0920857 loss)
I0608 05:30:57.873600 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0516289 (* 1 = 0.0516289 loss)
I0608 05:30:57.873606 13573 sgd_solver.cpp:106] Iteration 47840, lr = 0.001
[... iterations 47860-48700 elided; total loss fluctuates between ~0.28 and ~1.41, lr = 0.001, speed: 0.645s / iter ...]
I0608 05:40:26.349938 13573 solver.cpp:229] Iteration 48720, loss = 0.52336
I0608 05:40:26.350008 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.001278 (* 1 = 0.001278 loss)
I0608 05:40:26.350018 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00483889 (* 1 = 0.00483889 loss)
I0608 05:40:26.350023 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.30266 (* 1 = 0.30266 loss)
I0608 05:40:26.350029 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.028157 (* 1 = 0.028157 loss)
I0608 05:40:26.350035 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.303925 (* 1 = 0.303925 loss)
I0608 05:40:26.350042 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0281751 (* 1 = 0.0281751 loss)
I0608 05:40:26.350049 13573 sgd_solver.cpp:106] Iteration 48720, lr = 0.001
I0608 05:40:39.428697 13573 solver.cpp:229] Iteration 48740, loss = 0.343356
I0608 05:40:39.428769 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0751468 (* 1 = 0.0751468 loss)
I0608 05:40:39.428779 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0433496 (* 1 = 0.0433496 loss)
I0608 05:40:39.428786 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.107602 (* 1 = 0.107602 loss)
I0608 05:40:39.428791 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0382462 (* 1 = 0.0382462 loss)
I0608 05:40:39.428797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109612 (* 1 = 0.109612 loss)
I0608 05:40:39.428802 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0382601 (* 1 = 0.0382601 loss)\nI0608 05:40:39.428809 13573 sgd_solver.cpp:106] Iteration 48740, lr = 0.001\nI0608 05:40:52.291651 13573 solver.cpp:229] Iteration 48760, loss = 0.62391\nI0608 05:40:52.291710 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0243458 (* 1 = 0.0243458 loss)\nI0608 05:40:52.291720 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0582951 (* 1 = 0.0582951 loss)\nI0608 05:40:52.291726 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0929443 (* 1 = 0.0929443 loss)\nI0608 05:40:52.291733 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0858497 (* 1 = 0.0858497 loss)\nI0608 05:40:52.291738 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0859599 (* 1 = 0.0859599 loss)\nI0608 05:40:52.291743 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0858759 (* 1 = 0.0858759 loss)\nI0608 05:40:52.291750 13573 sgd_solver.cpp:106] Iteration 48760, lr = 0.001\nI0608 05:41:05.023469 13573 solver.cpp:229] Iteration 48780, loss = 2.0203\nI0608 05:41:05.023536 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0605032 (* 1 = 0.0605032 loss)\nI0608 05:41:05.023546 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.136636 (* 1 = 0.136636 loss)\nI0608 05:41:05.023552 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.187351 (* 1 = 0.187351 loss)\nI0608 05:41:05.023557 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0335152 (* 1 = 0.0335152 loss)\nI0608 05:41:05.023563 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.195486 (* 1 = 0.195486 loss)\nI0608 05:41:05.023568 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0335235 (* 1 = 0.0335235 loss)\nI0608 05:41:05.023576 13573 sgd_solver.cpp:106] Iteration 48780, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:41:17.729708 13573 solver.cpp:229] Iteration 48800, loss = 0.466877\nI0608 
05:41:17.729782 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0324627 (* 1 = 0.0324627 loss)\nI0608 05:41:17.729791 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0280682 (* 1 = 0.0280682 loss)\nI0608 05:41:17.729797 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22709 (* 1 = 0.22709 loss)\nI0608 05:41:17.729804 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00718681 (* 1 = 0.00718681 loss)\nI0608 05:41:17.729809 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.234954 (* 1 = 0.234954 loss)\nI0608 05:41:17.729815 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00718887 (* 1 = 0.00718887 loss)\nI0608 05:41:17.729821 13573 sgd_solver.cpp:106] Iteration 48800, lr = 0.001\nI0608 05:41:30.512481 13573 solver.cpp:229] Iteration 48820, loss = 2.88763\nI0608 05:41:30.512543 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00276954 (* 1 = 0.00276954 loss)\nI0608 05:41:30.512552 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00558994 (* 1 = 0.00558994 loss)\nI0608 05:41:30.512558 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.700781 (* 1 = 0.700781 loss)\nI0608 05:41:30.512563 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.136205 (* 1 = 0.136205 loss)\nI0608 05:41:30.512569 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.710999 (* 1 = 0.710999 loss)\nI0608 05:41:30.512575 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.1362 (* 1 = 0.1362 loss)\nI0608 05:41:30.512581 13573 sgd_solver.cpp:106] Iteration 48820, lr = 0.001\nI0608 05:41:43.508981 13573 solver.cpp:229] Iteration 48840, loss = 0.8549\nI0608 05:41:43.509045 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0727293 (* 1 = 0.0727293 loss)\nI0608 05:41:43.509054 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0559609 (* 1 = 0.0559609 loss)\nI0608 05:41:43.509060 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.19362 (* 1 = 0.19362 loss)\nI0608 05:41:43.509066 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0115739 (* 1 = 0.0115739 loss)\nI0608 05:41:43.509071 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.210971 (* 1 = 0.210971 loss)\nI0608 05:41:43.509078 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0115735 (* 1 = 0.0115735 loss)\nI0608 05:41:43.509084 13573 sgd_solver.cpp:106] Iteration 48840, lr = 0.001\nI0608 05:41:56.335592 13573 solver.cpp:229] Iteration 48860, loss = 1.06943\nI0608 05:41:56.335667 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.100551 (* 1 = 0.100551 loss)\nI0608 05:41:56.335677 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.040619 (* 1 = 0.040619 loss)\nI0608 05:41:56.335682 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256932 (* 1 = 0.256932 loss)\nI0608 05:41:56.335688 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.030963 (* 1 = 0.030963 loss)\nI0608 05:41:56.335695 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.249781 (* 1 = 0.249781 loss)\nI0608 05:41:56.335700 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0309702 (* 1 = 0.0309702 loss)\nI0608 05:41:56.335706 13573 sgd_solver.cpp:106] Iteration 48860, lr = 0.001\nI0608 05:42:09.173521 13573 solver.cpp:229] Iteration 48880, loss = 0.478089\nI0608 05:42:09.173584 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00435186 (* 1 = 0.00435186 loss)\nI0608 05:42:09.173593 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00274722 (* 1 = 0.00274722 loss)\nI0608 05:42:09.173600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.231661 (* 1 = 0.231661 loss)\nI0608 05:42:09.173606 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00185956 (* 1 = 0.00185956 loss)\nI0608 05:42:09.173612 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.236237 (* 1 = 
0.236237 loss)\nI0608 05:42:09.173619 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00186296 (* 1 = 0.00186296 loss)\nI0608 05:42:09.173625 13573 sgd_solver.cpp:106] Iteration 48880, lr = 0.001\nI0608 05:42:21.916092 13573 solver.cpp:229] Iteration 48900, loss = 0.641888\nI0608 05:42:21.916160 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.130651 (* 1 = 0.130651 loss)\nI0608 05:42:21.916172 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.100671 (* 1 = 0.100671 loss)\nI0608 05:42:21.916177 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.308001 (* 1 = 0.308001 loss)\nI0608 05:42:21.916182 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0222248 (* 1 = 0.0222248 loss)\nI0608 05:42:21.916188 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.338238 (* 1 = 0.338238 loss)\nI0608 05:42:21.916193 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0222324 (* 1 = 0.0222324 loss)\nI0608 05:42:21.916200 13573 sgd_solver.cpp:106] Iteration 48900, lr = 0.001\nI0608 05:42:34.980990 13573 solver.cpp:229] Iteration 48920, loss = 0.479278\nI0608 05:42:34.981053 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.150617 (* 1 = 0.150617 loss)\nI0608 05:42:34.981062 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.101543 (* 1 = 0.101543 loss)\nI0608 05:42:34.981070 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0915426 (* 1 = 0.0915426 loss)\nI0608 05:42:34.981076 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0330719 (* 1 = 0.0330719 loss)\nI0608 05:42:34.981081 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.08877 (* 1 = 0.08877 loss)\nI0608 05:42:34.981087 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.03307 (* 1 = 0.03307 loss)\nI0608 05:42:34.981094 13573 sgd_solver.cpp:106] Iteration 48920, lr = 0.001\nI0608 05:42:47.941318 13573 solver.cpp:229] Iteration 48940, loss = 
0.919071\nI0608 05:42:47.941380 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.134649 (* 1 = 0.134649 loss)\nI0608 05:42:47.941390 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.15455 (* 1 = 0.15455 loss)\nI0608 05:42:47.941396 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.413695 (* 1 = 0.413695 loss)\nI0608 05:42:47.941403 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0776267 (* 1 = 0.0776267 loss)\nI0608 05:42:47.941409 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.410376 (* 1 = 0.410376 loss)\nI0608 05:42:47.941414 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0776679 (* 1 = 0.0776679 loss)\nI0608 05:42:47.941421 13573 sgd_solver.cpp:106] Iteration 48940, lr = 0.001\nI0608 05:43:00.849419 13573 solver.cpp:229] Iteration 48960, loss = 0.342259\nI0608 05:43:00.849485 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0676243 (* 1 = 0.0676243 loss)\nI0608 05:43:00.849494 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0752877 (* 1 = 0.0752877 loss)\nI0608 05:43:00.849500 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0892013 (* 1 = 0.0892013 loss)\nI0608 05:43:00.849506 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0199045 (* 1 = 0.0199045 loss)\nI0608 05:43:00.849511 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0893406 (* 1 = 0.0893406 loss)\nI0608 05:43:00.849519 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0199188 (* 1 = 0.0199188 loss)\nI0608 05:43:00.849524 13573 sgd_solver.cpp:106] Iteration 48960, lr = 0.001\nI0608 05:43:13.802443 13573 solver.cpp:229] Iteration 48980, loss = 0.732875\nI0608 05:43:13.802511 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0924356 (* 1 = 0.0924356 loss)\nI0608 05:43:13.802520 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0700071 (* 1 = 0.0700071 loss)\nI0608 05:43:13.802526 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.329239 (* 1 = 0.329239 loss)\nI0608 05:43:13.802532 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0572812 (* 1 = 0.0572812 loss)\nI0608 05:43:13.802537 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.347123 (* 1 = 0.347123 loss)\nI0608 05:43:13.802543 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0572846 (* 1 = 0.0572846 loss)\nI0608 05:43:13.802551 13573 sgd_solver.cpp:106] Iteration 48980, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:43:26.564905 13573 solver.cpp:229] Iteration 49000, loss = 0.52358\nI0608 05:43:26.565026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0533651 (* 1 = 0.0533651 loss)\nI0608 05:43:26.565034 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0721183 (* 1 = 0.0721183 loss)\nI0608 05:43:26.565040 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.220668 (* 1 = 0.220668 loss)\nI0608 05:43:26.565047 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0666777 (* 1 = 0.0666777 loss)\nI0608 05:43:26.565052 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.218096 (* 1 = 0.218096 loss)\nI0608 05:43:26.565058 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0666515 (* 1 = 0.0666515 loss)\nI0608 05:43:26.565065 13573 sgd_solver.cpp:106] Iteration 49000, lr = 0.001\nI0608 05:43:39.397187 13573 solver.cpp:229] Iteration 49020, loss = 1.04653\nI0608 05:43:39.397250 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.106484 (* 1 = 0.106484 loss)\nI0608 05:43:39.397259 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0928269 (* 1 = 0.0928269 loss)\nI0608 05:43:39.397266 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.114762 (* 1 = 0.114762 loss)\nI0608 05:43:39.397272 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.062939 (* 1 = 0.062939 loss)\nI0608 05:43:39.397279 13573 solver.cpp:245]     
Train net output #4: rpn_cls_loss = 0.114836 (* 1 = 0.114836 loss)\nI0608 05:43:39.397284 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0629452 (* 1 = 0.0629452 loss)\nI0608 05:43:39.397290 13573 sgd_solver.cpp:106] Iteration 49020, lr = 0.001\nI0608 05:43:52.229676 13573 solver.cpp:229] Iteration 49040, loss = 0.962165\nI0608 05:43:52.229734 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0130444 (* 1 = 0.0130444 loss)\nI0608 05:43:52.229743 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0251575 (* 1 = 0.0251575 loss)\nI0608 05:43:52.229750 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.52712 (* 1 = 0.52712 loss)\nI0608 05:43:52.229756 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0506374 (* 1 = 0.0506374 loss)\nI0608 05:43:52.229763 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.557438 (* 1 = 0.557438 loss)\nI0608 05:43:52.229768 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0506309 (* 1 = 0.0506309 loss)\nI0608 05:43:52.229774 13573 sgd_solver.cpp:106] Iteration 49040, lr = 0.001\nI0608 05:44:05.074808 13573 solver.cpp:229] Iteration 49060, loss = 0.798519\nI0608 05:44:05.074877 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0381891 (* 1 = 0.0381891 loss)\nI0608 05:44:05.074887 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0228332 (* 1 = 0.0228332 loss)\nI0608 05:44:05.074892 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.084715 (* 1 = 0.084715 loss)\nI0608 05:44:05.074898 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00322673 (* 1 = 0.00322673 loss)\nI0608 05:44:05.074904 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0948969 (* 1 = 0.0948969 loss)\nI0608 05:44:05.074910 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00322669 (* 1 = 0.00322669 loss)\nI0608 05:44:05.074918 13573 sgd_solver.cpp:106] Iteration 49060, lr = 0.001\nI0608 
05:44:17.997280 13573 solver.cpp:229] Iteration 49080, loss = 0.538084\nI0608 05:44:17.997349 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116875 (* 1 = 0.116875 loss)\nI0608 05:44:17.997359 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0435748 (* 1 = 0.0435748 loss)\nI0608 05:44:17.997364 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.213208 (* 1 = 0.213208 loss)\nI0608 05:44:17.997370 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0695134 (* 1 = 0.0695134 loss)\nI0608 05:44:17.997376 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216402 (* 1 = 0.216402 loss)\nI0608 05:44:17.997382 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0695125 (* 1 = 0.0695125 loss)\nI0608 05:44:17.997388 13573 sgd_solver.cpp:106] Iteration 49080, lr = 0.001\nI0608 05:44:30.990406 13573 solver.cpp:229] Iteration 49100, loss = 0.527552\nI0608 05:44:30.990474 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00348462 (* 1 = 0.00348462 loss)\nI0608 05:44:30.990484 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0302155 (* 1 = 0.0302155 loss)\nI0608 05:44:30.990490 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.258828 (* 1 = 0.258828 loss)\nI0608 05:44:30.990496 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110527 (* 1 = 0.0110527 loss)\nI0608 05:44:30.990502 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.253248 (* 1 = 0.253248 loss)\nI0608 05:44:30.990509 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110532 (* 1 = 0.0110532 loss)\nI0608 05:44:30.990516 13573 sgd_solver.cpp:106] Iteration 49100, lr = 0.001\nI0608 05:44:43.992240 13573 solver.cpp:229] Iteration 49120, loss = 0.525689\nI0608 05:44:43.992305 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0405004 (* 1 = 0.0405004 loss)\nI0608 05:44:43.992314 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0335174 (* 
1 = 0.0335174 loss)\nI0608 05:44:43.992321 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.300038 (* 1 = 0.300038 loss)\nI0608 05:44:43.992326 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.01872 (* 1 = 0.01872 loss)\nI0608 05:44:43.992331 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.271862 (* 1 = 0.271862 loss)\nI0608 05:44:43.992338 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0187184 (* 1 = 0.0187184 loss)\nI0608 05:44:43.992346 13573 sgd_solver.cpp:106] Iteration 49120, lr = 0.001\nI0608 05:44:57.024864 13573 solver.cpp:229] Iteration 49140, loss = 0.41212\nI0608 05:44:57.024930 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00160439 (* 1 = 0.00160439 loss)\nI0608 05:44:57.024940 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0206702 (* 1 = 0.0206702 loss)\nI0608 05:44:57.024945 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.158228 (* 1 = 0.158228 loss)\nI0608 05:44:57.024951 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00431589 (* 1 = 0.00431589 loss)\nI0608 05:44:57.024957 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.156051 (* 1 = 0.156051 loss)\nI0608 05:44:57.024963 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00431134 (* 1 = 0.00431134 loss)\nI0608 05:44:57.024971 13573 sgd_solver.cpp:106] Iteration 49140, lr = 0.001\nI0608 05:45:09.929621 13573 solver.cpp:229] Iteration 49160, loss = 1.18267\nI0608 05:45:09.929685 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0269128 (* 1 = 0.0269128 loss)\nI0608 05:45:09.929695 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0419846 (* 1 = 0.0419846 loss)\nI0608 05:45:09.929702 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.307357 (* 1 = 0.307357 loss)\nI0608 05:45:09.929708 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0303716 (* 1 = 0.0303716 loss)\nI0608 
05:45:09.929713 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.328778 (* 1 = 0.328778 loss)\nI0608 05:45:09.929719 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0303811 (* 1 = 0.0303811 loss)\nI0608 05:45:09.929726 13573 sgd_solver.cpp:106] Iteration 49160, lr = 0.001\nI0608 05:45:22.739794 13573 solver.cpp:229] Iteration 49180, loss = 0.633124\nI0608 05:45:22.739857 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.193162 (* 1 = 0.193162 loss)\nI0608 05:45:22.739867 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.227835 (* 1 = 0.227835 loss)\nI0608 05:45:22.739873 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.16881 (* 1 = 0.16881 loss)\nI0608 05:45:22.739879 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0564111 (* 1 = 0.0564111 loss)\nI0608 05:45:22.739886 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174861 (* 1 = 0.174861 loss)\nI0608 05:45:22.739892 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.056416 (* 1 = 0.056416 loss)\nI0608 05:45:22.739897 13573 sgd_solver.cpp:106] Iteration 49180, lr = 0.001\nspeed: 0.645s / iter\nI0608 05:45:35.743518 13573 solver.cpp:229] Iteration 49200, loss = 0.388937\nI0608 05:45:35.743584 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00057493 (* 1 = 0.00057493 loss)\nI0608 05:45:35.743594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0153333 (* 1 = 0.0153333 loss)\nI0608 05:45:35.743600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.190516 (* 1 = 0.190516 loss)\nI0608 05:45:35.743607 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0159042 (* 1 = 0.0159042 loss)\nI0608 05:45:35.743613 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.207178 (* 1 = 0.207178 loss)\nI0608 05:45:35.743618 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0159078 (* 1 = 0.0159078 loss)\nI0608 05:45:35.743625 13573 
sgd_solver.cpp:106] Iteration 49200, lr = 0.001\nI0608 05:45:48.676661 13573 solver.cpp:229] Iteration 49220, loss = 0.810662\nI0608 05:45:48.676722 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0468379 (* 1 = 0.0468379 loss)\nI0608 05:45:48.676730 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0508474 (* 1 = 0.0508474 loss)\nI0608 05:45:48.676738 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.15884 (* 1 = 0.15884 loss)\nI0608 05:45:48.676764 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0261548 (* 1 = 0.0261548 loss)\nI0608 05:45:48.676771 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.151776 (* 1 = 0.151776 loss)\nI0608 05:45:48.676776 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0261541 (* 1 = 0.0261541 loss)\nI0608 05:45:48.676784 13573 sgd_solver.cpp:106] Iteration 49220, lr = 0.001\nI0608 05:46:01.561244 13573 solver.cpp:229] Iteration 49240, loss = 0.622164\nI0608 05:46:01.561319 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0472735 (* 1 = 0.0472735 loss)\nI0608 05:46:01.561329 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00875572 (* 1 = 0.00875572 loss)\nI0608 05:46:01.561336 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.194804 (* 1 = 0.194804 loss)\nI0608 05:46:01.561341 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0124828 (* 1 = 0.0124828 loss)\nI0608 05:46:01.561347 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.204661 (* 1 = 0.204661 loss)\nI0608 05:46:01.561352 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0124792 (* 1 = 0.0124792 loss)\nI0608 05:46:01.561359 13573 sgd_solver.cpp:106] Iteration 49240, lr = 0.001\nI0608 05:46:14.483798 13573 solver.cpp:229] Iteration 49260, loss = 0.722587\nI0608 05:46:14.483861 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.1213 (* 1 = 0.1213 loss)\nI0608 05:46:14.483871 13573 solver.cpp:245] 
    Train net output #1: loss_cls = 0.099467 (* 1 = 0.099467 loss)\nI0608 05:46:14.483877 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0904605 (* 1 = 0.0904605 loss)\nI0608 05:46:14.483883 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0168049 (* 1 = 0.0168049 loss)\nI0608 05:46:14.483889 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0875726 (* 1 = 0.0875726 loss)\nI0608 05:46:14.483896 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0168053 (* 1 = 0.0168053 loss)\nI0608 05:46:14.483902 13573 sgd_solver.cpp:106] Iteration 49260, lr = 0.001\nI0608 05:46:27.444610 13573 solver.cpp:229] Iteration 49280, loss = 1.08137\nI0608 05:46:27.444670 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.159121 (* 1 = 0.159121 loss)\nI0608 05:46:27.444680 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.171471 (* 1 = 0.171471 loss)\nI0608 05:46:27.444686 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.45981 (* 1 = 0.45981 loss)\nI0608 05:46:27.444694 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.021417 (* 1 = 0.021417 loss)\nI0608 05:46:27.444699 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.500848 (* 1 = 0.500848 loss)\nI0608 05:46:27.444705 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0214113 (* 1 = 0.0214113 loss)\nI0608 05:46:27.444711 13573 sgd_solver.cpp:106] Iteration 49280, lr = 0.001\nI0608 05:46:40.206167 13573 solver.cpp:229] Iteration 49300, loss = 0.808135\nI0608 05:46:40.206238 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.041823 (* 1 = 0.041823 loss)\nI0608 05:46:40.206248 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122049 (* 1 = 0.122049 loss)\nI0608 05:46:40.206254 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.37022 (* 1 = 0.37022 loss)\nI0608 05:46:40.206260 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.107111 (* 1 = 
0.107111 loss)\nI0608 05:46:40.206265 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.384206 (* 1 = 0.384206 loss)\nI0608 05:46:40.206272 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.107096 (* 1 = 0.107096 loss)\nI0608 05:46:40.206279 13573 sgd_solver.cpp:106] Iteration 49300, lr = 0.001\nI0608 05:46:52.904250 13573 solver.cpp:229] Iteration 49320, loss = 0.577541\nI0608 05:46:52.904319 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0671612 (* 1 = 0.0671612 loss)\nI0608 05:46:52.904327 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0670746 (* 1 = 0.0670746 loss)\nI0608 05:46:52.904333 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121902 (* 1 = 0.121902 loss)\nI0608 05:46:52.904340 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0234965 (* 1 = 0.0234965 loss)\nI0608 05:46:52.904345 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.125156 (* 1 = 0.125156 loss)\nI0608 05:46:52.904351 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0234916 (* 1 = 0.0234916 loss)\nI0608 05:46:52.904358 13573 sgd_solver.cpp:106] Iteration 49320, lr = 0.001\nI0608 05:47:05.608662 13573 solver.cpp:229] Iteration 49340, loss = 1.22163\nI0608 05:47:05.608726 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0445309 (* 1 = 0.0445309 loss)\nI0608 05:47:05.608736 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.12262 (* 1 = 0.12262 loss)\nI0608 05:47:05.608747 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.437497 (* 1 = 0.437497 loss)\nI0608 05:47:05.608754 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.160397 (* 1 = 0.160397 loss)\nI0608 05:47:05.608760 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.434405 (* 1 = 0.434405 loss)\nI0608 05:47:05.608767 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.160381 (* 1 = 0.160381 loss)\nI0608 05:47:05.608772 13573 
sgd_solver.cpp:106] Iteration 49340, lr = 0.001
I0608 05:47:18.632313 13573 solver.cpp:229] Iteration 49360, loss = 0.56664
I0608 05:47:18.632388 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0807517 (* 1 = 0.0807517 loss)
I0608 05:47:18.632398 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0380559 (* 1 = 0.0380559 loss)
I0608 05:47:18.632405 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212842 (* 1 = 0.212842 loss)
I0608 05:47:18.632411 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0419094 (* 1 = 0.0419094 loss)
I0608 05:47:18.632416 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.215256 (* 1 = 0.215256 loss)
I0608 05:47:18.632422 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0418919 (* 1 = 0.0418919 loss)
I0608 05:47:18.632429 13573 sgd_solver.cpp:106] Iteration 49360, lr = 0.001
speed: 0.645s / iter
...
Wrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_50000.caffemodel
I0608 05:54:17.555027 13573 solver.cpp:229] Iteration 50000, loss = 0.643427
I0608 05:54:17.555137 13573 sgd_solver.cpp:106] Iteration 50000, lr = 0.0001
...
I0608 05:57:17.353086 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.164858 (* 1 = 0.164858 loss)\nI0608 05:57:17.353096 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.180221 (* 1 = 0.180221 loss)\nI0608 05:57:17.353102 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.276138 (* 1 = 0.276138 loss)\nI0608 05:57:17.353108 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0928176 (* 1 = 0.0928176 loss)\nI0608 05:57:17.353113 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.288243 (* 1 = 0.288243 loss)\nI0608 05:57:17.353119 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0928262 (* 1 = 0.0928262 loss)\nI0608 05:57:17.353126 13573 sgd_solver.cpp:106] Iteration 50280, lr = 0.0001\nI0608 05:57:30.246590 13573 solver.cpp:229] Iteration 50300, loss = 0.403139\nI0608 05:57:30.246651 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0587295 (* 1 = 0.0587295 loss)\nI0608 05:57:30.246660 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.037751 (* 1 = 0.037751 loss)\nI0608 05:57:30.246666 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0986231 (* 1 = 0.0986231 loss)\nI0608 05:57:30.246672 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.014538 (* 1 = 0.014538 loss)\nI0608 05:57:30.246677 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0951953 (* 1 = 0.0951953 loss)\nI0608 05:57:30.246683 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0145431 (* 1 = 0.0145431 loss)\nI0608 05:57:30.246690 13573 sgd_solver.cpp:106] Iteration 50300, lr = 0.0001\nI0608 05:57:43.219707 13573 solver.cpp:229] Iteration 50320, loss = 1.7419\nI0608 05:57:43.219772 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0026381 (* 1 = 0.0026381 loss)\nI0608 05:57:43.219799 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00105666 (* 1 = 0.00105666 loss)\nI0608 05:57:43.219807 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.05335 (* 1 = 1.05335 
loss)\nI0608 05:57:43.219813 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.384168 (* 1 = 0.384168 loss)\nI0608 05:57:43.219820 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.03782 (* 1 = 1.03782 loss)\nI0608 05:57:43.219825 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.384216 (* 1 = 0.384216 loss)\nI0608 05:57:43.219831 13573 sgd_solver.cpp:106] Iteration 50320, lr = 0.0001\nI0608 05:57:55.718677 13573 solver.cpp:229] Iteration 50340, loss = 0.553751\nI0608 05:57:55.718749 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.022305 (* 1 = 0.022305 loss)\nI0608 05:57:55.718757 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0630097 (* 1 = 0.0630097 loss)\nI0608 05:57:55.718763 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0750697 (* 1 = 0.0750697 loss)\nI0608 05:57:55.718782 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0090927 (* 1 = 0.0090927 loss)\nI0608 05:57:55.718792 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0779273 (* 1 = 0.0779273 loss)\nI0608 05:57:55.718797 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00909582 (* 1 = 0.00909582 loss)\nI0608 05:57:55.718804 13573 sgd_solver.cpp:106] Iteration 50340, lr = 0.0001\nI0608 05:58:08.462329 13573 solver.cpp:229] Iteration 50360, loss = 1.44839\nI0608 05:58:08.462391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0019856 (* 1 = 0.0019856 loss)\nI0608 05:58:08.462400 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00160897 (* 1 = 0.00160897 loss)\nI0608 05:58:08.462406 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.266752 (* 1 = 0.266752 loss)\nI0608 05:58:08.462412 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0226309 (* 1 = 0.0226309 loss)\nI0608 05:58:08.462419 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.263038 (* 1 = 0.263038 loss)\nI0608 05:58:08.462424 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0226365 (* 1 = 0.0226365 loss)\nI0608 05:58:08.462430 13573 sgd_solver.cpp:106] Iteration 50360, lr = 0.0001\nI0608 05:58:21.516288 13573 solver.cpp:229] Iteration 50380, loss = 0.55957\nI0608 05:58:21.516355 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.123392 (* 1 = 0.123392 loss)\nI0608 05:58:21.516365 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.100362 (* 1 = 0.100362 loss)\nI0608 05:58:21.516371 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.260656 (* 1 = 0.260656 loss)\nI0608 05:58:21.516376 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.023107 (* 1 = 0.023107 loss)\nI0608 05:58:21.516381 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.271713 (* 1 = 0.271713 loss)\nI0608 05:58:21.516387 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0231143 (* 1 = 0.0231143 loss)\nI0608 05:58:21.516394 13573 sgd_solver.cpp:106] Iteration 50380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 05:58:34.293859 13573 solver.cpp:229] Iteration 50400, loss = 0.6034\nI0608 05:58:34.293923 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.050219 (* 1 = 0.050219 loss)\nI0608 05:58:34.293932 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0513815 (* 1 = 0.0513815 loss)\nI0608 05:58:34.293939 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112581 (* 1 = 0.112581 loss)\nI0608 05:58:34.293946 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.095981 (* 1 = 0.095981 loss)\nI0608 05:58:34.293951 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109638 (* 1 = 0.109638 loss)\nI0608 05:58:34.293956 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0959238 (* 1 = 0.0959238 loss)\nI0608 05:58:34.293963 13573 sgd_solver.cpp:106] Iteration 50400, lr = 0.0001\nI0608 05:58:47.147918 13573 solver.cpp:229] Iteration 50420, loss = 0.900857\nI0608 05:58:47.147979 
13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00217531 (* 1 = 0.00217531 loss)\nI0608 05:58:47.147989 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00978497 (* 1 = 0.00978497 loss)\nI0608 05:58:47.147994 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.272621 (* 1 = 0.272621 loss)\nI0608 05:58:47.148000 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0259626 (* 1 = 0.0259626 loss)\nI0608 05:58:47.148006 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.247013 (* 1 = 0.247013 loss)\nI0608 05:58:47.148011 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0259714 (* 1 = 0.0259714 loss)\nI0608 05:58:47.148017 13573 sgd_solver.cpp:106] Iteration 50420, lr = 0.0001\nI0608 05:59:00.226707 13573 solver.cpp:229] Iteration 50440, loss = 0.843056\nI0608 05:59:00.226785 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0235575 (* 1 = 0.0235575 loss)\nI0608 05:59:00.226796 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0726932 (* 1 = 0.0726932 loss)\nI0608 05:59:00.226802 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.434615 (* 1 = 0.434615 loss)\nI0608 05:59:00.226809 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0675038 (* 1 = 0.0675038 loss)\nI0608 05:59:00.226814 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.411867 (* 1 = 0.411867 loss)\nI0608 05:59:00.226819 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0675397 (* 1 = 0.0675397 loss)\nI0608 05:59:00.226826 13573 sgd_solver.cpp:106] Iteration 50440, lr = 0.0001\nI0608 05:59:13.267478 13573 solver.cpp:229] Iteration 50460, loss = 0.603903\nI0608 05:59:13.267540 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0480183 (* 1 = 0.0480183 loss)\nI0608 05:59:13.267550 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0632132 (* 1 = 0.0632132 loss)\nI0608 05:59:13.267555 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.258223 (* 1 = 0.258223 loss)\nI0608 05:59:13.267561 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.069249 (* 1 = 0.069249 loss)\nI0608 05:59:13.267567 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.25899 (* 1 = 0.25899 loss)\nI0608 05:59:13.267572 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0692715 (* 1 = 0.0692715 loss)\nI0608 05:59:13.267578 13573 sgd_solver.cpp:106] Iteration 50460, lr = 0.0001\nI0608 05:59:26.079373 13573 solver.cpp:229] Iteration 50480, loss = 0.467904\nI0608 05:59:26.079444 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.05024 (* 1 = 0.05024 loss)\nI0608 05:59:26.079454 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0789437 (* 1 = 0.0789437 loss)\nI0608 05:59:26.079460 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.134012 (* 1 = 0.134012 loss)\nI0608 05:59:26.079465 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0370992 (* 1 = 0.0370992 loss)\nI0608 05:59:26.079471 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114148 (* 1 = 0.114148 loss)\nI0608 05:59:26.079476 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0370934 (* 1 = 0.0370934 loss)\nI0608 05:59:26.079483 13573 sgd_solver.cpp:106] Iteration 50480, lr = 0.0001\nI0608 05:59:39.165554 13573 solver.cpp:229] Iteration 50500, loss = 0.607006\nI0608 05:59:39.165621 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000154166 (* 1 = 0.000154166 loss)\nI0608 05:59:39.165632 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000295259 (* 1 = 0.000295259 loss)\nI0608 05:59:39.165637 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.176941 (* 1 = 0.176941 loss)\nI0608 05:59:39.165642 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0174342 (* 1 = 0.0174342 loss)\nI0608 05:59:39.165648 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174659 (* 1 = 
0.174659 loss)\nI0608 05:59:39.165654 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0174386 (* 1 = 0.0174386 loss)\nI0608 05:59:39.165660 13573 sgd_solver.cpp:106] Iteration 50500, lr = 0.0001\nI0608 05:59:52.189975 13573 solver.cpp:229] Iteration 50520, loss = 0.794146\nI0608 05:59:52.190038 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0862884 (* 1 = 0.0862884 loss)\nI0608 05:59:52.190047 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0405757 (* 1 = 0.0405757 loss)\nI0608 05:59:52.190053 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.340603 (* 1 = 0.340603 loss)\nI0608 05:59:52.190059 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0594183 (* 1 = 0.0594183 loss)\nI0608 05:59:52.190064 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.335456 (* 1 = 0.335456 loss)\nI0608 05:59:52.190070 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.059411 (* 1 = 0.059411 loss)\nI0608 05:59:52.190078 13573 sgd_solver.cpp:106] Iteration 50520, lr = 0.0001\nI0608 06:00:05.006659 13573 solver.cpp:229] Iteration 50540, loss = 0.652337\nI0608 06:00:05.006731 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.107432 (* 1 = 0.107432 loss)\nI0608 06:00:05.006742 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0710536 (* 1 = 0.0710536 loss)\nI0608 06:00:05.006747 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.201452 (* 1 = 0.201452 loss)\nI0608 06:00:05.006752 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0353016 (* 1 = 0.0353016 loss)\nI0608 06:00:05.006758 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.203177 (* 1 = 0.203177 loss)\nI0608 06:00:05.006764 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0352959 (* 1 = 0.0352959 loss)\nI0608 06:00:05.006783 13573 sgd_solver.cpp:106] Iteration 50540, lr = 0.0001\nI0608 06:00:17.954730 13573 solver.cpp:229] Iteration 50560, loss = 
0.720763\nI0608 06:00:17.954807 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0782529 (* 1 = 0.0782529 loss)\nI0608 06:00:17.954818 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0626644 (* 1 = 0.0626644 loss)\nI0608 06:00:17.954823 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.20322 (* 1 = 0.20322 loss)\nI0608 06:00:17.954828 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0164718 (* 1 = 0.0164718 loss)\nI0608 06:00:17.954834 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.218976 (* 1 = 0.218976 loss)\nI0608 06:00:17.954840 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0164807 (* 1 = 0.0164807 loss)\nI0608 06:00:17.954846 13573 sgd_solver.cpp:106] Iteration 50560, lr = 0.0001\nI0608 06:00:31.025517 13573 solver.cpp:229] Iteration 50580, loss = 0.296103\nI0608 06:00:31.025580 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0719781 (* 1 = 0.0719781 loss)\nI0608 06:00:31.025589 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0650033 (* 1 = 0.0650033 loss)\nI0608 06:00:31.025595 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0801205 (* 1 = 0.0801205 loss)\nI0608 06:00:31.025600 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0117189 (* 1 = 0.0117189 loss)\nI0608 06:00:31.025606 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.083707 (* 1 = 0.083707 loss)\nI0608 06:00:31.025612 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117171 (* 1 = 0.0117171 loss)\nI0608 06:00:31.025619 13573 sgd_solver.cpp:106] Iteration 50580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:00:44.003334 13573 solver.cpp:229] Iteration 50600, loss = 0.364484\nI0608 06:00:44.003408 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0356833 (* 1 = 0.0356833 loss)\nI0608 06:00:44.003417 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0342491 (* 1 = 0.0342491 loss)\nI0608 
06:00:44.003423 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.143945 (* 1 = 0.143945 loss)\nI0608 06:00:44.003429 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0364076 (* 1 = 0.0364076 loss)\nI0608 06:00:44.003434 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139503 (* 1 = 0.139503 loss)\nI0608 06:00:44.003440 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0364137 (* 1 = 0.0364137 loss)\nI0608 06:00:44.003448 13573 sgd_solver.cpp:106] Iteration 50600, lr = 0.0001\nI0608 06:00:56.921775 13573 solver.cpp:229] Iteration 50620, loss = 0.686818\nI0608 06:00:56.921842 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0603897 (* 1 = 0.0603897 loss)\nI0608 06:00:56.921852 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0602891 (* 1 = 0.0602891 loss)\nI0608 06:00:56.921859 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.316699 (* 1 = 0.316699 loss)\nI0608 06:00:56.921864 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0258903 (* 1 = 0.0258903 loss)\nI0608 06:00:56.921869 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.328699 (* 1 = 0.328699 loss)\nI0608 06:00:56.921875 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0258972 (* 1 = 0.0258972 loss)\nI0608 06:00:56.921881 13573 sgd_solver.cpp:106] Iteration 50620, lr = 0.0001\nI0608 06:01:10.020611 13573 solver.cpp:229] Iteration 50640, loss = 0.801383\nI0608 06:01:10.020673 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00459054 (* 1 = 0.00459054 loss)\nI0608 06:01:10.020689 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00434241 (* 1 = 0.00434241 loss)\nI0608 06:01:10.020695 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.344709 (* 1 = 0.344709 loss)\nI0608 06:01:10.020702 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328673 (* 1 = 0.0328673 loss)\nI0608 06:01:10.020709 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.397941 (* 1 = 0.397941 loss)\nI0608 06:01:10.020714 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0329117 (* 1 = 0.0329117 loss)\nI0608 06:01:10.020721 13573 sgd_solver.cpp:106] Iteration 50640, lr = 0.0001\nI0608 06:01:22.805057 13573 solver.cpp:229] Iteration 50660, loss = 0.271462\nI0608 06:01:22.805127 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0420237 (* 1 = 0.0420237 loss)\nI0608 06:01:22.805137 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0441698 (* 1 = 0.0441698 loss)\nI0608 06:01:22.805143 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0858177 (* 1 = 0.0858177 loss)\nI0608 06:01:22.805150 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0173387 (* 1 = 0.0173387 loss)\nI0608 06:01:22.805155 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0837991 (* 1 = 0.0837991 loss)\nI0608 06:01:22.805161 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0173364 (* 1 = 0.0173364 loss)\nI0608 06:01:22.805168 13573 sgd_solver.cpp:106] Iteration 50660, lr = 0.0001\nI0608 06:01:35.652997 13573 solver.cpp:229] Iteration 50680, loss = 0.656085\nI0608 06:01:35.653064 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0183219 (* 1 = 0.0183219 loss)\nI0608 06:01:35.653072 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0511866 (* 1 = 0.0511866 loss)\nI0608 06:01:35.653079 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.419252 (* 1 = 0.419252 loss)\nI0608 06:01:35.653267 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0370979 (* 1 = 0.0370979 loss)\nI0608 06:01:35.653324 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.443302 (* 1 = 0.443302 loss)\nI0608 06:01:35.653391 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0371065 (* 1 = 0.0371065 loss)\nI0608 06:01:35.653447 13573 sgd_solver.cpp:106] Iteration 50680, 
lr = 0.0001\nI0608 06:01:48.458289 13573 solver.cpp:229] Iteration 50700, loss = 0.487788\nI0608 06:01:48.458351 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0418407 (* 1 = 0.0418407 loss)\nI0608 06:01:48.458361 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0118272 (* 1 = 0.0118272 loss)\nI0608 06:01:48.458369 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0853309 (* 1 = 0.0853309 loss)\nI0608 06:01:48.458374 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0208196 (* 1 = 0.0208196 loss)\nI0608 06:01:48.458380 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0889904 (* 1 = 0.0889904 loss)\nI0608 06:01:48.458385 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0208257 (* 1 = 0.0208257 loss)\nI0608 06:01:48.458392 13573 sgd_solver.cpp:106] Iteration 50700, lr = 0.0001\nI0608 06:02:01.450196 13573 solver.cpp:229] Iteration 50720, loss = 2.21751\nI0608 06:02:01.450309 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0527364 (* 1 = 0.0527364 loss)\nI0608 06:02:01.450320 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0358037 (* 1 = 0.0358037 loss)\nI0608 06:02:01.450326 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.125966 (* 1 = 0.125966 loss)\nI0608 06:02:01.450331 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.208239 (* 1 = 0.208239 loss)\nI0608 06:02:01.450337 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.125048 (* 1 = 0.125048 loss)\nI0608 06:02:01.450343 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.207756 (* 1 = 0.207756 loss)\nI0608 06:02:01.450350 13573 sgd_solver.cpp:106] Iteration 50720, lr = 0.0001\nI0608 06:02:14.467207 13573 solver.cpp:229] Iteration 50740, loss = 0.468588\nI0608 06:02:14.467272 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0183576 (* 1 = 0.0183576 loss)\nI0608 06:02:14.467283 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.0370092 (* 1 = 0.0370092 loss)\nI0608 06:02:14.467288 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0895499 (* 1 = 0.0895499 loss)\nI0608 06:02:14.467293 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0196874 (* 1 = 0.0196874 loss)\nI0608 06:02:14.467313 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0926807 (* 1 = 0.0926807 loss)\nI0608 06:02:14.467322 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0197007 (* 1 = 0.0197007 loss)\nI0608 06:02:14.467332 13573 sgd_solver.cpp:106] Iteration 50740, lr = 0.0001\nI0608 06:02:27.607727 13573 solver.cpp:229] Iteration 50760, loss = 0.406244\nI0608 06:02:27.607789 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0499378 (* 1 = 0.0499378 loss)\nI0608 06:02:27.607798 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0656091 (* 1 = 0.0656091 loss)\nI0608 06:02:27.607805 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0989162 (* 1 = 0.0989162 loss)\nI0608 06:02:27.607810 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0153292 (* 1 = 0.0153292 loss)\nI0608 06:02:27.607816 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0976474 (* 1 = 0.0976474 loss)\nI0608 06:02:27.607822 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0153188 (* 1 = 0.0153188 loss)\nI0608 06:02:27.607830 13573 sgd_solver.cpp:106] Iteration 50760, lr = 0.0001\nI0608 06:02:40.606359 13573 solver.cpp:229] Iteration 50780, loss = 0.77135\nI0608 06:02:40.606436 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0487209 (* 1 = 0.0487209 loss)\nI0608 06:02:40.606446 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0264927 (* 1 = 0.0264927 loss)\nI0608 06:02:40.606452 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.123104 (* 1 = 0.123104 loss)\nI0608 06:02:40.606458 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0526875 (* 1 = 
0.0526875 loss)\nI0608 06:02:40.606463 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.120553 (* 1 = 0.120553 loss)\nI0608 06:02:40.606469 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.052703 (* 1 = 0.052703 loss)\nI0608 06:02:40.606477 13573 sgd_solver.cpp:106] Iteration 50780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:02:53.225327 13573 solver.cpp:229] Iteration 50800, loss = 0.441346\nI0608 06:02:53.225461 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0541694 (* 1 = 0.0541694 loss)\nI0608 06:02:53.225471 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0313029 (* 1 = 0.0313029 loss)\nI0608 06:02:53.225476 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0863731 (* 1 = 0.0863731 loss)\nI0608 06:02:53.225482 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0106898 (* 1 = 0.0106898 loss)\nI0608 06:02:53.225487 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0832788 (* 1 = 0.0832788 loss)\nI0608 06:02:53.225493 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0106869 (* 1 = 0.0106869 loss)\nI0608 06:02:53.225500 13573 sgd_solver.cpp:106] Iteration 50800, lr = 0.0001\nI0608 06:03:06.233258 13573 solver.cpp:229] Iteration 50820, loss = 0.4268\nI0608 06:03:06.233320 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0653225 (* 1 = 0.0653225 loss)\nI0608 06:03:06.233330 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0817586 (* 1 = 0.0817586 loss)\nI0608 06:03:06.233336 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.151035 (* 1 = 0.151035 loss)\nI0608 06:03:06.233342 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0123789 (* 1 = 0.0123789 loss)\nI0608 06:03:06.233348 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.161717 (* 1 = 0.161717 loss)\nI0608 06:03:06.233355 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0123746 (* 1 = 0.0123746 
loss)\nI0608 06:03:06.233361 13573 sgd_solver.cpp:106] Iteration 50820, lr = 0.0001\nI0608 06:03:18.829020 13573 solver.cpp:229] Iteration 50840, loss = 1.03894\nI0608 06:03:18.829087 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00823664 (* 1 = 0.00823664 loss)\nI0608 06:03:18.829097 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00293227 (* 1 = 0.00293227 loss)\nI0608 06:03:18.829102 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.38361 (* 1 = 0.38361 loss)\nI0608 06:03:18.829108 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0351334 (* 1 = 0.0351334 loss)\nI0608 06:03:18.829113 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.391485 (* 1 = 0.391485 loss)\nI0608 06:03:18.829119 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.035174 (* 1 = 0.035174 loss)\nI0608 06:03:18.829126 13573 sgd_solver.cpp:106] Iteration 50840, lr = 0.0001\nI0608 06:03:31.786037 13573 solver.cpp:229] Iteration 50860, loss = 0.525293\nI0608 06:03:31.786113 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.116055 (* 1 = 0.116055 loss)\nI0608 06:03:31.786121 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.141152 (* 1 = 0.141152 loss)\nI0608 06:03:31.786128 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0970483 (* 1 = 0.0970483 loss)\nI0608 06:03:31.786134 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0257139 (* 1 = 0.0257139 loss)\nI0608 06:03:31.786139 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0984374 (* 1 = 0.0984374 loss)\nI0608 06:03:31.786144 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0257093 (* 1 = 0.0257093 loss)\nI0608 06:03:31.786151 13573 sgd_solver.cpp:106] Iteration 50860, lr = 0.0001\nI0608 06:03:44.586057 13573 solver.cpp:229] Iteration 50880, loss = 0.48679\nI0608 06:03:44.586129 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0573125 (* 1 = 0.0573125 
loss)\nI0608 06:03:44.586138 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0769238 (* 1 = 0.0769238 loss)\nI0608 06:03:44.586145 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.097477 (* 1 = 0.097477 loss)\nI0608 06:03:44.586150 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116764 (* 1 = 0.0116764 loss)\nI0608 06:03:44.586156 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103804 (* 1 = 0.103804 loss)\nI0608 06:03:44.586163 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116733 (* 1 = 0.0116733 loss)\nI0608 06:03:44.586169 13573 sgd_solver.cpp:106] Iteration 50880, lr = 0.0001\nI0608 06:03:57.572712 13573 solver.cpp:229] Iteration 50900, loss = 0.879554\nI0608 06:03:57.572779 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0178618 (* 1 = 0.0178618 loss)\nI0608 06:03:57.572788 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0808112 (* 1 = 0.0808112 loss)\nI0608 06:03:57.572794 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279309 (* 1 = 0.279309 loss)\nI0608 06:03:57.572800 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0303868 (* 1 = 0.0303868 loss)\nI0608 06:03:57.572805 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.306703 (* 1 = 0.306703 loss)\nI0608 06:03:57.572811 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.030362 (* 1 = 0.030362 loss)\nI0608 06:03:57.572818 13573 sgd_solver.cpp:106] Iteration 50900, lr = 0.0001\nI0608 06:04:10.384618 13573 solver.cpp:229] Iteration 50920, loss = 0.479229\nI0608 06:04:10.384683 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.170523 (* 1 = 0.170523 loss)\nI0608 06:04:10.384692 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.148316 (* 1 = 0.148316 loss)\nI0608 06:04:10.384698 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0727871 (* 1 = 0.0727871 loss)\nI0608 06:04:10.384703 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0134929 (* 1 = 0.0134929 loss)\nI0608 06:04:10.384708 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.078562 (* 1 = 0.078562 loss)\nI0608 06:04:10.384714 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0134949 (* 1 = 0.0134949 loss)\nI0608 06:04:10.384721 13573 sgd_solver.cpp:106] Iteration 50920, lr = 0.0001\nI0608 06:04:23.378715 13573 solver.cpp:229] Iteration 50940, loss = 1.02281\nI0608 06:04:23.378787 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.276831 (* 1 = 0.276831 loss)\nI0608 06:04:23.378798 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.129241 (* 1 = 0.129241 loss)\nI0608 06:04:23.378805 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.538114 (* 1 = 0.538114 loss)\nI0608 06:04:23.378811 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0791632 (* 1 = 0.0791632 loss)\nI0608 06:04:23.378818 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.544495 (* 1 = 0.544495 loss)\nI0608 06:04:23.378824 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0791791 (* 1 = 0.0791791 loss)\nI0608 06:04:23.378831 13573 sgd_solver.cpp:106] Iteration 50940, lr = 0.0001\nI0608 06:04:36.312690 13573 solver.cpp:229] Iteration 50960, loss = 0.87624\nI0608 06:04:36.312755 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00273501 (* 1 = 0.00273501 loss)\nI0608 06:04:36.312764 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000997905 (* 1 = 0.000997905 loss)\nI0608 06:04:36.312770 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.214588 (* 1 = 0.214588 loss)\nI0608 06:04:36.312777 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0100985 (* 1 = 0.0100985 loss)\nI0608 06:04:36.312785 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217215 (* 1 = 0.217215 loss)\nI0608 06:04:36.312794 13573 solver.cpp:245]     Train net output 
#5: rpn_loss_bbox = 0.0101 (* 1 = 0.0101 loss)
I0608 06:04:36.312804 13573 sgd_solver.cpp:106] Iteration 50960, lr = 0.0001
I0608 06:04:49.150197 13573 solver.cpp:229] Iteration 50980, loss = 0.381245
I0608 06:04:49.150270 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0149547 (* 1 = 0.0149547 loss)
I0608 06:04:49.150280 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0231159 (* 1 = 0.0231159 loss)
I0608 06:04:49.150287 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0761304 (* 1 = 0.0761304 loss)
I0608 06:04:49.150293 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0279867 (* 1 = 0.0279867 loss)
I0608 06:04:49.150300 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0862485 (* 1 = 0.0862485 loss)
I0608 06:04:49.150305 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0279888 (* 1 = 0.0279888 loss)
I0608 06:04:49.150313 13573 sgd_solver.cpp:106] Iteration 50980, lr = 0.0001
speed: 0.645s / iter
...
I0608 06:14:31.491479 13573 solver.cpp:229] Iteration 51880, loss = 0.840693
I0608 06:14:31.491541 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.13346 (* 1 = 0.13346 loss)
I0608 06:14:31.491551 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0529367 (* 1 = 0.0529367 loss)
I0608 06:14:31.491559 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.2473 (* 1 = 0.2473 loss)
I0608 06:14:31.491564 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0117404 (* 1 = 0.0117404 loss)
I0608 06:14:31.491571 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.281338 (* 1 = 0.281338 loss)
I0608 06:14:31.491577 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117474 (* 1 = 0.0117474 loss)
I0608 06:14:31.491585 13573 sgd_solver.cpp:106] Iteration 51880, lr = 0.0001
I0608 06:14:44.608269 13573 solver.cpp:229] Iteration 51900, loss = 
0.848435\nI0608 06:14:44.608345 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0289191 (* 1 = 0.0289191 loss)\nI0608 06:14:44.608355 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0693561 (* 1 = 0.0693561 loss)\nI0608 06:14:44.608361 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.342721 (* 1 = 0.342721 loss)\nI0608 06:14:44.608367 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.2516 (* 1 = 0.2516 loss)\nI0608 06:14:44.608373 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.346185 (* 1 = 0.346185 loss)\nI0608 06:14:44.608379 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.251575 (* 1 = 0.251575 loss)\nI0608 06:14:44.608387 13573 sgd_solver.cpp:106] Iteration 51900, lr = 0.0001\nI0608 06:14:57.586208 13573 solver.cpp:229] Iteration 51920, loss = 0.587292\nI0608 06:14:57.586280 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00723271 (* 1 = 0.00723271 loss)\nI0608 06:14:57.586290 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00665515 (* 1 = 0.00665515 loss)\nI0608 06:14:57.586297 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.299523 (* 1 = 0.299523 loss)\nI0608 06:14:57.586302 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0172773 (* 1 = 0.0172773 loss)\nI0608 06:14:57.586308 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.300758 (* 1 = 0.300758 loss)\nI0608 06:14:57.586314 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.017255 (* 1 = 0.017255 loss)\nI0608 06:14:57.586321 13573 sgd_solver.cpp:106] Iteration 51920, lr = 0.0001\nI0608 06:15:10.317531 13573 solver.cpp:229] Iteration 51940, loss = 1.16449\nI0608 06:15:10.317597 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0829866 (* 1 = 0.0829866 loss)\nI0608 06:15:10.317607 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.124075 (* 1 = 0.124075 loss)\nI0608 06:15:10.317613 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.550798 (* 1 = 0.550798 loss)\nI0608 06:15:10.317620 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.140417 (* 1 = 0.140417 loss)\nI0608 06:15:10.317625 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.54686 (* 1 = 0.54686 loss)\nI0608 06:15:10.317631 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.140434 (* 1 = 0.140434 loss)\nI0608 06:15:10.317638 13573 sgd_solver.cpp:106] Iteration 51940, lr = 0.0001\nI0608 06:15:23.095872 13573 solver.cpp:229] Iteration 51960, loss = 0.645547\nI0608 06:15:23.095993 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.109662 (* 1 = 0.109662 loss)\nI0608 06:15:23.096002 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.13754 (* 1 = 0.13754 loss)\nI0608 06:15:23.096009 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.165789 (* 1 = 0.165789 loss)\nI0608 06:15:23.096014 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.011349 (* 1 = 0.011349 loss)\nI0608 06:15:23.096020 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.163583 (* 1 = 0.163583 loss)\nI0608 06:15:23.096026 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0113461 (* 1 = 0.0113461 loss)\nI0608 06:15:23.096034 13573 sgd_solver.cpp:106] Iteration 51960, lr = 0.0001\nI0608 06:15:36.079885 13573 solver.cpp:229] Iteration 51980, loss = 0.655865\nI0608 06:15:36.079959 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000197103 (* 1 = 0.000197103 loss)\nI0608 06:15:36.079973 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00759146 (* 1 = 0.00759146 loss)\nI0608 06:15:36.079984 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.182898 (* 1 = 0.182898 loss)\nI0608 06:15:36.079993 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110215 (* 1 = 0.0110215 loss)\nI0608 06:15:36.080003 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.162745 
(* 1 = 0.162745 loss)\nI0608 06:15:36.080010 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110261 (* 1 = 0.0110261 loss)\nI0608 06:15:36.080020 13573 sgd_solver.cpp:106] Iteration 51980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:15:48.794517 13573 solver.cpp:229] Iteration 52000, loss = 0.518023\nI0608 06:15:48.794581 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0217769 (* 1 = 0.0217769 loss)\nI0608 06:15:48.794591 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0347822 (* 1 = 0.0347822 loss)\nI0608 06:15:48.794597 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195602 (* 1 = 0.195602 loss)\nI0608 06:15:48.794603 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0453447 (* 1 = 0.0453447 loss)\nI0608 06:15:48.794608 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.193684 (* 1 = 0.193684 loss)\nI0608 06:15:48.794615 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0453465 (* 1 = 0.0453465 loss)\nI0608 06:15:48.794621 13573 sgd_solver.cpp:106] Iteration 52000, lr = 0.0001\nI0608 06:16:01.893419 13573 solver.cpp:229] Iteration 52020, loss = 0.889675\nI0608 06:16:01.893488 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00506696 (* 1 = 0.00506696 loss)\nI0608 06:16:01.893498 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00517293 (* 1 = 0.00517293 loss)\nI0608 06:16:01.893506 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.478293 (* 1 = 0.478293 loss)\nI0608 06:16:01.893512 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.157236 (* 1 = 0.157236 loss)\nI0608 06:16:01.893517 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.468014 (* 1 = 0.468014 loss)\nI0608 06:16:01.893522 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.157195 (* 1 = 0.157195 loss)\nI0608 06:16:01.893528 13573 sgd_solver.cpp:106] Iteration 52020, lr = 0.0001\nI0608 06:16:14.684074 13573 
solver.cpp:229] Iteration 52040, loss = 0.454978\nI0608 06:16:14.684190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.051055 (* 1 = 0.051055 loss)\nI0608 06:16:14.684201 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0466822 (* 1 = 0.0466822 loss)\nI0608 06:16:14.684206 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.252328 (* 1 = 0.252328 loss)\nI0608 06:16:14.684212 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0175851 (* 1 = 0.0175851 loss)\nI0608 06:16:14.684218 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.249699 (* 1 = 0.249699 loss)\nI0608 06:16:14.684223 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0175854 (* 1 = 0.0175854 loss)\nI0608 06:16:14.684231 13573 sgd_solver.cpp:106] Iteration 52040, lr = 0.0001\nI0608 06:16:27.541963 13573 solver.cpp:229] Iteration 52060, loss = 2.35527\nI0608 06:16:27.542035 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.161952 (* 1 = 0.161952 loss)\nI0608 06:16:27.542045 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.103412 (* 1 = 0.103412 loss)\nI0608 06:16:27.542052 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.0108 (* 1 = 1.0108 loss)\nI0608 06:16:27.542057 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.451259 (* 1 = 0.451259 loss)\nI0608 06:16:27.542062 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.00734 (* 1 = 1.00734 loss)\nI0608 06:16:27.542068 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.451296 (* 1 = 0.451296 loss)\nI0608 06:16:27.542074 13573 sgd_solver.cpp:106] Iteration 52060, lr = 0.0001\nI0608 06:16:40.392251 13573 solver.cpp:229] Iteration 52080, loss = 0.574772\nI0608 06:16:40.392323 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000280563 (* 1 = 0.000280563 loss)\nI0608 06:16:40.392333 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000196587 (* 1 = 0.000196587 loss)\nI0608 
06:16:40.392338 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162771 (* 1 = 0.162771 loss)\nI0608 06:16:40.392344 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00691117 (* 1 = 0.00691117 loss)\nI0608 06:16:40.392350 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.166576 (* 1 = 0.166576 loss)\nI0608 06:16:40.392355 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00691923 (* 1 = 0.00691923 loss)\nI0608 06:16:40.392362 13573 sgd_solver.cpp:106] Iteration 52080, lr = 0.0001\nI0608 06:16:53.198611 13573 solver.cpp:229] Iteration 52100, loss = 1.77533\nI0608 06:16:53.198681 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0571148 (* 1 = 0.0571148 loss)\nI0608 06:16:53.198690 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0276818 (* 1 = 0.0276818 loss)\nI0608 06:16:53.198696 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.310186 (* 1 = 0.310186 loss)\nI0608 06:16:53.198703 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0254929 (* 1 = 0.0254929 loss)\nI0608 06:16:53.198709 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.269502 (* 1 = 0.269502 loss)\nI0608 06:16:53.198714 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0254955 (* 1 = 0.0254955 loss)\nI0608 06:16:53.198720 13573 sgd_solver.cpp:106] Iteration 52100, lr = 0.0001\nI0608 06:17:06.013666 13573 solver.cpp:229] Iteration 52120, loss = 0.818294\nI0608 06:17:06.013741 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.114675 (* 1 = 0.114675 loss)\nI0608 06:17:06.013751 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0622074 (* 1 = 0.0622074 loss)\nI0608 06:17:06.013757 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0953019 (* 1 = 0.0953019 loss)\nI0608 06:17:06.013763 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0104914 (* 1 = 0.0104914 loss)\nI0608 06:17:06.013769 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0878644 (* 1 = 0.0878644 loss)\nI0608 06:17:06.013774 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0104871 (* 1 = 0.0104871 loss)\nI0608 06:17:06.013782 13573 sgd_solver.cpp:106] Iteration 52120, lr = 0.0001\nI0608 06:17:19.003410 13573 solver.cpp:229] Iteration 52140, loss = 0.453546\nI0608 06:17:19.003499 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000163144 (* 1 = 0.000163144 loss)\nI0608 06:17:19.003509 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000256938 (* 1 = 0.000256938 loss)\nI0608 06:17:19.003515 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.188172 (* 1 = 0.188172 loss)\nI0608 06:17:19.003521 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00915844 (* 1 = 0.00915844 loss)\nI0608 06:17:19.003527 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.189498 (* 1 = 0.189498 loss)\nI0608 06:17:19.003533 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00915942 (* 1 = 0.00915942 loss)\nI0608 06:17:19.003540 13573 sgd_solver.cpp:106] Iteration 52140, lr = 0.0001\nI0608 06:17:32.007513 13573 solver.cpp:229] Iteration 52160, loss = 0.572732\nI0608 06:17:32.007627 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0192558 (* 1 = 0.0192558 loss)\nI0608 06:17:32.007637 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.024483 (* 1 = 0.024483 loss)\nI0608 06:17:32.007643 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144815 (* 1 = 0.144815 loss)\nI0608 06:17:32.007649 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00532801 (* 1 = 0.00532801 loss)\nI0608 06:17:32.007655 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122821 (* 1 = 0.122821 loss)\nI0608 06:17:32.007661 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00533354 (* 1 = 0.00533354 loss)\nI0608 06:17:32.007669 13573 sgd_solver.cpp:106] 
Iteration 52160, lr = 0.0001\nI0608 06:17:45.015344 13573 solver.cpp:229] Iteration 52180, loss = 0.834441\nI0608 06:17:45.015436 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0511016 (* 1 = 0.0511016 loss)\nI0608 06:17:45.015446 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0580027 (* 1 = 0.0580027 loss)\nI0608 06:17:45.015453 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.384642 (* 1 = 0.384642 loss)\nI0608 06:17:45.015460 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.109331 (* 1 = 0.109331 loss)\nI0608 06:17:45.015465 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.359549 (* 1 = 0.359549 loss)\nI0608 06:17:45.015471 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.109355 (* 1 = 0.109355 loss)\nI0608 06:17:45.015480 13573 sgd_solver.cpp:106] Iteration 52180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:17:58.083942 13573 solver.cpp:229] Iteration 52200, loss = 0.48405\nI0608 06:17:58.084018 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0587581 (* 1 = 0.0587581 loss)\nI0608 06:17:58.084028 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0328747 (* 1 = 0.0328747 loss)\nI0608 06:17:58.084035 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0827224 (* 1 = 0.0827224 loss)\nI0608 06:17:58.084041 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00627091 (* 1 = 0.00627091 loss)\nI0608 06:17:58.084048 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0818338 (* 1 = 0.0818338 loss)\nI0608 06:17:58.084053 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00626721 (* 1 = 0.00626721 loss)\nI0608 06:17:58.084261 13573 sgd_solver.cpp:106] Iteration 52200, lr = 0.0001\nI0608 06:18:11.103787 13573 solver.cpp:229] Iteration 52220, loss = 1.19631\nI0608 06:18:11.103857 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.246369 (* 1 = 0.246369 loss)\nI0608 06:18:11.103868 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.127353 (* 1 = 0.127353 loss)\nI0608 06:18:11.103875 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.666434 (* 1 = 0.666434 loss)\nI0608 06:18:11.103881 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.123797 (* 1 = 0.123797 loss)\nI0608 06:18:11.103888 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.667222 (* 1 = 0.667222 loss)\nI0608 06:18:11.103893 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.123785 (* 1 = 0.123785 loss)\nI0608 06:18:11.103901 13573 sgd_solver.cpp:106] Iteration 52220, lr = 0.0001\nI0608 06:18:24.097002 13573 solver.cpp:229] Iteration 52240, loss = 0.675517\nI0608 06:18:24.097076 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0719788 (* 1 = 0.0719788 loss)\nI0608 06:18:24.097087 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0967698 (* 1 = 0.0967698 loss)\nI0608 06:18:24.097093 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.199204 (* 1 = 0.199204 loss)\nI0608 06:18:24.097100 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0679279 (* 1 = 0.0679279 loss)\nI0608 06:18:24.097105 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197587 (* 1 = 0.197587 loss)\nI0608 06:18:24.097111 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0679498 (* 1 = 0.0679498 loss)\nI0608 06:18:24.097118 13573 sgd_solver.cpp:106] Iteration 52240, lr = 0.0001\nI0608 06:18:36.940250 13573 solver.cpp:229] Iteration 52260, loss = 0.856432\nI0608 06:18:36.940328 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00380364 (* 1 = 0.00380364 loss)\nI0608 06:18:36.940338 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00069979 (* 1 = 0.00069979 loss)\nI0608 06:18:36.940345 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191858 (* 1 = 0.191858 loss)\nI0608 06:18:36.940351 13573 solver.cpp:245]     Train net output #3: 
p2_rpn_loss_bbox = 0.0104669 (* 1 = 0.0104669 loss)\nI0608 06:18:36.940356 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.170434 (* 1 = 0.170434 loss)\nI0608 06:18:36.940362 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0104663 (* 1 = 0.0104663 loss)\nI0608 06:18:36.940369 13573 sgd_solver.cpp:106] Iteration 52260, lr = 0.0001\nI0608 06:18:49.707093 13573 solver.cpp:229] Iteration 52280, loss = 0.398293\nI0608 06:18:49.707166 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0641589 (* 1 = 0.0641589 loss)\nI0608 06:18:49.707176 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0687232 (* 1 = 0.0687232 loss)\nI0608 06:18:49.707182 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101861 (* 1 = 0.101861 loss)\nI0608 06:18:49.707187 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0620075 (* 1 = 0.0620075 loss)\nI0608 06:18:49.707193 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.104785 (* 1 = 0.104785 loss)\nI0608 06:18:49.707200 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0620229 (* 1 = 0.0620229 loss)\nI0608 06:18:49.707206 13573 sgd_solver.cpp:106] Iteration 52280, lr = 0.0001\nI0608 06:19:02.643674 13573 solver.cpp:229] Iteration 52300, loss = 0.792633\nI0608 06:19:02.643739 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00853146 (* 1 = 0.00853146 loss)\nI0608 06:19:02.643749 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00424916 (* 1 = 0.00424916 loss)\nI0608 06:19:02.643755 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.310771 (* 1 = 0.310771 loss)\nI0608 06:19:02.643761 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0332065 (* 1 = 0.0332065 loss)\nI0608 06:19:02.643767 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.31557 (* 1 = 0.31557 loss)\nI0608 06:19:02.643774 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0331896 (* 1 = 
0.0331896 loss)\nI0608 06:19:02.643780 13573 sgd_solver.cpp:106] Iteration 52300, lr = 0.0001\nI0608 06:19:15.426745 13573 solver.cpp:229] Iteration 52320, loss = 0.427468\nI0608 06:19:15.426847 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0786221 (* 1 = 0.0786221 loss)\nI0608 06:19:15.426857 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0512838 (* 1 = 0.0512838 loss)\nI0608 06:19:15.426863 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.168197 (* 1 = 0.168197 loss)\nI0608 06:19:15.426869 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0311047 (* 1 = 0.0311047 loss)\nI0608 06:19:15.426875 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.169946 (* 1 = 0.169946 loss)\nI0608 06:19:15.426882 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0311065 (* 1 = 0.0311065 loss)\nI0608 06:19:15.426888 13573 sgd_solver.cpp:106] Iteration 52320, lr = 0.0001\nI0608 06:19:28.469327 13573 solver.cpp:229] Iteration 52340, loss = 0.548463\nI0608 06:19:28.469404 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.02291 (* 1 = 0.02291 loss)\nI0608 06:19:28.469414 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0434498 (* 1 = 0.0434498 loss)\nI0608 06:19:28.469421 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.155192 (* 1 = 0.155192 loss)\nI0608 06:19:28.469427 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0520338 (* 1 = 0.0520338 loss)\nI0608 06:19:28.469434 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154637 (* 1 = 0.154637 loss)\nI0608 06:19:28.469439 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0520422 (* 1 = 0.0520422 loss)\nI0608 06:19:28.469446 13573 sgd_solver.cpp:106] Iteration 52340, lr = 0.0001\nI0608 06:19:41.421573 13573 solver.cpp:229] Iteration 52360, loss = 0.743522\nI0608 06:19:41.421650 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00126575 (* 1 = 
0.00126575 loss)\nI0608 06:19:41.421661 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00464509 (* 1 = 0.00464509 loss)\nI0608 06:19:41.421667 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.354453 (* 1 = 0.354453 loss)\nI0608 06:19:41.421674 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0992143 (* 1 = 0.0992143 loss)\nI0608 06:19:41.421679 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.349112 (* 1 = 0.349112 loss)\nI0608 06:19:41.421685 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0992046 (* 1 = 0.0992046 loss)\nI0608 06:19:41.421694 13573 sgd_solver.cpp:106] Iteration 52360, lr = 0.0001\nI0608 06:19:54.323835 13573 solver.cpp:229] Iteration 52380, loss = 0.603737\nI0608 06:19:54.323904 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0534266 (* 1 = 0.0534266 loss)\nI0608 06:19:54.323915 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0636339 (* 1 = 0.0636339 loss)\nI0608 06:19:54.323920 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.203292 (* 1 = 0.203292 loss)\nI0608 06:19:54.323926 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.18286 (* 1 = 0.18286 loss)\nI0608 06:19:54.323931 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.205698 (* 1 = 0.205698 loss)\nI0608 06:19:54.323937 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.182882 (* 1 = 0.182882 loss)\nI0608 06:19:54.323945 13573 sgd_solver.cpp:106] Iteration 52380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:20:07.253649 13573 solver.cpp:229] Iteration 52400, loss = 1.34908\nI0608 06:20:07.253720 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.177924 (* 1 = 0.177924 loss)\nI0608 06:20:07.253731 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.133595 (* 1 = 0.133595 loss)\nI0608 06:20:07.253736 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.714699 (* 1 = 0.714699 loss)\nI0608 
06:20:07.253741 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.243934 (* 1 = 0.243934 loss)\nI0608 06:20:07.253747 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.71809 (* 1 = 0.71809 loss)\nI0608 06:20:07.253752 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.243971 (* 1 = 0.243971 loss)\nI0608 06:20:07.253759 13573 sgd_solver.cpp:106] Iteration 52400, lr = 0.0001\nI0608 06:20:20.229084 13573 solver.cpp:229] Iteration 52420, loss = 0.62347\nI0608 06:20:20.229156 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0413855 (* 1 = 0.0413855 loss)\nI0608 06:20:20.229166 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0466017 (* 1 = 0.0466017 loss)\nI0608 06:20:20.229172 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.152986 (* 1 = 0.152986 loss)\nI0608 06:20:20.229178 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0291781 (* 1 = 0.0291781 loss)\nI0608 06:20:20.229184 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.145487 (* 1 = 0.145487 loss)\nI0608 06:20:20.229189 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0291769 (* 1 = 0.0291769 loss)\nI0608 06:20:20.229197 13573 sgd_solver.cpp:106] Iteration 52420, lr = 0.0001\nI0608 06:20:33.083945 13573 solver.cpp:229] Iteration 52440, loss = 0.320369\nI0608 06:20:33.084020 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0749473 (* 1 = 0.0749473 loss)\nI0608 06:20:33.084030 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0174 (* 1 = 0.0174 loss)\nI0608 06:20:33.084036 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0968805 (* 1 = 0.0968805 loss)\nI0608 06:20:33.084043 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00849656 (* 1 = 0.00849656 loss)\nI0608 06:20:33.084048 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100376 (* 1 = 0.100376 loss)\nI0608 06:20:33.084055 13573 solver.cpp:245]     Train 
net output #5: rpn_loss_bbox = 0.00849325 (* 1 = 0.00849325 loss)\nI0608 06:20:33.084064 13573 sgd_solver.cpp:106] Iteration 52440, lr = 0.0001\nI0608 06:20:46.033535 13573 solver.cpp:229] Iteration 52460, loss = 0.530674\nI0608 06:20:46.033599 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.109113 (* 1 = 0.109113 loss)\nI0608 06:20:46.033610 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0572907 (* 1 = 0.0572907 loss)\nI0608 06:20:46.033617 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.144078 (* 1 = 0.144078 loss)\nI0608 06:20:46.033622 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269009 (* 1 = 0.0269009 loss)\nI0608 06:20:46.033628 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.160301 (* 1 = 0.160301 loss)\nI0608 06:20:46.033634 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269024 (* 1 = 0.0269024 loss)\nI0608 06:20:46.033640 13573 sgd_solver.cpp:106] Iteration 52460, lr = 0.0001\nI0608 06:20:59.146924 13573 solver.cpp:229] Iteration 52480, loss = 0.440615\nI0608 06:20:59.146996 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.101394 (* 1 = 0.101394 loss)\nI0608 06:20:59.147008 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.111209 (* 1 = 0.111209 loss)\nI0608 06:20:59.147014 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171934 (* 1 = 0.171934 loss)\nI0608 06:20:59.147020 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.01458 (* 1 = 0.01458 loss)\nI0608 06:20:59.147027 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.17336 (* 1 = 0.17336 loss)\nI0608 06:20:59.147032 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0145799 (* 1 = 0.0145799 loss)\nI0608 06:20:59.147039 13573 sgd_solver.cpp:106] Iteration 52480, lr = 0.0001\nI0608 06:21:11.854642 13573 solver.cpp:229] Iteration 52500, loss = 0.38863\nI0608 06:21:11.854765 13573 solver.cpp:245]     Train net output 
#0: loss_bbox = 0.0888742 (* 1 = 0.0888742 loss)\nI0608 06:21:11.854789 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0616897 (* 1 = 0.0616897 loss)\nI0608 06:21:11.854797 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0896629 (* 1 = 0.0896629 loss)\nI0608 06:21:11.854804 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.013774 (* 1 = 0.013774 loss)\nI0608 06:21:11.854809 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0910614 (* 1 = 0.0910614 loss)\nI0608 06:21:11.854815 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.013779 (* 1 = 0.013779 loss)\nI0608 06:21:11.854822 13573 sgd_solver.cpp:106] Iteration 52500, lr = 0.0001\nI0608 06:21:24.697890 13573 solver.cpp:229] Iteration 52520, loss = 0.748163\nI0608 06:21:24.697959 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.193537 (* 1 = 0.193537 loss)\nI0608 06:21:24.697969 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0676305 (* 1 = 0.0676305 loss)\nI0608 06:21:24.697976 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0917625 (* 1 = 0.0917625 loss)\nI0608 06:21:24.697983 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0184972 (* 1 = 0.0184972 loss)\nI0608 06:21:24.697988 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0949127 (* 1 = 0.0949127 loss)\nI0608 06:21:24.697993 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0184924 (* 1 = 0.0184924 loss)\nI0608 06:21:24.698000 13573 sgd_solver.cpp:106] Iteration 52520, lr = 0.0001\nI0608 06:21:37.662116 13573 solver.cpp:229] Iteration 52540, loss = 0.325406\nI0608 06:21:37.662190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0330081 (* 1 = 0.0330081 loss)\nI0608 06:21:37.662199 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0341781 (* 1 = 0.0341781 loss)\nI0608 06:21:37.662206 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0888497 (* 1 = 
0.0888497 loss)
[... training log continues in the same pattern from iteration 52540 through 53420: per-iteration `Train net output` entries for loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, and rpn_loss_bbox; lr = 0.0001 throughout; speed: 0.645s / iter; total loss fluctuates roughly between 0.33 and 1.68 ...]
I0608 06:31:03.826910 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.0948465 (* 1 = 0.0948465 loss)\nI0608 06:31:03.826915 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0233926 (* 1 = 0.0233926 loss)\nI0608 06:31:03.826920 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0883421 (* 1 = 0.0883421 loss)\nI0608 06:31:03.826926 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.023395 (* 1 = 0.023395 loss)\nI0608 06:31:03.826933 13573 sgd_solver.cpp:106] Iteration 53420, lr = 0.0001\nI0608 06:31:16.696122 13573 solver.cpp:229] Iteration 53440, loss = 0.67263\nI0608 06:31:16.696190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0429384 (* 1 = 0.0429384 loss)\nI0608 06:31:16.696199 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0601882 (* 1 = 0.0601882 loss)\nI0608 06:31:16.696205 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.305558 (* 1 = 0.305558 loss)\nI0608 06:31:16.696211 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0283056 (* 1 = 0.0283056 loss)\nI0608 06:31:16.696216 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.295356 (* 1 = 0.295356 loss)\nI0608 06:31:16.696223 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0282872 (* 1 = 0.0282872 loss)\nI0608 06:31:16.696229 13573 sgd_solver.cpp:106] Iteration 53440, lr = 0.0001\nI0608 06:31:29.569524 13573 solver.cpp:229] Iteration 53460, loss = 0.657535\nI0608 06:31:29.569586 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.15662 (* 1 = 0.15662 loss)\nI0608 06:31:29.569597 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.049833 (* 1 = 0.049833 loss)\nI0608 06:31:29.569603 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.300316 (* 1 = 0.300316 loss)\nI0608 06:31:29.569609 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0934897 (* 1 = 0.0934897 loss)\nI0608 06:31:29.569615 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.292698 (* 1 = 
0.292698 loss)\nI0608 06:31:29.569622 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0934722 (* 1 = 0.0934722 loss)\nI0608 06:31:29.569628 13573 sgd_solver.cpp:106] Iteration 53460, lr = 0.0001\nI0608 06:31:42.071218 13573 solver.cpp:229] Iteration 53480, loss = 0.661507\nI0608 06:31:42.071305 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0519764 (* 1 = 0.0519764 loss)\nI0608 06:31:42.071321 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.052636 (* 1 = 0.052636 loss)\nI0608 06:31:42.071328 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.206607 (* 1 = 0.206607 loss)\nI0608 06:31:42.071334 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0393273 (* 1 = 0.0393273 loss)\nI0608 06:31:42.071341 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.209214 (* 1 = 0.209214 loss)\nI0608 06:31:42.071346 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0393356 (* 1 = 0.0393356 loss)\nI0608 06:31:42.071353 13573 sgd_solver.cpp:106] Iteration 53480, lr = 0.0001\nI0608 06:31:54.778944 13573 solver.cpp:229] Iteration 53500, loss = 0.891583\nI0608 06:31:54.779011 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.236889 (* 1 = 0.236889 loss)\nI0608 06:31:54.779021 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0653401 (* 1 = 0.0653401 loss)\nI0608 06:31:54.779026 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.336977 (* 1 = 0.336977 loss)\nI0608 06:31:54.779032 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.107638 (* 1 = 0.107638 loss)\nI0608 06:31:54.779037 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.337996 (* 1 = 0.337996 loss)\nI0608 06:31:54.779042 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.107679 (* 1 = 0.107679 loss)\nI0608 06:31:54.779049 13573 sgd_solver.cpp:106] Iteration 53500, lr = 0.0001\nI0608 06:32:07.444758 13573 solver.cpp:229] Iteration 53520, loss = 
0.761629\nI0608 06:32:07.444826 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.046515 (* 1 = 0.046515 loss)\nI0608 06:32:07.444835 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0540581 (* 1 = 0.0540581 loss)\nI0608 06:32:07.444842 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.315372 (* 1 = 0.315372 loss)\nI0608 06:32:07.444847 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0302229 (* 1 = 0.0302229 loss)\nI0608 06:32:07.444854 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.332652 (* 1 = 0.332652 loss)\nI0608 06:32:07.444859 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0302361 (* 1 = 0.0302361 loss)\nI0608 06:32:07.444865 13573 sgd_solver.cpp:106] Iteration 53520, lr = 0.0001\nI0608 06:32:20.380931 13573 solver.cpp:229] Iteration 53540, loss = 0.846531\nI0608 06:32:20.381057 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.113298 (* 1 = 0.113298 loss)\nI0608 06:32:20.381067 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.067218 (* 1 = 0.067218 loss)\nI0608 06:32:20.381073 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.480811 (* 1 = 0.480811 loss)\nI0608 06:32:20.381079 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.126506 (* 1 = 0.126506 loss)\nI0608 06:32:20.381085 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.489096 (* 1 = 0.489096 loss)\nI0608 06:32:20.381091 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.12651 (* 1 = 0.12651 loss)\nI0608 06:32:20.381098 13573 sgd_solver.cpp:106] Iteration 53540, lr = 0.0001\nI0608 06:32:33.037441 13573 solver.cpp:229] Iteration 53560, loss = 0.611496\nI0608 06:32:33.037513 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0802045 (* 1 = 0.0802045 loss)\nI0608 06:32:33.037523 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0396648 (* 1 = 0.0396648 loss)\nI0608 06:32:33.037529 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.0776273 (* 1 = 0.0776273 loss)\nI0608 06:32:33.037535 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00813637 (* 1 = 0.00813637 loss)\nI0608 06:32:33.037541 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0791582 (* 1 = 0.0791582 loss)\nI0608 06:32:33.037547 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0081351 (* 1 = 0.0081351 loss)\nI0608 06:32:33.037554 13573 sgd_solver.cpp:106] Iteration 53560, lr = 0.0001\nI0608 06:32:46.116776 13573 solver.cpp:229] Iteration 53580, loss = 0.839948\nI0608 06:32:46.116847 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.047732 (* 1 = 0.047732 loss)\nI0608 06:32:46.116863 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0489922 (* 1 = 0.0489922 loss)\nI0608 06:32:46.116873 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.125922 (* 1 = 0.125922 loss)\nI0608 06:32:46.116881 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.206149 (* 1 = 0.206149 loss)\nI0608 06:32:46.116891 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.123491 (* 1 = 0.123491 loss)\nI0608 06:32:46.116899 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.205936 (* 1 = 0.205936 loss)\nI0608 06:32:46.116909 13573 sgd_solver.cpp:106] Iteration 53580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:32:58.841190 13573 solver.cpp:229] Iteration 53600, loss = 0.557016\nI0608 06:32:58.841260 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0570847 (* 1 = 0.0570847 loss)\nI0608 06:32:58.841270 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0763176 (* 1 = 0.0763176 loss)\nI0608 06:32:58.841276 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0992397 (* 1 = 0.0992397 loss)\nI0608 06:32:58.841282 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0111428 (* 1 = 0.0111428 loss)\nI0608 06:32:58.841289 13573 solver.cpp:245]     Train net 
output #4: rpn_cls_loss = 0.095868 (* 1 = 0.095868 loss)\nI0608 06:32:58.841295 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.011149 (* 1 = 0.011149 loss)\nI0608 06:32:58.841302 13573 sgd_solver.cpp:106] Iteration 53600, lr = 0.0001\nI0608 06:33:11.713536 13573 solver.cpp:229] Iteration 53620, loss = 1.6325\nI0608 06:33:11.713600 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.209258 (* 1 = 0.209258 loss)\nI0608 06:33:11.713611 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0616702 (* 1 = 0.0616702 loss)\nI0608 06:33:11.713618 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.770016 (* 1 = 0.770016 loss)\nI0608 06:33:11.713625 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.188247 (* 1 = 0.188247 loss)\nI0608 06:33:11.713631 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.781593 (* 1 = 0.781593 loss)\nI0608 06:33:11.713639 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.188223 (* 1 = 0.188223 loss)\nI0608 06:33:11.713645 13573 sgd_solver.cpp:106] Iteration 53620, lr = 0.0001\nI0608 06:33:24.597287 13573 solver.cpp:229] Iteration 53640, loss = 0.511466\nI0608 06:33:24.597364 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0591122 (* 1 = 0.0591122 loss)\nI0608 06:33:24.597374 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0687222 (* 1 = 0.0687222 loss)\nI0608 06:33:24.597381 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162091 (* 1 = 0.162091 loss)\nI0608 06:33:24.597388 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0507741 (* 1 = 0.0507741 loss)\nI0608 06:33:24.597393 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.154228 (* 1 = 0.154228 loss)\nI0608 06:33:24.597399 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0507678 (* 1 = 0.0507678 loss)\nI0608 06:33:24.597407 13573 sgd_solver.cpp:106] Iteration 53640, lr = 0.0001\nI0608 06:33:37.345903 13573 
solver.cpp:229] Iteration 53660, loss = 0.693323\nI0608 06:33:37.345983 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0128896 (* 1 = 0.0128896 loss)\nI0608 06:33:37.345993 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0129027 (* 1 = 0.0129027 loss)\nI0608 06:33:37.345999 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.378189 (* 1 = 0.378189 loss)\nI0608 06:33:37.346005 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0658358 (* 1 = 0.0658358 loss)\nI0608 06:33:37.346011 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.385112 (* 1 = 0.385112 loss)\nI0608 06:33:37.346017 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0658541 (* 1 = 0.0658541 loss)\nI0608 06:33:37.346024 13573 sgd_solver.cpp:106] Iteration 53660, lr = 0.0001\nI0608 06:33:50.111876 13573 solver.cpp:229] Iteration 53680, loss = 1.11413\nI0608 06:33:50.111948 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000239802 (* 1 = 0.000239802 loss)\nI0608 06:33:50.111958 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000109866 (* 1 = 0.000109866 loss)\nI0608 06:33:50.111964 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171545 (* 1 = 0.171545 loss)\nI0608 06:33:50.111970 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0034873 (* 1 = 0.0034873 loss)\nI0608 06:33:50.111976 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.182996 (* 1 = 0.182996 loss)\nI0608 06:33:50.111982 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00349007 (* 1 = 0.00349007 loss)\nI0608 06:33:50.111989 13573 sgd_solver.cpp:106] Iteration 53680, lr = 0.0001\nI0608 06:34:02.834728 13573 solver.cpp:229] Iteration 53700, loss = 0.427344\nI0608 06:34:02.834808 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.145752 (* 1 = 0.145752 loss)\nI0608 06:34:02.834818 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.126047 (* 1 = 0.126047 
loss)\nI0608 06:34:02.834825 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0759491 (* 1 = 0.0759491 loss)\nI0608 06:34:02.834830 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0481704 (* 1 = 0.0481704 loss)\nI0608 06:34:02.834836 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.085796 (* 1 = 0.085796 loss)\nI0608 06:34:02.834842 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0481639 (* 1 = 0.0481639 loss)\nI0608 06:34:02.834849 13573 sgd_solver.cpp:106] Iteration 53700, lr = 0.0001\nI0608 06:34:15.665225 13573 solver.cpp:229] Iteration 53720, loss = 0.570549\nI0608 06:34:15.665294 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0736855 (* 1 = 0.0736855 loss)\nI0608 06:34:15.665304 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0506981 (* 1 = 0.0506981 loss)\nI0608 06:34:15.665310 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121511 (* 1 = 0.121511 loss)\nI0608 06:34:15.665316 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.168054 (* 1 = 0.168054 loss)\nI0608 06:34:15.665321 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.127666 (* 1 = 0.127666 loss)\nI0608 06:34:15.665328 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.168047 (* 1 = 0.168047 loss)\nI0608 06:34:15.665334 13573 sgd_solver.cpp:106] Iteration 53720, lr = 0.0001\nI0608 06:34:28.490756 13573 solver.cpp:229] Iteration 53740, loss = 0.450672\nI0608 06:34:28.490844 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0828151 (* 1 = 0.0828151 loss)\nI0608 06:34:28.490854 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0767796 (* 1 = 0.0767796 loss)\nI0608 06:34:28.490860 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0901365 (* 1 = 0.0901365 loss)\nI0608 06:34:28.490866 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.018526 (* 1 = 0.018526 loss)\nI0608 06:34:28.490872 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0963087 (* 1 = 0.0963087 loss)\nI0608 06:34:28.490878 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.018527 (* 1 = 0.018527 loss)\nI0608 06:34:28.490885 13573 sgd_solver.cpp:106] Iteration 53740, lr = 0.0001\nI0608 06:34:41.206145 13573 solver.cpp:229] Iteration 53760, loss = 1.35961\nI0608 06:34:41.206214 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.085388 (* 1 = 0.085388 loss)\nI0608 06:34:41.206223 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0749363 (* 1 = 0.0749363 loss)\nI0608 06:34:41.206229 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.145264 (* 1 = 0.145264 loss)\nI0608 06:34:41.206235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.159958 (* 1 = 0.159958 loss)\nI0608 06:34:41.206241 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.143863 (* 1 = 0.143863 loss)\nI0608 06:34:41.206248 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.159948 (* 1 = 0.159948 loss)\nI0608 06:34:41.206254 13573 sgd_solver.cpp:106] Iteration 53760, lr = 0.0001\nI0608 06:34:54.202756 13573 solver.cpp:229] Iteration 53780, loss = 0.350814\nI0608 06:34:54.202836 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.063521 (* 1 = 0.063521 loss)\nI0608 06:34:54.202846 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0399877 (* 1 = 0.0399877 loss)\nI0608 06:34:54.202852 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0930206 (* 1 = 0.0930206 loss)\nI0608 06:34:54.202858 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0231239 (* 1 = 0.0231239 loss)\nI0608 06:34:54.202864 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0981554 (* 1 = 0.0981554 loss)\nI0608 06:34:54.202870 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0231224 (* 1 = 0.0231224 loss)\nI0608 06:34:54.202877 13573 sgd_solver.cpp:106] Iteration 53780, lr = 
0.0001\nspeed: 0.645s / iter\nI0608 06:35:07.006014 13573 solver.cpp:229] Iteration 53800, loss = 0.926349\nI0608 06:35:07.006098 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0564937 (* 1 = 0.0564937 loss)\nI0608 06:35:07.006109 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0557183 (* 1 = 0.0557183 loss)\nI0608 06:35:07.006116 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100251 (* 1 = 0.100251 loss)\nI0608 06:35:07.006122 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0326654 (* 1 = 0.0326654 loss)\nI0608 06:35:07.006129 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100134 (* 1 = 0.100134 loss)\nI0608 06:35:07.006134 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0326718 (* 1 = 0.0326718 loss)\nI0608 06:35:07.006142 13573 sgd_solver.cpp:106] Iteration 53800, lr = 0.0001\nI0608 06:35:19.494232 13573 solver.cpp:229] Iteration 53820, loss = 1.29048\nI0608 06:35:19.494352 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00435843 (* 1 = 0.00435843 loss)\nI0608 06:35:19.494362 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0172633 (* 1 = 0.0172633 loss)\nI0608 06:35:19.494369 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.376776 (* 1 = 0.376776 loss)\nI0608 06:35:19.494374 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0756933 (* 1 = 0.0756933 loss)\nI0608 06:35:19.494380 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.349693 (* 1 = 0.349693 loss)\nI0608 06:35:19.494385 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0756812 (* 1 = 0.0756812 loss)\nI0608 06:35:19.494392 13573 sgd_solver.cpp:106] Iteration 53820, lr = 0.0001\nI0608 06:35:32.441738 13573 solver.cpp:229] Iteration 53840, loss = 0.715191\nI0608 06:35:32.441800 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0651334 (* 1 = 0.0651334 loss)\nI0608 06:35:32.441809 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.0268452 (* 1 = 0.0268452 loss)\nI0608 06:35:32.441815 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.259429 (* 1 = 0.259429 loss)\nI0608 06:35:32.441820 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0164158 (* 1 = 0.0164158 loss)\nI0608 06:35:32.441826 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.279086 (* 1 = 0.279086 loss)\nI0608 06:35:32.441831 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0164141 (* 1 = 0.0164141 loss)\nI0608 06:35:32.441838 13573 sgd_solver.cpp:106] Iteration 53840, lr = 0.0001\nI0608 06:35:45.416723 13573 solver.cpp:229] Iteration 53860, loss = 1.12795\nI0608 06:35:45.416786 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0423539 (* 1 = 0.0423539 loss)\nI0608 06:35:45.416795 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0316598 (* 1 = 0.0316598 loss)\nI0608 06:35:45.416802 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.663097 (* 1 = 0.663097 loss)\nI0608 06:35:45.416808 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.249593 (* 1 = 0.249593 loss)\nI0608 06:35:45.416813 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.70307 (* 1 = 0.70307 loss)\nI0608 06:35:45.416820 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.249563 (* 1 = 0.249563 loss)\nI0608 06:35:45.416826 13573 sgd_solver.cpp:106] Iteration 53860, lr = 0.0001\nI0608 06:35:58.328166 13573 solver.cpp:229] Iteration 53880, loss = 0.828362\nI0608 06:35:58.328235 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0510294 (* 1 = 0.0510294 loss)\nI0608 06:35:58.328245 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0734728 (* 1 = 0.0734728 loss)\nI0608 06:35:58.328251 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.205057 (* 1 = 0.205057 loss)\nI0608 06:35:58.328256 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0124746 (* 1 = 
0.0124746 loss)\nI0608 06:35:58.328263 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.230923 (* 1 = 0.230923 loss)\nI0608 06:35:58.328268 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0124742 (* 1 = 0.0124742 loss)\nI0608 06:35:58.328274 13573 sgd_solver.cpp:106] Iteration 53880, lr = 0.0001\nI0608 06:36:11.191752 13573 solver.cpp:229] Iteration 53900, loss = 0.645774\nI0608 06:36:11.191835 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00464164 (* 1 = 0.00464164 loss)\nI0608 06:36:11.191845 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00145178 (* 1 = 0.00145178 loss)\nI0608 06:36:11.191851 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.286503 (* 1 = 0.286503 loss)\nI0608 06:36:11.191857 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0534923 (* 1 = 0.0534923 loss)\nI0608 06:36:11.191864 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.31475 (* 1 = 0.31475 loss)\nI0608 06:36:11.191869 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0534915 (* 1 = 0.0534915 loss)\nI0608 06:36:11.191875 13573 sgd_solver.cpp:106] Iteration 53900, lr = 0.0001\nI0608 06:36:24.174748 13573 solver.cpp:229] Iteration 53920, loss = 0.609613\nI0608 06:36:24.174820 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00163326 (* 1 = 0.00163326 loss)\nI0608 06:36:24.174830 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000523565 (* 1 = 0.000523565 loss)\nI0608 06:36:24.174836 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.275062 (* 1 = 0.275062 loss)\nI0608 06:36:24.174842 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0344292 (* 1 = 0.0344292 loss)\nI0608 06:36:24.174849 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.283128 (* 1 = 0.283128 loss)\nI0608 06:36:24.174854 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0344402 (* 1 = 0.0344402 loss)\nI0608 
06:36:24.174860 13573 sgd_solver.cpp:106] Iteration 53920, lr = 0.0001\nI0608 06:36:36.847117 13573 solver.cpp:229] Iteration 53940, loss = 0.424472\nI0608 06:36:36.847196 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000375912 (* 1 = 0.000375912 loss)\nI0608 06:36:36.847206 13573 solver.cpp:245]     Train net output #1: loss_cls = 5.13203e-05 (* 1 = 5.13203e-05 loss)\nI0608 06:36:36.847213 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.251271 (* 1 = 0.251271 loss)\nI0608 06:36:36.847218 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0102439 (* 1 = 0.0102439 loss)\nI0608 06:36:36.847224 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256271 (* 1 = 0.256271 loss)\nI0608 06:36:36.847230 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0102394 (* 1 = 0.0102394 loss)\nI0608 06:36:36.847237 13573 sgd_solver.cpp:106] Iteration 53940, lr = 0.0001\nI0608 06:36:49.638494 13573 solver.cpp:229] Iteration 53960, loss = 1.45596\nI0608 06:36:49.638561 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0714633 (* 1 = 0.0714633 loss)\nI0608 06:36:49.638569 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0824991 (* 1 = 0.0824991 loss)\nI0608 06:36:49.638576 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.753268 (* 1 = 0.753268 loss)\nI0608 06:36:49.638582 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.148684 (* 1 = 0.148684 loss)\nI0608 06:36:49.638588 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.726448 (* 1 = 0.726448 loss)\nI0608 06:36:49.638594 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.1487 (* 1 = 0.1487 loss)\nI0608 06:36:49.638602 13573 sgd_solver.cpp:106] Iteration 53960, lr = 0.0001\nI0608 06:37:02.703239 13573 solver.cpp:229] Iteration 53980, loss = 0.343663\nI0608 06:37:02.703308 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00125542 (* 1 = 0.00125542 loss)\nI0608 
06:37:02.703318 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000666553 (* 1 = 0.000666553 loss)\nI0608 06:37:02.703325 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.167314 (* 1 = 0.167314 loss)\nI0608 06:37:02.703330 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0104413 (* 1 = 0.0104413 loss)\nI0608 06:37:02.703336 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.172968 (* 1 = 0.172968 loss)\nI0608 06:37:02.703341 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0104342 (* 1 = 0.0104342 loss)\nI0608 06:37:02.703348 13573 sgd_solver.cpp:106] Iteration 53980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:37:15.486294 13573 solver.cpp:229] Iteration 54000, loss = 2.33722\nI0608 06:37:15.486364 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00195333 (* 1 = 0.00195333 loss)\nI0608 06:37:15.486374 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0344182 (* 1 = 0.0344182 loss)\nI0608 06:37:15.486380 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.67837 (* 1 = 0.67837 loss)\nI0608 06:37:15.486387 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.302602 (* 1 = 0.302602 loss)\nI0608 06:37:15.486392 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.634777 (* 1 = 0.634777 loss)\nI0608 06:37:15.486397 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.302564 (* 1 = 0.302564 loss)\nI0608 06:37:15.486403 13573 sgd_solver.cpp:106] Iteration 54000, lr = 0.0001\nI0608 06:37:28.146535 13573 solver.cpp:229] Iteration 54020, loss = 0.745768\nI0608 06:37:28.146600 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0493085 (* 1 = 0.0493085 loss)\nI0608 06:37:28.146610 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0424706 (* 1 = 0.0424706 loss)\nI0608 06:37:28.146615 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103098 (* 1 = 0.103098 loss)\nI0608 06:37:28.146621 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182263 (* 1 = 0.0182263 loss)\nI0608 06:37:28.146627 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0938744 (* 1 = 0.0938744 loss)\nI0608 06:37:28.146633 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182301 (* 1 = 0.0182301 loss)\nI0608 06:37:28.146641 13573 sgd_solver.cpp:106] Iteration 54020, lr = 0.0001\nI0608 06:37:40.724129 13573 solver.cpp:229] Iteration 54040, loss = 0.402387\nI0608 06:37:40.724190 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000268537 (* 1 = 0.000268537 loss)\nI0608 06:37:40.724200 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00685131 (* 1 = 0.00685131 loss)\nI0608 06:37:40.724205 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.220064 (* 1 = 0.220064 loss)\nI0608 06:37:40.724211 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00677069 (* 1 = 0.00677069 loss)\nI0608 06:37:40.724216 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.215659 (* 1 = 0.215659 loss)\nI0608 06:37:40.724222 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00677663 (* 1 = 0.00677663 loss)\nI0608 06:37:40.724230 13573 sgd_solver.cpp:106] Iteration 54040, lr = 0.0001\nI0608 06:37:53.425948 13573 solver.cpp:229] Iteration 54060, loss = 1.14569\nI0608 06:37:53.426017 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0771297 (* 1 = 0.0771297 loss)\nI0608 06:37:53.426025 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0371669 (* 1 = 0.0371669 loss)\nI0608 06:37:53.426031 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.14758 (* 1 = 0.14758 loss)\nI0608 06:37:53.426038 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0286882 (* 1 = 0.0286882 loss)\nI0608 06:37:53.426043 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.142195 (* 1 = 0.142195 loss)\nI0608 06:37:53.426048 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0286958 (* 1 = 0.0286958 loss)\nI0608 06:37:53.426054 13573 sgd_solver.cpp:106] Iteration 54060, lr = 0.0001\nI0608 06:38:06.551779 13573 solver.cpp:229] Iteration 54080, loss = 1.15184\nI0608 06:38:06.551844 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000522848 (* 1 = 0.000522848 loss)\nI0608 06:38:06.551856 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00583656 (* 1 = 0.00583656 loss)\nI0608 06:38:06.551862 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.157718 (* 1 = 0.157718 loss)\nI0608 06:38:06.551867 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0194314 (* 1 = 0.0194314 loss)\nI0608 06:38:06.551872 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216599 (* 1 = 0.216599 loss)\nI0608 06:38:06.551877 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0194307 (* 1 = 0.0194307 loss)\nI0608 06:38:06.551884 13573 sgd_solver.cpp:106] Iteration 54080, lr = 0.0001\nI0608 06:38:19.495364 13573 solver.cpp:229] Iteration 54100, loss = 0.758779\nI0608 06:38:19.495424 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0413171 (* 1 = 0.0413171 loss)\nI0608 06:38:19.495434 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0540639 (* 1 = 0.0540639 loss)\nI0608 06:38:19.495440 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.514999 (* 1 = 0.514999 loss)\nI0608 06:38:19.495445 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0816612 (* 1 = 0.0816612 loss)\nI0608 06:38:19.495451 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.496357 (* 1 = 0.496357 loss)\nI0608 06:38:19.495457 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0816476 (* 1 = 0.0816476 loss)\nI0608 06:38:19.495463 13573 sgd_solver.cpp:106] Iteration 54100, lr = 0.0001\nI0608 06:38:32.199188 13573 solver.cpp:229] Iteration 54120, loss = 0.95675\nI0608 06:38:32.199261 13573 solver.cpp:245]     Train 
net output #0: loss_bbox = 0.0330874 (* 1 = 0.0330874 loss)
I0608 06:38:32.199275 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0310256 (* 1 = 0.0310256 loss)
I0608 06:38:32.199282 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.352679 (* 1 = 0.352679 loss)
I0608 06:38:32.199288 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.10826 (* 1 = 0.10826 loss)
I0608 06:38:32.199295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.349276 (* 1 = 0.349276 loss)
I0608 06:38:32.199301 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.108274 (* 1 = 0.108274 loss)
I0608 06:38:32.199308 13573 sgd_solver.cpp:106] Iteration 54120, lr = 0.0001
[... identical solver output repeats every 20 iterations from 54140 through 54960: total loss fluctuates between roughly 0.32 and 1.42, lr = 0.0001 throughout, speed 0.645s / iter ...]
I0608 06:47:45.771780 13573 solver.cpp:229] Iteration 54980, loss = 0.527734
I0608 06:47:45.771849 13573 solver.cpp:245]     Train net output #0: loss_bbox = 9.26366e-05 (* 1 = 9.26366e-05 loss)
I0608 06:47:45.771859 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000121105 (* 1 = 0.000121105 loss)
I0608 06:47:45.771865 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.21399 (* 1 = 0.21399 loss)
I0608 06:47:45.771870 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00802834 (* 1 = 0.00802834 loss)
I0608 06:47:45.771877 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.200017 (* 1 = 0.200017 loss)
I0608 06:47:45.771883 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00802428 (* 1 = 0.00802428 loss)
I0608 06:47:45.771889 13573 sgd_solver.cpp:106] Iteration 54980, lr = 0.0001
speed: 
0.645s / iter\nI0608 06:47:58.705399 13573 solver.cpp:229] Iteration 55000, loss = 0.563788\nI0608 06:47:58.705474 13573 solver.cpp:245]     Train net output #0: loss_bbox = 5.24434e-05 (* 1 = 5.24434e-05 loss)\nI0608 06:47:58.705483 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0166487 (* 1 = 0.0166487 loss)\nI0608 06:47:58.705490 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22431 (* 1 = 0.22431 loss)\nI0608 06:47:58.705495 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0293823 (* 1 = 0.0293823 loss)\nI0608 06:47:58.705502 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239798 (* 1 = 0.239798 loss)\nI0608 06:47:58.705507 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0293805 (* 1 = 0.0293805 loss)\nI0608 06:47:58.705513 13573 sgd_solver.cpp:106] Iteration 55000, lr = 0.0001\nI0608 06:48:11.583564 13573 solver.cpp:229] Iteration 55020, loss = 0.351084\nI0608 06:48:11.583627 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00696942 (* 1 = 0.00696942 loss)\nI0608 06:48:11.583636 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00539009 (* 1 = 0.00539009 loss)\nI0608 06:48:11.583642 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.193072 (* 1 = 0.193072 loss)\nI0608 06:48:11.583648 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130527 (* 1 = 0.0130527 loss)\nI0608 06:48:11.583653 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.178086 (* 1 = 0.178086 loss)\nI0608 06:48:11.583659 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.013042 (* 1 = 0.013042 loss)\nI0608 06:48:11.583665 13573 sgd_solver.cpp:106] Iteration 55020, lr = 0.0001\nI0608 06:48:24.566332 13573 solver.cpp:229] Iteration 55040, loss = 0.910498\nI0608 06:48:24.566427 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0609461 (* 1 = 0.0609461 loss)\nI0608 06:48:24.566437 13573 solver.cpp:245]     Train net output 
#1: loss_cls = 0.0530384 (* 1 = 0.0530384 loss)\nI0608 06:48:24.566443 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.455514 (* 1 = 0.455514 loss)\nI0608 06:48:24.566449 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.116387 (* 1 = 0.116387 loss)\nI0608 06:48:24.566455 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.466688 (* 1 = 0.466688 loss)\nI0608 06:48:24.566462 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.116385 (* 1 = 0.116385 loss)\nI0608 06:48:24.566469 13573 sgd_solver.cpp:106] Iteration 55040, lr = 0.0001\nI0608 06:48:37.537883 13573 solver.cpp:229] Iteration 55060, loss = 2.0928\nI0608 06:48:37.537981 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.076409 (* 1 = 0.076409 loss)\nI0608 06:48:37.537991 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0949193 (* 1 = 0.0949193 loss)\nI0608 06:48:37.537997 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.498199 (* 1 = 0.498199 loss)\nI0608 06:48:37.538002 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0896734 (* 1 = 0.0896734 loss)\nI0608 06:48:37.538007 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.488761 (* 1 = 0.488761 loss)\nI0608 06:48:37.538013 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0896737 (* 1 = 0.0896737 loss)\nI0608 06:48:37.538020 13573 sgd_solver.cpp:106] Iteration 55060, lr = 0.0001\nI0608 06:48:50.450086 13573 solver.cpp:229] Iteration 55080, loss = 0.994897\nI0608 06:48:50.450153 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.147219 (* 1 = 0.147219 loss)\nI0608 06:48:50.450163 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0469827 (* 1 = 0.0469827 loss)\nI0608 06:48:50.450170 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.216156 (* 1 = 0.216156 loss)\nI0608 06:48:50.450176 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106532 (* 1 = 0.106532 
loss)\nI0608 06:48:50.450181 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.228139 (* 1 = 0.228139 loss)\nI0608 06:48:50.450186 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106534 (* 1 = 0.106534 loss)\nI0608 06:48:50.450193 13573 sgd_solver.cpp:106] Iteration 55080, lr = 0.0001\nI0608 06:49:03.347910 13573 solver.cpp:229] Iteration 55100, loss = 1.95334\nI0608 06:49:03.347978 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0700523 (* 1 = 0.0700523 loss)\nI0608 06:49:03.347987 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0946016 (* 1 = 0.0946016 loss)\nI0608 06:49:03.347993 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.933556 (* 1 = 0.933556 loss)\nI0608 06:49:03.348000 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.636553 (* 1 = 0.636553 loss)\nI0608 06:49:03.348004 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.927958 (* 1 = 0.927958 loss)\nI0608 06:49:03.348011 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.636539 (* 1 = 0.636539 loss)\nI0608 06:49:03.348017 13573 sgd_solver.cpp:106] Iteration 55100, lr = 0.0001\nI0608 06:49:16.141726 13573 solver.cpp:229] Iteration 55120, loss = 0.694918\nI0608 06:49:16.141798 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125886 (* 1 = 0.125886 loss)\nI0608 06:49:16.141808 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0424535 (* 1 = 0.0424535 loss)\nI0608 06:49:16.141814 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.425964 (* 1 = 0.425964 loss)\nI0608 06:49:16.141819 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0309917 (* 1 = 0.0309917 loss)\nI0608 06:49:16.141825 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.415336 (* 1 = 0.415336 loss)\nI0608 06:49:16.141831 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0309969 (* 1 = 0.0309969 loss)\nI0608 06:49:16.141839 13573 
sgd_solver.cpp:106] Iteration 55120, lr = 0.0001\nI0608 06:49:29.307436 13573 solver.cpp:229] Iteration 55140, loss = 0.514558\nI0608 06:49:29.307497 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127573 (* 1 = 0.127573 loss)\nI0608 06:49:29.307507 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0834746 (* 1 = 0.0834746 loss)\nI0608 06:49:29.307512 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0924738 (* 1 = 0.0924738 loss)\nI0608 06:49:29.307518 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0603843 (* 1 = 0.0603843 loss)\nI0608 06:49:29.307524 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0880955 (* 1 = 0.0880955 loss)\nI0608 06:49:29.307530 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0603777 (* 1 = 0.0603777 loss)\nI0608 06:49:29.307536 13573 sgd_solver.cpp:106] Iteration 55140, lr = 0.0001\nI0608 06:49:42.159901 13573 solver.cpp:229] Iteration 55160, loss = 0.371264\nI0608 06:49:42.159973 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0767036 (* 1 = 0.0767036 loss)\nI0608 06:49:42.159984 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0454313 (* 1 = 0.0454313 loss)\nI0608 06:49:42.159991 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.117599 (* 1 = 0.117599 loss)\nI0608 06:49:42.159996 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0689084 (* 1 = 0.0689084 loss)\nI0608 06:49:42.160002 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114127 (* 1 = 0.114127 loss)\nI0608 06:49:42.160008 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0688889 (* 1 = 0.0688889 loss)\nI0608 06:49:42.160017 13573 sgd_solver.cpp:106] Iteration 55160, lr = 0.0001\nI0608 06:49:55.044013 13573 solver.cpp:229] Iteration 55180, loss = 1.15932\nI0608 06:49:55.044086 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.166993 (* 1 = 0.166993 loss)\nI0608 06:49:55.044096 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.195755 (* 1 = 0.195755 loss)\nI0608 06:49:55.044102 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.488987 (* 1 = 0.488987 loss)\nI0608 06:49:55.044108 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.024059 (* 1 = 0.024059 loss)\nI0608 06:49:55.044114 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.475695 (* 1 = 0.475695 loss)\nI0608 06:49:55.044121 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0240283 (* 1 = 0.0240283 loss)\nI0608 06:49:55.044126 13573 sgd_solver.cpp:106] Iteration 55180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:50:07.850409 13573 solver.cpp:229] Iteration 55200, loss = 1.32164\nI0608 06:50:07.850473 13573 solver.cpp:245]     Train net output #0: loss_bbox = 8.67019e-05 (* 1 = 8.67019e-05 loss)\nI0608 06:50:07.850483 13573 solver.cpp:245]     Train net output #1: loss_cls = 3.18716e-06 (* 1 = 3.18716e-06 loss)\nI0608 06:50:07.850489 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.208669 (* 1 = 0.208669 loss)\nI0608 06:50:07.850495 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0053732 (* 1 = 0.0053732 loss)\nI0608 06:50:07.850502 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217089 (* 1 = 0.217089 loss)\nI0608 06:50:07.850527 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00537388 (* 1 = 0.00537388 loss)\nI0608 06:50:07.850534 13573 sgd_solver.cpp:106] Iteration 55200, lr = 0.0001\nI0608 06:50:20.598996 13573 solver.cpp:229] Iteration 55220, loss = 1.40124\nI0608 06:50:20.599062 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0726831 (* 1 = 0.0726831 loss)\nI0608 06:50:20.599071 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0522541 (* 1 = 0.0522541 loss)\nI0608 06:50:20.599078 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.179841 (* 1 = 0.179841 loss)\nI0608 06:50:20.599084 13573 solver.cpp:245]    
 Train net output #3: p2_rpn_loss_bbox = 0.032428 (* 1 = 0.032428 loss)\nI0608 06:50:20.599090 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.179416 (* 1 = 0.179416 loss)\nI0608 06:50:20.599097 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0324259 (* 1 = 0.0324259 loss)\nI0608 06:50:20.599104 13573 sgd_solver.cpp:106] Iteration 55220, lr = 0.0001\nI0608 06:50:33.537009 13573 solver.cpp:229] Iteration 55240, loss = 0.514953\nI0608 06:50:33.537081 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0951856 (* 1 = 0.0951856 loss)\nI0608 06:50:33.537091 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0546134 (* 1 = 0.0546134 loss)\nI0608 06:50:33.537096 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.139851 (* 1 = 0.139851 loss)\nI0608 06:50:33.537101 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0118673 (* 1 = 0.0118673 loss)\nI0608 06:50:33.537107 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139108 (* 1 = 0.139108 loss)\nI0608 06:50:33.537113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0118687 (* 1 = 0.0118687 loss)\nI0608 06:50:33.537119 13573 sgd_solver.cpp:106] Iteration 55240, lr = 0.0001\nI0608 06:50:46.546591 13573 solver.cpp:229] Iteration 55260, loss = 0.570959\nI0608 06:50:46.546656 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000218036 (* 1 = 0.000218036 loss)\nI0608 06:50:46.546666 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00139019 (* 1 = 0.00139019 loss)\nI0608 06:50:46.546672 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.295068 (* 1 = 0.295068 loss)\nI0608 06:50:46.546679 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00888843 (* 1 = 0.00888843 loss)\nI0608 06:50:46.546684 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.255026 (* 1 = 0.255026 loss)\nI0608 06:50:46.546690 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.00888558 (* 1 = 0.00888558 loss)\nI0608 06:50:46.546696 13573 sgd_solver.cpp:106] Iteration 55260, lr = 0.0001\nI0608 06:50:59.511428 13573 solver.cpp:229] Iteration 55280, loss = 0.835674\nI0608 06:50:59.511494 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.088109 (* 1 = 0.088109 loss)\nI0608 06:50:59.511504 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.115503 (* 1 = 0.115503 loss)\nI0608 06:50:59.511510 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.334569 (* 1 = 0.334569 loss)\nI0608 06:50:59.511517 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0334692 (* 1 = 0.0334692 loss)\nI0608 06:50:59.511523 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.328003 (* 1 = 0.328003 loss)\nI0608 06:50:59.511528 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0334808 (* 1 = 0.0334808 loss)\nI0608 06:50:59.511534 13573 sgd_solver.cpp:106] Iteration 55280, lr = 0.0001\nI0608 06:51:12.646330 13573 solver.cpp:229] Iteration 55300, loss = 1.42259\nI0608 06:51:12.646394 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0841012 (* 1 = 0.0841012 loss)\nI0608 06:51:12.646404 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.130113 (* 1 = 0.130113 loss)\nI0608 06:51:12.646410 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.519762 (* 1 = 0.519762 loss)\nI0608 06:51:12.646416 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.15635 (* 1 = 0.15635 loss)\nI0608 06:51:12.646422 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.522256 (* 1 = 0.522256 loss)\nI0608 06:51:12.646428 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.156309 (* 1 = 0.156309 loss)\nI0608 06:51:12.646435 13573 sgd_solver.cpp:106] Iteration 55300, lr = 0.0001\nI0608 06:51:25.534250 13573 solver.cpp:229] Iteration 55320, loss = 0.678293\nI0608 06:51:25.534322 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.00753646 (* 1 = 0.00753646 loss)\nI0608 06:51:25.534333 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00082493 (* 1 = 0.00082493 loss)\nI0608 06:51:25.534339 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.196138 (* 1 = 0.196138 loss)\nI0608 06:51:25.534345 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0300173 (* 1 = 0.0300173 loss)\nI0608 06:51:25.534351 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.248665 (* 1 = 0.248665 loss)\nI0608 06:51:25.534358 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0300033 (* 1 = 0.0300033 loss)\nI0608 06:51:25.534371 13573 sgd_solver.cpp:106] Iteration 55320, lr = 0.0001\nI0608 06:51:38.508325 13573 solver.cpp:229] Iteration 55340, loss = 0.584424\nI0608 06:51:38.508405 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00728058 (* 1 = 0.00728058 loss)\nI0608 06:51:38.508416 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0182683 (* 1 = 0.0182683 loss)\nI0608 06:51:38.508422 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.267965 (* 1 = 0.267965 loss)\nI0608 06:51:38.508427 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0278615 (* 1 = 0.0278615 loss)\nI0608 06:51:38.508433 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.226204 (* 1 = 0.226204 loss)\nI0608 06:51:38.508440 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0278599 (* 1 = 0.0278599 loss)\nI0608 06:51:38.508446 13573 sgd_solver.cpp:106] Iteration 55340, lr = 0.0001\nI0608 06:51:51.331104 13573 solver.cpp:229] Iteration 55360, loss = 0.450526\nI0608 06:51:51.331167 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0199964 (* 1 = 0.0199964 loss)\nI0608 06:51:51.331176 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0187395 (* 1 = 0.0187395 loss)\nI0608 06:51:51.331182 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0956968 (* 1 = 0.0956968 
loss)\nI0608 06:51:51.331188 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0641504 (* 1 = 0.0641504 loss)\nI0608 06:51:51.331194 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0931939 (* 1 = 0.0931939 loss)\nI0608 06:51:51.331202 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0641543 (* 1 = 0.0641543 loss)\nI0608 06:51:51.331208 13573 sgd_solver.cpp:106] Iteration 55360, lr = 0.0001\nI0608 06:52:04.252646 13573 solver.cpp:229] Iteration 55380, loss = 0.580899\nI0608 06:52:04.252717 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.214628 (* 1 = 0.214628 loss)\nI0608 06:52:04.252727 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0666496 (* 1 = 0.0666496 loss)\nI0608 06:52:04.252732 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.154039 (* 1 = 0.154039 loss)\nI0608 06:52:04.252738 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0257379 (* 1 = 0.0257379 loss)\nI0608 06:52:04.252743 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.15359 (* 1 = 0.15359 loss)\nI0608 06:52:04.252749 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0257365 (* 1 = 0.0257365 loss)\nI0608 06:52:04.252756 13573 sgd_solver.cpp:106] Iteration 55380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:52:16.994575 13573 solver.cpp:229] Iteration 55400, loss = 2.35528\nI0608 06:52:16.994642 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00145256 (* 1 = 0.00145256 loss)\nI0608 06:52:16.994652 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000710525 (* 1 = 0.000710525 loss)\nI0608 06:52:16.994658 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197563 (* 1 = 0.197563 loss)\nI0608 06:52:16.994664 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00174926 (* 1 = 0.00174926 loss)\nI0608 06:52:16.994670 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.203272 (* 1 = 0.203272 loss)\nI0608 
06:52:16.994676 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.001754 (* 1 = 0.001754 loss)\nI0608 06:52:16.994684 13573 sgd_solver.cpp:106] Iteration 55400, lr = 0.0001\nI0608 06:52:29.533172 13573 solver.cpp:229] Iteration 55420, loss = 0.773917\nI0608 06:52:29.533241 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0102667 (* 1 = 0.0102667 loss)\nI0608 06:52:29.533252 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00104877 (* 1 = 0.00104877 loss)\nI0608 06:52:29.533258 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.273449 (* 1 = 0.273449 loss)\nI0608 06:52:29.533264 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0273815 (* 1 = 0.0273815 loss)\nI0608 06:52:29.533270 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.272046 (* 1 = 0.272046 loss)\nI0608 06:52:29.533277 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.027388 (* 1 = 0.027388 loss)\nI0608 06:52:29.533284 13573 sgd_solver.cpp:106] Iteration 55420, lr = 0.0001\nI0608 06:52:42.395941 13573 solver.cpp:229] Iteration 55440, loss = 1.5269\nI0608 06:52:42.396000 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00356695 (* 1 = 0.00356695 loss)\nI0608 06:52:42.396011 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00316152 (* 1 = 0.00316152 loss)\nI0608 06:52:42.396018 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.901827 (* 1 = 0.901827 loss)\nI0608 06:52:42.396023 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.338132 (* 1 = 0.338132 loss)\nI0608 06:52:42.396028 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.890715 (* 1 = 0.890715 loss)\nI0608 06:52:42.396034 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.338208 (* 1 = 0.338208 loss)\nI0608 06:52:42.396040 13573 sgd_solver.cpp:106] Iteration 55440, lr = 0.0001\nI0608 06:52:55.452276 13573 solver.cpp:229] Iteration 55460, loss = 0.549344\nI0608 
06:52:55.452342 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0957273 (* 1 = 0.0957273 loss)\nI0608 06:52:55.452353 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0710861 (* 1 = 0.0710861 loss)\nI0608 06:52:55.452358 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228428 (* 1 = 0.228428 loss)\nI0608 06:52:55.452364 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0766261 (* 1 = 0.0766261 loss)\nI0608 06:52:55.452370 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217858 (* 1 = 0.217858 loss)\nI0608 06:52:55.452375 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0766213 (* 1 = 0.0766213 loss)\nI0608 06:52:55.452383 13573 sgd_solver.cpp:106] Iteration 55460, lr = 0.0001\nI0608 06:53:08.231571 13573 solver.cpp:229] Iteration 55480, loss = 0.478706\nI0608 06:53:08.231634 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.117502 (* 1 = 0.117502 loss)\nI0608 06:53:08.231644 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.079138 (* 1 = 0.079138 loss)\nI0608 06:53:08.231649 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.135774 (* 1 = 0.135774 loss)\nI0608 06:53:08.231655 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00993312 (* 1 = 0.00993312 loss)\nI0608 06:53:08.231660 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139122 (* 1 = 0.139122 loss)\nI0608 06:53:08.231667 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00993308 (* 1 = 0.00993308 loss)\nI0608 06:53:08.231673 13573 sgd_solver.cpp:106] Iteration 55480, lr = 0.0001\nI0608 06:53:21.148061 13573 solver.cpp:229] Iteration 55500, loss = 0.851856\nI0608 06:53:21.148134 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0036165 (* 1 = 0.0036165 loss)\nI0608 06:53:21.148144 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0162591 (* 1 = 0.0162591 loss)\nI0608 06:53:21.148149 13573 solver.cpp:245]     Train 
net output #2: p2_rpn_cls_loss = 0.296993 (* 1 = 0.296993 loss)\nI0608 06:53:21.148154 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182685 (* 1 = 0.0182685 loss)\nI0608 06:53:21.148159 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.322303 (* 1 = 0.322303 loss)\nI0608 06:53:21.148165 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182781 (* 1 = 0.0182781 loss)\nI0608 06:53:21.148171 13573 sgd_solver.cpp:106] Iteration 55500, lr = 0.0001\nI0608 06:53:34.047457 13573 solver.cpp:229] Iteration 55520, loss = 1.10676\nI0608 06:53:34.047523 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00555885 (* 1 = 0.00555885 loss)\nI0608 06:53:34.047533 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.023648 (* 1 = 0.023648 loss)\nI0608 06:53:34.047538 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.402169 (* 1 = 0.402169 loss)\nI0608 06:53:34.047544 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0953877 (* 1 = 0.0953877 loss)\nI0608 06:53:34.047549 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.429678 (* 1 = 0.429678 loss)\nI0608 06:53:34.047554 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0953713 (* 1 = 0.0953713 loss)\nI0608 06:53:34.047561 13573 sgd_solver.cpp:106] Iteration 55520, lr = 0.0001\nI0608 06:53:46.894831 13573 solver.cpp:229] Iteration 55540, loss = 0.266639\nI0608 06:53:46.894896 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0596734 (* 1 = 0.0596734 loss)\nI0608 06:53:46.894906 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0635692 (* 1 = 0.0635692 loss)\nI0608 06:53:46.894912 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.089208 (* 1 = 0.089208 loss)\nI0608 06:53:46.894917 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00705 (* 1 = 0.00705 loss)\nI0608 06:53:46.894923 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0911934 
(* 1 = 0.0911934 loss)\nI0608 06:53:46.894928 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0070498 (* 1 = 0.0070498 loss)\nI0608 06:53:46.894934 13573 sgd_solver.cpp:106] Iteration 55540, lr = 0.0001\nI0608 06:53:59.805799 13573 solver.cpp:229] Iteration 55560, loss = 0.594864\nI0608 06:53:59.805871 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0384178 (* 1 = 0.0384178 loss)\nI0608 06:53:59.805881 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0240257 (* 1 = 0.0240257 loss)\nI0608 06:53:59.805887 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102432 (* 1 = 0.102432 loss)\nI0608 06:53:59.805893 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0651772 (* 1 = 0.0651772 loss)\nI0608 06:53:59.805899 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0972414 (* 1 = 0.0972414 loss)\nI0608 06:53:59.805904 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0651741 (* 1 = 0.0651741 loss)\nI0608 06:53:59.805912 13573 sgd_solver.cpp:106] Iteration 55560, lr = 0.0001\nI0608 06:54:12.863291 13573 solver.cpp:229] Iteration 55580, loss = 0.631642\nI0608 06:54:12.863356 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00154152 (* 1 = 0.00154152 loss)\nI0608 06:54:12.863366 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000303272 (* 1 = 0.000303272 loss)\nI0608 06:54:12.863373 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.356716 (* 1 = 0.356716 loss)\nI0608 06:54:12.863379 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0318612 (* 1 = 0.0318612 loss)\nI0608 06:54:12.863384 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.346768 (* 1 = 0.346768 loss)\nI0608 06:54:12.863390 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0318745 (* 1 = 0.0318745 loss)\nI0608 06:54:12.863397 13573 sgd_solver.cpp:106] Iteration 55580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 06:54:25.812885 
13573 solver.cpp:229] Iteration 55600, loss = 0.449637\nI0608 06:54:25.812959 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0865437 (* 1 = 0.0865437 loss)\nI0608 06:54:25.812969 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.105673 (* 1 = 0.105673 loss)\nI0608 06:54:25.812976 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112938 (* 1 = 0.112938 loss)\nI0608 06:54:25.812981 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0542117 (* 1 = 0.0542117 loss)\nI0608 06:54:25.812988 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.11775 (* 1 = 0.11775 loss)\nI0608 06:54:25.812994 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.054209 (* 1 = 0.054209 loss)\nI0608 06:54:25.813001 13573 sgd_solver.cpp:106] Iteration 55600, lr = 0.0001\nI0608 06:54:38.503808 13573 solver.cpp:229] Iteration 55620, loss = 0.715683\nI0608 06:54:38.503926 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0477339 (* 1 = 0.0477339 loss)\nI0608 06:54:38.503937 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0644156 (* 1 = 0.0644156 loss)\nI0608 06:54:38.503942 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162344 (* 1 = 0.162344 loss)\nI0608 06:54:38.503948 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0638788 (* 1 = 0.0638788 loss)\nI0608 06:54:38.503954 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.162461 (* 1 = 0.162461 loss)\nI0608 06:54:38.503960 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0638654 (* 1 = 0.0638654 loss)\nI0608 06:54:38.503968 13573 sgd_solver.cpp:106] Iteration 55620, lr = 0.0001\nI0608 06:54:51.420289 13573 solver.cpp:229] Iteration 55640, loss = 0.749489\nI0608 06:54:51.420364 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0604526 (* 1 = 0.0604526 loss)\nI0608 06:54:51.420373 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0484111 (* 1 = 0.0484111 
loss)\nI0608 06:54:51.420379 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.285932 (* 1 = 0.285932 loss)\nI0608 06:54:51.420385 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0246989 (* 1 = 0.0246989 loss)\nI0608 06:54:51.420392 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.29802 (* 1 = 0.29802 loss)\nI0608 06:54:51.420397 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247027 (* 1 = 0.0247027 loss)\nI0608 06:54:51.420404 13573 sgd_solver.cpp:106] Iteration 55640, lr = 0.0001\nI0608 06:55:04.347987 13573 solver.cpp:229] Iteration 55660, loss = 0.459562\nI0608 06:55:04.348060 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.10382 (* 1 = 0.10382 loss)\nI0608 06:55:04.348074 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0877014 (* 1 = 0.0877014 loss)\nI0608 06:55:04.348081 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183131 (* 1 = 0.183131 loss)\nI0608 06:55:04.348088 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0207373 (* 1 = 0.0207373 loss)\nI0608 06:55:04.348093 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174248 (* 1 = 0.174248 loss)\nI0608 06:55:04.348099 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0207353 (* 1 = 0.0207353 loss)\nI0608 06:55:04.348107 13573 sgd_solver.cpp:106] Iteration 55660, lr = 0.0001\nI0608 06:55:17.245877 13573 solver.cpp:229] Iteration 55680, loss = 0.374254\nI0608 06:55:17.245945 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.020303 (* 1 = 0.020303 loss)\nI0608 06:55:17.245954 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0106092 (* 1 = 0.0106092 loss)\nI0608 06:55:17.245960 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.102354 (* 1 = 0.102354 loss)\nI0608 06:55:17.245966 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0654493 (* 1 = 0.0654493 loss)\nI0608 06:55:17.245971 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0970926 (* 1 = 0.0970926 loss)
I0608 06:55:17.245977 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0654431 (* 1 = 0.0654431 loss)
I0608 06:55:17.245985 13573 sgd_solver.cpp:106] Iteration 55680, lr = 0.0001
I0608 06:55:30.032721 13573 solver.cpp:229] Iteration 55700, loss = 0.500605
I0608 06:55:30.032799 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00971355 (* 1 = 0.00971355 loss)
I0608 06:55:30.032809 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0149126 (* 1 = 0.0149126 loss)
I0608 06:55:30.032815 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.164656 (* 1 = 0.164656 loss)
I0608 06:55:30.032821 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116189 (* 1 = 0.0116189 loss)
I0608 06:55:30.032827 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.159026 (* 1 = 0.159026 loss)
I0608 06:55:30.032833 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.011608 (* 1 = 0.011608 loss)
I0608 06:55:30.032841 13573 sgd_solver.cpp:106] Iteration 55700, lr = 0.0001
I0608 06:55:43.081897 13573 solver.cpp:229] Iteration 55720, loss = 1.10086
I0608 06:55:43.081967 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000465042 (* 1 = 0.000465042 loss)
I0608 06:55:43.081977 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0016929 (* 1 = 0.0016929 loss)
I0608 06:55:43.081984 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.520355 (* 1 = 0.520355 loss)
I0608 06:55:43.081990 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.146021 (* 1 = 0.146021 loss)
I0608 06:55:43.081995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.522047 (* 1 = 0.522047 loss)
I0608 06:55:43.082001 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.146037 (* 1 = 0.146037 loss)
I0608 06:55:43.082008 13573 sgd_solver.cpp:106] Iteration 55720, lr = 0.0001
I0608 06:55:55.913709 13573 solver.cpp:229] Iteration 55740, loss = 0.799651
I0608 06:55:55.913817 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00032276 (* 1 = 0.00032276 loss)
I0608 06:55:55.913827 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000262223 (* 1 = 0.000262223 loss)
I0608 06:55:55.913835 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207058 (* 1 = 0.207058 loss)
I0608 06:55:55.913841 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0307557 (* 1 = 0.0307557 loss)
I0608 06:55:55.913846 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176365 (* 1 = 0.176365 loss)
I0608 06:55:55.913851 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0307542 (* 1 = 0.0307542 loss)
I0608 06:55:55.913858 13573 sgd_solver.cpp:106] Iteration 55740, lr = 0.0001
I0608 06:56:08.780910 13573 solver.cpp:229] Iteration 55760, loss = 0.444022
I0608 06:56:08.780977 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00142034 (* 1 = 0.00142034 loss)
I0608 06:56:08.780987 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00142208 (* 1 = 0.00142208 loss)
I0608 06:56:08.780993 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.278364 (* 1 = 0.278364 loss)
I0608 06:56:08.780999 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00747181 (* 1 = 0.00747181 loss)
I0608 06:56:08.781005 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.287429 (* 1 = 0.287429 loss)
I0608 06:56:08.781011 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00748178 (* 1 = 0.00748178 loss)
I0608 06:56:08.781018 13573 sgd_solver.cpp:106] Iteration 55760, lr = 0.0001
I0608 06:56:21.669749 13573 solver.cpp:229] Iteration 55780, loss = 0.571333
I0608 06:56:21.669844 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0725298 (* 1 = 0.0725298 loss)
I0608 06:56:21.669854 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0572356 (* 1 = 0.0572356 loss)
I0608 06:56:21.669860 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.204418 (* 1 = 0.204418 loss)
I0608 06:56:21.669867 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0138767 (* 1 = 0.0138767 loss)
I0608 06:56:21.669872 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.202496 (* 1 = 0.202496 loss)
I0608 06:56:21.669878 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0138767 (* 1 = 0.0138767 loss)
I0608 06:56:21.669885 13573 sgd_solver.cpp:106] Iteration 55780, lr = 0.0001
speed: 0.645s / iter
I0608 06:56:34.514480 13573 solver.cpp:229] Iteration 55800, loss = 0.289083
I0608 06:56:34.514551 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0471266 (* 1 = 0.0471266 loss)
I0608 06:56:34.514561 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0175761 (* 1 = 0.0175761 loss)
I0608 06:56:34.514567 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0910498 (* 1 = 0.0910498 loss)
I0608 06:56:34.514574 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00871625 (* 1 = 0.00871625 loss)
I0608 06:56:34.514580 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0904352 (* 1 = 0.0904352 loss)
I0608 06:56:34.514585 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0087164 (* 1 = 0.0087164 loss)
I0608 06:56:34.514592 13573 sgd_solver.cpp:106] Iteration 55800, lr = 0.0001
I0608 06:56:47.479884 13573 solver.cpp:229] Iteration 55820, loss = 0.783939
I0608 06:56:47.479956 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0413668 (* 1 = 0.0413668 loss)
I0608 06:56:47.479965 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0499949 (* 1 = 0.0499949 loss)
I0608 06:56:47.479971 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.147777 (* 1 = 0.147777 loss)
I0608 06:56:47.479977 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.029613 (* 1 = 0.029613 loss)
I0608 06:56:47.479984 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.144806 (* 1 = 0.144806 loss)
I0608 06:56:47.479990 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0296086 (* 1 = 0.0296086 loss)
I0608 06:56:47.479996 13573 sgd_solver.cpp:106] Iteration 55820, lr = 0.0001
I0608 06:57:00.225850 13573 solver.cpp:229] Iteration 55840, loss = 0.560103
I0608 06:57:00.225919 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127547 (* 1 = 0.127547 loss)
I0608 06:57:00.225930 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0786612 (* 1 = 0.0786612 loss)
I0608 06:57:00.225936 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183148 (* 1 = 0.183148 loss)
I0608 06:57:00.225942 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0422672 (* 1 = 0.0422672 loss)
I0608 06:57:00.225949 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187099 (* 1 = 0.187099 loss)
I0608 06:57:00.225953 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.042265 (* 1 = 0.042265 loss)
I0608 06:57:00.225960 13573 sgd_solver.cpp:106] Iteration 55840, lr = 0.0001
I0608 06:57:13.001008 13573 solver.cpp:229] Iteration 55860, loss = 0.541308
I0608 06:57:13.001073 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.091696 (* 1 = 0.091696 loss)
I0608 06:57:13.001083 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0417964 (* 1 = 0.0417964 loss)
I0608 06:57:13.001090 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.270788 (* 1 = 0.270788 loss)
I0608 06:57:13.001096 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0175564 (* 1 = 0.0175564 loss)
I0608 06:57:13.001101 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.251818 (* 1 = 0.251818 loss)
I0608 06:57:13.001106 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0175532 (* 1 = 0.0175532 loss)
I0608 06:57:13.001112 13573 sgd_solver.cpp:106] Iteration 55860, lr = 0.0001
I0608 06:57:25.967703 13573 solver.cpp:229] Iteration 55880, loss = 0.638834
I0608 06:57:25.967792 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.22051 (* 1 = 0.22051 loss)
I0608 06:57:25.967803 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118554 (* 1 = 0.118554 loss)
I0608 06:57:25.967809 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.113751 (* 1 = 0.113751 loss)
I0608 06:57:25.967816 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0435305 (* 1 = 0.0435305 loss)
I0608 06:57:25.967821 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.108328 (* 1 = 0.108328 loss)
I0608 06:57:25.967828 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0435308 (* 1 = 0.0435308 loss)
I0608 06:57:25.967834 13573 sgd_solver.cpp:106] Iteration 55880, lr = 0.0001
I0608 06:57:38.946050 13573 solver.cpp:229] Iteration 55900, loss = 0.940561
I0608 06:57:38.946122 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121015 (* 1 = 0.121015 loss)
I0608 06:57:38.946131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.167999 (* 1 = 0.167999 loss)
I0608 06:57:38.946138 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.516085 (* 1 = 0.516085 loss)
I0608 06:57:38.946144 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0359779 (* 1 = 0.0359779 loss)
I0608 06:57:38.946151 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.521106 (* 1 = 0.521106 loss)
I0608 06:57:38.946156 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0359802 (* 1 = 0.0359802 loss)
I0608 06:57:38.946162 13573 sgd_solver.cpp:106] Iteration 55900, lr = 0.0001
I0608 06:57:52.017802 13573 solver.cpp:229] Iteration 55920, loss = 1.19321
I0608 06:57:52.017910 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0381072 (* 1 = 0.0381072 loss)
I0608 06:57:52.017920 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0192917 (* 1 = 0.0192917 loss)
I0608 06:57:52.017927 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.278731 (* 1 = 0.278731 loss)
I0608 06:57:52.017933 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0170702 (* 1 = 0.0170702 loss)
I0608 06:57:52.017940 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.26876 (* 1 = 0.26876 loss)
I0608 06:57:52.017946 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0170785 (* 1 = 0.0170785 loss)
I0608 06:57:52.017953 13573 sgd_solver.cpp:106] Iteration 55920, lr = 0.0001
I0608 06:58:04.849632 13573 solver.cpp:229] Iteration 55940, loss = 1.06638
I0608 06:58:04.849699 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0558853 (* 1 = 0.0558853 loss)
I0608 06:58:04.849709 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0897735 (* 1 = 0.0897735 loss)
I0608 06:58:04.849716 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.548573 (* 1 = 0.548573 loss)
I0608 06:58:04.849722 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0687461 (* 1 = 0.0687461 loss)
I0608 06:58:04.849728 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.537087 (* 1 = 0.537087 loss)
I0608 06:58:04.849735 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0687069 (* 1 = 0.0687069 loss)
I0608 06:58:04.849742 13573 sgd_solver.cpp:106] Iteration 55940, lr = 0.0001
I0608 06:58:17.877176 13573 solver.cpp:229] Iteration 55960, loss = 1.05654
I0608 06:58:17.877245 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.311303 (* 1 = 0.311303 loss)
I0608 06:58:17.877256 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.139526 (* 1 = 0.139526 loss)
I0608 06:58:17.877261 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.237573 (* 1 = 0.237573 loss)
I0608 06:58:17.877269 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0603714 (* 1 = 0.0603714 loss)
I0608 06:58:17.877274 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.242199 (* 1 = 0.242199 loss)
I0608 06:58:17.877280 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0603799 (* 1 = 0.0603799 loss)
I0608 06:58:17.877287 13573 sgd_solver.cpp:106] Iteration 55960, lr = 0.0001
I0608 06:58:30.677309 13573 solver.cpp:229] Iteration 55980, loss = 0.574119
I0608 06:58:30.677379 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0430861 (* 1 = 0.0430861 loss)
I0608 06:58:30.677389 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0182087 (* 1 = 0.0182087 loss)
I0608 06:58:30.677395 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128232 (* 1 = 0.128232 loss)
I0608 06:58:30.677400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.197555 (* 1 = 0.197555 loss)
I0608 06:58:30.677407 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128862 (* 1 = 0.128862 loss)
I0608 06:58:30.677412 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.197855 (* 1 = 0.197855 loss)
I0608 06:58:30.677419 13573 sgd_solver.cpp:106] Iteration 55980, lr = 0.0001
speed: 0.645s / iter
I0608 06:58:43.573618 13573 solver.cpp:229] Iteration 56000, loss = 0.689392
I0608 06:58:43.573691 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0592692 (* 1 = 0.0592692 loss)
I0608 06:58:43.573700 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0281423 (* 1 = 0.0281423 loss)
I0608 06:58:43.573706 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.365448 (* 1 = 0.365448 loss)
I0608 06:58:43.573712 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0438712 (* 1 = 0.0438712 loss)
I0608 06:58:43.573719 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.387854 (* 1 = 0.387854 loss)
I0608 06:58:43.573724 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0438454 (* 1 = 0.0438454 loss)
I0608 06:58:43.573730 13573 sgd_solver.cpp:106] Iteration 56000, lr = 0.0001
I0608 06:58:56.539196 13573 solver.cpp:229] Iteration 56020, loss = 0.409243
I0608 06:58:56.539263 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0349839 (* 1 = 0.0349839 loss)
I0608 06:58:56.539273 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0224264 (* 1 = 0.0224264 loss)
I0608 06:58:56.539279 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0966735 (* 1 = 0.0966735 loss)
I0608 06:58:56.539285 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0650255 (* 1 = 0.0650255 loss)
I0608 06:58:56.539291 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105992 (* 1 = 0.105992 loss)
I0608 06:58:56.539296 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.065021 (* 1 = 0.065021 loss)
I0608 06:58:56.539304 13573 sgd_solver.cpp:106] Iteration 56020, lr = 0.0001
I0608 06:59:09.357617 13573 solver.cpp:229] Iteration 56040, loss = 0.396896
I0608 06:59:09.357725 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0374798 (* 1 = 0.0374798 loss)
I0608 06:59:09.357735 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0684724 (* 1 = 0.0684724 loss)
I0608 06:59:09.357741 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0905991 (* 1 = 0.0905991 loss)
I0608 06:59:09.357748 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.050873 (* 1 = 0.050873 loss)
I0608 06:59:09.357753 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0962559 (* 1 = 0.0962559 loss)
I0608 06:59:09.357758 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0508772 (* 1 = 0.0508772 loss)
I0608 06:59:09.357765 13573 sgd_solver.cpp:106] Iteration 56040, lr = 0.0001
I0608 06:59:22.064524 13573 solver.cpp:229] Iteration 56060, loss = 0.478466
I0608 06:59:22.064589 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0402486 (* 1 = 0.0402486 loss)
I0608 06:59:22.064597 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0393885 (* 1 = 0.0393885 loss)
I0608 06:59:22.064604 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195661 (* 1 = 0.195661 loss)
I0608 06:59:22.064610 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0737354 (* 1 = 0.0737354 loss)
I0608 06:59:22.064615 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.195311 (* 1 = 0.195311 loss)
I0608 06:59:22.064620 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0737315 (* 1 = 0.0737315 loss)
I0608 06:59:22.064627 13573 sgd_solver.cpp:106] Iteration 56060, lr = 0.0001
I0608 06:59:35.113209 13573 solver.cpp:229] Iteration 56080, loss = 0.448013
I0608 06:59:35.113283 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0093999 (* 1 = 0.0093999 loss)
I0608 06:59:35.113294 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00636483 (* 1 = 0.00636483 loss)
I0608 06:59:35.113301 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105891 (* 1 = 0.105891 loss)
I0608 06:59:35.113306 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.184542 (* 1 = 0.184542 loss)
I0608 06:59:35.113312 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103464 (* 1 = 0.103464 loss)
I0608 06:59:35.113317 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.184546 (* 1 = 0.184546 loss)
I0608 06:59:35.113323 13573 sgd_solver.cpp:106] Iteration 56080, lr = 0.0001
I0608 06:59:47.746331 13573 solver.cpp:229] Iteration 56100, loss = 0.48468
I0608 06:59:47.746404 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.147583 (* 1 = 0.147583 loss)
I0608 06:59:47.746413 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0603646 (* 1 = 0.0603646 loss)
I0608 06:59:47.746419 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.213278 (* 1 = 0.213278 loss)
I0608 06:59:47.746425 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0178965 (* 1 = 0.0178965 loss)
I0608 06:59:47.746430 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.206818 (* 1 = 0.206818 loss)
I0608 06:59:47.746436 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0178898 (* 1 = 0.0178898 loss)
I0608 06:59:47.746443 13573 sgd_solver.cpp:106] Iteration 56100, lr = 0.0001
I0608 07:00:00.703786 13573 solver.cpp:229] Iteration 56120, loss = 0.401463
I0608 07:00:00.703852 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0742897 (* 1 = 0.0742897 loss)
I0608 07:00:00.703861 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0618775 (* 1 = 0.0618775 loss)
I0608 07:00:00.703867 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.077126 (* 1 = 0.077126 loss)
I0608 07:00:00.703873 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00951463 (* 1 = 0.00951463 loss)
I0608 07:00:00.703878 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0736103 (* 1 = 0.0736103 loss)
I0608 07:00:00.703883 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00951335 (* 1 = 0.00951335 loss)
I0608 07:00:00.703891 13573 sgd_solver.cpp:106] Iteration 56120, lr = 0.0001
I0608 07:00:13.751760 13573 solver.cpp:229] Iteration 56140, loss = 1.02218
I0608 07:00:13.751822 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.170676 (* 1 = 0.170676 loss)
I0608 07:00:13.751832 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.187941 (* 1 = 0.187941 loss)
I0608 07:00:13.751837 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.396861 (* 1 = 0.396861 loss)
I0608 07:00:13.751843 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0347117 (* 1 = 0.0347117 loss)
I0608 07:00:13.751849 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.401652 (* 1 = 0.401652 loss)
I0608 07:00:13.751854 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0347211 (* 1 = 0.0347211 loss)
I0608 07:00:13.751862 13573 sgd_solver.cpp:106] Iteration 56140, lr = 0.0001
I0608 07:00:26.592073 13573 solver.cpp:229] Iteration 56160, loss = 0.661352
I0608 07:00:26.592149 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.11647 (* 1 = 0.11647 loss)
I0608 07:00:26.592159 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0773514 (* 1 = 0.0773514 loss)
I0608 07:00:26.592164 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.223159 (* 1 = 0.223159 loss)
I0608 07:00:26.592170 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0996282 (* 1 = 0.0996282 loss)
I0608 07:00:26.592176 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.212777 (* 1 = 0.212777 loss)
I0608 07:00:26.592182 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0996279 (* 1 = 0.0996279 loss)
I0608 07:00:26.592190 13573 sgd_solver.cpp:106] Iteration 56160, lr = 0.0001
I0608 07:00:39.530612 13573 solver.cpp:229] Iteration 56180, loss = 1.15914
I0608 07:00:39.530683 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.124221 (* 1 = 0.124221 loss)
I0608 07:00:39.530692 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0976981 (* 1 = 0.0976981 loss)
I0608 07:00:39.530699 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.399341 (* 1 = 0.399341 loss)
I0608 07:00:39.530705 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0763792 (* 1 = 0.0763792 loss)
I0608 07:00:39.530711 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.433406 (* 1 = 0.433406 loss)
I0608 07:00:39.530717 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0764009 (* 1 = 0.0764009 loss)
I0608 07:00:39.530725 13573 sgd_solver.cpp:106] Iteration 56180, lr = 0.0001
speed: 0.645s / iter
I0608 07:00:52.437327 13573 solver.cpp:229] Iteration 56200, loss = 1.28975
I0608 07:00:52.437391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.114074 (* 1 = 0.114074 loss)
I0608 07:00:52.437400 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114216 (* 1 = 0.114216 loss)
I0608 07:00:52.437407 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.19439 (* 1 = 0.19439 loss)
I0608 07:00:52.437412 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0275438 (* 1 = 0.0275438 loss)
I0608 07:00:52.437418 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187252 (* 1 = 0.187252 loss)
I0608 07:00:52.437423 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0275425 (* 1 = 0.0275425 loss)
I0608 07:00:52.437430 13573 sgd_solver.cpp:106] Iteration 56200, lr = 0.0001
I0608 07:01:05.458789 13573 solver.cpp:229] Iteration 56220, loss = 0.624821
I0608 07:01:05.458847 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0324747 (* 1 = 0.0324747 loss)
I0608 07:01:05.458856 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.04377 (* 1 = 0.04377 loss)
I0608 07:01:05.458863 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.390946 (* 1 = 0.390946 loss)
I0608 07:01:05.458869 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0571537 (* 1 = 0.0571537 loss)
I0608 07:01:05.458874 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.42514 (* 1 = 0.42514 loss)
I0608 07:01:05.458880 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0571412 (* 1 = 0.0571412 loss)
I0608 07:01:05.458889 13573 sgd_solver.cpp:106] Iteration 56220, lr = 0.0001
I0608 07:01:18.334501 13573 solver.cpp:229] Iteration 56240, loss = 0.970652
I0608 07:01:18.334563 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0200789 (* 1 = 0.0200789 loss)
I0608 07:01:18.334573 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0176154 (* 1 = 0.0176154 loss)
I0608 07:01:18.334579 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.671274 (* 1 = 0.671274 loss)
I0608 07:01:18.334585 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.124447 (* 1 = 0.124447 loss)
I0608 07:01:18.334590 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.653365 (* 1 = 0.653365 loss)
I0608 07:01:18.334596 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.124478 (* 1 = 0.124478 loss)
I0608 07:01:18.334602 13573 sgd_solver.cpp:106] Iteration 56240, lr = 0.0001
I0608 07:01:31.491318 13573 solver.cpp:229] Iteration 56260, loss = 0.35021
I0608 07:01:31.491377 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0478437 (* 1 = 0.0478437 loss)
I0608 07:01:31.491387 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0364411 (* 1 = 0.0364411 loss)
I0608 07:01:31.491394 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0850529 (* 1 = 0.0850529 loss)
I0608 07:01:31.491400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0220539 (* 1 = 0.0220539 loss)
I0608 07:01:31.491405 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.088794 (* 1 = 0.088794 loss)
I0608 07:01:31.491410 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0220578 (* 1 = 0.0220578 loss)
I0608 07:01:31.491416 13573 sgd_solver.cpp:106] Iteration 56260, lr = 0.0001
I0608 07:01:44.314388 13573 solver.cpp:229] Iteration 56280, loss = 0.530663
I0608 07:01:44.314463 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0780714 (* 1 = 0.0780714 loss)
I0608 07:01:44.314472 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0699261 (* 1 = 0.0699261 loss)
I0608 07:01:44.314478 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.248754 (* 1 = 0.248754 loss)
I0608 07:01:44.314484 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0296804 (* 1 = 0.0296804 loss)
I0608 07:01:44.314489 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.253993 (* 1 = 0.253993 loss)
I0608 07:01:44.314496 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0296774 (* 1 = 0.0296774 loss)
I0608 07:01:44.314502 13573 sgd_solver.cpp:106] Iteration 56280, lr = 0.0001
I0608 07:01:57.403837 13573 solver.cpp:229] Iteration 56300, loss = 0.606862
I0608 07:01:57.403903 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.088382 (* 1 = 0.088382 loss)
I0608 07:01:57.403913 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0959374 (* 1 = 0.0959374 loss)
I0608 07:01:57.403919 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256626 (* 1 = 0.256626 loss)
I0608 07:01:57.403925 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106603 (* 1 = 0.106603 loss)
I0608 07:01:57.403931 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.256267 (* 1 = 0.256267 loss)
I0608 07:01:57.403937 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106614 (* 1 = 0.106614 loss)
I0608 07:01:57.403944 13573 sgd_solver.cpp:106] Iteration 56300, lr = 0.0001
I0608 07:02:10.235821 13573 solver.cpp:229] Iteration 56320, loss = 0.574689
I0608 07:02:10.235885 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0763128 (* 1 = 0.0763128 loss)
I0608 07:02:10.235895 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0389484 (* 1 = 0.0389484 loss)
I0608 07:02:10.235903 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0739923 (* 1 = 0.0739923 loss)
I0608 07:02:10.235908 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0272421 (* 1 = 0.0272421 loss)
I0608 07:02:10.235913 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0716408 (* 1 = 0.0716408 loss)
I0608 07:02:10.235919 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0272406 (* 1 = 0.0272406 loss)
I0608 07:02:10.235927 13573 sgd_solver.cpp:106] Iteration 56320, lr = 0.0001
I0608 07:02:23.067689 13573 solver.cpp:229] Iteration 56340, loss = 1.21265
I0608 07:02:23.067761 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0456373 (* 1 = 0.0456373 loss)
I0608 07:02:23.067771 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0821433 (* 1 = 0.0821433 loss)
I0608 07:02:23.067785 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.275152 (* 1 = 0.275152 loss)
I0608 07:02:23.067791 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0618824 (* 1 = 0.0618824 loss)
I0608 07:02:23.067797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.289279 (* 1 = 0.289279 loss)
I0608 07:02:23.067804 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0618974 (* 1 = 0.0618974 loss)
I0608 07:02:23.067812 13573 sgd_solver.cpp:106] Iteration 56340, lr = 0.0001
I0608 07:02:36.061808 13573 solver.cpp:229] Iteration 56360, loss = 0.495983
I0608 07:02:36.061875 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0221312 (* 1 = 0.0221312 loss)
I0608 07:02:36.061885 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0300399 (* 1 = 0.0300399 loss)
I0608 07:02:36.061892 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.282097 (* 1 = 0.282097 loss)
I0608 07:02:36.061897 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0453976 (* 1 = 0.0453976 loss)
I0608 07:02:36.061904 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.300885 (* 1 = 0.300885 loss)
I0608 07:02:36.061910 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0454109 (* 1 = 0.0454109 loss)
I0608 07:02:36.061918 13573 sgd_solver.cpp:106] Iteration 56360, lr = 0.0001
I0608 07:02:48.919870 13573 solver.cpp:229] Iteration 56380, loss = 1.05263
I0608 07:02:48.919982 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.162444 (* 1 = 0.162444 loss)
I0608 07:02:48.919992 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104418 (* 1 = 0.104418 loss)
I0608 07:02:48.919998 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.222182 (* 1 = 0.222182 loss)
I0608 07:02:48.920004 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0349099 (* 1 = 0.0349099 loss)
I0608 07:02:48.920011 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.22131 (* 1 = 0.22131 loss)
I0608 07:02:48.920017 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0349119 (* 1 = 0.0349119 loss)
I0608 07:02:48.920023 13573 sgd_solver.cpp:106] Iteration 56380, lr = 0.0001
speed: 0.645s / iter
I0608 07:03:01.964570 13573 solver.cpp:229] Iteration 56400, loss = 0.383948
I0608 07:03:01.964640 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0376286 (* 1 = 0.0376286 loss)
I0608 07:03:01.964650 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0155266 (* 1 = 0.0155266 loss)
I0608 07:03:01.964656 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.224683 (* 1 = 0.224683 loss)
I0608 07:03:01.964661 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0212754 (* 1 = 0.0212754 loss)
I0608 07:03:01.964668 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.225128 (* 1 = 0.225128 loss)
I0608 07:03:01.964673 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.021273 (* 1 = 0.021273 loss)
I0608 07:03:01.964681 13573 sgd_solver.cpp:106] Iteration 56400, lr = 0.0001
I0608 07:03:14.647933 13573 solver.cpp:229] Iteration 56420, loss = 0.498927
I0608 07:03:14.647994 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.119429 (* 1 = 0.119429 loss)
I0608 07:03:14.648003 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0624191 (* 1 = 0.0624191 loss)
I0608 07:03:14.648010 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0854525 (* 1 = 0.0854525 loss)
I0608 07:03:14.648015 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0770323 (* 1 = 0.0770323 loss)
I0608 07:03:14.648021 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0875008 (* 1 = 0.0875008 loss)
I0608 07:03:14.648027 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0770356 (* 1 = 0.0770356 loss)
I0608 07:03:14.648035 13573 sgd_solver.cpp:106] Iteration 56420, lr = 0.0001
I0608 07:03:27.717056 13573 solver.cpp:229] Iteration 56440, loss = 0.599513
I0608 07:03:27.717133 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0721117 (* 1 = 0.0721117 loss)
I0608 07:03:27.717142 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0797814 (* 1 = 0.0797814 loss)
I0608 07:03:27.717149 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.126099 (* 1 = 0.126099 loss)
I0608 07:03:27.717154 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.045337 (* 1 = 0.045337 loss)
I0608 07:03:27.717160 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128112 (* 1 = 0.128112 loss)
I0608 07:03:27.717166 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0453363 (* 1 = 0.0453363 loss)
I0608 07:03:27.717173 13573 sgd_solver.cpp:106] Iteration 56440, lr = 0.0001
I0608 07:03:40.499559 13573 solver.cpp:229] Iteration 56460, loss = 0.568729
I0608 07:03:40.499629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0271571 (* 1 = 0.0271571 loss)
I0608 07:03:40.499639 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0105007 (* 1 = 0.0105007 loss)
I0608 07:03:40.499644 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.266561 (* 1 = 0.266561 loss)
I0608 07:03:40.499651 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0182658 (* 1 = 0.0182658 loss)
I0608 07:03:40.499657 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.306894 (* 1 = 0.306894 loss)
I0608 07:03:40.499663 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0182676 (* 1 = 0.0182676 loss)
I0608 07:03:40.499671 13573 sgd_solver.cpp:106] Iteration 56460, lr = 0.0001
I0608 07:03:53.322435 13573 solver.cpp:229] Iteration 56480, loss = 0.366769
I0608 07:03:53.322506 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00357488 (* 1 = 0.00357488 loss)
I0608 07:03:53.322515 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0115058 (* 1 = 0.0115058 loss)
I0608 07:03:53.322521 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.206434 (* 1 = 0.206434 loss)
I0608 07:03:53.322527 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0110265 (* 1 = 0.0110265 loss)
I0608 07:03:53.322533 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.195564 (* 1 = 0.195564 loss)
I0608 07:03:53.322538 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0110253 (* 1 = 0.0110253 loss)
I0608 07:03:53.322546 13573 sgd_solver.cpp:106] Iteration 56480, lr = 0.0001
I0608 07:04:06.495084 13573 solver.cpp:229] Iteration 56500, loss = 0.422211
I0608 07:04:06.495196 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0232247 (* 1 = 0.0232247 loss)
I0608 07:04:06.495206 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0433754 (* 1 = 0.0433754 loss)
I0608 07:04:06.495213 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.107628 (* 1 = 0.107628 loss)
I0608 07:04:06.495218 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0728841 (* 1 = 0.0728841 loss)
I0608 07:04:06.495224 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107712 (* 1 = 0.107712 loss)
I0608 07:04:06.495229 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0728922 (* 1 = 0.0728922 loss)
I0608 07:04:06.495236 13573 sgd_solver.cpp:106] Iteration 56500, lr = 0.0001
I0608 07:04:19.306329 13573 solver.cpp:229] Iteration 56520, loss = 0.684594
I0608 07:04:19.306401 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0709946 (* 1 = 0.0709946 loss)
I0608 07:04:19.306411 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0753235 (* 1 = 0.0753235 loss)
I0608 07:04:19.306417 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180258 (* 1 = 0.180258 loss)
I0608 07:04:19.306423 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0442431 (* 1 = 0.0442431 loss)
I0608 07:04:19.306429 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183626 (* 1 = 0.183626 loss)
I0608 07:04:19.306435 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0442441 (* 1 = 0.0442441 loss)
I0608 07:04:19.306443 13573 sgd_solver.cpp:106] Iteration 56520, lr = 0.0001
I0608 07:04:32.305160 13573 solver.cpp:229] Iteration 56540, loss = 0.627359
I0608 07:04:32.305227 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00324559 (* 1 = 0.00324559 loss)
I0608 07:04:32.305236 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00401541 (* 1 = 0.00401541 loss)
I0608 07:04:32.305243 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.21017 (* 1 = 0.21017 loss)
I0608 07:04:32.305249 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0129608 (* 1 = 0.0129608 loss)
I0608 07:04:32.305254 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213453 (* 1 = 0.213453 loss)
I0608 07:04:32.305260 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0129593 (* 1 = 0.0129593 loss)
I0608 07:04:32.305269 13573 sgd_solver.cpp:106] Iteration 56540, lr = 0.0001
I0608 07:04:44.961760 13573 solver.cpp:229] Iteration 56560, loss = 0.864466
I0608 07:04:44.961828 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0618508 (* 1 = 0.0618508 loss)
I0608 07:04:44.961838 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0826163 (* 1 = 0.0826163 loss)
I0608 07:04:44.961844 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.515004 (* 1 = 0.515004 loss)
I0608 07:04:44.961849 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0829891 (* 1 = 0.0829891 loss)
I0608 07:04:44.961855 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.55904 (* 1 = 0.55904 loss)
I0608 07:04:44.961861 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.082976 (* 1 = 0.082976 loss)
I0608 07:04:44.961869 13573 sgd_solver.cpp:106] Iteration 56560, lr = 0.0001
I0608 07:04:57.662331 13573 solver.cpp:229] Iteration 56580, loss = 0.423677
I0608 07:04:57.662396 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0204224 (* 1 = 0.0204224 loss)
I0608 07:04:57.662405 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0370659 (* 1 = 0.0370659 loss)
I0608 07:04:57.662411 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0809072 (* 1 = 0.0809072 loss)
I0608 07:04:57.662417 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0301123 (* 1 = 0.0301123 loss)
I0608 07:04:57.662423 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.07859 (* 1 = 0.07859 loss)
I0608 07:04:57.662430 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0301091 (* 1 = 0.0301091 loss)
I0608 07:04:57.662437 13573 sgd_solver.cpp:106] Iteration 56580, lr = 0.0001
speed: 0.645s / iter
I0608 07:05:10.479518 13573 solver.cpp:229] Iteration 56600, loss = 0.296922
I0608 07:05:10.479653 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0363078 (* 1 = 0.0363078 loss)
I0608 07:05:10.479662 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.045318 (* 1 = 0.045318 loss)
I0608 07:05:10.479670 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0947377 (* 1 = 0.0947377 loss)
I0608 07:05:10.479676 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0211473 (* 1 = 0.0211473 loss)
I0608 07:05:10.479681 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0984688 (* 1 = 0.0984688 loss)
I0608 07:05:10.479686 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0211434 (* 1 = 0.0211434 loss)
I0608 
07:05:10.479699 13573 sgd_solver.cpp:106] Iteration 56600, lr = 0.0001\nI0608 07:05:23.586470 13573 solver.cpp:229] Iteration 56620, loss = 1.12356\nI0608 07:05:23.586540 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000729005 (* 1 = 0.000729005 loss)\nI0608 07:05:23.586549 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000258461 (* 1 = 0.000258461 loss)\nI0608 07:05:23.586556 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.477167 (* 1 = 0.477167 loss)\nI0608 07:05:23.586562 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.149011 (* 1 = 0.149011 loss)\nI0608 07:05:23.586567 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.449926 (* 1 = 0.449926 loss)\nI0608 07:05:23.586573 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.149026 (* 1 = 0.149026 loss)\nI0608 07:05:23.586580 13573 sgd_solver.cpp:106] Iteration 56620, lr = 0.0001\nI0608 07:05:36.664325 13573 solver.cpp:229] Iteration 56640, loss = 0.397472\nI0608 07:05:36.664412 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00286536 (* 1 = 0.00286536 loss)\nI0608 07:05:36.664422 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0106717 (* 1 = 0.0106717 loss)\nI0608 07:05:36.664427 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.17325 (* 1 = 0.17325 loss)\nI0608 07:05:36.664433 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0202967 (* 1 = 0.0202967 loss)\nI0608 07:05:36.664439 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.230702 (* 1 = 0.230702 loss)\nI0608 07:05:36.664445 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0202979 (* 1 = 0.0202979 loss)\nI0608 07:05:36.664451 13573 sgd_solver.cpp:106] Iteration 56640, lr = 0.0001\nI0608 07:05:49.636873 13573 solver.cpp:229] Iteration 56660, loss = 1.38308\nI0608 07:05:49.636934 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0519483 (* 1 = 0.0519483 loss)\nI0608 
07:05:49.636945 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0226964 (* 1 = 0.0226964 loss)\nI0608 07:05:49.636950 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.275063 (* 1 = 0.275063 loss)\nI0608 07:05:49.636956 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0329508 (* 1 = 0.0329508 loss)\nI0608 07:05:49.636961 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.340785 (* 1 = 0.340785 loss)\nI0608 07:05:49.636967 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0329534 (* 1 = 0.0329534 loss)\nI0608 07:05:49.636976 13573 sgd_solver.cpp:106] Iteration 56660, lr = 0.0001\nI0608 07:06:02.506067 13573 solver.cpp:229] Iteration 56680, loss = 0.578594\nI0608 07:06:02.506139 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0537658 (* 1 = 0.0537658 loss)\nI0608 07:06:02.506150 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0116854 (* 1 = 0.0116854 loss)\nI0608 07:06:02.506156 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1806 (* 1 = 0.1806 loss)\nI0608 07:06:02.506162 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0164947 (* 1 = 0.0164947 loss)\nI0608 07:06:02.506168 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.2126 (* 1 = 0.2126 loss)\nI0608 07:06:02.506173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0164936 (* 1 = 0.0164936 loss)\nI0608 07:06:02.506181 13573 sgd_solver.cpp:106] Iteration 56680, lr = 0.0001\nI0608 07:06:15.403389 13573 solver.cpp:229] Iteration 56700, loss = 0.455488\nI0608 07:06:15.403482 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0460693 (* 1 = 0.0460693 loss)\nI0608 07:06:15.403492 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0501933 (* 1 = 0.0501933 loss)\nI0608 07:06:15.403498 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.238586 (* 1 = 0.238586 loss)\nI0608 07:06:15.403504 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.0458973 (* 1 = 0.0458973 loss)\nI0608 07:06:15.403511 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217614 (* 1 = 0.217614 loss)\nI0608 07:06:15.403517 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0459057 (* 1 = 0.0459057 loss)\nI0608 07:06:15.403523 13573 sgd_solver.cpp:106] Iteration 56700, lr = 0.0001\nI0608 07:06:28.164968 13573 solver.cpp:229] Iteration 56720, loss = 1.06634\nI0608 07:06:28.165032 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0703668 (* 1 = 0.0703668 loss)\nI0608 07:06:28.165042 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.135303 (* 1 = 0.135303 loss)\nI0608 07:06:28.165048 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.636516 (* 1 = 0.636516 loss)\nI0608 07:06:28.165055 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0912447 (* 1 = 0.0912447 loss)\nI0608 07:06:28.165060 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.633533 (* 1 = 0.633533 loss)\nI0608 07:06:28.165066 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0912419 (* 1 = 0.0912419 loss)\nI0608 07:06:28.165072 13573 sgd_solver.cpp:106] Iteration 56720, lr = 0.0001\nI0608 07:06:41.055542 13573 solver.cpp:229] Iteration 56740, loss = 0.618949\nI0608 07:06:41.055614 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.112813 (* 1 = 0.112813 loss)\nI0608 07:06:41.055624 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0598616 (* 1 = 0.0598616 loss)\nI0608 07:06:41.055630 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.326565 (* 1 = 0.326565 loss)\nI0608 07:06:41.055635 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0248261 (* 1 = 0.0248261 loss)\nI0608 07:06:41.055649 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.376781 (* 1 = 0.376781 loss)\nI0608 07:06:41.055655 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0248288 (* 1 = 
0.0248288 loss)\nI0608 07:06:41.055663 13573 sgd_solver.cpp:106] Iteration 56740, lr = 0.0001\nI0608 07:06:54.367671 13573 solver.cpp:229] Iteration 56760, loss = 0.389685\nI0608 07:06:54.367743 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0302004 (* 1 = 0.0302004 loss)\nI0608 07:06:54.367753 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.024002 (* 1 = 0.024002 loss)\nI0608 07:06:54.367759 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.179189 (* 1 = 0.179189 loss)\nI0608 07:06:54.367765 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0107718 (* 1 = 0.0107718 loss)\nI0608 07:06:54.367771 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.165926 (* 1 = 0.165926 loss)\nI0608 07:06:54.367777 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0107689 (* 1 = 0.0107689 loss)\nI0608 07:06:54.367784 13573 sgd_solver.cpp:106] Iteration 56760, lr = 0.0001\nI0608 07:07:07.367568 13573 solver.cpp:229] Iteration 56780, loss = 0.459333\nI0608 07:07:07.367636 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0012523 (* 1 = 0.0012523 loss)\nI0608 07:07:07.367645 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00140295 (* 1 = 0.00140295 loss)\nI0608 07:07:07.367651 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.272856 (* 1 = 0.272856 loss)\nI0608 07:07:07.367657 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0106963 (* 1 = 0.0106963 loss)\nI0608 07:07:07.367662 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.240261 (* 1 = 0.240261 loss)\nI0608 07:07:07.367669 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0107019 (* 1 = 0.0107019 loss)\nI0608 07:07:07.367676 13573 sgd_solver.cpp:106] Iteration 56780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:07:20.310302 13573 solver.cpp:229] Iteration 56800, loss = 1.93523\nI0608 07:07:20.310372 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.0291702 (* 1 = 0.0291702 loss)\nI0608 07:07:20.310382 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0316429 (* 1 = 0.0316429 loss)\nI0608 07:07:20.310389 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0765112 (* 1 = 0.0765112 loss)\nI0608 07:07:20.310394 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0155311 (* 1 = 0.0155311 loss)\nI0608 07:07:20.310400 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0776287 (* 1 = 0.0776287 loss)\nI0608 07:07:20.310405 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0155323 (* 1 = 0.0155323 loss)\nI0608 07:07:20.310411 13573 sgd_solver.cpp:106] Iteration 56800, lr = 0.0001\nI0608 07:07:33.108891 13573 solver.cpp:229] Iteration 56820, loss = 0.573409\nI0608 07:07:33.108963 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0910691 (* 1 = 0.0910691 loss)\nI0608 07:07:33.108971 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0453611 (* 1 = 0.0453611 loss)\nI0608 07:07:33.108978 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.280011 (* 1 = 0.280011 loss)\nI0608 07:07:33.108983 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0329987 (* 1 = 0.0329987 loss)\nI0608 07:07:33.108989 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.310007 (* 1 = 0.310007 loss)\nI0608 07:07:33.108995 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0329674 (* 1 = 0.0329674 loss)\nI0608 07:07:33.109001 13573 sgd_solver.cpp:106] Iteration 56820, lr = 0.0001\nI0608 07:07:45.866073 13573 solver.cpp:229] Iteration 56840, loss = 1.06563\nI0608 07:07:45.866142 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0821316 (* 1 = 0.0821316 loss)\nI0608 07:07:45.866150 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0568735 (* 1 = 0.0568735 loss)\nI0608 07:07:45.866156 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0756483 (* 1 = 0.0756483 loss)\nI0608 
07:07:45.866163 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00818171 (* 1 = 0.00818171 loss)\nI0608 07:07:45.866168 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0726698 (* 1 = 0.0726698 loss)\nI0608 07:07:45.866173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00818209 (* 1 = 0.00818209 loss)\nI0608 07:07:45.866180 13573 sgd_solver.cpp:106] Iteration 56840, lr = 0.0001\nI0608 07:07:58.894858 13573 solver.cpp:229] Iteration 56860, loss = 1.31006\nI0608 07:07:58.894932 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00394186 (* 1 = 0.00394186 loss)\nI0608 07:07:58.894942 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00104588 (* 1 = 0.00104588 loss)\nI0608 07:07:58.894948 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.226411 (* 1 = 0.226411 loss)\nI0608 07:07:58.894953 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0206223 (* 1 = 0.0206223 loss)\nI0608 07:07:58.894958 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.222749 (* 1 = 0.222749 loss)\nI0608 07:07:58.894964 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.02062 (* 1 = 0.02062 loss)\nI0608 07:07:58.894971 13573 sgd_solver.cpp:106] Iteration 56860, lr = 0.0001\nI0608 07:08:12.009989 13573 solver.cpp:229] Iteration 56880, loss = 0.945827\nI0608 07:08:12.010053 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.130892 (* 1 = 0.130892 loss)\nI0608 07:08:12.010062 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.108954 (* 1 = 0.108954 loss)\nI0608 07:08:12.010068 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.438417 (* 1 = 0.438417 loss)\nI0608 07:08:12.010074 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.031645 (* 1 = 0.031645 loss)\nI0608 07:08:12.010079 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.451842 (* 1 = 0.451842 loss)\nI0608 07:08:12.010085 13573 solver.cpp:245]    
 Train net output #5: rpn_loss_bbox = 0.0316519 (* 1 = 0.0316519 loss)\nI0608 07:08:12.010092 13573 sgd_solver.cpp:106] Iteration 56880, lr = 0.0001\nI0608 07:08:25.011456 13573 solver.cpp:229] Iteration 56900, loss = 0.422176\nI0608 07:08:25.011533 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0197403 (* 1 = 0.0197403 loss)\nI0608 07:08:25.011543 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0149291 (* 1 = 0.0149291 loss)\nI0608 07:08:25.011548 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119125 (* 1 = 0.119125 loss)\nI0608 07:08:25.011554 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00359327 (* 1 = 0.00359327 loss)\nI0608 07:08:25.011560 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.168599 (* 1 = 0.168599 loss)\nI0608 07:08:25.011566 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.003592 (* 1 = 0.003592 loss)\nI0608 07:08:25.011574 13573 sgd_solver.cpp:106] Iteration 56900, lr = 0.0001\nI0608 07:08:38.203894 13573 solver.cpp:229] Iteration 56920, loss = 0.710036\nI0608 07:08:38.203958 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0957978 (* 1 = 0.0957978 loss)\nI0608 07:08:38.203968 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0221713 (* 1 = 0.0221713 loss)\nI0608 07:08:38.203974 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.40487 (* 1 = 0.40487 loss)\nI0608 07:08:38.203980 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0783112 (* 1 = 0.0783112 loss)\nI0608 07:08:38.203986 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.385512 (* 1 = 0.385512 loss)\nI0608 07:08:38.203992 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0782876 (* 1 = 0.0782876 loss)\nI0608 07:08:38.203999 13573 sgd_solver.cpp:106] Iteration 56920, lr = 0.0001\nI0608 07:08:51.226294 13573 solver.cpp:229] Iteration 56940, loss = 0.422233\nI0608 07:08:51.226388 13573 solver.cpp:245]     
Train net output #0: loss_bbox = 0.0592806 (* 1 = 0.0592806 loss)\nI0608 07:08:51.226397 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0287851 (* 1 = 0.0287851 loss)\nI0608 07:08:51.226404 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0978843 (* 1 = 0.0978843 loss)\nI0608 07:08:51.226410 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0367966 (* 1 = 0.0367966 loss)\nI0608 07:08:51.226416 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0955704 (* 1 = 0.0955704 loss)\nI0608 07:08:51.226423 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0368021 (* 1 = 0.0368021 loss)\nI0608 07:08:51.226430 13573 sgd_solver.cpp:106] Iteration 56940, lr = 0.0001\nI0608 07:09:04.207394 13573 solver.cpp:229] Iteration 56960, loss = 0.583141\nI0608 07:09:04.207459 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000421288 (* 1 = 0.000421288 loss)\nI0608 07:09:04.207469 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00679653 (* 1 = 0.00679653 loss)\nI0608 07:09:04.207475 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.195438 (* 1 = 0.195438 loss)\nI0608 07:09:04.207481 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0139948 (* 1 = 0.0139948 loss)\nI0608 07:09:04.207487 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197951 (* 1 = 0.197951 loss)\nI0608 07:09:04.207494 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0139818 (* 1 = 0.0139818 loss)\nI0608 07:09:04.207500 13573 sgd_solver.cpp:106] Iteration 56960, lr = 0.0001\nI0608 07:09:17.110831 13573 solver.cpp:229] Iteration 56980, loss = 0.439476\nI0608 07:09:17.110924 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0566311 (* 1 = 0.0566311 loss)\nI0608 07:09:17.110934 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0597291 (* 1 = 0.0597291 loss)\nI0608 07:09:17.110940 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 
0.0959253 (* 1 = 0.0959253 loss)\nI0608 07:09:17.110946 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0142705 (* 1 = 0.0142705 loss)\nI0608 07:09:17.110952 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0915805 (* 1 = 0.0915805 loss)\nI0608 07:09:17.110957 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0142701 (* 1 = 0.0142701 loss)\nI0608 07:09:17.110965 13573 sgd_solver.cpp:106] Iteration 56980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:09:29.897084 13573 solver.cpp:229] Iteration 57000, loss = 0.715341\nI0608 07:09:29.897159 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.049277 (* 1 = 0.049277 loss)\nI0608 07:09:29.897169 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0420668 (* 1 = 0.0420668 loss)\nI0608 07:09:29.897176 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.208943 (* 1 = 0.208943 loss)\nI0608 07:09:29.897181 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0212745 (* 1 = 0.0212745 loss)\nI0608 07:09:29.897187 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.204874 (* 1 = 0.204874 loss)\nI0608 07:09:29.897193 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0212869 (* 1 = 0.0212869 loss)\nI0608 07:09:29.897200 13573 sgd_solver.cpp:106] Iteration 57000, lr = 0.0001\nI0608 07:09:42.751619 13573 solver.cpp:229] Iteration 57020, loss = 0.537377\nI0608 07:09:42.751689 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.131526 (* 1 = 0.131526 loss)\nI0608 07:09:42.751701 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0648658 (* 1 = 0.0648658 loss)\nI0608 07:09:42.751708 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.267447 (* 1 = 0.267447 loss)\nI0608 07:09:42.751713 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0338862 (* 1 = 0.0338862 loss)\nI0608 07:09:42.751718 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.260525 (* 1 = 
0.260525 loss)\nI0608 07:09:42.751724 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0338925 (* 1 = 0.0338925 loss)\nI0608 07:09:42.751732 13573 sgd_solver.cpp:106] Iteration 57020, lr = 0.0001\nI0608 07:09:55.592234 13573 solver.cpp:229] Iteration 57040, loss = 1.53162\nI0608 07:09:55.592304 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.037901 (* 1 = 0.037901 loss)\nI0608 07:09:55.592315 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.104679 (* 1 = 0.104679 loss)\nI0608 07:09:55.592322 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.16547 (* 1 = 0.16547 loss)\nI0608 07:09:55.592329 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00698458 (* 1 = 0.00698458 loss)\nI0608 07:09:55.592334 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174234 (* 1 = 0.174234 loss)\nI0608 07:09:55.592340 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00698364 (* 1 = 0.00698364 loss)\nI0608 07:09:55.592346 13573 sgd_solver.cpp:106] Iteration 57040, lr = 0.0001\nI0608 07:10:08.479070 13573 solver.cpp:229] Iteration 57060, loss = 0.886898\nI0608 07:10:08.479137 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00319373 (* 1 = 0.00319373 loss)\nI0608 07:10:08.479148 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0227323 (* 1 = 0.0227323 loss)\nI0608 07:10:08.479154 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.519793 (* 1 = 0.519793 loss)\nI0608 07:10:08.479161 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.126734 (* 1 = 0.126734 loss)\nI0608 07:10:08.479167 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.51771 (* 1 = 0.51771 loss)\nI0608 07:10:08.479173 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.126709 (* 1 = 0.126709 loss)\nI0608 07:10:08.479181 13573 sgd_solver.cpp:106] Iteration 57060, lr = 0.0001\nI0608 07:10:21.589670 13573 solver.cpp:229] Iteration 57080, loss = 
0.64763\nI0608 07:10:21.589799 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0184455 (* 1 = 0.0184455 loss)\nI0608 07:10:21.589809 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00754643 (* 1 = 0.00754643 loss)\nI0608 07:10:21.589818 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.362015 (* 1 = 0.362015 loss)\nI0608 07:10:21.589823 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0519722 (* 1 = 0.0519722 loss)\nI0608 07:10:21.589829 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.38971 (* 1 = 0.38971 loss)\nI0608 07:10:21.589835 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0519716 (* 1 = 0.0519716 loss)\nI0608 07:10:21.589843 13573 sgd_solver.cpp:106] Iteration 57080, lr = 0.0001\nI0608 07:10:34.581231 13573 solver.cpp:229] Iteration 57100, loss = 0.637599\nI0608 07:10:34.581293 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0484619 (* 1 = 0.0484619 loss)\nI0608 07:10:34.581303 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0480944 (* 1 = 0.0480944 loss)\nI0608 07:10:34.581310 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0996774 (* 1 = 0.0996774 loss)\nI0608 07:10:34.581316 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0440064 (* 1 = 0.0440064 loss)\nI0608 07:10:34.581322 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.104824 (* 1 = 0.104824 loss)\nI0608 07:10:34.581328 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0440138 (* 1 = 0.0440138 loss)\nI0608 07:10:34.581336 13573 sgd_solver.cpp:106] Iteration 57100, lr = 0.0001\nI0608 07:10:47.519842 13573 solver.cpp:229] Iteration 57120, loss = 0.603207\nI0608 07:10:47.519915 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0570756 (* 1 = 0.0570756 loss)\nI0608 07:10:47.519925 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0462732 (* 1 = 0.0462732 loss)\nI0608 07:10:47.519932 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.122751 (* 1 = 0.122751 loss)\nI0608 07:10:47.519937 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00162058 (* 1 = 0.00162058 loss)\nI0608 07:10:47.519944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102423 (* 1 = 0.102423 loss)\nI0608 07:10:47.519950 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00162051 (* 1 = 0.00162051 loss)\nI0608 07:10:47.519958 13573 sgd_solver.cpp:106] Iteration 57120, lr = 0.0001\nI0608 07:11:00.606684 13573 solver.cpp:229] Iteration 57140, loss = 0.336468\nI0608 07:11:00.606755 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0362178 (* 1 = 0.0362178 loss)\nI0608 07:11:00.606765 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0340148 (* 1 = 0.0340148 loss)\nI0608 07:11:00.606786 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.085887 (* 1 = 0.085887 loss)\nI0608 07:11:00.606794 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00989652 (* 1 = 0.00989652 loss)\nI0608 07:11:00.606801 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0860218 (* 1 = 0.0860218 loss)\nI0608 07:11:00.606806 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0099003 (* 1 = 0.0099003 loss)\nI0608 07:11:00.606812 13573 sgd_solver.cpp:106] Iteration 57140, lr = 0.0001\nI0608 07:11:13.367677 13573 solver.cpp:229] Iteration 57160, loss = 0.836009\nI0608 07:11:13.367736 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0220269 (* 1 = 0.0220269 loss)\nI0608 07:11:13.367746 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0383183 (* 1 = 0.0383183 loss)\nI0608 07:11:13.367753 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.484626 (* 1 = 0.484626 loss)\nI0608 07:11:13.367758 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.108404 (* 1 = 0.108404 loss)\nI0608 07:11:13.367764 13573 solver.cpp:245]     Train net 
output #4: rpn_cls_loss = 0.463435 (* 1 = 0.463435 loss)\nI0608 07:11:13.367770 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.108409 (* 1 = 0.108409 loss)\nI0608 07:11:13.367776 13573 sgd_solver.cpp:106] Iteration 57160, lr = 0.0001\nI0608 07:11:26.286643 13573 solver.cpp:229] Iteration 57180, loss = 0.52012\nI0608 07:11:26.286726 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0631664 (* 1 = 0.0631664 loss)\nI0608 07:11:26.286737 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0594044 (* 1 = 0.0594044 loss)\nI0608 07:11:26.286743 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105171 (* 1 = 0.105171 loss)\nI0608 07:11:26.286749 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0127356 (* 1 = 0.0127356 loss)\nI0608 07:11:26.286756 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103802 (* 1 = 0.103802 loss)\nI0608 07:11:26.286761 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0127391 (* 1 = 0.0127391 loss)\nI0608 07:11:26.286779 13573 sgd_solver.cpp:106] Iteration 57180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:11:39.277750 13573 solver.cpp:229] Iteration 57200, loss = 0.534119\nI0608 07:11:39.277834 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00031364 (* 1 = 0.00031364 loss)\nI0608 07:11:39.277844 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00341067 (* 1 = 0.00341067 loss)\nI0608 07:11:39.277850 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.226144 (* 1 = 0.226144 loss)\nI0608 07:11:39.277856 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0226462 (* 1 = 0.0226462 loss)\nI0608 07:11:39.277863 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.13319 (* 1 = 0.13319 loss)\nI0608 07:11:39.277868 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0226467 (* 1 = 0.0226467 loss)\nI0608 07:11:39.277875 13573 sgd_solver.cpp:106] Iteration 57200, lr = 
0.0001
Train net output #3: p2_rpn_loss_bbox = 0.0311794 (* 1 = 0.0311794 loss)\nI0608 07:22:11.581821 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.238137 (* 1 = 0.238137 loss)\nI0608 07:22:11.581827 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0311931 (* 1 = 0.0311931 loss)\nI0608 07:22:11.581835 13573 sgd_solver.cpp:106] Iteration 58180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:22:24.334561 13573 solver.cpp:229] Iteration 58200, loss = 0.774812\nI0608 07:22:24.334630 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0149623 (* 1 = 0.0149623 loss)\nI0608 07:22:24.334640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0247846 (* 1 = 0.0247846 loss)\nI0608 07:22:24.334646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.216446 (* 1 = 0.216446 loss)\nI0608 07:22:24.334652 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130949 (* 1 = 0.0130949 loss)\nI0608 07:22:24.334658 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.21588 (* 1 = 0.21588 loss)\nI0608 07:22:24.334666 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0130953 (* 1 = 0.0130953 loss)\nI0608 07:22:24.334672 13573 sgd_solver.cpp:106] Iteration 58200, lr = 0.0001\nI0608 07:22:37.221650 13573 solver.cpp:229] Iteration 58220, loss = 0.626638\nI0608 07:22:37.221716 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0756958 (* 1 = 0.0756958 loss)\nI0608 07:22:37.221725 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0848174 (* 1 = 0.0848174 loss)\nI0608 07:22:37.221731 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.135518 (* 1 = 0.135518 loss)\nI0608 07:22:37.221737 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0787402 (* 1 = 0.0787402 loss)\nI0608 07:22:37.221742 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.138526 (* 1 = 0.138526 loss)\nI0608 07:22:37.221748 13573 solver.cpp:245]     Train net output 
#5: rpn_loss_bbox = 0.0787349 (* 1 = 0.0787349 loss)\nI0608 07:22:37.221755 13573 sgd_solver.cpp:106] Iteration 58220, lr = 0.0001\nI0608 07:22:50.024260 13573 solver.cpp:229] Iteration 58240, loss = 0.254238\nI0608 07:22:50.024329 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0300729 (* 1 = 0.0300729 loss)\nI0608 07:22:50.024338 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0178544 (* 1 = 0.0178544 loss)\nI0608 07:22:50.024344 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.089312 (* 1 = 0.089312 loss)\nI0608 07:22:50.024350 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0151289 (* 1 = 0.0151289 loss)\nI0608 07:22:50.024356 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0892142 (* 1 = 0.0892142 loss)\nI0608 07:22:50.024363 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.015128 (* 1 = 0.015128 loss)\nI0608 07:22:50.024369 13573 sgd_solver.cpp:106] Iteration 58240, lr = 0.0001\nI0608 07:23:02.976904 13573 solver.cpp:229] Iteration 58260, loss = 0.245879\nI0608 07:23:02.976982 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0390293 (* 1 = 0.0390293 loss)\nI0608 07:23:02.976995 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0250512 (* 1 = 0.0250512 loss)\nI0608 07:23:02.977001 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0967035 (* 1 = 0.0967035 loss)\nI0608 07:23:02.977007 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00872499 (* 1 = 0.00872499 loss)\nI0608 07:23:02.977015 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0847324 (* 1 = 0.0847324 loss)\nI0608 07:23:02.977020 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00872245 (* 1 = 0.00872245 loss)\nI0608 07:23:02.977026 13573 sgd_solver.cpp:106] Iteration 58260, lr = 0.0001\nI0608 07:23:15.941562 13573 solver.cpp:229] Iteration 58280, loss = 0.552278\nI0608 07:23:15.941625 13573 solver.cpp:245]     Train net 
output #0: loss_bbox = 0.131695 (* 1 = 0.131695 loss)\nI0608 07:23:15.941635 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.211459 (* 1 = 0.211459 loss)\nI0608 07:23:15.941642 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.112023 (* 1 = 0.112023 loss)\nI0608 07:23:15.941648 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0499167 (* 1 = 0.0499167 loss)\nI0608 07:23:15.941653 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.110076 (* 1 = 0.110076 loss)\nI0608 07:23:15.941659 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0499138 (* 1 = 0.0499138 loss)\nI0608 07:23:15.941666 13573 sgd_solver.cpp:106] Iteration 58280, lr = 0.0001\nI0608 07:23:28.677613 13573 solver.cpp:229] Iteration 58300, loss = 0.781849\nI0608 07:23:28.677676 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0226549 (* 1 = 0.0226549 loss)\nI0608 07:23:28.677685 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0278916 (* 1 = 0.0278916 loss)\nI0608 07:23:28.677691 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.497226 (* 1 = 0.497226 loss)\nI0608 07:23:28.677697 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0835798 (* 1 = 0.0835798 loss)\nI0608 07:23:28.677703 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.4937 (* 1 = 0.4937 loss)\nI0608 07:23:28.677709 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0835489 (* 1 = 0.0835489 loss)\nI0608 07:23:28.677716 13573 sgd_solver.cpp:106] Iteration 58300, lr = 0.0001\nI0608 07:23:41.732172 13573 solver.cpp:229] Iteration 58320, loss = 0.536397\nI0608 07:23:41.732242 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0181512 (* 1 = 0.0181512 loss)\nI0608 07:23:41.732251 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0131528 (* 1 = 0.0131528 loss)\nI0608 07:23:41.732259 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0816868 (* 1 = 0.0816868 
loss)\nI0608 07:23:41.732264 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0112554 (* 1 = 0.0112554 loss)\nI0608 07:23:41.732270 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.079393 (* 1 = 0.079393 loss)\nI0608 07:23:41.732275 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0112568 (* 1 = 0.0112568 loss)\nI0608 07:23:41.732281 13573 sgd_solver.cpp:106] Iteration 58320, lr = 0.0001\nI0608 07:23:54.605581 13573 solver.cpp:229] Iteration 58340, loss = 0.365349\nI0608 07:23:54.605634 13573 solver.cpp:245]     Train net output #0: loss_bbox = 4.83832e-05 (* 1 = 4.83832e-05 loss)\nI0608 07:23:54.605644 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000199099 (* 1 = 0.000199099 loss)\nI0608 07:23:54.605650 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.146606 (* 1 = 0.146606 loss)\nI0608 07:23:54.605656 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0149401 (* 1 = 0.0149401 loss)\nI0608 07:23:54.605661 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.16946 (* 1 = 0.16946 loss)\nI0608 07:23:54.605667 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0149427 (* 1 = 0.0149427 loss)\nI0608 07:23:54.605675 13573 sgd_solver.cpp:106] Iteration 58340, lr = 0.0001\nI0608 07:24:07.524497 13573 solver.cpp:229] Iteration 58360, loss = 0.705456\nI0608 07:24:07.524583 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00839146 (* 1 = 0.00839146 loss)\nI0608 07:24:07.524593 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00140288 (* 1 = 0.00140288 loss)\nI0608 07:24:07.524600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.419437 (* 1 = 0.419437 loss)\nI0608 07:24:07.524605 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0471085 (* 1 = 0.0471085 loss)\nI0608 07:24:07.524611 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.420594 (* 1 = 0.420594 loss)\nI0608 07:24:07.524616 
13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0470999 (* 1 = 0.0470999 loss)\nI0608 07:24:07.524623 13573 sgd_solver.cpp:106] Iteration 58360, lr = 0.0001\nI0608 07:24:20.529232 13573 solver.cpp:229] Iteration 58380, loss = 0.570602\nI0608 07:24:20.529299 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0579985 (* 1 = 0.0579985 loss)\nI0608 07:24:20.529312 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0322965 (* 1 = 0.0322965 loss)\nI0608 07:24:20.529323 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0843687 (* 1 = 0.0843687 loss)\nI0608 07:24:20.529332 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00782303 (* 1 = 0.00782303 loss)\nI0608 07:24:20.529340 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.084685 (* 1 = 0.084685 loss)\nI0608 07:24:20.529350 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00782088 (* 1 = 0.00782088 loss)\nI0608 07:24:20.529359 13573 sgd_solver.cpp:106] Iteration 58380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:24:33.387712 13573 solver.cpp:229] Iteration 58400, loss = 0.812327\nI0608 07:24:33.387779 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.145995 (* 1 = 0.145995 loss)\nI0608 07:24:33.387790 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0112611 (* 1 = 0.0112611 loss)\nI0608 07:24:33.387796 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.461952 (* 1 = 0.461952 loss)\nI0608 07:24:33.387801 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.057863 (* 1 = 0.057863 loss)\nI0608 07:24:33.387807 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.487512 (* 1 = 0.487512 loss)\nI0608 07:24:33.387814 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0578567 (* 1 = 0.0578567 loss)\nI0608 07:24:33.387820 13573 sgd_solver.cpp:106] Iteration 58400, lr = 0.0001\nI0608 07:24:46.347268 13573 solver.cpp:229] Iteration 58420, loss = 
1.02667\nI0608 07:24:46.347334 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0659702 (* 1 = 0.0659702 loss)\nI0608 07:24:46.347344 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0578343 (* 1 = 0.0578343 loss)\nI0608 07:24:46.347352 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.442021 (* 1 = 0.442021 loss)\nI0608 07:24:46.347357 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0946482 (* 1 = 0.0946482 loss)\nI0608 07:24:46.347362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.421592 (* 1 = 0.421592 loss)\nI0608 07:24:46.347368 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0946414 (* 1 = 0.0946414 loss)\nI0608 07:24:46.347375 13573 sgd_solver.cpp:106] Iteration 58420, lr = 0.0001\nI0608 07:24:59.193065 13573 solver.cpp:229] Iteration 58440, loss = 0.639346\nI0608 07:24:59.193126 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0910746 (* 1 = 0.0910746 loss)\nI0608 07:24:59.193136 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0637968 (* 1 = 0.0637968 loss)\nI0608 07:24:59.193142 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.260559 (* 1 = 0.260559 loss)\nI0608 07:24:59.193148 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.065943 (* 1 = 0.065943 loss)\nI0608 07:24:59.193153 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.263247 (* 1 = 0.263247 loss)\nI0608 07:24:59.193159 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0659313 (* 1 = 0.0659313 loss)\nI0608 07:24:59.193166 13573 sgd_solver.cpp:106] Iteration 58440, lr = 0.0001\nI0608 07:25:11.960033 13573 solver.cpp:229] Iteration 58460, loss = 0.840141\nI0608 07:25:11.960104 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.119898 (* 1 = 0.119898 loss)\nI0608 07:25:11.960116 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.113003 (* 1 = 0.113003 loss)\nI0608 07:25:11.960124 13573 solver.cpp:245]   
  Train net output #2: p2_rpn_cls_loss = 0.30747 (* 1 = 0.30747 loss)\nI0608 07:25:11.960129 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0395173 (* 1 = 0.0395173 loss)\nI0608 07:25:11.960134 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.303832 (* 1 = 0.303832 loss)\nI0608 07:25:11.960140 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0395073 (* 1 = 0.0395073 loss)\nI0608 07:25:11.960147 13573 sgd_solver.cpp:106] Iteration 58460, lr = 0.0001\nI0608 07:25:24.885538 13573 solver.cpp:229] Iteration 58480, loss = 1.23776\nI0608 07:25:24.885610 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00326524 (* 1 = 0.00326524 loss)\nI0608 07:25:24.885620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000216378 (* 1 = 0.000216378 loss)\nI0608 07:25:24.885627 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171299 (* 1 = 0.171299 loss)\nI0608 07:25:24.885632 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00208475 (* 1 = 0.00208475 loss)\nI0608 07:25:24.885638 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.201817 (* 1 = 0.201817 loss)\nI0608 07:25:24.885644 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00208538 (* 1 = 0.00208538 loss)\nI0608 07:25:24.885651 13573 sgd_solver.cpp:106] Iteration 58480, lr = 0.0001\nI0608 07:25:37.706043 13573 solver.cpp:229] Iteration 58500, loss = 0.47621\nI0608 07:25:37.706104 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0816998 (* 1 = 0.0816998 loss)\nI0608 07:25:37.706113 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0908885 (* 1 = 0.0908885 loss)\nI0608 07:25:37.706120 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.171537 (* 1 = 0.171537 loss)\nI0608 07:25:37.706126 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.031816 (* 1 = 0.031816 loss)\nI0608 07:25:37.706131 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.17402 (* 1 = 0.17402 loss)\nI0608 07:25:37.706137 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0318135 (* 1 = 0.0318135 loss)\nI0608 07:25:37.706145 13573 sgd_solver.cpp:106] Iteration 58500, lr = 0.0001\nI0608 07:25:50.567476 13573 solver.cpp:229] Iteration 58520, loss = 0.776441\nI0608 07:25:50.567543 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0327784 (* 1 = 0.0327784 loss)\nI0608 07:25:50.567551 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0430472 (* 1 = 0.0430472 loss)\nI0608 07:25:50.567558 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.206884 (* 1 = 0.206884 loss)\nI0608 07:25:50.567564 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0733052 (* 1 = 0.0733052 loss)\nI0608 07:25:50.567569 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.206403 (* 1 = 0.206403 loss)\nI0608 07:25:50.567575 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0733031 (* 1 = 0.0733031 loss)\nI0608 07:25:50.567581 13573 sgd_solver.cpp:106] Iteration 58520, lr = 0.0001\nI0608 07:26:03.317214 13573 solver.cpp:229] Iteration 58540, loss = 0.393135\nI0608 07:26:03.317279 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00178004 (* 1 = 0.00178004 loss)\nI0608 07:26:03.317289 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000964994 (* 1 = 0.000964994 loss)\nI0608 07:26:03.317296 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.241653 (* 1 = 0.241653 loss)\nI0608 07:26:03.317302 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00797792 (* 1 = 0.00797792 loss)\nI0608 07:26:03.317308 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.216944 (* 1 = 0.216944 loss)\nI0608 07:26:03.317314 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00798449 (* 1 = 0.00798449 loss)\nI0608 07:26:03.317322 13573 sgd_solver.cpp:106] Iteration 58540, lr = 0.0001\nI0608 07:26:16.233302 
13573 solver.cpp:229] Iteration 58560, loss = 0.68851\nI0608 07:26:16.233366 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00143588 (* 1 = 0.00143588 loss)\nI0608 07:26:16.233376 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000539444 (* 1 = 0.000539444 loss)\nI0608 07:26:16.233382 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.265258 (* 1 = 0.265258 loss)\nI0608 07:26:16.233387 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0180988 (* 1 = 0.0180988 loss)\nI0608 07:26:16.233394 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.248542 (* 1 = 0.248542 loss)\nI0608 07:26:16.233400 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0180876 (* 1 = 0.0180876 loss)\nI0608 07:26:16.233407 13573 sgd_solver.cpp:106] Iteration 58560, lr = 0.0001\nI0608 07:26:29.141710 13573 solver.cpp:229] Iteration 58580, loss = 0.390157\nI0608 07:26:29.142343 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0104633 (* 1 = 0.0104633 loss)\nI0608 07:26:29.142365 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0456165 (* 1 = 0.0456165 loss)\nI0608 07:26:29.142926 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.092212 (* 1 = 0.092212 loss)\nI0608 07:26:29.142947 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0654191 (* 1 = 0.0654191 loss)\nI0608 07:26:29.142954 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0958787 (* 1 = 0.0958787 loss)\nI0608 07:26:29.143486 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0654143 (* 1 = 0.0654143 loss)\nI0608 07:26:29.143504 13573 sgd_solver.cpp:106] Iteration 58580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:26:42.177696 13573 solver.cpp:229] Iteration 58600, loss = 1.92556\nI0608 07:26:42.177765 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.257729 (* 1 = 0.257729 loss)\nI0608 07:26:42.177774 13573 solver.cpp:245]     Train net output #1: loss_cls = 
0.139589 (* 1 = 0.139589 loss)\nI0608 07:26:42.177780 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.844722 (* 1 = 0.844722 loss)\nI0608 07:26:42.177785 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.198568 (* 1 = 0.198568 loss)\nI0608 07:26:42.177791 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.858988 (* 1 = 0.858988 loss)\nI0608 07:26:42.177798 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.198569 (* 1 = 0.198569 loss)\nI0608 07:26:42.177804 13573 sgd_solver.cpp:106] Iteration 58600, lr = 0.0001\nI0608 07:26:55.227316 13573 solver.cpp:229] Iteration 58620, loss = 0.962099\nI0608 07:26:55.227376 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.126987 (* 1 = 0.126987 loss)\nI0608 07:26:55.227385 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.193292 (* 1 = 0.193292 loss)\nI0608 07:26:55.227391 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.617676 (* 1 = 0.617676 loss)\nI0608 07:26:55.227397 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0269078 (* 1 = 0.0269078 loss)\nI0608 07:26:55.227403 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.606962 (* 1 = 0.606962 loss)\nI0608 07:26:55.227409 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0269126 (* 1 = 0.0269126 loss)\nI0608 07:26:55.227416 13573 sgd_solver.cpp:106] Iteration 58620, lr = 0.0001\nI0608 07:27:08.065892 13573 solver.cpp:229] Iteration 58640, loss = 0.784063\nI0608 07:27:08.065965 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0686267 (* 1 = 0.0686267 loss)\nI0608 07:27:08.065974 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.138603 (* 1 = 0.138603 loss)\nI0608 07:27:08.065980 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.364899 (* 1 = 0.364899 loss)\nI0608 07:27:08.065985 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0619158 (* 1 = 0.0619158 loss)\nI0608 
07:27:08.065991 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.368484 (* 1 = 0.368484 loss)\nI0608 07:27:08.065996 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0619154 (* 1 = 0.0619154 loss)\nI0608 07:27:08.066004 13573 sgd_solver.cpp:106] Iteration 58640, lr = 0.0001\nI0608 07:27:20.769414 13573 solver.cpp:229] Iteration 58660, loss = 0.572585\nI0608 07:27:20.769480 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0623162 (* 1 = 0.0623162 loss)\nI0608 07:27:20.769490 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0915435 (* 1 = 0.0915435 loss)\nI0608 07:27:20.769495 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.302095 (* 1 = 0.302095 loss)\nI0608 07:27:20.769500 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0477382 (* 1 = 0.0477382 loss)\nI0608 07:27:20.769506 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.315039 (* 1 = 0.315039 loss)\nI0608 07:27:20.769512 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0477441 (* 1 = 0.0477441 loss)\nI0608 07:27:20.769518 13573 sgd_solver.cpp:106] Iteration 58660, lr = 0.0001\nI0608 07:27:33.738644 13573 solver.cpp:229] Iteration 58680, loss = 0.398924\nI0608 07:27:33.738710 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0839228 (* 1 = 0.0839228 loss)\nI0608 07:27:33.738720 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0700559 (* 1 = 0.0700559 loss)\nI0608 07:27:33.738726 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.155864 (* 1 = 0.155864 loss)\nI0608 07:27:33.738732 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.013967 (* 1 = 0.013967 loss)\nI0608 07:27:33.738737 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.17051 (* 1 = 0.17051 loss)\nI0608 07:27:33.738744 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.013976 (* 1 = 0.013976 loss)\nI0608 07:27:33.738751 13573 sgd_solver.cpp:106] 
Iteration 58680, lr = 0.0001\nI0608 07:27:46.690270 13573 solver.cpp:229] Iteration 58700, loss = 0.867519\nI0608 07:27:46.690346 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0008143 (* 1 = 0.0008143 loss)\nI0608 07:27:46.690356 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000174311 (* 1 = 0.000174311 loss)\nI0608 07:27:46.690362 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.180798 (* 1 = 0.180798 loss)\nI0608 07:27:46.690368 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0193534 (* 1 = 0.0193534 loss)\nI0608 07:27:46.690374 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.210392 (* 1 = 0.210392 loss)\nI0608 07:27:46.690381 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0193398 (* 1 = 0.0193398 loss)\nI0608 07:27:46.690387 13573 sgd_solver.cpp:106] Iteration 58700, lr = 0.0001\nI0608 07:27:59.559891 13573 solver.cpp:229] Iteration 58720, loss = 1.83721\nI0608 07:27:59.559955 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0484638 (* 1 = 0.0484638 loss)\nI0608 07:27:59.559964 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0954286 (* 1 = 0.0954286 loss)\nI0608 07:27:59.559972 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.896717 (* 1 = 0.896717 loss)\nI0608 07:27:59.559976 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.635861 (* 1 = 0.635861 loss)\nI0608 07:27:59.559983 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.911957 (* 1 = 0.911957 loss)\nI0608 07:27:59.559988 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.63586 (* 1 = 0.63586 loss)\nI0608 07:27:59.559994 13573 sgd_solver.cpp:106] Iteration 58720, lr = 0.0001\nI0608 07:28:12.609588 13573 solver.cpp:229] Iteration 58740, loss = 0.363187\nI0608 07:28:12.609647 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000965988 (* 1 = 0.000965988 loss)\nI0608 07:28:12.609658 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.00110079 (* 1 = 0.00110079 loss)\nI0608 07:28:12.609663 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.175288 (* 1 = 0.175288 loss)\nI0608 07:28:12.609669 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0067867 (* 1 = 0.0067867 loss)\nI0608 07:28:12.609675 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.160356 (* 1 = 0.160356 loss)\nI0608 07:28:12.609681 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00679327 (* 1 = 0.00679327 loss)\nI0608 07:28:12.609688 13573 sgd_solver.cpp:106] Iteration 58740, lr = 0.0001\nI0608 07:28:25.562083 13573 solver.cpp:229] Iteration 58760, loss = 0.364676\nI0608 07:28:25.562163 13573 solver.cpp:245]     Train net output #0: loss_bbox = 3.55378e-05 (* 1 = 3.55378e-05 loss)\nI0608 07:28:25.562175 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00026678 (* 1 = 0.00026678 loss)\nI0608 07:28:25.562180 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.220928 (* 1 = 0.220928 loss)\nI0608 07:28:25.562186 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0028213 (* 1 = 0.0028213 loss)\nI0608 07:28:25.562192 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191043 (* 1 = 0.191043 loss)\nI0608 07:28:25.562198 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00282355 (* 1 = 0.00282355 loss)\nI0608 07:28:25.562206 13573 sgd_solver.cpp:106] Iteration 58760, lr = 0.0001\nI0608 07:28:38.563311 13573 solver.cpp:229] Iteration 58780, loss = 0.357994\nI0608 07:28:38.563380 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0628649 (* 1 = 0.0628649 loss)\nI0608 07:28:38.563396 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0824127 (* 1 = 0.0824127 loss)\nI0608 07:28:38.563403 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121692 (* 1 = 0.121692 loss)\nI0608 07:28:38.563410 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox 
   Train net output #0: loss_bbox = 0.0183622 (* 1 = 0.0183622 loss)\nI0608 07:39:10.980435 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.018736 (* 1 = 0.018736 loss)\nI0608 07:39:10.980442 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0855281 (* 1 = 0.0855281 loss)\nI0608 07:39:10.980448 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0351061 (* 1 = 0.0351061 loss)\nI0608 07:39:10.980453 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0816119 (* 1 = 0.0816119 loss)\nI0608 07:39:10.980458 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0351101 (* 1 = 0.0351101 loss)\nI0608 07:39:10.980465 13573 sgd_solver.cpp:106] Iteration 59760, lr = 0.0001\nI0608 07:39:23.696038 13573 solver.cpp:229] Iteration 59780, loss = 0.406887\nI0608 07:39:23.696110 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0300247 (* 1 = 0.0300247 loss)\nI0608 07:39:23.696118 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0400032 (* 1 = 0.0400032 loss)\nI0608 07:39:23.696125 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.205851 (* 1 = 0.205851 loss)\nI0608 07:39:23.696130 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0121392 (* 1 = 0.0121392 loss)\nI0608 07:39:23.696136 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.199547 (* 1 = 0.199547 loss)\nI0608 07:39:23.696142 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0121426 (* 1 = 0.0121426 loss)\nI0608 07:39:23.696149 13573 sgd_solver.cpp:106] Iteration 59780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:39:36.624871 13573 solver.cpp:229] Iteration 59800, loss = 0.370209\nI0608 07:39:36.624943 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0523743 (* 1 = 0.0523743 loss)\nI0608 07:39:36.624953 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0282343 (* 1 = 0.0282343 loss)\nI0608 07:39:36.624959 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.120923 (* 1 = 0.120923 loss)\nI0608 07:39:36.624965 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0356654 (* 1 = 0.0356654 loss)\nI0608 07:39:36.624970 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.121953 (* 1 = 0.121953 loss)\nI0608 07:39:36.624976 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0356603 (* 1 = 0.0356603 loss)\nI0608 07:39:36.624985 13573 sgd_solver.cpp:106] Iteration 59800, lr = 0.0001\nI0608 07:39:50.028766 13573 solver.cpp:229] Iteration 59820, loss = 0.574555\nI0608 07:39:50.028831 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0492555 (* 1 = 0.0492555 loss)\nI0608 07:39:50.028841 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0766563 (* 1 = 0.0766563 loss)\nI0608 07:39:50.028846 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121928 (* 1 = 0.121928 loss)\nI0608 07:39:50.028852 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.180011 (* 1 = 0.180011 loss)\nI0608 07:39:50.028858 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122371 (* 1 = 0.122371 loss)\nI0608 07:39:50.028863 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.179987 (* 1 = 0.179987 loss)\nI0608 07:39:50.028870 13573 sgd_solver.cpp:106] Iteration 59820, lr = 0.0001\nI0608 07:40:02.894291 13573 solver.cpp:229] Iteration 59840, loss = 1.22843\nI0608 07:40:02.894354 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.128092 (* 1 = 0.128092 loss)\nI0608 07:40:02.894367 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.192978 (* 1 = 0.192978 loss)\nI0608 07:40:02.894374 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.437276 (* 1 = 0.437276 loss)\nI0608 07:40:02.894379 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.198461 (* 1 = 0.198461 loss)\nI0608 07:40:02.894385 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.446796 (* 1 = 0.446796 
loss)\nI0608 07:40:02.894392 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.198472 (* 1 = 0.198472 loss)\nI0608 07:40:02.894397 13573 sgd_solver.cpp:106] Iteration 59840, lr = 0.0001\nI0608 07:40:16.065814 13573 solver.cpp:229] Iteration 59860, loss = 0.757993\nI0608 07:40:16.065899 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0636695 (* 1 = 0.0636695 loss)\nI0608 07:40:16.065910 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119275 (* 1 = 0.119275 loss)\nI0608 07:40:16.065917 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.32918 (* 1 = 0.32918 loss)\nI0608 07:40:16.065922 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.15811 (* 1 = 0.15811 loss)\nI0608 07:40:16.065928 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.331313 (* 1 = 0.331313 loss)\nI0608 07:40:16.065935 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.158139 (* 1 = 0.158139 loss)\nI0608 07:40:16.065943 13573 sgd_solver.cpp:106] Iteration 59860, lr = 0.0001\nI0608 07:40:28.908025 13573 solver.cpp:229] Iteration 59880, loss = 0.959708\nI0608 07:40:28.908090 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0205053 (* 1 = 0.0205053 loss)\nI0608 07:40:28.908099 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.060138 (* 1 = 0.060138 loss)\nI0608 07:40:28.908105 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119562 (* 1 = 0.119562 loss)\nI0608 07:40:28.908112 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.19974 (* 1 = 0.19974 loss)\nI0608 07:40:28.908118 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.121159 (* 1 = 0.121159 loss)\nI0608 07:40:28.908124 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.199753 (* 1 = 0.199753 loss)\nI0608 07:40:28.908131 13573 sgd_solver.cpp:106] Iteration 59880, lr = 0.0001\nI0608 07:40:41.745535 13573 solver.cpp:229] Iteration 59900, loss = 0.802387\nI0608 
07:40:41.745604 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0475878 (* 1 = 0.0475878 loss)\nI0608 07:40:41.745615 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0406739 (* 1 = 0.0406739 loss)\nI0608 07:40:41.745620 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103534 (* 1 = 0.103534 loss)\nI0608 07:40:41.745626 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0479681 (* 1 = 0.0479681 loss)\nI0608 07:40:41.745631 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0980125 (* 1 = 0.0980125 loss)\nI0608 07:40:41.745638 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0479743 (* 1 = 0.0479743 loss)\nI0608 07:40:41.745645 13573 sgd_solver.cpp:106] Iteration 59900, lr = 0.0001\nI0608 07:40:54.757154 13573 solver.cpp:229] Iteration 59920, loss = 1.22295\nI0608 07:40:54.757227 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0411987 (* 1 = 0.0411987 loss)\nI0608 07:40:54.757237 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0576821 (* 1 = 0.0576821 loss)\nI0608 07:40:54.757243 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.44073 (* 1 = 0.44073 loss)\nI0608 07:40:54.757249 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.110918 (* 1 = 0.110918 loss)\nI0608 07:40:54.757256 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.439102 (* 1 = 0.439102 loss)\nI0608 07:40:54.757261 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.110929 (* 1 = 0.110929 loss)\nI0608 07:40:54.757267 13573 sgd_solver.cpp:106] Iteration 59920, lr = 0.0001\nI0608 07:41:07.745348 13573 solver.cpp:229] Iteration 59940, loss = 0.584522\nI0608 07:41:07.745410 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0251461 (* 1 = 0.0251461 loss)\nI0608 07:41:07.745420 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0655009 (* 1 = 0.0655009 loss)\nI0608 07:41:07.745426 13573 solver.cpp:245]     Train net 
output #2: p2_rpn_cls_loss = 0.132755 (* 1 = 0.132755 loss)\nI0608 07:41:07.745432 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0258841 (* 1 = 0.0258841 loss)\nI0608 07:41:07.745438 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.136295 (* 1 = 0.136295 loss)\nI0608 07:41:07.745445 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0258888 (* 1 = 0.0258888 loss)\nI0608 07:41:07.745451 13573 sgd_solver.cpp:106] Iteration 59940, lr = 0.0001\nI0608 07:41:20.568577 13573 solver.cpp:229] Iteration 59960, loss = 0.300696\nI0608 07:41:20.568660 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0339386 (* 1 = 0.0339386 loss)\nI0608 07:41:20.568670 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0430892 (* 1 = 0.0430892 loss)\nI0608 07:41:20.568675 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0881635 (* 1 = 0.0881635 loss)\nI0608 07:41:20.568681 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0453399 (* 1 = 0.0453399 loss)\nI0608 07:41:20.568687 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0959447 (* 1 = 0.0959447 loss)\nI0608 07:41:20.568693 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0453472 (* 1 = 0.0453472 loss)\nI0608 07:41:20.568701 13573 sgd_solver.cpp:106] Iteration 59960, lr = 0.0001\nI0608 07:41:33.310621 13573 solver.cpp:229] Iteration 59980, loss = 0.513673\nI0608 07:41:33.310683 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0501471 (* 1 = 0.0501471 loss)\nI0608 07:41:33.310693 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0875567 (* 1 = 0.0875567 loss)\nI0608 07:41:33.310698 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136751 (* 1 = 0.136751 loss)\nI0608 07:41:33.310704 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0320443 (* 1 = 0.0320443 loss)\nI0608 07:41:33.310710 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 
0.12705 (* 1 = 0.12705 loss)\nI0608 07:41:33.310716 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0320384 (* 1 = 0.0320384 loss)\nI0608 07:41:33.310724 13573 sgd_solver.cpp:106] Iteration 59980, lr = 0.0001\nspeed: 0.645s / iter\nWrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_60000.caffemodel\nI0608 07:41:50.406913 13573 solver.cpp:229] Iteration 60000, loss = 0.481426\nI0608 07:41:50.406991 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0253662 (* 1 = 0.0253662 loss)\nI0608 07:41:50.407002 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0216349 (* 1 = 0.0216349 loss)\nI0608 07:41:50.407008 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0753978 (* 1 = 0.0753978 loss)\nI0608 07:41:50.407014 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0102901 (* 1 = 0.0102901 loss)\nI0608 07:41:50.407021 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0878643 (* 1 = 0.0878643 loss)\nI0608 07:41:50.407027 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.010291 (* 1 = 0.010291 loss)\nI0608 07:41:50.407032 13573 sgd_solver.cpp:106] Iteration 60000, lr = 0.0001\nI0608 07:42:03.311683 13573 solver.cpp:229] Iteration 60020, loss = 0.851943\nI0608 07:42:03.311751 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.105331 (* 1 = 0.105331 loss)\nI0608 07:42:03.311761 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118253 (* 1 = 0.118253 loss)\nI0608 07:42:03.311767 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.362477 (* 1 = 0.362477 loss)\nI0608 07:42:03.311774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0604404 (* 1 = 0.0604404 loss)\nI0608 07:42:03.311779 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.369985 (* 1 = 0.369985 loss)\nI0608 07:42:03.311784 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 
0.0604406 (* 1 = 0.0604406 loss)\nI0608 07:42:03.311792 13573 sgd_solver.cpp:106] Iteration 60020, lr = 0.0001\nI0608 07:42:16.375723 13573 solver.cpp:229] Iteration 60040, loss = 0.446686\nI0608 07:42:16.375819 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.141257 (* 1 = 0.141257 loss)\nI0608 07:42:16.375831 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150343 (* 1 = 0.150343 loss)\nI0608 07:42:16.375838 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.105706 (* 1 = 0.105706 loss)\nI0608 07:42:16.375844 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0356226 (* 1 = 0.0356226 loss)\nI0608 07:42:16.375850 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.105365 (* 1 = 0.105365 loss)\nI0608 07:42:16.375856 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0356264 (* 1 = 0.0356264 loss)\nI0608 07:42:16.375864 13573 sgd_solver.cpp:106] Iteration 60040, lr = 0.0001\nI0608 07:42:29.260813 13573 solver.cpp:229] Iteration 60060, loss = 0.308349\nI0608 07:42:29.260874 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.077687 (* 1 = 0.077687 loss)\nI0608 07:42:29.260884 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0407941 (* 1 = 0.0407941 loss)\nI0608 07:42:29.260890 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.078835 (* 1 = 0.078835 loss)\nI0608 07:42:29.260895 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130727 (* 1 = 0.0130727 loss)\nI0608 07:42:29.260901 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0835575 (* 1 = 0.0835575 loss)\nI0608 07:42:29.260906 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0130725 (* 1 = 0.0130725 loss)\nI0608 07:42:29.260913 13573 sgd_solver.cpp:106] Iteration 60060, lr = 0.0001\nI0608 07:42:42.042408 13573 solver.cpp:229] Iteration 60080, loss = 0.544951\nI0608 07:42:42.042479 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0221633 
(* 1 = 0.0221633 loss)\nI0608 07:42:42.042488 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0360355 (* 1 = 0.0360355 loss)\nI0608 07:42:42.042495 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228132 (* 1 = 0.228132 loss)\nI0608 07:42:42.042500 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0946386 (* 1 = 0.0946386 loss)\nI0608 07:42:42.042505 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.224427 (* 1 = 0.224427 loss)\nI0608 07:42:42.042511 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0946326 (* 1 = 0.0946326 loss)\nI0608 07:42:42.042518 13573 sgd_solver.cpp:106] Iteration 60080, lr = 0.0001\nI0608 07:42:54.812844 13573 solver.cpp:229] Iteration 60100, loss = 0.805924\nI0608 07:42:54.812916 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0418402 (* 1 = 0.0418402 loss)\nI0608 07:42:54.812927 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0339459 (* 1 = 0.0339459 loss)\nI0608 07:42:54.812932 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.177817 (* 1 = 0.177817 loss)\nI0608 07:42:54.812938 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.130298 (* 1 = 0.130298 loss)\nI0608 07:42:54.812944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.174919 (* 1 = 0.174919 loss)\nI0608 07:42:54.812952 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.130274 (* 1 = 0.130274 loss)\nI0608 07:42:54.812968 13573 sgd_solver.cpp:106] Iteration 60100, lr = 0.0001\nI0608 07:43:07.925622 13573 solver.cpp:229] Iteration 60120, loss = 0.874742\nI0608 07:43:07.925688 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.128136 (* 1 = 0.128136 loss)\nI0608 07:43:07.925698 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0944162 (* 1 = 0.0944162 loss)\nI0608 07:43:07.925705 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.35747 (* 1 = 0.35747 loss)\nI0608 07:43:07.925711 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0917187 (* 1 = 0.0917187 loss)\nI0608 07:43:07.925717 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.362029 (* 1 = 0.362029 loss)\nI0608 07:43:07.925724 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0917103 (* 1 = 0.0917103 loss)\nI0608 07:43:07.925731 13573 sgd_solver.cpp:106] Iteration 60120, lr = 0.0001\nI0608 07:43:20.768517 13573 solver.cpp:229] Iteration 60140, loss = 0.487625\nI0608 07:43:20.768584 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0569706 (* 1 = 0.0569706 loss)\nI0608 07:43:20.768594 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0466697 (* 1 = 0.0466697 loss)\nI0608 07:43:20.768600 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0982053 (* 1 = 0.0982053 loss)\nI0608 07:43:20.768605 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0237199 (* 1 = 0.0237199 loss)\nI0608 07:43:20.768611 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0955357 (* 1 = 0.0955357 loss)\nI0608 07:43:20.768618 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0237181 (* 1 = 0.0237181 loss)\nI0608 07:43:20.768625 13573 sgd_solver.cpp:106] Iteration 60140, lr = 0.0001\nI0608 07:43:33.499202 13573 solver.cpp:229] Iteration 60160, loss = 0.398404\nI0608 07:43:33.499287 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0336904 (* 1 = 0.0336904 loss)\nI0608 07:43:33.499297 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.06146 (* 1 = 0.06146 loss)\nI0608 07:43:33.499303 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0999921 (* 1 = 0.0999921 loss)\nI0608 07:43:33.499310 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0537684 (* 1 = 0.0537684 loss)\nI0608 07:43:33.499315 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100308 (* 1 = 0.100308 loss)\nI0608 07:43:33.499321 13573 solver.cpp:245]     Train net output 
#5: rpn_loss_bbox = 0.0537754 (* 1 = 0.0537754 loss)\nI0608 07:43:33.499328 13573 sgd_solver.cpp:106] Iteration 60160, lr = 0.0001\nI0608 07:43:46.222074 13573 solver.cpp:229] Iteration 60180, loss = 0.61212\nI0608 07:43:46.222136 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0497529 (* 1 = 0.0497529 loss)\nI0608 07:43:46.222146 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0596035 (* 1 = 0.0596035 loss)\nI0608 07:43:46.222151 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.189881 (* 1 = 0.189881 loss)\nI0608 07:43:46.222158 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0328478 (* 1 = 0.0328478 loss)\nI0608 07:43:46.222164 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187401 (* 1 = 0.187401 loss)\nI0608 07:43:46.222172 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0328484 (* 1 = 0.0328484 loss)\nI0608 07:43:46.222178 13573 sgd_solver.cpp:106] Iteration 60180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:43:59.061678 13573 solver.cpp:229] Iteration 60200, loss = 0.802998\nI0608 07:43:59.061741 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.09329 (* 1 = 0.09329 loss)\nI0608 07:43:59.061750 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.131109 (* 1 = 0.131109 loss)\nI0608 07:43:59.061756 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.386277 (* 1 = 0.386277 loss)\nI0608 07:43:59.061763 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0927856 (* 1 = 0.0927856 loss)\nI0608 07:43:59.061769 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.387015 (* 1 = 0.387015 loss)\nI0608 07:43:59.061774 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0927785 (* 1 = 0.0927785 loss)\nI0608 07:43:59.061780 13573 sgd_solver.cpp:106] Iteration 60200, lr = 0.0001\nI0608 07:44:11.870067 13573 solver.cpp:229] Iteration 60220, loss = 0.408418\nI0608 07:44:11.870141 13573 solver.cpp:245]     
Train net output #0: loss_bbox = 0.116545 (* 1 = 0.116545 loss)\nI0608 07:44:11.870149 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0346724 (* 1 = 0.0346724 loss)\nI0608 07:44:11.870157 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.163819 (* 1 = 0.163819 loss)\nI0608 07:44:11.870163 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0373533 (* 1 = 0.0373533 loss)\nI0608 07:44:11.870169 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.151662 (* 1 = 0.151662 loss)\nI0608 07:44:11.870174 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0373558 (* 1 = 0.0373558 loss)\nI0608 07:44:11.870182 13573 sgd_solver.cpp:106] Iteration 60220, lr = 0.0001\nI0608 07:44:24.844964 13573 solver.cpp:229] Iteration 60240, loss = 0.831808\nI0608 07:44:24.845048 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0832083 (* 1 = 0.0832083 loss)\nI0608 07:44:24.845059 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0808225 (* 1 = 0.0808225 loss)\nI0608 07:44:24.845064 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.259895 (* 1 = 0.259895 loss)\nI0608 07:44:24.845070 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0687684 (* 1 = 0.0687684 loss)\nI0608 07:44:24.845077 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.265831 (* 1 = 0.265831 loss)\nI0608 07:44:24.845082 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0687852 (* 1 = 0.0687852 loss)\nI0608 07:44:24.845088 13573 sgd_solver.cpp:106] Iteration 60240, lr = 0.0001\nI0608 07:44:37.759788 13573 solver.cpp:229] Iteration 60260, loss = 0.459766\nI0608 07:44:37.759858 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0144102 (* 1 = 0.0144102 loss)\nI0608 07:44:37.759868 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0491075 (* 1 = 0.0491075 loss)\nI0608 07:44:37.759874 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0837257 
(* 1 = 0.0837257 loss)\nI0608 07:44:37.759881 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0396966 (* 1 = 0.0396966 loss)\nI0608 07:44:37.759886 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0832718 (* 1 = 0.0832718 loss)\nI0608 07:44:37.759891 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0397003 (* 1 = 0.0397003 loss)\nI0608 07:44:37.759898 13573 sgd_solver.cpp:106] Iteration 60260, lr = 0.0001\nI0608 07:44:50.504899 13573 solver.cpp:229] Iteration 60280, loss = 0.340484\nI0608 07:44:50.504964 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000290105 (* 1 = 0.000290105 loss)\nI0608 07:44:50.504974 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000274983 (* 1 = 0.000274983 loss)\nI0608 07:44:50.504981 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.184029 (* 1 = 0.184029 loss)\nI0608 07:44:50.504987 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00878015 (* 1 = 0.00878015 loss)\nI0608 07:44:50.504992 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.191579 (* 1 = 0.191579 loss)\nI0608 07:44:50.504999 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00878045 (* 1 = 0.00878045 loss)\nI0608 07:44:50.505007 13573 sgd_solver.cpp:106] Iteration 60280, lr = 0.0001\nI0608 07:45:03.347645 13573 solver.cpp:229] Iteration 60300, loss = 0.641803\nI0608 07:45:03.347713 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.057846 (* 1 = 0.057846 loss)\nI0608 07:45:03.347723 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0641811 (* 1 = 0.0641811 loss)\nI0608 07:45:03.347729 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0832131 (* 1 = 0.0832131 loss)\nI0608 07:45:03.347735 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0117287 (* 1 = 0.0117287 loss)\nI0608 07:45:03.347740 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0993842 (* 1 = 0.0993842 
loss)\nI0608 07:45:03.347746 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117242 (* 1 = 0.0117242 loss)\nI0608 07:45:03.347753 13573 sgd_solver.cpp:106] Iteration 60300, lr = 0.0001\nI0608 07:45:16.281869 13573 solver.cpp:229] Iteration 60320, loss = 0.70876\nI0608 07:45:16.281934 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.126997 (* 1 = 0.126997 loss)\nI0608 07:45:16.281944 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0539051 (* 1 = 0.0539051 loss)\nI0608 07:45:16.281949 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.334407 (* 1 = 0.334407 loss)\nI0608 07:45:16.281955 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0623025 (* 1 = 0.0623025 loss)\nI0608 07:45:16.281961 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.33498 (* 1 = 0.33498 loss)\nI0608 07:45:16.281967 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0622919 (* 1 = 0.0622919 loss)\nI0608 07:45:16.281975 13573 sgd_solver.cpp:106] Iteration 60320, lr = 0.0001\nI0608 07:45:29.035387 13573 solver.cpp:229] Iteration 60340, loss = 0.348114\nI0608 07:45:29.035462 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0394403 (* 1 = 0.0394403 loss)\nI0608 07:45:29.035471 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0169003 (* 1 = 0.0169003 loss)\nI0608 07:45:29.035478 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0911426 (* 1 = 0.0911426 loss)\nI0608 07:45:29.035483 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.02649 (* 1 = 0.02649 loss)\nI0608 07:45:29.035490 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0954131 (* 1 = 0.0954131 loss)\nI0608 07:45:29.035495 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0264859 (* 1 = 0.0264859 loss)\nI0608 07:45:29.035501 13573 sgd_solver.cpp:106] Iteration 60340, lr = 0.0001\nI0608 07:45:41.928207 13573 solver.cpp:229] Iteration 60360, loss = 
0.945011
I0608 07:45:41.928274 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0743118 (* 1 = 0.0743118 loss)
I0608 07:45:41.928284 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0602721 (* 1 = 0.0602721 loss)
I0608 07:45:41.928290 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.331412 (* 1 = 0.331412 loss)
I0608 07:45:41.928295 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0258066 (* 1 = 0.0258066 loss)
I0608 07:45:41.928302 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.311145 (* 1 = 0.311145 loss)
I0608 07:45:41.928306 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0258095 (* 1 = 0.0258095 loss)
I0608 07:45:41.928313 13573 sgd_solver.cpp:106] Iteration 60360, lr = 0.0001
[... the same six train-net outputs are logged every 20 iterations through Iteration 61320, lr = 0.0001, speed: 0.645s / iter ...]
I0608 07:56:00.936923 
13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0855205 (* 1 = 0.0855205 loss)\nI0608 07:56:00.936928 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0710263 (* 1 = 0.0710263 loss)\nI0608 07:56:00.936934 13573 sgd_solver.cpp:106] Iteration 61320, lr = 0.0001\nI0608 07:56:13.694157 13573 solver.cpp:229] Iteration 61340, loss = 0.723846\nI0608 07:56:13.694221 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0215613 (* 1 = 0.0215613 loss)\nI0608 07:56:13.694229 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0618026 (* 1 = 0.0618026 loss)\nI0608 07:56:13.694236 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.4027 (* 1 = 0.4027 loss)\nI0608 07:56:13.694242 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.112006 (* 1 = 0.112006 loss)\nI0608 07:56:13.694247 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.41462 (* 1 = 0.41462 loss)\nI0608 07:56:13.694252 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.111999 (* 1 = 0.111999 loss)\nI0608 07:56:13.694259 13573 sgd_solver.cpp:106] Iteration 61340, lr = 0.0001\nI0608 07:56:26.405336 13573 solver.cpp:229] Iteration 61360, loss = 1.93251\nI0608 07:56:26.405459 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00171279 (* 1 = 0.00171279 loss)\nI0608 07:56:26.405469 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000246093 (* 1 = 0.000246093 loss)\nI0608 07:56:26.405475 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.897341 (* 1 = 0.897341 loss)\nI0608 07:56:26.405481 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.487684 (* 1 = 0.487684 loss)\nI0608 07:56:26.405486 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.904795 (* 1 = 0.904795 loss)\nI0608 07:56:26.405493 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.487662 (* 1 = 0.487662 loss)\nI0608 07:56:26.405499 13573 sgd_solver.cpp:106] Iteration 61360, lr = 
0.0001\nI0608 07:56:39.290671 13573 solver.cpp:229] Iteration 61380, loss = 1.52956\nI0608 07:56:39.290750 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.100483 (* 1 = 0.100483 loss)\nI0608 07:56:39.290760 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.120161 (* 1 = 0.120161 loss)\nI0608 07:56:39.290766 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.639462 (* 1 = 0.639462 loss)\nI0608 07:56:39.290786 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.12356 (* 1 = 0.12356 loss)\nI0608 07:56:39.290793 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.63227 (* 1 = 0.63227 loss)\nI0608 07:56:39.290799 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.123564 (* 1 = 0.123564 loss)\nI0608 07:56:39.290807 13573 sgd_solver.cpp:106] Iteration 61380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:56:51.950074 13573 solver.cpp:229] Iteration 61400, loss = 1.06361\nI0608 07:56:51.950134 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0290618 (* 1 = 0.0290618 loss)\nI0608 07:56:51.950145 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0282838 (* 1 = 0.0282838 loss)\nI0608 07:56:51.950150 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.474396 (* 1 = 0.474396 loss)\nI0608 07:56:51.950156 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.17679 (* 1 = 0.17679 loss)\nI0608 07:56:51.950162 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.472588 (* 1 = 0.472588 loss)\nI0608 07:56:51.950168 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.176789 (* 1 = 0.176789 loss)\nI0608 07:56:51.950175 13573 sgd_solver.cpp:106] Iteration 61400, lr = 0.0001\nI0608 07:57:04.927850 13573 solver.cpp:229] Iteration 61420, loss = 0.855403\nI0608 07:57:04.927923 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.274708 (* 1 = 0.274708 loss)\nI0608 07:57:04.927953 13573 solver.cpp:245]     Train net output #1: 
loss_cls = 0.104328 (* 1 = 0.104328 loss)\nI0608 07:57:04.927960 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.311068 (* 1 = 0.311068 loss)\nI0608 07:57:04.927966 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0720742 (* 1 = 0.0720742 loss)\nI0608 07:57:04.927973 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.312432 (* 1 = 0.312432 loss)\nI0608 07:57:04.927978 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.072073 (* 1 = 0.072073 loss)\nI0608 07:57:04.927984 13573 sgd_solver.cpp:106] Iteration 61420, lr = 0.0001\nI0608 07:57:17.759909 13573 solver.cpp:229] Iteration 61440, loss = 0.494362\nI0608 07:57:17.759994 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0435161 (* 1 = 0.0435161 loss)\nI0608 07:57:17.760005 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0223966 (* 1 = 0.0223966 loss)\nI0608 07:57:17.760011 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137437 (* 1 = 0.137437 loss)\nI0608 07:57:17.760017 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.02065 (* 1 = 0.02065 loss)\nI0608 07:57:17.760023 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.133274 (* 1 = 0.133274 loss)\nI0608 07:57:17.760028 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0206513 (* 1 = 0.0206513 loss)\nI0608 07:57:17.760035 13573 sgd_solver.cpp:106] Iteration 61440, lr = 0.0001\nI0608 07:57:30.850342 13573 solver.cpp:229] Iteration 61460, loss = 0.53689\nI0608 07:57:30.850404 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0090059 (* 1 = 0.0090059 loss)\nI0608 07:57:30.850414 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0145739 (* 1 = 0.0145739 loss)\nI0608 07:57:30.850419 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.29407 (* 1 = 0.29407 loss)\nI0608 07:57:30.850425 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0385909 (* 1 = 0.0385909 
loss)\nI0608 07:57:30.850431 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.320473 (* 1 = 0.320473 loss)\nI0608 07:57:30.850437 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0385849 (* 1 = 0.0385849 loss)\nI0608 07:57:30.850443 13573 sgd_solver.cpp:106] Iteration 61460, lr = 0.0001\nI0608 07:57:43.643625 13573 solver.cpp:229] Iteration 61480, loss = 0.6154\nI0608 07:57:43.643755 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000466691 (* 1 = 0.000466691 loss)\nI0608 07:57:43.643765 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00826569 (* 1 = 0.00826569 loss)\nI0608 07:57:43.643772 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.282145 (* 1 = 0.282145 loss)\nI0608 07:57:43.643779 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0176357 (* 1 = 0.0176357 loss)\nI0608 07:57:43.643784 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.283815 (* 1 = 0.283815 loss)\nI0608 07:57:43.643790 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0176324 (* 1 = 0.0176324 loss)\nI0608 07:57:43.643796 13573 sgd_solver.cpp:106] Iteration 61480, lr = 0.0001\nI0608 07:57:56.432545 13573 solver.cpp:229] Iteration 61500, loss = 0.674029\nI0608 07:57:56.432607 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0565545 (* 1 = 0.0565545 loss)\nI0608 07:57:56.432616 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0192242 (* 1 = 0.0192242 loss)\nI0608 07:57:56.432622 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.353219 (* 1 = 0.353219 loss)\nI0608 07:57:56.432628 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0448146 (* 1 = 0.0448146 loss)\nI0608 07:57:56.432634 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.340645 (* 1 = 0.340645 loss)\nI0608 07:57:56.432641 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0448284 (* 1 = 0.0448284 loss)\nI0608 07:57:56.432646 13573 
sgd_solver.cpp:106] Iteration 61500, lr = 0.0001\nI0608 07:58:09.433014 13573 solver.cpp:229] Iteration 61520, loss = 0.927475\nI0608 07:58:09.433080 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0772654 (* 1 = 0.0772654 loss)\nI0608 07:58:09.433090 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.096247 (* 1 = 0.096247 loss)\nI0608 07:58:09.433096 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.373539 (* 1 = 0.373539 loss)\nI0608 07:58:09.433101 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.113755 (* 1 = 0.113755 loss)\nI0608 07:58:09.433107 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.374825 (* 1 = 0.374825 loss)\nI0608 07:58:09.433113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.113739 (* 1 = 0.113739 loss)\nI0608 07:58:09.433120 13573 sgd_solver.cpp:106] Iteration 61520, lr = 0.0001\nI0608 07:58:22.266100 13573 solver.cpp:229] Iteration 61540, loss = 0.575112\nI0608 07:58:22.266175 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0708911 (* 1 = 0.0708911 loss)\nI0608 07:58:22.266183 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0907026 (* 1 = 0.0907026 loss)\nI0608 07:58:22.266189 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.212926 (* 1 = 0.212926 loss)\nI0608 07:58:22.266194 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0567276 (* 1 = 0.0567276 loss)\nI0608 07:58:22.266201 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.210021 (* 1 = 0.210021 loss)\nI0608 07:58:22.266206 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0567188 (* 1 = 0.0567188 loss)\nI0608 07:58:22.266212 13573 sgd_solver.cpp:106] Iteration 61540, lr = 0.0001\nI0608 07:58:35.442109 13573 solver.cpp:229] Iteration 61560, loss = 0.716082\nI0608 07:58:35.442174 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0286695 (* 1 = 0.0286695 loss)\nI0608 07:58:35.442184 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.0353204 (* 1 = 0.0353204 loss)\nI0608 07:58:35.442190 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.126944 (* 1 = 0.126944 loss)\nI0608 07:58:35.442198 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.203409 (* 1 = 0.203409 loss)\nI0608 07:58:35.442203 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.125771 (* 1 = 0.125771 loss)\nI0608 07:58:35.442209 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.203987 (* 1 = 0.203987 loss)\nI0608 07:58:35.442217 13573 sgd_solver.cpp:106] Iteration 61560, lr = 0.0001\nI0608 07:58:48.153192 13573 solver.cpp:229] Iteration 61580, loss = 0.879626\nI0608 07:58:48.153259 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0402305 (* 1 = 0.0402305 loss)\nI0608 07:58:48.153270 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0261365 (* 1 = 0.0261365 loss)\nI0608 07:58:48.153276 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.256158 (* 1 = 0.256158 loss)\nI0608 07:58:48.153281 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0176466 (* 1 = 0.0176466 loss)\nI0608 07:58:48.153287 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.246092 (* 1 = 0.246092 loss)\nI0608 07:58:48.153293 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0176449 (* 1 = 0.0176449 loss)\nI0608 07:58:48.153300 13573 sgd_solver.cpp:106] Iteration 61580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 07:59:00.957085 13573 solver.cpp:229] Iteration 61600, loss = 0.415839\nI0608 07:59:00.957167 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0424346 (* 1 = 0.0424346 loss)\nI0608 07:59:00.957177 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0924071 (* 1 = 0.0924071 loss)\nI0608 07:59:00.957183 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.150931 (* 1 = 0.150931 loss)\nI0608 07:59:00.957190 13573 solver.cpp:245]     Train 
net output #3: p2_rpn_loss_bbox = 0.0424203 (* 1 = 0.0424203 loss)\nI0608 07:59:00.957196 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.152624 (* 1 = 0.152624 loss)\nI0608 07:59:00.957202 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.042429 (* 1 = 0.042429 loss)\nI0608 07:59:00.957209 13573 sgd_solver.cpp:106] Iteration 61600, lr = 0.0001\nI0608 07:59:13.751011 13573 solver.cpp:229] Iteration 61620, loss = 0.334049\nI0608 07:59:13.751075 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0579542 (* 1 = 0.0579542 loss)\nI0608 07:59:13.751083 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0235464 (* 1 = 0.0235464 loss)\nI0608 07:59:13.751091 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0877473 (* 1 = 0.0877473 loss)\nI0608 07:59:13.751096 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0164858 (* 1 = 0.0164858 loss)\nI0608 07:59:13.751101 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0852069 (* 1 = 0.0852069 loss)\nI0608 07:59:13.751107 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0164894 (* 1 = 0.0164894 loss)\nI0608 07:59:13.751113 13573 sgd_solver.cpp:106] Iteration 61620, lr = 0.0001\nI0608 07:59:26.618839 13573 solver.cpp:229] Iteration 61640, loss = 1.07117\nI0608 07:59:26.618906 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.113498 (* 1 = 0.113498 loss)\nI0608 07:59:26.618916 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0697045 (* 1 = 0.0697045 loss)\nI0608 07:59:26.618921 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.429029 (* 1 = 0.429029 loss)\nI0608 07:59:26.618927 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0796076 (* 1 = 0.0796076 loss)\nI0608 07:59:26.618932 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408007 (* 1 = 0.408007 loss)\nI0608 07:59:26.618937 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0796054 
(* 1 = 0.0796054 loss)\nI0608 07:59:26.618944 13573 sgd_solver.cpp:106] Iteration 61640, lr = 0.0001\nI0608 07:59:39.601737 13573 solver.cpp:229] Iteration 61660, loss = 1.25386\nI0608 07:59:39.601797 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.167861 (* 1 = 0.167861 loss)\nI0608 07:59:39.601807 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.167671 (* 1 = 0.167671 loss)\nI0608 07:59:39.601814 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.569566 (* 1 = 0.569566 loss)\nI0608 07:59:39.601819 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.156372 (* 1 = 0.156372 loss)\nI0608 07:59:39.601824 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.56979 (* 1 = 0.56979 loss)\nI0608 07:59:39.601830 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.156375 (* 1 = 0.156375 loss)\nI0608 07:59:39.601836 13573 sgd_solver.cpp:106] Iteration 61660, lr = 0.0001\nI0608 07:59:52.368794 13573 solver.cpp:229] Iteration 61680, loss = 0.86129\nI0608 07:59:52.368872 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0248223 (* 1 = 0.0248223 loss)\nI0608 07:59:52.368882 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.012881 (* 1 = 0.012881 loss)\nI0608 07:59:52.368888 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.470767 (* 1 = 0.470767 loss)\nI0608 07:59:52.368893 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0544954 (* 1 = 0.0544954 loss)\nI0608 07:59:52.368899 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.446235 (* 1 = 0.446235 loss)\nI0608 07:59:52.368904 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0544783 (* 1 = 0.0544783 loss)\nI0608 07:59:52.368911 13573 sgd_solver.cpp:106] Iteration 61680, lr = 0.0001\nI0608 08:00:05.347323 13573 solver.cpp:229] Iteration 61700, loss = 0.626651\nI0608 08:00:05.347391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.02945 (* 1 = 0.02945 
loss)\nI0608 08:00:05.347401 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0397476 (* 1 = 0.0397476 loss)\nI0608 08:00:05.347407 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.121533 (* 1 = 0.121533 loss)\nI0608 08:00:05.347414 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0231729 (* 1 = 0.0231729 loss)\nI0608 08:00:05.347419 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.13716 (* 1 = 0.13716 loss)\nI0608 08:00:05.347424 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0231717 (* 1 = 0.0231717 loss)\nI0608 08:00:05.347430 13573 sgd_solver.cpp:106] Iteration 61700, lr = 0.0001\nI0608 08:00:18.221433 13573 solver.cpp:229] Iteration 61720, loss = 0.788468\nI0608 08:00:18.221501 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000182839 (* 1 = 0.000182839 loss)\nI0608 08:00:18.221511 13573 solver.cpp:245]     Train net output #1: loss_cls = 4.86579e-05 (* 1 = 4.86579e-05 loss)\nI0608 08:00:18.221518 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.213338 (* 1 = 0.213338 loss)\nI0608 08:00:18.221524 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0335842 (* 1 = 0.0335842 loss)\nI0608 08:00:18.221530 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.250856 (* 1 = 0.250856 loss)\nI0608 08:00:18.221536 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0335996 (* 1 = 0.0335996 loss)\nI0608 08:00:18.221544 13573 sgd_solver.cpp:106] Iteration 61720, lr = 0.0001\nI0608 08:00:31.067893 13573 solver.cpp:229] Iteration 61740, loss = 0.56004\nI0608 08:00:31.067975 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0597515 (* 1 = 0.0597515 loss)\nI0608 08:00:31.067984 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0640351 (* 1 = 0.0640351 loss)\nI0608 08:00:31.067991 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.104909 (* 1 = 0.104909 loss)\nI0608 08:00:31.067997 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0566644 (* 1 = 0.0566644 loss)\nI0608 08:00:31.068003 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0957796 (* 1 = 0.0957796 loss)\nI0608 08:00:31.068009 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0566692 (* 1 = 0.0566692 loss)\nI0608 08:00:31.068017 13573 sgd_solver.cpp:106] Iteration 61740, lr = 0.0001\nI0608 08:00:43.956708 13573 solver.cpp:229] Iteration 61760, loss = 0.352572\nI0608 08:00:43.956776 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0228967 (* 1 = 0.0228967 loss)\nI0608 08:00:43.956786 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0157255 (* 1 = 0.0157255 loss)\nI0608 08:00:43.956791 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0992284 (* 1 = 0.0992284 loss)\nI0608 08:00:43.956797 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0333055 (* 1 = 0.0333055 loss)\nI0608 08:00:43.956804 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0984415 (* 1 = 0.0984415 loss)\nI0608 08:00:43.956809 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0333013 (* 1 = 0.0333013 loss)\nI0608 08:00:43.956815 13573 sgd_solver.cpp:106] Iteration 61760, lr = 0.0001\nI0608 08:00:56.956306 13573 solver.cpp:229] Iteration 61780, loss = 0.588519\nI0608 08:00:56.956372 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0878678 (* 1 = 0.0878678 loss)\nI0608 08:00:56.956380 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0408037 (* 1 = 0.0408037 loss)\nI0608 08:00:56.956387 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.229618 (* 1 = 0.229618 loss)\nI0608 08:00:56.956393 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0229889 (* 1 = 0.0229889 loss)\nI0608 08:00:56.956398 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.215897 (* 1 = 0.215897 loss)\nI0608 08:00:56.956403 13573 solver.cpp:245]     Train net 
output #5: rpn_loss_bbox = 0.0229879 (* 1 = 0.0229879 loss)\nI0608 08:00:56.956410 13573 sgd_solver.cpp:106] Iteration 61780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:01:09.790699 13573 solver.cpp:229] Iteration 61800, loss = 0.828402\nI0608 08:01:09.790783 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00296519 (* 1 = 0.00296519 loss)\nI0608 08:01:09.790794 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0027715 (* 1 = 0.0027715 loss)\nI0608 08:01:09.790801 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.370292 (* 1 = 0.370292 loss)\nI0608 08:01:09.790807 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0240621 (* 1 = 0.0240621 loss)\nI0608 08:01:09.790812 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.37124 (* 1 = 0.37124 loss)\nI0608 08:01:09.790817 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0240651 (* 1 = 0.0240651 loss)\nI0608 08:01:09.790824 13573 sgd_solver.cpp:106] Iteration 61800, lr = 0.0001\nI0608 08:01:22.690282 13573 solver.cpp:229] Iteration 61820, loss = 0.999444\nI0608 08:01:22.690347 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.185556 (* 1 = 0.185556 loss)\nI0608 08:01:22.690357 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0385671 (* 1 = 0.0385671 loss)\nI0608 08:01:22.690362 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174525 (* 1 = 0.174525 loss)\nI0608 08:01:22.690368 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0234195 (* 1 = 0.0234195 loss)\nI0608 08:01:22.690374 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.181712 (* 1 = 0.181712 loss)\nI0608 08:01:22.690381 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0234229 (* 1 = 0.0234229 loss)\nI0608 08:01:22.690387 13573 sgd_solver.cpp:106] Iteration 61820, lr = 0.0001\nI0608 08:01:35.616101 13573 solver.cpp:229] Iteration 61840, loss = 1.19423\nI0608 08:01:35.616168 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.0603684 (* 1 = 0.0603684 loss)\nI0608 08:01:35.616178 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0490865 (* 1 = 0.0490865 loss)\nI0608 08:01:35.616184 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0945705 (* 1 = 0.0945705 loss)\nI0608 08:01:35.616190 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.052057 (* 1 = 0.052057 loss)\nI0608 08:01:35.616195 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102243 (* 1 = 0.102243 loss)\nI0608 08:01:35.616201 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0520612 (* 1 = 0.0520612 loss)\nI0608 08:01:35.616207 13573 sgd_solver.cpp:106] Iteration 61840, lr = 0.0001\nI0608 08:01:48.364750 13573 solver.cpp:229] Iteration 61860, loss = 0.362197\nI0608 08:01:48.364821 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0272592 (* 1 = 0.0272592 loss)\nI0608 08:01:48.364831 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0817496 (* 1 = 0.0817496 loss)\nI0608 08:01:48.364838 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0886487 (* 1 = 0.0886487 loss)\nI0608 08:01:48.364845 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0141517 (* 1 = 0.0141517 loss)\nI0608 08:01:48.364850 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0789769 (* 1 = 0.0789769 loss)\nI0608 08:01:48.364856 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0141515 (* 1 = 0.0141515 loss)\nI0608 08:01:48.364862 13573 sgd_solver.cpp:106] Iteration 61860, lr = 0.0001\nI0608 08:02:01.045783 13573 solver.cpp:229] Iteration 61880, loss = 0.510533\nI0608 08:02:01.045850 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0378752 (* 1 = 0.0378752 loss)\nI0608 08:02:01.045861 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0202201 (* 1 = 0.0202201 loss)\nI0608 08:02:01.045867 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.0917118 (* 1 = 0.0917118 loss)\nI0608 08:02:01.045874 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0301572 (* 1 = 0.0301572 loss)\nI0608 08:02:01.045881 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0960167 (* 1 = 0.0960167 loss)\nI0608 08:02:01.045886 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0301596 (* 1 = 0.0301596 loss)\nI0608 08:02:01.045895 13573 sgd_solver.cpp:106] Iteration 61880, lr = 0.0001\nI0608 08:02:13.949061 13573 solver.cpp:229] Iteration 61900, loss = 1.05317\nI0608 08:02:13.949129 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0791552 (* 1 = 0.0791552 loss)\nI0608 08:02:13.949141 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0792076 (* 1 = 0.0792076 loss)\nI0608 08:02:13.949146 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.756281 (* 1 = 0.756281 loss)\nI0608 08:02:13.949153 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0991701 (* 1 = 0.0991701 loss)\nI0608 08:02:13.949159 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.688282 (* 1 = 0.688282 loss)\nI0608 08:02:13.949165 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0991843 (* 1 = 0.0991843 loss)\nI0608 08:02:13.949172 13573 sgd_solver.cpp:106] Iteration 61900, lr = 0.0001\nI0608 08:02:26.897100 13573 solver.cpp:229] Iteration 61920, loss = 0.766419\nI0608 08:02:26.897173 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0583797 (* 1 = 0.0583797 loss)\nI0608 08:02:26.897183 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0184528 (* 1 = 0.0184528 loss)\nI0608 08:02:26.897191 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.179308 (* 1 = 0.179308 loss)\nI0608 08:02:26.897197 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.047066 (* 1 = 0.047066 loss)\nI0608 08:02:26.897203 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.176223 (* 1 = 
[Caffe training log excerpt: iterations 61920–62900, lr = 0.0001, speed ≈ 0.645 s/iter. Each logged iteration reports six loss terms (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox), all with weight 1; the total loss fluctuates between roughly 0.28 and 1.50. One representative iteration:]

I0608 08:02:39.810192 13573 solver.cpp:229] Iteration 61940, loss = 0.51435
I0608 08:02:39.810263 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0156718 (* 1 = 0.0156718 loss)
I0608 08:02:39.810273 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0365304 (* 1 = 0.0365304 loss)
I0608 08:02:39.810279 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.21144 (* 1 = 0.21144 loss)
I0608 08:02:39.810286 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0125842 (* 1 = 0.0125842 loss)
I0608 08:02:39.810292 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213672 (* 1 = 0.213672 loss)
I0608 08:02:39.810297 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0125848 (* 1 = 0.0125848 loss)
I0608 08:02:39.810305 13573 sgd_solver.cpp:106] Iteration 61940, lr = 0.0001
= 0.0567174 loss)\nI0608 08:12:58.159428 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.151477 (* 1 = 0.151477 loss)\nI0608 08:12:58.159435 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0198103 (* 1 = 0.0198103 loss)\nI0608 08:12:58.159440 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.153494 (* 1 = 0.153494 loss)\nI0608 08:12:58.159447 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0198098 (* 1 = 0.0198098 loss)\nI0608 08:12:58.159454 13573 sgd_solver.cpp:106] Iteration 62900, lr = 0.0001\nI0608 08:13:11.158251 13573 solver.cpp:229] Iteration 62920, loss = 0.789913\nI0608 08:13:11.158347 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00754656 (* 1 = 0.00754656 loss)\nI0608 08:13:11.158360 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0107791 (* 1 = 0.0107791 loss)\nI0608 08:13:11.158365 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407449 (* 1 = 0.407449 loss)\nI0608 08:13:11.158372 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0833984 (* 1 = 0.0833984 loss)\nI0608 08:13:11.158378 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.428899 (* 1 = 0.428899 loss)\nI0608 08:13:11.158385 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0834112 (* 1 = 0.0834112 loss)\nI0608 08:13:11.158393 13573 sgd_solver.cpp:106] Iteration 62920, lr = 0.0001\nI0608 08:13:24.323302 13573 solver.cpp:229] Iteration 62940, loss = 0.26067\nI0608 08:13:24.323379 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0577402 (* 1 = 0.0577402 loss)\nI0608 08:13:24.323388 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0501793 (* 1 = 0.0501793 loss)\nI0608 08:13:24.323395 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0862081 (* 1 = 0.0862081 loss)\nI0608 08:13:24.323400 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0120062 (* 1 = 0.0120062 loss)\nI0608 
08:13:24.323406 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0854785 (* 1 = 0.0854785 loss)\nI0608 08:13:24.323412 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0120061 (* 1 = 0.0120061 loss)\nI0608 08:13:24.323421 13573 sgd_solver.cpp:106] Iteration 62940, lr = 0.0001\nI0608 08:13:37.318246 13573 solver.cpp:229] Iteration 62960, loss = 0.571087\nI0608 08:13:37.318315 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0241476 (* 1 = 0.0241476 loss)\nI0608 08:13:37.318325 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0314789 (* 1 = 0.0314789 loss)\nI0608 08:13:37.318331 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0913638 (* 1 = 0.0913638 loss)\nI0608 08:13:37.318337 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0198645 (* 1 = 0.0198645 loss)\nI0608 08:13:37.318343 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0882539 (* 1 = 0.0882539 loss)\nI0608 08:13:37.318349 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0198587 (* 1 = 0.0198587 loss)\nI0608 08:13:37.318356 13573 sgd_solver.cpp:106] Iteration 62960, lr = 0.0001\nI0608 08:13:50.269089 13573 solver.cpp:229] Iteration 62980, loss = 0.354639\nI0608 08:13:50.269155 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0390315 (* 1 = 0.0390315 loss)\nI0608 08:13:50.269165 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0159773 (* 1 = 0.0159773 loss)\nI0608 08:13:50.269171 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.14791 (* 1 = 0.14791 loss)\nI0608 08:13:50.269177 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00248214 (* 1 = 0.00248214 loss)\nI0608 08:13:50.269182 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139443 (* 1 = 0.139443 loss)\nI0608 08:13:50.269188 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00248218 (* 1 = 0.00248218 loss)\nI0608 08:13:50.269196 13573 
sgd_solver.cpp:106] Iteration 62980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:14:03.143785 13573 solver.cpp:229] Iteration 63000, loss = 0.882627\nI0608 08:14:03.143860 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0858872 (* 1 = 0.0858872 loss)\nI0608 08:14:03.143870 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0635866 (* 1 = 0.0635866 loss)\nI0608 08:14:03.143877 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.335153 (* 1 = 0.335153 loss)\nI0608 08:14:03.143882 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.1954 (* 1 = 0.1954 loss)\nI0608 08:14:03.143887 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.335077 (* 1 = 0.335077 loss)\nI0608 08:14:03.143893 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.195403 (* 1 = 0.195403 loss)\nI0608 08:14:03.143899 13573 sgd_solver.cpp:106] Iteration 63000, lr = 0.0001\nI0608 08:14:16.072358 13573 solver.cpp:229] Iteration 63020, loss = 0.719999\nI0608 08:14:16.072422 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00701025 (* 1 = 0.00701025 loss)\nI0608 08:14:16.072434 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0166205 (* 1 = 0.0166205 loss)\nI0608 08:14:16.072441 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.353288 (* 1 = 0.353288 loss)\nI0608 08:14:16.072448 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0762006 (* 1 = 0.0762006 loss)\nI0608 08:14:16.072455 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.366492 (* 1 = 0.366492 loss)\nI0608 08:14:16.072463 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0761898 (* 1 = 0.0761898 loss)\nI0608 08:14:16.072469 13573 sgd_solver.cpp:106] Iteration 63020, lr = 0.0001\nI0608 08:14:29.148320 13573 solver.cpp:229] Iteration 63040, loss = 0.377276\nI0608 08:14:29.148386 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0863305 (* 1 = 0.0863305 loss)\nI0608 
08:14:29.148394 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0892993 (* 1 = 0.0892993 loss)\nI0608 08:14:29.148401 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0983257 (* 1 = 0.0983257 loss)\nI0608 08:14:29.148407 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0187772 (* 1 = 0.0187772 loss)\nI0608 08:14:29.148412 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0993872 (* 1 = 0.0993872 loss)\nI0608 08:14:29.148419 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0187748 (* 1 = 0.0187748 loss)\nI0608 08:14:29.148427 13573 sgd_solver.cpp:106] Iteration 63040, lr = 0.0001\nI0608 08:14:42.059784 13573 solver.cpp:229] Iteration 63060, loss = 1.03327\nI0608 08:14:42.059855 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0161495 (* 1 = 0.0161495 loss)\nI0608 08:14:42.059865 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00387571 (* 1 = 0.00387571 loss)\nI0608 08:14:42.059872 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.504003 (* 1 = 0.504003 loss)\nI0608 08:14:42.059877 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.143959 (* 1 = 0.143959 loss)\nI0608 08:14:42.059883 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.499347 (* 1 = 0.499347 loss)\nI0608 08:14:42.059890 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.143947 (* 1 = 0.143947 loss)\nI0608 08:14:42.059896 13573 sgd_solver.cpp:106] Iteration 63060, lr = 0.0001\nI0608 08:14:54.871126 13573 solver.cpp:229] Iteration 63080, loss = 0.939788\nI0608 08:14:54.871188 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0779758 (* 1 = 0.0779758 loss)\nI0608 08:14:54.871197 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0230003 (* 1 = 0.0230003 loss)\nI0608 08:14:54.871204 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.385591 (* 1 = 0.385591 loss)\nI0608 08:14:54.871210 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.0998163 (* 1 = 0.0998163 loss)\nI0608 08:14:54.871215 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.380133 (* 1 = 0.380133 loss)\nI0608 08:14:54.871222 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0998364 (* 1 = 0.0998364 loss)\nI0608 08:14:54.871228 13573 sgd_solver.cpp:106] Iteration 63080, lr = 0.0001\nI0608 08:15:07.888478 13573 solver.cpp:229] Iteration 63100, loss = 0.378358\nI0608 08:15:07.888547 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0136609 (* 1 = 0.0136609 loss)\nI0608 08:15:07.888556 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0306058 (* 1 = 0.0306058 loss)\nI0608 08:15:07.888562 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22921 (* 1 = 0.22921 loss)\nI0608 08:15:07.888568 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00703955 (* 1 = 0.00703955 loss)\nI0608 08:15:07.888574 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.240868 (* 1 = 0.240868 loss)\nI0608 08:15:07.888581 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00703997 (* 1 = 0.00703997 loss)\nI0608 08:15:07.888587 13573 sgd_solver.cpp:106] Iteration 63100, lr = 0.0001\nI0608 08:15:20.489981 13573 solver.cpp:229] Iteration 63120, loss = 0.689692\nI0608 08:15:20.490056 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.149163 (* 1 = 0.149163 loss)\nI0608 08:15:20.490064 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.159172 (* 1 = 0.159172 loss)\nI0608 08:15:20.490072 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.287822 (* 1 = 0.287822 loss)\nI0608 08:15:20.490077 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0438627 (* 1 = 0.0438627 loss)\nI0608 08:15:20.490082 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.293339 (* 1 = 0.293339 loss)\nI0608 08:15:20.490088 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 
0.0438623 (* 1 = 0.0438623 loss)\nI0608 08:15:20.490095 13573 sgd_solver.cpp:106] Iteration 63120, lr = 0.0001\nI0608 08:15:33.378923 13573 solver.cpp:229] Iteration 63140, loss = 0.992225\nI0608 08:15:33.378989 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0210124 (* 1 = 0.0210124 loss)\nI0608 08:15:33.378999 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0271534 (* 1 = 0.0271534 loss)\nI0608 08:15:33.379006 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.250077 (* 1 = 0.250077 loss)\nI0608 08:15:33.379011 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0117194 (* 1 = 0.0117194 loss)\nI0608 08:15:33.379017 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197005 (* 1 = 0.197005 loss)\nI0608 08:15:33.379024 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117268 (* 1 = 0.0117268 loss)\nI0608 08:15:33.379030 13573 sgd_solver.cpp:106] Iteration 63140, lr = 0.0001\nI0608 08:15:46.213333 13573 solver.cpp:229] Iteration 63160, loss = 0.813423\nI0608 08:15:46.213392 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0621745 (* 1 = 0.0621745 loss)\nI0608 08:15:46.213402 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0447867 (* 1 = 0.0447867 loss)\nI0608 08:15:46.213409 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.182128 (* 1 = 0.182128 loss)\nI0608 08:15:46.213415 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0178142 (* 1 = 0.0178142 loss)\nI0608 08:15:46.213421 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.183293 (* 1 = 0.183293 loss)\nI0608 08:15:46.213428 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0178102 (* 1 = 0.0178102 loss)\nI0608 08:15:46.213491 13573 sgd_solver.cpp:106] Iteration 63160, lr = 0.0001\nI0608 08:15:59.237699 13573 solver.cpp:229] Iteration 63180, loss = 0.406123\nI0608 08:15:59.237776 13573 solver.cpp:245]     Train net output #0: loss_bbox = 
0.00883135 (* 1 = 0.00883135 loss)\nI0608 08:15:59.237785 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0109308 (* 1 = 0.0109308 loss)\nI0608 08:15:59.237792 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.077494 (* 1 = 0.077494 loss)\nI0608 08:15:59.237797 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0118662 (* 1 = 0.0118662 loss)\nI0608 08:15:59.237803 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0775945 (* 1 = 0.0775945 loss)\nI0608 08:15:59.237809 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0118656 (* 1 = 0.0118656 loss)\nI0608 08:15:59.237817 13573 sgd_solver.cpp:106] Iteration 63180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:16:12.115710 13573 solver.cpp:229] Iteration 63200, loss = 0.552184\nI0608 08:16:12.115779 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0923265 (* 1 = 0.0923265 loss)\nI0608 08:16:12.115789 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0793844 (* 1 = 0.0793844 loss)\nI0608 08:16:12.115795 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0955527 (* 1 = 0.0955527 loss)\nI0608 08:16:12.115802 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.018943 (* 1 = 0.018943 loss)\nI0608 08:16:12.115808 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0908418 (* 1 = 0.0908418 loss)\nI0608 08:16:12.115813 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0189457 (* 1 = 0.0189457 loss)\nI0608 08:16:12.115820 13573 sgd_solver.cpp:106] Iteration 63200, lr = 0.0001\nI0608 08:16:24.866189 13573 solver.cpp:229] Iteration 63220, loss = 0.344882\nI0608 08:16:24.866253 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0521103 (* 1 = 0.0521103 loss)\nI0608 08:16:24.866262 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0446534 (* 1 = 0.0446534 loss)\nI0608 08:16:24.866268 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0864389 (* 1 
= 0.0864389 loss)\nI0608 08:16:24.866274 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0163439 (* 1 = 0.0163439 loss)\nI0608 08:16:24.866281 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.090051 (* 1 = 0.090051 loss)\nI0608 08:16:24.866286 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0163448 (* 1 = 0.0163448 loss)\nI0608 08:16:24.866292 13573 sgd_solver.cpp:106] Iteration 63220, lr = 0.0001\nI0608 08:16:37.845175 13573 solver.cpp:229] Iteration 63240, loss = 2.0453\nI0608 08:16:37.845247 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00176321 (* 1 = 0.00176321 loss)\nI0608 08:16:37.845257 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00974439 (* 1 = 0.00974439 loss)\nI0608 08:16:37.845263 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.21537 (* 1 = 1.21537 loss)\nI0608 08:16:37.845268 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.675115 (* 1 = 0.675115 loss)\nI0608 08:16:37.845273 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.22369 (* 1 = 1.22369 loss)\nI0608 08:16:37.845278 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.675155 (* 1 = 0.675155 loss)\nI0608 08:16:37.845285 13573 sgd_solver.cpp:106] Iteration 63240, lr = 0.0001\nI0608 08:16:50.718114 13573 solver.cpp:229] Iteration 63260, loss = 0.736987\nI0608 08:16:50.718181 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0179041 (* 1 = 0.0179041 loss)\nI0608 08:16:50.718190 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0284841 (* 1 = 0.0284841 loss)\nI0608 08:16:50.718196 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.424421 (* 1 = 0.424421 loss)\nI0608 08:16:50.718202 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.139265 (* 1 = 0.139265 loss)\nI0608 08:16:50.718207 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.425481 (* 1 = 0.425481 loss)\nI0608 08:16:50.718214 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.139259 (* 1 = 0.139259 loss)\nI0608 08:16:50.718220 13573 sgd_solver.cpp:106] Iteration 63260, lr = 0.0001\nI0608 08:17:03.697341 13573 solver.cpp:229] Iteration 63280, loss = 1.40598\nI0608 08:17:03.697407 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.107029 (* 1 = 0.107029 loss)\nI0608 08:17:03.697417 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.114411 (* 1 = 0.114411 loss)\nI0608 08:17:03.697422 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.906326 (* 1 = 0.906326 loss)\nI0608 08:17:03.697428 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.207385 (* 1 = 0.207385 loss)\nI0608 08:17:03.697434 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.897897 (* 1 = 0.897897 loss)\nI0608 08:17:03.697440 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.207364 (* 1 = 0.207364 loss)\nI0608 08:17:03.697448 13573 sgd_solver.cpp:106] Iteration 63280, lr = 0.0001\nI0608 08:17:16.534003 13573 solver.cpp:229] Iteration 63300, loss = 0.746706\nI0608 08:17:16.534077 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0223272 (* 1 = 0.0223272 loss)\nI0608 08:17:16.534088 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0813 (* 1 = 0.0813 loss)\nI0608 08:17:16.534095 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101221 (* 1 = 0.101221 loss)\nI0608 08:17:16.534101 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0533271 (* 1 = 0.0533271 loss)\nI0608 08:17:16.534106 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0828768 (* 1 = 0.0828768 loss)\nI0608 08:17:16.534112 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0533331 (* 1 = 0.0533331 loss)\nI0608 08:17:16.534119 13573 sgd_solver.cpp:106] Iteration 63300, lr = 0.0001\nI0608 08:17:29.557377 13573 solver.cpp:229] Iteration 63320, loss = 0.37439\nI0608 08:17:29.557445 13573 solver.cpp:245]    
 Train net output #0: loss_bbox = 0.00218057 (* 1 = 0.00218057 loss)\nI0608 08:17:29.557454 13573 solver.cpp:245]     Train net output #1: loss_cls = 2.30585e-05 (* 1 = 2.30585e-05 loss)\nI0608 08:17:29.557461 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.172737 (* 1 = 0.172737 loss)\nI0608 08:17:29.557467 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0137077 (* 1 = 0.0137077 loss)\nI0608 08:17:29.557473 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.173982 (* 1 = 0.173982 loss)\nI0608 08:17:29.557479 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0137048 (* 1 = 0.0137048 loss)\nI0608 08:17:29.557487 13573 sgd_solver.cpp:106] Iteration 63320, lr = 0.0001\nI0608 08:17:42.333914 13573 solver.cpp:229] Iteration 63340, loss = 0.6048\nI0608 08:17:42.333977 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00843976 (* 1 = 0.00843976 loss)\nI0608 08:17:42.333988 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0322445 (* 1 = 0.0322445 loss)\nI0608 08:17:42.333995 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.27175 (* 1 = 0.27175 loss)\nI0608 08:17:42.334002 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0203255 (* 1 = 0.0203255 loss)\nI0608 08:17:42.334008 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.303522 (* 1 = 0.303522 loss)\nI0608 08:17:42.334014 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0203233 (* 1 = 0.0203233 loss)\nI0608 08:17:42.334022 13573 sgd_solver.cpp:106] Iteration 63340, lr = 0.0001\nI0608 08:17:55.342514 13573 solver.cpp:229] Iteration 63360, loss = 0.592476\nI0608 08:17:55.342586 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.044958 (* 1 = 0.044958 loss)\nI0608 08:17:55.342595 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0763123 (* 1 = 0.0763123 loss)\nI0608 08:17:55.342602 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 
0.0991907 (* 1 = 0.0991907 loss)\nI0608 08:17:55.342608 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0420865 (* 1 = 0.0420865 loss)\nI0608 08:17:55.342613 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100101 (* 1 = 0.100101 loss)\nI0608 08:17:55.342619 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0420963 (* 1 = 0.0420963 loss)\nI0608 08:17:55.342636 13573 sgd_solver.cpp:106] Iteration 63360, lr = 0.0001\nI0608 08:18:08.350847 13573 solver.cpp:229] Iteration 63380, loss = 0.635773\nI0608 08:18:08.350917 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.040836 (* 1 = 0.040836 loss)\nI0608 08:18:08.350926 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.150285 (* 1 = 0.150285 loss)\nI0608 08:18:08.350932 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.174789 (* 1 = 0.174789 loss)\nI0608 08:18:08.350939 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.124623 (* 1 = 0.124623 loss)\nI0608 08:18:08.350944 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.181034 (* 1 = 0.181034 loss)\nI0608 08:18:08.350950 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.124627 (* 1 = 0.124627 loss)\nI0608 08:18:08.350957 13573 sgd_solver.cpp:106] Iteration 63380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:18:21.298425 13573 solver.cpp:229] Iteration 63400, loss = 0.520314\nI0608 08:18:21.298501 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0124714 (* 1 = 0.0124714 loss)\nI0608 08:18:21.298511 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00290144 (* 1 = 0.00290144 loss)\nI0608 08:18:21.298516 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277095 (* 1 = 0.277095 loss)\nI0608 08:18:21.298522 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0076425 (* 1 = 0.0076425 loss)\nI0608 08:18:21.298528 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.253524 (* 1 = 
0.253524 loss)\nI0608 08:18:21.298533 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00764347 (* 1 = 0.00764347 loss)\nI0608 08:18:21.298540 13573 sgd_solver.cpp:106] Iteration 63400, lr = 0.0001\nI0608 08:18:34.397670 13573 solver.cpp:229] Iteration 63420, loss = 0.618858\nI0608 08:18:34.397742 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000612841 (* 1 = 0.000612841 loss)\nI0608 08:18:34.397753 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0034552 (* 1 = 0.0034552 loss)\nI0608 08:18:34.397759 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.350124 (* 1 = 0.350124 loss)\nI0608 08:18:34.397765 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0829457 (* 1 = 0.0829457 loss)\nI0608 08:18:34.397771 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.375407 (* 1 = 0.375407 loss)\nI0608 08:18:34.397778 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0829355 (* 1 = 0.0829355 loss)\nI0608 08:18:34.397785 13573 sgd_solver.cpp:106] Iteration 63420, lr = 0.0001\nI0608 08:18:47.367383 13573 solver.cpp:229] Iteration 63440, loss = 0.324999\nI0608 08:18:47.367471 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0375107 (* 1 = 0.0375107 loss)\nI0608 08:18:47.367481 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0331473 (* 1 = 0.0331473 loss)\nI0608 08:18:47.367487 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.150739 (* 1 = 0.150739 loss)\nI0608 08:18:47.367493 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00586878 (* 1 = 0.00586878 loss)\nI0608 08:18:47.367499 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114077 (* 1 = 0.114077 loss)\nI0608 08:18:47.367506 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00586912 (* 1 = 0.00586912 loss)\nI0608 08:18:47.367512 13573 sgd_solver.cpp:106] Iteration 63440, lr = 0.0001\nI0608 08:19:00.244019 13573 solver.cpp:229] Iteration 
63460, loss = 0.587912\nI0608 08:19:00.244087 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0544977 (* 1 = 0.0544977 loss)\nI0608 08:19:00.244105 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.051016 (* 1 = 0.051016 loss)\nI0608 08:19:00.244112 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.344767 (* 1 = 0.344767 loss)\nI0608 08:19:00.244119 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0493978 (* 1 = 0.0493978 loss)\nI0608 08:19:00.244125 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.326874 (* 1 = 0.326874 loss)\nI0608 08:19:00.244132 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0493983 (* 1 = 0.0493983 loss)\nI0608 08:19:00.244138 13573 sgd_solver.cpp:106] Iteration 63460, lr = 0.0001\nI0608 08:19:13.123139 13573 solver.cpp:229] Iteration 63480, loss = 1.56942\nI0608 08:19:13.123212 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000730835 (* 1 = 0.000730835 loss)\nI0608 08:19:13.123222 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0337283 (* 1 = 0.0337283 loss)\nI0608 08:19:13.123229 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.297208 (* 1 = 0.297208 loss)\nI0608 08:19:13.123235 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0308211 (* 1 = 0.0308211 loss)\nI0608 08:19:13.123240 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.27871 (* 1 = 0.27871 loss)\nI0608 08:19:13.123246 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0308358 (* 1 = 0.0308358 loss)\nI0608 08:19:13.123253 13573 sgd_solver.cpp:106] Iteration 63480, lr = 0.0001\nI0608 08:19:25.900236 13573 solver.cpp:229] Iteration 63500, loss = 0.304041\nI0608 08:19:25.900307 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0275186 (* 1 = 0.0275186 loss)\nI0608 08:19:25.900316 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0241894 (* 1 = 0.0241894 loss)\nI0608 08:19:25.900323 
13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0784919 (* 1 = 0.0784919 loss)\nI0608 08:19:25.900328 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0195399 (* 1 = 0.0195399 loss)\nI0608 08:19:25.900334 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0887038 (* 1 = 0.0887038 loss)\nI0608 08:19:25.900341 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0195392 (* 1 = 0.0195392 loss)\nI0608 08:19:25.900348 13573 sgd_solver.cpp:106] Iteration 63500, lr = 0.0001\nI0608 08:19:38.797739 13573 solver.cpp:229] Iteration 63520, loss = 0.529809\nI0608 08:19:38.797854 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0728418 (* 1 = 0.0728418 loss)\nI0608 08:19:38.797865 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0433193 (* 1 = 0.0433193 loss)\nI0608 08:19:38.797873 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137422 (* 1 = 0.137422 loss)\nI0608 08:19:38.797878 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0118545 (* 1 = 0.0118545 loss)\nI0608 08:19:38.797883 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.139066 (* 1 = 0.139066 loss)\nI0608 08:19:38.797889 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0118544 (* 1 = 0.0118544 loss)\nI0608 08:19:38.797897 13573 sgd_solver.cpp:106] Iteration 63520, lr = 0.0001\nI0608 08:19:51.554399 13573 solver.cpp:229] Iteration 63540, loss = 0.648661\nI0608 08:19:51.554460 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000371206 (* 1 = 0.000371206 loss)\nI0608 08:19:51.554469 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00636653 (* 1 = 0.00636653 loss)\nI0608 08:19:51.554476 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.160282 (* 1 = 0.160282 loss)\nI0608 08:19:51.554481 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0143339 (* 1 = 0.0143339 loss)\nI0608 08:19:51.554487 13573 solver.cpp:245]     
= 0.0804691 loss)\nI0608 08:29:18.573307 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.45715 (* 1 = 0.45715 loss)\nI0608 08:29:18.573313 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0804731 (* 1 = 0.0804731 loss)\nI0608 08:29:18.573320 13573 sgd_solver.cpp:106] Iteration 64420, lr = 0.0001\nI0608 08:29:31.715593 13573 solver.cpp:229] Iteration 64440, loss = 0.364419\nI0608 08:29:31.715665 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00529815 (* 1 = 0.00529815 loss)\nI0608 08:29:31.715675 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000919996 (* 1 = 0.000919996 loss)\nI0608 08:29:31.715682 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.142046 (* 1 = 0.142046 loss)\nI0608 08:29:31.715688 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00959199 (* 1 = 0.00959199 loss)\nI0608 08:29:31.715694 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.152814 (* 1 = 0.152814 loss)\nI0608 08:29:31.715700 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00959042 (* 1 = 0.00959042 loss)\nI0608 08:29:31.715708 13573 sgd_solver.cpp:106] Iteration 64440, lr = 0.0001\nI0608 08:29:44.777685 13573 solver.cpp:229] Iteration 64460, loss = 0.868461\nI0608 08:29:44.777755 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00117231 (* 1 = 0.00117231 loss)\nI0608 08:29:44.777765 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000961593 (* 1 = 0.000961593 loss)\nI0608 08:29:44.777770 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.483491 (* 1 = 0.483491 loss)\nI0608 08:29:44.777776 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.116865 (* 1 = 0.116865 loss)\nI0608 08:29:44.777781 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.499433 (* 1 = 0.499433 loss)\nI0608 08:29:44.777787 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.116905 (* 1 = 0.116905 loss)\nI0608 
08:29:44.777794 13573 sgd_solver.cpp:106] Iteration 64460, lr = 0.0001\nI0608 08:29:57.650046 13573 solver.cpp:229] Iteration 64480, loss = 0.584363\nI0608 08:29:57.650112 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0463626 (* 1 = 0.0463626 loss)\nI0608 08:29:57.650121 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.131822 (* 1 = 0.131822 loss)\nI0608 08:29:57.650128 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.109971 (* 1 = 0.109971 loss)\nI0608 08:29:57.650133 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0225567 (* 1 = 0.0225567 loss)\nI0608 08:29:57.650140 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.114712 (* 1 = 0.114712 loss)\nI0608 08:29:57.650146 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0225587 (* 1 = 0.0225587 loss)\nI0608 08:29:57.650151 13573 sgd_solver.cpp:106] Iteration 64480, lr = 0.0001\nI0608 08:30:10.652462 13573 solver.cpp:229] Iteration 64500, loss = 0.344351\nI0608 08:30:10.652525 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0406354 (* 1 = 0.0406354 loss)\nI0608 08:30:10.652534 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0909338 (* 1 = 0.0909338 loss)\nI0608 08:30:10.652540 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0846671 (* 1 = 0.0846671 loss)\nI0608 08:30:10.652546 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0291763 (* 1 = 0.0291763 loss)\nI0608 08:30:10.652551 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0764473 (* 1 = 0.0764473 loss)\nI0608 08:30:10.652556 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0291739 (* 1 = 0.0291739 loss)\nI0608 08:30:10.652564 13573 sgd_solver.cpp:106] Iteration 64500, lr = 0.0001\nI0608 08:30:23.362474 13573 solver.cpp:229] Iteration 64520, loss = 1.99058\nI0608 08:30:23.362546 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0933964 (* 1 = 0.0933964 loss)\nI0608 
08:30:23.362556 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0861773 (* 1 = 0.0861773 loss)\nI0608 08:30:23.362562 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.16287 (* 1 = 1.16287 loss)\nI0608 08:30:23.362570 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.513876 (* 1 = 0.513876 loss)\nI0608 08:30:23.362576 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.10612 (* 1 = 1.10612 loss)\nI0608 08:30:23.362581 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.498168 (* 1 = 0.498168 loss)\nI0608 08:30:23.362589 13573 sgd_solver.cpp:106] Iteration 64520, lr = 0.0001\nI0608 08:30:36.216799 13573 solver.cpp:229] Iteration 64540, loss = 0.423143\nI0608 08:30:36.216863 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0486647 (* 1 = 0.0486647 loss)\nI0608 08:30:36.216872 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0639876 (* 1 = 0.0639876 loss)\nI0608 08:30:36.216878 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101143 (* 1 = 0.101143 loss)\nI0608 08:30:36.216884 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0877859 (* 1 = 0.0877859 loss)\nI0608 08:30:36.216892 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.099923 (* 1 = 0.099923 loss)\nI0608 08:30:36.216897 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0877861 (* 1 = 0.0877861 loss)\nI0608 08:30:36.216905 13573 sgd_solver.cpp:106] Iteration 64540, lr = 0.0001\nI0608 08:30:49.075353 13573 solver.cpp:229] Iteration 64560, loss = 0.499644\nI0608 08:30:49.075429 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.128792 (* 1 = 0.128792 loss)\nI0608 08:30:49.075439 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.119672 (* 1 = 0.119672 loss)\nI0608 08:30:49.075445 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.111748 (* 1 = 0.111748 loss)\nI0608 08:30:49.075451 13573 solver.cpp:245]     Train net 
output #3: p2_rpn_loss_bbox = 0.0714831 (* 1 = 0.0714831 loss)\nI0608 08:30:49.075458 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.115642 (* 1 = 0.115642 loss)\nI0608 08:30:49.075464 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0714791 (* 1 = 0.0714791 loss)\nI0608 08:30:49.075471 13573 sgd_solver.cpp:106] Iteration 64560, lr = 0.0001\nI0608 08:31:01.766654 13573 solver.cpp:229] Iteration 64580, loss = 0.61275\nI0608 08:31:01.766716 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.220241 (* 1 = 0.220241 loss)\nI0608 08:31:01.766726 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.139708 (* 1 = 0.139708 loss)\nI0608 08:31:01.766732 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.194291 (* 1 = 0.194291 loss)\nI0608 08:31:01.766738 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0551964 (* 1 = 0.0551964 loss)\nI0608 08:31:01.766744 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.197018 (* 1 = 0.197018 loss)\nI0608 08:31:01.766751 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0551957 (* 1 = 0.0551957 loss)\nI0608 08:31:01.766757 13573 sgd_solver.cpp:106] Iteration 64580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:31:14.737761 13573 solver.cpp:229] Iteration 64600, loss = 1.06442\nI0608 08:31:14.737824 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0802271 (* 1 = 0.0802271 loss)\nI0608 08:31:14.737834 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0547507 (* 1 = 0.0547507 loss)\nI0608 08:31:14.737840 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.277111 (* 1 = 0.277111 loss)\nI0608 08:31:14.737846 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0472765 (* 1 = 0.0472765 loss)\nI0608 08:31:14.737853 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.268333 (* 1 = 0.268333 loss)\nI0608 08:31:14.737857 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.0472599 (* 1 = 0.0472599 loss)\nI0608 08:31:14.737864 13573 sgd_solver.cpp:106] Iteration 64600, lr = 0.0001\nI0608 08:31:27.636898 13573 solver.cpp:229] Iteration 64620, loss = 0.383155\nI0608 08:31:27.636983 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0336644 (* 1 = 0.0336644 loss)\nI0608 08:31:27.636992 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0121263 (* 1 = 0.0121263 loss)\nI0608 08:31:27.636999 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0866325 (* 1 = 0.0866325 loss)\nI0608 08:31:27.637006 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0111546 (* 1 = 0.0111546 loss)\nI0608 08:31:27.637011 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0898801 (* 1 = 0.0898801 loss)\nI0608 08:31:27.637017 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0111524 (* 1 = 0.0111524 loss)\nI0608 08:31:27.637023 13573 sgd_solver.cpp:106] Iteration 64620, lr = 0.0001\nI0608 08:31:40.500543 13573 solver.cpp:229] Iteration 64640, loss = 0.571834\nI0608 08:31:40.500612 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00110334 (* 1 = 0.00110334 loss)\nI0608 08:31:40.500622 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00163355 (* 1 = 0.00163355 loss)\nI0608 08:31:40.500627 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.271405 (* 1 = 0.271405 loss)\nI0608 08:31:40.500633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0450094 (* 1 = 0.0450094 loss)\nI0608 08:31:40.500638 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239928 (* 1 = 0.239928 loss)\nI0608 08:31:40.500645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0449993 (* 1 = 0.0449993 loss)\nI0608 08:31:40.500651 13573 sgd_solver.cpp:106] Iteration 64640, lr = 0.0001\nI0608 08:31:53.512703 13573 solver.cpp:229] Iteration 64660, loss = 0.460122\nI0608 08:31:53.512771 13573 solver.cpp:245]     Train net 
output #0: loss_bbox = 0.00103575 (* 1 = 0.00103575 loss)\nI0608 08:31:53.512781 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000756591 (* 1 = 0.000756591 loss)\nI0608 08:31:53.512787 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.177001 (* 1 = 0.177001 loss)\nI0608 08:31:53.512794 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00655512 (* 1 = 0.00655512 loss)\nI0608 08:31:53.512799 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.158725 (* 1 = 0.158725 loss)\nI0608 08:31:53.512804 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00655426 (* 1 = 0.00655426 loss)\nI0608 08:31:53.512811 13573 sgd_solver.cpp:106] Iteration 64660, lr = 0.0001\nI0608 08:32:06.460218 13573 solver.cpp:229] Iteration 64680, loss = 0.33527\nI0608 08:32:06.460280 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0588248 (* 1 = 0.0588248 loss)\nI0608 08:32:06.460290 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0514539 (* 1 = 0.0514539 loss)\nI0608 08:32:06.460295 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100107 (* 1 = 0.100107 loss)\nI0608 08:32:06.460301 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0247784 (* 1 = 0.0247784 loss)\nI0608 08:32:06.460306 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0994278 (* 1 = 0.0994278 loss)\nI0608 08:32:06.460312 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0247835 (* 1 = 0.0247835 loss)\nI0608 08:32:06.460319 13573 sgd_solver.cpp:106] Iteration 64680, lr = 0.0001\nI0608 08:32:19.316294 13573 solver.cpp:229] Iteration 64700, loss = 0.578318\nI0608 08:32:19.316360 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0273806 (* 1 = 0.0273806 loss)\nI0608 08:32:19.316370 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0378572 (* 1 = 0.0378572 loss)\nI0608 08:32:19.316375 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.231033 
(* 1 = 0.231033 loss)\nI0608 08:32:19.316381 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00595772 (* 1 = 0.00595772 loss)\nI0608 08:32:19.316387 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.217404 (* 1 = 0.217404 loss)\nI0608 08:32:19.316392 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00596172 (* 1 = 0.00596172 loss)\nI0608 08:32:19.316400 13573 sgd_solver.cpp:106] Iteration 64700, lr = 0.0001\nI0608 08:32:32.275813 13573 solver.cpp:229] Iteration 64720, loss = 0.387644\nI0608 08:32:32.275884 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0974001 (* 1 = 0.0974001 loss)\nI0608 08:32:32.275895 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0469777 (* 1 = 0.0469777 loss)\nI0608 08:32:32.275902 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.094249 (* 1 = 0.094249 loss)\nI0608 08:32:32.275907 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0326588 (* 1 = 0.0326588 loss)\nI0608 08:32:32.275913 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0973985 (* 1 = 0.0973985 loss)\nI0608 08:32:32.275919 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0326568 (* 1 = 0.0326568 loss)\nI0608 08:32:32.275925 13573 sgd_solver.cpp:106] Iteration 64720, lr = 0.0001\nI0608 08:32:45.516228 13573 solver.cpp:229] Iteration 64740, loss = 0.736106\nI0608 08:32:45.516293 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0310991 (* 1 = 0.0310991 loss)\nI0608 08:32:45.516302 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0120877 (* 1 = 0.0120877 loss)\nI0608 08:32:45.516309 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.420695 (* 1 = 0.420695 loss)\nI0608 08:32:45.516314 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0601817 (* 1 = 0.0601817 loss)\nI0608 08:32:45.516319 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.408276 (* 1 = 0.408276 loss)\nI0608 
08:32:45.516324 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0601934 (* 1 = 0.0601934 loss)\nI0608 08:32:45.516331 13573 sgd_solver.cpp:106] Iteration 64740, lr = 0.0001\nI0608 08:32:58.425623 13573 solver.cpp:229] Iteration 64760, loss = 0.34214\nI0608 08:32:58.425689 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0525539 (* 1 = 0.0525539 loss)\nI0608 08:32:58.425698 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0462408 (* 1 = 0.0462408 loss)\nI0608 08:32:58.425704 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.126808 (* 1 = 0.126808 loss)\nI0608 08:32:58.425709 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00242989 (* 1 = 0.00242989 loss)\nI0608 08:32:58.425715 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0943132 (* 1 = 0.0943132 loss)\nI0608 08:32:58.425720 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00242979 (* 1 = 0.00242979 loss)\nI0608 08:32:58.425727 13573 sgd_solver.cpp:106] Iteration 64760, lr = 0.0001\nI0608 08:33:11.569555 13573 solver.cpp:229] Iteration 64780, loss = 0.541854\nI0608 08:33:11.569629 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0550373 (* 1 = 0.0550373 loss)\nI0608 08:33:11.569640 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0546188 (* 1 = 0.0546188 loss)\nI0608 08:33:11.569646 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0985956 (* 1 = 0.0985956 loss)\nI0608 08:33:11.569653 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0270969 (* 1 = 0.0270969 loss)\nI0608 08:33:11.569658 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0992484 (* 1 = 0.0992484 loss)\nI0608 08:33:11.569664 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0270919 (* 1 = 0.0270919 loss)\nI0608 08:33:11.569670 13573 sgd_solver.cpp:106] Iteration 64780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:33:24.665789 13573 solver.cpp:229] Iteration 
64800, loss = 0.436245\nI0608 08:33:24.665858 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00973543 (* 1 = 0.00973543 loss)\nI0608 08:33:24.665868 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00295943 (* 1 = 0.00295943 loss)\nI0608 08:33:24.665874 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.208696 (* 1 = 0.208696 loss)\nI0608 08:33:24.665879 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0225165 (* 1 = 0.0225165 loss)\nI0608 08:33:24.665885 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.249385 (* 1 = 0.249385 loss)\nI0608 08:33:24.665892 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0225146 (* 1 = 0.0225146 loss)\nI0608 08:33:24.665899 13573 sgd_solver.cpp:106] Iteration 64800, lr = 0.0001\nI0608 08:33:37.467094 13573 solver.cpp:229] Iteration 64820, loss = 0.363269\nI0608 08:33:37.467159 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0567827 (* 1 = 0.0567827 loss)\nI0608 08:33:37.467167 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0812438 (* 1 = 0.0812438 loss)\nI0608 08:33:37.467173 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.136706 (* 1 = 0.136706 loss)\nI0608 08:33:37.467180 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00865116 (* 1 = 0.00865116 loss)\nI0608 08:33:37.467185 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.145817 (* 1 = 0.145817 loss)\nI0608 08:33:37.467190 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00864754 (* 1 = 0.00864754 loss)\nI0608 08:33:37.467197 13573 sgd_solver.cpp:106] Iteration 64820, lr = 0.0001\nI0608 08:33:50.325368 13573 solver.cpp:229] Iteration 64840, loss = 0.64405\nI0608 08:33:50.325443 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0578709 (* 1 = 0.0578709 loss)\nI0608 08:33:50.325453 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0685149 (* 1 = 0.0685149 loss)\nI0608 
08:33:50.325459 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.143132 (* 1 = 0.143132 loss)\nI0608 08:33:50.325464 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0207769 (* 1 = 0.0207769 loss)\nI0608 08:33:50.325469 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.136407 (* 1 = 0.136407 loss)\nI0608 08:33:50.325474 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0207764 (* 1 = 0.0207764 loss)\nI0608 08:33:50.325480 13573 sgd_solver.cpp:106] Iteration 64840, lr = 0.0001\nI0608 08:34:02.920060 13573 solver.cpp:229] Iteration 64860, loss = 0.54705\nI0608 08:34:02.920121 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00217553 (* 1 = 0.00217553 loss)\nI0608 08:34:02.920131 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0196694 (* 1 = 0.0196694 loss)\nI0608 08:34:02.920137 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.311641 (* 1 = 0.311641 loss)\nI0608 08:34:02.920143 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.097416 (* 1 = 0.097416 loss)\nI0608 08:34:02.920150 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.269764 (* 1 = 0.269764 loss)\nI0608 08:34:02.920156 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.097412 (* 1 = 0.097412 loss)\nI0608 08:34:02.920162 13573 sgd_solver.cpp:106] Iteration 64860, lr = 0.0001\nI0608 08:34:16.027921 13573 solver.cpp:229] Iteration 64880, loss = 0.511721\nI0608 08:34:16.027989 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0827087 (* 1 = 0.0827087 loss)\nI0608 08:34:16.027998 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0513255 (* 1 = 0.0513255 loss)\nI0608 08:34:16.028005 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.1452 (* 1 = 0.1452 loss)\nI0608 08:34:16.028012 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0385126 (* 1 = 0.0385126 loss)\nI0608 08:34:16.028017 13573 solver.cpp:245]     
Train net output #4: rpn_cls_loss = 0.142247 (* 1 = 0.142247 loss)\nI0608 08:34:16.028023 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.03851 (* 1 = 0.03851 loss)\nI0608 08:34:16.028028 13573 sgd_solver.cpp:106] Iteration 64880, lr = 0.0001\nI0608 08:34:28.906338 13573 solver.cpp:229] Iteration 64900, loss = 0.81804\nI0608 08:34:28.906411 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0898457 (* 1 = 0.0898457 loss)\nI0608 08:34:28.906420 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0926737 (* 1 = 0.0926737 loss)\nI0608 08:34:28.906426 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.326527 (* 1 = 0.326527 loss)\nI0608 08:34:28.906432 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0618533 (* 1 = 0.0618533 loss)\nI0608 08:34:28.906437 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.321581 (* 1 = 0.321581 loss)\nI0608 08:34:28.906443 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0618487 (* 1 = 0.0618487 loss)\nI0608 08:34:28.906450 13573 sgd_solver.cpp:106] Iteration 64900, lr = 0.0001\nI0608 08:34:41.889216 13573 solver.cpp:229] Iteration 64920, loss = 0.36262\nI0608 08:34:41.889282 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0523054 (* 1 = 0.0523054 loss)\nI0608 08:34:41.889292 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0129837 (* 1 = 0.0129837 loss)\nI0608 08:34:41.889298 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0935105 (* 1 = 0.0935105 loss)\nI0608 08:34:41.889305 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0355524 (* 1 = 0.0355524 loss)\nI0608 08:34:41.889312 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0985703 (* 1 = 0.0985703 loss)\nI0608 08:34:41.889317 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0355537 (* 1 = 0.0355537 loss)\nI0608 08:34:41.889324 13573 sgd_solver.cpp:106] Iteration 64920, lr = 0.0001\nI0608 
08:34:54.933629 13573 solver.cpp:229] Iteration 64940, loss = 0.546473\nI0608 08:34:54.933694 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0025632 (* 1 = 0.0025632 loss)\nI0608 08:34:54.933703 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00412777 (* 1 = 0.00412777 loss)\nI0608 08:34:54.933710 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.202432 (* 1 = 0.202432 loss)\nI0608 08:34:54.933717 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0227619 (* 1 = 0.0227619 loss)\nI0608 08:34:54.933722 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.203412 (* 1 = 0.203412 loss)\nI0608 08:34:54.933727 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0227523 (* 1 = 0.0227523 loss)\nI0608 08:34:54.933734 13573 sgd_solver.cpp:106] Iteration 64940, lr = 0.0001\nI0608 08:35:07.885452 13573 solver.cpp:229] Iteration 64960, loss = 0.432286\nI0608 08:35:07.885524 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.115125 (* 1 = 0.115125 loss)\nI0608 08:35:07.885535 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0616501 (* 1 = 0.0616501 loss)\nI0608 08:35:07.885540 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10486 (* 1 = 0.10486 loss)\nI0608 08:35:07.885546 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.028631 (* 1 = 0.028631 loss)\nI0608 08:35:07.885551 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102296 (* 1 = 0.102296 loss)\nI0608 08:35:07.885557 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0286357 (* 1 = 0.0286357 loss)\nI0608 08:35:07.885565 13573 sgd_solver.cpp:106] Iteration 64960, lr = 0.0001\nI0608 08:35:20.739394 13573 solver.cpp:229] Iteration 64980, loss = 0.539829\nI0608 08:35:20.739460 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0149147 (* 1 = 0.0149147 loss)\nI0608 08:35:20.739471 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0265621 (* 1 
= 0.0265621 loss)\nI0608 08:35:20.739477 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.18096 (* 1 = 0.18096 loss)\nI0608 08:35:20.739482 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0271024 (* 1 = 0.0271024 loss)\nI0608 08:35:20.739488 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.229788 (* 1 = 0.229788 loss)\nI0608 08:35:20.739495 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0271011 (* 1 = 0.0271011 loss)\nI0608 08:35:20.739502 13573 sgd_solver.cpp:106] Iteration 64980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:35:33.659999 13573 solver.cpp:229] Iteration 65000, loss = 0.987935\nI0608 08:35:33.660069 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0847481 (* 1 = 0.0847481 loss)\nI0608 08:35:33.660079 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.141523 (* 1 = 0.141523 loss)\nI0608 08:35:33.660085 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.463083 (* 1 = 0.463083 loss)\nI0608 08:35:33.660091 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.09561 (* 1 = 0.09561 loss)\nI0608 08:35:33.660097 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.467663 (* 1 = 0.467663 loss)\nI0608 08:35:33.660104 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0956137 (* 1 = 0.0956137 loss)\nI0608 08:35:33.660110 13573 sgd_solver.cpp:106] Iteration 65000, lr = 0.0001\nI0608 08:35:46.434561 13573 solver.cpp:229] Iteration 65020, loss = 1.49243\nI0608 08:35:46.434638 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0792899 (* 1 = 0.0792899 loss)\nI0608 08:35:46.434648 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.132088 (* 1 = 0.132088 loss)\nI0608 08:35:46.434653 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.631255 (* 1 = 0.631255 loss)\nI0608 08:35:46.434659 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.181164 (* 1 = 0.181164 loss)\nI0608 
08:35:46.434665 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.628016 (* 1 = 0.628016 loss)\nI0608 08:35:46.434671 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.181162 (* 1 = 0.181162 loss)\nI0608 08:35:46.434679 13573 sgd_solver.cpp:106] Iteration 65020, lr = 0.0001\nI0608 08:35:58.994511 13573 solver.cpp:229] Iteration 65040, loss = 0.71034\nI0608 08:35:58.994575 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0601256 (* 1 = 0.0601256 loss)\nI0608 08:35:58.994585 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0294924 (* 1 = 0.0294924 loss)\nI0608 08:35:58.994592 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.39366 (* 1 = 0.39366 loss)\nI0608 08:35:58.994599 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106124 (* 1 = 0.106124 loss)\nI0608 08:35:58.994606 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.389498 (* 1 = 0.389498 loss)\nI0608 08:35:58.994611 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106122 (* 1 = 0.106122 loss)\nI0608 08:35:58.994618 13573 sgd_solver.cpp:106] Iteration 65040, lr = 0.0001\nI0608 08:36:11.905366 13573 solver.cpp:229] Iteration 65060, loss = 0.629679\nI0608 08:36:11.905433 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0718674 (* 1 = 0.0718674 loss)\nI0608 08:36:11.905443 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0655055 (* 1 = 0.0655055 loss)\nI0608 08:36:11.905450 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.346673 (* 1 = 0.346673 loss)\nI0608 08:36:11.905457 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0500366 (* 1 = 0.0500366 loss)\nI0608 08:36:11.905462 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.373844 (* 1 = 0.373844 loss)\nI0608 08:36:11.905468 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0500236 (* 1 = 0.0500236 loss)\nI0608 08:36:11.905475 13573 sgd_solver.cpp:106] Iteration 
65060, lr = 0.0001\nI0608 08:36:25.012980 13573 solver.cpp:229] Iteration 65080, loss = 0.394128\nI0608 08:36:25.013059 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000542195 (* 1 = 0.000542195 loss)\nI0608 08:36:25.013070 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00697594 (* 1 = 0.00697594 loss)\nI0608 08:36:25.013077 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.149928 (* 1 = 0.149928 loss)\nI0608 08:36:25.013083 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.000421766 (* 1 = 0.000421766 loss)\nI0608 08:36:25.013089 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.1716 (* 1 = 0.1716 loss)\nI0608 08:36:25.013095 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.000423335 (* 1 = 0.000423335 loss)\nI0608 08:36:25.013103 13573 sgd_solver.cpp:106] Iteration 65080, lr = 0.0001\nI0608 08:36:38.039418 13573 solver.cpp:229] Iteration 65100, loss = 0.639206\nI0608 08:36:38.039489 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0418563 (* 1 = 0.0418563 loss)\nI0608 08:36:38.039499 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.013678 (* 1 = 0.013678 loss)\nI0608 08:36:38.039506 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.317026 (* 1 = 0.317026 loss)\nI0608 08:36:38.039512 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0391627 (* 1 = 0.0391627 loss)\nI0608 08:36:38.039518 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.298519 (* 1 = 0.298519 loss)\nI0608 08:36:38.039525 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0391613 (* 1 = 0.0391613 loss)\nI0608 08:36:38.039531 13573 sgd_solver.cpp:106] Iteration 65100, lr = 0.0001\nI0608 08:36:51.089890 13573 solver.cpp:229] Iteration 65120, loss = 0.772904\nI0608 08:36:51.089954 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00994086 (* 1 = 0.00994086 loss)\nI0608 08:36:51.089964 13573 solver.cpp:245]     Train 
net output #1: loss_cls = 0.0196237 (* 1 = 0.0196237 loss)
I0608 08:36:51.089970 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.130962 (* 1 = 0.130962 loss)
I0608 08:36:51.089977 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0103831 (* 1 = 0.0103831 loss)
I0608 08:36:51.089982 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.124308 (* 1 = 0.124308 loss)
I0608 08:36:51.089987 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0103889 (* 1 = 0.0103889 loss)
I0608 08:36:51.089994 13573 sgd_solver.cpp:106] Iteration 65120, lr = 0.0001
[... iterations 65140-65980 elided: the log repeats the same six per-head loss lines (loss_bbox, loss_cls, p2_rpn_cls_loss, p2_rpn_loss_bbox, rpn_cls_loss, rpn_loss_bbox) every 20 iterations, with lr = 0.0001 throughout, total loss fluctuating between roughly 0.23 and 1.58, and speed reported as 0.645s / iter ...]
speed: 0.645s / iter
I0608 08:46:20.147516 13573 solver.cpp:229] Iteration 66000, loss = 0.349473
I0608 08:46:20.147583 13573 solver.cpp:245]     Train net output 
#0: loss_bbox = 0.0469907 (* 1 = 0.0469907 loss)\nI0608 08:46:20.147593 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0383584 (* 1 = 0.0383584 loss)\nI0608 08:46:20.147598 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.087943 (* 1 = 0.087943 loss)\nI0608 08:46:20.147604 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0262455 (* 1 = 0.0262455 loss)\nI0608 08:46:20.147610 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0868362 (* 1 = 0.0868362 loss)\nI0608 08:46:20.147616 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0262438 (* 1 = 0.0262438 loss)\nI0608 08:46:20.147622 13573 sgd_solver.cpp:106] Iteration 66000, lr = 0.0001\nI0608 08:46:32.999112 13573 solver.cpp:229] Iteration 66020, loss = 0.31635\nI0608 08:46:32.999184 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0411378 (* 1 = 0.0411378 loss)\nI0608 08:46:32.999194 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0238107 (* 1 = 0.0238107 loss)\nI0608 08:46:32.999202 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.094192 (* 1 = 0.094192 loss)\nI0608 08:46:32.999207 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0206556 (* 1 = 0.0206556 loss)\nI0608 08:46:32.999213 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0923221 (* 1 = 0.0923221 loss)\nI0608 08:46:32.999219 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0206547 (* 1 = 0.0206547 loss)\nI0608 08:46:32.999227 13573 sgd_solver.cpp:106] Iteration 66020, lr = 0.0001\nI0608 08:46:46.077888 13573 solver.cpp:229] Iteration 66040, loss = 0.899018\nI0608 08:46:46.077949 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.11879 (* 1 = 0.11879 loss)\nI0608 08:46:46.077958 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.120391 (* 1 = 0.120391 loss)\nI0608 08:46:46.077965 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.304332 (* 1 = 0.304332 
loss)\nI0608 08:46:46.077971 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0757492 (* 1 = 0.0757492 loss)\nI0608 08:46:46.077977 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.306399 (* 1 = 0.306399 loss)\nI0608 08:46:46.077983 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0757454 (* 1 = 0.0757454 loss)\nI0608 08:46:46.077989 13573 sgd_solver.cpp:106] Iteration 66040, lr = 0.0001\nI0608 08:46:58.987918 13573 solver.cpp:229] Iteration 66060, loss = 1.13529\nI0608 08:46:58.987998 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.117608 (* 1 = 0.117608 loss)\nI0608 08:46:58.988008 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0552316 (* 1 = 0.0552316 loss)\nI0608 08:46:58.988014 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0884401 (* 1 = 0.0884401 loss)\nI0608 08:46:58.988020 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0275237 (* 1 = 0.0275237 loss)\nI0608 08:46:58.988026 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0944565 (* 1 = 0.0944565 loss)\nI0608 08:46:58.988032 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0275229 (* 1 = 0.0275229 loss)\nI0608 08:46:58.988039 13573 sgd_solver.cpp:106] Iteration 66060, lr = 0.0001\nI0608 08:47:11.918241 13573 solver.cpp:229] Iteration 66080, loss = 1.10218\nI0608 08:47:11.918308 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0758866 (* 1 = 0.0758866 loss)\nI0608 08:47:11.918318 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.125744 (* 1 = 0.125744 loss)\nI0608 08:47:11.918323 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.115266 (* 1 = 0.115266 loss)\nI0608 08:47:11.918329 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0393318 (* 1 = 0.0393318 loss)\nI0608 08:47:11.918335 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.116009 (* 1 = 0.116009 loss)\nI0608 08:47:11.918341 13573 
solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0393355 (* 1 = 0.0393355 loss)\nI0608 08:47:11.918349 13573 sgd_solver.cpp:106] Iteration 66080, lr = 0.0001\nI0608 08:47:24.803864 13573 solver.cpp:229] Iteration 66100, loss = 0.553118\nI0608 08:47:24.803936 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0709561 (* 1 = 0.0709561 loss)\nI0608 08:47:24.803944 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0458007 (* 1 = 0.0458007 loss)\nI0608 08:47:24.803951 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.260732 (* 1 = 0.260732 loss)\nI0608 08:47:24.803956 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.086907 (* 1 = 0.086907 loss)\nI0608 08:47:24.803962 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.266602 (* 1 = 0.266602 loss)\nI0608 08:47:24.803968 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0869092 (* 1 = 0.0869092 loss)\nI0608 08:47:24.803974 13573 sgd_solver.cpp:106] Iteration 66100, lr = 0.0001\nI0608 08:47:37.580082 13573 solver.cpp:229] Iteration 66120, loss = 0.775838\nI0608 08:47:37.580152 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0737782 (* 1 = 0.0737782 loss)\nI0608 08:47:37.580160 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0654861 (* 1 = 0.0654861 loss)\nI0608 08:47:37.580168 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.467655 (* 1 = 0.467655 loss)\nI0608 08:47:37.580173 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.100493 (* 1 = 0.100493 loss)\nI0608 08:47:37.580178 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.472794 (* 1 = 0.472794 loss)\nI0608 08:47:37.580184 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.100505 (* 1 = 0.100505 loss)\nI0608 08:47:37.580191 13573 sgd_solver.cpp:106] Iteration 66120, lr = 0.0001\nI0608 08:47:50.175308 13573 solver.cpp:229] Iteration 66140, loss = 0.81266\nI0608 08:47:50.175381 13573 
solver.cpp:245]     Train net output #0: loss_bbox = 0.00414988 (* 1 = 0.00414988 loss)\nI0608 08:47:50.175391 13573 solver.cpp:245]     Train net output #1: loss_cls = 1.15675e-05 (* 1 = 1.15675e-05 loss)\nI0608 08:47:50.175397 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.309717 (* 1 = 0.309717 loss)\nI0608 08:47:50.175403 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.113003 (* 1 = 0.113003 loss)\nI0608 08:47:50.175410 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.284564 (* 1 = 0.284564 loss)\nI0608 08:47:50.175415 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.113003 (* 1 = 0.113003 loss)\nI0608 08:47:50.175422 13573 sgd_solver.cpp:106] Iteration 66140, lr = 0.0001\nI0608 08:48:03.134349 13573 solver.cpp:229] Iteration 66160, loss = 0.472929\nI0608 08:48:03.134415 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0297376 (* 1 = 0.0297376 loss)\nI0608 08:48:03.134425 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0359402 (* 1 = 0.0359402 loss)\nI0608 08:48:03.134433 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0883161 (* 1 = 0.0883161 loss)\nI0608 08:48:03.134438 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0168745 (* 1 = 0.0168745 loss)\nI0608 08:48:03.134444 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0886154 (* 1 = 0.0886154 loss)\nI0608 08:48:03.134450 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.016879 (* 1 = 0.016879 loss)\nI0608 08:48:03.134457 13573 sgd_solver.cpp:106] Iteration 66160, lr = 0.0001\nI0608 08:48:16.036546 13573 solver.cpp:229] Iteration 66180, loss = 0.368972\nI0608 08:48:16.036654 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0714763 (* 1 = 0.0714763 loss)\nI0608 08:48:16.036665 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0481211 (* 1 = 0.0481211 loss)\nI0608 08:48:16.036671 13573 solver.cpp:245]     Train net output #2: 
p2_rpn_cls_loss = 0.10231 (* 1 = 0.10231 loss)\nI0608 08:48:16.036677 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0800743 (* 1 = 0.0800743 loss)\nI0608 08:48:16.036684 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.102067 (* 1 = 0.102067 loss)\nI0608 08:48:16.036689 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.080085 (* 1 = 0.080085 loss)\nI0608 08:48:16.036697 13573 sgd_solver.cpp:106] Iteration 66180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:48:29.060963 13573 solver.cpp:229] Iteration 66200, loss = 0.85026\nI0608 08:48:29.061028 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0157285 (* 1 = 0.0157285 loss)\nI0608 08:48:29.061038 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0212497 (* 1 = 0.0212497 loss)\nI0608 08:48:29.061043 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.138644 (* 1 = 0.138644 loss)\nI0608 08:48:29.061048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0607276 (* 1 = 0.0607276 loss)\nI0608 08:48:29.061054 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.186809 (* 1 = 0.186809 loss)\nI0608 08:48:29.061060 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0607253 (* 1 = 0.0607253 loss)\nI0608 08:48:29.061067 13573 sgd_solver.cpp:106] Iteration 66200, lr = 0.0001\nI0608 08:48:42.002168 13573 solver.cpp:229] Iteration 66220, loss = 0.500848\nI0608 08:48:42.002238 13573 solver.cpp:245]     Train net output #0: loss_bbox = 6.61479e-05 (* 1 = 6.61479e-05 loss)\nI0608 08:48:42.002249 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000985971 (* 1 = 0.000985971 loss)\nI0608 08:48:42.002255 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.137806 (* 1 = 0.137806 loss)\nI0608 08:48:42.002261 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00106929 (* 1 = 0.00106929 loss)\nI0608 08:48:42.002267 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.121202 (* 1 = 0.121202 loss)\nI0608 08:48:42.002274 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00107157 (* 1 = 0.00107157 loss)\nI0608 08:48:42.002280 13573 sgd_solver.cpp:106] Iteration 66220, lr = 0.0001\nI0608 08:48:54.976073 13573 solver.cpp:229] Iteration 66240, loss = 0.698977\nI0608 08:48:54.976133 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0597865 (* 1 = 0.0597865 loss)\nI0608 08:48:54.976142 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0388315 (* 1 = 0.0388315 loss)\nI0608 08:48:54.976148 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.183925 (* 1 = 0.183925 loss)\nI0608 08:48:54.976155 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0796039 (* 1 = 0.0796039 loss)\nI0608 08:48:54.976160 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.171059 (* 1 = 0.171059 loss)\nI0608 08:48:54.976166 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0796027 (* 1 = 0.0796027 loss)\nI0608 08:48:54.976172 13573 sgd_solver.cpp:106] Iteration 66240, lr = 0.0001\nI0608 08:49:07.647245 13573 solver.cpp:229] Iteration 66260, loss = 0.537885\nI0608 08:49:07.647322 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.051726 (* 1 = 0.051726 loss)\nI0608 08:49:07.647331 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0122135 (* 1 = 0.0122135 loss)\nI0608 08:49:07.647338 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100092 (* 1 = 0.100092 loss)\nI0608 08:49:07.647344 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0221568 (* 1 = 0.0221568 loss)\nI0608 08:49:07.647351 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.101142 (* 1 = 0.101142 loss)\nI0608 08:49:07.647356 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0221579 (* 1 = 0.0221579 loss)\nI0608 08:49:07.647363 13573 sgd_solver.cpp:106] Iteration 66260, lr = 0.0001\nI0608 08:49:20.623133 13573 
solver.cpp:229] Iteration 66280, loss = 0.593115\nI0608 08:49:20.623200 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0154926 (* 1 = 0.0154926 loss)\nI0608 08:49:20.623210 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00767494 (* 1 = 0.00767494 loss)\nI0608 08:49:20.623216 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.083293 (* 1 = 0.083293 loss)\nI0608 08:49:20.623222 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0219112 (* 1 = 0.0219112 loss)\nI0608 08:49:20.623229 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.110079 (* 1 = 0.110079 loss)\nI0608 08:49:20.623234 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0219113 (* 1 = 0.0219113 loss)\nI0608 08:49:20.623241 13573 sgd_solver.cpp:106] Iteration 66280, lr = 0.0001\nI0608 08:49:33.357548 13573 solver.cpp:229] Iteration 66300, loss = 0.612196\nI0608 08:49:33.357612 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.016815 (* 1 = 0.016815 loss)\nI0608 08:49:33.357621 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00363199 (* 1 = 0.00363199 loss)\nI0608 08:49:33.357628 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.201122 (* 1 = 0.201122 loss)\nI0608 08:49:33.357633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0213696 (* 1 = 0.0213696 loss)\nI0608 08:49:33.357640 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.222464 (* 1 = 0.222464 loss)\nI0608 08:49:33.357645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0213709 (* 1 = 0.0213709 loss)\nI0608 08:49:33.357652 13573 sgd_solver.cpp:106] Iteration 66300, lr = 0.0001\nI0608 08:49:46.521594 13573 solver.cpp:229] Iteration 66320, loss = 0.439175\nI0608 08:49:46.521678 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0496433 (* 1 = 0.0496433 loss)\nI0608 08:49:46.521687 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0287442 (* 1 = 0.0287442 
loss)\nI0608 08:49:46.521693 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0919616 (* 1 = 0.0919616 loss)\nI0608 08:49:46.521699 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.015492 (* 1 = 0.015492 loss)\nI0608 08:49:46.521705 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0923435 (* 1 = 0.0923435 loss)\nI0608 08:49:46.521711 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0154967 (* 1 = 0.0154967 loss)\nI0608 08:49:46.521718 13573 sgd_solver.cpp:106] Iteration 66320, lr = 0.0001\nI0608 08:49:59.581281 13573 solver.cpp:229] Iteration 66340, loss = 0.391013\nI0608 08:49:59.581348 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0206284 (* 1 = 0.0206284 loss)\nI0608 08:49:59.581357 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0502164 (* 1 = 0.0502164 loss)\nI0608 08:49:59.581364 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0858626 (* 1 = 0.0858626 loss)\nI0608 08:49:59.581370 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0606861 (* 1 = 0.0606861 loss)\nI0608 08:49:59.581377 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0817136 (* 1 = 0.0817136 loss)\nI0608 08:49:59.581382 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0606802 (* 1 = 0.0606802 loss)\nI0608 08:49:59.581390 13573 sgd_solver.cpp:106] Iteration 66340, lr = 0.0001\nI0608 08:50:12.126956 13573 solver.cpp:229] Iteration 66360, loss = 0.474952\nI0608 08:50:12.127032 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0397544 (* 1 = 0.0397544 loss)\nI0608 08:50:12.127043 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00550306 (* 1 = 0.00550306 loss)\nI0608 08:50:12.127048 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0942428 (* 1 = 0.0942428 loss)\nI0608 08:50:12.127054 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0134602 (* 1 = 0.0134602 loss)\nI0608 
08:50:12.127059 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0886824 (* 1 = 0.0886824 loss)\nI0608 08:50:12.127065 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0134538 (* 1 = 0.0134538 loss)\nI0608 08:50:12.127073 13573 sgd_solver.cpp:106] Iteration 66360, lr = 0.0001\nI0608 08:50:24.993963 13573 solver.cpp:229] Iteration 66380, loss = 1.00655\nI0608 08:50:24.994026 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.20728 (* 1 = 0.20728 loss)\nI0608 08:50:24.994036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.113671 (* 1 = 0.113671 loss)\nI0608 08:50:24.994042 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.450008 (* 1 = 0.450008 loss)\nI0608 08:50:24.994048 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0548511 (* 1 = 0.0548511 loss)\nI0608 08:50:24.994055 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.453968 (* 1 = 0.453968 loss)\nI0608 08:50:24.994060 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0548489 (* 1 = 0.0548489 loss)\nI0608 08:50:24.994067 13573 sgd_solver.cpp:106] Iteration 66380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:50:37.899364 13573 solver.cpp:229] Iteration 66400, loss = 0.72802\nI0608 08:50:37.899435 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0125139 (* 1 = 0.0125139 loss)\nI0608 08:50:37.899443 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0279231 (* 1 = 0.0279231 loss)\nI0608 08:50:37.899449 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.339313 (* 1 = 0.339313 loss)\nI0608 08:50:37.899456 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0593097 (* 1 = 0.0593097 loss)\nI0608 08:50:37.899461 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.371966 (* 1 = 0.371966 loss)\nI0608 08:50:37.899467 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0593086 (* 1 = 0.0593086 loss)\nI0608 08:50:37.899474 13573 
sgd_solver.cpp:106] Iteration 66400, lr = 0.0001\nI0608 08:50:50.774173 13573 solver.cpp:229] Iteration 66420, loss = 0.575838\nI0608 08:50:50.774250 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0747968 (* 1 = 0.0747968 loss)\nI0608 08:50:50.774260 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0615501 (* 1 = 0.0615501 loss)\nI0608 08:50:50.774266 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.328909 (* 1 = 0.328909 loss)\nI0608 08:50:50.774271 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0219016 (* 1 = 0.0219016 loss)\nI0608 08:50:50.774277 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.326005 (* 1 = 0.326005 loss)\nI0608 08:50:50.774283 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0218821 (* 1 = 0.0218821 loss)\nI0608 08:50:50.774291 13573 sgd_solver.cpp:106] Iteration 66420, lr = 0.0001\nI0608 08:51:03.775665 13573 solver.cpp:229] Iteration 66440, loss = 0.427779\nI0608 08:51:03.775734 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0641286 (* 1 = 0.0641286 loss)\nI0608 08:51:03.775746 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.06292 (* 1 = 0.06292 loss)\nI0608 08:51:03.775753 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.188523 (* 1 = 0.188523 loss)\nI0608 08:51:03.775758 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0116184 (* 1 = 0.0116184 loss)\nI0608 08:51:03.775763 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187334 (* 1 = 0.187334 loss)\nI0608 08:51:03.775768 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0116194 (* 1 = 0.0116194 loss)\nI0608 08:51:03.775775 13573 sgd_solver.cpp:106] Iteration 66440, lr = 0.0001\nI0608 08:51:16.581347 13573 solver.cpp:229] Iteration 66460, loss = 0.512249\nI0608 08:51:16.581423 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00040478 (* 1 = 0.00040478 loss)\nI0608 08:51:16.581434 13573 
solver.cpp:245]     Train net output #1: loss_cls = 0.000474139 (* 1 = 0.000474139 loss)\nI0608 08:51:16.581440 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.301508 (* 1 = 0.301508 loss)\nI0608 08:51:16.581446 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0749828 (* 1 = 0.0749828 loss)\nI0608 08:51:16.581452 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.308713 (* 1 = 0.308713 loss)\nI0608 08:51:16.581459 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0749767 (* 1 = 0.0749767 loss)\nI0608 08:51:16.581465 13573 sgd_solver.cpp:106] Iteration 66460, lr = 0.0001\nI0608 08:51:29.431013 13573 solver.cpp:229] Iteration 66480, loss = 0.283217\nI0608 08:51:29.431074 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0598782 (* 1 = 0.0598782 loss)\nI0608 08:51:29.431083 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0382703 (* 1 = 0.0382703 loss)\nI0608 08:51:29.431089 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103013 (* 1 = 0.103013 loss)\nI0608 08:51:29.431095 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0299319 (* 1 = 0.0299319 loss)\nI0608 08:51:29.431102 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0982088 (* 1 = 0.0982088 loss)\nI0608 08:51:29.431107 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.029933 (* 1 = 0.029933 loss)\nI0608 08:51:29.431114 13573 sgd_solver.cpp:106] Iteration 66480, lr = 0.0001\nI0608 08:51:42.305475 13573 solver.cpp:229] Iteration 66500, loss = 2.17499\nI0608 08:51:42.305546 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00827783 (* 1 = 0.00827783 loss)\nI0608 08:51:42.305557 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00337389 (* 1 = 0.00337389 loss)\nI0608 08:51:42.305562 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.31355 (* 1 = 1.31355 loss)\nI0608 08:51:42.305568 13573 solver.cpp:245]     Train net output 
#3: p2_rpn_loss_bbox = 0.507576 (* 1 = 0.507576 loss)\nI0608 08:51:42.305574 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.26482 (* 1 = 1.26482 loss)\nI0608 08:51:42.305580 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.498746 (* 1 = 0.498746 loss)\nI0608 08:51:42.305586 13573 sgd_solver.cpp:106] Iteration 66500, lr = 0.0001\nI0608 08:51:55.464781 13573 solver.cpp:229] Iteration 66520, loss = 0.502987\nI0608 08:51:55.464854 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00145639 (* 1 = 0.00145639 loss)\nI0608 08:51:55.464862 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000604285 (* 1 = 0.000604285 loss)\nI0608 08:51:55.464869 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.194884 (* 1 = 0.194884 loss)\nI0608 08:51:55.464874 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00334697 (* 1 = 0.00334697 loss)\nI0608 08:51:55.464880 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187422 (* 1 = 0.187422 loss)\nI0608 08:51:55.464886 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00334847 (* 1 = 0.00334847 loss)\nI0608 08:51:55.464893 13573 sgd_solver.cpp:106] Iteration 66520, lr = 0.0001\nI0608 08:52:08.290688 13573 solver.cpp:229] Iteration 66540, loss = 0.602814\nI0608 08:52:08.290755 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000634156 (* 1 = 0.000634156 loss)\nI0608 08:52:08.290766 13573 solver.cpp:245]     Train net output #1: loss_cls = 4.20728e-05 (* 1 = 4.20728e-05 loss)\nI0608 08:52:08.290783 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.281112 (* 1 = 0.281112 loss)\nI0608 08:52:08.290791 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0371824 (* 1 = 0.0371824 loss)\nI0608 08:52:08.290797 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.247648 (* 1 = 0.247648 loss)\nI0608 08:52:08.290802 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 
0.0371816 (* 1 = 0.0371816 loss)\nI0608 08:52:08.290809 13573 sgd_solver.cpp:106] Iteration 66540, lr = 0.0001\nI0608 08:52:21.125371 13573 solver.cpp:229] Iteration 66560, loss = 0.620885\nI0608 08:52:21.125437 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0433965 (* 1 = 0.0433965 loss)\nI0608 08:52:21.125445 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0262747 (* 1 = 0.0262747 loss)\nI0608 08:52:21.125452 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.161457 (* 1 = 0.161457 loss)\nI0608 08:52:21.125458 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.106278 (* 1 = 0.106278 loss)\nI0608 08:52:21.125463 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.164934 (* 1 = 0.164934 loss)\nI0608 08:52:21.125470 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.106287 (* 1 = 0.106287 loss)\nI0608 08:52:21.125478 13573 sgd_solver.cpp:106] Iteration 66560, lr = 0.0001\nI0608 08:52:33.980204 13573 solver.cpp:229] Iteration 66580, loss = 0.609417\nI0608 08:52:33.980276 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.028647 (* 1 = 0.028647 loss)\nI0608 08:52:33.980286 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0229871 (* 1 = 0.0229871 loss)\nI0608 08:52:33.980293 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0917902 (* 1 = 0.0917902 loss)\nI0608 08:52:33.980299 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0318743 (* 1 = 0.0318743 loss)\nI0608 08:52:33.980305 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.091888 (* 1 = 0.091888 loss)\nI0608 08:52:33.980311 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0318669 (* 1 = 0.0318669 loss)\nI0608 08:52:33.980319 13573 sgd_solver.cpp:106] Iteration 66580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:52:46.923765 13573 solver.cpp:229] Iteration 66600, loss = 0.841884\nI0608 08:52:46.923826 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.0265865 (* 1 = 0.0265865 loss)\nI0608 08:52:46.923835 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0320613 (* 1 = 0.0320613 loss)\nI0608 08:52:46.923842 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.244608 (* 1 = 0.244608 loss)\nI0608 08:52:46.923848 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0360297 (* 1 = 0.0360297 loss)\nI0608 08:52:46.923854 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.278435 (* 1 = 0.278435 loss)\nI0608 08:52:46.923861 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0360169 (* 1 = 0.0360169 loss)\nI0608 08:52:46.923866 13573 sgd_solver.cpp:106] Iteration 66600, lr = 0.0001\nI0608 08:52:59.958353 13573 solver.cpp:229] Iteration 66620, loss = 1.19392\nI0608 08:52:59.958418 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0215151 (* 1 = 0.0215151 loss)\nI0608 08:52:59.958428 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0138308 (* 1 = 0.0138308 loss)\nI0608 08:52:59.958434 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.365769 (* 1 = 0.365769 loss)\nI0608 08:52:59.958441 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0236162 (* 1 = 0.0236162 loss)\nI0608 08:52:59.958446 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.381089 (* 1 = 0.381089 loss)\nI0608 08:52:59.958452 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0236064 (* 1 = 0.0236064 loss)\nI0608 08:52:59.958459 13573 sgd_solver.cpp:106] Iteration 66620, lr = 0.0001\nI0608 08:53:12.848839 13573 solver.cpp:229] Iteration 66640, loss = 0.513087\nI0608 08:53:12.848924 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000905212 (* 1 = 0.000905212 loss)\nI0608 08:53:12.848934 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00075867 (* 1 = 0.00075867 loss)\nI0608 08:53:12.848940 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.175574 (* 1 = 0.175574 
loss)\nI0608 08:53:12.848947 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0104542 (* 1 = 0.0104542 loss)\nI0608 08:53:12.848953 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.170861 (* 1 = 0.170861 loss)\nI0608 08:53:12.848958 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0104484 (* 1 = 0.0104484 loss)\nI0608 08:53:12.848965 13573 sgd_solver.cpp:106] Iteration 66640, lr = 0.0001\nI0608 08:53:25.853637 13573 solver.cpp:229] Iteration 66660, loss = 0.359508\nI0608 08:53:25.853703 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00155562 (* 1 = 0.00155562 loss)\nI0608 08:53:25.853713 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000147046 (* 1 = 0.000147046 loss)\nI0608 08:53:25.853719 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.167896 (* 1 = 0.167896 loss)\nI0608 08:53:25.853725 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0154612 (* 1 = 0.0154612 loss)\nI0608 08:53:25.853731 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.189476 (* 1 = 0.189476 loss)\nI0608 08:53:25.853737 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0154634 (* 1 = 0.0154634 loss)\nI0608 08:53:25.853744 13573 sgd_solver.cpp:106] Iteration 66660, lr = 0.0001\nI0608 08:53:38.859640 13573 solver.cpp:229] Iteration 66680, loss = 0.702253\nI0608 08:53:38.859705 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0765356 (* 1 = 0.0765356 loss)\nI0608 08:53:38.859716 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0713081 (* 1 = 0.0713081 loss)\nI0608 08:53:38.859722 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0917752 (* 1 = 0.0917752 loss)\nI0608 08:53:38.859728 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0669571 (* 1 = 0.0669571 loss)\nI0608 08:53:38.859735 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0905424 (* 1 = 0.0905424 loss)\nI0608 08:53:38.859740 
13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0669571 (* 1 = 0.0669571 loss)\nI0608 08:53:38.859746 13573 sgd_solver.cpp:106] Iteration 66680, lr = 0.0001\nI0608 08:53:51.958287 13573 solver.cpp:229] Iteration 66700, loss = 0.933642\nI0608 08:53:51.958364 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0890293 (* 1 = 0.0890293 loss)\nI0608 08:53:51.958374 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0788391 (* 1 = 0.0788391 loss)\nI0608 08:53:51.958379 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.578519 (* 1 = 0.578519 loss)\nI0608 08:53:51.958385 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0633796 (* 1 = 0.0633796 loss)\nI0608 08:53:51.958391 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.554139 (* 1 = 0.554139 loss)\nI0608 08:53:51.958397 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0633764 (* 1 = 0.0633764 loss)\nI0608 08:53:51.958405 13573 sgd_solver.cpp:106] Iteration 66700, lr = 0.0001\nI0608 08:54:04.794389 13573 solver.cpp:229] Iteration 66720, loss = 0.305684\nI0608 08:54:04.794457 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.076162 (* 1 = 0.076162 loss)\nI0608 08:54:04.794467 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0271893 (* 1 = 0.0271893 loss)\nI0608 08:54:04.794473 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101377 (* 1 = 0.101377 loss)\nI0608 08:54:04.794478 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0059408 (* 1 = 0.0059408 loss)\nI0608 08:54:04.794484 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100308 (* 1 = 0.100308 loss)\nI0608 08:54:04.794490 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00594126 (* 1 = 0.00594126 loss)\nI0608 08:54:04.794498 13573 sgd_solver.cpp:106] Iteration 66720, lr = 0.0001\nI0608 08:54:17.902500 13573 solver.cpp:229] Iteration 66740, loss = 0.294166\nI0608 08:54:17.902583 
13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0478762 (* 1 = 0.0478762 loss)\nI0608 08:54:17.902593 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.049359 (* 1 = 0.049359 loss)\nI0608 08:54:17.902601 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0987931 (* 1 = 0.0987931 loss)\nI0608 08:54:17.902606 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0200605 (* 1 = 0.0200605 loss)\nI0608 08:54:17.902612 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0925958 (* 1 = 0.0925958 loss)\nI0608 08:54:17.902618 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0200615 (* 1 = 0.0200615 loss)\nI0608 08:54:17.902626 13573 sgd_solver.cpp:106] Iteration 66740, lr = 0.0001\nI0608 08:54:30.950665 13573 solver.cpp:229] Iteration 66760, loss = 0.688082\nI0608 08:54:30.950731 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0651395 (* 1 = 0.0651395 loss)\nI0608 08:54:30.950742 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0259732 (* 1 = 0.0259732 loss)\nI0608 08:54:30.950747 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.386621 (* 1 = 0.386621 loss)\nI0608 08:54:30.950752 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0449744 (* 1 = 0.0449744 loss)\nI0608 08:54:30.950758 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.327393 (* 1 = 0.327393 loss)\nI0608 08:54:30.950765 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0449766 (* 1 = 0.0449766 loss)\nI0608 08:54:30.950783 13573 sgd_solver.cpp:106] Iteration 66760, lr = 0.0001\nI0608 08:54:43.955950 13573 solver.cpp:229] Iteration 66780, loss = 1.26444\nI0608 08:54:43.956022 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00797254 (* 1 = 0.00797254 loss)\nI0608 08:54:43.956032 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00108364 (* 1 = 0.00108364 loss)\nI0608 08:54:43.956037 13573 solver.cpp:245]     Train net output 
#2: p2_rpn_cls_loss = 0.54027 (* 1 = 0.54027 loss)\nI0608 08:54:43.956043 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0775726 (* 1 = 0.0775726 loss)\nI0608 08:54:43.956049 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.481658 (* 1 = 0.481658 loss)\nI0608 08:54:43.956055 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0775679 (* 1 = 0.0775679 loss)\nI0608 08:54:43.956063 13573 sgd_solver.cpp:106] Iteration 66780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:54:56.916268 13573 solver.cpp:229] Iteration 66800, loss = 0.647359\nI0608 08:54:56.916345 13573 solver.cpp:245]     Train net output #0: loss_bbox = 1.14621e-05 (* 1 = 1.14621e-05 loss)\nI0608 08:54:56.916355 13573 solver.cpp:245]     Train net output #1: loss_cls = 7.19052e-05 (* 1 = 7.19052e-05 loss)\nI0608 08:54:56.916362 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.222385 (* 1 = 0.222385 loss)\nI0608 08:54:56.916368 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0100462 (* 1 = 0.0100462 loss)\nI0608 08:54:56.916373 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.2775 (* 1 = 0.2775 loss)\nI0608 08:54:56.916378 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0100446 (* 1 = 0.0100446 loss)\nI0608 08:54:56.916385 13573 sgd_solver.cpp:106] Iteration 66800, lr = 0.0001\nI0608 08:55:09.675325 13573 solver.cpp:229] Iteration 66820, loss = 0.436225\nI0608 08:55:09.675391 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.038513 (* 1 = 0.038513 loss)\nI0608 08:55:09.675400 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0228925 (* 1 = 0.0228925 loss)\nI0608 08:55:09.675406 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101428 (* 1 = 0.101428 loss)\nI0608 08:55:09.675412 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0649201 (* 1 = 0.0649201 loss)\nI0608 08:55:09.675418 13573 solver.cpp:245]     Train net output #4: 
rpn_cls_loss = 0.101148 (* 1 = 0.101148 loss)\nI0608 08:55:09.675424 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0649157 (* 1 = 0.0649157 loss)\nI0608 08:55:09.675431 13573 sgd_solver.cpp:106] Iteration 66820, lr = 0.0001\nI0608 08:55:22.669688 13573 solver.cpp:229] Iteration 66840, loss = 0.369318\nI0608 08:55:22.669747 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0519224 (* 1 = 0.0519224 loss)\nI0608 08:55:22.669757 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.100896 (* 1 = 0.100896 loss)\nI0608 08:55:22.669764 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0966792 (* 1 = 0.0966792 loss)\nI0608 08:55:22.669770 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0281462 (* 1 = 0.0281462 loss)\nI0608 08:55:22.669775 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0941906 (* 1 = 0.0941906 loss)\nI0608 08:55:22.669781 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0281457 (* 1 = 0.0281457 loss)\nI0608 08:55:22.669790 13573 sgd_solver.cpp:106] Iteration 66840, lr = 0.0001\nI0608 08:55:35.737743 13573 solver.cpp:229] Iteration 66860, loss = 0.837013\nI0608 08:55:35.737814 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00110768 (* 1 = 0.00110768 loss)\nI0608 08:55:35.737824 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000433163 (* 1 = 0.000433163 loss)\nI0608 08:55:35.737831 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.210819 (* 1 = 0.210819 loss)\nI0608 08:55:35.737838 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0171769 (* 1 = 0.0171769 loss)\nI0608 08:55:35.737843 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.214885 (* 1 = 0.214885 loss)\nI0608 08:55:35.737849 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0171834 (* 1 = 0.0171834 loss)\nI0608 08:55:35.737856 13573 sgd_solver.cpp:106] Iteration 66860, lr = 0.0001\nI0608 08:55:48.518815 
13573 solver.cpp:229] Iteration 66880, loss = 0.847614\nI0608 08:55:48.518883 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.125251 (* 1 = 0.125251 loss)\nI0608 08:55:48.518893 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.1586 (* 1 = 0.1586 loss)\nI0608 08:55:48.518899 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.299228 (* 1 = 0.299228 loss)\nI0608 08:55:48.518905 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0433966 (* 1 = 0.0433966 loss)\nI0608 08:55:48.518911 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.299449 (* 1 = 0.299449 loss)\nI0608 08:55:48.518918 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0433951 (* 1 = 0.0433951 loss)\nI0608 08:55:48.518924 13573 sgd_solver.cpp:106] Iteration 66880, lr = 0.0001\nI0608 08:56:01.376493 13573 solver.cpp:229] Iteration 66900, loss = 1.3298\nI0608 08:56:01.376572 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0420341 (* 1 = 0.0420341 loss)\nI0608 08:56:01.376581 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0912938 (* 1 = 0.0912938 loss)\nI0608 08:56:01.376588 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.688129 (* 1 = 0.688129 loss)\nI0608 08:56:01.376593 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.140582 (* 1 = 0.140582 loss)\nI0608 08:56:01.376600 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.68757 (* 1 = 0.68757 loss)\nI0608 08:56:01.376605 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.140631 (* 1 = 0.140631 loss)\nI0608 08:56:01.376611 13573 sgd_solver.cpp:106] Iteration 66900, lr = 0.0001\nI0608 08:56:14.201122 13573 solver.cpp:229] Iteration 66920, loss = 0.324261\nI0608 08:56:14.201187 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0928379 (* 1 = 0.0928379 loss)\nI0608 08:56:14.201197 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0196253 (* 1 = 0.0196253 loss)\nI0608 
08:56:14.201203 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0799174 (* 1 = 0.0799174 loss)\nI0608 08:56:14.201208 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0130627 (* 1 = 0.0130627 loss)\nI0608 08:56:14.201215 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0808924 (* 1 = 0.0808924 loss)\nI0608 08:56:14.201220 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0130623 (* 1 = 0.0130623 loss)\nI0608 08:56:14.201227 13573 sgd_solver.cpp:106] Iteration 66920, lr = 0.0001\nI0608 08:56:27.156316 13573 solver.cpp:229] Iteration 66940, loss = 0.917423\nI0608 08:56:27.156381 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.150141 (* 1 = 0.150141 loss)\nI0608 08:56:27.156397 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0854345 (* 1 = 0.0854345 loss)\nI0608 08:56:27.156404 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228006 (* 1 = 0.228006 loss)\nI0608 08:56:27.156410 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0239285 (* 1 = 0.0239285 loss)\nI0608 08:56:27.156416 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223549 (* 1 = 0.223549 loss)\nI0608 08:56:27.156422 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0239307 (* 1 = 0.0239307 loss)\nI0608 08:56:27.156430 13573 sgd_solver.cpp:106] Iteration 66940, lr = 0.0001\nI0608 08:56:40.012243 13573 solver.cpp:229] Iteration 66960, loss = 1.06097\nI0608 08:56:40.012310 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00285363 (* 1 = 0.00285363 loss)\nI0608 08:56:40.012320 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000476616 (* 1 = 0.000476616 loss)\nI0608 08:56:40.012326 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.650038 (* 1 = 0.650038 loss)\nI0608 08:56:40.012332 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.232666 (* 1 = 0.232666 loss)\nI0608 08:56:40.012338 13573 
solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.619805 (* 1 = 0.619805 loss)\nI0608 08:56:40.012344 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.232589 (* 1 = 0.232589 loss)\nI0608 08:56:40.012351 13573 sgd_solver.cpp:106] Iteration 66960, lr = 0.0001\nI0608 08:56:52.889451 13573 solver.cpp:229] Iteration 66980, loss = 0.31594\nI0608 08:56:52.889513 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0623488 (* 1 = 0.0623488 loss)\nI0608 08:56:52.889523 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0296324 (* 1 = 0.0296324 loss)\nI0608 08:56:52.889528 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0994698 (* 1 = 0.0994698 loss)\nI0608 08:56:52.889534 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0271094 (* 1 = 0.0271094 loss)\nI0608 08:56:52.889539 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0977061 (* 1 = 0.0977061 loss)\nI0608 08:56:52.889545 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0271124 (* 1 = 0.0271124 loss)\nI0608 08:56:52.889552 13573 sgd_solver.cpp:106] Iteration 66980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:57:05.773269 13573 solver.cpp:229] Iteration 67000, loss = 0.66853\nI0608 08:57:05.773349 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00184314 (* 1 = 0.00184314 loss)\nI0608 08:57:05.773357 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000917034 (* 1 = 0.000917034 loss)\nI0608 08:57:05.773365 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.242466 (* 1 = 0.242466 loss)\nI0608 08:57:05.773370 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0263658 (* 1 = 0.0263658 loss)\nI0608 08:57:05.773376 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.239832 (* 1 = 0.239832 loss)\nI0608 08:57:05.773381 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0263646 (* 1 = 0.0263646 loss)\nI0608 08:57:05.773388 13573 
sgd_solver.cpp:106] Iteration 67000, lr = 0.0001\nI0608 08:57:18.825023 13573 solver.cpp:229] Iteration 67020, loss = 0.480968\nI0608 08:57:18.825093 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0921065 (* 1 = 0.0921065 loss)\nI0608 08:57:18.825103 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10283 (* 1 = 0.10283 loss)\nI0608 08:57:18.825109 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197773 (* 1 = 0.197773 loss)\nI0608 08:57:18.825115 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0354738 (* 1 = 0.0354738 loss)\nI0608 08:57:18.825120 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.187833 (* 1 = 0.187833 loss)\nI0608 08:57:18.825126 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0354729 (* 1 = 0.0354729 loss)\nI0608 08:57:18.825134 13573 sgd_solver.cpp:106] Iteration 67020, lr = 0.0001\nI0608 08:57:31.863608 13573 solver.cpp:229] Iteration 67040, loss = 0.43586\nI0608 08:57:31.863692 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0587435 (* 1 = 0.0587435 loss)\nI0608 08:57:31.863703 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0531576 (* 1 = 0.0531576 loss)\nI0608 08:57:31.863709 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.201585 (* 1 = 0.201585 loss)\nI0608 08:57:31.863715 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0186546 (* 1 = 0.0186546 loss)\nI0608 08:57:31.863721 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.219836 (* 1 = 0.219836 loss)\nI0608 08:57:31.863728 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0186543 (* 1 = 0.0186543 loss)\nI0608 08:57:31.863734 13573 sgd_solver.cpp:106] Iteration 67040, lr = 0.0001\nI0608 08:57:44.853330 13573 solver.cpp:229] Iteration 67060, loss = 0.373421\nI0608 08:57:44.853394 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00391898 (* 1 = 0.00391898 loss)\nI0608 08:57:44.853404 13573 
solver.cpp:245]     Train net output #1: loss_cls = 3.81374e-05 (* 1 = 3.81374e-05 loss)\nI0608 08:57:44.853410 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.128452 (* 1 = 0.128452 loss)\nI0608 08:57:44.853416 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00207942 (* 1 = 0.00207942 loss)\nI0608 08:57:44.853422 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.170444 (* 1 = 0.170444 loss)\nI0608 08:57:44.853428 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00208001 (* 1 = 0.00208001 loss)\nI0608 08:57:44.853435 13573 sgd_solver.cpp:106] Iteration 67060, lr = 0.0001\nI0608 08:57:57.775375 13573 solver.cpp:229] Iteration 67080, loss = 0.324156\nI0608 08:57:57.775463 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0386071 (* 1 = 0.0386071 loss)\nI0608 08:57:57.775473 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0531657 (* 1 = 0.0531657 loss)\nI0608 08:57:57.775480 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0747953 (* 1 = 0.0747953 loss)\nI0608 08:57:57.775486 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00879269 (* 1 = 0.00879269 loss)\nI0608 08:57:57.775492 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0850069 (* 1 = 0.0850069 loss)\nI0608 08:57:57.775497 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00879228 (* 1 = 0.00879228 loss)\nI0608 08:57:57.775506 13573 sgd_solver.cpp:106] Iteration 67080, lr = 0.0001\nI0608 08:58:10.610275 13573 solver.cpp:229] Iteration 67100, loss = 0.469973\nI0608 08:58:10.610363 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000254087 (* 1 = 0.000254087 loss)\nI0608 08:58:10.610373 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000114004 (* 1 = 0.000114004 loss)\nI0608 08:58:10.610379 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.162009 (* 1 = 0.162009 loss)\nI0608 08:58:10.610385 13573 solver.cpp:245]     
Train net output #3: p2_rpn_loss_bbox = 0.0055359 (* 1 = 0.0055359 loss)\nI0608 08:58:10.610391 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.17376 (* 1 = 0.17376 loss)\nI0608 08:58:10.610396 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00552662 (* 1 = 0.00552662 loss)\nI0608 08:58:10.610404 13573 sgd_solver.cpp:106] Iteration 67100, lr = 0.0001\nI0608 08:58:23.615041 13573 solver.cpp:229] Iteration 67120, loss = 0.390596\nI0608 08:58:23.615105 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00861763 (* 1 = 0.00861763 loss)\nI0608 08:58:23.615114 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0251981 (* 1 = 0.0251981 loss)\nI0608 08:58:23.615120 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.100337 (* 1 = 0.100337 loss)\nI0608 08:58:23.615126 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0634245 (* 1 = 0.0634245 loss)\nI0608 08:58:23.615133 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0984045 (* 1 = 0.0984045 loss)\nI0608 08:58:23.615139 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0634194 (* 1 = 0.0634194 loss)\nI0608 08:58:23.615145 13573 sgd_solver.cpp:106] Iteration 67120, lr = 0.0001\nI0608 08:58:36.564167 13573 solver.cpp:229] Iteration 67140, loss = 0.448364\nI0608 08:58:36.564236 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000145438 (* 1 = 0.000145438 loss)\nI0608 08:58:36.564246 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00310053 (* 1 = 0.00310053 loss)\nI0608 08:58:36.564254 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.169469 (* 1 = 0.169469 loss)\nI0608 08:58:36.564258 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0103997 (* 1 = 0.0103997 loss)\nI0608 08:58:36.564265 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.155607 (* 1 = 0.155607 loss)\nI0608 08:58:36.564270 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.0104054 (* 1 = 0.0104054 loss)\nI0608 08:58:36.564277 13573 sgd_solver.cpp:106] Iteration 67140, lr = 0.0001\nI0608 08:58:49.594784 13573 solver.cpp:229] Iteration 67160, loss = 0.316304\nI0608 08:58:49.594867 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0261329 (* 1 = 0.0261329 loss)\nI0608 08:58:49.594877 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0101074 (* 1 = 0.0101074 loss)\nI0608 08:58:49.594884 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0949629 (* 1 = 0.0949629 loss)\nI0608 08:58:49.594890 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0367502 (* 1 = 0.0367502 loss)\nI0608 08:58:49.594897 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10067 (* 1 = 0.10067 loss)\nI0608 08:58:49.594902 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0367474 (* 1 = 0.0367474 loss)\nI0608 08:58:49.594909 13573 sgd_solver.cpp:106] Iteration 67160, lr = 0.0001\nI0608 08:59:02.530002 13573 solver.cpp:229] Iteration 67180, loss = 0.8772\nI0608 08:59:02.530082 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00117017 (* 1 = 0.00117017 loss)\nI0608 08:59:02.530092 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000432114 (* 1 = 0.000432114 loss)\nI0608 08:59:02.530098 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.270362 (* 1 = 0.270362 loss)\nI0608 08:59:02.530105 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0196017 (* 1 = 0.0196017 loss)\nI0608 08:59:02.530110 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.265198 (* 1 = 0.265198 loss)\nI0608 08:59:02.530117 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0195978 (* 1 = 0.0195978 loss)\nI0608 08:59:02.530123 13573 sgd_solver.cpp:106] Iteration 67180, lr = 0.0001\nspeed: 0.645s / iter\nI0608 08:59:15.530205 13573 solver.cpp:229] Iteration 67200, loss = 0.59561\nI0608 08:59:15.530268 13573 solver.cpp:245]    
 Train net output #0: loss_bbox = 0.0773956 (* 1 = 0.0773956 loss)\nI0608 08:59:15.530277 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0719158 (* 1 = 0.0719158 loss)\nI0608 08:59:15.530283 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.289237 (* 1 = 0.289237 loss)\nI0608 08:59:15.530289 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0910757 (* 1 = 0.0910757 loss)\nI0608 08:59:15.530295 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.2896 (* 1 = 0.2896 loss)\nI0608 08:59:15.530300 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0910439 (* 1 = 0.0910439 loss)\nI0608 08:59:15.530308 13573 sgd_solver.cpp:106] Iteration 67200, lr = 0.0001\nI0608 08:59:28.568382 13573 solver.cpp:229] Iteration 67220, loss = 1.05158\nI0608 08:59:28.568459 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0923317 (* 1 = 0.0923317 loss)\nI0608 08:59:28.568469 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.118158 (* 1 = 0.118158 loss)\nI0608 08:59:28.568475 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101268 (* 1 = 0.101268 loss)\nI0608 08:59:28.568480 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0157754 (* 1 = 0.0157754 loss)\nI0608 08:59:28.568486 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0998568 (* 1 = 0.0998568 loss)\nI0608 08:59:28.568491 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0157744 (* 1 = 0.0157744 loss)\nI0608 08:59:28.568500 13573 sgd_solver.cpp:106] Iteration 67220, lr = 0.0001\nI0608 08:59:41.465716 13573 solver.cpp:229] Iteration 67240, loss = 1.00235\nI0608 08:59:41.465783 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0217127 (* 1 = 0.0217127 loss)\nI0608 08:59:41.465792 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0216978 (* 1 = 0.0216978 loss)\nI0608 08:59:41.465798 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0991743 (* 1 
= 0.0991743 loss)\nI0608 08:59:41.465804 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0264252 (* 1 = 0.0264252 loss)\nI0608 08:59:41.465811 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.10089 (* 1 = 0.10089 loss)\nI0608 08:59:41.465816 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0264287 (* 1 = 0.0264287 loss)\nI0608 08:59:41.465823 13573 sgd_solver.cpp:106] Iteration 67240, lr = 0.0001\nI0608 08:59:54.637348 13573 solver.cpp:229] Iteration 67260, loss = 0.260897\nI0608 08:59:54.637410 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00890591 (* 1 = 0.00890591 loss)\nI0608 08:59:54.637419 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0163704 (* 1 = 0.0163704 loss)\nI0608 08:59:54.637425 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0907723 (* 1 = 0.0907723 loss)\nI0608 08:59:54.637431 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0451531 (* 1 = 0.0451531 loss)\nI0608 08:59:54.637437 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.087344 (* 1 = 0.087344 loss)\nI0608 08:59:54.637444 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0451529 (* 1 = 0.0451529 loss)\nI0608 08:59:54.637449 13573 sgd_solver.cpp:106] Iteration 67260, lr = 0.0001\nI0608 09:00:07.431932 13573 solver.cpp:229] Iteration 67280, loss = 0.323992\nI0608 09:00:07.432005 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0334914 (* 1 = 0.0334914 loss)\nI0608 09:00:07.432014 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0257759 (* 1 = 0.0257759 loss)\nI0608 09:00:07.432020 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0971375 (* 1 = 0.0971375 loss)\nI0608 09:00:07.432026 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0627936 (* 1 = 0.0627936 loss)\nI0608 09:00:07.432031 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0960953 (* 1 = 0.0960953 loss)\nI0608 
09:00:07.432037 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0627957 (* 1 = 0.0627957 loss)\nI0608 09:00:07.432044 13573 sgd_solver.cpp:106] Iteration 67280, lr = 0.0001\nI0608 09:00:20.098548 13573 solver.cpp:229] Iteration 67300, loss = 0.424032\nI0608 09:00:20.098637 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0729315 (* 1 = 0.0729315 loss)\nI0608 09:00:20.098649 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0892174 (* 1 = 0.0892174 loss)\nI0608 09:00:20.098655 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.131294 (* 1 = 0.131294 loss)\nI0608 09:00:20.098661 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0380949 (* 1 = 0.0380949 loss)\nI0608 09:00:20.098667 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.128011 (* 1 = 0.128011 loss)\nI0608 09:00:20.098673 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.038093 (* 1 = 0.038093 loss)\nI0608 09:00:20.098681 13573 sgd_solver.cpp:106] Iteration 67300, lr = 0.0001\nI0608 09:00:33.208106 13573 solver.cpp:229] Iteration 67320, loss = 0.503287\nI0608 09:00:33.208171 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0348795 (* 1 = 0.0348795 loss)\nI0608 09:00:33.208184 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0141088 (* 1 = 0.0141088 loss)\nI0608 09:00:33.208190 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.294396 (* 1 = 0.294396 loss)\nI0608 09:00:33.208196 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0255149 (* 1 = 0.0255149 loss)\nI0608 09:00:33.208202 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.274823 (* 1 = 0.274823 loss)\nI0608 09:00:33.208209 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0255122 (* 1 = 0.0255122 loss)\nI0608 09:00:33.208216 13573 sgd_solver.cpp:106] Iteration 67320, lr = 0.0001\nI0608 09:00:46.074549 13573 solver.cpp:229] Iteration 67340, loss = 0.908929\nI0608 
09:00:46.074611 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0622671 (* 1 = 0.0622671 loss)\nI0608 09:00:46.074620 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0381332 (* 1 = 0.0381332 loss)\nI0608 09:00:46.074627 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0873327 (* 1 = 0.0873327 loss)\nI0608 09:00:46.074633 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0430004 (* 1 = 0.0430004 loss)\nI0608 09:00:46.074640 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0830445 (* 1 = 0.0830445 loss)\nI0608 09:00:46.074645 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0430036 (* 1 = 0.0430036 loss)\nI0608 09:00:46.074652 13573 sgd_solver.cpp:106] Iteration 67340, lr = 0.0001\nI0608 09:00:59.069104 13573 solver.cpp:229] Iteration 67360, loss = 0.459373\nI0608 09:00:59.069174 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0168135 (* 1 = 0.0168135 loss)\nI0608 09:00:59.069183 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0423821 (* 1 = 0.0423821 loss)\nI0608 09:00:59.069190 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.262148 (* 1 = 0.262148 loss)\nI0608 09:00:59.069195 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0107655 (* 1 = 0.0107655 loss)\nI0608 09:00:59.069200 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.279249 (* 1 = 0.279249 loss)\nI0608 09:00:59.069207 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0107657 (* 1 = 0.0107657 loss)\nI0608 09:00:59.069214 13573 sgd_solver.cpp:106] Iteration 67360, lr = 0.0001\nI0608 09:01:11.976011 13573 solver.cpp:229] Iteration 67380, loss = 0.46906\nI0608 09:01:11.976076 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0398704 (* 1 = 0.0398704 loss)\nI0608 09:01:11.976086 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0456045 (* 1 = 0.0456045 loss)\nI0608 09:01:11.976092 13573 solver.cpp:245]     
Train net output #2: p2_rpn_cls_loss = 0.101493 (* 1 = 0.101493 loss)\nI0608 09:01:11.976099 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00906824 (* 1 = 0.00906824 loss)\nI0608 09:01:11.976104 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0959025 (* 1 = 0.0959025 loss)\nI0608 09:01:11.976110 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00906682 (* 1 = 0.00906682 loss)\nI0608 09:01:11.976117 13573 sgd_solver.cpp:106] Iteration 67380, lr = 0.0001\nspeed: 0.645s / iter\nI0608 09:01:24.958416 13573 solver.cpp:229] Iteration 67400, loss = 0.236954\nI0608 09:01:24.958492 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0337707 (* 1 = 0.0337707 loss)\nI0608 09:01:24.958503 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0139487 (* 1 = 0.0139487 loss)\nI0608 09:01:24.958508 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0897987 (* 1 = 0.0897987 loss)\nI0608 09:01:24.958514 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00532075 (* 1 = 0.00532075 loss)\nI0608 09:01:24.958520 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0879579 (* 1 = 0.0879579 loss)\nI0608 09:01:24.958526 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00531968 (* 1 = 0.00531968 loss)\nI0608 09:01:24.958533 13573 sgd_solver.cpp:106] Iteration 67400, lr = 0.0001\nI0608 09:01:37.953089 13573 solver.cpp:229] Iteration 67420, loss = 0.555155\nI0608 09:01:37.953158 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.071386 (* 1 = 0.071386 loss)\nI0608 09:01:37.953168 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.111808 (* 1 = 0.111808 loss)\nI0608 09:01:37.953174 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.294215 (* 1 = 0.294215 loss)\nI0608 09:01:37.953181 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0263884 (* 1 = 0.0263884 loss)\nI0608 09:01:37.953186 13573 solver.cpp:245]     
Train net output #4: rpn_cls_loss = 0.295925 (* 1 = 0.295925 loss)\nI0608 09:01:37.953191 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0263908 (* 1 = 0.0263908 loss)\nI0608 09:01:37.953198 13573 sgd_solver.cpp:106] Iteration 67420, lr = 0.0001\nI0608 09:01:50.713491 13573 solver.cpp:229] Iteration 67440, loss = 1.35148\nI0608 09:01:50.713565 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00083412 (* 1 = 0.00083412 loss)\nI0608 09:01:50.713577 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00914731 (* 1 = 0.00914731 loss)\nI0608 09:01:50.713582 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22696 (* 1 = 0.22696 loss)\nI0608 09:01:50.713588 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.020788 (* 1 = 0.020788 loss)\nI0608 09:01:50.713594 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.231302 (* 1 = 0.231302 loss)\nI0608 09:01:50.713599 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0207823 (* 1 = 0.0207823 loss)\nI0608 09:01:50.713606 13573 sgd_solver.cpp:106] Iteration 67440, lr = 0.0001\nI0608 09:02:03.612850 13573 solver.cpp:229] Iteration 67460, loss = 0.491129\nI0608 09:02:03.612954 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0505635 (* 1 = 0.0505635 loss)\nI0608 09:02:03.612964 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0611947 (* 1 = 0.0611947 loss)\nI0608 09:02:03.612970 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.116889 (* 1 = 0.116889 loss)\nI0608 09:02:03.612977 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0706465 (* 1 = 0.0706465 loss)\nI0608 09:02:03.612982 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.117809 (* 1 = 0.117809 loss)\nI0608 09:02:03.612987 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.070651 (* 1 = 0.070651 loss)\nI0608 09:02:03.612994 13573 sgd_solver.cpp:106] Iteration 67460, lr = 0.0001\nI0608 
09:02:16.544200 13573 solver.cpp:229] Iteration 67480, loss = 0.48457
I0608 09:02:16.544275 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00244308 (* 1 = 0.00244308 loss)
I0608 09:02:16.544286 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00607952 (* 1 = 0.00607952 loss)
I0608 09:02:16.544291 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.23253 (* 1 = 0.23253 loss)
I0608 09:02:16.544297 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0235452 (* 1 = 0.0235452 loss)
I0608 09:02:16.544303 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.245279 (* 1 = 0.245279 loss)
I0608 09:02:16.544309 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0235429 (* 1 = 0.0235429 loss)
I0608 09:02:16.544317 13573 sgd_solver.cpp:106] Iteration 67480, lr = 0.0001
I0608 09:02:29.589849 13573 solver.cpp:229] Iteration 67500, loss = 0.462879
I0608 09:02:29.589921 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0623679 (* 1 = 0.0623679 loss)
I0608 09:02:29.589931 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0589418 (* 1 = 0.0589418 loss)
I0608 09:02:29.589937 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.130565 (* 1 = 0.130565 loss)
I0608 09:02:29.589943 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.044426 (* 1 = 0.044426 loss)
I0608 09:02:29.589948 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122718 (* 1 = 0.122718 loss)
I0608 09:02:29.589954 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0444238 (* 1 = 0.0444238 loss)
I0608 09:02:29.589960 13573 sgd_solver.cpp:106] Iteration 67500, lr = 0.0001
I0608 09:02:42.463165 13573 solver.cpp:229] Iteration 67520, loss = 0.954283
I0608 09:02:42.463237 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0612341 (* 1 = 0.0612341 loss)
I0608 09:02:42.463248 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.063555 (* 1 = 0.063555 loss)
I0608 09:02:42.463253 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0991559 (* 1 = 0.0991559 loss)
I0608 09:02:42.463259 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0117045 (* 1 = 0.0117045 loss)
I0608 09:02:42.463265 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100314 (* 1 = 0.100314 loss)
I0608 09:02:42.463271 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0117071 (* 1 = 0.0117071 loss)
I0608 09:02:42.463279 13573 sgd_solver.cpp:106] Iteration 67520, lr = 0.0001
I0608 09:02:55.495199 13573 solver.cpp:229] Iteration 67540, loss = 2.21661
I0608 09:02:55.495268 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0522721 (* 1 = 0.0522721 loss)
I0608 09:02:55.495278 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0483351 (* 1 = 0.0483351 loss)
I0608 09:02:55.495285 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.579362 (* 1 = 0.579362 loss)
I0608 09:02:55.495290 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0509381 (* 1 = 0.0509381 loss)
I0608 09:02:55.495296 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.545457 (* 1 = 0.545457 loss)
I0608 09:02:55.495301 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0509432 (* 1 = 0.0509432 loss)
I0608 09:02:55.495307 13573 sgd_solver.cpp:106] Iteration 67540, lr = 0.0001
I0608 09:03:08.341130 13573 solver.cpp:229] Iteration 67560, loss = 0.489673
I0608 09:03:08.341259 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0352368 (* 1 = 0.0352368 loss)
I0608 09:03:08.341269 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0730688 (* 1 = 0.0730688 loss)
I0608 09:03:08.341276 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0986331 (* 1 = 0.0986331 loss)
I0608 09:03:08.341282 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00884014 (* 1 = 0.00884014 loss)
I0608 09:03:08.341289 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107476 (* 1 = 0.107476 loss)
I0608 09:03:08.341295 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00884049 (* 1 = 0.00884049 loss)
I0608 09:03:08.341301 13573 sgd_solver.cpp:106] Iteration 67560, lr = 0.0001
I0608 09:03:21.189085 13573 solver.cpp:229] Iteration 67580, loss = 1.39927
I0608 09:03:21.189172 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.143418 (* 1 = 0.143418 loss)
I0608 09:03:21.189182 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.163195 (* 1 = 0.163195 loss)
I0608 09:03:21.189188 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.456644 (* 1 = 0.456644 loss)
I0608 09:03:21.189194 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.13056 (* 1 = 0.13056 loss)
I0608 09:03:21.189201 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.445721 (* 1 = 0.445721 loss)
I0608 09:03:21.189208 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.130559 (* 1 = 0.130559 loss)
I0608 09:03:21.189214 13573 sgd_solver.cpp:106] Iteration 67580, lr = 0.0001
speed: 0.645s / iter
I0608 09:03:34.011281 13573 solver.cpp:229] Iteration 67600, loss = 0.499101
I0608 09:03:34.011348 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000221177 (* 1 = 0.000221177 loss)
I0608 09:03:34.011358 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00014062 (* 1 = 0.00014062 loss)
I0608 09:03:34.011364 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.314724 (* 1 = 0.314724 loss)
I0608 09:03:34.011370 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0384638 (* 1 = 0.0384638 loss)
I0608 09:03:34.011376 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.26038 (* 1 = 0.26038 loss)
I0608 09:03:34.011382 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0384685 (* 1 = 0.0384685 loss)
I0608 09:03:34.011415 13573 sgd_solver.cpp:106] Iteration 67600, lr = 0.0001
I0608 09:03:46.910869 13573 solver.cpp:229] Iteration 67620, loss = 0.492356
I0608 09:03:46.910936 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0477555 (* 1 = 0.0477555 loss)
I0608 09:03:46.910945 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0140571 (* 1 = 0.0140571 loss)
I0608 09:03:46.910951 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0968 (* 1 = 0.0968 loss)
I0608 09:03:46.910957 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00994594 (* 1 = 0.00994594 loss)
I0608 09:03:46.910964 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0945596 (* 1 = 0.0945596 loss)
I0608 09:03:46.910970 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00994353 (* 1 = 0.00994353 loss)
I0608 09:03:46.910977 13573 sgd_solver.cpp:106] Iteration 67620, lr = 0.0001
I0608 09:03:59.739693 13573 solver.cpp:229] Iteration 67640, loss = 0.585931
I0608 09:03:59.739761 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.044366 (* 1 = 0.044366 loss)
I0608 09:03:59.739774 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0672827 (* 1 = 0.0672827 loss)
I0608 09:03:59.739780 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.234985 (* 1 = 0.234985 loss)
I0608 09:03:59.739786 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0156759 (* 1 = 0.0156759 loss)
I0608 09:03:59.739791 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.240636 (* 1 = 0.240636 loss)
I0608 09:03:59.739797 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0156768 (* 1 = 0.0156768 loss)
I0608 09:03:59.739804 13573 sgd_solver.cpp:106] Iteration 67640, lr = 0.0001
I0608 09:04:12.740865 13573 solver.cpp:229] Iteration 67660, loss = 0.578878
I0608 09:04:12.740926 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0659003 (* 1 = 0.0659003 loss)
I0608 09:04:12.740936 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0170423 (* 1 = 0.0170423 loss)
I0608 09:04:12.740942 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.307074 (* 1 = 0.307074 loss)
I0608 09:04:12.740947 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0523233 (* 1 = 0.0523233 loss)
I0608 09:04:12.740953 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.300419 (* 1 = 0.300419 loss)
I0608 09:04:12.740958 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0523257 (* 1 = 0.0523257 loss)
I0608 09:04:12.740965 13573 sgd_solver.cpp:106] Iteration 67660, lr = 0.0001
I0608 09:04:25.672154 13573 solver.cpp:229] Iteration 67680, loss = 0.491935
I0608 09:04:25.672230 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00704599 (* 1 = 0.00704599 loss)
I0608 09:04:25.672238 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00573099 (* 1 = 0.00573099 loss)
I0608 09:04:25.672245 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.248332 (* 1 = 0.248332 loss)
I0608 09:04:25.672251 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0217377 (* 1 = 0.0217377 loss)
I0608 09:04:25.672256 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.236557 (* 1 = 0.236557 loss)
I0608 09:04:25.672262 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0217285 (* 1 = 0.0217285 loss)
I0608 09:04:25.672271 13573 sgd_solver.cpp:106] Iteration 67680, lr = 0.0001
I0608 09:04:38.505614 13573 solver.cpp:229] Iteration 67700, loss = 0.598608
I0608 09:04:38.505681 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.104954 (* 1 = 0.104954 loss)
I0608 09:04:38.505692 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.069922 (* 1 = 0.069922 loss)
I0608 09:04:38.505697 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.264917 (* 1 = 0.264917 loss)
I0608 09:04:38.505703 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0447238 (* 1 = 0.0447238 loss)
I0608 09:04:38.505708 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.268057 (* 1 = 0.268057 loss)
I0608 09:04:38.505714 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0447172 (* 1 = 0.0447172 loss)
I0608 09:04:38.505722 13573 sgd_solver.cpp:106] Iteration 67700, lr = 0.0001
I0608 09:04:51.176689 13573 solver.cpp:229] Iteration 67720, loss = 0.930111
I0608 09:04:51.176751 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0640734 (* 1 = 0.0640734 loss)
I0608 09:04:51.176761 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0463609 (* 1 = 0.0463609 loss)
I0608 09:04:51.176767 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0986055 (* 1 = 0.0986055 loss)
I0608 09:04:51.176774 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0151603 (* 1 = 0.0151603 loss)
I0608 09:04:51.176779 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0938633 (* 1 = 0.0938633 loss)
I0608 09:04:51.176784 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0151594 (* 1 = 0.0151594 loss)
I0608 09:04:51.176791 13573 sgd_solver.cpp:106] Iteration 67720, lr = 0.0001
I0608 09:05:04.149580 13573 solver.cpp:229] Iteration 67740, loss = 1.51976
I0608 09:05:04.149653 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.158213 (* 1 = 0.158213 loss)
I0608 09:05:04.149662 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.126675 (* 1 = 0.126675 loss)
I0608 09:05:04.149668 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.107623 (* 1 = 0.107623 loss)
I0608 09:05:04.149674 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0232696 (* 1 = 0.0232696 loss)
I0608 09:05:04.149680 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.115248 (* 1 = 0.115248 loss)
I0608 09:05:04.149686 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0232707 (* 1 = 0.0232707 loss)
I0608 09:05:04.149693 13573 sgd_solver.cpp:106] Iteration 67740, lr = 0.0001
I0608 09:05:17.205516 13573 solver.cpp:229] Iteration 67760, loss = 0.5813
I0608 09:05:17.205590 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0906289 (* 1 = 0.0906289 loss)
I0608 09:05:17.205600 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.13808 (* 1 = 0.13808 loss)
I0608 09:05:17.205606 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.119492 (* 1 = 0.119492 loss)
I0608 09:05:17.205612 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0362558 (* 1 = 0.0362558 loss)
I0608 09:05:17.205618 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122747 (* 1 = 0.122747 loss)
I0608 09:05:17.205623 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0362535 (* 1 = 0.0362535 loss)
I0608 09:05:17.205631 13573 sgd_solver.cpp:106] Iteration 67760, lr = 0.0001
I0608 09:05:30.178287 13573 solver.cpp:229] Iteration 67780, loss = 0.776146
I0608 09:05:30.178352 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0747015 (* 1 = 0.0747015 loss)
I0608 09:05:30.178362 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0519121 (* 1 = 0.0519121 loss)
I0608 09:05:30.178369 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.168689 (* 1 = 0.168689 loss)
I0608 09:05:30.178375 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00476725 (* 1 = 0.00476725 loss)
I0608 09:05:30.178381 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.175658 (* 1 = 0.175658 loss)
I0608 09:05:30.178388 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0047769 (* 1 = 0.0047769 loss)
I0608 09:05:30.178396 13573 sgd_solver.cpp:106] Iteration 67780, lr = 0.0001
speed: 0.645s / iter
I0608 09:05:42.804642 13573 solver.cpp:229] Iteration 67800, loss = 0.467547
I0608 09:05:42.804718 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0754415 (* 1 = 0.0754415 loss)
I0608 09:05:42.804728 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0413088 (* 1 = 0.0413088 loss)
I0608 09:05:42.804734 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0855793 (* 1 = 0.0855793 loss)
I0608 09:05:42.804740 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00840735 (* 1 = 0.00840735 loss)
I0608 09:05:42.804746 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0874714 (* 1 = 0.0874714 loss)
I0608 09:05:42.804751 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00840823 (* 1 = 0.00840823 loss)
I0608 09:05:42.804759 13573 sgd_solver.cpp:106] Iteration 67800, lr = 0.0001
I0608 09:05:56.022357 13573 solver.cpp:229] Iteration 67820, loss = 0.266323
I0608 09:05:56.022433 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0146871 (* 1 = 0.0146871 loss)
I0608 09:05:56.022441 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.022894 (* 1 = 0.022894 loss)
I0608 09:05:56.022447 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0857163 (* 1 = 0.0857163 loss)
I0608 09:05:56.022454 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0279617 (* 1 = 0.0279617 loss)
I0608 09:05:56.022459 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0827334 (* 1 = 0.0827334 loss)
I0608 09:05:56.022465 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.027961 (* 1 = 0.027961 loss)
I0608 09:05:56.022474 13573 sgd_solver.cpp:106] Iteration 67820, lr = 0.0001
I0608 09:06:08.694155 13573 solver.cpp:229] Iteration 67840, loss = 0.315275
I0608 09:06:08.694222 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0566972 (* 1 = 0.0566972 loss)
I0608 09:06:08.694232 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0600182 (* 1 = 0.0600182 loss)
I0608 09:06:08.694239 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.10405 (* 1 = 0.10405 loss)
I0608 09:06:08.694244 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0333707 (* 1 = 0.0333707 loss)
I0608 09:06:08.694252 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.106438 (* 1 = 0.106438 loss)
I0608 09:06:08.694257 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.033368 (* 1 = 0.033368 loss)
I0608 09:06:08.694264 13573 sgd_solver.cpp:106] Iteration 67840, lr = 0.0001
I0608 09:06:21.435243 13573 solver.cpp:229] Iteration 67860, loss = 0.650739
I0608 09:06:21.435319 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0130649 (* 1 = 0.0130649 loss)
I0608 09:06:21.435329 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0559208 (* 1 = 0.0559208 loss)
I0608 09:06:21.435335 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0947118 (* 1 = 0.0947118 loss)
I0608 09:06:21.435341 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0612309 (* 1 = 0.0612309 loss)
I0608 09:06:21.435346 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0937825 (* 1 = 0.0937825 loss)
I0608 09:06:21.435353 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0612287 (* 1 = 0.0612287 loss)
I0608 09:06:21.435359 13573 sgd_solver.cpp:106] Iteration 67860, lr = 0.0001
I0608 09:06:34.558367 13573 solver.cpp:229] Iteration 67880, loss = 0.352334
I0608 09:06:34.558486 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0040542 (* 1 = 0.0040542 loss)
I0608 09:06:34.558497 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00188264 (* 1 = 0.00188264 loss)
I0608 09:06:34.558503 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0761011 (* 1 = 0.0761011 loss)
I0608 09:06:34.558509 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0344685 (* 1 = 0.0344685 loss)
I0608 09:06:34.558516 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0932825 (* 1 = 0.0932825 loss)
I0608 09:06:34.558521 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0344684 (* 1 = 0.0344684 loss)
I0608 09:06:34.558529 13573 sgd_solver.cpp:106] Iteration 67880, lr = 0.0001
I0608 09:06:47.274147 13573 solver.cpp:229] Iteration 67900, loss = 0.792638
I0608 09:06:47.274224 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0474331 (* 1 = 0.0474331 loss)
I0608 09:06:47.274235 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0621342 (* 1 = 0.0621342 loss)
I0608 09:06:47.274241 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.311056 (* 1 = 0.311056 loss)
I0608 09:06:47.274247 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0105514 (* 1 = 0.0105514 loss)
I0608 09:06:47.274253 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.285025 (* 1 = 0.285025 loss)
I0608 09:06:47.274260 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0105509 (* 1 = 0.0105509 loss)
I0608 09:06:47.274266 13573 sgd_solver.cpp:106] Iteration 67900, lr = 0.0001
I0608 09:07:00.124833 13573 solver.cpp:229] Iteration 67920, loss = 0.738075
I0608 09:07:00.124893 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00722111 (* 1 = 0.00722111 loss)
I0608 09:07:00.124903 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0105732 (* 1 = 0.0105732 loss)
I0608 09:07:00.124909 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.429444 (* 1 = 0.429444 loss)
I0608 09:07:00.124915 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.049836 (* 1 = 0.049836 loss)
I0608 09:07:00.124922 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.437846 (* 1 = 0.437846 loss)
I0608 09:07:00.124927 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.049851 (* 1 = 0.049851 loss)
I0608 09:07:00.124933 13573 sgd_solver.cpp:106] Iteration 67920, lr = 0.0001
I0608 09:07:13.152297 13573 solver.cpp:229] Iteration 67940, loss = 0.701281
I0608 09:07:13.152371 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.041314 (* 1 = 0.041314 loss)
I0608 09:07:13.152381 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0120414 (* 1 = 0.0120414 loss)
I0608 09:07:13.152387 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.272226 (* 1 = 0.272226 loss)
I0608 09:07:13.152392 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0183941 (* 1 = 0.0183941 loss)
I0608 09:07:13.152397 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.233716 (* 1 = 0.233716 loss)
I0608 09:07:13.152403 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0183933 (* 1 = 0.0183933 loss)
I0608 09:07:13.152410 13573 sgd_solver.cpp:106] Iteration 67940, lr = 0.0001
I0608 09:07:26.115897 13573 solver.cpp:229] Iteration 67960, loss = 0.932256
I0608 09:07:26.115969 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.017872 (* 1 = 0.017872 loss)
I0608 09:07:26.115978 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0043934 (* 1 = 0.0043934 loss)
I0608 09:07:26.115984 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.464533 (* 1 = 0.464533 loss)
I0608 09:07:26.115990 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.151345 (* 1 = 0.151345 loss)
I0608 09:07:26.115995 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.432282 (* 1 = 0.432282 loss)
I0608 09:07:26.116001 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.151335 (* 1 = 0.151335 loss)
I0608 09:07:26.116008 13573 sgd_solver.cpp:106] Iteration 67960, lr = 0.0001
I0608 09:07:38.931172 13573 solver.cpp:229] Iteration 67980, loss = 0.960071
I0608 09:07:38.931238 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0504036 (* 1 = 0.0504036 loss)
I0608 09:07:38.931249 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0378291 (* 1 = 0.0378291 loss)
I0608 09:07:38.931255 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407644 (* 1 = 0.407644 loss)
I0608 09:07:38.931262 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.08013 (* 1 = 0.08013 loss)
I0608 09:07:38.931267 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.393661 (* 1 = 0.393661 loss)
I0608 09:07:38.931272 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.080133 (* 1 = 0.080133 loss)
I0608 09:07:38.931278 13573 sgd_solver.cpp:106] Iteration 67980, lr = 0.0001
speed: 0.645s / iter
I0608 09:07:51.979076 13573 solver.cpp:229] Iteration 68000, loss = 0.737578
I0608 09:07:51.979151 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.121005 (* 1 = 0.121005 loss)
I0608 09:07:51.979161 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0854159 (* 1 = 0.0854159 loss)
I0608 09:07:51.979166 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.4512 (* 1 = 0.4512 loss)
I0608 09:07:51.979171 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0324814 (* 1 = 0.0324814 loss)
I0608 09:07:51.979177 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.428873 (* 1 = 0.428873 loss)
I0608 09:07:51.979183 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0324651 (* 1 = 0.0324651 loss)
I0608 09:07:51.979189 13573 sgd_solver.cpp:106] Iteration 68000, lr = 0.0001
I0608 09:08:04.859118 13573 solver.cpp:229] Iteration 68020, loss = 0.357205
I0608 09:08:04.859187 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0648169 (* 1 = 0.0648169 loss)
I0608 09:08:04.859196 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.10979 (* 1 = 0.10979 loss)
I0608 09:08:04.859202 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0812131 (* 1 = 0.0812131 loss)
I0608 09:08:04.859208 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00505869 (* 1 = 0.00505869 loss)
I0608 09:08:04.859215 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0951709 (* 1 = 0.0951709 loss)
I0608 09:08:04.859220 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0050583 (* 1 = 0.0050583 loss)
I0608 09:08:04.859227 13573 sgd_solver.cpp:106] Iteration 68020, lr = 0.0001
I0608 09:08:17.807883 13573 solver.cpp:229] Iteration 68040, loss = 1.13289
I0608 09:08:17.807952 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00119429 (* 1 = 0.00119429 loss)
I0608 09:08:17.807962 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000758842 (* 1 = 0.000758842 loss)
I0608 09:08:17.807970 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.63946 (* 1 = 0.63946 loss)
I0608 09:08:17.807976 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.181039 (* 1 = 0.181039 loss)
I0608 09:08:17.807982 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.622724 (* 1 = 0.622724 loss)
I0608 09:08:17.807988 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.181046 (* 1 = 0.181046 loss)
I0608 09:08:17.807997 13573 sgd_solver.cpp:106] Iteration 68040, lr = 0.0001
I0608 09:08:30.944391 13573 solver.cpp:229] Iteration 68060, loss = 1.2798
I0608 09:08:30.944465 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0157251 (* 1 = 0.0157251 loss)
I0608 09:08:30.944475 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0365014 (* 1 = 0.0365014 loss)
I0608 09:08:30.944481 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.581733 (* 1 = 0.581733 loss)
I0608 09:08:30.944486 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.16378 (* 1 = 0.16378 loss)
I0608 09:08:30.944492 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.592226 (* 1 = 0.592226 loss)
I0608 09:08:30.944499 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.163761 (* 1 = 0.163761 loss)
I0608 09:08:30.944504 13573 sgd_solver.cpp:106] Iteration 68060, lr = 0.0001
I0608 09:08:44.053555 13573 solver.cpp:229] Iteration 68080, loss = 1.87359
I0608 09:08:44.053623 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00015084 (* 1 = 0.00015084 loss)
I0608 09:08:44.053633 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000783561 (* 1 = 0.000783561 loss)
I0608 09:08:44.053640 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.05004 (* 1 = 1.05004 loss)
I0608 09:08:44.053647 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.492073 (* 1 = 0.492073 loss)
I0608 09:08:44.053653 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.07172 (* 1 = 1.07172 loss)
I0608 09:08:44.053659 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.492115 (* 1 = 0.492115 loss)
I0608 09:08:44.053668 13573 sgd_solver.cpp:106] Iteration 68080, lr = 0.0001
I0608 09:08:56.831312 13573 solver.cpp:229] Iteration 68100, loss = 0.384838
I0608 09:08:56.831387 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00201835 (* 1 = 0.00201835 loss)
I0608 09:08:56.831396 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00034646 (* 1 = 0.00034646 loss)
I0608 09:08:56.831403 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.191078 (* 1 = 0.191078 loss)
I0608 09:08:56.831408 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00815814 (* 1 = 0.00815814 loss)
I0608 09:08:56.831413 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.188862 (* 1 = 0.188862 loss)
I0608 09:08:56.831419 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00815786 (* 1 = 0.00815786 loss)
I0608 09:08:56.831426 13573 sgd_solver.cpp:106] Iteration 68100, lr = 0.0001
I0608 09:09:09.754984 13573 solver.cpp:229] Iteration 68120, loss = 1.49743
I0608 09:09:09.755069 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000268826 (* 1 = 0.000268826 loss)
I0608 09:09:09.755080 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00479127 (* 1 = 0.00479127 loss)
I0608 09:09:09.755087 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.186248 (* 1 = 0.186248 loss)
I0608 09:09:09.755094 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0123033 (* 1 = 0.0123033 loss)
I0608 09:09:09.755100 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.180379 (* 1 = 0.180379 loss)
I0608 09:09:09.755105 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0123018 (* 1 = 0.0123018 loss)
I0608 09:09:09.755113 13573 sgd_solver.cpp:106] Iteration 68120, lr = 0.0001
I0608 09:09:22.529942 13573 solver.cpp:229] Iteration 68140, loss = 0.739711
I0608 09:09:22.530027 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0462162 (* 1 = 0.0462162 loss)
I0608 09:09:22.530036 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0398481 (* 1 = 0.0398481 loss)
I0608 09:09:22.530043 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0940614 (* 1 = 0.0940614 loss)
I0608 09:09:22.530050 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00336717 (* 1 = 0.00336717 loss)
I0608 09:09:22.530055 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0895732 (* 1 = 0.0895732 loss)
I0608 09:09:22.530061 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00336623 (* 1 = 0.00336623 loss)
I0608 09:09:22.530068 13573 sgd_solver.cpp:106] Iteration 68140, lr = 0.0001
I0608 09:09:35.381422 13573 solver.cpp:229] Iteration 68160, loss = 0.493183
I0608 09:09:35.381487 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.068904 (* 1 = 0.068904 loss)
I0608 09:09:35.381496 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0541116 (* 1 = 0.0541116 loss)
I0608 09:09:35.381502 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.107579 (* 1 = 0.107579 loss)
I0608 09:09:35.381508 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0757554 (* 1 = 0.0757554 loss)
I0608 09:09:35.381513 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.109858 (* 1 = 0.109858 loss)
I0608 09:09:35.381520 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0757594 (* 1 = 0.0757594 loss)
I0608 09:09:35.381526 13573 sgd_solver.cpp:106] Iteration 68160, lr = 0.0001
I0608 09:09:48.383329 13573 solver.cpp:229] Iteration 68180, loss = 1.83464
I0608 09:09:48.383395 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0545976 (* 1 = 0.0545976 loss)
I0608 09:09:48.383405 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0712075 (* 1 = 0.0712075 loss)
I0608 09:09:48.383411 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.947863 (* 1 = 0.947863 loss)
I0608 09:09:48.383417 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.638904 (* 1 = 0.638904 loss)
I0608 09:09:48.383424 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.94877 (* 1 = 0.94877 loss)
I0608 09:09:48.383430 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.638937 (* 1 = 0.638937 loss)
I0608 09:09:48.383437 13573 sgd_solver.cpp:106] Iteration 68180, lr = 0.0001
speed: 0.645s / iter
I0608 09:10:01.246250 13573 solver.cpp:229] Iteration 68200, loss = 0.463124
I0608 09:10:01.246318 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0198668 (* 1 = 0.0198668 loss)
I0608 09:10:01.246327 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0164621 (* 1 = 0.0164621 loss)
I0608 09:10:01.246333 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.294649 (* 1 = 0.294649 loss)
I0608 09:10:01.246340 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.023878 (* 1 = 0.023878 loss)
I0608 09:10:01.246345 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.313466 (* 1 = 0.313466 loss)
I0608 09:10:01.246351 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0238619 (* 1 = 0.0238619 loss)
I0608 09:10:01.246357 13573 sgd_solver.cpp:106] Iteration 68200, lr = 0.0001
I0608 09:10:14.187480 13573 solver.cpp:229] Iteration 68220, loss = 0.274599
I0608 09:10:14.187561 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0199266 (* 1 = 0.0199266 loss)
I0608 09:10:14.187571 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0464497 (* 1 = 0.0464497 loss)
I0608 09:10:14.187577 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.097926 (* 1 = 0.097926 loss)
I0608 09:10:14.187583 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00585739 (* 1 = 0.00585739 loss)
I0608 09:10:14.187589 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.107127 (* 1 = 0.107127 loss)
I0608 09:10:14.187594 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00585724 (* 1 = 0.00585724 loss)
I0608 09:10:14.187602 13573 sgd_solver.cpp:106] Iteration 68220, lr = 0.0001
I0608 09:10:27.181609 13573 solver.cpp:229] Iteration 68240, loss = 0.459791
I0608 09:10:27.181684 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0801098 (* 1 = 0.0801098 loss)
I0608 09:10:27.181694 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0354051 (* 1 = 0.0354051 loss)
I0608 09:10:27.181699 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.127168 (* 1 = 0.127168 loss)
I0608 09:10:27.181705 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0301471 (* 1 = 0.0301471 loss)
I0608 09:10:27.181711 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.122713 (* 1 = 0.122713 loss)
I0608 09:10:27.181717 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0301466 (* 1 = 0.0301466 loss)
I0608 09:10:27.181725 13573 sgd_solver.cpp:106] Iteration 68240, lr = 0.0001
I0608 09:10:40.121886 13573 solver.cpp:229] Iteration 68260, loss = 0.577589
I0608 09:10:40.121953 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0479106 (* 1 = 0.0479106 loss)
I0608 09:10:40.121963 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0131306 (* 1 = 0.0131306 loss)
I0608 09:10:40.121968 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0959178 (* 1 = 0.0959178 loss)
I0608 09:10:40.121974 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00899583 (* 1 = 0.00899583 loss)
I0608 09:10:40.121980 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0969049 (* 1 = 0.0969049 loss)
I0608 09:10:40.121986 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00899432 (* 1 = 0.00899432 loss)
I0608 09:10:40.121994 13573 sgd_solver.cpp:106] Iteration 68260, lr = 0.0001
I0608 09:10:53.092106 13573 solver.cpp:229] Iteration 68280, loss = 0.521951
I0608 09:10:53.092171 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0456992 (* 1 = 0.0456992 loss)
I0608 09:10:53.092180 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0156323 (* 1 = 0.0156323 loss)
I0608 09:10:53.092187 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101527 (* 1 = 0.101527 loss)
I0608 09:10:53.092193 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00734822 (* 1 = 0.00734822 loss)
I0608 09:10:53.092200 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0990262 (* 1 = 0.0990262 loss)
I0608 09:10:53.092206 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00734712 (* 1 = 0.00734712 loss)
I0608 09:10:53.092213 13573 sgd_solver.cpp:106] Iteration 68280, lr = 0.0001
I0608 09:11:05.961239 13573 solver.cpp:229] Iteration 68300, loss = 0.809232
I0608 09:11:05.961371 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000245645 (* 1 = 0.000245645 loss)
I0608 09:11:05.961381 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0069123 (* 1 = 0.0069123 loss)
I0608 09:11:05.961387 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.247737 (* 1 = 0.247737 loss)
I0608 09:11:05.961393 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0503586 (* 1 = 0.0503586 loss)
I0608 09:11:05.961400 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.252074 (* 1 = 0.252074 loss)
I0608 09:11:05.961406 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0503543 (* 1 = 0.0503543 loss)
I0608 09:11:05.961413 13573 sgd_solver.cpp:106] Iteration 68300, lr = 0.0001
I0608 09:11:19.002668 13573 solver.cpp:229] Iteration 68320, loss = 0.697984
I0608 09:11:19.002754 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127176 (* 1 = 0.127176 loss)
I0608 09:11:19.002764 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0849434 (* 1 = 0.0849434 loss)
I0608 09:11:19.002784 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.165111 (* 1 = 0.165111 loss)
I0608 09:11:19.002792 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0342316 (* 1 = 0.0342316 loss)
I0608 09:11:19.002799 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.158748 (* 1 = 0.158748 loss)
I0608 09:11:19.002804 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0342452 (* 1 = 0.0342452 loss)
I0608 09:11:19.002811 13573 sgd_solver.cpp:106] Iteration 68320, lr = 0.0001
I0608 09:11:32.060288 13573 solver.cpp:229] Iteration 68340, loss = 1.38616
I0608 09:11:32.060364 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00160347 (* 1 = 0.00160347 loss)
I0608 09:11:32.060372 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00046831 (* 1 = 0.00046831 loss)
I0608 09:11:32.060379 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.907275 (* 1 = 0.907275 loss)
I0608 09:11:32.060385 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.336669 (* 1 = 0.336669 loss)
I0608 09:11:32.060390 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.904682 (* 1 = 0.904682 loss)
I0608 09:11:32.060397 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.33668 (* 1 = 0.33668 loss)
I0608 09:11:32.060405 13573 sgd_solver.cpp:106] Iteration 68340, lr = 0.0001
I0608 09:11:45.093508 13573 solver.cpp:229] Iteration 68360, loss = 0.245752
I0608 09:11:45.093574 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00203485 (* 1 = 0.00203485 loss)
I0608 09:11:45.093585 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00108904 (* 1 = 0.00108904 loss)
I0608 09:11:45.093590 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.079725 (* 1 = 0.079725 loss)
I0608 09:11:45.093596 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0232481 (* 1 = 0.0232481 loss)
I0608 09:11:45.093602 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0748815 (* 1 = 0.0748815 loss)
I0608 09:11:45.093608 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0232472 (* 1 = 0.0232472 loss)
I0608 09:11:45.093616 13573 sgd_solver.cpp:106] Iteration 68360, lr = 0.0001
I0608 09:11:58.068354 13573 solver.cpp:229] Iteration 68380, loss = 0.428224
I0608 09:11:58.068420 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0423927 (* 1 = 0.0423927 loss)
I0608 09:11:58.068429 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0143798 (* 1 = 0.0143798 loss)
I0608 09:11:58.068435 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0913214 (* 1 = 0.0913214 loss)
I0608 09:11:58.068441 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00772516 (* 1 = 0.00772516 loss)
I0608 09:11:58.068447 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0943748 (* 1 = 0.0943748 loss)
I0608 09:11:58.068454 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00772485 (* 1 = 0.00772485 loss)
I0608 09:11:58.068459 13573 sgd_solver.cpp:106] Iteration 68380, lr = 0.0001
speed: 0.645s / iter
I0608 09:12:10.890738 13573 solver.cpp:229] Iteration 68400, loss = 1.8417
I0608 09:12:10.890817 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0127552 (* 
1 = 0.0127552 loss)\nI0608 09:12:10.890827 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0183796 (* 1 = 0.0183796 loss)\nI0608 09:12:10.890833 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 1.07951 (* 1 = 1.07951 loss)\nI0608 09:12:10.890841 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.255169 (* 1 = 0.255169 loss)\nI0608 09:12:10.890846 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 1.0772 (* 1 = 1.0772 loss)\nI0608 09:12:10.890851 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.255151 (* 1 = 0.255151 loss)\nI0608 09:12:10.890858 13573 sgd_solver.cpp:106] Iteration 68400, lr = 0.0001\nI0608 09:12:23.734275 13573 solver.cpp:229] Iteration 68420, loss = 0.873774\nI0608 09:12:23.734335 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0279299 (* 1 = 0.0279299 loss)\nI0608 09:12:23.734344 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0475166 (* 1 = 0.0475166 loss)\nI0608 09:12:23.734351 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.407793 (* 1 = 0.407793 loss)\nI0608 09:12:23.734356 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0666194 (* 1 = 0.0666194 loss)\nI0608 09:12:23.734362 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.402625 (* 1 = 0.402625 loss)\nI0608 09:12:23.734369 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.06663 (* 1 = 0.06663 loss)\nI0608 09:12:23.734375 13573 sgd_solver.cpp:106] Iteration 68420, lr = 0.0001\nI0608 09:12:36.722790 13573 solver.cpp:229] Iteration 68440, loss = 0.270822\nI0608 09:12:36.722882 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0372699 (* 1 = 0.0372699 loss)\nI0608 09:12:36.722892 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0165401 (* 1 = 0.0165401 loss)\nI0608 09:12:36.722898 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.101072 (* 1 = 0.101072 loss)\nI0608 09:12:36.722903 13573 
solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0281458 (* 1 = 0.0281458 loss)\nI0608 09:12:36.722909 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.103174 (* 1 = 0.103174 loss)\nI0608 09:12:36.722914 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0281487 (* 1 = 0.0281487 loss)\nI0608 09:12:36.722921 13573 sgd_solver.cpp:106] Iteration 68440, lr = 0.0001\nI0608 09:12:49.562305 13573 solver.cpp:229] Iteration 68460, loss = 0.630911\nI0608 09:12:49.562372 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.061275 (* 1 = 0.061275 loss)\nI0608 09:12:49.562381 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0682907 (* 1 = 0.0682907 loss)\nI0608 09:12:49.562387 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.261568 (* 1 = 0.261568 loss)\nI0608 09:12:49.562393 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0441585 (* 1 = 0.0441585 loss)\nI0608 09:12:49.562399 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.262856 (* 1 = 0.262856 loss)\nI0608 09:12:49.562405 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0441588 (* 1 = 0.0441588 loss)\nI0608 09:12:49.562412 13573 sgd_solver.cpp:106] Iteration 68460, lr = 0.0001\nI0608 09:13:02.130409 13573 solver.cpp:229] Iteration 68480, loss = 0.855034\nI0608 09:13:02.130470 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.127917 (* 1 = 0.127917 loss)\nI0608 09:13:02.130481 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.146735 (* 1 = 0.146735 loss)\nI0608 09:13:02.130486 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.385401 (* 1 = 0.385401 loss)\nI0608 09:13:02.130492 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0404637 (* 1 = 0.0404637 loss)\nI0608 09:13:02.130498 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.381989 (* 1 = 0.381989 loss)\nI0608 09:13:02.130503 13573 solver.cpp:245]     Train net output #5: 
rpn_loss_bbox = 0.0404622 (* 1 = 0.0404622 loss)\nI0608 09:13:02.130511 13573 sgd_solver.cpp:106] Iteration 68480, lr = 0.0001\nI0608 09:13:15.011138 13573 solver.cpp:229] Iteration 68500, loss = 0.426458\nI0608 09:13:15.011204 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.01376 (* 1 = 0.01376 loss)\nI0608 09:13:15.011214 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0314066 (* 1 = 0.0314066 loss)\nI0608 09:13:15.011219 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.187667 (* 1 = 0.187667 loss)\nI0608 09:13:15.011225 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00407226 (* 1 = 0.00407226 loss)\nI0608 09:13:15.011232 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.162094 (* 1 = 0.162094 loss)\nI0608 09:13:15.011238 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00407221 (* 1 = 0.00407221 loss)\nI0608 09:13:15.011245 13573 sgd_solver.cpp:106] Iteration 68500, lr = 0.0001\nI0608 09:13:27.733224 13573 solver.cpp:229] Iteration 68520, loss = 0.455044\nI0608 09:13:27.733299 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.045306 (* 1 = 0.045306 loss)\nI0608 09:13:27.733309 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0175727 (* 1 = 0.0175727 loss)\nI0608 09:13:27.733315 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.103341 (* 1 = 0.103341 loss)\nI0608 09:13:27.733321 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0102654 (* 1 = 0.0102654 loss)\nI0608 09:13:27.733327 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.100436 (* 1 = 0.100436 loss)\nI0608 09:13:27.733332 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0102676 (* 1 = 0.0102676 loss)\nI0608 09:13:27.733340 13573 sgd_solver.cpp:106] Iteration 68520, lr = 0.0001\nI0608 09:13:40.578039 13573 solver.cpp:229] Iteration 68540, loss = 0.62214\nI0608 09:13:40.578109 13573 solver.cpp:245]     Train net output #0: 
loss_bbox = 0.00140466 (* 1 = 0.00140466 loss)\nI0608 09:13:40.578120 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00106442 (* 1 = 0.00106442 loss)\nI0608 09:13:40.578126 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.370808 (* 1 = 0.370808 loss)\nI0608 09:13:40.578132 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0629197 (* 1 = 0.0629197 loss)\nI0608 09:13:40.578138 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.396635 (* 1 = 0.396635 loss)\nI0608 09:13:40.578145 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0629224 (* 1 = 0.0629224 loss)\nI0608 09:13:40.578151 13573 sgd_solver.cpp:106] Iteration 68540, lr = 0.0001\nI0608 09:13:53.319243 13573 solver.cpp:229] Iteration 68560, loss = 0.299127\nI0608 09:13:53.319319 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0196026 (* 1 = 0.0196026 loss)\nI0608 09:13:53.319327 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0232687 (* 1 = 0.0232687 loss)\nI0608 09:13:53.319334 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.108648 (* 1 = 0.108648 loss)\nI0608 09:13:53.319339 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00817288 (* 1 = 0.00817288 loss)\nI0608 09:13:53.319345 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0917254 (* 1 = 0.0917254 loss)\nI0608 09:13:53.319351 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00817267 (* 1 = 0.00817267 loss)\nI0608 09:13:53.319357 13573 sgd_solver.cpp:106] Iteration 68560, lr = 0.0001\nI0608 09:14:06.147444 13573 solver.cpp:229] Iteration 68580, loss = 0.580903\nI0608 09:14:06.147509 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0307203 (* 1 = 0.0307203 loss)\nI0608 09:14:06.147519 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0566105 (* 1 = 0.0566105 loss)\nI0608 09:14:06.147526 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0960149 (* 1 = 
0.0960149 loss)\nI0608 09:14:06.147531 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0481071 (* 1 = 0.0481071 loss)\nI0608 09:14:06.147537 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0892832 (* 1 = 0.0892832 loss)\nI0608 09:14:06.147543 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0481028 (* 1 = 0.0481028 loss)\nI0608 09:14:06.147552 13573 sgd_solver.cpp:106] Iteration 68580, lr = 0.0001\nspeed: 0.645s / iter\nI0608 09:14:19.154616 13573 solver.cpp:229] Iteration 68600, loss = 0.364147\nI0608 09:14:19.154702 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0070712 (* 1 = 0.0070712 loss)\nI0608 09:14:19.154712 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00711812 (* 1 = 0.00711812 loss)\nI0608 09:14:19.154718 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.207746 (* 1 = 0.207746 loss)\nI0608 09:14:19.154724 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0133739 (* 1 = 0.0133739 loss)\nI0608 09:14:19.154729 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.215623 (* 1 = 0.215623 loss)\nI0608 09:14:19.154736 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0133749 (* 1 = 0.0133749 loss)\nI0608 09:14:19.154743 13573 sgd_solver.cpp:106] Iteration 68600, lr = 0.0001\nI0608 09:14:31.828407 13573 solver.cpp:229] Iteration 68620, loss = 1.00948\nI0608 09:14:31.828488 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0980263 (* 1 = 0.0980263 loss)\nI0608 09:14:31.828500 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0277996 (* 1 = 0.0277996 loss)\nI0608 09:14:31.828505 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.197514 (* 1 = 0.197514 loss)\nI0608 09:14:31.828511 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0211835 (* 1 = 0.0211835 loss)\nI0608 09:14:31.828517 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.185502 (* 1 = 0.185502 
loss)\nI0608 09:14:31.828523 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0211965 (* 1 = 0.0211965 loss)\nI0608 09:14:31.828531 13573 sgd_solver.cpp:106] Iteration 68620, lr = 0.0001\nI0608 09:14:44.693835 13573 solver.cpp:229] Iteration 68640, loss = 0.641782\nI0608 09:14:44.693897 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0683913 (* 1 = 0.0683913 loss)\nI0608 09:14:44.693905 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0627255 (* 1 = 0.0627255 loss)\nI0608 09:14:44.693912 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.3183 (* 1 = 0.3183 loss)\nI0608 09:14:44.693918 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0617901 (* 1 = 0.0617901 loss)\nI0608 09:14:44.693924 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.314239 (* 1 = 0.314239 loss)\nI0608 09:14:44.693929 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0617774 (* 1 = 0.0617774 loss)\nI0608 09:14:44.693938 13573 sgd_solver.cpp:106] Iteration 68640, lr = 0.0001\nI0608 09:14:57.654978 13573 solver.cpp:229] Iteration 68660, loss = 0.516987\nI0608 09:14:57.655042 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000593158 (* 1 = 0.000593158 loss)\nI0608 09:14:57.655051 13573 solver.cpp:245]     Train net output #1: loss_cls = 9.88734e-05 (* 1 = 9.88734e-05 loss)\nI0608 09:14:57.655057 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.231121 (* 1 = 0.231121 loss)\nI0608 09:14:57.655063 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00341059 (* 1 = 0.00341059 loss)\nI0608 09:14:57.655069 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.213398 (* 1 = 0.213398 loss)\nI0608 09:14:57.655076 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00340923 (* 1 = 0.00340923 loss)\nI0608 09:14:57.655081 13573 sgd_solver.cpp:106] Iteration 68660, lr = 0.0001\nI0608 09:15:10.589007 13573 solver.cpp:229] Iteration 68680, loss 
= 0.445848\nI0608 09:15:10.589082 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.252017 (* 1 = 0.252017 loss)\nI0608 09:15:10.589092 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0751211 (* 1 = 0.0751211 loss)\nI0608 09:15:10.589098 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0894118 (* 1 = 0.0894118 loss)\nI0608 09:15:10.589104 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0474682 (* 1 = 0.0474682 loss)\nI0608 09:15:10.589110 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0873554 (* 1 = 0.0873554 loss)\nI0608 09:15:10.589117 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0474659 (* 1 = 0.0474659 loss)\nI0608 09:15:10.589123 13573 sgd_solver.cpp:106] Iteration 68680, lr = 0.0001\nI0608 09:15:23.338048 13573 solver.cpp:229] Iteration 68700, loss = 0.846021\nI0608 09:15:23.338109 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0339302 (* 1 = 0.0339302 loss)\nI0608 09:15:23.338119 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.074052 (* 1 = 0.074052 loss)\nI0608 09:15:23.338124 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.367027 (* 1 = 0.367027 loss)\nI0608 09:15:23.338130 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.109674 (* 1 = 0.109674 loss)\nI0608 09:15:23.338137 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.364858 (* 1 = 0.364858 loss)\nI0608 09:15:23.338143 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.109681 (* 1 = 0.109681 loss)\nI0608 09:15:23.338150 13573 sgd_solver.cpp:106] Iteration 68700, lr = 0.0001\nI0608 09:15:36.385277 13573 solver.cpp:229] Iteration 68720, loss = 1.59064\nI0608 09:15:36.385346 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00269638 (* 1 = 0.00269638 loss)\nI0608 09:15:36.385356 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00580002 (* 1 = 0.00580002 loss)\nI0608 09:15:36.385362 13573 
solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.86843 (* 1 = 0.86843 loss)\nI0608 09:15:36.385368 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.239708 (* 1 = 0.239708 loss)\nI0608 09:15:36.385375 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.888846 (* 1 = 0.888846 loss)\nI0608 09:15:36.385380 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.239714 (* 1 = 0.239714 loss)\nI0608 09:15:36.385387 13573 sgd_solver.cpp:106] Iteration 68720, lr = 0.0001\nI0608 09:15:49.235342 13573 solver.cpp:229] Iteration 68740, loss = 0.271512\nI0608 09:15:49.235411 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0158225 (* 1 = 0.0158225 loss)\nI0608 09:15:49.235421 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0156867 (* 1 = 0.0156867 loss)\nI0608 09:15:49.235429 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0864792 (* 1 = 0.0864792 loss)\nI0608 09:15:49.235435 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00893193 (* 1 = 0.00893193 loss)\nI0608 09:15:49.235440 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0915937 (* 1 = 0.0915937 loss)\nI0608 09:15:49.235446 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00893306 (* 1 = 0.00893306 loss)\nI0608 09:15:49.235453 13573 sgd_solver.cpp:106] Iteration 68740, lr = 0.0001\nI0608 09:16:02.230985 13573 solver.cpp:229] Iteration 68760, loss = 0.731142\nI0608 09:16:02.231055 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0989682 (* 1 = 0.0989682 loss)\nI0608 09:16:02.231065 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0939496 (* 1 = 0.0939496 loss)\nI0608 09:16:02.231071 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.303769 (* 1 = 0.303769 loss)\nI0608 09:16:02.231077 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.030327 (* 1 = 0.030327 loss)\nI0608 09:16:02.231083 13573 solver.cpp:245]     Train net output 
#4: rpn_cls_loss = 0.285464 (* 1 = 0.285464 loss)\nI0608 09:16:02.231089 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0303144 (* 1 = 0.0303144 loss)\nI0608 09:16:02.231097 13573 sgd_solver.cpp:106] Iteration 68760, lr = 0.0001\nI0608 09:16:15.327653 13573 solver.cpp:229] Iteration 68780, loss = 0.597048\nI0608 09:16:15.327714 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.103463 (* 1 = 0.103463 loss)\nI0608 09:16:15.327724 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.122152 (* 1 = 0.122152 loss)\nI0608 09:16:15.327730 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.22599 (* 1 = 0.22599 loss)\nI0608 09:16:15.327736 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0421439 (* 1 = 0.0421439 loss)\nI0608 09:16:15.327742 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.220771 (* 1 = 0.220771 loss)\nI0608 09:16:15.327749 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.042146 (* 1 = 0.042146 loss)\nI0608 09:16:15.327755 13573 sgd_solver.cpp:106] Iteration 68780, lr = 0.0001\nspeed: 0.645s / iter\nI0608 09:16:28.197125 13573 solver.cpp:229] Iteration 68800, loss = 0.646709\nI0608 09:16:28.197216 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0645052 (* 1 = 0.0645052 loss)\nI0608 09:16:28.197227 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0714236 (* 1 = 0.0714236 loss)\nI0608 09:16:28.197233 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.286211 (* 1 = 0.286211 loss)\nI0608 09:16:28.197239 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.123662 (* 1 = 0.123662 loss)\nI0608 09:16:28.197245 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.285404 (* 1 = 0.285404 loss)\nI0608 09:16:28.197262 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.123665 (* 1 = 0.123665 loss)\nI0608 09:16:28.197269 13573 sgd_solver.cpp:106] Iteration 68800, lr = 0.0001\nI0608 
09:16:41.279963 13573 solver.cpp:229] Iteration 68820, loss = 0.62867\nI0608 09:16:41.280030 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.194901 (* 1 = 0.194901 loss)\nI0608 09:16:41.280040 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.174664 (* 1 = 0.174664 loss)\nI0608 09:16:41.280045 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.228237 (* 1 = 0.228237 loss)\nI0608 09:16:41.280051 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0231526 (* 1 = 0.0231526 loss)\nI0608 09:16:41.280056 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.223729 (* 1 = 0.223729 loss)\nI0608 09:16:41.280062 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0231656 (* 1 = 0.0231656 loss)\nI0608 09:16:41.280069 13573 sgd_solver.cpp:106] Iteration 68820, lr = 0.0001\nI0608 09:16:54.153417 13573 solver.cpp:229] Iteration 68840, loss = 0.761208\nI0608 09:16:54.153479 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0973157 (* 1 = 0.0973157 loss)\nI0608 09:16:54.153488 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.235986 (* 1 = 0.235986 loss)\nI0608 09:16:54.153494 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.279893 (* 1 = 0.279893 loss)\nI0608 09:16:54.153501 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0302233 (* 1 = 0.0302233 loss)\nI0608 09:16:54.153506 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.291552 (* 1 = 0.291552 loss)\nI0608 09:16:54.153512 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0302399 (* 1 = 0.0302399 loss)\nI0608 09:16:54.153518 13573 sgd_solver.cpp:106] Iteration 68840, lr = 0.0001\nI0608 09:17:07.045784 13573 solver.cpp:229] Iteration 68860, loss = 0.377177\nI0608 09:17:07.045912 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0517719 (* 1 = 0.0517719 loss)\nI0608 09:17:07.045920 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0296797 (* 1 = 
0.0296797 loss)\nI0608 09:17:07.045928 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0913876 (* 1 = 0.0913876 loss)\nI0608 09:17:07.045933 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.00928166 (* 1 = 0.00928166 loss)\nI0608 09:17:07.045938 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0935253 (* 1 = 0.0935253 loss)\nI0608 09:17:07.045944 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.00928166 (* 1 = 0.00928166 loss)\nI0608 09:17:07.045951 13573 sgd_solver.cpp:106] Iteration 68860, lr = 0.0001\nI0608 09:17:19.939417 13573 solver.cpp:229] Iteration 68880, loss = 0.857878\nI0608 09:17:19.939484 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00453262 (* 1 = 0.00453262 loss)\nI0608 09:17:19.939493 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000958407 (* 1 = 0.000958407 loss)\nI0608 09:17:19.939501 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.239206 (* 1 = 0.239206 loss)\nI0608 09:17:19.939507 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0370896 (* 1 = 0.0370896 loss)\nI0608 09:17:19.939512 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.235359 (* 1 = 0.235359 loss)\nI0608 09:17:19.939517 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0370753 (* 1 = 0.0370753 loss)\nI0608 09:17:19.939524 13573 sgd_solver.cpp:106] Iteration 68880, lr = 0.0001\nI0608 09:17:32.878360 13573 solver.cpp:229] Iteration 68900, loss = 0.44852\nI0608 09:17:32.878424 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.162044 (* 1 = 0.162044 loss)\nI0608 09:17:32.878434 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0863457 (* 1 = 0.0863457 loss)\nI0608 09:17:32.878440 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.118829 (* 1 = 0.118829 loss)\nI0608 09:17:32.878446 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0172345 (* 1 = 0.0172345 loss)\nI0608 
09:17:32.878453 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.119863 (* 1 = 0.119863 loss)\nI0608 09:17:32.878458 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0172352 (* 1 = 0.0172352 loss)\nI0608 09:17:32.878466 13573 sgd_solver.cpp:106] Iteration 68900, lr = 0.0001\nI0608 09:17:45.688194 13573 solver.cpp:229] Iteration 68920, loss = 0.340258\nI0608 09:17:45.688282 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00154758 (* 1 = 0.00154758 loss)\nI0608 09:17:45.688292 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.000620239 (* 1 = 0.000620239 loss)\nI0608 09:17:45.688299 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.17208 (* 1 = 0.17208 loss)\nI0608 09:17:45.688305 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0204124 (* 1 = 0.0204124 loss)\nI0608 09:17:45.688311 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.220445 (* 1 = 0.220445 loss)\nI0608 09:17:45.688318 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0204128 (* 1 = 0.0204128 loss)\nI0608 09:17:45.688324 13573 sgd_solver.cpp:106] Iteration 68920, lr = 0.0001\nI0608 09:17:58.931674 13573 solver.cpp:229] Iteration 68940, loss = 0.458595\nI0608 09:17:58.931737 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0381376 (* 1 = 0.0381376 loss)\nI0608 09:17:58.931747 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0151448 (* 1 = 0.0151448 loss)\nI0608 09:17:58.931753 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.208566 (* 1 = 0.208566 loss)\nI0608 09:17:58.931759 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0165836 (* 1 = 0.0165836 loss)\nI0608 09:17:58.931766 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.199813 (* 1 = 0.199813 loss)\nI0608 09:17:58.931771 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0165847 (* 1 = 0.0165847 loss)\nI0608 09:17:58.931777 13573 
sgd_solver.cpp:106] Iteration 68940, lr = 0.0001\nI0608 09:18:11.921226 13573 solver.cpp:229] Iteration 68960, loss = 0.421828\nI0608 09:18:11.921289 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0069269 (* 1 = 0.0069269 loss)\nI0608 09:18:11.921299 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0364003 (* 1 = 0.0364003 loss)\nI0608 09:18:11.921306 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.250403 (* 1 = 0.250403 loss)\nI0608 09:18:11.921313 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0291907 (* 1 = 0.0291907 loss)\nI0608 09:18:11.921319 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.252883 (* 1 = 0.252883 loss)\nI0608 09:18:11.921324 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0291859 (* 1 = 0.0291859 loss)\nI0608 09:18:11.921331 13573 sgd_solver.cpp:106] Iteration 68960, lr = 0.0001\nI0608 09:18:24.678685 13573 solver.cpp:229] Iteration 68980, loss = 1.09855\nI0608 09:18:24.678763 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.20778 (* 1 = 0.20778 loss)\nI0608 09:18:24.678787 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.190815 (* 1 = 0.190815 loss)\nI0608 09:18:24.678795 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.409998 (* 1 = 0.409998 loss)\nI0608 09:18:24.678802 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.155837 (* 1 = 0.155837 loss)\nI0608 09:18:24.678807 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.412819 (* 1 = 0.412819 loss)\nI0608 09:18:24.678813 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.155835 (* 1 = 0.155835 loss)\nI0608 09:18:24.678820 13573 sgd_solver.cpp:106] Iteration 68980, lr = 0.0001\nspeed: 0.645s / iter\nI0608 09:18:37.637471 13573 solver.cpp:229] Iteration 69000, loss = 0.622897\nI0608 09:18:37.637542 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0583821 (* 1 = 0.0583821 loss)\nI0608 09:18:37.637552 
13573 solver.cpp:245]     Train net output #1: loss_cls = 0.106366 (* 1 = 0.106366 loss)\nI0608 09:18:37.637557 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.098486 (* 1 = 0.098486 loss)\nI0608 09:18:37.637563 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0785484 (* 1 = 0.0785484 loss)\nI0608 09:18:37.637569 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0979379 (* 1 = 0.0979379 loss)\nI0608 09:18:37.637575 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0785539 (* 1 = 0.0785539 loss)\nI0608 09:18:37.637583 13573 sgd_solver.cpp:106] Iteration 69000, lr = 0.0001\nI0608 09:18:50.605777 13573 solver.cpp:229] Iteration 69020, loss = 0.608487\nI0608 09:18:50.605841 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.000550416 (* 1 = 0.000550416 loss)\nI0608 09:18:50.605850 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.00173192 (* 1 = 0.00173192 loss)\nI0608 09:18:50.605857 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.372462 (* 1 = 0.372462 loss)\nI0608 09:18:50.605864 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0158215 (* 1 = 0.0158215 loss)\nI0608 09:18:50.605870 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.422298 (* 1 = 0.422298 loss)\nI0608 09:18:50.605875 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0158285 (* 1 = 0.0158285 loss)\nI0608 09:18:50.605882 13573 sgd_solver.cpp:106] Iteration 69020, lr = 0.0001\nI0608 09:19:03.337649 13573 solver.cpp:229] Iteration 69040, loss = 0.388441\nI0608 09:19:03.337725 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0376099 (* 1 = 0.0376099 loss)\nI0608 09:19:03.337734 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.03253 (* 1 = 0.03253 loss)\nI0608 09:19:03.337741 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0825365 (* 1 = 0.0825365 loss)\nI0608 09:19:03.337746 13573 solver.cpp:245]     Train net 
Representative excerpt of the end-to-end training log (iterations 69040–69960, lr = 0.0001, speed ≈ 0.645 s/iter); the repeated per-iteration blocks in between are omitted:

    I0608 09:19:16.172711 13573 solver.cpp:229] Iteration 69060, loss = 0.551682
    I0608 09:19:16.172780 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.00704382 (* 1 = 0.00704382 loss)
    I0608 09:19:16.172788 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0285366 (* 1 = 0.0285366 loss)
    I0608 09:19:16.172794 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.0909566 (* 1 = 0.0909566 loss)
    I0608 09:19:16.172801 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0336064 (* 1 = 0.0336064 loss)
    I0608 09:19:16.172806 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.0835456 (* 1 = 0.0835456 loss)
    I0608 09:19:16.172811 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.0336067 (* 1 = 0.0336067 loss)
    I0608 09:19:16.172817 13573 sgd_solver.cpp:106] Iteration 69060, lr = 0.0001
    ...
    I0608 09:28:56.718310 13573 solver.cpp:229] Iteration 69960, loss = 0.770026
   Train net output #5: rpn_loss_bbox = 0.110922 (* 1 = 0.110922 loss)\nI0608 09:28:56.718435 13573 sgd_solver.cpp:106] Iteration 69960, lr = 0.0001\nI0608 09:29:09.612999 13573 solver.cpp:229] Iteration 69980, loss = 0.35324\nI0608 09:29:09.613080 13573 solver.cpp:245]     Train net output #0: loss_bbox = 0.0296727 (* 1 = 0.0296727 loss)\nI0608 09:29:09.613090 13573 solver.cpp:245]     Train net output #1: loss_cls = 0.0262318 (* 1 = 0.0262318 loss)\nI0608 09:29:09.613096 13573 solver.cpp:245]     Train net output #2: p2_rpn_cls_loss = 0.124355 (* 1 = 0.124355 loss)\nI0608 09:29:09.613102 13573 solver.cpp:245]     Train net output #3: p2_rpn_loss_bbox = 0.0173746 (* 1 = 0.0173746 loss)\nI0608 09:29:09.613107 13573 solver.cpp:245]     Train net output #4: rpn_cls_loss = 0.119674 (* 1 = 0.119674 loss)\nI0608 09:29:09.613113 13573 solver.cpp:245]     Train net output #5: rpn_loss_bbox = 0.017374 (* 1 = 0.017374 loss)\nI0608 09:29:09.613121 13573 sgd_solver.cpp:106] Iteration 69980, lr = 0.0001\nspeed: 0.645s / iter\nWrote snapshot to: /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_70000.caffemodel\ndone solving\n\nreal\t753m3.074s\nuser\t614m51.867s\nsys\t143m48.631s\n+ set +x\n+ ./tools/test_net.py --gpu 0 --def models/pascal_voc/VGG16/FP_Net_end2end/test.prototxt --net /home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_70000.caffemodel --imdb voc_2007_test --cfg experiments/cfgs/FP_Net_end2end.yml\nCalled with args:\nNamespace(caffemodel='/home/ubuntu/Work/brbchen/unskychen/FP_Net/output/FP_Net_end2end/voc_2007_trainval/FP_Net_end2end_iter_70000.caffemodel', cfg_file='experiments/cfgs/FP_Net_end2end.yml', comp_mode=False, gpu_id=0, imdb_name='voc_2007_test', max_per_image=100, prototxt='models/pascal_voc/VGG16/FP_Net_end2end/test.prototxt', set_cfgs=None, vis=False, wait=True)\nUsing config:\n{'DATA_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net/data',\n 
'DEDUP_BOXES': 0.0625,\n 'EPS': 1e-14,\n 'EXP_DIR': 'FP_Net_end2end',\n 'GPU_ID': 0,\n 'MATLAB': 'matlab',\n 'MODELS_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net/models/pascal_voc',\n 'PIXEL_MEANS': array([[[ 102.9801,  115.9465,  122.7717]]]),\n 'RNG_SEED': 3,\n 'ROOT_DIR': '/home/ubuntu/Work/brbchen/unskychen/FP_Net',\n 'TEST': {'BBOX_REG': True,\n          'HAS_RPN': True,\n          'MAX_SIZE': 1000,\n          'NMS': 0.3,\n          'PROPOSAL_METHOD': 'selective_search',\n          'RPN_MIN_SIZE': 16,\n          'RPN_NMS_THRESH': 0.7,\n          'RPN_POST_NMS_TOP_N': 300,\n          'RPN_PRE_NMS_TOP_N': 6000,\n          'SCALES': [600],\n          'SVM': False},\n 'TRAIN': {'ASPECT_GROUPING': True,\n           'BATCH_SIZE': 128,\n           'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],\n           'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],\n           'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],\n           'BBOX_NORMALIZE_TARGETS': True,\n           'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,\n           'BBOX_REG': True,\n           'BBOX_THRESH': 0.5,\n           'BG_THRESH_HI': 0.5,\n           'BG_THRESH_LO': 0.0,\n           'FG_FRACTION': 0.25,\n           'FG_THRESH': 0.5,\n           'HAS_RPN': True,\n           'IMS_PER_BATCH': 1,\n           'MAX_SIZE': 1000,\n           'PROPOSAL_METHOD': 'gt',\n           'RPN_BATCHSIZE': 256,\n           'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],\n           'RPN_CLOBBER_POSITIVES': False,\n           'RPN_FG_FRACTION': 0.5,\n           'RPN_MIN_SIZE': 16,\n           'RPN_NEGATIVE_OVERLAP': 0.3,\n           'RPN_NMS_THRESH': 0.7,\n           'RPN_POSITIVE_OVERLAP': 0.7,\n           'RPN_POSITIVE_WEIGHT': -1.0,\n           'RPN_POST_NMS_TOP_N': 2000,\n           'RPN_PRE_NMS_TOP_N': 12000,\n           'SCALES': [600],\n           'SNAPSHOT_INFIX': '',\n           'SNAPSHOT_ITERS': 10000,\n           'USE_FLIPPED': True,\n           'USE_PREFETCH': False},\n 'USE_GPU_NMS': True}\nWARNING: Logging 
before InitGoogleLogging() is written to STDERR\nI0608 09:29:33.376579  6458 net.cpp:49] Initializing net from parameters: \nname: \"VGG_ILSVRC_16_layers\"\ninput: \"data\"\ninput: \"im_info\"\nstate {\n  phase: TEST\n}\ninput_shape {\n  dim: 1\n  dim: 3\n  dim: 224\n  dim: 224\n}\ninput_shape {\n  dim: 1\n  dim: 3\n}\nlayer {\n  name: \"conv1_1\"\n  type: \"Convolution\"\n  bottom: \"data\"\n  top: \"conv1_1\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu1_1\"\n  type: \"ReLU\"\n  bottom: \"conv1_1\"\n  top: \"conv1_1\"\n}\nlayer {\n  name: \"conv1_2\"\n  type: \"Convolution\"\n  bottom: \"conv1_1\"\n  top: \"conv1_2\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 64\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu1_2\"\n  type: \"ReLU\"\n  bottom: \"conv1_2\"\n  top: \"conv1_2\"\n}\nlayer {\n  name: \"pool1\"\n  type: \"Pooling\"\n  bottom: \"conv1_2\"\n  top: \"pool1\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv2_1\"\n  type: \"Convolution\"\n  bottom: \"pool1\"\n  top: \"conv2_1\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu2_1\"\n  type: \"ReLU\"\n  bottom: \"conv2_1\"\n  top: \"conv2_1\"\n}\nlayer {\n  name: \"conv2_2\"\n  type: \"Convolution\"\n  bottom: \"conv2_1\"\n  top: \"conv2_2\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 128\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu2_2\"\n  type: \"ReLU\"\n  bottom: \"conv2_2\"\n  top: \"conv2_2\"\n}\nlayer 
{\n  name: \"pool2\"\n  type: \"Pooling\"\n  bottom: \"conv2_2\"\n  top: \"pool2\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv3_1\"\n  type: \"Convolution\"\n  bottom: \"pool2\"\n  top: \"conv3_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_1\"\n  type: \"ReLU\"\n  bottom: \"conv3_1\"\n  top: \"conv3_1\"\n}\nlayer {\n  name: \"conv3_2\"\n  type: \"Convolution\"\n  bottom: \"conv3_1\"\n  top: \"conv3_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_2\"\n  type: \"ReLU\"\n  bottom: \"conv3_2\"\n  top: \"conv3_2\"\n}\nlayer {\n  name: \"conv3_3\"\n  type: \"Convolution\"\n  bottom: \"conv3_2\"\n  top: \"conv3_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu3_3\"\n  type: \"ReLU\"\n  bottom: \"conv3_3\"\n  top: \"conv3_3\"\n}\nlayer {\n  name: \"pool3\"\n  type: \"Pooling\"\n  bottom: \"conv3_3\"\n  top: \"pool3\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv4_1\"\n  type: \"Convolution\"\n  bottom: \"pool3\"\n  top: \"conv4_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_1\"\n  type: \"ReLU\"\n  bottom: \"conv4_1\"\n  top: \"conv4_1\"\n}\nlayer {\n  name: \"conv4_2\"\n  type: \"Convolution\"\n  bottom: \"conv4_1\"\n  top: \"conv4_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_2\"\n  type: \"ReLU\"\n  bottom: 
\"conv4_2\"\n  top: \"conv4_2\"\n}\nlayer {\n  name: \"conv4_3\"\n  type: \"Convolution\"\n  bottom: \"conv4_2\"\n  top: \"conv4_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu4_3\"\n  type: \"ReLU\"\n  bottom: \"conv4_3\"\n  top: \"conv4_3\"\n}\nlayer {\n  name: \"pool4\"\n  type: \"Pooling\"\n  bottom: \"conv4_3\"\n  top: \"pool4\"\n  pooling_param {\n    pool: MAX\n    kernel_size: 2\n    stride: 2\n  }\n}\nlayer {\n  name: \"conv5_1\"\n  type: \"Convolution\"\n  bottom: \"pool4\"\n  top: \"conv5_1\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu5_1\"\n  type: \"ReLU\"\n  bottom: \"conv5_1\"\n  top: \"conv5_1\"\n}\nlayer {\n  name: \"conv5_2\"\n  type: \"Convolution\"\n  bottom: \"conv5_1\"\n  top: \"conv5_2\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu5_2\"\n  type: \"ReLU\"\n  bottom: \"conv5_2\"\n  top: \"conv5_2\"\n}\nlayer {\n  name: \"conv5_3\"\n  type: \"Convolution\"\n  bottom: \"conv5_2\"\n  top: \"conv5_3\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"relu5_3\"\n  type: \"ReLU\"\n  bottom: \"conv5_3\"\n  top: \"conv5_3\"\n}\nlayer {\n  name: \"p1_conv\"\n  type: \"Convolution\"\n  bottom: \"conv5_3\"\n  top: \"p1_conv\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"p1_upsample\"\n  type: \"Deconvolution\"\n  bottom: \"p1_conv\"\n  top: \"p1_upsample\"\n  param {\n    lr_mult: 0\n    decay_mult: 0\n  }\n  convolution_param {\n    
num_output: 256\n    bias_term: false\n    pad: 0\n    kernel_size: 1\n    group: 256\n    stride: 2\n    weight_filler {\n      type: \"bilinear\"\n    }\n  }\n}\nlayer {\n  name: \"p2_conv\"\n  type: \"Convolution\"\n  bottom: \"conv4_3\"\n  top: \"p2_conv\"\n  param {\n    lr_mult: 1\n  }\n  param {\n    lr_mult: 2\n  }\n  convolution_param {\n    num_output: 256\n    pad: 1\n    kernel_size: 3\n  }\n}\nlayer {\n  name: \"conadd\"\n  type: \"Concat\"\n  bottom: \"p1_upsample\"\n  bottom: \"p2_conv\"\n  top: \"conadd\"\n}\nlayer {\n  name: \"rpn_conv/3x3\"\n  type: \"Convolution\"\n  bottom: \"p1_conv\"\n  top: \"rpn/output\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3\"\n  type: \"ReLU\"\n  bottom: \"rpn/output\"\n  top: \"rpn/output\"\n}\nlayer {\n  name: \"rpn_cls_score\"\n  type: \"Convolution\"\n  bottom: \"rpn/output\"\n  top: \"rpn_cls_score\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 18\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_bbox_pred\"\n  type: \"Convolution\"\n  bottom: \"rpn/output\"\n  top: \"rpn_bbox_pred\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 36\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  
name: \"rpn_cls_score_reshape\"\n  type: \"Reshape\"\n  bottom: \"rpn_cls_score\"\n  top: \"rpn_cls_score_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 2\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"rpn_cls_prob\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape\"\n  top: \"rpn_cls_prob\"\n}\nlayer {\n  name: \"rpn_cls_prob_reshape\"\n  type: \"Reshape\"\n  bottom: \"rpn_cls_prob\"\n  top: \"rpn_cls_prob_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 18\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"proposal\"\n  type: \"Python\"\n  bottom: \"rpn_cls_prob_reshape\"\n  bottom: \"rpn_bbox_pred\"\n  bottom: \"im_info\"\n  top: \"rois\"\n  python_param {\n    module: \"rpn.proposal_layer\"\n    layer: \"ProposalLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"p2_rpn_conv/3x3\"\n  type: \"Convolution\"\n  bottom: \"conadd\"\n  top: \"p2_rpn/output\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 512\n    pad: 1\n    kernel_size: 3\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_relu/3x3\"\n  type: \"ReLU\"\n  bottom: \"p2_rpn/output\"\n  top: \"p2_rpn/output\"\n}\nlayer {\n  name: \"p2_rpn_cls_score\"\n  type: \"Convolution\"\n  bottom: \"p2_rpn/output\"\n  top: \"p2_rpn_cls_score\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 18\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_bbox_pred\"\n  type: \"Convolution\"\n  bottom: \"p2_rpn/output\"\n  top: 
\"p2_rpn_bbox_pred\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  convolution_param {\n    num_output: 36\n    pad: 0\n    kernel_size: 1\n    stride: 1\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_cls_score_reshape\"\n  type: \"Reshape\"\n  bottom: \"p2_rpn_cls_score\"\n  top: \"p2_rpn_cls_score_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 2\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_rpn_cls_prob\"\n  type: \"Softmax\"\n  bottom: \"p2_rpn_cls_score_reshape\"\n  top: \"p2_rpn_cls_prob\"\n}\nlayer {\n  name: \"p2_rpn_cls_prob_reshape\"\n  type: \"Reshape\"\n  bottom: \"p2_rpn_cls_prob\"\n  top: \"p2_rpn_cls_prob_reshape\"\n  reshape_param {\n    shape {\n      dim: 0\n      dim: 18\n      dim: -1\n      dim: 0\n    }\n  }\n}\nlayer {\n  name: \"p2_proposal\"\n  type: \"Python\"\n  bottom: \"p2_rpn_cls_prob_reshape\"\n  bottom: \"p2_rpn_bbox_pred\"\n  bottom: \"p2_im_info\"\n  top: \"p2_rois\"\n  python_param {\n    module: \"rpn.proposal_layer\"\n    layer: \"ProposalLayer\"\n    param_str: \"\\'feat_stride\\': 16\"\n  }\n}\nlayer {\n  name: \"data_all\"\n  type: \"Concat\"\n  bottom: \"rois\"\n  bottom: \"p2_rois\"\n  top: \"data_all\"\n  concat_param {\n    axis: 0\n  }\n}\nlayer {\n  name: \"roi_pool5\"\n  type: \"ROIPooling\"\n  bottom: \"conadd\"\n  bottom: \"data_all\"\n  top: \"pool5\"\n  roi_pooling_param {\n    pooled_h: 7\n    pooled_w: 7\n    spatial_scale: 0.0625\n  }\n}\nlayer {\n  name: \"fc6\"\n  type: \"InnerProduct\"\n  bottom: \"pool5\"\n  top: \"fc6\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu6\"\n  type: \"ReLU\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n}\nlayer {\n  name: 
\"drop6\"\n  type: \"Dropout\"\n  bottom: \"fc6\"\n  top: \"fc6\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7\"\n  type: \"InnerProduct\"\n  bottom: \"fc6\"\n  top: \"fc7\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 4096\n  }\n}\nlayer {\n  name: \"relu7\"\n  type: \"ReLU\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n}\nlayer {\n  name: \"drop7\"\n  type: \"Dropout\"\n  bottom: \"fc7\"\n  top: \"fc7\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"cls_score\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred\"\n  type: \"InnerProduct\"\n  bottom: \"fc7\"\n  top: \"bbox_pred\"\n  param {\n    lr_mult: 1\n    decay_mult: 1\n  }\n  param {\n    lr_mult: 2\n    decay_mult: 0\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"cls_prob\"\n  type: \"Softmax\"\n  bottom: \"cls_score\"\n  top: \"cls_prob\"\n}\nF0608 09:29:33.376754  6458 insert_splits.cpp:35] Unknown bottom blob 'p2_im_info' (layer 'p2_proposal', bottom index 2)\n*** Check failure stack trace: ***\n./experiments/scripts/FP_Net_end2end.sh: line 68:  6458 Aborted                 (core dumped) ./tools/test_net.py --gpu ${GPU_ID} --def models/${PT_DIR}/${NET}/FP_Net_end2end/test.prototxt --net ${NET_FINAL} --imdb ${TEST_IMDB} --cfg experiments/cfgs/FP_Net_end2end.yml ${EXTRA_ARGS}\n"
  },
  {
    "path": "caffe-fpn/tools/extra/train.log.train",
    "content": "#Iters Seconds TrainingLoss LearningRate\n0      0      5.75535   0.001\n20     20     2.91147   0.001\n40     40     2.87693   0.001\n60     60     2.61586   0.001\n80     80     1.76048   0.001\n100    100    2.70806   0.001\n120    120    1.90072   0.001\n140    140    1.93119   0.001\n160    160    2.68836   0.001\n180    180    2.77831   0.001\n200    200    2.72727   0.001\n220    220    2.18744   0.001\n240    240    1.82587   0.001\n260    260    2.09799   0.001\n280    280    2.04241   0.001\n300    300    2.19427   0.001\n320    320    1.2491    0.001\n340    340    1.57029   0.001\n360    360    1.68125   0.001\n380    380    2.36362   0.001\n400    400    2.02803   0.001\n420    420    1.87503   0.001\n440    440    1.77966   0.001\n460    460    1.31459   0.001\n480    480    1.71876   0.001\n500    500    2.14536   0.001\n520    520    1.24532   0.001\n540    540    3.17657   0.001\n560    560    1.52187   0.001\n580    580    2.55084   0.001\n600    600    1.69317   0.001\n620    620    1.44879   0.001\n640    640    1.82576   0.001\n660    660    1.94879   0.001\n680    680    1.46843   0.001\n700    700    1.85917   0.001\n720    720    1.71796   0.001\n740    740    1.43661   0.001\n760    760    1.65799   0.001\n780    780    0.710457  0.001\n800    800    1.44346   0.001\n820    820    1.12701   0.001\n840    840    2.86324   0.001\n860    860    1.44869   0.001\n880    880    1.65969   0.001\n900    900    1.69039   0.001\n920    920    2.46348   0.001\n940    940    1.03377   0.001\n960    960    2.12535   0.001\n980    980    1.07547   0.001\n1000   1000   1.36627   0.001\n1020   1020   1.06635   0.001\n1040   1040   1.68359   0.001\n1060   1060   1.08394   0.001\n1080   1080   1.53406   0.001\n1100   1100   1.18316   0.001\n1120   1120   1.21229   0.001\n1140   1140   1.42793   0.001\n1160   1160   1.45192   0.001\n1180   1180   1.95924   0.001\n1200   1200   1.24939   0.001\n1220   1220   1.87423   0.001\n1240   1240   
1.66454   0.001\n1260   1260   0.581533  0.001\n1280   1280   1.36698   0.001\n1300   1300   1.59252   0.001\n1320   1320   1.63926   0.001\n1340   1340   1.38708   0.001\n1360   1360   1.45138   0.001\n1380   1380   1.95489   0.001\n1400   1400   1.0495    0.001\n1420   1420   1.03692   0.001\n1440   1440   1.2589    0.001\n1460   1460   1.10283   0.001\n1480   1480   2.20443   0.001\n1500   1500   1.14416   0.001\n1520   1520   2.02919   0.001\n1540   1540   1.20915   0.001\n1560   1560   1.32344   0.001\n1580   1580   2.72568   0.001\n1600   1600   0.925849  0.001\n1620   1620   1.41194   0.001\n1640   1640   1.2961    0.001\n1660   1660   1.29705   0.001\n1680   1680   1.8493    0.001\n1700   1700   0.685647  0.001\n1720   1720   1.36782   0.001\n1740   1740   1.47916   0.001\n1760   1760   0.94058   0.001\n1780   1780   0.862696  0.001\n1800   1800   1.63472   0.001\n1820   1820   1.7458    0.001\n1840   1840   0.589357  0.001\n1860   1860   1.06328   0.001\n1880   1880   0.992141  0.001\n1900   1900   0.63294   0.001\n1920   1920   1.42183   0.001\n1940   1940   1.663     0.001\n1960   1960   1.14311   0.001\n1980   1980   1.34724   0.001\n2000   2000   0.561836  0.001\n2020   2020   0.875582  0.001\n2040   2040   1.13697   0.001\n2060   2060   1.22153   0.001\n2080   2080   1.28191   0.001\n2100   2100   2.01174   0.001\n2120   2120   2.0806    0.001\n2140   2140   0.771625  0.001\n2160   2160   1.20559   0.001\n2180   2180   0.813419  0.001\n2200   2200   1.22748   0.001\n2220   2220   1.05768   0.001\n2240   2240   0.713589  0.001\n2260   2260   0.762268  0.001\n2280   2280   1.74036   0.001\n2300   2300   0.649375  0.001\n2320   2320   1.30136   0.001\n2340   2340   1.14435   0.001\n2360   2360   1.41456   0.001\n2380   2380   1.02689   0.001\n2400   2400   1.5242    0.001\n2420   2420   0.891014  0.001\n2440   2440   1.28534   0.001\n2460   2460   0.987902  0.001\n2480   2480   0.994781  0.001\n2500   2500   1.37252   0.001\n2520   2520   1.36002   
0.001\n2540   2540   0.99073   0.001\n2560   2560   0.882777  0.001\n2580   2580   1.15264   0.001\n2600   2600   1.03977   0.001\n2620   2620   1.53462   0.001\n2640   2640   1.60991   0.001\n2660   2660   2.02155   0.001\n2680   2680   1.10572   0.001\n2700   2700   0.839528  0.001\n2720   2720   1.65201   0.001\n2740   2740   1.10152   0.001\n2760   2760   0.709194  0.001\n2780   2780   0.475377  0.001\n2800   2800   1.06183   0.001\n2820   2820   0.759828  0.001\n2840   2840   1.35158   0.001\n2860   2860   0.515239  0.001\n2880   2880   0.783716  0.001\n2900   2900   0.66823   0.001\n2920   2920   1.07996   0.001\n2940   2940   1.05079   0.001\n2960   2960   1.16245   0.001\n2980   2980   1.51395   0.001\n3000   3000   1.03882   0.001\n3020   3020   0.885829  0.001\n3040   3040   1.26035   0.001\n3060   3060   1.54773   0.001\n3080   3080   1.19362   0.001\n3100   3100   0.638312  0.001\n3120   3120   0.673077  0.001\n3140   3140   0.963317  0.001\n3160   3160   1.46722   0.001\n3180   3180   1.19371   0.001\n3200   3200   0.685202  0.001\n3220   3220   1.00138   0.001\n3240   3240   0.64623   0.001\n3260   3260   1.59334   0.001\n3280   3280   0.880625  0.001\n3300   3300   1.34161   0.001\n3320   3320   1.49816   0.001\n3340   3340   1.25427   0.001\n3360   3360   0.923309  0.001\n3380   3380   1.79977   0.001\n3400   3400   0.820677  0.001\n3420   3420   1.25995   0.001\n3440   3440   1.74615   0.001\n3460   3460   1.63004   0.001\n3480   3480   2.13677   0.001\n3500   3500   1.00069   0.001\n3520   3520   1.36736   0.001\n3540   3540   0.858699  0.001\n3560   3560   0.56873   0.001\n3580   3580   1.4137    0.001\n3600   3600   1.5519    0.001\n3620   3620   1.00408   0.001\n3640   3640   1.138     0.001\n3660   3660   0.551207  0.001\n3680   3680   0.97458   0.001\n3700   3700   0.859932  0.001\n3720   3720   2.03067   0.001\n3740   3740   1.09937   0.001\n3760   3760   1.17576   0.001\n3780   3780   1.89727   0.001\n3800   3800   0.717511  0.001\n3820   
3820   1.20435   0.001\n3840   3840   0.989965  0.001\n3860   3860   1.07921   0.001\n3880   3880   1.11547   0.001\n3900   3900   0.845023  0.001\n3920   3920   1.73284   0.001\n3940   3940   2.44625   0.001\n3960   3960   2.32607   0.001\n3980   3980   1.51254   0.001\n4000   4000   1.25269   0.001\n4020   4020   1.51442   0.001\n4040   4040   0.876494  0.001\n4060   4060   1.17599   0.001\n4080   4080   1.09225   0.001\n4100   4100   0.616821  0.001\n4120   4120   0.822829  0.001\n4140   4140   1.07762   0.001\n4160   4160   1.96475   0.001\n4180   4180   1.20765   0.001\n4200   4200   0.990373  0.001\n4220   4220   1.19645   0.001\n4240   4240   1.03522   0.001\n4260   4260   0.965098  0.001\n4280   4280   0.641442  0.001\n4300   4300   0.895966  0.001\n4320   4320   1.82711   0.001\n4340   4340   1.06304   0.001\n4360   4360   0.69979   0.001\n4380   4380   1.00267   0.001\n4400   4400   1.39943   0.001\n4420   4420   1.55729   0.001\n4440   4440   1.02813   0.001\n4460   4460   1.24077   0.001\n4480   4480   1.22237   0.001\n4500   4500   1.22253   0.001\n4520   4520   1.1244    0.001\n4540   4540   1.01682   0.001\n4560   4560   1.60544   0.001\n4580   4580   1.03721   0.001\n4600   4600   1.45526   0.001\n4620   4620   1.44831   0.001\n4640   4640   1.27031   0.001\n4660   4660   1.09988   0.001\n4680   4680   1.14753   0.001\n4700   4700   1.16976   0.001\n4720   4720   0.946762  0.001\n4740   4740   1.37254   0.001\n4760   4760   1.39044   0.001\n4780   4780   1.04524   0.001\n4800   4800   1.39799   0.001\n4820   4820   0.932727  0.001\n4840   4840   0.666949  0.001\n4860   4860   0.976514  0.001\n4880   4880   0.931338  0.001\n4900   4900   1.03892   0.001\n4920   4920   1.89443   0.001\n4940   4940   0.832097  0.001\n4960   4960   1.17015   0.001\n4980   4980   1.50679   0.001\n5000   5000   0.92343   0.001\n5020   5020   0.802325  0.001\n5040   5040   0.654119  0.001\n5060   5060   0.760569  0.001\n5080   5080   0.913851  0.001\n5100   5100   0.604722 
 0.001\n5120   5120   1.26367   0.001\n5140   5140   1.02515   0.001\n5160   5160   0.780194  0.001\n5180   5180   2.36847   0.001\n5200   5200   0.532096  0.001\n5220   5220   1.37039   0.001\n5240   5240   1.63459   0.001\n5260   5260   1.66427   0.001\n5280   5280   0.640627  0.001\n5300   5300   1.51232   0.001\n5320   5320   0.753834  0.001\n5340   5340   2.11293   0.001\n5360   5360   0.765798  0.001\n5380   5380   1.19464   0.001\n5400   5400   0.857106  0.001\n5420   5420   0.974833  0.001\n5440   5440   0.652232  0.001\n5460   5460   1.3393    0.001\n5480   5480   0.606516  0.001\n5500   5500   1.70433   0.001\n5520   5520   1.2066    0.001\n5540   5540   1.42546   0.001\n5560   5560   0.774946  0.001\n5580   5580   1.31511   0.001\n5600   5600   0.741379  0.001\n5620   5620   1.11517   0.001\n5640   5640   1.74944   0.001\n5660   5660   1.07103   0.001\n5680   5680   1.76564   0.001\n5700   5700   1.10446   0.001\n5720   5720   1.08096   0.001\n5740   5740   1.32638   0.001\n5760   5760   1.1757    0.001\n5780   5780   1.1919    0.001\n5800   5800   1.30972   0.001\n5820   5820   1.11024   0.001\n5840   5840   1.23773   0.001\n5860   5860   1.66896   0.001\n5880   5880   0.875077  0.001\n5900   5900   1.03549   0.001\n5920   5920   0.603631  0.001\n5940   5940   1.88826   0.001\n5960   5960   1.48765   0.001\n5980   5980   1.00447   0.001\n6000   6000   1.49723   0.001\n6020   6020   0.472161  0.001\n6040   6040   0.840406  0.001\n6060   6060   2.07885   0.001\n6080   6080   0.991401  0.001\n6100   6100   0.806035  0.001\n6120   6120   0.523766  0.001\n6140   6140   1.3545    0.001\n6160   6160   0.903509  0.001\n6180   6180   0.920589  0.001\n6200   6200   1.13235   0.001\n6220   6220   0.794701  0.001\n6240   6240   0.817587  0.001\n6260   6260   0.542096  0.001\n6280   6280   1.40959   0.001\n6300   6300   0.920827  0.001\n6320   6320   0.71741   0.001\n6340   6340   1.058     0.001\n6360   6360   0.998173  0.001\n6380   6380   1.28576   0.001\n6400   
`Training log excerpt (iterations 6400–29620, lr = 0.001): loss fluctuates between roughly 0.33 and 3.69, averaging below 1.0 as training progresses; full log omitted for brevity.`
29620  0.401586  0.001\n29640  29640  0.538022  0.001\n29660  29660  0.783567  0.001\n29680  29680  1.61906   0.001\n29700  29700  0.7148    0.001\n29720  29720  0.603071  0.001\n29740  29740  0.840587  0.001\n29760  29760  1.20569   0.001\n29780  29780  0.852463  0.001\n29800  29800  0.627823  0.001\n29820  29820  0.978499  0.001\n29840  29840  0.686315  0.001\n29860  29860  1.01386   0.001\n29880  29880  0.35992   0.001\n29900  29900  1.32403   0.001\n29920  29920  0.58503   0.001\n29940  29940  1.55819   0.001\n29960  29960  1.05589   0.001\n29980  29980  1.03745   0.001\n30000  30000  1.39192   0.001\n30020  30020  0.374112  0.001\n30040  30040  1.18743   0.001\n30060  30060  1.22317   0.001\n30080  30080  1.08668   0.001\n30100  30100  0.664197  0.001\n30120  30120  1.21627   0.001\n30140  30140  0.637521  0.001\n30160  30160  0.851347  0.001\n30180  30180  0.42832   0.001\n30200  30200  0.837062  0.001\n30220  30220  1.79929   0.001\n30240  30240  0.930559  0.001\n30260  30260  0.916852  0.001\n30280  30280  1.04199   0.001\n30300  30300  1.17863   0.001\n30320  30320  0.432046  0.001\n30340  30340  0.910223  0.001\n30360  30360  1.16497   0.001\n30380  30380  1.19847   0.001\n30400  30400  0.459369  0.001\n30420  30420  0.498953  0.001\n30440  30440  0.378508  0.001\n30460  30460  1.01789   0.001\n30480  30480  1.01328   0.001\n30500  30500  0.434555  0.001\n30520  30520  1.444     0.001\n30540  30540  0.7428    0.001\n30560  30560  0.918116  0.001\n30580  30580  1.21818   0.001\n30600  30600  0.46168   0.001\n30620  30620  0.572259  0.001\n30640  30640  0.607547  0.001\n30660  30660  0.584792  0.001\n30680  30680  0.773609  0.001\n30700  30700  0.456729  0.001\n30720  30720  0.631707  0.001\n30740  30740  0.86245   0.001\n30760  30760  0.471783  0.001\n30780  30780  0.63132   0.001\n30800  30800  0.52068   0.001\n30820  30820  0.584094  0.001\n30840  30840  1.01435   0.001\n30860  30860  1.14132   0.001\n30880  30880  0.373136  0.001\n30900  30900  1.21108  
 0.001\n30920  30920  1.80283   0.001\n30940  30940  2.14237   0.001\n30960  30960  1.05774   0.001\n30980  30980  0.981165  0.001\n31000  31000  0.814379  0.001\n31020  31020  0.833036  0.001\n31040  31040  0.894134  0.001\n31060  31060  0.859654  0.001\n31080  31080  0.738209  0.001\n31100  31100  1.17119   0.001\n31120  31120  0.568375  0.001\n31140  31140  1.51837   0.001\n31160  31160  0.711207  0.001\n31180  31180  0.998644  0.001\n31200  31200  0.961315  0.001\n31220  31220  1.13191   0.001\n31240  31240  0.45746   0.001\n31260  31260  0.652766  0.001\n31280  31280  0.534352  0.001\n31300  31300  0.81712   0.001\n31320  31320  0.376767  0.001\n31340  31340  0.831702  0.001\n31360  31360  0.478927  0.001\n31380  31380  0.589898  0.001\n31400  31400  0.413504  0.001\n31420  31420  0.631495  0.001\n31440  31440  0.43571   0.001\n31460  31460  0.386494  0.001\n31480  31480  0.612432  0.001\n31500  31500  0.888489  0.001\n31520  31520  0.769734  0.001\n31540  31540  0.984873  0.001\n31560  31560  0.604839  0.001\n31580  31580  0.501966  0.001\n31600  31600  0.612702  0.001\n31620  31620  0.89742   0.001\n31640  31640  0.513069  0.001\n31660  31660  1.27561   0.001\n31680  31680  0.35364   0.001\n31700  31700  0.505844  0.001\n31720  31720  0.536703  0.001\n31740  31740  0.962339  0.001\n31760  31760  0.62392   0.001\n31780  31780  1.0655    0.001\n31800  31800  0.922162  0.001\n31820  31820  0.983361  0.001\n31840  31840  0.429712  0.001\n31860  31860  0.720873  0.001\n31880  31880  1.46495   0.001\n31900  31900  0.963123  0.001\n31920  31920  0.453951  0.001\n31940  31940  0.49025   0.001\n31960  31960  0.75928   0.001\n31980  31980  1.81515   0.001\n32000  32000  0.93172   0.001\n32020  32020  0.446514  0.001\n32040  32040  0.721809  0.001\n32060  32060  0.748247  0.001\n32080  32080  1.01143   0.001\n32100  32100  0.565522  0.001\n32120  32120  0.78847   0.001\n32140  32140  0.761287  0.001\n32160  32160  1.08939   0.001\n32180  32180  0.686558  0.001\n32200  
32200  1.68677   0.001\n32220  32220  0.559225  0.001\n32240  32240  0.357419  0.001\n32260  32260  1.01743   0.001\n32280  32280  0.672218  0.001\n32300  32300  0.433339  0.001\n32320  32320  0.815731  0.001\n32340  32340  1.10619   0.001\n32360  32360  0.804252  0.001\n32380  32380  0.841493  0.001\n32400  32400  0.535706  0.001\n32420  32420  1.0569    0.001\n32440  32440  1.05037   0.001\n32460  32460  1.14657   0.001\n32480  32480  0.498018  0.001\n32500  32500  0.564177  0.001\n32520  32520  2.49198   0.001\n32540  32540  0.646983  0.001\n32560  32560  1.05057   0.001\n32580  32580  0.359979  0.001\n32600  32600  0.517808  0.001\n32620  32620  0.926682  0.001\n32640  32640  0.919778  0.001\n32660  32660  0.421723  0.001\n32680  32680  0.827626  0.001\n32700  32700  0.622038  0.001\n32720  32720  0.832296  0.001\n32740  32740  0.846465  0.001\n32760  32760  0.835177  0.001\n32780  32780  0.523759  0.001\n32800  32800  1.88973   0.001\n32820  32820  1.20374   0.001\n32840  32840  0.627261  0.001\n32860  32860  1.29485   0.001\n32880  32880  0.495925  0.001\n32900  32900  0.994536  0.001\n32920  32920  1.01182   0.001\n32940  32940  1.26661   0.001\n32960  32960  0.68181   0.001\n32980  32980  0.503269  0.001\n33000  33000  0.444556  0.001\n33020  33020  0.729096  0.001\n33040  33040  1.0439    0.001\n33060  33060  1.20839   0.001\n33080  33080  0.73453   0.001\n33100  33100  0.78591   0.001\n33120  33120  0.897991  0.001\n33140  33140  1.06301   0.001\n33160  33160  1.20464   0.001\n33180  33180  1.1145    0.001\n33200  33200  0.722433  0.001\n33220  33220  0.73795   0.001\n33240  33240  0.55689   0.001\n33260  33260  1.05615   0.001\n33280  33280  0.661188  0.001\n33300  33300  0.418191  0.001\n33320  33320  0.73194   0.001\n33340  33340  1.37032   0.001\n33360  33360  0.998523  0.001\n33380  33380  1.99021   0.001\n33400  33400  0.540744  0.001\n33420  33420  0.471284  0.001\n33440  33440  0.329488  0.001\n33460  33460  0.457209  0.001\n33480  33480  1.13891  
 0.001\n33500  33500  0.531478  0.001\n33520  33520  1.12143   0.001\n33540  33540  0.811741  0.001\n33560  33560  1.11348   0.001\n33580  33580  0.644336  0.001\n33600  33600  0.53734   0.001\n33620  33620  0.580491  0.001\n33640  33640  0.630186  0.001\n33660  33660  1.31333   0.001\n33680  33680  0.584861  0.001\n33700  33700  0.794474  0.001\n33720  33720  0.380818  0.001\n33740  33740  0.937224  0.001\n33760  33760  0.465024  0.001\n33780  33780  0.562757  0.001\n33800  33800  0.839482  0.001\n33820  33820  1.04455   0.001\n33840  33840  0.592538  0.001\n33860  33860  0.477707  0.001\n33880  33880  1.03378   0.001\n33900  33900  0.65709   0.001\n33920  33920  0.565493  0.001\n33940  33940  1.17592   0.001\n33960  33960  0.799084  0.001\n33980  33980  0.687406  0.001\n34000  34000  0.395481  0.001\n34020  34020  0.420111  0.001\n34040  34040  0.664011  0.001\n34060  34060  2.05022   0.001\n34080  34080  0.42833   0.001\n34100  34100  2.33633   0.001\n34120  34120  0.946352  0.001\n34140  34140  0.911198  0.001\n34160  34160  0.464614  0.001\n34180  34180  0.481842  0.001\n34200  34200  0.750829  0.001\n34220  34220  0.825474  0.001\n34240  34240  0.572186  0.001\n34260  34260  0.782822  0.001\n34280  34280  1.49363   0.001\n34300  34300  0.652759  0.001\n34320  34320  0.956592  0.001\n34340  34340  0.711705  0.001\n34360  34360  0.390503  0.001\n34380  34380  0.663514  0.001\n34400  34400  1.31236   0.001\n34420  34420  0.739487  0.001\n34440  34440  0.635068  0.001\n34460  34460  0.709618  0.001\n34480  34480  0.605474  0.001\n34500  34500  0.938903  0.001\n34520  34520  0.533782  0.001\n34540  34540  0.694092  0.001\n34560  34560  0.698837  0.001\n34580  34580  0.815976  0.001\n34600  34600  0.431868  0.001\n34620  34620  0.394785  0.001\n34640  34640  0.857643  0.001\n34660  34660  0.939804  0.001\n34680  34680  1.49504   0.001\n34700  34700  0.740308  0.001\n34720  34720  0.548199  0.001\n34740  34740  0.527028  0.001\n34760  34760  2.25262   0.001\n34780  
34780  0.914005  0.001\n34800  34800  0.471413  0.001\n34820  34820  0.536085  0.001\n34840  34840  0.343418  0.001\n34860  34860  0.627994  0.001\n34880  34880  1.40819   0.001\n34900  34900  0.582768  0.001\n34920  34920  0.961189  0.001\n34940  34940  1.74307   0.001\n34960  34960  2.14542   0.001\n34980  34980  0.534651  0.001\n35000  35000  1.79913   0.001\n35020  35020  0.348042  0.001\n35040  35040  0.921378  0.001\n35060  35060  0.694921  0.001\n35080  35080  0.774775  0.001\n35100  35100  0.473963  0.001\n35120  35120  1.06357   0.001\n35140  35140  0.5524    0.001\n35160  35160  0.384334  0.001\n35180  35180  0.828316  0.001\n35200  35200  0.720983  0.001\n35220  35220  0.623908  0.001\n35240  35240  1.18763   0.001\n35260  35260  0.684291  0.001\n35280  35280  0.954309  0.001\n35300  35300  0.920351  0.001\n35320  35320  0.659816  0.001\n35340  35340  0.632094  0.001\n35360  35360  0.888685  0.001\n35380  35380  1.3844    0.001\n35400  35400  0.550742  0.001\n35420  35420  0.655033  0.001\n35440  35440  0.784531  0.001\n35460  35460  0.562834  0.001\n35480  35480  0.50811   0.001\n35500  35500  0.53546   0.001\n35520  35520  0.457577  0.001\n35540  35540  0.703184  0.001\n35560  35560  1.00202   0.001\n35580  35580  0.757121  0.001\n35600  35600  1.7009    0.001\n35620  35620  2.03373   0.001\n35640  35640  1.50404   0.001\n35660  35660  0.458346  0.001\n35680  35680  2.4266    0.001\n35700  35700  0.711825  0.001\n35720  35720  1.07591   0.001\n35740  35740  0.570154  0.001\n35760  35760  0.479515  0.001\n35780  35780  1.01018   0.001\n35800  35800  0.396518  0.001\n35820  35820  0.851793  0.001\n35840  35840  0.524207  0.001\n35860  35860  1.34837   0.001\n35880  35880  0.467709  0.001\n35900  35900  0.466904  0.001\n35920  35920  1.00284   0.001\n35940  35940  0.724006  0.001\n35960  35960  0.508779  0.001\n35980  35980  0.618428  0.001\n36000  36000  0.369387  0.001\n36020  36020  0.879899  0.001\n36040  36040  1.03409   0.001\n36060  36060  0.600315 
 0.001\n36080  36080  0.927358  0.001\n36100  36100  0.505165  0.001\n36120  36120  0.496866  0.001\n36140  36140  0.524906  0.001\n36160  36160  0.838519  0.001\n36180  36180  0.851474  0.001\n36200  36200  0.997366  0.001\n36220  36220  0.545435  0.001\n36240  36240  1.34725   0.001\n36260  36260  0.914829  0.001\n36280  36280  0.59902   0.001\n36300  36300  0.471977  0.001\n36320  36320  1.0063    0.001\n36340  36340  1.25434   0.001\n36360  36360  0.39464   0.001\n36380  36380  0.668125  0.001\n36400  36400  0.534959  0.001\n36420  36420  0.823243  0.001\n36440  36440  0.476497  0.001\n36460  36460  0.363714  0.001\n36480  36480  0.508732  0.001\n36500  36500  0.515041  0.001\n36520  36520  1.81983   0.001\n36540  36540  0.615407  0.001\n36560  36560  0.751332  0.001\n36580  36580  0.50446   0.001\n36600  36600  1.05395   0.001\n36620  36620  1.07412   0.001\n36640  36640  0.375883  0.001\n36660  36660  0.377586  0.001\n36680  36680  0.837437  0.001\n36700  36700  0.454674  0.001\n36720  36720  0.911012  0.001\n36740  36740  0.637932  0.001\n36760  36760  0.790828  0.001\n36780  36780  0.374493  0.001\n36800  36800  0.47676   0.001\n36820  36820  0.355943  0.001\n36840  36840  0.688241  0.001\n36860  36860  0.739338  0.001\n36880  36880  0.821493  0.001\n36900  36900  0.81305   0.001\n36920  36920  0.956813  0.001\n36940  36940  0.692208  0.001\n36960  36960  0.534895  0.001\n36980  36980  0.49002   0.001\n37000  37000  1.2034    0.001\n37020  37020  1.33817   0.001\n37040  37040  0.975172  0.001\n37060  37060  0.827597  0.001\n37080  37080  0.350842  0.001\n37100  37100  1.65048   0.001\n37120  37120  0.732824  0.001\n37140  37140  0.662698  0.001\n37160  37160  0.540629  0.001\n37180  37180  0.619217  0.001\n37200  37200  1.1052    0.001\n37220  37220  0.427977  0.001\n37240  37240  0.279957  0.001\n37260  37260  0.896184  0.001\n37280  37280  0.857674  0.001\n37300  37300  0.46595   0.001\n37320  37320  0.752603  0.001\n37340  37340  0.384873  0.001\n37360  
37360  1.51023   0.001\n37380  37380  1.44577   0.001\n37400  37400  0.747057  0.001\n37420  37420  0.819882  0.001\n37440  37440  0.638468  0.001\n37460  37460  1.82264   0.001\n37480  37480  0.980125  0.001\n37500  37500  0.600232  0.001\n37520  37520  0.789549  0.001\n37540  37540  0.79799   0.001\n37560  37560  0.674514  0.001\n37580  37580  0.60436   0.001\n37600  37600  0.642183  0.001\n37620  37620  1.07969   0.001\n37640  37640  0.796969  0.001\n37660  37660  0.646686  0.001\n37680  37680  1.58672   0.001\n37700  37700  0.635327  0.001\n37720  37720  0.488684  0.001\n37740  37740  0.918773  0.001\n37760  37760  0.659306  0.001\n37780  37780  0.929089  0.001\n37800  37800  0.557747  0.001\n37820  37820  0.791083  0.001\n37840  37840  1.13978   0.001\n37860  37860  0.683275  0.001\n37880  37880  1.01331   0.001\n37900  37900  0.689391  0.001\n37920  37920  1.02055   0.001\n37940  37940  0.620005  0.001\n37960  37960  0.609873  0.001\n37980  37980  0.365148  0.001\n38000  38000  0.506558  0.001\n38020  38020  1.31722   0.001\n38040  38040  0.444472  0.001\n38060  38060  0.797092  0.001\n38080  38080  0.473672  0.001\n38100  38100  1.01985   0.001\n38120  38120  3.26368   0.001\n38140  38140  1.03454   0.001\n38160  38160  1.74928   0.001\n38180  38180  0.55618   0.001\n38200  38200  1.09851   0.001\n38220  38220  0.564117  0.001\n38240  38240  0.49969   0.001\n38260  38260  0.641908  0.001\n38280  38280  2.35418   0.001\n38300  38300  0.747985  0.001\n38320  38320  0.478231  0.001\n38340  38340  0.911134  0.001\n38360  38360  0.940367  0.001\n38380  38380  0.790869  0.001\n38400  38400  0.519165  0.001\n38420  38420  0.750402  0.001\n38440  38440  0.488755  0.001\n38460  38460  0.693869  0.001\n38480  38480  0.469634  0.001\n38500  38500  0.800563  0.001\n38520  38520  1.20697   0.001\n38540  38540  0.611457  0.001\n38560  38560  0.725753  0.001\n38580  38580  0.793101  0.001\n38600  38600  0.731596  0.001\n38620  38620  0.354591  0.001\n38640  38640  0.57438  
 0.001\n38660  38660  1.04316   0.001\n38680  38680  0.630783  0.001\n38700  38700  0.371902  0.001\n38720  38720  0.548375  0.001\n38740  38740  0.693131  0.001\n38760  38760  0.408372  0.001\n38780  38780  0.543541  0.001\n38800  38800  0.615081  0.001\n38820  38820  0.73967   0.001\n38840  38840  0.849521  0.001\n38860  38860  0.506539  0.001\n38880  38880  1.07637   0.001\n38900  38900  1.3272    0.001\n38920  38920  0.919629  0.001\n38940  38940  0.863706  0.001\n38960  38960  0.808047  0.001\n38980  38980  0.588502  0.001\n39000  39000  0.89479   0.001\n39020  39020  1.0372    0.001\n39040  39040  0.540904  0.001\n39060  39060  1.13134   0.001\n39080  39080  0.616673  0.001\n39100  39100  2.32773   0.001\n39120  39120  0.731407  0.001\n39140  39140  1.37763   0.001\n39160  39160  0.741931  0.001\n39180  39180  0.591418  0.001\n39200  39200  0.875457  0.001\n39220  39220  1.49917   0.001\n39240  39240  0.358489  0.001\n39260  39260  0.790616  0.001\n39280  39280  0.68739   0.001\n39300  39300  0.579827  0.001\n39320  39320  1.3071    0.001\n39340  39340  0.68547   0.001\n39360  39360  0.688078  0.001\n39380  39380  0.879364  0.001\n39400  39400  1.40786   0.001\n39420  39420  0.969241  0.001\n39440  39440  1.20422   0.001\n39460  39460  0.534266  0.001\n39480  39480  0.639583  0.001\n39500  39500  0.428011  0.001\n39520  39520  0.420574  0.001\n39540  39540  1.03572   0.001\n39560  39560  0.462572  0.001\n39580  39580  0.675017  0.001\n39600  39600  2.09342   0.001\n39620  39620  0.736375  0.001\n39640  39640  1.15925   0.001\n39660  39660  0.640987  0.001\n39680  39680  0.905536  0.001\n39700  39700  0.432513  0.001\n39720  39720  1.04705   0.001\n39740  39740  2.82124   0.001\n39760  39760  0.841372  0.001\n39780  39780  0.761099  0.001\n39800  39800  1.51809   0.001\n39820  39820  1.06073   0.001\n39840  39840  0.358507  0.001\n39860  39860  0.540057  0.001\n39880  39880  1.11447   0.001\n39900  39900  0.614556  0.001\n39920  39920  0.486474  0.001\n39940  
39940  1.8937    0.001\n39960  39960  0.633186  0.001\n39980  39980  0.73406   0.001\n40000  40000  0.790908  0.001\n40020  40020  1.01865   0.001\n40040  40040  0.36494   0.001\n40060  40060  0.545179  0.001\n40080  40080  0.531176  0.001\n40100  40100  1.01231   0.001\n40120  40120  0.65345   0.001\n40140  40140  0.528565  0.001\n40160  40160  0.486827  0.001\n40180  40180  0.414197  0.001\n40200  40200  1.06179   0.001\n40220  40220  0.552568  0.001\n40240  40240  0.530837  0.001\n40260  40260  1.45246   0.001\n40280  40280  0.462367  0.001\n40300  40300  0.665231  0.001\n40320  40320  0.8118    0.001\n40340  40340  1.14239   0.001\n40360  40360  1.37103   0.001\n40380  40380  1.50437   0.001\n40400  40400  0.627388  0.001\n40420  40420  1.7866    0.001\n40440  40440  0.459482  0.001\n40460  40460  1.25103   0.001\n40480  40480  0.698465  0.001\n40500  40500  0.621715  0.001\n40520  40520  0.702931  0.001\n40540  40540  0.317889  0.001\n40560  40560  0.554467  0.001\n40580  40580  0.646797  0.001\n40600  40600  1.28755   0.001\n40620  40620  1.50256   0.001\n40640  40640  0.498028  0.001\n40660  40660  0.584178  0.001\n40680  40680  0.548532  0.001\n40700  40700  1.04744   0.001\n40720  40720  0.459013  0.001\n40740  40740  0.381364  0.001\n40760  40760  0.68082   0.001\n40780  40780  0.436217  0.001\n40800  40800  0.432982  0.001\n40820  40820  0.568687  0.001\n40840  40840  0.50142   0.001\n40860  40860  1.01271   0.001\n40880  40880  0.298028  0.001\n40900  40900  2.06906   0.001\n40920  40920  1.13007   0.001\n40940  40940  0.532125  0.001\n40960  40960  0.519246  0.001\n40980  40980  0.497647  0.001\n41000  41000  0.553974  0.001\n41020  41020  0.656693  0.001\n41040  41040  0.588009  0.001\n41060  41060  1.01934   0.001\n41080  41080  0.788348  0.001\n41100  41100  0.996355  0.001\n41120  41120  2.05616   0.001\n41140  41140  1.54568   0.001\n41160  41160  0.549595  0.001\n41180  41180  1.15903   0.001\n41200  41200  0.570126  0.001\n41220  41220  1.0161   
 0.001\n41240  41240  0.579436  0.001\n41260  41260  0.485955  0.001\n41280  41280  0.451444  0.001\n41300  41300  0.922809  0.001\n41320  41320  0.863672  0.001\n41340  41340  0.698842  0.001\n41360  41360  1.39653   0.001\n41380  41380  1.12892   0.001\n41400  41400  0.314346  0.001\n41420  41420  0.872569  0.001\n41440  41440  0.793241  0.001\n41460  41460  1.13927   0.001\n41480  41480  0.374719  0.001\n41500  41500  0.703071  0.001\n41520  41520  1.48779   0.001\n41540  41540  1.22701   0.001\n41560  41560  0.65274   0.001\n41580  41580  0.625412  0.001\n41600  41600  0.738764  0.001\n41620  41620  0.717248  0.001\n41640  41640  0.490057  0.001\n41660  41660  0.553821  0.001\n41680  41680  0.816526  0.001\n41700  41700  1.28089   0.001\n41720  41720  0.556574  0.001\n41740  41740  0.688396  0.001\n41760  41760  0.747165  0.001\n41780  41780  0.327459  0.001\n41800  41800  0.410629  0.001\n41820  41820  0.598941  0.001\n41840  41840  0.539441  0.001\n41860  41860  1.07751   0.001\n41880  41880  0.82325   0.001\n41900  41900  0.929131  0.001\n41920  41920  1.03314   0.001\n41940  41940  0.415459  0.001\n41960  41960  0.601679  0.001\n41980  41980  0.434024  0.001\n42000  42000  1.04518   0.001\n42020  42020  0.827392  0.001\n42040  42040  0.441818  0.001\n42060  42060  0.560752  0.001\n42080  42080  0.741453  0.001\n42100  42100  0.446986  0.001\n42120  42120  0.584224  0.001\n42140  42140  1.16291   0.001\n42160  42160  0.869578  0.001\n42180  42180  0.895598  0.001\n42200  42200  0.592178  0.001\n42220  42220  0.519835  0.001\n42240  42240  0.611278  0.001\n42260  42260  0.6823    0.001\n42280  42280  0.598671  0.001\n42300  42300  0.973109  0.001\n42320  42320  0.861927  0.001\n42340  42340  0.378567  0.001\n42360  42360  0.50706   0.001\n42380  42380  0.761812  0.001\n42400  42400  0.434853  0.001\n42420  42420  0.614713  0.001\n42440  42440  0.57465   0.001\n42460  42460  0.449034  0.001\n42480  42480  0.563529  0.001\n42500  42500  0.8384    0.001\n42520  
42520  0.373316  0.001\n42540  42540  0.389173  0.001\n42560  42560  0.706242  0.001\n42580  42580  0.315757  0.001\n42600  42600  0.600168  0.001\n42620  42620  0.764984  0.001\n42640  42640  0.862262  0.001\n42660  42660  0.602872  0.001\n42680  42680  2.09127   0.001\n42700  42700  0.495368  0.001\n42720  42720  0.757915  0.001\n42740  42740  0.470334  0.001\n42760  42760  1.43644   0.001\n42780  42780  0.888931  0.001\n42800  42800  0.81502   0.001\n42820  42820  0.45088   0.001\n42840  42840  0.683719  0.001\n42860  42860  0.667473  0.001\n42880  42880  0.555821  0.001\n42900  42900  0.473745  0.001\n42920  42920  0.787988  0.001\n42940  42940  0.468379  0.001\n42960  42960  2.85476   0.001\n42980  42980  1.8602    0.001\n43000  43000  0.850171  0.001\n43020  43020  1.35783   0.001\n43040  43040  1.23191   0.001\n43060  43060  1.12732   0.001\n43080  43080  0.391022  0.001\n43100  43100  0.882019  0.001\n43120  43120  0.793878  0.001\n43140  43140  1.48851   0.001\n43160  43160  1.43437   0.001\n43180  43180  0.610664  0.001\n43200  43200  1.02673   0.001\n43220  43220  1.61092   0.001\n43240  43240  0.716407  0.001\n43260  43260  0.523316  0.001\n43280  43280  1.14815   0.001\n43300  43300  0.686426  0.001\n43320  43320  0.643691  0.001\n43340  43340  0.626056  0.001\n43360  43360  0.679885  0.001\n43380  43380  0.697723  0.001\n43400  43400  0.636272  0.001\n43420  43420  0.595283  0.001\n43440  43440  0.670027  0.001\n43460  43460  0.615179  0.001\n43480  43480  0.909705  0.001\n43500  43500  0.947734  0.001\n43520  43520  1.09474   0.001\n43540  43540  0.638538  0.001\n43560  43560  0.52551   0.001\n43580  43580  0.701005  0.001\n43600  43600  0.409847  0.001\n43620  43620  0.669687  0.001\n43640  43640  1.96938   0.001\n43660  43660  1.02351   0.001\n43680  43680  0.702708  0.001\n43700  43700  0.520829  0.001\n43720  43720  0.631967  0.001\n43740  43740  0.596985  0.001\n43760  43760  1.01196   0.001\n43780  43780  0.348021  0.001\n43800  43800  0.879007 
 0.001\n43820  43820  1.00734   0.001\n43840  43840  0.333494  0.001\n43860  43860  0.881909  0.001\n43880  43880  0.908687  0.001\n43900  43900  1.03151   0.001\n43920  43920  0.614975  0.001\n43940  43940  0.517646  0.001\n43960  43960  0.692578  0.001\n43980  43980  0.587353  0.001\n44000  44000  0.745958  0.001\n44020  44020  1.56082   0.001\n44040  44040  0.504615  0.001\n44060  44060  0.497881  0.001\n44080  44080  0.465416  0.001\n44100  44100  0.596118  0.001\n44120  44120  0.396984  0.001\n44140  44140  0.398861  0.001\n44160  44160  0.645517  0.001\n44180  44180  0.438644  0.001\n44200  44200  0.922875  0.001\n44220  44220  0.438085  0.001\n44240  44240  0.575802  0.001\n44260  44260  0.548672  0.001\n44280  44280  0.916382  0.001\n44300  44300  0.808355  0.001\n44320  44320  0.813204  0.001\n44340  44340  0.735909  0.001\n44360  44360  1.00263   0.001\n44380  44380  0.374558  0.001\n44400  44400  0.439273  0.001\n44420  44420  0.923241  0.001\n44440  44440  0.611996  0.001\n44460  44460  0.499422  0.001\n44480  44480  0.787906  0.001\n44500  44500  0.583705  0.001\n44520  44520  0.45534   0.001\n44540  44540  0.930042  0.001\n44560  44560  0.73292   0.001\n44580  44580  0.390981  0.001\n44600  44600  0.595087  0.001\n44620  44620  0.442305  0.001\n44640  44640  1.113     0.001\n44660  44660  1.19335   0.001\n44680  44680  0.380484  0.001\n44700  44700  0.618431  0.001\n44720  44720  0.695953  0.001\n44740  44740  1.12217   0.001\n44760  44760  0.480168  0.001\n44780  44780  0.498655  0.001\n44800  44800  0.800428  0.001\n44820  44820  1.15622   0.001\n44840  44840  0.958664  0.001\n44860  44860  0.593323  0.001\n44880  44880  0.760106  0.001\n44900  44900  0.637881  0.001\n44920  44920  0.381813  0.001\n44940  44940  0.484999  0.001\n44960  44960  0.804462  0.001\n44980  44980  0.845265  0.001\n45000  45000  0.446771  0.001\n45020  45020  0.531935  0.001\n45040  45040  0.533764  0.001\n45060  45060  0.689905  0.001\n45080  45080  0.618956  0.001\n45100  
45100  0.666632  0.001\n45120  45120  0.666942  0.001\n45140  45140  0.559567  0.001\n45160  45160  1.32217   0.001\n45180  45180  0.518718  0.001\n45200  45200  0.483341  0.001\n45220  45220  0.682884  0.001\n45240  45240  0.62337   0.001\n45260  45260  0.370925  0.001\n45280  45280  0.796818  0.001\n45300  45300  0.930186  0.001\n45320  45320  0.965851  0.001\n45340  45340  0.991856  0.001\n45360  45360  1.5442    0.001\n45380  45380  0.828367  0.001\n45400  45400  0.607536  0.001\n45420  45420  0.63491   0.001\n45440  45440  0.666166  0.001\n45460  45460  1.2806    0.001\n45480  45480  0.533644  0.001\n45500  45500  1.0055    0.001\n45520  45520  0.677199  0.001\n45540  45540  0.50548   0.001\n45560  45560  0.619452  0.001\n45580  45580  0.808929  0.001\n45600  45600  0.689445  0.001\n45620  45620  0.583941  0.001\n45640  45640  0.507968  0.001\n45660  45660  0.557601  0.001\n45680  45680  0.860837  0.001\n45700  45700  0.884465  0.001\n45720  45720  0.725625  0.001\n45740  45740  0.647568  0.001\n45760  45760  0.760882  0.001\n45780  45780  0.341443  0.001\n45800  45800  0.936925  0.001\n45820  45820  0.581632  0.001\n45840  45840  0.878705  0.001\n45860  45860  1.26926   0.001\n45880  45880  0.367797  0.001\n45900  45900  1.84536   0.001\n45920  45920  0.740668  0.001\n45940  45940  0.581296  0.001\n45960  45960  1.13684   0.001\n45980  45980  0.671448  0.001\n46000  46000  0.459609  0.001\n46020  46020  0.479721  0.001\n46040  46040  0.509346  0.001\n46060  46060  1.12504   0.001\n46080  46080  1.24953   0.001\n46100  46100  0.701823  0.001\n46120  46120  1.00249   0.001\n46140  46140  0.748004  0.001\n46160  46160  1.03172   0.001\n46180  46180  0.449593  0.001\n46200  46200  0.534469  0.001\n46220  46220  0.854673  0.001\n46240  46240  0.781029  0.001\n46260  46260  1.19331   0.001\n46280  46280  0.396671  0.001\n46300  46300  1.14845   0.001\n46320  46320  0.92202   0.001\n46340  46340  0.600252  0.001\n46360  46360  1.10617   0.001\n46380  46380  1.48007  
 0.001\n46400  46400  1.39648   0.001\n46420  46420  0.688031  0.001\n46440  46440  1.25069   0.001\n46460  46460  0.876101  0.001\n46480  46480  0.855213  0.001\n46500  46500  0.471372  0.001\n46520  46520  0.776391  0.001\n46540  46540  1.33677   0.001\n46560  46560  0.451933  0.001\n46580  46580  0.63046   0.001\n46600  46600  0.627762  0.001\n46620  46620  0.496392  0.001\n46640  46640  1.49679   0.001\n46660  46660  1.08868   0.001\n46680  46680  0.677293  0.001\n46700  46700  0.739809  0.001\n46720  46720  0.601225  0.001\n46740  46740  0.351947  0.001\n46760  46760  0.817795  0.001\n46780  46780  1.29778   0.001\n46800  46800  0.685495  0.001\n46820  46820  0.489009  0.001\n46840  46840  0.616454  0.001\n46860  46860  0.422594  0.001\n46880  46880  1.42628   0.001\n46900  46900  0.764769  0.001\n46920  46920  0.517774  0.001\n46940  46940  1.12355   0.001\n46960  46960  1.11886   0.001\n46980  46980  0.710938  0.001\n47000  47000  0.717767  0.001\n47020  47020  0.692588  0.001\n47040  47040  0.498823  0.001\n47060  47060  0.61905   0.001\n47080  47080  0.422696  0.001\n47100  47100  0.474993  0.001\n47120  47120  0.41801   0.001\n47140  47140  0.615135  0.001\n47160  47160  0.397047  0.001\n47180  47180  0.952498  0.001\n47200  47200  1.45176   0.001\n47220  47220  0.858608  0.001\n47240  47240  0.708632  0.001\n47260  47260  1.56257   0.001\n47280  47280  1.42036   0.001\n47300  47300  0.813882  0.001\n47320  47320  0.844824  0.001\n47340  47340  0.396293  0.001\n47360  47360  1.45347   0.001\n47380  47380  0.323345  0.001\n47400  47400  0.692427  0.001\n47420  47420  1.38752   0.001\n47440  47440  1.44209   0.001\n47460  47460  0.470845  0.001\n47480  47480  0.531955  0.001\n47500  47500  0.681434  0.001\n47520  47520  0.826021  0.001\n47540  47540  0.896008  0.001\n47560  47560  0.556228  0.001\n47580  47580  0.499801  0.001\n47600  47600  0.842811  0.001\n47620  47620  1.21623   0.001\n47640  47640  1.4317    0.001\n47660  47660  0.835709  0.001\n47680  
47680  0.598758  0.001\n47700  47700  1.08177   0.001\n47720  47720  0.463685  0.001\n47740  47740  0.411661  0.001\n47760  47760  1.00899   0.001\n47780  47780  0.359563  0.001\n47800  47800  2.31693   0.001\n47820  47820  0.749411  0.001\n47840  47840  0.552146  0.001\n47860  47860  0.574775  0.001\n47880  47880  0.698577  0.001\n47900  47900  0.563555  0.001\n47920  47920  0.95211   0.001\n47940  47940  0.886075  0.001\n47960  47960  0.575737  0.001\n47980  47980  0.672707  0.001\n48000  48000  0.278349  0.001\n48020  48020  1.40832   0.001\n48040  48040  0.513961  0.001\n48060  48060  0.642862  0.001\n48080  48080  0.463674  0.001\n48100  48100  0.631148  0.001\n48120  48120  0.812629  0.001\n48140  48140  0.501545  0.001\n48160  48160  0.825437  0.001\n48180  48180  0.746518  0.001\n48200  48200  0.631352  0.001\n48220  48220  0.528805  0.001\n48240  48240  0.576407  0.001\n48260  48260  0.362578  0.001\n48280  48280  1.10123   0.001\n48300  48300  0.406677  0.001\n48320  48320  0.45008   0.001\n48340  48340  0.69023   0.001\n48360  48360  0.952128  0.001\n48380  48380  0.902751  0.001\n48400  48400  0.815883  0.001\n48420  48420  0.657676  0.001\n48440  48440  0.418704  0.001\n48460  48460  1.30205   0.001\n48480  48480  1.06987   0.001\n48500  48500  1.21617   0.001\n48520  48520  0.821435  0.001\n48540  48540  0.803703  0.001\n48560  48560  0.386637  0.001\n48580  48580  0.706681  0.001\n48600  48600  0.645552  0.001\n48620  48620  0.838884  0.001\n48640  48640  0.772661  0.001\n48660  48660  0.596179  0.001\n48680  48680  0.540104  0.001\n48700  48700  0.436699  0.001\n48720  48720  0.52336   0.001\n48740  48740  0.343356  0.001\n48760  48760  0.62391   0.001\n48780  48780  2.0203    0.001\n48800  48800  0.466877  0.001\n48820  48820  2.88763   0.001\n48840  48840  0.8549    0.001\n48860  48860  1.06943   0.001\n48880  48880  0.478089  0.001\n48900  48900  0.641888  0.001\n48920  48920  0.479278  0.001\n48940  48940  0.919071  0.001\n48960  48960  0.342259 
 0.001\n48980  48980  0.732875  0.001\n49000  49000  0.52358   0.001\n49020  49020  1.04653   0.001\n49040  49040  0.962165  0.001\n49060  49060  0.798519  0.001\n49080  49080  0.538084  0.001\n49100  49100  0.527552  0.001\n49120  49120  0.525689  0.001\n49140  49140  0.41212   0.001\n49160  49160  1.18267   0.001\n49180  49180  0.633124  0.001\n49200  49200  0.388937  0.001\n49220  49220  0.810662  0.001\n49240  49240  0.622164  0.001\n49260  49260  0.722587  0.001\n49280  49280  1.08137   0.001\n49300  49300  0.808135  0.001\n49320  49320  0.577541  0.001\n49340  49340  1.22163   0.001\n49360  49360  0.56664   0.001\n49380  49380  0.873094  0.001\n49400  49400  0.476162  0.001\n49420  49420  1.39639   0.001\n49440  49440  1.32866   0.001\n49460  49460  0.737552  0.001\n49480  49480  1.04446   0.001\n49500  49500  1.19299   0.001\n49520  49520  1.23378   0.001\n49540  49540  0.82303   0.001\n49560  49560  0.470118  0.001\n49580  49580  0.550743  0.001\n49600  49600  0.352575  0.001\n49620  49620  0.719975  0.001\n49640  49640  0.491905  0.001\n49660  49660  0.453036  0.001\n49680  49680  0.637033  0.001\n49700  49700  0.538064  0.001\n49720  49720  1.16727   0.001\n49740  49740  0.586379  0.001\n49760  49760  0.847988  0.001\n49780  49780  2.3754    0.001\n49800  49800  0.610487  0.001\n49820  49820  1.29302   0.001\n49840  49840  0.692093  0.001\n49860  49860  0.821767  0.001\n49880  49880  0.476261  0.001\n49900  49900  0.678294  0.001\n49920  49920  0.592872  0.001\n49940  49940  0.897512  0.001\n49960  49960  1.20181   0.001\n49980  49980  0.527493  0.001\n50000  50000  0.643427  0.0001\n50020  50020  0.585818  0.0001\n50040  50040  0.645361  0.0001\n50060  50060  0.586163  0.0001\n50080  50080  1.60299   0.0001\n50100  50100  0.370174  0.0001\n50120  50120  0.477357  0.0001\n50140  50140  1.07096   0.0001\n50160  50160  0.587131  0.0001\n50180  50180  0.520924  0.0001\n50200  50200  2.03636   0.0001\n50220  50220  0.65289   0.0001\n50240  50240  0.64644   
0.0001\n50260  50260  1.41545   0.0001\n50280  50280  1.14142   0.0001\n50300  50300  0.403139  0.0001\n50320  50320  1.7419    0.0001\n50340  50340  0.553751  0.0001\n50360  50360  1.44839   0.0001\n50380  50380  0.55957   0.0001\n50400  50400  0.6034    0.0001\n50420  50420  0.900857  0.0001\n50440  50440  0.843056  0.0001\n50460  50460  0.603903  0.0001\n50480  50480  0.467904  0.0001\n50500  50500  0.607006  0.0001\n50520  50520  0.794146  0.0001\n50540  50540  0.652337  0.0001\n50560  50560  0.720763  0.0001\n50580  50580  0.296103  0.0001\n50600  50600  0.364484  0.0001\n50620  50620  0.686818  0.0001\n50640  50640  0.801383  0.0001\n50660  50660  0.271462  0.0001\n50680  50680  0.656085  0.0001\n50700  50700  0.487788  0.0001\n50720  50720  2.21751   0.0001\n50740  50740  0.468588  0.0001\n50760  50760  0.406244  0.0001\n50780  50780  0.77135   0.0001\n50800  50800  0.441346  0.0001\n50820  50820  0.4268    0.0001\n50840  50840  1.03894   0.0001\n50860  50860  0.525293  0.0001\n50880  50880  0.48679   0.0001\n50900  50900  0.879554  0.0001\n50920  50920  0.479229  0.0001\n50940  50940  1.02281   0.0001\n50960  50960  0.87624   0.0001\n50980  50980  0.381245  0.0001\n51000  51000  2.34671   0.0001\n51020  51020  0.445282  0.0001\n51040  51040  0.454417  0.0001\n51060  51060  0.660517  0.0001\n51080  51080  0.49115   0.0001\n51100  51100  0.495784  0.0001\n51120  51120  0.276959  0.0001\n51140  51140  0.433521  0.0001\n51160  51160  0.417945  0.0001\n51180  51180  0.513949  0.0001\n51200  51200  0.667816  0.0001\n51220  51220  1.0873    0.0001\n51240  51240  0.732888  0.0001\n51260  51260  0.746584  0.0001\n51280  51280  0.487029  0.0001\n51300  51300  0.455993  0.0001\n51320  51320  0.538867  0.0001\n51340  51340  0.64651   0.0001\n51360  51360  1.80272   0.0001\n51380  51380  0.705611  0.0001\n51400  51400  0.795971  0.0001\n51420  51420  1.10622   0.0001\n51440  51440  0.62422   0.0001\n51460  51460  0.727866  0.0001\n51480  51480  0.545285  0.0001\n51500  
51500  0.897443  0.0001\n51520  51520  1.08099   0.0001\n51540  51540  0.508     0.0001\n51560  51560  0.508799  0.0001\n51580  51580  0.77304   0.0001\n51600  51600  0.477105  0.0001\n51620  51620  0.482003  0.0001\n51640  51640  0.790227  0.0001\n51660  51660  0.691533  0.0001\n51680  51680  0.510888  0.0001\n51700  51700  1.20202   0.0001\n51720  51720  0.672837  0.0001\n51740  51740  0.5616    0.0001\n51760  51760  1.36997   0.0001\n51780  51780  0.571037  0.0001\n51800  51800  1.45917   0.0001\n51820  51820  0.427706  0.0001\n51840  51840  0.954814  0.0001\n51860  51860  0.50824   0.0001\n51880  51880  0.840693  0.0001\n51900  51900  0.848435  0.0001\n51920  51920  0.587292  0.0001\n51940  51940  1.16449   0.0001\n51960  51960  0.645547  0.0001\n51980  51980  0.655865  0.0001\n52000  52000  0.518023  0.0001\n52020  52020  0.889675  0.0001\n52040  52040  0.454978  0.0001\n52060  52060  2.35527   0.0001\n52080  52080  0.574772  0.0001\n52100  52100  1.77533   0.0001\n52120  52120  0.818294  0.0001\n52140  52140  0.453546  0.0001\n52160  52160  0.572732  0.0001\n52180  52180  0.834441  0.0001\n52200  52200  0.48405   0.0001\n52220  52220  1.19631   0.0001\n52240  52240  0.675517  0.0001\n52260  52260  0.856432  0.0001\n52280  52280  0.398293  0.0001\n52300  52300  0.792633  0.0001\n52320  52320  0.427468  0.0001\n52340  52340  0.548463  0.0001\n52360  52360  0.743522  0.0001\n52380  52380  0.603737  0.0001\n52400  52400  1.34908   0.0001\n52420  52420  0.62347   0.0001\n52440  52440  0.320369  0.0001\n52460  52460  0.530674  0.0001\n52480  52480  0.440615  0.0001\n52500  52500  0.38863   0.0001\n52520  52520  0.748163  0.0001\n52540  52540  0.325406  0.0001\n52560  52560  1.23591   0.0001\n52580  52580  1.30554   0.0001\n52600  52600  0.559962  0.0001\n52620  52620  0.583426  0.0001\n52640  52640  1.05086   0.0001\n52660  52660  0.714031  0.0001\n52680  52680  0.335811  0.0001\n52700  52700  0.587251  0.0001\n52720  52720  1.67784   0.0001\n52740  52740  0.480725 
 0.0001\n52760  52760  0.858547  0.0001\n52780  52780  0.496416  0.0001\n52800  52800  0.400292  0.0001\n52820  52820  0.518278  0.0001\n52840  52840  0.70346   0.0001\n52860  52860  0.784085  0.0001\n52880  52880  1.13204   0.0001\n52900  52900  1.23516   0.0001\n52920  52920  0.578228  0.0001\n52940  52940  0.326859  0.0001\n52960  52960  0.771738  0.0001\n52980  52980  1.60701   0.0001\n53000  53000  1.02713   0.0001\n53020  53020  0.816956  0.0001\n53040  53040  1.23219   0.0001\n53060  53060  0.496874  0.0001\n53080  53080  0.419995  0.0001\n53100  53100  0.793203  0.0001\n53120  53120  1.45289   0.0001\n53140  53140  1.03339   0.0001\n53160  53160  0.729653  0.0001\n53180  53180  1.07835   0.0001\n53200  53200  0.558693  0.0001\n53220  53220  0.380722  0.0001\n53240  53240  0.802064  0.0001\n53260  53260  0.557112  0.0001\n53280  53280  0.580718  0.0001\n53300  53300  0.606991  0.0001\n53320  53320  0.895851  0.0001\n53340  53340  0.847268  0.0001\n53360  53360  0.404241  0.0001\n53380  53380  0.420582  0.0001\n53400  53400  0.489183  0.0001\n53420  53420  0.342269  0.0001\n53440  53440  0.67263   0.0001\n53460  53460  0.657535  0.0001\n53480  53480  0.661507  0.0001\n53500  53500  0.891583  0.0001\n53520  53520  0.761629  0.0001\n53540  53540  0.846531  0.0001\n53560  53560  0.611496  0.0001\n53580  53580  0.839948  0.0001\n53600  53600  0.557016  0.0001\n53620  53620  1.6325    0.0001\n53640  53640  0.511466  0.0001\n53660  53660  0.693323  0.0001\n53680  53680  1.11413   0.0001\n53700  53700  0.427344  0.0001\n53720  53720  0.570549  0.0001\n53740  53740  0.450672  0.0001\n53760  53760  1.35961   0.0001\n53780  53780  0.350814  0.0001\n53800  53800  0.926349  0.0001\n53820  53820  1.29048   0.0001\n53840  53840  0.715191  0.0001\n53860  53860  1.12795   0.0001\n53880  53880  0.828362  0.0001\n53900  53900  0.645774  0.0001\n53920  53920  0.609613  0.0001\n53940  53940  0.424472  0.0001\n53960  53960  1.45596   0.0001\n53980  53980  0.343663  0.0001\n54000  
54000  2.33722   0.0001\n54020  54020  0.745768  0.0001\n54040  54040  0.402387  0.0001\n54060  54060  1.14569   0.0001\n54080  54080  1.15184   0.0001\n54100  54100  0.758779  0.0001\n54120  54120  0.95675   0.0001\n54140  54140  0.78381   0.0001\n54160  54160  0.523225  0.0001\n54180  54180  0.338072  0.0001\n54200  54200  0.551858  0.0001\n54220  54220  0.480167  0.0001\n54240  54240  0.410746  0.0001\n54260  54260  0.795358  0.0001\n54280  54280  0.533689  0.0001\n54300  54300  0.386796  0.0001\n54320  54320  0.451477  0.0001\n54340  54340  0.345623  0.0001\n54360  54360  0.687128  0.0001\n54380  54380  0.433122  0.0001\n54400  54400  1.36373   0.0001\n54420  54420  0.378736  0.0001\n54440  54440  0.620309  0.0001\n54460  54460  0.694647  0.0001\n54480  54480  0.491471  0.0001\n54500  54500  0.420653  0.0001\n54520  54520  0.60143   0.0001\n54540  54540  0.90808   0.0001\n54560  54560  0.594516  0.0001\n54580  54580  1.01569   0.0001\n54600  54600  0.351203  0.0001\n54620  54620  0.315138  0.0001\n54640  54640  0.543589  0.0001\n54660  54660  1.03626   0.0001\n54680  54680  0.872152  0.0001\n54700  54700  0.735993  0.0001\n54720  54720  0.696707  0.0001\n54740  54740  0.422805  0.0001\n54760  54760  0.761203  0.0001\n54780  54780  1.41648   0.0001\n54800  54800  0.386731  0.0001\n54820  54820  0.441401  0.0001\n54840  54840  0.643351  0.0001\n54860  54860  0.586443  0.0001\n54880  54880  0.656941  0.0001\n54900  54900  0.421341  0.0001\n54920  54920  0.641456  0.0001\n54940  54940  0.526636  0.0001\n54960  54960  0.606234  0.0001\n54980  54980  0.527734  0.0001\n55000  55000  0.563788  0.0001\n55020  55020  0.351084  0.0001\n55040  55040  0.910498  0.0001\n55060  55060  2.0928    0.0001\n55080  55080  0.994897  0.0001\n55100  55100  1.95334   0.0001\n55120  55120  0.694918  0.0001\n55140  55140  0.514558  0.0001\n55160  55160  0.371264  0.0001\n55180  55180  1.15932   0.0001\n55200  55200  1.32164   0.0001\n55220  55220  1.40124   0.0001\n55240  55240  0.514953 
 0.0001\n55260  55260  0.570959  0.0001\n55280  55280  0.835674  0.0001\n55300  55300  1.42259   0.0001\n55320  55320  0.678293  0.0001\n55340  55340  0.584424  0.0001\n55360  55360  0.450526  0.0001\n55380  55380  0.580899  0.0001\n55400  55400  2.35528   0.0001\n55420  55420  0.773917  0.0001\n55440  55440  1.5269    0.0001\n55460  55460  0.549344  0.0001\n55480  55480  0.478706  0.0001\n55500  55500  0.851856  0.0001\n55520  55520  1.10676   0.0001\n55540  55540  0.266639  0.0001\n55560  55560  0.594864  0.0001\n55580  55580  0.631642  0.0001\n55600  55600  0.449637  0.0001\n55620  55620  0.715683  0.0001\n55640  55640  0.749489  0.0001\n55660  55660  0.459562  0.0001\n55680  55680  0.374254  0.0001\n55700  55700  0.500605  0.0001\n55720  55720  1.10086   0.0001\n55740  55740  0.799651  0.0001\n55760  55760  0.444022  0.0001\n55780  55780  0.571333  0.0001\n55800  55800  0.289083  0.0001\n55820  55820  0.783939  0.0001\n55840  55840  0.560103  0.0001\n55860  55860  0.541308  0.0001\n55880  55880  0.638834  0.0001\n55900  55900  0.940561  0.0001\n55920  55920  1.19321   0.0001\n55940  55940  1.06638   0.0001\n55960  55960  1.05654   0.0001\n55980  55980  0.574119  0.0001\n56000  56000  0.689392  0.0001\n56020  56020  0.409243  0.0001\n56040  56040  0.396896  0.0001\n56060  56060  0.478466  0.0001\n56080  56080  0.448013  0.0001\n56100  56100  0.48468   0.0001\n56120  56120  0.401463  0.0001\n56140  56140  1.02218   0.0001\n56160  56160  0.661352  0.0001\n56180  56180  1.15914   0.0001\n56200  56200  1.28975   0.0001\n56220  56220  0.624821  0.0001\n56240  56240  0.970652  0.0001\n56260  56260  0.35021   0.0001\n56280  56280  0.530663  0.0001\n56300  56300  0.606862  0.0001\n56320  56320  0.574689  0.0001\n56340  56340  1.21265   0.0001\n56360  56360  0.495983  0.0001\n56380  56380  1.05263   0.0001\n56400  56400  0.383948  0.0001\n56420  56420  0.498927  0.0001\n56440  56440  0.599513  0.0001\n56460  56460  0.568729  0.0001\n56480  56480  0.366769  0.0001\n56500  
56500  0.422211  0.0001\n56520  56520  0.684594  0.0001\n56540  56540  0.627359  0.0001\n56560  56560  0.864466  0.0001\n56580  56580  0.423677  0.0001\n56600  56600  0.296922  0.0001\n56620  56620  1.12356   0.0001\n56640  56640  0.397472  0.0001\n56660  56660  1.38308   0.0001\n56680  56680  0.578594  0.0001\n56700  56700  0.455488  0.0001\n56720  56720  1.06634   0.0001\n56740  56740  0.618949  0.0001\n56760  56760  0.389685  0.0001\n56780  56780  0.459333  0.0001\n56800  56800  1.93523   0.0001\n56820  56820  0.573409  0.0001\n56840  56840  1.06563   0.0001\n56860  56860  1.31006   0.0001\n56880  56880  0.945827  0.0001\n56900  56900  0.422176  0.0001\n56920  56920  0.710036  0.0001\n56940  56940  0.422233  0.0001\n56960  56960  0.583141  0.0001\n56980  56980  0.439476  0.0001\n57000  57000  0.715341  0.0001\n57020  57020  0.537377  0.0001\n57040  57040  1.53162   0.0001\n57060  57060  0.886898  0.0001\n57080  57080  0.64763   0.0001\n57100  57100  0.637599  0.0001\n57120  57120  0.603207  0.0001\n57140  57140  0.336468  0.0001\n57160  57160  0.836009  0.0001\n57180  57180  0.52012   0.0001\n57200  57200  0.534119  0.0001\n57220  57220  1.16836   0.0001\n57240  57240  0.336923  0.0001\n57260  57260  0.400692  0.0001\n57280  57280  0.74416   0.0001\n57300  57300  0.958697  0.0001\n57320  57320  0.74055   0.0001\n57340  57340  0.671573  0.0001\n57360  57360  0.525998  0.0001\n57380  57380  0.452167  0.0001\n57400  57400  0.677438  0.0001\n57420  57420  0.312634  0.0001\n57440  57440  0.454507  0.0001\n57460  57460  0.337646  0.0001\n57480  57480  0.443488  0.0001\n57500  57500  0.686595  0.0001\n57520  57520  1.34115   0.0001\n57540  57540  0.321715  0.0001\n57560  57560  0.527601  0.0001\n57580  57580  0.651231  0.0001\n57600  57600  0.894922  0.0001\n57620  57620  1.94942   0.0001\n57640  57640  1.2817    0.0001\n57660  57660  0.869281  0.0001\n57680  57680  0.380101  0.0001\n57700  57700  0.671506  0.0001\n57720  57720  0.859718  0.0001\n57740  57740  0.719961 
 0.0001\n57760  57760  0.494298  0.0001\n57780  57780  0.555895  0.0001\n57800  57800  0.614223  0.0001\n57820  57820  0.362962  0.0001\n57840  57840  0.696203  0.0001\n57860  57860  1.47462   0.0001\n57880  57880  1.13368   0.0001\n57900  57900  1.31381   0.0001\n57920  57920  0.846419  0.0001\n57940  57940  0.34474   0.0001\n57960  57960  0.417143  0.0001\n57980  57980  0.504269  0.0001\n58000  58000  1.54866   0.0001\n58020  58020  0.543099  0.0001\n58040  58040  0.698219  0.0001\n58060  58060  0.517711  0.0001\n58080  58080  0.461292  0.0001\n58100  58100  0.599169  0.0001\n58120  58120  0.700206  0.0001\n58140  58140  0.590546  0.0001\n58160  58160  0.502028  0.0001\n58180  58180  0.481025  0.0001\n58200  58200  0.774812  0.0001\n58220  58220  0.626638  0.0001\n58240  58240  0.254238  0.0001\n58260  58260  0.245879  0.0001\n58280  58280  0.552278  0.0001\n58300  58300  0.781849  0.0001\n58320  58320  0.536397  0.0001\n58340  58340  0.365349  0.0001\n58360  58360  0.705456  0.0001\n58380  58380  0.570602  0.0001\n58400  58400  0.812327  0.0001\n58420  58420  1.02667   0.0001\n58440  58440  0.639346  0.0001\n58460  58460  0.840141  0.0001\n58480  58480  1.23776   0.0001\n58500  58500  0.47621   0.0001\n58520  58520  0.776441  0.0001\n58540  58540  0.393135  0.0001\n58560  58560  0.68851   0.0001\n58580  58580  0.390157  0.0001\n58600  58600  1.92556   0.0001\n58620  58620  0.962099  0.0001\n58640  58640  0.784063  0.0001\n58660  58660  0.572585  0.0001\n58680  58680  0.398924  0.0001\n58700  58700  0.867519  0.0001\n58720  58720  1.83721   0.0001\n58740  58740  0.363187  0.0001\n58760  58760  0.364676  0.0001\n58780  58780  0.357994  0.0001\n58800  58800  0.819639  0.0001\n58820  58820  0.447397  0.0001\n58840  58840  1.90512   0.0001\n58860  58860  0.539304  0.0001\n58880  58880  0.427239  0.0001\n58900  58900  0.570796  0.0001\n58920  58920  0.434752  0.0001\n58940  58940  1.21845   0.0001\n58960  58960  0.298676  0.0001\n58980  58980  0.383399  0.0001\n59000  
59000  1.41928   0.0001\n59020  59020  0.454975  0.0001\n59040  59040  0.946448  0.0001\n59060  59060  0.69078   0.0001\n59080  59080  0.735916  0.0001\n59100  59100  1.27125   0.0001\n59120  59120  0.394117  0.0001\n59140  59140  0.846474  0.0001\n59160  59160  0.385881  0.0001\n59180  59180  0.376532  0.0001\n59200  59200  0.452079  0.0001\n59220  59220  0.923541  0.0001\n59240  59240  0.812781  0.0001\n59260  59260  1.70171   0.0001\n59280  59280  0.517601  0.0001\n59300  59300  0.403206  0.0001\n59320  59320  0.570109  0.0001\n59340  59340  0.878756  0.0001\n59360  59360  0.566372  0.0001\n59380  59380  0.608771  0.0001\n59400  59400  0.908304  0.0001\n59420  59420  0.477925  0.0001\n59440  59440  0.745231  0.0001\n59460  59460  0.562392  0.0001\n59480  59480  0.546397  0.0001\n59500  59500  0.428669  0.0001\n59520  59520  1.46064   0.0001\n59540  59540  0.609462  0.0001\n59560  59560  0.406907  0.0001\n59580  59580  0.746719  0.0001\n59600  59600  1.68147   0.0001\n59620  59620  1.21147   0.0001\n59640  59640  0.913426  0.0001\n59660  59660  0.612883  0.0001\n59680  59680  0.687409  0.0001\n59700  59700  0.770078  0.0001\n59720  59720  0.667224  0.0001\n59740  59740  1.17975   0.0001\n59760  59760  1.47233   0.0001\n59780  59780  0.406887  0.0001\n59800  59800  0.370209  0.0001\n59820  59820  0.574555  0.0001\n59840  59840  1.22843   0.0001\n59860  59860  0.757993  0.0001\n59880  59880  0.959708  0.0001\n59900  59900  0.802387  0.0001\n59920  59920  1.22295   0.0001\n59940  59940  0.584522  0.0001\n59960  59960  0.300696  0.0001\n59980  59980  0.513673  0.0001\n60000  60000  0.481426  0.0001\n60020  60020  0.851943  0.0001\n60040  60040  0.446686  0.0001\n60060  60060  0.308349  0.0001\n60080  60080  0.544951  0.0001\n60100  60100  0.805924  0.0001\n60120  60120  0.874742  0.0001\n60140  60140  0.487625  0.0001\n60160  60160  0.398404  0.0001\n60180  60180  0.61212   0.0001\n60200  60200  0.802998  0.0001\n60220  60220  0.408418  0.0001\n60240  60240  0.831808 
 0.0001\n60260  60260  0.459766  0.0001\n60280  60280  0.340484  0.0001\n60300  60300  0.641803  0.0001\n60320  60320  0.70876   0.0001\n60340  60340  0.348114  0.0001\n60360  60360  0.945011  0.0001\n60380  60380  1.11581   0.0001\n60400  60400  0.456951  0.0001\n60420  60420  0.57553   0.0001\n60440  60440  0.37631   0.0001\n60460  60460  0.274685  0.0001\n60480  60480  1.58003   0.0001\n60500  60500  0.255636  0.0001\n60520  60520  0.601578  0.0001\n60540  60540  0.891706  0.0001\n60560  60560  0.347621  0.0001\n60580  60580  0.514408  0.0001\n60600  60600  0.514978  0.0001\n60620  60620  1.14777   0.0001\n60640  60640  1.09483   0.0001\n60660  60660  1.11833   0.0001\n60680  60680  0.295458  0.0001\n60700  60700  0.548524  0.0001\n60720  60720  0.55889   0.0001\n60740  60740  0.53107   0.0001\n60760  60760  0.411425  0.0001\n60780  60780  0.516902  0.0001\n60800  60800  0.833994  0.0001\n60820  60820  0.513259  0.0001\n60840  60840  0.358781  0.0001\n60860  60860  0.454725  0.0001\n60880  60880  0.556747  0.0001\n60900  60900  0.596154  0.0001\n60920  60920  0.316343  0.0001\n60940  60940  0.544553  0.0001\n60960  60960  1.55932   0.0001\n60980  60980  1.37985   0.0001\n61000  61000  1.96358   0.0001\n61020  61020  0.815056  0.0001\n61040  61040  0.2876    0.0001\n61060  61060  1.04679   0.0001\n61080  61080  0.423071  0.0001\n61100  61100  0.750674  0.0001\n61120  61120  0.464384  0.0001\n61140  61140  0.63729   0.0001\n61160  61160  0.467855  0.0001\n61180  61180  0.694145  0.0001\n61200  61200  0.534302  0.0001\n61220  61220  0.823501  0.0001\n61240  61240  1.13661   0.0001\n61260  61260  1.13715   0.0001\n61280  61280  0.299687  0.0001\n61300  61300  0.890007  0.0001\n61320  61320  0.514982  0.0001\n61340  61340  0.723846  0.0001\n61360  61360  1.93251   0.0001\n61380  61380  1.52956   0.0001\n61400  61400  1.06361   0.0001\n61420  61420  0.855403  0.0001\n61440  61440  0.494362  0.0001\n61460  61460  0.53689   0.0001\n61480  61480  0.6154    0.0001\n61500  
61500  0.674029  0.0001\n61520  61520  0.927475  0.0001\n61540  61540  0.575112  0.0001\n61560  61560  0.716082  0.0001\n61580  61580  0.879626  0.0001\n61600  61600  0.415839  0.0001\n61620  61620  0.334049  0.0001\n61640  61640  1.07117   0.0001\n61660  61660  1.25386   0.0001\n61680  61680  0.86129   0.0001\n61700  61700  0.626651  0.0001\n61720  61720  0.788468  0.0001\n61740  61740  0.56004   0.0001\n61760  61760  0.352572  0.0001\n61780  61780  0.588519  0.0001\n61800  61800  0.828402  0.0001\n61820  61820  0.999444  0.0001\n61840  61840  1.19423   0.0001\n61860  61860  0.362197  0.0001\n61880  61880  0.510533  0.0001\n61900  61900  1.05317   0.0001\n61920  61920  0.766419  0.0001\n61940  61940  0.51435   0.0001\n61960  61960  0.70004   0.0001\n61980  61980  0.783107  0.0001\n62000  62000  0.561859  0.0001\n62020  62020  0.735582  0.0001\n62040  62040  0.371161  0.0001\n62060  62060  0.54886   0.0001\n62080  62080  1.49963   0.0001\n62100  62100  0.300927  0.0001\n62120  62120  1.39575   0.0001\n62140  62140  0.530703  0.0001\n62160  62160  0.90987   0.0001\n62180  62180  0.848093  0.0001\n62200  62200  0.492937  0.0001\n62220  62220  0.417185  0.0001\n62240  62240  1.34502   0.0001\n62260  62260  0.711541  0.0001\n62280  62280  0.589564  0.0001\n62300  62300  0.795859  0.0001\n62320  62320  1.2306    0.0001\n62340  62340  0.307328  0.0001\n62360  62360  0.559083  0.0001\n62380  62380  0.553382  0.0001\n62400  62400  0.455765  0.0001\n62420  62420  0.675413  0.0001\n62440  62440  0.302009  0.0001\n62460  62460  0.855816  0.0001\n62480  62480  0.295293  0.0001\n62500  62500  0.662369  0.0001\n62520  62520  0.902709  0.0001\n62540  62540  0.538485  0.0001\n62560  62560  0.566858  0.0001\n62580  62580  0.505538  0.0001\n62600  62600  0.430253  0.0001\n62620  62620  0.507296  0.0001\n62640  62640  0.954243  0.0001\n62660  62660  0.716807  0.0001\n62680  62680  0.504758  0.0001\n62700  62700  0.612225  0.0001\n62720  62720  0.67016   0.0001\n62740  62740  0.288034 
 0.0001\n62760  62760  0.889947  0.0001\n62780  62780  0.42651   0.0001\n62800  62800  0.476785  0.0001\n62820  62820  0.513603  0.0001\n62840  62840  0.664501  0.0001\n62860  62860  1.22987   0.0001\n62880  62880  0.282694  0.0001\n62900  62900  0.685677  0.0001\n62920  62920  0.789913  0.0001\n62940  62940  0.26067   0.0001\n62960  62960  0.571087  0.0001\n62980  62980  0.354639  0.0001\n63000  63000  0.882627  0.0001\n63020  63020  0.719999  0.0001\n63040  63040  0.377276  0.0001\n63060  63060  1.03327   0.0001\n63080  63080  0.939788  0.0001\n63100  63100  0.378358  0.0001\n63120  63120  0.689692  0.0001\n63140  63140  0.992225  0.0001\n63160  63160  0.813423  0.0001\n63180  63180  0.406123  0.0001\n63200  63200  0.552184  0.0001\n63220  63220  0.344882  0.0001\n63240  63240  2.0453    0.0001\n63260  63260  0.736987  0.0001\n63280  63280  1.40598   0.0001\n63300  63300  0.746706  0.0001\n63320  63320  0.37439   0.0001\n63340  63340  0.6048    0.0001\n63360  63360  0.592476  0.0001\n63380  63380  0.635773  0.0001\n63400  63400  0.520314  0.0001\n63420  63420  0.618858  0.0001\n63440  63440  0.324999  0.0001\n63460  63460  0.587912  0.0001\n63480  63480  1.56942   0.0001\n63500  63500  0.304041  0.0001\n63520  63520  0.529809  0.0001\n63540  63540  0.648661  0.0001\n63560  63560  0.317755  0.0001\n63580  63580  1.07356   0.0001\n63600  63600  0.416498  0.0001\n63620  63620  0.454578  0.0001\n63640  63640  0.579215  0.0001\n63660  63660  0.69045   0.0001\n63680  63680  0.383453  0.0001\n63700  63700  0.95517   0.0001\n63720  63720  1.23357   0.0001\n63740  63740  0.541846  0.0001\n63760  63760  1.39696   0.0001\n63780  63780  2.21171   0.0001\n63800  63800  0.383232  0.0001\n63820  63820  0.789323  0.0001\n63840  63840  0.945235  0.0001\n63860  63860  0.642269  0.0001\n63880  63880  0.916569  0.0001\n63900  63900  0.687144  0.0001\n63920  63920  0.801545  0.0001\n63940  63940  1.05429   0.0001\n63960  63960  1.1887    0.0001\n63980  63980  0.586213  0.0001\n64000  
64000  0.487072  0.0001\n64020  64020  0.648007  0.0001\n64040  64040  0.853169  0.0001\n64060  64060  0.538091  0.0001\n64080  64080  1.04903   0.0001\n64100  64100  0.544563  0.0001\n64120  64120  0.632519  0.0001\n64140  64140  1.38071   0.0001\n64160  64160  0.428128  0.0001\n64180  64180  0.359259  0.0001\n64200  64200  1.57352   0.0001\n64220  64220  0.800158  0.0001\n64240  64240  0.776462  0.0001\n64260  64260  0.41224   0.0001\n64280  64280  0.526566  0.0001\n64300  64300  0.860942  0.0001\n64320  64320  0.567663  0.0001\n64340  64340  0.301388  0.0001\n64360  64360  0.393723  0.0001\n64380  64380  0.607327  0.0001\n64400  64400  0.288386  0.0001\n64420  64420  0.814271  0.0001\n64440  64440  0.364419  0.0001\n64460  64460  0.868461  0.0001\n64480  64480  0.584363  0.0001\n64500  64500  0.344351  0.0001\n64520  64520  1.99058   0.0001\n64540  64540  0.423143  0.0001\n64560  64560  0.499644  0.0001\n64580  64580  0.61275   0.0001\n64600  64600  1.06442   0.0001\n64620  64620  0.383155  0.0001\n64640  64640  0.571834  0.0001\n64660  64660  0.460122  0.0001\n64680  64680  0.33527   0.0001\n64700  64700  0.578318  0.0001\n64720  64720  0.387644  0.0001\n64740  64740  0.736106  0.0001\n64760  64760  0.34214   0.0001\n64780  64780  0.541854  0.0001\n64800  64800  0.436245  0.0001\n64820  64820  0.363269  0.0001\n64840  64840  0.64405   0.0001\n64860  64860  0.54705   0.0001\n64880  64880  0.511721  0.0001\n64900  64900  0.81804   0.0001\n64920  64920  0.36262   0.0001\n64940  64940  0.546473  0.0001\n64960  64960  0.432286  0.0001\n64980  64980  0.539829  0.0001\n65000  65000  0.987935  0.0001\n65020  65020  1.49243   0.0001\n65040  65040  0.71034   0.0001\n65060  65060  0.629679  0.0001\n65080  65080  0.394128  0.0001\n65100  65100  0.639206  0.0001\n65120  65120  0.772904  0.0001\n65140  65140  0.778609  0.0001\n65160  65160  0.661418  0.0001\n65180  65180  0.713925  0.0001\n65200  65200  0.764025  0.0001\n65220  65220  0.370281  0.0001\n65240  65240  0.50893  
 0.0001\n65260  65260  0.577531  0.0001\n65280  65280  1.48525   0.0001\n65300  65300  1.00062   0.0001\n65320  65320  0.435187  0.0001\n65340  65340  0.535923  0.0001\n65360  65360  1.23285   0.0001\n65380  65380  0.491274  0.0001\n65400  65400  0.562317  0.0001\n65420  65420  0.233591  0.0001\n65440  65440  0.538849  0.0001\n65460  65460  0.753984  0.0001\n65480  65480  0.744492  0.0001\n65500  65500  1.26312   0.0001\n65520  65520  0.710398  0.0001\n65540  65540  0.586694  0.0001\n65560  65560  1.57808   0.0001\n65580  65580  0.534068  0.0001\n65600  65600  0.563891  0.0001\n65620  65620  0.975654  0.0001\n65640  65640  1.31192   0.0001\n65660  65660  0.690893  0.0001\n65680  65680  0.609988  0.0001\n65700  65700  0.472461  0.0001\n65720  65720  0.745005  0.0001\n65740  65740  0.61806   0.0001\n65760  65760  0.592528  0.0001\n65780  65780  0.804112  0.0001\n65800  65800  1.29777   0.0001\n65820  65820  0.603329  0.0001\n65840  65840  0.925577  0.0001\n65860  65860  1.06041   0.0001\n65880  65880  1.18392   0.0001\n65900  65900  0.761362  0.0001\n65920  65920  0.769014  0.0001\n65940  65940  0.499187  0.0001\n65960  65960  0.90791   0.0001\n65980  65980  0.476195  0.0001\n66000  66000  0.349473  0.0001\n66020  66020  0.31635   0.0001\n66040  66040  0.899018  0.0001\n66060  66060  1.13529   0.0001\n66080  66080  1.10218   0.0001\n66100  66100  0.553118  0.0001\n66120  66120  0.775838  0.0001\n66140  66140  0.81266   0.0001\n66160  66160  0.472929  0.0001\n66180  66180  0.368972  0.0001\n66200  66200  0.85026   0.0001\n66220  66220  0.500848  0.0001\n66240  66240  0.698977  0.0001\n66260  66260  0.537885  0.0001\n66280  66280  0.593115  0.0001\n66300  66300  0.612196  0.0001\n66320  66320  0.439175  0.0001\n66340  66340  0.391013  0.0001\n66360  66360  0.474952  0.0001\n66380  66380  1.00655   0.0001\n66400  66400  0.72802   0.0001\n66420  66420  0.575838  0.0001\n66440  66440  0.427779  0.0001\n66460  66460  0.512249  0.0001\n66480  66480  0.283217  0.0001\n66500  
66500  2.17499   0.0001\n66520  66520  0.502987  0.0001\n66540  66540  0.602814  0.0001\n66560  66560  0.620885  0.0001\n66580  66580  0.609417  0.0001\n66600  66600  0.841884  0.0001\n66620  66620  1.19392   0.0001\n66640  66640  0.513087  0.0001\n66660  66660  0.359508  0.0001\n66680  66680  0.702253  0.0001\n66700  66700  0.933642  0.0001\n66720  66720  0.305684  0.0001\n66740  66740  0.294166  0.0001\n66760  66760  0.688082  0.0001\n66780  66780  1.26444   0.0001\n66800  66800  0.647359  0.0001\n66820  66820  0.436225  0.0001\n66840  66840  0.369318  0.0001\n66860  66860  0.837013  0.0001\n66880  66880  0.847614  0.0001\n66900  66900  1.3298    0.0001\n66920  66920  0.324261  0.0001\n66940  66940  0.917423  0.0001\n66960  66960  1.06097   0.0001\n66980  66980  0.31594   0.0001\n67000  67000  0.66853   0.0001\n67020  67020  0.480968  0.0001\n67040  67040  0.43586   0.0001\n67060  67060  0.373421  0.0001\n67080  67080  0.324156  0.0001\n67100  67100  0.469973  0.0001\n67120  67120  0.390596  0.0001\n67140  67140  0.448364  0.0001\n67160  67160  0.316304  0.0001\n67180  67180  0.8772    0.0001\n67200  67200  0.59561   0.0001\n67220  67220  1.05158   0.0001\n67240  67240  1.00235   0.0001\n67260  67260  0.260897  0.0001\n67280  67280  0.323992  0.0001\n67300  67300  0.424032  0.0001\n67320  67320  0.503287  0.0001\n67340  67340  0.908929  0.0001\n67360  67360  0.459373  0.0001\n67380  67380  0.46906   0.0001\n67400  67400  0.236954  0.0001\n67420  67420  0.555155  0.0001\n67440  67440  1.35148   0.0001\n67460  67460  0.491129  0.0001\n67480  67480  0.48457   0.0001\n67500  67500  0.462879  0.0001\n67520  67520  0.954283  0.0001\n67540  67540  2.21661   0.0001\n67560  67560  0.489673  0.0001\n67580  67580  1.39927   0.0001\n67600  67600  0.499101  0.0001\n67620  67620  0.492356  0.0001\n67640  67640  0.585931  0.0001\n67660  67660  0.578878  0.0001\n67680  67680  0.491935  0.0001\n67700  67700  0.598608  0.0001\n67720  67720  0.930111  0.0001\n67740  67740  1.51976  
 0.0001\n67760  67760  0.5813    0.0001\n67780  67780  0.776146  0.0001\n67800  67800  0.467547  0.0001\n67820  67820  0.266323  0.0001\n67840  67840  0.315275  0.0001\n67860  67860  0.650739  0.0001\n67880  67880  0.352334  0.0001\n67900  67900  0.792638  0.0001\n67920  67920  0.738075  0.0001\n67940  67940  0.701281  0.0001\n67960  67960  0.932256  0.0001\n67980  67980  0.960071  0.0001\n68000  68000  0.737578  0.0001\n68020  68020  0.357205  0.0001\n68040  68040  1.13289   0.0001\n68060  68060  1.2798    0.0001\n68080  68080  1.87359   0.0001\n68100  68100  0.384838  0.0001\n68120  68120  1.49743   0.0001\n68140  68140  0.739711  0.0001\n68160  68160  0.493183  0.0001\n68180  68180  1.83464   0.0001\n68200  68200  0.463124  0.0001\n68220  68220  0.274599  0.0001\n68240  68240  0.459791  0.0001\n68260  68260  0.577589  0.0001\n68280  68280  0.521951  0.0001\n68300  68300  0.809232  0.0001\n68320  68320  0.697984  0.0001\n68340  68340  1.38616   0.0001\n68360  68360  0.245752  0.0001\n68380  68380  0.428224  0.0001\n68400  68400  1.8417    0.0001\n68420  68420  0.873774  0.0001\n68440  68440  0.270822  0.0001\n68460  68460  0.630911  0.0001\n68480  68480  0.855034  0.0001\n68500  68500  0.426458  0.0001\n68520  68520  0.455044  0.0001\n68540  68540  0.62214   0.0001\n68560  68560  0.299127  0.0001\n68580  68580  0.580903  0.0001\n68600  68600  0.364147  0.0001\n68620  68620  1.00948   0.0001\n68640  68640  0.641782  0.0001\n68660  68660  0.516987  0.0001\n68680  68680  0.445848  0.0001\n68700  68700  0.846021  0.0001\n68720  68720  1.59064   0.0001\n68740  68740  0.271512  0.0001\n68760  68760  0.731142  0.0001\n68780  68780  0.597048  0.0001\n68800  68800  0.646709  0.0001\n68820  68820  0.62867   0.0001\n68840  68840  0.761208  0.0001\n68860  68860  0.377177  0.0001\n68880  68880  0.857878  0.0001\n68900  68900  0.44852   0.0001\n68920  68920  0.340258  0.0001\n68940  68940  0.458595  0.0001\n68960  68960  0.421828  0.0001\n68980  68980  1.09855   0.0001\n69000  
69000  0.622897  0.0001\n69020  69020  0.608487  0.0001\n69040  69040  0.388441  0.0001\n69060  69060  0.551682  0.0001\n69080  69080  0.706473  0.0001\n69100  69100  0.409065  0.0001\n69120  69120  0.255218  0.0001\n69140  69140  0.47266   0.0001\n69160  69160  0.627733  0.0001\n69180  69180  0.74286   0.0001\n69200  69200  0.461658  0.0001\n69220  69220  0.476775  0.0001\n69240  69240  1.93578   0.0001\n69260  69260  0.511331  0.0001\n69280  69280  0.562836  0.0001\n69300  69300  0.862174  0.0001\n69320  69320  0.514722  0.0001\n69340  69340  0.720197  0.0001\n69360  69360  0.308208  0.0001\n69380  69380  0.455497  0.0001\n69400  69400  0.441875  0.0001\n69420  69420  0.411288  0.0001\n69440  69440  0.656567  0.0001\n69460  69460  0.465477  0.0001\n69480  69480  0.573153  0.0001\n69500  69500  0.454051  0.0001\n69520  69520  0.488653  0.0001\n69540  69540  0.335934  0.0001\n69560  69560  0.411101  0.0001\n69580  69580  3.32817   0.0001\n69600  69600  1.07853   0.0001\n69620  69620  1.10301   0.0001\n69640  69640  1.1535    0.0001\n69660  69660  0.76165   0.0001\n69680  69680  0.424532  0.0001\n69700  69700  0.815053  0.0001\n69720  69720  1.04886   0.0001\n69740  69740  0.394408  0.0001\n69760  69760  1.32348   0.0001\n69780  69780  0.810772  0.0001\n69800  69800  0.485938  0.0001\n69820  69820  0.441926  0.0001\n69840  69840  0.886767  0.0001\n69860  69860  1.75626   0.0001\n69880  69880  0.733228  0.0001\n69900  69900  0.827574  0.0001\n69920  69920  0.459554  0.0001\n69940  69940  1.06868   0.0001\n69960  69960  0.770026  0.0001\n69980  69980  0.35324   0.0001\n"
  },
  {
    "path": "caffe-fpn/tools/extract_features.cpp",
    "content": "#include <string>\n#include <vector>\n\n#include \"boost/algorithm/string.hpp\"\n#include \"google/protobuf/text_format.h\"\n\n#include \"caffe/blob.hpp\"\n#include \"caffe/common.hpp\"\n#include \"caffe/net.hpp\"\n#include \"caffe/proto/caffe.pb.h\"\n#include \"caffe/util/db.hpp\"\n#include \"caffe/util/format.hpp\"\n#include \"caffe/util/io.hpp\"\n\nusing caffe::Blob;\nusing caffe::Caffe;\nusing caffe::Datum;\nusing caffe::Net;\nusing std::string;\nnamespace db = caffe::db;\n\ntemplate<typename Dtype>\nint feature_extraction_pipeline(int argc, char** argv);\n\nint main(int argc, char** argv) {\n  return feature_extraction_pipeline<float>(argc, argv);\n//  return feature_extraction_pipeline<double>(argc, argv);\n}\n\ntemplate<typename Dtype>\nint feature_extraction_pipeline(int argc, char** argv) {\n  ::google::InitGoogleLogging(argv[0]);\n  const int num_required_args = 7;\n  if (argc < num_required_args) {\n    LOG(ERROR)<<\n    \"This program takes in a trained network and an input data layer, and then\"\n    \" extract features of the input data produced by the net.\\n\"\n    \"Usage: extract_features  pretrained_net_param\"\n    \"  feature_extraction_proto_file  extract_feature_blob_name1[,name2,...]\"\n    \"  save_feature_dataset_name1[,name2,...]  
num_mini_batches  db_type\"\n    \"  [CPU/GPU] [DEVICE_ID=0]\\n\"\n    \"Note: you can extract multiple features in one pass by specifying\"\n    \" multiple feature blob names and dataset names separated by ','.\"\n    \" The names cannot contain white space characters and the number of blobs\"\n    \" and datasets must be equal.\";\n    return 1;\n  }\n  int arg_pos = num_required_args;\n\n  arg_pos = num_required_args;\n  if (argc > arg_pos && strcmp(argv[arg_pos], \"GPU\") == 0) {\n    LOG(ERROR)<< \"Using GPU\";\n    int device_id = 0;\n    if (argc > arg_pos + 1) {\n      device_id = atoi(argv[arg_pos + 1]);\n      CHECK_GE(device_id, 0);\n    }\n    LOG(ERROR) << \"Using Device_id=\" << device_id;\n    Caffe::SetDevice(device_id);\n    Caffe::set_mode(Caffe::GPU);\n  } else {\n    LOG(ERROR) << \"Using CPU\";\n    Caffe::set_mode(Caffe::CPU);\n  }\n\n  arg_pos = 0;  // the name of the executable\n  std::string pretrained_binary_proto(argv[++arg_pos]);\n\n  // Expected prototxt contains at least one data layer such as\n  //  the layer data_layer_name and one feature blob such as the\n  //  fc7 top blob to extract features.\n  /*\n   layers {\n     name: \"data_layer_name\"\n     type: DATA\n     data_param {\n       source: \"/path/to/your/images/to/extract/feature/images_leveldb\"\n       mean_file: \"/path/to/your/image_mean.binaryproto\"\n       batch_size: 128\n       crop_size: 227\n       mirror: false\n     }\n     top: \"data_blob_name\"\n     top: \"label_blob_name\"\n   }\n   layers {\n     name: \"drop7\"\n     type: DROPOUT\n     dropout_param {\n       dropout_ratio: 0.5\n     }\n     bottom: \"fc7\"\n     top: \"fc7\"\n   }\n   */\n  std::string feature_extraction_proto(argv[++arg_pos]);\n  boost::shared_ptr<Net<Dtype> > feature_extraction_net(\n      new Net<Dtype>(feature_extraction_proto, caffe::TEST));\n  feature_extraction_net->CopyTrainedLayersFrom(pretrained_binary_proto);\n\n  std::string extract_feature_blob_names(argv[++arg_pos]);\n  
std::vector<std::string> blob_names;\n  boost::split(blob_names, extract_feature_blob_names, boost::is_any_of(\",\"));\n\n  std::string save_feature_dataset_names(argv[++arg_pos]);\n  std::vector<std::string> dataset_names;\n  boost::split(dataset_names, save_feature_dataset_names,\n               boost::is_any_of(\",\"));\n  CHECK_EQ(blob_names.size(), dataset_names.size()) <<\n      \" the number of blob names and dataset names must be equal\";\n  size_t num_features = blob_names.size();\n\n  for (size_t i = 0; i < num_features; i++) {\n    CHECK(feature_extraction_net->has_blob(blob_names[i]))\n        << \"Unknown feature blob name \" << blob_names[i]\n        << \" in the network \" << feature_extraction_proto;\n  }\n\n  int num_mini_batches = atoi(argv[++arg_pos]);\n\n  std::vector<boost::shared_ptr<db::DB> > feature_dbs;\n  std::vector<boost::shared_ptr<db::Transaction> > txns;\n  const char* db_type = argv[++arg_pos];\n  for (size_t i = 0; i < num_features; ++i) {\n    LOG(INFO)<< \"Opening dataset \" << dataset_names[i];\n    boost::shared_ptr<db::DB> db(db::GetDB(db_type));\n    db->Open(dataset_names.at(i), db::NEW);\n    feature_dbs.push_back(db);\n    boost::shared_ptr<db::Transaction> txn(db->NewTransaction());\n    txns.push_back(txn);\n  }\n\n  LOG(ERROR)<< \"Extracting Features\";\n\n  Datum datum;\n  std::vector<Blob<float>*> input_vec;\n  std::vector<int> image_indices(num_features, 0);\n  for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index) {\n    feature_extraction_net->Forward(input_vec);\n    for (int i = 0; i < num_features; ++i) {\n      const boost::shared_ptr<Blob<Dtype> > feature_blob =\n        feature_extraction_net->blob_by_name(blob_names[i]);\n      int batch_size = feature_blob->num();\n      int dim_features = feature_blob->count() / batch_size;\n      const Dtype* feature_blob_data;\n      for (int n = 0; n < batch_size; ++n) {\n        datum.set_height(feature_blob->height());\n        
datum.set_width(feature_blob->width());\n        datum.set_channels(feature_blob->channels());\n        datum.clear_data();\n        datum.clear_float_data();\n        feature_blob_data = feature_blob->cpu_data() +\n            feature_blob->offset(n);\n        for (int d = 0; d < dim_features; ++d) {\n          datum.add_float_data(feature_blob_data[d]);\n        }\n        string key_str = caffe::format_int(image_indices[i], 10);\n\n        string out;\n        CHECK(datum.SerializeToString(&out));\n        txns.at(i)->Put(key_str, out);\n        ++image_indices[i];\n        if (image_indices[i] % 1000 == 0) {\n          txns.at(i)->Commit();\n          txns.at(i).reset(feature_dbs.at(i)->NewTransaction());\n          LOG(ERROR)<< \"Extracted features of \" << image_indices[i] <<\n              \" query images for feature blob \" << blob_names[i];\n        }\n      }  // for (int n = 0; n < batch_size; ++n)\n    }  // for (int i = 0; i < num_features; ++i)\n  }  // for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index)\n  // write the last batch\n  for (int i = 0; i < num_features; ++i) {\n    if (image_indices[i] % 1000 != 0) {\n      txns.at(i)->Commit();\n    }\n    LOG(ERROR)<< \"Extracted features of \" << image_indices[i] <<\n        \" query images for feature blob \" << blob_names[i];\n    feature_dbs.at(i)->Close();\n  }\n\n  LOG(ERROR)<< \"Successfully extracted the features!\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/finetune_net.cpp",
    "content": "#include \"caffe/caffe.hpp\"\n\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"Deprecated. Use caffe train --solver=... \"\n                \"[--weights=...] instead.\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/net_speed_benchmark.cpp",
    "content": "#include \"caffe/caffe.hpp\"\n\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"Deprecated. Use caffe time --model=... \"\n             \"[--iterations=50] [--gpu] [--device_id=0]\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/test_net.cpp",
    "content": "#include \"caffe/caffe.hpp\"\n\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"Deprecated. Use caffe test --model=... \"\n      \"--weights=... instead.\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/train_net.cpp",
    "content": "#include \"caffe/caffe.hpp\"\n\nint main(int argc, char** argv) {\n  LOG(FATAL) << \"Deprecated. Use caffe train --solver=... \"\n                \"[--snapshot=...] instead.\";\n  return 0;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/upgrade_net_proto_binary.cpp",
    "content": "// This is a script to upgrade \"V0\" network prototxts to the new format.\n// Usage:\n//    upgrade_net_proto_binary v0_net_proto_file_in net_proto_file_out\n\n#include <cstring>\n#include <fstream>  // NOLINT(readability/streams)\n#include <iostream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"caffe/caffe.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nusing std::ofstream;\n\nusing namespace caffe;  // NOLINT(build/namespaces)\n\nint main(int argc, char** argv) {\n  ::google::InitGoogleLogging(argv[0]);\n  if (argc != 3) {\n    LOG(ERROR) << \"Usage: \"\n        << \"upgrade_net_proto_binary v0_net_proto_file_in net_proto_file_out\";\n    return 1;\n  }\n\n  NetParameter net_param;\n  string input_filename(argv[1]);\n  if (!ReadProtoFromBinaryFile(input_filename, &net_param)) {\n    LOG(ERROR) << \"Failed to parse input binary file as NetParameter: \"\n               << input_filename;\n    return 2;\n  }\n  bool need_upgrade = NetNeedsUpgrade(net_param);\n  bool success = true;\n  if (need_upgrade) {\n    success = UpgradeNetAsNeeded(input_filename, &net_param);\n    if (!success) {\n      LOG(ERROR) << \"Encountered error(s) while upgrading prototxt; \"\n                 << \"see details above.\";\n    }\n  } else {\n    LOG(ERROR) << \"File already in V1 proto format: \" << argv[1];\n  }\n\n  WriteProtoToBinaryFile(net_param, argv[2]);\n\n  LOG(ERROR) << \"Wrote upgraded NetParameter binary proto to \" << argv[2];\n  return !success;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/upgrade_net_proto_text.cpp",
    "content": "// This is a script to upgrade \"V0\" network prototxts to the new format.\n// Usage:\n//    upgrade_net_proto_text v0_net_proto_file_in net_proto_file_out\n\n#include <cstring>\n#include <fstream>  // NOLINT(readability/streams)\n#include <iostream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"caffe/caffe.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nusing std::ofstream;\n\nusing namespace caffe;  // NOLINT(build/namespaces)\n\nint main(int argc, char** argv) {\n  ::google::InitGoogleLogging(argv[0]);\n  if (argc != 3) {\n    LOG(ERROR) << \"Usage: \"\n        << \"upgrade_net_proto_text v0_net_proto_file_in net_proto_file_out\";\n    return 1;\n  }\n\n  NetParameter net_param;\n  string input_filename(argv[1]);\n  if (!ReadProtoFromTextFile(input_filename, &net_param)) {\n    LOG(ERROR) << \"Failed to parse input text file as NetParameter: \"\n               << input_filename;\n    return 2;\n  }\n  bool need_upgrade = NetNeedsUpgrade(net_param);\n  bool need_data_upgrade = NetNeedsDataUpgrade(net_param);\n  bool success = true;\n  if (need_upgrade) {\n    success = UpgradeNetAsNeeded(input_filename, &net_param);\n    if (!success) {\n      LOG(ERROR) << \"Encountered error(s) while upgrading prototxt; \"\n                 << \"see details above.\";\n    }\n  } else {\n    LOG(ERROR) << \"File already in latest proto format: \" << input_filename;\n  }\n\n  if (need_data_upgrade) {\n    UpgradeNetDataTransformation(&net_param);\n  }\n\n  // Save new format prototxt.\n  WriteProtoToTextFile(net_param, argv[2]);\n\n  LOG(ERROR) << \"Wrote upgraded NetParameter text proto to \" << argv[2];\n  return !success;\n}\n"
  },
  {
    "path": "caffe-fpn/tools/upgrade_solver_proto_text.cpp",
    "content": "// This is a script to upgrade old solver prototxts to the new format.\n// Usage:\n//    upgrade_solver_proto_text old_solver_proto_file_in solver_proto_file_out\n\n#include <cstring>\n#include <fstream>  // NOLINT(readability/streams)\n#include <iostream>  // NOLINT(readability/streams)\n#include <string>\n\n#include \"caffe/caffe.hpp\"\n#include \"caffe/util/io.hpp\"\n#include \"caffe/util/upgrade_proto.hpp\"\n\nusing std::ofstream;\n\nusing namespace caffe;  // NOLINT(build/namespaces)\n\nint main(int argc, char** argv) {\n  ::google::InitGoogleLogging(argv[0]);\n  if (argc != 3) {\n    LOG(ERROR) << \"Usage: upgrade_solver_proto_text \"\n        << \"old_solver_proto_file_in solver_proto_file_out\";\n    return 1;\n  }\n\n  SolverParameter solver_param;\n  string input_filename(argv[1]);\n  if (!ReadProtoFromTextFile(input_filename, &solver_param)) {\n    LOG(ERROR) << \"Failed to parse input text file as SolverParameter: \"\n               << input_filename;\n    return 2;\n  }\n  bool need_upgrade = SolverNeedsTypeUpgrade(solver_param);\n  bool success = true;\n  if (need_upgrade) {\n    success = UpgradeSolverAsNeeded(input_filename, &solver_param);\n    if (!success) {\n      LOG(ERROR) << \"Encountered error(s) while upgrading prototxt; \"\n                 << \"see details above.\";\n    }\n  } else {\n    LOG(ERROR) << \"File already in latest proto format: \" << input_filename;\n  }\n\n  // Save new format prototxt.\n  WriteProtoToTextFile(solver_param, argv[2]);\n\n  LOG(ERROR) << \"Wrote upgraded SolverParameter text proto to \" << argv[2];\n  return !success;\n}\n"
  },
  {
    "path": "data/.gitignore",
    "content": "selective_search*\nimagenet_models*\nfast_rcnn_models*\nVOCdevkit*\ncache\n"
  },
  {
    "path": "data/README.md",
    "content": "This directory holds (*after you download them*):\n- Caffe models pre-trained on ImageNet\n- Faster R-CNN models\n- Symlinks to datasets\n\nTo download Caffe models (ZF, VGG16) pre-trained on ImageNet, run:\n\n```\n./data/scripts/fetch_imagenet_models.sh\n```\n\nThis script will populate `data/imagenet_models`.\n\nTo download Faster R-CNN models trained on VOC 2007, run:\n\n```\n./data/scripts/fetch_faster_rcnn_models.sh\n```\n\nThis script will populate `data/faster_rcnn_models`.\n\nIn order to train and test with PASCAL VOC, you will need to establish symlinks.\nFrom the `data` directory (`cd data`):\n\n```\n# For VOC 2007\nln -s /your/path/to/VOC2007/VOCdevkit VOCdevkit2007\n\n# For VOC 2012\nln -s /your/path/to/VOC2012/VOCdevkit VOCdevkit2012\n```\n\nInstall the MS COCO dataset at /path/to/coco\n\n```\nln -s /path/to/coco coco\n```\n\nFor COCO with Fast R-CNN, place object proposals under `coco_proposals` (inside\nthe `data` directory). You can obtain proposals on COCO from Jan Hosang at\nhttps://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/object-recognition-and-scene-understanding/how-good-are-detection-proposals-really/.\nFor COCO, using MCG is recommended over selective search. MCG boxes can be downloaded\nfrom http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/.\nUse the tool `lib/datasets/tools/mcg_munge.py` to convert the downloaded MCG data\ninto the same file layout as those from Jan Hosang.\n\nSince you'll likely be experimenting with multiple installs of Fast/er R-CNN in\nparallel, you'll probably want to keep all of this data in a shared place and\nuse symlinks. 
On my system I create the following symlinks inside `data`:\n\nAnnotations for the 5k image 'minival' subset of COCO val2014 that I like to use\ncan be found at https://dl.dropboxusercontent.com/s/o43o90bna78omob/instances_minival2014.json.zip?dl=0.\nAnnotations for COCO val2014 (set) minus minival (~35k images) can be found at\nhttps://dl.dropboxusercontent.com/s/s3tw5zcg7395368/instances_valminusminival2014.json.zip?dl=0.\n\n```\n# data/cache holds various outputs created by the datasets package\nln -s /data/fast_rcnn_shared/cache\n\n# move the imagenet_models to shared location and symlink to them\nln -s /data/fast_rcnn_shared/imagenet_models\n\n# move the selective search data to a shared location and symlink to them\n# (only applicable to Fast R-CNN training)\nln -s /data/fast_rcnn_shared/selective_search_data\n\nln -s /data/VOC2007/VOCdevkit VOCdevkit2007\nln -s /data/VOC2012/VOCdevkit VOCdevkit2012\n```\n"
  },
  {
    "path": "data/pylintrc",
    "content": "[TYPECHECK]\n\nignored-modules = numpy, numpy.random, cv2\n"
  },
  {
    "path": "data/scripts/fetch_faster_rcnn_models.sh",
    "content": "#!/bin/bash\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )/../\" && pwd )\"\ncd $DIR\n\nFILE=faster_rcnn_models.tgz\nURL=https://dl.dropboxusercontent.com/s/o6ii098bu51d139/faster_rcnn_models.tgz?dl=0\nCHECKSUM=ac116844f66aefe29587214272054668\n\nif [ -f $FILE ]; then\n  echo \"File already exists. Checking md5...\"\n  os=`uname -s`\n  if [ \"$os\" = \"Linux\" ]; then\n    checksum=`md5sum $FILE | awk '{ print $1 }'`\n  elif [ \"$os\" = \"Darwin\" ]; then\n    checksum=`cat $FILE | md5`\n  fi\n  if [ \"$checksum\" = \"$CHECKSUM\" ]; then\n    echo \"Checksum is correct. No need to download.\"\n    exit 0\n  else\n    echo \"Checksum is incorrect. Need to download again.\"\n  fi\nfi\n\necho \"Downloading Faster R-CNN demo models (695M)...\"\n\nwget $URL -O $FILE\n\necho \"Unzipping...\"\n\ntar zxvf $FILE\n\necho \"Done. Please run this command again to verify that checksum = $CHECKSUM.\"\n"
  },
  {
    "path": "data/scripts/fetch_imagenet_models.sh",
    "content": "#!/bin/bash\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )/../\" && pwd )\"\ncd $DIR\n\nFILE=imagenet_models.tgz\nURL=https://dl.dropbox.com/s/gstw7122padlf0l/imagenet_models.tgz?dl=0\nCHECKSUM=ed34ca912d6782edfb673a8c3a0bda6d\n\nif [ -f $FILE ]; then\n  echo \"File already exists. Checking md5...\"\n  os=`uname -s`\n  if [ \"$os\" = \"Linux\" ]; then\n    checksum=`md5sum $FILE | awk '{ print $1 }'`\n  elif [ \"$os\" = \"Darwin\" ]; then\n    checksum=`cat $FILE | md5`\n  fi\n  if [ \"$checksum\" = \"$CHECKSUM\" ]; then\n    echo \"Checksum is correct. No need to download.\"\n    exit 0\n  else\n    echo \"Checksum is incorrect. Need to download again.\"\n  fi\nfi\n\necho \"Downloading pretrained ImageNet models (1G)...\"\n\nwget $URL -O $FILE\n\necho \"Unzipping...\"\n\ntar zxvf $FILE\n\necho \"Done. Please run this command again to verify that checksum = $CHECKSUM.\"\n"
  },
  {
    "path": "data/scripts/fetch_selective_search_data.sh",
    "content": "#!/bin/bash\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )/../\" && pwd )\"\ncd $DIR\n\nFILE=selective_search_data.tgz\nURL=https://dl.dropboxusercontent.com/s/orrt7o6bp6ae0tc/selective_search_data.tgz?dl=0\nCHECKSUM=7078c1db87a7851b31966b96774cd9b9\n\nif [ -f $FILE ]; then\n  echo \"File already exists. Checking md5...\"\n  os=`uname -s`\n  if [ \"$os\" = \"Linux\" ]; then\n    checksum=`md5sum $FILE | awk '{ print $1 }'`\n  elif [ \"$os\" = \"Darwin\" ]; then\n    checksum=`cat $FILE | md5`\n  fi\n  if [ \"$checksum\" = \"$CHECKSUM\" ]; then\n    echo \"Checksum is correct. No need to download.\"\n    exit 0\n  else\n    echo \"Checksum is incorrect. Need to download again.\"\n  fi\nfi\n\necho \"Downloading precomputed selective search boxes (0.5G)...\"\n\nwget $URL -O $FILE\n\necho \"Unzipping...\"\n\ntar zxvf $FILE\n\necho \"Done. Please run this command again to verify that checksum = $CHECKSUM.\"\n"
  },
  {
    "path": "experiments/README.md",
    "content": "Scripts are under `experiments/scripts`.\n\nEach script saves a log file under `experiments/logs`.\n\nConfiguration override files used in the experiments are stored in `experiments/cfgs`.\n"
  },
  {
    "path": "experiments/cfgs/FP_Net_end2end.yml",
    "content": "EXP_DIR: FP_Net_end2end\nTRAIN:\n  HAS_RPN: True\n  IMS_PER_BATCH: 1\n  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True\n  RPN_POSITIVE_OVERLAP: 0.7\n  RPN_BATCHSIZE: 256\n  PROPOSAL_METHOD: gt\n  BG_THRESH_LO: 0.0\nTEST:\n  HAS_RPN: True\n"
  },
  {
    "path": "experiments/scripts/FP_Net_end2end.sh",
    "content": "#!/bin/bash\n# Usage:\n# ./experiments/scripts/FP_Net_end2end.sh GPU NET DATASET [optional args to {train,test}_net.py]\n# DATASET is either pascal_voc or coco.\n#\n# Example:\n# ./experiments/scripts/FP_Net_end2end.sh 0 VGG_CNN_M_1024 pascal_voc \\\n#   --set EXP_DIR foobar RNG_SEED 42 TRAIN.SCALES \"[400, 500, 600, 700]\"\n\nset -x\nset -e\n\nexport PYTHONUNBUFFERED=\"True\"\n\nGPU_ID=$1\nNET=$2\nNET_lc=${NET,,}\nDATASET=$3\n\narray=( $@ )\nlen=${#array[@]}\nEXTRA_ARGS=${array[@]:3:$len}\nEXTRA_ARGS_SLUG=${EXTRA_ARGS// /_}\n\ncase $DATASET in\n  pascal_voc)\n    TRAIN_IMDB=\"voc_2007_trainval\"\n    TEST_IMDB=\"voc_2007_test\"\n    PT_DIR=\"pascal_voc\"\n    ITERS=250000\n    ;;\n  coco)\n    # This is a very long and slow training schedule\n    # You can probably use fewer iterations and reduce the\n    # time to the LR drop (set in the solver to 350,000 iterations).\n    TRAIN_IMDB=\"coco_2014_train\"\n    TEST_IMDB=\"coco_2014_minival\"\n    PT_DIR=\"coco\"\n    ITERS=490000\n    ;;\n  *)\n    echo \"No dataset given\"\n    exit\n    ;;\nesac\n\nLOG=\"experiments/logs/faster_rcnn_end2end_${NET}_${EXTRA_ARGS_SLUG}.txt.`date +'%Y-%m-%d_%H-%M-%S'`\"\nexec &> >(tee -a \"$LOG\")\necho Logging output to \"$LOG\"\n\ntime ./tools/train_net.py --gpu ${GPU_ID} \\\n  --solver models/${PT_DIR}/${NET}/FP_Net_end2end/solver.prototxt \\\n  --weights data/pretrained_model/ResNet50.v2.caffemodel \\\n  --imdb ${TRAIN_IMDB} \\\n  --iters ${ITERS} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n\nset +x\nNET_FINAL=`grep -B 1 \"done solving\" ${LOG} | grep \"Wrote snapshot\" | awk '{print $4}'`\nset -x\n\ntime ./tools/test_net.py --gpu ${GPU_ID} \\\n  --def models/${PT_DIR}/${NET}/FP_Net_end2end/test.prototxt \\\n  --net ${NET_FINAL} \\\n  --imdb ${TEST_IMDB} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n"
  },
  {
    "path": "lib/Makefile",
    "content": "all:\n\tpython setup.py build_ext --inplace\n\trm -rf build\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/get_voc_opts.m",
    "content": "function VOCopts = get_voc_opts(path)\n\ntmp = pwd;\ncd(path);\ntry\n  addpath('VOCcode');\n  VOCinit;\ncatch\n  rmpath('VOCcode');\n  cd(tmp);\n  error(sprintf('VOCcode directory not found under %s', path));\nend\nrmpath('VOCcode');\ncd(tmp);\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/voc_eval.m",
    "content": "function res = voc_eval(path, comp_id, test_set, output_dir)\n\nVOCopts = get_voc_opts(path);\nVOCopts.testset = test_set;\n\nfor i = 1:length(VOCopts.classes)\n  cls = VOCopts.classes{i};\n  res(i) = voc_eval_cls(cls, VOCopts, comp_id, output_dir);\nend\n\nfprintf('\\n~~~~~~~~~~~~~~~~~~~~\\n');\nfprintf('Results:\\n');\naps = [res(:).ap]';\nfprintf('%.1f\\n', aps * 100);\nfprintf('%.1f\\n', mean(aps) * 100);\nfprintf('~~~~~~~~~~~~~~~~~~~~\\n');\n\nfunction res = voc_eval_cls(cls, VOCopts, comp_id, output_dir)\n\ntest_set = VOCopts.testset;\nyear = VOCopts.dataset(4:end);\n\naddpath(fullfile(VOCopts.datadir, 'VOCcode'));\n\nres_fn = sprintf(VOCopts.detrespath, comp_id, cls);\n\nrecall = [];\nprec = [];\nap = 0;\nap_auc = 0;\n\ndo_eval = (str2num(year) <= 2007) | ~strcmp(test_set, 'test');\nif do_eval\n  % Bug in VOCevaldet requires that tic has been called first\n  tic;\n  [recall, prec, ap] = VOCevaldet(VOCopts, comp_id, cls, true);\n  ap_auc = xVOCap(recall, prec);\n\n  % force plot limits\n  ylim([0 1]);\n  xlim([0 1]);\n\n  print(gcf, '-djpeg', '-r0', ...\n        [output_dir '/' cls '_pr.jpg']);\nend\nfprintf('!!! %s : %.4f %.4f\\n', cls, ap, ap_auc);\n\nres.recall = recall;\nres.prec = prec;\nres.ap = ap;\nres.ap_auc = ap_auc;\n\nsave([output_dir '/' cls '_pr.mat'], ...\n     'res', 'recall', 'prec', 'ap', 'ap_auc');\n\nrmpath(fullfile(VOCopts.datadir, 'VOCcode'));\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/xVOCap.m",
    "content": "function ap = xVOCap(rec,prec)\n% From the PASCAL VOC 2011 devkit\n\nmrec=[0 ; rec ; 1];\nmpre=[0 ; prec ; 0];\nfor i=numel(mpre)-1:-1:1\n    mpre(i)=max(mpre(i),mpre(i+1));\nend\ni=find(mrec(2:end)~=mrec(1:end-1))+1;\nap=sum((mrec(i)-mrec(i-1)).*mpre(i));\n"
  },
  {
    "path": "lib/datasets/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/datasets/coco.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nfrom datasets.imdb import imdb\nimport datasets.ds_utils as ds_utils\nfrom fast_rcnn.config import cfg\nimport os.path as osp\nimport sys\nimport os\nimport numpy as np\nimport scipy.sparse\nimport scipy.io as sio\nimport cPickle\nimport json\nimport uuid\n# COCO API\nfrom pycocotools.coco import COCO\nfrom pycocotools.cocoeval import COCOeval\nfrom pycocotools import mask as COCOmask\n\ndef _filter_crowd_proposals(roidb, crowd_thresh):\n    \"\"\"\n    Finds proposals that are inside crowd regions and marks them with\n    overlap = -1 (for all gt rois), which means they will be excluded from\n    training.\n    \"\"\"\n    for ix, entry in enumerate(roidb):\n        overlaps = entry['gt_overlaps'].toarray()\n        crowd_inds = np.where(overlaps.max(axis=1) == -1)[0]\n        non_gt_inds = np.where(entry['gt_classes'] == 0)[0]\n        if len(crowd_inds) == 0 or len(non_gt_inds) == 0:\n            continue\n        iscrowd = [int(True) for _ in xrange(len(crowd_inds))]\n        crowd_boxes = ds_utils.xyxy_to_xywh(entry['boxes'][crowd_inds, :])\n        non_gt_boxes = ds_utils.xyxy_to_xywh(entry['boxes'][non_gt_inds, :])\n        ious = COCOmask.iou(non_gt_boxes, crowd_boxes, iscrowd)\n        bad_inds = np.where(ious.max(axis=1) > crowd_thresh)[0]\n        overlaps[non_gt_inds[bad_inds], :] = -1\n        roidb[ix]['gt_overlaps'] = scipy.sparse.csr_matrix(overlaps)\n    return roidb\n\nclass coco(imdb):\n    def __init__(self, image_set, year):\n        imdb.__init__(self, 'coco_' + year + '_' + image_set)\n        # COCO specific config options\n        self.config = {'top_k' : 2000,\n                       'use_salt' : True,\n                       'cleanup' : True,\n                       'crowd_thresh' : 0.7,\n            
           'min_size' : 2}\n        # name, paths\n        self._year = year\n        self._image_set = image_set\n        self._data_path = osp.join(cfg.DATA_DIR, 'coco')\n        # load COCO API, classes, class <-> id mappings\n        self._COCO = COCO(self._get_ann_file())\n        cats = self._COCO.loadCats(self._COCO.getCatIds())\n        self._classes = tuple(['__background__'] + [c['name'] for c in cats])\n        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))\n        self._class_to_coco_cat_id = dict(zip([c['name'] for c in cats],\n                                              self._COCO.getCatIds()))\n        self._image_index = self._load_image_set_index()\n        # Default to roidb handler\n        self.set_proposal_method('selective_search')\n        self.competition_mode(False)\n\n        # Some image sets are \"views\" (i.e. subsets) into others.\n        # For example, minival2014 is a random 5000 image subset of val2014.\n        # This mapping tells us where the view's images and proposals come from.\n        self._view_map = {\n            'minival2014' : 'val2014',          # 5k val2014 subset\n            'valminusminival2014' : 'val2014',  # val2014 \\setminus minival2014\n        }\n        coco_name = image_set + year  # e.g., \"val2014\"\n        self._data_name = (self._view_map[coco_name]\n                           if self._view_map.has_key(coco_name)\n                           else coco_name)\n        # Dataset splits that have ground-truth annotations (test splits\n        # do not have gt annotations)\n        self._gt_splits = ('train', 'val', 'minival')\n\n    def _get_ann_file(self):\n        prefix = 'instances' if self._image_set.find('test') == -1 \\\n                             else 'image_info'\n        return osp.join(self._data_path, 'annotations',\n                        prefix + '_' + self._image_set + self._year + '.json')\n\n    def _load_image_set_index(self):\n        \"\"\"\n        Load 
image ids.\n        \"\"\"\n        image_ids = self._COCO.getImgIds()\n        return image_ids\n\n    def _get_widths(self):\n        anns = self._COCO.loadImgs(self._image_index)\n        widths = [ann['width'] for ann in anns]\n        return widths\n\n    def image_path_at(self, i):\n        \"\"\"\n        Return the absolute path to image i in the image sequence.\n        \"\"\"\n        return self.image_path_from_index(self._image_index[i])\n\n    def image_path_from_index(self, index):\n        \"\"\"\n        Construct an image path from the image's \"index\" identifier.\n        \"\"\"\n        # Example image path for index=119993:\n        #   images/train2014/COCO_train2014_000000119993.jpg\n        file_name = ('COCO_' + self._data_name + '_' +\n                     str(index).zfill(12) + '.jpg')\n        image_path = osp.join(self._data_path, 'images',\n                              self._data_name, file_name)\n        assert osp.exists(image_path), \\\n                'Path does not exist: {}'.format(image_path)\n        return image_path\n\n    def selective_search_roidb(self):\n        return self._roidb_from_proposals('selective_search')\n\n    def edge_boxes_roidb(self):\n        return self._roidb_from_proposals('edge_boxes_AR')\n\n    def mcg_roidb(self):\n        return self._roidb_from_proposals('MCG')\n\n    def _roidb_from_proposals(self, method):\n        \"\"\"\n        Creates a roidb from pre-computed proposals of a particular method.\n        \"\"\"\n        top_k = self.config['top_k']\n        cache_file = osp.join(self.cache_path, self.name +\n                              '_{:s}_top{:d}'.format(method, top_k) +\n                              '_roidb.pkl')\n\n        if osp.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = cPickle.load(fid)\n            print '{:s} {:s} roidb loaded from {:s}'.format(self.name, method,\n                                                            
cache_file)\n            return roidb\n\n        if self._image_set in self._gt_splits:\n            gt_roidb = self.gt_roidb()\n            method_roidb = self._load_proposals(method, gt_roidb)\n            roidb = imdb.merge_roidbs(gt_roidb, method_roidb)\n            # Make sure we don't use proposals that are contained in crowds\n            roidb = _filter_crowd_proposals(roidb, self.config['crowd_thresh'])\n        else:\n            roidb = self._load_proposals(method, None)\n        with open(cache_file, 'wb') as fid:\n            cPickle.dump(roidb, fid, cPickle.HIGHEST_PROTOCOL)\n        print 'wrote {:s} roidb to {:s}'.format(method, cache_file)\n        return roidb\n\n    def _load_proposals(self, method, gt_roidb):\n        \"\"\"\n        Load pre-computed proposals in the format provided by Jan Hosang:\n        http://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-\n          computing/research/object-recognition-and-scene-understanding/how-\n          good-are-detection-proposals-really/\n        For MCG, use boxes from http://www.eecs.berkeley.edu/Research/Projects/\n          CS/vision/grouping/mcg/ and convert the file layout using\n        lib/datasets/tools/mcg_munge.py.\n        \"\"\"\n        box_list = []\n        top_k = self.config['top_k']\n        valid_methods = [\n            'MCG',\n            'selective_search',\n            'edge_boxes_AR',\n            'edge_boxes_70']\n        assert method in valid_methods\n\n        print 'Loading {} boxes'.format(method)\n        for i, index in enumerate(self._image_index):\n            if i % 1000 == 0:\n                print '{:d} / {:d}'.format(i + 1, len(self._image_index))\n\n            box_file = osp.join(\n                cfg.DATA_DIR, 'coco_proposals', method, 'mat',\n                self._get_box_file(index))\n\n            raw_data = sio.loadmat(box_file)['boxes']\n            boxes = np.maximum(raw_data - 1, 0).astype(np.uint16)\n            if method == 'MCG':\n  
              # Boxes from the MCG website are in (y1, x1, y2, x2) order\n                boxes = boxes[:, (1, 0, 3, 2)]\n            # Remove duplicate boxes and very small boxes and then take top k\n            keep = ds_utils.unique_boxes(boxes)\n            boxes = boxes[keep, :]\n            keep = ds_utils.filter_small_boxes(boxes, self.config['min_size'])\n            boxes = boxes[keep, :]\n            boxes = boxes[:top_k, :]\n            box_list.append(boxes)\n            # Sanity check\n            im_ann = self._COCO.loadImgs(index)[0]\n            width = im_ann['width']\n            height = im_ann['height']\n            ds_utils.validate_boxes(boxes, width=width, height=height)\n        return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n    def gt_roidb(self):\n        \"\"\"\n        Return the database of ground-truth regions of interest.\n        This function loads/saves from/to a cache file to speed up future calls.\n        \"\"\"\n        cache_file = osp.join(self.cache_path, self.name + '_gt_roidb.pkl')\n        if osp.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = cPickle.load(fid)\n            print '{} gt roidb loaded from {}'.format(self.name, cache_file)\n            return roidb\n\n        gt_roidb = [self._load_coco_annotation(index)\n                    for index in self._image_index]\n\n        with open(cache_file, 'wb') as fid:\n            cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)\n        print 'wrote gt roidb to {}'.format(cache_file)\n        return gt_roidb\n\n    def _load_coco_annotation(self, index):\n        \"\"\"\n        Loads COCO bounding-box instance annotations. Crowd instances are\n        handled by marking their overlaps (with all categories) to -1. 
This\n        overlap value means that crowd \"instances\" are excluded from training.\n        \"\"\"\n        im_ann = self._COCO.loadImgs(index)[0]\n        width = im_ann['width']\n        height = im_ann['height']\n\n        annIds = self._COCO.getAnnIds(imgIds=index, iscrowd=None)\n        objs = self._COCO.loadAnns(annIds)\n        # Sanitize bboxes -- some are invalid\n        valid_objs = []\n        for obj in objs:\n            x1 = np.max((0, obj['bbox'][0]))\n            y1 = np.max((0, obj['bbox'][1]))\n            x2 = np.min((width - 1, x1 + np.max((0, obj['bbox'][2] - 1))))\n            y2 = np.min((height - 1, y1 + np.max((0, obj['bbox'][3] - 1))))\n            if obj['area'] > 0 and x2 >= x1 and y2 >= y1:\n                obj['clean_bbox'] = [x1, y1, x2, y2]\n                valid_objs.append(obj)\n        objs = valid_objs\n        num_objs = len(objs)\n\n        boxes = np.zeros((num_objs, 4), dtype=np.uint16)\n        gt_classes = np.zeros((num_objs), dtype=np.int32)\n        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)\n        seg_areas = np.zeros((num_objs), dtype=np.float32)\n\n        # Lookup table to map from COCO category ids to our internal class\n        # indices\n        coco_cat_id_to_class_ind = dict([(self._class_to_coco_cat_id[cls],\n                                          self._class_to_ind[cls])\n                                         for cls in self._classes[1:]])\n\n        for ix, obj in enumerate(objs):\n            cls = coco_cat_id_to_class_ind[obj['category_id']]\n            boxes[ix, :] = obj['clean_bbox']\n            gt_classes[ix] = cls\n            seg_areas[ix] = obj['area']\n            if obj['iscrowd']:\n                # Set overlap to -1 for all classes for crowd objects\n                # so they will be excluded during training\n                overlaps[ix, :] = -1.0\n            else:\n                overlaps[ix, cls] = 1.0\n\n        ds_utils.validate_boxes(boxes, 
width=width, height=height)\n        overlaps = scipy.sparse.csr_matrix(overlaps)\n        return {'boxes' : boxes,\n                'gt_classes': gt_classes,\n                'gt_overlaps' : overlaps,\n                'flipped' : False,\n                'seg_areas' : seg_areas}\n\n    def _get_box_file(self, index):\n        # first 14 chars / first 22 chars / all chars + .mat\n        # COCO_val2014_0/COCO_val2014_000000447/COCO_val2014_000000447991.mat\n        file_name = ('COCO_' + self._data_name +\n                     '_' + str(index).zfill(12) + '.mat')\n        return osp.join(file_name[:14], file_name[:22], file_name)\n\n    def _print_detection_eval_metrics(self, coco_eval):\n        IoU_lo_thresh = 0.5\n        IoU_hi_thresh = 0.95\n        def _get_thr_ind(coco_eval, thr):\n            ind = np.where((coco_eval.params.iouThrs > thr - 1e-5) &\n                           (coco_eval.params.iouThrs < thr + 1e-5))[0][0]\n            iou_thr = coco_eval.params.iouThrs[ind]\n            assert np.isclose(iou_thr, thr)\n            return ind\n\n        ind_lo = _get_thr_ind(coco_eval, IoU_lo_thresh)\n        ind_hi = _get_thr_ind(coco_eval, IoU_hi_thresh)\n        # precision has dims (iou, recall, cls, area range, max dets)\n        # area range index 0: all area ranges\n        # max dets index 2: 100 per image\n        precision = \\\n            coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, :, 0, 2]\n        ap_default = np.mean(precision[precision > -1])\n        print ('~~~~ Mean and per-category AP @ IoU=[{:.2f},{:.2f}] '\n               '~~~~').format(IoU_lo_thresh, IoU_hi_thresh)\n        print '{:.1f}'.format(100 * ap_default)\n        for cls_ind, cls in enumerate(self.classes):\n            if cls == '__background__':\n                continue\n            # minus 1 because of __background__\n            precision = coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, cls_ind - 1, 0, 2]\n            ap = np.mean(precision[precision > -1])\n  
          print '{:.1f}'.format(100 * ap)\n\n        print '~~~~ Summary metrics ~~~~'\n        coco_eval.summarize()\n\n    def _do_detection_eval(self, res_file, output_dir):\n        ann_type = 'bbox'\n        coco_dt = self._COCO.loadRes(res_file)\n        coco_eval = COCOeval(self._COCO, coco_dt)\n        coco_eval.params.useSegm = (ann_type == 'segm')\n        coco_eval.evaluate()\n        coco_eval.accumulate()\n        self._print_detection_eval_metrics(coco_eval)\n        eval_file = osp.join(output_dir, 'detection_results.pkl')\n        with open(eval_file, 'wb') as fid:\n            cPickle.dump(coco_eval, fid, cPickle.HIGHEST_PROTOCOL)\n        print 'Wrote COCO eval results to: {}'.format(eval_file)\n\n    def _coco_results_one_category(self, boxes, cat_id):\n        results = []\n        for im_ind, index in enumerate(self.image_index):\n            # Skip images with no detections for this class; comparing a\n            # numpy array against [] is unreliable, so test the length instead.\n            if len(boxes[im_ind]) == 0:\n                continue\n            dets = boxes[im_ind].astype(np.float)\n            scores = dets[:, -1]\n            xs = dets[:, 0]\n            ys = dets[:, 1]\n            ws = dets[:, 2] - xs + 1\n            hs = dets[:, 3] - ys + 1\n            results.extend(\n              [{'image_id' : index,\n                'category_id' : cat_id,\n                'bbox' : [xs[k], ys[k], ws[k], hs[k]],\n                'score' : scores[k]} for k in xrange(dets.shape[0])])\n        return results\n\n    def _write_coco_results_file(self, all_boxes, res_file):\n        # [{\"image_id\": 42,\n        #   \"category_id\": 18,\n        #   \"bbox\": [258.15,41.29,348.26,243.78],\n        #   \"score\": 0.236}, ...]\n        results = []\n        for cls_ind, cls in enumerate(self.classes):\n            if cls == '__background__':\n                continue\n            print 'Collecting {} results ({:d}/{:d})'.format(cls, cls_ind,\n                                                          self.num_classes - 1)\n            coco_cat_id = self._class_to_coco_cat_id[cls]\n            
results.extend(self._coco_results_one_category(all_boxes[cls_ind],\n                                                           coco_cat_id))\n        print 'Writing results json to {}'.format(res_file)\n        with open(res_file, 'w') as fid:\n            json.dump(results, fid)\n\n    def evaluate_detections(self, all_boxes, output_dir):\n        res_file = osp.join(output_dir, ('detections_' +\n                                         self._image_set +\n                                         self._year +\n                                         '_results'))\n        if self.config['use_salt']:\n            res_file += '_{}'.format(str(uuid.uuid4()))\n        res_file += '.json'\n        self._write_coco_results_file(all_boxes, res_file)\n        # Only do evaluation on non-test sets\n        if self._image_set.find('test') == -1:\n            self._do_detection_eval(res_file, output_dir)\n        # Optionally cleanup results json file\n        if self.config['cleanup']:\n            os.remove(res_file)\n\n    def competition_mode(self, on):\n        if on:\n            self.config['use_salt'] = False\n            self.config['cleanup'] = False\n        else:\n            self.config['use_salt'] = True\n            self.config['cleanup'] = True\n"
  },
  {
    "path": "lib/datasets/ds_utils.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\n\ndef unique_boxes(boxes, scale=1.0):\n    \"\"\"Return indices of unique boxes.\"\"\"\n    v = np.array([1, 1e3, 1e6, 1e9])\n    hashes = np.round(boxes * scale).dot(v)\n    _, index = np.unique(hashes, return_index=True)\n    return np.sort(index)\n\ndef xywh_to_xyxy(boxes):\n    \"\"\"Convert [x y w h] box format to [x1 y1 x2 y2] format.\"\"\"\n    return np.hstack((boxes[:, 0:2], boxes[:, 0:2] + boxes[:, 2:4] - 1))\n\ndef xyxy_to_xywh(boxes):\n    \"\"\"Convert [x1 y1 x2 y2] box format to [x y w h] format.\"\"\"\n    return np.hstack((boxes[:, 0:2], boxes[:, 2:4] - boxes[:, 0:2] + 1))\n\ndef validate_boxes(boxes, width=0, height=0):\n    \"\"\"Check that a set of boxes are valid.\"\"\"\n    x1 = boxes[:, 0]\n    y1 = boxes[:, 1]\n    x2 = boxes[:, 2]\n    y2 = boxes[:, 3]\n    assert (x1 >= 0).all()\n    assert (y1 >= 0).all()\n    assert (x2 >= x1).all()\n    assert (y2 >= y1).all()\n    assert (x2 < width).all()\n    assert (y2 < height).all()\n\ndef filter_small_boxes(boxes, min_size):\n    w = boxes[:, 2] - boxes[:, 0]\n    h = boxes[:, 3] - boxes[:, 1]\n    keep = np.where((w >= min_size) & (h > min_size))[0]\n    return keep\n"
  },
  {
    "path": "lib/datasets/factory.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Factory method for easily getting imdbs by name.\"\"\"\n\n__sets = {}\n\nfrom datasets.pascal_voc import pascal_voc\nfrom datasets.coco import coco\nimport numpy as np\n\n# Set up voc_<year>_<split> using selective search \"fast\" mode\nfor year in ['2007', '2012']:\n    for split in ['train', 'val', 'trainval', 'test']:\n        name = 'voc_{}_{}'.format(year, split)\n        __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))\n\n# Set up coco_2014_<split>\nfor year in ['2014']:\n    for split in ['train', 'val', 'minival', 'valminusminival']:\n        name = 'coco_{}_{}'.format(year, split)\n        __sets[name] = (lambda split=split, year=year: coco(split, year))\n\n# Set up coco_2015_<split>\nfor year in ['2015']:\n    for split in ['test', 'test-dev']:\n        name = 'coco_{}_{}'.format(year, split)\n        __sets[name] = (lambda split=split, year=year: coco(split, year))\n\ndef get_imdb(name):\n    \"\"\"Get an imdb (image database) by name.\"\"\"\n    if not __sets.has_key(name):\n        raise KeyError('Unknown dataset: {}'.format(name))\n    return __sets[name]()\n\ndef list_imdbs():\n    \"\"\"List all registered imdbs.\"\"\"\n    return __sets.keys()\n"
  },
  {
    "path": "lib/datasets/imdb.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport os\nimport os.path as osp\nimport PIL\nfrom utils.cython_bbox import bbox_overlaps\nimport numpy as np\nimport scipy.sparse\nfrom fast_rcnn.config import cfg\n\nclass imdb(object):\n    \"\"\"Image database.\"\"\"\n\n    def __init__(self, name):\n        self._name = name\n        self._num_classes = 0\n        self._classes = []\n        self._image_index = []\n        self._obj_proposer = 'selective_search'\n        self._roidb = None\n        self._roidb_handler = self.default_roidb\n        # Use this dict for storing dataset specific config options\n        self.config = {}\n\n    @property\n    def name(self):\n        return self._name\n\n    @property\n    def num_classes(self):\n        return len(self._classes)\n\n    @property\n    def classes(self):\n        return self._classes\n\n    @property\n    def image_index(self):\n        return self._image_index\n\n    @property\n    def roidb_handler(self):\n        return self._roidb_handler\n\n    @roidb_handler.setter\n    def roidb_handler(self, val):\n        self._roidb_handler = val\n\n    def set_proposal_method(self, method):\n        method = eval('self.' 
+ method + '_roidb')\n        self.roidb_handler = method\n\n    @property\n    def roidb(self):\n        # A roidb is a list of dictionaries, each with the following keys:\n        #   boxes\n        #   gt_overlaps\n        #   gt_classes\n        #   flipped\n        if self._roidb is not None:\n            return self._roidb\n        self._roidb = self.roidb_handler()\n        return self._roidb\n\n    @property\n    def cache_path(self):\n        cache_path = osp.abspath(osp.join(cfg.DATA_DIR, 'cache'))\n        if not os.path.exists(cache_path):\n            os.makedirs(cache_path)\n        return cache_path\n\n    @property\n    def num_images(self):\n      return len(self.image_index)\n\n    def image_path_at(self, i):\n        raise NotImplementedError\n\n    def default_roidb(self):\n        raise NotImplementedError\n\n    def evaluate_detections(self, all_boxes, output_dir=None):\n        \"\"\"\n        all_boxes is a list of length number-of-classes.\n        Each list element is a list of length number-of-images.\n        Each of those list elements is either an empty list []\n        or a numpy array of detection.\n\n        all_boxes[class][image] = [] or np.array of shape #dets x 5\n        \"\"\"\n        raise NotImplementedError\n\n    def _get_widths(self):\n      return [PIL.Image.open(self.image_path_at(i)).size[0]\n              for i in xrange(self.num_images)]\n\n    def append_flipped_images(self):\n        num_images = self.num_images\n        widths = self._get_widths()\n        for i in xrange(num_images):\n            boxes = self.roidb[i]['boxes'].copy()\n            oldx1 = boxes[:, 0].copy()\n            oldx2 = boxes[:, 2].copy()\n            boxes[:, 0] = widths[i] - oldx2 - 1\n            boxes[:, 2] = widths[i] - oldx1 - 1\n            assert (boxes[:, 2] >= boxes[:, 0]).all()\n            entry = {'boxes' : boxes,\n                     'gt_overlaps' : self.roidb[i]['gt_overlaps'],\n                     'gt_classes' : 
self.roidb[i]['gt_classes'],\n                     'flipped' : True}\n            self.roidb.append(entry)\n        self._image_index = self._image_index * 2\n\n    def evaluate_recall(self, candidate_boxes=None, thresholds=None,\n                        area='all', limit=None):\n        \"\"\"Evaluate detection proposal recall metrics.\n\n        Returns:\n            results: dictionary of results with keys\n                'ar': average recall\n                'recalls': vector recalls at each IoU overlap threshold\n                'thresholds': vector of IoU overlap thresholds\n                'gt_overlaps': vector of all ground-truth overlaps\n        \"\"\"\n        # Record max overlap value for each gt box\n        # Return vector of overlap values\n        areas = { 'all': 0, 'small': 1, 'medium': 2, 'large': 3,\n                  '96-128': 4, '128-256': 5, '256-512': 6, '512-inf': 7}\n        area_ranges = [ [0**2, 1e5**2],    # all\n                        [0**2, 32**2],     # small\n                        [32**2, 96**2],    # medium\n                        [96**2, 1e5**2],   # large\n                        [96**2, 128**2],   # 96-128\n                        [128**2, 256**2],  # 128-256\n                        [256**2, 512**2],  # 256-512\n                        [512**2, 1e5**2],  # 512-inf\n                      ]\n        assert areas.has_key(area), 'unknown area range: {}'.format(area)\n        area_range = area_ranges[areas[area]]\n        gt_overlaps = np.zeros(0)\n        num_pos = 0\n        for i in xrange(self.num_images):\n            # Checking for max_overlaps == 1 avoids including crowd annotations\n            # (...pretty hacking :/)\n            max_gt_overlaps = self.roidb[i]['gt_overlaps'].toarray().max(axis=1)\n            gt_inds = np.where((self.roidb[i]['gt_classes'] > 0) &\n                               (max_gt_overlaps == 1))[0]\n            gt_boxes = self.roidb[i]['boxes'][gt_inds, :]\n            gt_areas = 
self.roidb[i]['seg_areas'][gt_inds]\n            valid_gt_inds = np.where((gt_areas >= area_range[0]) &\n                                     (gt_areas <= area_range[1]))[0]\n            gt_boxes = gt_boxes[valid_gt_inds, :]\n            num_pos += len(valid_gt_inds)\n\n            if candidate_boxes is None:\n                # If candidate_boxes is not supplied, the default is to use the\n                # non-ground-truth boxes from this roidb\n                non_gt_inds = np.where(self.roidb[i]['gt_classes'] == 0)[0]\n                boxes = self.roidb[i]['boxes'][non_gt_inds, :]\n            else:\n                boxes = candidate_boxes[i]\n            if boxes.shape[0] == 0:\n                continue\n            if limit is not None and boxes.shape[0] > limit:\n                boxes = boxes[:limit, :]\n\n            overlaps = bbox_overlaps(boxes.astype(np.float),\n                                     gt_boxes.astype(np.float))\n\n            _gt_overlaps = np.zeros((gt_boxes.shape[0]))\n            for j in xrange(gt_boxes.shape[0]):\n                # find which proposal box maximally covers each gt box\n                argmax_overlaps = overlaps.argmax(axis=0)\n                # and get the iou amount of coverage for each gt box\n                max_overlaps = overlaps.max(axis=0)\n                # find which gt box is 'best' covered (i.e. 
'best' = most iou)\n                gt_ind = max_overlaps.argmax()\n                gt_ovr = max_overlaps.max()\n                assert(gt_ovr >= 0)\n                # find the proposal box that covers the best covered gt box\n                box_ind = argmax_overlaps[gt_ind]\n                # record the iou coverage of this gt box\n                _gt_overlaps[j] = overlaps[box_ind, gt_ind]\n                assert(_gt_overlaps[j] == gt_ovr)\n                # mark the proposal box and the gt box as used\n                overlaps[box_ind, :] = -1\n                overlaps[:, gt_ind] = -1\n            # append recorded iou coverage level\n            gt_overlaps = np.hstack((gt_overlaps, _gt_overlaps))\n\n        gt_overlaps = np.sort(gt_overlaps)\n        if thresholds is None:\n            step = 0.05\n            thresholds = np.arange(0.5, 0.95 + 1e-5, step)\n        recalls = np.zeros_like(thresholds)\n        # compute recall for each iou threshold\n        for i, t in enumerate(thresholds):\n            recalls[i] = (gt_overlaps >= t).sum() / float(num_pos)\n        # ar = 2 * np.trapz(recalls, thresholds)\n        ar = recalls.mean()\n        return {'ar': ar, 'recalls': recalls, 'thresholds': thresholds,\n                'gt_overlaps': gt_overlaps}\n\n    def create_roidb_from_box_list(self, box_list, gt_roidb):\n        assert len(box_list) == self.num_images, \\\n                'Number of boxes must match number of ground-truth images'\n        roidb = []\n        for i in xrange(self.num_images):\n            boxes = box_list[i]\n            num_boxes = boxes.shape[0]\n            overlaps = np.zeros((num_boxes, self.num_classes), dtype=np.float32)\n\n            if gt_roidb is not None and gt_roidb[i]['boxes'].size > 0:\n                gt_boxes = gt_roidb[i]['boxes']\n                gt_classes = gt_roidb[i]['gt_classes']\n                gt_overlaps = bbox_overlaps(boxes.astype(np.float),\n                                            
gt_boxes.astype(np.float))\n                argmaxes = gt_overlaps.argmax(axis=1)\n                maxes = gt_overlaps.max(axis=1)\n                I = np.where(maxes > 0)[0]\n                overlaps[I, gt_classes[argmaxes[I]]] = maxes[I]\n\n            overlaps = scipy.sparse.csr_matrix(overlaps)\n            roidb.append({\n                'boxes' : boxes,\n                'gt_classes' : np.zeros((num_boxes,), dtype=np.int32),\n                'gt_overlaps' : overlaps,\n                'flipped' : False,\n                'seg_areas' : np.zeros((num_boxes,), dtype=np.float32),\n            })\n        return roidb\n\n    @staticmethod\n    def merge_roidbs(a, b):\n        assert len(a) == len(b)\n        for i in xrange(len(a)):\n            a[i]['boxes'] = np.vstack((a[i]['boxes'], b[i]['boxes']))\n            a[i]['gt_classes'] = np.hstack((a[i]['gt_classes'],\n                                            b[i]['gt_classes']))\n            a[i]['gt_overlaps'] = scipy.sparse.vstack([a[i]['gt_overlaps'],\n                                                       b[i]['gt_overlaps']])\n            a[i]['seg_areas'] = np.hstack((a[i]['seg_areas'],\n                                           b[i]['seg_areas']))\n        return a\n\n    def competition_mode(self, on):\n        \"\"\"Turn competition mode on or off.\"\"\"\n        pass\n"
  },
  {
    "path": "lib/datasets/pascal_voc.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport os\nfrom datasets.imdb import imdb\nimport datasets.ds_utils as ds_utils\nimport xml.etree.ElementTree as ET\nimport numpy as np\nimport scipy.sparse\nimport scipy.io as sio\nimport utils.cython_bbox\nimport cPickle\nimport subprocess\nimport uuid\nfrom voc_eval import voc_eval\nfrom fast_rcnn.config import cfg\n\nclass pascal_voc(imdb):\n    def __init__(self, image_set, year, devkit_path=None):\n        imdb.__init__(self, 'voc_' + year + '_' + image_set)\n        self._year = year\n        self._image_set = image_set\n        self._devkit_path = self._get_default_path() if devkit_path is None \\\n                            else devkit_path\n        self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)\n        self._classes = ('__background__', # always index 0\n                         'aeroplane', 'bicycle', 'bird', 'boat',\n                         'bottle', 'bus', 'car', 'cat', 'chair',\n                         'cow', 'diningtable', 'dog', 'horse',\n                         'motorbike', 'person', 'pottedplant',\n                         'sheep', 'sofa', 'train', 'tvmonitor')\n        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))\n        self._image_ext = '.jpg'\n        self._image_index = self._load_image_set_index()\n        # Default to roidb handler\n        self._roidb_handler = self.selective_search_roidb\n        self._salt = str(uuid.uuid4())\n        self._comp_id = 'comp4'\n\n        # PASCAL specific config options\n        self.config = {'cleanup'     : True,\n                       'use_salt'    : True,\n                       'use_diff'    : False,\n                       'matlab_eval' : False,\n                       
'rpn_file'    : None,\n                       'min_size'    : 2}\n\n        assert os.path.exists(self._devkit_path), \\\n                'VOCdevkit path does not exist: {}'.format(self._devkit_path)\n        assert os.path.exists(self._data_path), \\\n                'Path does not exist: {}'.format(self._data_path)\n\n    def image_path_at(self, i):\n        \"\"\"\n        Return the absolute path to image i in the image sequence.\n        \"\"\"\n        return self.image_path_from_index(self._image_index[i])\n\n    def image_path_from_index(self, index):\n        \"\"\"\n        Construct an image path from the image's \"index\" identifier.\n        \"\"\"\n        image_path = os.path.join(self._data_path, 'JPEGImages',\n                                  index + self._image_ext)\n        assert os.path.exists(image_path), \\\n                'Path does not exist: {}'.format(image_path)\n        return image_path\n\n    def _load_image_set_index(self):\n        \"\"\"\n        Load the indexes listed in this dataset's image set file.\n        \"\"\"\n        # Example path to image set file:\n        # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt\n        image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',\n                                      self._image_set + '.txt')\n        assert os.path.exists(image_set_file), \\\n                'Path does not exist: {}'.format(image_set_file)\n        with open(image_set_file) as f:\n            image_index = [x.strip() for x in f.readlines()]\n        return image_index\n\n    def _get_default_path(self):\n        \"\"\"\n        Return the default path where PASCAL VOC is expected to be installed.\n        \"\"\"\n        return os.path.join(cfg.DATA_DIR, 'VOCdevkit' + self._year)\n\n    def gt_roidb(self):\n        \"\"\"\n        Return the database of ground-truth regions of interest.\n\n        This function loads/saves from/to a cache file to speed up future calls.\n        
\"\"\"\n        cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')\n        if os.path.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = cPickle.load(fid)\n            print '{} gt roidb loaded from {}'.format(self.name, cache_file)\n            return roidb\n\n        gt_roidb = [self._load_pascal_annotation(index)\n                    for index in self.image_index]\n        with open(cache_file, 'wb') as fid:\n            cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)\n        print 'wrote gt roidb to {}'.format(cache_file)\n\n        return gt_roidb\n\n    def selective_search_roidb(self):\n        \"\"\"\n        Return the database of selective search regions of interest.\n        Ground-truth ROIs are also included.\n\n        This function loads/saves from/to a cache file to speed up future calls.\n        \"\"\"\n        cache_file = os.path.join(self.cache_path,\n                                  self.name + '_selective_search_roidb.pkl')\n\n        if os.path.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = cPickle.load(fid)\n            print '{} ss roidb loaded from {}'.format(self.name, cache_file)\n            return roidb\n\n        if int(self._year) == 2007 or self._image_set != 'test':\n            gt_roidb = self.gt_roidb()\n            ss_roidb = self._load_selective_search_roidb(gt_roidb)\n            roidb = imdb.merge_roidbs(gt_roidb, ss_roidb)\n        else:\n            roidb = self._load_selective_search_roidb(None)\n        with open(cache_file, 'wb') as fid:\n            cPickle.dump(roidb, fid, cPickle.HIGHEST_PROTOCOL)\n        print 'wrote ss roidb to {}'.format(cache_file)\n\n        return roidb\n\n    def rpn_roidb(self):\n        if int(self._year) == 2007 or self._image_set != 'test':\n            gt_roidb = self.gt_roidb()\n            rpn_roidb = self._load_rpn_roidb(gt_roidb)\n            roidb = 
imdb.merge_roidbs(gt_roidb, rpn_roidb)\n        else:\n            roidb = self._load_rpn_roidb(None)\n\n        return roidb\n\n    def _load_rpn_roidb(self, gt_roidb):\n        filename = self.config['rpn_file']\n        print 'loading {}'.format(filename)\n        assert os.path.exists(filename), \\\n               'rpn data not found at: {}'.format(filename)\n        with open(filename, 'rb') as f:\n            box_list = cPickle.load(f)\n        return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n    def _load_selective_search_roidb(self, gt_roidb):\n        filename = os.path.abspath(os.path.join(cfg.DATA_DIR,\n                                                'selective_search_data',\n                                                self.name + '.mat'))\n        assert os.path.exists(filename), \\\n               'Selective search data not found at: {}'.format(filename)\n        raw_data = sio.loadmat(filename)['boxes'].ravel()\n\n        box_list = []\n        for i in xrange(raw_data.shape[0]):\n            boxes = raw_data[i][:, (1, 0, 3, 2)] - 1\n            keep = ds_utils.unique_boxes(boxes)\n            boxes = boxes[keep, :]\n            keep = ds_utils.filter_small_boxes(boxes, self.config['min_size'])\n            boxes = boxes[keep, :]\n            box_list.append(boxes)\n\n        return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n    def _load_pascal_annotation(self, index):\n        \"\"\"\n        Load image and bounding boxes info from XML file in the PASCAL VOC\n        format.\n        \"\"\"\n        filename = os.path.join(self._data_path, 'Annotations', index + '.xml')\n        tree = ET.parse(filename)\n        objs = tree.findall('object')\n        if not self.config['use_diff']:\n            # Exclude the samples labeled as difficult\n            non_diff_objs = [\n                obj for obj in objs if int(obj.find('difficult').text) == 0]\n            # if len(non_diff_objs) != len(objs):\n            #     print 
'Removed {} difficult objects'.format(\n            #         len(objs) - len(non_diff_objs))\n            objs = non_diff_objs\n        num_objs = len(objs)\n\n        boxes = np.zeros((num_objs, 4), dtype=np.uint16)\n        gt_classes = np.zeros((num_objs), dtype=np.int32)\n        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)\n        # \"Seg\" area for pascal is just the box area\n        seg_areas = np.zeros((num_objs), dtype=np.float32)\n\n        # Load object bounding boxes into a data frame.\n        for ix, obj in enumerate(objs):\n            bbox = obj.find('bndbox')\n            # Make pixel indexes 0-based\n            x1 = float(bbox.find('xmin').text) - 1\n            y1 = float(bbox.find('ymin').text) - 1\n            x2 = float(bbox.find('xmax').text) - 1\n            y2 = float(bbox.find('ymax').text) - 1\n            cls = self._class_to_ind[obj.find('name').text.lower().strip()]\n            boxes[ix, :] = [x1, y1, x2, y2]\n            gt_classes[ix] = cls\n            overlaps[ix, cls] = 1.0\n            seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)\n\n        overlaps = scipy.sparse.csr_matrix(overlaps)\n\n        return {'boxes' : boxes,\n                'gt_classes': gt_classes,\n                'gt_overlaps' : overlaps,\n                'flipped' : False,\n                'seg_areas' : seg_areas}\n\n    def _get_comp_id(self):\n        comp_id = (self._comp_id + '_' + self._salt if self.config['use_salt']\n            else self._comp_id)\n        return comp_id\n\n    def _get_voc_results_file_template(self):\n        # VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt\n        filename = self._get_comp_id() + '_det_' + self._image_set + '_{:s}.txt'\n        path = os.path.join(\n            self._devkit_path,\n            'results',\n            'VOC' + self._year,\n            'Main',\n            filename)\n        return path\n\n    def _write_voc_results_file(self, all_boxes):\n        for cls_ind, 
cls in enumerate(self.classes):\n            if cls == '__background__':\n                continue\n            print 'Writing {} VOC results file'.format(cls)\n            filename = self._get_voc_results_file_template().format(cls)\n            with open(filename, 'wt') as f:\n                for im_ind, index in enumerate(self.image_index):\n                    dets = all_boxes[cls_ind][im_ind]\n                    # dets may be an empty list or an ndarray; comparing an\n                    # ndarray against [] is unreliable, so test the length\n                    if len(dets) == 0:\n                        continue\n                    # the VOCdevkit expects 1-based indices\n                    for k in xrange(dets.shape[0]):\n                        f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\\n'.\n                                format(index, dets[k, -1],\n                                       dets[k, 0] + 1, dets[k, 1] + 1,\n                                       dets[k, 2] + 1, dets[k, 3] + 1))\n\n    def _do_python_eval(self, output_dir='output'):\n        annopath = os.path.join(\n            self._devkit_path,\n            'VOC' + self._year,\n            'Annotations',\n            '{:s}.xml')\n        imagesetfile = os.path.join(\n            self._devkit_path,\n            'VOC' + self._year,\n            'ImageSets',\n            'Main',\n            self._image_set + '.txt')\n        cachedir = os.path.join(self._devkit_path, 'annotations_cache')\n        aps = []\n        # The PASCAL VOC metric changed in 2010\n        use_07_metric = int(self._year) < 2010\n        print 'VOC07 metric? 
' + ('Yes' if use_07_metric else 'No')\n        if not os.path.isdir(output_dir):\n            os.mkdir(output_dir)\n        for i, cls in enumerate(self._classes):\n            if cls == '__background__':\n                continue\n            filename = self._get_voc_results_file_template().format(cls)\n            rec, prec, ap = voc_eval(\n                filename, annopath, imagesetfile, cls, cachedir, ovthresh=0.5,\n                use_07_metric=use_07_metric)\n            aps += [ap]\n            print('AP for {} = {:.4f}'.format(cls, ap))\n            with open(os.path.join(output_dir, cls + '_pr.pkl'), 'w') as f:\n                cPickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)\n        print('Mean AP = {:.4f}'.format(np.mean(aps)))\n        print('~~~~~~~~')\n        print('Results:')\n        for ap in aps:\n            print('{:.3f}'.format(ap))\n        print('{:.3f}'.format(np.mean(aps)))\n        print('~~~~~~~~')\n        print('')\n        print('--------------------------------------------------------------')\n        print('Results computed with the **unofficial** Python eval code.')\n        print('Results should be very close to the official MATLAB eval code.')\n        print('Recompute with `./tools/reval.py --matlab ...` for your paper.')\n        print('-- Thanks, The Management')\n        print('--------------------------------------------------------------')\n\n    def _do_matlab_eval(self, output_dir='output'):\n        print '-----------------------------------------------------'\n        print 'Computing results with the official MATLAB eval code.'\n        print '-----------------------------------------------------'\n        path = os.path.join(cfg.ROOT_DIR, 'lib', 'datasets',\n                            'VOCdevkit-matlab-wrapper')\n        cmd = 'cd {} && '.format(path)\n        cmd += '{:s} -nodisplay -nodesktop '.format(cfg.MATLAB)\n        cmd += '-r \"dbstop if error; '\n        cmd += 
'voc_eval(\\'{:s}\\',\\'{:s}\\',\\'{:s}\\',\\'{:s}\\'); quit;\"' \\\n               .format(self._devkit_path, self._get_comp_id(),\n                       self._image_set, output_dir)\n        print('Running:\\n{}'.format(cmd))\n        status = subprocess.call(cmd, shell=True)\n\n    def evaluate_detections(self, all_boxes, output_dir):\n        self._write_voc_results_file(all_boxes)\n        self._do_python_eval(output_dir)\n        if self.config['matlab_eval']:\n            self._do_matlab_eval(output_dir)\n        if self.config['cleanup']:\n            for cls in self._classes:\n                if cls == '__background__':\n                    continue\n                filename = self._get_voc_results_file_template().format(cls)\n                os.remove(filename)\n\n    def competition_mode(self, on):\n        if on:\n            self.config['use_salt'] = False\n            self.config['cleanup'] = False\n        else:\n            self.config['use_salt'] = True\n            self.config['cleanup'] = True\n\nif __name__ == '__main__':\n    from datasets.pascal_voc import pascal_voc\n    d = pascal_voc('trainval', '2007')\n    res = d.roidb\n    from IPython import embed; embed()\n"
  },
  {
    "path": "lib/datasets/tools/mcg_munge.py",
    "content": "import os\nimport sys\n\n\"\"\"Hacky tool to convert file system layout of MCG boxes downloaded from\nhttp://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/\nso that it's consistent with those computed by Jan Hosang (see:\nhttp://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-\n  computing/research/object-recognition-and-scene-understanding/how-\n  good-are-detection-proposals-really/)\n\nNB: Boxes from the MCG website are in (y1, x1, y2, x2) order.\nBoxes from Hosang et al. are in (x1, y1, x2, y2) order.\n\"\"\"\n\ndef munge(src_dir):\n    # stored as: ./MCG-COCO-val2014-boxes/COCO_val2014_000000193401.mat\n    # want:      ./MCG/mat/COCO_val2014_0/COCO_val2014_000000141/COCO_val2014_000000141334.mat\n\n    files = os.listdir(src_dir)\n    for fn in files:\n        base, ext = os.path.splitext(fn)\n        # first 14 chars / first 22 chars / all chars + .mat\n        # COCO_val2014_0/COCO_val2014_000000447/COCO_val2014_000000447991.mat\n        first = base[:14]\n        second = base[:22]\n        dst_dir = os.path.join('MCG', 'mat', first, second)\n        if not os.path.exists(dst_dir):\n            os.makedirs(dst_dir)\n        src = os.path.join(src_dir, fn)\n        dst = os.path.join(dst_dir, fn)\n        print 'MV: {} -> {}'.format(src, dst)\n        os.rename(src, dst)\n\nif __name__ == '__main__':\n    # src_dir should look something like:\n    #  src_dir = 'MCG-COCO-val2014-boxes'\n    src_dir = sys.argv[1]\n    munge(src_dir)\n"
  },
  {
    "path": "lib/datasets/voc_eval.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Bharath Hariharan\n# --------------------------------------------------------\n\nimport xml.etree.ElementTree as ET\nimport os\nimport cPickle\nimport numpy as np\n\ndef parse_rec(filename):\n    \"\"\" Parse a PASCAL VOC xml file \"\"\"\n    tree = ET.parse(filename)\n    objects = []\n    for obj in tree.findall('object'):\n        obj_struct = {}\n        obj_struct['name'] = obj.find('name').text\n        obj_struct['pose'] = obj.find('pose').text\n        obj_struct['truncated'] = int(obj.find('truncated').text)\n        obj_struct['difficult'] = int(obj.find('difficult').text)\n        bbox = obj.find('bndbox')\n        obj_struct['bbox'] = [int(bbox.find('xmin').text),\n                              int(bbox.find('ymin').text),\n                              int(bbox.find('xmax').text),\n                              int(bbox.find('ymax').text)]\n        objects.append(obj_struct)\n\n    return objects\n\ndef voc_ap(rec, prec, use_07_metric=False):\n    \"\"\" ap = voc_ap(rec, prec, [use_07_metric])\n    Compute VOC AP given precision and recall.\n    If use_07_metric is true, uses the\n    VOC 07 11 point method (default:False).\n    \"\"\"\n    if use_07_metric:\n        # 11 point metric\n        ap = 0.\n        for t in np.arange(0., 1.1, 0.1):\n            if np.sum(rec >= t) == 0:\n                p = 0\n            else:\n                p = np.max(prec[rec >= t])\n            ap = ap + p / 11.\n    else:\n        # correct AP calculation\n        # first append sentinel values at the end\n        mrec = np.concatenate(([0.], rec, [1.]))\n        mpre = np.concatenate(([0.], prec, [0.]))\n\n        # compute the precision envelope\n        for i in range(mpre.size - 1, 0, -1):\n            mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])\n\n        # to calculate area under PR curve, look 
for points\n        # where X axis (recall) changes value\n        i = np.where(mrec[1:] != mrec[:-1])[0]\n\n        # and sum (\\Delta recall) * prec\n        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])\n    return ap\n\ndef voc_eval(detpath,\n             annopath,\n             imagesetfile,\n             classname,\n             cachedir,\n             ovthresh=0.5,\n             use_07_metric=False):\n    \"\"\"rec, prec, ap = voc_eval(detpath,\n                                annopath,\n                                imagesetfile,\n                                classname,\n                                [ovthresh],\n                                [use_07_metric])\n\n    Top level function that does the PASCAL VOC evaluation.\n\n    detpath: Path to detections\n        detpath.format(classname) should produce the detection results file.\n    annopath: Path to annotations\n        annopath.format(imagename) should be the xml annotations file.\n    imagesetfile: Text file containing the list of images, one image per line.\n    classname: Category name (duh)\n    cachedir: Directory for caching the annotations\n    [ovthresh]: Overlap threshold (default = 0.5)\n    [use_07_metric]: Whether to use VOC07's 11 point AP computation\n        (default False)\n    \"\"\"\n    # assumes detections are in detpath.format(classname)\n    # assumes annotations are in annopath.format(imagename)\n    # assumes imagesetfile is a text file with each line an image name\n    # cachedir caches the annotations in a pickle file\n\n    # first load gt\n    if not os.path.isdir(cachedir):\n        os.mkdir(cachedir)\n    cachefile = os.path.join(cachedir, 'annots.pkl')\n    # read list of images\n    with open(imagesetfile, 'r') as f:\n        lines = f.readlines()\n    imagenames = [x.strip() for x in lines]\n\n    if not os.path.isfile(cachefile):\n        # load annots\n        recs = {}\n        for i, imagename in enumerate(imagenames):\n            recs[imagename] = 
parse_rec(annopath.format(imagename))\n            if i % 100 == 0:\n                print 'Reading annotation for {:d}/{:d}'.format(\n                    i + 1, len(imagenames))\n        # save\n        print 'Saving cached annotations to {:s}'.format(cachefile)\n        with open(cachefile, 'w') as f:\n            cPickle.dump(recs, f)\n    else:\n        # load\n        with open(cachefile, 'r') as f:\n            recs = cPickle.load(f)\n\n    # extract gt objects for this class\n    class_recs = {}\n    npos = 0\n    for imagename in imagenames:\n        R = [obj for obj in recs[imagename] if obj['name'] == classname]\n        bbox = np.array([x['bbox'] for x in R])\n        difficult = np.array([x['difficult'] for x in R]).astype(np.bool)\n        det = [False] * len(R)\n        npos = npos + sum(~difficult)\n        class_recs[imagename] = {'bbox': bbox,\n                                 'difficult': difficult,\n                                 'det': det}\n\n    # read dets\n    detfile = detpath.format(classname)\n    with open(detfile, 'r') as f:\n        lines = f.readlines()\n\n    splitlines = [x.strip().split(' ') for x in lines]\n    image_ids = [x[0] for x in splitlines]\n    confidence = np.array([float(x[1]) for x in splitlines])\n    BB = np.array([[float(z) for z in x[2:]] for x in splitlines])\n\n    # sort by confidence\n    sorted_ind = np.argsort(-confidence)\n    sorted_scores = np.sort(-confidence)\n    # comparing an ndarray against [] is unreliable; guard on the array size\n    # so an empty detection file does not crash the 2-D indexing below\n    if BB.shape[0] > 0:\n        BB = BB[sorted_ind, :]\n    image_ids = [image_ids[x] for x in sorted_ind]\n\n    # go down dets and mark TPs and FPs\n    nd = len(image_ids)\n    tp = np.zeros(nd)\n    fp = np.zeros(nd)\n    for d in range(nd):\n        R = class_recs[image_ids[d]]\n        bb = BB[d, :].astype(float)\n        ovmax = -np.inf\n        BBGT = R['bbox'].astype(float)\n\n        if BBGT.size > 0:\n            # compute overlaps\n            # intersection\n            ixmin = 
np.maximum(BBGT[:, 0], bb[0])\n            iymin = np.maximum(BBGT[:, 1], bb[1])\n            ixmax = np.minimum(BBGT[:, 2], bb[2])\n            iymax = np.minimum(BBGT[:, 3], bb[3])\n            iw = np.maximum(ixmax - ixmin + 1., 0.)\n            ih = np.maximum(iymax - iymin + 1., 0.)\n            inters = iw * ih\n\n            # union\n            uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +\n                   (BBGT[:, 2] - BBGT[:, 0] + 1.) *\n                   (BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)\n\n            overlaps = inters / uni\n            ovmax = np.max(overlaps)\n            jmax = np.argmax(overlaps)\n\n        if ovmax > ovthresh:\n            if not R['difficult'][jmax]:\n                if not R['det'][jmax]:\n                    tp[d] = 1.\n                    R['det'][jmax] = 1\n                else:\n                    fp[d] = 1.\n        else:\n            fp[d] = 1.\n\n    # compute precision recall\n    fp = np.cumsum(fp)\n    tp = np.cumsum(tp)\n    rec = tp / float(npos)\n    # avoid divide by zero in case the first detection matches a difficult\n    # ground truth\n    prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)\n    ap = voc_ap(rec, prec, use_07_metric)\n\n    return rec, prec, ap\n"
  },
  {
    "path": "lib/fast_rcnn/FP_Net_end2end.sh",
    "content": "#!/bin/bash\n# Usage:\n# ./experiments/scripts/faster_rcnn_end2end.sh GPU NET DATASET [options args to {train,test}_net.py]\n# DATASET is either pascal_voc or coco.\n#\n# Example:\n# ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc \\\n#   --set EXP_DIR foobar RNG_SEED 42 TRAIN.SCALES \"[400, 500, 600, 700]\"\n\nset -x\nset -e\n\nexport PYTHONUNBUFFERED=\"True\"\n\nGPU_ID=$1\nNET=$2\nNET_lc=${NET,,}\nDATASET=$3\n\narray=( $@ )\nlen=${#array[@]}\nEXTRA_ARGS=${array[@]:3:$len}\nEXTRA_ARGS_SLUG=${EXTRA_ARGS// /_}\n\ncase $DATASET in\n  pascal_voc)\n    TRAIN_IMDB=\"voc_2007_trainval\"\n    TEST_IMDB=\"voc_2007_test\"\n    PT_DIR=\"pascal_voc\"\n    ITERS=150000\n    ;;\n  coco)\n    # This is a very long and slow training schedule\n    # You can probably use fewer iterations and reduce the\n    # time to the LR drop (set in the solver to 350,000 iterations).\n    TRAIN_IMDB=\"coco_2014_train\"\n    TEST_IMDB=\"coco_2014_minival\"\n    PT_DIR=\"coco\"\n    ITERS=490000\n    ;;\n  *)\n    echo \"No dataset given\"\n    exit 1\n    ;;\nesac\n\nLOG=\"experiments/logs/faster_rcnn_end2end_${NET}_${EXTRA_ARGS_SLUG}.txt.`date +'%Y-%m-%d_%H-%M-%S'`\"\nexec &> >(tee -a \"$LOG\")\necho Logging output to \"$LOG\"\n\ntime ./tools/train_net.py --gpu ${GPU_ID} \\\n  --solver models/${PT_DIR}/${NET}/FP_Net_end2end/solver.prototxt \\\n  --weights data/pretrained_model/ResNet50.v2.caffemodel \\\n  --imdb ${TRAIN_IMDB} \\\n  --iters ${ITERS} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n\nset +x\nNET_FINAL=`grep -B 1 \"done solving\" ${LOG} | grep \"Wrote snapshot\" | awk '{print $4}'`\nset -x\n\ntime ./tools/test_net.py --gpu ${GPU_ID} \\\n  --def models/${PT_DIR}/${NET}/FP_Net_end2end/test.prototxt \\\n  --net ${NET_FINAL} \\\n  --imdb ${TEST_IMDB} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n"
  },
  {
    "path": "lib/fast_rcnn/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/fast_rcnn/bbox_transform.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\n\ndef bbox_transform(ex_rois, gt_rois):\n    ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0\n    ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0\n    ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths\n    ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights\n\n    gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0\n    gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0\n    gt_ctr_x = gt_rois[:, 0] + 0.5 * gt_widths\n    gt_ctr_y = gt_rois[:, 1] + 0.5 * gt_heights\n\n    targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths\n    targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights\n    targets_dw = np.log(gt_widths / ex_widths)\n    targets_dh = np.log(gt_heights / ex_heights)\n\n    targets = np.vstack(\n        (targets_dx, targets_dy, targets_dw, targets_dh)).transpose()\n    return targets\n\ndef bbox_transform_inv(boxes, deltas):\n    if boxes.shape[0] == 0:\n        return np.zeros((0, deltas.shape[1]), dtype=deltas.dtype)\n\n    boxes = boxes.astype(deltas.dtype, copy=False)\n\n    widths = boxes[:, 2] - boxes[:, 0] + 1.0\n    heights = boxes[:, 3] - boxes[:, 1] + 1.0\n    ctr_x = boxes[:, 0] + 0.5 * widths\n    ctr_y = boxes[:, 1] + 0.5 * heights\n\n    dx = deltas[:, 0::4]\n    dy = deltas[:, 1::4]\n    dw = deltas[:, 2::4]\n    dh = deltas[:, 3::4]\n\n    pred_ctr_x = dx * widths[:, np.newaxis] + ctr_x[:, np.newaxis]\n    pred_ctr_y = dy * heights[:, np.newaxis] + ctr_y[:, np.newaxis]\n    pred_w = np.exp(dw) * widths[:, np.newaxis]\n    pred_h = np.exp(dh) * heights[:, np.newaxis]\n\n    pred_boxes = np.zeros(deltas.shape, dtype=deltas.dtype)\n    # x1\n    pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w\n    # y1\n    pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * pred_h\n    # 
x2\n    pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * pred_w\n    # y2\n    pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * pred_h\n\n    return pred_boxes\n\ndef clip_boxes(boxes, im_shape):\n    \"\"\"\n    Clip boxes to image boundaries.\n    \"\"\"\n\n    # x1 >= 0\n    boxes[:, 0::4] = np.maximum(np.minimum(boxes[:, 0::4], im_shape[1] - 1), 0)\n    # y1 >= 0\n    boxes[:, 1::4] = np.maximum(np.minimum(boxes[:, 1::4], im_shape[0] - 1), 0)\n    # x2 < im_shape[1]\n    boxes[:, 2::4] = np.maximum(np.minimum(boxes[:, 2::4], im_shape[1] - 1), 0)\n    # y2 < im_shape[0]\n    boxes[:, 3::4] = np.maximum(np.minimum(boxes[:, 3::4], im_shape[0] - 1), 0)\n    return boxes\n"
  },
  {
    "path": "lib/fast_rcnn/config.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Fast R-CNN config system.\n\nThis file specifies default config options for Fast R-CNN. You should not\nchange values in this file. Instead, you should write a config file (in yaml)\nand use cfg_from_file(yaml_file) to load it and override the default options.\n\nMost tools in $ROOT/tools take a --cfg option to specify an override file.\n    - See tools/{train,test}_net.py for example code that uses cfg_from_file()\n    - See experiments/cfgs/*.yml for example YAML config override files\n\"\"\"\n\nimport os\nimport os.path as osp\nimport numpy as np\n# `pip install easydict` if you don't have it\nfrom easydict import EasyDict as edict\n\n__C = edict()\n# Consumers can get config by:\n#   from fast_rcnn_config import cfg\ncfg = __C\n\n#\n# Training options\n#\n\n\n\n__C.TRAIN = edict()\n\n# Scales to use during training (can list multiple scales)\n# Each scale is the pixel size of an image's shortest side\n__C.TRAIN.SCALES = (768,)\n\n__C.TRAIN.IMAGE_STRIDE = 64\n__C.FPNRATIOS = [0.5,1,2]\n__C.FPNRSCALES = 2 ** np.arange(4,6)\n\n\n\n# Max pixel size of the longest side of a scaled input image\n__C.TRAIN.MAX_SIZE = 1280\n\n# Images to use per minibatch\n__C.TRAIN.IMS_PER_BATCH = 2\n\n# Minibatch size (number of regions of interest [ROIs])\n__C.TRAIN.BATCH_SIZE = 256\n\n# Fraction of minibatch that is labeled foreground (i.e. 
class > 0)\n__C.TRAIN.FG_FRACTION = 0.25\n\n# Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n__C.TRAIN.FG_THRESH = 0.5\n\n# Overlap threshold for a ROI to be considered background (class = 0 if\n# overlap in [LO, HI))\n__C.TRAIN.BG_THRESH_HI = 0.5\n__C.TRAIN.BG_THRESH_LO = 0.0\n\n# Use horizontally-flipped images during training?\n__C.TRAIN.USE_FLIPPED = True\n\n# Train bounding-box regressors\n__C.TRAIN.BBOX_REG = True\n\n# Overlap required between a ROI and ground-truth box in order for that ROI to\n# be used as a bounding-box regression training example\n__C.TRAIN.BBOX_THRESH = 0.5\n\n# Iterations between snapshots\n__C.TRAIN.SNAPSHOT_ITERS = 10000\n\n# solver.prototxt specifies the snapshot path prefix, this adds an optional\n# infix to yield the path: <prefix>[_<infix>]_iters_XYZ.caffemodel\n__C.TRAIN.SNAPSHOT_INFIX = ''\n\n# Use a prefetch thread in roi_data_layer.layer\n# So far I haven't found this useful; likely more engineering work is required\n__C.TRAIN.USE_PREFETCH = False\n\n# Normalize the targets (subtract empirical mean, divide by empirical stddev)\n__C.TRAIN.BBOX_NORMALIZE_TARGETS = True\n# Deprecated (inside weights)\n__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)\n# Normalize the targets using \"precomputed\" (or made up) means and stdevs\n# (BBOX_NORMALIZE_TARGETS must also be True)\n__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = True\n__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)\n__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)\n\n# Train using these proposals\n__C.TRAIN.PROPOSAL_METHOD = 'gt'\n\n# Make minibatches from images that have similar aspect ratios (i.e. 
both\n# tall and thin or both short and wide) in order to avoid wasting computation\n# on zero-padding.\n__C.TRAIN.ASPECT_GROUPING = True\n\n# Use RPN to detect objects\n__C.TRAIN.HAS_RPN = False\n# IOU >= thresh: positive example\n__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7\n# IOU < thresh: negative example\n__C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3\n# If an anchor satisfies both the positive and negative conditions, set it to negative\n__C.TRAIN.RPN_CLOBBER_POSITIVES = False\n# Max number of foreground examples\n__C.TRAIN.RPN_FG_FRACTION = 0.5\n# Total number of examples\n__C.TRAIN.RPN_BATCHSIZE = 256\n# NMS threshold used on RPN proposals\n__C.TRAIN.RPN_NMS_THRESH = 0.7\n# Number of top scoring boxes to keep before applying NMS to RPN proposals\n__C.TRAIN.RPN_PRE_NMS_TOP_N = 12000\n# Number of top scoring boxes to keep after applying NMS to RPN proposals\n__C.TRAIN.RPN_POST_NMS_TOP_N = 2000\n# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)\n__C.TRAIN.RPN_MIN_SIZE = 0\n# Deprecated (outside weights)\n__C.TRAIN.RPN_BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)\n# Give the positive RPN examples weight of p * 1 / {num positives}\n# and give negatives a weight of (1 - p)\n# Set to -1.0 to use uniform example weighting\n__C.TRAIN.RPN_POSITIVE_WEIGHT = -1.0\n\n\n#\n# Testing options\n#\n\n__C.TEST = edict()\n\n# Scales to use during testing (can list multiple scales)\n# Each scale is the pixel size of an image's shortest side\n__C.TEST.SCALES = (768,)\n\n# Max pixel size of the longest side of a scaled input image\n__C.TEST.MAX_SIZE = 1280\n\n# Overlap threshold used for non-maximum suppression (suppress boxes with\n# IoU >= this threshold)\n__C.TEST.NMS = 0.3\n\n# Experimental: treat the (K+1) units in the cls_score layer as linear\n# predictors (trained, eg, with one-vs-rest SVMs).\n__C.TEST.SVM = False\n\n# Test using bounding-box regressors\n__C.TEST.BBOX_REG = True\n\n# Propose boxes\n__C.TEST.HAS_RPN = False\n\n# Test using these 
proposals\n__C.TEST.PROPOSAL_METHOD = 'selective_search'\n\n## NMS threshold used on RPN proposals\n__C.TEST.RPN_NMS_THRESH = 0.7\n## Number of top scoring boxes to keep before apply NMS to RPN proposals\n__C.TEST.RPN_PRE_NMS_TOP_N = 6000\n## Number of top scoring boxes to keep after applying NMS to RPN proposals\n__C.TEST.RPN_POST_NMS_TOP_N = 1000\n# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)\n__C.TEST.RPN_MIN_SIZE = 16\n# Apply box scoring heuristics\n__C.TEST.BBOX_VOTE_N_WEIGHTED_SCORE = 1\n__C.TEST.BBOX_VOTE_WEIGHT_EMPTY = 0.5\n\n\n#\n# MISC\n#\n\n# The mapping from image coordinates to feature map coordinates might cause\n# some boxes that are distinct in image space to become identical in feature\n# coordinates. If DEDUP_BOXES > 0, then DEDUP_BOXES is used as the scale factor\n# for identifying duplicate boxes.\n# 1/16 is correct for {Alex,Caffe}Net, VGG_CNN_M_1024, and VGG16\n__C.DEDUP_BOXES = -1\n\n# Pixel mean values (BGR order) as a (1, 1, 3) array\n# We use the same pixel mean for all networks even though it's not exactly what\n# they were trained with\n__C.PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])\n\n# For reproducibility\n__C.RNG_SEED = 3\n\n# A small number that's used many times\n__C.EPS = 1e-14\n\n# Root directory of project\n__C.ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..'))\n\n# Data directory\n__C.DATA_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'data'))\n\n# Model directory\n__C.MODELS_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'models', 'pascal_voc'))\n\n# Name (or path to) the matlab executable\n__C.MATLAB = 'matlab'\n\n# Place outputs under an experiments directory\n__C.EXP_DIR = 'default'\n\n# Use GPU implementation of non-maximum suppression\n__C.USE_GPU_NMS = True\n\n# Default GPU device id\n__C.GPU_ID = 0\n\n\ndef get_output_dir(imdb, net=None):\n    \"\"\"Return the directory where experimental artifacts are placed.\n    If the directory does not exist, it is 
created.\n\n    A canonical path is built using the name from an imdb and a network\n    (if not None).\n    \"\"\"\n    outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'output', __C.EXP_DIR, imdb.name))\n    if net is not None:\n        outdir = osp.join(outdir, net.name)\n    if not os.path.exists(outdir):\n        os.makedirs(outdir)\n    return outdir\n\ndef _merge_a_into_b(a, b):\n    \"\"\"Merge config dictionary a into config dictionary b, clobbering the\n    options in b whenever they are also specified in a.\n    \"\"\"\n    if type(a) is not edict:\n        return\n\n    for k, v in a.iteritems():\n        # a must specify keys that are in b\n        if not b.has_key(k):\n            raise KeyError('{} is not a valid config key'.format(k))\n\n        # the types must match, too\n        old_type = type(b[k])\n        if old_type is not type(v):\n            if isinstance(b[k], np.ndarray):\n                v = np.array(v, dtype=b[k].dtype)\n            else:\n                raise ValueError(('Type mismatch ({} vs. 
{}) '\n                                'for config key: {}').format(type(b[k]),\n                                                            type(v), k))\n\n        # recursively merge dicts\n        if type(v) is edict:\n            try:\n                _merge_a_into_b(a[k], b[k])\n            except:\n                print('Error under config key: {}'.format(k))\n                raise\n        else:\n            b[k] = v\n\ndef cfg_from_file(filename):\n    \"\"\"Load a config file and merge it into the default options.\"\"\"\n    import yaml\n    with open(filename, 'r') as f:\n        yaml_cfg = edict(yaml.load(f))\n\n    _merge_a_into_b(yaml_cfg, __C)\n\ndef cfg_from_list(cfg_list):\n    \"\"\"Set config keys via list (e.g., from command line).\"\"\"\n    from ast import literal_eval\n    assert len(cfg_list) % 2 == 0\n    for k, v in zip(cfg_list[0::2], cfg_list[1::2]):\n        key_list = k.split('.')\n        d = __C\n        for subkey in key_list[:-1]:\n            assert d.has_key(subkey)\n            d = d[subkey]\n        subkey = key_list[-1]\n        assert d.has_key(subkey)\n        try:\n            value = literal_eval(v)\n        except:\n            # handle the case when v is a string literal\n            value = v\n        assert type(value) == type(d[subkey]), \\\n            'type {} does not match original type {}'.format(\n            type(value), type(d[subkey]))\n        d[subkey] = value\n"
  },
  {
    "path": "lib/fast_rcnn/nms_wrapper.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nfrom fast_rcnn.config import cfg\nfrom nms.gpu_nms import gpu_nms\nfrom nms.cpu_nms import cpu_nms\n\ndef nms(dets, thresh, force_cpu=False):\n    \"\"\"Dispatch to either CPU or GPU NMS implementations.\"\"\"\n\n    if dets.shape[0] == 0:\n        return []\n    if cfg.USE_GPU_NMS and not force_cpu:\n        return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)\n    else:\n        return cpu_nms(dets, thresh)\n"
  },
  {
    "path": "lib/fast_rcnn/test.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Test a Fast R-CNN network on an imdb (image database).\"\"\"\n\nfrom fast_rcnn.config import cfg, get_output_dir\nfrom fast_rcnn.bbox_transform import clip_boxes, bbox_transform_inv\nimport argparse\nfrom utils.timer import Timer\nimport numpy as np\nimport cv2\nimport caffe\nfrom fast_rcnn.nms_wrapper import nms\nimport cPickle\nfrom utils.blob import im_list_to_blob\nimport os\nfrom utils.cython_bbox import bbox_overlaps\n\ndef _get_image_blob(im):\n    \"\"\"Converts an image into a network input.\n\n    Arguments:\n        im (ndarray): a color image in BGR order\n\n    Returns:\n        blob (ndarray): a data blob holding an image pyramid\n        im_scale_factors (list): list of image scales (relative to im) used\n            in the image pyramid\n    \"\"\"\n    im_orig = im.astype(np.float32, copy=True)\n    im_orig -= cfg.PIXEL_MEANS\n\n    im_shape = im_orig.shape\n    im_size_min = np.min(im_shape[0:2])\n    im_size_max = np.max(im_shape[0:2])\n    stride = cfg.TRAIN.IMAGE_STRIDE\n    processed_ims = []\n    im_scale_factors = []\n\n    for target_size in cfg.TEST.SCALES:\n        im_scale = float(target_size) / float(im_size_min)\n        # Prevent the biggest axis from being more than MAX_SIZE\n        if np.round(im_scale * im_size_max) > cfg.TEST.MAX_SIZE:\n            im_scale = float(cfg.TEST.MAX_SIZE) / float(im_size_max)\n        im = cv2.resize(im_orig, None, None, fx=im_scale, fy=im_scale,\n                        interpolation=cv2.INTER_LINEAR)\n        if stride == 0:\n            im_scale_factors.append(im_scale)\n            processed_ims.append(im)\n        else:\n            # pad to product of stride\n            im_height = int(np.ceil(im.shape[0] / float(stride)) 
* stride)\n            im_width = int(np.ceil(im.shape[1] / float(stride)) * stride)\n            im_channel = im.shape[2]\n            padded_im = np.zeros((im_height, im_width, im_channel))\n            padded_im[:im.shape[0], :im.shape[1], :] = im\n            im_scale_factors.append(im_scale)\n            processed_ims.append(padded_im)\n\n    # Create a blob to hold the input images\n    blob = im_list_to_blob(processed_ims)\n\n    return blob, np.array(im_scale_factors)\n\ndef _get_rois_blob(im_rois, im_scale_factors):\n    \"\"\"Converts RoIs into network inputs.\n\n    Arguments:\n        im_rois (ndarray): R x 4 matrix of RoIs in original image coordinates\n        im_scale_factors (list): scale factors as returned by _get_image_blob\n\n    Returns:\n        blob (ndarray): R x 5 matrix of RoIs in the image pyramid\n    \"\"\"\n    rois, levels = _project_im_rois(im_rois, im_scale_factors)\n    rois_blob = np.hstack((levels, rois))\n    return rois_blob.astype(np.float32, copy=False)\n\ndef _project_im_rois(im_rois, scales):\n    \"\"\"Project image RoIs into the image pyramid built by _get_image_blob.\n\n    Arguments:\n        im_rois (ndarray): R x 4 matrix of RoIs in original image coordinates\n        scales (list): scale factors as returned by _get_image_blob\n\n    Returns:\n        rois (ndarray): R x 4 matrix of projected RoI coordinates\n        levels (list): image pyramid levels used by each projected RoI\n    \"\"\"\n    im_rois = im_rois.astype(np.float, copy=False)\n\n    if len(scales) > 1:\n        widths = im_rois[:, 2] - im_rois[:, 0] + 1\n        heights = im_rois[:, 3] - im_rois[:, 1] + 1\n\n        areas = widths * heights\n        scaled_areas = areas[:, np.newaxis] * (scales[np.newaxis, :] ** 2)\n        diff_areas = np.abs(scaled_areas - 224 * 224)\n        levels = diff_areas.argmin(axis=1)[:, np.newaxis]\n    else:\n        levels = np.zeros((im_rois.shape[0], 1), dtype=np.int)\n\n    rois = im_rois * scales[levels]\n\n    return 
rois, levels\n\ndef _get_blobs(im, rois):\n    \"\"\"Convert an image and RoIs within that image into network inputs.\"\"\"\n    blobs = {'data' : None, 'rois' : None}\n    blobs['data'], im_scale_factors = _get_image_blob(im)\n    if not cfg.TEST.HAS_RPN:\n        blobs['rois'] = _get_rois_blob(rois, im_scale_factors)\n    return blobs, im_scale_factors\n\ndef transform_inverse(im_tensor, pixel_means):\n    \"\"\"\n    transform from mxnet im_tensor to ordinary RGB image\n    im_tensor is limited to one image\n    :param im_tensor: [batch, channel, height, width]\n    :param pixel_means: [B, G, R pixel means]\n    :return: im [height, width, channel(RGB)]\n    \"\"\"\n    assert im_tensor.shape[0] == 1\n    im_tensor = im_tensor.copy()\n    # put channel back\n    channel_swap = (0, 2, 3, 1)\n    im_tensor = im_tensor.transpose(channel_swap)\n    im = im_tensor[0]\n    assert im.shape[2] == 3\n    im += pixel_means[[2, 1, 0]]\n    im = im.astype(np.uint8)\n    return im\n\ndef vis_rois_detection(im_array, detections):\n    \"\"\"\n    visualize all detections in one image\n    :param im_array: [b=1 c h w] in rgb\n    :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]\n    :param class_names: list of names in imdb\n    :param scale: visualize the scaled image\n    :return:\n    \"\"\"\n    import matplotlib  \n    matplotlib.use('Agg') \n    import matplotlib.pyplot as plt\n    from matplotlib.pyplot import savefig  \n    import random\n    a =  [103.06 ,115.9 ,123.15]\n    a = np.array(a)\n    im = transform_inverse(im_array,a)\n    plt.imshow(im)\n  #  print class_names.shape\n    for j in range(detections.shape[0]):\n\n        color = (random.random(), random.random(), random.random())  # generate a random color\n        dets = detections[j]\n     \n        det =dets\n\n        bbox = det[0:] \n \n        rect = plt.Rectangle((bbox[0], bbox[1]),\n                                 bbox[2] - bbox[0],\n                                 
bbox[3] - bbox[1], fill=False,\n                                 edgecolor=color, linewidth=3.5)\n        plt.gca().add_patch(rect)\n    plt.show()\n    name = np.mean(im)\n    savefig ('vis/'+str(name)+'.png')\n    plt.clf()\n    plt.cla()\n\n    plt. close(0)\ndef bbox_vote(dets_NMS, dets_all, thresh=0.5):\n    dets_voted = np.zeros_like(dets_NMS)   # Empty matrix with the same shape and type\n\n    _overlaps = bbox_overlaps(\n\t\t\tnp.ascontiguousarray(dets_NMS[:, 0:4], dtype=np.float),\n\t\t\tnp.ascontiguousarray(dets_all[:, 0:4], dtype=np.float))\n\n    # for each survived box\n    for i, det in enumerate(dets_NMS):\n        dets_overlapped = dets_all[np.where(_overlaps[i, :] >= thresh)[0]]\n        assert(len(dets_overlapped) > 0)\n\n        boxes = dets_overlapped[:, 0:4]\n        scores = dets_overlapped[:, 4]\n\n        out_box = np.dot(scores, boxes)\n\n        dets_voted[i][0:4] = out_box / sum(scores)        # Weighted bounding boxes\n        dets_voted[i][4] = det[4]                         # Keep the original score\n\n        # Weighted scores (if enabled)\n        if cfg.TEST.BBOX_VOTE_N_WEIGHTED_SCORE > 1:\n            n_agreement = cfg.TEST.BBOX_VOTE_N_WEIGHTED_SCORE\n            w_empty = cfg.TEST.BBOX_VOTE_WEIGHT_EMPTY\n\n            n_detected = len(scores)\n\n            if n_detected >= n_agreement:\n                top_scores = -np.sort(-scores)[:n_agreement]\n                new_score = np.average(top_scores)\n            else:\n                new_score = np.average(scores) * (n_detected * 1.0 + (n_agreement - n_detected) * w_empty) / n_agreement\n\n            dets_voted[i][4] = min(new_score, dets_voted[i][4])\n\n    return dets_voted\n\ndef im_detect(net, im, boxes=None,num_classes=21):\n    \"\"\"Detect object classes in an image given object proposals.\n\n    Arguments:\n        net (caffe.Net): Fast R-CNN network to use\n        im (ndarray): color image to test (in BGR order)\n        boxes (ndarray): R x 4 array of object proposals 
or None (for RPN)\n\n    Returns:\n        scores (ndarray): R x K array of object class scores (K includes\n            background as object category 0)\n        boxes (ndarray): R x (4*K) array of predicted bounding boxes\n    \"\"\"\n    blobs, im_scales = _get_blobs(im, boxes)\n\n    # When mapping from image ROIs to feature map ROIs, there's some aliasing\n    # (some distinct image ROIs get mapped to the same feature ROI).\n    # Here, we identify duplicate feature ROIs, so we only compute features\n    # on the unique subset.\n    if cfg.DEDUP_BOXES > 0 and not cfg.TEST.HAS_RPN:\n        v = np.array([1, 1e3, 1e6, 1e9, 1e12])\n        hashes = np.round(blobs['rois'] * cfg.DEDUP_BOXES).dot(v)\n        _, index, inv_index = np.unique(hashes, return_index=True,\n                                        return_inverse=True)\n        blobs['rois'] = blobs['rois'][index, :]\n        boxes = boxes[index, :]\n\n    if cfg.TEST.HAS_RPN:\n        im_blob = blobs['data']\n        blobs['im_info'] = np.array(\n            [[im_blob.shape[2], im_blob.shape[3], im_scales[0]]],\n            dtype=np.float32)\n\n    # reshape network inputs\n    net.blobs['data'].reshape(*(blobs['data'].shape))\n    if cfg.TEST.HAS_RPN:\n        net.blobs['im_info'].reshape(*(blobs['im_info'].shape))\n    else:\n        net.blobs['rois'].reshape(*(blobs['rois'].shape))\n\n    # do forward\n    forward_kwargs = {'data': blobs['data'].astype(np.float32, copy=False)}\n    if cfg.TEST.HAS_RPN:\n        forward_kwargs['im_info'] = blobs['im_info'].astype(np.float32, copy=False)\n    else:\n        forward_kwargs['rois'] = blobs['rois'].astype(np.float32, copy=False)\n    blobs_out = net.forward(**forward_kwargs)\n\n\n\n    if cfg.TEST.HAS_RPN:\n        assert len(im_scales) == 1, \"Only single-image batch implemented\"\n        rois = net.blobs['rois'].data.copy()\n        # unscale back to raw image space\n        boxes = rois[:, 1:5] \n        index= np.where(np.sum(boxes,axis=1)!=0)[0]\n       
 boxes = boxes[index,:]\n     \n# / im_scales[0]\n    if cfg.TEST.SVM:\n        # use the raw scores before softmax under the assumption they\n        # were trained as linear SVMs\n        scores = net.blobs['cls_score'].data\n    else:\n        # use softmax estimated probabilities\n        scores = blobs_out['cls_prob']\n        scores = scores[index]\n\n      #  print scores[0:10]\n    \n    if cfg.TEST.BBOX_REG:\n        # Apply bounding-box regression deltas\n        box_deltas = blobs_out['bbox_pred']\n    \n        box_deltas = box_deltas[index,:]\n     \n\n        if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n            means = np.tile(\n                    np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (num_classes, 1)).ravel()\n            stds = np.tile(\n                    np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (num_classes, 1)).ravel()\n      #  Optionally normalize targets by a precomputed mean and stdev\n            box_deltas = box_deltas * stds + means\n        \n\n      #  print boxes.shape,box_deltas.shape\n        pred_boxes = bbox_transform_inv(boxes, box_deltas)\n        s = (blobs['data'].astype(np.float32, copy=False).shape[2],blobs['data'].astype(np.float32, copy=False).shape[3],blobs['data'].astype(np.float32, copy=False).shape[1])\n \n        pred_boxes = clip_boxes(pred_boxes, s)\n    else:\n        # Simply repeat the boxes, once for each class\n        pred_boxes = np.tile(boxes, (1, scores.shape[1]))\n\n    if cfg.DEDUP_BOXES > 0 and not cfg.TEST.HAS_RPN:\n        # Map scores and predictions back to the original set of boxes\n        scores = scores[inv_index, :]\n        pred_boxes = pred_boxes[inv_index, :]\n\n    vis = False\n    if vis:\n        vis_rois_detection(blobs['data'].astype(np.float32, copy=False),pred_boxes/ im_scales[0])\n  \n\n    return scores, pred_boxes/ im_scales[0]\n\ndef vis_detections(im, class_name, dets, thresh=0.3):\n    \"\"\"Visual debugging of detections.\"\"\"\n    import matplotlib.pyplot as plt\n 
   im = im[:, :, (2, 1, 0)]\n    for i in xrange(np.minimum(10, dets.shape[0])):\n        bbox = dets[i, :4]\n        score = dets[i, -1]\n        if score > thresh:\n            plt.cla()\n            plt.imshow(im)\n            plt.gca().add_patch(\n                plt.Rectangle((bbox[0], bbox[1]),\n                              bbox[2] - bbox[0],\n                              bbox[3] - bbox[1], fill=False,\n                              edgecolor='g', linewidth=3)\n                )\n            plt.title('{}  {:.3f}'.format(class_name, score))\n            plt.show()\n\ndef apply_nms(all_boxes, thresh):\n    \"\"\"Apply non-maximum suppression to all predicted boxes output by the\n    test_net method.\n    \"\"\"\n    num_classes = len(all_boxes)\n    num_images = len(all_boxes[0])\n    nms_boxes = [[[] for _ in xrange(num_images)]\n                 for _ in xrange(num_classes)]\n    for cls_ind in xrange(num_classes):\n        for im_ind in xrange(num_images):\n            dets = all_boxes[cls_ind][im_ind]\n            if dets == []:\n                continue\n            # CPU NMS is much faster than GPU NMS when the number of boxes\n            # is relative small (e.g., < 10k)\n            # TODO(rbg): autotune NMS dispatch\n            keep = nms(dets, thresh, force_cpu=True)\n            if len(keep) == 0:\n                continue\n            nms_boxes[cls_ind][im_ind] = dets[keep, :].copy()\n    return nms_boxes\n\ndef test_net(net, imdb, max_per_image=100, thresh=0.05, vis=False):\n    \"\"\"Test a Fast R-CNN network on an image database.\"\"\"\n  \n    num_images = len(imdb.image_index)\n    # all detections are collected into:\n    #    all_boxes[cls][image] = N x 5 array of detections in\n    #    (x1, y1, x2, y2, score)\n    all_boxes = [[[] for _ in xrange(num_images)]\n                 for _ in xrange(imdb.num_classes)]\n\n    output_dir = get_output_dir(imdb, net)\n\n    # timers\n    _t = {'im_detect' : Timer(), 'misc' : Timer()}\n\n    if 
not cfg.TEST.HAS_RPN:\n        roidb = imdb.roidb\n\n    for i in xrange(num_images):\n        # filter out any ground truth boxes\n        if cfg.TEST.HAS_RPN:\n            box_proposals = None\n        else:\n            # The roidb may contain ground-truth rois (for example, if the roidb\n            # comes from the training or val split). We only want to evaluate\n            # detection on the *non*-ground-truth rois. We select those the rois\n            # that have the gt_classes field set to 0, which means there's no\n            # ground truth.\n            box_proposals = roidb[i]['boxes'][roidb[i]['gt_classes'] == 0]\n\n        im = cv2.imread(imdb.image_path_at(i))\n        _t['im_detect'].tic()\n        scores, boxes = im_detect(net, im, box_proposals,imdb.num_classes)\n        _t['im_detect'].toc()\n\n        _t['misc'].tic()\n        vis = False\n        if  vis:\n            imj =im\n            name = 'output/bads/'+ str(i) + '.jpg'\n            for jj in xrange(1, imdb.num_classes):  \n                indsj = np.where(scores[:, jj] > thresh)[0]\n                cls_scoresj = scores[indsj, jj]\n                cls_boxesj = boxes[indsj, jj*4:(jj+1)*4]\n                cls_detsj = np.hstack((cls_boxesj, cls_scoresj[:, np.newaxis])) \\\n                    .astype(np.float32, copy=False)\n                keep = nms(cls_detsj, cfg.TEST.NMS)\n                cls_detsj = cls_detsj[keep, :]\n                detsj = cls_detsj\n                for ii in xrange(np.minimum(10, detsj.shape[0])):\n                    bboxj = detsj[ii, :4]\n                    scorej = detsj[ii, -1]\n                    if bboxj != []:\n                        x1 = bboxj[0]\n                        y1 = bboxj[3]\n                        x2 = bboxj[2]\n                        y2 = bboxj[1]\n                        # if x1 < 0:\n                        #     x1=0\n                        # if y1> imj.shape[1]:\n                        #     y1=imj.shape[1]-1\n                     
   # if x2 > imj.shape[0]:\n                        #     x2 = imj.shape[0]-1\n                        # if y2 < 0:\n                        #     y2 = 0\n                        if scorej > 0.1:\n                            cv2.rectangle(imj, (x1, y1), (x2,y2),(0,255,0), 4)\n                            text = str(jj) + \": \" + str(scorej)\n                            font = cv2.FONT_HERSHEY_SIMPLEX\n                            cv2.putText(imj, text, (x1, y1),font , 1, (0,0,255), 4)\n            cv2.imwrite(name, imj)\n            #aaa\n        # skip j = 0, because it's the background class\n        for j in xrange(1, imdb.num_classes):\n            inds = np.where(scores[:, j] > thresh)[0]\n            cls_scores = scores[inds, j]\n            cls_boxes = boxes[inds, j*4:(j+1)*4]\n            cls_dets = np.hstack((cls_boxes, cls_scores[:, np.newaxis])) \\\n                .astype(np.float32, copy=False)\n            keep = nms(cls_dets, cfg.TEST.NMS)\n            \n            dets_NMSed = cls_dets[keep, :]\n            cls_dets = bbox_vote(dets_NMSed, cls_dets)\n           # print cls_scores\n            if vis:\n                vis_detections(im, imdb.classes[j], cls_dets)\n            all_boxes[j][i] = cls_dets\n\n        # Limit to max_per_image detections *over all classes*\n        if max_per_image > 0:\n            image_scores = np.hstack([all_boxes[j][i][:, -1]\n                                      for j in xrange(1, imdb.num_classes)])\n            if len(image_scores) > max_per_image:\n                image_thresh = np.sort(image_scores)[-max_per_image]\n                for j in xrange(1, imdb.num_classes):\n                    keep = np.where(all_boxes[j][i][:, -1] >= image_thresh)[0]\n                    all_boxes[j][i] = all_boxes[j][i][keep, :]\n        _t['misc'].toc()\n\n        print 'im_detect: {:d}/{:d} {:.3f}s {:.3f}s' \\\n              .format(i + 1, num_images, _t['im_detect'].average_time,\n                      
_t['misc'].average_time)\n\n    det_file = os.path.join(output_dir, 'detections.pkl')\n    with open(det_file, 'wb') as f:\n        cPickle.dump(all_boxes, f, cPickle.HIGHEST_PROTOCOL)\n\n    print 'Evaluating detections'\n    imdb.evaluate_detections(all_boxes, output_dir)\n"
  },
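The `bbox_vote` routine in `test.py` above refines each NMS survivor by replacing its coordinates with the score-weighted mean of all pre-NMS boxes that overlap it by at least `thresh`. A minimal NumPy sketch of that voting step, with a plain broadcasting IoU helper standing in for the Cython `bbox_overlaps` (names here are illustrative, not the repo's API):

```python
import numpy as np

def iou_matrix(a, b):
    """Pairwise IoU between (N,4) and (M,4) box arrays in (x1,y1,x2,y2),
    using the same +1 pixel convention as the repo's NMS code."""
    ax1, ay1, ax2, ay2 = np.split(a, 4, axis=1)        # each (N,1)
    bx1, by1, bx2, by2 = b[:, 0], b[:, 1], b[:, 2], b[:, 3]  # each (M,)
    xx1 = np.maximum(ax1, bx1); yy1 = np.maximum(ay1, by1)
    xx2 = np.minimum(ax2, bx2); yy2 = np.minimum(ay2, by2)
    w = np.maximum(0.0, xx2 - xx1 + 1)
    h = np.maximum(0.0, yy2 - yy1 + 1)
    inter = w * h                                      # (N,M) by broadcasting
    area_a = (ax2 - ax1 + 1) * (ay2 - ay1 + 1)
    area_b = (bx2 - bx1 + 1) * (by2 - by1 + 1)
    return inter / (area_a + area_b - inter)

def bbox_vote(dets_nms, dets_all, thresh=0.5):
    """For each NMS survivor, average the coordinates of every pre-NMS box
    overlapping it by >= thresh, weighted by score; keep the original score."""
    voted = dets_nms.copy()
    overlaps = iou_matrix(dets_nms[:, :4], dets_all[:, :4])
    for i in range(len(dets_nms)):
        sel = overlaps[i] >= thresh
        boxes, scores = dets_all[sel, :4], dets_all[sel, 4]
        voted[i, :4] = scores.dot(boxes) / scores.sum()
    return voted
```

Because every survivor overlaps itself with IoU 1.0, the selection is never empty, which is what the `assert` in the original code checks.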
  {
    "path": "lib/fast_rcnn/train.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Train a Fast R-CNN network.\"\"\"\n\nimport caffe\nfrom fast_rcnn.config import cfg\nimport roi_data_layer.roidb as rdl_roidb\nfrom utils.timer import Timer\nimport numpy as np\nimport os\nimport google.protobuf.text_format\nfrom caffe.proto import caffe_pb2\nimport google.protobuf as pb2\nimport matplotlib\nmatplotlib.use(\"Agg\")\n\nclass SolverWrapper(object):\n    \"\"\"A simple wrapper around Caffe's solver.\n    This wrapper gives us control over he snapshotting process, which we\n    use to unnormalize the learned bounding-box regression weights.\n    \"\"\"\n\n    def __init__(self, solver_prototxt, roidb, output_dir,\n                 pretrained_model=None):\n        \"\"\"Initialize the SolverWrapper.\"\"\"\n        self.output_dir = output_dir\n\n        if (cfg.TRAIN.HAS_RPN and cfg.TRAIN.BBOX_REG and\n            cfg.TRAIN.BBOX_NORMALIZE_TARGETS):\n            # RPN can only use precomputed normalization because there are no\n            # fixed statistics to compute a priori\n            assert cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED\n\n        if cfg.TRAIN.BBOX_REG:\n            print 'Computing bounding-box regression targets...'\n    \n            self.bbox_means, self.bbox_stds = \\\n                    rdl_roidb.add_bbox_regression_targets(roidb)\n            print 'done'\n\n        self.solver = caffe.SGDSolver(solver_prototxt)\n        if pretrained_model is not None:\n            print ('Loading pretrained model '\n                   'weights from {:s}').format(pretrained_model)\n            self.solver.net.copy_from(pretrained_model)\n\n        self.solver_param = caffe_pb2.SolverParameter()\n        with open(solver_prototxt, 'rt') as f:\n            
pb2.text_format.Merge(f.read(), self.solver_param)\n\n        self.solver.net.layers[0].set_roidb(roidb)\n\n    def snapshot(self):\n        \"\"\"Take a snapshot of the network after unnormalizing the learned\n        bounding-box regression weights. This enables easy use at test-time.\n        \"\"\"\n        net = self.solver.net\n\n     \n\n        infix = ('_' + cfg.TRAIN.SNAPSHOT_INFIX\n                 if cfg.TRAIN.SNAPSHOT_INFIX != '' else '')\n        filename = (self.solver_param.snapshot_prefix + infix +\n                    '_iter_{:d}'.format(self.solver.iter) + '.caffemodel')\n        filename = os.path.join(self.output_dir, filename)\n\n        net.save(str(filename))\n        print 'Wrote snapshot to: {:s}'.format(filename)\n\n \n        return filename\n\n    def train_model(self, max_iters):\n        \"\"\"Network training loop.\"\"\"\n        last_snapshot_iter = -1\n        timer = Timer()\n        model_paths = []\n        while self.solver.iter < max_iters:\n            # Make one SGD update\n            timer.tic()\n            self.solver.step(1)\n            timer.toc()\n            if self.solver.iter % (10 * self.solver_param.display) == 0:\n                print 'speed: {:.3f}s / iter'.format(timer.average_time)\n\n            if self.solver.iter % cfg.TRAIN.SNAPSHOT_ITERS == 0:\n                last_snapshot_iter = self.solver.iter\n                model_paths.append(self.snapshot())\n\n        if last_snapshot_iter != self.solver.iter:\n            model_paths.append(self.snapshot())\n        return model_paths\n\ndef get_training_roidb(imdb):\n    \"\"\"Returns a roidb (Region of Interest database) for use in training.\"\"\"\n    if cfg.TRAIN.USE_FLIPPED:\n        print 'Appending horizontally-flipped training examples...'\n        imdb.append_flipped_images()\n        print 'done'\n\n    print 'Preparing training data...'\n    rdl_roidb.prepare_roidb(imdb)\n    print 'done'\n\n    return imdb.roidb\n\ndef filter_roidb(roidb):\n    
\"\"\"Remove roidb entries that have no usable RoIs.\"\"\"\n\n    def is_valid(entry):\n        # Valid images have:\n        #   (1) At least one foreground RoI OR\n        #   (2) At least one background RoI\n        overlaps = entry['max_overlaps']\n        # find boxes with sufficient overlap\n        fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]\n        # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\n        bg_inds = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) &\n                           (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]\n        # image is only valid if such boxes exist\n        valid = len(fg_inds) > 0 or len(bg_inds) > 0\n        return valid\n\n    num = len(roidb)\n    filtered_roidb = [entry for entry in roidb if is_valid(entry)]\n    num_after = len(filtered_roidb)\n    print 'Filtered {} roidb entries: {} -> {}'.format(num - num_after,\n                                                       num, num_after)\n    return filtered_roidb\n\ndef train_net(solver_prototxt, roidb, output_dir,\n              pretrained_model=None, max_iters=40000):\n    \"\"\"Train a Fast R-CNN network.\"\"\"\n\n    roidb = filter_roidb(roidb)\n  \n    sw = SolverWrapper(solver_prototxt, roidb, output_dir,\n                       pretrained_model=pretrained_model)\n\n    print 'Solving...'\n    model_paths = sw.train_model(max_iters)\n    print 'done solving'\n    return model_paths\n"
  },
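The `filter_roidb` step in `train.py` above drops any image whose RoIs can supply neither a foreground nor a background training sample. A small self-contained sketch of the same validity test, with the `cfg.TRAIN.*` thresholds inlined as hypothetical constants (0.5 / 0.5 / 0.1 are the usual py-faster-rcnn defaults, not values confirmed by this repo's config):

```python
import numpy as np

# Hypothetical stand-ins for cfg.TRAIN.FG_THRESH / BG_THRESH_HI / BG_THRESH_LO
FG_THRESH, BG_THRESH_HI, BG_THRESH_LO = 0.5, 0.5, 0.1

def is_valid(entry):
    """An entry is trainable if it has at least one foreground RoI
    (overlap >= FG_THRESH) or one background RoI
    (BG_THRESH_LO <= overlap < BG_THRESH_HI)."""
    overlaps = np.asarray(entry['max_overlaps'])
    fg = overlaps >= FG_THRESH
    bg = (overlaps < BG_THRESH_HI) & (overlaps >= BG_THRESH_LO)
    return bool(fg.any() or bg.any())

def filter_roidb(roidb):
    # Keep only entries that can contribute at least one sampled RoI.
    return [e for e in roidb if is_valid(e)]
```

Entries with only very-low-overlap proposals (below `BG_THRESH_LO`) are discarded, since sampling from them would yield neither positives nor usable negatives.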
  {
    "path": "lib/nms/.gitignore",
    "content": "*.c\n*.cpp\n*.so\n"
  },
  {
    "path": "lib/nms/__init__.py",
    "content": ""
  },
  {
    "path": "lib/nms/cpu_nms.pyx",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\ncimport numpy as np\n\ncdef inline np.float32_t max(np.float32_t a, np.float32_t b):\n    return a if a >= b else b\n\ncdef inline np.float32_t min(np.float32_t a, np.float32_t b):\n    return a if a <= b else b\n\ndef cpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh):\n    cdef np.ndarray[np.float32_t, ndim=1] x1 = dets[:, 0]\n    cdef np.ndarray[np.float32_t, ndim=1] y1 = dets[:, 1]\n    cdef np.ndarray[np.float32_t, ndim=1] x2 = dets[:, 2]\n    cdef np.ndarray[np.float32_t, ndim=1] y2 = dets[:, 3]\n    cdef np.ndarray[np.float32_t, ndim=1] scores = dets[:, 4]\n\n    cdef np.ndarray[np.float32_t, ndim=1] areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    cdef np.ndarray[np.int_t, ndim=1] order = scores.argsort()[::-1]\n\n    cdef int ndets = dets.shape[0]\n    cdef np.ndarray[np.int_t, ndim=1] suppressed = \\\n            np.zeros((ndets), dtype=np.int)\n\n    # nominal indices\n    cdef int _i, _j\n    # sorted indices\n    cdef int i, j\n    # temp variables for box i's (the box currently under consideration)\n    cdef np.float32_t ix1, iy1, ix2, iy2, iarea\n    # variables for computing overlap with box j (lower scoring box)\n    cdef np.float32_t xx1, yy1, xx2, yy2\n    cdef np.float32_t w, h\n    cdef np.float32_t inter, ovr\n\n    keep = []\n    for _i in range(ndets):\n        i = order[_i]\n        if suppressed[i] == 1:\n            continue\n        keep.append(i)\n        ix1 = x1[i]\n        iy1 = y1[i]\n        ix2 = x2[i]\n        iy2 = y2[i]\n        iarea = areas[i]\n        for _j in range(_i + 1, ndets):\n            j = order[_j]\n            if suppressed[j] == 1:\n                continue\n            xx1 = max(ix1, x1[j])\n            yy1 = max(iy1, 
y1[j])\n            xx2 = min(ix2, x2[j])\n            yy2 = min(iy2, y2[j])\n            w = max(0.0, xx2 - xx1 + 1)\n            h = max(0.0, yy2 - yy1 + 1)\n            inter = w * h\n            ovr = inter / (iarea + areas[j] - inter)\n            if ovr >= thresh:\n                suppressed[j] = 1\n\n    return keep\n"
  },
  {
    "path": "lib/nms/gpu_nms.hpp",
    "content": "void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,\n          int boxes_dim, float nms_overlap_thresh, int device_id);\n"
  },
  {
    "path": "lib/nms/gpu_nms.pyx",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\ncimport numpy as np\n\nassert sizeof(int) == sizeof(np.int32_t)\n\ncdef extern from \"gpu_nms.hpp\":\n    void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int)\n\ndef gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh,\n            np.int32_t device_id=0):\n    cdef int boxes_num = dets.shape[0]\n    cdef int boxes_dim = dets.shape[1]\n    cdef int num_out\n    cdef np.ndarray[np.int32_t, ndim=1] \\\n        keep = np.zeros(boxes_num, dtype=np.int32)\n    cdef np.ndarray[np.float32_t, ndim=1] \\\n        scores = dets[:, 4]\n    cdef np.ndarray[np.int_t, ndim=1] \\\n        order = scores.argsort()[::-1]\n    cdef np.ndarray[np.float32_t, ndim=2] \\\n        sorted_dets = dets[order, :]\n    _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id)\n    keep = keep[:num_out]\n    return list(order[keep])\n"
  },
  {
    "path": "lib/nms/nms_kernel.cu",
    "content": "// ------------------------------------------------------------------\n// Faster R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Shaoqing Ren\n// ------------------------------------------------------------------\n\n#include \"gpu_nms.hpp\"\n#include <vector>\n#include <iostream>\n\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    if (error != cudaSuccess) { \\\n      std::cout << cudaGetErrorString(error) << std::endl; \\\n    } \\\n  } while (0)\n\n#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n__device__ inline float devIoU(float const * const a, float const * const b) {\n  float left = max(a[0], b[0]), right = min(a[2], b[2]);\n  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);\n  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);\n  float interS = width * height;\n  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);\n  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);\n  return interS / (Sa + Sb - interS);\n}\n\n__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,\n                           const float *dev_boxes, unsigned long long *dev_mask) {\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  __shared__ float block_boxes[threadsPerBlock * 5];\n  if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 5 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];\n    block_boxes[threadIdx.x * 5 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) 
* 5 + 1];\n    block_boxes[threadIdx.x * 5 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];\n    block_boxes[threadIdx.x * 5 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];\n    block_boxes[threadIdx.x * 5 + 4] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;\n    const float *cur_box = dev_boxes + cur_box_idx * 5;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = DIVUP(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\nvoid _set_device(int device_id) {\n  int current_device;\n  CUDA_CHECK(cudaGetDevice(&current_device));\n  if (current_device == device_id) {\n    return;\n  }\n  // The call to cudaSetDevice must come before any calls to Get, which\n  // may perform initialization using the GPU.\n  CUDA_CHECK(cudaSetDevice(device_id));\n}\n\nvoid _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,\n          int boxes_dim, float nms_overlap_thresh, int device_id) {\n  _set_device(device_id);\n\n  float* boxes_dev = NULL;\n  unsigned long long* mask_dev = NULL;\n\n  const int col_blocks = DIVUP(boxes_num, threadsPerBlock);\n\n  CUDA_CHECK(cudaMalloc(&boxes_dev,\n                        boxes_num * boxes_dim * sizeof(float)));\n  CUDA_CHECK(cudaMemcpy(boxes_dev,\n                        boxes_host,\n                        boxes_num * boxes_dim * sizeof(float),\n                        cudaMemcpyHostToDevice));\n\n  CUDA_CHECK(cudaMalloc(&mask_dev,\n                        boxes_num * col_blocks * sizeof(unsigned 
long long)));\n\n  dim3 blocks(DIVUP(boxes_num, threadsPerBlock),\n              DIVUP(boxes_num, threadsPerBlock));\n  dim3 threads(threadsPerBlock);\n  nms_kernel<<<blocks, threads>>>(boxes_num,\n                                  nms_overlap_thresh,\n                                  boxes_dev,\n                                  mask_dev);\n\n  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n  CUDA_CHECK(cudaMemcpy(&mask_host[0],\n                        mask_dev,\n                        sizeof(unsigned long long) * boxes_num * col_blocks,\n                        cudaMemcpyDeviceToHost));\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  int num_to_keep = 0;\n  for (int i = 0; i < boxes_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      keep_out[num_to_keep++] = i;\n      unsigned long long *p = &mask_host[0] + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n  *num_out = num_to_keep;\n\n  CUDA_CHECK(cudaFree(boxes_dev));\n  CUDA_CHECK(cudaFree(mask_dev));\n}\n"
  },
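In `nms_kernel.cu` above, the GPU only computes a `boxes_num x col_blocks` bitmask (`dev_mask`), where bit `k` of word `b` in row `i` means box `i` suppresses box `b * threadsPerBlock + k`; the greedy keep/suppress decision is the sequential loop at the end of `_nms`, run on the host. A Python sketch of that host-side reduction, given a precomputed pairwise suppression relation (a simplification: the real code derives it from IoU on score-sorted boxes):

```python
THREADS = 64  # bits per mask word, matching threadsPerBlock in the kernel

def divup(m, n):
    # Same as the DIVUP macro in nms_kernel.cu.
    return m // n + (m % n > 0)

def mask_reduce(suppress):
    """suppress[i][j] is True when box i (higher score) suppresses box j (j > i).
    Build per-box bitmask words, then greedily keep each box whose bit has not
    been set by an earlier kept box, OR-ing the kept box's mask into `remv`."""
    n = len(suppress)
    col_blocks = divup(n, THREADS)
    mask = [[0] * col_blocks for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if suppress[i][j]:
                mask[i][j // THREADS] |= 1 << (j % THREADS)
    remv = [0] * col_blocks
    keep = []
    for i in range(n):
        block, bit = i // THREADS, i % THREADS
        if not (remv[block] >> bit) & 1:
            keep.append(i)
            for b in range(block, col_blocks):
                remv[b] |= mask[i][b]
    return keep
```

Note that a suppressed box's mask is never OR-ed in, so boxes it would have suppressed can still survive; this reproduces the sequential greedy-NMS semantics exactly.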
  {
    "path": "lib/nms/py_cpu_nms.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\n\ndef py_cpu_nms(dets, thresh):\n    \"\"\"Pure Python NMS baseline.\"\"\"\n    x1 = dets[:, 0]\n    y1 = dets[:, 1]\n    x2 = dets[:, 2]\n    y2 = dets[:, 3]\n    scores = dets[:, 4]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order[0]\n        keep.append(i)\n        xx1 = np.maximum(x1[i], x1[order[1:]])\n        yy1 = np.maximum(y1[i], y1[order[1:]])\n        xx2 = np.minimum(x2[i], x2[order[1:]])\n        yy2 = np.minimum(y2[i], y2[order[1:]])\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n        ovr = inter / (areas[i] + areas[order[1:]] - inter)\n\n        inds = np.where(ovr <= thresh)[0]\n        order = order[inds + 1]\n\n    return keep\n"
  },
  {
    "path": "lib/pycocotools/UPSTREAM_REV",
    "content": "https://github.com/pdollar/coco/commit/3ac47c77ebd5a1ed4254a98b7fbf2ef4765a3574\n"
  },
  {
    "path": "lib/pycocotools/__init__.py",
    "content": "__author__ = 'tylin'\n"
  },
  {
    "path": "lib/pycocotools/_mask.c",
    "content": "/* Generated by Cython 0.25.2 */\n\n#define PY_SSIZE_T_CLEAN\n#include \"Python.h\"\n#ifndef Py_PYTHON_H\n    #error Python headers needed to compile C extensions, please install development version of Python.\n#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000)\n    #error Cython requires Python 2.6+ or Python 3.2+.\n#else\n#define CYTHON_ABI \"0_25_2\"\n#include <stddef.h>\n#ifndef offsetof\n  #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )\n#endif\n#if !defined(WIN32) && !defined(MS_WINDOWS)\n  #ifndef __stdcall\n    #define __stdcall\n  #endif\n  #ifndef __cdecl\n    #define __cdecl\n  #endif\n  #ifndef __fastcall\n    #define __fastcall\n  #endif\n#endif\n#ifndef DL_IMPORT\n  #define DL_IMPORT(t) t\n#endif\n#ifndef DL_EXPORT\n  #define DL_EXPORT(t) t\n#endif\n#ifndef HAVE_LONG_LONG\n  #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000)\n    #define HAVE_LONG_LONG\n  #endif\n#endif\n#ifndef PY_LONG_LONG\n  #define PY_LONG_LONG LONG_LONG\n#endif\n#ifndef Py_HUGE_VAL\n  #define Py_HUGE_VAL HUGE_VAL\n#endif\n#ifdef PYPY_VERSION\n  #define CYTHON_COMPILING_IN_PYPY 1\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #undef CYTHON_USE_TYPE_SLOTS\n  #define CYTHON_USE_TYPE_SLOTS 0\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_INTERNALS\n  #define CYTHON_USE_UNICODE_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #undef CYTHON_AVOID_BORROWED_REFS\n  #define CYTHON_AVOID_BORROWED_REFS 1\n  #undef CYTHON_ASSUME_SAFE_MACROS\n  #define CYTHON_ASSUME_SAFE_MACROS 0\n  #undef CYTHON_UNPACK_METHODS\n  #define CYTHON_UNPACK_METHODS 0\n  #undef CYTHON_FAST_THREAD_STATE\n  
#define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#elif defined(PYSTON_VERSION)\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 1\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#else\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 1\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #if PY_MAJOR_VERSION < 3\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYLONG_INTERNALS\n    #define CYTHON_USE_PYLONG_INTERNALS 0\n  #elif !defined(CYTHON_USE_PYLONG_INTERNALS)\n    #define CYTHON_USE_PYLONG_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_PYLIST_INTERNALS\n    #define CYTHON_USE_PYLIST_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #if PY_VERSION_HEX < 0x030300F0\n    #undef CYTHON_USE_UNICODE_WRITER\n    
#define CYTHON_USE_UNICODE_WRITER 0\n  #elif !defined(CYTHON_USE_UNICODE_WRITER)\n    #define CYTHON_USE_UNICODE_WRITER 1\n  #endif\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #ifndef CYTHON_FAST_THREAD_STATE\n    #define CYTHON_FAST_THREAD_STATE 1\n  #endif\n  #ifndef CYTHON_FAST_PYCALL\n    #define CYTHON_FAST_PYCALL 1\n  #endif\n#endif\n#if !defined(CYTHON_FAST_PYCCALL)\n#define CYTHON_FAST_PYCCALL  (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)\n#endif\n#if CYTHON_USE_PYLONG_INTERNALS\n  #include \"longintrepr.h\"\n  #undef SHIFT\n  #undef BASE\n  #undef MASK\n#endif\n#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)\n  #define Py_OptimizeFlag 0\n#endif\n#define __PYX_BUILD_PY_SSIZE_T \"n\"\n#define CYTHON_FORMAT_SSIZE_T \"z\"\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_BUILTIN_MODULE_NAME \"__builtin__\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyClass_Type\n#else\n  #define __Pyx_BUILTIN_MODULE_NAME \"builtins\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyType_Type\n#endif\n#ifndef Py_TPFLAGS_CHECKTYPES\n  #define Py_TPFLAGS_CHECKTYPES 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_INDEX\n  #define Py_TPFLAGS_HAVE_INDEX 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_NEWBUFFER\n  #define Py_TPFLAGS_HAVE_NEWBUFFER 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_FINALIZE\n  #define Py_TPFLAGS_HAVE_FINALIZE 0\n#endif\n#ifndef METH_FASTCALL\n  #define METH_FASTCALL 0x80\n  typedef PyObject 
*(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args,\n                                              Py_ssize_t nargs, PyObject *kwnames);\n#else\n  #define __Pyx_PyCFunctionFast _PyCFunctionFast\n#endif\n#if CYTHON_FAST_PYCCALL\n#define __Pyx_PyFastCFunction_Check(func)\\\n    ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST)))))\n#else\n#define __Pyx_PyFastCFunction_Check(func) 0\n#endif\n#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)\n  #define CYTHON_PEP393_ENABLED 1\n  #define __Pyx_PyUnicode_READY(op)       (likely(PyUnicode_IS_READY(op)) ?\\\n                                              0 : _PyUnicode_Ready((PyObject *)(op)))\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_LENGTH(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   PyUnicode_MAX_CHAR_VALUE(u)\n  #define __Pyx_PyUnicode_KIND(u)         PyUnicode_KIND(u)\n  #define __Pyx_PyUnicode_DATA(u)         PyUnicode_DATA(u)\n  #define __Pyx_PyUnicode_READ(k, d, i)   PyUnicode_READ(k, d, i)\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  PyUnicode_WRITE(k, d, i, ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))\n#else\n  #define CYTHON_PEP393_ENABLED 0\n  #define PyUnicode_1BYTE_KIND  1\n  #define PyUnicode_2BYTE_KIND  2\n  #define PyUnicode_4BYTE_KIND  4\n  #define __Pyx_PyUnicode_READY(op)       (0)\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_SIZE(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111)\n  #define __Pyx_PyUnicode_KIND(u)         (sizeof(Py_UNICODE))\n  #define __Pyx_PyUnicode_DATA(u)         ((void*)PyUnicode_AS_UNICODE(u))\n  #define __Pyx_PyUnicode_READ(k, d, i)   ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  (((void)(k)), ((Py_UNICODE*)d)[i] = ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != PyUnicode_GET_SIZE(u))\n#endif\n#if CYTHON_COMPILING_IN_PYPY\n  #define __Pyx_PyUnicode_Concat(a, b)      PyNumber_Add(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  PyNumber_Add(a, b)\n#else\n  #define __Pyx_PyUnicode_Concat(a, b)      PyUnicode_Concat(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\\\n      PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)\n  #define PyUnicode_Contains(u, s)  PySequence_Contains(u, s)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)\n  #define PyByteArray_Check(obj)  PyObject_TypeCheck(obj, &PyByteArray_Type)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)\n  #define PyObject_Format(obj, fmt)  PyObject_CallMethod(obj, \"__format__\", \"O\", fmt)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)\n  #define PyObject_Malloc(s)   PyMem_Malloc(s)\n  #define PyObject_Free(p)     PyMem_Free(p)\n  #define PyObject_Realloc(p)  PyMem_Realloc(p)\n#endif\n#if CYTHON_COMPILING_IN_PYSTON\n  #define __Pyx_PyCode_HasFreeVars(co)  PyCode_HasFreeVars(co)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)\n#else\n  #define __Pyx_PyCode_HasFreeVars(co)  (PyCode_GetNumFree(co) > 0)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno)  (frame)->f_lineno = (lineno)\n#endif\n#define __Pyx_PyString_FormatSafe(a, b)   ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))\n#define __Pyx_PyUnicode_FormatSafe(a, b)  ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyString_Format(a, b)  PyUnicode_Format(a, b)\n#else\n  #define __Pyx_PyString_Format(a, b)  PyString_Format(a, b)\n#endif\n#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)\n  #define PyObject_ASCII(o)            PyObject_Repr(o)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBaseString_Type            PyUnicode_Type\n  #define PyStringObject               PyUnicodeObject\n  #define PyString_Type                PyUnicode_Type\n  #define PyString_Check               PyUnicode_Check\n  #define PyString_CheckExact          PyUnicode_CheckExact\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)\n  #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)\n#else\n  #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))\n  #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))\n#endif\n#ifndef PySet_CheckExact\n  #define PySet_CheckExact(obj)        (Py_TYPE(obj) == &PySet_Type)\n#endif\n#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)\n#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)\n#if PY_MAJOR_VERSION >= 3\n  #define PyIntObject                  PyLongObject\n  #define PyInt_Type                   PyLong_Type\n  #define PyInt_Check(op)              PyLong_Check(op)\n  #define PyInt_CheckExact(op)         PyLong_CheckExact(op)\n  #define PyInt_FromString             PyLong_FromString\n  #define PyInt_FromUnicode            PyLong_FromUnicode\n  #define PyInt_FromLong               PyLong_FromLong\n  #define PyInt_FromSize_t             PyLong_FromSize_t\n  #define PyInt_FromSsize_t            PyLong_FromSsize_t\n  #define PyInt_AsLong                 
PyLong_AsLong\n  #define PyInt_AS_LONG                PyLong_AS_LONG\n  #define PyInt_AsSsize_t              PyLong_AsSsize_t\n  #define PyInt_AsUnsignedLongMask     PyLong_AsUnsignedLongMask\n  #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask\n  #define PyNumber_Int                 PyNumber_Long\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBoolObject                 PyLongObject\n#endif\n#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY\n  #ifndef PyUnicode_InternFromString\n    #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)\n  #endif\n#endif\n#if PY_VERSION_HEX < 0x030200A4\n  typedef long Py_hash_t;\n  #define __Pyx_PyInt_FromHash_t PyInt_FromLong\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsLong\n#else\n  #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsSsize_t\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func))\n#else\n  #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)\n#endif\n#if CYTHON_USE_ASYNC_SLOTS\n  #if PY_VERSION_HEX >= 0x030500B1\n    #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods\n    #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)\n  #else\n    typedef struct {\n        unaryfunc am_await;\n        unaryfunc am_aiter;\n        unaryfunc am_anext;\n    } __Pyx_PyAsyncMethodsStruct;\n    #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))\n  #endif\n#else\n  #define __Pyx_PyType_AsAsync(obj) NULL\n#endif\n#ifndef CYTHON_RESTRICT\n  #if defined(__GNUC__)\n    #define CYTHON_RESTRICT __restrict__\n  #elif defined(_MSC_VER) && _MSC_VER >= 1400\n    #define CYTHON_RESTRICT __restrict\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_RESTRICT restrict\n  #else\n    #define CYTHON_RESTRICT\n  #endif\n#endif\n#ifndef CYTHON_UNUSED\n# if defined(__GNUC__)\n#   if 
!(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))\n#     define CYTHON_UNUSED __attribute__ ((__unused__))\n#   else\n#     define CYTHON_UNUSED\n#   endif\n# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))\n#   define CYTHON_UNUSED __attribute__ ((__unused__))\n# else\n#   define CYTHON_UNUSED\n# endif\n#endif\n#ifndef CYTHON_MAYBE_UNUSED_VAR\n#  if defined(__cplusplus)\n     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }\n#  else\n#    define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)\n#  endif\n#endif\n#ifndef CYTHON_NCP_UNUSED\n# if CYTHON_COMPILING_IN_CPYTHON\n#  define CYTHON_NCP_UNUSED\n# else\n#  define CYTHON_NCP_UNUSED CYTHON_UNUSED\n# endif\n#endif\n#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)\n\n#ifndef CYTHON_INLINE\n  #if defined(__clang__)\n    #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))\n  #elif defined(__GNUC__)\n    #define CYTHON_INLINE __inline__\n  #elif defined(_MSC_VER)\n    #define CYTHON_INLINE __inline\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_INLINE inline\n  #else\n    #define CYTHON_INLINE\n  #endif\n#endif\n\n#if defined(WIN32) || defined(MS_WINDOWS)\n  #define _USE_MATH_DEFINES\n#endif\n#include <math.h>\n#ifdef NAN\n#define __PYX_NAN() ((float) NAN)\n#else\nstatic CYTHON_INLINE float __PYX_NAN() {\n  float value;\n  memset(&value, 0xFF, sizeof(value));\n  return value;\n}\n#endif\n#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)\n#define __Pyx_truncl trunc\n#else\n#define __Pyx_truncl truncl\n#endif\n\n\n#define __PYX_ERR(f_index, lineno, Ln_error) \\\n{ \\\n  __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \\\n}\n\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_TrueDivide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceTrueDivide(x,y)\n#else\n  #define 
__Pyx_PyNumber_Divide(x,y)         PyNumber_Divide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceDivide(x,y)\n#endif\n\n#ifndef __PYX_EXTERN_C\n  #ifdef __cplusplus\n    #define __PYX_EXTERN_C extern \"C\"\n  #else\n    #define __PYX_EXTERN_C extern\n  #endif\n#endif\n\n#define __PYX_HAVE__pycocotools___mask\n#define __PYX_HAVE_API__pycocotools___mask\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include \"numpy/arrayobject.h\"\n#include \"numpy/ufuncobject.h\"\n#include \"maskApi.h\"\n#ifdef _OPENMP\n#include <omp.h>\n#endif /* _OPENMP */\n\n#ifdef PYREX_WITHOUT_ASSERTIONS\n#define CYTHON_WITHOUT_ASSERTIONS\n#endif\n\ntypedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;\n                const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;\n\n#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0\n#define __PYX_DEFAULT_STRING_ENCODING \"\"\n#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString\n#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#define __Pyx_uchar_cast(c) ((unsigned char)c)\n#define __Pyx_long_cast(x) ((long)x)\n#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\\\n    (sizeof(type) < sizeof(Py_ssize_t))  ||\\\n    (sizeof(type) > sizeof(Py_ssize_t) &&\\\n          likely(v < (type)PY_SSIZE_T_MAX ||\\\n                 v == (type)PY_SSIZE_T_MAX)  &&\\\n          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\\\n                                v == (type)PY_SSIZE_T_MIN)))  ||\\\n    (sizeof(type) == sizeof(Py_ssize_t) &&\\\n          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\\\n                               v == (type)PY_SSIZE_T_MAX)))  )\n#if defined (__cplusplus) && __cplusplus >= 201103L\n    #include <cstdlib>\n    #define __Pyx_sst_abs(value) std::abs(value)\n#elif SIZEOF_INT >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) abs(value)\n#elif SIZEOF_LONG 
>= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) labs(value)\n#elif defined (_MSC_VER) && defined (_M_X64)\n    #define __Pyx_sst_abs(value) _abs64(value)\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define __Pyx_sst_abs(value) llabs(value)\n#elif defined (__GNUC__)\n    #define __Pyx_sst_abs(value) __builtin_llabs(value)\n#else\n    #define __Pyx_sst_abs(value) ((value<0) ? -value : value)\n#endif\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*);\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);\n#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))\n#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)\n#define __Pyx_PyBytes_FromString        PyBytes_FromString\n#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);\n#if PY_MAJOR_VERSION < 3\n    #define __Pyx_PyStr_FromString        __Pyx_PyBytes_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#else\n    #define __Pyx_PyStr_FromString        __Pyx_PyUnicode_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize\n#endif\n#define __Pyx_PyObject_AsSString(s)    ((signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsUString(s)    ((unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_FromCString(s)  __Pyx_PyObject_FromString((const char*)s)\n#define __Pyx_PyBytes_FromCString(s)   __Pyx_PyBytes_FromString((const char*)s)\n#define __Pyx_PyByteArray_FromCString(s)   __Pyx_PyByteArray_FromString((const char*)s)\n#define __Pyx_PyStr_FromCString(s)     __Pyx_PyStr_FromString((const char*)s)\n#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)\n#if PY_MAJOR_VERSION < 3\nstatic CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const 
Py_UNICODE *u)\n{\n    const Py_UNICODE *u_end = u;\n    while (*u_end++) ;\n    return (size_t)(u_end - u - 1);\n}\n#else\n#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen\n#endif\n#define __Pyx_PyUnicode_FromUnicode(u)       PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))\n#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode\n#define __Pyx_PyUnicode_AsUnicode            PyUnicode_AsUnicode\n#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)\n#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)\n#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False))\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);\n#if CYTHON_ASSUME_SAFE_MACROS\n#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))\n#else\n#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)\n#endif\n#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))\n#else\n#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))\n#endif\n#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x))\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\nstatic int __Pyx_sys_getdefaultencoding_not_ascii;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    PyObject* ascii_chars_u = NULL;\n    PyObject* ascii_chars_b = NULL;\n    const char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    if (strcmp(default_encoding_c, \"ascii\") == 0) {\n        __Pyx_sys_getdefaultencoding_not_ascii = 0;\n    } else {\n        char ascii_chars[128];\n        int c;\n        for (c = 0; c < 128; c++) {\n            ascii_chars[c] = c;\n        }\n        __Pyx_sys_getdefaultencoding_not_ascii = 1;\n        ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);\n        if (!ascii_chars_u) goto bad;\n        ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);\n        if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {\n            PyErr_Format(\n                PyExc_ValueError,\n                \"This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.\",\n                default_encoding_c);\n            goto bad;\n        }\n        Py_DECREF(ascii_chars_u);\n        Py_DECREF(ascii_chars_b);\n    }\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    Py_XDECREF(ascii_chars_u);\n    Py_XDECREF(ascii_chars_b);\n    return -1;\n}\n#endif\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) 
PyUnicode_DecodeUTF8(c_str, size, NULL)\n#else\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\nstatic char* __PYX_DEFAULT_STRING_ENCODING;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) (const char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c));\n    if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;\n    strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    return -1;\n}\n#endif\n#endif\n\n\n/* Test for GCC > 2.95 */\n#if defined(__GNUC__)     && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))\n  #define likely(x)   __builtin_expect(!!(x), 1)\n  #define unlikely(x) __builtin_expect(!!(x), 0)\n#else /* !__GNUC__ or GCC < 2.95 */\n  #define likely(x)   (x)\n  #define unlikely(x) (x)\n#endif /* __GNUC__ */\n\nstatic PyObject *__pyx_m;\nstatic PyObject *__pyx_d;\nstatic PyObject *__pyx_b;\nstatic PyObject *__pyx_empty_tuple;\nstatic PyObject *__pyx_empty_bytes;\nstatic PyObject *__pyx_empty_unicode;\nstatic int __pyx_lineno;\nstatic int __pyx_clineno = 0;\nstatic const char * __pyx_cfilenm= __FILE__;\nstatic const char *__pyx_filename;\n\n/* Header.proto */\n#if !defined(CYTHON_CCOMPLEX)\n  #if defined(__cplusplus)\n    #define CYTHON_CCOMPLEX 1\n  #elif defined(_Complex_I)\n    #define CYTHON_CCOMPLEX 1\n  #else\n    #define CYTHON_CCOMPLEX 0\n  #endif\n#endif\n#if CYTHON_CCOMPLEX\n  #ifdef 
__cplusplus\n    #include <complex>\n  #else\n    #include <complex.h>\n  #endif\n#endif\n#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)\n  #undef _Complex_I\n  #define _Complex_I 1.0fj\n#endif\n\n\nstatic const char *__pyx_f[] = {\n  \"pycocotools/_mask.pyx\",\n  \"__init__.pxd\",\n  \"type.pxd\",\n};\n/* BufferFormatStructs.proto */\n#define IS_UNSIGNED(type) (((type) -1) > 0)\nstruct __Pyx_StructField_;\n#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)\ntypedef struct {\n  const char* name;\n  struct __Pyx_StructField_* fields;\n  size_t size;\n  size_t arraysize[8];\n  int ndim;\n  char typegroup;\n  char is_unsigned;\n  int flags;\n} __Pyx_TypeInfo;\ntypedef struct __Pyx_StructField_ {\n  __Pyx_TypeInfo* type;\n  const char* name;\n  size_t offset;\n} __Pyx_StructField;\ntypedef struct {\n  __Pyx_StructField* field;\n  size_t parent_offset;\n} __Pyx_BufFmt_StackElem;\ntypedef struct {\n  __Pyx_StructField root;\n  __Pyx_BufFmt_StackElem* head;\n  size_t fmt_offset;\n  size_t new_count, enc_count;\n  size_t struct_alignment;\n  int is_complex;\n  char enc_type;\n  char new_packmode;\n  char enc_packmode;\n  char is_valid_array;\n} __Pyx_BufFmt_Context;\n\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":725\n * # in Cython to enable them only on the right systems.\n * \n * ctypedef npy_int8       int8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n */\ntypedef npy_int8 __pyx_t_5numpy_int8_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":726\n * \n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t\n */\ntypedef npy_int16 __pyx_t_5numpy_int16_t;\n\n/* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":727\n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int64      int64_t\n * #ctypedef npy_int96      int96_t\n */\ntypedef npy_int32 __pyx_t_5numpy_int32_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":728\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_int96      int96_t\n * #ctypedef npy_int128     int128_t\n */\ntypedef npy_int64 __pyx_t_5numpy_int64_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":732\n * #ctypedef npy_int128     int128_t\n * \n * ctypedef npy_uint8      uint8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n */\ntypedef npy_uint8 __pyx_t_5numpy_uint8_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":733\n * \n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t\n */\ntypedef npy_uint16 __pyx_t_5numpy_uint16_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":734\n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint64     uint64_t\n * #ctypedef npy_uint96     uint96_t\n */\ntypedef npy_uint32 __pyx_t_5numpy_uint32_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":735\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64 
    uint64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_uint96     uint96_t\n * #ctypedef npy_uint128    uint128_t\n */\ntypedef npy_uint64 __pyx_t_5numpy_uint64_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":739\n * #ctypedef npy_uint128    uint128_t\n * \n * ctypedef npy_float32    float32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_float64    float64_t\n * #ctypedef npy_float80    float80_t\n */\ntypedef npy_float32 __pyx_t_5numpy_float32_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":740\n * \n * ctypedef npy_float32    float32_t\n * ctypedef npy_float64    float64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_float80    float80_t\n * #ctypedef npy_float128   float128_t\n */\ntypedef npy_float64 __pyx_t_5numpy_float64_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":749\n * # The int types are mapped a bit surprising --\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t\n */\ntypedef npy_long __pyx_t_5numpy_int_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":750\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   longlong_t\n * \n */\ntypedef npy_longlong __pyx_t_5numpy_long_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":751\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_ulong      uint_t\n */\ntypedef npy_longlong __pyx_t_5numpy_longlong_t;\n\n/* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":753\n * ctypedef npy_longlong   longlong_t\n * \n * ctypedef npy_ulong      uint_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t\n */\ntypedef npy_ulong __pyx_t_5numpy_uint_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":754\n * \n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulonglong_t\n * \n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulong_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":755\n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_intp       intp_t\n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":757\n * ctypedef npy_ulonglong  ulonglong_t\n * \n * ctypedef npy_intp       intp_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uintp      uintp_t\n * \n */\ntypedef npy_intp __pyx_t_5numpy_intp_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":758\n * \n * ctypedef npy_intp       intp_t\n * ctypedef npy_uintp      uintp_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_double     float_t\n */\ntypedef npy_uintp __pyx_t_5numpy_uintp_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":760\n * ctypedef npy_uintp      uintp_t\n * \n * ctypedef npy_double     float_t             # <<<<<<<<<<<<<<\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t\n */\ntypedef npy_double __pyx_t_5numpy_float_t;\n\n/* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":761\n * \n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longdouble longdouble_t\n * \n */\ntypedef npy_double __pyx_t_5numpy_double_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":762\n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cfloat      cfloat_t\n */\ntypedef npy_longdouble __pyx_t_5numpy_longdouble_t;\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< float > __pyx_t_float_complex;\n  #else\n    typedef float _Complex __pyx_t_float_complex;\n  #endif\n#else\n    typedef struct { float real, imag; } __pyx_t_float_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);\n\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< double > __pyx_t_double_complex;\n  #else\n    typedef double _Complex __pyx_t_double_complex;\n  #endif\n#else\n    typedef struct { double real, imag; } __pyx_t_double_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);\n\n\n/*--- Type declarations ---*/\nstruct __pyx_obj_11pycocotools_5_mask_RLEs;\nstruct __pyx_obj_11pycocotools_5_mask_Masks;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":764\n * ctypedef npy_longdouble longdouble_t\n * \n * ctypedef npy_cfloat      cfloat_t             # <<<<<<<<<<<<<<\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t\n */\ntypedef npy_cfloat __pyx_t_5numpy_cfloat_t;\n\n/* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":765\n * \n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t             # <<<<<<<<<<<<<<\n * ctypedef npy_clongdouble clongdouble_t\n * \n */\ntypedef npy_cdouble __pyx_t_5numpy_cdouble_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":766\n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cdouble     complex_t\n */\ntypedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":768\n * ctypedef npy_clongdouble clongdouble_t\n * \n * ctypedef npy_cdouble     complex_t             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n */\ntypedef npy_cdouble __pyx_t_5numpy_complex_t;\n\n/* \"pycocotools/_mask.pyx\":53\n * # python class to wrap RLE array in C\n * # the class handles the memory allocation and deallocation\n * cdef class RLEs:             # <<<<<<<<<<<<<<\n *     cdef RLE *_R\n *     cdef siz _n\n */\nstruct __pyx_obj_11pycocotools_5_mask_RLEs {\n  PyObject_HEAD\n  RLE *_R;\n  siz _n;\n};\n\n\n/* \"pycocotools/_mask.pyx\":74\n * # python class to wrap Mask array in C\n * # the class handles the memory allocation and deallocation\n * cdef class Masks:             # <<<<<<<<<<<<<<\n *     cdef byte *_mask\n *     cdef siz _h\n */\nstruct __pyx_obj_11pycocotools_5_mask_Masks {\n  PyObject_HEAD\n  byte *_mask;\n  siz _h;\n  siz _w;\n  siz _n;\n};\n\n\n/* --- Runtime support code (head) --- */\n/* Refnanny.proto */\n#ifndef CYTHON_REFNANNY\n  #define CYTHON_REFNANNY 0\n#endif\n#if CYTHON_REFNANNY\n  typedef struct {\n    void (*INCREF)(void*, PyObject*, int);\n    void (*DECREF)(void*, PyObject*, int);\n    void (*GOTREF)(void*, PyObject*, 
int);\n    void (*GIVEREF)(void*, PyObject*, int);\n    void* (*SetupContext)(const char*, int, const char*);\n    void (*FinishContext)(void**);\n  } __Pyx_RefNannyAPIStruct;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);\n  #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;\n#ifdef WITH_THREAD\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          if (acquire_gil) {\\\n              PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n              PyGILState_Release(__pyx_gilstate_save);\\\n          } else {\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n          }\n#else\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)\n#endif\n  #define __Pyx_RefNannyFinishContext()\\\n          __Pyx_RefNanny->FinishContext(&__pyx_refnanny)\n  #define __Pyx_INCREF(r)  __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_DECREF(r)  __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GOTREF(r)  __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_XINCREF(r)  do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)\n  #define __Pyx_XDECREF(r)  do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)\n  #define __Pyx_XGOTREF(r)  do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)\n  #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)\n#else\n  #define __Pyx_RefNannyDeclarations\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\n  #define __Pyx_RefNannyFinishContext()\n  #define __Pyx_INCREF(r) Py_INCREF(r)\n  #define 
__Pyx_DECREF(r) Py_DECREF(r)\n  #define __Pyx_GOTREF(r)\n  #define __Pyx_GIVEREF(r)\n  #define __Pyx_XINCREF(r) Py_XINCREF(r)\n  #define __Pyx_XDECREF(r) Py_XDECREF(r)\n  #define __Pyx_XGOTREF(r)\n  #define __Pyx_XGIVEREF(r)\n#endif\n#define __Pyx_XDECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_XDECREF(tmp);\\\n    } while (0)\n#define __Pyx_DECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_DECREF(tmp);\\\n    } while (0)\n#define __Pyx_CLEAR(r)    do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)\n#define __Pyx_XCLEAR(r)   do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)\n\n/* PyObjectGetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_getattro))\n        return tp->tp_getattro(obj, attr_name);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_getattr))\n        return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));\n#endif\n    return PyObject_GetAttr(obj, attr_name);\n}\n#else\n#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)\n#endif\n\n/* GetBuiltinName.proto */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name);\n\n/* RaiseDoubleKeywords.proto */\nstatic void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);\n\n/* ParseKeywords.proto */\nstatic int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\\\n    PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\\\n    const char* function_name);\n\n/* RaiseArgTupleInvalid.proto */\nstatic void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,\n    Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);\n\n/* IncludeStringH.proto */\n#include <string.h>\n\n/* BytesEquals.proto */\nstatic CYTHON_INLINE int 
__Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);\n\n/* UnicodeEquals.proto */\nstatic CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);\n\n/* StrEquals.proto */\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals\n#else\n#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals\n#endif\n\n/* PyObjectCall.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);\n#else\n#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)\n#endif\n\n/* PyThreadStateGet.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyThreadState_declare  PyThreadState *__pyx_tstate;\n#define __Pyx_PyThreadState_assign  __pyx_tstate = PyThreadState_GET();\n#else\n#define __Pyx_PyThreadState_declare\n#define __Pyx_PyThreadState_assign\n#endif\n\n/* PyErrFetchRestore.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ErrRestoreWithState(type, value, tb)  __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)    __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)    __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\n#define __Pyx_ErrRestoreWithState(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)  PyErr_Fetch(type, value, tb)\n#endif\n\n/* RaiseException.proto */\nstatic void 
__Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);\n\n/* ExtTypeTest.proto */\nstatic CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);\n\n/* ArgTypeTest.proto */\nstatic CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed,\n    const char *name, int exact);\n\n/* ListAppend.proto */\n#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS\nstatic CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {\n    PyListObject* L = (PyListObject*) list;\n    Py_ssize_t len = Py_SIZE(list);\n    if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {\n        Py_INCREF(x);\n        PyList_SET_ITEM(list, len, x);\n        Py_SIZE(list) = len+1;\n        return 0;\n    }\n    return PyList_Append(list, x);\n}\n#else\n#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)\n#endif\n\n/* PyIntBinop.proto */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace);\n#else\n#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace)\\\n    (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))\n#endif\n\n/* GetItemInt.proto */\n#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\\\n    (is_list ? 
(PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL) :\\\n               __Pyx_GetItemInt_Generic(o, to_py_func(i))))\n#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\n#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"tuple index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,\n                                                     int is_list, int wraparound, int boundscheck);\n\n/* BufferFormatCheck.proto */\nstatic CYTHON_INLINE int  __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj,\n    __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type); // PROTO\n\n/* 
GetModuleGlobalName.proto */\nstatic CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name);\n\n/* PyCFunctionFastCall.proto */\n#if CYTHON_FAST_PYCCALL\nstatic CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);\n#else\n#define __Pyx_PyCFunction_FastCall(func, args, nargs)  (assert(0), NULL)\n#endif\n\n/* PyFunctionFastCall.proto */\n#if CYTHON_FAST_PYCALL\n#define __Pyx_PyFunction_FastCall(func, args, nargs)\\\n    __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)\n#if 1 || PY_VERSION_HEX < 0x030600B1\nstatic PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs);\n#else\n#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)\n#endif\n#endif\n\n/* PyObjectCallMethO.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);\n#endif\n\n/* PyObjectCallOneArg.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);\n\n/* PyIntBinop.proto */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, int inplace);\n#else\n#define __Pyx_PyInt_EqObjC(op1, op2, intval, inplace)\\\n    PyObject_RichCompare(op1, op2, Py_EQ)\n    #endif\n\n/* ListCompAppend.proto */\n#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS\nstatic CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {\n    PyListObject* L = (PyListObject*) list;\n    Py_ssize_t len = Py_SIZE(list);\n    if (likely(L->allocated > len)) {\n        Py_INCREF(x);\n        PyList_SET_ITEM(list, len, x);\n        Py_SIZE(list) = len+1;\n        return 0;\n    }\n    return PyList_Append(list, x);\n}\n#else\n#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)\n#endif\n\n/* FetchCommonType.proto */\nstatic PyTypeObject* 
__Pyx_FetchCommonType(PyTypeObject* type);\n\n/* CythonFunction.proto */\n#define __Pyx_CyFunction_USED 1\n#include <structmember.h>\n#define __Pyx_CYFUNCTION_STATICMETHOD  0x01\n#define __Pyx_CYFUNCTION_CLASSMETHOD   0x02\n#define __Pyx_CYFUNCTION_CCLASS        0x04\n#define __Pyx_CyFunction_GetClosure(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_closure)\n#define __Pyx_CyFunction_GetClassObj(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_classobj)\n#define __Pyx_CyFunction_Defaults(type, f)\\\n    ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))\n#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\\\n    ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)\ntypedef struct {\n    PyCFunctionObject func;\n#if PY_VERSION_HEX < 0x030500A0\n    PyObject *func_weakreflist;\n#endif\n    PyObject *func_dict;\n    PyObject *func_name;\n    PyObject *func_qualname;\n    PyObject *func_doc;\n    PyObject *func_globals;\n    PyObject *func_code;\n    PyObject *func_closure;\n    PyObject *func_classobj;\n    void *defaults;\n    int defaults_pyobjects;\n    int flags;\n    PyObject *defaults_tuple;\n    PyObject *defaults_kwdict;\n    PyObject *(*defaults_getter)(PyObject *);\n    PyObject *func_annotations;\n} __pyx_CyFunctionObject;\nstatic PyTypeObject *__pyx_CyFunctionType = 0;\n#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code)\\\n    __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,\n                                      int flags, PyObject* qualname,\n                                      PyObject *self,\n                                      PyObject *module, PyObject *globals,\n                                      PyObject* code);\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,\n                                                         size_t size,\n                                       
                  int pyobjects);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,\n                                                            PyObject *tuple);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,\n                                                             PyObject *dict);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,\n                                                              PyObject *dict);\nstatic int __pyx_CyFunction_init(void);\n\n/* BufferFallbackError.proto */\nstatic void __Pyx_RaiseBufferFallbackError(void);\n\n/* None.proto */\nstatic CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);\n\n/* BufferIndexError.proto */\nstatic void __Pyx_RaiseBufferIndexError(int axis);\n\n#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0)\n/* DictGetItem.proto */\n#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY\nstatic PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {\n    PyObject *value;\n    value = PyDict_GetItemWithError(d, key);\n    if (unlikely(!value)) {\n        if (!PyErr_Occurred()) {\n            PyObject* args = PyTuple_Pack(1, key);\n            if (likely(args))\n                PyErr_SetObject(PyExc_KeyError, args);\n            Py_XDECREF(args);\n        }\n        return NULL;\n    }\n    Py_INCREF(value);\n    return value;\n}\n#else\n    #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)\n#endif\n\n/* RaiseTooManyValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);\n\n/* RaiseNeedMoreValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);\n\n/* RaiseNoneIterError.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);\n\n/* SaveResetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSave(type, value, tb)  __Pyx__ExceptionSave(__pyx_tstate, type, 
value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#define __Pyx_ExceptionReset(type, value, tb)  __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\n#else\n#define __Pyx_ExceptionSave(type, value, tb)   PyErr_GetExcInfo(type, value, tb)\n#define __Pyx_ExceptionReset(type, value, tb)  PyErr_SetExcInfo(type, value, tb)\n#endif\n\n/* PyErrExceptionMatches.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);\n#else\n#define __Pyx_PyErr_ExceptionMatches(err)  PyErr_ExceptionMatches(err)\n#endif\n\n/* GetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_GetException(type, value, tb)  __Pyx__GetException(__pyx_tstate, type, value, tb)\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* Import.proto */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);\n\n/* CodeObjectCache.proto */\ntypedef struct {\n    PyCodeObject* code_object;\n    int code_line;\n} __Pyx_CodeObjectCacheEntry;\nstruct __Pyx_CodeObjectCache {\n    int count;\n    int max_count;\n    __Pyx_CodeObjectCacheEntry* entries;\n};\nstatic struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);\nstatic PyCodeObject *__pyx_find_code_object(int code_line);\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);\n\n/* AddTraceback.proto */\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n 
                              int py_line, const char *filename);\n\n/* BufferStructDeclare.proto */\ntypedef struct {\n  Py_ssize_t shape, strides, suboffsets;\n} __Pyx_Buf_DimInfo;\ntypedef struct {\n  size_t refcount;\n  Py_buffer pybuffer;\n} __Pyx_Buffer;\ntypedef struct {\n  __Pyx_Buffer *rcbuffer;\n  char *data;\n  __Pyx_Buf_DimInfo diminfo[8];\n} __Pyx_LocalBuf_ND;\n\n#if PY_MAJOR_VERSION < 3\n    static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);\n    static void __Pyx_ReleaseBuffer(Py_buffer *view);\n#else\n    #define __Pyx_GetBuffer PyObject_GetBuffer\n    #define __Pyx_ReleaseBuffer PyBuffer_Release\n#endif\n\n\n/* None.proto */\nstatic Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0};\nstatic Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1};\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_siz(siz value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_Py_intptr_t(Py_intptr_t value);\n\n/* RealImag.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    #define __Pyx_CREAL(z) ((z).real())\n    #define __Pyx_CIMAG(z) ((z).imag())\n  #else\n    #define __Pyx_CREAL(z) (__real__(z))\n    #define __Pyx_CIMAG(z) (__imag__(z))\n  #endif\n#else\n    #define __Pyx_CREAL(z) ((z).real)\n    #define __Pyx_CIMAG(z) ((z).imag)\n#endif\n#if defined(__cplusplus) && CYTHON_CCOMPLEX\\\n        && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)\n    #define __Pyx_SET_CREAL(z,x) ((z).real(x))\n    #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))\n#else\n    #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)\n    #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_float(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_float(a, b)  
((a)+(b))\n    #define __Pyx_c_diff_float(a, b) ((a)-(b))\n    #define __Pyx_c_prod_float(a, b) ((a)*(b))\n    #define __Pyx_c_quot_float(a, b) ((a)/(b))\n    #define __Pyx_c_neg_float(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_float(z) ((z)==(float)0)\n    #define __Pyx_c_conj_float(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (::std::abs(z))\n        #define __Pyx_c_pow_float(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_float(z) ((z)==0)\n    #define __Pyx_c_conj_float(z)    (conjf(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (cabsf(z))\n        #define __Pyx_c_pow_float(a, b)  (cpowf(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    #endif\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_double(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_double(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_double(a, b) ((a)-(b))\n    #define __Pyx_c_prod_double(a, b) 
((a)*(b))\n    #define __Pyx_c_quot_double(a, b) ((a)/(b))\n    #define __Pyx_c_neg_double(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_double(z) ((z)==(double)0)\n    #define __Pyx_c_conj_double(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (::std::abs(z))\n        #define __Pyx_c_pow_double(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_double(z) ((z)==0)\n    #define __Pyx_c_conj_double(z)    (conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (cabs(z))\n        #define __Pyx_c_pow_double(a, b)  (cpow(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    #endif\n#endif\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE siz __Pyx_PyInt_As_siz(PyObject *);\n\n/* 
CIntFromPy.proto */\nstatic CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);\n\n/* CheckBinaryVersion.proto */\nstatic int __Pyx_check_binary_version(void);\n\n/* PyIdentifierFromString.proto */\n#if !defined(__Pyx_PyIdentifier_FromString)\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s)\n#else\n  #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s)\n#endif\n#endif\n\n/* ModuleImport.proto */\nstatic PyObject *__Pyx_ImportModule(const char *name);\n\n/* TypeImport.proto */\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict);\n\n/* InitStrings.proto */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t);\n\n\n/* Module declarations from 'cpython.buffer' */\n\n/* Module declarations from 'libc.string' */\n\n/* Module declarations from 'libc.stdio' */\n\n/* Module declarations from '__builtin__' */\n\n/* Module declarations from 'cpython.type' */\nstatic PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;\n\n/* Module declarations from 'cpython' */\n\n/* Module declarations from 'cpython.object' */\n\n/* Module declarations from 'cpython.ref' */\n\n/* Module declarations from 'libc.stdlib' */\n\n/* Module declarations from 'numpy' */\n\n/* Module declarations from 'numpy' */\nstatic PyTypeObject *__pyx_ptype_5numpy_dtype = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void); /*proto*/\n\n/* Module declarations from 'pycocotools._mask' 
*/\nstatic PyTypeObject *__pyx_ptype_11pycocotools_5_mask_RLEs = 0;\nstatic PyTypeObject *__pyx_ptype_11pycocotools_5_mask_Masks = 0;\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t = { \"uint8_t\", NULL, sizeof(__pyx_t_5numpy_uint8_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint8_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint8_t), 0 };\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_double_t = { \"double_t\", NULL, sizeof(__pyx_t_5numpy_double_t), { 0 }, 0, 'R', 0, 0 };\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t = { \"uint32_t\", NULL, sizeof(__pyx_t_5numpy_uint32_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint32_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint32_t), 0 };\n#define __Pyx_MODULE_NAME \"pycocotools._mask\"\nint __pyx_module_is_main_pycocotools___mask = 0;\n\n/* Implementation of 'pycocotools._mask' */\nstatic PyObject *__pyx_builtin_range;\nstatic PyObject *__pyx_builtin_AttributeError;\nstatic PyObject *__pyx_builtin_enumerate;\nstatic PyObject *__pyx_builtin_ValueError;\nstatic PyObject *__pyx_builtin_RuntimeError;\nstatic PyObject *__pyx_builtin_ImportError;\nstatic const char __pyx_k_F[] = \"F\";\nstatic const char __pyx_k_N[] = \"N\";\nstatic const char __pyx_k_R[] = \"R\";\nstatic const char __pyx_k_a[] = \"_a\";\nstatic const char __pyx_k_h[] = \"h\";\nstatic const char __pyx_k_i[] = \"i\";\nstatic const char __pyx_k_j[] = \"j\";\nstatic const char __pyx_k_m[] = \"m\";\nstatic const char __pyx_k_n[] = \"n\";\nstatic const char __pyx_k_p[] = \"p\";\nstatic const char __pyx_k_w[] = \"w\";\nstatic const char __pyx_k_Rs[] = \"Rs\";\nstatic const char __pyx_k_bb[] = \"bb\";\nstatic const char __pyx_k_dt[] = \"dt\";\nstatic const char __pyx_k_gt[] = \"gt\";\nstatic const char __pyx_k_np[] = \"np\";\nstatic const char __pyx_k_a_2[] = \"a\";\nstatic const char __pyx_k_all[] = \"all\";\nstatic const char __pyx_k_iou[] = \"_iou\";\nstatic const char __pyx_k_len[] = \"_len\";\nstatic const char __pyx_k_obj[] = 
\"obj\";\nstatic const char __pyx_k_area[] = \"area\";\nstatic const char __pyx_k_bb_2[] = \"_bb\";\nstatic const char __pyx_k_cnts[] = \"cnts\";\nstatic const char __pyx_k_data[] = \"data\";\nstatic const char __pyx_k_main[] = \"__main__\";\nstatic const char __pyx_k_mask[] = \"mask\";\nstatic const char __pyx_k_objs[] = \"objs\";\nstatic const char __pyx_k_poly[] = \"poly\";\nstatic const char __pyx_k_size[] = \"size\";\nstatic const char __pyx_k_test[] = \"__test__\";\nstatic const char __pyx_k_array[] = \"array\";\nstatic const char __pyx_k_bbIou[] = \"_bbIou\";\nstatic const char __pyx_k_dtype[] = \"dtype\";\nstatic const char __pyx_k_iou_2[] = \"iou\";\nstatic const char __pyx_k_isbox[] = \"isbox\";\nstatic const char __pyx_k_isrle[] = \"isrle\";\nstatic const char __pyx_k_masks[] = \"masks\";\nstatic const char __pyx_k_merge[] = \"merge\";\nstatic const char __pyx_k_numpy[] = \"numpy\";\nstatic const char __pyx_k_order[] = \"order\";\nstatic const char __pyx_k_pyobj[] = \"pyobj\";\nstatic const char __pyx_k_range[] = \"range\";\nstatic const char __pyx_k_shape[] = \"shape\";\nstatic const char __pyx_k_uint8[] = \"uint8\";\nstatic const char __pyx_k_zeros[] = \"zeros\";\nstatic const char __pyx_k_astype[] = \"astype\";\nstatic const char __pyx_k_author[] = \"__author__\";\nstatic const char __pyx_k_counts[] = \"counts\";\nstatic const char __pyx_k_decode[] = \"decode\";\nstatic const char __pyx_k_double[] = \"double\";\nstatic const char __pyx_k_encode[] = \"encode\";\nstatic const char __pyx_k_frBbox[] = \"frBbox\";\nstatic const char __pyx_k_frPoly[] = \"frPoly\";\nstatic const char __pyx_k_import[] = \"__import__\";\nstatic const char __pyx_k_iouFun[] = \"_iouFun\";\nstatic const char __pyx_k_rleIou[] = \"_rleIou\";\nstatic const char __pyx_k_toBbox[] = \"toBbox\";\nstatic const char __pyx_k_ucRles[] = \"ucRles\";\nstatic const char __pyx_k_uint32[] = \"uint32\";\nstatic const char __pyx_k_iscrowd[] = \"iscrowd\";\nstatic const char __pyx_k_np_poly[] = 
\"np_poly\";\nstatic const char __pyx_k_preproc[] = \"_preproc\";\nstatic const char __pyx_k_reshape[] = \"reshape\";\nstatic const char __pyx_k_rleObjs[] = \"rleObjs\";\nstatic const char __pyx_k_tsungyi[] = \"tsungyi\";\nstatic const char __pyx_k_c_string[] = \"c_string\";\nstatic const char __pyx_k_frString[] = \"_frString\";\nstatic const char __pyx_k_toString[] = \"_toString\";\nstatic const char __pyx_k_enumerate[] = \"enumerate\";\nstatic const char __pyx_k_intersect[] = \"intersect\";\nstatic const char __pyx_k_py_string[] = \"py_string\";\nstatic const char __pyx_k_pyiscrowd[] = \"pyiscrowd\";\nstatic const char __pyx_k_ValueError[] = \"ValueError\";\nstatic const char __pyx_k_ImportError[] = \"ImportError\";\nstatic const char __pyx_k_frPyObjects[] = \"frPyObjects\";\nstatic const char __pyx_k_RuntimeError[] = \"RuntimeError\";\nstatic const char __pyx_k_AttributeError[] = \"AttributeError\";\nstatic const char __pyx_k_iou_locals__len[] = \"iou.<locals>._len\";\nstatic const char __pyx_k_frUncompressedRLE[] = \"frUncompressedRLE\";\nstatic const char __pyx_k_iou_locals__bbIou[] = \"iou.<locals>._bbIou\";\nstatic const char __pyx_k_pycocotools__mask[] = \"pycocotools._mask\";\nstatic const char __pyx_k_iou_locals__rleIou[] = \"iou.<locals>._rleIou\";\nstatic const char __pyx_k_iou_locals__preproc[] = \"iou.<locals>._preproc\";\nstatic const char __pyx_k_input_data_type_not_allowed[] = \"input data type not allowed.\";\nstatic const char __pyx_k_input_type_is_not_supported[] = \"input type is not supported.\";\nstatic const char __pyx_k_ndarray_is_not_C_contiguous[] = \"ndarray is not C contiguous\";\nstatic const char __pyx_k_home_ubuntu_Work_brbchen_unskyc[] = \"/home/ubuntu/Work/brbchen/unskychen/trying/faster rcnn/lib/pycocotools/_mask.pyx\";\nstatic const char __pyx_k_numpy_core_multiarray_failed_to[] = \"numpy.core.multiarray failed to import\";\nstatic const char __pyx_k_numpy_ndarray_input_is_only_for[] = \"numpy ndarray input is only for *bounding 
boxes* and should have Nx4 dimension\";\nstatic const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = \"unknown dtype code in numpy.pxd (%d)\";\nstatic const char __pyx_k_unrecognized_type_The_following[] = \"unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.\";\nstatic const char __pyx_k_Format_string_allocated_too_shor[] = \"Format string allocated too short, see comment in numpy.pxd\";\nstatic const char __pyx_k_Non_native_byte_order_not_suppor[] = \"Non-native byte order not supported\";\nstatic const char __pyx_k_The_dt_and_gt_should_have_the_sa[] = \"The dt and gt should have the same data type, either RLEs, list or np.ndarray\";\nstatic const char __pyx_k_list_input_can_be_bounding_box_N[] = \"list input can be bounding box (Nx4) or RLEs ([RLE])\";\nstatic const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = \"ndarray is not Fortran contiguous\";\nstatic const char __pyx_k_numpy_core_umath_failed_to_impor[] = \"numpy.core.umath failed to import\";\nstatic const char __pyx_k_Format_string_allocated_too_shor_2[] = \"Format string allocated too short.\";\nstatic PyObject *__pyx_n_s_AttributeError;\nstatic PyObject *__pyx_n_s_F;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;\nstatic PyObject *__pyx_n_s_ImportError;\nstatic PyObject *__pyx_n_s_N;\nstatic PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;\nstatic PyObject *__pyx_n_s_R;\nstatic PyObject *__pyx_n_s_Rs;\nstatic PyObject *__pyx_n_s_RuntimeError;\nstatic PyObject *__pyx_kp_s_The_dt_and_gt_should_have_the_sa;\nstatic PyObject *__pyx_n_s_ValueError;\nstatic PyObject *__pyx_n_s_a;\nstatic PyObject *__pyx_n_s_a_2;\nstatic PyObject *__pyx_n_s_all;\nstatic PyObject *__pyx_n_s_area;\nstatic PyObject *__pyx_n_s_array;\nstatic PyObject *__pyx_n_s_astype;\nstatic PyObject *__pyx_n_s_author;\nstatic PyObject *__pyx_n_s_bb;\nstatic PyObject *__pyx_n_s_bbIou;\nstatic PyObject 
*__pyx_n_s_bb_2;\nstatic PyObject *__pyx_n_s_c_string;\nstatic PyObject *__pyx_n_s_cnts;\nstatic PyObject *__pyx_n_s_counts;\nstatic PyObject *__pyx_n_s_data;\nstatic PyObject *__pyx_n_s_decode;\nstatic PyObject *__pyx_n_s_double;\nstatic PyObject *__pyx_n_s_dt;\nstatic PyObject *__pyx_n_s_dtype;\nstatic PyObject *__pyx_n_s_encode;\nstatic PyObject *__pyx_n_s_enumerate;\nstatic PyObject *__pyx_n_s_frBbox;\nstatic PyObject *__pyx_n_s_frPoly;\nstatic PyObject *__pyx_n_s_frPyObjects;\nstatic PyObject *__pyx_n_s_frString;\nstatic PyObject *__pyx_n_s_frUncompressedRLE;\nstatic PyObject *__pyx_n_s_gt;\nstatic PyObject *__pyx_n_s_h;\nstatic PyObject *__pyx_kp_s_home_ubuntu_Work_brbchen_unskyc;\nstatic PyObject *__pyx_n_s_i;\nstatic PyObject *__pyx_n_s_import;\nstatic PyObject *__pyx_kp_s_input_data_type_not_allowed;\nstatic PyObject *__pyx_kp_s_input_type_is_not_supported;\nstatic PyObject *__pyx_n_s_intersect;\nstatic PyObject *__pyx_n_s_iou;\nstatic PyObject *__pyx_n_s_iouFun;\nstatic PyObject *__pyx_n_s_iou_2;\nstatic PyObject *__pyx_n_s_iou_locals__bbIou;\nstatic PyObject *__pyx_n_s_iou_locals__len;\nstatic PyObject *__pyx_n_s_iou_locals__preproc;\nstatic PyObject *__pyx_n_s_iou_locals__rleIou;\nstatic PyObject *__pyx_n_s_isbox;\nstatic PyObject *__pyx_n_s_iscrowd;\nstatic PyObject *__pyx_n_s_isrle;\nstatic PyObject *__pyx_n_s_j;\nstatic PyObject *__pyx_n_s_len;\nstatic PyObject *__pyx_kp_s_list_input_can_be_bounding_box_N;\nstatic PyObject *__pyx_n_s_m;\nstatic PyObject *__pyx_n_s_main;\nstatic PyObject *__pyx_n_s_mask;\nstatic PyObject *__pyx_n_s_masks;\nstatic PyObject *__pyx_n_s_merge;\nstatic PyObject *__pyx_n_s_n;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;\nstatic PyObject *__pyx_n_s_np;\nstatic PyObject *__pyx_n_s_np_poly;\nstatic PyObject *__pyx_n_s_numpy;\nstatic PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;\nstatic PyObject 
*__pyx_kp_s_numpy_core_umath_failed_to_impor;\nstatic PyObject *__pyx_kp_s_numpy_ndarray_input_is_only_for;\nstatic PyObject *__pyx_n_s_obj;\nstatic PyObject *__pyx_n_s_objs;\nstatic PyObject *__pyx_n_s_order;\nstatic PyObject *__pyx_n_s_p;\nstatic PyObject *__pyx_n_s_poly;\nstatic PyObject *__pyx_n_s_preproc;\nstatic PyObject *__pyx_n_s_py_string;\nstatic PyObject *__pyx_n_s_pycocotools__mask;\nstatic PyObject *__pyx_n_s_pyiscrowd;\nstatic PyObject *__pyx_n_s_pyobj;\nstatic PyObject *__pyx_n_s_range;\nstatic PyObject *__pyx_n_s_reshape;\nstatic PyObject *__pyx_n_s_rleIou;\nstatic PyObject *__pyx_n_s_rleObjs;\nstatic PyObject *__pyx_n_s_shape;\nstatic PyObject *__pyx_n_s_size;\nstatic PyObject *__pyx_n_s_test;\nstatic PyObject *__pyx_n_s_toBbox;\nstatic PyObject *__pyx_n_s_toString;\nstatic PyObject *__pyx_n_s_tsungyi;\nstatic PyObject *__pyx_n_s_ucRles;\nstatic PyObject *__pyx_n_s_uint32;\nstatic PyObject *__pyx_n_s_uint8;\nstatic PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;\nstatic PyObject *__pyx_kp_s_unrecognized_type_The_following;\nstatic PyObject *__pyx_n_s_w;\nstatic PyObject *__pyx_n_s_zeros;\nstatic int __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n); /* proto */\nstatic void __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key); /* proto */\nstatic int __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_2__array__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct 
__pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_mask); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, int __pyx_v_intersect); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject 
*__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, siz __pyx_v_h, PyObject *__pyx_v_w); /* proto */\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_RLEs(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_Masks(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_int_0;\nstatic PyObject *__pyx_int_1;\nstatic PyObject *__pyx_int_4;\nstatic PyObject *__pyx_tuple_;\nstatic PyObject *__pyx_tuple__2;\nstatic PyObject *__pyx_tuple__3;\nstatic PyObject *__pyx_tuple__4;\nstatic PyObject *__pyx_tuple__5;\nstatic PyObject *__pyx_tuple__7;\nstatic PyObject *__pyx_tuple__9;\nstatic PyObject *__pyx_tuple__11;\nstatic PyObject *__pyx_tuple__13;\nstatic PyObject *__pyx_tuple__14;\nstatic PyObject *__pyx_tuple__15;\nstatic PyObject *__pyx_tuple__16;\nstatic PyObject *__pyx_tuple__17;\nstatic PyObject *__pyx_tuple__18;\nstatic PyObject *__pyx_tuple__19;\nstatic PyObject *__pyx_tuple__20;\nstatic PyObject *__pyx_tuple__21;\nstatic PyObject *__pyx_tuple__22;\nstatic PyObject *__pyx_tuple__23;\nstatic PyObject *__pyx_tuple__24;\nstatic PyObject *__pyx_tuple__25;\nstatic PyObject *__pyx_tuple__26;\nstatic PyObject *__pyx_tuple__28;\nstatic PyObject 
*__pyx_tuple__30;\nstatic PyObject *__pyx_tuple__32;\nstatic PyObject *__pyx_tuple__34;\nstatic PyObject *__pyx_tuple__36;\nstatic PyObject *__pyx_tuple__38;\nstatic PyObject *__pyx_tuple__40;\nstatic PyObject *__pyx_tuple__42;\nstatic PyObject *__pyx_tuple__44;\nstatic PyObject *__pyx_tuple__46;\nstatic PyObject *__pyx_tuple__48;\nstatic PyObject *__pyx_codeobj__6;\nstatic PyObject *__pyx_codeobj__8;\nstatic PyObject *__pyx_codeobj__10;\nstatic PyObject *__pyx_codeobj__12;\nstatic PyObject *__pyx_codeobj__27;\nstatic PyObject *__pyx_codeobj__29;\nstatic PyObject *__pyx_codeobj__31;\nstatic PyObject *__pyx_codeobj__33;\nstatic PyObject *__pyx_codeobj__35;\nstatic PyObject *__pyx_codeobj__37;\nstatic PyObject *__pyx_codeobj__39;\nstatic PyObject *__pyx_codeobj__41;\nstatic PyObject *__pyx_codeobj__43;\nstatic PyObject *__pyx_codeobj__45;\nstatic PyObject *__pyx_codeobj__47;\nstatic PyObject *__pyx_codeobj__49;\n\n/* \"pycocotools/_mask.pyx\":57\n *     cdef siz _n\n * \n *     def __cinit__(self, siz n =0):             # <<<<<<<<<<<<<<\n *         rlesInit(&self._R, n)\n *         self._n = n\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  siz __pyx_v_n;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_n,0};\n    PyObject* values[1] = {0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n      
  case  0:\n        if (kw_args > 0) {\n          PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_n);\n          if (value) { values[0] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__cinit__\") < 0)) __PYX_ERR(0, 57, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    if (values[0]) {\n      __pyx_v_n = __Pyx_PyInt_As_siz(values[0]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 57, __pyx_L3_error)\n    } else {\n      __pyx_v_n = ((siz)0);\n    }\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 57, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self), __pyx_v_n);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":58\n * \n *     def __cinit__(self, siz n =0):\n *         rlesInit(&self._R, n)             # <<<<<<<<<<<<<<\n *         self._n = n\n * \n */\n  rlesInit((&__pyx_v_self->_R), __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":59\n *     def __cinit__(self, siz n =0):\n *         rlesInit(&self._R, 
n)\n *         self._n = n             # <<<<<<<<<<<<<<\n * \n *     # free the RLE array here\n */\n  __pyx_v_self->_n = __pyx_v_n;\n\n  /* \"pycocotools/_mask.pyx\":57\n *     cdef siz _n\n * \n *     def __cinit__(self, siz n =0):             # <<<<<<<<<<<<<<\n *         rlesInit(&self._R, n)\n *         self._n = n\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":62\n * \n *     # free the RLE array here\n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._R is not NULL:\n *             for i in range(self._n):\n */\n\n/* Python wrapper */\nstatic void __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self); /*proto*/\nstatic void __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__dealloc__ (wrapper)\", 0);\n  __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self) {\n  siz __pyx_v_i;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  siz __pyx_t_2;\n  siz __pyx_t_3;\n  __Pyx_RefNannySetupContext(\"__dealloc__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":63\n *     # free the RLE array here\n *     def __dealloc__(self):\n *         if self._R is not NULL:             # <<<<<<<<<<<<<<\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n */\n  __pyx_t_1 = ((__pyx_v_self->_R != NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"pycocotools/_mask.pyx\":64\n *     def __dealloc__(self):\n *         if self._R is not NULL:\n *             for i in range(self._n):             # <<<<<<<<<<<<<<\n *                 free(self._R[i].cnts)\n *             free(self._R)\n */\n    __pyx_t_2 = 
__pyx_v_self->_n;\n    for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {\n      __pyx_v_i = __pyx_t_3;\n\n      /* \"pycocotools/_mask.pyx\":65\n *         if self._R is not NULL:\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)             # <<<<<<<<<<<<<<\n *             free(self._R)\n *     def __getattr__(self, key):\n */\n      free((__pyx_v_self->_R[__pyx_v_i]).cnts);\n    }\n\n    /* \"pycocotools/_mask.pyx\":66\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n *             free(self._R)             # <<<<<<<<<<<<<<\n *     def __getattr__(self, key):\n *         if key == 'n':\n */\n    free(__pyx_v_self->_R);\n\n    /* \"pycocotools/_mask.pyx\":63\n *     # free the RLE array here\n *     def __dealloc__(self):\n *         if self._R is not NULL:             # <<<<<<<<<<<<<<\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":62\n * \n *     # free the RLE array here\n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._R is not NULL:\n *             for i in range(self._n):\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"pycocotools/_mask.pyx\":67\n *                 free(self._R[i].cnts)\n *             free(self._R)\n *     def __getattr__(self, key):             # <<<<<<<<<<<<<<\n *         if key == 'n':\n *             return self._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getattr__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self), 
((PyObject *)__pyx_v_key));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"__getattr__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":68\n *             free(self._R)\n *     def __getattr__(self, key):\n *         if key == 'n':             # <<<<<<<<<<<<<<\n *             return self._n\n *         raise AttributeError(key)\n */\n  __pyx_t_1 = (__Pyx_PyString_Equals(__pyx_v_key, __pyx_n_s_n, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 68, __pyx_L1_error)\n  if (__pyx_t_1) {\n\n    /* \"pycocotools/_mask.pyx\":69\n *     def __getattr__(self, key):\n *         if key == 'n':\n *             return self._n             # <<<<<<<<<<<<<<\n *         raise AttributeError(key)\n * \n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_r = __pyx_t_2;\n    __pyx_t_2 = 0;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":68\n *             free(self._R)\n *     def __getattr__(self, key):\n *         if key == 'n':             # <<<<<<<<<<<<<<\n *             return self._n\n *         raise AttributeError(key)\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":70\n *         if key == 'n':\n *             return self._n\n *         raise AttributeError(key)             # <<<<<<<<<<<<<<\n * \n * # python class to wrap Mask array in C\n */\n  __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 70, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_INCREF(__pyx_v_key);\n  __Pyx_GIVEREF(__pyx_v_key);\n  PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_key);\n  __pyx_t_3 = 
__Pyx_PyObject_Call(__pyx_builtin_AttributeError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 70, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __PYX_ERR(0, 70, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":67\n *                 free(self._R[i].cnts)\n *             free(self._R)\n *     def __getattr__(self, key):             # <<<<<<<<<<<<<<\n *         if key == 'n':\n *             return self._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__getattr__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":80\n *     cdef siz _n\n * \n *     def __cinit__(self, h, w, n):             # <<<<<<<<<<<<<<\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_h = 0;\n  PyObject *__pyx_v_w = 0;\n  PyObject *__pyx_v_n = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_h,&__pyx_n_s_w,&__pyx_n_s_n,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        
case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, 1); __PYX_ERR(0, 80, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, 2); __PYX_ERR(0, 80, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__cinit__\") < 0)) __PYX_ERR(0, 80, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_h = values[0];\n    __pyx_v_w = values[1];\n    __pyx_v_n = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 80, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* function exit code */\n  
__Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  size_t __pyx_t_4;\n  siz __pyx_t_5;\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":81\n * \n *     def __cinit__(self, h, w, n):\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))             # <<<<<<<<<<<<<<\n *         self._h = h\n *         self._w = w\n */\n  __pyx_t_1 = PyNumber_Multiply(__pyx_v_h, __pyx_v_w); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_FromSize_t((sizeof(byte))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_4 = __Pyx_PyInt_As_size_t(__pyx_t_3); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_self->_mask = ((byte *)malloc(__pyx_t_4));\n\n  /* \"pycocotools/_mask.pyx\":82\n *     def __cinit__(self, h, w, n):\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h             # <<<<<<<<<<<<<<\n *         self._w = w\n *         self._n = n\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_h); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 82, 
__pyx_L1_error)\n  __pyx_v_self->_h = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":83\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n *         self._w = w             # <<<<<<<<<<<<<<\n *         self._n = n\n *     # def __dealloc__(self):\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_w); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 83, __pyx_L1_error)\n  __pyx_v_self->_w = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":84\n *         self._h = h\n *         self._w = w\n *         self._n = n             # <<<<<<<<<<<<<<\n *     # def __dealloc__(self):\n *         # the memory management of _mask has been passed to np.ndarray\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_n); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L1_error)\n  __pyx_v_self->_n = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":80\n *     cdef siz _n\n * \n *     def __cinit__(self, h, w, n):             # <<<<<<<<<<<<<<\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":90\n * \n *     # called when passing into np.array() and return an np.ndarray in column-major order\n *     def __array__(self):             # <<<<<<<<<<<<<<\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_3__array__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_3__array__(PyObject 
*__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__array__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks_2__array__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_2__array__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self) {\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_ndarray = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__array__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":92\n *     def __array__(self):\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n             # <<<<<<<<<<<<<<\n *         # Create a 1D array, and reshape it to fortran/Matlab column-major array\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')\n */\n  (__pyx_v_shape[0]) = ((((npy_intp)__pyx_v_self->_h) * __pyx_v_self->_w) * __pyx_v_self->_n);\n\n  /* \"pycocotools/_mask.pyx\":94\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n *         # Create a 1D array, and reshape it to fortran/Matlab column-major array\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')             # <<<<<<<<<<<<<<\n *         # The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n */\n  __pyx_t_1 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT8, __pyx_v_self->_mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_reshape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_self->_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_self->_w); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_4 = 0;\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = PyDict_New(); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 94, __pyx_L1_error)\n  __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_ndarray = __pyx_t_3;\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":96\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')\n *         # 
The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *         return ndarray\n * \n */\n  if (!(likely(((__pyx_v_ndarray) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_ndarray, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 96, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_ndarray), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":97\n *         # The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n *         return ndarray             # <<<<<<<<<<<<<<\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_ndarray);\n  __pyx_r = __pyx_v_ndarray;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":90\n * \n *     # called when passing into np.array() and return an np.ndarray in column-major order\n *     def __array__(self):             # <<<<<<<<<<<<<<\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__array__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_ndarray);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs); /*proto*/\nstatic PyMethodDef 
__pyx_mdef_11pycocotools_5_mask_1_toString = {\"_toString\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_1_toString, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_toString (wrapper)\", 0);\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_Rs), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, \"Rs\", 0))) __PYX_ERR(0, 100, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask__toString(__pyx_self, ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_Rs));\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs) {\n  siz __pyx_v_n;\n  PyObject *__pyx_v_py_string = 0;\n  char *__pyx_v_c_string;\n  PyObject *__pyx_v_objs = NULL;\n  siz __pyx_v_i;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  siz __pyx_t_2;\n  siz __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  int __pyx_t_7;\n  __Pyx_RefNannySetupContext(\"_toString\", 0);\n\n  /* \"pycocotools/_mask.pyx\":101\n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):\n *     cdef siz n = Rs.n             # <<<<<<<<<<<<<<\n *     cdef bytes py_string\n *     cdef char* c_string\n */\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 101, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_2 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 101, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_n = __pyx_t_2;\n\n  /* 
\"pycocotools/_mask.pyx\":104\n *     cdef bytes py_string\n *     cdef char* c_string\n *     objs = []             # <<<<<<<<<<<<<<\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n */\n  __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 104, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v_objs = ((PyObject*)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":105\n *     cdef char* c_string\n *     objs = []\n *     for i in range(n):             # <<<<<<<<<<<<<<\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string\n */\n  __pyx_t_2 = __pyx_v_n;\n  for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {\n    __pyx_v_i = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":106\n *     objs = []\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )             # <<<<<<<<<<<<<<\n *         py_string = c_string\n *         objs.append({\n */\n    __pyx_v_c_string = rleToString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_v_i]))));\n\n    /* \"pycocotools/_mask.pyx\":107\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string             # <<<<<<<<<<<<<<\n *         objs.append({\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n */\n    __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_c_string); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_1));\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":109\n *         py_string = c_string\n *         objs.append({\n *             'size': [Rs._R[i].h, Rs._R[i].w],             # <<<<<<<<<<<<<<\n *             'counts': py_string\n *         })\n */\n    __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_4 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).h); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).w); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = PyList_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyList_SET_ITEM(__pyx_t_6, 0, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyList_SET_ITEM(__pyx_t_6, 1, __pyx_t_5);\n    __pyx_t_4 = 0;\n    __pyx_t_5 = 0;\n    if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_size, __pyx_t_6) < 0) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n\n    /* \"pycocotools/_mask.pyx\":111\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n *             'counts': py_string\n *         })             # <<<<<<<<<<<<<<\n *         free(c_string)\n *     return objs\n */\n    if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_counts, __pyx_v_py_string) < 0) __PYX_ERR(0, 109, __pyx_L1_error)\n\n    /* \"pycocotools/_mask.pyx\":108\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string\n *         objs.append({             # <<<<<<<<<<<<<<\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n *             'counts': py_string\n */\n    __pyx_t_7 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_1); if (unlikely(__pyx_t_7 == -1)) __PYX_ERR(0, 108, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":112\n *             'counts': py_string\n *         })\n *         free(c_string)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n    free(__pyx_v_c_string);\n  }\n\n  /* \"pycocotools/_mask.pyx\":113\n *         })\n *         free(c_string)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = 
__pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"pycocotools._mask._toString\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_py_string);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3_frString = {\"_frString\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3_frString, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_frString (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_2_frString(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_py_string = 0;\n  char *__pyx_v_c_string;\n  PyObject *__pyx_v_i = NULL;\n  PyObject *__pyx_v_obj = NULL;\n  
PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *(*__pyx_t_4)(PyObject *);\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  char *__pyx_t_7;\n  Py_ssize_t __pyx_t_8;\n  siz __pyx_t_9;\n  siz __pyx_t_10;\n  __Pyx_RefNannySetupContext(\"_frString\", 0);\n\n  /* \"pycocotools/_mask.pyx\":117\n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):\n *     cdef siz n = len(rleObjs)             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     cdef bytes py_string\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_rleObjs); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 117, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":118\n * def _frString(rleObjs):\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     cdef bytes py_string\n *     cdef char* c_string\n */\n  __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 118, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);\n  __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 118, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":121\n *     cdef bytes py_string\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):             # <<<<<<<<<<<<<<\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n */\n  __Pyx_INCREF(__pyx_int_0);\n  __pyx_t_2 = __pyx_int_0;\n  if 
(likely(PyList_CheckExact(__pyx_v_rleObjs)) || PyTuple_CheckExact(__pyx_v_rleObjs)) {\n    __pyx_t_3 = __pyx_v_rleObjs; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0;\n    __pyx_t_4 = NULL;\n  } else {\n    __pyx_t_1 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_rleObjs); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 121, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 121, __pyx_L1_error)\n  }\n  for (;;) {\n    if (likely(!__pyx_t_4)) {\n      if (likely(PyList_CheckExact(__pyx_t_3))) {\n        if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_3)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      } else {\n        if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      }\n    } else {\n      __pyx_t_5 = __pyx_t_4(__pyx_t_3);\n      if (unlikely(!__pyx_t_5)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(0, 121, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_5);\n    }\n    
__Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __Pyx_INCREF(__pyx_t_2);\n    __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_2);\n    __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_2);\n    __pyx_t_2 = __pyx_t_5;\n    __pyx_t_5 = 0;\n\n    /* \"pycocotools/_mask.pyx\":122\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):\n *         py_string = str(obj['counts'])             # <<<<<<<<<<<<<<\n *         c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n */\n    __pyx_t_5 = PyObject_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&PyString_Type)), __pyx_t_6, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (!(likely(PyBytes_CheckExact(__pyx_t_5))||((__pyx_t_5) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"bytes\", Py_TYPE(__pyx_t_5)->tp_name), 0))) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_5));\n    __pyx_t_5 = 0;\n\n    /* \"pycocotools/_mask.pyx\":123\n *     for i, obj in enumerate(rleObjs):\n *         py_string = str(obj['counts'])\n *         c_string = py_string             # <<<<<<<<<<<<<<\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n *     return Rs\n */\n    __pyx_t_7 = __Pyx_PyObject_AsString(__pyx_v_py_string); if (unlikely((!__pyx_t_7) && 
PyErr_Occurred())) __PYX_ERR(0, 123, __pyx_L1_error)\n    __pyx_v_c_string = __pyx_t_7;\n\n    /* \"pycocotools/_mask.pyx\":124\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )             # <<<<<<<<<<<<<<\n *     return Rs\n * \n */\n    __pyx_t_8 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_8 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __pyx_t_5 = PyObject_GetItem(__pyx_v_obj, __pyx_n_s_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_5, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_9 = __Pyx_PyInt_As_siz(__pyx_t_6); if (unlikely((__pyx_t_9 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = PyObject_GetItem(__pyx_v_obj, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_6, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_10 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_10 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    rleFrString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_8]))), ((char *)__pyx_v_c_string), __pyx_t_9, __pyx_t_10);\n\n    /* \"pycocotools/_mask.pyx\":121\n *     cdef bytes py_string\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):             # <<<<<<<<<<<<<<\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n 
*/\n  }\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":125\n *         c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n *     return Rs             # <<<<<<<<<<<<<<\n * \n * # encode mask to RLEs objects\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n  __pyx_r = ((PyObject *)__pyx_v_Rs);\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"pycocotools._mask._frString\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_py_string);\n  __Pyx_XDECREF(__pyx_v_i);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_5encode = {\"encode\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5encode, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask) {\n  PyObject *__pyx_r = 0;\n  
__Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"encode (wrapper)\", 0);\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_mask), __pyx_ptype_5numpy_ndarray, 1, \"mask\", 0))) __PYX_ERR(0, 129, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4encode(__pyx_self, ((PyArrayObject *)__pyx_v_mask));\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_mask) {\n  npy_intp __pyx_v_h;\n  npy_intp __pyx_v_w;\n  npy_intp __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_mask;\n  __Pyx_Buffer __pyx_pybuffer_mask;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  npy_intp __pyx_t_1;\n  npy_intp __pyx_t_2;\n  npy_intp __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  __Pyx_RefNannySetupContext(\"encode\", 0);\n  __pyx_pybuffer_mask.pybuffer.buf = NULL;\n  __pyx_pybuffer_mask.refcount = 0;\n  __pyx_pybuffernd_mask.data = NULL;\n  __pyx_pybuffernd_mask.rcbuffer = &__pyx_pybuffer_mask;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_mask.rcbuffer->pybuffer, (PyObject*)__pyx_v_mask, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_F_CONTIGUOUS, 3, 0, __pyx_stack) == -1)) __PYX_ERR(0, 129, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_mask.diminfo[0].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_mask.diminfo[0].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_mask.diminfo[1].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_mask.diminfo[1].shape = 
__pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_mask.diminfo[2].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_mask.diminfo[2].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[2];\n\n  /* \"pycocotools/_mask.pyx\":130\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n */\n  __pyx_t_1 = (__pyx_v_mask->dimensions[0]);\n  __pyx_t_2 = (__pyx_v_mask->dimensions[1]);\n  __pyx_t_3 = (__pyx_v_mask->dimensions[2]);\n  __pyx_v_h = __pyx_t_1;\n  __pyx_v_w = __pyx_t_2;\n  __pyx_v_n = __pyx_t_3;\n\n  /* \"pycocotools/_mask.pyx\":131\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)\n */\n  __pyx_t_4 = __Pyx_PyInt_From_Py_intptr_t(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 131, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":132\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)             # 
<<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n  rleEncode(__pyx_v_Rs->_R, ((byte *)__pyx_v_mask->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":133\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 133, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_6 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {\n    __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5);\n    if (likely(__pyx_t_6)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n      __Pyx_INCREF(__pyx_t_6);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_5, function);\n    }\n  }\n  if (!__pyx_t_6) {\n    __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_5, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_5)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_6, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_6, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n    } else\n    #endif\n    {\n      __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 133, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      
__Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_6); __pyx_t_6 = NULL;\n      __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));\n      PyTuple_SET_ITEM(__pyx_t_7, 0+1, ((PyObject *)__pyx_v_Rs));\n      __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_7, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_objs = __pyx_t_4;\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":134\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * # decode mask from compressed list of RLE string or RLEs object\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.encode\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  
__Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_7decode = {\"decode\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_7decode, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"decode (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_6decode(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_masks = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  siz __pyx_t_5;\n  siz __pyx_t_6;\n  siz __pyx_t_7;\n  __Pyx_RefNannySetupContext(\"decode\", 0);\n\n  /* \"pycocotools/_mask.pyx\":138\n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)\n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(__pyx_v_rleObjs);\n      __Pyx_GIVEREF(__pyx_v_rleObjs);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n  }\n  
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 138, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":139\n * def decode(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n             # <<<<<<<<<<<<<<\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n */\n  __pyx_t_5 = (__pyx_v_Rs->_R[0]).h;\n  __pyx_t_6 = (__pyx_v_Rs->_R[0]).w;\n  __pyx_t_7 = __pyx_v_Rs->_n;\n  __pyx_v_h = __pyx_t_5;\n  __pyx_v_w = __pyx_t_6;\n  __pyx_v_n = __pyx_t_7;\n\n  /* \"pycocotools/_mask.pyx\":140\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)             # <<<<<<<<<<<<<<\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n *     return np.array(masks)\n */\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_w); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_2 = 0;\n  __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_Masks), __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 140, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_masks = ((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":141\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );             # <<<<<<<<<<<<<<\n *     return np.array(masks)\n * \n */\n  rleDecode(((RLE *)__pyx_v_Rs->_R), __pyx_v_masks->_mask, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":142\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n *     return np.array(masks)             # <<<<<<<<<<<<<<\n * \n * def merge(rleObjs, bint intersect=0):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_masks)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_masks)};\n      __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n  
  } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_masks)};\n      __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n    } else\n    #endif\n    {\n      __pyx_t_1 = PyTuple_New(1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(((PyObject *)__pyx_v_masks));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_masks));\n      PyTuple_SET_ITEM(__pyx_t_1, 0+1, ((PyObject *)__pyx_v_masks));\n      __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 142, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_r = __pyx_t_4;\n  __pyx_t_4 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"pycocotools._mask.decode\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_masks);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # 
<<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_9merge = {\"merge\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_9merge, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_rleObjs = 0;\n  int __pyx_v_intersect;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"merge (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_rleObjs,&__pyx_n_s_intersect,0};\n    PyObject* values[2] = {0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_rleObjs)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (kw_args > 0) {\n          PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_intersect);\n          if (value) { values[1] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"merge\") < 0)) __PYX_ERR(0, 144, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 
0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_rleObjs = values[0];\n    if (values[1]) {\n      __pyx_v_intersect = __Pyx_PyObject_IsTrue(values[1]); if (unlikely((__pyx_v_intersect == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L3_error)\n    } else {\n      __pyx_v_intersect = ((int)0);\n    }\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"merge\", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 144, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.merge\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_8merge(__pyx_self, __pyx_v_rleObjs, __pyx_v_intersect);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, int __pyx_v_intersect) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_R = 0;\n  PyObject *__pyx_v_obj = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  __Pyx_RefNannySetupContext(\"merge\", 0);\n\n  /* \"pycocotools/_mask.pyx\":145\n * \n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = 
PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 145, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(__pyx_v_rleObjs);\n      __Pyx_GIVEREF(__pyx_v_rleObjs);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 145, __pyx_L1_error)\n  __pyx_v_Rs = 
((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":146\n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)             # <<<<<<<<<<<<<<\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_tuple_, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v_R = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":147\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)             # <<<<<<<<<<<<<<\n *     obj = _toString(R)[0]\n *     return obj\n */\n  rleMerge(((RLE *)__pyx_v_Rs->_R), ((RLE *)__pyx_v_R->_R), ((siz)__pyx_v_Rs->_n), __pyx_v_intersect);\n\n  /* \"pycocotools/_mask.pyx\":148\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]             # <<<<<<<<<<<<<<\n *     return obj\n * \n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_4 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_4)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_4);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_4) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_R)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if 
(PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_R)};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_R)};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = NULL;\n      __Pyx_INCREF(((PyObject *)__pyx_v_R));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_R));\n      PyTuple_SET_ITEM(__pyx_t_3, 0+1, ((PyObject *)__pyx_v_R));\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_obj = __pyx_t_2;\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":149\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n *     return obj             # <<<<<<<<<<<<<<\n * \n * def area(rleObjs):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_obj);\n  __pyx_r = __pyx_v_obj;\n  goto __pyx_L0;\n\n  /* 
\"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"pycocotools._mask.merge\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_R);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_11area = {\"area\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_11area, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"area (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_10area(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  uint *__pyx_v__a;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_a = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject 
*__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"area\", 0);\n\n  /* \"pycocotools/_mask.pyx\":152\n * \n * def area(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n *     rleArea(Rs._R, Rs._n, _a)\n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 152, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 152, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_GIVEREF(__pyx_t_3); 
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(__pyx_v_rleObjs);\n      __Pyx_GIVEREF(__pyx_v_rleObjs);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 152, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":153\n * def area(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))             # <<<<<<<<<<<<<<\n *     rleArea(Rs._R, Rs._n, _a)\n *     cdef np.npy_intp shape[1]\n */\n  __pyx_v__a = ((uint *)malloc((__pyx_v_Rs->_n * (sizeof(unsigned int)))));\n\n  /* \"pycocotools/_mask.pyx\":154\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n *     rleArea(Rs._R, Rs._n, _a)             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n\n */\n  rleArea(__pyx_v_Rs->_R, __pyx_v_Rs->_n, __pyx_v__a);\n\n  /* \"pycocotools/_mask.pyx\":156\n *     rleArea(Rs._R, Rs._n, _a)\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n             # <<<<<<<<<<<<<<\n *     a = np.array((Rs._n, ), dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n */\n  (__pyx_v_shape[0]) = ((npy_intp)__pyx_v_Rs->_n);\n\n  /* \"pycocotools/_mask.pyx\":157\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n\n *     a = np.array((Rs._n, ), dtype=np.uint8)             # <<<<<<<<<<<<<<\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     
PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_Rs->_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 = PyDict_New(); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_v_a = __pyx_t_5;\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":158\n *     shape[0] = <np.npy_intp> Rs._n\n *     a = np.array((Rs._n, ), 
dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)             # <<<<<<<<<<<<<<\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n *     return a\n */\n  __pyx_t_5 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT32, __pyx_v__a); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 158, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_a, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":159\n *     a = np.array((Rs._n, ), dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     return a\n * \n */\n  if (!(likely(((__pyx_v_a) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_a, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 159, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_a), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":160\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n *     return a             # <<<<<<<<<<<<<<\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_a);\n  __pyx_r = __pyx_v_a;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.area\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_a);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_13iou = {\"iou\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_13iou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_dt = 0;\n  PyObject *__pyx_v_gt = 0;\n  PyObject *__pyx_v_pyiscrowd = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"iou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_pyiscrowd,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n    
  switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, 1); __PYX_ERR(0, 163, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_pyiscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, 2); __PYX_ERR(0, 163, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"iou\") < 0)) __PYX_ERR(0, 163, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_dt = values[0];\n    __pyx_v_gt = values[1];\n    __pyx_v_pyiscrowd = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 163, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = 
__pyx_pf_11pycocotools_5_mask_12iou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_pyiscrowd);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":164\n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_1_preproc = {\"_preproc\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_1_preproc, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_preproc (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_self, ((PyObject *)__pyx_v_objs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs) {\n  PyObject *__pyx_v_isbox = NULL;\n  PyObject *__pyx_v_isrle = NULL;\n  PyObject *__pyx_v_obj = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  int __pyx_t_9;\n  PyObject *__pyx_t_10 = NULL;\n  PyObject *(*__pyx_t_11)(PyObject *);\n  PyObject *__pyx_t_12 = NULL;\n  Py_ssize_t __pyx_t_13;\n  PyObject *__pyx_t_14 = NULL;\n  __Pyx_RefNannySetupContext(\"_preproc\", 0);\n  __Pyx_INCREF(__pyx_v_objs);\n\n  /* \"pycocotools/_mask.pyx\":165\n * def iou( dt, gt, pyiscrowd ):\n *     def 
_preproc(objs):\n *         if len(objs) == 0:             # <<<<<<<<<<<<<<\n *             return objs\n *         if type(objs) == np.ndarray:\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_objs); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 165, __pyx_L1_error)\n  __pyx_t_2 = ((__pyx_t_1 == 0) != 0);\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":166\n *     def _preproc(objs):\n *         if len(objs) == 0:\n *             return objs             # <<<<<<<<<<<<<<\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(__pyx_v_objs);\n    __pyx_r = __pyx_v_objs;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":165\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):\n *         if len(objs) == 0:             # <<<<<<<<<<<<<<\n *             return objs\n *         if type(objs) == np.ndarray:\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":167\n *         if len(objs) == 0:\n *             return objs\n *         if type(objs) == np.ndarray:             # <<<<<<<<<<<<<<\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))\n */\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 167, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 167, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":168\n *             return objs\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n */\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 168, __pyx_L1_error)\n    
__Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_2 = ((__pyx_t_1 == 1) != 0);\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":169\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))             # <<<<<<<<<<<<<<\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n */\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_objs, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GIVEREF(__pyx_t_5);\n      PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5);\n      __Pyx_INCREF(__pyx_int_1);\n      __Pyx_GIVEREF(__pyx_int_1);\n      PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_int_1);\n      __pyx_t_5 = 0;\n      __pyx_t_5 = NULL;\n      if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {\n        __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);\n        if (likely(__pyx_t_5)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n          __Pyx_INCREF(__pyx_t_5);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_4, function);\n        }\n      }\n      if (!__pyx_t_5) {\n        __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 169, __pyx_L1_error)\n        __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_4)) 
{\n          PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_t_6};\n          __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 169, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_t_6};\n          __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 169, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n        } else\n        #endif\n        {\n          __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 169, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_7);\n          __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;\n          __Pyx_GIVEREF(__pyx_t_6);\n          PyTuple_SET_ITEM(__pyx_t_7, 0+1, __pyx_t_6);\n          __pyx_t_6 = 0;\n          __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 169, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);\n      __pyx_t_3 = 0;\n\n      /* \"pycocotools/_mask.pyx\":168\n *             return objs\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n */\n    }\n\n    /* \"pycocotools/_mask.pyx\":171\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n *            
 if not len(objs.shape) == 2 or not objs.shape[1] == 4:             # <<<<<<<<<<<<<<\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n */\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_8 = ((!((__pyx_t_1 == 2) != 0)) != 0);\n    if (!__pyx_t_8) {\n    } else {\n      __pyx_t_2 = __pyx_t_8;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_3 = __Pyx_PyInt_EqObjC(__pyx_t_4, __pyx_int_4, 4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_9 = ((!__pyx_t_8) != 0);\n    __pyx_t_2 = __pyx_t_9;\n    __pyx_L7_bool_binop_done:;\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":172\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')             # <<<<<<<<<<<<<<\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:\n */\n 
     __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 172, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(0, 172, __pyx_L1_error)\n\n      /* \"pycocotools/_mask.pyx\":171\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:             # <<<<<<<<<<<<<<\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n */\n    }\n\n    /* \"pycocotools/_mask.pyx\":173\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)             # <<<<<<<<<<<<<<\n *         elif type(objs) == list:\n *             # check if list is in box format and convert it to np.ndarray\n */\n    __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_astype); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_double); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = NULL;\n    if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {\n      __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_4);\n      if (likely(__pyx_t_7)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n        __Pyx_INCREF(__pyx_t_7);\n        
__Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_4, function);\n      }\n    }\n    if (!__pyx_t_7) {\n      __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_4)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_6};\n        __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_6};\n        __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 173, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_7); __pyx_t_7 = NULL;\n        __Pyx_GIVEREF(__pyx_t_6);\n        PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_6);\n        __pyx_t_6 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":167\n *         if len(objs) == 0:\n *             return 
objs\n *         if type(objs) == np.ndarray:             # <<<<<<<<<<<<<<\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))\n */\n    goto __pyx_L4;\n  }\n\n  /* \"pycocotools/_mask.pyx\":174\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:             # <<<<<<<<<<<<<<\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n */\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 174, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":176\n *         elif type(objs) == list:\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))             # <<<<<<<<<<<<<<\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:\n */\n    __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_all); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = 
__Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_array); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {\n      __pyx_t_10 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_10); __pyx_t_1 = 0;\n      __pyx_t_11 = NULL;\n    } else {\n      __pyx_t_1 = -1; __pyx_t_10 = PyObject_GetIter(__pyx_v_objs); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      __pyx_t_11 = Py_TYPE(__pyx_t_10)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n    }\n    for (;;) {\n      if (likely(!__pyx_t_11)) {\n        if (likely(PyList_CheckExact(__pyx_t_10))) {\n          if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyList_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n        } else {\n          if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n        }\n      } else {\n        __pyx_t_12 = 
__pyx_t_11(__pyx_t_10);\n        if (unlikely(!__pyx_t_12)) {\n          PyObject* exc_type = PyErr_Occurred();\n          if (exc_type) {\n            if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n            else __PYX_ERR(0, 176, __pyx_L1_error)\n          }\n          break;\n        }\n        __Pyx_GOTREF(__pyx_t_12);\n      }\n      __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_12);\n      __pyx_t_12 = 0;\n      __pyx_t_13 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_13 == -1)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __pyx_t_2 = (__pyx_t_13 == 4);\n      if (__pyx_t_2) {\n      } else {\n        __pyx_t_14 = __Pyx_PyBool_FromLong(__pyx_t_2); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n        __pyx_t_12 = __pyx_t_14;\n        __pyx_t_14 = 0;\n        goto __pyx_L11_bool_binop_done;\n      }\n      __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_14); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n      if (!__pyx_t_2) {\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      } else {\n        __Pyx_INCREF(__pyx_t_14);\n        __pyx_t_12 = __pyx_t_14;\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        goto __pyx_L11_bool_binop_done;\n      }\n      __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_INCREF(__pyx_t_14);\n      __pyx_t_12 = __pyx_t_14;\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_L11_bool_binop_done:;\n      if (unlikely(__Pyx_ListComp_Append(__pyx_t_6, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 176, 
__pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __pyx_t_10 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {\n      __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_7);\n      if (likely(__pyx_t_10)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n        __Pyx_INCREF(__pyx_t_10);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_7, function);\n      }\n    }\n    if (!__pyx_t_10) {\n      __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_7)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_6};\n        __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_6};\n        __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_10); __pyx_t_10 = NULL;\n        __Pyx_GIVEREF(__pyx_t_6);\n        PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_t_6);\n        __pyx_t_6 = 0;\n        
__pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {\n      __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5);\n      if (likely(__pyx_t_7)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n        __Pyx_INCREF(__pyx_t_7);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_5, function);\n      }\n    }\n    if (!__pyx_t_7) {\n      __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_5)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};\n        __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};\n        __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_7); __pyx_t_7 = 
NULL;\n        __Pyx_GIVEREF(__pyx_t_4);\n        PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_t_4);\n        __pyx_t_4 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_12, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_v_isbox = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":177\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))             # <<<<<<<<<<<<<<\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n */\n    __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_all); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_12);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_array); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_4 = PyList_New(0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {\n      __pyx_t_6 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_6); __pyx_t_1 = 0;\n      __pyx_t_11 = NULL;\n    } else {\n      __pyx_t_1 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_v_objs); if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __pyx_t_11 = Py_TYPE(__pyx_t_6)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 177, __pyx_L1_error)\n    }\n    for (;;) {\n      if (likely(!__pyx_t_11)) {\n        if (likely(PyList_CheckExact(__pyx_t_6))) {\n          if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_6)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_10 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_1); __Pyx_INCREF(__pyx_t_10); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)\n          #else\n          __pyx_t_10 = PySequence_ITEM(__pyx_t_6, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          #endif\n        } else {\n          if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_6)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_10 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_1); __Pyx_INCREF(__pyx_t_10); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)\n          #else\n          __pyx_t_10 = PySequence_ITEM(__pyx_t_6, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_10);\n          #endif\n        }\n      } else {\n        __pyx_t_10 = __pyx_t_11(__pyx_t_6);\n        if (unlikely(!__pyx_t_10)) {\n          PyObject* exc_type = PyErr_Occurred();\n          if (exc_type) {\n            if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n            else __PYX_ERR(0, 177, __pyx_L1_error)\n          }\n          break;\n        }\n        __Pyx_GOTREF(__pyx_t_10);\n      }\n      __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_10);\n      __pyx_t_10 = 0;\n      __pyx_t_10 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyDict_Type)), Py_EQ); 
__Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n      if (unlikely(__Pyx_ListComp_Append(__pyx_t_4, (PyObject*)__pyx_t_10))) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {\n      __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_7);\n      if (likely(__pyx_t_6)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n        __Pyx_INCREF(__pyx_t_6);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_7, function);\n      }\n    }\n    if (!__pyx_t_6) {\n      __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_5);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_7)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_6, __pyx_t_4};\n        __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n        __Pyx_GOTREF(__pyx_t_5);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_6, __pyx_t_4};\n        __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n        __Pyx_GOTREF(__pyx_t_5);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_10 = PyTuple_New(1+1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __Pyx_GIVEREF(__pyx_t_6); 
PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_6); __pyx_t_6 = NULL;\n        __Pyx_GIVEREF(__pyx_t_4);\n        PyTuple_SET_ITEM(__pyx_t_10, 0+1, __pyx_t_4);\n        __pyx_t_4 = 0;\n        __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_10, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) {\n      __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_12);\n      if (likely(__pyx_t_7)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12);\n        __Pyx_INCREF(__pyx_t_7);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_12, function);\n      }\n    }\n    if (!__pyx_t_7) {\n      __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_12, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_12)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_5};\n        __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_12)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_5};\n        __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      } else\n      #endif\n      {\n        __pyx_t_10 = 
PyTuple_New(1+1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_7); __pyx_t_7 = NULL;\n        __Pyx_GIVEREF(__pyx_t_5);\n        PyTuple_SET_ITEM(__pyx_t_10, 0+1, __pyx_t_5);\n        __pyx_t_5 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    __pyx_v_isrle = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":178\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:             # <<<<<<<<<<<<<<\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:\n */\n    __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isbox); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 178, __pyx_L1_error)\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":179\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)             # <<<<<<<<<<<<<<\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))\n */\n      __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_array); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_12);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, 
__pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_v_objs);\n      __Pyx_GIVEREF(__pyx_v_objs);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_objs);\n      __pyx_t_10 = PyDict_New(); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_double); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      if (PyDict_SetItem(__pyx_t_10, __pyx_n_s_dtype, __pyx_t_7) < 0) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_3, __pyx_t_10); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);\n      __pyx_t_7 = 0;\n\n      /* \"pycocotools/_mask.pyx\":180\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n */\n      __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 180, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __pyx_t_1 = PyObject_Length(__pyx_t_7); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 180, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __pyx_t_2 = ((__pyx_t_1 == 1) != 0);\n      if (__pyx_t_2) {\n\n        /* \"pycocotools/_mask.pyx\":181\n *                 objs = np.array(objs, dtype=np.double)\n 
*                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))             # <<<<<<<<<<<<<<\n *             elif isrle:\n *                 objs = _frString(objs)\n */\n        __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_10);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_12 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_12);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_INCREF(__pyx_int_1);\n        __Pyx_GIVEREF(__pyx_int_1);\n        PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_int_1);\n        __Pyx_GIVEREF(__pyx_t_12);\n        PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_12);\n        __pyx_t_12 = 0;\n        __pyx_t_12 = NULL;\n        if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_10))) {\n          __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_10);\n          if (likely(__pyx_t_12)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_10);\n            __Pyx_INCREF(__pyx_t_12);\n            __Pyx_INCREF(function);\n            __Pyx_DECREF_SET(__pyx_t_10, function);\n          }\n        }\n        if (!__pyx_t_12) {\n          __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_10, __pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n          __Pyx_GOTREF(__pyx_t_7);\n        } else {\n          #if CYTHON_FAST_PYCALL\n          if (PyFunction_Check(__pyx_t_10)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_12, 
__pyx_t_3};\n            __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __Pyx_GOTREF(__pyx_t_7);\n            __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n          } else\n          #endif\n          #if CYTHON_FAST_PYCCALL\n          if (__Pyx_PyFastCFunction_Check(__pyx_t_10)) {\n            PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_t_3};\n            __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;\n            __Pyx_GOTREF(__pyx_t_7);\n            __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n          } else\n          #endif\n          {\n            __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_5);\n            __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_12); __pyx_t_12 = NULL;\n            __Pyx_GIVEREF(__pyx_t_3);\n            PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_3);\n            __pyx_t_3 = 0;\n            __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_5, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)\n            __Pyx_GOTREF(__pyx_t_7);\n            __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n          }\n        }\n        __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n        __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);\n        __pyx_t_7 = 0;\n\n        /* \"pycocotools/_mask.pyx\":180\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n */\n      }\n\n      /* \"pycocotools/_mask.pyx\":178\n *             isbox = np.all(np.array([(len(obj)==4) and 
((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:             # <<<<<<<<<<<<<<\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:\n */\n      goto __pyx_L16;\n    }\n\n    /* \"pycocotools/_mask.pyx\":182\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:             # <<<<<<<<<<<<<<\n *                 objs = _frString(objs)\n *             else:\n */\n    __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isrle); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 182, __pyx_L1_error)\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":183\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n *                 objs = _frString(objs)             # <<<<<<<<<<<<<<\n *             else:\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n */\n      __pyx_t_10 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 183, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      __pyx_t_5 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_10))) {\n        __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_10);\n        if (likely(__pyx_t_5)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_10);\n          __Pyx_INCREF(__pyx_t_5);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_10, function);\n        }\n      }\n      if (!__pyx_t_5) {\n        __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_10, __pyx_v_objs); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n      } else {\n        #if CYTHON_FAST_PYCALL\n        if (PyFunction_Check(__pyx_t_10)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_v_objs};\n          
__pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n          __Pyx_GOTREF(__pyx_t_7);\n        } else\n        #endif\n        #if CYTHON_FAST_PYCCALL\n        if (__Pyx_PyFastCFunction_Check(__pyx_t_10)) {\n          PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_v_objs};\n          __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error)\n          __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n          __Pyx_GOTREF(__pyx_t_7);\n        } else\n        #endif\n        {\n          __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 183, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_3);\n          __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); __pyx_t_5 = NULL;\n          __Pyx_INCREF(__pyx_v_objs);\n          __Pyx_GIVEREF(__pyx_v_objs);\n          PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_v_objs);\n          __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_3, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_7);\n          __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        }\n      }\n      __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);\n      __pyx_t_7 = 0;\n\n      /* \"pycocotools/_mask.pyx\":182\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:             # <<<<<<<<<<<<<<\n *                 objs = _frString(objs)\n *             else:\n */\n      goto __pyx_L16;\n    }\n\n    /* \"pycocotools/_mask.pyx\":185\n *                 objs = _frString(objs)\n *             else:\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')             # <<<<<<<<<<<<<<\n *         else:\n *             raise 
Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n */\n    /*else*/ {\n      __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 185, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_Raise(__pyx_t_7, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __PYX_ERR(0, 185, __pyx_L1_error)\n    }\n    __pyx_L16:;\n\n    /* \"pycocotools/_mask.pyx\":174\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:             # <<<<<<<<<<<<<<\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n */\n    goto __pyx_L4;\n  }\n\n  /* \"pycocotools/_mask.pyx\":187\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')             # <<<<<<<<<<<<<<\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n */\n  /*else*/ {\n    __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_Raise(__pyx_t_7, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __PYX_ERR(0, 187, __pyx_L1_error)\n  }\n  __pyx_L4:;\n\n  /* \"pycocotools/_mask.pyx\":188\n *         else:\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs             # <<<<<<<<<<<<<<\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_10);\n  __Pyx_XDECREF(__pyx_t_12);\n  __Pyx_XDECREF(__pyx_t_14);\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._preproc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_isbox);\n  __Pyx_XDECREF(__pyx_v_isrle);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_3_rleIou = {\"_rleIou\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt = 0;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  PyArrayObject *__pyx_v__iou = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_rleIou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};\n    PyObject* values[6] = {0,0,0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n        case  5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = 
PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 1); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 2); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        case  3:\n        if (likely((values[3] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 3); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        case  4:\n        if (likely((values[4] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 4); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        case  5:\n        if (likely((values[5] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 5); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"_rleIou\") < 0)) __PYX_ERR(0, 189, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 
1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n      values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n      values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n    }\n    __pyx_v_dt = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)values[0]);\n    __pyx_v_gt = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)values[1]);\n    __pyx_v_iscrowd = ((PyArrayObject *)values[2]);\n    __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 189, __pyx_L3_error)\n    __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 189, __pyx_L3_error)\n    __pyx_v__iou = ((PyArrayObject *)values[5]);\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 189, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._rleIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, \"dt\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, \"gt\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, \"iscrowd\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, \"_iou\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  
__pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {\n  __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;\n  __Pyx_Buffer __pyx_pybuffer__iou;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_rleIou\", 0);\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n  __pyx_pybuffer__iou.pybuffer.buf = NULL;\n  __pyx_pybuffer__iou.refcount = 0;\n  __pyx_pybuffernd__iou.data = NULL;\n  __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  }\n  __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = 
__pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];\n\n  /* \"pycocotools/_mask.pyx\":190\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )             # <<<<<<<<<<<<<<\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n */\n  rleIou(((RLE *)__pyx_v_dt->_R), ((RLE *)__pyx_v_gt->_R), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  
__Pyx_AddTraceback(\"pycocotools._mask.iou._rleIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_5_bbIou = {\"_bbIou\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyArrayObject *__pyx_v_dt = 0;\n  PyArrayObject *__pyx_v_gt = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  PyArrayObject *__pyx_v__iou = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_bbIou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};\n    PyObject* values[6] = {0,0,0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      
const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n        case  5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 1); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 2); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        case  3:\n        if (likely((values[3] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 3); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        case  4:\n        if (likely((values[4] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 4); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        case  5:\n        if (likely((values[5] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 5); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n      
  if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"_bbIou\") < 0)) __PYX_ERR(0, 191, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n      values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n      values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n    }\n    __pyx_v_dt = ((PyArrayObject *)values[0]);\n    __pyx_v_gt = ((PyArrayObject *)values[1]);\n    __pyx_v_iscrowd = ((PyArrayObject *)values[2]);\n    __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error)\n    __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error)\n    __pyx_v__iou = ((PyArrayObject *)values[5]);\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 191, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._bbIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_5numpy_ndarray, 1, \"dt\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_5numpy_ndarray, 1, \"gt\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, \"iscrowd\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, \"_iou\", 0))) 
__PYX_ERR(0, 191, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {\n  __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;\n  __Pyx_Buffer __pyx_pybuffer__iou;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_dt;\n  __Pyx_Buffer __pyx_pybuffer_dt;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_gt;\n  __Pyx_Buffer __pyx_pybuffer_gt;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_bbIou\", 0);\n  __pyx_pybuffer_dt.pybuffer.buf = NULL;\n  __pyx_pybuffer_dt.refcount = 0;\n  __pyx_pybuffernd_dt.data = NULL;\n  __pyx_pybuffernd_dt.rcbuffer = &__pyx_pybuffer_dt;\n  __pyx_pybuffer_gt.pybuffer.buf = NULL;\n  __pyx_pybuffer_gt.refcount = 0;\n  __pyx_pybuffernd_gt.data = NULL;\n  __pyx_pybuffernd_gt.rcbuffer = &__pyx_pybuffer_gt;\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n  __pyx_pybuffer__iou.pybuffer.buf = NULL;\n  __pyx_pybuffer__iou.refcount = 0;\n  __pyx_pybuffernd__iou.data = NULL;\n  __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_dt.rcbuffer->pybuffer, (PyObject*)__pyx_v_dt, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, 
__pyx_L1_error)\n  }\n  __pyx_pybuffernd_dt.diminfo[0].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_dt.diminfo[0].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_dt.diminfo[1].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_dt.diminfo[1].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_gt.rcbuffer->pybuffer, (PyObject*)__pyx_v_gt, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_gt.diminfo[0].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_gt.diminfo[0].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_gt.diminfo[1].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_gt.diminfo[1].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = 
__pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];\n\n  /* \"pycocotools/_mask.pyx\":192\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )             # <<<<<<<<<<<<<<\n *     def _len(obj):\n *         cdef siz N = 0\n */\n  bbIou(((BB)__pyx_v_dt->data), ((BB)__pyx_v_gt->data), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._bbIou\", 
__pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_7_len = {\"_len\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_7_len, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_len (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_self, ((PyObject *)__pyx_v_obj));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj) {\n  siz __pyx_v_N;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  siz __pyx_t_3;\n  Py_ssize_t __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"_len\", 0);\n\n  /* \"pycocotools/_mask.pyx\":194\n *   
      bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n *         cdef siz N = 0             # <<<<<<<<<<<<<<\n *         if type(obj) == RLEs:\n *             N = obj.n\n */\n  __pyx_v_N = 0;\n\n  /* \"pycocotools/_mask.pyx\":195\n *     def _len(obj):\n *         cdef siz N = 0\n *         if type(obj) == RLEs:             # <<<<<<<<<<<<<<\n *             N = obj.n\n *         elif len(obj)==0:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 195, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":196\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n *             N = obj.n             # <<<<<<<<<<<<<<\n *         elif len(obj)==0:\n *             pass\n */\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 196, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 196, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_v_N = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":195\n *     def _len(obj):\n *         cdef siz N = 0\n *         if type(obj) == RLEs:             # <<<<<<<<<<<<<<\n *             N = obj.n\n *         elif len(obj)==0:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":197\n *         if type(obj) == RLEs:\n *             N = obj.n\n *         elif len(obj)==0:             # <<<<<<<<<<<<<<\n *             pass\n *         elif type(obj) == np.ndarray:\n */\n  __pyx_t_4 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(0, 
197, __pyx_L1_error)\n  __pyx_t_2 = ((__pyx_t_4 == 0) != 0);\n  if (__pyx_t_2) {\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":199\n *         elif len(obj)==0:\n *             pass\n *         elif type(obj) == np.ndarray:             # <<<<<<<<<<<<<<\n *             N = obj.shape[0]\n *         return N\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 199, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 199, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":200\n *             pass\n *         elif type(obj) == np.ndarray:\n *             N = obj.shape[0]             # <<<<<<<<<<<<<<\n *         return N\n *     # convert iscrowd to numpy array\n */\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_v_N = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":199\n *         elif len(obj)==0:\n *             pass\n *         elif type(obj) == np.ndarray:             # <<<<<<<<<<<<<<\n *             N = obj.shape[0]\n *         return N\n */\n  }\n  __pyx_L3:;\n\n  /* \"pycocotools/_mask.pyx\":201\n *         elif type(obj) == np.ndarray:\n *             N = obj.shape[0]\n *         return N             # <<<<<<<<<<<<<<\n *     # convert iscrowd to numpy array\n *     
cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_N); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 201, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_r = __pyx_t_5;\n  __pyx_t_5 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._len\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd) {\n  PyObject *__pyx_v__preproc = 0;\n  PyObject *__pyx_v__rleIou = 0;\n  PyObject *__pyx_v__bbIou = 0;\n  PyObject *__pyx_v__len = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  double *__pyx_v__iou;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v__iouFun = NULL;\n  PyObject *__pyx_v_iou = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyArrayObject *__pyx_t_6 = NULL;\n  siz __pyx_t_7;\n  int __pyx_t_8;\n  int __pyx_t_9;\n  int __pyx_t_10;\n  PyObject *__pyx_t_11 = NULL;\n  __Pyx_RefNannySetupContext(\"iou\", 0);\n  __Pyx_INCREF(__pyx_v_dt);\n  __Pyx_INCREF(__pyx_v_gt);\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_1_preproc, 0, __pyx_n_s_iou_locals__preproc, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__6)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 164, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__preproc = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_3_rleIou, 0, __pyx_n_s_iou_locals__rleIou, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__8)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__rleIou = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # 
<<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_5_bbIou, 0, __pyx_n_s_iou_locals__bbIou, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__10)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__bbIou = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_7_len, 0, __pyx_n_s_iou_locals__len, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__12)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__len = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":203\n *         return N\n *     # convert iscrowd to numpy array\n *     cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)             # <<<<<<<<<<<<<<\n *     # simple type checking\n *     cdef siz m, n\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  
__Pyx_INCREF(__pyx_v_pyiscrowd);\n  __Pyx_GIVEREF(__pyx_v_pyiscrowd);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pyiscrowd);\n  __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 203, __pyx_L1_error)\n  __pyx_t_6 = ((PyArrayObject *)__pyx_t_5);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n      __pyx_v_iscrowd = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 203, __pyx_L1_error)\n    } else {__pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n    }\n  }\n  __pyx_t_6 = 0;\n  __pyx_v_iscrowd = ((PyArrayObject *)__pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":206\n *     # 
simple type checking\n *     cdef siz m, n\n *     dt = _preproc(dt)             # <<<<<<<<<<<<<<\n *     gt = _preproc(gt)\n *     m = _len(dt)\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 206, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_dt, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":207\n *     cdef siz m, n\n *     dt = _preproc(dt)\n *     gt = _preproc(gt)             # <<<<<<<<<<<<<<\n *     m = _len(dt)\n *     n = _len(gt)\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 207, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_gt, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":208\n *     dt = _preproc(dt)\n *     gt = _preproc(gt)\n *     m = _len(dt)             # <<<<<<<<<<<<<<\n *     n = _len(gt)\n *     if m == 0 or n == 0:\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_m = __pyx_t_7;\n\n  /* \"pycocotools/_mask.pyx\":209\n *     gt = _preproc(gt)\n *     m = _len(dt)\n *     n = _len(gt)             # <<<<<<<<<<<<<<\n *     if m == 0 or n == 0:\n *         return []\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 209, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 209, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_n = __pyx_t_7;\n\n  /* 
\"pycocotools/_mask.pyx\":210\n *     m = _len(dt)\n *     n = _len(gt)\n *     if m == 0 or n == 0:             # <<<<<<<<<<<<<<\n *         return []\n *     if not type(dt) == type(gt):\n */\n  __pyx_t_9 = ((__pyx_v_m == 0) != 0);\n  if (!__pyx_t_9) {\n  } else {\n    __pyx_t_8 = __pyx_t_9;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_9 = ((__pyx_v_n == 0) != 0);\n  __pyx_t_8 = __pyx_t_9;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_8) {\n\n    /* \"pycocotools/_mask.pyx\":211\n *     n = _len(gt)\n *     if m == 0 or n == 0:\n *         return []             # <<<<<<<<<<<<<<\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_r = __pyx_t_5;\n    __pyx_t_5 = 0;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":210\n *     m = _len(dt)\n *     n = _len(gt)\n *     if m == 0 or n == 0:             # <<<<<<<<<<<<<<\n *         return []\n *     if not type(dt) == type(gt):\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":212\n *     if m == 0 or n == 0:\n *         return []\n *     if not type(dt) == type(gt):             # <<<<<<<<<<<<<<\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n * \n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)Py_TYPE(__pyx_v_gt)), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 212, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 212, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_9 = ((!__pyx_t_8) != 0);\n  if (__pyx_t_9) {\n\n    /* \"pycocotools/_mask.pyx\":213\n *         return []\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt 
should have the same data type, either RLEs, list or np.ndarray')             # <<<<<<<<<<<<<<\n * \n *     # define local variables\n */\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 213, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_Raise(__pyx_t_5, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __PYX_ERR(0, 213, __pyx_L1_error)\n\n    /* \"pycocotools/_mask.pyx\":212\n *     if m == 0 or n == 0:\n *         return []\n *     if not type(dt) == type(gt):             # <<<<<<<<<<<<<<\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n * \n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":216\n * \n *     # define local variables\n *     cdef double* _iou = <double*> 0             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n */\n  __pyx_v__iou = ((double *)0);\n\n  /* \"pycocotools/_mask.pyx\":219\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n *     if type(dt) == RLEs:             # <<<<<<<<<<<<<<\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 219, __pyx_L1_error)\n  __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 219, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (__pyx_t_9) {\n\n    /* \"pycocotools/_mask.pyx\":220\n *     # check type and assign iou function\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou             # <<<<<<<<<<<<<<\n *     elif type(dt) == np.ndarray:\n *         _iouFun = _bbIou\n */\n    __Pyx_INCREF(__pyx_v__rleIou);\n    __pyx_v__iouFun = __pyx_v__rleIou;\n\n    /* 
\"pycocotools/_mask.pyx\":219\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n *     if type(dt) == RLEs:             # <<<<<<<<<<<<<<\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n */\n    goto __pyx_L7;\n  }\n\n  /* \"pycocotools/_mask.pyx\":221\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:             # <<<<<<<<<<<<<<\n *         _iouFun = _bbIou\n *     else:\n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 221, __pyx_L1_error)\n  __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 221, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (__pyx_t_9) {\n\n    /* \"pycocotools/_mask.pyx\":222\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n *         _iouFun = _bbIou             # <<<<<<<<<<<<<<\n *     else:\n *         raise Exception('input data type not allowed.')\n */\n    __Pyx_INCREF(__pyx_v__bbIou);\n    __pyx_v__iouFun = __pyx_v__bbIou;\n\n    /* \"pycocotools/_mask.pyx\":221\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:             # <<<<<<<<<<<<<<\n *         _iouFun = _bbIou\n *     else:\n */\n    goto __pyx_L7;\n  }\n\n  /* \"pycocotools/_mask.pyx\":224\n *         _iouFun = _bbIou\n *     else:\n *         raise Exception('input data type not allowed.')             # <<<<<<<<<<<<<<\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n */\n  /*else*/ {\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 224, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_Raise(__pyx_t_5, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    
__PYX_ERR(0, 224, __pyx_L1_error)\n  }\n  __pyx_L7:;\n\n  /* \"pycocotools/_mask.pyx\":225\n *     else:\n *         raise Exception('input data type not allowed.')\n *     _iou = <double*> malloc(m*n* sizeof(double))             # <<<<<<<<<<<<<<\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n\n */\n  __pyx_v__iou = ((double *)malloc(((__pyx_v_m * __pyx_v_n) * (sizeof(double)))));\n\n  /* \"pycocotools/_mask.pyx\":226\n *         raise Exception('input data type not allowed.')\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)             # <<<<<<<<<<<<<<\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n */\n  __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_zeros); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_m * __pyx_v_n)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, 
__pyx_n_s_double); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_iou = __pyx_t_4;\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":227\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n             # <<<<<<<<<<<<<<\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n */\n  (__pyx_v_shape[0]) = (((npy_intp)__pyx_v_m) * __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":228\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)             # <<<<<<<<<<<<<<\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n */\n  __pyx_t_4 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__iou); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 228, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF_SET(__pyx_v_iou, __pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":229\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n *     return iou.reshape((m,n), order='F')\n */\n  if (!(likely(((__pyx_v_iou) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_iou, 
__pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 229, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_iou), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":230\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)             # <<<<<<<<<<<<<<\n *     return iou.reshape((m,n), order='F')\n * \n */\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 230, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_INCREF(__pyx_v__iouFun);\n  __pyx_t_3 = __pyx_v__iouFun; __pyx_t_2 = NULL;\n  __pyx_t_10 = 0;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n    __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);\n    if (likely(__pyx_t_2)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_2);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_3, function);\n      __pyx_t_10 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_3)) {\n    PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};\n    __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n    PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};\n    __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  } else\n  #endif\n  {\n    __pyx_t_11 = PyTuple_New(6+__pyx_t_10); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n    if (__pyx_t_2) {\n      __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_2); __pyx_t_2 = NULL;\n    }\n    __Pyx_INCREF(__pyx_v_dt);\n    __Pyx_GIVEREF(__pyx_v_dt);\n    PyTuple_SET_ITEM(__pyx_t_11, 0+__pyx_t_10, __pyx_v_dt);\n    __Pyx_INCREF(__pyx_v_gt);\n    __Pyx_GIVEREF(__pyx_v_gt);\n    PyTuple_SET_ITEM(__pyx_t_11, 1+__pyx_t_10, __pyx_v_gt);\n    __Pyx_INCREF(((PyObject *)__pyx_v_iscrowd));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_iscrowd));\n    PyTuple_SET_ITEM(__pyx_t_11, 2+__pyx_t_10, ((PyObject *)__pyx_v_iscrowd));\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_11, 3+__pyx_t_10, __pyx_t_1);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_11, 4+__pyx_t_10, __pyx_t_5);\n    __Pyx_INCREF(__pyx_v_iou);\n    __Pyx_GIVEREF(__pyx_v_iou);\n    PyTuple_SET_ITEM(__pyx_t_11, 5+__pyx_t_10, __pyx_v_iou);\n    __pyx_t_1 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_11, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":231\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n *     return iou.reshape((m,n), order='F')             # <<<<<<<<<<<<<<\n * \n * def toBbox( rleObjs ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_iou, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 231, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_11 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_11);\n  __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_11);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_11);\n  __pyx_t_3 = 0;\n  __pyx_t_11 = 0;\n  __pyx_t_11 = PyTuple_New(1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_11);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = PyDict_New(); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 231, __pyx_L1_error)\n  __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_11, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_r = __pyx_t_3;\n  __pyx_t_3 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_11);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.iou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF(__pyx_v__preproc);\n  __Pyx_XDECREF(__pyx_v__rleIou);\n  __Pyx_XDECREF(__pyx_v__bbIou);\n  __Pyx_XDECREF(__pyx_v__len);\n  __Pyx_XDECREF((PyObject *)__pyx_v_iscrowd);\n  __Pyx_XDECREF(__pyx_v__iouFun);\n  __Pyx_XDECREF(__pyx_v_iou);\n  __Pyx_XDECREF(__pyx_v_dt);\n  __Pyx_XDECREF(__pyx_v_gt);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_15toBbox = {\"toBbox\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_15toBbox, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  
__Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"toBbox (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_14toBbox(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  siz __pyx_v_n;\n  BB __pyx_v__bb;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_bb = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  siz __pyx_t_5;\n  PyObject *__pyx_t_6 = NULL;\n  __Pyx_RefNannySetupContext(\"toBbox\", 0);\n\n  /* \"pycocotools/_mask.pyx\":234\n * \n * def toBbox( rleObjs ):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 234, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 234, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 234, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(__pyx_v_rleObjs);\n      __Pyx_GIVEREF(__pyx_v_rleObjs);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 234, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":235\n * def toBbox( rleObjs ):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n             # <<<<<<<<<<<<<<\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n */\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 235, __pyx_L1_error)\n  
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_n = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":236\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))             # <<<<<<<<<<<<<<\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n *     cdef np.npy_intp shape[1]\n */\n  __pyx_v__bb = ((BB)malloc(((4 * __pyx_v_n) * (sizeof(double)))));\n\n  /* \"pycocotools/_mask.pyx\":237\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> 4*n\n */\n  rleToBbox(((RLE const *)__pyx_v_Rs->_R), __pyx_v__bb, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":239\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> 4*n             # <<<<<<<<<<<<<<\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n */\n  (__pyx_v_shape[0]) = (((npy_intp)4) * __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":240\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> 4*n\n *     bb = np.array((1,4*n), dtype=np.double)             # <<<<<<<<<<<<<<\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz((4 * __pyx_v_n)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_4 = PyTuple_New(2); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_INCREF(__pyx_int_1);\n  __Pyx_GIVEREF(__pyx_int_1);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_int_1);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 = PyDict_New(); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_double); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_v_bb = __pyx_t_6;\n  __pyx_t_6 = 0;\n\n  /* \"pycocotools/_mask.pyx\":241\n *     shape[0] = <np.npy_intp> 4*n\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))             # <<<<<<<<<<<<<<\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n *     return bb\n */\n  __pyx_t_4 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__bb); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_reshape); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4);\n  __Pyx_INCREF(__pyx_int_4);\n  __Pyx_GIVEREF(__pyx_int_4);\n  PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_int_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 = NULL;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {\n    __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_1);\n    if (likely(__pyx_t_4)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n      __Pyx_INCREF(__pyx_t_4);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_1, function);\n    }\n  }\n  if (!__pyx_t_4) {\n    __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_GOTREF(__pyx_t_6);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_2};\n      __pyx_t_6 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_2};\n      __pyx_t_6 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    } else\n 
   #endif\n    {\n      __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = NULL;\n      __Pyx_GIVEREF(__pyx_t_2);\n      PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_t_2);\n      __pyx_t_2 = 0;\n      __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF_SET(__pyx_v_bb, __pyx_t_6);\n  __pyx_t_6 = 0;\n\n  /* \"pycocotools/_mask.pyx\":242\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     return bb\n * \n */\n  if (!(likely(((__pyx_v_bb) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_bb, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 242, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_bb), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":243\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n *     return bb             # <<<<<<<<<<<<<<\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_bb);\n  __pyx_r = __pyx_v_bb;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_6);\n  
__Pyx_AddTraceback(\"pycocotools._mask.toBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_bb);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_17frBbox = {\"frBbox\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_17frBbox, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyArrayObject *__pyx_v_bb = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frBbox (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_bb,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_bb)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_h)) != 0)) 
kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, 1); __PYX_ERR(0, 245, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, 2); __PYX_ERR(0, 245, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frBbox\") < 0)) __PYX_ERR(0, 245, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_bb = ((PyArrayObject *)values[0]);\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 245, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 245, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 245, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_bb), __pyx_ptype_5numpy_ndarray, 1, \"bb\", 0))) __PYX_ERR(0, 245, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_16frBbox(__pyx_self, __pyx_v_bb, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject 
*__pyx_pf_11pycocotools_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w) {\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_bb;\n  __Pyx_Buffer __pyx_pybuffer_bb;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  __Pyx_RefNannySetupContext(\"frBbox\", 0);\n  __pyx_pybuffer_bb.pybuffer.buf = NULL;\n  __pyx_pybuffer_bb.refcount = 0;\n  __pyx_pybuffernd_bb.data = NULL;\n  __pyx_pybuffernd_bb.rcbuffer = &__pyx_pybuffer_bb;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_bb.rcbuffer->pybuffer, (PyObject*)__pyx_v_bb, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 245, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_bb.diminfo[0].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_bb.diminfo[0].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_bb.diminfo[1].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_bb.diminfo[1].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[1];\n\n  /* \"pycocotools/_mask.pyx\":246\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n *     cdef siz n = bb.shape[0]             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n */\n  __pyx_v_n = (__pyx_v_bb->dimensions[0]);\n\n  /* \"pycocotools/_mask.pyx\":247\n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)\n */\n  __pyx_t_1 = 
__Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 247, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 247, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 247, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":248\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )             # <<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n  rleFrBbox(((RLE *)__pyx_v_Rs->_R), ((BB const )__pyx_v_bb->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":249\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  if (!__pyx_t_3) {\n    __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_2)) {\n      
PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;\n      __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));\n      PyTuple_SET_ITEM(__pyx_t_4, 0+1, ((PyObject *)__pyx_v_Rs));\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_objs = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":250\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frPoly( poly, siz h, siz w ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n\n  /* function exit code */\n  
__pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.frBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_19frPoly = {\"frPoly\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_19frPoly, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_poly = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frPoly (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_poly,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = 
PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_poly)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, 1); __PYX_ERR(0, 252, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, 2); __PYX_ERR(0, 252, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frPoly\") < 0)) __PYX_ERR(0, 252, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_poly = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 252, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 252, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 252, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frPoly\", 
__pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_18frPoly(__pyx_self, __pyx_v_poly, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w) {\n  PyArrayObject *__pyx_v_np_poly = 0;\n  Py_ssize_t __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_i = NULL;\n  PyObject *__pyx_v_p = NULL;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_np_poly;\n  __Pyx_Buffer __pyx_pybuffer_np_poly;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *(*__pyx_t_4)(PyObject *);\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  PyObject *__pyx_t_9 = NULL;\n  PyArrayObject *__pyx_t_10 = NULL;\n  int __pyx_t_11;\n  PyObject *__pyx_t_12 = NULL;\n  PyObject *__pyx_t_13 = NULL;\n  PyObject *__pyx_t_14 = NULL;\n  Py_ssize_t __pyx_t_15;\n  Py_ssize_t __pyx_t_16;\n  __Pyx_RefNannySetupContext(\"frPoly\", 0);\n  __pyx_pybuffer_np_poly.pybuffer.buf = NULL;\n  __pyx_pybuffer_np_poly.refcount = 0;\n  __pyx_pybuffernd_np_poly.data = NULL;\n  __pyx_pybuffernd_np_poly.rcbuffer = &__pyx_pybuffer_np_poly;\n\n  /* \"pycocotools/_mask.pyx\":254\n * def frPoly( poly, siz h, siz w ):\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_poly); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 254, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":255\n *     cdef 
np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')\n */\n  __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 255, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 255, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);\n  __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 255, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":256\n *     n = len(poly)\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):             # <<<<<<<<<<<<<<\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n */\n  __Pyx_INCREF(__pyx_int_0);\n  __pyx_t_2 = __pyx_int_0;\n  if (likely(PyList_CheckExact(__pyx_v_poly)) || PyTuple_CheckExact(__pyx_v_poly)) {\n    __pyx_t_3 = __pyx_v_poly; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0;\n    __pyx_t_4 = NULL;\n  } else {\n    __pyx_t_1 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_poly); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 256, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 256, __pyx_L1_error)\n  }\n  for (;;) {\n    if (likely(!__pyx_t_4)) {\n      if (likely(PyList_CheckExact(__pyx_t_3))) {\n        if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_3)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyList_GET_ITEM(__pyx_t_3, 
__pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 256, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      } else {\n        if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 256, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      }\n    } else {\n      __pyx_t_5 = __pyx_t_4(__pyx_t_3);\n      if (unlikely(!__pyx_t_5)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(exc_type == PyExc_StopIteration || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(0, 256, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_5);\n    }\n    __Pyx_XDECREF_SET(__pyx_v_p, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __Pyx_INCREF(__pyx_t_2);\n    __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_2);\n    __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_2);\n    __pyx_t_2 = __pyx_t_5;\n    __pyx_t_5 = 0;\n\n    /* \"pycocotools/_mask.pyx\":257\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')             # <<<<<<<<<<<<<<\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)\n */\n    __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if 
(unlikely(!__pyx_t_5)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_array); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_INCREF(__pyx_v_p);\n    __Pyx_GIVEREF(__pyx_v_p);\n    PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_p);\n    __pyx_t_7 = PyDict_New(); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_8 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_double); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_dtype, __pyx_t_9) < 0) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 257, __pyx_L1_error)\n    __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (!(likely(((__pyx_t_9) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_9, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 257, __pyx_L1_error)\n    __pyx_t_10 = ((PyArrayObject *)__pyx_t_9);\n    {\n      __Pyx_BufFmt_StackElem __pyx_stack[1];\n      __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n      __pyx_t_11 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_t_10, 
&__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n      if (unlikely(__pyx_t_11 < 0)) {\n        PyErr_Fetch(&__pyx_t_12, &__pyx_t_13, &__pyx_t_14);\n        if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_v_np_poly, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n          Py_XDECREF(__pyx_t_12); Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14);\n          __Pyx_RaiseBufferFallbackError();\n        } else {\n          PyErr_Restore(__pyx_t_12, __pyx_t_13, __pyx_t_14);\n        }\n      }\n      __pyx_pybuffernd_np_poly.diminfo[0].strides = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_np_poly.diminfo[0].shape = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.shape[0];\n      if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 257, __pyx_L1_error)\n    }\n    __pyx_t_10 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_np_poly, ((PyArrayObject *)__pyx_t_9));\n    __pyx_t_9 = 0;\n\n    /* \"pycocotools/_mask.pyx\":258\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )             # <<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n    __pyx_t_15 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_15 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 258, __pyx_L1_error)\n    __pyx_t_16 = PyObject_Length(((PyObject *)__pyx_v_np_poly)); if (unlikely(__pyx_t_16 == -1)) __PYX_ERR(0, 258, __pyx_L1_error)\n    rleFrPoly(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_15]))), ((double const *)__pyx_v_np_poly->data), __Pyx_div_Py_ssize_t(__pyx_t_16, 2), __pyx_v_h, __pyx_v_w);\n\n    /* \"pycocotools/_mask.pyx\":256\n *     n = len(poly)\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):             # <<<<<<<<<<<<<<\n *         np_poly = np.array(p, 
dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n */\n  }\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":259\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_9 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n    __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_3);\n    if (likely(__pyx_t_9)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_9);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_3, function);\n    }\n  }\n  if (!__pyx_t_9) {\n    __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_3, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 259, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n  } else {\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_9, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 259, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;\n      __Pyx_GOTREF(__pyx_t_2);\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[2] = {__pyx_t_9, ((PyObject *)__pyx_v_Rs)};\n      __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 259, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;\n      __Pyx_GOTREF(__pyx_t_2);\n    } else\n    #endif\n    {\n      
__pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 259, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_9); __pyx_t_9 = NULL;\n      __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n      __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));\n      PyTuple_SET_ITEM(__pyx_t_7, 0+1, ((PyObject *)__pyx_v_Rs));\n      __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_7, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 259, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_2);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n  }\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_objs = __pyx_t_2;\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":260\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_XDECREF(__pyx_t_9);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.frPoly\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_np_poly);\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_i);\n  __Pyx_XDECREF(__pyx_v_p);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_21frUncompressedRLE = {\"frUncompressedRLE\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_ucRles = 0;\n  CYTHON_UNUSED siz __pyx_v_h;\n  CYTHON_UNUSED siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frUncompressedRLE (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_ucRles,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s_ucRles)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, 1); __PYX_ERR(0, 262, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, 2); __PYX_ERR(0, 262, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frUncompressedRLE\") < 0)) __PYX_ERR(0, 262, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_ucRles = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 262, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 262, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 262, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frUncompressedRLE\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(__pyx_self, __pyx_v_ucRles, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return 
__pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w) {\n  PyArrayObject *__pyx_v_cnts = 0;\n  RLE __pyx_v_R;\n  uint *__pyx_v_data;\n  Py_ssize_t __pyx_v_n;\n  PyObject *__pyx_v_objs = NULL;\n  Py_ssize_t __pyx_v_i;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  Py_ssize_t __pyx_v_j;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_cnts;\n  __Pyx_Buffer __pyx_pybuffer_cnts;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  Py_ssize_t __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyArrayObject *__pyx_t_8 = NULL;\n  int __pyx_t_9;\n  PyObject *__pyx_t_10 = NULL;\n  PyObject *__pyx_t_11 = NULL;\n  PyObject *__pyx_t_12 = NULL;\n  Py_ssize_t __pyx_t_13;\n  Py_ssize_t __pyx_t_14;\n  Py_ssize_t __pyx_t_15;\n  RLE __pyx_t_16;\n  siz __pyx_t_17;\n  int __pyx_t_18;\n  __Pyx_RefNannySetupContext(\"frUncompressedRLE\", 0);\n  __pyx_pybuffer_cnts.pybuffer.buf = NULL;\n  __pyx_pybuffer_cnts.refcount = 0;\n  __pyx_pybuffernd_cnts.data = NULL;\n  __pyx_pybuffernd_cnts.rcbuffer = &__pyx_pybuffer_cnts;\n\n  /* \"pycocotools/_mask.pyx\":266\n *     cdef RLE R\n *     cdef uint *data\n *     n = len(ucRles)             # <<<<<<<<<<<<<<\n *     objs = []\n *     for i in range(n):\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_ucRles); if (unlikely(__pyx_t_1 == -1)) __PYX_ERR(0, 266, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":267\n *     cdef uint *data\n *     n = len(ucRles)\n *     objs = []             # <<<<<<<<<<<<<<\n *     for i in range(n):\n *         Rs = RLEs(1)\n */\n  __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 267, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_v_objs = ((PyObject*)__pyx_t_2);\n  
__pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":268\n *     n = len(ucRles)\n *     objs = []\n *     for i in range(n):             # <<<<<<<<<<<<<<\n *         Rs = RLEs(1)\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n */\n  __pyx_t_1 = __pyx_v_n;\n  for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_1; __pyx_t_3+=1) {\n    __pyx_v_i = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":269\n *     objs = []\n *     for i in range(n):\n *         Rs = RLEs(1)             # <<<<<<<<<<<<<<\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n *         # time for malloc can be saved here but it's fine\n */\n    __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 269, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_XDECREF_SET(__pyx_v_Rs, ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_2));\n    __pyx_t_2 = 0;\n\n    /* \"pycocotools/_mask.pyx\":270\n *     for i in range(n):\n *         Rs = RLEs(1)\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)             # <<<<<<<<<<<<<<\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n */\n    __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_array); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_5 = PyObject_GetItem(__pyx_t_2, __pyx_n_s_counts); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __pyx_t_5 = PyDict_New(); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_uint32); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_7) < 0) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_2, __pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    if (!(likely(((__pyx_t_7) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_7, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 270, __pyx_L1_error)\n    __pyx_t_8 = ((PyArrayObject *)__pyx_t_7);\n    {\n      __Pyx_BufFmt_StackElem __pyx_stack[1];\n      __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n      __pyx_t_9 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_t_8, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n      if (unlikely(__pyx_t_9 < 0)) {\n        PyErr_Fetch(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12);\n        if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_v_cnts, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 
0, __pyx_stack) == -1)) {\n          Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_12);\n          __Pyx_RaiseBufferFallbackError();\n        } else {\n          PyErr_Restore(__pyx_t_10, __pyx_t_11, __pyx_t_12);\n        }\n      }\n      __pyx_pybuffernd_cnts.diminfo[0].strides = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_cnts.diminfo[0].shape = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.shape[0];\n      if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 270, __pyx_L1_error)\n    }\n    __pyx_t_8 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_cnts, ((PyArrayObject *)__pyx_t_7));\n    __pyx_t_7 = 0;\n\n    /* \"pycocotools/_mask.pyx\":272\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))             # <<<<<<<<<<<<<<\n *         for j in range(len(cnts)):\n *             data[j] = <uint> cnts[j]\n */\n    __pyx_t_13 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_13 == -1)) __PYX_ERR(0, 272, __pyx_L1_error)\n    __pyx_v_data = ((uint *)malloc((__pyx_t_13 * (sizeof(unsigned int)))));\n\n    /* \"pycocotools/_mask.pyx\":273\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n *         for j in range(len(cnts)):             # <<<<<<<<<<<<<<\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n */\n    __pyx_t_13 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_13 == -1)) __PYX_ERR(0, 273, __pyx_L1_error)\n    for (__pyx_t_14 = 0; __pyx_t_14 < __pyx_t_13; __pyx_t_14+=1) {\n      __pyx_v_j = __pyx_t_14;\n\n      /* \"pycocotools/_mask.pyx\":274\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n *         for j in range(len(cnts)):\n *             data[j] = <uint> cnts[j]             # <<<<<<<<<<<<<<\n *         R = 
RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R\n */\n      __pyx_t_15 = __pyx_v_j;\n      __pyx_t_9 = -1;\n      if (__pyx_t_15 < 0) {\n        __pyx_t_15 += __pyx_pybuffernd_cnts.diminfo[0].shape;\n        if (unlikely(__pyx_t_15 < 0)) __pyx_t_9 = 0;\n      } else if (unlikely(__pyx_t_15 >= __pyx_pybuffernd_cnts.diminfo[0].shape)) __pyx_t_9 = 0;\n      if (unlikely(__pyx_t_9 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_9);\n        __PYX_ERR(0, 274, __pyx_L1_error)\n      }\n      (__pyx_v_data[__pyx_v_j]) = ((uint)(*__Pyx_BufPtrStrided1d(__pyx_t_5numpy_uint32_t *, __pyx_pybuffernd_cnts.rcbuffer->pybuffer.buf, __pyx_t_15, __pyx_pybuffernd_cnts.diminfo[0].strides)));\n    }\n\n    /* \"pycocotools/_mask.pyx\":275\n *         for j in range(len(cnts)):\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)             # <<<<<<<<<<<<<<\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])\n */\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_5 = PyObject_GetItem(__pyx_t_7, __pyx_n_s_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_5, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_17 = __Pyx_PyInt_As_siz(__pyx_t_7); if (unlikely((__pyx_t_17 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_16.h = __pyx_t_17;\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if 
(unlikely(!__pyx_t_7)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_5 = PyObject_GetItem(__pyx_t_7, __pyx_n_s_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_5, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_17 = __Pyx_PyInt_As_siz(__pyx_t_7); if (unlikely((__pyx_t_17 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_16.w = __pyx_t_17;\n    __pyx_t_13 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_13 == -1)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __pyx_t_16.m = __pyx_t_13;\n    __pyx_t_16.cnts = ((uint *)__pyx_v_data);\n    __pyx_v_R = __pyx_t_16;\n\n    /* \"pycocotools/_mask.pyx\":276\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R             # <<<<<<<<<<<<<<\n *         objs.append(_toString(Rs)[0])\n *     return objs\n */\n    (__pyx_v_Rs->_R[0]) = __pyx_v_R;\n\n    /* \"pycocotools/_mask.pyx\":277\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n    __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_2 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {\n      __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_5);\n      if (likely(__pyx_t_2)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n        __Pyx_INCREF(__pyx_t_2);\n        
__Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_5, function);\n      }\n    }\n    if (!__pyx_t_2) {\n      __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_5, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 277, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n    } else {\n      #if CYTHON_FAST_PYCALL\n      if (PyFunction_Check(__pyx_t_5)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_2, ((PyObject *)__pyx_v_Rs)};\n        __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __Pyx_GOTREF(__pyx_t_7);\n      } else\n      #endif\n      #if CYTHON_FAST_PYCCALL\n      if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {\n        PyObject *__pyx_temp[2] = {__pyx_t_2, ((PyObject *)__pyx_v_Rs)};\n        __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n        __Pyx_GOTREF(__pyx_t_7);\n      } else\n      #endif\n      {\n        __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); __pyx_t_2 = NULL;\n        __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n        __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));\n        PyTuple_SET_ITEM(__pyx_t_4, 0+1, ((PyObject *)__pyx_v_Rs));\n        __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_4, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 277, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_7);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      }\n    }\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_7, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_7); 
__pyx_t_7 = 0;\n    __pyx_t_18 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_5); if (unlikely(__pyx_t_18 == -1)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  }\n\n  /* \"pycocotools/_mask.pyx\":278\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frPyObjects(pyobj, siz h, w):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.frUncompressedRLE\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_cnts);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject 
*__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_23frPyObjects = {\"frPyObjects\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_23frPyObjects, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_pyobj = 0;\n  siz __pyx_v_h;\n  PyObject *__pyx_v_w = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frPyObjects (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyobj,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_pyobj)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, 1); __PYX_ERR(0, 280, __pyx_L3_error)\n        }\n        case  2:\n        if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, 2); __PYX_ERR(0, 280, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frPyObjects\") < 0)) __PYX_ERR(0, 280, 
__pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_pyobj = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 280, __pyx_L3_error)\n    __pyx_v_w = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 280, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frPyObjects\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_22frPyObjects(__pyx_self, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, siz __pyx_v_h, PyObject *__pyx_v_w) {\n  PyObject *__pyx_v_objs = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  int __pyx_t_6;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  Py_ssize_t __pyx_t_9;\n  __Pyx_RefNannySetupContext(\"frPyObjects\", 0);\n\n  /* \"pycocotools/_mask.pyx\":281\n * \n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject 
*)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 281, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":282\n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n */\n    __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 282, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 282, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3);\n      if (likely(__pyx_t_5)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_5);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 
3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_7 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      if (__pyx_t_5) {\n        __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_4);\n      PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_6, __pyx_t_4);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_7, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_4 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":281\n * \n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":283\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __pyx_t_8 = 
__Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_9 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_9 == -1)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = ((__pyx_t_9 == 4) != 0);\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":284\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n */\n    __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 284, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_7 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 284, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_4 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);\n      if (likely(__pyx_t_4)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_4);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_t_7, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, 
__pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_t_7, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_5 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      if (__pyx_t_4) {\n        __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); __pyx_t_4 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_7);\n      PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_6, __pyx_t_7);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_5, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_7 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":283\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":285\n *     elif type(pyobj) == list 
and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:             # <<<<<<<<<<<<<<\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L6_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_9 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_9 == -1)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = ((__pyx_t_9 > 4) != 0);\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L6_bool_binop_done:;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":286\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n *         objs = frUncompressedRLE(pyobj, h, w)\n */\n    __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frPoly); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 286, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 286, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_7 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3);\n      if (likely(__pyx_t_7)) {\n        PyObject* function = 
PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_7);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_7, __pyx_v_pyobj, __pyx_t_5, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_7, __pyx_v_pyobj, __pyx_t_5, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      if (__pyx_t_7) {\n        __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_7); __pyx_t_7 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_5);\n      PyTuple_SET_ITEM(__pyx_t_4, 1+__pyx_t_6, __pyx_t_5);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_4, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_5 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n    
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":285\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:             # <<<<<<<<<<<<<<\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":287\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:             # <<<<<<<<<<<<<<\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L8_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_t_1)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L8_bool_binop_done:;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":288\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n *         objs = 
frUncompressedRLE(pyobj, h, w)             # <<<<<<<<<<<<<<\n *     else:\n *         raise Exception('input type is not supported.')\n */\n    __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_frUncompressedRLE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 288, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 288, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {\n      __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1);\n      if (likely(__pyx_t_5)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n        __Pyx_INCREF(__pyx_t_5);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_1, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_7 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      if (__pyx_t_5) {\n        __Pyx_GIVEREF(__pyx_t_5); 
PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_4);\n      PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_6, __pyx_t_4);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_7, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_4 = 0;\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_v_objs = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":287\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:             # <<<<<<<<<<<<<<\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":290\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n *         raise Exception('input type is not supported.')             # <<<<<<<<<<<<<<\n *     return objs\n */\n  /*else*/ {\n    __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__16, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 290, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(0, 290, __pyx_L1_error)\n  }\n  __pyx_L3:;\n\n  /* \"pycocotools/_mask.pyx\":291\n *     else:\n *         raise Exception('input type is not supported.')\n *     return objs             # <<<<<<<<<<<<<<\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def 
frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_AddTraceback(\"pycocotools._mask.frPyObjects\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getbuffer__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_v_copy_shape;\n  int __pyx_v_i;\n  int __pyx_v_ndim;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  int __pyx_v_t;\n  char 
*__pyx_v_f;\n  PyArray_Descr *__pyx_v_descr = 0;\n  int __pyx_v_offset;\n  int __pyx_v_hasfields;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  int __pyx_t_5;\n  PyObject *__pyx_t_6 = NULL;\n  char *__pyx_t_7;\n  __Pyx_RefNannySetupContext(\"__getbuffer__\", 0);\n  if (__pyx_v_info != NULL) {\n    __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(__pyx_v_info->obj);\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":203\n *             # of flags\n * \n *             if info == NULL: return             # <<<<<<<<<<<<<<\n * \n *             cdef int copy_shape, i, ndim\n */\n  __pyx_t_1 = ((__pyx_v_info == NULL) != 0);\n  if (__pyx_t_1) {\n    __pyx_r = 0;\n    goto __pyx_L0;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":206\n * \n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n */\n  __pyx_v_endian_detector = 1;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":207\n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n * \n *             ndim = PyArray_NDIM(self)\n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":209\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n *             ndim = PyArray_NDIM(self)             # <<<<<<<<<<<<<<\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  
__pyx_v_ndim = PyArray_NDIM(__pyx_v_self);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":212\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 copy_shape = 1             # <<<<<<<<<<<<<<\n *             else:\n *                 copy_shape = 0\n */\n    __pyx_v_copy_shape = 1;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n    goto __pyx_L4;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":214\n *                 copy_shape = 1\n *             else:\n *                 copy_shape = 0             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n */\n  /*else*/ {\n    __pyx_v_copy_shape = 0;\n  }\n  __pyx_L4:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);\n  
if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L6_bool_binop_done;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":217\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L6_bool_binop_done:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 218, __pyx_L1_error)\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & 
pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L9_bool_binop_done;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":221\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L9_bool_binop_done:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":222\n *        
     if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 222, __pyx_L1_error)\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":224\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n *             info.buf = PyArray_DATA(self)             # <<<<<<<<<<<<<<\n *             info.ndim = ndim\n *             if copy_shape:\n */\n  __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":225\n * \n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim             # <<<<<<<<<<<<<<\n *             if copy_shape:\n *                 # Allocate new buffer for strides and shape info.\n */\n  __pyx_v_info->ndim = __pyx_v_ndim;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             
info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n  __pyx_t_1 = (__pyx_v_copy_shape != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":229\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)             # <<<<<<<<<<<<<<\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n */\n    __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2)));\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":230\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim             # <<<<<<<<<<<<<<\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n */\n    __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":231\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):             # <<<<<<<<<<<<<<\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n */\n    __pyx_t_4 = __pyx_v_ndim;\n    for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {\n      
__pyx_v_i = __pyx_t_5;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":232\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]             # <<<<<<<<<<<<<<\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n */\n      (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":233\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]             # <<<<<<<<<<<<<<\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n */\n      (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n    goto __pyx_L11;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":235\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)             # <<<<<<<<<<<<<<\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n */\n  /*else*/ {\n    __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));\n\n    /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":236\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)             # <<<<<<<<<<<<<<\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n */\n    __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));\n  }\n  __pyx_L11:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":237\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL             # <<<<<<<<<<<<<<\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n */\n  __pyx_v_info->suboffsets = NULL;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":238\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)             # <<<<<<<<<<<<<<\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n * \n */\n  __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":239\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)             # <<<<<<<<<<<<<<\n * \n *             cdef int t\n */\n  __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":242\n * \n *             cdef int t\n *             cdef char* f = NULL             # 
<<<<<<<<<<<<<<\n *             cdef dtype descr = self.descr\n *             cdef int offset\n */\n  __pyx_v_f = NULL;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":243\n *             cdef int t\n *             cdef char* f = NULL\n *             cdef dtype descr = self.descr             # <<<<<<<<<<<<<<\n *             cdef int offset\n * \n */\n  __pyx_t_3 = ((PyObject *)__pyx_v_self->descr);\n  __Pyx_INCREF(__pyx_t_3);\n  __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":246\n *             cdef int offset\n * \n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)             # <<<<<<<<<<<<<<\n * \n *             if not hasfields and not copy_shape:\n */\n  __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n  __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L15_bool_binop_done;\n  }\n  __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L15_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":250\n *             if not hasfields and not copy_shape:\n *                 # do not call releasebuffer\n *                 info.obj = None             # <<<<<<<<<<<<<<\n *             else:\n *                 # need to call releasebuffer\n */\n    __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(Py_None);\n    
__Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = Py_None;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n    goto __pyx_L14;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":253\n *             else:\n *                 # need to call releasebuffer\n *                 info.obj = self             # <<<<<<<<<<<<<<\n * \n *             if not hasfields:\n */\n  /*else*/ {\n    __Pyx_INCREF(((PyObject *)__pyx_v_self));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_self));\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = ((PyObject *)__pyx_v_self);\n  }\n  __pyx_L14:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":256\n * \n *             if not hasfields:\n *                 t = descr.type_num             # <<<<<<<<<<<<<<\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n */\n    __pyx_t_4 = __pyx_v_descr->type_num;\n    __pyx_v_t = __pyx_t_4;\n\n    /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);\n    if (!__pyx_t_2) {\n      goto __pyx_L20_next_or;\n    } else {\n    }\n    __pyx_t_2 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_L20_next_or:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":258\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);\n    if (__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_1 = __pyx_t_2;\n    __pyx_L19_bool_binop_done:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (__pyx_t_1) {\n\n    
  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 259, __pyx_L1_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":260\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n */\n    switch (__pyx_v_t) {\n      case NPY_BYTE:\n      __pyx_v_f = ((char *)\"b\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":261\n *                     raise 
ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n */\n      case NPY_UBYTE:\n      __pyx_v_f = ((char *)\"B\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":262\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n */\n      case NPY_SHORT:\n      __pyx_v_f = ((char *)\"h\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":263\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n */\n      case NPY_USHORT:\n      __pyx_v_f = ((char *)\"H\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":264\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n */\n      case NPY_INT:\n      __pyx_v_f = ((char *)\"i\");\n      break;\n\n      /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":265\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n */\n      case NPY_UINT:\n      __pyx_v_f = ((char *)\"I\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":266\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n */\n      case NPY_LONG:\n      __pyx_v_f = ((char *)\"l\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":267\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n */\n      case NPY_ULONG:\n      __pyx_v_f = ((char *)\"L\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":268\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n */\n      case NPY_LONGLONG:\n      __pyx_v_f = 
((char *)\"q\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":269\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n */\n      case NPY_ULONGLONG:\n      __pyx_v_f = ((char *)\"Q\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":270\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n */\n      case NPY_FLOAT:\n      __pyx_v_f = ((char *)\"f\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":271\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n */\n      case NPY_DOUBLE:\n      __pyx_v_f = ((char *)\"d\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":272\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n 
*/\n      case NPY_LONGDOUBLE:\n      __pyx_v_f = ((char *)\"g\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":273\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n */\n      case NPY_CFLOAT:\n      __pyx_v_f = ((char *)\"Zf\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"\n */\n      case NPY_CDOUBLE:\n      __pyx_v_f = ((char *)\"Zd\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":275\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n */\n      case NPY_CLONGDOUBLE:\n      __pyx_v_f = ((char *)\"Zg\");\n      break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":276\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"             # <<<<<<<<<<<<<<\n *                 else:\n *                     raise ValueError(u\"unknown 
dtype code in numpy.pxd (%d)\" % t)\n */\n      case NPY_OBJECT:\n      __pyx_v_f = ((char *)\"O\");\n      break;\n      default:\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":278\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *                 info.format = f\n *                 return\n */\n      __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_GIVEREF(__pyx_t_6);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6);\n      __pyx_t_6 = 0;\n      __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_Raise(__pyx_t_6, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __PYX_ERR(1, 278, __pyx_L1_error)\n      break;\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":279\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f             # <<<<<<<<<<<<<<\n *                 return\n *             else:\n */\n    __pyx_v_info->format = __pyx_v_f;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":280\n * 
                    raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f\n *                 return             # <<<<<<<<<<<<<<\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n */\n    __pyx_r = 0;\n    goto __pyx_L0;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":282\n *                 return\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)             # <<<<<<<<<<<<<<\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n */\n  /*else*/ {\n    __pyx_v_info->format = ((char *)malloc(0xFF));\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":283\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment             # <<<<<<<<<<<<<<\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,\n */\n    (__pyx_v_info->format[0]) = '^';\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":284\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0             # <<<<<<<<<<<<<<\n *                 f = _util_dtypestring(descr, info.format + 1,\n *         
                              info.format + _buffer_format_string_len,\n */\n    __pyx_v_offset = 0;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":285\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,             # <<<<<<<<<<<<<<\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n */\n    __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error)\n    __pyx_v_f = __pyx_t_7;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":288\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n *                 f[0] = c'\\0' # Terminate format string             # <<<<<<<<<<<<<<\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n */\n    (__pyx_v_f[0]) = '\\x00';\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"numpy.ndarray.__getbuffer__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 
-1;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) {\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL;\n  }\n  goto __pyx_L2;\n  __pyx_L0:;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) {\n    __Pyx_GOTREF(Py_None);\n    __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL;\n  }\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_descr);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__releasebuffer__ (wrapper)\", 0);\n  __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__releasebuffer__\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_t_1 = 
(PyArray_HASFIELDS(__pyx_v_self) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":292\n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)             # <<<<<<<<<<<<<<\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)\n */\n    free(__pyx_v_info->format);\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":294\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)             # <<<<<<<<<<<<<<\n *                 # info.shape was stored after info.strides in the same block\n * \n */\n    free(__pyx_v_info->strides);\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 
stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew1\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":771\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n *     return PyArray_MultiIterNew(1, <void*>a)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # 
<<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew1\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew2\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":774\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew2\", __pyx_clineno, 
__pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew3\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":777\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew3\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  
__Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew4\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":780\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew4\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  
__Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew5\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":783\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)             # <<<<<<<<<<<<<<\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  
__Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew5\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) {\n  PyArray_Descr *__pyx_v_child = 0;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  PyObject *__pyx_v_fields = 0;\n  PyObject *__pyx_v_childname = NULL;\n  PyObject *__pyx_v_new_offset = NULL;\n  PyObject *__pyx_v_t = NULL;\n  char *__pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  Py_ssize_t __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  int __pyx_t_7;\n  long __pyx_t_8;\n  char *__pyx_t_9;\n  __Pyx_RefNannySetupContext(\"_util_dtypestring\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":790\n * \n *     cdef dtype child\n *     cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n *     cdef tuple fields\n */\n  __pyx_v_endian_detector = 1;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":791\n *     cdef dtype child\n *     cdef int endian_detector = 1\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)          
   # <<<<<<<<<<<<<<\n *     cdef tuple fields\n * \n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  if (unlikely(__pyx_v_descr->names == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n    __PYX_ERR(1, 794, __pyx_L1_error)\n  }\n  __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0;\n  for (;;) {\n    if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error)\n    #else\n    __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    #endif\n    __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":795\n * \n *     for childname in descr.names:\n *         fields = descr.fields[childname]             # <<<<<<<<<<<<<<\n *         child, new_offset = fields\n * \n */\n    if (unlikely(__pyx_v_descr->fields == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 795, __pyx_L1_error)\n    }\n    __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got 
%.200s\", \"tuple\", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3));\n    __pyx_t_3 = 0;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":796\n *     for childname in descr.names:\n *         fields = descr.fields[childname]\n *         child, new_offset = fields             # <<<<<<<<<<<<<<\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n */\n    if (likely(__pyx_v_fields != Py_None)) {\n      PyObject* sequence = __pyx_v_fields;\n      #if !CYTHON_COMPILING_IN_PYPY\n      Py_ssize_t size = Py_SIZE(sequence);\n      #else\n      Py_ssize_t size = PySequence_Size(sequence);\n      #endif\n      if (unlikely(size != 2)) {\n        if (size > 2) __Pyx_RaiseTooManyValuesError(2);\n        else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);\n        __PYX_ERR(1, 796, __pyx_L1_error)\n      }\n      #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n      __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); \n      __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); \n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_4);\n      #else\n      __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      #endif\n    } else {\n      __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error)\n    }\n    if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3));\n    __pyx_t_3 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4);\n    __pyx_t_4 = 0;\n\n    /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 799, __pyx_L1_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string 
allocated too short, see comment in numpy.pxd\")\n * \n */\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0);\n    if (!__pyx_t_7) {\n      goto __pyx_L8_next_or;\n    } else {\n    }\n    __pyx_t_7 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_L8_next_or:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":802\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *             raise ValueError(u\"Non-native byte order not supported\")\n *             # One could encode it in the format string and have Cython\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0);\n    if (__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_6 = __pyx_t_7;\n    __pyx_L7_bool_binop_done:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order 
not supported\")\n */\n    if (__pyx_t_6) {\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 803, __pyx_L1_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":813\n * \n *         # Output padding bytes\n *         while offset[0] < new_offset:             # <<<<<<<<<<<<<<\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n */\n    while (1) {\n      __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); 
__pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (!__pyx_t_6) break;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":814\n *         # Output padding bytes\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte             # <<<<<<<<<<<<<<\n *             f += 1\n *             offset[0] += 1\n */\n      (__pyx_v_f[0]) = 0x78;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":815\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1             # <<<<<<<<<<<<<<\n *             offset[0] += 1\n * \n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":816\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n *             offset[0] += 1             # <<<<<<<<<<<<<<\n * \n *         offset[0] += child.itemsize\n */\n      __pyx_t_8 = 0;\n      (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1);\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":818\n *             offset[0] += 1\n * \n *         offset[0] += child.itemsize             # <<<<<<<<<<<<<<\n * \n *         if not PyDataType_HASFIELDS(child):\n */\n    __pyx_t_8 = 0;\n    (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize);\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n    
__pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":821\n * \n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num             # <<<<<<<<<<<<<<\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n */\n      __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4);\n      __pyx_t_4 = 0;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0);\n      if (__pyx_t_6) {\n\n        /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n        __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __PYX_ERR(1, 823, __pyx_L1_error)\n\n        /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = 
child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":826\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 98;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":827\n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error)\n      
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":828\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x68;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":829\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = 
PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 72;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":830\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x69;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":831\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n */\n      __pyx_t_3 = 
__Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 73;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":832\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x6C;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":833\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"             # <<<<<<<<<<<<<<\n *           
  elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 76;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":834\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x71;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":835\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == 
NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 81;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":836\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x66;\n        goto __pyx_L15;\n      }\n\n      /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":837\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x64;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":838\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if 
(unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x67;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":839\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf             # <<<<<<<<<<<<<<\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x66;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":840\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd             # <<<<<<<<<<<<<<\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 
840, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x64;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":841\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg             # <<<<<<<<<<<<<<\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x67;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":842\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # 
Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"             # <<<<<<<<<<<<<<\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 79;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":844\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *             f += 1\n *         else:\n */\n      /*else*/ {\n        __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_GIVEREF(__pyx_t_3);\n        PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3);\n        __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __PYX_ERR(1, 844, __pyx_L1_error)\n      }\n      
__pyx_L15:;\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":845\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *             f += 1             # <<<<<<<<<<<<<<\n *         else:\n *             # Cython ignores struct boundary information (\"T{...}\"),\n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n      goto __pyx_L13;\n    }\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":849\n *             # Cython ignores struct boundary information (\"T{...}\"),\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)             # <<<<<<<<<<<<<<\n *     return f\n * \n */\n    /*else*/ {\n      __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error)\n      __pyx_v_f = __pyx_t_9;\n    }\n    __pyx_L13:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":850\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)\n *     return f             # <<<<<<<<<<<<<<\n * \n * \n */\n  __pyx_r = __pyx_v_f;\n  
goto __pyx_L0;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"numpy._util_dtypestring\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_child);\n  __Pyx_XDECREF(__pyx_v_fields);\n  __Pyx_XDECREF(__pyx_v_childname);\n  __Pyx_XDECREF(__pyx_v_new_offset);\n  __Pyx_XDECREF(__pyx_v_t);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *      if base is None:\n */\n\nstatic CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) {\n  PyObject *__pyx_v_baseptr;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  __Pyx_RefNannySetupContext(\"set_array_base\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n  __pyx_t_1 = (__pyx_v_base == Py_None);\n  __pyx_t_2 = (__pyx_t_1 != 0);\n  if (__pyx_t_2) {\n\n    /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":969\n *      cdef PyObject* baseptr\n *      if base is None:\n *          baseptr = NULL             # <<<<<<<<<<<<<<\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n */\n    __pyx_v_baseptr = NULL;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":971\n *          baseptr = NULL\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!             # <<<<<<<<<<<<<<\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n */\n  /*else*/ {\n    Py_INCREF(__pyx_v_base);\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":972\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base             # <<<<<<<<<<<<<<\n *      Py_XDECREF(arr.base)\n *      arr.base = baseptr\n */\n    __pyx_v_baseptr = ((PyObject *)__pyx_v_base);\n  }\n  __pyx_L3:;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":973\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)             # <<<<<<<<<<<<<<\n *      arr.base = baseptr\n * \n */\n  Py_XDECREF(__pyx_v_arr->base);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":974\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n 
*      arr.base = baseptr             # <<<<<<<<<<<<<<\n * \n * cdef inline object get_array_base(ndarray arr):\n */\n  __pyx_v_arr->base = __pyx_v_baseptr;\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *      if base is None:\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"get_array_base\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     else:\n */\n  __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":978\n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:\n *         return None             # <<<<<<<<<<<<<<\n *     else:\n *         return <object>arr.base\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(Py_None);\n    __pyx_r = Py_None;\n    goto __pyx_L0;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         
return None\n *     else:\n */\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":980\n *         return None\n *     else:\n *         return <object>arr.base             # <<<<<<<<<<<<<<\n * \n * \n */\n  /*else*/ {\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(((PyObject *)__pyx_v_arr->base));\n    __pyx_r = ((PyObject *)__pyx_v_arr->base);\n    goto __pyx_L0;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_array\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    
__Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":987\n * cdef inline int import_array() except -1:\n *     try:\n *         _import_array()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n */\n      __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 987, __pyx_L3_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":988\n *     try:\n *         _import_array()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 988, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise 
ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 989, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 989, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:     
        # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_umath\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":993\n * cdef inline int import_umath() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 993, __pyx_L3_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":994\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 994, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 995, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 995, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_ufunc\", 0);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":999\n * cdef inline int import_ufunc() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 999, __pyx_L3_error)\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":1000\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 1000, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__25, NULL); 
if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 1001, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 1001, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_RLEs(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(o, a, k) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic 
void __pyx_tp_dealloc_11pycocotools_5_mask_RLEs(PyObject *o) {\n  #if PY_VERSION_HEX >= 0x030400a1\n  if (unlikely(Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  {\n    PyObject *etype, *eval, *etb;\n    PyErr_Fetch(&etype, &eval, &etb);\n    ++Py_REFCNT(o);\n    __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(o);\n    --Py_REFCNT(o);\n    PyErr_Restore(etype, eval, etb);\n  }\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyObject *__pyx_tp_getattro_11pycocotools_5_mask_RLEs(PyObject *o, PyObject *n) {\n  PyObject *v = PyObject_GenericGetAttr(o, n);\n  if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) {\n    PyErr_Clear();\n    v = __pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(o, n);\n  }\n  return v;\n}\n\nstatic PyMethodDef __pyx_methods_11pycocotools_5_mask_RLEs[] = {\n  {\"__getattr__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__, METH_O|METH_COEXIST, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_11pycocotools_5_mask_RLEs = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"pycocotools._mask.RLEs\", /*tp_name*/\n  sizeof(struct __pyx_obj_11pycocotools_5_mask_RLEs), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_11pycocotools_5_mask_RLEs, /*tp_dealloc*/\n  0, /*tp_print*/\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  0, /*tp_str*/\n  __pyx_tp_getattro_11pycocotools_5_mask_RLEs, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  
0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_11pycocotools_5_mask_RLEs, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  0, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_11pycocotools_5_mask_RLEs, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n};\n\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_Masks(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(o, a, k) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_11pycocotools_5_mask_Masks(PyObject *o) {\n  #if PY_VERSION_HEX >= 0x030400a1\n  if (unlikely(Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyMethodDef __pyx_methods_11pycocotools_5_mask_Masks[] = {\n  {\"__array__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5Masks_3__array__, METH_NOARGS, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_11pycocotools_5_mask_Masks = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"pycocotools._mask.Masks\", /*tp_name*/\n  sizeof(struct __pyx_obj_11pycocotools_5_mask_Masks), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_11pycocotools_5_mask_Masks, /*tp_dealloc*/\n  0, /*tp_print*/\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if 
PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  0, /*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_11pycocotools_5_mask_Masks, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  0, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_11pycocotools_5_mask_Masks, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n};\n\nstatic PyMethodDef __pyx_methods[] = {\n  {0, 0, 0, 0}\n};\n\n#if PY_MAJOR_VERSION >= 3\nstatic struct PyModuleDef __pyx_moduledef = {\n  #if PY_VERSION_HEX < 0x03020000\n    { PyObject_HEAD_INIT(NULL) NULL, 0, NULL },\n  #else\n    PyModuleDef_HEAD_INIT,\n  #endif\n    \"_mask\",\n    0, /* m_doc */\n    -1, /* m_size */\n    __pyx_methods /* m_methods */,\n    NULL, /* m_reload */\n    NULL, /* m_traverse */\n    NULL, /* m_clear */\n    NULL /* m_free */\n};\n#endif\n\nstatic __Pyx_StringTabEntry __pyx_string_tab[] = {\n  {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1},\n  {&__pyx_n_s_F, __pyx_k_F, sizeof(__pyx_k_F), 0, 0, 1, 1},\n  {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0},\n  {&__pyx_kp_u_Format_string_allocated_too_shor_2, 
__pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0},\n  {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1},\n  {&__pyx_n_s_N, __pyx_k_N, sizeof(__pyx_k_N), 0, 0, 1, 1},\n  {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0},\n  {&__pyx_n_s_R, __pyx_k_R, sizeof(__pyx_k_R), 0, 0, 1, 1},\n  {&__pyx_n_s_Rs, __pyx_k_Rs, sizeof(__pyx_k_Rs), 0, 0, 1, 1},\n  {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1},\n  {&__pyx_kp_s_The_dt_and_gt_should_have_the_sa, __pyx_k_The_dt_and_gt_should_have_the_sa, sizeof(__pyx_k_The_dt_and_gt_should_have_the_sa), 0, 0, 1, 0},\n  {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},\n  {&__pyx_n_s_a, __pyx_k_a, sizeof(__pyx_k_a), 0, 0, 1, 1},\n  {&__pyx_n_s_a_2, __pyx_k_a_2, sizeof(__pyx_k_a_2), 0, 0, 1, 1},\n  {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1},\n  {&__pyx_n_s_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 0, 1, 1},\n  {&__pyx_n_s_array, __pyx_k_array, sizeof(__pyx_k_array), 0, 0, 1, 1},\n  {&__pyx_n_s_astype, __pyx_k_astype, sizeof(__pyx_k_astype), 0, 0, 1, 1},\n  {&__pyx_n_s_author, __pyx_k_author, sizeof(__pyx_k_author), 0, 0, 1, 1},\n  {&__pyx_n_s_bb, __pyx_k_bb, sizeof(__pyx_k_bb), 0, 0, 1, 1},\n  {&__pyx_n_s_bbIou, __pyx_k_bbIou, sizeof(__pyx_k_bbIou), 0, 0, 1, 1},\n  {&__pyx_n_s_bb_2, __pyx_k_bb_2, sizeof(__pyx_k_bb_2), 0, 0, 1, 1},\n  {&__pyx_n_s_c_string, __pyx_k_c_string, sizeof(__pyx_k_c_string), 0, 0, 1, 1},\n  {&__pyx_n_s_cnts, __pyx_k_cnts, sizeof(__pyx_k_cnts), 0, 0, 1, 1},\n  {&__pyx_n_s_counts, __pyx_k_counts, sizeof(__pyx_k_counts), 0, 0, 1, 1},\n  {&__pyx_n_s_data, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1},\n  {&__pyx_n_s_decode, __pyx_k_decode, sizeof(__pyx_k_decode), 0, 0, 1, 1},\n  {&__pyx_n_s_double, __pyx_k_double, 
sizeof(__pyx_k_double), 0, 0, 1, 1},\n  {&__pyx_n_s_dt, __pyx_k_dt, sizeof(__pyx_k_dt), 0, 0, 1, 1},\n  {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1},\n  {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1},\n  {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1},\n  {&__pyx_n_s_frBbox, __pyx_k_frBbox, sizeof(__pyx_k_frBbox), 0, 0, 1, 1},\n  {&__pyx_n_s_frPoly, __pyx_k_frPoly, sizeof(__pyx_k_frPoly), 0, 0, 1, 1},\n  {&__pyx_n_s_frPyObjects, __pyx_k_frPyObjects, sizeof(__pyx_k_frPyObjects), 0, 0, 1, 1},\n  {&__pyx_n_s_frString, __pyx_k_frString, sizeof(__pyx_k_frString), 0, 0, 1, 1},\n  {&__pyx_n_s_frUncompressedRLE, __pyx_k_frUncompressedRLE, sizeof(__pyx_k_frUncompressedRLE), 0, 0, 1, 1},\n  {&__pyx_n_s_gt, __pyx_k_gt, sizeof(__pyx_k_gt), 0, 0, 1, 1},\n  {&__pyx_n_s_h, __pyx_k_h, sizeof(__pyx_k_h), 0, 0, 1, 1},\n  {&__pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_k_home_ubuntu_Work_brbchen_unskyc, sizeof(__pyx_k_home_ubuntu_Work_brbchen_unskyc), 0, 0, 1, 0},\n  {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1},\n  {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},\n  {&__pyx_kp_s_input_data_type_not_allowed, __pyx_k_input_data_type_not_allowed, sizeof(__pyx_k_input_data_type_not_allowed), 0, 0, 1, 0},\n  {&__pyx_kp_s_input_type_is_not_supported, __pyx_k_input_type_is_not_supported, sizeof(__pyx_k_input_type_is_not_supported), 0, 0, 1, 0},\n  {&__pyx_n_s_intersect, __pyx_k_intersect, sizeof(__pyx_k_intersect), 0, 0, 1, 1},\n  {&__pyx_n_s_iou, __pyx_k_iou, sizeof(__pyx_k_iou), 0, 0, 1, 1},\n  {&__pyx_n_s_iouFun, __pyx_k_iouFun, sizeof(__pyx_k_iouFun), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_2, __pyx_k_iou_2, sizeof(__pyx_k_iou_2), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__bbIou, __pyx_k_iou_locals__bbIou, sizeof(__pyx_k_iou_locals__bbIou), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__len, __pyx_k_iou_locals__len, sizeof(__pyx_k_iou_locals__len), 0, 0, 1, 1},\n  
{&__pyx_n_s_iou_locals__preproc, __pyx_k_iou_locals__preproc, sizeof(__pyx_k_iou_locals__preproc), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__rleIou, __pyx_k_iou_locals__rleIou, sizeof(__pyx_k_iou_locals__rleIou), 0, 0, 1, 1},\n  {&__pyx_n_s_isbox, __pyx_k_isbox, sizeof(__pyx_k_isbox), 0, 0, 1, 1},\n  {&__pyx_n_s_iscrowd, __pyx_k_iscrowd, sizeof(__pyx_k_iscrowd), 0, 0, 1, 1},\n  {&__pyx_n_s_isrle, __pyx_k_isrle, sizeof(__pyx_k_isrle), 0, 0, 1, 1},\n  {&__pyx_n_s_j, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1},\n  {&__pyx_n_s_len, __pyx_k_len, sizeof(__pyx_k_len), 0, 0, 1, 1},\n  {&__pyx_kp_s_list_input_can_be_bounding_box_N, __pyx_k_list_input_can_be_bounding_box_N, sizeof(__pyx_k_list_input_can_be_bounding_box_N), 0, 0, 1, 0},\n  {&__pyx_n_s_m, __pyx_k_m, sizeof(__pyx_k_m), 0, 0, 1, 1},\n  {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},\n  {&__pyx_n_s_mask, __pyx_k_mask, sizeof(__pyx_k_mask), 0, 0, 1, 1},\n  {&__pyx_n_s_masks, __pyx_k_masks, sizeof(__pyx_k_masks), 0, 0, 1, 1},\n  {&__pyx_n_s_merge, __pyx_k_merge, sizeof(__pyx_k_merge), 0, 0, 1, 1},\n  {&__pyx_n_s_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1},\n  {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0},\n  {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0},\n  {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1},\n  {&__pyx_n_s_np_poly, __pyx_k_np_poly, sizeof(__pyx_k_np_poly), 0, 0, 1, 1},\n  {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1},\n  {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0},\n  {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0},\n  
{&__pyx_kp_s_numpy_ndarray_input_is_only_for, __pyx_k_numpy_ndarray_input_is_only_for, sizeof(__pyx_k_numpy_ndarray_input_is_only_for), 0, 0, 1, 0},\n  {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1},\n  {&__pyx_n_s_objs, __pyx_k_objs, sizeof(__pyx_k_objs), 0, 0, 1, 1},\n  {&__pyx_n_s_order, __pyx_k_order, sizeof(__pyx_k_order), 0, 0, 1, 1},\n  {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1},\n  {&__pyx_n_s_poly, __pyx_k_poly, sizeof(__pyx_k_poly), 0, 0, 1, 1},\n  {&__pyx_n_s_preproc, __pyx_k_preproc, sizeof(__pyx_k_preproc), 0, 0, 1, 1},\n  {&__pyx_n_s_py_string, __pyx_k_py_string, sizeof(__pyx_k_py_string), 0, 0, 1, 1},\n  {&__pyx_n_s_pycocotools__mask, __pyx_k_pycocotools__mask, sizeof(__pyx_k_pycocotools__mask), 0, 0, 1, 1},\n  {&__pyx_n_s_pyiscrowd, __pyx_k_pyiscrowd, sizeof(__pyx_k_pyiscrowd), 0, 0, 1, 1},\n  {&__pyx_n_s_pyobj, __pyx_k_pyobj, sizeof(__pyx_k_pyobj), 0, 0, 1, 1},\n  {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1},\n  {&__pyx_n_s_reshape, __pyx_k_reshape, sizeof(__pyx_k_reshape), 0, 0, 1, 1},\n  {&__pyx_n_s_rleIou, __pyx_k_rleIou, sizeof(__pyx_k_rleIou), 0, 0, 1, 1},\n  {&__pyx_n_s_rleObjs, __pyx_k_rleObjs, sizeof(__pyx_k_rleObjs), 0, 0, 1, 1},\n  {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1},\n  {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1},\n  {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},\n  {&__pyx_n_s_toBbox, __pyx_k_toBbox, sizeof(__pyx_k_toBbox), 0, 0, 1, 1},\n  {&__pyx_n_s_toString, __pyx_k_toString, sizeof(__pyx_k_toString), 0, 0, 1, 1},\n  {&__pyx_n_s_tsungyi, __pyx_k_tsungyi, sizeof(__pyx_k_tsungyi), 0, 0, 1, 1},\n  {&__pyx_n_s_ucRles, __pyx_k_ucRles, sizeof(__pyx_k_ucRles), 0, 0, 1, 1},\n  {&__pyx_n_s_uint32, __pyx_k_uint32, sizeof(__pyx_k_uint32), 0, 0, 1, 1},\n  {&__pyx_n_s_uint8, __pyx_k_uint8, sizeof(__pyx_k_uint8), 0, 0, 1, 1},\n  {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, 
sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0},\n  {&__pyx_kp_s_unrecognized_type_The_following, __pyx_k_unrecognized_type_The_following, sizeof(__pyx_k_unrecognized_type_The_following), 0, 0, 1, 0},\n  {&__pyx_n_s_w, __pyx_k_w, sizeof(__pyx_k_w), 0, 0, 1, 1},\n  {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1},\n  {0, 0, 0, 0, 0, 0, 0}\n};\nstatic int __Pyx_InitCachedBuiltins(void) {\n  __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 64, __pyx_L1_error)\n  __pyx_builtin_AttributeError = __Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 70, __pyx_L1_error)\n  __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(0, 121, __pyx_L1_error)\n  __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error)\n  __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error)\n  __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 989, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic int __Pyx_InitCachedConstants(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_InitCachedConstants\", 0);\n\n  /* \"pycocotools/_mask.pyx\":146\n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)             # <<<<<<<<<<<<<<\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n */\n  __pyx_tuple_ = PyTuple_Pack(1, __pyx_int_1); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 146, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple_);\n  __Pyx_GIVEREF(__pyx_tuple_);\n\n  /* \"pycocotools/_mask.pyx\":172\n *             # check if it's Nx4 bbox\n *           
  if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')             # <<<<<<<<<<<<<<\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:\n */\n  __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_numpy_ndarray_input_is_only_for); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 172, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__2);\n  __Pyx_GIVEREF(__pyx_tuple__2);\n\n  /* \"pycocotools/_mask.pyx\":185\n *                 objs = _frString(objs)\n *             else:\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')             # <<<<<<<<<<<<<<\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n */\n  __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_list_input_can_be_bounding_box_N); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 185, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__3);\n  __Pyx_GIVEREF(__pyx_tuple__3);\n\n  /* \"pycocotools/_mask.pyx\":187\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')             # <<<<<<<<<<<<<<\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n */\n  __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unrecognized_type_The_following); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 187, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__4);\n  __Pyx_GIVEREF(__pyx_tuple__4);\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n  __pyx_tuple__5 = PyTuple_Pack(4, __pyx_n_s_objs, __pyx_n_s_isbox, __pyx_n_s_isrle, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 164, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__5);\n  __Pyx_GIVEREF(__pyx_tuple__5);\n  __pyx_codeobj__6 = (PyObject*)__Pyx_PyCode_New(1, 0, 4, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__5, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_preproc, 164, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__6)) __PYX_ERR(0, 164, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n  __pyx_tuple__7 = PyTuple_Pack(6, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(0, 189, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__7);\n  __Pyx_GIVEREF(__pyx_tuple__7);\n  __pyx_codeobj__8 = (PyObject*)__Pyx_PyCode_New(6, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__7, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_rleIou, 189, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__8)) __PYX_ERR(0, 189, 
__pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n  __pyx_tuple__9 = PyTuple_Pack(6, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__9);\n  __Pyx_GIVEREF(__pyx_tuple__9);\n  __pyx_codeobj__10 = (PyObject*)__Pyx_PyCode_New(6, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__9, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_bbIou, 191, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__10)) __PYX_ERR(0, 191, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n  __pyx_tuple__11 = PyTuple_Pack(2, __pyx_n_s_obj, __pyx_n_s_N); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 193, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__11);\n  __Pyx_GIVEREF(__pyx_tuple__11);\n  __pyx_codeobj__12 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__11, 
__pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_len, 193, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__12)) __PYX_ERR(0, 193, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":213\n *         return []\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')             # <<<<<<<<<<<<<<\n * \n *     # define local variables\n */\n  __pyx_tuple__13 = PyTuple_Pack(1, __pyx_kp_s_The_dt_and_gt_should_have_the_sa); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(0, 213, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__13);\n  __Pyx_GIVEREF(__pyx_tuple__13);\n\n  /* \"pycocotools/_mask.pyx\":224\n *         _iouFun = _bbIou\n *     else:\n *         raise Exception('input data type not allowed.')             # <<<<<<<<<<<<<<\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n */\n  __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_input_data_type_not_allowed); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 224, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__14);\n  __Pyx_GIVEREF(__pyx_tuple__14);\n\n  /* \"pycocotools/_mask.pyx\":269\n *     objs = []\n *     for i in range(n):\n *         Rs = RLEs(1)             # <<<<<<<<<<<<<<\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n *         # time for malloc can be saved here but it's fine\n */\n  __pyx_tuple__15 = PyTuple_Pack(1, __pyx_int_1); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(0, 269, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__15);\n  __Pyx_GIVEREF(__pyx_tuple__15);\n\n  /* \"pycocotools/_mask.pyx\":290\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n *         raise Exception('input type is not supported.')             # <<<<<<<<<<<<<<\n *     return objs\n */\n  __pyx_tuple__16 = PyTuple_Pack(1, __pyx_kp_s_input_type_is_not_supported); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(0, 290, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__16);\n  __Pyx_GIVEREF(__pyx_tuple__16);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n  __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 218, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__17);\n  __Pyx_GIVEREF(__pyx_tuple__17);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":222\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n  __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 222, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__18);\n  __Pyx_GIVEREF(__pyx_tuple__18);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n  __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__19)) 
__PYX_ERR(1, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__19);\n  __Pyx_GIVEREF(__pyx_tuple__19);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n  __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 799, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__20);\n  __Pyx_GIVEREF(__pyx_tuple__20);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n  __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 803, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__21);\n  __Pyx_GIVEREF(__pyx_tuple__21);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n  __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 823, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__22);\n  
__Pyx_GIVEREF(__pyx_tuple__22);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n  __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 989, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__23);\n  __Pyx_GIVEREF(__pyx_tuple__23);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n  __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 995, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__24);\n  __Pyx_GIVEREF(__pyx_tuple__24);\n\n  /* \"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__25 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1001, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__25);\n  __Pyx_GIVEREF(__pyx_tuple__25);\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n  __pyx_tuple__26 = PyTuple_Pack(6, __pyx_n_s_Rs, __pyx_n_s_n, __pyx_n_s_py_string, __pyx_n_s_c_string, __pyx_n_s_objs, __pyx_n_s_i); if 
(unlikely(!__pyx_tuple__26)) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__26);\n  __Pyx_GIVEREF(__pyx_tuple__26);\n  __pyx_codeobj__27 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__26, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_toString, 100, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__27)) __PYX_ERR(0, 100, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n  __pyx_tuple__28 = PyTuple_Pack(7, __pyx_n_s_rleObjs, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_py_string, __pyx_n_s_c_string, __pyx_n_s_i, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__28);\n  __Pyx_GIVEREF(__pyx_tuple__28);\n  __pyx_codeobj__29 = (PyObject*)__Pyx_PyCode_New(1, 0, 7, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__28, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_frString, 116, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__29)) __PYX_ERR(0, 116, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n  __pyx_tuple__30 = PyTuple_Pack(6, __pyx_n_s_mask, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__30)) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__30);\n  __Pyx_GIVEREF(__pyx_tuple__30);\n  __pyx_codeobj__31 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, 0, 
__pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__30, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_encode, 129, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__31)) __PYX_ERR(0, 129, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n  __pyx_tuple__32 = PyTuple_Pack(6, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_masks); if (unlikely(!__pyx_tuple__32)) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__32);\n  __Pyx_GIVEREF(__pyx_tuple__32);\n  __pyx_codeobj__33 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__32, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_decode, 137, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__33)) __PYX_ERR(0, 137, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n  __pyx_tuple__34 = PyTuple_Pack(5, __pyx_n_s_rleObjs, __pyx_n_s_intersect, __pyx_n_s_Rs, __pyx_n_s_R, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__34)) __PYX_ERR(0, 144, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__34);\n  __Pyx_GIVEREF(__pyx_tuple__34);\n  __pyx_codeobj__35 = (PyObject*)__Pyx_PyCode_New(2, 0, 5, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__34, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_merge, 144, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__35)) __PYX_ERR(0, 144, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * 
def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n  __pyx_tuple__36 = PyTuple_Pack(5, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_a, __pyx_n_s_shape, __pyx_n_s_a_2); if (unlikely(!__pyx_tuple__36)) __PYX_ERR(0, 151, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__36);\n  __Pyx_GIVEREF(__pyx_tuple__36);\n  __pyx_codeobj__37 = (PyObject*)__Pyx_PyCode_New(1, 0, 5, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__36, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_area, 151, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__37)) __PYX_ERR(0, 151, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n  __pyx_tuple__38 = PyTuple_Pack(18, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_pyiscrowd, __pyx_n_s_preproc, __pyx_n_s_preproc, __pyx_n_s_rleIou, __pyx_n_s_rleIou, __pyx_n_s_bbIou, __pyx_n_s_bbIou, __pyx_n_s_len, __pyx_n_s_len, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou, __pyx_n_s_shape, __pyx_n_s_iouFun, __pyx_n_s_iou_2); if (unlikely(!__pyx_tuple__38)) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__38);\n  __Pyx_GIVEREF(__pyx_tuple__38);\n  __pyx_codeobj__39 = (PyObject*)__Pyx_PyCode_New(3, 0, 18, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__38, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_iou_2, 163, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__39)) __PYX_ERR(0, 163, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = 
Rs.n\n */\n  __pyx_tuple__40 = PyTuple_Pack(6, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_n, __pyx_n_s_bb_2, __pyx_n_s_shape, __pyx_n_s_bb); if (unlikely(!__pyx_tuple__40)) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__40);\n  __Pyx_GIVEREF(__pyx_tuple__40);\n  __pyx_codeobj__41 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__40, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_toBbox, 233, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__41)) __PYX_ERR(0, 233, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n  __pyx_tuple__42 = PyTuple_Pack(6, __pyx_n_s_bb, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__42)) __PYX_ERR(0, 245, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__42);\n  __Pyx_GIVEREF(__pyx_tuple__42);\n  __pyx_codeobj__43 = (PyObject*)__Pyx_PyCode_New(3, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__42, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_frBbox, 245, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__43)) __PYX_ERR(0, 245, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n  __pyx_tuple__44 = PyTuple_Pack(9, __pyx_n_s_poly, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_np_poly, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_i, __pyx_n_s_p, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__44)) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__44);\n  __Pyx_GIVEREF(__pyx_tuple__44);\n  __pyx_codeobj__45 = (PyObject*)__Pyx_PyCode_New(3, 0, 9, 0, 0, 
__pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__44, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_frPoly, 252, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__45)) __PYX_ERR(0, 252, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n  __pyx_tuple__46 = PyTuple_Pack(11, __pyx_n_s_ucRles, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_cnts, __pyx_n_s_R, __pyx_n_s_data, __pyx_n_s_n, __pyx_n_s_objs, __pyx_n_s_i, __pyx_n_s_Rs, __pyx_n_s_j); if (unlikely(!__pyx_tuple__46)) __PYX_ERR(0, 262, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__46);\n  __Pyx_GIVEREF(__pyx_tuple__46);\n  __pyx_codeobj__47 = (PyObject*)__Pyx_PyCode_New(3, 0, 11, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__46, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_frUncompressedRLE, 262, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__47)) __PYX_ERR(0, 262, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n  __pyx_tuple__48 = PyTuple_Pack(4, __pyx_n_s_pyobj, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__48)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__48);\n  __Pyx_GIVEREF(__pyx_tuple__48);\n  __pyx_codeobj__49 = (PyObject*)__Pyx_PyCode_New(3, 0, 4, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__48, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_ubuntu_Work_brbchen_unskyc, __pyx_n_s_frPyObjects, 280, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__49)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  
__pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_InitGlobals(void) {\n  if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_4 = PyInt_FromLong(4); if (unlikely(!__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\n#if PY_MAJOR_VERSION < 3\nPyMODINIT_FUNC init_mask(void); /*proto*/\nPyMODINIT_FUNC init_mask(void)\n#else\nPyMODINIT_FUNC PyInit__mask(void); /*proto*/\nPyMODINIT_FUNC PyInit__mask(void)\n#endif\n{\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  __Pyx_RefNannyDeclarations\n  #if CYTHON_REFNANNY\n  __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"refnanny\");\n  if (!__Pyx_RefNanny) {\n      PyErr_Clear();\n      __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");\n      if (!__Pyx_RefNanny)\n          Py_FatalError(\"failed to import 'refnanny' module\");\n  }\n  #endif\n  __Pyx_RefNannySetupContext(\"PyMODINIT_FUNC PyInit__mask(void)\", 0);\n  if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_bytes = PyBytes_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_unicode = PyUnicode_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pyx_CyFunction_USED\n  if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_FusedFunction_USED\n  if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Coroutine_USED\n  if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Generator_USED\n  
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_StopAsyncIteration_USED\n  if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  /*--- Library function declarations ---*/\n  /*--- Threads initialization code ---*/\n  #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS\n  #ifdef WITH_THREAD /* Python build with threading support? */\n  PyEval_InitThreads();\n  #endif\n  #endif\n  /*--- Module creation code ---*/\n  #if PY_MAJOR_VERSION < 3\n  __pyx_m = Py_InitModule4(\"_mask\", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);\n  #else\n  __pyx_m = PyModule_Create(&__pyx_moduledef);\n  #endif\n  if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_d);\n  __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if CYTHON_COMPILING_IN_PYPY\n  Py_INCREF(__pyx_b);\n  #endif\n  if (PyObject_SetAttrString(__pyx_m, \"__builtins__\", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  /*--- Initialize various global constants etc. 
---*/\n  if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n  if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  if (__pyx_module_is_main_pycocotools___mask) {\n    if (PyObject_SetAttrString(__pyx_m, \"__name__\", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  }\n  #if PY_MAJOR_VERSION >= 3\n  {\n    PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)\n    if (!PyDict_GetItemString(modules, \"pycocotools._mask\")) {\n      if (unlikely(PyDict_SetItemString(modules, \"pycocotools._mask\", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)\n    }\n  }\n  #endif\n  /*--- Builtin init code ---*/\n  if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Constants init code ---*/\n  if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Global init code ---*/\n  /*--- Variable export code ---*/\n  /*--- Function export code ---*/\n  /*--- Type init code ---*/\n  if (PyType_Ready(&__pyx_type_11pycocotools_5_mask_RLEs) < 0) __PYX_ERR(0, 53, __pyx_L1_error)\n  __pyx_type_11pycocotools_5_mask_RLEs.tp_print = 0;\n  if (PyObject_SetAttrString(__pyx_m, \"RLEs\", (PyObject *)&__pyx_type_11pycocotools_5_mask_RLEs) < 0) __PYX_ERR(0, 53, __pyx_L1_error)\n  __pyx_ptype_11pycocotools_5_mask_RLEs = &__pyx_type_11pycocotools_5_mask_RLEs;\n  if (PyType_Ready(&__pyx_type_11pycocotools_5_mask_Masks) < 0) __PYX_ERR(0, 74, __pyx_L1_error)\n  __pyx_type_11pycocotools_5_mask_Masks.tp_print = 0;\n  if (PyObject_SetAttrString(__pyx_m, \"Masks\", (PyObject *)&__pyx_type_11pycocotools_5_mask_Masks) < 0) __PYX_ERR(0, 74, __pyx_L1_error)\n  __pyx_ptype_11pycocotools_5_mask_Masks = &__pyx_type_11pycocotools_5_mask_Masks;\n  /*--- Type import code ---*/\n  __pyx_ptype_7cpython_4type_type = 
__Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, \"type\", \n  #if CYTHON_COMPILING_IN_PYPY\n  sizeof(PyTypeObject),\n  #else\n  sizeof(PyHeapTypeObject),\n  #endif\n  0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error)\n  __pyx_ptype_5numpy_dtype = __Pyx_ImportType(\"numpy\", \"dtype\", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error)\n  __pyx_ptype_5numpy_flatiter = __Pyx_ImportType(\"numpy\", \"flatiter\", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error)\n  __pyx_ptype_5numpy_broadcast = __Pyx_ImportType(\"numpy\", \"broadcast\", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error)\n  __pyx_ptype_5numpy_ndarray = __Pyx_ImportType(\"numpy\", \"ndarray\", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error)\n  __pyx_ptype_5numpy_ufunc = __Pyx_ImportType(\"numpy\", \"ufunc\", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error)\n  /*--- Variable import code ---*/\n  /*--- Function import code ---*/\n  /*--- Execution code ---*/\n  #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)\n  if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n\n  /* \"pycocotools/_mask.pyx\":11\n * #**************************************************************************\n * \n * __author__ = 'tsungyi'             # <<<<<<<<<<<<<<\n * \n * # import both Python-level and C-level symbols of Numpy\n */\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_author, __pyx_n_s_tsungyi) < 0) __PYX_ERR(0, 11, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":15\n * # import both Python-level and C-level symbols of Numpy\n * # the API uses Numpy to interface C and Python\n * import numpy as np             # <<<<<<<<<<<<<<\n * cimport numpy as np\n * from libc.stdlib cimport 
malloc, free\n */\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":20\n * \n * # intialized Numpy. must do.\n * np.import_array()             # <<<<<<<<<<<<<<\n * \n * # import numpy C function\n */\n  __pyx_t_2 = __pyx_f_5numpy_import_array(); if (unlikely(__pyx_t_2 == -1)) __PYX_ERR(0, 20, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_1_toString, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_toString, __pyx_t_1) < 0) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3_frString, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frString, __pyx_t_1) < 0) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = 
mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_5encode, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_encode, __pyx_t_1) < 0) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_7decode, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_decode, __pyx_t_1) < 0) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_9merge, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 144, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_merge, __pyx_t_1) < 0) __PYX_ERR(0, 144, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_11area, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_area, __pyx_t_1) < 0) __PYX_ERR(0, 151, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_13iou, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_iou_2, __pyx_t_1) < 0) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_15toBbox, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_toBbox, __pyx_t_1) < 0) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_17frBbox, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frBbox, __pyx_t_1) < 0) __PYX_ERR(0, 245, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n * 
    cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_19frPoly, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frPoly, __pyx_t_1) < 0) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_21frUncompressedRLE, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 262, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frUncompressedRLE, __pyx_t_1) < 0) __PYX_ERR(0, 262, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_23frPyObjects, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frPyObjects, __pyx_t_1) < 0) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":1\n * # distutils: language = c             # <<<<<<<<<<<<<<\n * # distutils: sources = ../MatlabAPI/private/maskApi.c\n * \n */\n  __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* 
\"../../../../../../../../usr/local/lib/python2.7/dist-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /*--- Wrapped vars code ---*/\n\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  if (__pyx_m) {\n    if (__pyx_d) {\n      __Pyx_AddTraceback(\"init pycocotools._mask\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n    }\n    Py_DECREF(__pyx_m); __pyx_m = 0;\n  } else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_ImportError, \"init pycocotools._mask\");\n  }\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  #if PY_MAJOR_VERSION < 3\n  return;\n  #else\n  return __pyx_m;\n  #endif\n}\n\n/* --- Runtime support code --- */\n/* Refnanny */\n#if CYTHON_REFNANNY\nstatic __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {\n    PyObject *m = NULL, *p = NULL;\n    void *r = NULL;\n    m = PyImport_ImportModule((char *)modname);\n    if (!m) goto end;\n    p = PyObject_GetAttrString(m, (char *)\"RefNannyAPI\");\n    if (!p) goto end;\n    r = PyLong_AsVoidPtr(p);\nend:\n    Py_XDECREF(p);\n    Py_XDECREF(m);\n    return (__Pyx_RefNannyAPIStruct *)r;\n}\n#endif\n\n/* GetBuiltinName */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name) {\n    PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);\n    if (unlikely(!result)) {\n        PyErr_Format(PyExc_NameError,\n#if PY_MAJOR_VERSION >= 3\n            \"name '%U' is not defined\", name);\n#else\n            \"name '%.200s' is not defined\", PyString_AS_STRING(name));\n#endif\n    }\n    return result;\n}\n\n/* RaiseDoubleKeywords */\nstatic void __Pyx_RaiseDoubleKeywordsError(\n    const char* func_name,\n    PyObject* kw_name)\n{\n    PyErr_Format(PyExc_TypeError,\n        #if PY_MAJOR_VERSION >= 3\n        \"%s() got multiple values for keyword argument '%U'\", 
func_name, kw_name);\n        #else\n        \"%s() got multiple values for keyword argument '%s'\", func_name,\n        PyString_AsString(kw_name));\n        #endif\n}\n\n/* ParseKeywords */\nstatic int __Pyx_ParseOptionalKeywords(\n    PyObject *kwds,\n    PyObject **argnames[],\n    PyObject *kwds2,\n    PyObject *values[],\n    Py_ssize_t num_pos_args,\n    const char* function_name)\n{\n    PyObject *key = 0, *value = 0;\n    Py_ssize_t pos = 0;\n    PyObject*** name;\n    PyObject*** first_kw_arg = argnames + num_pos_args;\n    while (PyDict_Next(kwds, &pos, &key, &value)) {\n        name = first_kw_arg;\n        while (*name && (**name != key)) name++;\n        if (*name) {\n            values[name-argnames] = value;\n            continue;\n        }\n        name = first_kw_arg;\n        #if PY_MAJOR_VERSION < 3\n        if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) {\n            while (*name) {\n                if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))\n                        && _PyString_Eq(**name, key)) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    if ((**argname == key) || (\n                            (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))\n                             && _PyString_Eq(**argname, key))) {\n                        goto arg_passed_twice;\n                    }\n                    argname++;\n                }\n            }\n        } else\n        #endif\n        if (likely(PyUnicode_Check(key))) {\n            while (*name) {\n                int cmp = (**name == key) ? 
0 :\n                #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\n                #endif\n                    PyUnicode_Compare(**name, key);\n                if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                if (cmp == 0) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    int cmp = (**argname == key) ? 0 :\n                    #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                        (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\n                    #endif\n                        PyUnicode_Compare(**argname, key);\n                    if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                    if (cmp == 0) goto arg_passed_twice;\n                    argname++;\n                }\n            }\n        } else\n            goto invalid_keyword_type;\n        if (kwds2) {\n            if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;\n        } else {\n            goto invalid_keyword;\n        }\n    }\n    return 0;\narg_passed_twice:\n    __Pyx_RaiseDoubleKeywordsError(function_name, key);\n    goto bad;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    goto bad;\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\nbad:\n    return -1;\n}\n\n/* RaiseArgTupleInvalid */\nstatic void 
__Pyx_RaiseArgtupleInvalid(\n    const char* func_name,\n    int exact,\n    Py_ssize_t num_min,\n    Py_ssize_t num_max,\n    Py_ssize_t num_found)\n{\n    Py_ssize_t num_expected;\n    const char *more_or_less;\n    if (num_found < num_min) {\n        num_expected = num_min;\n        more_or_less = \"at least\";\n    } else {\n        num_expected = num_max;\n        more_or_less = \"at most\";\n    }\n    if (exact) {\n        more_or_less = \"exactly\";\n    }\n    PyErr_Format(PyExc_TypeError,\n                 \"%.200s() takes %.8s %\" CYTHON_FORMAT_SSIZE_T \"d positional argument%.1s (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                 func_name, more_or_less, num_expected,\n                 (num_expected == 1) ? \"\" : \"s\", num_found);\n}\n\n/* BytesEquals */\nstatic CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) {\n#if CYTHON_COMPILING_IN_PYPY\n    return PyObject_RichCompareBool(s1, s2, equals);\n#else\n    if (s1 == s2) {\n        return (equals == Py_EQ);\n    } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) {\n        const char *ps1, *ps2;\n        Py_ssize_t length = PyBytes_GET_SIZE(s1);\n        if (length != PyBytes_GET_SIZE(s2))\n            return (equals == Py_NE);\n        ps1 = PyBytes_AS_STRING(s1);\n        ps2 = PyBytes_AS_STRING(s2);\n        if (ps1[0] != ps2[0]) {\n            return (equals == Py_NE);\n        } else if (length == 1) {\n            return (equals == Py_EQ);\n        } else {\n            int result = memcmp(ps1, ps2, (size_t)length);\n            return (equals == Py_EQ) ? 
(result == 0) : (result != 0);\n        }\n    } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) {\n        return (equals == Py_NE);\n    } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) {\n        return (equals == Py_NE);\n    } else {\n        int result;\n        PyObject* py_result = PyObject_RichCompare(s1, s2, equals);\n        if (!py_result)\n            return -1;\n        result = __Pyx_PyObject_IsTrue(py_result);\n        Py_DECREF(py_result);\n        return result;\n    }\n#endif\n}\n\n/* UnicodeEquals */\nstatic CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) {\n#if CYTHON_COMPILING_IN_PYPY\n    return PyObject_RichCompareBool(s1, s2, equals);\n#else\n#if PY_MAJOR_VERSION < 3\n    PyObject* owned_ref = NULL;\n#endif\n    int s1_is_unicode, s2_is_unicode;\n    if (s1 == s2) {\n        goto return_eq;\n    }\n    s1_is_unicode = PyUnicode_CheckExact(s1);\n    s2_is_unicode = PyUnicode_CheckExact(s2);\n#if PY_MAJOR_VERSION < 3\n    if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) {\n        owned_ref = PyUnicode_FromObject(s2);\n        if (unlikely(!owned_ref))\n            return -1;\n        s2 = owned_ref;\n        s2_is_unicode = 1;\n    } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) {\n        owned_ref = PyUnicode_FromObject(s1);\n        if (unlikely(!owned_ref))\n            return -1;\n        s1 = owned_ref;\n        s1_is_unicode = 1;\n    } else if (((!s2_is_unicode) & (!s1_is_unicode))) {\n        return __Pyx_PyBytes_Equals(s1, s2, equals);\n    }\n#endif\n    if (s1_is_unicode & s2_is_unicode) {\n        Py_ssize_t length;\n        int kind;\n        void *data1, *data2;\n        if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0))\n            return -1;\n        length = __Pyx_PyUnicode_GET_LENGTH(s1);\n        if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) {\n            goto return_ne;\n        }\n        kind = 
__Pyx_PyUnicode_KIND(s1);\n        if (kind != __Pyx_PyUnicode_KIND(s2)) {\n            goto return_ne;\n        }\n        data1 = __Pyx_PyUnicode_DATA(s1);\n        data2 = __Pyx_PyUnicode_DATA(s2);\n        if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) {\n            goto return_ne;\n        } else if (length == 1) {\n            goto return_eq;\n        } else {\n            int result = memcmp(data1, data2, (size_t)(length * kind));\n            #if PY_MAJOR_VERSION < 3\n            Py_XDECREF(owned_ref);\n            #endif\n            return (equals == Py_EQ) ? (result == 0) : (result != 0);\n        }\n    } else if ((s1 == Py_None) & s2_is_unicode) {\n        goto return_ne;\n    } else if ((s2 == Py_None) & s1_is_unicode) {\n        goto return_ne;\n    } else {\n        int result;\n        PyObject* py_result = PyObject_RichCompare(s1, s2, equals);\n        if (!py_result)\n            return -1;\n        result = __Pyx_PyObject_IsTrue(py_result);\n        Py_DECREF(py_result);\n        return result;\n    }\nreturn_eq:\n    #if PY_MAJOR_VERSION < 3\n    Py_XDECREF(owned_ref);\n    #endif\n    return (equals == Py_EQ);\nreturn_ne:\n    #if PY_MAJOR_VERSION < 3\n    Py_XDECREF(owned_ref);\n    #endif\n    return (equals == Py_NE);\n#endif\n}\n\n/* PyObjectCall */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    PyObject *result;\n    ternaryfunc call = func->ob_type->tp_call;\n    if (unlikely(!call))\n        return PyObject_Call(func, arg, kw);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = (*call)(func, arg, kw);\n    Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return 
result;\n}\n#endif\n\n/* PyErrFetchRestore */\n#if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->curexc_type;\n    tmp_value = tstate->curexc_value;\n    tmp_tb = tstate->curexc_traceback;\n    tstate->curexc_type = type;\n    tstate->curexc_value = value;\n    tstate->curexc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->curexc_type;\n    *value = tstate->curexc_value;\n    *tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n}\n#endif\n\n/* RaiseException */\n#if PY_MAJOR_VERSION < 3\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,\n                        CYTHON_UNUSED PyObject *cause) {\n    __Pyx_PyThreadState_declare\n    Py_XINCREF(type);\n    if (!value || value == Py_None)\n        value = NULL;\n    else\n        Py_INCREF(value);\n    if (!tb || tb == Py_None)\n        tb = NULL;\n    else {\n        Py_INCREF(tb);\n        if (!PyTraceBack_Check(tb)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: arg 3 must be a traceback or None\");\n            goto raise_error;\n        }\n    }\n    if (PyType_Check(type)) {\n#if CYTHON_COMPILING_IN_PYPY\n        if (!value) {\n            Py_INCREF(Py_None);\n            value = Py_None;\n        }\n#endif\n        PyErr_NormalizeException(&type, &value, &tb);\n    } else {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto raise_error;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(type);\n        
Py_INCREF(type);\n        if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: exception class must be a subclass of BaseException\");\n            goto raise_error;\n        }\n    }\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrRestore(type, value, tb);\n    return;\nraise_error:\n    Py_XDECREF(value);\n    Py_XDECREF(type);\n    Py_XDECREF(tb);\n    return;\n}\n#else\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {\n    PyObject* owned_instance = NULL;\n    if (tb == Py_None) {\n        tb = 0;\n    } else if (tb && !PyTraceBack_Check(tb)) {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: arg 3 must be a traceback or None\");\n        goto bad;\n    }\n    if (value == Py_None)\n        value = 0;\n    if (PyExceptionInstance_Check(type)) {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto bad;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(value);\n    } else if (PyExceptionClass_Check(type)) {\n        PyObject *instance_class = NULL;\n        if (value && PyExceptionInstance_Check(value)) {\n            instance_class = (PyObject*) Py_TYPE(value);\n            if (instance_class != type) {\n                int is_subclass = PyObject_IsSubclass(instance_class, type);\n                if (!is_subclass) {\n                    instance_class = NULL;\n                } else if (unlikely(is_subclass == -1)) {\n                    goto bad;\n                } else {\n                    type = instance_class;\n                }\n            }\n        }\n        if (!instance_class) {\n            PyObject *args;\n            if (!value)\n                args = PyTuple_New(0);\n            else if (PyTuple_Check(value)) {\n                Py_INCREF(value);\n                args 
= value;\n            } else\n                args = PyTuple_Pack(1, value);\n            if (!args)\n                goto bad;\n            owned_instance = PyObject_Call(type, args, NULL);\n            Py_DECREF(args);\n            if (!owned_instance)\n                goto bad;\n            value = owned_instance;\n            if (!PyExceptionInstance_Check(value)) {\n                PyErr_Format(PyExc_TypeError,\n                             \"calling %R should have returned an instance of \"\n                             \"BaseException, not %R\",\n                             type, Py_TYPE(value));\n                goto bad;\n            }\n        }\n    } else {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: exception class must be a subclass of BaseException\");\n        goto bad;\n    }\n#if PY_VERSION_HEX >= 0x03030000\n    if (cause) {\n#else\n    if (cause && cause != Py_None) {\n#endif\n        PyObject *fixed_cause;\n        if (cause == Py_None) {\n            fixed_cause = NULL;\n        } else if (PyExceptionClass_Check(cause)) {\n            fixed_cause = PyObject_CallObject(cause, NULL);\n            if (fixed_cause == NULL)\n                goto bad;\n        } else if (PyExceptionInstance_Check(cause)) {\n            fixed_cause = cause;\n            Py_INCREF(fixed_cause);\n        } else {\n            PyErr_SetString(PyExc_TypeError,\n                            \"exception causes must derive from \"\n                            \"BaseException\");\n            goto bad;\n        }\n        PyException_SetCause(value, fixed_cause);\n    }\n    PyErr_SetObject(type, value);\n    if (tb) {\n#if CYTHON_COMPILING_IN_PYPY\n        PyObject *tmp_type, *tmp_value, *tmp_tb;\n        PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);\n        Py_INCREF(tb);\n        PyErr_Restore(tmp_type, tmp_value, tb);\n        Py_XDECREF(tmp_tb);\n#else\n        PyThreadState *tstate = PyThreadState_GET();\n        PyObject* tmp_tb = 
tstate->curexc_traceback;\n        if (tb != tmp_tb) {\n            Py_INCREF(tb);\n            tstate->curexc_traceback = tb;\n            Py_XDECREF(tmp_tb);\n        }\n#endif\n    }\nbad:\n    Py_XDECREF(owned_instance);\n    return;\n}\n#endif\n\n/* ExtTypeTest */\n  static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {\n    if (unlikely(!type)) {\n        PyErr_SetString(PyExc_SystemError, \"Missing type object\");\n        return 0;\n    }\n    if (likely(PyObject_TypeCheck(obj, type)))\n        return 1;\n    PyErr_Format(PyExc_TypeError, \"Cannot convert %.200s to %.200s\",\n                 Py_TYPE(obj)->tp_name, type->tp_name);\n    return 0;\n}\n\n/* ArgTypeTest */\n  static void __Pyx_RaiseArgumentTypeInvalid(const char* name, PyObject *obj, PyTypeObject *type) {\n    PyErr_Format(PyExc_TypeError,\n        \"Argument '%.200s' has incorrect type (expected %.200s, got %.200s)\",\n        name, type->tp_name, Py_TYPE(obj)->tp_name);\n}\nstatic CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed,\n    const char *name, int exact)\n{\n    if (unlikely(!type)) {\n        PyErr_SetString(PyExc_SystemError, \"Missing type object\");\n        return 0;\n    }\n    if (none_allowed && obj == Py_None) return 1;\n    else if (exact) {\n        if (likely(Py_TYPE(obj) == type)) return 1;\n        #if PY_MAJOR_VERSION == 2\n        else if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1;\n        #endif\n    }\n    else {\n        if (likely(PyObject_TypeCheck(obj, type))) return 1;\n    }\n    __Pyx_RaiseArgumentTypeInvalid(name, obj, type);\n    return 0;\n}\n\n/* PyIntBinop */\n  #if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED int inplace) {\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_CheckExact(op1))) {\n        const long b = intval;\n        long x;\n        long a = 
PyInt_AS_LONG(op1);\n            x = (long)((unsigned long)a + b);\n            if (likely((x^a) >= 0 || (x^b) >= 0))\n                return PyInt_FromLong(x);\n            return PyLong_Type.tp_as_number->nb_add(op1, op2);\n    }\n    #endif\n    #if CYTHON_USE_PYLONG_INTERNALS\n    if (likely(PyLong_CheckExact(op1))) {\n        const long b = intval;\n        long a, x;\n#ifdef HAVE_LONG_LONG\n        const PY_LONG_LONG llb = intval;\n        PY_LONG_LONG lla, llx;\n#endif\n        const digit* digits = ((PyLongObject*)op1)->ob_digit;\n        const Py_ssize_t size = Py_SIZE(op1);\n        if (likely(__Pyx_sst_abs(size) <= 1)) {\n            a = likely(size) ? digits[0] : 0;\n            if (size == -1) a = -a;\n        } else {\n            switch (size) {\n                case -2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                case 2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                case -3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                     
   a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                case 3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                case -4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                case 4:\n           
         if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                default: return PyLong_Type.tp_as_number->nb_add(op1, op2);\n            }\n        }\n                x = a + b;\n            return PyLong_FromLong(x);\n#ifdef HAVE_LONG_LONG\n        long_long:\n                llx = lla + llb;\n            return PyLong_FromLongLong(llx);\n#endif\n        \n        \n    }\n    #endif\n    if (PyFloat_CheckExact(op1)) {\n        const long b = intval;\n        double a = PyFloat_AS_DOUBLE(op1);\n            double result;\n            PyFPE_START_PROTECT(\"add\", return NULL)\n            result = ((double)a) + (double)b;\n            PyFPE_END_PROTECT(result)\n            return PyFloat_FromDouble(result);\n    }\n    return (inplace ? 
PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2);\n}\n#endif\n\n/* GetItemInt */\n  static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {\n    PyObject *r;\n    if (!j) return NULL;\n    r = PyObject_GetItem(o, j);\n    Py_DECREF(j);\n    return r;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,\n                                                              CYTHON_NCP_UNUSED int wraparound,\n                                                              CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    if (wraparound & unlikely(i < 0)) i += PyList_GET_SIZE(o);\n    if ((!boundscheck) || likely((0 <= i) & (i < PyList_GET_SIZE(o)))) {\n        PyObject *r = PyList_GET_ITEM(o, i);\n        Py_INCREF(r);\n        return r;\n    }\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n#else\n    return PySequence_GetItem(o, i);\n#endif\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,\n                                                              CYTHON_NCP_UNUSED int wraparound,\n                                                              CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    if (wraparound & unlikely(i < 0)) i += PyTuple_GET_SIZE(o);\n    if ((!boundscheck) || likely((0 <= i) & (i < PyTuple_GET_SIZE(o)))) {\n        PyObject *r = PyTuple_GET_ITEM(o, i);\n        Py_INCREF(r);\n        return r;\n    }\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n#else\n    return PySequence_GetItem(o, i);\n#endif\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list,\n                                                     CYTHON_NCP_UNUSED int wraparound,\n                                                     CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS\n    if (is_list || PyList_CheckExact(o)) {\n        Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o);\n        if ((!boundscheck) || (likely((n >= 0) & (n < PyList_GET_SIZE(o))))) {\n            PyObject *r = PyList_GET_ITEM(o, n);\n            Py_INCREF(r);\n            return r;\n        }\n    }\n    else if (PyTuple_CheckExact(o)) {\n        Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o);\n        if ((!boundscheck) || likely((n >= 0) & (n < PyTuple_GET_SIZE(o)))) {\n            PyObject *r = PyTuple_GET_ITEM(o, n);\n            Py_INCREF(r);\n            return r;\n        }\n    } else {\n        PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence;\n        if (likely(m && m->sq_item)) {\n            if (wraparound && unlikely(i < 0) && likely(m->sq_length)) {\n                Py_ssize_t l = m->sq_length(o);\n                if (likely(l >= 0)) {\n                    i += l;\n                } else {\n                    if (!PyErr_ExceptionMatches(PyExc_OverflowError))\n                        return NULL;\n                    PyErr_Clear();\n                }\n            }\n            return m->sq_item(o, i);\n        }\n    }\n#else\n    if (is_list || PySequence_Check(o)) {\n        return PySequence_GetItem(o, i);\n    }\n#endif\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n}\n\n/* BufferFormatCheck */\n  static CYTHON_INLINE int __Pyx_IsLittleEndian(void) {\n  unsigned int n = 1;\n  return *(unsigned char*)(&n) != 0;\n}\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type) {\n  stack[0].field = &ctx->root;\n  stack[0].parent_offset = 0;\n  ctx->root.type = type;\n  ctx->root.name = \"buffer dtype\";\n  ctx->root.offset = 0;\n  ctx->head = stack;\n  ctx->head->field = &ctx->root;\n  ctx->fmt_offset = 0;\n  
ctx->head->parent_offset = 0;\n  ctx->new_packmode = '@';\n  ctx->enc_packmode = '@';\n  ctx->new_count = 1;\n  ctx->enc_count = 0;\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  ctx->is_valid_array = 0;\n  ctx->struct_alignment = 0;\n  while (type->typegroup == 'S') {\n    ++ctx->head;\n    ctx->head->field = type->fields;\n    ctx->head->parent_offset = 0;\n    type = type->fields->type;\n  }\n}\nstatic int __Pyx_BufFmt_ParseNumber(const char** ts) {\n    int count;\n    const char* t = *ts;\n    if (*t < '0' || *t > '9') {\n      return -1;\n    } else {\n        count = *t++ - '0';\n        while (*t >= '0' && *t <= '9') {\n            count *= 10;\n            count += *t++ - '0';\n        }\n    }\n    *ts = t;\n    return count;\n}\nstatic int __Pyx_BufFmt_ExpectNumber(const char **ts) {\n    int number = __Pyx_BufFmt_ParseNumber(ts);\n    if (number == -1)\n        PyErr_Format(PyExc_ValueError,\\\n                     \"Does not understand character buffer dtype format string ('%c')\", **ts);\n    return number;\n}\nstatic void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {\n  PyErr_Format(PyExc_ValueError,\n               \"Unexpected format string character: '%c'\", ch);\n}\nstatic const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': return \"'char'\";\n    case 'b': return \"'signed char'\";\n    case 'B': return \"'unsigned char'\";\n    case 'h': return \"'short'\";\n    case 'H': return \"'unsigned short'\";\n    case 'i': return \"'int'\";\n    case 'I': return \"'unsigned int'\";\n    case 'l': return \"'long'\";\n    case 'L': return \"'unsigned long'\";\n    case 'q': return \"'long long'\";\n    case 'Q': return \"'unsigned long long'\";\n    case 'f': return (is_complex ? \"'complex float'\" : \"'float'\");\n    case 'd': return (is_complex ? \"'complex double'\" : \"'double'\");\n    case 'g': return (is_complex ? 
\"'complex long double'\" : \"'long double'\");\n    case 'T': return \"a struct\";\n    case 'O': return \"Python object\";\n    case 'P': return \"a pointer\";\n    case 's': case 'p': return \"a string\";\n    case 0: return \"end\";\n    default: return \"unparseable format string\";\n  }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return 2;\n    case 'i': case 'I': case 'l': case 'L': return 4;\n    case 'q': case 'Q': return 8;\n    case 'f': return (is_complex ? 8 : 4);\n    case 'd': return (is_complex ? 16 : 8);\n    case 'g': {\n      PyErr_SetString(PyExc_ValueError, \"Python does not define a standard format string size for long double ('g')..\");\n      return 0;\n    }\n    case 'O': case 'P': return sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(short);\n    case 'i': case 'I': return sizeof(int);\n    case 'l': case 'L': return sizeof(long);\n    #ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(PY_LONG_LONG);\n    #endif\n    case 'f': return sizeof(float) * (is_complex ? 2 : 1);\n    case 'd': return sizeof(double) * (is_complex ? 2 : 1);\n    case 'g': return sizeof(long double) * (is_complex ? 
2 : 1);\n    case 'O': case 'P': return sizeof(void*);\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\ntypedef struct { char c; short x; } __Pyx_st_short;\ntypedef struct { char c; int x; } __Pyx_st_int;\ntypedef struct { char c; long x; } __Pyx_st_long;\ntypedef struct { char c; float x; } __Pyx_st_float;\ntypedef struct { char c; double x; } __Pyx_st_double;\ntypedef struct { char c; long double x; } __Pyx_st_longdouble;\ntypedef struct { char c; void *x; } __Pyx_st_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_st_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_st_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\n/* These are for computing the padding at the end of the struct to align\n   on the first member of the struct. 
This will probably be the same as above,\n   but we don't have any guarantees.\n */\ntypedef struct { short x; char c; } __Pyx_pad_short;\ntypedef struct { int x; char c; } __Pyx_pad_int;\ntypedef struct { long x; char c; } __Pyx_pad_long;\ntypedef struct { float x; char c; } __Pyx_pad_float;\ntypedef struct { double x; char c; } __Pyx_pad_double;\ntypedef struct { long double x; char c; } __Pyx_pad_longdouble;\ntypedef struct { void *x; char c; } __Pyx_pad_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {\n  switch (ch) {\n    case 'c':\n        return 'H';\n    case 'b': case 'h': case 'i':\n    case 'l': case 'q': case 's': case 'p':\n        return 'I';\n    case 'B': case 'H': case 'I': case 'L': case 'Q':\n        return 'U';\n    case 'f': case 'd': case 'g':\n        return (is_complex ? 
'C' : 'R');\n    case 'O':\n        return 'O';\n    case 'P':\n        return 'P';\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\nstatic void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {\n  if (ctx->head == NULL || ctx->head->field == &ctx->root) {\n    const char* expected;\n    const char* quote;\n    if (ctx->head == NULL) {\n      expected = \"end\";\n      quote = \"\";\n    } else {\n      expected = ctx->head->field->type->name;\n      quote = \"'\";\n    }\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected %s%s%s but got %s\",\n                 quote, expected, quote,\n                 __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex));\n  } else {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_StructField* parent = (ctx->head - 1)->field;\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'\",\n                 field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex),\n                 parent->type->name, field->name);\n  }\n}\nstatic int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {\n  char group;\n  size_t size, offset, arraysize = 1;\n  if (ctx->enc_type == 0) return 0;\n  if (ctx->head->field->type->arraysize[0]) {\n    int i, ndim = 0;\n    if (ctx->enc_type == 's' || ctx->enc_type == 'p') {\n        ctx->is_valid_array = ctx->head->field->type->ndim == 1;\n        ndim = 1;\n        if (ctx->enc_count != ctx->head->field->type->arraysize[0]) {\n            PyErr_Format(PyExc_ValueError,\n                         \"Expected a dimension of size %zu, got %zu\",\n                         ctx->head->field->type->arraysize[0], ctx->enc_count);\n            return -1;\n        }\n    }\n    if (!ctx->is_valid_array) {\n      PyErr_Format(PyExc_ValueError, \"Expected %d dimensions, got %d\",\n                   ctx->head->field->type->ndim, 
ndim);\n      return -1;\n    }\n    for (i = 0; i < ctx->head->field->type->ndim; i++) {\n      arraysize *= ctx->head->field->type->arraysize[i];\n    }\n    ctx->is_valid_array = 0;\n    ctx->enc_count = 1;\n  }\n  group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex);\n  do {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_TypeInfo* type = field->type;\n    if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') {\n      size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex);\n    } else {\n      size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex);\n    }\n    if (ctx->enc_packmode == '@') {\n      size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex);\n      size_t align_mod_offset;\n      if (align_at == 0) return -1;\n      align_mod_offset = ctx->fmt_offset % align_at;\n      if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset;\n      if (ctx->struct_alignment == 0)\n          ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type,\n                                                                 ctx->is_complex);\n    }\n    if (type->size != size || type->typegroup != group) {\n      if (type->typegroup == 'C' && type->fields != NULL) {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        ++ctx->head;\n        ctx->head->field = type->fields;\n        ctx->head->parent_offset = parent_offset;\n        continue;\n      }\n      if ((type->typegroup == 'H' || group == 'H') && type->size == size) {\n      } else {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n      }\n    }\n    offset = ctx->head->parent_offset + field->offset;\n    if (ctx->fmt_offset != offset) {\n      PyErr_Format(PyExc_ValueError,\n                   \"Buffer dtype mismatch; next field is at offset %\" CYTHON_FORMAT_SSIZE_T \"d but %\" CYTHON_FORMAT_SSIZE_T \"d expected\",\n                   
(Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset);\n      return -1;\n    }\n    ctx->fmt_offset += size;\n    if (arraysize)\n      ctx->fmt_offset += (arraysize - 1) * size;\n    --ctx->enc_count;\n    while (1) {\n      if (field == &ctx->root) {\n        ctx->head = NULL;\n        if (ctx->enc_count != 0) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n        }\n        break;\n      }\n      ctx->head->field = ++field;\n      if (field->type == NULL) {\n        --ctx->head;\n        field = ctx->head->field;\n        continue;\n      } else if (field->type->typegroup == 'S') {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        if (field->type->fields->type == NULL) continue;\n        field = field->type->fields;\n        ++ctx->head;\n        ctx->head->field = field;\n        ctx->head->parent_offset = parent_offset;\n        break;\n      } else {\n        break;\n      }\n    }\n  } while (ctx->enc_count);\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  return 0;\n}\nstatic CYTHON_INLINE PyObject *\n__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)\n{\n    const char *ts = *tsp;\n    int i = 0, number;\n    int ndim = ctx->head->field->type->ndim;\n    ++ts;\n    if (ctx->new_count != 1) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Cannot handle repeated arrays in format string\");\n        return NULL;\n    }\n    if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n    while (*ts && *ts != ')') {\n        switch (*ts) {\n            case ' ': case '\\f': case '\\r': case '\\n': case '\\t': case '\\v':  ++ts; continue;\n            default:  break;\n        }\n        number = __Pyx_BufFmt_ExpectNumber(&ts);\n        if (number == -1) return NULL;\n        if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])\n            return PyErr_Format(PyExc_ValueError,\n                        \"Expected a dimension of size %zu, got %d\",\n     
                   ctx->head->field->type->arraysize[i], number);\n        if (*ts != ',' && *ts != ')')\n            return PyErr_Format(PyExc_ValueError,\n                                \"Expected a comma in format string, got '%c'\", *ts);\n        if (*ts == ',') ts++;\n        i++;\n    }\n    if (i != ndim)\n        return PyErr_Format(PyExc_ValueError, \"Expected %d dimension(s), got %d\",\n                            ctx->head->field->type->ndim, i);\n    if (!*ts) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Unexpected end of format string, expected ')'\");\n        return NULL;\n    }\n    ctx->is_valid_array = 1;\n    ctx->new_count = 1;\n    *tsp = ++ts;\n    return Py_None;\n}\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {\n  int got_Z = 0;\n  while (1) {\n    switch(*ts) {\n      case 0:\n        if (ctx->enc_type != 0 && ctx->head == NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        if (ctx->head != NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        return ts;\n      case ' ':\n      case '\\r':\n      case '\\n':\n        ++ts;\n        break;\n      case '<':\n        if (!__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Little-endian buffer not supported on big-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '>':\n      case '!':\n        if (__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Big-endian buffer not supported on little-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '=':\n      case '@':\n      case '^':\n        ctx->new_packmode = *ts++;\n        break;\n      case 'T':\n        {\n          
const char* ts_after_sub;\n          size_t i, struct_count = ctx->new_count;\n          size_t struct_alignment = ctx->struct_alignment;\n          ctx->new_count = 1;\n          ++ts;\n          if (*ts != '{') {\n            PyErr_SetString(PyExc_ValueError, \"Buffer acquisition: Expected '{' after 'T'\");\n            return NULL;\n          }\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          ctx->enc_count = 0;\n          ctx->struct_alignment = 0;\n          ++ts;\n          ts_after_sub = ts;\n          for (i = 0; i != struct_count; ++i) {\n            ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);\n            if (!ts_after_sub) return NULL;\n          }\n          ts = ts_after_sub;\n          if (struct_alignment) ctx->struct_alignment = struct_alignment;\n        }\n        break;\n      case '}':\n        {\n          size_t alignment = ctx->struct_alignment;\n          ++ts;\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          if (alignment && ctx->fmt_offset % alignment) {\n            ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment);\n          }\n        }\n        return ts;\n      case 'x':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->fmt_offset += ctx->new_count;\n        ctx->new_count = 1;\n        ctx->enc_count = 0;\n        ctx->enc_type = 0;\n        ctx->enc_packmode = ctx->new_packmode;\n        ++ts;\n        break;\n      case 'Z':\n        got_Z = 1;\n        ++ts;\n        if (*ts != 'f' && *ts != 'd' && *ts != 'g') {\n          __Pyx_BufFmt_RaiseUnexpectedChar('Z');\n          return NULL;\n        }\n      case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':\n      case 'l': case 'L': case 'q': case 'Q':\n      case 'f': case 'd': case 'g':\n      case 'O': case 'p':\n        if (ctx->enc_type == *ts && got_Z == ctx->is_complex &&\n            
ctx->enc_packmode == ctx->new_packmode) {\n          ctx->enc_count += ctx->new_count;\n          ctx->new_count = 1;\n          got_Z = 0;\n          ++ts;\n          break;\n        }\n      case 's':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->enc_count = ctx->new_count;\n        ctx->enc_packmode = ctx->new_packmode;\n        ctx->enc_type = *ts;\n        ctx->is_complex = got_Z;\n        ++ts;\n        ctx->new_count = 1;\n        got_Z = 0;\n        break;\n      case ':':\n        ++ts;\n        while(*ts != ':') ++ts;\n        ++ts;\n        break;\n      case '(':\n        if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL;\n        break;\n      default:\n        {\n          int number = __Pyx_BufFmt_ExpectNumber(&ts);\n          if (number == -1) return NULL;\n          ctx->new_count = (size_t)number;\n        }\n    }\n  }\n}\nstatic CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) {\n  buf->buf = NULL;\n  buf->obj = NULL;\n  buf->strides = __Pyx_zeros;\n  buf->shape = __Pyx_zeros;\n  buf->suboffsets = __Pyx_minusones;\n}\nstatic CYTHON_INLINE int __Pyx_GetBufferAndValidate(\n        Py_buffer* buf, PyObject* obj,  __Pyx_TypeInfo* dtype, int flags,\n        int nd, int cast, __Pyx_BufFmt_StackElem* stack)\n{\n  if (obj == Py_None || obj == NULL) {\n    __Pyx_ZeroBuffer(buf);\n    return 0;\n  }\n  buf->buf = NULL;\n  if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail;\n  if (buf->ndim != nd) {\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer has wrong number of dimensions (expected %d, got %d)\",\n                 nd, buf->ndim);\n    goto fail;\n  }\n  if (!cast) {\n    __Pyx_BufFmt_Context ctx;\n    __Pyx_BufFmt_Init(&ctx, stack, dtype);\n    if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;\n  }\n  if ((unsigned)buf->itemsize != dtype->size) {\n    PyErr_Format(PyExc_ValueError,\n      \"Item size of buffer (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s) does not match size of '%s' (%\" 
CYTHON_FORMAT_SSIZE_T \"d byte%s)\",\n      buf->itemsize, (buf->itemsize > 1) ? \"s\" : \"\",\n      dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? \"s\" : \"\");\n    goto fail;\n  }\n  if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;\n  return 0;\nfail:;\n  __Pyx_ZeroBuffer(buf);\n  return -1;\n}\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {\n  if (info->buf == NULL) return;\n  if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;\n  __Pyx_ReleaseBuffer(info);\n}\n\n/* GetModuleGlobalName */\n    static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) {\n    PyObject *result;\n#if !CYTHON_AVOID_BORROWED_REFS\n    result = PyDict_GetItem(__pyx_d, name);\n    if (likely(result)) {\n        Py_INCREF(result);\n    } else {\n#else\n    result = PyObject_GetItem(__pyx_d, name);\n    if (!result) {\n        PyErr_Clear();\n#endif\n        result = __Pyx_GetBuiltinName(name);\n    }\n    return result;\n}\n\n/* PyCFunctionFastCall */\n      #if CYTHON_FAST_PYCCALL\nstatic CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) {\n    PyCFunctionObject *func = (PyCFunctionObject*)func_obj;\n    PyCFunction meth = PyCFunction_GET_FUNCTION(func);\n    PyObject *self = PyCFunction_GET_SELF(func);\n    assert(PyCFunction_Check(func));\n    assert(METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST)));\n    assert(nargs >= 0);\n    assert(nargs == 0 || args != NULL);\n    /* _PyCFunction_FastCallDict() must not be called with an exception set,\n       because it may clear it (directly or indirectly) and so the\n       caller loses its exception */\n    assert(!PyErr_Occurred());\n    return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs, NULL);\n}\n#endif  // CYTHON_FAST_PYCCALL\n\n/* PyFunctionFastCall */\n      #if CYTHON_FAST_PYCALL\n#include \"frameobject.h\"\nstatic PyObject* 
__Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,\n                                               PyObject *globals) {\n    PyFrameObject *f;\n    PyThreadState *tstate = PyThreadState_GET();\n    PyObject **fastlocals;\n    Py_ssize_t i;\n    PyObject *result;\n    assert(globals != NULL);\n    /* XXX Perhaps we should create a specialized\n       PyFrame_New() that doesn't take locals, but does\n       take builtins without sanity checking them.\n       */\n    assert(tstate != NULL);\n    f = PyFrame_New(tstate, co, globals, NULL);\n    if (f == NULL) {\n        return NULL;\n    }\n    fastlocals = f->f_localsplus;\n    for (i = 0; i < na; i++) {\n        Py_INCREF(*args);\n        fastlocals[i] = *args++;\n    }\n    result = PyEval_EvalFrameEx(f,0);\n    ++tstate->recursion_depth;\n    Py_DECREF(f);\n    --tstate->recursion_depth;\n    return result;\n}\n#if 1 || PY_VERSION_HEX < 0x030600B1\nstatic PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs) {\n    PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);\n    PyObject *globals = PyFunction_GET_GLOBALS(func);\n    PyObject *argdefs = PyFunction_GET_DEFAULTS(func);\n    PyObject *closure;\n#if PY_MAJOR_VERSION >= 3\n    PyObject *kwdefs;\n#endif\n    PyObject *kwtuple, **k;\n    PyObject **d;\n    Py_ssize_t nd;\n    Py_ssize_t nk;\n    PyObject *result;\n    assert(kwargs == NULL || PyDict_Check(kwargs));\n    nk = kwargs ? 
PyDict_Size(kwargs) : 0;\n    if (Py_EnterRecursiveCall((char*)\" while calling a Python object\")) {\n        return NULL;\n    }\n    if (\n#if PY_MAJOR_VERSION >= 3\n            co->co_kwonlyargcount == 0 &&\n#endif\n            likely(kwargs == NULL || nk == 0) &&\n            co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) {\n        if (argdefs == NULL && co->co_argcount == nargs) {\n            result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals);\n            goto done;\n        }\n        else if (nargs == 0 && argdefs != NULL\n                 && co->co_argcount == Py_SIZE(argdefs)) {\n            /* function called with no arguments, but all parameters have\n               a default value: use default values as arguments .*/\n            args = &PyTuple_GET_ITEM(argdefs, 0);\n            result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals);\n            goto done;\n        }\n    }\n    if (kwargs != NULL) {\n        Py_ssize_t pos, i;\n        kwtuple = PyTuple_New(2 * nk);\n        if (kwtuple == NULL) {\n            result = NULL;\n            goto done;\n        }\n        k = &PyTuple_GET_ITEM(kwtuple, 0);\n        pos = i = 0;\n        while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) {\n            Py_INCREF(k[i]);\n            Py_INCREF(k[i+1]);\n            i += 2;\n        }\n        nk = i / 2;\n    }\n    else {\n        kwtuple = NULL;\n        k = NULL;\n    }\n    closure = PyFunction_GET_CLOSURE(func);\n#if PY_MAJOR_VERSION >= 3\n    kwdefs = PyFunction_GET_KW_DEFAULTS(func);\n#endif\n    if (argdefs != NULL) {\n        d = &PyTuple_GET_ITEM(argdefs, 0);\n        nd = Py_SIZE(argdefs);\n    }\n    else {\n        d = NULL;\n        nd = 0;\n    }\n#if PY_MAJOR_VERSION >= 3\n    result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,\n                               args, nargs,\n                               k, (int)nk,\n                               d, (int)nd, kwdefs, 
closure);\n#else\n    result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,\n                               args, nargs,\n                               k, (int)nk,\n                               d, (int)nd, closure);\n#endif\n    Py_XDECREF(kwtuple);\ndone:\n    Py_LeaveRecursiveCall();\n    return result;\n}\n#endif  // CPython < 3.6\n#endif  // CYTHON_FAST_PYCALL\n\n/* PyObjectCallMethO */\n      #if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) {\n    PyObject *self, *result;\n    PyCFunction cfunc;\n    cfunc = PyCFunction_GET_FUNCTION(func);\n    self = PyCFunction_GET_SELF(func);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = cfunc(self, arg);\n    Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return result;\n}\n#endif\n\n/* PyObjectCallOneArg */\n      #if CYTHON_COMPILING_IN_CPYTHON\nstatic PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n    PyObject *result;\n    PyObject *args = PyTuple_New(1);\n    if (unlikely(!args)) return NULL;\n    Py_INCREF(arg);\n    PyTuple_SET_ITEM(args, 0, arg);\n    result = __Pyx_PyObject_Call(func, args, NULL);\n    Py_DECREF(args);\n    return result;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n#if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(func)) {\n        return __Pyx_PyFunction_FastCall(func, &arg, 1);\n    }\n#endif\n#ifdef __Pyx_CyFunction_USED\n    if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) {\n#else\n    if (likely(PyCFunction_Check(func))) {\n#endif\n        if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) {\n            return __Pyx_PyObject_CallMethO(func, arg);\n#if 
CYTHON_FAST_PYCCALL\n        } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) {\n            return __Pyx_PyCFunction_FastCall(func, &arg, 1);\n#endif\n        }\n    }\n    return __Pyx__PyObject_CallOneArg(func, arg);\n}\n#else\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n    PyObject *result;\n    PyObject *args = PyTuple_Pack(1, arg);\n    if (unlikely(!args)) return NULL;\n    result = __Pyx_PyObject_Call(func, args, NULL);\n    Py_DECREF(args);\n    return result;\n}\n#endif\n\n/* PyIntBinop */\n        #if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED int inplace) {\n    if (op1 == op2) {\n        Py_RETURN_TRUE;\n    }\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_CheckExact(op1))) {\n        const long b = intval;\n        long a = PyInt_AS_LONG(op1);\n        if (a == b) {\n            Py_RETURN_TRUE;\n        } else {\n            Py_RETURN_FALSE;\n        }\n    }\n    #endif\n    #if CYTHON_USE_PYLONG_INTERNALS\n    if (likely(PyLong_CheckExact(op1))) {\n        const long b = intval;\n        long a;\n        const digit* digits = ((PyLongObject*)op1)->ob_digit;\n        const Py_ssize_t size = Py_SIZE(op1);\n        if (likely(__Pyx_sst_abs(size) <= 1)) {\n            a = likely(size) ? 
digits[0] : 0;\n            if (size == -1) a = -a;\n        } else {\n            switch (size) {\n                case -2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case -3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case -4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                #if PyLong_SHIFT < 30 && 
PyLong_SHIFT != 15\n                default: return PyLong_Type.tp_richcompare(op1, op2, Py_EQ);\n                #else\n                default: Py_RETURN_FALSE;\n                #endif\n            }\n        }\n            if (a == b) {\n                Py_RETURN_TRUE;\n            } else {\n                Py_RETURN_FALSE;\n            }\n    }\n    #endif\n    if (PyFloat_CheckExact(op1)) {\n        const long b = intval;\n        double a = PyFloat_AS_DOUBLE(op1);\n            if ((double)a == (double)b) {\n                Py_RETURN_TRUE;\n            } else {\n                Py_RETURN_FALSE;\n            }\n    }\n    return PyObject_RichCompare(op1, op2, Py_EQ);\n}\n#endif\n\n/* FetchCommonType */\n        static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) {\n    PyObject* fake_module;\n    PyTypeObject* cached_type = NULL;\n    fake_module = PyImport_AddModule((char*) \"_cython_\" CYTHON_ABI);\n    if (!fake_module) return NULL;\n    Py_INCREF(fake_module);\n    cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name);\n    if (cached_type) {\n        if (!PyType_Check((PyObject*)cached_type)) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s is not a type object\",\n                type->tp_name);\n            goto bad;\n        }\n        if (cached_type->tp_basicsize != type->tp_basicsize) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s has the wrong size, try recompiling\",\n                type->tp_name);\n            goto bad;\n        }\n    } else {\n        if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad;\n        PyErr_Clear();\n        if (PyType_Ready(type) < 0) goto bad;\n        if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0)\n            goto bad;\n        Py_INCREF(type);\n        cached_type = type;\n    }\ndone:\n    Py_DECREF(fake_module);\n    return cached_type;\nbad:\n    
Py_XDECREF(cached_type);\n    cached_type = NULL;\n    goto done;\n}\n\n/* CythonFunction */\n        static PyObject *\n__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)\n{\n    if (unlikely(op->func_doc == NULL)) {\n        if (op->func.m_ml->ml_doc) {\n#if PY_MAJOR_VERSION >= 3\n            op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc);\n#else\n            op->func_doc = PyString_FromString(op->func.m_ml->ml_doc);\n#endif\n            if (unlikely(op->func_doc == NULL))\n                return NULL;\n        } else {\n            Py_INCREF(Py_None);\n            return Py_None;\n        }\n    }\n    Py_INCREF(op->func_doc);\n    return op->func_doc;\n}\nstatic int\n__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value)\n{\n    PyObject *tmp = op->func_doc;\n    if (value == NULL) {\n        value = Py_None;\n    }\n    Py_INCREF(value);\n    op->func_doc = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op)\n{\n    if (unlikely(op->func_name == NULL)) {\n#if PY_MAJOR_VERSION >= 3\n        op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name);\n#else\n        op->func_name = PyString_InternFromString(op->func.m_ml->ml_name);\n#endif\n        if (unlikely(op->func_name == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_name);\n    return op->func_name;\n}\nstatic int\n__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value))) {\n#else\n    if (unlikely(value == NULL || !PyString_Check(value))) {\n#endif\n        PyErr_SetString(PyExc_TypeError,\n                        \"__name__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_name;\n    Py_INCREF(value);\n    op->func_name = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject 
*\n__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op)\n{\n    Py_INCREF(op->func_qualname);\n    return op->func_qualname;\n}\nstatic int\n__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value))) {\n#else\n    if (unlikely(value == NULL || !PyString_Check(value))) {\n#endif\n        PyErr_SetString(PyExc_TypeError,\n                        \"__qualname__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_qualname;\n    Py_INCREF(value);\n    op->func_qualname = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure)\n{\n    PyObject *self;\n    self = m->func_closure;\n    if (self == NULL)\n        self = Py_None;\n    Py_INCREF(self);\n    return self;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op)\n{\n    if (unlikely(op->func_dict == NULL)) {\n        op->func_dict = PyDict_New();\n        if (unlikely(op->func_dict == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_dict);\n    return op->func_dict;\n}\nstatic int\n__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value)\n{\n    PyObject *tmp;\n    if (unlikely(value == NULL)) {\n        PyErr_SetString(PyExc_TypeError,\n               \"function's dictionary may not be deleted\");\n        return -1;\n    }\n    if (unlikely(!PyDict_Check(value))) {\n        PyErr_SetString(PyExc_TypeError,\n               \"setting function's dictionary to a non-dict\");\n        return -1;\n    }\n    tmp = op->func_dict;\n    Py_INCREF(value);\n    op->func_dict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op)\n{\n    Py_INCREF(op->func_globals);\n    return op->func_globals;\n}\nstatic PyObject 
*\n__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op)\n{\n    Py_INCREF(Py_None);\n    return Py_None;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op)\n{\n    PyObject* result = (op->func_code) ? op->func_code : Py_None;\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) {\n    int result = 0;\n    PyObject *res = op->defaults_getter((PyObject *) op);\n    if (unlikely(!res))\n        return -1;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    op->defaults_tuple = PyTuple_GET_ITEM(res, 0);\n    Py_INCREF(op->defaults_tuple);\n    op->defaults_kwdict = PyTuple_GET_ITEM(res, 1);\n    Py_INCREF(op->defaults_kwdict);\n    #else\n    op->defaults_tuple = PySequence_ITEM(res, 0);\n    if (unlikely(!op->defaults_tuple)) result = -1;\n    else {\n        op->defaults_kwdict = PySequence_ITEM(res, 1);\n        if (unlikely(!op->defaults_kwdict)) result = -1;\n    }\n    #endif\n    Py_DECREF(res);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None && !PyTuple_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__defaults__ must be set to a tuple object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_tuple;\n    op->defaults_tuple = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op) {\n    PyObject* result = op->defaults_tuple;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_tuple;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic 
int\n__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None && !PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__kwdefaults__ must be set to a dict object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_kwdict;\n    op->defaults_kwdict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op) {\n    PyObject* result = op->defaults_kwdict;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_kwdict;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value) {\n    PyObject* tmp;\n    if (!value || value == Py_None) {\n        value = NULL;\n    } else if (!PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__annotations__ must be set to a dict object\");\n        return -1;\n    }\n    Py_XINCREF(value);\n    tmp = op->func_annotations;\n    op->func_annotations = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op) {\n    PyObject* result = op->func_annotations;\n    if (unlikely(!result)) {\n        result = PyDict_New();\n        if (unlikely(!result)) return NULL;\n        op->func_annotations = result;\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic PyGetSetDef __pyx_CyFunction_getsets[] = {\n    {(char *) \"func_doc\", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) \"__doc__\",  (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) 
\"func_name\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    {(char *) \"__name__\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    {(char *) \"__qualname__\", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0},\n    {(char *) \"__self__\", (getter)__Pyx_CyFunction_get_self, 0, 0, 0},\n    {(char *) \"func_dict\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"__dict__\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"func_globals\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"__globals__\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"func_closure\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"__closure__\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"func_code\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"__code__\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"func_defaults\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__defaults__\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__kwdefaults__\", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0},\n    {(char *) \"__annotations__\", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0},\n    {0, 0, 0, 0, 0}\n};\nstatic PyMemberDef __pyx_CyFunction_members[] = {\n    {(char *) \"__module__\", T_OBJECT, offsetof(__pyx_CyFunctionObject, func.m_module), PY_WRITE_RESTRICTED, 0},\n    {0, 0, 0,  0, 0}\n};\nstatic PyObject *\n__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromString(m->func.m_ml->ml_name);\n#else\n    return 
PyString_FromString(m->func.m_ml->ml_name);\n#endif\n}\nstatic PyMethodDef __pyx_CyFunction_methods[] = {\n    {\"__reduce__\", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0},\n    {0, 0, 0, 0}\n};\n#if PY_VERSION_HEX < 0x030500A0\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist)\n#else\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist)\n#endif\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags, PyObject* qualname,\n                                      PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {\n    __pyx_CyFunctionObject *op = PyObject_GC_New(__pyx_CyFunctionObject, type);\n    if (op == NULL)\n        return NULL;\n    op->flags = flags;\n    __Pyx_CyFunction_weakreflist(op) = NULL;\n    op->func.m_ml = ml;\n    op->func.m_self = (PyObject *) op;\n    Py_XINCREF(closure);\n    op->func_closure = closure;\n    Py_XINCREF(module);\n    op->func.m_module = module;\n    op->func_dict = NULL;\n    op->func_name = NULL;\n    Py_INCREF(qualname);\n    op->func_qualname = qualname;\n    op->func_doc = NULL;\n    op->func_classobj = NULL;\n    op->func_globals = globals;\n    Py_INCREF(op->func_globals);\n    Py_XINCREF(code);\n    op->func_code = code;\n    op->defaults_pyobjects = 0;\n    op->defaults = NULL;\n    op->defaults_tuple = NULL;\n    op->defaults_kwdict = NULL;\n    op->defaults_getter = NULL;\n    op->func_annotations = NULL;\n    PyObject_GC_Track(op);\n    return (PyObject *) op;\n}\nstatic int\n__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m)\n{\n    Py_CLEAR(m->func_closure);\n    Py_CLEAR(m->func.m_module);\n    Py_CLEAR(m->func_dict);\n    Py_CLEAR(m->func_name);\n    Py_CLEAR(m->func_qualname);\n    Py_CLEAR(m->func_doc);\n    Py_CLEAR(m->func_globals);\n    Py_CLEAR(m->func_code);\n    Py_CLEAR(m->func_classobj);\n    Py_CLEAR(m->defaults_tuple);\n    Py_CLEAR(m->defaults_kwdict);\n    Py_CLEAR(m->func_annotations);\n    if 
(m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n        for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_XDECREF(pydefaults[i]);\n        PyObject_Free(m->defaults);\n        m->defaults = NULL;\n    }\n    return 0;\n}\nstatic void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)\n{\n    PyObject_GC_UnTrack(m);\n    if (__Pyx_CyFunction_weakreflist(m) != NULL)\n        PyObject_ClearWeakRefs((PyObject *) m);\n    __Pyx_CyFunction_clear(m);\n    PyObject_GC_Del(m);\n}\nstatic int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)\n{\n    Py_VISIT(m->func_closure);\n    Py_VISIT(m->func.m_module);\n    Py_VISIT(m->func_dict);\n    Py_VISIT(m->func_name);\n    Py_VISIT(m->func_qualname);\n    Py_VISIT(m->func_doc);\n    Py_VISIT(m->func_globals);\n    Py_VISIT(m->func_code);\n    Py_VISIT(m->func_classobj);\n    Py_VISIT(m->defaults_tuple);\n    Py_VISIT(m->defaults_kwdict);\n    if (m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n        for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_VISIT(pydefaults[i]);\n    }\n    return 0;\n}\nstatic PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type)\n{\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) {\n        Py_INCREF(func);\n        return func;\n    }\n    if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) {\n        if (type == NULL)\n            type = (PyObject *)(Py_TYPE(obj));\n        return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type)));\n    }\n    if (obj == Py_None)\n        obj = NULL;\n    return __Pyx_PyMethod_New(func, obj, type);\n}\nstatic PyObject*\n__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromFormat(\"<cyfunction %U at %p>\",\n                                
op->func_qualname, (void *)op);\n#else\n    return PyString_FromFormat(\"<cyfunction %s at %p>\",\n                               PyString_AsString(op->func_qualname), (void *)op);\n#endif\n}\nstatic PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {\n    PyCFunctionObject* f = (PyCFunctionObject*)func;\n    PyCFunction meth = f->m_ml->ml_meth;\n    Py_ssize_t size;\n    switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {\n    case METH_VARARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0))\n            return (*meth)(self, arg);\n        break;\n    case METH_VARARGS | METH_KEYWORDS:\n        return (*(PyCFunctionWithKeywords)meth)(self, arg, kw);\n    case METH_NOARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 0))\n                return (*meth)(self, NULL);\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes no arguments (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    case METH_O:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 1)) {\n                PyObject *result, *arg0 = PySequence_ITEM(arg, 0);\n                if (unlikely(!arg0)) return NULL;\n                result = (*meth)(self, arg0);\n                Py_DECREF(arg0);\n                return result;\n            }\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes exactly one argument (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    default:\n        PyErr_SetString(PyExc_SystemError, \"Bad call flags in \"\n                        \"__Pyx_CyFunction_Call. 
METH_OLDARGS is no \"\n                        \"longer supported!\");\n        return NULL;\n    }\n    PyErr_Format(PyExc_TypeError, \"%.200s() takes no keyword arguments\",\n                 f->m_ml->ml_name);\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);\n}\nstatic PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) {\n    PyObject *result;\n    __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func;\n    if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) {\n        Py_ssize_t argc;\n        PyObject *new_args;\n        PyObject *self;\n        argc = PyTuple_GET_SIZE(args);\n        new_args = PyTuple_GetSlice(args, 1, argc);\n        if (unlikely(!new_args))\n            return NULL;\n        self = PyTuple_GetItem(args, 0);\n        if (unlikely(!self)) {\n            Py_DECREF(new_args);\n            return NULL;\n        }\n        result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw);\n        Py_DECREF(new_args);\n    } else {\n        result = __Pyx_CyFunction_Call(func, args, kw);\n    }\n    return result;\n}\nstatic PyTypeObject __pyx_CyFunctionType_type = {\n    PyVarObject_HEAD_INIT(0, 0)\n    \"cython_function_or_method\",\n    sizeof(__pyx_CyFunctionObject),\n    0,\n    (destructor) __Pyx_CyFunction_dealloc,\n    0,\n    0,\n    0,\n#if PY_MAJOR_VERSION < 3\n    0,\n#else\n    0,\n#endif\n    (reprfunc) __Pyx_CyFunction_repr,\n    0,\n    0,\n    0,\n    0,\n    __Pyx_CyFunction_CallAsMethod,\n    0,\n    0,\n    0,\n    0,\n    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,\n    0,\n    (traverseproc) __Pyx_CyFunction_traverse,\n    (inquiry) __Pyx_CyFunction_clear,\n    0,\n#if PY_VERSION_HEX < 0x030500A0\n    offsetof(__pyx_CyFunctionObject, func_weakreflist),\n#else\n    
offsetof(PyCFunctionObject, m_weakreflist),\n#endif\n    0,\n    0,\n    __pyx_CyFunction_methods,\n    __pyx_CyFunction_members,\n    __pyx_CyFunction_getsets,\n    0,\n    0,\n    __Pyx_CyFunction_descr_get,\n    0,\n    offsetof(__pyx_CyFunctionObject, func_dict),\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n#if PY_VERSION_HEX >= 0x030400a1\n    0,\n#endif\n};\nstatic int __pyx_CyFunction_init(void) {\n    __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);\n    if (__pyx_CyFunctionType == NULL) {\n        return -1;\n    }\n    return 0;\n}\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults = PyObject_Malloc(size);\n    if (!m->defaults)\n        return PyErr_NoMemory();\n    memset(m->defaults, 0, size);\n    m->defaults_pyobjects = pyobjects;\n    return m->defaults;\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_tuple = tuple;\n    Py_INCREF(tuple);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_kwdict = dict;\n    Py_INCREF(dict);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->func_annotations = dict;\n    Py_INCREF(dict);\n}\n\n/* BufferFallbackError */\n            static void __Pyx_RaiseBufferFallbackError(void) {\n  PyErr_SetString(PyExc_ValueError,\n     \"Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!\");\n}\n\n/* None */\n            static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) 
{\n    Py_ssize_t q = a / b;\n    Py_ssize_t r = a - q*b;\n    q -= ((r != 0) & ((r ^ b) < 0));\n    return q;\n}\n\n/* BufferIndexError */\n            static void __Pyx_RaiseBufferIndexError(int axis) {\n  PyErr_Format(PyExc_IndexError,\n     \"Out of bounds on buffer access (axis %d)\", axis);\n}\n\n/* RaiseTooManyValuesToUnpack */\n            static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {\n    PyErr_Format(PyExc_ValueError,\n                 \"too many values to unpack (expected %\" CYTHON_FORMAT_SSIZE_T \"d)\", expected);\n}\n\n/* RaiseNeedMoreValuesToUnpack */\n            static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {\n    PyErr_Format(PyExc_ValueError,\n                 \"need more than %\" CYTHON_FORMAT_SSIZE_T \"d value%.1s to unpack\",\n                 index, (index == 1) ? \"\" : \"s\");\n}\n\n/* RaiseNoneIterError */\n            static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n}\n\n/* SaveResetException */\n            #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->exc_type;\n    *value = tstate->exc_value;\n    *tb = tstate->exc_traceback;\n    Py_XINCREF(*type);\n    Py_XINCREF(*value);\n    Py_XINCREF(*tb);\n}\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = type;\n    tstate->exc_value = value;\n    tstate->exc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\n#endif\n\n/* PyErrExceptionMatches */\n            #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE int 
__Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {\n    PyObject *exc_type = tstate->curexc_type;\n    if (exc_type == err) return 1;\n    if (unlikely(!exc_type)) return 0;\n    return PyErr_GivenExceptionMatches(exc_type, err);\n}\n#endif\n\n/* GetException */\n            #if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) {\n#endif\n    PyObject *local_type, *local_value, *local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    local_type = tstate->curexc_type;\n    local_value = tstate->curexc_value;\n    local_tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n#else\n    PyErr_Fetch(&local_type, &local_value, &local_tb);\n#endif\n    PyErr_NormalizeException(&local_type, &local_value, &local_tb);\n#if CYTHON_FAST_THREAD_STATE\n    if (unlikely(tstate->curexc_type))\n#else\n    if (unlikely(PyErr_Occurred()))\n#endif\n        goto bad;\n    #if PY_MAJOR_VERSION >= 3\n    if (local_tb) {\n        if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))\n            goto bad;\n    }\n    #endif\n    Py_XINCREF(local_tb);\n    Py_XINCREF(local_type);\n    Py_XINCREF(local_value);\n    *type = local_type;\n    *value = local_value;\n    *tb = local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = local_type;\n    tstate->exc_value = local_value;\n    tstate->exc_traceback = local_tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n#else\n    PyErr_SetExcInfo(local_type, local_value, local_tb);\n#endif\n    return 0;\nbad:\n    *type = 0;\n    *value = 0;\n    *tb = 0;\n    Py_XDECREF(local_type);\n    
Py_XDECREF(local_value);\n    Py_XDECREF(local_tb);\n    return -1;\n}\n\n/* Import */\n              static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {\n    PyObject *empty_list = 0;\n    PyObject *module = 0;\n    PyObject *global_dict = 0;\n    PyObject *empty_dict = 0;\n    PyObject *list;\n    #if PY_VERSION_HEX < 0x03030000\n    PyObject *py_import;\n    py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);\n    if (!py_import)\n        goto bad;\n    #endif\n    if (from_list)\n        list = from_list;\n    else {\n        empty_list = PyList_New(0);\n        if (!empty_list)\n            goto bad;\n        list = empty_list;\n    }\n    global_dict = PyModule_GetDict(__pyx_m);\n    if (!global_dict)\n        goto bad;\n    empty_dict = PyDict_New();\n    if (!empty_dict)\n        goto bad;\n    {\n        #if PY_MAJOR_VERSION >= 3\n        if (level == -1) {\n            if (strchr(__Pyx_MODULE_NAME, '.')) {\n                #if PY_VERSION_HEX < 0x03030000\n                PyObject *py_level = PyInt_FromLong(1);\n                if (!py_level)\n                    goto bad;\n                module = PyObject_CallFunctionObjArgs(py_import,\n                    name, global_dict, empty_dict, list, py_level, NULL);\n                Py_DECREF(py_level);\n                #else\n                module = PyImport_ImportModuleLevelObject(\n                    name, global_dict, empty_dict, list, 1);\n                #endif\n                if (!module) {\n                    if (!PyErr_ExceptionMatches(PyExc_ImportError))\n                        goto bad;\n                    PyErr_Clear();\n                }\n            }\n            level = 0;\n        }\n        #endif\n        if (!module) {\n            #if PY_VERSION_HEX < 0x03030000\n            PyObject *py_level = PyInt_FromLong(level);\n            if (!py_level)\n                goto bad;\n            module = PyObject_CallFunctionObjArgs(py_import,\n          
      name, global_dict, empty_dict, list, py_level, NULL);\n            Py_DECREF(py_level);\n            #else\n            module = PyImport_ImportModuleLevelObject(\n                name, global_dict, empty_dict, list, level);\n            #endif\n        }\n    }\nbad:\n    #if PY_VERSION_HEX < 0x03030000\n    Py_XDECREF(py_import);\n    #endif\n    Py_XDECREF(empty_list);\n    Py_XDECREF(empty_dict);\n    return module;\n}\n\n/* CodeObjectCache */\n              static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {\n    int start = 0, mid = 0, end = count - 1;\n    if (end >= 0 && code_line > entries[end].code_line) {\n        return count;\n    }\n    while (start < end) {\n        mid = start + (end - start) / 2;\n        if (code_line < entries[mid].code_line) {\n            end = mid;\n        } else if (code_line > entries[mid].code_line) {\n             start = mid + 1;\n        } else {\n            return mid;\n        }\n    }\n    if (code_line <= entries[mid].code_line) {\n        return mid;\n    } else {\n        return mid + 1;\n    }\n}\nstatic PyCodeObject *__pyx_find_code_object(int code_line) {\n    PyCodeObject* code_object;\n    int pos;\n    if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {\n        return NULL;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {\n        return NULL;\n    }\n    code_object = __pyx_code_cache.entries[pos].code_object;\n    Py_INCREF(code_object);\n    return code_object;\n}\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {\n    int pos, i;\n    __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;\n    if (unlikely(!code_line)) {\n        return;\n    }\n    if (unlikely(!entries)) {\n        entries = 
(__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (likely(entries)) {\n            __pyx_code_cache.entries = entries;\n            __pyx_code_cache.max_count = 64;\n            __pyx_code_cache.count = 1;\n            entries[0].code_line = code_line;\n            entries[0].code_object = code_object;\n            Py_INCREF(code_object);\n        }\n        return;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {\n        PyCodeObject* tmp = entries[pos].code_object;\n        entries[pos].code_object = code_object;\n        Py_DECREF(tmp);\n        return;\n    }\n    if (__pyx_code_cache.count == __pyx_code_cache.max_count) {\n        int new_max = __pyx_code_cache.max_count + 64;\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(\n            __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (unlikely(!entries)) {\n            return;\n        }\n        __pyx_code_cache.entries = entries;\n        __pyx_code_cache.max_count = new_max;\n    }\n    for (i=__pyx_code_cache.count; i>pos; i--) {\n        entries[i] = entries[i-1];\n    }\n    entries[pos].code_line = code_line;\n    entries[pos].code_object = code_object;\n    __pyx_code_cache.count++;\n    Py_INCREF(code_object);\n}\n\n/* AddTraceback */\n              #include \"compile.h\"\n#include \"frameobject.h\"\n#include \"traceback.h\"\nstatic PyCodeObject* __Pyx_CreateCodeObjectForTraceback(\n            const char *funcname, int c_line,\n            int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyObject *py_srcfile = 0;\n    PyObject *py_funcname = 0;\n    #if PY_MAJOR_VERSION < 3\n    py_srcfile = PyString_FromString(filename);\n    #else\n    py_srcfile = PyUnicode_FromString(filename);\n    #endif\n    if 
(!py_srcfile) goto bad;\n    if (c_line) {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #else\n        py_funcname = PyUnicode_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #endif\n    }\n    else {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromString(funcname);\n        #else\n        py_funcname = PyUnicode_FromString(funcname);\n        #endif\n    }\n    if (!py_funcname) goto bad;\n    py_code = __Pyx_PyCode_New(\n        0,\n        0,\n        0,\n        0,\n        0,\n        __pyx_empty_bytes, /*PyObject *code,*/\n        __pyx_empty_tuple, /*PyObject *consts,*/\n        __pyx_empty_tuple, /*PyObject *names,*/\n        __pyx_empty_tuple, /*PyObject *varnames,*/\n        __pyx_empty_tuple, /*PyObject *freevars,*/\n        __pyx_empty_tuple, /*PyObject *cellvars,*/\n        py_srcfile,   /*PyObject *filename,*/\n        py_funcname,  /*PyObject *name,*/\n        py_line,\n        __pyx_empty_bytes  /*PyObject *lnotab*/\n    );\n    Py_DECREF(py_srcfile);\n    Py_DECREF(py_funcname);\n    return py_code;\nbad:\n    Py_XDECREF(py_srcfile);\n    Py_XDECREF(py_funcname);\n    return NULL;\n}\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyFrameObject *py_frame = 0;\n    py_code = __pyx_find_code_object(c_line ? c_line : py_line);\n    if (!py_code) {\n        py_code = __Pyx_CreateCodeObjectForTraceback(\n            funcname, c_line, py_line, filename);\n        if (!py_code) goto bad;\n        __pyx_insert_code_object(c_line ? 
c_line : py_line, py_code);\n    }\n    py_frame = PyFrame_New(\n        PyThreadState_GET(), /*PyThreadState *tstate,*/\n        py_code,             /*PyCodeObject *code,*/\n        __pyx_d,      /*PyObject *globals,*/\n        0                    /*PyObject *locals*/\n    );\n    if (!py_frame) goto bad;\n    __Pyx_PyFrame_SetLineNumber(py_frame, py_line);\n    PyTraceBack_Here(py_frame);\nbad:\n    Py_XDECREF(py_code);\n    Py_XDECREF(py_frame);\n}\n\n#if PY_MAJOR_VERSION < 3\nstatic int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {\n    if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags);\n    PyErr_Format(PyExc_TypeError, \"'%.200s' does not have the buffer interface\", Py_TYPE(obj)->tp_name);\n    return -1;\n}\nstatic void __Pyx_ReleaseBuffer(Py_buffer *view) {\n    PyObject *obj = view->obj;\n    if (!obj) return;\n    if (PyObject_CheckBuffer(obj)) {\n        PyBuffer_Release(view);\n        return;\n    }\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; }\n    Py_DECREF(obj);\n    view->obj = NULL;\n}\n#endif\n\n\n              /* CIntFromPyVerify */\n              #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)\n#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)\n#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\\\n    {\\\n        func_type value = func_value;\\\n        if (sizeof(target_type) < sizeof(func_type)) {\\\n            if (unlikely(value != (func_type) (target_type) value)) {\\\n                func_type zero = 0;\\\n                if (exc && unlikely(value == (func_type)-1 && 
PyErr_Occurred()))\\\n                    return (target_type) -1;\\\n                if (is_unsigned && unlikely(value < zero))\\\n                    goto raise_neg_overflow;\\\n                else\\\n                    goto raise_overflow;\\\n            }\\\n        }\\\n        return (target_type) value;\\\n    }\n\n/* CIntToPy */\n              static CYTHON_INLINE PyObject* __Pyx_PyInt_From_siz(siz value) {\n    const siz neg_one = (siz) -1, const_zero = (siz) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(siz) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(siz) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(siz) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(siz) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(siz) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(siz),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n              static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {\n    const long neg_one = (long) -1, const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(long) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(long) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if 
(sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(long) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(long),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n              static CYTHON_INLINE PyObject* __Pyx_PyInt_From_Py_intptr_t(Py_intptr_t value) {\n    const Py_intptr_t neg_one = (Py_intptr_t) -1, const_zero = (Py_intptr_t) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(Py_intptr_t) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(Py_intptr_t) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(Py_intptr_t) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(Py_intptr_t) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(Py_intptr_t) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(Py_intptr_t),\n                                     little, !is_unsigned);\n    }\n}\n\n/* Declarations */\n              #if 
CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return ::std::complex< float >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return x + y*(__pyx_t_float_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      __pyx_t_float_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n              #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabsf(b.real) >= fabsf(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return 
__pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                float r = b.imag / b.real;\n                float s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_float_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            float r = b.real / b.imag;\n            float s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_float_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            float denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_float_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrtf(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypotf(z.real, z.imag);\n          #endif\n        }\n        static 
CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n            __pyx_t_float_complex z;\n            float r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    float denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = powf(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2f(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_float(a);\n                theta = atan2f(a.imag, a.real);\n            }\n            lnr = logf(r);\n            z_r = expf(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * 
cosf(z_theta);\n            z.imag = z_r * sinf(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* Declarations */\n              #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return ::std::complex< double >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return x + y*(__pyx_t_double_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      __pyx_t_double_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n              #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real 
/ b.real, a.imag / b.real);\n        } else if (fabs(b.real) >= fabs(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                double r = b.imag / b.real;\n                double s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_double_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            double r = b.real / b.imag;\n            double s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_double_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            double denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_double_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {\n          #if !defined(HAVE_HYPOT) || 
defined(_MSC_VER)\n            return sqrt(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypot(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n            __pyx_t_double_complex z;\n            double r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    double denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = pow(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_double(a);\n                theta = atan2(a.imag, 
a.real);\n            }\n            lnr = log(r);\n            z_r = exp(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cos(z_theta);\n            z.imag = z_r * sin(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* CIntToPy */\n              static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {\n    const int neg_one = (int) -1, const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n              static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) {\n    const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, const_zero = (enum NPY_TYPES) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(enum NPY_TYPES) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) 
value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(enum NPY_TYPES) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n              static CYTHON_INLINE siz __Pyx_PyInt_As_siz(PyObject *x) {\n    const siz neg_one = (siz) -1, const_zero = (siz) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(siz) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(siz, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (siz) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (siz) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(siz, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(siz) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned 
long)digits[0])))\n                        } else if (8 * sizeof(siz) >= 2 * PyLong_SHIFT) {\n                            return (siz) (((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(siz) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) >= 3 * PyLong_SHIFT) {\n                            return (siz) (((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(siz) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) >= 4 * PyLong_SHIFT) {\n                            return (siz) (((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (siz) -1;\n      
          if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(siz) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(siz) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (siz) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(siz, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(siz,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(siz) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(siz) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                            return (siz) ((((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n    
            case -3:\n                    if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(siz) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                            return (siz) ((((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 4 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n  
                      }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(siz) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 4 * PyLong_SHIFT) {\n                            return (siz) ((((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(siz) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(siz) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            siz val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n    
                                          is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (siz) -1;\n        }\n    } else {\n        siz val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (siz) -1;\n        val = __Pyx_PyInt_As_siz(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to siz\");\n    return (siz) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to siz\");\n    return (siz) -1;\n}\n\n/* CIntFromPy */\n              static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *x) {\n    const size_t neg_one = (size_t) -1, const_zero = (size_t) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(size_t) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(size_t, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (size_t) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(size_t, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                 
       } else if (8 * sizeof(size_t) >= 2 * PyLong_SHIFT) {\n                            return (size_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 3 * PyLong_SHIFT) {\n                            return (size_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 4 * PyLong_SHIFT) {\n                            return (size_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    
return (size_t) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(size_t, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(size_t,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(size_t) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) ((((((size_t)digits[1]) << PyLong_SHIFT) | 
(size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) ((((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            
return (size_t) (((size_t)-1)*(((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            return (size_t) ((((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            size_t val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = 
(int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (size_t) -1;\n        }\n    } else {\n        size_t val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (size_t) -1;\n        val = __Pyx_PyInt_As_size_t(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to size_t\");\n    return (size_t) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to size_t\");\n    return (size_t) -1;\n}\n\n/* CIntFromPy */\n              static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {\n    const int neg_one = (int) -1, const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        
if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {\n                            return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {\n                            return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {\n                            return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            
}\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * 
PyLong_SHIFT) {\n                            return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * 
PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char 
*)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (int) -1;\n        }\n    } else {\n        int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (int) -1;\n        val = __Pyx_PyInt_As_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to int\");\n    return (int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to int\");\n    return (int) -1;\n}\n\n/* CIntFromPy */\n              static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {\n    const long neg_one = (long) -1, const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(long) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (long) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * 
PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {\n                            return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {\n                            return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {\n                            return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n          
  {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (long) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(long) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(long,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) 
{\n                            return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 
4 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(long) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            long val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = 
(int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (long) -1;\n        }\n    } else {\n        long val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (long) -1;\n        val = __Pyx_PyInt_As_long(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to long\");\n    return (long) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to long\");\n    return (long) -1;\n}\n\n/* CheckBinaryVersion */\n              static int __Pyx_check_binary_version(void) {\n    char ctversion[4], rtversion[4];\n    PyOS_snprintf(ctversion, 4, \"%d.%d\", PY_MAJOR_VERSION, PY_MINOR_VERSION);\n    PyOS_snprintf(rtversion, 4, \"%s\", Py_GetVersion());\n    if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {\n        char message[200];\n        PyOS_snprintf(message, sizeof(message),\n                      \"compiletime version %s of module '%.100s' \"\n                      \"does not match runtime version %s\",\n                      ctversion, __Pyx_MODULE_NAME, rtversion);\n        return PyErr_WarnEx(NULL, message, 1);\n    }\n    return 0;\n}\n\n/* ModuleImport */\n              #ifndef __PYX_HAVE_RT_ImportModule\n#define __PYX_HAVE_RT_ImportModule\nstatic PyObject *__Pyx_ImportModule(const char *name) {\n    PyObject *py_name = 0;\n    PyObject *py_module = 0;\n    py_name = __Pyx_PyIdentifier_FromString(name);\n    if (!py_name)\n        goto bad;\n    py_module = PyImport_Import(py_name);\n   
 Py_DECREF(py_name);\n    return py_module;\nbad:\n    Py_XDECREF(py_name);\n    return 0;\n}\n#endif\n\n/* TypeImport */\n              #ifndef __PYX_HAVE_RT_ImportType\n#define __PYX_HAVE_RT_ImportType\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name,\n    size_t size, int strict)\n{\n    PyObject *py_module = 0;\n    PyObject *result = 0;\n    PyObject *py_name = 0;\n    char warning[200];\n    Py_ssize_t basicsize;\n#ifdef Py_LIMITED_API\n    PyObject *py_basicsize;\n#endif\n    py_module = __Pyx_ImportModule(module_name);\n    if (!py_module)\n        goto bad;\n    py_name = __Pyx_PyIdentifier_FromString(class_name);\n    if (!py_name)\n        goto bad;\n    result = PyObject_GetAttr(py_module, py_name);\n    Py_DECREF(py_name);\n    py_name = 0;\n    Py_DECREF(py_module);\n    py_module = 0;\n    if (!result)\n        goto bad;\n    if (!PyType_Check(result)) {\n        PyErr_Format(PyExc_TypeError,\n            \"%.200s.%.200s is not a type object\",\n            module_name, class_name);\n        goto bad;\n    }\n#ifndef Py_LIMITED_API\n    basicsize = ((PyTypeObject *)result)->tp_basicsize;\n#else\n    py_basicsize = PyObject_GetAttrString(result, \"__basicsize__\");\n    if (!py_basicsize)\n        goto bad;\n    basicsize = PyLong_AsSsize_t(py_basicsize);\n    Py_DECREF(py_basicsize);\n    py_basicsize = 0;\n    if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred())\n        goto bad;\n#endif\n    if (!strict && (size_t)basicsize > size) {\n        PyOS_snprintf(warning, sizeof(warning),\n            \"%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;\n    }\n    else if ((size_t)basicsize != size) {\n        PyErr_Format(PyExc_ValueError,\n            \"%.200s.%.200s has the wrong size, try recompiling. 
Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        goto bad;\n    }\n    return (PyTypeObject *)result;\nbad:\n    Py_XDECREF(py_module);\n    Py_XDECREF(result);\n    return NULL;\n}\n#endif\n\n/* InitStrings */\n              static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {\n    while (t->p) {\n        #if PY_MAJOR_VERSION < 3\n        if (t->is_unicode) {\n            *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);\n        } else if (t->intern) {\n            *t->p = PyString_InternFromString(t->s);\n        } else {\n            *t->p = PyString_FromStringAndSize(t->s, t->n - 1);\n        }\n        #else\n        if (t->is_unicode | t->is_str) {\n            if (t->intern) {\n                *t->p = PyUnicode_InternFromString(t->s);\n            } else if (t->encoding) {\n                *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);\n            } else {\n                *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);\n            }\n        } else {\n            *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);\n        }\n        #endif\n        if (!*t->p)\n            return -1;\n        ++t;\n    }\n    return 0;\n}\n\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {\n    return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) {\n    Py_ssize_t ignore;\n    return __Pyx_PyObject_AsStringAndSize(o, &ignore);\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n    if (\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n            __Pyx_sys_getdefaultencoding_not_ascii &&\n#endif\n            PyUnicode_Check(o)) {\n#if PY_VERSION_HEX < 0x03030000\n        char* defenc_c;\n        PyObject* 
defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);\n        if (!defenc) return NULL;\n        defenc_c = PyBytes_AS_STRING(defenc);\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        {\n            char* end = defenc_c + PyBytes_GET_SIZE(defenc);\n            char* c;\n            for (c = defenc_c; c < end; c++) {\n                if ((unsigned char) (*c) >= 128) {\n                    PyUnicode_AsASCIIString(o);\n                    return NULL;\n                }\n            }\n        }\n#endif\n        *length = PyBytes_GET_SIZE(defenc);\n        return defenc_c;\n#else\n        if (__Pyx_PyUnicode_READY(o) == -1) return NULL;\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        if (PyUnicode_IS_ASCII(o)) {\n            *length = PyUnicode_GET_LENGTH(o);\n            return PyUnicode_AsUTF8(o);\n        } else {\n            PyUnicode_AsASCIIString(o);\n            return NULL;\n        }\n#else\n        return PyUnicode_AsUTF8AndSize(o, length);\n#endif\n#endif\n    } else\n#endif\n#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))\n    if (PyByteArray_Check(o)) {\n        *length = PyByteArray_GET_SIZE(o);\n        return PyByteArray_AS_STRING(o);\n    } else\n#endif\n    {\n        char* result;\n        int r = PyBytes_AsStringAndSize(o, &result, length);\n        if (unlikely(r < 0)) {\n            return NULL;\n        } else {\n            return result;\n        }\n    }\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {\n   int is_true = x == Py_True;\n   if (is_true | (x == Py_False) | (x == Py_None)) return is_true;\n   else return PyObject_IsTrue(x);\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {\n#if CYTHON_USE_TYPE_SLOTS\n  PyNumberMethods *m;\n#endif\n  const char *name = NULL;\n  PyObject *res = NULL;\n#if PY_MAJOR_VERSION < 3\n  if (PyInt_Check(x) || PyLong_Check(x))\n#else\n  if (PyLong_Check(x))\n#endif\n    return __Pyx_NewRef(x);\n#if 
CYTHON_USE_TYPE_SLOTS\n  m = Py_TYPE(x)->tp_as_number;\n  #if PY_MAJOR_VERSION < 3\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Int(x);\n  }\n  else if (m && m->nb_long) {\n    name = \"long\";\n    res = PyNumber_Long(x);\n  }\n  #else\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Long(x);\n  }\n  #endif\n#else\n  res = PyNumber_Int(x);\n#endif\n  if (res) {\n#if PY_MAJOR_VERSION < 3\n    if (!PyInt_Check(res) && !PyLong_Check(res)) {\n#else\n    if (!PyLong_Check(res)) {\n#endif\n      PyErr_Format(PyExc_TypeError,\n                   \"__%.4s__ returned non-%.4s (type %.200s)\",\n                   name, name, Py_TYPE(res)->tp_name);\n      Py_DECREF(res);\n      return NULL;\n    }\n  }\n  else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_TypeError,\n                    \"an integer is required\");\n  }\n  return res;\n}\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {\n  Py_ssize_t ival;\n  PyObject *x;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_CheckExact(b))) {\n    if (sizeof(Py_ssize_t) >= sizeof(long))\n        return PyInt_AS_LONG(b);\n    else\n        return PyInt_AsSsize_t(b);  /* fixed: was PyInt_AsSsize_t(x), reading the still-uninitialized x */\n  }\n#endif\n  if (likely(PyLong_CheckExact(b))) {\n    #if CYTHON_USE_PYLONG_INTERNALS\n    const digit* digits = ((PyLongObject*)b)->ob_digit;\n    const Py_ssize_t size = Py_SIZE(b);\n    if (likely(__Pyx_sst_abs(size) <= 1)) {\n        ival = likely(size) ? 
digits[0] : 0;\n        if (size == -1) ival = -ival;\n        return ival;\n    } else {\n      switch (size) {\n         case 2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n      }\n    }\n    #endif\n    return PyLong_AsSsize_t(b);\n  }\n  x = PyNumber_Index(b);\n  if (!x) return -1;\n  ival = PyInt_AsSsize_t(x);\n  Py_DECREF(x);\n  return ival;\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {\n    return PyInt_FromSize_t(ival);\n}\n\n\n#endif /* Py_PYTHON_H */\n"
  },
  {
    "path": "lib/pycocotools/_mask.pyx",
    "content": "# distutils: language = c\n# distutils: sources = ../MatlabAPI/private/maskApi.c\n\n#**************************************************************************\n# Microsoft COCO Toolbox.      version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n# Licensed under the Simplified BSD License [see coco/license.txt]\n#**************************************************************************\n\n__author__ = 'tsungyi'\n\n# import both Python-level and C-level symbols of Numpy\n# the API uses Numpy to interface C and Python\nimport numpy as np\ncimport numpy as np\nfrom libc.stdlib cimport malloc, free\n\n# intialized Numpy. must do.\nnp.import_array()\n\n# import numpy C function\n# we use PyArray_ENABLEFLAGS to make Numpy ndarray responsible to memoery management\ncdef extern from \"numpy/arrayobject.h\":\n    void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)\n\n# Declare the prototype of the C functions in MaskApi.h\ncdef extern from \"maskApi.h\":\n    ctypedef unsigned int uint\n    ctypedef unsigned long siz\n    ctypedef unsigned char byte\n    ctypedef double* BB\n    ctypedef struct RLE:\n        siz h,\n        siz w,\n        siz m,\n        uint* cnts,\n    void rlesInit( RLE **R, siz n )\n    void rleEncode( RLE *R, const byte *M, siz h, siz w, siz n )\n    void rleDecode( const RLE *R, byte *mask, siz n )\n    void rleMerge( const RLE *R, RLE *M, siz n, bint intersect )\n    void rleArea( const RLE *R, siz n, uint *a )\n    void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o )\n    void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o )\n    void rleToBbox( const RLE *R, BB bb, siz n )\n    void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n )\n    void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w )\n    char* rleToString( const RLE *R )\n    void rleFrString( RLE *R, char *s, siz h, siz w )\n\n# python class to wrap 
RLE array in C\n# the class handles the memory allocation and deallocation\ncdef class RLEs:\n    cdef RLE *_R\n    cdef siz _n\n\n    def __cinit__(self, siz n =0):\n        rlesInit(&self._R, n)\n        self._n = n\n\n    # free the RLE array here\n    def __dealloc__(self):\n        if self._R is not NULL:\n            for i in range(self._n):\n                free(self._R[i].cnts)\n            free(self._R)\n    def __getattr__(self, key):\n        if key == 'n':\n            return self._n\n        raise AttributeError(key)\n\n# python class to wrap Mask array in C\n# the class handles the memory allocation and deallocation\ncdef class Masks:\n    cdef byte *_mask\n    cdef siz _h\n    cdef siz _w\n    cdef siz _n\n\n    def __cinit__(self, h, w, n):\n        self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n        self._h = h\n        self._w = w\n        self._n = n\n    # def __dealloc__(self):\n        # the memory management of _mask has been passed to np.ndarray\n        # it doesn't need to be freed here\n\n    # called when passing into np.array() and return an np.ndarray in column-major order\n    def __array__(self):\n        cdef np.npy_intp shape[1]\n        shape[0] = <np.npy_intp> self._h*self._w*self._n\n        # Create a 1D array, and reshape it to fortran/Matlab column-major array\n        ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')\n        # The _mask allocated by Masks is now handled by ndarray\n        PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n        return ndarray\n\n# internal conversion from Python RLEs object to compressed RLE format\ndef _toString(RLEs Rs):\n    cdef siz n = Rs.n\n    cdef bytes py_string\n    cdef char* c_string\n    objs = []\n    for i in range(n):\n        c_string = rleToString( <RLE*> &Rs._R[i] )\n        py_string = c_string\n        objs.append({\n            'size': [Rs._R[i].h, Rs._R[i].w],\n            'counts': 
py_string\n        })\n        free(c_string)\n    return objs\n\n# internal conversion from compressed RLE format to Python RLEs object\ndef _frString(rleObjs):\n    cdef siz n = len(rleObjs)\n    Rs = RLEs(n)\n    cdef bytes py_string\n    cdef char* c_string\n    for i, obj in enumerate(rleObjs):\n        py_string = str(obj['counts'])\n        c_string = py_string\n        rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n    return Rs\n\n# encode mask to RLEs objects\n# list of RLE string can be generated by RLEs member function\ndef encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n    h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n    cdef RLEs Rs = RLEs(n)\n    rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n    objs = _toString(Rs)\n    return objs\n\n# decode mask from compressed list of RLE string or RLEs object\ndef decode(rleObjs):\n    cdef RLEs Rs = _frString(rleObjs)\n    h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n    masks = Masks(h, w, n)\n    rleDecode( <RLE*>Rs._R, masks._mask, n );\n    return np.array(masks)\n\ndef merge(rleObjs, bint intersect=0):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef RLEs R = RLEs(1)\n    rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n    obj = _toString(R)[0]\n    return obj\n\ndef area(rleObjs):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n    rleArea(Rs._R, Rs._n, _a)\n    cdef np.npy_intp shape[1]\n    shape[0] = <np.npy_intp> Rs._n\n    a = np.array((Rs._n, ), dtype=np.uint8)\n    a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n    PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n    return a\n\n# iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\ndef iou( dt, gt, pyiscrowd ):\n    def _preproc(objs):\n        if len(objs) == 0:\n            return objs\n        if type(objs) == np.ndarray:\n            if len(objs.shape) == 1:\n                # a single flat box: reshape to 1xN (was objs.reshape((objs[0], 1)),\n                # which misused the first element's value as a dimension)\n                objs = objs.reshape((1, objs.shape[0]))\n            # check if it's Nx4 bbox\n            if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n                raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n            objs = objs.astype(np.double)\n        elif type(objs) == list:\n            # check if list is in box format and convert it to np.ndarray\n            isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n            isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n            if isbox:\n                objs = np.array(objs, dtype=np.double)\n                if len(objs.shape) == 1:\n                    objs = objs.reshape((1,objs.shape[0]))\n            elif isrle:\n                objs = _frString(objs)\n            else:\n                raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n        else:\n            raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n        return objs\n    def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n        rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n    def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n        bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n    def _len(obj):\n        cdef siz N = 0\n        if type(obj) == RLEs:\n            N = obj.n\n        elif len(obj)==0:\n            pass\n        elif type(obj) == np.ndarray:\n            N = obj.shape[0]\n        return N\n    # convert iscrowd to numpy array\n    cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)\n    # simple type checking\n    cdef siz m, n\n    dt = _preproc(dt)\n    gt = _preproc(gt)\n    m = _len(dt)\n    n = _len(gt)\n    if m == 0 or n == 0:\n        return []\n    if not type(dt) == type(gt):\n        raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n\n    # define local variables\n    cdef double* _iou = <double*> 0\n    cdef np.npy_intp shape[1]\n    # check type and assign iou function\n    if type(dt) == RLEs:\n        _iouFun = _rleIou\n    elif type(dt) == np.ndarray:\n        _iouFun = _bbIou\n    else:\n        raise Exception('input data type not allowed.')\n    _iou = <double*> malloc(m*n* sizeof(double))\n    iou = np.zeros((m*n, ), dtype=np.double)\n    shape[0] = <np.npy_intp> m*n\n    iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n    PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n    _iouFun(dt, gt, iscrowd, m, n, iou)\n    return iou.reshape((m,n), order='F')\n\ndef toBbox( rleObjs ):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef 
siz n = Rs.n\n    cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n    rleToBbox( <const RLE*> Rs._R, _bb, n )\n    cdef np.npy_intp shape[1]\n    shape[0] = <np.npy_intp> 4*n\n    bb = np.array((1,4*n), dtype=np.double)\n    bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n    PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n    return bb\n\ndef frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n    cdef siz n = bb.shape[0]\n    Rs = RLEs(n)\n    rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n    objs = _toString(Rs)\n    return objs\n\ndef frPoly( poly, siz h, siz w ):\n    cdef np.ndarray[np.double_t, ndim=1] np_poly\n    n = len(poly)\n    Rs = RLEs(n)\n    for i, p in enumerate(poly):\n        np_poly = np.array(p, dtype=np.double, order='F')\n        rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n    objs = _toString(Rs)\n    return objs\n\ndef frUncompressedRLE(ucRles, siz h, siz w):\n    cdef np.ndarray[np.uint32_t, ndim=1] cnts\n    cdef RLE R\n    cdef uint *data\n    n = len(ucRles)\n    objs = []\n    for i in range(n):\n        Rs = RLEs(1)\n        cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n        # time for malloc can be saved here but it's fine\n        data = <uint*> malloc(len(cnts)* sizeof(uint))\n        for j in range(len(cnts)):\n            data[j] = <uint> cnts[j]\n        R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n        Rs._R[0] = R\n        objs.append(_toString(Rs)[0])\n    return objs\n\ndef frPyObjects(pyobj, siz h, w):\n    if type(pyobj) == np.ndarray:\n        objs = frBbox(pyobj, h, w )\n    elif type(pyobj) == list and len(pyobj[0]) == 4:\n        objs = frBbox(pyobj, h, w )\n    elif type(pyobj) == list and len(pyobj[0]) > 4:\n        objs = frPoly(pyobj, h, w )\n    elif type(pyobj) == list and type(pyobj[0]) == dict:\n        objs = frUncompressedRLE(pyobj, h, w)\n    else:\n        raise Exception('input 
type is not supported.')\n    return objs\n"
  },
  {
    "path": "lib/pycocotools/coco.py",
    "content": "__author__ = 'tylin'\n__version__ = '1.0.1'\n# Interface for accessing the Microsoft COCO dataset.\n\n# Microsoft COCO is a large image dataset designed for object detection,\n# segmentation, and caption generation. pycocotools is a Python API that\n# assists in loading, parsing and visualizing the annotations in COCO.\n# Please visit http://mscoco.org/ for more information on COCO, including\n# for the data, paper, and tutorials. The exact format of the annotations\n# is also described on the COCO website. For example usage of the pycocotools\n# please see pycocotools_demo.ipynb. In addition to this API, please download both\n# the COCO images and annotations in order to run the demo.\n\n# An alternative to using the API is to load the annotations directly\n# into Python dictionary\n# Using the API provides additional utility functions. Note that this API\n# supports both *instance* and *caption* annotations. In the case of\n# captions not all functions are defined (e.g. 
categories are undefined).\n\n# The following API functions are defined:\n#  COCO       - COCO api class that loads COCO annotation file and prepare data structures.\n#  decodeMask - Decode binary mask M encoded via run-length encoding.\n#  encodeMask - Encode binary mask M using run-length encoding.\n#  getAnnIds  - Get ann ids that satisfy given filter conditions.\n#  getCatIds  - Get cat ids that satisfy given filter conditions.\n#  getImgIds  - Get img ids that satisfy given filter conditions.\n#  loadAnns   - Load anns with the specified ids.\n#  loadCats   - Load cats with the specified ids.\n#  loadImgs   - Load imgs with the specified ids.\n#  segToMask  - Convert polygon segmentation to binary mask.\n#  showAnns   - Display the specified annotations.\n#  loadRes    - Load algorithm results and create API for accessing them.\n#  download   - Download COCO images from mscoco.org server.\n# Throughout the API \"ann\"=annotation, \"cat\"=category, and \"img\"=image.\n# Help on each functions can be accessed by: \"help COCO>function\".\n\n# See also COCO>decodeMask,\n# COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds,\n# COCO>getImgIds, COCO>loadAnns, COCO>loadCats,\n# COCO>loadImgs, COCO>segToMask, COCO>showAnns\n\n# Microsoft COCO Toolbox.      
version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2014.\n# Licensed under the Simplified BSD License [see bsd.txt]\n\nimport json\nimport datetime\nimport time\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nfrom matplotlib.collections import PatchCollection\nfrom matplotlib.patches import Polygon\nimport numpy as np\nfrom skimage.draw import polygon\nimport urllib\nimport copy\nimport itertools\nimport mask\nimport os\n\nclass COCO:\n    def __init__(self, annotation_file=None):\n        \"\"\"\n        Constructor of Microsoft COCO helper class for reading and visualizing annotations.\n        :param annotation_file (str): location of annotation file\n        :param image_folder (str): location to the folder that hosts images.\n        :return:\n        \"\"\"\n        # load dataset\n        self.dataset = {}\n        self.anns = []\n        self.imgToAnns = {}\n        self.catToImgs = {}\n        self.imgs = {}\n        self.cats = {}\n        if not annotation_file == None:\n            print 'loading annotations into memory...'\n            tic = time.time()\n            dataset = json.load(open(annotation_file, 'r'))\n            print 'Done (t=%0.2fs)'%(time.time()- tic)\n            self.dataset = dataset\n            self.createIndex()\n\n    def createIndex(self):\n        # create index\n        print 'creating index...'\n        anns = {}\n        imgToAnns = {}\n        catToImgs = {}\n        cats = {}\n        imgs = {}\n        if 'annotations' in self.dataset:\n            imgToAnns = {ann['image_id']: [] for ann in self.dataset['annotations']}\n            anns =      {ann['id']:       [] for ann in self.dataset['annotations']}\n            for ann in self.dataset['annotations']:\n                imgToAnns[ann['image_id']] += [ann]\n                anns[ann['id']] = ann\n\n        if 'images' in self.dataset:\n            imgs      = 
{im['id']: {} for im in self.dataset['images']}\n            for img in self.dataset['images']:\n                imgs[img['id']] = img\n\n        if 'categories' in self.dataset:\n            cats = {cat['id']: [] for cat in self.dataset['categories']}\n            for cat in self.dataset['categories']:\n                cats[cat['id']] = cat\n            catToImgs = {cat['id']: [] for cat in self.dataset['categories']}\n            if 'annotations' in self.dataset:\n                for ann in self.dataset['annotations']:\n                    catToImgs[ann['category_id']] += [ann['image_id']]\n\n        print 'index created!'\n\n        # create class members\n        self.anns = anns\n        self.imgToAnns = imgToAnns\n        self.catToImgs = catToImgs\n        self.imgs = imgs\n        self.cats = cats\n\n    def info(self):\n        \"\"\"\n        Print information about the annotation file.\n        :return:\n        \"\"\"\n        for key, value in self.dataset['info'].items():\n            print '%s: %s'%(key, value)\n\n    def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):\n        \"\"\"\n        Get ann ids that satisfy given filter conditions. default skips that filter\n        :param imgIds  (int array)     : get anns for given imgs\n               catIds  (int array)     : get anns for given cats\n               areaRng (float array)   : get anns for given area range (e.g. 
[0 inf])\n               iscrowd (boolean)       : get anns for given crowd label (False or True)\n        :return: ids (int array)       : integer array of ann ids\n        \"\"\"\n        imgIds = imgIds if type(imgIds) == list else [imgIds]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(imgIds) == len(catIds) == len(areaRng) == 0:\n            anns = self.dataset['annotations']\n        else:\n            if not len(imgIds) == 0:\n                # this can be changed by defaultdict\n                lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns]\n                anns = list(itertools.chain.from_iterable(lists))\n            else:\n                anns = self.dataset['annotations']\n            anns = anns if len(catIds)  == 0 else [ann for ann in anns if ann['category_id'] in catIds]\n            anns = anns if len(areaRng) == 0 else [ann for ann in anns if ann['area'] > areaRng[0] and ann['area'] < areaRng[1]]\n        if not iscrowd == None:\n            ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd]\n        else:\n            ids = [ann['id'] for ann in anns]\n        return ids\n\n    def getCatIds(self, catNms=[], supNms=[], catIds=[]):\n        \"\"\"\n        filtering parameters. 
default skips that filter.\n        :param catNms (str array)  : get cats for given cat names\n        :param supNms (str array)  : get cats for given supercategory names\n        :param catIds (int array)  : get cats for given cat ids\n        :return: ids (int array)   : integer array of cat ids\n        \"\"\"\n        catNms = catNms if type(catNms) == list else [catNms]\n        supNms = supNms if type(supNms) == list else [supNms]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(catNms) == len(supNms) == len(catIds) == 0:\n            cats = self.dataset['categories']\n        else:\n            cats = self.dataset['categories']\n            cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name']          in catNms]\n            cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms]\n            cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id']            in catIds]\n        ids = [cat['id'] for cat in cats]\n        return ids\n\n    def getImgIds(self, imgIds=[], catIds=[]):\n        '''\n        Get img ids that satisfy given filter conditions.\n        :param imgIds (int array) : get imgs for given ids\n        :param catIds (int array) : get imgs with all given cats\n        :return: ids (int array)  : integer array of img ids\n        '''\n        imgIds = imgIds if type(imgIds) == list else [imgIds]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(imgIds) == len(catIds) == 0:\n            ids = self.imgs.keys()\n        else:\n            ids = set(imgIds)\n            for i, catId in enumerate(catIds):\n                if i == 0 and len(ids) == 0:\n                    ids = set(self.catToImgs[catId])\n                else:\n                    ids &= set(self.catToImgs[catId])\n        return list(ids)\n\n    def loadAnns(self, ids=[]):\n        \"\"\"\n        Load anns with the specified ids.\n        :param ids 
(int array)       : integer ids specifying anns\n        :return: anns (object array) : loaded ann objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.anns[id] for id in ids]\n        elif type(ids) == int:\n            return [self.anns[ids]]\n\n    def loadCats(self, ids=[]):\n        \"\"\"\n        Load cats with the specified ids.\n        :param ids (int array)       : integer ids specifying cats\n        :return: cats (object array) : loaded cat objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.cats[id] for id in ids]\n        elif type(ids) == int:\n            return [self.cats[ids]]\n\n    def loadImgs(self, ids=[]):\n        \"\"\"\n        Load imgs with the specified ids.\n        :param ids (int array)       : integer ids specifying imgs\n        :return: imgs (object array) : loaded img objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.imgs[id] for id in ids]\n        elif type(ids) == int:\n            return [self.imgs[ids]]\n\n    def showAnns(self, anns):\n        \"\"\"\n        Display the specified annotations.\n        :param anns (array of object): annotations to display\n        :return: None\n        \"\"\"\n        if len(anns) == 0:\n            return 0\n        if 'segmentation' in anns[0]:\n            datasetType = 'instances'\n        elif 'caption' in anns[0]:\n            datasetType = 'captions'\n        if datasetType == 'instances':\n            ax = plt.gca()\n            polygons = []\n            color = []\n            for ann in anns:\n                c = np.random.random((1, 3)).tolist()[0]\n                if type(ann['segmentation']) == list:\n                    # polygon\n                    for seg in ann['segmentation']:\n                        poly = np.array(seg).reshape((len(seg)/2, 2))\n                        polygons.append(Polygon(poly, True,alpha=0.4))\n                        color.append(c)\n                else:\n   
                 # mask\n                    t = self.imgs[ann['image_id']]\n                    if type(ann['segmentation']['counts']) == list:\n                        rle = mask.frPyObjects([ann['segmentation']], t['height'], t['width'])\n                    else:\n                        rle = [ann['segmentation']]\n                    m = mask.decode(rle)\n                    img = np.ones( (m.shape[0], m.shape[1], 3) )\n                    if ann['iscrowd'] == 1:\n                        color_mask = np.array([2.0,166.0,101.0])/255\n                    if ann['iscrowd'] == 0:\n                        color_mask = np.random.random((1, 3)).tolist()[0]\n                    for i in range(3):\n                        img[:,:,i] = color_mask[i]\n                    ax.imshow(np.dstack( (img, m*0.5) ))\n            p = PatchCollection(polygons, facecolors=color, edgecolors=(0,0,0,1), linewidths=3, alpha=0.4)\n            ax.add_collection(p)\n        elif datasetType == 'captions':\n            for ann in anns:\n                print ann['caption']\n\n    def loadRes(self, resFile):\n        \"\"\"\n        Load result file and return a result api object.\n        :param   resFile (str)     : file name of result file\n        :return: res (obj)         : result api object\n        \"\"\"\n        res = COCO()\n        res.dataset['images'] = [img for img in self.dataset['images']]\n        # res.dataset['info'] = copy.deepcopy(self.dataset['info'])\n        # res.dataset['licenses'] = copy.deepcopy(self.dataset['licenses'])\n\n        print 'Loading and preparing results...     
'\n        tic = time.time()\n        anns    = json.load(open(resFile))\n        assert type(anns) == list, 'results is not an array of objects'\n        annsImgIds = [ann['image_id'] for ann in anns]\n        assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \\\n               'Results do not correspond to current coco set'\n        if 'caption' in anns[0]:\n            imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns])\n            res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds]\n            for id, ann in enumerate(anns):\n                ann['id'] = id+1\n        elif 'bbox' in anns[0] and not anns[0]['bbox'] == []:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                bb = ann['bbox']\n                x1, x2, y1, y2 = [bb[0], bb[0]+bb[2], bb[1], bb[1]+bb[3]]\n                if not 'segmentation' in ann:\n                    ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]]\n                ann['area'] = bb[2]*bb[3]\n                ann['id'] = id+1\n                ann['iscrowd'] = 0\n        elif 'segmentation' in anns[0]:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                # now only support compressed RLE format as segmentation results\n                ann['area'] = mask.area([ann['segmentation']])[0]\n                if not 'bbox' in ann:\n                    ann['bbox'] = mask.toBbox([ann['segmentation']])[0]\n                ann['id'] = id+1\n                ann['iscrowd'] = 0\n        print 'DONE (t=%0.2fs)'%(time.time()- tic)\n\n        res.dataset['annotations'] = anns\n        res.createIndex()\n        return res\n\n    def download( self, tarDir = None, imgIds = [] ):\n        '''\n        Download COCO images from mscoco.org server.\n        :param 
tarDir (str): COCO results directory name\n               imgIds (list): images to be downloaded\n        :return:\n        '''\n        if tarDir is None:\n            print 'Please specify target directory'\n            return -1\n        if len(imgIds) == 0:\n            imgs = self.imgs.values()\n        else:\n            imgs = self.loadImgs(imgIds)\n        N = len(imgs)\n        if not os.path.exists(tarDir):\n            os.makedirs(tarDir)\n        for i, img in enumerate(imgs):\n            tic = time.time()\n            fname = os.path.join(tarDir, img['file_name'])\n            if not os.path.exists(fname):\n                urllib.urlretrieve(img['coco_url'], fname)\n            print 'downloaded %d/%d images (t=%.1fs)'%(i, N, time.time()- tic)\n"
  },
  {
    "path": "lib/pycocotools/cocoeval.py",
    "content": "__author__ = 'tsungyi'\n\nimport numpy as np\nimport datetime\nimport time\nfrom collections import defaultdict\nimport mask\nimport copy\n\nclass COCOeval:\n    # Interface for evaluating detection on the Microsoft COCO dataset.\n    #\n    # The usage for CocoEval is as follows:\n    #  cocoGt=..., cocoDt=...       # load dataset and results\n    #  E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object\n    #  E.params.recThrs = ...;      # set parameters as desired\n    #  E.evaluate();                # run per image evaluation\n    #  E.accumulate();              # accumulate per image results\n    #  E.summarize();               # display summary metrics of results\n    # For example usage see evalDemo.m and http://mscoco.org/.\n    #\n    # The evaluation parameters are as follows (defaults in brackets):\n    #  imgIds     - [all] N img ids to use for evaluation\n    #  catIds     - [all] K cat ids to use for evaluation\n    #  iouThrs    - [.5:.05:.95] T=10 IoU thresholds for evaluation\n    #  recThrs    - [0:.01:1] R=101 recall thresholds for evaluation\n    #  areaRng    - [...] 
A=4 object area ranges for evaluation\n    #  maxDets    - [1 10 100] M=3 thresholds on max detections per image\n    #  useSegm    - [1] if true evaluate against ground-truth segments\n    #  useCats    - [1] if true use category labels for evaluation\n    # Note: if useSegm=0 the evaluation is run on bounding boxes.\n    # Note: if useCats=0 category labels are ignored as in proposal scoring.\n    # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified.\n    #\n    # evaluate(): evaluates detections on every image and every category and\n    # concats the results into the \"evalImgs\" with fields:\n    #  dtIds      - [1xD] id for each of the D detections (dt)\n    #  gtIds      - [1xG] id for each of the G ground truths (gt)\n    #  dtMatches  - [TxD] matching gt id at each IoU or 0\n    #  gtMatches  - [TxG] matching dt id at each IoU or 0\n    #  dtScores   - [1xD] confidence of each dt\n    #  gtIgnore   - [1xG] ignore flag for each gt\n    #  dtIgnore   - [TxD] ignore flag for each dt at each IoU\n    #\n    # accumulate(): accumulates the per-image, per-category evaluation\n    # results in \"evalImgs\" into the dictionary \"eval\" with fields:\n    #  params     - parameters used for evaluation\n    #  date       - date evaluation was performed\n    #  counts     - [T,R,K,A,M] parameter dimensions (see above)\n    #  precision  - [TxRxKxAxM] precision for every evaluation setting\n    #  recall     - [TxKxAxM] max recall for every evaluation setting\n    # Note: precision and recall==-1 for settings with no gt objects.\n    #\n    # See also coco, mask, pycocoDemo, pycocoEvalDemo\n    #\n    # Microsoft COCO Toolbox.      
version 2.0\n    # Data, paper, and tutorials available at:  http://mscoco.org/\n    # Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n    # Licensed under the Simplified BSD License [see coco/license.txt]\n    def __init__(self, cocoGt=None, cocoDt=None):\n        '''\n        Initialize CocoEval using coco APIs for gt and dt\n        :param cocoGt: coco object with ground truth annotations\n        :param cocoDt: coco object with detection results\n        :return: None\n        '''\n        self.cocoGt   = cocoGt              # ground truth COCO API\n        self.cocoDt   = cocoDt              # detections COCO API\n        self.params   = {}                  # evaluation parameters\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results [KxAxI] elements\n        self.eval     = {}                  # accumulated evaluation results\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        self.params = Params()              # parameters\n        self._paramsEval = {}               # parameters for evaluation\n        self.stats = []                     # result summarization\n        self.ious = {}                      # ious between all gts and dts\n        if not cocoGt is None:\n            self.params.imgIds = sorted(cocoGt.getImgIds())\n            self.params.catIds = sorted(cocoGt.getCatIds())\n\n\n    def _prepare(self):\n        '''\n        Prepare ._gts and ._dts for evaluation based on params\n        :return: None\n        '''\n        #\n        def _toMask(objs, coco):\n            # modify segmentation by reference\n            for obj in objs:\n                t = coco.imgs[obj['image_id']]\n                if type(obj['segmentation']) == list:\n                    if type(obj['segmentation'][0]) == dict:\n                        print 'debug'\n                    obj['segmentation'] = 
mask.frPyObjects(obj['segmentation'],t['height'],t['width'])\n                    if len(obj['segmentation']) == 1:\n                        obj['segmentation'] = obj['segmentation'][0]\n                    else:\n                        # an object can have multiple polygon regions\n                        # merge them into one RLE mask\n                        obj['segmentation'] = mask.merge(obj['segmentation'])\n                elif type(obj['segmentation']) == dict and type(obj['segmentation']['counts']) == list:\n                    obj['segmentation'] = mask.frPyObjects([obj['segmentation']],t['height'],t['width'])[0]\n                elif type(obj['segmentation']) == dict and \\\n                     (type(obj['segmentation']['counts']) == unicode or type(obj['segmentation']['counts']) == str):\n                    pass\n                else:\n                    raise Exception('segmentation format not supported.')\n        p = self.params\n        if p.useCats:\n            gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n        else:\n            gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds))\n\n        if p.useSegm:\n            _toMask(gts, self.cocoGt)\n            _toMask(dts, self.cocoDt)\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        for gt in gts:\n            self._gts[gt['image_id'], gt['category_id']].append(gt)\n        for dt in dts:\n            self._dts[dt['image_id'], dt['category_id']].append(dt)\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results\n        self.eval     = {}                  # accumulated evaluation results\n\n    def evaluate(self):\n        '''\n        Run per image 
evaluation on given images and store results (a list of dict) in self.evalImgs\n        :return: None\n        '''\n        tic = time.time()\n        print 'Running per image evaluation...      '\n        p = self.params\n        p.imgIds = list(np.unique(p.imgIds))\n        if p.useCats:\n            p.catIds = list(np.unique(p.catIds))\n        p.maxDets = sorted(p.maxDets)\n        self.params=p\n\n        self._prepare()\n        # loop through images, area range, max detection number\n        catIds = p.catIds if p.useCats else [-1]\n\n        computeIoU = self.computeIoU\n        self.ious = {(imgId, catId): computeIoU(imgId, catId) \\\n                        for imgId in p.imgIds\n                        for catId in catIds}\n\n        evaluateImg = self.evaluateImg\n        maxDet = p.maxDets[-1]\n        self.evalImgs = [evaluateImg(imgId, catId, areaRng, maxDet)\n                 for catId in catIds\n                 for areaRng in p.areaRng\n                 for imgId in p.imgIds\n             ]\n        self._paramsEval = copy.deepcopy(self.params)\n        toc = time.time()\n        print 'DONE (t=%0.2fs).'%(toc-tic)\n\n    def computeIoU(self, imgId, catId):\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return []\n        dt = sorted(dt, key=lambda x: -x['score'])\n        if len(dt) > p.maxDets[-1]:\n            dt=dt[0:p.maxDets[-1]]\n\n        if p.useSegm:\n            g = [g['segmentation'] for g in gt]\n            d = [d['segmentation'] for d in dt]\n        else:\n            g = [g['bbox'] for g in gt]\n            d = [d['bbox'] for d in dt]\n\n        # compute iou between each dt and gt region\n        iscrowd = [int(o['iscrowd']) for o in gt]\n 
       ious = mask.iou(d,g,iscrowd)\n        return ious\n\n    def evaluateImg(self, imgId, catId, aRng, maxDet):\n        '''\n        perform evaluation for single category and image\n        :return: dict (single image results)\n        '''\n        #\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return None\n\n        for g in gt:\n            if 'ignore' not in g:\n                g['ignore'] = 0\n            if g['iscrowd'] == 1 or g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]):\n                g['_ignore'] = 1\n            else:\n                g['_ignore'] = 0\n\n        # sort dt highest score first, sort gt ignore last\n        # gt = sorted(gt, key=lambda x: x['_ignore'])\n        gtind = [ind for (ind, g) in sorted(enumerate(gt), key=lambda (ind, g): g['_ignore']) ]\n\n        gt = [gt[ind] for ind in gtind]\n        dt = sorted(dt, key=lambda x: -x['score'])[0:maxDet]\n        iscrowd = [int(o['iscrowd']) for o in gt]\n        # load computed ious\n        N_iou = len(self.ious[imgId, catId])\n        ious = self.ious[imgId, catId][0:maxDet, np.array(gtind)] if N_iou >0 else self.ious[imgId, catId]\n\n        T = len(p.iouThrs)\n        G = len(gt)\n        D = len(dt)\n        gtm  = np.zeros((T,G))\n        dtm  = np.zeros((T,D))\n        gtIg = np.array([g['_ignore'] for g in gt])\n        dtIg = np.zeros((T,D))\n        if not len(ious)==0:\n            for tind, t in enumerate(p.iouThrs):\n                for dind, d in enumerate(dt):\n                    # information about best match so far (m=-1 -> unmatched)\n                    iou = min([t,1-1e-10])\n                    m   = -1\n                    for gind, g in enumerate(gt):\n        
                # if this gt already matched, and not a crowd, continue\n                        if gtm[tind,gind]>0 and not iscrowd[gind]:\n                            continue\n                        # if dt matched to reg gt, and on ignore gt, stop\n                        if m>-1 and gtIg[m]==0 and gtIg[gind]==1:\n                            break\n                        # continue to next gt unless better match made\n                        if ious[dind,gind] < iou:\n                            continue\n                        # match successful and best so far, store appropriately\n                        iou=ious[dind,gind]\n                        m=gind\n                    # if match made store id of match for both dt and gt\n                    if m ==-1:\n                        continue\n                    dtIg[tind,dind] = gtIg[m]\n                    dtm[tind,dind]  = gt[m]['id']\n                    gtm[tind,m]     = d['id']\n        # set unmatched detections outside of area range to ignore\n        a = np.array([d['area']<aRng[0] or d['area']>aRng[1] for d in dt]).reshape((1, len(dt)))\n        dtIg = np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0)))\n        # store results for given image and category\n        return {\n                'image_id':     imgId,\n                'category_id':  catId,\n                'aRng':         aRng,\n                'maxDet':       maxDet,\n                'dtIds':        [d['id'] for d in dt],\n                'gtIds':        [g['id'] for g in gt],\n                'dtMatches':    dtm,\n                'gtMatches':    gtm,\n                'dtScores':     [d['score'] for d in dt],\n                'gtIgnore':     gtIg,\n                'dtIgnore':     dtIg,\n            }\n\n    def accumulate(self, p = None):\n        '''\n        Accumulate per image evaluation results and store the result in self.eval\n        :param p: input params for evaluation\n        :return: None\n        '''\n      
  print 'Accumulating evaluation results...   '\n        tic = time.time()\n        if not self.evalImgs:\n            print 'Please run evaluate() first'\n        # allows input customized parameters\n        if p is None:\n            p = self.params\n        p.catIds = p.catIds if p.useCats == 1 else [-1]\n        T           = len(p.iouThrs)\n        R           = len(p.recThrs)\n        K           = len(p.catIds) if p.useCats else 1\n        A           = len(p.areaRng)\n        M           = len(p.maxDets)\n        precision   = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories\n        recall      = -np.ones((T,K,A,M))\n\n        # create dictionary for future indexing\n        _pe = self._paramsEval\n        catIds = _pe.catIds if _pe.useCats else [-1]\n        setK = set(catIds)\n        setA = set(map(tuple, _pe.areaRng))\n        setM = set(_pe.maxDets)\n        setI = set(_pe.imgIds)\n        # get inds to evaluate\n        k_list = [n for n, k in enumerate(p.catIds)  if k in setK]\n        m_list = [m for n, m in enumerate(p.maxDets) if m in setM]\n        a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]\n        i_list = [n for n, i in enumerate(p.imgIds)  if i in setI]\n        # K0 = len(_pe.catIds)\n        I0 = len(_pe.imgIds)\n        A0 = len(_pe.areaRng)\n        # retrieve E at each category, area range, and max number of detections\n        for k, k0 in enumerate(k_list):\n            Nk = k0*A0*I0\n            for a, a0 in enumerate(a_list):\n                Na = a0*I0\n                for m, maxDet in enumerate(m_list):\n                    E = [self.evalImgs[Nk+Na+i] for i in i_list]\n                    E = filter(None, E)\n                    if len(E) == 0:\n                        continue\n                    dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])\n\n                    # different sorting method generates slightly different results.\n                    # 
mergesort is used to be consistent as Matlab implementation.\n                    inds = np.argsort(-dtScores, kind='mergesort')\n\n                    dtm  = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]\n                    dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet]  for e in E], axis=1)[:,inds]\n                    gtIg = np.concatenate([e['gtIgnore']  for e in E])\n                    npig = len([ig for ig in gtIg if ig == 0])\n                    if npig == 0:\n                        continue\n                    tps = np.logical_and(               dtm,  np.logical_not(dtIg) )\n                    fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) )\n\n                    tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)\n                    fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float)\n                    for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):\n                        tp = np.array(tp)\n                        fp = np.array(fp)\n                        nd = len(tp)\n                        rc = tp / npig\n                        pr = tp / (fp+tp+np.spacing(1))\n                        q  = np.zeros((R,))\n\n                        if nd:\n                            recall[t,k,a,m] = rc[-1]\n                        else:\n                            recall[t,k,a,m] = 0\n\n                        # numpy is slow without cython optimization for accessing elements\n                        # use python array gets significant speed improvement\n                        pr = pr.tolist(); q = q.tolist()\n\n                        for i in range(nd-1, 0, -1):\n                            if pr[i] > pr[i-1]:\n                                pr[i-1] = pr[i]\n\n                        inds = np.searchsorted(rc, p.recThrs)\n                        try:\n                            for ri, pi in enumerate(inds):\n                                q[ri] = pr[pi]\n                        except:\n          
                  pass\n                        precision[t,:,k,a,m] = np.array(q)\n        self.eval = {\n            'params': p,\n            'counts': [T, R, K, A, M],\n            'date': datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\"),\n            'precision': precision,\n            'recall':   recall,\n        }\n        toc = time.time()\n        print 'DONE (t=%0.2fs).'%( toc-tic )\n\n    def summarize(self):\n        '''\n        Compute and display summary metrics for evaluation results.\n        Note this function can *only* be applied on the default parameter setting\n        '''\n        def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ):\n            p = self.params\n            iStr        = ' {:<18} {} @[ IoU={:<9} | area={:>6} | maxDets={:>3} ] = {}'\n            titleStr    = 'Average Precision' if ap == 1 else 'Average Recall'\n            typeStr     = '(AP)' if ap==1 else '(AR)'\n            iouStr      = '%0.2f:%0.2f'%(p.iouThrs[0], p.iouThrs[-1]) if iouThr is None else '%0.2f'%(iouThr)\n            areaStr     = areaRng\n            maxDetsStr  = '%d'%(maxDets)\n\n            aind = [i for i, aRng in enumerate(['all', 'small', 'medium', 'large']) if aRng == areaRng]\n            mind = [i for i, mDet in enumerate([1, 10, 100]) if mDet == maxDets]\n            if ap == 1:\n                # dimension of precision: [TxRxKxAxM]\n                s = self.eval['precision']\n                # IoU\n                if iouThr is not None:\n                    t = np.where(iouThr == p.iouThrs)[0]\n                    s = s[t]\n                # areaRng\n                s = s[:,:,:,aind,mind]\n            else:\n                # dimension of recall: [TxKxAxM]\n                s = self.eval['recall']\n                s = s[:,:,aind,mind]\n            if len(s[s>-1])==0:\n                mean_s = -1\n            else:\n                mean_s = np.mean(s[s>-1])\n            print iStr.format(titleStr, typeStr, iouStr, areaStr, 
maxDetsStr, '%.3f'%(float(mean_s)))\n            return mean_s\n\n        if not self.eval:\n            raise Exception('Please run accumulate() first')\n        self.stats = np.zeros((12,))\n        self.stats[0] = _summarize(1)\n        self.stats[1] = _summarize(1,iouThr=.5)\n        self.stats[2] = _summarize(1,iouThr=.75)\n        self.stats[3] = _summarize(1,areaRng='small')\n        self.stats[4] = _summarize(1,areaRng='medium')\n        self.stats[5] = _summarize(1,areaRng='large')\n        self.stats[6] = _summarize(0,maxDets=1)\n        self.stats[7] = _summarize(0,maxDets=10)\n        self.stats[8] = _summarize(0,maxDets=100)\n        self.stats[9]  = _summarize(0,areaRng='small')\n        self.stats[10] = _summarize(0,areaRng='medium')\n        self.stats[11] = _summarize(0,areaRng='large')\n\n    def __str__(self):\n        self.summarize()\n\nclass Params:\n    '''\n    Params for coco evaluation api\n    '''\n    def __init__(self):\n        self.imgIds = []\n        self.catIds = []\n        # np.arange causes trouble.  the data point on arange is slightly larger than the true value\n        self.iouThrs = np.linspace(.5, 0.95, np.round((0.95-.5)/.05)+1, endpoint=True)\n        self.recThrs = np.linspace(.0, 1.00, np.round((1.00-.0)/.01)+1, endpoint=True)\n        self.maxDets = [1,10,100]\n        self.areaRng = [ [0**2,1e5**2], [0**2, 32**2], [32**2, 96**2], [96**2, 1e5**2] ]\n        self.useSegm = 0\n        self.useCats = 1"
  },
  {
    "path": "lib/pycocotools/license.txt",
    "content": "Copyright (c) 2014, Piotr Dollar and Tsung-Yi Lin\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met: \n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer. \n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution. \n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nThe views and conclusions contained in the software and documentation are those\nof the authors and should not be interpreted as representing official policies, \neither expressed or implied, of the FreeBSD Project.\n"
  },
  {
    "path": "lib/pycocotools/mask.py",
    "content": "__author__ = 'tsungyi'\n\nimport pycocotools._mask as _mask\n\n# Interface for manipulating masks stored in RLE format.\n#\n# RLE is a simple yet efficient format for storing binary masks. RLE\n# first divides a vector (or vectorized image) into a series of piecewise\n# constant regions and then for each piece simply stores the length of\n# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would\n# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1]\n# (note that the odd counts are always the numbers of zeros). Instead of\n# storing the counts directly, additional compression is achieved with a\n# variable bitrate representation based on a common scheme called LEB128.\n#\n# Compression is greatest given large piecewise constant regions.\n# Specifically, the size of the RLE is proportional to the number of\n# *boundaries* in M (or for an image the number of boundaries in the y\n# direction). Assuming fairly simple shapes, the RLE representation is\n# O(sqrt(n)) where n is number of pixels in the object. Hence space usage\n# is substantially lower, especially for large simple objects (large n).\n#\n# Many common operations on masks can be computed directly using the RLE\n# (without need for decoding). This includes computations such as area,\n# union, intersection, etc. All of these operations are linear in the\n# size of the RLE, in other words they are O(sqrt(n)) where n is the area\n# of the object. 
Computing these operations on the original mask is O(n).\n# Thus, using the RLE can result in substantial computational savings.\n#\n# The following API functions are defined:\n#  encode         - Encode binary masks using RLE.\n#  decode         - Decode binary masks encoded via RLE.\n#  merge          - Compute union or intersection of encoded masks.\n#  iou            - Compute intersection over union between masks.\n#  area           - Compute area of encoded masks.\n#  toBbox         - Get bounding boxes surrounding encoded masks.\n#  frPyObjects    - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.\n#\n# Usage:\n#  Rs     = encode( masks )\n#  masks  = decode( Rs )\n#  R      = merge( Rs, intersect=false )\n#  o      = iou( dt, gt, iscrowd )\n#  a      = area( Rs )\n#  bbs    = toBbox( Rs )\n#  Rs     = frPyObjects( [pyObjects], h, w )\n#\n# In the API the following formats are used:\n#  Rs      - [dict] Run-length encoding of binary masks\n#  R       - dict Run-length encoding of binary mask\n#  masks   - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order)\n#  iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore\n#  bbs     - [nx4] Bounding box(es) stored as [x y w h]\n#  poly    - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list)\n#  dt,gt   - May be either bounding boxes or encoded masks\n# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel).\n#\n# Finally, a note about the intersection over union (iou) computation.\n# The standard iou of a ground truth (gt) and detected (dt) object is\n#  iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt))\n# For \"crowd\" regions, we use a modified criteria. If a gt object is\n# marked as \"iscrowd\", we allow a dt to match any subregion of the gt.\n# Choosing gt' in the crowd gt that best matches the dt can be done using\n# gt'=intersect(dt,gt). 
Since by definition union(gt',dt)=dt, computing\n#  iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt)\n# For crowd gt regions we use this modified criteria above for the iou.\n#\n# To compile run \"python setup.py build_ext --inplace\"\n# Please do not contact us for help with compiling.\n#\n# Microsoft COCO Toolbox.      version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n# Licensed under the Simplified BSD License [see coco/license.txt]\n\nencode      = _mask.encode\ndecode      = _mask.decode\niou         = _mask.iou\nmerge       = _mask.merge\narea        = _mask.area\ntoBbox      = _mask.toBbox\nfrPyObjects = _mask.frPyObjects"
  },
  {
    "path": "lib/pycocotools/maskApi.c",
    "content": "/**************************************************************************\n* Microsoft COCO Toolbox.      version 2.0\n* Data, paper, and tutorials available at:  http://mscoco.org/\n* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n* Licensed under the Simplified BSD License [see coco/license.txt]\n**************************************************************************/\n#include \"maskApi.h\"\n#include <math.h>\n#include <stdlib.h>\n\nuint umin( uint a, uint b ) { return (a<b) ? a : b; }\nuint umax( uint a, uint b ) { return (a>b) ? a : b; }\n\nvoid rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ) {\n  R->h=h; R->w=w; R->m=m; R->cnts=(m==0)?0:malloc(sizeof(uint)*m);\n  if(cnts) for(siz j=0; j<m; j++) R->cnts[j]=cnts[j];\n}\n\nvoid rleFree( RLE *R ) {\n  free(R->cnts); R->cnts=0;\n}\n\nvoid rlesInit( RLE **R, siz n ) {\n  *R = (RLE*) malloc(sizeof(RLE)*n);\n  for(siz i=0; i<n; i++) rleInit((*R)+i,0,0,0,0);\n}\n\nvoid rlesFree( RLE **R, siz n ) {\n  for(siz i=0; i<n; i++) rleFree((*R)+i); free(*R); *R=0;\n}\n\nvoid rleEncode( RLE *R, const byte *M, siz h, siz w, siz n ) {\n  siz i, j, k, a=w*h; uint c, *cnts; byte p;\n  cnts = malloc(sizeof(uint)*(a+1));\n  for(i=0; i<n; i++) {\n    const byte *T=M+a*i; k=0; p=0; c=0;\n    for(j=0; j<a; j++) { if(T[j]!=p) { cnts[k++]=c; c=0; p=T[j]; } c++; }\n    cnts[k++]=c; rleInit(R+i,h,w,k,cnts);\n  }\n  free(cnts);\n}\n\nvoid rleDecode( const RLE *R, byte *M, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    byte v=0; for( siz j=0; j<R[i].m; j++ ) {\n      for( siz k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; v=!v; }}\n}\n\nvoid rleMerge( const RLE *R, RLE *M, siz n, bool intersect ) {\n  uint *cnts, c, ca, cb, cc, ct; bool v, va, vb, vp;\n  siz i, a, b, h=R[0].h, w=R[0].w, m=R[0].m; RLE A, B;\n  if(n==0) { rleInit(M,0,0,0,0); return; }\n  if(n==1) { rleInit(M,h,w,m,R[0].cnts); return; }\n  cnts = malloc(sizeof(uint)*(h*w+1));\n  for( a=0; a<m; a++ ) cnts[a]=R[0].cnts[a];\n  for( i=1; i<n; i++ ) {\n    B=R[i]; 
if(B.h!=h||B.w!=w) { h=w=m=0; break; }\n    rleInit(&A,h,w,m,cnts); ca=A.cnts[0]; cb=B.cnts[0];\n    v=va=vb=0; m=0; a=b=1; cc=0; ct=1;\n    while( ct>0 ) {\n      c=umin(ca,cb); cc+=c; ct=0;\n      ca-=c; if(!ca && a<A.m) { ca=A.cnts[a++]; va=!va; } ct+=ca;\n      cb-=c; if(!cb && b<B.m) { cb=B.cnts[b++]; vb=!vb; } ct+=cb;\n      vp=v; if(intersect) v=va&&vb; else v=va||vb;\n      if( v!=vp||ct==0 ) { cnts[m++]=cc; cc=0; }\n    }\n    rleFree(&A);\n  }\n  rleInit(M,h,w,m,cnts); free(cnts);\n}\n\nvoid rleArea( const RLE *R, siz n, uint *a ) {\n  for( siz i=0; i<n; i++ ) {\n    a[i]=0; for( siz j=1; j<R[i].m; j+=2 ) a[i]+=R[i].cnts[j]; }\n}\n\nvoid rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o ) {\n  siz g, d; BB db, gb; bool crowd;\n  db=malloc(sizeof(double)*m*4); rleToBbox(dt,db,m);\n  gb=malloc(sizeof(double)*n*4); rleToBbox(gt,gb,n);\n  bbIou(db,gb,m,n,iscrowd,o); free(db); free(gb);\n  for( g=0; g<n; g++ ) for( d=0; d<m; d++ ) if(o[g*m+d]>0) {\n    crowd=iscrowd!=NULL && iscrowd[g];\n    if(dt[d].h!=gt[g].h || dt[d].w!=gt[g].w) { o[g*m+d]=-1; continue; }\n    siz ka, kb, a, b; uint c, ca, cb, ct, i, u; bool va, vb;\n    ca=dt[d].cnts[0]; ka=dt[d].m; va=vb=0;\n    cb=gt[g].cnts[0]; kb=gt[g].m; a=b=1; i=u=0; ct=1;\n    while( ct>0 ) {\n      c=umin(ca,cb); if(va||vb) { u+=c; if(va&&vb) i+=c; } ct=0;\n      ca-=c; if(!ca && a<ka) { ca=dt[d].cnts[a++]; va=!va; } ct+=ca;\n      cb-=c; if(!cb && b<kb) { cb=gt[g].cnts[b++]; vb=!vb; } ct+=cb;\n    }\n    if(i==0) u=1; else if(crowd) rleArea(dt+d,1,&u);\n    o[g*m+d] = (double)i/(double)u;\n  }\n}\n\nvoid bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) {\n  double h, w, i, u, ga, da; siz g, d; bool crowd;\n  for( g=0; g<n; g++ ) {\n    BB G=gt+g*4; ga=G[2]*G[3]; crowd=iscrowd!=NULL && iscrowd[g];\n    for( d=0; d<m; d++ ) {\n      BB D=dt+d*4; da=D[2]*D[3]; o[g*m+d]=0;\n      w=fmin(D[2]+D[0],G[2]+G[0])-fmax(D[0],G[0]); if(w<=0) continue;\n      
h=fmin(D[3]+D[1],G[3]+G[1])-fmax(D[1],G[1]); if(h<=0) continue;\n      i=w*h; u = crowd ? da : da+ga-i; o[g*m+d]=i/u;\n    }\n  }\n}\n\nvoid rleToBbox( const RLE *R, BB bb, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    uint h, w, x, y, xs, ys, xe, ye, cc, t; siz j, m;\n    h=(uint)R[i].h; w=(uint)R[i].w; m=R[i].m;\n    m=((siz)(m/2))*2; xs=w; ys=h; xe=ye=0; cc=0;\n    if(m==0) { bb[4*i+0]=bb[4*i+1]=bb[4*i+2]=bb[4*i+3]=0; continue; }\n    for( j=0; j<m; j++ ) {\n      cc+=R[i].cnts[j]; t=cc-j%2; y=t%h; x=(t-y)/h;\n      xs=umin(xs,x); xe=umax(xe,x); ys=umin(ys,y); ye=umax(ye,y);\n    }\n    bb[4*i+0]=xs; bb[4*i+2]=xe-xs+1;\n    bb[4*i+1]=ys; bb[4*i+3]=ye-ys+1;\n  }\n}\n\nvoid rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    double xs=bb[4*i+0], xe=xs+bb[4*i+2];\n    double ys=bb[4*i+1], ye=ys+bb[4*i+3];\n    double xy[8] = {xs,ys,xs,ye,xe,ye,xe,ys};\n    rleFrPoly( R+i, xy, 4, h, w );\n  }\n}\n\nint uintCompare(const void *a, const void *b) {\n  uint c=*((uint*)a), d=*((uint*)b); return c>d?1:c<d?-1:0;\n}\n\nvoid rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w ) {\n  // upsample and get discrete points densely along entire boundary\n  siz j, m=0; double scale=5; int *x, *y, *u, *v; uint *a, *b;\n  x=malloc(sizeof(int)*(k+1)); y=malloc(sizeof(int)*(k+1));\n  for(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); x[k]=x[0];\n  for(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); y[k]=y[0];\n  for(j=0; j<k; j++) m+=umax(abs(x[j]-x[j+1]),abs(y[j]-y[j+1]))+1;\n  u=malloc(sizeof(int)*m); v=malloc(sizeof(int)*m); m=0;\n  for( j=0; j<k; j++ ) {\n    int xs=x[j], xe=x[j+1], ys=y[j], ye=y[j+1], dx, dy, t;\n    bool flip; double s; dx=abs(xe-xs); dy=abs(ys-ye);\n    flip = (dx>=dy && xs>xe) || (dx<dy && ys>ye);\n    if(flip) { t=xs; xs=xe; xe=t; t=ys; ys=ye; ye=t; }\n    s = dx>=dy ? 
(double)(ye-ys)/dx : (double)(xe-xs)/dy;\n    if(dx>=dy) for( int d=0; d<=dx; d++ ) {\n      t=flip?dx-d:d; u[m]=t+xs; v[m]=(int)(ys+s*t+.5); m++;\n    } else for( int d=0; d<=dy; d++ ) {\n      t=flip?dy-d:d; v[m]=t+ys; u[m]=(int)(xs+s*t+.5); m++;\n    }\n  }\n  // get points along y-boundary and downsample\n  free(x); free(y); k=m; m=0; double xd, yd;\n  x=malloc(sizeof(int)*k); y=malloc(sizeof(int)*k);\n  for( j=1; j<k; j++ ) if(u[j]!=u[j-1]) {\n    xd=(double)(u[j]<u[j-1]?u[j]:u[j]-1); xd=(xd+.5)/scale-.5;\n    if( floor(xd)!=xd || xd<0 || xd>w-1 ) continue;\n    yd=(double)(v[j]<v[j-1]?v[j]:v[j-1]); yd=(yd+.5)/scale-.5;\n    if(yd<0) yd=0; else if(yd>h) yd=h; yd=ceil(yd);\n    x[m]=(int) xd; y[m]=(int) yd; m++;\n  }\n  // compute rle encoding given y-boundary points\n  k=m; a=malloc(sizeof(uint)*(k+1));\n  for( j=0; j<k; j++ ) a[j]=(uint)(x[j]*(int)(h)+y[j]);\n  a[k++]=(uint)(h*w); free(u); free(v); free(x); free(y);\n  qsort(a,k,sizeof(uint),uintCompare); uint p=0;\n  for( j=0; j<k; j++ ) { uint t=a[j]; a[j]-=p; p=t; }\n  b=malloc(sizeof(uint)*k); j=m=0; b[m++]=a[j++];\n  while(j<k) if(a[j]>0) b[m++]=a[j++]; else {\n    j++; if(j<k) b[m-1]+=a[j++]; }\n  rleInit(R,h,w,m,b); free(a); free(b);\n}\n\nchar* rleToString( const RLE *R ) {\n  // Similar to LEB128 but using 6 bits/char and ascii chars 48-111.\n  siz i, m=R->m, p=0; long x; bool more;\n  char *s=malloc(sizeof(char)*m*6);\n  for( i=0; i<m; i++ ) {\n    x=(long) R->cnts[i]; if(i>2) x-=(long) R->cnts[i-2]; more=1;\n    while( more ) {\n      char c=x & 0x1f; x >>= 5; more=(c & 0x10) ? 
x!=-1 : x!=0;\n      if(more) c |= 0x20; c+=48; s[p++]=c;\n    }\n  }\n  s[p]=0; return s;\n}\n\nvoid rleFrString( RLE *R, char *s, siz h, siz w ) {\n  siz m=0, p=0, k; long x; bool more; uint *cnts;\n  while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0;\n  while( s[p] ) {\n    x=0; k=0; more=1;\n    while( more ) {\n      char c=s[p]-48; x |= (c & 0x1f) << 5*k;\n      more = c & 0x20; p++; k++;\n      if(!more && (c & 0x10)) x |= -1 << 5*k;\n    }\n    if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x;\n  }\n  rleInit(R,h,w,m,cnts); free(cnts);\n}\n"
  },
  {
    "path": "lib/pycocotools/maskApi.h",
    "content": "/**************************************************************************\n* Microsoft COCO Toolbox.      version 2.0\n* Data, paper, and tutorials available at:  http://mscoco.org/\n* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n* Licensed under the Simplified BSD License [see coco/license.txt]\n**************************************************************************/\n#pragma once\n#include <stdbool.h>\n\ntypedef unsigned int uint;\ntypedef unsigned long siz;\ntypedef unsigned char byte;\ntypedef double* BB;\ntypedef struct { siz h, w, m; uint *cnts; } RLE;\n\n// Initialize/destroy RLE.\nvoid rleInit( RLE *R, siz h, siz w, siz m, uint *cnts );\nvoid rleFree( RLE *R );\n\n// Initialize/destroy RLE array.\nvoid rlesInit( RLE **R, siz n );\nvoid rlesFree( RLE **R, siz n );\n\n// Encode binary masks using RLE.\nvoid rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n );\n\n// Decode binary masks encoded via RLE.\nvoid rleDecode( const RLE *R, byte *mask, siz n );\n\n// Compute union or intersection of encoded masks.\nvoid rleMerge( const RLE *R, RLE *M, siz n, bool intersect );\n\n// Compute area of encoded masks.\nvoid rleArea( const RLE *R, siz n, uint *a );\n\n// Compute intersection over union between masks.\nvoid rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o );\n\n// Compute intersection over union between bounding boxes.\nvoid bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o );\n\n// Get bounding boxes surrounding encoded masks.\nvoid rleToBbox( const RLE *R, BB bb, siz n );\n\n// Convert bounding boxes to encoded masks.\nvoid rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n );\n\n// Convert polygon to encoded mask.\nvoid rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w );\n\n// Get compressed string representation of encoded mask.\nchar* rleToString( const RLE *R );\n\n// Convert from compressed string representation of encoded mask.\nvoid rleFrString( RLE *R, char *s, siz h, siz w );\n"
  },
  {
    "path": "lib/roi_data_layer/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/roi_data_layer/layer.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"The data layer used during training to train a Fast R-CNN network.\n\nRoIDataLayer implements a Caffe Python layer.\n\"\"\"\n\nimport caffe\nfrom fast_rcnn.config import cfg\nfrom roi_data_layer.minibatch import get_minibatch\nimport numpy as np\nimport yaml\nfrom multiprocessing import Process, Queue\n\nclass RoIDataLayer(caffe.Layer):\n    \"\"\"Fast R-CNN data layer used for training.\"\"\"\n\n    def _shuffle_roidb_inds(self):\n        \"\"\"Randomly permute the training roidb.\"\"\"\n        if cfg.TRAIN.ASPECT_GROUPING:\n            widths = np.array([r['width'] for r in self._roidb])\n            heights = np.array([r['height'] for r in self._roidb])\n            horz = (widths >= heights)\n            vert = np.logical_not(horz)\n            horz_inds = np.where(horz)[0]\n            vert_inds = np.where(vert)[0]\n            inds = np.hstack((\n                np.random.permutation(horz_inds),\n                np.random.permutation(vert_inds)))\n            inds = np.reshape(inds, (-1, 2))\n            row_perm = np.random.permutation(np.arange(inds.shape[0]))\n            inds = np.reshape(inds[row_perm, :], (-1,))\n            self._perm = inds\n        else:\n            self._perm = np.random.permutation(np.arange(len(self._roidb)))\n        self._cur = 0\n\n    def _get_next_minibatch_inds(self):\n        \"\"\"Return the roidb indices for the next minibatch.\"\"\"\n        if self._cur + cfg.TRAIN.IMS_PER_BATCH >= len(self._roidb):\n            self._shuffle_roidb_inds()\n\n        db_inds = self._perm[self._cur:self._cur + cfg.TRAIN.IMS_PER_BATCH]\n        self._cur += cfg.TRAIN.IMS_PER_BATCH\n        return db_inds\n\n    def _get_next_minibatch(self):\n        \"\"\"Return 
the blobs to be used for the next minibatch.\n\n        If cfg.TRAIN.USE_PREFETCH is True, then blobs will be computed in a\n        separate process and made available through self._blob_queue.\n        \"\"\"\n        if cfg.TRAIN.USE_PREFETCH:\n            return self._blob_queue.get()\n        else:\n            db_inds = self._get_next_minibatch_inds()\n            minibatch_db = [self._roidb[i] for i in db_inds]\n            return get_minibatch(minibatch_db, self._num_classes)\n\n    def set_roidb(self, roidb):\n        \"\"\"Set the roidb to be used by this layer during training.\"\"\"\n        self._roidb = roidb\n        self._shuffle_roidb_inds()\n        if cfg.TRAIN.USE_PREFETCH:\n            self._blob_queue = Queue(10)\n            self._prefetch_process = BlobFetcher(self._blob_queue,\n                                                 self._roidb,\n                                                 self._num_classes)\n            self._prefetch_process.start()\n            # Terminate the child process when the parent exits\n            def cleanup():\n                print 'Terminating BlobFetcher'\n                self._prefetch_process.terminate()\n                self._prefetch_process.join()\n            import atexit\n            atexit.register(cleanup)\n\n    def setup(self, bottom, top):\n        \"\"\"Setup the RoIDataLayer.\"\"\"\n\n        # parse the layer parameter string, which must be valid YAML\n        layer_params = yaml.load(self.param_str_)\n\n        self._num_classes = layer_params['num_classes']\n\n        self._name_to_top_map = {}\n\n        # data blob: holds a batch of N images, each with 3 channels\n        idx = 0\n        top[idx].reshape(cfg.TRAIN.IMS_PER_BATCH, 3,\n            max(cfg.TRAIN.SCALES), cfg.TRAIN.MAX_SIZE)\n        self._name_to_top_map['data'] = idx\n        idx += 1\n\n        if cfg.TRAIN.HAS_RPN:\n            top[idx].reshape(1, 3)\n            self._name_to_top_map['im_info'] = idx\n            idx += 
1\n\n            top[idx].reshape(1, 4)\n            self._name_to_top_map['gt_boxes'] = idx\n            idx += 1\n        else: # not using RPN\n            # rois blob: holds R regions of interest, each is a 5-tuple\n            # (n, x1, y1, x2, y2) specifying an image batch index n and a\n            # rectangle (x1, y1, x2, y2)\n            top[idx].reshape(1, 5)\n            self._name_to_top_map['rois'] = idx\n            idx += 1\n\n            # labels blob: R categorical labels in [0, ..., K] for K foreground\n            # classes plus background\n            top[idx].reshape(1)\n            self._name_to_top_map['labels'] = idx\n            idx += 1\n\n            if cfg.TRAIN.BBOX_REG:\n                # bbox_targets blob: R bounding-box regression targets with 4\n                # targets per class\n                top[idx].reshape(1, self._num_classes * 4)\n                self._name_to_top_map['bbox_targets'] = idx\n                idx += 1\n\n                # bbox_inside_weights blob: At most 4 targets per roi are active;\n                # this binary vector specifies the subset of active targets\n                top[idx].reshape(1, self._num_classes * 4)\n                self._name_to_top_map['bbox_inside_weights'] = idx\n                idx += 1\n\n                top[idx].reshape(1, self._num_classes * 4)\n                self._name_to_top_map['bbox_outside_weights'] = idx\n                idx += 1\n\n        print 'RoiDataLayer: name_to_top:', self._name_to_top_map\n        assert len(top) == len(self._name_to_top_map)\n\n    def forward(self, bottom, top):\n        \"\"\"Get blobs and copy them into this layer's top blob vector.\"\"\"\n        blobs = self._get_next_minibatch()\n\n        for blob_name, blob in blobs.iteritems():\n            top_ind = self._name_to_top_map[blob_name]\n            # Reshape net's input blobs\n            top[top_ind].reshape(*(blob.shape))\n            # Copy data into net's input blobs\n            
top[top_ind].data[...] = blob.astype(np.float32, copy=False)\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\nclass BlobFetcher(Process):\n    \"\"\"Experimental class for prefetching blobs in a separate process.\"\"\"\n    def __init__(self, queue, roidb, num_classes):\n        super(BlobFetcher, self).__init__()\n        self._queue = queue\n        self._roidb = roidb\n        self._num_classes = num_classes\n        self._perm = None\n        self._cur = 0\n        self._shuffle_roidb_inds()\n        # fix the random seed for reproducibility\n        np.random.seed(cfg.RNG_SEED)\n\n    def _shuffle_roidb_inds(self):\n        \"\"\"Randomly permute the training roidb.\"\"\"\n        # TODO(rbg): remove duplicated code\n        self._perm = np.random.permutation(np.arange(len(self._roidb)))\n        self._cur = 0\n\n    def _get_next_minibatch_inds(self):\n        \"\"\"Return the roidb indices for the next minibatch.\"\"\"\n        # TODO(rbg): remove duplicated code\n        if self._cur + cfg.TRAIN.IMS_PER_BATCH >= len(self._roidb):\n            self._shuffle_roidb_inds()\n\n        db_inds = self._perm[self._cur:self._cur + cfg.TRAIN.IMS_PER_BATCH]\n        self._cur += cfg.TRAIN.IMS_PER_BATCH\n        return db_inds\n\n    def run(self):\n        print 'BlobFetcher started'\n        while True:\n            db_inds = self._get_next_minibatch_inds()\n            minibatch_db = [self._roidb[i] for i in db_inds]\n            blobs = get_minibatch(minibatch_db, self._num_classes)\n            self._queue.put(blobs)\n"
  },
  {
    "path": "lib/roi_data_layer/minibatch.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Compute minibatch blobs for training a Fast R-CNN network.\"\"\"\n\nimport numpy as np\nimport numpy.random as npr\nimport cv2\nfrom fast_rcnn.config import cfg\nfrom utils.blob import prep_im_for_blob, im_list_to_blob\n\ndef get_minibatch(roidb, num_classes):\n    \"\"\"Given a roidb, construct a minibatch sampled from it.\"\"\"\n    num_images = len(roidb)\n    # Sample random scales to use for each image in this batch\n    random_scale_inds = npr.randint(0, high=len(cfg.TRAIN.SCALES),\n                                    size=num_images)\n    assert(cfg.TRAIN.BATCH_SIZE % num_images == 0), \\\n        'num_images ({}) must divide BATCH_SIZE ({})'. \\\n        format(num_images, cfg.TRAIN.BATCH_SIZE)\n    rois_per_image = cfg.TRAIN.BATCH_SIZE / num_images\n    fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image)\n\n    # Get the input image blob, formatted for caffe\n    im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)\n\n    blobs = {'data': im_blob}\n\n    if cfg.TRAIN.HAS_RPN:\n        assert len(im_scales) == 1, \"Single batch only\"\n        assert len(roidb) == 1, \"Single batch only\"\n        # gt boxes: (x1, y1, x2, y2, cls)\n        gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]\n        gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)\n        gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :] * im_scales[0]\n        gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]\n        blobs['gt_boxes'] = gt_boxes\n        blobs['im_info'] = np.array(\n            [[im_blob.shape[2], im_blob.shape[3], im_scales[0]]],\n            dtype=np.float32)\n    else: # not using RPN\n        # Now, build the region of interest and label blobs\n        rois_blob 
= np.zeros((0, 5), dtype=np.float32)\n        labels_blob = np.zeros((0), dtype=np.float32)\n        bbox_targets_blob = np.zeros((0, 4 * num_classes), dtype=np.float32)\n        bbox_inside_blob = np.zeros(bbox_targets_blob.shape, dtype=np.float32)\n        # all_overlaps = []\n        for im_i in xrange(num_images):\n            labels, overlaps, im_rois, bbox_targets, bbox_inside_weights \\\n                = _sample_rois(roidb[im_i], fg_rois_per_image, rois_per_image,\n                               num_classes)\n\n            # Add to RoIs blob\n            rois = _project_im_rois(im_rois, im_scales[im_i])\n            batch_ind = im_i * np.ones((rois.shape[0], 1))\n            rois_blob_this_image = np.hstack((batch_ind, rois))\n            rois_blob = np.vstack((rois_blob, rois_blob_this_image))\n\n            # Add to labels, bbox targets, and bbox loss blobs\n            labels_blob = np.hstack((labels_blob, labels))\n            bbox_targets_blob = np.vstack((bbox_targets_blob, bbox_targets))\n            bbox_inside_blob = np.vstack((bbox_inside_blob, bbox_inside_weights))\n            # all_overlaps = np.hstack((all_overlaps, overlaps))\n\n        # For debug visualizations\n        # _vis_minibatch(im_blob, rois_blob, labels_blob, all_overlaps)\n\n        blobs['rois'] = rois_blob\n        blobs['labels'] = labels_blob\n\n        if cfg.TRAIN.BBOX_REG:\n            blobs['bbox_targets'] = bbox_targets_blob\n            blobs['bbox_inside_weights'] = bbox_inside_blob\n            blobs['bbox_outside_weights'] = \\\n                np.array(bbox_inside_blob > 0).astype(np.float32)\n\n    return blobs\n\ndef _sample_rois(roidb, fg_rois_per_image, rois_per_image, num_classes):\n    \"\"\"Generate a random sample of RoIs comprising foreground and background\n    examples.\n    \"\"\"\n    # label = class RoI has max overlap with\n    labels = roidb['max_classes']\n    overlaps = roidb['max_overlaps']\n    rois = roidb['boxes']\n\n    # Select foreground 
RoIs as those with >= FG_THRESH overlap\n    fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]\n    # Guard against the case when an image has fewer than fg_rois_per_image\n    # foreground RoIs\n    fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_inds.size)\n    # Sample foreground regions without replacement\n    if fg_inds.size > 0:\n        fg_inds = npr.choice(\n                fg_inds, size=fg_rois_per_this_image, replace=False)\n\n    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\n    bg_inds = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) &\n                       (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]\n    # Compute number of background RoIs to take from this image (guarding\n    # against there being fewer than desired)\n    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\n    bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,\n                                        bg_inds.size)\n    # Sample background regions without replacement\n    if bg_inds.size > 0:\n        bg_inds = npr.choice(\n                bg_inds, size=bg_rois_per_this_image, replace=False)\n\n    # The indices that we're selecting (both fg and bg)\n    keep_inds = np.append(fg_inds, bg_inds)\n    # Select sampled values from various arrays:\n    labels = labels[keep_inds]\n    # Clamp labels for the background RoIs to 0\n    labels[fg_rois_per_this_image:] = 0\n    overlaps = overlaps[keep_inds]\n    rois = rois[keep_inds]\n\n    bbox_targets, bbox_inside_weights = _get_bbox_regression_labels(\n            roidb['bbox_targets'][keep_inds, :], num_classes)\n\n    return labels, overlaps, rois, bbox_targets, bbox_inside_weights\n\ndef _get_image_blob(roidb, scale_inds):\n    \"\"\"Builds an input blob from the images in the roidb at the specified\n    scales.\n    \"\"\"\n    num_images = len(roidb)\n    processed_ims = []\n    im_scales = []\n    for i in xrange(num_images):\n        im = cv2.imread(roidb[i]['image'])\n     
   if roidb[i]['flipped']:\n            im = im[:, ::-1, :]\n        target_size = cfg.TRAIN.SCALES[scale_inds[i]]\n        im, im_scale = prep_im_for_blob(im, cfg.PIXEL_MEANS, target_size,\n                                        cfg.TRAIN.MAX_SIZE,cfg.TRAIN.IMAGE_STRIDE)\n        im_scales.append(im_scale)\n        processed_ims.append(im)\n\n    # Create a blob to hold the input images\n    blob = im_list_to_blob(processed_ims)\n    \n\n    return blob, im_scales\n\ndef _project_im_rois(im_rois, im_scale_factor):\n    \"\"\"Project image RoIs into the rescaled training image.\"\"\"\n    rois = im_rois * im_scale_factor\n    return rois\n\ndef _get_bbox_regression_labels(bbox_target_data, num_classes):\n    \"\"\"Bounding-box regression targets are stored in a compact form in the\n    roidb.\n\n    This function expands those targets into the 4-of-4*K representation used\n    by the network (i.e. only one class has non-zero targets). The loss weights\n    are similarly expanded.\n\n    Returns:\n        bbox_target_data (ndarray): N x 4K blob of regression targets\n        bbox_inside_weights (ndarray): N x 4K blob of loss weights\n    \"\"\"\n    clss = bbox_target_data[:, 0]\n    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)\n    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)\n    inds = np.where(clss > 0)[0]\n    for ind in inds:\n        cls = clss[ind]\n        start = 4 * cls\n        end = start + 4\n        bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]\n        bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS\n    return bbox_targets, bbox_inside_weights\n\ndef _vis_minibatch(im_blob, rois_blob, labels_blob, overlaps):\n    \"\"\"Visualize a mini-batch for debugging.\"\"\"\n    import matplotlib.pyplot as plt\n    for i in xrange(rois_blob.shape[0]):\n        rois = rois_blob[i, :]\n        im_ind = rois[0]\n        roi = rois[1:]\n        im = im_blob[im_ind, :, :, 
:].transpose((1, 2, 0)).copy()\n        im += cfg.PIXEL_MEANS\n        im = im[:, :, (2, 1, 0)]\n        im = im.astype(np.uint8)\n        cls = labels_blob[i]\n        plt.imshow(im)\n        print 'class: ', cls, ' overlap: ', overlaps[i]\n        plt.gca().add_patch(\n            plt.Rectangle((roi[0], roi[1]), roi[2] - roi[0],\n                          roi[3] - roi[1], fill=False,\n                          edgecolor='r', linewidth=3)\n            )\n        plt.show()\n"
  },
  {
    "path": "lib/roi_data_layer/roidb.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Transform a roidb into a trainable roidb by adding a bunch of metadata.\"\"\"\n\nimport numpy as np\nfrom fast_rcnn.config import cfg\nfrom fast_rcnn.bbox_transform import bbox_transform\nfrom utils.cython_bbox import bbox_overlaps\nimport PIL\n\ndef prepare_roidb(imdb):\n    \"\"\"Enrich the imdb's roidb by adding some derived quantities that\n    are useful for training. This function precomputes the maximum\n    overlap, taken over ground-truth boxes, between each ROI and\n    each ground-truth box. The class with maximum overlap is also\n    recorded.\n    \"\"\"\n    sizes = [PIL.Image.open(imdb.image_path_at(i)).size\n             for i in xrange(imdb.num_images)]\n    roidb = imdb.roidb\n    for i in xrange(len(imdb.image_index)):\n        roidb[i]['image'] = imdb.image_path_at(i)\n        roidb[i]['width'] = sizes[i][0]\n        roidb[i]['height'] = sizes[i][1]\n        # need gt_overlaps as a dense array for argmax\n        gt_overlaps = roidb[i]['gt_overlaps'].toarray()\n        # max overlap with gt over classes (columns)\n        max_overlaps = gt_overlaps.max(axis=1)\n        # gt class that had the max overlap\n        max_classes = gt_overlaps.argmax(axis=1)\n        roidb[i]['max_classes'] = max_classes\n        roidb[i]['max_overlaps'] = max_overlaps\n        # sanity checks\n        # max overlap of 0 => class should be zero (background)\n        zero_inds = np.where(max_overlaps == 0)[0]\n        assert all(max_classes[zero_inds] == 0)\n        # max overlap > 0 => class should not be zero (must be a fg class)\n        nonzero_inds = np.where(max_overlaps > 0)[0]\n        assert all(max_classes[nonzero_inds] != 0)\n\ndef add_bbox_regression_targets(roidb):\n    \"\"\"Add 
information needed to train bounding-box regressors.\"\"\"\n    assert len(roidb) > 0\n    assert 'max_classes' in roidb[0], 'Did you call prepare_roidb first?'\n\n    num_images = len(roidb)\n    # Infer number of classes from the number of columns in gt_overlaps\n    num_classes = roidb[0]['gt_overlaps'].shape[1]\n    for im_i in xrange(num_images):\n        rois = roidb[im_i]['boxes']\n        max_overlaps = roidb[im_i]['max_overlaps']\n        max_classes = roidb[im_i]['max_classes']\n        roidb[im_i]['bbox_targets'] = \\\n                _compute_targets(rois, max_overlaps, max_classes)\n\n    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n        # Use fixed / precomputed \"means\" and \"stds\" instead of empirical values\n        means = np.tile(\n                np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (num_classes, 1))\n        stds = np.tile(\n                np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (num_classes, 1))\n    else:\n        # Compute values needed for means and stds\n        # var(x) = E(x^2) - E(x)^2\n        class_counts = np.zeros((num_classes, 1)) + cfg.EPS\n        sums = np.zeros((num_classes, 4))\n        squared_sums = np.zeros((num_classes, 4))\n        for im_i in xrange(num_images):\n            targets = roidb[im_i]['bbox_targets']\n            for cls in xrange(1, num_classes):\n                cls_inds = np.where(targets[:, 0] == cls)[0]\n                if cls_inds.size > 0:\n                    class_counts[cls] += cls_inds.size\n                    sums[cls, :] += targets[cls_inds, 1:].sum(axis=0)\n                    squared_sums[cls, :] += \\\n                            (targets[cls_inds, 1:] ** 2).sum(axis=0)\n\n        means = sums / class_counts\n        stds = np.sqrt(squared_sums / class_counts - means ** 2)\n\n    print 'bbox target means:'\n    print means\n    print means[1:, :].mean(axis=0) # ignore bg class\n    print 'bbox target stdevs:'\n    print stds\n    print stds[1:, :].mean(axis=0) # ignore bg 
class\n\n    # Normalize targets\n    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS:\n        print \"Normalizing targets\"\n        for im_i in xrange(num_images):\n            targets = roidb[im_i]['bbox_targets']\n            for cls in xrange(1, num_classes):\n                cls_inds = np.where(targets[:, 0] == cls)[0]\n                roidb[im_i]['bbox_targets'][cls_inds, 1:] -= means[cls, :]\n                roidb[im_i]['bbox_targets'][cls_inds, 1:] /= stds[cls, :]\n    else:\n        print \"NOT normalizing targets\"\n\n    # These values will be needed for making predictions\n    # (the predictions will need to be unnormalized and uncentered)\n    return means.ravel(), stds.ravel()\n\ndef _compute_targets(rois, overlaps, labels):\n    \"\"\"Compute bounding-box regression targets for an image.\"\"\"\n    # Indices of ground-truth ROIs\n    gt_inds = np.where(overlaps == 1)[0]\n    if len(gt_inds) == 0:\n        # Bail if the image has no ground-truth ROIs\n        return np.zeros((rois.shape[0], 5), dtype=np.float32)\n    # Indices of examples for which we try to make predictions\n    ex_inds = np.where(overlaps >= cfg.TRAIN.BBOX_THRESH)[0]\n\n    # Get IoU overlap between each ex ROI and gt ROI\n    ex_gt_overlaps = bbox_overlaps(\n        np.ascontiguousarray(rois[ex_inds, :], dtype=np.float),\n        np.ascontiguousarray(rois[gt_inds, :], dtype=np.float))\n\n    # Find which gt ROI each ex ROI has max overlap with:\n    # this will be the ex ROI's gt target\n    gt_assignment = ex_gt_overlaps.argmax(axis=1)\n    gt_rois = rois[gt_inds[gt_assignment], :]\n    ex_rois = rois[ex_inds, :]\n\n    targets = np.zeros((rois.shape[0], 5), dtype=np.float32)\n    targets[ex_inds, 0] = labels[ex_inds]\n    targets[ex_inds, 1:] = bbox_transform(ex_rois, gt_rois)\n    return targets\n
  },
  {
    "path": "lib/rpn/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/rpn/anchor_target_layer.py",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n\nimport os\nimport caffe\nimport yaml\nfrom fast_rcnn.config import cfg\nimport numpy as np\nimport numpy.random as npr\nfrom generate_anchors import generate_anchors\nfrom utils.cython_bbox import bbox_overlaps\nfrom fast_rcnn.bbox_transform import bbox_transform\n\nDEBUG = False\n\nclass AnchorTargetLayer(caffe.Layer):\n    \"\"\"\n    Assign anchors to ground-truth targets. Produces anchor classification\n    labels and bounding-box regression targets.\n    \"\"\"\n\n    def setup(self, bottom, top):\n        layer_params = yaml.load(self.param_str_)\n        self._feat_stride = [int(i) for i in layer_params['feat_stride'].split(',')]\n        self._scales = cfg.FPNRSCALES\n        self._ratios = cfg.FPNRATIOS\n        #anchor_scales = layer_params.get('scales', (8, ))\n        # allow boxes to sit over the edge by a small amount\n        self._allowed_border = layer_params.get('allowed_border', 1000)\n\n     \n        s = 0\n        for i in range(5):\n            height, width = bottom[i].data.shape[-2:]\n            s = s + height * width\n            \n            i_anchors = generate_anchors(base_size=self._feat_stride[i], ratios=self._ratios, scales=self._scales)\n            A = i_anchors.shape[0]\n\n        # labels\n        top[0].reshape(1, 1, A * s)\n        # bbox_targets\n        top[1].reshape(1, A * 4, s)\n        # bbox_inside_weights\n        top[2].reshape(1, A * 4, s)\n        # bbox_outside_weights\n        top[3].reshape(1, A * 4, s)\n\n    def forward(self, bottom, top):\n        \n\n        assert bottom[0].data.shape[0] == 1, \\\n            'Only single item batches are supported'\n\n        # map of shape (..., H, W)\n        h = []\n        w = []\n      
  for i in range(5):\n            height, width = bottom[i].data.shape[-2:]\n            h.append(height)\n            w.append(width)\n        # GT boxes (x1, y1, x2, y2, label)\n        gt_boxes = bottom[5].data\n        # im_info\n        im_info = bottom[6].data[0, :]\n\n        all_anchors_list = []\n        inds_inside_list = []\n        total_anchors = 0\n        \n        feat_strides = self._feat_stride\n        ratios = self._ratios\n\n        scales = self._scales\n\n        fpn_args = []\n        fpn_anchors_fid = np.zeros(0).astype(int)\n        fpn_anchors = np.zeros([0, 4])\n        fpn_labels = np.zeros(0)\n        fpn_inds_inside = []\n        for feat_id in range(len(feat_strides)):\n            # len(scales.shape) == 1 just for backward compatibility, will remove in the future\n        \n            base_anchors = generate_anchors(base_size=feat_strides[feat_id], ratios=ratios, scales=scales)\n\n            num_anchors = base_anchors.shape[0]\n            feat_height = h[feat_id]\n            feat_width = w[feat_id]\n\n            # 1. 
generate proposals from bbox deltas and shifted anchors\n            shift_x = np.arange(0, feat_width) * feat_strides[feat_id]\n            shift_y = np.arange(0, feat_height) * feat_strides[feat_id]\n            shift_x, shift_y = np.meshgrid(shift_x, shift_y)\n            shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose()\n            # add A anchors (1, A, 4) to\n            # cell K shifts (K, 1, 4) to get\n            # shift anchors (K, A, 4)\n            # reshape to (K*A, 4) shifted anchors\n            A = num_anchors\n            K = shifts.shape[0]\n            all_anchors = base_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2))\n            all_anchors = all_anchors.reshape((K * A, 4))\n            total_anchors = int(K * A)\n\n            # only keep anchors inside the image\n            inds_inside = np.where((all_anchors[:, 0] >= -self._allowed_border) &\n                                (all_anchors[:, 1] >= -self._allowed_border) &\n                                (all_anchors[:, 2] < im_info[1] + self._allowed_border) &\n                                (all_anchors[:, 3] < im_info[0] + self._allowed_border))[0]\n\n            # keep only inside anchors\n            anchors = all_anchors[inds_inside, :]\n\n            # label: 1 is positive, 0 is negative, -1 is dont care\n            # for sigmoid classifier, ignore the 'background' class\n            labels = np.empty((len(inds_inside),), dtype=np.float32)\n            labels.fill(-1)\n\n            fpn_anchors_fid = np.hstack((fpn_anchors_fid, len(inds_inside)))\n            fpn_anchors = np.vstack((fpn_anchors, anchors))\n            fpn_labels = np.hstack((fpn_labels, labels))\n            fpn_inds_inside.append(inds_inside)\n            fpn_args.append([feat_height, feat_width, A, total_anchors])\n\n        if gt_boxes.size > 0:\n            # overlap between the anchors and the gt boxes\n            # overlaps (ex, 
gt)\n            overlaps = bbox_overlaps(fpn_anchors.astype(np.float), gt_boxes.astype(np.float))\n            argmax_overlaps = overlaps.argmax(axis=1)\n            max_overlaps = overlaps[np.arange(len(fpn_anchors)), argmax_overlaps]\n            gt_argmax_overlaps = overlaps.argmax(axis=0)\n            gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])]\n            gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]\n\n            if not cfg.TRAIN.RPN_CLOBBER_POSITIVES:\n                # assign bg labels first so that positive labels can clobber them\n                fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0\n            # fg label: for each gt, anchor with highest overlap\n            fpn_labels[gt_argmax_overlaps] = 1\n            # fg label: above threshold IoU\n            fpn_labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1\n            if cfg.TRAIN.RPN_CLOBBER_POSITIVES:\n                # assign bg labels last so that negative labels can clobber positives\n                fpn_labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0\n        else:\n            fpn_labels[:] = 0\n        # subsample positive labels if we have too many\n        num_fg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCHSIZE == -1 else int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCHSIZE)\n        fg_inds = np.where(fpn_labels >= 1)[0]\n        if len(fg_inds) > num_fg:\n            disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False)\n            if DEBUG:\n                disable_inds = fg_inds[:(len(fg_inds) - num_fg)]\n            fpn_labels[disable_inds] = -1\n\n        # subsample negative labels if we have too many\n        num_bg = fpn_labels.shape[0] if cfg.TRAIN.RPN_BATCHSIZE == -1 else cfg.TRAIN.RPN_BATCHSIZE - np.sum(fpn_labels >= 1)\n        bg_inds = np.where(fpn_labels == 0)[0]\n        fpn_anchors_fid = np.hstack((0, fpn_anchors_fid.cumsum()))\n\n        # if 
balance_scale_bg:\n        #     num_bg_scale = num_bg / len(feat_strides)\n        #     for feat_id in range(0, len(feat_strides)):\n        #         bg_ind_scale = bg_inds[(bg_inds >= fpn_anchors_fid[feat_id]) & (bg_inds < fpn_anchors_fid[feat_id+1])]\n        #         if len(bg_ind_scale) > num_bg_scale:\n        #             disable_inds = npr.choice(bg_ind_scale, size=(len(bg_ind_scale) - num_bg_scale), replace=False)\n        #             fpn_labels[disable_inds] = -1\n        # else:\n        if len(bg_inds) > num_bg:\n            disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)\n            if DEBUG:\n                disable_inds = bg_inds[:(len(bg_inds) - num_bg)]\n            fpn_labels[disable_inds] = -1\n\n        fpn_bbox_targets = np.zeros((len(fpn_anchors), 4), dtype=np.float32)\n        if gt_boxes.size > 0:\n            fpn_bbox_targets[fpn_labels >= 1, :] = bbox_transform(fpn_anchors[fpn_labels >= 1, :], gt_boxes[argmax_overlaps[fpn_labels >= 1], :4])\n            # fpn_bbox_targets[:] = bbox_transform(fpn_anchors, gt_boxes[argmax_overlaps, :4])\n        # fpn_bbox_targets = (fpn_bbox_targets - np.array(cfg.TRAIN.BBOX_MEANS)) / np.array(cfg.TRAIN.BBOX_STDS)\n        fpn_bbox_weights = np.zeros((len(fpn_anchors), 4), dtype=np.float32)\n\n        fpn_bbox_weights[fpn_labels >= 1, :] = np.array(cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS)\n        fpn_bbox_outside_weights = np.zeros((len(fpn_anchors), 4), dtype=np.float32)\n        if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:\n            # uniform weighting of examples (given non-uniform sampling)\n            num_examples = np.sum(fpn_labels >= 0)\n            positive_weights = np.ones((1, 4)) * 1.0 / num_examples\n            negative_weights = np.ones((1, 4)) * 1.0 / num_examples\n        else:\n            assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &\n                    (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))\n            positive_weights = (cfg.TRAIN.RPN_POSITIVE_WEIGHT /\n   
                             np.sum(fpn_labels == 1))\n            negative_weights = ((1.0 - cfg.TRAIN.RPN_POSITIVE_WEIGHT) /\n                                np.sum(fpn_labels == 0))\n        \n        fpn_bbox_outside_weights[fpn_labels == 1, :] = positive_weights\n        fpn_bbox_outside_weights[fpn_labels == 0, :] = negative_weights\n\n        label_list = []\n        bbox_target_list = []\n        bbox_weight_list = []\n        bbox_outside_weight_list = []\n        for feat_id in range(0, len(feat_strides)):\n            feat_height, feat_width, A, total_anchors = fpn_args[feat_id]\n            # map up to original set of anchors\n            labels = _unmap(fpn_labels[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=-1)\n            bbox_targets = _unmap(fpn_bbox_targets[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0)\n            bbox_weights = _unmap(fpn_bbox_weights[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0)\n            bbox_outside_weights = _unmap(fpn_bbox_outside_weights[fpn_anchors_fid[feat_id]:fpn_anchors_fid[feat_id+1]], total_anchors, fpn_inds_inside[feat_id], fill=0)\n\n            labels = labels.reshape((1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)\n            labels = labels.reshape((1, A * feat_height * feat_width))\n\n            bbox_targets = bbox_targets.reshape((1, feat_height, feat_width, A * 4)).transpose(0, 3, 1, 2)\n            bbox_targets = bbox_targets.reshape((1, A * 4, -1))\n            bbox_weights = bbox_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2))\n            bbox_weights = bbox_weights.reshape((1, A * 4, -1))\n\n            bbox_outside_weights = bbox_outside_weights.reshape((1, feat_height, feat_width, A * 4)).transpose((0, 3, 1, 2))\n            bbox_outside_weights = bbox_outside_weights.reshape((1, A * 4, -1))\n\n 
           label_list.append(labels)\n            bbox_target_list.append(bbox_targets)\n            bbox_weight_list.append(bbox_weights)\n            bbox_outside_weight_list.append(bbox_outside_weights)\n            # label.update({'label_p' + str(feat_id + feat_id_start): labels,\n            #               'bbox_target_p' + str(feat_id + feat_id_start): bbox_targets,\n            #               'bbox_weight_p' + str(feat_id + feat_id_start): bbox_weights})\n\n    \n        labels = np.concatenate(label_list, axis=1)\n        bbox_targets = np.concatenate(bbox_target_list, axis=2)\n        bbox_inside_weights =  np.concatenate(bbox_weight_list, axis=2)\n        bbox_outside_weights = np.concatenate(bbox_outside_weight_list, axis=2)\n\n        # print bbox_targets.shape\n        # print bbox_inside_weights.shape\n        # print bbox_outside_weights.shape\n        # print labels.shape\n       \n        \n        top[0].reshape(*labels.shape)\n        top[0].data[...] = labels\n\n        # bbox_targets\n\n        top[1].reshape(*bbox_targets.shape)\n        top[1].data[...] = bbox_targets\n\n        # bbox_inside_weights\n\n        top[2].reshape(*bbox_inside_weights.shape)\n        top[2].data[...] = bbox_inside_weights\n\n        # bbox_outside_weights\n\n        top[3].reshape(*bbox_outside_weights.shape)\n        top[3].data[...] 
= bbox_outside_weights\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n\ndef _unmap(data, count, inds, fill=0):\n    \"\"\"Unmap a subset of data (at positions inds) back into an array of size count.\"\"\"\n    if len(data.shape) == 1:\n        ret = np.empty((count,), dtype=np.float32)\n        ret.fill(fill)\n        ret[inds] = data\n    else:\n        ret = np.empty((count,) + data.shape[1:], dtype=np.float32)\n        ret.fill(fill)\n        ret[inds, :] = data\n    return ret\n\n\ndef _compute_targets(ex_rois, gt_rois):\n    \"\"\"Compute bounding-box regression targets for an image.\"\"\"\n\n    assert ex_rois.shape[0] == gt_rois.shape[0]\n    assert ex_rois.shape[1] == 4\n    assert gt_rois.shape[1] == 5\n\n    return bbox_transform(ex_rois, gt_rois[:, :4]).astype(np.float32, copy=False)\n"
  },
  {
    "path": "lib/rpn/as_rois.py",
    "content": "# --------------------------------------------------------\r\n# Faster R-CNN\r\n# Copyright (c) 2015 Microsoft\r\n# Licensed under The MIT License [see LICENSE for details]\r\n# Written by Ross Girshick and Sean Bell\r\n# --------------------------------------------------------\r\n\r\nimport caffe\r\nimport yaml\r\nimport numpy as np\r\nimport numpy.random as npr\r\nfrom fast_rcnn.config import cfg\r\nfrom fast_rcnn.bbox_transform import bbox_transform\r\nfrom utils.cython_bbox import bbox_overlaps\r\n\r\nDEBUG = False\r\n\r\nclass As_roisLayer(caffe.Layer):\r\n    \"\"\"\r\n    Assign object detection proposals to ground-truth targets. Produces proposal\r\n    classification labels and bounding-box regression targets.\r\n    \"\"\"\r\n\r\n    def setup(self, bottom, top):\r\n        layer_params = yaml.load(self.param_str_)\r\n        self._batch_rois = cfg.TEST.RPN_POST_NMS_TOP_N\r\n\r\n        # sampled rois (0, x1, y1, x2, y2)\r\n        top[0].reshape(1, 5)\r\n        top[1].reshape(1, 5)\r\n        top[2].reshape(1, 5)\r\n        top[3].reshape(1, 5)\r\n        top[4].reshape(1, 5)\r\n\r\n    def forward(self, bottom, top):\r\n        # Proposal ROIs (0, x1, y1, x2, y2) coming from RPN\r\n        # (i.e., rpn.proposal_layer.ProposalLayer), or any other source\r\n        rois = bottom[0].data\r\n        w = (rois[:,3]-rois[:,1])\r\n        h = (rois[:,4]-rois[:,2])\r\n        s = w * h\r\n        k0 =4\r\n        s[s<=0]=1e-6\r\n        layer_indexs = np.floor(k0+np.log2(np.sqrt(s)/224))\r\n\r\n        layer_indexs[layer_indexs<2]=2\r\n        layer_indexs[layer_indexs>5]=5\r\n\r\n        rois_ = np.zeros((self._batch_rois*4, 5), dtype=rois.dtype)\r\n        rois_all =[]\r\n\r\n        for i in range(4):\r\n            index = (layer_indexs == (i + 2))\r\n            num_index = sum(index)\r\n            start = self._batch_rois*i\r\n            end = start+num_index\r\n            index_range = range(start, end)\r\n            
rois_[index_range, :] = rois[index, :]\r\n            rois_all.append(rois_[range(start,start + self._batch_rois), :])\r\n\r\n\r\n        rois = rois_\r\n        rois_p2 = rois_all[0]\r\n        rois_p3 = rois_all[1]\r\n        rois_p4 = rois_all[2]\r\n        rois_p5 = rois_all[3]       \r\n\r\n        top[0].reshape(*rois.shape)\r\n        top[0].data[...] = rois\r\n        \r\n        top[1].reshape(*rois_p2.shape)\r\n        top[1].data[...] = rois_p2\r\n    \r\n        top[2].reshape(*rois_p3.shape)\r\n        top[2].data[...] = rois_p3\r\n\r\n        top[3].reshape(*rois_p4.shape)\r\n        top[3].data[...] = rois_p4\r\n        \r\n        top[4].reshape(*rois_p5.shape)\r\n        top[4].data[...] = rois_p5\r\n        \r\n      \r\n\r\n    def backward(self, top, propagate_down, bottom):\r\n        \"\"\"This layer does not propagate gradients.\"\"\"\r\n        pass\r\n\r\n    def reshape(self, bottom, top):\r\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\r\n        pass\r\n\r\n\r\n\r\n"
  },
  {
    "path": "lib/rpn/as_rois_mrcnn.py",
    "content": "# --------------------------------------------------------\r\n# Faster R-CNN\r\n# Copyright (c) 2015 Microsoft\r\n# Licensed under The MIT License [see LICENSE for details]\r\n# Written by Ross Girshick and Sean Bell\r\n# --------------------------------------------------------\r\n\r\nimport caffe\r\nimport yaml\r\nimport numpy as np\r\nimport numpy.random as npr\r\nfrom fast_rcnn.config import cfg\r\nfrom fast_rcnn.bbox_transform import bbox_transform\r\nfrom utils.cython_bbox import bbox_overlaps\r\n\r\nDEBUG = False\r\n\r\nclass As_rois_MergeRcnnLayer(caffe.Layer):\r\n    \"\"\"\r\n    Assign object detection proposals to ground-truth targets. Produces proposal\r\n    classification labels and bounding-box regression targets.\r\n    \"\"\"\r\n\r\n    def setup(self, bottom, top):\r\n        layer_params = yaml.load(self.param_str_)\r\n        self._batch_rois = cfg.TEST.RPN_POST_NMS_TOP_N\r\n\r\n        # sampled rois (0, x1, y1, x2, y2)\r\n        top[0].reshape(1, 5)\r\n        top[1].reshape(1, 5)\r\n        top[2].reshape(1, 5)\r\n        top[3].reshape(1, 5)\r\n        top[4].reshape(1, 5)\r\n\r\n    def forward(self, bottom, top):\r\n        # Proposal ROIs (0, x1, y1, x2, y2) coming from RPN\r\n        # (i.e., rpn.proposal_layer.ProposalLayer), or any other source\r\n        rois = bottom[0].data\r\n        w = (rois[:,3]-rois[:,1])\r\n        h = (rois[:,4]-rois[:,2])\r\n        s = w * h\r\n        k0 =4\r\n        s[s<=0]=1e-6\r\n        layer_indexs = np.floor(k0+np.log2(np.sqrt(s)/224))\r\n\r\n        layer_indexs[layer_indexs<2]=2\r\n        layer_indexs[layer_indexs>5]=5\r\n\r\n        rois_all =[]\r\n\r\n        for i in range(4):\r\n            index = (layer_indexs == (i + 2))\r\n            num_index = sum(index)\r\n            if num_index == 0:\r\n                rois_ = np.zeros((1*4, 5), dtype=rois.dtype)\r\n            else:\r\n                rois_ = rois[index, :]\r\n            rois_all.append(rois_)\r\n\r\n\r\n        
rois = np.concatenate(rois_all,axis= 0)\r\n        rois_p2 = rois_all[0]\r\n        rois_p3 = rois_all[1]\r\n        rois_p4 = rois_all[2]\r\n        rois_p5 = rois_all[3]       \r\n\r\n        top[0].reshape(*rois.shape)\r\n        top[0].data[...] = rois\r\n        \r\n        top[1].reshape(*rois_p2.shape)\r\n        top[1].data[...] = rois_p2\r\n    \r\n        top[2].reshape(*rois_p3.shape)\r\n        top[2].data[...] = rois_p3\r\n\r\n        top[3].reshape(*rois_p4.shape)\r\n        top[3].data[...] = rois_p4\r\n        \r\n        top[4].reshape(*rois_p5.shape)\r\n        top[4].data[...] = rois_p5\r\n        \r\n      \r\n\r\n    def backward(self, top, propagate_down, bottom):\r\n        \"\"\"This layer does not propagate gradients.\"\"\"\r\n        pass\r\n\r\n    def reshape(self, bottom, top):\r\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\r\n        pass\r\n\r\n\r\n\r\n"
  },
  {
    "path": "lib/rpn/generate.py",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nfrom fast_rcnn.config import cfg\nfrom utils.blob import im_list_to_blob\nfrom utils.timer import Timer\nimport numpy as np\nimport cv2\n\ndef _vis_proposals(im, dets, thresh=0.5):\n    \"\"\"Draw detected bounding boxes.\"\"\"\n    inds = np.where(dets[:, -1] >= thresh)[0]\n    if len(inds) == 0:\n        return\n\n    class_name = 'obj'\n    im = im[:, :, (2, 1, 0)]\n    fig, ax = plt.subplots(figsize=(12, 12))\n    ax.imshow(im, aspect='equal')\n    for i in inds:\n        bbox = dets[i, :4]\n        score = dets[i, -1]\n\n        ax.add_patch(\n            plt.Rectangle((bbox[0], bbox[1]),\n                          bbox[2] - bbox[0],\n                          bbox[3] - bbox[1], fill=False,\n                          edgecolor='red', linewidth=3.5)\n            )\n        ax.text(bbox[0], bbox[1] - 2,\n                '{:s} {:.3f}'.format(class_name, score),\n                bbox=dict(facecolor='blue', alpha=0.5),\n                fontsize=14, color='white')\n\n    ax.set_title(('{} detections with '\n                  'p({} | box) >= {:.1f}').format(class_name, class_name,\n                                                  thresh),\n                  fontsize=14)\n    plt.axis('off')\n    plt.tight_layout()\n    plt.draw()\n\ndef _get_image_blob(im):\n    \"\"\"Converts an image into a network input.\n\n    Arguments:\n        im (ndarray): a color image in BGR order\n\n    Returns:\n        blob (ndarray): a data blob holding an image pyramid\n        im_scale_factors (list): list of image scales (relative to im) used\n            in the image pyramid\n    \"\"\"\n    im_orig = im.astype(np.float32, copy=True)\n    im_orig -= cfg.PIXEL_MEANS\n\n    im_shape = im_orig.shape\n    im_size_min 
= np.min(im_shape[0:2])\n    im_size_max = np.max(im_shape[0:2])\n\n    processed_ims = []\n\n    assert len(cfg.TEST.SCALES) == 1\n    target_size = cfg.TEST.SCALES[0]\n\n    im_scale = float(target_size) / float(im_size_min)\n    # Prevent the biggest axis from being more than MAX_SIZE\n    if np.round(im_scale * im_size_max) > cfg.TEST.MAX_SIZE:\n        im_scale = float(cfg.TEST.MAX_SIZE) / float(im_size_max)\n    im = cv2.resize(im_orig, None, None, fx=im_scale, fy=im_scale,\n                    interpolation=cv2.INTER_LINEAR)\n    im_info = np.hstack((im.shape[:2], im_scale))[np.newaxis, :]\n    processed_ims.append(im)\n\n    # Create a blob to hold the input images\n    blob = im_list_to_blob(processed_ims)\n\n    return blob, im_info\n\ndef im_proposals(net, im):\n    \"\"\"Generate RPN proposals on a single image.\"\"\"\n    blobs = {}\n    blobs['data'], blobs['im_info'] = _get_image_blob(im)\n    net.blobs['data'].reshape(*(blobs['data'].shape))\n    net.blobs['im_info'].reshape(*(blobs['im_info'].shape))\n    blobs_out = net.forward(\n            data=blobs['data'].astype(np.float32, copy=False),\n            im_info=blobs['im_info'].astype(np.float32, copy=False))\n\n    scale = blobs['im_info'][0, 2]\n    boxes = blobs_out['rois'][:, 1:].copy() / scale\n    scores = blobs_out['scores'].copy()\n    return boxes, scores\n\ndef imdb_proposals(net, imdb):\n    \"\"\"Generate RPN proposals on all images in an imdb.\"\"\"\n\n    _t = Timer()\n    imdb_boxes = [[] for _ in xrange(imdb.num_images)]\n    for i in xrange(imdb.num_images):\n        im = cv2.imread(imdb.image_path_at(i))\n        _t.tic()\n        imdb_boxes[i], scores = im_proposals(net, im)\n        _t.toc()\n        print 'im_proposals: {:d}/{:d} {:.3f}s' \\\n              .format(i + 1, imdb.num_images, _t.average_time)\n        if 0:\n            dets = np.hstack((imdb_boxes[i], scores))\n            # from IPython import embed; embed()\n            _vis_proposals(im, dets[:3, :], 
thresh=0.9)\n            plt.show()\n\n    return imdb_boxes\n"
  },
  {
    "path": "lib/rpn/generate_anchors.py",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n\nimport numpy as np\n\n# Verify that we compute the same anchors as Shaoqing's matlab implementation:\n#\n#    >> load output/rpn_cachedir/faster_rcnn_VOC2007_ZF_stage1_rpn/anchors.mat\n#    >> anchors\n#\n#    anchors =\n#\n#       -83   -39   100    56\n#      -175   -87   192   104\n#      -359  -183   376   200\n#       -55   -55    72    72\n#      -119  -119   136   136\n#      -247  -247   264   264\n#       -35   -79    52    96\n#       -79  -167    96   184\n#      -167  -343   184   360\n\n#array([[ -83.,  -39.,  100.,   56.],\n#       [-175.,  -87.,  192.,  104.],\n#       [-359., -183.,  376.,  200.],\n#       [ -55.,  -55.,   72.,   72.],\n#       [-119., -119.,  136.,  136.],\n#       [-247., -247.,  264.,  264.],\n#       [ -35.,  -79.,   52.,   96.],\n#       [ -79., -167.,   96.,  184.],\n#       [-167., -343.,  184.,  360.]])\ndef generate_anchors(base_size=8, ratios=[0.5, 1, 2],\n                     scales=2 ** np.arange(3, 6)):\n    \"\"\"\n    Generate anchor (reference) windows by enumerating aspect ratios X\n    scales wrt a reference (0, 0, 15, 15) window.\n    \"\"\"\n    base_anchor = np.array([1, 1, base_size, base_size]) - 1\n    ratio_anchors = _ratio_enum(base_anchor, ratios)\n    anchors = np.vstack([_scale_enum(ratio_anchors[i, :], scales)\n                         for i in xrange(ratio_anchors.shape[0])])\n    return anchors\n\ndef _whctrs(anchor):\n    \"\"\"\n    Return width, height, x center, and y center for an anchor (window).\n    \"\"\"\n\n    w = anchor[2] - anchor[0] + 1\n    h = anchor[3] - anchor[1] + 1\n    x_ctr = anchor[0] + 0.5 * (w - 1)\n    y_ctr = anchor[1] + 0.5 * (h - 1)\n    return w, h, x_ctr, y_ctr\n\ndef _mkanchors(ws, hs, 
x_ctr, y_ctr):\n    \"\"\"\n    Given a vector of widths (ws) and heights (hs) around a center\n    (x_ctr, y_ctr), output a set of anchors (windows).\n    \"\"\"\n    ws = ws[:, np.newaxis]\n    hs = hs[:, np.newaxis]\n    anchors = np.hstack((x_ctr - 0.5 * (ws - 1),\n                         y_ctr - 0.5 * (hs - 1),\n                         x_ctr + 0.5 * (ws - 1),\n                         y_ctr + 0.5 * (hs - 1)))\n    return anchors\n\ndef _ratio_enum(anchor, ratios):\n    \"\"\"\n    Enumerate a set of anchors for each aspect ratio wrt an anchor.\n    \"\"\"\n\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    size = w * h\n    size_ratios = size / ratios\n    ws = np.round(np.sqrt(size_ratios))\n    hs = np.round(ws * ratios)\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n\ndef _scale_enum(anchor, scales):\n    \"\"\"\n    Enumerate a set of anchors for each scale wrt an anchor.\n    \"\"\"\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    ws = w * scales\n    hs = h * scales\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n\nif __name__ == '__main__':\n    import time\n    t = time.time()\n    a = generate_anchors()\n    print time.time() - t\n    print a\n    from IPython import embed; embed()\n"
  },
  {
    "path": "lib/rpn/proposal_layer.py",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n\nimport caffe\nimport numpy as np\nimport yaml\nfrom fast_rcnn.config import cfg\nfrom generate_anchors import generate_anchors\nfrom fast_rcnn.bbox_transform import bbox_transform_inv, clip_boxes\nfrom fast_rcnn.nms_wrapper import nms\nimport numpy.random as npr\n\nDEBUG = False\ndef vis_all_detection(im_array, detections, class_names, scale):\n    \"\"\"\n    visualize all detections in one image\n    :param im_array: [b=1 c h w] in rgb\n    :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]\n    :param class_names: list of names in imdb\n    :param scale: visualize the scaled image\n    :return:\n    \"\"\"\n   # print im_array.shape\n    import matplotlib  \n    matplotlib.use('Agg') \n    import matplotlib.pyplot as plt\n    from matplotlib.pyplot import savefig  \n    import random\n    a =  [103.06 ,115.9 ,123.15]\n    a = np.array(a)\n    im = transform_inverse(im_array,a)\n    plt.imshow(im)\n    for j in range(detections.shape[0]):\n        # if class_names[j] == 0:\n        #     continue\n        color = (random.random(), random.random(), random.random())  # generate a random color\n        dets = detections[j]\n        det =dets\n        bbox = det[0:] \n        score = det[0]\n        rect = plt.Rectangle((bbox[0], bbox[1]),\n                                 bbox[2] - bbox[0],\n                                 bbox[3] - bbox[1], fill=False,\n                                 edgecolor=color, linewidth=3.5)\n        plt.gca().add_patch(rect)\n        # plt.gca().text(bbox[0], bbox[1] - 2,\n        #                    '{:s} {:.3f}'.format(str(class_names[j]), score),\n        #                    bbox=dict(facecolor=color, alpha=0.5), 
fontsize=12, color='white')\n    plt.show()\n    name = np.mean(im)\n    savefig('vis/' + str(name) + '.png')\n    plt.clf()\n    plt.cla()\n    plt.close(0)\n\nclass ProposalLayer(caffe.Layer):\n    \"\"\"\n    Outputs object detection proposals by applying estimated bounding-box\n    transformations to a set of regular boxes (called \"anchors\").\n    \"\"\"\n\n    def setup(self, bottom, top):\n        # parse the layer parameter string, which must be valid YAML\n        layer_params = yaml.load(self.param_str_)\n\n        self._feat_stride = [int(i) for i in layer_params['feat_stride'].split(',')]\n        self._scales = cfg.FPNRSCALES\n        self._ratios = cfg.FPNRATIOS\n        self._min_sizes = 16\n        self._num_anchors = len(self._scales) * len(self._ratios)\n        self._output_score = False\n\n        if DEBUG:\n            print 'feat_stride: {}'.format(self._feat_stride)\n            print 'scales: {}, ratios: {}'.format(self._scales, self._ratios)\n\n        # rois blob: holds R regions of interest, each is a 5-tuple\n        # (n, x1, y1, x2, y2) specifying an image batch index n and a\n        # rectangle (x1, y1, x2, y2)\n        top[0].reshape(1, 5)\n\n        # scores blob: holds scores for R regions of interest\n        if len(top) > 1:\n            top[1].reshape(1, 1, 1, 1)\n\n    def forward(self, bottom, top):\n        cfg_key = str(self.phase) # either 'TRAIN' or 'TEST'\n        pre_nms_topN  = cfg[cfg_key].RPN_PRE_NMS_TOP_N\n        post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N\n        nms_thresh    = cfg[cfg_key].RPN_NMS_THRESH\n        min_size = self._min_sizes\n        # the first set of _num_anchors channels are bg probs\n        # the second set are the fg probs, which we want\n        im_info = bottom[0].data[0, :]\n        batch_size = bottom[1].data.shape[0]\n        if batch_size > 1:\n            raise ValueError(\"Sorry, multiple images per device is not implemented\")\n\n        cls_prob_dict = {\n            
'stride64': bottom[10].data,\n            'stride32': bottom[9].data,\n            'stride16': bottom[8].data,\n            'stride8': bottom[7].data,\n            'stride4': bottom[6].data,\n        }\n        bbox_pred_dict = {\n            'stride64': bottom[5].data,\n            'stride32': bottom[4].data,\n            'stride16': bottom[3].data,\n            'stride8': bottom[2].data,\n            'stride4': bottom[1].data,\n        }\n      \n        proposal_list = []\n        score_list = []\n        for s in self._feat_stride:\n            stride = int(s)\n            sub_anchors = generate_anchors(base_size=stride, scales=self._scales, ratios=self._ratios)\n    \n            scores = cls_prob_dict['stride' + str(s)][:, self._num_anchors:, :, :]\n            bbox_deltas = bbox_pred_dict['stride' + str(s)]\n          \n            # 1. Generate proposals from bbox_deltas and shifted anchors\n            # use real image size instead of padded feature map sizes\n            height, width = int(im_info[0] / stride), int(im_info[1] / stride)\n\n            # Enumerate all shifts\n            shift_x = np.arange(0, width) * stride\n            shift_y = np.arange(0, height) * stride\n            shift_x, shift_y = np.meshgrid(shift_x, shift_y)\n            shifts = np.vstack((shift_x.ravel(), shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose()\n\n            # Enumerate all shifted anchors:\n            #\n            # add A anchors (1, A, 4) to\n            # cell K shifts (K, 1, 4) to get\n            # shift anchors (K, A, 4)\n            # reshape to (K*A, 4) shifted anchors\n            A = self._num_anchors\n            K = shifts.shape[0]\n    \n            anchors = sub_anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2))\n            anchors = anchors.reshape((K * A, 4))\n\n            # Transpose and reshape predicted bbox transformations to get them\n            # into the same order as the anchors:\n            
#\n            # bbox deltas will be (1, 4 * A, H, W) format\n            # transpose to (1, H, W, 4 * A)\n            # reshape to (1 * H * W * A, 4) where rows are ordered by (h, w, a)\n            # in slowest to fastest order\n            bbox_deltas = _clip_pad(bbox_deltas, (height, width))\n            bbox_deltas = bbox_deltas.transpose((0, 2, 3, 1)).reshape((-1, 4))\n\n            # Same story for the scores:\n            #\n            # scores are (1, A, H, W) format\n            # transpose to (1, H, W, A)\n            # reshape to (1 * H * W * A, 1) where rows are ordered by (h, w, a)\n            scores = _clip_pad(scores, (height, width))\n            scores = scores.transpose((0, 2, 3, 1)).reshape((-1, 1))\n\n            # Convert anchors into proposals via bbox transformations\n            proposals = bbox_transform_inv(anchors, bbox_deltas)\n\n            # 2. clip predicted boxes to image\n            proposals = clip_boxes(proposals, im_info[:2])\n\n            # 3. remove predicted boxes with either height or width < threshold\n            # (NOTE: convert min_size to input image scale stored in im_info[2])\n            keep = _filter_boxes(proposals, min_size * im_info[2])\n            proposals = proposals[keep, :]\n            scores = scores[keep]\n\n            proposal_list.append(proposals)\n            score_list.append(scores)\n\n        proposals = np.vstack(proposal_list)\n        scores = np.vstack(score_list)\n\n        # 4. sort all (proposal, score) pairs by score from highest to lowest\n        # 5. take top pre_nms_topN (e.g. 6000)\n        order = scores.ravel().argsort()[::-1]\n        if pre_nms_topN > 0:\n            order = order[:pre_nms_topN]\n        proposals = proposals[order, :]\n        scores = scores[order]\n\n        # 6. apply nms (e.g. threshold = 0.7)\n        # 7. take after_nms_topN (e.g. 300)\n        # 8. 
return the top proposals (-> RoIs top)\n        det = np.hstack((proposals, scores)).astype(np.float32)\n        keep = nms(det, nms_thresh)\n        if post_nms_topN > 0:\n            keep = keep[:post_nms_topN]\n        # pad to ensure the output size remains unchanged\n        if len(keep) < post_nms_topN:\n            try:\n                pad = npr.choice(keep, size=post_nms_topN - len(keep))\n            except ValueError:\n                # no proposals survived NMS: emit dummy 16x16 boxes at the origin\n                proposals = np.zeros((post_nms_topN, 4), dtype=np.float32)\n                proposals[:, 2] = 16\n                proposals[:, 3] = 16\n                batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32)\n                blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False)))\n                top[0].reshape(*(blob.shape))\n                top[0].data[...] = blob\n                return\n            keep = np.hstack((keep, pad))\n\n        proposals = proposals[keep, :]\n        scores = scores[keep]\n\n        # Output rois blob\n        # Our RPN implementation only supports a single input image, so all\n        # batch inds are 0\n        batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32)\n        blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False)))\n        top[0].reshape(*(blob.shape))\n        top[0].data[...] 
= blob\n            \n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n\ndef _filter_boxes(boxes, min_size):\n    \"\"\" Remove all boxes with any side smaller than min_size \"\"\"\n    ws = boxes[:, 2] - boxes[:, 0] + 1\n    hs = boxes[:, 3] - boxes[:, 1] + 1\n    keep = np.where((ws >= min_size) & (hs >= min_size))[0]\n    return keep\ndef _clip_pad(tensor, pad_shape):\n    \"\"\"\n    Clip boxes of the pad area.\n    :param tensor: [n, c, H, W]\n    :param pad_shape: [h, w]\n    :return: [n, c, h, w]\n    \"\"\"\n    H, W = tensor.shape[2:]\n    h, w = pad_shape\n\n    if h < H or w < W:\n        tensor = tensor[:, :, :h, :w].copy()\n\n    return tensor\n"
  },
  {
    "path": "lib/rpn/proposal_target_layer.py",
    "content": "# --------------------------------------------------------\r\n# Faster R-CNN\r\n# Copyright (c) 2015 Microsoft\r\n# Licensed under The MIT License [see LICENSE for details]\r\n# Written by Ross Girshick and Sean Bell\r\n# --------------------------------------------------------\r\n\r\nimport caffe\r\nimport yaml\r\nimport numpy as np\r\nimport numpy.random as npr\r\nfrom fast_rcnn.config import cfg\r\nfrom fast_rcnn.bbox_transform import bbox_transform\r\nfrom utils.cython_bbox import bbox_overlaps\r\nfrom fast_rcnn.bbox_transform import clip_boxes, bbox_transform_inv\r\nimport matplotlib  \r\nmatplotlib.use('Agg') \r\nDEBUG = False\r\ndef vis_all_detection(im_array, detections, class_names, scale):\r\n    \"\"\"\r\n    visualize all detections in one image\r\n    :param im_array: [b=1 c h w] in rgb\r\n    :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]\r\n    :param class_names: list of names in imdb\r\n    :param scale: visualize the scaled image\r\n    :return:\r\n    \"\"\"\r\n   # print im_array.shape\r\n    import matplotlib  \r\n    matplotlib.use('Agg') \r\n    import matplotlib.pyplot as plt\r\n    from matplotlib.pyplot import savefig  \r\n    import random\r\n    a =  [103.06 ,115.9 ,123.15]\r\n    a = np.array(a)\r\n    im = transform_inverse(im_array,a)\r\n    plt.imshow(im)\r\n    for j in range(detections.shape[0]):\r\n        # if class_names[j] == 0:\r\n        #     continue\r\n        color = (random.random(), random.random(), random.random())  # generate a random color\r\n        dets = detections[j]\r\n        det =dets\r\n        bbox = det[0:] \r\n        score = det[0]\r\n        rect = plt.Rectangle((bbox[0], bbox[1]),\r\n                                 bbox[2] - bbox[0],\r\n                                 bbox[3] - bbox[1], fill=False,\r\n                                 edgecolor=color, linewidth=3.5)\r\n        plt.gca().add_patch(rect)\r\n        # plt.gca().text(bbox[0], bbox[1] - 2,\r\n   
     #                    '{:s} {:.3f}'.format(str(class_names[j]), score),\r\n        #                    bbox=dict(facecolor=color, alpha=0.5), fontsize=12, color='white')\r\n    plt.show()\r\n    name = np.mean(im)\r\n    savefig('vis/' + str(name) + '.png')\r\n    plt.clf()\r\n    plt.cla()\r\n\r\n    plt.close(0)\r\n\r\ndef transform_inverse(im_tensor, pixel_means):\r\n    \"\"\"\r\n    transform from mxnet im_tensor to ordinary RGB image\r\n    im_tensor is limited to one image\r\n    :param im_tensor: [batch, channel, height, width]\r\n    :param pixel_means: [B, G, R pixel means]\r\n    :return: im [height, width, channel(RGB)]\r\n    \"\"\"\r\n    assert im_tensor.shape[0] == 1\r\n    im_tensor = im_tensor.copy()\r\n    # put channel back\r\n    channel_swap = (0, 2, 3, 1)\r\n    im_tensor = im_tensor.transpose(channel_swap)\r\n    im = im_tensor[0]\r\n    assert im.shape[2] == 3\r\n    im += pixel_means[[2, 1, 0]]\r\n    im = im.astype(np.uint8)\r\n    return im\r\n\r\n\r\n\r\n\r\nclass ProposalTargetLayer(caffe.Layer):\r\n    \"\"\"\r\n    Assign object detection proposals to ground-truth targets. 
Produces proposal\r\n    classification labels and bounding-box regression targets.\r\n    \"\"\"\r\n\r\n    def setup(self, bottom, top):\r\n        layer_params = yaml.load(self.param_str_)\r\n        self._num_classes = layer_params['num_classes']\r\n        self._batch_rois = cfg.TRAIN.BATCH_SIZE\r\n\r\n        # sampled rois (0, x1, y1, x2, y2)\r\n        top[0].reshape(1, 5)\r\n        top[1].reshape(1, 5)\r\n        top[2].reshape(1, 5)\r\n        top[3].reshape(1, 5)\r\n        # labels\r\n        top[4].reshape(1, 1)\r\n        # bbox_targets\r\n        top[5].reshape(1, self._num_classes * 4)\r\n        # bbox_inside_weights\r\n        top[6].reshape(1, self._num_classes * 4)\r\n        # bbox_outside_weights\r\n        top[7].reshape(1, self._num_classes * 4)\r\n\r\n    def forward(self, bottom, top):\r\n        # Proposal ROIs (0, x1, y1, x2, y2) coming from RPN\r\n        # (i.e., rpn.proposal_layer.ProposalLayer), or any other source\r\n        all_rois = bottom[0].data\r\n        aaa = all_rois[:]\r\n        # GT boxes (x1, y1, x2, y2, label)\r\n        # TODO(rbg): it's annoying that sometimes I have extra info before\r\n        # and other times after box coordinates -- normalize to one format\r\n        gt_boxes = bottom[1].data\r\n        im = bottom[2].data\r\n        # Include ground-truth boxes in the set of candidate rois\r\n        zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype)\r\n        all_rois = np.vstack(\r\n            (all_rois, np.hstack((zeros, gt_boxes[:, :-1])))\r\n        )\r\n        \r\n        num_images = 1\r\n        rois_per_image = cfg.TRAIN.BATCH_SIZE / num_images\r\n        fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image)\r\n\r\n        rois, labels, bbox_targets, bbox_weights ,layer_indexs = _sample_rois(\r\n            all_rois, gt_boxes, fg_rois_per_image,\r\n            rois_per_image, self._num_classes,sample_type='fpn', k0 = 4)\r\n        vis =False\r\n        if vis:\r\n         
   ind = np.where(labels!=0)[0]\r\n            im_shape = im.shape\r\n            means = np.tile(\r\n                     np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (21, 1)).ravel()\r\n            stds = np.tile(\r\n                    np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (21, 1)).ravel()\r\n            bbox_targets = bbox_targets*stds +means\r\n            \r\n            pred_boxes = bbox_transform_inv(rois[:,1:], bbox_targets)\r\n            pred_boxes = clip_boxes(pred_boxes, im_shape[-2:])\r\n            l =labels[ind]\r\n            ro = rois[ind,1:]\r\n            b = bbox_targets[ind,:]\r\n            p = pred_boxes[ind,:]*bbox_weights[ind,:]\r\n            r = []\r\n            for i in range(p.shape[0]):\r\n                r.append(p[i,l[i]*4:l[i]*4+4])\r\n            r_ =  np.vstack(r)\r\n\r\n      #  Optionally normalize targets by a precomputed mean and stdev\r\n\r\n            vis_all_detection(im, aaa[:,1:], l, 1)\r\n\r\n        rois_ = np.zeros((self._batch_rois*4, 5), dtype=rois.dtype)\r\n        labels_all = np.ones((self._batch_rois*4, ), dtype=labels.dtype)*-1\r\n        bbox_targets_all = np.zeros((self._batch_rois*4, self._num_classes * 4), dtype=bbox_targets.dtype)\r\n        bbox_weights_all = np.zeros((self._batch_rois*4, self._num_classes * 4), dtype=bbox_weights.dtype)\r\n        rois_all =[]\r\n        for i in range(4):\r\n            index = (layer_indexs == (i + 2))\r\n            num_index = sum(index)\r\n           \r\n            start = self._batch_rois*i\r\n            end = start+num_index\r\n            index_range = range(start, end)\r\n            rois_[index_range, :] = rois[index, :]\r\n            rois_all.append(rois_[range(start,start + self._batch_rois), :])\r\n            labels_all[index_range] = labels[index]  \r\n            bbox_targets_all[index_range,:] = bbox_targets[index, :]\r\n            bbox_weights_all[index_range,:] = bbox_weights[index, :]\r\n\r\n\r\n        rois_p2 = rois_all[0]\r\n        rois_p3 = 
rois_all[1]\r\n        rois_p4 = rois_all[2]\r\n        rois_p5 = rois_all[3]    \r\n  \r\n\r\n\r\n        top[0].reshape(*rois_p2.shape)\r\n        top[0].data[...] = rois_p2\r\n    \r\n        top[1].reshape(*rois_p3.shape)\r\n        top[1].data[...] = rois_p3\r\n\r\n        top[2].reshape(*rois_p4.shape)\r\n        top[2].data[...] = rois_p4\r\n        \r\n        top[3].reshape(*rois_p5.shape)\r\n        top[3].data[...] = rois_p5\r\n        \r\n        # classification labels\r\n        top[4].reshape(*labels_all.shape)\r\n        top[4].data[...] = labels_all\r\n\r\n        # bbox_targets\r\n        top[5].reshape(*bbox_targets_all.shape)\r\n        top[5].data[...] = bbox_targets_all\r\n\r\n        # bbox_inside_weights\r\n        top[6].reshape(*bbox_weights_all.shape)\r\n        top[6].data[...] = bbox_weights_all\r\n\r\n        # bbox_outside_weights\r\n        top[7].reshape(*bbox_weights_all.shape)\r\n        top[7].data[...] = np.array(bbox_weights_all > 0).astype(np.float32)\r\n      \r\n\r\n    def backward(self, top, propagate_down, bottom):\r\n        \"\"\"This layer does not propagate gradients.\"\"\"\r\n        pass\r\n\r\n    def reshape(self, bottom, top):\r\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\r\n        pass\r\n\r\n\r\ndef _get_bbox_regression_labels(bbox_target_data, num_classes):\r\n    \"\"\"Bounding-box regression targets (bbox_target_data) are stored in a\r\n    compact form N x (class, tx, ty, tw, th)\r\n\r\n    This function expands those targets into the 4-of-4*K representation used\r\n    by the network (i.e. 
only one class has non-zero targets).\r\n\r\n    Returns:\r\n        bbox_target (ndarray): N x 4K blob of regression targets\r\n        bbox_inside_weights (ndarray): N x 4K blob of loss weights\r\n    \"\"\"\r\n\r\n    clss = bbox_target_data[:, 0]\r\n    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)\r\n    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)\r\n    inds = np.where(clss > 0)[0]\r\n    for ind in inds:\r\n        cls = clss[ind]\r\n        start = 4 * cls\r\n        end = start + 4\r\n        bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]\r\n        bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS\r\n    return bbox_targets, bbox_inside_weights\r\n\r\n\r\ndef _compute_targets(ex_rois, gt_rois, labels):\r\n    \"\"\"Compute bounding-box regression targets for an image.\"\"\"\r\n\r\n    assert ex_rois.shape[0] == gt_rois.shape[0]\r\n    assert ex_rois.shape[1] == 4\r\n    assert gt_rois.shape[1] == 4\r\n\r\n    targets = bbox_transform(ex_rois, gt_rois)\r\n    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\r\n        # Optionally normalize targets by a precomputed mean and stdev\r\n        targets = ((targets - np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS))\r\n                / np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS))\r\n    return np.hstack(\r\n            (labels[:, np.newaxis], targets)).astype(np.float32, copy=False)\r\n\r\ndef _sample_rois(all_rois, gt_boxes, fg_rois_per_image, rois_per_image, num_classes,sample_type='fpn', k0 = 4):\r\n    \"\"\"Generate a random sample of RoIs comprising foreground and background\r\n    examples.\r\n    \"\"\"\r\n    # overlaps: (rois x gt_boxes)\r\n    overlaps = bbox_overlaps(\r\n        np.ascontiguousarray(all_rois[:, 1:5], dtype=np.float),\r\n        np.ascontiguousarray(gt_boxes[:, :4], dtype=np.float))\r\n    gt_assignment = overlaps.argmax(axis=1)\r\n    max_overlaps = overlaps.max(axis=1)\r\n    labels = gt_boxes[gt_assignment, 
4]\r\n\r\n    # Select foreground RoIs as those with >= FG_THRESH overlap\r\n    fg_inds = np.where(max_overlaps >= cfg.TRAIN.FG_THRESH)[0]\r\n    # Guard against the case when an image has fewer than fg_rois_per_image\r\n    # foreground RoIs\r\n    fg_rois_per_this_image = min(fg_rois_per_image, fg_inds.size)\r\n    # Sample foreground regions without replacement\r\n    if fg_inds.size > 0:\r\n        fg_inds = npr.choice(fg_inds, size=fg_rois_per_this_image, replace=False)\r\n\r\n    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\r\n    bg_inds = np.where((max_overlaps < cfg.TRAIN.BG_THRESH_HI) &\r\n                       (max_overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]\r\n    # Compute number of background RoIs to take from this image (guarding\r\n    # against there being fewer than desired)\r\n    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\r\n    bg_rois_per_this_image = min(bg_rois_per_this_image, bg_inds.size)\r\n    # Sample background regions without replacement\r\n    if bg_inds.size > 0:\r\n        bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image, replace=False)\r\n\r\n    # The indices that we're selecting (both fg and bg)\r\n    keep_inds = np.append(fg_inds, bg_inds)\r\n    # Select sampled values from various arrays:\r\n    labels = labels[keep_inds]\r\n    # Clamp labels for the background RoIs to 0\r\n    labels[fg_rois_per_this_image:] = 0\r\n    rois = all_rois[keep_inds]\r\n\r\n    bbox_target_data = _compute_targets(\r\n        rois[:, 1:5], gt_boxes[gt_assignment[keep_inds], :4], labels)\r\n\r\n    bbox_targets, bbox_inside_weights = \\\r\n        _get_bbox_regression_labels(bbox_target_data, num_classes)\r\n\r\n    if sample_type == 'fpn':\r\n        #print 0\r\n        w = (rois[:,3]-rois[:,1])\r\n        h = (rois[:,4]-rois[:,2])\r\n        s = w * h\r\n        s[s<=0]=1e-6\r\n        layer_index = np.floor(k0+np.log2(np.sqrt(s)/224))\r\n\r\n        layer_index[layer_index<2]=2\r\n        
layer_index[layer_index>5]=5\r\n        # rois: [512, 5]; labels: [512,]\r\n        return rois, labels, bbox_targets, bbox_inside_weights, layer_index\r\n    else:\r\n        return rois, labels, bbox_targets, bbox_inside_weights\r\n\r\n\r\n"
  },
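The `_sample_rois` helper above assigns each RoI to a pyramid level with the FPN heuristic k = floor(k0 + log2(sqrt(w*h)/224)), clipped to [2, 5]. A minimal NumPy sketch of that assignment, standalone and outside the Caffe layer (the function name `fpn_level` is ours, not the repo's):

```python
import numpy as np

def fpn_level(rois, k0=4, canonical=224):
    """Map RoIs (x1, y1, x2, y2) to pyramid levels P2..P5.

    Mirrors the assignment in _sample_rois: k = floor(k0 + log2(sqrt(area)/224)),
    clipped to [2, 5]. Degenerate boxes get a tiny positive area so log2 is defined.
    """
    rois = np.asarray(rois, dtype=np.float64)
    w = rois[:, 2] - rois[:, 0]
    h = rois[:, 3] - rois[:, 1]
    s = w * h
    s[s <= 0] = 1e-6
    k = np.floor(k0 + np.log2(np.sqrt(s) / canonical))
    return np.clip(k, 2, 5).astype(int)

# A 224x224 box lands on the canonical level P4; a 112x112 box one level down;
# a very large box is clipped to P5.
print(fpn_level([[0, 0, 224, 224], [0, 0, 112, 112], [0, 0, 1000, 1000]]))  # -> [4 3 5]
```

Note that, like the layer above, this uses w = x2 - x1 with no +1 correction, so the "canonical" 224 box maps exactly to level k0 = 4.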
  {
    "path": "lib/rpn/proposal_target_layer_mrcnn.py",
    "content": "# --------------------------------------------------------\r\n# Faster R-CNN\r\n# Copyright (c) 2015 Microsoft\r\n# Licensed under The MIT License [see LICENSE for details]\r\n# Written by Ross Girshick and Sean Bell\r\n# --------------------------------------------------------\r\n\r\nimport caffe\r\nimport yaml\r\nimport numpy as np\r\nimport numpy.random as npr\r\nfrom fast_rcnn.config import cfg\r\nfrom fast_rcnn.bbox_transform import bbox_transform\r\nfrom utils.cython_bbox import bbox_overlaps\r\nfrom fast_rcnn.bbox_transform import clip_boxes, bbox_transform_inv\r\nimport matplotlib  \r\nmatplotlib.use('Agg') \r\nDEBUG = False\r\ndef vis_all_detection(im_array, detections, class_names, scale):\r\n    \"\"\"\r\n    visualize all detections in one image\r\n    :param im_array: [b=1 c h w] in rgb\r\n    :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]\r\n    :param class_names: list of names in imdb\r\n    :param scale: visualize the scaled image\r\n    :return:\r\n    \"\"\"\r\n   # print im_array.shape\r\n    import matplotlib  \r\n    matplotlib.use('Agg') \r\n    import matplotlib.pyplot as plt\r\n    from matplotlib.pyplot import savefig  \r\n    import random\r\n    a =  [103.06 ,115.9 ,123.15]\r\n    a = np.array(a)\r\n    im = transform_inverse(im_array,a)\r\n    plt.imshow(im)\r\n    for j in range(detections.shape[0]):\r\n        # if class_names[j] == 0:\r\n        #     continue\r\n        color = (random.random(), random.random(), random.random())  # generate a random color\r\n        dets = detections[j]\r\n        det =dets\r\n        bbox = det[0:] \r\n        score = det[0]\r\n        rect = plt.Rectangle((bbox[0], bbox[1]),\r\n                                 bbox[2] - bbox[0],\r\n                                 bbox[3] - bbox[1], fill=False,\r\n                                 edgecolor=color, linewidth=3.5)\r\n        plt.gca().add_patch(rect)\r\n        # plt.gca().text(bbox[0], bbox[1] - 2,\r\n   
     #                    '{:s} {:.3f}'.format(str(class_names[j]), score),\r\n        #                    bbox=dict(facecolor=color, alpha=0.5), fontsize=12, color='white')\r\n    plt.show()\r\n    name = np.mean(im)\r\n    savefig('vis/' + str(name) + '.png')\r\n    plt.clf()\r\n    plt.cla()\r\n\r\n    plt.close(0)\r\n\r\ndef transform_inverse(im_tensor, pixel_means):\r\n    \"\"\"\r\n    transform from mxnet im_tensor to ordinary RGB image\r\n    im_tensor is limited to one image\r\n    :param im_tensor: [batch, channel, height, width]\r\n    :param pixel_means: [B, G, R pixel means]\r\n    :return: im [height, width, channel(RGB)]\r\n    \"\"\"\r\n    assert im_tensor.shape[0] == 1\r\n    im_tensor = im_tensor.copy()\r\n    # put channel back\r\n    channel_swap = (0, 2, 3, 1)\r\n    im_tensor = im_tensor.transpose(channel_swap)\r\n    im = im_tensor[0]\r\n    assert im.shape[2] == 3\r\n    im += pixel_means[[2, 1, 0]]\r\n    im = im.astype(np.uint8)\r\n    return im\r\n\r\n\r\n\r\n\r\nclass ProposalMergeRcnnTargetLayer(caffe.Layer):\r\n    \"\"\"\r\n    Assign object detection proposals to ground-truth targets. 
Produces proposal\r\n    classification labels and bounding-box regression targets.\r\n    \"\"\"\r\n\r\n    def setup(self, bottom, top):\r\n        layer_params = yaml.load(self.param_str_)\r\n        self._num_classes = layer_params['num_classes']\r\n        self._batch_rois = cfg.TRAIN.BATCH_SIZE\r\n\r\n        # sampled rois (0, x1, y1, x2, y2)\r\n        top[0].reshape(1, 5)\r\n        top[1].reshape(1, 5)\r\n        top[2].reshape(1, 5)\r\n        top[3].reshape(1, 5)\r\n        # labels\r\n        top[4].reshape(1, 1)\r\n        # bbox_targets\r\n        top[5].reshape(1, self._num_classes * 4)\r\n        # bbox_inside_weights\r\n        top[6].reshape(1, self._num_classes * 4)\r\n        # bbox_outside_weights\r\n        top[7].reshape(1, self._num_classes * 4)\r\n\r\n    def forward(self, bottom, top):\r\n        # Proposal ROIs (0, x1, y1, x2, y2) coming from RPN\r\n        # (i.e., rpn.proposal_layer.ProposalLayer), or any other source\r\n        all_rois = bottom[0].data\r\n        aaa = all_rois[:]\r\n        # GT boxes (x1, y1, x2, y2, label)\r\n        # TODO(rbg): it's annoying that sometimes I have extra info before\r\n        # and other times after box coordinates -- normalize to one format\r\n        gt_boxes = bottom[1].data\r\n        im = bottom[2].data\r\n        # Include ground-truth boxes in the set of candidate rois\r\n        zeros = np.zeros((gt_boxes.shape[0], 1), dtype=gt_boxes.dtype)\r\n        all_rois = np.vstack(\r\n            (all_rois, np.hstack((zeros, gt_boxes[:, :-1])))\r\n        )\r\n        \r\n        num_images = 1\r\n        rois_per_image = cfg.TRAIN.BATCH_SIZE / num_images\r\n        fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image)\r\n\r\n        rois, labels, bbox_targets, bbox_weights ,layer_indexs = _sample_rois(\r\n            all_rois, gt_boxes, fg_rois_per_image,\r\n            rois_per_image, self._num_classes,sample_type='fpn', k0 = 4)\r\n        vis =False\r\n        if vis:\r\n         
   ind = np.where(labels!=0)[0]\r\n            im_shape = im.shape\r\n            means = np.tile(\r\n                     np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (21, 1)).ravel()\r\n            stds = np.tile(\r\n                    np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (21, 1)).ravel()\r\n            bbox_targets = bbox_targets*stds +means\r\n            \r\n            pred_boxes = bbox_transform_inv(rois[:,1:], bbox_targets)\r\n            pred_boxes = clip_boxes(pred_boxes, im_shape[-2:])\r\n            l =labels[ind]\r\n            ro = rois[ind,1:]\r\n            b = bbox_targets[ind,:]\r\n            p = pred_boxes[ind,:]*bbox_weights[ind,:]\r\n            r = []\r\n            for i in range(p.shape[0]):\r\n                r.append(p[i,l[i]*4:l[i]*4+4])\r\n            r_ =  np.vstack(r)\r\n\r\n      #  Optionally normalize targets by a precomputed mean and stdev\r\n\r\n            vis_all_detection(im, aaa[:,1:], l, 1)\r\n\r\n        \r\n        labels_all = [] \r\n        bbox_targets_all = []\r\n        bbox_weights_all = []\r\n        rois_all =[]\r\n        for i in range(4):\r\n            index = (layer_indexs == (i + 2))\r\n            num_index = sum(index)\r\n            if num_index == 0:\r\n                rois_ = np.zeros((1*4, 5), dtype=rois.dtype)\r\n                labels_ = np.ones((1*4, ), dtype=labels.dtype)*-1\r\n                bbox_targets_ = np.zeros((1*4, self._num_classes * 4), dtype=bbox_targets.dtype)\r\n                bbox_weights_ = np.zeros((1*4, self._num_classes * 4), dtype=bbox_weights.dtype)\r\n            else:\r\n                rois_ = rois[index, :]\r\n                labels_ = labels[index]\r\n                bbox_weights_= bbox_weights[index, :]\r\n                bbox_targets_ = bbox_targets[index, :]\r\n\r\n            rois_all.append(rois_)\r\n            labels_all.append(labels_)  \r\n            bbox_targets_all.append(bbox_targets_)\r\n            bbox_weights_all.append(bbox_weights_)\r\n\r\n\r\n        
rois_p2 = rois_all[0]\r\n        rois_p3 = rois_all[1]\r\n        rois_p4 = rois_all[2]\r\n        rois_p5 = rois_all[3]    \r\n        labels_all = np.concatenate(labels_all)\r\n        bbox_targets_all = np.concatenate(bbox_targets_all,axis= 0)\r\n        bbox_weights_all = np.concatenate(bbox_weights_all,axis= 0)\r\n      #  print bbox_targets_all.shape,bbox_weights_all.shape, rois_p2.shape,rois_p3.shape,rois_p4.shape,rois_p5.shape,labels_all.shape\r\n\r\n        top[0].reshape(*rois_p2.shape)\r\n        top[0].data[...] = rois_p2\r\n    \r\n        top[1].reshape(*rois_p3.shape)\r\n        top[1].data[...] = rois_p3\r\n\r\n        top[2].reshape(*rois_p4.shape)\r\n        top[2].data[...] = rois_p4\r\n        \r\n        top[3].reshape(*rois_p5.shape)\r\n        top[3].data[...] = rois_p5\r\n        \r\n        # classification labels\r\n        top[4].reshape(*labels_all.shape)\r\n        top[4].data[...] = labels_all\r\n\r\n        # bbox_targets\r\n        top[5].reshape(*bbox_targets_all.shape)\r\n        top[5].data[...] = bbox_targets_all\r\n\r\n        # bbox_inside_weights\r\n        top[6].reshape(*bbox_weights_all.shape)\r\n        top[6].data[...] = bbox_weights_all\r\n\r\n        # bbox_outside_weights\r\n        top[7].reshape(*bbox_weights_all.shape)\r\n        top[7].data[...] = np.array(bbox_weights_all > 0).astype(np.float32)\r\n      \r\n\r\n    def backward(self, top, propagate_down, bottom):\r\n        \"\"\"This layer does not propagate gradients.\"\"\"\r\n        pass\r\n\r\n    def reshape(self, bottom, top):\r\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\r\n        pass\r\n\r\n\r\ndef _get_bbox_regression_labels(bbox_target_data, num_classes):\r\n    \"\"\"Bounding-box regression targets (bbox_target_data) are stored in a\r\n    compact form N x (class, tx, ty, tw, th)\r\n\r\n    This function expands those targets into the 4-of-4*K representation used\r\n    by the network (i.e. 
only one class has non-zero targets).\r\n\r\n    Returns:\r\n        bbox_target (ndarray): N x 4K blob of regression targets\r\n        bbox_inside_weights (ndarray): N x 4K blob of loss weights\r\n    \"\"\"\r\n\r\n    clss = bbox_target_data[:, 0]\r\n    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)\r\n    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)\r\n    inds = np.where(clss > 0)[0]\r\n    for ind in inds:\r\n        cls = clss[ind]\r\n        start = 4 * cls\r\n        end = start + 4\r\n        bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]\r\n        bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS\r\n    return bbox_targets, bbox_inside_weights\r\n\r\n\r\ndef _compute_targets(ex_rois, gt_rois, labels):\r\n    \"\"\"Compute bounding-box regression targets for an image.\"\"\"\r\n\r\n    assert ex_rois.shape[0] == gt_rois.shape[0]\r\n    assert ex_rois.shape[1] == 4\r\n    assert gt_rois.shape[1] == 4\r\n\r\n    targets = bbox_transform(ex_rois, gt_rois)\r\n    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\r\n        # Optionally normalize targets by a precomputed mean and stdev\r\n        targets = ((targets - np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS))\r\n                / np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS))\r\n    return np.hstack(\r\n            (labels[:, np.newaxis], targets)).astype(np.float32, copy=False)\r\n\r\ndef _sample_rois(all_rois, gt_boxes, fg_rois_per_image, rois_per_image, num_classes,sample_type='fpn', k0 = 4):\r\n    \"\"\"Generate a random sample of RoIs comprising foreground and background\r\n    examples.\r\n    \"\"\"\r\n    # overlaps: (rois x gt_boxes)\r\n    overlaps = bbox_overlaps(\r\n        np.ascontiguousarray(all_rois[:, 1:5], dtype=np.float),\r\n        np.ascontiguousarray(gt_boxes[:, :4], dtype=np.float))\r\n    gt_assignment = overlaps.argmax(axis=1)\r\n    max_overlaps = overlaps.max(axis=1)\r\n    labels = gt_boxes[gt_assignment, 
4]\r\n\r\n    # Select foreground RoIs as those with >= FG_THRESH overlap\r\n    fg_inds = np.where(max_overlaps >= cfg.TRAIN.FG_THRESH)[0]\r\n    # Guard against the case when an image has fewer than fg_rois_per_image\r\n    # foreground RoIs\r\n    fg_rois_per_this_image = min(fg_rois_per_image, fg_inds.size)\r\n    # Sample foreground regions without replacement\r\n    if fg_inds.size > 0:\r\n        fg_inds = npr.choice(fg_inds, size=fg_rois_per_this_image, replace=False)\r\n\r\n    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\r\n    bg_inds = np.where((max_overlaps < cfg.TRAIN.BG_THRESH_HI) &\r\n                       (max_overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]\r\n    # Compute number of background RoIs to take from this image (guarding\r\n    # against there being fewer than desired)\r\n    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\r\n    bg_rois_per_this_image = min(bg_rois_per_this_image, bg_inds.size)\r\n    # Sample background regions without replacement\r\n    if bg_inds.size > 0:\r\n        bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image, replace=False)\r\n\r\n    # The indices that we're selecting (both fg and bg)\r\n    keep_inds = np.append(fg_inds, bg_inds)\r\n    # Select sampled values from various arrays:\r\n    labels = labels[keep_inds]\r\n    # Clamp labels for the background RoIs to 0\r\n    labels[fg_rois_per_this_image:] = 0\r\n    rois = all_rois[keep_inds]\r\n\r\n    bbox_target_data = _compute_targets(\r\n        rois[:, 1:5], gt_boxes[gt_assignment[keep_inds], :4], labels)\r\n\r\n    bbox_targets, bbox_inside_weights = \\\r\n        _get_bbox_regression_labels(bbox_target_data, num_classes)\r\n\r\n    if sample_type == 'fpn':\r\n        #print 0\r\n        w = (rois[:,3]-rois[:,1])\r\n        h = (rois[:,4]-rois[:,2])\r\n        s = w * h\r\n        s[s<=0]=1e-6\r\n        layer_index = np.floor(k0+np.log2(np.sqrt(s)/224))\r\n\r\n        layer_index[layer_index<2]=2\r\n        
layer_index[layer_index>5]=5\r\n        # rois: [512, 5]; labels: [512,]\r\n        return rois, labels, bbox_targets, bbox_inside_weights, layer_index\r\n    else:\r\n        return rois, labels, bbox_targets, bbox_inside_weights\r\n\r\n\r\n"
  },
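The merged-rcnn layer's `forward` above partitions the sampled RoIs into four per-level blobs (P2..P5), substituting a zero dummy block whenever a level receives no RoIs so that no top blob is empty. A minimal sketch of that grouping step (the helper name `split_rois_by_level` is ours; the layer above pads empty levels with four zero rows, here one row suffices to illustrate):

```python
import numpy as np

def split_rois_by_level(rois, layer_index, num_levels=4, first_level=2):
    """Partition sampled RoIs into per-level groups, as forward() does above.

    layer_index holds the assigned pyramid level (2..5) per RoI. Levels with
    no RoIs get a zero dummy row so every output group stays non-empty.
    """
    groups = []
    for i in range(num_levels):
        mask = layer_index == (first_level + i)
        if mask.sum() == 0:
            groups.append(np.zeros((1, rois.shape[1]), dtype=rois.dtype))
        else:
            groups.append(rois[mask])
    return groups

rois = np.arange(20, dtype=np.float32).reshape(4, 5)   # 4 RoIs of (idx, x1, y1, x2, y2)
levels = np.array([2, 2, 4, 5])
groups = split_rois_by_level(rois, levels)
print([g.shape[0] for g in groups])  # -> [2, 1, 1, 1]  (P3 got only the dummy row)
```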
  {
    "path": "lib/setup.py",
"content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport os\nfrom os.path import join as pjoin\nfrom setuptools import setup\nfrom distutils.extension import Extension\nfrom Cython.Distutils import build_ext\nimport subprocess\nimport numpy as np\n\ndef find_in_path(name, path):\n    \"Find a file in a search path\"\n    # Adapted from\n    # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/\n    for dir in path.split(os.pathsep):\n        binpath = pjoin(dir, name)\n        if os.path.exists(binpath):\n            return os.path.abspath(binpath)\n    return None\n\n\ndef locate_cuda():\n    \"\"\"Locate the CUDA environment on the system\n\n    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'\n    and values giving the absolute path to each directory.\n\n    Starts by looking for the CUDAHOME env variable. If not found, everything\n    is based on finding 'nvcc' in the PATH.\n    \"\"\"\n\n    # first check if the CUDAHOME env variable is in use\n    if 'CUDAHOME' in os.environ:\n        home = os.environ['CUDAHOME']\n        nvcc = pjoin(home, 'bin', 'nvcc')\n    else:\n        # otherwise, search the PATH for NVCC\n        default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')\n        nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)\n        if nvcc is None:\n            raise EnvironmentError('The nvcc binary could not be '\n                'located in your $PATH. 
Either add it to your path, or set $CUDAHOME')\n        home = os.path.dirname(os.path.dirname(nvcc))\n\n    cudaconfig = {'home':home, 'nvcc':nvcc,\n                  'include': pjoin(home, 'include'),\n                  'lib64': pjoin(home, 'lib64')}\n    for k, v in cudaconfig.iteritems():\n        if not os.path.exists(v):\n            raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))\n\n    return cudaconfig\nCUDA = locate_cuda()\n\n\n# Obtain the numpy include directory.  This logic works across numpy versions.\ntry:\n    numpy_include = np.get_include()\nexcept AttributeError:\n    numpy_include = np.get_numpy_include()\n\ndef customize_compiler_for_nvcc(self):\n    \"\"\"inject deep into distutils to customize how the dispatch\n    to gcc/nvcc works.\n\n    If you subclass UnixCCompiler, it's not trivial to get your subclass\n    injected in, and still have the right customizations (i.e.\n    distutils.sysconfig.customize_compiler) run on it. So instead of going\n    the OO route, I have this. Note, it's kind of like a weird functional\n    subclassing going on.\"\"\"\n\n    # tell the compiler it can process .cu\n    self.src_extensions.append('.cu')\n\n    # save references to the default compiler_so and _compile methods\n    default_compiler_so = self.compiler_so\n    super = self._compile\n\n    # now redefine the _compile method. 
This gets executed for each\n    # object but distutils doesn't have the ability to change compilers\n    # based on source extension: we add it.\n    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):\n        if os.path.splitext(src)[1] == '.cu':\n            # use the cuda for .cu files\n            self.set_executable('compiler_so', CUDA['nvcc'])\n            # use only a subset of the extra_postargs, which are 1-1 translated\n            # from the extra_compile_args in the Extension class\n            postargs = extra_postargs['nvcc']\n        else:\n            postargs = extra_postargs['gcc']\n\n        super(obj, src, ext, cc_args, postargs, pp_opts)\n        # reset the default compiler_so, which we might have changed for cuda\n        self.compiler_so = default_compiler_so\n\n    # inject our redefined _compile method into the class\n    self._compile = _compile\n\n\n# run the customize_compiler\nclass custom_build_ext(build_ext):\n    def build_extensions(self):\n        customize_compiler_for_nvcc(self.compiler)\n        build_ext.build_extensions(self)\n\n\next_modules = [\n    Extension(\n        \"utils.cython_bbox\",\n        [\"utils/bbox.pyx\"],\n        extra_compile_args={'gcc': [\"-Wno-cpp\", \"-Wno-unused-function\"]},\n        include_dirs = [numpy_include]\n    ),\n    Extension(\n        \"nms.cpu_nms\",\n        [\"nms/cpu_nms.pyx\"],\n        extra_compile_args={'gcc': [\"-Wno-cpp\", \"-Wno-unused-function\"]},\n        include_dirs = [numpy_include]\n    ),\n    Extension('nms.gpu_nms',\n        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],\n        library_dirs=[CUDA['lib64']],\n        libraries=['cudart'],\n        language='c++',\n        runtime_library_dirs=[CUDA['lib64']],\n        # this syntax is specific to this build system\n        # we're only going to use certain compiler args with nvcc and not with\n        # gcc the implementation of this trick is in customize_compiler() below\n        extra_compile_args={'gcc': 
[\"-Wno-unused-function\"],\n                            'nvcc': ['-arch=sm_35',\n                                     '--ptxas-options=-v',\n                                     '-c',\n                                     '--compiler-options',\n                                     \"'-fPIC'\"]},\n        include_dirs = [numpy_include, CUDA['include']]\n    ),\n    Extension(\n        'pycocotools._mask',\n        sources=['pycocotools/maskApi.c', 'pycocotools/_mask.pyx'],\n        include_dirs = [numpy_include, 'pycocotools'],\n        extra_compile_args={\n            'gcc': ['-Wno-cpp', '-Wno-unused-function', '-std=c99']},\n    ),\n]\n\nsetup(\n    name='fast_rcnn',\n    ext_modules=ext_modules,\n    # inject our custom trigger\n    cmdclass={'build_ext': custom_build_ext},\n)\n"
  },
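The `customize_compiler_for_nvcc` hook in `lib/setup.py` above dispatches per-source-file compiler flags: `.cu` files are handed to nvcc with the `'nvcc'` entry of the `extra_compile_args` dict, everything else gets the `'gcc'` entry. A tiny standalone sketch of just that selection logic (the helper name `pick_postargs` is ours):

```python
import os

def pick_postargs(src, extra_postargs):
    """Select compiler flags the way the patched _compile above does:
    .cu sources get the 'nvcc' flag list, everything else gets 'gcc'."""
    if os.path.splitext(src)[1] == '.cu':
        return extra_postargs['nvcc']
    return extra_postargs['gcc']

# Flag dicts in the same shape as the Extension definitions above.
flags = {'gcc': ['-Wno-unused-function'],
         'nvcc': ['-arch=sm_35', '--ptxas-options=-v', '-c']}
print(pick_postargs('nms/nms_kernel.cu', flags))  # nvcc flags
print(pick_postargs('nms/cpu_nms.c', flags))      # gcc flags
```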
  {
    "path": "lib/transform/__init__.py",
    "content": ""
  },
  {
    "path": "lib/transform/torch_image_transform_layer.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# --------------------------------------------------------\n\n\"\"\" Transform images for compatibility with models trained with\nhttps://github.com/facebook/fb.resnet.torch.\n\nUsage in model prototxt:\n\nlayer {\n  name: 'data_xform'\n  type: 'Python'\n  bottom: 'data_caffe'\n  top: 'data'\n  python_param {\n    module: 'transform.torch_image_transform_layer'\n    layer: 'TorchImageTransformLayer'\n  }\n}\n\"\"\"\n\nimport caffe\nfrom fast_rcnn.config import cfg\nimport numpy as np\n\nclass TorchImageTransformLayer(caffe.Layer):\n    def setup(self, bottom, top):\n        # (1, 3, 1, 1) shaped arrays\n        self.PIXEL_MEANS = \\\n            np.array([[[[0.48462227599918]],\n                       [[0.45624044862054]],\n                       [[0.40588363755159]]]])\n        self.PIXEL_STDS = \\\n            np.array([[[[0.22889466674951]],\n                       [[0.22446679341259]],\n                       [[0.22495548344775]]]])\n        # The default (\"old\") pixel means that were already subtracted\n        channel_swap = (0, 3, 1, 2)\n        self.OLD_PIXEL_MEANS = \\\n            cfg.PIXEL_MEANS[np.newaxis, :, :, :].transpose(channel_swap)\n\n        top[0].reshape(*(bottom[0].shape))\n\n    def forward(self, bottom, top):\n        ims = bottom[0].data\n        # Invert the channel means that were already subtracted\n        ims += self.OLD_PIXEL_MEANS\n        # 1. Permute BGR to RGB and normalize to [0, 1]\n        ims = ims[:, [2, 1, 0], :, :] / 255.0\n        # 2. Remove channel means\n        ims -= self.PIXEL_MEANS\n        # 3. Standardize channels\n        ims /= self.PIXEL_STDS\n        top[0].reshape(*(ims.shape))\n        top[0].data[...] 
= ims\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n"
  },
  {
    "path": "lib/utils/.gitignore",
    "content": "*.c\n*.so\n"
  },
  {
    "path": "lib/utils/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/utils/bbox.pyx",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Sergey Karayev\n# --------------------------------------------------------\n\ncimport cython\nimport numpy as np\ncimport numpy as np\n\nDTYPE = np.float\nctypedef np.float_t DTYPE_t\n\ndef bbox_overlaps(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    \"\"\"\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = boxes.shape[0]\n    cdef unsigned int K = query_boxes.shape[0]\n    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)\n    cdef DTYPE_t iw, ih, box_area\n    cdef DTYPE_t ua\n    cdef unsigned int k, n\n    for k in range(K):\n        box_area = (\n            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n            (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n        )\n        for n in range(N):\n            iw = (\n                min(boxes[n, 2], query_boxes[k, 2]) -\n                max(boxes[n, 0], query_boxes[k, 0]) + 1\n            )\n            if iw > 0:\n                ih = (\n                    min(boxes[n, 3], query_boxes[k, 3]) -\n                    max(boxes[n, 1], query_boxes[k, 1]) + 1\n                )\n                if ih > 0:\n                    ua = float(\n                        (boxes[n, 2] - boxes[n, 0] + 1) *\n                        (boxes[n, 3] - boxes[n, 1] + 1) +\n                        box_area - iw * ih\n                    )\n                    overlaps[n, k] = iw * ih / ua\n    return overlaps\n"
  },
  {
    "path": "lib/utils/blob.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Blob helper functions.\"\"\"\n\nimport numpy as np\nimport cv2\n\ndef im_list_to_blob(ims):\n    \"\"\"Convert a list of images into a network input.\n\n    Assumes images are already prepared (means subtracted, BGR order, ...).\n    \"\"\"\n    max_shape = np.array([im.shape for im in ims]).max(axis=0)\n    num_images = len(ims)\n    blob = np.zeros((num_images, max_shape[0], max_shape[1], 3),\n                    dtype=np.float32)\n    for i in xrange(num_images):\n        im = ims[i]\n        blob[i, 0:im.shape[0], 0:im.shape[1], :] = im\n    # Move channels (axis 3) to axis 1\n    # Axis order will become: (batch elem, channel, height, width)\n    channel_swap = (0, 3, 1, 2)\n    blob = blob.transpose(channel_swap)\n    return blob\n\ndef prep_im_for_blob(im, pixel_means, target_size, max_size, stride):\n    \"\"\"Mean subtract, scale, and optionally pad an image for use in a blob.\"\"\"\n    im = im.astype(np.float32, copy=False)\n    im -= pixel_means\n    im_shape = im.shape\n    im_size_min = np.min(im_shape[0:2])\n    im_size_max = np.max(im_shape[0:2])\n    im_scale = float(target_size) / float(im_size_min)\n    # Prevent the biggest axis from being more than MAX_SIZE\n    if np.round(im_scale * im_size_max) > max_size:\n        im_scale = float(max_size) / float(im_size_max)\n    im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale,\n                    interpolation=cv2.INTER_LINEAR)\n    if stride == 0:\n        return im, im_scale\n    else:\n        # Zero-pad height and width up to the nearest multiple of stride\n        im_height = int(np.ceil(im.shape[0] / float(stride)) * stride)\n        im_width = int(np.ceil(im.shape[1] / float(stride)) * stride)\n        im_channel = im.shape[2]\n        padded_im = np.zeros((im_height, 
im_width, im_channel))\n        padded_im[:im.shape[0], :im.shape[1], :] = im\n        return padded_im, im_scale\n\n"
  },
  {
    "path": "lib/utils/timer.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport time\n\nclass Timer(object):\n    \"\"\"A simple timer.\"\"\"\n    def __init__(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n\n    def tic(self):\n        # using time.time instead of time.clock because time.clock\n        # does not normalize for multithreading\n        self.start_time = time.time()\n\n    def toc(self, average=True):\n        self.diff = time.time() - self.start_time\n        self.total_time += self.diff\n        self.calls += 1\n        self.average_time = self.total_time / self.calls\n        if average:\n            return self.average_time\n        else:\n            return self.diff\n"
  },
  {
    "path": "models/README.md",
    "content": "## Model Zoo\n\n### COCO Faster R-CNN VGG-16 trained using end-to-end\n\nModel URL: https://dl.dropboxusercontent.com/s/cotx0y81zvbbhnt/coco_vgg16_faster_rcnn_final.caffemodel?dl=0\n\nTraining command:\n```\ntools/train_net.py \\\n    --gpu 0 \\\n    --solver ./models/coco/VGG16/faster_rcnn_end2end/solver.prototxt \\\n    --weights data/imagenet_models/VGG16.v2.caffemodel \\\n    --imdb coco_2014_train+coco_2014_valminusminival \\\n    --iters 490000 \\\n    --cfg ./experiments/cfgs/faster_rcnn_end2end.yml\n```\n\n`py-faster-rcnn` commit: 68eec95\n\ntest-dev2015 results\n```\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.242\n Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.453\n Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.235\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.077\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.264\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.371\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.238\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.340\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.346\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.120\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.385\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.544\n```\n\ntest-standard2015 results\n```\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.242\n Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.453\n Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.234\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.072\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | 
maxDets=100 ] = 0.264\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.369\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.238\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.341\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.347\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.115\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.389\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.544\n```\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/solver.prototxt",
    "content": "train_net: \"models/pascal_voc/FPN/FP_Net_end2end/train.prototxt\"\nbase_lr: 0.001\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 50000\ndisplay: 20\naverage_loss: 100\n# iter_size: 1\nmomentum: 0.9\nweight_decay: 0.0001\n# We disable standard caffe solver snapshotting and implement our own snapshot\n# function\nsnapshot: 0\n# We still use the snapshot prefix, though\nsnapshot_prefix: \"fpn\"\niter_size: 2\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/solver_mergercnn.prototxt",
    "content": "train_net: \"models/pascal_voc/FPN/FP_Net_end2end/train_mergercnn.prototxt\"\nbase_lr: 0.001\nlr_policy: \"step\"\ngamma: 0.1\nstepsize: 50000\ndisplay: 20\naverage_loss: 100\n# iter_size: 1\nmomentum: 0.9\nweight_decay: 0.0001\n# We disable standard caffe solver snapshotting and implement our own snapshot\n# function\nsnapshot: 0\n# We still use the snapshot prefix, though\nsnapshot_prefix: \"fpn\"\niter_size: 2\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/test.prototxt",
    "content": "\nname: \"ResNet-50\"\ninput: \"data\"\ninput_shape {\n  dim: 1\n  dim: 3\n  dim: 224\n  dim: 224\n}\n\ninput: \"im_info\"\ninput_shape {\n  dim: 1\n  dim: 3\n}\nlayer {\n\tbottom: \"data\"\n\ttop: \"conv1\"\n\tname: \"conv1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 7\n\t\tpad: 3\n\t\tstride: 2\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"bn_conv1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"scale_conv1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"conv1_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"pool1\"\n\tname: \"pool1\"\n\ttype: \"Pooling\"\n\tpooling_param {\n\t\tkernel_size: 3\n\t\tstride: 2\n\t\tpool: MAX\n\t}\n}\n\nlayer {\n\tbottom: \"pool1\"\n\ttop: \"res2a_branch1\"\n\tname: \"res2a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\ttop: \"res2a_branch1\"\n\tname: \"bn2a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\ttop: \"res2a_branch1\"\n\tname: \"scale2a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"pool1\"\n\ttop: \"res2a_branch2a\"\n\tname: \"res2a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"bn2a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"scale2a_branch2a\"\n\ttype: 
\"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"res2a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2b\"\n\tname: \"res2a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"bn2a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"scale2a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"res2a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2c\"\n\tname: \"res2a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a_branch2c\"\n\tname: \"bn2a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a_branch2c\"\n\tname: \"scale2a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a\"\n\tname: \"res2a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2a\"\n\ttop: \"res2a\"\n\tname: \"res2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"res2b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: 
\"bn2b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"scale2b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"res2b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2b\"\n\tname: \"res2b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"bn2b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"scale2b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"res2b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2c\"\n\tname: \"res2b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b_branch2c\"\n\tname: \"bn2b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b_branch2c\"\n\tname: \"scale2b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a\"\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b\"\n\tname: \"res2b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2b\"\n\ttop: \"res2b\"\n\tname: \"res2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b\"\n\ttop: \"res2c_branch2a\"\n\tname: \"res2c_branch2a\"\n\ttype: 
\"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"bn2c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"scale2c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"res2c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2b\"\n\tname: \"res2c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"bn2c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"scale2c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"res2c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2c\"\n\tname: \"res2c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c_branch2c\"\n\tname: \"bn2c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c_branch2c\"\n\tname: \"scale2c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b\"\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c\"\n\tname: \"res2c\"\n\ttype: 
\"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res2c\"\n\tname: \"res2c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res3a_branch1\"\n\tname: \"res3a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\ttop: \"res3a_branch1\"\n\tname: \"bn3a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\ttop: \"res3a_branch1\"\n\tname: \"scale3a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res3a_branch2a\"\n\tname: \"res3a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"bn3a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"scale3a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"res3a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2b\"\n\tname: \"res3a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2b\"\n\tname: \"bn3a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2b\"\n\tname: \"scale3a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: 
\"res3a_branch2b\"\n\tname: \"res3a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2c\"\n\tname: \"res3a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a_branch2c\"\n\tname: \"bn3a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a_branch2c\"\n\tname: \"scale3a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a\"\n\tname: \"res3a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3a\"\n\ttop: \"res3a\"\n\tname: \"res3a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"res3b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"bn3b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"scale3b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"res3b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2b\"\n\tname: \"res3b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: \"bn3b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: 
true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: \"scale3b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: \"res3b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2c\"\n\tname: \"res3b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b_branch2c\"\n\tname: \"bn3b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b_branch2c\"\n\tname: \"scale3b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a\"\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b\"\n\tname: \"res3b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3b\"\n\ttop: \"res3b\"\n\tname: \"res3b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b\"\n\ttop: \"res3c_branch2a\"\n\tname: \"res3c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"bn3c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"scale3c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"res3c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2b\"\n\tname: \"res3c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 
1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"bn3c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"scale3c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"res3c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2c\"\n\tname: \"res3c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c_branch2c\"\n\tname: \"bn3c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c_branch2c\"\n\tname: \"scale3c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b\"\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c\"\n\tname: \"res3c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3c\"\n\ttop: \"res3c\"\n\tname: \"res3c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c\"\n\ttop: \"res3d_branch2a\"\n\tname: \"res3d_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"bn3d_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"scale3d_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"res3d_branch2a_relu\"\n\ttype: 
\"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2b\"\n\tname: \"res3d_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"bn3d_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"scale3d_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"res3d_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2c\"\n\tname: \"res3d_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d_branch2c\"\n\tname: \"bn3d_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d_branch2c\"\n\tname: \"scale3d_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c\"\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d\"\n\tname: \"res3d\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res3d\"\n\tname: \"res3d_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res4a_branch1\"\n\tname: \"res4a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\ttop: \"res4a_branch1\"\n\tname: \"bn4a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\ttop: \"res4a_branch1\"\n\tname: 
\"scale4a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res4a_branch2a\"\n\tname: \"res4a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"bn4a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"scale4a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"res4a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2b\"\n\tname: \"res4a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"bn4a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"scale4a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"res4a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2c\"\n\tname: \"res4a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a_branch2c\"\n\tname: \"bn4a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a_branch2c\"\n\tname: 
\"scale4a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a\"\n\tname: \"res4a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4a\"\n\ttop: \"res4a\"\n\tname: \"res4a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"res4b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"bn4b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"scale4b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"res4b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2b\"\n\tname: \"res4b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"bn4b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"scale4b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"res4b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2c\"\n\tname: \"res4b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: 
\"res4b_branch2c\"\n\ttop: \"res4b_branch2c\"\n\tname: \"bn4b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2c\"\n\ttop: \"res4b_branch2c\"\n\tname: \"scale4b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a\"\n\tbottom: \"res4b_branch2c\"\n\ttop: \"res4b\"\n\tname: \"res4b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4b\"\n\ttop: \"res4b\"\n\tname: \"res4b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b\"\n\ttop: \"res4c_branch2a\"\n\tname: \"res4c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"bn4c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"scale4c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"res4c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2b\"\n\tname: \"res4c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"bn4c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"scale4c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"res4c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: 
\"res4c_branch2c\"\n\tname: \"res4c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c_branch2c\"\n\tname: \"bn4c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c_branch2c\"\n\tname: \"scale4c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b\"\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c\"\n\tname: \"res4c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4c\"\n\ttop: \"res4c\"\n\tname: \"res4c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4c\"\n\ttop: \"res4d_branch2a\"\n\tname: \"res4d_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"bn4d_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"scale4d_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"res4d_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2b\"\n\tname: \"res4d_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: \"bn4d_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: \"scale4d_branch2b\"\n\ttype: \"Scale\"\n\tscale_param 
{\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: \"res4d_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2c\"\n\tname: \"res4d_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d_branch2c\"\n\tname: \"bn4d_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d_branch2c\"\n\tname: \"scale4d_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c\"\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d\"\n\tname: \"res4d\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4d\"\n\ttop: \"res4d\"\n\tname: \"res4d_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d\"\n\ttop: \"res4e_branch2a\"\n\tname: \"res4e_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"bn4e_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"scale4e_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"res4e_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2b\"\n\tname: \"res4e_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2b\"\n\tname: \"bn4e_branch2b\"\n\ttype: 
\"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2b\"\n\tname: \"scale4e_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2b\"\n\tname: \"res4e_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2c\"\n\tname: \"res4e_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e_branch2c\"\n\tname: \"bn4e_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e_branch2c\"\n\tname: \"scale4e_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d\"\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e\"\n\tname: \"res4e\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4e\"\n\ttop: \"res4e\"\n\tname: \"res4e_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e\"\n\ttop: \"res4f_branch2a\"\n\tname: \"res4f_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"bn4f_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"scale4f_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"res4f_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2b\"\n\tname: \"res4f_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param 
{\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"bn4f_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"scale4f_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"res4f_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2c\"\n\tname: \"res4f_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f_branch2c\"\n\tname: \"bn4f_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f_branch2c\"\n\tname: \"scale4f_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e\"\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f\"\n\tname: \"res4f\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"res4f\"\n\tname: \"res4f_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"res5a_branch1\"\n\tname: \"res5a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch1\"\n\ttop: \"res5a_branch1\"\n\tname: \"bn5a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch1\"\n\ttop: \"res5a_branch1\"\n\tname: \"scale5a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"res5a_branch2a\"\n\tname: 
\"res5a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"bn5a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"scale5a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"res5a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2b\"\n\tname: \"res5a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"bn5a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"scale5a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"res5a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2c\"\n\tname: \"res5a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2c\"\n\ttop: \"res5a_branch2c\"\n\tname: \"bn5a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2c\"\n\ttop: \"res5a_branch2c\"\n\tname: \"scale5a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch1\"\n\tbottom: \"res5a_branch2c\"\n\ttop: 
\"res5a\"\n\tname: \"res5a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5a\"\n\ttop: \"res5a\"\n\tname: \"res5a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"res5b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"bn5b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"scale5b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"res5b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2b\"\n\tname: \"res5b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"bn5b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"scale5b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"res5b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2c\"\n\tname: \"res5b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2c\"\n\ttop: \"res5b_branch2c\"\n\tname: \"bn5b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: 
\"res5b_branch2c\"\n\ttop: \"res5b_branch2c\"\n\tname: \"scale5b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a\"\n\tbottom: \"res5b_branch2c\"\n\ttop: \"res5b\"\n\tname: \"res5b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5b\"\n\ttop: \"res5b\"\n\tname: \"res5b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b\"\n\ttop: \"res5c_branch2a\"\n\tname: \"res5c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"bn5c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"scale5c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"res5c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2b\"\n\tname: \"res5c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"bn5c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"scale5c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"res5c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2c\"\n\tname: \"res5c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: 
false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c_branch2c\"\n\tname: \"bn5c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c_branch2c\"\n\tname: \"scale5c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b\"\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c\"\n\tname: \"res5c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"res5c\"\n\tname: \"res5c_relu\"\n\ttype: \"ReLU\"\n}\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"res6\"\n\tname: \"pool_res6\"\n\ttype: \"Pooling\"\n\tpooling_param {\n\t\tkernel_size: 3\n\t\tstride: 2\n\t\tpool: MAX\n\t}\n}\n####lateral\n\nlayer {\n\tbottom: \"res6\"\n\ttop: \"p6\"\n\tname: \"p6\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n        \n\t}\n}\n\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"p5\"\n\tname: \"p5\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n        \n\t}\n}\n\nlayer {\n    name: \"upP5\"\n\ttype: \"Deconvolution\"\n    bottom: \"p5\" \n\ttop: \"upP5\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"c4\"\n\tname: \"newC4\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param 
{\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\n\nlayer {\n    name: \"p4\"\n    type: \"Eltwise\"\n    bottom: \"c4\"\n    bottom: \"upP5\"\n    top: \"p4\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\nlayer {\n\tbottom: \"p4\"\n\ttop: \"p4_lateral\"\n\tname: \"p4_lateral\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"upP4\"\n\ttype: \"Deconvolution\"\n    bottom: \"p4_lateral\" \n\ttop: \"upP4\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\n\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"c3\"\n\tname: \"newC3\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"p3\"\n    type: \"Eltwise\"\n    bottom: \"c3\"\n    bottom: \"upP4\"\n    top: \"p3\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\nlayer {\n\tbottom: \"p3\"\n\ttop: \"p3_lateral\"\n\tname: \"p3_lateral\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"c2\"\n\tname: 
\"newC2\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"upP2\"\n\ttype: \"Deconvolution\"\n    bottom: \"p3_lateral\" \n\ttop: \"upP2\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\nlayer {\n    name: \"p2\"\n    type: \"Eltwise\"\n    bottom: \"c2\"\n    bottom: \"upP2\"\n    top: \"p2\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\n\n\n####\n\n#========= RPN/p2 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p2\"\n  type: \"Convolution\"\n  bottom: \"p2\"\n  top: \"rpn/output/p2\"\n  param { lr_mult: 1.0\n  \t\t  name: \"rpn_conv_3x3_w\"\n          }\n  param { lr_mult: 2.0\n  \t\t  name: \"rpn_conv_3x3_b\"\n          }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p2\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p2\"\n  top: \"rpn/output/p2\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p2\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p2\"\n  top: \"rpn_cls_score/p2\"\n  param { lr_mult: 1.0\n  \t\tname: \"rpn_cls_score_w\" }\n  param { lr_mult: 2.0\n \t\tname: \"rpn_cls_score_b\"\n        }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p2\"\n  type: \"Convolution\"\n  bottom: 
\"rpn/output/p2\"\n  top: \"rpn_bbox_pred/p2\"\n  param { lr_mult: 1.0\n  \t\tname: \"rpn_bbox_pred_w\"\n        }\n  param { lr_mult: 2.0\n  name: \"rpn_bbox_pred_b\"\n  }\n  convolution_param {\n    num_output: 24   # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p2\"\n   top: \"rpn_cls_score_reshape_/p2\"\n   name: \"rpn_cls_score_reshape_/p2\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p2\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p2\"\n  top: \"fpn_out/p2\"\n}\n\nlayer {\n   bottom: \"fpn_out/p2\"\n   top: \"fpn_out_reshape/p2\"\n   name: \"fpn_out_reshape/p2\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n#========= RPN/p3 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p3\"\n  type: \"Convolution\"\n  bottom: \"p3\"\n  top: \"rpn/output/p3\"\n  param { lr_mult: 1.0\n        name: \"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0 \n   name: \"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p3\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn/output/p3\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p3\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn_cls_score/p3\"\n  param { lr_mult: 1.0 \n  name: \"rpn_cls_score_w\"\n  }\n  param { lr_mult: 2.0\n    name: \"rpn_cls_score_b\"\n    }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: 
\"rpn_bbox_pred/p3\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn_bbox_pred/p3\"\n  param { lr_mult: 1.0\n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n   name:\"rpn_bbox_pred_b\" \n  }\n  convolution_param {\n    num_output: 24   # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p3\"\n   top: \"rpn_cls_score_reshape_/p3\"\n   name: \"rpn_cls_score_reshape_/p3\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p3\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p3\"\n  top: \"fpn_out/p3\"\n}\n\nlayer {\n   bottom: \"fpn_out/p3\"\n   top: \"fpn_out_reshape/p3\"\n   name: \"fpn_out_reshape/p3\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n\n#========= RPN/p4 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p4\"\n  type: \"Convolution\"\n  bottom: \"p4\"\n  top: \"rpn/output/p4\"\n  param { lr_mult: 1.0\n  name: \"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0 \n    name: \"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p4\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn/output/p4\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p4\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn_cls_score/p4\"\n  param { lr_mult: 1.0 \n  name:\"rpn_cls_score_w\"\n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n    }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 
0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p4\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn_bbox_pred/p4\"\n  param { lr_mult: 1.0\n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n  }\n  convolution_param {\n    num_output: 24   # 4 * 6(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p4\"\n   top: \"rpn_cls_score_reshape_/p4\"\n   name: \"rpn_cls_score_reshape_/p4\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim: 0} }\n}\n\n\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p4\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p4\"\n  top: \"fpn_out/p4\"\n}\n\nlayer {\n   bottom: \"fpn_out/p4\"\n   top: \"fpn_out_reshape/p4\"\n   name: \"fpn_out_reshape/p4\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n\n\n#========= RPN/p5 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p5\"\n  type: \"Convolution\"\n  bottom: \"p5\"\n  top: \"rpn/output/p5\"\n  param { lr_mult: 1.0\n  name:\"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p5\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn/output/p5\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p5\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn_cls_score/p5\"\n  param { lr_mult: 1.0\n  name:\"rpn_cls_score_w\"\n  \n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n  }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 6(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    
bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p5\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn_bbox_pred/p5\"\n  param { lr_mult: 1.0 \n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n    }\n  convolution_param {\n    num_output: 24  # 4 * 6(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p5\"\n   top: \"rpn_cls_score_reshape_/p5\"\n   name: \"rpn_cls_score_reshape_/p5\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim: 0} }\n}\n\n\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p5\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p5\"\n  top: \"fpn_out/p5\"\n}\n\nlayer {\n   bottom: \"fpn_out/p5\"\n   top: \"fpn_out_reshape/p5\"\n   name: \"fpn_out_reshape/p5\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n#========= RPN/p6 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p6\"\n  type: \"Convolution\"\n  bottom: \"p6\"\n  top: \"rpn/output/p6\"\n  param { lr_mult: 1.0\n  name:\"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p6\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn/output/p6\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p6\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn_cls_score/p6\"\n  param { lr_mult: 1.0\n  name:\"rpn_cls_score_w\"\n  \n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n  }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 6(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: 
\"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p6\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn_bbox_pred/p6\"\n  param { lr_mult: 1.0 \n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n    }\n  convolution_param {\n    num_output: 24  # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p6\"\n   top: \"rpn_cls_score_reshape_/p6\"\n   name: \"rpn_cls_score_reshape_/p6\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p6\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p6\"\n  top: \"fpn_out/p6\"\n}\n\nlayer {\n   bottom: \"fpn_out/p6\"\n   top: \"fpn_out_reshape/p6\"\n   name: \"fpn_out_reshape/p6\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n\n\n\n\n#========= RoI Proposal ============\n\n\n\n \n\nlayer {\n  name: 'proposal'\n  type: 'Python'\n    bottom: 'im_info'\n    bottom: 'rpn_bbox_pred/p2'\n    bottom: 'rpn_bbox_pred/p3'\n\tbottom: 'rpn_bbox_pred/p4'\n\tbottom: 'rpn_bbox_pred/p5'\n\tbottom: 'rpn_bbox_pred/p6'\n \tbottom: 'fpn_out_reshape/p2'\n\tbottom: 'fpn_out_reshape/p3'\n\tbottom: 'fpn_out_reshape/p4'\n\tbottom: 'fpn_out_reshape/p5'\n\tbottom: 'fpn_out_reshape/p6'\n  top: 'rpn_rois'\n  python_param {\n    module: 'rpn.proposal_layer'\n    layer: 'ProposalLayer'\n    param_str: \"'feat_stride': 4,8,16,32,64\"\n\n  }\n}\n\n\n\n#================rois process======================\n\n\nlayer {\n  name: 'as_rois'\n  type: 'Python'\n  bottom: 'rpn_rois'\n  top: 'rois'\n  top: 'rois/h2'\n  top: 'rois/h3'\n  top: 'rois/h4'\n  top: 'rois/h5'\n  python_param {\n    module: 'rpn.as_rois'\n    layer: 'As_roisLayer'\n\n  
}\n}\n\n\n#========= RCNN ============\n\n######POOLING=======\nlayer {\n  name: \"roi_pool/h2\"\n  type: \"ROIPooling\"\n  bottom: \"p2\"\n  bottom: \"rois/h2\"\n  top: \"roi_pool/h2\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.25 # 1/4\n  }\n}\n\n\nlayer {\n  name: \"roi_pool/h3\"\n  type: \"ROIPooling\"\n  bottom: \"p3\"\n  bottom: \"rois/h3\"\n  top: \"roi_pool/h3\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.125 # 1/8\n  }\n}\nlayer {\n  name: \"roi_pool/h4\"\n  type: \"ROIPooling\"\n  bottom: \"p4\"\n  bottom: \"rois/h4\"\n  top: \"roi_pool/h4\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.0625 # 1/16\n  }\n}\n\nlayer {\n  name: \"roi_pool/h5\"\n  type: \"ROIPooling\"\n  bottom: \"p5\"\n  bottom: \"rois/h5\"\n  top: \"roi_pool/h5\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.03125 # 1/32\n  }\n}\n\n\n\n\n#h2\nlayer {\n  name: \"rcnn_fc6/h2\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h2\"\n  top: \"rcnn_fc6/h2\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h2\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"rcnn_fc6/h2\"\n}\nlayer {\n  name: \"drop6/h2\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"rcnn_fc6/h2\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h2\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"fc7/h2\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  
name: \"relu7/h2\"\n  type: \"ReLU\"\n  bottom: \"fc7/h2\"\n  top: \"fc7/h2\"\n}\nlayer {\n  name: \"drop7/h2\"\n  type: \"Dropout\"\n  bottom: \"fc7/h2\"\n  top: \"fc7/h2\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h2\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h2\"\n  top: \"cls_score/h2\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h2\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h2\"\n  top: \"bbox_pred/h2\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\n\n#h3\nlayer {\n  name: \"rcnn_fc6/h3\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h3\"\n  top: \"rcnn_fc6/h3\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h3\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"rcnn_fc6/h3\"\n}\nlayer {\n  name: \"drop6/h3\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"rcnn_fc6/h3\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h3\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"fc7/h3\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    
type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h3\"\n  type: \"ReLU\"\n  bottom: \"fc7/h3\"\n  top: \"fc7/h3\"\n}\nlayer {\n  name: \"drop7/h3\"\n  type: \"Dropout\"\n  bottom: \"fc7/h3\"\n  top: \"fc7/h3\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h3\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h3\"\n  top: \"cls_score/h3\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h3\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h3\"\n  top: \"bbox_pred/h3\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\n\n\n#h4\nlayer {\n  name: \"rcnn_fc6/h4\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h4\"\n  top: \"rcnn_fc6/h4\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h4\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"rcnn_fc6/h4\"\n}\nlayer {\n  name: \"drop6/h4\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"rcnn_fc6/h4\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h4\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"fc7/h4\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 
2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h4\"\n  type: \"ReLU\"\n  bottom: \"fc7/h4\"\n  top: \"fc7/h4\"\n}\nlayer {\n  name: \"drop7/h4\"\n  type: \"Dropout\"\n  bottom: \"fc7/h4\"\n  top: \"fc7/h4\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h4\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h4\"\n  top: \"cls_score/h4\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h4\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h4\"\n  top: \"bbox_pred/h4\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n#h5\nlayer {\n  name: \"rcnn_fc6/h5\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h5\"\n  top: \"rcnn_fc6/h5\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h5\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h5\"\n  top: \"rcnn_fc6/h5\"\n}\nlayer {\n  name: \"drop6/h5\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h5\"\n  top: \"rcnn_fc6/h5\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h5\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h5\"\n 
 top: \"fc7/h5\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h5\"\n  type: \"ReLU\"\n  bottom: \"fc7/h5\"\n  top: \"fc7/h5\"\n}\nlayer {\n  name: \"drop7/h5\"\n  type: \"Dropout\"\n  bottom: \"fc7/h5\"\n  top: \"fc7/h5\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h5\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h5\"\n  top: \"cls_score/h5\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h5\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h5\"\n  top: \"bbox_pred/h5\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\nlayer {\n  name: \"cls_score_concat\"\n  type: \"Concat\"\n  bottom: \"cls_score/h2\"\n  bottom: \"cls_score/h3\"\n  bottom: \"cls_score/h4\"\n  bottom: \"cls_score/h5\"\n  top: \"cls_score\"\n  concat_param {\n    axis: 0\n  }\n}\n\nlayer {\n  name: \"bbox_pred_concat\"\n  type: \"Concat\"\n  bottom: \"bbox_pred/h2\"\n  bottom: \"bbox_pred/h3\"\n  bottom: \"bbox_pred/h4\"\n  bottom: \"bbox_pred/h5\"\n  top: \"bbox_pred\"\n  concat_param {\n    axis: 0\n  }\n}\n\nlayer {\n  name: \"cls_prob\"\n  type: \"Softmax\"\n  bottom: \"cls_score\"\n  top: \"cls_prob\"\n}\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/test_mergercnn.prototxt",
    "content": "\r\nname: \"ResNet-50\"\r\ninput: \"data\"\r\ninput_shape {\r\n  dim: 1\r\n  dim: 3\r\n  dim: 224\r\n  dim: 224\r\n}\r\n\r\ninput: \"im_info\"\r\ninput_shape {\r\n  dim: 1\r\n  dim: 3\r\n}\r\nlayer {\r\n\tbottom: \"data\"\r\n\ttop: \"conv1\"\r\n\tname: \"conv1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 7\r\n\t\tpad: 3\r\n\t\tstride: 2\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"bn_conv1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"scale_conv1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"conv1_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"pool1\"\r\n\tname: \"pool1\"\r\n\ttype: \"Pooling\"\r\n\tpooling_param {\r\n\t\tkernel_size: 3\r\n\t\tstride: 2\r\n\t\tpool: MAX\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"pool1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"res2a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"bn2a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"scale2a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"pool1\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"res2a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: 
\"res2a_branch2a\"\r\n\tname: \"bn2a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"scale2a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"res2a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"res2a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"bn2a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"scale2a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"res2a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"res2a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"bn2a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"scale2a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch1\"\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a\"\r\n\tname: 
\"res2a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\ttop: \"res2a\"\r\n\tname: \"res2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"res2b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"bn2b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"scale2b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"res2b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"res2b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"bn2b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"scale2b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"res2b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"res2b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer 
{\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"bn2b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"scale2b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b\"\r\n\tname: \"res2b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\ttop: \"res2b\"\r\n\tname: \"res2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"res2c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"bn2c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"scale2c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"res2c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"res2c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"bn2c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"scale2c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param 
{\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"res2c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"res2c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"bn2c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"scale2c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c\"\r\n\tname: \"res2c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res2c\"\r\n\tname: \"res2c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"res3a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"bn3a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"scale3a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"res3a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"bn3a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"scale3a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"res3a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"res3a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"bn3a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"scale3a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"res3a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"res3a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2c\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"bn3a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2c\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"scale3a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\tbottom: 
\"res3a_branch2c\"\r\n\ttop: \"res3a\"\r\n\tname: \"res3a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\ttop: \"res3a\"\r\n\tname: \"res3a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"res3b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"bn3b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"scale3b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"res3b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"res3b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"bn3b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"scale3b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"res3b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"res3b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 
1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"bn3b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"scale3b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b\"\r\n\tname: \"res3b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\ttop: \"res3b\"\r\n\tname: \"res3b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"res3c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"bn3c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"scale3c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"res3c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"res3c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"bn3c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: 
\"scale3c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"res3c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"res3c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"bn3c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"scale3c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c\"\r\n\tname: \"res3c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\ttop: \"res3c\"\r\n\tname: \"res3c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"res3d_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"bn3d_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"scale3d_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"res3d_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: 
\"res3d_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"bn3d_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"scale3d_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"res3d_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"res3d_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"bn3d_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"scale3d_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d\"\r\n\tname: \"res3d\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res3d\"\r\n\tname: \"res3d_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"res4a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"bn4a_branch1\"\r\n\ttype: 
\"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"scale4a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"res4a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"bn4a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"scale4a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"res4a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"res4a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"bn4a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"scale4a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"res4a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"res4a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 
1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"bn4a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"scale4a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a\"\r\n\tname: \"res4a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\ttop: \"res4a\"\r\n\tname: \"res4a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"res4b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"bn4b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"scale4b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"res4b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"res4b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"bn4b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res4b_branch2b\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"scale4b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"res4b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"res4b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"bn4b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"scale4b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b\"\r\n\tname: \"res4b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\ttop: \"res4b\"\r\n\tname: \"res4b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"res4c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"bn4c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"scale4c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"res4c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res4c_branch2a\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"res4c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"bn4c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"scale4c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"res4c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"res4c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"bn4c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"scale4c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c\"\r\n\tname: \"res4c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\ttop: \"res4c\"\r\n\tname: \"res4c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"res4d_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: 
\"res4d_branch2a\"\r\n\tname: \"bn4d_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"scale4d_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"res4d_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"res4d_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"bn4d_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"scale4d_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"res4d_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"res4d_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"bn4d_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"scale4d_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d\"\r\n\tname: 
\"res4d\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\ttop: \"res4d\"\r\n\tname: \"res4d_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"res4e_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"bn4e_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"scale4e_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"res4e_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"res4e_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"bn4e_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"scale4e_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"res4e_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"res4e_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer 
{\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"bn4e_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"scale4e_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e\"\r\n\tname: \"res4e\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\ttop: \"res4e\"\r\n\tname: \"res4e_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"res4f_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"bn4f_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"scale4f_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"res4f_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"res4f_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"bn4f_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"scale4f_branch2b\"\r\n\ttype: 
\"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"res4f_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"res4f_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"bn4f_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"scale4f_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f\"\r\n\tname: \"res4f\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res4f\"\r\n\tname: \"res4f_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"res5a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch1\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"bn5a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch1\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"scale5a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"res5a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: 
false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"bn5a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"scale5a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"res5a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"res5a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"bn5a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"scale5a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"res5a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"res5a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2c\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"bn5a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2c\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"scale5a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res5a_branch1\"\r\n\tbottom: \"res5a_branch2c\"\r\n\ttop: \"res5a\"\r\n\tname: \"res5a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\ttop: \"res5a\"\r\n\tname: \"res5a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"res5b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"bn5b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"scale5b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"res5b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"res5b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"bn5b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"scale5b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"res5b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"res5b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 
1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"bn5b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"scale5b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b\"\r\n\tname: \"res5b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\ttop: \"res5b\"\r\n\tname: \"res5b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"res5c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"bn5c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"scale5c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"res5c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"res5c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"bn5c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: 
\"res5c_branch2b\"\r\n\tname: \"scale5c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"res5c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"res5c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"bn5c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"scale5c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c\"\r\n\tname: \"res5c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: \"res5c\"\r\n\tname: \"res5c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: \"res6\"\r\n\tname: \"pool_res6\"\r\n\ttype: \"Pooling\"\r\n\tpooling_param {\r\n\t\tkernel_size: 3\r\n\t\tstride: 2\r\n\t\tpool: MAX\r\n\t}\r\n}\r\n####lateral\r\n\r\nlayer {\r\n\tbottom: \"res6\"\r\n\ttop: \"p6\"\r\n\tname: \"p6\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: \"p5\"\r\n\tname: \"p5\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n   
 weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n    name: \"upP5\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p5\" \r\n\ttop: \"upP5\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"c4\"\r\n\tname: \"newC4\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n    name: \"p4\"\r\n    type: \"Eltwise\"\r\n    bottom: \"c4\"\r\n    bottom: \"upP5\"\r\n    top: \"p4\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"p4\"\r\n\ttop: \"p4_lateral\"\r\n\tname: \"p4_lateral\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"upP4\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p4_lateral\" \r\n\ttop: \"upP4\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: 
\"c3\"\r\n\tname: \"newC3\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"p3\"\r\n    type: \"Eltwise\"\r\n    bottom: \"c3\"\r\n    bottom: \"upP4\"\r\n    top: \"p3\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"p3\"\r\n\ttop: \"p3_lateral\"\r\n\tname: \"p3_lateral\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"c2\"\r\n\tname: \"newC2\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"upP2\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p3_lateral\" \r\n\ttop: \"upP2\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\nlayer {\r\n    name: \"p2\"\r\n    type: \"Eltwise\"\r\n    bottom: \"c2\"\r\n    bottom: \"upP2\"\r\n    top: \"p2\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\n\r\n\r\n####\r\n\r\n#========= RPN/p2 ============\r\n\r\nlayer {\r\n  
name: \"rpn_conv/3x3/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"p2\"\r\n  top: \"rpn/output/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\t  name: \"rpn_conv_3x3_w\"\r\n          }\r\n  param { lr_mult: 2.0\r\n  \t\t  name: \"rpn_conv_3x3_b\"\r\n          }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p2\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn/output/p2\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn_cls_score/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\tname: \"rpn_cls_score_w\" }\r\n  param { lr_mult: 2.0\r\n \t\tname: \"rpn_cls_score_b\"\r\n        }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn_bbox_pred/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\tname: \"rpn_bbox_pred_w\"\r\n        }\r\n  param { lr_mult: 2.0\r\n  name: \"rpn_bbox_pred_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p2\"\r\n   top: \"rpn_cls_score_reshape_/p2\"\r\n   name: \"rpn_cls_score_reshape_/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p2\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p2\"\r\n  top: 
\"fpn_out/p2\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p2\"\r\n   top: \"fpn_out_reshape/p2\"\r\n   name: \"fpn_out_reshape/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n#========= RPN/p3 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"p3\"\r\n  top: \"rpn/output/p3\"\r\n  param { lr_mult: 1.0\r\n        name: \"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0 \r\n   name: \"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p3\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn/output/p3\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn_cls_score/p3\"\r\n  param { lr_mult: 1.0 \r\n  name: \"rpn_cls_score_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name: \"rpn_cls_score_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn_bbox_pred/p3\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n   name:\"rpn_bbox_pred_b\" \r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p3\"\r\n   top: \"rpn_cls_score_reshape_/p3\"\r\n   name: \"rpn_cls_score_reshape_/p3\"\r\n   type: 
\"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p3\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p3\"\r\n  top: \"fpn_out/p3\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p3\"\r\n   top: \"fpn_out_reshape/p3\"\r\n   name: \"fpn_out_reshape/p3\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n\r\n#========= RPN/p4 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"p4\"\r\n  top: \"rpn/output/p4\"\r\n  param { lr_mult: 1.0\r\n  name: \"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0 \r\n    name: \"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p4\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn/output/p4\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn_cls_score/p4\"\r\n  param { lr_mult: 1.0 \r\n  name:\"rpn_cls_score_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn_bbox_pred/p4\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" 
std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p4\"\r\n   top: \"rpn_cls_score_reshape_/p4\"\r\n   name: \"rpn_cls_score_reshape_/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p4\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p4\"\r\n  top: \"fpn_out/p4\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p4\"\r\n   top: \"fpn_out_reshape/p4\"\r\n   name: \"fpn_out_reshape/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n\r\n\r\n#========= RPN/p5 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"p5\"\r\n  top: \"rpn/output/p5\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p5\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn/output/p5\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn_cls_score/p5\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_cls_score_w\"\r\n  \r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn_bbox_pred/p5\"\r\n  param { lr_mult: 1.0 \r\n  
name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 24  # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p5\"\r\n   top: \"rpn_cls_score_reshape_/p5\"\r\n   name: \"rpn_cls_score_reshape_/p5\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p5\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p5\"\r\n  top: \"fpn_out/p5\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p5\"\r\n   top: \"fpn_out_reshape/p5\"\r\n   name: \"fpn_out_reshape/p5\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n#========= RPN/p6 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"p6\"\r\n  top: \"rpn/output/p6\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p6\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn/output/p6\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn_cls_score/p6\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_cls_score_w\"\r\n  \r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    
bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn_bbox_pred/p6\"\r\n  param { lr_mult: 1.0 \r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 24  # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p6\"\r\n   top: \"rpn_cls_score_reshape_/p6\"\r\n   name: \"rpn_cls_score_reshape_/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p6\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p6\"\r\n  top: \"fpn_out/p6\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p6\"\r\n   top: \"fpn_out_reshape/p6\"\r\n   name: \"fpn_out_reshape/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n########rpn loss#####################\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n#========= RoI Proposal ============\r\n\r\n\r\n\r\n \r\n\r\nlayer {\r\n  name: 'proposal'\r\n  type: 'Python'\r\n    bottom: 'im_info'\r\n    bottom: 'rpn_bbox_pred/p2'\r\n    bottom: 'rpn_bbox_pred/p3'\r\n\tbottom: 'rpn_bbox_pred/p4'\r\n\tbottom: 'rpn_bbox_pred/p5'\r\n\tbottom: 'rpn_bbox_pred/p6'\r\n \tbottom: 'fpn_out_reshape/p2'\r\n\tbottom: 'fpn_out_reshape/p3'\r\n\tbottom: 'fpn_out_reshape/p4'\r\n\tbottom: 'fpn_out_reshape/p5'\r\n\tbottom: 'fpn_out_reshape/p6'\r\n  top: 'rpn_rois'\r\n  python_param {\r\n    module: 'rpn.proposal_layer'\r\n    layer: 'ProposalLayer'\r\n    param_str: \"'feat_stride': 4,8,16,32,64\"\r\n\r\n  }\r\n}\r\n\r\n\r\n\r\n#================rois process======================\r\n\r\nlayer {\r\n  name: 
'as_rois'\r\n  type: 'Python'\r\n  bottom: 'rpn_rois'\r\n  top: 'rois'\r\n  top: 'rois/h2'\r\n  top: 'rois/h3'\r\n  top: 'rois/h4'\r\n  top: 'rois/h5'\r\n  python_param {\r\n    module: 'rpn.as_rois_mrcnn'\r\n    layer: 'As_rois_MergeRcnnLayer'\r\n\r\n  }\r\n}\r\n\r\n#========= RCNN ============\r\n\r\n######POOLING=======\r\nlayer {\r\n  name: \"roi_pool/h2\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p2\"\r\n  bottom: \"rois/h2\"\r\n  top: \"roi_pool/h2\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.25 # 1/4\r\n  }\r\n}\r\n\r\n\r\nlayer {\r\n  name: \"roi_pool/h3\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p3\"\r\n  bottom: \"rois/h3\"\r\n  top: \"roi_pool/h3\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.125 # 1/8\r\n  }\r\n}\r\nlayer {\r\n  name: \"roi_pool/h4\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p4\"\r\n  bottom: \"rois/h4\"\r\n  top: \"roi_pool/h4\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.0625 # 1/16\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"roi_pool/h5\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p5\"\r\n  bottom: \"rois/h5\"\r\n  top: \"roi_pool/h5\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.03125 # 1/32\r\n  }\r\n}\r\n\r\n\r\nlayer {\r\n  name: \"roi_pool_concat\"\r\n  type: \"Concat\"\r\n  bottom: \"roi_pool/h2\"\r\n  bottom: \"roi_pool/h3\"\r\n  bottom: \"roi_pool/h4\"\r\n  bottom: \"roi_pool/h5\"\r\n  top: \"roi_pool\"\r\n  concat_param {\r\n    axis: 0\r\n  }\r\n}\r\n\r\n#h2\r\nlayer {\r\n  name: \"rcnn_fc6\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"roi_pool\"\r\n  top: \"rcnn_fc6\"\r\n  param {\r\n    lr_mult: 1\r\n    name: \"rcnn_fc6_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name: \"rcnn_fc6_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 1024\r\n    weight_filler {\r\n      type: \"xavier\"\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n    
}\r\n  }\r\n}\r\nlayer {\r\n  name: \"relu6\"\r\n  type: \"ReLU\"\r\n  bottom: \"rcnn_fc6\"\r\n  top: \"rcnn_fc6\"\r\n}\r\n\r\nlayer {\r\n  name: \"fc7\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"rcnn_fc6\"\r\n  top: \"fc7\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"fc7_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name: \"fc7_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 1024\r\n    weight_filler { \r\n    type: \"xavier\"  \r\n    }  \r\n    bias_filler {  \r\n      type: \"constant\"  \r\n    } \r\n  }\r\n}\r\nlayer {\r\n  name: \"relu7\"\r\n  type: \"ReLU\"\r\n  bottom: \"fc7\"\r\n  top: \"fc7\"\r\n}\r\n\r\nlayer {\r\n  name: \"cls_score\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"fc7\"\r\n  top: \"cls_score\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"cls_score_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name:\"cls_score_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 21\r\n    weight_filler {\r\n      type: \"gaussian\"\r\n      std: 0.01\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n      value: 0\r\n    }\r\n  }\r\n}\r\nlayer {\r\n  name: \"bbox_pred\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"fc7\"\r\n  top: \"bbox_pred\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"bbox_pred_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name:\"bbox_pred_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 84\r\n    weight_filler {\r\n      type: \"gaussian\"\r\n      std: 0.001\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n      value: 0\r\n    }\r\n  }\r\n}\r\nlayer {\r\n  name: \"cls_prob\"\r\n  type: \"Softmax\"\r\n  bottom: \"cls_score\"\r\n  top: \"cls_prob\"\r\n}\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/train.prototxt",
    "content": "name: \"ResNet-50\"\nlayer {\n  name: 'input-data'\n  type: 'Python'\n  top: 'data'\n  top: 'im_info'\n  top: 'gt_boxes'\n  python_param {\n    module: 'roi_data_layer.layer'\n    layer: 'RoIDataLayer'\n    param_str: \"'num_classes': 21\"\n  }\n}\nlayer {\n\tbottom: \"data\"\n\ttop: \"conv1\"\n\tname: \"conv1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 7\n\t\tpad: 3\n\t\tstride: 2\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"bn_conv1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"scale_conv1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"conv1\"\n\tname: \"conv1_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"conv1\"\n\ttop: \"pool1\"\n\tname: \"pool1\"\n\ttype: \"Pooling\"\n\tpooling_param {\n\t\tkernel_size: 3\n\t\tstride: 2\n\t\tpool: MAX\n\t}\n}\n\nlayer {\n\tbottom: \"pool1\"\n\ttop: \"res2a_branch1\"\n\tname: \"res2a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\ttop: \"res2a_branch1\"\n\tname: \"bn2a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\ttop: \"res2a_branch1\"\n\tname: \"scale2a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"pool1\"\n\ttop: \"res2a_branch2a\"\n\tname: \"res2a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"bn2a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer 
{\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"scale2a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2a\"\n\tname: \"res2a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a_branch2a\"\n\ttop: \"res2a_branch2b\"\n\tname: \"res2a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"bn2a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"scale2a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2b\"\n\tname: \"res2a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a_branch2b\"\n\ttop: \"res2a_branch2c\"\n\tname: \"res2a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a_branch2c\"\n\tname: \"bn2a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a_branch2c\"\n\tname: \"scale2a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a_branch1\"\n\tbottom: \"res2a_branch2c\"\n\ttop: \"res2a\"\n\tname: \"res2a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2a\"\n\ttop: \"res2a\"\n\tname: \"res2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"res2b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 
1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"bn2b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"scale2b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2a\"\n\tname: \"res2b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b_branch2a\"\n\ttop: \"res2b_branch2b\"\n\tname: \"res2b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"bn2b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"scale2b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2b\"\n\tname: \"res2b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b_branch2b\"\n\ttop: \"res2b_branch2c\"\n\tname: \"res2b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b_branch2c\"\n\tname: \"bn2b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b_branch2c\"\n\tname: \"scale2b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2a\"\n\tbottom: \"res2b_branch2c\"\n\ttop: \"res2b\"\n\tname: \"res2b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2b\"\n\ttop: \"res2b\"\n\tname: \"res2b_relu\"\n\ttype: 
\"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2b\"\n\ttop: \"res2c_branch2a\"\n\tname: \"res2c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"bn2c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"scale2c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2a\"\n\tname: \"res2c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c_branch2a\"\n\ttop: \"res2c_branch2b\"\n\tname: \"res2c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 64\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"bn2c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"scale2c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2b\"\n\tname: \"res2c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c_branch2b\"\n\ttop: \"res2c_branch2c\"\n\tname: \"res2c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c_branch2c\"\n\tname: \"bn2c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c_branch2c\"\n\tname: \"scale2c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer 
{\n\tbottom: \"res2b\"\n\tbottom: \"res2c_branch2c\"\n\ttop: \"res2c\"\n\tname: \"res2c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res2c\"\n\tname: \"res2c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res3a_branch1\"\n\tname: \"res3a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\ttop: \"res3a_branch1\"\n\tname: \"bn3a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\ttop: \"res3a_branch1\"\n\tname: \"scale3a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res2c\"\n\ttop: \"res3a_branch2a\"\n\tname: \"res3a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"bn3a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"scale3a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2a\"\n\tname: \"res3a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a_branch2a\"\n\ttop: \"res3a_branch2b\"\n\tname: \"res3a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2b\"\n\tname: \"bn3a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2b\"\n\tname: \"scale3a_branch2b\"\n\ttype: 
\"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2b\"\n\tname: \"res3a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a_branch2b\"\n\ttop: \"res3a_branch2c\"\n\tname: \"res3a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a_branch2c\"\n\tname: \"bn3a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a_branch2c\"\n\tname: \"scale3a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a_branch1\"\n\tbottom: \"res3a_branch2c\"\n\ttop: \"res3a\"\n\tname: \"res3a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3a\"\n\ttop: \"res3a\"\n\tname: \"res3a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"res3b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"bn3b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"scale3b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2a\"\n\tname: \"res3b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b_branch2a\"\n\ttop: \"res3b_branch2b\"\n\tname: \"res3b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: 
\"bn3b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: \"scale3b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2b\"\n\tname: \"res3b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b_branch2b\"\n\ttop: \"res3b_branch2c\"\n\tname: \"res3b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b_branch2c\"\n\tname: \"bn3b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b_branch2c\"\n\tname: \"scale3b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3a\"\n\tbottom: \"res3b_branch2c\"\n\ttop: \"res3b\"\n\tname: \"res3b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3b\"\n\ttop: \"res3b\"\n\tname: \"res3b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3b\"\n\ttop: \"res3c_branch2a\"\n\tname: \"res3c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"bn3c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"scale3c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2a\"\n\tname: \"res3c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c_branch2a\"\n\ttop: \"res3c_branch2b\"\n\tname: \"res3c_branch2b\"\n\ttype: 
\"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"bn3c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"scale3c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2b\"\n\tname: \"res3c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c_branch2b\"\n\ttop: \"res3c_branch2c\"\n\tname: \"res3c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c_branch2c\"\n\tname: \"bn3c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c_branch2c\"\n\tname: \"scale3c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3b\"\n\tbottom: \"res3c_branch2c\"\n\ttop: \"res3c\"\n\tname: \"res3c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3c\"\n\ttop: \"res3c\"\n\tname: \"res3c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3c\"\n\ttop: \"res3d_branch2a\"\n\tname: \"res3d_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"bn3d_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"scale3d_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: 
\"res3d_branch2a\"\n\ttop: \"res3d_branch2a\"\n\tname: \"res3d_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d_branch2a\"\n\ttop: \"res3d_branch2b\"\n\tname: \"res3d_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 128\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"bn3d_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"scale3d_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2b\"\n\tname: \"res3d_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d_branch2b\"\n\ttop: \"res3d_branch2c\"\n\tname: \"res3d_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d_branch2c\"\n\tname: \"bn3d_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d_branch2c\"\n\tname: \"scale3d_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3c\"\n\tbottom: \"res3d_branch2c\"\n\ttop: \"res3d\"\n\tname: \"res3d\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res3d\"\n\tname: \"res3d_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res4a_branch1\"\n\tname: \"res4a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\ttop: \"res4a_branch1\"\n\tname: \"bn4a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: 
true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\ttop: \"res4a_branch1\"\n\tname: \"scale4a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"res4a_branch2a\"\n\tname: \"res4a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"bn4a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"scale4a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2a\"\n\tname: \"res4a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a_branch2a\"\n\ttop: \"res4a_branch2b\"\n\tname: \"res4a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"bn4a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"scale4a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2b\"\n\tname: \"res4a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a_branch2b\"\n\ttop: \"res4a_branch2c\"\n\tname: \"res4a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a_branch2c\"\n\tname: \"bn4a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer 
{\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a_branch2c\"\n\tname: \"scale4a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a_branch1\"\n\tbottom: \"res4a_branch2c\"\n\ttop: \"res4a\"\n\tname: \"res4a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4a\"\n\ttop: \"res4a\"\n\tname: \"res4a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"res4b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"bn4b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"scale4b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2a\"\n\tname: \"res4b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b_branch2a\"\n\ttop: \"res4b_branch2b\"\n\tname: \"res4b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"bn4b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"scale4b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2b\"\n\tname: \"res4b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b_branch2b\"\n\ttop: \"res4b_branch2c\"\n\tname: \"res4b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 
1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2c\"\n\ttop: \"res4b_branch2c\"\n\tname: \"bn4b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b_branch2c\"\n\ttop: \"res4b_branch2c\"\n\tname: \"scale4b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4a\"\n\tbottom: \"res4b_branch2c\"\n\ttop: \"res4b\"\n\tname: \"res4b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4b\"\n\ttop: \"res4b\"\n\tname: \"res4b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4b\"\n\ttop: \"res4c_branch2a\"\n\tname: \"res4c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"bn4c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"scale4c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2a\"\n\tname: \"res4c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4c_branch2a\"\n\ttop: \"res4c_branch2b\"\n\tname: \"res4c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"bn4c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"scale4c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2b\"\n\tname: \"res4c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer 
{\n\tbottom: \"res4c_branch2b\"\n\ttop: \"res4c_branch2c\"\n\tname: \"res4c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c_branch2c\"\n\tname: \"bn4c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c_branch2c\"\n\tname: \"scale4c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4b\"\n\tbottom: \"res4c_branch2c\"\n\ttop: \"res4c\"\n\tname: \"res4c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4c\"\n\ttop: \"res4c\"\n\tname: \"res4c_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4c\"\n\ttop: \"res4d_branch2a\"\n\tname: \"res4d_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"bn4d_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"scale4d_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2a\"\n\tname: \"res4d_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d_branch2a\"\n\ttop: \"res4d_branch2b\"\n\tname: \"res4d_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: \"bn4d_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: 
\"scale4d_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2b\"\n\tname: \"res4d_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d_branch2b\"\n\ttop: \"res4d_branch2c\"\n\tname: \"res4d_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d_branch2c\"\n\tname: \"bn4d_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d_branch2c\"\n\tname: \"scale4d_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4c\"\n\tbottom: \"res4d_branch2c\"\n\ttop: \"res4d\"\n\tname: \"res4d\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4d\"\n\ttop: \"res4d\"\n\tname: \"res4d_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4d\"\n\ttop: \"res4e_branch2a\"\n\tname: \"res4e_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"bn4e_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"scale4e_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2a\"\n\tname: \"res4e_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e_branch2a\"\n\ttop: \"res4e_branch2b\"\n\tname: \"res4e_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: 
\"res4e_branch2b\"\n\tname: \"bn4e_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2b\"\n\tname: \"scale4e_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2b\"\n\tname: \"res4e_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e_branch2b\"\n\ttop: \"res4e_branch2c\"\n\tname: \"res4e_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e_branch2c\"\n\tname: \"bn4e_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e_branch2c\"\n\tname: \"scale4e_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4d\"\n\tbottom: \"res4e_branch2c\"\n\ttop: \"res4e\"\n\tname: \"res4e\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4e\"\n\ttop: \"res4e\"\n\tname: \"res4e_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4e\"\n\ttop: \"res4f_branch2a\"\n\tname: \"res4f_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"bn4f_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"scale4f_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2a\"\n\tname: \"res4f_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f_branch2a\"\n\ttop: \"res4f_branch2b\"\n\tname: 
\"res4f_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"bn4f_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"scale4f_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2b\"\n\tname: \"res4f_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f_branch2b\"\n\ttop: \"res4f_branch2c\"\n\tname: \"res4f_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 1024\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f_branch2c\"\n\tname: \"bn4f_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f_branch2c\"\n\tname: \"scale4f_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res4e\"\n\tbottom: \"res4f_branch2c\"\n\ttop: \"res4f\"\n\tname: \"res4f\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"res4f\"\n\tname: \"res4f_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"res5a_branch1\"\n\tname: \"res5a_branch1\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch1\"\n\ttop: \"res5a_branch1\"\n\tname: \"bn5a_branch1\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch1\"\n\ttop: \"res5a_branch1\"\n\tname: \"scale5a_branch1\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer 
{\n\tbottom: \"res4f\"\n\ttop: \"res5a_branch2a\"\n\tname: \"res5a_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 2\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"bn5a_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"scale5a_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2a\"\n\tname: \"res5a_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a_branch2a\"\n\ttop: \"res5a_branch2b\"\n\tname: \"res5a_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"bn5a_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"scale5a_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2b\"\n\tname: \"res5a_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a_branch2b\"\n\ttop: \"res5a_branch2c\"\n\tname: \"res5a_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2c\"\n\ttop: \"res5a_branch2c\"\n\tname: \"bn5a_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a_branch2c\"\n\ttop: \"res5a_branch2c\"\n\tname: \"scale5a_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: 
\"res5a_branch1\"\n\tbottom: \"res5a_branch2c\"\n\ttop: \"res5a\"\n\tname: \"res5a\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5a\"\n\ttop: \"res5a\"\n\tname: \"res5a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"res5b_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"bn5b_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"scale5b_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2a\"\n\tname: \"res5b_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b_branch2a\"\n\ttop: \"res5b_branch2b\"\n\tname: \"res5b_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"bn5b_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"scale5b_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2b\"\n\tname: \"res5b_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b_branch2b\"\n\ttop: \"res5b_branch2c\"\n\tname: \"res5b_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2c\"\n\ttop: \"res5b_branch2c\"\n\tname: \"bn5b_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param 
{\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b_branch2c\"\n\ttop: \"res5b_branch2c\"\n\tname: \"scale5b_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5a\"\n\tbottom: \"res5b_branch2c\"\n\ttop: \"res5b\"\n\tname: \"res5b\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5b\"\n\ttop: \"res5b\"\n\tname: \"res5b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5b\"\n\ttop: \"res5c_branch2a\"\n\tname: \"res5c_branch2a\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"bn5c_branch2a\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"scale5c_branch2a\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2a\"\n\tname: \"res5c_branch2a_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5c_branch2a\"\n\ttop: \"res5c_branch2b\"\n\tname: \"res5c_branch2b\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 512\n\t\tkernel_size: 3\n\t\tpad: 1\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"bn5c_branch2b\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"scale5c_branch2b\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2b\"\n\tname: \"res5c_branch2b_relu\"\n\ttype: \"ReLU\"\n}\n\nlayer {\n\tbottom: \"res5c_branch2b\"\n\ttop: \"res5c_branch2c\"\n\tname: \"res5c_branch2c\"\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 2048\n\t\tkernel_size: 
1\n\t\tpad: 0\n\t\tstride: 1\n\t\tbias_term: false\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c_branch2c\"\n\tname: \"bn5c_branch2c\"\n\ttype: \"BatchNorm\"\n\tbatch_norm_param {\n\t\tuse_global_stats: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c_branch2c\"\n\tname: \"scale5c_branch2c\"\n\ttype: \"Scale\"\n\tscale_param {\n\t\tbias_term: true\n\t}\n}\n\nlayer {\n\tbottom: \"res5b\"\n\tbottom: \"res5c_branch2c\"\n\ttop: \"res5c\"\n\tname: \"res5c\"\n\ttype: \"Eltwise\"\n}\n\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"res5c\"\n\tname: \"res5c_relu\"\n\ttype: \"ReLU\"\n}\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"res6\"\n\tname: \"pool_res6\"\n\ttype: \"Pooling\"\n\tpooling_param {\n\t\tkernel_size: 3\n\t\tstride: 2\n\t\tpool: MAX\n\t}\n}\n####lateral\n\nlayer {\n\tbottom: \"res6\"\n\ttop: \"p6\"\n\tname: \"p6\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n        \n\t}\n}\n\nlayer {\n\tbottom: \"res5c\"\n\ttop: \"p5\"\n\tname: \"p5\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n        \n\t}\n}\n\nlayer {\n    name: \"upP5\"\n\ttype: \"Deconvolution\"\n    bottom: \"p5\" \n\ttop: \"upP5\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\n\nlayer {\n\tbottom: \"res4f\"\n\ttop: \"c4\"\n\tname: \"newC4\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: 
\"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\n\nlayer {\n    name: \"p4\"\n    type: \"Eltwise\"\n    bottom: \"c4\"\n    bottom: \"upP5\"\n    top: \"p4\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\nlayer {\n\tbottom: \"p4\"\n\ttop: \"p4_lateral\"\n\tname: \"p4_lateral\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"upP4\"\n\ttype: \"Deconvolution\"\n    bottom: \"p4_lateral\" \n\ttop: \"upP4\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\n\n\nlayer {\n\tbottom: \"res3d\"\n\ttop: \"c3\"\n\tname: \"newC3\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"p3\"\n    type: \"Eltwise\"\n    bottom: \"c3\"\n    bottom: \"upP4\"\n    top: \"p3\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\nlayer {\n\tbottom: \"p3\"\n\ttop: \"p3_lateral\"\n\tname: \"p3_lateral\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\n\nlayer 
{\n\tbottom: \"res2c\"\n\ttop: \"c2\"\n\tname: \"newC2\"\n\tparam {\n\t\tlr_mult: 1.0\n\t}\n\tparam {\n\t\tlr_mult: 2.0\n\t}\n\ttype: \"Convolution\"\n\tconvolution_param {\n\t\tnum_output: 256\n\t\tkernel_size: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0.0 }\n        \n\t}\n}\nlayer {\n    name: \"upP2\"\n\ttype: \"Deconvolution\"\n    bottom: \"p3_lateral\" \n\ttop: \"upP2\"\n    convolution_param {\n    kernel_h : 4\n    kernel_w : 4\n    stride_h: 2\n    stride_w: 2\n    pad_h: 1\n    pad_w: 1\n    num_output: 256\n    group: 256\n    bias_term: false\n     weight_filler {\n      type: \"bilinear\"\n    }\n  }\n  param { lr_mult: 0 decay_mult: 0 } \n}\nlayer {\n    name: \"p2\"\n    type: \"Eltwise\"\n    bottom: \"c2\"\n    bottom: \"upP2\"\n    top: \"p2\"\n    eltwise_param {\n        operation: SUM\n    }\n}\n\n\n\n\n####\n\n#========= RPN/p2 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p2\"\n  type: \"Convolution\"\n  bottom: \"p2\"\n  top: \"rpn/output/p2\"\n  param { lr_mult: 1.0\n  \t\t  name: \"rpn_conv_3x3_w\"\n          }\n  param { lr_mult: 2.0\n  \t\t  name: \"rpn_conv_3x3_b\"\n          }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p2\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p2\"\n  top: \"rpn/output/p2\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p2\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p2\"\n  top: \"rpn_cls_score/p2\"\n  param { lr_mult: 1.0\n  \t\tname: \"rpn_cls_score_w\" }\n  param { lr_mult: 2.0\n \t\tname: \"rpn_cls_score_b\"\n        }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p2\"\n  
type: \"Convolution\"\n  bottom: \"rpn/output/p2\"\n  top: \"rpn_bbox_pred/p2\"\n  param { lr_mult: 1.0\n  \t\tname: \"rpn_bbox_pred_w\"\n        }\n  param { lr_mult: 2.0\n  name: \"rpn_bbox_pred_b\"\n  }\n  convolution_param {\n    num_output: 24   # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p2\"\n   top: \"rpn_cls_score_reshape_/p2\"\n   name: \"rpn_cls_score_reshape_/p2\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\nlayer {\n   bottom: \"rpn_bbox_pred/p2\"\n   top: \"rpn_bbox_pred_reshape/p2\"\n   name: \"rpn_bbox_pred_reshape/p2\"\n   type: \"Reshape\"\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\n}\n\nlayer {\n   bottom: \"rpn_cls_score_reshape_/p2\"\n   top: \"rpn_cls_score_reshape/p2\"\n   name: \"rpn_cls_score_reshape/p2\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\n}\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p2\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p2\"\n  top: \"fpn_out/p2\"\n}\n\nlayer {\n   bottom: \"fpn_out/p2\"\n   top: \"fpn_out_reshape/p2\"\n   name: \"fpn_out_reshape/p2\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n#========= RPN/p3 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p3\"\n  type: \"Convolution\"\n  bottom: \"p3\"\n  top: \"rpn/output/p3\"\n  param { lr_mult: 1.0\n        name: \"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0 \n   name: \"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p3\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn/output/p3\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p3\"\n  type: 
\"Convolution\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn_cls_score/p3\"\n  param { lr_mult: 1.0 \n  name: \"rpn_cls_score_w\"\n  }\n  param { lr_mult: 2.0\n    name: \"rpn_cls_score_b\"\n    }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p3\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p3\"\n  top: \"rpn_bbox_pred/p3\"\n  param { lr_mult: 1.0\n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n   name:\"rpn_bbox_pred_b\" \n  }\n  convolution_param {\n    num_output: 24   # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p3\"\n   top: \"rpn_cls_score_reshape_/p3\"\n   name: \"rpn_cls_score_reshape_/p3\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\nlayer {\n   bottom: \"rpn_bbox_pred/p3\"\n   top: \"rpn_bbox_pred_reshape/p3\"\n   name: \"rpn_bbox_pred_reshape/p3\"\n   type: \"Reshape\"\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\n}\n\nlayer {\n   bottom: \"rpn_cls_score_reshape_/p3\"\n   top: \"rpn_cls_score_reshape/p3\"\n   name: \"rpn_cls_score_reshape/p3\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\n}\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p3\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p3\"\n  top: \"fpn_out/p3\"\n}\n\nlayer {\n   bottom: \"fpn_out/p3\"\n   top: \"fpn_out_reshape/p3\"\n   name: \"fpn_out_reshape/p3\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n\n#========= RPN/p4 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p4\"\n  type: \"Convolution\"\n  bottom: \"p4\"\n  top: \"rpn/output/p4\"\n  param { lr_mult: 1.0\n  
name: \"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0 \n    name: \"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p4\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn/output/p4\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p4\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn_cls_score/p4\"\n  param { lr_mult: 1.0 \n  name:\"rpn_cls_score_w\"\n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n    }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p4\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p4\"\n  top: \"rpn_bbox_pred/p4\"\n  param { lr_mult: 1.0\n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n  }\n  convolution_param {\n    num_output: 24   # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.001 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p4\"\n   top: \"rpn_cls_score_reshape_/p4\"\n   name: \"rpn_cls_score_reshape_/p4\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\nlayer {\n   bottom: \"rpn_bbox_pred/p4\"\n   top: \"rpn_bbox_pred_reshape/p4\"\n   name: \"rpn_bbox_pred_reshape/p4\"\n   type: \"Reshape\"\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\n}\n\nlayer {\n   bottom: \"rpn_cls_score_reshape_/p4\"\n   top: \"rpn_cls_score_reshape/p4\"\n   name: \"rpn_cls_score_reshape/p4\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\n}\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p4\"\n  type: 
\"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p4\"\n  top: \"fpn_out/p4\"\n}\n\nlayer {\n   bottom: \"fpn_out/p4\"\n   top: \"fpn_out_reshape/p4\"\n   name: \"fpn_out_reshape/p4\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n\n\n\n#========= RPN/p5 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p5\"\n  type: \"Convolution\"\n  bottom: \"p5\"\n  top: \"rpn/output/p5\"\n  param { lr_mult: 1.0\n  name:\"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p5\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn/output/p5\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p5\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn_cls_score/p5\"\n  param { lr_mult: 1.0\n  name:\"rpn_cls_score_w\"\n  \n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n  }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p5\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p5\"\n  top: \"rpn_bbox_pred/p5\"\n  param { lr_mult: 1.0 \n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n    }\n  convolution_param {\n    num_output: 24  # 4 * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p5\"\n   top: \"rpn_cls_score_reshape_/p5\"\n   name: \"rpn_cls_score_reshape_/p5\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\nlayer {\n   bottom: 
\"rpn_bbox_pred/p5\"\n   top: \"rpn_bbox_pred_reshape/p5\"\n   name: \"rpn_bbox_pred_reshape/p5\"\n   type: \"Reshape\"\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\n}\n\nlayer {\n   bottom: \"rpn_cls_score_reshape_/p5\"\n   top: \"rpn_cls_score_reshape/p5\"\n   name: \"rpn_cls_score_reshape/p5\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\n}\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p5\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p5\"\n  top: \"fpn_out/p5\"\n}\n\nlayer {\n   bottom: \"fpn_out/p5\"\n   top: \"fpn_out_reshape/p5\"\n   name: \"fpn_out_reshape/p5\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n#========= RPN/p6 ============\n\nlayer {\n  name: \"rpn_conv/3x3/p6\"\n  type: \"Convolution\"\n  bottom: \"p6\"\n  top: \"rpn/output/p6\"\n  param { lr_mult: 1.0\n  name:\"rpn_conv_3x3_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_conv_3x3_b\"\n  }\n  convolution_param {\n    num_output: 512\n    kernel_size: 3 pad: 1 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\nlayer {\n  name: \"rpn_relu/3x3/p6\"\n  type: \"ReLU\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn/output/p6\"\n}\n\nlayer {\n  name: \"rpn_cls_score/p6\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn_cls_score/p6\"\n  param { lr_mult: 1.0\n  name:\"rpn_cls_score_w\"\n  \n  }\n  param { lr_mult: 2.0\n  name:\"rpn_cls_score_b\"\n  }\n  convolution_param {\n    num_output: 12   # 2(bg/fg) * 9(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\nlayer {\n  name: \"rpn_bbox_pred/p6\"\n  type: \"Convolution\"\n  bottom: \"rpn/output/p6\"\n  top: \"rpn_bbox_pred/p6\"\n  param { lr_mult: 1.0 \n  name:\"rpn_bbox_pred_w\"\n  }\n  param { lr_mult: 2.0\n    name:\"rpn_bbox_pred_b\"\n    }\n  
convolution_param {\n    num_output: 24  # 4 * 6(anchors)\n    kernel_size: 1 pad: 0 stride: 1\n    weight_filler { type: \"gaussian\" std: 0.01 }\n    bias_filler { type: \"constant\" value: 0 }\n  }\n}\n\n\n######\n\nlayer {\n   bottom: \"rpn_cls_score/p6\"\n   top: \"rpn_cls_score_reshape_/p6\"\n   name: \"rpn_cls_score_reshape_/p6\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\n}\n\n\nlayer {\n   bottom: \"rpn_bbox_pred/p6\"\n   top: \"rpn_bbox_pred_reshape/p6\"\n   name: \"rpn_bbox_pred_reshape/p6\"\n   type: \"Reshape\"\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\n}\n\nlayer {\n   bottom: \"rpn_cls_score_reshape_/p6\"\n   top: \"rpn_cls_score_reshape/p6\"\n   name: \"rpn_cls_score_reshape/p6\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\n}\n\n\n\n\n#####CLS out\n\n\nlayer {\n  name: \"fpn_out/p6\"\n  type: \"Softmax\"\n  bottom: \"rpn_cls_score_reshape_/p6\"\n  top: \"fpn_out/p6\"\n}\n\nlayer {\n   bottom: \"fpn_out/p6\"\n   top: \"fpn_out_reshape/p6\"\n   name: \"fpn_out_reshape/p6\"\n   type: \"Reshape\"\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\n}\n\n########rpn loss#####################\n\nlayer {\n  name: \"rpn_cls_score_reshape\"\n  type: \"Concat\"\n  bottom: \"rpn_cls_score_reshape/p2\"\n  bottom: \"rpn_cls_score_reshape/p3\"\n  bottom: \"rpn_cls_score_reshape/p4\"\n  bottom: \"rpn_cls_score_reshape/p5\"\n  bottom: \"rpn_cls_score_reshape/p6\"\n  top: \"rpn_cls_score_reshape\"\n  concat_param {\n    axis: 2\n  }\n}\n\n\n\nlayer {\n  name: \"rpn_bbox_pred\"\n  type: \"Concat\"\n  bottom: \"rpn_bbox_pred_reshape/p2\"\n  bottom: \"rpn_bbox_pred_reshape/p3\"\n  bottom: \"rpn_bbox_pred_reshape/p4\"\n  bottom: \"rpn_bbox_pred_reshape/p5\"\n  bottom: \"rpn_bbox_pred_reshape/p6\"\n  top: \"rpn_bbox_pred\"\n  concat_param {\n    axis: 2\n  }\n}\n\n\n\nlayer {\n  name: 'rpn-data'\n  type: 'Python'\n  bottom: 'rpn_cls_score/p2'\n  bottom: 'rpn_cls_score/p3'\n  
bottom: 'rpn_cls_score/p4'\n  bottom: 'rpn_cls_score/p5'\n  bottom: 'rpn_cls_score/p6'\n  bottom: 'gt_boxes'\n  bottom: 'im_info'\n  top: 'rpn_labels'\n  top: 'rpn_bbox_targets'\n  top: 'rpn_bbox_inside_weights'\n  top: 'rpn_bbox_outside_weights'\n  python_param {\n    module: 'rpn.anchor_target_layer'\n    layer: 'AnchorTargetLayer'\n    param_str: \"'feat_stride': 4,8,16,32,64\"\n  }\n}\n\nlayer {\n  name: \"fpn_loss_cls\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"rpn_cls_score_reshape\"\n  bottom: \"rpn_labels\"\n  propagate_down: 1\n  propagate_down: 0\n  top: \"FPNClsLoss\"\n  loss_weight: 1\n  loss_param {\n    ignore_label: -1\n    normalization: VALID\n  }\n}\n\nlayer {\n  name: \"rpn_loss_bbox\"\n  type: \"SmoothL1Loss\"\n  bottom: \"rpn_bbox_pred\"\n  bottom: \"rpn_bbox_targets\"\n  bottom: 'rpn_bbox_inside_weights'\n  bottom: 'rpn_bbox_outside_weights'\n  top: \"FPNLossBBox\"\n  loss_weight: 1\n  smooth_l1_loss_param { sigma: 3.0 }\n}\n\n#========= RoI Proposal ============\n\n\n\n \n\nlayer {\n  name: 'proposal'\n  type: 'Python'\n    bottom: 'im_info'\n    bottom: 'rpn_bbox_pred/p2'\n    bottom: 'rpn_bbox_pred/p3'\n\tbottom: 'rpn_bbox_pred/p4'\n\tbottom: 'rpn_bbox_pred/p5'\n\tbottom: 'rpn_bbox_pred/p6'\n \tbottom: 'fpn_out_reshape/p2'\n\tbottom: 'fpn_out_reshape/p3'\n\tbottom: 'fpn_out_reshape/p4'\n\tbottom: 'fpn_out_reshape/p5'\n\tbottom: 'fpn_out_reshape/p6'\n  top: 'rpn_rois'\n  python_param {\n    module: 'rpn.proposal_layer'\n    layer: 'ProposalLayer'\n    param_str: \"'feat_stride': 4,8,16,32,64\"\n\n  }\n}\n\n\n\n#================rois process======================\n\nlayer {\n  name: 'roi-data'\n  type: 'Python'\n  bottom: 'rpn_rois'\n  bottom: 'gt_boxes'\n\tbottom: 'data'\n  top: 'rois/h2'\n  top: 'rois/h3'\n  top: 'rois/h4'\n  top: 'rois/h5'\n  top: 'labels'\n  top: 'bbox_targets'\n  top: 'bbox_inside_weights'\n  top: 'bbox_outside_weights'\n  python_param {\n    module: 'rpn.proposal_target_layer'\n    layer: 'ProposalTargetLayer'\n    
param_str: \"'num_classes': 21\"\n  }\n}\n\n#========= RCNN ============\n\n######POOLING=======\nlayer {\n  name: \"roi_pool/h2\"\n  type: \"ROIPooling\"\n  bottom: \"p2\"\n  bottom: \"rois/h2\"\n  top: \"roi_pool/h2\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.25 # 1/4\n  }\n}\n\n\nlayer {\n  name: \"roi_pool/h3\"\n  type: \"ROIPooling\"\n  bottom: \"p3\"\n  bottom: \"rois/h3\"\n  top: \"roi_pool/h3\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.125 # 1/8\n  }\n}\nlayer {\n  name: \"roi_pool/h4\"\n  type: \"ROIPooling\"\n  bottom: \"p4\"\n  bottom: \"rois/h4\"\n  top: \"roi_pool/h4\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.0625 # 1/16\n  }\n}\n\nlayer {\n  name: \"roi_pool/h5\"\n  type: \"ROIPooling\"\n  bottom: \"p5\"\n  bottom: \"rois/h5\"\n  top: \"roi_pool/h5\"\n  roi_pooling_param {\n    pooled_w: 7\n    pooled_h: 7\n    spatial_scale: 0.03125 # 1/32\n  }\n}\n\n\n\n\n#h2\nlayer {\n  name: \"rcnn_fc6/h2\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h2\"\n  top: \"rcnn_fc6/h2\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h2\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"rcnn_fc6/h2\"\n}\nlayer {\n  name: \"drop6/h2\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"rcnn_fc6/h2\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h2\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h2\"\n  top: \"fc7/h2\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  
\n    } \n  }\n}\nlayer {\n  name: \"relu7/h2\"\n  type: \"ReLU\"\n  bottom: \"fc7/h2\"\n  top: \"fc7/h2\"\n}\nlayer {\n  name: \"drop7/h2\"\n  type: \"Dropout\"\n  bottom: \"fc7/h2\"\n  top: \"fc7/h2\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h2\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h2\"\n  top: \"cls_score/h2\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h2\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h2\"\n  top: \"bbox_pred/h2\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\n\n#h3\nlayer {\n  name: \"rcnn_fc6/h3\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h3\"\n  top: \"rcnn_fc6/h3\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h3\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"rcnn_fc6/h3\"\n}\nlayer {\n  name: \"drop6/h3\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"rcnn_fc6/h3\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h3\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h3\"\n  top: \"fc7/h3\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n   
 weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h3\"\n  type: \"ReLU\"\n  bottom: \"fc7/h3\"\n  top: \"fc7/h3\"\n}\nlayer {\n  name: \"drop7/h3\"\n  type: \"Dropout\"\n  bottom: \"fc7/h3\"\n  top: \"fc7/h3\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h3\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h3\"\n  top: \"cls_score/h3\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h3\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h3\"\n  top: \"bbox_pred/h3\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\n\n\n#h4\nlayer {\n  name: \"rcnn_fc6/h4\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h4\"\n  top: \"rcnn_fc6/h4\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h4\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"rcnn_fc6/h4\"\n}\nlayer {\n  name: \"drop6/h4\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"rcnn_fc6/h4\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h4\"\n  type: \"InnerProduct\"\n  bottom: \"rcnn_fc6/h4\"\n  top: \"fc7/h4\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n 
 param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h4\"\n  type: \"ReLU\"\n  bottom: \"fc7/h4\"\n  top: \"fc7/h4\"\n}\nlayer {\n  name: \"drop7/h4\"\n  type: \"Dropout\"\n  bottom: \"fc7/h4\"\n  top: \"fc7/h4\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h4\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h4\"\n  top: \"cls_score/h4\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h4\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h4\"\n  top: \"bbox_pred/h4\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n#h5\nlayer {\n  name: \"rcnn_fc6/h5\"\n  type: \"InnerProduct\"\n  bottom: \"roi_pool/h5\"\n  top: \"rcnn_fc6/h5\"\n  param {\n    lr_mult: 1\n    name: \"rcnn_fc6_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"rcnn_fc6_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {\n      type: \"xavier\"\n    }\n    bias_filler {\n      type: \"constant\"\n    }\n  }\n}\nlayer {\n  name: \"relu6/h5\"\n  type: \"ReLU\"\n  bottom: \"rcnn_fc6/h5\"\n  top: \"rcnn_fc6/h5\"\n}\nlayer {\n  name: \"drop6/h5\"\n  type: \"Dropout\"\n  bottom: \"rcnn_fc6/h5\"\n  top: \"rcnn_fc6/h5\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"fc7/h5\"\n  type: \"InnerProduct\"\n  
bottom: \"rcnn_fc6/h5\"\n  top: \"fc7/h5\"\n  param {\n    lr_mult: 1\n    name:\"fc7_w\"\n  }\n  param {\n    lr_mult: 2\n    name: \"fc7_b\"\n  }\n  inner_product_param {\n    num_output: 4096\n    weight_filler {  \n    type: \"xavier\"  \n    }  \n    bias_filler {  \n      type: \"constant\"  \n    } \n  }\n}\nlayer {\n  name: \"relu7/h5\"\n  type: \"ReLU\"\n  bottom: \"fc7/h5\"\n  top: \"fc7/h5\"\n}\nlayer {\n  name: \"drop7/h5\"\n  type: \"Dropout\"\n  bottom: \"fc7/h5\"\n  top: \"fc7/h5\"\n  dropout_param {\n    dropout_ratio: 0.5\n  }\n}\nlayer {\n  name: \"cls_score/h5\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h5\"\n  top: \"cls_score/h5\"\n  param {\n    lr_mult: 1\n    name:\"cls_score_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"cls_score_b\"\n  }\n  inner_product_param {\n    num_output: 21\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.01\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\nlayer {\n  name: \"bbox_pred/h5\"\n  type: \"InnerProduct\"\n  bottom: \"fc7/h5\"\n  top: \"bbox_pred/h5\"\n  param {\n    lr_mult: 1\n    name:\"bbox_pred_w\"\n  }\n  param {\n    lr_mult: 2\n    name:\"bbox_pred_b\"\n  }\n  inner_product_param {\n    num_output: 84\n    weight_filler {\n      type: \"gaussian\"\n      std: 0.001\n    }\n    bias_filler {\n      type: \"constant\"\n      value: 0\n    }\n  }\n}\n\n\n\nlayer {\n  name: \"cls_score_concat\"\n  type: \"Concat\"\n  bottom: \"cls_score/h2\"\n  bottom: \"cls_score/h3\"\n  bottom: \"cls_score/h4\"\n  bottom: \"cls_score/h5\"\n  top: \"cls_score\"\n  concat_param {\n    axis: 0\n  }\n}\n\nlayer {\n  name: \"bbox_pred_concat\"\n  type: \"Concat\"\n  bottom: \"bbox_pred/h2\"\n  bottom: \"bbox_pred/h3\"\n  bottom: \"bbox_pred/h4\"\n  bottom: \"bbox_pred/h5\"\n  top: \"bbox_pred\"\n  concat_param {\n    axis: 0\n  }\n}\n\nlayer {\n  name: \"loss_cls\"\n  type: \"SoftmaxWithLoss\"\n  bottom: \"cls_score\"\n  bottom: \"labels\"\n  propagate_down: 1\n  
propagate_down: 0\n  top: \"RcnnLossCls\"\n  loss_weight: 1\n  loss_param {\n    ignore_label: -1\n    normalization: VALID\n  }\n}\nlayer {\n  name: \"loss_bbox\"\n  type: \"SmoothL1Loss\"\n  bottom: \"bbox_pred\"\n  bottom: \"bbox_targets\"\n  bottom: \"bbox_inside_weights\"\n  bottom: \"bbox_outside_weights\"\n  top: \"RcnnLossBBox\"\n  loss_weight: 1\n}\n"
  },
  {
    "path": "models/pascal_voc/FPN/FP_Net_end2end/train_mergercnn.prototxt",
    "content": "name: \"ResNet-50\"\r\nlayer {\r\n  name: 'input-data'\r\n  type: 'Python'\r\n  top: 'data'\r\n  top: 'im_info'\r\n  top: 'gt_boxes'\r\n  python_param {\r\n    module: 'roi_data_layer.layer'\r\n    layer: 'RoIDataLayer'\r\n    param_str: \"'num_classes': 21\"\r\n  }\r\n}\r\nlayer {\r\n\tbottom: \"data\"\r\n\ttop: \"conv1\"\r\n\tname: \"conv1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 7\r\n\t\tpad: 3\r\n\t\tstride: 2\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"bn_conv1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"scale_conv1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"conv1\"\r\n\tname: \"conv1_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"conv1\"\r\n\ttop: \"pool1\"\r\n\tname: \"pool1\"\r\n\ttype: \"Pooling\"\r\n\tpooling_param {\r\n\t\tkernel_size: 3\r\n\t\tstride: 2\r\n\t\tpool: MAX\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"pool1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"res2a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"bn2a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch1\"\r\n\ttop: \"res2a_branch1\"\r\n\tname: \"scale2a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"pool1\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"res2a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 
1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"bn2a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"scale2a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2a\"\r\n\tname: \"res2a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2a\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"res2a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"bn2a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"scale2a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2b\"\r\n\tname: \"res2a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2b\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"res2a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"bn2a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a_branch2c\"\r\n\tname: \"scale2a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer 
{\r\n\tbottom: \"res2a_branch1\"\r\n\tbottom: \"res2a_branch2c\"\r\n\ttop: \"res2a\"\r\n\tname: \"res2a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\ttop: \"res2a\"\r\n\tname: \"res2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"res2b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"bn2b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"scale2b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2a\"\r\n\tname: \"res2b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2a\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"res2b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"bn2b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"scale2b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2b\"\r\n\tname: \"res2b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2b\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"res2b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 
256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"bn2b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b_branch2c\"\r\n\tname: \"scale2b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2a\"\r\n\tbottom: \"res2b_branch2c\"\r\n\ttop: \"res2b\"\r\n\tname: \"res2b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\ttop: \"res2b\"\r\n\tname: \"res2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"res2c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"bn2c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"scale2c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2a\"\r\n\tname: \"res2c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2a\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"res2c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 64\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"bn2c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"scale2c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2b\"\r\n\tname: \"res2c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2b\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"res2c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"bn2c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c_branch2c\"\r\n\tname: \"scale2c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2b\"\r\n\tbottom: \"res2c_branch2c\"\r\n\ttop: \"res2c\"\r\n\tname: \"res2c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res2c\"\r\n\tname: \"res2c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"res3a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"bn3a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\ttop: \"res3a_branch1\"\r\n\tname: \"scale3a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"res3a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 
128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"bn3a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"scale3a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2a\"\r\n\tname: \"res3a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2a\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"res3a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"bn3a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"scale3a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2b\"\r\n\tname: \"res3a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2b\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"res3a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2c\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"bn3a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch2c\"\r\n\ttop: \"res3a_branch2c\"\r\n\tname: \"scale3a_branch2c\"\r\n\ttype: 
\"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a_branch1\"\r\n\tbottom: \"res3a_branch2c\"\r\n\ttop: \"res3a\"\r\n\tname: \"res3a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\ttop: \"res3a\"\r\n\tname: \"res3a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"res3b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"bn3b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"scale3b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2a\"\r\n\tname: \"res3b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2a\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"res3b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"bn3b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"scale3b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2b\"\r\n\tname: \"res3b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2b\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"res3b_branch2c\"\r\n\ttype: 
\"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"bn3b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b_branch2c\"\r\n\tname: \"scale3b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3a\"\r\n\tbottom: \"res3b_branch2c\"\r\n\ttop: \"res3b\"\r\n\tname: \"res3b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\ttop: \"res3b\"\r\n\tname: \"res3b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"res3c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"bn3c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"scale3c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2a\"\r\n\tname: \"res3c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2a\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"res3c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"bn3c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param 
{\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"scale3c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2b\"\r\n\tname: \"res3c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2b\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"res3c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"bn3c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c_branch2c\"\r\n\tname: \"scale3c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3b\"\r\n\tbottom: \"res3c_branch2c\"\r\n\ttop: \"res3c\"\r\n\tname: \"res3c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\ttop: \"res3c\"\r\n\tname: \"res3c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"res3d_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"bn3d_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: \"scale3d_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2a\"\r\n\tname: 
\"res3d_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2a\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"res3d_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 128\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"bn3d_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"scale3d_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2b\"\r\n\tname: \"res3d_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2b\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"res3d_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"bn3d_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d_branch2c\"\r\n\tname: \"scale3d_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3c\"\r\n\tbottom: \"res3d_branch2c\"\r\n\ttop: \"res3d\"\r\n\tname: \"res3d\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res3d\"\r\n\tname: \"res3d_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"res4a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: 
false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"bn4a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\ttop: \"res4a_branch1\"\r\n\tname: \"scale4a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"res4a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"bn4a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"scale4a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2a\"\r\n\tname: \"res4a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2a\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"res4a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"bn4a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"scale4a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2b\"\r\n\ttop: \"res4a_branch2b\"\r\n\tname: \"res4a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res4a_branch2b\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"res4a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"bn4a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a_branch2c\"\r\n\tname: \"scale4a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a_branch1\"\r\n\tbottom: \"res4a_branch2c\"\r\n\ttop: \"res4a\"\r\n\tname: \"res4a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\ttop: \"res4a\"\r\n\tname: \"res4a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"res4b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"bn4b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"scale4b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2a\"\r\n\tname: \"res4b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2a\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"res4b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: 
\"res4b_branch2b\"\r\n\tname: \"bn4b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"scale4b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2b\"\r\n\tname: \"res4b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2b\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"res4b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"bn4b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b_branch2c\"\r\n\tname: \"scale4b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4a\"\r\n\tbottom: \"res4b_branch2c\"\r\n\ttop: \"res4b\"\r\n\tname: \"res4b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\ttop: \"res4b\"\r\n\tname: \"res4b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"res4c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"bn4c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"scale4c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: 
true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2a\"\r\n\tname: \"res4c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2a\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"res4c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"bn4c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"scale4c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2b\"\r\n\tname: \"res4c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2b\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"res4c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"bn4c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c_branch2c\"\r\n\tname: \"scale4c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4b\"\r\n\tbottom: \"res4c_branch2c\"\r\n\ttop: \"res4c\"\r\n\tname: \"res4c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\ttop: \"res4c\"\r\n\tname: \"res4c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"res4d_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param 
{\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"bn4d_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"scale4d_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2a\"\r\n\tname: \"res4d_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2a\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"res4d_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"bn4d_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"scale4d_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2b\"\r\n\tname: \"res4d_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2b\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"res4d_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"bn4d_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d_branch2c\"\r\n\tname: \"scale4d_branch2c\"\r\n\ttype: 
\"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4c\"\r\n\tbottom: \"res4d_branch2c\"\r\n\ttop: \"res4d\"\r\n\tname: \"res4d\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\ttop: \"res4d\"\r\n\tname: \"res4d_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"res4e_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"bn4e_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"scale4e_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2a\"\r\n\tname: \"res4e_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2a\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"res4e_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"bn4e_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"scale4e_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2b\"\r\n\tname: \"res4e_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2b\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"res4e_branch2c\"\r\n\ttype: 
\"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"bn4e_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e_branch2c\"\r\n\tname: \"scale4e_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4d\"\r\n\tbottom: \"res4e_branch2c\"\r\n\ttop: \"res4e\"\r\n\tname: \"res4e\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\ttop: \"res4e\"\r\n\tname: \"res4e_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"res4f_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"bn4f_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"scale4f_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2a\"\r\n\tname: \"res4f_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2a\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"res4f_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"bn4f_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param 
{\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"scale4f_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2b\"\r\n\tname: \"res4f_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2b\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"res4f_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 1024\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"bn4f_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f_branch2c\"\r\n\tname: \"scale4f_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4e\"\r\n\tbottom: \"res4f_branch2c\"\r\n\ttop: \"res4f\"\r\n\tname: \"res4f\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res4f\"\r\n\tname: \"res4f_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"res5a_branch1\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch1\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"bn5a_branch1\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch1\"\r\n\ttop: \"res5a_branch1\"\r\n\tname: \"scale5a_branch1\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: 
\"res5a_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 2\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"bn5a_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"scale5a_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2a\"\r\n\tname: \"res5a_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2a\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"res5a_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"bn5a_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"scale5a_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2b\"\r\n\tname: \"res5a_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2b\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"res5a_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch2c\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"bn5a_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res5a_branch2c\"\r\n\ttop: \"res5a_branch2c\"\r\n\tname: \"scale5a_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a_branch1\"\r\n\tbottom: \"res5a_branch2c\"\r\n\ttop: \"res5a\"\r\n\tname: \"res5a\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\ttop: \"res5a\"\r\n\tname: \"res5a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"res5b_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"bn5b_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"scale5b_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2a\"\r\n\tname: \"res5b_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2a\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"res5b_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"bn5b_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"scale5b_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2b\"\r\n\ttop: \"res5b_branch2b\"\r\n\tname: \"res5b_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: 
\"res5b_branch2b\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"res5b_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"bn5b_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b_branch2c\"\r\n\tname: \"scale5b_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5a\"\r\n\tbottom: \"res5b_branch2c\"\r\n\ttop: \"res5b\"\r\n\tname: \"res5b\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\ttop: \"res5b\"\r\n\tname: \"res5b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"res5c_branch2a\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"bn5c_branch2a\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"scale5c_branch2a\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2a\"\r\n\tname: \"res5c_branch2a_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2a\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"res5c_branch2b\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 512\r\n\t\tkernel_size: 3\r\n\t\tpad: 1\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: 
\"res5c_branch2b\"\r\n\tname: \"bn5c_branch2b\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"scale5c_branch2b\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2b\"\r\n\tname: \"res5c_branch2b_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2b\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"res5c_branch2c\"\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 2048\r\n\t\tkernel_size: 1\r\n\t\tpad: 0\r\n\t\tstride: 1\r\n\t\tbias_term: false\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"bn5c_branch2c\"\r\n\ttype: \"BatchNorm\"\r\n\tbatch_norm_param {\r\n\t\tuse_global_stats: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c_branch2c\"\r\n\tname: \"scale5c_branch2c\"\r\n\ttype: \"Scale\"\r\n\tscale_param {\r\n\t\tbias_term: true\r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5b\"\r\n\tbottom: \"res5c_branch2c\"\r\n\ttop: \"res5c\"\r\n\tname: \"res5c\"\r\n\ttype: \"Eltwise\"\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: \"res5c\"\r\n\tname: \"res5c_relu\"\r\n\ttype: \"ReLU\"\r\n}\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: \"res6\"\r\n\tname: \"pool_res6\"\r\n\ttype: \"Pooling\"\r\n\tpooling_param {\r\n\t\tkernel_size: 3\r\n\t\tstride: 2\r\n\t\tpool: MAX\r\n\t}\r\n}\r\n####lateral\r\n\r\nlayer {\r\n\tbottom: \"res6\"\r\n\ttop: \"p6\"\r\n\tname: \"p6\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res5c\"\r\n\ttop: 
\"p5\"\r\n\tname: \"p5\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n    name: \"upP5\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p5\" \r\n\ttop: \"upP5\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\n\r\nlayer {\r\n\tbottom: \"res4f\"\r\n\ttop: \"c4\"\r\n\tname: \"newC4\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n    name: \"p4\"\r\n    type: \"Eltwise\"\r\n    bottom: \"c4\"\r\n    bottom: \"upP5\"\r\n    top: \"p4\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"p4\"\r\n\ttop: \"p4_lateral\"\r\n\tname: \"p4_lateral\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"upP4\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p4_lateral\" \r\n\ttop: \"upP4\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 
256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"res3d\"\r\n\ttop: \"c3\"\r\n\tname: \"newC3\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"p3\"\r\n    type: \"Eltwise\"\r\n    bottom: \"c3\"\r\n    bottom: \"upP4\"\r\n    top: \"p3\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\nlayer {\r\n\tbottom: \"p3\"\r\n\ttop: \"p3_lateral\"\r\n\tname: \"p3_lateral\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\n\r\nlayer {\r\n\tbottom: \"res2c\"\r\n\ttop: \"c2\"\r\n\tname: \"newC2\"\r\n\tparam {\r\n\t\tlr_mult: 1.0\r\n\t}\r\n\tparam {\r\n\t\tlr_mult: 2.0\r\n\t}\r\n\ttype: \"Convolution\"\r\n\tconvolution_param {\r\n\t\tnum_output: 256\r\n\t\tkernel_size: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0.0 }\r\n        \r\n\t}\r\n}\r\nlayer {\r\n    name: \"upP2\"\r\n\ttype: \"Deconvolution\"\r\n    bottom: \"p3_lateral\" \r\n\ttop: \"upP2\"\r\n    convolution_param {\r\n    kernel_h : 4\r\n    kernel_w : 4\r\n    stride_h: 2\r\n    stride_w: 2\r\n    pad_h: 1\r\n    pad_w: 1\r\n    num_output: 256\r\n    group: 256\r\n    bias_term: false\r\n     weight_filler {\r\n      type: \"bilinear\"\r\n    }\r\n  }\r\n  param { lr_mult: 0 decay_mult: 0 } \r\n}\r\nlayer {\r\n    name: \"p2\"\r\n    type: 
\"Eltwise\"\r\n    bottom: \"c2\"\r\n    bottom: \"upP2\"\r\n    top: \"p2\"\r\n    eltwise_param {\r\n        operation: SUM\r\n    }\r\n}\r\n\r\n\r\n\r\n\r\n####\r\n\r\n#========= RPN/p2 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"p2\"\r\n  top: \"rpn/output/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\t  name: \"rpn_conv_3x3_w\"\r\n          }\r\n  param { lr_mult: 2.0\r\n  \t\t  name: \"rpn_conv_3x3_b\"\r\n          }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p2\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn/output/p2\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn_cls_score/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\tname: \"rpn_cls_score_w\" }\r\n  param { lr_mult: 2.0\r\n \t\tname: \"rpn_cls_score_b\"\r\n        }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors: 3 ratios x 2 scales)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p2\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p2\"\r\n  top: \"rpn_bbox_pred/p2\"\r\n  param { lr_mult: 1.0\r\n  \t\tname: \"rpn_bbox_pred_w\"\r\n        }\r\n  param { lr_mult: 2.0\r\n  name: \"rpn_bbox_pred_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p2\"\r\n   top: \"rpn_cls_score_reshape_/p2\"\r\n   name: \"rpn_cls_score_reshape_/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { 
shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\nlayer {\r\n   bottom: \"rpn_bbox_pred/p2\"\r\n   top: \"rpn_bbox_pred_reshape/p2\"\r\n   name: \"rpn_bbox_pred_reshape/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\r\n}\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score_reshape_/p2\"\r\n   top: \"rpn_cls_score_reshape/p2\"\r\n   name: \"rpn_cls_score_reshape/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\r\n}\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p2\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p2\"\r\n  top: \"fpn_out/p2\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p2\"\r\n   top: \"fpn_out_reshape/p2\"\r\n   name: \"fpn_out_reshape/p2\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n#========= RPN/p3 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"p3\"\r\n  top: \"rpn/output/p3\"\r\n  param { lr_mult: 1.0\r\n        name: \"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0 \r\n   name: \"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p3\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn/output/p3\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn_cls_score/p3\"\r\n  param { lr_mult: 1.0 \r\n  name: \"rpn_cls_score_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name: \"rpn_cls_score_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  
}\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p3\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p3\"\r\n  top: \"rpn_bbox_pred/p3\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n   name:\"rpn_bbox_pred_b\" \r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p3\"\r\n   top: \"rpn_cls_score_reshape_/p3\"\r\n   name: \"rpn_cls_score_reshape_/p3\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\nlayer {\r\n   bottom: \"rpn_bbox_pred/p3\"\r\n   top: \"rpn_bbox_pred_reshape/p3\"\r\n   name: \"rpn_bbox_pred_reshape/p3\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\r\n}\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score_reshape_/p3\"\r\n   top: \"rpn_cls_score_reshape/p3\"\r\n   name: \"rpn_cls_score_reshape/p3\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\r\n}\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p3\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p3\"\r\n  top: \"fpn_out/p3\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p3\"\r\n   top: \"fpn_out_reshape/p3\"\r\n   name: \"fpn_out_reshape/p3\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n\r\n#========= RPN/p4 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"p4\"\r\n  top: \"rpn/output/p4\"\r\n  param { lr_mult: 1.0\r\n  name: \"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0 \r\n    name: \"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    
bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p4\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn/output/p4\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn_cls_score/p4\"\r\n  param { lr_mult: 1.0 \r\n  name:\"rpn_cls_score_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p4\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p4\"\r\n  top: \"rpn_bbox_pred/p4\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 24   # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.001 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p4\"\r\n   top: \"rpn_cls_score_reshape_/p4\"\r\n   name: \"rpn_cls_score_reshape_/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\nlayer {\r\n   bottom: \"rpn_bbox_pred/p4\"\r\n   top: \"rpn_bbox_pred_reshape/p4\"\r\n   name: \"rpn_bbox_pred_reshape/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\r\n}\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score_reshape_/p4\"\r\n   top: \"rpn_cls_score_reshape/p4\"\r\n   name: \"rpn_cls_score_reshape/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\r\n}\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p4\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p4\"\r\n  top: 
\"fpn_out/p4\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p4\"\r\n   top: \"fpn_out_reshape/p4\"\r\n   name: \"fpn_out_reshape/p4\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n\r\n\r\n\r\n#========= RPN/p5 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"p5\"\r\n  top: \"rpn/output/p5\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p5\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn/output/p5\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn_cls_score/p5\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_cls_score_w\"\r\n  \r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p5\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p5\"\r\n  top: \"rpn_bbox_pred/p5\"\r\n  param { lr_mult: 1.0 \r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 24  # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p5\"\r\n   top: \"rpn_cls_score_reshape_/p5\"\r\n   name: \"rpn_cls_score_reshape_/p5\"\r\n   type: 
\"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\nlayer {\r\n   bottom: \"rpn_bbox_pred/p5\"\r\n   top: \"rpn_bbox_pred_reshape/p5\"\r\n   name: \"rpn_bbox_pred_reshape/p5\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\r\n}\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score_reshape_/p5\"\r\n   top: \"rpn_cls_score_reshape/p5\"\r\n   name: \"rpn_cls_score_reshape/p5\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\r\n}\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p5\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p5\"\r\n  top: \"fpn_out/p5\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p5\"\r\n   top: \"fpn_out_reshape/p5\"\r\n   name: \"fpn_out_reshape/p5\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n#========= RPN/p6 ============\r\n\r\nlayer {\r\n  name: \"rpn_conv/3x3/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"p6\"\r\n  top: \"rpn/output/p6\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_conv_3x3_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_conv_3x3_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 512\r\n    kernel_size: 3 pad: 1 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\nlayer {\r\n  name: \"rpn_relu/3x3/p6\"\r\n  type: \"ReLU\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn/output/p6\"\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_cls_score/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn_cls_score/p6\"\r\n  param { lr_mult: 1.0\r\n  name:\"rpn_cls_score_w\"\r\n  \r\n  }\r\n  param { lr_mult: 2.0\r\n  name:\"rpn_cls_score_b\"\r\n  }\r\n  convolution_param {\r\n    num_output: 12   # 2(bg/fg) * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 
}\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_bbox_pred/p6\"\r\n  type: \"Convolution\"\r\n  bottom: \"rpn/output/p6\"\r\n  top: \"rpn_bbox_pred/p6\"\r\n  param { lr_mult: 1.0 \r\n  name:\"rpn_bbox_pred_w\"\r\n  }\r\n  param { lr_mult: 2.0\r\n    name:\"rpn_bbox_pred_b\"\r\n    }\r\n  convolution_param {\r\n    num_output: 24  # 4 * 6(anchors)\r\n    kernel_size: 1 pad: 0 stride: 1\r\n    weight_filler { type: \"gaussian\" std: 0.01 }\r\n    bias_filler { type: \"constant\" value: 0 }\r\n  }\r\n}\r\n\r\n\r\n######\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score/p6\"\r\n   top: \"rpn_cls_score_reshape_/p6\"\r\n   name: \"rpn_cls_score_reshape_/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 dim:0} }\r\n}\r\n\r\n\r\nlayer {\r\n   bottom: \"rpn_bbox_pred/p6\"\r\n   top: \"rpn_bbox_pred_reshape/p6\"\r\n   name: \"rpn_bbox_pred_reshape/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape { dim: 0 dim: 0 dim: -1 } }\r\n}\r\n\r\nlayer {\r\n   bottom: \"rpn_cls_score_reshape_/p6\"\r\n   top: \"rpn_cls_score_reshape/p6\"\r\n   name: \"rpn_cls_score_reshape/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 2 dim: -1 } }\r\n}\r\n\r\n\r\n\r\n\r\n#####CLS out\r\n\r\n\r\nlayer {\r\n  name: \"fpn_out/p6\"\r\n  type: \"Softmax\"\r\n  bottom: \"rpn_cls_score_reshape_/p6\"\r\n  top: \"fpn_out/p6\"\r\n}\r\n\r\nlayer {\r\n   bottom: \"fpn_out/p6\"\r\n   top: \"fpn_out_reshape/p6\"\r\n   name: \"fpn_out_reshape/p6\"\r\n   type: \"Reshape\"\r\n   reshape_param { shape {dim: 0 dim: 12 dim: -1 dim: 0  } }\r\n}\r\n\r\n########rpn loss#####################\r\n\r\nlayer {\r\n  name: \"rpn_cls_score_reshape\"\r\n  type: \"Concat\"\r\n  bottom: \"rpn_cls_score_reshape/p2\"\r\n  bottom: \"rpn_cls_score_reshape/p3\"\r\n  bottom: \"rpn_cls_score_reshape/p4\"\r\n  bottom: \"rpn_cls_score_reshape/p5\"\r\n  bottom: \"rpn_cls_score_reshape/p6\"\r\n  top: \"rpn_cls_score_reshape\"\r\n  concat_param {\r\n    axis: 2\r\n  }\r\n}\r\n\r\n\r\n\r\nlayer 
{\r\n  name: \"rpn_bbox_pred\"\r\n  type: \"Concat\"\r\n  bottom: \"rpn_bbox_pred_reshape/p2\"\r\n  bottom: \"rpn_bbox_pred_reshape/p3\"\r\n  bottom: \"rpn_bbox_pred_reshape/p4\"\r\n  bottom: \"rpn_bbox_pred_reshape/p5\"\r\n  bottom: \"rpn_bbox_pred_reshape/p6\"\r\n  top: \"rpn_bbox_pred\"\r\n  concat_param {\r\n    axis: 2\r\n  }\r\n}\r\n\r\n\r\n\r\nlayer {\r\n  name: 'rpn-data'\r\n  type: 'Python'\r\n  bottom: 'rpn_cls_score/p2'\r\n  bottom: 'rpn_cls_score/p3'\r\n  bottom: 'rpn_cls_score/p4'\r\n  bottom: 'rpn_cls_score/p5'\r\n  bottom: 'rpn_cls_score/p6'\r\n  bottom: 'gt_boxes'\r\n  bottom: 'im_info'\r\n  top: 'rpn_labels'\r\n  top: 'rpn_bbox_targets'\r\n  top: 'rpn_bbox_inside_weights'\r\n  top: 'rpn_bbox_outside_weights'\r\n  python_param {\r\n    module: 'rpn.anchor_target_layer'\r\n    layer: 'AnchorTargetLayer'\r\n    param_str: \"'feat_stride': 4,8,16,32,64\"\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"fpn_loss_cls\"\r\n  type: \"SoftmaxWithLoss\"\r\n  bottom: \"rpn_cls_score_reshape\"\r\n  bottom: \"rpn_labels\"\r\n  propagate_down: 1\r\n  propagate_down: 0\r\n  top: \"FPNClsLoss\"\r\n  loss_weight: 1\r\n  loss_param {\r\n    ignore_label: -1\r\n    normalization: VALID\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"rpn_loss_bbox\"\r\n  type: \"SmoothL1Loss\"\r\n  bottom: \"rpn_bbox_pred\"\r\n  bottom: \"rpn_bbox_targets\"\r\n  bottom: 'rpn_bbox_inside_weights'\r\n  bottom: 'rpn_bbox_outside_weights'\r\n  top: \"FPNLossBBox\"\r\n  loss_weight: 1\r\n  smooth_l1_loss_param { sigma: 3.0 }\r\n}\r\n\r\n#========= RoI Proposal ============\r\n\r\n\r\n\r\n \r\n\r\nlayer {\r\n  name: 'proposal'\r\n  type: 'Python'\r\n    bottom: 'im_info'\r\n    bottom: 'rpn_bbox_pred/p2'\r\n    bottom: 'rpn_bbox_pred/p3'\r\n\tbottom: 'rpn_bbox_pred/p4'\r\n\tbottom: 'rpn_bbox_pred/p5'\r\n\tbottom: 'rpn_bbox_pred/p6'\r\n \tbottom: 'fpn_out_reshape/p2'\r\n\tbottom: 'fpn_out_reshape/p3'\r\n\tbottom: 'fpn_out_reshape/p4'\r\n\tbottom: 'fpn_out_reshape/p5'\r\n\tbottom: 'fpn_out_reshape/p6'\r\n  
top: 'rpn_rois'\r\n  python_param {\r\n    module: 'rpn.proposal_layer'\r\n    layer: 'ProposalLayer'\r\n    param_str: \"'feat_stride': 4,8,16,32,64\"\r\n\r\n  }\r\n}\r\n\r\n\r\n\r\n#================rois process======================\r\n\r\nlayer {\r\n  name: 'roi-data'\r\n  type: 'Python'\r\n  bottom: 'rpn_rois'\r\n  bottom: 'gt_boxes'\r\n\tbottom: 'data'\r\n  top: 'rois/h2'\r\n  top: 'rois/h3'\r\n  top: 'rois/h4'\r\n  top: 'rois/h5'\r\n  top: 'labels'\r\n  top: 'bbox_targets'\r\n  top: 'bbox_inside_weights'\r\n  top: 'bbox_outside_weights'\r\n  python_param {\r\n    module: 'rpn.proposal_target_layer_mrcnn'\r\n    layer: 'ProposalMergeRcnnTargetLayer'\r\n    param_str: \"'num_classes': 21\"\r\n  }\r\n}\r\n\r\n#========= RCNN ============\r\n\r\n######POOLING=======\r\nlayer {\r\n  name: \"roi_pool/h2\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p2\"\r\n  bottom: \"rois/h2\"\r\n  top: \"roi_pool/h2\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.25 # 1/4\r\n  }\r\n}\r\n\r\n\r\nlayer {\r\n  name: \"roi_pool/h3\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p3\"\r\n  bottom: \"rois/h3\"\r\n  top: \"roi_pool/h3\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.125 # 1/8\r\n  }\r\n}\r\nlayer {\r\n  name: \"roi_pool/h4\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p4\"\r\n  bottom: \"rois/h4\"\r\n  top: \"roi_pool/h4\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.0625 # 1/16\r\n  }\r\n}\r\n\r\nlayer {\r\n  name: \"roi_pool/h5\"\r\n  type: \"ROIPooling\"\r\n  bottom: \"p5\"\r\n  bottom: \"rois/h5\"\r\n  top: \"roi_pool/h5\"\r\n  roi_pooling_param {\r\n    pooled_w: 7\r\n    pooled_h: 7\r\n    spatial_scale: 0.03125 # 1/32\r\n  }\r\n}\r\n\r\n\r\nlayer {\r\n  name: \"roi_pool_concat\"\r\n  type: \"Concat\"\r\n  bottom: \"roi_pool/h2\"\r\n  bottom: \"roi_pool/h3\"\r\n  bottom: \"roi_pool/h4\"\r\n  bottom: \"roi_pool/h5\"\r\n  top: \"roi_pool\"\r\n  concat_param 
{\r\n    axis: 0\r\n  }\r\n}\r\n\r\n#h2\r\nlayer {\r\n  name: \"rcnn_fc6\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"roi_pool\"\r\n  top: \"rcnn_fc6\"\r\n  param {\r\n    lr_mult: 1\r\n    name: \"rcnn_fc6_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name: \"rcnn_fc6_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 1024\r\n    weight_filler {\r\n      type: \"xavier\"\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n    }\r\n  }\r\n}\r\nlayer {\r\n  name: \"relu6\"\r\n  type: \"ReLU\"\r\n  bottom: \"rcnn_fc6\"\r\n  top: \"rcnn_fc6\"\r\n}\r\n\r\nlayer {\r\n  name: \"fc7\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"rcnn_fc6\"\r\n  top: \"fc7\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"fc7_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name: \"fc7_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 1024\r\n    weight_filler { \r\n    type: \"xavier\"  \r\n    }  \r\n    bias_filler {  \r\n      type: \"constant\"  \r\n    } \r\n  }\r\n}\r\nlayer {\r\n  name: \"relu7\"\r\n  type: \"ReLU\"\r\n  bottom: \"fc7\"\r\n  top: \"fc7\"\r\n}\r\n\r\nlayer {\r\n  name: \"cls_score\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"fc7\"\r\n  top: \"cls_score\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"cls_score_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name:\"cls_score_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 21\r\n    weight_filler {\r\n      type: \"gaussian\"\r\n      std: 0.01\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n      value: 0\r\n    }\r\n  }\r\n}\r\nlayer {\r\n  name: \"bbox_pred\"\r\n  type: \"InnerProduct\"\r\n  bottom: \"fc7\"\r\n  top: \"bbox_pred\"\r\n  param {\r\n    lr_mult: 1\r\n    name:\"bbox_pred_w\"\r\n  }\r\n  param {\r\n    lr_mult: 2\r\n    name:\"bbox_pred_b\"\r\n  }\r\n  inner_product_param {\r\n    num_output: 84\r\n    weight_filler {\r\n      type: \"gaussian\"\r\n      std: 0.001\r\n    }\r\n    bias_filler {\r\n      type: \"constant\"\r\n      value: 0\r\n    }\r\n  
}\r\n}\r\n\r\n\r\n\r\n\r\nlayer {\r\n  name: \"loss_cls\"\r\n  type: \"SoftmaxWithLoss\"\r\n  bottom: \"cls_score\"\r\n  bottom: \"labels\"\r\n  propagate_down: 1\r\n  propagate_down: 0\r\n  top: \"RcnnLossCls\"\r\n  loss_weight: 1\r\n  loss_param {\r\n    ignore_label: -1\r\n    normalization: VALID\r\n  }\r\n}\r\nlayer {\r\n  name: \"loss_bbox\"\r\n  type: \"SmoothL1Loss\"\r\n  bottom: \"bbox_pred\"\r\n  bottom: \"bbox_targets\"\r\n  bottom: \"bbox_inside_weights\"\r\n  bottom: \"bbox_outside_weights\"\r\n  top: \"RcnnLossBBox\"\r\n  loss_weight: 1\r\n}\r\n"
  },
  {
    "path": "output/output.md",
    "content": ""
  },
  {
    "path": "test.sh",
    "content": "ge:\n# ./experiments/scripts/faster_rcnn_end2end.sh GPU NET DATASET [options args to {train,test}_net.py]\n# DATASET is either pascal_voc or coco.\n#\n# Example:\n# ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc \\\n#   --set EXP_DIR foobar RNG_SEED 42 TRAIN.SCALES \"[400, 500, 600, 700]\"\n\nset -x\nset -e\n\nexport PYTHONUNBUFFERED=\"True\"\n\nGPU_ID=$1\nNET=$2\nNET_lc=${NET,,}\nDATASET=$3\n\narray=( $@ )\nlen=${#array[@]}\nEXTRA_ARGS=${array[@]:3:$len}\nEXTRA_ARGS_SLUG=${EXTRA_ARGS// /_}\n\ncase $DATASET in\n  pascal_voc)\n    TRAIN_IMDB=\"voc_2007_trainval\"\n    TEST_IMDB=\"voc_2007_test\"\n    PT_DIR=\"pascal_voc\"\n    ITERS=70000\n    ;;\n  coco)\n    # This is a very long and slow training schedule\n    # You can probably use fewer iterations and reduce the\n    # time to the LR drop (set in the solver to 350,000 iterations).\n    TRAIN_IMDB=\"coco_2014_train\"\n    TEST_IMDB=\"coco_2014_minival\"\n    PT_DIR=\"coco\"\n    ITERS=490000\n    ;;\n  *)\n    echo \"No dataset given\"\n    exit\n    ;;\nesac\n\n\n\n\nNET_FINAL=\"./data/pretrained_model/ResNet50.v2.caffemodel\"\n\n\ntime ./tools/test_net.py --gpu ${GPU_ID} \\\n  --def models/${PT_DIR}/${NET}/FP_Net_end2end/test.prototxt \\\n  --net output/FP_Net_end2end/voc_2007_trainval/fpn_iter_60000.caffemodel \\\n  --imdb ${TEST_IMDB} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n\n"
  },
  {
    "path": "test_mergercnn.sh",
    "content": "ge:\n# ./experiments/scripts/faster_rcnn_end2end.sh GPU NET DATASET [options args to {train,test}_net.py]\n# DATASET is either pascal_voc or coco.\n#\n# Example:\n# ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc \\\n#   --set EXP_DIR foobar RNG_SEED 42 TRAIN.SCALES \"[400, 500, 600, 700]\"\n\nset -x\nset -e\n\nexport PYTHONUNBUFFERED=\"True\"\n\nGPU_ID=$1\nNET=$2\nNET_lc=${NET,,}\nDATASET=$3\n\narray=( $@ )\nlen=${#array[@]}\nEXTRA_ARGS=${array[@]:3:$len}\nEXTRA_ARGS_SLUG=${EXTRA_ARGS// /_}\n\ncase $DATASET in\n  pascal_voc)\n    TRAIN_IMDB=\"voc_2007_trainval\"\n    TEST_IMDB=\"voc_2007_test\"\n    PT_DIR=\"pascal_voc\"\n    ITERS=70000\n    ;;\n  coco)\n    # This is a very long and slow training schedule\n    # You can probably use fewer iterations and reduce the\n    # time to the LR drop (set in the solver to 350,000 iterations).\n    TRAIN_IMDB=\"coco_2014_train\"\n    TEST_IMDB=\"coco_2014_minival\"\n    PT_DIR=\"coco\"\n    ITERS=490000\n    ;;\n  *)\n    echo \"No dataset given\"\n    exit\n    ;;\nesac\n\n\n\n\nNET_FINAL=\"./data/pretrained_model/ResNet50.v2.caffemodel\"\n\n\ntime ./tools/test_net.py --gpu ${GPU_ID} \\\n  --def models/${PT_DIR}/${NET}/FP_Net_end2end/test_mergercnn.prototxt \\\n  --net output/FP_Net_end2end/voc_2007_trainval/fpn_iter_60000.caffemodel \\\n  --imdb ${TEST_IMDB} \\\n  --cfg experiments/cfgs/FP_Net_end2end.yml \\\n  ${EXTRA_ARGS}\n\n"
  },
  {
    "path": "tools/README.md",
    "content": "Tools for training, testing, and compressing Fast R-CNN networks.\n"
  },
  {
    "path": "tools/_init_paths.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Set up paths for Fast R-CNN.\"\"\"\n\nimport os.path as osp\nimport sys\n\ndef add_path(path):\n    if path not in sys.path:\n        sys.path.insert(0, path)\n\nthis_dir = osp.dirname(__file__)\n\n# Add caffe to PYTHONPATH\ncaffe_path = osp.join(this_dir, '..', 'caffe-fpn', 'python')\nadd_path(caffe_path)\n\n# Add lib to PYTHONPATH\nlib_path = osp.join(this_dir, '..', 'lib')\nadd_path(lib_path)\n"
  },
  {
    "path": "tools/compress_net.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Compress a Fast R-CNN network using truncated SVD.\"\"\"\n\nimport _init_paths\nimport caffe\nimport argparse\nimport numpy as np\nimport os, sys\n\ndef parse_args():\n    \"\"\"Parse input arguments.\"\"\"\n    parser = argparse.ArgumentParser(description='Compress a Fast R-CNN network')\n    parser.add_argument('--def', dest='prototxt',\n                        help='prototxt file defining the uncompressed network',\n                        default=None, type=str)\n    parser.add_argument('--def-svd', dest='prototxt_svd',\n                        help='prototxt file defining the SVD compressed network',\n                        default=None, type=str)\n    parser.add_argument('--net', dest='caffemodel',\n                        help='model to compress',\n                        default=None, type=str)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\ndef compress_weights(W, l):\n    \"\"\"Compress the weight matrix W of an inner product (fully connected) layer\n    using truncated SVD.\n\n    Parameters:\n    W: N x M weights matrix\n    l: number of singular values to retain\n\n    Returns:\n    Ul, L: matrices such that W \\approx Ul*L\n    \"\"\"\n\n    # numpy doesn't seem to have a fast truncated SVD algorithm...\n    # this could be faster\n    U, s, V = np.linalg.svd(W, full_matrices=False)\n\n    Ul = U[:, :l]\n    sl = s[:l]\n    Vl = V[:l, :]\n\n    L = np.dot(np.diag(sl), Vl)\n    return Ul, L\n\ndef main():\n    args = parse_args()\n\n    # prototxt = 'models/VGG16/test.prototxt'\n    # caffemodel = 'snapshots/vgg16_fast_rcnn_iter_40000.caffemodel'\n    net = 
caffe.Net(args.prototxt, args.caffemodel, caffe.TEST)\n\n    # prototxt_svd = 'models/VGG16/svd/test_fc6_fc7.prototxt'\n    # caffemodel = 'snapshots/vgg16_fast_rcnn_iter_40000.caffemodel'\n    net_svd = caffe.Net(args.prototxt_svd, args.caffemodel, caffe.TEST)\n\n    print('Uncompressed network {} : {}'.format(args.prototxt, args.caffemodel))\n    print('Compressed network prototxt {}'.format(args.prototxt_svd))\n\n    out = os.path.splitext(os.path.basename(args.caffemodel))[0] + '_svd'\n    out_dir = os.path.dirname(args.caffemodel)\n\n    # Compress fc6\n    if 'fc6_L' in net_svd.params:\n        l_fc6 = net_svd.params['fc6_L'][0].data.shape[0]\n        print('  fc6_L bottleneck size: {}'.format(l_fc6))\n\n        # uncompressed weights and biases\n        W_fc6 = net.params['fc6'][0].data\n        B_fc6 = net.params['fc6'][1].data\n\n        print('  compressing fc6...')\n        Ul_fc6, L_fc6 = compress_weights(W_fc6, l_fc6)\n\n        assert(len(net_svd.params['fc6_L']) == 1)\n\n        # install compressed matrix factors (and original biases)\n        net_svd.params['fc6_L'][0].data[...] = L_fc6\n\n        net_svd.params['fc6_U'][0].data[...] = Ul_fc6\n        net_svd.params['fc6_U'][1].data[...] = B_fc6\n\n        out += '_fc6_{}'.format(l_fc6)\n\n    # Compress fc7\n    if 'fc7_L' in net_svd.params:\n        l_fc7 = net_svd.params['fc7_L'][0].data.shape[0]\n        print('  fc7_L bottleneck size: {}'.format(l_fc7))\n\n        W_fc7 = net.params['fc7'][0].data\n        B_fc7 = net.params['fc7'][1].data\n\n        print('  compressing fc7...')\n        Ul_fc7, L_fc7 = compress_weights(W_fc7, l_fc7)\n\n        assert(len(net_svd.params['fc7_L']) == 1)\n\n        net_svd.params['fc7_L'][0].data[...] = L_fc7\n\n        net_svd.params['fc7_U'][0].data[...] = Ul_fc7\n        net_svd.params['fc7_U'][1].data[...] 
= B_fc7\n\n        out += '_fc7_{}'.format(l_fc7)\n\n    filename = '{}/{}.caffemodel'.format(out_dir, out)\n    net_svd.save(filename)\n    print('Wrote svd model to: {:s}'.format(filename))\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "tools/demo.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"\nDemo script showing detections in sample images.\n\nSee README.md for installation instructions before running.\n\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.config import cfg\nfrom fast_rcnn.test import im_detect\nfrom fast_rcnn.nms_wrapper import nms\nfrom utils.timer import Timer\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.io as sio\nimport caffe, os, sys, cv2\nimport argparse\n\nCLASSES = ('__background__',\n           'aeroplane', 'bicycle', 'bird', 'boat',\n           'bottle', 'bus', 'car', 'cat', 'chair',\n           'cow', 'diningtable', 'dog', 'horse',\n           'motorbike', 'person', 'pottedplant',\n           'sheep', 'sofa', 'train', 'tvmonitor')\n\nNETS = {'vgg16': ('VGG16',\n                  'VGG16_faster_rcnn_final.caffemodel'),\n        'zf': ('ZF',\n                  'ZF_faster_rcnn_final.caffemodel')}\n\n\ndef vis_detections(im, class_name, dets, thresh=0.5):\n    \"\"\"Draw detected bounding boxes.\"\"\"\n    inds = np.where(dets[:, -1] >= thresh)[0]\n    if len(inds) == 0:\n        return\n\n    im = im[:, :, (2, 1, 0)]\n    fig, ax = plt.subplots(figsize=(12, 12))\n    ax.imshow(im, aspect='equal')\n    for i in inds:\n        bbox = dets[i, :4]\n        score = dets[i, -1]\n\n        ax.add_patch(\n            plt.Rectangle((bbox[0], bbox[1]),\n                          bbox[2] - bbox[0],\n                          bbox[3] - bbox[1], fill=False,\n                          edgecolor='red', linewidth=3.5)\n            )\n        ax.text(bbox[0], bbox[1] - 2,\n                '{:s} {:.3f}'.format(class_name, score),\n                bbox=dict(facecolor='blue', alpha=0.5),\n                fontsize=14, 
color='white')\n\n    ax.set_title(('{} detections with '\n                  'p({} | box) >= {:.1f}').format(class_name, class_name,\n                                                  thresh),\n                  fontsize=14)\n    plt.axis('off')\n    plt.tight_layout()\n    plt.draw()\n\ndef demo(net, image_name):\n    \"\"\"Detect object classes in an image using pre-computed object proposals.\"\"\"\n\n    # Load the demo image\n    im_file = os.path.join(cfg.DATA_DIR, 'demo', image_name)\n    im = cv2.imread(im_file)\n\n    # Detect all object classes and regress object bounds\n    timer = Timer()\n    timer.tic()\n    scores, boxes = im_detect(net, im)\n    timer.toc()\n    print ('Detection took {:.3f}s for '\n           '{:d} object proposals').format(timer.total_time, boxes.shape[0])\n\n    # Visualize detections for each class\n    CONF_THRESH = 0.8\n    NMS_THRESH = 0.3\n    for cls_ind, cls in enumerate(CLASSES[1:]):\n        cls_ind += 1 # because we skipped background\n        cls_boxes = boxes[:, 4*cls_ind:4*(cls_ind + 1)]\n        cls_scores = scores[:, cls_ind]\n        dets = np.hstack((cls_boxes,\n                          cls_scores[:, np.newaxis])).astype(np.float32)\n        keep = nms(dets, NMS_THRESH)\n        dets = dets[keep, :]\n        vis_detections(im, cls, dets, thresh=CONF_THRESH)\n\ndef parse_args():\n    \"\"\"Parse input arguments.\"\"\"\n    parser = argparse.ArgumentParser(description='Faster R-CNN demo')\n    parser.add_argument('--gpu', dest='gpu_id', help='GPU device id to use [0]',\n                        default=0, type=int)\n    parser.add_argument('--cpu', dest='cpu_mode',\n                        help='Use CPU mode (overrides --gpu)',\n                        action='store_true')\n    parser.add_argument('--net', dest='demo_net', help='Network to use [vgg16]',\n                        choices=NETS.keys(), default='vgg16')\n\n    args = parser.parse_args()\n\n    return args\n\nif __name__ == '__main__':\n    
cfg.TEST.HAS_RPN = True  # Use RPN for proposals\n\n    args = parse_args()\n\n    prototxt = os.path.join(cfg.MODELS_DIR, NETS[args.demo_net][0],\n                            'faster_rcnn_alt_opt', 'faster_rcnn_test.pt')\n    caffemodel = os.path.join(cfg.DATA_DIR, 'faster_rcnn_models',\n                              NETS[args.demo_net][1])\n\n    if not os.path.isfile(caffemodel):\n        raise IOError(('{:s} not found.\\nDid you run ./data/script/'\n                       'fetch_faster_rcnn_models.sh?').format(caffemodel))\n\n    if args.cpu_mode:\n        caffe.set_mode_cpu()\n    else:\n        caffe.set_mode_gpu()\n        caffe.set_device(args.gpu_id)\n        cfg.GPU_ID = args.gpu_id\n    net = caffe.Net(prototxt, caffemodel, caffe.TEST)\n\n    print '\\n\\nLoaded network {:s}'.format(caffemodel)\n\n    # Warmup on a dummy image\n    im = 128 * np.ones((300, 500, 3), dtype=np.uint8)\n    for i in xrange(2):\n        _, _= im_detect(net, im)\n\n    im_names = ['000456.jpg', '000542.jpg', '001150.jpg',\n                '001763.jpg', '004545.jpg']\n    for im_name in im_names:\n        print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n        print 'Demo for data/demo/{}'.format(im_name)\n        demo(net, im_name)\n\n    plt.show()\n"
  },
  {
    "path": "tools/eval_recall.py",
    "content": "#!/usr/bin/env python\n\nimport _init_paths\nfrom fast_rcnn.config import cfg, cfg_from_file, cfg_from_list\nfrom datasets.factory import get_imdb\nimport argparse\nimport time, os, sys\nimport numpy as np\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Test a Fast R-CNN network')\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to test',\n                        default='voc_2007_test', type=str)\n    parser.add_argument('--method', dest='method',\n                        help='proposal method',\n                        default='selective_search', type=str)\n    parser.add_argument('--rpn-file', dest='rpn_file',\n                        default=None, type=str)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    imdb = get_imdb(args.imdb_name)\n    imdb.set_proposal_method(args.method)\n    if args.rpn_file is not None:\n        imdb.config['rpn_file'] = args.rpn_file\n\n    candidate_boxes = None\n    if 0:\n        import scipy.io as sio\n        filename = 'debug/stage1_rpn_voc_2007_test.mat'\n        raw_data = sio.loadmat(filename)['aboxes'].ravel()\n        candidate_boxes = raw_data\n\n    ar, gt_overlaps, recalls, thresholds = \\\n        imdb.evaluate_recall(candidate_boxes=candidate_boxes)\n    print('Method: {}'.format(args.method))\n    print('AverageRec: {:.3f}'.format(ar))\n\n    def recall_at(t):\n        ind = np.where(thresholds > t - 1e-5)[0][0]\n        assert np.isclose(thresholds[ind], t)\n        return recalls[ind]\n\n    print('Recall@0.5: {:.3f}'.format(recall_at(0.5)))\n    print('Recall@0.6: {:.3f}'.format(recall_at(0.6)))\n    print('Recall@0.7: {:.3f}'.format(recall_at(0.7)))\n    print('Recall@0.8: 
{:.3f}'.format(recall_at(0.8)))\n    print('Recall@0.9: {:.3f}'.format(recall_at(0.9)))\n    # print again for easy spreadsheet copying\n    print('{:.3f}'.format(ar))\n    print('{:.3f}'.format(recall_at(0.5)))\n    print('{:.3f}'.format(recall_at(0.6)))\n    print('{:.3f}'.format(recall_at(0.7)))\n    print('{:.3f}'.format(recall_at(0.8)))\n    print('{:.3f}'.format(recall_at(0.9)))\n"
  },
  {
    "path": "tools/reval.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Reval = re-eval. Re-evaluate saved detections.\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.test import apply_nms\nfrom fast_rcnn.config import cfg\nfrom datasets.factory import get_imdb\nimport cPickle\nimport os, sys, argparse\nimport numpy as np\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Re-evaluate results')\n    parser.add_argument('output_dir', nargs=1, help='results directory',\n                        type=str)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to re-evaluate',\n                        default='voc_2007_test', type=str)\n    parser.add_argument('--matlab', dest='matlab_eval',\n                        help='use matlab for evaluation',\n                        action='store_true')\n    parser.add_argument('--comp', dest='comp_mode', help='competition mode',\n                        action='store_true')\n    parser.add_argument('--nms', dest='apply_nms', help='apply nms',\n                        action='store_true')\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\ndef from_dets(imdb_name, output_dir, args):\n    imdb = get_imdb(imdb_name)\n    imdb.competition_mode(args.comp_mode)\n    imdb.config['matlab_eval'] = args.matlab_eval\n    with open(os.path.join(output_dir, 'detections.pkl'), 'rb') as f:\n        dets = cPickle.load(f)\n\n    if args.apply_nms:\n        print 'Applying NMS to all detections'\n        nms_dets = apply_nms(dets, cfg.TEST.NMS)\n    else:\n        nms_dets = dets\n\n    print 'Evaluating detections'\n    
imdb.evaluate_detections(nms_dets, output_dir)\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    output_dir = os.path.abspath(args.output_dir[0])\n    imdb_name = args.imdb_name\n    from_dets(imdb_name, output_dir, args)\n"
  },
  {
    "path": "tools/rpn_generate.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast/er/ R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Generate RPN proposals.\"\"\"\n\nimport _init_paths\nimport numpy as np\nfrom fast_rcnn.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom datasets.factory import get_imdb\nfrom rpn.generate import imdb_proposals\nimport cPickle\nimport caffe\nimport argparse\nimport pprint\nimport time, os, sys\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Test a Fast R-CNN network')\n    parser.add_argument('--gpu', dest='gpu_id', help='GPU id to use',\n                        default=0, type=int)\n    parser.add_argument('--def', dest='prototxt',\n                        help='prototxt file defining the network',\n                        default=None, type=str)\n    parser.add_argument('--net', dest='caffemodel',\n                        help='model to test',\n                        default=None, type=str)\n    parser.add_argument('--cfg', dest='cfg_file',\n                        help='optional config file', default=None, type=str)\n    parser.add_argument('--wait', dest='wait',\n                        help='wait until net file exists',\n                        default=True, type=bool)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to test',\n                        default='voc_2007_test', type=str)\n    parser.add_argument('--set', dest='set_cfgs',\n                        help='set config keys', default=None,\n                        nargs=argparse.REMAINDER)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = 
parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n\n    cfg.GPU_ID = args.gpu_id\n\n    # RPN test settings\n    cfg.TEST.RPN_PRE_NMS_TOP_N = -1\n    cfg.TEST.RPN_POST_NMS_TOP_N = 2000\n\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    while not os.path.exists(args.caffemodel) and args.wait:\n        print('Waiting for {} to exist...'.format(args.caffemodel))\n        time.sleep(10)\n\n    caffe.set_mode_gpu()\n    caffe.set_device(args.gpu_id)\n    net = caffe.Net(args.prototxt, args.caffemodel, caffe.TEST)\n    net.name = os.path.splitext(os.path.basename(args.caffemodel))[0]\n\n    imdb = get_imdb(args.imdb_name)\n    imdb_boxes = imdb_proposals(net, imdb)\n\n    output_dir = get_output_dir(imdb, net)\n    rpn_file = os.path.join(output_dir, net.name + '_rpn_proposals.pkl')\n    with open(rpn_file, 'wb') as f:\n        cPickle.dump(imdb_boxes, f, cPickle.HIGHEST_PROTOCOL)\n    print('Wrote RPN proposals to {}'.format(rpn_file))\n"
  },
  {
    "path": "tools/test_net.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Test a Fast R-CNN network on an image database.\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.test import test_net\nfrom fast_rcnn.config import cfg, cfg_from_file, cfg_from_list\nfrom datasets.factory import get_imdb\nimport caffe\nimport argparse\nimport pprint\nimport time, os, sys\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Test a Fast R-CNN network')\n    parser.add_argument('--gpu', dest='gpu_id', help='GPU id to use',\n                        default=0, type=int)\n    parser.add_argument('--def', dest='prototxt',\n                        help='prototxt file defining the network',\n                        default=None, type=str)\n    parser.add_argument('--net', dest='caffemodel',\n                        help='model to test',\n                        default=None, type=str)\n    parser.add_argument('--cfg', dest='cfg_file',\n                        help='optional config file', default=None, type=str)\n    parser.add_argument('--wait', dest='wait',\n                        help='wait until net file exists',\n                        default=True, type=bool)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to test',\n                        default='voc_2007_test', type=str)\n    parser.add_argument('--comp', dest='comp_mode', help='competition mode',\n                        action='store_true')\n    parser.add_argument('--set', dest='set_cfgs',\n                        help='set config keys', default=None,\n                        nargs=argparse.REMAINDER)\n    parser.add_argument('--vis', dest='vis', help='visualize detections',\n              
          action='store_true')\n    parser.add_argument('--num_dets', dest='max_per_image',\n                        help='max number of detections per image',\n                        default=100, type=int)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n\n    cfg.GPU_ID = args.gpu_id\n\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    while not os.path.exists(args.caffemodel) and args.wait:\n        print('Waiting for {} to exist...'.format(args.caffemodel))\n        time.sleep(10)\n\n    caffe.set_mode_gpu()\n    caffe.set_device(args.gpu_id)\n    net = caffe.Net(args.prototxt, args.caffemodel, caffe.TEST)\n    net.name = os.path.splitext(os.path.basename(args.caffemodel))[0]\n\n    imdb = get_imdb(args.imdb_name)\n    imdb.competition_mode(args.comp_mode)\n    if not cfg.TEST.HAS_RPN:\n        imdb.set_proposal_method(cfg.TEST.PROPOSAL_METHOD)\n\n    test_net(net, imdb, max_per_image=args.max_per_image, vis=args.vis)\n"
  },
  {
    "path": "tools/train_faster_rcnn_alt_opt.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Train a Faster R-CNN network using alternating optimization.\nThis tool implements the alternating optimization algorithm described in our\nNIPS 2015 paper (\"Faster R-CNN: Towards Real-time Object Detection with Region\nProposal Networks.\" Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun.)\n\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.train import get_training_roidb, train_net\nfrom fast_rcnn.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom datasets.factory import get_imdb\nfrom rpn.generate import imdb_proposals\nimport argparse\nimport pprint\nimport numpy as np\nimport sys, os\nimport multiprocessing as mp\nimport cPickle\nimport shutil\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Train a Faster R-CNN network')\n    parser.add_argument('--gpu', dest='gpu_id',\n                        help='GPU device id to use [0]',\n                        default=0, type=int)\n    parser.add_argument('--net_name', dest='net_name',\n                        help='network name (e.g., \"ZF\")',\n                        default=None, type=str)\n    parser.add_argument('--weights', dest='pretrained_model',\n                        help='initialize with pretrained model weights',\n                        default=None, type=str)\n    parser.add_argument('--cfg', dest='cfg_file',\n                        help='optional config file',\n                        default=None, type=str)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to train on',\n                        default='voc_2007_trainval', type=str)\n    parser.add_argument('--set', 
dest='set_cfgs',\n                        help='set config keys', default=None,\n                        nargs=argparse.REMAINDER)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\ndef get_roidb(imdb_name, rpn_file=None):\n    imdb = get_imdb(imdb_name)\n    print 'Loaded dataset `{:s}` for training'.format(imdb.name)\n    imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)\n    print 'Set proposal method: {:s}'.format(cfg.TRAIN.PROPOSAL_METHOD)\n    if rpn_file is not None:\n        imdb.config['rpn_file'] = rpn_file\n    roidb = get_training_roidb(imdb)\n    return roidb, imdb\n\ndef get_solvers(net_name):\n    # Faster R-CNN Alternating Optimization\n    n = 'faster_rcnn_alt_opt'\n    # Solver for each training stage\n    solvers = [[net_name, n, 'stage1_rpn_solver60k80k.pt'],\n               [net_name, n, 'stage1_fast_rcnn_solver30k40k.pt'],\n               [net_name, n, 'stage2_rpn_solver60k80k.pt'],\n               [net_name, n, 'stage2_fast_rcnn_solver30k40k.pt']]\n    solvers = [os.path.join(cfg.MODELS_DIR, *s) for s in solvers]\n    # Iterations for each training stage\n    max_iters = [80000, 40000, 80000, 40000]\n    # max_iters = [100, 100, 100, 100]\n    # Test prototxt for the RPN\n    rpn_test_prototxt = os.path.join(\n        cfg.MODELS_DIR, net_name, n, 'rpn_test.pt')\n    return solvers, max_iters, rpn_test_prototxt\n\n# ------------------------------------------------------------------------------\n# Pycaffe doesn't reliably free GPU memory when instantiated nets are discarded\n# (e.g. \"del net\" in Python code). 
To work around this issue, each training\n# stage is executed in a separate process using multiprocessing.Process.\n# ------------------------------------------------------------------------------\n\ndef _init_caffe(cfg):\n    \"\"\"Initialize pycaffe in a training process.\n    \"\"\"\n\n    import caffe\n    # fix the random seeds (numpy and caffe) for reproducibility\n    np.random.seed(cfg.RNG_SEED)\n    caffe.set_random_seed(cfg.RNG_SEED)\n    # set up caffe\n    caffe.set_mode_gpu()\n    caffe.set_device(cfg.GPU_ID)\n\ndef train_rpn(queue=None, imdb_name=None, init_model=None, solver=None,\n              max_iters=None, cfg=None):\n    \"\"\"Train a Region Proposal Network in a separate training process.\n    \"\"\"\n\n    # Not using any proposals, just ground-truth boxes\n    cfg.TRAIN.HAS_RPN = True\n    cfg.TRAIN.BBOX_REG = False  # applies only to Fast R-CNN bbox regression\n    cfg.TRAIN.PROPOSAL_METHOD = 'gt'\n    cfg.TRAIN.IMS_PER_BATCH = 1\n    print 'Init model: {}'.format(init_model)\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    import caffe\n    _init_caffe(cfg)\n\n    roidb, imdb = get_roidb(imdb_name)\n    print 'roidb len: {}'.format(len(roidb))\n    output_dir = get_output_dir(imdb)\n    print 'Output will be saved to `{:s}`'.format(output_dir)\n\n    model_paths = train_net(solver, roidb, output_dir,\n                            pretrained_model=init_model,\n                            max_iters=max_iters)\n    # Cleanup all but the final model\n    for i in model_paths[:-1]:\n        os.remove(i)\n    rpn_model_path = model_paths[-1]\n    # Send final model path through the multiprocessing queue\n    queue.put({'model_path': rpn_model_path})\n\ndef rpn_generate(queue=None, imdb_name=None, rpn_model_path=None, cfg=None,\n                 rpn_test_prototxt=None):\n    \"\"\"Use a trained RPN to generate proposals.\n    \"\"\"\n\n    cfg.TEST.RPN_PRE_NMS_TOP_N = -1     # no pre NMS filtering\n    cfg.TEST.RPN_POST_NMS_TOP_N = 2000  
# limit top boxes after NMS\n    print 'RPN model: {}'.format(rpn_model_path)\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    import caffe\n    _init_caffe(cfg)\n\n    # NOTE: the matlab implementation computes proposals on flipped images, too.\n    # We compute them on the image once and then flip the already computed\n    # proposals. This might cause a minor loss in mAP (less proposal jittering).\n    imdb = get_imdb(imdb_name)\n    print 'Loaded dataset `{:s}` for proposal generation'.format(imdb.name)\n\n    # Load RPN and configure output directory\n    rpn_net = caffe.Net(rpn_test_prototxt, rpn_model_path, caffe.TEST)\n    output_dir = get_output_dir(imdb)\n    print 'Output will be saved to `{:s}`'.format(output_dir)\n    # Generate proposals on the imdb\n    rpn_proposals = imdb_proposals(rpn_net, imdb)\n    # Write proposals to disk and send the proposal file path through the\n    # multiprocessing queue\n    rpn_net_name = os.path.splitext(os.path.basename(rpn_model_path))[0]\n    rpn_proposals_path = os.path.join(\n        output_dir, rpn_net_name + '_proposals.pkl')\n    with open(rpn_proposals_path, 'wb') as f:\n        cPickle.dump(rpn_proposals, f, cPickle.HIGHEST_PROTOCOL)\n    print 'Wrote RPN proposals to {}'.format(rpn_proposals_path)\n    queue.put({'proposal_path': rpn_proposals_path})\n\ndef train_fast_rcnn(queue=None, imdb_name=None, init_model=None, solver=None,\n                    max_iters=None, cfg=None, rpn_file=None):\n    \"\"\"Train a Fast R-CNN using proposals generated by an RPN.\n    \"\"\"\n\n    cfg.TRAIN.HAS_RPN = False           # not generating proposals on-the-fly\n    cfg.TRAIN.PROPOSAL_METHOD = 'rpn'   # use pre-computed RPN proposals instead\n    cfg.TRAIN.IMS_PER_BATCH = 2\n    print 'Init model: {}'.format(init_model)\n    print 'RPN proposals: {}'.format(rpn_file)\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    import caffe\n    _init_caffe(cfg)\n\n    roidb, imdb = get_roidb(imdb_name, 
rpn_file=rpn_file)\n    output_dir = get_output_dir(imdb)\n    print 'Output will be saved to `{:s}`'.format(output_dir)\n    # Train Fast R-CNN\n    model_paths = train_net(solver, roidb, output_dir,\n                            pretrained_model=init_model,\n                            max_iters=max_iters)\n    # Cleanup all but the final model\n    for i in model_paths[:-1]:\n        os.remove(i)\n    fast_rcnn_model_path = model_paths[-1]\n    # Send Fast R-CNN model path over the multiprocessing queue\n    queue.put({'model_path': fast_rcnn_model_path})\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n    cfg.GPU_ID = args.gpu_id\n\n    # --------------------------------------------------------------------------\n    # Pycaffe doesn't reliably free GPU memory when instantiated nets are\n    # discarded (e.g. \"del net\" in Python code). To work around this issue, each\n    # training stage is executed in a separate process using\n    # multiprocessing.Process.\n    # --------------------------------------------------------------------------\n\n    # queue for communicating results between processes\n    mp_queue = mp.Queue()\n    # solvers, iters, etc. 
for each training stage\n    solvers, max_iters, rpn_test_prototxt = get_solvers(args.net_name)\n\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 1 RPN, init from ImageNet model'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            init_model=args.pretrained_model,\n            solver=solvers[0],\n            max_iters=max_iters[0],\n            cfg=cfg)\n    p = mp.Process(target=train_rpn, kwargs=mp_kwargs)\n    p.start()\n    rpn_stage1_out = mp_queue.get()\n    p.join()\n\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 1 RPN, generate proposals'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            rpn_model_path=str(rpn_stage1_out['model_path']),\n            cfg=cfg,\n            rpn_test_prototxt=rpn_test_prototxt)\n    p = mp.Process(target=rpn_generate, kwargs=mp_kwargs)\n    p.start()\n    rpn_stage1_out['proposal_path'] = mp_queue.get()['proposal_path']\n    p.join()\n\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 1 Fast R-CNN using RPN proposals, init from ImageNet model'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            init_model=args.pretrained_model,\n            solver=solvers[1],\n            max_iters=max_iters[1],\n            cfg=cfg,\n            rpn_file=rpn_stage1_out['proposal_path'])\n    p = mp.Process(target=train_fast_rcnn, kwargs=mp_kwargs)\n    p.start()\n    fast_rcnn_stage1_out = mp_queue.get()\n    p.join()\n\n    print 
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 2 RPN, init from stage 1 Fast R-CNN model'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    cfg.TRAIN.SNAPSHOT_INFIX = 'stage2'\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            init_model=str(fast_rcnn_stage1_out['model_path']),\n            solver=solvers[2],\n            max_iters=max_iters[2],\n            cfg=cfg)\n    p = mp.Process(target=train_rpn, kwargs=mp_kwargs)\n    p.start()\n    rpn_stage2_out = mp_queue.get()\n    p.join()\n\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 2 RPN, generate proposals'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            rpn_model_path=str(rpn_stage2_out['model_path']),\n            cfg=cfg,\n            rpn_test_prototxt=rpn_test_prototxt)\n    p = mp.Process(target=rpn_generate, kwargs=mp_kwargs)\n    p.start()\n    rpn_stage2_out['proposal_path'] = mp_queue.get()['proposal_path']\n    p.join()\n\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n    print 'Stage 2 Fast R-CNN, init from stage 2 RPN R-CNN model'\n    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n\n    cfg.TRAIN.SNAPSHOT_INFIX = 'stage2'\n    mp_kwargs = dict(\n            queue=mp_queue,\n            imdb_name=args.imdb_name,\n            init_model=str(rpn_stage2_out['model_path']),\n            solver=solvers[3],\n            max_iters=max_iters[3],\n            cfg=cfg,\n            rpn_file=rpn_stage2_out['proposal_path'])\n    p = mp.Process(target=train_fast_rcnn, kwargs=mp_kwargs)\n    p.start()\n    fast_rcnn_stage2_out = mp_queue.get()\n    p.join()\n\n    # Create final model (just a copy of the last stage)\n    final_path = os.path.join(\n            
os.path.dirname(fast_rcnn_stage2_out['model_path']),\n            args.net_name + '_faster_rcnn_final.caffemodel')\n    print 'cp {} -> {}'.format(\n            fast_rcnn_stage2_out['model_path'], final_path)\n    shutil.copy(fast_rcnn_stage2_out['model_path'], final_path)\n    print 'Final model: {}'.format(final_path)\n"
  },
  {
    "path": "tools/train_net.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Train a Fast R-CNN network on a region of interest database.\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.train import get_training_roidb, train_net\nfrom fast_rcnn.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom datasets.factory import get_imdb\nimport datasets.imdb\nimport caffe\nimport argparse\nimport pprint\nimport numpy as np\nimport sys\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Train a Fast R-CNN network')\n    parser.add_argument('--gpu', dest='gpu_id',\n                        help='GPU device id to use [0]',\n                        default=0, type=int)\n    parser.add_argument('--solver', dest='solver',\n                        help='solver prototxt',\n                        default=None, type=str)\n    parser.add_argument('--iters', dest='max_iters',\n                        help='number of iterations to train',\n                        default=40000, type=int)\n    parser.add_argument('--weights', dest='pretrained_model',\n                        help='initialize with pretrained model weights',\n                        default=None, type=str)\n    parser.add_argument('--cfg', dest='cfg_file',\n                        help='optional config file',\n                        default=None, type=str)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to train on',\n                        default='voc_2007_trainval', type=str)\n    parser.add_argument('--rand', dest='randomize',\n                        help='randomize (do not use a fixed seed)',\n                        action='store_true')\n    
parser.add_argument('--set', dest='set_cfgs',\n                        help='set config keys', default=None,\n                        nargs=argparse.REMAINDER)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\ndef combined_roidb(imdb_names):\n    def get_roidb(imdb_name):\n        imdb = get_imdb(imdb_name)\n        print 'Loaded dataset `{:s}` for training'.format(imdb.name)\n        imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)\n        print 'Set proposal method: {:s}'.format(cfg.TRAIN.PROPOSAL_METHOD)\n        roidb = get_training_roidb(imdb)\n        return roidb\n\n    roidbs = [get_roidb(s) for s in imdb_names.split('+')]\n    roidb = roidbs[0]\n    if len(roidbs) > 1:\n        for r in roidbs[1:]:\n            roidb.extend(r)\n        imdb = datasets.imdb.imdb(imdb_names)\n    else:\n        imdb = get_imdb(imdb_names)\n    return imdb, roidb\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n\n    cfg.GPU_ID = args.gpu_id\n\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    if not args.randomize:\n        # fix the random seeds (numpy and caffe) for reproducibility\n        np.random.seed(cfg.RNG_SEED)\n        caffe.set_random_seed(cfg.RNG_SEED)\n\n    # set up caffe\n    caffe.set_mode_gpu()\n    caffe.set_device(args.gpu_id)\n\n    imdb, roidb = combined_roidb(args.imdb_name)\n    print '{:d} roidb entries'.format(len(roidb))\n\n    output_dir = get_output_dir(imdb)\n    print 'Output will be saved to `{:s}`'.format(output_dir)\n\n    train_net(args.solver, roidb, output_dir,\n              pretrained_model=args.pretrained_model,\n              max_iters=args.max_iters)\n"
  },
  {
    "path": "tools/train_svms.py",
    "content": "#!/usr/bin/env python\n\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"\nTrain post-hoc SVMs using the algorithm and hyper-parameters from\ntraditional R-CNN.\n\"\"\"\n\nimport _init_paths\nfrom fast_rcnn.config import cfg, cfg_from_file\nfrom datasets.factory import get_imdb\nfrom fast_rcnn.test import im_detect\nfrom utils.timer import Timer\nimport caffe\nimport argparse\nimport pprint\nimport numpy as np\nimport numpy.random as npr\nimport cv2\nfrom sklearn import svm\nimport os, sys\n\nclass SVMTrainer(object):\n    \"\"\"\n    Trains post-hoc detection SVMs for all classes using the algorithm\n    and hyper-parameters of traditional R-CNN.\n    \"\"\"\n\n    def __init__(self, net, imdb):\n        self.imdb = imdb\n        self.net = net\n        self.layer = 'fc7'\n        self.hard_thresh = -1.0001\n        self.neg_iou_thresh = 0.3\n\n        dim = net.params['cls_score'][0].data.shape[1]\n        scale = self._get_feature_scale()\n        print('Feature dim: {}'.format(dim))\n        print('Feature scale: {:.3f}'.format(scale))\n        self.trainers = [SVMClassTrainer(cls, dim, feature_scale=scale)\n                         for cls in imdb.classes]\n\n    def _get_feature_scale(self, num_images=100):\n        TARGET_NORM = 20.0 # Magic value from traditional R-CNN\n        _t = Timer()\n        roidb = self.imdb.roidb\n        total_norm = 0.0\n        count = 0.0\n        inds = npr.choice(xrange(self.imdb.num_images), size=num_images,\n                          replace=False)\n        for i_, i in enumerate(inds):\n            im = cv2.imread(self.imdb.image_path_at(i))\n            if roidb[i]['flipped']:\n                im = im[:, ::-1, :]\n            _t.tic()\n            scores, boxes = im_detect(self.net, im, 
roidb[i]['boxes'])\n            _t.toc()\n            feat = self.net.blobs[self.layer].data\n            total_norm += np.sqrt((feat ** 2).sum(axis=1)).sum()\n            count += feat.shape[0]\n            print('{}/{}: avg feature norm: {:.3f}'.format(i_ + 1, num_images,\n                                                           total_norm / count))\n\n        return TARGET_NORM * 1.0 / (total_norm / count)\n\n    def _get_pos_counts(self):\n        counts = np.zeros((len(self.imdb.classes)), dtype=np.int)\n        roidb = self.imdb.roidb\n        for i in xrange(len(roidb)):\n            for j in xrange(1, self.imdb.num_classes):\n                I = np.where(roidb[i]['gt_classes'] == j)[0]\n                counts[j] += len(I)\n\n        for j in xrange(1, self.imdb.num_classes):\n            print('class {:s} has {:d} positives'.\n                  format(self.imdb.classes[j], counts[j]))\n\n        return counts\n\n    def get_pos_examples(self):\n        counts = self._get_pos_counts()\n        for i in xrange(len(counts)):\n            self.trainers[i].alloc_pos(counts[i])\n\n        _t = Timer()\n        roidb = self.imdb.roidb\n        num_images = len(roidb)\n        # num_images = 100\n        for i in xrange(num_images):\n            im = cv2.imread(self.imdb.image_path_at(i))\n            if roidb[i]['flipped']:\n                im = im[:, ::-1, :]\n            gt_inds = np.where(roidb[i]['gt_classes'] > 0)[0]\n            gt_boxes = roidb[i]['boxes'][gt_inds]\n            _t.tic()\n            scores, boxes = im_detect(self.net, im, gt_boxes)\n            _t.toc()\n            feat = self.net.blobs[self.layer].data\n            for j in xrange(1, self.imdb.num_classes):\n                cls_inds = np.where(roidb[i]['gt_classes'][gt_inds] == j)[0]\n                if len(cls_inds) > 0:\n                    cls_feat = feat[cls_inds, :]\n                    self.trainers[j].append_pos(cls_feat)\n\n            print 'get_pos_examples: {:d}/{:d} {:.3f}s' 
\\\n                  .format(i + 1, len(roidb), _t.average_time)\n\n    def initialize_net(self):\n        # Start all SVM parameters at zero\n        self.net.params['cls_score'][0].data[...] = 0\n        self.net.params['cls_score'][1].data[...] = 0\n\n        # Initialize SVMs in a smart way. Not doing this because it's such\n        # a good initialization that we might not learn something close to\n        # the SVM solution.\n#        # subtract background weights and biases for the foreground classes\n#        w_bg = self.net.params['cls_score'][0].data[0, :]\n#        b_bg = self.net.params['cls_score'][1].data[0]\n#        self.net.params['cls_score'][0].data[1:, :] -= w_bg\n#        self.net.params['cls_score'][1].data[1:] -= b_bg\n#        # set the background weights and biases to 0 (where they shall remain)\n#        self.net.params['cls_score'][0].data[0, :] = 0\n#        self.net.params['cls_score'][1].data[0] = 0\n\n    def update_net(self, cls_ind, w, b):\n        self.net.params['cls_score'][0].data[cls_ind, :] = w\n        self.net.params['cls_score'][1].data[cls_ind] = b\n\n    def train_with_hard_negatives(self):\n        _t = Timer()\n        roidb = self.imdb.roidb\n        num_images = len(roidb)\n        # num_images = 100\n        for i in xrange(num_images):\n            im = cv2.imread(self.imdb.image_path_at(i))\n            if roidb[i]['flipped']:\n                im = im[:, ::-1, :]\n            _t.tic()\n            scores, boxes = im_detect(self.net, im, roidb[i]['boxes'])\n            _t.toc()\n            feat = self.net.blobs[self.layer].data\n            for j in xrange(1, self.imdb.num_classes):\n                hard_inds = \\\n                    np.where((scores[:, j] > self.hard_thresh) &\n                             (roidb[i]['gt_overlaps'][:, j].toarray().ravel() <\n                              self.neg_iou_thresh))[0]\n                if len(hard_inds) > 0:\n                    hard_feat = feat[hard_inds, :].copy()\n    
                new_w_b = \\\n                        self.trainers[j].append_neg_and_retrain(feat=hard_feat)\n                    if new_w_b is not None:\n                        self.update_net(j, new_w_b[0], new_w_b[1])\n\n            print(('train_with_hard_negatives: '\n                   '{:d}/{:d} {:.3f}s').format(i + 1, len(roidb),\n                                               _t.average_time))\n\n    def train(self):\n        # Initialize SVMs using\n        #   a. w_i = fc8_w_i - fc8_w_0\n        #   b. b_i = fc8_b_i - fc8_b_0\n        #   c. Install SVMs into net\n        self.initialize_net()\n\n        # Pass over roidb to count num positives for each class\n        #   a. Pre-allocate arrays for positive feature vectors\n        # Pass over roidb, computing features for positives only\n        self.get_pos_examples()\n\n        # Pass over roidb\n        #   a. Compute cls_score with forward pass\n        #   b. For each class\n        #       i. Select hard negatives\n        #       ii. Add them to cache\n        #   c. For each class\n        #       i. If SVM retrain criteria met, update SVM\n        #       ii. 
Install new SVM into net\n        self.train_with_hard_negatives()\n\n        # One final SVM retraining for each class\n        # Install SVMs into net\n        for j in xrange(1, self.imdb.num_classes):\n            new_w_b = self.trainers[j].append_neg_and_retrain(force=True)\n            self.update_net(j, new_w_b[0], new_w_b[1])\n\nclass SVMClassTrainer(object):\n    \"\"\"Manages post-hoc SVM training for a single object class.\"\"\"\n\n    def __init__(self, cls, dim, feature_scale=1.0,\n                 C=0.001, B=10.0, pos_weight=2.0):\n        self.pos = np.zeros((0, dim), dtype=np.float32)\n        self.neg = np.zeros((0, dim), dtype=np.float32)\n        self.B = B\n        self.C = C\n        self.cls = cls\n        self.pos_weight = pos_weight\n        self.dim = dim\n        self.feature_scale = feature_scale\n        self.svm = svm.LinearSVC(C=C, class_weight={1: 2, -1: 1},\n                                 intercept_scaling=B, verbose=1,\n                                 penalty='l2', loss='l1',\n                                 random_state=cfg.RNG_SEED, dual=True)\n        self.pos_cur = 0\n        self.num_neg_added = 0\n        self.retrain_limit = 2000\n        self.evict_thresh = -1.1\n        self.loss_history = []\n\n    def alloc_pos(self, count):\n        self.pos_cur = 0\n        self.pos = np.zeros((count, self.dim), dtype=np.float32)\n\n    def append_pos(self, feat):\n        num = feat.shape[0]\n        self.pos[self.pos_cur:self.pos_cur + num, :] = feat\n        self.pos_cur += num\n\n    def train(self):\n        print('>>> Updating {} detector <<<'.format(self.cls))\n        num_pos = self.pos.shape[0]\n        num_neg = self.neg.shape[0]\n        print('Cache holds {} pos examples and {} neg examples'.\n              format(num_pos, num_neg))\n        X = np.vstack((self.pos, self.neg)) * self.feature_scale\n        y = np.hstack((np.ones(num_pos),\n                       -np.ones(num_neg)))\n        self.svm.fit(X, y)\n        w 
= self.svm.coef_\n        b = self.svm.intercept_[0]\n        scores = self.svm.decision_function(X)\n        pos_scores = scores[:num_pos]\n        neg_scores = scores[num_pos:]\n\n        pos_loss = (self.C * self.pos_weight *\n                    np.maximum(0, 1 - pos_scores).sum())\n        neg_loss = self.C * np.maximum(0, 1 + neg_scores).sum()\n        reg_loss = 0.5 * np.dot(w.ravel(), w.ravel()) + 0.5 * b ** 2\n        tot_loss = pos_loss + neg_loss + reg_loss\n        self.loss_history.append((tot_loss, pos_loss, neg_loss, reg_loss))\n\n        for i, losses in enumerate(self.loss_history):\n            print(('    {:d}: obj val: {:.3f} = {:.3f} '\n                   '(pos) + {:.3f} (neg) + {:.3f} (reg)').format(i, *losses))\n\n        # Sanity check\n        scores_ret = (\n                X * 1.0 / self.feature_scale).dot(w.T * self.feature_scale) + b\n        assert np.allclose(scores, scores_ret[:, 0], atol=1e-5), \\\n                \"Scores from returned model don't match decision function\"\n\n        return ((w * self.feature_scale, b), pos_scores, neg_scores)\n\n    def append_neg_and_retrain(self, feat=None, force=False):\n        if feat is not None:\n            num = feat.shape[0]\n            self.neg = np.vstack((self.neg, feat))\n            self.num_neg_added += num\n        if self.num_neg_added > self.retrain_limit or force:\n            self.num_neg_added = 0\n            new_w_b, pos_scores, neg_scores = self.train()\n            # scores = np.dot(self.neg, new_w_b[0].T) + new_w_b[1]\n            # easy_inds = np.where(neg_scores < self.evict_thresh)[0]\n            not_easy_inds = np.where(neg_scores >= self.evict_thresh)[0]\n            if len(not_easy_inds) > 0:\n                self.neg = self.neg[not_easy_inds, :]\n                # self.neg = np.delete(self.neg, easy_inds)\n            print('    Pruning easy negatives')\n            print('    Cache holds {} pos examples and {} neg examples'.\n                  
format(self.pos.shape[0], self.neg.shape[0]))\n            print('    {} pos support vectors'.format((pos_scores <= 1).sum()))\n            print('    {} neg support vectors'.format((neg_scores >= -1).sum()))\n            return new_w_b\n        else:\n            return None\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Train SVMs (old skool)')\n    parser.add_argument('--gpu', dest='gpu_id', help='GPU device id to use [0]',\n                        default=0, type=int)\n    parser.add_argument('--def', dest='prototxt',\n                        help='prototxt file defining the network',\n                        default=None, type=str)\n    parser.add_argument('--net', dest='caffemodel',\n                        help='model to test',\n                        default=None, type=str)\n    parser.add_argument('--cfg', dest='cfg_file',\n                        help='optional config file', default=None, type=str)\n    parser.add_argument('--imdb', dest='imdb_name',\n                        help='dataset to train on',\n                        default='voc_2007_trainval', type=str)\n\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    # Must turn this off to prevent issues when digging into the net blobs to\n    # pull out features (tricky!)\n    cfg.DEDUP_BOXES = 0\n\n    # Must turn this on because we use the test im_detect() method to harvest\n    # hard negatives\n    cfg.TEST.SVM = True\n\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    # fix the random seed for reproducibility\n    np.random.seed(cfg.RNG_SEED)\n\n    # set up caffe\n    caffe.set_mode_gpu()\n    if args.gpu_id is not None:\n        
caffe.set_device(args.gpu_id)\n    net = caffe.Net(args.prototxt, args.caffemodel, caffe.TEST)\n    net.name = os.path.splitext(os.path.basename(args.caffemodel))[0]\n    out = os.path.splitext(os.path.basename(args.caffemodel))[0] + '_svm'\n    out_dir = os.path.dirname(args.caffemodel)\n\n    imdb = get_imdb(args.imdb_name)\n    print 'Loaded dataset `{:s}` for training'.format(imdb.name)\n\n    # enhance roidb to contain flipped examples\n    if cfg.TRAIN.USE_FLIPPED:\n        print 'Appending horizontally-flipped training examples...'\n        imdb.append_flipped_images()\n        print 'done'\n\n    SVMTrainer(net, imdb).train()\n\n    filename = '{}/{}.caffemodel'.format(out_dir, out)\n    net.save(filename)\n    print 'Wrote svm model to: {:s}'.format(filename)\n"
  }
]